# Provably Safe Reinforcement Learning: Conceptual Analysis, Survey, and Benchmarking

| Hanna Krasowski∗ | hanna.krasowski@tum.de |
|---------------------------------------------------|--------------------------|
| Jakob Thumm∗ | jakob.thumm@tum.de |
| Marlon Müller | marlon.mueller@tum.de |
| Lukas Schäfer | lukas.schaefer@tum.de |
| Xiao Wang | xiao.wang@tum.de |
| Matthias Althoff | althoff@tum.de |

*School of Computation, Information and Technology*

Reviewed on OpenReview: *https://openreview.net/forum?id=mcN0ezbnzO*

## Abstract

Ensuring the safety of reinforcement learning (RL) algorithms is crucial to unlock their potential for many real-world tasks. However, vanilla RL and most safe RL approaches do not guarantee safety. In recent years, several methods have been proposed to provide hard safety guarantees for RL, which is essential for applications where unsafe actions could have disastrous consequences. Nevertheless, there is no comprehensive comparison of these provably safe RL methods. Therefore, we introduce a categorization of existing provably safe RL methods, present the conceptual foundations for both continuous and discrete action spaces, and empirically benchmark existing methods. We categorize the methods based on how they adapt the action: action replacement, action projection, and action masking. Our experiments on an inverted pendulum and a quadrotor stabilization task indicate that action replacement is the best-performing approach for these applications despite its comparatively simple realization. Furthermore, adding a reward penalty every time the safety verification is engaged improved training performance in our experiments. Finally, we provide practical guidance on selecting provably safe RL approaches depending on the safety specification, RL algorithm, and type of action space.
## 1 Introduction

Reinforcement learning (RL) contributes to many recent advancements in challenging research fields such as robotics (El-Shamouty et al., 2020; Zhao et al., 2020), autonomous systems (Kiran et al., 2022; Ye et al., 2021), and games (Mnih et al., 2013; Silver et al., 2017). A vanilla RL agent typically explores randomly and executes undesired actions multiple times to learn how to achieve the highest possible reward. However, safety is important for many applications. Therefore, safe RL emerged, where the learning process is adapted such that the agent considers safety aspects next to performance during training and/or operation (García & Fernández, 2015).

There are different degrees to which safety is considered in safe RL approaches. First, some approaches only incorporate safety aspects without formal guarantees. Here, the agent chooses safe actions with higher probability, e.g., by adding a reward component that indicates risk or adapting the exploration such that the agent follows a safe heuristic. Second, some approaches provide probabilistic safety guarantees, e.g., by using probabilistic models for safe actions (Könighofer et al., 2021). Still, hard safety guarantees for RL agents are necessary whenever failures are disastrous and need to be avoided at all costs during training and deployment. Such safety-critical applications include autonomous driving, human-robot collaboration, or energy grids. We refer to this third subcategory of safe RL methods providing hard safety guarantees for both training and operation as *provably safe RL*.

We provide for the first time a consistent conceptual framework for provably safe RL in both continuous and discrete action spaces, a comprehensive literature survey, and a comparison between provably safe RL approaches on two widely used control benchmarks. The characteristic difference between provably safe RL approaches is how they adapt the actions of the agent.

∗Equal contribution.
Therefore, we propose classifying them into three categories: *action replacement*, *action projection*, and *action masking*. Our proposed categorization of provably safe RL provides a concise presentation of the research field, supports researchers implementing provably safe RL through clear terminology and a comprehensive literature review, and outlines ideas for future research within the three categories. Furthermore, we evaluate the methods experimentally. The three main findings of our experiments are that all provably safe RL methods were indeed safe, that action replacement performed best on average over five tested RL algorithms, and that adding a penalty to the reward when using the safety function further improved performance.

Our contributions are fourfold. First, we introduce a comprehensive classification of provably safe RL methods and their formal description. This categorization allows us to compare and benchmark the effects of choosing a specific type of action modification on the ability of agents to learn. Second, we propose the first formulation of action masking for continuous action spaces. Third, we provide a structured and comprehensive survey of previous provably safe RL works and assign them to the three categories. Finally, we are the first to evaluate the performance of all three provably safe RL methods on two common control benchmarks. This comparison provides insights into the strengths and weaknesses of the different provably safe RL approaches and allows us to provide advice on selecting the best-suited provably safe RL approach for a specific problem, independent of the safety verification method used.

The remainder of this paper is structured as follows. First, we briefly review the historical development of safe RL and provably safe RL in Section 1.1. We describe preliminary concepts in Section 2 and introduce our proposed categorization.
Then, we show how the related provably safe RL literature fits our categorization in Section 3. Section 4 compares the different provably safe RL categories experimentally on a two-dimensional (2D) quadrotor stabilization task. Section 5 discusses the results of our experimental evaluation and the practical considerations following from them. Finally, we conclude this work in Section 6.

## 1.1 Evolution Towards Provably Safe RL

The notion of risk and safety in RL has been discussed since at least the 1990s (Heger, 1994). The original reasons for combining safety and RL were to focus the learning on relevant or safe regions and to improve convergence speed. Thus, the field of safe RL started developing, and in 2015, García & Fernández (2015) were the first to cluster safe RL. They provide two high-level categories: approaches that modify the optimization criterion and approaches that modify the exploration. Since 2015, significant advances in model-free RL and the increased applicability of deep RL have changed the research focus of safe RL. Notably, the higher efficiency of model-free deep RL made its real-world application tangible and amplified the need for formal guarantees in safe RL. This is also apparent from the survey by Brunke et al. (2022), who investigated recent developments at the intersection of control and learning for safe robotics. As a goal of the broader field of safe learning for control, they identify methods that require as little system knowledge as possible while ensuring formal safety guarantees (Brunke et al., 2022, Fig. 4). Among existing safe RL approaches, provably safe RL research is a growing field located at this frontier, as it provides hard safety guarantees during both learning and deployment. While a few papers mentioned in Brunke et al. (2022) are part of provably safe RL, it is not a focus of their work.
In the following paragraphs, we use the common classification by safety specification type, which distinguishes *soft constraints*, *probabilistic guarantees*, and *hard guarantees*, to locate provably safe RL in the field of safe RL.

**Soft constraints** Approaches with soft constraints consider safety directly in their optimization objective so that the agent can explore all actions and states regardless of safety. Thus, these methods can be unsafe during training, especially initially, but usually converge to a safer policy without formal safety guarantees after sufficiently many training steps. The simplest way to inform an RL agent about safety constraints is through its reward function. Despite its elegance, the reward function approach has many potential pitfalls. First, the reward function might be ill-defined, either from manual tuning or when learned from human input. When manually defined, the reward function might overlook certain features or fine details, leading to a hackable reward (Skalse et al., 2022) from which the agent learns unsafe behavior. Learning the reward function from human feedback (Christiano et al., 2017) is also error-prone because communicating safety constraints alongside performance metrics is hard for sparse, nonlinear, conditional, or seldom-occurring constraints. Second, even if the reward function is defined correctly, the trained policy is not guaranteed to be safe; e.g., Packer et al. (2018) showed that RL agents struggle with out-of-distribution states during deployment. Third, the agent might learn to perform actions safely but ignore the task objective due to goal misgeneralization (Langosco et al., 2022). Still, the majority of safe RL research considers safety aspects as soft constraints, so we provide a short overview of soft constraint methods in the following paragraph.
To reduce the burden of manual reward specification, a recent line of work formalizes the task and its safety specifications as a temporal logic formula. Then, the temporal logic formula is transformed into the RL reward either by directly using the robustness measure associated with the formula as the reward (Aksaray et al., 2016; Li et al., 2017; Varnai & Dimarogonas, 2020) or by transforming the temporal logic formula into an automaton that generates the reward (Camacho et al., 2019; Hahn et al., 2019; Cai et al., 2021; Hasanbeig et al., 2022; Alur et al., 2023). For some algorithms, it can even be ensured that the policy converges to the optimal policy, which maximally satisfies the temporal logic specification (Alur et al., 2022; Yang et al., 2022).

Another way to inform an RL agent about safety, besides through the reward, is to formulate a constrained optimization problem. Many recent advances have been made in constrained RL (Altman, 1998; Achiam et al., 2017; Stooke et al., 2020), for which the policy aims to maximize the reward while satisfying user-defined specifications. The specifications can be formulated as constraint functions (Chow et al., 2018; Stooke et al., 2020; Yang et al., 2020; Marvi & Kiumarsi, 2021) or as temporal logic formulas (De Giacomo et al., 2021; Hasanbeig et al., 2019a;b; 2020). The main advantage of soft constraint methods over probabilistic or hard constraint methods is that no explicit model of the agent dynamics or the environment is required, as the agent learns the safety aspects through experience. Thus, such safe RL methods have high potential in non-critical settings, where unsafe actions do not cause major damage.

**Probabilistic guarantees** Probabilistically safe RL approaches rely on probabilistic models or synthesize a model from sampled data. Here, the action and state space can be restricted based on probabilities. Nonetheless, unsafe actions are sometimes not detected and might occur occasionally.
Several works (Turchetta et al., 2016; Berkenkamp et al., 2017; Mannucci et al., 2018) try to determine the maximal set of safe states by starting from an often user-defined conservative set and extending it with the gathered learning experience. Other methods (Könighofer et al., 2021; Thananjeyan et al., 2021; Dalal et al., 2018; Zanon & Gros, 2021; Yang et al., 2021; Gillula & Tomlin, 2013) are based on formulating probabilistic models that identify the probability of safety for an action. In general, approaches that rely on probabilistic methods are especially applicable if one cannot bound measurement errors, modeling errors, and disturbances by sets.

**Hard guarantees** Provably safe RL features hard safety guarantees, which are fulfilled by integrating prior system knowledge into the learning process. Here, the agent only explores safe actions and only reaches states fulfilling the safety specifications. Provably safe RL already fulfills the given safety specifications during the learning process, which is essential when training or fine-tuning agents on safety-critical tasks in the physical world. Thus, we exclude approaches that only verify learned policies (Bastani et al., 2018; Schmidt et al., 2021) from our survey. We focus on model-free RL algorithms that do not explicitly learn or use a model of the system dynamics to optimize the policy. Generally, deploying learned controllers in the physical world became increasingly realistic in recent years, and thus, the need for provably safe RL grew, and more provably safe RL approaches were developed. With this work, we aim to structure and provide practical insights into this growing field.

## 2 Conceptual Analysis

We introduce three provably safe RL classes by providing their formal description in one comprehensive conceptual framework. This framework clarifies the differences between the three classes and eases the following literature review and benchmarking.
**Markov decision process** The RL agent learns on a Markov decision process (MDP) described by the tuple (S, A, T, r, γ). Hereby, we assume that the set of states S is fully observable with bounded precision. Partially observable MDPs can be handled using methods like particle filtering (Sunberg & Kochenderfer, 2018) and are not further discussed in this work. The action space A and state space S can be continuous or discrete. T(s, a, s′) is the transition function, which in the discrete case returns the probability that the transition from state s to state s′ occurs by taking action a. In the continuous case, T(s, a, s′) denotes the probability density function of the transition. We assume that the transition function is stationary over time. For each transition, the agent receives a reward r : S × A → R from the environment. The discount factor 0 < γ < 1 weights the relevance of future rewards. The policy or value function that the agent learns can be optimized for an infinite or finite episode horizon p.

**Safety of a system** For provably safe RL, it is required that the safety of states and actions is verifiable. Otherwise, no formal claims about the safety of a system can be made. Thus, we first introduce the set of safe states Ss ⊆ S containing all states for which all safety specifications are fulfilled¹. For verifying the safety of actions, we use a safety function φ : S × A → {0, 1} with

$$\varphi(\mathbf{s},\mathbf{a})=\begin{cases}1,&\text{if }(\mathbf{s},\mathbf{a})\text{ is verified safe}\\0,&\text{otherwise.}\end{cases}\tag{1}$$

Conceptually, this is mainly done by over-approximating the set of states that are reachable by taking action a in state s, and then validating that the reachable set of states is a subset of Ss and that all these reachable states are verified safe until the episode horizon p. In other words, for each of these reachable states, there exists at least one action that keeps the system within Ss until episode termination.
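As an illustration of such a safety function, consider a hypothetical one-dimensional double integrator with state s = (position, velocity). The sketch below over-approximates safety recoverability with a braking argument; all constants, the dynamics, and the helper names are illustrative assumptions, not part of the paper's benchmarks.

```python
# Hypothetical 1D double integrator; all constants are illustrative assumptions.
DT = 0.1        # time step
POS_LIM = 1.0   # safe set S_s: |position| <= POS_LIM and |velocity| <= VEL_LIM
VEL_LIM = 0.5
A_MAX = 1.0     # maximal braking acceleration available as a fallback

def in_safe_set(pos, vel):
    return abs(pos) <= POS_LIM and abs(vel) <= VEL_LIM

def phi(s, a):
    """Sketch of the safety function in (1): the action is verified safe if
    the successor state lies in S_s and a braking fallback keeps the system
    within the position bounds afterwards (i.e., safety is recoverable)."""
    pos, vel = s
    pos_next = pos + vel * DT   # nominal one-step successor; a conformant
    vel_next = vel + a * DT     # model would inflate this point to a set
    if not in_safe_set(pos_next, vel_next):
        return 0
    # Worst-case stopping distance from the successor state under full braking.
    brake_dist = vel_next ** 2 / (2 * A_MAX)
    return 1 if abs(pos_next) + brake_dist <= POS_LIM else 0
```

For instance, `phi((0.0, 0.0), 0.1)` returns 1, whereas `phi((0.99, 0.5), 1.0)` returns 0 because the successor leaves Ss.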
To formalize this concept, we define the set of provably safe actions Aφ(s) = {a | φ(s, a) = 1} for a given state s. The set of provably safe actions Aφ(s) is a subset of all safe actions² As(s), i.e., Aφ(s) ⊆ As(s) ⊆ A. The safe action set As(s) includes all safe actions, while the provably safe action set Aφ(s) only includes actions that are verified as safe by the safety function φ(s, a). In other words, the safety function may conservatively classify an action as unsafe although it is in fact safe, but it never classifies a truly unsafe action as safe. Moreover, we define the set of provably safe states based on the safety function: Sφ = {s | ∃a ∈ A, φ(s, a) = 1} ⊆ Ss. Consequently, for a verified state-action tuple, all reachable next states s′ are in Sφ. All provably safe RL approaches rely on the availability of provably safe actions and states to achieve a provably safe system:

**Proposition 1** *Let the system be initiated in a provably safe state s0 ∈ Sφ. Then, there exists a sequence of provably safe actions that ensures s ∈ Ss at all times until the episode horizon p.*

The proof can be easily obtained from the definitions by induction, as φ(s, a) = 1 if s′ ∈ Ss ∧ Aφ(s′) ̸= ∅. Note that if the episode horizon is finite, the last state of an episode is verified safe if it is contained in Ss. A provably safe action does not necessarily have to exist for this last state.

We define Ss relatively broadly since the safety specifications are usually task-specific and can take various forms, such as stability, not entering a time-invariant or time-varying unsafe set, and temporal logic specifications. Depending on the safety specification and system under consideration, different verification methods are applicable and encoded in the safety function φ(s, a). We discuss the concrete verification methods used by previous works in Section 3. Although φ(s, a) only verifies safety for a given state s and action a, it may take more than the next state into account.
¹The state space is often augmented from the state space for classical control purposes to a state space that includes other safety-relevant dimensions.

²Note that "taking no action" is commonly considered to be part of the action space, most often with the action a = [0, . . . , 0]⊤.

Figure 1: Structure of the three types of provably safe RL methods. The post-posed action replacement or projection methods (a) alter unsafe actions before sending them to the environment. In contrast, preemptive action masking approaches (b) allow the agent to only choose from the safe action space and, therefore, only output safe actions to the environment. Figures (c-e) highlight the differences between the three approaches in the action space. Here, action replacement (c) replaces unsafe actions with actions from the safe action space, action projection (d) projects unsafe actions to the closest safe action, and action masking (e) lets the agent choose solely from the safe action set.

To make this more graspable, we briefly explain two concepts for calculating the provably safe state set Sφ. For our benchmarks in Section 4, we compute a control invariant set as Sφ. Thus, the online verification only consists of checking that all reachable states are contained in this control invariant set. Another concept is predicting the reachable states until a specified time horizon while starting from s and applying a for the first time step and an action sequence for the consecutive time steps. The verification checks that all reachable states are in Ss and that either the reachable set at the prediction horizon is contained in a safe terminal set or the prediction horizon is the episode horizon. If the verification succeeds, we know that s ∈ Sφ.
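The first concept can be sketched as follows. We assume a hypothetical linear model, an ellipsoidal control invariant set {s : sᵀPs ≤ 1} computed offline (e.g., from a stabilizing linear controller), and an interval over-approximation of the one-step reachable set; since sᵀPs is convex, checking the box vertices certifies containment of the whole box. The matrix P and all constants are illustrative assumptions.

```python
import itertools
import numpy as np

# Hypothetical control-invariant set S_phi = {s : s^T P s <= 1}, computed
# offline (assumption); P would come from, e.g., a Lyapunov equation.
P = np.array([[2.0, 0.3], [0.3, 1.0]])

def successor_box(s, a, dt=0.1, w=0.01):
    """Interval over-approximation of the states reachable by applying
    action a for one step, inflated by a disturbance bound w (assumption)."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # nominal linear model
    B = np.array([0.0, dt])
    center = A @ s + B * a
    return center - w, center + w           # lower/upper box corners

def in_invariant_set(lo, hi):
    # s^T P s is convex, so its maximum over a box is attained at a vertex:
    # checking all vertices certifies containment of the whole box.
    for corner in itertools.product(*zip(lo, hi)):
        c = np.array(corner)
        if c @ P @ c > 1.0:
            return False
    return True

lo, hi = successor_box(np.array([0.1, 0.0]), a=0.5)
print(in_invariant_set(lo, hi))  # True: the action is verified safe
```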
Provably safe RL relies on model knowledge to provide safety guarantees, i.e., a conformant model that covers the safety-relevant system and environment dynamics. Hereby, the verification process can use an abstraction of the real system dynamics as long as it is conformant (Roehm et al., 2019; Liu et al., 2023) to the real system, i.e., it over-approximates both aleatoric and epistemic uncertainties and covers all relevant safety aspects. This eases efficient verification, as the complexity of the abstraction is usually significantly lower than the complexity of the underlying MDP. In systems where such a safety model is unavailable, provably safe RL is not applicable, and only non-provably safe approaches, as discussed in Section 1, can be used.

In practice, the safety specifications are often weakened to legal or passive safety. Hereby, inevitable safety violations caused by other agents are not considered to be the fault of the agent and are, therefore, not considered unsafe. Examples of proving legal safety have been presented for autonomous driving (Pek et al., 2020) and robotics (Bouraine et al., 2012).

There are multiple ways to ensure provable safety for RL systems, which we summarize in three categories: *action replacement*, where the safety method replaces all unsafe actions from the agent with safe actions, *action projection*, which projects unsafe actions onto the safe action space, and *action masking*, where the agent can only choose actions from the safe action space. We choose this categorization, as it represents the three main approaches found in the literature to modify actions and thereby ensure safety for RL. Action replacement and action projection alter the action after the agent returns it. In contrast, action masking lets the agent exclusively choose from the safe action space. Figure 1 displays the basic concept of these methods.
The following subsections describe the concept, mathematical formalization, and practical implications of each approach.

## 2.1 Action Replacement

The first approach to ensure the safety of actions is to replace any unsafe action returned by the agent with a safe action before its execution. The first step of action replacement is to evaluate the safety of the suggested action a ∈ A using φ(s, a). If the action sampled from the policy π(a|s) is not verified as safe, it is replaced with a provably safe replacement action ã = ψ(s), where ψ : S → Aφ is called the replacement function. Following this procedure, it is guaranteed that only safe actions aφ with

$$\mathbf{a}^{\varphi}=\begin{cases}\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{s}),&\text{if }\varphi(\mathbf{s},\mathbf{a})=1\\\psi(\mathbf{s}),&\text{otherwise}\end{cases}\tag{2}$$

are executed. We discuss how this action replacement alters the MDP in the Appendix and additionally refer the interested reader to Hunt et al. (2021). There are two general replacement functions found in the literature: *sampling* and *failsafe*. In sampling, the replacement function ψsample(s) uniformly samples a random action from Aφ(s). The other approach is to use a failsafe controller ψfailsafe(s) as the replacement action, which could also stem from human feedback. In time-critical and complex scenarios, where building Aφ(s) online becomes too time-consuming, ψfailsafe(s) is the only feasible option.

## 2.2 Action Projection

In contrast to action replacement, where the replacement action is not necessarily related to the action of the agent, action projection returns the closest provably safe action with respect to the original action and some distance function.
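Before detailing the projection, the replacement rule in (2) above can be sketched as a thin wrapper around the policy. The helpers `policy_sample`, `phi`, `failsafe_ctrl`, and `candidate_actions` are hypothetical, problem-specific placeholders, and the sampling variant assumes a finite candidate set for simplicity.

```python
import random

def safe_action(s, policy_sample, phi, replacement="failsafe",
                failsafe_ctrl=None, candidate_actions=None):
    """Sketch of the replacement rule in (2); all arguments are
    hypothetical, problem-specific placeholders."""
    a = policy_sample(s)           # a ~ pi(a|s)
    if phi(s, a) == 1:             # verified safe: execute as-is
        return a
    if replacement == "sampling":  # psi_sample: uniform draw from A_phi(s)
        safe = [c for c in candidate_actions if phi(s, c) == 1]
        return random.choice(safe)
    return failsafe_ctrl(s)        # psi_failsafe: verified fallback controller
```

For continuous action spaces, the sampling variant would draw from Aφ(s) directly, e.g., by rejection sampling; the finite candidate list above is a simplification.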
For this, we define the optimization problem

$$\tilde{\mathbf{a}}=\arg\min_{\hat{\mathbf{a}}}\ \operatorname{dist}(\mathbf{a},\hat{\mathbf{a}})\quad\text{subject to}\quad\varphi(\mathbf{s},\hat{\mathbf{a}})=1,\tag{3}$$

where dist(·) describes an arbitrary distance function, e.g., a p-norm. Note that it might not be possible to define such a distance function, especially in discrete action spaces. The constraints are often defined explicitly by n constraint functions fi(ã, s) ≤ 0, ∀ i ∈ {1, . . . , n}, that confine the next state to the set of provably safe states, i.e., s′ ∈ Sφ. The optimization problem in (3) minimizes the alteration of the actions while satisfying the safety specification, which is usually expressed through constraints for the optimization problem. Following Proposition 1, the optimization problem in (3) must always be feasible. The most prominent ways to formulate the safety constraints for action projection are based on control barrier functions (CBFs) or robust model predictive control (MPC).

For the first method, the constraints are defined by CBFs (Wieland & Allgöwer, 2007) that translate state constraints to control input constraints. We formulate the CBFs according to Taylor et al. (2020), as it is an intuitive formulation for RL. Consider a nonlinear control-affine system

$$\dot{\mathbf{s}}=\mathbf{m}(\mathbf{s})+\mathbf{b}(\mathbf{s})\,\tilde{\mathbf{a}},\tag{4}$$

where s ∈ S ⊆ R^N is the continuous state with N dimensions, ã ∈ A ⊂ R^M is the continuous control input with M dimensions, and m(s) and b(s) are locally Lipschitz continuous functions. Then, the function h is a CBF if there exists an extended class K function α such that (Wabersich et al., 2023, Eq. 10)

$$\nabla h(\mathbf{s})\left(\mathbf{m}(\mathbf{s})+\mathbf{b}(\mathbf{s})\,\tilde{\mathbf{a}}\right)\geq\alpha(h(\mathbf{s})).\tag{5}$$
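For the Euclidean distance and a single constraint of the form (5), which is affine in the action, the projection (3) reduces to projecting a onto the halfspace {â : gᵀâ ≥ c} and has a closed-form solution. A sketch, where `m`, `b`, `h`, `grad_h`, and `alpha` are hypothetical placeholders for the system at hand:

```python
import numpy as np

def cbf_project(a, s, m, b, grad_h, h, alpha=lambda x: x):
    """Sketch of the projection (3) under the single CBF constraint (5),
    grad_h(s) (m(s) + b(s) a_hat) >= alpha(h(s)). For the Euclidean
    distance, the QP reduces to a projection onto the halfspace
    {a_hat : g^T a_hat >= c}. All arguments are problem-specific."""
    g = (grad_h(s) @ b(s)).reshape(-1)   # constraint normal in action space
    c = alpha(h(s)) - grad_h(s) @ m(s)   # constraint offset
    if g @ a >= c:                       # action already satisfies (5)
        return a
    # Closest point on the constraint boundary (closed-form projection).
    return a + (c - g @ a) / (g @ g) * g
```

With several constraints or additional input bounds, a generic quadratic-program solver would replace the closed form.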
If the system dynamics are not exactly known, the nominal model can be extended with bounded disturbances d to model the unknown system dynamics:

$$\dot{\mathbf{s}}=\mathbf{m}(\mathbf{s})+\mathbf{b}(\mathbf{s})\,\tilde{\mathbf{a}}+\mathbf{d}.\tag{6}$$

To reduce conservatism, disturbances can be modeled as state- and input-dependent d(s, ã), and the maximal occurring disturbance can be learned from data as presented in Taylor et al. (2021). The limitation to control-affine systems makes formulating the constrained optimization problem efficient, e.g., for a Euclidean norm as the distance function, (3) results in a quadratic program. A downside of using CBFs is that h(s) is not trivial to find, especially in environments with dynamic obstacles.

For the second common projection method, we formulate the optimization problem with MPC according to Wabersich & Zeilinger (2021). There, the constraint in (3) is satisfied if we find an action sequence that steers the system from the current state s into the safe terminal set M within a finite prediction horizon L ∈ N while respecting input and state constraints, which reflect the safety specification (Wabersich & Zeilinger, 2021, Eq.
5):

$$\begin{array}{rl}\tilde{\mathbf{a}}=\arg\min\limits_{\hat{\mathbf{a}}}&\operatorname{dist}(\mathbf{a},\hat{\mathbf{a}})\\ \text{subject to}&\mathbf{s}_{l+1}=\mathbf{g}(\mathbf{s}_{l},\mathbf{a}_{l}),\ \mathbf{s}_{0}=\mathbf{s},\\ &\forall\,l\in\{1,\dots,L-1\}:\ \mathbf{s}_{l}\in\mathbb{S}_{\mathbf{s}},\\ &\mathbf{s}_{L}\in\mathbb{M},\\ &\forall\,l\in\{0,\dots,L-1\}:\ \mathbf{a}_{l}\in\mathbb{A},\\ &\hat{\mathbf{a}}=\mathbf{a}_{0},\end{array}\tag{7}$$

where sl and al are the predicted state and action l steps ahead of the current time step.³ The function g(·, ·) is obtained by time-discretizing a smooth continuous-time nonlinear system whose dynamics are governed by ṡ = f(s, a). The safe terminal set M ⊆ S is control invariant, i.e., after the agent has entered M, the associated invariance-enforcing controller keeps the agent inside this set indefinitely. If the optimization problem is solvable, ã is executed. If it is not solvable, the control sequence from the previous state is used as a backup plan until the safe terminal set is reached or the optimization problem is solvable again (Schürmann et al., 2018). For perturbed systems of the form ṡ = f(s, a, d) with a bounded disturbance d, robust MPC schemes, e.g., Schürmann et al. (2018), have to be employed, and output-feedback MPC schemes, e.g., Gruber & Althoff (2021), account for measurement uncertainties. Similar to the CBF approach, conservatism can be reduced by learning the disturbance bounds from data (Hewing et al., 2020). Note that, for an environment with dynamic obstacles, the safe terminal set can be time-dependent, and we are unaware of a straightforward integration where Proposition 1 still holds.

## 2.3 Action Masking

The two previous approaches modify unsafe actions from the agent. In action masking, we do not allow the agent to output an unsafe action in the first place (preemptive method).
Hereby, a mask is added to the agent so that it can only choose from actions in the provably safe action set. In addition to Proposition 1, action masking in practice requires an efficient function η : S → P(A), where P denotes the power set, that determines a sufficiently large set of provably safe actions Aφ(s) ⊆ A for a given state s. The policy function π is informed by the function η(s), and the action selection is adapted such that only actions from Aφ can be selected:

$$\mathbf{a}\sim\pi(\mathbf{a}|\eta(\mathbf{s}),\mathbf{s})\in\mathbb{A}_{\varphi}.\tag{8}$$

If η(s) can only verify one or a few actions efficiently, the agent cannot learn properly because it cannot explore different actions and find the optimal one among them. Ideally, the function η(s) achieves Aφ = As. The action masking approaches for discrete and continuous action spaces are not easily transferable into each other and will therefore be discussed separately in this subsection.

For discrete actions, the safety of each action is typically verified in each state using φ(s, a), and all verified safe actions are added to Aφ(s), i.e., η iterates over all actions for the current state s with φ(s, a) to identify Aφ(s). Intuitively, the discrete action mask is an informed drop-out layer added at the end of the policy network. We define the resulting safe policy πm(a|s) based on Huang & Ontañón (2022, Eq. 1) as

$$\pi_{m}(\mathbf{a}|\mathbf{s})=\varphi(\mathbf{s},\mathbf{a})\,\frac{\pi(\mathbf{a}|\mathbf{s})}{\sum_{\mathbf{a}^{\varphi}\in\mathbb{A}_{\varphi}(\mathbf{s})}\pi(\mathbf{a}^{\varphi}|\mathbf{s})}.\tag{9}$$

³We omitted in (7) that s1, . . . , sL and a1, . . . , aL−1 are decision variables of the optimization problem to improve readability.

The integration of masking in a specific learning algorithm is not trivial. The effects on policy optimization methods are discussed in Krasowski et al. (2020); Huang & Ontañón (2022).
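The renormalization in (9) can be sketched as a masked softmax over the discrete actions; the policy logits and the binary mask (mask[i] = 1 iff φ(s, aᵢ) = 1) are assumed to be given.

```python
import numpy as np

def masked_policy(logits, mask):
    """Sketch of the masked policy (9): zero out unverified actions and
    renormalize over the provably safe action set A_phi(s)."""
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()                    # original policy pi(a|s)
    masked = probs * mask                   # phi(s, a) * pi(a|s)
    return masked / masked.sum()            # renormalize over A_phi(s)
```

Equivalently, the mask can be applied to the logits by setting unverified entries to −∞ before the softmax, which matches the intuition of an informed drop-out layer at the end of the policy network.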
For RL algorithms that learn the Q-function, we discuss, as an example, the effects of discrete action masking for deep Q-networks (DQN) (Mnih et al., 2013), which is the most commonly used algorithm for Q-learning with discrete actions. During exploration with action masking, the agent samples its actions uniformly from Aφ. When the agent exploits the Q-function, it chooses only the best action among Aφ, i.e., arg maxa∈Aφ Q(s, a). The temporal difference error for updating the Q-function Q(s, a) is (Mnih et al., 2013, Eq. 3)

$$r(\mathbf{s},\mathbf{a})+\gamma\max_{\mathbf{a}^{\prime}}Q(\mathbf{s}^{\prime},\mathbf{a}^{\prime})-Q(\mathbf{s},\mathbf{a}),\tag{10}$$

where the action in the next state is a′ ∈ Aφ, in contrast to the vanilla temporal difference error, where the maximum Q-value for the next state is searched among actions from A. Using the adapted temporal difference error in (10), the learning updates are performed only with Q-values of actions relevant in the next state instead of the full action space.

To comprehensively compare the different provably safe RL approaches on discrete and continuous action spaces in Section 4, we propose a simple formulation for continuous masking since there is no existing approach. We formulate this form of continuous action masking as a transformation of the agent's action to the provably safe action set. Our approach requires both A and Aφ to be axis-aligned boxes with the same center. We propose to transform the action space A into Aφ by applying the transformation

$$\tilde{\mathbf{a}}=(\mathbf{a}-\min(\mathbb{A}))\,\frac{\max(\mathbb{A}_{\varphi})-\min(\mathbb{A}_{\varphi})}{\max(\mathbb{A})-\min(\mathbb{A})}+\min(\mathbb{A}_{\varphi})\tag{11}$$

to the actions a ∈ A, where min(·) and max(·) return a vector containing the minimal and maximal value of the given set in each dimension, respectively, and all operations are evaluated element-wise.
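The transformation (11) is an element-wise affine rescaling; a minimal sketch, where the box bounds of A and Aφ are assumed to be given as vectors:

```python
import numpy as np

def mask_transform(a, A_min, A_max, Aphi_min, Aphi_max):
    """Sketch of (11): element-wise affine rescaling of the global
    axis-aligned action box A onto the provably safe box A_phi."""
    return (a - A_min) * (Aphi_max - Aphi_min) / (A_max - A_min) + Aphi_min

# Example with A = [0, 1] x [-1, 2] and a hypothetical safe box sharing
# the center of A:
A_min, A_max = np.array([0.0, -1.0]), np.array([1.0, 2.0])
Aphi_min, Aphi_max = np.array([0.25, -0.25]), np.array([0.75, 1.25])
print(mask_transform(np.array([0.0, -1.0]), A_min, A_max, Aphi_min, Aphi_max))
# min(A) maps to min(A_phi)
```

By construction, the extreme points of A map to the extreme points of Aφ, and the shared center is a fixed point of the transformation.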
For example, given a two-dimensional continuous action space A = [0, 1] × [−1, 2], then min(A) = [0, −1]⊤. Note that the representation of Aφ as an axis-aligned box centered in A can be under-approximative and, thus, lead to conservative behavior. To overcome this limitation, more complex set representations, such as zonotopes (Althoff et al., 2021), for the action spaces A and Aφ, in combination with solving an optimization problem that maximizes the size of Aφ, could be investigated. A less sophisticated yet possibly effective approach is searching for a good latent interval representation of, and transformation to, Aφ by applying principal component analysis to a set of Aφ for different states as a pre-computing step. Since the action spaces for RL are defined a priori, there must always be a valid transformation from a to ã ∈ Aφ, and the operation must be time-invariant for all state-action pairs. In the next section, we discuss the effect of the three provably safe RL approaches on the policy distribution and exploration.

## 2.4 Impact On The Distribution Of Actions

The three previously presented provably safe RL methods have different effects on the resulting distribution of actions, as illustrated for a one-dimensional continuous action space and a probabilistic policy in Figure 2. For action projection, all actions that are not verified safe, a ∈/ Aφ, are projected to the boundary of the provably safe action set ∂Aφ. Therefore, actions on ∂Aφ are disproportionately explored compared to the interior of Aφ. A similar effect can occur in action replacement, depending on the replacement strategy, e.g., with ψfailsafe(s), the failsafe action is explored more often. However, if the random sampling strategy ψsample(s) is used, as shown in Figure 2 (a), the likelihood of all safe actions being explored increases equally.
The sampling strategy, therefore, fosters exploration and discourages exploitation, as all non-provably safe actions lead to uniformly distributed exploration in the safe action space. The distribution of actions for both action replacement and projection differs from the distribution of the current policy, which might be problematic for on-policy algorithms. In action masking, we only map the exploration from A to Aφ; thus, the exploration strategy is not affected by action masking. In this aspect, our approach in (11) is conceptually similar to action normalization, which is commonly used in RL (Sutton & Barto, 2018, Ch. 16.8).

![8_image_0.png](8_image_0.png)

Figure 2: Idealized impact of the provably safe RL methods on the probability density function of the RL policy for a single action in a given state. The probability density functions of the original RL policy π(a|s) and the provably safe policy π(a˜|s) are depicted as the gray area and the blue line, respectively. Figure (a) shows action replacement with the random sampling strategy. Figure (b) displays action projection, where the vertical arrows are scaled Dirac delta distributions that stem from the fact that the unsafe parts of the original policy distribution are projected to the boundary of Aφ. Figure (c) depicts our proposed implementation of continuous action masking.

## 2.5 Learning Tuples

When changing the RL action, the training of the agent can be conducted with four possible learning tuples:

- *naive* - learning based on the action a returned by the policy network of the agent and the reward r(s, aφ) corresponding to the executed action aφ, which we denote by the tuple (s, a, s′, r(s, aφ)). This ensures that the agent is updated according to its current policy. Learning with the original action a should benefit on-policy learning, where the policy is updated based on experience collected using the most recent policy.
- *adaption penalty* - is *naive* with a penalty r∗(s, a, aφ) = r(s, aφ) + rpenalty(s, a, aφ) if an unsafe action was selected, which we denote by the tuple (s, a, s′, r∗(s, a, aφ)). In action projection, the penalty rpenalty can include a term that is proportional to the projection distance dist(a, aφ), as proposed by Wang (2022).

- *safe action* - learning based on the safe and possibly adapted action and the corresponding reward, denoted by the tuple (s, aφ, s′, r(s, aφ)). By using the *safe action* tuple, we correctly reward the agent for the actually performed transition. However, this requires updating the agent with an action that did not stem from its current policy π(a|s), which is expected behavior in off-policy but not in on-policy learning. Thus, the *safe action* tuple might be a better fit for off-policy than for on-policy RL.

- *both* - in case the RL agent proposes an unsafe action, both the *adaption penalty* and the *safe action* tuples are used for learning.

In all cases, the next state s′ and reward r are the true state and reward received from the environment after executing the safe action aφ. In the literature, action masking is always paired with the *naive* or *adaption penalty* learning tuple. For masking, the adaption penalty reflects the reduction of the action space due to masking or a safety component in the reward function, rather than a sparse penalty as for action projection and action replacement. For discrete action masking, the *naive* and the *safe action* tuples are equivalent, since the agent can only choose provably safe actions (see Figure 1). For continuous action masking, using the *safe action* tuple with the transformed action a˜ leads to inconsistencies in learning because every action is transformed if Aφ ̸= A.

## 3 Literature Review

In this section, we summarize previous works in provably safe RL and assign them to the proposed categories.
To identify the related literature, we used the search string TITLE-ABS("reinforcement learning") AND TITLE(learning) AND [TITLE(safe*) OR TITLE(verif*) OR TITLE(formal*) OR TITLE(shield*)] AND LIMIT-TO(LANGUAGE,"English")4 for the Scopus5 and IEEEXplore6 search engines, which yielded 620 papers after removing duplicates. Then, we screened the papers by title and abstract to identify 160 seemingly relevant papers. After close inspection, we identified 47 of these 160 papers as provably safe RL works. We give a condensed overview of all application-independent provably safe RL works in Table 1, and cluster all 47 provably safe RL works in Table 2 by their application. Some approaches in Table 1 are presented for unbounded disturbances. In such a setting, hard safety guarantees are generally not achievable. Still, we include approaches that would be provably safe under the assumption that the disturbance is bounded.

Action replacement One of the earliest provably safe RL works is Alshiekh et al. (2018), which constructs a so-called safety shield from linear temporal logic formulas. For that, they construct the verification function φ(s, a) by converting the linear temporal logic formulas into an automaton, on which they perform model checking. The advantage of online model checking is that linear temporal logic constraints can be guaranteed for general nonlinear systems, and the online complexity is linear in the number of discrete states. However, the method is only applicable to small discrete state spaces, as constructing the automaton offline has exponential complexity in the number of discrete states, and the online checking complexity also increases exponentially with the formula length (Baier & Katoen, 2008). In Alshiekh et al. (2018), the agent outputs n ranked actions, which are all checked for safety using φ(s, a). The shield executes the highest-ranked safe action or replaces the action with ψsample(s) if none of the n actions is safe.
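The shielding logic of this kind of action replacement can be sketched in a few lines; here, `phi` and `sample_safe_action` stand in for the verification function φ(s, a) and the replacement strategy ψsample(s), and the concrete implementations below are toy placeholders of our own:

```python
import random

def shield(state, ranked_actions, phi, sample_safe_action):
    """Execute the highest-ranked action verified safe by phi,
    or fall back to a randomly sampled safe action."""
    for action in ranked_actions:      # actions sorted by the agent's preference
        if phi(state, action):         # verification function phi(s, a)
            return action
    return sample_safe_action(state)   # replacement strategy psi_sample(s)

# Toy placeholders: actions >= 0 are safe in every state.
phi = lambda s, a: a >= 0
sample_safe_action = lambda s: random.choice([0, 1, 2])

executed = shield(None, [-2, -1, 1, 2], phi, sample_safe_action)
# The highest-ranked safe action (here: 1) is executed.
```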
They update the agent with the *safe action* learning tuple but also propose that the *both* learning tuple can be used to obtain additional training information. Similarly to Alshiekh et al. (2018), Könighofer et al. (2020) show that both probabilistic and deterministic shields increase the sample efficiency and performance for both action replacement and masking methods. Akametalu et al. (2014) propose action replacement based on Hamilton-Jacobi-Isaacs reachability analysis, which was later extended by Fisac et al. (2019) to a general safe RL framework. They determine Sφ using Hamilton-Jacobi-Isaacs reachability analysis given bounded system disturbances. Safety is guaranteed by replacing any learned action on the border of Sφ with ψfailsafe(s) stemming from a Hamilton-Jacobi-Isaacs optimal controller to guide the system back inside the safe set. Hamilton-Jacobi-Isaacs reachability analysis can verify reach-avoid problems with arbitrary non-convex sets (Wabersich et al., 2023) and disturbances that stem from a compact set. However, constructing the safe set scales exponentially in complexity with the number of state dimensions, which makes Hamilton-Jacobi-Isaacs reachability analysis infeasible for systems with more than four state dimensions (Chen & Tomlin, 2018). Fisac et al. (2019) argue that replacing the unsafe action with the action that maximizes the distance to the unsafe set increases performance in uncertain real-world environments compared to action projection. However, Hamilton-Jacobi-Isaacs approaches suffer from the curse of dimensionality and are, therefore, only feasible for systems with specific characteristics (Herbert et al., 2021). Shao et al. (2021) use a trajectory safeguard based on set-based reachability analysis for φ(s, a).
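Concretely, verifying an action with set-based reachability analysis reduces to a set containment check of the reachable set in the safe set. A minimal NumPy sketch for a zonotopic reachable set {c + Gβ, |β|∞ ≤ 1} and a safe set in halfspace representation {s | Cs ≤ q}, using the standard containment condition Cc + |CG|1 ≤ q, cf. (14) in Section 4.2; the sets below are illustrative:

```python
import numpy as np

def action_is_safe(C, q, c, G):
    """Check that the zonotope {c + G*beta, |beta|_inf <= 1} is contained
    in the polytope {s | C s <= q}, i.e., C c + |C G| 1 <= q."""
    lhs = C @ c + np.abs(C @ G) @ np.ones(G.shape[1])
    return bool(np.all(lhs <= q))

# Safe set: the box |s_i| <= 1 in 2D, in halfspace representation
C = np.vstack([np.eye(2), -np.eye(2)])
q = np.ones(4)
# Reachable set: a small zonotope around the origin
c = np.zeros(2)
G = 0.3 * np.eye(2)

safe = action_is_safe(C, q, c, G)  # True: the reachable set stays inside
```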
Set-based reachability analysis is applicable to reach-avoid problems for general nonlinear systems with uncertainties in the initial state, system dynamics, and input disturbances as long as they stem from a compact set, see, e.g., Althoff et al. (2021). Set-based reachability analysis has polynomial complexity in the state dimension for most set representations, as discussed by Althoff et al. (2021). However, compared to Hamilton-Jacobi-Isaacs reachability analysis, set-based reachability analysis cannot handle arbitrary non-convex sets but depends on specific set representations. Shao et al. (2021) sample n new actions randomly in the vicinity of the action if the action is unsafe. They then execute the closest safe action to the original (unsafe) action. If none of the n new actions is safe, a failsafe action ψfailsafe(s) is executed. Shao et al. (2021) train their agent on the *naive* learning tuple. Selim et al. (2022b) also verify the safety of actions with set-based reachability analysis. They propose an informed replacement for ψ(s) such that the reachable set of the controlled system is pushed away from the unsafe set S \ Ss. The authors further propose a method to account for unknown system dynamics using a so-called black-box reachability analysis. They use the *adaption penalty* learning tuple and showcase that their approach achieves provable safety in three use cases. Hunt et al. (2021) build the verification function φ(s, a) using theorem proving of differential dynamic logic formulas. Using φ(s, a), they determine Aφ(s) for discrete action spaces and use ψfailsafe(s) for replacement. They further show how provably safe end-to-end learning can be accomplished using controller and model monitors. They train the RL agent using the *naive* learning tuple on a drone example. The work of Anderson et al. (2020) proposes to define Sφ as a robust control invariant set and to construct the safety function φ(s, a) based on a worst-case linear model of the system dynamics. A further notable work is Bastani (2021), which proposes a model predictive shield that runs alongside the trained policy and uses ψfailsafe(s) for action replacement.

4Documentation at http://schema.elsevier.com/dtds/document/bkapi/search/SCOPUSSearchTips.htm
5scopus.com
6ieeexplore.ieee.org

Table 1: Comparison of application-independent provably safe RL approaches.

| Reference | Verification method | State space | Action space | Learning tuple | Environment1 |
|---|---|---|---|---|---|
| *Action replacement* | | | | | |
| Akametalu et al. (2014) | HJI2 reachability analysis | cont. | cont. | special RL alg. | 1D quadrotor [stoch.], cart-pole [stoch.] |
| Fisac et al. (2019) | HJI reachability analysis | cont. | cont. | N/A | 1D quadrotor [stoch.] |
| Alshiekh et al. (2018) | model checking of automaton constructed from LTL3 | disc. | disc. | safe action | grid world [stoch.] |
| Könighofer et al. (2020) | model checking of automaton constructed from LTL | disc. | disc. | adaption penalty | ACC4 [stoch.] |
| Anderson et al. (2020) | robust control invariant set | cont. | cont. | N/A | pendulum [det.], reach-avoid [det.], others [det.] |
| Hunt et al. (2021) | theorem proving of dL5 formulas | disc. | disc. | naive | grid world [stoch.] |
| Bastani (2021) | MPC6 | cont. | cont. | deployment only | bicycle [det.], cart-pole [det.] |
| Shao et al. (2021) | set-based reachability analysis | cont. | cont. | naive | 3D quadrotor [det.], highway driving [det.] |
| Selim et al. (2022b) | set-based reachability analysis | cont. | cont. | adaption penalty | 3D quadrotor [stoch.], mobile robot [stoch.] |
| *Action projection* | | | | | |
| Pham et al. (2018) | verification of affine constraints for actions | cont. | cont. | adaption penalty | manipulator [det.] |
| Cheng et al. (2019) | CBF7 | cont. | cont. | safe action | ACC [stoch.], pendulum [stoch.] |
| Li et al. (2019a) | CBF synthesized from LTL | cont. | cont. | adaption penalty | manipulator [det.] |
| Gros et al. (2020) | MPC | cont. | cont. | naive | 2D LTI8 system [stoch.] |
| Wabersich & Zeilinger (2021) | MPC | cont. | cont. | naive | 3D quadrotor [stoch.], pendulum [stoch.] |
| Marvi & Kiumarsi (2022) | CBF | cont. | cont. | adaption penalty | 2D LTI system [det.] |
| Selim et al. (2022a) | set-based reachability analysis | cont. | cont. | naive | 3D quadrotor [stoch.], mobile robot [stoch.] |
| Kochdumper et al. (2023) | set-based reachability analysis | cont. | cont. | adaption penalty | 3D quadrotor [stoch.] |
| *Action masking* | | | | | |
| Fulton & Platzer (2018) | theorem proving of dL formulas | cont. | disc. | naive | ACC [det.] |
| Fulton & Platzer (2019) | theorem proving of dL formulas | cont. | disc. | naive | ACC [stoch.] |
| Huang & Ontañón (2022) | verification of affine equality constraints for actions | disc. | disc. | naive | grid world [N/A] |
| This study | set-based reachability analysis | cont. | cont. | naive | pendulum [det.], 2D quadrotor [stoch.] |

Abbreviations: 1 stoch.: stochastic environment model, det.: deterministic environment model, 2 Hamilton-Jacobi-Isaacs, 3 linear temporal logic, 4 adaptive cruise control, 5 differential dynamic logic, 6 model predictive control, 7 control barrier function, 8 linear time-invariant.
Table 2: Overview of applications in provably safe RL.

| | Action replacement | Action projection | Action masking |
|---|---|---|---|
| Aerial Vehicles | ‡‡Akametalu et al. (2014); Shyamsundar et al. (2017); ‡‡Fisac et al. (2019); Anderson et al. (2020); †Harris & Schaub (2020); †Shao et al. (2021); †Selim et al. (2022b); †Nazmy et al. (2022) | †Wabersich & Zeilinger (2021); †Selim et al. (2022a); †Kochdumper et al. (2023) | N/A |
| Autonomous Driving | †Chen et al. (2020); Könighofer et al. (2020); †Shao et al. (2021); †Chen et al. (2022a); †Lee & Kwon (2022); ‡‡Wang et al. (2023); †Evans et al. (2023) | Cheng et al. (2019); †Wang (2022); †Hailemichael et al. (2022b); †Hailemichael et al. (2022a); ‡‡Kochdumper et al. (2023) | Fulton & Platzer (2018); †Mirchevska et al. (2018); Fulton & Platzer (2019); †Krasowski et al. (2020); Brosowsky et al. (2021); †Krasowski et al. (2022) |
| Power Systems | †Ceusters et al. (2023) | †Eichelbeck et al. (2022); †Chen et al. (2022b); †Zhang et al. (2023); †Yu et al. (2023) | †Tabas & Zhang (2022) |
| Robotic Manipulation | †Thumm & Althoff (2022) | ‡‡Pham et al. (2018); ‡‡Li et al. (2019a) | N/A |
| Control Benchmarks | Akametalu et al. (2014); Anderson et al. (2020); Bastani (2021); Shao et al. (2021) | Cheng et al. (2019); Gros et al. (2020); Wabersich & Zeilinger (2021); Marvi & Kiumarsi (2022) | N/A |
| Grid World Games | Alshiekh et al. (2018); Hunt et al. (2021) | N/A | Huang & Ontañón (2022) |
| Miscellaneous | Active suspension: Li et al. (2019b); Computing networks: †Wang et al. (2022); Mobile robot: †Selim et al. (2022b) | Mobile robot: †Selim et al. (2022a); Engine emission: †Norouzi et al. (2023) | Computing networks: Seetanadi et al. (2020); Traffic signal: †Müller & Sabatelli (2022) |

Note: Studies using high-fidelity simulators are marked with †, and ‡‡ indicates physical experiments. Additionally, papers occur multiple times in case they demonstrate their approach for different applications.

Measured by the number of publications, the most popular method is action replacement. This is also visible from the large variety of application-specific approaches, e.g., for aerial vehicles (Shyamsundar et al., 2017; Harris & Schaub, 2020; Nazmy et al., 2022), autonomous driving (Chen et al., 2020; 2022a; Lee & Kwon, 2022; Wang et al., 2023; Evans et al., 2023), power systems (Ceusters et al., 2023), robotic manipulation (Thumm & Althoff, 2022), active suspension systems (Li et al., 2019b), and traffic engineering in computing networks (Wang et al., 2022). In particular, Ceusters et al. (2023) compare failsafe action replacement and sampling-based action replacement. They observe that both methods have a higher initial performance than the unsafe RL baseline, and that failsafe action replacement leads to better performance than the sampling-based version.

Action projection Research on action projection is usually conducted on continuous action and state spaces. The main differentiating factor between studies in this category is the specification of the optimization problem for the projection. To begin with, the work of Pham et al. (2018) guarantees safety using a differentiable constrained optimization layer called OptLayer. Their approach is restricted to quadratic programming problems, so the system model and constraints have to be linear. Despite these limitations, they show the effectiveness of their approach on a collision avoidance task with a simple robotic manipulator.
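To illustrate projection onto affine safety constraints, the following sketch projects an action onto a single halfspace {a | g⊤a ≤ h} in closed form; with several constraints, a quadratic program solver would be required instead. The constraint values are made up for the example:

```python
import numpy as np

def project_onto_halfspace(a, g, h):
    """Euclidean projection of action a onto {a | g^T a <= h}, a
    one-constraint special case of the quadratic program used for
    action projection."""
    a, g = np.asarray(a, float), np.asarray(g, float)
    violation = g @ a - h
    if violation <= 0.0:  # action already satisfies the safety constraint
        return a
    return a - (violation / (g @ g)) * g  # move onto the constraint boundary

# Hypothetical constraint a1 + a2 <= 1
g, h = np.array([1.0, 1.0]), 1.0
a_safe = project_onto_halfspace([1.0, 1.0], g, h)  # unsafe action gets projected
```

Note that the projected action always lands on the boundary of the safe set, which causes the disproportionate exploration of ∂Aφ discussed in Section 2.4.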
Cheng et al. (2019) specify the safety constraint φ(s, a) = 1 of the optimization problem via CBFs. Thus, the optimization problem in (3) becomes a quadratic program. Theoretically, the CBF approach is applicable to general control-affine systems with disturbances from a compact set for reach-avoid specifications. However, finding a CBF is not trivial, and synthesizing one can scale exponentially with the system dimension (Ames et al., 2019). To solve (3) more efficiently online, Cheng et al. (2019) add a neural network to the approach in Section 2.2, which approximates the correction due to the CBF. The action is then shifted by the approximated value prior to optimization. This shift improves the implementation efficiency while still guaranteeing safety, as the action is often already safe after the shift, and no optimization problem needs to be solved. Their safe learning with CBFs shows a faster convergence speed than vanilla RL when learning a pendulum and a car-following task. Li et al. (2019a) propose a method to construct a continuous CBF from an automaton, which is defined by linear temporal logic formulas. They further construct a guiding reward from the given automaton to improve the learning performance. The proposed approach is capable of safely learning a high-dimensional cooperative manipulation task. Marvi & Kiumarsi (2022) consider a different problem, where the system model is assumed to be deterministic but unknown. They learn an optimal controller and the system dynamics iteratively while decreasing the conservativeness of their CBF in each iteration. The approach is provably safe for linear time-invariant (LTI) systems without disturbances. Gros et al. (2020) and Wabersich & Zeilinger (2021) implement the optimization problem as a robust MPC problem as defined in (7). MPC is applicable to reach-avoid problems with measurement and state disturbances.
However, when controlling high-speed systems, robust MPC is often limited to linear systems (Zeilinger et al., 2014). Gros et al. (2020) mainly discuss how the learning update has to be adapted if action projection is used for different RL algorithms. For Q-learning, they find that no adaption of the learning algorithm is necessary when the *naive* tuple is used. For policy gradient methods, they argue that the projection must also be included in the gradient for stable learning (Gros et al., 2020, Sec. 3). One downside of the robust MPC formulations of Gros et al. (2020) and Wabersich & Zeilinger (2021) is that dynamic constraints originating from moving obstacles or persons in the environment are not trivial to integrate. Both works approximate the dynamics of the system with a Gaussian process (GP), so that hard safety guarantees are impossible to prove. However, they could guarantee hard safety specifications if they assumed deterministic system dynamics with bounded disturbances, as the aforementioned approaches do. Gros et al. (2020) evaluate their approach on a simple 2D LTI system, and Wabersich & Zeilinger (2021) show the efficacy of their approach on a pendulum and a quadrotor task. Contrary to their previous work in Selim et al. (2022b), Selim et al. (2022a) propose to solve an optimization problem to find the closest safe action instead of using an informed replacement. They again use set-based reachability analysis to construct φ(s, a). They test their approach on a quadrotor and a mobile robot benchmark. Kochdumper et al. (2023) utilize set-based reachability analysis to verify actions in φ(s, a). They formulate the projection for a parameterization of the action space and arrive at a mixed-integer quadratic program with polynomial constraints. Their approach achieves provable safety for nonlinear systems with bounded disturbances, and they demonstrate it on two quadrotor tasks, autonomous driving on highways, and a physical F1TENTH car.
Next to the conceptual approaches, action projection algorithms are also specifically proposed for many cyber-physical systems, such as autonomous driving (Wang, 2022; Hailemichael et al., 2022a;b), power systems (Eichelbeck et al., 2022; Chen et al., 2022b; Zhang et al., 2023; Yu et al., 2023), and engine emission control (Norouzi et al., 2023). Wang (2022) compares the deployment of a discrete action masking approach with her continuous action projection approach. The goal-reaching performance is lower for the discrete action masking approach. However, this could be due to the coarse discretization of the action space into three actions.

Action masking To the best of our knowledge, the existing literature considers action masking only for discrete action spaces. The work of Huang & Ontañón (2022) analyzes the effect of discrete action masking on the policy gradient algorithm in RL, but they assume that As is known, which is typically only the case in game and grid world environments. The two main works investigating action masking are Fulton & Platzer (2018) and Fulton & Platzer (2019). They construct controller and model monitors based on theorem proving of differential dynamic logic specifications, see Platzer (2008). The controller monitor is used to build the mask η(s), and the model monitor verifies whether the underlying system model is correct based on previous transitions. In each state, the agent can choose from the set of actions that the controller monitor verified as safe. Identifying the correct system model can be challenging; thus, they also introduce an approach to automatically generate model candidates. Their approach is provably safe if the initial model is correct (Fulton & Platzer, 2018) or if multiple models are given, of which at least one is correct (Fulton & Platzer, 2019) for all times. They validate their provably safe action masking on adaptive cruise control tasks.
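As a brief sketch of discrete action masking for Q-learning in general, the mask η(s) restricts both the greedy action choice and the maximization in the temporal difference target of (10); the Q-values and mask below are made up for illustration:

```python
import numpy as np

def masked_argmax(q_values, mask):
    """Greedy action restricted to the provably safe set A_phi (mask = eta(s))."""
    q = np.where(mask, q_values, -np.inf)
    return int(np.argmax(q))

def masked_td_target(r, gamma, q_next, mask_next):
    """Temporal difference target of Eq. (10): max only over safe next actions."""
    return r + gamma * np.max(np.where(mask_next, q_next, -np.inf))

q = np.array([2.0, 5.0, 1.0])
mask = np.array([True, False, True])  # action 1 is unsafe in this state
a = masked_argmax(q, mask)            # best safe action, here action 0
target = masked_td_target(1.0, 0.99, q, mask)
```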
In addition to the works mentioned above, there are works investigating action masking for the specific applications of autonomous driving (Mirchevska et al., 2018; Krasowski et al., 2020; Brosowsky et al., 2021; Krasowski et al., 2022), power systems (Tabas & Zhang, 2022), adaptive routing in computing networks (Seetanadi et al., 2020), and urban traffic signal control (Müller & Sabatelli, 2022). The only application-specific approach that compares action masking with other provably safe RL approaches is Brosowsky et al. (2021). They observe that their masking approach converges slightly faster than action projection.

## 4 Experimental Comparison

In this section, we evaluate the performance of the three provably safe RL classes and the four learning tuples introduced in Section 2. For our comparison, we select an inverted pendulum and a 2D quadrotor stabilization task7, as these benchmarks are commonly evaluated in the related works presented in Table 1. The provably safe state set Sφ is the same for all three approaches and, therefore, comparable. We add system disturbances to the benchmarks to make them more realistic and to show that the provably safe RL approaches can handle disturbances sampled from a compact disturbance set. Despite their low dimensionality, our results are likely transferable to real-world systems, since real-world complexity is often reduced in practice by using lower-dimensional abstract models and an additional disturbance term. Conformance checking techniques (Roehm et al., 2019; Liu et al., 2023) can then guarantee that the abstract model incorporates recorded real-world behaviors of the system. The algorithms shown in this section are action replacement with ψsample(s), action projection using affine constraints, and action masking.
We compare each configuration on ten random seeds and five common RL algorithms8: the continuous Twin Delayed Deep Deterministic policy gradient algorithm (TD3) (Fujimoto et al., 2018), continuous soft actor-critic (SAC) (Haarnoja et al., 2018), discrete DQN (Mnih et al., 2013), and continuous and discrete proximal policy optimization (PPO) (Schulman et al., 2017).

## 4.1 Environments

We compare the provably safe RL approaches on an inverted pendulum and a 2D quadrotor stabilization task.

Inverted pendulum The state of the pendulum is defined as s = [θ, θ̇]⊤ and follows the dynamics

$$\dot{\mathbf{s}}=\begin{pmatrix}\dot{\theta}\\ \frac{g}{l}\sin(\theta)+\frac{1}{ml^{2}}a\end{pmatrix},\tag{12}$$

where a is the one-dimensional action, g is the gravitational acceleration, m is the mass of the pendulum, l its length, and friction and damping are ignored. We discretize the dynamics using the explicit Euler method. The actions are bounded by |a| ≤ 30 rad s−1. The desired equilibrium state is s∗ = [0, 0]⊤. The observation and reward are identical to the *OpenAI Gym Pendulum-V0*9 environment.

2D quadrotor The quadrotor in our experiments can only fly in the x-z-plane and rotate around the y-axis with angle θ. The state of the system is defined as s = [x, z, ẋ, ż, θ, θ̇]⊤ and the action as a = [a1, a2]⊤.

7Our implementation is available at CodeOcean: doi.org/10.24433/CO.9209121.v1 .
8All implementations are based on stable-baselines3 (Raffin et al., 2021).
9Available at: gymnasium.farama.org/environments/classic_control/pendulum/

The system dynamics

$$\dot{\mathbf{s}}=\begin{pmatrix}\dot{x}\\ \dot{z}\\ a_{1}k\sin(\theta)\\ -g+a_{1}k\cos(\theta)\\ \dot{\theta}\\ -d_{0}\theta-d_{1}\dot{\theta}+n_{0}a_{2}\end{pmatrix}+\begin{pmatrix}0\\ 0\\ w_{1}\\ w_{2}\\ 0\\ 0\end{pmatrix}\tag{13}$$

are based on Mitchell et al. (2019), where w1, w2 represent the system disturbance, and k, d0, d1, and n0 are constant parameters (see Table 4). We linearize the dynamics using a first-order Taylor expansion at the equilibrium point s∗ = [0, 1, 0, 0, 0, 0]⊤ and obtain the discrete-time system for the linearized dynamics. We sample the disturbance w = [w1, w2]⊤ uniformly, independently, and identically distributed from a compact disturbance set W ⊂ R2. The actions range from amin = [−1.5 + g/k, −π/12]⊤ to amax = [1.5 + g/k, π/12]⊤. The reward is defined as

$$r(\mathbf{s},\mathbf{a})=\exp\left(-\left\|\mathbf{s}-\mathbf{s}^{*}\right\|_{2}\right)-\frac{0.01}{2}\left\|\frac{\mathbf{a}-\min(\mathbb{A})}{\max(\mathbb{A})-\min(\mathbb{A})}\right\|_{1}.$$

## 4.2 Computation Of The Safe State Set

To obtain a possibly large set of provably safe states Sφ and a provably safe controller for our environments, we use the scalable approach for computing robust control invariant sets of nonlinear systems presented in Schäfer et al. (2023): for every state s0 ∈ Sφ ⊂ Ss, there exists a provably safe action a˜0 ∈ A so that s1 = g(s0, a˜0, w0) ∈ Sφ for any bounded disturbance w0 ∈ W ⊂ RO, where W has O dimensions. Hence, φ(s0, a˜0) = 1 for every s0 ∈ Sφ. In this work, we assume the disturbance to be constant in between sampling times. Note that we use the obtained provably safe controller for the failsafe replacement function ψfailsafe(s). The algorithm in Schäfer et al.
(2023) provides an explicit representation of Sφ, which enables a fair comparison of our provably safe RL implementations. To retrieve Aφ from Sφ at a given state s, we first convert Sφ from the generator representation used in Schäfer et al. (2023) into the halfspace representation Sφ = {s | Cs ≤ q} using the open-source toolbox CORA (Althoff, 2015). We evaluate the safety function for a state sk ∈ Sφ and an action ak ∈ A by computing the reachable set R(k + 1) at the next time step, which encloses the states that are reachable for all wk ∈ W (Althoff, 2015). The reachable set can be represented as a zonotope, i.e., R(k + 1) = {sk+1 | sk+1 = c + Gβ, |β|∞ ≤ 1}. If R(k + 1) ⊆ Sφ, the action ak is verified as safe, i.e., φ(sk, ak) = 1, which holds if and only if (Schürmann et al., 2020, Theorem 2)

$$C\mathbf{c}+|C\mathbf{G}|\mathbf{1}\leq\mathbf{q}\,,\tag{14}$$

where the absolute value is applied element-wise and 1 denotes a vector of ones of appropriate dimension. The approach of Schäfer et al. (2023) allows us to compute Sφ for high-dimensional nonlinear systems. However, the conversion to the halfspace representation is computationally too expensive for higher-dimensional systems. Therefore, we plan to develop generator-based versions of our provably safe RL methods in future work to mitigate this shortcoming.

## 4.3 Results

The 2D quadrotor task is the main comparison environment in this work, as it is more complex and shows the differences between the provably safe RL approaches more clearly than the inverted pendulum task. We evaluate the differences between the provably safe RL algorithms in Figure 3 and the effect of different learning tuples in Figure 4. All training runs for all individual algorithms and environments are presented in the Appendix, including a comparison between the ψsample(s) and ψfailsafe(s) replacement functions.
Comparison of provably safe RL algorithms The safety violation evaluation of the baselines in Figure 3d shows that the baseline algorithms fail to guarantee safety during training in the 2D quadrotor stabilization task. All provably safe RL algorithms guarantee safety, as expected. Among the baselines, TD3 converges significantly faster than all other algorithms. Figures 3a and 3b show the performance of the three provably safe RL categories, (a) action replacement using ψsample(s), (b) action projection, and (c) action masking, together with the baselines, averaged over all

![15_image_0.png](15_image_0.png)

Figure 3: Training curves for the 2D quadrotor benchmark. Top: Comparison of the three provably safe RL classes and the unsafe benchmarks averaged over all algorithms trained with the *naive* tuple. Bottom: Comparison of the benchmark algorithms TD3, SAC, DQN, and continuous and discrete PPO. All training runs were conducted on ten random seeds per algorithm. The left column depicts the reward. The right column shows the safety violations for the baselines, and the safety intervention rate for the provably safe RL algorithms.

five RL implementations and trained using the *naive* learning tuple. For a better comparison of the reward curves in Figure 3a, we added a dashed green line that indicates the final training reward averaged over all five RL baselines and ten random seeds. The reward comparison shows that action replacement performs better than action projection and masking on average. We also compare the intervention rates of the three safety mechanisms. For action replacement and projection, our intervention rate metric indicates the share of RL steps per episode in which the safety function altered the action. For action masking, the intervention rate compares the average volume of the provably safe action set over an episode with the volume of the provably safe action set at the equilibrium point of the system, i.e., VAφ,episode/VAφ,equilibrium.
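For axis-aligned boxes, the volume-based masking metric is straightforward to compute; the box bounds below are illustrative placeholders:

```python
import numpy as np

def box_volume(lower, upper):
    """Volume of an axis-aligned box given its lower and upper bound vectors."""
    return float(np.prod(np.asarray(upper) - np.asarray(lower)))

# Hypothetical safe action boxes: average over an episode vs. at the equilibrium
vol_episode = box_volume([-0.5, -0.2], [0.5, 0.2])
vol_equilibrium = box_volume([-1.0, -0.5], [1.0, 0.5])
intervention_rate = vol_episode / vol_equilibrium  # smaller ratio = stronger masking
```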
Figure 3b shows that action replacement relies significantly less on the safety mechanism than projection and masking. Generally, we observe that a lower intervention rate often coincides with a higher reward.

Comparison of learning tuples We evaluate the impact of different learning tuples on the performance and intervention rates averaged over all five RL algorithms in Figure 4. When action masking is used, only safe actions can be sampled, i.e., only the *naive* tuple is meaningful, so we omit action masking from this evaluation. For both action replacement and projection, the *adaption penalty* tuple leads to the highest performance and lowest safety intervention rate, even outperforming the average over the baselines. In action projection, the *naive* tuple performs significantly worse than in action replacement. The *safe action* and *both* tuples seem to be beneficial only when using action projection and decrease performance when using action replacement.

![16_image_0.png](16_image_0.png)

Figure 4: Evaluation of the training tuples for the 2D quadrotor averaged over the five RL algorithms TD3, SAC, DQN, continuous and discrete PPO on ten random seeds per algorithm. The left column depicts the reward and the right column the safety intervention rate. The top row shows the different learning tuples for action replacement and the bottom row for action projection.

## 5 Discussion

Our experiments confirm the theoretical statement that provably safe RL methods are always safe when Proposition 1 holds. In the two tested environments, the investigated RL baselines show non-zero safety violations during training, even after convergence. Subsequently, we discuss five statements resulting from the experiments, provide intuitions for implementing provably safe RL, summarize the limitations, and identify future research directions in provably safe RL. 
Selecting a provably safe RL approach First, we want to summarize our experience with the three provably safe RL classes on the two investigated benchmarks. Action replacement was the easiest method to implement for continuous action spaces. It shows very good performance and low intervention rates. In particular, using a failsafe action for replacement with an adaption penalty is simple to implement and was among the best-performing methods in our experiments. Still, the random sampling of safe actions might outperform the failsafe action. So, if sampling from the safe action space is readily available, e.g., in discrete action spaces, a failsafe controller is unnecessary. Action projection tends to be problematic in practice due to small numerical errors, resulting in infeasible optimization problems. We therefore have to reuse previous optimization results, e.g., as in Schürmann et al. (2018) for robust MPC, or use a failsafe controller if the optimization problem is not solvable. Together with the higher intervention rates compared to action replacement and the complex implementation, we would not recommend action projection based on our experience. However, if one already has a CBF or MPC formulation, it might be the most suitable solution. Action masking is particularly easy to implement for discrete action spaces and performs well in that setting. However, for action masking in continuous action spaces, there is no efficient and general algorithm yet that can handle safe action spaces significantly diverging from an axis-aligned box. Generally, the different approaches can also be combined to some extent. For example, if the optimization problem for action projection becomes infeasible, a failsafe controller can be used.

Selecting a learning tuple The *adaption penalty* learning tuple performed best, especially when using action projection. In our experiments, a simple constant reward penalty already improved the performance. 
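A minimal sketch of how the learning-tuple configurations compared in Figure 4 could be assembled is given below; the function name, the scalar action encoding, and the constant penalty value are illustrative assumptions, not our benchmark implementation:

```python
def build_learning_tuple(s, a_rl, a_safe, r, s_next, mode, penalty=1.0):
    """Assemble the (s, a, r, s') training tuple for one configuration.

    a_rl   : action proposed by the RL policy
    a_safe : (possibly identical) action after safety verification
    mode   : 'naive' | 'safe action' | 'adaption penalty' | 'both'
    """
    intervened = a_rl != a_safe
    # 'safe action' and 'both' store the executed safe action instead
    # of the action proposed by the policy.
    a = a_safe if mode in ("safe action", "both") else a_rl
    # 'adaption penalty' and 'both' subtract a constant reward penalty
    # whenever the safety function altered the action.
    if mode in ("adaption penalty", "both") and intervened:
        r = r - penalty
    return (s, a, r, s_next)

# The 'naive' tuple keeps the proposed action and reward unchanged.
assert build_learning_tuple(0, 2, 5, 1.0, 1, "naive") == (0, 2, 1.0, 1)
# 'adaption penalty' keeps the proposed action but penalizes the reward.
assert build_learning_tuple(0, 2, 5, 1.0, 1, "adaption penalty") == (0, 2, 0.0, 1)
# 'both' stores the safe action and penalizes the reward.
assert build_learning_tuple(0, 2, 5, 1.0, 1, "both") == (0, 5, 0.0, 1)
```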
Other environments may require more careful reward tuning, or the *adaption penalty* tuple could fail altogether due to reward hacking (Skalse et al., 2022) or goal misgeneralization (Langosco et al., 2022). The results in Figure 4 further show that using the safe action a˜ in the training tuple, i.e., configurations *both* and *safe action*, benefits the performance of action projection methods but impairs the training with action replacement. This effect can result from the fact that action replacement alters the action more than action projection, leading to a lower likelihood that the altered action stems from the RL policy. The evaluations in the Appendix show that this effect is prominent when using PPO. This on-policy algorithm assumes that the current batch of training data stems from the current policy. Hence, we would recommend using the *adaption penalty* tuple when possible and only using *safe action* in combination with off-policy methods.

Convergence The training of provably safe RL agents converges as fast as or faster than the baselines in our experiments. In addition, the performance (measured by the reward) at the beginning of the training is better for provably safe RL agents. One reason for the faster convergence is the exploration setting. Since Aφ ⊆ A, the provably safe agents learn in a usually smaller action space than the baselines, which can accelerate training. However, in some cases, the baseline agents might be better informed about the environment dynamics by exploring unsafe actions. Generally, the verification method should aim for Aφ = As such that the provably safe agents can explore the full safe action space. Another reason for convergence differences can be that action replacement and action projection potentially correct the action after the forward pass through the policy. Thus, the gradient calculation might need correction as well. 
So far, there is little theoretical work on this (Hunt et al., 2021; Gros et al., 2020), and it is unclear if, in practice, a correction of the gradient is necessary or if using the *adaption penalty* tuple is sufficient. Furthermore, the change in the distribution of the actions can impact the exploration strategy, as discussed in Section 2.4.

Computational complexity The computational complexity of the three approaches highly depends on the scenario-specific implementation. For action projection, the main implementation challenge is to guarantee that the optimization problem is always feasible. If the optimization problem can be formulated as a quadratic program, the computational complexity is polynomial, as shown by Vavasis (2001). On the contrary, the computational complexity of action replacement and action masking depends highly on the algorithm that identifies the safety of actions. For discrete action masking, the computational complexity scales linearly with the total number of actions, i.e., O(|A|). For action replacement, we only need a single safe action, so in the ideal case, e.g., using a failsafe controller, the computational complexity is constant with respect to the total number of actions. If an action replacement approach needs to determine the entire set of safe actions, it has the same computational complexity as action masking with respect to the number of actions. The computational complexity for identifying the continuous safe action space depends on the task-specific implementation. One possibility is to compute the safe action space using set-based reachability analysis; we point the interested reader to Althoff et al. (2021) for an overview of different approaches.

Online vs. offline implementation *Online* and *offline* have two notions in provably safe RL: online vs. offline safety verification and online vs. offline RL. 
The safety function usually needs to be evaluated online since, for continuous state spaces, pre-computing the safe action set for all states is often not feasible. Thus, the computational complexity of the safety function is important for real-time applications, as discussed in the previous paragraph. If the state and action space are discrete, it can be possible to pre-compute the safe actions offline (Alshiekh et al., 2018; Huang & Ontañón, 2022). Generally, safety is only guaranteed if the safety function is integrated between the agent and environment to correct actions (see Figure 1). In this study, we compare online on-policy and off-policy RL algorithms (Levine et al., 2020) since they are used for existing provably safe RL research. Still, provably safe RL can also be used for offline RL, where the safety function would be integrated during deployment and most likely also during the data-gathering phase if this phase is conducted in a safety-critical environment. However, more specific advice on offline provably safe RL needs to be substantiated with experimental evaluations and, thus, is a topic for future research.

Limitations of provably safe RL There are limitations of provably safe RL that follow from the conceptual analysis in Section 2. Most importantly, safety has to be verifiable, i.e., there must be a safety function φ(s, a) that complies with Proposition 1. For this safety function, system knowledge is necessary, and especially for systems with a high number of continuous state variables, the safety function is potentially complex to compute. Additionally, safety guarantees are strongly tied to the safety function. If the safety function provides complex guarantees (e.g., ensures temporal logic specifications), it is usually computationally more expensive than for simpler guarantees (e.g., the system stays within a safe state set). Second, safety can only be decided if the state of the system is correctly observed within noise bounds. 
Thus, for a provably safe autonomous system, the perception module also needs to be verified such that it provides observations that are correct within the noise bounds. Third, there is often a trade-off between safety and performance since, for many tasks, these two objectives are only partially aligned. For example, if an automated vehicle drives faster, it reaches its destination earlier, but collisions are more difficult to avoid at higher speeds. Since provably safe RL ensures safe behavior, there is no such trade-off: safety is always prioritized over performance. Thus, the safety function φ(s, a) should not be too conservative; otherwise, the agent would only perform trivially safe actions, e.g., standing still at the side of the road forever. Lastly, comparing provably safe RL approaches is challenging as we need to define a safety function φ(s, a) that is efficiently usable by different approaches. Furthermore, the notion of safety is usually application-specific, so different application-specific approaches are hard to compare. We provide the first comparison of provably safe RL on two common continuous stabilization tasks, but further research is necessary to make more substantial claims about the most promising provably safe RL approaches.

Future research based on the proposed taxonomy Most action projection approaches discussed in Section 3 project the RL action on the border of Aφ. In our experiments, we encountered two negative side effects related to this action projection implementation: First, the projection to the border of Aφ often leads to a relatively small Aφ in the next RL step, quickly resulting in a very small set Aφ if the RL agent proposes several unsafe actions in a row. Second, small numerical errors can cause unsafe actions and must be considered in the safety verification. 
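For the simple special case of a box-shaped safe action set, the combination of projection with a failsafe fallback mentioned earlier can be sketched as follows (a minimal illustration with hypothetical names, not our benchmark implementation):

```python
import numpy as np

def project_or_failsafe(a_rl, a_lo, a_hi, a_failsafe):
    """Project the proposed action onto a box-shaped safe action set
    A_phi = [a_lo, a_hi]; fall back to a verified failsafe action when
    the set is empty, e.g., due to numerical issues upstream."""
    if np.any(a_lo > a_hi):  # infeasible projection problem
        return np.asarray(a_failsafe, dtype=float)
    return np.clip(np.asarray(a_rl, dtype=float), a_lo, a_hi)

# A proposed action outside the safe box is projected onto its boundary.
assert np.allclose(
    project_or_failsafe([1.5, -0.2], np.array([-1.0, -1.0]),
                        np.array([1.0, 1.0]), [0.0, 0.0]),
    [1.0, -0.2])
# An empty safe set triggers the failsafe action.
assert np.allclose(
    project_or_failsafe([1.5, -0.2], np.array([0.5, 0.5]),
                        np.array([-0.5, 1.0]), [0.0, 0.0]),
    [0.0, 0.0])
```

For general polytopic or nonlinear constraints, the projection becomes an optimization problem, and the same fallback pattern applies when the solver reports infeasibility.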
Therefore, future action projection research should investigate objective functions for (3) that achieve a more robust behavior while still depending on the action the RL agent proposed, e.g., projecting the action not to the border but by a learnable margin inside the provably safe action set. Action masking is a promising technique but has mainly been used with discrete action spaces in grid world environments and games, e.g., the Atari benchmark (Huang & Ontañón, 2022). Our proposed continuous action masking approach only applies to specific environments and performed well for the pendulum but showed mixed results for the 2D quadrotor. Thus, future research should investigate ways to extend continuous action masking to general convex or non-convex representations of Aφ to improve its applicability to more complex benchmarks and reduce the conservativeness of Aφ. Additionally, it should be investigated if the agent should be informed about the reduction of the action space in action masking. This could result in improved convergence and an agent that is more aware of the action mask, similar to the effect of the *adaption penalty* tuple for action replacement and action projection. The evaluation of the considered benchmarks shows that action replacement performs better than action projection and masking, as discussed previously. However, it is still unclear how important the replacement strategy ψ(s) is for the convergence and performance of the agent, especially when applied to more complex tasks. Thus, future action replacement research should empirically and theoretically investigate this question.

Improving the applicability of provably safe RL Despite the promising previous work discussed in Section 3, there are only a few works on high-dimensional nonlinear systems and limited real-world applications. We suggest five major factors where future research would improve applicability. 
First, some approaches need to be computationally more efficient to be applicable in the real world. The computational efficiency of verification methods is especially relevant and should be improved, as discussed previously. Second, we observe that the learning tuple used has a significant influence on the performance of the agent for some RL algorithms. However, there is little theoretical research on how provably safe RL approaches influence convergence to an optimal policy, so more empirical and theoretical research on the effects of provably safe RL and its learning tuples on convergence is desirable. Third, common benchmarks are necessary to evaluate new provably safe RL approaches. Additionally, the three action correction strategies should be compared on more complex benchmarks to clarify if our observations extend to them. Such benchmarking would make research on provably safe RL more comparable, ease starting research on provably safe RL, and provide more evidence to decide on the best-suited provably safe RL approach. Fourth, recent work considers only a low variety of safety specifications, mainly comprising stabilization and reach-avoid specifications. In contrast, real-world safety is more complex, e.g., traffic rules such as waiting at a red light and safely but quickly moving at a green light. Finally, provably safe RL requires expert knowledge of verification methods. Future research could mitigate this through modular and automatic approaches, where fewer engineering decisions are necessary and more parameters are tuned automatically. With these advances, provably safe RL could bring the best elements of RL and formal specifications together towards RL methods that require as little expert knowledge as necessary and provide formal guarantees for complex safety specifications to achieve reliable and trustworthy cyber-physical systems. 
## 6 Conclusion

In conclusion, we categorize provably safe RL methods to structure the literature from a machine learning perspective. We present provably safe RL methods from a conceptual perspective and discuss necessary assumptions. Our proposed categorization into action replacement, action projection, and action masking supports researchers in comparing their works and provides valuable insights into the selection process of provably safe RL methods. The comparison of four implementations of provably safe RL on a 2D quadrotor and an inverted pendulum stabilization benchmark provides further insights into the best-suited method for different tasks. We further present practical recommendations for selecting a provably safe RL approach and a learning tuple, which will be valuable for researchers who are new to RL or formal methods. Lastly, as discussed in Section 5, our proposed taxonomy and experimental evaluation yield multiple promising future research directions.

## Acknowledgments

The authors gratefully acknowledge the partial financial support of this work by the research training group ConVeY, funded by the German Research Foundation under grant GRK 2428, by the project TRAITS under grant number 01IS21087, funded by the German Federal Ministry of Education and Research, by the Horizon 2020 EU Framework Project CONCERT under grant number 101016007, by the project justITSELF funded by the European Research Council (ERC) under grant agreement number 817629, and by the German Federal Ministry for Economic Affairs and Climate Action project VaF under grant number KK5135901KG0.

## References

Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In *Proc. of* the Int. Conf. on Machine Learning (ICML), pp. 22–31, 2017. Anayo K. Akametalu, Shahab Kaynama, Jaime F. Fisac, Melanie N. Zeilinger, Jeremy H. Gillula, and Claire J. Tomlin. Reachability-based safe learning with Gaussian processes. In *Proc. 
of the IEEE Conf.* on Decision and Control (CDC), pp. 1424–1431, 2014. Derya Aksaray, Austin Jones, Zhaodan Kong, Mac Schwager, and Calin Belta. Q-learning for robust satisfaction of signal temporal logic specifications. In *Proc. of the IEEE Conf. on Decision and Control (CDC)*, pp. 6565–6570, 2016. Mohammed Alshiekh, Roderick Bloem, Rüdiger Ehlers, Bettina Könighofer, Scott Niekum, and Ufuk Topcu. Safe reinforcement learning via shielding. In *Proc. of the AAAI Conf. on Artificial Intelligence (AAAI)*, pp. 2669–2678, 2018. Matthias Althoff. An introduction to CORA 2015. In *Proc. of the Workshop on Applied Verification for* Continuous and Hybrid Systems, pp. 120–151, 2015. Matthias Althoff, Goran Frehse, and Antoine Girard. Set propagation techniques for reachability analysis. Annual Review of Control, Robotics, and Autonomous Systems, 4(1):369–395, 2021. Eitan Altman. Constrained Markov decision processes with total cost criteria: Lagrangian approach and dual linear program. *Mathematical Methods of Operations Research*, 48(3):387–417, 1998. Rajeev Alur, Suguman Bansal, Osbert Bastani, and Kishor Jothimurugan. *A Framework for Transforming* Specifications in Reinforcement Learning, pp. 604–624. Springer Nature Switzerland, 2022. Rajeev Alur, Osbert Bastani, Kishor Jothimurugan, Mateo Perez, Fabio Somenzi, and Ashutosh Trivedi. Policy synthesis and reinforcement learning for discounted LTL. In *Proc. of the Int. Conf. on Computer* Aided Verification (CAV), pp. 415–435, 2023. Aaron D. Ames, Samuel Coogan, Magnus Egerstedt, Gennaro Notomista, Koushil Sreenath, and Paulo Tabuada. Control barrier functions: Theory and applications. In *Proc. of the European Control Conference* (ECC), pp. 3420–3431, 2019. Greg Anderson, Abhinav Verma, Isil Dillig, and Swarat Chaudhuri. Neurosymbolic reinforcement learning with formally verified exploration. In *Proc. of the Int. Conf. on Neural Information Processing Systems* (NeurIPS), volume 33, pp. 6172–6183, 2020. 
Christel Baier and Joost-Pieter Katoen. *Principles of Model Checking*. MIT press, 2008. Osbert Bastani. Safe reinforcement learning with nonlinear dynamics via model predictive shielding. In Proc. of the American Control Conf. (ACC), pp. 3488–3494, 2021. Osbert Bastani, Yewen Pu, and Armando Solar-Lezama. Verifiable reinforcement learning via policy extraction. In *Proc. of the Int. Conf. on Neural Information Processing Systems (NeurIPS)*, pp. 2499–2509, 2018. Felix Berkenkamp, Angela P. Schoellig, Matteo Turchetta, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. In *Proc. of the Int. Conf. on Neural Information Processing* Systems (NeurIPS), pp. 908–918, 2017. Sara Bouraine, Thierry Fraichard, and Hassen Salhi. Provably safe navigation for mobile robots with limited field-of-views in unknown dynamic environments. In *Proc. of the IEEE Int. Conf. on Robotics and* Automation (ICRA), pp. 174–179, 2012. Mathis Brosowsky, Florian Keck, Jakob Ketterer, Simon Isele, Daniel Slieter, and Marius Zöllner. Safe deep reinforcement learning for adaptive cruise control by imposing state-specific safe sets. In Proc. of the IEEE Intelligent Vehicles Symp. (IV), pp. 488–495, 2021. Lukas Brunke, Melissa Greeff, Adam W. Hall, Zhaocong Yuan, Siqi Zhou, Jacopo Panerati, and Angela P. Schoellig. Safe learning in robotics: From learning-based control to safe reinforcement learning. *Annual* Review of Control, Robotics, and Autonomous Systems, 5:411–444, 2022. Mingyu Cai, Mohammadhosein Hasanbeig, Shaoping Xiao, Alessandro Abate, and Zhen Kan. Modular deep reinforcement learning for continuous motion planning with temporal logic. IEEE Robotics and Automation Letters, 6(4):7973–7980, 2021. Alberto Camacho, Rodrigo Toro Icarte, Toryn Q Klassen, Richard Valenzano, and Sheila A McIlraith. LTL and beyond: Formal languages for reward function specification in reinforcement learning. In *Proc. of the* Int. Joint Conf. 
on Artificial Intelligence (IJCAI), pp. 6065–6073, 2019. Glenn Ceusters, Luis Ramirez Camargo, Rüdiger Franke, Ann Nowé, and Maarten Messagie. Safe reinforcement learning for multi-energy management systems with known constraint functions. *Energy and AI*, 12, 2023. Dong Chen, Longsheng Jiang, Yue Wang, and Zhaojian Li. Autonomous driving using safe reinforcement learning by incorporating a regret-based human lane-changing decision model. In Proc. of the American Control Conf. (ACC), pp. 4355–4361, 2020. Mo Chen and Claire J. Tomlin. Hamilton-Jacobi reachability: Some recent theoretical advances and applications in unmanned airspace management. *Annual Review of Control, Robotics, and Autonomous Systems*, 1(1):333–358, 2018. Shengduo Chen, Yaowei Sun, Dachuan Li, Qiang Wang, Qi Hao, and Joseph Sifakis. Runtime safety assurance for learning-enabled control of autonomous driving vehicles. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 8978–8984, 2022a. Yize Chen, Yuanyuan Shi, Daniel Arnold, and Sean Peisert. SAVER: Safe learning-based controller for real-time voltage regulation. In *Proc. of the IEEE Power and Energy Society General Meeting (PESGM)*, pp. 1–5, 2022b. Richard Cheng, Gábor Orosz, Richard M Murray, and Joel W Burdick. End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks. In Proc. of the AAAI Conf. on Artificial Intelligence (AAAI), pp. 3387–3395, 2019. Yinlam Chow, Ofir Nachum, Edgar Duenez-Guzman, and Mohammad Ghavamzadeh. A Lyapunov-based approach to safe reinforcement learning. In *Proc. of the Int. Conf. on Neural Information Processing* Systems (NeurIPS), pp. 8103–8112, 2018. Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Proc. of the Int. Conf. on Neural Information Processing Systems (NeurIPS), 2017. 
Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, and Yuval Tassa. Safe exploration in continuous action spaces. *arXiv*, abs/1801.0, 2018. Giuseppe De Giacomo, Luca Iocchi, Marco Favorito, and Fabio Patrizi. Foundations for restraining bolts: Reinforcement learning with LTLf/LDLf restraining specifications. In Proc. of the Int. Conf. on Automated Planning and Scheduling (ICAPS), pp. 128–136, 2021. Michael Eichelbeck, Hannah Markgraf, and Matthias Althoff. Contingency-constrained economic dispatch with safe reinforcement learning. In Proc. of the IEEE Int. Conf. on Machine Learning and Applications (ICMLA), pp. 597–602, 2022. Mohamed El-Shamouty, Xinyang Wu, Shanqi Yang, Marcel Albus, and Marco F. Huber. Towards safe human-robot collaboration using deep reinforcement learning. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 4899–4905, 2020. Benjamin D. Evans, Hendrik W. Jordaan, and Herman A. Engelbrecht. Safe reinforcement learning for high-speed autonomous racing. *Cognitive Robotics*, 3:107–126, 2023. Jaime F. Fisac, Anayo K. Akametalu, Melanie N. Zeilinger, Shahab Kaynama, Jeremy Gillula, and Claire J. Tomlin. A general safety framework for learning-based control in uncertain robotic systems. *IEEE Transactions on Automatic Control*, 64(7):2737–2752, 2019. Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *Proc. of the Int. Conf. on Machine Learning (ICML)*, pp. 2587–2601, 2018. Nathan Fulton and André Platzer. Safe reinforcement learning via formal methods: Toward safe control through proof and learning. In *Proc. of the AAAI Conf. on Artificial Intelligence (AAAI)*, pp. 6485–6492, 2018. Nathan Fulton and André Platzer. Verifiably safe off-model reinforcement learning. In Proc. of the Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pp. 413–430, 2019. Javier García and Fernando Fernández. 
A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(42):1437–1480, 2015. Jeremy H. Gillula and Claire J. Tomlin. Reducing conservativeness in safety guarantees by learning disturbances online: Iterated guaranteed safe online learning. *Robotics: Science and Systems*, 8(1):81–88, 2013. Sebastien Gros, Mario Zanon, and Alberto Bemporad. Safe reinforcement learning via projection on a safe set: How to achieve optimality? *IFAC-PapersOnLine*, 53(2):8076–8081, 2020. Felix Gruber and Matthias Althoff. Scalable robust output feedback MPC of linear sampled-data systems. Proc. of the IEEE Conf. on Decision and Control (CDC), pp. 2563–2570, 2021. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In *Proc. of the Int. Conf. on Machine Learning* (ICML), pp. 1861–1870, 2018. Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, and Dominik Wojtczak. Omega-regular objectives in model-free reinforcement learning. In Proc. of the Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), pp. 395–412, 2019. Habtamu Hailemichael, Beshah Ayalew, Lindsey Kerbel, Andrej Ivanco, and Keith Loiselle. Safety filtering for reinforcement learning-based adaptive cruise control. *IFAC-PapersOnLine*, 55(24):149–154, 2022a. Habtamu Hailemichael, Beshah Ayalew, Lindsey Kerbel, Andrej Ivanco, and Keith Loiselle. Safe reinforcement learning for an energy-efficient driver assistance system. In *IFAC-PapersOnLine*, volume 55:37, pp. 615–620, 2022b. Andrew Harris and Hanspeter Schaub. Spacecraft command and control with safety guarantees using shielded deep reinforcement learning. In *AIAA Scitech 2020 Forum*, volume 1, 2020. Mohammadhosein Hasanbeig, Yiannis Kantaros, Alessandro Abate, Daniel Kroening, George J. Pappas, and Insup Lee. 
Reinforcement learning for temporal logic control synthesis with probabilistic satisfaction guarantees. In *Proc. of the IEEE Conf. on Decision and Control (CDC)*, pp. 5338–5343, 2019a. Mohammadhosein Hasanbeig, Daniel Kroening, and Alessandro Abate. Towards verifiable and safe modelfree reinforcement learning. In *Proc. of the Workshop on Artificial Intelligence and Formal Verification,* Logic, Automata, and Synthesis, pp. 1–9, 2019b. Mohammadhosein Hasanbeig, Alessandro Abate, and Daniel Kroening. Cautious reinforcement learning with logical constraints. In *Proc. of the Int. Conf. on Autonomous Agents and Multi Agent Systems (AAMAS)*, pp. 483–491, 2020. Mohammadhosein Hasanbeig, Daniel Kroening, and Alessandro Abate. LCRL: Certified policy synthesis via logically-constrained reinforcement learning. In Proc. of the Int. Conf. on Quantitative Evaluation of Systems (QEST), pp. 217–231, 2022. Matthias Heger. Consideration of risk in reinforcement learning. In Proc. of the Int. Conf. on Machine Learning (ICML), pp. 105–111, 1994. Sylvia Herbert, Jason J. Choi, Suvansh Sanjeev, Marsalis Gibson, Koushil Sreenath, and Claire J. Tomlin. Scalable learning of safety guarantees for autonomous systems using Hamilton-Jacobi reachability. In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 5914–5920, 2021. Lukas Hewing, Kim P. Wabersich, Marcel Menner, and Melanie N. Zeilinger. Learning-based model predictive control: Toward safe learning in control. *Annual Review of Control, Robotics, and Autonomous Systems*, 3(1):269–296, 2020. Shengyi Huang and Santiago Ontañón. A closer look at invalid action masking in policy gradient algorithms. The Int. Florida Artificial Intelligence Research Society Conf. Proc. (FLAIRS), 35, 2022. Nathan Hunt, Nathan Fulton, Sara Magliacane, Trong Nghia Hoang, Subhro Das, and Armando SolarLezama. Verifiably safe exploration for end-to-end reinforcement learning. In *Proc. of the Int. Conf. 
on* Hybrid Systems: Computation and Control (HSCC), pp. 1–11, 2021. B. Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A.Al Sallab, Senthil Yogamani, and Patrick Perez. Deep reinforcement learning for autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems, 23(6):4909–4926, 2022. Niklas Kochdumper, Hanna Krasowski, Xiao Wang, Stanley Bak, and Matthias Althoff. Provably safe reinforcement learning via action projection using reachability analysis and polynomial zonotopes. *IEEE* Open Journal of Control Systems, 2:79–92, 2023. Bettina Könighofer, Florian Lorber, Nils Jansen, and Roderick Bloem. Shield synthesis for reinforcement learning. In *Leveraging Applications of Formal Methods, Verification and Validation: Verification Principles*, pp. 290–306, 2020. Bettina Könighofer, Julian Rudolf, Alexander Palmisano, Martin Tappler, and Roderick Bloem. Online shielding for stochastic systems. In *Proc. of the NASA Formal Methods (NFM)*, pp. 231–248, 2021. Hanna Krasowski, Xiao Wang, and Matthias Althoff. Safe reinforcement learning for autonomous lane changing using set-based prediction. In Proc. of the IEEE Int. Intelligent Transportation Systems Conf. (ITSC), pp. 1–7, 2020. Hanna Krasowski, Yinqiang Zhang, and Matthias Althoff. Safe reinforcement learning for urban driving using invariably safe braking sets. In *Proc. of the IEEE Int. Intelligent Transportation Systems Conf.* (ITSC), pp. 2407–2414, 2022. Lauro Langosco Di Langosco, Jack Koch, Lee D. Sharkey, Jacob Pfau, and David Krueger. Goal misgeneralization in deep reinforcement learning. In *Proc. of the Int. Conf. on Machine Learning (ICML)*, pp. 12004–12019, 2022. Dongsu Lee and Minhae Kwon. ADAS-RL: Safety learning approach for stable autonomous driving. ICT Express, 8(3):479–483, 2022. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv:2005.01643*, 2020. 
Xiao Li, Cristian-Ioan Vasile, and Calin Belta. Reinforcement learning with temporal logic rewards. In *Proc.* of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 3834–3839, 2017. Xiao Li, Zachary Serlin, Guang Yang, and Calin Belta. A formal methods approach to interpretable reinforcement learning for robotic planning. *Science Robotics*, 4(37), 2019a. Zhaojian Li, Tianshu Chu, and Uroš Kalabić. Dynamics-enabled safe deep reinforcement learning: Case study on active suspension control. In Proc. of the IEEE Conf. on Control Technology and Applications (CCTA), pp. 585–591, 2019b. Zemin Eitan Liu, Quan Zhou, Yanfei Li, Shijin Shuai, and Hongming Xu. Safe deep reinforcement learningbased constrained optimal control scheme for HEV energy management. *IEEE Transactions on Transportation Electrification*, 9(3):4278–4293, 2023. Tommaso Mannucci, Erik-Jan van Kampen, Cornelis de Visser, and Qiping Chu. Safe exploration algorithms for reinforcement learning controllers. *IEEE Transactions on Neural Networks and Learning Systems*, 29 (4):1069–1081, 2018. Zahra Marvi and Bahare Kiumarsi. Safe reinforcement learning: A control barrier function optimization approach. *International Journal of Robust and Nonlinear Control*, 31(6):1923–1940, 2021. Zahra Marvi and Bahare Kiumarsi. Reinforcement learning with safety and stability guarantees during exploration for linear systems. *IEEE Open Journal of Control Systems*, 1:322–334, 2022. Branka Mirchevska, Christian Pek, Moritz Werling, Matthias Althoff, and Joschka Boedecker. High-level decision making for safe and reasonable autonomous lane changing using reinforcement learning. In *Proc.* of the IEEE Int. Intelligent Transportation Systems Conf. (ITSC), pp. 2156–2162, 2018. Ian M. Mitchell, Jacob Budzis, and Andriy Bolyachevets. Invariant, viability and discriminating kernel under-approximation via zonotope scaling. In Proc. of the Int. Conf. on Hybrid Systems: Computation and Control (HSCC), pp. 268–269, 2019. 
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. *arXiv*, abs/1312.5, 2013.

Arthur Müller and Matthia Sabatelli. Safe and psychologically pleasant traffic signal control with reinforcement learning using action masking. In *Proc. of the IEEE Conf. on Intelligent Transportation Systems (ITSC)*, pp. 951–958, 2022.

Islam Nazmy, Andrew Harris, Morteza Lahijanian, and Hanspeter Schaub. Shielded deep reinforcement learning for multi-sensor spacecraft imaging. In *Proc. of the American Control Conf. (ACC)*, pp. 1808–1813, 2022.

Armin Norouzi, Saeid Shahpouri, David Gordon, Mahdi Shahbakhti, and Charles Robert Koch. Safe deep reinforcement learning in diesel engine emission control. *Proc. of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering*, 2023.

Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krähenbühl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning, 2018.

Christian Pek, Stefanie Manzinger, Markus Koschi, and Matthias Althoff. Using online verification to prevent autonomous vehicles from causing accidents. *Nature Machine Intelligence*, 2(9):518–528, 2020.

Tu-Hoa Pham, Giovanni De Magistris, and Ryuki Tachibana. OptLayer - practical constrained optimization for deep reinforcement learning in the real world. In *Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA)*, pp. 6236–6243, 2018.

André Platzer. Differential dynamic logic for hybrid systems. *Journal of Automated Reasoning*, 41(2):143–189, 2008.

Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. Stable-Baselines3: Reliable reinforcement learning implementations. *Journal of Machine Learning Research*, 22(268):1–8, 2021.

Hendrik Roehm, Jens Oehlerking, Matthias Woehrle, and Matthias Althoff. Model conformance for cyber-physical systems: A survey.
*ACM Transactions on Cyber-Physical Systems*, 3(3):1–26, 2019.

Lukas Schäfer, Felix Gruber, and Matthias Althoff. Scalable computation of robust control invariant sets of nonlinear systems. *IEEE Transactions on Automatic Control*, (early access):1–15, 2023.

Lukas M. Schmidt, Georgios D. Kontes, Axel Plinge, and Christopher Mutschler. Can you trust your autonomous car? Interpretable and verifiably safe reinforcement learning. In *Proc. of the IEEE Intelligent Vehicles Symp. (IV)*, pp. 171–178, 2021.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv*, abs/1707.0, 2017.

Bastian Schürmann, Niklas Kochdumper, and Matthias Althoff. Reachset model predictive control for disturbed nonlinear systems. In *Proc. of the IEEE Conf. on Decision and Control (CDC)*, pp. 3463–3470, 2018.

Bastian Schürmann, Riccardo Vignali, Maria Prandini, and Matthias Althoff. Set-based control for disturbed piecewise affine systems with state and actuation constraints. *Nonlinear Analysis: Hybrid Systems*, 36(Art. no. 100826), 2020.

Gautham Nayak Seetanadi, Karl-Erik Årzén, and Martina Maggio. Adaptive routing with guaranteed delay bounds using safe reinforcement learning. In *ACM Int. Conf. Proc. Series*, pp. 149–160, 2020.

Mahmoud Selim, Amr Alanwar, M. Watheq El-Kharashi, Hazem M. Abbas, and Karl H. Johansson. Safe reinforcement learning using data-driven predictive control. In *Proc. of the Int. Conf. on Communications, Signal Processing, and their Applications (ICCSPA)*, pp. 1–6, 2022a.

Mahmoud Selim, Amr Alanwar, Shreyas Kousik, Grace Gao, Marco Pavone, and Karl H. Johansson. Safe reinforcement learning using black-box reachability analysis. *IEEE Robotics and Automation Letters*, 7(4):10665–10672, 2022b.

Yifei Simon Shao, Chao Chen, Shreyas Kousik, and Ram Vasudevan. Reachability-based trajectory safeguard (RTS): A safe and fast reinforcement learning safety layer for continuous control.
*IEEE Robotics and Automation Letters*, 6(2):3663–3670, 2021.

Suhas Shyamsundar, Tommaso Mannucci, and Erik-Jan Van Kampen. Reinforcement learning based algorithm with safety handling and risk perception. In *Proc. of the IEEE Symp. Series on Computational Intelligence (SSCI)*, pp. 1–7, 2017.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George Van Den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. *Nature*, 550(7676):354–359, 2017.

Joar Max Viktor Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, and David Krueger. Defining and characterizing reward hacking. In *Proc. of the Int. Conf. on Neural Information Processing Systems (NeurIPS)*, 2022.

Adam Stooke, Joshua Achiam, and Pieter Abbeel. Responsive safety in reinforcement learning by PID Lagrangian methods. In *Proc. of the Int. Conf. on Machine Learning (ICML)*, pp. 9133–9143, 2020.

Zachary Sunberg and Mykel Kochenderfer. Online algorithms for POMDPs with continuous state, action, and observation spaces. *Proc. of the Int. Conf. on Automated Planning and Scheduling (ICAPS)*, 28(1):259–263, 2018.

Richard S. Sutton and Andrew G. Barto. *Reinforcement Learning: An Introduction*. A Bradford Book, 2nd edition, 2018.

Daniel Tabas and Baosen Zhang. Computationally efficient safe reinforcement learning for power systems. In *Proc. of the American Control Conf. (ACC)*, pp. 3303–3310, 2022.

Andrew Taylor, Andrew Singletary, Yisong Yue, and Aaron D. Ames. Learning for safety-critical control with control barrier functions. In *Proc. of the Conf. on Learning for Dynamics and Control*, pp. 708–717, 2020.

Andrew J. Taylor, Andrew Singletary, Yisong Yue, and Aaron D. Ames. A control barrier perspective on episodic learning via projection-to-state safety. *IEEE Control Systems Letters*, 5(3):1019–1024, 2021.
Brijen Thananjeyan, Ashwin Balakrishna, Suraj Nair, Michael Luo, Krishnan Srinivasan, Minho Hwang, Joseph E. Gonzalez, Julian Ibarz, Chelsea Finn, and Ken Goldberg. Recovery RL: Safe reinforcement learning with learned recovery zones. *IEEE Robotics and Automation Letters*, 6(3):4915–4922, 2021.

Jakob Thumm and Matthias Althoff. Provably safe deep reinforcement learning for robotic manipulation in human environments. In *Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA)*, pp. 6344–6350, 2022.

Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. Safe exploration in finite Markov decision processes with Gaussian processes. In *Proc. of the Int. Conf. on Neural Information Processing Systems (NeurIPS)*, pp. 4312–4320, 2016.

Peter Varnai and Dimos V. Dimarogonas. On robustness metrics for learning STL tasks. In *Proc. of the American Control Conf. (ACC)*, pp. 5394–5399, 2020.

Stephen A. Vavasis. *Complexity Theory: Quadratic Programming*. Springer, Boston, MA., 2001.

Kim P. Wabersich and Melanie N. Zeilinger. A predictive safety filter for learning-based control of constrained nonlinear dynamical systems. *Automatica*, 129(1):109597–109614, 2021.

Kim P. Wabersich, Andrew J. Taylor, Jason J. Choi, Koushil Sreenath, Claire J. Tomlin, Aaron D. Ames, and Melanie N. Zeilinger. Data-driven safety filters: Hamilton-Jacobi reachability, control barrier functions, and predictive methods for uncertain systems. *IEEE Control Systems Magazine*, 43(5):137–177, 2023.

Chengyu Wang, Luhan Wang, Zhaoming Lu, Xinghe Chu, Zhengrui Shi, Jiayin Deng, Tianyang Su, Guochu Shou, and Xiangming Wen. SRL-TR2: A safe reinforcement learning based trajectory tracker framework. *IEEE Transactions on Intelligent Transportation Systems*, 24(6):5765–5780, 2023.

Linghao Wang, Miao Wang, and Yujun Zhang. A safe training approach for deep reinforcement learning-based traffic engineering. In *Proc. of the IEEE Int. Conf. on Communications (ICC)*, pp. 1450–1455, 2022.

Xiao Wang.
Ensuring safety of learning-based motion planners using control barrier functions. *IEEE Robotics and Automation Letters*, 7(2):4773–4780, 2022.

Peter Wieland and Frank Allgöwer. Constructive safety using control barrier functions. *IFAC Proc. Volumes*, 40(12):462–467, 2007.

Cambridge Yang, Michael L. Littman, and Michael Carbin. On the (in)tractability of reinforcement learning for LTL objectives. *Proc. of the Int. Joint Conf. on Artificial Intelligence (IJCAI)*, pp. 3650–3658, 2022.

Chenchen Yang, Jing Liu, Haiying Sun, Junfeng Sun, Xiang Chen, and Lipeng Zhang. Safe reinforcement learning for CPSs via formal modeling and verification. In *IEEE Int. Joint Conf. on Neural Networks Proc. (IJCNN)*, pp. 1–8, 2021.

Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, and Peter J. Ramadge. Projection-based constrained policy optimization. In *Proc. of the Int. Conf. on Learning Representations (ICLR)*, pp. 1–21, 2020.

Fei Ye, Shen Zhang, Pin Wang, and Ching Yao Chan. A survey of deep reinforcement learning algorithms for motion planning and control of autonomous vehicles. In *Proc. of the IEEE Intelligent Vehicles Symp. (IV)*, pp. 1073–1080, 2021.

Peipei Yu, Hongcai Zhang, Yonghua Song, Hongxun Hui, and Ge Chen. District cooling system control for providing operating reserve based on safe deep reinforcement learning. *IEEE Transactions on Power Systems*, pp. 1–13, 2023.

Mario Zanon and Sebastien Gros. Safe reinforcement learning using robust MPC. *IEEE Transactions on Automatic Control*, 66(8):3638–3652, 2021.

Melanie N. Zeilinger, Davide M. Raimondo, Alexander Domahidi, Manfred Morari, and Colin N. Jones. On real-time robust model predictive control. *Automatica*, 50(3):683–694, 2014.

Jin Zhang, Yuxiang Guan, Liang Che, and Mohammad Shahidehpour. EV charging command fast allocation approach based on deep reinforcement learning with safety modules. *IEEE Transactions on Smart Grid*, 2023.

Wenshuai Zhao, Jorge Pena Queralta, and Tomi Westerlund.
Sim-to-real transfer in deep reinforcement learning for robotics: A survey. In *Proc. of the IEEE Symp. Series on Computational Intelligence (SSCI)*, pp. 737–744, 2020.

## A Appendix

## MDP Modification With Action Replacement

Action replacement alters the MDP on which the agent learns. Hunt et al. (2021) discuss this modification for discrete action spaces and uniform sampling from the safe action space. We generalize this discussion to any replacement function and continuous action spaces. To this end, we define $\psi(\mathbf{s})$ so that it randomly samples the replacement action $\tilde{\mathbf{a}}$ according to a replacement policy $\mathbf{\pi}_r(\tilde{\mathbf{a}}|\mathbf{s})$ with $\sum_{\tilde{\mathbf{a}}\in\mathbb{A}_\varphi(\mathbf{s})}\mathbf{\pi}_r(\tilde{\mathbf{a}}|\mathbf{s})=1$ for the discrete case, $\int_{\mathbb{A}_\varphi(\mathbf{s})}\mathbf{\pi}_r(\tilde{\mathbf{a}}|\mathbf{s})\,d\tilde{\mathbf{a}}=1$ for the continuous case, and $\forall\tilde{\mathbf{a}}\in\mathbb{A}_\varphi(\mathbf{s}):\mathbf{\pi}_r(\tilde{\mathbf{a}}|\mathbf{s})\geq 0$. In the example of uniform sampling from $\mathbb{A}_\varphi(\mathbf{s})$, the replacement policy is $\mathbf{\pi}_r(\tilde{\mathbf{a}}|\mathbf{s})=1/V_{\mathbb{A}_\varphi(\mathbf{s})}$, where $V_{\mathbb{A}_\varphi(\mathbf{s})}$ is the volume of $\mathbb{A}_\varphi(\mathbf{s})$. By replacing unsafe actions, the transition function of the MDP changes to

$$T_{\varphi}(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})=\begin{cases}T(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}),&\text{if }\varphi(\mathbf{s},\mathbf{a})=1\\ T_{r}(\mathbf{s},\mathbf{s}^{\prime}),&\text{otherwise,}\end{cases}\tag{15}$$

$$T_{r}(\mathbf{s},\mathbf{s}^{\prime})=\sum_{\tilde{\mathbf{a}}\in\mathbb{A}_{\varphi}(\mathbf{s})}\mathbf{\pi}_{r}\left(\tilde{\mathbf{a}}|\mathbf{s}\right)T(\mathbf{s},\tilde{\mathbf{a}},\mathbf{s}^{\prime}).\tag{16}$$
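As a minimal sketch of this replacement mechanism for a discrete action space, the following snippet verifies a proposed action and otherwise samples uniformly from the safe action set. The safety verifier `phi` and its toy state-dependent constraint are illustrative assumptions, not part of our benchmark implementations:

```python
import numpy as np

# Toy safety verifier phi(s, a): an ASSUMPTION for illustration only,
# standing in for a formal verification method (e.g., reachability analysis).
def phi(s, a):
    return abs(a) <= 1.0 - abs(s)

def safe_action_set(s, actions):
    """Discrete safe action set A_phi(s) = {a in A : phi(s, a) = 1}."""
    return [a for a in actions if phi(s, a)]

def replacement_policy(s, actions, rng):
    """Uniform replacement policy pi_r(.|s) over A_phi(s)."""
    safe = safe_action_set(s, actions)
    # Relies on the assumption that every safe state admits at least one safe action.
    return safe[rng.integers(len(safe))]

def verified_action(s, a, actions, rng):
    """Action replacement: execute a if phi(s, a) = 1, else sample from pi_r."""
    return a if phi(s, a) else replacement_policy(s, actions, rng)

# Example: at s = 0.6 only a = 0.0 satisfies the toy constraint,
# so the unsafe proposal a = 1.0 is replaced.
rng = np.random.default_rng(0)
actions = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(verified_action(0.6, 1.0, actions, rng))  # 0.0
```

In practice, `phi` would be realized by a formal verification technique, and the uniform sampling can be exchanged for a failsafe controller $\psi_{\text{failsafe}}(\mathbf{s})$.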
The reward function of the MDP changes accordingly to

$$r_{\varphi}(\mathbf{s},\mathbf{a})=\begin{cases}r(\mathbf{s},\mathbf{a}),&\text{if }\varphi(\mathbf{s},\mathbf{a})=1\\ r_{r}(\mathbf{s}),&\text{otherwise,}\end{cases}\tag{17}$$

$$r_{r}(\mathbf{s})=\sum_{\tilde{\mathbf{a}}\in\mathbb{A}_{\varphi}(\mathbf{s})}\mathbf{\pi}_{r}\left(\tilde{\mathbf{a}}|\mathbf{s}\right)r(\mathbf{s},\tilde{\mathbf{a}}).\tag{18}$$

In the continuous case, we obtain $T_r(\mathbf{s}, \mathbf{s}')$ by marginalizing the transition probability density function over $\mathbb{A}_\varphi(\mathbf{s})$:

$$T_{r}(\mathbf{s},\mathbf{s}^{\prime})=\int_{\mathbb{A}_{\varphi}(\mathbf{s})}\mathbf{\pi}_{r}\left(\tilde{\mathbf{a}}|\mathbf{s}\right)T(\mathbf{s},\tilde{\mathbf{a}},\mathbf{s}^{\prime})\,d\tilde{\mathbf{a}}.\tag{19}$$

Analogously, we have

$$r_{r}(\mathbf{s})=\int_{\mathbb{A}_{\varphi}(\mathbf{s})}\mathbf{\pi}_{r}\left(\tilde{\mathbf{a}}|\mathbf{s}\right)r(\mathbf{s},\tilde{\mathbf{a}})\,d\tilde{\mathbf{a}}.\tag{20}$$

## Environment Parameters

We provide an overview of all environment-specific parameters in Table 3 and Table 4.

| Parameter | Value |
|-------------|------------|
| Gravity g | 9.81 m s−2 |
| Mass m | 1 kg |
| Length l | 1 m |

Table 3: Environment parameters of the pendulum.

| Parameter | Value |
|-------------|----------------------------|
| Gravity g | 9.81 m s−2 |
| k | 1 1/kg |
| d0 | 70 |
| d1 | 17 |
| n0 | 55 |
| W | [[−0.1, 0.1], [−0.1, 0.1]] |

Table 4: Environment parameters of the 2D quadrotor.

## Hyperparameters For Learning Algorithms

We specify the hyperparameters for all learning algorithms (see Table 5 for PPO, Table 6 for TD3, Table 7 for DQN, and Table 8 for SAC) that differ from the Stable-Baselines3 (Raffin et al., 2021) default values. Additionally, the code for the experiments is available at the CodeOcean capsule doi.org/10.24433/CO.9209121.v1 to reproduce our results.
| Parameter | Pendulum | 2D quadrotor |
|------------------------------------|------------|----------------|
| Learning rate | 1 × 10−4 | 5 × 10−5 |
| Discount factor γ | 0.98 | 0.999 |
| Steps per update | 2048 | 512 |
| Optimization epochs | 20 | 30 |
| Minibatch size | 16 | 128 |
| Max gradient clipping | 0.9 | 0.5 |
| Entropy coefficient | 1 × 10−3 | 2 × 10−6 |
| Value function coefficient | 0.045 | 0.5 |
| Clipping range | 0.3 | 0.1 |
| Generalized advantage estimation λ | 0.8 | 0.92 |
| Activation function | ReLU | ReLU |
| Hidden layers | 2 | 2 |
| Neurons per layer | 32 | 64 |
| Training steps | 60k | 200k |

Table 5: Hyperparameters for PPO.

| Parameter | Pendulum | 2D quadrotor |
|----------------------------------|------------|----------------|
| Learning rate | 3.5 × 10−3 | 2 × 10−3 |
| Replay buffer size | 1 × 104 | 1 × 105 |
| Discount factor γ | 0.98 | 0.98 |
| Initial exploration steps | 10 × 103 | 100 |
| Steps between model updates | 256 | 5 |
| Gradient steps per model update | 256 | 10 |
| Minibatch size per gradient step | 512 | 512 |
| Soft update coefficient τ | 5 × 10−3 | 5 × 10−3 |
| Gaussian smoothing noise σ | 0.2 | 0.12 |
| Activation function | ReLU | ReLU |
| Hidden layers | 2 | 2 |
| Neurons per layer | 32 | 64 |
| Training steps | 60k | 200k |

Table 6: Hyperparameters for TD3.
| Parameter | Pendulum | 2D quadrotor |
|-----------------------------------|------------|----------------|
| Learning rate | 2 × 10−3 | 1 × 10−4 |
| Replay buffer size | 5 × 104 | 1 × 106 |
| Discount factor γ | 0.95 | 0.999 99 |
| Initial exploration steps | 500 | 100 |
| Steps between model updates | 8 | 2 |
| Gradient steps per model update | 4 | 4 |
| Minibatch size per gradient step | 512 | 64 |
| Maximum for gradient clipping | 10 | 100 |
| Update frequency target network | 1 × 103 | 1 × 103 |
| Initial exploration probability ϵ | 1.0 | 0.137 |
| Linear interpolation steps of ϵ | 6 × 103 | 1 × 104 |
| Final exploration probability ϵ | 0.1 | 0.004 |
| Activation function | tanh | tanh |
| Hidden layers | 2 | 2 |
| Neurons per layer | 32 | 64 |
| Training steps | 60k | 200k |

Table 7: Hyperparameters for DQN.

| Parameter | Pendulum | 2D quadrotor |
|----------------------------------|------------|----------------|
| Learning rate | 3 × 10−4 | 3 × 10−4 |
| Replay buffer size | 1 × 106 | 5 × 105 |
| Discount factor γ | 0.99 | 0.98 |
| Initial exploration steps | 100 | 1000 |
| Steps between model updates | 1 | 32 |
| Gradient steps per model update | 1 | 32 |
| Minibatch size per gradient step | 256 | 512 |
| Entropy coefficient | learned | 1 × 10−1 |
| Soft update coefficient τ | 5 × 10−3 | 1 × 10−2 |
| Activation function | ReLU | ReLU |
| Hidden layers | 2 | 2 |
| Neurons per layer | 32 | 64 |
| Training steps | 60k | 200k |

Table 8: Hyperparameters for SAC.

## Full Evaluation

In this section, we present all training results of the five RL algorithms TD3, SAC, DQN, and PPO with continuous and discrete action spaces. We compare these algorithms on the inverted pendulum and 2D quadrotor environments with ten random seeds each. The tested approaches are action replacement with ψsample(s) and ψfailsafe(s), action projection, and action masking.
When action replacement is used with a failsafe controller, we omit the *safe action* and *both* learning tuples for discrete action spaces, because the failsafe controller proposes actions from the continuous action space, which might not be contained in the discrete action space. First, we present the effect of the learning tuples on the on-policy algorithm PPO in Figure 5. This comparison clearly shows the negative effect of the *safe action* tuple on the training performance of PPO. Figure 6 depicts the aggregated training results for the pendulum as previously discussed for the 2D quadrotor in Figures 3, 4 and 5. Figures 7 to 10 depict how the reward and intervention rate evolve during training for all investigated configurations. Finally, Tables 9 and 10 show statistical results for deploying the learned models on the two benchmarks.

![30_image_0.png](30_image_0.png)

Figure 5: Evaluation of the training tuples for the 2D quadrotor averaged over the continuous and discrete PPO training runs using action replacement with ten random seeds each. The left column depicts the reward and the right column the safety intervention rate.

![31_image_0.png](31_image_0.png)

Figure 6: Evaluation of the training tuples for the pendulum averaged over ten random seeds each. The left column depicts the reward and the right column the safety intervention rate. We would like to refer the reader to Figures 3, 4 and 5 for the corresponding 2D quadrotor results.

![32_image_0.png](32_image_0.png)

![32_image_1.png](32_image_1.png)

…discrete, and PPO continuous. For each configuration, ten training runs with different random seeds were conducted. Each subplot contains all implemented variants. Note that the reward for the *adaption penalty* variants is still r and the adaption penalty r* is not included in the curves for better comparability.

![33_image_0.png](33_image_0.png)

…PPO discrete, and PPO continuous.
For each configuration, ten training runs with different random seeds were conducted. Each subplot contains all implemented variants. Note that the reward for the *adaption* penalty variants is still r and the adaption penalty r ∗is not included in the curves for better comparability. ![34_image_0.png](34_image_0.png) ![35_image_0.png](35_image_0.png) | Reward | Intervention Rate | Safety Violation | | | | | |------------------------------------------|---------------------|--------------------|------|-----------|------|-----------| | Approach | Mean | Std. Dev. | Mean | Std. Dev. | Mean | Std. Dev. | | PPO (continuous) Projection (SafeAction) | -1.14 | 0.42 | 0.76 | 0.29 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Both) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Naive) | -0.06 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (SafeAction) | -0.13 | 0.16 | 0.01 | 0.02 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Baseline (Naive) | -0.06 | 0.07 | - | - | 0.00 | 0.00 | | Masking (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | PPO (discrete) Projection (SafeAction) | -0.52 | 0.55 | 0.28 | 0.42 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.14 | 0.12 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Both) | -0.09 | 0.09 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Naive) | -0.15 | 0.27 | 0.03 | 0.10 | 0.00 | 0.00 | | Sample (SafeAction) | -0.09 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.13 | 0.16 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.10 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Naive) | -0.09 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe 
(AdaptionPenalty) | -0.09 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (Naive) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Baseline (Naive) | -0.08 | 0.07 | - | - | 0.00 | 0.00 | | Masking (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | TD3 Projection (SafeAction) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.08 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Both) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (SafeAction) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.09 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.09 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.09 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Baseline (Naive) | -0.07 | 0.07 | - | - | 0.00 | 0.00 | | Masking (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | DQN Projection (SafeAction) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Both) | -0.07 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (SafeAction) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.09 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.07 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Baseline (Naive) | -0.07 | 0.07 | - | - | 0.00 | 0.00 | | Masking (Naive) | -0.07 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | SAC Projection (Naive) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.10 | 0.09 | 0.00 | 0.00 | 0.00 | 0.00 | 
| Projection (SafeAction) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Both) | -0.10 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Naive) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.10 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (SafeAction) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.09 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (Naive) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.09 | 0.09 | 0.00 | 0.00 | 0.00 | 0.00 | | Baseline (Naive) | -0.11 | 0.09 | - | - | 0.00 | 0.00 | | Masking (Naive) | -0.08 | 0.07 | 0.00 | 0.00 | 0.00 | 0.00 | Table 9: Mean and standard deviation of 30 pendulum deployment episodes. Note: - indicates that there is no intervention rate for the baselines as they don't implement a safety verification. | Reward | Intervention Rate | Safety Violation | | | | | |------------------------------------------|---------------------|--------------------|------|-----------|------|-----------| | Approach | Mean | Std. Dev. | Mean | Std. Dev. | Mean | Std. Dev. 
| | PPO (continuous) Projection (SafeAction) | -0.44 | 0.00 | 0.68 | 0.00 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.33 | 0.07 | 0.44 | 0.05 | 0.00 | 0.00 | | Projection (Both) | -0.39 | 0.07 | 0.57 | 0.13 | 0.00 | 0.00 | | Projection (Naive) | -0.31 | 0.10 | 0.47 | 0.25 | 0.00 | 0.00 | | Sample (SafeAction) | -0.43 | 0.00 | 0.86 | 0.00 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.28 | 0.12 | 0.10 | 0.12 | 0.00 | 0.00 | | Sample (Both) | -0.36 | 0.03 | 0.41 | 0.20 | 0.00 | 0.00 | | Sample (Naive) | -0.39 | 0.06 | 0.23 | 0.13 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.31 | 0.11 | 0.08 | 0.06 | 0.00 | 0.00 | | FailSafe (Naive) | -0.27 | 0.11 | 0.27 | 0.29 | 0.00 | 0.00 | | Baseline (Naive) | -0.86 | 0.01 | - | - | 0.94 | 0.01 | | Masking (Naive) | -0.43 | 0.09 | 0.57 | 0.28 | 0.00 | 0.00 | | PPO (discrete) Projection (SafeAction) | -0.44 | 0.01 | 0.74 | 0.21 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.24 | 0.13 | 0.02 | 0.02 | 0.00 | 0.00 | | Projection (Both) | -0.35 | 0.11 | 0.16 | 0.14 | 0.00 | 0.00 | | Projection (Naive) | -0.34 | 0.14 | 0.46 | 0.37 | 0.00 | 0.00 | | Sample (SafeAction) | -0.42 | 0.02 | 0.82 | 0.01 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.11 | 0.10 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.38 | 0.09 | 0.32 | 0.18 | 0.00 | 0.00 | | Sample (Naive) | -0.13 | 0.16 | 0.02 | 0.03 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.19 | 0.15 | 0.01 | 0.03 | 0.00 | 0.00 | | FailSafe (Naive) | -0.34 | 0.13 | 0.41 | 0.21 | 0.00 | 0.00 | | Baseline (Naive) | -0.11 | 0.03 | - | - | 0.00 | 0.00 | | Masking (Naive) | -0.25 | 0.14 | 0.28 | 0.21 | 0.00 | 0.00 | | TD3 Projection (SafeAction) | -0.21 | 0.03 | 0.28 | 0.10 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.22 | 0.03 | 0.26 | 0.10 | 0.00 | 0.00 | | Projection (Both) | -0.21 | 0.04 | 0.28 | 0.12 | 0.00 | 0.00 | | Projection (Naive) | -0.25 | 0.05 | 0.26 | 0.17 | 0.00 | 0.00 | | Sample (SafeAction) | -0.18 | 0.03 | 0.07 | 0.01 | 0.00 | 0.00 | | Sample 
(AdaptionPenalty) | -0.21 | 0.04 | 0.13 | 0.05 | 0.00 | 0.00 | | Sample (Both) | -0.21 | 0.06 | 0.06 | 0.03 | 0.00 | 0.00 | | Sample (Naive) | -0.19 | 0.04 | 0.12 | 0.09 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.20 | 0.03 | 0.05 | 0.02 | 0.00 | 0.00 | | FailSafe (Naive) | -0.26 | 0.09 | 0.05 | 0.02 | 0.00 | 0.00 | | Baseline (Naive) | -0.90 | 0.05 | - | - | 0.95 | 0.02 | | Masking (Naive) | -0.16 | 0.03 | 0.03 | 0.03 | 0.00 | 0.00 | | DQN Projection (SafeAction) | -0.06 | 0.02 | 0.00 | 0.01 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.05 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Both) | -0.05 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | | Projection (Naive) | -0.09 | 0.06 | 0.12 | 0.24 | 0.00 | 0.00 | | Sample (SafeAction) | -0.07 | 0.01 | 0.01 | 0.02 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.06 | 0.03 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.07 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 | | Sample (Naive) | -0.05 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.06 | 0.02 | 0.02 | 0.04 | 0.00 | 0.00 | | FailSafe (Naive) | -0.07 | 0.03 | 0.10 | 0.10 | 0.00 | 0.00 | | Baseline (Naive) | -0.24 | 0.37 | - | - | 0.20 | 0.40 | | Masking (Naive) | -0.15 | 0.16 | 0.14 | 0.26 | 0.00 | 0.00 | | SAC Projection (Naive) | -0.20 | 0.01 | 0.52 | 0.02 | 0.00 | 0.00 | | Projection (AdaptionPenalty) | -0.19 | 0.00 | 0.49 | 0.01 | 0.00 | 0.00 | | Projection (SafeAction) | -0.19 | 0.00 | 0.49 | 0.01 | 0.00 | 0.00 | | Projection (Both) | -0.20 | 0.01 | 0.49 | 0.01 | 0.00 | 0.00 | | Sample (Naive) | -0.15 | 0.01 | 0.05 | 0.00 | 0.00 | 0.00 | | Sample (AdaptionPenalty) | -0.15 | 0.01 | 0.05 | 0.00 | 0.00 | 0.00 | | Sample (SafeAction) | -0.15 | 0.01 | 0.05 | 0.00 | 0.00 | 0.00 | | Sample (Both) | -0.16 | 0.02 | 0.05 | 0.00 | 0.00 | 0.00 | | FailSafe (Naive) | -0.21 | 0.01 | 0.17 | 0.02 | 0.00 | 0.00 | | FailSafe (AdaptionPenalty) | -0.17 | 0.01 | 0.04 | 0.00 | 0.00 | 0.00 | | Baseline (Naive) | -0.88 | 0.02 | - | - | 0.96 | 0.03 | | Masking 
(Naive) | -0.14 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 | Table 10: Mean and standard deviation of 30 2D Quadrotor deployment episodes. Note: - indicates that there is no intervention rate for the baselines as they don't implement a safety verification.
==================================================

Review 1:

Summary:
This paper presents a survey of provably safe reinforcement learning algorithms, and an experimental comparison of some prior approaches on simple simulated control environments. The paper categorizes prior works under three categories: action replacement, action projection, and action masking, depending on how safe behavior is guaranteed by the algorithm. The experiments evaluate these different types of algorithms and attempt to provide consolidated insights into which type of algorithm might be suited for a particular type of problem.

Strengths and Weaknesses:

Strengths
- This survey paper is clear and easy to read, and has detailed coverage of several different prior works.
- The paper attempts to provide a unifying perspective on prior works by categorizing them into different groups based on how safety is ensured, and through experiments in consistent environments that provide perspectives on the relative strengths/weaknesses of these different classes of safe RL algorithms.
- The overall summaries of prior works in terms of algorithm designs and applications in the two big tables are helpful in distilling the differences at a glance.

Weaknesses
- The paper does not provide a lot of insights beyond what prior works already cover (in particular other surveys like Brunke et al. 2022), and there isn't a lot of interesting conceptual comparison beyond the three broad categories.
- The paper does not provide any comparisons of how the theoretical guarantees in the prior works differ from each other, and how they can be viewed under a unified lens. Which papers have better guarantees under certain constraints of the environment needs to come out more clearly. Right now, the comparisons are mostly at the level of algorithm design.
- The motivation of the paper is about practical considerations of why provably safe RL is necessary for real-world deployments, but the experimental comparisons are very limiting.
I am not sure why a 1D pendulum environment and a 2D quadrotor environment provide insights for safe RL algorithms that would have the possibility of translating to real-world systems (which are typically high dimensional both in terms of observation space and action space). For examples of such environments (in simulation), refer to Fig. 5 of the prior survey paper Brunke et al. 2022.

Broadly, given the above limitations, and with existing prior survey+comparison papers like Brunke et al. 2022, this paper does not provide any interesting insights that are likely to be helpful in developing safe RL algorithms of the future.

Requested Changes:
Please refer to the detailed list of weaknesses above. In summary:
1) Is it possible to provide any comparisons of how the theoretical guarantees in the prior works differ from each other, and how they can be viewed under a unified lens?
2) Is it possible to provide the experimental comparisons in more realistic (simulated / real-world) environments, for example the ones described in Brunke et al. 2022?

Broader Impact Concerns: None.

==================================================

Review 2:

Summary:

### High Level Overview

Reinforcement learning has been used in different areas and applications such as robotics, autonomous systems, and games. However, in safety-critical applications it is important to design and develop RL algorithms with safety guarantees. For example, consider using an RL algorithm for autonomous driving. Whether the algorithm is trained using simulations or with historical data, the trained policy must abide by road constraints and ensure safety of the vehicle and its surroundings. Vanilla RL algorithms are typically set up to explore potentially unsafe regions, focused solely on reward maximization. In contrast, safe RL algorithms limit the exploration and execution to only consider "safe" parts of the state-action space.
At a high level, the authors here provide a survey of "provably safe" RL algorithms, where provably safe refers to the fact that safety constraints are satisfied both during training and also during test time. They introduce a categorization of existing provably safe RL methods into three types: action replacement, action projection, and action masking. Lastly, they provide experimental evaluation to confirm that existing algorithms from the safe RL field indeed satisfy the safety constraints, and provide insights into which algorithm framework should be used in a given application area.

### Main Model

RL is built upon the framework of an MDP, defined as a tuple $(S,A,T,r,\gamma)$ where $S$ is the set of states, $A$ the set of actions, $T$ the transition distribution, $r$ the rewards, and $\gamma$ the discount factor. For provably safe RL it is required that safety of state-action pairs is verifiable, hence the authors assume there is a safety function $\phi(s,a)$ which outputs $1$ if the state-action pair is verified to be safe, and zero otherwise. It is clear that executing $\phi(s,a)$, or even the knowledge of such a function, requires some prior knowledge. However, in settings without some knowledge of the dynamics, provably safe RL is not feasible. Lastly, the authors make the key assumption that there is at least one provably safe initial state, and that for all safe states there exists at least one safe action.

### Summary

Action Replacement: The first approach the authors discuss to ensure safety is to replace any unsafe action outputted by the agent with a safe one before it gets executed. Indeed, if the policy $\pi$ proposes an action $a$, then if $\phi(s,a) = 1$ then $a$ is taken. Otherwise, the authors assume there is a replacement policy $\psi(s)$ which always guarantees to output a provably safe action, and so $\psi(s)$ is instead taken. Note that $\psi(s)$ can either be sampled randomly from the safe set, or obtained using a failsafe controller.
Action Projection: The second approach in the literature is action projection. Here, if the action proposed by the policy $\pi$ is unsafe, then the actual action taken solves the following optimization problem: minimize $d(a, \tilde{a})$ subject to $\phi(s,\tilde{a}) = 1$, i.e., project the action $a$ onto the set of feasible actions and find the closest point. This can be formulated using either control barrier functions or additional domain knowledge.

Action Masking: The last approach limits the action space $A$ that is fed into the RL algorithm. Indeed, assume you are given a function $\eta(s)$ that outputs a set of provably safe actions. Then the policy $\pi$ which is learned is informed by both $\eta(s)$ (dictating the actions available in that state) and the current state $s$. While this is straightforward in discrete action spaces, the authors additionally outline its implementation for continuous action spaces.

### Experimental Results
Lastly, the authors complement their literature review with experimental results. They compare "unsafe" baseline algorithms to their "safe" counterparts on two tasks (a pendulum and a quadrotor task). The results highlight that the safe algorithms indeed satisfy the safety constraints both during training and during testing. However, they additionally notice that action replacement is the most stable and high-performing method.

Strengths and Weaknesses:
### Strengths
- The authors provide a thorough literature review and classification of the existing work on safe RL algorithms. This helps summarize the entire field, develop insights for implementing the algorithms in new domains, and generate ideas for modifying existing methods to fit a given application.
- Based on the experimental results, on page 14 the authors provide a succinct summary of their experience implementing provably safe RL algorithms in the domains considered in the paper.
This helps provide actionable insight for implementing the algorithms in practice.

### Weaknesses
- The experimental results seem limited, and it is hard to derive statistical significance for the comparison between the different methods.
- The discussion on implementing action masking in continuous state-action spaces is under-developed and could be expanded to more general convex action spaces.

Requested Changes:
### Requested Changes
- The authors should provide a discussion on the distinction between implementing these algorithms in an offline versus an online setting. This is briefly discussed on page 5 but not elsewhere. There are important considerations between the feedback, adjusting for importance-sampled weights for the behaviour policy, etc., which should be included.
- The experimental results seem to provide no statistical insights into the performance of the different algorithms (especially with the inclusion of the confidence intervals). This makes it difficult to appreciate one of the key findings, that action replacement is the most stable and high-performing method (when in reality the performance is statistically indistinguishable). Potentially this could be ameliorated by running more experiments in different areas. However, as written, it is difficult to derive any actionable insights from the experiments in Section 4.3 other than that safe RL methods are indeed safe. The first paragraph on page 14 is helpful for this, and could be used as a guide for the rest of the section.
- Action masking for continuous state-action pairs seems under-developed. The authors only consider a straightforward implementation of it, without any extensions to arbitrary convex action spaces, or tools for potentially computing a "safe" bounding box within the safe set $A_\phi$ using tools from optimization for generating non-axis-parallel transformations.

### Questions
- What is the reason for the performance gap in the figure on page 13 between "safe" and baseline algorithms?
Does this imply that the optimal policy does not necessarily follow a safe trajectory? Issues in convergence for the provably safe RL algorithms?
- Do you have any thoughts on extensions of action masking to other continuous action spaces?

### Minor Comments
- Language in the first paragraph is a bit over-the-top and could be toned down
- Third paragraph on page one is a nice succinct summary of the main contributions in the paper
- Under Section 1.1, should mention that you consider "provable" as only "hard" safety constraints and highlight that it means safe during both learning and execution
- Loved the diagram in Figure 1 - helped highlight the different approaches and taxonomy of safe RL algorithms proposed
- Page 3 under "safety of system" can highlight how to incorporate aggregate or cost constraints by augmenting the state space (e.g., considering budget)
- Safe action set A_s(s) seems poorly defined and is used in a couple of places throughout the paper. Recognize that you are trying to highlight that the safety function $\phi$ doesn't need to necessarily be perfect and can allow for some type 2 errors (i.e., the state is safe but it is predicted by $\phi$ to be unsafe) but no type 1 errors. Think that this could be clarified and explained a bit more, since the notation is used later in the discussion of the taxonomy
- First paragraph under Section 2.1 discussing the evaluation of $\phi$ or over-approximating the set of safe states could be moved to the previous section
- Top of page 7, equation (9) should be defined by $\eta(s)$
- Enjoyed the discussion on page 7 starting with "Since the action spaces for RL …"
- Results in Table 2 seemed very interesting, highlighting real-world high-fidelity simulations.
Would have enjoyed some more discussion of the practical insights from these papers in Section 3
- First paragraph on page 14 was great - provided nice actionable advice

Broader Impact Concerns: The work provides a general-purpose summary of proposed RL algorithms for safety-critical applications. The authors provide no new algorithms or implementations, mostly summarizing existing algorithms and ideas in the field. Lastly, the authors mention that researchers utilizing safe RL algorithms should adhere to ethical standards within their particular context.

==================================================
Review 3:
Summary:
The paper proposes a taxonomy of action selection methods for provably safe RL, comprehensively categorizing existing methods according to this taxonomy. In particular, the types of methods are action replacement, action projection, and action masking. The paper also discusses the different design choices for learning that must be made in light of each action selection method. The experimental results compare certain action selection methods. The paper also discusses practical insights that could prove useful for implementing these methods.

Strengths and Weaknesses:
Strengths
* Overall goal of identifying different action selection methods is good
* Clear presentation of taxonomy; seems potentially useful as well.
E.g., pointing out the relative lack of work on action masking in some applications and for continuous actions
* Discussion of learning tuples illuminates different design choices and potential consequences
* Methodology for lit review seems good

Weaknesses
* Experimental section is quite weak
  * Overall goal is either too strong ("The theory and experiments confirm that provably safe RL methods are always safe, while the baselines still violate the safety results even after the reward converged.") or unclear
  * Small sample size so can't give strong recommendations
    * Not a lot of algorithms evaluated
    * Not a lot of environments
  * Hard to make claims about the categories because only a few algorithms are evaluated; claims are only particular to the algorithms
  * Not many seeds used
  * Hard to interpret results of fig 3
    * "A notable result of our evaluation is that using the adapted action…" -> unclear to me from plots
    * Error bars are large
    * Too much going on, potentially too many colours
* Relatively weak discussion of safety in the related work
  * Safety in the context of this work is with respect to the objective function, but it is often difficult in practice to get that objective function
  * Should talk about reward learning (https://arxiv.org/abs/1706.03741), reward hacking (https://openreview.net/forum?id=yb3HOXO3lX2), goal misgeneralization (https://arxiv.org/abs/2105.14111)
* Missing a discussion of the limitations of the safe RL framework
  * For example, can the constraint function take into account safe and unsafe trajectories, since the constraint is only for a given state and action?
  * Maybe whether a state is desirable or not depends upon the states that have previously been visited; is this accounted for in the MDP formulation?
* Scattered discussion of the benefits of this taxonomy
* Not clear that there are enough clear and strong future research directions identified. E.g., should there be more action masking work, and why?
Requested Changes:
The following are the most significant requested changes.
* Strengthen the experimental section in line with my comments above. Either a more thorough experimental evaluation of action selection methods overall or a more focused evaluation of the properties of a few select action selection methods could work.
* Need a more organized discussion of the benefits of this taxonomy for future work

At the moment, I am leaning towards rejection of the paper unless the above are addressed. Some other issues below are lower priority, but would greatly improve the paper in my opinion.
* Improve discussion of related work on safe RL in line with my comments above
* Include a discussion of the limitations of the present safe RL framework
* Learning tuples should maybe be its own separate section since it's relevant for all of the action selection methods
* Hard to compare rewards of baselines w/ provably safe methods in fig 2; maybe group the lines differently or make a separate plot

EDIT: All of my concerns have now been addressed by the authors and I have changed my score

Broader Impact Concerns: N/A.

==================================================
Metareview:
Recommendation: Accept with minor revision
Comment: I am recommending that the paper be accepted with minor revisions. The authors addressed several initial shortcomings pointed out by reviewers, and improved the presentation and organization of the paper. For completeness, I suggest that the authors cite and contextualize a few more papers from the RL + logic and formal verification communities, since provable safety is a core requirement in this survey. Here are a few examples:

LTL and Beyond: Formal Languages for Reward Function Specification in Reinforcement Learning. Camacho et al., IJCAI '19
LTL Realizability via Safety and Reachability Games. Camacho et al., IJCAI '18
A Symbolic Approach to Safety LTL Synthesis. Zhu et al., HVC '17
Policy Synthesis and Reinforcement Learning for Discounted LTL.
Alur et al., CAV '23
Reinforcement Learning With Temporal Logic Rewards. Li et al., IROS '17
Q-Learning for Robust Satisfaction of Signal Temporal Logic Specifications. Aksaray et al., CDC '16
On the (In)Tractability of Reinforcement Learning for LTL Objectives. Yang et al., IJCAI '22

There are multiple other papers that one could cite here, but I'm hoping the authors will manage to present this literature in a more complete way. Other than that, the paper is ready to be accepted. Thank you to the authors for addressing all the issues that the reviewers brought forth.
==================================================
# One-Hot Encoding Strikes Back: Fully Orthogonal Coordinate-Aligned Class Representations

Anonymous authors

Paper under double-blind review

## Abstract

Representation learning via embeddings has become a central component in many machine learning tasks. This featurization process has gradually become less interpretable, moving from each coordinate having a specific meaning (e.g., one-hot encodings) to learned distributed representations where meaning is entangled across all coordinates. In this paper, we provide a new mechanism that takes state-of-the-art embedded representations and carefully augments them to allocate some of the coordinates for specific meaning. We focus on multi-class image processing applications, where our method Iterative Class Rectification (ICR) makes the representation of each class completely orthogonal and then changes the basis so these representations lie on coordinate axes. This allows the representations to regain their long-lost interpretability, and we demonstrate that classification accuracy is about the same or in some cases slightly improved.

## 1 Introduction

Embedded vector representations of structured data objects are nowadays a common intermediate goal for much of machine learning. The goal of these representations is typically to transform data into a form that is easy to work with for downstream applications, most centrally classification tasks. If the representations are successful, then for direct tasks, only a simple classifier is required afterward, e.g., logistic regression. In this work, we argue that due to the fundamental nature of these representations, they should also aim for explicit interpretability. Note that this is not attempting to make the process or neural architecture parameters used in computing these representations interpretable, but that given a data point's vector structure, one should understand the components of its representation.
In particular, we argue that for labeled classes provided among training data, we should be able to (a) associate these classes with class mean vectors, (b) make these class mean vectors completely orthogonal, and (c) align each with a particular coordinate (a one-hot encoding). Given such an embedding of data points, many tasks can be done directly by simply reading the representation. A multi-class classification task can be solved by returning the class associated with the coordinate with the largest value. To understand a data point's relative association among multiple classes, one can compare their coordinate values; note that there are no hidden associations due to full orthogonality. If one fears there is implicit bias in a task, and that bias is associated with a captured class (e.g., gender bias captured by a "woman" or "man" class), one can remove that class via projection as in Bolukbasi et al. (2016); Dev & Phillips (2019) - by simply not using those coordinates in downstream analysis. Other tasks without association with the bias should be unaffected, while those contaminated with bias will have that component removed. A couple of recent papers have attempted to use neural networks to learn embedded representations that have orthogonal class means - their goal was increased generalization. The orthogonal projection loss (OPL) (Ranasinghe et al., 2021) and CIDER (Ming et al., 2023) both add a regularization term which favors compactness among points within a class and near orthogonality among class means.

![1_image_0.png](1_image_0.png)

Figure 1: Our approach for embedding multi-class data: f1 initializes classes to be clustered and dispersed. In f2, our ICR and DCR make classes completely orthogonal along the coordinate axes.

While these methods are useful seeds for our approach, we observe that they fail to produce class means that are nearly
The average dot-product between normalized class means on CIFAR-100 for these prior methods is about 0.2; for ours, it is below 0.01. Furthermore, our proposed framework structurally restricts the classifier to *classification-by-nearest-mean*, also known as the Rocchio algorithm. This directly reflects the training data: for each class, the mean of the training data is stored, and on evaluation of a test point, it is assigned a label of the nearest mean vector. This classification model produces a linear classifier with only two (2) classes, and its standard evaluation reduces to the common task in information retrieval. This multi-class classifier becomes especially effective when the representation of the data is learned and is common among state-of-the-art models (Yu et al., 2020) for few-shot learning approaches in image processing. Our paper achieves the following: 1. We propose two new class rectification methods (ICR and DCR) for multi-class classification under the Rocchio algorithm, which completely orthogonalize class means. 2. We prove that these methods either require one step (DCR) or iteratively converge to an orthogonal representation (ICR), conditioned that the class data is already clustered. 3. We show that this orthogonalized representation maintains state-of-the-art performance in a variety of classification tasks, given a backbone architecture. The iterative class rectification (ICR) at the heart of our approach is an extension of a recent method ISR (Aboagye et al., 2023) designed for bias reduction in natural language. That approach, ISR, requires subspaces defined by two opposing classes (e.g., male-female, pleasant-unpleasant), which is restrictive. That paper only found a handful of such classes with sufficient training data, demonstrated the approach converged with two subspaces (2 pairs of classes) and did not always quickly converge to orthogonal on three subspaces (3 pairs of classes). 
A challenge addressed in that paper was determining a proper point of rotation. By using single-class means, as we propose, this challenge goes away, and we show our approach effortlessly scales to 100 classes. We also introduce a second class rectification method (DCR), which achieves this result without iteration but has less continuity in how it augments the representation space. After class means are fully orthogonal, we align them to coordinate axes. This basis transformation, by an orthogonal matrix, does not change any distances or dot-products between data representations.

Example Uses. Consider the CIFAR-100 test image with the label orange; see Figure 2. The largest dot-product among the normalized class mean vectors for our technique (OPL+)ICR is with orange (0.995), the correct class, and then there is a big drop to cockroach at 0.0087 and other smaller values. In contrast, the normalized class mean vectors for other approaches still identify orange as the correct class but have a much larger association with other classes. For OPL, it is orange at 0.9975, but also apple, pear, and sweet_pepper between 0.82 and 0.72.
Since the image is so associated with the class mean (dot product of virtually 1), we ascertain that the issue is that the class means are not sufficiently orthogonal, so that the image earns spurious correlation with the other classes. However, with ICR, this does not happen since the class means are forced to be orthogonal.

![2_image_0.png](2_image_0.png)

| top dot products for OPL     | 0.9975 orange | 0.8204 apple     | 0.7880 pear       | 0.7215 sweet_pepper | 0.4562 poppy  |
|------------------------------|---------------|------------------|-------------------|---------------------|---------------|
| top dot products for OPL+ICR | 0.9950 orange | 0.0087 cockroach | 0.0061 maple_tree | 0.0059 girl         | 0.0051 orchid |

Figure 2: Dot products with class mean vectors for **orange** image with OPL and OPL+ICR.

![2_image_1.png](2_image_1.png)

| top dot products for OPL     | 0.8832 hamster | 0.7938 rabbit | 0.7681 mouse | 0.7168 squirrel | 0.6873 possum   | 0.6266 fox  |
|------------------------------|----------------|---------------|--------------|-----------------|-----------------|-------------|
| top dot products for OPL+ICR | 0.7621 hamster | 0.4148 apple  | 0.2396 pear  | 0.2030 squirrel | 0.1880 kangaroo | 0.1331 baby |

Figure 3: Dot products with class mean vectors for **hamster+apple** image with OPL and OPL+ICR.

Next, in Figure 3 we consider an image that has two classes present: hamster and apple. For OPL+ICR, this image's representation vector has the largest dot-products with the normalized class means of 0.76 for hamster and 0.41 for apple. The next largest drops to 0.24 for pear and 0.20 for squirrel. In contrast, for OPL, the largest dot product is 0.88 for hamster, but the next largest are for rabbit, mouse, squirrel, possum, and fox, all between 0.63 and 0.79. Because the hamster class has a correlation with the other small furry mammals under OPL, they obscure the association with the hamster and hide the association with the apple, which has a score of 0.58.
This is not the case with OPL+ICR, so the association with pear and squirrel can be interpreted to geometrically represent uncertainty about those class labels. Then, we can consider removing the "hamster" class via a projection-based approach (e.g., Dev & Phillips (2019)). Under OPL+ICR, the largest dot-product is now apple, still at 0.41, and the next largest are unchanged with pear (0.24) and squirrel (0.20). For OPL after projection, we also have the largest dot-product with apple at 0.41, but somewhat obscured by other correlated classes, including pear, orange, and sweet_pepper, all between 0.33 and 0.38. Notably, the other small furry mammals are also removed from strong association because of their correlation with the hamster class.

## 2 Algorithmic Framework

Our method considers a data set $Z \subset \mathcal{Z}$, where each $z_i \in Z$ is associated with a label $y_i \in [k]$, where $k$ is the number of distinct classes. We use image data $Z$ as an exemplar. It then operates in two phases towards creating an embedding in $\mathbb{R}^d$, with $d > k$; see Figure 1. The first phase learns an embedding $f_1 : \mathcal{Z} \to \mathbb{R}^d$ with the goal of classes being (nearly) linearly separable in $\mathbb{R}^d$. The second phase, the innovation of this paper, is another map $f_2 : \mathbb{R}^d \to \mathbb{R}^d$, which aims to retain (and perhaps improve) linear separability but also achieve a form of orthogonality among classes. While this second phase can be interpreted as a form of learning (so it only sees training and not testing data), it is deterministic and does not follow the traditional optimization of parameters over a loss function. For input data $(Z, y)$, denote $X' = \{x'_i = f_1(z_i) \in \mathbb{R}^d \mid z_i \in Z\}$ as the embedding after phase 1. Then denote $X = \{x_i = f_2(x'_i) \in \mathbb{R}^d \mid x'_i \in X'\}$ as the embedding after phase 2. Let $Z_j$, $X'_j$, and $X_j$ be the data points in class $j \in [k]$ for the initial data, first, and final embedding, respectively.

Rocchio algorithm. We leverage the Rocchio algorithm to build classifiers.
For an embedded data set $(X, y)$, it first creates class means $v_j = \frac{1}{|X_j|} \sum_{x_i \in X_j} x_i$ for each class $j \in [k]$. Then, on a data point $x \in \mathbb{R}^d$, it predicts class $\hat{j} = \arg\min_{j \in [k]} d(x, v_j)$. If we normalize all class means (so $v_j \leftarrow v_j / \|v_j\|$), then using the Euclidean distance $d(x, v_j) = \|x - v_j\|$ gives the same ordering as the cosine distance. Equivalently, we can use $\hat{j} = \arg\max_{j \in [k]} \langle x, v_j \rangle$; we do this hereafter unless stated otherwise.

Phase 1 embedding. For the first phase embedding $f_1$, we leverage existing recent algorithms that aim for an embedding with three goals: (a) *accuracy*: each class can be (nearly) linearly separated from all other classes; (b) *compactness*: each class $X'_j$ has points close to each other, i.e., small variance; (c) *dispersion*: each pair of classes $j$ and $j'$ is separated, and in fact nearly orthogonal. In particular, a couple of recent papers proposed loss functions for $f_1$ of the form $L_{f_1} = L_{CE} + \lambda(L_{comp} + L_{disp})$. Here $L_{CE}$ is the standard cross-entropy loss, which optimizes (a), $\lambda \in [0, 1]$, and $L_{comp}$ and $L_{disp}$ optimize (b) and (c). With $|Z| = n$, $k$ classes, $n_1 = \sum_{j \in [k]} |Z_j|(|Z_j| - 1)$, and $n_2 = \sum_{j \in [k]} |Z_j|(n - |Z_j|)$, these are actualized as:

$$L_{comp} = 1 - \frac{1}{n_1} \sum_{j \in [k]} \sum_{z_i, z_{i'} \in Z_j} \langle f_1(z_i), f_1(z_{i'}) \rangle, \qquad L_{disp} = \frac{1}{n_2} \sum_{z_i \in Z_j;\, z_{i'} \in Z_{j' \neq j}} \langle f_1(z_i), f_1(z_{i'}) \rangle \quad (1)$$

$$L_{comp} = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{\exp(\langle f_1(z_i), v_{j_i} \rangle)}{\sum_{j=1}^{k} \exp(\langle f_1(z_i), v_j \rangle)}, \qquad L_{disp} = \frac{1}{k} \sum_{j \in [k]} \log \frac{1}{k-1} \sum_{j' \neq j} \exp(\langle v_j, v_{j'} \rangle) \quad (2)$$

The loss for OPL (Ranasinghe et al., 2021) is in eq. (1) and for CIDER (Ming et al., 2023) in eq. (2). We observe (see Table 1) that these achieve good clustering among classes, but the classes are not fully orthogonal. On training data for CIFAR-100, they achieve about 98% accuracy or better. This holds under a trained linear classifier (logistic regression) or the Rocchio algorithm. Pre-processing in phase 1 will prove an important first step for the success of our phase 2.

Phase 2 embedding: Full orthogonalization.
As we observe that the result of *learning* an orthogonal embedding through regularization is not completely effective, the second phase provides a deterministic approach that *enforces* orthogonality of the class means. A first, but unworkable, thought is to *just run* Gram-Schmidt on the class mean vectors $v_1, \ldots, v_k$. However, this does not produce a generic function $f_2$ that also applies to the training data; we want to transform $X$ by some $f_2$, so that if we recalculate the class means under $f_2(X)$, they are then orthogonal. But Gram-Schmidt only explains what to do for the vectors $v_1, \ldots, v_k$, not a general function $f_2$ that applies to other embedded data $X$. Toward this end, we propose two approaches: ICR and DCR.

Iterative Class Rectification (ICR): For ICR, we adapt a recent approach called Iterative Subspace Rectification (ISR) (Aboagye et al., 2023), designed to orthogonalize language subspaces to reduce bias. This approach handles two concepts, each defined by a pair of classes (e.g., male-female, pleasant-unpleasant), as vectors $v_1, v_2$, and centers the data around these. Then it applies a "graded rotation" operation (Dev et al., 2021) (see Algorithm 5 in the Appendix) to the components of each $x \in X$ that lie within the span of the two linear concept directions: $\mathrm{span}(v_1, v_2)$. Because it operates only in this span, it only alters the embedding of each $x \in X$ in this 2-dimensional span, denoted $\pi_{\mathrm{span}(v_1, v_2)}(x)$ and shortened to $\pi(x)$ in the algorithms. The graded rotation is designed so that the operation, when defined on $v_1$ and $v_2$, is easy to understand: it moves $v_2 \mapsto v_2'$ so it is orthogonal to $v_1$, and it does not change $v_1$. For every other $x' = \pi_{\mathrm{span}(v_1, v_2)}(x)$, the graded rotation defines a rotation that depends on the angle to $v_1$. Those closer in angle to $v_1$ rotate very little, and those closer in angle to $v_2$ rotate by an angle almost as large as that of $v_2 \mapsto v_2'$. The magnitude of this angle varies continuously based on the angle from $v_1$.
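To make the idea concrete, here is a simplified 2D sketch of such a graded rotation. This is our illustrative reading, not the exact operation of Dev et al. (2021) (which is reproduced in Algorithm 5 in the Appendix): we grade the rotation angle linearly in the angle from v1, and we assume v1 and v2 are not parallel.

```python
import numpy as np

def graded_rotation_2d(v1, v2, x):
    """Simplified sketch of a graded rotation in the 2D span of v1, v2.

    v2 is rotated by theta = pi/2 - angle(v1, v2) so it becomes orthogonal
    to v1, and a point x at angle a from v1 is rotated by theta * a / angle(v1, v2):
    points near v1 barely move, points near v2 move by almost the full theta.
    """
    def angle(u, w):
        u = u / np.linalg.norm(u)
        w = w / np.linalg.norm(w)
        return np.arccos(np.clip(u @ w, -1.0, 1.0))

    a2 = angle(v1, v2)
    theta = np.pi / 2 - a2                 # rotation that makes v2 orthogonal to v1
    t = theta * angle(v1, x) / a2          # graded by the angle of x from v1
    sign = np.sign(v1[0] * v2[1] - v1[1] * v2[0])  # rotate in the v1 -> v2 direction
    c, s = np.cos(sign * t), np.sin(sign * t)
    return np.array([[c, -s], [s, c]]) @ x
```

In Algorithm 2 below, this operation is applied only to the 2D projection of each point, and the components outside span(v1, v2) are left untouched.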
The full technical details are explained in Dev et al. (2021) and are reproduced in full in Algorithm 5 in the Appendix. The ISR paper (Aboagye et al., 2023) demonstrates empirically that by *repeating* this process we get $v_2 \mapsto v_2^\star$, with $v_2^\star$ orthogonal to $v_1$, even when recomputing $v_1$ and $v_2^\star$ from the updated embedded data points which define the associated classes.

Algorithm 1 BinaryICR(X, X1, X2, iters: T)
1: for i = 0, 1, . . . , T − 1 do
2: v1, v2 ← normalized means(X1, X2)
3: BinaryCR(X, v1, v2)

Algorithm 2 BinaryCR(X, u, v)
1: Set v′ = v − ⟨u, v⟩u
2: Define projection π(·) = (⟨·, u⟩, ⟨·, v′⟩)
3: for x ∈ X do
4: x˜ ← GradedRotation(π(u), π(v), π(x))
5: x ← x + ⟨π(u), x˜ − π(x)⟩u + ⟨π(v′), x˜ − π(x)⟩v′

We adapt this process in two ways in this paper, in Algorithms 1 and 2. First, we only use individual classes and their class means (line 2 of Alg 1) in place of concepts that span across two opposing ideas (and hence two sets of embedded points for each concept). Second, because we initialize with concepts clustered by cosine similarity around their class mean vectors, we can rotate around the origin (line 4 of Alg 2) and do not require a centering step as in ISR. Algorithm 2 does the core operation: it projects to the span of the two directions u, v, applies GradedRotation to each point x ∈ X, and then adjusts only the coordinates in span(u, v) (line 5). Algorithm 1 iterates this procedure for T steps as the recalculated class means become orthogonal.

Algorithm 3 MultiICR(X, X1, . . . , Xk, iters: T)
1: for i = 0, 1, . . . , T − 1 do
2: Let vi be the normalized mean vector of class Xi for i = 1, 2, . . . , k.
3: Set r, s = arg min_{1≤i,j≤k} |⟨vi, vj⟩|; WLOG suppose r = 1, s = 2
4: Let S1 and S2 be the span of {v1} and {v1, v2}, respectively
5: Run BinaryCR(X, v1, v2)
6: Recalculate the normalized class means vi for all i
7: for i = 3, . . . , k do
8: Choose t = arg min_{j≥i} ⟨v1, vj⟩² + ⟨v2, vj⟩² + · · · + ⟨v_{i−1}, vj⟩²
9: WLOG assume t = i
10: Let v̄i be the projection of vi onto S_{i−1}
11: Set ui = vi − Σ_{j=1}^{i−1} ⟨vj, vi⟩vj and v′i = ui/∥ui∥
12: Run BinaryCR(X, v′i, v̄i)
13: Set Si to be the span of {S_{i−1}, vi}
14: Recalculate the normalized class means vj for all j

To apply this to all classes, we now apply a Gram-Schmidt-style procedure; see the details in Algorithm 3. We first identify the pair of class mean vectors that are most orthogonal (line 3) and apply one step of BinaryCR. Then, at each round, we find and maintain the subspace S_{i−1} of the class means we have attended to so far, and find the class mean vi most orthogonal to that subspace (line 8). We project vi onto S_{i−1} to get v̄i, and then run one step of BinaryCR to orthogonalize vi from v̄i (and hence from all of S_{i−1}). Once we have addressed all classes, we iterate this entire procedure a few times (typically T = 1 or 2 iterations, and not more than 5). Finally, at the conclusion of MultiICR, the class means on the embeddings v1, . . . , vk are all orthogonal (up to several digits of precision). To complete the definition of the function f2, we add a final transformation step that aligns v1, . . . , vk with the first k coordinate axes. This step is defined by a single orthogonal matrix, so it does not change Euclidean distances or dot products.
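The final alignment step can be sketched as a single change of basis (an illustrative implementation, not the paper's code, assuming the fully orthogonalized, normalized class means are available as the rows of V):

```python
import numpy as np

def align_to_axes(X, V):
    """Rotate the space so the k orthonormal class means (rows of V, k x d)
    land on the first k coordinate axes e_1, ..., e_k.

    The map is one orthogonal matrix, so Euclidean distances and dot
    products between embedded points are unchanged.
    """
    k, d = V.shape
    # complete the class means to a full orthonormal basis of R^d
    Q, _ = np.linalg.qr(V.T, mode="complete")   # Q is d x d orthogonal
    # QR may flip signs; fix so that column j of Q equals the j-th class mean
    signs = np.sign(np.sum(Q[:, :k] * V.T, axis=0))
    Q[:, :k] *= signs
    return X @ Q                                # each row x becomes Q^T x
```

After this map, coordinate j of an embedded point is exactly its dot product with class mean v_j, so class associations can be read off the representation directly.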
Algorithm 4 BinaryDCR(X, X1, X2)
1: v1, v2 ← normalized means(X1, X2)
2: θ′ ← angle between v1 and v2; θ = π/2 − θ′
3: if θ′ ≤ π/2 then set angle ϕ = θ′/2 else set angle ϕ = π/4
4: for x ∈ {x ∈ X | angle(v2, x) ≤ ϕ} do
5: x ← R_θ x

Discontinuous Class Rectification (DCR): This approach is similar but does not require iteration, at the expense of a discontinuous operation. It replaces the graded rotation (Dev et al., 2021) with a step that identifies a conical region around v2 and rotates all points in this region by a fixed angle θ so that afterward ⟨v1, v2⟩ = 0. If the angle between v1 and v2 is acute, then the conical region is defined in the span of v1, v2 by the angle ϕ from v2 to the bisector direction between v1 and v2. That is, points closer to v2 are moved along with v2, and the rest are left alone. If v1 and v2 have an obtuse angle, then we set the conical angle around v2 to π/4, so we only move points which will be closer to v2 *after* the transformation when ⟨v1, v2⟩ = 0. The multiclass version of DCR follows the Gram-Schmidt recipe of ICR but with no iteration.

Freezing the learned embedding X′. It is important to note that before ICR or DCR is applied to determine X, we need to learn and *freeze* the initial embedding X′ ← f1(Z). Then f2 operates on X′ to create X ← f2(X′) without adjusting f1. There are slight differences in how OPL (Ranasinghe et al., 2021) and CIDER (Ming et al., 2023) choose an embedding layer: for OPL, it is the penultimate layer, whereas for CIDER, it is the "head," the last layer. We follow the recommendations in those works. In evaluation mode, we also need a classifier. In Section 3.2, we discuss two ways to train classifiers - one is the Rocchio classifier (which we recommend for its structural properties and since it needs no further training). However, a common approach is to build a logistic regression model on the last layer of f2(f1(Z)); we also do this on the training data.
Finally, we can consider the evaluation/test data z ∈ Ztest, which are evaluated with the chosen classifier operating on f2(f1(z)).

## 2.1 Properties of ICR and DCR

We highlight key properties of the ICR and DCR procedures. Technical proofs are deferred to Appendix B. First, we show that binary ICR, even if iterated, only affects the coordinates of data points in the span of the original mean vectors. This implies that the mean vectors of the classes stay in their original span. Moreover, it implies that as MultiICR gradually includes more classes, it maintains a modified span, and all components of coordinates outside those spans are unchanged. Hence, if *d > k*, then the null space of the classes is unchanged under the MultiICR procedure. These results follow trivially for BinaryDCR and MultiDCR, since we just apply the Gram-Schmidt procedure to class cones (the cones around the classes that contain the whole associated class). Second, we show that this process converges to having the mean vectors completely orthogonal to each other. This argument requires that the initial classes X′j are clustered; this explains and justifies the use of optimizing f1 under the OPL or CIDER loss, or something similar, before applying BinaryICR. The assumption we use (Assumption 1; see also Appendix B.1) is probably more restrictive than necessary (it requires clusters to be completely separable), but it makes already messy proofs manageable.

Assumption 1 Let vi be the mean of Xi, and let Xi be included in the cone of radius ϕi around vi for i = 1, 2, . . . , k. Assume these cones are disjoint (except at the origin). Figure 4 illustrates the cases k = 2 and k = 3.

Theorem 1 (Convergence of BinaryICR) Let Assumption 1 hold with k = 2, and suppose the angle between v1 and v2 is less than π/2. Then the BinaryICR algorithm converges: as we iterate, in the limit, the angle between the class means approaches π/2.

Proof Sketch.
We consider the positive angle γ, the gap between the upper boundary of the cone of radius ϕ1 and the lower boundary of the cone of radius ϕ2 (see Figure 4). We then prove that after each iteration of BinaryICR, the angle between the new means of the two classes does not exceed π/2 (see Lemma 4). This helps to show that after each iteration of BinaryICR, the gap γ increases. Therefore, we obtain an increasing sequence of positive real numbers bounded by π/2, which is convergent. Lastly, we argue that the limit is π/2, showing that the means of the two classes are orthogonal after convergence. □

The comparable arguments for DCR are more straightforward. Under Assumption 1, all points of X′2 are in a cone, and all of them, and only them, are updated in the operation. Since those points are all rotated by exactly the same angle, which moves v2 orthogonal to v1, recalculating v2 after the move leaves it

![6_image_0.png](6_image_0.png)

Figure 4: Pictorial view of Assumption 1. Left: k = 2. Right: k = 3.

orthogonal to v1. Hence, this achieves the orthogonality goal after one step and only affects data in the span of v1, v2.

Theorem 2 (Convergence of BinaryDCR) Assume Assumption 1 *holds with* k = 2. In addition, if the angle between v1 and v2 *is bigger than* π/2, then we assume ϕ1, ϕ2 *are less than* π/4. Then, after running the BinaryDCR algorithm, the class means will be orthogonal to each other.

Proof Sketch. We essentially run the Gram-Schmidt algorithm on the two class cones instead of the class means. Under these assumptions, the whole class X2 is rotated by a fixed angle, the angle between v2 and the y-axis. Thus the class mean v2 will be located on the y-axis. As the first class mean v1 lies on the x-axis, the two class means are now orthogonal to each other.
□

However, data may not be completely separable; we experimentally observe that OPL and CIDER achieve 99–100% training accuracy, so the classes are nearly separable in practice. A robust variant of DCR can be used in this case; we observed that the difference in output between the original and robust versions is in the third digit of precision, so we only report results for the simpler, non-robust variant of DCR. The MultiDCR algorithm runs the Gram-Schmidt algorithm on class cones so that the normalized class means constitute an orthonormal basis for a k-dimensional subspace of R^d.

Theorem 3 (Convergence of MultiDCR) Let Assumption 1 *hold. In addition, suppose that the cones are* sufficiently well-separated (see Assumption 3 in Appendix *B.2). Then, after running the MultiDCR algorithm,* all class means will be orthogonal to each other.

## 3 Experiments

We evaluate our methods, ICR and DCR, in three main ways. First, we show that these approaches achieve orthogonality of class means with high precision, while previous approaches do not, and that they maintain good class compactness. Second, we show these approaches maintain or improve upon the near state-of-the-art accuracy in various learning frameworks. Note that ICR and DCR are designed to *maintain* class cohesiveness, not improve upon it, so we do not expect improvement on training data, and any improvement on the evaluation sets can be seen as a fortuitous effect of regularizing to a meaningful structure. Third, we examine a few example images and how, with OPL or CIDER, they have unwanted associations with other classes; after applying ICR or DCR, that association mostly disappears.

Datasets and Training Details. We use standard image classification datasets, tasks, and basic architectures. In our main experiments, we use ResNet-9 as the backbone architecture for the CIFAR-100 (Krizhevsky, 2009) classification task and train for 120 epochs. CIFAR-100 consists of 60,000 natural images distributed across 100 classes (600 images per class).
All training, including ICR and DCR, is performed on the 50,000 training images. All evaluation is reported on the 10,000 test images.

## 3.1 Orthogonality and Compactness

The dimension of the penultimate layer that OPL (Ranasinghe et al., 2021) optimized toward orthogonality was d = 64. It is mathematically impossible to fit k classes orthogonally when k > d; note that for CIFAR-100, k = 100 > d = 64. Alternatively, CIDER (Ming et al., 2023) uses d = 512 dimensions in the final layer, where dispersion and compactness are optimized. To help identify the best choice of d, we first measure geometric properties of OPL and CIDER for d = 64, 128, 256, 512, 1024. Table 1 shows, for each: first, the average absolute dot product between class means, $\frac{1}{\binom{k}{2}} \sum_{j \neq j'} |\langle v_j, v_{j'} \rangle|$; and second, the average intra-class compactness, $\frac{1}{k} \sum_{j \in [k]} \frac{1}{|X'_j|} \sum_{x \in X'_j} \langle v_j, x \rangle$. For both, orthogonality improves (average dot products decrease) with higher dimensions; while OPL's compactness keeps increasing, CIDER's decreases. Notably, even at d = 1024, both OPL and CIDER are still far from orthogonal, with an average dot product of about 0.1.

Table 1: Average absolute dot products between class means (left five columns) and intra-class compactness (right five columns).

| dim: | 64 | 128 | 256 | 512 | 1024 | 64 | 128 | 256 | 512 | 1024 |
|-------|------|------|------|------|------|------|------|------|------|------|
| OPL | 0.2709 | 0.2412 | 0.1945 | 0.1509 | 0.1267 | 0.9742 | 0.9784 | 0.9851 | 0.9885 | 0.9900 |
| CIDER | 0.1602 | 0.1435 | 0.1247 | 0.1017 | 0.0930 | 0.9816 | 0.9809 | 0.9789 | 0.9764 | 0.9754 |

Next, in Table 2, we show the top-1 and top-5 accuracy for OPL and CIDER by dimension on the CIFAR-100 evaluation set. OPL performs better than CIDER and has the best top-1 accuracy at 1024 dimensions.
Somewhat surprisingly, all other configurations peak at smaller dimensions (128 or 256), but the decrease is mostly insignificant. We continue with the best result for top-1 accuracy and orthogonality and so set d = 1024 as the default. In Figure 5, we also plot block matrices of the absolute dot products between all pairs of class means for OPL and CIDER embeddings at 64 and 1024 dimensions. While increasing d improves orthogonality, none of the embeddings are fully orthogonal. Note that the CIDER dot products appear more uniform than OPL's, but the overall average absolute dot products do not differ much in Table 1.

![7_image_0.png](7_image_0.png)

Figure 5: Orthogonality visualization of the dot product of the average per-class feature vectors. From left to right: OPL(64), OPL(1024), CIDER(64), CIDER(1024).

Thus, OPL and CIDER cannot achieve complete orthogonality of different class features by clustering same-class features. As one of our goals is to align class indicators exactly onto coordinates for interpretability, these loss-function-based approaches are not sufficient.

Augmentation of OPL features with ICR and DCR. Next, we apply our rectification algorithms, ICR and DCR, on top of the near-orthogonal and compact embeddings output by OPL or CIDER. We use d = 1024 as the default but also show the dimensions used in the original papers as OPL(64) and CIDER(512). The orthogonality and compactness results are in Table 3. For ICR, we show the result after each of 5 iterations.
Table 2: Softmax top-1 and top-5 accuracy for each dimension d.

| Loss | | 64 dim | 128 dim | 256 dim | 512 dim | 1024 dim |
|-------|-------|-------|-------|-------|-------|-------|
| OPL | Top 1 | 73.38 | 74.29 | 74.26 | 74.87 | 75.22 |
| CIDER | Top 1 | 71.94 | 72.23 | 72.00 | 72.00 | 71.80 |
| OPL | Top 5 | 91.41 | 92.42 | 92.61 | 92.62 | 92.14 |
| CIDER | Top 5 | 89.02 | 89.35 | 89.15 | 89.20 | 88.84 |

Note that ICR improves the average dot product by about one digit of precision in each iteration, while compactness stays about the same, sometimes increasing. DCR achieves two digits of precision in the average dot product after one step, with a slight degradation in compactness.

Table 3: Orthogonality and compactness scores for OPL, CIDER, and each after applying +DCR or +ICR j, for j iterations. By default, with 1024 dimensions.

| Score | OPL (64) | OPL | OPL+DCR | OPL+ICR 1 | OPL+ICR 2 | OPL+ICR 3 | OPL+ICR 4 | OPL+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Orthogonality | 0.2709 | 0.1268 | 0.0015 | 0.0056 | 0.0006 | 8.2321e-5 | 1.1560e-5 | 1.7660e-6 |
| Compactness | 0.9742 | 0.9899 | 0.9669 | 0.9785 | 0.9779 | 0.9779 | 0.9779 | 0.9779 |
| Score | CIDER (512) | CID | CID+DCR | CID+ICR 1 | CID+ICR 2 | CID+ICR 3 | CID+ICR 4 | CID+ICR 5 |
| Orthogonality | 0.1017 | 0.0930 | 0.0057 | 0.0138 | 0.0021 | 0.0004 | 7.4106e-5 | 1.5946e-5 |
| Compactness | 0.9764 | 0.9754 | 0.9594 | 0.9586 | 0.9566 | 0.9563 | 0.9562 | 0.9562 |
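The two geometric scores reported above can be computed directly from the frozen features. Below is a minimal NumPy sketch; that the feature vectors are unit-normalized before averaging is our assumption.

```python
import numpy as np

def geometry_scores(X, labels):
    """Average absolute dot product between class means (orthogonality score,
    lower is better) and average cosine to the own class mean (compactness)."""
    classes = np.unique(labels)
    means, compact = [], []
    for j in classes:
        Xj = X[labels == j]
        Xj = Xj / np.linalg.norm(Xj, axis=1, keepdims=True)  # unit features
        v = Xj.mean(axis=0)
        v = v / np.linalg.norm(v)                            # normalized class mean
        means.append(v)
        compact.append((Xj @ v).mean())
    V = np.stack(means)
    G = np.abs(V @ V.T)
    k = len(classes)
    ortho = (G.sum() - np.trace(G)) / (k * (k - 1))          # mean over pairs j != j'
    return float(ortho), float(np.mean(compact))
```

Averaging over all ordered pairs j ≠ j′ gives the same value as averaging over the $\binom{k}{2}$ unordered pairs, since |G| is symmetric.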
## 3.2 Classification Accuracy After ICR/DCR

We next investigate the effect on classification accuracy of applying ICR and DCR. There are two standard ways to make class predictions in this setting. The first is recommended in the OPL paper: build a simple logistic regression for each class and choose the class with the highest score for a query (denoted Smax). In this paper, we prefer the less powerful Rocchio classifier, $\hat{j} = \arg\max_{j \in [k]} \langle v_j, q \rangle$ for a query q (denoted NN). Table 4 shows the top-1 and top-5 accuracy for OPL, CIDER, and after applying +DCR or +ICR for up to 5 iterations. For both the Smax (logistic) and NN (Rocchio) classifiers, the OPL initialization outperforms CIDER.

Table 4: Test data results for OPL, CIDER, and +DCR or +ICR, with 1024 dimensions.

| Metric | OPL (64) | OPL | OPL+DCR | OPL+ICR 1 | OPL+ICR 2 | OPL+ICR 3 | OPL+ICR 4 | OPL+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 73.20 | 75.28 | 74.47 | 75.21 | 75.19 | 75.19 | 75.20 | 75.20 |
| Smax Top 5 | 91.23 | 91.93 | 89.31 | 91.71 | 91.35 | 91.28 | 91.29 | 91.29 |
| NN Top 1 | 72.36 | 74.57 | 73.39 | 75.02 | 75.03 | 75.03 | 75.03 | 75.03 |
| NN Top 5 | 90.17 | 89.84 | 89.25 | 91.76 | 91.35 | 91.26 | 91.24 | 91.23 |
| Metric | CIDER (512) | CID | CID+DCR | CID+ICR 1 | CID+ICR 2 | CID+ICR 3 | CID+ICR 4 | CID+ICR 5 |
| Smax Top 1 | 72.00 | 71.80 | 71.46 | 71.59 | 71.60 | 71.58 | 71.58 | 71.79 |
| Smax Top 5 | 89.20 | 88.84 | 86.02 | 88.26 | 87.72 | 87.60 | 87.60 | 87.67 |
| NN Top 1 | 72.19 | 71.74 | 71.50 | 71.60 | 71.66 | 71.61 | 71.61 | 71.61 |
| NN Top 5 | 89.08 | 88.65 | 85.95 | 88.24 | 87.63 | 87.52 | 87.47 | 87.47 |
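The NN (Rocchio) rule is simple enough to state as code. This is a sketch; the matrix shapes are our assumptions.

```python
import numpy as np

def rocchio_predict(V, Q, top=1):
    """NN (Rocchio) classification: V is a (k, d) matrix of unit class means,
    Q an (n, d) matrix of query embeddings; returns the top-scoring classes."""
    scores = Q @ V.T                      # (n, k) dot products <v_j, q>
    order = np.argsort(-scores, axis=1)   # classes sorted by decreasing score
    return order[:, :top]
```

After ICR or DCR, the rows of V align with the k coordinate directions, so each score is simply the query's coordinate for that class.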
Unsurprisingly, the more powerful Smax (logistic) classifier (about 75.2% top-1) performs slightly better than the NN (Rocchio) approach (about 74.5–75% top-1). The overall best score is achieved by plain OPL (d = 1024) at 75.28%, improving upon the baseline OPL (d = 64) at 73.20%; applying ICR slightly decreases this to 75.21% or 75.20%. However, for the NN classifier, applying ICR actually improves the score from OPL (d = 64) at 72.36% and OPL (d = 1024) at 74.57% up to 75.03%, which is not far from the best Smax (logistic) score. Similar effects are seen with top-5 accuracy (and on CIFAR-10 in Appendix C), where OPL outperforms CIDER; in that case, using ICR has little effect and provides an improvement for the NN (Rocchio) classifier.

Table 5: OOD performance for CIDER and CIDER+DCR/ICR on CIFAR-100.

| | SVHN | | Places365 | | LSUN | | iSUN | | Texture | | Average | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | FPR↓ | AUC↑ | FPR↓ | AUC↑ | FPR↓ | AUC↑ | FPR↓ | AUC↑ | FPR↓ | AUC↑ | FPR↓ | AUC↑ |
| CE+SimCLR | 24.82 | 94.45 | 86.63 | 71.48 | 56.40 | 89.00 | 66.52 | 83.82 | 63.74 | 82.01 | 59.62 | 84.15 |
| KNN+ | 39.23 | 92.78 | 80.74 | 77.58 | 48.99 | 89.30 | 74.99 | 82.69 | 57.15 | 88.35 | 60.22 | 86.14 |
| OPL | 98.83 | 43.00 | 99.16 | 38.08 | 99.85 | 25.93 | 91.52 | 63.20 | 91.54 | 51.90 | 96.18 | 44.42 |
| CIDER | 44.16 | 89.47 | 69.44 | 80.82 | 57.59 | 86.29 | 9.27 | 98.09 | 35.74 | 91.72 | 43.24 | 89.28 |
| CIDER+DCR | 48.52 | 88.21 | 71.29 | 79.95 | 62.18 | 84.33 | 10.78 | 97.80 | 37.46 | 90.95 | 46.05 | 88.25 |
| CIDER+ICR1 | 49.28 | 87.97 | 70.28 | 79.93 | 60.42 | 84.94 | 10.96 | 97.71 | 37.84 | 91.02 | 45.75 | 88.32 |
| CIDER+ICR2 | 49.72 | 87.92 | 70.53 | 79.89 | 60.51 | 84.86 | 11.08 | 97.70 | 38.03 | 90.99 | 45.97 | 88.27 |
To verify that OPL+ICR does not deteriorate representations, we applied it to the training data (see Tables 13 and 14 in Appendix D), where all methods achieve between 99.5% and 100% accuracy, with the exception of some degradation under the Smax (logistic) classifier after using the CIDER loss.

## 3.3 Out-of-Distribution Detection

Out-of-distribution (OOD) detection is the task of identifying test samples that originate from an unknown distribution, which the data representation did not encounter during training. This task evaluates the model's dependability when encountering both known in-distribution (ID) inputs and OOD samples; the latter should not be forced into an existing classification structure and may represent anomalies requiring further attention. A wide variety of OOD detection methods have been explored, with distance-based OOD detection demonstrating considerable potential (Lee et al., 2018; Xing et al., 2019) via representation learning. A central approach extends a Rocchio-type setup and determines ID vs. OOD based on the distance to class means. Very recently, Ming et al. (2023) introduced CIDER, a Compactness and Dispersion Regularized learning framework for OOD detection, discussed earlier in equation 2. This provides a significant improvement in the state of the art.

Datasets and Training Details. In line with the approach taken by Ming et al. (2023), we adopt CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) as the in-distribution datasets (CIFAR-10 results are in Appendix C). For evaluating OOD detection performance, we use a diverse collection of natural image datasets encompassing SVHN (Netzer et al., 2011), Places365 (Zhou et al., 2018), Textures (Cimpoi et al., 2013), LSUN (Yu et al., 2015), and iSUN (Xu et al., 2015). Our experiments utilize the pre-trained ResNet-9 from the image classification task on the CIFAR-100 dataset.
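The two metrics used for OOD evaluation here, the FPR of OOD samples at 95% TPR of ID samples and the AUC, can be sketched as follows. This is a minimal NumPy sketch; the convention that a higher score means "more in-distribution" is our assumption.

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR of OOD samples at the threshold keeping 95% TPR on ID samples."""
    thresh = np.percentile(id_scores, 5)       # 95% of ID scores lie above it
    return float(np.mean(ood_scores >= thresh))

def auroc(id_scores, ood_scores):
    """AUC via the rank-sum statistic: P(ID score > OOD score), ignoring ties."""
    ranks = np.concatenate([id_scores, ood_scores]).argsort().argsort()
    n_id, n_ood = len(id_scores), len(ood_scores)
    u = ranks[:n_id].sum() - n_id * (n_id - 1) / 2
    return float(u / (n_id * n_ood))
```

A distance-based detector in the Rocchio style would feed, e.g., the maximum cosine similarity to any class mean into these metrics as the score.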
We freeze the pre-trained model up to the penultimate layer to extract CIDER ID and OOD features for our OOD detection experiments. After obtaining the extracted CIDER features, we apply ICR to refine them further, enhancing inter-class separation within the feature embedding space. Upon acquiring the ICR-rectified CIDER ID and OOD features at test time, we employ CIDER's distance-based code for OOD detection. The following metrics are reported in Table 5: 1) the false positive rate (FPR) of OOD samples at a 95% true positive rate (TPR) of ID samples, and 2) the area under the receiver operating characteristic curve (AUC). We show two representative prior methods: CE+SimCLR (Winkens et al., 2020) and KNN+ (Sun et al., 2022), the best two methods before CIDER. Observe how CIDER significantly improves the FPR from about 60% to about 43% and the AUC from about 84–86% to 89% (averaged across datasets). Applying ICR or DCR slightly degrades these improvements, to an FPR of about 45% and an AUC of about 88%, still a significant improvement over the previous baselines, but now with an interpretable structure. On CIFAR-10, CIDER+ICR slightly improves over plain CIDER; see Appendix C. This task seems delicate; for instance, using OPL (equation 1) in place of CIDER (equation 2) achieves much worse results, with an average FPR of 96% and an AUC of only 44%.

## 3.4 Qualitative Example Comparison

In Figure 6, we show a few illustrative examples from CIFAR-100 and compare their predictions under OPL, CIDER, and after applying +DCR or +ICR. For each image, we show the top-5 results under the NN (Rocchio) classifier. After ICR or DCR, these are the coordinates in the new coordinate system associated with the k = 100 classes. We observe that while all methods rank the correct label as the top prediction, the drop-off after the first prediction is steeper once ICR or DCR is applied.
For instance, because under OPL the class mean for "woman" is correlated with those of other people classes (e.g., girl, man, boy), the woman example also has a high dot product with those class means. But after ICR or DCR, the class means are orthogonal, so this forced high association is gone. The same phenomenon can be seen with man & boy and with worm & snake.

![10_image_1.png](10_image_1.png)
![10_image_0.png](10_image_0.png)
![10_image_2.png](10_image_2.png)

Figure 6: Example images and top-5 scoring NN (Rocchio) values among classes in CIFAR-100.

## 4 Conclusion & Discussion

This paper introduces a post-processing step for the training phase of a learned embedding mechanism that provides an interpretable structure. Namely, for a learned embedded representation for a multi-class classification task, our method, Iterative Class Rectification (ICR), continuously adjusts the embedding function so that each of the k identified class means is associated with a coordinate. Thus, the representation of each class is orthogonal to the others and can be measured independently. This does not preclude an object from having an association with multiple classes, but it decouples those contributions. This class orthogonality could also be useful if a class is associated with a protected attribute (e.g., gender, race, etc.). By restricting to classifiers that predict labels based on dot products along these class coordinates, we could eliminate the association learned about that trait by simply ignoring that coordinate of the representation at evaluation time. This pre-processes and simplifies a technique that has become popular in language debiasing (Bolukbasi et al., 2016; Dev & Phillips, 2019; Ravfogel et al., 2020; Wang et al., 2020), which first attempts to identify a linear subspace and then removes the component of every data point in that subspace.
## References Prince Osei Aboagye, Yan Zheng, Jack Shunn, Chin-Chia Michael Yeh, Junpeng Wang, Zhongfang Zhuang, Huiyuan Chen, Liang Wang, Wei Zhang, and Jeff M Phillips. Interpretable debiasing of vectorized language representations with iterative orthogonalization. In *International Conference on Learning Representations* (ICLR), 2023. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29, 2016. Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. *2014 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3606–3613, 2013. Sunipa Dev and Jeff Phillips. Attenuating bias in word vectors. In *The 22nd International Conference on* Artificial Intelligence and Statistics, pp. 879–887. PMLR, 2019. Sunipa Dev, Tao Li, Jeff M Phillips, and Vivek Srikumar. Oscar: Orthogonal subspace correction and rectification of biases in word embeddings. In *Empirical Methods in Natural Language Processing (EMNLP)*, 2021. Alex Krizhevsky. Learning multiple layers of features from tiny images. In Masters Thesis; University of Toronto, 2009. Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-ofdistribution samples and adversarial attacks. *ArXiv*, abs/1807.03888, 2018. Yifei Ming, Yiyou Sun, Ousmane Dia, and Yixuan Li. How to exploit hyperspherical embeddings for out-of-distribution detection? In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=aEFaE0W5pAd. Yuval Netzer, Tao Wang, Adam Coates, A. Bissacco, Bo Wu, and A. Ng. Reading digits in natural images with unsupervised feature learning. In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. 
Kanchana Ranasinghe, Muzammal Naseer, Munawar Hayat, Salman Hameed Khan, and Fahad Shahbaz Khan. Orthogonal projection loss. *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 12313–12323, 2021. Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7237–7256, 2020. Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In *International Conference on Machine Learning*, pp. 20827–20840. PMLR, 2022. Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan McCann, Vicente Ordonez, and Caiming Xiong. Double-hard debias: Tailoring word embeddings for gender bias mitigation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5443–5453, 2020. Jim Winkens, Rudy Bunel, Abhijit Guha Roy, Robert Stanforth, Vivek Natarajan, Joseph R Ledsam, Patricia MacWilliams, Pushmeet Kohli, Alan Karthikesalingam, Simon Kohl, et al. Contrastive training for improved out-of-distribution detection. *arXiv preprint arXiv:2007.05566*, 2020. Chen Xing, Sercan Ö. Arik, Zizhao Zhang, and Tomas Pfister. Distance-based learning from errors for confidence calibration. *ArXiv*, abs/1912.01730, 2019. Pingmei Xu, Krista A. Ehinger, Yinda Zhang, Adam Finkelstein, Sanjeev R. Kulkarni, and Jianxiong Xiao. Turkergaze: Crowdsourcing saliency with webcam based eye tracking. *ArXiv*, abs/1504.06755, 2015. Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. *ArXiv*, abs/1506.03365, 2015. Lu Yu, Bartlomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, and Joost van de Weijer. Semantic drift compensation for class-incremental learning. 
In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6982–6991, 2020.

Bolei Zhou, Àgata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 40: 1452–1464, 2018.

## A Graded Rotation

Here, we describe the graded rotation algorithm from Dev et al. (2021) for completeness. For an angle θ, we denote the 2 × 2 rotation matrix by $R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. The graded rotation of a vector x with respect to the mean vectors v1 and v2 was introduced in Dev et al. (2021); we recall it below.

Algorithm 5 GradedRotat(v1, v2, x)
1: **Input:** unit vectors v1, v2 ∈ R², and x ∈ R²
2: Set θ′ = arccos(⟨v1, v2⟩) and θ = π/2 − θ′
3: Set ϕ1 = arccos⟨v1, x/|x|⟩
4: Set v′2 = v2 − ⟨v1, v2⟩v1
5: Set d2 = ⟨v′2, x/|x|⟩
6: Compute
$$\theta_x = \begin{cases} \theta\,\frac{\phi_1}{\theta'} & \text{if } d_2 > 0 \text{ and } \phi_1 \le \theta' \\ \theta\,\frac{\pi-\phi_1}{\pi-\theta'} & \text{if } d_2 > 0 \text{ and } \phi_1 > \theta' \\ -\theta\,\frac{\pi-\phi_1}{\theta'} & \text{if } d_2 < 0 \text{ and } \phi_1 \ge \pi - \theta' \\ -\theta\,\frac{\phi_1}{\pi-\theta'} & \text{if } d_2 < 0 \text{ and } \phi_1 < \pi - \theta' \end{cases}$$
7: **return** $R_{\theta_x} x$

It operates entirely on a data point x that lies in R², a span that also contains the input vectors v1 and v2. When applied to data in higher dimensions, it is assumed that x has already been projected into this span; that step is not covered here. The algorithm first identifies the angle θ by which v2 must be rotated so that it becomes orthogonal to v1. Then, based on the angle ϕ1 that x makes with v1, it determines the graded rotation angle θx to apply to x. Note that line 6 is a case statement. While it is convenient to think about data x that lies in the first quadrant between v1 and v2, the algorithm also works for data elsewhere in the span: one condition distinguishes whether ϕ1 is greater or less than θ′; the other depends on d2, which determines whether x lies on the positive side (in the direction of v′2) or the negative side (away from v′2), for which the cases apply symmetrically.
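For concreteness, the case statement in line 6 can be written out as follows. This is a sketch; the function signature and argument names are ours.

```python
import numpy as np

def graded_angle(theta, theta_p, phi1, d2):
    """Graded rotation angle theta_x (line 6 of Algorithm 5) for a point at
    angle phi1 from v1, where the sign of d2 gives the side w.r.t. v2'."""
    if d2 > 0:
        if phi1 <= theta_p:
            return theta * phi1 / theta_p
        return theta * (np.pi - phi1) / (np.pi - theta_p)
    if phi1 >= np.pi - theta_p:
        return -theta * (np.pi - phi1) / theta_p
    return -theta * phi1 / (np.pi - theta_p)
```

The ratios interpolate: a point aligned with v2 (ϕ1 = θ′, d2 > 0) receives the full angle θ, while a point aligned with v1 (ϕ1 = 0) is left fixed.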
## B Proofs of Convergence of BinaryICR

## B.1 Convergence of BinaryICR and BinaryDCR

In order to prove the convergence of the BinaryICR and BinaryDCR algorithms, we need the following assumptions on the data, which are illustrated in Figure 7. Notice that Assumption 1 is a special case of Assumption 2.

Assumption 2 Let 0 < θ′ < π/2, θ = π/2 − θ′, −ϕ1 ≤ 0 ≤ ϕ2 < ψ1 ≤ ψ2 ≤ π, and γ = ψ1 − ϕ2 > 0. *Let also* X1 and X2 be subsets of the cones Γ1 = {re^{iϕ} : −ϕ1 ≤ ϕ ≤ ϕ2, r > 0} and Γ2 = {re^{iψ} : ψ1 ≤ ψ ≤ ψ2, r > 0}, respectively, and let θ′ be the angle between the mean vectors v1 and v2 of X1 and X2, respectively (see Figure 7).

Lemma 4 Under Assumption 2, for any i, *the angle* θi = π/2 − θ′i stays positive, where θ′1 = θ′ and θ′i (i ≥ 2) denotes the angle between the two class means after the i*-th iteration of BinaryICR.*

Proof. First, we discuss what happens to the cones Γ1 and Γ2 after one iteration of BinaryICR. Then, by an induction argument, we conclude that θi > 0 for any i. In Γ1, the half-cone under v1 shrinks, but the other half expands. This means that the y-values of data points in the half-cone under v1 increase, and their x-values decrease a bit but stay positive. The same phenomenon happens for the other half of the cone. Since the y-values increase, their average increases as well (i.e., it becomes positive where it was 0 previously). Therefore, v′1 will be in the first quadrant.

![14_image_0.png](14_image_0.png)

Figure 7: Pictorial view of Assumption 1 (left) and Assumption 2 (right) on two classes X1 and X2.

For Γ2, after running one iteration of BinaryICR, the half-cone above v2 shrinks, but the other half expands. Now, to make the comparison easy, we rotate all the points of X′2 by −θ1 (i.e., the y-axis is mapped onto v2) and call the result X′′2, where X′2 is the transformation of X2 after applying the graded rotations to X2.
This means that the x-values of the data points in X′′2 increase relative to their x-values in X2 (consider the two cases ψ2 ≤ π/2 and ψ2 > π/2 separately). The same holds for their average, so the average will lie under v2. Rotating the points of X′′2 back by θ1 to obtain X′2 means that the average of X′2, which we call v′2, makes an angle of less than π/2 with the x-axis. We observe that both mean vectors v′1 and v′2 stay in the first quadrant, implying θ′2 < π/2, or equivalently θ2 > 0. Now, by induction, if θi > 0, then completely analogously to going from θ1 > 0 to θ2 > 0 above, we conclude that θi+1 > 0. □

Theorem 5 (Restatement of Theorem 1) Under Assumption 2, the BinaryICR algorithm converges; that is, after enough iterations, θ′i *approaches* π/2, where θ′i is the angle between the two class means after the i-th iteration of BinaryICR.

Proof. Let θ1 = θ and θ′1 = θ′. Notice that all vectors in (X1 ∪ X2) \ (R × {0}) will be changed by any iteration of BinaryICR if θ′1 ≠ π/2. Now consider the gap γ1 = γ between ϕ2 and ψ1, i.e., γ1 = ψ1 − ϕ2 > 0. Since both ϕ2 and ψ1 lie in [0, θ′1], after one iteration of BinaryICR they are mapped to $\phi_2 + \frac{\theta_1}{\theta'_1}\phi_2$ and $\psi_1 + \frac{\theta_1}{\theta'_1}\psi_1$. Thus γ1 changes to $\gamma_2 = \gamma_1 + \frac{\theta_1}{\theta'_1}\gamma_1 > \gamma_1$ (note that θi ≥ 0 and 0 < γi < θ′i < π/2 by Lemma 4). Considering θ′2, running another iteration of BinaryICR modifies γ2 to $\gamma_3 = \gamma_2 + \frac{\theta_2}{\theta'_2}\gamma_2 > \gamma_2$, and so on. Therefore, the sequence (γn) ⊂ [0, π/2] is a bounded increasing sequence and thus convergent, say to γ′. This means that running further iterations of BinaryICR cannot change γ′; that is, θn → 0, as otherwise γ′ would have to change. Hence, the BinaryICR algorithm converges. □

Theorem 6 (Restatement of Theorem 2) Assume Assumption 1 *holds with* k = 2. In addition, if the angle between v1 and v2 *is bigger than* π/2, then we assume ϕ1, ϕ2 *are less than* π/4. Then, after running the BinaryDCR algorithm, the class means will be orthogonal to each other.

Proof.
The proof is straightforward, but we include it for completeness. Let θ′ be the angle between v1 and v2. There are two cases.

Case 1. When θ′ < π/2 and the two classes are disjoint, according to the BinaryDCR algorithm, all vectors in class 2 are rotated by the angle θ = π/2 − θ′, and so their mean v2 is rotated by θ as well. The vectors in class 1 are not rotated, so v1 stays the same. Therefore, after running the algorithm, the class means are orthogonal to each other.

Case 2. In the case θ′ > π/2, according to the BinaryDCR algorithm, all vectors within π/4 of v2 are rotated by θ = π/4. So, by the assumptions, all points in class 2, and thus their mean v2, are rotated by π/4. Since the points in class 1 stay the same, the class means are orthogonal to each other after running the algorithm. □

## B.2 Convergence of MultiDCR

Assumption 3 We make the following assumptions on the dataset in order to prove the convergence of the MultiDCR algorithm (see Figure *8). Without loss of generality, we assume that if we run the Gram-Schmidt* process on the class means {v1, . . . , vk}*, and it runs successfully (handled by assumption (1)), then the resulting* orthonormal basis is the standard basis {e1*, . . . , e*k}.

1. The class means are linearly independent.
2. For each i, class Xi is included in a cone Ci around vi with radius ϕi, where for i ≥ 3, Ci is located inside a cone Bi around ei *of radius less than* π/2.
3. *All class means are in the first orthant, or* ϕi < π/4 for all i.
4. For all j < i, where i ≥ 3, Xj *is outside of the cone* Bi.

![15_image_0.png](15_image_0.png)

Figure 8: Pictorial view of Assumption 3.

Theorem 7 (Restatement of Theorem 3) Let Assumption 3 *hold. Then, after running the MultiDCR* algorithm, all class means will be orthogonal to each other.

Proof.
In the MultiDCR algorithm, for each class, we rotate the encompassing cone in a Gram-Schmidt manner. By the separation and linear-independence properties in Assumption 3, all cones stay separated after any step of the Gram-Schmidt process. This is because the Gram-Schmidt process orthogonalizes the vectors one by one, and each step happens within the same subspace as before; that is, in the i-th step, the spans of {e1, . . . , ei} and {v1, . . . , vi} coincide. Now, Assumption 3 implies that the i-th class cone Ci around vi is rotated so that ei becomes its center after the rotation. We call this rotated cone C′i. Thus, C′i lies inside the cone Bi by Assumption 3(2). This means that C′i is disjoint from the previously orthogonalized cones C′j for j < i, as they live outside the cone Bi and hence outside the cone C′i. Therefore, after running the MultiDCR algorithm, all class means will be orthogonal to each other. □

## C Experiments on CIFAR-10

We repeat on CIFAR-10 many of the experiments we performed for CIFAR-100 in the main text. The results are mostly similar, with some differences because the accuracy problems are easier with fewer classes. Also, orthogonality is easier to obtain since there are fewer classes in the same-dimensional space.

## C.1 Orthogonality and Compactness

Like Table 1, Table 6 shows the average absolute dot product between class means and the average intra-class compactness for OPL and CIDER on the CIFAR-10 dataset. For OPL, orthogonality and compactness improve with higher dimensions, while CIDER's orthogonality stays the same and its compactness decreases. Interestingly, CIDER has worse orthogonality here than on CIFAR-100; this is because the vectors v1, . . .
, v10 must lie in a 10-dimensional span (with the origin), and the most dispersed such configuration (a regular simplex) has pairwise dot products of −1/9, hence the 0.111 average absolute dot product.

Table 6: Average absolute dot products between class means (left five columns) and intra-class compactness (right five columns).

| dim: | 64 | 128 | 256 | 512 | 1024 | 64 | 128 | 256 | 512 | 1024 |
|-------|------|------|------|------|------|------|------|------|------|------|
| OPL | 0.0111 | 0.0093 | 0.0083 | 0.0036 | 0.0058 | 0.9989 | 0.9990 | 0.9990 | 0.9990 | 0.9991 |
| CIDER | 0.1111 | 0.1111 | 0.1111 | 0.1111 | 0.1111 | 0.9892 | 0.9885 | 0.9880 | 0.9875 | 0.9859 |

Table 7, like Table 2, shows the top-1 and top-5 accuracy for OPL and CIDER by dimension on the CIFAR-10 dataset. Here, as noted in Section 3.2, Smax means applying logistic regression and choosing the class with the highest score for a query, and NN stands for applying the Rocchio algorithm to infer the class predictions.

Table 7: Softmax top-1 and top-5 accuracy for each dimension d.

| Loss | | 64 dim | 128 dim | 256 dim | 512 dim | 1024 dim |
|-------|-------|--------|--------|--------|--------|--------|
| OPL | Top 1 | 93.020 | 93.610 | 93.200 | 93.420 | 93.310 |
| CIDER | Top 1 | 92.730 | 92.640 | 92.870 | 92.730 | 92.590 |
| OPL | Top 5 | 99.590 | 99.570 | 99.610 | 99.570 | 99.650 |
| CIDER | Top 5 | 99.570 | 99.590 | 99.550 | 99.520 | 99.690 |

## C.2 Orthogonality Visualization

In Figure 9, we plot block matrices of the absolute dot products between all pairs of class means for OPL and CIDER embeddings at 64 and 1024 dimensions on the CIFAR-10 dataset. For OPL, all classes but two are almost orthogonal in 64 dimensions, and this trend improves in 1024 dimensions, although those two classes are still not orthogonal to each other.
For CIDER, going from 64 to 1024 dimensions does not considerably improve the class orthogonality scores, which remain far from orthogonality.

## C.3 Augmentation Of OPL Features With ICR And DCR

Table 8 below shows the average absolute dot product between class means and the average intra-class compactness for the CIFAR-10 dataset when we apply ICR/DCR on top of the OPL/CIDER features. Note that ICR and DCR improve the dot products drastically for both OPL and CIDER, where OPL+ICR5 reaches essentially complete orthogonality of the classes, and CIDER+ICR5 improves the average dot product by about one order of magnitude. Compactness stays about the same for OPL but decreases for CIDER as we iterate more ICR steps.

Figure 9: Orthogonality visualization of the dot products of the average per-class feature vectors. From left to right: (a) OPL(64), (b) OPL(1024), (c) CIDER(64), (d) CIDER(1024).

Table 8: Orthogonality and compactness scores for OPL and CIDER, and for each after applying +DCR or +ICR j, for j iterations. By default, with 1024 dimensions.
| Score | OPL (64) | OPL | OPL+DCR | OPL+ICR 1 | OPL+ICR 2 | OPL+ICR 3 | OPL+ICR 4 | OPL+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Orthogonality | 0.0111 | 0.0058 | 2.0720e-05 | 5.2844e-05 | 1.0261e-06 | 1.8735e-08 | 3.2816e-10 | 5.7593e-12 |
| Compactness | 0.9989 | 0.9991 | 0.9991 | 0.9991 | 0.9991 | 0.9991 | 0.9991 | 0.9991 |

| Score | CIDER (512) | CID | CID+DCR | CID+ICR 1 | CID+ICR 2 | CID+ICR 3 | CID+ICR 4 | CID+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Orthogonality | 0.1111 | 0.1111 | 0.0744 | 0.0838 | 0.0592 | 0.0372 | 0.0239 | 0.0143 |
| Compactness | 0.9875 | 0.9859 | 0.9779 | 0.9351 | 0.8976 | 0.8778 | 0.8662 | 0.8601 |

## C.4 Classification Accuracy After ICR/DCR On Test Data

Similar to Table 4, Table 9 shows the top-1 and top-5 accuracy on CIFAR-10 test data for OPL, CIDER, and after applying +DCR or +ICR for up to 5 iterations, where OPL outperforms CIDER. For OPL(1024), applying +DCR or +ICR increases top-1 accuracy slightly with the Smax (logistic) classifier and leaves it unchanged with the NN (Rocchio) classifier. In contrast, for CIDER, top-1 accuracy decreases by an insignificant amount with both the Smax and NN classifiers. For both OPL and CIDER, top-5 accuracy degrades by less than 1%.
| Metric | OPL(64) | OPL | OPL+DCR | OPL+ICR 1 | OPL+ICR 2 | OPL+ICR 3 | OPL+ICR 4 | OPL+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 93.020 | 93.310 | 93.330 | 93.330 | 93.330 | 93.330 | 93.330 | 93.330 |
| Smax Top 5 | 99.590 | 99.650 | 98.700 | 98.700 | 98.700 | 98.700 | 98.700 | 98.700 |
| NN Top 1 | 93.030 | 93.300 | 93.290 | 93.300 | 93.300 | 93.300 | 93.300 | 93.300 |
| NN Top 5 | 99.560 | 99.720 | 98.900 | 98.920 | 98.920 | 98.920 | 98.920 | 98.920 |

| Metric | CIDER (512) | CID | CID+DCR | CID+ICR 1 | CID+ICR 2 | CID+ICR 3 | CID+ICR 4 | CID+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 92.730 | 92.590 | 92.360 | 92.630 | 92.610 | 92.440 | 92.370 | 92.400 |
| Smax Top 5 | 99.520 | 99.690 | 99.180 | 99.630 | 99.610 | 99.580 | 99.570 | 99.590 |
| NN Top 1 | 92.730 | 92.560 | 92.180 | 92.420 | 91.210 | 90.000 | 89.940 | 89.940 |
| NN Top 5 | 99.420 | 99.550 | 99.010 | 99.170 | 98.870 | 96.840 | 95.510 | 95.350 |

Table 9: Test data results for OPL, CIDER, and +DCR or +ICR with 1024 dimensions

## C.5 Classification Accuracy After ICR/DCR On Training Data

Tables 10 and 11 show the top-1 and top-5 accuracy for OPL, CIDER, and after applying +DCR or +ICR on training data.

| Metric | OPL (64) | OPL (1024) | OPL+DCR | OPL+ICR1 | OPL+ICR2 | OPL+ICR3 | OPL+ICR4 | OPL+ICR5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 99.976 | 99.976 | 99.976 | 99.976 | 99.976 | 99.976 | 99.976 | 99.976 |
| Smax Top 5 | 99.998 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |
| NN Top 1 | 99.974 | 93.300 | 93.290 | 99.974 | 99.974 | 99.974 | 99.974 | 99.974 |
| NN Top 5 | 100.000 | 99.720 | 98.900 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |
Similar to CIFAR-100, we observe that OPL+ICR does not deteriorate the representations, where all methods achieve between 99.9% and 100% accuracy. Applying +DCR after CIDER affects the accuracy by less than 1%. However, we see some degradation after applying +ICR on top of CIDER features, especially with the Smax classifier, when we use more iterations.

Table 10: Training data results for OPL, OPL+DCR, and OPL+ICR with 1024 dimensions

| Metric | CIDER (512) | CID | CID+DCR | CID+ICR 1 | CID+ICR 2 | CID+ICR 3 | CID+ICR 4 | CID+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 99.944 | 99.952 | 99.150 | 94.176 | 93.402 | 87.858 | 86.980 | 86.890 |
| Smax Top 5 | 100.000 | 100.000 | 99.998 | 98.456 | 93.588 | 93.582 | 93.192 | 87.336 |
| NN Top 1 | 99.946 | 99.950 | 99.716 | 99.872 | 98.178 | 96.554 | 96.478 | 96.474 |
| NN Top 5 | 100.000 | 100.000 | 100.000 | 100.000 | 99.926 | 97.570 | 96.590 | 96.572 |

Table 11: Training data results for CIDER, CIDER+DCR, and CIDER+ICR with 1024 dimensions

## C.6 Full Table For Out-Of-Distribution Experiment Using CIFAR-10 As In-Distribution (ID) Data

We show results in Table 12 for CIFAR-10 on the OOD experiments for which CIDER gave state-of-the-art results. Many prior-art results are taken directly from Ming et al. (2023). Unlike on CIFAR-100, where running ICR gives a minor degradation of results under these measures, on CIFAR-10, ICR and DCR give barely measurable improvement or degradation (in the fourth significant digit). OPL still does not perform as well on this task.
| Method | SVHN FPR↓ | AUC↑ | Places365 FPR↓ | AUC↑ | LSUN FPR↓ | AUC↑ | iSUN FPR↓ | AUC↑ | Texture FPR↓ | AUC↑ | Average FPR↓ | AUC↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Without Contrastive Learning* | | | | | | | | | | | | |
| MSP | 59.66 | 91.25 | 62.46 | 88.64 | 45.21 | 93.80 | 54.57 | 92.12 | 66.45 | 88.50 | 57.67 | 90.86 |
| ODIN | 53.78 | 91.30 | 43.40 | 90.98 | 10.93 | 97.93 | 28.44 | 95.51 | 55.59 | 89.47 | 38.43 | 93.04 |
| Mahalanobis | 9.24 | 97.80 | 83.50 | 69.56 | 67.73 | 73.61 | 6.02 | 98.63 | 23.21 | 92.91 | 37.94 | 86.50 |
| Energy | 54.41 | 91.22 | 42.77 | 91.02 | 10.19 | 98.05 | 27.52 | 95.59 | 55.23 | 89.37 | 38.02 | 93.05 |
| GODIN | 18.72 | 96.10 | 55.25 | 85.50 | 11.52 | 97.12 | 30.02 | 94.02 | 33.58 | 92.20 | 29.82 | 92.97 |
| *With Contrastive Learning* | | | | | | | | | | | | |
| ProxyAnchor | 39.27 | 94.55 | 43.46 | 92.06 | 21.04 | 97.02 | 23.53 | 96.56 | 42.70 | 93.16 | 34.00 | 94.67 |
| CE+SimCLR | 6.98 | 99.22 | 54.39 | 86.70 | 64.53 | 85.60 | 59.62 | 86.78 | 16.77 | 96.56 | 40.46 | 90.97 |
| CSI | 37.38 | 94.69 | 38.31 | 93.04 | 10.63 | 97.93 | 10.36 | 98.01 | 28.85 | 94.87 | 25.11 | 95.71 |
| SSD+ | 2.47 | 99.51 | 22.05 | 95.57 | 10.56 | 97.83 | 28.44 | 95.67 | 9.27 | 98.35 | 14.56 | 97.38 |
| KNN+ | 2.70 | 99.61 | 23.05 | 94.88 | 7.89 | 98.01 | 24.56 | 96.21 | 10.11 | 97.43 | 13.66 | 97.22 |
| *Regularization for Compactness and Dispersion* | | | | | | | | | | | | |
| CIDER | 8.30 | 98.46 | 21.37 | 95.93 | 9.63 | 98.18 | 0.68 | 99.79 | 27.75 | 94.45 | 13.55 | 97.36 |
| CIDER+DCR | 8.33 | 98.46 | 21.58 | 95.92 | 9.59 | 98.19 | 0.67 | 99.79 | 27.75 | 94.48 | 13.58 | 97.37 |
| CIDER+ICR1 | 8.31 | 98.46 | 21.33 | 95.93 | 9.48 | 98.19 | 0.69 | 99.79 | 27.82 | 94.44 | 13.53 | 97.36 |
| CIDER+ICR2 | 8.32 | 98.46 | 21.29 | 95.93 | 9.46 | 98.19 | 0.69 | 99.79 | 27.84 | 94.45 | 13.52 | 97.36 |
| CIDER+ICR3 | 8.32 | 98.46 | 21.30 | 95.93 | 9.46 | 98.19 | 0.69 | 99.79 | 27.84 | 94.45 | 13.52 | 97.36 |
| CIDER+ICR4 | 8.32 | 98.46 | 21.29 | 95.93 | 9.46 | 98.19 | 0.69 | 99.79 | 27.84 | 94.45 | 13.52 | 97.36 |
| CIDER+ICR5 | 8.32 | 98.46 | 21.29 | 95.93 | 9.46 | 98.19 | 0.69 | 99.79 | 27.84 | 94.45 | 13.52 | 97.36 |
| OPL | 99.74 | 33.74 | 99.44 | 42.56 | 99.75 | 56.45 | 99.93 | 49.33 | 97.15 | 40.49 | 99.20 | 44.51 |

Table 12: OOD performance for CIDER, CIDER+DCR, and CIDER+ICR on the CIFAR-10 dataset

## D Training Data Experiments On Accuracy For CIFAR-100

Tables 13 and 14 show the top-1 and top-5 accuracy for OPL, CIDER, and after applying +DCR or +ICR on the training data of the CIFAR-100 dataset. Table 13 confirms that OPL+ICR does not deteriorate the representations, where all methods achieve between 99.5% and 100% accuracy. In CIDER, with the NN classifier, accuracies remain the same (above 99.7%), but with the Smax classifier, we see some degradation after applying +DCR or +ICR.
| Metric | OPL (64) | OPL (1024) | OPL+DCR | OPL+ICR1 | OPL+ICR2 | OPL+ICR3 | OPL+ICR4 | OPL+ICR5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 99.946 | 99.976 | 99.762 | 99.594 | 99.686 | 99.698 | 99.698 | 99.698 |
| Smax Top 5 | 100.000 | 100.000 | 99.992 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |
| NN Top 1 | 99.858 | 99.972 | 99.222 | 99.974 | 99.974 | 99.974 | 99.974 | 99.974 |
| NN Top 5 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 | 100.000 |

Table 13: Training data results for OPL, OPL+DCR, and OPL+ICR with 1024 dimensions

| Metric | CIDER (512) | CID | CID+DCR | CID+ICR 1 | CID+ICR 2 | CID+ICR 3 | CID+ICR 4 | CID+ICR 5 |
|---|---|---|---|---|---|---|---|---|
| Smax Top 1 | 99.888 | 99.892 | 94.370 | 97.396 | 84.540 | 82.798 | 82.612 | 82.572 |
| Smax Top 5 | 100.000 | 100.000 | 98.266 | 99.408 | 89.178 | 88.088 | 87.920 | 87.902 |
| NN Top 1 | 99.890 | 99.898 | 99.720 | 99.872 | 99.864 | 99.862 | 99.862 | 99.862 |
| NN Top 5 | 100.000 | 100.000 | 99.988 | 100.000 | 99.998 | 99.998 | 99.998 | 99.998 |

Table 14: Training data results for CIDER, CIDER+DCR, and CIDER+ICR with 1024 dimensions

## E Full Table For Out-Of-Distribution Experiment Using CIFAR-100 As In-Distribution (ID) Data

This table is a full version of Table 5, where we have added some methods without contrastive learning and a few more iterations of ICR on CIDER features. The numbers for the additional methods are taken from Ming et al. (2023).
| Method | SVHN FPR↓ | AUC↑ | Places365 FPR↓ | AUC↑ | LSUN FPR↓ | AUC↑ | iSUN FPR↓ | AUC↑ | Texture FPR↓ | AUC↑ | Average FPR↓ | AUC↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Without Contrastive Learning* | | | | | | | | | | | | |
| MSP | 78.89 | 79.80 | 84.38 | 74.21 | 83.47 | 75.28 | 84.61 | 74.51 | 86.51 | 72.53 | 83.12 | 75.27 |
| ODIN | 70.16 | 84.88 | 82.16 | 75.19 | 76.36 | 80.10 | 79.54 | 79.16 | 85.28 | 75.23 | 78.70 | 79.11 |
| Mahalanobis | 87.09 | 80.62 | 84.63 | 73.89 | 84.15 | 79.43 | 83.18 | 78.83 | 61.72 | 84.87 | 80.15 | 79.53 |
| Energy | 66.91 | 85.25 | 81.41 | 76.37 | 59.77 | 86.69 | 66.52 | 84.49 | 79.01 | 79.96 | 70.72 | 82.55 |
| GODIN | 74.64 | 84.03 | 89.13 | 68.96 | 93.33 | 67.22 | 94.25 | 65.26 | 86.52 | 69.39 | 87.57 | 70.97 |
| LogitNorm | 59.60 | 90.74 | 80.25 | 78.58 | 81.07 | 82.99 | 84.19 | 80.77 | 86.64 | 75.60 | 78.35 | 81.74 |
| *With Contrastive Learning* | | | | | | | | | | | | |
| ProxyAnchor | 87.21 | 82.43 | 70.10 | 79.84 | 37.19 | 91.68 | 70.01 | 84.96 | 65.64 | 84.99 | 66.03 | 84.78 |
| CE+SimCLR | 24.82 | 94.45 | 86.63 | 71.48 | 56.40 | 89.00 | 66.52 | 83.82 | 63.74 | 82.01 | 59.62 | 84.15 |
| CSI | 44.53 | 92.65 | 79.08 | 76.27 | 75.58 | 83.78 | 76.62 | 84.98 | 61.61 | 86.47 | 67.48 | 84.83 |
| SSD+ | 31.19 | 94.19 | 77.74 | 79.90 | 79.39 | 85.18 | 80.85 | 84.08 | 66.63 | 86.18 | 67.16 | 85.90 |
| KNN+ | 39.23 | 92.78 | 80.74 | 77.58 | 48.99 | 89.30 | 74.99 | 82.69 | 57.15 | 88.35 | 60.22 | 86.14 |
| *Regularization for Compactness and Dispersion* | | | | | | | | | | | | |
| CIDER | 44.16 | 89.47 | 69.44 | 80.82 | 57.59 | 86.29 | 9.27 | 98.09 | 35.74 | 91.72 | 43.24 | 89.28 |
| CIDER+DCR | 48.52 | 88.21 | 71.29 | 79.95 | 62.18 | 84.33 | 10.78 | 97.80 | 37.46 | 90.95 | 46.05 | 88.25 |
| CIDER+ICR 1 | 49.28 | 87.97 | 70.28 | 79.93 | 60.42 | 84.94 | 10.96 | 97.71 | 37.84 | 91.02 | 45.75 | 88.32 |
| CIDER+ICR 2 | 49.72 | 87.92 | 70.53 | 79.89 | 60.51 | 84.86 | 11.08 | 97.70 | 38.03 | 90.99 | 45.97 | 88.27 |
| CIDER+ICR 3 | 49.82 | 87.90 | 70.60 | 79.88 | 60.59 | 84.84 | 11.15 | 97.70 | 38.07 | 90.98 | 46.05 | 88.26 |
| CIDER+ICR 4 | 49.85 | 87.90 | 70.61 | 79.87 | 60.62 | 84.84 | 11.15 | 97.70 | 38.16 | 90.98 | 46.08 | 88.26 |
| CIDER+ICR 5 | 49.84 | 87.90 | 70.57 | 79.87 | 60.59 | 84.84 | 11.15 | 97.70 | 38.12 | 90.98 | 46.05 | 88.26 |
| OPL | 98.83 | 43.00 | 99.16 | 38.08 | 99.85 | 25.93 | 91.52 | 63.20 | 91.54 | 51.90 | 96.18 | 44.42 |

Table 15: OOD performance for CIDER, CIDER+DCR, and CIDER+ICR with CIFAR-100 as in-distribution data
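The FPR↓ and AUC↑ columns in Tables 12 and 15 are the standard OOD-detection metrics: the false-positive rate on OOD data at the threshold where 95% of in-distribution (ID) examples are retained, and the area under the ROC curve. As a minimal sketch of how these can be computed from per-example scores (the scoring function itself, e.g. distance to the nearest class mean, is assumed here and not taken from the paper):

```python
import math

def auroc(id_scores, ood_scores):
    # Probability that a random ID example scores higher than a random
    # OOD example (ties count as half); higher score = more ID-like.
    wins = sum(
        1.0 if i > o else 0.5 if i == o else 0.0
        for i in id_scores for o in ood_scores
    )
    return wins / (len(id_scores) * len(ood_scores))

def fpr_at_95_tpr(id_scores, ood_scores):
    # Choose the threshold that keeps at least 95% of ID examples,
    # then measure how many OOD examples also pass it.
    k = math.ceil(0.95 * len(id_scores))
    thr = sorted(id_scores, reverse=True)[k - 1]
    return sum(1 for o in ood_scores if o >= thr) / len(ood_scores)
```

A perfect detector has AUC 1.0 and FPR 0.0; the tables report both as percentages.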
Review 1: Summary: This paper presents a mechanism for postprocessing embedding vectors (learned via some previous method) so that the class-means for each labeled class are close-to-orthogonal. They present two versions of the mechanism, one of which iteratively performs "graded" piecewise rotations along 2D subspaces to bring class means closer to orthogonal, and one which discontinuously makes them orthogonal by modifying all points within given conical regions. The authors compare their approach to previously-proposed regularization methods OPL and CIDER, and show that these previous methods do not reach full orthogonality, whereas the proposed approach gets much closer. They also show that the proposed technique keeps roughly similar classification performance and somewhat improves out-of-distribution detection. Strengths and Weaknesses: ## Strengths ### S1. Clearly written The main paper is clearly written overall and includes useful examples and figures to demonstrate the technique. ### S2. Reasonable experimental results The experiments show that previously-proposed methods for orthogonalizing representations do not lead to having orthogonal class means, whereas the proposed technique does produce this. Their technique also seems to slightly improve out-of-distribution detection when using a simple distance-from-class-means metric. ## Weaknesses ### W1. The problem considered by this work has a simple closed-form solution The primary goal of the proposed approach is to find a mapping that adjusts the existing learned embeddings to make the class means orthogonal, and to align them with the standard coordinate axes. 
The two proposed approaches are both fairly complex, and involve piecewise or discontinuous nonlinear transformations: ICR applies a pairwise graded rotation (a nonlinear operation that interpolates angles) to pairs of dimensions and repeats it until convergence, and DCR directly moves points within certain distances of the class means in a discontinuous fashion. However, I believe there is a simple closed-form solution to this problem. If our goal is to map each class mean $v_i$ to the standard basis vector $e_i$, we can arrange the class means as columns of a matrix

$$
V = \begin{bmatrix}
\vdots & \vdots & & \vdots \\
v_1 & v_2 & \cdots & v_n \\
\vdots & \vdots & & \vdots \\
\end{bmatrix}
$$

and then define our mapping as $f_2(x) = V^{-1}x$. This immediately ensures that $f_2(v_i) = e_i$, and does not require any nonlinear or discontinuous steps or any iterative procedure. Moreover, the $e_i$ are the new class means, because $f_2$ is linear and thus $f_2(E[X]) = E[f_2(X)]$ by the linearity of expectation. A bit more care must be taken if the dimension of the embedding space is not equal to the number of classes, but this seems fairly straightforward as well: Gram-Schmidt orthogonalize the class means in order to identify the subspace spanned by those means, then define the mapping so that vectors in this subspace get transformed with $V^{-1}$, and vectors orthogonal to it remain unchanged; this is still a linear mapping. The authors are already doing a similar thing in their BinaryCR algorithm for a 2D subspace. It seems to me that this simpler procedure would meet all the stated goals of the proposed approaches in this work. Given that, it seems hard to justify the complexity of the proposed methods, and I think the ICR and DCR approaches described here may not be of interest to the TMLR audience.

### W2. Proofs are informal and not sufficiently detailed

The authors make a number of claims about the convergence of their algorithms.
However, the proofs of these claims seem informal and are in my opinion not detailed enough to follow the argument. As one example, the authors state in their proof of Lemma 4 that "In $Γ_1$, the half-cone under $v_1$ shrinks, but the other half expands. It means that the y-values of data points in the half-cone under $v_1$ increase, and their $x$-values decrease a bit but stay positive. The same phenomenon happens for the other half of the cone." It's not clear to me which half-cones are being referenced here or what it means for them to be "under" a vector, and if one half shrinks and the other expands, what does it mean that "the same phenomenon happens for the other half"? I similarly had difficulty following the statement "Now, in order to make the comparison easy, we rotate all the points of $X_2$′ by −$θ_1$ (i.e., y-axis will be transformed on top of v ) and call it X′′, where X′ is the transformation of X after applying graded rotations on top of X", and there are more statements like this that follow as well. This is very difficult to follow without either formal statements about what is being compared to what, or at least a diagram showing these rotations and shrinking/expansion. I also noted that the proof of Theorem 5 doesn't ever explicitly prove that it converges to $\pi/2$ (although I think it may be indirectly implied by the second-to-last statement of the proof). I did not read the rest of the proofs in detail, since they do not seem clearly written enough to follow and I'm not convinced the complexity of these proofs is justified given the simpler algorithm I described above. ### W3. Somewhat weak motivation I found the motivation for the approach somewhat weak. It wasn't clear to me what we actually gain from having representations with orthogonal class means, or why simpler solutions wouldn't be sufficient for this. 
For instance, if we want to be able to do classification, or look at relative associations between multiple classes, why isn't it enough to look at the post-softmax classification probabilities, given that we are training the model on a labeled dataset to begin with? The softmax probabilities are already axis-aligned, and if the model is confident, those probabilities will already be close to orthogonal. And if the model isn't confident and can't correctly distinguish classes, it's not clear to me why we should postprocess the representations to force them to be orthogonal. This comes up in the presented example regarding the orange, for instance. It seems reasonable that this image should be associated with other classes, since the model may not be able to say with certainty that this is an orange and not a different kind of fruit. Why is it better to force the class means to be orthogonal, aren't we losing information in that case? The example with the hamster and the apple makes a bit more sense to me, but I still think the motivation could be explained better. It still seems like it's useful to know if the model can't tell the difference between a hamster and a rabbit or a mouse. But perhaps you're implicitly trying to force the model to be more confident on "typical" examples, so that you can distinguish typical ones from atypical ones (e.g. ones with two classes in the image)? Requested Changes: I would like to see the weaknesses W1 and W2 addressed before I would recommend this submission for acceptance. - If the authors believe their approach has advantages beyond the simple matrix-inversion baseline I describe above, I think they should clearly explain this in the paper and discuss why this simple baseline is insufficient. I also think it would be appropriate to add experimental results comparing against this baseline, to see whether the proposed approaches are actually any better. 
- I also think the proofs should be expanded so that each step is clearly explained, either with formal notation or with a step-by-step diagram that explains what the authors mean by their statements. This seems necessary to meet the bar for correctness. Beyond this, I think the paper could be improved by better motivating why we want orthogonal class means. In particular, if the model can't distinguish well between different classes, why aren't we losing valuable information by forcing the average representations to be orthogonal anyway? What specifically do you believe this orthogonalized representation will allow us to do (that we couldn't do with the class probabilities or non-orthogonal representations already)? (The paper gestures at implicit bias, but I don't think it goes into this in enough detail to motivate the approach, and it's not obvious to me why forcing class means to be orthogonal would debias the representation instead of just hiding the bias inside the transformation.) It would also be interesting to discuss connections to neural collapse ([Papyan et al. 2020](https://www.pnas.org/doi/full/10.1073/pnas.2015509117)), which is a phenomenon where, upon reaching near-zero training loss, models end up collapsing their class means to a "simplex equiangular tight frame". This isn't quite the same as being orthogonal, but it seems closely related.

Broader Impact Concerns: No broader impact concerns.

==================================================

Review 2: Summary: This paper proposes the ICR and DCR algorithms for multi-class classification under the Rocchio algorithm, which further orthogonalize the class means. A convergence analysis to orthogonal representations is provided with proof. The findings are verified by empirical experiments with a comparison to existing works.

Strengths and Weaknesses: Strengths: 1. Overall, the paper is easy to follow. 2. The logic is clear, and the proposed method makes sense.
Weaknesses: Please see the requested changes.

Requested Changes: 1. The limitations need discussion. From my understanding, the good performance of the proposed method requires the Phase I embedding to provide embeddings that form several separable clusters. This is theoretically reflected in Assumption 1, where "cones are disjoint". What if the Phase I embedding cannot ensure this? Can you still provide a convergence guarantee, theoretically and empirically? 2. I think the method is based on the idea that one class corresponds to one feature, since different classes of data are orthogonalized. I feel it would be more reasonable if a combination of multiple orthogonal features corresponded to one class. 3. Can the proposed method be applied to solve any tasks beyond classification, e.g., regression tasks? 4. The motivation needs some more clarification. Why do we require "orthogonality"? Any supportive references?

Broader Impact Concerns: No such concerns.

==================================================

Review 3: Summary: The paper proposes two methods called Iterative Class Rectification (ICR) and Discontinuous Class Rectification (DCR) that project the learned representations of examples, such as images, so that their class means become orthogonal. The authors argue that orthogonality enables interpretability, since with an appropriate rotation, each class can correspond to a single coordinate. They also mention that this approach doesn't hurt classification results and can help in debiasing models; e.g. by leaving out the coordinate that corresponds to a particular sensitive attribute such as gender.

Strengths and Weaknesses: *Strengths* 1. The paper extends a prior algorithm, called ISR, that was used in language for debiasing data. Identifying and extending new applications of existing methods is definitely an advantage. *Weaknesses* 1. Overall, the primary limitation is in the original motivation.
I have honestly failed to see why post-processing the learned representation of a pretrained model such that class means become orthogonal matters. This is especially the case given the clustering/separation assumption made in the paper, which would make it easy to build a linear classifier on top of the representations. The authors argue that orthogonality enables interpretability, but this is actually misleading, because what the authors call "interpretability" here boils down to classification. Interpretability is about how or why the model makes a particular prediction; what is offered here is simply a way to infer the class label from the representation. Obviously, the same "interpretability" advantage can be achieved if one simply builds a linear classifier on top of the representation and outputs the probability scores assigned to each class. Each coordinate of those probability scores would correspond to a particular class. Similarly, the claim that one can debias by ignoring/removing the coordinate that corresponds to the sensitive attribute is quite wrong. Suppose, for example, we have an image classification problem and the classes are {"gender", "weight", "height"}. Even if weight and height are *individually* orthogonal to gender (uncorrelated), both *together* can be predictive of gender. Please see the literature on "fairness through unawareness," such as [1], which discusses these pitfalls. 2. The paper claims that ICR and DCR do not impact classification accuracy. But the empirical results show that they do hurt accuracy! In CIFAR-100, for example, DCR reduces the top-1 accuracy of OPL from 75.28% to 74.47% and the top-5 accuracy from 91.93% to 89.31%. It also hurts accuracy in CIDER. ICR may appear to be okay in the main paper, but the CIFAR-10 results in the appendix show that it also hurts; e.g. reducing top-5 in OPL from 99.65% to 98.7%. 3. Confusing classes can actually convey useful information.
In Figure 6, for example, the baseline model confuses the image of a "woman" with a "girl". To me, this indicates that it is difficult to infer from the image whether it corresponds to a "woman" or a "girl", and that's useful information. After applying the proposed ICR method, however, the model is now confident that it is a "woman" and the next best prediction is a "lobster"! I'm not sure if this is, in fact, a desired outcome. 4. The experiments are limited to two small datasets only, CIFAR-100 and CIFAR-10, and one model, ResNet-9. 5. There are several places where the paper is either imprecise or confusing. For example: * In Eq 1, $n_1=0$ if we train in the 1-shot setting, which makes the objective function invalid. * Also in Eq 1, should the absolute sign be inside the summation for $\mathcal{L}_{disp}$? * In Eq 2, the objective function for dispersion is not minimized by having orthogonal vectors. * The authors say they "prefer using the less powerful model of the Rocchio algorithm," without supporting that decision. Logistic regression is a simple linear head and works better. * The values in Table 4 are very close to each other, and there is no estimate of the error margin. It's not easy to infer whether the differences are reliable. * The OOD experiments are not really OOD. In the OOD setting, once a model is trained, we deploy it on new domains/distributions. In CIFAR-10, there are OOD variants of the dataset (see for example https://github.com/google/uncertainty-baselines). If my understanding is correct, the authors train on CIFAR and then adjust the post-processing to fit another dataset, such as SVHN. Some minor comments: * In Algorithm 3, it would be more readable to use the same notation throughout the algorithm. The WLOG clauses are not needed and are confusing. For example, why is it easier to read when we "assume that $t=i$"? * In the abstract, "from each coordinating". Should this be "from each coordinate"?
* In Section 2, the paper assumes the multiclass setting where $y_i\in[k]$, but Figure 3 assumes a multi-label setting. * What does "we use image data as an exemplar" mean?

[1] Dwork, Cynthia, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. "Fairness through awareness," 2012.

Requested Changes: - The paper should clearly motivate why orthogonality in the class means matters. I understand that the authors have tried to motivate this in several places in the paper, but I don't think those are valid motivations. As I mentioned above, what the authors refer to as "interpretability" can be simply achieved by representing an example with class probability scores. The claim that it helps in debiasing is invalid. And the argument that the model will not confuse classes is not well-justified (see for example my comment above about Figure 6). - The experiments are done on two small datasets only and one small model, and the evidence is not convincing, as I mention above. DCR does hurt performance in both datasets. Adding more datasets and models would strengthen the paper. - Please fix the typos and remove the WLOG clauses in the algorithms. Using a consistent notation would make the algorithms more readable.

Broader Impact Concerns: I believe that the claim about using this method for debiasing data is unfounded, and is reminiscent of "fairness through unawareness", which is by now well known not to work. See the reference above and its related works.

==================================================

Metareview: Recommendation: Reject

Comment: The paper is not ready for publication in its current form, as the authors themselves agreed.

==================================================
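For concreteness, the closed-form baseline that Review 1 proposes in its point W1 can be sketched as follows. This is an illustration of the reviewer's argument only, not code from the paper under review; the toy class means are made up, and the square case (number of classes equal to embedding dimension, with linearly independent means) is assumed:

```python
import numpy as np

# Toy class means, arranged as columns of V; assumed linearly independent.
V = np.array([[2.0, 1.0],
              [0.5, 3.0]])

W = np.linalg.inv(V)  # the linear map f2(x) = V^{-1} x

def f2(x):
    return W @ x

# Each class mean is sent exactly to a standard basis vector, and by
# linearity of expectation the new class means are the e_i themselves.
print(f2(V[:, 0]))  # close to [1, 0]
print(f2(V[:, 1]))  # close to [0, 1]
```

Because the map is linear and invertible, it needs no iterations and no case analysis, which is the core of the reviewer's objection to the iterative ICR/DCR procedures.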
# Continual Learning: Applications And The Road Forward

| Eli Verwimp∗ | KU Leuven, Belgium |
|-----------------------|---------------------------------------------------------------|
| Rahaf Aljundi | Toyota Motor Europe, Belgium |
| Shai Ben-David | University of Waterloo, and Vector Institute, Ontario, Canada |
| Matthias Bethge | University of Tübingen, Germany |
| Andrea Cossu | University of Pisa, Italy |
| Alexander Gepperth | University of Applied Sciences Fulda, Germany |
| Tyler L. Hayes | NAVER LABS Europe, France |
| Eyke Hüllermeier | University of Munich (LMU), Germany |
| Christopher Kanan | University of Rochester, Rochester, NY, USA |
| Dhireesha Kudithipudi | University of Texas at San Antonio, TX, USA |
| Christoph H. Lampert | Institute of Science and Technology Austria (ISTA) |
| Martin Mundt | TU Darmstadt & hessian.AI, Germany |
| Razvan Pascanu | Google DeepMind, UK |
| Adrian Popescu | Université Paris-Saclay, CEA, LIST, France |
| Andreas S. Tolias | Baylor College of Medicine, Houston, TX, USA |
| Joost van de Weijer | Computer Vision Center, UAB, Barcelona, Spain |
| Bing Liu | University of Illinois at Chicago, USA |
| Vincenzo Lomonaco | University of Pisa, Italy |
| Tinne Tuytelaars | KU Leuven, Belgium |
| Gido M. van de Ven | KU Leuven, Belgium |

Reviewed on OpenReview: **https://openreview.net/forum?id=axBIMcGZn9**

## Abstract

Continual learning is a subfield of machine learning, which aims to allow machine learning models to continuously learn on new data, by accumulating knowledge without forgetting what was learned in the past. In this work, we take a step back, and ask: "Why should one care about continual learning in the first place?". We set the stage by examining recent continual learning papers published at four major machine learning conferences, and show that memory-constrained settings dominate the field.
Then, we discuss five open problems in machine learning, and even though they might seem unrelated to continual learning at first sight, we show that continual learning will inevitably be part of their solution. These problems are model editing, personalization and specialization, on-device learning, faster (re-)training and reinforcement learning. Finally, by comparing the desiderata from these unsolved problems and the current assumptions in continual learning, we highlight and discuss four future directions for continual learning research. We hope that this work offers an interesting perspective on the future of continual learning, while displaying its potential value and the paths we have to pursue in order to make it successful. This work is the result of the many discussions the authors had at the Dagstuhl seminar on Deep Continual Learning, in March 2023.

∗Corresponding author: eli.verwimp@kuleuven.be

## 1 Introduction

Continual learning, sometimes referred to as lifelong learning or incremental learning, is a subfield of machine learning that focuses on the challenging problem of incrementally training models on a stream of data with the aim of accumulating knowledge over time. This setting calls for algorithms that can learn new skills with minimal forgetting of what they had learned previously, transfer knowledge across tasks, and smoothly adapt to new circumstances when needed. This is in contrast with the traditional setting of machine learning, which typically builds on the premise that all data, both for training and testing, are sampled i.i.d. (independent and identically distributed) from a single, stationary data distribution. Deep learning models in particular are in need of continual learning capabilities. A first reason for this is their strong dependence on data.
When trained on a stream of data whose underlying distribution changes over time, deep learning models tend to adapt to the most recent data, thereby "catastrophically" forgetting the information that had been learned earlier (French, 1999). Secondly, continual learning capabilities could reduce the very long training times of deep learning models. When new data are available, current industry practice is to retrain a model fully from scratch on all, past and new, data (see Example 3.4). Such retraining is time inefficient, sub-optimal and unsustainable, with recent large models exceeding 10 000 GPU days of training (Radford et al., 2021). Simple solutions, like freezing feature extractor layers, are often not an option as the power of deep learning hinges on the representations learned by those layers (Bengio et al., 2013). To work well in challenging applications, e.g. in computer vision and natural language processing, these representations often need to be changed.

The paragraph above describes two naive approaches to the continual learning problem. The first one, incrementally training - or finetuning - a model only on the new data, usually suffers from suboptimal performance because the model adapts too strongly to the new data. The second approach, repeatedly retraining a model on all data used so far, is undesirable due to its high computational and memory costs. The goal of continual learning is to find approaches that have a better trade-off between performance and efficiency (e.g. compute and memory) than these two naive ones. In the contemporary continual learning literature, this trade-off typically manifests itself by limiting memory capacity and optimizing performance under this constraint. Computational costs are not often considered in the current continual learning literature, although this is challenged in some recent works, which we discuss in Sections 2 and 4.1.
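The contrast between these two naive strategies can be sketched in a few lines of Python. This is our toy illustration, not code from any of the cited works: the "model" is a single parameter fit by gradient steps, and the stream contains an abrupt distribution shift.

```python
# Toy sketch (ours): the two naive strategies on a data stream.
def sgd_fit(theta, data, lr=0.1, steps=200):
    """A few gradient steps on squared error; converges to the data mean."""
    for _ in range(steps):
        grad = sum(2 * (theta - x) for x in data) / len(data)
        theta -= lr * grad
    return theta

def finetune(stream):
    """Naive 1: keep training the same model on each new chunk only."""
    theta = 0.0
    for chunk in stream:
        theta = sgd_fit(theta, chunk)
    return theta

def retrain_from_scratch(stream):
    """Naive 2: retrain on all data seen so far (forget-free but costly)."""
    seen, theta = [], 0.0
    for chunk in stream:
        seen += chunk
        theta = sgd_fit(0.0, seen)
    return theta

stream = [[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]]   # distribution shift
print(finetune(stream))              # ends near 10.0: the old chunk is forgotten
print(retrain_from_scratch(stream))  # ends near 5.0: fits all data, at higher cost
```

Continual learning methods aim to land between these two extremes: closer to the retrained solution in performance, closer to finetuning in cost.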
In this article, we highlight several practical problems in which there is an inevitable continual learning component, often because there is some form of new data available for a model to train on. We discuss how these problems require continual learning, and how what is constrained and what is optimized differs between them. Constraints are hard limits set by the environment of the problem (e.g. small devices have limited memory), under which other aspects, such as computational cost and performance, need to be optimized. Progress in the problems we discuss goes hand in hand with progress in continual learning, and we hope that they serve as a motivation to continue working on continual learning, and offer an alternative way to look at it and its benefits. Similarly, they can offer an opportunity to align currently common assumptions that stem from the benchmarks we use with those derived from the problems we aim to solve. Section 3 describes some of these problems, and in Section 4 we discuss some exciting future research directions in continual learning, by comparing the desiderata of the discussed problems and contemporary continual learning methods. The aim of this paper is to offer a perspective on continual learning and its future. We do not aim to cover the latest technical progress in the development of continual learning algorithms; for this we refer to e.g. Masana et al. (2022); Zhou et al. (2023); Wang et al. (2023).

## 2 Current Continual Learning

Before exploring different problem settings in which we foresee continual learning as a useful tool, we first wish to understand the current landscape. Our aim is to paint a clear picture of how memory and computational cost are generally approached in current continual learning papers. To achieve this, we examined continual learning papers accepted at four top machine learning conferences (ECCV '22, NeurIPS '22, CVPR '23 and ICML '23) to have a representative sample of the current field.
We considered all papers with either 'incremental', 'continual', 'forgetting', 'lifelong' or *'catastrophic'* in their titles, disregarding false positives; see the Appendix for the methodology.

![2_image_0.png](2_image_0.png)

Figure 1: Most papers strongly restrict data storage **and do not discuss computational cost**. Each dot represents one paper, illustrating what percentage of data their methods store (horizontal axis) and how computational cost is handled (vertical axis). The majority of these papers are in the lower-left corner: those that strongly restrict memory use and do not quantitatively approach computational cost (i.e. it is at most discussed). For more details, see the Appendix.

For our final set of 77 papers, we investigated how they balance the memory and compute cost trade-offs. We discern five categories:

Not discussed: No clear mention of the impact of the proposed method/analysis on the cost.

Discussed: Cost is discussed in text, but not quantitatively compared between methods.

Compared: Cost is quantitatively compared to other methods.

Constrained: Methods are compared using the same limited cost.

Optimized: Cost is among the optimized metrics.

Many continual learning papers use memory in a variety of ways, most often in the form of storing samples, but regularly model copies (e.g. for distillation) or class means and their variances are stored as well. We focus on the amount of stored data, as this is the most common use of memory, but discuss other memory costs in the Appendix. Of the examined papers, all but three constrain the amount of stored samples. So rather than reporting the category, in Figure 1 we report how strongly data storage is constrained, using the percentage of all data that is stored. It is apparent that the majority of these papers do not store any (raw) samples and many store only a small fraction.
There are three notable exceptions that store all the raw data. The first is a paper on continual reinforcement learning (RL) (Fu et al., 2022), something which is not uncommon in RL (see Section 3.5). The second one, by Prabhu et al. (2023a), studies common CL algorithms under a restricted computational cost. Finally, Kishore et al. (2023) study incremental document retrieval, a practical problem involving continual learning. While memory costs (for raw samples) are almost always constrained, computational costs are much less so. Sometimes simply discussing that there is (almost) no additional computational cost can suffice, especially in settings where memory is the crucial limiting factor. Yet it is remarkable that in more than 50% of the papers there is no mention of the computational cost at all. When it is compared, it is often done in the appendix. There are a few notable exceptions among the analyzed papers that focus explicitly on the influence of the computational cost, either by constraining it (Prabhu et al., 2023a; Kumari et al., 2022; Ghunaim et al., 2023) or optimizing it (Wang et al., 2022b; Kishore et al., 2023). For a more elaborate discussion of measuring the computational cost, see Section 4.1.

Together, these results show that many continual learning methods are developed with a low memory constraint, and with limited attention to the computational cost. Memory and compute are but two among the relevant dimensions of continual learning in biological systems (Kudithipudi et al., 2022) and artificial variants (Mundt et al., 2022), yet with the naive solutions of the introduction in mind, they are two crucial components of any continual learning algorithm. In the next section, we introduce some problems for which continual learning is inevitable. They illustrate that methods with a low computational cost are, just like methods that work in memory-restricted settings, important, yet they have not received the same level of attention.
## 3 Continual Learning Is Not A Choice

To solve the problems described in this section, continual learning is necessary and not just a tool that one could use. We argue that in all of them, the problem can, at least partly, be recast as a continual learning problem. This means that the need for continual learning algorithms arises from the nature of the problem itself, and not just from the choice of a specific way of solving it. We start these subsections by explaining what the problem is and why it fundamentally requires continual learning. Next, we briefly discuss current solutions and how they relate to established continual learning algorithms. We conclude each part by laying down what the constraints are and what metrics should be optimized.

## 3.1 Model Editing

It is often necessary to correct wrongly learned predictions from past data. Real-world practice shows us that models are often imperfect, e.g. models frequently learn various forms of decision shortcuts (Lapuschkin et al., 2019), or sometimes the original training data become outdated and are no longer aligned with current facts (e.g. a change in government leaders). Additionally, strictly accumulating knowledge may not always be compliant with present legal regulations and social desiderata. Overcoming existing biases, more accurately reflecting fairness criteria, or adhering to privacy protection regulations (e.g. the right to be forgotten of the GDPR in Europe (European Union, 2016)), represent a second facet of this editing problem. When mistakes are exposed, it is desirable to selectively edit the model without forgetting other relevant knowledge and without re-training from scratch. Such edits should thus only change the output of the model for inputs in the neighborhood of the affected input-output mapping, while keeping all others constant.
The model editing pipeline (Mitchell et al., 2022) first identifies corner cases and failures, then prompts data collection over those cases, and subsequently retrains/updates the model. Recently proposed methods are able to locally change models, yet this comes at a significant cost, or model draw-down, i.e. forgetting of knowledge that was correct (Santurkar et al., 2021). Often the goal of model editing is to change the output associated with a specific input from A to B, yet changing the output to something generic or undefined is an equally interesting case. Such changes can be important in privacy-sensitive applications, to e.g. forget learned faces or other personal attributes. Naively, one could retrain a model from scratch with an updated dataset that no longer contains outdated facts and references to privacy-sensitive subjects, or includes more data on previously out-of-distribution data. To fully retrain on the new dataset, significant computational power and access to all previous training data are necessary. Instead, with effective continual learning, this naive approach can be improved by only changing what should be changed. An ideal solution would be able to continually fix mistakes, at a much lower computational cost than retraining from scratch, without forgetting previously learned and unaffected information. Such a solution would minimize computational cost, while maximizing performance. There is no inherent limitation on memory in this problem, although it can be limited if not all training data are freely accessible.

## 3.2 Personalization And Specialization

Some of the most powerful machine learning models are trained on very large datasets, usually scraped from the Internet. The result is a model that is able to extract useful and diverse features from high-dimensional data. However, the vastness of the data they are trained on also has a downside.
Internet data is generated by many different people, who all have their own preferences and interests. One model cannot fit these conflicting preferences, and the best fit is close to the average internet user (Hu et al., 2022b). However, machine learning models are often used by individuals or small groups, or for highly specific applications. This contradiction makes any possessive references such as 'my car' or 'my favorite band' by construction ambiguous and impossible for the system to understand. Further, Internet scraped data do not always contain (enough) information to reach the best performance in specialized application domains like science and user sentiment analysis (Beltagy et al., 2019). In contrast to Section 3.1, here there are many new data that have a strong relation to each other (e.g. scientific words), whereas model editing typically concerns fewer and less related, individual changes. Domain adaptation and personalization are thus often necessary.

## Example: 3.1

![4_image_0.png](4_image_0.png)

Lazaridou et al. (2021) used the customnews benchmark to evaluate how well a language model trained on news data from 1969 - 2017 performs on data from 2018 and 2019. They find that models perform worse on the newest data, mostly on proper nouns (e.g. "Ardern" or "Khashoggi"), as well as words introduced because of societal changes such as "Covid-19" and "MeToo". They identify a set of 287 new words that were not used in any document prior to 2018. Such new words are inevitable in future texts too. To teach a model these changes they perform updates on the newly arriving data, which gradually improves the performance on the years 2018 and 2019 (a 10% decrease in perplexity), yet at the cost of performance on earlier years (a 5% increase on all previous years). When weighing all years equally, the final model thus got worse than before updating.
The topic has been investigated in the natural language processing (NLP) community for many different applications. Initially, fine-tuning on a supervised domain-specific dataset was the method of choice, but recently, with the success of very large language models (LLMs), the focus has shifted towards changing only a small subset of parameters with adapters (Houlsby et al., 2019), low-rank updates (Hu et al., 2022a) or prompting (Jung et al., 2023). However, these methods do not explicitly identify and preserve important knowledge in the original language model. This hampers the integration of general and domain-specific knowledge and produces weaker results (Ke et al., 2022). Identifying the parameters that are important for the general knowledge in the LLM, in order to protect them, is a challenging problem. Recent works (Ke et al., 2021) made some progress in balancing the trade-off between performance on in-domain and older data. In the computer vision field, similar work has also been done by adapting CLIP to different domains (Wortsman et al., 2022) and to include personal text and image pairs (Cohen et al., 2022). No matter how large or sophisticated the pre-trained models become, there will always be data that they are not, or cannot be, trained on (e.g. tomorrow's data). Specialized and personalized data can be collected afterwards, either by collecting new data or by extracting the relevant bits from the larger, original dataset. The addition of such a specialized dataset is a distribution shift, and thus continual learning algorithms are required. The final goal is to train a specialized or personalized model, more compute-efficient than when trained from scratch, without losing relevant performance on the pre-trained domain. This makes this problem different from transfer learning, where retaining performance on the pre-trained domain is not a requirement (Zhuang et al., 2020). When training is done on the original server, past data are usually available and can be used to prevent forgetting.
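To make the low-rank update idea mentioned above concrete, here is a minimal numpy sketch in the style of LoRA (Hu et al., 2022a). It is our illustration with made-up dimensions, not the published implementation: the frozen weight W is adapted as W + BA with rank r much smaller than the layer dimensions, so only a small fraction of parameters is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4            # made-up layer sizes, rank r << d

W = rng.normal(size=(d_out, d_in))     # pre-trained weight, kept frozen
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection; zero-init
                                       # means the adapted layer starts
                                       # identical to the frozen one

def adapted_forward(x):
    # Only A and B would receive gradients during domain adaptation.
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
assert np.allclose(adapted_forward(x), W @ x)  # no change before training

full, lora = d_out * d_in, (d_out + d_in) * r
print(f"trainable parameters: {lora} of {full} ({lora / full:.1%})")
```

Here only (d_out + d_in)·r = 768 of 8192 parameters are trainable, which illustrates why such updates are attractive when the original model must be largely preserved.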
Yet sometimes this is not the case because training happens on a more restricted (e.g. personal) device; then memory does become a constraint, which we elaborate on in the next subsection.

## 3.3 On-Device Learning

To offer an experience aligned with a user's preferences, or adjusted to a new personal environment, many deep learning applications require updates on the deployed device. Cloud computing is often not available because of communication issues (e.g. in remote locations with restricted internet access, or when dealing with very large quantities of data), or to preserve the privacy of the user (e.g. for domestic robots, monitoring cameras). On such small devices, both memory and computational resources are typically constrained, and the primary goal is to maximize model efficacy under these constraints. These tight constraints often make storing all user data and retraining from scratch infeasible, necessitating continual learning whenever the pre-trained capabilities should not be lost during continued on-device training (see also Example 3.3).

## Example: 3.2

Dingliwal et al. (2023) personalize end-to-end speech recognition models with words that are personal to the user (e.g. family member names) or words that are very rare except in specialized environments (e.g. "ecchymoses" in medical settings). With an extra attention module and a precomputed set of representations of the specialized vocabulary, they 'bias' the original model towards using the new rare and unknown words. The performance on the specialized words is remarkably improved, yet with a decrease in performance on non-specialist word recognition. In their experiments specialized tokens are less than 1% of all tokens, so even a relatively small decrease in performance on other tokens is non-negligible.
![5_image_0.png](5_image_0.png)

These constraints, as well as increasingly complex computations for energy-accuracy trade-offs in real time (Kudithipudi et al., 2023), limit the direct application of optimization techniques typically used in cloud deployments. For example, existing methods only update the final classification layer of a pre-trained feature extractor (Hayes & Kanan, 2022). Yet this relatively lightweight process becomes challenging when there is a large domain gap between the initial training set and the on-device data. The latter is often hard to collect, since labeling large amounts of data by the user is impractical, requiring few-shot solutions. When devices shrink even further, the communication costs become significant, and reading and writing to memory can be up to ∼99% of the total energy budget (Dally, 2022). In addition to algorithmic optimizations for continual learning, architectural optimizations offer interesting possibilities. These enhancements may include energy-efficient memory hierarchies, adaptable dataflow distribution, domain-specific compute optimizations like quantization and pruning, and hardware-software co-design techniques (Kudithipudi et al., 2022).

On-device learning from data that is collected locally almost certainly involves a distribution shift from the original (pre-)training data. This means the sampling process is no longer i.i.d., thus requiring continual learning to maintain good performance on the initial training set. If these devices operate on longer time scales, the data they sample themselves will not be i.i.d. either. To leverage the originally learned information as well as adapt to local distribution changes, such devices require continual learning to operate effectively. Importantly, they should be able to learn using only a limited amount of labeled information, while operating under the memory and compute constraints of the device.
## Example: 3.3

In a 2022 article by MIT Technology Review (Guo, 2022), it was revealed how a robot vacuum cleaner had sent images, in some cases sensitive ones, back to the company, to be labeled and used in further training on central servers. In response, an R&D director of the company stated: "Road systems are quite standard, so for makers of self-driving cars, you'll know how the lane looks [...], but each home interior is vastly different", acknowledging the need to adjust the robots to the environment they are working in. Our homes are highly diverse, but also one of the most intimate and private places that exist. Images can reveal every detail about them and should thus remain private. Adapting to individual homes is necessary, but should not come at the cost of initial smart abilities such as object recognition, collision prevention and planning, which are unlikely to be learned using only locally gathered data.

## 3.4 Faster Retraining With Warm Starting

In many industrial settings, deep neural networks are periodically re-trained from scratch when new data are available, or when a distribution shift is detected. The newly gathered dataset is typically a lot smaller than the original one, which makes starting from scratch a wasteful endeavor. As more and more data is collected, the computational requirements for retraining continue to grow over time. Instead, continual learning can start from the initial model and only update what is necessary to improve performance on the new dataset. Most continual learning methods are not designed for computational efficiency (Harun et al., 2023a), yet Harun et al. (2023b) show that reductions in training time by an order of magnitude are possible, while reaching similar performance.
Successful continual learning would offer a way to drastically reduce the expenses and extraordinary carbon footprint associated with retraining from scratch (Amodei & Hernandez, 2018), without sacrificing accuracy. The challenge is to achieve performance equal to or better than a solution that is trained from scratch, but with fewer additional resources. One could say that it is the performance that is constrained, and computational cost that must be optimized. Simple approaches like warm-starting, i.e. initializing from a previously trained network, can yield poorer generalization than models trained from scratch on small datasets (Ash & Adams, 2020); whether this translates to larger datasets is unclear and remains a debated question. Similar results were found by Berariu et al. (2021) and Dohare et al. (2023), who report a loss of plasticity, i.e. of the ability to learn new knowledge after an initial training phase. In curriculum learning (Bengio et al., 2009), recent works have tried to make learning more efficient by cleverly selecting which samples to train on when (Hacohen et al., 2020). Similarly, active learning (Settles, 2009) studies which unlabeled samples could best be labeled (given a restricted budget) to most effectively learn. Today those fields have to balance learning new information with preventing forgetting, yet with successful continual learning they could focus more on learning new information as well and as quickly as possible.

Minimizing computational cost could also be rephrased as maximizing learning efficiency. Not having to re-learn from scratch whenever new data is available, figuring out the best order to use data for learning, or the best samples to label can all contribute to this goal. Crucially, maximizing knowledge accumulation from the available data is part of this challenge.
Previous work (Hadsell et al., 2020; Hacohen et al., 2020; Pliushch et al., 2022) suggested that even when all data is used together, features are learned in sequential order. Exploiting this order to make learning efficient requires continual learning.

## Example: 3.4

Continuous training is one of the six important building blocks in MLOps (Machine Learning Operations, similar to DevOps), according to a Google white paper on the subject (Salama et al., 2021). This step is considered necessary, in response to performance decays when incoming data characteristics change. They describe in great detail how to optimize this pipeline, from various ways to trigger retraining to automated approaches to deploy retrained models. However, retraining is implicitly considered to be from scratch, which makes most pipelines inherently inefficient. Similarly, other resources stating the importance of retraining ML models and efficient MLOps, at most very briefly consider other options than retraining from scratch (Kreuzberger et al., 2023; Komolafe, 2023; Alla et al., 2021). The efficiency that can be gained here represents an enormous opportunity for the continual learning field, which is clearly illustrated by Huyen (2022) from an industry perspective.

## 3.5 Reinforcement Learning

In reinforcement learning (RL), agents learn by interacting with an environment. This creates a loop, where the agent takes an action within the environment, and receives from the environment an observation and a reward. The goal of the learning process is to learn a policy, i.e. a strategy to choose the next action based on the observations and rewards seen so far, which maximizes the rewards (Sutton & Barto, 2018).
Given that observations and rewards are conditioned on the policy, this leads to a natural non-stationarity, where each improvement step done on the policy can lead the agent to explore new parts of the environment. The implicit non-stationarity of RL can be relaxed to a piece-wise stationary setting in off-policy RL settings (Sutton & Barto, 2018); however, this still implies a continual learning problem. Offline RL (Levine et al., 2020) (e.g. imitation learning) completely decouples the policy used to collect data from the learning policy, leading to a static data distribution, though it is not always applicable and can lead to suboptimal solutions due to the inability of the agent to explore. Lastly, for real-world problems, the environment itself may be non-stationary, either intrinsically so, or through the actions of the agent.

The presence of non-stationarities in reinforcement learning makes efficient learning difficult. To accelerate learning, experience replay has been an essential part of reinforcement learning (Lin, 1992; Mnih et al., 2015). While engaging in new observations, previously encountered state-action pairs are replayed to make training more i.i.d. In contrast to replay in supervised learning, in RL there is less focus on restricting the amount of stored examples, as the cost of obtaining them is considered very high. Instead, the focus is on how to select samples for replay (e.g. Schaul et al., 2016) and how to create new experiences from stored ones (Lin et al., 2021). Additionally, loss of plasticity (e.g. Dohare et al., 2023; Lyle et al., 2022) - the inability to learn new tasks efficiently - and formalizing the concept of continual learning (e.g. Kumar et al., 2023; Abel et al., 2023) also take a much more central role in the RL community. Finally, besides the non-stationarities encountered while learning a single task, agents are often required to learn multiple tasks.
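The replay mechanism described above can be reduced to a few lines. The following is our minimal sketch in the spirit of Lin (1992) and Mnih et al. (2015), not the original implementations: transitions are stored as they arrive, and training draws uniform mini-batches from the buffer, which makes the updates closer to i.i.d.

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay; the oldest transitions drop out when full."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling; prioritized variants (Schaul et al., 2016)
        # replace this with importance-weighted draws.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(50):                  # stand-in for the interaction loop
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(8)                # mixes old and recent experience
```

The bounded capacity is exactly where the continual learning question enters: once old transitions are evicted, the agent must retain what it learned from them without being able to replay them.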
This setting is an active area of research (Wołczyk et al., 2021; Kirkpatrick et al., 2017; Rolnick et al., 2019), particularly since the externally imposed non-stationarity allows the experimenter to control it and probe different aspects of the learning process. RL has its own specific problems with continual learning, e.g. trivially applying rehearsal methods fails in the multi-task setting, and not all parts of the network should be regularized equally (Wolczyk et al., 2022). Issues concerning the inability to learn continually versus the inability to explore an environment efficiently, as well as dealing with concepts like episodic and non-episodic RL, make the study of continual learning in RL more challenging. Further research promises agents that train faster, learn multiple different tasks sequentially and effectively re-use knowledge from previous tasks to work faster and towards more complex goals.

## 4 Future Directions For Continual Learning

In this section we discuss interesting future directions for continual learning research, informed by what was discussed in the previous sections. We start by addressing the motivation for these research directions, followed by a brief overview of existing work, and finally justifying the importance of each concept.

## 4.1 Rethinking Memory And Compute Assumptions

In all of the problems described in the previous section, optimizing or restricting compute complexity plays an important role, often a more central one than memory capacity does. This is in stark contrast to the results in Section 2. The vast majority of papers do not quantitatively approach compute complexity, while storing no, or only very few, samples. Two popular reasons for arguing for a low-storage solution are the cost of memory and privacy concerns, but these arguments are often not relevant in practice. Prabhu et al.
(2023b) calculate that the price to store ImageNet1K for one month is just 66¢, while training a model on it requires $500. This means storing the entire dataset for 63 years is as expensive as training ImageNet once. Further, privacy and copyright concerns are not solved by simply deleting data from the training set, as data can be recovered from derivative models (Haim et al., 2022), and rulings to remove data might only be viable by re-training from scratch (Zhao, 2022) (hence making continual learning superfluous), at least until reliable model editing exists (see Section 3.1).

![8_image_0.png](8_image_0.png)

Figure 2: An overview of the future directions (FD) discussed in Section 4.

Section 3.3 showed that use cases in low memory settings exist, but, as four of the five problems show, there are many reasons to study algorithms that restrict computational cost just like restricted memory settings are studied today. We believe it is important to reconsider these common assumptions on memory and computational cost, and instead derive them from the real-world problems that continual learning algorithms aim to solve. To achieve this goal, we should agree on how to measure computational cost, which is necessary to restrict it. Yet it is not straightforward to do so. Recent approaches use the number of iterations (Prabhu et al., 2023a) and forward/backward passes (Kumari et al., 2022; Harun et al., 2023c), which works well if the used model is exactly the same, but cannot capture architectural differences that influence computational cost. Similarly, when the number of parameters is used (Wang et al., 2022a), more iterations or forward passes do not change the perceived cost. The number of floating point operations (FLOPs) is often used to measure computational cost in computer science, and is a promising candidate, yet is sometimes hard to measure accurately (Wang et al., 2022b).
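The storage-versus-training comparison quoted above from Prabhu et al. (2023b) is easy to verify; the arithmetic below is ours, using only the two figures given in the text.

```python
storage_per_month = 0.66   # $ to store ImageNet1K for one month
training_cost = 500.0      # $ for one training run

months = training_cost / storage_per_month
years = months / 12
print(f"{years:.0f} years of storage ≈ one training run")  # ≈ 63 years
```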
In practice, limiting the size of the memory buffer can reduce the computational cost; it can for example reduce the convergence time, but it does not offer guarantees. Directly controlling the computational cost, rather than a proxy for it, can alter which algorithms perform best, as shown for RL by Javed et al. (2023). Additionally, time to convergence should be considered, as faster convergence also lowers compute time (Schwartz et al., 2020). To properly benchmark compute time and memory use in continual learning algorithms, we should build on this existing literature to attain strong standards for measuring both compute and memory cost and the improvements thereof.

As illustrated in Section 2, there are works that have started to question our common assumptions in continual learning (Harun et al., 2023b). SparCL optimizes compute time explicitly (Wang et al., 2022b), while Prabhu et al. (2023b;a) compare methods while constraining computational cost. Chavan et al. (2023) establish DER (Yan et al., 2021) as a promising method when compute complexity is constrained, while other works suggest that experience replay is likely most efficient (Harun et al., 2023c; Prabhu et al., 2023a). These early works have laid the groundwork, and we believe that it is in continual learning's best interest to push further in this direction, and develop strategies for learning under a tight compute budget, with and especially *without* memory constraints.

## 4.2 Theory

In the past, continual learning research has achieved interesting empirical results. In contrast to classic machine learning, not much is known about whether, and under which conditions, we can expect good results. Many theoretical results rely on the i.i.d. assumption, among which the convergence of stochastic gradient descent and the difference between expected and empirical risk in many PAC-bound analyses (although there are some exceptions, e.g. Pentina & Lampert 2014). Crucially, the i.i.d.
assumption is almost always broken in continual learning, as illustrated by the problems in Section 3. To a certain extent this also happens in training with very large datasets, due to the computational cost of sampling data batches in an i.i.d. fashion compared to ingesting them in a fixed but random order. Not having theoretical guarantees means that continual learning research is often shooting in the dark, hoping to solve a problem that we do not know is solvable in the first place. To understand when and under which assumptions we can find solutions, new concepts in a number of directions need to be developed in order to theoretically grasp continual learning in its full breadth.

A key aspect is optimization. In which sense and under which conditions do continual learning algorithms converge to stable solutions? And what kind of generalization can we expect? We want to emphasize that we should not be misguided by classical notions of those concepts. It might, for instance, be more insightful to think of continual learning as tracking a time-varying target when reasoning about convergence (e.g. Abel et al., 2023). Algorithms designed from this perspective can outperform solutions that are converged in the classical sense, a result of acknowledging the temporal coherence of our world (Sutton et al., 2007; Silver et al., 2008). They show the potential for continual learning algorithms to improve over static systems, although that does not mean more classical, static notions of convergence are not useful, as illustrated by Zimin & Lampert (2019).

Even if it is possible to find a good solution, it is unclear whether this is achievable in reasonable time, and crucially, whether it can be more efficient than re-training from scratch. Knoblauch et al. (2020) show that even in ideal settings optimal continual learning is NP-hard, yet Mirzadeh et al.
(2021) empirically illustrate that often there are linear low-loss paths to the solution, reassuring us that solutions that are easy to find may well exist.

Not all continual learning is equally difficult. An important factor is the relatedness of old and new data. In domain adaptation, David et al. (2010) have shown that without assumptions on the data, some adaptation tasks are simply impossible. Empirically (Zamir et al., 2018) and to some extent theoretically (Prado & Riddle, 2022), we know that in many cases transfer is successful because most tasks are related. Similar results for continual learning are scarce. Besides data similarity, the problem setting is an important second facet. For instance, class-incremental learning is much harder than its task-incremental counterpart, as it additionally requires the prediction of task identities (van de Ven et al., 2022; Kim et al., 2022). We believe that understanding the *difficulty of a problem* and having formal tools expressive enough to describe or understand relatedness between natural data will allow a more principled approach, and better guarantees on possible results.

Finally, theory in continual learning might simply be necessary to deploy continual learning models in a trustworthy manner. Trustworthy deployment requires models to be certified (Huang et al., 2020), i.e. they need to be thoroughly tested before deployment to work as intended. It is however unclear how this would fare in a continual learning setting, as by design, such models will be updated after deployment.

## 4.3 Large-Scale Continual Learning

Most of the problems in Section 3 start when there is a change in the environment of the model and it needs to be updated. These changes are often small compared to the preceding training. The initial models, often referred to as foundation models, are typically powerful generalist models that can perform well on various downstream tasks, e.g. Oquab et al. (2023); Radford et al. (2021).
However, performance gains are generally seen when adapting these models to specific tasks or environments, which compromises the initial knowledge in the pretrained model. In a continuously evolving world, one would expect that this knowledge is subject to continuous editing, updating, and expansion, without losses in performance.

When investigating continual learning that starts with large-scale pretrained models, the challenges might differ from those encountered in continual learning from random initializations and smaller models. In contrast to smaller models, the required adjustments to accommodate new tasks are usually limited compared to the initial training phase, which may result in forgetting being less pronounced than previously anticipated. It is an open question which continual learning techniques are more effective in such a case. For example, Xiang et al. (2023) suggest that parameter regularization mechanisms (Kirkpatrick et al., 2017) are more effective than functional regularization (e.g. distillation approaches (Li & Hoiem, 2017)) in reducing forgetting in a large language model. Additionally, it might not be necessary to update all parameters, which in itself is computationally expensive for large models. Approaches considering adapters (Houlsby et al., 2019; Jia et al., 2022; Li & Liang, 2021), low-rank updates (Hu et al., 2022a), or prompting (Jung et al., 2023) are argued to be more feasible in this setting. Freezing, or using non-uniform learning rates, might be necessary when the amount of data is limited, to prevent optimization and overfitting issues, or to improve the implicit bias of a model when faced with new data (Sutton, 1992). How to adapt models when the required changes are small compared to the original training remains an interesting research direction, with promising initial results (Wang et al., 2022c; Li et al., 2023; Panos et al., 2023).
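As an illustration, the core idea of a LoRA-style low-rank update (Hu et al., 2022a) can be sketched in a few lines. The shapes and rank below are hypothetical placeholders; a real implementation wraps the layers of a large pretrained network rather than a single random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4                  # rank r much smaller than d

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable; zero init => no change at start

def forward(x):
    # Effective weight is W + B @ A, but W itself is never updated.
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
# At initialization the adapted model matches the pretrained one exactly:
assert np.allclose(forward(x), W @ x)

# Trainable parameters vs. full fine-tuning of this layer:
full = W.size             # all weights
lora = A.size + B.size    # only the low-rank factors
print(full, lora)
```

Because only `A` and `B` receive gradients, the per-update cost and the per-task storage shrink with the rank, which is what makes such schemes attractive for repeatedly updating large pretrained models.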
Lastly, in the large-scale learning setting there is a paradigm shift from end-to-end learning towards more modular approaches, where different components are first trained and then stitched together. It is somewhat of an open question what implications this has for continual learning (Ostapenko et al., 2022; Cossu et al., 2022). In the simplest scenario, one could decouple the learning of a representation, done with e.g. contrastive unsupervised learning, from that of a classifier trained with supervision (e.g. Alayrac et al., 2022). Yet this idea can be extended towards using multiple (e.g. domain-specific) experts (Ramesh & Chaudhari, 2021) and using more than one modality (e.g. vision and speech) (Radford et al., 2021). A better understanding of how continual learning algorithms can exploit these settings is required to expand beyond the end-to-end paradigms currently used.

While there has been promising research in these directions, we believe that considerably more is needed. So far, we do not have a strong understanding of the possibilities and limits of small updates on large pretrained models, and how the training dynamics differ from those of the smaller-scale models typically used in continual learning. Further research into the relation between new data and pre-training data might open up new opportunities to apply these smaller updates more effectively, and will ultimately make continual learning more effective in handling all sorts of changes in data distributions. Understanding the interplay between memory and learning, and how to exploit the modular structure of these large models, could enable specific ways to address the continual learning problem.

## 4.4 Continual Learning In A Real-World Environment

Continual learning, in its predominant form, is centered around effective and resource-efficient accumulation of knowledge. The problem description typically starts whenever there is some form of new data available, see Section 3.
How the data is produced is a question that is much less considered in continual learning. We want to emphasize that there is considerable overlap between machine learning subfields (Mundt et al., 2022), in particular with those concerned with detecting changes in data distributions and with techniques that reduce the effort required to label data. It will be important to develop continual learning algorithms with these in mind. These fields depend on and need each other to solve real-world problems, making it crucial that their desiderata align.

Open-world learning (Bendale & Boult, 2015) is such a closely related field. Early work on open-world learning focused on detecting novel classes of objects that were not seen during training, relaxing the typical closed-world assumption in machine learning. A first step to realize open-world learning is detecting a change in incoming data, more formally known as out-of-distribution (OOD) or novelty detection. Detecting such changes requires a certain level of uncertainty-awareness of a model, i.e. it should quantify what it does and does not know. This uncertainty can be split into aleatoric uncertainty, which is an irreducible property of the data itself, and epistemic uncertainty, a result of the model not having learned enough (Hüllermeier & Waegeman, 2021). When modeled correctly, the latter can provide a valuable signal to identify what should be changed in continually trained models (Ebrahimi et al., 2020). Alternatively, it provides a theoretically grounded basis for active learning, which studies how to select the most informative unlabeled data points for labeling (Settles, 2009; Nguyen et al., 2022).

Even when OOD data is properly detected, it might not be directly usable. It can be unlabeled, without sufficient meta-data, or in the worst case corrupted. Many CL algorithms require the new data to be labeled before training, which is always costly and often difficult in e.g. on-device applications.
This process makes it likely that when solving problems as described in Section 3, a model has access to a set of unlabeled data, possibly extended by some labeled samples that are obtained using active learning techniques. To successfully work in such an environment, a model should be able to update itself in a self- or semi-supervised way, an idea recently explored in Fini et al. (2022).

Continual learning depends on the data available to update a model. It is thus important to develop CL algorithms that are well calibrated, capable of OOD detection, and able to learn in an open world. Further, in many settings (see Section 3.3), new data will not, or only partly, be labeled, which requires semi- or self-supervised continual learning (Mundt et al., 2023). We recommend working towards future continual learning algorithms with these considerations in mind, as methods that rely less on the fully labeled, closed-world assumption will likely be more practically usable in the future.

## 5 Conclusion

In this work, we first examined the current continual learning field, and showed that many papers study the memory-restricted setting with little or no concern for the computational cost. The problems we introduced all require some form of continual learning, not because it is a nice-to-have, but because the solution inherently depends on continual learning. Finally, we established four research directions in continual learning that we find promising, in the light of the scenarios we described. In summary, many of these applications are more compute-restricted than memory-restricted, so we advocate exploring this setting more. Further, we believe that a better theoretical understanding, a larger focus on pre-training and comparatively small future updates, and greater attention to how data is obtained will help us solve these problems, and make continual learning a practically useful tool for the described and other machine learning problems.
A summary of the talks and discussions at the Deep Continual Learning seminar in Dagstuhl that inspired this paper can be found in Tuytelaars et al. (2023).

## Broader Impact

This paper does not present any new algorithm or dataset, hence the potential *direct* societal and ethical implications are rather limited. However, continual learning and applications thereof, as we have examined, may have a long-term impact. Reducing computational cost can positively affect the environmental impact machine learning has. Easily editable networks, or ways to quickly update parts of networks as discussed in Sections 3.1 and 3.4, may further democratize the training of machine learning models. Yet this also means that this ability can be exploited by malicious actors to purposely inject false information into a network. Predictions made by those networks could misinform people or lead to harmful decisions. Excessive personalization as described in Section 3.2 may negatively impact community solidarity, yet benefit the individual.

## References

David Abel, André Barreto, Benjamin Van Roy, Doina Precup, Hado van Hasselt, and Satinder Singh. A definition of continual reinforcement learning. *arXiv preprint arXiv:2307.11046*, 2023. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *arXiv preprint arXiv:2204.14198*, 2022. Sridhar Alla and Suman Kalyan Adari. What is MLOps? *Beginning MLOps with MLFlow: Deploy Models in AWS SageMaker, Google Cloud, and Microsoft Azure*, pp. 79–124, 2021. Dario Amodei and Danny Hernandez. AI and compute. https://openai.com/research/ai-and-compute, 2018. Online; accessed 20-June-2023. Jordan Ash and Ryan P Adams. On warm-starting neural network training. *Advances in Neural Information Processing Systems*, 33:3884–3894, 2020. Iz Beltagy, Kyle Lo, and Arman Cohan.
Scibert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676, 2019. Abhijit Bendale and Terrance Boult. Towards open world recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1893–1902, 2015. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In *Proceedings* of the 26th annual International Conference on Machine Learning, pp. 41–48, 2009. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. *IEEE transactions on pattern analysis and machine intelligence*, 35(8):1798–1828, 2013. Tudor Berariu, Wojciech Czarnecki, Soham De, Jorg Bornschein, Samuel Smith, Razvan Pascanu, and Claudia Clopath. A study on the plasticity of neural networks. *arXiv preprint arXiv:2106.00042*, 2021. Vivek Chavan, Paul Koch, Marian Schlüter, and Clemens Briese. Towards realistic evaluation of industrial continual learning scenarios with an emphasis on energy consumption and computational footprint. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11506–11518, 2023. Niv Cohen, Rinon Gal, Eli A Meirom, Gal Chechik, and Yuval Atzmon. "this is my unicorn, fluffy": Personalizing frozen vision-language representations. In *European Conference on Computer Vision*, pp. 558–577. Springer, 2022. Andrea Cossu, Tinne Tuytelaars, Antonio Carta, Lucia Passaro, Vincenzo Lomonaco, and Davide Bacciu. Continual pre-training mitigates forgetting in language and vision. *arXiv preprint arXiv:2205.09357*, 2022. William Dally. On the model of computation: point. *Communications of the ACM*, 65(9):30–32, 2022. Shai Ben David, Tyler Lu, Teresa Luu, and Dávid Pál. Impossibility theorems for domain adaptation. In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, pp. 129–136. JMLR Workshop and Conference Proceedings, 2010. 
Saket Dingliwal, Monica Sunkara, Srikanth Ronanki, Jeff Farris, Katrin Kirchhoff, and Sravan Bodapati. Personalization of CTC speech recognition models. In 2022 IEEE Spoken Language Technology Workshop (SLT), pp. 302–309. IEEE, 2023. Shibhansh Dohare, Juan Hernandez-Garcia, Parash Rahman, Richard Sutton, and Rupam Mahmood. Loss of plasticity in deep continual learning. *Research Square preprint PPR: PPR727015*, 2023. doi: 10.21203/ rs.3.rs-3256479/v1. Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, and Marcus Rohrbach. Uncertainty-guided continual learning with bayesian neural networks. *International Conference on Learning Representations*, 2020. European Union. Regulation (eu) 2016/679 of the european parliament and of the council of 27 april 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46/ec (general data protection regulation). *Official Journal L110*, 59:1–88, 2016. Enrico Fini, Victor G Turrisi Da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, and Julien Mairal. Self-supervised models are continual learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9621–9630, 2022. Robert M French. Catastrophic forgetting in connectionist networks. *Trends in Cognitive Sciences*, 3(4): 128–135, 1999. Haotian Fu, Shangqun Yu, Michael Littman, and George Konidaris. Model-based lifelong reinforcement learning with bayesian exploration. *Advances in Neural Information Processing Systems*, 35:32369–32382, 2022. Yasir Ghunaim, Adel Bibi, Kumail Alhamoud, Motasem Alfarra, Hasan Abed Al Kader Hammoud, Ameya Prabhu, Philip HS Torr, and Bernard Ghanem. Real-time evaluation in online continual learning: A new hope. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11888–11897, 2023. Eileen Guo. A roomba recorded a woman on the toilet. how did screenshots end up on facebook? 
https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/, 2022. Online; accessed 11-October-2023. Guy Hacohen, Leshem Choshen, and Daphna Weinshall. Let's agree to agree: Neural networks share classification order on real datasets. In *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 3950–3960. PMLR, 2020. Raia Hadsell, Dushyant Rao, Andrei A Rusu, and Razvan Pascanu. Embracing change: Continual learning in deep neural networks. *Trends in Cognitive Sciences*, 24(12):1028–1040, 2020. Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, and Michal Irani. Reconstructing training data from trained neural networks. *Advances in Neural Information Processing Systems*, 35:22911–22924, 2022. Md Yousuf Harun, Jhair Gallardo, Tyler L. Hayes, and Christopher Kanan. How efficient are today's continual learning algorithms? In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops*, pp. 2431–2436, June 2023a. Md Yousuf Harun, Jhair Gallardo, Tyler L Hayes, Ronald Kemker, and Christopher Kanan. SIESTA: Efficient online continual learning with sleep. *Transactions on Machine Learning Research*, 2023b. Md Yousuf Harun, Jhair Gallardo, and Christopher Kanan. GRASP: A rehearsal policy for efficient online continual learning. *arXiv preprint arXiv:2308.13646*, 2023c. Tyler L Hayes and Christopher Kanan. Online continual learning for embedded devices. In *Conference on Lifelong Learning Agents*, 2022. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In *International Conference on Machine Learning*, pp. 2790–2799. PMLR, 2019. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.
LoRA: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2022a. URL https://openreview.net/forum?id=nZeVKeeFYf9. Hexiang Hu, Ozan Sener, Fei Sha, and Vladlen Koltun. Drinking from a firehose: Continual learning with web-scale natural language. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 45(5):5684–5696, 2022b. Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, and Xinping Yi. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. *Computer Science Review*, 37:100270, 2020. E. Hüllermeier and W. Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. *Machine Learning*, 110(3):457–506, 2021. doi: 10.1007/s10994-021-05946-3. Chip Huyen. Real-time machine learning: challenges and solutions, Jan 2022. URL https://huyenchip.com/2022/01/02/real-time-machine-learning-challenges-and-solutions.html#towards-continual-learning. Online; accessed 14-November-2023. Khurram Javed, Haseeb Shah, Richard S Sutton, and Martha White. Scalable real-time recurrent learning using columnar-constructive networks. *Journal of Machine Learning Research*, 24:1–34, 2023. Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII*, pp. 709–727. Springer, 2022. Dahuin Jung, Dongyoon Han, Jihwan Bang, and Hwanjun Song. Generating instance-level prompts for rehearsal-free continual learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 11847–11857, 2023. Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, and Lei Shu. Achieving forgetting prevention and knowledge transfer in continual learning.
*Advances in Neural Information Processing Systems*, 34, 2021. Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, and Bing Liu. Adapting a language model while preserving its general knowledge. In *Proceedings of The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP-2022)*, 2022. Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Zixuan Ke, and Bing Liu. A theoretical study on solving continual learning. In *Advances in Neural Information Processing Systems*, 2022. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the National Academy of Sciences*, 114(13):3521–3526, 2017. Varsha Kishore, Chao Wan, Justin Lovelace, Yoav Artzi, and Kilian Q Weinberger. IncDSI: Incrementally updatable document retrieval. In *International Conference on Machine Learning*, pp. 17122–17134. PMLR, 2023. Jeremias Knoblauch, Hisham Husain, and Tom Diethe. Optimal continual learning has perfect memory and is NP-hard. In *International Conference on Machine Learning*, pp. 5327–5337. PMLR, 2020. Akinwande Komolafe. Retraining model during deployment: Continuous training and continuous testing, 2023. URL https://neptune.ai/blog/retraining-model-during-deployment-continuous-training-continuous-testing. Online; accessed 30-June-2023. Dominik Kreuzberger, Niklas Kühl, and Sebastian Hirschl. Machine learning operations (MLOps): Overview, definition, and architecture. *IEEE Access*, 2023. Dhireesha Kudithipudi, Mario Aguilar-Simon, Jonathan Babb, Maxim Bazhenov, Douglas Blackiston, Josh Bongard, Andrew P Brna, Suraj Chakravarthi Raja, Nick Cheney, Jeff Clune, et al. Biological underpinnings for lifelong learning machines. *Nature Machine Intelligence*, 4(3):196–210, 2022. Dhireesha Kudithipudi, Anurag Daram, Abdullah Zyarah, Fatima tuz Zohora, James B.
Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, and Benjamin Epstein. Uncovering design principles for lifelong learning ai accelerators. Nature Electronics (Final Revisions), 2023. Saurabh Kumar, Henrik Marklund, Ashish Rao, Yifan Zhu, Hong Jun Jeon, Yueyang Liu, and Benjamin Van Roy. Continual learning as computationally constrained reinforcement learning. *arXiv preprint* arXiv:2307.04345, 2023. Lilly Kumari, Shengjie Wang, Tianyi Zhou, and Jeff A Bilmes. Retrospective adversarial replay for continual learning. *Advances in Neural Information Processing Systems*, 35:28530–28544, 2022. Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. *Nature* communications, 10(1):1096, 2019. Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, et al. Mind the gap: Assessing temporal generalization in neural language models. *Advances in Neural Information Processing Systems*, 34:29348–29363, 2021. Sergey Levine, Aviral Kumar, George Tucker, and Justin fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020. Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. *arXiv preprint* arXiv:2101.00190, 2021. Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947, 2017. Zhuowei Li, Long Zhao, Zizhao Zhang, Han Zhang, Di Liu, Ting Liu, and Dimitris N Metaxas. Steering prototype with prompt-tuning for rehearsal-free continual learning. *arXiv preprint arXiv:2303.09447*, 2023. Junfan Lin, Zhongzhan Huang, Keze Wang, Xiaodan Liang, Weiwei Chen, and Liang Lin. 
Continuous transition: Improving sample efficiency for continuous control problems via mixup. In *2021 IEEE International* Conference on Robotics and Automation (ICRA), pp. 9490–9497. IEEE, 2021. Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8:293–321, 1992. Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, and Yarin Gal. Learning dynamics and generalization in deep reinforcement learning. In *International Conference on Machine Learning*, pp. 14560–14581. PMLR, 2022. Marc Masana, Xialei Liu, Bartłomiej Twardowski, Mikel Menta, Andrew D Bagdanov, and Joost van de Weijer. Class-incremental learning: survey and performance evaluation on image classification. *IEEE* Transactions on Pattern Analysis and Machine Intelligence, 2022. Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Gorur, Razvan Pascanu, and Hassan Ghasemzadeh. Linear mode connectivity in multitask and continual learning. In *International Conference on Learning* Representations, 2021. URL https://openreview.net/forum?id=Fmg_fQYUejf. Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. Fast model editing at scale. In *International Conference on Learning Representations*, 2022. URL https://openreview. net/forum?id=0DcZxeWfOPt. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *nature*, 518(7540):529–533, 2015. Martin Mundt, Steven Lang, Quentin Delfosse, and Kristian Kersting. CLEVA-compass: A continual learning evaluation assessment compass to promote research transparency and comparability. *International* Conference on Learning Representations, 2022. Martin Mundt, Yongwon Hong, Iuliia Pliushch, and Visvanathan Ramesh. 
A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning. Neural Networks, 160:306–336, 2023. V.L. Nguyen, M.H. Shaker, and E. Hüllermeier. How to measure uncertainty in uncertainty sampling for active learning. *Machine Learning*, 111(1):89–122, 2022. doi: 10.1007/s10994-021-06003-9. Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. *arXiv preprint arXiv:2304.07193*, 2023. Oleksiy Ostapenko, Timothee Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, and Laurent Charlin. Foundational models for continual learning: An empirical study of latent replay, 2022. URL https://arxiv.org/abs/2205.00329. Aristeidis Panos, Yuriko Kobe, Daniel Olmeda Reino, Rahaf Aljundi, and Richard E Turner. First session adaptation: A strong replay-free baseline for class-incremental learning. *arXiv preprint arXiv:2303.13199*, 2023. Anastasia Pentina and Christoph H. Lampert. A PAC-bayesian bound for lifelong learning. In *ICML*, 2014. Iuliia Pliushch, Martin Mundt, Nicolas Lupp, and Visvanathan Ramesh. When Deep Classifiers Agree: Analyzing Correlations Between Learning Order and Image Statistics. *European Conference on Computer* Vision (ECCV), pp. 397–413, 2022. Ameya Prabhu, Hasan Abed Al Kader Hammoud, Puneet K Dokania, Philip HS Torr, Ser-Nam Lim, Bernard Ghanem, and Adel Bibi. Computationally budgeted continual learning: What does matter? In *Proceedings* of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3698–3707, 2023a. Ameya Prabhu, Zhipeng Cai, Puneet Dokania, Philip Torr, Vladlen Koltun, and Ozan Sener. Online continual learning without the storage constraint. *arXiv preprint arXiv:2305.09253*, 2023b. Diana Benavides Prado and Patricia Riddle. 
A theory for knowledge transfer in continual learning. In *Conference on Lifelong Learning Agents*, 2022. Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adria Puigdomenech Badia, Oriol Vinyals, Demis Hassabis, Daan Wierstra, and Charles Blundell. Neural episodic control. In *International Conference on Machine Learning*, pp. 2827–2836. PMLR, 2017. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Rahul Ramesh and Pratik Chaudhari. Model zoo: A growing "brain" that learns continually. *arXiv preprint arXiv:2106.03027*, 2021. David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. *Advances in Neural Information Processing Systems*, 32, 2019. Khalid Salama, Jarek Kazmierczak, and Donna Schut. Practitioners guide to MLOps: A framework for continuous delivery and automation of machine learning. *Google Cloud white paper*, 2021. Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, and Aleksander Madry. Editing a classifier by rewriting its prediction rules. *Advances in Neural Information Processing Systems*, 34:23359–23373, 2021. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In *4th International Conference on Learning Representations, ICLR 2016*, 2016. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. *Nature*, 588(7839):604–609, 2020. Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. Green AI. *Communications of the ACM*, 63(12):54–63, 2020. Burr Settles.
Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009. David Silver, Richard S Sutton, and Martin Müller. Sample-based learning and search with permanent and transient memories. In *Proceedings of the 25th International Conference on Machine Learning*, pp. 968–975, 2008. Richard S Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In *AAAI*, volume 92, pp. 171–176. San Jose, CA, 1992. Richard S. Sutton and Andrew G. Barto. *Reinforcement Learning: An Introduction*. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html. Richard S Sutton, Anna Koop, and David Silver. On the role of tracking in stationary environments. In *Proceedings of the 24th International Conference on Machine Learning*, pp. 871–878, 2007. Tinne Tuytelaars, Bing Liu, Vincenzo Lomonaco, Gido van de Ven, and Andrea Cossu. Deep Continual Learning (Dagstuhl Seminar 23122). *Dagstuhl Reports*, 13(3):74–91, 2023. ISSN 2192-5283. doi: 10.4230/DagRep.13.3.74. URL https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.3.74. Gido M van de Ven, Tinne Tuytelaars, and Andreas S Tolias. Three types of incremental learning. *Nature Machine Intelligence*, 4(12):1185–1197, 2022. Liyuan Wang, Xingxing Zhang, Qian Li, Jun Zhu, and Yi Zhong. Coscl: Cooperation of small continual learners is stronger than a big one. In *European Conference on Computer Vision*, pp. 254–271. Springer, 2022a. Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application. *arXiv preprint arXiv:2302.00487*, 2023. Zifeng Wang, Zheng Zhan, Yifan Gong, Geng Yuan, Wei Niu, Tong Jian, Bin Ren, Stratis Ioannidis, Yanzhi Wang, and Jennifer Dy. SparCL: Sparse continual learning on the edge. *Advances in Neural Information Processing Systems*, 35:20366–20380, 2022b.
Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In *Proceedings of* the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 139–149, 2022c. Maciej Wołczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, and Piotr Miłoś. Continual world: A robotic benchmark for continual reinforcement learning. *Advances in Neural Information Processing Systems*, 34:28496–28510, 2021. Maciej Wolczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, and Piotr Miłoś. Disentangling transfer in continual reinforcement learning. *Advances in Neural Information Processing Systems*, 35:6304–6317, 2022. Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7959–7971, 2022. Jiannan Xiang, Tianhua Tao, Yi Gu, Tianmin Shu, Zirui Wang, Zichao Yang, and Zhiting Hu. Language models meet world models: Embodied experiences enhance language models. *arXiv preprint arXiv:2305.10626*, 2023. Shipeng Yan, Jiangwei Xie, and Xuming He. Der: Dynamically expandable representation for class incremental learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3014–3023, 2021. Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 3712–3722, 2018. Zeyu Zhao. The application of the right to be forgotten in the machine learning context: From the perspective of european laws. *Cath. UJL & Tech*, 31:73, 2022. Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, and Ziwei Liu. 
Deep class-incremental learning: A survey. *arXiv preprint arXiv:2302.03648*, 2023. Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. *Proceedings of the IEEE*, 109(1):43–76, 2020. Alexander Zimin and Christoph H. Lampert. Tasks without borders: A new approach to online multi-task learning. In *ICML Workshop on Adaptive & Multitask Learning*, 2019. URL https://openreview.net/forum?id=HkllV5Bs24.

## A Details Of The Analysis In Section 2

To verify the keywords *'incremental', 'continual', 'forgetting', 'lifelong'* and *'catastrophic'*, used to filter the papers based on their titles, we tested them on a manually collected validation set of papers of which we are certain that they are related to continual learning. This set was collected manually while doing research on continual learning over the past few years. The keywords were present in 96% of the paper titles. From each conference, we randomly picked up to 20 out of all matched papers, disregarding false positives.

It is common to evaluate new methods and analyses on more than one benchmark. Often this means that the percentage of stored samples is not uniform across the experiments in a paper. In Figure 1, we showed the minimum percentage used; in Figure 3, we show the maximum. The conclusion remains the same, and the amount of stored samples is constrained in all but two benchmarks. In Table 1, we list all the papers used in the analysis of Section 2, showing their minimal and maximal sample store ratio (SSR), i.e., the percentage of samples stored, as well as possibly other memory consumption. The last column mentions how they approached the computational cost.

Figure 3: **Most papers strongly restrict memory use and do not discuss computational cost**. This figure is an alternate version of Figure 1, with the maximum percentage of stored samples rather than the minimum.
Each dot represents one paper, illustrating what percentage of data their methods store (horizontal axis) and how computational complexity is handled (vertical axis). The majority of examined papers are in the lower-left corner: those that strongly restrict memory use and do not quantitatively approach computational cost.

Table 1: All papers used in the examination of Section 2. SSR refers to the sample store ratio, i.e., how many samples are stored in relation to the entire dataset.

| # | Venue | Title | SSR (min) | SSR (max) | Memory (other) | Computational cost |
|---|-------|-------|-----------|-----------|----------------|--------------------|
| 1 | ECCV | Balancing Stability And Plasticity Through Advanced Null Space In Continual Learning | 0 | 0 | null spaces | discussed |
| 2 | ECCV | Class-Incremental Novel Class Discovery | 0 | 0 | prototypes | not discussed |
| 3 | ECCV | Prototype-Guided Continual Adaptation For Class-Incremental Unsupervised Domain Adaptation | | | | |
| 4 | ECCV | Few-Shot Class-Incremental Learning Via Entropy-Regularized Data-Free Replay | 0 | 0 | generators | not discussed |
| 5 | ECCV | Anti-Retroactive Interference For Lifelong Learning | 0.02 | 0.2 | / | discussed |
| 6 | ECCV | Long-Tailed Class Incremental Learning | 0.01 | 0.04 | / | not discussed |
| 7 | ECCV | Dlcft: Deep Linear Continual Fine-Tuning For General Incremental Learning | 0.001 | 0.04 | / | not discussed |
| 8 | ECCV | Generative Negative Text Replay For Continual Vision-Language Pretraining | 0 | 0 | / | not discussed |
| 9 | ECCV | Online Continual Learning With Contrastive Vision Transformer | 0.001 | 0.02 | / | not discussed |
| 10 | ECCV | Coscl: Cooperation Of Small Continual Learners Is Stronger Than A Big One | 0 | 0.04 | / | not discussed |
| 11 | ECCV | R-Dfcil: Relation-Guided Representation Learning For Data-Free Class Incremental Learning | | | | |
| 12 | ECCV | Continual Semantic Segmentation Via Structure Preserving And Projected Feature Alignment | | | | |
| 13 | ECCV | Balancing Between Forgetting And Acquisition In Incremental Subpopulation Learning | | | | |
| 14 | ECCV | Few-Shot Class-Incremental Learning For 3d Point Cloud Objects | 0.001 | 0.001 | prototypes | not discussed |
| 15 | ECCV | Meta-Learning With Less Forgetting On Large-Scale Non-Stationary Task Distributions | | | | |
| 16 | ECCV | Novel Class Discovery Without Forgetting | 0 | 0 | prototypes | not discussed |
| 17 | ECCV | Rbc: Rectifying The Biased Context In Continual Semantic Segmentation | 0 | 0 | model copies | not discussed |
| 18 | ECCV | Coarse-To-Fine Incremental Few-Shot Learning | 0 | 0 | / | not discussed |
| 19 | ECCV | Continual Variational Autoencoder Learning Via Online Cooperative Memorization | | | | |
| 20 | ECCV | Dualprompt: Complementary Prompting For Rehearsal-Free Continual Learning | 0 | 0 | prompts | not discussed |
| 21 | CVPR | Incrementer: Transformer for Class-Incremental Semantic Segmentation with Knowledge Distillation Focusing on Old Class | | | | |
| 22 | CVPR | Real-Time Evaluation in Online Continual Learning: A New Hope | 0.001 | 0.001 | / | constrained |
| 23 | CVPR | Heterogeneous Continual Learning | 0 | 0.004 | generators | discussed |
| 24 | CVPR | Decoupling Learning and Remembering: a Bilevel Memory Framework with Knowledge Projection for Task-Incremental Learning | | | | |
| 25 | CVPR | Geometry and Uncertainty-Aware 3D Point Cloud Class-Incremental Semantic Segmentation | | | | |
| 26 | CVPR | Continual Detection Transformer for Incremental Object Detection | 0.1 | 0.1 | model copies | not discussed |
| 27 | CVPR | Continual Semantic Segmentation with Automatic Memory Sample Selection | 0.001 | 0.01 | / | discussed |
| 28 | CVPR | Adaptive Plasticity Improvement for Continual Learning | 0 | 0 | gradient bases | compared |
| 29 | CVPR | VQACL: A Novel Visual Question Answering Continual Learning Setting | 0.001 | 0.09 | / | not discussed |
| 30 | CVPR | Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning | 0 | 0 | model copies | discussed |
| 31 | CVPR | Computationally Budgeted Continual Learning: What Does Matter? | 1 | 1 | / | constrained |
| 32 | CVPR | CoMFormer: Continual Learning in Semantic and Panoptic Segmentation | 0 | 0 | model copies | not discussed |
| 33 | CVPR | PIVOT: Prompting for Video Continual Learning | 0.006 | 0.1 | / | not discussed |
| 34 | CVPR | Class-Incremental Exemplar Compression for Class-Incremental Learning | 0.003 | 0.02 | / | discussed |
| 35 | CVPR | PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning | | | | |
| 36 | CVPR | AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning | 0 | 0 | / | not discussed |
| 37 | CVPR | Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning | | | | |
| 38 | CVPR | On the Stability-Plasticity Dilemma of Class-Incremental Learning | 0.01 | 0.01 | / | discussed |
| 39 | CVPR | MetaMix: Towards Corruption-Robust Continual Learning with Temporally Self-Adaptive Data Transformation | | | | |
| 40 | CVPR | Exploring Data Geometry for Continual Learning | 0.004 | 0.04 | / | not discussed |
| 41 | NeurIPS | Uncertainty-Aware Hierarchical Refinement for Incremental Implicitly-Refined Classification | | | | |
| 42 | NeurIPS | Learning a Condensed Frame for Memory-Efficient Video Class-Incremental Learning | | | | |
| 43 | NeurIPS | S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning | | | | |
| 44 | NeurIPS | NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation | 0.5 | 0.5 | / | discussed |
| 45 | NeurIPS | Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation | 0.008 | 0.1 | / | discussed |
| 46 | NeurIPS | Few-Shot Continual Active Learning by a Robot | 0 | 0 | prototypes | not discussed |
| 47 | NeurIPS | Navigating Memory Construction by Global Pseudo-Task Simulation for Continual Learning | | | | |
| 48 | NeurIPS | SparCL: Sparse Continual Learning on the Edge | 0.004 | 0.01 | / | optimized |
| 49 | NeurIPS | A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal | | | | |
| 50 | NeurIPS | Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting | | | | |
| 51 | NeurIPS | A Theoretical Study on Solving Continual Learning | 0.004 | 0.04 | / | discussed |
| 52 | NeurIPS | Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer | 0 | 0 | gradient bases | compared |
| 53 | NeurIPS | Task-Free Continual Learning via Online Discrepancy Distance Learning | 0.04 | 0.2 | / | compared |
| 54 | NeurIPS | Disentangling Transfer in Continual Reinforcement Learning | 0.1 | 0.1 | / | not discussed |
| 55 | NeurIPS | Less-forgetting Multi-lingual Fine-tuning | 0.5 | 0.5 | / | not discussed |
| 56 | NeurIPS | Model-based Lifelong Reinforcement Learning with Bayesian Exploration | 1 | 1 | / | discussed |
| 57 | NeurIPS | ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation | | | | |
| 58 | NeurIPS | Retrospective Adversarial Replay for Continual Learning | 0.004 | 0.2 | / | constrained |
| 59 | NeurIPS | ACIL: Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection | | | | |
| 60 | NeurIPS | Memory Efficient Continual Learning with Transformers | 0.1 | 0.16 | / | compared |
| 61 | ICML | Poisoning Generative Replay In Continual Learning To Promote Forgetting | 0 | 0 | generators | not discussed |
| 62 | ICML | Dualhsic: Hsic-Bottleneck And Alignment For Continual Learning | 0.01 | 0.1 | / | discussed |
| 63 | ICML | Adaptive Compositional Continual Meta-Learning | 0 | 0 | / | discussed |
| 64 | ICML | Ddgr: Continual Learning With Deep Diffusion-Based Generative Replay | 0 | 0 | generators | compared |
| 65 | ICML | Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts And Concept Rehearsal | | | | |
| 66 | ICML | Birt: Bio-Inspired Replay In Vision Transformers For Continual Learning | 0.003 | 0.04 | / | discussed |
| 67 | ICML | Optimizing Mode Connectivity For Class Incremental Learning | 0.01 | 0.04 | / | not discussed |
| 68 | ICML | Continual Learners Are Incremental Model Generalizers | 0 | 0 | extra modules | discussed |
| 69 | ICML | Learnability And Algorithm For Continual Learning | 0.02 | 0.04 | prototypes | not discussed |
| 70 | ICML | Do You Remember? Overcoming Catastrophic Forgetting For Fake Audio Detection | | | | |
| 71 | ICML | Continual Vision-Language Representation Learning With Off-Diagonal Information | | | | |
| 72 | ICML | Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning | 0 | 0.1 | model copies | not discussed |
| 73 | ICML | Parameter-Level Soft-Masking For Continual Learning | 0 | 0 | model copies | not discussed |
| 74 | ICML | Does Continual Learning Equally Forget All Parameters? | 0.002 | 0.05 | / | compared |
| 75 | ICML | Lifelong Language Pretraining With Distribution-Specialized Experts | 0 | 0 | extra modules | discussed |
| 76 | ICML | Continual Task Allocation In Meta-Policy Network Via Sparse Prompting | 0 | 0 | / | not discussed |
| 77 | ICML | Incdsi: Incrementally Updatable Document Retrieval | 1 | 1 | / | optimized |
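The title-keyword filter described at the start of this appendix can be sketched in a few lines. The function name and example titles below are illustrative, not the authors' actual script:

```python
# Hypothetical sketch of the title-keyword filter used to collect papers.
KEYWORDS = ("incremental", "continual", "forgetting", "lifelong", "catastrophic")

def matches_keywords(title: str) -> bool:
    """Return True if the title contains any of the continual learning keywords."""
    lowered = title.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

titles = [
    "Exploring Data Geometry for Continual Learning",
    "Do You Remember? Overcoming Catastrophic Forgetting For Fake Audio Detection",
    "Attention Is All You Need",
]
matched = [t for t in titles if matches_keywords(t)]  # the first two titles match
```

Matched titles would then be checked by hand to discard false positives, as described above.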
Review 1: Summary: The paper argues that the field of continual learning is limiting itself to a few arbitrary use-cases, when in fact continual learning has much wider applicability. The authors argue about multiple use-cases of CL that are ignored by most of the current research. These use-cases include (1) specializing AI systems to their environment (as opposed to fixing them at the time of deployment), (2) editing the knowledge of the systems when they make mistakes, (3) saving computation by not retraining models from scratch, (4) putting more emphasis on computational constraints, as opposed to memory constraints, and (5) tackling continual learning in a more principled way from the lens of theory. Strengths and Weaknesses: ## Strengths This is a much needed paper for the field of continual learning. A large part of the CL community is plagued by a hand-crafted problem setting---memory-constrained supervised learning---that only covers a small aspect of the need for continual learning, and this paper makes a clear and concise case for why CL has a much wider appeal. I also wholly agree with the authors that the motivations used by the current supervised CL community---privacy and memory constraints---are not well thought out, and the meat of the need for CL lies elsewhere---adapting to distribution shifts, adding and removing knowledge, and specializing the models to their environment at the time of deployment, etc. I would strongly recommend the paper to be accepted. ## Small gripes 1. I think the paper can emphasize some points better. More specifically, prior work by Silver, Koop, Müller, and Sutton has empirically shown that continually learning systems can outperform fixed systems, laying a strong case for CL. A discussion on them would be useful. The two relevant papers are: i) Sutton, Richard S., Anna Koop, and David Silver. "On the role of tracking in stationary environments." Proceedings of the 24th International Conference on Machine Learning. 2007.
ii) Silver, David, Richard S. Sutton, and Martin Müller. "Sample-based learning and search with permanent and transient memories." Proceedings of the 25th international conference on Machine learning. 2008. 2. There has been some work that empirically verifies the impact of computational constraints on online learning/lifelong learning systems. The following paper shows that under computational constraints of FLOPs-per-step, algorithms that are believed to be inferior can out-perform the popular algorithms (See Section 2 for problem setting and computational constraints): Javed, K., Shah, H., Sutton, R. S., & White, M. (2023). Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks. Journal of Machine Learning Research, 24, 1-34. 3. "Freezing, or using non-uniform learning rates, might also be necessary when data is limited to prevent optimization and overfitting issues" Using non-uniform learning rates can also help with better credit assignment for continual learning. Mainly, it can allow continual learning systems to learn to preserve some knowledge, and overwrite other. The following paper touches on these ideas: Sutton, R. S. (1992, July). Adapting bias by gradient descent: An incremental version of delta-bar-delta. In AAAI (Vol. 92, pp. 171-176). There are other works with similar ideas as well, such as Stochastic Meta-Descent (Schraudolph 1999). Requested Changes: I would request the authors to read or skim the papers I listed, and support their arguments with the evidence in the papers (if possible). I'm not making any hard constraints, as the authors have already done an excellent job of supporting their arguments using existing literature, and I trust that they will do a good job including the papers I mentioned. Broader Impact Concerns: No comments ================================================== Review 2: Summary: This paper presents a brief survey of recent papers on continual learning (CL for short, a.k.a. 
lifelong learning), provides five motivating scenarios (error correction, personalization, on-device learning, model re-training and reinforcement learning) and finally identifies four possible future research directions (computational cost, theory, large-scale learning and open-world learning). Strengths and Weaknesses: Strengths - Summarizing very recent papers and pointing out that computational cost is much less explored in the literature. - Writing is mostly clear and easy to read Weaknesses - The number of surveyed papers is limited. The surveyed papers come from only three conferences (ECCV 22, NeurIPS 22 and CVPR 23), which is far from enough for a survey paper. It would be important to cover other major venues such as ICML and ICLR. - The categorization is unclear. As a survey paper, it is crucial to provide a clear categorization of existing methods, distinguishing the key differences and applicability of various algorithms, which is unfortunately missing for the current paper. For example, the title of Sec.3.1 is “Adapting machine learning models locally” but the content is more about correcting prior mistakes and there seems to be nothing specific about “locally”. Moreover, the abstract calls it “model-editing”, which overlaps with Sec.3.2 personalization as one may want to adjust the model for the purpose of personalization/adaptation. - Some motivations are not well connected to continual learning. For example, Sec.3.2 is more related to transfer learning (one-time; adapting a model learned from vast data to a specific domain) than continual learning (lifelong). “Further, Internet scraped data often do not contain (enough) information to reach the best performance in specialized application domains like science and user sentiment analysis.” I beg to disagree. The problem here is more about identifying the relevant information than not having enough information. There are several similar claims that are not substantiated by papers or experiments.
- Only pointing out potential problems without providing enough solutions. Requested Changes: (1) Include more papers from recent years and from more venues. (2) Provide a clear categorization. Minor comments - 10.000 GPU -> 10,000 GPU - Figure 1: why use dots when a number suffices. If visualization is preferred, one can use a heat map. - make learning efficiently -> make learning efficient - Citation: use \citet when the citation is part of the sentence and \citep otherwise. For example, in Sec.3.5, “in off policy RL settings Sutton & Barto (2018)” should be “in off policy RL settings (Sutton & Barto 2018)”. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This is a survey paper that discusses the current state and future directions of continual learning. It presents the necessity of continual learning; the past discussion points, such as memory usage and computation costs; and the future open problems. Strengths and Weaknesses: Strength: This is a plainly written survey paper without too much detail, so it is very accessible to anyone with some knowledge of machine learning. Weakness: 1. This paper presents some of its survey in a quantitative manner, i.e., Figure 1. However, the background context of the past research is not well discussed. For instance, the authors wrote "strongly restrict memory and do not discuss computation cost". There are valid reasons for such a setting. For example, continual learning is often intended for deployment, so training is constrained by future memory usage. Subsequently, such training will use a smaller amount of data, so the computational cost could be less important because of the reduced data to process compared to the main training data. These pros and cons are not discussed, and only the authors' view is presented. An unbiased survey is crucial when writing a survey paper. 2. Only a very limited survey is done, covering three conferences: ECCV 22, NeurIPS 22, CVPR 23.
Now, we are at the end of 2023, so there will be plenty more events to cover. Moreover, why not include ICCV, ICML, etc.? I would say that the survey should be exhaustive to be a published paper in TMLR. 3. More in-depth insight is needed. The key developments of continual learning are not covered in any technical depth. There is no trace of the loss-function development or problem-setting evolution in the field of continual learning. Requested Changes: See the weaknesses above. Broader Impact Concerns: Not much of an ethical concern with this paper. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The decision to accept the paper is based on its substantial contribution to the discourse on continual learning in machine learning. The authors effectively identify and challenge the current limitations within the field, advocating for a broader perspective on the applications and implications of CL. Their engagement with recent literature and incorporation of feedback from reviewers have strengthened the paper's arguments. While the paper may not provide an exhaustive survey of the field, it is well-positioned to stimulate further discussion and research in the field, making it a valuable addition to the journal. I strongly recommend the authors further incorporate suggestions from the reviews in their revision and extend the scope of the paper with the work published during the review process. ==================================================
# On Intriguing Layer-Wise Properties Of Robust Overfitting In Adversarial Training

Anonymous authors Paper under double-blind review

## Abstract

Adversarial training has proven to be one of the most effective methods to defend against adversarial attacks. Nevertheless, robust overfitting is a common obstacle in adversarial training of deep networks. There is a common belief that the features learned by different network layers have different properties; however, existing works generally investigate robust overfitting by considering a DNN as a single unit, and hence the impact of different network layers on robust overfitting remains unclear. In this work, we divide a DNN into a series of layers and investigate the effect of different network layers on robust overfitting. We find that different layers exhibit distinct properties towards robust overfitting, and in particular, robust overfitting is mostly related to the optimization of the latter parts of the network. Based upon the observed effect, we propose a *robust adversarial training* (RAT) prototype: in a minibatch, we optimize the front parts of the network as usual and adopt additional measures to regularize the optimization of the latter parts. Based on the prototype, we design two realizations of RAT, and extensive experiments demonstrate that RAT can eliminate robust overfitting and boost adversarial robustness over standard adversarial training.

## 1 Introduction

Deep neural networks (DNNs) have been widely applied in multiple fields, such as computer vision (He et al., 2016) and natural language processing (Devlin et al., 2018). Despite this success, recent studies show that DNNs are vulnerable to adversarial examples: well-constructed perturbations of the input images that are imperceptible to the human eye can cause DNNs to produce a completely different prediction (Szegedy et al., 2013).
The security concern due to this weakness of DNNs has led to various works on improving the robustness of DNNs against adversarial examples. Among existing defense techniques, Adversarial Training (AT) (Goodfellow et al., 2014; Madry et al., 2017), which optimizes DNNs with adversarially perturbed data instead of natural data, is the most effective approach (Athalye et al., 2018). However, it has been shown that networks trained with AT do not generalize well (Rice et al., 2020): after a certain point in AT, immediately after the first learning rate decay, the robust test accuracy continues to decrease with further training. Typical regularization practices to mitigate overfitting, such as l1 and l2 regularization (weight decay) and data augmentation, are reported to be ineffective for adversarial robustness compared to simple early stopping (Rice et al., 2020). Many studies have attempted to reduce the robust generalization gap in AT, and most have investigated robust overfitting by considering DNNs as a whole. However, DNNs trained on natural images exhibit a common phenomenon: features obtained in the first layers appear to be general and widely applicable, while features computed by the last layers depend on a particular dataset and task (Yosinski et al., 2014). Such behavior of DNNs sparks a question: Do different layers contribute differently to robust overfitting? Intuitively, robust overfitting is an unexpected optimization state in adversarial training, and its occurrence may be closely related to the entire network. Nevertheless, the unique effect of different network layers on robust overfitting is still unclear. Without a detailed understanding of the layer-wise mechanism of robust overfitting, it is difficult to completely demystify its exact underlying cause. In this paper, we provide the first layer-wise diagnosis of robust overfitting.
Specifically, instead of considering the network as a whole, we treat the network as a composition of layers and systematically investigate the impact of different layers on the robust overfitting phenomenon. To do this, we first fix the parameters of selected layers, leaving them unoptimized during AT, and optimize the remaining layer parameters as usual. We discover that robust overfitting is always *mitigated* when the latter layers are left unoptimized, whereas applying the same treatment to other layers is *futile* for robust overfitting, suggesting a strong connection between the optimization of the latter layers and the overfitting phenomenon. Based upon the observed effect, we propose a *robust adversarial training* (RAT) prototype to relieve the issue of robust overfitting. Specifically, RAT works in each mini-batch: it optimizes the front layers as usual, and for the latter layers, it implements additional measures to regularize their optimization. It is a general adversarial training prototype, where the front and latter network layers can be separated by some simple test experiments, and the additional measures to regularize network layer optimization can be versatile. For instance, we design two representative realizations of RAT: RATLR and RATWP. They adopt different strategies to hinder the weight update, namely enlarging the learning rate and perturbing the weights, respectively. Extensive experiments show that the proposed RAT prototype effectively eliminates robust overfitting. The contributions of this work are summarized as follows:

- We provide the first diagnosis of robust overfitting on different network layers and find that there is a strong connection between the optimization of the latter layers and the robust overfitting phenomenon.
- Based on the observed properties of robust overfitting, we propose the RAT prototype, which adopts additional measures to regularize the optimization of the latter layers and is tailored to prevent robust overfitting.
- We design two different realizations of RAT and verify their effectiveness with extensive experiments on a number of standard benchmarks.

## 2 Related Work

## 2.1 Adversarial Training

Since the discovery of adversarial examples, many defense methods have attempted to improve DNN robustness against such adversaries, including adversarial training (Madry et al., 2017), defensive distillation (Papernot et al., 2016), input denoising (Liao et al., 2018), and gradient regularization (Tramèr et al., 2018). So far, adversarial training (Madry et al., 2017) has proven to be the most effective. Adversarial training comprises two optimization problems: the inner maximization constructs adversarial examples by maximizing the loss, and the outer minimization updates the weights by minimizing the loss on adversarial data:

$$\ell^{\mathrm{AT}}(\mathbf{w})=\operatorname*{min}_{w}\sum_{i}\operatorname*{max}_{d(x_{i},x_{i}^{\prime})\leq\epsilon}\ell(f_{w}(x_{i}^{\prime}),y_{i}),\tag{1}$$

where fw is the DNN classifier with weights w, and ℓ(·) is the loss function. d(·, ·) specifies the distance between the original input xi and the adversarial input x′i, which is usually measured in an lp-norm ball such as the l2 or l∞-norm ball, and ϵ is the maximum allowed perturbation.
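The inner maximization of Eq. (1) is typically approximated with projected gradient descent (PGD). Below is a minimal PyTorch sketch; the function name `pgd_attack` and the defaults are ours, chosen to match the ℓ∞, ϵ = 8/255 setup used later in the paper:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Approximate the inner maximization of Eq. (1) with l_inf PGD."""
    # random start inside the eps-ball, clipped to the valid image range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto eps-ball
            x_adv = x_adv.clamp(0, 1)                              # keep a valid image
    return x_adv.detach()
```

The outer minimization then simply trains on `pgd_attack(model, x, y)` instead of `x`.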
A commonly used AT variant is TRADES (Zhang et al., 2019), which optimizes a surrogate loss that trades off natural accuracy against adversarial robustness:

$$\ell^{\mathrm{TRADES}}(\mathbf{w})=\operatorname*{min}_{w}\sum_{i}\left\{\mathrm{CE}(f_{w}(x_{i}),y_{i})+\beta\cdot\operatorname*{max}_{d(x_{i},x_{i}^{\prime})\leq\epsilon}\mathrm{KL}(f_{w}(x_{i})\,||\,f_{w}(x_{i}^{\prime}))\right\},\tag{2}$$

![2_image_0.png](2_image_0.png)

Figure 1: The learning curves of adversarial training using PreAct ResNet-18 on the CIFAR-10 dataset. The depicted curves reveal "robust overfitting": the adversarially trained model briefly achieves 52.01% robust test accuracy shortly after the first learning rate decay. Surprisingly, at this point, the model is more robust than at the end of training, where it only attains 43.95% robust test accuracy against a 20-step PGD adversary with an ℓ∞ radius of ϵ = 8/255. The learning rate is decayed at 100 and 150 epochs.

The surrogate loss consists of two parts: the cross-entropy (CE) loss, which encourages the network to maximize natural accuracy, and the Kullback-Leibler (KL) divergence, which encourages improving robust accuracy. The hyperparameter β controls the trade-off between natural accuracy and adversarial robustness. Another line of work utilizes semi-supervised learning (SSL) techniques. SSL-based methods use additional unlabeled data to improve the robustness of the trained model: a natural model is first trained on labeled data to generate pseudo-labels for the unlabeled data.
Then, a robust model is trained using an adversarial loss function ℓ(w) on both labeled and unlabeled data:

$$\ell^{\mathrm{SSL}}(\mathbf{w})=\ell^{\mathrm{labeled}}(\mathbf{w})+\lambda\cdot\ell^{\mathrm{unlabeled}}(\mathbf{w}),\tag{3}$$

where λ controls the weight on the unlabeled data.

## 2.2 Robust Generalization

An interesting characteristic of deep neural networks (DNNs) is their ability to generalize well in practice (Belkin et al., 2019). In the standard training setting, the test loss is observed to continue decreasing over long periods of training (Nakkiran et al., 2020), so the common practice is to train DNNs for as long as possible. In adversarial training, however, training past a certain point further decreases the robust training loss of the classifier while increasing the robust test loss. Figure 1 depicts this phenomenon for adversarial training on CIFAR-10, where the robust test accuracy initially increases but then drops after the first learning rate decay. This phenomenon is called "robust overfitting"; it shows strong resistance to standard regularization techniques such as l1 and l2 regularization and data augmentation, and can be observed on various datasets, including SVHN, CIFAR-100, and ImageNet (Rice et al., 2020). Schmidt et al. (2018) theorize that robust generalization has a large sample complexity, requiring substantially larger datasets. Many subsequent works have empirically validated this claim, such as AT with semi-supervised learning (Carmon et al., 2019; Zhai et al., 2019; Najafi et al., 2019; Uesato et al., 2019), robust local features (Song et al., 2020), and data interpolation (Lee et al., 2020; Chen et al., 2021). Chen et al. (2020) propose to combine smoothing the logits via self-training and smoothing the weights via stochastic weight averaging to mitigate robust overfitting. Wu et al.
(2020) emphasize the connection between the weight loss landscape and the robust generalization gap, and suggest injecting adversarial perturbations into both inputs and weights during AT to regularize the flatness of the weight loss landscape. The intriguing property of robust overfitting has motivated a great amount of study and investigation (Wang et al., 2019; Zhang et al., 2020; Liu et al., 2021; Rebuffi et al., 2021; Dong et al., 2022b; Yu et al., 2022; Dong et al., 2022a; Li & Spratling, 2023; Wang et al., 2023), but current works typically approach the phenomenon by considering a DNN as a whole. In contrast, our work treats a DNN as a series of layers and reveals a strong connection between robust overfitting and the optimization of the latter layers, providing a novel perspective for better understanding the phenomenon.

## 3 Intriguing Properties Of Robust Overfitting

In this section, we first investigate the layer-wise properties of robust overfitting by fixing model parameters in AT (Section 3.1). Based on our observations, we then propose a robust adversarial training (RAT) prototype to mitigate robust overfitting (Section 3.2). Finally, we design two different realizations of RAT to verify the effectiveness of the proposed method (Section 3.3).

## 3.1 Layer-Wise Analysis Of Robust Overfitting

Current works usually study the robust overfitting phenomenon considering the network as a single unit (Rice et al., 2020; Wu et al., 2020; Yu et al., 2022; Dong et al., 2022a; Li & Spratling, 2023; Wang et al., 2023). However, features computed by different layers exhibit different properties; for example, first-layer features are general while last-layer features are task-specific (Yosinski et al., 2014). We hypothesize that different network layers have different effects on robust overfitting.
To empirically verify the above hypothesis, we deliberately fix the parameters of selected network layers, leaving them unoptimized during AT, and observe the resulting behavior of robust overfitting. Specifically, we consider the PreAct ResNet-18 architecture as a composition of 4 main layers, corresponding to its 4 residual blocks. We then train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers whose parameters are fixed. The robust test performance in Figure 2(a) shows a consistent pattern: robust overfitting is mitigated whenever we fix the parameters of layer 4 during AT, while any setting that does not fix the parameters of layer 4 results in a more severe gap between the best accuracy and the accuracy at the last epoch. For example, for settings such as AT-fix-param-[4], AT-fix-param-[1,4], AT-fix-param-[2,4] and AT-fix-param-[3,4], robust overfitting is significantly reduced. On the other hand, for settings such as AT-fix-param-[1,2], AT-fix-param-[1,3] and AT-fix-param-[2,3], where we fix the parameters of various sets of layers but allow the optimization of layer 4, robust overfitting still widely exists. Even in the extreme case AT-fix-param-[1,2,3], where we fix the first three layers and only allow the optimization of the last layer 4, the gap between the best accuracy and the last accuracy is still obvious. This clearly indicates that the optimization of the latter layers presents a strong correlation with the robust overfitting phenomenon. Note that this relationship can be observed across a variety of datasets, network architectures, AT methods, and threat models (shown in Appendix A), indicating that it is a general property of adversarial training. In many of these settings, robust overfitting is mitigated at the cost of robust test accuracy.
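Mechanically, the AT-fix-param-[...] experiments boil down to disabling gradients for the chosen blocks. A minimal sketch on a toy stand-in network (the helpers `make_net` and `fix_layers` are ours; the actual experiments use PreAct ResNet-18, whose four residual blocks play the role of the four linear "layers" here):

```python
import torch

def make_net():
    # tiny stand-in for PreAct ResNet-18: four "layers" plus a classifier head
    return torch.nn.Sequential(*[torch.nn.Linear(8, 8) for _ in range(4)],
                               torch.nn.Linear(8, 3))

def fix_layers(net, block_ids):
    """AT-fix-param-[...]: freeze the parameters of the selected blocks (1-indexed)."""
    for i in block_ids:
        for p in net[i - 1].parameters():
            p.requires_grad_(False)
```

Gradients still flow *through* a frozen block to earlier blocks, so e.g. AT-fix-param-[4] leaves layers 1-3 fully trainable.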
For example, in AT-fix-param-[3,4], if we leave both layers 3 & 4 unoptimized, robust overfitting practically disappears, but the peak performance is much worse than standard AT. When carefully examining the training performance of these settings, shown in Figure 2(b), we generally observe that the network's capacity to fit adversarial data is strong when we fix the parameters of the front layers, but it gradually weakens as we fix the latter layers. For instance, AT-fix-param-[1] has the highest robust training accuracy, followed by AT-fix-param-[2], AT-fix-param-[3] and AT-fix-param-[4]; AT-fix-param-[1,2,3] has higher training accuracy than AT-fix-param-[2,3,4]. This suggests that fixing the latter layers' parameters regularizes the network more strongly than fixing the front layers' parameters. This stronger regularization may be due to the latter layers containing more parameters than the front layers, or because fixing the parameters of the latter layers leads to more severe underfitting. However, these factors do not adequately explain the layer-wise property of robust overfitting. For example, the front layers also contain a certain number of parameters, yet when we fix the parameters of the front layers, the degree of robust overfitting remains almost unchanged, as shown in Figure 2(a). On the other hand, we observe that fixing parameters in the front layers also leads to some degree of underfitting, as shown in Figure 2(b), yet this has almost no effect on the robust overfitting phenomenon. Nevertheless, in the subsequent sections, we will introduce methods that specifically regularize the optimization of the latter layers, so as to mitigate robust overfitting without trade-offs in robustness.

![4_image_0.png](4_image_0.png)

Figure 2: The robust train/test performance of adversarial training with different sets of network layers fixed. AT-fix-param-[1,2] corresponds to fixing the parameters of layers 1 & 2 during AT.
We will compare the impact on robust overfitting when applying such methods to the front layers vs. the latter layers, further highlighting the importance of the latter layers in relation to robust overfitting.

## 3.2 A Prototype Of RAT

As witnessed in Section 3.1, the optimization of the latter layers during AT is highly correlated with the existence of robust overfitting. To address this, we propose to train the network on adversarial data with restrictions placed on the optimization of the latter layers, dubbed *Robust Adversarial Training* (RAT). RAT adopts additional measures to regularize the optimization of the latter layers and mitigate robust overfitting. The RAT prototype is given in Algorithm 1 and runs as follows. We start with a base adversarial training algorithm A. In Lines 1-3, the inner maximization pass generates adversarial examples by maximizing the loss, and the outer minimization pass then updates the weights to minimize the loss on adversarial data. Line 4 initiates a loop through all parts of the weights w from the front layers to the latter layers. Lines 5-9 then manipulate the different parts of the weights based on their layer conditions: if a part of the weights belongs to the front layers (Cfront), its gradient is kept intact; otherwise, a gradient adjustment strategy S is applied to the parts of the weights corresponding to the latter layers (Clatter). Finally, the optimizer O updates the model fw in Line 11. Note that RAT is a general prototype where the layer conditions Cfront, Clatter and the gradient adjustment strategy S can be versatile. Following the setting in Section 3.1, the ResNet architecture is treated as a composition of 4 main layers, corresponding to its 4 residual blocks. In our subsequent experiments, except for the case that includes all layers, contiguous layers counted forward from layer 1 are regarded as front layers, and contiguous layers counted backward from layer 4 are regarded as latter layers.
For example, layers 1 & 2 form Cfront and layers 3 & 4 form Clatter. S can represent various strategies that serve to regularize the optimization of the latter layers. In the section below, we propose two different strategies S for implementing RAT to demonstrate its effectiveness.

Algorithm 1 RAT-prototype (in a mini-batch).

Require: base adversarial training algorithm A, optimizer O, network fw, model parameters w = {w1, w2, ..., wn}, training data D = {(xi, yi)}, mini-batch B, front and latter layer conditions Cfront and Clatter for fw, gradient adjustment strategy S.

1: Sample a mini-batch B = {(xi, yi)} from D
2: B′ = A.inner_maximization(fw, B)
3: ∇w ← A.outer_minimization(fw, ℓB′)
4: for j = 1, ..., n do
5: &nbsp;&nbsp;&nbsp;&nbsp;if Cfront(wj) then
6: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;∇wj ← ∇wj
7: &nbsp;&nbsp;&nbsp;&nbsp;else if Clatter(wj) then
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;∇wj ← S(fw, B′, ∇wj) # adjust gradient
9: &nbsp;&nbsp;&nbsp;&nbsp;end if
10: end for
11: O.step(∇w)

## 3.3 Two Realizations Of RAT

In this section, we propose two different methods that place restrictions on the optimization of selected parts of the network, and then investigate the robust overfitting behavior when applying each method to the front layers vs. the latter layers. These methods showcase a clear relation between the optimization of the latter layers and the robust generalization gap.

RAT through enlarging the learning rate. In standard AT, the sudden increases in robust test performance appear closely related to the drops of the scheduled learning rate decay. We hypothesize that training without learning rate decay is sub-optimal for fitting the training data, which in turn regularizes the learning process of adversarial training. A comparison of the train/test performance between standard AT and AT without learning rate decay (AT-fix-lr-[1,2,3,4]) is shown in Figure 3(b).
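The per-mini-batch update of Algorithm 1 can be sketched in PyTorch. The inner maximization is abstracted away (we assume `x_adv` has already been generated by the base algorithm A), and `is_latter` and `strategy` are our stand-ins for the condition Clatter and the adjustment S:

```python
import torch

def rat_step(net, opt, x_adv, y, is_latter, strategy):
    """One RAT mini-batch update: gradients of latter-layer parameters pass
    through the adjustment strategy S before the optimizer step."""
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(net(x_adv), y)
    loss.backward()                              # Line 3: outer-minimization gradients
    for name, p in net.named_parameters():       # Lines 4-10: loop over weight parts
        if p.grad is not None and is_latter(name):
            p.grad = strategy(p.grad)            # Line 8: adjust gradient
    opt.step()                                   # Line 11
    return loss.item()
```

Standard AT is recovered with `strategy = lambda g: g`; the two realizations below correspond to specific choices of S.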
The training performance of standard AT accelerates quickly right after the first learning rate drop, expanding the generalization gap with further training, whereas for AT without learning rate decay, the training performance increases slowly and maintains a stable generalization gap. This suggests that AT optimized without learning rate decay has less capacity to fit adversarial data and thus provides the regularization needed to relieve robust overfitting. As our previous analysis suggests that the optimization of the latter layers is more important in mitigating robust overfitting, we propose using a fixed learning rate of 0.1 for optimizing the latter parts of the network while applying the piecewise-decay learning rate to the front parts, in order to close the robust generalization gap. We refer to this approach as a realization of RAT through enlarging the learning rate, namely RATLR. Note that there is no difference between enlarging the learning rate and enlarging the gradients, since the amplification coefficient has the same effect whether it increases the gradients or the learning rate inside O. Therefore, compared to standard AT, RATLR essentially enlarges the weight gradients ∇wj of the latter layers by a factor of 10 after the first learning rate decay and 100 after the second:

$$\nabla_{w_{j}}=\eta\nabla_{w_{j}},\tag{4}$$

where η is the amplification coefficient. To demonstrate the effectiveness of RATLR, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers whose learning rate is fixed to 0.1 while maintaining the piecewise learning rate schedule for the other layers. Figure 3(a) validates our proposition.
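One way to realize RATLR is through optimizer parameter groups: the front group follows the piecewise decay while the latter group keeps lr = 0.1 throughout, which is equivalent to amplifying its gradients by η = 10 and 100 after the two decays (Eq. (4)). A sketch of the scheduling logic under our assumed helper names, not the full training loop:

```python
import torch

def make_ratlr_optimizer(front_params, latter_params):
    """Two SGD parameter groups: group 0 = front layers (decayed),
    group 1 = latter layers (lr stays at 0.1)."""
    return torch.optim.SGD(
        [{"params": front_params, "lr": 0.1},
         {"params": latter_params, "lr": 0.1}],
        momentum=0.9, weight_decay=5e-4)

def decay_front(opt, factor=10.0):
    opt.param_groups[0]["lr"] /= factor  # only the front group decays
```

With the paper's schedule, `decay_front` would be called at epochs 100 and 150, leaving the latter-layer learning rate untouched.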
![6_image_0.png](6_image_0.png)

![6_image_1.png](6_image_1.png)

(a) Robust test performance of all settings (b) Standard AT vs AT using a fixed learning rate

![6_image_2.png](6_image_2.png)

(c) AT using a fixed learning rate for all layers vs the latter layers

Figure 3: The train/test performance of adversarial training using a fixed learning rate for different sets of network layers. AT-fix-lr-[1,2] corresponds to using a fixed learning rate for layers 1 & 2 during AT.

Robust overfitting is relieved for all settings that target sets of layers including layer 4 (AT-fix-lr-[4], AT-fix-lr-[1,4], AT-fix-lr-[2,4], etc.), while any setting that fixes the learning rate of layers excluding layer 4 does not reduce robust overfitting. Furthermore, all settings that fix the learning rate of both layers 3 & 4, including AT-fix-lr-[3,4], AT-fix-lr-[1,3,4], AT-fix-lr-[2,3,4] and AT-fix-lr-[1,2,3,4], completely eliminate robust overfitting. These observations indicate that regularizing the optimization of the latter layers by optimizing them without learning rate decay can prevent robust overfitting from occurring. An important observation is that RATLR (AT-fix-lr-[3,4]) both overcomes robust overfitting and achieves better robust test performance than the network using a fixed learning rate for all layers (AT-fix-lr-[1,2,3,4]). Examining the training performance of these two settings in Figure 3(c), we find that RATLR exhibits a rapid rise in both robust and standard training performance immediately after the first learning rate decay, similar to standard AT. The training performance of RATLR benefits from the learning rate decay in layers 1 & 2, making a notable improvement compared to AT-fix-lr-[1,2,3,4].
By training layers 3 & 4 without learning rate decay, we restrict the optimization of only the latter parts of the network, which are heavily responsible for robust overfitting; this relieves robust overfitting without sacrificing too much performance. The experimental results provide another indication that the latter layers have stronger connections to robust overfitting than the front layers do, and that regularizing the optimization of the latter layers via the learning rate can effectively mitigate robust overfitting.

RAT through adversarial weight perturbation. We continue to study the impact of different network layers on the robust overfitting phenomenon from the perspective of adversarial weight perturbation (AWP). Wu et al. (2020) propose AWP as a method to explicitly flatten the weight loss landscape by introducing adversarial perturbations into both inputs and weights during AT:

$$\operatorname*{min}_{w}\operatorname*{max}_{v\in V}\sum_{i}\operatorname*{max}_{d(x_{i},x_{i}^{\prime})\leq\epsilon}\ell(f_{w+v}(x_{i}^{\prime}),y_{i}),\tag{5}$$

where V is a feasible region for the perturbation v, and v is the adversarial weight perturbation generated by maximizing the classification loss:

$$v=\nabla_{w}\sum_{i}\ell_{i},\tag{6}$$

where ℓi is the adversarial loss of x′i. The norm of the weight perturbation vj is then restricted by its size relative to the norm of the model weight wj:

$$||v_{j}||=\gamma||w_{j}||,\tag{7}$$

where γ is the constraint on the weight perturbation size. As AWP keeps injecting worst-case perturbations into the weights during training, it can also be viewed as a means to regularize the optimization of AT.

![7_image_0.png](7_image_0.png)

Figure 4: The train/test performance of adversarial training when applying AWP to different sets of network layers. AT-awp-[1,2] means only layers 1 & 2 have their weights perturbed using AWP.
In fact, the training of AWP exhibits a negative robust generalization gap, where the robust training accuracy falls short of the robust test accuracy by a large margin, as shown in Figure 4(c). This indicates that AWP puts significant restrictions on the optimization of AT, introducing a large trade-off in training performance. As our previous analysis suggests a strong correlation between robust overfitting and the optimization of the latter layers, we argue that AWP's capacity to mitigate robust overfitting is mostly due to the perturbations occurring on the latter layers' weights. As such, we propose to specifically apply AWP to the latter half of the network and refer to this method as RATWP. In essence, RATWP computes the adversarial weight perturbation vj under the layer condition Clatter(wj), so that only the parts of the weights along the latter half of the network are perturbed:

$$\min_{\mathbf{w}=[w_{1},...,w_{j},...,w_{m}]}\max_{\mathbf{v}=[0,...,v_{j},...,0]\in V}\sum_{i}\max_{d(x_{i},x_{i}^{\prime})\leq\epsilon}\ell(f_{\mathbf{w}+\mathbf{v}}(x_{i}^{\prime}),y_{i}),\tag{8}$$

$$v_{j}=\nabla_{w_{j}}\sum_{i}\ell_{i},\tag{9}$$

$$||v_{j}||=\gamma||w_{j}||.\tag{10}$$

To demonstrate the effectiveness of RATWP, we train multiple PreAct ResNet-18 networks on CIFAR-10 for 200 epochs using AT, each time selecting a set of network layers whose weights are locally perturbed using AWP. As seen in Figure 4(a), only 3 settings can overcome robust overfitting, namely AT-awp-[3,4], AT-awp-[1,3,4] and AT-awp-[2,3,4]. These settings share one key similarity: both layers 3 & 4 have their weights adversarially perturbed during AT. Simply applying AWP to any set of layers that excludes layers 3 & 4 is not sufficient to mitigate robust overfitting. This shows that AWP is effective in mitigating robust overfitting only when applied to both layer 3 and layer 4.
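A minimal sketch of the layer-restricted perturbation in Eqs. (8)-(10). The helpers `ratwp_perturb` and `restore` are our names, and this is a simplification: the full AWP procedure re-solves the perturbation at every step and subtracts it again after the optimizer update:

```python
import torch

def ratwp_perturb(net, x_adv, y, is_latter, gamma=0.01):
    """Perturb only parameters satisfying the latter-layer condition with
    v_j in the gradient-ascent direction, scaled so ||v_j|| = gamma * ||w_j||."""
    loss = torch.nn.functional.cross_entropy(net(x_adv), y)
    grads = torch.autograd.grad(loss, list(net.parameters()))  # Eq. (9)
    applied = []
    with torch.no_grad():
        for (name, p), g in zip(net.named_parameters(), grads):
            if is_latter(name) and g.norm() > 0:
                v = gamma * p.norm() / g.norm() * g            # Eq. (10) scaling
                p.add_(v)
                applied.append((p, v))
    return applied  # caller trains on the perturbed weights, then calls restore

def restore(applied):
    with torch.no_grad():
        for p, v in applied:
            p.sub_(v)
```

Front-layer parameters are left untouched, so only the latter half of the network sees the worst-case weight perturbation.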
Even when AWP is applied to the first 3 of the 4 layers (AT-awp-[1,2,3]), robust overfitting still widely exists. In other words, it is essential that the adversarial weight perturbations occur at the latter part of the network in order to mitigate robust overfitting. To examine this phenomenon in detail, we compare the training performance of AWP applied to the front layers (represented by AT-awp-[1,2,3]) vs. AWP applied to the latter layers (represented by AT-awp-[3,4]), shown in Figure 4(b). AWP applied to the front layers has much better training performance than AWP applied to the latter layers. Furthermore, AWP applied to the front layers reveals a positive robust generalization gap (training accuracy > test accuracy) shortly after the first drop in learning rate, which continues to widen with further training. Conversely, AWP applied to the latter layers exhibits a negative robust generalization gap throughout most of the training, only converging to 0 after the second drop in learning rate. These differences demonstrate that worst-case perturbations, when injected into the latter layers' weights, have a more powerful impact in regularizing the optimization of AT. Consistent with our previous findings, AWP applied to the latter layers can be considered an approach to regularize the optimization of AT in those layers, which successfully mitigates robust overfitting. This finding supports our analysis thus far, further demonstrating that regularizing the optimization of the latter layers is key to improving robust generalization.

## 4 Experiment

In this section, we conduct extensive experiments to verify the effectiveness of RATLR and RATWP. Details of the experimental settings and performance evaluation are introduced below.

## 4.1 Experimental Setup

We conduct extensive experiments on the two realizations of RAT under two threat models (L∞ and L2) across three benchmark datasets:

- CIFAR-10 (Krizhevsky et al., 2009).
The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labeled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class, with 5000 training and 1000 testing images per class.
- CIFAR-100 (Krizhevsky et al., 2009). The CIFAR-100 dataset (Canadian Institute for Advanced Research, 100 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The 100 classes in CIFAR-100 are grouped into 20 superclasses. There are 600 images per class. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs). There are 500 training images and 100 testing images per class.
- SVHN (Netzer et al., 2011). Street View House Numbers (SVHN) is a digit classification benchmark dataset that contains 600,000 32×32 RGB images of printed digits (from 0 to 9) cropped from pictures of house number plates. The cropped images are centered on the digit of interest, but nearby digits and other distractors are kept in the image. SVHN has three sets: a training set, a testing set, and an extra set with 530,000 less difficult images that can be used to help the training process.

We use PreAct ResNet-18 (He et al., 2016) and Wide ResNet-34-10 (Zagoruyko & Komodakis, 2016), following the same hyperparameter settings for AT as in Rice et al.
(2020): for the L∞ threat model, ϵ = 8/255 and the step size is 1/255 for SVHN and 2/255 for CIFAR-10 and CIFAR-100; for the L2 threat model, ϵ = 128/255 and the step size is 15/255 for all datasets. For training, all models are trained under a 10-step PGD (PGD-10) attack for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and a piecewise learning rate schedule with an initial learning rate of 0.1. Standard data augmentation techniques, including random cropping with 4 pixels of padding and random horizontal flips, are applied. The models are decomposed into a series of 4 main layers, corresponding to the 4 residual blocks of the ResNet architecture. For RATLR, the learning rate for layers 3 & 4 is set to a fixed value of 0.1. For RATWP, weight perturbation is applied to layers 3 & 4, and the other hyperparameters are configured as per the original paper.

| Network | Threat Model | Method | PGD-20 Best | PGD-20 Last | PGD-20 Diff | AA Best | AA Last | AA Diff |
|---|---|---|---|---|---|---|---|---|
| PreAct ResNet-18 | L∞ | AT | 52.31 | 44.45 | 7.86 | 47.95 | 42.05 | 5.90 |
| PreAct ResNet-18 | L∞ | RATLR | 51.57 | 49.07 | 2.50 | 46.89 | 45.35 | 1.54 |
| PreAct ResNet-18 | L∞ | RATWP | 54.85 | 53.98 | 0.87 | 49.19 | 48.24 | 0.95 |
| PreAct ResNet-18 | L2 | AT | 69.27 | 65.86 | 3.41 | 67.65 | 64.64 | 3.01 |
| PreAct ResNet-18 | L2 | RATLR | 68.97 | 68.21 | 0.76 | 64.26 | 63.44 | 0.82 |
| PreAct ResNet-18 | L2 | RATWP | 70.77 | 69.49 | 1.28 | 68.29 | 67.11 | 1.18 |
| Wide ResNet-34-10 | L∞ | AT | 55.57 | 47.37 | 8.20 | 52.13 | 46.09 | 6.04 |
| Wide ResNet-34-10 | L∞ | RATLR | 55.50 | 47.32 | 8.18 | 52.05 | 45.89 | 6.16 |
| Wide ResNet-34-10 | L∞ | RATWP | 58.92 | 58.23 | 0.69 | 54.46 | 53.98 | 0.48 |
| Wide ResNet-34-10 | L2 | AT | 70.57 | 68.99 | 1.58 | 69.44 | 66.92 | 2.52 |
| Wide ResNet-34-10 | L2 | RATLR | 71.91 | 68.94 | 2.96 | 70.53 | 67.90 | 2.63 |
| Wide ResNet-34-10 | L2 | RATWP | 71.31 | 69.19 | 2.12 | 70.12 | 67.35 | 2.77 |

Table 1: Test robustness (%) on CIFAR-10. We omit the standard deviations of 5 runs as they are very small (< 0.6%).

| Network | Threat Model | Method | PGD-20 Best | PGD-20 Last | PGD-20 Diff | AA Best | AA Last | AA Diff |
|---|---|---|---|---|---|---|---|---|
| PreAct ResNet-18 | L∞ | AT | 28.07 | 21.24 | 6.83 | 23.61 | 18.41 | 5.20 |
| PreAct ResNet-18 | L∞ | RATLR | 26.57 | 26.18 | 0.39 | 21.77 | 21.22 | 0.55 |
| PreAct ResNet-18 | L∞ | RATWP | 30.91 | 30.42 | 0.49 | 25.52 | 24.57 | 1.05 |
| PreAct ResNet-18 | L2 | AT | 41.38 | 35.34 | 6.04 | 37.94 | 33.58 | 4.36 |
| PreAct ResNet-18 | L2 | RATLR | 38.31 | 37.76 | 0.55 | 35.16 | 34.49 | 0.77 |
| PreAct ResNet-18 | L2 | RATWP | 45.23 | 44.93 | 0.30 | 41.32 | 39.47 | 1.85 |
| Wide ResNet-34-10 | L∞ | AT | 30.74 | 24.89 | 5.85 | 26.98 | 23.07 | 3.91 |
| Wide ResNet-34-10 | L∞ | RATLR | 30.57 | 23.53 | 7.04 | 26.72 | 22.53 | 4.19 |
| Wide ResNet-34-10 | L∞ | RATWP | 30.81 | 25.46 | 5.35 | 27.11 | 23.56 | 3.55 |
| Wide ResNet-34-10 | L2 | AT | 44.05 | 41.22 | 2.83 | 41.39 | 39.34 | 2.05 |
| Wide ResNet-34-10 | L2 | RATLR | 44.43 | 40.42 | 4.01 | 41.47 | 39.42 | 2.05 |
| Wide ResNet-34-10 | L2 | RATWP | 46.12 | 44.64 | 1.48 | 41.94 | 40.38 | 1.56 |

Table 2: Test robustness (%) on CIFAR-100. We omit the standard deviations of 5 runs as they are very small (< 0.6%).
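The random-crop-and-flip augmentation described in this setup can be sketched in plain PyTorch (a stand-in for the usual torchvision transforms; the helper name `augment` is ours):

```python
import torch
import torch.nn.functional as F

def augment(img, pad=4):
    """Random crop with `pad`-pixel zero padding plus a random horizontal flip,
    as used in the training setup above."""
    c, h, w = img.shape
    padded = F.pad(img, (pad, pad, pad, pad))     # pad width and height by `pad` each side
    top = int(torch.randint(0, 2 * pad + 1, (1,)))
    left = int(torch.randint(0, 2 * pad + 1, (1,)))
    out = padded[:, top:top + h, left:left + w]   # crop back to h x w
    if torch.rand(1).item() < 0.5:
        out = out.flip(-1)                        # horizontal flip
    return out
```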
For testing, the robust accuracy is evaluated under two different adversarial attacks: 20-step PGD (PGD-20) and Auto Attack (AA) (Croce & Hein, 2020b). Auto Attack is considered the most reliable robustness evaluation to date; it is an ensemble of complementary attacks, consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)).

| Network | Threat Model | Method | PGD-20 Best | PGD-20 Last | PGD-20 Diff | AA Best | AA Last | AA Diff |
|---|---|---|---|---|---|---|---|---|
| PreAct ResNet-18 | L∞ | AT | 53.10 | 44.12 | 8.98 | 45.09 | 40.36 | 4.73 |
| PreAct ResNet-18 | L∞ | RATLR | 53.32 | 43.41 | 9.92 | 45.98 | 39.61 | 6.37 |
| PreAct ResNet-18 | L∞ | RATWP | 57.91 | 54.32 | 3.58 | 50.32 | 44.82 | 5.49 |
| PreAct ResNet-18 | L2 | AT | 66.29 | 64.73 | 1.55 | 63.55 | 60.14 | 3.41 |
| PreAct ResNet-18 | L2 | RATLR | 66.47 | 62.10 | 4.36 | 62.44 | 58.72 | 3.72 |
| PreAct ResNet-18 | L2 | RATWP | 71.66 | 65.68 | 5.98 | 65.17 | 59.64 | 5.53 |
| Wide ResNet-34-10 | L∞ | AT | 55.57 | 47.11 | 8.46 | 48.05 | 42.46 | 5.59 |
| Wide ResNet-34-10 | L∞ | RATLR | 55.34 | 46.81 | 8.53 | 47.94 | 42.12 | 5.82 |
| Wide ResNet-34-10 | L∞ | RATWP | 58.48 | 54.92 | 3.56 | 54.65 | 50.46 | 3.99 |
| Wide ResNet-34-10 | L2 | AT | 67.19 | 65.08 | 2.11 | 62.58 | 60.86 | 1.72 |
| Wide ResNet-34-10 | L2 | RATLR | 67.50 | 64.24 | 3.27 | 62.79 | 59.94 | 2.85 |
| Wide ResNet-34-10 | L2 | RATWP | 69.07 | 64.76 | 4.31 | 63.12 | 59.57 | 3.55 |

Table 3: Test robustness (%) on SVHN. We omit the standard deviations of 5 runs as they are very small (< 0.6%).
| Method | Network | Natural Best | Natural Last | Natural Diff | PGD-20 Best | PGD-20 Last | PGD-20 Diff | AA Best | AA Last | AA Diff |
|---|---|---|---|---|---|---|---|---|---|---|
| AT | PreAct ResNet-18 | 81.16 | 84.44 | -3.28 | 52.31 | 44.45 | 7.86 | 47.95 | 42.05 | 5.90 |
| AWP | PreAct ResNet-18 | 81.11 | 82.00 | -0.89 | 55.39 | 54.73 | 0.66 | 50.12 | 49.85 | 0.27 |
| RATWP | PreAct ResNet-18 | 82.24 | 83.36 | -1.12 | 54.85 | 53.98 | 0.87 | 49.19 | 48.24 | 0.95 |
| AT | Wide ResNet-34-10 | 85.49 | 86.50 | -1.01 | 55.57 | 47.37 | 8.20 | 52.13 | 46.09 | 6.04 |
| AWP | Wide ResNet-34-10 | 85.30 | 85.39 | -0.09 | 58.35 | 57.16 | 1.19 | 54.07 | 53.49 | 0.58 |
| RATWP | Wide ResNet-34-10 | 86.08 | 86.23 | -0.15 | 58.92 | 58.23 | 0.69 | 54.46 | 53.98 | 0.48 |
| TRADES | PreAct ResNet-18 | 82.77 | 82.94 | -0.17 | 52.67 | 49.65 | 3.02 | 49.28 | 46.80 | 2.48 |
| TRADES-AWP | PreAct ResNet-18 | 82.05 | 82.57 | -0.52 | 55.44 | 54.81 | 0.63 | 51.42 | 50.73 | 0.69 |
| TRADES-RATWP | PreAct ResNet-18 | 82.61 | 83.08 | -0.47 | 55.33 | 54.85 | 0.48 | 51.34 | 51.04 | 0.30 |

Table 4: The comparison between RATWP and AWP on the CIFAR-10 dataset under the L∞ threat model with different adversarial training methods and network architectures.

## 4.2 Performance Evaluation

In this section, we present the experimental results of RATLR and RATWP across three benchmark datasets.

CIFAR-10 Results. The evaluation results on the CIFAR-10 dataset are summarized in Table 1, where "Best" is the highest test robustness achieved during training, "Last" is the test robustness at the last epoch checkpoint, and "Diff" denotes the robust accuracy gap between "Best" and "Last". It is observed that RATWP generally achieves the best robust performance compared to RATLR and standard AT.
Regardless, both RATLR and RATWP tighten the robustness gaps by a significant margin, indicating that they can effectively suppress robust overfitting.

CIFAR-100 Results. We also show the results on the CIFAR-100 dataset in Table 2. We observe behavior similar to CIFAR-10: both RATLR and RATWP significantly reduce the robustness gaps. In terms of robustness improvement, RATWP stands out as the leading method. The results further verify the effectiveness of the proposed approach.

SVHN Results. The results on the SVHN dataset are shown in Table 3, where the robustness gaps are also narrowed to a small margin by RATWP. SVHN is a special case where the RATLR strategy does not improve robust overfitting. Unlike CIFAR-10 and CIFAR-100, the learning rate decay in SVHN's training has little connection to the sudden increases in robust test performance or the prevalence of robust overfitting, which makes RATLR ineffective. Apart from this, the improvement in robust generalization gaps can be witnessed in all cases, demonstrating that the proposed approaches are generic and widely applicable.

| Method | Layer | Natural Best | Natural Last | Natural Diff | PGD-20 Best | PGD-20 Last | PGD-20 Diff | AA Best | AA Last | AA Diff |
|---|---|---|---|---|---|---|---|---|---|---|
| TRADES-RATWP | [-] | 82.77 | 82.94 | -0.17 | 52.67 | 49.65 | 3.02 | 49.28 | 46.80 | 2.48 |
| TRADES-RATWP | [4] | 82.98 | 83.11 | -0.13 | 52.35 | 49.76 | 2.59 | 48.48 | 46.66 | 1.82 |
| TRADES-RATWP | [3,4] | 82.61 | 83.08 | -0.47 | 55.33 | 54.85 | 0.48 | 51.34 | 51.04 | 0.30 |
| TRADES-RATWP | [2,3,4] | 81.89 | 82.37 | -0.48 | 55.44 | 54.91 | 0.53 | 51.24 | 51.20 | 0.04 |
| TRADES-RATWP | [1,2,3,4] | 82.05 | 82.57 | -0.52 | 55.44 | 54.81 | 0.63 | 51.42 | 50.73 | 0.69 |

Table 5: The ablation study with TRADES on the CIFAR-10 dataset using PreAct ResNet-18 under the L∞ threat model.

Comparison with AWP.
We further provide a comparison between RATWP and AWP (Wu et al., 2020) on the CIFAR-10 dataset under the L∞ threat model, using different adversarial training methods and network architectures. The results are summarized in Table 4. For natural accuracy, RATWP enforces less regularization than AWP and thus achieves higher natural accuracy. For adversarial robustness, RATWP slightly degrades the model's performance in some cases. However, RATWP generally maintains adversarial robustness comparable to AWP, which can be attributed to the strong correlation between the latter layers and robust overfitting.

Ablation Study. Finally, we provide an ablation study to illustrate the selection of layers in our experiments. Specifically, we apply regularization to different layers in the RATWP method and conduct experiments with TRADES on the CIFAR-10 dataset using PreAct ResNet-18 under the L∞ threat model. The experimental results are summarized in Table 5. We observe that when regularization is applied only to layer 4, the model can mitigate robust overfitting to a certain extent, but its robustness performance is poor. When regularization is applied to layers 3 & 4, the model achieves better adversarial robustness while effectively mitigating robust overfitting. Therefore, in our experiments, we consistently apply regularization to layers 3 & 4 in consideration of both robust overfitting and adversarial robustness.

## 5 Conclusion

In this paper, we investigate the effects of different network layers on robust overfitting and identify that robust overfitting is mainly driven by the optimization occurring at the latter layers. Following this observation, we propose a *robust adversarial training* (RAT) prototype that specifically hinders the optimization of the latter layers during adversarial training.
The approach prevents the model from overfitting in the latter parts of the network, which effectively mitigates robust overfitting of the network as a whole. We then demonstrate two implementations of RAT: one uses a fixed learning rate for the latter layers, and the other applies adversarial weight perturbation to the latter layers. Extensive experiments show the effectiveness of both approaches, suggesting that RAT is generic and can be applied across different network architectures, threat models, and benchmark datasets to mitigate robust overfitting.

## References

Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In *European Conference on Computer Vision*, pp. 484–501. Springer, 2020.

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In *International Conference on Machine Learning*, pp. 274–283. PMLR, 2018.

Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32):15849–15854, 2019.

Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, and John C Duchi. Unlabeled data improves adversarial robustness. *arXiv preprint arXiv:1905.13736*, 2019.

Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, and Masashi Sugiyama. Guided interpolation for adversarial training. *arXiv preprint arXiv:2102.07327*, 2021.

Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang. Robust overfitting may be mitigated by properly learned smoothening. In *International Conference on Learning Representations*, 2020.

Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng. Dual path networks. *Advances in Neural Information Processing Systems*, 30, 2017.
Francesco Croce and Matthias Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. In *International Conference on Machine Learning*, pp. 2196–2205. PMLR, 2020a. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International conference on machine learning*, pp. 2206–2216. PMLR, 2020b. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Chengyu Dong, Liyuan Liu, and Jingbo Shang. Label noise in adversarial training: A novel perspective to study robust overfitting. *Advances in Neural Information Processing Systems*, 35:17556–17567, 2022a. Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, and Jun Zhu. Exploring memorization in adversarial training. In *International Conference on Learning Representations*, 2022b. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Saehyung Lee, Hyungyu Lee, and Sungroh Yoon. Adversarial vertex mixup: Toward better adversarially robust generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 272–281, 2020. Lin Li and Michael Spratling. Data augmentation alone can improve adversarial training. *arXiv preprint* arXiv:2301.09879, 2023. Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. 
In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1778–1787, 2018.

Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama, et al. Probabilistic margins for instance reweighting in adversarial training. *Advances in Neural Information Processing Systems*, 34:23258–23269, 2021.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. *arXiv preprint arXiv:1706.06083*, 2017.

Amir Najafi, Shin-ichi Maeda, Masanori Koyama, and Takeru Miyato. Robustness to adversarial perturbations in learning from incomplete data. *Advances in Neural Information Processing Systems*, 32, 2019.

Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=B1g5sA4twr.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.

Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In *2016 IEEE Symposium on Security and Privacy (SP)*, pp. 582–597. IEEE, 2016.

Sylvestre-Alvise Rebuffi, Sven Gowal, Dan Andrei Calian, Florian Stimberg, Olivia Wiles, and Timothy A Mann. Data augmentation can improve robustness. *Advances in Neural Information Processing Systems*, 34:29935–29948, 2021.

Leslie Rice, Eric Wong, and Zico Kolter. Overfitting in adversarially robust deep learning. In *International Conference on Machine Learning*, pp. 8093–8104. PMLR, 2020.

Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data.
*Advances in Neural Information Processing Systems*, 31, 2018.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. *arXiv preprint arXiv:1409.1556*, 2014.

Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, and John E Hopcroft. Robust local features for improving the generalization of adversarial training. In *International Conference on Learning Representations*, 2020.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. *arXiv preprint arXiv:1312.6199*, 2013.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In *International Conference on Learning Representations*, 2018.

Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, and Pushmeet Kohli. Are labels required for improving adversarial robustness? *arXiv preprint arXiv:1905.13725*, 2019.

Yifei Wang, Liangchen Li, Jiansheng Yang, Zhouchen Lin, and Yisen Wang. Balance, imbalance, and rebalance: Understanding robust overfitting from a minimax game perspective. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023.

Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In *International Conference on Learning Representations*, 2019.

Dongxian Wu, Shu-Tao Xia, and Yisen Wang. Adversarial weight perturbation helps robust generalization. *arXiv preprint arXiv:2004.05884*, 2020.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? *Advances in Neural Information Processing Systems*, 27, 2014.

Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, and Tongliang Liu. Understanding robust overfitting of adversarial training and beyond.
In *International Conference on Machine Learning*, pp. 25595–25610. PMLR, 2022.

Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 2403–2412, 2018.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In *British Machine Vision Conference 2016*. British Machine Vision Association, 2016.

Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, and Liwei Wang. Adversarially robust generalization just requires more unlabeled data. *arXiv preprint arXiv:1906.00555*, 2019.

Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In *International Conference on Machine Learning*, pp. 7472–7482. PMLR, 2019.

Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, and Mohan Kankanhalli. Geometry-aware instance-reweighted adversarial training. *arXiv preprint arXiv:2010.01736*, 2020.

## A More Evidence on the Layer-Wise Properties of Robust Overfitting

In this section, we provide further empirical experiments to showcase the layer-wise properties of robust overfitting across different datasets, network architectures, threat models, and adversarial training methods. Specifically, we use the strategies proposed in Section 3.3 to restrict the optimization of different network layers. We consistently observe significant relief of robust overfitting when we regularize the optimization of the latter layers, whereas robust overfitting remains prevalent in all other settings. This evidence further highlights the strong relation between robust overfitting and the optimization of the latter layers.

## A.1 Evidence Across Datasets

We show that the layer-wise properties of robust overfitting are universal across datasets on CIFAR-100 and SVHN.
We adversarially train PreAct ResNet-18 using AT under the l∞ threat model on different datasets with the same settings as in Section 3.3. The results are shown in Figures 5 and 6. Note that for SVHN, the regularization strategy utilizing a fixed learning rate (RATLR) does not improve robust overfitting (Figure 5). Unlike CIFAR-10 and CIFAR-100, SVHN's robust overfitting appears before the first learning rate decay. Moreover, the learning rate decay in SVHN's training has no relation to the sudden increases in robust test performance or the appearance of robust overfitting. Hence, the SVHN dataset is a special case where RATLR does not apply. In all other cases, robust overfitting is effectively mitigated by regularizing the optimization of the latter layers.

## A.2 Evidence Across Threat Models

We further demonstrate the generality of the layer-wise properties of robust overfitting by conducting experiments under the l2 threat model. The settings are the same as in Section 3.3. The results are shown in Figures 7 and 8. Under the l2 threat model, except for the SVHN dataset where the regularization strategy utilizing a fixed learning rate (RATLR) does not apply, robust overfitting is effectively mitigated by regularizing the optimization of the latter layers.

## A.3 Evidence Across Network Architectures

In this part, we show the generality of the layer-wise properties of robust overfitting by conducting experiments across different network architectures (PreAct ResNet-18 (He et al., 2016), PreAct ResNet-34 (He et al., 2016), VGG-16 (Simonyan & Zisserman, 2014), DPN-26 (Chen et al., 2017), and DLA (Yu et al., 2018)). For PreAct ResNet-18 and PreAct ResNet-34, the division of the network layers is the same as in Section 3.3. For DPN-26 and DLA, we similarly consider the four main network blocks separately as layer1, layer2, layer3, and layer4. For VGG-16, "features.0", "features.1", "features.3", "features.4", "features.7", and "features.8" are regarded as layer1; "features.10", "features.11", "features.14", "features.15", "features.17", and "features.18" are regarded as layer2; "features.20", "features.21", "features.24", "features.25", "features.27", "features.28", and "features.30" are regarded as layer3; and "features.31", "features.34", "features.35", "features.37", "features.38", "features.40", and "features.41" are regarded as layer4. The results are shown in Figure 9. These results clearly indicate that the optimization of the latter layers is strongly correlated with the robust overfitting phenomenon, and that the layer-wise properties of robust overfitting are common across different network architectures.

## A.4 Evidence Across Adversarial Training Methods

We further demonstrate the generality of the layer-wise properties of robust overfitting by conducting experiments using different adversarial training methods (standard AT (Madry et al., 2017) and TRADES (Zhang et al., 2019)). The settings are the same as in Section 3.3. The results are shown in Figure 10. The experimental results indicate that the latter layers have stronger connections to robust overfitting than the front layers, and that the layer-wise properties of robust overfitting generally hold regardless of the chosen adversarial training method.

![15_image_0.png](15_image_0.png)

(b) AT using PreAct ResNet-18 on SVHN dataset under l∞ threat model

Figure 5: Robust test performance of AT using a fixed learning rate for different sets of network layers in PreAct ResNet-18, across datasets (CIFAR-100 and SVHN) under l∞ threat model.
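The VGG-16 grouping described above can be written down as a small lookup table. A sketch, assuming torchvision-style `features.<index>` module names as in the text; the dictionary and the `layer_of` helper are our own illustration:

```python
# Grouping of VGG-16 modules into layer1..layer4 as described above.
VGG16_LAYERS = {
    "layer1": ["features.0", "features.1", "features.3", "features.4",
               "features.7", "features.8"],
    "layer2": ["features.10", "features.11", "features.14", "features.15",
               "features.17", "features.18"],
    "layer3": ["features.20", "features.21", "features.24", "features.25",
               "features.27", "features.28", "features.30"],
    "layer4": ["features.31", "features.34", "features.35", "features.37",
               "features.38", "features.40", "features.41"],
}

def layer_of(module_name):
    """Return the layer group (layer1..layer4) of a VGG-16 module name."""
    for layer, modules in VGG16_LAYERS.items():
        if module_name in modules:
            return layer
    raise KeyError(module_name)

print(layer_of("features.24"))  # -> layer3
```

A mapping of this kind would suffice to decide, per parameter, whether a module belongs to the latter layers targeted by the regularization.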
![16_image_0.png](16_image_0.png)

(a) AT using PreAct ResNet-18 on CIFAR-100 dataset under l∞ threat model
(b) AT using PreAct ResNet-18 on SVHN dataset under l∞ threat model

Figure 6: Robust test performance of AT applying AWP for different sets of network layers in PreAct ResNet-18, across datasets (CIFAR-100 and SVHN) under l∞ threat model.

![17_image_0.png](17_image_0.png)

(a) AT using PreAct ResNet-18 on CIFAR-10 dataset under l2 threat model
(b) AT using PreAct ResNet-18 on CIFAR-100 dataset under l2 threat model
(c) AT using PreAct ResNet-18 on SVHN dataset under l2 threat model

Figure 7: Robust test performance of AT using a fixed learning rate for different sets of network layers in PreAct ResNet-18, across datasets (CIFAR-10, CIFAR-100 and SVHN) under l2 threat model.

![18_image_0.png](18_image_0.png)

Figure 8: Robust test performance of AT applying AWP for different sets of network layers in PreAct ResNet-18, across datasets (CIFAR-10, CIFAR-100 and SVHN) under l2 threat model.
![19_image_0.png](19_image_0.png)

Figure 9: Robust test performance of AT applying AWP for different sets of network layers across network architectures (PreAct ResNet-34, VGG-16, DPN-26 and DLA), on the CIFAR-10 dataset under l∞ threat model.

![20_image_0.png](20_image_0.png)

Figure 10: Robust test performance of AT and TRADES applying AWP for different sets of network layers in PreAct ResNet-18 and PreAct ResNet-34, on the CIFAR-10 dataset under l∞ threat model.
Review 1: Summary: This submission's main contributions are (i) analyzing robust overfitting in neural networks in a layer-wise fashion and identifying that optimization in latter layers has more effect on robust overfitting; (ii) proposing a general prototype (RAT), where additional regularization is imposed on latter layers of a network, to mitigate robust overfitting. The proposed prototype is instantiated with two types of methods: fixed non-decaying learning rate and adversarial weight perturbation (AWP). Strengths and Weaknesses: **Strengths**: 1. I am not familiar with the adversarial training literature, so my review should be taken with a grain of salt. But the authors clearly state how their focus differs from existing works on robust generalization, which convinced me that their investigation of layer-wise contribution to robust overfitting is unique. 2. The two proposed solutions to mitigate robust overfitting are straightforward and should be scalable. **Weaknesses**: 1. My main concern is that the latter blocks of ResNets contain more parameters than the former blocks. However, the current paper does not provide insights or analysis on whether the latter blocks' larger effect on robust overfitting is due to the larger number of parameters or something else perhaps more inherent to the latter layers. Therefore, it is hard to conclude the cause of the empirical observations on the layer-wise effects presented in the paper. 2. The effectiveness of the proposed RAT (e.g. RAT-WP) seems limited, given that evidence in Table 4 shows that AWP (which is AWP applied to all layers, if I understood correctly) achieves better performance in both test robustness and robust generalization gap. I understand that in comparison to standard AT, the experiments suggest that regularizations on latter layers have a larger contribution to the reduction of the robust generalization gap.
However, again, without an analysis on whether this is simply due to the larger number of parameters in the latter layers, it is hard to see how interesting the findings are. 3. There are no quantitative results of RAT applied with other adversarial training methods like TRADES and with other networks architectures. The authors show visualizations of robust test performance of AWP under TRADES instead of standard AT in Figure 10 and performance of AWP for different network architectures in Figure 9, but it would be good to see some quantitative comparisons like in Table 1 through Table 4. That will help verify the effectiveness of RAT. It might also help to answer my second question in Requested Changes, please see below. Requested Changes: **Critical**: 1. It appears that Table 1 and Table 4 share the same experiment setup. If this is true, then why does the first row (AT) of Table 4 not correspond to the first row of Table 1, but to the first row of Table 3 instead? 2. In Section 3.1, it is mentioned that “robust overfitting is mitigated at the cost of robust accuracy. For example in AT-fix-param-[3,4], if we leave both layer 3 & 4 unoptimized, robust overfitting will practically disappear, but the peak performance is much worse compared to standard AT”. Are there any motivations for selecting Layer 3&4 instead of Layer 4 only to apply regularizations in Section 4? For other types of network architectures, how does the selection of “latter layers” transfer? 3. In Figure 4(b), why are the front layers “Layer 1&2&3” while the latter layers are “Layer 3&4”? I am confused with the definition of former vs latter layers here. **Nice to have**: 1. How are the semi-supervised learning methods in Section 2.2 relevant to the methods and observations in this paper? Is there any reason for mentioning them in the Related Work section? 2. 
In Section 4.2, it is mentioned that “learning rate decay in SVHN’s training does not have much connection to the sudden increases in robust test performance or the prevalence of robust overfitting”. Where is this evidence for the SVHN dataset, is it in Figure 5? 3. In Figure 4(c), it is shown that RAT_WP has a negative robust generalization gap throughout most of the training. Can the authors offer some explanations for this unusual phenomenon? What about AWP applied to all layers? Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper addresses the problem of overfitting in adversarial training of deep neural networks. It is observed that regularizing the final layers of a deep neural network has a larger effect than regularizing the earlier layers. Based on this observation, it is proposed that regularization be applied to the final layers of the network. In experiments, the authors compare different versions of their regularization scheme to unregularized models. Strengths and Weaknesses: Strengths: - The paper is mostly well-organized and written clearly. Weaknesses: - The logical argument for why regularization should be preferentially applied to the later layers is flawed. - The experiments have many weaknesses. Requested Changes: - The authors claim that weight decay is a regularization method that mitigates overfitting (Section 1 p1). This is false. - In Section 3.1, the authors claim Figure 2 shows that "any settings that do not fix the parameters for layer 4 result in a more severe gap between the best accuracy and the accuracy at the last epoch." This is technically accurate, but the authors should acknowledge the more obvious effect is that fixing the Layer 4 parameters decreases performance! This underfitting is what causes there to be no overfitting. The way it is phrased in the paper is misleading and needs to be revised. 
- It is misleading to claim this phenomenon as "a general property of adversarial learning" (Section 3.1, second paragraph). The authors need to clarify their claim as being either: (1) this is a property particular to adversarial training, in which case they need to give evidence that it doesn't affect non-adversarial training, or (2) clarify to the reader that this is a general property of deep learning. I believe the latter is true. It is well known that weight updates in the last layer are particularly important in deep learning in general (not just in adversarial training), so for the experiments in Figure 2, I would expect similar patterns with non-adversarial training. - In the experiments comparing the proposed regularization methods, the authors report the "best" and "final" accuracy of the model. I think the final accuracy is irrelevant because in practice one would use early stopping. The best accuracy is problematic because taking the maximum of the test set loss over training gives an overestimate of the true generalization performance. The correct way to do this experiment is to have a train/valid/test split where early stopping is performed using the validation set and then the test set performance is reported. I think this is critical for fair experiments. - The experiments show that some regularization helps performance compared to non-regularized adversarial training. This is no surprise. But are there any experiments showing that applying regularization using the proposed RAT method is an improvement over the existing regularization methods mentioned? To be convincing, these experiments need to be done carefully with hyperparameter optimization on a validation set. Broader Impact Concerns: None ================================================== Review 3: Summary: The authors study the problem of robust overfitting, that is, the fact that the test robust-accuracy learning curve has a U-shape.
More specifically, in the present contribution, the authors focus on how much different parts of neural networks contribute to robust overfitting. It is demonstrated with many careful experiments that the last layers contribute more to the robust overfitting problem. Based on this observation, the authors modify two methods, weight perturbation and learning rate (related to loss scaling), to act differently on the first and last layers. Both techniques perform reasonably well but do not appear to improve over state-of-the-art methods. Strengths and Weaknesses: **Strengths** 1. Main claims are supported by most of the experiments 2. Experiments are well documented 3. Main ideas are clearly explained **Weaknesses** 1. Some unclear parts 2. Some experiments provide conflicting evidence 3. The overall significance of the main claim is unclear Requested Changes: I would like to stress that I am not an expert in adversarial machine learning, so the present review is a detached perspective from outside the field. I suggest the Action Editor rely more on reviewers more experienced in the matter. **Unclear parts** 1. Can the authors please include a formal definition of robust accuracy? How is robust accuracy measured at training time? Is it done with the same adversary? 2. It is not entirely clear what constitutes the last layers. As I understand, the authors consider classification problems, so the network's output should correspond to probabilities or log probabilities of classes. Is this linear/softmax layer included in the last layer? 3. Does the initialization of the network affect the outcomes of the experiments when some layers are frozen? **Conflicting evidence** All the experiments with frozen layers suggest that the last layers' regularisation has more effect on robust overfitting (Figure 3a and Figure 4a). These experiments can be considered a special type of intervention/ablation when one deliberately fixes part of the parameters.
However, in Table 4 we see that AWP performs better than RAT-WP. This case can also be considered a type of intervention on AWP: in AWP the adversary is allowed to perturb all weights, whereas in RAT-WP the adversary perturbs only the weights in the last two layers. The result of this intervention suggests that it is better to perturb all weights, and that the later layers have no special significance. I would like to ask the authors to: 1. Comment on this possible interpretation of the presented data 2. Add results with AWP to Tables 1, 2, and 3 -- this will allow one to judge whether perturbation of all layers leads to better robustness **Overall significance** Given that AWP performs better than RAT-WP, and RAT-WP does not show state-of-the-art results, it is unclear to what extent the observation made by the authors is significant. I suggest that the authors: 1. Extend the discussion of the comparison between RAT-WP and AWP. 2. Provide additional data that shows differences between RAT-WP and AWP. For example, one may suspect that RAT-WP can lead to higher natural accuracy because it enforces less regularisation. Can the authors provide natural accuracy for all methods in Table 4? Broader Impact Concerns: N/A ==================================================
# On Safety In Safe Bayesian Optimization Anonymous authors Paper under double-blind review ## Abstract Safe Bayesian Optimization (BO) is increasingly used to optimize an unknown function under safety constraints, a central task in robotics, biomedical engineering, and many other disciplines. Due to the safety-critical nature of these applications, it is crucial that theoretical safety guarantees for these algorithms translate into the real world. In this work, we investigate three safety-related issues in SafeOpt-type algorithms, a popular class of safe BO methods. First, these algorithms critically rely on frequentist uncertainty bounds for Gaussian Process (GP) regression, but concrete implementations typically utilize heuristics that invalidate all safety guarantees. We provide a detailed analysis of this problem and introduce Real-β-SafeOpt, a variant of the SafeOpt algorithm that leverages recent GP bounds and thus retains all theoretical guarantees. Second, we identify a key technical assumption in SafeOpt-like algorithms, the assumption of an upper bound on the reproducing kernel Hilbert space (RKHS) norm of the target function, as a central obstacle to real-world usage. To overcome this obstacle, we introduce the Lipschitz-only Safe Bayesian Optimization (LoSBO) algorithm, which guarantees safety without an assumption on the RKHS bound, and show empirically that this algorithm is not only safe, but also outperforms the state-of-the-art on several function classes. Third, SafeOpt and derived algorithms rely on a discrete search space, complicating their application to higher-dimensional problems. To broaden the applicability of these algorithms, we introduce Lipschitz-only Safe GP-UCB (LoS-GP-UCB), a LoSBO variant that is applicable to moderately high-dimensional problems, while retaining safety. 
By analyzing practical safety issues in an important class of safe BO algorithms, and providing ready-to-use algorithms that overcome these issues, this work contributes to bringing safe and reliable machine learning techniques closer to real world applications. ## 1 Introduction In science, engineering, and business it is often necessary to optimize an unknown target function. Typically, such functions are expensive to evaluate and only noisy function values are available. If it is possible to actively query the function, i.e., to select the inputs that are to be evaluated, this problem is commonly addressed using Bayesian Optimization (BO), see (Shahriari et al., 2015) or (Garnett, 2023) for an overview. However, in many real-world applications, BO algorithms should avoid the use of certain inputs, frequently for safety reasons. For example, the target function might be a reward function for a robotic task, with the input a control policy for a physical robot. In this example, any inputs leading to physical damage or unsafe behavior should be avoided. An important special case of such a safety constraint is the restriction of query inputs to those with function values not lower than a given threshold (Kim et al., 2020). Inputs that satisfy this constraint are termed safe inputs, and a BO algorithm is considered safe if it queries only safe inputs throughout its run. This type of safety constraint has been introduced in (Sui et al., 2015) and arises for example in biomedical applications (where the target function is patient comfort and the input corresponds to treatment settings) or controller tuning (where the target function is a measure of controller performance and the inputs are tuning parameters). A popular BO algorithm for this problem setting is SafeOpt (Sui et al., 2015). Starting from a given set of safe inputs, this algorithm iteratively searches for a maximum while aiming to avoid unsafe inputs with high probability. 
It achieves this by utilizing Gaussian Processes (GPs) together with a frequentist uncertainty bound (Srinivas et al., 2010; Chowdhury & Gopalan, 2017) and a known upper bound on the Lipschitz constant of the target function. Provided the algorithmic parameters are set correctly, SafeOpt demonstrably converges, with (high) predefined probability, to the safely reachable maximum while avoiding unsafe inputs. SafeOpt and its variants have been used in various applications, e.g., safe controller optimization (Berkenkamp et al., 2016b), automated deep brain stimulation (Sarikhani et al., 2021) and safe robot learning (Baumann et al., 2021). To ensure safety, SafeOpt and its variants require frequentist uncertainty sets that are valid (i.e., they hold with specified high probability) and explicit (i.e., they can be numerically evaluated). However, two issues arise here: First, the uncertainty bounds from (Srinivas et al., 2010; Chowdhury & Gopalan, 2017) used in most SafeOpt-type algorithms tend to be conservative, even if all necessary ingredients for their computation are available. This conservatism can entirely prevent exploration of the target function. Second, such uncertainty bounds rely upon a particular property of the target function (a known finite upper bound on the Reproducing Kernel Hilbert Space (RKHS) norm, see Section 2.2 for details), which in practice is very difficult to derive from reasonable prior knowledge. As a consequence of these issues, algorithmically usable uncertainty sets are not yet available for SafeOpt. Indeed, to the best of our knowledge, all implementations of SafeOpt and its variants have instead used heuristics (Berkenkamp et al., 2016b; Kirschner et al., 2019a; Baumann et al., 2021; Koller et al., 2019; Helwa et al., 2019).1 This shortcoming means that in practice such implementations lose all their theoretical safety guarantees.
In this work, we carefully investigate this issue and propose practical solutions, bringing safe BO closer to real-world usage.

Outline: In Section 2, we recall necessary technical background material. The problem setting and our objectives are presented in Section 3, and Section 4 provides a comprehensive discussion of related work. In Section 5 we investigate safety issues of SafeOpt-type algorithms arising from using heuristics instead of valid frequentist uncertainty bounds. In Section 6, we discuss the practical safety problems arising from the assumption of a known upper bound on the RKHS norm of the target function, and introduce Lipschitz-only Safe Bayesian Optimization (LoSBO) as a solution. To enable this type of safe BO to be applied in high dimensions, Section 7 introduces another algorithmic variant, called Lipschitz-only Safe Gaussian Process Upper Confidence Bound (LoS-GP-UCB). Finally, Section 8 closes the article with a summary and outlook. The source code for all experiments is available in an anonymous repository.2

## 2 Background

In this section, we provide a brief review of Gaussian Process (GP) regression, reproducing kernel Hilbert spaces (RKHSs), and frequentist uncertainty bounds for GP regression, as these three components form the foundations for SafeOpt-type algorithms.

## 2.1 Gaussian Processes

A GP is a collection of R-valued random variables, here indexed by the set D, such that every finite collection of those random variables has a multivariate normal distribution. A GP g is uniquely defined by its mean function m(x) = E[g(x)] and covariance function k(x, x′) = E[(g(x) − m(x))(g(x′) − m(x′))], and we denote such a GP by g ∼ GP_D(m, k). In GP regression, we start with a prior GP. Assuming independent and identically distributed (i.i.d.) additive normal noise, ϵt ∼ N(0, σ²), this prior GP can be updated with data (x1, y1), . . . , (xt, yt), leading to a posterior GP.
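As a concrete illustration of this update, the posterior computation can be sketched in a few lines of NumPy. The RBF covariance function, data, and noise level below are illustrative assumptions chosen for the example, not part of the paper:

```python
import numpy as np

def rbf(x, y, ls=1.0):
    """Squared-exponential covariance function k(x, x') (an illustrative choice)."""
    return np.exp(-0.5 * (x - y) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_query, sigma=0.1, ls=1.0):
    """Zero-mean GP posterior mean and variance at x_query, following
    mu_t(x) = k_t(x)^T (K_t + sigma^2 I)^{-1} y_t and
    sigma_t^2(x) = k(x, x) - k_t(x)^T (K_t + sigma^2 I)^{-1} k_t(x)."""
    K = rbf(x_train[:, None], x_train[None, :], ls)      # kernel matrix K_t
    k_q = rbf(x_train[:, None], x_query[None, :], ls)    # cross-covariances k_t(x)
    A = np.linalg.solve(K + sigma ** 2 * np.eye(len(x_train)), k_q)
    mean = A.T @ y_train                                 # posterior mean mu_t
    var = rbf(x_query, x_query, ls) - np.sum(k_q * A, axis=0)  # posterior variance
    return mean, var

x_tr = np.array([0.0, 1.0, 2.0])
y_tr = np.array([0.0, 1.0, 0.5])
mu, var = gp_posterior(x_tr, y_tr, np.array([0.0, 1.5]))
```

The posterior variance shrinks near observed inputs; this is precisely the quantity that the frequentist bounds of Section 2.3 rescale.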
Without loss of generality we assume that the prior GP has zero mean. Then the posterior mean, posterior covariance and posterior variance are given by

$$\mu_{t}(x)=k_{t}(x)^{T}(\mathbf{K}_{t}+\sigma^{2}\mathbf{I}_{t})^{-1}\mathbf{y}_{t},\qquad k_{t}(x,x')=k(x,x')-k_{t}(x)^{T}(\mathbf{K}_{t}+\sigma^{2}\mathbf{I}_{t})^{-1}k_{t}(x'),\qquad\sigma_{t}^{2}(x)=k_{t}(x,x),$$

respectively, where $\mathbf{y}_{t}=[y_{1},\ldots,y_{t}]^{T}$ is the vector of observed, noisy function values, the kernel matrix $\mathbf{K}_{t}\in\mathbb{R}^{t\times t}$ has entries $[\mathbf{K}_{t}]_{ij}=k(x_{i},x_{j})$, the vector $k_{t}(x)=[k(x_{1},x)\cdots k(x_{t},x)]^{T}$ contains the covariances between x and the observed data points, and $\mathbf{I}_{t}$ is the t × t identity matrix.

1The precise experimental settings are not reported in (Sui et al., 2015). However, based on the descriptions provided, it can be inferred that some form of heuristic was used.
2https://anonymous.4open.science/r/losbo-482C

In practice, the prior mean, prior covariance function, and noise level are (partially) chosen based on prior knowledge. If no specific prior knowledge regarding the mean is available, the zero function is usually chosen. Furthermore, in practice these three components are only partially specified, usually up to some parameters, which are then called hyperparameters in this context. In BO, the latter are often determined during the optimization via hyperparameter optimization (Garnett, 2023). For more details on GPs, GP regression, and related methods, we refer to (Rasmussen & Williams, 2006) and (Garnett, 2023).

## 2.2 Reproducing Kernel Hilbert Spaces

Consider a function k : D × D → R. We call k *positive semidefinite* if for all $x_{1},\ldots,x_{N}\in D$, $N\in\mathbb{N}_{>0}$, the matrix $(k(x_{i},x_{j}))_{i,j=1,\ldots,N}$ is positive semidefinite. Equivalently, the function k is symmetric ($k(x_{i},x_{j})=k(x_{j},x_{i})$ for all $i,j=1,\ldots,N$), and for all $\alpha_{1},\ldots,\alpha_{N}\in\mathbb{R}$ we have $\sum_{i,j=1}^{N}\alpha_{i}\alpha_{j}k(x_{i},x_{j})\geq0$. In the literature such a function is often called positive definite or *of positive type*. Additionally, k is called *positive definite* if for all pairwise distinct $x_{1},\ldots,x_{N}\in D$, the matrix $(k(x_{i},x_{j}))_{i,j=1,\ldots,N}$ is positive definite. This property is sometimes referred to as *strict positive definiteness* in the literature.

Let H be a Hilbert space of functions on D. We call H a *reproducing kernel Hilbert space (RKHS)* if every evaluation functional is continuous, i.e., for all x ∈ D the mapping H ∋ f ↦ f(x) ∈ R is continuous w.r.t. the norm induced by the scalar product in H. Furthermore, k is called a *reproducing kernel (for* H) if 1) k(·, x) ∈ H for all x ∈ D, and 2) f(x) = ⟨f, k(·, x)⟩H for all f ∈ H and x ∈ D. As is well-known, H is an RKHS if and only if it has a reproducing kernel, and in this case the latter is unique (Steinwart & Christmann, 2008, Lemma 4.19, Theorem 4.20). Furthermore, every reproducing kernel is positive semidefinite, and every positive semidefinite function is a reproducing kernel for a unique RKHS (Steinwart & Christmann, 2008, Theorem 4.16, 4.21). If k is positive semidefinite, then we denote its unique RKHS by (Hk, ⟨·, ·⟩k), and the induced norm by ∥ · ∥k. Furthermore, we define the *pre-RKHS* by

$$H_{k}^{\mathrm{pre}}=\mathrm{span}\left\{k(\cdot,x)\mid x\in D\right\}=\left\{\sum_{n=1}^{N}\alpha_{n}k(\cdot,x_{n})\mid N\in\mathbb{N}_{>0},\,\alpha_{n}\in\mathbb{R},\,x_{n}\in D,\,n=1,\ldots,N\right\},\tag{1}$$

and this subspace is dense in $H_{k}$ w.r.t. $\|\cdot\|_{k}$. Given $f=\sum_{n=1}^{N}\alpha_{n}k(\cdot,x_{n})$, $g=\sum_{m=1}^{M}\beta_{m}k(\cdot,y_{m})\in H_{k}^{\mathrm{pre}}$, we have

$$\langle f,g\rangle_{k}=\sum_{n=1}^{N}\sum_{m=1}^{M}\alpha_{n}\beta_{m}k(y_{m},x_{n}),\tag{2}$$

see (Steinwart & Christmann, 2008, Theorem 4.21). If k is the covariance function of a GP, then it is positive semidefinite (since every covariance matrix is positive semidefinite), and hence the reproducing kernel of a unique RKHS.
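To make the pre-RKHS computations concrete, the inner product formula (2) and the reproducing property can be checked numerically for finite kernel expansions. The RBF kernel and the coefficients below are illustrative choices, not part of the paper:

```python
import numpy as np

def k(x, y):
    """RBF kernel as a concrete positive semidefinite function (illustrative)."""
    return np.exp(-0.5 * (x - y) ** 2)

# f = sum_n alpha_n k(., x_n) and g = sum_m beta_m k(., y_m) in the pre-RKHS
alpha, xs = np.array([1.0, -0.5]), np.array([0.0, 1.0])
beta, ys = np.array([0.3, 0.7]), np.array([0.5, 2.0])

def f(x):
    return np.sum(alpha * k(xs, x))

# Inner product <f, g>_k via Eq. (2): sum_{n,m} alpha_n beta_m k(y_m, x_n)
inner_fg = np.sum(alpha[:, None] * beta[None, :] * k(ys[None, :], xs[:, None]))

# Squared norm ||f||_k^2 = <f, f>_k, nonnegative since k is positive semidefinite
inner_ff = np.sum(alpha[:, None] * alpha[None, :] * k(xs[None, :], xs[:, None]))

# Reproducing property: f(x0) = <f, k(., x0)>_k, using Eq. (2) with g = k(., x0)
x0 = 0.7
inner_f_kx0 = np.sum(alpha * k(x0, xs))
assert abs(f(x0) - inner_f_kx0) < 1e-12
```

Here the reproducing property holds exactly (up to floating point) because both sides reduce to the same finite kernel sum.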
Conversely, if k is the reproducing kernel of an RKHS, then k is positive semidefinite, there exists a GP having k as its covariance function, and the GP can be chosen with a zero mean function (Berlinet & Thomas-Agnan, 2004). Furthermore, consider GP regression with a prior GP_D(m, k); then $\mu_{t}-m\in H_{k}^{\mathrm{pre}}$, where µt is the posterior mean corresponding to a finite data set with t points. In particular, the posterior mean for a zero-mean prior GP is always in the pre-RKHS corresponding to the covariance function. As is customary in machine learning with GPs (Rasmussen & Williams, 2006), and also in many BO scenarios (Garnett, 2023), in the following we will use a zero-mean GP prior, m ≡ 0, without loss of generality.

## 2.3 Frequentist Uncertainty Bounds

An important ingredient in SafeOpt-type algorithms is a pair of upper and lower bounds on the unknown target function, and these bounds have to hold uniformly in both time and input space. If we adopt a stochastic setup, then this can be formalized by finding upper and lower bounds such that for a given user-specified confidence δ ∈ (0, 1), we have

$$\mathbb{P}\left[\ell_{t}(x)\leq f(x)\leq u_{t}(x)\ \forall x\in D,\,t\geq0\right]\geq1-\delta,$$

![3_image_0.png](3_image_0.png)

Figure 1: Illustration of the required GP error bounds. Consider a fixed ground truth (solid black line), of which only finitely many samples are known (black dots). Applying GP regression leads to a posterior GP fully described by the posterior mean (solid blue line) and the posterior variance, from which a high-probability uncertainty set can be derived (shaded blue). Left: The ground truth is completely contained in the uncertainty set. Right: The ground truth violates the uncertainty bound around x = 1.

where the probability is with respect to the data-generating process, while f is assumed fixed and nonrandom.
In GP regression, the posterior mean µt can be interpreted as a nominal estimate of f, and the posterior variance σ²t as a measure of uncertainty of this estimate. However, using the posterior variance to build upper and lower bounds in the SafeOpt setting is not straightforward. Firstly, the posterior variance is a *pointwise* measure of uncertainty about the ground truth, but the upper and lower bounds have to hold *uniformly* over the input set, and also uniformly in time. Secondly, GP regression is by its nature a Bayesian method. However, the SafeOpt setting is a typical frequentist setup - we have a fixed, yet unknown ground truth, about which we receive noisy information. In particular, any stochasticity arises only through the data-generating process (e.g., via random noise on the function values), and not through epistemic uncertainty, as is the case in the Bayesian setup. This difficulty is well-known, see (Fiedler et al., 2021a), and is particularly relevant in the context of robust control and related areas (Fiedler et al., 2021b). We thus need bounds (νt)t≥1, such that for a user-specified δ ∈ (0, 1) we have

$$\mathbb{P}\left[|f(x)-\mu_{t}(x)|\leq\nu_{t}(x)\ \forall x\in D,\,t\geq1\right]\geq1-\delta.\tag{3}$$

The bounding function νt must only depend on data collected up to time t, as well as reasonable prior knowledge about f and the data-generating process, e.g., about the noise statistics. These bounds are illustrated in Figure 1. All common bounds are of the form

$$\nu_{t}(x)=\beta_{t}\sigma_{t}(x),\tag{4}$$

where βt ∈ R≥0 is some scaling factor. Let k be the covariance function used in GP regression, and Hk the unique RKHS with k as its reproducing kernel. Assume that the ground truth is contained in this RKHS, i.e., f ∈ Hk.
Let F = (Ft)t≥0 be a filtration defined on the underlying probability space, and assume that the sequence of inputs (xt)t≥0 chosen by the algorithm is adapted to F.3 The first bound of the form (4) was introduced in the seminal work (Srinivas et al., 2010), cf. their Theorem 6, which holds in the case of bounded noise. This is also the bound that was used in the original SafeOpt paper Sui et al. (2015). At the moment, the most commonly used uncertainty bound in the analysis of SafeOpt-type algorithms is (Chowdhury & Gopalan, 2017, Theorem 2). Assume that (ϵt)t is a martingale difference sequence that is conditionally R-subgaussian w.r.t. F for some R ∈ R≥0, i.e., for all t ≥ 1

$$\mathbb{E}\left[\exp(\nu\epsilon_{t})\mid{\mathcal{F}}_{t}\right]\leq\exp\left({\frac{R^{2}\nu^{2}}{2}}\right)\ \mathbb{P}\text{-a.s.}\quad\forall\nu\in\mathbb{R}.$$

Additionally, assume that λ > 1, or λ ≥ 1 and the covariance function k is positive definite, where λ is the nominal noise variance used in GP regression. Under these conditions, the uncertainty bound (3) holds with

$$\beta_{t}=\|f\|_{k}+R{\sqrt{2(\gamma_{t-1}+1+\log(1/\delta))}}\tag{5}$$

in (4), where γt is the *maximum information gain* after t rounds, cf. (Srinivas et al., 2010) for a thorough discussion of this quantity. In contrast to (Srinivas et al., 2010, Theorem 6), this bound allows subgaussian noise (including bounded noise and normal noise) and involves only fairly small numerical constants. However, it still requires the maximum information gain or an upper bound thereof, which can be difficult to work with in practice, and it introduces some conservatism. Motivated by these shortcomings, Fiedler et al. (2021a) proposed a data-dependent scaling factor in (4), based on (Chowdhury & Gopalan, 2017, Theorem 2).

3Of course, this requires that D is a measurable space, which is not a problem in practice.
Assume the same setting as this latter result, and also assume that the covariance function k is positive definite; then we can set

$$\beta_{t}=\|f\|_{k}+\frac{R}{\sqrt{\lambda}}\sqrt{\ln\left(\det(\bar{\lambda}/\lambda{\bf K}_{t}+\bar{\lambda}{\bf I}_{t})\right)-2\ln(\delta)},\tag{6}$$

where we defined λ¯ = max{1, λ}, and λ is again the nominal noise variance used in GP regression, corresponding to the regularization parameter in kernel ridge regression. This bound no longer involves the maximum information gain, and numerical experiments demonstrate that the resulting uncertainty bounds are rarely significantly larger than common heuristics, see (Fiedler et al., 2021a). In fact, the bounds are sufficiently small to use in algorithms, e.g., in (Fiedler et al., 2021b). Finally, from the results in the doctoral thesis (Abbasi-Yadkori, 2013), which was published in 2012, an uncertainty bound can be deduced that is superior to (Chowdhury & Gopalan, 2017, Theorem 2), and therefore also improves over (6). Consider the same setting as introduced above. Combining Theorem 3.11 with Remark 3.13 in (Abbasi-Yadkori, 2013), we find that for all δ ∈ (0, 1) we can set

$$\beta_{t}=\|f\|_{k}+\frac{R}{\sqrt{\lambda}}\sqrt{2\ln\left(\frac{1}{\delta}\det\left({\bf I}_{t}+\frac{1}{\lambda}{\bf K}_{t}\right)\right)}\tag{7}$$

in (4). Like (6), this bound can easily be evaluated, though it seems that (7) has not appeared explicitly before. Interestingly, this result appears to have been infrequently used in the machine learning community, though it has recently been rediscovered, for example, in (Whitehouse et al., 2024).
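Unlike the information-gain-based factor (5), the bound (7) can be evaluated directly from the kernel matrix. A minimal sketch follows; the kernel and all parameter values are illustrative assumptions:

```python
import numpy as np

def beta_abbasi(K, rkhs_bound, R, lam, delta):
    """Scaling factor beta_t from Eq. (7):
    ||f||_k + (R / sqrt(lam)) * sqrt(2 ln((1/delta) det(I + K/lam)))."""
    t = K.shape[0]
    # slogdet gives a numerically stable log-determinant
    _, logdet = np.linalg.slogdet(np.eye(t) + K / lam)
    return rkhs_bound + (R / np.sqrt(lam)) * np.sqrt(2.0 * (logdet - np.log(delta)))

# Toy kernel matrix for three inputs under an RBF kernel
X = np.array([0.0, 0.5, 1.0])
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
beta = beta_abbasi(K, rkhs_bound=2.0, R=0.1, lam=1.0, delta=0.05)
```

As expected from (7), the factor grows as the confidence level δ is tightened, since the $-\ln(\delta)$ term increases.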
Observe that for 0 < λ < 1 and k positive definite,

$$\begin{split}\frac{R}{\sqrt{\lambda}}\sqrt{2\ln\left(\frac{1}{\delta}\det\left(\mathbf{I}_{t}+\frac{1}{\lambda}\mathbf{K}_{t}\right)\right)}&=\frac{R}{\sqrt{\lambda}}\sqrt{\ln\left(\det(1/\lambda\mathbf{K}_{t}+\mathbf{I}_{t})\right)-2\ln(\delta)}\\ &=\frac{R}{\sqrt{\lambda}}\sqrt{\ln\left(\det(\bar{\lambda}/\lambda\mathbf{K}_{t}+\bar{\lambda}\mathbf{I}_{t})\right)-2\ln(\delta)},\end{split}$$

so in this case (Fiedler et al., 2021a, Theorem 1) reproduces the result from (Abbasi-Yadkori, 2013). Since the only difference between (6) and (7) happens inside $\sqrt{\ln(\cdot)}$, any noticeable difference between the two bounds will happen for λ ≫ 1, so any difference will be negligible in practice.

## 3 Problem Setting And Objectives

We now formalize our problem setting, describe SafeOpt-type algorithms in detail, and specify our objectives for the remainder of this work.

Algorithm 1 Generic SafeOpt-type algorithm
Initialize model $\mathcal{M}_0$
for t = 1, 2, . . . do
    Compute current model $\mathcal{M}_t$
    Compute safe set $S_t$ from $\mathcal{M}_t$
    Compute expanders $G_t$ from $S_t$ and $\mathcal{M}_t$
    Compute maximizers $M_t$ from $S_t$ and $\mathcal{M}_t$
    $x_t \leftarrow \operatorname{argmax}_{x\in G_t\cup M_t} w_t(x)$
    Query function with $x_t$, receive $y_t = f(x_t) + \epsilon_t$
    Update model with $(x_t, y_t)$
end for

We work in the setting introduced by the seminal paper (Sui et al., 2015). Consider a nonempty set D, the *input set*, and a fixed, but unknown function f : D → R, the target function or *ground truth*. We are interested in an algorithm that finds the maximum of f by iteratively querying the function. At time step t ∈ N0, such an algorithm chooses an input xt ∈ D and receives a noisy function evaluation yt = f(xt) + ϵt, where ϵt is additive measurement noise. As a safety constraint, all chosen inputs must have a function value above a given safety threshold h ∈ R, i.e., f(xt) ≥ h for all t. Furthermore, the algorithm should be sample-efficient, i.e., use as few function queries as possible to find an input with a high function value.
To make progress on this problem, it is clear that some restriction on the function f must be imposed. Central to our developments is the next assumption.

**Assumption 1.** *D is equipped with a metric d : D × D → R≥0. Additionally, f is L-Lipschitz continuous, where L ∈ R≥0 is a known Lipschitz constant.*

The second part of the assumption means that for all x, x′ ∈ D, we have |f(x) − f(x′)| ≤ L d(x, x′). From now on, we work under Assumption 1. Furthermore, we assume that we have access to a non-empty set of known safe inputs S0 ⊆ D, i.e., for all x ∈ S0 we have f(x) ≥ h. The SafeOpt algorithm and its derivatives use an iteratively updated model $\mathcal{M}_t$ that provides estimated upper and lower bounds ut and ℓt on f, i.e., with a certain confidence it holds that ℓt(x) ≤ f(x) ≤ ut(x) for all x ∈ D and all t ≥ 1. These bounds are also used to provide a measure of uncertainty defined as wt(x) = ut(x) − ℓt(x). In each step t ≥ 1, the previous model $\mathcal{M}_{t-1}$ together with the Lipschitz assumption is used to determine a new set St ⊆ D of safe inputs, starting from the initial safe set S0. Subsequently, a set Mt ⊆ St of potential maximizers of the target function, and a set Gt ⊆ St of potential expanders is computed. The latter contains inputs that are likely to lead to new safe inputs upon query. Finally, the target function is queried at the input $x_t = \operatorname{argmax}_{x\in G_t\cup M_t} w_t(x)$, a noisy function value yt is received, and the model $\mathcal{M}_{t-1}$ is updated with the data point (xt, yt). Pseudocode for a generic version of SafeOpt is provided by Algorithm 1. Different variants of SafeOpt result from different choices of models and computations of St, Mt and Gt. To the best of our knowledge, in all SafeOpt-type algorithms, the unknown ground truth f is modeled as a GP. To compute appropriate upper and lower bounds, it is assumed that an appropriate scaling factor βt is available, cf. (4).
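The set computations described above can be made concrete for a discrete one-dimensional input set. The following is a minimal sketch of one iteration of the safe-set, maximizer, and expander updates; the grid, bounds, and constants are invented for the example and are not the paper's reference implementation:

```python
import numpy as np

def safeopt_sets(X, lo, up, safe0, L, h):
    """Compute safe set, maximizers, and expanders on a discrete grid X,
    given lower/upper bounds lo, up on f, a Lipschitz constant L, and a
    safety threshold h. Returns three boolean masks over X."""
    d = np.abs(X[:, None] - X[None, :])  # pairwise distances d(x, x')
    # Safe set: x' is safe if some currently safe x has lo(x) - L d(x, x') >= h
    safe = np.any((lo[:, None] - L * d >= h) & safe0[:, None], axis=0) | safe0
    # Maximizers: safe inputs whose upper bound beats the best safe lower bound
    maximizers = safe & (up >= np.max(lo[safe]))
    # Expanders: safe inputs that could certify some currently unsafe input
    expanders = safe & np.any((up[:, None] - L * d >= h) & ~safe[None, :], axis=1)
    return safe, maximizers, expanders

X = np.linspace(0.0, 1.0, 5)
lo = np.array([0.5, 0.2, -0.1, -0.3, -0.5])   # illustrative lower bounds
up = lo + 0.4                                  # illustrative upper bounds
safe0 = np.array([True, False, False, False, False])
safe, maximizers, expanders = safeopt_sets(X, lo, up, safe0, L=1.0, h=0.0)
```

In this toy instance, the Lipschitz bound certifies two additional grid points from the single initial safe input, and all three safe points qualify as expanders under the invented bounds.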
For each time step t, define Ct(x) = Ct−1(x) ∩ Qt(x), where Qt(x) = [µt−1(x) ± βt σt−1(x)], and, starting with Q0(x) = R for all x ∈ D, C0(x) = [h, ∞) for all x ∈ S0 and C0(x) = R for x ∈ D \ S0. The corresponding estimated upper and lower bounds are given by ut(x) = max Ct(x) and ℓt(x) = min Ct(x), respectively. In the original SafeOpt algorithm from (Sui et al., 2015), for each step t ≥ 1, the new safe sets are calculated by

$$S_{t}=\bigcup_{x\in S_{t-1}}\{x'\in D\mid\ell_{t}(x)-L\,d(x,x')\geq h\},\tag{8}$$

the potential maximizers are given by

$$M_{t}=\{x\in S_{t}\mid u_{t}(x)\geq\max_{x_{S}\in S_{t}}\ell_{t}(x_{S})\},\tag{9}$$

and the potential expanders by

$$G_{t}=\{x_{S}\in S_{t}\mid\exists x\in D\setminus S_{t}:\,u_{t}(x_{S})-L\,d(x_{S},x)\geq h\}.\tag{10}$$

![6_image_0.png](6_image_0.png)

Figure 2: Illustration of SafeOpt. The safe set (gray bar), expanders (blue bar), and maximizers (green bar), derived from the current GP model (with solid blue line the posterior mean, shaded blue areas the uncertainty sets), are used to find the safely reachable optimum (red box). In each iteration, the next input is chosen from the union of the current expanders and maximizers (a subset of the safe set) by maximizing the acquisition function.

## Algorithm 2 SafeOpt

Require: Lipschitz constant L, algorithm to compute βt, initial safe set S0, safety threshold h
1: Q0(x) ← R for all x ∈ D ▷ Initialization of uncertainty sets
2: C0(x) ← [h, ∞) for all x ∈ S0
3: C0(x) ← R for all x ∈ D \ S0
4: for t = 1, 2, . . . do
5: Ct(x) ← Ct−1(x) ∩ Qt−1(x) for all x ∈ D ▷ Compute upper and lower bounds for current iteration
6: ℓt(x) ← min Ct(x), ut(x) ← max Ct(x) for all x ∈ D
7: if t > 1 then ▷ Compute new safe set
8: St = St−1 ∪ {x ∈ D | ∃xs ∈ St−1 : ℓt(xs) − L d(xs, x) ≥ h}
9: else
10: S1 = S0
11: end if
12: Gt ← {x ∈ St | ∃x′ ∈ D \ St : ut(x) − L d(x, x′) ≥ h} ▷ Compute set of potential expanders
13: Mt = {x ∈ St | ut(x) ≥ max_{xS∈St} ℓt(xS)} ▷ Compute set of potential maximizers
14: xt ← argmax_{x∈Gt∪Mt} wt(x) ▷ Determine next input
15: Query function with xt, receive yt = f(xt) + ϵt
16: Update GP with new data point (xt, yt), resulting in mean µt and standard deviation σt
17: Compute updated βt
18: Qt(x) = [µt(x) − βt σt(x), µt(x) + βt σt(x)] for all x ∈ D
19: end for

The resulting algorithm is illustrated in Figure 2. A formal description of the algorithm using pseudocode is provided by Algorithm 2. In some popular variants of the SafeOpt algorithm, no Lipschitz bound is used in the computation of the safe sets (Berkenkamp et al., 2016b). However, since knowledge of such a Lipschitz bound is additional knowledge that should be used by the algorithm, and we strongly rely on this assumption from Section 6 onwards, we do not consider these algorithmic variants in the present work. Our primary objective is to investigate and improve practically relevant safety aspects of SafeOpt-type algorithms. We will consider three specific objectives.

(O1) While SafeOpt and related algorithms come with precise theoretical safety guarantees, all known implementations to date use heuristics (usually by setting βt to some constant) instead of theoretically sound uncertainty bounds. These heuristics generally invalidate all theoretical safety guarantees, raising the question of whether their use leads to safety problems in practice.

(O2) Without heuristics, safety guarantees for SafeOpt-type algorithms rely on the appropriateness of the assumptions used in the algorithmic setup.
Since SafeOpt and its variants are adopted for scenarios with stringent safety requirements, where, with high probability, no failure should occur and no tuning phase is allowed, it is important that all underpinning assumptions are reasonable and verifiable by users. The most delicate of these assumptions is knowledge of an upper bound of the RKHS norm of the target function, raising the question of whether the RKHS norm bound assumption is reasonable, and if not, how it can be replaced.

(O3) In many relevant applications of safe BO, the input space is a continuous domain in moderately high dimensions. Since SafeOpt-type algorithms rely on a discrete input space, they cannot be used in these settings. It is therefore necessary to devise alternatives that work in moderately high dimensions, while retaining safety guarantees.

These three aspects will be explored in Sections 5, 6 and 7, respectively.

## 4 Related Work

In the following, we review related work. We first provide an overview of safe BO methods, particularly those closely related to the present setting. As Lipschitz bounds play a central role from Section 6 onwards, we also review previous work on methods that rely on such an assumption.

## 4.1 Safety In Bayesian Optimization

We first consider safety in the context of Bayesian optimization (BO) (Shahriari et al., 2015; Garnett, 2023). The type of safety constraint considered in this work was first introduced in the seminal paper (Sui et al., 2015), which also proposed and analyzed the original SafeOpt algorithm. Several variations to the original algorithm have been proposed, with a SafeOpt variant that does not use a Lipschitz constant (Berkenkamp et al., 2016b) proving particularly popular in the robotics community. While the original SafeOpt algorithm from Sui et al. (2015) interleaves safe exploration and function optimization over certified safe sets, a two-stage variant was introduced and analyzed in (Sui et al., 2018).
Since SafeOpt relies on a discrete input space, and hence requires discretization in the case of a continuous optimization problem, applying this algorithm to continuous problems in even moderate dimensions can be very challenging, cf. also Section 7. This motivated the introduction of a variant based on swarm optimization (Duivenvoorden et al., 2017), albeit with heuristic safety guarantees, as well as a variant tailored to dynamical problems in high dimensions (Sukhija et al., 2023). A method based on random projections was also proposed and analyzed in (Kirschner et al., 2019a), and applied to tuning of a particle accelerator (Kirschner et al., 2019b). In terms of problem formulations, instead of just one safety constraint for the input function, multiple safety constraints can also be enforced, each encoded by an unknown constraint function (Berkenkamp et al., 2016a; Sui et al., 2018; Berkenkamp et al., 2023). In many applications of SafeOpt-type algorithms, properties of a dynamical system need to be optimized under safety constraints, for example, in controller tuning with BO. The dynamic aspect can be included in the optimization algorithm and its safety mechanisms, for example, in order to use a backup controller (Baumann et al., 2021). Furthermore, while formally SafeOpt-type algorithms are BO algorithms having input constraints, this is different from *constrained BO* as considered, for example, in (Hernández-Lobato et al., 2016). In the latter type of BO, the focus is on finding good inputs (i.e., corresponding to a high objective function value) that fulfill the (usually unknown) constraints, and violation of the constraints during the optimization process is not considered problematic (though of course it can be advantageous to avoid this). By contrast, safe BO aims to avoid constraint violations during the optimization process since these are considered problematic or even harmful.
For an in-depth discussion of this difference, and further connections between the two problem settings, we refer to the excellent survey (Kim et al., 2020). SafeOpt-type algorithms are motivated by problems where safety violations are considered to be very costly, so any safety violations should be avoided with very high probability. This contrasts with a whole spectrum of related, but different safe BO (and reinforcement learning) settings. For example, in robot learning a small number of crashes might be deemed acceptable, in which case a crash-aware BO algorithm (Marco et al., 2021) would be preferable over a SafeOpt-type algorithm. Similarly, in the context of bandit algorithms (Slivkins et al., 2019; Lattimore & Szepesvári, 2020), instead of avoiding a bad option at all costs, one might instead want to be *conservative*, carefully exploring alternatives, starting from a default strategy. This setting is formalized in the context of *conservative bandits* (Wu et al., 2016), and again, is related to, but distinct from, the SafeOpt setting. Corresponding bandit formulations in the context of hard safety are also available (Amani et al., 2019), where the focus is on regret bounds under safety constraints. Finally, parallel to the situation of conservative bandits, there are *cautious* variants of BO (Fröhlich et al., 2021).

## 4.2 Lipschitz-Based Methods

In order to address safety-related issues uncovered and discussed in Sections 5.1 and 6.1, we will introduce algorithms based on a Lipschitz assumption. While regularity properties like Lipschitz (and the more general Hölder) continuity play an important role in the theory of statistical learning, especially in nonparametric statistical estimation (Tsybakov, 2009), learning algorithms based on Lipschitz assumptions have received relatively little attention in the machine learning community. Relevant exceptions to this are Lipschitz bandits (Kleinberg et al., 2019) and the original SafeOpt algorithm from Sui et al.
(2015). The situation is considerably different in the field of global optimization and the systems and control community, respectively. In the former, Lipschitz continuity with a specific Lipschitz constant is a standard assumption, used in a variety of algorithms for (certified) global optimization, cf. (Hansen et al., 1992; Pintér, 1995), though usually a noise-free setting is assumed in this literature. Global optimization of Lipschitz functions has recently also received attention from the machine learning community (Malherbe & Vayatis, 2017). Similarly, a specified Lipschitz constant is also used in the context of Lipschitz interpolation (Beliakov, 2006). Furthermore, closely related to our approach taken in Section 6, a deterministic variant of SafeOpt has been considered in Sergeyev et al. (2020). However, the latter reference only works with functions on a compact interval, and does not use any BO techniques. In the systems and control community, Lipschitz assumptions have long been used, especially in the context of systems identification, where they were explicitly introduced and popularized by Milanese & Novara (2004), though similar methods had been used before, e.g., (Cooper, 1995). A known bound on the Lipschitz constant and on the size of additive noise is commonly used to derive uncertainty bounds in the context of regression. This approach has been further popularized and extended to the case of Hölder continuous functions by Calliess (2014), widely known as *kinky inference* in the systems and control community. A central assumption in this context is knowledge of a *concrete, numerical* upper bound on the Lipschitz constant of the target function. This assumption has a clear geometric and practical interpretation, namely a bounded rate of change of the target quantity. As such, it is related to the well-established field of sensitivity analysis (Da Veiga et al., 2021), for example. 
Approaches to estimate the Lipschitz constant of an unknown function have been proposed both in the context of global optimization (Strongin, 1973) and in Lipschitz-based regression methods, particularly in the context of systems identification (Milanese & Novara, 2004; Novara et al., 2013; Calliess et al., 2020), see (Huang et al., 2023) for an overview and very recent sample-complexity results. We would like to stress that these approaches are not suitable for the present setting of hard safety constraints, since the estimation of a Lipschitz constant bound requires queries to the target function, which in turn already need to be safe, see also the discussion in Section 6.1. The developments in Sections 6 and 7 combine kernel-based methods (here GP regression) with a Lipschitz assumption to overcome the requirement of a known bound on the RKHS norm of the target function, see Section 6.1 for details. The problematic nature of an RKHS norm bound in the context of learning-based control has been recognized for some time (Lederer et al., 2019; Fiedler et al., 2022). In (Lederer et al., 2019), using probabilistic Lipschitz bounds together with a space discretization was suggested to derive GP uncertainty bounds. However, this approach relies on a probabilistic setting, and is therefore not suitable in the context of SafeOpt-type algorithms. The work (Fiedler et al., 2022) proposes the usage of geometric constraints as prior knowledge in the context of uncertainty sets for kernel-based regression, with Lipschitz constant bounds as a special case. The resulting kernel machines, which provide nominal predictions and smoothed uncertainty bounds adhering to the geometric constraints, are not necessary in our setting, though using more general geometric constraints than Lipschitz constant bounds might be a promising avenue in the context of SafeOpt-type algorithms. 
Finally, combining kernel methods with Lipschitz assumptions is a natural approach as there is a close connection between regularity properties of a kernel and Lipschitz continuity of functions in the RKHS generated by the kernel. For a thorough discussion of this point, we refer to (Fiedler, 2023).

## 5 Frequentist Uncertainty Bounds And Practical Safety Issues In SafeOpt

We now investigate practical safety implications of commonly used heuristics in the frequentist uncertainty bounds in SafeOpt-type algorithms, addressing objective (O1). In Section 5.1, we discuss why these heuristics are problematic and demonstrate safety issues using numerical experiments. To overcome these issues, in Section 5.2 we propose using state-of-the-art frequentist uncertainty bounds in the actual algorithm.

## 5.1 Practical Safety Issues In SafeOpt

Safety in SafeOpt-type algorithms is ensured by restricting query inputs to safe sets, which are computed using frequentist uncertainty bounds, typically in the form (4) using (5). However, these bounds are often too conservative for algorithmic use, which has led to implementations adopting heuristic choices for βt, for example, βt ≡ 2 in (Berkenkamp et al., 2016b; Turchetta et al., 2016), βt ≡ 3 in (Helwa et al., 2019; Baumann et al., 2021), or βt ≡ 4 in (Sukhija et al., 2022). Using such heuristics instead of evaluating βt invalidates all safety guarantees. In practice, choosing some βt can be a useful heuristic, as demonstrated by the reported success of SafeOpt-type algorithms (Berkenkamp et al., 2016b; Baumann et al., 2021; Sukhija et al., 2023). However, it should be stressed that in the setting of SafeOpt as outlined in Section 3, the learning algorithm has to fulfill a **hard safety constraint** - namely, that no unsafe inputs are queried by the algorithm (potentially only with high probability). In particular, no burn-in or tuning phase for βt is allowed.
Such heuristics not only invalidate the theoretical safety guarantees, but can also lead to actual safety violations. First, we demonstrate empirically that a simple heuristic like setting βt ≡ 2 can lead to a significant proportion of bound violations. To do so, we follow the general approach adopted in (Fiedler et al., 2021a). We randomly generate 100 RKHS functions on [0, 1] with RKHS norm 10, using the squared exponential kernel with length scale 0.2/√2. Here, we utilize the orthonormal basis (ONB) of the corresponding RKHS as described in (Steinwart & Christmann, 2008, Section 4.4), selecting some of these basis functions and combining them in a weighted sum using randomly generated weights. For each of the resulting functions, we generate 10000 independent data sets by uniformly sampling 100 inputs from [0, 1], evaluating the RKHS function on these inputs, and then adding i.i.d. normal noise with variance 0.01. For each data set, we apply GP regression with a zero mean prior, and the SE kernel as covariance function, using the same length scale as for the generation of the target functions. Finally, we use the uncertainty set (4) with βt ≡ 2, and check on a fine grid on [0, 1] whether the target function from the RKHS is completely contained within this uncertainty set. Across the 100 functions, 2727 ± 3882 (average ± SD) of the 10000 runs per function led to a bound violation, a sizeable proportion. Second, we show that these bound violations can indeed lead to safety violations when running SafeOpt. We generate an RKHS function f (same approach as above), and define the safety threshold h by setting h = µ̂(f) − 0.2 SD(f), where µ̂(f) and SD(f) are the empirical mean and standard deviation of the test function f evaluated on a fine grid of the input space. We further evaluate |f′| on a fine grid, take the maximum, and multiply it by 1.1 to find a (slightly conservative) upper bound on the Lipschitz constant of f.
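The ingredients of these two experiments - a random RKHS function with norm 10, GP regression with the βt ≡ 2 tube check, the safety threshold, and the Lipschitz bound - can be sketched in NumPy at a reduced scale. The sketch below uses a single pre-RKHS sample function (cf. Section 6.4) instead of the ONB construction and a single data set; all sizes and names are illustrative, not the exact experimental code:

```python
import numpy as np

def se_kernel(x, y, ls=0.2 / np.sqrt(2)):
    """Squared exponential kernel matrix for 1-D inputs."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ls ** 2))

rng = np.random.default_rng(0)

# Random RKHS function f = sum_i alpha_i k(., x_i), rescaled to ||f||_k = 10.
centers = rng.uniform(0.0, 1.0, 30)
alpha = rng.normal(size=30)
K_cc = se_kernel(centers, centers)
alpha *= 10.0 / np.sqrt(alpha @ K_cc @ alpha)
f = lambda x: se_kernel(x, centers) @ alpha

# One noisy data set: 100 uniform inputs, i.i.d. normal noise with variance 0.01.
X = rng.uniform(0.0, 1.0, 100)
y = f(X) + rng.normal(scale=0.1, size=100)

# GP regression with zero prior mean and the same SE kernel.
K = se_kernel(X, X) + 0.01 * np.eye(100)
grid = np.linspace(0.0, 1.0, 500)
k_star = se_kernel(grid, X)
mu = k_star @ np.linalg.solve(K, y)
var = np.clip(1.0 - np.einsum("ij,ji->i", k_star, np.linalg.solve(K, k_star.T)), 0.0, None)

# First experiment: is f contained in the beta_t = 2 uncertainty tube on the grid?
violated = bool(np.any(np.abs(f(grid) - mu) > 2.0 * np.sqrt(var)))

# Second experiment's setup: safety threshold and Lipschitz bound on a fine grid.
f_grid = f(grid)
h = f_grid.mean() - 0.2 * f_grid.std()
L = 1.1 * np.max(np.abs(np.diff(f_grid) / np.diff(grid)))
```

Repeating the tube check over many independent noise realizations and many target functions yields violation frequencies of the kind reported above.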
We then run SafeOpt on this function 10000 times from a random safe initial state, again using i.i.d. additive normal noise with variance 0.01. This leads to 2862 (out of 10000) runs with safety violations (see Table 1), which is unacceptable for most application scenarios of SafeOpt-type algorithms. These experiments illustrate that using heuristics in SafeOpt can be highly problematic. Even in the relatively benign setting used above, both uncertainty-bound violations and safety violations occur. Moreover, in application scenarios for SafeOpt-type algorithms, tuning the heuristic scaling factor is not possible, as it is the primary mechanism for safety. Dispensing with the need for such heuristics, and retaining safety guarantees both in theory and practice, is therefore the primary motivation for this work.

## 5.2 Real-β-SafeOpt

As a first step, we propose using modern uncertainty bounds in SafeOpt that can be computed numerically, avoiding the replacement with unreliable heuristics. For this purpose, we investigate the original SafeOpt algorithm with βt from (7), which is very close to (6). To clearly distinguish this variant of SafeOpt from previous work, we call it *Real-β-SafeOpt*, emphasizing that we use a theoretically sound choice of βt. The resulting algorithm is again described by Algorithm 2, using (7) to compute βt. The bound (7) requires the determinant to be computed, which can be computationally expensive, but typical applications of SafeOpt and related algorithms allow only few evaluations, so this does not pose a problem. Furthermore, the additive noise needs to be a conditionally R-subgaussian (martingale-difference) sequence, where R (or an upper bound on R) is known. This assumption is standard, and in many cases harmless. It also has a clear interpretation. Finally, for a frequentist uncertainty bound, we also need the next assumption.

Assumption 2. *Some B ∈ R≥0 is known with ∥f∥k ≤ B.*
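Combined with the noise parameter R, Assumption 2 allows βt to be evaluated numerically. Since bound (7) is stated earlier in the paper, the following sketch only shows a representative determinant-based scaling factor in the style of Abbasi-Yadkori-type self-normalized bounds; the exact constants of (7) may differ, and the function below is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def beta_det(K_t, B, R, delta, lam=1.0):
    """Illustrative determinant-based frequentist scaling factor:
    beta_t = B + R * sqrt(log det(I + K_t / lam) + 2 log(1 / delta)),
    where K_t is the kernel matrix of the t inputs queried so far, B is an
    RKHS norm bound (Assumption 2), R is the sub-Gaussian noise parameter,
    delta is the failure probability, and lam is a regularization parameter.
    The exact constants of bound (7) may differ."""
    t = K_t.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(t) + K_t / lam)  # numerically stable log det
    return B + R * np.sqrt(logdet + 2.0 * np.log(1.0 / delta))
```

Since the determinant grows with the number of queries, such a βt increases slowly over time, in contrast to a fixed heuristic value.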
While Assumption 2 is standard in the literature on safe BO, to the best of our knowledge, it has only been used for theoretical analyses, but not in the actual algorithms. Combining these ingredients allows us to compute the bound (7), so Real-β-SafeOpt can actually be implemented. To illustrate the advantages of Real-β-SafeOpt, we run it on the same function f as before (cf. Section 5.1), set δ = 0.01, and use the true RKHS norm in (7). Running this experiment results in 0 failures (cf. Table 1), evidently well within the δ range required. An obvious question arising at this point is how Real-β-SafeOpt compares in terms of performance with previous SafeOpt variants relying on heuristics. Unfortunately, a meaningful comparison is impossible since the two algorithmic variants address different problems. If, for the latter, the heuristic constant β is determined by trial and error, and SafeOpt is then run with a constant that leads to no or almost no safety violations, then we essentially end up with a different algorithm overall. If the heuristic constant β is chosen arbitrarily or based on experience with previous usages of SafeOpt-type algorithms, then the original setting of hard safety constraints is abandoned, and we instead end up with a form of cautious BO. By contrast, Real-β-SafeOpt tries to stay within the original setting of SafeOpt that requires hard safety constraints. To investigate how Real-β-SafeOpt typically behaves, we will perform a careful empirical evaluation of this algorithm variant in Section 6.4.

## 6 Lipschitz-Only Safe Bayesian Optimization (LoSBO)

As discussed above, the safety of SafeOpt-type algorithms should only depend on reasonable assumptions that can be verified and interpreted by the algorithm's user. In this section, we address objective (O2). In particular, we investigate the central Assumption 2 of a known upper bound on the RKHS norm of the target function, finding that this is a problematic assumption in practice.
To overcome this issue, we propose Lipschitz-only Safe Bayesian Optimization (LoSBO) and perform extensive numerical experiments comparing LoSBO with Real-β-SafeOpt.

## 6.1 Practical Problems With The RKHS Norm Bound

A central ingredient in the Real-β-SafeOpt algorithm is an upper bound on the RKHS norm of the target function. In particular, the safety and exploration guarantees inherited from (Sui et al., 2015) hinge on the knowledge of such an upper bound. Unfortunately, while the RKHS norm is very well understood from a theoretical point of view, it is unclear how to derive such a bound in practice. In the following, we provide an overview of known characterizations and representations of the RKHS norm, and discuss why these results seem unsuitable for this task. For an arbitrary kernel, one can use discretization-based variational characterizations of the RKHS norm (and RKHS functions), for example, by maximization over a family of lower bounds on the RKHS norm (Fiedler et al., 2024, Section B), (Atteia, 1992, Chapter I), by minimization over certain bounds on function values at finitely many inputs (Okutmuştur, 2005, Theorem A.2.6), by minimization over finite interpolation problems (Paulsen & Raghupathi, 2016, Theorem 3.11), or by minimization over certain matrix inequalities (Paulsen & Raghupathi, 2016, Theorem 3.11). For separable RKHSs, the RKHS norm can be expressed using a sampling expansion (Korezlioglu, 1968), or as the limit of norms of RKHSs over finite inputs (Lukić & Beder, 2001, Lemma 4.6). On the one hand, all of these variational problems have an explicit form and they work for any kernel (any kernel with separable RKHS, respectively). However, it is not at all clear how to relate these representations to common properties of functions that might be used as reliable prior knowledge to derive upper bounds on the RKHS norm.
Furthermore, these variational problems generally cannot be used in numerical methods to estimate upper bounds on the RKHS norm, but only lower bounds, though they may be used for estimating bounds in heuristics (Tokmak et al., 2023). Since these characterizations are based on discretizations of a given RKHS function, in particular, using the exact function values, they are not suitable in the present setting of unknown target functions only accessible through noisy evaluations. If one considers more specific classes of kernels, other characterizations of the RKHS norm become available. For example, continuous kernels on a compact metric space equipped with a measure having full support (often called a Mercer kernel in this context) allow a description of the RKHS norm as a weighted ℓ2-norm (Steinwart & Christmann, 2008, Section 4.5), based on Mercer's theorem. This has a clear interpretation in the context of kernel methods, in particular, giving insight into the regularization behavior of the RKHS norm in optimization problems in kernel machines (Hastie et al., 2009, Section 5.8), which in turn can be used to derive learning rates for various statistical learning problems (Steinwart et al., 2009). More general forms of Mercer's theorem are available (Steinwart & Scovel, 2012), which in turn lead to improved learning theory results (Fischer & Steinwart, 2020). While the RKHS norm representation for Mercer kernels is an important tool for statistical learning theory and provides intuition about the regularization behavior, it is again unclear how it can be used to derive *quantitative* RKHS norm bounds. Expressing the RKHS norm for Mercer kernels as a weighted ℓ2-norm provides valuable *qualitative* intuition about the corresponding RKHS norm, but we are not aware of any practically relevant example where this has been used to translate realistic prior knowledge into a concrete upper bound on the RKHS norm. 
Similarly, for sufficiently regular translation-invariant kernels, the RKHS norm can be expressed as a weighted integral over the Fourier transform of RKHS functions (Wendland, 2004, Theorem 10.12). This formulation allows an intuitive interpretation of the RKHS norm as a generalized energy, penalizing high-frequency behavior of RKHS functions (as determined by the Fourier transform of the kernel). Several important function spaces are related to RKHSs, for example certain Sobolev spaces (Wendland, 2004, Chapter 10) or Fock spaces (Steinwart & Christmann, 2008, Section 4.4), which again have their own representations of the RKHS norm (potentially after some embedding). Again, all of these representations offer insights into the RKHS norm, and are important theoretical tools, but how they can be used to derive practically useful quantitative upper bounds on the RKHS norm remains unclear. To summarize, while an extensive body of work on characterization and representation results for the RKHS norm is available, these results appear to be unsuitable to derive upper bounds on this norm. In particular, to the best of our knowledge, it is not currently possible to derive concrete numerical upper bounds on the RKHS norm from realistic assumptions in non-trivial cases. The central difficulty here is that a concrete numerical upper bound on the RKHS norm is needed for algorithms like Real-β-SafeOpt. This issue is known, e.g., in the learning-based control community (Fiedler et al., 2022), but has not yet been addressed in the context of safe BO. A possible explanation for this current lack is that this problem does not appear in many other kernel-based learning scenarios. For example, to derive learning rates for SVMs and related kernel machines, membership of the target function⁴ in an appropriate RKHS (or in a function space that in a suitable sense can be approximated by an RKHS) is enough, and (an upper bound on) the RKHS norm of the target function is not needed by the kernel-based learning algorithm.

⁴ In the standard statistical learning theory setup, there is no notion of a target function as in the context of safe BO. Instead, the learning problem is described using a loss function. However, for example in the context of regression with the squared-error loss, the conditional expectation (the regression function) takes the role of our target function in the explanation above.

The discussion above has immediate consequences for SafeOpt-type algorithms. These algorithms are intended for hard safety settings, i.e., scenarios where any safety violation is very costly and must be avoided. In these scenarios, it is not possible to have a tuning phase before the actual optimization run where algorithmic parameters are set - the safety requirements hold from the start. For SafeOpt-type algorithms, this means that all algorithmic parameters have to be set beforehand, and these parameters need to ensure the safety guarantees and lead to satisfying exploration behaviour. However, this requires an upper bound on the RKHS norm of the target function, which currently appears to be impossible to derive from reasonable prior knowledge in practically relevant scenarios. Furthermore, using an invalid RKHS norm upper bound can indeed easily lead to safety violations. In order to illustrate this, we run SafeOpt on the same function f as before (cf. Section 5.1), and set δ = 0.01, but this time, we use a misspecified RKHS norm of 2.5 in (7). Running this experiment now results in 1338 failing runs out of 10000 (cf. Table 1), which is much more than what would be expected from a safety probability of 1 − δ = 0.99. Finally, simply using a very conservative upper bound on the RKHS norm is not a viable strategy to overcome this problem. A severe overestimation of the RKHS norm leads to very large and overly conservative uncertainty bounds, which in turn leads to performance issues.
In particular, since the uncertainty bounds are used to determine the safe sets in SafeOpt-type algorithms, a supposed RKHS norm upper bound that is too conservative can result in the algorithm "getting stuck", i.e., no more exploration is possible. It should also be noted that it is not even clear what "very conservative" means in the present context. Recalling the discussion of the RKHS norm from above, while an extensive body of theory and strong qualitative intuitions on these objects are available, the lack of concrete, quantitative bounds amenable to numerical evaluation makes it very difficult for users of BO to judge what a conservative estimate of the RKHS norm in a given situation could be. We argue that as a consequence, any SafeOpt-like algorithm *should not depend on the knowledge of an upper bound on the RKHS norm of the target function for safety*, at least in the setting of hard safety requirements where any failure should be avoided (with high probability). More generally, in order to guarantee safety, we should only use assumptions that are judged as reliable and reasonable by practitioners. In particular, all assumptions that are used for safety should have a clear interpretation for practitioners and a clear connection to established prior knowledge in the application area of the safe BO algorithm. In the end, it is up to the user of safe BO to decide which assumptions used in the safety mechanism can be considered as reliable.

## 6.2 Describing LoSBO And Its Safety Guarantees

Motivated by the popularity of SafeOpt, which combines GPs with a Lipschitz assumption, and the extensive experience of the systems and control community with Lipschitz bounds and bounded noise (cf. Section 4), we propose to use *an upper bound on the Lipschitz constant of the target function and a known noise bound* as the ingredients for safety in BO.
The key idea is to ensure that the safety mechanism works reliably, independently of the (statistical) exploration mechanism. In a generic SafeOpt algorithm, safety is guaranteed by ensuring that the safe sets St contain only safe inputs (potentially only with high probability), i.e., requiring that f(x) ≥ h for all x ∈ St. Once this property is fulfilled, the rest of the algorithm can no longer violate the safety constraints. Based on the earlier discussion, the construction of the safe set should only rely on the Lipschitz and noise bound. As is well known, these two assumptions allow the construction of lower bounds on the function (Milanese & Novara, 2004), and the corresponding safe sets should therefore be defined for all t ≥ 1 as

$$S_{t}=S_{t-1}\cup\{x\in D\mid y_{t-1}-E-L\,d(x_{t-1},x)\geq h\},\qquad(11)$$

where L ∈ R≥0 is a bound on the Lipschitz constant of the unknown target function, E ∈ R≥0 is a bound on the magnitude of the noise, and S0 is the initial safe set. We propose using this variant of the safe set, and leaving the rest of the generic SafeOpt algorithm unchanged, which leads to an algorithm we call Lipschitz-only Safe Bayesian Optimization (LoSBO). Our proposed modification applies to any algorithm instantiating the generic SafeOpt strategy outlined in Section 3. For concreteness, we focus in the following on the original SafeOpt algorithm from (Sui et al., 2015). The resulting variant of LoSBO is described in detail in Algorithm 3. The algorithm fulfills the following safety guarantee.

Algorithm 3 LoSBO
Require: Lipschitz constant L, algorithm to compute βt, noise bound E, initial safe set S0, safety threshold h
1: Q0(x) ← R for all x ∈ D ▷ Initialization of uncertainty sets
2: C0(x) ← [h, ∞) for all x ∈ S0
3: C0(x) ← R for all x ∈ D \ S0
4: for t = 1, 2, . . . do
5: Ct(x) ← Ct−1(x) ∩ Qt−1(x) for all x ∈ D ▷ Compute upper and lower bounds for current iteration
6: ℓt(x) ← min Ct(x), ut(x) ← max Ct(x) for all x ∈ D
7: if t > 1 then ▷ Compute new safe set
8: St ← St−1 ∪ {x ∈ D | yt−1 − E − L d(xt−1, x) ≥ h}
9: else
10: S1 ← S0
11: end if
12: Gt ← {x ∈ St | ∃x′ ∈ D \ St : ut(x) − L d(x, x′) ≥ h} ▷ Compute set of potential expanders
13: Mt ← {x ∈ St | ut(x) ≥ max over xS ∈ St of ℓt(xS)} ▷ Compute set of potential maximizers
14: xt ← argmax over x ∈ Gt ∪ Mt of wt(x) ▷ Determine next input
15: Query function with xt, receive yt = f(xt) + ϵt
16: Update GP with new data point (xt, yt), resulting in mean µt and standard deviation σt
17: Compute updated βt
18: Qt(x) ← [µt(x) − βt σt(x), µt(x) + βt σt(x)] for all x ∈ D
19: end for

Proposition 1. *Let f : D → R be an L-Lipschitz function. Assume that |ϵt| ≤ E for all t ≥ 1, and let ∅ ≠ S0 ⊆ D be such that f(x) ≥ h for all x ∈ S0. For any choice of the scaling factors βt > 0, running the LoSBO algorithm leads to a sequence of only safe inputs, i.e., we have f(xt) ≥ h for all t ≥ 1.*

Proof. It is enough to show that for all t ≥ 0 and all x ∈ St, we have f(x) ≥ h. We proceed by induction on t. For t = 0, the claim holds by assumption. For t ≥ 1, let x ∈ St = St−1 ∪ {x ∈ D | yt−1 − E − L d(xt−1, x) ≥ h}. If x ∈ St−1, then f(x) ≥ h follows from the induction hypothesis. Otherwise, we have f(x) = f(xt−1) + ϵt−1 − ϵt−1 + f(x) − f(xt−1) ≥ yt−1 − E − L d(xt−1, x) ≥ h, where we used the L-Lipschitz continuity of f and the noise bound |ϵt−1| ≤ E in the first inequality, and the definition of St in the second inequality.

The argument in the proof above is well known, e.g., in the systems identification literature, and the resulting bounds even fulfill certain optimality properties (Milanese & Novara, 2004). We would like to stress that the safety guarantee of LoSBO, as formalized in Proposition 1, is *deterministic*, i.e., it always holds and not only with high probability.
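On a discretized one-dimensional domain, the safe-set update of Eq. (11) (line 8 of Algorithm 3) amounts to a single vectorized comparison. The following sketch uses hypothetical numbers purely for illustration:

```python
import numpy as np

def update_safe_set(safe_mask, grid, x_prev, y_prev, L, E, h):
    """LoSBO safe-set update on a grid, cf. Eq. (11):
    S_t = S_{t-1} union {x in D | y_{t-1} - E - L * d(x_{t-1}, x) >= h}."""
    return safe_mask | (y_prev - E - L * np.abs(grid - x_prev) >= h)

# Illustrative usage on D = [0, 1]: after observing y = 1.0 at x = 0.5,
# all x with 1.0 - 0.1 - 2 * |x - 0.5| >= 0.35, i.e., |x - 0.5| <= 0.275, become safe.
grid = np.linspace(0.0, 1.0, 101)
S0 = np.zeros(101, dtype=bool)
S0[50] = True  # singleton initial safe set {0.5}
S1 = update_safe_set(S0, grid, x_prev=0.5, y_prev=1.0, L=2.0, E=0.1, h=0.35)
```

Note that the update only uses the last observation, the Lipschitz bound L, and the noise bound E - the GP model does not enter, which is exactly why the safety guarantee is independent of the model.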
This type of safety is often preferred in the context of control and robotics (Hewing et al., 2020; Brunke et al., 2022).

Remark 1. *Inspecting the proof of the preceding result reveals that considerably more general (and weaker) assumptions can be used with the same argument.*

1. *Instead of a fixed noise bound E, one can mutatis mutandis use asymmetric, time-varying, and heteroscedastic noise. Formally, one can assume that two functions Eℓ, Eu : D × N0 → R≥0 exist such that for all t ≥ 1 and x ∈ D, it holds that Eℓ(x, t) ≤ ϵt ≤ Eu(x, t), if x is the input used at time t.*

2. *Instead of Lipschitz continuity, one can assume that there exists a continuous and strictly increasing function ϕ : R≥0 → R≥0 with ϕ(0) = 0 such that for all x, x′ ∈ D it holds that f(x′) ≥ f(x) − ϕ(dD(x, x′)). This includes the case of Hölder continuity, which has previously been used in a similar context (Calliess, 2014).*

*To keep the presentation focused and easy to follow, we do not use this additional flexibility in the present work, but everything that follows immediately applies to these more general cases.*

Remark 2. *Proposition 1 states that if E ∈ R>0 is a bound on the noise magnitude, then LoSBO is safe. Suppose we know that |ϵt| ≤ Bϵ for some constant Bϵ ∈ R>0; then we could set E = Bϵ and assume that the bound Bϵ is sharp. For example, we might have ϵt ∼ ½ δBϵ + ½ δ−Bϵ, where δx is the Dirac distribution with atom on x. If we choose E = Bϵ, then LoSBO is indeed safe according to Proposition 1, i.e., for all inputs xt ∈ D that are queried by the algorithm, we have f(xt) ≥ h. However, if we additionally assume that there are sizeable parts of the input space D where f ≈ h, then, since the border of the safe sets will be in such a region, it is likely that inputs from this area will be queried. This means that it is likely that measurements yt with yt < h will be received - this happens if f(xt) ≈ h and ϵt ≈ −Bϵ. While according to the formal model such an input xt is safe, to the user it looks as if a safety violation occurred. Since in practice a detected (not necessarily real) safety violation might lead to some costly action (e.g., emergency stop), such a situation is undesirable. To avoid this, we can set E = 2Bϵ. While this introduces some conservatism, it avoids the described apparent safety violations - essentially, this option mitigates false alarms. Whether to choose E = Bϵ or E = 2Bϵ (or an option in between) in the present situation is ultimately a practical question to be addressed by the practitioner using the algorithm.*

## 6.3 Discussion

While LoSBO arises from a rather minor modification of the generic SafeOpt algorithm class (by changing the computation of the safe sets St), **on a conceptual level significant differences arise**. Inspecting the proof of Proposition 1 shows that the safety guarantee of LoSBO is *independent* of the underlying model sequence (Mt)t. As an important consequence, the choice of the uncertainty sets used in the optimization of the acquisition function cannot jeopardize safety. One consequence is that the assumption that the target function f is contained in the RKHS of the covariance function used in GP regression is no longer necessary. In particular, *in order to ensure safety, we need only Assumption 1 together with a noise bound, and not Assumption 2 anymore*. Similarly, hyperparameter tuning is not an issue for safety, cf. also our discussion in Section 2.3. Of course, an appropriate function model is important for good exploration performance, but this issue is now independent of the safety aspect. As another, even more important consequence, in the context of the concrete LoSBO variant described in Algorithm 3, the scaling parameter βt is now a proper tuning parameter. Modifying it, even online, in order to improve exploration no longer interferes with the safety requirements.
This differs from previous variants (and practical usage) of SafeOpt, where the scaling factors βt *cannot* be freely tuned, since they are central for the safety mechanism of these algorithms. This aspect is illustrated in Figure 3. In the situation depicted there, the uncertainty bounds do not hold uniformly, i.e., the target function is not completely covered by them, and deriving safe sets from these uncertainty bounds, regardless of whether the additional knowledge of the Lipschitz bound is included (Sui et al., 2015) or not (Berkenkamp et al., 2016a), results in potential safety violations. However, since in LoSBO these bounds are ignored for the safe sets, this problem does not occur. Using a **Lipschitz bound instead of an RKHS norm bound** comes with several advantages. First, a (quantitative) Lipschitz bound has a clear interpretation - it is an upper bound on the slope of the target function. Second, the Lipschitz assumption can be related to established prior knowledge: a known upper bound on the Lipschitz constant corresponds to an a priori bound on the rate of change of a function, i.e., it is related to the sensitivity of the underlying problem. As discussed in depth in Section 6.1, to the best of our knowledge this is not possible for the RKHS norm in non-trivial cases. Third, as already mentioned above, the Lipschitz assumption is independent of the internal model used for exploration, and hence much less prone to model misspecifications like wrong kernel hyperparameters. Finally, the exact Lipschitz constant is rarely known in practice, and hence an upper bound has to be used. It is clear that a very conservative upper bound will inhibit exploration, just as a conservative upper bound on the RKHS norm, but this issue appears to be unavoidable with any quantitative a priori assumption.
![15_image_0.png](15_image_0.png)

Figure 3: Illustration of LoSBO being safe, while a safe set based on invalid uncertainty bounds leads to potential safety violations. The safe set of LoSBO (gray set) is determined by the constant E (gray arrow) and the Lipschitz cone (orange). The GP mean and the confidence bounds are illustrated in blue. The points in the safe set given by the lower confidence bound are green if they are safe and red if they are unsafe.

The original SafeOpt algorithm comes with conditional⁵ **exploration guarantees**. Since our modification leading to LoSBO essentially separates the safety and exploration mechanisms, the exploration guarantees from SafeOpt are rendered inapplicable in the present context. An inspection of the proof of (Sui et al., 2015, Theorem 1) shows that it cannot easily be modified to apply to LoSBO, as the argument used there relies on the GP model interacting with the safety mechanism⁶. Furthermore, we suspect that pathological situations exist where LoSBO fails to properly explore. However, we have not observed such a situation in our extensive experiments. While providing (again conditional) exploration guarantees for LoSBO is an interesting focus for future work, we argue that the present lack of such theoretical guarantees does not diminish the relevance and usefulness of this algorithm. First, LoSBO shows excellent exploration performance, as demonstrated in the experiments described in the next section. Second, since the scaling parameters βt (which have an important influence on the exploration performance) are proper tuning parameters in LoSBO, unsatisfying performance of the algorithm can be overcome by using this tuning knob. We would like to stress again that in the previous variants of SafeOpt, this option is not available, as the scaling parameters need to lead to valid uncertainty sets.
## 6.4 Experimental Evaluation

As discussed in Section 5.2, a meaningful comparison with SafeOpt implementations relying on heuristics is impossible, since the latter essentially address a different problem setting. For this reason, we compare LoSBO with Real-β-SafeOpt, which precisely adheres to the original SafeOpt setting.

**Experimental setup** For the empirical evaluations, a frequentist approach is used, as this is the most natural setting for SafeOpt-like algorithms; see (Srinivas et al., 2010; Fiedler et al., 2021a;b) for further discussion. A target function is therefore fixed, and the algorithms are run multiple times on this same function with independent noise realizations. To enable a clear evaluation of the algorithms, synthetic target functions are used, and since we want to compare LoSBO with Real-β-SafeOpt - the latter requiring a target function from an RKHS with a known RKHS norm upper bound - we generate target functions from an RKHS. The frequentist setup is inherently worst-case, but for numerical experiments it is necessary to restrict ourselves to finitely many RKHS functions. Nevertheless, the RKHS functions used should be somewhat representative to give a meaningful indication of the algorithmic performance. In particular, any bias due to the function-generating method should be minimized.

⁵By this we mean that the exploration guarantee given in (Sui et al., 2015, Theorem 1) is conditional on the choice of the kernel. Inspecting the expression for the time t* in this latter result, we find that this result requires appropriate growth behavior of the maximum information gain γt in order to lead to non-vacuous exploration guarantees.

⁶More precisely, with LoSBO we cannot ensure that the inequality in the last display in the proof of (Sui et al., 2015, Lemma 7) holds.

In the following experiments, we sample functions from the pre-RKHS, i.e., given a kernel k, we randomly choose some M ∈ N>0, α1, . . . , αM ∈ R and x1, . . . , xM ∈ D, and then use

$$f=\sum_{i=1}^{M}\alpha_{i}k(\cdot,x_{i})\in H_{k}$$

as a target function, which works for any kernel, cf. Section 2.2. In the case of the squared exponential kernel, we also utilize the ONB described in Steinwart & Christmann (2008, Section 4.4), which we have already used for some of the experiments in Sections 5.1 and 6.1. Generating RKHS functions with more than one method ensures more variety among the considered RKHS functions. Moreover, with both approaches, the exact RKHS norm is available (and can be set by normalization), and the generated functions can be evaluated at arbitrary inputs. Unless noted otherwise, we generate RKHS functions with an RKHS norm of 10, i.e., we consider target functions f ∈ Hk with ∥f∥k = 10. For a more thorough discussion of generating RKHS functions and subtle biases due to the chosen method, we refer to (Fiedler et al., 2021a). LoSBO and Real-β-SafeOpt work on arbitrary metric spaces, as long as a kernel can be defined on that space. Following the previous safe BO literature, we restrict ourselves to compact subsets of R^d, and in this section, for simplicity, we further restrict ourselves to d = 1. To run LoSBO and Real-β-SafeOpt, we need a bound on the Lipschitz constant of the target function, as well as an initial safe set. For the former, we restrict ourselves to kernels inducing continuously differentiable RKHS functions, since such functions are Lipschitz continuous on the compact domain. To determine a bound on the Lipschitz constant, we evaluate the target function on a fine discretization of the input domain, numerically compute an approximation of the Lipschitz constant, and multiply the result by 1.1 to counterbalance the discretization error. Since the target functions are randomly generated, we compute an appropriate safety threshold for each function so that some portions of the input space are safe and some are unsafe. This avoids trivial situations for safe BO.
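As an illustration, the pre-RKHS sampling and the grid-based Lipschitz estimate described above can be sketched in a few lines. This is a simplified one-dimensional sketch with a squared exponential kernel; the length scale and sample sizes are illustrative choices of ours, not the paper's exact values:

```python
import numpy as np

def se_kernel(a, b, ls=0.2):
    # Squared exponential kernel matrix for 1d input arrays a and b.
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ls ** 2))

rng = np.random.default_rng(0)

# Sample a pre-RKHS function f = sum_i alpha_i k(., x_i) on D = [0, 1].
M = int(rng.integers(5, 20))
centers = rng.uniform(0.0, 1.0, size=M)
alphas = rng.normal(size=M)

# ||f||_k = sqrt(alpha^T K alpha); rescale so the RKHS norm equals 10.
K = se_kernel(centers, centers)
alphas *= 10.0 / np.sqrt(alphas @ K @ alphas)

def f(x):
    # Evaluate the sampled RKHS function at arbitrary inputs.
    return se_kernel(np.atleast_1d(x), centers) @ alphas

# Lipschitz bound: finite differences on a fine grid, inflated by 1.1
# to counterbalance the discretization error.
grid = np.linspace(0.0, 1.0, 10_000)
vals = f(grid)
L = 1.1 * np.max(np.abs(np.diff(vals)) / np.diff(grid))
```

By construction, the exact RKHS norm of the generated function is available, which is what allows it to be normalized to any target value.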
More precisely, for a given target function f, we compute its empirical mean µ̂(f) and empirical standard deviation ŜD(f) on a fine grid, and then set h = µ̂(f) − 0.2 ŜD(f). Next, for each target function f and safety threshold h, we need to generate an initial safe set. Similar to the choice of the safety threshold, trivial situations should be avoided, particularly cases where no safe exploration is possible. To achieve this goal, we first determine some x0 ∈ argmax_{x∈D} f(x), then consider the set D ∩ I_{x0}, where I_{x0} is the largest interval such that x0 ∈ I_{x0} and f|_{I_{x0}} ≥ h + E. Finally, one input is randomly selected from this set, and the singleton set containing this latter input is then selected as the initial safe set. Using a singleton initial safe set is common in the literature on SafeOpt-type algorithms, see (Berkenkamp et al., 2016a). The typical application scenario for SafeOpt-type algorithms is the optimization of some performance measure by interacting with a physical system. In particular, each function query is relatively expensive, hence in these scenarios only a few function values are sampled. Motivated by this, in all of the following experiments, for each target function, LoSBO and Real-β-SafeOpt are run for 20 iterations, starting from the same safe set. For each target function, this is repeated 10000 times to allow a frequentist evaluation of the behavior. Finally, each type of experiment is run with 100 different randomly generated target functions. To make runs with different target functions comparable, we evaluate the performance in a given run on a target function f by

$$\hat{f}_{t}^{*}=\frac{f\left(\operatorname{argmax}_{x\in S_{t}}\mu_{t}(x)\right)-h}{f^{*}-h},\qquad(12)$$

where µt(x) is the predictive mean (in LoSBO and Real-β-SafeOpt the posterior mean) at time t ≥ 1, evaluated at input x, and f* is the maximum of f.
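To make the metric concrete, here is a minimal sketch of how (12) can be computed on a discretized safe set. The toy target and "posterior mean" below are ours, purely for illustration:

```python
import numpy as np

def normalized_performance(f, mu_t, safe_set, h, f_star):
    # Metric (12): value of f at the maximizer of the posterior mean
    # over the current safe set, normalized via the safety threshold h.
    x_hat = safe_set[np.argmax(mu_t(safe_set))]
    return (f(x_hat) - h) / (f_star - h)

# Toy example: quadratic target and a slightly shifted posterior mean.
f = lambda x: 1.0 - (x - 0.30) ** 2
mu = lambda x: 1.0 - (x - 0.25) ** 2
safe_set = np.linspace(0.0, 0.5, 101)
score = normalized_performance(f, mu, safe_set, h=0.5, f_star=1.0)
# score is close to (but below) 1, since mu's maximizer is near f's.
```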
This metric will be averaged over all runs for a given f, and over all target functions, respectively, in the following plots. For simplicity, independent additive noise, uniformly sampled from [−Bϵ, Bϵ], is used in all of the following experiments. As is well known, bounded random variables are subgaussian, and we can set R = Bϵ in Real-β-SafeOpt. Additionally, we choose δ = 0.01 and the true RKHS norm as the RKHS norm upper bound in Real-β-SafeOpt, unless noted otherwise. We further set the nominal noise variance equal to R in both LoSBO and Real-β-SafeOpt. Following the discussion in Remark 2, we choose E = 2Bϵ in LoSBO. Finally, we must specify a strategy to compute βt in LoSBO. Recall from Section 6.2 that these scaling factors are now proper tuning parameters. In all of the following experiments, we use β ≡ 2 in LoSBO, as this is a common choice in the literature on SafeOpt and GP-UCB. Choosing such a simple rule also simplifies the experimental evaluation, as no additional tuning parameters or further algorithmic choices are introduced. Unless noted otherwise, Bϵ = 0.01 is used in all of the following experiments. For convenience, the experimental results are concisely summarized in Table 1.

| Algorithm | SafeOpt | Real-β-SafeOpt | Real-β-SafeOpt | Real-β-SafeOpt | LoSBO |
|---|---|---|---|---|---|
| Experimental setting | β = 2 (heuristic) | B = 2.5 (underestimate) | B = 10 (correct) | B = 20 (overestimate) | β = 2 |
| Not started % | 1.93 | 3.16 | 30.40 | 68.34 | 0.018 |
| Safety violations % | 3.95 | 0.859 | 0 | 0 | 0 |
| Safety violations, worst case % | 28.62 | 13.38 | 0 | 0 | 0 |
| Final performance % | 88.75 | 88.76 | 82.45 | 76.69 | 90.90 |

Table 1: Safety-performance tradeoff in SafeOpt. We evaluated 100 functions sampled from an SE kernel with B = 10. On each function, we ran each algorithm 10000 times, starting from two initial safe points.

Figure 4: Comparison of LoSBO and Real-β-SafeOpt in a well-specified setting. Thick solid lines are the means over all functions and repetitions, thin solid lines are the means over all repetitions for each individual function, and the shaded area corresponds to one standard deviation over all runs.

**Well-specified setting** We start by comparing LoSBO and Real-β-SafeOpt in a well-specified setting. This means that all the algorithmic parameters are set correctly; in particular, the covariance function used in GP regression is the kernel generating the RKHS from which the target functions are sampled. In Figure 4, the results for this setting are presented. The thick solid lines are the means over all 100 functions and all of their repetitions, the fine lines are the means for each of the individual 100 target functions (over 10000 repetitions for each function), and the shaded area shows an interval of width one standard deviation around the mean, again over all functions. Figure 4, top left, displays the results for functions from the Squared Exponential RKHS, sampled using the ONB approach. Interestingly, LoSBO exhibits superior performance compared to Real-β-SafeOpt, despite the latter algorithm being provided with the correct ingredients (Lipschitz bound, kernel, RKHS norm bound, noise variance). Figure 4, top right, displays the results for functions sampled from the Squared Exponential pre-RKHS. While no real difference in performance is noticeable for LoSBO, Real-β-SafeOpt appears to perform slightly better than on target functions generated using the ONB approach. A potential explanation lies in the shapes of the functions that typically arise from the two different sampling methods.
As observed in (Fiedler et al., 2021a), functions sampled from the ONB look more "bumpy" compared to pre-RKHS functions, and appear to be more challenging for the uncertainty bounds. Since Real-β-SafeOpt needs to precisely adhere to these bounds, its exploration performance is diminished. By contrast, LoSBO behaves overly optimistically, as βt ≡ 2 is used while the underlying RKHS functions have RKHS norm 10; see also the evaluations in (Fiedler et al., 2021a). It appears that this over-optimism leads to better performance, and since for safety LoSBO *does not* rely on scaling factors βt that correspond to valid frequentist uncertainty sets, this over-optimism does not jeopardize safety. Finally, Figure 4, lower left, shows the results for RKHS functions corresponding to a Matern-3/2 kernel and sampled with the pre-RKHS approach. Qualitatively, we see the same picture, though the performance of both LoSBO and Real-β-SafeOpt appears to be slightly weaker compared to the previous setting. Intuitively, this is to be expected, as Matern RKHS functions are generically less smooth than Squared Exponential RKHS functions, and both LoSBO and Real-β-SafeOpt rely on a Lipschitz bound, which is in turn related to the regularity of functions.

**Misspecified setting** We turn to misspecified settings, where the algorithmic parameters do not match the true setting of the target function. This is particularly interesting in the present situation, since the underlying GP model does not impact the safety of LoSBO, and therefore becomes amenable to tuning. The results of the experiments are displayed in Figure 5, where once more the thick solid lines are the means over all 100 functions, the fine lines are the means for each of the individual 100 target functions, and the shaded areas show an interval of width one standard deviation around the mean, again over all functions.

Figure 5: Comparison of LoSBO and Real-β-SafeOpt in misspecified settings.
We start with the length scale used in the kernel, which is arguably the most important type of hyperparameter in practice. In Figure 5, top left, we show the results for overestimating the length scale in GP regression, using Matern kernels. More precisely, a Matern kernel is used both as the kernel for generating the target functions and as the covariance function in GP regression, but the length scale of the covariance function in GP regression is 4 times the length scale of the kernel used to generate the target functions. The qualitative picture remains the same, though it appears that the performance of Real-β-SafeOpt suffers from the misspecification more than that of LoSBO. More importantly, in this setting safety violations occur in 12.57% of all runs. Moreover, for the worst-behaving target function, 943 out of 10000 runs lead to safety violations, which is unacceptable in a real-world use case. Figure 5, top right, shows the complementary situation of underestimating the length scale in GP regression, again using Matern kernels. The length scale of the covariance function used in GP regression is 0.2 times the length scale of the kernel that is used to generate the target functions. Again, the qualitative picture remains the same, but the performance degradation is worse for both algorithms in this case. Finally, consider the case where a different kernel is used to generate the target functions than the covariance function in the GP regression. We use a Matern-3/2 kernel to generate the target functions, and a Squared Exponential kernel as the covariance function in the GP regression, with the same length scale for both. The results are displayed in Figure 5, lower left. Interestingly, essentially no qualitative difference can be noticed compared to the well-specified Matern case (Figure 4, lower left). We suspect that this is due to the correct specification of the length scale, which in the present setting is more important than the kernel misspecification.
## 7 LoS-GP-UCB

In SafeOpt-type algorithms, including our variant LoSBO (Algorithm 3), the central computational step is the optimization of the acquisition function over the expander and maximizer sets, see Algorithm 1. Inspecting the definition of the latter two sets, see Section 3, makes it clear that computing these sets requires a discrete input set D. For many typical application scenarios, at least parts of the input set will be continuous (e.g., if the optimization variables include a physical parameter that can vary continuously). This means that some form of discretization is necessary before a SafeOpt-type algorithm can be applied. Typically, equidistant gridding of the (continuous parts of the) input set is used as a discretization, see (Berkenkamp et al., 2016a) for a typical example. As a result, SafeOpt-type algorithms become impractical for even moderate dimensions, e.g., D ⊆ R^d for d > 3 (Kirschner et al., 2019a; Sukhija et al., 2023), and as a member of this class, LoSBO inherits this limitation. In this section, we present and investigate an approach to overcome this issue. Instead of adapting existing solution approaches like (Duivenvoorden et al., 2017; Kirschner et al., 2019a; Sukhija et al., 2023), we suggest a pragmatic and straightforward variant that is motivated by three observations. First, safety in SafeOpt-type algorithms is ensured by restricting the optimization of the acquisition function to (subsets of) safe sets, i.e., sets S ⊆ D such that f|S ≥ h, where f is the unknown target function. In other words, as long as we ensure that the acquisition function optimization is restricted to such sets S, the resulting algorithm will be safe, no matter how this optimization is performed or whether an additional restriction is added (as in SafeOpt, where the optimization is only over expander and maximizer sets).
In the case of LoSBO, these safe sets are of a particularly simple form, as they are the union of closed spheres in a metric space; see Figure 6, left, for an illustration for the case D ⊆ R^2. In typical application scenarios of SafeOpt-type algorithms, the number of input queries is relatively low, and hence the aforementioned union is only over relatively few sets. Second, a discrete input set for SafeOpt-type algorithms is necessary due to the involved definition of the expander and maximizer sets, which in turn are defined to guarantee proper exploration in the original SafeOpt setting (Sui et al., 2015). However, for concrete practical problems, such an under-exploration might not pose a severe challenge, and an existing BO algorithm can simply be made safe by restricting the optimization of the acquisition function to a safe set S as described above. In fact, the original SafeOpt paper (Sui et al., 2015) already discussed a safe variant of GP-UCB (Srinivas et al., 2010). This indicates that it might be possible to avoid the complicated sets involved in SafeOpt-type algorithms and still attain practically useful exploration performance. Third, in the current practice of BO, moderately high dimensions of the input space are not problematic for modern BO algorithms.⁷ Typically, acquisition functions are optimized by running a local optimization method (usually gradient-based) from several initial guesses (which might be random, or based on heuristics). In particular, no gridding is necessary, and this strategy can even deal with moderately high dimensions, since local search methods behave well despite increasing dimensionality. In fact, state-of-the-art BO libraries like BoTorch (Balandat et al., 2020) implement exactly this approach as the default option.
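The union-of-balls form of these safe sets is also computationally convenient. A minimal numpy sketch of a membership test (the function name and example values are ours):

```python
import numpy as np

def in_safe_set(x, centers, radii):
    # True iff x lies in the union of closed balls B_{r_j}(z_j),
    # i.e., within distance r_j of some ball center z_j.
    dists = np.linalg.norm(np.asarray(centers) - np.asarray(x), axis=-1)
    return bool(np.any(dists <= np.asarray(radii)))

# Example: a safe set spanned by two balls in R^2.
centers = [[0.0, 0.0], [1.0, 1.0]]
radii = [0.5, 0.25]
```

Since the number of queries, and hence the number of balls, is small in typical applications, this check is cheap.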
Based on these three observations, we now propose a straightforward safe BO algorithm that works even in moderate input dimensions, is compatible with modern BO libraries, and retains all the safety guarantees from LoSBO. In Section 7.1, we describe this algorithm in detail and provide some discussion. In Section 7.2, we evaluate the algorithm empirically and compare it to LoSBO.

⁷Here we are referring to handling the dimensionality within the algorithm, specifically the optimization of the acquisition function, and not the exploration performance. Of course, the latter poses challenges, especially if not enough structural or qualitative prior knowledge is encoded in the BO function model.

Figure 6: Illustration of safe sets for a 2-dimensional input set. Left: safe set in LoSBO resulting from three function evaluations. Right: illustration of the acquisition function optimization in LoS-GP-UCB over a safe set resulting from four function evaluations. Two initial guesses for the local optimization are used for each of the balls that span the safe set according to (15).

## 7.1 Algorithm

We now introduce a practical safe BO algorithm that retains the favorable properties of LoSBO, but also works in moderately high dimensions. Consider the setting of LoSBO as described in Section 6.2. Motivated by the preceding discussion, we start with a standard GP-based BO algorithm that does not need expander and maximizer sets (or similarly complicated sets requiring discretization). Due to its relation to SafeOpt, we choose GP-UCB (Srinivas et al., 2010) for this task, so at step t ≥ 1, the next input is

$$x_{t+1}=\operatorname{argmax}_{x\in D}\,\mu_{t}(x)+\beta_{t}\sigma_{t}(x),\qquad(13)$$

for an appropriate scaling factor βt ∈ R>0. As usual, ties are broken arbitrarily.
Using the (scaled) posterior variance as the acquisition function would be even closer to SafeOpt, but numerical experiments indicate that GP-UCB performs slightly better in this context. Next, we restrict the acquisition function optimization to the safe sets St as defined for LoSBO in (11),

$$x_{t+1}=\operatorname{argmax}_{x\in S_{t}}\,\mu_{t}(x)+\beta_{t}\sigma_{t}(x).\qquad(14)$$

Observe now that

$$S_{t}=\bigcup_{j=1}^{N_{t}}\bar{B}_{r_{j}}(z_{j})\qquad(15)$$

for some Nt ∈ N>0, r1, . . . , rNt ∈ R>0 and z1, . . . , zNt ∈ D, where B̄r(z) = {x ∈ D | dD(z, x) ≤ r} is the closed ball with radius r ∈ R>0 and center z ∈ D in the metric space D. For example, if the initial safe set has only one element x0 and no input is repeatedly sampled, then Nt = t + 1, z1 = x0 and zj = xj−1 for j = 2, . . . , t + 1. Using the decomposition (15), we now have

$$x_{t+1}=\operatorname{argmax}_{j=1,\dots,N_{t}}\,\operatorname{argmax}_{x\in\bar{B}_{r_{j}}(z_{j})}\,\mu_{t}(x)+\beta_{t}\sigma_{t}(x).\qquad(16)$$

Each of the inner optimization problems max_{x∈B̄rj(zj)} µt(x) + βtσt(x), j = 1, . . . , Nt, is a maximization problem over the convex set B̄rj(zj), and each of these inner problems is independent. In particular, these optimizations can be trivially parallelized.

**Algorithm 4** LoS-GP-UCB
**Require:** Lipschitz constant L, algorithm to compute βt, noise bound E, initial safe set S0, safety threshold h
1: Compute N0, z1, . . . , zN0, r1, . . . , rN0, β0
2: **for** t = 1, 2, . . . **do**
3: xt = argmax_{j=1,...,N_{t−1}} argmax_{x∈B̄rj(zj)} µt−1(x) + βt−1σt−1(x) ▷ Determine next input
4: Query function with xt, receive yt = f(xt) + ϵt
5: Update GP with the new data point (xt, yt), resulting in posterior mean µt and posterior standard deviation σt
6: Compute updated βt
7: Compute Nt and add new zj, rj
8: **end for**
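The decomposition (16) suggests a simple implementation. The following numpy sketch is ours; it replaces the gradient-based multistart optimization used in practice with random search inside each ball, which keeps the example self-contained:

```python
import numpy as np

def los_gp_ucb_acquisition(mu, sigma, beta, centers, radii, rng, n_cand=64):
    # Maximize the UCB mu + beta*sigma over the union of closed balls
    # in (15) by solving the inner problems in (16) independently.
    # Random candidates inside each ball stand in for the multistart
    # gradient-based local optimization.
    best_x, best_val = None, -np.inf
    for z, r in zip(np.atleast_2d(centers), radii):
        d = z.shape[0]
        # Uniform samples in the closed ball B_r(z), plus its center.
        dirs = rng.normal(size=(n_cand, d))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        radial = r * rng.uniform(size=(n_cand, 1)) ** (1.0 / d)
        cand = np.vstack([z[None, :], z + dirs * radial])
        ucb = mu(cand) + beta * sigma(cand)
        j = int(np.argmax(ucb))
        if ucb[j] > best_val:
            best_val, best_x = float(ucb[j]), cand[j]
    return best_x

# Example: one ball around the origin and an illustrative noiseless surrogate.
rng = np.random.default_rng(0)
mu_toy = lambda X: -np.sum(X ** 2, axis=1)
sigma_toy = lambda X: np.full(len(X), 0.1)
x_next = los_gp_ucb_acquisition(mu_toy, sigma_toy, beta=2.0,
                                centers=[[0.0, 0.0]], radii=[0.5], rng=rng)
```

Since the candidate set always contains each ball center, the returned point is never worse (in UCB value) than the best center, and each loop iteration is independent, reflecting the trivial parallelism noted above.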
In practice, one usually has D ⊆ R^d (often with a simple geometry) and a differentiable covariance function k, so it is possible to use a gradient-based local optimization method started from multiple initial guesses. This is illustrated in Figure 6, right. All of these multistarts are independent and can therefore also be parallelized. We thus arrive at Algorithm 4, which we call the Lipschitz-only Safe Gaussian Process Upper Confidence Bound (LoS-GP-UCB) algorithm in the following. LoS-GP-UCB can easily be implemented with state-of-the-art BO libraries. For the numerical experiments described in the next section, we have chosen BoTorch (Balandat et al., 2020), which allows an easy parallel implementation of the acquisition function optimization. Finally, LoS-GP-UCB retains the safety guarantees from LoSBO. In particular, the scaling factors (βt)t remain tuning factors, and the safety of LoS-GP-UCB is independent of their choice. Furthermore, safety of LoS-GP-UCB *does not require Assumption* 2. Similarly, Remarks 1 and 2 also apply to LoS-GP-UCB.

## 7.2 Experimental Evaluation

Analogously to the case of Real-β-SafeOpt and LoSBO, a frequentist setup is used, i.e., the algorithm is run on a fixed target function for many independent noise realizations. Two aspects are investigated in the experiments. First, LoS-GP-UCB is compared with Real-β-SafeOpt and LoSBO. Second, we apply LoS-GP-UCB to several benchmark functions with moderate input dimensions. We start with the comparison to Real-β-SafeOpt and LoSBO. Since these two algorithms rely on a discrete input space, this comparison necessarily has to be performed on functions with a low-dimensional input. We choose essentially the same experimental settings as in Section 6.4 and consider only the well-specified case. As for LoSBO, we use βt ≡ 2 in LoS-GP-UCB, and we consider one-dimensional RKHS functions.
The algorithms are evaluated on 100 target functions sampled from the pre-RKHS corresponding to a Matern-3/2 kernel, and for each function, we run the algorithms 1000 times.⁸ The results of this experiment are shown in Figure 7, where we again use the evaluation metric (12). Thick solid lines are the means over all functions and repetitions, and shaded areas correspond to one standard deviation from the mean (again over all functions and realizations). To avoid clutter, the means for each individual function are plotted only for LoS-GP-UCB (thin lines). It is clear from Figure 7 that the performance of LoS-GP-UCB is only slightly inferior to the original LoSBO algorithm, yet still superior to Real-β-SafeOpt. This outcome indicates that LoS-GP-UCB is not severely affected by the under-exploration problem described for a safe variant of GP-UCB in (Sui et al., 2015). We suspect that, similar to LoSBO, this is due to the over-optimism resulting from setting βt ≡ 2, corresponding to moderately aggressive exploration.

⁸The qualitative picture is already clear for 1000 repetitions, hence we choose to save computational resources and do not use 10000 repetitions as in Section 6.4.

Figure 7: Comparison of LoS-GP-UCB to Real-β-SafeOpt and LoSBO.

Let us turn to the evaluation of LoS-GP-UCB on several benchmark functions with moderate to high input dimensions. As test functions, we use the standard benchmarks Camelback (2d) and Hartmann (6d). Similar to (Kirschner et al., 2019a), we also use a Gaussian function f(x) = exp(−4∥x∥₂²) in ten dimensions (10d) as a benchmark. For the Camelback and Hartmann functions, we choose a random initial point in the safe set. For the Gaussian function, we choose a random initial safe set from the level set f(x0) = 0.4. We assume uniformly bounded noise with a noise level of 0.01. In this synthetic setting, the Lipschitz constant L is determined by evaluating the function on a fine grid.
As a model, we use a Squared Exponential kernel with the output variance set to 1 and the length scale set to 1/L. For the prior mean, we choose 0.5, as the function values are between 0 and 1. Finally, in all these settings, we compare LoS-GP-UCB to random search, and run both algorithms from the same initial safe set for 100 iterations, repeating this 100 times (for different random choices of the initial safe set).

## 8 Conclusion

In this work, we are concerned with practically relevant safety aspects of the important class of SafeOpt-type algorithms. We identified the use of heuristics to derive uncertainty bounds as a potential source of safety violations in the practical application of these algorithms. This prompted us to use recent, rigorous uncertainty bounds in SafeOpt, which allowed us to numerically investigate the safety behavior of this algorithm. We further identified the knowledge of an upper bound on the RKHS norm of the target function as a serious obstacle to the reliable real-world applicability of SafeOpt-type algorithms. To overcome this obstacle, we proposed LoSBO, a BO algorithm class relying only on a Lipschitz bound and a noise bound to guarantee safety. Numerical experiments demonstrated that this algorithm is not only safe, but also exhibits superior performance. However, analogously to related algorithms, LoSBO is only suitable for very low-dimensional problems. We therefore also proposed an additional variant (LoS-GP-UCB) suitable for moderately high-dimensional problems. The two key assumptions to ensure safety are a known Lipschitz bound on the target function and a known bound on the additive measurement noise, which have clear interpretations, are natural in many applications, and are established in domains like control engineering. However, the applicability of the proposed algorithms clearly hinges on these assumptions, and they have to be judged on a case-by-case basis by practitioners.
Figure 8: Evaluation of LoS-GP-UCB (pink) on three benchmark functions and comparison to random search (blue). Thick lines are the means over all runs, thin lines are individual runs.

Ongoing work is concerned with implementing the presented algorithms for safe learning in an automotive context, as well as providing exploration guarantees for LoSBO. We expect that the approach outlined in Section 6 applies to most SafeOpt variants. The derivation, implementation, and evaluation of the corresponding LoSBO-type algorithms for these variants is thus another interesting direction for future work. Our findings, in combination with evidence in the literature that SafeOpt and related algorithms have been successfully used in various applications, indicate that this algorithm class does not ensure hard safety constraints (in practice), but instead yields "cautious" behavior. The precise connection to conservative bandits and existing cautious BO approaches is another interesting topic for further investigations.

## Broader Impact Statement

This work is concerned with safety issues of a popular BO algorithm that has already found numerous applications in real-world scenarios. We thereby contribute to the improved safety and reliability of machine learning methods for real-world applications. Furthermore, we expect no adverse societal impact from our work.

## References

Yasin Abbasi-Yadkori. Online learning for linearly parametrized control problems. 2013.

Sanae Amani, Mahnoosh Alizadeh, and Christos Thrampoulidis. Linear stochastic bandits under safety constraints. *Advances in Neural Information Processing Systems*, 32, 2019.

Marc Atteia. *Hilbertian kernels and spline functions*. Elsevier, 1992.

Maximilian Balandat, Brian Karrer, Daniel Jiang, Samuel Daulton, Ben Letham, Andrew G Wilson, and Eytan Bakshy. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization.
*Advances in Neural Information Processing Systems*, 33:21524–21538, 2020.

Dominik Baumann, Alonso Marco, Matteo Turchetta, and Sebastian Trimpe. GoSafe: Globally optimal safe robot learning. In *2021 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 4452–4458. IEEE, 2021.

Gleb Beliakov. Interpolation of Lipschitz functions. *Journal of Computational and Applied Mathematics*, 196(1):20–44, 2006.

Felix Berkenkamp, Andreas Krause, and Angela P Schoellig. Bayesian optimization with safety constraints: Safe and automatic parameter tuning in robotics. *arXiv preprint arXiv:1602.04450*, 2016a.

Felix Berkenkamp, Angela P Schoellig, and Andreas Krause. Safe controller optimization for quadrotors with Gaussian processes. In *2016 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 491–496. IEEE, 2016b.

Felix Berkenkamp, Andreas Krause, and Angela P Schoellig. Bayesian optimization with safety constraints: Safe and automatic parameter tuning in robotics. *Machine Learning*, 112(10):3713–3747, 2023.

Alain Berlinet and Christine Thomas-Agnan. *Reproducing kernel Hilbert spaces in probability and statistics*. Springer Science & Business Media, 2004.

Lukas Brunke, Melissa Greeff, Adam W Hall, Zhaocong Yuan, Siqi Zhou, Jacopo Panerati, and Angela P Schoellig. Safe learning in robotics: From learning-based control to safe reinforcement learning. *Annual Review of Control, Robotics, and Autonomous Systems*, 5:411–444, 2022.

Jan-Peter Calliess. *Conservative decision-making and inference in uncertain dynamical systems*. PhD thesis, Oxford University, UK, 2014.

Jan-Peter Calliess, Stephen J Roberts, Carl Edward Rasmussen, and Jan Maciejowski. Lazily adapted constant kinky inference for nonparametric regression and model-reference adaptive control. *Automatica*, 122:109216, 2020.

Sayak Ray Chowdhury and Aditya Gopalan. On kernelized multi-armed bandits. In *34th International Conference on Machine Learning, ICML 2017*, volume 2, pp. 1397–1422, 2017.
Duane A Cooper. Learning Lipschitz functions. *International Journal of Computer Mathematics*, 59(1-2):15–26, 1995.

Sébastien Da Veiga, Fabrice Gamboa, Bertrand Iooss, and Clémentine Prieur. *Basics and trends in sensitivity analysis: Theory and practice in R*. SIAM, 2021.

Rikky RPR Duivenvoorden, Felix Berkenkamp, Nicolas Carion, Andreas Krause, and Angela P Schoellig. Constrained Bayesian optimization with particle swarms for safe adaptive controller tuning. *IFAC-PapersOnLine*, 50(1):11800–11807, 2017.

Christian Fiedler. Lipschitz and Hölder continuity in reproducing kernel Hilbert spaces. *arXiv preprint arXiv:2310.18078*, 2023.

Christian Fiedler, Carsten W. Scherer, and Sebastian Trimpe. Practical and rigorous uncertainty bounds for Gaussian process regression. *Proceedings of the AAAI Conference on Artificial Intelligence*, 35, 2021a.

Christian Fiedler, Carsten W. Scherer, and Sebastian Trimpe. Learning-enhanced robust controller synthesis. In *60th IEEE Conference on Decision and Control (CDC)*, 2021b.

Christian Fiedler, Carsten W Scherer, and Sebastian Trimpe. Learning functions and uncertainty sets using geometrically constrained kernel regression. In *2022 IEEE 61st Conference on Decision and Control (CDC)*, pp. 2141–2146. IEEE, 2022.

Christian Fiedler, Michael Herty, and Sebastian Trimpe. On kernel-based statistical learning in the mean field limit. In *Advances in Neural Information Processing Systems*, volume 36, 2024.

Simon Fischer and Ingo Steinwart. Sobolev norm learning rates for regularized least-squares algorithms. *The Journal of Machine Learning Research*, 21(1):8464–8501, 2020.

Lukas P Fröhlich, Melanie N Zeilinger, and Edgar D Klenske. Cautious Bayesian optimization for efficient and scalable policy search. In *Learning for Dynamics and Control*, pp. 227–240. PMLR, 2021.

Roman Garnett. *Bayesian optimization*. Cambridge University Press, 2023.

Pierre Hansen, Brigitte Jaumard, and Shi-Hui Lu. Global optimization of univariate Lipschitz functions: I.
Survey and properties. *Mathematical Programming*, 55(1-3):251–272, 1992.

Trevor Hastie, Robert Tibshirani, and Jerome H Friedman. *The elements of statistical learning: data mining, inference, and prediction*, volume 2. Springer, 2009.

Mohamed K. Helwa, Adam Heins, and Angela P. Schoellig. Provably robust learning-based approach for high-accuracy tracking control of Lagrangian systems. *IEEE Robotics and Automation Letters*, 4(2):1587–1594, 2019. doi: 10.1109/LRA.2019.2896728.

José Miguel Hernández-Lobato, Michael A Gelbart, Ryan P Adams, Matthew W Hoffman, Zoubin Ghahramani, et al. A general framework for constrained Bayesian optimization using information-based search. *Journal of Machine Learning Research*, 17(160):1–53, 2016.

Lukas Hewing, Kim P Wabersich, Marcel Menner, and Melanie N Zeilinger. Learning-based model predictive control: Toward safe learning in control. *Annual Review of Control, Robotics, and Autonomous Systems*, 3:269–296, 2020.

Julien Walden Huang, Stephen J Roberts, and Jan-Peter Calliess. On the sample complexity of Lipschitz constant estimation. *Transactions on Machine Learning Research*, 2023.

Youngmin Kim, Richard Allmendinger, and Manuel López-Ibáñez. Safe learning and optimization techniques: Towards a survey of the state of the art. In *International Workshop on the Foundations of Trustworthy AI Integrating Learning, Optimization and Reasoning*, pp. 123–139. Springer, 2020.

Johannes Kirschner, Mojmir Mutny, Nicole Hiller, Rasmus Ischebeck, and Andreas Krause. Adaptive and safe Bayesian optimization in high dimensions via one-dimensional subspaces. In *International Conference on Machine Learning*, pp. 3429–3438. PMLR, 2019a.

Johannes Kirschner, Manuel Nonnenmacher, Mojmír Mutný, Andreas Krause, Nicole Hiller, Rasmus Ischebeck, and Andreas Adelmann. Bayesian optimisation for fast and safe parameter tuning of SwissFEL.
In *FEL2019, Proceedings of the 39th International Free-Electron Laser Conference*, pp. 707–710. JACoW Publishing, 2019b. Robert Kleinberg, Aleksandrs Slivkins, and Eli Upfal. Bandits and experts in metric spaces. *Journal of the* ACM (JACM), 66(4):1–77, 2019. Torsten Koller, Felix Berkenkamp, Matteo Turchetta, and Andreas Krause. Learning-based model predictive control for safe exploration. *Proceedings of the IEEE Conference on Decision and Control*, 2018-Decem: 6059–6066, 2019. ISSN 07431546. doi: 10.1109/CDC.2018.8619572. Hayri Korezlioglu. Reproducing kernels in separable Hilbert spaces. *Pacific Journal of Mathematics*, 25(2): 305–314, 1968. Tor Lattimore and Csaba Szepesvári. *Bandit algorithms*. Cambridge University Press, 2020. Armin Lederer, Jonas Umlauft, and Sandra Hirche. Uniform error bounds for Gaussian process regression with application to safe control. *Advances in Neural Information Processing Systems*, 32, 2019. Milan Lukić and Jay Beder. Stochastic processes with sample paths in reproducing kernel Hilbert spaces. Transactions of the American Mathematical Society, 353(10):3945–3969, 2001. Cédric Malherbe and Nicolas Vayatis. Global optimization of Lipschitz functions. In *International Conference* on Machine Learning, pp. 2314–2323. PMLR, 2017. Alonso Marco, Dominik Baumann, Majid Khadiv, Philipp Hennig, Ludovic Righetti, and Sebastian Trimpe. Robot learning with crash constraints. *IEEE Robotics and Automation Letters*, 6(2):1439–1446, 2021. Mario Milanese and Carlo Novara. Set membership identification of nonlinear systems. *Automatica*, 40(6): 957–975, 2004. Carlo Novara, Lorenzo Fagiano, and Mario Milanese. Direct feedback control design for nonlinear systems. Automatica, 49(4):849–860, 2013. Baver Okutmuştur. *Reproducing kernel Hilbert spaces*. PhD thesis, Bilkent Universitesi (Turkey), 2005. Vern I Paulsen and Mrinal Raghupathi. *An introduction to the theory of reproducing kernel Hilbert spaces*, volume 152. 
Cambridge university press, 2016. János D Pintér. *Global optimization in action: continuous and Lipschitz optimization. Algorithms, implementations and applications*, volume 6. Springer Science & Business Media, 1995. C E Rasmussen and C K I Williams. *Gaussian Processes for Machine Learning*. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, USA, 2006. Parisa Sarikhani, Benjamin Ferleger, Jeffrey Herron, Babak Mahmoudi, and Svjetlana Miocinovic. Automated deep brain stimulation programing with safety constraints for tremor suppression. *Brain* Stimulation, 14(6):1699–1700, 2021. ISSN 1935861X. doi: 10.1016/j.brs.2021.10.357. URL https: //doi.org/10.1016/j.brs.2021.10.357. Yaroslav D Sergeyev, Antonio Candelieri, Dmitri E Kvasov, and Riccardo Perego. Safe global optimization of expensive noisy black-box functions in the δ-Lipschitz framework. *Soft Computing*, 24(23):17715–17735, 2020. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. Taking the human out of the loop: A review of Bayesian optimization. *Proceedings of the IEEE*, 104(1):148–175, 2015. Aleksandrs Slivkins et al. Introduction to multi-armed bandits. Foundations and Trends® *in Machine* Learning, 12(1-2):1–286, 2019. Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. *ICML 2010 - Proceedings, 27th International* Conference on Machine Learning, pp. 1015–1022, 2010. doi: 10.1109/TIT.2011.2182033. Ingo Steinwart and Andreas Christmann. *Support vector machines*. Springer Science & Business Media, 2008. Ingo Steinwart and Clint Scovel. Mercer's theorem on general domains: On the interaction between measures, kernels, and RKHSs. *Constructive Approximation*, 35:363–417, 2012. Ingo Steinwart, Don R Hush, Clint Scovel, et al. Optimal rates for regularized least squares regression. In COLT, pp. 79–93, 2009. RG Strongin. 
On the convergence of an algorithm for finding a global extremum. *Eng. Cybernetics*, 11: 549–555, 1973. Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with Gaussian processes. In *International conference on machine learning*, pp. 997–1005. PMLR, 2015. Yanan Sui, Vincent Zhuang, Joel Burdick, and Yisong Yue. Stagewise safe Bayesian optimization with Gaussian processes. In *International conference on machine learning*, pp. 4781–4789. PMLR, 2018. Bhavya Sukhija, Matteo Turchetta, David Lindner, Andreas Krause, Sebastian Trimpe, and Dominik Baumann. Scalable Safe Exploration for Global Optimization of Dynamical Systems. 2022. URL http://arxiv.org/abs/2201.09562. Bhavya Sukhija, Matteo Turchetta, David Lindner, Andreas Krause, Sebastian Trimpe, and Dominik Baumann. Gosafeopt: Scalable safe exploration for global optimization of dynamical systems. *Artificial* Intelligence, 320:103922, 2023. Abdullah Tokmak, Christian Fiedler, Melanie N Zeilinger, Sebastian Trimpe, and Johannes Köhler. Automatic nonlinear MPC approximation with closed-loop guarantees. *arXiv preprint arXiv:2312.10199*, 2023. Alexandre Tsybakov. *Introduction to Nonparametric Estimation*. Springer New York, NY, 2009. Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. Safe exploration in finite Markov decision processes with Gaussian processes. *Advances in Neural Information Processing Systems*, (Nips):4312– 4320, 2016. ISSN 10495258. Holger Wendland. *Scattered data approximation*, volume 17. Cambridge university press, 2004. Justin Whitehouse, Aaditya Ramdas, and Steven Z Wu. On the sublinear regret of GP-UCB. *Advances in* Neural Information Processing Systems, 36, 2024. Yifan Wu, Roshan Shariff, Tor Lattimore, and Csaba Szepesvári. Conservative bandits. In International Conference on Machine Learning, pp. 1254–1262. PMLR, 2016.
Review 1: Summary: By deeply examining the practical implementation of a wide range of methods, specifically SafeOpt-type algorithms, the author questions three important aspects of these algorithms: 1) the use of heuristics instead of theoretically sound uncertainty bounds, 2) the assumption of an RKHS norm bound, and 3) the discrepancy between discrete input space in theory and continuous domain in moderately high dimensions in application. Overall, the motivations for addressing these three points are clear, and the proposed strategies for tackling these problems are novel. Strengths and Weaknesses: Strengths: 1) The theoretical results are solid, and the paper is well-written. 2) The motivations are clearly articulated, with some supported by observations from experiments. Summaries in the main text provide valuable insights. 3) The paper addresses a key question regarding safety in safe Bayesian optimization. Suggestions: Overall, I think this work is good. Here are some suggestions (not necessary) that might be helpful. 1) I realize that some experiments in Section 5.1 may be sufficient to demonstrate that using heuristics in SafeOpt can be highly problematic. However, a theoretical analysis here would be more appreciated. Additionally, examining the underlying theory would provide more insight into why these heuristics are problematic and could lead to more interesting strategic approaches. I am trying to understand this part intuitively but find it difficult. Could you provide an interpretation of why such heuristics are problematic in an intuitive way? 2) Regarding the experimental design, I am curious about the performance of each strategy for each question. For example, how does the proposed solution compare to the heuristic solution on benchmarks, similar to the comparison of RealBetaSafeOpt versus LoSBO in Section 6. What about the remaining two solutions for the two questions, O(2) and O(3), in Section 3? 
Requested Changes: None Broader Impact Concerns: None ================================================== Review 2: Summary: This paper studies "safe" Bayesian optimization (BO), which I understand is maximizing a function $f(x)$ while avoiding the evaluation of any inputs $x$ where $f(x)<h$ for some constant $h\in\mathbb{R}$. The paper describes some theoretical and practical limitations of existing algorithms for safe BO, then proposes a new algorithm called LoSBO (Lipschitz-only safe BO) which uses a Lipschitz continuity assumption to guarantee the "safety" behavior. Strengths and Weaknesses: Overall this paper has some good critiques of previous safe BO methods and proposes a reasonable algorithm. That being said, after a lot of criticism of previous papers for unrealistic assumptions of bounding constants/etc, the authors provide a pretty flimsy justification for why we would know the Lipschitz constant of the function being optimized. Furthermore, for what it provides, the paper is too long in my opinion- it feels like a good 6-page paper trapped in the body of a 24 page paper. I think the authors can cut a lot out. More on these points below. ## Strength 1: criticisms I like the analysis of the following criticisms: - Previous works not setting $\beta_t$ to explicitly satisfy a uniform high-probability bound for containing the true function values, instead just using a heuristic - Assuming a known upper bound to the RKHS norm (this is very hard to calculate) ## Weakness 1: known Lipschitz constant The authors correctly mention that there are many established methods for estimating Lipschitz constants from partial observations of a function. The authors vaguely argue that it is easier to get a good estimate of a Lipschitz constant compared to an RKHS norm, which I agree with. 
However, after repeatedly emphasizing how it is critical to have a super-high probability safety _guarantee_ and avoid hacky methods, the authors estimate the Lipschitz constant using a grid of evaluations, then multiply by 1.1 "just to be safe." This is very hypocritical: 1.1 is clearly another "heuristic constant" and this method could easily fail for suitably varying functions- think something like the [Weierstrass function](https://en.wikipedia.org/wiki/Weierstrass_function), which is "smooth" at a coarse level, but has low-level fluctuations. Of course, Weierstrass-like functions are clearly pathological, and would not be anticipated in practice. Yet, if the authors assume some sort of general smoothness, this feels like a qualitatively similar assumption to $f$ belonging in a smooth RKHS, no? ## Weakness 2: writing This paper is, in my opinion, way too long for what it provides. It is clear that the authors have done a lot of reading and I appreciated thorough citations, but a lot of the discussion seemed tangential to the main points of the paper. Here are my thoughts section-by-section: 1. Good intro, but in the future if the paper is made shorter, maybe remove the huge "outline" paragraph (since it would be less necessary) 2. Intro to GPs makes sense. The introduction to RKHSs seems a bit too formal and long, given that RKHSs are not really used anywhere in the paper (except for criticizing not knowing the RKHS norm). Section 2.3 seems very long and also not necessary: in the end all that is used is the bound in equation (7). Also, the sentence "_where the probability is with respect to the data-generating process, rather than to the target function f_" was confusing to me: are we not assuming that data is generated using $f$? Or are we assuming some fixed distribution over input values in addition to $f$? 3. This seemed like it should also belong in the "background", probably at the beginning, since it is the definition of the problem. 
The list O1-O3 seem duplicated with the intro and sections 5-6. The reader is already expecting this content, why list it again? 4. 4.1 has a lot of duplicate content with the intro. I think the reader gets the idea of why you would want to avoid "bad" function evaluations, no need to repeat this over and over again. 4.2 seems like a bunch of sentences that should just be elsewhere in the paper. The parts justifying the Lipschitz constant should probably be near Assumption 1. The parts mentioning sections 5-6 should probably just come after these sections. 5. This section basically says "you can't just set algorithm constants arbitrarily and expect safe behavior". I think this could be said much more succinctly. I don't think the experimental demonstration provided much value (of course heuristic values will not always work). 6. 6.1 seemed like it belongs in section 5.2, no? What does it have to do with Lipschitz constants? In 6.2, the algorithm could probably be stated first and then analyzed (currently Proposition 1 comes even before the statement of the algorithm). Also, I think it could be explained more intuitively (it is essentially just finding the regions which are guaranteed safe by the Lipschitz constant) 7. This algorithm seems very similar to the one in section 6. Maybe they can be presented together? The experiments were also very long. Consider moving some of it to the experiments? 8. Conclusion without much of a discussion of limitations Requested Changes: The main change I would request is to add discussion contrasting the pros/cons of estimating RKHS norm bounds vs Lipschitz constant bounds. Also, acknowledging the failure mode of underexploring if the Lipschitz constant is high is also important to guide future users of the proposed algorithms. I also highly highly suggest re-writing the paper to be much shorter (ideally < 10 pages). 
To do this, you can look at my notes above and think about re-structuring the paper by merging semi-duplicated sections and moving things to appendices. Here are some additional notes: - No need to repeatedly say how important safety is: you can just mention it once in the intro. - You can use appendices to add content which could be helpful to the paper, but not important for the main narrative. Here are some things which could go into an appendix? - Experiments where existing BO algorithms are not safe if the parameters are not tuned correctly (it should not be surprising that this can happen; you can just mention your constructed example and refer readers to the appendix if they are interested in the details.) - Extended definitions of RKHSs - Extended literature review (where not super related to main point of paper) - Keep remarks short- e.g. remark 1 is not really a "remark" in my opinion... ([reading](https://math.stackexchange.com/questions/2097999/in-writing-mathematical-papers-what-qualifies-as-a-remark-and-not-just-some-d?rq=1)) - Clarify what input dimension was used for your experiments in sections 6-7 (this detail was not clear) Also, small typos: - You use ">>" instead of "\gg" to type $\gg$ - The notation "SD" with a hat looks awkward. You probably don't know the `\widehat{}` command. - in your citations, you cite "hernandez et al" when it should be "Hernández-Lobato et al" (note the accents and a hyphenated last name) Broader Impact Concerns: None ================================================== Review 3: Summary: The authors identify and analyze three problems with current Safe Bayesian Optimization (SafeOpt) algorithms, propose three novel SafeOpt algorithms - one to address/solve each problem. First, the authors demonstrate that using typical Safe BO algorithms often leads to safety guarantees being invalid in practice. 
To solve this problem, the authors propose a new algorithm (the Real-β-SafeOpt algorithm) which only requires relatively small changes to the standard SafeOpt algorithms to recover the desired theoretical safety guarantees. Secondly, the authors point out that many SafeOpt algorithms are difficult to apply in real-world settings since they make the assumption that the target function has an upper bound on the reproducing kernel Hilbert space (RKHS) norm. To solve this problem, the authors introduce an additional new SafeBO algorithm (the Lipschitz-only Safe BO algorithm) which maintains safety guarantees without the need for the limiting RKHS norm assumption. In addition to removing this assumption, the authors demonstrate that this algorithm outperforms other state-of-the-art SafeBO algorithms on several test functions. Third, the authors point out that SafeOpt and other similar algorithms are difficult to apply to higher-dimensional problems since they assume a discrete search space. To fix this, the authors propose a third algorithm (Lipschitz-only Safe GP-UCB) which makes Safe BO applicable to higher-dimensional functions while still having the same safety guarantees. Strengths and Weaknesses: Strengths: 1. The paper is well motivated since it is clear to me that each problem the authors identify with current SafeOpt algorithms is important to solve. 2. Very good use of diagrams to illustrate points/algorithms. 3. Algorithm blocks make it very clear what each algorithm is doing and improve readability. 4. The paper is well written and easy to follow. Weaknesses: 1. See requested changes below. Requested Changes: None of the changes listed below are critical to securing my recommendation for acceptance of the paper, but I do think they could strengthen the work. 1. The authors claim that the significance of LoS-GP-UCB is that it allows SafeBO algorithms to be applied to higher-dimensional functions.
However, in the experimental results section, the highest-dimensional function that the authors apply LoS-GP-UCB to is 10-dimensional. It is still exciting and significant that LoS-GP-UCB is able to scale SafeBO up to 10 dimensions! But it would be very interesting to see what the limit is on the dimensionality where LoS-GP-UCB can be applied successfully. Could it be applied to, say, a 20-dimensional function? How about 100-dimensional functions? Additional experiments on higher-dimensional functions could show what the upper limit is on the dimensionality where LoS-GP-UCB is no longer very useful. I think this would be informative for readers who may want to use this algorithm in practice, but again, this addition is not critical to securing my recommendation for acceptance. 2. While the text is clear and easy to follow, some of the text could likely be made more concise (i.e. some things are said in several sentences that could be condensed down to one sentence). Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Accept with minor revision Comment: All reviewers agreed that the paper should be accepted in TMLR. See comments above regarding claims. I do agree with Reviewer QF7s that this paper is quite a bit longer than it needs to be. I would strongly encourage the authors to shorten the paper by making the writing more concise throughout. The usual length of a TMLR paper is around 12 pages. ==================================================
# Deep Policies For Online Bipartite Matching: A Reinforcement Learning Approach

Mohammad Ali Alomrani *mohammad.alomrani@mail.utoronto.ca*
Department of Electrical & Computer Engineering, University of Toronto

Reza Moravej *mreza.moravej@mail.utoronto.ca*
Department of Mechanical & Industrial Engineering, University of Toronto

Elias B. Khalil *khalil@mie.utoronto.ca*
Department of Mechanical & Industrial Engineering, SCALE AI Research Chair in Data-Driven Algorithms for Modern Supply Chains, University of Toronto

Reviewed on OpenReview: *https://openreview.net/forum?id=mbwm7NdkpO*

## Abstract

The challenge in the widely applicable online matching problem lies in making irrevocable assignments while there is uncertainty about future inputs. Most theoretically-grounded policies are myopic or greedy in nature. In real-world applications where the matching process is repeated on a regular basis, the underlying data distribution can be leveraged for better decision-making. We present an end-to-end Reinforcement Learning framework for deriving better matching policies based on trial-and-error on historical data. We devise a set of neural network architectures, design feature representations, and empirically evaluate them across two online matching problems: Edge-Weighted Online Bipartite Matching and Online Submodular Bipartite Matching. We show that most of the learning approaches perform consistently better than classical baseline algorithms on four synthetic and real-world datasets, improving the matching quality by 3–10% on average. Our code is publicly available at https://github.com/lyeskhalil/CORL.

## 1 Introduction

Originally introduced by Karp et al. (1990), the Online Bipartite Matching (OBM) problem is a simple formulation of sequential resource allocation.
A fixed set $U$ of known entities (e.g., ads, tasks, servers) are to be dynamically assigned to at most one of a discrete stream $V$ of (a priori unknown) entities (e.g., ad slots, job candidates, computing jobs) upon their arrival, so as to maximize the size of the final matching. Matching decisions are irrevocable, and leaving an arriving node unmatched is always allowed. Despite its simplicity, finding better algorithms for OBM and its variants remains an active area of research.

The uncertainty about future inputs makes online problems inherently challenging. While practical exact methods (e.g., using integer programming formulations and solvers) exist for many offline combinatorial problems, the restriction to irrevocable and instant decision-making makes the use of such algorithmic tools impractical. Existing algorithms for online matching are thus typically myopic and greedy in nature (Mehta, 2013). In practice, however, the underlying (bipartite) graph instances may come from the same unknown distribution (Borodin et al., 2020). In many applications, a sufficiently large collection of samples from the data can represent the often implicit statistical properties of the entire underlying data-generating distribution. It is often the case that corporations, for example, have access to a wealth of information that is represented as a large graph instance capturing customer behaviour, job arrivals, etc. Thus, it is sensible for an algorithm to use historical data to derive statistical information about online inputs in order to perform better on future instances. However, the majority of non-myopic hand-designed algorithms depend on estimating the arrival distribution of the incoming nodes (Borodin et al., 2020; Mehta, 2013). The downside of this approach is that it ignores crucial information such as graph sparsity, the ratio of incoming nodes to fixed nodes, the existence of community structures, the degree distribution, and the occurrence of structural motifs.
Ideally, a matching policy should be able to leverage such information to refine its decisions based on the observed history. In this work, we formulate online matching as a Markov Decision Process (MDP) for which a neural network is trained using Reinforcement Learning (RL) on past graph instances to make near-optimal matchings on unseen test instances. We design six unique models, engineer a set of generic features, and test their performance on two variations of OBM across two synthetic datasets and two real-world datasets. Our contributions can be summarized as follows:

Automating matching policy design: Motivated by practical applications, other variants of OBM have been introduced with additional constraints (such as fairness constraints) or more complex objective functions than just matching size. Our method reduces the reliance on human handcrafting of algorithms for each individual variant of OBM since the RL framework presented herein can flexibly model them; this will be demonstrated for the novel Online Submodular Bipartite Matching problem (Dickerson et al., 2019).

Deriving tailored policies using past graph instances: We show that our method is capable of taking advantage of past instances to learn a near-optimal policy that is tailored to the problem instance. Unlike "pen-and-paper" algorithms, our use of historical information is not limited to estimating the arrival distribution of incoming nodes. Rather, our method takes advantage of additional statistics such as the existing (partial) matching, graph sparsity, the $|U|$-to-$|V|$ ratio, and the graph structure. Taking a more comprehensive set of statistics into account allows for fine-grained decision-making. For example, the RL agent can learn to skip matching a node strategically based on the observed statistical properties of the current graph. Our results on synthetic and real-world datasets demonstrate this.
Leveraging Node Attributes: In many variants of OBM, nodes have identities, e.g., the nodes in $V$ could correspond to social media users whose demographic information could be used to understand their preferences. Existing algorithms are limited in considering such node features that could be leveraged to obtain better solutions. For instance, the RL agent may learn that connecting a node $v$ with a particular set of attributes to a specific node in $U$ would yield high returns. The proposed framework can naturally account for such attributes, going beyond simple greedy-like policies. We will show that accounting for node attributes yields improved results on a real-world dataset for Online Submodular Bipartite Matching.

## 2 Problem Setting

In a bipartite graph $G = (U, V, E)$, $U$ and $V$ are disjoint sets of nodes and $E$ is the set of edges connecting a node in $U$ to one in $V$. In the online bipartite matching problem, the vertex set $U$ is fixed and at each timestep a new node $v \in V$ and its edges $\{(u, v) : u \in U\}$ arrive. The algorithm must make an *instantaneous* and *irrevocable* decision to match $v$ to one of its neighbors or not match at all. Nodes in $U$ can be matched to at most one node in $V$. The time horizon $T = |V|$ is finite and assumed to be known in advance. The simplest generalization of OBM is the edge-weighted OBM (E-OBM), where a non-negative weight is associated with each edge. Other well-known variants include Adwords, Display Ads, and Online Submodular Welfare Maximization (Mehta, 2013). We will focus our experiments on E-OBM and Online Submodular Bipartite Matching (OSBM), a new variation of the problem introduced by Dickerson et al. (2019); together, the two problems span a wide range in problem complexity. The general framework can be extended with little effort to address other online matching problems with different constraints and objectives; see Appendix F for a discussion and results on Adwords.
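The online decision process described above can be made concrete with a toy greedy baseline for the edge-weighted case. This is a hypothetical sketch with made-up weights, not the paper's released code; it only illustrates why irrevocable, myopic decisions can be costly:

```python
import numpy as np

def greedy_eobm(weights):
    """Greedy baseline for edge-weighted OBM.

    weights: (T, |U|) array where weights[t, u] is the weight of edge
    (u, v_t) and 0 encodes "no edge". Returns the total matched weight.
    """
    matched = np.zeros(weights.shape[1], dtype=bool)  # offline nodes already taken
    total = 0.0
    for t in range(weights.shape[0]):                 # online nodes arrive one by one
        w = np.where(matched, 0.0, weights[t])        # mask unavailable offline nodes
        u = int(np.argmax(w))
        if w[u] > 0:                                  # skipping is always allowed
            matched[u] = True
            total += w[u]
    return total

# Toy instance: greedy grabs the heaviest edge first and collects only 9,
# while the offline optimum (v0 -> u1, v1 -> u0) achieves 8 + 9 = 17.
W = np.array([[9.0, 8.0],
              [9.0, 0.0]])
print(greedy_eobm(W))  # 9.0
```

The gap between the greedy value and the offline optimum on such instances is exactly what a learned, non-myopic policy aims to close.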
## 2.1 Edge-Weighted OBM (E-OBM)

Each edge $e \in E$ has a predefined weight $w_e \in \mathbb{R}^+$ and the objective is to select a subset $S$ of the incoming edges that maximizes $\sum_{e \in S} w_e$. Note that in the offline setting, where the whole graph is available, this problem can be solved in polynomial time using existing algorithms (Kuhn, 1955). However, the online setting involves reasoning under uncertainty, making the design of optimal online algorithms non-trivial.

## 2.2 Online Submodular Bipartite Matching (OSBM)

We first define some relevant concepts:

Submodular function: A function $f: 2^U \to \mathbb{R}^+$ with $f(\emptyset) = 0$ is *submodular* iff for all $S, T \subseteq U$: $f(S \cup T) + f(S \cap T) \leq f(S) + f(T)$. Some common examples of submodular functions include the coverage function, piecewise linear functions, and budget-additive functions. In our experiments, we will focus on the weighted coverage function following Dickerson et al. (2019):

Coverage function: Given a universe of elements $U$ and a collection of $g$ subsets $A_1, A_2, \ldots, A_g \subseteq U$, the function $f(M) = |\cup_{i \in M} A_i|$ is called the coverage function for $M \subseteq \{1, \ldots, g\}$. Given a non-negative, monotone weight function $w: 2^U \to \mathbb{R}^+$, the weighted coverage function is defined analogously as $f(M) = w(\cup_{i \in M} A_i)$ and is known to be submodular.

In this setting, each edge $e \in E$ incident to arriving node $v_t$ has the weight $f(M_t \cup \{e\}) - f(M_t)$, where $M_t$ is the matching at timestep $t$. The objective in OSBM is to find $M$ such that $f(M) = \sum_{e \in M} w_e$ is maximized; $f$ is a submodular function.

An illustrative application of the OSBM problem, identified by Dickerson et al. (2019), can be found in movie recommendation systems. There, the goal is to match incoming users to a set of movies that are both relevant and diverse (genre-wise). A user can log in to the platform multiple times and may be recommended (matched to) a movie or left unmatched.
Since we have historical information on each user's average ratings for each genre, we can quantify diversity as the weighted coverage function over the set of genres that were matched to the user. The goal is to maximize the sum of the weighted coverage functions for all users. More concretely, if we let $U$ be the universe of genres, then any movie $i$ belongs to a subset of genres $A_i$. Let $L$ be the set of all users, $M_l$ be the set of movies matched to user $l$, and $f_l(M_l) = w(\cup_{i \in M_l} A_i)$ be the weighted coverage function defined as the sum of the weights of all genres matched to the user, where the weight of a genre $k$ is the average rating given by user $l$ to movies of genre $k$. Each user's weighted coverage function is submodular. The objective of OSBM is to maximize the (submodular) sum of these user functions: $f(M) = \sum_{l \in L} f_l(M_l)$.

## 2.3 Arrival Order

Online problems are studied under different input models that allow the algorithm to access varying amounts of information about the arrival distribution of the vertices in $V$. The *adversarial* order setting is often used to study the worst-case performance of an algorithm, positing that an imaginary adversary can generate the worst possible graph and input order to make the algorithm perform poorly. More optimistic is the *known i.i.d. distribution (KIID)* setting, where the algorithm knows $U$ as well as a distribution $D$ on the possible *types* of vertices in $V$. Each arriving vertex $v$ belongs to one type, and vertices of a given type have the same neighbours in $U$. This assumption, i.e., that the arrival distribution $D$ is given, is too optimistic for complex real-world applications. In this work, we study the *unknown i.i.d. distribution (UIID)* setting, which lies between the *adversarial* and the *KIID* settings in terms of how much information is given about the arrival distribution (Karande et al., 2011).
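As a concrete illustration of the OSBM objective defined in Section 2.2, the weighted coverage function and the marginal edge weights it induces can be sketched in a few lines. The movies, genre sets, and ratings below are made-up toy values for one user, not data from the experiments:

```python
def weighted_coverage(movies, genres_of, genre_weight):
    """f(M) = w(union of A_i for i in M): total weight of covered genres."""
    covered = set()
    for m in movies:
        covered |= genres_of[m]
    return sum(genre_weight[g] for g in covered)

# One user: each movie belongs to a set of genres; a genre's weight is the
# user's average rating for that genre.
genres_of = {"A": {"drama", "crime"}, "B": {"drama"}, "C": {"comedy"}}
genre_weight = {"drama": 4.0, "crime": 3.5, "comedy": 2.0}

M = {"A"}  # current matching for this user: f(M) = 4.0 + 3.5 = 7.5
# Marginal weight of a new edge is f(M + {e}) - f(M):
print(weighted_coverage(M | {"B"}, genres_of, genre_weight) - 7.5)  # 0.0 (drama already covered)
print(weighted_coverage(M | {"C"}, genres_of, genre_weight) - 7.5)  # 2.0 (comedy is new)
```

The diminishing marginal gains are exactly the submodularity property: matching movie B is worthless once A already covers drama, while C still adds a new genre.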
The *unknown i.i.d.* setting best captures real-world applications, where a base graph is provided from an existing data set, but an explicit arrival distribution $D$ is not accessible. For example, a database of past job-to-candidate or item-to-customer relationships can represent a base graph. It is thus safe to assume that the arriving graph will follow the same distribution. The arrival distribution is accessible only through sample instances drawn from $D$. More details on data generation are provided in Section 5.1 and Appendix C.

## 3 Related Work

Traditional Algorithms for OBM: Generally, the focus of algorithm design for OBM has been on worst-case approximation guarantees for "pen-and-paper" algorithms via competitive analysis, rather than average-case performance in a real-world application. We refer the reader to (Karande et al., 2011; Mehta, 2013) for a summary of the many results for OBM under various arrival models. On the empirical side, an extensive set of experiments by Borodin et al. (2020) showed that the naive greedy algorithm performs similarly to more sophisticated online algorithms on synthetic and real-world graphs in the KIID setting. Though the experiments were limited to OBM with $|U| = |V|$, they were conclusive in that (i) greedy is a strong baseline in practical domains, and (ii) having better proven lower bounds does not necessarily translate into better performance in practice.

The main challenge in online problems is decision-making in the face of uncertainty. Many traditional algorithms under the KIID setting aim to overcome this challenge by explicitly approximating the distribution over node types via a type graph. The algorithms observe past instances and estimate the frequency of certain types of online nodes, i.e., for each type $i$, the algorithm predicts a probability $p_i$ of a node of this type arriving. We refer the reader to Borodin et al. (2020) for a detailed explanation of algorithms under the KIID setting.
As noted earlier, the KIID setting is rather simplistic compared to the more realistic UIID setting that we tackle here. Other non-myopic algorithms have been proposed which do not rely on estimating the arrival distribution. For example, Awasthi & Sandholm (2009) solve stochastic kidney exchange, a generalization of OBM, by sampling a subset of future trajectories, solving the offline problem on each of them, and assigning a score to each action. The algorithm then selects the action that is the best overall. To the best of our knowledge, our presented method is the first that learns a custom policy from data based on both explicit and implicit patterns (such as graph sparsity, the graph structure, degree distribution, etc.). Learning in Combinatorial Optimization: There has recently been substantial progress in using RL for finding better heuristics for offline, NP-complete graph problems. Dai et al. (2017) presented an RL-based approach combined with graph embeddings to learn greedy heuristics for some graph problems. Barrett et al. (2020) take a similar approach but start with a non-empty solution set and allow the policy to explore by removing nodes/edges from the solution. Chen & Tian (2019) learn a secondary policy to pick a particular region of the current solution to modify and incrementally improve the solution. The work by Kool et al. (2019) uses an attention-based encoder-decoder approach to find high-quality solutions for TSP and other routing problems. We refer to the following surveys for a more comprehensive view of the state of this area (Mazyavkina et al., 2021; Bengio et al., 2021). Prior work on using predictive approaches for online problems has been fairly limited. Wang et al. (2019) overlook the real-time decision-making condition and use Q-learning for a batch of the arriving nodes. The matching process, however, is not done by an RL agent but using an offline matching algorithm.
Consequently, their method is not practical for OBM variants that are NP-hard in the offline setting (e.g., Adwords) and where instantaneous decision-making is paramount, e.g., in ad placement on search engines. The work by Kong et al. (2019) is one of the few to apply RL to online combinatorial optimization. Their work differs from ours in three main ways: (i) the question raised there is whether RL can discover algorithms which perform best on worst-case inputs. They show that the RL agent will eventually learn a policy which follows the "pen-and-paper" algorithm with the best worst-case guarantee. Our work, on the other hand, asks if RL can outperform hand-crafted algorithms on average; (ii) the MDP formulation introduced in (Kong et al., 2019), unlike ours, does not consider the entire past (nodes that have previously arrived and the existing matching), which can help an RL policy better reason about the future; (iii) our family of invariant neural network architectures applies to graphs of arbitrary sizes |V | and |U|. More details about the advantages of our method are provided in the next section. Zuzic et al. (2020) propose a GAN-like adversarial training approach to learn robust OBM algorithms. However, just like (Kong et al., 2019), they are more concerned with learning algorithms that are robust to hard distributions rather than real-world graphs, and do not utilize historical information accumulated in the previous steps within the same instance.

![4_image_0.png](4_image_0.png)

Figure 1: The MDP formulation of E-OBM. The agent is trained on different graph instances sampled from a distribution D. At each timestep t, the agent picks a node to match or skip. In this 3x3 graph, the optimal matching has weight 22; following the greedy policy would yield a matching of weight 7. The illustrated policy ends up with a matching of weight 18.
Online Algorithm Design via Learned Advice: A hybrid paradigm has been recently introduced where the predictive and competitive-analysis approaches are combined to tackle online problems. Such algorithms take advantage of the predictions made by a model to obtain an improved competitive ratio while still guaranteeing a worst-case bound when the predictions are inaccurate. Work in this area has resulted in improvements over traditional algorithms for the secretary, ski rental, and online matching problems (Antoniadis et al., 2020; Wang et al., 2020; Diakonikolas et al., 2021; Purohit et al., 2018). Unlike our approach, the model does not get to construct a solution. Rather, its output is used as advice to a secondary algorithm. Since competitive analysis is of concern in this line of work, the algorithm is not treated as a black box and must be explicitly handcrafted for each different online problem. On the other hand, we introduce a general end-to-end framework that can handle online matching problems with different objectives and constraints, albeit without theoretical performance guarantees.

## 4 Learning Deep Policies For Matching

We now formalize online matching as a Markov Decision Process (MDP). We then present a set of neural network architectures with different representational capabilities, numbers of parameters, and assumptions on the size of the graphs. An extensive set of features has been designed to facilitate the learning of high-performance policies. We conclude this section by mentioning the RL training algorithm we use as well as a supervised behavioral cloning baseline.

## 4.1 MDP Formulation

The online bipartite matching problem can be formulated in the RL setting as a Markov Decision Process as follows; see Fig. 1 for a high-level illustration. Each instance of the online matching problem is drawn uniformly at random from an unknown distribution D.
The following MDP captures the sequential decision-making task at hand:

- **State**: A state S is a set of selected edges (a matching) and the current (partial) bipartite graph G. A terminal state Sˆ is reached when the final node in V arrives. The length of an episode is T = |V |.
- **Action**: The agent has to pick an edge to match or skip. At each timestep t, a node vt arrives with its edges. The agent can choose to match vt to one of its neighbors in U or leave it unmatched. Therefore, |At|, the maximum number of possible actions at time t, is |Ngbr(vt)| + 1, where Ngbr(vt) is the set of U nodes with edges to vt. Note that there can exist problem-specific constraints on the action space, e.g., a fixed node can only be matched once in E-OBM. Unlike the majority of domains where RL is applied, the uncertainty is exogenous here. Thus, the transition is *deterministic* regardless of the action picked. That is, the (random) arrival of the next node is independent of the previous actions.
- **Reward function**: The reward r(s, a) is defined as the weight of the edge selected with action a. Hence, the cumulative reward R at the terminal state Sˆ represents the total weight of the final matching solution:

$$R=\sum_{e\in S}w_{e}.$$

- **Policy**: A solution (matching) is a subset of the edges in E, π = E¯ ⊂ E. A stochastic policy, parameterized by θ, outputs a solution π with probability

$$p_{\theta}(\pi|G)=\prod_{t=1}^{|V|}p_{\theta}(\pi_{t}|s_{t}),$$

where st represents the state at timestep t, G represents the full graph, and πt represents the action picked at timestep t in solution π.

## 4.2 Deep Learning Architectures

In this section, we propose a number of architectures that can be utilized to learn effective matching policies. Unless otherwise stated, the models are trained using RL.

Feed-Forward (ff): When node vt arrives, the ff policy will take as input a vector (w0, . . . , w|U|, m0, . . . , m|U|) ∈ R^{2(|U|+1)},¹ where wu is the weight of the edge from vt to fixed node u (with wu = 0 if vt is not a neighbor of u), and mu is a binary mask representing the availability of node u for matching. The policy will output a vector of probabilities of size |U| + 1, where the additional action represents skipping. ff is similar to the architecture presented in Kong et al. (2019).

Feed-Forward with history (ff-hist): This model is similar to ff but takes additional historical information about the current graph to better reason about future input. That is, ff-hist will take in a vector consisting of five concatenated vectors, (w, m, ht, gt, nt). The vectors w and m are the same as those in ff. The feature vectors *h, n, g* contain a range of node-level features such as the average weight seen so far per fixed node and solution-level features such as the maximum weight in the current solution; see Table 1 for details.

Invariant Feed-Forward (inv-ff): We present an invariant architecture, inspired by Andrychowicz et al. (2016), which processes each of the edges and their fixed nodes *independently* using the same (shared) feed-forward network; see Fig. 2 for an illustration. That is, inv-ff will take as input a 3-dimensional vector, (wu, su, wmean), where wmean is the mean of the edge weights incident to incoming node vt, and su is a binary flag set to 1 if u is the "skip" node. The output for each potential edge is a single number ou. The vector o is normalized using the softmax to output a vector of probabilities.

¹The extra input represents the skip node, which is not needed for ff and ff-hist, but we add it to make the input consistent across models.

![6_image_0.png](6_image_0.png)

Figure 2: Invariant (inv-ff) Architecture. A shared feed-forward neural network takes in node-specific features and outputs a single number for each node in U. The outputs are then fed into the softmax function to give a vector of probabilities. The red node represents skipping.
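To make the node-wise computation concrete, the following is a minimal numpy sketch of the inv-ff scoring scheme. The layer sizes and the toy parameter set are illustrative stand-ins, not the trained model.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def inv_ff_probs(weights, params):
    """Apply one shared feed-forward network to each fixed node u with
    input (w_u, s_u, w_mean); the scalar outputs o_u are normalized with
    a softmax. The last entry corresponds to the skip node."""
    W1, b1, W2, b2 = params
    w_mean = weights.mean()
    scores = []
    for u in range(len(weights) + 1):          # +1 for the skip node
        s_u = 1.0 if u == len(weights) else 0.0
        w_u = 0.0 if s_u == 1.0 else weights[u]
        h = np.maximum(W1 @ np.array([w_u, s_u, w_mean]) + b1, 0.0)  # shared ReLU layer
        scores.append(float(W2 @ h + b2))
    return softmax(np.array(scores))
```

Because the parameters are shared across nodes, the model size is independent of |U|, and permuting the fixed nodes simply permutes the output probabilities (permutation invariance).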
Invariant Feed-Forward with history (inv-ff-hist): An invariant model, like inv-ff, which utilizes historical information. It is important to note that inv-ff-hist will only look at historical features of one node at a time, in addition to solution-level features. Therefore, the *node-wise* input is (wu, mu, su, wmean, nt, gt,u, ht).

Supervised Feed-Forward with history (ff-supervised): To test the advantage of using RL methods, we train ff-hist in a supervised learning fashion. In other words, each incoming node is considered a data sample with a target (the optimal U node to match, in hindsight). During training, after all V nodes have arrived, we minimize the cross-entropy loss across all the samples. This setup is equivalent to behavior cloning (Ross & Bagnell, 2010), where expert demonstrations are divided into state-action pairs and treated as i.i.d. samples.

Graph Neural Network (gnn-hist): In this model, we employ the encoder-decoder architecture used in many combinatorial optimization problems; see (Cappart et al., 2021). At each timestep t, the graph encoder consumes the current graph and produces embeddings for all nodes. The decoder feed-forward neural network, which also takes *node-wise* inputs, will take in (wu, t/|V |, mu, su, pt, evt, eu, emean, es), where the last four inputs represent the embedding of the incoming node vt, the embedding of the fixed node u being considered, the mean embedding of all fixed nodes, and the mean solution embedding, respectively. Our graph encoder is an MPNN (Gilmer et al., 2017) with the weights as edge features. The mean solution embedding is defined as the mean of a learnable linear transformation of the concatenation of the embeddings of the vertices of the edges in the solution S:

$$e_{s}=\frac{1}{|S|}\sum_{(u,v)\in S}\Theta_{e}([e_{u};e_{v}]),\tag{1}$$

where ";" represents horizontal concatenation, Θe is a learnable parameter, and S is the set of all matchings made so far.
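As a concrete reading of Eq. (1), the following numpy sketch computes the mean solution embedding; the embedding table and the matrix standing in for Θe below are illustrative, not learned quantities.

```python
import numpy as np

def solution_embedding(solution_edges, node_emb, theta_e):
    """e_s: mean over matched edges (u, v) of a linear map theta_e
    applied to the concatenation [e_u; e_v] (Eq. 1)."""
    if not solution_edges:
        return np.zeros(theta_e.shape[0])
    rows = np.stack([np.concatenate([node_emb[u], node_emb[v]])
                     for u, v in solution_edges])
    return (rows @ theta_e.T).mean(axis=0)
```

With an empty solution, a zero vector of the output dimension is returned as a convention for the edge case at t = 0.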
The mean of the embeddings of all fixed nodes is calculated simply as:

$$e_{mean}=\frac{1}{|\bar{U}|}\sum_{u\in\bar{U}}e_{u},\tag{2}$$

where U¯ = U ∪ {uskip} and uskip represents the skip node, i.e., matching to this node means skipping. The graph encoder also takes in problem-specific node features if available; see Appendix B.2 for details. The output of the encoder is fed into a feed-forward network which outputs a distribution over available edges.

| Feature type | Description | Equation | Size |
|---|---|---|---|
| Graph-Level Features gt | Average weight per fixed node u up to time t | µw = (1/d_u^t) Σ_{(u,vi)∈E: vi∈V, 1≤i<t} w(u, vi) | \|U\| + 1 |
| | Variance of weights per fixed node u up to time t | σw = (1/d_u^t) Σ_{(u,vi)∈E: vi∈V, 1≤i<t} (w(u, vi) − µw)² | \|U\| + 1 |
| | Average degree per fixed node u up to time t | (1/t) \|{(u, vi) ∈ E : i ≤ t}\| | \|U\| + 1 |
| Incoming Node Features nt | Percentage of fixed nodes incident to incoming vt (for invariant models only) | (1/\|U\|) \|{(u, vt) ∈ E : u ∈ U}\| | 1 |
| | Normalized step size at time t | t/\|V\| | 1 |
| Solution-Level Features ht | Maximum weight in current matching solution | max_{e∈S} we | 1 |
| | Minimum weight in current matching solution | min_{e∈S} we | 1 |
| | Mean weight in current matching solution | µS = (1/\|S\|) Σ_{e∈S} we | 1 |
| | Variance of weights in current matching solution | σS = (1/\|S\|) Σ_{e∈S} (we − µS)² | 1 |
| | Ratio of already matched nodes in U | (1/\|U\|) \|{(u, v) ∈ S, u ≠ uskip}\| | 1 |
| | Ratio of skips made up to time t | (1/t) \|{(u, v) ∈ S, u = uskip}\| | 1 |
| | Normalized size of current matching solution | pt = (1/\|U\|) Σ_{e∈S} we | 1 |

Table 1: Features used in ff-hist and
inv-ff-hist. d_u^t represents the degree of node u at time t. uskip represents the skip node, i.e., matching to this node means choosing to skip.

The models outlined above are designed based on a set of desirable properties for matching. Table 2 summarizes the properties that are satisfied by each model:

- **Graph Size Invariance**: Training on large graph instances may be infeasible and costly. Thus, it would be ideal to train a model on small graphs if it generalizes well to larger graphs with a similar generating distribution. We normalize each statistic (feature) that we compute so that it lies within a particular range, independently of the graph size. Moreover, the invariant architectures allow us to train small networks that only look at node-wise inputs and share parameters across all fixed nodes. It is also worth noting that the invariance property can be key to OBM variants where U is not fixed, e.g., 2-sided arrivals (Dickerson et al., 2018), an application that is left for future work.
- **Permutation Invariance**: In most practical applications, such as assigning jobs to servers or web advertising, the ordering of nodes in the set U should not affect the solution. The invariant architectures ensure that the model outputs the same solution regardless of the permutation of the nodes in U. On the other hand, the non-invariant models such as ff would predict differently for the same graph instance if the U nodes were permuted.
- **History-Awareness**: A state space defined based on the entire current graph and the current matching will allow the model to learn smarter strategies that reason about the future based on the observed past. Historical and graph-based information within the current graph gives the models an "identity" for each fixed node which may otherwise be lost due to node-wise input.
Contextual features such as incoming node features nt (see Table 1) and the ratio of already matched nodes help the learned policies to generalize to different graph sizes and U-to-V ratios.

- **Node-feature Awareness**: In real-world scenarios, nodes in U and V represent entities with features that can be key to making good matching decisions. For example, incoming nodes can be users with personal information such as age, gender, and occupation. The node features can be leveraged to obtain better matchings. Our GNN model supports node features. Other models can be modified to take such additional features but would need to be customized to the problem at hand.

| Model | Graph size Invariance | Permutation Invariance | History Awareness | Node-feature Awareness | Learnable Parameters |
|---|---|---|---|---|---|
| inv-ff | ✓ | ✓ | | | O(LH²) |
| ff | | | | | O(LH² + \|U\|H) |
| ff-hist | | | ✓ | | O(LH² + \|U\|H) |
| ff-supervised | | | ✓ | | O(LH² + \|U\|H) |
| inv-ff-hist | ✓ | ✓ | ✓ | | O(LH²) |
| gnn-hist | ✓ | ✓ | ✓ | ✓ | O(LH² + EH + E²) |

Table 2: Important model characteristics. L: Number of hidden layers, H: Hidden layer size, E: Embedding dimension.

## 4.3 Training Algorithms

RL Models: Because our focus is on flexible modeling of OBM-type problems with deep learning architectures, we have opted to leverage existing training algorithms with little modification. We use the REINFORCE algorithm (Sutton & Barto, 2018), both for its effectiveness and simplicity:

$$\nabla L(\theta|s)=\mathbb{E}_{p_{\theta}(\pi|s)}[(\mathcal{L}(\pi)-b(s))\nabla\log p_{\theta}(\pi|s)].$$

To reduce gradient variance and noise, we subtract a baseline b(s), an exponential moving average of the episode cost: b(s) is initialized to M = L(π), the negative episode reward, in the first training iteration, and the update step is b(s) = βM + (1 − β)L(π), where M is the previous baseline value (Sutton & Barto, 2018).
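The baseline bookkeeping used with REINFORCE above can be sketched as follows; the function and variable names are ours, and this is a single-sample sketch rather than the released training code.

```python
import numpy as np

def update_baseline(baseline, episode_cost, beta=0.9, first_iteration=False):
    """Exponential moving average of the episode cost L(pi) (the negative
    episode reward): initialized to the first observed cost, then updated
    as b <- beta * b + (1 - beta) * L(pi)."""
    if first_iteration:
        return episode_cost
    return beta * baseline + (1.0 - beta) * episode_cost

def reinforce_episode_grad(sum_grad_log_prob, episode_cost, baseline):
    """Single-sample REINFORCE estimate: (L(pi) - b(s)) * grad log p(pi|s)."""
    return (episode_cost - baseline) * sum_grad_log_prob
```

Subtracting the moving-average baseline leaves the gradient estimator unbiased while reducing its variance.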
Supervised Models: All incoming nodes are treated as independent samples with targets. Therefore, for a batch of N bipartite graphs with T incoming nodes, we minimize the weighted cross-entropy loss:

$${\frac{1}{N\times T}}\sum_{i=1}^{N}\sum_{j=1}^{T}loss(p_{j}^{i},t_{j}^{i},c),$$

where p_j^i is the output of the policy for graph instance i at timestep j, t_j^i is the target, which is generated by solving an integer programming formulation on the full graph in hindsight (see Appendix D for details), and c is the weight vector of size |U| + 1. All classes are given a weight of 1 except the skipping class, which is given a weight of |U|/|V|. This is to prevent overfitting when most samples belong to the skipping class, i.e., when |V| ≫ |U| and most incoming nodes are left unmatched. Masking is utilized to prevent all models from picking non-existent edges or already matched nodes.

## 5 Experimental Setup

## 5.1 Dataset Preparation

We train and test our models across two synthetically generated datasets from the Erdos-Renyi (ER) (Erdos & Renyi, 1960) and Barabasi-Albert (BA) (Albert & Barabási, 2002) graph families. In addition, we use two datasets generated from real-world base graphs. The gMission base graph (Chen et al., 2014) comes from crowdsourcing data for assigning workers to tasks. We also use MovieLens (Harper & Konstan, 2015), which is derived from data on users' ratings of movies, based on Dickerson et al. (2019). Table 3 summarizes the datasets and their key properties. In our experiments, we generate two versions of each real-world dataset: one where the same fixed nodes are used for all graph instances (gMission, MovieLens), and one where a new set of fixed nodes is generated for each graph instance (gMission-var, MovieLens-var).
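The class weighting of the supervised loss above can be sketched as follows; this is a per-sample version with helper names of our own choosing.

```python
import numpy as np

def skip_class_weights(num_u, num_v):
    """Weight vector c of size |U| + 1: every matching class gets weight 1,
    the skip class (last index) gets weight |U| / |V|."""
    c = np.ones(num_u + 1)
    c[-1] = num_u / num_v
    return c

def weighted_nll(probs, target, class_weights):
    """Weighted cross-entropy for a single arriving node: the negative
    log-probability of the hindsight-optimal class, scaled by its weight."""
    return -class_weights[target] * np.log(probs[target])
```

Down-weighting the skip class counteracts the imbalance that arises when |V| ≫ |U| and most samples are skips.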
To generate a bipartite graph instance of size |U| by |V | from the real-world base graph, we sample |U| nodes uniformly at random without replacement from the nodes on the left side of the base graph and sample |V | nodes with replacement from the right side of the base graph. A 10x30 graph is one with |U| = 10, |V | = 30, a naming convention we will adopt throughout. We note that our framework could be used in non-i.i.d arrival settings. However, the graph generation process depends on the i.i.d assumption, since we sample nodes from the base graph at random.

Erdos-Renyi (ER): We generate bipartite graph instances for the E-OBM problem using the Erdos-Renyi (ER) scheme (Erdos & Renyi, 1960). Edge weights are sampled from the uniform distribution U(0, 1]. For each graph size, e.g., 10x30, we generate datasets for a number of values of p, the probability of an edge being in the graph.

Barabasi-Albert (BA): We follow the same process described by Borodin et al. (2020) for generating preferential attachment bigraphs. To generate a bigraph in this model, start with |U| offline nodes and introduce online nodes V one at a time. The model has a single parameter p, which is the average degree of an online node. Upon arrival of a new online node v ∈ V , we sample nv ∼ Binomial(|U|, p/|U|) to decide the number of neighbours of v. Let µ be a probability distribution over the nodes in U defined by µ(u) = (1 + degree(u)) / (|U| + Σ_{u′∈U} degree(u′)). We sample offline nodes according to µ from U until nv neighbours are selected.

gMission: In this setting, we have a set of workers available offline and incoming tasks which must be matched to compatible workers (Chen et al., 2014). Every worker is associated with a location in Euclidean space, a range within which they can service tasks, and a success probability with which they will complete any task. Tasks are represented by a Euclidean location and a payoff value for being completed.
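The BA bigraph construction above can be sketched as follows. Edge weights (drawn from a degree-dependent normal distribution per Table 3) are omitted here, and `rng` is a numpy random Generator.

```python
import numpy as np

def ba_bigraph(num_u, num_v, p, rng):
    """Preferential-attachment bigraph: each arriving online node v draws
    n_v ~ Binomial(|U|, p/|U|) and selects distinct offline neighbours
    from U with probability mu(u) proportional to 1 + degree(u)."""
    degree = np.zeros(num_u)
    edges = []
    for v in range(num_v):
        n_v = rng.binomial(num_u, p / num_u)
        nbrs = set()
        while len(nbrs) < n_v:
            # mu(u) = (1 + degree(u)) / (|U| + sum of degrees), a valid distribution
            mu = (1.0 + degree) / (num_u + degree.sum())
            nbrs.add(int(rng.choice(num_u, p=mu)))
        for u in nbrs:
            degree[u] += 1.0
            edges.append((u, v))
    return edges
```

Because already-popular offline nodes are more likely to be chosen, the resulting degree distribution is heterogeneous, mimicking the rich-get-richer effect.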
We use the same strategy as in (Dickerson et al., 2018) to pre-process the dataset. That is, workers that share similar locations are grouped into the same "type", and likewise for tasks. An edge is drawn between a worker and a task if the task is within the range of the worker. The edge weight is calculated by multiplying the payoff for completing the task with the success probability. In total, we have 532 worker types and 712 task types. To generate a bipartite graph instance of size |U| by |V |, we sample |U| workers uniformly at random without replacement from the 532 types and sample |V | tasks with replacement from D. We set D to be uniform. That is, the graph generation process involves sampling nodes from V in the base graph uniformly.

MovieLens: The dataset consists of a set of movies each belonging to some genres and a set of users which can arrive and leave the system at any time. Once a user arrives, they must be matched to an available movie or left unmatched if no good movies are available. We have historical information about the average ratings each user has given for each genre. The goal is to recommend movies which are relevant and diverse genre-wise. This objective is measured using the weighted coverage function over the set of genres (see Section 2). Therefore, we must maximize the sum of the weighted coverage functions of all users which have arrived. The MovieLens dataset contains a total of 3952 movies, 6040 users, and 100,209 ratings of the movies by the users. As in (Dickerson et al., 2019), we choose 200 users who have given the most ratings and sample 100 movies at random. We then remove any movies that have no neighbors with the 200 users to get a total of 94 movies. These sets of movies and users will be used to generate all bipartite graphs. We calculate the average ratings each user has given for each genre. These average ratings will be used as the weights in the coverage function (see Section 2.1).
To generate an instance of size |U| by |V |, we sample |U| movies uniformly at random without replacement from the 94 movies and |V | users with replacement according to the uniform arrival distribution D. The full graph generation procedure for gMission and MovieLens can be seen in Algorithm 3 of Appendix C.

| Type | Problem | Base Graph Size | Node Attributes? | Weight Generation |
|---|---|---|---|---|
| ER | E-OBM | | | w(u,v) ∼ U(0, 1] |
| BA | E-OBM | | | w(u,v) ∼ N(degree(u), p/5) |
| gMission | E-OBM | 532 worker types × 712 task types | | payoff for completing the task × the success probability |
| MovieLens | OSBM | 94 movies × 200 users | ✓ | average rating each user has given for each genre, used as the weights in the coverage function |

Table 3: Datasets used for our experiments. p is the average node degree in BA graphs.

## 5.2 Evaluation

Evaluation Metric: We use the *optimality ratio* averaged over the set of test instances. The optimality ratio of a solution S on a graph instance G is defined as O(S, G) = c(S)/OPT(G), where c(S) is the objective value of S and OPT(G) is the optimal value on graph instance G, which is computed in hindsight using integer programming; see Appendix D.

Baselines: For E-OBM, we compare our models to the greedy baseline, which simply picks the maximum-weight edge, and greedy-rt (Ting & Xiang, 2014), a randomized algorithm which is near-optimal in the adversarial setting. In an effort to compare to strong tunable baselines, we implemented greedy-t, which picks the maximum-weight edge with weight above a dataset-specific threshold wT that is tuned on the training set. If no weight is above the threshold, then we skip (see Appendix B.4 for details).
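For concreteness, the greedy-t decision rule can be sketched as below; the threshold w_T would be tuned on the training set, and with a threshold of 0 and strictly positive weights this reduces to plain greedy.

```python
def greedy_t_action(edge_weights, available, threshold):
    """greedy-t for one arriving node: match to the available neighbour
    with the largest edge weight, but only if that weight exceeds w_T;
    otherwise skip (return None)."""
    best_u, best_w = None, threshold
    for u, w in edge_weights.items():
        if available.get(u, False) and w > best_w:
            best_u, best_w = u, w
    return best_u
```

Here `edge_weights` maps each neighbouring fixed node to its edge weight, and `available` marks fixed nodes not yet matched.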
To the best of our knowledge, this is the first application of such a baseline to real-world datasets. For OSBM, we only use greedy, as Dickerson et al. (2019) find that it achieves a better competitive ratio than their algorithms when movies cannot be matched more than once and the incoming user can be matched to one movie at a time, which is the setting we study here.

## 5.3 Hyperparameter Tuning And Training Protocol

In a nutshell, around 400 configurations, varying four hyperparameters, are explored using Bayesian optimization (Biewald, 2020) on a small validation set consisting of small graphs (10x30) from the gMission dataset. We have found the models to be fairly robust to hyperparameter values. In fact, most configurations with low learning rates (under 0.01) result in satisfactory performance, as seen in Fig. 3. The model with the best average optimality ratio on the validation set is selected for final evaluation, the results of which will be shown in the next section. Some hyperparameters are fixed throughout, particularly the depths/widths of the feed-forward networks (2-3 layers, 100-200 neurons), and the use of the ReLU as activation function. Training often takes less than 6 hours on a NVIDIA v100 GPU. Full details are deferred to appendices B.3 and B.1.

![10_image_0.png](10_image_0.png)

Figure 3: Top 200 hyperparameter tuning results for ff-hist on gMission 10x30. Each curve represents a hyperparameter configuration. Lighter color means better average optimality ratio on the validation set.

![11_image_0.png](11_image_0.png)

Figure 4: Distributions of the Optimality Ratios for E-OBM on ER graphs. The graph family parameter p is the probability of a random edge existing in the graph.

## 6 Experimental Results

## 6.1 Edge-Weighted Online Bipartite Matching

For E-OBM, we will analyze the performance of the models across ER and BA graphs as well as the gMission dataset. The edges and weights in the ER graphs are generated from a uniform distribution.
Thus, ER graphs do not have special graph properties such as the existence of community structures or the occurrence of structural motifs. As a result, the ER dataset is hard to learn from, as the models would merely be able to leverage the |U|-to-|V | ratio and the density of the graph (the graph family parameter is proportional to the expected node degree in a graph). Unlike ER graphs, explicit structural patterns are found in BA graphs. The BA graph generation process captures heterogeneous and varied degree distributions which are often observed in real-world graphs (Barabási & Pósfai, 2016). For example, graphs with many low-degree nodes and a few high-degree nodes occur in practical domains where the rich-get-richer phenomenon is witnessed. The BA graph generation process is described in Appendix C. In our experiments, nodes with higher degrees also have higher weights on average. We also study the models under the gMission dataset. Like many real-world datasets, the exact properties of the graphs are unknown. Thus, the models may derive policies based on implicit graph properties. The results will demonstrate that the models have taken advantage of some existing patterns in the dataset.

**Trends in decisions with respect to the |U|-to-|V| ratio and graph sparsity:** When |U| < |V |, the models outperform the greedy strategies since they learn that skipping would yield a better result in hindsight, despite missing a short-term reward. This is apparent for the 10x30 and 10x60 graphs in Figure 4 for ER and Figure 5 (b) for gMission. To substantiate this and other hypotheses about the behavior of various policies, we use "agreement plots" such as those in Figure 6. An agreement plot shows how frequently the different policies agree with a reference policy, e.g., with a hindsight-optimal solution or with the greedy method. Appendix E includes agreement plots w.r.t.
greedy: most disagreements between the learned policies and greedy happen in the beginning but all methods (including greedy) imitate the optimum towards the end, when most actions consist in skipping due to the fixed nodes having been matched already. Outperforming greedy on 100x100 ER graphs (3rd plot in Fig. 4) is quite a difficult task, as there is not much besides the graph density for the models to take advantage of. Since |V | = |U|, skipping is also rarely feasible. Hence, the models perform similarly to greedy.

![12_image_0.png](12_image_0.png)
![12_image_1.png](12_image_1.png)

(a) Distributions of the Optimality Ratios for E-OBM on BA graphs with average node degree 5, and weights

![12_image_2.png](12_image_2.png)

E-OBM gMission 100x100 / E-OBM gMission-var 100x100

(c) Distributions of the Optimality Ratios for E-OBM on gMission-var.

Figure 5: Distributions of the Optimality Ratios for E-OBM on BA and gMission.

As the ER graphs get denser (moving to the right within the first three plots of Fig. 4), the gap between the models and the greedy baselines increases, as there is a higher chance of encountering better future options in denser graphs. Hence, the models learn to skip as they find it more rewarding over the long term. This can be further seen in Fig. 6, where the models agree less with greedy on denser ER graphs. For gMission (right side of Fig. 6), most disagreements happen in the beginning but all models imitate the optimum towards the end when most actions consist in skipping; Appendix E has more agreement plots.

Model-specific Results: Models with history, namely inv-ff-hist (gray) and ff-hist (brown), consistently outperform their history-less counterparts, ff and inv-ff, across all three datasets (Figure 5). inv-ff receives the same information as greedy and performs fairly similarly to it on gMission and ER graphs. In fact, inv-ff learns to be exactly greedy on gMission 100x100 (see Appendix E).
However, inv-ff performs better than the non-invariant models on the BA dataset.

![13_image_0.png](13_image_0.png)
![13_image_1.png](13_image_1.png)

Agreement with Optimal for ER 10x30 / Agreement with Optimal for gMission 10x30

Figure 6: Percent agreement with the optimal solution per timestep. A point (timestep t, agreement a) on this plot can be read as: at timestep t, this method makes the same matching decision as the optimal solution on a% of the test instances.

The ideal policy on BA graphs would discover that matching to a high-degree node is not wise, since the node will likely receive a lot more edges in the future. Similarly, matching to a low-degree node is desired, since the node will probably not have many edges in the future. The node-wise reasoning of the invariant models is effective at learning to utilize such node-specific graph properties and following a more feasible policy. Armed with historical context, inv-ff-hist outperforms all other models on BA graphs (Fig. 5a). The best performance on ER and gMission is achieved by ff-hist, since the model gets all information (weights) at once (Fig. 5). However, when U nodes are permuted, inv-ff-hist vastly outperforms ff-hist, as shown in Appendix G. ff-supervised performs well but not as well as the RL models, since supervised learning comes with its own disadvantages, i.e., overfitting when there are more skip actions than matches, and being unable to reason sequentially when it makes a wrong decision early on. The latter is a well-known fatal flaw in behavior cloning, as observed by Ross & Bagnell (2010) and others. greedy-t performs well compared to greedy, which shows the advantage of strategically skipping if weights do not pass a tuned threshold. However, it is still outperformed by the learned models, especially on BA graphs, where the graphs exhibit explicit patterns, and on gMission 100x100.
In general, the choice of the best model depends on the problem, but we provide some empirical evidence on how this choice should be made. The invariant models with history (inv-ff-hist, gnn-hist) are the best-performing models and the most recommended in practice, as they are invariant to |U|, can support more general OBM variants such as 2-sided arriving nodes, and can take advantage of node/edge features. For settings where |U| is always fixed (e.g., scheduling jobs to servers), ff-hist is the best, as it can see all arriving edges at once and takes advantage of history.

Some advantages of using invariant models: We also experiment with a variation of the gMission dataset, where a new set of fixed nodes is generated for each graph instance. We see the same pattern as in gMission (where the same fixed nodes in U existed across all instances), except that non-invariant models degrade substantially for 100x100. This is because the input size increases substantially while the model's hidden layer sizes are kept constant. The same issue is seen for BA graphs. A significant disadvantage of non-invariant models is that they are not independent of |U|, so the model size needs to be increased with |U|. Invariant models are unaffected by this issue as they reason *node-wise*. We notice that this problem does not occur in gMission. One explanation is that the fixed nodes are the same across all instances, so the models have a good idea of the weight distribution for each fixed node, which is the same during testing and training.

Figure 7: Graph Transferability on gMission-var: Average optimality ratios for models trained on graphs of size 10x30 (left) & 10x60 (right) and tested on graphs of different sizes. Missing values are denoted with a dash for models that are not invariant to the number of U nodes of the training graphs.
Therefore, even though the model size is not increased as the input size increases, the models can in some sense "guess" the incoming weights and so do not need as much of an increase in capacity. Once again, models with history display better performance, as history helps the models build a better "identity" of each fixed node as more nodes come in, even if it was never seen during training.

Do models trained on small graphs transfer to larger graphs? In Fig. 7, we train all models on 10x30 and 10x60 graphs separately and test their transferability to graphs with different |U|-to-|V| ratios up to size 100x200. gnn-hist and inv-ff-hist perform especially well on graphs with a similar |U|-to-|V| ratio. For 100x100 graphs, inv-ff and greedy-t perform poorly, as they do not receive any features that give them context within a new graph size, such as the number of available fixed nodes.

## 6.2 Online Submodular Bipartite Matching (OSBM)

The inherent complexity of the OSBM problem and the real-world datasets provide a learning-based approach with more information to leverage. As such, the models tend to discover policies that do significantly better than greedy, as shown in Fig. 8. The benefit of RL models is apparent here when compared to ff-supervised, particularly for 10x30 and 94x100 graphs. The relative complexity of OSBM compared to E-OBM requires the model to be more generalizable, as the reasoning involved is more complex and mere imitation is not enough. ff-supervised also underperforms because the edge weights depend on the current solution and can change on the same graph instance if previous matches are different, causing a great mismatch with the training data. A similar trend to the E-OBM results is observed here: models with history outperform their history-less counterparts. The context provided by history is particularly helpful, as the edge weights depend on previous matches.
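To illustrate why OSBM edge weights depend on previous matches, here is a small sketch of an (unweighted) coverage-style objective in the spirit of Dickerson et al. (2019); the function names and genre sets are ours, and the paper's actual objective additionally weights each genre:

```python
def coverage_value(matched_genre_sets):
    """Value of a set of matches: number of distinct genres covered."""
    covered = set()
    for genres in matched_genre_sets:
        covered |= genres
    return len(covered)

def marginal_gain(current_matches, new_genres):
    """Effective 'edge weight' of a new match, given the current solution."""
    return coverage_value(current_matches + [new_genres]) - coverage_value(current_matches)

# The same movie contributes less once its genres are already covered:
assert marginal_gain([], {"action", "drama"}) == 2
assert marginal_gain([{"drama"}], {"action", "drama"}) == 1
```

Because the marginal gain shrinks as genres get covered, a fixed training label for an edge can be wrong at test time, which is exactly the mismatch that hurts ff-supervised.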
Furthermore, we notice that gnn-hist has the best performance on 10x30 and 94x100 graphs, as gnn-hist is the only model that uses user attributes as node features. We witness the same issue seen in gMission-var (5): the non-invariant models degrade on 94x100 graphs due to having the same hidden layer sizes despite processing larger graphs. The invariant models remain unaffected by the graph size. Interestingly, the invariant models even slightly outperform their non-invariant counterparts on 10x30 and 10x60 MovieLens-var.

Figure 8: Distributions of the Optimality Ratios for OSBM on three graph sizes for MovieLens and MovieLens-var. Higher is better.

## 7 Conclusion And Future Work

Through extensive experiments, we have demonstrated that deep reinforcement learning with appropriately designed neural network architectures and feature engineering can produce high-performance online matching policies across two problems spanning a wide range of complexity. In particular, we make the following concluding observations:

- A basic reinforcement learning formulation and training scheme are sufficient to produce good learned policies for online matching, are typically not sensitive to the choice of hyperparameters, and are advantageous compared to a supervised learning approach;
- Compared to greedy policies, RL-based policies are more effective, a result that can be partially explained by a stronger agreement with the optimal solution (in hindsight) in the early timesteps of the process, when greedy is too eager to match.
- RL policies tend to perform particularly well when trained and tested on dense graphs or ones with a strong structural pattern;
- Models that are invariant to the number of nodes are more advantageous than fully-connected models in terms of how well they generalize to test instances that are slightly perturbed compared to the training instances, either in graph size or in the identities of the fixed nodes;
- Feature engineering at the node and graph levels can help model the history of the matching process up to that timestep, resulting in improved solutions compared to models that use only weight information from the current timestep;
- Graph Neural Network models are a viable alternative to feed-forward models as they can leverage node features and their dependencies across nodes more naturally.

Future avenues of research include:

- A more extensive experimental analysis of different RL training algorithms beyond basic policy gradient;
- Extensions to new real-world datasets with rich node and edge features that could benefit even more from highly expressive models such as GNNs;
- Extensions to other online combinatorial optimization problems, which can leverage our framework, models, and code as a starting point.

## References

Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. *Rev. Mod. Phys.*, 74:47–97, Jan 2002. doi:10.1103/RevModPhys.74.47. URL https://link.aps.org/doi/10.1103/RevModPhys.74.47.

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in neural information processing systems, pp. 3981–3989, 2016.

Antonios Antoniadis, Themis Gouleakis, Pieter Kleer, and Pavel Kolev. Secretary and online matching problems with machine learned advice. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.
Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 7933– 7944. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 5a378f8490c8d6af8647a753812f6e31-Paper.pdf. Pranjal Awasthi and Tuomas Sandholm. Online stochastic optimization in the large: Application to kidney exchange. In *IJCAI*, 2009. Albert-László Barabási and Márton Pósfai. *Network science*. Cambridge University Press, Cambridge, 2016. ISBN 9781107076266 1107076269. URL http://barabasi.com/networksciencebook/. Thomas D. Barrett, William R. Clements, Jakob N. Foerster, and A. I. Lvovsky. Exploratory combinatorial optimization with reinforcement learning. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 3243–3250. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/5723. Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. *European Journal of Operational Research*, 290(2):405–421, 2021. Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com. Allan Borodin, Christodoulos Karavasilis, and Denis Pankratov. An experimental study of algorithms for online bipartite matching. *ACM J. Exp. Algorithmics*, 25, March 2020. ISSN 1084-6654. doi:10.1145/3379552. URL https://doi.org/10.1145/3379552. Quentin Cappart, Didier Chételat, Elias B. Khalil, Andrea Lodi, Christopher Morris, and Petar Velickovic. Combinatorial optimization and reasoning with graph neural networks. *CoRR*, abs/2102.09544, 2021. URL https://arxiv.org/abs/2102.09544. Xinyun Chen and Yuandong Tian. Learning to perform local rewriting for combinatorial optimization. In Hanna M. 
Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 6278–6289, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/ 131f383b434fdf48079bff1e44e2d9a5-Abstract.html. Zhao Chen, Rui Fu, Ziyuan Zhao, Zheng Liu, Leihao Xia, Lei Chen, Peng Cheng, Caleb Chen Cao, Yongxin Tong, and Chen Jason Zhang. Gmission: A general spatial crowdsourcing platform. *Proc.* VLDB Endow., 7(13):1629–1632, August 2014. ISSN 2150-8097. doi:10.14778/2733004.2733047. URL https://doi.org/10.14778/2733004.2733047. Hanjun Dai, Elias B. Khalil, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/ d9896106ca98d3d05b8cbdf4fd8b13a1-Paper.pdf. Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Ali Vakilian, and Nikos Zarifis. Learning online algorithms with distributional advice. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 2687–2696. PMLR, 18–24 Jul 2021. URL http://proceedings.mlr.press/v139/diakonikolas21a. html. John P. Dickerson, Karthik Abinav Sankararaman, Aravind Srinivasan, and Pan Xu. Assigning tasks to workers based on historical data: Online task assignment with two-sided arrivals. In *Proceedings of* the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, pp. 318–326, Richland, SC, 2018. International Foundation for Autonomous Agents and Multiagent Systems. John P. 
Dickerson, Karthik Abinav Sankararaman, Aravind Srinivasan, and Pan Xu. Balancing relevance and diversity in online bipartite matching via submodularity. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):1877–1884, Jul. 2019. doi:10.1609/aaai.v33i01.33011877. URL https: //ojs.aaai.org/index.php/AAAI/article/view/4013. Paul Erdos and Alfred Renyi. On the evolution of random graphs. *Publ. Math. Inst. Hungary. Acad. Sci.*, 5: 17–61, 1960. Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In *Proceedings of the 34th International Conference on Machine Learning* - Volume 70, ICML'17, pp. 1263–1272. JMLR.org, 2017. Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2021. URL https://www.gurobi.com. Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart. Exploring Network Structure, Dynamics, and Function using NetworkX. In Gaël Varoquaux, Travis Vaught, and Jarrod Millman (eds.), Proceedings of the 7th Python in Science Conference, pp. 11 - 15, Pasadena, CA USA, 2008. F. Maxwell Harper and Joseph A. Konstan. The movielens datasets: History and context. ACM Trans. Interact. Intell. Syst., 5(4), December 2015. ISSN 2160-6455. doi:10.1145/2827872. URL https://doi. org/10.1145/2827872. Eric Jones, Travis Oliphant, Pearu Peterson, et al. SciPy: Open source scientific tools for Python, 2001–. URL http://www.scipy.org/. Chinmay Karande, Aranyak Mehta, and Pushkar Tripathi. Online bipartite matching with unknown distributions. In *Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing*, STOC '11, pp. 587–596, New York, NY, USA, 2011. Association for Computing Machinery. ISBN 9781450306911. doi:10.1145/1993636.1993715. URL https://doi.org/10.1145/1993636.1993715. 
Richard Karp, Umesh Vazirani, and Vijay Vazirani. An optimal algorithm for on-line bipartite matching. In STOC '90, 1990. Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, and Pascal Poupart. Representation learning for dynamic graphs: A survey, 2020. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego,* CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980. Weiwei Kong, Christopher Liaw, Aranyak Mehta, and D. Sivakumar. A new dog learns old tricks: Rl finds classic optimization algorithms. 2019. URL https://openreview.net/pdf?id=rkluJ2R9KQ. Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id= ByxBFsRqYm. H. W. Kuhn. The hungarian method for the assignment problem. *Naval Research Logistics Quarterly*, 2 (1-2):83–97, 1955. doi:https://doi.org/10.1002/nav.3800020109. URL https://onlinelibrary.wiley. com/doi/abs/10.1002/nav.3800020109. Nina Mazyavkina, Sergey Sviridov, Sergei Ivanov, and Evgeny Burnaev. Reinforcement learning for combinatorial optimization: A survey. *Computers & Operations Research*, 134:105400, 2021. A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani. Adwords and generalized on-line matching. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pp. 264–273, 2005. doi:10.1109/SFCS.2005.12. Aranyak Mehta. Online matching and ad allocation. *Foundations and Trends in Theoretical Computer* Science, 8 (4):265–368, 2013. URL http://dx.doi.org/10.1561/0400000057. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, highperformance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'AlchéBuc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/ 9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf. Manish Purohit, Zoya Svitkina, and Ravi Kumar. Improving online algorithms via ml predictions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances* in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https:// proceedings.neurips.cc/paper/2018/file/73a427badebe0e32caa2e1fc7530b7f3-Paper.pdf. Stephane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Yee Whye Teh and Mike Titterington (eds.), *Proceedings of the Thirteenth International Conference on Artificial Intelligence and* Statistics, volume 9 of *Proceedings of Machine Learning Research*, pp. 661–668, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR. URL http://proceedings.mlr.press/v9/ross10a.html. Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical bayesian optimization of machine learning algorithms. In *Proceedings of the 25th International Conference on Neural Information Processing Systems* - Volume 2, NIPS'12, pp. 2951–2959, Red Hook, NY, USA, 2012. Curran Associates Inc. Richard S. Sutton and Andrew G. Barto. *Reinforcement Learning: An Introduction*. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html. Hingfung Ting and Xiangzhong Xiang. 
Near optimal algorithms for online maximum weighted b-matching. In Jianer Chen, John E. Hopcroft, and Jianxin Wang (eds.), *Frontiers in Algorithmics*, pp. 240–251, Cham, 2014. Springer International Publishing. ISBN 978-3-319-08016-1.

Shufan Wang, Jian Li, and Shiqiang Wang. Online algorithms for multi-shop ski rental with machine learned advice. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 8150–8160. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/5cc4bb753030a3d804351b2dfec0d8b5-Paper.pdf.

Yansheng Wang, Yongxin Tong, Cheng Long, Pan Xu, Ke Xu, and Weifeng Lv. Adaptive dynamic bipartite graph matching: A reinforcement learning approach. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pp. 1478–1489, 2019. doi:10.1109/ICDE.2019.00133.

Goran Zuzic, Di Wang, Aranyak Mehta, and D. Sivakumar. Learning robust algorithms for online allocation problems using adversarial training, 2020.

## A Implementation Details

All environments are implemented in PyTorch (Paszke et al., 2019). We use NetworkX (Hagberg et al., 2008) to generate synthetic graphs and find optimal solutions for E-OBM problems. Optimal solutions for OSBM problems are found using Gurobi (Gurobi Optimization, LLC, 2021); see Appendix D. PyTorch Geometric (Fey & Lenssen, 2019) is used for handling graphs during training and implementing graph encoders. Our code is attached to the submission as supplementary material.

## B Training And Evaluation

## B.1 Training Protocol

We train our models for 300 epochs on datasets of 20000 instances using the Adam optimizer (Kingma & Ba, 2015). We use a batch size of 200, except for MovieLens, where batch size 100 is used on graphs bigger than 10x60 due to memory constraints. Training often takes less than 6 hours on an NVIDIA V100 GPU.
gnn-hist takes less than a day to train on small graphs but consumes more time for bigger graphs and more complicated environments, such as MovieLens 94x100. This is due to re-embedding the graph at every timestep, which consumes more memory and computation as the graph size grows. We believe this can be improved with more efficient embedding schemes for dynamic graphs (Kazemi et al., 2020) but leave this for future work.

Figure 9: Average validation reward during training on gMission 10x30. Note that ff-supervised starts with a reward of approximately 1.9 at batch 0.

## B.2 Node Features

Since gnn-hist supports node features by default, we leverage this property so that incoming nodes have "identities" that can be helpful for learning effective strategies. For the E-OBM problem, the ER and gMission datasets do not have any helpful node attributes that are not already encoded in the edge weights. Therefore, we only provide node attributes that help the encoder differentiate between different types of nodes. That is, the skip node gets node feature −1, any fixed node i gets feature (j_i), which is 1 if i is already matched and 0 otherwise, and incoming nodes have feature 2. These features simply help the model differentiate between incoming nodes, fixed nodes, and the node that represents the skipping action. In the original MovieLens dataset, the incoming nodes are users while the fixed nodes are movies, both of which have helpful features. A fixed node i has feature vector (j_i, g_i), where g_i is a binary vector representing the genres to which the movie belongs. The skip node has feature vector ⃗0. Incoming users have attributes (gender, occupation, age), each of which is mapped to a number and normalized to between 0 and 1. See Table 4 for details.
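The simple E-OBM node-type features described above can be sketched as follows (the helper name and list layout are ours):

```python
def eobm_node_features(matched_flags):
    """Build the E-OBM node-type features: the skip node gets -1,
    each fixed node gets its matched flag (1 if matched, 0 otherwise),
    and the incoming node gets 2.
    matched_flags[i] is True if fixed node i is already matched."""
    skip_feat = [-1]                                 # skip-action node
    fixed_feats = [[1 if m else 0] for m in matched_flags]
    incoming_feat = [2]                              # newly arriving node
    return [skip_feat] + fixed_feats + [incoming_feat]

feats = eobm_node_features([True, False, False])
# → [[-1], [1], [0], [0], [2]]
```

These features carry no weight information themselves; their only job is to let the encoder tell the three node roles apart.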
| Attribute | Categories | Feature Range |
|---|---|---|
| Gender | Male, Female | 0 ≤ i ≤ 1 |
| Age | Under 18; 18-24; 25-34; 35-44; 45-49; 50-55; 56+ | 0 ≤ i ≤ 6 |
| Occupation | other; academic/educator; artist; clerical/admin; college/grad student; customer service; doctor/health care; executive/managerial; farmer; homemaker; K-12 student; lawyer; programmer; retired; sales/marketing; scientist; self-employed; technician/engineer; tradesman/craftsman; unemployed; writer | 0 ≤ i ≤ 20 |

Table 4: User features in the MovieLens dataset.

## B.3 Hyperparameter Tuning

We tune 4 training hyperparameters for each RL model using a held-out validation set of size 1000. The hyperparameters are the learning rate, learning rate decay, exponential decay parameter, and entropy regularization rate. For the ff-supervised model, only the learning rate and learning rate decay are tuned. Hyperparameters are optimized using the Bayesian search method (Snoek et al., 2012) in the Weights & Biases library (Biewald, 2020) with default parameters. We conduct around 400 runs for each model. All models are tuned on small bipartite graphs (10x30) from the gMission dataset; the same hyperparameters are used for bigger graphs at evaluation time. We also use the same hyperparameters for all datasets. We have found the models to be fairly robust to different hyperparameters. As can be seen in Figure 3, most configurations with low learning rates (under 0.01) result in satisfactory performance.

Fixed Hyperparameters: The ff and ff-hist models have 3 hidden layers with 100 neurons each. inv-ff and inv-ff-hist have 2 hidden layers of size 100.
The gnn-hist's decoder feed-forward neural network has 2 hidden layers of size 200, and the encoder uses embedding dimension 30 with one embedding layer. All models use the ReLU non-linearity. Each RL model is tuned on 4 hyperparameters, as seen in Table 5, using a held-out validation set of size 1000. The figure below shows the top 200 hyperparameter search results for ff-hist. Each curve represents a hyperparameter configuration. Evidently, most configurations with small learning rates result in a high average optimality ratio. Other models also show similar results in being insensitive to minor hyperparameter changes. All experiments are done with a constant random seed. Unlike other RL domains, we have not found the models to be sensitive to the seed and never had to restart training due to a bad run. This could be explained by the fact that the transitions in OBM are deterministic regardless of the action (skip or match). Therefore, the MDP is not highly stochastic.

| Hyperparameter | Range |
|---|---|
| Learning Rate | {j × 10^−i : 2 ≤ i ≤ 6, 1 ≤ j ≤ 9} |
| Learning Rate Decay | {1.0, 0.99, 0.98, 0.97, 0.96, 0.95} |
| Exponential Decay β | {1.0, 0.95, 0.9, 0.85, 0.8, 0.7, 0.65, 0.6} |
| Entropy Rate γ | {j × 10^−i : 1 ≤ i ≤ 4, 1 ≤ j ≤ 9} |

Table 5: Hyperparameter Grid

## B.4 Evaluation

greedy-rt baseline: As shown in Algorithm 1, greedy-rt works by randomly picking a threshold τ = e^K, with K drawn uniformly at random from {0, 1, ..., ⌈ln(w_max + 1)⌉ − 1}, where w_max is the maximum possible weight. When a new node comes in, we arbitrarily pick any edge whose weight is at least the threshold, or skip if none exist.

## Algorithm 1 Greedy-RT

1: Choose an integer K uniformly at random in the set N = {0, 1, ..., ⌈ln(w_max + 1)⌉ − 1}
2: Set τ = e^K
3: while a new vertex v ∈ V arrives do
4:   A = {u | u is v's unmatched neighbor in U and w_(u,v) ≥ τ}
5:   if A = ∅ then
6:     leave v unmatched
7:   else
8:     match v to an arbitrary vertex in A
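A minimal Python sketch of greedy-rt as described above (the instance encoding as one neighbor dictionary per arriving vertex is our own):

```python
import math
import random

def greedy_rt(arrivals, w_max, rng=random):
    """arrivals: one dict {fixed_node: weight} per incoming vertex v.
    Assumes weights have been re-normalized to be >= 1 (see below)."""
    # K uniform in {0, 1, ..., ceil(ln(w_max + 1)) - 1}, threshold tau = e^K
    K = rng.randrange(math.ceil(math.log(w_max + 1)))
    tau = math.exp(K)
    matched, total = set(), 0.0
    for nbrs in arrivals:
        # candidate set A: unmatched neighbors with weight >= tau
        candidates = [u for u, w in nbrs.items()
                      if u not in matched and w >= tau]
        if candidates:                    # match an arbitrary candidate
            u = candidates[0]
            matched.add(u)
            total += nbrs[u]
        # otherwise: leave v unmatched (skip)
    return total
```

For example, with w_max = 3 the threshold is either 1 or e, so for arrivals = [{0: 2.0}, {0: 3.0, 1: 1.5}] the returned weight is either 3.5 or 3.0 depending on the random draw.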
Surprisingly, this simple strategy is near-optimal in the adversarial setting, with a competitive ratio of 1/(2e⌈ln(w_max + 1)⌉) (Ting & Xiang, 2014). Since greedy-rt does not support weights between 0 and 1, we re-normalize the edge weights in all E-OBM datasets. For ER and BA graphs, we divide all weights by the minimum weight in the dataset. For gMission graphs, we re-normalize by multiplying all weights by the maximum weight in the original dataset.

greedy-t baseline: Gaining intuition from greedy-rt, we implement a baseline where the threshold is tuned rather than randomly picked. That is, we find a threshold w_T ∈ {0.01, 0.02, ..., 1.0} that achieves the best average reward on the training set. Then, we use w_T as a fixed threshold for all test graphs. See Algorithm 2 for pseudo-code.

## Algorithm 2 Greedy-T

1: Input w_T
2: while a new vertex v ∈ V arrives do
3:   A = {u | u is v's unmatched neighbor in U and w_(u,v) ≥ w_T}
4:   if A = ∅ then
5:     leave v unmatched
6:   else
7:     match v with the maximum-weighted edge to a node in A

## C Dataset Generation Details

We provide high-level pseudocode for graph generation in Algorithm 3. K is the number of fixed nodes, T is the time horizon and also the number of incoming V nodes, N is the number of instances to be generated, and "type" can be set to "var" to ask for graphs with varying sets of fixed nodes, with the default being to use the same set of fixed nodes.

## Algorithm 3 Graph Generation

1: procedure Generate(K, T, D, G(U*, V*, E*), type, N)
2:   𝒟 = {}
3:   if type != "var" then ▷ if all graphs should have the same fixed nodes
4:     U = u_1, ..., u_K ∼ Uniform(U*) ▷ Sample K fixed nodes without replacement from the base graph
5:   while i < N do ▷ Generate N graphs
6:     if type = "var" then
7:       U = u_1, ..., u_K ∼ Uniform(U*) ▷ Re-sample for every graph
8:     V, E = {}, {}
9:     while j < T do ▷ Add T nodes to V
10:      v ∼ D(V*) ▷ Sample according to arrival distribution D from the base graph
11:      e = {(u, v) ∈ E* : u ∈ U}
12:      if e = ∅ then
13:        Go to Step 10 ▷ Re-sample if incoming node has no neighbors
14:      V = V ∪ {v}
15:      E = E ∪ e
16:      j += 1
17:    𝒟 = 𝒟 ∪ {G(U, V, E)} ▷ Add graph instance to dataset
18:    i += 1
19:  return 𝒟 ▷ Return N graphs of size K by T

## D Finding Optimal Solutions Offline

To find the optimal solutions for E-OBM, we use the linear_sum_assignment function in the SciPy library (Jones et al., 2001–). For OSBM (with the coverage function), we borrow the IP formulation from Dickerson et al. (2019), defined below, where [g] is the set of genres and z is used to index into each genre. Every edge is associated with a binary feature vector q_(u,v) of dimension g:

$$\begin{aligned}
\text{Maximize} \quad & \sum_{v \in V} \sum_{z \in [g]} w_{zv}\,\gamma_{zv} \\
\text{Subject to} \quad & \sum_{u \in \mathrm{Ngbr}(v)} x_{uv} \le r_v \,, && \forall v \in V \\
& \sum_{v \in \mathrm{Ngbr}(u)} x_{uv} \le 1 \,, && \forall u \in U \\
& \gamma_{zv} \le \sum_{(u,v) \in E :\, q_{(u,v)}[z] = 1} x_{(u,v)} \,, && \forall z \in [g],\ v \in V \\
& \gamma_{zv} \le 1 \,, && \forall z \in [g],\ v \in V \\
& x_{uv} \in \{0, 1\}
\end{aligned}$$

## E Agreement Plots

In Figure 10a, greedy is closer to optimal for sparse graphs but gets increasingly different as the graphs get denser. This suggests that sparse graphs require a more greedy strategy, as there are not many options available in the future, while dense graphs with more incoming nodes than fixed nodes require a smarter strategy, as the future may hold many better options. For gMission 100x100, invariant models are closer to greedy in the beginning only. Non-invariant models learn very different strategies from greedy. Interestingly, while ff-supervised does not have the best performance on gMission 100x100, it is the closest to optimal in Figure 10c.
This is because the supervised model is learning to copy the optimal actions but lacks the sequential reasoning that RL models possess. It is often the case that there are many different solutions besides optimal that give high reward, so RL models may not necessarily learn the same strategy as ff-supervised. ## F Adwords In order to demonstrate that our RL framework can also tackle other variations of OBM, we test the performance of our models on the Adwords Mehta et al. (2005) problem. Adwords is a variation of OBM with applications in real-time ad allocation. Each vertex u ∈ U has a budget Bu, and each edge (*u, v*) has a bid (weight) biduv. Upon matching the edge (*v, u*), the node u depletes biduv amount of its budget. When a vertex is out of budget (Bu = 0), then the vertex becomes unavailable. The goal is to maximize the total amount of budget spent. To give the hist-models (inv-ff-hist, ff-hist) more context about the problem state, we input an additional vector of the remaining and original budgets of nodes (r0, r2, . . . r|U|, B0, B2*, . . . , B*|U|), where riis the remaining budget of node i at time t 2. We train and test our models on two hard distributions, Thick-z and Triangular, used in the work by Zuzic et al. (2020). The instances in the datasets are generated by permuting the nodes in U, all edges have the same bid sampled from U[0.1, 0.4). The budget of each fixed node is bid|V | |U| . The order in which the nodes in V arrive is the same across all instances. We find that ff and inv-ff converge to one of the existing policies (greedy and MSVV), or outperform them by a small margin. This result is coherent with the findings in Zuzic et al. (2020). However, we can see that inv-ff-hist and ff-hist outperform the baselines on thick-z graphs. In particular, inv-ff-hist is able to achieve 0.91 average optimality ratio. It is noteworthy that, in such hard graph datasets, only the permutation of the U nodes and bid ∼ U[0.1, 0.4) change for each instance. 
If one discovers the real identities (order) of the U nodes, then achieving the optimal matching is trivial (Zuzic et al., 2020). Therefore, the historical data coupled with the invariance property gives inv-ff-hist a better inductive bias to discover the identities and achieve near-optimal matching performance. In traditional RL applications, a sub-optimal action can result in the agent entirely changing its trajectory ahead. In the E-OBM setting, however, a sub-optimal action is not guaranteed to affect all the future actions (edge selections), especially on sparse graphs. In Adwords, the impact of a single sub-optimal action on the total budget spent (and the cumulative reward) is even less significant; that is, a sub-optimal action is very unlikely to affect all the future edge selections. Hence, there is less need to be strategic, as the penalty for a sub-optimal action is negligible. Thus, discovering a new policy on real-world graphs will be quite hard. In general, the existing greedy algorithms for Adwords perform quite well and are hard to beat (especially when the bids are significantly smaller than the budgets).

²Models without history are only given the current budget.

Figure 10: Percent agreement with the optimal solution per timestep (panels include gMission 10x60 and gMission 100x100). A point (timestep t, agreement a) on this plot can be read as: at timestep t, this method makes the same matching decision as the optimal solution on a% of the test instances.

Table 6: Mean/std optimality ratio on 2 hard Adwords datasets (Zuzic et al., 2020). All graphs in the training and test set have the same structure, except that the fixed nodes are permuted and the bids are sampled uniformly between 0.1 and 0.4 for all edges.
|             | Thick-z    |            | Triangular |            |
|-------------|------------|------------|------------|------------|
|             | 10x60      | 10x100     | 10x60      | 10x100     |
| greedy      | 0.59 ± .03 | 0.58 ± .02 | 0.66 ± .03 | 0.66 ± .02 |
| MSVV        | 0.7 ± .01  | 0.7 ± 0    | 0.66 ± 0   | 0.66 ± 0   |
| ff          | 0.7 ± .08  | 0.7 ± .09  | 0.65 ± .06 | 0.65 ± .06 |
| ff-hist     | 0.77 ± .07 | 0.71 ± .09 | 0.65 ± .06 | 0.65 ± .06 |
| inv-ff      | 0.7 ± .01  | 0.7 ± 0    | 0.66 ± 0   | 0.66 ± 0   |
| inv-ff-hist | 0.91 ± 0   | 0.91 ± 0   | 0.66 ± 0   | 0.66 ± 0   |

## G Permutation Invariance

In Figure 11, we show the results on gMission-perm, where we train all models on the gMission dataset and test on the same dataset but with the fixed nodes permuted independently for each graph instance. The performance of the non-invariant models degrades significantly compared to the gMission results. The invariant models are unaffected by the permutation as they receive *node-wise* input. ff-hist is less affected by this permutation in the 10x60 plot, which suggests that historical features help the model learn "identities" for each fixed node even if the nodes are permuted at test time. However, more incoming nodes need to be observed for the features to be statistically significant.

Figure 11: Distribution of Optimality Ratios for gMission-perm: gMission with Fixed Nodes Permuted at Test Time (E-OBM, 10x60 panel shown).
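The *node-wise* property above can be illustrated with a toy scorer: a shared function applied independently to each fixed node, so that permuting the nodes permutes the scores identically and the selected node is unchanged. This minimal sketch is ours and is not the paper's architecture:

```python
import numpy as np

def node_wise_scores(node_feats, w):
    # Shared linear scorer applied independently to every fixed node:
    # permuting the rows of node_feats permutes the scores identically.
    return node_feats @ w

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 3))  # 5 fixed nodes, 3 node-wise features each
w = rng.normal(size=3)           # shared parameters

perm = rng.permutation(5)
scores = node_wise_scores(feats, w)
scores_perm = node_wise_scores(feats[perm], w)

# Equivariance: permuted inputs give identically permuted scores...
assert np.allclose(scores_perm, scores[perm])
# ...so the greedy pick is the same physical node under any relabeling.
assert perm[np.argmax(scores_perm)] == np.argmax(scores)
```

A non-invariant model that concatenates all node features into one flat vector has no such guarantee, which is consistent with the degradation observed for the non-invariant models on gMission-perm.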
Review 1:
Summary: This paper proposes a reinforcement learning method to solve the Online Bipartite Matching problem. A set of policy networks is designed to improve the matching efficiency. Extensive experiments on several synthetic and real-world datasets show significant performance improvements over previous baselines.
Strengths and Weaknesses:
Strengths:
1. This paper studies an interesting application of RL to real-world problems.
2. The related work is detailed.
3. The proposed network structure is novel and convincing.
4. The performance improvement is significant.
Weaknesses:
1. The experimental evaluation of the model is not sufficient. An ablation study of each part of the model is missing.
2. The technical contribution of this paper is weak, as there is no special design of the training algorithm for the matching domain.
Requested Changes:
Questions:
1. What is the impact of the supervised module on the performance? There is no ablation study of this factor. What is the relationship between the SL model and the RL model?
2. What is the training curve?
3. How do you evaluate the RL policy? Do you test the policy in the same environment?
4. The main algorithm is REINFORCE. I suggest the authors try some state-of-the-art RL algorithms, such as TD3 and SAC.
5. Supervised learning methods such as DeepFM are widely used in recommendation tasks. However, this SL baseline is not compared in this paper.
6. I suggest that this paper clearly discuss the contribution of each feature type to the performance improvement. That is, which part is most important?
Presentation comments:
1. The definition of the action on page 6, denoted by pi_t as immediate actions, confuses the reader. I suggest a_t instead.
2. It would be better to explain the 3 feature types: graph-level, incoming node features, and solution-level features.
3. L(pi) in the loss function of REINFORCE should be an estimate of the return, not the loss.
4. It would be more readable to present the results in Figure 4 in a table.
Broader Impact Concerns: None.

==================================================

Review 2:
Summary: This paper studies the online bipartite matching problem, where one side of the nodes is pre-selected and fixed, while the other side of the nodes arrives in an online fashion. The algorithm needs to make a matching or choose to skip every time a new node arrives. The goal is to maximize the total number of matches (or weighted matches) within a given time horizon T. The paper considers the setting where the arriving nodes are iid samples drawn from an unknown fixed distribution, and proposes a reinforcement learning framework for this problem. Based on this MDP framework, the authors evaluated a set of neural network architectures and feature representations, and empirically evaluated the models on two online matching problems: Edge-Weighted Online Bipartite Matching and Online Submodular Bipartite Matching. In the evaluation part, the authors compared to a few greedy-based baselines.
Strengths and Weaknesses:
# Strengths
- the problem setting is very clear
- the MDP framework and the application of the RL algorithm seem to me to be a very natural and reasonable solution to the problem
- the experimental section was rather detailed; the breakdown of the features used, the results of the hyperparameter tuning, etc. were nicely presented
# Weaknesses
- the related work section is not comprehensive enough and some references and details are missing. For example, the authors noted that "Many traditional algorithms under the KIID setting aim to overcome this challenge by explicitly approximating the distribution" but there were no references for that. I suggest these related references be added.
- the writing could be much improved to make the paper easier to follow, especially the intro / related work sections, see below for more details
- I feel that the baselines that the authors compared with were fairly simple, e.g., the greedy algorithm.
It seems fairer to compare the RL models to other predictive-type models, e.g., the Q-learning approach and other RL approaches mentioned in the related work section. I think stronger baseline comparisons are needed
- It was not very clear to me where the RL framework and solution depend on the "iid" assumption. Why can't such an algorithm work with non-iid, or adversarial node samples? Can one stress-test the robustness of the models towards non-iid samples?
- among the models that were used, a few (e.g. ff with history) tried to include historical information in the model. How does an RNN-based model perform compared to the models that were used in the paper?
- the scale of the experiments was a bit limited, e.g., for the MovieLens data, only ~3 percent of the users in the whole dataset were used in the experiments' graph -- any particular constraints in limiting the size of the data?

## Presentation and writing:
- In the introduction it was unclear to me why the unknown node distribution leads to the use of an MDP -- as the authors noted, "the underlying (bipartite) graph instances may come from the same unknown distribution...", but for a fixed unknown distribution there could be other, simpler estimators. I'd suggest the authors add more motivation and related prior work to explain why RL and the MDP became the focused solution
- In section 2.1, Figure 1 was referenced for the first time but the plot did not show up until page 5, and jumping from section 2.1 to Figure 1 confused me because none of the MDP notation / framework was introduced before or in sec 2.1
- I found the arrangement of the related work section between the setup section and the MDP model section rather hard to read. Section 2 described the problem setting, but the related work section then distracted the reader.
- as noted above, please add the missing details in the related works

## Typos:
- Section 2.1 "..the online setting involves reasoning under uncertainly making the design of optimal online algorithms non-trivial .."

Requested Changes:
Critical:
- please improve the writing and presentation of the motivation and related works
- please add comparisons to stronger baselines from prior works
- please enhance the experimental results with larger-scale data / RNN-type models
Minor:
- fix typos
- please add a discussion of the dependence / robustness of the RL models towards the iid assumption
- please improve the presentation and arrangement of the figures (e.g. Figure 1, see comments above)
Broader Impact Concerns: I did not find major ethical concerns about this work.

==================================================

Review 3:
Summary: Online bipartite matching is a well-known problem in combinatorial optimization. The idea is that nodes from one side of a bipartite graph arrive online one at a time, each with some outgoing edges of different weights. One must decide upon seeing each node whether to irrevocably match it to another node, or to skip it. The goal is to maximize the total edge weight of the final matching. This problem is heavily studied from a theoretical perspective, often looking for worst-case guarantees. However, as the authors observe, it is extremely natural to model this problem as an MDP. Then one might ask whether, given access to some simulation or other source of information about node arrivals, reinforcement learning can learn a good policy. This is exactly what the authors do. Modeling the problem from the perspective of OBM, the authors choose to focus on the "unknown IID setting", which is previously studied in the literature and also conveniently seems to be amenable to coding up RL simulation environments from existing bipartite matching datasets.
Modeling the problem from the perspective of RL, the authors have a fairly straightforward MDP, but they point out an important property, which is that "uncertainty is exogenous". In other words, the result of the agent's action (choosing to match an edge) is completely predictable and has predictable immediate reward -- stochasticity comes in only from the random node arrivals, which are independent of the agent's actions. Given this model, they present a thorough examination of how to use RL to solve this problem. They test on a few standard random graph models and use a simple REINFORCE w/ baseline policy optimization approach. They present a thorough empirical study of many different neural network architectures, in particular covering important properties like permutation invariance and the ability to generalize to unseen graph sizes.
Strengths and Weaknesses:
The main empirical section is very carefully done. In terms of experimental setup, they choose reasonable baselines to investigate -- some theoretical baselines, as well as behavioral cloning (which they find does not perform as well as true RL). Testing against optimal hindsight decision-making is also good, and comparing not just performance but decisions chosen against hindsight gives some useful information. More broadly, it's good that the authors actually look to see what their learned policies are doing. The hyperparameter tuning also seems to be far more thorough than is usual. In terms of the choice of network architectures, the authors seem to explore all the design choices that matter. Intuitively, this is a problem where invariance should be valuable -- the authors find that it is. It also makes sense that one would hope for generalization to multiple graph sizes, which the authors find is possible. I can believe that giving access to history features matters in practice, but it seems sort of odd as this is an MDP.
The major weakness is that this is a completely reasonable choice of problem, but it is also not that profound. It is not surprising that OBM can be formulated as an MDP, and that given such an MDP, standard deep RL techniques can find good solutions. On the other hand, it's good that someone actually bothered to do this, and did a careful job of it.
Requested Changes:
- Weak suggestions:
  - Everyone uses transformers for everything nowadays, and they're also permutation-equivariant and can generalize to new sizes. This might be an interesting thing to try.
  - All other suggestions I thought of were in fact already mentioned in the authors' last list of bullet points.
A critical request: please discuss initialization/random seeds explicitly, for both network parameters and environment randomness. This is always a big problem in deep RL. Based on the way the results are structured here, the robustness to hyperparameters, and the repeated trials, I don't think that "tuning the random seed" is a likely explanation for good performance. But I do believe seeds are essential to mention briefly in the paper.
Broader Impact Concerns: There are no broader impact concerns.

==================================================

Metareview:
Recommendation: Accept as is
Comment: All three reviewers agree that this paper deserved an accept. The primary weakness across reviewers is that the novelty of the work is not so high. That said, as one reviewer points out, the authors study a natural approach that is useful for the community to understand and see performance on. Moreover, the authors studied their approach quite thoroughly. The reviewers also felt that the author responses successfully addressed the questions of the reviewers.

==================================================
# Bounding Generalization Error With Input Compression: An Empirical Study With Infinite-Width Networks

Angus Galloway1,2 *gallowaa@uoguelph.ca*
Anna Golubeva3,4 *golubeva@mit.edu*
Mahmoud Salem1,2 *msalem04@uoguelph.ca*
Graham W. Taylor1,2,7 *gwtaylor@uoguelph.ca*
Mihai Nica5,2 *nicam@uoguelph.ca*
Yani Ioannou6 *yani.ioannou@ucalgary.ca*

1. School of Engineering, University of Guelph, ON, Canada
2. Vector Institute for Artificial Intelligence, ON, Canada
3. The NSF AI Institute for Artificial Intelligence and Fundamental Interactions
4. Department of Physics, Massachusetts Institute of Technology, MA, USA
5. Mathematics & Statistics, University of Guelph, ON, Canada
6. Schulich School of Engineering, University of Calgary, AB, Canada
7. Canada CIFAR AI Chair

Reviewed on OpenReview: *https://openreview.net/forum?id=jbZEUtULft*

## Abstract

Estimating the Generalization Error (GE) of Deep Neural Networks (DNNs) is an important task that often relies on the availability of held-out data. The ability to better predict GE based on a single training set may yield overarching DNN design principles to reduce a reliance on trial-and-error, along with other performance assessment advantages. In search of a quantity relevant to GE, we investigate the Mutual Information (MI) between the input and final layer representations, using the infinite-width DNN limit to bound MI. An existing input compression-based GE bound is used to link MI and GE. To the best of our knowledge, this represents the first empirical study of this bound. In our attempt to empirically stress test the theoretical bound, we find that it is often tight for best-performing models. Furthermore, it detects randomization of training labels in many cases, reflects test-time perturbation robustness, and works well given only few training samples. These results are promising given that input compression is broadly applicable where MI can be estimated with confidence.
## 1 Introduction

Generalization Error (GE) is the central quantity for the performance assessment of Deep Neural Networks (DNNs), which we operationalize as the difference between the train-set accuracy and the test-set accuracy.1 Bounding a DNN's GE based on a training set is a longstanding goal (Jiang et al., 2021) for various reasons: i) Labeled data is often scarce, making it at times impractical to set aside a representative test set. ii) The ability to predict generalization is expected to yield overarching design principles that may be used for Neural Architecture Search (NAS), reducing a reliance on trial-and-error. iii) Bounding the error rate is helpful for model comparison and essential for establishing performance guarantees for safety-critical applications. In contrast, the test accuracy is merely a single performance estimate based on an arbitrary and finite set of examples. Furthermore, the *adversarial examples* phenomenon has revealed the striking inability of DNNs to generalize in the presence of human-imperceptible perturbations (Szegedy et al., 2014; Biggio & Roli, 2018), highlighting the need for a more specific measure of *robust* generalization. Various proxies for DNN complexity which are assumed to be relevant to GE—such as network depth, width, ℓp-norm bounds (Neyshabur et al., 2015), or number of parameters—do not consistently predict generalization in practice (Zhang et al., 2021). In search of an effective measure to capture the GE across a range of tasks, we investigate the Mutual Information (MI) between the input and final layer representations, evaluated solely on the training set. In particular, we empirically study the Input Compression Bound (ICB) introduced by Tishby (2017) and Shwartz-Ziv et al. (2019), linking MI and several GE metrics.

1GE is also referred to as *generalization gap*. Note that some use "generalization error" as a synonym for "test error".
An emphasis on *input* is an important distinction from many previously proposed GE bounds (e.g., Zhou et al. (2019)), which tend to be model-centric rather than *data*-centric. We use *infinite ensembles of infinite-width networks* (Lee et al., 2019), as in deterministic DNNs the MI quantity we examine is ill-defined (Goldfeld et al., 2019). Infinite-width networks correspond to *kernel regression* and are simpler to analyze than finite-width DNNs, yet they exhibit the double-descent and overfitting phenomena observed in Deep Learning (DL) (Belkin et al., 2019). For these reasons, Belkin et al. (2018) suggested that understanding kernel learning should be the first step taken towards understanding generalization in DL. To this end, we evaluate the ICB with respect to three criteria: (1) First, we assess the extent to which the bound holds in practice. To do so, we evaluate the GE of a variety of models, composed of many different metaparameters of the neural architecture and training procedure. We then compare the GE to the theoretical GE bound given by ICB. We show that ICB contains the GE at the expected frequency for four of five datasets. We identify several factors that influence ICB success, including a training-label randomization test inspired by Zhang et al. (2017). (2) Next, we analyze whether the ICB is sufficiently small for useful model comparisons. If a theoretical GE bound exceeds 100% in practice, it is said to be *vacuous*. As we study binary classification tasks, we additionally require that the bound be less than 50% for models with non-trivial GE. We show that ICB is often sufficiently close to the empirical GE, and thus presents a *non-vacuous* bound. These results are obtained with 2000 or fewer training samples. (3) Last, we assess the ability of ICB to predict changes in GE under different metaparameter interventions.
The ICB accurately predicts changes in GE as the training time, number of training samples, and explicit parameter regularization vary; ICB was less consistent when the activation function, input data, and depth vary, depending on the dataset. Beyond these three main desiderata for generalization bounds, we show advantages in reducing ICB even when the GE is small. Reducing ICB on *natural* training labels prevents models from fitting *random* labels, and conversely, ICB *increases* when models are trained on *random* versus *natural* training labels (Zhang et al., 2017; 2021). Finally, we show that ICB is predictive of test-time perturbation robustness (Goodfellow et al., 2015; Gilmer et al., 2019), without assuming access to a differentiable model.

## 2 Background

We make use of an information-theoretically motivated generalization bound, the ICB, to establish an overlooked link between MI and GE. The bound seems to have first appeared in a lecture series (see, e.g., Tishby (2017)), later in a pre-print (Shwartz-Ziv et al., 2019)[Thm. 1], and more recently in a thesis (Shwartz-Ziv, 2021)[Ch. 3]. To the best of our knowledge the bound has not yet been studied empirically.

## 2.1 Mutual Information In Infinite-Width Networks

The MI between two random variables X and Z is defined as

$$I(X;Z)\equiv\sum_{x,z}p(x,z)\log{\frac{p(x,z)}{p(x)p(z)}}=\mathbb{E}_{p(x,z)}\left[\log{\frac{p(z|x)}{p(z)}}\right],\tag{1}$$

where we used Bayes' rule to obtain the expression on the right and introduced Ep(x,z)[·] to denote the average over p(x, z). In our case, X denotes the input, and Z the input representation, which is taken as the Neural Network (NN) output. Since the marginal p(z) is unknown, we use an unnormalized multi-sample "noise contrastive estimation" (InfoNCE) variational bound. The InfoNCE procedure was originally proposed for unsupervised representation learning (van den Oord et al., 2018) and also serves as a lower bound on MI (Poole et al., 2019).
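For intuition, Eq. (1) can be evaluated exactly when X and Z are small discrete variables with a known joint distribution. This sanity-check sketch is ours and is separate from the variational estimation used in this work:

```python
import numpy as np

def mutual_information_bits(p_xz):
    """Exact MI in bits, per Eq. (1), from a joint probability table p(x, z)."""
    p_xz = np.asarray(p_xz, dtype=float)
    p_x = p_xz.sum(axis=1, keepdims=True)  # marginal p(x)
    p_z = p_xz.sum(axis=0, keepdims=True)  # marginal p(z)
    mask = p_xz > 0                        # 0 * log 0 := 0 by convention
    return float((p_xz[mask] * np.log2(p_xz[mask] / (p_x * p_z)[mask])).sum())

# Independent variables carry no information about each other...
assert mutual_information_bits(np.outer([0.5, 0.5], [0.5, 0.5])) == 0.0
# ...while a deterministic 1:1 mapping of two equiprobable states gives 1 bit.
assert abs(mutual_information_bits([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-12
```

In the setting of this paper p(z) is unknown and X is continuous, which is exactly why the variational InfoNCE-style bounds are needed instead of this direct computation.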
In van den Oord et al. (2018), the density ratio p(z|x)/p(z) was learned by a NN. Instead, following Shwartz-Ziv & Alemi (2020), we use infinite ensembles of infinitely-wide NNs, which have a conditional Gaussian predictive distribution: p(z|x) ∼ N(µ(x, τ), Σ(x, τ)), with µ, Σ given by the Neural Tangent Kernel (NTK) and Neural Network Gaussian Process (NNGP) kernel (Jacot et al., 2018). The predictive distribution also remains Gaussian following τ steps of Gradient Descent (GD) on the Mean-Squared Error (MSE) loss. The conditional Gaussian structure given by the NTK may be supplied in the InfoNCE procedure, yielding MI bounds free from variational parameters. Specifically, we use the "leave one out" upper bound (Poole et al., 2019) to conservatively bound MI:

$$I(X;Z)\leq\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}\log\frac{p(z_{i}|x_{i})}{\frac{1}{N-1}\sum_{j\neq i}p(z_{i}|x_{j})}\right]=I_{\rm UB}.\tag{2}$$

A lower bound on MI, ILB, of a similar form as Eq. (2) is also available (Eq. 6, Appendix A.2). We verified that both bounds yield similar results for the training regime in which we apply them (Fig. A5). See van den Oord et al. (2018) and Poole et al. (2019) for formal derivations of Eq. (2) and (6). These MI bounds must be computed on the training set only to evaluate a *generalization* bound.

## 2.2 Input Compression Bound

Here, we provide an intuitive explanation of the ICB, building on existing results and using information theory fundamentals (Cover & Thomas, 1991). A more formal derivation, including a proof, can be found in Shwartz-Ziv et al. (2019)[Appendix A]. We begin with the conventional Probably Approximately Correct (PAC) GE bound, which plays a central role in early mathematical descriptions of machine learning (Shalev-Shwartz & Ben-David, 2014).
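As a brief aside before the derivation, the estimator in Eq. (2) can be exercised numerically. The sketch below is ours, not the paper's NTK pipeline: it assumes scalar Gaussian conditionals p(z|x_i) = N(µ_i, σ²) with a shared variance, in which case the Gaussian normalization constant cancels between numerator and denominator:

```python
import numpy as np

def leave_one_out_mi_upper_bound(z, mu, sigma):
    """Monte-Carlo sketch of the 'leave one out' bound, Eq. (2), in nats.

    z     -- one sample z_i drawn from each conditional p(z|x_i), shape (N,)
    mu    -- conditional means mu(x_i), shape (N,)
    sigma -- shared conditional standard deviation (illustrative simplification)
    """
    n = len(z)
    # log p(z_i | x_j) for all pairs, up to a constant that cancels below.
    log_p = -0.5 * ((z[:, None] - mu[None, :]) / sigma) ** 2
    diag = np.diag(log_p)                                  # log p(z_i | x_i)
    off = np.where(np.eye(n, dtype=bool), -np.inf, log_p)  # exclude j = i
    m = off.max(axis=1, keepdims=True)                     # stable logsumexp
    log_mix = (m + np.log(np.exp(off - m).sum(axis=1, keepdims=True)))[:, 0]
    log_mix -= np.log(n - 1)                               # leave-one-out mean
    return float(np.mean(diag - log_mix))

# If Z is independent of X (identical conditionals), the bound is exactly 0...
z = np.array([0.3, -1.2, 0.7, 0.1])
assert abs(leave_one_out_mi_upper_bound(z, np.zeros(4), 1.0)) < 1e-12
# ...while well-separated conditionals yield a large positive estimate.
mu = np.array([0.0, 10.0, 20.0])
assert leave_one_out_mi_upper_bound(mu.copy(), mu, 0.1) > 1.0
```

In the paper's actual setting, µ and Σ come from the NTK/NNGP predictive distribution rather than being specified by hand.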
It is assumed that a model receives a sequence of examples x, each labeled with the value f(x) of a particular target function, and has to select a hypothesis that approximates f well from a certain class of possible functions. By relating the hypothesis-class cardinality |H| and the number of training examples Ntrn, one obtains the following bound on the GE:

$$\mathrm{GE}<\sqrt{\frac{\log(|\mathcal{H}|)+\log(1/\delta)}{2N_{\mathrm{trn}}}}\tag{3}$$

where the confidence parameter δ ∈ (0, 1) indicates the probability with which the bound holds w.r.t. the random training sample. The key term in this bound is the hypothesis-class cardinality, the *expressive power* of the chosen ansatz. For a finite H, it is simply the number of possible functions in this class; when H is infinite, a discretization procedure is applied in order to obtain a finite set of functions. For NNs, |H| is usually assumed to increase with the number of trainable parameters. The bound (3) states that generalization is only possible when the expressivity is outweighed by the size of the training set, in line with the well-known bias-variance trade-off of statistical learning theory. Empirical evidence, however, demonstrates that this trade-off is qualitatively different in deep learning, where generalization tends to improve as the NN size increases, even when the size of the training set is held constant. The key idea behind the ICB is to shift the focus from the hypothesis to the *input space*. For instance, consider binary classification where each of the |X| inputs belongs to one of two classes. The approach that leads to bound (3) reasons that there are 2^|X| possible label assignments, only one of which is true, and hence a hypothesis space with 2^|X| Boolean functions is required to guarantee that the correct labeling can be learned. The implicit assumptions made here are that all inputs are fully distinct and that all possible label assignments are equiprobable.
These assumptions do not hold true in general, since classification fundamentally *relies* on similarity between inputs. However, the notion of similarity is data-specific and a priori unknown; thus, the uniformity assumption is required when deriving a general statement. The approach behind ICB circumvents these difficulties altogether by applying information theory to the process of NN learning. First, note that solving a classification task involves finding a suitable partition of the input space by class membership. DNNs perform classification by creating a representation Z for each input X and progressively coarsening it towards the class label, which is commonly represented as an indicator vector. The coarsening procedure is an inherent property of the NN function, which is implicitly contained in Z. By construction, the NN implements a partitioning of the input space, which is adjusted in the course of training to reflect the true class membership. In this sense, the cardinality of the hypothesis space reduces to |H| ≈ 2^|T|, where |T| is the number of class-homogeneous clusters that the NN distinguishes. To estimate |T|, the notion of *typicality* is employed: *Typical* inputs have a Shannon entropy H(X) that is roughly equal to the average entropy of the source distribution and consequently a probability close to 2^−H(X). Since the typical set has a probability of nearly 1, we can estimate the size of the input space to be approximately equal to the size of the typical set, namely 2^H(X). Similarly, the average size of each partition is given by 2^H(X|Z). An estimate for the number of clusters can then be obtained by |T| ≈ 2^H(X)/2^H(X|Z) = 2^I(X;Z), yielding a hypothesis-class cardinality |H| ≈ 2^(2^I(X;Z)).
With this, the final expression for the ICB is:

$$\mathrm{GE}_{\mathrm{ICB}}<\sqrt{\frac{2^{I(X;Z)}+\log(1/\delta)}{2N_{\mathrm{trn}}}}\,,\tag{4}$$

where it is assumed that X is a d-dimensional random variable that obeys an ergodic Markov Random Field (MRF) probability distribution, asymptotically in d (common for signal and image data, see, e.g., Murphy (2012)[Ch. 19]). Unfortunately, it is impossible to check this assumption directly because it involves the data-generating process, which we cannot access based on finitely many samples (i.e., the training set). We therefore treat ICB as a tool, and empirically test how useful this tool is in practice. We comment on the ergodic MRF assumption in Appendix A.1. We only evaluate ICB when we can obtain a confident estimate of I(X;Z). For this we require a tight sandwich bound on I(X;Z) with IUB ≈ ILB. We discard samples where IUB(X;Z) > log(Ntrn), since ILB(X;Z) cannot exceed log(Ntrn). See Fig. A5 for typical IUB, ILB values during training and samples to discard. Note that the units for I(X;Z) in ICB are *bits*.

## 3 Experiments

Our experiments are structured around three main questions: 1) To what extent does ICB bound GE in practice (§4.1), including for random labels (§4.2)? 2) Is the ICB close enough to the empirical GE to be useful for model comparisons (§4.3)? 3) To what extent does ICB predict GE with respect to different metaparameter interventions (§4.4)? We focus on binary classification like much of the generalization literature, which also enables us to evaluate MI bounds more efficiently by processing kernel matrices that scale as Ntrn^2 rather than (k × Ntrn)^2 for k classes. Aside from this computational advantage, our methodology extends to the multi-class setting. We conduct experiments with five standard datasets: MNIST (LeCun & Cortes, 1998), FashionMNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky, 2009), and EuroSAT (Helber et al., 2018; 2019).
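Once an MI estimate is available, Eq. (4) reduces to a one-line computation. A minimal sketch (the function name and example values are illustrative, not the paper's code; note that I(X;Z) enters in bits):

```python
import math

def icb(mi_bits, n_train, delta=0.05):
    """Input Compression Bound of Eq. (4): with confidence 1 - delta,
    GE < sqrt((2**I(X;Z) + log(1/delta)) / (2 * N_trn)), I(X;Z) in bits."""
    return math.sqrt((2.0 ** mi_bits + math.log(1.0 / delta)) / (2.0 * n_train))

# More compression (smaller MI) and more data both tighten the bound:
assert icb(4.0, 2000) < icb(8.0, 2000) < icb(8.0, 1000)
```

The exponential dependence on MI is why a tight sandwich estimate IUB ≈ ILB matters: a one-bit error in I(X;Z) changes the effective hypothesis-class term by a factor of two.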
These datasets are intended to be representative of low to moderate complexity tasks and make it tractable to train thousands of models (Jiang* et al., 2019). Experiments with EuroSAT further demonstrate how the method scales to 64-by-64 pixel images. For each of the image datasets, we devise nine binary classification "tasks" corresponding to labels "i versus i + 1" for i ∈ {0, . . . , 8}. This sequential class ordering ensures each label is used once, and is otherwise an arbitrary choice. We use metaparameters that are common to deep learning, with the exception of "diagonal regularization", which is specific to the NTK, denoted by K. The regularized NTK is defined as: Kreg = K + λ (Tr(K)/Ntrn) I, where λ is a coefficient that controls the amount of regularization. This is analogous to ℓ2 regularization of finite-width DNNs, except that we penalize the parameters' distance w.r.t. their initial values instead of w.r.t. the origin (Lee et al., 2020). We train a population of models by selecting metaparameters as follows: the total number of layers D ∈ {1, 2, 3, 4, 5}, diagonal regularization coefficients λ ∈ {10⁰, 10⁻¹, 10⁻², 10⁻³, 10⁻⁴}, and activation functions φ(·) ∈ {ReLU(·), Erf(·)}. For fully-connected Multi-Layer Perceptron (MLP) models we select the number of training samples Ntrn ∈ {1000, 1250, 1500, 1750, 2000}, whereas for Convolutional Neural Networks (CNNs) we use Ntrn ∈ {1000, 1500} for computational reasons. Test sets have constant Ntst = 2000. We do not vary the learning rate or mini-batch size, as the infinite-width networks are trained by full-batch GD, for which the training dynamics do not depend on the learning rate once below a critical stable value (Lee et al., 2019). A nominal learning rate of 1.0 was used in all cases and found to be sufficient.2 Models were evaluated at five different training times, t ∈ {10², 10³, 10⁴, 10⁵, 10⁶}, which contained most of the variation in GE. Training for less than t = 10² steps typically resulted in a small GE, as both training and test accuracy were near random chance or increasing in lockstep. In terms of steady-state behaviour, GE was often stable beyond t = 10⁶. Furthermore, t = 10⁶ was found to be a critical time beyond which IUB(X;Z) sometimes exceeded its upper confidence limit of log(Ntrn), particularly for small λ values where memorization (lack of compression) is possible. In total, there are 1250 unique metaparameter combinations for MLPs: 5 (D) × 5 (λ) × 2 (φ) × 5 (Ntrn) × 5 (t) = 1250, and 500 for CNNs. We measure correlations between ICB and standard GE, as well as *robust* GE. The latter is defined as the accuracy difference between the standard train set and a test set with Additive White Gaussian Noise (AWGN) (Gilmer et al., 2019). It can be shown that a classifier's error rate on a test set corrupted by AWGN determines the mean distance to the decision boundary for linear models (Fawzi et al., 2016) and serves as a useful guide for DNNs (Gilmer et al., 2019).

Figure 1: **The ICB may reflect robust GE when it is loose w.r.t. standard GE.** The ICB is plotted as a grey shaded band underneath the training accuracy, indicating the range of test accuracy compatible with the theoretical bound. Performance metrics are evaluated for a EuroSAT Pasture versus Sea-Lake binary classification task using 500 training samples and 2000 test samples and different regularization levels in (a)–(c). At low regularization λ = 0.1 (a), the ICB is vacuous with respect to standard generalization beyond 10⁴ training steps, but reflects the poor robust generalization for the "Noisy" test set with AWGN. Increasing the regularization to λ = 0.5 (b) and λ = 1.0 (c) reduces ICB and the AWGN GE. Arrows indicate the steady-state AWGN GE (black) and ICB (grey), along with their respective values. See Fig. A5 for the corresponding upper and lower I(X;Z) bounds for this experiment.
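The robust GE just described (clean train accuracy minus accuracy on an AWGN-corrupted test set) can be sketched as follows; the stand-in threshold classifier and data are our illustrative assumptions, not the paper's infinite-width setup:

```python
import numpy as np

def robust_generalization_gap(predict, x_train, y_train, x_test, y_test,
                              noise_var, seed=0):
    """Robust GE: accuracy on the clean train set minus accuracy on a test
    set corrupted by AWGN with variance noise_var."""
    rng = np.random.default_rng(seed)
    x_noisy = x_test + rng.normal(scale=np.sqrt(noise_var), size=x_test.shape)
    train_acc = np.mean(predict(x_train) == y_train)
    noisy_test_acc = np.mean(predict(x_noisy) == y_test)
    return float(train_acc - noisy_test_acc)

# Stand-in binary classifier: threshold on the first feature.
predict = lambda x: (x[:, 0] > 0).astype(int)
x = np.array([[-5.0], [5.0], [-4.0], [4.0]])
y = np.array([0, 1, 0, 1])
# Points far from the boundary survive sigma^2 = 1/4 noise, so the gap is 0.
assert robust_generalization_gap(predict, x, y, x, y, noise_var=0.25) == 0.0
```

Because only forward predictions are required, this evaluation needs neither test-time gradients nor a differentiable model, in contrast to adversarial robustness evaluations.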
For the AWGN we use a Gaussian variance σ 2 = 1/16 for EuroSAT and σ 2 = 1/4 for the other datasets. ## 4 Results Illustrative example Before presenting the main results, we examine ICB for a EuroSAT classification task using only 500 training samples (Fig. 1). This is a challenging task, as tight MI and GE bounds are difficult to obtain for high-dimensional DNNs, particularly with few samples. For example, in (Dziugaite & Roy, 2017) 55000 samples were used to obtain a ≈ 20% GE bound for finite-width DNNs evaluated on MNIST. We evaluate ICB throughout training from the first training step (t = 100) until steady state when all accuracies stabilize (t = 108). Shortly after model initialization (t = 100to t = 101) the ICB is < 7% (indicated by the height of the shaded region in Fig. 1) and the training and test accuracy are both at 50% (GE= 0). Here, ICB is non-vacuous, but also not necessarily interesting for this random-guessing phase. ICB increases as training is prolonged.3 At low regularization (Fig. 1 a), the ICB ultimately becomes vacuous (ICB ≈ 50%) around 104steps. However, although ICB is vacuous with respect to *standard* generalization in 2This was the default setting for the neural_tangents software library (Novak et al., 2020). 3It may not be obvious that ICB increases monotonically with training steps as the training accuracy also increases. ![5_image_0.png](5_image_0.png) Figure 2: **The ICB often bounds GE for most datasets.** The ICB ("Theoretical generalization error bound") is plotted versus GE (MSE) for four datasets. The ICB satisfaction rate is annotated in the top left corner of each plot with format "Sat. % (N)", where N denotes the number of valid experiments. Each binary classification task is assigned a unique colour. See Fig. A6 for equivalent plot using the 0–1 loss. Table 1: Overall ICB satisfaction rate for two different loss functions, with the number of valid experiments (N) in brackets. 
The MNIST and FashionMNIST datasets were both at 100% and omitted for brevity.

| Loss | SVHN (CNN) | CIFAR (CNN) | EuroSAT (MLP) |
|--------|--------------|---------------|-----------------|
| Square | 95.2% (3914) | 80.5% (4148) | 93.0% (9637) |
| 0–1 | 67.6% (3914) | 84.9% (4148) | 95.7% (9637) |

a), it reflects well the poor *robust* generalization when tested with AWGN (Gilmer et al., 2019). Increasing the regularization coefficient λ reduces ICB from 50% (a) to 23% (b) and 15% (c), and the robust GE from 38% (a) to 19% (b) and 10% (c). Both standard and robust GE are bounded at all times by ICB. The latter is, however, a coincidence, as the robust GE is subject to the arbitrary AWGN noise variance (σ² = 1/16). The additive noise variance could be increased to push the robust GE beyond the range bounded by ICB. More important than bounding the robust GE *absolute percentage* is that ICB captures the trend of robust generalization. Evaluating robustness effectively is error-prone and often assumes access to test data and a differentiable model (Athalye et al., 2018; Carlini et al., 2019). We make no such assumptions here. The lack of robustness in Fig. 1 a) would likely have gone unnoticed. Next, we assess the extent to which ICB contains GEs evaluated on a variety of models and datasets.

## 4.1 Bounding Generalization Error

We refer to the percentage of cases where GE < ICB as the *"ICB satisfaction rate"*, or *"Sat."* in plots. We expect ≈ 95% of samples to satisfy this property as ICB is evaluated at 95% confidence (δ = 0.05). We consider other δ values in §A.4. Overall ICB satisfaction rates are listed in Table 1 for two loss functions, and the ICB is plotted versus GE in Fig. 2 for the square loss. In general, ICB was satisfied at a high rate, although the rate varied by dataset. Next, we identify factors affecting the ICB satisfaction rate. Best-performing models with test accuracy ≥ 70% attain an ICB satisfaction rate of 94.7% (N = 792) for SVHN and 0–1 loss.
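The satisfaction rate itself is a simple thresholded comparison over experiment records; a minimal sketch (with made-up numbers, not the paper's data):

```python
import numpy as np

def icb_satisfaction_rate(ge, icb):
    """Fraction of experiments in which the bound holds, i.e. GE < ICB."""
    ge, icb = np.asarray(ge), np.asarray(icb)
    return float(np.mean(ge < icb))

# Three of four hypothetical runs satisfy the bound:
rate = icb_satisfaction_rate([0.05, 0.10, 0.30, 0.02], [0.12, 0.25, 0.20, 0.08])
# rate == 0.75
```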
An 80% threshold for CIFAR led 94.0% (N = 1438) of samples to satisfy ICB. Architecture type has a significant effect on the ICB satisfaction rate: CNNs had an 11–14% greater ICB satisfaction rate than MLPs for CIFAR, and a 10–18% greater rate for SVHN, where the range depended on the loss function. Intriguingly, the superior ICB satisfaction rate of CNNs was not due to greater test-set accuracy. CNNs had lower GE as a result of their lower *train*-set accuracy (Table A5).

![6_image_0.png](6_image_0.png)

Figure 3: **The ICB often distinguishes between natural and randomized training sets.** Training accuracy for "Natural" and "Random" labels is plotted versus the theoretical GE bound (ICB), which is evaluated on the natural labels. Each data point corresponds to a unique regularization value λ, which influences the ICB value. We show the "0 vs. 1" task for all datasets, except CIFAR, for which the "7 vs. 8" task performed best (Table 2). Considerable separation between natural and randomly labeled sets is observed for all datasets. ICB is highly correlated with the ability to fit random labels in all cases, as shown by the τ values. The broken vertical line for EuroSAT indicates the ICB value for which there is at least a 10% accuracy difference between the natural and random sets.

Table 2: Ability of ICB to separate natural versus random training labels is a good predictor of ICB satisfaction rate by task. The row "ICBrand@X%" indicates the minimum ICB value for which a X% accuracy difference between natural and random labels is observed. The "Sat. %" row showing the ICB satisfaction rate is taken from §4.1, Exp. A. The value τ indicates the rank correlation between ICBrand@X% and Sat. % over the nine tasks. Columns are sorted in ascending order of ICBrand@X%.

| EuroSAT task | 2/3 | 6/7 | 3/4 | 7/8 | 4/5 | 5/6 | 0/1 | 1/2 | 8/9 | |
|---|---|---|---|---|---|---|---|---|---|---|
| Sat. % | 82 | 84 | 98 | 100 | 100 | 100 | 100 | 100 | 100 | τ = 0.76 |
| ICBrand@10% | 7.5 | 10.0 | 11.1 | 13.2 | 18.5 | 21.6 | 41.6 | 47.9 | 51.2 | |

| CIFAR-10 task | 2/3 | 3/4 | 5/6 | 4/5 | 6/7 | 0/1 | 1/2 | 8/9 | 7/8 | |
|---|---|---|---|---|---|---|---|---|---|---|
| Sat. % | 54 | 54 | 62 | 56 | 84 | 85 | 92 | 89 | 100 | τ = 0.87 |
| ICBrand@5% | 7.4 | 9.4 | 11.8 | 12.1 | 12.5 | 12.6 | 14.8 | 15.2 | 17.8 | |

Training time GE generally increases with training, therefore the overall ICB satisfaction rate may be biased upward by the inclusion of models trained for a short time (e.g. t = 10²). Selecting models trained for at least t = 10⁴ steps resulted in an ICB satisfaction rate *decrease* for CIFAR-10 trained CNNs from 84.9% overall to 76.3% (N = 2351), and a slight *increase* for SVHN from 67.6% overall to 70.5% (N = 2114). Expressing GE in terms of MSE, the ICB satisfaction rate fell from 95.2% overall to 91.3% (N = 2114). The number of training samples impacted the ICB satisfaction rate for the 0–1 loss on SVHN (Fig. A7).

Inter-task differences in ICB satisfaction rate were observed (Table A6). For EuroSAT, six of nine tasks *always* satisfied ICB, and three tasks reduced the overall average. The satisfaction rate was 82.2% (N = 1123) for the "2 vs. 3" task and 83.5% (N = 1154) for the "6 vs. 7" task. For CIFAR-10, tasks "2 vs. 3" through "5 vs. 6" had ICB satisfaction rates considerably worse than the rest by a margin of 15–20%, and for SVHN, tasks "2 vs. 3", "5 vs. 6", and "8 vs. 9" were poor performing. These inter-task trends were consistent for the CNN and MLP architectures.

## 4.2 The Randomization Test

Zhang et al. (2017; 2021) proposed the "randomization test" after observing that DNNs easily fit random labels. They argued that generalization bounds ought to be able to distinguish models trained on *natural* versus *random* training labels, since generalization is by construction made impossible in the latter case.
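The essence of the randomization test can be reproduced with any interpolating learner. Below is a small self-contained sketch using kernel ("linear") ridge regression as a stand-in for the paper's infinite-width models (our own construction for illustration, not the paper's code):

```python
import numpy as np

def ridge_accuracies(X_tr, y_tr, X_te, y_te, lam=1e-6):
    """Kernel ridge regression on ±1 labels; returns (train, test) accuracy."""
    n = len(y_tr)
    K = X_tr @ X_tr.T                                   # linear-kernel Gram matrix
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_tr)
    acc = lambda X, y: float(np.mean(np.sign(X @ X_tr.T @ alpha) == y))
    return acc(X_tr, y_tr), acc(X_te, y_te)

rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(100, 150)), rng.normal(size=(1000, 150))
y_tr, y_te = np.sign(X_tr[:, 0]), np.sign(X_te[:, 0])   # "natural" labels
y_rand = rng.permutation(y_tr)                          # "random" labels

tr_nat, te_nat = ridge_accuracies(X_tr, y_tr, X_te, y_te)
tr_rnd, te_rnd = ridge_accuracies(X_tr, y_rand, X_te, y_te)
# Both label types are fit (near-)perfectly, but only the natural labels
# generalize; the random-label GE is roughly train accuracy minus 50%.
```

At small λ the model interpolates both label sets, so only the test accuracy (and hence the GE) separates them, mirroring Zhang et al.'s observation.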
Motivated by the work of Zhang et al., we first hold all metaparameters constant and report ICB values before and after randomizing the training labels (Table 3). We then regularize models until they can no longer fit *random* training labels, while still permitting them to fit the *natural* training labels (Table 2, Fig. 3). We consider only two-layer ReLU networks here, and consistent with §4.1, we use the CNN architecture for FashionMNIST, SVHN & CIFAR and an MLP for MNIST & EuroSAT. We train these models to t = ∞ on the natural training set (Ntrn = 1000). Surprisingly, ICBUB approximates the GE well even when the model is trained on *random labels* (Table 3). For λ = 0.1, ICBUB = 15.5% compared to a GE of 21.3%. Next, for λ = 0.01, ICBUB = 38.5% and GE is 39.7%. Last, for λ = 0.001, IUB = 8.96, which is greater than log(Ntrn) ≈ 6.91 nats, therefore the corresponding ICBUB of 197.6% should be discarded. In this case, substituting the "optimistic" lower estimate yields ICBLB = 54.1%, close to the GE of 50%. We expect I(X;Z) to be smaller after training on natural labels, since training on random labels requires memorization of random data, i.e., the opposite of compression. Importantly, to isolate the effect of the training label type, the training accuracy must be controlled, as higher accuracy generally requires greater complexity and thus larger I(X;Z). This intuition is consistent with the results of Table 3, as both ILB and IUB increase monotonically with the training accuracy for both training label types. Training with λ = 0.001 allows models to perfectly fit both natural and randomized training sets (Table 3 rows with "Train" = 100%) and presents a suitable setting for evaluating the sensitivity of ICB to the training label type. Indeed, ILB is greater for random labels (6.37 vs. 5.78 nats), resulting in an increase of the *optimistic* theoretical GE bound, ICBLB, from 40.5% to 54.1%.
The more *pessimistic* ICBUB increases even more dramatically, from 90.4% to 197.6%, which is beyond the valid range of GE (0–100%).

Table 3: **ICB increases after training on random labels.** Randomization test results for EuroSAT. The lower and upper MI bounds, ILB and IUB, are included for comparison against log(Ntrn) ≈ 6.91 nats. Columns ICBLB and ICBUB refer to whether ILB or IUB is taken as the I(X;Z) estimate, respectively. Columns "Train" and "Test" show the respective accuracy in %. ICB values are larger for random labels when comparing rows with "Train" = 100.0.

| λ | ILB | IUB | ICBLB | ICBUB | Train | Test | GE |
|------|------|------|-------|-------|-------|------|------|
| *Natural training labels* | | | | | | | |
| 10⁻¹ | 4.87 | 5.37 | 26.0 | 33.1 | 97.9 | 98.7 | −0.8 |
| 10⁻² | 5.40 | 6.58 | 33.7 | 60.2 | 99.5 | 98.6 | 0.9 |
| 10⁻³ | 5.78 | 7.40 | 40.5 | 90.4 | 100.0 | 97.5 | 2.5 |
| *Random training labels* | | | | | | | |
| 10⁻¹ | 3.68 | 3.75 | 15.0 | 15.5 | 71.3 | 50.0 | 21.3 |
| 10⁻² | 5.28 | 5.67 | 31.7 | 38.5 | 89.7 | 50.0 | 39.7 |
| 10⁻³ | 6.37 | 8.96 | 54.1 | 197.6 | 100.0 | 50.0 | 50.0 |

The randomization test identifies tasks with low ICB satisfaction Recall from §4.1 that three binary classification tasks reduced the ICB satisfaction rate below 100% for EuroSAT: "2 vs. 3" (Sat. 82%), "6 vs. 7" (Sat. 84%), and "3 vs. 4" (Sat. 98%). We observed that these were the *same* tasks for which ICB performed poorly on the randomization test. Specifically, we measured the minimum ICB value for which at least a 10% accuracy difference was recorded between the natural and random training sets (vertical broken line in Fig. 3) when training with 20 different regularization values λ in the range 10⁻⁴ to 10¹. The "2 vs. 3" task required the smallest ICB value (7.5%) before this accuracy difference was reached between the two label types. The "6 vs. 7" task had the next-smallest ICB value of 10.0%, followed by "3 vs. 4" with 11.1%.
The other six tasks—with 100% ICB satisfaction—had strictly greater ICB values (Table 2). Similar results are observed for CIFAR-10 using a smaller 5% threshold as accuracies for natural and random labels were closer than for EuroSAT. The tasks with minimum ("2 vs. 3") and maximum ("7 vs. 8") satisfaction rate are the same tasks with the minimum and maximum ICBrand@5%. Therefore, the training-set based randomization test—which only required training a single model here—may be used to help identify when ICB performs well as a GE bound for a variety of models. ## 4.3 Vacuous Or Non-Vacuous? To evaluate whether ICB is close enough to GE to aid model comparison, we examined the model with the greatest accuracy on the test set with AWGN for each SVHN and CIFAR task. We used AWGN accuracy rather than standard test accuracy to select these models given the observation from Figure 1 that ICB may be more Table 4: **The ICB is close enough to the GE for model comparison.** For five SVHN and CIFAR classification tasks (beginning with even numbers), we select the model with maximum accuracy on the test set with AWGN, shown in the "Robust" column. ICB is compared to the standard GE as well as the Robust GE ("RGE" column). High train-set accuracy ("Train" ≥ 99%) tends to coincide with large RGE and ICB values, making ICB loose w.r.t. GE. Otherwise, ICB is consistently close to GE (compare values in **bold**). 
**CIFAR**

| Task | Train | Test | Robust | GE | RGE | ICB |
|------|-------|------|--------|----|-----|-----|
| 0/1 | 96 | 86 | 83 | 10 | 12 | 13 |
| 2/3 | 93 | 74 | 71 | 18 | 21 | 25 |
| 4/5 | 99 | 77 | 73 | 22 | 26 | 52 |
| 6/7 | 93 | 85 | 83 | 8 | 10 | 12 |
| 8/9 | 85 | 80 | 80 | 5 | 5 | 7 |

**SVHN**

| Task | Train | Test | Robust | GE | RGE | ICB |
|------|-------|------|--------|----|-----|-----|
| 0/1 | 87 | 79 | 68 | 8 | 19 | 15 |
| 2/3 | 100 | 85 | 67 | 15 | 32 | 46 |
| 4/5 | 100 | 88 | 70 | 12 | 29 | 67 |
| 6/7 | 90 | 78 | 70 | 12 | 20 | 15 |
| 8/9 | 99 | 77 | 66 | 23 | 33 | 62 |

![8_image_0.png](8_image_0.png)

Figure 4: **The ability of ICB to predict GE varies for different metaparameters.** Plots show prediction Sign-Errors (SEs) (where *lower* is better) for robust GE with AWGN for six different metaparameter interventions on nine binary classification tasks {0 vs. 1, 1 vs. 2, …, 8 vs. 9} for a) FashionMNIST and b) CIFAR-10. Numbers above the SE for even-numbered tasks indicate the number of samples N comprising the respective SE measurement. The train samples category had insufficient N for FashionMNIST and was therefore replaced with a dataset category in a), which examines the effect of switching the dataset from FashionMNIST to CIFAR-10. The dataset category is omitted in b) as SE is symmetric. Note that mean SE is equal to max SE for the activation metaparameter, which had only one combination (ReLU versus Erf). See Table A7 for a detailed example showing how a column is computed in these plots.

aligned with *robust* GE (also see Fig. A8). The most accurate models on the standard test set often recorded zero error on the training set and thus incurred large ICB values. Here, ICB values were considerably less than 50% in all cases except where training accuracy was 99% or greater (Table 4). Furthermore, ICB values were often close to the empirical GE, e.g., an ICB of 7% was obtained for GE = 5% for the CIFAR task 8/9 using only 1500 training samples (Table 4).
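The selection procedure behind Table 4 amounts to picking, per task, the run with the best AWGN accuracy and reading off its GE, RGE, and ICB. A sketch with hypothetical per-run record dictionaries (field names are ours, not the paper's):

```python
def best_robust_model(records):
    """Select the run with maximum AWGN ('robust') test accuracy and return
    its (ICB, GE, RGE). Records are hypothetical dicts of per-run metrics."""
    best = max(records, key=lambda r: r["robust_acc"])
    ge = best["train_acc"] - best["test_acc"]      # standard GE
    rge = best["train_acc"] - best["robust_acc"]   # robust GE
    return best["icb"], ge, rge

runs = [
    {"train_acc": 96, "test_acc": 86, "robust_acc": 83, "icb": 13},
    {"train_acc": 99, "test_acc": 80, "robust_acc": 70, "icb": 40},
]
# best_robust_model(runs) -> (13, 10, 13)
```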
## 4.4 Effect Of Metaparameters On Theoretical Bound

The goal of this section is to assess the ability of ICB to robustly rank GEs for a variety of models trained by different metaparameters. We perform *coupled-network* experiments as per Dziugaite et al. (2020) to assess the ability of ICB to predict whether GE *increases* or *decreases* for all metaparameter combinations when manipulating one held-out metaparameter at a time. Ranking is assessed in terms of Sign-Error (SE):

$$\mathrm{SE}(P^{e},\mathrm{ICB})=\frac{1}{2}\mathbb{E}_{(w,w^{\prime})\sim P^{e}}\left[1-\mathrm{sgn}\left(\mathrm{GE}\left(w^{\prime}\right)-\mathrm{GE}\left(w\right)\right)\cdot\mathrm{sgn}\left(\mathrm{ICB}\left(w^{\prime}\right)-\mathrm{ICB}\left(w\right)\right)\right].\tag{5}$$

The SE (Eq. 5) is evaluated with respect to different assignments of the metaparameters, w and w′, which are said to be drawn from a distribution Pᵉ induced by environment e, and which differ in only one metaparameter value at a time. Note that in Dziugaite et al. the expectation in Eq. (5) is taken over randomly initialized finite DNN parameters and SGD mini-batch ordering. Conversely, infinite ensembles trained by GD are deterministic. Therefore, the expectation in Eq. (5) is with respect to the choice of metaparameters and random training sample only. We include three metaparameters that were not present in the work of Dziugaite et al.: the diagonal regularization, training time, and activation function. We omit width, learning rate, and batch size, as they do not apply to our setting. We report the mean SE over all possible interventions to each metaparameter in Fig. 4. The maximum or "max" sign-error advocated by Dziugaite et al.
is discussed in text, as it was subject to greater noise than mean SE due to the smaller sample size.⁴ To calculate the mean SE for the "depth" metaparameter, for example, which has a range of 1 to 5, we evaluate the SE arising from changing the depth from 1 to 2, again for 1 to 3, and so on for all ten combinations. The SE is then averaged across the ten "before" and "after" configurations, whereas the max SE refers to the one combination with the greatest SE, e.g., changing the depth from 4 to 5. We discard samples with a Hoeffding weight less than 0.5 as per Dziugaite et al., indicating that a GE difference is too small relative to the number of train-set and test-set samples to reliably measure changes in the *true error* rate (i.e., on the underlying data-generating process).⁵ We discard cases with N < 10 for the max SE, whereas we compute a sample-weighted average for the mean SE based on all cases. Few measurements of the *standard* GE satisfied the Hoeffding threshold for FashionMNIST; therefore, we added AWGN to the test set to increase the typical GE and the number of valid cases to analyze. We repeat our analysis for standard GE for CIFAR-10 in Fig. A9 and contrast it with the AWGN case.

The diagonal regularization was the most consistent metaparameter, with zero max SE for all tasks and datasets. Intervening on the training time consistently led to small mean and max SE, as did intervening on the number of train samples for the majority of tasks (Fig. 4). The time metaparameter had a max SE of 0.45 for the 6/7 FashionMNIST task—consistent with the peak mean SE for task 6/7 in Fig. 4a—and five tasks had a max SE of zero. For CIFAR-10, the max SE w.r.t. time was 0.27, with five tasks having max SE < 0.02. The max SE for train samples was 0.89 (N = 19) for task 5/6, which was followed by 0.14 (N = 44) for task 4/5. Other tasks either had zero max SE or insufficient data.
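The SE values reported above come from Eq. (5), which reduces to an average over coupled metaparameter pairs; a direct transcription:

```python
import numpy as np

def sign_error(ge_pairs, icb_pairs):
    """Mean sign-error (Eq. 5): fraction of coupled pairs (w, w') for which
    ICB ranks the two GEs in the wrong order (ties count as half an error)."""
    terms = [0.5 * (1 - np.sign(g2 - g1) * np.sign(b2 - b1))
             for (g1, g2), (b1, b2) in zip(ge_pairs, icb_pairs)]
    return float(np.mean(terms))

# A concordant pair contributes 0, a discordant pair contributes 1:
assert sign_error([(0.1, 0.3)], [(0.2, 0.5)]) == 0.0
assert sign_error([(0.1, 0.3)], [(0.5, 0.2)]) == 1.0
```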
Note that these CIFAR-10 tasks 4/5 and 5/6 also had adverse performance for the randomization test (Table 2). In summary, interventions on three metaparameters (the diagonal regularization, training time, and number of train samples) dominate the overall correlation between ICB and GE. Manipulating the activation function and depth resulted in a greater discrepancy between FashionMNIST and CIFAR-10. For these metaparameters, ICB was considerably more predictive for the former dataset, resulting in a lower SE. For example, for the depth metaparameter the max SE for FashionMNIST was 0.23 followed by 0.06, compared to 1.0 for CIFAR-10. Results for EuroSAT were closer to FashionMNIST than to CIFAR-10 and may be found in Fig. A10.

⁴We use the term "max SE" rather than "robust SE" from Dziugaite et al. to avoid confusion with the "robust GE". The latter is evaluated on a test set with AWGN and is a different concept from the former.

⁵Recall from the Introduction that we merely *operationalized* GE as the difference between train-set accuracy and test-set accuracy. In contrast, the GE is formally defined as the difference between train-set accuracy and the *true* accuracy (or error); test-set accuracy is an approximation for the latter, which we account for with this step.

## 5 Discussion

Our results show that the ICB serves as a non-vacuous generalization bound, which we verified in the case of infinite-width networks. Furthermore, we performed a broader evaluation than is typically considered for theoretical GE bounds: i) we searched for ICB violations by evaluating ICB throughout training, rather than at a specific number of epochs or training loss value; ii) we varied the number of training samples and classification labels, compared to a static train/test split; iii) we considered robust GE in addition to standard GE; iv) experiments were performed on five datasets.
ICB was satisfied at around the 95% rate for four of five datasets when GE was expressed in terms of MSE, or three of five when GE was expressed in terms of accuracy. Best-performing models with at least 70–80% test accuracy consistently satisfied ICB for all datasets, which is encouraging, since accurate models are more likely to be deployed. The SVHN dataset had the lowest ICB satisfaction rate among all cases for the 0–1 loss; however, this may have been impacted by too few training examples. Furthermore, we identified that the training label randomization test may detect specific tasks with a low ICB satisfaction rate; ensuring ICB yields different results for natural and randomized training labels may greatly improve results.

CNNs had superior performance compared to MLPs. Although the MRF assumption does not appear to be necessary to derive ICB, the success of CNNs in comparison to MLPs may be related to the MRF assumption and 2D images having only local correlations, compared to their flattened counterparts used with MLPs (see §A.1).

The ICB was able to successfully predict changes in robust GE when the diagonal regularization, training time, and number of training samples varied. ICB was less successful when the depth and activation function varied for CIFAR-10, yet performed well for FashionMNIST as well as for EuroSAT. The precise NTK limit we consider is generally assumed to only hold for shallow models, which may explain some of the failure cases for the depth metaparameter (Li et al., 2021). It is also worth considering to what extent a generalization bound *ought* to be able to rank GEs, given that it is by definition merely an upper bound on the error. For example, GEs of 1% and 29% are both compatible with a bound of 30%, which would contribute to poor GE prediction on an absolute scale.

Relevance to deep learning One should use caution before extrapolating our conclusions based on infinite-width networks to finite-width DNNs.
The ability of infinite-width networks to approximate their finite-width counterparts is reduced with an increasing number of training samples (Lee et al., 2019), regularization (Lee et al., 2020), and depth (Li et al., 2021). Nevertheless, the infinite-width framework has allowed us to demonstrate the practical relevance of the ICB for an exciting family of models as a first step. It has been argued that understanding generalization for shallow kernel learning models is essential to understanding the generalization behaviour of deep networks. Kernel learning and DL share the ability to exactly fit their training sets yet still generalize well, a phenomenon that other bounds fail to explain (Belkin et al., 2018). We leave the study of ICB for finite-width DNNs to future work, which may require alternative MI estimation techniques.

## 6 Related Work

Kernel-regression generalization error Canatar et al. (2021b) derived an analytical expression for the generalization MSE of kernel regression models using a replica method from statistical mechanics. Their predictions show excellent agreement with the empirical GE of NTK models on MNIST and CIFAR datasets as a function of the training sample size. Furthermore, their method is sensitive to differences in difficulty between similar classification tasks, e.g., showing that MNIST "0 vs. 1" digit classification is easier to learn than "8 vs. 9". Canatar et al. (2021a) extend the method to predict out-of-distribution GE. An alternative method is the Leave-One-Out (LOO) error estimator (Lachenbruch, 1967). LOO is generally impractical for DL due to the computational requirement of training N DNNs on N different training sets. However, Bachmann et al. (2022) proposed a closed-form LOO estimator based on a kernel regression model trained on the complete training set once. Their estimator shows excellent agreement with test MSE and accuracy for a five-layer ReLU NTK model trained on MNIST and CIFAR. While Bachmann et al.
averaged results over five training sets of size 500–20000, we only draw a single training set of 250–2000 samples for each set of metaparameters. Our choice was made to reflect a practical "small data" scenario, where GE has to be bounded using a modest set of labeled data. As a result, however, our GE and ICB estimates have greater variance than those of Bachmann et al. We used the infinite-width DNN limit for convenience and as a first step to assess the efficacy of ICB; we did not set out to find optimal generalization bounds for kernel regression. An advantage of ICB is that it only requires access to I(X;Z)—a black-box statistic applicable to a wide variety of models beyond kernel regression. Therefore, ICB may become increasingly relevant for DL using MI estimators with different strengths and assumptions, e.g., with distributional constraints on weight matrices (Gabrié et al., 2018) or infinite-depth corrections (Li et al., 2021).

Generalization bounds for deep learning Dziugaite & Roy (2017) developed a PAC-Bayes GE bound and evaluated it on an MNIST binary classification task using the complete training set (Ntrn = 55k) and a fully-connected NN with 2–3 layers and ReLU activations. Although their bound was non-vacuous (≈ 20%), it was several times larger than the error estimated on held-out data (< 1%). A comparison with our work is difficult, as we did not use finite-width DNNs. We showed that the ICB yields a smaller (≈ 10%) bound from fewer than 2000 samples for several classification tasks. Zhou et al. (2019) proposed a PAC-Bayes generalization bound based on the compressed size of a DNN after pruning and quantization. They obtain a GE bound of 46% for MNIST and 96–98% for ImageNet. The measure of compression used by Zhou et al. (2019) is distinct from input compression in terms of MI here. The bounds of Dziugaite & Roy and Zhou et al. concern model complexity, whereas ICB is based on data compression by the hidden layers.
Both Dziugaite & Roy and Zhou et al. optimized their bounds for best results, whereas we used standard training procedures.

Generalization bounds from unlabeled data GE bounds or estimates may be obtained without directly estimating model complexity. Garg et al. (2021) leverage the so-called "early learning" phenomenon, whereby DNNs fit true labels before noisy labels, to develop a post-hoc GE bound. They validate their bound on NTK-based wide DNNs, CNNs, and LSTMs. In contrast to our work, the Garg et al. bound requires additional unlabeled data that, in practice, can be carved out from the training set. They assign random labels to the carved-out set and augment the training set with this random data. Their bound is based on the empirical error computed on both the clean and random sets. Empirically, Garg et al. (2021) show that it may be possible to maintain model accuracy when training on partially randomized labels in some settings by using weight decay or early stopping. Unfortunately, random labels reduce the task signal-to-noise ratio, I(X;Y), and the approach may be challenging to apply with unregularized models that nonetheless generalize well (Zhang et al., 2017). Jiang et al. (2022) observed that the disagreement of separately trained DNNs on *unlabeled* held-out datasets is similar to the disagreement of those models on a *labeled* held-out set. Their claim follows an empirical observation that deep ensembles are often well-calibrated; however, this calibration property may not always hold in important settings (Kirsch & Gal, 2022).

Information compression and generalization The MI I(S;w) between the training data S = (x, y) supplied as input to a stochastic learning algorithm and the weights w it outputs can also serve to bound GE (Xu & Raginsky, 2017; Achille & Soatto, 2018). Decomposing I(S;w) into I(w;x) + I(w;y|x), Harutyunyan et al.
(2020) show that reducing the second term—the information w contains about the labels y beyond what can be inferred from x—is key to avoiding unintended memorization. As a result, these works *optimize* MI bounds, whereas we seek to *measure* MI to evaluate a GE bound. Furthermore, Shwartz-Ziv & Alemi (2020, Appendix C.7) evaluated I(S;w) for infinite-width networks and found that it tends to infinity as the training time goes to infinity. Thus, a GE bound based on I(S;w) is vacuous for these networks, which nevertheless generalize well. Saxe et al. (2018) observed a lack of compression in ReLU networks and argued that compression must be unrelated to generalization in DNNs, since it is known that ReLU networks generalize well. However, their binning procedure based on Paninski (2003) involves metaparameters that influence entropy and MI estimation. Other works have studied input compression in linear regression (Chechik et al., 2005) and finite-width ReLU DNNs using adaptive binning estimators (Chelombiev et al., 2019). We use MI bounds free from such metaparameters and observe input compression regardless of the nonlinearity type, consistent with Shwartz-Ziv & Alemi (2020). We are excited about future work on input compression phenomena and the challenging case of finite-width DNNs.

## 7 Conclusion

We assessed the ICB along three performance axes: tightness, percentage of trials satisfying the bound, and correlation with GE. Empirical results show that input compression serves as a simple and effective generalization bound, complementing previous theory. Additionally, ICB can help pinpoint interesting failures of robust generalization that go undetected by standard generalization metrics. An important consequence of the ICB with respect to NAS is that *bigger is not necessarily better*, at least in terms of the information complexity of infinite-width networks.
Equally important as the architecture are the metaparameters and training duration, all of which affect input compression. Consistent with Occam's razor, less information complexity—or more input compression—yields more performant models, reducing the upper bound on generalization error. We conclude that input compression, which is data-centric, is a more effective complexity metric than model-centric proxies like the number of parameters or depth.

## Author Contributions

Angus Galloway devised the study and wrote the first draft in consultation with all co-authors. Anna Golubeva critically revised the entire manuscript and made considerable contributions to the theoretical background. Mahmoud Salem provided experiment support with adversarial attacks in the JAX framework, and with characterizing the finite- to infinite-width NN correspondence. These experiments informed the final scope of the study and addressed alternate hypotheses. Mihai Nica, Yani Ioannou, and Graham W. Taylor provided technical advice and revised the manuscript. All authors consented to submission of this work to TMLR.

## Acknowledgments

This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Graham W. Taylor and Angus Galloway also acknowledge support from CIFAR and the Canada Foundation for Innovation. Angus Galloway also acknowledges supervision by Medhat Moussa. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute: http://www.vectorinstitute.ai/#partners.
Anna Golubeva is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/). Mihai Nica is supported by an NSERC Discovery Grant. ## References Alessandro Achille and Stefano Soatto. Emergence of Invariance and Disentanglement in Deep Representations. Journal of Machine Learning Research, 19(50):1–34, 2018. 12 Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In *International Conference on Machine Learning*, pp. 274–283, 2018. 6 Gregor Bachmann, Thomas Hofmann, and Aurelien Lucchi. Generalization Through the Lens of Leave-One-Out Error. In *International Conference on Learning Representations*, 2022. 11, 12 Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To Understand Deep Learning We Need to Understand Kernel Learning. In *Proceedings of the 35th International Conference on Machine Learning*, pp. 541–549. PMLR, 2018. 2, 11 Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32):15849–15854, 2019. doi: 10.1073/pnas.1903070116. 2 Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS '18, pp. 2154–2156, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 978-1-4503-5693-0. doi: 10.1145/3243734.3264418. 2 Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Out-of-distribution generalization in kernel regression. In Advances in Neural Information Processing Systems, volume 34, pp. 12600–12612. Curran Associates, Inc., 2021a. 11 Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. 
Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks. *Nature Communications*, 12(1):2914, 2021b. ISSN 2041-1723. doi: 10.1038/s41467-021-23103-1. 11 Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. *arXiv preprint arXiv:1902.06705*, 2019. 6 Gal Chechik, Amir Globerson, Naftali Tishby, and Yair Weiss. Information Bottleneck for Gaussian Variables. Journal of Machine Learning Research, 6(Jan):165–188, 2005. 12 Ivan Chelombiev, Conor Houghton, and Cian O'Donnell. Adaptive Estimators Show Information Compression in Deep Neural Networks. In *International Conference on Learning Representations*, 2019. 12 Thomas M Cover and Joy A Thomas. *Elements of Information Theory*. John Wiley & Sons, Inc., New York, 1991. 3 Gintare Karolina Dziugaite and Daniel M. Roy. Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data. In *Uncertainty in Artificial Intelligence (UAI)*, 2017. 5, 12 Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, and Daniel M Roy. In search of robust measures of generalization. In *Advances in Neural* Information Processing Systems, volume 33, pp. 11723–11733. Curran Associates, Inc., 2020. 10 Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: From adversarial to random noise. In *Advances in Neural Information Processing Systems*, volume 29, pp. 1632–1640. Curran Associates, Inc., 2016. 5 Marylou Gabrié, Andre Manoel, Clément Luneau, jean barbier, Nicolas Macris, Florent Krzakala, and Lenka Zdeborová. Entropy and mutual information in models of deep neural networks. In *Advances in Neural Information Processing* Systems, volume 31, pp. 1821–1831. 
Curran Associates, Inc., 2018. 12 Saurabh Garg, Sivaraman Balakrishnan, Zico Kolter, and Zachary Lipton. RATT: Leveraging unlabeled data to guarantee generalization. In *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of Proceedings of Machine Learning Research, pp. 3598–3609. PMLR, 2021. 12 Justin Gilmer, Nicolas Ford, Nicholas Carlini, and Ekin Cubuk. Adversarial Examples Are a Natural Consequence of Test Error in Noise. In *Proceedings of the 36th International Conference on Machine Learning*, pp. 2280–2289. PMLR, 2019. 2, 5, 6 Ziv Goldfeld, Ewout Van Den Berg, Kristjan Greenewald, Igor Melnyk, Nam Nguyen, Brian Kingsbury, and Yury Polyanskiy. Estimating Information Flow in Deep Neural Networks. In *Proceedings of the 36th International* Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 2299–2308. PMLR, 2019. 2 Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations, 2015. 2 Hrayr Harutyunyan, Kyle Reing, Greg Ver Steeg, and Aram Galstyan. Improving generalization by controlling label-noise information in neural network weights. In *Proceedings of the 37th International Conference on Machine* Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 4071–4081. PMLR, 2020. 12 Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Introducing EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. In *IGARSS 2018-2018 IEEE International* Geoscience and Remote Sensing Symposium, pp. 204–207, 2018. 4 Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. *IEEE Journal of Selected Topics in Applied Earth Observations* and Remote Sensing, 2019. 4 Arthur Jacot, Franck Gabriel, and Clement Hongler. 
Neural tangent kernel: Convergence and generalization in neural networks. In *Advances in Neural Information Processing Systems*, volume 31, pp. 8571–8580. Curran Associates, Inc., 2018. 3 Yiding Jiang*, Behnam Neyshabur*, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic Generalization Measures and Where to Find Them. In *International Conference on Learning Representations*, 2019. 4 Yiding Jiang, Parth Natekar, Manik Sharma, Sumukh K. Aithal, Dhruva Kashyap, Natarajan Subramanyam, Carlos Lassance, Daniel M. Roy, Gintare Karolina Dziugaite, Suriya Gunasekar, Isabelle Guyon, Pierre Foret, Scott Yak, Hossein Mobahi, Behnam Neyshabur, and Samy Bengio. Methods and Analysis of The First Competition in Predicting Generalization of Deep Learning. In Proceedings of the NeurIPS 2020 Competition and Demonstration Track, pp. 170–190. PMLR, 2021. 1 Yiding Jiang, Vaishnavh Nagarajan, Christina Baek, and J Zico Kolter. Assessing generalization of SGD via disagreement. In *International Conference on Learning Representations*, 2022. 12 Andreas Kirsch and Yarin Gal. A Note on "Assessing Generalization of SGD via Disagreement". Transactions on Machine Learning Research, 2022. 12 Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 4 P. A. Lachenbruch. An almost unbiased method of obtaining confidence intervals for the probability of misclassification in discriminant analysis. *Biometrics*, 23(4):639–645, 1967. ISSN 0006-341X. 11 Yann LeCun and Corinna Cortes. The MNIST Database of Handwritten Digits. *http://yann.lecun.com/exdb/mnist/*, 1998. 4 Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in Neural Information Processing Systems, volume 32, pp. 8570–8581. Curran Associates, Inc., 2019. 
2, 5, 11 Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, and Jascha SohlDickstein. Finite versus infinite neural networks: An empirical study. In Advances in Neural Information Processing Systems, volume 33, pp. 15156–15172. Curran Associates, Inc., 2020. 4, 11 Mufan Li, Mihai Nica, and Dan Roy. The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization. In *Advances in Neural Information Processing Systems*, volume 34, pp. 7852–7864. Curran Associates, Inc., 2021. 11, 12 Kevin P. Murphy. *Machine Learning: A Probabilistic Perspective*. MIT Press, 2012. 4, 17 Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep Double Descent: Where Bigger Models and More Data Hurt. *arXiv preprint arXiv.1912.02292*, 2019. 19 Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. 4 Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-Based Capacity Control in Neural Networks. In Proceedings of The 28th Conference on Learning Theory, 2015. 2 Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. Neural tangents: Fast and easy infinite neural networks in python. In *International Conference on* Learning Representations, 2020. 5 Liam Paninski. Estimation of Entropy and Mutual Information. In *Neural Computation*, volume 15, pp. 1191–1253. MIT Press, 2003. 12 Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On Variational Bounds of Mutual Information. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of* Machine Learning Research, pp. 5171–5180. PMLR, 2019. 
3 Andrew Michael Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan Daniel Tracey, and David Daniel Cox. On the Information Bottleneck Theory of Deep Learning. In *International Conference on* Learning Representations, 2018. 12 Shai Shalev-Shwartz and Shai Ben-David. *Understanding Machine Learning: From Theory to Algorithms*. Cambridge university press, 2014. 3, 19 Ravid Shwartz-Ziv. *Information Flow in Deep Neural Networks*. PhD thesis, The Hebrew University of Jerusalem, 2021. 2 Ravid Shwartz-Ziv and Alexander A Alemi. Information in infinite ensembles of infinitely-wide neural networks. In Proceedings of the 2nd Symposium on Advances in Approximate Bayesian Inference, volume 118 of Proceedings of Machine Learning Research, pp. 1–17. PMLR, 2020. 3, 12 Ravid Shwartz-Ziv, Amichai Painsky, and Naftali Tishby. Representation Compression and Generalization in Deep Neural Networks. *OpenReview*, 2019. 2, 3 Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In *International Conference on Learning Representations*, 2014. 2 Naftali Tishby. Information Theory of Deep Learning, 2017. URL https://youtu.be/bLqJHjXihK8?t=1051. 2 Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness May Be at Odds with Accuracy. In *International Conference on Learning Representations*, 2019. 20 Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding. arXiv preprint arXiv.1807.03748, 2018. 3 Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, 2017. 4 Aolin Xu and Maxim Raginsky. Information-theoretic analysis of generalization capability of learning algorithms. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. 
12 Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In *International Conference on Learning Representations*, 2017. 2, 7, 12 Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107–115, 2021. ISSN 0001-0782. doi: 10.1145/3446776. 2, 7 Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, and Peter Orbanz. Non-vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach. In International Conference on Learning Representations, 2019. 2, 12

Figure 5: We plot I(X;Z) upper (2) and lower (6) bounds corresponding to the illustrative EuroSAT example (Figure 1). Increasing the regularization to λ = 0.5 in b) and λ = 1.0 in c) reduces MI below log(Ntrn). Samples to the right of the vertical line in a), where IUB crosses log(Ntrn), are discarded for the main analyses. NB: We use natural units ("nats") for I(X;Z) here, but we convert to bits when evaluating the ICB.

## A Appendix

## A.1 Assumptions Of Input Compression Bound

It is assumed in the construction of the ICB that X is a d-dimensional random variable that obeys an ergodic MRF probability distribution, asymptotically in d. An MRF is an undirected graphical model for data distributions with a particular conditional independence structure, commonly used for spatial data, including images (see Murphy (2012, Ch. 19) for an introduction). Mathematically, this means that p(x) factorizes into a product of terms which represent the potentials for each clique on the underlying graph. In terms of correlations, this means that each pixel in a 2D image is strongly correlated with its immediate neighbors, but not with pixels that are further away. 
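The clique factorization just described can be stated explicitly. In the notation below (ours, following standard treatments such as Murphy (2012)), $\mathcal{C}$ is the set of cliques of the underlying graph, $\psi_c \ge 0$ are the clique potentials, and $Z_{\mathrm{MRF}}$ is the partition function (renamed here to avoid a clash with the representation $Z$):

```latex
p(x) \;=\; \frac{1}{Z_{\mathrm{MRF}}}\prod_{c\in\mathcal{C}}\psi_{c}(x_{c}),
\qquad
Z_{\mathrm{MRF}} \;=\; \sum_{x'}\prod_{c\in\mathcal{C}}\psi_{c}(x'_{c}).
```

For images, the cliques are small neighborhoods of pixels, which is exactly the locality property described above: conditioned on its neighbors, a pixel is independent of all non-adjacent pixels (the local Markov property).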
The "ergodic" part is essential for the derivation of the ICB: an ergodic MRF does not "get stuck" in any part of the state space; in other words, there is a nonzero probability for every possible state to be reached. Ultimately, this is the assumption necessary to invoke the Asymptotic Equipartition Property (AEP), which in turn allows us to invoke typicality. Defining the typical set is the crux of the ICB derivation, because it enables us to quantify the hypothesis-space cardinality in terms of entropy. From here, the rest follows from information-theory fundamentals.

## A.2 Lower Bound On MI

We may lower bound I(X;Z) using a bound of similar form to equation 2, based on a batch of N samples:

$$I(X;Z)\geq\mathbb{E}\left[\frac{1}{N}\sum_{i=1}^{N}\log\frac{p(z_{i}|x_{i})}{\frac{1}{N}\sum_{j}p(z_{i}|x_{j})}\right]=I_{\rm LB}\,,\tag{6}$$

where the expectation is taken over N independent samples from the joint distribution $\prod_{j} p(x_{j}, z_{j})$. The main difference between this bound and equation 2 is the inclusion of $p(z_{i}|x_{i})$ in the denominator.

## A.3 Illustrative Example And Filtering MI

We empirically verified that equation 6 and equation 2 yield similar results when $I_{\rm UB} < \log(N_{\rm trn})$ (Fig. A5).

Figure 6: ICB is plotted versus GE (0–1 loss) for FashionMNIST, SVHN, CIFAR-10, and EuroSAT datasets. The ICB satisfaction rate is annotated in the top left corner of each plot with format "ICB % (N)". Each binary classification task is assigned a unique colour to highlight inter-task differences in ICB satisfaction rate. See Figure 2 of §4.1 for the corresponding figure with GE expressed using the square loss. NB: Results for MNIST are omitted from this figure as they were similar to FashionMNIST.

Table 5: CNNs have consistently smaller average GE than MLPs for SVHN and CIFAR-10, as they do not fit the training set to the same extent as MLPs. 
Note that these average accuracies include models trained for both short and long periods of time; a low test accuracy here therefore does not imply a failure of learning.

| Dataset | Arch. | ICB Sat (Acc./MSE) (%) | Train (%) | Test (%) | GE (%) |
|----------|-------|------------------------|-----------|----------|--------|
| SVHN | CNN | 67.6%/95.7% (3914) | 74.78 | 64.18 | 10.60 |
| SVHN | MLP | 49.5%/86.2% (3980) | 79.82 | 67.07 | 12.74 |
| CIFAR-10 | CNN | 84.9%/80.5% (4153) | 85.84 | 76.12 | 9.72 |
| CIFAR-10 | MLP | 73.8%/68.0% (4318) | 88.13 | 76.70 | 11.43 |

Figure 7: The ICB satisfaction rate is plotted versus the number of training samples for SVHN and CIFAR-10 datasets. The primary vertical axis bounds GE using the 0–1 loss (GE=Acc), whereas the secondary vertical axis bounds GE using the square loss (GE=MSE). Here, we only consider the first three tasks for each dataset, i.e., "0 vs 1", "1 vs 2", and "2 vs 3", as an approximation for all nine tasks. The other datasets, FashionMNIST and EuroSAT, were excluded as they consistently satisfied ICB for all train-set sizes considered here.

## A.4 Bounding Generalization Error

**Loss function** We measured GE using the 0–1 and square loss (see Fig. A6 for the 0–1 loss). This change results in no difference in the overall ICB satisfaction rate for FashionMNIST, a decrease for SVHN from 95.7% to 67.6%, and a small increase for CIFAR-10 from 80.5% to 84.9%, as well as for EuroSAT from 93.6% to 95.7%.

**Train-set size** The number of training samples may have affected the ICB satisfaction rate for some datasets (Fig. A7). For this experiment, we included small training sets of size Ntrn ∈ {250, 500, 750} to observe a broader relationship between Ntrn and the ICB satisfaction rate. For SVHN, the ICB satisfaction rate appears to still be improving at N = 1500 despite a dip at N = 1000. We ran additional experiments with Ntrn ∈ {950, 1050, 1150} to assess the width of this dip for SVHN. 
The dip at N = 1000 is not necessarily alarming, as other studies have observed that more training data can sometimes hurt model performance; see, e.g., Nakkiran et al. (2019). The ICB satisfaction rate for SVHN may therefore benefit from additional training data given the overall positive trend. Conversely, there is no discernible trend for CIFAR-10.

**Task labels** There were considerable inter-task differences in ICB satisfaction rate (Table A6). The "Overall" row of Table A6 corresponds to the ICB satisfaction rates presented in Table 1, as well as in the top left corners of Fig. 2 and Fig. A6 for the square and 0–1 loss, respectively.

**Confidence** The confidence parameter δ in the PAC framework accounts for a small probability that the training set received by a learning algorithm is misleading, i.e., the samples are not representative of the true data-generating distribution, or they do not reflect all relevant details of the distribution (Shalev-Shwartz & Ben-David, 2014). It is natural to wonder how the choice of δ affects the ICB satisfaction rate. Recall from §4.1 that we refer to the percentage of tuples (ICB, GE) for which GE < ICB as the "ICB satisfaction rate", which should roughly equal 1 − δ expressed as a percentage, e.g., δ = 0.05 corresponds to 95% confidence. Decreasing δ to 0.01 from its default value of 0.05 increased the ICB satisfaction rate by 6.2% for SVHN on the 0–1 loss, compared to an expected ≈ 4% difference (99% − 95%). Increasing δ to 0.20 decreased the ICB satisfaction rate by 11.2%, compared to an anticipated difference of ≈ 15% (95% − 80%). Results were less sensitive to δ for CIFAR-10 and EuroSAT (≈ 1% difference for 99% − 95%), and independent of δ for FashionMNIST, as the compression term alone (i.e., from the numerator of equation 4) sufficed to bound the small GEs for this dataset.

Table 6: ICB satisfaction rates broken down by task for three datasets. MNIST and FashionMNIST are excluded for brevity as they were always at 100%. Columns "Acc." 
and "MSE" indicate whether GE is quantified in terms of classification accuracy (0–1 loss) or MSE (square loss), respectively. The "N" column indicates the number of valid experiments.

| | SVHN CNN | | | CIFAR CNN | | | EuroSAT MLP | | |
|---------|--------|--------|------|--------|-------|------|--------|--------|------|
| Task | Acc. | MSE | N | Acc. | MSE | N | Acc. | MSE | N |
| 0 vs 1 | 87.7% | 99.3% | 431 | 93.3% | 86.6% | 461 | 100.0% | 100.0% | 1012 |
| 1 vs 2 | 60.2% | 93.6% | 435 | 95.7% | 88.7% | 462 | 100.0% | 100.0% | 961 |
| 2 vs 3 | 51.8% | 94.1% | 440 | 69.9% | 69.2% | 468 | 82.2% | 82.2% | 1123 |
| 3 vs 4 | 78.7% | 97.2% | 431 | 74.5% | 71.0% | 459 | 97.9% | 86.0% | 1175 |
| 4 vs 5 | 94.2% | 97.0% | 434 | 71.3% | 71.3% | 460 | 100.0% | 100.0% | 1151 |
| 5 vs 6 | 30.0% | 88.6% | 440 | 74.8% | 72.8% | 460 | 100.0% | 100.0% | 1066 |
| 6 vs 7 | 85.7% | 97.0% | 433 | 91.6% | 83.2% | 463 | 83.5% | 78.3% | 1154 |
| 7 vs 8 | 84.4% | 97.7% | 430 | 100.0% | 94.1% | 461 | 100.0% | 94.9% | 1135 |
| 8 vs 9 | 37.1% | 92.7% | 440 | 93.5% | 87.4% | 459 | 100.0% | 100.0% | 860 |
| Overall | 67.8% | 95.2% | 3914 | 84.9% | 80.5% | 4153 | 96.0% | 93.5% | 9637 |

Figure 8: Train-set accuracy is plotted versus clean test accuracy (top) and robust test accuracy (bottom) using AWGN. The marker size indicates the ICB value, and each binary classification task is assigned a unique colour.

## A.5 Vacuous Or Non-Vacuous?

We plotted train-set accuracy versus test accuracy on the "clean" and "noisy" sets, indicating ICB values by the marker size. Large ICB values tend to occur for very high training accuracy and coincide with poor robust accuracy (Fig. A8). Since ICB increased considerably for the models with the highest clean test accuracy, we selected models with the greatest robust accuracy in §4.3. These results are consistent with previously observed accuracy versus robustness trade-offs (Tsipras et al., 2019). 
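For concreteness, the two minibatch MI estimators of §A.2 can be made numerically explicit. The sketch below is ours: the toy joint, variable names, and batch size are illustrative, and the upper bound follows the leave-one-out form of Poole et al. (2019), which we assume matches equation 2. Both bounds only require the N × N matrix of $\log p(z_i|x_j)$ values, which is available in closed form here because the toy conditional is Gaussian.

```python
import numpy as np

def logsumexp(a, axis):
    # numerically stable log of a sum of exponentials
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def mi_bounds(log_cond):
    """log_cond[i, j] = log p(z_i | x_j) for a batch of N (x, z) pairs.
    Returns (I_LB, I_UB): the minibatch lower bound of equation 6 and a
    leave-one-out upper bound (assumed form of equation 2), both in nats."""
    n = log_cond.shape[0]
    diag = np.diag(log_cond)  # log p(z_i | x_i)
    # lower bound: the denominator averages over all j, *including* j = i
    log_avg_all = logsumexp(log_cond, axis=1) - np.log(n)
    i_lb = float(np.mean(diag - log_avg_all))
    # upper bound: the denominator excludes j = i (leave-one-out)
    off_diag = log_cond[~np.eye(n, dtype=bool)].reshape(n, n - 1)
    log_avg_loo = logsumexp(off_diag, axis=1) - np.log(n - 1)
    i_ub = float(np.mean(diag - log_avg_loo))
    return i_lb, i_ub

# toy joint: x ~ N(0, 1), z = x + sigma * eps, so p(z|x) = N(z; x, sigma^2)
rng = np.random.default_rng(0)
n, sigma = 256, 0.1
x = rng.standard_normal(n)
z = x + sigma * rng.standard_normal(n)
log_cond = (-0.5 * ((z[:, None] - x[None, :]) / sigma) ** 2
            - np.log(sigma * np.sqrt(2.0 * np.pi)))
i_lb, i_ub = mi_bounds(log_cond)
print(round(i_lb, 2), round(i_ub, 2))  # I_LB can never exceed log(N) ≈ 5.55
```

This also makes the filtering rule of §A.3 tangible: the lower bound is capped at log(N) by construction, which is why trials with an estimate near log(Ntrn) are discarded.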
## B Effect Of Metaparameters On Theoretical Bound

We repeat the analysis from §4.4 for the CIFAR-10 dataset in Fig. A9, this time comparing SE results when GE is measured using the standard test set versus a test set with AWGN. Evaluating GE with a noisy test set yields a greater sample size after filtering out measurements where GE is too small to reliably measure a change in a classifier's true error. The SE values for the activation and diagonal metaparameters are also slightly larger for the standard GE, implying that ICB is better correlated with robust GE than with standard GE in our experiments for CIFAR-10. A detailed worked example showing how a column is computed in these plots is provided in Table A7. Smaller SEs are observed on average for the EuroSAT dataset than for CIFAR-10, but with greater inter-task variance (Fig. A10). The SE is consistently near zero for the time and diagonal metaparameters across all datasets.

Figure 9: **Evaluating GE on a noisy test set yields a greater sample size after accounting for Monte Carlo variance in GE estimates, and slightly smaller sign-errors.** Plots show GE prediction SEs (where *lower* is better) for six different metaparameter interventions on nine binary classification tasks {0 vs. 1, 1 vs. 2, …, 8 vs. 9} for CIFAR-10. Numbers above the SE for even-numbered tasks indicate the number of samples N comprising the respective SE measurement. See Table A7 for a worked example showing how a column is computed in these plots.

Figure 10: **Evaluating SE for EuroSAT with AWGN test set.** Other details are similar to those described in the captions of Fig. 4 and Fig. A9.

Table 7: Detailed GE prediction SE for all combinations of the depth metaparameter for the CIFAR AWGN 8 versus 9 task. The first two columns ("Raw") comprise all SEs with only basic filtering for IUB(X;Z) ≤ log(train samples). 
The next set of columns ("Filtered") additionally accounts for the Monte Carlo variance of empirical averages and discards samples with a small difference in GE relative to the number of train and test samples (see text for details). The final "filtered" weighted-average SE of 61.5% (N = 278) appears in Fig. A9b (series 8/9).

| Raw | | Filtered | | | Depth | |
|-------|------------|-------|------------|-----------|--------|-------|
| N | Sign-error | N | Sign-error | Weighting | Before | After |
| 94 | 46.8% | 34 | 35.3% | 12 | 1 | 2 |
| 93 | 49.5% | 42 | 45.2% | 19 | 1 | 3 |
| 91 | 51.6% | 55 | 49.1% | 27 | 1 | 4 |
| 87 | 54.0% | 60 | 51.7% | 31 | 1 | 5 |
| 93 | 49.5% | 4 | 100.0% | 4 | 2 | 3 |
| 91 | 62.6% | 22 | 100.0% | 22 | 2 | 4 |
| 87 | 83.9% | 32 | 87.5% | 28 | 2 | 5 |
| 91 | 73.6% | 1 | 100.0% | 1 | 3 | 4 |
| 87 | 90.8% | 26 | 96.2% | 25 | 3 | 5 |
| 84 | 95.2% | 2 | 100.0% | 2 | 4 | 5 |
| | | 278 | 61.5% | | | |

Figure 11: ICB (bottom) ranks GEs better than I(X;Z) alone (top) for different training set sizes. Shown are 750 fully-connected NTK ReLU models trained (t = 0) on a CIFAR-10 binary classification task (classes 2 and 5) using three different training set sizes of N = {500, 1000, 2000} and a test set with N = 2000. Model depth is indicated by the colour intensity of each series, where the darkest shade indicates the maximum depth of five (5) layers. Three GE types are evaluated: Clean (standard), AWGN (adversarial), and Fast Gradient-Sign Method (FGSM) (adversarial), plotted with respect to I(X;Z) (top row) and ICB (bottom row). Plotting GE versus the ICB better aligns results for different training-set sizes (N) compared to I(X;Z), and yields a better ranking in terms of Kendall-τ.

## B.1 Advantage Of ICB Versus MI

To gain further insight into ICB, we examine GEs for a specific CIFAR-10 binary classification task (classes 2 and 5) using three different training set sizes. 
Plotting GEs with respect to I(X; Z) alone yields a poor overall ranking, whereas ICB effectively aligns trials with different training set sizes (Figure 11).
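The ranking comparison in Figure 11 boils down to a rank-correlation computation. A minimal Kendall-τ (the tau-a variant, assuming no ties) over hypothetical (bound, GE) pairs might look like the following; the example values are ours, not the paper's data:

```python
def kendall_tau(a, b):
    """Kendall rank correlation (tau-a; assumes no ties): the fraction of
    concordant minus discordant pairs among all n*(n-1)/2 pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# hypothetical bound values and observed generalization errors for four
# models: the bound orders the models exactly as their GEs
bounds = [0.9, 0.4, 0.7, 0.2]
ges = [0.30, 0.10, 0.20, 0.05]
print(kendall_tau(bounds, ges))  # → 1.0
```

A value of 1.0 means the bound ranks the models' GEs perfectly, −1.0 means the ranking is exactly reversed, and values near 0 mean the bound carries little ordering information; it is this statistic that improves when GE is plotted against ICB rather than I(X;Z).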
Review 1: Summary: This work studies the problem of generalization error estimation without the use of validation samples. The authors studied the mutual information (MI) between inputs and final-layer representations of deep neural networks, and derived a so-called input-compression bound on generalization error ($GE_\mathrm{ICB}$). Then, the authors carried out empirical studies to investigate the correlation between MI and the generalization error. Specifically, the authors conducted several sets of experiments to investigate the nexus between theoretical GE bounds and the observed generalization errors. The results demonstrate the feasibility of using ICB to estimate generalization errors. Strengths and Weaknesses: Strengths 1. The authors studied an interesting and important problem in the field. 2. The authors clearly stated the derivation of $GE_{ICB}$ from statistical learning theory (e.g., PAC) and information theory. 3. Several experiments have been done to demonstrate the feasibility of the proposal. Weaknesses: 1. Eq. (3) should hold with a probability of at least $1-O(\delta)$ or $1-\delta$. So, how would the confidence factor $\delta$ influence the estimation? If the ``ICB satisfaction rate'' equals $1-\delta$, please make this clear in the text. I suggest completing the inequality in Eq. (3) with the probability term and introducing the detailed definition of $\delta$ there. Another issue is that the incorporation of $\delta$ makes ICB a tradeoff between mutual information (input-output) and the confidence level of the GE estimate. Please discuss this issue in the revision. 2. The experimental results are extensive and verbose. Could you please enumerate the hypotheses or expected observations you hope to see throughout your experiments, so as to test your major hypothesis on ICB and GE? Please highlight the claims arising from your experiments. 3. The authors discussed the correlations between ICB and GE, and mentioned the NeurIPS competition on generalization error prediction. 
I am wondering if it is possible to include more competitors from the competition to compare with ICB? I understand that ICB is a more comprehensive tool with confidence bounds. Requested Changes: Please consider including the three points in the weaknesses section in the revision. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper empirically evaluates the Input Compression Bound (ICB) as a generalization bound. The ICB directly links generalization with the mutual information between input X and representation Z. Along three axes: tightness of the bound (vacuous or non-vacuous), percentage of trials satisfying the bound, and ranking of generalization errors, the authors show that: for many datasets, the ICB has a good satisfaction rate (although some have a low satisfaction rate); a randomization test may be used to help identify when ICB performs badly as a GE bound for a variety of models; it is non-vacuous; etc. Overall, through the extensive empirical study, the authors show that ICB is generally effective and useful for indicating generalization error. Strengths and Weaknesses: Strengths: (1) The problem this paper addresses is important, and the experimental results are interesting, which can provide good material for further discussion in the community. (2) The experiments are performed in a quite thorough way and are, to my knowledge, sound. Weaknesses: (1) It may be useful to compare with some other generalization bounds, as mentioned in the related works. Even though this is difficult, as said by the authors, it may still be useful to see the relative strengths of different generalization bounds. (2) The writing of the experiment section could be clearer. Right now there are many observations, but they are not presented in a clear and well-organized way. (3) For some datasets (e.g., SVHN, CIFAR), the ability of ICB to bound generalization error (measured by satisfaction rate) is low. 
The authors observe that these are also the ones that can be identified via the randomization test. Do the authors have an intuitive explanation/hypothesis for this? If so, the authors could also perform experiments to verify or disprove the hypothesis. In this way, we can understand the failure modes of ICB more deeply, and may even improve it. Requested Changes: Corresponding to the weaknesses above, (2) is critical for acceptance; a deeper explanation for (3), and a comparison with other generalization bounds as in (1), would significantly strengthen the work. Minor: 1. Can the authors explain in the text or Appendix how the \lambda regularization is used? It is not clear in the current manuscript. Broader Impact Concerns: There is no need for a Broader Impact Statement. ================================================== Review 3: Summary: In this paper, the authors empirically explore an MI-based bound to upper bound generalization error. Their theoretical generalization error bound is based on a compression technique that relates inputs and final representations. Experimental evidence across a few architectures and datasets shows the applicability of this bound in practice. Strengths and Weaknesses: **Strengths**: - This paper presents an empirical analysis of an existing bound. This is particularly important as it can highlight how bounds derived under restrictive settings behave more generally. **Weaknesses**: - Given that the abstract highlights "an attempt to falsify the bound", it would be great to explicitly list all the reasons why one should expect the bound to be incorrect. This should include the settings under which the bound is theoretically applicable, clearly highlighting how they may not hold in practice. - The language in the abstract, "In our attempt to empirically falsify the theoretical bound," is a bit strange. To the best of my understanding, this paper attempts to empirically study a bound derived theoretically under some strong assumptions. 
If this understanding is correct, calling it an attempt to empirically falsify the bound is incorrect language in my opinion, as the goal is not to falsify the bound in the settings it was derived in. - Figure 1 shows the bound with a shaded grey region. It is unclear why the bound is shown with a shaded region. Shouldn't it simply be a line? While it may mean that the test error can lie anywhere in between, the current exposition is not highlighting this clearly. - Figure 2, and more broadly the experiments in that figure, seem a bit weird. While it makes sense to have values below the y = x line in order to verify the bounds, it is unclear why this is the only criterion used in the paper. A bound as loose as predicting an error of 100% would always be correct according to the `satisfaction' criterion used in Figure 2. To understand the tightness of the bound, shouldn't the bound be close to the line y = x? - Since one of the goals of the paper is to find bounds that correlate well with GE, the paper should include a discussion of (and perhaps a comparison with) other related work that leverages unlabeled data to obtain bounds [1,2]. - Moreover, a discussion of other mutual-information bounds must be included in the paper [3,4,5]. - The contribution of this work is also a bit slim. First, it is unclear why the authors only focus on the bound in equation (4); what are the (desirable) properties of this bound that motivate studying only this bound? Second, it would be great if the authors could add a comparison to other MI-based bounds derived in the related work highlighted above. References: [1] Garg et al. RATT: Leveraging Unlabeled Data to Guarantee Generalization. ICML 2021. [2] Jiang et al. Assessing Generalization of SGD via Disagreement. ICLR 2022. [3] Achille and Soatto. Emergence of Invariance and Disentanglement in Deep Representations. JMLR 2018. [4] Harutyunyan et al. Improving Generalization by Controlling Label-Noise Information in Neural Network Weights. ICML 2020. [5] Xu and Raginsky. 
Information-theoretic analysis of generalization capability of learning algorithms. NeurIPS 2017. Requested Changes: Please see the comments in the weaknesses section above. Addressing all of those bullets would be crucial to securing an acceptance. Moreover, to make the work complete, it would be great to include the following (not necessary to secure acceptance): - A formal proof of equation 4 should be included in the paper. While the authors cite Shwartz-Ziv 2019, a formal proof of equation (4) with consistent terminology would help readers, as the paper primarily uses that bound. Broader Impact Concerns: Not applicable ================================================== Review 4: Summary: This paper performs an empirical study of an information bottleneck-inspired generalization bound, ICB. The evaluation of the bound leverages relatively recent empirical approaches to measuring the mutual information, and the work applies this methodology to answer a range of questions about the ICB in an infinite-width model of deep neural networks. First, the paper attempts to falsify the bound on real-world data, succeeding on some datasets and suggesting that the assumptions required in the derivation of the ICB do not hold for many real-world datasets. It goes on to study the ability of the ICB to measure a network's ability to fit random labels, and to distinguish between natural and randomly re-labelled datasets, finding that the bound can distinguish between random and natural labels in some settings but not others. Finally, the work evaluates the correlation between the value of the bound and empirical generalization performance. Strengths and Weaknesses: ## Strengths - The bound considered by the paper is not one that has been studied in great depth in the past -- most work on empirical generalization bounds has focused on those based on flatness of the local minimum, rather than looking at compression of the input space. 
- The paper considers a number of different datasets which are reasonably diverse (though all vision), and identifies interesting differences in the correctness and predictive power of the proposed bound between these datasets. - The work further studies a number of interesting experimental settings and asks thoughtful questions about the proposed bound, in particular: 1) evaluating the effect of random labels on the bound, and 2) studying the predictive power, as opposed to correctness, of the bound on real architectures. - The presentation and writing in the paper are polished — typos are not an issue, and the text flows nicely. ## Weaknesses - There is a significant lack of clarity in how the practical instantiations of the bound are computed: in particular, the work claims to evaluate the MI between a feature and the input, but also claims to use the NTK regime, in which to the best of my understanding networks don't do feature learning. - There are further potential issues with the MI estimation procedure, as the features on the training set are themselves a function of the training set, so estimating $I(X, Z_\theta)$ shouldn't use the training set $X$ because of the dependence of the parameters $\theta$ (and hence the features $Z_\theta$) on $X$. It is possible that the procedure used gets around this issue but I wasn't able to determine this from the information provided in the paper. - The assumptions required for the ICB bound are not explicitly stated, and based on the cited paper they are in fact crucial for the validity of the bound. The paper should include a discussion of these assumptions and also evaluate the degree to which they are violated in the different datasets. This seems particularly relevant as the justification for ICB seems to be a limiting argument, and it is not clear how this scales with the dimension and number of samples in the problem.
It is further unclear whether the generalization bound as presented in the paper is valid for any finite problem. - The previous point is crucial and relates to a more philosophical issue I had with the paper: a generalization bound is either correct or not, and therefore there is no reason for it to be empirically "falsified". The correctness of the bound stems from the correctness of its proof, and empirical investigation should not be necessary. An empirical falsification of a bound thus indicates one of two things: either the proof of the bound is incorrect, or the assumptions of the bound do not hold in the setting studied. - The study of infinite-width networks is not well-motivated: why not also include deep neural networks of finite width, as it is in these settings where feature learning, and hence input compression, will play a more significant role in the learning process? If the bound is not in fact a provable upper bound on the generalization performance of a model but rather a heuristic, then a meaningful contribution to the literature would require a more detailed empirical analysis of the heuristic, as the current empirical investigation is limited in the scope of problems considered and the detail of the analysis. This work only appears to consider binary classification problems, for example, and the differential performance in predicting generalization error and robust generalization error isn't explained or motivated: what does robust generalization tell us about the ICB that classical generalization doesn't? Requested Changes: Motivated by my comments on the weaknesses of the paper, there are three major categories of changes I would like to see in the paper before I would be comfortable recommending it for publication. 1. Please provide a more detailed description of how the latent variable Z is computed in infinite width networks and prove the validity of the mutual information estimator. 2.
The discussion of empirical falsification needs to be rewritten to acknowledge whether this is aiming to verify the correctness of the proof (seeing as this result has not to my knowledge been formally presented in peer-reviewed proceedings) or the validity of the assumptions. 3. I would also like to see a more detailed empirical analysis that explains how the observed correlation between the bound and generalization performance arises. For example, how does generalization under interventions on different meta-parameters correlate with the generalization bound along the lines of [Dziugaite et al., 2020] — is ICB better at predicting generalization under some hyper-parameter selection problems than others? Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Accept as is Comment: The authors have engaged with the reviewers and through rebuttals, addressed reviewers' concern in a satisfactory way. On a personal note, I found the manuscript to be an enjoyable read, and I appreciate the care the authors have put into explaining their results. I expect the camera-ready to reflect the discussions with reviewers. Congratulations on a fine piece of work and for your contribution to TMLR! ==================================================
# mixTrain: Accelerating DNN Training via Input Mixing

Anonymous authors Paper under double-blind review ## Abstract Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to the long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore the use of input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each of the constituent inputs that it represents. This results in a lower number of inputs (or mini-batches) to be processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce the impact of inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations. We also adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2 and two Vision Transformer networks, we obtain up to 1.6× and 1.8× speedups in training for the ImageNet and Cifar10 datasets, respectively, on an Nvidia RTX 2080Ti GPU, with negligible loss in classification accuracy.
## 1 Introduction

The success of deep neural networks has come at a cost of rapidly rising computational requirements for training. This increase is due to a combination of rising dataset and model complexities. For example, in the context of image classification, training dataset complexity increased significantly from MNIST and CIFAR-10/100 (50,000 - 60,000 images) to ImageNet-1K (1.2 million) and ImageNet-21K (14.2 million). This is supplemented by a growth in model complexity required to achieve state-of-the-art performance (Stojnic et al., 2023). The impact of increased training computation is both monetary (cost to train) and environmental (CO2 emissions) (Strubell et al., 2019). A study from OpenAI (Amodei et al., 2018) reports that training costs of deep neural networks have been doubling every 3.5 months, greatly outpacing improvements in hardware capabilities. **Prior Efforts on accelerating DNN Training:** Several methods have been proposed to accelerate DNN training. We divide them into a few broad categories, such as enabling the use of large-scale parallelism (e.g., hundreds or thousands of servers) in DNN training (You et al., 2017; Goyal et al., 2017), training on reduced-resolution inputs (Touvron et al., 2020; Tan & Le, 2021), training at reduced precision (Sun et al., 2019), pruning to reduce the model size during training (Lym et al., 2019), input instance skipping (Jiang et al., 2019; Zhang et al., 2019) and dataset condensation (Mirzasoleiman et al., 2020; Killamsetty et al., 2021). **mixTrain: Accelerating DNN Training by mixing inputs:** Complementary to the aforementioned efforts, we propose the use of input mixing, a technique that has traditionally been used for data augmentation (Zhang et al., 2017; Yun et al., 2019), to accelerate DNN training. Consider two training inputs x1 and x2. A mixing function F is applied to x1 and x2 to produce a *mixed input* X.
The mixed input can be thought of as a point in the input space that combines information from both the constituent inputs that it represents. From the functional perspective, training on a mixed input must produce a similar effect on the model as training on the individual constituent inputs. On the other hand, from a computational viewpoint, mixing inputs reduces the number of input samples that need to be processed during training. This reduction in the effective size of the training dataset leads to fewer mini-batches in each epoch, and thereby lower training time. Due to the nature of input mixing, it is complementary to, and can be combined with, the other approaches to accelerate training described above. In mixTrain, we adopt computationally lightweight mixing operators CutMix and MixUp that have been proposed for a different purpose, *viz.* data augmentation (Zhang et al., 2017; Yun et al., 2019). As illustrated in Fig. 1, MixUp performs a simple weighted linear averaging of the pixels of two inputs, while CutMix randomly selects a patch of one input and pastes it onto the other. Realizing training speedups through input mixing raises interesting questions, such as how to train networks on mixed samples, which samples to mix, etc. We observe that indiscriminate application of mixing leads to a considerable drop in learning performance and model accuracy. On further investigation, we find that this can be attributed to the interference between the processing of the constituent inputs within each mixed input. To preserve accuracy, we therefore propose techniques to mitigate this interference. We find that for the CutMix operator, the network's internal features largely maintain spatial separation between the constituent inputs in convolutional layers, but this separation is lost in the fully connected layers. 
We thus propose *split propagation*, wherein the features corresponding to each constituent input are processed separately by the fully connected layers. In contrast, with the MixUp operator, spatial separation between the constituent inputs is not maintained. Here, we mitigate the impact of interference through *adaptive* mixing, where the weights of the constituent inputs are varied based on their losses in previous epochs. Additionally, we explore applying mixing selectively, i.e., only for a subset of training inputs in each epoch. We design a loss-driven metric to identify the training samples that are amenable to mixing in each epoch. We find that inputs at the two ends of the loss distribution, i.e., with very low and very high loss magnitudes, are amenable to mixing. Low-loss inputs are mixed because their functional performance remains largely unaffected by mixing. In contrast, we mix samples with high loss because a considerable percentage of such samples are unlikely to be learned even when no mixing is applied. We show that mixTrain achieves superior accuracy vs. efficiency tradeoffs compared to alternative approaches such as input skipping and early termination. Finally, we note that mixTrain is designed in a completely hyper-parameter-free manner. This reduces the additional effort spent on hyper-parameter tuning for different models. The key contributions of this work can be summarized as follows. - To the best of our knowledge, mixTrain is the first effort to reduce the complexity of DNN training by mixing inputs. - We propose two strategies to improve the learning performance of mixTrain. First, we propose split propagation and adaptive mixing to reduce the impact of interference between the constituent inputs in a composite sample. Second, we apply mixing selectively, i.e., only on a subset of the training dataset every epoch.
- Across our benchmarks consisting of both image recognition CNNs (including ResNet18/34/50 and MobileNet) and vision transformers, we demonstrate up to 1.6× and 1.8× improvement in training time on the ImageNet and Cifar10 datasets respectively for ∼0.2% Top-1 accuracy loss on a Nvidia RTX 2080Ti GPU, without the use of additional hyper-parameters.

## 2 Input Mixing: Preliminaries

Input mixing takes multiple inputs and combines them into a composite input, taking in information from each of the constituent inputs. mixTrain uses two operators - MixUp (Zhang et al., 2017) and CutMix (Yun et al., 2019), which are illustrated in Fig. 1. Consider two inputs, x1 and x2. For MixUp, as seen in Equation 1, each pixel j of the composite input X is obtained by linearly averaging the corresponding pixels of x1 and x2. The mixing ratio r is in the range [0, 1]. The CutMix operator selects a random patch of x1 and pastes it onto x2. The weight r of each input xi is decided by its area in the composite sample.

$$X_{j}=r\cdot x_{1,j}+(1-r)\cdot x_{2,j}\tag{1}$$

Further, let us assume the target labels of the constituent inputs are y1 and y2. In (Zhang et al., 2017; Yun et al., 2019), the loss of the composite input X is defined as the weighted sum of the loss of X with respect to y1 and y2, as shown in Equation 2 for the cross-entropy loss. Here, f is the DNN model, and K the number of classes.

![2_image_0.png](2_image_0.png)

Input mixing has previously been applied for data augmentation, wherein randomly selected training input samples are combined through operators such as those of (Zhang et al., 2017; Yun et al., 2019) and added to the training set. Training on the randomly combined input samples has the effect of virtually augmenting the dataset, as the model is exposed to new training samples in each epoch. These efforts are focused on improving generalization, often achieved at the cost of increased training time.
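As a concrete illustration of the two operators, a minimal NumPy sketch of Equation 1 and the CutMix patch-pasting step follows. This is our own illustration, not the authors' implementation; the patch-sampling details (uniform patch size and position) are assumptions.

```python
import numpy as np

def mixup(x1, x2, r):
    """MixUp (Eq. 1): element-wise weighted average of two inputs with ratio r.
    The composite label is the pair (y1, y2) weighted by (r, 1 - r)."""
    return r * x1 + (1.0 - r) * x2

def cutmix(x1, x2, rng=None):
    """CutMix: paste a random patch of x1 onto x2 (channel-first C x H x W images).
    Returns the composite input and the weight r of x1, given by the
    patch area relative to the full image, as described in the text."""
    if rng is None:
        rng = np.random.default_rng()
    _, h, w = x1.shape
    ph, pw = int(rng.integers(1, h + 1)), int(rng.integers(1, w + 1))  # patch size
    top = int(rng.integers(0, h - ph + 1))
    left = int(rng.integers(0, w - pw + 1))
    x = x2.copy()
    x[:, top:top + ph, left:left + pw] = x1[:, top:top + ph, left:left + pw]
    r = (ph * pw) / (h * w)  # area-based weight of x1 in the composite sample
    return x, r
```

Either composite input would then be trained with the weighted loss of Equation 2, using r (or its area-based analogue) as the label weight.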
Specifically, the total number of input samples in each epoch of training after mixing remains the same. Further, in order to realize improvements in accuracy, these techniques often require 2-3× more training epochs than baseline SGD (Yun et al., 2019; Zhang et al., 2017). Figure 1: Mixing operators (a) MixUp (b) CutMix $$Loss(X)=-(\alpha\cdot log(\frac{e^{f(X)_{y_{1}}}}{\sum_{l=1}^{K}e^{f(X)_{l}}})+(1-\alpha)\cdot log(\frac{e^{f(X)_{y_{2}}}}{\sum_{l=1}^{K}e^{f(X)_{l}}}))\tag{2}$$ ## 3 Mixtrain**: Accelerating Dnn Training Via Input Mixing** The key idea in mixTrain is to improve the overall training time by dynamically applying the mixing operators, MixUp and CutMix, on the training dataset D to reduce the number of samples in each epoch. However, naive mixing, e.g., where random pairs of input samples are mixed in each training epoch to reduce the number of training samples by half, negatively impacts classification accuracy. As observed in Fig. 2(a), on the ImageNet-ResNet50 benchmark, the drop in accuracy incurred after training on the reduced (i.e., halved) dataset obtained after applying either operator is nearly 4-6%. The following subsections discuss the two key strategies that are critical to the overall success of mixTrain, namely, reducing the impact of interference between constituent inputs and selective mixing. ## 3.1 Reducing Impact Of Interference In this subsection, we discuss the primary cause affecting the accuracy of training with naive mixing, *i.e.*, interference between constituent inputs, and propose techniques to address the same. We begin by analyzing the ability of a network trained with mixed inputs to correctly classify the constituent inputs of a composite sample. At different stages of training (different training epochs), we identify the set of training samples that the network classifies correctly without mixing, say set S. 
Our goal is to understand how the network fares in classifying the samples in set S after they have been mixed. Specifically, we study the network's performance in detecting the presence of both constituent inputs in the mixed sample. Consider inputs x1 and x2 in S mixed with ratio r = 0.5 to form X, which is passed through the network. The network detects constituent inputs x1 and x2 in X when the softmax scores of their corresponding class labels occupy the highest and second-highest positions (the order of x1 and x2 is interchangeable). Only a single input is detected when the class label of one of the constituent inputs has the highest softmax score (say x1), while the second-highest score is achieved by a class not corresponding to the second constituent input (i.e., other than x2).

![3_image_0.png](3_image_0.png)

Figure 2: Classification performance with mixed inputs

Samples in set S are thus mixed in pairs (r = 0.5), and the accuracy on the mixed inputs is recorded. Five such runs are conducted to allow for different random input combinations, and the results are averaged and presented in Fig. 2. Surprisingly, after mixing is applied, the network is able to classify fewer than half of the inputs in S (green and blue dotted curves in Fig. 2(b)), even in the final epochs of training; note that these were inputs that were classified correctly without mixing (black line). On further investigation, it is found that for many mixed inputs, the network is able to correctly classify only one of the constituent inputs. The class label of the other constituent input often does not appear even amongst the Top-5 predictions made by the network. This leads to increased loss for one of the constituent samples, consequently impacting training performance and the final validation accuracy. It is thus critical to develop techniques that effectively learn on all constituent samples of a composite input. We next describe our approach to addressing this challenge.
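The detection criterion described above can be condensed into a small sketch (our own paraphrase; `scores` denotes the network's softmax output vector for the mixed sample):

```python
import numpy as np

def constituents_detected(scores, y1, y2):
    """Count the constituent inputs detected in a mixed sample:
    both are detected when their class labels occupy the two highest
    softmax positions (in either order); one is detected when only the
    top score matches a constituent label; otherwise none."""
    top2 = set(int(i) for i in np.argsort(scores)[-2:])
    if top2 == {y1, y2}:
        return 2
    return 1 if int(np.argmax(scores)) in (y1, y2) else 0
```

For example, with scores `[0.1, 0.5, 0.3, 0.1]` and constituent labels 1 and 2, both inputs are detected; with labels 1 and 3, only the first is.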
Split Propagation: We identify two factors that contribute to the poor classification accuracy of a mixed input's constituent inputs in the case of the CutMix operator. First, due to the random nature of the patch selected from a constituent input, it is possible to miss the corresponding constituent input's class object. Second, there may be interference between the features of the constituent inputs when the network processes the mixed sample. To design effective strategies that improve overall classification performance, it is important to understand the individual effect of each factor. We study the impact of the first factor by passing random patches from the inputs through the network; however, instead of mixing, random patches amounting to half the input area are zeroed out. As shown using the solid orange curve (ZeroPatch) in Fig. 2(b), the drop in accuracy is only ∼16%, and is significantly lower compared to mixing. This indicates that it is the interference between the constituent inputs that is the primary factor causing degradation in classification performance. Examining the intermediate representations of the network while processing mixed inputs sheds some light on this interference. By virtue of the nature of convolutions, the spatial separation between constituent inputs in the composite input is maintained through many layers of the network, with only mild interference occurring at the boundaries of the inputs. For example, in Fig. 3, the right half of the features in the final convolution layer's output pertains to the right half of the mixed input. The spatial distinction between the features is maintained until the last convolutional layer, but is lost after the averaging action of the final pooling layer. As a result, the fully connected layer correctly classifies only one of the constituent inputs1.
To aid the network in classifying both constituent inputs correctly, we propose split propagation of constituent features after the final convolution layer. As shown in Fig. 3, we identify the region in the final convolutional layer's output maps pertaining to each constituent input, and pass the features separately through the remaining layers of the network. Both constituent inputs of mixed samples are now classified correctly, leading to a significant improvement in classification performance (solid blue curve in Fig. 2(b)). During back-propagation, the output errors of each constituent input are propagated separately until the average pooling layer. The error tensors obtained at the input of the average pooling layer are then concatenated and propagated backwards across the rest of the network.

1(Zhang et al., 2017; Yun et al., 2019) resolve this issue by exposing the constituent inputs twice in each epoch through two different mixed inputs. While this improves accuracy, it defeats our objective of improving training runtime

![4_image_1.png](4_image_1.png)

Figure 3: Training Mixed Inputs

![4_image_0.png](4_image_0.png)

The classification loss for the constituent inputs improves, thereby improving overall validation accuracy (Fig. 2(a)). We note that the split propagation of the constituent inputs can be performed in parallel. Thus, the runtime overheads of this scheme are negligible, accounting for < 3% of overall training time. Adaptive Mixing: Unlike CutMix, the MixUp operator averages each element of the constituent inputs prior to being processed by the network. Therefore, the network's internal representations do not exhibit any spatial separation between the constituent inputs. We thus devise alternative strategies to mitigate the impact of inter-input interference. It appears from Fig. 2(a) that the validation accuracy with MixUp is even lower compared to CutMix, due to a slower rate at which training loss improves for the mixed inputs.
Naturally, a simple boost in performance can be achieved by at least improving the loss for one of the constituent inputs of the mixed input. We thus adapt the weight (r) of the constituent inputs so as to favour the more difficult input, as identified by the loss in the previous epoch. However, if the constituent samples were mixed in the previous epoch, it is not trivial to obtain their individual losses prior to mixing. To that end, we utilize an approximation to evaluate the losses of the constituent inputs in the previous epoch, described as follows. Consider two constituent inputs x1 and x2 with target labels y1 and y2 respectively, that have been mixed with ratio rE in epoch E (Equation 3) to form the composite sample X. As seen in Equation 4, we use the loss of the network on the mixed input X to estimate its loss on the individual constituent inputs. Here, K stands for the number of classes in the task. While estimating the loss of x1 and x2 in such a manner is indeed an approximation, this allows us to avoid an additional forward propagation step to estimate the true loss of x1 and x2, thereby alleviating any runtime overhead.

$$X=r_{E}\cdot x_{1}+(1-r_{E})\cdot x_{2}\tag{3}$$

$$Loss(x_{1},E)=-log(\frac{e^{f(X)_{y_{1}}}}{\sum_{l=1}^{K}e^{f(X)_{l}}})\quad Loss(x_{2},E)=-log(\frac{e^{f(X)_{y_{2}}}}{\sum_{l=1}^{K}e^{f(X)_{l}}})\tag{4}$$

Once the losses of the constituent inputs have been obtained, we mix them in the next epoch E + 1 with the ratio rE+1 as shown below. As seen in Fig. 2(a), this provides a boost in classification accuracy.

$$r_{E+1}=\frac{Loss(x_{1},E)}{Loss(x_{2},E)}\tag{5}$$

Note that there is still some gap between the accuracy with and without mixing even after the use of split propagation and adaptive mixing, which we address next.

## 3.2 Selective Mixing

We explore a second strategy, selective mixing, to further improve accuracy when training with mixed inputs.
Here, the general principle is to dynamically identify a subset of the training dataset in each epoch for which mixing does not have a negative impact on overall classification performance. We achieve this through the design of a loss-based amenability metric that determines, for each epoch, the subset of samples Smix that can be mixed in subsequent epochs. Samples that are not amenable to mixing are added to set S*noMix*. The training dataset is thus formed using samples in S*noMix* as is, and mixing pairs of samples in Smix. Overview: The proposed selective mixing strategy consists of three steps, as shown in Fig. 4. At every epoch, the reduced dataset is divided into mini-batches and fed to the network. The network performs the forward and backward passes on each mini-batch. Once the forward pass for a particular mini-batch is complete, the loss of each constituent input is computed. This is used to determine the amenability of each constituent input to mixing in the next epoch E+1, subsequent to which it is added appropriately to Smix or S*noMix*. Finally, the batch-sampler forms mini-batches for the epoch E+1 by randomly drawing samples from either Smix or S*noMix*. The first and the third steps are straightforward. In the following sub-section, we elaborate on the second step, i.e., determining the amenability of a sample to mixing, in greater detail.

## 3.2.1 Evaluating Amenability To Mixing In Epoch E

A suitable loss-based metric must estimate the subsets Smix and S*noMix* every epoch, such that no negative impact on accuracy is suffered. We design such a metric by studying trends in the loss of a sample prior to and after mixing, at different stages of the training process. Consider models trained with MixUp and CutMix at three different training epochs as shown. At each selected epoch, we compute the L1 difference of the loss of every sample x with and without mixing, i.e., lossmix(x) and *loss(x)* respectively.
We define lossmix(x) as the loss of the mixed sample x′ with respect to the golden label of x, as shown in Equation 6. Here, K is the number of classes, and y is the golden label of x. We average lossmix(x) over 5 different random pairings to create x′.

$$Loss_{mix}(x)=-log(\frac{e^{f(x^{\prime})_{y}}}{\sum_{l=1}^{K}e^{f(x^{\prime})_{l}}})\tag{6}$$

We observe that lossmix(x) deviates and increases further away as *loss(x)* increases, consistently across the benchmarks analyzed for both operators (Fig. 5(a) depicts the same for CutMix). In other words, the graph indicates that *as loss(x) increases, its amenability to mixing decreases*. Furthermore, we find that prior to mixing, a majority of the correctly classified samples occupy the low-loss regime, as shown in Figure 5(a).

![5_image_0.png](5_image_0.png)

Figure 4: Overview of Selective Mixing

![6_image_0.png](6_image_0.png)

After applying mixing to these samples, we find that their classification accuracy is largely retained, especially as epochs progress, as depicted in Fig. 5(b) for the CutMix operator.

Figure 5: Analyzing amenability to mixing

Hence, for samples that are not mixed in epoch E, we determine their amenability to mixing in the next epoch based on the particular region of the loss distribution they belong to. As illustrated in Fig. 5(c), the loss distribution is divided into three regions that use different criteria for gauging amenability. We now discuss the criteria for each region, and the conditions for continuing mixing in subsequent epochs. Region 1 corresponds to the area in the loss distribution where a majority of the correctly classified samples are located. From Fig. 5(b) we know that the loss, and to a certain extent the classification accuracy, of such samples remains largely unaffected by mixing; such samples are hence mixed aggressively. Next, we consider the portion of the loss distribution occupied by the incorrect samples and divide this space into two regions.
Region 2 comprises incorrect samples with moderate loss. To avoid any negative impact on accuracy, we avoid mixing these samples. Moving on to Region 3, these are samples the network finds very difficult to classify, as characterized by their high loss magnitudes. We find that the training effort can be reduced on samples that consistently occur in Region 3 by mixing them, as they are unlikely to contribute to final classification accuracy. The separations in the loss distribution are realized using simple linear clustering techniques that correlate the loss of a training sample in some epoch E to classification accuracy, based on trends in previous epochs. Let L*corr* and L*incorr* represent the running average losses of the correct and incorrect samples in S*noMix* respectively (calculated from epoch 0 to E-1), and let Lmid denote the average of the two quantities, i.e.,

$$L_{mid}=0.5\cdot(L_{corr}+L_{incorr})\tag{7}$$

![6_image_1.png](6_image_1.png)

Lmid acts as a boundary between the correct and incorrect samples, effectively creating two clusters whose centroids are given by L*corr* and L*incorr*. Thus, samples with loss less than Lmid in epoch E can be identified as Region 1 samples, as they are likely to be correct. Fig. 6 plots the efficacy of Lmid across different epochs (fraction of correct inputs under Lmid). As desired, a majority of the correct samples (> 95%) fall in Region 1, while only a negligible fraction of incorrect samples (< 10%) is included. Furthermore, samples with loss greater than L*incorr* in a particular epoch are in the upper percentile of the loss distribution of the incorrect samples. L*incorr* can hence be used to create Region 2 and Region 3 as shown. We note that loss thresholds of better quality can potentially be identified using other techniques, such as by introducing hyper-parameters.
However, tuning these hyper-parameters for each network separately is a costly process, diminishing the runtime benefits achieved by reducing training complexity.

Figure 6: Efficacy of threshold Lmid

We will now discuss the amenability criteria designed for samples belonging to Regions 1 and 3. Amenability Criteria for Region 1: Consider a sample A belonging to Region 1 in epoch E, i.e., LossA < Lmid. From Figure 5(b) it is known that samples in Region 1 are likely to be correctly classified prior to mixing. We mix such samples as long as their loss does not exceed Lmid at some later epoch E′, i.e., as long as they are not likely to be classified incorrectly. After epoch E′, they are shifted to S*noMix*. Fig. 7 illustrates the temporal variation in the number of samples that are in Smix and from Region 1 of the loss distribution. As can be seen, the number of such samples increases across epochs. This is because, as epochs progress, classification accuracy improves, thereby resulting in more samples having loss below Lmid, i.e., belonging to Region 1. We note that using a loss-based threshold to determine amenability to mixing is more robust than directly using classification performance (Sec. 7), as we find that mixing outlier samples, i.e., samples with high loss yet correct classification, affects overall accuracy.

Figure 7: Amenability of Region 1

The graph also depicts the fraction of samples that deflect to S*noMix* every epoch, which is a very small fraction of the samples that are mixed. This justifies the design of the amenability rule for Region 1. Amenability Criteria for Region 3: Samples in Region 3 have high loss (loss > L*incorr*), and are generally very difficult for the network to classify even if they are trained without mixing.
In fact, we observe that a considerable fraction of samples that consistently occur in Region 3 across epochs remain incorrect at the end of the training process. Let I denote the set of such samples that are incorrect when training concludes. In Fig. 8(a), we plot a histogram of the number of epochs for which samples in I occupy Region 3 across training. Clearly, over half the samples in I consistently occur in Region 3 for over 70% of the training process.

![7_image_0.png](7_image_0.png)

Figure 8: Loss dynamics of samples in set I and set C

It can thus be argued from a practical runtime-efficiency perspective that training effort on such samples can be reduced using mixing. Some challenges, however, persist. As classification statistics evolve during training, it is difficult to determine which samples to mix at earlier epochs without negatively affecting final classification accuracy. Consider set C, which comprises the samples that are correctly classified at the end of training. In Fig. 8(b), it is seen that around 4% of the samples in C occur in Region 3 for over 60% of the training process, with their classification accuracy improving only in the later stages of training. We must thus stipulate criteria to identify the desired subset of Region 3 samples that can be mixed. To that end, we target samples that the network finds difficult to classify at the moment, i.e., in the current epoch. In addition to belonging to Region 3, if a sample's loss increases over consecutive epochs (i.e., it becomes increasingly difficult), it is mixed for the next epoch, following which it is brought back to S*noM ix*. In Fig. 9(b), we find that increasing the period of time k for which the difficult samples must exhibit increasing loss before being mixed only marginally improves the accuracy and runtime benefits. We hence use k = 1 for all our experiments, thereby eliminating our dependence on any hyper-parameters.
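Putting the two criteria together, the per-sample decision for the next epoch reduces to a small rule (shown here for k = 1; the function and all names are illustrative):

```python
def amenable_to_mix(loss_now, loss_prev, l_mid, l_incorr):
    """Decide whether a sample joins S_mix in the next epoch.

    Region 1: loss < L_mid -> keep mixing until the loss exceeds L_mid.
    Region 3: loss > L_incorr and rising since the last epoch (k = 1)
              -> mix for one epoch, then return to S_noMix.
    Region 2 (everything in between) is never mixed.
    """
    if loss_now < l_mid:
        return True   # Region 1
    if loss_now > l_incorr and loss_now > loss_prev:
        return True   # Region 3 with increasing loss
    return False      # Region 2, or Region 3 without increasing loss
```

For k > 1, the second condition would instead require the loss to have increased over the last k consecutive epochs.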
The temporal variation in the fraction of Region 3 samples mixed every epoch is depicted in Fig. 9(a). This fraction decreases across epochs, since several samples in Region 3 shift to Region 1 as accuracy improves. Interestingly, mixing difficult samples provides a ∼0.2% boost in classification performance on the overall validation set across all our benchmarks, as opposed to training them without mixing. We believe this is because it allows the network to focus on samples with moderate loss, which are more likely to contribute to final accuracy. Finally, we highlight the advantage of mixing such difficult samples instead of skipping them in Sec. 4.

![8_image_0.png](8_image_0.png)

Figure 9: Amenability for samples in Region 3

Determining sample amenability every epoch adds no more than 2% runtime overhead on average and 4% additional storage cost. The proposed amenability criteria thus help us successfully realize selective mixing, i.e., achieve a competitive runtime-efficiency versus accuracy trade-off.

## 4 Experimental Results

We showcase the runtime benefits achieved by mixTrain across different classes of image recognition DNNs, namely convolutional neural networks (CNNs) and vision transformers (Dosovitskiy et al., 2020). We consider two datasets, namely ImageNet (Deng et al., 2009) and Cifar10 (Krizhevsky et al.). The benchmarks for the ImageNet dataset consist of four image-recognition CNNs, *viz.* ResNet18, ResNet34, ResNet50 (He et al., 2015), and MobileNetV2 (Sandler et al., 2018), trained using the same training hyper-parameters (learning rate, epochs, etc.) as in (He et al., 2015; Sandler et al., 2018). For the Cifar10 dataset, we consider the ResNet18 and ResNet34 image-recognition CNNs (He et al., 2015). We also consider three vision transformer architectures: ViT-small, ViT-SWIN, and ViT-pretrained.
Details on the vision transformer architectures and the training hyper-parameters for all benchmarks can be found in Sec. 7.1. Across all benchmarks, we report the speed-up achieved by mixTrain over the same number of epochs as the baseline by comparing wall-clock times.

## 4.1 Execution Time Benefits

ImageNet: Table 1 presents the training performance of baseline SGD and mixTrain on different ImageNet benchmarks in terms of the Top-1 classification error and speed-up. On average, across all benchmarks, mixTrain mixes nearly 48% and 68% of the training dataset per epoch with MixUp and CutMix, respectively. As can be seen, CutMix achieves a slightly superior trade-off than MixUp across all benchmarks, achieving up to around a 1.6× reduction in runtime compared to the baseline while sacrificing only ∼0.2% in Top-1 accuracy. This is primarily because interference between constituent samples is better mitigated through split propagation, thereby resulting in more inputs being mixed.

Cifar10: We present the runtime and accuracy trade-off achieved on the Cifar10 vision transformer benchmarks in Table 2. As can be seen, mixTrain achieves a 1.3×-1.6× training speed-up for nearly no loss in accuracy. This clearly underscores that mixTrain is directly applicable to any image classification DNN, regardless of the architecture or backbone deployed. Further, our results in Table 2 also indicate that mixTrain is applicable not only to training vision transformers from scratch but to the fine-tuning stage as well. In Section 7.2 we discuss the speed-ups achieved by mixTrain on the CNN benchmarks trained on Cifar10.

Runtime overhead analysis: Across all our benchmarks, we observe that mixTrain adds no more than 2% overhead in runtime.
These marginal overheads arise due to (i) calculating the amenability of inputs to interpolation and (ii) split propagation (for CutMix). In (i), we compare the sample's loss against some thresholds and update the thresholds every epoch. However, these simple scalar operations have negligible runtime (<1.5% overhead) compared to the multiple GEMM operations performed during training. For (ii), during split propagation, the FC layers process the constituent inputs separately. However, the FC layers now operate on inputs of smaller size (i.e., corresponding to the size occupied by the features of the constituent input, which is nearly half the size of the original input). Thus, split propagation also adds less than 1% runtime overhead compared to the baseline.

| Network | Training Strategy | Top-1 Error | Speed-Up |
|-------------|---------------------|---------------|------------|
| | Baseline SGD | 30.2% | 1× |
| ResNet18 | mixTrain-CutMix | 30.44% | 1.51× |
| | mixTrain-MixUp | 30.6% | 1.32× |
| | Baseline SGD | 26% | 1× |
| ResNet34 | mixTrain-CutMix | 26.25% | 1.54× |
| | mixTrain-MixUp | 26.4% | 1.37× |
| | Baseline SGD | 24.3% | 1× |
| ResNet50 | mixTrain-CutMix | 24.45% | 1.56× |
| | mixTrain-MixUp | 24.6% | 1.41× |
| | Baseline SGD | 28.5% | 1× |
| MobileNetV2 | mixTrain-CutMix | 28.76% | 1.52× |
| | mixTrain-MixUp | 29% | 1.3× |

Table 1: Training CNNs on ImageNet

| Network | Training Strategy | Top-1 Error | Speed-Up |
|-------------------------|---------------------|---------------|------------|
| | Baseline SGD | 19% | 1× |
| ViT-small | mixTrain-MixUp | 19.11% | 1.37× |
| (Training from scratch) | mixTrain-CutMix | 1.35% | 1.32× |
| | Baseline SGD | 9% | 1× |
| ViT-SWIN | mixTrain-MixUp | 8.9% | 1.44× |
| (Training from scratch) | mixTrain-CutMix | 9.2% | 1.4× |
| | Baseline SGD | 2.5% | 1× |
| ViT-pretrained | mixTrain-MixUp | 2.46% | 1.6× |
| (Fine-tuning) | mixTrain-CutMix | 2.55% | 1.58× |

Table 2: Training vision transformers on Cifar10
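To illustrate (ii), split propagation can be thought of as pooling and classifying each constituent's features separately at the FC stage. The sketch below assumes a CutMix composite whose constituents occupy the left and right halves of the final feature map; the vertical split, shapes, and all names are simplifying assumptions rather than the paper's implementation:

```python
def split_fc(feature_map, weights, bias):
    """Apply the FC classifier to each constituent's features separately.

    feature_map: H x W x C nested lists (composite sample's features).
    weights:     num_classes x C; bias: num_classes.
    Returns one logit vector per constituent; each FC pass operates on
    a pooled vector built from only half of the spatial positions.
    """
    h, w, c = len(feature_map), len(feature_map[0]), len(feature_map[0][0])

    def pool_and_fc(cols):
        # Global average pool over the given column range, then FC.
        pooled = [0.0] * c
        n = h * len(cols)
        for row in feature_map:
            for j in cols:
                for k in range(c):
                    pooled[k] += row[j][k] / n
        return [sum(wk * pk for wk, pk in zip(w_row, pooled)) + b
                for w_row, b in zip(weights, bias)]

    left = pool_and_fc(range(w // 2))      # logits for constituent 1
    right = pool_and_fc(range(w // 2, w))  # logits for constituent 2
    return left, right
```

Because each pooled vector summarizes only about half of the original spatial extent, the extra FC pass adds very little work relative to the convolutional layers.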
## 4.2 Ablation Study

In this subsection, we conduct an ablation analysis of mixTrain.

Contribution of interference reduction and selective mixing: mixTrain uses two strategies to achieve an optimal accuracy versus runtime trade-off, i.e., reducing the impact of interference and selective mixing. Fig. 10(a) depicts the contribution of each strategy towards runtime savings for the CutMix operator. The light blue markings indicate naive mixing. Selective mixing automatically identifies a subset of training samples that can be mixed every epoch such that classification accuracy is not impacted. However, if interference between the constituent inputs is not mitigated, training performance on mixed samples is poor (green markings). Consequently, the selective mixing strategy is forced to become conservative, identifying fewer samples that can be mixed every epoch without severely affecting accuracy. Reducing interference between the constituent inputs improves accuracy by more than 1% and speed-up by 10% (red markings).

![9_image_0.png](9_image_0.png)

Figure 10: Ablation analysis

Breakdown of selective interpolation: We break down selective interpolation by examining the region of the loss distribution that provides the most benefits. From Fig. 10(b) (generated using CutMix), it is evident that Region 1 samples provide the bulk of our benefits on the ResNet18-ImageNet benchmark, accounting for nearly 25% of the savings. This is because, as training progresses, a majority of training samples fall in Region 1. Interpolating Region 3 samples accounts for an additional 8% of runtime savings.

## 4.3 Quantitative Comparison Study

We compare the performance of mixTrain against competing methods that accelerate DNN training.

![10_image_0.png](10_image_0.png)

Figure 11: Quantitative Comparison Study

Instance Skipping: As a representative of instance skipping, we specifically consider the performance of (Zhang et al., 2019) (Fig. 11(a)) and (Jiang et al., 2019) (Fig.
11(b)) on the ResNet50 benchmark. In these techniques, samples that the network finds easy to classify, as identified by a low classification loss, are skipped, thereby resulting in fewer mini-batches as training proceeds. Two issues are typically encountered by such techniques. First, as no training is conducted on the samples that are skipped, this subset is often a small, conservative fraction of the training dataset. Second, additional overhead is incurred in each epoch to determine this subset, as it is non-trivial to estimate the most recent loss of samples that were discarded in previous epochs. In Fig. 11(b), we implement (Jiang et al., 2019), overlook the overheads associated with determining the subset of samples that must be skipped, and report the resulting runtime across epochs. Clearly, mixTrain achieves better model accuracy and runtime benefits than both efforts, even when these overheads are overlooked. As the network is ultimately trained on every input in each epoch, we can reduce the number of mini-batches more aggressively, while incurring only negligible overheads to form Smix and S*noM ix*. Finally, we analyze the accuracy if Region 3 samples were to be skipped instead of mixed, using the same policy discussed in Sec. 3.2 for different values of k. Clearly, mixTrain achieves better convergence, allowing it to leverage runtime benefits from this region.

Coreset selection techniques: In Table 3 below, we compare the performance of mixTrain-CutMix against three popular coreset selection techniques: Glister (Killamsetty et al., 2020), Grand (Paul et al., 2021), and facility-location based methods (Iyer et al., 2020). Similar to mixTrain, coreset selection techniques aim to reduce training runtime by reducing the number of mini-batches trained every epoch, identifying a subset of training data points that are critical to accuracy.
Such techniques perform better than random sampling (i.e., better accuracy) when the fraction of the training dataset retained is low (Guo et al., 2022). However, as can be seen in Table 3, these techniques require a large fraction of the training dataset in order to remain iso-accurate with the baseline. mixTrain clearly achieves a better accuracy versus speed-up trade-off.

Other approximations: We consider three approximation strategies, i.e., early termination, mini-batch skipping, and input size scaling (Fig. 11(a)). For early termination, we stop baseline SGD training at an earlier epoch, when it achieves the same accuracy as mixTrain, and report the resulting runtime benefits. Next, for mini-batch skipping we stochastically skip s% of the mini-batches every epoch, and for input size scaling, we train on inputs scaled down by some factor s. In both cases, the parameter s is selected such that the run is iso-runtime with mixTrain. Clearly, in all three cases, mixTrain achieves a superior accuracy versus runtime trade-off, as seen for the ResNet50 benchmark.

| Training Method | Avg. fraction of the dataset used for training across epochs | Top-1 Error | Speed-Up |
|-------------------|--------------------------------------------------------------|-------------|----------|
| Baseline | 1 | 4.4% | 1× |
| mixTrain-MixUp | 0.69 | 4.33% | 1.4× |
| mixTrain-CutMix | 0.66 | 4.2% | 1.45× |
| Glister | 0.8 | 4.65% | 1.18× |
| Glister | 0.7 | 4.76% | 1.32× |
| Grand | 0.8 | 4.6% | 1.15× |
| Grand | 0.7 | 4.7% | 1.2× |
| Facility Location | 0.8 | 4.55% | 1.19× |
| Facility Location | 0.7 | 4.79% | 1.25× |

Table 3: Comparison against coreset selection techniques

## 5 Related Work

We now discuss related research efforts to accelerate DNN training.

Hyper-parameter tuning: Many notable efforts are directed towards achieving training efficiency by controlling the hyper-parameters involved in gradient descent, notably the learning rate and momentum.
(You et al., 2017; Akiba et al., 2017; Goyal et al., 2017) propose learning rate tuning algorithms that complete training in less than an hour with no loss in accuracy when distributed over hundreds of CPU/GPU cores.

Optimizers with fast convergence: This class of efforts includes optimizers that achieve improved generalization performance within a certain training budget. These techniques target the evaluation of the weight gradient every iteration: for example, optimizers such as AvaGrad (Savarese et al., 2019) and Adam (Kingma & Ba, 2015) adaptively compute the learning rate across training epochs, resulting in faster convergence than SGD within a similar number of epochs for certain tasks. Similarly, techniques such as (Sutskever et al., 2013) utilize a momentum parameter during training to achieve faster convergence.

Model size reduction during training: Model size reduction investigates dynamically pruning (Yuan et al., 2020) or quantizing (Sun et al., 2019) a model during training itself. Training a reduced-capacity model, or training with lower precision, results in training speed-ups.

Coreset selection strategies: Such techniques select a subset of the training samples that is most informative, i.e., critical to accuracy. These techniques differ in how they identify such critical training samples. Commonly used methods to determine a sample's importance include analyzing sample loss (Jiang et al., 2019; Zhang et al., 2019), gradient-matching techniques (Killamsetty et al., 2021), bi-level optimization methods (Killamsetty et al., 2020), submodularity-based approaches (Iyer et al., 2020), and decision-boundary based methods (Margatina et al., 2021).

## 6 Conclusion

We introduce a new approach to improve the training efficiency of state-of-the-art DNNs by utilizing input mixing. We propose mixTrain, which comprises two strategies to achieve an acceptable accuracy versus speed-up trade-off.
First, we propose split propagation and adaptive mixing to reduce the impact of interference between the constituent inputs in a composite sample. Second, we apply mixing selectively, i.e., only on a subset of the training dataset every epoch. Across DNNs on the ImageNet dataset, we achieve up to a 1.6× improvement in runtime for a ∼0.2% loss in accuracy.

## References

Takuya Akiba, Shuji Suzuki, and Keisuke Fukuda. Extremely large minibatch SGD: training resnet-50 on imagenet in 15 minutes. *CoRR*, abs/1711.04325, 2017. URL http://arxiv.org/abs/1711.04325.

Dario Amodei, Danny Hernandez, Girish Sastry, Jack Clark, Greg Brockman, and Ilya Sutskever. Training costs, 2018. URL https://openai.com/blog/ai-and-compute/.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In *CVPR09*, 2009.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. *CoRR*, abs/2010.11929, 2020. URL https://arxiv.org/abs/2010.11929.

Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imagenet in 1 hour. *CoRR*, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.

Chengcheng Guo, Bo Zhao, and Yanbing Bai. Deepcore: A comprehensive library for coreset selection in deep learning, 2022.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. *CoRR*, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Rishabh K. Iyer, Ninad Khargoankar, Jeff A. Bilmes, and Himanshu Asanani. Submodular combinatorial information measures with applications in machine learning. *CoRR*, abs/2006.15412, 2020. URL https://arxiv.org/abs/2006.15412.

Angela H.
Jiang, Daniel L. K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, and Padmanabhan Pillai. Accelerating deep learning by focusing on the biggest losers, 2019.

KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh K. Iyer. GLISTER: generalization based data subset selection for efficient and robust learning. *CoRR*, abs/2012.10630, 2020. URL https://arxiv.org/abs/2012.10630.

KrishnaTeja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De, and Rishabh K. Iyer. GRAD-MATCH: gradient matching based data subset selection for efficient deep model training. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event*, volume 139 of *Proceedings of Machine Learning Research*, pp. 5464–5474. PMLR, 2021.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.

Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/~kriz/cifar.html.

Duo Li, Aojun Zhou, and Anbang Yao. Hbonet: Harmonious bottleneck on two orthogonal dimensions. In *The IEEE International Conference on Computer Vision (ICCV)*, Oct 2019.

Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. *CoRR*, abs/2103.14030, 2021. URL https://arxiv.org/abs/2103.14030.

Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. *CoRR*, abs/1608.03983, 2016.
URL http://arxiv.org/abs/1608.03983.

Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Mattan Erez, and Sujay Shanghavi. Prunetrain: Gradual structured pruning from scratch for faster neural network training. *CoRR*, abs/1901.09290, 2019. URL http://arxiv.org/abs/1901.09290.

Katerina Margatina, Giorgos Vernikos, Loïc Barrault, and Nikolaos Aletras. Active learning by acquiring contrastive examples. *CoRR*, abs/2109.03764, 2021. URL https://arxiv.org/abs/2109.03764.

Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. Coresets for data-efficient training of machine learning models. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 6950–6960. PMLR, 13–18 Jul 2020.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 8024–8035. Curran Associates, Inc., 2019.

Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 20596–20607. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/ac56f8fe9eea3e4a365f29f0f1957c55-Paper.pdf.

Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen.
Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. *CoRR*, abs/1801.04381, 2018. URL http://arxiv.org/abs/1801.04381. Pedro Savarese, David McAllester, Sudarshan Babu, and Michael Maire. Domain-independent dominance of adaptive methods. *CoRR*, abs/1912.01823, 2019. URL http://arxiv.org/abs/1912.01823. Robert Stojnic, Ross Taylor, and Marcin Kardas. Imagenet leaderboard, 2023. URL https://paperswithcode.com/sota/image-classification-on-imagenet. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp, 2019. Xiao Sun, Jungwook Choi, Chia-Yu Chen, Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Xiaodong Cui, Wei Zhang, and Kailash Gopalakrishnan. Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks. In *NeurIPS*, 2019. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Sanjoy Dasgupta and David McAllester (eds.), *Proceedings of the 30th International Conference on Machine Learning*, volume 28 of *Proceedings of Machine Learning Research*, pp. 1139–1147, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. URL https://proceedings.mlr.press/v28/sutskever13.html. Mingxing Tan and Quoc V. Le. Efficientnetv2: Smaller models and faster training. *CoRR*, abs/2104.00298, 2021. URL https://arxiv.org/abs/2104.00298. Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Hervé Jégou. Fixing the train-test resolution discrepancy, 2020. Yang You, Igor Gitman, and Boris Ginsburg. Scaling SGD batch size to 32k for imagenet training. *CoRR*, abs/1708.03888, 2017. URL http://arxiv.org/abs/1708.03888. Xin Yuan, Pedro Savarese, and Michael Maire. Growing efficient deep networks by structured continuous sparsification. *CoRR*, abs/2007.15353, 2020. URL https://arxiv.org/abs/2007.15353. 
Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. *CoRR*, abs/1905.04899, 2019. URL http://arxiv.org/abs/1905.04899.

Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *CoRR*, abs/1710.09412, 2017. URL http://arxiv.org/abs/1710.09412.

Jiong Zhang, Hsiang-Fu Yu, and Inderjit S. Dhillon. Autoassist: A framework to accelerate training of deep neural networks. *CoRR*, abs/1905.03381, 2019. URL http://arxiv.org/abs/1905.03381.

## 7 Appendix

## 7.1 Experimental Setup

This subsection describes the experimental setup used for realizing the baseline and proposed training schemes on the benchmarks specified in Section 4 of the main paper. We conduct our experiments on the complete training and test datasets of each benchmark, using the PyTorch (Paszke et al., 2019) framework.

Baseline: We consider SGD training as the baseline in our experiments. The hyper-parameters used in SGD training of each of the benchmarks are described below.

ImageNet: For the experiments in Section 4.1, we utilize a batch-size of 64 per GPU for all benchmarks. For the ResNet18, ResNet34, and ResNet50 benchmarks, the initial learning rate is set to 0.025. The learning rate is decayed by a factor of 0.1 every 30 epochs, for a total training duration of 90 epochs, and the weight decay is 4e−5. The MobileNetV2 benchmark utilizes an initial learning rate of 0.0125. We use a cosine learning rate decay schedule, as in (Li et al., 2019), for 150 epochs. The weight decay is set to 4e−5. All benchmarks use an input size of 224*224*3.

Cifar10-CNNs: All experiments on the convolutional neural networks (i.e., ResNet18 and ResNet34) utilize a batch-size of 128, trained on a single GPU. We consider two different hyper-parameter settings that differ in the learning rate schedule used.
When using a linear learning rate schedule, all benchmarks are trained with an initial learning rate of 0.05 that is decayed by a factor of 0.1 every 10 epochs, across 90 epochs. The cosine annealing learning rate schedule uses an initial learning rate of 0.1 that is gradually decayed over 200 epochs. Across all experiments, the weight decay is set to 5e−4. All benchmarks utilize an input size of 32*32*3.

Cifar10-Transformers: We consider three vision transformer architectures: ViT-small, ViT-SWIN, and ViT-pretrained. The ViT-small architecture has a patch size of (4*4), with a hidden dimension size of 512. The network consists of 8 attention heads and a depth of 6. The ViT-SWIN architecture is identical to the Swin-T architecture in (Liu et al., 2021). When training from scratch, both networks operate on inputs of size (32*32*3) and are trained for 100 epochs using a cosine annealing learning rate schedule with an initial learning rate of 1e-4. For the fine-tuning experiment, the ViT-pretrained network uses the ViT-B/16 architecture described in (Dosovitskiy et al., 2020). Here, the pretrained weights are obtained by training on the ImageNet-21k dataset (Deng et al., 2009), and the network hence accepts an input of size (384*384*3). Fine-tuning is conducted for 3 epochs. For all models, training is conducted across 4 GPUs, with the batch-size set to 128.

mixTrain: mixTrain uses the same learning rate, weight decay, and number of epochs as baseline SGD, requiring no additional hyper-parameters. We use the same random seed for both our baseline and mixTrain experiments. Results in Sec. 4.1 are reported by averaging over 3 different training runs.
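For reference, the two learning rate schedules described above can be expressed as small helpers (function names are illustrative):

```python
import math

def step_lr(initial_lr, epoch, step=30, gamma=0.1):
    """Step schedule, e.g., the ImageNet ResNets: lr 0.025 decayed by
    a factor of 0.1 every 30 epochs (step=10 for the Cifar10 setting)."""
    return initial_lr * gamma ** (epoch // step)

def cosine_lr(initial_lr, epoch, total_epochs):
    """Cosine annealing schedule, e.g., MobileNetV2 (150 epochs) or the
    Cifar10 cosine setting (200 epochs)."""
    return 0.5 * initial_lr * (1 + math.cos(math.pi * epoch / total_epochs))
```

In PyTorch, the equivalent built-ins are `torch.optim.lr_scheduler.StepLR` and `CosineAnnealingLR`.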
| Network | Training Strategy | Top-1 Error | Speed-Up |
|-----------|--------------------------------------------------------|---------------|------------|
| | Baseline SGD (linear learning rate schedule) | 6.5% | 1× |
| ResNet18 | mixTrain-CutMix | 5.4% | 1.74× |
| | mixTrain-MixUp | 5.7% | 1.69× |
| | Baseline SGD (cosine annealing learning rate schedule) | 4.4% | 1× |
| ResNet18 | mixTrain-CutMix | 4.2% | 1.45× |
| | mixTrain-MixUp | 4.33% | 1.41× |
| | Baseline SGD (linear learning rate schedule) | 5.2% | 1× |
| ResNet34 | mixTrain-CutMix | 4.2% | 1.78× |
| | mixTrain-MixUp | 4.6% | 1.71× |

Table 4: Training CNNs on Cifar10 using mixTrain

| Network | Training Strategy | Top-1 Error | Speed-Up |
|-----------|---------------------|---------------|------------|
| | Baseline Adam | 6% | 1× |
| ResNet18 | mixTrain-CutMix | 5.8% | 1.59× |
| | mixTrain-MixUp | 5.92% | 1.51× |
| | Baseline AvaGrad | 5.7% | 1× |
| ResNet18 | mixTrain-CutMix | 5.2% | 1.42× |
| | mixTrain-MixUp | 5.4% | 1.4× |
| | Baseline AvaGrad-W | 5.8% | 1× |
| ResNet18 | mixTrain-CutMix | 5.28% | 1.44× |
| | mixTrain-MixUp | 5.3% | 1.39× |

Table 5: mixTrain with different optimizers

## 7.2 Experimental Results On Cifar10

To underscore the wide applicability of mixTrain, we present the runtime and accuracy trade-off achieved on the Cifar10 benchmarks in Table 4. Across our benchmarks, MixUp achieves up to a 1.7× improvement in runtime, while CutMix achieves a 1.8× runtime improvement. Clearly, both mixing strategies provide a boost in accuracy, due to the improved regularization provided by mixing samples. As shown in Table 5, we also highlight the applicability of mixTrain to optimizers such as Adam (Kingma & Ba, 2017) and AvaGrad (Savarese et al., 2019), and to different learning rate schedules (Loshchilov & Hutter, 2016). Typically, such optimizers propose techniques to evaluate the weight gradients in a manner that results in faster convergence.
mixTrain does not interfere with the evaluation of such weight gradients; regardless of the optimizer used, mixTrain achieves training acceleration by reducing the effective size of the dataset to iterate over each epoch. As can be seen, mixTrain can be successfully applied in conjunction with such optimizers.

## 7.3 Analysis Of Top-5 Accuracy Without Interference Reduction

In Sec. 3.1, we mention that the network appears to be unable to detect both constituent inputs when interference is not reduced. At most, the network detects only one of the constituent inputs, with the second constituent rarely appearing in the Top-5 predictions made. In Fig. 12, we provide the Top-5 classification accuracy of the second constituent prior to reducing interference. This underscores the need for devising strategies to reduce interference between the constituent inputs of a composite sample.

![16_image_0.png](16_image_0.png)

Figure 12: Top-5 classification accuracy of second constituent input

## 7.4 Training Runtime

We present our training runtime results in Table 6. Note that the Cifar10 experiments are conducted on a single Nvidia RTX 2080Ti machine, while the ImageNet experiments are conducted across 4 RTX GPUs.

| Network | Training Strategy | Runtime |
|-------------------|-------------------|------------|
| ResNet18-Cifar10 | mixTrain-CutMix | 1.95 hours |
| ResNet18-ImageNet | mixTrain-CutMix | 20.1 hours |
| ResNet50-ImageNet | mixTrain-CutMix | 32.7 hours |

Table 6: Training runtime

## 7.5 Analyzing mixTrain When Mixing More Than 2 Inputs

mixTrain can be extended beyond 2 samples. However, we observe that the effect of mixing more than two samples differs across benchmarks. Table 7 below shows the performance of mixTrain when 2 and 3 samples are mixed using the CutMix operator on the Cifar10 and ImageNet datasets for the ResNet18 network.
For the Cifar10-ResNet18 benchmark, mixing N=3 samples clearly provides better runtime savings than N=2, at an accuracy comparable to the baseline. However, we observe a noticeable drop in accuracy for the ImageNet benchmark. In the context of CutMix, this is because the class object of interest occupies a smaller fraction of the input area for ImageNet and is likely to be missed in the random CutMix patch. We note that we observe similar trends in accuracy when using the MixUp operator. Here, the interference between constituent inputs is higher due to averaging pixel information across more samples.

## 7.6 Analysis Of Loss Across Consecutive Epochs

As mentioned in Sec. 3.2, we utilize the loss of a sample in epoch E to determine its amenability to mixing in epoch E+1. However, several mini-batches pass before a sample is trained again in the next epoch. As the model undergoes many changes to its weights, the loss of a sample in epoch E might differ substantially from that in epoch E+1. Fig. 13 plots the loss curve averaged across all training examples when trained with SGD. The loss changes rapidly only in the first few epochs and when the learning rate changes; in other periods, changes in loss happen more gradually. We find that the same analysis generally holds when samples are mixed as well. This justifies using the loss in epoch E to determine amenability in epoch E+1.
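The CutMix operator (Yun et al., 2019) underlying these experiments pastes a random rectangular patch of one image into another and weights the labels by the area each constituent occupies. A minimal two-input sketch in pure Python on grayscale H×W images (all names are illustrative, and the N>2 extension would paste additional patches):

```python
import random

def cutmix_pair(img_a, img_b, rng=None):
    """Paste a random rectangular patch of img_b into img_a.

    Images are equally sized H x W nested lists. Returns the composite
    image and the label weight of each constituent, proportional to the
    area it occupies.
    """
    rng = rng or random.Random(0)
    h, w = len(img_a), len(img_a[0])
    ph, pw = rng.randint(1, h), rng.randint(1, w)          # patch size
    top, left = rng.randint(0, h - ph), rng.randint(0, w - pw)

    mixed = [row[:] for row in img_a]
    for i in range(top, top + ph):
        for j in range(left, left + pw):
            mixed[i][j] = img_b[i][j]

    lam_b = (ph * pw) / (h * w)     # fraction of area from img_b
    return mixed, (1.0 - lam_b, lam_b)
```

With a smaller object-to-image ratio, as on ImageNet, a randomly placed patch is more likely to miss the class object entirely, which is the failure mode discussed above for N=3.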
| Network           | Training Strategy     | Top-1 Acc | Speed-Up |
|-------------------|-----------------------|-----------|----------|
| ResNet18-Cifar10  | Baseline SGD (N=0)    | 95.6%     | 1×       |
|                   | mixTrain-CutMix (N=2) | 95.8%     | 1.45×    |
|                   | mixTrain-CutMix (N=3) | 94.4%     | 2.32×    |
| ResNet18-ImageNet | Baseline SGD (N=0)    | 69.8%     | 1×       |
|                   | mixTrain-CutMix (N=2) | 69.56%    | 1.5×     |
|                   | mixTrain-CutMix (N=3) | 68.7%     | 1.9×     |

Table 7: Mixing more than 2 inputs using mixTrain

![17_image_0.png](17_image_0.png)

Figure 13: Change in average loss across epochs

## 7.7 Analyzing Efficacy Of Amenability Metric

As part of our key strategies to achieve a good accuracy versus runtime efficiency trade-off, we propose selectively mixing samples in Section 3.2. We take into consideration the region of the loss distribution in which a sample falls in epoch E to decide whether the sample should be mixed in the next epoch. The regions differ in the estimated impact that mixing a sample may have on accuracy. Consequently, each region has its own criterion for gauging amenability in the next epoch.

Table 8: Analyzing efficacy of our amenability metric

| Benchmark | Amenability Metric                             | Top-1 err | Speed-Up |
|-----------|------------------------------------------------|-----------|----------|
| ResNet50  | Our Effort (Region 1 only)                     | 24.56%    | 1.38     |
|           | Our Effort (Regions 1 and 3)                   | 24.45%    | 1.56     |
|           | Threshold = Accuracy                           | 24.80%    | 1.36     |
|           | Threshold = Average Loss                       | 24.50%    | 1.33     |
|           | Region 1 and threshold = L_incorr for Region 3 | 25.14%    | 1.74     |

In Table 8, we compare our proposed selective mixing strategy against the following set of rules to gauge amenability.

- First, we analyze the trade-off achieved when we mix inputs that were correctly classified in the previous epoch, instead of using L_mid as in Sec. 3.2. Essentially, only those samples that are correct in epoch E are mixed in the next epoch.
We find that the L_mid threshold (Row 1) achieves slightly better classification accuracy, as outlier inputs with correct classifications are avoided.

- Next, we compare against an average loss threshold, i.e., we calculate the running average of the loss across all the samples in S_noMix. If a sample in epoch E has loss lower than the running average, it is mixed in the next epoch, and vice-versa. As can be seen, our L_mid threshold (Row 1) achieves better speed-ups for nearly the same classification accuracy. Across epochs, classification accuracy improves and the average loss decreases, often leaving several correctly classified samples with loss above the average. This metric thus loses the opportunity to approximate training effort on these correctly classified samples that are amenable to interpolation.

- Finally, we compare the efficacy of our Region 3 criterion. We observe the trade-off achieved when all samples above L_incorr are mixed, in addition to Region 1 approximations. Clearly, our proposed criterion attains better accuracy (Row 2).

## 7.8 Discussion On Applicability Of MixTrain To Other Tasks

We now discuss the applicability of mixTrain to tasks other than image classification, such as natural language processing and semantic segmentation. In image classification problems, mixing inputs does not hinder the DNN's ability to recognize the features of the constituent inputs [1,2], as the DNN's input space is continuous. Thus, the mixed input can be considered a new data point in the input space. However, in tasks such as NLP, each word in a sentence comes from a discrete dictionary and is represented by a token generated from an embedding table. Thus, the new token generated by, for example, linearly averaging the tokens of N constituent sentences may not exist in the dictionary the network is trained on.
Furthermore, networks designed for such tasks take into consideration the position of each word in a sentence and its relation to other words. The impact of mixing tokens of different sentences is thus unclear. Consequently, MixTrain may not be applicable to such tasks. Moving on to semantic segmentation tasks, each pixel in the input image is assigned a target label that indicates the class to which the pixel belongs. In the context of the MixUp operator, after mixing has been applied, each pixel belongs to two classes, one from each of the constituent inputs. The final target label of a pixel in the composite sample can thus be obtained by averaging the one-hot encoded labels of the corresponding pixels in the constituent inputs. In the context of the CutMix operator, the composite sample contains patches from each of the constituent inputs. Thus, the labels of pixels in a patch of the composite sample can be directly inherited from the original constituent input. However, while CutMix- and MixUp-based training for segmentation tasks has been demonstrated in a semi-supervised context, its usage in fully supervised settings has not been widely adopted. Thus, the benefit of extending MixTrain to such tasks is unclear, and beyond the scope of this work.
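The per-pixel label constructions described above can be written down directly. Both helpers below are our own sketch of the idea (not code from MixTrain): MixUp averages images and per-pixel one-hot label maps with the same weight, while CutMix pastes a patch of one input, together with its label map, into the other.

```python
import numpy as np

def mixup_segmentation(img_a, lbl_a, img_b, lbl_b, lam=0.5):
    """MixUp for segmentation: blend images (H, W, C) and average the
    per-pixel one-hot label maps (H, W, num_classes) with weight lam."""
    return lam * img_a + (1 - lam) * img_b, lam * lbl_a + (1 - lam) * lbl_b

def cutmix_segmentation(img_a, lbl_a, img_b, lbl_b, box):
    """CutMix for segmentation: paste a patch of image b (and its label
    map) into image a; labels inside the patch are inherited from b."""
    y0, y1, x0, x1 = box
    img, lbl = img_a.copy(), lbl_a.copy()
    img[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
    lbl[y0:y1, x0:x1] = lbl_b[y0:y1, x0:x1]
    return img, lbl

# Toy demo: 4x4 images over 2 classes; a is all class 0, b is all class 1.
img_a, img_b = np.zeros((4, 4, 3)), np.ones((4, 4, 3))
lbl_a = np.tile(np.array([1.0, 0.0]), (4, 4, 1))
lbl_b = np.tile(np.array([0.0, 1.0]), (4, 4, 1))
mix_img, mix_lbl = mixup_segmentation(img_a, lbl_a, img_b, lbl_b)
cut_img, cut_lbl = cutmix_segmentation(img_a, lbl_a, img_b, lbl_b, box=(0, 2, 0, 2))
```

In the MixUp case every pixel receives the soft label [0.5, 0.5]; in the CutMix case pixels inside the pasted patch carry class 1 and the rest keep class 0, matching the label rules stated above.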
Review 1: Summary: This work investigates the feasibility of leveraging mixup to accelerate DNN training. The motives of this work are quite simple -- train the model on the mixed inputs/labels to reduce the cost of training. This work finds the performance drop of simple linear combination and proposes an advanced approach to address the issue. Authors use several CNN architectures together with CIFAR-10 and ImageNet to evaluate their proposals. Significant speedup could be observed with marginal performance drop. Strengths and Weaknesses: S1. this work studied an interesting problem, while the motivation of adopting mixup for training acceleration was quite straightforward. S2. authors found the limitation of the vanilla mixup for training acceleration, and proposed an advanced approach. W1. All experiments are based on convolutional architectures, such as MobileNet and ResNet-50, while the sizes of neural nets are quite small. These nets might no longer be considered SOTA for image classification. Please include backbones such as ViT and incorporate more optimizers into the consideration/comparisons. It is reasonable to assume the proposed method would cause significant performance drop with modern architectures, as they perform better. W2. This work studied the problem of image classification; it would be good to discuss other scenarios, such as natural language processing or semantic segmentation. In these cases, how can we perform mixup to accelerate training procedures, or whether it is possible to mix images as the input and segmentation masks as the label. Requested Changes: Please address W1 and W2 above. Broader Impact Concerns: Not applicable here. ================================================== Review 2: Summary: This paper introduces MixTrain, a new approach for accelerating the training speed of DNN models. The key idea of MixTrain is to mix two inputs into one.
By doing so, one may reduce the number of samples in one epoch for training and lead to potential training time reduction. The challenge is how to make sure this scheme would lead to good accuracy. To tackle this challenge, the paper explores two input mixing schemes: CutMix and Mixup. The former uses a random cut to an image and replaces a part of another image with this cut, while the latter uses a weighted average of two inputs to generate the mixed input. Naively applying the above schemes would lead to an accuracy drop. Therefore, the authors propose several fixes to mitigate the accuracy drop: (1) split propagation, which sequentially processes unmixed inputs to avoid interference, (2) adaptive mixing, which adaptively changes the weights when doing MixUp, and (3) selective mixing, maintain a subset of samples for input mixing and dynamically change that subset based on loss dynamics during training. With those optimizations, the paper shows that MixTrain achieves up to 1.6x training speedup on ResNets over ImageNet/CIFAR with 0.2 accuracy drop. Strengths and Weaknesses: Strengths: - Interesting idea of selectively using mixed inputs to accelerate training speed. - The amenability evaluation for selective mixing provides some good insight on how to get input with and without mixing loss values in a lightweight way. - The approach seems to lead to promising training speedup without incurring additional hyperparameter tuning on tested datasets. Weaknesses: - Training ResNets on ImageNet does not pose major training cost challenges. Training ResNet-50 on ImageNet takes no more than 2 days on lower-end GPUs and within less than 15 mins in more recent advanced hardware: https://developer.nvidia.com/deep-learning-performance-training-inference. While the authors may argue the approach is applicable to other tasks, there is no evidence to support so at this point. - The paper compares MixTrain with several baseline methods and shows a better accuracy-time tradeoff. 
However, those baselines appear to be weak. For instance, https://openreview.net/pdf?id=493VFz-ZvDD reduces training cost via a combination of data sieving and other techniques. It would be better if the paper can make a comparison with more recent and stronger baselines. Requested Changes: Changes that are critical for a recommendation for acceptance: - Comparison with SOTA approach such as https://openreview.net/pdf?id=493VFz-ZvDD. - In adaptive mixing, the losses of samples from the previous epochs are needed to calculate the weights. However, the inputs are mixed in the previous epoch, e.g., one loss for mixed input 1&2, so how is it possible to get the separate loss of input 1 and 2 from the previous epoch without incurring additional cost? A runtime overhead discussion is needed. Items that would strengthen the work: - Split propagation splits mixed input and processes them separately through fully connected layers (FCL). This seems to reduce the compute granularity of FCL and slows down the training. For models with many FCLs, the approach's speedup would be further limited. A clarification is needed. - While split propagation may be applicable to CutMix to reduce interference, where the split is clear, it does not seem to be applicable to Mixup, where all elements in an intermediate feature map have a dependency on both inputs. Some clarification is needed. - No real training time has been reported in the paper. It would be better to report the actual training time to cross-reference the results. - Would the proposed method be extended to mixing up more than two inputs? It would be good to include a study of how mixing multiple inputs (>2) would affect the accuracy-time tradeoff. - In the last paragraph of Section 3.1, "by atleast improving" -> "by at least improving". Broader Impact Concerns: None. 
================================================== Review 3: Summary: This paper proposes a novel training method called MixTrain to address the issue of accelerating the training of deep models. MixTrain proposes that the main cause of the poor performance of naïve mixing is the interference between constituent inputs. Thus, split propagation and adaptive mixing are introduced to address the problem. Selective mixing is proposed to improve the accuracy. Strengths and Weaknesses: Strengths: The paper is generally well written, clearly structured and quite easy to follow. The empirical results seem good on most of the performed tasks. Weaknesses: 1. Too many typos and incorrect format. For example: 1) "We show that mixTrainachieves superior accuracy vs. efficiency tradeoffs" 2) "Naturally, a simple boost in performance can be achieved by atleast improving the loss for one of the constituent inputs of the mixed input." Besides, the abstract should not be divided into two paragraphs. 2. DNN does not only include CNN. It seems that the method can be used in some SOTA backbones, such as deit, swin transformer, etc. Some experiments should be conducted. 3. This method should be compared with the other DNN training acceleration methods. Requested Changes: See Weaknesses Broader Impact Concerns: n/a ================================================== Metareview: Recommendation: Reject Comment: The paper contains a collection of nice ideas, and a good motivation and analysis. We thank the authors for their engagement in the review process, and for submitting this interesting paper. However, for the reasons outlined above, the current version does not meet TMLR's criterion for acceptance in terms of support for evidence. Since major revision is not an option, the recommendation is Reject.
However, we encourage the authors to consider major revisions: both in the claims on the generality of the method, and substantial expansion of the empirical evidence that the technique can benefit over the results already presented in the MixUp and CutMix papers. ==================================================
# Multi-Accurate CATE Is Robust To Unknown Covariate Shifts

Anonymous authors Paper under double-blind review

## Abstract

Estimating heterogeneous treatment effects is important to tailor treatments to those individuals who would most likely benefit. However, conditional average treatment effect (CATE) predictors may often be trained on one population but deployed on different, possibly unknown populations. We use methodology for learning multi-accurate predictors to post-process CATE T-learners (differenced regressions) to become robust to unknown covariate shifts at the time of deployment. The method works in general for pseudo-outcome regression, such as the DR-learner. We show how this approach can combine (large) confounded observational and (smaller) randomized datasets by learning a confounded predictor from the observational dataset, and auditing for multi-accuracy on the randomized controlled trial. We show improvements in bias and mean squared error in simulations with increasingly larger covariate shift, and on a semi-synthetic case study of a parallel large observational study and smaller randomized controlled experiment. Overall, we establish a connection between methods developed for multi-distribution learning and achieve appealing desiderata (e.g., external validity) in causal inference and machine learning.

## 1 Introduction And Related Work

Causal inference studies how to make the right decision, at the right time, for the right person. An extensive recent literature on *heterogeneous treatment effects*, also called conditional average treatment effects (CATE), studies the estimation of personalized causal effects, rather than only population-level average treatment effects. Estimating CATE can inform better triage of resources to those who most benefit in healthcare, social services, e-commerce, and many other domains.
In these consequential domains, many firms/decision-makers face treatment decisions where other firms also need to make the same decision, although perhaps each with slightly different data distributions. For example, problems of clinical risk prediction, such as risk of a heart disease or medication treatment guidelines, are shared widely across hospitals, but each has its own distribution of patients in addition to idiosyncratic reporting, testing, and treatment patterns that can hinder external validity (Caruana et al., 2015). Indeed, off-the-shelf, relatively simple clinical risk calculators developed on one population are often broadly deployed as a decision support tool in many locations, without the ability to share the originating individual-level data, or with data drift over time. In social settings, the Arnold Public Safety Assessment (PSA), trained on a proprietary dataset and used in hundreds of jurisdictions (Goel et al., 2021), is an example of a widely deployed tool. Its accompanying decision-making matrix is another example of a treatment recommendation rule made more widely available (Laura and Foundation, 2016), which can introduce disparities and poor treatment efficacy (Zhou, 2024). Other examples include the design of algorithmic profiling in active labor market programs (Crépon and Van Den Berg, 2016; Bach et al., 2023; Körtner and Bonoli, 2023): many different jurisdictions run different active labor market programs, and policymakers face questions about how to learn from what works elsewhere and how to scale up programs across heterogeneous locations. A key challenge in these settings is to certify valid predictive performance of personalized causal effects for unknown deployment settings. For example, predictive risk calculators, such as those for chronic heart disease, learned on a specific population might induce biased estimation for different locales with different populations. 
As one example, the widely used Framingham risk score overestimates risk for Asian populations (Badawy et al., 2022). This problem is not limited to earlier risk scores, but extends to modern ones: a sepsis predictive risk score provided by Epic, a major healthcare IT provider, fell short in a study of external validity on another population (Habib et al., 2021). External validity, generalizability, and transportability are also important questions for causal inference (Tipton, 2014; Tipton and Hartman, 2023; Bareinboim and Pearl, 2013). Heterogeneous causal effect estimates might similarly be learned on one population but made more widely available, and hence be vulnerable to unknown covariate shifts. Spini (2021) studies the potential impacts of shifts in population for generalizing results from the Oregon Health Insurance Experiment, while Shyr et al. (2024) studies potential shifts in effect heterogeneity across multiple cancer studies. On the other hand, we do want to leverage predictive information when it is available. How can we develop methods for heterogeneous treatment effect estimation so that a new hospital, without its own large database or in-house machine learning team, is still assured low predictive bias on its own population, which might differ in unknown ways from the population behind a proprietary risk score whose originating data are not published? In this paper, we show how methods from multi-accurate learning (Hébert-Johnson et al., 2018; Kim et al., 2019) can endow conditional average treatment effect estimation with robustness to unknown covariate shifts. Indeed, the problem of confounding itself is a covariate shift problem, from the treated or control population to some reference population (Johansson et al., 2022).
Multi-accurate learning is a powerful and flexible framework that, by ensuring low predictive bias over a test function class, is also robust to combinations of these covariate shifts: those induced by confounding or by unknown covariate shifts in the reference population. Although multi-calibrated and multi-accurate learning originated from fairness motivations, namely ensuring calibration/low prediction bias over rich subgroups, in this work we show how the adversarial test functions in the formulation also confer broad robustness against covariate shift. To highlight this flexibility, we use multi-accurate calibration on an extremely small clinical trial to correct a predictor from a confounded observational study. Though there is extensive work on establishing external validity and transportability of causal effects, most of this work assumes information about a target population. Drawing inspiration from Kim et al. (2022), which studied "universal adaptability" of estimating the ATE with bias robust to unknown covariate shifts, we learn *CATE* estimates that will maintain unbiased predictions under *unknown* target populations. Although causal inference and machine learning have witnessed significant methodological innovation in orthogonal/statistical learning and other machine learning adaptations (Kennedy, 2023; Nie and Wager, 2020; Chernozhukov et al., 2018; Wager and Athey, 2018; Shalit et al., 2017; Hill, 2011), to name just a few, multi-accurate learning (Kim et al., 2019) introduces a different methodological toolkit related to boosting/adversarial formulations of conditional moment conditions. Recent advances in machine learning for conditional moment equations (Dikkala et al., 2020; Bennett and Kallus, 2023; Ghassami et al., 2022) typically develop min-max estimation algorithms that are unstable in practice; moreover, the theory of conditional moment restrictions typically identifies finite-dimensional parameters rather than entire functions like the CATE.
(See Section 5 for more extensive discussion of related work.) We conduct a thorough empirical study comparing finite- and large-sample performance of multi-accurate learning and other causal machine learning techniques more specifically tailored to causal structure. To summarize, we find that multi-accurate methods grant additional robustness against unknown covariate shifts while being competitive with more advanced causal machine learning methods in finite samples. There is a robustness-efficiency tradeoff: the latter methods are designed to exploit in-distribution efficiency, which multi-accurate learning "off-the-shelf" does not. Nonetheless, our work connects these two previously unrelated lines of work and shows how multi-accurate learning "off-the-shelf" can address the problem of robust CATE estimation. Multi-accurate learning reduces prediction bias from model misspecification, just as is required for conditional average treatment effect estimation. In our thorough empirical study we find that our proposed multi-accurate T- and DR-learners perform well under unobserved covariate shift. Although our work does not suggest multi-accurate learning as a replacement for state-of-the-art causal machine learning for in-distribution estimation, it does provide evidence that could inform further methodological improvements and variance reduction of multi-accurate learning for CATE estimation. In summary:

- Multi-accurate learning can be used "off-the-shelf" to post-process CATE estimates based on differenced outcome regressions to endow them with robustness to unknown covariate shift.
- Multi-accurate post-processing can improve CATE estimates with only *black-box* access to predictors and original data.
- Alternative approaches to robustness against unknown shifts, like distributionally robust optimization, could change the robust-optimal predictor to a risk-sensitive one rather than the true CATE, but multi-accurate learning does not.
- The multi-accuracy framework can approximate more advanced CATE estimators (such as the DR-learner (Kennedy, 2020; Semenova and Chernozhukov, 2021)) with appropriate selection of the test function class. That is, in Proposition 3 we show that multi-accurate post-processing of simple CATE estimates (T-learner) with a *richer* test function class can approximate a less-multi-accurate/less-robust but more-advanced CATE estimator, i.e., a multi-accurate DR-learner under a *simpler* test function class.

The contributions of our work are the following. We propose multi-accurate post-processing of pseudo-outcome based CATE estimation to obtain unbiased prediction on unknown deployment populations. This approach can also flexibly adapt to a variety of covariate shifts, from confounding to adversarial/unknown shifts: we illustrate by post-processing a CATE estimator that combines large observational/small randomized data. We show in extensive experiments with simulations and real-world observational and randomized data from the Women's Health Initiative how our approach achieves finite-sample gains in ensuring robust bias control (and correspondingly, MSE) under unknown distribution shifts.

## 2 Problem Setup

We overview the problem setup and describe directly related prior work on multi-calibration/multi-accuracy. See Section 5 for discussion of other methodological approaches.

## Problem Setup

Data. The dataset $D = \{(X_i, T_i, Y_i(T_i))\}_{i=1}^{n}$ comprises covariates, a binary treatment $T_i \in \{0, 1\}$, and (potential) outcomes $Y(T)$. In different applications it will satisfy different assumptions, so we will later define different variants of $D$. We first assume it arises from a randomized controlled trial or an observational study under the assumption of weak ignorability, so that the following assumption about selection on unobservables holds.

Assumption 1 (Unconfoundedness (ignorability)).
$$\{Y(1), Y(0)\} \perp T \mid X$$

Assumption 1 is a generally untestable assumption that permits causal identification. For example, it holds in randomized trials by design, and in observational studies if the observed covariates are fully informative of selection into treatment. Later on, we will jointly consider access to both a large-scale observational study (with potential violations of unconfoundedness) and a small randomized trial. Throughout, we also assume the standard assumptions of consistency, SUTVA, and overlap.

Assumption 2 (Consistency, SUTVA, and overlap). We assume that $Y_i = Y_i(T_i)$ (consistency and SUTVA), and that there exists $\nu > 0$ *such that* $\nu \leq e_1(x) \leq 1 - \nu$.

Estimands. The common estimand in causal inference is the *average treatment effect* (ATE), $\operatorname{E}[Y(1) - Y(0)]$. In regimes with posited heterogeneous treatment effects that are predictable given covariates $X$, a (functional) estimand of interest is the *conditional average treatment effect* (CATE)

$$\tau(X)=\operatorname{E}{\big[}Y(1)-Y(0)\mid X{\big]}.$$

We denote the treatment-conditional outcome regressions and the propensity score as:

$$\mu_t(x) = \operatorname{E}[Y \mid X = x, T = t], \qquad e_t(x) = P(T = t \mid X = x).$$

These are the typical so-called nuisance functions used in common estimators. Sometimes we will write the true population functions as $\mu_t^{*}, e_t^{*}$ to clarify.

Performance assessment. The convention for benchmarking estimation of CATE is the mean-squared error (MSE) with respect to the true CATE function $\tau(X)$:

$$\operatorname{E}[\left({\hat{\tau}}(X)-\tau(X)\right)^{2}].$$

Further, estimators for CATE will all eventually involve different regressions that implicitly minimize predictive error marginalized over the dataset's distribution of $X \sim P_X$.
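To make concrete why this marginalizing distribution matters, here is a toy numerical illustration (our own, not from the paper): a CATE estimate whose pointwise error averages to zero under the training distribution $P$ can be substantially biased once covariates shift to a different deployment distribution $Q$.

```python
import numpy as np

rng = np.random.default_rng(0)

tau = lambda x: 1.0 + 0.5 * x        # true CATE (toy choice)
tau_hat = lambda x: 1.0 + 0.3 * x    # slightly misspecified estimate

# The pointwise error tau_hat - tau = -0.2 * x is mean-zero under
# P = N(0, 1), but has mean -0.4 under a shifted Q = N(2, 1).
x_P = rng.normal(0.0, 1.0, 100_000)
x_Q = rng.normal(2.0, 1.0, 100_000)
bias_P = abs(np.mean(tau_hat(x_P) - tau(x_P)))
bias_Q = abs(np.mean(tau_hat(x_Q) - tau(x_Q)))
print(f"|bias| under P: {bias_P:.3f}, under Q: {bias_Q:.3f}")
```

The estimate looks unbiased in-distribution, yet a deployment shift exposes a bias of roughly 0.4; controlling this quantity under unknown shifts is exactly the guarantee formalized next.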
Later on, our work will focus on providing guarantees on the conditional bias achieved by a CATE estimate $\hat{\tau}$, marginalized under a covariate distribution $X \sim Q_X$ that can be *different* from the distribution $X \sim P$ upon which the CATE estimate was trained:

$$|\operatorname{E}_{Q}[{\hat{\tau}}(X)-\tau(X)]|\qquad(1)$$

The bias is of course a component of the MSE: multi-accuracy methods provide guarantees on the absolute bias; later in Section 4 we extensively empirically evaluate the mean squared error as well. We write $Q_X(x)$, $P_X$, $P_{X_1}$, $P_{X_0}$ for the marginal distribution of $X$ under $Q$, the marginal distribution of $X$ under $P$, and the marginal distributions of $X \mid T = 1$ and $X \mid T = 0$ on the observed data, respectively. We also use $\operatorname{E}_P[\cdot]$, $\operatorname{E}_{P_1}[\cdot]$, $\operatorname{E}_{P_0}[\cdot]$ to denote marginalization over $X$ in the training data, $X \mid T = 1$, or $X \mid T = 0$, respectively. For brevity we write $\operatorname{E}_Q[\cdot]$ to denote expectations under the *unknown reference distribution* $Q_X$ on $X$ (which can be extended to accommodate shifts beyond the typical covariate shift assumption in a slight abuse of notation). For example, $\mu_t \in \arg\min_{\mu} \operatorname{E}_{P_t}[(Y-\mu(X))^{2}]$; e.g., by default, regression in each treatment arm minimizes the MSE under the covariate distribution of that arm.

Notation conventions. When, for example, describing the multi-accuracy criteria without reference to the dataset's distribution, we write $\operatorname{E}[\cdot]$ when referring to the distribution of the training data. We next introduce the shift scenarios (i.e., combinations of assumptions) under which we seek guarantees on CATE estimation. See Figure 1 for an informal illustration.

## 2.1 Robustness To Unknown Deployment Shifts

![4_image_0.png](4_image_0.png)

![4_image_1.png](4_image_1.png)

Figure 1: Schematic of setting 1 (external shift) and setting 2 (learning from large observational and small RCT data)

Setting 1 (Unknown external covariate shifts). Suppose Assumption 1, that unconfoundedness holds, and Assumption 2.
Consider valid likelihood ratios with respect to the treatment-arm marginal distributions of $X$ in the observational data:

$$\mathcal{L}_{1}:=\Big\{\tfrac{dQ_{X}(x)}{dP_{X_{1}}(x)}\colon Q_{X}\ll P_{X_{1}},\ \operatorname{E}_{P_1}\big[\tfrac{dQ_{X}(X)}{dP_{X_{1}}(X)}\big]=1\Big\}$$
$$\mathcal{L}_{0}:=\Big\{\tfrac{dQ_{X}(x)}{dP_{X_{0}}(x)}\colon Q_{X}\ll P_{X_{0}},\ \operatorname{E}_{P_0}\big[\tfrac{dQ_{X}(X)}{dP_{X_{0}}(X)}\big]=1\Big\}$$

We seek an estimator $\hat{\tau}(X)$ *with low bias under* $Q$: $|\operatorname{E}_Q[\hat{\tau}(X) - \tau(X)]| \leq \epsilon$.

We will call this type of unknown deployment shift an "external shift". This is analogous to the unknown shift setting studied in Kim et al. (2022), as well as other literature on unknown covariate shifts (Jeong and Namkoong, 2020; Subbaswamy et al., 2021; Hatt et al., 2021). In contrast to an extensive literature on transportability and external validity, we focus on the case of a priori *unknown* deployment shifts. If suitably nonparametric CATE estimation indeed recovered the Bayes-optimal predictor in finite samples, there would be no issue of unknown deployment shifts. But because in finite samples it generally does not, modifying estimation to protect against unknown deployment shifts can protect against misspecification and finite-sample issues. For example, misspecified CATE estimation is vulnerable to unknown covariate shift. The conventional mean-squared error can be nonzero even for the *Bayes-optimal* predictor $\mu_1^{*}(X) = \operatorname{E}[Y(1) \mid X]$. If the conditional bias or variance in $Y$ is heteroskedastic (i.e., varies in $x$), the *prediction MSE* changes as external shifts change the marginalizing covariate distribution. Later on we will use multi-accurate learning to post-process CATE estimates to ensure robustness against covariate shifts represented by a function class of likelihood ratios.

## 2.2 Unobserved Confounding: Observational Data With RCT

We consider a different setting where unknown covariate shifts may arise: a large observational dataset and a small randomized trial.
The observational study may be subject to unobserved confounding. On the other hand, the sample size of the randomized data may be small, so that learning conditional causal effects solely from the randomized data is unsupported. This regime is common in clinical settings, such as the parallel Women's Health Initiative observational study and clinical trial (Machens and Schmidt-Gollwitzer, 2003); see also (Colnet et al., 2020; Yang et al., 2020) and (Bareinboim and Pearl, 2013) for identification results for the related setting of data fusion. The data setting is as follows. The observational dataset may have been collected under unobserved confounders, $D^{*}_{obs} = (X, U, T, Y)$, but we only observe $D_{obs} = (X, T, Y)$. Hence unbiased causal estimation is not possible from the observational dataset alone. On the other hand, we also have a randomized controlled study, $D_{rct} = (X_r, U_r, T_r, Y_r)$. We summarize our assumptions about the sample sizes and shift regimes of the observational/randomized data below. The punchline is that multi-accuracy provides robustness against these potentially unknown shifts, under assumptions of well-specification of the test function class. (All shifts are in the "causal", rather than "anti-causal", setting.) In the below setting, we aim to learn a valid CATE estimator $\operatorname{E}[Y(1)-Y(0) \mid X]$ for the covariate distribution of the observational study or additional unknown covariate shifts.

Setting 2 (Observational and randomized study). *Assume Assumption 2. Suppose an observational dataset* $D_{obs}$, collected under violations of Assumption 1 (the observational data were collected under unobserved confounders), and a randomized dataset $D_r$, *where Assumption 1 holds (the randomized data are unconfounded).*

Assumption 3 (Small RCT, large observational study).
$$|D_{obs}| \gg |D_{r}|$$

Assumption 3 is not necessary for identification, but it describes the relevant regime where the method is helpful: if instead $|D_{r}| \gg |D_{obs}|$, unbiased CATE estimation is possible from the randomized data alone.

## 3 Method

## 3.1 Background On Estimation

Conditional average treatment effect estimation. We briefly discuss a few options for estimating $\tau$, upon which we will build multi-calibrated approaches in differing shift scenarios. The *S-learner* models the outcome given covariates and treatment indicator $(x, t)$; that is, the covariate vector is simply augmented with a column for the treatment indicator. The corresponding CATE estimator imputes the counterfactual outcome: $\hat{\tau}(x) = \hat{\mu}(x, 1) - \hat{\mu}(x, 0)$. The *T-learner* differences two regressions for the conditional means of $Y$ for treated and untreated:

$${\hat{\tau}}(x)={\hat{\mu}}_{1}(x)-{\hat{\mu}}_{0}(x).$$

Implicit in the definition of these methods, both of these basic approaches for CATE estimation learn predictive models $\mu_t(X)$ by minimizing the mean-squared error, evaluated over some distribution of covariates $X$. Namely, for the T-learner,

$$\mu_{t}\in{\underset{g}{\operatorname{arg\,min}}}\operatorname{E}_{P}[(Y-g(X))^{2}\mid T=t],\quad t\in\{0,1\}.$$

Although many other advanced machine learning and causal inference methods have been developed based on advanced estimating equations (Nie and Wager, 2020; Kennedy, 2023; Oprescu et al., 2019; Wager and Athey, 2018; Semenova and Chernozhukov, 2021) or other machine-learning adaptations (Shalit et al., 2017; Shi et al., 2019), we will first instantiate our post-processing method with the T-learner and describe how it can be used with more advanced methods based on pseudo-outcome regression (such as the DR-learner (Kennedy, 2020; Semenova and Chernozhukov, 2021)). Our meta-algorithm is based on post-processing CATE estimation (the T- or DR-learner) with algorithms for multi-calibration.
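A minimal T-learner sketch on synthetic randomized data may help fix ideas; this is our own illustration with linear outcome models, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(-1.0, 1.0, n)
t = rng.integers(0, 2, n)                       # randomized binary treatment
y = 2.0 * x * t + x + rng.normal(0.0, 0.1, n)   # implies true CATE tau(x) = 2x

# T-learner: fit one outcome regression per treatment arm, then difference.
mu1 = np.polyfit(x[t == 1], y[t == 1], deg=1)   # treated-arm regression
mu0 = np.polyfit(x[t == 0], y[t == 0], deg=1)   # control-arm regression
tau_hat = lambda z: np.polyval(mu1, z) - np.polyval(mu0, z)
# tau_hat(0.5) recovers the true tau(0.5) = 1 up to estimation noise
```

Note that each arm's regression implicitly minimizes MSE under that arm's covariate distribution, which is precisely the kind of distribution-dependence that multi-accurate post-processing is later used to correct.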
Next, we describe multi-calibration and its prior use for universal adaptability in causal inference for the average treatment effect.

Universal Adaptability via Multicalibration Recent work of Kim et al. (2022) introduced the concept of *universal adaptability* in the context of causal inference. Much work on inference under (external) covariate shift assumes that the shift is known at the time of estimation. Instead, universal adaptability estimates the ATE from a dataset, with an estimate that incurs small bias on any downstream covariate distribution, within a broad class of unknown shifts. The work of Kim et al. (2022) establishes the feasibility of universal adaptability via a connection to the notions of multi-calibration/multi-accuracy, originally introduced in the literature on algorithmic fairness (Hébert-Johnson et al., 2018). Following this line of research, we show how multi-accuracy can be used, off-the-shelf, to address unknown (external and internal) shifts in the context of CATE. The multi-calibration criterion was originally motivated to provide guarantees over a variety of subpopulations, such as valid calibration over arbitrary subgroups (Hébert-Johnson et al., 2018). The related, but somewhat weaker, notion of multi-accuracy ensures low prediction bias within arbitrary subgroups (Kim et al., 2019). Throughout this paper, we focus on multi-accuracy (although analogous results hold for the stronger criterion of multi-calibration).

Definition 1 (Multi-accuracy). For c(X) in a class of functions C, a predictor p̃ : X → [0, 1] is (C, α)-multi-accurate if

$$\operatorname*{max}_{c\in{\mathcal{C}}}|\operatorname{E}[(Y-{\tilde{p}}(X))c(X)]|\leq\alpha.$$

Interpretations depend on the specification of the function class C. When C is a class of subgroup indicator functions, C = {I[x ∈ C] : C ∈ C̃}, with C̃ a set of subsets of X, then the multi-accuracy criterion ensures low prediction bias over a rich set of subpopulations.
The class C̃ could indicate sublevel sets of functions with a finite VC-dimension. For example, if C is the space of all decision trees of depth 4, it has a finite VC-dimension and can describe complex subpopulations. A growing line of work has developed algorithms with guarantees to (approximately) satisfy multi-calibration and multi-accuracy criteria (Hébert-Johnson et al., 2018; Kim et al., 2019; Gopalan et al., 2022b; Pfisterer et al., 2021) via boosting. When initialized from scratch, multi-calibration/accuracy can be viewed as a learning algorithm, but it can also be used to post-process a given predictor, as we do in this paper. Our meta-algorithms leverage these existing algorithms for obtaining *multi-accurate* predictors by post-processing. Specifically, we use the MCBoost algorithm (Pfisterer et al., 2021); pseudocode is included in Algorithm 4. MCBoost (Pfisterer et al., 2021) takes as input a given initial predictor p, a test-function class C, an *approximation parameter* α, and post-processing datasets Dpost for calibration and validation. Later on in our meta-algorithms, to be concise, we will refer to this as running MCBoost(p, C, α, Dpost). As a brief summary, MCBoost is a boosting procedure that proceeds via a series of *auditing steps*: given the initial predictor p, it solves a least-squares problem over the calibration dataset to find a c ∈ C that maximizes correlation with the residuals. The predictor is then updated with a multiplicative-weights update based on the worst-case c. The process iterates until the total miscalibration or accuracy error drops below a stopping criterion. Next, we discuss the auditing step of this procedure in more detail.

Auditing In practice, evaluation of the multi-calibration or multi-accuracy criterion over discrete subgroups is implemented via an auditing step that is reminiscent of twicing (Tukey et al., 1977).
(In the conditional moment literature, these audit test functions would be called instrument functions). That is, the algorithm often audits over real-valued functions c(x) : X → R. These can be connected to the subpopulation motivation by viewing c(x) : X → [0, 1] as a relaxation of indicator functions, and real-valued functions as a rescaling of the former. (Later on, we will relate the real-valued weight functions directly to IPW weight functions (i.e. Riesz representers) in causal inference estimators). Given a predictor pk(x) at some iteration k of the algorithm, the auditing step learns a test function c that best correlates with the residual function pk(x) − y. Auditing and post-processing occur in a different held-out dataset: we will refer to this as the post-processing dataset, including both calibration and validation sets. If the multi-accuracy criterion is not met for this test function c ∈ C, the algorithm takes a boosting step and adds a multiplicative update with this test function. If the multi-accuracy criterion is met, the algorithm terminates.

Algorithm 1 Multi-accuracy for CATE estimation for Setting 1, unknown covariate shifts
1: Input: D = {(Xi, Ti, Yi)}ni=1 unconfounded data, F auditor function class, G function class for outcome functions.
2: Split D into Dest and Dpost.
3: Fit treatment-conditional outcome functions from the observational dataset Dest: µ̂t(x) ← arg ming∈G E[(g − Y)² | T = t], for t ∈ {0, 1}.
4: Post-process µ̂t(X) for t ∈ {0, 1} by multi-accuracy: µ̃t(x) ← MCBoost(µ̂t, α, F, Dtpost), where Dtpost is the subset of Dpost where I[T = t], so that maxf∈F |EP[f(X) · (Y − µ̃t(X)) | T = t]| ≤ α.
5: Return τ̃(x) = µ̃1(x) − µ̃0(x).

Definition 2 (Multi-accuracy auditing). Let α > 0, m ∈ N. Suppose Dpost ∼ D is a set of independent samples. A hypothesis p̃ : X → [0, 1] passes α-multi-accuracy auditing if, for h ∈ arg min Epost[((p̃(x) − y) − h(x))²],

$$|\operatorname{E}[(Y-{\tilde{p}}(X))h(X)]|\leq\alpha.$$

Remark 1 (Relation to conditional moment restrictions). A reader in causal inference or econometrics may notice connections to conditional moment formulations. We expect that our later analysis, which is focused on multi-calibration/accuracy algorithms, also holds for adversarial formulations of conditional moments. (For example, Greenfeld and Shalit (2020) observe that adversarial moment conditions, in their case HSIC for independence of residuals, imply robustness to covariate shift).

(Meta)-Algorithm We describe the meta-learner in Algorithm 1. We learn CATE estimates based on the T-learner (i.e. differencing outcome regression models). Then the multi-accurate CATE estimate is:

$$\tilde{\tau}(X)=\tilde{\mu}_{1}(X)-\tilde{\mu}_{0}(X)\tag{2}$$

It also admits a natural regression-adjustment estimate for the ATE:

$$\operatorname{E}[{\tilde{\tau}}(X)].$$

## 3.2 Warmup: Multi-calibration, Universal Adaptability, and the ATE

As a precursor to introducing our method for robust CATE estimation with multi-calibration, we introduce properties of multi-calibration/multi-accuracy algorithms for estimation of the CATE and ATE. We begin with a result about how multi-calibrated predictors imply identification of the ATE under weaker functional specification conditions via regression adjustment. In the next subsection we show how these properties can also imply robust identification of the ATE and CATE under external covariate shifts.

Target-independent identification of the ATE under unconfoundedness Our first exposition, when unconfoundedness holds, shows that multi-calibration/multi-accuracy can be viewed as finding a boosted predictor whose marginalization satisfies estimating equations for the average treatment effect.
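The audit-then-update loop described above can be sketched as follows. As a simplification, this sketch uses an additive update in place of MCBoost's multiplicative-weights update, and a linear auditor class fitted by least squares; all function and variable names are our own, not the paper's:

```python
import numpy as np

def audit(X, resid):
    """Least-squares audit: best linear test function h(x) ~ resid."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return A @ coef

def multiaccuracy_boost(p, X, Y, alpha=1e-3, eta=1.0, iters=50):
    """Additive sketch of MCBoost-style post-processing: audit, update, repeat."""
    p = p.copy()
    for _ in range(iters):
        h = audit(X, Y - p)
        if abs(np.mean((Y - p) * h)) <= alpha:   # multi-accuracy criterion met
            break
        p += eta * h                             # boosting step on worst-case h
    return p

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
Y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

p0 = np.zeros(n)                                 # badly biased initial predictor
p_ma = multiaccuracy_boost(p0, X, Y)
# residuals are now nearly uncorrelated with every linear test function
print(np.max(np.abs((Y - p_ma) @ np.column_stack([np.ones(n), X]) / n)))
```

With a linear auditor class, a single boosting step makes the residuals orthogonal to the linear span of the features, so the audit value collapses and the loop terminates; richer auditor classes (trees, ridge regression, as used later in the paper) require more iterations.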
In addition, multi-calibration/multi-accuracy as an algorithmic scheme *expands* the functional complexity of the original predictor it is initialized with. Interestingly, regression adjustment with a multi-calibrated/multi-accurate predictor approximates the doubly-robust estimator for the ATE, and hence is consistent if either the original predictor is well-specified, the inverse propensity score is within the auditor function class, or the prediction function is within the *expanded* function class output by multi-calibration/multi-accuracy. The doubly-robust augmented inverse-propensity weighting estimator (AIPW) is a canonical estimator highlighting improved estimation opportunities for causal inference (Robins et al., 1994). It has the following form, for a given outcome and propensity model µ, e:

$$\operatorname{E}[Y(1)-Y(0)]=\sum_{t\in\{0,1\}}(2t-1)\operatorname{E}\left[{\frac{\mathbb{I}[T=t]}{e_{t}(X)}}\left(Y-\mu_{t}(X)\right)+\mu_{t}(X)\right].$$

It enjoys improved estimation properties, such as the mixed-bias property (only requiring one of the outcome or propensity model to be consistent for consistent estimation of the ATE) or rate double-robustness. We can characterize multi-accurate learning with an auditor function class containing the inverse propensity score as an approximation of the AIPW estimate. As a note, the statements below hold up to an additional misspecification error (as shown in Kim et al. (2022)). Because the auditor function class is typically large (i.e. contains functions beyond the inverse propensity score), this is a "robust" way to conduct doubly-robust estimation.

Proposition 1 (Multi-accuracy implies robust estimation of the ATE). Consider an auditor class H that is closed under affine transformation. Assume unconfoundedness holds.
Consider the estimator E[τ̃(X)], where τ̃(x) is the output of Algorithm 1 with auditor class H, approximation parameter α, initial outcome model estimators µ̂1, µ̂0, and Dpost from the same distribution as the data. If at least one of the following is true: (1) the original outcome models µ̂1(x), µ̂0(x) are consistent estimators, (2) 1/e1(X), 1/(1 − e1(X)) ∈ H, or (3) if using multi-accuracy, the true µ1(x), µ0(x) are in the linear span of G + conv(H), then

$$\left|\operatorname{E}\left[{\tilde{\tau}}(X)\right]-\operatorname{E}[Y(1)-Y(0)]\right|=\left|\operatorname{E}\left[{\tilde{\mu}}_{1}(X)-{\tilde{\mu}}_{0}(X)\right]-\operatorname{E}[Y(1)-Y(0)]\right|\leq2\alpha,$$

i.e. we obtain 2α-consistent estimation of the ATE.

This proposition connects the use of multi-accurate estimation to doubly-robust estimates and therefore establishes variance reduction properties, which is important because the multi-accuracy criterion itself is characterized via bias reduction on subgroups alone, without directly discussing the mean-squared error or estimation variance.

Identification of the ATE under universal adaptability We next recall robust identification properties of the ATE under potential violations of unconfoundedness. This property was called "universal adaptability" in Kim et al. (2022), which studied missing data under unknown shifts; it directly implies robust identification for causal inference.
If we had known the true propensity score function e1(X, U), we would obtain identification with respect to the observable marginalization E[1/e1(X, U) | X]:

$$\operatorname{E}[Y(1)]=\operatorname{E}\bigl[\operatorname{E}[Y\,\mathbb{I}[T=1]\mid X,U]\cdot1/e_{1}(X,U)\bigr]=\operatorname{E}\bigl[Y\,\mathbb{I}[T=1]\,\operatorname{E}[1/e_{1}(X,U)\mid X]\bigr]=\operatorname{E}[Y\,\mathbb{I}[T=1]\,W_{1}^{*}(X)]$$

where in the first equality we apply ignorability conditional on (X, U) and iterated expectations to obtain identification via the observable marginalization

$$W_{1}^{*}(X):=\operatorname{E}[1/e_{1}(X,U)\mid X].\tag{3}$$

(Note that 1/e1(X) ≠ E[1/e1(X, U) | X] due to Jensen's inequality). Robust identification of the ATE follows from Equation (3), i.e. that H contains (approximately) E[1/e1(X, U) | X]. Essentially, assuming that the auditor function class contains the identifying (unknown) weight, we can re-interpret the multi-accuracy criterion as an approximation of adversarial IPW.

Corollary 1. Suppose that W∗1(X), W∗0(X) ∈ H. Run Algorithm 1 on D (possibly with unobserved confounders) over auditor function class H and outcome function class G to obtain τ̃(X) = µ̃1(X) − µ̃0(X). Then

$$|\operatorname{E}[{\tilde{\tau}}(X)]-\operatorname{E}[Y(1)-Y(0)]|\leq2\alpha\tag{4}$$

The result follows from the multi-accuracy criterion, which implies that |E[I[T = t]W∗t(X)(Y − µ̃t(X))]| ≤ α, which obtains identification as in Equation (3) together with the triangle inequality. Of course, we have not gained identification for free: we *cannot* verify the assumption that W∗1(X), W∗0(X) ∈ H from observational data alone, just as we *cannot* test the unconfoundedness assumption from data alone. However, multi-calibration methods already work with quite flexible function classes, which could be nonparametric (RKHS, etc.).
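The Jensen's-inequality remark above, that 1/e1(X) differs from E[1/e1(X, U) | X], can be checked with a toy calculation; the numbers are purely illustrative:

```python
import numpy as np

# At some fixed X, two equally likely values of the unobserved confounder U
# give propensities e1(X, U) of 0.2 and 0.8.
e = np.array([0.2, 0.8])

w_star = np.mean(1.0 / e)   # W1*(X) = E[1/e1(X, U) | X] = (5 + 1.25) / 2 = 3.125
naive = 1.0 / np.mean(e)    # 1/e1(X) = 1/E[e1(X, U) | X] = 1/0.5 = 2.0

print(w_star, naive)        # 3.125 2.0 — the naive weight understates W1*
```

Since 1/x is convex, Jensen's inequality forces E[1/e1(X, U) | X] ≥ 1/E[e1(X, U) | X], with equality only when the propensity does not depend on U; the gap is exactly what the auditor class must capture for robust identification.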
This is how multi-calibration confers general robustness to distribution shift, whether from the data-generating process, such as unobserved confounders, or from external covariate shifts at the time of deployment.

## 3.3 External Validity: Unknown Deployment Shift

Identification under Setting 1 The robust identification argument for "universal adaptability" reinterprets the test functions c(X) ∈ C as potential adversarial likelihood ratios for distribution shift. However, the same properties of multi-accuracy also imply robustness to external shift. In this subsection, we indeed suppose Assumption 1, unconfoundedness. Recall that our goal was to control the predictive bias on a reference covariate distribution QX, potentially unknown, |EQ[τ̂(X) − τ(X)]|. Note that each of µ1, µ0 is learned on a treatment-conditional distribution, so the valid likelihood ratio, which we denote wt(x), is defined as:

$$w_{t}(x)={\frac{d Q_{X}(x)}{d P_{X_{t}}(x)}}={\frac{d Q_{X}(x)}{d P_{X}(x)}}\,{\frac{P(T=t)}{e_{t}(x)}}\tag{5}$$

Obtaining robust identification for a "universally adaptable" CATE function instead interprets adversarial test functions as a product function class F = C × H for *both* the subpopulations that identify CATE and the adversarial likelihood ratio function. Our next proposition gives conditions on the weight functions wt ∈ H, t ∈ {0, 1}, to satisfy robust CATE estimation under unknown covariate shifts.

Proposition 2. Suppose Assumptions 1 and 2. Let C denote a test function class for subgroup membership and H a test function class for likelihood ratios.
Then multi-accuracy of the T-learner CATE estimate obtained by running Algorithm 1 implies that, for all reference covariate distributions Q such that the likelihood ratios w1, w0 ∈ H,

$$\forall c\in{\mathcal{C}},\ |\mathrm{E}_{Q}[\{{\tilde{\tau}}(X)-(Y(1)-Y(0))\}c(X)]|\leq2\alpha$$

Because the guarantee holds for all functions f ∈ F, it holds for complex subpopulations c(X) and vacuous likelihood ratios with h(X) = 1, as well as the inverse: complex h(X) and vacuous subgroups (i.e. c(x) = 1). Our assumption is that F is sufficiently well-specified to cover the product of these relevant functions, but we are generally agnostic as to the precise complexity of its constituent classes C, H. And, in practice, following the algorithmic implementation of MCBoost, we work with auditor function classes such as ridge regression, rather than direct products of subpopulations and other test functions. Observe that although similar arguments apply, obtaining conditional guarantees for CATE estimation requires a richer test function class than for universal adaptability of the ATE alone. This illustrates that learning the CATE is indeed statistically harder than the "universal adaptability" of the ATE that was studied in Kim et al. (2022). For CATE estimation, we need to choose a richer auditor function class than we would for ATE estimation.

## 3.3.1 Extension To CATE Pseudo-Outcome Regression

A natural question given our work on the T-learner is whether we can provide similar guarantees for an estimation-improved CATE learner, since the T-learner generally does not enjoy any improved estimation properties in causal inference, whereas causal inference and machine learning have developed many improved orthogonal/semiparametrically efficient procedures such as (but not limited to) the R-learner (Nie and Wager, 2020), the DR-learner (Kennedy, 2023), or other machine-learning adaptations (Wager and Athey, 2018; Shalit et al., 2017).
Namely, some CATE estimation procedures give a pseudo-outcome φ(O; e, µ), where O denotes the data tuple, i.e. O = (X, T, Y), such that E[φ(O; e, µ) | X] = τ(X). (It is designated a pseudo-outcome because regressing upon it identifies the CATE or functional of interest, although it is not exactly the outcome itself). One such pseudo-outcome is the doubly-robust score. Pseudo-outcome regression of it as a CATE estimator was recently studied in Semenova and Chernozhukov (2021); Kennedy (2020).

$${\hat{\varphi}}(O;{\hat{e}},{\hat{\mu}})={\frac{T-e_{1}(X)}{e_{1}(X)\{1-e_{1}(X)\}}}\left(Y-{\hat{\mu}}_{T}(X)\right)+{\hat{\mu}}_{1}(X)-{\hat{\mu}}_{0}(X)\tag{6}$$

Regressing upon pseudo-outcomes with favorable properties like orthogonality therefore confers these favorable properties to the estimated functional, such as improved statistical rates of convergence. Our arguments for external validity naturally extend to pseudo-outcome based CATE regression, so long as the pseudo-outcome's conditional expectation is the CATE function. We multi-calibrate the pseudo-outcome regression step. That is, we learn τ̃ such that:

$$|\operatorname{E}[\{{\hat{\varphi}}(O;{\hat{e}},{\hat{\mu}})-{\tilde{\tau}}(X)\}f(X)]|\leq\epsilon,\ \forall f\in{\mathcal{F}}$$

Next, we instantiate such a procedure when the pseudo-outcome is the doubly-robust score.

Multi-accurate DR-learner We give the algorithm for obtaining a multi-accurate DR-learner estimate in Algorithm 2. To summarize: we need four folds of data (D1a, D1b, D2, D3); the first three for sample-splitting of the nuisance estimates and pseudo-outcome evaluation, and the last for validation/calibration for MCBoost. Estimate the nuisance functions on the first two folds D1a, D1b; on D2, evaluate the pseudo-outcome value φ̂(O; ê, µ̂) and regress τ̂(x) = Ên{φ̂(O; ê, µ̂) | X = x}. Finally, we conduct post-processing via multi-accurate learning upon the DR-learner estimate τ̂, to obtain a multi-accurate τ̃.
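A minimal numpy sketch of the doubly-robust pseudo-outcome in Equation (6), evaluated with the true nuisance functions in a simulation; the data-generating process and all names are illustrative:

```python
import numpy as np

def dr_pseudo_outcome(Y, T, e1, mu0, mu1):
    """Doubly-robust score phi_hat(O; e, mu), cf. Equation (6)."""
    mu_T = np.where(T == 1, mu1, mu0)
    return (T - e1) / (e1 * (1.0 - e1)) * (Y - mu_T) + mu1 - mu0

# Illustrative simulation with known nuisances
rng = np.random.default_rng(2)
n = 100000
X = rng.normal(size=n)
e1 = 1.0 / (1.0 + np.exp(-X))        # true propensity score
T = rng.binomial(1, e1)
tau = 1.0 + X                        # true CATE, so ATE = E[1 + X] = 1
mu0 = 0.5 * X                        # true untreated outcome model
mu1 = mu0 + tau                      # true treated outcome model
Y = np.where(T == 1, mu1, mu0) + rng.normal(size=n)

phi = dr_pseudo_outcome(Y, T, e1, mu0, mu1)
print(phi.mean())                    # approximately 1, the true ATE
```

With the true nuisances, E[φ̂ | X] = τ(X), so the sample mean of the pseudo-outcome approximates the ATE; regressing φ̂ on X (and then post-processing with MCBoost, as in Algorithm 2) yields the CATE estimate.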
Again we will interpret the input auditor function class F = C × H as a product function class of subgroup envelope functions c ∈ C and likelihood ratios h ∈ H. (Likelihood ratios are assumed to transport from the marginal distribution of X to the new distribution). Then, (robust) identification of the predictions follows exactly as in Proposition 2. Proposition 1 establishes that under specification assumptions, the multi-accurate regression adjustment estimator is (robustly) equivalent to the doubly-robust estimator up to ϵ approximation error, connecting multi-calibration with doubly-robust estimation. This implies basic (robust) doubly-robust properties of the multi-accurate T-learner. We now strengthen this connection by showing that multi-accurate post-processing of the T-learner over a *richer* function class (containing the true propensity score, and additional functions) implies that µ̃t is also a *multi-accurate* estimate of the DR-learner over the additional functions.

Proposition 3. Suppose that µ̃t(x) ← MCBoost(µ̂t, α, F, Dpost), over an auditor function class F such that Ft ⊆ F, where Ft = {1, I[T = t]/et(X)} × C × H. That is, µ̃1 − µ̃0 comprise an α-multi-accurate T-learner. Then

$$\Big\vert\operatorname*{max}_{c\times h\in{\mathcal{C}}\times{\mathcal{H}}}\operatorname{E}[\{\varphi(e,{\hat{\mu}})-\tau(X)\}c(X)h(X)]-\operatorname*{max}_{c\times h\in{\mathcal{C}}\times{\mathcal{H}}}\operatorname{E}[\{{\hat{\tau}}-\tau(X)\}c(X)h(X)]\Big\vert\leq2\alpha$$

Algorithm 2 Multi-accurate DR-learner (Equation (6)) for unknown covariate shift
1: Input: (D1a, D1b, D2, D3) four independent samples of n observations of Oi = (Xi, Ti, Yi) (D3 can be smaller). Auditor function class F, approximation parameter α.
2: Learn nuisance functions: Estimate propensity scores êt on D1a. Estimate outcomes (µ̂0, µ̂1) on D1b.
3: Pseudo-outcome regression: Construct the pseudo-outcome, which takes as input an observation O = (X, T, Y) and nuisance functions ê, µ̂,

$${\hat{\varphi}}(O;{\hat{e}},{\hat{\mu}})={\frac{T-e_{1}(X)}{e_{1}(X)\{1-e_{1}(X)\}}}\left(Y-{\hat{\mu}}_{T}(X)\right)+{\hat{\mu}}_{1}(X)-{\hat{\mu}}_{0}(X)$$

and regress it on covariates X in the test sample D2 to obtain τ̂dr.
4: Post-process pseudo-outcome regression: run MCBoost(τ̂dr, F, α, D3) to obtain a multi-accurate τ̃ such that

$$|\operatorname{E}[\{{\hat{\varphi}}({\hat{e}},{\hat{\mu}})-{\tilde{\tau}}(X)\}f(X)]|\leq\epsilon,\ \forall f\in{\mathcal{F}}$$

5: Cross-fitting (optional)1

That is, the multi-accurate T-learner µ̃1 − µ̃0 is, up to 2α additive approximation error, a multi-accurate DR-learner with outcome model µ̃, post-processed over the function class C × H.

Proof. Consider a function class richer than that needed for Proposition 2. Define

$${\overline{{\mathcal{F}}}}_{t}=\left\{1,{\frac{\mathbb{I}[T=t]}{e_{t}(X)}}\right\}\times{\mathcal{C}}\times{\mathcal{H}}$$

Consider a multi-accurate T-learner τ̃ = µ̃1 − µ̃0 where each µ̃t is α-multi-accurate over an auditor function class Ft such that F̄t ⊆ Ft. Note that

$$\max_{c\times h\in\mathcal{C}\times\mathcal{H}}\Big|\operatorname{E}\Big[\Big\{\frac{(Y-\tilde{\mu}_{1}(X))\mathbb{I}[T=1]}{e_{1}(X)}+\frac{(Y-\tilde{\mu}_{0}(X))\mathbb{I}[T=0]}{e_{0}(X)}+\tilde{\mu}_{1}(X)-\tilde{\mu}_{0}(X)-\tau(X)\Big\}c(X)h(X)\Big]\Big|$$
$$-\max_{c\times h\in\mathcal{C}\times\mathcal{H}}\Big|\operatorname{E}\big[\{\tilde{\mu}_{1}(X)-\tilde{\mu}_{0}(X)-\tau(X)\}c(X)h(X)\big]\Big|$$
$$\leq\sum_{t\in\{0,1\}}\max_{c\times h\in\mathcal{C}\times\mathcal{H}}\Big|\operatorname{E}\Big[\frac{(Y-\tilde{\mu}_{t}(X))\mathbb{I}[T=t]}{e_{t}(X)}c(X)h(X)\Big]\Big|\leq2\alpha$$

by the triangle inequality and multi-accuracy of µ̃t over the richer function class. The interpretation is that post-processing a simple T-learner for multi-accuracy over a richer (yet well-specified) function class can approximate a *DR-learner* that was post-processed for multi-accuracy over a weaker function class. The population criterion for multi-accuracy confers some nonparametric robustness to bias over the specified test function class.
Although this is a different estimation approach than causal machine learning estimates, we relate them formally here, and investigate empirically and thoroughly in Section 4. So, although multi-accurate post-processing of a T-learner appears on its face as a basic CATE estimator, in fact, the judicious choice of a richer function class for post-processing can approximate a more advanced estimator. Interestingly, concurrent with the preparation of this work, Bruns-Smith et al. (2023) study augmented balancing weights and find a certain target-independent property of the augmented estimator related to the universal adaptability of Kim et al. (2022). Studying these connections further would be an interesting direction for future work.

Algorithm 3 Multi-accuracy for CATE estimation for calibrating CATE on small Randomized Controlled Trial data
1: Input: Dobs = (X, T, Y) confounded observational data, Drct = (X, T, Y) unconfounded randomized data, F auditor function class, G function class for outcome functions.
2: Fit treatment-conditional outcome functions from the observational dataset: µ̂t(x) ← arg ming∈G Eobs[(g − Y)² | T = t], for t ∈ {0, 1}.
3: For t ∈ {0, 1}, use multi-accurate learning with Drct as validation set, i.e. µ̃t(x) ← MCBoost(µ̂t(x), F, α, Drct).
4: Return τ̃(x) = µ̃1(x) − µ̃0(x).

## 3.4 Observational And Randomized Data (Setting 2)

(Meta)-Algorithm In this setting, we learn confounded outcome regressions from the observational data. We use the smaller randomized controlled trial data as *post-processing* datasets in MCBoost (the boosting paradigm for multi-calibrated and multi-accurate predictors). In Algorithm 3 we describe the meta-algorithm.

Identification of CATE Identification of the CATE follows by interpreting the auditing functions c(X) ∈ C as subpopulations. Achieving multi-accuracy on the RCT data hence identifies the CATE.
That is, multi-accuracy assures us that |E[(Y − µ̃t(X))c(X) | T = t]| ≤ α, ∀c(X) ∈ C, and we can evaluate this criterion on the unconfounded RCT data. On the unconfounded RCT data, we indeed have that E[Y | X, T = t] = E[Y(t) | X], so that the T-learner identifies the CATE. The intuition for why our meta-algorithm improves upon directly running the T-learner on the randomized data alone is that we can learn a low-variance, high-bias (due to unobserved confounding) estimate of the true outcome model E[Y(t) | X] by outcome modeling on the observational data to obtain Eobs[Y | T = 1, X]. On the other hand, although randomized data is available, the finite-sample estimate of Erct[Y | T = 1, X] can be high-variance (though unbiased) under Assumption 3. We do note that the analysis of the boosting algorithm in Hébert-Johnson et al. (2018) is not tight enough to provably show faster convergence from warm-starting on the confounded regressions on the observational data, relative to multi-calibrating on the randomized data alone. However, we show benefits in later experiments.

Identification of target-independent CATE In complete analogy to the external shift setting, changing our interpretation of the target functions allows us to infer robustness to external shifts. Multi-accuracy ensures that, for all reference covariate distributions QX such that the likelihood ratios wt(x) ∈ H, t ∈ {0, 1}, running Algorithm 3 with auditor function class F = C × H results in a multi-accurate and deployment-shift robust CATE estimate:

$$\forall c\in{\mathcal{C}},\ |\mathrm{E}_{Q}[\{{\tilde{\tau}}(X)-(Y(1)-Y(0))\}c(X)]|\leq2\alpha.$$

Multi-accuracy vs. multi-calibration Throughout this paper, we have focused on the weaker multi-accuracy criterion (Kim et al., 2019), which is nonetheless sufficient for the robustness guarantees we seek for external validity and robustness to shifts. On the other hand, the stronger criterion of multi-calibration is somewhat more often studied in the literature.
In this paper, we *do not* investigate the *calibration* properties of heterogeneous treatment effects, focusing instead on *robustness to external shift*. On the other hand, these are not completely unrelated: with slightly different notions, Wald et al. (2021) studies connections between multi-domain calibration and out-of-distribution generalization (without a focus on causal estimation). Other papers have investigated calibration for heterogeneous treatment effects (Xu and Yadlowsky, 2022), including orthogonalized isotonic regression (Van Der Laan et al., 2023). Our method of learning the multi-accurate DR-learner and other pseudo-outcome learners could also be used with multi-calibration instead of multi-accuracy. We leave a thorough exploration of this for future work.

## 4 Experiments

We previously provided identification arguments and meta-algorithms for leveraging multi-accurate learning to learn the CATE subject to unknown covariate shifts. To be sure, modern estimation of the CATE prescribes nonparametric estimation that, in the infinite-data limit, is immune to external covariate shifts if CATE estimation recovers the Bayes-optimal predictor. Real-world datasets, however, are often smaller, so our methods can improve robustness in finite samples. To illustrate this, we conduct extensive empirical studies, testing our proposed multi-calibrated CATE estimation algorithms against other CATE learners in simulation scenarios that follow the previously introduced settings: unknown deployment shifts (Setting 1) and observational data with an RCT (Setting 2).

## 4.1 Simulations

For both settings, we simulate data according to pre-specified propensity score functions, true outcome functions, and external shift functions with different degrees of complexity. For the external shift setting (simulations 1a and 1b), we assume access to training data from an observational study without unobserved confounding and a small auditing sample from the same distribution.
The test data used for evaluation, however, are externally shifted by deliberately sampling from the original distribution with weights given by the shift function (and different shift intensities). In this setting, we implement two simulations that differ in the complexity of the true CATE and propensity score functions (simulation 1a: linear CATE, beta confounding; simulation 1b: full linear CATE, logistic confounding). In the joint observational/RCT setting, we assume access to both a large observational training data set and a small RCT, and an external shift between both data sources. We implement two simulations in this setting that differ in whether both data sources (simulation 2a: confounded observational data and RCT) or only the observational training data (simulation 2b: total shift between observational data and RCT) are affected by unobserved confounding. The test data used for evaluation follows the covariate distribution of the observational training data. In simulation 2a, the problem is that of covariate shift alone, while in simulation 2b, the underlying conditional model (the true E[Y(t) | X]) also changes. Simulation 2b illustrates the usefulness of the framework to simultaneously handle a variety of shifts. We present the simulation framework and comparisons to a broader set of baseline methods in the supplementary material (D.2).

Methods. In both simulation settings, we use causal forests (CForest-OS) and random forest-based T-learner (T-learner-OS) and DR-learner (DR-learner-OS) trained on the observational training data as benchmark methods. T-learner-OS and DR-learner-OS also serve as the input for post-processing with MCBoost using ridge regression on the auditing data in simulations 1a and 1b (T-learner-MC-Ridge, DR-learner-MC-Ridge). In simulations 2a and 2b, post-processing is implemented with ridge regression-based auditing on the RCT data.
In these simulations, we present causal forests trained on the RCT data (CForest-CT) as an additional baseline. To prevent overfitting with small auditing data, we regularized multi-accuracy boosting using small learning rates and a limited, fixed number of boosting iterations (see Table 2 in D.2). Comparing to T-learner-OS establishes the robustness benefits of our methods. On the other hand, our do-no-harm property holds with respect to the MSE of the best-in-class T-learner. Comparing to CForest-OS allows us to assess robustness-efficiency tradeoffs. CForest-OS is a representative state-of-the-art method that leverages the causal structure and modifies the estimation procedure of random forests; it is a very strong comparison point, but also very data-hungry. In contrast, our post-processing approach does not modify the estimation procedure. An interesting direction for future work is to achieve the robust bias guarantees of multi-calibration with other variance-reduced CATE estimators.

Results. We evaluate the outlined methods with respect to the MSE of the estimated CATE in the test data in Figure 2 (external shift) and Figure 3 (observational data with RCT), over different sizes of initial training datasets and different intensities of covariate shift. The results of simulation 1a (Figure 2a) highlight how post-processing robustifies the initial T-learner and consistently improves over T-learner-OS in scenarios with moderate and strong external shift. When the observational training data is small, the multi-accurate T-learner also outperforms causal forests in these scenarios. With small training data, we see similar improvements of the multi-accurate DR-learner over DR-learner-OS. As the training data size increases, the naive DR-learner becomes more competitive and post-processing yields smaller gains.
In simulation 1b (Figure 2b), the more complex CATE function leads to higher MSE overall, while the previously observed pattern persists: the multi-accurate T-learner consistently improves over the naive T-learner, particularly under distribution shift. Our approach, DR-learner-MC-Ridge, is best in settings with strong unknown external shift and small dataset size: it then outperforms *both* the T-learner and the causal forest. Larger dataset sizes permit estimation over richer function classes, and the methods become asymptotically equivalent. We compare our approach to additional baselines, including shift-reweighted causal forests and T- and DR-learners, in the supplementary material D.2.2. Figure 3a shows that in simulation 2a, learning from both the observational training data and a small RCT via multi-accuracy boosting is beneficial across scenarios. The multi-accurate T-learner and DR-learner considerably improve over T-learner-OS and DR-learner-OS, and in particular T-learner-MC-Ridge is competitive with CForest-OS. The improvement from multi-accuracy boosting can also be observed when post-processing was conducted with externally shifted RCT data and is similarly prominent for both T- and DR-learner in the "total shift" setting (Figure 3b). In both simulations, learning directly on the RCT data (CForest-CT) is only a viable option in the absence of a shift in the covariate distribution of the evaluation distribution (i.e. when only deploying on the smaller RCT population), and can incur considerable error otherwise. For results of additional CATE estimation techniques, see the supplementary material D.2.3.

## 4.2 WHI Data Application

We next present a case study that draws on data from the Women's Health Initiative (WHI) studies (Machens and Schmidt-Gollwitzer, 2003). The WHI includes a large observational study and clinical trial data to investigate the effectiveness of hormone replacement therapy (HRT) in preventing the onset of chronic diseases.
As the observational study has been suspected to suffer from various (unobserved) confounding phenomena (Kallus and Zhou, 2018), we study how utilizing both data sources in combination via multi-accuracy boosting compares to learning the CATE from the observational or clinical trial data only. We focus on the effect of HRT treatment on systolic blood pressure and use age and ethnicity as covariates. Implementation details and results with an extended set of covariates are presented in the supplementary material D.3.

Methods and Results. We subsample the observational data to train causal forests (CForest-OS) and initial T-learner (T-learner-OS) and DR-learner (DR-learner-OS). We further sample from the clinical trial data to create CT training data with different sample sizes to post-process the initial T- and DR-learner with MCBoost using ridge regression (T-learner-MC-Ridge, DR-learner-MC-Ridge). We also train causal forests solely on the CT training data as an additional (strong) baseline (CForest-CT). Another sample from the CT data serves as the test set, with which we infer the (unobserved) "true" CATE using an elastic net-based R-learner (Nie and Wager, 2020) and estimate the ATE as evaluation benchmarks. We evaluate the bias of the estimated ATE and the MSE of the estimated CATE in Figure 4. Figure 4a shows how post-processing an initial T- and DR-learner with CT data can reduce bias, even if the auditing data is small. We similarly see improvements in MSE when comparing the multi-accurate learners to T-learner-OS and DR-learner-OS in Figure 4b. T-learner-MC-Ridge additionally improves over CForest-OS. Training only in the CT data leads to ATE estimates with low bias, but the MSE of CForest-CT is not competitive when model training is based on CT data with small sample sizes. Further results are presented in supplementary material D.3.
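The T-learner baselines used throughout follow the standard two-regression construction. The following is a minimal sketch of that construction, not our actual pipeline: ordinary least squares stands in for the random forests used in our experiments, and all function names are ours.

```python
import numpy as np

def fit_ls(X, y):
    """Ordinary least squares with intercept; returns a callable predictor.
    (A stand-in for the random-forest base learners used in the experiments.)"""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w

def t_learner(X, t, y, fit=fit_ls):
    """T-learner: fit separate outcome regressions per treatment arm and
    return tau_hat(x) = mu1_hat(x) - mu0_hat(x)."""
    mu0 = fit(X[t == 0], y[t == 0])
    mu1 = fit(X[t == 1], y[t == 1])
    return lambda Z: mu1(Z) - mu0(Z)

# Toy check on a linear DGP whose true CATE is tau(x) = 1 + 2 * x2
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
t = rng.binomial(1, 0.5, size=4000)
y = X[:, 0] + t * (1 + 2 * X[:, 1]) + rng.normal(0, 0.5, size=4000)
tau_hat = t_learner(X, t, y)
```

The estimated `tau_hat` can then be post-processed on auditing data, exactly as the initial T-learner is post-processed with MCBoost above.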
![15_image_0.png](15_image_0.png)

![15_image_1.png](15_image_1.png)

(b) Simulation 1b (full linear CATE, logistic confounding)

Figure 2: Average MSE of CATE estimation by shift intensity and training set size for post-processed (multi-calibrated) T- and DR-learner and benchmark methods in simulation studies (external shift setting).

## 5 Related Work: Further Discussion

A popular approach for handling unknown shifts is to enforce robustness against a family of covariate shifts (e.g., unknown shifts parametrized by unknown covariate shift functions) (Liu and Ziebart, 2014; Wen et al., 2014; Chen et al., 2016). The goal is to find a robust hypothesis that minimizes the worst-case prediction risk (for example, squared error) evaluated with respect to unknown shifts within some class of covariate shifts. Parametrizations include distributionally robust optimization or linear basis functions. The work of Greenfeld and Shalit (2020) is motivated differently and penalizes with a Hilbert-Schmidt Independence Criterion loss; they show this implies some robustness to covariate shift. While most of this work is in the generic prediction setting, recent work also assesses the ATE under covariate shift via distributionally robust optimization (Subbaswamy et al., 2021), use of the marginal sensitivity model for external shifts (Hatt et al., 2021), or variational characterizations of coherent risk measures (Jeong and Namkoong, 2020). Methodologically, some of this work is similar to work in causal inference that studies unobserved confounding under the lens of robust optimization over adversarial likelihood ratios in some ambiguity set (Kallus et al., 2018a; Kallus and Zhou, 2021; 2020; Dorn et al., 2021; Zhao et al., 2019; Bruns-Smith and Zhou, 2023; Yadlowsky et al., 2018; Tan, 2006).
This highlights the broad simultaneous interpretations of adversarial weight functions for handling unobserved confounding (in the generation of the data) in addition to robust adversarial covariate shifts (in the deployment of the predictor). The approach based on multi-accuracy boosting, although it can be stated as a similar optimization problem in the abstract, differs from the previously mentioned works in a few important ways: (1) boosting couples the functional complexity of the post-processed predictor and the covariate shift function, and (2) under well-specification of the auditor function class and other conditions, boosting's asymptotic limit is the Bayes-optimal predictor, whereas robust optimization changes the asymptotic limit, typically to a coherent risk measure. In this sense, we expect that approaches based on multi-accuracy are *less conservative* within distribution. To the best of our knowledge, the only prior discussion of connections between boosting-style algorithms and distributionally robust optimization is Blanchet et al. (2019). Approaches based on multi-calibration inherently couple the specification of the (expanded) hypothesis class of multicalibrated predictors along with the specification of covariate shift functions, i.e., the boosting-type algorithm returns a predictor in the *sum class* of the original predictor and the classes of shifts.

![16_image_0.png](16_image_0.png)

![16_image_1.png](16_image_1.png)

![16_image_2.png](16_image_2.png)

(b) Simulation 2b (total shift between observational data and RCT)

Figure 3: Average MSE of CATE estimation by shift intensity and training set size for post-processed (multi-calibrated) T- and DR-learner and benchmark methods in simulation studies (observational data with RCT setting).

![16_image_3.png](16_image_3.png)

Figure 4: Average absolute bias and MSE by clinical trial sample size in the WHI data application.
Approaches for robust covariate shift, to reduce the complexity of the adversary, require additional moment constraints satisfied by valid likelihood ratios, i.e., specifying a sharp set C of *only* valid covariate shifts. In robust optimization-based approaches to covariate shift, the hypothesis class and the class of weight functions can be varied independently. For multi-calibration, in contrast, restricting the auditor function classes *also* simultaneously reduces the functional complexity of the hypothesis class of predictors. Distributionally robust objectives are equivalent to variance regularization or control of the tail risk, which couples the statistically more difficult control of tail behavior with the control of ambiguous shift functions. Another important point of difference is that the Bayes-optimal predictor satisfies the multi-accuracy criterion, while a Bayes-optimal predictor with heteroskedastic noise may not satisfy the desiderata of uniform performance implied by distributional robustness. For example, (Duchi and Namkoong, 2018, Example 2) discusses the example of linear well-specified models where the distributionally robust predictor coincides with the Bayes-optimal predictor; in cases of model misspecification or heteroskedastic noise, this may not hold. We leave a finer-grained comparison for future work. Our discussion of the hybrid observational and randomized setting is more to highlight an "off-the-shelf" application of multi-accuracy, rather than the tightest analysis in this setting. Other works use more structure of this hybrid setting, or modify algorithms more heavily (i.e., learning shared representations) (Hatt et al., 2021; Yang et al., 2020; Kallus et al., 2018b); analogous adaptations with multi-accuracy are interesting directions for future work. See also Bareinboim and Pearl (2013) for a survey on transportability and external validity, and Colnet et al. (2020) for a survey on learning from observational and randomized data.
In this literature, Wu and Yang (2023), among others, consider reweighting towards shifted distributions. Other approaches include Cheng et al. (2023).

## 6 Conclusion

In this work, we connect multi-accurate learning to treatment effect estimation and show how off-the-shelf multi-accurate learning can be used for conditional average treatment effect estimation that is robust to unknown covariate shift. Although we empirically compare to more specialized "state-of-the-art" causal machine learning methods, these methods were designed for different purposes. Important directions for future work include "best-of-both-worlds" guarantees on both robustness and efficiency by improving the variance reduction properties of Multi-CATE. A finer-grained analysis of the statistical implications of algorithmic implementations of boosting could also be relevant, in addition to improving hyperparameter tuning in the causal setting. In our work, we focus on establishing robustness properties.

## References

Ruben L. Bach, Christoph Kern, Hannah Mautner, and Frauke Kreuter. The impact of modeling decisions in statistical profiling. *Data & Policy*, 5:e32, 2023. doi: 10.1017/dap.2023.29.

Mohammed Abd ElFattah Mohammed Darwesh Badawy, Lin Naing, Sofian Johar, Sokking Ong, Hanif Abdul Rahman, Dayangku Siti Nur Ashikin Pengiran Tengah, Chean Lin Chong, and Nik Ani Afiqah Tuah. Evaluation of cardiovascular diseases risk calculators for cvds prevention and management: scoping review. *BMC Public Health*, 22(1):1742, 2022.

Elias Bareinboim and Judea Pearl. A general algorithm for deciding transportability of experimental results. *Journal of Causal Inference*, 1(1):107–134, 2013.

Andrew Bennett and Nathan Kallus. The variational method of moments. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 85(3):810–841, 2023.

Jose Blanchet, Fan Zhang, Yang Kang, and Zhangyi Hu. A distributionally robust boosting algorithm. In *2019 Winter Simulation Conference (WSC)*, pages 3728–3739. IEEE, 2019.

David Bruns-Smith and Angela Zhou.
Robust fitted-q-evaluation and iteration under sequentially exogenous unobserved confounders. *arXiv preprint arXiv:2302.00662*, 2023. David Bruns-Smith, Oliver Dukes, Avi Feller, and Elizabeth L Ogburn. Augmented balancing weights as linear regression. *arXiv preprint arXiv:2304.14545*, 2023. Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pages 1721–1730, 2015. Xiangli Chen, Mathew Monfort, Anqi Liu, and Brian D Ziebart. Robust covariate shift regression. In Artificial Intelligence and Statistics, pages 1270–1279. PMLR, 2016. Yuwen Cheng, Lili Wu, and Shu Yang. Enhancing treatment effect estimation: A model robust approach integrating randomized experiments and external controls using the double penalty integration estimator. In *Uncertainty in Artificial Intelligence*, pages 381–390. PMLR, 2023. Victor Chernozhukov, Mert Demirer, Esther Duflo, and Ivan Fernandez-Val. Generic machine learning inference on heterogeneous treatment effects in randomized experiments, with an application to immunization in india. Technical report, National Bureau of Economic Research, 2018. Bénédicte Colnet, Imke Mayer, Guanhua Chen, Awa Dieng, Ruohong Li, Gaël Varoquaux, Jean-Philippe Vert, Julie Josse, and Shu Yang. Causal inference methods for combining randomized trials and observational studies: a review. *arXiv preprint arXiv:2011.08047*, 2020. Bruno Crépon and Gerard J Van Den Berg. Active labor market policies. *Annual Review of Economics*, 8: 521–546, 2016. Nishanth Dikkala, Greg Lewis, Lester Mackey, and Vasilis Syrgkanis. Minimax estimation of conditional moment models. *Advances in Neural Information Processing Systems*, 33:12248–12262, 2020. Jacob Dorn, Kevin Guo, and Nathan Kallus. 
Doubly-valid/doubly-sharp sensitivity analysis for causal inference with unmeasured confounding. *arXiv preprint arXiv:2112.11449*, 2021. John Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. *arXiv preprint arXiv:1810.08750*, 2018. AmirEmad Ghassami, Andrew Ying, Ilya Shpitser, and Eric Tchetgen Tchetgen. Minimax kernel machine learning for a class of doubly robust functionals with application to proximal causal inference. In *International conference on artificial intelligence and statistics*, pages 7210–7239. PMLR, 2022. Sharad Goel, Ravi Shroff, Jennifer Skeem, and Christopher Slobogin. The accuracy, equity, and jurisprudence of criminal risk assessment. In *Research handbook on big data law*, pages 9–28. Edward Elgar Publishing, 2021. Parikshit Gopalan, Lunjia Hu, Michael P Kim, Omer Reingold, and Udi Wieder. Loss minimization through the lens of outcome indistinguishability. *arXiv preprint arXiv:2210.08649*, 2022a. Parikshit Gopalan, Michael P Kim, Mihir A Singhal, and Shengjia Zhao. Low-degree multicalibration. In Conference on Learning Theory, pages 3193–3234. PMLR, 2022b. Daniel Greenfeld and Uri Shalit. Robust learning with the hilbert-schmidt independence criterion. In International Conference on Machine Learning, pages 3759–3768. PMLR, 2020. Anand R Habib, Anthony L Lin, and Richard W Grant. The epic sepsis model falls short—the importance of external validation. *JAMA Internal Medicine*, 181(8):1040–1041, 2021. Tobias Hatt, Daniel Tschernutter, and Stefan Feuerriegel. Generalizing off-policy learning under sample selection bias. *arXiv preprint arXiv:2112.01387*, 2021. Ursula Hébert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In *International Conference on Machine Learning*, pages 1939–1948. PMLR, 2018. Jennifer L Hill. Bayesian nonparametric modeling for causal inference. 
Journal of Computational and Graphical Statistics, 20(1):217–240, 2011. Sookyo Jeong and Hongseok Namkoong. Robust causal inference under covariate shift via worst-case subpopulation treatment effects. In *Conference on Learning Theory*, pages 2079–2084. PMLR, 2020. Fredrik D Johansson, Uri Shalit, Nathan Kallus, and David Sontag. Generalization bounds and representation learning for estimation of potential outcomes and causal effects. *Journal of Machine Learning* Research, 23(166):1–50, 2022. Nathan Kallus and Angela Zhou. Confounding-robust policy improvement. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/ paper/2018/file/3a09a524440d44d7f19870070a5ad42f-Paper.pdf. Nathan Kallus and Angela Zhou. Confounding-robust policy evaluation in infinite-horizon reinforcement learning. *Advances in Neural Information Processing Systems*, 33:22293–22304, 2020. Nathan Kallus and Angela Zhou. Minimax-optimal policy learning under unobserved confounding. *Management Science*, 67(5):2870–2890, 2021. Nathan Kallus, Xiaojie Mao, and Angela Zhou. Interval estimation of individual-level causal effects under unobserved confounding. *arXiv preprint arXiv:1810.02894*, 2018a. Nathan Kallus, Aahlad Manas Puli, and Uri Shalit. Removing hidden confounding by experimental grounding. *Advances in neural information processing systems*, 31, 2018b. Edward H Kennedy. Optimal doubly robust estimation of heterogeneous causal effects. *arXiv preprint* arXiv:2004.14497, 2020. Edward H Kennedy. Towards optimal doubly robust estimation of heterogeneous causal effects. *Electronic* Journal of Statistics, 17(2):3008–3049, 2023. Michael P Kim, Amirata Ghorbani, and James Zou. Multiaccuracy: Black-box post-processing for fairness in classification. 
In *Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society*, pages 247–254, 2019. Michael P Kim, Christoph Kern, Shafi Goldwasser, Frauke Kreuter, and Omer Reingold. Universal adaptability: Target-independent inference that competes with propensity scoring. Proceedings of the National Academy of Sciences, 119(4):e2108097119, 2022. John Körtner and Giuliano Bonoli. Predictive algorithms in the delivery of public employment services. In Handbook of Labour Market Policy in Advanced Democracies, pages 387–398. Edward Elgar Publishing, 2023. Sören R. Künzel, Jasjeet S. Sekhon, Peter J. Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. *Proceedings of the National Academy of Sciences*, 116(10): 4156–4165, 2019. doi: 10.1073/pnas.1804597116. Laura and John Arnold Foundation. Public safety assessment decision making framework - cook county, il [effective march 2016]. https://news.wttw.com/sites/default/files/article/file-attachments/ PSA%20Decision%20Making%20Framework.pdf, 2016. Daniel Lewandowski, Dorota Kurowicka, and Harry Joe. Generating random correlation matrices based on vines and extended onion method. *Journal of Multivariate Analysis*, 100(9):1989–2001, 2009. ISSN 0047-259X. doi: https://doi.org/10.1016/j.jmva.2009.04.008. URL https://www.sciencedirect.com/ science/article/pii/S0047259X09000876. Anqi Liu and Brian Ziebart. Robust classification under sample selection bias. *Advances in neural information* processing systems, 27, 2014. K. Machens and K. Schmidt-Gollwitzer. Issues to debate on the Women's Health Initiative (WHI) study. Hormone replacement therapy: an epidemiological dilemma? *Human Reproduction*, 18(10):1992–1999, 2003. ISSN 0268-1161. doi: 10.1093/humrep/deg406. X Nie and S Wager. Quasi-oracle estimation of heterogeneous treatment effects. *Biometrika*, 108(2):299–319, 2020. doi: 10.1093/biomet/asaa076. URL https://doi.org/10.1093/biomet/asaa076. 
Miruna Oprescu, Vasilis Syrgkanis, and Zhiwei Steven Wu. Orthogonal random forest for causal inference. In *International Conference on Machine Learning*, pages 4932–4941. PMLR, 2019. Florian Pfisterer, Christoph Kern, Susanne Dandl, Matthew Sun, Michael P Kim, and Bernd Bischl. mcboost: Multi-calibration boosting for r. *Journal of Open Source Software*, 6(64):3453, 2021. R Core Team. *R: A Language and Environment for Statistical Computing*. R Foundation for Statistical Computing, Vienna, Austria, 2020. URL https://www.R-project.org/. James M Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. *Journal of the American Statistical Association*, 89(427):846–866, 1994. Vira Semenova and Victor Chernozhukov. Debiased machine learning of conditional average treatment effects and other causal functions. *The Econometrics Journal*, 24(2):264–289, 2021. Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In *International conference on machine learning*, pages 3076–3085. PMLR, 2017. Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. Advances in neural information processing systems, 32, 2019. Cathy Shyr, Boyu Ren, Prasad Patil, and Giovanni Parmigiani. Multi-study r-learner for estimating heterogeneous treatment effects across studies using statistical machine learning, 2024. Pietro Emilio Spini. Robustness, heterogeneous treatment effects and covariate shifts. *arXiv preprint* arXiv:2112.09259, 2021. Adarsh Subbaswamy, Roy Adams, and Suchi Saria. Evaluating model robustness and stability to dataset shift. In *International Conference on Artificial Intelligence and Statistics*, pages 2611–2619. PMLR, 2021. Zhiqiang Tan. A distributional approach for causal inference using propensity scores. 
*Journal of the American* Statistical Association, 101(476):1619–1637, 2006. Julie Tibshirani, Susan Athey, Erik Sverdrup, and Stefan Wager. *grf: Generalized Random Forests*, 2021. URL https://CRAN.R-project.org/package=grf. R package version 2.0.2. Elizabeth Tipton. How generalizable is your experiment? an index for comparing experimental samples and populations. *Journal of Educational and Behavioral Statistics*, 39(6):478–501, 2014. Elizabeth Tipton and Erin Hartman. Generalizability and transportability. In *Handbook of Matching and* Weighting Adjustments for Causal Inference, pages 39–60. Chapman and Hall/CRC, 2023. John Wilder Tukey et al. *Exploratory data analysis*, volume 2. Springer, 1977. Lars Van Der Laan, Ernesto Ulloa-Pérez, Marco Carone, and Alex Luedtke. Causal isotonic calibration for heterogeneous treatment effects. In *International Conference on Machine Learning*, pages 34831–34854. PMLR, 2023. Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. *Journal of the American Statistical Association*, 113(523):1228–1242, 2018. doi: 10.1080/01621459. 2017.1319839. Yoav Wald, Amir Feder, Daniel Greenfeld, and Uri Shalit. On calibration and out-of-domain generalization. Advances in neural information processing systems, 34:2215–2227, 2021. Junfeng Wen, Chun-Nam Yu, and Russell Greiner. Robust learning under uncertain test distributions: Relating covariate shift to model misspecification. In *International Conference on Machine Learning*, pages 631–639. PMLR, 2014. Marvin N. Wright and Andreas Ziegler. ranger: A fast implementation of random forests for high dimensional data in C++ and R. *Journal of Statistical Software*, 77(1):1–17, 2017. doi: 10.18637/jss.v077.i01. Lili Wu and Shu Yang. Transfer learning of individualized treatment rules from experimental to real-world data. *Journal of Computational and Graphical Statistics*, 32(3):1036–1045, 2023. Yizhe Xu and Steve Yadlowsky. 
Calibration error for heterogeneous treatment effects. In *International Conference on Artificial Intelligence and Statistics*, pages 9280–9303. PMLR, 2022.

Steve Yadlowsky, Hongseok Namkoong, Sanjay Basu, John Duchi, and Lu Tian. Bounds on the conditional and average treatment effect with unobserved confounding factors. *arXiv preprint arXiv:1808.09521*, 2018.

Shu Yang, Chenyin Gao, Donglin Zeng, and Xiaofei Wang. Elastic integrative analysis of randomized trial and real-world data for treatment heterogeneity estimation. *arXiv preprint arXiv:2005.10579*, 2020.

Qingyuan Zhao, Dylan S Small, and Bhaswar B Bhattacharya. Sensitivity analysis for inverse probability weighting estimators via the percentile bootstrap. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 81(4):735–761, 2019.

Angela Zhou. Optimal and fair encouragement policy evaluation and learning. *Advances in Neural Information Processing Systems*, 36, 2024.

## A Notation Summary

| Notation/Object | Description |
|-------------------|-----------------------------------------------------------------------|
| X | Covariates |
| T | Treatment, T ∈ {0, 1} |
| Y(T) | Potential outcomes |
| Y | Observed outcome, Y = Y(T) |
| U | Unobserved confounders |
| et(x) | Propensity score, P(T = t \| X = x) |
| µt(x) | Outcome regression, E[Y \| X = x, T = t] |
| τ(x) | Conditional average treatment effect (CATE), E[Y(1) − Y(0) \| X = x] |
| C | Class of subsets of X |
| F, G, H | Function classes |
| α | Multi-accuracy parameter |
| Dobs | Observational dataset |
| Drct | Randomized controlled trial dataset |
| τˆ(x) | Estimated CATE function |
| τ˜(x) | Multi-accurate/calibrated CATE estimator |

Table 1: Notation used in the paper.

## B Details On Algorithms

For completeness, we describe the MCBoost algorithm for multi-calibration. See (Hébert-Johnson et al., 2018; Kim et al., 2019; 2022; Pfisterer et al., 2021) for more details, including theoretical analysis and implementation details.
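Before stating the algorithm, the boosting loop can be sketched compactly. The following is an illustrative Python translation of the multi-accuracy variant with *additive* updates (Kim et al., 2019), not the mcboost R implementation used in our experiments; the ridge auditor and all names are ours, and the calibrated version below uses a multiplicative-weights update instead.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Ridge-regression auditor (closed form, intercept via an appended 1s column)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T @ y)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w

def multiaccuracy_boost(p0, X_cal, y_cal, X_val, y_val,
                        alpha=1e-3, eta=0.5, max_iter=20):
    """Additive multi-accuracy boosting: repeatedly audit residuals on the
    calibration set, stop when miscalibration on the validation set is small."""
    auditors = []

    def predict(X):
        out = p0(X).astype(float)
        for c in auditors:
            out += eta * c(X)
        return out

    for _ in range(max_iter):
        c = fit_ridge(X_cal, y_cal - predict(X_cal))        # regress residuals (S_k)
        delta = np.mean(c(X_val) * (y_val - predict(X_val)))  # miscalibration over V
        if abs(delta) <= alpha:
            break                                            # return when small
        auditors.append(c)
    return predict
```

Starting, e.g., from the constant zero predictor `p0 = lambda Z: np.zeros(len(Z))`, the returned function drives the audited residual correlations toward zero on the calibration data.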
We describe the algorithm for a generic (*x, y*) dataset (without reference to causal inference). See (Kim et al., 2019) for more details on the variant that achieves multi-accuracy (although the ideas are similar at a high level). The key inputs include a regression algorithm for the boosting procedure, an approximation parameter α, which serves as a stopping condition (although in practice a fixed limit on the number of iterations is used), and a validation/calibration set. When developing methods for Setting 1 (unknown covariate shifts), the calibration and validation sets are drawn from the observational distribution. Our method for Setting 2 uses the (assumed small) RCT data as calibration/validation sets.

## Algorithm 4 MCBoost

**Given:**
- $p_0 : X \to [0, 1]$ // initial predictor
- $A : (X \times [-1, 1])^m \to C$ // regression algorithm for functions in $C$
- $\alpha > 0$ // approximation parameter
- $S = \{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$ // calibration set
- $V = \{(x_1, y_1), (x_2, y_2), \ldots, (x_v, y_v)\}$ // validation set

**Returns:** a $(C, \alpha)$-multi-calibrated predictor $\tilde{\mu}$

**Repeat** for $k = 0, 1, 2, \ldots$:
1. $S_k \leftarrow \{(x_1, y_1 - p_k(x_1)), \ldots, (x_m, y_m - p_k(x_m))\}$ // update labels in the calibration set
2. $c \leftarrow A(S_k)$ // regression over $S_k$
3. $\Delta c \leftarrow \frac{1}{|V|} \sum_{(x,y) \in V} c(x) \cdot (y - p_k(x))$ // compute miscalibration over the validation set $V$
4. If $|\Delta c| > \alpha$: $p_{k+1}(x) \propto e^{-\Delta c \cdot c(x)/2} \cdot p_k(x)$ // multiplicative-weights update
5. Else: return $\tilde{p} = p_k$ // return when miscalibration is small

## C Proofs

*Proof of Proposition 1.* (1a) Suppose $\hat{\mu}_1(x), \hat{\mu}_0(x)$ are consistent estimators and $e \in H$. Then Equation (3) immediately implies ϵ-consistency.
(1b) Suppose $\hat{\mu}_1(x), \hat{\mu}_0(x)$ are consistent estimators but $e \notin H$. If $\hat{\mu}_1(x), \hat{\mu}_0(x)$ are consistent, they will asymptotically satisfy the multi-calibrated or multi-accurate criterion. See Hébert-Johnson et al. (2018) for related do-no-harm properties in this setting. Let $\mu_t^* = \mathrm{E}[Y \mid X, T = t]$ denote the true conditional expectation; it satisfies $\mu_t^* \in \arg\min \mathrm{E}[(Y - \mu_t(X))^2 \mid T = t]$ and $\mathrm{E}[Y - \mu_t^*(X) \mid T = t, X] = 0$ a.s. Hence, for all $f(X) \in F$, $\mathrm{E}[(Y - \mu_t^*(X)) f(X) \mid T = t] = 0$. Therefore $\mu_t^*(X)$ is feasible. Since the additive iterates of boosting approaches like MCBoost for multi-accuracy are commutative, (Gopalan et al., 2022a) characterizes multi-accuracy via a global optimization of squared loss over additive basis functions of $H$. Since $\mu_t^*(X)$ is an optimal solution for the unconstrained problem, and feasible for the constrained problem, it is also optimal for the constrained problem.

(2) Suppose any of $\hat{\mu}_1, \hat{\mu}_0$ are not consistent estimators and $e_1^{-1}, (1 - e_1)^{-1} \in H$. The implications of multi-accuracy with respect to $H$ relate to the classical doubly-robust estimator:

$$\sum_{t\in\{0,1\}}\left|\,\mathrm{E}\left[\frac{\mathbb{1}\{T=t\}}{e_{t}(X)}\left(Y-\tilde{\mu}_{t}(X)\right)+\tilde{\mu}_{t}(X)\right]-\mathrm{E}\left[\tilde{\mu}_{t}(X)\right]\right|\leq2\alpha\tag{7}$$

By properties of AIPW, the left-hand term is consistent due to model double-robustness. By multi-accuracy, the CATE estimator is 2α-close to AIPW under well-specification.

(3) This follows via the same arguments given in (1b). $\square$
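The mechanism behind part (2) can be illustrated numerically: enforcing accuracy in expectation against the inverse-propensity weight function forces the plug-in mean of the corrected outcome regression to agree with the AIPW estimate, as in Equation (7). The following is a toy Monte Carlo sketch of our own construction (known propensity score, a single constant auditor step); it is not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(-1, 1, n)
e1 = 1.0 / (1.0 + np.exp(-x))                  # known propensity P(T = 1 | x)
t = rng.binomial(1, e1)
y = 2 * x + t * (1 + x) + rng.normal(0, 1, n)  # so mu_1(x) = 3x + 1

mu1_hat = lambda z: 3 * z                      # biased initial fit: misses the +1

# One audit step w.r.t. the weight function 1/e1 (constant auditor c(x) = 1):
resid = np.mean((t / e1) * (y - mu1_hat(x)))   # estimates the missing intercept (~1)
mu1_tilde = lambda z: 3 * z + resid            # corrected (multi-accurate) fit

# The plug-in mean now nearly matches the AIPW estimate:
aipw = np.mean((t / e1) * (y - mu1_tilde(x)) + mu1_tilde(x))
plug_in = np.mean(mu1_tilde(x))
```

After the correction, the IPW residual term in the AIPW expression is close to zero by construction, which is exactly the closeness that Equation (7) formalizes up to 2α.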
## D Experiments

## D.1 Data and Software

We provide code for the simulation studies and the real data application for replication purposes in the following public OSF repository: https://osf.io/zxjvw/?view_only=a622c123414e4be6a218f121ded191d3

Data preparation, model training, and evaluation are conducted in R (3.6.3) (R Core Team, 2020) using the packages ranger (0.13.1) (Wright and Ziegler, 2017), grf (2.0.2) (Tibshirani et al., 2021), and rlearner (1.1.0) (Nie and Wager, 2020). The simulation studies heavily draw on the causal experiment simulator of the causalToolbox (0.0.2.000) (Künzel et al., 2019) package. In all experiments, (initial) T-learner and DR-learner are post-processed using the MCBoost algorithm as implemented in the mcboost (0.4.2) (Pfisterer et al., 2021) package. More concretely, we make use of boosting for degree-2 multi-calibration, a (slightly) stronger notion than multi-accuracy, but computationally less demanding than full multi-calibration (Gopalan et al., 2022b). The hyperparameter settings used for post-processing are listed as part of the following detailed presentation of the experiments (Tables 2 and 12).

## D.2 Simulations

## D.2.1 Setup

Data. We follow the simulation setup of Künzel et al. (2019) in designing our experiments. Each of the following simulations is initialized by specifying the following components: the propensity score e, the outcome functions $\mu_0^*$ and $\mu_1^*$, and the external shift function z. We then simulate the following components:

- A 10-dimensional feature vector,

$$X_{1},\ldots,X_{10}\sim{\mathcal{N}}(0,\Sigma)$$

with modest correlations in Σ (governed by the alpha parameter of the vine method (Lewandowski et al., 2009), which is set to 0.1).

- Potential outcomes are simulated according to the pre-specified covariate-conditional outcome functions $\mu_0^*$ and $\mu_1^*$,

$$\begin{array}{l}{{Y_{i}(0)=\mu_{0}^{*}(x)+\varepsilon_{i}}}\\ {{Y_{i}(1)=\mu_{1}^{*}(x)+\varepsilon_{i}}}\end{array}$$

where $\varepsilon_i \sim \mathcal{N}(0, 1)$.
- Treatment assignment is simulated given the pre-specified propensity score e,

$$T_i \sim \mathrm{Bern}(e(x))$$

and the observed outcome is set to $Y_i = Y(T_i)$.

- A set of sampling weights is constructed given the external shift function z (and shift intensity s),

$$w^{(s)}(x)=\left({\frac{z(x)}{1-z(x)}}\right)^{s}$$

and used to simulate externally shifted observational data $D_{os\text{-}shift}$ or shifted randomized control trial (RCT) data $D_{rct}$ (where e(x) = 0.5), depending on the simulation scenario.

We vary the shift intensity s ∈ {0, 0.25, . . . , 2} and the training set size ∈ {500, 2000, 3500, 5000}, and run experiments for each combination 25 times. The size of the (audit/RCT) data used for multi-calibration boosting (500 observations) and the (test) data used for model evaluation (5000 observations) is fixed.

Evaluation. We compare and evaluate the various techniques with respect to bias in ATE and MSE in CATE estimation. Bias is assessed based on the true ATE and the average of the estimated τˆ(x) in the test data,

$${\mathrm{Bias}}=\tau-{\frac{1}{n}}\sum_{i}{\hat{\tau}}(x_i).$$

We further evaluate the true CATE τ(x) against τˆ(x) of the respective CATE estimation method,

$$\mathrm{MSE}={\frac{1}{n}}\sum_{i}(\tau(x_i)-{\hat{\tau}}(x_i))^{2}.$$

MCBoost. Multi-calibration boosting is conducted using the hyperparameter settings listed in Table 2.

Table 2: Hyperparameter settings for post-processing using MCBoost. Default settings are used for parameters not listed.
**(a) T-learner MC**

| Method | Implementation | Hyperparameter | Value |
|--------|----------------|----------------|-------|
| Ridge | mcboost | max_iter | 5 |
| | | alpha | 1e-06 |
| | | eta | 0.5 |
| | | weight_degree | 2 |
| | glmnet | alpha | 0 |
| | | s | 1 |
| Tree | mcboost | max_iter | 5 |
| | | alpha | 1e-06 |
| | | eta | 0.5 |
| | | weight_degree | 2 |
| | rpart | maxdepth | 3 |

**(b) DR-learner MC**

| Method | Implementation | Hyperparameter | Value |
|--------|----------------|----------------|-------|
| Ridge | mcboost | max_iter | 5 |
| | | alpha | 1e-06 |
| | | eta | 0.1 |
| | | weight_degree | 2 |
| | glmnet | alpha | 0 |
| | | s | 1 |
| Tree | mcboost | max_iter | 5 |
| | | alpha | 1e-06 |
| | | eta | 0.1 |
| | | weight_degree | 2 |
| | rpart | maxdepth | 3 |

Note: eta = 0.01 in simulations 2a and 2b (D.2.3).

## D.2.2 External Shift

In this initial setting, we simulate data that emulates an observational study with (observable) confounding. We additionally consider an external shift between the observational data that is available for initial model training, $D_{os}$, and the distribution of the test (or deployment) data, $D_{os\text{-}shift}$. We further assume access to an auditing sample from the original training distribution. The task is to estimate the true CATE function as evaluated on the shifted test set, using models that either learned in the observational training data only or made additional use of the auditing data.
$$(X_{train}, T_{train}, Y_{train}) \sim D_{os},\quad (X_{audit}, T_{audit}, Y_{audit}) \sim D_{os},\quad (X_{test}, T_{test}, Y_{test}) \sim D_{os\text{-}shift}$$

Simulation 1a (external shift, linear CATE, beta confounding)

$$\mu_{0}^{*}(x)=\begin{cases}x^{\prime}\beta_{l}&\text{if}\ x_{10}<-0.4\\ x^{\prime}\beta_{m}&\text{if}-0.4\leq x_{10}\leq0.4\\ x^{\prime}\beta_{u}&\text{if}\ 0.4<x_{10}\end{cases}$$

with $\beta_{l}\sim\text{unif}([-5,5]^{10}),\ \beta_{m}\sim\text{unif}([-5,5]^{10}),\ \beta_{u}\sim\text{unif}([-5,5]^{10})$

$$\mu_{1}^{*}(x)=\mu_{0}^{*}(x)+3x_{1}+5x_{2}$$

$$e(x)=\frac{1}{4}(1+\mathcal{B}(x_{1},2,4))$$

where $\mathcal{B}(x_{1},2,4)$ is the beta distribution with parameters 2 and 4.

$$z(x)=\frac{1}{1+e^{-(x_{1}-0.5)-2(x_{2}-0.5)-0.5(x_{1}x_{2}-0.5)}}$$

CATE estimation. We use the following methods for estimating the CATE based on the observational training data. Shift-reweighting is conducted by training a logistic regression to predict sample membership in the observational training versus shifted test data and calculating propensity weights $\frac{1-\hat{p}}{\hat{p}}$ based on the predicted probability $\hat{p}$ of membership in the training data.

- (**CForest-OS**) Causal forest (Wager and Athey, 2018) trained in the observational training data.
- (**CForest-wOS**) Causal forest trained in the shift-reweighted observational training data.
- (**S-learner-OS**) S-learner using random forest trained in the observational training data.
- (**S-learner-wOS**) S-learner using random forest trained in the shift-reweighted observational training data.
- (**DR-learner-OS**) DR-learner (Kennedy, 2023) using random forest trained in the observational training data.
- (**T-learner-OS**) T-learner using random forest trained in the observational training data.
- (**T-learner-wOS**) T-learner using random forest trained in the shift-reweighted observational training data.

We further estimate DR-learner and T-learner using multi-calibration boosting with a small set of auditing data.

- (**DR-learner-MC-Ridge**) DR-learner using random forest in the observational training data is post-processed with MCBoost using ridge regression in the auditing data.
- (**DR-learner-MC-Tree**) DR-learner using random forest in the observational training data is post-processed with MCBoost using decision trees in the auditing data.
- (**T-learner-MC-Ridge**) T-learner using random forest in the observational training data is post-processed with MCBoost using ridge regression in the auditing data.
- (**T-learner-MC-Tree**) T-learner using random forest in the observational training data is post-processed with MCBoost using decision trees in the auditing data.

Evaluation We evaluate bias in ATE and MSE in CATE estimation in the externally shifted test data.

Results We show the bias of the estimated average treatment effect (ATE) by shift intensity (column panels) and training set size (row panels) for each CATE estimation method in Figure 5 (see also Table 3). The results show that in the present setting all methods are able to produce unbiased estimates of the ATE in the non-shifted test data (first column). Introducing an external shift (second and third column), however, incurs bias across all methods, with the shift-reweighted causal forest and shift-reweighted T-learner performing best. The ridge regression-based multi-accurate DR- and T-learners perform best among the shift-blind methods that had no access to the shifted test distribution. We show the corresponding results for the MSE of the CATE estimation by shift intensity and training set size in Figure 6 (and Table 4).
In the present setting, causal forests achieve the smallest MSE in the non-shifted test data as well as in settings with large initial training data (first column and third and last row). With increasing shift, however, the ridge-based multi-accurate T-learner performs best in settings with small to moderately sized training data (upper right quadrant).

![29_image_0.png](29_image_0.png)

Figure 5: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods (Simulation 1a (external shift, linear CATE, beta confounding)). The distribution of bias scores over simulation runs is shown. Given an external shift between training and test data, DR-learner-MC-Ridge and T-learner-MC-Ridge perform best among the shift-blind methods that had no access to the shifted target distribution.

![30_image_0.png](30_image_0.png)

Figure 6: MSE of CATE estimation by shift intensity and training set size for different estimation methods (Simulation 1a (external shift, linear CATE, beta confounding)). The distribution of MSE scores over simulation runs is shown. T-learner-MC-Ridge performs best in settings with small to moderately sized training data and shifted test data.

Simulation 1b (external shift, full linear CATE, logistic confounding)

$$\mu_{0}^{*}(x)=3x_{1}+5x_{2}$$

$$\mu_{1}^{*}(x)=\mu_{0}^{*}(x)+x^{\prime}\beta,\ \text{with}\ \beta\sim\operatorname{unif}([-5,5]^{10})$$

$$e(x)=\frac{1}{1+e^{(-2-2(x_{1}-0.5)-1(x_{2}-0.5))}}$$

$$z(x)={\frac{1}{1+e^{(2(x_{2}-0.5)+(x_{3}-0.5))}}}$$

Evaluation Bias in ATE and MSE in CATE estimation is evaluated in the externally shifted test data.

Results The results for bias of the ATE estimation in Figure 7 (and Table 5) show that in the absence of external shift (first column), causal forest-based estimators perform best and are able to achieve unbiasedness.
Introducing an external shift between the observational training and test data (second and third columns) amplifies bias such that only the shift-reweighted causal forest is able to approximate the true ATE on average, given sufficient training data. The ridge regression-based multi-accurate DR-learner improves over the initial DR-learner and is competitive with the shift-reweighted causal forest in settings with small initial training data and strong shift. Again note that, in contrast to the shift-reweighted methods, the multi-accurate learners had no access to the shifted test distribution during model training. Figure 8 (and Table 6) shows results for the MSE of the estimated CATE. The ridge-based multi-accurate DR-learner consistently improves over the initial DR-learner and achieves the lowest MSE among all methods in the initial, non-shifted setting. With increasing shift, ridge-based multi-accurate and shift-reweighted learners perform well in settings with small to moderately sized training data. Causal forest is competitive in all settings, particularly as the training set size increases.

![32_image_0.png](32_image_0.png)

Figure 7: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods (Simulation 1b (external shift, full linear CATE, logistic confounding)). The distribution of bias scores over simulation runs is shown. Given an external shift between training and test data, DR-learner-MC-Ridge performs best among the shift-blind methods, particularly in settings with small initial training data.

![33_image_0.png](33_image_0.png)

Figure 8: MSE of CATE estimation by shift intensity and training set size for different estimation methods (Simulation 1b (external shift, full linear CATE, logistic confounding)). The distribution of MSE scores over simulation runs is shown. DR-learner-MC-Ridge and T-learner-MC-Ridge consistently improve over DR-learner-OS and T-learner-OS.
DR-learner-MC-Ridge performs best overall in settings with small to moderately sized training data.

## D.2.3 Observational Study and RCT

We simulate a setting in which we have access to training data from an observational study (OS) and from a small randomized control trial (RCT). We consider a covariate/external shift between the observational study, $\mathcal{D}_{os}$, and the RCT, $\mathcal{D}_{rct}$. We further assume unobserved confounding either in both data sources (D.2.3) or in the observational training data only (D.2.3). The task is to estimate the true CATE using models that learned either in the observational (training) data or in the RCT, or by using both data sources in combination.

$$(X_{train},T_{train},Y_{train})\sim\mathcal{D}_{os},\quad(X_{audit},T_{audit},Y_{audit})\sim\mathcal{D}_{rct},\quad(X_{test},T_{test},Y_{test})\sim\mathcal{D}_{os}$$

That is, the randomized controlled trial data is crucial to obtain identification, but ultimately we seek a predictor with good performance on the covariate distribution of the *observational data*.

Simulation 2a (confounded observational data and RCT) In the first simulation, we consider covariate shifts from the observational to the RCT setting alone.

Assumption 4 (Covariate shift from observational to RCT).

$$P(X_{obs})\neq P(X_{rct})$$

$$P(Y_{obs}=y\mid X,U,A)=P(Y_{rct}=y\mid X,U,A),\ \forall y$$

In addition to the setup in Appendix D.2, we introduce unobserved confounding.
The specification is as follows: The unobserved confounder U is correlated with x1: $$u(x)=\begin{cases}0.8&\text{if}x_{1}>\bar{x}_{1}\\ 0.2&\text{if}x_{1}\leq\bar{x}_{1}\end{cases},\qquad U_{i}\sim\operatorname{Bern}(u(x))\,$$ $$\mu_{0}(x)=3x_{1}+5x_{2}$$ $$\mu_{1}(x)=\mu_{0}(x)+3x_{1}+5x_{2}$$ $$\mu_{0}^{*}(x,u)=\mu_{0}(x)-u$$ $$\mu_{1}^{*}(x,u)=\mu_{1}(x)+3u$$ $$e^{os}(x,u)=\frac{1}{1+e^{(2-3u+(-2(x_{1}-0.5)-1(x_{2}-0.5)))}}$$ $$e^{rt}(x)=0.5$$ $$z(x)=\frac{1}{1+e^{(2(x_{2}-0.5)+(x_{3}-0.5))}}$$ CATE Estimation We use the following methods for estimating the CATE based on training sets of simulated observational data. - (**CForest-OS**) Causal forest (Wager and Athey, 2018) trained in the training set of the observational data. - (**S-learner-OS**) S-learner using random forest trained in the training set of the observational data. - (**DR-learner-OS**) DR-learner (Kennedy, 2023) using random forest trained in the training set of the observational data. - (**T-learner-OS**) T-learner using random forest trained in the training set of the observational data. We estimate DR-learner and T-learner using multi-calibration boosting with simulated RCT data. - (**DR-learner-MC-Ridge**) DR-learner using random forest in the training set of the observational data is post-processed with MCBoost using ridge regression in the randomized control trial. - (**DR-learner-MC-Tree**) DR-learner using random forest in the training set of the observational data is post-processed with MCBoost using decision trees in the randomized control trial. - (**T-learner-MC-Ridge**) T-learner using random forest in the training set of the observational data is post-processed with MCBoost using ridge regression in the randomized control trial. - (**T-learner-MC-Tree**) T-learner using random forest in the training set of the observational data is post-processed with MCBoost using decision trees in the randomized control trial. 
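A minimal sketch of one observational unit under the confounded specification above; the covariate vector `x` and the threshold $\bar{x}_1$ (passed as `x1_bar`) are left to the caller, since the base covariate distribution follows Appendix D.2:

```python
import math
import random

def sample_os_unit(x, rng, x1_bar=0.0):
    """One observational unit of Simulation 2a: binary confounder U
    correlated with x_1, shifted outcomes, confounded treatment."""
    p_u = 0.8 if x[0] > x1_bar else 0.2
    u = 1 if rng.random() < p_u else 0          # U_i ~ Bern(u(x))
    mu0 = 3 * x[0] + 5 * x[1]
    mu1 = mu0 + 3 * x[0] + 5 * x[1]
    y0 = mu0 - u                                # mu0*(x, u)
    y1 = mu1 + 3 * u                            # mu1*(x, u)
    # confounded propensity in the observational study (e = 0.5 in the RCT)
    e_os = 1 / (1 + math.exp(2 - 3 * u - 2 * (x[0] - 0.5) - (x[1] - 0.5)))
    t = 1 if rng.random() < e_os else 0
    return t, (y1 if t else y0), u
```

The evaluation then marginalizes over U when constructing the ground truth, since U is unobserved by all estimators.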
We further compare to CATE learners that are solely based on the simulated RCT data. Shift-reweighting is conducted by training a logistic regression to predict sample membership in the observational versus RCT data and calculating propensity weights $\frac{1-\hat{p}}{\hat{p}}$ based on the predicted probability of membership in the RCT data $\hat{p}$.

- (**CForest-CT**) Causal forest trained in the randomized control trial.
- (**CForest-wCT**) Causal forest trained in the shift-reweighted randomized control trial.
- (**S-learner-CT**) S-learner using random forest trained in the randomized control trial.
- (**S-learner-wCT**) S-learner using random forest trained in the shift-reweighted randomized control trial.
- (**DR-learner-CT**) DR-learner using random forest trained in the randomized control trial.
- (**T-learner-CT**) T-learner using random forest trained in the randomized control trial.
- (**T-learner-wCT**) T-learner using random forest trained in the shift-reweighted randomized control trial.

Evaluation We evaluate bias in ATE and MSE in CATE estimation on a test set drawn from the observational data. In calculating the true ATE and τ(x), we marginalize over U and compute $E[Y_i(1)\mid X_i] = \mu_1(X_i) + 3\,E[U\mid X_i]$ and $E[Y_i(0)\mid X_i] = \mu_0(X_i) - E[U\mid X_i]$.

Results We plot the bias of the estimated ATE for each method by shift intensity (column panels) and training set size (row panels) in Figure 9 (see also Table 7). In the absence of covariate shift (first column), naive learning in the observational data results in biased estimates of the ATE. Utilizing both data sources in combination via multi-calibration boosting improves over the initial DR- and T-learner. Introducing a covariate shift between the observational data and the RCT (second column) degrades the performance of the RCT-based estimators, and the best results are achieved by the multi-accurate DR- and T-learners, especially for strong shifts (third column). Results for the MSE of the estimated CATE are shown in Figure 10 (Table 8).
In the absence of covariate shift (first column), the RCT-based estimators outperform the estimators that learned from the observational data. Introducing a shift between the observational study and the RCT (second and third column) increases the MSE of the RCT-based learners considerably, such that the best results are now observed for the tree-based multi-accurate T-learner, followed by the ridge regression-based multi-accurate T-learner and causal forests learned in the observational data.

![36_image_0.png](36_image_0.png)

Figure 9: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods (Simulation 2a (confounded observational data and RCT)). The distribution of bias scores over simulation runs is plotted. Given moderate to strong covariate shift between the observational data and RCT, multi-accurate learners achieve the best results.

![37_image_0.png](37_image_0.png)

Figure 10: MSE of CATE estimation by shift intensity and training set size for different estimation methods (Simulation 2a (confounded observational data and RCT)). The distribution of MSE scores over simulation runs is shown. T-learner-MC-Tree outperforms other methods in settings with shifted RCT data.

Simulation 2b (total shift between observational data and RCT) In simulation 2b, we consider potentially stronger distribution shifts beyond covariate shift alone.

Assumption 5 (Total shift from observational to RCT).

$$P(X_{obs})\neq P(X_{rct})$$

$$P(Y_{obs}=y\mid X,A)\neq P(Y_{rct}=y\mid X,A),\ \forall y$$

The difference between Assumption 5 and Assumption 4 is whether we allow the marginal distribution of U to shift. Assumption 4 is a "conditional model invariance" assumption between the data-generating process and the RCT. A sufficient condition for this to hold is that P(Uobs) = P(Urct) together with the invariant conditional probability assumption above.
On the other hand, the total shift of Assumption 5 could arise from shifts in the distribution of U. In both settings, these shifts come in addition to the covariate shift. The specification is as follows:

$$\mu_0(x)=\mu_0^{*rt}(x)=3x_1+5x_2$$

$$\mu_1(x)=\mu_1^{*rt}(x)=\mu_0(x)+3x_1+5x_2$$

$$\mu_0^{*os}(x,u)=\mu_0(x)-u$$

$$\mu_1^{*os}(x,u)=\mu_1(x)+3u$$

$$e^{os}(x,u)=\frac{1}{1+e^{(2-3u+(-2(x_1-0.5)-1(x_2-0.5)))}}$$

$$e^{rt}(x)=0.5$$

$$z(x)=\frac{1}{1+e^{(2(x_2-0.5)+(x_3-0.5))}}$$

Evaluation We evaluate bias in ATE and MSE in CATE estimation on a test set that follows the covariate distribution of the observational data, $\mathcal{D}_{os}$. However, in constructing the true ATE and τ(x), we use µ0(x) and µ1(x) as specified in the RCT, i.e., without unobserved confounders.

Results The bias of the estimated ATE for each method by shift intensity (column panels) and training set size (row panels) is presented in Figure 11 (and Table 9). The first set of results (first column) indicates that under unobserved confounding in the observational data only and without external covariate shift, RCT-based estimators are, as expected, unbiased. The multi-accurate DR- and T-learners that draw on both data sources are able to reduce the bias of the naive DR- and T-learner. As the external shift between the observational data and the RCT increases (second and third column), learning only on the RCT incurs bias, and shift-reweighted methods as well as (tree-based) multi-accurate DR- and T-learners achieve the best results. The results for the MSE of the CATE are shown in Figure 12 (Table 10). Similar to the results for bias, the RCT-based estimators perform best in the setting without external covariate shift (first column). Among the estimators based on the observational data, the ridge- and tree-based multi-accurate T-learners and causal forests perform best. Tree-based post-processing performs best among all methods in scenarios with strong covariate shift (third column).
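The shift-reweighting used by the wOS/wCT baselines throughout Sections D.2.2 and D.2.3 can be sketched as follows. This is a minimal univariate version with a hand-rolled gradient-descent logistic regression standing in for the membership model used in the experiments:

```python
import math

def fit_logistic(xs, labels, lr=0.5, steps=2000):
    """Univariate logistic regression by gradient descent: P(label = 1 | x)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / n
        b -= lr * gb / n
    return lambda x: 1 / (1 + math.exp(-(w * x + b)))

def shift_weights(train_x, test_x):
    """Importance weights (1 - p) / p, where p is the predicted
    probability that a point belongs to the training sample."""
    xs = train_x + test_x
    labels = [1] * len(train_x) + [0] * len(test_x)
    p_train = fit_logistic(xs, labels)
    return [(1 - p_train(x)) / p_train(x) for x in train_x]
```

Training points that look more like the target sample receive larger weights; when both samples coincide, all weights are close to one.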
![39_image_0.png](39_image_0.png) Figure 11: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods (Simulation 2b (total shift between observational data and RCT)). The distribution of bias scores over simulation runs is shown. As the external shift between the observational data and the RCT increases, multi-accurate DR-learner and T-learner-MC-Tree are competitive with shift-reweighted learning. ![40_image_0.png](40_image_0.png) Figure 12: MSE of CATE estimation by shift intensity and training set size for different estimation methods (Simulation 2b (total shift between observational data and RCT)). The distribution of MSE scores over simulation runs is shown. T-learner-MC-Tree performs best among all methods in scenarios with strong covariate shift. | Train | Shift | CForest | S-learner | DR-learner | T-learner | | | | | | | | |---------|---------|-----------|-------------|--------------|-------------|-------|-------|-------|-------|-------|-------|------| | size | degree | OS | wOS | OS | wOS | OS | Ridge | Tree | OS | wOS | Ridge | Tree | | 0 | 0.04 | 0 | -0.02 | -0.02 | 0.76 | 0.35 | 0.34 | -0.09 | -0.12 | -0.15 | -0.11 | | | 0.25 | 1.62 | -0.11 | 2.13 | 1.87 | 0.83 | 1.19 | 1.39 | 1.17 | 0.61 | 0.64 | 0.84 | | | 0.5 | 3.17 | -0.09 | 4.2 | 3.39 | 1.92 | 1.88 | 1.88 | 2.34 | 1.34 | 1.57 | 1.8 | | | 0.75 | 4.46 | -0.55 | 5.96 | 4.69 | 2.43 | 2.65 | 3.4 | 3.41 | 2.17 | 2.49 | 2.45 | | | 500 | 1 | 5.26 | 1.52 | 7.05 | 5.6 | 3.69 | 3.28 | 3.55 | 4.24 | 2.77 | 3.01 | 3.45 | | 1.25 | 5.87 | -2.09 | 7.65 | 5.92 | 3.89 | 3.36 | 4.42 | 4.4 | 2.55 | 2.77 | 3.08 | | | 1.5 | 6.29 | -2.33 | 8.1 | 6.38 | 2.59 | 3.68 | 3.93 | 5.09 | 2.46 | 3.4 | 3.69 | | | 1.75 | 6.8 | -2.73 | 8.45 | 7.42 | 4.28 | 4.91 | 4.93 | 5.13 | 2.26 | 3.01 | 4.21 | | | 2 | 6.88 | -4.08 | 8.6 | 7.1 | 5.01 | 4.01 | 4.31 | 5.17 | 1.69 | 3.48 | 3.58 | | | 0 | 0.04 | 0.04 | 0.01 | 0 | 0.31 | -0.01 | 0.21 | -0.01 | -0.01 | 0.16 | 0.2 | | | 0.25 | 0.79 | -0.34 | 
1.79 | 1.35 | 0.46 | 0.61 | 0.6 | 0.74 | 0.33 | 0.5 | 0.58 | | | 0.5 | 1.8 | -0.65 | 3.62 | 2.42 | 1.41 | 1.21 | 1.35 | 1.68 | 0.98 | 1.19 | 1.24 | | | 0.75 | 2.52 | -1.41 | 5.14 | 2.87 | 1.8 | 1.4 | 1.92 | 2.39 | 1.43 | 1.77 | 2 | | | 2000 | 1 | 2.97 | -2.08 | 6 | 3.27 | 1.82 | 1.51 | 1.91 | 2.76 | 1.58 | -2.41 | 2.18 | | 1.25 | 3.35 | -2.12 | 6.65 | 3.96 | 2.35 | 1.92 | 2.1 | 3.26 | 1.99 | 2.2 | 2.52 | | | 1.5 | 3.63 | -2.84 | 7.07 | 4.21 | 2.38 | 2.19 | 2.17 | 3.49 | 1.92 | 2.56 | 2.89 | | | 1.75 | 3.81 | -3.27 | 7.29 | 4.5 | 2.29 | 2.08 | 2.27 | 3.59 | 1.86 | 2.54 | 2.84 | | | 2 | 3.93 | -3.59 | 7.48 | 4.55 | 2.67 | 2.32 | 2.52 | 3.85 | 1.71 | 3.01 | 3.59 | | | 0 | 0.03 | 0.02 | -0.05 | -0.05 | 0.28 | 0.14 | 0.14 | -0.07 | -0.07 | -0.18 | -0.09 | | | 0.25 | 0.62 | -0.27 | 1.65 | 1.15 | 0.39 | 0.53 | 0.43 | 0.66 | 0.3 | 0.48 | 0.57 | | | 0.5 | 1.31 | -0.57 | 3.41 | 2.11 | 0.94 | 0.99 | 0.82 | 1.51 | 0.84 | 1.03 | 1.37 | | | 0.75 | 2.08 | -1.22 | 4.77 | 2.57 | 1.5 | 1.13 | 1.37 | 2.25 | 1.38 | 1.62 | 1.81 | | | 3500 | 1 | 2.38 | -1.63 | 5.64 | 2.83 | 1.88 | 1.54 | 1.62 | 2.61 | 1.55 | 2 | 2.29 | | 1.25 | 2.59 | -2.1 | 6.19 | 3.03 | 1.7 | 1.51 | 1.87 | 2.91 | 1.64 | 2.16 | 2.38 | | | 1.5 | 2.72 | -3.61 | 6.45 | 3.28 | 2.13 | 1.59 | 1.98 | 3.05 | 1.61 | 2.19 | 2.62 | | | 1.75 | 2.94 | -3.18 | 6.7 | 3.67 | 2.18 | 1.69 | 1.85 | 3.23 | 1.67 | 2.45 | 2.73 | | | 2 | 3 | -3.97 | 6.87 | 3.79 | 1.9 | 2.28 | 2.26 | 3.35 | 1.49 | 2.4 | 2.87 | | | 0 | -0.06 | -0.05 | -0.11 | -0.11 | -0.06 | 0.02 | -0.01 | -0.13 | -0.13 | -0.14 | -0.14 | | | 0.25 | 0.52 | -0.25 | 1.57 | 1.07 | 0.32 | 0.36 | 0.41 | 0.62 | 0.27 | 0.49 | 0.71 | | | 0.5 | 1.1 | -0.54 | 3.2 | 1.82 | 0.76 | 0.67 | 0.7 | 1.39 | 0.74 | 0.96 | 1.14 | | | 0.75 | 1.65 | -1.38 | 4.56 | 2.28 | 1.11 | 1.11 | 1.39 | 2.07 | 1.2 | 1.56 | 1.66 | | | 5000 | 1 | 2.02 | -1.86 | 5.38 | 2.55 | 1.27 | 1.4 | 1.59 | 2.47 | 1.5 | 1.83 | 2.24 | | 1.25 | 2.34 | -1.66 | 5.91 | 2.97 | 1.58 | 1.69 | 1.47 | 2.76 | 1.69 | 2.02 | 2.4 | | | 1.5 
| 2.37 | -3.08 | 6.2 | 2.94 | 1.93 | 1.97 | 1.54 | 2.89 | 1.62 | 2.3 | 2.23 | | | 1.75 | 2.55 | -3.24 | 6.39 | 3.26 | 1.91 | 1.74 | 1.45 | 3.06 | 1.68 | 2.49 | 3.14 | | | 2 | 2.63 | -3.56 | 6.53 | 3.7 | 1.83 | 1.9 | 1.98 | 3.2 | 1.7 | 2.38 | 2.62 | | Table 3: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods, averaged over simulation runs (Simulation 1a (external shift, linear CATE, beta confounding)). For each setting, method achieving best performance printed in **bold** (second best in *italic*). | Train | Shift | CForest | S-learner | DR-learner | T-learner | | | | | | | | |---------|---------|-----------|-------------|--------------|-------------|-------|-------|-------|-------|-------|--------|-------| | size | degree | OS | wOS | OS | wOS | OS | Ridge | Tree | OS | wOS | Ridge | Tree | | 0 | 12.52 | 12.5 | 19.64 | 19.69 | 39.11 | 35.83 | 34.71 | 14.99 | 15.01 | 23.65 | 23.34 | | | 0.25 | 16.22 | 14.87 | 25.4 | 24.31 | 39.71 | 38.62 | 33.19 | 18.39 | 17.96 | 13.76 | 23.99 | | | 0.5 | 23.97 | 17.31 | 38.24 | 30.95 | 49.65 | 53.33 | 34.47 | 22.98 | 20.9 | 15.71 | 28.8 | | | 0.75 | 33.76 | 22.6 | 53.54 | 39.98 | 56.69 | 43.87 | 48.8 | 29.32 | 25.36 | 21.3 | 31.88 | | | 500 | 1 | 39.1 | 33.58 | 63.77 | 46.94 | 58.16 | 59.52 | 49.9 | 34.82 | 30.83 | 22.51 | 36.65 | | 1.25 | 44.13 | 50.24 | 69.13 | 47.81 | 57.9 | 58.83 | 60.97 | 34.24 | 26.37 | 22.24 | 32.55 | | | 1.5 | 47.91 | 55.52 | 74.24 | 51.85 | 82.21 | 66.16 | 56.16 | 40.65 | 25.54 | 24.22 | 37.26 | | | 1.75 | 53.07 | 164 | 78.5 | 63.76 | 60.46 | 64.53 | 62.66 | 39.19 | 29.42 | 27.65 | 41.72 | | | 2 | 54.01 | 100.17 | 80.29 | 58.83 | 68.18 | 63.73 | 56.24 | 40.18 | 24.46 | 22.7 | 37.35 | | | 0 | 5.06 | 5.1 | 14.97 | 14.92 | 15.75 | 15.83 | 15.31 | 9.56 | 9.52 | 8.5 | 15.34 | | | 0.25 | 6.13 | 5.98 | 19.28 | 17.02 | 16.96 | 15.4 | 15.32 | 10.43 | 10.35 | 8.83 | 15.77 | | | 0.5 | 10.47 | 9.33 | 29.76 | 20.21 | 20.28 | 18.6 | 17.6 | 14.83 | 14.49 | 11.65 | 
18.63 | | | 0.75 | 13.19 | 14.76 | 40.88 | 21.47 | 19.86 | 18.81 | 19.15 | 17.24 | 15.84 | 13.61 | 22.23 | | | 2000 | 1 | 15.44 | 23.4 | 47.64 | 23.43 | 23.09 | 21.72 | 25.05 | 19.44 | 17.07 | 541.18 | 22.88 | | 1.25 | 17.03 | 30.8 | 53.67 | 27.69 | 25.46 | 22.39 | 21.78 | 21.68 | 19.04 | 14.56 | 25.03 | | | 1.5 | 18.42 | 40.07 | 57.9 | 28.71 | 26.72 | 26.79 | 24.99 | 23.04 | 17.78 | 16.42 | 25.81 | | | 1.75 | 19.23 | 42.16 | 59.87 | 29.85 | 26.8 | 22.7 | 22.47 | 23.04 | 16.93 | 15.57 | 27.11 | | | 2 | 19.99 | 45.5 | 62.18 | 30.1 | 24.72 | 27.07 | 23.7 | 25.36 | 16.63 | 19.02 | 29.57 | | | 0 | 3.2 | 3.19 | 12.91 | 12.87 | 12.56 | 11.02 | 11.25 | 7.64 | 7.63 | 7.24 | 12.44 | | | 0.25 | 4.22 | 4.17 | 16.63 | 14.13 | 12.04 | 11.28 | 10.98 | 9 | 9.05 | 7.34 | 13.86 | | | 0.5 | 6.57 | 6.46 | 26.72 | 16.98 | 15.43 | 13.4 | 11.72 | 12.47 | 12.03 | 9.51 | 16.99 | | | 0.75 | 9.78 | 12.25 | 36.13 | 18.59 | 14.68 | 14.92 | 14.56 | 15.35 | 14.25 | 11.21 | 17.79 | | | 3500 | 1 | 10.65 | 14.68 | 42.75 | 19.42 | 17.16 | 15.14 | 15.14 | 16.8 | 15 | 12.84 | 21.89 | | 1.25 | 10.83 | 23.62 | 47.2 | 19.39 | 16.18 | 16.57 | 16.01 | 17.78 | 14.77 | 12.98 | 20.17 | | | 1.5 | 11.61 | 32.26 | 49.25 | 20.51 | 18.26 | 16.88 | 17.81 | 18.68 | 14.49 | 13.39 | 23.67 | | | 1.75 | 12.69 | 38.13 | 51.6 | 22.77 | 17.92 | 16.71 | 16.32 | 19.75 | 15.28 | 15.07 | 21.18 | | | 2 | 12.73 | 56.72 | 53.34 | 23.36 | 16.4 | 21.69 | 19.04 | 20.17 | 15.18 | 14.76 | 22.09 | | | 0 | 2.62 | 2.63 | 12.45 | 12.36 | 9.31 | 8.73 | 9.23 | 6.97 | 6.99 | 6.14 | 10.51 | | | 0.25 | 3.19 | 3.01 | 15.53 | 13.18 | 9.32 | 8.49 | 8.95 | 7.77 | 7.73 | 7.74 | 12.25 | | | 0.5 | 4.95 | 4.58 | 23.96 | 14.14 | 10.56 | 9.48 | 10.36 | 10.57 | 10.09 | 8.2 | 14.69 | | | 0.75 | 6.66 | 9.45 | 33.15 | 16.08 | 12.62 | 11.07 | 12.24 | 13.68 | 12.92 | 11.27 | 16.6 | | | 5000 | 1 | 7.89 | 11.84 | 38.96 | 16.78 | 13.9 | 12.46 | 12.89 | 14.89 | 13.63 | 11.27 | 18.55 | | 1.25 | 9.41 | 16.77 | 43.57 | 18.71 | 15.24 | 12.35 | 13.52 | 16.02 | 13.83 
| 22.07 | 20.03 | | | 1.5 | 9.06 | 23.63 | 45.78 | 17.95 | 14.93 | 13.22 | 13.17 | 16.68 | 13.61 | 13.18 | 19.91 | | | 1.75 | 9.82 | 29.24 | 47.46 | 19.6 | 14.82 | 13.09 | 14.68 | 17.55 | 13.89 | 14.38 | 22.76 | | | 2 | 10.36 | 40.95 | 48.85 | 22.73 | 15.09 | 14.9 | 15.18 | 18.62 | 14.32 | 14.42 | 19.83 | | Table 4: MSE of CATE estimation by shift intensity and training set size for different estimation methods, averaged over simulation runs (Simulation 1a (external shift, linear CATE, beta confounding)). For each setting, method achieving best performance printed in **bold** (second best in *italic*). | Train | Shift | CForest | S-learner | DR-learner | T-learner | | | | | | | | |---------|---------|-----------|-------------|--------------|-------------|-------|-------|--------|--------|--------|--------|--------| | size | degree | OS | wOS | OS | wOS | OS | Ridge | Tree | OS | wOS | Ridge | Tree | | 0 | -0.59 | -0.66 | -2.25 | -2.27 | -2.28 | -2.32 | -2.42 | -4.59 | -4.6 | -3.73 | -4.4 | | | 0.25 | -2.94 | -1.19 | -4.51 | -3.91 | -3.96 | -3.38 | -3.88 | -6.21 | -5.58 | -4.39 | -5.64 | | | 0.5 | -5.21 | -1.94 | -6.68 | -5.78 | -5.14 | -5.16 | -5.36 | -7.94 | -6.79 | -6.71 | -8.1 | | | 0.75 | -7.23 | -2.63 | -8.54 | -7.44 | -6.94 | -5.77 | -6.43 | -9.59 | -7.92 | -8.42 | -9.96 | | | 500 | 1 | -8.57 | -4.95 | -10.07 | -8.73 | -8.58 | -6.96 | -7.66 | -11.09 | -9.13 | -9.26 | -10.79 | | 1.25 | -9.07 | -5.13 | -10.71 | -9.44 | -6.83 | -6.66 | -7.65 | -11.62 | -9.41 | -10.17 | -11.92 | | | 1.5 | -9.56 | -6.54 | -11.14 | -9.84 | -8.59 | -6.72 | -8.72 | -12 | -9.49 | -10.49 | -12.25 | | | 1.75 | -9.71 | -5.74 | -11.54 | -10.19 | -8.4 | -7.83 | -8.57 | -12.48 | -9.76 | -11.2 | -12.58 | | | 2 | -10.04 | -7.48 | -11.7 | -10.16 | -9.39 | -7.1 | -8.24 | -12.63 | -9.29 | -10.75 | -12.02 | | | 0 | 0.18 | 0.17 | -2.05 | -2.06 | -1.62 | -1.63 | -1.48 | -3.51 | -3.52 | -3.3 | -3.46 | | | 0.25 | -2.01 | -0.22 | -3.93 | -3.37 | -2.49 | -2.87 | -3.04 | -4.83 | -4.28 | -4.3 | -4.78 | | 
| 0.5 | -3.88 | -0.64 | -5.86 | -4.88 | -3.36 | -3.81 | -3.81 | -6.53 | -5.42 | -6.2 | -7.06 | | | 0.75 | -5.55 | -1.76 | -7.54 | -6.37 | -4.39 | -3.9 | -4.55 | -8.12 | -6.72 | -7.29 | -8.26 | | | 2000 | 1 | -6.5 | -1.8 | -8.64 | -7.27 | -5.98 | -5.41 | -5.71 | -9.15 | -7.28 | -8.18 | -9.26 | | 1.25 | -7.18 | -2.12 | -9.45 | -8.04 | -6.11 | -5.29 | -5.73 | -9.95 | -7.68 | -8.98 | -10.53 | | | 1.5 | -7.6 | -2.23 | -9.74 | -8.27 | -6.07 | -6.02 | -6.41 | -10.26 | -7.87 | -9.3 | -10.63 | | | 1.75 | -7.8 | -3.98 | -10.05 | -8.81 | -6.2 | -5.67 | -6.62 | -10.55 | -8.44 | -9.95 | -11.77 | | | 2 | -7.87 | -2.7 | -10.19 | -8.6 | -6.84 | -5.42 | -6.29 | -10.63 | -7.7 | -9.79 | -11.07 | | | 0 | 0.13 | 0.16 | -2.01 | -2 | -1.41 | -1.46 | -1.54 | -3.17 | -3.17 | -3.02 | -3.15 | | | 0.25 | -1.64 | -0.01 | -3.64 | -3.09 | -2.29 | -2.4 | -2.35 | -4.37 | -3.86 | -4.1 | -4.39 | | | 0.5 | -3.5 | -0.4 | -5.53 | -4.53 | -3.6 | -3.47 | -3.77 | -6.08 | -5.04 | -5.9 | -6.55 | | | 0.75 | -4.77 | -0.37 | -6.87 | -5.61 | -4.15 | -4.01 | -4.16 | -7.27 | -5.73 | -6.72 | -7.8 | | | 3500 | 1 | -5.83 | -0.64 | -8.12 | -6.73 | -5.24 | -4.64 | -4.55 | -8.57 | -6.61 | -7.76 | -8.58 | | 1.25 | -6.39 | -0.74 | -8.74 | -7.25 | -4.73 | -5.2 | -6.17 | -9.08 | -7.02 | -8.4 | -9.62 | | | 1.5 | -6.7 | -0.03 | -9.06 | -7.42 | -6.11 | -5.35 | -5.37 | -9.5 | -7.08 | -8.98 | -10 | | | 1.75 | -7 | -0.6 | -9.29 | -7.63 | -6.05 | -5.85 | -5.81 | -9.68 | -7.09 | -8.8 | -10.08 | | | 2 | -7.17 | -1.68 | -9.58 | -7.77 | -6.13 | -6.14 | -6.2 | -10.07 | -7.46 | -9.1 | -9.93 | | | 0 | 0.21 | 0.22 | -1.91 | -1.89 | -1.2 | -1.35 | -1.23 | -3 | -2.99 | -2.96 | -3.28 | | | 0.25 | -1.47 | 0.09 | -3.49 | -2.94 | -2.01 | -2.14 | -2.18 | -4.14 | -3.67 | -3.93 | -4.63 | | | 0.5 | -3.05 | 0.05 | -5.08 | -4.04 | -2.9 | -2.94 | -3.08 | -5.53 | -4.47 | -5.01 | -5.58 | | | 0.75 | -4.53 | -0.21 | -6.68 | -5.4 | -3.9 | -3.92 | -4 | -7.11 | -5.58 | -6.81 | -7.51 | | | 5000 | 1 | -5.38 | 0.18 | -7.65 | -6.25 | -4.79 | -4.55 | -4.7 | 
-8.02 | -6.14 | -7.51 | -8.49 | | 1.25 | -5.99 | -0.97 | -8.28 | -6.76 | -4.67 | -5.11 | -4.95 | -8.64 | -6.45 | -8.24 | -9.2 | | | 1.5 | -6.32 | -0.21 | -8.76 | -6.99 | -5.27 | -4.71 | -5.39 | -9.11 | -6.76 | -8.62 | -9.71 | | | 1.75 | -6.42 | -0.64 | -8.83 | -7.1 | -5.64 | -4.39 | -5.24 | -9.28 | -6.77 | -8.57 | -9.49 | | | 2 | -6.88 | -2.61 | -9.24 | -7.47 | -6.14 | -5.33 | -5.35 | -9.68 | -7.28 | -9.12 | -10.24 | | Table 5: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods, averaged over simulation runs (Simulation 1b (external shift, full linear CATE, logistic confounding)). For each setting, method achieving best performance printed in **bold** (second best in *italic*). | Train | Shift | CForest | S-learner | DR-learner | T-learner | | | | | | | | |---------|---------|-----------|-------------|--------------|-------------|--------|--------|--------|--------|--------|--------|--------| | size | degree | OS | wOS | OS | wOS | OS | Ridge | Tree | OS | wOS | Ridge | Tree | | 0 | 48.96 | 48.83 | 61.44 | 61.47 | 49.01 | 36.24 | 41.68 | 50.37 | 50.47 | 37.92 | 48.71 | | | 0.25 | 58.03 | 52.83 | 79.93 | 79.13 | 64.22 | 47.74 | 53.11 | 72.51 | 64.08 | 52.5 | 67.26 | | | 0.5 | 75.47 | 59.45 | 107.39 | 107.74 | 76.5 | 65.54 | 69.39 | 103.71 | 86.43 | 80.42 | 111.06 | | | 0.75 | 98.37 | 78.02 | 134.31 | 132.05 | 109.67 | 83.42 | 87.06 | 132.66 | 105.15 | 107.44 | 147.33 | | | 500 | 1 | 116.47 | 94.48 | 160.77 | 154.28 | 126.34 | 110.72 | 111.87 | 164.12 | 127.02 | 121.08 | 165.23 | | 1.25 | 121.81 | 132.84 | 169.13 | 165.05 | 142.91 | 108.59 | 128.54 | 173.35 | 133.51 | 138.64 | 186.45 | | | 1.5 | 128.21 | 144.18 | 177.03 | 169.83 | 133.75 | 115.82 | 119.92 | 179.79 | 137.04 | 142.15 | 194.81 | | | 1.75 | 129.24 | 139.23 | 183.05 | 177.42 | 153.78 | 109.25 | 123.72 | 189.41 | 140.58 | 158.14 | 198.73 | | | 2 | 136.23 | 156.07 | 187.09 | 176.18 | 143.81 | 114.75 | 133.21 | 193.61 | 136.73 | 148.47 | 192.43 | | | 0 | 
Table 6: MSE of CATE estimation by shift intensity and training set size for different estimation methods, averaged over simulation runs (Simulation 1b (external shift, full linear CATE, logistic confounding)). For each setting, method achieving best performance printed in **bold** (second best in *italic*).

Table 7: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods, averaged over simulation runs (Simulation 2a (confounded observational data and RCT)). For each setting, method achieving best performance printed in **bold** (second best in *italic*).

Table 8: MSE of CATE estimation by shift intensity and training set size for different estimation methods, averaged over simulation runs (Simulation 2a (confounded observational data and RCT)). For each setting, method achieving best performance printed in **bold** (second best in *italic*).

Table 9: Bias of ATE estimation by shift intensity and training set size for different CATE estimation methods, averaged over simulation runs (Simulation 2b (total shift between observational data and RCT)). For each setting, method achieving best performance printed in **bold** (second best in *italic*).

Table 10: MSE of CATE estimation by shift intensity and training set size for different estimation methods, averaged over simulation runs (Simulation 2b (total shift between observational data and RCT)). For each setting, method achieving best performance printed in **bold** (second best in *italic*).

## D.3 WHI Data Application

**Data** We consider a case study using clinical trial and observational data from the Women's Health Initiative (Machens and Schmidt-Gollwitzer, 2003). A focus of this study was to investigate the effectiveness of hormone replacement therapy (HRT) treatment in preventing the onset of chronic (cardiovascular) diseases.
As the observational study and clinical trial data led to conflicting findings, the WHI study has become a prime example of how confounding in observational data can introduce bias and, in this case, suggest overly optimistic results (for more detail, see Kallus and Zhou 2018). In this setting, we study how multi-accurate CATE estimators that are "warm-started" with observational data and have access to small samples from the clinical trial compare to estimators that draw on either observational or clinical trial data only. We aim to assess the effect of HRT treatment on systolic blood pressure as a major risk factor for cardiovascular diseases. We estimate the CATE with respect to two sets of covariates: a small set (age, ethnicity) and an extended set (age, ethnicity, number of cigarettes per day, systolic blood pressure baseline, diastolic blood pressure baseline, BMI baseline; see Table 11).

In our application setting, we start with the observational study (OS) (52,335 observations) and draw a random 50% sample that serves as observational training data for (naive) CATE estimation. We split the clinical trial data (14,531 observations) into an initial 50% training set and a 50% test set. The initial training set is used to draw further random samples of size {250, 500, 750, 1000, 1250, 1500} that serve as clinical trial (CT) training data. For each CT training set size, sampling is repeated 25 times.

**CATE estimation** We use the following methods for estimating the CATE based on the training set from the observational study.

- (**CForest-OS**) Causal forest (Wager and Athey, 2018) trained in the training set of the observational data.
- (**S-learner-OS**) S-learner using random forest to learn a joint outcome model for treated and untreated in the training set of the observational data.
- (**DR-learner-OS**) DR-learner (Kennedy, 2023) using regression forest to learn separate outcome models for treated and untreated in the training set of the observational data.
- (**T-learner-OS**) T-learner using regression forest to learn separate outcome models for treated and untreated in the training set of the observational data.

We additionally estimate the DR-learner and T-learner using multi-calibration boosting with samples of clinical trial data. The MCBoost hyperparameter settings are shown in Table 12.

- (**DR-learner-MC-Ridge**) DR-learner using regression forest in the training set of the observational data is post-processed with MCBoost using ridge regression in the training set of the clinical trial data.
- (**DR-learner-MC-Tree**) DR-learner using regression forest in the training set of the observational data is post-processed with MCBoost using decision trees in the training set of the clinical trial data.
- (**T-learner-MC-Ridge**) T-learner using regression forest in the training set of the observational data is post-processed with MCBoost using ridge regression in the training set of the clinical trial data.
- (**T-learner-MC-Tree**) T-learner using regression forest in the training set of the observational data is post-processed with MCBoost using decision trees in the training set of the clinical trial data.

We further compare to the following CATE learners that are solely based on clinical trial data.

- (**CForest-CT**) Causal forest trained in the training set of the clinical trial data.
- (**S-learner-CT**) S-learner using random forest to learn a joint outcome model for treated and untreated in the training set of the clinical trial data.
- (**T-learner-CT**) T-learner using random forest to learn separate outcome models for treated and untreated in the training set of the clinical trial data.

We infer the "true" CATE by applying the following methods to the test set of the clinical trial data.

- (**RL-NET**) R-learner (Nie and Wager, 2020) using elastic net as base learner.
- (**TL-NET**) T-learner using elastic net as base learner.
- (**XL-RF**) X-learner (Künzel et al., 2019) using random forest as base learner.

**Evaluation** We compare the outlined methods with respect to the bias in ATE and the MSE in CATE estimation in the test set of the clinical trial data. To evaluate bias, we use the observed difference in outcomes by treatment condition in the clinical trial,

$$\widehat{\mathrm{ATE}}_{obs}=\frac{\sum TY}{\sum T}-\frac{\sum(1-T)Y}{\sum(1-T)},$$

as the estimate of the true ATE and evaluate it against the respective mean of $\hat{\tau}$ of the various CATE estimation methods:

$$\mathrm{Bias}=\widehat{\mathrm{ATE}}_{obs}-\frac{1}{n}\sum\hat{\tau}(x).$$

In evaluating MSE, we use the estimated CATE function, $\tau^{*}(x)$, based on learners that had privileged access to the clinical trial test data (XL-RF, RL-NET, TL-NET) as a substitute for the true $\tau(x)$ and evaluate it against $\hat{\tau}(x)$ of the CATE estimation methods outlined above (using the observational and/or clinical trial training data only):

$$\mathrm{MSE}={\frac{1}{n}}\sum(\tau^{*}(x)-{\hat{\tau}}(x))^{2}.$$

**Results** Figure 13a (small set of covariates) and Figure 13b (extended set) show the bias of the estimated ATE for each method by clinical trial training set size. As expected, learning in the clinical trial training data allows for unbiased estimation of the ATE, as shown by the three CT-based methods in both settings. These estimates, however, come with high variability if the CT training data is small. Learning solely in the observational data incurs bias in ATE estimation, particularly in settings where the CATE learners only have access to a small set of covariates (Figure 13a). In this case, post-processing with clinical trial data improves upon the initial T-learner. Given an extended set of covariates, the bias of the observational data-based methods decreases and post-processing is less effective (Figure 13b).
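For intuition, the T-learner construction and the two evaluation metrics above can be sketched in a few lines. This is an illustrative example on hypothetical randomized toy data with a known CATE (standing in for the privileged estimate $\tau^{*}(x)$); it is not the WHI data or the paper's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical randomized toy data (NOT the WHI data): binary treatment t,
# one covariate x, and an outcome with known CATE tau(x) = 1 + x.
n = 2000
x = rng.uniform(-1, 1, size=(n, 1))
t = rng.integers(0, 2, size=n)
tau_true = 1 + x[:, 0]
y = x[:, 0] + t * tau_true + rng.normal(0, 0.1, size=n)

# T-learner: fit separate outcome models for treated and untreated,
# then take the difference of their predictions.
m1 = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[t == 1], y[t == 1])
m0 = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[t == 0], y[t == 0])
tau_hat = m1.predict(x) - m0.predict(x)

# Observed ATE: difference in mean outcomes by treatment arm
# (equivalent to the sum-ratio formula in the text).
ate_obs = y[t == 1].mean() - y[t == 0].mean()

bias = ate_obs - tau_hat.mean()           # Bias = ATE_obs - mean(tau_hat)
mse = np.mean((tau_true - tau_hat) ** 2)  # MSE against the (here known) true CATE
print(f"bias={bias:.3f}, mse={mse:.3f}")
```

Because the toy treatment is randomized, both the bias and the MSE come out close to zero here; in the WHI setting, the true CATE is unknown and is replaced by the privileged learners XL-RF, RL-NET, and TL-NET.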
We evaluate the MSE of the estimated CATE in Figure 14 (small set of covariates) and Figure 15 (extended set) by clinical trial training set size against the three approximations of the true CATE that are based on the clinical trial test data. The observational data-based methods generally outperform the CT-based CATE estimates, indicating that the small clinical trial training sets on their own are not sufficient for accurate CATE estimation (compare Figure 14a to 14b). Post-processing the initial T-learner via multi-calibration boosting with clinical trial data achieves the smallest MSE for most CT training set sizes and true CATE estimation techniques in the limited covariate setting (Figure 14a). As the observational data-based CATE learners achieve low MSE with the extended set of covariates, post-processing shows no improvement in this case (Figure 15a).

Table 11: Sample composition (averages and proportions) of the observational study and clinical trial of the WHI data.
| | OS | | | RCT | | |
|-----------------------------------|---------|--------|--------|---------|--------|--------|
| | Overall | T = 0 | T = 1 | Overall | T = 0 | T = 1 |
| Treatment | 0.33 | | | 0.50 | | |
| Systolic blood pressure | 124.83 | 125.88 | 122.68 | 125.54 | 125.30 | 125.78 |
| Systolic blood pressure baseline | 125.09 | 126.24 | 122.75 | 127.65 | 127.69 | 127.61 |
| Diastolic blood pressure baseline | 74.56 | 74.78 | 74.12 | 75.68 | 75.78 | 75.59 |
| BMI baseline | 26.83 | 27.29 | 25.88 | 28.52 | 28.53 | 28.50 |
| Age | 62.52 | 63.43 | 60.68 | 63.37 | 63.37 | 63.37 |
| Cigarettes per day 0 | 0.53 | 0.54 | 0.51 | 0.52 | 0.52 | 0.51 |
| <1 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 |
| 1-4 | 0.09 | 0.09 | 0.09 | 0.08 | 0.07 | 0.08 |
| 5-14 | 0.15 | 0.15 | 0.15 | 0.15 | 0.16 | 0.15 |
| 15-24 | 0.13 | 0.12 | 0.14 | 0.14 | 0.14 | 0.15 |
| 25-34 | 0.04 | 0.04 | 0.05 | 0.05 | 0.05 | 0.05 |
| 35-44 | 0.03 | 0.03 | 0.03 | 0.03 | 0.03 | 0.03 |
| 45+ | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| Ethnicity White | 0.89 | 0.87 | 0.92 | 0.84 | 0.84 | 0.84 |
| Black | 0.05 | 0.07 | 0.02 | 0.07 | 0.07 | 0.06 |
| Hispanic | 0.03 | 0.03 | 0.02 | 0.05 | 0.05 | 0.05 |
| American Indian | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| Asian/Pacific Islander | 0.02 | 0.02 | 0.03 | 0.02 | 0.02 | 0.02 |
| Unknown | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |

![51_image_0.png](51_image_0.png)

Figure 13: Bias by clinical trial training set size (WHI Data Application). The distribution of bias scores over sampling repetitions is plotted. Post-processing the initial T-learner with clinical trial data improves over T-learner-OS in the limited covariate setting.

Table 12: Hyperparameter settings for post-processing using MCBoost. Default settings are used for parameters not listed.
(a) T-learner MC

| Method | Implementation | Hyperparameter | Value |
|---|---|---|---|
| Ridge | mcboost | max_iter | 10 |
| | | alpha | 1e-06 |
| | | eta | 0.1 |
| | | weight_degree | 2 |
| | glmnet | alpha | 0 |
| | | s | 1 |
| Tree | mcboost | max_iter | 10 |
| | | alpha | 1e-06 |
| | | eta | 0.1 |
| | | weight_degree | 2 |
| | rpart | maxdepth | 3 |

(b) DR-learner MC

| Method | Implementation | Hyperparameter | Value |
|---|---|---|---|
| Ridge | mcboost | max_iter | 5 |
| | | alpha | 1e-06 |
| | | eta | 0.1 |
| | | weight_degree | 2 |
| | glmnet | alpha | 0 |
| | | s | 1 |
| Tree | mcboost | max_iter | 5 |
| | | alpha | 1e-06 |
| | | eta | 0.1 |
| | | weight_degree | 2 |
| | rpart | maxdepth | 3 |

Figure 14: MSE by 'true' CATE estimation method and clinical trial training set size with small set of covariates (WHI Data Application); panel (b) shows clinical trial data-based CATE estimation. The distribution of MSE scores over sampling repetitions is plotted. T-learner-MC-Ridge outperforms other methods for most CT training set sizes and true CATE estimation techniques RL-NET and TL-NET.

Figure 15: MSE by 'true' CATE estimation method and clinical trial training set size with extended set of covariates (WHI Data Application). The distribution of MSE scores over sampling repetitions is plotted. Multi-calibration boosting yields little improvement as CForest-OS and T-learner-OS already achieve low MSE.
Review 1: Summary: This paper proposes a novel method for conditional average treatment effect estimation that is robust to unknown covariate shifts. Extensive experiments show the efficiency and excellent bias control of the proposed method under distribution shifts. The approach can also adapt to multiple kinds of covariate distribution shifts.

Strengths and Weaknesses: Strengths: 1. Robustness to unknown covariate shifts in CATE estimation is important and challenging, and it also has promising applications. 2. The proposed method is novel, combining the strengths of multi-accurate learning to ensure a low estimation bias. 3. The experimental evaluations are extensive, including a simulation study, a semi-synthetic case study, and an RCT experiment. 4. The authors employ many examples to introduce their motivation, which makes it easy for readers to understand.

Weaknesses: 1. The related work discussion in the introduction is somewhat scattered. 2. It is advised to list the contributions separately in the introduction. 3. It is suggested that the definition of the propensity score be moved to before the overlap assumption. 4. I suggest the authors put the background in an individual section rather than in the Method section. 5. Since the paper aims to improve the robustness of CATE under covariate distribution shifts, I suggest the authors discuss a quantified criterion of distribution shift (e.g., Kullback-Leibler divergence) in the simulations section.

Requested Changes: Please refer to the Weaknesses.

Broader Impact Concerns: N/A

==================================================

Review 2: Summary: The paper applies recent advances in algorithmic fairness to the estimation of CATE. Two settings are considered: 1) a CATE estimator is trained on an unconfounded dataset and the goal is to deploy it on another unconfounded dataset; and 2) a CATE estimator is trained on a large but confounded dataset and the goal is to deploy it on a small RCT dataset.
In both cases, it is expected that the estimators are valid on the target datasets after post-processing them with the multi-calibration technique from the algorithmic fairness literature.

Strengths and Weaknesses:

### Strengths

Applying the multi-calibration technique to the estimation of CATE is an interesting research direction.

### Weaknesses

Unfortunately, the proposed method is based on misunderstandings of the original work (Kim et al. 2022) and is fundamentally flawed. The writing is very sloppy, and the paper reads like cursory notes based on a quick read and application of the previous work.

Requested Changes: I think the whole project should be reworked, and a whole new paper should be written. So please take my following comments as suggestions for future work.

### Method

The current paper apparently says multi-calibration can deal with arbitrary covariate shifts. However, the problem addressed by Kim et al. (2022) is roughly the adaptation of the averaged outcome estimator from one of the treatment/control groups to the other group. That is, the difference between the two groups (populations) can be *completely specified* by the true propensity score, although the true propensity score is *unknown*.

Perhaps the most obvious mistake in the current paper is that the multi-calibration technique is applied to the conditional outcome $\mu_t(X)$. However, *what is multi-calibrated should be a propensity score* which, as stated in the 1st point, lies at the core of the technique.

- The domain of the function $\tilde{p}$ in Def 1 and 2, taken from the previous work, also shows this clearly. This is an example of taking cursory notes from the previous work without much thinking (or it could have been noticed that multi-calibration should be applied to propensity scores).

The identification results in the current paper make no sense. In practice, Kim et al.
(2022) address the problem of ATE error caused by the error of propensity score estimation, so that the former has a known bound if the latter is bounded (Universal Adaptability). That is, *the original work deals with the problem of estimation but not identification*, and, in fact, the identification results, including under the settings in the current paper, are just textbook results of identification under unconfoundedness.

### Writing

The paper does not use consistent terminology. For example, reference population and target population refer to the same thing.

Another general problem is unexplained terminology and symbols. For example,
- reference/target population is not really explained in the Intro, and their meaning can only be guessed from later parts;
- "<<" and "P-a.s." in Setting 1.

Some explanations are not in the right place. Examples:
- "pseudo-outcome regression" is mentioned in the Intro and Background, but only defined as late as page 11;
- "MC-Boost" and "Auditing" are in Figure 1, but are explained nearly 3 pages later, and this makes Figure 1 not very meaningful.

Insufficient/misleading explanations. Examples:
- why is it required that the conditional expectation equals 1 in Setting 1?
- in Figure 1 for Setting 2, "unknown $Q$" is mentioned, while it is only used in the main text for Setting 1 but not Setting 2;
- at the beginning of Sec 3.1, the eqs do not show the difference between S- and T-learners because $\mu_t(x)$ is an equivalent way to write $\mu(x, t)$; and it puzzles me why the S-learner is mentioned here because the S-learner is never mentioned again in the main text.
- the paper spends quite some space on MCBoost and auditing while it adds no intuition but confusion, e.g., "residuals" and "twisting" are not clearly explained.
Broader Impact Concerns: None

==================================================

Review 3: Summary: The authors propose a method to learn multi-accurate predictors to post-process CATE T-learners (differenced regressions) so that they become robust to unknown covariate shifts at the time of deployment.

Strengths and Weaknesses: Strengths: The method works in general for pseudo-outcome regression, such as the DR-learner. The authors show how this approach can combine (large) confounded observational and (smaller) randomized datasets by learning a confounded predictor from the observational dataset and auditing for multi-accuracy on the randomized controlled trial. They show improvements in bias and mean squared error in simulations with increasingly larger covariate shift, and on a semi-synthetic case study of a parallel large observational study and smaller randomized controlled experiment. Overall, the authors establish a connection between methods developed for multi-distribution learning and achieve appealing desiderata (e.g., external validity) in causal inference and machine learning.

Weaknesses: What is the hyperparameter setup? Theoretically speaking, what are the advantages of the proposed method?

Requested Changes: Please refer to the weaknesses.

Broader Impact Concerns: NA

==================================================
# Dual-Windowed Vision Transformer With Angular Self-Attention

Weili Shi *rhs2rr@virginia.edu*
School of Data Science, University of Virginia

Sheng Li *shengli@virginia.edu*
School of Data Science, University of Virginia

Reviewed on OpenReview: *https://openreview.net/forum?id=7jgu4oXsGM*

## Abstract

Following their great success in natural language processing, transformer-based models have emerged as competitive alternatives to convolutional neural networks in computer vision. The vision transformer (ViT) and its subsequent variants have exhibited promising performance in tasks such as image classification, object detection, and semantic segmentation. The core of vision transformers is the self-attention mechanism, which models the long-range dependencies between different tokens. Conventionally, the attention matrix in self-attention is calculated by the scaled dot-product of the *query* (Q) and *key* (K). In this case, the attention weight depends on the norms of Q and K as well as the angle between them. In this paper, we propose a new attention mechanism named angular self-attention, which replaces the scaled dot-product operation with an angular function in order to effectively model the relationship between tokens. In particular, we propose two forms of functions for our angular self-attention: quadratic and cosine. Based on angular self-attention, we design a new vision transformer architecture called the dual-windowed angular vision transformer (**DWAViT**). DWAViT is a hierarchical-structured model characterized by the angular self-attention and a new local window mechanism. We evaluate DWAViT on multiple computer vision benchmarks, including image classification on ImageNet-1K, object detection on COCO, and semantic segmentation on ADE20K. Our experimental results suggest that our model can achieve promising performance on these tasks while maintaining a computational cost comparable to that of baseline models (e.g., the Swin Transformer).
The source code is available at https://github.com/DamoSWL/DWAViT.

## 1 Introduction

Vision transformers have received tremendous attention since their emergence. Inspired by the success of the transformer (Vaswani et al., 2017) in sequence modeling, Dosovitskiy et al. (Dosovitskiy et al., 2021) proposed the initial vision transformer architecture, which can be regarded as the encoder part of the original transformer (Vaswani et al., 2017). Compared to convolutional neural networks (CNNs), the vision transformer is characterized by its ability to transform spatial visual representation learning on the image into token-to-token learning by partitioning the image into multiple patches. Benefiting from the ability of the self-attention mechanism to model long-range dependencies between tokens in the image, vision transformers exhibit performance on par with or better than CNNs in many computer vision tasks (Shi et al., 2022; 2023; Zhou et al., 2022b; Chen et al., 2023a; Wang et al., 2023a;b), such as image classification (Dosovitskiy et al., 2021; Dong et al., 2022), object detection (Carion et al., 2020; He & Todorovic, 2022; Zhang et al., 2022a), and semantic segmentation (Zheng et al., 2021; Xie et al., 2021). Despite the merits mentioned above, the shortcomings of vision transformers are also obvious. The low level of inductive bias requires large datasets such as ImageNet-21K (Deng et al., 2009) and JFT-300M (Sun et al., 2017) for model training. Besides, the time complexity of the self-attention computation is quadratic in the number of input tokens, which prohibits the application of vision transformers to tasks that involve high-resolution images. To deal with the excessive computation of self-attention, subsequent work (Liu et al., 2021; Dong et al., 2022; Huang et al., 2019; Wang et al., 2020; Xia et al., 2022) proposed different local-window mechanisms to restrict the computation of self-attention to a local window.
For instance, the pioneering Swin Transformer (Liu et al., 2021) adopts shifted windows to reduce the workload of computing self-attention and to facilitate the interaction between local windows. CSwin (Dong et al., 2022) proposes the cross-shaped window, in which the image is split into horizontal and vertical strips in parallel. Another work (Xia et al., 2022) presents a flexible local window that can be implemented in a data-dependent way. Another branch of work (Qin et al.; Katharopoulos et al., 2020; Peng et al.; Choromanski et al.) focuses on an in-depth understanding of the self-attention mechanism and proposes new formulations to calculate the attention scores between different pairs of tokens. From the perspective of kernel learning, the interaction of the query and key can be modeled by a specific kernel function, and the scaled dot-product operation can be replaced by a softmax-free operation in self-attention. Usually, the softmax-free operation lowers the time complexity of the computation in self-attention.

Figure 1: The illustration of the dual window mechanism. The image is partitioned into (a) an even number of local windows and (b) an odd number of local windows in two layers, respectively. The size of the local window is flexible, and tokens that lie on the border of one local window reside in the interior of a local window in the following layer. The connection between the local windows in each layer is bridged by the operation of the local windows in the next layer.

In this paper, we present new designs for the local window mechanism and the operation in self-attention. In terms of the local window, we propose a dual window mechanism. As shown in Fig. 1, similar to the Swin Transformer (Liu et al., 2021), the local window is also imposed on the feature maps for the purpose of reducing the time complexity.
However, unlike previous work in which the size of the local window is fixed, the size of our proposed local window is flexible and adjustable according to the size of the feature maps. Besides, to mitigate the problem of lacking connections between local windows, the number of local windows differs between layer t and layer t + 1. For instance, there is an even number of local windows at layer t but an odd number of local windows at layer t + 1. In this case, tokens that lie in one local window of the first feature map belong to another local window in the following feature map. Since the feature maps are partitioned into different numbers of local windows, the coordinates of the local windows in adjacent feature maps are different. The tokens in the overlapping area of local windows can bridge the connection between local windows, since these tokens participate in the self-attention calculation within each local window. With the interaction of the local windows, receptive fields are enlarged implicitly, and the ability to model long-range relations is also enhanced considerably. In the traditional self-attention mechanism, the similarity of the query and key is computed by the scaled dot-product. Thus, the similarity depends on the norms of the query and key as well as the angle between them. Inspired by previous work (Wang et al., 2018; Zhao et al., 2020), we notice that the scaled dot-product function is not the only choice to model the relationship of tokens. In this paper, we propose angular self-attention, in which the similarity of the query and key depends only on the angle between them. To reduce the impact of the norms of the query and key on the relation of tokens, the query and key are L2-normalized so that they lie on the unit sphere. The relationship of the query and key is then determined by the angle between them, and a smaller angle yields a larger attention score between a pair of query and key.
In angular self-attention, we adopt two forms of functions, quadratic and cosine, to model this relationship, and the similarity is further amplified by temperature scaling. Besides, we also propose a new linear function to simplify the computation in the quadratic self-attention. Our experiments show that angular self-attention can serve as an alternative to the traditional scaled dot-product self-attention. Jointly combining the dual window mechanism and angular self-attention, we propose a novel hierarchical-structured vision transformer backbone called the dual-windowed angular vision transformer (DWAViT). In DWAViT, the attention score for each pair of query and key is modeled by the temperature-scaled quadratic/cosine functions, and experimental results validate that our quadratic/cosine functions are effective in modeling the relationship between tokens. Besides, the dual window mechanism is also adopted in our new backbone. The feature maps are partitioned into even/odd numbers of local windows in the layers of DWAViT alternately. The dual window mechanism preserves the ability to model long-range relationships between tokens. With a proper partition of the feature maps, our DWAViT can be applied to downstream tasks (e.g., object detection, semantic segmentation) that involve high-resolution images and achieves a computational cost comparable to that of baseline models (i.e., the Swin Transformer (Liu et al., 2021)). The major contributions of this paper are as follows:

- We propose the dual window mechanism to split the global feature map into a number of smaller localized feature maps. By partitioning the feature maps into even/odd numbers of local windows in an alternating way, the lack of connection between local windows can be alleviated.
- We propose the angular self-attention in which the scaled dot-product operation is replaced by temperature-scaled quadratic/cosine functions.
A linear function is also proposed to simplify the computation in quadratic self-attention. Our proposed angular self-attention can model the long-range relationships of tokens and is a competitive alternative to the scaled dot-product self-attention.
- The dual-windowed angular vision transformer (DWAViT) is proposed by jointly combining the dual window mechanism and angular self-attention. DWAViT is evaluated on a series of dense prediction tasks and achieves competitive performance on ImageNet image classification, COCO object detection, and ADE20K semantic segmentation. With a proper partition of the local windows, our model can achieve a computational cost comparable to that of baseline models.

## 2 Related Work

Vision Transformers. Pioneering work (Parmar et al., 2018; Wang et al., 2018) first introduced the self-attention mechanism to the computer vision field, and some early work (Ramachandran et al., 2019; Cordonnier et al., 2019) applied self-attention to computer vision tasks. Dosovitskiy et al. (Dosovitskiy et al., 2021) proposed the transformer-based backbone architecture called the vision transformer (ViT). With this new paradigm of representation learning, ViTs achieve performance on par with or better than CNNs on image classification, object detection, and semantic segmentation. Since the emergence of vision transformers, plenty of work (Touvron et al., 2021a; 2022) has been done in this field, and subsequent work aims to improve ViTs in different aspects. DeiT (Touvron et al., 2021a; 2022) proposes a new training recipe to reduce ViTs' high demand for very large datasets. With the techniques provided by DeiT (Touvron et al., 2021a; 2022), ViTs can be pretrained from scratch on smaller datasets such as ImageNet-1K (Deng et al., 2009) instead of ImageNet-21K (Deng et al., 2009) and JFT-300M (Sun et al., 2017).
Besides, ViTs also borrow ideas from modern CNN architectures (Wang et al., 2023c; He et al., 2016; Howard et al., 2017; Sandler et al., 2018; Tan & Le, 2019; 2021; Huang et al., 2017; Liu et al., 2022b; Rao et al.; Wang et al., 2022a; Dai et al., 2021) to improve their representation learning ability and develop hierarchical pyramid structures to handle multi-scale feature maps. Pyramid-structured ViTs usually have four stages; in each stage, the size of the feature maps is half of that in the previous stage while the dimension is doubled. Another line of work (Wu et al., 2021; Guo et al., 2022; Xiao et al., 2021; Tu et al., 2022; Yuan et al., 2021; Srinivas et al., 2021; Chen et al., 2022; Mehta & Rastegari; Peng et al., 2021) incorporates the convolution operation into the architecture of vision transformers at different locations. The performance of these hybrid vision transformers is further improved by fusing the local information learned by CNNs and the global dependency information obtained by self-attention. To mitigate the computational cost of global self-attention, which is quadratic in the size of the input features, some work (Tian et al., 2023; Lee et al., 2022; Chen et al., 2021) learns contextual information from multi-scale patch embeddings. An extensive body of work (Hatamizadeh et al., 2023; Chen et al., 2023b; Hassani et al., 2023; Dong et al., 2022; Liu et al., 2022a; 2021; Xia et al., 2022; Wang et al., 2020; Hassani et al., 2022; Han et al., 2021; Huang et al., 2019; Ren et al., 2022) proposes different local window mechanisms to reduce the computational cost. The self-attention is performed within local windows, and the connection between different local windows is achieved by techniques such as the shifted window (Liu et al., 2021) or the cross-shaped window (Dong et al., 2022). Another line of work Shi & Li (2022); Zhou et al. (2022a); Mao et al.
(2022) suggested that vision transformers exhibit stronger robustness to adversarial attacks than CNNs. In our paper, we propose a new local window mechanism called the dual window, with which the connection between local windows can be achieved in a simple way.

Figure 2: The illustration of our proposed dual-windowed angular vision transformer (DWAViT). Similar to previous work, our backbone adopts the hierarchical pyramid structure. The core module in our backbone is the dual-windowed angular multi-head self-attention (DWA MSA), which jointly combines the dual window mechanism and angular self-attention. In each block, the feature maps are divided into even/odd numbers of local windows. Besides, the depthwise convolution provides the conditional positional embedding.

Self-attention. Apart from the traditional scaled dot-product self-attention, different forms of self-attention mechanisms have been proposed. Early work (Wang et al., 2018; Zhao et al., 2020) explored the general form of the function in self-attention and proposed several operations such as dot-product and concatenation. XCiT (Ali et al., 2021) proposed cross-covariance attention (XCA), in which the attention is performed over channels instead of tokens. MaxViT (Tu et al., 2022) and DaViT (Ding et al., 2022) proposed grid attention and channel group attention, respectively. These attentions are also performed on the channel dimension rather than the spatial dimension. To reduce the computational cost, efficient self-attention (Han et al., 2023; Dao et al., 2022; Sun et al., 2023; Katharopoulos et al., 2020) has been proposed to approximate the traditional softmax self-attention. The linear transformer (Katharopoulos et al., 2020) suggests that the softmax function can be removed and the similarity of tokens can be obtained by a pure dot product of query and key. RFA (Peng et al.) and Performer (Choromanski et al.) approximate softmax attention with positive random features.
CosFormer (Qin et al.) proposed cos-based re-weighting self-attention, in which the attention score is calculated by a weighted dot-product of query and key. In SOFT (Lu et al., 2021), the dot-product similarity is replaced by a Gaussian kernel function. In our paper, we also propose a new self-attention mechanism called angular self-attention, in which the similarity of tokens is calculated from an angular function.

## 3 Methodology

## 3.1 Dual Window

As shown in Fig. 1, the feature map is partitioned into an even number of local windows at layer t and an odd number of local windows at layer t + 1, alternately. The feature map is padded if necessary. Suppose the original size of the feature map is $h \times w$; after padding, the size of the padded feature map is $h' \times w'$. The number of local windows is $N_{\mathrm{even}} = n_{\mathrm{even}}^2$ ($N_{\mathrm{odd}} = n_{\mathrm{odd}}^2$), where $n_{\mathrm{even}}$ ($n_{\mathrm{odd}}$) is the number of local windows per side. Thus, the size of a local window is $\frac{h'}{n_{\mathrm{even}}} \times \frac{w'}{n_{\mathrm{even}}}$ ($\frac{h'}{n_{\mathrm{odd}}} \times \frac{w'}{n_{\mathrm{odd}}}$). Compared to the Swin Transformer (Liu et al., 2021), which bridges the connection between different local windows with complicated techniques such as the cyclically shifted local window, we solve this problem in a simple way. Notice that tokens that lie on the border of one local window reside in the interior of a local window in the following layer. Therefore, the tokens on the border of a local window at one layer can participate in the self-attention calculation with tokens from other local windows in the next layer. This dynamic interaction of the local windows facilitates the propagation of information between local windows. The actual receptive field is larger than the size of the local window, and the ability to model long-range relationships of tokens is also enhanced.

## 3.2 Angular Self-Attention

Self-attention can be regarded as a weighted combination of the input sequence, where the weights are determined by the similarities between tokens of the input sequence.
We use $O_i \in \mathbb{R}^d$ to denote the generated embedding of token i from self-attention. Then the general form of self-attention can be written as:

$$O_{i}=\sum_{j}\frac{S(Q_{i},K_{j})}{\sum_{j'}S(Q_{i},K_{j'})}V_{j},\tag{1}$$

where $S(\cdot)$ represents the similarity between Q and K, and it has many forms according to previous work (Wang et al., 2018; Zhao et al., 2020). If $S(Q_i, K_j) = \exp(Q_i \cdot K_j / \sqrt{d_k})$, Eq. 1 becomes the scaled dot-product attention commonly seen in vision transformers. The formulation of scaled dot-product self-attention in vision transformers is $\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}(\frac{QK^{T}}{\sqrt{d_k}})V$. In dot-product self-attention, the attention weight is generated from the scaled dot-product between Q and K. The dot-product of $Q_i$ and $K_j$ can be expanded as $Q_i \cdot K_j = \|Q_i\|\|K_j\|\cos\theta$. This indicates that the similarity depends on the L2 norms of Q and K as well as their angle θ. In our paper, we propose angular self-attention, in which we use an angular function $s(\theta)$ to replace the conventional scaled dot-product operation. Then the self-attention can be reformulated as:

$$O_{i}=\sum_{j}\frac{\exp(s(\theta_{ij})/\tau)}{\sum_{j'}\exp(s(\theta_{ij'})/\tau)}V_{j},\tag{2}$$

where $\theta_{ij} = \arccos(\hat{Q}_i \cdot \hat{K}_j)$, and $\hat{Q}$ and $\hat{K}$ are the L2-normalized query and key, respectively. τ is the temperature hyper-parameter that regulates the attention weight of each token. When Q and K are normalized, they lie on the surface of the unit sphere, and the attention weight obtained from our angular self-attention depends solely on the angle θ. Through training, the angles θ between different Q and K are adjusted to model the relationships of different tokens and give the vision transformer strong representation ability. We propose two alternative functions for $s(\theta)$ in Eq. 2: the cosine function $s(\theta) = \cos(\theta)$ and the quadratic function $s(\theta) = 1 - \frac{4\theta^{2}}{\pi^{2}}$.
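To make the mechanism concrete, here is a minimal NumPy sketch of single-head angular self-attention following Eq. 2; the function name, the toy shapes, and the choice τ = 0.1 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def angular_attention(Q, K, V, s="quadratic", tau=0.1):
    """Sketch of angular self-attention (Eq. 2): L2-normalize Q and K so
    the similarity depends only on the angle theta between query-key pairs."""
    Q_hat = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    K_hat = K / np.linalg.norm(K, axis=-1, keepdims=True)
    cos_theta = np.clip(Q_hat @ K_hat.T, -1.0, 1.0)
    if s == "cosine":
        sim = cos_theta                           # s(theta) = cos(theta)
    else:
        theta = np.arccos(cos_theta)
        sim = 1.0 - 4.0 * theta**2 / np.pi**2     # s(theta) = 1 - 4*theta^2/pi^2
    weights = np.exp(sim / tau)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 5, 8))  # 5 tokens, head dimension 8
out = angular_attention(Q, K, V)      # shape (5, 8)
```

Because Q and K are normalized, rescaling the query (e.g., `angular_attention(3 * Q, K, V)`) leaves the output unchanged, which is exactly the norm-invariance the angular formulation is designed to provide.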
In angular self-attention, the similarity of Q and K depends solely on their angle. The matrix form of angular self-attention can be formulated as:

$$\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}\Big(\frac{\hat{Q}\hat{K}^{T}}{\tau}\Big)V,\qquad\mathrm{Attention}(Q,K,V)=\mathrm{Softmax}\Big(\frac{1-4\Theta^{2}/\pi^{2}}{\tau}\Big)V,\tag{3}$$

where $\Theta = \arccos(\hat{Q}\hat{K}^{T})$, and $\hat{Q}$ and $\hat{K}$ are the L2-normalized query and key, respectively. To simplify the computation of Θ, a linear function $\Theta = -(\frac{\pi}{2} - \tau)\hat{Q}\hat{K}^{T} + \frac{\pi}{2}$ is proposed to approximate the arccos function, where τ is the hyper-parameter that determines the closeness between the linear function and the arccos function.

The intuition behind our angular self-attention is straightforward. In scaled dot-product attention, the similarity score depends on both the norms of the tokens and the angle between them. Our intuition is that the norms of the tokens may have a negative impact on the interaction of the query and key. Thus, in our angular self-attention, we design two angular functions to model the similarity of the tokens and reduce the noisy impact of the norms of the query and key. Extensive experiments show that our angular self-attention successfully models the relationships of the tokens and achieves promising results on different tasks such as classification, object detection, and segmentation.

Table 1: The details of the DWAViT variants.

| Models | #Dim | #Blocks | #Heads | #Param (M) | #FLOPs (G) |
|--------------|------------------|------------|-----------|------------|------------|
| DWAViT-Tiny | [64,128,256,512] | [2,4,18,2] | [1,2,4,8] | 22.7 | 4.18 |
| DWAViT-Small | [80,160,320,640] | [3,6,21,3] | [1,2,4,8] | 44.6 | 8.22 |
| DWAViT-Base | [96,192,384,768] | [4,8,24,4] | [1,2,4,8] | 77.4 | 14.27 |

The cosine and quadratic functions have common mathematical properties.
Both functions are decreasing for $\theta \in [0, \pi]$, which means that tokens with larger angles have weaker relationships. Specifically, when $\theta \in [0, \pi/2]$, $\cos\theta \approx 1 - \frac{\theta^{2}}{2} \approx 1 - \frac{4\theta^{2}}{\pi^{2}}$; when $\theta \in (\pi/2, \pi]$, $1 - \frac{4\theta^{2}}{\pi^{2}} < \cos\theta < 0$, which means that tokens with angles larger than π/2 have weaker relationships under the quadratic function than under the cosine function. Our experiments suggest that on most tasks the performance of the quadratic and cosine functions is comparable, and the difference is very slight.

## 3.3 Overall Architecture

We replace the traditional scaled dot-product self-attention with our angular self-attention and integrate the dual window mechanism to build our dual-windowed angular vision transformer (DWAViT). The overall architecture of DWAViT is illustrated in Fig. 2. Similar to previous work (Wang et al., 2021; 2022b; Ding et al., 2022; Fan et al., 2021; Li et al., 2022; Liu et al., 2021; 2022a), DWAViT adopts a hierarchical pyramid structure that takes advantage of the multi-scale resolution of feature maps for dense prediction tasks. The size of the input image is $H \times W \times 3$. Instead of adopting a convolutional layer with a large kernel, we follow Xiao et al. (2021) and leverage two stacked convolutional layers as the stem to generate the patch embedding. For each convolutional layer, the kernel size is 3 × 3 and the stride is 2 × 2. The size of the output from the stem is $\frac{H}{4} \times \frac{W}{4} \times C$. DWAViT consists of four stages; in each stage, the size of the feature maps is halved and the dimension is doubled compared to the previous stage. Between two adjacent stages, we adopt a convolutional layer with a kernel size of 2 × 2 and a stride of 2 to downsample the feature maps.
Each stage consists of multiple blocks, which include a depthwise convolution (Chollet, 2017) that generates the conditional positional embedding (CPE) (Chu et al., 2021b), the dual-windowed angular multi-head self-attention (DWA MSA), and a feed-forward network (FFN). Compared to the absolute positional embedding (APE) (Vaswani et al., 2017), which can only provide positional information for a fixed sequence length, the CPE provides flexible positional information adaptive to the varying input sequence lengths that are often seen in downstream tasks. The relative positional embedding (RPE) (Liu et al., 2021; Shaw et al., 2018) provides relative positional information within a window. However, since the size of the window is different in each stage, RPE is not adopted in our DWAViT.

Figure 3: The illustration of the pipeline in the dual-windowed angular multi-head self-attention (DWA MSA). The Q, K, and V are localized by dividing them into a number of local windows. The scaled dot-product operation is replaced by the temperature-scaled angular function in the calculation of the attention matrix.

The dual-windowed angular multi-head self-attention (DWA MSA) serves as the core function of our backbone. It jointly combines the dual window mechanism and angular self-attention. The details of DWA MSA are illustrated in Fig. 3. Suppose the input feature is $X \in \mathbb{R}^{h \times w \times D}$, $N = N_{\mathrm{even}}$ or $N_{\mathrm{odd}}$ is the number of local windows, and $n = \sqrt{N}$ is the number of local windows per side. After linear projection, we obtain the query, key, and value $Q, K, V \in \mathbb{R}^{h \times w \times D}$. Instead of splitting X into smaller local windows, we split Q and K into N local windows. The size of a local window is $\frac{h'}{n} \times \frac{w'}{n}$, where $h'$ and $w'$ are the height and width of the padded feature maps. In each stage, the partitions with even/odd numbers of windows take turns, and the values of $N_{\mathrm{even}}$ ($N_{\mathrm{odd}}$) vary according to the size of the feature maps.
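As a sanity check on the alternating partition, the following sketch (a hypothetical helper, assuming a square feature map that divides evenly and ignoring padding) shows how a token on a window border at layer t ends up in the interior of a window at layer t + 1:

```python
import numpy as np

def window_index(h, w, n):
    """Assign each token (i, j) of an h x w map to one of n x n equally
    sized local windows; returns an h x w array of window ids."""
    rows = np.arange(h) * n // h          # window row of each token row
    cols = np.arange(w) * n // w          # window column of each token column
    return rows[:, None] * n + cols[None, :]

h = w = 12
even_ids = window_index(h, w, 2)  # layer t:   2 x 2 windows (N_even = 4)
odd_ids = window_index(h, w, 3)   # layer t+1: 3 x 3 windows (N_odd = 9)

# Rows 5 and 6 sit in different 2 x 2 windows at layer t (a border), but in
# the same 3 x 3 window at layer t+1, so they attend to each other there.
print(even_ids[5, 5], even_ids[6, 5])  # different window ids
print(odd_ids[5, 5], odd_ids[6, 5])    # identical window ids
```

This is the overlap effect described above: the border tokens of one layer's partition participate in the self-attention of a different window in the next layer, bridging the local windows without any explicit shifting.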
After the partition, for the k-th head, we obtain the localized query Q̃^k = {Q^k_1, Q^k_2, ..., Q^k_N} and key K̃^k = {K^k_1, K^k_2, ..., K^k_N}. The localized query and key are L2-normalized, and the angle matrix Θ is calculated from the normalized query and key. Next, the new embedding is calculated from the localized query, key, and value by Eq. 3. The new feature map of each local window is obtained by concatenating the embeddings from each head, X̃_i = concat(X̃^1_i, X̃^2_i, ..., X̃^K_i), where K is the number of heads. We concatenate the feature maps of all local windows to form the complete feature map X̃ = concat(X̃_1, X̃_2, ..., X̃_N). The size of X̃ is larger than that of the original feature map. To restore it, the final feature map is obtained from X̃ with a slice operation:

$$X=\tilde{X}[\mathrm{top}:h^{\prime}-\mathrm{bottom},\ \mathrm{left}:w^{\prime}-\mathrm{right}],\tag{4}$$

where top, bottom, left, and right are the sizes of the padding on the top, bottom, left, and right of the feature maps, respectively. With all the aforementioned components, the pipeline of a block in our DWAViT can be formulated as:

$$\begin{array}{l}{{Z^{\ell}=\mathrm{DWConv}(X^{\ell-1})+X^{\ell-1},}}\\ {{\bar{X}^{\ell}=\mathrm{DWA\text{-}attention}(\mathrm{LN}(Z^{\ell}))+Z^{\ell},}}\\ {{X^{\ell}=\mathrm{FFN}(\mathrm{LN}(\bar{X}^{\ell}))+\bar{X}^{\ell},}}\end{array}\tag{5}$$

where X^{ℓ−1} denotes the output feature of the (ℓ−1)-th block in the backbone. In our DWAViT, there are strong connections between the two key components: angular self-attention and the dual local window mechanism. On one hand, the dual window mechanism confines the self-attention operation to localized areas. On the other hand, our empirical study suggests that our angular self-attention achieves better performance with our dual local window technique than with previous local window techniques (e.g., the Swin Transformer (Liu et al., 2021)) for two reasons.
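The per-window computation can be sketched in pure Python, assuming the attention weights are a softmax over the temperature-scaled quadratic scores s(θ)/τ (the exact form is given by Eq. 3 in the paper; the function name is ours):

```python
import math

def angular_attention(Q, K, V, tau=0.4):
    # Quadratic angular self-attention for one local window: L2-normalize the
    # queries and keys, take angles via arccos, score with
    # s(theta) = 1 - 4*theta^2/pi^2 scaled by the temperature tau,
    # apply a softmax, and mix the value vectors.
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v)) or 1.0
        return [c / n for c in v]
    Qn = [normalize(q) for q in Q]
    Kn = [normalize(k) for k in K]
    out = []
    for q in Qn:
        thetas = [math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(q, k)))))
                  for k in Kn]
        scores = [(1.0 - 4.0 * t ** 2 / math.pi ** 2) / tau for t in thetas]
        m = max(scores)  # subtract max for numerical stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        out.append([sum(wi * v[d] for wi, v in zip(w, V)) for d in range(len(V[0]))])
    return out

# A query aligned with the first key pulls its output toward the first value.
out = angular_attention([[1.0, 0.0]], [[1.0, 0.0], [-1.0, 0.0]], [[2.0, 0.0], [0.0, 2.0]])
assert out[0][0] > 1.9 and out[0][1] < 0.1
```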
First, the partition of the dual local window is implemented on the queries and keys, not on the feature maps directly, which is helpful for the direct interaction of the tokens. Second, the size of our local window is flexible, so we can choose the optimal window size for tasks with different input resolutions such as object detection and segmentation. To wrap up, the two proposed techniques have strong mutual connections and are two indispensable components.

## 3.4 Architecture Variants

We build three variants of DWAViT with different numbers of parameters and FLOPs, namely DWAViT-Tiny, DWAViT-Small, and DWAViT-Base. For all variants, the number of local windows in the four stages is set to (100,49), (49,16), (4,1), and (1,1) for image classification, respectively. In stage one, the size of the local window is 6 × 6 and 8 × 8 in two consecutive blocks. For downstream tasks such as object detection and semantic segmentation, since the input images are larger, the number of local windows in DWAViT is also different. The details of the three DWAViT variants are given in Table 1.

## 3.5 Time Complexity Analysis

Suppose the original size of the feature map is h × w and the dimension is C. After padding, the size of the feature map becomes h′ × w′. The total number of local windows is N. The time complexity of the linear projection is 4hwC². Since the self-attention is performed on the padded feature maps, the time complexity of the self-attention calculation is 2(h′w′)²C/N. Thus, the total time complexity of our DWA MSA is:

$$\Omega(\mathrm{DWA~MSA})=4h w C^{2}+2(h^{\prime}w^{\prime})^{2}C/N.\tag{6}$$

As illustrated in Eq. 6, in order to reduce the computational cost, we should choose a large value of N while keeping h′ and w′ as close as possible to their original values. Note that although angular self-attention includes operations such as arccos and L2 normalization, they do not increase the time complexity.
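Eq. 6 can be evaluated directly to see how the window count trades off against padding; a small sketch (the helper name is ours):

```python
import math

def dwa_msa_flops(h, w, C, n):
    # Eq. 6: Omega(DWA MSA) = 4*h*w*C^2 + 2*(h'*w')^2 * C / N, where the
    # map is padded to h' x w' so that n divides both sides, and N = n^2.
    hp, wp = math.ceil(h / n) * n, math.ceil(w / n) * n
    N = n * n
    return 4 * h * w * C ** 2 + 2 * (hp * wp) ** 2 * C / N

# More (hence smaller) windows shrink the quadratic attention term sharply,
# as long as the padding keeps h' and w' close to h and w.
assert dwa_msa_flops(56, 56, 96, 10) < dwa_msa_flops(56, 56, 96, 7) < dwa_msa_flops(56, 56, 96, 1)
```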
However, the memory demand of angular self-attention is larger than that of traditional self-attention.

![7_image_0.png](7_image_0.png)

Figure 4: The comparison of our dual local window mechanism with prior work such as the shifted local window and the cross-shaped window.

## 3.6 Theoretical Analysis

As mentioned above, we propose quadratic and cosine functions to model the relationships of tokens. Compared to the scaled dot-product function, the difference of our method is that we map the Q and K features onto the unit sphere, so the relationship of tokens depends only on the angles between them. To better understand angular self-attention, it is essential to investigate the relationship of tokens in our method. Thus, we provide the following proposition to analyze this problem.

Proposition 1 *Suppose the angles between the query of a token and the keys of all tokens satisfy* θ_0 ≤ θ_1 ≤ · · · ≤ θ_{J−1}, where J is the total number of tokens. In angular self-attention, the embedding of the token satisfies O_i ∝ V_0 + ω_1V_1 + · · · + ω_{J−1}V_{J−1}*, where* ω_k = exp((s(θ_k) − s(θ_0))/τ) ∈ (0, 1), s(θ) = 1 − 4θ²/π² *for quadratic self-attention, and* s(θ) = cos θ *for cosine self-attention.*

The proposition states that the embedding of a token can be regarded as a combination of the value vectors with relative weights denoted by ω_k. The relative weight of the value vector with the smallest angle to the target query is normalized to 1, and the relative weight of a value vector is smaller if the corresponding angle is larger. The proposition thus shows that value vectors with smaller angles to the target query contribute more to the embedding of the target token. The proof can be found in the Appendix.

## 3.7 Comparison With Prior Work

The local window mechanism has been widely adopted in various vision transformers.
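The relative weights of Proposition 1 are easy to compute; a small sketch (the function name is ours):

```python
import math

def relative_weights(thetas, tau=0.4):
    # omega_k = exp((s(theta_k) - s(theta_0)) / tau) with the quadratic s.
    s = lambda t: 1.0 - 4.0 * t ** 2 / math.pi ** 2
    s0 = s(thetas[0])
    return [math.exp((s(t) - s0) / tau) for t in thetas]

# Sorted angles theta_0 <= ... <= theta_{J-1} give 1 = omega_0 > omega_1 > ...,
# so closer keys contribute more to the embedding of the target token.
w = relative_weights([0.2, 0.7, 1.3, 2.1])
assert w[0] == 1.0
assert all(0 < b < a for a, b in zip(w, w[1:]))
```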
In different vision transformers, the local window mechanism is characterized by its unique partition of the feature maps as well as the dynamic interaction of tokens from different local windows. Fig. 4 illustrates our dual local window as well as previous local window mechanisms. The Swin Transformer (Liu et al., 2021) first proposed the shifted local window to reduce the time complexity of the vision transformer. To tackle the lack of connection between local windows, techniques such as the cyclic shift are adopted to adjust the partition of the local windows so that connections can be bridged in alternating layers of the model. Another local window mechanism, the Cross-Shaped Window, is proposed by the CSWin Transformer (Dong et al., 2022). Different from the local window partition in the Swin Transformer (Liu et al., 2021), the feature map is split into vertical and horizontal strips in parallel, and the width of the strips determines the level of connection between tokens. Recently, Neighborhood Attention (Hassani et al., 2023) proposed an efficient and scalable sliding local window mechanism in which the attention operation is confined to a small sliding local window. Besides, MaxViT (Tu et al., 2022) proposed block attention and grid attention: block attention performs self-attention within a local window, while grid attention attends to tokens globally. Furthermore, DaViT (Ding et al., 2022) introduced channel group attention to remedy the lack of connection between spatial local windows. In our dual local window, the feature map is partitioned into local windows of different sizes at each layer. Compared to prior local window mechanisms such as MaxViT (Tu et al., 2022), which handles the connection between local windows with grid attention, the lack of connection between local windows is mitigated by the overlapping parts of the local windows in alternating layers of the model.
Besides, our dual local window enjoys three merits that are not possessed by prior work such as the Swin Transformer (Liu et al., 2021) and the CSWin Transformer (Dong et al., 2022). First, the partition of the local window is implemented on the queries and keys, not on the feature maps directly, which is helpful for the direct interaction of the tokens. Second, the size of our local window is flexible, and we can choose the optimal window size for tasks with different input resolutions such as segmentation. Last but not least, our dual local window is easy to implement.

The self-attention mechanism is the core of transformers, and previous work (Wang et al., 2018; Zhao et al., 2020) has shown that the similarity in the attention mechanism can be modeled by different functions; the scaled dot-product function is not the only choice to model the relationship of tokens. To alleviate the large computational complexity of scaled dot-product self-attention, XCiT (Ali et al., 2021) proposed cross-covariance attention, in which the attention scores are obtained from the cross-covariance matrix computed over the keys and queries. Besides, Qin et al. proposed a new linear transformer variant called Cosformer (Qin et al.), whose core is a linearized self-attention in which the similarity score is modeled by a linear kernel function (e.g., ReLU). Furthermore, Lu et al. (2021) proposed a softmax-free transformer (SOFT), in which a Gaussian kernel is adopted to model the interaction of the tokens and a novel low-rank matrix decomposition algorithm is proposed for approximation to reduce the computational overhead. In our angular self-attention, we suggest that the norm of the tokens plays an insignificant role in self-attention and that the relationship of the tokens can be determined solely by the angles between them.
As a consequence, we propose angular self-attention, in which the attention scores are obtained from our angular function and the impact of the token norms is diminished. In order to reduce the computational cost, a linear function is adopted to approximate the time-consuming operations in angular self-attention. Extensive experiments suggest that our angular self-attention can also successfully model the relationships of the tokens and serve as a competitor to scaled dot-product attention and linearized attention.

## 4 Experiments

In this section, we evaluate our proposed DWAViT on ImageNet-1K (Deng et al., 2009) classification, COCO (Lin et al., 2014) object detection, and ADE20K (Zhou et al., 2017) semantic segmentation. Besides, we conduct an ablation study to investigate the effectiveness of angular self-attention and compare its results against those of traditional scaled dot-product self-attention on the benchmarks.

## 4.1 Imagenet-1K Classification

In this experiment, we adopt the same training recipe as previous work (Touvron et al., 2021a; Li et al., 2022; Lee et al., 2022; Dong et al., 2022) for a fair comparison. The training strategies include repeated data augmentation and EMA (Polyak & Juditsky, 1992). The total number of training epochs is 300, with the first 20 epochs as warm-up. We adopt the AdamW (Kingma & Ba, 2014) algorithm to optimize the model. The initial learning rate is 1.2e-3 and the weight decay is 0.05. The learning rate is adjusted according to a cosine learning rate schedule. The drop path rate is 0.1 and the input images are resized to 224 × 224. The MLP ratio for all DWAViT variants is set to 4. The number of windows in each stage is (100,49), (49,16), (4,1), (1,1). The temperature in angular self-attention is 0.1 for DWAViT-T and DWAViT-S

Table 2: The image classification performance of DWAViT and baselines on ImageNet-1K. [cos] and [quad] denote cosine and quadratic function, respectively.
| Model | #Param(M) | #FLOPs(G) | #Inference(s) | CNN | ViT | Top-1 Acc |
|---|---|---|---|---|---|---|
| ResNet-50 (He et al., 2016) | 25.0 | 4.1 | 205.01 | ✓ | | 76.2 |
| DeiT-S (Touvron et al., 2021a) | 22.1 | 4.5 | 201.16 | | ✓ | 79.8 |
| PVT-S (Wang et al., 2021) | 24.5 | 3.8 | 217.51 | | ✓ | 79.8 |
| RegNetY-4G (Radosavovic et al., 2020) | 21.0 | 4.0 | 194.54 | ✓ | | 80.0 |
| CrossViT-S (Chen et al., 2021) | 26.7 | 5.6 | 208.11 | | ✓ | 81.0 |
| TNT-S (Han et al., 2021) | 23.8 | 5.2 | 213.60 | | ✓ | 81.3 |
| Swin-T (Liu et al., 2021) | 28.3 | 4.5 | 211.91 | | ✓ | 81.2 |
| CoAtNet-0 (Dai et al., 2021) | 25.0 | 4.0 | 214.64 | | ✓ | 81.6 |
| CvT-13 (Wu et al., 2021) | 20.0 | 4.5 | 213.26 | | ✓ | 81.6 |
| CaiT-XS-24 (Touvron et al., 2021b) | 26.6 | 5.4 | 213.11 | | ✓ | 81.8 |
| ViL-S (Zhang et al., 2021) | 24.6 | 5.1 | 202.83 | | ✓ | 82.0 |
| PVTv2-B2 (Wang et al., 2022b) | 25.4 | 4.0 | 218.47 | | ✓ | 82.0 |
| ConvNeXt-T (Liu et al., 2022b) | 29.0 | 5.0 | 203.89 | ✓ | | 82.1 |
| HiViT-T (Zhang et al., 2022b) | 19.0 | 4.6 | 213.71 | | ✓ | 82.1 |
| DaViT-T (Ding et al., 2022) | 28.3 | 4.5 | 218.67 | | ✓ | 82.8 |
| MViTv2-T (Li et al., 2022) | 24.0 | 4.7 | 217.27 | | ✓ | 82.3 |
| CSWin-T (Dong et al., 2022) | 23.0 | 4.3 | 210.46 | | ✓ | 82.7 |
| DWAViT-T[cos] (Ours) | 22.7 | 4.2 | 216.24 | | ✓ | 82.7 |
| DWAViT-T[quad] (Ours) | 22.7 | 4.2 | 216.90 | | ✓ | 82.8 |
| ResNet-101 (He et al., 2016) | 45.0 | 7.9 | 210.12 | ✓ | | 77.4 |
| PVT-M (Wang et al., 2021) | 44.2 | 6.7 | 222.54 | | ✓ | 81.2 |
| RegNetY-8G (Radosavovic et al., 2020) | 39.0 | 8.0 | 201.64 | ✓ | | 81.7 |
| Swin-S (Liu et al., 2021) | 49.6 | 8.7 | 216.56 | | ✓ | 83.1 |
| CoAtNet-1 (Dai et al., 2021) | 42.0 | 8.0 | 218.24 | | ✓ | 83.3 |
| CvT-21 (Wu et al., 2021) | 32.0 | 7.1 | 218.60 | | ✓ | 82.5 |
| ViL-M (Zhang et al., 2021) | 39.7 | 9.1 | 208.04 | | ✓ | 83.3 |
| PVTv2-B3 (Wang et al., 2022b) | 45.2 | 6.9 | 226.60 | | ✓ | 83.2 |
| ConvNeXt-S (Liu et al., 2022b) | 50.0 | 9.0 | 207.71 | ✓ | | 83.1 |
| Focal-S (Yang et al., 2021) | 51.1 | 9.1 | 230.70 | | ✓ | 83.5 |
| HiViT-S (Zhang et al., 2022b) | 38.0 | 9.1 | 221.46 | | ✓ | 83.5 |
| MViTv2-S (Li et al., 2022) | 35.0 | 7.0 | 224.71 | | ✓ | 83.6 |
| CSWin-S (Dong et al., 2022) | 35.0 | 6.9 | 213.72 | | ✓ | 83.6 |
| DWAViT-S[cos] (Ours) | 44.6 | 8.2 | 218.81 | | ✓ | 83.5 |
| DWAViT-S[quad] (Ours) | 44.6 | 8.2 | 219.38 | | ✓ | 83.6 |
| ResNet-152 (He et al., 2016) | 60.0 | 11.0 | 216.37 | ✓ | | 78.3 |
| PVT-L (Wang et al., 2021) | 61.4 | 9.8 | 227.47 | | ✓ | 81.7 |
| DeiT-B (Touvron et al., 2021a) | 86.7 | 17.4 | 206.01 | | ✓ | 81.8 |
| Swin-B (Liu et al., 2021) | 87.8 | 15.4 | 219.12 | | ✓ | 83.4 |
| ViL-B (Zhang et al., 2021) | 55.7 | 13.4 | 210.63 | | ✓ | 83.2 |
| Focal-B (Yang et al., 2021) | 89.8 | 16.0 | 235.18 | | ✓ | 83.8 |
| HiViT-B (Zhang et al., 2022b) | 66.0 | 15.9 | 225.62 | | ✓ | 83.8 |
| DWAViT-B[cos] (Ours) | 77.4 | 14.3 | 220.36 | | ✓ | 83.9 |
| DWAViT-B[quad] (Ours) | 77.4 | 14.3 | 223.21 | | ✓ | 83.8 |

Table 3: Object detection and instance segmentation performance of DWAViT and baselines on COCO with the Mask R-CNN framework (3× schedule). [cos] and [quad] denote cosine and quadratic function, respectively.
Backbone | #Param(M) | #FLOPs(G) | #Inference(s) | APb | APb 50 | APb 75 | AP m | AP m 50 | AP m 75 | |-------------------------------------|-------------|-------------|-----------------|-------|----------|----------|--------|-----------|-----------| | ResNet-50 (He et al., 2016) | 44 | 260 | 138.23 | 41.0 | 61.7 | 44.9 | 37.1 | 58.4 | 40.1 | | ConvNeXt-T (Liu et al., 2022b) | 48 | 262 | 163.04 | 46.2 | 67.9 | 50.8 | 41.7 | 65.0 | 44.9 | | PVT-S (Wang et al., 2021) | 44 | 245 | 177.20 | 43.0 | 65.3 | 46.9 | 39.9 | 62.5 | 42.8 | | ViL-S (Zhang et al., 2021) | 45 | 218 | - | 47.1 | 68.7 | 51.5 | 42.7 | 65.9 | 46.2 | | TwinsP-S (Chu et al., 2021a) | 44 | 245 | - | 46.8 | 69.3 | 51.8 | 42.6 | 66.3 | 46.0 | | Twins-S (Chu et al., 2021a) | 44 | 228 | - | 46.8 | 69.2 | 51.2 | 42.6 | 66.3 | 45.8 | | Swin-T (Liu et al., 2021) | 48 | 264 | 194.50 | 46.0 | 68.2 | 50.2 | 41.6 | 65.1 | 44.8 | | Focal-T (Yang et al., 2021) | 49 | 291 | - | 47.2 | 69.4 | 51.9 | 42.7 | 66.5 | 45.9 | | PVTv2-B2 (Wang et al., 2022b) | 45 | 309 | 210.43 | 47.8 | 69.7 | 52.6 | 43.1 | 66.8 | 46.7 | | XciT-S12/8 (Ali et al., 2021) | 43 | 550 | 233.54 | 47.0 | 68.9 | 51.7 | 42.3 | 66.0 | 45.4 | | DaViT-T (Ding et al., 2022) | 48 | 263 | 244.08 | 47.4 | 69.5 | 52.0 | 42.9 | 66.8 | 46.4 | | DAT-T (Xia et al., 2022) | 48 | 272 | - | 47.1 | 69.2 | 51.6 | 42.4 | 66.1 | 45.5 | | MViTv2-T (Li et al., 2022) | 44 | 279 | 206.42 | 48.2 | 70.9 | 53.3 | 43.8 | 67.9 | 47.2 | | NAT-T (Hassani et al., 2023) | 48 | 258 | 157.28 | 47.7 | 69.0 | 52.6 | 42.6 | 66.1 | 45.9 | | GC ViT-T (Hatamizadeh et al., 2023) | 48 | 291 | 240.19 | 47.9 | 70.1 | 52.8 | 43.2 | 67.0 | 46.7 | | DWAViT-T[cos] (Ours) | 42 | 255 | 223.46 | 48.4 | 70.4 | 53.1 | 43.5 | 67.7 | 47.1 | | DWAViT-T[quad] (Ours) | 42 | 255 | 235.47 | 48.8 | 70.7 | 53.6 | 43.8 | 68.1 | 47.1 | | Res101 (He et al., 2016) | 63 | 336 | 155.50 | 42.8 | 63.2 | 47.1 | 38.5 | 60.1 | 41.3 | | ConvNeXt-S (Liu et al., 2022b) | 70 | 348 | 225.28 | 47.9 | 70.0 | 52.7 | 42.9 | 66.9 | 
46.2 | | PVT-M (Wang et al., 2021) | 64 | 302 | 222.18 | 44.2 | 66.0 | 48.2 | 40.5 | 63.1 | 43.5 | | ViL-M (Zhang et al., 2021) | 60 | 261 | - | 44.6 | 66.3 | 48.5 | 40.7 | 63.8 | 43.7 | | TwinsP-B (Chu et al., 2021a) | 64 | 302 | - | 47.9 | 70.1 | 52.5 | 43.2 | 67.2 | 46.3 | | Twins-B (Chu et al., 2021a) | 76 | 340 | - | 48.0 | 69.5 | 52.7 | 43.0 | 66.8 | 46.6 | | Swin-S (Liu et al., 2021) | 69 | 354 | 222.31 | 48.5 | 70.2 | 53.5 | 43.3 | 67.3 | 46.6 | | Focal-S (Yang et al., 2021) | 71 | 401 | - | 48.8 | 70.5 | 53.6 | 43.8 | 67.7 | 47.2 | | PVTv2-B3 (Wang et al., 2022b) | 65 | 397 | 265.97 | 48.4 | 69.8 | 53.3 | 43.2 | 66.9 | 46.7 | | XCiT-M24/8 (Ali et al., 2021) | 99 | 1448 | 467.60 | 48.5 | 70.3 | 53.4 | 43.7 | 67.5 | 46.9 | | NAT-S (Hassani et al., 2023) | 70 | 330 | 204.44 | 48.4 | 69.8 | 53.2 | 43.2 | 66.9 | 46.5 | | DWAViT-S[cos] (Ours) | 64 | 338 | 268.51 | 48.4 | 70.0 | 53.3 | 43.6 | 67.4 | 47.0 | | DWAViT-S[quad] (Ours) | 64 | 338 | 282.52 | 49.1 | 70.8 | 53.5 | 44.0 | 68.2 | 47.4 | | X101-64 (Xie et al., 2017) | 101 | 493 | 207.68 | 44.4 | 64.9 | 48.8 | 39.7 | 61.9 | 42.6 | | PVT-L (Wang et al., 2021) | 81 | 364 | 286.52 | 44.5 | 66.0 | 48.3 | 40.7 | 63.4 | 43.7 | | Swin-B (Liu et al., 2021) | 107 | 496 | 393.74 | 48.5 | 69.8 | 53.2 | 43.4 | 66.8 | 46.9 | | DaViT-B (Ding et al., 2022) | 107 | 491 | 461.84 | 49.9 | 71.5 | 54.6 | 44.6 | 68.8 | 47.8 | | DWAViT-B[cos] (Ours) | 97 | 462 | 423.62 | 49.8 | 71.2 | 54.8 | 44.5 | 68.6 | 47.8 | | DWAViT-B[quad] (Ours) | 97 | 462 | 440.69 | 49.4 | 71.1 | 54.6 | 44.4 | 68.6 | 47.8 | and 0.25 for DWAViT-B, respectively. The linear function is adopted to simplify the computation of the quadratic self-attention and τ is set to 0.4. All the experiments are running on NVIDIA A100. To evaluate the time efficiency of our proposed model, the inference time is also reported. 
Since the models are trained on ImageNet-1K for many epochs (300), the variance of the results is diminished considerably and becomes insignificant compared to the main results; thus, only the main results are reported. The results are illustrated in Table 2. Our proposed DWAViT is compared against previous state-of-the-art vision transformers and CNNs, including CSWin (Dong et al., 2022), MViTv2 (Li et al., 2022), DaViT (Ding et al., 2022), and ConvNeXt (Liu et al., 2022b). Specifically, the Swin Transformer (Liu et al., 2021), Focal Transformer (Yang et al., 2021), DaViT (Ding et al., 2022), MViTv2 (Li et al., 2022), and CSWin Transformer (Dong et al., 2022) are the baselines for exact comparison. The experimental results show that, with a similar number of parameters, DWAViT-T outperforms the latest vision transformers and CNNs: the top-1 accuracy of DWAViT-T reaches 82.8%, which is 0.1% higher than that of CSWin-T (Dong et al., 2022). As for the small-sized models, our DWAViT-S achieves a top-1 accuracy of 83.6% on the classification task, which is on par with CSWin (Dong et al., 2022) and MViTv2 (Li et al., 2022). For the base-sized models, our DWAViT-B with cosine self-attention achieves 83.9% accuracy on the ImageNet-1K classification task.

Table 4: The semantic segmentation performance of DWAViT-T and baselines on ADE20K. The FLOPs are calculated with resolution 512 × 2048. [cos] and [quad] denote cosine and quadratic function, respectively.
| Backbone | #Param(M) | #FLOPs(G) | mIoU |
|---|---|---|---|
| Swin-T (Liu et al., 2021) | 59 | 945 | 44.5 |
| Focal-T (Yang et al., 2021) | 62 | 998 | 45.8 |
| XciT-S12/16 (Ali et al., 2021) | 54 | 966 | 45.9 |
| XciT-S12/8 (Ali et al., 2021) | 53 | 1237 | 46.6 |
| DaViT-T (Ding et al., 2022) | 60 | 940 | 46.3 |
| DAT-T (Xia et al., 2022) | 60 | 957 | 45.5 |
| GC ViT-T (Hatamizadeh et al., 2023) | 58 | 947 | 47.0 |
| NAT-T (Hassani et al., 2023) | 58 | 934 | 47.1 |
| DWAViT-T[cos] (Ours) | 52 | 930 | **47.5** |
| DWAViT-T[quad] (Ours) | 52 | 930 | 45.4 |

## 4.2 Coco Object Detection

Next, we evaluate our model on the COCO object detection task. The COCO dataset has 118K images for training and 5K images for validation. We adopt Mask R-CNN (He et al., 2017) and Cascade Mask R-CNN (Cai & Vasconcelos, 2018) as the frameworks, with our DWAViT serving as the backbone. For a fair comparison, we follow the same training recipe as previous work (Touvron et al., 2021a; Li et al., 2022; Lee et al., 2022; Dong et al., 2022) and perform the experiments with the MMDetection toolbox (Chen et al., 2019). In order to handle the high-resolution images, the number of local windows in the object detection task differs from that in image classification: the number of windows in each stage is (256,225), (64,49), (16,9), (4,1), respectively. In both frameworks, the size of the local window in each stage is half of that in the previous stage. We use the model pretrained on ImageNet-1K and fine-tune it on the COCO dataset with 1× and 3× schedules of 12 and 36 epochs, respectively. The object detection and instance segmentation results of our model and the baseline models with the Mask R-CNN (He et al., 2017) framework and the 3× schedule are illustrated in Table 3.
The baseline methods include the latest ViT models such as MViTv2 (Li et al., 2022), DAT (Xia et al., 2022), and DaViT (Ding et al., 2022). Specifically, the Swin Transformer (Liu et al., 2021), Focal Transformer (Yang et al., 2021), XciT (Ali et al., 2021), DAT (Xia et al., 2022), and CSWin (Dong et al., 2022) are the baselines for exact comparison. The experimental results show that DWAViT achieves on-par or better results compared with the latest baseline methods. For DWAViT-T, the AP^b reaches 48.8%, which is 0.4% higher than that of MViTv2-T (Li et al., 2022), and the AP^m of DWAViT-T is 43.8%, which is on par with that of MViTv2-T (Li et al., 2022). DWAViT-S achieves 49.1% AP^b and 44.4% AP^m, which outperforms all the baseline methods. The experimental results also suggest that our DWAViT-B outperforms Swin-B (Liu et al., 2021) on this task. More results can be found in the Appendix.

## 4.3 Ade20K Semantic Segmentation

In this section, we further investigate the performance of our proposed model on the semantic segmentation task. The UperNet (Xiao et al., 2018) framework is adopted. Our model and the baseline methods are evaluated on the ADE20K benchmark (Zhou et al., 2017). For a fair comparison, we follow the training procedure of previous works (Ding et al., 2022; Dong et al., 2022) and perform the experiments with the MMSegmentation toolbox (Contributors, 2020). The images are resized to 512 × 512 and the model is trained for 160K iterations. The mIoU is adopted as the metric. The results are illustrated in Table 4 and Table 15 (see Appendix). The Swin Transformer (Liu et al., 2021), Focal Transformer (Yang et al., 2021), XciT (Ali et al., 2021), DaViT (Ding et al., 2022), and DAT (Xia et al., 2022) are the baselines for exact comparison. Those methods are also implemented with the MMSegmentation toolbox (Contributors, 2020).
Besides, UperNet (Xiao et al., 2018) is adopted as the framework, and models pre-trained only on ImageNet-1K are used as the feature extractors for both the baselines and our method. The results show that our DWAViT with the cosine self-attention function outperforms the baselines; the performance of our model with the cosine function is also much better than that of our model with the quadratic function. The mIoU of DWAViT-T reaches 47.5%,

Table 5: The performance of our proposed DWAViT with angular self-attention, scaled dot-product self-attention, and linearized self-attention on ImageNet-1K classification. The resolution of the images is 224 × 224.

| Model | Param(M) | FLOPs(G) | Top-1 Acc |
|---|---|---|---|
| DWAViT-T[dot product] | 22.7 | 4.2 | 82.5 |
| DWAViT-T[linear] | 22.7 | 4.2 | 82.2 |
| DWAViT-T[cos] (Ours) | 22.7 | 4.2 | 82.7 |
| DWAViT-T[quad] (Ours) | 22.7 | 4.2 | **82.8** |

Table 6: The performance of our proposed DWAViT with the dual local window, window/grid attention, and the shifted local window on ImageNet-1K classification. We adopt the quadratic self-attention and the resolution of the images is 224 × 224.

| Backbone | Param(M) | FLOPs(G) | Top-1 Acc |
|---|---|---|---|
| DWAViT-T[shifted window] | 22.7 | 4.2 | 82.4 |
| DWAViT-T[window/grid] | 22.7 | 4.2 | 82.5 |
| DWAViT-T[dual window] (Ours) | 22.7 | 4.2 | **82.8** |

Table 7: Object detection and instance segmentation performance of our DWAViT with angular self-attention, scaled dot-product self-attention, and linearized self-attention in the Mask R-CNN framework. The model is trained with a 3× schedule. The FLOPs are measured at resolution 800 × 1280.
| Backbone | #Param(M) | #FLOPs(G) | AP^b | AP^b_50 | AP^b_75 | AP^m | AP^m_50 | AP^m_75 |
|---|---|---|---|---|---|---|---|---|
| DWAViT-T[dot product] | 42 | 255 | 48.4 | 70.3 | 53.2 | 43.5 | 67.5 | **47.1** |
| DWAViT-T[linear] | 42 | 255 | 48.0 | 70.0 | 52.8 | 43.2 | 67.0 | 46.5 |
| DWAViT-T[cos] (Ours) | 42 | 255 | 48.4 | 70.4 | 53.1 | 43.5 | 67.7 | **47.1** |
| DWAViT-T[quad] (Ours) | 42 | 255 | 48.8 | 70.7 | 53.6 | **43.8** | **68.1** | **47.1** |

Table 8: Object detection and instance segmentation performance of our DWAViT with the dual local window, the shifted local window, and window/grid attention in the Mask R-CNN framework. The model is trained with a 3× schedule. We adopt the quadratic self-attention, and the FLOPs are measured at resolution 800 × 1280.

| Backbone | #Param(M) | #FLOPs(G) | AP^b | AP^b_50 | AP^b_75 | AP^m | AP^m_50 | AP^m_75 |
|---|---|---|---|---|---|---|---|---|
| DWAViT-T[shifted window] | 42 | 259 | 47.5 | 69.1 | 52.3 | 42.5 | 66.3 | 45.9 |
| DWAViT-T[window/grid] | 42 | 259 | 47.8 | 70.2 | 52.7 | 42.9 | 66.7 | 46.3 |
| DWAViT-T[dual window] (Ours) | 42 | 255 | 48.8 | 70.7 | 53.6 | **43.8** | **68.1** | **47.1** |

Table 9: The ImageNet-1K classification performance of existing vision transformers with scaled dot-product self-attention and with our angular self-attention.

| Model | Param(M) | FLOPs(G) | Top-1 Acc |
|----|---|---|---|
| Swin-T (Liu et al., 2021) [dot product] | 28 | 4.3 | 81.2 |
| Swin-T (Liu et al., 2021) [cos] (Ours) | 28 | 4.3 | 81.4 |
| Swin-T (Liu et al., 2021) [quad] (Ours) | 28 | 4.3 | 81.3 |
| DeiT-S (Touvron et al., 2021a) [dot product] | 22 | 4.6 | 79.8 |
| DeiT-S (Touvron et al., 2021a) [cos] (Ours) | 22 | 4.6 | 80.0 |
| DeiT-S (Touvron et al., 2021a) [quad] (Ours) | 22 | 4.6 | 80.2 |
| CSwin-T (Dong et al., 2022)[dot product] | 23 | 4.3 | 82.7 |
| CSwin-T (Dong et al., 2022)[cos] (Ours) | 23 | 4.3 | 82.9 |
| CSwin-T (Dong et al., 2022)[quad] (Ours) | 23 | 4.3 | 83.0 |
| MaxViT-T (Tu et al., 2022)[dot product] | 31 | 5.6 | 83.6 |
| MaxViT-T (Tu et al., 2022)[cos] (Ours) | 31 | 5.6 | 83.8 |
| MaxViT-T (Tu et al., 2022)[quad] (Ours) | 31 | 5.6 | 83.8 |

which outperforms other baseline methods such as DAT-T (Xia et al., 2022), DaViT-T (Ding et al., 2022), and XciT-S (Ali et al., 2021).
![13_image_0.png](13_image_0.png)

Figure 5: The performance of our model with different values of λ on the classification, object detection, and segmentation tasks.

Table 10: The running time of our model and the Swin Transformer on the image classification task. The average time per iteration is reported.

| Model | #Param(M) | #FLOPs(G) | Inference(×10⁻² s) | Training(s) |
|---|---|---|---|---|
| Swin-T (Liu et al., 2021) | 28.3 | 4.5 | 4.6 | 0.15 |
| DWAViT-T[cos] (Ours) | 22.7 | 4.2 | 3.6 | 0.20 |
| DWAViT-T[quad] (Ours) | 22.7 | 4.2 | 4.0 | 0.20 |
| Swin-S (Liu et al., 2021) | 49.6 | 8.7 | 7.1 | 0.23 |
| DWAViT-S[cos] (Ours) | 44.6 | 8.2 | 6.9 | 0.29 |
| DWAViT-S[quad] (Ours) | 44.6 | 8.2 | 7.4 | 0.31 |
| Swin-B (Liu et al., 2021) | 87.8 | 15.4 | 9.2 | 0.30 |
| DWAViT-B[cos] (Ours) | 77.4 | 14.3 | 9.6 | 0.40 |
| DWAViT-B[quad] (Ours) | 77.4 | 14.3 | 10.3 | 0.42 |

## 4.4 Running Time Analysis

In this section, we quantitatively evaluate the actual running time of our model in both the training and inference stages on the image classification task. Specifically, we fix the batch size to 100 and compare the running time of our model with that of the Swin Transformer (Liu et al., 2021). All experiments are implemented on a single A100 GPU, and we report the average time per iteration in Table 10. The experimental results suggest that our model achieves an inference time comparable to that of the Swin Transformer (Liu et al., 2021). Specifically, compared to Swin-T (Liu et al., 2021), our DWAViT-T[cos] achieves 1.5% higher accuracy on image classification with a lower time cost during the inference stage. Our DWAViT-B[cos] achieves an improvement of 0.5% in accuracy on the classification task with almost the same inference time as Swin-B (Liu et al., 2021). The time cost of our model with quadratic self-attention is slightly higher than that of our model with cosine self-attention.
Besides, the training time of our model is slightly higher than that of the Swin Transformer (Liu et al., 2021) due to the new formulation of the self-attention. However, the actual running time does not blow up when the size of our model scales up.

## 4.5 Ablation Study

In the ablation study, we first compare the performance of our model with the traditional scaled dot-product self-attention against that with our proposed angular self-attention. In the experiment, we replace the angular self-attention with traditional scaled dot-product self-attention or linearized self-attention and evaluate the performance of the model on ImageNet-1K classification, COCO object detection, and ADE20K semantic segmentation. We adopt DWAViT-T as the backbone in all three tasks. The results for image classification and object detection are illustrated in Table 5 and Table 7, respectively. More results on the other tasks can be found in the Appendix. When our model adopts the traditional scaled dot-product self-attention, the top-1 accuracy is 0.2% and 0.3% lower than that of the cosine function and quadratic function, respectively. On the object detection task, our angular self-attention also achieves better performance than the baseline methods. We further investigate our angular self-attention by replacing the scaled dot-product self-attention in other vision transformer models. Table 9 shows the performance of Swin-T (Liu et al., 2021), DeiT-S (Touvron et al., 2021a), CSWin-T (Dong et al., 2022), and MaxViT-T (Tu et al., 2022) on ImageNet-1K image classification with scaled dot-product self-attention and with our proposed angular self-attention. The experimental results suggest that when existing vision transformer models are equipped with our angular self-attention, their performance on image classification improves. This further validates that our angular self-attention is a competitive alternative to scaled dot-product self-attention.
We also compare the performance of our model with the proposed dual local window against the shifted window proposed by the Swin Transformer (Liu et al., 2021) and the window/grid attention from MaxViT (Tu et al., 2022). In the experiment, we replace the dual local window with the shifted local window or window/grid attention and evaluate the performance of the model on ImageNet-1K classification, COCO object detection, and ADE20K semantic segmentation. We adopt DWAViT-T[quad] as the backbone in all three tasks. The results for image classification and object detection are illustrated in Table 6 and Table 8. More results on the other tasks can be found in the Appendix. When our model is equipped with the shifted local window, the top-1 accuracy is 0.4% lower than that of the model with our proposed dual local window. On the object detection task, our dual local window also achieves better performance than the baseline methods, which suggests that it is a strong competitor to the shifted local window and window/grid attention.

At last, we investigate the effect of the hyper-parameter on the performance of our model. The temperature scaling coefficient λ is the primary hyper-parameter in our angular self-attention, and it determines the discriminativeness of the similarity scores of the tokens. We adopt DWAViT-T[quad] as the target model and choose 0.1, 0.5, and 1.0 as the values of λ. The performance of the model is evaluated on image classification, object detection, and semantic segmentation, and the results are shown in Fig. 5. The results suggest that the performance of our model is reduced with larger values of λ; compared to the classification task, the performance on the other tasks is more sensitive to the change of λ. As the value of λ becomes larger, the similarity scores of the tokens become less discriminative, and the ability of our angular self-attention to model the interaction of tokens is undermined.
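The effect of the temperature can be reproduced in isolation; a minimal sketch, assuming the attention weights are a softmax over the temperature-scaled quadratic scores (the angles below are arbitrary example values):

```python
import math

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return [x / z for x in e]

def attention_weights(thetas, lam):
    # Temperature-scaled quadratic scores s(theta) / lambda, then softmax.
    return softmax([(1.0 - 4.0 * t ** 2 / math.pi ** 2) / lam for t in thetas])

thetas = [0.3, 1.0, 2.0]          # angles to three keys
sharp = attention_weights(thetas, 0.1)
flat = attention_weights(thetas, 1.0)
# A smaller temperature concentrates the weights on the closest key,
# i.e., the similarity scores are more discriminative.
assert sharp[0] > flat[0]
assert sharp[0] / sharp[-1] > flat[0] / flat[-1]
```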
As a consequence, the performance of the model drops, especially on the object detection and segmentation tasks.

## 4.6 Statistical Significance

To validate the statistical significance of our results on the various tasks (classification, object detection, and segmentation), we perform a significance test on our proposed DWAViT model against one important baseline model (e.g., Swin Transformer (Liu et al., 2021)). In image classification, Swin-T (Liu et al., 2021) is adopted in our test; its average accuracy is 81.20%, as reported in the original paper. To demonstrate that the performance improvement of our proposed model over Swin Transformer is statistically significant, we first choose DWAViT-T as the target model, since its size is similar to that of Swin-T. Our null hypothesis is acc(DWAViT-T) ≤ acc(Swin-T) and the alternative hypothesis is acc(DWAViT-T) > acc(Swin-T). Next, the evaluation of our model on image classification is performed multiple times (e.g., 20) under different settings. Finally, the t-score and p-value are calculated according to the one-tailed t-test, and the result is shown in Table 11. The p-value is 7.65e-50, which is significantly smaller than the threshold 0.05. For the object detection and segmentation tasks, we follow the same routine, and the results are also shown in Table 11. The results reject the null hypothesis and indicate that the improvements of our proposed model are statistically significant.

Table 11: The results of the significance test on the classification, object detection, and segmentation tasks.

| Task             | t-score | p-value  |
|------------------|---------|----------|
| Classification   | 1477.27 | 7.65e-50 |
| Object detection | 245.76  | 4.81e-35 |
| Segmentation     | 369.16  | 2.11e-38 |
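The one-tailed, one-sample t-test used in these comparisons can be sketched as below. The accuracy values for the 20 runs are hypothetical placeholders, not our measured results; the baseline mean 81.20% is Swin-T's reported top-1 accuracy:

```python
import math

def one_sample_t_score(samples, baseline_mean):
    """t-score for H0: mean(samples) <= baseline_mean (one-tailed test)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return (mean - baseline_mean) / math.sqrt(var / n)

# Hypothetical top-1 accuracies from 20 evaluation runs under different settings.
runs = [81.82, 81.79, 81.85, 81.80, 81.78, 81.83, 81.81, 81.84, 81.77, 81.80,
        81.82, 81.79, 81.83, 81.81, 81.78, 81.85, 81.80, 81.82, 81.79, 81.84]
t = one_sample_t_score(runs, 81.20)
# For df = 19, the one-tailed critical value at alpha = 0.05 is roughly 1.729;
# a t-score far above it rejects H0 in favor of acc(model) > acc(baseline).
print(t > 1.729)
```

In practice, the exact p-value is obtained from the Student-t survival function, e.g., via `scipy.stats.ttest_1samp` with `alternative='greater'`.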
When the scaled dot-product self-attention in CSwin (Dong et al., 2022) and MaxViT (Tu et al., 2022) is replaced by our angular self-attention, there is a slight performance improvement in image classification, as illustrated in Table 9. To further validate the statistical significance of this improvement, we perform a significance test on CSwin (Dong et al., 2022) and MaxViT (Tu et al., 2022) with our angular self-attention, respectively. In the test, CSwin-T (Dong et al., 2022) is chosen as the target model, and the null hypothesis is that the performance of CSwin with our angular self-attention (quad) is no better than that of CSwin with scaled dot-product self-attention, namely acc(CSwin(quad)) ≤ acc(CSwin(dot-product)) = 82.7%. Next, the evaluation of CSwin with our angular self-attention on image classification is performed multiple times (e.g., 20) under different settings (e.g., different seeds). Finally, the t-score and p-value are calculated according to the one-tailed t-test, and the results for CSwin (Dong et al., 2022) are shown in Table 12. The p-value for CSwin (Dong et al., 2022) is 1.16e-30, which is significantly smaller than the threshold 0.05, so the null hypothesis is rejected. We follow the same routine to perform the hypothesis test on MaxViT (Tu et al., 2022), with the results also illustrated in Table 12. Both results suggest that the improvement in image classification is indeed statistically significant and that our angular self-attention can serve as a strong competitor to the traditional scaled dot-product self-attention.

Table 12: The results of the significance test on CSwin and MaxViT.

| Model                       | t-score | p-value  |
|-----------------------------|---------|----------|
| CSwin-T (Dong et al., 2022) | 144.47  | 1.16e-30 |
| MaxViT-T (Tu et al., 2022)  | 96.55   | 2.42e-27 |

## 5 Conclusions

In this paper, we propose the dual-window mechanism and angular self-attention.
The dual-window mechanism alternately divides the feature maps into even and odd numbers of local windows in each stage to enable information exchange between the local windows. In angular self-attention, the traditional scaled dot-product operation is replaced by our proposed quadratic and cosine functions. The proposed angular functions can also model long-range relationships between tokens. Based on the dual-window mechanism and angular self-attention, we propose a new vision transformer backbone called the dual-windowed angular vision transformer. Extensive experiments show that our backbone achieves competitive performance on tasks such as image classification, object detection, and semantic segmentation while maintaining a computational cost comparable to that of the baseline models.

Broader Impact. This work proposes a new vision transformer architecture called DWAViT, featuring angular self-attention and a dual local window mechanism. Our model is shown to achieve competitive performance in downstream tasks such as object detection and semantic segmentation and has great potential for use in various practical scenarios. In particular, object detection is one of the most promising real-world applications of vision transformers, and it is often used in systems that require extensive visual interaction with the surrounding environment. For instance, autonomous vehicles require many object detectors to identify pedestrians and other nearby vehicles, so the safety and trustworthiness of vision transformers are critical in this area. Though our proposed model achieves promising results on object detection and other tasks, critical issues such as adversarial robustness and trustworthiness remain under-explored, and further investigation is required.

## Acknowledgement

The work is in part supported by the U.S.
Army Research Office Award under Grant Number W911NF-21-10109 and the National Science Foundation under Grant IIS-2316306. ## References Alaaeldin Ali, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, Armand Joulin, Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, et al. Xcit: Cross-covariance image transformers. Advances in neural information processing systems, 34:20014–20027, 2021. Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In *Computer Vision–ECCV 2020: 16th European Conference,* Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pp. 213–229. Springer, 2020. Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In *Proceedings of the IEEE/CVF international conference on computer* vision, pp. 357–366, 2021. Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, et al. Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. Liyan Chen, Weihan Wang, and Philippos Mordohai. Learning the distribution of errors in stereo matching for joint disparity and uncertainty estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 17235–17244, June 2023a. Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, and Rongrong Ji. Cf-vit: A general coarse-to-fine method for vision transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 7042–7052, 2023b. Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, and Zicheng Liu. 
Mobile-former: Bridging mobilenet and transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5270–5279, 2022. François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1251–1258, 2017. Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. In *International Conference on Learning Representations*, 2021. Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. In *NeurIPS*, pp. 9355–9366, 2021a. Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision transformers. *arXiv preprint arXiv:2102.10882*, 2021b. MMSegmentation Contributors. Mmsegmentation: Openmmlab semantic segmentation toolbox and benchmark, 2020. Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In *International Conference on Learning Representations*, 2019. Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. *Advances in Neural Information Processing Systems*, 34:3965–3977, 2021. Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. *Advances in Neural Information Processing Systems*, 35:16344–16359, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.
Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, and Lu Yuan. Davit: Dual attention vision transformers. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October* 23–27, 2022, Proceedings, Part XXIV, pp. 74–92. Springer, 2022. Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, and Baining Guo. Cswin transformer: A general vision transformer backbone with cross-shaped windows. In *Proceedings* of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12124–12134, 2022. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*. OpenReview.net, 2021. Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision, pp. 6824–6835, 2021. Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, and Chang Xu. Cmt: Convolutional neural networks meet vision transformers. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 12175–12185, 2022. Dongchen Han, Xuran Pan, Yizeng Han, Shiji Song, and Gao Huang. Flatten transformer: Vision transformer using focused linear attention. In *Proceedings of the IEEE/CVF International Conference on Computer* Vision, pp. 5961–5971, 2023. Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. Advances in Neural Information Processing Systems, 34:15908–15919, 2021. Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. Neighborhood attention transformer. arXiv preprint arXiv:2204.07143, 2022. Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. 
Neighborhood attention transformer. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6185–6194, 2023. Ali Hatamizadeh, Hongxu Yin, Greg Heinrich, Jan Kautz, and Pavlo Molchanov. Global context vision transformers. In *International Conference on Machine Learning*, pp. 12633–12646. PMLR, 2023. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In *Proceedings of the IEEE* International Conference on Computer Vision (ICCV), Oct 2017. Liqiang He and Sinisa Todorovic. Destr: Object detection with split transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9377–9386, 2022. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. *arXiv preprint arXiv:1704.04861*, 2017. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4700–4708, 2017. Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Crisscross attention for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 603–612, 2019. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In *International Conference on Machine Learning*, pp. 5156–5165. PMLR, 2020. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. 
Youngwan Lee, Jonghee Kim, Jeffrey Willette, and Sung Ju Hwang. Mpvit: Multi-path vision transformer for dense prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7287–7296, 2022. Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. Mvitv2: Improved multiscale vision transformers for classification and detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4804–4814, 2022. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In *Computer Vision–ECCV 2014: 13th* European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740–755. Springer, 2014. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *Proceedings of the IEEE/CVF* international conference on computer vision, pp. 10012–10022, 2021. Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12009–12019, 2022a. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 11976–11986, 2022b. Jiachen Lu, Jinghan Yao, Junge Zhang, Xiatian Zhu, Hang Xu, Weiguo Gao, Chunjing Xu, Tao Xiang, and Li Zhang. Soft: Softmax-free transformer with linear complexity. *Advances in Neural Information* Processing Systems, 34:21297–21309, 2021. Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan He, and Hui Xue. 
Towards robust vision transformer. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pp. 12042–12051, 2022. Sachin Mehta and Mohammad Rastegari. Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer. In *International Conference on Learning Representations*, 2022. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In *International conference on machine learning*, pp. 4055–4064. PMLR, 2018. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah Smith, and Lingpeng Kong. Random feature attention. In *International Conference on Learning Representations*, 2021. Zhiliang Peng, Wei Huang, Shanzhi Gu, Lingxi Xie, Yaowei Wang, Jianbin Jiao, and Qixiang Ye. Conformer: Local features coupling global representations for visual recognition. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 367–376, 2021. Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. *SIAM Journal on Control and Optimization*, 30(4):838–855, 1992. Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, and Yiran Zhong. cosformer: Rethinking softmax in attention. In *International Conference on Learning Representations*, 2022. Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In *CVPR*, pp. 10425–10433. Computer Vision Foundation / IEEE, 2020. Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. *Advances in neural information processing systems*, 32, 2019. Yongming Rao, Wenliang Zhao, Yansong Tang, Jie Zhou, Ser-Nam Lim, and Jiwen Lu. Hornet: Efficient high-order spatial interactions with recursive gated convolutions. In *Advances in Neural Information Processing Systems*, 2022.
Pengzhen Ren, Changlin Li, Guangrun Wang, Yun Xiao, Qing Du, Xiaodan Liang, and Xiaojun Chang. Beyond fixation: Dynamic window visual transformer. In *Proceedings of the IEEE/CVF Conference on* Computer Vision and Pattern Recognition, pp. 11987–11997, 2022. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 4510–4520, 2018. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018. Weili Shi and Sheng Li. Improving robustness of vision transformers via data-augmented virtual adversarial training. In *2022 IEEE International Conference on Big Data (Big Data)*, pp. 135–140. IEEE, 2022. Weili Shi, Ronghang Zhu, and Sheng Li. Pairwise adversarial training for unsupervised class-imbalanced domain adaptation. In Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining, pp. 1598–1606, 2022. Weili Shi, Zhongliang Zhou, Benjamin H Letcher, Nathaniel Hitt, Yoichiro Kanno, Ryo Futamura, Osamu Kishida, Kentaro Morita, and Sheng Li. Aging contrast: A contrastive learning framework for fish reidentification across seasons and years. In *Australasian Joint Conference on Artificial Intelligence*, pp. 252–264. Springer, 2023. Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. Bottleneck transformers for visual recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16519–16529, 2021. Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In *Proceedings of the IEEE international conference on computer vision*, pp. 843–852, 2017. 
Weixuan Sun, Zhen Qin, Hui Deng, Jianyuan Wang, Yi Zhang, Kaihao Zhang, Nick Barnes, Stan Birchfield, Lingpeng Kong, and Yiran Zhong. Vicinity vision transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105–6114. PMLR, 2019. Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In *International conference* on machine learning, pp. 10096–10106. PMLR, 2021. Rui Tian, Zuxuan Wu, Qi Dai, Han Hu, Yu Qiao, and Yu-Gang Jiang. Resformer: Scaling vits with multi-resolution training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22721–22731, 2023. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In *International conference on* machine learning, pp. 10347–10357. PMLR, 2021a. Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 32–42, 2021b. Hugo Touvron, Matthieu Cord, and Hervé Jégou. Deit iii: Revenge of the vit. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIV, pp. 516–533. Springer, 2022. Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. Maxvit: Multi-axis vision transformer. In *Computer Vision–ECCV 2022: 17th European Conference, Tel* Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIV, pp. 459–479. Springer, 2022. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. 
*Advances in neural information processing systems*, 30, 2017. Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In *Computer Vision–ECCV 2020: 16th European* Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV, pp. 108–126. Springer, 2020. Weihan Wang, Bharat Joshi, Nathaniel Burgdorfer, Konstantinos Batsosc, Alberto Quattrini Lid, Philippos Mordohaia, and Ioannis Rekleitisb. Real-time dense 3d mapping of underwater environments. In *2023* IEEE International Conference on Robotics and Automation (ICRA), pp. 5184–5191, 2023a. doi: 10.1109/ ICRA48891.2023.10160266. Weihan Wang, Jiani Li, Yuhang Ming, and Philippos Mordohai. Edi: Eskf-based disjoint initialization for visual-inertial slam systems. In *2023 IEEE/RSJ International Conference on Intelligent Robots and* Systems (IROS), pp. 1466–1472, 2023b. doi: 10.1109/IROS55552.2023.10342106. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 568–578, 2021. Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. *arXiv preprint arXiv:2211.05778*, 2022a. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvt v2: Improved baselines with pyramid vision transformer. *Computational Visual Media*, 8(3): 415–424, 2022b. Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. 
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14408–14419, 2023c. Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7794–7803, 2018. Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. In *Proceedings of the IEEE/CVF International Conference on Computer* Vision, pp. 22–31, 2021. Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, and Gao Huang. Vision transformer with deformable attention. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 4794–4803, 2022. Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 418–434, 2018. Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, and Ross Girshick. Early convolutions help transformers see better. *Advances in Neural Information Processing Systems*, 34:30392–30400, 2021. Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34:12077–12090, 2021. Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern* Recognition (CVPR), July 2017. Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and Jianfeng Gao. Focal attention for long-range interactions in vision transformers. In *NeurIPS*, pp. 30008–30022, 2021. Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, and Wei Wu. Incorporating convolution designs into visual transformers. 
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 579–588, 2021. Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel Ni, and Harry Shum. Dino: Detr with improved denoising anchor boxes for end-to-end object detection. In International Conference on Learning Representations, 2022a. Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, and Jianfeng Gao. Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 2998–3008, 2021. Xiaosong Zhang, Yunjie Tian, Lingxi Xie, Wei Huang, Qi Dai, Qixiang Ye, and Qi Tian. Hivit: A simpler and more efficient design of hierarchical vision transformer. In *The Eleventh International Conference on* Learning Representations, 2022b. Hengshuang Zhao, Jiaya Jia, and Vladlen Koltun. Exploring self-attention for image recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10076–10085, 2020. Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6881–6890, 2021. Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 633–641, 2017. Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Animashree Anandkumar, Jiashi Feng, and Jose M Alvarez. Understanding the robustness in vision transformers. In International Conference on Machine Learning, pp. 27378–27394. PMLR, 2022a. Zhongliang Zhou, Nathaniel P Hitt, Benjamin H Letcher, Weili Shi, and Sheng Li. 
Pigmentation-based visual learning for Salvelinus fontinalis individual re-identification. In *2022 IEEE International Conference on Big Data (Big Data)*, pp. 6850–6852. IEEE, 2022b.

## A Appendix

## A.1 Theoretical Analysis

The proof of Proposition 1 is provided below.

Proof 1 Assume θ0 ≤ θ1 ≤ ... ≤ θJ−1 for the angles between the target query and all the keys, where J is the total number of tokens and V is the value vector. According to our angular self-attention, the embedding of the target tokens can be expressed as:

$$\begin{aligned}
\mathcal{O} &= \frac{1}{C}\sum_{j}\exp(s(\theta_{j})/\tau)V_{j}\\
&= \frac{1}{C}\exp\Big(\frac{s(\theta_{0})}{\tau}\Big)\Big(V_{0}+\exp\Big(\frac{s(\theta_{1})-s(\theta_{0})}{\tau}\Big)V_{1}+\ldots+\exp\Big(\frac{s(\theta_{J-1})-s(\theta_{0})}{\tau}\Big)V_{J-1}\Big)\\
&= \frac{1}{C}\exp\Big(\frac{s(\theta_{0})}{\tau}\Big)(V_{0}+\omega_{1}V_{1}+\ldots+\omega_{J-1}V_{J-1})\\
&\propto V_{0}+\omega_{1}V_{1}+\cdots+\omega_{J-1}V_{J-1},
\end{aligned}\tag{7}$$

where ωj = exp((s(θj) − s(θ0))/τ) and C is the normalization coefficient. Since θj > θ0 and s(θj) < s(θ0), we have ωj ∈ (0, 1).

## A.2 Experiment

The results on object detection and instance segmentation of our model and the baseline models with the Mask R-CNN (He et al., 2017) framework with 1× schedule are illustrated in Table 13. The baseline methods include the latest ViT models such as CSwin (Dong et al., 2022) and DAT (Xia et al., 2022).

Table 13: Object detection and instance segmentation results with the Mask R-CNN framework (1× schedule) at resolution 800 × 1280. [cos] and [quad] denote cosine and quadratic function, respectively.

| Backbone | #Param(M) | #FLOPs(G) | APb | APb50 | APb75 | APm | APm50 | APm75 |
|------------------------------|-----|-----|------|------|------|------|------|------|
| ResNet-50 (He et al., 2016)  | 44  | 260 | 38.0 | 58.6 | 41.4 | 34.4 | 55.1 | 36.7 |
| PVT-S (Wang et al., 2021)    | 44  | 245 | 40.4 | 62.9 | 43.8 | 37.8 | 60.1 | 40.3 |
| ViL-S (Zhang et al., 2021)   | 45  | 218 | 44.9 | 67.1 | 49.3 | 41.0 | 64.2 | 44.1 |
| TwinsP-S (Chu et al., 2021a) | 44  | 245 | 42.9 | 65.8 | 47.1 | 40.0 | 62.7 | 42.9 |
| Twins-S (Chu et al., 2021a)  | 44  | 228 | 43.4 | 66.0 | 47.3 | 40.3 | 63.2 | 43.4 |
| Swin-T (Liu et al., 2021)    | 48  | 264 | 42.2 | 64.6 | 46.2 | 39.1 | 61.6 | 42.0 |
| DAT-T (Xia et al., 2022)     | 48  | 272 | 44.4 | 67.6 | 48.5 | 40.4 | 64.2 | 43.1 |
| CSWin-T (Dong et al., 2022)  | 42  | 279 | 46.7 | 68.6 | 51.3 | 42.2 | 65.6 | 45.4 |
| DWAViT-T[cos] (Ours)         | 42  | 255 | 46.2 | 69.2 | 50.8 | 41.7 | 65.9 | 44.7 |
| DWAViT-T[quad] (Ours)        | 42  | 255 | 46.6 | 69.6 | 51.3 | 42.2 | 66.3 | 45.7 |
| Res101 (He et al., 2016)     | 63  | 336 | 40.4 | 61.1 | 44.2 | 36.4 | 57.7 | 38.8 |
| PVT-M (Wang et al., 2021)    | 64  | 302 | 42.0 | 64.4 | 45.6 | 39.0 | 61.6 | 42.1 |
| ViL-M (Zhang et al., 2021)   | 60  | 261 | 44.6 | 66.3 | 48.5 | 40.7 | 63.8 | 43.7 |
| TwinsP-B (Chu et al., 2021a) | 64  | 302 | 44.6 | 66.7 | 48.9 | 40.9 | 63.8 | 44.2 |
| Twins-B (Chu et al., 2021a)  | 76  | 340 | 45.2 | 67.6 | 49.3 | 41.5 | 64.5 | 44.8 |
| Swin-S (Liu et al., 2021)    | 69  | 354 | 44.8 | 66.6 | 48.9 | 40.9 | 63.4 | 44.2 |
| CSWin-S (Dong et al., 2022)  | 54  | 342 | 47.9 | 70.1 | 52.6 | 43.2 | 67.1 | 46.2 |
| DAT-S (Xia et al., 2022)     | 69  | 378 | 47.1 | 69.9 | 51.5 | 42.5 | 66.7 | 45.4 |
| DWAViT-S[cos] (Ours)         | 64  | 338 | 48.0 | 70.7 | 52.6 | 43.3 | 67.6 | 46.5 |
| DWAViT-S[quad] (Ours)        | 64  | 338 | 48.2 | 70.8 | 52.8 | 43.3 | 67.7 | 46.6 |
| X101-64 (Xie et al., 2017)   | 101 | 493 | 42.8 | 63.8 | 47.3 | 38.4 | 60.6 | 41.3 |
| PVT-L (Wang et al., 2021)    | 81  | 364 | 42.9 | 65.0 | 46.6 | 39.5 | 61.9 | 42.5 |
| CSWin-B (Dong et al., 2022)  | 97  | 526 | 48.7 | 70.4 | 53.9 | 43.9 | 67.8 | 47.3 |
| DWAViT-B[cos] (Ours)         | 97  | 462 | 48.6 | 71.1 | 53.7 | 43.6 | 68.0 | 46.9 |
| DWAViT-B[quad] (Ours)        | 97  | 462 | 48.6 | 71.2 | 53.6 | 43.6 | 67.9 | 47.1 |
Specifically, Swin Transformer (Liu et al., 2021), DAT (Xia et al., 2022), and CSwin (Dong et al., 2022) are baselines for exact comparison. These baselines and our method adopt the MMDetection toolbox (Chen et al., 2019) to perform the experiments and use models pre-trained only on ImageNet-1K as the feature extractor. For tiny-sized models, the experimental results show that DWAViT-T achieves on-par or better results compared with CSWin (Dong et al., 2022). For instance, the APb50 of DWAViT-T(quad) reaches 69.6%, which is 1.0% higher than that of CSWin (Dong et al., 2022), and the APm of DWAViT-T(quad) is 42.2%, which is on par with that of CSWin (Dong et al., 2022). DWAViT-S with quadratic self-attention outperforms all the baseline methods on all the metrics; its APb and APm reach 48.2% and 43.3%, respectively. Furthermore, DWAViT-B achieves the best results on APb50 and APm50.

Table 14 shows the performance of our model and the baseline models with the Cascade Mask R-CNN (Cai & Vasconcelos, 2018) framework on object detection and instance segmentation. Swin Transformer (Liu et al., 2021) and DAT (Xia et al., 2022) are the major baselines for exact comparison. The experimental results show that our DWAViT outperforms the baseline methods. DWAViT-T achieves 52.2% on APb and 45.1% on APm, and DWAViT-S achieves 52.5% on APb and 45.6% on APm. DWAViT-S with quadratic self-attention achieves 45.6% and 49.9% on APm and APm75, respectively, which are 0.1% and 0.6% higher than those of DAT-S (Xia et al., 2022).

Table 14: Object detection and instance segmentation results with the Cascade Mask R-CNN framework at resolution 800 × 1280. [cos] and [quad] denote cosine and quadratic function, respectively.

| Backbone | #Param(M) | #FLOPs(G) | APb | APb50 | APb75 | APm | APm50 | APm75 |
|-------------------------------------|-----|------|------|------|------|------|------|------|
| Res50 (He et al., 2016)             | 82  | 739  | 46.3 | 64.3 | 50.5 | 40.1 | 61.7 | 43.4 |
| Swin-T (Liu et al., 2021)           | 86  | 745  | 50.5 | 69.3 | 54.9 | 43.7 | 66.6 | 47.1 |
| DAT-T (Xia et al., 2022)            | 86  | 750  | 51.3 | 70.1 | 55.8 | 44.5 | 67.5 | 48.1 |
| NAT-T (Hassani et al., 2023)        | 85  | 737  | 51.4 | 70.0 | 55.9 | 44.5 | 67.6 | 47.9 |
| GC ViT-T (Hatamizadeh et al., 2023) | 85  | 770  | 51.6 | 70.4 | 56.1 | 44.6 | 67.8 | 48.3 |
| DWAViT-T[cos] (Ours)                | 80  | 734  | 52.2 | 71.0 | 57.0 | 45.1 | 68.3 | 49.0 |
| DWAViT-T[quad] (Ours)               | 80  | 734  | 51.0 | 70.6 | 56.7 | 44.9 | 68.5 | 48.8 |
| X101-32 (Xie et al., 2017)          | 101 | 819  | 48.1 | 66.5 | 52.4 | 41.6 | 63.9 | 45.2 |
| Swin-S (Liu et al., 2021)           | 107 | 838  | 51.8 | 70.4 | 56.3 | 44.7 | 67.9 | 48.5 |
| DAT-S (Xia et al., 2022)            | 107 | 807  | 52.7 | 71.7 | 57.2 | 45.5 | 69.1 | 49.3 |
| NAT-S (Hassani et al., 2023)        | 108 | 809  | 52.0 | 70.4 | 56.3 | 44.9 | 68.1 | 48.6 |
| GC ViT-S (Hatamizadeh et al., 2023) | 108 | 866  | 52.4 | 71.0 | 57.1 | 45.4 | 68.5 | 49.3 |
| DWAViT-S[cos] (Ours)                | 102 | 817  | 52.5 | 71.3 | 57.0 | 45.6 | 68.9 | 49.6 |
| DWAViT-S[quad] (Ours)               | 102 | 817  | 52.5 | 71.4 | 57.2 | 45.6 | 68.9 | 49.9 |
| X101-64 (Xie et al., 2017)          | 140 | 972  | 48.3 | 66.4 | 52.3 | 41.7 | 64.0 | 45.1 |
| Swin-B (Liu et al., 2021)           | 145 | 982  | 51.9 | 70.9 | 56.5 | 45.0 | 68.4 | 48.7 |
| NAT-B (Hassani et al., 2023)        | 147 | 931  | 52.5 | 71.1 | 57.1 | 45.2 | 68.6 | 49.0 |
| GC ViT-B (Hatamizadeh et al., 2023) | 146 | 1018 | 52.9 | 71.7 | 57.8 | 45.8 | 69.2 | 49.8 |
| DWAViT-B[cos] (Ours)                | 134 | 940  | 52.5 | 71.3 | 57.0 | 45.6 | 69.0 | 49.6 |
| DWAViT-B[quad] (Ours)               | 134 | 940  | 52.8 | 71.6 | 57.3 | 45.7 | 69.1 | 49.5 |

The performance of DWAViT-S and DWAViT-B on the semantic segmentation task is illustrated in Table 15. The baseline models include XCiT (Ali et al., 2021), Swin Transformer (Liu et al., 2021), DaViT (Ding et al., 2022), Focal Transformer (Yang et al., 2021) and DAT (Xia et al., 2022).
The experimental results suggest that our DWAViT with cosine self-attention achieves the best performance, with the mIoU reaching 49.3% and 49.8% for DWAViT-S and DWAViT-B, respectively.

## A.3 Ablation Study

In the ablation study, we first compare the performance of our model with the traditional scaled dot-product self-attention against our proposed angular self-attention, evaluating on ImageNet-1K classification, COCO object detection and ADE20K semantic segmentation. We adopt DWAViT-T as the backbone in all three tasks. For object detection, Mask R-CNN is adopted as the framework and the model is trained for 36 epochs. For semantic segmentation, UPerNet is adopted as the framework and the model is trained for 160K iterations. The performance on semantic segmentation is illustrated in Table 16. The performance of our model with scaled dot-product self-attention is lower than that of our model with the proposed angular self-attention, and on some tasks the gap is more pronounced. For instance, on semantic segmentation, our model with scaled dot-product self-attention achieves only 44.7% mIoU, approximately 3% lower than our model with cosine self-attention. The experimental results suggest that our angular self-attention successfully models the relationships between tokens and is a powerful alternative to the traditional scaled dot-product self-attention.
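To make the comparison above concrete, the following NumPy sketch contrasts dot-product scores with angle-based scores. This is an illustrative sketch only: the exact parameterization of DWAViT's angular self-attention is not given here, and the two functional forms below (the cosine of the query-key angle, and a quadratic function of that angle) are our assumptions based on the [cos]/[quad] naming.

```python
import numpy as np

def angular_attention(Q, K, V, kind="cos"):
    # Normalize queries and keys so scores depend only on the angle
    # between them, not on their norms (hypothetical form).
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    cos = np.clip(Qn @ Kn.T, -1.0, 1.0)
    theta = np.arccos(cos)                  # query-key angle in [0, pi]
    if kind == "cos":
        scores = np.cos(theta)              # cosine of the angle
    else:                                   # "quad": quadratic in the angle
        scores = 1.0 - (theta / np.pi) ** 2
    # Row-wise softmax over the scores, then aggregate the values.
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V
```

Because of the normalization, rescaling a query vector leaves the output unchanged, which is the defining difference from scaled dot-product attention, whose scores grow with the vector norms.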
| Backbone | #Param(M) | #FLOPs(G) | mIoU |
|---|---|---|---|
| ResNet-101 (He et al., 2016) | 86 | 1029 | 44.9 |
| XCiT-S24/16 (Ali et al., 2021) | 76 | 1053 | 46.9 |
| TwinsP-B (Chu et al., 2021a) | 74 | 977 | 47.1 |
| XCiT-M24/16 (Ali et al., 2021) | 112 | 1213 | 47.6 |
| Swin-S (Liu et al., 2021) | 81 | 1038 | 47.6 |
| Twins-B (Chu et al., 2021a) | 89 | 1020 | 47.7 |
| Focal-S (Yang et al., 2021) | 85 | 1130 | 48.0 |
| DaViT-T (Ding et al., 2022) | 81 | 1030 | 48.8 |
| DAT-T (Xia et al., 2022) | 83 | 1079 | 48.3 |
| GC ViT-S (Hatamizadeh et al., 2023) | 84 | 1163 | 48.3 |
| NAT-S (Hassani et al., 2023) | 82 | 1010 | 48.0 |
| DWAViT-S[cos] (Ours) | 75 | 1015 | 49.3 |
| DWAViT-S[quad] (Ours) | 75 | 1015 | 47.8 |
| XCiT-M24/8 (Ali et al., 2021) | 110 | 2161 | 48.4 |
| Swin-B (Liu et al., 2021) | 121 | 1841 | 48.1 |
| Focal-B (Yang et al., 2021) | 126 | 1354 | 49.0 |
| DaViT-B (Ding et al., 2022) | 121 | 1175 | 49.4 |
| DAT-B (Xia et al., 2022) | 121 | 1212 | 49.4 |
| GC ViT-B (Hatamizadeh et al., 2023) | 125 | 1348 | 49.2 |
| NAT-B (Hassani et al., 2023) | 123 | 1137 | 48.5 |
| DWAViT-B[cos] (Ours) | 108 | 1143 | 49.8 |
| DWAViT-B[quad] (Ours) | 108 | 1143 | 49.1 |

Table 16: The performance of our proposed DWAViT with angular self-attention, scaled dot-product self-attention and linearized self-attention on semantic segmentation. FLOPs are calculated with resolution 512 × 2048. [dot product], [linear], [cos] and [quad] in the square brackets denote scaled dot-product, linearized, cosine and quadratic function, respectively.

| Backbone | Param(M) | FLOPs(G) | mIoU |
|---|---|---|---|
| DWAViT-T[dot product] | 52 | 930 | 44.7 |
| DWAViT-T[linear] | 52 | 930 | 44.1 |
| DWAViT-T[cos] (Ours) | 52 | 930 | **47.5** |
| DWAViT-T[quad] (Ours) | 52 | 930 | 45.4 |

We also compare our proposed dual local window with the shifted window proposed by Swin Transformer (Liu et al., 2021).
In this experiment, we replace the dual local window with the shifted local window and evaluate the performance of the model on ImageNet-1K classification, COCO object detection and ADE20K semantic segmentation. We adopt DWAViT-T[quad] as the backbone in all three tasks. The results on ADE20K semantic segmentation are illustrated in Table 17. On semantic segmentation, our model with the shifted local window achieves better performance than our model with the dual local window. However, on the object detection task, the performance of our model with the shifted window is lower than that of our model with the dual local window on all metrics. To wrap up, our dual local window is effective in modeling the relationships between tokens and achieves better performance than the shifted local window on some tasks.

Table 17: The performance of our proposed DWAViT with the dual local window and the shifted local window on semantic segmentation. We adopt the quadratic self-attention and the FLOPs are calculated with resolution 512 × 2048.

| Backbone | Param(M) | FLOPs(G) | mIoU |
|---|---|---|---|
| DWAViT-T[shifted window] | 52 | 933 | 46.2 |
| DWAViT-T[dual window] (Ours) | 52 | 930 | 45.4 |
Review 1:

Summary: This paper proposes a new vision transformer architecture, DWAViT. It introduces two main components compared to existing works: (1) a dual window mechanism for local window interactions and (2) an angular self-attention mechanism to replace the dot product in the original self-attention. DWAViT achieves comparable performance to existing works on various computer vision tasks. The ablation studies show that the introduced components can slightly improve the overall performance.

Strengths and Weaknesses:

### Strengths:
- The paper introduces two components, the dual window mechanism and angular self-attention, to enhance the original vision transformer architecture.
- The methodology is well-explained and easy to understand.

### Weaknesses:
Limited novelty, as the proposed components bear similarities to existing works, with limited demonstrated benefits over prior approaches.
1. Regarding the dual window mechanism, several previous works (MaxViT, CSWin, DaViT, etc.) have explored alternatives to the global self-attention in the original vision transformer. However, this paper does not provide a comprehensive comparison or theoretical analysis to highlight the advantages of their approach over these existing methods. The empirical results (Table 6) show only marginal improvements compared to the Swin Transformer.
2. The proposed angular self-attention mechanism also fails to demonstrate significant benefits over existing approaches like CosFormer or SOFT. Even when compared to the original dot-product self-attention (Table 5), the improvements are relatively minor.

Requested Changes:
1. A more comprehensive presentation and analysis of the advantages offered by the proposed method, particularly in comparison to advanced architectures such as MaxViT, CosFormer, and other relevant state-of-the-art approaches.
2.
A more thorough empirical evaluation against the latest architectures, with analysis or insights to demonstrate the significance of the proposed method.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper proposes a new attention computation method that replaces the dot-product operation with an angular computation. It reckons that the angular computation can represent the relationship between different tokens better than the conventional dot-product operation. Based on the new attention, the authors construct a new transformer model, called DWAViT. Similar to the Swin Transformer, it also has a hierarchical structure which extracts information gradually.

Strengths and Weaknesses:

Strengths:
- The proposed angle operation is reasonable. It measures the relationship between different tokens only with the angle, instead of the norm, which naturally reduces some noise.
- The authors conduct extensive experiments to validate the effectiveness of the proposed methods, comparing against image classification methods on ImageNet-1K, detection methods on COCO and semantic segmentation methods on ADE20K. The results show that the proposed new architecture achieves better performance than the existing methods.

Weaknesses:
- Compared to the standard attention, the angle computation seems more complex. Will it introduce extra latency? The authors are required to measure the practical inference speed and add the results to the main tables (such as Table 2 and Table 3).
- The architecture introduces extra depthwise convolution, which has been proven effective for improving a model's performance. For a fair comparison, could the authors use the architecture configuration of the Swin Transformer and only replace the attention module? More discussions are also required.

Requested Changes: See the weaknesses.

Broader Impact Concerns: No concern on the ethical implications.
==================================================

Review 3:

Summary:
- The paper proposes a new transformer variant for vision tasks called DWAViT (dual-windowed angular vision transformer).
- Two key modifications are proposed over standard ViT and Swin transformers: a dual window mechanism is proposed which partitions alternate feature maps into odd/even local windows to allow for connections between different local windows. Second, use of angular attention is proposed, which depends only on the angle between query and key, as opposed to scaled dot-product attention, which is also a function of the norm of the vectors.
- The proposed model is trained on a variety of pre-training and downstream tasks and shows improvements over prior works.

Strengths and Weaknesses:

Strengths:
- Paper is well written and easy to follow.
- The idea of using odd/even windows is interesting and can help bridge connections across layers, at least due to the changes in border areas.
- The gains are positive but small for the most part.
- The approach is simple, which makes it easier for the community to adopt.

Weaknesses:
- Performance improvements: My key worry is: are the improvements over various prior works statistically significant? In 4.1, what do you mean by "variance diminished considerably" and "major results are reported"?
- Intuitions: Authors should provide some intuitions on why they think angular attention is better than scaled dot-product attention. Do the authors have some intuitions for why the angular attention works better with the dual local window techniques?
- Make differences more obvious: In the key result tables, make it clear what DWAViT was built on. I.e., if you do not use the dual window and angular attention contributions, which row in the table will be your baseline? I also suggest that the authors include second-best results in the tables.
- Hyperparameters: How sensitive are the results to various choices of hyperparameters?
Is the proposed approach less or more sensitive to such changes compared to prior works? This will be interesting and might help strengthen the paper.

Minor:
- Some typos: 3.1, "ood" or odd?; 3.2 "tokes".

Requested Changes: A discussion/experiment on statistical significance will be helpful in securing my recommendation for this paper.

Broader Impact Concerns: No concern

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: This paper proposes a new vision transformer architecture, DWAViT. It introduces two main components: (1) a dual window mechanism for local window interactions and (2) an angular self-attention mechanism to replace the dot product in the original self-attention. DWAViT achieves comparable performance to existing works on various computer vision tasks. After the rebuttal, the three reviewers gave two Leaning Accept and one Reject. The major concern of the rejecting reviewer is the slightly lower performance in image classification. The authors demonstrated only comparable performance of their proposed DWAViT in Tables 12-14. The AE understands the concern of the reviewer. The following question shall also be raised:
- When adding the proposed angular attention and dot-product attention to CSWin and MaxViT, the gains are only ~0.3%. Are the gains statistically significant?

==================================================
# Towards Large Scale Transfer Learning For Differentially Private Image Classification

Harsh Mehta *harshm@google.com*
Google Research

Abhradeep Thakurta *athakurta@google.com*
Google Research

Alexey Kurakin *kurakin@google.com*
Google Research

Ashok Cutkosky *ashok@cutkosky.com*
Boston University

Reviewed on OpenReview: *https://openreview.net/forum?id=Uu8WwCFpQv*

## Abstract

Differentially Private Stochastic Gradient Descent (DP-SGD) has emerged as a popular private training algorithm. Unfortunately, the computational cost of training large-scale models with DP-SGD is substantially higher than non-private training. This is further exacerbated by the fact that increasing the number of parameters leads to larger degradation in utility with DP. In this work, we zoom in on the ImageNet dataset and demonstrate that, similar to the non-private case, pre-training over-parameterized models on a large public dataset can lead to substantial gains when the models are finetuned privately. Moreover, by systematically comparing private and non-private models across a range of large batch sizes, we find that, similar to the non-private setting, the choice of optimizer can further improve performance substantially with DP. By using the LAMB optimizer, we observed improvements of up to 20 percentage points (absolute). We also show that finetuning just the last layer for a single step in the full batch setting, combined with extremely small-scale (near-zero) initialization, leads to SOTA results of 81.7% under a wide privacy budget range of ε ∈ [4, 10] and δ = 10−6, while minimizing the computational overhead substantially. Finally, we present additional results on CIFAR-10 and CIFAR-100, surpassing the previous state of the art by leveraging transfer learning with our recommendations.

## 1 Introduction

Despite significant research investment in privacy in machine learning, training private models remains challenging in practice.
The standard metric for privacy is Differential Privacy (DP), which bounds the change in model weights (and predictions) if a single training example is added or removed. Informally, Differential Privacy implies that an adversary learns almost the same thing about an individual data point independent of their presence or absence in the data set, which intuitively provides protection against membership attacks (Nasr et al., 2021). Prior work illustrates that without DP's guarantees, it is quite plausible to attack a variety of models, across modalities, to reveal individual example information (Shokri et al., 2017; Carlini et al., 2019; 2021; Choquette-Choo et al., 2020; Liu et al., 2021; Balle et al., 2022). Revealing such information can be very dangerous if the model was trained using sensitive data such as demographics or personal photos, making it critical to establish practical DP training methods.

Code: https://github.com/google-research/google-research/tree/master/dp_transfer

Formally, we can define DP as follows:

Definition 1.1 (Differential Privacy (Dwork et al., 2006b;a)). *A randomized algorithm* A *is* (ε, δ)*-differentially private if, for any pair of datasets* D *and* D′ *differing in at most one example (called* neighboring datasets*), and for all events* S *in the output range of* A*, we have*

$$\Pr[{\mathcal{A}}(D)\in S]\leq e^{\varepsilon}\cdot\Pr[{\mathcal{A}}(D^{\prime})\in S]+\delta,$$

*where the probability is over the randomness of* A.

The most popular method for DP training in deep learning is Differentially Private Stochastic Gradient Descent (DP-SGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016). The core recipe implements a common theme in DP: "fuzzing" an algorithm's outputs with noise to obscure the contributions of any individual input. Specifically, DP-SGD is a preprocessing step prepended to ordinary SGD: for each example in a minibatch, compute the gradient, clip it to a pre-selected norm, and add Gaussian noise before averaging over the minibatch and taking an SGD step.
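As a hedged illustration of how ε and δ trade off against noise, the classical Gaussian-mechanism calibration from Dwork & Roth (2014) can be sketched as follows. Note that this simple closed form only holds for ε ≤ 1, and DP-SGD in practice uses tighter accountants (e.g., the moments accountant of Abadi et al., 2016), so the function name and usage here are purely illustrative.

```python
import math

def gaussian_sigma(l2_sensitivity: float, eps: float, delta: float) -> float:
    # Classical calibration: adding N(0, sigma^2) noise to a query with
    # the given L2 sensitivity yields (eps, delta)-DP (valid for eps <= 1).
    return l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps

# For sensitivity 1, eps = 1, delta = 1e-6, sigma is roughly 5.3.
print(gaussian_sigma(1.0, 1.0, 1e-6))
```

Halving ε doubles the required noise scale, which is the basic tension between privacy budget and utility discussed throughout the paper.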
In practice, DP training can be very expensive or even ineffective for very large models. A naive implementation might significantly increase the cost of bounding the sensitivity of each example as parameter count is increased. In addition, the norm of the noise added also increases, which is believed to lead to worse model quality. Our goal is to develop simple and practical procedures for producing high-quality large-scale private models. To do this, we focus on *transfer learning* for improving the privacy-utility tradeoff in this large model regime. That is, we first pre-train a model on a large dataset with no privacy concerns, and only then privately finetune the model on the sensitive dataset. This strategy has emerged as a promising technique to improve the accuracy of private models. We specifically examine the effect of pre-training on JFT-300M, JFT-4B and ImageNet-21k datasets (Sun et al., 2017; Deng et al., 2009), and private finetuning on the popular ImageNet classification task with additional results on CIFAR-10 and CIFAR-100 datasets. Here, following Kurakin et al. (2022); De et al. (2022), we are simulating private training on a sensitive dataset by using the publicly available Imagenet data in place of the "sensitive" dataset. We show that using transfer learning, utility improves as the size of the model is increased, despite requiring more Gaussian noise. In addition, scaling both model and pre-training dataset size reduces the gap between private and non-private performance. In search of further improvements, we recall that increasing batch size improves performance under DP (McMahan et al., 2018; Anil et al., 2021a; Kurakin et al., 2022; De et al., 2022). Thus, we consider very large batch sizes (within a small multiple of the full batch). With these large batch sizes, similar to the non-private setting, we find the choice of optimizer leads to substantial improvements when training the model with privacy. 
For instance, ResNet50x4 (BiT) obtains 68.8% test accuracy when trained using DP-SGD with the LAMB optimizer (denoted as DP-LAMB below), compared to 47.1% using DP-SGD with Momentum, under a privacy guarantee of (ε, δ) = (10, 10−6). Furthermore, we carefully explore other hyperparameters affecting DP performance. Surprisingly, we found the initialization of the last layer to be crucial: initializing the last layer to *zero* yields significant improvements. In summary, we investigate three distinct recommendations:

1. Increase pretraining data and model size.
2. Use large batch sizes in combination with optimizers that can work well with large batch sizes (e.g., LAMB).
3. Initialize the last layer to zero (or very small values) when finetuning with DP.

Combining these recommendations, we achieve state-of-the-art DP results by finetuning just the last layer of ViT-H/14-4b. Remarkably, we only need to train for a *single step* in the full batch setting. Recently, Kurakin et al. (2022) obtained 47.9% accuracy on a privacy budget of (10, 10−6) with 70 epochs of finetuning, and De et al. (2022) obtained an impressive result of 81.1% on a privacy budget of (8, 8 × 10−7) and 77.1% at (1, 8 × 10−7), with 2500 epochs. Our results outperform the previous state of the art of Kurakin et al. (2022) and De et al. (2022) for all values of ε considered in both studies. In addition, since our best results were obtained by finetuning the last layer for a single epoch on ImageNet, our recommendations lead to at least a 70x reduction in private training time compared to prior work and thus significantly improve the cost-utility ratio of training a high-quality image classification model with DP.

Figure 1: (a) Compares the best models from our study with and without privacy on ImageNet across model and pre-training dataset sizes.
Scaling the model (from ViT-B/16 → ViT-L/16 → ViT-H/14) and using a larger pre-training dataset (JFT-4B instead of JFT-300M) decreases the gap in accuracy caused by the addition of the privacy guarantee of (ε = 10, δ = 10−6). (b) We finetune only the last layer of the ViT-H/14-4b model for a single step in the full batch setting and vary ε ∈ [0.25, 10]. The previous state of the art of Kurakin et al. (2022) obtains 47.9% accuracy on a privacy budget of (10, 10−6) with 70 epochs of finetuning. More recently, the concurrent work of De et al. (2022) obtains an impressive result of 81.1% on a privacy budget of (8, 8 × 10−7) and 77.1% at (1, 8 × 10−7), with 2500 epochs. We present significant improvements over both studies at all ε values we tried, with only 1 epoch of finetuning. Note that these results are averaged over 5 independent runs with different seeds. For reproducibility purposes, we also make the exact hyperparameters available in the Appendix.

## 2 Related Work

Theoretical Understanding. In the convex case, there is a rich literature on DP learning and optimization (Chaudhuri et al., 2011; Kifer et al., 2012; Song et al., 2013; Bassily et al., 2014; Talwar et al., 2015; Wu et al., 2016; Feldman et al., 2020a; Song et al., 2021; Asi et al., 2021), and the optimal privacy-utility tradeoffs are well known. Such matching upper and lower bounds are not available in the non-convex context of large deep learning models. In practice, DP-SGD (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016) is the most popular learning algorithm. Although its optimality is not known, there is an active line of work on improving the theoretical properties of DP-SGD, including adaptive clipping (Andrew et al., 2021; Pichapati et al., 2019) and private aggregation of teacher ensembles (Papernot et al., 2018).

Transfer Learning.
Several recent works demonstrate improved performance in the setting where we have access to a large public or non-sensitive dataset of the same modality as the private data (Tramèr & Boneh, 2021; Yu et al., 2021; Li et al., 2022; Kurakin et al., 2022; Hoory et al., 2021). Perhaps the direction most similar to ours is the concurrent work of De et al. (2022), which also employs JFT-300M as a public dataset to obtain high-utility privately trained ImageNet models, corroborating some of the trends we observe as well. While the overall direction is similar, there are many notable differences, including the model families explored (ResNet (BiT) (Kolesnikov et al., 2020) and Vision Transformer (Dosovitskiy et al., 2021) vs NFResNet (Brock et al., 2021b)), batch size (full batch vs 16k) and choice of optimizers for best results (LAMB/ADAM vs Momentum).

Large batch sizes. In the context of differential privacy, several recent papers advocate the use of large batch sizes (McMahan et al., 2018; Anil et al., 2021b; Dormann et al., 2021; Hoory et al., 2021; Liu et al., 2021; Kurakin et al., 2022) in order to improve the privacy-utility tradeoff. We confirm this finding in our setting as well, except for a few cases where the degradation in utility due to optimization difficulty with increased batch size trumped any potential increase in utility.

## 3 Background

We start our discussion with optimization details in the non-private setting. Given a dataset $\mathcal{D} = \{(x_1, y_1), \cdots, (x_n, y_n)\}$ with $(x_i, y_i) \in \mathcal{D}$, we are interested in optimizing the function $\mathcal{L} : \mathbb{R}^d \times \mathcal{D} \to \mathbb{R}$, defined as follows:

$${\cal L}(\theta)\triangleq\frac{1}{|{\cal D}|}\sum_{(x,y)\in{\cal D}}\ell(\theta,(x,y))\tag{1}$$

The standard (non-private) approach to this problem is mini-batch Stochastic Gradient Descent (SGD). In a single iteration of SGD, a minibatch $B_t \subset \mathcal{D}$ of size $|B_t| = N$ is chosen randomly from the dataset.
Then, given the current iterate $\theta_t$, SGD performs the update:

$$g_{t}\leftarrow\frac{1}{|B_{t}|}\sum_{(x,y)\in B_{t}}\nabla\ell(\theta_{t};(x,y)),\qquad\theta_{t+1}=\theta_{t}-\eta_{t}g_{t}\tag{2}$$

where $\eta_t$ denotes the learning rate used for iteration $t$. DP-SGD is a private variant of this algorithm. In order to bound the sensitivity of each training example, Abadi et al. (2016) suggest computing a gradient for each example separately and clipping each to a maximum norm of $C$ (a user-specified hyperparameter):

$$\tilde{g}_{t}\leftarrow\sum_{(x,y)\in B_{t}}\operatorname{clip}\left(\nabla\ell(\theta_{t};(x,y))\right),\qquad g_{t}\leftarrow\frac{\tilde{g}_{t}+\mathcal{N}\left(0,\sigma^{2}C^{2}\mathbf{I}\right)}{|B_{t}|}\tag{3}$$

where $\operatorname{clip}(v) = v \cdot \min\left\{1, \frac{C}{\|v\|_2}\right\}$. This is notably different from non-private training, where the forward pass can be vectorized and only a single pre-accumulated gradient need be calculated and used per mini-batch. If implemented naively, this step alone increases the computational cost of DP training proportionally to the batch size for a single step. We do note, however, that for several deep learning architectures it is indeed possible to bound the sensitivity of each example without calculating the gradient for every example separately, leading to a dramatic cost reduction both in terms of memory and compute (Goodfellow, 2015). One promising recent technique is Ghost Clipping (Li et al., 2022; Bu et al., 2022), where it is possible to match the memory requirements of non-private training with marginal additional compute cost. In this paper, however, we stick to the exact scheme proposed in Abadi et al. (2016). Our recommendation of finetuning just the last layer also reduces computational cost and can be used in conjunction with other cost-saving techniques like Ghost Clipping.
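The clipping-and-noising step above can be sketched in a few lines of NumPy. This is a didactic sketch of the Abadi et al. (2016) recipe, not the paper's implementation; per-example gradients are assumed to be given as rows of a matrix, and the function name is ours.

```python
import numpy as np

def dp_sgd_grad(per_example_grads, C, sigma, rng):
    # Clip each per-example gradient (row) to L2 norm at most C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise with standard deviation sigma * C,
    # then average over the batch.
    noise = rng.normal(0.0, sigma * C, size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / per_example_grads.shape[0]
```

With sigma = 0 this reduces to ordinary clipped-gradient averaging; the privacy guarantee comes from choosing sigma via a privacy accountant for the target (ε, δ).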
After summing the clipped example gradients, a noise vector sampled from a Gaussian distribution with standard deviation σC is added, where σ is a parameter that specifies the privacy guarantee via the Gaussian mechanism. Formally, any algorithm that uses this noisy gt (not just SGD) will be private by the standard post-processing property of differential privacy.

## 4 Experiments

Datasets. We use the ILSVRC-2012 ImageNet dataset (Deng et al., 2009) with 1k classes and 1.3M images (we refer to it as ImageNet in what follows) as our final evaluation dataset. However, we provide supplementary results in Section F, where we evaluate on two additional datasets, namely CIFAR-10 and CIFAR-100. We also refer to these as the private datasets for which we want a privacy guarantee. We use two variants of the JFT dataset for pre-training as our public (in the DP sense) dataset: JFT-300M (Sun et al., 2017) consists of 18k classes and 303M high-resolution images, and JFT-4B has 29.5k classes and 4B images. Additionally, we also explore ImageNet-21k as our pretraining dataset in Section F.

Public and private data. We define "public data" and "private data" only in the context of DP. **Public data** is simply a dataset over which we do not require a privacy guarantee, and **private data** is a sensitive dataset over which we do want a privacy guarantee. "Public data" should not be confused with a dataset that is freely available. With this definition in mind, it is completely plausible that a model trained with public data (in the DP sense) still cannot be shared widely without scrutiny due to ethical, societal or legal concerns. For instance, the JFT datasets we use as "public data" are not technically publicly available but have been used extensively as pre-training datasets in the non-private setting to obtain state-of-the-art results (Dosovitskiy et al., 2021; Brock et al., 2021a; Tolstikhin et al., 2021; Zhai et al., 2021). Finally, note that none of our finetuning datasets (e.g.
ImageNet-1K) in reality are sensitive datasets: we are simulating a public/private dataset split only for demonstration purposes (Kurakin et al., 2022; De et al., 2022). To make sure that our simulated "public" and "private" datasets capture a practical scenario, we carefully de-duplicate our pre-training datasets w.r.t. all splits of our finetuning datasets (Kolesnikov et al., 2020; Dosovitskiy et al., 2021). More details about this process can be found in the appendix.

Model variants. We evaluate the transfer learning capabilities of ResNet (He et al., 2016) and Vision Transformer (ViT) (Dosovitskiy et al., 2021) in our study. Note that in typical non-private training, ResNet employs batch normalization, which has a difficult-to-analyze effect on the sensitivity of the gradients and therefore makes computing the privacy parameters ε and δ difficult. Somewhat fortuitously, to improve transfer, Kolesnikov et al. (2020) replaced Batch Normalization with Group Normalization and used standardized convolutions, which have a much simpler privacy analysis. We therefore follow the same practice and denote our modified model "ResNet (BiT)". For Vision Transformer, we follow the standard notation to indicate the model size and the input patch size; for example, ViT-B/32 means the "Base" variant with 32x32 input patch size. Note that for ViT, compute requirements scale up as we reduce the patch size.

Training details. At the pre-training stage, we stick with the common practice of employing the Adam optimizer (even for ResNet) (Kingma & Ba, 2014) with β1 = 0.9 and β2 = 0.999, a batch size of 4096 and a high weight decay of 0.1, unless mentioned otherwise. We train with sigmoid cross-entropy loss and use linear learning rate warmup until 10k steps, followed by linear decay until the end of training. For our private finetuning experiments, we stick with a reasonably stringent privacy guarantee of ε = 10 and δ = 10−6, unless specified otherwise.
We use the DP-SGD privacy analysis to compute the noise multiplier. To limit other confounding factors, we set the clipping norm C to 1. Also, since training with DP-SGD is computationally expensive, we finetune on ImageNet for at most 10 epochs. Finally, when training the last layer with DP, we found it crucial to initialize the last layer weights to zero (or a small value). We expand on this more in Section 5.

## 4.1 DP-SGD With Large Batch Sizes

There is an inherent tradeoff between the privacy enforced by DP and the utility obtained from the model when trained privately. The decrease in the quality of the model stems from the additional randomness added to the gradients as part of DP-SGD. Even though there is precedence for gradient noise leading to superior generalization ability (Neelakantan et al., 2015; Smith et al., 2020), more often than not, in the context of differential privacy it leads to substantial degradation in model quality (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016). DP-SGD first adds the per-example gradients in a mini-batch, then adds a noise vector to the accumulated gradient, and finally divides by the batch size. Thus, increasing the batch size might seem to decrease the noise added, as σC/|Bt| decreases. The picture is actually more complicated: increasing the batch size also decreases the effect of amplification by sampling, which means that the increase in |Bt| must be partially offset by an increase in σ in order to maintain the same privacy guarantee. Regardless, even for large models, others (McMahan et al., 2018; Anil et al., 2021b; Dormann et al., 2021; Hoory et al., 2021; Liu et al., 2021; Kurakin et al., 2022) have observed that employing larger batch sizes improves the performance of DP-SGD. In our case, ImageNet contains roughly 1.3M examples; so we perform finetuning in the batch size range of 128k (2^17) to 1M (2^20) across our exploration and use the full batch setting for the final results.
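The first half of this tradeoff can be made concrete with a toy calculation. The sketch below holds the noise multiplier σ and clipping norm C fixed purely for illustration; in reality, as noted above, σ must grow with the batch size once amplification by sampling weakens, so the net gain is smaller than the 1/|B| scaling shown here.

```python
def effective_noise_std(sigma: float, C: float, batch_size: int) -> float:
    # Std of the Gaussian noise term in the *averaged* DP-SGD gradient:
    # the noise has std sigma * C before dividing by the batch size.
    return sigma * C / batch_size

# Hypothetical sigma = 8, C = 1, over the batch-size range used above.
for b in [2**17, 2**18, 2**19, 2**20]:
    print(f"|B| = {b:7d} -> noise std per coordinate = {effective_noise_std(8.0, 1.0, b):.2e}")
```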
| | | Without privacy | | | | With privacy | | | |
|-------------------|-------------|------|------|------|------|------|------|------|------|
| Model | Tuning type | 2^17 | 2^18 | 2^19 | 2^20 | 2^17 | 2^18 | 2^19 | 2^20 |
| ViT-B/16 | Full | 75.8 | 69.5 | 57.9 | 57.0 | 66.5 | 57.7 | 57.8 | 57.9 |
| ViT-B/16 | Last layer | 81.1 | 80.2 | 75.1 | 69.2 | 61.3 | 65.1 | 66.6 | 65.3 |
| ResNet50x4 (BiT) | Last layer | 74.1 | 69.4 | 61.9 | 46.1 | 39.6 | 43.6 | 47.1 | 49.2 |

Table 1: Comparison of Top-1 test accuracies on ImageNet when trained using DP-SGD with Momentum for various models with and without privacy. All models are pretrained on the JFT-300M dataset. Note that we exclude ResNet50x4 full-finetuning results since we did not see accuracies greater than 20% in our hyperparameter range even without privacy.

We finetune on ImageNet using two different schemes: 1) full transfer learning, where we finetune all the parameters of the model, and 2) last-layer tuning, where we only train a classifier on the features obtained from the pre-trained model. Despite all the recent advances made in engineering efficient implementations of DP-SGD for large models, last-layer tuning is significantly more cost-effective in the context of DP-SGD, since per-example gradients are trivial to compute and don't lead to much additional compute cost in comparison to non-private training. In fact, last-layer tuning can also be cast as a linear regression problem and solved exactly, but in this study we stick with gradient-based optimization methods, which let us rely on DP-SGD's privacy accounting for all our experiments. Finally, in both types of finetuning methods, we reinitialize the last layer and train it from scratch. As the first step in this section, we start with pre-training two medium-sized models, namely 1) ViT-B/16 and 2) ResNet50x4.
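The linear-regression view of last-layer tuning mentioned above can be made concrete: with a frozen backbone, the features are fixed inputs, so the head admits a closed-form solution. The sketch below is a non-private, ridge-regularized illustration on synthetic stand-ins for features and labels (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 64, 16, 10                     # examples, feature dim, classes
feats = rng.normal(size=(n, d))          # stand-in for frozen-backbone features
labels = np.eye(k)[rng.integers(0, k, size=n)]   # one-hot targets

# Ridge solution for the linear head: W = (X^T X + lam * I)^{-1} X^T Y.
lam = 1e-2
W = np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ labels)
logits = feats @ W                       # predictions of the exact head
```

As noted in the text, we nevertheless use gradient-based training in our experiments, since DP-SGD's privacy accounting then applies directly.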
We conduct finetuning in almost the same fashion as reported in non-private cases, except that our batch sizes are much larger and we replace the traditional Momentum optimizer with DP-SGD with Momentum. As shown in Table 1, as the batch size increases, performance in the non-private case decreases significantly as optimization quality degrades. However, with DP, we sometimes observe the reverse trend, even to the point where it is *better* than non-private training (ResNet50x4, batch size 2^20). Perhaps surprisingly, our results also suggest that training the full model is not necessarily better than training just the last layer. This is true even without privacy. We conjecture that this phenomenon is due to the optimization difficulty with large batch sizes.

## 4.2 Optimization Difficulty With Large Batch Sizes

Our results in Table 1 show that increasing the batch size improves the privacy-utility tradeoff (except for ViT-B/16-Full). However, without privacy, the performance seems to suffer as the batch size is increased in all cases. This phenomenon, outside the context of differential privacy, has been studied by others already (Krizhevsky, 2014; Li et al., 2014; Goyal et al., 2017; You et al., 2017; Hoffer et al., 2018). Most notably, You et al. (2019) introduced the LAMB optimizer to address this issue, which modifies AdamW (Loshchilov & Hutter, 2017) with a layer-wise adaptive learning rate correction. With LAMB, they trained both ResNet-50 and BERT with batch sizes up to 32k. In this work, we would like to train our models with even larger batch sizes, i.e., in the range of 128k–1M. Tuning only the learning rate, we apply the LAMB optimizer to both models across this batch size range, with and without privacy, on ImageNet; the results are in Table 2. Switching from Momentum to LAMB provides improvements in several dimensions.
First, without privacy, performance at the lower end of our batch size range comes quite close to that obtained with a much lower batch size of 512 (as proposed by Dosovitskiy et al. (2021)). Most notably, we see a big jump in performance for ResNet-50x4 when training the full model: test accuracy with Momentum did not surpass 20% (and hence was excluded from Table 1), whereas test accuracy with LAMB is comparable to ViT-B/16 across batch sizes. Second, the batch size most attractive in our context is 2^20 (around 1M); note that this batch size is close to the ImageNet training dataset size of around 1.28M. We observe a significant increase in accuracy when switching to LAMB for batch size 2^20. Lastly, the increase in accuracies in the non-private setting helps when training the models privately across the board, to the extent that we obtain the best private finetuning numbers at the largest batch size of 2^20.

| | | | Without privacy | | | | With privacy | | | |
|------------|-------------|------|------|------|------|------|------|------|------|------|
| Model | Tuning type | Best | 2^17 | 2^18 | 2^19 | 2^20 | 2^17 | 2^18 | 2^19 | 2^20 |
| ViT-B/16 | Full | 84.8 | 81.7 | 77.1 | 76.3 | 73.1 | 68.2 | 70.5 | 70.5 | 71.1 |
| ViT-B/16 | Last layer | 81.2 | 80.7 | 76.4 | 75.9 | 73.3 | 66.4 | 69.1 | 70.1 | 70.6 |
| ResNet50x4 | Full | 85.3 | 83.7 | 79.5 | 74.5 | 72.6 | 55.6 | 64.5 | 68.6 | 70.1 |
| ResNet50x4 | Last layer | 82.9 | 82.7 | 79.1 | 73.5 | 72.7 | 52.5 | 61.7 | 66.1 | 68.8 |

Table 2: Comparison of Top-1 test accuracies on ImageNet when trained using DP-LAMB for various models with and without privacy. Similar to Table 1, all models are pretrained on the JFT-300M dataset. The numbers in the Best column are obtained with the Momentum optimizer in the non-private setting with batch size 512 and the remaining hyperparameters reproduced from Dosovitskiy et al. (2021).

| Model | Tuning type | Best | 2^20 (without privacy) | 2^20 (with privacy) |
|-------------------|-------------|------|------|------|
| ResNet152x4 (BiT) | Full | 87.1 | 77.0 | 72.9 |
| ResNet152x4 (BiT) | Last layer | 84.6 | 76.9 | 72.9 |
| ViT-L/16 | Full | 87.2 | 79.6 | 77.4 |
| ViT-L/16 | Last layer | 84.8 | 79.4 | 77.3 |
| ViT-H/14 | Full | 88.2 | 81.5 | 80.1 |
| ViT-H/14 | Last layer | 86.2 | 81.4 | 80.0 |
| ViT-H/14 - 4B | Full | 88.7 | 82.5 | 81.0 |
| ViT-H/14 - 4B | Last layer | 86.9 | 82.5 | 81.0 |

Table 3: Comparison of Top-1 test accuracies on ImageNet when the model and dataset size are increased. We observe consistent improvements when scaling the model from ViT-B/16 to ViT-L/16 and even ViT-H/14. Similarly, scaling the pretraining dataset from JFT-300M to JFT-4B also improves both private and non-private performance. As before, all models are trained using DP-LAMB with and without privacy. The numbers in the Best column are obtained with the Momentum optimizer in the non-private setting with batch size 512 and the remaining hyperparameters reproduced from Dosovitskiy et al. (2021).

## 4.3 Scaling Analysis

In this section, we systematically study the effect of scaling both the model size and the pre-training dataset. We follow Dosovitskiy et al. (2021) and experiment with three additional models, namely ResNet152x4, ViT-L/16, and ViT-H/14. Similar to earlier experiments, we pretrain these models on JFT-300M. To study the effect of increasing the pre-training data, we consider one more variant, "ViT-H/14 - 4b", where we pre-train a ViT-H/14 model on the JFT-4B dataset for a single epoch. Due to the excessive computational cost of running these experiments, we only consider a batch size of 2^20 for all our results in this section. As shown in Table 3, for both model families, namely ResNet (BiT) and Vision Transformers, increasing the model size improves accuracy in both non-private and private settings. We find this to be the case regardless of whether we train the full model or just the last layer.
Perhaps even more encouraging is the fact that the gap between the performance with privacy and the best numbers without privacy also decreases as the model size increases. For instance, the gap for ViT-L/16 (last-layer tuning) between the best non-private and the private case is 7.5 percentage points, and for ViT-H/14 in the same setting it is 6.2 percentage points. We also present results when the pre-training dataset size is increased, training on the much larger JFT-4B dataset instead of JFT-300M. Note that in this case we only train for a single epoch with JFT-4B, which results in a number of pre-training steps similar to training on JFT-300M for 14 epochs. This means that changes in performance between ViT-H/14 and ViT-H/14-4b can be largely attributed to the diversity of examples seen and not necessarily "more" pre-training. As shown in the Best column in Table 3, we notice a moderate improvement (about half a percentage point) between ViT-H/14 and ViT-H/14-4b, but when training with privacy at a much larger batch size, we see an even larger improvement (a full percentage point). Lastly, when comparing full finetuning and last-layer tuning in the ViT-H/14-4b setting, we still see a noticeable gap in the Best column (trained with batch size 512). However, when trained at a much larger batch size of 2^20, the difference shrinks significantly, regardless of whether the model was trained with privacy or not. This is quite fortuitous, since the computational cost of training with privacy is much larger when we finetune the full model rather than just the last layer. In the former case, vanilla DP-SGD finetuning as proposed in Abadi et al. (2016) needs access to the per-example gradient of the full model, whereas when tuning only the last layer, the per-example gradient of just the last layer suffices. The difference can be even more striking when the models are made larger (Appendix Table 10).
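For a linear last layer trained with sigmoid cross-entropy, the per-example gradients DP-SGD needs even have a simple closed form — the outer product of an example's features with its prediction error — so no per-example backpropagation through the backbone is required. A small sketch with synthetic data (function and variable names are hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def last_layer_per_example_grads(feats, labels, W):
    """Per-example gradients of the sigmoid cross-entropy loss w.r.t.
    the head weights W (shape (d, k)): grad_i = outer(feats[i], err[i])."""
    err = sigmoid(feats @ W) - labels            # (n, k) prediction errors
    return feats[:, :, None] * err[:, None, :]   # (n, d, k)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 5))                  # cached features
labels = np.eye(3)[rng.integers(0, 3, size=8)]
W = np.zeros((5, 3))                             # zero-initialized head
grads = last_layer_per_example_grads(feats, labels, W)
```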
Note that when tuning only the last layer, we still perform a forward pass every time an example is visited, largely due to data augmentation. An alternative is to perform the forward pass over the whole dataset once and cache the features to be trained with DP. This is further simplified when data augmentation is turned off.

## 5 Influence Of Other Hyperparameters

Across our exploration so far, the only hyperparameter we tuned was the learning rate. We did this to limit the scope of our exploration, since each added variable would result in a multiplicative increase in compute. Outside the context of differential privacy, though, good performance on image classification tasks can depend on many other factors, such as data augmentation (Perez & Wang, 2017; Cubuk et al., 2018), initialization (Hanin & Rolnick, 2018; Zhang et al., 2019; Mehta et al., 2021; Brock et al., 2021c), feature/data normalization (Ioffe & Szegedy, 2015; Wu & He, 2018), the resolution of the input image, the number of training iterations, etc. It is possible that when training with privacy, the optimal choices of these settings are different. Thus, in this section, we systematically unpack a potentially mysterious bag of choices when training the model with a privacy guarantee. To do this, we take the best-performing model so far, i.e., ViT-H/14-4b, and change one hyperparameter at a time, as shown in Table 4. Note that since there was no performance difference between training the full model and last-layer tuning on ViT-H/14-4b (Table 3), we only tune the last layer for this study.
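The caching alternative described above amounts to running the backbone once and then training the head entirely on stored features. A toy sketch (the `backbone` function and all shapes are placeholders; the head update is plain gradient descent for brevity — a DP run would clip and noise it as in DP-SGD):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(64, 8))                # stand-in dataset
labels = np.eye(3)[rng.integers(0, 3, size=64)]

def backbone(x):
    """Stand-in for the frozen pre-trained model (augmentation off)."""
    return np.tanh(x @ rng.normal(size=(8, 4)))

feats = backbone(images)   # expensive forward pass: executed exactly once

W = np.zeros((4, 3))       # linear head trained on the cached features
for step in range(20):     # no further backbone evaluations needed
    probs = 1.0 / (1.0 + np.exp(-(feats @ W)))
    W -= 0.5 * feats.T @ (probs - labels) / len(feats)
```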
| ViT-H/14 - 4b | Accuracy | ∆ |
|--------------------------------|---------------|-------|
| No change | 81.0 | - |
| DP-LAMB → DP-Adam | 81.0 | - |
| DP-LAMB → DP-SGD with Momentum | 77.2 | -3.8 |
| Zero Init → Lecun Normal Init | Random Chance | -81.0 |
| Inception Crop → Center Crop | 81.2 | +0.2 |
| 10 epochs → 1 epoch | 81.1 | +0.1 |
| Clip at 1 → 10 | 80.1 | -0.9 |
| Clip at 1 → 0.1 | 81.0 | - |
| Resolution 384 → 512 | 79.0 | -2.0 |
| Resolution 384 → 256 | 81.4 | +0.4 |

Table 4: Taking the best-performing model from the previous sections (ViT-H/14-4b), we systematically study the effect of changing a single hyperparameter at a time. Combining the positive deltas from this study led to a cumulative improvement, and we used the resulting model to present the numbers in Figure 1b.

Choice of optimizer. As shown in Table 4, switching from DP-LAMB to DP-Adam did not change accuracy at all in this setting; however, switching to DP-SGD with Momentum leads to a noticeable decrease in accuracy. This suggests that the layer-wise adaptation of the learning rate, as suggested by LAMB, is much less crucial than the contribution made by the Adam update. This is perhaps not surprising, since we are only learning the last layer and the layer-wise adaptivity provided by LAMB can be subsumed in the learning rate, which we tune.

Initialization. From our results, the choice of initialization seems to be a crucial hyperparameter. Switching the last layer from zero initialization to Lecun Normal (the default in Flax for dense layers) resulted in a complete lack of progress and random-chance performance on the test set when we trained privately. In the non-private case, zero initialization has a special property: the learning rate used *in the first step* can be decreased arbitrarily (up to precision limits), since it only affects how much the loss decreases and not the accuracy (because we use the sigmoid cross-entropy loss).
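This property can be sanity-checked numerically: with a zero-initialized head, the first gradient does not depend on the learning rate, so scaling the first-step learning rate rescales the logits by a positive constant and leaves the predicted classes — and hence accuracy — unchanged (a toy sketch on synthetic data, not our actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 6))
labels = np.eye(4)[rng.integers(0, 4, size=32)]

def predictions_after_one_step(lr):
    W = np.zeros((6, 4))                        # zero-initialized head
    probs = 1.0 / (1.0 + np.exp(-(feats @ W)))  # all 0.5 at W = 0
    grad = feats.T @ (probs - labels)           # independent of lr
    W -= lr * grad
    return np.argmax(feats @ W, axis=1)         # accuracy depends only on argmax

# Predicted classes after one step match for any positive learning rate.
tiny_lr = predictions_after_one_step(1e-4)
big_lr = predictions_after_one_step(1.0)
```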
This is useful when training privately, since the update is artificially made noisy for the privacy guarantee. In this case, the magnitude of the update can be controlled using the learning rate without changing the potential accuracy obtained from the original gradient before the noise was added. This argument is very similar in spirit to the one made by Bu et al. (2021), who show that the noise multiplier σ vanishes and does not affect convergence in the gradient-flow regime. Note that there is nothing special about initializing at absolute zero: as long as the **initialization is small enough** to allow a small learning rate in the first step, we find similar results, as shown in Figure 2c.

Number of epochs. Since every visitation of the dataset affects the privacy budget, training for fewer epochs requires a lower noise multiplier σ for the same values of (*ε, δ*). Typically, the advantage of less noise is outweighed by the performance gained from training longer (Kurakin et al., 2022; De et al., 2022). However, we observe that training for a single epoch performs slightly better than training for 10, as shown in Table 4.

![8_image_0.png](8_image_0.png)

Figure 2: We consider the ViT-H/14-4b model in the single-step, full-batch setting where only the last layer is finetuned and plot a more fine-grained relationship between a few important hyperparameters and test accuracy on ImageNet. (a) LAMB outperforms SGD with Momentum in the non-private setting when only the last layer is finetuned. Note that in the first step, since the norm of the weights is zero, LAMB is equivalent to Adam. (b) We find a non-trivial relationship between input resolution and large-batch training, where a sweet spot of 256x256 image resolution performs best. (c) Increasing the scale of the initialization of the last layer degrades test accuracy when trained with DP. With a big enough initialization, test accuracy degenerates to random-chance performance.
Note that the phenomenon observed in (a) and (b) can be attributed to the use of large batch sizes and not necessarily DP, but (c) is more tied to the use of DP, as larger initializations are actually *standard* without DP.

## 5.1 Extreme Case Of Full Batch And Single Step Update

Since we observed the best results when the model was trained for a single epoch, we consider an extreme case where we finetune the model for a **single step** in the full-batch setting. We also combine all the positive changes from Table 4, namely 1) replacing Inception Crop with Central Crop and 2) changing the resolution of the input image from 384 to 256. As shown in Figure 1b, as we hoped, these changes provide improvements almost orthogonal to each other, through which we obtain SOTA results on ImageNet across all values of ε we tried. Finally, we also observe consistent improvements from the scale of pre-training alone in Figure 1a.

| Finetuning Dataset | Non-Private | ε = 0.5 | ε = 1.0 | ε = 2.0 | ε = 4.0 | ε = 8.0 |
|--------------------|-------------|------|------|------|------|------|
| CIFAR-10 | 95.2 | 95.1 | 95.1 | 95.1 | 95.1 | 95.2 |
| CIFAR-100 | 79.9 | 78.3 | 79.5 | 79.7 | 79.7 | 79.7 |
| ImageNet-1k | 73.8 | 19.0 | 39.1 | 54.2 | 63.3 | 68.3 |

Table 5: Comparison of Top-1 test set accuracies when pre-training a ViT-B/16 model on the ImageNet-21k dataset. All other details are given in Appendix Section F.

## 5.2 Pretraining With ImageNet-21k

In this section, to test our own recommendations on other tasks, we explore using ImageNet-21k as our pre-training dataset. As shown in Table 5, even when pretrained on the much smaller ImageNet-21k dataset, our recommendations beat the previous state of the art of 47.9% accuracy on ImageNet-1K (Kurakin et al., 2022) with only 1 epoch of finetuning on last-layer features. For CIFAR-10, Klause et al. (2022) obtain an impressive 82.5% top-1 accuracy for ε = 8 without leveraging transfer learning.
With private finetuning, we improve this result to 95.2% at ε = 8. Finally, at stringent epsilon values like ε = 1, our results outperform the best results of 94.7% on CIFAR-10 and 70.3% on CIFAR-100 from the concurrent work of De et al. (2022), even when additional data is used (i.e., employing transfer learning).

## 6 Conclusion

We demonstrate that large-scale pretraining on a public dataset is an effective strategy for obtaining good results when finetuning privately. Moreover, scaling both the model size and the pre-training dataset improves the performance of the private model and narrows the gap to the accuracy obtained in the non-private setting. We also find that gains from transfer learning can be further amplified by switching to an optimizer that works well with large batch sizes, LAMB in our case. In addition, our exploration allowed us to obtain a significant improvement over existing results on training ImageNet privately across a wide range of ε by finetuning just the last layer for only 1 epoch (1 step in the full-batch setting, as shown in Figure 1b). This significantly reduces the computational cost of training with privacy. Additionally, recent low-rank update techniques other than last-layer finetuning can also be leveraged for effective DP training (Yu et al., 2021). We leave this interesting direction to future work. Finally, similar to the baselines, we note that our privacy accounting does not include any cost of hyperparameter tuning, which is another interesting direction for subsequent exploration (Papernot & Steinke, 2022).

## Bibliography

Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In *Proc. of the 2016 ACM SIGSAC Conf. on Computer and Communications Security (CCS'16)*, pp. 308–318, 2016.

Galen Andrew, Om Thakkar, Hugh Brendan McMahan, and Swaroop Ramaswamy. Differentially private learning with adaptive clipping. In A. Beygelzimer, Y. Dauphin, P. Liang, and J.
Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=RUQ1zwZR8_.

Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. Large-scale differentially private BERT. *CoRR*, abs/2108.01624, 2021a. URL https://arxiv.org/abs/2108.01624.

Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. Large-scale differentially private BERT. *CoRR*, abs/2108.01624, 2021b.

Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, and Kunal Talwar. Private adaptive gradient methods for convex optimization. In *International Conference on Machine Learning*, pp. 383–392. PMLR, 2021.

Borja Balle, Giovanni Cherubin, and Jamie Hayes. Reconstructing training data with informed adversaries, 2022.

Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In *Proc. of the 2014 IEEE 55th Annual Symp. on Foundations of Computer Science (FOCS)*, pp. 464–473, 2014.

Lucas Beyer, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. Are we done with imagenet? *ArXiv*, abs/2006.07159, 2020.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, and Skye Wanderman-Milne. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Andrew Brock, Soham De, and Samuel L Smith. Characterizing signal propagation to close the performance gap in unnormalized resnets. In *International Conference on Learning Representations*, 2021a. URL https://openreview.net/forum?id=IX3Nnir2omJ.

Andrew Brock, Soham De, Samuel L. Smith, and Karen Simonyan. High-performance large-scale image recognition without normalization, 2021b.

Andy Brock, Soham De, Samuel L Smith, and Karen Simonyan. High-performance large-scale image recognition without normalization.
In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 1059–1071. PMLR, 18–24 Jul 2021c. URL https://proceedings.mlr.press/v139/brock21a.html. Zhiqi Bu, Hua Wang, and Qi Long. On the convergence and calibration of deep learning with differential privacy, 2021. Zhiqi Bu, Jialin Mao, and Shiyun Xu. Scalable and efficient training of large convolutional neural networks with differential privacy, 2022. Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In *Proceedings of the 28th USENIX Conference* on Security Symposium, 2019. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In *USENIX Security*, 2021. Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. *Journal of Machine Learning Research*, 12(Mar):1069–1109, 2011. Christopher A. Choquette-Choo, Florian Tramer, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks, 2020. Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data, 2018. Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. Randaugment: Practical automated data augmentation with a reduced search space. *2020 IEEE/CVF Conference on Computer Vision* and Pattern Recognition Workshops (CVPRW), Jun 2020. doi: 10.1109/cvprw50498.2020.00359. URL http://dx.doi.org/10.1109/CVPRW50498.2020.00359. Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, and Borja Balle. Unlocking high-accuracy differentially private image classification through scale, 2022. 
URL https://arxiv.org/abs/2204.13650. Mostafa Dehghani, Alexey Gritsenko, Anurag Arnab, Matthias Minderer, and Yi Tay. Scenic: A JAX library for computer vision research and beyond. *arXiv preprint arXiv:2110.11403*, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Friedrich Dormann, Osvald Frisk, Lars Norvang Andersen, and Christian Fischer Pedersen. Not all noise is accounted equally: How differentially private learning benefits from large sampling rates. *2021 IEEE* 31st International Workshop on Machine Learning for Signal Processing (MLSP), Oct 2021. doi: 10.1109/ mlsp52302.2021.9596307. URL http://dx.doi.org/10.1109/mlsp52302.2021.9596307. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In *Advances in Cryptology—EUROCRYPT*, pp. 486–503, 2006a. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In *Proc. of the Third Conf. on Theory of Cryptography (TCC)*, pp. 265–284, 2006b. URL http://dx.doi.org/10.1007/11681878_14. Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta. Amplification by shuffling: From local to central differential privacy via anonymity. In Timothy M. 
Chan (ed.), *Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA* 2019, San Diego, California, USA, January 6-9, 2019, pp. 2468–2479. SIAM, 2019. doi: 10.1137/1. 9781611975482.151. URL https://doi.org/10.1137/1.9781611975482.151. Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in linear time. In *Proc. of the Fifty-Second ACM Symp. on Theory of Computing (STOC'20)*, 2020a. Vitaly Feldman, Audra McMillan, and Kunal Talwar. Hiding among the clones: A simple and nearly optimal analysis of privacy amplification by shuffling. *arXiv preprint arXiv:2012.12803*, 2020b. Roy Frostig, Matthew Johnson, and Chris Leary. Compiling machine learning programs via high-level tracing. 2018. URL https://mlsys.org/Conferences/doc/2018/146.pdf. Ian J. Goodfellow. Efficient per-example gradient computations. *ArXiv*, abs/1510.01799, 2015. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour, 2017. Boris Hanin and David Rolnick. How to start training: The effect of initialization and architecture, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), *Computer Vision - ECCV 2016*, 2016. Jonathan Heek, Anselm Levskaya, Avital Oliver, Marvin Ritter, Bertrand Rondepierre, Andreas Steiner, and Marc van Zee. Flax: A neural network library and ecosystem for JAX, 2020. URL http://github.com/ google/flax. Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks, 2018. Shlomo Hoory, Amir Feder, Avichai Tendler, Sofia Erell, Alon Cohen, Itay Laish, Hootan Nakhost, Uri Stemmer, Ayelet Benjamini, Avinatan Hassidim, and Yossi Matias. 
Learning and evaluating a differentially private pre-trained language model. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pp. 1178–1189, Punta Cana, Dominican Republic, 2021. URL https://aclanthology.org/2021. findings-emnlp.102/. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift, 2015. Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam D. Smith. What can we learn privately? In *49th Annual IEEE Symp. on Foundations of Computer Science (FOCS)*, pp. 531–540, 2008. Daniel Kifer, Adam Smith, and Abhradeep Thakurta. Private convex empirical risk minimization and high-dimensional regression. In *Conference on Learning Theory*, pp. 25–1, 2012. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014. Helena Klause, Alexander Ziller, Daniel Rueckert, Kerstin Hammernik, and Georgios Kaissis. Differentially private training of residual networks with scale normalisation, 2022. Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. *Lecture Notes in Computer Science*, pp. 491–507, 2020. ISSN 1611-3349. doi: 10.1007/978-3-030-58558-7_29. URL http://dx.doi.org/10. 1007/978-3-030-58558-7_29. Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks, 2014. Alexey Kurakin, Shuang Song, Steve Chien, Roxana Geambasu, Andreas Terzis, and Abhradeep Thakurta. Toward training at imagenet scale with differential privacy, 2022. Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. Efficient mini-batch training for stochastic optimization. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pp. 661–670, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450329569. doi: 10.1145/2623330.2623612. 
URL https://doi.org/10.1145/2623330.2623612.

Xuechen Li, Florian Tramer, Percy Liang, and Tatsunori Hashimoto. Large language models can be strong differentially private learners. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=bVuP3ltATMz.

Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, and Yang Zhang. Ml-doctor: Holistic risk assessment of inference attacks against machine learning models, 2021.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization, 2017.

H Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. *arXiv preprint arXiv:1710.06963*, 2017.

H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. Learning differentially private recurrent language models. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=BJ0hF1Z0b.

Harsh Mehta, Ashok Cutkosky, and Behnam Neyshabur. Extreme memorization via scale of initialization. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=Z4R1vxLbRLO.

Ilya Mironov. Rényi differential privacy. In *2017 IEEE 30th Computer Security Foundations Symposium (CSF)*, pp. 263–275. IEEE, 2017.

Ilya Mironov, Kunal Talwar, and Li Zhang. Rényi differential privacy of the sampled gaussian mechanism, 2019.

Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, and Nicholas Carlini. Adversary instantiation: Lower bounds for differentially private machine learning. *2021 IEEE Symposium on Security and Privacy (SP)*, May 2021. doi: 10.1109/sp40001.2021.00069. URL http://dx.doi.org/10.1109/sp40001.2021.00069.

Arvind Neelakantan, Luke Vilnis, Quoc V. Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks, 2015.

Nicolas Papernot and Thomas Steinke.
Hyperparameter tuning with renyi differential privacy. In *International* Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=-70L8lpp9DF. Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson. Scalable private learning with pate. *arXiv preprint arXiv:1802.08908*, 2018. Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning, 2017. Venkatadheeraj Pichapati, Ananda Theertha Suresh, Felix X Yu, Sashank J Reddi, and Sanjiv Kumar. Adaclip: Adaptive clipping for private sgd. *arXiv preprint arXiv:1908.07643*, 2019. Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille. Micro-batch training with batch-channel normalization and weight standardization, 2019. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: the rprop algorithm. In *IEEE International Conference on Neural Networks*, pp. 586–591 vol.1, 1993. doi: 10.1109/ ICNN.1993.298623. Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In *2017 IEEE Symposium on Security and Privacy (SP)*, pp. 3–18, 2017. Samuel L. Smith, Erich Elsen, and Soham De. On the generalization benefit of noise in stochastic gradient descent, 2020. Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In *2013 IEEE Global Conference on Signal and Information Processing*, pp. 245–248. IEEE, 2013. Shuang Song, Thomas Steinke, Om Thakkar, and Abhradeep Thakurta. Evading the curse of dimensionality in unconstrained private glms. In Arindam Banerjee and Kenji Fukumizu (eds.), Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, volume 130 of *Proceedings of Machine* Learning Research, pp. 2638–2646. PMLR, 13–15 Apr 2021. URL https://proceedings.mlr.press/ v130/song21a.html. 
Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your ViT? Data, augmentation, and regularization in vision transformers, 2021.

Pranav Subramani, Nicholas Vadivelu, and Gautam Kamath. Enabling fast differentially private SGD via just-in-time compilation and vectorization. *arXiv preprint arXiv:2010.09063*, 2020.

Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. *2017 IEEE International Conference on Computer Vision (ICCV)*, Oct 2017. doi: 10.1109/iccv.2017.97. URL http://dx.doi.org/10.1109/iccv.2017.97.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. *2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 1–9, 2015.

Kunal Talwar, Abhradeep Guha Thakurta, and Li Zhang. Nearly optimal private LASSO. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/52d080a3e172c33fd6886a37e7288491-Paper.pdf.

Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Peter Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, and Alexey Dosovitskiy. MLP-Mixer: An all-MLP architecture for vision. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=EI2KOXKdnP.

Florian Tramèr and Dan Boneh. Differentially private learning needs better features (or much more data). In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=YTWGvpFOQD-.

Yu-Xiang Wang, Borja Balle, and Shiva Kasiviswanathan.
Subsampled Rényi differential privacy and analytical moments accountant. *Journal of Privacy and Confidentiality*, 10(2), Jun 2020. ISSN 2575-8527. doi: 10.29012/jpc.723. URL http://dx.doi.org/10.29012/jpc.723.

Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, and Jeffrey F. Naughton. Bolt-on differential privacy for scalable stochastic gradient descent-based analytics, 2016.

Yuxin Wu and Kaiming He. Group normalization. *Lecture Notes in Computer Science*, pp. 3–19, 2018. ISSN 1611-3349. doi: 10.1007/978-3-030-01261-8_1. URL http://dx.doi.org/10.1007/978-3-030-01261-8_1.

Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks, 2017.

Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes, 2019.

Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, and Huishuai Zhang. Differentially private fine-tuning of language models, 2021.

Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers, 2021.

Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization, 2017.

Hongyi Zhang, Yann N. Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization, 2019.

Yuqing Zhu and Yu-Xiang Wang. Poisson subsampled Rényi differential privacy. In *International Conference on Machine Learning*, pp. 7634–7642. PMLR, 2019.

## A Algorithmic Details

We present below a generalized version of DP-SGD where the gradients get processed in the traditional DP-SGD fashion and are then passed to a first order optimizer as an input. This lets us instantiate DP versions of well known optimizers like SGD, Momentum, Adam and LAMB.
We prepend the optimizer's name with DP to denote that the gradients were first processed as shown in Algorithm 1 and then passed to the said optimizer.

Algorithm 1 Generalized First Order Differentially Private Algorithm

Require: Data set $D = \{(x_1, y_1), \dots, (x_n, y_n)\}$ with $(x_i, y_i) \in \mathcal{D}$, loss function $\ell : \mathbb{R}^d \times \mathcal{D} \to \mathbb{R}$, a first order optimizer Opt, clipping norm $C$, number of iterations $T$, noise multiplier $\sigma$
1: Randomly initialize $\theta_0$.
2: **for** $t = 0, \dots, T-1$ **do**
3: Select randomly without replacement a mini-batch of examples $B_t \subseteq D$
4: $g_t \leftarrow \sum_{(x,y) \in B_t} \mathrm{clip}(\nabla \ell(\theta_t; (x, y)))$, where $\mathrm{clip}(v) = v \cdot \min\left\{1, \frac{C}{\|v\|_2}\right\}$
5: $\tilde{g}_t \leftarrow \frac{1}{|B_t|}\left(g_t + \mathcal{N}\left(0, (\sigma C)^2\right)\right)$
6: $\theta_{t+1} \leftarrow$ single step of first order optimization with gradient $\mathrm{Opt}(\tilde{g}_t)$
7: **end for**
8: **return** $\frac{1}{T}\sum_{t=1}^{T} \theta_t$ or $\theta_T$

## B Privacy Analysis

The privacy parameters $(\varepsilon, \delta)$ are functions of $C$, $\sigma$, $|B_t|$, $|D|$, and the total number of iterations $T$. The DP-SGD algorithm involves setting the right clipping norm $C$ and the noise multiplier $\sigma$ given a privacy budget, batch size, and dataset size. The $(\varepsilon, \delta)$ guarantee is computed by analysis of the Gaussian mechanism with privacy amplification by subsampling and composition across iterations (Kasiviswanathan et al., 2008; Bassily et al., 2014; Abadi et al., 2016; Mironov, 2017; McMahan et al., 2017; Mironov et al., 2019; Erlingsson et al., 2019; Zhu & Wang, 2019; Feldman et al., 2020b; Wang et al., 2020). Our implementation relies on the Tensorflow Privacy1 codebase for conversion of $(\varepsilon, \delta)$ and clipping norm $C$ to/from the noise multiplier $\sigma$. To put the epsilon-delta values in context, our best model obtains 81.7% accuracy for $\varepsilon \in [4, 10]$. The privacy guarantee for $\varepsilon \approx 4$ (Figure 1a) in fact satisfies the much stronger property of zCDP ≤ 1 (0.154 for $\varepsilon = 4$), which is by now an industry standard.

## C Training Details

The core bottleneck of DP-SGD is the computational cost of computing per-example gradients.
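For concreteness, the clip-and-noise step (lines 4–5 of Algorithm 1) can be sketched in plain NumPy. This is an illustrative sketch only, not the authors' vectorized JAX implementation; the function name and loop structure are our own.

```python
import numpy as np


def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """Lines 4-5 of Algorithm 1: clip each per-example gradient to L2
    norm `clip_norm`, sum them, add Gaussian noise with standard
    deviation `noise_multiplier * clip_norm`, and average over the batch."""
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)  # guard against zero gradients
        clipped.append(g * min(1.0, clip_norm / norm))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The result $\tilde{g}_t$ is then handed to the chosen first order optimizer (SGD, Momentum, Adam, or LAMB), which is what distinguishes DP-SGD from DP-LAMB and the other variants.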
A naive implementation of per-example gradient calculation can lead to a dramatic reduction in throughput and an increase in memory usage (proportional to the batch size) compared to non-private training. Inspired by Subramani et al. (2020), we conduct all our experiments in Jax (Bradbury et al., 2018; Frostig et al., 2018), a framework that leverages just-in-time compilation using XLA2 and auto-vectorization of the backward pass. We leverage this functionality throughout our experiments. Finally, we conduct our experiments on the TPUv4 architecture. We conduct all our experiments using the Scenic library (Dehghani et al., 2021) for high quality reproducible implementations of both ResNet (BiT) and Vision Transformers. Scenic, in turn, uses Flax (Heek et al., 2020) for many of the layer definitions. We will also open-source our implementation of DP-SGD and DP-LAMB, which includes vectorized per-example gradient clipping in a distributed training setup, for further auditing by the research community. For the privacy accounting, we rely on the default Rényi accountant implementation already open-sourced as part of the Tensorflow Privacy library.

1https://github.com/tensorflow/privacy
2https://www.tensorflow.org/xla

## C.1 Model Details

We use 2 competitive model families for our experiments: 1) ResNet (BiT) and 2) Vision Transformer.

ResNet (BiT). Originally proposed in Kolesnikov et al. (2020), it modifies the original ResNet architecture (He et al., 2016) by replacing Batch Normalization with Group Normalization (Wu & He, 2018) and additionally using Weight Standardization for the convolutional layers (Qiao et al., 2019).

Vision Transformers. We use the exact architecture and notation proposed by Dosovitskiy et al. (2021). The Vision Transformer creates 2d input patches of images and applies the Transformer backbone typically used in NLP tasks.
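As a rough illustration of this patching step, the snippet below is a simplified NumPy sketch, not the Scenic implementation, and `patchify` is a hypothetical helper name:

```python
import numpy as np


def patchify(image, patch_size):
    """Split an H x W x C image into non-overlapping patch_size x patch_size
    patches and flatten each one (assumes H and W are divisible by patch_size)."""
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4)  # (h//p, w//p, p, p, c)
    return patches.reshape(-1, p * p * c)       # (num_patches, p*p*c)
```

For ViT-B/16 with 224x224x3 inputs, this yields 14 × 14 = 196 patches, each a vector of dimension 16 · 16 · 3 = 768, which are then linearly embedded and fed to the Transformer.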
In addition to the models trained in the original paper, we make one addition of ViT-H/14-4b, denoting a ViT-H/14 size model pretrained on the larger JFT-4B dataset.

## D Pretraining Details With JFT

Datasets. We use 2 variants of JFT datasets for our pre-training. JFT-300M (Sun et al., 2017) consists of 18k classes and 303M high-resolution images, while JFT-4B consists of 29.5k classes with 4B images. Note that we really do mean JFT-4B, which is a larger version of JFT-3B as used in Zhai et al. (2021).

Deduplication. In order to avoid both inflating our results and breaking the privacy guarantee offered by finetuning privately on ImageNet, we extend the deduplication process proposed by Kolesnikov et al. (2020) and deduplicate both JFT-300M and JFT-4B with respect to all splits of ImageNet. We use a model-based deduplication system which removes both exact and near-duplicates across common image transformations like crop, shift, and resize.

Hyperparameters. At the pre-training stage, we stick with the common practice of employing the Adam optimizer (even for ResNet) with β1 = 0.9 and β2 = 0.999, with a batch size of 4096 and a high weight decay of 0.1. We also use linear learning rate warmup until 10k steps and linearly decay it until the end. All our models are pre-trained with 224x224-sized JFT images.

| Model | Epochs | Base LR | TPU v4 hours | Model size |
|--------------------|----------|-----------|----------------|--------------|
| ViT-B/16 | 7 | 8 · 10−4 | 0.6k | 86M |
| ViT-L/16 | 14 | 4 · 10−4 | 4k | 307M |
| ViT-H/14 | 14 | 3 · 10−4 | 15k | 632M |
| ViT-H/14 - 4B | 1 | 3 · 10−4 | 15k | 632M |
| ResNet-50x4 (BiT) | 7 | 1 · 10−3 | 8k | 384M |
| ResNet-152x4 (BiT) | 7 | 6 · 10−4 | 11.4k | 937M |

Table 6: Pre-training hyperparams. All models are trained on deduped JFT-300M with the exception of ViT-H/14-4B, which was trained on the much larger JFT-4B dataset but for roughly the same number of steps as ViT-H/14.
We used a batch size of 4096, learning rate warmup of 10k steps and then linear decay. Additionally, we set the dropout rate to 0.0, clip the global norm to 1, and set weight decay to 0.1. We use images of resolution 224x224.

Training details. All our models were pre-trained using TPUv4 hardware, with exact amounts depending on the model. For reference, our smallest model ViT-B/16 took around 600 TPUv4 hours to pre-train and our biggest model ViT-H/14-4b took 25k TPUv4 hours.

## E Finetuning Details

We finetune on the ImageNet train split and present the Top-1 accuracies we obtain on the official test split. Unless specified otherwise, we use images of input resolution 384x384, inception-cropped (Szegedy et al., 2015) from a resolution of 448x448. In addition, we perform horizontal flipping as data augmentation. These pre-processing steps exactly follow Kolesnikov et al. (2020); Dosovitskiy et al. (2021).

## E.1 SGD With Momentum Hyperparameters

| Hyperparameters | Range |
|--------------------|---------------------------------------------|
| Learning rate | 0.03, 0.06, 0.1, 0.4, 1.6, 6.4, 25.6, 102.4 |

Table 7: Finetuning hyperparams. Models are trained for 10 epochs, with 0.25 epochs for warmup and cosine decay, no weight decay, no dropout, and gradient clipping at global norm 1. Similar to previous art, we fine-tune at a higher resolution of 384. When training the models with DP, we replace the global clipping with a per-example clipping norm of 1.0.

## E.2 LAMB Hyperparameters

| Hyperparameters | Range |
|-----------------------------------|---------------------------------------------------------|
| Learning rate | 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0 |
| Pre-trained layer LR multiplier α | 1.0, 10−1 , 10−2 , 10−3 , 10−4 , 10−5 |

Table 8: Finetuning hyperparams. Models are trained for 10 epochs, with 0.25 epochs for warmup and linear decay, no weight decay, no dropout, and gradient clipping at global norm 1. Similar to previous art, we fine-tune at a higher resolution of 384. When training the models with DP, we replace the global clipping with a per-example clipping norm of 1.0.

In our earlier results, we found that sometimes finetuning the full model performed much worse than finetuning just the last layer with privacy. We therefore added an additional hyperparameter α by which we multiply the global learning rate for all layers except the last. The intuition is that since all layers except the last have been pre-trained, the optimal choice of learning rate may be different for them compared to the last layer, which we start from scratch.

## E.3 Hyperparameters To Reproduce Figure 1B

In this setting, we only train the last layer for a single epoch in the full-batch setting. This amounts to a single update of the optimizer. We emphasize that we use the full training set of around 1.28M examples as a single batch. This is different from using a batch size of 2^20 for 1 step, which leaves a subset of the images out. We initialize the last-layer weights with zero and biases with -10 (the default in Scenic). Additionally, we changed the input resolution to 256x256, central-cropped from an image of resolution 384x384. Finally, since we only make one update, we sweep over constant learning rate values ∈ [10−4, 10−3] and make an update with the DP-LAMB optimizer. Note that, for 1-step updates and zero initialization, LAMB is equivalent to Adam, thus we expect DP-Adam to produce similar results.

Adam in the single-step full-batch setting. It is important to note that Adam in the single-step, full-batch setting in fact reduces to signSGD, since the Adam update satisfies $\frac{g}{\sqrt{g^2} + \epsilon} \approx \mathrm{sign}(g)$. Sign-based methods are indeed shown to be quite effective (Riedmiller & Braun, 1993) and in our case even surpass vanilla SGD. Since LAMB is equivalent to Adam when finetuning just one layer (with the learning rate retuned), LAMB also reduces to signSGD in this restricted setting.
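This reduction can be checked numerically. The snippet below is an illustrative sketch of a single standard Adam step starting from zero-initialized moments (ignoring clipping and DP noise); after bias correction, $\hat{m}_1 = g$ and $\hat{v}_1 = g^2$, so the update direction is approximately the gradient sign.

```python
import numpy as np


def adam_first_step(theta, g, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step from m_0 = v_0 = 0. Bias correction gives
    m_hat = g and v_hat = g**2, so the update is
    lr * g / (|g| + eps) ~= lr * sign(g)."""
    m = (1 - beta1) * g
    v = (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1)
    v_hat = v / (1 - beta2)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)
```

For any gradient whose entries are much larger in magnitude than `eps`, the resulting update equals `-lr * sign(g)` up to negligible error, which is exactly the signSGD step.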
Privacy accounting of hyperparameter tuning We additionally note that we do not include any cost related to hyperparmeter tuning in our privacy accounting and budget. This is in line with the setup of the baselines we consider. Finally, since learning rate is the only hyperparamter we tune, at least in the last layer single step setting, we have found that results are quite robust to the exact value of the learning rate as long as it is low enough to not diverge. ## F Pretraining With Imagenet21K Dataset. ImageNet-21k is a superset of ImageNet-k with 21k classes and 14M images (Deng et al., 2009). Similar to before, in order to both not inflate our results and break privacy guarantee, we extend the deduplication process proposed by Kolesnikov et al. (2020) and deduplicate ImageNet-21k with respect to all splits of ImageNet-1k, CIFAR-10 and CIFAR-100. Pretraining Hyperparameters. At the pre-training stage, we stick with the common practice of employing Adam optimizer (even for ResNet) with β1 = 0.9 and β2 = 0.999, with a batch size of 4096. Unlike pre-training with JFT dataset, we follow recommendations from Steiner et al. (2021) to use AugReg strategy where we lower the weight decay to 0.0001 and don't use dropout but instead use data augmentation strategy called medium1 which combines Mixup with α = 0.2 (Zhang et al., 2017) and RandAugment with l = 15 and m = 2 (Cubuk et al., 2020). We also use linear learning rate warmup until 10k steps and linearly decay it until the end. Our model is pre-trained with 224x224-sized images. | Model | Epochs | Base LR | TPU v4 hours | |----------|----------|-----------|----------------| | ViT-B/16 | 300 | 10−3 | 2.7k | Table 9: Pre-training hyperparams. We used batch size of 4096, learning rate warmup of 10k steps and then linear decay. Additionally, we set dropout rate to 0.0, clip global norm to 1 and weight decay to 0.0001. We use images of resolution 224x224. 
## F.1 Hyperparameters To Reproduce Table 5

Similar to Figure 1b, we only train the last layer for a single epoch in the full-batch setting. This amounts to just a single step of optimization. Similar to before, we initialize the last layer with zero and biases with -10 (the default in Scenic). We also changed the input resolution to 256x256, central-cropped from an image of resolution 384x384. This may look a little unusual at first for CIFAR-10 and CIFAR-100, since the original resolution of the images is 32x32, but we first upsample them to 384x384 and then central-crop them. We found that using higher-resolution images made a big difference in performance (even in the non-private setting), especially when using features from a pre-trained model. Finally, since we only make one update, we sweep over constant learning rate values ∈ [10−6, 10−5, 10−4, 10−3] and make an update with the DP-Adam optimizer as before.

Privacy accounting of hyperparameter tuning. We additionally note that we do not include any cost related to hyperparameter tuning in our privacy accounting and budget. This is in line with the setup of the baselines we consider. Finally, since the learning rate is the only hyperparameter we tune, at least in the last-layer single-step setting, we have found that results are quite robust to the exact value of the learning rate as long as it is low enough to not diverge (10−3 in this case).

## G Regarding Reproducibility

In order to aid the reproducibility of our work, we provide results with the large open pre-training dataset ImageNet-21k. In addition, we provide exact hyperparameters and other details in the Appendix. We will also release checkpoints and features from models pre-trained on ImageNet-21k. Finally, in order to help reproduce our results even further, we will also release our code.
## H Supplementary Results

## H.1 Throughput Comparison

| | Tuning type | | |
|-------------------|---------------|------------|----------|
| Model | Full | Last layer | Speed Up |
| ViT-B/16 | 170 | 900 | 5.3x |
| ViT-L/16 | 25 | 235 | 9.4x |
| ViT-H/14 | 12 | 100 | 8.3x |
| ResNet50x4 (BiT) | 25 | 225 | 9x |
| ResNet152x4 (BiT) | 3 | 90 | 30x |

Table 10: Throughput (img/sec/core) comparison of DP-LAMB on various models trained on ImageNet. All models were trained using 64 TPUv4 cores.

## H.2 More Stringent Delta

We conducted all our experiments with δ = 1e-6 for a fair comparison with Kurakin et al. (2022), but the ImageNet dataset size is close to 1.28M. Here, we re-ran the sweep over ε on ViT-H/14-4b using the same hyperparameters used to obtain the results in Figure 1b, but with δ set to 8e-7. Note that De et al. (2022) also set δ to 8e-7. As shown in Figure 3, we find that the change in the effective noise multiplier is small enough that our results don't change at all across both values of δ.

Figure 3: Comparison of Top-1 accuracies on ImageNet when varying the privacy parameter ε, with δ set to 8e-7.

## H.3 Real Labels

We also provide results when evaluated on ImageNet-ReaL (Beyer et al., 2020) labels in Figure 4.

Figure 4: Comparison of Top-1 accuracies on ImageNet-ReaL labels (Beyer et al., 2020) when varying the privacy parameter ε.

| | With privacy | | |
|----------|-----------------|-------|------|
| Model | Finetuning type | 2^15 | 2^16 |
| ViT-B/16 | Full | 45.23 | 58.6 |

Table 11: We complement the results from Table 1 with full finetuning of ViT-B/16 on smaller batch sizes as well. As shown in Table 1, we obtain the best results with batch size 2^17.
## H.4 Smaller Batch Sizes

Since we get the highest accuracy with full finetuning of ViT-B/16 in Table 1 at the lower end of our batch size range, we also try batch sizes even lower than that to verify if that is indeed the maximum we could achieve. We verify this as shown in Table 11.
Review 1:

Summary: The paper describes large-scale experiments with transfer learning and DP-SGD. The results are very important for our community as they demonstrate that at large scale the model performance can reach non-private baselines.

Strengths and Weaknesses:
- The paper has a very good experimental setup and a set of results that are very convincing.
- A thorough description of DP-SGD.

Weaknesses:
- More discussion on private vs. public (see "requested changes" section)
- Emphasize other domains, e.g. text, as possible other domains where this method might be relevant
- Also mention other settings, e.g. federated learning, as possible benefits for training large models.

Requested Changes: I think that the paper might get stronger with a discussion on the difference between private and public data. My biggest concern is the experimental setup: a public ImageNet dataset is considered **private** and the private JFT dataset is considered **public**. The described approach sounds to me practical, however I wonder what is the meaning of the "public data" that is not publicly available. Is it reasonable to expect that datasets like JFT will become publicly available, or due to the sensitivity of large sets of data might it never happen? I also might suspect that other domains, e.g. text, might have more data publicly available. I hope this request does not sound too abstract, however as you describe benefits of using public data for private training, I feel like it's important to dedicate a section to "defining" the public and private data. For example, using privacy frameworks like contextual integrity to emphasize that even public data might still not allow training models for certain cases or being released in an aggregated form.

Broader Impact Concerns: I would still try to say that public data does not always mean that the model can be created or that data can be shared.
I think your examples with JFT emphasize this distinction, which is a very good argument for me.

==================================================

Review 2:

Summary: The paper studies the application of transfer learning through pre-training for image classification, in a private setup. The paper shows that privately fine-tuning pre-trained models helps close the gap between private and non-private setups. Based on their empirical analysis, the authors make recommendations on what the best practices are when using DP-SGD for image classification. They show that using the right optimizer and initialization for the classification layer are both very important in achieving a desirable privacy-utility trade-off. They also show that through their recommendations, the training time of models can be severely decreased.

Strengths and Weaknesses:

Strengths:
1. I really liked how the authors made sure to deduplicate the pre-training data from the fine-tuning data, that is a very important step which is missing in many similar related work, and makes the conclusions much more reliable.
2. The paper's narrative is very easy to follow, and I enjoyed how it answered every question that came to my mind in the next paragraph. The results are thoroughly explained and justified.
3. Studying the effect of the optimizer on performance, and also analyzing the computational overheads (run-time) and achieving faster run time are aspects I had not seen studied much before in similar related work.

Weakness:
1. Although there are aspects to this paper that differentiate it from prior work, the main idea is still using pre-training, which has been explored before multiple times. However, I don't think this is really that much of a weakness as the findings and insights do have novelty.

Requested Changes: My main question/potential request for more experiments is regarding the initialization of the last layer, for fine-tuning.
Right now, zero initialization and Lacuna initialization are studied, and there seems to be a huge gap between their performances. I wonder if the authors tried other initializations? Or even maybe a way to initialize with transfer learning and using parameters from either other layers or other models?

Another question I had was regarding Tables 3 and 4: The performance of the best setup (the final row in Table 3), which is the highest-scaled one, seems to be invariant of optimizer choice (according to Table 4). Do the authors think that this might not be the case if full fine-tuning is performed? Or could it be that the large scale of the data/training kind of overshadows the effect of the optimizer? (I mainly mean between Adam and LAMB).

Some minor comments/suggestions:
1. Page 1: Introduction, end of line 3, differentially privacy should probably be differential privacy
2. Page 2: Introduction, line 5, might significant increase -> might significantly increase
3. Page 8: end of last line "that that" -> that

Broader Impact Concerns: The paper facilitates wider deployment of private training algorithms, which in itself can have many positive impacts. However, my only ethical concern is with the training time and hardware, as it seems training takes on the order of thousands of hours, using TPU v4s (based on Appendix D). Given how pre-training is what actually causes the improvements shown in the paper, I think it would be beneficial to the community if the pre-trained models are made publicly available to help accessibility, and alleviate the need for re-training them over and over by different parties.

==================================================

Review 3:

Summary: This paper presents an experimental study of large-scale transfer learning for differentially private image classification. The authors study the effects of model architecture and optimization methods in DP fine-tuning of a large image classification deep learning model.
Based on their experimental results, the authors suggest three practices for DP transfer learning for this particular set of models:

1. "Increase pretraining data and model size." The authors show in their experiments (Table 3) that using a larger pre-training data set improves the performance of both non-private and private classification. The authors also show that using a more complex model improves the fine-tuning performance in both private and non-private cases.
2. "Use large batch sizes in combination with optimizers that can work well with large batch sizes (e.g. LAMB)." Motivated by prior work on the topic, the authors experiment with large batch sizes in the DP fine-tuning. The experiments suggest that using a large batch size and training for only a few iterations improves the DP performance. Towards the end of the paper the authors suggest using only a single gradient update step with the full data set used as the "batch".
3. "Initialize the last layer to zero (or very small) when doing finetuning with DP." In their experiments, the authors demonstrate that fine-tuning only the last layer of the classification model provides on-par performance to full fine-tuning when trained under DP and using a large batch size. In their ablation study the authors show that initializing the fine-tuning with zeros (the weights of the last layer) provides a significant performance increase compared to random initialization.

Strengths and Weaknesses: The empirical results shown in the paper are very strong and improve over the state-of-the-art significantly. While the work does not present theoretical reasoning for the proposed improvements to the learning procedure, I find this type of empirical work also valuable for building actually functioning DP algorithms. Also, I think the topic is relevant and of high interest to the TMLR community. However, I do have some serious concerns about the method that need to be addressed before the empirical performance can actually be qualified.
If I do understand the paper correctly, the results are based on fine-tuning the model for only a very few iterations. The ablation study in Table 4 shows that the performance is actually the best if we train only for a single epoch. Moreover, the authors suggest that the extreme case of training only a _single step_ using all the data as the "batch" provides the best performance.

Now, if we use the Adam optimizer from Kingma et al. (I assume that the DP-Adam is just a DP variant of this where the noise is added to the clipped gradients prior to learning rate adaptation), then the first gradient update step should actually correspond to a constant update. This is easy to see from the description of the Adam in the original paper. The moment parameters of Adam are initialized as $m_0=0$, $v_0=0$ and the first update becomes:

\begin{align}
&m_1 = (1-\beta_1) g \\\\
&v_1 = (1-\beta_2) g^2 \\\\
&\hat{m}_1 = m_1 / (1-\beta_1) = g \\\\
&\hat{v}_1 = v_1 / (1-\beta_2) = g^2 \\\\
&\theta_1 = \theta_0 - \alpha \hat{m}_1 / (\sqrt{\hat{v}_1} + \epsilon) = \theta_0 - \alpha g/(g + \epsilon)
\end{align}

So if the $\epsilon$ (note that this is not the privacy parameter) in the denominator is chosen significantly smaller than the gradient (which typically is the case), then the first step in Adam becomes essentially $\theta_1 = \theta_0 - \alpha$ and the update is independent of the gradient. Therefore the model update happens independently of the data. The results for CIFAR-10 and CIFAR-100 seem to be almost non-affected by the epsilon, which could support the hypothesis that the gradient step actually does not use the data at all. However, the results for the ImageNet-1k of Table 5 seem to be affected by the privacy parameter $\epsilon$, which would suggest that the method is actually dependent on the data. Can the authors clarify the use of Adam in the single iteration setting? If the DP-Adam is somehow fundamentally different to Adam, please specify it in the appendix.
Besides the possible problem with single-step learning pointed out above, I want to point out some other weaknesses of the paper as well.

- Simply using more data for pre-training seems a somewhat unattainable suggestion for many cases. If there is a large public data set available that could be used to pre-train the model, then in what scenario the analyst _wouldn't_ use it? I think this is a crucial point to consider for better motivating suggestion 1.
- Authors should better specify the model architectures, and which models are more complex than others. I didn't find much discussion about this in the paper. For example I don't quite follow the discussion after Table 3. You state that the accuracy gap decreases with larger models. And then you state "For instance, the gap for ViT-L/16 (last layer tuning) between the best non-private and private case is 7.5 percentage points and for ViT-H/14 in the same setting is 6.2 percentage points.". So is the ViT-L/16 model smaller than the ViT-H/14?

Some minor comments:
- Captions of Table 2 and 3 are equivalent. While the Tables might contain overlapping settings, the captions should at least describe the main result of the table and the particular effect studied.
- Section 5.1: What do you mean by "orthogonal" here? Orthogonal compared to what?
- Figure 1a is not referred to in the main text.
- Caption of Figure 1a says "across model and pre-training dataset sizes.", but it remained a bit unclear to me what are the data sets and models corresponding to the labels on the x-axis.
- The appendix says that the code is included but I didn't find it in the openreview system. Maybe it was there, but just didn't show for me.

Typos:
- "computeing"

Requested Changes: I require the following changes:
- The Adam should be better discussed to understand the results of the single-step training.
- The models should be better described in the appendix, including for example a clear description of the model size.
More minor changes that will improve the paper - The captions for the Tables should be more descriptive to convey the main message of the Table. Broader Impact Concerns: I have no concerns relating to the broader impact. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The reviewers all thought the work was great, and were generally satisfied by the revisions by the authors. I too agree: I think this is a really nice work, which pushes the limits of public pre-training, showing that differentially private fine-tuning can preserve most of the utility. Congratulations to the authors. For the paper itself, I would "Accept as is." However, as the authors have promised, the code is in the process of being publicly released. The only remaining concern of the reviewers (and myself) is that it has not yet been released. To add some impetus to this process, I am marking the paper as "Accept with minor revision." To satisfy the minor revision, I would like either a) the code to be publicly released and added to the submission, or b) the authors to publicly commit to a timeline for code release (in the case that administrative overhead precludes releasing within this limited timeframe). ==================================================
# Assuming Locally Equal Calibration Errors For Non-Parametric Multiclass Calibration

Kaspar Valk *valk.kaspar@gmail.com*

Meelis Kull *meelis.kull@ut.ee*

University of Tartu

Reviewed on OpenReview: *https://openreview.net/forum?id=na5sHG69rI*

## Abstract

A probabilistic classifier is considered calibrated if it outputs probabilities equal to the expected class distribution given the classifier's output. Calibration is essential in safety-critical tasks where small deviations between the predicted probabilities and the actually observed class proportions can incur high costs. A common approach to improve the calibration of a classifier is to use a hold-out dataset and a post-hoc calibration method to learn a correcting transformation for the classifier's output. This work explores the field of post-hoc calibration methods for multi-class classifiers and formulates two assumptions about the probability simplex which have been used by many existing non-parametric calibration methods but, despite this, have never been explicitly stated: assuming locally equal label distributions or assuming locally equal calibration errors. Based on the latter assumption, an intuitive non-parametric post-hoc calibration method is proposed, which is shown to offer improvements to the state-of-the-art according to the expected calibration error metric on the CIFAR-10 and CIFAR-100 datasets.

## 1 Introduction

Probabilistic classifiers take some data as input and produce probability distributions over classes as output. For example, a classifier could be tasked to take as input an X-ray image of a person's chest and produce as output a vector of three probabilities for whether the image depicts a *healthy lung*, *lung cancer* or *some other lung disease*. A classifier is considered to be calibrated if its predicted probabilities are in correspondence with the true class distribution. It is possible that a probabilistic classifier is not well-calibrated and produces distorted probabilities. 
For example, predicting an X-ray image to show a *healthy lung* with a probability of 0.9 is calibrated if, among a large sample of images with similar predictions, 0.9 of them truly depict a healthy lung. If in reality only 0.7 of the images depict a healthy lung, then the prediction of 0.9 is overconfident. The problem of over-confident predictions is especially common for modern deep neural networks (Guo et al., 2017; Lakshminarayanan et al., 2017). Distorted output probabilities are also characteristic of many classical machine learning methods such as naive Bayes, decision trees (Niculescu-Mizil & Caruana, 2005; Domingos & Pazzani, 1996) or high-dimensional logistic regression (Bai et al., 2021; Clarté et al., 2022a;b). Well-calibrated classifiers are essential in safety-critical applications (e.g. medical diagnosis, autonomous driving) where small deviations of predicted probabilities from being calibrated can cause costly mistakes (Leibig et al., 2017). For example, in a self-driving car that uses a classifier to detect if the road is clear of obstructions, over-confident predictions can lead to accidents, and under-confident predictions can prevent the vehicle from driving. The machine learning literature has two fundamentally different approaches to achieve better-calibrated classifiers. The first approach, with a focus on neural networks, is to modify the classifier's training algorithm or use Bayesian approaches to model uncertainties. For example, Mukhoti et al. (2020) studied the use of focal loss (Lin et al., 2017) instead of log-loss for training better calibrated classifiers; Müller et al. (2019) investigated the use of label smoothing (Szegedy et al., 2016) during training for better calibration; Kumar et al. (2018) and Popordanoska et al. (2021) proposed additional terms to be added to the training time loss function that penalize miscalibration; Maddox et al. (2019) proposed Bayesian model averaging for achieving calibration in deep learning. 
The second approach to achieve better-calibrated classifiers is to apply a *post-hoc calibration method* on an already trained classifier. Post-hoc calibration methods receive as input a classifier and a hold-out validation dataset and learn a transformation from the classifier's predictions to better-calibrated predictions. Many methods have been proposed for binary probabilistic classifiers, where the output has only two classes and only one degree of freedom. For example, there exists logistic calibration (Platt, 1999); isotonic calibration (Zadrozny & Elkan, 2002); histogram binning (Zadrozny & Elkan, 2001); beta calibration (Kull et al., 2017). For multi-class classifiers with more than two output classes, a common approach has been to apply binary methods in a one-versus-rest manner: a binary post-hoc calibration method is applied separately on the probabilities of each class (Zadrozny & Elkan, 2002). In recent years, several inherently multi-class post-hoc calibration methods have been proposed as well, even though some of them are applicable only for neural networks. For example, Guo et al. (2017) proposed temperature scaling, vector scaling, and matrix scaling; Kull et al. (2019) introduced Dirichlet calibration; Zhao et al. (2021) proposed a method specifically intended for decision making scenarios; Rahimi et al. (2020) suggested intra-order preserving functions; Wenger et al. (2020) proposed a non-parametric method based on a latent Gaussian process. This work takes a look at the *post-hoc calibration method* approach for achieving better calibrated multiclass classifiers. While there already exist many strong multi-class methods, several of them are limited to symmetrical transformations for all the classes; for example, temperature scaling, Gaussian process calibration, diagonal and order-invariant subfamilies of the intra-order preserving functions are all symmetrical. 
As shown in the experiments section, symmetrical transformations are usually not a problem but can be severely limiting in some cases. The existing asymmetrical methods are limited in their expressivity; for example, matrix scaling and vector scaling are limited to only linear transformations in the logit space. This work proposes an intuitive and simple non-parametric post-hoc calibration method that is not limited by a symmetrical transformation or the expressivity of parametric function families. The basis of the proposed method is assuming that similar predictions on the probability simplex have similar calibration errors. The method is shown to outperform competing methods, offering improvements in expected calibration error and avoiding the failures of symmetric methods. In Section 2, notation is introduced and an overview of background information connected to multi-class calibration is given. The concepts of calibration, calibration error, calibration error estimation, and existing post-hoc calibration methods for multi-class calibration are explained. In Section 3, the contributions of this work are described and a post-hoc calibration method is proposed. In Section 4, experiments are carried out to compare the proposed method to its competitors. The source code of the experiments is available at https://github.com/kaspar98/lece-calibration. This paper builds upon preliminary research conducted for the first author's master's thesis (Valk, 2022).

## 2 Background And Related Work

The following sections introduce the concepts of calibration, calibration error, calibration error estimation and explain the existing post-hoc calibration methods for multi-class calibration.

## 2.1 Notation

This work focuses on $m$-class classification problems with feature space $\mathcal{X}$ and one-hot encoded label space $\mathcal{Y} = \{(1, 0, \ldots, 0), (0, 1, \ldots, 0), \ldots, (0, 0, \ldots, 1)\}$, where $m \geq 3$. 
A probabilistic multi-class classifier for such a classification problem is a function $f : \mathcal{X} \to \Delta^m$ that takes as input features $x \in \mathcal{X}$ and outputs a probability vector $f(x) = \hat{p} = (\hat{p}_1, \ldots, \hat{p}_m) \in \Delta^m$, where $\Delta^m = \{(q_1, \ldots, q_m) \in [0, 1]^m \mid \sum_{i=1}^{m} q_i = 1\}$ is the $(m-1)$-dimensional probability simplex. In addition, let $X \in \mathcal{X}$, $Y = (Y_1, \ldots, Y_m) \in \mathcal{Y}$ and $f(X) = \hat{P} = (\hat{P}_1, \ldots, \hat{P}_m) \in \Delta^m$ be random variables, where $X$ denotes the input features, $Y$ denotes the label, and $\hat{P}$ the classifier's prediction.

## 2.2 Calibration

There exist several definitions for calibration in the context of probabilistic multi-class classifiers.

**Multi-class calibration** A classifier is considered to be multi-class-calibrated (or just calibrated) (Kull et al., 2019) if for any prediction vector $\hat{p} \in \Delta^m$ it holds that $$\mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right] = \hat{p}.$$

**Classwise calibration** A weaker notion of classwise calibration conditions the expectation on each class separately (Zadrozny & Elkan, 2002). A classifier is considered to be classwise calibrated if for any class $i \in \{1, 2, \ldots, m\}$ and any real value $c \in [0, 1]$ it holds that $$\mathbb{E}_{Y_{i}}\left[Y_{i} \mid \hat{P}_{i} = c\right] = c.$$ Note that for binary classification, classwise calibration is the same as multi-class calibration (Vaicenavicius et al., 2019).

**Confidence calibration** Another weaker notion, confidence calibration, used by Guo et al. (2017), requires calibration only for the predictions of the class with the highest probability in each output. A classifier is considered to be confidence calibrated if for any real value $c \in [0, 1]$ it holds that $$\mathbb{E}_{Y}\left[Y_{\operatorname{argmax} \hat{P}} \mid \max \hat{P} = c\right] = c.$$ An illustrative example of the different calibration definitions is presented in Appendix A.1. 
## 2.3 Calibration Error

Calibration error describes the difference between the predicted probabilities of the classifier and the corresponding perfectly calibrated class probabilities. Kumar et al. (2019) defined calibration error for confidence and classwise calibration for a classifier $f$. In a slightly more generalized form, confidence calibration error is defined as $$CE_{conf} = \mathbb{E}_{\hat{P}}\left[\left|\max \hat{P} - \mathbb{E}_{Y}\left[Y_{\operatorname{argmax} \hat{P}} \mid \max \hat{P}\right]\right|^{\alpha}\right]^{1/\alpha},$$ and classwise calibration error is defined as $$CE_{cw} = \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{\hat{P}}\left[\left|\hat{P}_{i} - \mathbb{E}_{Y}\left[Y_{i} \mid \hat{P}_{i}\right]\right|^{\alpha}\right]^{1/\alpha}.$$ The calibration errors are parameterized by $\alpha$, where $\alpha = 1$ results in mean-absolute-error, and $\alpha = 2$ in mean-squared-error. Calibration error can be defined not only for the whole classifier $f$ but also for just one prediction value, as the difference between the right-hand side and the left-hand side of the corresponding calibration definition. In this work, the calibration error for multi-class calibration at a prediction vector value $\hat{p}$ is defined as $$CE(\hat{p}) = \hat{p} - \mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right].$$ Note that for multi-class calibration, the error defined in such a way is a vector of real values.

## 2.4 Calibration Evaluation

In any real-world setting, the true calibration error cannot be computed directly; it can only be estimated. Common metrics to evaluate calibration are the *expected calibration error* (ECE) and proper scoring rules. The ECE metric groups similar predictions into bins and uses bin averages to estimate the calibration error. 
For confidence calibration, ECE is calculated using $Y_{\operatorname{argmax} \hat{P}}$ and $\max \hat{P}$ and is defined as $$\text{confidence ECE} = \sum_{i=1}^{b}\frac{|B_{i}|}{n}\cdot|\overline{p}_{i}-\overline{y}_{i}|, \tag{1}$$ where $b$ is the number of bins, $|B_i|$ the number of data points in bin $B_i$, $n$ the total number of data points, $\overline{p}_i$ the average prediction value in the $i$-th bin and $\overline{y}_i$ the average label value in the $i$-th bin (Naeini et al., 2015). For classwise calibration, ECE is defined as $$\text{classwise ECE} = \frac{1}{m}\sum_{j=1}^{m}\text{class-}j\text{-ECE},$$ where class-$j$-ECE is calculated with the same formula as in Equation (1) but with the values $Y_j$ and $\hat{P}_j$ used instead of $Y_{\operatorname{argmax} \hat{P}}$ and $\max \hat{P}$ when calculating $\overline{y}_i$ and $\overline{p}_i$ (Kull et al., 2019). Commonly, bins are chosen either with equal size, so that they contain an equal number of data points, or with equal width, so that the probability interval from 0 to 1 is uniformly partitioned between them. Equal-sized bins are also called equal-mass bins or data-dependent bins in some works. While ECE is an important measure for calibration evaluation, it should not be the only metric evaluated. A very low calibration error can be achieved if the classifier makes very uninformative predictions, e.g., if it predicts the overall class distribution of the training dataset for any given input. Good metrics to consider in addition to ECE are proper scoring rules (Brier score or log-loss), as they have been shown to decompose into calibration loss and refinement loss (DeGroot & Fienberg, 1983). While the calibration loss measures miscalibration, the refinement loss measures the extent to which instances of different classes are getting the same prediction. The key property of proper scoring rules is to have the Bayes-optimal model as the unique loss minimizer, achieving zero calibration loss and the minimal possible refinement loss, which can be non-zero due to aleatoric uncertainty (Kull & Flach, 2015). 
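To make Equation (1) concrete, the confidence ECE estimator with equal-width bins can be sketched in a few lines of numpy. This is an illustrative implementation, not the code released with the paper; the function name and the default bin count of 15 are our own choices:

```python
import numpy as np

def confidence_ece(probs, labels, n_bins=15):
    """Estimate confidence ECE (Equation 1) with equal-width bins.

    probs:  (n, m) array of predicted probability vectors.
    labels: (n,) array of integer class labels.
    """
    conf = probs.max(axis=1)                    # max P-hat per prediction
    correct = (probs.argmax(axis=1) == labels)  # Y_{argmax P-hat}
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # |B_i|/n weighted gap between average confidence and accuracy
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.sum() / n * gap
    return ece
```

Classwise ECE follows by applying the same binning to each class column $\hat{P}_j$ against $Y_j$ and averaging the resulting class-$j$-ECE values over the classes.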
## 2.5 Post-Hoc Calibration Methods

The calibration of an already trained classifier can be improved by post-hoc calibration methods. Given a classifier and a hold-out validation dataset different from the original training dataset, the goal of a post-hoc calibration method is to learn a map $\hat{c} : \Delta^m \to \Delta^m$ from the uncalibrated output $\hat{p}$ of the classifier to a better-calibrated output $\hat{c}(\hat{p})$. The ideal result would be if the calibration method learned the true calibration map $c(\hat{p}) = \mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right]$. The transformation is typically learned by optimizing a proper scoring rule (Brier score or log-loss) (Rahimi et al., 2020). A possible motivation behind this is that unless the refinement loss is decreasing, any reduction of a proper loss is due to the reduction of the calibration loss.

**One-versus-rest methods** A common approach to multi-class calibration has been to apply binary post-hoc calibration methods in a one-versus-rest manner (Zadrozny & Elkan, 2002). For every class in an $m$-class classification task, one can define a binary one-vs-rest classification problem: the currently viewed class is the positive class, and the rest of the classes, grouped together, form the negative class. In a one-versus-rest approach to multi-class calibration, a binary calibration method is trained on the $m$ one-vs-rest tasks separately. For example, some binary calibration methods that have been applied in the one-versus-rest approach are Platt scaling (Platt, 1999), isotonic regression calibration (Zadrozny & Elkan, 2002), histogram binning (Zadrozny & Elkan, 2001), and beta calibration (Kull et al., 2017). 
Platt scaling and beta calibration are both parametric methods fitting a specific parametric family for calibration; isotonic regression calibration uses isotonic regression to learn a calibrating transformation; histogram binning divides the probability space into equal-sized bins and, in each bin, sets the calibrated value of predictions belonging to that bin equal to the empirical class distribution in the bin. There are two considerable flaws to the one-versus-rest approach. First, it is not able to learn any dependencies between classes. Second, when the outputs of the $m$ binary methods are put back together, the prediction vector will likely no longer sum to 1 and needs to be normalized. It has been shown that normalization can make the probabilities less calibrated depending on the metric used (Gupta & Ramdas, 2022). Therefore, some works propose to skip the final normalization step and treat one-vs-rest calibration as truly $m$ separate binary calibration problems (Patel et al., 2021; Gupta & Ramdas, 2022).

**Temperature scaling** Temperature scaling is a logit scaling method designed for neural networks, introduced by Guo et al. (2017). The method is defined as $\hat{c}(z) = \sigma(z/t)$, where $z$ is the logit vector, $\sigma$ is the softmax function, and $t \in (0, \infty)$ is the learned temperature parameter shared across all classes. If $t > 1$, the method makes the predictions less confident by pulling the probability distribution towards the uniform distribution; if $t < 1$, the method makes the predictions more confident, making the largest probability in the output even larger.

**Matrix and vector scaling** Matrix and vector scaling are both logit transformation techniques proposed by Guo et al. (2017), similar to temperature scaling. The calibrated output of these techniques is obtained by $\hat{c}(z) = \sigma(Wz + b)$, where $W \in \mathbb{R}^{k \times k}$ is a matrix of learned weights (restricted to a diagonal matrix for vector scaling, unrestricted for matrix scaling) and $b \in \mathbb{R}^{k}$ is a vector of learned biases. 
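For illustration, the single temperature $t$ can be fitted by minimizing log-loss on validation logits. The following is a minimal numpy/scipy sketch, not the reference implementation of Guo et al. (2017); the search bounds are our own choice:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels):
    """Fit t in c(z) = softmax(z / t) by minimizing log-loss on validation data."""
    def nll(t):
        p = softmax(logits / t)
        return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x
```

For an overconfident network, the fitted $t$ comes out greater than 1, pulling the calibrated predictions $\sigma(z/t)$ towards the uniform distribution.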
Vector scaling is similar to temperature scaling, but instead of a single scaling parameter, a scaling parameter is learned separately for each class, and an additional bias is also learned. For matrix scaling, each logit becomes a linear combination of the other logits. Matrix scaling gives better results if it is trained with off-diagonal and intercept regularization (ODIR) (Kull et al., 2019): the term $\lambda\left(\frac{1}{m(m-1)}\sum_{i \neq j} w_{ij}^2\right) + \mu\left(\frac{1}{m}\sum_{j} b_{j}^2\right)$ is added to the training loss, where $\lambda$ and $\mu$ are the regularization hyperparameters.

**Dirichlet calibration** Dirichlet calibration is a method proposed by Kull et al. (2019) that is quite similar to matrix scaling, except that it does not work on the logits of a neural network but rather on the actual predicted probabilities $\hat{p}$ of a classifier. With Dirichlet calibration, the calibrated probabilities are obtained by $\hat{c}(\hat{p}) = \sigma(W \ln \hat{p} + b)$, where $\ln$ is a vectorized natural-logarithm function. Similar to matrix scaling, Dirichlet calibration is also trained with ODIR to prevent overfitting.

**Intra order-preserving functions** Rahimi et al. (2020) proposed to use the family of intra order-preserving (IOP) functions to learn a calibration map on the logits of a neural network. An IOP function $\hat{c} : \mathbb{R}^m \to \mathbb{R}^m$ is a vector-valued function where the order of the sorted output components is the same as the order of the sorted input components, that is, $\operatorname{argsort} \hat{c}(z) = \operatorname{argsort} z$. The use of IOP functions was motivated by their property of preserving classifier accuracy and their larger expressive power compared to the scaling methods proposed by Guo et al. (2017) or Dirichlet calibration. The IOP function family preserves classifier accuracy after calibration, as the largest probability in the classifier output still remains the largest probability after calibration, thanks to the order-preserving property. 
The IOP function family is more expressive than matrix scaling or Dirichlet calibration, as IOP functions can learn non-linear transformations, while matrix scaling and Dirichlet calibration are limited to linear transformations. In addition, Rahimi et al. (2020) showed in their experiments that, in practice, it is better to use a diagonal subfamily of IOP functions, as these can be expressed with fewer parameters and are therefore less prone to overfitting. An IOP function $\hat{c}$ is a diagonal function if $\hat{c}(z) = (\hat{c}(z_1), \ldots, \hat{c}(z_m))$, where $\hat{c} : \mathbb{R} \to \mathbb{R}$ is a continuous and increasing function. A diagonal IOP function is symmetrical for all classes and produces output where the different class logits do not interact with each other in $\hat{c}$. It can be noted that temperature scaling uses a subfamily of diagonal IOP functions: it uses linear diagonal IOP functions where the bias term equals 0. Rahimi et al. (2020) implemented the IOP post-hoc calibration method as a neural network with two fully connected hidden layers with order-preserving constraints. Their implementation has two hyperparameters: the numbers of neurons in the first and second hidden layers.

**Decision calibration** Zhao et al. (2021) proposed a non-parametric calibration method for decision-making settings. The method works iteratively by partitioning the probability simplex and applying an adjustment to each partition. The partition and adjustment are both determined by the average difference between the label and prediction values.

**Gaussian process calibration** Wenger et al. (2020) proposed a natively multi-class non-parametric calibration method that uses a latent Gaussian process to learn the calibrating transformation. The method applies the same transformation for all the classes.

## 3 Contributions

The contributions of this work to the field of multi-class calibration can be split into two: 1. 
The work formulates two assumptions about the true calibration map that have been previously used but not clearly stated in the calibration literature. 2. By explicitly acknowledging the assumptions, we propose an intuitive and simple non-parametric post-hoc calibration method.

## 3.1 Proposed Assumptions

Before introducing the proposed assumptions, a small introduction is needed. According to the definition of multi-class calibration given in Section 2.2, a prediction vector $\hat{p}$ is considered calibrated if $\hat{p} = \mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right]$. Therefore, a calibrating transformation of a prediction $\hat{p}$ could be found if we had a good estimate of its true conditional class distribution $\mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right]$: we could simply set the prediction value $\hat{p}$ equal to the estimate. Similarly, if we were to approach calibration from the definition of the calibration error $CE(\hat{p}) = \hat{p} - \mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right]$ given in Section 2.3, it would suffice for calibration if we had a good calibration error estimate: we could subtract the estimate from the prediction and our output would be calibrated. One obvious weak estimator for the true class distribution is the (single) label value $Y$ corresponding to $\hat{p}$. This estimator is clearly unbiased, as for each $\hat{p}$, the average value of $Y$ is equal to $\mathbb{E}_{Y}[Y \mid \hat{P} = \hat{p}]$, and hence, $\mathbb{E}_{Y}[Y \mid \hat{P} = \hat{p}] - \mathbb{E}_{Y}[Y \mid \hat{P} = \hat{p}] = 0$. However, this estimator $Y$ would have very high variance, as it is based on a sample with just one element. Likewise, an unbiased high-variance estimator $\widehat{CE}$ could be constructed for the calibration error as the difference between the prediction and its label: $\widehat{CE}(\hat{p}) = \hat{p} - Y$. Unfortunately, both of these simple estimators have too high variance to be used for calibration. However, if we made some assumptions about the calibration map of our classifier, we could construct good estimators that build on these weak estimators. 
**Assumption of locally equal calibration errors (LECE)** We propose to assume that the calibration error is approximately equal in a close neighborhood on the probability simplex. Formally, for some fixed $\epsilon, \delta > 0$ and some neighborhood function $d : \Delta^m \times \Delta^m \to \mathbb{R}$, we assume that $$d(\hat{p}, \hat{p}') \leq \delta \quad \Longrightarrow \quad \left\|CE(\hat{p}) - CE(\hat{p}')\right\|^2 \leq \epsilon,$$ where $\|\cdot\|$ denotes the Euclidean norm. Note that the term 'locally' is often used to refer to neighborhoods in the original feature space, whereas we consider neighborhoods in the simplex. Given a validation dataset, the LECE assumption allows us to construct a considerably better estimator $\widehat{CE}_{neigh}$ for the calibration error of a prediction $\hat{p}$ than the previously introduced weak estimator $\widehat{CE}(\hat{p}) = \hat{p} - Y$. First, we need to find the close neighborhood of $\hat{p}$, meaning the validation set predictions $\hat{p}_1, \ldots, \hat{p}_k$ with labels $Y_1, \ldots, Y_k$ for which $d(\hat{p}, \hat{p}_i) \leq \delta$ for $i$ in $1, \ldots, k$. A stronger estimator can then be constructed by averaging the weak calibration error estimator values belonging to that close neighborhood:
$$\widehat{CE}_{neigh}(\hat{p}) = \frac{1}{k}\sum_{i=1}^{k}\widehat{CE}(\hat{p}_i) = \frac{1}{k}\sum_{i=1}^{k}(\hat{p}_i - Y_i).$$
The stronger estimator has an upper bound on its squared bias:
$$\begin{aligned} \operatorname{Bias}\left[\widehat{CE}_{neigh}(\hat{p})\right]^2 &= \left\|\mathbb{E}_{Y_1, \ldots, Y_k}\left[\widehat{CE}_{neigh}(\hat{p})\right] - CE(\hat{p})\right\|^2 = \left\|\frac{1}{k}\sum_{i=1}^{k}\mathbb{E}_{Y_i}\left[\hat{p}_i - Y_i\right] - CE(\hat{p})\right\|^2 \\ &= \left\|\frac{1}{k}\sum_{i=1}^{k}\left(\hat{p}_i - \mathbb{E}_{Y_i}\left[Y_i\right]\right) - CE(\hat{p})\right\|^2 = \left\|\frac{1}{k}\sum_{i=1}^{k}\left(CE(\hat{p}_i) - CE(\hat{p})\right)\right\|^2 \\ &\leq \frac{1}{k}\sum_{i=1}^{k}\left\|CE(\hat{p}_i) - CE(\hat{p})\right\|^2 \leq \frac{1}{k}\sum_{i=1}^{k}\epsilon = \epsilon, \end{aligned}$$
where the first inequality follows from Jensen's inequality applied to the convex squared norm. 
The variance of the estimator decreases approximately linearly as the number of neighbors increases:
$$Var\left[\widehat{CE}_{neigh}(\hat{\mathbf{p}})\right] = Var\left[\frac{1}{k}\sum_{i=1}^{k}\widehat{CE}(\hat{\mathbf{p}}_{i})\right] = \frac{\sum_{i=1}^{k}Var\left[\widehat{CE}(\hat{\mathbf{p}}_{i})\right]}{k^{2}} \approx \frac{Var\left[\widehat{CE}(\hat{\mathbf{p}})\right]}{k}.$$
Given the estimate $\widehat{CE}_{neigh}(\hat{p})$, a calibrated prediction can finally be constructed by subtracting it from the original prediction: $\hat{c}(\hat{p}) = \hat{p} - \widehat{CE}_{neigh}(\hat{p})$.

**Assumption of locally equal class distributions (LECD)** A similar derivation for constructing calibrated predictions is possible if one instead assumes that the true label distribution $\mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right]$ is approximately equal in a close neighborhood. Formally, for some fixed $\epsilon, \delta > 0$ and some neighborhood function $d : \Delta^m \times \Delta^m \to \mathbb{R}$, we assume that $$d(\hat{p}, \hat{p}') \leq \delta \quad \Longrightarrow \quad \left\|\mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}\right] - \mathbb{E}_{Y}\left[Y \mid \hat{P} = \hat{p}'\right]\right\|^{2} \leq \epsilon.$$ With this assumption, the method arrives at a calibrated prediction by using the average one-hot label vector in a close neighborhood: $\hat{c}(\hat{p}) = \frac{1}{k}\sum_{i=1}^{k} Y_i$. Similarly to the previous LECE assumption, the estimator used with the LECD assumption also has an upper bound of $\epsilon$ on its squared bias, and its variance decreases approximately linearly as the number of neighbors increases.

**Visualisation of the assumptions** To better understand the difference between the two assumptions, consider Figure 1. It depicts the calibration maps learned by two calibration methods on a synthetic calibration task. 
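The contrast between the two estimators just described can also be made concrete in code. The following numpy sketch uses toy numbers of our own invention: LECD replaces the prediction by the neighborhood's average label, while LECE shifts the prediction by the neighborhood's average residual, so only LECE retains the individual prediction's information:

```python
import numpy as np

def lecd_estimate(neighbor_labels):
    """LECD: calibrated prediction = average one-hot label in the neighborhood."""
    return neighbor_labels.mean(axis=0)

def lece_estimate(p_hat, neighbor_preds, neighbor_labels):
    """LECE: subtract the neighborhood's average residual (prediction - label)."""
    ce_neigh = (neighbor_preds - neighbor_labels).mean(axis=0)
    return p_hat - ce_neigh

# A toy neighborhood of k = 4 validation predictions with one-hot labels.
p_hat = np.array([0.70, 0.20, 0.10])
neighbor_preds = np.array([[0.62, 0.28, 0.10],
                           [0.58, 0.32, 0.10],
                           [0.64, 0.26, 0.10],
                           [0.56, 0.34, 0.10]])
neighbor_labels = np.eye(3)[[0, 0, 1, 2]]  # observed classes 1, 1, 2, 3

lecd = lecd_estimate(neighbor_labels)                         # -> [0.5, 0.25, 0.25]
lece = lece_estimate(p_hat, neighbor_preds, neighbor_labels)  # -> [0.6, 0.15, 0.25]
```

Note that `lecd` ignores `p_hat` entirely, as every prediction in this neighborhood would get the same output (as in histogram binning), whereas `lece` only corrects `p_hat` by the locally estimated calibration error of [0.10, 0.05, -0.15].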
The exact details about the synthetic dataset are given in Section 4.1, but in short, 5000 prediction and label pairs were generated from a Dirichlet distribution with parameters [0.5, 0.5, 0.5], and the goal of the calibration methods in Figures 1a and 1b was to learn to imitate the true calibration map depicted in Figure 1c from the generated data. Note that the true calibration map depicted in Figure 1c is something we never know in practice; here it has been manually created merely for this synthetic calibration task to allow for a visual comparison between the two assumptions. Both the background colors and the black arrows depict the learned transformation. The same information that is represented by the arrows is also represented by the RGB color value, which is in a linear relation with the calibration error vector at that point: red for class 1, green for class 2, and blue for class 3.

Figure 1: Illustrative example of the differences between the LECD and the LECE assumption on a synthetic calibration task. Two versions of histogram binning applied in a one-vs-rest style with 5 equal-width bins (a, b) aim to learn the true calibration map (c). The classical histogram binning uses the LECD assumption (a); the novel version uses the LECE assumption (b). Note the difference between the two methods: (a) for LECD the black calibration arrows point to the same location in one bin; (b) for LECE the arrows are the same in one bin. Histogram binning applied with the LECE assumption (b) imitates the true calibration map (c) more closely than the one applied with the LECD assumption (a). For details about the synthetic data calibration task, see Section 4.1.

Both of the methods depicted in Figure 1a and Figure 1b show the result of a histogram binning method applied in one-vs-rest style with 5 equal-width bins. 
Note that classical histogram binning uses equal-size bins, but here equal-width bins are used for a clearer visualization. The only difference between the methods is the underlying assumption used: in Figure 1a the LECD assumption is used, as in classical histogram binning; the novel alternative in Figure 1b uses the LECE assumption. Note that the visualization in Figure 1a is very similar to the visualization of the higher-dimensional reliability diagrams as used by Vaicenavicius et al. (2019), and even though they are technically the same, Figure 1a here does not depict a reliability diagram but rather the transformation of a calibration method. In both cases, the binning scheme defines the close neighborhoods in which we assume the corresponding assumptions to hold. With the classical LECD assumption, the end point of every calibrating arrow is the same in one bin; with the novel assumption, the calibrating arrow itself is the same in one bin. When comparing Figure 1a and Figure 1b to Figure 1c, histogram binning applied with the LECE assumption provides a closer imitation of the true calibration map than histogram binning applied with the LECD assumption. The visual intuition is also confirmed by the numeric metrics provided in Table 1 in Section 4.1, showing that histogram binning based on the LECE assumption is indeed closer to the true calibration map. The LECE assumption outperformed the LECD assumption in the real-data experiments as well (as shown in Section 4.2.4) and is therefore preferred in this work.

**Relation to prior work** Neither of the two assumptions is completely novel, as estimators based on them have been used in previous work. However, the assumptions themselves have never been explicitly stated. An estimator based on the assumption of equal calibration errors has been used in at least two algorithms that use the average difference between the label and prediction in a close neighborhood. 
First, the widely used ECE calculation algorithm (Naeini et al., 2015) defines the close neighborhood with bins and in each bin uses the difference between the average label and the average prediction value to estimate the calibration error; this is similar to the proposed assumption, as $\frac{1}{k}\sum_{i=1}^{k}\hat{p}_i - \frac{1}{k}\sum_{i=1}^{k} y_i = \frac{1}{k}\sum_{i=1}^{k}(\hat{p}_i - y_i)$. Second, the recently proposed decision calibration algorithm (Zhao et al., 2021) also uses the average difference between the predictions and labels in a close neighborhood. In the decision calibration algorithm, the close neighborhoods are defined in each iteration by the partition. An estimator based on the assumption of equal class distributions has also been previously used. It is the basis of the histogram binning method (Zadrozny & Elkan, 2001), where the close neighborhood is defined by the binning scheme; in each bin, the average one-hot label vector is used to calibrate the predictions. The weighted average one-hot label vector is also used in recent work by Popordanoska et al. (2021), although for a training-time calibration method, not a post-hoc calibration method. In their work, all the neighbors of a prediction are assigned a weight with a kernel function; the weighted average of the label vectors is then used to estimate the calibrated prediction.

**Defining the close neighborhood** For both assumptions, two open questions remain: 1. How many instances should the close neighborhood cover? 2. How should the close neighborhood be defined? To answer the first question: the more neighbors taken, the lower the variance of the estimators; however, the further away the neighbors are, the bolder the assumptions and the larger the bias they introduce. Therefore, a sweet spot in the bias-variance tradeoff has to be found. This can be achieved if the used neighborhood scheme offers a hyperparameter defining the neighborhood size. The hyperparameter can then be optimized with cross-validation using Brier score or log-loss. 
There is no simple answer to the second question. Histogram binning and the ECE algorithm define the close neighborhood with a binning scheme. However, the binning schemes are only defined for the binary case. The decision calibration algorithm (Zhao et al., 2021) defines the close neighborhoods by a linear partition that splits the probability simplex such that the calibration error estimates are maximized. Popordanoska et al. (2021) define the neighborhood by assigning weights with a kernel function.

## 3.2 LECE Calibration

One intuitive approach is to define the close neighborhood separately for every data point: for some prediction $\hat{p}$, the close neighborhood can be defined by the $k$ closest instances in the validation dataset. Defining the neighborhood this way essentially results in a modified k-nearest-neighbors algorithm, which we call LECE calibration. For any uncalibrated prediction $\hat{p}$, LECE calibration would: 1. Find the $k$ closest predictions in the validation dataset, $\hat{p}_1, \ldots, \hat{p}_k$, according to some neighborhood function $d$. 2. Estimate the calibration error of $\hat{p}$ by finding the average difference between its neighbors' predictions and labels: $$\widehat{CE}_{neigh}(\hat{p}) = \frac{1}{k}\sum_{i=1}^{k}(\hat{p}_i - y_i) = \frac{1}{k}\sum_{i=1}^{k}\widehat{CE}(\hat{p}_i).$$ 3. Subtract the calibration error estimate $\widehat{CE}_{neigh}(\hat{p})$ from the uncalibrated prediction $\hat{p}$ to calibrate: $$\hat{c}(\hat{p}) = \hat{p} - \widehat{CE}_{neigh}(\hat{p}).$$ Similarly to the LECE calibration algorithm, a LECD calibration algorithm could be defined as well, with the only difference being the underlying assumption used: instead of steps 2 and 3 of the algorithm, the method would set the calibrated prediction equal to the average label value of the neighbors, $\hat{c}(\hat{p}) = \frac{1}{k}\sum_{i=1}^{k} y_i$. For the neighborhood function $d$, we considered Kullback-Leibler divergence and Euclidean distance. 
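The three steps above can be sketched as a short numpy routine. This is an illustrative reimplementation under our own naming, not the code released with the paper; it uses Kullback-Leibler divergence (one of the two neighborhood functions considered) and omits the thresholding of tiny probabilities:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """d_KL(p, q) = sum_i p_i * log(p_i / q_i); terms with p_i = 0 contribute 0."""
    p, q = np.asarray(p), np.asarray(q)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps)))

def lece_calibrate(p_hat, val_preds, val_labels, k=100):
    """Steps 1-3 of LECE calibration for a single prediction vector p_hat."""
    # 1. find the k nearest validation predictions under d_KL(p_hat, .)
    dists = np.array([kl_divergence(p_hat, q) for q in val_preds])
    idx = np.argsort(dists)[:k]
    # 2. estimate the local calibration error as the average residual
    ce_neigh = (val_preds[idx] - val_labels[idx]).mean(axis=0)
    # 3. subtract the estimate from the uncalibrated prediction
    return p_hat - ce_neigh
```

On a toy validation set where every prediction is [0.8, 0.1, 0.1] but only 60% of the labels are class 1, the routine returns approximately [0.6, 0.2, 0.2], removing the overconfidence.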
As shown in the real data experiments in Section 4.2.4, Kullback-Leibler divergence performed slightly better. To find the neighbors of $\hat{\mathbf{p}}$, Kullback-Leibler divergence is applied as $d_{KL}(\hat{\mathbf{p}}, \cdot)$, where $d_{KL}(\hat{\mathbf{p}}, \hat{\mathbf{p}}') = \sum_{i=1}^{m} \hat{p}_i \log\frac{\hat{p}_i}{\hat{p}'_i}$, and the term in the sum is considered equal to 0 if $\hat{p}_i = 0$ and equal to $\infty$ if $\hat{p}_i \neq 0$ and $\hat{p}'_i = 0$. The number of classes is denoted by $m$.

**Thresholding tiny probabilities** A problem inherent to the non-parametric LECE (and LECD) calibration method is its inability to work well for tiny probabilities. This is because the method uses an estimator, which has some built-in error coming from its bias and/or variance. For class probabilities very near 0, the error of the estimator becomes very large in proportion to the probability. To see this, consider a true class probability $p_i$ estimated based on $k$ neighbors. In the ideal case where all the neighbors have the same true label distribution, the variance of this estimator is $\frac{p_i(1-p_i)}{k}$. Hence, the estimator's relative error (standard deviation divided by the true value) is $\sqrt{p_i(1-p_i)/k}\,/\,p_i = \sqrt{\frac{1-p_i}{p_i k}}$, which becomes increasingly large as $p_i$ gets small. This can even lead to situations where the LECE method produces output that is smaller than 0 for some classes and can no longer be interpreted as probabilities. For example, consider $\hat{p}_i = 0.01$ and suppose the average prediction of its $k$ neighbors is $\bar{p} = 0.03$ and the average label is $\bar{y} = 0.01$. In that case, the calibration error estimate is $\widehat{CE}_{neigh}(\hat{p}_i) = 0.03 - 0.01 = 0.02$ and the calibrating transformation yields $\hat{c}(\hat{p}_i) = \hat{p}_i - \widehat{CE}_{neigh}(\hat{p}_i) = 0.01 - 0.02 = -0.01$, which is no longer on the probability simplex. Therefore, to overcome this problem with small probabilities, we introduce one more parameter to the calibration method: a threshold value $t \in \mathbb{R}$ that sets a lower limit below which the method is not applied. For any class probability smaller than $t$, we do not apply the method.
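One way to realize this guard is the following sketch (our code, not the reference implementation): the threshold is checked against both the uncalibrated value and its would-be-calibrated counterpart, and the result is renormalized to sum to 1:

```python
import numpy as np

def apply_threshold(p_hat, c_hat, t):
    """Keep the uncalibrated value for any class whose prediction or
    would-be-calibrated value is at or below threshold t, then
    renormalize the result so it sums to 1."""
    p_hat, c_hat = np.asarray(p_hat, float), np.asarray(c_hat, float)
    out = np.where((p_hat <= t) | (c_hat <= t), p_hat, c_hat)
    return out / out.sum()
```

In the worked example above, the class with $\hat{p}_i = 0.01$ and would-be-calibrated value $-0.01$ would be reset to $0.01$ before renormalization, keeping the output on the probability simplex.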
As the true class probability $p_i$ is unknown, we instead apply this threshold to both the uncalibrated prediction $\hat{p}_i$ and the corresponding would-be-calibrated prediction $\hat{c}_i(\hat{\mathbf{p}})$. More precisely, given the prediction vector $\hat{\mathbf{p}}$ and the would-be-calibrated prediction vector $\hat{\mathbf{c}}(\hat{\mathbf{p}}) = \hat{\mathbf{p}} - \widehat{CE}_{neigh}(\hat{\mathbf{p}})$, if for the $i$-th class $\hat{p}_i \leq t$ or $\hat{c}_i(\hat{\mathbf{p}}) \leq t$, then we set $\hat{c}_i(\hat{\mathbf{p}}) = \hat{p}_i$, where $\hat{\mathbf{c}}(\cdot) = (\hat{c}_1(\cdot), \ldots, \hat{c}_m(\cdot))$. Thresholding can cause the final output to no longer sum to 1; to solve this, as a final step we divide the output vector by its sum. As shown by the optimal threshold values chosen by hyperparameter tuning in the real data experiments in Table 11, the LECE method chooses small threshold values ranging from $t = 0$ to $t = 0.02$.

**Composition with parametric methods** The authors of the decision calibration algorithm noticed that their proposed non-parametric post-hoc calibration method works better when applied in composition with temperature scaling (Zhao et al., 2021). First, temperature scaling learns the calibration map on the validation dataset, and then their method fine-tunes the result of temperature scaling using the temperature-scaled validation data. The benefit of composing parametric and non-parametric calibration methods was also shown by Zhang et al. (2020), who noted that isotonic regression applied in a one-versus-rest manner works better when applied on top of temperature scaling. A similar observation holds for the non-parametric LECE calibration method proposed in this work. The experiments on real data in Section 4.2 show that the proposed method loses to existing parametric post-hoc calibration methods when applied directly, but wins when applied on top of temperature scaling. From the experiments it can be concluded that on datasets with few samples per class, LECE alone is not strong enough to compete with methods making parametric assumptions.
However, LECE has its merits: it is less constrained than its competitors, and therefore it can offer improvements on top of parametric methods wherever they are limited by their parametric family. Additionally, the violations of the LECE assumption are likely to diminish when the calibration errors become smaller, e.g., on top of temperature scaling.

**Algorithm 1: LECE calibration method**

Input: predictions on the validation set $\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_n$; validation set labels $\mathbf{y}_1, \ldots, \mathbf{y}_n$; prediction to calibrate $\hat{\mathbf{p}}$; neighborhood size $k$; distance function $d$; threshold $t$; number of classes $m$

Output: calibrated prediction $\hat{\mathbf{c}}(\hat{\mathbf{p}})$

1. $D \leftarrow$ distances $d(\hat{\mathbf{p}}, \hat{\mathbf{p}}_i)$ for $i$ in $1, \ldots, n$
2. $I \leftarrow$ indices of the $k$ smallest values in $D$
3. $\bar{\mathbf{p}} \leftarrow \frac{1}{k}\sum_{i \in I} \hat{\mathbf{p}}_i$ // average prediction
4. $\bar{\mathbf{y}} \leftarrow \frac{1}{k}\sum_{i \in I} \mathbf{y}_i$ // average label
5. $\widehat{CE} \leftarrow \bar{\mathbf{p}} - \bar{\mathbf{y}}$ // calibration error estimate
6. $\hat{\mathbf{c}}(\hat{\mathbf{p}}) \leftarrow \hat{\mathbf{p}} - \widehat{CE}$ // initial calibrated prediction
7. for $i$ in $1, \ldots, m$ do
8. if $\hat{p}_i \leq t$ or $\hat{c}_i(\hat{\mathbf{p}}) \leq t$ then
9. $\hat{c}_i(\hat{\mathbf{p}}) \leftarrow \hat{p}_i$ // thresholding
10. end
11. end
12. $\hat{\mathbf{c}}(\hat{\mathbf{p}}) \leftarrow \hat{\mathbf{c}}(\hat{\mathbf{p}}) / \sum_{i=1}^{m} \hat{c}_i(\hat{\mathbf{p}})$ // ensure the output sums to 1
13. return $\hat{\mathbf{c}}(\hat{\mathbf{p}})$

The complete pseudocode of LECE calibration with thresholding is presented in Algorithm 1. Note that if LECE calibration is applied in composition with temperature scaling, the only difference in Algorithm 1 is that the inputs $\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_n$ and $\hat{\mathbf{p}}$ are the outputs of temperature scaling. A discussion of the computational and memory complexity of Algorithm 1 is given in Appendix A.2. To summarize, the LECE calibration method is essentially a k-nearest-neighbors algorithm using the neighbors' prediction-label differences; the method involves thresholding of tiny probabilities; and it works best when composed with a parametric method.

## 4 Experiments

The experiments section consists of two parts:

- A small experiment on synthetically generated data. The goal of this experiment is to illustrate and give visual intuition about the different post-hoc calibration methods.
- Larger experiments on two real datasets and three convolutional neural network classifiers. The goal of these experiments is to compare the proposed post-hoc calibration method with its competitors and to see if it can offer improvement over the state-of-the-art.

## 4.1 Synthetic Data Experiment

To illustrate the differences between the assumption of locally equal calibration errors (LECE) and the assumption of locally equal class distributions (LECD), as well as the limitations of existing post-hoc calibration methods, a synthetic dataset was created. A synthetic approach makes it possible to define the true calibration function $c(\hat{\mathbf{p}}) = \mathbb{E}_Y[Y \mid \hat{P} = \hat{\mathbf{p}}]$, which is not available in any real-world dataset.

## 4.1.1 Data Generation

For the experiment, a 3-class validation dataset consisting of 5000 prediction and label pairs and a test dataset consisting of 100000 prediction and label pairs were generated. Note that as calibration applies to the predictions and does not depend on the original feature space, we directly generate predictions without considering the features or the classification model at all. Classifier predictions $\hat{\mathbf{p}}$ were sampled from a 3-class Dirichlet distribution with parameters [0.5, 0.5, 0.5]; the label distributions were calculated by applying the chosen true calibration function $c(\hat{\mathbf{p}}) = \mathbb{E}_Y[Y \mid \hat{P} = \hat{\mathbf{p}}]$ to the predictions; the labels $y$ were sampled from $c(\hat{\mathbf{p}})$. The chosen true calibration function is defined as $c(\hat{\mathbf{p}}) = \left(\hat{p}_1^{0.8} + \frac{\hat{p}_1 \hat{p}_2}{5},\; \hat{p}_2 + \frac{\hat{p}_1 \hat{p}_3}{3},\; \hat{p}_3 + \frac{\hat{p}_1 \hat{p}_2}{10}\right)/Z$, where $Z$ is the renormalizing term ensuring the components sum to 1. The chosen function is depicted in Figure 1c and repeated in Figure 2d for convenience. Note that the results of the synthetic experiment should be considered purely illustrative, as the function $c(\cdot)$ was chosen rather arbitrarily and with a different function the results could differ.
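The generation procedure can be sketched as follows (our code; the Dirichlet parameters and the map $c$ are taken from the description above, variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def true_calibration(P):
    """The chosen true calibration map c(p), renormalized row-wise
    so each label distribution sums to 1."""
    p1, p2, p3 = P[:, 0], P[:, 1], P[:, 2]
    C = np.stack([p1**0.8 + p1 * p2 / 5,
                  p2 + p1 * p3 / 3,
                  p3 + p1 * p2 / 10], axis=1)
    return C / C.sum(axis=1, keepdims=True)

n = 5000  # validation set size; the test set uses 100000 instead
P = rng.dirichlet([0.5, 0.5, 0.5], size=n)          # predictions
C = true_calibration(P)                              # label distributions
labels = np.array([rng.choice(3, p=c) for c in C])   # sampled labels
```

The test set is generated the same way with `n = 100000` and a different seed.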
## 4.1.2 Compared Post-Hoc Calibration Methods

Several post-hoc calibration methods were evaluated on the synthetic task:

- Two different histogram binnings with 5 equal-width bins: a classical version with the LECD assumption (**H-LECD**) and a novel version with the LECE assumption (**H-LECE**). The histogram binning methods were chosen to visualize the difference between the two assumptions formalized in Section 3; a visualization of the two assumptions is provided in Figure 1.

- Temperature scaling (TS) to illustrate the limitations of symmetric calibration methods, which can only learn transformations that act the same way for all classes. Temperature scaling was applied to $\log(\hat{\mathbf{p}})$ as no logits were available for the task.

- Dirichlet calibration (DIR) to show the limited expressivity of the otherwise powerful parametric methods. Dirichlet calibration was fit without regularization, as the number of parameters is low with 3 classes.

- Our proposed LECE calibration method (**LECE**) to illustrate its merits. The method was applied with $k = 500$ neighbors and a threshold value of $t = 0$ (thus applying thresholding only to ensure that outputs are non-negative). Note that this synthetic calibration task is meant to be purely illustrative of the different calibration methods; therefore, the hyperparameters of LECE were manually chosen to show that the method can work well given good hyperparameters. The experiments on real data, where the hyperparameters are tuned with cross-validation, show that good hyperparameter values can be found in practice as well.

## 4.1.3 Results

The learned calibration maps of H-LECD and H-LECE are depicted in Figure 1; the maps of all other methods are shown in Figure 2. Both the background colors and the black arrows in Figure 1 and Figure 2 depict the learned transformation.
The same information that is represented by the arrows is also represented by the RGB color value, which is in a linear relation with the calibration error vector at that point: red for class 1, green for class 2, and blue for class 3. Table 1 contains the results of the synthetic experiment according to several different numeric metrics; it shows the mean over 100 data generation seeds together with the standard deviation. For the synthetic experiments, the ECE measures are replaced with the true calibration errors (CE), as the true calibration function is known (see Section 2.3 for details): CE is measured with α = 1 and the expectation is replaced with the empirical test set average. The last column in Table 1 depicts the results obtained when applying the true calibration map, i.e., the theoretical best result the methods could achieve.

Table 1: Results of the illustrative synthetic experiment: the mean and standard deviation over 100 data seeds are reported. For details of the applied methods, see Section 4.1.2. H-LECD (in Figure 1a) is outperformed by H-LECE (in Figure 1b). TS (in Figure 2a) is limited by symmetry and loses to DIR (in Figure 2b). DIR is limited by its parametric family and loses to LECE (in Figure 2c). The final column (true) depicts the theoretical best result, which is obtained by applying the true calibration map (in Figure 2d).

| | H-LECD | H-LECE (ours) | TS | DIR | LECE (ours) | true |
|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| confidence CE | 0.056 ± 0.001 | 0.019 ± 0.002 | 0.030 ± 0.002 | 0.022 ± 0.004 | 0.018 ± 0.003 | 0.000 ± 0.000 |
| classwise CE | 0.049 ± 0.001 | 0.016 ± 0.001 | 0.027 ± 0.001 | 0.018 ± 0.002 | 0.015 ± 0.002 | 0.000 ± 0.000 |
| Brier score | 0.448 ± 0.001 | 0.438 ± 0.001 | 0.440 ± 0.001 | 0.438 ± 0.001 | 0.438 ± 0.001 | 0.436 ± 0.001 |
| log-loss | 0.772 ± 0.002 | 0.749 ± 0.010 | 0.751 ± 0.002 | 0.747 ± 0.002 | 0.742 ± 0.002 | 0.738 ± 0.002 |
| accuracy | 0.668 ± 0.002 | 0.671 ± 0.002 | 0.669 ± 0.001 | 0.671 ± 0.001 | 0.670 ± 0.001 | 0.671 ± 0.001 |

![12_image_0.png](12_image_0.png)

Figure 2: Illustrative example of calibration maps produced by different post-hoc calibration methods on a synthetic calibration task (a, b, c). The goal of the methods is to learn the true calibration map (d). Temperature scaling (a) is limited by its symmetric calibration map. Dirichlet calibration (b) performs well, but is held back by its parametric family: it fails to imitate the calibration arrows of the true calibration map (d) for small values of $\hat{p}_1$. LECE calibration (c) manages to learn a transformation very similar to the true calibration map (d).

First, when comparing the two histogram binnings, the evaluation metrics in Table 1 confirm the visual intuition available from Figure 1a and Figure 1b: the newly proposed LECE assumption significantly outperforms the classical LECD assumption. Second, as seen in Figure 2a, temperature scaling is clearly not sufficient for the task, as it is limited to a symmetric transformation. Third, Dirichlet calibration, one of the strongest existing competitors, learns a result close to the true calibration map, but is held back by its parametric family: note its poor performance for values of $\hat{p}_1$ close to 0, where the learned calibration arrows do not resemble the arrows of the true calibration map. Overall, the proposed LECE calibration method depicted in Figure 2c learns the transformation most similar to the true calibration map.

## 4.2 Real Data Experiments

The goal of the real data experiments is to see if the proposed method can improve the state-of-the-art in practical settings.

## 4.2.1 Datasets And Models

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) are used for the experiments.
On both datasets, the predictions of the convolutional neural networks ResNet-110 (He et al., 2016), ResNet Wide 32 (Zagoruyko & Komodakis, 2016), and DenseNet-40 (Huang et al., 2017) are used. The precomputed logits of these three CNNs were provided by Kull et al. (2019), and the same validation and test set split of the logits was used as in the experiments of Kull et al. (2019) and Rahimi et al. (2020). An overview of the used datasets, classifiers, and dataset sizes is given in Table 2.

## 4.2.2 Compared Post-Hoc Calibration Methods

On the real datasets, the following methods are compared:

- Uncalibrated predictions (**uncal**) as a baseline.

- Matrix scaling with ODIR (MS); the best hyperparameters for the dataset and classifier combinations were provided by the authors of ODIR for matrix scaling (Kull et al., 2019).

- Diagonal subfamily of intra-order-preserving functions (IOP); the best hyperparameters for the dataset and classifier combinations were obtained from the original article (Rahimi et al., 2020).

- Gaussian process calibration (GP) applied on logits (Wenger et al., 2020).

- Temperature scaling (TS) (Guo et al., 2017).

Table 2: Datasets and model details for the real experiments. The precomputed logits were provided by Kull et al. (2019), and the same validation and test set split of the logits was used as in the experiments of Kull et al. (2019) and Rahimi et al. (2020).

| Dataset | Models | Classes | Training | Validation | Test |
|-----------|-----------------------------------------|---------|----------|------------|-------|
| CIFAR-10 | DenseNet-40, ResNet-110, ResNet Wide 32 | 10 | 45000 | 5000 | 10000 |
| CIFAR-100 | DenseNet-40, ResNet-110, ResNet Wide 32 | 100 | 45000 | 5000 | 10000 |

- Decision calibration (Zhao et al., 2021); trained to achieve decision calibration with respect to all loss functions with 2 decisions; the number of trained iterations was determined by looking at the test set (!)
Brier score to save computational resources, thus giving a slight unfair advantage to this method; the final output was normalized to sum to 1, which was not done in the original implementation but is inevitable for log-loss evaluation; applied in composition with temperature scaling (**TS+DEC**) as recommended by the original paper.

- Isotonic regression calibration (Zadrozny & Elkan, 2002) applied in a one-vs-rest approach (IR), and the same method in composition with temperature scaling (**TS+IR**) as proposed by Zhang et al. (2020). In both cases, the final output is normalized by dividing the output vector by its sum.

- Our proposed method; the optimal hyperparameters were found with a 10-fold cross-validation grid search optimized for log-loss, with neighborhood size proportions of the training dataset $q = k/n \in \{0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.1, 0.2, 1.0\}$ and threshold values $t \in \{0, 0.00125, 0.0025, 0.005, 0.01, 0.02, 0.04, 0.05, 0.10, 1.0\}$. Note that including $t = 1$ as a possible value allows the calibration method to learn the identity map. A neighborhood size proportion of 0.01 stands for 0.01 · 5000 = 50 neighbors in the validation set and 0.01 · 4500 = 45 neighbors in one fold of the 10-fold cross-validation, as there are 5000 total data points in the validation set. The reason for using a fixed proportion $q$ instead of a fixed number of neighbors $k$ is to ensure that the neighborhood covers approximately the same subregion of the probability simplex on data folds of different sizes. LECE calibration is applied in composition with temperature scaling (**TS+LECE**) and without temperature scaling (**LECE**). For methods applied in composition with temperature scaling, temperature scaling is always applied first.

## 4.2.3 Results

The following paragraphs present the results for confidence ECE, classwise ECE, log-loss, and accuracy.

**Confidence ECE** Table 3 presents the results for confidence ECE.
Both confidence and classwise ECE are measured with 15 equal-sized bins, also known as data-dependent bins, according to the formulas described in Section 2.4. According to confidence ECE, the best-performing methods are GP and our proposed method TS+LECE. Without TS, LECE performs poorly on the real datasets: it is heavily outperformed by every other method. However, when applied in composition with TS, it improves the result of TS in 5 out of 6 cases.

Table 3: Confidence ECE (×10²) applied with 15 equal-sized bins according to the formula described in Section 2.4. For details of the applied methods, see Section 4.2.2. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. On average, the best-performing methods are Gaussian process calibration (GP) and our proposed method TS+LECE.

| | ours | | | | | | | ours | | | |
|----------------|-------------|---------|--------|--------|-------|-------|--------|--------|---------|-------|-------|
| uncal | IR | LECE | MS | IOP | GP | TS | TS+DEC | TS+IR | TS+LECE | | |
| C-10 | DenseNet-40 | 5.4910 | 1.678 | 3.729 | 0.914 | 0.802 | 0.863 | 0.925 | 1.136 | 1.397 | 0.691 |
| ResNet-110 | 4.7510 | 1.448 | 3.319 | 0.995 | 0.913 | 0.882 | 0.944 | 1.016 | 1.027 | 0.651 | |
| ResNet Wide 32 | 4.4810 | 1.078 | 2.059 | 0.756 | 0.693 | 0.391 | 0.693 | 0.767 | 0.693 | 0.602 | |
| average rank | 10.0 | 8.0 | 9.0 | 5.0 | 2.7 | 2.0 | 4.0 | 6.3 | 5.7 | 1.3 | |
| C-100 | DenseNet-40 | 21.1610 | 4.868 | 10.499 | 1.224 | 3.456 | 0.892 | 0.791 | 3.857 | 2.125 | 1.183 |
| ResNet-110 | 18.4810 | 5.808 | 6.369 | 2.314 | 2.805 | 1.891 | 2.133 | 3.407 | 3.176 | 1.992 | |
| ResNet Wide 32 | 18.7810 | 5.748 | 15.539 | 1.855 | 1.043 | 0.872 | 1.414 | 3.347 | 2.956 | 0.851 | |
| average rank | 10.0 | 8.0 | 9.0 | 4.3 | 4.7 | 1.7 | 2.7 | 7.0 | 5.7 | 2.0 | |

Table 4: Classwise ECE (×10²) applied with 15 equal-sized bins according to the formula described in Section 2.4.
For details of the applied methods, see Section 4.2.2. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. On average, the best-performing methods are matrix scaling (MS) and our proposed method TS+LECE. The row CIFAR-10 ResNet Wide 32 illustrates the limitations of symmetrical methods: there, all symmetrical methods (IOP, GP, TS) offer minimal improvements over the uncalibrated predictions.

| | | ours | | | | | | ours | | | |
| | uncal | IR | LECE | MS | IOP | GP | TS | TS+DEC | TS+IR | TS+LECE | |
| C-10 | DenseNet-40 | 0.44510 | 0.2807 | 0.4119 | 0.2141 | 0.2513 | 0.2595 | 0.2554 | 0.2918 | 0.2646 | 0.2342 |
| ResNet-110 | 0.35810 | 0.2608 | 0.2979 | 0.1801 | 0.2122 | 0.2153 | 0.2164 | 0.2487 | 0.2346 | 0.2164 | |
| ResNet Wide 32 | 0.49610 | 0.3196 | 0.2925 | 0.1811 | 0.4529 | 0.4407 | 0.4468 | 0.2824 | 0.2462 | 0.2462 | |
| average rank | 10.0 | 7.0 | 7.7 | 1.0 | 4.7 | 5.0 | 5.3 | 6.3 | 4.7 | 2.7 | |
| C-100 | DenseNet-40 | 0.1308 | 0.1177 | 0.20610 | 0.0952 | 0.1106 | 0.1024 | 0.1024 | 0.1399 | 0.0952 | 0.0941 |
| ResNet-110 | 0.1328 | 0.1197 | 0.15910 | 0.1105 | 0.0964 | 0.0921 | 0.0943 | 0.1359 | 0.1146 | 0.0921 | |
| ResNet Wide 32 | 0.1248 | 0.1227 | 0.13610 | 0.0942 | 0.1055 | 0.1055 | 0.1034 | 0.1248 | 0.0963 | 0.0891 | |
| average rank | 8.0 | 7.0 | 10.0 | 3.0 | 5.0 | 3.3 | 3.7 | 8.7 | 3.7 | 1.0 | |

**Classwise ECE** Table 4 presents the results for classwise ECE. Note that classwise ECE values tend to be a lot smaller than confidence ECE: this is because most predictions coming from the softmax function are tiny and wash out the ECE score (Nixon et al., 2019). On CIFAR-10, matrix scaling is clearly the best-performing method. On CIFAR-100, our proposed TS+LECE performs best, but only marginally: many other methods offer similar scores.
The results on CIFAR-10 ResNet Wide 32 expose the limitations of symmetrical methods that perform the same transformation for all classes. In that case, IOP, GP, and TS all fail and produce poor classwise ECE scores around 0.440, which is only slightly lower than the uncalibrated result of 0.496. Methods not limited by symmetry offer substantially better results, all producing scores below 0.320, with matrix scaling even reaching as low as 0.181. Without TS, LECE again performs poorly: for CIFAR-100 it even worsens the result of the uncalibrated predictions. However, similarly to confidence ECE, when applied in composition with TS, it offers consistent improvements over TS.

**Log-loss** Table 5 displays the results for log-loss. The best method according to log-loss is clearly matrix scaling, being ranked first every time. The second-best method is our proposed TS+LECE. The limitations of symmetrical methods can be seen in log-loss as well: on CIFAR-10 ResNet Wide 32, IOP, GP, and TS perform worse than MS and TS+LECE. Without TS, LECE again performs poorly, but in composition with TS, it consistently offers improvements over TS.
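The confidence ECE numbers reported here use equal-sized (equal-mass) bins; such an estimator can be sketched as follows (our illustration; details like tie handling may differ from the exact evaluation code):

```python
import numpy as np

def confidence_ece(P, y, n_bins=15):
    """Confidence ECE with equal-sized (equal-mass) bins: sort the
    instances by confidence, split them into bins of near-equal count,
    and average the absolute gap between mean confidence and accuracy
    per bin, weighted by bin size."""
    conf = P.max(axis=1)                       # predicted confidence
    correct = (P.argmax(axis=1) == y).astype(float)
    order = np.argsort(conf)                   # sort by confidence
    return sum(
        len(b) / len(y) * abs(conf[b].mean() - correct[b].mean())
        for b in np.array_split(order, n_bins) if len(b) > 0
    )
```

With fewer instances than bins, empty bins are simply skipped.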
| | ours | | | | | | ours | | | | |
| uncal | IR | LECE | MS | IOP | GP | TS | TS+DEC | TS+IR | TS+LECE | | |
| C-10 | DenseNet-40 | 0.42810 | 0.2688 | 0.3109 | 0.2221 | 0.2253 | 0.2265 | 0.2253 | 0.2667 | 0.2616 | 0.2232 |
| ResNet-110 | 0.35810 | 0.2507 | 0.2679 | 0.2041 | 0.2084 | 0.2062 | 0.2095 | 0.2326 | 0.2548 | 0.2062 | |
| ResNet Wide 32 | 0.38210 | 0.2146 | 0.2649 | 0.1821 | 0.1925 | 0.1913 | 0.1913 | 0.2348 | 0.2197 | 0.1852 | |
| average rank | 10.0 | 7.0 | 9.0 | 1.0 | 4.0 | 3.3 | 3.7 | 7.0 | 7.0 | 2.0 | |
| C-100 | DenseNet-40 | 2.01710 | 1.3677 | 1.7399 | 1.0471 | 1.0665 | 1.0563 | 1.0574 | 1.3386 | 1.4428 | 1.0542 |
| ResNet-110 | 1.69410 | 1.6199 | 1.4987 | 1.0731 | 1.1055 | 1.0822 | 1.0924 | 1.4536 | 1.5918 | 1.0853 | |
| ResNet Wide 32 | 1.80210 | 1.3657 | 1.5769 | 0.9311 | 0.9454 | 0.9393 | 0.9454 | 1.2846 | 1.4018 | 0.9372 | |
| average rank | 10.0 | 7.7 | 8.3 | 1.0 | 4.7 | 2.7 | 4.0 | 6.0 | 8.0 | 2.3 | |

Table 5: Log-loss. For details of the applied methods, see Section 4.2.2. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. On average, the best-performing method is matrix scaling (MS), followed by our proposed TS+LECE.

Table 6: Classifier accuracy. For details of the applied methods, see Section 4.2.2. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. There are no large differences between the methods. The only notable value is isotonic regression combined with temperature scaling (TS+IR) on CIFAR-100 ResNet-110, where it reduces accuracy from 0.715 to 0.710.
| | ours | | | | | | | ours | | | |
| uncal | IR | LECE | MS | IOP | GP | TS | TS+DEC | TS+IR | TS+LECE | | |
| C-10 | DenseNet-40 | 0.9243 | 0.9243 | 0.9243 | 0.9251 | 0.9243 | 0.9243 | 0.9243 | 0.9243 | 0.9243 | 0.9251 |
| ResNet-110 | 0.9361 | 0.9361 | 0.9361 | 0.9361 | 0.9361 | 0.9359 | 0.9361 | 0.9361 | 0.9359 | 0.9361 | |
| ResNet Wide 32 | 0.9397 | 0.9405 | 0.9405 | 0.9421 | 0.9397 | 0.9397 | 0.9397 | 0.9413 | 0.9413 | 0.9421 | |
| average rank | 3.7 | 3.0 | 3.0 | 1.0 | 3.7 | 6.3 | 3.7 | 2.3 | 5.0 | 1.0 | |
| C-100 | DenseNet-40 | 0.7004 | 0.7004 | 0.7004 | 0.7041 | 0.7004 | 0.7004 | 0.7004 | 0.7023 | 0.69710 | 0.7041 |
| ResNet-110 | 0.7153 | 0.7129 | 0.7153 | 0.7153 | 0.7153 | 0.7153 | 0.7153 | 0.7161 | 0.71010 | 0.7161 | |
| ResNet Wide 32 | 0.7384 | 0.7384 | 0.7384 | 0.7401 | 0.7384 | 0.7384 | 0.7384 | 0.7401 | 0.73610 | 0.7401 | |
| average rank | 3.7 | 5.7 | 3.7 | 1.7 | 3.7 | 3.7 | 3.7 | 1.7 | 10.0 | 1.0 | |

**Accuracy** Table 6 presents the accuracies of the methods on the test set. None of the methods offers substantial improvements in accuracy, nor do any of the methods have considerable detrimental effects. In general, the methods perform very similarly. The only notable value is TS+IR on CIFAR-100 ResNet-110, where it reduces the accuracy of the uncalibrated classifier from 0.715 to 0.710.

## 4.2.4 Ablation Study

To understand the importance of the neighborhood function used in the LECE calibration algorithm, and to compare LECE calibration with LECD calibration, we repeat the real data experiments for the following methods:

- LECE calibration with Euclidean distance instead of Kullback-Leibler divergence (**LECEeuc** and **TS+LECEeuc**),

- LECD calibration as described in Section 3.2, i.e., the same method as LECE calibration but using the LECD assumption instead of the LECE assumption (**LECD** and **TS+LECD**).
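The LECD variant differs from LECE calibration only in its final estimate: instead of subtracting an error estimate from the prediction, it returns the neighbors' average label vector directly. A sketch (our code; Euclidean distance is used for brevity):

```python
import numpy as np

def lecd_calibrate(p_hat, P_val, Y_val, k):
    """LECD calibration sketch: the calibrated prediction is the
    average (one-hot) label vector of the k nearest validation
    predictions."""
    D = np.linalg.norm(P_val - p_hat, axis=1)  # Euclidean distances
    I = np.argsort(D)[:k]                      # k nearest neighbors
    return Y_val[I].mean(axis=0)               # average label vector
```

Swapping the distance for KL divergence and adding the thresholding step yields the TS-composable variants compared in the tables below.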
The method parameters were chosen with 10-fold cross-validation from the same parameter sets as for LECE calibration, described in Section 4.2.2. Table 7 presents the results for confidence ECE, Table 8 the results for classwise ECE, and Table 9 the results for log-loss.

Table 7: Ablation study, confidence ECE (×10²) applied with 15 equal-sized bins according to the formula described in Section 2.4. For details of the applied methods, see Section 4.2.4. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. On average, the best-performing method is TS+LECE.

| LECE | LECEeuc | LECD | TS+LECE | TS+LECEeuc | TS+LECD | | |
|----------------|-------------|--------|-----------|--------------|-----------|-------|-------|
| C-10 | DenseNet-40 | 3.726 | 3.304 | 3.435 | 0.691 | 0.762 | 1.193 |
| ResNet-110 | 3.316 | 3.205 | 2.694 | 0.651 | 0.662 | 0.943 | |
| ResNet Wide 32 | 2.055 | 2.936 | 1.934 | 0.603 | 0.532 | 0.501 | |
| average rank | 5.7 | 5.0 | 4.3 | 1.7 | 2.0 | 2.3 | |
| C-100 | DenseNet-40 | 10.495 | 11.166 | 5.974 | 1.182 | 1.213 | 0.791 |
| ResNet-110 | 6.365 | 12.026 | 3.534 | 1.992 | 2.033 | 1.711 | |
| ResNet Wide 32 | 15.536 | 14.215 | 5.764 | 0.851 | 1.162 | 1.413 | |
| average rank | 5.3 | 5.7 | 4.0 | 1.7 | 2.7 | 1.7 | |

Table 8: Ablation study, classwise ECE (×10²) applied with 15 equal-sized bins according to the formula described in Section 2.4. For details of the applied methods, see Section 4.2.4. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. On average, the best-performing method is TS+LECE.

| LECE | LECEeuc | LECD | TS+LECE | TS+LECEeuc | TS+LECD | | |
|----------------|-------------|--------|-----------|--------------|-----------|--------|--------|
| C-10 | DenseNet-40 | 0.4114 | 0.6706 | 0.4535 | 0.2341 | 0.2382 | 0.2673 |
| ResNet-110 | 0.2974 | 0.5576 | 0.3355 | 0.2162 | 0.2021 | 0.2162 | |
| ResNet Wide 32 | 0.2925 | 0.7326 | 0.2854 | 0.2461 | 0.2742 | 0.2773 | |
| average rank | 4.3 | 6.0 | 4.7 | 1.3 | 1.7 | 2.7 | |
| C-100 | DenseNet-40 | 0.2515 | 0.3366 | 0.1724 | 0.0941 | 0.0962 | 0.1023 |
| ResNet-110 | 0.1595 | 0.2696 | 0.1234 | 0.0921 | 0.0953 | 0.0921 | |
| ResNet Wide 32 | 0.1375 | 0.3326 | 0.1264 | 0.0891 | 0.0952 | 0.1033 | |
| average rank | 5.0 | 6.0 | 4.0 | 1.0 | 2.3 | 2.3 | |

Table 9: Ablation study, log-loss. For details of the applied methods, see Section 4.2.4. In every row, the methods are ranked and the rank is displayed as a subindex; the best-performing method is also highlighted in bold. The best-performing method is TS+LECE.

| | LECE | LECEeuc | LECD | TS+LECE | TS+LECEeuc | TS+LECD | |
|----------------|-------------|-----------|--------|-----------|--------------|-----------|--------|
| C-10 | DenseNet-40 | 0.3105 | 0.3054 | 0.3196 | 0.2231 | 0.2242 | 0.2263 |
| ResNet-110 | 0.2675 | 0.2654 | 0.2796 | 0.2061 | 0.2061 | 0.2093 | |
| ResNet Wide 32 | 0.2645 | 0.2604 | 0.2726 | 0.1851 | 0.1872 | 0.1883 | |
| average rank | 5.0 | 4.0 | 6.0 | 1.0 | 1.7 | 3.0 | |
| C-100 | DenseNet-40 | 1.7396 | 1.6815 | 1.4374 | 1.0541 | 1.0541 | 1.0573 |
| ResNet-110 | 1.4985 | 1.5166 | 1.3064 | 1.0851 | 1.0872 | 1.0923 | |
| ResNet Wide 32 | 1.5766 | 1.5445 | 1.2724 | 0.9371 | 0.9412 | 0.9453 | |
| average rank | 5.7 | 5.3 | 4.0 | 1.0 | 1.7 | 3.0 | |

The three tables are discussed in unison, as the methods are ranked similarly across the tables and the key conclusions to be drawn from them are the same. The best method on average in all the tables is TS+LECE.
TS+LECE outperforms its derivative with Euclidean distance, TS+LECEeuc, in almost all cases, with a few exceptions: it loses once in confidence ECE, once in classwise ECE, and is tied twice in log-loss. TS+LECE outperforms TS+LECD as well: it performs better in 13 cases, is tied in 2, and loses in 3 of the 18 total rows across the three tables. When comparing the methods with and without TS, it can be seen that TS is crucial for all of them: adding the composition with TS improves the result in all cases, and by a very large margin. For the cases without TS, LECE and LECEeuc otherwise perform similarly, but for classwise ECE the method LECEeuc fails. Therefore, similarly to the LECE methods applied in composition with TS, Kullback-Leibler divergence can be concluded to perform better than Euclidean distance in the non-compositional case as well. Comparing LECE with LECD, LECE seems to perform better on CIFAR-10 but LECD on CIFAR-100. Yet, as the compositional TS+LECE methods heavily outperformed the non-compositional LECE methods, the final conclusion is still that the LECE assumption is better than the LECD assumption.

Table 10 shows the optimal neighborhood proportion parameter $q$ chosen by the 10-fold cross-validation for the methods. Many different values are represented, from 0.01 (corresponding to 0.01 · 5000 = 50 neighbors) up to 1.0 (corresponding to the whole validation dataset of 5000 neighbors). For TS+LECE, the neighborhood proportion remains in the range between 0.01 and 0.04, corresponding to 50 to 200 neighbors.

Table 10: Optimal neighborhood proportion q chosen by cross-validation. Commonly chosen values are between 0.01 and 0.1 (corresponding to 50 and 500 neighbors, respectively). In some cases the value 1.0 is also chosen, corresponding to the whole validation dataset of 5000 data points. For TS+LECE, the neighborhood proportion remains in the range between 0.01 and 0.04, corresponding to 50 to 200 neighbors.

| LECE | LECEeuc | LECD | TS+LECE | TS+LECEeuc | TS+LECD | | |
|----------------|-------------|--------|-----------|--------------|-----------|------|------|
| C-10 | DenseNet-40 | 0.04 | 0.1 | 0.04 | 0.04 | 0.03 | 0.07 |
| ResNet-110 | 0.06 | 0.1 | 1.0 | 0.03 | 0.02 | 0.01 | |
| ResNet Wide 32 | 0.02 | 0.1 | 0.02 | 0.01 | 0.01 | 0.01 | |
| C-100 | DenseNet-40 | 0.05 | 0.03 | 1.0 | 0.01 | 0.02 | 0.01 |
| ResNet-110 | 0.01 | 0.05 | 1.0 | 0.04 | 0.03 | 0.01 | |
| ResNet Wide 32 | 1.0 | 0.03 | 1.0 | 0.04 | 0.05 | 0.01 | |

Table 11: Optimal threshold t chosen by cross-validation. For methods applied without TS, values close to 0 are chosen, corresponding to no thresholding. For methods applied in composition with TS, the chosen threshold values are usually small: between 0.0025 and 0.02. In three cases, TS+LECD has chosen t = 1.0, meaning it has learned the identity transformation.

| LECE | LECEeuc | LECD | TS+LECE | TS+LECEeuc | TS+LECD | | |
|----------------|-------------|--------|-----------|--------------|-----------|--------|-----|
| C-10 | DenseNet-40 | 0 | 0 | 0 | 0.0025 | 0.0025 | 0.1 |
| ResNet-110 | 0 | 0 | 0.05 | 0.01 | 0.01 | 1.0 | |
| ResNet Wide 32 | 0 | 0 | 0 | 0.01 | 0.02 | 0.05 | |
| C-100 | DenseNet-40 | 0 | 0 | 0.005 | 0.02 | 0.02 | 1.0 |
| ResNet-110 | 0 | 0 | 0.005 | 0.01 | 0.02 | 0.1 | |
| ResNet Wide 32 | 0 | 0 | 0.005 | 0.005 | 0.005 | 1.0 | |

Table 11 shows the optimal threshold parameter $t$ chosen by the 10-fold cross-validation. For methods applied without TS, 0 seems to be a good value, as it was chosen by LECE and LECEeuc in all cases. For methods applied in composition with TS, the chosen threshold values usually remain small: between 0.0025 and 0.02. TS+LECD is an exception, as it uses larger threshold values.
In three cases out of six it even picked t = 1.0, meaning it learned the identity map and kept the result of TS. The running times of LECE and LECD calibration are discussed in Appendix A.3.

## 4.2.5 Discussion

Overall, from the experiments it can be concluded that while many strong methods exist, our proposed TS+LECE can still offer improvements. Symmetrical methods IOP, GP, and TS perform generally well, but can fail for problems where asymmetrical transformations are needed, as shown by classwise ECE and log-loss on CIFAR-10 ResNet Wide 32. Matrix scaling could be considered the best method according to classwise ECE and log-loss, but according to confidence ECE it is clearly outperformed by other methods. TS+LECE avoids the problems of symmetrical methods and offers the best confidence ECE, while being second best in classwise ECE and log-loss.

Another aspect to notice from the experiments is the behaviour of LECE with respect to the number of classes. On the illustrative synthetic task with 3 classes, the method performs very well without TS. However, on the real tasks with 10 and 100 classes, the method fails unless it is used on top of TS. This is due to problems arising from the curse of dimensionality inherent to the proposed non-parametric approach. In higher dimensions, the probability space is more sparsely populated, as every data point starts to become approximately equally distant from every other data point. Because of this, the neighborhood sizes grow and the applied assumption of locally equal calibration errors becomes very bold, lowering the effectiveness of the method. However, applying LECE on top of TS makes the assumption more realistic, as the calibration errors are smaller after TS and hence the errors of neighbors are more similar, as required by the LECE assumption.

## 5 Conclusion

This work explored the field of post-hoc calibration methods for multi-class classifiers.
Two assumptions about the true calibration map were formalized that have previously been used for creating estimators but have never been clearly stated: the assumption of locally equal calibration errors (LECE) and the assumption of locally equal class distributions (LECD). Based on the more reasonable of the two assumptions, a non-parametric calibration method was proposed - LECE calibration. The underlying assumption states that the calibration error of a data point can be modeled by the calibration errors of its neighbors. This results in using the average difference of predictions and labels in a close neighborhood to estimate the calibration error of a prediction. Based on the definition of calibration, this calibration error estimate is then subtracted from the prediction to reach a calibrated prediction.

Experiments were carried out on three convolutional neural networks trained on CIFAR-10 and CIFAR-100 to compare the proposed method with its competitors. The experimental results on real data showed that the proposed method alone is clearly not competitive for cases with many classes and a limited validation dataset, due to problems arising from the curse of dimensionality. However, when the proposed method is applied in composition with temperature scaling, it tops the state of the art in confidence ECE and is close to the best according to classwise ECE and log-loss.

For future work, the limitations of the proposed approach could be studied more thoroughly: how does the improvement in calibration depend on the number of classes in the dataset, on the validation dataset size, or on the distribution of the predictions? In addition, the composition of different calibration methods could be studied further, as this work and several previous ones (Zhang et al., 2020; Zhao et al., 2021) have shown the possible benefits.
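The procedure summarized above admits a compact sketch. The following is a hypothetical NumPy implementation of the core idea, not the authors' exact code: the direction of the KL divergence, the clipping constants, and the final renormalization onto the simplex are assumptions of this sketch.

```python
import numpy as np

def lece_calibrate(p_hat, val_preds, val_labels, k=50):
    """Calibrate a single prediction p_hat (shape (m,)) using a validation
    set of predictions (x, m) and one-hot labels (x, m)."""
    # Kullback-Leibler divergence from p_hat to each validation prediction
    # (clipping guards against log(0); the KL direction is an assumption here).
    p = np.clip(p_hat, 1e-12, 1.0)
    q = np.clip(val_preds, 1e-12, 1.0)
    dists = np.sum(p * (np.log(p) - np.log(q)), axis=1)
    # Average calibration error over the k nearest neighbors:
    # prediction minus label, following the LECE assumption.
    neigh = np.argsort(dists)[:k]
    ce_hat = np.mean(val_preds[neigh] - val_labels[neigh], axis=0)
    # Subtract the estimated calibration error and project back to the simplex.
    c_hat = np.clip(p_hat - ce_hat, 0.0, None)
    return c_hat / c_hat.sum()
```

On the toy example of Appendix A.1, a neighborhood whose predictions all equal (0.6, 0.3, 0.1) with mean label (0.5, 0.3, 0.2) yields the calibrated output (0.5, 0.3, 0.2).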
## Acknowledgments

This work was supported by the European Social Fund via IT Academy programme, Estonian Research Council grant PRG1604, and by the Estonian Centre of Excellence in IT (EXCITE), funded by the European Regional Development Fund.

## References

Yu Bai, Song Mei, Huan Wang, and Caiming Xiong. Don't just blame over-parametrization for overconfidence: Theoretical analysis of calibration in binary classification. In International Conference on Machine Learning, pp. 566–576. PMLR, 2021.

Lucas Clarté, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová. A study of uncertainty quantification in overparametrized high-dimensional models. arXiv preprint arXiv:2210.12760, 2022a.

Lucas Clarté, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová. Theoretical characterization of uncertainty in high-dimensional linear classification. arXiv preprint arXiv:2202.03295, 2022b.

Morris H DeGroot and Stephen E Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12–22, 1983.

Pedro M. Domingos and Michael J. Pazzani. Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier. In ICML, 1996.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On Calibration of Modern Neural Networks. In International Conference on Machine Learning, pp. 1321–1330, 2017.

Chirag Gupta and Aaditya Ramdas. Top-label calibration and multiclass-to-binary reductions. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=WqoBaaPHS-.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.

Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely Connected Convolutional Networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp.
4700–4708, 2017.

Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009. URL https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.

Meelis Kull and Peter Flach. Novel Decompositions of Proper Scoring Rules for Classification: Score Adjustment as Precursor to Calibration. In Machine Learning and Knowledge Discovery in Databases, pp. 68–85, 2015.

Meelis Kull, Telmo Silva Filho, and Peter Flach. Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 623–631, 2017.

Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with Dirichlet calibration. Advances in neural information processing systems, 32, 2019.

Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified Uncertainty Calibration. Advances in Neural Information Processing Systems, 32, 2019.

Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable Calibration Measures for Neural Networks from Kernel Mean Embeddings. In International Conference on Machine Learning, pp. 2805–2814, 2018.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. Advances in neural information processing systems, 30, 2017.

Christian Leibig, Vaneeda Allken, Murat Seçkin Ayhan, Philipp Berens, and Siegfried Wahl. Leveraging uncertainty information from deep neural networks for disease detection. Scientific reports, 7(1):1–14, 2017.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal Loss for Dense Object Detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988, 2017.

Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson.
A Simple Baseline for Bayesian Uncertainty in Deep Learning. Advances in Neural Information Processing Systems, 32, 2019.

Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, and Puneet Dokania. Calibrating Deep Neural Networks using Focal Loss. Advances in Neural Information Processing Systems, 33:15288–15299, 2020.

Rafael Müller, Simon Kornblith, and Geoffrey E Hinton. When Does Label Smoothing Help? Advances in neural information processing systems, 32, 2019.

Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining Well Calibrated Probabilities Using Bayesian Binning. In AAAI Conference on Artificial Intelligence, 2015.

Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning, pp. 625–632, 2005.

Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring Calibration in Deep Learning. In CVPR Workshops, volume 2, 2019.

Kanil Patel, William H. Beluch, Bin Yang, Michael Pfeiffer, and Dan Zhang. Multi-class uncertainty calibration via mutual information maximization-based binning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=AICNpd8ke-m.

John C. Platt. Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. In Advances in large margin classifiers, pp. 61–74, 1999.

Teodora Popordanoska, Raphael Sayer, and Matthew B Blaschko. Calibration Regularized Training of Deep Neural Networks using Kernel Density Estimation. 2021. URL https://openreview.net/forum?id=1-lFH8oYTI.

Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Richard Hartley, and Byron Boots. Intra Order-Preserving Functions for Calibration of Multi-Class Neural Networks. Advances in Neural Information Processing Systems, 33:13456–13467, 2020.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna.
Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826, 2016.

Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas Schön. Evaluating model calibration in classification. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3459–3467, 2019.

Kaspar Valk. Calibration of Multi-Class Probabilistic Classifiers. Master's thesis, University of Tartu, 2022. URL https://comserv.cs.ut.ee/ati_thesis/datasheet.php?id=74918&language=en.

Jonathan Wenger, Hedvig Kjellström, and Rudolph Triebel. Non-Parametric Calibration for Classification. In International Conference on Artificial Intelligence and Statistics, pp. 178–190, 2020.

Bianca Zadrozny and Charles Elkan. Obtaining Calibrated Probability Estimates from Decision Trees and Naive Bayesian Classifiers. ICML, 1, 2001.

Bianca Zadrozny and Charles Elkan. Transforming Classifier Scores into Accurate Multiclass Probability Estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 694–699, 2002.

Sergey Zagoruyko and Nikos Komodakis. Wide Residual Networks. In Proceedings of the British Machine Vision Conference (BMVC), pp. 87.1–87.12, 2016.

Jize Zhang, Bhavya Kailkhura, and T Yong-Jin Han. Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning. In International Conference on Machine Learning, pp. 11117–11128, 2020.

Shengjia Zhao, Michael P Kim, Roshni Sahoo, Tengyu Ma, and Stefano Ermon. Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.
## A Appendix

## A.1 Example Of Different Calibration Definitions

As a toy example to illustrate the different definitions of calibration, consider a 3-class classifier for which $\hat{\mathbf{P}}$ can take two equally likely values, $\hat{\mathbf{P}} = (0.6, 0.3, 0.1)$ or $\hat{\mathbf{P}} = (0.6, 0.2, 0.2)$. Now suppose that the corresponding expected label values for these predictions are

$$\begin{array}{l}{{\mathbb{E}_{Y}\left[Y|\hat{\mathbf{P}}=(0.6,0.3,0.1)\right]=(0.5,0.3,0.2)\mathrm{~and~}}}\\ {{\mathbb{E}_{Y}\left[Y|\hat{\mathbf{P}}=(0.6,0.2,0.2)\right]=(0.7,0.2,0.1).}}\end{array}$$

Such a classifier would not be multi-class calibrated, as $\mathbb{E}_{Y}\left[Y|\hat{\mathbf{P}}=(0.6,0.3,0.1)\right]\neq(0.6,0.3,0.1)$, and also $\mathbb{E}_{Y}\left[Y|\hat{\mathbf{P}}=(0.6,0.2,0.2)\right]\neq(0.6,0.2,0.2)$. The classifier would also not be classwise calibrated, as $\mathbb{E}_{Y}\left[Y_3|\hat{P}_3=0.1\right]\neq 0.1$, and also $\mathbb{E}_{Y}\left[Y_3|\hat{P}_3=0.2\right]\neq 0.2$. However, the classifier would be confidence calibrated, as

$$\mathbb{E}_{Y}\left[Y_{\arg\max \hat{\mathbf{P}}}\,\middle|\,\max \hat{\mathbf{P}}=0.6\right]=0.5\cdot0.5+0.5\cdot0.7=0.6.$$

## A.2 Computational And Memory Complexity Of Lece Calibration

The complete pseudocode of LECE calibration with thresholding was presented in Algorithm 1. The LECE calibration method is essentially a variation of the k-nearest-neighbors algorithm, and its exact memory and computational complexity depends on the implementation. The LECE calibration method does not need any training, but has a considerable computational and memory cost at inference time. On a validation set of size x, a test set of size y, with m classes and k neighbors, the total computational complexity of our implementation is O(m · x · y); it is caused by line 1 of Algorithm 1, which needs O(m · x) calculations for a single test set data point. The memory complexity of our implementation is O(x · m · b + y · m), where b ≤ y is a batch size parameter.
The memory complexity O(x · m · b) is caused by line 1 of the algorithm, where the distances are calculated with matrix operations applied on test set batches of size b. The O(y · m) term is also needed, as the test dataset has to be kept in memory during inference time.

## A.3 Running Times Of Lece Calibration

The LECE method requires no time for training but considerable time for inference, as discussed in Appendix A.2. The real data experiments were implemented in Python and run on a machine with 16 GB of RAM and a CPU with a clock speed of 3.7 GHz. For CIFAR-10 DenseNet-40, LECE calibration with the best hyperparameters reported in Table 10 and Table 11 took 4.7s to calibrate the 10000 test set data points given the 5000 validation set data points (average running time over 10 runs). For CIFAR-100 DenseNet-40, the average running time over 10 runs was 34.2s.

By far the most computationally expensive part of the LECE calibration method is the calculation of distances in line 1 of Algorithm 1. For CIFAR-10, line 1 accounted for 85% of the running time (4.0s of 4.7s), and for CIFAR-100 it was even 97% (33.2s of 34.2s). The second most computationally expensive part of the algorithm is finding the k closest neighbors in line 2 of Algorithm 1. For CIFAR-10, line 2 accounted for 13% of the total running time, and for CIFAR-100 it was 2%. The running times for the LECD calibration method were very similar to LECE, as were the running times for TS+LECE and TS+LECD, since temperature scaling requires only a fraction of a second for training and evaluation. For example, TS+LECE took 4.9s to train the calibration method and evaluate the 10000 test set points on CIFAR-10 DenseNet-40; for CIFAR-100 DenseNet-40 it was 34.8s (average of 10 runs). The running times for ResNet-110 and ResNet Wide 32 were similar to DenseNet-40.
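The dominant cost discussed above (line 1 of Algorithm 1) is the pairwise distance computation. A minimal sketch of how it can be batched over the test set follows; this is a hypothetical implementation, with the KL direction and clipping constant as assumptions. The intermediate (b, x, m) tensor is what produces the O(x · m · b) memory term.

```python
import numpy as np

def pairwise_kl(test_preds, val_preds, batch_size=256):
    """KL divergence from every test prediction (y, m) to every validation
    prediction (x, m), computed in test-set batches of size b = batch_size."""
    p = np.clip(test_preds, 1e-12, 1.0)
    log_q = np.log(np.clip(val_preds, 1e-12, 1.0))
    out = np.empty((len(p), len(val_preds)))
    for start in range(0, len(p), batch_size):
        batch = p[start:start + batch_size]  # shape (b, m)
        # Broadcast to a (b, x, m) tensor: O(x * m * b) memory per batch.
        out[start:start + batch_size] = (
            batch[:, None, :] * (np.log(batch)[:, None, :] - log_q[None, :, :])
        ).sum(axis=2)
    return out
```

Smaller batch sizes lower the peak memory at the cost of more Python-level loop iterations; the total number of arithmetic operations, O(m · x · y), is unchanged.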
Review 1: Summary: The LECD principle is inherent in many post-hoc calibration methods such as temperature scaling, histogram binning, isotonic regression, fixed-width binning, Dirichlet calibration, etc. The LECD principle being: locally (with respect to the pre-hoc classifier), the true label distribution is approximately equal. The authors propose an alternative LECE principle: locally (with respect to the pre-hoc classifier), calibration errors are approximately equal. New calibration methods can be derived based on the LECE principle, and these seem to outperform existing methods that are based on the LECD principle. This is demonstrated through an illustration on synthetic data, and standard baselining on CIFAR-10 and CIFAR-100 (with deepnets). Strengths and Weaknesses: ### Strength: I liked the proposed LECE principle. It is simple, makes complete sense, and as far as I know has not been studied before. Figure 1 is quite useful and the paper is easy to read until the end of Section 3. Experiments suggest the method works well. The authors go beyond only comparing to other calibration methods, also studying the following aspects: - the effect of scaling before applying LECE, - two notions of distance and their parameters, - dealing with near-zero probabilities by thresholding, - behavior with respect to multiple calibration notions as well as proper scoring rules. Once revised and published, I think the LECE method/principle will be a valuable addition to the calibration community. ### Weaknesses: - No 1-v-rest baselines are reported in the empirical comparisons. While some reasons are briefly indicated on page 4 for ignoring 1-v-rest methods ("There are two ... deforms the calibrated probabilities"), no citations or experiments are included. 
I am aware of at least a couple of papers (https://arxiv.org/pdf/2107.08353.pdf, https://arxiv.org/pdf/2006.13092.pdf) where it was shown that 1-v-rest methods can outperform some full-vector scaling methods that you considered. - Some presentation aspects need to be strengthened, as elaborated in the "requested changes" segment. Particularly, the experiments and tables can be improved significantly. Other issues were the misnomer of "histogram binning" for fixed-width binning, lack of good motivation for thresholding small probabilities, and not reporting standard deviations (or other measures of statistical uncertainty) in the empirical findings. - The technical discussion around LECE on page 6 is limited and informal. I think it is better to be informal and brief than to have meaningless theory, so I have not requested a revision on this front. Requested Changes: ### Major: - Please add one or two 1-v-rest baselines in the empirical comparison (perhaps based on histogram binning/Platt scaling/isotonic regression/ Bayesian binning in quantiles, etc). - Histogram binning has bins with equal mass, whereas the method you have used in experiments is equal-width binning. Please refer to the method you are using as equal-width binning to avoid confusion. - Supplement all tables with meaningful captions following standard scientific conventions. In general, the experiments section is currently not well-organized: float placement is not standard (middle of pages instead of top), and the lists of methods split across pages too many times. Not everything needs to be standard but too many deviations are not ideal. - For instance, Table 1 appears suddenly in the middle of Sec 4.1 without a caption and without introducing the acronymized methods. 
The table should perhaps be on top of the following page, and a caption should be included summarizing the takeaway message, describing what the subindices are, what "true" means in the last column, etc. (I mention the things that were unclear to me, but other things could be unclear to other readers, so please include a comprehensive caption). - Sec 4.1: you mention averaging over 10 seeds, but no standard deviation is reported. Thus, I am not sure of the significance of the results in Table 1. Since it is a synthetic experiment, I suggest derandomizing with 1000 simulations and reporting the (very small) standard deviation to make the results almost 100% conclusive. - What splits were used for the CIFAR post-hoc experiments? If splits were standard, refer to where these were obtained from. If splits were random, report standard deviations in the results. ### Minor but strongly recommended - The fixed-width binning in 3-dimensions that you used was initially suggested in the paper by Vaicenavicius et al. (2019), so do cite this contribution. - The title is not very informative since "nonparametric multiclass calibration" is quite broad and the qualification "simple" is subjective. Please provide a more informative title without the subjective qualifier. - Page 5: the description of intra order-preserving functions can be improved. - The motivation/background for "thresholding tiny probabilities" could be made clearer. The description makes a number of statements without theoretical/empirical justification or referencing other work: "these build-in errors become very large proportional to the probability", "LECE method produces output smaller than 0". The method of correction is also not well-motivated; why retain the original probability and not just set c-hat(p-hat) = 0? (In contrast, the paragraph immediately after, "Composition with parametric methods" is very clear!
Citations and independent justifications are provided, and an experiments section is also referenced.) ### Minor and up to the authors - I believe when you say 'locally' in the paper, you mean with respect to the pre-hoc classifier. This may be useful to qualify explicitly. - If the authors can show on an illustrative synthetic example that LECE provably does better than LECD, that would be a very useful addition to the paper. Broader Impact Concerns: None ================================================== Review 2: Summary: This paper uses the assumption that nearby points on the probability simplex have similar calibration mappings to propose a post-hoc calibration method based on the nearest neighbor algorithm. They present two assumptions on the local neighborhood of a point on a probability simplex: 1) LECE assumes nearby points have equal calibration errors, and 2) LECD assumes nearby points have the same class distribution. The authors show that their LECE assumption leads to an unbiased estimation of calibration error. Then the authors discuss the relation of their work to previous works and provide some practical considerations to enhance the results. The results show that the proposed method is comparable to other state-of-the-art methods. Strengths and Weaknesses: **Pros** --------- + The paper is easy to read. + The proposed method is relatively simple, technically sound, and has strong results. + Relation to previous works and discussing the drawbacks and fair comparison to previous methods are valuable. **Cons** --------- - The motivation for the proposed method needs to be improved in the abstract. - The proposed method has the main drawbacks of non-parametric methods: mainly sensitivity to noise and being computationally expensive. These may also need to be discussed in the main text. - The accuracies of the models after calibration are not present in the experiments. - Some drawbacks limit the applicability of the proposed method.
As the number of classes increases, the method becomes computationally expensive and less reliable due to high dimensionality. Similarly, the number of validation samples, whether small or large, affects the results. Although these issues are discussed to some extent in the paper, conducting experiments by varying the number of validation samples or using a greater number of classes (such as ImageNet) and reporting the computation times would further strengthen the paper. Requested Changes: Please see the Strengths And Weaknesses section. Broader Impact Concerns: I see no concern about the broader impact of this work. ================================================== Review 3: Summary: This work focuses on the **post-hoc calibration of the multi-class probabilistic classifier**. The contributions can be summarized as follows: * **1. Formulation of two assumptions** (Section 3.1) The authors propose two assumptions about the true calibration map: (a) Assumption of locally equal calibration errors (LECE) (b) Assumption of locally equal class distributions (LECD) Similar assumptions are implicitly made in existing literature [1, 2, 3, etc] but have never been explicitly stated. * **2. A detailed visualization comparison of different post-hoc calibration methods** The authors adopt a well-designed synthetic dataset to compare the performance of a list of post-hoc calibration methods using the calibration map. (Not 100$\%$ sure if this is a novel contribution or not.) * **3. A non-parametric post-hoc calibration method** (Section 3.2 and Section 4) The authors proposed LECE calibration, which estimates the calibration error via the averaged difference between the prediction and label, w.r.t. its $k$ nearest neighbors. The synthetic (3-classes) dataset illustrates the effectiveness of the LECE in calibration. On real-world CIFAR datasets, LECE is able to achieve promising results when combined with the temperature scaling method.
**References:** [1] Obtaining Well Calibrated Probabilities Using Bayesian Binning, AAAI'15. [2] Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration, NeurIPS'21. [3] Obtaining Calibrated Probability Estimates from Decision Trees and Naive Bayesian Classifiers, ICML'01. Strengths and Weaknesses: **Strengths** * 1. The paper overall is easy to read. The connections between the paper and existing results are well illustrated. * 2. The "formally" summarized two assumptions are beneficial for the literature. * 3. The introduced non-parametric calibration works well in synthetic data. When the number of classes is relatively large, combining with temperature scaling can generate promising results. **Weaknesses** * 1. **The formulated assumptions are kind of informal** The two assumptions are not rigorously formulated. A formal illustration could be much better, i.e., instead of mentioning $\hat{p}_1 \approx \hat{p}_2$, a more rigorous statement could be better since these two assumptions are highlighted as contributions. For example, for some distance measure $D$, if $D(\hat{p}_1, \hat{p}_2)<\delta$ for any $\delta\to 0$, ...... * 2. **Experiment results are not fully convincing** * * 2.1 In the experiments of synthetic datasets, the improvements of LECE (Table 1) are marginal in many settings, i.e., under "accuracy", and "Brier score", the differences between LECE and DIR are negligible. In such cases, is it necessary to give them different ranks? Besides, can authors try 2 other true calibration functions to see if the performance of LECE is still consistently better than DIR? What is more, why do authors choose $k=500, t=0$ in this synthetic dataset, and what are the candidate values of $k, t$ in this synthetic experiment? * * 2.2 In the experiments of CIFAR datasets, LECE performs too badly without using TS. And it is hard to attribute the nice performance of TS+LECE to LECE. Since in certain cases, TS alone could already perform fairly well.
Adding LECE has no further improvements. A couple of questions are listed in the section **Requested changes**. * * 2.3 More detailed explanations on why adding LECE requires TS might be needed, for the experiments on CIFAR datasets. * * 2.4 When looking into the ablation study (Table 7), please correct me if my understanding is incorrect: personally, it seems that the reason why the authors report the performance of TS+LECE rather than others, i.e., $LECE_{\text{euc}}$ or $TS+LECE_{\text{euc}}$ is because of the performance on the dataset for evaluation? For me, I feel like this is kind of cherry-picking. In practice, why do the authors decide on reporting $TS+LECE$ instead of $TS+LECE_{\text{euc}}$ for comparing with other methods? Requested Changes: I summarize my concerns, suggestions, and observed typos below, in the order of their appearance in the paper. * 1. **Suggestion 1:** "Probabilistic classifiers take as input some data and produce as output probability distributions over classes." --> It might be better to say "Probabilistic classifiers take some data as input and produce probability distributions over classes as output." * 2. **Suggestion 2:** "It is possible that a probabilistic classifier is not well-calibrated and produces distorted probabilities. A classifier is considered to be calibrated if its predicted probabilities are in correspondence with the true class distribution." --> It might be better to switch the order of these two sentences. * 3. **Suggestion 3:** A toy example for the illustration of three definitions in Section 2.2 could be much better. * 4. **Suggestion 4:** In Section 2.4, more introduction about proper scoring rules might be better. * 5. **Notation 1 (Expectation):** on page 6, when talking about expectations, it could be better to state what the expectation is with respect to, making the presentation more clear, i.e., $\mathbb{E}_{?}[\hat{CE}_{\text{neigh}}(\hat{p})]$. * 6.
**Suggestion 5:** a brief introduction of the synthetic dataset at the end of page 6 could be much better. Or put the visualization of the assumptions in the experiment section? * 7. **Question 1:** is the visualization (calibration map) a novel visualization tool for comparisons between post-hoc calibrations? * 8. **Question 2:** Figure 1 is kind of hard for me to follow. The differences among certain figures (except for (a)) are not quite straightforward to catch. Can authors adopt two sub-figures and explain their differences in more detail (i.e., (d), (e), (f))? Besides, more explanations on settings of the sub-figure are appreciated, i.e., what information is the color conveying. As for the differences between calibration maps, can they be quantified in numeric values? * 9. **Typo 1:** on page 8, "The LECE assumption outperformed the LECD assumption on the real experiments also". "Also" --> as well. * 10. **Suggestion 6:** the first sentence of section 3.2 reads weird. * 11. **Notation 2 ($\hat{p}_{1i}$):** some notations are not well defined/introduced, i.e., in the introduction of KL-divergence on page 9, $\hat{p}_{1i}$ is not well-defined. * 12. **Notation 3 ($\hat{c}(\hat{p})$):** at the end of paragraph "Thresholding tiny probabilities", the notation is not consistent, i.e., $\hat{\mathbf{c}}(\hat{\mathbf{p}})$ and $\hat{c}(\hat{\mathbf{p}})$. * 13. **Typo 2:** in the last sentence of the paragraph "Composition with parametric methods", white space is needed in the front. * 14. **Exp 1:** can authors try 2 other true calibration functions to see if the performance of LECE is still consistently better than DIR, in the synthetic experiments? * 15. **Exp 2:** why do authors choose $k=500, t=0$ in the synthetic dataset, and what are the candidate values of $k, t$ in this synthetic experiment? * 16. 
**Exp 3:** at the end of section 4.1.3, can the authors explain how the conclusion "the proposed LECE calibration method depicted in Figure 1e learns the most similar transformation to the true calibration map" is made? Is it numerically tested? * 17. **Exp 4:** in experiments of CIFAR datasets, for TS+LECE, what temperature scaling parameter is adopted? And how is it selected? * 18. **Exp 5:** for the comparison between TS and TS+LECE, is TS+LECE always better than TS? * 19. **Exp 6:** in experiments of CIFAR datasets, while searching for optimal parameters, do the authors have any observations on the pattern of good parameters? * 20. **Exp 7:** More detailed explanations on why adding LECE requires TS might be needed, for the experiments on CIFAR datasets. * 21. **Exp 8:** experiment details on how the statistics are calculated (i.e., the evaluation dataset, etc.) are needed for the CIFAR experiments. * 22. **Exp 9:** When looking into the ablation study (Table 7), please correct me if my understanding is incorrect: personally, it seems that the reason why the authors report the performance of TS+LECE rather than others, i.e., $LECE_{\text{euc}}$ or $TS+LECE_{\text{euc}}$ is because of the performance on the dataset for evaluation? For me, I feel like this is kind of cherry-picking. In practice, why do the authors decide on reporting $TS+LECE$ instead of $TS+LECE_{\text{euc}}$ for comparing with other methods? Broader Impact Concerns: There seem to be no ethical concerns about the work. ================================================== Review 4: Summary: This work proposes a new multi-class post-hoc non-parametric calibration method. The method is based on the idea that the calibration error is approximately equal in a small neighbourhood of the simplex ("LECE assumption"), and consists of subtracting the uncalibrated classifier by the averaged calibration error in a neighbourhood on the classifier.
The algorithm therefore depends on a choice of distance on the simplex and has one hyperparameter that is chosen by cross-validation: the size of the neighbourhood. Most of the results consist of benchmarking the algorithm against popular calibration methods (TS, MS, IOP, etc.) and across different architectures (DenseNet-40, ResNet-110, etc.) and datasets, both synthetic and real (CIFAR10, CIFAR100). By itself, the proposed method is outperformed in most of the experiments. However, when combined with temperature scaling, it consistently outperforms all the other benchmarks.

Strengths and Weaknesses:

- **[C1]**: > *A probabilistic classifier is considered calibrated if it outputs probabilities that are in correspondence with empirical class proportions.*

  Being pedantic, this sentence in the abstract is at odds with the definition of calibration given in Section 2.2, where calibration is defined as an expectation over the population. I agree that in practice this is approximated by an empirical average over the validation set, but technically these two are different.

- **[C2]**: > *The problem of over-confident predictions is especially common for modern deep neural networks (Guo et al., 2017; Lakshminarayanan et al., 2017). Distorted output probabilities are also characteristic of many classical machine learning methods such as naive Bayes or decision trees (Niculescu-Mizil & Caruana, 2005; Domingos & Pazzani, 1996).*

  Recent theoretical progress (Bai et al. 2021; Clarté et al. 2022a,b) has shown that over-confidence is widespread in models as simple as high-dimensional logistic regression. I think it is important to mention this since there is a widespread belief in the literature that overconfidence is a problem specific to deep neural networks.

- **[C3]**: Assumptions LECD and LECE on page 6 translate to assumptions on the underlying data distribution / target classifier. More precisely, if data is generated from a target classifier $f_{\star}(x) = \mathbb{P}(Y=1|X=x)$, what do these assumptions intuitively imply about the functional form of $f_{\star}$? For instance, would they be satisfied by a logit or probit model?

- **[C4]**: Why don't the authors report the standard deviation over the independent runs in Tables 1-4? This is not a good practice, especially when the difference between the compared methods is (for certain methods) in the third digit.

- **[C5]**: Since this is a numerical paper based on comparing very different algorithms, I miss a discussion on running times and sample complexity. Are (at least the order of) the running times similar? Both the proposed method and TS require cross-validation. Does this require two validation sets (one for TS, one for the hyperparameter), or are both done together? More generally, do all methods use the same quantity of validation data?

- **[C6]**: It would be nice to have a pseudo-code for LECE and LECE+TS. This is sort of done in the list on Page 9, but a pseudo-code with input, steps, and output clearly stated could help better the implementation and understanding of the points above.

- **[C7]**: I find it curious that, comparing LECE to TS and to TS+LECE, in most cases the highest decrease in over-confidence often comes from TS itself, which remains a simple and easy to implement method. What are your thoughts on that?

**References**

[[Bai et al. 2021]](https://proceedings.mlr.press/v139/bai21c.html) Yu Bai, Song Mei, Huan Wang, Caiming Xiong. *Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification*. Proceedings of the 38th International Conference on Machine Learning, PMLR 139:566-576, 2021.

[[Clarte 2022a]](https://arxiv.org/abs/2202.03295) Lucas Clarté, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová. *Theoretical characterization of uncertainty in high-dimensional linear classification*.
arXiv: 2202.03295

[[Clarte 2022b]](https://arxiv.org/abs/2210.12760) Lucas Clarté, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová. *A study of uncertainty quantification in overparametrized high-dimensional models*. arXiv: 2210.12760

Requested Changes:

**Major**:
- Include error bars / standard deviations in every experiment.
- Add a discussion on running time and how much validation data is used for cross-validation / TS.

**Minor**:
- Add a pseudo-code for the proposed algorithm and the coupled TS version.

Broader Impact Concerns: In my reading there are no broader impact concerns for this work.

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper studies the problem of calibrating multi-class classifiers, a fundamental problem of broad interest to the community. It explicates conditions underpinning the success of calibration that have hitherto not been discussed directly. These are used to devise a new calibration scheme with strong empirical performance. Reviewers unanimously agreed that the paper makes an interesting contribution that would be valuable for the TMLR audience. From our reading, we concur with this view. We are thus pleased to recommend the paper for acceptance. For the final version, we do agree with one reviewer's suggestion that the authors consider reducing the length of the main body.

==================================================
# DIGNet: Learning Decomposed Patterns in Representation Balancing for Treatment Effect Estimation

Yiyan Huang∗ *yiyhuang@polyu.edu.hk*
Department of Applied Mathematics, The Hong Kong Polytechnic University

Siyi Wang *siyi.wang@my.cityu.edu.hk*
School of Data Science, City University of Hong Kong

Cheuk Hang Leung *chleung87@cityu.edu.hk*
School of Data Science, City University of Hong Kong

Qi Wu† *qi.wu@cityu.edu.hk*
School of Data Science, City University of Hong Kong

Dongdong Wang *wangdongdong9@jd.com*
JD Digits

Zhixiang Huang *huangzhixiang@jd.com*
JD Digits

Reviewed on OpenReview: *https: // openreview. net/ forum? id= Z20FInfWlm*

## Abstract

Estimating treatment effects from observational data is often subject to a covariate shift problem incurred by selection bias. Recent research has sought to mitigate this problem by leveraging representation balancing methods that aim to extract balancing patterns from observational data and utilize them for outcome prediction. The underlying theoretical rationale is that minimizing the unobserved counterfactual error can be achieved through two principles: (I) reducing the risk associated with predicting factual outcomes and (II) mitigating the distributional discrepancy between the treated and controlled samples. However, an inherent trade-off between the two principles can lead to a potential loss of information useful for factual outcome predictions and, consequently, deteriorating treatment effect estimations. In this paper, we propose a novel representation balancing model, DIGNet, for treatment effect estimation. DIGNet incorporates two key components, PDIG and PPBR, which effectively mitigate the trade-off problem by improving one aforementioned principle without sacrificing the other.
Specifically, PDIG captures more effective balancing patterns (Principle II) without affecting factual outcome predictions (Principle I), while PPBR enhances factual outcome prediction (Principle I) without affecting the learning of balancing patterns (Principle II). The ablation studies verify the effectiveness of PDIG and PPBR in improving treatment effect estimation, and experimental results on benchmark datasets demonstrate the superior performance of our DIGNet model compared to baseline models.

∗The co-first authors. †The corresponding author.

## 1 Introduction

In the context of the ubiquity of personalized decision-making, causal inference has sparked a surge of research exploring causal machine learning in many disciplines, including economics and statistics (Wager & Athey, 2018; Athey & Wager, 2019; Farrell, 2015; Chernozhukov et al., 2018; Huang et al., 2021), healthcare (Qian et al., 2021; Bica et al., 2021a;b), and commercial applications (Guo et al., 2020a;b; Chu et al., 2021). The core of causal inference is to estimate *treatment effects*, which is closely related to the *factual outcomes* (observed outcomes) and *counterfactual outcomes*. The concept of the counterfactual outcome is closely linked to a fundamental hypothetical question: What would the outcome be if an alternative treatment were received? Answering this question is challenging because counterfactual outcomes are unobservable in reality, making it impossible to directly access ground-truth treatment effects from observational data. Consequently, an increasing amount of recent research has focused on developing innovative machine learning models that aim to enhance the estimation of counterfactual outcomes to obtain more accurate treatment effect estimates. One of the challenges in estimating counterfactual outcomes lies in the *covariate shift* problem.
In observational data, the population can typically be divided into two groups: (i) individuals who received treatment ($T = 1$), referred to as treated samples or *treatment samples*, and (ii) individuals who did not receive treatment ($T = 0$), referred to as controlled samples or *control samples*. The covariate shift problem indicates the difference between the covariate distribution in the treated group and that in the controlled group, meaning $P(X \mid T = 1) \neq P(X \mid T = 0)$. This phenomenon is a result of the non-random treatment assignment mechanism, where the decision to receive treatment (e.g., heart medicine) is often determined by the covariates (e.g., age). For example, people receiving heart medicine treatment tend to be much older than those who do not receive such treatment, because the doctor's decision-making regarding whether to undergo heart medicine treatment highly depends on the patient's age. Such a non-random treatment assignment is known as the selection bias phenomenon in the causal inference literature. Although the covariate shift arises from the association between covariates and treatment, this issue can significantly exacerbate the difficulty of inferring counterfactual outcomes, as traditional machine learning models can be invalid in estimating potential outcomes when a covariate shift is present (Yao et al., 2018; Hassanpour & Greiner, 2019a). Specifically, to infer the potential outcome $Y^0$ for treated ($T = 1$) samples, the conventional approach is to first train a model $\hat{\tau}^0(X)$ using controlled ($T = 0$) samples, and then utilize $\hat{\tau}^0(X)$ to predict $Y^0$ for treated ($T = 1$) samples. This approach, known as the T-learner in the causal inference literature (Curth & Van Der Schaar, 2023; Mahajan et al., 2024), becomes problematic because the training data (control samples) used for model training do not have the same distribution as the test data (treated samples), i.e., $P(X \mid T = 1) \neq P(X \mid T = 0)$.
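To make this failure mode concrete, the following minimal, hypothetical simulation sketches the T-learner described above (all variable names and numeric values are invented for illustration): a linear model for $\hat{\tau}^0$ is fit on control samples only and is then queried on treated units whose covariate distribution is markedly different.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data with selection bias: older individuals
# (larger x) are more likely to be treated, so P(X|T=1) != P(X|T=0).
n = 2000
x = rng.uniform(20, 80, size=n)                       # covariate, e.g., age
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-(x - 50) / 5)))  # propensity depends on x
y0 = 10 + 0.5 * x + rng.normal(0, 1, size=n)          # potential outcome Y^0

# T-learner step for tau_hat^0: fit a linear model on CONTROL samples only ...
xc, yc = x[t == 0], y0[t == 0]
A = np.column_stack([np.ones_like(xc), xc])
coef, *_ = np.linalg.lstsq(A, yc, rcond=None)

# ... then predict Y^0 for the TREATED samples, i.e., for inputs that lie
# far from the training distribution of the fitted model.
y0_hat_treated = coef[0] + coef[1] * x[t == 1]

print("mean age of controls:", x[t == 0].mean())
print("mean age of treated: ", x[t == 1].mean())
```

Printing the two group means makes the covariate shift visible: the treated group is substantially older, so the control-fitted model is evaluated well outside its training distribution, which is exactly the violation of the i.i.d. assumption discussed above.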
This violates the assumption in machine learning that training data and test data should be independent and identically distributed. To alleviate the covariate shift problem, recent advancements in representation balancing research have explored representation learning models, such as the CounterFactual Regression Network (CFRNet) (Shalit et al., 2017), to estimate individual treatment effects (ITEs). These representation balancing models aim to extract balancing patterns from observational data and utilize these patterns to predict outcomes. The corresponding objective function is typically concerned with minimizing the empirical risk of factual outcomes while concurrently minimizing the distributional distance between the treatment and control groups in the representation space (Shalit et al., 2017; Johansson et al., 2022a). The underlying theoretical logic behind these studies is that minimizing the counterfactual error can be achieved by two principles in the representation space: *(Principle I) minimizing the risk associated with factual outcome prediction*, and *(Principle II) reducing the distributional discrepancy between the treated and controlled samples*. While the representation balancing framework provides a powerful tool to address the covariate shift issue, models based on classical structures such as CFRNet (Figure 1(a)) still encounter a trade-off between the aforementioned two principles: enforcing models to focus solely on balancing can undermine the predictive power of the outcome function (Zhang et al., 2020; Assaad et al., 2021; Huang et al., 2023). A detailed discussion of this trade-off problem can be found in Appendix A.1. This inherent trade-off motivates us to explore a pivotal question: considering the inherent trade-off between the two principles, is it possible to explore a scheme that enhances one principle without sacrificing the other?
More specifically, can we explore improving treatment effect estimation through the following two paths: *(Path I) learning more effective balancing patterns without sacrificing factual outcome prediction* and *(Path II) enhancing factual outcome prediction without sacrificing the learning of balancing patterns*? In the following, we present the proposed solutions and the rationale behind the underlying intuitions. In classic representation balancing models, the process of learning balancing patterns can lead to the loss of outcome-predictive information. It is therefore natural to consider a module that can preserve the pre-balancing information before the representation balancing step. This increases the model's capacity to maintain the useful predictive knowledge while still benefiting from the covariate balancing properties of the representation balancing framework. Furthermore, in multi-task learning, distinct representations are learned for different tasks, with each task involving its own objective function. An important step in multi-task learning is integrating the information from these separately learned representations into a unified representation. Therefore, following the multi-task learning paradigm (Li et al., 2018; Baltrušaitis et al., 2018; Crawshaw, 2020; Yan et al., 2021; Xu et al., 2023b), we propose concatenating the representations learned with the Wasserstein distance and the $\mathcal{H}$-divergence to form a joint representation. This joint representation can effectively capture the task-specific balancing information provided by each distance metric without adversely affecting the outcome modeling task. A detailed discussion of these intuitions can be found in Section 5.3.1. Based on the above motivations, we introduce a novel representation balancing model, **DIGNet** (Section 5.2.2), a neural network that incorporates **D**ecomposed patterns with **I**ndividual propensity confusion and **G**roup distance minimization.
Decomposed patterns denote distinct components disentangled from specific representations in DIGNet (Section 5.2). The individual propensity confusion aspect of DIGNet aims to learn representations that obscure the propensity of each individual being treated or controlled (Section 5.1.2), grounded in our derived $\mathcal{H}$-divergence guided counterfactual and ITE error bounds (Section 4.2). The group distance minimization aspect focuses on learning representations that minimize the distance between the treated and controlled groups (Section 5.1.1), supported by previous work on Wasserstein distance guided counterfactual and ITE error bounds (Shalit et al., 2017) (Section 4.1). Figure 1 visually depicts these introduced concepts and their relationships.

**Contributions.** Our main contributions are summarized as follows:

1. We derive theoretical upper bounds for the counterfactual error and the ITE error based on $\mathcal{H}$-divergence (Section 4.2). In particular, this theoretical foundation highlights the important role of the propensity score for representation balancing models, connecting representation balancing with the concept of individual propensity confusion.

2. We suggest learning decomposed patterns in representation balancing models (Section 5.2.1) to mitigate the trade-off problem rooted in classic causal representation balancing models. First, we propose a *PDIG* method (Figure 1(b)), which aims to learn Patterns Decomposed with Individual propensity confusion and Group distance minimization to improve treatment effect estimation through Path I. Second, we propose a *PPBR* method (Figure 1(c)), which aims to learn Patterns of Pre-balancing and Balancing Representations to improve treatment effect estimation through Path II.

3. Building upon PDIG and PPBR, we propose a novel representation balancing model, DIGNet (Figure 1(d)), for treatment effect estimation.
In Section 6, ablation studies verify the efficacy of PDIG and PPBR in improving ITE estimation through Path I and Path II, respectively. Furthermore, experimental results on benchmark datasets demonstrate that DIGNet surpasses the performance of baseline models in terms of treatment effect estimation.

![3_image_0.png](3_image_0.png)

Figure 1: (a): The classic model (e.g., GNet in Section 5.1.1 and INet in Section 5.1.2) maps the original data $D$ into representations $\Phi_E$ to achieve representation balancing. The balanced representations are referred to as *balancing patterns*. These balancing patterns are also used for outcome prediction. (b): The PDIG (Section 5.2.1) is illustrated as the yellow part, where balancing patterns are decomposed into two distinct components, $\Phi_G$ and $\Phi_I$. $\Phi_G$ serves for *group distance minimization* (Section 5.1.1) and $\Phi_I$ serves for *individual propensity confusion* (Section 5.1.2). The balancing patterns $\Phi_G$ and $\Phi_I$ are concatenated for predicting outcomes. (c): The PPBR (Section 5.2.1) is represented by the yellow section, where $\Phi_E$ is used for feature extraction and $\Phi_G$ is used for representation balancing. Here, representations are decomposed into *pre-balancing patterns* $\Phi_E$ and balancing patterns $\Phi_G$. $\Phi_E$ and $\Phi_G$ are concatenated for predicting outcomes. (d): The proposed model DIGNet (Section 5.2.2) integrates both PDIG and PPBR. Specifically, DIGNet decomposes balancing patterns into two distinct components, $\Phi_G$ and $\Phi_I$. The outcome predictors are further formed by concatenating $\Phi_G$, $\Phi_I$, and pre-balancing patterns $\Phi_E$.

## 2 Related Work

The presence of a covariate shift problem stimulates the line of representation balancing works (Johansson et al., 2016; Shalit et al., 2017; Johansson et al., 2022a). These works aim to balance the distributions of representations between the treated and controlled groups while simultaneously trying to maintain representations predictive of factual outcomes. This idea is closely connected with domain adaptation. In particular, the ITE error bound based on the Wasserstein distance is similar to the generalization bound in Ben-David et al. (2010); Long et al. (2014); Shen et al. (2018). The theoretical foundation and the classic CFRNet structure proposed in Shalit et al. (2017) have inspired many subsequent studies on representation balancing methods for treatment effect estimation, including Yao et al. (2018); Shi et al. (2019); Zhang et al. (2020); Hassanpour & Greiner (2019a); Assaad et al. (2021); Huang et al. (2022a). This paper derives a new ITE error bound based on $\mathcal{H}$-divergence (Ben-David et al., 2006; 2010; Ganin et al., 2016).

In addition to the connection to domain adaptation, causal representation learning is also linked to the field of fair representation learning, which aims to ensure that machine learning algorithms make fair decisions by learning fair representations. The main goal of these studies is to enforce a classification model to be less sensitive to certain sensitive variables when the representations of different groups are sufficiently similar (Zemel et al., 2013; Edwards & Storkey, 2015; Beutel et al., 2017; Madras et al., 2018; Zhang et al., 2018; Adel et al., 2019; Feng et al., 2019; Zhao et al., 2019a; Zhao & Gordon, 2022). Notably, the original idea of adversarially learned fair representations in Edwards & Storkey (2015) is also motivated by the domain adaptation work (Ben-David et al., 2006; 2010; Ganin et al., 2016), sharing a similar motivation to our utilization of INet, which relies on $\mathcal{H}$-divergence guided error bounds for ITE estimation. Moreover, the Wasserstein distance has also been employed for learning fair representations in Jiang et al. (2020).

Another recent line of causal representation learning literature investigates efficient neural network structures for treatment effect estimation. Kuang et al.
(2017); Hassanpour & Greiner (2019b) extract the original covariates into treatment-specific factors, outcome-specific factors, and confounding factors; X-learner (Künzel et al., 2019) and R-learner (Nie & Wager, 2021) are developed beyond the classic S-learner and T-learner; Curth & van der Schaar (2021) leverage structures for end-to-end learners to counteract the inductive bias towards treatment effect estimation, which is motivated by Makar et al. (2020). There are some other deep neural network models that have been employed in treatment effect estimation (Louizos et al., 2017; Yao et al., 2018; Yoon et al., 2018; Shi et al., 2019; Du et al., 2021). To ensure comparability and consistency, we rigorously follow the same framework as these causal inference works. The causal graph in these studies satisfies the standard setup $T \leftarrow X \rightarrow Y$ and $T \rightarrow Y$. Additionally, it is also worth noting that there are many other causal inference works exploring treatment effect estimation under more complex causal graphs. For instance, studies such as Kallus et al. (2019); Jesson et al. (2021); Miao et al. (2023) specifically tackle treatment effect estimation when unobserved confounders $U$ are present. In this case, the causal graph setup extends to $T \leftarrow X \rightarrow Y$, $T \rightarrow Y$, $T \leftarrow U \rightarrow Y$. A recent work (Cao et al., 2023) further expands this static causal graph to a dynamic setting. Moreover, some studies such as Angrist et al. (1996); Burgess et al. (2017); Wu et al. (2022); Yuan et al. (2023) estimate treatment effects with instrumental variables $I$ involved. In this case, there are various causal graph setups such as $T \leftarrow X \rightarrow Y$, $I \rightarrow T \rightarrow Y$, and $T \leftarrow I \rightarrow Y$. More complex causal graph settings (Nogueira et al., 2021; Vowels et al., 2022; Zanga et al., 2022) have been studied with the development of Directed Graphical Models (Pearl, 2009), which represents another significant research direction known as causal discovery.
Our method is highly motivated by the trade-off problem between outcome prediction and representation balancing. In the causal representation learning literature, a similar trade-off phenomenon has been noticed by Zhang et al. (2020); Assaad et al. (2021); Huang et al. (2022a), where the researchers argue that highly-balanced representations can have adverse effects on outcome modeling. However, the explanations for this phenomenon and its connections with other related literature are not extensively provided in their work. We highlight that the trade-off between outcome prediction and representation balancing is also connected with trade-offs observed in other research domains. In representation balancing models, representation balancing helps improve the model's ability to generalize to counterfactual estimates. However, representation balancing can potentially sacrifice information necessary for predicting factual outcomes. In supervised machine learning, penalizing model complexity during model training helps the model learn simpler patterns, thereby promoting generalization ability (reducing its variance) to unseen data. However, a bias-variance trade-off occurs because less flexible models tend to exhibit higher bias on training data (Geman et al., 1992; Domingos, 2000; Valentini & Dietterich, 2004; Yang et al., 2020). In the literature on domain adaptation (Shen et al., 2018; Zhao et al., 2019b), transfer learning (Long et al., 2015; 2017; Ma et al., 2023), out-of-distribution detection (Kumar et al., 2021; 2022), and fair representation learning (Zliobaite, 2015; Hardt et al., 2016), enforcing a model to capture proxy features that are domain-invariant helps the model generalize well to unseen target (also known as out-of-distribution) data.
However, a trade-off between classification accuracy and domain invariance (or fairness, in the fair representation learning literature) occurs because the pursuit of domain-invariant features may lead to a loss of classification accuracy on the source (also known as in-distribution) data (Zhao et al., 2019a; Zhao & Gordon, 2022; Zhao et al., 2022).

## 3 Preliminaries

**Notations.** Suppose there are $N$ i.i.d. random variables $D = \{(X_i, T_i, Y_i)\}_{i=1}^{N}$ with observed realizations $\{(x_i, t_i, y_i)\}_{i=1}^{N}$, where there are $N_1$ treated units and $N_0$ controlled units. For each unit $i$, $X_i \in \mathcal{X} \subset \mathbb{R}^d$ denotes $d$-dimensional covariates and $T_i \in \{0, 1\}$ denotes the binary treatment, with $e(x_i) := p(T_i = 1 \mid X_i = x_i)$ defined as the propensity score (Rosenbaum & Rubin, 1983). The potential outcome framework (Rubin, 2005) defines the potential outcomes $Y^1, Y^0 \in \mathcal{Y} \subset \mathbb{R}$ for treatment $T = 1$ and $T = 0$, respectively. We let the observed outcome (factual outcome) be $Y = T \cdot Y^1 + (1 - T) \cdot Y^0$, and the unobserved outcome (counterfactual outcome) be $Y = T \cdot Y^0 + (1 - T) \cdot Y^1$. For $t \in \{0, 1\}$, let $\tau^t(x) := \mathbb{E}[Y^t \mid X = x]$ be a function of $Y^t$ w.r.t. $X$; then our goal is to estimate the individual treatment effect (ITE) $\tau(x) := \mathbb{E}[Y^1 - Y^0 \mid X = x] = \tau^1(x) - \tau^0(x)$,¹ and the average treatment effect (ATE) $\tau_{ATE} := \mathbb{E}[Y^1 - Y^0] = \int_{\mathcal{X}} \tau(x) p(x) dx$. The introduced concepts PPBR and PDIG are illustrated in Figure 1, and the necessary representation functions $\Phi_E$, $\Phi_G$ and $\Phi_I$, as well as different model structures, are illustrated in Figure 2. Throughout the paper, we refer to patterns as meaningful representations. For instance, decomposed patterns are distinct components disentangled from some specific representations.

## 3.1 Problem Setup

In causal representation balancing works, we denote the representation space by $\mathcal{R} \subset \mathbb{R}^d$, and $\Phi : \mathcal{X} \to \mathcal{R}$ is assumed to be a twice-differentiable, one-to-one and invertible function with its inverse $\Psi : \mathcal{R} \to \mathcal{X}$ such that $\Psi(\Phi(x)) = x$.¹
The densities of the treated and controlled covariates are denoted by $p_x^{T=1} = p^{T=1}(x) := p(x \mid T = 1)$ and $p_x^{T=0} = p^{T=0}(x) := p(x \mid T = 0)$, respectively. Correspondingly, the densities of the treated and controlled covariates in the representation space are denoted by $p_\Phi^{T=1} = p_\Phi^{T=1}(r) := p_\Phi(r \mid T = 1)$ and $p_\Phi^{T=0} = p_\Phi^{T=0}(r) := p_\Phi(r \mid T = 0)$, respectively.

¹The term $\mathbb{E}[Y^1 - Y^0 \mid X = x]$ is commonly known as the Conditional Average Treatment Effect (CATE). In order to maintain consistency with the notation used in the existing causal representation balancing literature, e.g., Shalit et al. (2017), we refer to this term as ITE throughout this paper. Note that the original definition of ITE for the $i$-th individual is commonly expressed as the difference between their potential outcomes, represented as $Y_i^1 - Y_i^0$.

¹Theoretically, the invertibility is necessary for deriving the upper bounds of the ITE error, specifically for equation 39 and equation 47. However, the invertibility can be hard to verify in practice (Johansson et al., 2022b).

Our study is based on the potential outcome framework (Rubin, 2005). Assumption 1 states standard and necessary assumptions to ensure treatment effects are identifiable. Before proceeding with the theoretical analysis, we also present some necessary terms and definitions in Definition 1.

Assumption 1 (Consistency, Overlap, and Unconfoundedness). *Consistency: If the treatment is $t$, then the observed outcome equals $Y^t$. Overlap: The propensity score is bounded away from 0 and 1, i.e., $0 < e(x) < 1$. Unconfoundedness: $Y^t \perp\!\!\!\perp T \mid X$, $\forall t \in \{0, 1\}$.*

Definition 1. *Let $h : \mathcal{R} \times \{0, 1\} \to \mathcal{Y}$ be a hypothesis defined over the representation space $\mathcal{R}$ such that $h(\Phi(x), t)$ estimates $y^t$, and let $L : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$ be a loss function (e.g., the squared loss $L(y, y') = (y - y')^2$ or the absolute loss $L(y, y') = |y - y'|$).*
*If we define the expected loss for $(x, t)$ as $\ell_{h,\Phi}(x, t) = \int_{\mathcal{Y}} L(y^t, h(\Phi(x), t)) p(y^t \mid x) dy^t$, we then have the factual and counterfactual errors, as well as their versions on the treated and controlled:*

$$\epsilon_F(h, \Phi) = \int_{\mathcal{X} \times \{0,1\}} \ell_{h,\Phi}(x, t) p(x, t) dx dt, \qquad \epsilon_{CF}(h, \Phi) = \int_{\mathcal{X} \times \{0,1\}} \ell_{h,\Phi}(x, t) p(x, 1-t) dx dt,$$

$$\epsilon_F^{T=1}(h, \Phi) = \int_{\mathcal{X}} \ell_{h,\Phi}(x, 1) p^{T=1}(x) dx, \qquad \epsilon_F^{T=0}(h, \Phi) = \int_{\mathcal{X}} \ell_{h,\Phi}(x, 0) p^{T=0}(x) dx,$$

$$\epsilon_{CF}^{T=1}(h, \Phi) = \int_{\mathcal{X}} \ell_{h,\Phi}(x, 1) p^{T=0}(x) dx, \qquad \epsilon_{CF}^{T=0}(h, \Phi) = \int_{\mathcal{X}} \ell_{h,\Phi}(x, 0) p^{T=1}(x) dx.$$

*If we let $f(x, t)$ be $h(\Phi(x), t)$, where $f : \mathcal{X} \times \{0, 1\} \to \mathcal{Y}$ is a prediction function for the outcome, then the estimated ITE over $f$ is defined as $\hat{\tau}_f(x) := f(x, 1) - f(x, 0)$. We can measure the error in ITE estimation with the metric Precision in the expected Estimation of Heterogeneous Effect (PEHE):*

$$\epsilon_{PEHE}(f) = \int_{\mathcal{X}} L(\hat{\tau}_f(\mathbf{x}), \tau(\mathbf{x})) p(\mathbf{x}) d\mathbf{x}. \tag{1}$$

Here, $\epsilon_{PEHE}(f)$ can also be denoted by $\epsilon_{PEHE}(h, \Phi)$ if we let $f(x, t)$ be $h(\Phi(x), t)$.

## 4 Theoretical Results

In this section, we first prove that $\epsilon_{PEHE}$ is bounded by $\epsilon_F$ and $\epsilon_{CF}$ in Lemma 1. Next, we revisit the upper bound for the Wasserstein distance guided representation balancing method in Section 4.1. Furthermore, we state the new theoretical results concerning the $\mathcal{H}$-divergence guided representation balancing method in Section 4.2.

Lemma 1. *Let functions $h$ and $\Phi$ be as defined in Definition 1. Recall that $\tau^t(x) = \mathbb{E}[Y^t \mid X = x]$. Define $\sigma_y^2 = \min\{\sigma_{y^t}^2(p(x,t)), \sigma_{y^t}^2(p(x,1-t))\}$ and $A_y = \max\{A_{y^t}(p(x,t)), A_{y^t}(p(x,1-t))\}$ $\forall t \in \{0,1\}$, where $\sigma_{y^t}^2(p(x,t)) = \int_{\mathcal{X} \times \{0,1\} \times \mathcal{Y}} (y^t - \tau^t(x))^2 p(y^t \mid x) p(x,t) dy^t dx dt$ and $A_{y^t}(p(x,t)) = \int_{\mathcal{X} \times \{0,1\} \times \mathcal{Y}} |y^t - \tau^t(x)| p(y^t \mid x) p(x,t) dy^t dx dt$ $\forall t \in \{0,1\}$. Let the loss function $L$ be the squared loss. Then we have:*

$$\epsilon_{PEHE}(h, \Phi) \leq 2(\epsilon_{CF}(h, \Phi) + \epsilon_F(h, \Phi) - 2\sigma_y^2). \tag{2}$$

*Let the loss function $L$ be the absolute loss.*
*Then we have:*

$$\epsilon_{PEHE}(h, \Phi) \leq \epsilon_{CF}(h, \Phi) + \epsilon_F(h, \Phi) + 2A_y. \tag{3}$$

Lemma 1 reveals that the ITE error $\epsilon_{PEHE}$ is closely connected with the factual error $\epsilon_F$ and the counterfactual error $\epsilon_{CF}$, as well as a constant $\sigma_y^2$ (or $A_y$) that is unrelated to the functions $h$ and $\Phi$. Here, $\sigma_y^2$ is the smaller value of the variance of $Y^t$ w.r.t. the distribution $p(x,t)$ and the variance of $Y^{1-t}$ w.r.t. $p(x,1-t)$, and $A_y$ is the larger value of the absolute deviation of $Y^t$ w.r.t. the distribution $p(x,t)$ and the absolute deviation of $Y^{1-t}$ w.r.t. the distribution $p(x,1-t)$. The proof of Lemma 1 is deferred to Section A.2. Note that equation (2) corresponds to the result presented in Shalit et al. (2017), while equation (3) is our new result, which supplements the case when $L$ denotes the absolute loss.

## 4.1 Wasserstein Distance Guided Error Bounds

Previous causal learning models commonly adopt the Wasserstein distance guided approach to seek representation balancing. In this subsection, we first give the definition of the Wasserstein distance (Cuturi & Doucet, 2014) by introducing the Integral Probability Metric (IPM) (Sriperumbudur et al., 2012) defined in Definition 2. Then we state the theorem regarding the upper bounds for the counterfactual error $\epsilon_{CF}$ and the ITE error $\epsilon_{PEHE}$ using the Wasserstein distance in Theorem 1.

Definition 2. *Let $\mathcal{G}$ be a function family consisting of functions $g : \mathcal{S} \to \mathbb{R}$. For a pair of distributions $p_1, p_2$ over $\mathcal{S}$, the Integral Probability Metric is defined as*

$$IPM_{\mathcal{G}}(p_1, p_2) := \sup_{g \in \mathcal{G}} \left| \int_{\mathcal{S}} g(s)(p_1(s) - p_2(s)) ds \right|.$$

If $\mathcal{G}$ is the family of 1-Lipschitz functions, we obtain the so-called 1-Wasserstein distance, denoted by $Wass(p_1, p_2)$. Next, we present the bounds for the counterfactual error $\epsilon_{CF}$ and the ITE error $\epsilon_{PEHE}$ using the Wasserstein distance in Theorem 1.
Theorem 1. *Let $\Phi : \mathcal{X} \to \mathcal{R}$ be an invertible representation with $\Psi$ being its inverse. Define $\sigma_y^2 = \min\{\sigma_{y^t}^2(p(x,t)), \sigma_{y^t}^2(p(x,1-t))\}$ and $A_y = \max\{A_{y^t}(p(x,t)), A_{y^t}(p(x,1-t))\}$ $\forall t \in \{0,1\}$, where $\sigma_{y^t}^2(p(x,t)) = \int_{\mathcal{X} \times \{0,1\} \times \mathcal{Y}} (y^t - \tau^t(x))^2 p(y^t \mid x) p(x,t) dy^t dx dt$ and $A_{y^t}(p(x,t)) = \int_{\mathcal{X} \times \{0,1\} \times \mathcal{Y}} |y^t - \tau^t(x)| p(y^t \mid x) p(x,t) dy^t dx dt$ $\forall t \in \{0,1\}$. Let $p_\Phi^{T=1}(r)$, $p_\Phi^{T=0}(r)$ be as defined before, $h : \mathcal{R} \times \{0,1\} \to \mathcal{Y}$, $u := Pr(T = 1)$, and $\mathcal{G}$ be the family of 1-Lipschitz functions. Assume there exists a constant $B_\Phi \geq 0$ such that, for $t \in \{0,1\}$, the function $g_{\Phi,h}(r, t) := \frac{1}{B_\Phi} \cdot \ell_{h,\Phi}(\Psi(r), t) \in \mathcal{G}$. Given a loss function $L$, we have*

$$\epsilon_{CF}(h, \Phi) \leq (1-u) \cdot \epsilon_F^{T=1}(h, \Phi) + u \cdot \epsilon_F^{T=0}(h, \Phi) + B_\Phi \cdot Wass(p_\Phi^{T=1}, p_\Phi^{T=0}). \tag{4}$$

*Let the loss function $L$ be the squared loss. Then we have:*

$$\epsilon_{PEHE}(h, \Phi) \leq 2(\epsilon_F^{T=1}(h, \Phi) + \epsilon_F^{T=0}(h, \Phi) + B_\Phi \cdot Wass(p_\Phi^{T=1}, p_\Phi^{T=0}) - 2\sigma_y^2). \tag{5}$$

*Let the loss function $L$ be the absolute loss. Then we have:*

$$\epsilon_{PEHE}(h, \Phi) \leq \epsilon_F^{T=1}(h, \Phi) + \epsilon_F^{T=0}(h, \Phi) + B_\Phi \cdot Wass(p_\Phi^{T=1}, p_\Phi^{T=0}) + 2A_y. \tag{6}$$

Theorem 1 reveals that the ITE error is closely tied to the factual error $\epsilon_F$ and the Wasserstein distance between the treated and controlled groups in the representation space. This theorem provides a theoretical foundation for representation balancing models based on group distance minimization (Section 5.1.1). The proof of Theorem 1 is deferred to Section A.3. Note that equation (5) corresponds to the result presented in Shalit et al. (2017), while equation (6) is our new result, which supplements the case when $L$ denotes the absolute loss.
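To build intuition for the $Wass(p_\Phi^{T=1}, p_\Phi^{T=0})$ term appearing in equations (4)-(6), the following minimal sketch estimates the 1-Wasserstein distance between two samples of one-dimensional representations. This is only a toy illustration under invented data (in practice, $\Phi$ is learned and the distance is typically approximated in higher dimensions, e.g., with Sinkhorn-type algorithms); it uses the fact that, in one dimension, $W_1$ equals the integral of the absolute difference between the two quantile functions.

```python
import numpy as np

def wasserstein_1d(u, v, n_quantiles=1000):
    """Empirical 1-Wasserstein distance between two 1-D samples.

    For one-dimensional distributions, W1 equals the integral over
    q in (0, 1) of |F_u^{-1}(q) - F_v^{-1}(q)|, approximated here by
    averaging over a grid of quantile levels.
    """
    q = (np.arange(n_quantiles) + 0.5) / n_quantiles
    return np.mean(np.abs(np.quantile(u, q) - np.quantile(v, q)))

rng = np.random.default_rng(0)
r_treated = rng.normal(1.0, 1.0, size=5000)   # representations Phi(x) for T=1
r_control = rng.normal(0.0, 1.0, size=5000)   # representations Phi(x) for T=0

# For two Gaussians with equal variance, W1 equals the mean shift
# (here 1.0), so the estimate should be close to that value.
print(wasserstein_1d(r_treated, r_control))
```

Driving this quantity toward zero (by changing $\Phi$, not the data) is exactly what shrinks the third term of the bounds above.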
## 4.2 H-Divergence Guided Error Bounds

In most representation balancing literature, models mainly rely on the Wasserstein distance guided error bounds discussed in Section 4.1. In this subsection, we focus on establishing $\mathcal{H}$-divergence guided error bounds for counterfactual and ITE estimation in the representation balancing approach. We first give the definition of the $\mathcal{H}$-divergence (Ben-David et al., 2006) in Definition 3. We then state the upper bounds for the counterfactual error $\epsilon_{CF}$ and the ITE error $\epsilon_{PEHE}$ in terms of the $\mathcal{H}$-divergence in Theorem 2.

Definition 3. *Given a pair of distributions $p_1,p_2$ over $\mathcal{S}$, and a hypothesis binary function class $\mathcal{H}$, the $\mathcal{H}$-divergence between $p_1$ and $p_2$ is defined as*

$$d_{\mathcal{H}}(p_1,p_2):=2\sup_{\eta\in\mathcal{H}}\left|Pr_{p_1}[\eta(s)=1]-Pr_{p_2}[\eta(s)=1]\right|.\tag{7}$$

Theorem 2. *Let $\Phi:\mathcal{X}\to\mathcal{R}$ be an invertible representation with $\Psi$ being its inverse. Define $\sigma_y^2=\min\{\sigma_{y^t}^2(p(x,t)),\sigma_{y^t}^2(p(x,1-t))\}$ and $A_y=\max\{A_{y^t}(p(x,t)),A_{y^t}(p(x,1-t))\}$ $\forall t\in\{0,1\}$, where $\sigma_{y^t}^2(p(x,t))=\int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}}(y^t-\tau^t(x))^2\,p(y^t|x)p(x,t)\,dy^t\,dx\,dt$ and $A_{y^t}(p(x,t))=\int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}}|y^t-\tau^t(x)|\,p(y^t|x)p(x,t)\,dy^t\,dx\,dt$ $\forall t\in\{0,1\}$. Let $p_\Phi^{T=1}(r)$, $p_\Phi^{T=0}(r)$ be as defined before, $h:\mathcal{R}\times\{0,1\}\to\mathcal{Y}$, $u:=Pr(T=1)$, and $\mathcal{H}$ be the family of binary functions. Assume that there exists a constant $K\geq 0$ such that $\int_{\mathcal{Y}}L(y,y')\,dy\leq K$ $\forall y'\in\mathcal{Y}$. Given a loss function $L$, we have*

$$\epsilon_{CF}(h,\Phi)\leq(1-u)\cdot\epsilon_F^{T=1}(h,\Phi)+u\cdot\epsilon_F^{T=0}(h,\Phi)+\frac{K}{2}d_{\mathcal{H}}(p_\Phi^{T=1},p_\Phi^{T=0}).\tag{8}$$

*Let the loss function $L$ be the squared loss. Then we have:*

$$\epsilon_{PEHE}(h,\Phi)\leq 2\left(\epsilon_F^{T=1}(h,\Phi)+\epsilon_F^{T=0}(h,\Phi)+\frac{K}{2}d_{\mathcal{H}}(p_\Phi^{T=1},p_\Phi^{T=0})-2\sigma_y^2\right).\tag{9}$$

*Let the loss function $L$ be the absolute loss.*
*Then we have:*

$$\epsilon_{PEHE}(h,\Phi)\leq\epsilon_F^{T=1}(h,\Phi)+\epsilon_F^{T=0}(h,\Phi)+\frac{K}{2}d_{\mathcal{H}}(p_\Phi^{T=1},p_\Phi^{T=0})+2A_y.\tag{10}$$

Theorem 2 reveals that the ITE error is closely connected with the factual error $\epsilon_F$ and the $\mathcal{H}$-divergence between treated and controlled samples in the representation space. This new theoretical result provides a theoretical foundation for representation balancing models based on individual propensity confusion (Section 5.1.2). The proof of Theorem 2 is deferred to Section A.4.

## 5 Method

In the preceding section, we stated the theoretical foundations for representation balancing methods: the Wasserstein distance guided error bounds (results in Shalit et al. (2017)) and the $\mathcal{H}$-divergence guided error bounds (our results). In Section 5.1, we begin by introducing representation balancing methods without decomposed patterns. Specifically, Section 5.1.1 revisits a Wasserstein distance based representation balancing network, GNet, and Section 5.1.2 demonstrates how Theorem 2 can be connected with individual propensity confusion, helping us build an $\mathcal{H}$-divergence based representation balancing network, INet. Subsequently, in Section 5.2, we introduce how to design a representation balancing method within the scheme of decomposed patterns, based on the PDIG and PPBR methods (Section 5.2.1). The final proposed model, DIGNet, is presented in Section 5.2.2.

## 5.1 Representation Balancing Without Decomposed Patterns

In representation balancing models, given the input data tuples $(\mathbf{x},\mathbf{t},\mathbf{y})=\{(\mathbf{x}_i,t_i,y_i)\}_{i=1}^N$, the original covariates $\mathbf{x}$ are extracted by some representation function $\Phi(\cdot)$, and the representations $\Phi(\mathbf{x})$ are then fed into the outcome functions $h^1(\cdot):=h(\cdot,1)$ and $h^0(\cdot):=h(\cdot,0)$ that estimate the potential outcomes $y^1$ and $y^0$, respectively.
Finally, the factual outcome can be predicted by $h^t(\cdot)=t\,h^1(\cdot)+(1-t)\,h^0(\cdot)$, and the corresponding outcome loss is

$$\mathcal{L}_y(\mathbf{x},\mathbf{t},\mathbf{y};\Phi,h^t)=\frac{1}{N}\sum_{i=1}^N L(h^t(\Phi(\mathbf{x}_i)),y_i).\tag{11}$$

The loss function $\mathcal{L}_y$ approximates the factual error $\epsilon_F$ appearing in Theorems 1 and 2. Minimizing $\mathcal{L}_y$ also corresponds to Principle I as mentioned in the Introduction.

## 5.1.1 GNet: Group Distance Minimization Guided Network

*Group distance minimization* focuses on learning representations that minimize the distance between the treated and controlled groups, and the corresponding theoretical foundation is supported by the Wasserstein distance guided counterfactual and ITE error bounds (Theorem 1). Previous causal inference methods (e.g., Shalit et al. (2017); Yao et al. (2018); Zhang et al. (2020); Huang et al. (2022a)) commonly adopt the Wasserstein distance to achieve group distance minimization. Specifically, these methods minimize the empirical approximation of $\mathcal{L}_G(\mathbf{x},\mathbf{t};\Phi)=Wass\left(\{\Phi(\mathbf{x}_i)\}_{i:t_i=0},\{\Phi(\mathbf{x}_i)\}_{i:t_i=1}\right)$ to learn balancing patterns. If we denote by $\Phi_E(\cdot)$ the feature extractor that extracts the original covariates $\mathbf{x}$, then the objective function designed on Theorem 1 is

$$\min_{\Phi_E,h^t}\quad\mathcal{L}_y(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_E,h^t)+\alpha_1\mathcal{L}_G(\mathbf{x},\mathbf{t};\Phi_E).\tag{12}$$

Since the objective is to learn balancing patterns by minimizing the distributional distance between the treated and controlled groups, i.e., group distance minimization, we refer to a model with the objective in equation (12) as **GNet**. For the reader's convenience, we illustrate the structure of GNet in Figure 2(a). Note that CFRNet (Shalit et al., 2017) also falls into the category of GNet.
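To make the two terms of the GNet objective in equation (12) concrete, the sketch below assembles the factual loss of equation (11) and a group-distance penalty with numpy. Everything here is illustrative, not the paper's implementation: the feature extractor and heads are untrained random linear maps, and the multivariate Wasserstein term is replaced by a simple per-coordinate, quantile-based 1-D approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, k = 500, 5, 3                              # hypothetical sizes
X = rng.normal(size=(N, d))
t = rng.binomial(1, 0.5, size=N)
y = rng.normal(size=N)

W_E = rng.normal(size=(d, k))                    # hypothetical linear Phi_E
h1_w, h0_w = rng.normal(size=k), rng.normal(size=k)  # linear heads h^1, h^0

rep = X @ W_E                                    # Phi_E(x)
y_hat = np.where(t == 1, rep @ h1_w, rep @ h0_w)  # h^t = t*h^1 + (1-t)*h^0
L_y = np.mean((y_hat - y) ** 2)                  # factual loss, equation (11)

def wass_1d(u, v, n_q=99):
    """Quantile-based approximation of the 1-D 1-Wasserstein distance."""
    qs = np.linspace(0.01, 0.99, n_q)
    return np.mean(np.abs(np.quantile(u, qs) - np.quantile(v, qs)))

# group-distance penalty L_G, averaged over representation coordinates
L_G = np.mean([wass_1d(rep[t == 1, j], rep[t == 0, j]) for j in range(k)])

alpha1 = 1.0
objective = L_y + alpha1 * L_G                   # equation (12), to be minimized
print(L_y, L_G, objective)
```

In an actual model, gradients of this combined objective would update $\Phi_E$ and $h^t$ jointly; the sketch only evaluates the objective once.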
## 5.1.2 INet: Individual Propensity Confusion Guided Network

In the field of causal inference, the propensity score plays a central role because it characterizes the probability that one receives treatment (Rosenbaum & Rubin, 1983). For example, the propensity score has been widely employed in prior literature for matching (Caliendo & Kopeinig, 2008) or weighting (Austin & Stuart, 2015) purposes. In this paper, we emphasize that the propensity score also plays an important role in representation balancing, where it serves as a natural indicator of the adequacy of learned balancing patterns. Specifically, we propose the concept of individual propensity confusion, which aims to learn representations that are difficult to utilize for characterizing the propensity of each individual being treated or controlled. The underlying theoretical foundation rests upon the $\mathcal{H}$-divergence guided ITE error bounds derived in Theorem 2. Specifically, equations (9) and (10) in Theorem 2 highlight the significance of minimizing the generalization bound associated with the factual outcome error and the $\mathcal{H}$-divergence between treated and controlled representations in reducing ITE errors. Subsequently, we present the details of achieving representation balancing by reducing the $\mathcal{H}$-divergence between treated and controlled samples in the representation space. Let $\mathbf{1}(a)$ be an indicator function that gives 1 if $a$ is true, and $\mathcal{H}$ be the family of binary functions as defined in Theorem 2.
To achieve representation balancing, the objective function designed on Theorem 2 should aim to minimize the empirical $\mathcal{H}$-divergence $\hat{d}_{\mathcal{H}}(p_\Phi^{T=1},p_\Phi^{T=0})$ such that

$$\hat{d}_{\mathcal{H}}(p_{\Phi}^{T=1},p_{\Phi}^{T=0})=2\left(1-\min_{\eta\in\mathcal{H}}\left[\frac{1}{N}\sum_{i:\eta(\Phi(\mathbf{x}_i))=0}\mathbf{1}[t_i=1]+\frac{1}{N}\sum_{i:\eta(\Phi(\mathbf{x}_i))=1}\mathbf{1}[t_i=0]\right]\right).\tag{13}$$

The "min" part in equation (13) indicates that the optimal classifier $\eta^*\in\mathcal{H}$ minimizes the classification error between the estimated treatment $\eta^*(\Phi(\mathbf{x}_i))$ and the observed treatment $t_i$, i.e., discriminating whether $\Phi(\mathbf{x}_i)$ is a control ($T=0$) or treatment ($T=1$) sample. Equation (13) suggests that $\hat{d}_{\mathcal{H}}(p_\Phi^{T=1},p_\Phi^{T=0})$ will be large if $\eta^*$ can easily distinguish whether $\Phi(\mathbf{x}_i)$ is treated or controlled. In contrast, $\hat{d}_{\mathcal{H}}(p_\Phi^{T=1},p_\Phi^{T=0})$ will be small if it is hard for $\eta^*$ to determine whether $\Phi(\mathbf{x}_i)$ is treated or controlled. Therefore, the prerequisite of a small $\mathcal{H}$-divergence is to find a map $\Phi$ such that any classifier $\eta\in\mathcal{H}$ gets confused about the probability of $\Phi(\mathbf{x}_i)$ being treated or controlled. To achieve this goal, similar to the strategy of empirical approximation of the $\mathcal{H}$-divergence (Ganin et al., 2016), we define a discriminator $\pi(r):\mathcal{R}\to[0,1]$ that estimates the propensity score of $r$, which can be regarded as a surrogate for $\eta(r)$.
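The bracketed quantity in equation (13) is just a classification error. The hedged sketch below evaluates $2(1-\text{error})$ for one fixed threshold classifier (the definition takes the minimum over all of $\mathcal{H}$, so a fixed classifier only yields a lower bound on $\hat{d}_{\mathcal{H}}$); the 1-D representations and the threshold are illustrative assumptions:

```python
import numpy as np

def dH_hat(rep, t, q):
    """2 * (1 - classification error) for a fixed classifier q: a lower
    bound on the empirical H-divergence of equation (13)."""
    pred = q(rep)
    N = len(t)
    err = (np.sum((pred == 0) & (t == 1)) + np.sum((pred == 1) & (t == 0))) / N
    return 2.0 * (1.0 - err)

rng = np.random.default_rng(0)
n = 5000
t = np.concatenate([np.ones(n, dtype=int), np.zeros(n, dtype=int)])
q = lambda r: (r > 1.0).astype(int)      # fixed threshold classifier

# separable representations: treated and control are far apart
rep_sep = np.concatenate([rng.normal(2, 1, n), rng.normal(0, 1, n)])
# balanced representations: treated and control coincide
rep_bal = np.concatenate([rng.normal(0, 1, n), rng.normal(0, 1, n)])

print(dH_hat(rep_sep, t, q))  # large: the classifier separates the groups
print(dH_hat(rep_bal, t, q))  # smaller: the classifier is near chance level
```

This illustrates the point made above: when treated and controlled representations overlap, even a reasonable classifier cannot beat chance, and the empirical divergence estimate shrinks.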
The classification error for the $i$-th individual can be empirically approximated by the cross-entropy loss between $\pi(\Phi(\mathbf{x}_i))$ and $t_i$:

$$\mathcal{L}_t(t_i,\pi(\Phi(\mathbf{x}_i)))=-\left[t_i\log\pi(\Phi(\mathbf{x}_i))+(1-t_i)\log(1-\pi(\Phi(\mathbf{x}_i)))\right].\tag{14}$$

As a consequence, we aim to find an optimal discriminator $\pi^*$ for equation (13) such that $\pi^*$ maximizes the probability that treatment is correctly classified:

$$\max_{\pi\in\mathcal{H}}\mathcal{L}_I(\mathbf{x},\mathbf{t};\Phi,\pi)=\max_{\pi\in\mathcal{H}}\left[-\frac{1}{N}\sum_{i=1}^N\mathcal{L}_t(t_i,\pi(\Phi(\mathbf{x}_i)))\right].\tag{15}$$

Given the feature extractor $\Phi_E(\cdot)$, the objective of INet can be formulated as a *min-max game*:

$$\min_{\Phi_E,h^t}\max_{\pi}\quad\mathcal{L}_y(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_E,h^t)+\alpha_2\mathcal{L}_I(\mathbf{x},\mathbf{t};\Phi_E,\pi).\tag{16}$$

In the maximization, the discriminator $\pi$ is trained to maximize the probability that treatment is correctly classified. This forces $\pi(\Phi_E(\mathbf{x}_i))$ closer to the true propensity score $e(\mathbf{x}_i)$. In the minimization, the feature extractor $\Phi_E$ is trained to fool the discriminator $\pi$. This confuses $\pi$ such that $\pi(\Phi_E(\mathbf{x}_i))$ cannot correctly specify the true propensity score $e(\mathbf{x}_i)$. Eventually, the representations are balanced, as the adversarial process makes it difficult for $\pi$ to determine the propensity of each individual being treated or controlled. We refer to this process as *individual propensity confusion*. Such an adversarial learning technique has been widely used in domain adaptation (e.g., Ganin et al. (2016); Tzeng et al. (2017)) and fair representation learning (e.g., Edwards & Storkey (2015); Madras et al. (2018)) to learn domain-invariant and fair representations.
For the reader's convenience, we illustrate the structure of INet in Figure 2(b).

## 5.2 Representation Balancing With Decomposed Patterns

## 5.2.1 The Proposed PDIG And PPBR Methods

PDIG. Although Theorems 1 and 2 provide a solid theoretical foundation for GNet (the model proposed by Shalit et al. (2017)) and INet (the model proposed by us), both model types still encounter the inherent trade-off between representation balancing and outcome modeling. To this end, we aim to capture more effective balancing patterns by learning Patterns Decomposed with Individual propensity confusion and Group distance minimization **(PDIG)**. More specifically, the covariates $\mathbf{x}$ are extracted by the feature extractor $\Phi_E(\cdot)$, and $\Phi_E(\mathbf{x})$ is then fed into two distinct balancing networks $\Phi_G(\cdot)$ and $\Phi_I(\cdot)$ for group distance minimization and individual propensity confusion, respectively. In summary, PDIG decomposes the balancing patterns into two distinct parts, group distance minimization and individual propensity confusion, which are respectively achieved by the following loss functions:

$$\min_{\Phi_G}\;\mathcal{L}_G(\mathbf{x},\mathbf{t};\Phi_G\circ\Phi_E),\tag{17}$$

$$\min_{\Phi_I}\max_{\pi}\;\mathcal{L}_I(\mathbf{x},\mathbf{t};\Phi_I\circ\Phi_E,\pi).\tag{18}$$

Here, $\circ$ denotes the composition of two functions, indicating that $\Phi(\cdot)$ in $\mathcal{L}_G(\mathbf{x},\mathbf{t};\Phi)$ and $\mathcal{L}_I(\mathbf{x},\mathbf{t};\Phi,\pi)$ is replaced by $\Phi_G(\Phi_E(\cdot))$ and $\Phi_I(\Phi_E(\cdot))$, respectively.

PPBR. Motivated by the discussion in Section 1, we design a framework capable of capturing Patterns of Pre-balancing and Balancing Representations **(PPBR)** to mitigate the potential over-balancing issue mentioned in the Introduction, aiming to preserve information that is useful for outcome prediction. In the PPBR method, the balancing patterns $\Phi_G(\Phi_E(\mathbf{x}))$ and $\Phi_I(\Phi_E(\mathbf{x}))$ are first learned over $\Phi_G$ and $\Phi_I$, while $\Phi_E$ remains fixed as pre-balancing patterns.
Furthermore, we concatenate the balancing patterns $\Phi_G(\Phi_E(\mathbf{x}))$ and $\Phi_I(\Phi_E(\mathbf{x}))$ with the pre-balancing representations $\Phi_E(\mathbf{x})$ as attributes for outcome prediction. As a result, the proxy features used for outcome prediction are $\Phi_E(\mathbf{x})\oplus\Phi_G(\Phi_E(\mathbf{x}))\oplus\Phi_I(\Phi_E(\mathbf{x}))$, where $\oplus$ indicates concatenation by column. For example, if $a=[1,2]$ and $b=[3,4]$, then $a\oplus b=[1,2,3,4]$. Consequently, representation balancing is accomplished over $\Phi_G$ and $\Phi_I$, rather than $\Phi_E$. Even if there is a loss of information relevant to outcome prediction in $\Phi_G$ and $\Phi_I$, the pre-balancing patterns $\Phi_E$ can still effectively preserve such information. Finally, the objective function with regard to outcome modeling under the PPBR method becomes

$$\min_{\Phi_E,\Phi_I,\Phi_G,h^t}\quad\mathcal{L}_y(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_E\oplus(\Phi_I\circ\Phi_E)\oplus(\Phi_G\circ\Phi_E),h^t).\tag{19}$$

## 5.2.2 The Proposed DIGNet

Combining PDIG and PPBR, we propose a new neural Network model that incorporates Decomposed patterns with Individual propensity confusion and Group distance minimization, which we call **DIGNet**.

![10_image_0.png](10_image_0.png)

Figure 2: Illustrations of the network architecture of the five models studied in Section 6.

The objective of DIGNet is separated into four stages:

$$\min_{\Phi_G}\;\alpha_1\mathcal{L}_G(\mathbf{x},\mathbf{t};\Phi_G\circ\Phi_E),\tag{20}$$
$$\max_{\pi}\;\alpha_2\mathcal{L}_I(\mathbf{x},\mathbf{t};\Phi_I\circ\Phi_E,\pi),\tag{21}$$
$$\min_{\Phi_I}\;\alpha_2\mathcal{L}_I(\mathbf{x},\mathbf{t};\Phi_I\circ\Phi_E,\pi),\tag{22}$$
$$\min_{\Phi_E,\Phi_I,\Phi_G,h^t}\;\mathcal{L}_y(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_E\oplus(\Phi_I\circ\Phi_E)\oplus(\Phi_G\circ\Phi_E),h^t).\tag{23}$$

Within each iteration, DIGNet minimizes the group distance through equation (20) and plays an adversarial game to achieve propensity confusion through equations (21) and (22).
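The proxy feature construction $\Phi_E(\mathbf{x})\oplus(\Phi_I\circ\Phi_E)(\mathbf{x})\oplus(\Phi_G\circ\Phi_E)(\mathbf{x})$ in equations (19) and (23) amounts to a column-wise concatenation. A shape-level numpy sketch, with hypothetical linear maps and dimensions (none of these sizes come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, dE, dI, dG = 8, 10, 6, 4, 4     # hypothetical dimensions
X = rng.normal(size=(N, d))

W_E = rng.normal(size=(d, dE))        # Phi_E, pre-balancing extractor
W_I = rng.normal(size=(dE, dI))       # Phi_I, balanced via H-divergence
W_G = rng.normal(size=(dE, dG))       # Phi_G, balanced via Wasserstein

phiE = X @ W_E                        # pre-balancing patterns
phiI = phiE @ W_I                     # Phi_I o Phi_E
phiG = phiE @ W_G                     # Phi_G o Phi_E

# PPBR proxy features: Phi_E(x) ⊕ Phi_I(Phi_E(x)) ⊕ Phi_G(Phi_E(x))
proxy = np.concatenate([phiE, phiI, phiG], axis=1)
print(proxy.shape)  # (8, 14)
```

The outcome heads $h^1, h^0$ would then consume this $(N, d_E+d_I+d_G)$ matrix, so the pre-balancing columns survive untouched even after $\Phi_I$ and $\Phi_G$ are pushed toward balance.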
In equation (23), DIGNet updates both the pre-balancing patterns $\Phi_E$ and the balancing patterns $\Phi_I,\Phi_G$, along with the outcome function $h^t$, to minimize the outcome prediction loss. For the reader's convenience, we illustrate the structure of DIGNet in Figure 2(e).

## 5.3 Insights Of Representation Balancing With Decomposed Patterns

Our proposed DIGNet model builds upon the PDIG and PPBR methods. The PPBR method is relatively straightforward: it forms a more flexible predictor $\Phi_E\oplus(\Phi_I\circ\Phi_E)$ (or $\Phi_E\oplus(\Phi_G\circ\Phi_E)$) compared to the predictor $\Phi_E$ alone. Therefore, incorporating both pre-balancing and balancing patterns helps enhance the model's capacity and its ability to capture more useful information for outcome prediction. However, further exploration is needed to better understand why the PDIG method is effective. The DIGNet model aims to learn balancing patterns based on both the Wasserstein distance and the $\mathcal{H}$-divergence. At first glance, one might assume that incorporating both distances is redundant, as one distance seems naturally to imply the other. In this section, we gain some insights into these two divergence metrics. First, we provide a systematic discussion of the properties of the Wasserstein distance and the $\mathcal{H}$-divergence. In addition, we utilize a toy example to illustrate their distinct abilities to capture distributional disparity. Further, we use this example to help readers better understand the trade-off problem encountered in representation balancing models (Figure 5). Finally, we establish a connection between our method and the Elastic Net method and the multi-task learning approach, which offers valuable insights and explanations regarding the intuition behind involving both metrics as regularizations.

## 5.3.1 Properties Of Wasserstein Distance And H-Divergence

Wasserstein distance and $\mathcal{H}$-divergence possess distinct theoretical properties.
The effectiveness of the Wasserstein distance in measuring distributional differences for classification tasks in domain adaptation has been demonstrated in Shen et al. (2018). Furthermore, Shalit et al. (2017) highlight the potential of the Wasserstein distance in representation balancing models for ITE estimation, which significantly outperform traditional ITE estimation methods. The Wasserstein distance is also widely adopted in other research domains, such as fair representation learning (Jiang et al., 2020), as discussed in Section 2. Its prevalence stems from its stronger capability to capture distributional diversities compared to the $\mathcal{H}$-divergence (Shui et al., 2020). Studies have proven that under certain conditions, it is possible to bound the $\mathcal{H}$-divergence by the Wasserstein distance (Villani et al., 2009; Shui et al., 2020), which provides a reasonable explanation for the overall superiority of the Wasserstein distance in learning domain-invariant features (Zhiri et al., 2022). However, it is important to note that this bound does not hold in general (Chae & Walker, 2020), suggesting that a smaller $\mathcal{H}$-divergence does not necessarily imply a smaller Wasserstein distance. To better illustrate the difference between these two measures, we provide a concrete example below.

![11_image_0.png](11_image_0.png)

Figure 3: Distributions of $p_1(x)$, $p_2(x)$, and $p_3(x)$ in the example of Section 5.3.1.

![11_image_1.png](11_image_1.png) ![11_image_2.png](11_image_2.png)

(a) Model fitting using $(X,Y)$. The probability density functions of $X^{\text{treat}}$ and $X^{\text{control}}$ are $p_1$ and $p_3$, respectively. (b) Model fitting using $(\Phi(X),Y)$. The probability density functions of $\Phi^{\text{treat}}(X)$ and $\Phi^{\text{control}}(X)$ are $p_1$ and $p_2$, respectively.

Figure 4: Model fitting using $(X,Y)$ and $(\Phi(X),Y)$ based on the example in Section 5.3.1.

Toy example.
Consider the following three probability density functions $p_1(x)$, $p_2(x)$, and $p_3(x)$ defined over $x\in[0,1]$:

$$p_1(x)=\begin{cases}2.5,&\text{if }0\leq x<0.25\\0.5,&\text{if }0.25\leq x<0.5\\0.5,&\text{if }0.5\leq x<0.75\\0.5,&\text{if }0.75\leq x\leq 1\end{cases}\qquad p_2(x)=\begin{cases}0.5,&\text{if }0\leq x<0.25\\2.5,&\text{if }0.25\leq x<0.5\\0.5,&\text{if }0.5\leq x<0.75\\0.5,&\text{if }0.75\leq x\leq 1\end{cases}\qquad p_3(x)=\begin{cases}0.5,&\text{if }0\leq x<0.25\\0.5,&\text{if }0.25\leq x<0.5\\2.5,&\text{if }0.5\leq x<0.75\\0.5,&\text{if }0.75\leq x\leq 1\end{cases}$$

The three distributions are depicted in Figure 3. Further, we take the classifiers in the $\mathcal{H}$-divergence to be threshold functions $\eta(x)=\mathbf{1}\{x\geq c\}$, and set the order of the Wasserstein distance to $p=1$. By utilizing the definitions of the $\mathcal{H}$-divergence and the 1-Wasserstein distance, one can directly compare the discrepancy of $(p_1,p_2)$ with the discrepancy of $(p_1,p_3)$:

$$d_{\mathcal{H}}(p_1,p_2)=d_{\mathcal{H}}(p_1,p_3);\qquad Wass(p_1,p_2)<Wass(p_1,p_3).\tag{24}$$

Equation (24) confirms that the Wasserstein distance is able to capture more diverse distributional disparities than the $\mathcal{H}$-divergence. However, in the subsequent content, we demonstrate that such an advantage might become a limitation in causal representation learning due to the trade-off problem.

Understanding the trade-off. The above example serves as simple evidence supporting the conclusion that the Wasserstein distance can capture more distributional diversity than the $\mathcal{H}$-divergence (Shui et al., 2020). However, as discussed in Section 1, it is important to note that achieving a more balanced distribution does not necessarily ensure favorable generalization to counterfactuals.
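The comparison in equation (24) can be checked numerically by discretizing the three densities on a fine grid: for 1-D distributions, with threshold classifiers the $\mathcal{H}$-divergence is twice the maximum CDF gap, and $W_1$ is the integral of the absolute CDF difference. The grid construction below is illustrative:

```python
import numpy as np

n = 4000                          # bins on [0, 1]; divisible by 4
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx     # bin midpoints

def piecewise(heights):
    """Bin probabilities for a density constant on the four quarters."""
    idx = np.minimum((x // 0.25).astype(int), 3)
    return np.asarray(heights, dtype=float)[idx] * dx

p1 = piecewise([2.5, 0.5, 0.5, 0.5])
p2 = piecewise([0.5, 2.5, 0.5, 0.5])
p3 = piecewise([0.5, 0.5, 2.5, 0.5])

def wass(pa, pb):
    # W1 = integral of |F_a - F_b| over [0, 1]
    return np.sum(np.abs(np.cumsum(pa) - np.cumsum(pb))) * dx

def d_H(pa, pb):
    # threshold classifiers: d_H = 2 * sup_c |F_a(c) - F_b(c)|
    return 2.0 * np.max(np.abs(np.cumsum(pa) - np.cumsum(pb)))

print(d_H(p1, p2), d_H(p1, p3))    # both ≈ 1.0: H-divergence cannot tell them apart
print(wass(p1, p2), wass(p1, p3))  # ≈ 0.125 < ≈ 0.25: W1 sees the larger transport
```

Intuitively, both pairs shift a probability mass of 0.5 (maximum CDF gap 0.5 in each case), but $(p_1,p_3)$ transports it twice as far, which only $W_1$ registers.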
This is because the pursuit of balanced representations may inadvertently lead to a loss of information useful for factual outcome estimation. We now use the above example to gain further understanding of this matter. Consider a simple data-generating process where $X^{\text{treat}}$, the covariate in the treated group, follows the distribution $p_1(x)$, and $X^{\text{control}}$, the covariate in the controlled group, follows the distribution $p_3(x)$. Let the potential outcomes be generated by $Y^1=\tau^1(X)+\epsilon_1$ and $Y^0=\tau^0(X)+\epsilon_0$, where $\epsilon_1\sim\mathcal{N}(0,0.1)$ and $\epsilon_0\sim\mathcal{N}(0,0.1)$. Let the true potential outcome functions $\tau^1(x)$ and $\tau^0(x)$ be as follows:

$$\tau^1(x)=\tau^0(x)=(x^2+1)\mathbf{1}\{0\leq x<0.5\}+(4x-0.75)\mathbf{1}\{0.5\leq x\leq 1\}.\tag{25}$$

In addition, consider a representation function $\Phi(x)$ such that

$$\Phi(x)=x\mathbf{1}\{0\leq x<0.25\}+(x+0.25)\mathbf{1}\{0.25\leq x<0.5\}+(x-0.25)\mathbf{1}\{0.5\leq x<0.75\}+x\mathbf{1}\{0.75\leq x\leq 1\}.\tag{26}$$

We find that $\Phi$ achieves representation balancing under the Wasserstein distance measure, but not under the $\mathcal{H}$-divergence measure. In the original data, $x^{\text{treat}}$ follows $p_1$ and $x^{\text{control}}$ follows $p_3$. After mapping $x$ to $\Phi(x)$, $\Phi^{\text{treat}}(x)$ follows $p_1$ and $\Phi^{\text{control}}(x)$ follows $p_2$. Consequently, based on the results in equation (24), we have

$$d_{\mathcal{H}}(p_\Phi^{\text{treat}},p_\Phi^{\text{control}})=d_{\mathcal{H}}(p_X^{\text{treat}},p_X^{\text{control}});\qquad Wass(p_\Phi^{\text{treat}},p_\Phi^{\text{control}})<Wass(p_X^{\text{treat}},p_X^{\text{control}}).\tag{27}$$

We now investigate the fitting performance of models using $(x,y)$ and $(\Phi(x),y)$ to check whether there is a loss of outcome-related information during representation balancing. In Figures 4a and 4b, we present scatter plots of samples from $(x,y)$ and $(\Phi(x),y)$, respectively, depicted as gray points.
Following the approach of Kennedy (2023), we employ smoothing spline functions to fit these samples, and the estimated functions are illustrated in blue. In Figure 4a, we observe that both $\tau^1$ and $\tau^0$ are well fitted using $(x,y)$, with their estimates being very close to each other. This is consistent with the setup $\tau^1=\tau^0$. In contrast, Figure 4b reveals that the fits of $\tau^1$ and $\tau^0$ are inadequate using $(\Phi(x),y)$, resulting in substantially different estimates. These divergent estimates violate the setup $\tau^1=\tau^0$. In this case, a model based on the Wasserstein distance would retain $\Phi$ because it achieves representation balancing. Unfortunately, $\Phi$ suffers from a loss of valuable information that is crucial for outcome prediction. In contrast, a model based on the $\mathcal{H}$-divergence would not keep $\Phi$, since it does not contribute to reducing the domain distance compared to the original data. Fortunately, the original data preserve the information necessary for outcome modeling. Therefore, this example not only emphasizes the significance of incorporating both metrics but also highlights the importance of considering both pre-balancing patterns and balancing patterns.

## 5.3.2 Connection With Other Machine Learning Methods

In the previous sections, we discussed the trade-off between factual outcome prediction and representation balancing in classic representation learning models. As part of our proposed improvements, DIGNet learns two distinct representations using the Wasserstein distance and the $\mathcal{H}$-divergence separately and concatenates the learned representations for outcome modeling. In this section, we explore more detailed connections between our design and other machine learning methods.

Connection with Elastic Net: balancing on two discrepancies. Our DIGNet model involves two discrepancy metrics: the Wasserstein distance and the $\mathcal{H}$-divergence. We now provide additional explanations of its connection with the Elastic Net method.
In supervised learning, a regularization term is often incorporated during model training to mitigate the bias-variance trade-off. In the case of linear regression, Lasso (Tibshirani, 1996) and Ridge (Hoerl & Kennard, 1970) were proposed to improve the Ordinary Least Squares (OLS) method, with Lasso involving $l_1$ regularization and Ridge involving $l_2$ regularization:

$$\text{Lasso:}\quad\min_{\boldsymbol{\beta}\in\mathbb{R}^d}\frac{1}{N}\|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|_2^2+\alpha\|\boldsymbol{\beta}\|_1=\min_{\boldsymbol{\beta}\in\mathbb{R}^d}\frac{1}{N}\sum_{i=1}^N(\mathbf{x}_i'\boldsymbol{\beta}-y_i)^2+\alpha\sum_{j=1}^d|\beta_j|,$$

$$\text{Ridge:}\quad\min_{\boldsymbol{\beta}\in\mathbb{R}^d}\frac{1}{N}\|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|_2^2+\alpha\|\boldsymbol{\beta}\|_2^2=\min_{\boldsymbol{\beta}\in\mathbb{R}^d}\frac{1}{N}\sum_{i=1}^N(\mathbf{x}_i'\boldsymbol{\beta}-y_i)^2+\alpha\sum_{j=1}^d\beta_j^2.$$

The different properties of $l_1$ and $l_2$ regularization lead to distinct advantages and disadvantages of the Lasso and Ridge methods. Given their differences, the Elastic Net method (Zou & Hastie, 2005) was proposed by combining both $l_1$ and $l_2$ regularization:

$$\text{Elastic Net:}\quad\min_{\boldsymbol{\beta}\in\mathbb{R}^d}\frac{1}{N}\|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\|_2^2+\alpha_1\|\boldsymbol{\beta}\|_1+\alpha_2\|\boldsymbol{\beta}\|_2^2=\min_{\boldsymbol{\beta}\in\mathbb{R}^d}\frac{1}{N}\sum_{i=1}^N(\mathbf{x}_i'\boldsymbol{\beta}-y_i)^2+\alpha_1\sum_{j=1}^d|\beta_j|+\alpha_2\sum_{j=1}^d\beta_j^2.$$

The Elastic Net method integrates the strengths of two distinct approaches: the $l_1$ regularization term enforces sparsity, while the $l_2$ regularization maintains the grouping effect (Zhou, 2013; Narisetty, 2020). The Elastic Net has also motivated research studies to adopt the idea of combining $l_1$ and $l_2$ regularizations in deep neural networks (DNNs) (Kang et al., 2017; Chen et al., 2018; Hu et al., 2023; Xu et al., 2023a). Notably, a recent study (Xu et al., 2023a) presents an excess risk bound for Elastic Net regularized DNNs. This finding provides supporting evidence that incorporating both $l_1$ and $l_2$ regularizations in a DNN model is reasonable.
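As a quick illustration of the combined penalty, the sketch below fits scikit-learn's `ElasticNet` on synthetic data with a sparse ground truth. The data and hyperparameters are illustrative; note that scikit-learn parameterizes the two penalties jointly through `alpha` and `l1_ratio` rather than via separate $\alpha_1,\alpha_2$:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
N, d = 300, 10
X = rng.normal(size=(N, d))
beta = np.zeros(d)
beta[:2] = [1.5, -2.0]                 # sparse ground-truth coefficients
y = X @ beta + 0.1 * rng.normal(size=N)

# sklearn objective: (1/(2N))||y - Xb||^2
#                    + alpha*l1_ratio*||b||_1 + 0.5*alpha*(1-l1_ratio)*||b||_2^2
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(model.coef_)  # large weights on the first two features, rest shrunk toward 0
```

The $l_1$ part drives the eight irrelevant coefficients to (near) zero, while the $l_2$ part shrinks the two active ones smoothly, mirroring the sparsity/grouping split described above.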
The insights gained from Xu et al. (2023a) shed light on the theoretical explanation of our method, and even pave the way for exploring the integration of different divergence metrics in other research areas, such as domain adaptation, transfer learning, and fair representation learning.

Connection with multi-task learning: balancing on two representations. Our DIGNet model performs representation balancing on two distinct representations using the Wasserstein distance and the $\mathcal{H}$-divergence separately, and the learned representations are then concatenated for outcome modeling. We now provide additional explanations regarding its connection with the multi-task learning method. In multi-task learning, distinct representations are learned for different tasks, with each task involving its own objective function. An important step in multi-task learning is integrating the information from these separately learned representations into a unified representation. One common approach is to concatenate the task-specific representations to form a joint representation, which effectively preserves the information from each task for outcome modeling (e.g., Li et al. (2018); Baltrušaitis et al. (2018); Crawshaw (2020); Yan et al. (2021); Xu et al. (2023b)). For example, in an e-commerce application (Liu et al., 2023), diverse types of user footprints are encoded using different representations with diverse objectives. The learned representations are then concatenated to make the final target prediction. Similarly, in another application (Wu et al., 2018), user and product attentions are learned separately on two distinct representations, which are later concatenated for the final outcome prediction. In tasks such as image and text classification (Hao et al., 2023), concatenating multiple representations has shown effective improvements in classification performance, brought about by the complementary information of each representation.
Furthermore, a recent study on multi-view learning (Li et al., 2024) has also demonstrated that concatenating both the non-attention and attention representations of each view can prevent information loss in the final classification task.

Summary of strengths and limitations. Our method combines the Wasserstein distance and the $\mathcal{H}$-divergence for representation balancing to capture different types of balancing patterns compared to classic representation balancing models. This shares a similar intuition with the Elastic Net, which combines $l_1$ and $l_2$ regularizations to learn features with different properties. Notably, the two regularizations in the Elastic Net are learned on a single parameter space with one objective; this provides more interpretability but might introduce a new trade-off. Different from the Elastic Net, our DIGNet model concatenates two distinct representations learned from two different tasks: Wasserstein distance guided and $\mathcal{H}$-divergence guided representation balancing. This aligns with the principle of multi-task learning. The concatenation fusion technique is extensively employed in numerous multi-task learning studies (Baltrušaitis et al., 2018), as it effectively preserves and integrates information from different tasks, leading to improved performance in the final prediction objective (Hao et al., 2023; Li et al., 2024). However, it is crucial to acknowledge that this straightforward concatenation approach can present challenges when interpreting the specific role of each representation and can also increase model complexity (Jia et al., 2020).

![14_image_0.png](14_image_0.png)

Figure 5: T-SNE visualizations of the covariates as $\gamma$ varies. Red represents the treatment group and blue represents the control group. A larger $\gamma$ indicates a greater imbalance between the two groups.

## 6 Experiments

In non-randomized observational data, the ground truth regarding treatment effects remains inaccessible due to the absence of counterfactual information.
Therefore, we use simulated data and semi-synthetic benchmark data to test the performance of our methods and other baseline models. In this section, we primarily investigate the following three questions:

Q1. Is PDIG helpful in ITE estimation through Path I in the Introduction, i.e., learning more effective balancing patterns without affecting factual outcome prediction?

Q2. Is PPBR helpful in ITE estimation through Path II in the Introduction, i.e., improving factual outcome prediction without affecting the learning of balancing patterns?

Q3. Can the proposed DIGNet model outperform other baseline models on the benchmark dataset?

Ablation models. To investigate Q1 and Q2, we conducted ablation studies and designed two ablation models, **DGNet** and **DINet**, where DGNet (or DINet) can be considered as DIGNet without PDIG, and GNet (or INet) can be considered as DGNet (or DINet) without PPBR. The structures of DGNet and DINet are shown in Figure 2(c) and Figure 2(d), and the objectives of DGNet and DINet are deferred to Section A.6.

## 6.1 Experimental Settings

Simulation data. Previous causal inference works assess model effectiveness by varying the distributional imbalance of covariates in treated and controlled groups at different levels (Yao et al., 2018; Yoon et al., 2018; Du et al., 2021). As suggested by Assaad et al. (2021), we draw 1000 observational data points from the following data-generating strategy:

$$\mathbf{X}_i\sim\mathcal{N}(\mathbf{0},\sigma^2\cdot[\rho\mathbf{1}_p\mathbf{1}_p'+(1-\rho)\mathbf{I}_p]),$$
$$T_i\mid\mathbf{X}_i\sim\mathrm{Bernoulli}(1/(1+\exp(-\gamma\mathbf{X}_i))),$$
$$Y_i^0=\boldsymbol{\beta}_0'\mathbf{X}_i+\xi_i,\quad Y_i^1=\boldsymbol{\beta}_1'\mathbf{X}_i+\xi_i,\quad\xi_i\sim\mathcal{N}(0,1).$$

Here, $\mathbf{1}_p$ denotes the $p$-dimensional all-ones vector and $\mathbf{I}_p$ denotes the identity matrix of size $p$.
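The data-generating strategy above can be sketched in numpy with the parameter values fixed in this subsection. One caveat: the scalar reduction of $-\gamma\mathbf{X}_i$ inside the Bernoulli logit is left implicit; the sketch assumes it is $\gamma$ times the sum of the covariates, which is an assumption on our part rather than necessarily the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 10
rho, sigma2, gamma = 0.3, 2.0, 1.0

# covariance sigma^2 * [rho * 1_p 1_p' + (1 - rho) * I_p]
Sigma = sigma2 * (rho * np.ones((p, p)) + (1 - rho) * np.eye(p))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

# Assumption: the logit -gamma * X_i is read as -gamma * sum_j X_ij
propensity = 1.0 / (1.0 + np.exp(-gamma * X.sum(axis=1)))
T = rng.binomial(1, propensity)

beta0, beta1 = np.full(p, 0.3), np.full(p, 1.3)
xi = rng.normal(size=n)
Y0, Y1 = X @ beta0 + xi, X @ beta1 + xi
Y = np.where(T == 1, Y1, Y0)          # observed factual outcome
print(X.shape, T.mean())
```

Increasing `gamma` makes the propensity depend more strongly on the covariates, which is exactly how the selection bias levels below are produced.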
We fix $p=10$, $\rho=0.3$, $\sigma^2=2$, $\boldsymbol{\beta}_0'=[0.3,\ldots,0.3]$, $\boldsymbol{\beta}_1'=[1.3,\ldots,1.3]$ and vary $\gamma\in\{0.25, 0.5, 0.75, 1, 1.5, 2, 3\}$ to yield different levels of selection bias. As seen in Figure 5, selection bias becomes more severe as $\gamma$ increases. For each $\gamma$, we repeat the above data-generating process to generate 30 different datasets, with each dataset split by the ratio of 56%/24%/20% into training/validation/test sets.

Semi-synthetic data. The IHDP dataset, introduced by Hill (2011), originates from the Infant Health and Development Program (IHDP). This program conducted a randomized controlled experiment in 1985 to investigate whether there is a positive causal effect of frequent high-quality child care and home visits (treatment) on cognitive scores (outcome). The collected data comprise 25-dimensional pre-treatment covariates, including measurements on the infants (e.g., birth weight, gender, head circumference), as well as measurements on the mothers during pregnancy (e.g., age, marital status, education, smoking and drinking habits). In order to create an observational dataset that involves selection bias, Hill excluded a subpopulation (children with nonwhite mothers) from the treated group. Consequently, the IHDP dataset exhibits a covariate shift, resulting in imbalanced treated and controlled groups. The final IHDP dataset consists of 747 samples, comprising 139 treated samples and 608 controlled samples. The potential outcomes were generated using setting A in the NPCI package (Dorie, 2021). We use the same 1000 datasets as used in Shalit et al. (2017), with each dataset split by the ratio of 63%/27%/10% into training/validation/test sets.

![15_image_0.png](15_image_0.png)

Figure 6: Plots of model performances on the test set for different metrics as $\gamma$ varies in $\{0.25, 0.5, 0.75, 1, 1.5, 2, 3\}$. Each graph shows the average of 30 runs with standard errors shaded. Lower lines indicate lower values of the metric.

Models and metrics.
In simulation experiments, we perform comprehensive comparisons between INet, GNet, DINet, DGNet, and DIGNet in terms of the mean and standard error of the following metrics: √ϵ_PEHE, √ϵ_CF, and √ϵ_F, with L defined in Definition 1 being the squared loss, as well as the empirical approximations of Wass(p_Φ^{T=1}, p_Φ^{T=0}) and d_H(p_Φ^{T=1}, p_Φ^{T=0}) (denoted by Wass and d̂_H, respectively). Note that, as shown in Figure 2, Wass is computed over Φ_E for GNet but over Φ_G for DGNet and DIGNet; d̂_H is computed over Φ_E for INet but over Φ_I for DINet and DIGNet. To analyze the source of gain and ensure a fair comparison in the simulation studies, we fix hyperparameters across all models, consistent with Curth & van der Schaar (2021). We apply an early stopping rule to all models, as Shalit et al. (2017) do. In the IHDP experiment, we use √ϵ_PEHE as well as an additional metric ϵ_ATE = |τ̂_ATE − τ_ATE| to evaluate the performance of various causal models (see Table 6). More descriptions of the implementation details, as well as the analysis of training time, training stability, and hyperparameter sensitivity, are deferred to Section A.5.

Device. All experiments are run on a Dell 7920 with one 16-core Intel Xeon Gold 6250 3.90GHz CPU and three NVIDIA Quadro RTX 6000 GPUs.

## 6.2 Results And Analysis

## 6.2.1 Preliminary Experimental Results

In this part, we first make a general comparison between the different models as the degree of covariate imbalance increases; the relevant results are shown in Figure 6.
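The two treatment effect metrics used throughout this comparison have standard empirical forms when both potential outcomes are observable, as in synthetic data. A minimal sketch (these are the textbook estimators, not necessarily the exact implementation used here; `mu1_hat`/`mu0_hat` denote the model's predicted potential outcomes):

```python
import numpy as np

def sqrt_pehe(mu1_hat, mu0_hat, y1, y0):
    # root Precision in Estimation of Heterogeneous Effects:
    # RMSE between estimated and true individual treatment effects
    return np.sqrt(np.mean(((mu1_hat - mu0_hat) - (y1 - y0)) ** 2))

def ate_error(mu1_hat, mu0_hat, y1, y0):
    # absolute error of the average treatment effect: |ATE_hat - ATE|
    return np.abs(np.mean(mu1_hat - mu0_hat) - np.mean(y1 - y0))
```

A model can have a small ϵ_ATE while its √ϵ_PEHE is large, since individual errors of opposite sign cancel in the average; this is why both metrics are reported.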
There are four main observations: (1) DIGNet attains the lowest √ϵ_PEHE across all datasets, while GNet performs worse than the other models; (2) DINet and DGNet outperform INet and GNet regarding √ϵ_CF and √ϵ_PEHE; (3) INet, DINet, and DGNet are comparable to DIGNet in terms of factual outcome estimation (√ϵ_F), but cannot compete with DIGNet in terms of counterfactual estimation (√ϵ_CF) or ITE estimation (√ϵ_PEHE); (4) DIGNet achieves smaller d̂_H (or Wass) than DINet and INet (or DGNet and GNet), especially when the covariate shift problem is severe (e.g., when γ > 1). In conclusion, the above study has produced several noteworthy findings. Firstly, finding (1) reveals that our proposed DIGNet model consistently performs well in ITE estimation. Secondly, as indicated by finding (2), implementing the PPBR approach can enhance the predictive accuracy of factual and counterfactual outcomes.

Table 1: Ablation study for PDIG: Mean ± std of each metric averaged across 30 runs on the test set when γ = 3. Lower is better.

|        | √ϵ_F        | √ϵ_CF       | d̂_H         | Wass        |
|--------|-------------|-------------|-------------|-------------|
| DIGNet | 1.07 ± 0.01 | 2.89 ± 0.07 | 1.94 ± 0.00 | 0.06 ± 0.00 |
| DINet  | 1.07 ± 0.01 | 2.95 ± 0.07 | 1.96 ± 0.00 | -           |
| DGNet  | 1.07 ± 0.01 | 3.08 ± 0.07 | -           | 0.10 ± 0.00 |

Table 2: Ablation study for PPBR: Mean ± std of each metric averaged across 30 runs on the test set when γ = 3. Lower is better.

|        | √ϵ_F        | √ϵ_CF       | d̂_H         | Wass        |
|--------|-------------|-------------|-------------|-------------|
| DGNet  | 1.07 ± 0.01 | 3.08 ± 0.07 | -           | 0.10 ± 0.00 |
| GNet   | 1.12 ± 0.03 | 3.55 ± 0.14 | -           | 0.10 ± 0.00 |
| DINet  | 1.07 ± 0.01 | 2.95 ± 0.07 | 1.96 ± 0.00 | -           |
| INet   | 1.08 ± 0.01 | 3.47 ± 0.12 | 1.96 ± 0.00 | -           |
Lastly, findings (3) and (4) highlight the role of the PDIG structure in enhancing the simultaneous reinforcement and complementarity of group distance minimization and individual propensity confusion, resulting in more balanced representations. Our subsequent analysis will step beyond these preliminary conclusions to gain a deeper understanding of the effectiveness of the proposed methods.

## 6.2.2 Further Ablation Studies

So far, our preliminary observations have shown that the ITE errors of the models are ordered as DIGNet < DINet < INet and DIGNet < DGNet < GNet. To further explore how PDIG and PPBR contribute to the improvement of ITE estimation, we choose the case with high selection bias (γ = 3) to analyze the source of gain for PDIG and PPBR. We report model performances (mean ± std) for each specific metric averaged across 30 runs on the test set in Table 1 and Table 2. We also report model performances (mean ± std) averaged over 30 training and test sets in Table 3. Below we discuss the source of gain in detail.

Ablation study for PDIG. *The PDIG structure proves effective in capturing more effective balancing patterns, without affecting factual outcome predictions.* As depicted in Figure 6, DIGNet exhibits more balanced representations, irrespective of whether the discrepancy is measured by d̂_H or Wass, while DIGNet, DINet, and DGNet demonstrate comparable estimates of factual outcomes (√ϵ_F). Two additional pieces of specific evidence can be observed from Table 1: (1) Although DINet and DGNet lack PDIG compared to DIGNet, the three models exhibit very similar performance regarding √ϵ_F, each at 1.07 ± 0.01. This indicates that PDIG does not impact the factual estimation. (2) DIGNet achieves a smaller d̂_H than DINet, with a |1.94/1.96 − 1| = 1.0% reduction, and a smaller Wass than DGNet, with a |0.06/0.10 − 1| = 40% reduction.
This indicates that PDIG enables the model to learn more effective balancing patterns. Together, the two points above show that PDIG can capture more effective balancing patterns without affecting factual outcome predictions. This advantage translates into superior counterfactual estimation, with DIGNet reducing √ϵ_CF by |2.89/2.95 − 1| = 2.0% and |2.89/3.08 − 1| = 6.2% compared to DINet and DGNet, respectively. Correspondingly, DIGNet also shows superiority in treatment effect estimation (√ϵ_PEHE and ϵ_ATE) compared to DINet (or DGNet), as demonstrated in Table 3.

Ablation study for PPBR. *The PPBR approach contributes to enhancing factual outcome predictions, without affecting learning balancing patterns.* From Table 2, we gain two important insights: (1) The difference in learned balancing patterns, measured by d̂_H (or Wass), between DINet and INet (or DGNet and GNet) is negligible. This implies that PPBR does not affect learning balancing patterns. (2) Compared with INet, DINet achieves a smaller √ϵ_F, with a |1.07/1.08 − 1| = 0.9% error reduction. Similarly, compared with GNet, DGNet achieves a smaller √ϵ_F, with a |1.07/1.12 − 1| = 4.5% error reduction. These two observations reveal that PPBR can improve factual outcome predictions without affecting learning balancing patterns. Benefiting from this advantage of PPBR, the improvement is particularly pronounced in counterfactual estimation. Comparing DINet with INet, the reduction in √ϵ_CF amounts to |2.95/3.47 − 1| = 15.0%. Similarly, comparing DGNet with GNet, the reduction is |3.08/3.55 − 1| = 13.2%. Correspondingly, DINet (or DGNet) attains smaller treatment effect errors (√ϵ_PEHE and ϵ_ATE) compared with INet (or GNet), as shown in Table 3.

Table 3: Training- & test-set √ϵ_PEHE & ϵ_ATE when γ = 3. Mean ± standard error of 30 runs.
|        | Training set |             | Test set    |             |
|--------|--------------|-------------|-------------|-------------|
|        | √ϵ_PEHE      | ϵ_ATE       | √ϵ_PEHE     | ϵ_ATE       |
| GNet   | 3.30 ± 0.15  | 2.58 ± 0.14 | 3.30 ± 0.16 | 2.59 ± 0.14 |
| INet   | 3.24 ± 0.11  | 2.46 ± 0.09 | 3.22 ± 0.12 | 2.47 ± 0.10 |
| DGNet  | 2.86 ± 0.06  | 2.15 ± 0.03 | 2.83 ± 0.07 | 2.15 ± 0.04 |
| DINet  | 2.70 ± 0.06  | 2.12 ± 0.04 | 2.69 ± 0.08 | 2.13 ± 0.05 |
| DIGNet | 2.66 ± 0.07  | 2.04 ± 0.05 | 2.63 ± 0.07 | 2.03 ± 0.04 |

Table 4: Training- & test-set √ϵ_PEHE & ϵ_ATE on the 1–100 IHDP datasets. Mean ± standard error.

|        | Training set |             | Test set    |             |
|--------|--------------|-------------|-------------|-------------|
|        | √ϵ_PEHE      | ϵ_ATE       | √ϵ_PEHE     | ϵ_ATE       |
| GNet   | 0.71 ± 0.15  | 0.12 ± 0.01 | 0.77 ± 0.18 | 0.15 ± 0.02 |
| INet   | 0.66 ± 0.09  | 0.13 ± 0.01 | 0.72 ± 0.11 | 0.15 ± 0.02 |
| DGNet  | 0.53 ± 0.07  | 0.11 ± 0.01 | 0.60 ± 0.09 | 0.13 ± 0.01 |
| DINet  | 0.57 ± 0.12  | 0.13 ± 0.01 | 0.60 ± 0.11 | 0.14 ± 0.01 |
| DIGNet | 0.42 ± 0.02  | 0.11 ± 0.01 | 0.45 ± 0.04 | 0.12 ± 0.01 |

Table 5: Significance analysis of the achieved improvements, comparing GNet vs. DGNet, INet vs. DINet, DGNet vs. DIGNet, and DINet vs. DIGNet. A p-value ≤ 0.05 indicates that the difference is statistically significant.

|                  | Training √ϵ_PEHE |         | Training ϵ_ATE |         | Test √ϵ_PEHE |         | Test ϵ_ATE |         |
|------------------|---------|---------|---------|---------|---------|---------|---------|---------|
|                  | t-value | p-value | t-value | p-value | t-value | p-value | t-value | p-value |
| GNet vs. DGNet   | 2.7435  | 0.0081  | 2.9844  | 0.0042  | 2.7073  | 0.0089  | 2.9269  | 0.0049  |
| INet vs. DINet   | 4.0812  | 0.0001  | 3.5222  | 0.0008  | 3.5665  | 0.0007  | 3.0824  | 0.0031  |
| DGNet vs. DIGNet | 2.0240  | 0.0476  | 1.8888  | 0.0639  | 2.0650  | 0.0434  | 2.0935  | 0.0407  |
| DINet vs. DIGNet | 0.4513  | 0.6535  | 1.3525  | 0.1815  | 0.6079  | 0.5456  | 1.5473  | 0.1272  |

Significance analysis for the improvements.
To assess the significance of the improvements observed in the above ablation studies, we conducted an additional significance analysis by recording the values of √ϵ_PEHE and ϵ_ATE for 30 runs of each of the 5 models (GNet, INet, DGNet, DINet, and DIGNet). Subsequently, we performed t-tests for GNet vs. DGNet, INet vs. DINet, DGNet vs. DIGNet, and DINet vs. DIGNet to investigate the statistical significance of their differences. The relevant results are reported in Table 5. The results reveal a statistically significant difference between GNet and DGNet, INet and DINet, as well as DGNet and DIGNet. Note that the difference between DINet and DIGNet is not statistically significant, despite DIGNet exhibiting smaller treatment effect estimation errors on average than DINet.

## 6.2.3 Comparisons On IHDP Benchmark

In this part, we perform experiments on the IHDP benchmark dataset to compare the performances of different models. The corresponding results are reported in Tables 4 and 6. First, we report the ablation results on the 1–100 IHDP datasets in Table 4, aiming to examine the consistent effectiveness of PDIG and PPBR. Specifically, Table 4 shows that DINet and DGNet are superior to INet and GNet but inferior to DIGNet concerning treatment effect estimation, suggesting that both PDIG and PPBR are advantageous for treatment effect estimation. For example, on the test set, DINet reduces √ϵ_PEHE by |0.60/0.72 − 1| = 16.7% relative to INet, and DIGNet reduces √ϵ_PEHE by |0.45/0.60 − 1| = 25% relative to DINet. This is consistent with the earlier findings: PDIG and PPBR are beneficial to treatment effect estimation. Furthermore, we compare DIGNet with other causal models on the 1–1000 IHDP datasets and report the results in Table 6. The results highlight the superior performance of the proposed DIGNet across all the models.
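The significance analysis above is a paired t-test over the per-run metric values of two models. A NumPy-only sketch of the test statistic (the function name is ours; the two-sided p-value would come from the Student-t survival function, e.g. `scipy.stats.t.sf`, with n − 1 degrees of freedom):

```python
import numpy as np

def paired_t_statistic(errors_a, errors_b):
    """t-statistic of a paired t-test between two models' per-run errors.

    errors_a, errors_b: matched metric values, e.g. sqrt(eps_PEHE)
    of model A and model B on the same 30 datasets.
    """
    d = np.asarray(errors_a, dtype=float) - np.asarray(errors_b, dtype=float)
    n = d.size
    # t = mean difference / standard error of the difference
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

With 30 runs per model, the statistic has 29 degrees of freedom, and the p-values in Table 5 follow as 2·P(T₂₉ > |t|).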
Specifically, in comparison to the second-best method in test-sample performance, DIGNet achieves a substantial improvement, with the error reduced by |0.45/0.57 − 1| = 21% in terms of √ϵ_PEHE and |0.12/0.13 − 1| = 7.7% in terms of ϵ_ATE. Moreover, it is worth noting that DIGNet consistently achieves the lowest errors across various datasets and metrics, revealing its robust performance. We also conduct an additional experiment on another benchmark dataset, Twins. The details and results are deferred to Section A.5.

## 7 Conclusion

This paper establishes a theoretical foundation by deriving counterfactual and ITE error bounds based on H-divergence. This theoretical foundation builds a connection between representation balancing and individual propensity confusion. Furthermore, based on individual propensity confusion and group distance minimization, we suggest learning decomposed patterns for representation balancing models using the PDIG and PPBR methods. Building upon PDIG and PPBR, we propose DIGNet, a novel model for treatment effect estimation. Comprehensive experiments verify that PDIG and PPBR follow different pathways to improve counterfactual and ITE estimation. In particular, PDIG enables the model to capture more effective balancing patterns without affecting factual outcome prediction, while PPBR contributes to improving factual outcome predictions without influencing learning balancing patterns. We hope these findings can constitute an important step toward inspiring more research on the generalization of representation balancing models for counterfactual and ITE estimation.

Table 6: Training- & test-set √ϵ_PEHE & ϵ_ATE on IHDP. Mean ± standard error of 1000 runs.
|                                        | Training set |           | Test set |           |
|----------------------------------------|----------|-----------|----------|-----------|
|                                        | √ϵ_PEHE  | ϵ_ATE     | √ϵ_PEHE  | ϵ_ATE     |
| OLS/LR1 (Johansson et al., 2016)       | 5.8 ± .3 | .73 ± .04 | 5.8 ± .3 | .94 ± .06 |
| OLS/LR2 (Johansson et al., 2016)       | 2.4 ± .1 | .14 ± .01 | 2.5 ± .1 | .31 ± .02 |
| k-NN (Crump et al., 2008)              | 2.1 ± .1 | .14 ± .01 | 4.1 ± .2 | .79 ± .05 |
| BART (Chipman et al., 2010)            | 2.1 ± .1 | .23 ± .01 | 2.3 ± .1 | .34 ± .02 |
| CF (Wager & Athey, 2018)               | 3.8 ± .2 | .18 ± .01 | 3.8 ± .2 | .40 ± .03 |
| CEVAE (Louizos et al., 2017)           | 2.7 ± .1 | .34 ± .01 | 2.6 ± .1 | .46 ± .02 |
| SITE (Yao et al., 2018)                | .69 ± .0 | .22 ± .01 | .75 ± .0 | .24 ± .01 |
| GANITE (Yoon et al., 2018)             | 1.9 ± .4 | .43 ± .05 | 2.4 ± .4 | .49 ± .05 |
| BLR (Johansson et al., 2016)           | 5.8 ± .3 | .72 ± .04 | 5.8 ± .3 | .93 ± .05 |
| BNN (Johansson et al., 2016)           | 2.2 ± .1 | .37 ± .03 | 2.1 ± .1 | .42 ± .03 |
| TARNet (Shalit et al., 2017)           | .88 ± .0 | .26 ± .01 | .95 ± .0 | .28 ± .01 |
| CFR-Wass (GNet) (Shalit et al., 2017)  | .73 ± .0 | .12 ± .01 | .81 ± .0 | .15 ± .01 |
| Dragonnet (Shi et al., 2019)           | 1.3 ± .4 | .14 ± .01 | 1.3 ± .5 | .20 ± .05 |
| MBRL (Huang et al., 2022a)             | .52 ± .0 | .12 ± .01 | .57 ± .0 | .13 ± .01 |
| DIGNet (Ours)                          | .41 ± .0 | .11 ± .01 | .46 ± .0 | .12 ± .01 |

Limitations and future works. While our paper verifies the effectiveness of PDIG and PPBR in improving ITE estimation, it is also important to step beyond our empirical insights toward future theoretical studies that address the trade-off challenge mentioned in the introduction, e.g., exploring the possibility of deriving tighter theoretical error bounds based on learning decomposed patterns, and incorporating orthogonal machine learning (Chernozhukov et al., 2018; Oprescu et al., 2019; Nie & Wager, 2021; Huang et al., 2022b) into the representation learning model to improve the model's robustness to misspecification. Furthermore, it remains challenging to analytically determine the best divergence metric for representation balancing methods.
A promising avenue for future theoretical investigations would involve developing new distributional divergences or exploring a unified theory that enables models to select appropriate divergence metrics based on the data at hand. Empirical studies can focus on reducing redundancy in the concatenated fusion of the decomposed patterns and improving the efficacy of the multi-task learning objectives. While we have followed the same approach as previous studies by evaluating model performance using simulated and semi-synthetic data, it is crucial for future research to explore the creation of appropriate benchmark datasets (Athey & Wager, 2019; Curth et al., 2021) for assessing the performance of ITE estimation methods in real-world scenarios. ## Acknowledgement We are thankful for the constructive and helpful comments provided by the reviewers and action editor during the reviewing process of TMLR, which have greatly contributed to improving our work. Qi WU acknowledges the support from The CityU-JD Digits Joint Laboratory in Financial Technology and Engineering and The Hong Kong Research Grants Council [General Research Fund 11219420/9043008 and 11200219/9042900]. The work described in this paper was partially supported by the InnoHK initiative, the Government of the HKSAR, and the Laboratory for AI-Powered Financial Technologies. ## References Tameem Adel, Isabel Valera, Zoubin Ghahramani, and Adrian Weller. One-network adversarial fairness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 2412–2420, 2019. Joshua D Angrist, Guido W Imbens, and Donald B Rubin. Identification of causal effects using instrumental variables. Journal of the American statistical Association, 91(434):444–455, 1996. Serge Assaad, Shuxi Zeng, Chenyang Tao, Shounak Datta, Nikhil Mehta, Ricardo Henao, Fan Li, and Lawrence Carin. Counterfactual representation learning with balancing weights.
In International Conference on Artificial Intelligence and Statistics, pp. 1972–1980. PMLR, 2021. Susan Athey and Stefan Wager. Estimating treatment effects with causal forests: An application. Observational Studies, 5(2):37–51, 2019. Peter C Austin and Elizabeth A Stuart. Moving towards best practice when using inverse probability of treatment weighting (iptw) using the propensity score to estimate causal treatment effects in observational studies. Statistics in medicine, 34(28):3661–3679, 2015. Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. Multimodal machine learning: A survey and taxonomy. IEEE transactions on pattern analysis and machine intelligence, 41(2):423–443, 2018. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19, 2006. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151–175, 2010. Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075, 2017. Ioana Bica, Ahmed M Alaa, Craig Lambert, and Mihaela Van Der Schaar. From real-world patient data to individualized treatment effects using machine learning: current and future methods to address underlying challenges. Clinical Pharmacology & Therapeutics, 109(1):87–100, 2021a. Ioana Bica, Daniel Jarrett, Alihan Hüyük, and Mihaela van der Schaar. Learning "what-if" explanations for sequential decision-making. In International Conference on Learning Representations, 2021b. URL https://openreview.net/forum?id=h0de3QWtGG. Stephen Burgess, Dylan S Small, and Simon G Thompson. A review of instrumental variable estimators for mendelian randomization. Statistical methods in medical research, 26(5):2333–2355, 2017. 
Marco Caliendo and Sabine Kopeinig. Some practical guidance for the implementation of propensity score matching. Journal of economic surveys, 22(1):31–72, 2008. Defu Cao, James Enouen, Yujing Wang, Xiangchen Song, Chuizheng Meng, Hao Niu, and Yan Liu. Estimating treatment effects from irregular time series observations with hidden confounders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 6897–6905, 2023. Minwoo Chae and Stephen G Walker. Wasserstein upper bounds of the total variation for smooth densities. Statistics & Probability Letters, 163:108771, 2020. Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. Ead: elastic-net attacks to deep neural networks via adversarial examples. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1):C1–C68, 2018. Hugh A Chipman, Edward I George, and Robert E McCulloch. Bart: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1):266–298, 2010. Zhixuan Chu, Stephen L. Rathbun, and Sheng Li. Graph infomax adversarial learning for treatment effect estimation with networked observational data. In KDD, pp. 176–184, 2021. URL https://doi.org/10. 1145/3447548.3467302. Michael Crawshaw. Multi-task learning with deep neural networks: A survey. arXiv preprint arXiv:2009.09796, 2020. Richard K Crump, V Joseph Hotz, Guido W Imbens, and Oscar A Mitnik. Nonparametric tests for treatment effect heterogeneity. The Review of Economics and Statistics, 90(3):389–405, 2008. Alicia Curth and Mihaela van der Schaar. On inductive biases for heterogeneous treatment effect estimation. Advances in Neural Information Processing Systems, 34:15883–15894, 2021. Alicia Curth and Mihaela Van Der Schaar. 
In search of insights, not magic bullets: Towards demystification of the model selection dilemma in heterogeneous treatment effect estimation. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 6623–6642. PMLR, 23–29 Jul 2023. Alicia Curth, David Svensson, Jim Weatherall, and Mihaela van der Schaar. Really doing great at estimating cate? a critical look at ml benchmarking practices in treatment effect estimation. In Thirty-fifth conference on neural information processing systems datasets and benchmarks track (round 2), 2021. Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In International conference on machine learning, pp. 685–693. PMLR, 2014. Pedro Domingos. A unified bias-variance decomposition. In Proceedings of 17th international conference on machine learning, pp. 231–238. Morgan Kaufmann Stanford, 2000. Vincent Dorie. Nonparametric methods for causal inference. https://github.com/vdorie/npci, 2021. Xin Du, Lei Sun, Wouter Duivesteijn, Alexander Nikolaev, and Mykola Pechenizkiy. Adversarial balancingbased representation learning for causal effect inference with observational data. Data Mining and Knowledge Discovery, 35(4):1713–1738, 2021. Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015. Max H Farrell. Robust inference on average treatment effects with possibly more covariates than observations. Journal of Econometrics, 189(1):1–23, 2015. Rui Feng, Yang Yang, Yuehan Lyu, Chenhao Tan, Yizhou Sun, and Chunping Wang. Learning fair representations via an adversarial framework. arXiv preprint arXiv:1904.13341, 2019. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016. 
Stuart Geman, Elie Bienenstock, and René Doursat. Neural networks and the bias/variance dilemma. Neural computation, 4(1):1–58, 1992. Ruocheng Guo, Jundong Li, Yichuan Li, K Selçuk Candan, Adrienne Raglin, and Huan Liu. Ignite: A minimax game toward learning individual treatment effects from networked observational data. In IJCAI, pp. 4534–4540, 2020a. Ruocheng Guo, Jundong Li, and Huan Liu. Learning individual causal effects from networked observational data. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 232–240, 2020b. Yaru Hao, Xiao-Yuan Jing, Runhang Chen, and Wei Liu. Learning enhanced specific representations for multi-view feature learning. Knowledge-Based Systems, 272:110590, 2023. Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in neural information processing systems, 29, 2016. Negar Hassanpour and Russell Greiner. Counterfactual regression with importance sampling weights. In IJCAI, pp. 5880–5887, 2019a. Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. In International Conference on Learning Representations, 2019b. Jennifer L Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217–240, 2011. Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970. Cong Hu, Yuanbo Li, Zhenhua Feng, and Xiaojun Wu. Attention-guided evolutionary attack with elastic-net regularization on face recognition. Pattern recognition, 143:109760, 2023. Yiyan Huang, Cheuk Hang Leung, Xing Yan, Qi Wu, Nanbo Peng, Dongdong Wang, and Zhixiang Huang. The causal learning of retail delinquency. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 204–212, 2021. Yiyan Huang, Cheuk Hang Leung, Shumin Ma, Qi Wu, Dongdong Wang, and Zhixiang Huang. 
Moderatelybalanced representation learning for treatment effects with orthogonality information. In Pacific Rim International Conference on Artificial Intelligence, pp. 3–16. Springer, 2022a. Yiyan Huang, Cheuk Hang Leung, Qi Wu, Xing Yan, Shumin Ma, Zhiri Yuan, Dongdong Wang, and Zhixiang Huang. Robust causal learning for the estimation of average treatment effects. In 2022 International Joint Conference on Neural Networks (IJCNN 2022). IEEE, 2022b. Yiyan Huang, Cheuk Hang Leung, Shumin Ma, Zhiri Yuan, Qi Wu, Siyi Wang, Dongdong Wang, and Zhixiang Huang. Towards balanced representation learning for credit policy evaluation. In International Conference on Artificial Intelligence and Statistics, pp. 3677–3692. PMLR, 2023. Andrew Jesson, Sören Mindermann, Yarin Gal, and Uri Shalit. Quantifying ignorance in individual-level causal-effect estimates under hidden confounding. In International Conference on Machine Learning, pp. 4829–4838. PMLR, 2021. Xiaodong Jia, Xiao-Yuan Jing, Xiaoke Zhu, Songcan Chen, Bo Du, Ziyun Cai, Zhenyu He, and Dong Yue. Semi-supervised multi-view deep discriminant representation learning. IEEE transactions on pattern analysis and machine intelligence, 43(7):2496–2509, 2020. Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, and Silvia Chiappa. Wasserstein fair classification. In Uncertainty in artificial intelligence, pp. 862–872. PMLR, 2020. Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International conference on machine learning, pp. 3020–3029. PMLR, 2016. Fredrik D. Johansson, Uri Shalit, Nathan Kallus, and David Sontag. Generalization bounds and representation learning for estimation of potential outcomes and causal effects. Journal of Machine Learning Research, 23(166):1–50, 2022a. URL http://jmlr.org/papers/v23/19-511.html. Fredrik D Johansson, Uri Shalit, Nathan Kallus, and David Sontag. 
Generalization bounds and representation learning for estimation of potential outcomes and causal effects. Journal of Machine Learning Research, 23(166):1–50, 2022b. Nathan Kallus, Xiaojie Mao, and Angela Zhou. Interval estimation of individual-level causal effects under unobserved confounding. In The 22nd international conference on artificial intelligence and statistics, pp. 2281–2290. PMLR, 2019. Guoliang Kang, Jun Li, and Dacheng Tao. Shakeout: A new approach to regularized deep neural network training. IEEE transactions on pattern analysis and machine intelligence, 40(5):1245–1258, 2017. Edward H Kennedy. Towards optimal doubly robust estimation of heterogeneous causal effects. Electronic Journal of Statistics, 17(2):3008–3049, 2023. Kun Kuang, Peng Cui, Bo Li, Meng Jiang, Shiqiang Yang, and Fei Wang. Treatment effect estimation with data-driven variable decomposition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017. Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations, 2021. Ananya Kumar, Tengyu Ma, Percy Liang, and Aditi Raghunathan. Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. In Uncertainty in Artificial Intelligence, pp. 1041–1051. PMLR, 2022. Sören R Künzel, Jasjeet S Sekhon, Peter J Bickel, and Bin Yu. Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the national academy of sciences, 116(10):4156– 4165, 2019. Jinxing Li, Chuhao Zhou, Xiaoqiang Ji, Mu Li, Guangming Lu, Yong Xu, and David Zhang. Multi-view instance attention fusion network for classification. Information Fusion, 101:101974, 2024. Yingming Li, Ming Yang, and Zhongfei Zhang. A survey of multi-view representation learning. IEEE transactions on knowledge and data engineering, 31(10):1863–1883, 2018. 
Qi Liu, Zhilong Zhou, Gangwei Jiang, Tiezheng Ge, and Defu Lian. Deep task-specific bottom representation network for multi-task recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pp. 1637–1646, 2023. Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S Yu. Transfer joint matching for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1410–1417, 2014. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pp. 97–105. PMLR, 2015. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International conference on machine learning, pp. 2208–2217. PMLR, 2017. Christos Louizos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. Causal effect inference with deep latent-variable models. Advances in neural information processing systems, 30, 2017. Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, and Zhixiang Huang. Deep into the domain shift: Transfer learning through dependence regularization. IEEE Transactions on Neural Networks and Learning Systems, 2023. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In International Conference on Machine Learning, pp. 3384–3393. PMLR, 2018. Divyat Mahajan, Ioannis Mitliagkas, Brady Neal, and Vasilis Syrgkanis. Empirical analysis of model selection for heterogenous causal effect estimation. International Conference on Learning Representations, 2024. Maggie Makar, Fredrik Johansson, John Guttag, and David Sontag. Estimation of bounds on potential outcomes for decision making. In International Conference on Machine Learning, pp. 6661–6671. PMLR, 2020. 
Wang Miao, Wenjie Hu, Elizabeth L Ogburn, and Xiao-Hua Zhou. Identifying effects of multiple treatments in the presence of unmeasured confounding. Journal of the American Statistical Association, 118(543): 1953–1967, 2023. Naveen Naidu Narisetty. Bayesian model selection for high-dimensional data. In Handbook of statistics, volume 43, pp. 207–248. Elsevier, 2020. Xinkun Nie and Stefan Wager. Quasi-oracle estimation of heterogeneous treatment effects. Biometrika, 108 (2):299–319, 2021. Ana Rita Nogueira, João Gama, and Carlos Abreu Ferreira. Causal discovery in machine learning: Theories and applications. Journal of Dynamics & Games, 8(3), 2021. Miruna Oprescu, Vasilis Syrgkanis, and Zhiwei Steven Wu. Orthogonal random forest for causal inference. In International Conference on Machine Learning, pp. 4932–4941. PMLR, 2019. Judea Pearl. Causality. Cambridge university press, 2009. Zhaozhi Qian, Yao Zhang, Ioana Bica, Angela Wood, and Mihaela van der Schaar. Synctwin: Treatment effect estimation with longitudinal outcomes. Advances in Neural Information Processing Systems, 34: 3178–3190, 2021. Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983. Donald B Rubin. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322–331, 2005. Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pp. 3076–3085. PMLR, 2017. Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. 
Advances in neural information processing systems, 32, 2019. Changjian Shui, Fan Zhou, Christian Gagné, and Boyu Wang. Deep active learning: Unified and principled method for query and training. In International Conference on Artificial Intelligence and Statistics, pp. 1308–1318. PMLR, 2020. Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550–1599, 2012. Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B: Statistical Methodology, 58(1):267–288, 1996. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7167–7176, 2017. Giorgio Valentini and Thomas G Dietterich. Bias-variance analysis of support vector machines for the development of svm-based ensemble methods. Journal of Machine Learning Research, 5(Jul):725–775, 2004. Cédric Villani et al. Optimal transport: old and new, volume 338. Springer, 2009. Matthew J Vowels, Necati Cihan Camgoz, and Richard Bowden. D'ya like dags? a survey on structure learning and causal discovery. ACM Computing Surveys, 55(4):1–36, 2022. Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523):1228–1242, 2018. Anpeng Wu, Kun Kuang, Bo Li, and Fei Wu. Instrumental variable regression with confounder balancing. In International Conference on Machine Learning, pp. 24056–24075. PMLR, 2022. Zhen Wu, Xin-Yu Dai, Cunyan Yin, Shujian Huang, and Jiajun Chen. Improving review representations with user attention and product attention for sentiment classification. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. 
Lihu Xu, Fang Yao, Qiuran Yao, and Huiming Zhang. Non-asymptotic guarantees for robust statistical learning under infinite variance assumption. Journal of Machine Learning Research, 24(92):1–46, 2023a.

Peng Xu, Xiatian Zhu, and David A Clifton. Multimodal learning with transformers: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023b.

Xiaoqiang Yan, Shizhe Hu, Yiqiao Mao, Yangdong Ye, and Hui Yu. Deep multi-view learning methods: A review. Neurocomputing, 448:106–129, 2021.

Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, and Yi Ma. Rethinking bias-variance trade-off for generalization of neural networks. In International Conference on Machine Learning, pp. 10767–10777. PMLR, 2020.

Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, and Aidong Zhang. Representation learning for treatment effect estimation from observational data. Advances in Neural Information Processing Systems, 31, 2018.

Jinsung Yoon, James Jordon, and Mihaela van der Schaar. Ganite: Estimation of individualized treatment effects using generative adversarial nets. In International Conference on Learning Representations, 2018.

Junkun Yuan, Xu Ma, Ruoxuan Xiong, Mingming Gong, Xiangyu Liu, Fei Wu, Lanfen Lin, and Kun Kuang. Instrumental variable-driven domain generalization with unobserved confounders. ACM Transactions on Knowledge Discovery from Data, 17(8):1–21, 2023.

Alessio Zanga, Elif Ozkirimli, and Fabio Stella. A survey on causal discovery: Theory and practice. International Journal of Approximate Reasoning, 151:101–129, 2022.

Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning, pp. 325–333. PMLR, 2013.

Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335–340, 2018.

Yao Zhang, Alexis Bellot, and Mihaela van der Schaar.
Learning overlapping representations for the estimation of individualized treatment effects. In International Conference on Artificial Intelligence and Statistics, pp. 1005–1014. PMLR, 2020.

Han Zhao and Geoffrey J Gordon. Inherent tradeoffs in learning fair representations. Journal of Machine Learning Research, 23(57):1–26, 2022.

Han Zhao, Amanda Coston, Tameem Adel, and Geoffrey J Gordon. Conditional learning of fair representations. In International Conference on Learning Representations, 2019a.

Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pp. 7523–7532. PMLR, 2019b.

Han Zhao, Chen Dan, Bryon Aragam, Tommi S Jaakkola, Geoffrey J Gordon, and Pradeep Ravikumar. Fundamental limits and tradeoffs in invariant representation learning. Journal of Machine Learning Research, 23(340):1–49, 2022.

Zhiri Yuan, Xixu Hu, Qi Wu, Shumin Ma, Cheuk Hang Leung, Xin Shen, and Yiyan Huang. A unified domain adaptation framework with distinctive divergence analysis. 2022.

Ding-Xuan Zhou. On grouping effect of elastic net. Statistics & Probability Letters, 83(9):2108–2112, 2013.

Indre Zliobaite. On the relation between accuracy and fairness in binary classification. arXiv preprint arXiv:1505.05723, 2015.

Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B: Statistical Methodology, 67(2):301–320, 2005.

## A Appendix

## A.1 Discussion Of The Trade-Off Problem

We will now discuss two cases to gain a deeper understanding of this trade-off phenomenon.

(1) The case without representation balancing: In this case, the outcome functions are fitted by $Y^1 = \hat{\tau}^1(X_{\text{treat}})$ and $Y^0 = \hat{\tau}^0(X_{\text{control}})$ using treated and controlled samples, respectively.
$\hat{\tau}^1(X_{\text{treat}})$ and $\hat{\tau}^0(X_{\text{control}})$ can be good estimates of the factual outcomes because the pre-balancing information (group information) is well preserved. However, the estimated counterfactual outcomes $\hat{\tau}^0(X_{\text{treat}})$ and $\hat{\tau}^1(X_{\text{control}})$ can be problematic due to the covariate shift problem $P(X|T=1) \neq P(X|T=0)$, where the distribution of the training data $P(X,Y|T=t)$ differs from that of the test data $P(X,Y|T=1-t)$ for $t \in \{0,1\}$.²

(2) The case with representation balancing: In this case, the outcome functions are fitted by $Y^1 = \hat{h}^1(\Phi(X_{\text{treat}}))$ and $Y^0 = \hat{h}^0(\Phi(X_{\text{control}}))$ using treated and controlled samples, respectively. Using $\Phi(X)$ to fit factual outcomes can improve the accuracy of the counterfactual estimates $\hat{h}^0(\Phi(X_{\text{treat}}))$ and $\hat{h}^1(\Phi(X_{\text{control}}))$, because representation balancing enforces the distributions of treated and controlled samples to be as close as possible in the representation space. As a result, representation balancing effectively tackles the covariate shift issue, so that the training data and the test data follow the same distribution.³ However, executing representation balancing can inevitably lead to a loss of outcome-predictive information in $\Phi(X)$. This occurs naturally as $\Phi$ becomes insensitive to the treatment variable, thereby sacrificing pre-balancing information (group information) that contributes to factual outcome predictions. To illustrate the negative impact of losing pre-balancing information in balanced representations, we present a motivating example below.

Motivating example. Suppose there is a vaccine available to prevent a certain disease. We define $X$ as the covariate, $T=1$ as the treatment (receiving the vaccine), $T=0$ as the control (not receiving the vaccine), and $Y$ as the outcome (the level of specific antibodies).
Assume that the outcome is determined by $Y = T\exp(X) + (1-T)\cdot 0 = T\exp(X)$, which means that if an individual receives the treatment, the level of antibodies will be $y = \exp(x)$; otherwise, it will be $y = 0$. In observational data, the treatment is assigned based on the covariate of each individual. The left graph of Figure 7 illustrates the distributions of $X$ in the treated and control groups. We observe that individuals with positive $x$ values are more likely to receive the vaccine, resulting in a higher level of antibodies. Given some sample $i$, a well-trained model first determines whether the sample is more likely to belong to the treatment or control group based on its covariate $X_i = x$. If it determines the sample to be in the treatment group, the model then predicts $y = \exp(x)$; otherwise, it predicts $y = 0$. For example, if $x = 1$, the model would classify the sample as more likely to be in the treatment group and predict $y = e$. Therefore, in this case, the pre-balancing covariate remains informative for predicting the outcome. Now, let us consider a representation function $\Phi$ that achieves improper representation balancing between the treated and control samples.

![26_image_0.png](26_image_0.png)

Figure 7: Motivating example (similar to the example demonstrated in Huang et al. (2023)) for illustrating the trade-off between outcome prediction and representation balancing.

²By unconfoundedness, we have $P(Y|X, T=t) = P(Y|X, T=1-t)$. Due to the covariate shift $P(X|T=t) \neq P(X|T=1-t)$, we have $P(Y|X,T=t)P(X|T=t) \neq P(Y|X,T=1-t)P(X|T=1-t)$, i.e., $P(X,Y|T=t) \neq P(X,Y|T=1-t)$.

³By the one-to-one and invertible properties of $\Phi$ and unconfoundedness, we have $P(Y|\Phi(X), T=t) = P(Y|\Phi(X), T=1-t)$. Given an ideal representation balancing $P(\Phi(X)|T=t) = P(\Phi(X)|T=1-t)$, we have $P(Y|\Phi(X),T=t)P(\Phi(X)|T=t) = P(Y|\Phi(X),T=1-t)P(\Phi(X)|T=1-t)$, i.e., $P(\Phi(X),Y|T=t) = P(\Phi(X),Y|T=1-t)$.
The right graph of Figure 7 shows the distributions of $\Phi(X)$ in the treated and control groups. In this case, it becomes challenging for a model to accurately predict $Y$ using $\Phi(X)$ because the model may become confused about whether a sample is more likely to receive the treatment or the control. Consequently, improperly balanced representations can lead to a loss of outcome-predictive information.

The above discussions and motivating example illustrate the inherent trade-off problem between outcome prediction (Principle I) and representation balancing (Principle II), which arises because representation balancing models alleviate covariate shift at the expense of factual outcome prediction.

## A.2 Proof Of Lemma 1

Proof of $L$ taking the squared loss, i.e., $L(y_1, y_2) = (y_1 - y_2)^2$:

Proof. We denote $\epsilon_{PEHE}(f) = \epsilon_{PEHE}(h,\Phi)$, $\epsilon_F(f) = \epsilon_F(h,\Phi)$, $\epsilon_{CF}(f) = \epsilon_{CF}(h,\Phi)$ for $f(\mathbf{x},t) = h(\Phi(\mathbf{x}),t)$.

$$\begin{aligned}
\epsilon_{F}(f) &= \int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} (f(\mathbf{x},t)-y^{t})^{2}\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \\
&= \int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} (f(\mathbf{x},t)-\tau^{t}(\mathbf{x}))^{2}\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \\
&\quad + \int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} (\tau^{t}(\mathbf{x})-y^{t})^{2}\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \\
&\quad + 2\int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} (f(\mathbf{x},t)-\tau^{t}(\mathbf{x}))(\tau^{t}(\mathbf{x})-y^{t})\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \quad (28) \\
&= \int_{\mathcal{X}\times\{0,1\}} (f(\mathbf{x},t)-\tau^{t}(\mathbf{x}))^{2}\,p(\mathbf{x},t)\,d\mathbf{x}dt + \sigma_{y^{t}}^{2}(p(\mathbf{x},t)) \quad (29)
\end{aligned}$$

Equation (29) is by the definition of $\sigma_{y^{t}}^{2}(p(\mathbf{x},t))$ in Lemma 1 and the cross term in equation (28) equaling zero since $\tau^{t}(\mathbf{x}) = \int_{\mathcal{Y}} y^{t}\,p(y^{t}|\mathbf{x})\,dy^{t}$.
A similar result can be obtained for $\epsilon_{CF}$:

$$\epsilon_{CF}(f)=\int_{\mathcal{X}\times\{0,1\}}(f(\mathbf{x},t)-\tau^{t}(\mathbf{x}))^{2}\,p(\mathbf{x},1-t)\,d\mathbf{x}dt+\sigma_{y^{t}}^{2}(p(\mathbf{x},1-t)).$$

$$\begin{aligned}
\epsilon_{PEHE}(f) &= \int_{\mathcal{X}} \big((f(\mathbf{x},1)-f(\mathbf{x},0))-(\tau^{1}(\mathbf{x})-\tau^{0}(\mathbf{x}))\big)^{2}\,p(\mathbf{x})\,d\mathbf{x} \\
&\leq 2\int_{\mathcal{X}} \big((f(\mathbf{x},1)-\tau^{1}(\mathbf{x}))^{2}+(f(\mathbf{x},0)-\tau^{0}(\mathbf{x}))^{2}\big)\,p(\mathbf{x})\,d\mathbf{x} \quad (30) \\
&= 2\int_{\mathcal{X}} (f(\mathbf{x},1)-\tau^{1}(\mathbf{x}))^{2}\,p(\mathbf{x},T=1)\,d\mathbf{x}+2\int_{\mathcal{X}} (f(\mathbf{x},0)-\tau^{0}(\mathbf{x}))^{2}\,p(\mathbf{x},T=0)\,d\mathbf{x} \\
&\quad +2\int_{\mathcal{X}} (f(\mathbf{x},1)-\tau^{1}(\mathbf{x}))^{2}\,p(\mathbf{x},T=0)\,d\mathbf{x}+2\int_{\mathcal{X}} (f(\mathbf{x},0)-\tau^{0}(\mathbf{x}))^{2}\,p(\mathbf{x},T=1)\,d\mathbf{x} \quad (31) \\
&= 2\int_{\mathcal{X}\times\{0,1\}} (f(\mathbf{x},t)-\tau^{t}(\mathbf{x}))^{2}\,p(\mathbf{x},t)\,d\mathbf{x}dt+2\int_{\mathcal{X}\times\{0,1\}} (f(\mathbf{x},t)-\tau^{t}(\mathbf{x}))^{2}\,p(\mathbf{x},1-t)\,d\mathbf{x}dt \\
&= 2\big(\epsilon_{F}(f)-\sigma_{y^{t}}^{2}(p(\mathbf{x},t))\big)+2\big(\epsilon_{CF}(f)-\sigma_{y^{t}}^{2}(p(\mathbf{x},1-t))\big). \quad (32)
\end{aligned}$$

Inequality (30) is by $(x+y)^{2}\leq 2(x^{2}+y^{2})$; equation (31) is by $p(\mathbf{x})=p(\mathbf{x},T=0)+p(\mathbf{x},T=1)$. By equation (32) and the definition of $\sigma_{y}^{2}$ in Lemma 1, we have

$$\epsilon_{PEHE}(h,\Phi)\leq 2(\epsilon_{CF}(h,\Phi)+\epsilon_{F}(h,\Phi)-2\sigma_{y}^{2}).$$

Proof of $L$ taking the absolute loss, i.e., $L(y_1,y_2)=|y_1-y_2|$:

Proof. We denote $\epsilon_{PEHE}(f)=\epsilon_{PEHE}(h,\Phi)$, $\epsilon_F(f)=\epsilon_F(h,\Phi)$, $\epsilon_{CF}(f)=\epsilon_{CF}(h,\Phi)$ for $f(\mathbf{x},t)=h(\Phi(\mathbf{x}),t)$.

$$\begin{aligned}
\epsilon_{F}(f) &= \int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} |f(\mathbf{x},t)-y^{t}|\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \\
&\geq \int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} |f(\mathbf{x},t)-\tau^{t}(\mathbf{x})|\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \\
&\quad -\int_{\mathcal{X}\times\{0,1\}\times\mathcal{Y}} |\tau^{t}(\mathbf{x})-y^{t}|\,p(y^{t}|\mathbf{x})\,p(\mathbf{x},t)\,dy^{t}d\mathbf{x}dt \quad (33) \\
&= \int_{\mathcal{X}\times\{0,1\}} |f(\mathbf{x},t)-\tau^{t}(\mathbf{x})|\,p(\mathbf{x},t)\,d\mathbf{x}dt-A_{y^{t}}(p(\mathbf{x},t)). \quad (34)
\end{aligned}$$

Inequality (33) is by $|x+y|\geq|x|-|y|$; equation (34) is by the definition of $A_{y^{t}}(p(\mathbf{x},t))$ in Lemma 1.
A similar result can be obtained for $\epsilon_{CF}$:

$$\epsilon_{CF}(f)\geq\int_{\mathcal{X}\times\{0,1\}}|f(\mathbf{x},t)-\tau^{t}(\mathbf{x})|\,p(\mathbf{x},1-t)\,d\mathbf{x}dt-A_{y^{t}}(p(\mathbf{x},1-t)).$$

$$\begin{aligned}
\epsilon_{PEHE}(f) &= \int_{\mathcal{X}} |(f(\mathbf{x},1)-f(\mathbf{x},0))-(\tau^{1}(\mathbf{x})-\tau^{0}(\mathbf{x}))|\,p(\mathbf{x})\,d\mathbf{x} \\
&\leq \int_{\mathcal{X}} \big(|f(\mathbf{x},1)-\tau^{1}(\mathbf{x})|+|f(\mathbf{x},0)-\tau^{0}(\mathbf{x})|\big)\,p(\mathbf{x})\,d\mathbf{x} \quad (35) \\
&= \int_{\mathcal{X}} |f(\mathbf{x},1)-\tau^{1}(\mathbf{x})|\,p(\mathbf{x},T=1)\,d\mathbf{x}+\int_{\mathcal{X}} |f(\mathbf{x},1)-\tau^{1}(\mathbf{x})|\,p(\mathbf{x},T=0)\,d\mathbf{x} \quad (36) \\
&\quad +\int_{\mathcal{X}} |f(\mathbf{x},0)-\tau^{0}(\mathbf{x})|\,p(\mathbf{x},T=0)\,d\mathbf{x}+\int_{\mathcal{X}} |f(\mathbf{x},0)-\tau^{0}(\mathbf{x})|\,p(\mathbf{x},T=1)\,d\mathbf{x} \quad (37) \\
&= \int_{\mathcal{X}\times\{0,1\}} |f(\mathbf{x},t)-\tau^{t}(\mathbf{x})|\,p(\mathbf{x},t)\,d\mathbf{x}dt+\int_{\mathcal{X}\times\{0,1\}} |f(\mathbf{x},t)-\tau^{t}(\mathbf{x})|\,p(\mathbf{x},1-t)\,d\mathbf{x}dt \\
&\leq \epsilon_{F}(f)+A_{y^{t}}(p(\mathbf{x},t))+\epsilon_{CF}(f)+A_{y^{t}}(p(\mathbf{x},1-t)). \quad (38)
\end{aligned}$$

Inequality (35) is by $|x+y|\leq|x|+|y|$. Equations (36) and (37) are by $p(\mathbf{x})=p(\mathbf{x},T=0)+p(\mathbf{x},T=1)$. By inequality (38) and the definition of $A_{y}$ in Lemma 1, we have

$$\epsilon_{PEHE}(h,\Phi)\leq\epsilon_{F}(h,\Phi)+A_{y^{t}}(p(\mathbf{x},t))+\epsilon_{CF}(h,\Phi)+A_{y^{t}}(p(\mathbf{x},1-t))\leq\epsilon_{CF}(h,\Phi)+\epsilon_{F}(h,\Phi)+2A_{y}.$$

## A.3 Proof Of Theorem 1

Proof of equation (4):

Proof.
$$\begin{aligned}
&\epsilon_{CF}(h,\Phi)-[(1-u)\cdot\epsilon_{F}^{T=1}(h,\Phi)+u\cdot\epsilon_{F}^{T=0}(h,\Phi)] \\
&=[(1-u)\cdot\epsilon_{CF}^{T=1}(h,\Phi)+u\cdot\epsilon_{CF}^{T=0}(h,\Phi)]-[(1-u)\cdot\epsilon_{F}^{T=1}(h,\Phi)+u\cdot\epsilon_{F}^{T=0}(h,\Phi)] \\
&=(1-u)\cdot[\epsilon_{CF}^{T=1}(h,\Phi)-\epsilon_{F}^{T=1}(h,\Phi)]+u\cdot[\epsilon_{CF}^{T=0}(h,\Phi)-\epsilon_{F}^{T=0}(h,\Phi)] \\
&=(1-u)\int_{\mathcal{X}}\ell_{h,\Phi}(\mathbf{x},1)\big(p^{T=0}(\mathbf{x})-p^{T=1}(\mathbf{x})\big)\,d\mathbf{x}+u\int_{\mathcal{X}}\ell_{h,\Phi}(\mathbf{x},0)\big(p^{T=1}(\mathbf{x})-p^{T=0}(\mathbf{x})\big)\,d\mathbf{x} \\
&=(1-u)\int_{\mathcal{R}}\ell_{h,\Phi}(\Psi(r),1)\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr+u\int_{\mathcal{R}}\ell_{h,\Phi}(\Psi(r),0)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr \quad (39) \\
&=B_{\Phi}\cdot(1-u)\int_{\mathcal{R}}\tfrac{1}{B_{\Phi}}\ell_{h,\Phi}(\Psi(r),1)\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr+B_{\Phi}\cdot u\int_{\mathcal{R}}\tfrac{1}{B_{\Phi}}\ell_{h,\Phi}(\Psi(r),0)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr \\
&\leq B_{\Phi}\cdot(1-u)\sup_{g\in\mathcal{G}}\Big|\int_{\mathcal{R}}g(r)\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr\Big|+B_{\Phi}\cdot u\cdot\sup_{g\in\mathcal{G}}\Big|\int_{\mathcal{R}}g(r)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr\Big| \quad (40) \\
&=B_{\Phi}\cdot Wass(p_{\Phi}^{T=1},p_{\Phi}^{T=0}) \quad (41)
\end{aligned}$$

Equation (39) is by the change of variables formula, $p_{\Phi}^{T=0}(r)=p^{T=0}(\Psi(r))J_{\Psi}(r)$, $p_{\Phi}^{T=1}(r)=p^{T=1}(\Psi(r))J_{\Psi}(r)$, where $J_{\Psi}(r)$ is the absolute value of the determinant of the Jacobian of $\Psi(r)$. Equation (41) is by Definition 2.

Proof of equation (5):

Proof.
$$\begin{aligned}
\epsilon_{PEHE}(h,\Phi) &\leq 2\big(\epsilon_{CF}(h,\Phi)+\epsilon_{F}(h,\Phi)-2\sigma_{y}^{2}\big) \quad (42) \\
&\leq 2\big(\epsilon_{F}^{T=1}(h,\Phi)+\epsilon_{F}^{T=0}(h,\Phi)+B_{\Phi}\cdot Wass(p_{\Phi}^{T=1},p_{\Phi}^{T=0})-2\sigma_{y}^{2}\big). \quad (43)
\end{aligned}$$
Inequality (42) is by equation (2) in Lemma 1. Inequality (43) is by equation (4) in Theorem 1.

Proof of equation (6):

Proof.
$$\begin{aligned}
\epsilon_{PEHE}(h,\Phi) &\leq \epsilon_{CF}(h,\Phi)+\epsilon_{F}(h,\Phi)+2A_{y} \quad (44) \\
&\leq \epsilon_{F}^{T=1}(h,\Phi)+\epsilon_{F}^{T=0}(h,\Phi)+B_{\Phi}\cdot Wass(p_{\Phi}^{T=1},p_{\Phi}^{T=0})+2A_{y}. \quad (45)
\end{aligned}$$
Inequality (44) is by equation (3) in Lemma 1. Inequality (45) is by equation (4) in Theorem 1.

## A.4 Proof Of Theorem 2

We first introduce Lemma 2, which is useful for proving Theorem 2.

Lemma 2. Let $\mathcal{G}$ defined in Definition 2 be the family of binary functions.
Then we obtain $\sup_{\eta\in\mathcal{H}}\int_{\mathcal{S}}\eta(s)(p_{1}(s)-p_{2}(s))\,ds=\frac{1}{2}d_{\mathcal{H}}(p_{1},p_{2})$.

Proof. Let $\mathbb{I}(\cdot)$ denote an indicator function.

$$\begin{aligned}
d_{\mathcal{H}}(p_{1},p_{2}) &= 2\sup_{\eta\in\mathcal{H}}\Big|\int_{\eta(s)=1}(p_{1}(s)-p_{2}(s))\,ds\Big| \\
&= 2\sup_{\eta\in\mathcal{H}}\Big|\int_{\mathcal{S}}\mathbb{I}(\eta(s)=1)(p_{1}(s)-p_{2}(s))\,ds\Big| \\
&= 2\sup_{\eta\in\mathcal{H}}\Big|\int_{\mathcal{S}}\eta(s)(p_{1}(s)-p_{2}(s))\,ds\Big| \quad (46)
\end{aligned}$$

The last equation is because an indicator function is also a binary function.

Proof of equation (8):

Proof.
$$\begin{aligned}
&\epsilon_{CF}(h,\Phi)-[(1-u)\cdot\epsilon_{F}^{T=1}(h,\Phi)+u\cdot\epsilon_{F}^{T=0}(h,\Phi)] \\
&=(1-u)\int_{\mathcal{R}}\ell_{h,\Phi}(\Psi(r),1)\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr+u\int_{\mathcal{R}}\ell_{h,\Phi}(\Psi(r),0)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr \quad (47) \\
&\leq(1-u)\int_{p_{\Phi}^{T=0}>p_{\Phi}^{T=1}}\ell_{h,\Phi}(\Psi(r),1)\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr+u\int_{p_{\Phi}^{T=1}>p_{\Phi}^{T=0}}\ell_{h,\Phi}(\Psi(r),0)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr \quad (48) \\
&\leq(1-u)K\int_{p_{\Phi}^{T=0}>p_{\Phi}^{T=1}}\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr+u\cdot K\int_{p_{\Phi}^{T=1}>p_{\Phi}^{T=0}}\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr \quad (49) \\
&=(1-u)K\int_{\mathcal{R}}\mathbb{I}(p_{\Phi}^{T=0}>p_{\Phi}^{T=1})\big(p_{\Phi}^{T=0}(r)-p_{\Phi}^{T=1}(r)\big)\,dr+u\cdot K\int_{\mathcal{R}}\mathbb{I}(p_{\Phi}^{T=1}>p_{\Phi}^{T=0})\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr \\
&\leq(1-u)K\sup_{\eta\in\mathcal{H}}\Big|\int_{\mathcal{R}}\eta(r)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr\Big|+u\cdot K\cdot\sup_{\eta\in\mathcal{H}}\Big|\int_{\mathcal{R}}\eta(r)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr\Big| \quad (50) \\
&=K\cdot\sup_{\eta\in\mathcal{H}}\Big|\int_{\mathcal{R}}\eta(r)\big(p_{\Phi}^{T=1}(r)-p_{\Phi}^{T=0}(r)\big)\,dr\Big|=\frac{K}{2}d_{\mathcal{H}}(p_{\Phi}^{T=1},p_{\Phi}^{T=0}) \quad (51)
\end{aligned}$$

Equation (47) is derived in the same way as equation (39). Inequality (48) is by $\ell_{h,\Phi}\geq 0$ for all $r$ and $t$. Inequality (49) is by the definition of $K$ in Theorem 2. Inequality (50) is because an indicator function is also a binary function. Equation (51) is by Lemma 2.

Proof of equation (9):

Proof.
$$\begin{aligned}
\epsilon_{PEHE}(h,\Phi) &\leq 2\big(\epsilon_{CF}(h,\Phi)+\epsilon_{F}(h,\Phi)-2\sigma_{y}^{2}\big) \quad (52) \\
&\leq 2\Big(\epsilon_{F}^{T=1}(h,\Phi)+\epsilon_{F}^{T=0}(h,\Phi)+\frac{K}{2}d_{\mathcal{H}}(p_{\Phi}^{T=1},p_{\Phi}^{T=0})-2\sigma_{y}^{2}\Big). \quad (53)
\end{aligned}$$
Inequality (52) is by equation (2) in Lemma 1.
Inequality (53) is by equation (8) in Theorem 2.

Proof of equation (10):

Proof.
$$\begin{aligned}
\epsilon_{PEHE}(h,\Phi) &\leq \epsilon_{CF}(h,\Phi)+\epsilon_{F}(h,\Phi)+2A_{y} \quad (54) \\
&\leq \epsilon_{F}^{T=1}(h,\Phi)+\epsilon_{F}^{T=0}(h,\Phi)+\frac{K}{2}d_{\mathcal{H}}(p_{\Phi}^{T=1},p_{\Phi}^{T=0})+2A_{y} \quad (55)
\end{aligned}$$
Inequality (54) is by equation (3) in Lemma 1. Inequality (55) is by equation (8) in Theorem 2.

## A.5 Additional Experimental Details

Additional results on the Twins benchmark. To investigate the applicability of our model DIGNet to benchmark datasets beyond the commonly used IHDP benchmark, we conducted additional comparisons with several baseline models, including linear, tree, matching, and representation learning methods, on the Twins benchmark, as presented in Table 7. The Twins dataset comprises records of twin births in the USA between 1989 and 1991. After preprocessing, each unit contains 30 covariates relevant to parents, pregnancy, and birth. The treatment $T=1$ indicates the heavier twin, while $T=0$ indicates the lighter twin. The binary outcome variable $Y$ represents 1-year mortality. For more comprehensive details on this dataset and the limitations of IHDP, refer to Curth et al. (2021). Notably, for $\epsilon_{ATE}$, the simple linear or matching estimators perform best across different methods. On the other hand, when assessing ITE performance using the AUC of potential outcomes, representation learning models all demonstrate strong performance, with AUC values exceeding 0.800 on both the training and test sets. This observation might stem from the fact that representation balancing models are based on ITE error bounds, rather than ATE error bounds, thereby optimizing for AUC instead of $\epsilon_{ATE}$. Moreover, among all the models, our DIGNet achieves the second-best AUC results. The best results are achieved by MBRL, which incorporates orthogonality information (similar to doubly robust estimators) into representation balancing.
This, in turn, inspires us to explore ATE error bounds, or to consider incorporating doubly robust methods in future research.

Table 7: Training- and test-set AUC and $\epsilon_{ATE}$ on Twins. Mean ± standard error of 100 runs.

| | Training AUC | Training $\epsilon_{ATE}$ | Test AUC | Test $\epsilon_{ATE}$ |
|---|---|---|---|---|
| OLS/LR1 Johansson et al. (2016) | .660 ± .005 | .004 ± .003 | .500 ± .028 | .007 ± .006 |
| OLS/LR2 Johansson et al. (2016) | .660 ± .004 | .004 ± .003 | .500 ± .016 | .007 ± .006 |
| k-NN Crump et al. (2008) | .609 ± .010 | .003 ± .002 | .492 ± .012 | .005 ± .004 |
| BART Chipman et al. (2010) | .506 ± .014 | .121 ± .024 | .500 ± .011 | .127 ± .024 |
| CEVAE Louizos et al. (2017) | .845 ± .003 | .022 ± .002 | .841 ± .004 | .032 ± .003 |
| SITE Yao et al. (2018) | .862 ± .002 | .016 ± .001 | .853 ± .006 | .020 ± .002 |
| BLR Johansson et al. (2016) | .611 ± .009 | .006 ± .004 | .510 ± .018 | .033 ± .009 |
| BNN Johansson et al. (2016) | .690 ± .008 | .006 ± .003 | .676 ± .008 | .020 ± .007 |
| TARNet Shalit et al. (2017) | .849 ± .002 | .011 ± .002 | .840 ± .006 | .015 ± .002 |
| CFR-Wass (GNet) Shalit et al. (2017) | .850 ± .002 | .011 ± .002 | .842 ± .005 | .028 ± .003 |
| MBRL (Huang et al., 2022a) | .879 ± .000 | .003 ± .000 | .874 ± .001 | .007 ± .001 |
| DIGNet (Ours) | .874 ± .001 | .004 ± .001 | .871 ± .001 | .008 ± .001 |

Implementation details. In the simulation studies, we ensure a fair comparison by fixing all the hyperparameters across different models on all datasets. The relevant details are stated in Table 8. In the IHDP studies, to compare with the baseline model CFR-Wass (GNet), we keep the hyperparameters of INet, DGNet, and DINet and the early stopping rule the same as those used in CFR-Wass (Shalit et al., 2017). Since DIGNet is more complex than the other four models, we adjust the hyperparameters of $\Phi_E$, $\Phi_G$, $\Phi_I$, $\alpha_1$, and $\alpha_2$ for DIGNet as Shalit et al. (2017) do. The relevant details are stated in Table 9.

Table 8: Hyperparameters of different models in simulation studies.

| | $\Phi_E$ | $\Phi_G$ | $\Phi_I$ | $\pi$ | $h^1$ | $h^0$ | $\alpha_1$ | $\alpha_2$ | batch size | iterations | learning rate | learning rate for $\pi$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GNet | (100, 100, 100, 100) | − | − | − | (100, 100) | (100, 100) | 0.1 | − | 100 | 300 | 1e−3 | − |
| INet | (100, 100, 100, 100) | − | − | (100, 100, 100) | (100, 100) | (100, 100) | − | 0.1 | 100 | 300 | 1e−3 | 1e−4 |
| DGNet | (100, 100, 100, 100) | (100, 100) | − | − | (100, 100) | (100, 100) | 0.1 | − | 100 | 300 | 1e−3 | − |
| DINet | (100, 100, 100, 100) | − | (100, 100) | (100, 100, 100) | (100, 100) | (100, 100) | − | 0.1 | 100 | 300 | 1e−3 | 1e−4 |
| DIGNet | (100, 100, 100, 100) | (100, 100) | (100, 100) | (100, 100, 100) | (100, 100) | (100, 100) | 0.1 | 0.1 | 100 | 300 | 1e−3 | 1e−4 |
Table 9: Hyperparameters of different models in IHDP experiments.

| | $\Phi_E$ | $\Phi_G$ | $\Phi_I$ | $\pi$ | $h^1$ | $h^0$ | $\alpha_1$ | $\alpha_2$ | batch size | iterations | learning rate | learning rate for $\pi$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GNet | (100, 100, 100, 100) | − | − | − | (100, 100, 100) | (100, 100, 100) | 1 | − | 100 | 600 | 1e−3 | − |
| INet | (100, 100, 100, 100) | − | − | (200, 200, 200) | (100, 100, 100) | (100, 100, 100) | − | 1 | 100 | 600 | 1e−3 | 1e−3 |
| DGNet | (100, 100, 100, 100) | (100, 100) | − | − | (100, 100, 100) | (100, 100, 100) | 1 | − | 100 | 600 | 1e−3 | − |
| DINet | (100, 100, 100, 100) | − | (100, 100) | (200, 200, 200) | (100, 100, 100) | (100, 100, 100) | − | 1 | 100 | 600 | 1e−3 | 1e−3 |
| DIGNet | (100, 100, 100, 100, 100, 100) | (100, 100, 100) | (100, 100, 100) | (200, 200, 200) | (100, 100, 100) | (100, 100, 100) | 1 | 1 | 100 | 600 | 1e−3 | 1e−3 |

| Model | Time for 600 epochs | Avg early stopping | Actual time | $\sqrt{\epsilon_{PEHE}}$ on test set |
|---|---|---|---|---|
| GNet | 3096s | 240.61 | 1241s | 0.77±0.18 |
| INet | 4042s | 254.19 | 1712s | 0.72±0.11 |
| DGNet | 3775s | 169.17 | 1064s | 0.60±0.09 |
| DINet | 3212s | 157.98 | 846s | 0.60±0.11 |
| DIGNet | 4984s | 226.76 | 1884s | 0.45±0.04 |

Analysis of training time and training stability. We record the time it took for different models to run through 100 IHDP datasets in Table 10, and each model is trained within 600 epochs. Following Shalit et al. (2017), all models adopt the early stopping rule. We also record the average early stopping epoch on 100 runs and the actual time on 100 runs, where (actual time) = (total time) × (average early stopping epoch)/600.
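The actual-time relation can be checked directly against the reported numbers (a small sketch; the figures are copied from Table 10, and since the epoch averages are rounded to two decimals, the reconstruction matches the reported actual times only up to about one second):

```python
# Reconstruct the "actual time" column of Table 10 via
# (actual time) = (total time) * (average early-stopping epoch) / 600.
records = {
    # model: (total time for 600 epochs [s], avg early-stopping epoch, reported actual time [s])
    "GNet":   (3096, 240.61, 1241),
    "INet":   (4042, 254.19, 1712),
    "DGNet":  (3775, 169.17, 1064),
    "DINet":  (3212, 157.98, 846),
    "DIGNet": (4984, 226.76, 1884),
}

for model, (total, epoch, reported) in records.items():
    actual = total * epoch / 600
    print(f"{model:7s} reconstructed {actual:7.1f}s vs reported {reported}s")
```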
Not surprisingly, GNet took the least amount of time, 3096 seconds, since its objective is the simplest. However, it is very interesting that the proposed methods, DGNet and DINet, are the first two to stop early. As a result, although DGNet and DINet have multiple objectives, they spent less actual training time yet achieved better ITE estimation compared to GNet and INet. Since GNet and INet are effectively DGNet and DINet with PPBR ablated, we find that the PPBR component can help a model achieve better ITE estimates in less time. In addition, we find that DIGNet took the longest time to optimize since it has the most complex objective. To further study the stability of model training, we also plot the metrics $\sqrt{\epsilon_F}$, $Wass$, $\hat{d}_{\mathcal{H}}$, and $\sqrt{\epsilon_{PEHE}}$ for the first 100 epochs of each model on the first IHDP dataset in Figure 8. We find that the training process of DIGNet is stable, even steadier than that of GNet and INet. From this perspective, we have not observed any difficulty in optimizing DIGNet.

Table 10: Training time records on 100 IHDP datasets.
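For readers who want to reproduce curves like those in Figure 8, the two balancing metrics can be estimated from samples of a one-dimensional representation as sketched below. This is an illustrative toy sketch, not the paper's implementation: $Wass$ is approximated by the empirical 1-Wasserstein distance between sorted samples, and $\hat{d}_{\mathcal{H}}$ by the standard proxy $2(1-2\cdot\text{err})$, where err is the training error of a small logistic "domain" classifier trying to distinguish treated from control representations.

```python
import math
import random

rng = random.Random(0)

def wass1(a, b):
    """Empirical 1-Wasserstein distance between equal-size 1-D samples."""
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

def h_div_proxy(a, b, steps=300, lr=0.5):
    """Proxy for d_H: 2*(1 - 2*err) of a 1-D logistic domain classifier."""
    data = [(x, 1.0) for x in a] + [(x, 0.0) for x in b]
    n = len(data)
    w = c = 0.0
    for _ in range(steps):  # plain full-batch gradient descent on the logistic loss
        gw = gc = 0.0
        for x, t in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + c)))
            gw += (p - t) * x / n
            gc += (p - t) / n
        w -= lr * gw
        c -= lr * gc
    err = sum((w * x + c > 0.0) != (t == 1.0) for x, t in data) / n
    err = min(err, 1.0 - err)  # a classifier below chance can be flipped
    return 2.0 * (1.0 - 2.0 * err)

treated = [rng.gauss(2.0, 1.0) for _ in range(400)]   # imbalanced representations
control = [rng.gauss(-2.0, 1.0) for _ in range(400)]
bal_t = [rng.gauss(0.0, 1.0) for _ in range(400)]     # well-balanced representations
bal_c = [rng.gauss(0.0, 1.0) for _ in range(400)]

print("Wass (imbalanced vs balanced):", wass1(treated, control), wass1(bal_t, bal_c))
print("d_H  (imbalanced vs balanced):", h_div_proxy(treated, control), h_div_proxy(bal_t, bal_c))
```

Both metrics are large when the treated and control representation distributions are well separated and shrink toward zero as the representations become balanced, which is the qualitative behavior the training plots track.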
Table 11: The results on 100 IHDP datasets with different combinations of $(\alpha_1, \alpha_2)$ in the DIGNet objective.

| $(\alpha_1, \alpha_2)$ | Training $\sqrt{\epsilon_{PEHE}}$ | Training $\epsilon_{ATE}$ | Test $\sqrt{\epsilon_{PEHE}}$ | Test $\epsilon_{ATE}$ |
|---|---|---|---|---|
| (0.1, 0.1) | 0.407 ± 0.018 | 0.125 ± 0.015 | 0.434 ± 0.022 | 0.138 ± 0.016 |
| (0.1, 0.5) | 0.414 ± 0.026 | 0.120 ± 0.015 | 0.434 ± 0.028 | 0.123 ± 0.015 |
| (0.1, 1) | 0.416 ± 0.019 | 0.116 ± 0.014 | 0.452 ± 0.026 | 0.121 ± 0.015 |
| (0.5, 0.1) | 0.417 ± 0.023 | 0.130 ± 0.016 | 0.440 ± 0.026 | 0.137 ± 0.017 |
| (0.5, 0.5) | 0.407 ± 0.021 | 0.125 ± 0.015 | 0.416 ± 0.022 | 0.124 ± 0.015 |
| (0.5, 1) | 0.413 ± 0.020 | 0.126 ± 0.014 | 0.455 ± 0.028 | 0.133 ± 0.016 |
| (1, 0.1) | 0.411 ± 0.021 | 0.119 ± 0.015 | 0.439 ± 0.027 | 0.118 ± 0.015 |
| (1, 0.5) | 0.403 ± 0.020 | 0.118 ± 0.015 | 0.430 ± 0.026 | 0.128 ± 0.016 |
| (1, 1) | 0.402 ± 0.019 | 0.112 ± 0.014 | 0.437 ± 0.027 | 0.121 ± 0.015 |

We also provide the ITE and ATE estimation results on 100 IHDP datasets when the combination of $(\alpha_1, \alpha_2)$ in the DIGNet objective varies over {0.1, 0.5, 1}. The relevant results are reported in Table 11, indicating that our DIGNet model is robust to variations in these hyperparameters.

![33_image_0.png](33_image_0.png)

Figure 8: Training loss plots for the first 100 epochs on the first IHDP dataset.

## A.6 Objectives Of Different Models

Objective of GNet.
$$\operatorname*{min}_{\Phi_{E},h^{t}}\quad{\mathcal{L}}_{y}(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_{E},h^{t})+\alpha_{1}{\mathcal{L}}_{G}(\mathbf{x},\mathbf{t};\Phi_{E}).$$

Objective of INet.
$$\begin{array}{r l}{{\operatorname*{max}_{\pi}}}&{{\alpha_{2}{\mathcal L}_{I}(\mathbf{x},\mathbf{t};\Phi_{E},\pi),}}\\ {{\operatorname*{min}_{\Phi_{E},h^{t}}}}&{{{\mathcal L}_{y}(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_{E},h^{t})+\alpha_{2}{\mathcal L}_{I}(\mathbf{x},\mathbf{t};\Phi_{E},\pi).}}\end{array}$$

Objective of DINet.
Note that, similar to DIGNet, the pre-balancing patterns are preserved by only updating $\Phi_I$ but fixing $\Phi_E$ in the second step.

$$\begin{array}{r l}{{\operatorname*{max}_{\pi}}}&{{\alpha_{2}{\mathcal{L}}_{I}(\mathbf{x},\mathbf{t};\Phi_{I}\circ\Phi_{E},\pi),}}\\ {{}}&{{\operatorname*{min}_{\Phi_{I}}}}&{{\alpha_{2}{\mathcal{L}}_{I}(\mathbf{x},\mathbf{t};\Phi_{I}\circ\Phi_{E},\pi),}}\\ {{}}&{{\operatorname*{min}_{\Phi_{E},\Phi_{I},h^{t}}}}&{{{\mathcal{L}}_{y}(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_{E}\oplus(\Phi_{I}\circ\Phi_{E}),h^{t}).}}\end{array}$$

Objective of DGNet. Note that, similar to DIGNet, the pre-balancing patterns are preserved by only updating $\Phi_G$ but fixing $\Phi_E$ in the first step.

$$\begin{array}{r l}{{}}&{{}\operatorname*{min}_{\Phi_{G}}\quad\alpha_{1}{\mathcal{L}}_{G}(\mathbf{x},\mathbf{t};\Phi_{G}\circ\Phi_{E}),}\\ {{}}&{{}\operatorname*{min}_{\Phi_{E},\Phi_{G},h^{t}}\quad{\mathcal{L}}_{y}(\mathbf{x},\mathbf{t},\mathbf{y};\Phi_{E}\oplus(\Phi_{G}\circ\Phi_{E}),h^{t}).}\end{array}$$

Objective of DIGNet.
$$\begin{array}{r l}{{}}&{{\operatorname*{min}_{\Phi_{G}}\quad\alpha_{1}{\mathcal L}_{G}({\mathbf x},{\mathbf t};\Phi_{G}\circ\Phi_{E}),}}\\ {{}}&{{\operatorname*{max}_{\pi}\quad\alpha_{2}{\mathcal L}_{I}({\mathbf x},{\mathbf t};\Phi_{I}\circ\Phi_{E},\pi),}}\\ {{}}&{{\operatorname*{min}_{\Phi_{I}}\quad\alpha_{2}{\mathcal L}_{I}({\mathbf x},{\mathbf t};\Phi_{I}\circ\Phi_{E},\pi),}}\\ {{}}&{{\operatorname*{min}_{\Phi_{E},\Phi_{I},\Phi_{G},h^{t}}\quad{\mathcal L}_{y}({\mathbf x},{\mathbf t},{\mathbf y};\Phi_{E}\oplus(\Phi_{I}\circ\Phi_{E})\oplus(\Phi_{G}\circ\Phi_{E}),h^{t}).}}\end{array}$$
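The alternating structure of the DIGNet objective can be summarized in a small scheduling sketch (illustrative only; the module names mirror the objective above, while the losses and update calls are placeholders rather than the actual implementation):

```python
# One DIGNet training iteration as four alternating sub-steps.
# Each entry records which loss is optimized, which modules receive
# gradient updates, and which are held fixed so that the pre-balancing
# patterns captured by Phi_E are preserved during the balancing steps.
STEPS = [
    # (objective,    modules updated,                   modules frozen)
    ("min a1*L_G",   ["Phi_G"],                         ["Phi_E"]),
    ("max a2*L_I",   ["pi"],                            ["Phi_E", "Phi_I"]),
    ("min a2*L_I",   ["Phi_I"],                         ["Phi_E", "pi"]),
    ("min L_y",      ["Phi_E", "Phi_I", "Phi_G", "h"],  []),
]

def train_iteration():
    """Yield the sub-steps of one iteration; a real implementation would
    zero gradients, evaluate the listed loss, backpropagate, and step an
    optimizer over the `updated` modules only."""
    for objective, updated, frozen in STEPS:
        yield objective, updated, frozen

for objective, updated, frozen in train_iteration():
    print(f"{objective:11s} update={updated} freeze={frozen}")
```

The sketch makes explicit that $\Phi_E$ only receives gradients in the final outcome-prediction step, which is how the pre-balancing (group) information survives the adversarial and group-distance balancing steps.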
Review 1:

Summary: The paper proposes a new deep learning method and some theoretical bounds for the problem of individual treatment effect estimation.

Strengths and Weaknesses:

Strengths:
1. The idea of the proposed method is easy to follow
2. The related works have been reviewed thoroughly
3. The simulation results are intensive and evident

Weaknesses:
1. The theory is incremental given the existing works in the literature.
2. I don't think the theory is sufficient to explain the improvements made by the method.

Requested Changes: I think the writing of the paper can be improved. Many sentences are hard to read and follow.

Broader Impact Concerns: N.A.

==================================================

Review 2:

Summary: This paper presents an approach for estimating individual treatment effects - given an input X, the difference between the treatment and control potential (i.e. counterfactual) outcomes. The approach relies on two techniques for removing information from embeddings which have been explored in related literature: minimizing distributional distance between embeddings of treated and control units, and minimizing the ability of an adversary to predict from the embedding whether a unit is treated or control. They prove theoretical results similar to those in Shalit et al (2017) connecting these two properties to treatment effect estimation error (e.g. PEHE). Their proposed method includes two types of embeddings corresponding to these two properties, and concatenates them for the full model (DIGNet). They show that DIGNet outperforms baseline approaches on standard treatment effect estimation semi-synthetic setups like IHDP and Twins.

Strengths and Weaknesses:

Strengths:
- empirical results are strong - demonstrates wins over baselines
- some useful theoretical results extending work of Shalit et al (2017) and Ben-David (2006)

Weaknesses & Suggestions:
- I find the proposed method somewhat confusing and the motivation a little unclear.
It seems to me that the adversarial approach (minimizing treatment predictability) and the distributional discrepancy approach are somewhat redundant - it's been shown that one often implies the other, and so I don't see the benefit of concatenating both. Additionally, if in fact they do have different properties, it seems like concatenating them would be counterproductive; e.g. an adversary might perform poorly on the adversarially-optimized embedding, but well when we concatenate a non-adversarially-optimized embedding onto it. Because of this confusion I'm not able to gain much insight from the method or experimental results, which I feel like is particularly important in causal inference work; for instance, the theoretical results are nice but it's not so clear how the proposed setup helps optimize them
- I think there should be more clarity off the top (intro and exposition) around the role of covariate shift here. For instance, the term "covariate shift" is elided with "selection bias" at the bottom of page 1 and seemingly general distribution shift as well, when in fact there are many types of distribution shift that can present issues. A discussion of this would be helpful, at least for scoping purposes
- I think the phrase "over-balancing" is not necessarily helpful - the issue is not always "too much" balancing, but rather the wrong kind of balancing when we lose accuracy
- the motivating example on p2 is unclear: it seems to me like in this case, if T=age, then mapping these representations onto each other perfectly would be helpful, since then we can build an estimator for T=old and T=young separately. I'm not sure I understand what the issue is in this case.
- the lack of insight around why this method might work makes me wonder if there is another reason, such as increased representational capacity achieved by concatenating representations.
It would be good to either address some of these alternative reasons or clarify the motivations more

Smaller points:
- in 2.1, not sure why \phi needs to be invertible - I know that's true in some works but it doesn't in general need to be the case for learning balanced representations right?
- in 3, could use just a short explanation of what A and \sigma^2 are intended to be - I understand but would help with readability
- on bottom on p8, the authors claim that INet is novel - however there is quite a bit of work in the fair representation learning literature proposing very similar ideas, so I'm not sure I would agree with the claim. See "Censoring Representations with an Adversary" (Edwards & Storkey) and "Learning Adversarially Fair and Transferable Representations" (Madras et al) for two examples (should probably touch on this line of work in the related work section as well)
- could use some more information in 5.1 on how IHDP is created and which treated samples are removed to create selection bias
- I think the visualization in Fig 5 could be a bit stronger - no real reason AFAICT to lay out the x-axis over seeds like this. Also, would DGNet and DINet be useful to show in 5c and 5d respectively?

Requested Changes:
- my main point is around the method - it's not clear to me how the distributional distance minimizing method and the adversarial optimization method are either different, or complementary.
This is key and I wouldn't accept this paper without understanding this
- I would want to see more clarity in the intro around distribution shifts and the role of covariate shift in the scope
- the motivating example on p2 needs clarification I think - this may help quite a bit with overall motivation
- I'd like some extra thought put into empirical clarifications of a) why this approach works or b) why alternative explanations for the method working are not correct

Broader Impact Concerns: no concerns
==================================================
Review 3:
Summary: This paper studies learning representations for treatment effect estimation under covariate shift. It proposes a representation balancing model, called DIGNet, for treatment effect estimation under covariate shift.

Strengths and Weaknesses:
**Strengths**
- This paper develops upper bounds for the counterfactual errors and ITE errors based on H-divergence.
- The proposed framework DIGNet learns representations by adversarial learning using a set of objectives to balance factual prediction and causal effect estimation.

**Weaknesses**
- The motivation is not clear. It seems this paper targets treatment effect estimation using representation balancing. The underlying mechanism reveals that the factual errors and the distribution discrepancy bound the counterfactual error. But the contradiction is unclear. I wonder whether one can directly optimize Eq (4).
- The framework is based on potential outcomes and adopts the most common assumptions in potential outcomes. However, the framework implicitly assumes X->T->Y and X->Y.
- The proposed framework targets mitigating the trade-off problem. However, the objective functions in Eq (20-23) are still difficult to balance. The adversarial games and the coefficients alpha are both hard to handle in optimization.
- The method is only evaluated on synthetic data and semi-synthetic data. It is not evaluated on real-world datasets.
The real-world datasets may not have ground truth, but there might be some challenges, such as causal graphs and hyper-parameter tuning. I wonder whether the proposed method is sensitive or robust to misspecification.

Requested Changes:
- This paper is about approximation rather than estimation. Given sufficient data samples, it is possible to estimate causal effects under a covariate shift. I assume this paper works on a numerical approximation from samples.
- The motivating example is confusing as the age is identical to the treatment. It would be better to use a probabilistic relationship between
- In Sec 2 preliminaries, there are the N iid random variables --> N randomly selected samples. The factual outcome and counterfactual outcome should be distinct.
- In the discussion of Theorem 2, "This new theoretical result provides a theoretical foundation for representation balancing models based on individual propensity confusion." Please explain either in this paragraph or in Sec 4.1.2.

Broader Impact Concerns: No concerns
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment: In their final assessments, two reviewers voted "leaning reject" and one voted "leaning accept". The core issues for the two who voted leaning reject were in the motivation of the approach and the associated clarity of how this is presented in the paper. Also factoring into my decision is that this is a "major revision" of a previous TMLR submission (https://openreview.net/forum?id=uyp8eFbzzT) that was only relatively narrowly rejected with clear requested changes. Unfortunately, it was only possible to re-recruit one of the same reviewers from the previous submission (they changed their assessment from leaning reject to leaning accept).
I generally feel that most of the issues raised in the previous submission have been corrected, with the exception of the overall clarity of the work (especially in motivating its core approach), which I still believe is problematic. I do not feel that many substantial new issues have been raised in this revision round, but the two new reviewers took a more critical stance on the clarity issues with the paper and the lack of sufficient motivation for the core approach.

Overall, I think this is still quite a borderline decision. However, ultimately I do believe that the vast majority of the core issues in the paper have been addressed over the two submission cycles and that the paper is now suitable for publication at TMLR. In particular, while I think that many justified criticisms have been raised by reviewers that would likely prohibit publication at top ML conference venues, I do think it satisfies the criteria set out for acceptance by TMLR. My recommendation is therefore that the paper should now be accepted.

That said, I still think the paper has noticeable room for improvement in the clarity of its key ideas. I would therefore like to request minor changes based on further refinement of the introduction and general exposition of the core contributions. I would like to see the current introduction made significantly more concise (potentially moving some content elsewhere if needed) so that it provides a more direct and focussed motivation and summary of the work, and more clearly explains the rationale behind the precise representation concatenation being used (i.e. why the approach being used achieves the two principles introduced, rather than just motivating them in isolation).
==================================================
# Switching Latent Bandits

Alessio Russo alessio.russo@polimi.it
Department of Electronics, Information and Bioengineering, Politecnico di Milano

Alberto Maria Metelli albertomaria.metelli@polimi.it
Department of Electronics, Information and Bioengineering, Politecnico di Milano

Marcello Restelli marcello.restelli@polimi.it
Department of Electronics, Information and Bioengineering, Politecnico di Milano

Reviewed on OpenReview: *https://openreview.net/forum?id=4ZGqCXcUqR*

## Abstract

We consider a Latent Bandit problem where the latent state keeps changing in time according to an underlying Markov chain, and every state is represented by a specific Bandit instance. At each step, the agent chooses an arm and observes a random reward but is unaware of which MAB they are currently pulling. As is typical in Latent Bandits, we assume to know the reward distributions of the arms of all the Bandit instances. Within this setting, our goal is to learn the transition matrix determined by the Markov process. We propose a technique to tackle this estimation problem that results in solving a least-squares problem obtained by exploiting the knowledge of the reward distributions and the properties of Markov chains. We prove the consistency of the estimation procedure, and we make a theoretical comparison with standard Spectral Decomposition techniques. We then discuss the dependency of the problem on the number of arms and present an offline method that chooses the best subset of possible arms that can be used for the estimation of the transition model. We ultimately introduce the SL-EC algorithm, based on an Explore then Commit strategy, that uses the proposed approach to estimate the transition model during the exploration phase.
This algorithm achieves a regret of order $O(T^{2/3})$ when compared against an oracle that builds a belief representation of the current state using the knowledge of both the observation and transition models and optimizes the expected instantaneous reward at each step. Finally, we illustrate the effectiveness of the approach and compare it with state-of-the-art algorithms for non-stationary bandits and with a modified technique based on spectral decomposition.

## 1 Introduction

The Multi-Armed Bandit (MAB) (Lattimore & Szepesvári, 2020) framework is a well-known model for sequential decision-making with little or no information. This framework has been successfully applied in a large number of fields, such as recommender systems, advertising, and networking. In the general MAB formulation, a learner sequentially selects an action among a finite set. The choice of the arm to select is made by properly balancing the exploration-exploitation trade-off, with the goal of maximizing the expected total reward over a horizon $T$ and guaranteeing the *no-regret* property, meaning that the loss incurred by not knowing the best arm grows sublinearly over time. Standard MAB literature requires the payoff of the available actions to be stationary (i.e., rewards come from a fixed distribution) in order to design efficient no-regret algorithms. However, in many real-life applications, the stationarity assumption may not hold, as data may be subject to changes over time. In some applications, it is also possible to identify different data distributions, each one corresponding to a specific working regime that can be modeled as a MAB. In cases of large availability of historical data in the form of past user interactions, it is possible to learn *offline* the observation models associated with the different arms for each working regime.
Exploiting the knowledge of observation models leads to many advantages over the *fully online exploration* setting, where no prior information is available at the beginning and a massive number of interactions is required to learn the observation models associated with each working regime. It is often the case that the underlying working regime (state) cannot be directly observed and the non-stationarity of the process is caused by the continuous change of the underlying regimes over time. By knowing how these regimes are characterized, it is possible to learn the dynamics of the changes by repeatedly interacting with the evolving environment. Inferring the underlying state accelerates the adaptation of the agent to the environment, thus leading to improved performance over time. Learning observation models independently of and prior to transition models can be a viable option when computational resources are limited. Indeed, we will show in the following that spectral decomposition (SD) techniques (Anandkumar et al., 2014), which are used to jointly learn the observation and the transition model, typically require a large number of samples and involve computationally intensive operations. Other scenarios where we can assume that the observation models are already known are those where the models are learned offline from samples generated by simulators. Once these models are deployed in an environment that is characterized by changes, the dynamics can be learned by interacting with the environment. We can consider, for example, the problem of resource allocation, such as electricity allocation in a specific residential area. This problem can be modeled as a Bandit where each arm represents a specific resource allocation, while the rewards represent the extent to which the allocation has been optimal.
Obviously, the optimality of the allocation depends on the state of the system, which may be conditioned by several factors such as environmental conditions, community trends, and seasonality. Another possible scenario that suits our setting is that of *Transfer Learning*, where partial knowledge of the system (in our case, the observation model) can be used in a context with different dynamics (in this case, the transition model, representing the dynamics of the system, needs to be learned). Building on the previously mentioned scenario, we can consider using the same observation models in a new residential area with a structure analogous to the first one (thus justifying the use of the same observation model) but located in a different place, with potentially different weather conditions and inhabitants having different behaviors (modeled using a different transition model). Assuming the existence of a finite set of discrete latent states is a relevant choice when modeling complex real-life problems characterized by different and recurrent working regimes. Regimes of this type can typically be observed in domains such as the financial market and online advertising, generally marked by high volatility and specific seasonality patterns (M. et al., 2022; Heston & Sadka, 2008; Guo et al., 2021). Considering the financial market example, these regimes might include bull markets (characterized by rising prices), bear markets (characterized by falling prices), and periods of high or low volatility. The actual regime at any given time is not directly observable (hidden state), but we can infer it from observable data. In this example, the different actions available are the decisions whether to sell or buy different amounts of stocks, and the observations can be the different returns, trading volumes, or stock prices.
Concerning the online advertising example, the hidden states can be represented by the interests of users, which are not directly observable. The interests may vary according to seasonal patterns, new trends, or exogenous variables that modify the market, and we can model this evolution as a Markov chain. The actions are the different types of content that can be displayed to the users, while the observations may be represented by factors such as conversions or interactions with the ads (metrics such as the click-through rate could be considered). Past works focused on this state identification problem under the assumption of knowing the conditional observation models (Maillard & Mannor, 2014; Zhou & Brunskill, 2016) and defined theoretically optimal UCB algorithms. Follow-up work by Hong et al. (2020a) provided more practical Thompson Sampling algorithms, also considering the problem of model misspecification, and provided an analysis of the Bayes regret. The works cited above assume that the latent state does not change during the interaction process: once the real state is identified, the agent can act optimally. Differently, we embrace a more realistic scenario and assume that the latent state can change through time. In accordance with the latent bandits setting, we assume that the learning agent is aware of the observation models of the arms conditioned on each latent state. A setting similar to ours has also been considered in Hong et al. (2020b); the key difference is that they assume to have either full or partial knowledge of both the observation model and the transition model. We instead focus on the problem of learning the transition model given the knowledge of the observation models and maximizing the cumulative reward over $T$ interaction steps. More specifically, our problem is modeled by assuming the existence of a finite set $\mathbb{S}$ of different MABs, all sharing the same finite set of arms $\mathbb{I}$, each generating rewards (observations) in a finite set $\mathbb{V}$.
Each state $s \in \mathbb{S}$ represents a different instance of a MAB. At each time step $t$, there is a transition from the latent state $s_{t-1}$ to the new latent state $s_t$ according to the transition matrix governing the process. The action $a_t$ selected at time $t$ will thus generate a reward conditioned on the latent state $s_t$.

Contributions and Outline We introduce the Related Works in Section 2 and the Preliminaries in Section 3. After that, in Section 4, we define the formulation of the problem, which considers known Bandit instances that switch through time according to an underlying Markov process. The information about the reward distributions of the bandit instances is encoded into a suitable observation matrix, while the transition model of the chain needs to be estimated. The learning objective of the agent is to maximize at each instant the expected instantaneous reward given the estimated belief over the current Bandit. After this part, we introduce the main assumptions that hold in our setting, motivate the reasons behind them, and show how they can be relaxed for the estimation of the transition model (Section 4.1). Section 5.1 presents the estimation procedure of the transition model, which uses samples collected with a round-robin procedure for selecting arms. Then, we propose an offline arm selection strategy that chooses a subset of the available arms for the estimation approach, with the objective of promoting diversity between the observation distributions induced by the selected arms. In Section 5.2, we detail the SL-EC algorithm that employs an Explore then Commit approach and uses the proposed estimation procedure for learning the transition model during the exploration phase. Finally, Section 7 shows numerical simulations on synthetic and semi-synthetic data. We provide additional experiments that highlight the difference in performance between our estimation procedure and a technique based on SD approaches.
We complement the numerical simulations with further experiments in Appendix A, while we present a comparison with SD approaches on the theoretical side in Appendix D.

## 2 Related Works

Non-stationary Bandits Non-stationary behaviors are closer to real-world scenarios, and this has induced a vast interest in the scientific community, leading to the formulation of different methods that consider either abruptly changing environments (Garivier & Moulines, 2011), smoothly changing environments (Trovò et al., 2020), or settings with a bounded variation of the rewards (Besbes et al., 2014). It is known that when rewards may arbitrarily change over time, the problem of Non-Stationary Bandits is intractable, meaning that only trivial bounds can be derived on the dynamic pseudo-regret. That is the main reason why the literature largely focuses on non-stationary settings enjoying some specific structure, in order to design algorithms with better guarantees. Non-stationary MAB approaches typically include both passive methods, in which arm selection is mainly driven by the most recent feedback (Auer et al., 2019; Besbes et al., 2014; Trovò et al., 2020), and active methods, where a change detection layer is used to actively perceive a drift in the rewards and to discard old information (Liu et al., 2017; Cao et al., 2018). Works such as Garivier & Moulines (2011) provide an $O(\sqrt{T})$ regret guarantee under the assumption of knowing the number of abrupt changes. Other works, such as Besbes et al. (2014), employ a fixed budget to bound the total variation of expected rewards over the time horizon. They are able to provide a near-optimal frequentist algorithm with pseudo-regret $O(T^{2/3})$ and a distribution-independent lower bound. All the above methods are not suited for environments that switch between different regimes, as they do not keep past interactions in memory but rather tend to forget or discard the past.
A particular type of non-stationary Bandit problem related to our work is the *restless Markov* setting (Ortner et al., 2014; Slivkins & Upfal, 2008), where each arm is associated with a different Markov process and the state of each arm evolves independently of the learner's actions. Differently, Fiez et al. (2018) investigate MAB problems with rewards determined by an unobserved Markov chain where the transition to the next state depends on the action selected at each time step, while Zhou et al. (2021) focus on MAB problems where the state transition dynamics evolve independently of the chosen action. This last work has many similarities with our setting. The main difference lies in the fact that they do not assume to know the conditional reward models and learn them jointly with the transition matrix. They make use of SD techniques (Anandkumar et al., 2014) and use this tool in a regret minimization algorithm achieving an $O(T^{2/3})$ regret bound. Their setting is more complex than ours but involves additional assumptions, like the invertibility of the transition matrix that defines the chain. Furthermore, spectral methods need a vast amount of samples in order to provide reasonable estimation errors and can hardly be used in large problems. A detailed discussion of the differences between the estimation procedure used in Zhou et al. (2021) and ours is presented in Appendix D. A setting similar to the one we consider has also been described in Kontorovich et al. (2013) in the context of *Hidden Markov Models*. They adopt an estimation approach of the transition model that shares some similarities with ours, since they consider couples of consecutive observations, but their procedure is more involved and leads to worse convergence guarantees than ours.

Latent Bandits More similar lines of work are related to bandit studies where latent variables determine the distribution of rewards (Maillard & Mannor, 2014; Zhou & Brunskill, 2016).
In these works, the unobserved state is fixed across different rounds, and the conditional rewards depend on the latent state. Maillard & Mannor (2014) developed UCB algorithms without context, considering the two different cases in which the conditional rewards are either known or need to be estimated. This line of work has been extended to the contextual bandit case in Zhou & Brunskill (2016), where there is an offline procedure to learn the policies and a selection strategy to use them online. Hong et al. (2020a) proposed a TS procedure in the contextual case that updates a prior probability over the set of states in order to give a higher probability to the real latent state. A non-stationary variant of this setting is proposed in Hong et al. (2020b), where the latent states are assumed to change according to an underlying Markov chain. They develop TS algorithms for different cases: when both the reward and transition models are completely known and when only partial information about them is available. For the partial information case, they provide an algorithm based on particle filters, which will be used for comparison in the experimental section. Differently from Hong et al. (2020b), we do not assume any prior information about the transition matrix, and we learn it through interactions with the environment using the information about the reward models. Another interesting work associated with latent bandits is the one from Kwon et al. (2022) where, differently from the previously cited works, they assume an episodic setting with a fixed horizon $H$. At the beginning of each episode, a specific MAB instance is sampled from a fixed mixing distribution, and the agent interacts with the sampled MAB until the end of the episode without being aware of the MAB they are interacting with. The goal is to learn both the mixture weights and the reward distributions associated with each MAB.
The relevant difference with our work is that they consider an episodic setting, while we consider a continuous one. Another main difference is that they provide results in terms of the sample complexity needed to learn a near-optimal policy, without taking into account the suffered regret. In Appendix C, we provide further discussion comparing our setting with related works and report a table summarizing the differences with respect to the most related ones.

## 3 Preliminaries

In the following, we present the main elements needed to understand what follows. We denote with $\Delta(\mathbb{X})$ the simplex of a finite space $\mathbb{X}$, and we use the bold symbol $\boldsymbol{P}$ to denote the transition matrix and the probabilities associated with a Markov chain (see Section 3.2).

## 3.1 Multi-Armed Bandits

An $I$-armed stochastic bandit (Lattimore & Szepesvári, 2020) is a collection of distributions $\nu = (\Pr(\cdot|a) \in \Delta(\mathbb{V}) : a \in \mathbb{I})$, where $\mathbb{I}$ is the set of available actions with cardinality $I$ and $\mathbb{V}$ is a finite set of possible rewards with cardinality $V$. A learning agent sequentially interacts with the environment over $T$ rounds. For each round $t \in \{1, \ldots, T\}$, the learner chooses an action $a_t \in \mathbb{I}$ and the environment returns a reward $r_t \in \mathbb{V}$. The goal of the learner is to maximize the sum of cumulative rewards $\sum_{t=1}^{T} r_t$, which is a random quantity that depends on the stochasticity of both the environment and the choice of the agent's actions. The behavior of an agent interacting with an environment is defined by a policy $\theta : \mathcal{H} \to \Delta(\mathbb{I})$ that maps the observed history¹ to actions. In general, the performance of a bandit algorithm is measured using the notion of regret, which is defined as the deficit suffered by the learning agent with respect to the optimal policy.
The regret of a policy $\theta$ on a bandit instance is defined as:

$$\mathcal{R}_{T}(\theta)=T\mu^{*}-\mathbb{E}\left[\sum_{t=1}^{T}r_{t}\right],\tag{1}$$

where $\mu^{*}$ denotes the maximum expected reward among the available arms, while the expectation is taken with respect to the policy $\theta$ and the stochasticity of the environment.

## 3.2 Markov Chains

A Markov Chain (MC) (Feller, 1968) is defined by a tuple $M := (\mathbb{S}, \boldsymbol{P}, \nu)$, where $\mathbb{S}$ is a finite state space ($|\mathbb{S}| = S$), $\boldsymbol{P} : \mathbb{S} \to \Delta(\mathbb{S})$ is the transition model, such that $\boldsymbol{P}(s, s')$ denotes the probability of reaching state $s' \in \mathbb{S}$ when being in state $s \in \mathbb{S}$, and $\nu$ is the initial state distribution. We denote with $\boldsymbol{P}$ the stochastic matrix of size $S \times S$ representing the transition model. A Markov chain is said to be *ergodic* if its associated transition matrix consists of a single recurrent class containing all states (Puterman, 1994). Ergodic Markov chains are *irreducible*, meaning that it is possible to reach any state from any other state with positive probability in a finite number of steps, and *aperiodic*, meaning that the chain does not follow a regular, repeating pattern in its transitions. We state the following result for ergodic Markov chains.

Proposition 3.1. *Let $\boldsymbol{P}$ be the transition matrix of an ergodic Markov Chain and $\nu$ an arbitrary probability vector. Then:*

$$\lim_{n\to\infty}\nu \boldsymbol{P}^{n}=\pi,$$

where $\boldsymbol{P}^{n}$ represents the transition kernel induced after $n$ steps, $\pi$ is the unique stationary distribution of the chain, and the components of the vector $\pi$ are all strictly positive. By definition, this distribution satisfies the equation $\pi \boldsymbol{P} = \pi$. Since the stationary distribution of the Markov chain is unique, it follows that there is only one eigenvalue with unitary value ($\lambda_{\max} = 1$). Let us define the set of ordered moduli of the eigenvalues of the transition matrix $\boldsymbol{P}$ as $(|\lambda_i|)_{i=1}^{S}$.
By denoting $|\lambda_{\max}| = |\lambda_1|$, we have the following relation:

$$|\lambda_{1}|=1>|\lambda_{2}|\geq\cdots\geq|\lambda_{S}|,$$

where the inequality between $|\lambda_1|$ and $|\lambda_2|$ is strict for ergodic chains. The quantity $1 - |\lambda_2|$ is defined as the absolute spectral gap of the Markov chain and controls the rate of convergence of the chain towards its stationary distribution (Krishnamurthy, 2016). In what follows, we use the symbol $\lambda$ to denote the modulus of the second largest eigenvalue, namely $\lambda = |\lambda_2|$.

¹We define a history $h := (a_j, r_j)_{j=0}^{t} \in \mathcal{H}_t$, with $\mathcal{H}_t$ being the space of histories of length $t$, and we denote with $\mathcal{H}$ the space of histories of arbitrary length.

## 4 Switching Latent Bandits

We consider a finite set $\mathbb{S} := \{s_1, \ldots, s_S\}$ of $S = |\mathbb{S}|$ different MAB instances. Each MAB is characterized by the same set of discrete arms $\mathbb{I} := \{a_1, \ldots, a_I\}$ with cardinality $I = |\mathbb{I}|$ and the same set of finite rewards $\mathbb{V} = \{r_1, \ldots, r_V\}$ with cardinality $V = |\mathbb{V}|$. Whenever an arm $a \in \mathbb{I}$ is pulled, a corresponding reward $r \in \mathbb{V}$ is generated by the environment. We consider each reward $r \in \mathbb{V}$ bounded for simplicity in the range $[0, 1]$. The distribution of rewards $\Pr(\cdot|s, a)$ conditioned on MAB instance $s$ and action $a$ is categorical². In particular, we assume to know the parameters characterizing these distributions and to store this information in the matrix $O \in \mathbb{R}^{IV \times S}$, which we call the action observation matrix. Each row of this matrix encodes a specific action-reward pair $(a, r) \in \mathbb{I} \times \mathbb{V}$. Then, for any pair $(a, r) \in \mathbb{I} \times \mathbb{V}$ and any state $s \in \mathbb{S}$, we have:

$$O\big((a,r),s\big)=\Pr(r|s,a),\tag{2}$$

where $\Pr(r|s,a)$ represents the probability of observing reward $r$ while pulling action $a$ from MAB $s$. At each step $t$, only one MAB $s_t \in \mathbb{S}$ is active, and it determines the reward $r_t$ that is received when the agent pulls action $a_t$.
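The objects introduced so far — an ergodic chain with its stationary distribution and spectral gap (Section 3.2), and the action observation matrix of Equation 2 — can be constructed and checked numerically. Below is a minimal numpy sketch with toy numbers (all values hypothetical); the row convention `O[a*V + r, s] = Pr(r | s, a)` is an indexing choice made for this illustration:

```python
import numpy as np

# Toy switching-latent-bandit instance: S = 2 latent MABs, I = 2 arms, V = 2 rewards.
S, I, V = 2, 2, 2

# Ergodic transition matrix P (all entries positive, rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution pi: left eigenvector of P associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Absolute spectral gap 1 - |lambda_2|, controlling the mixing rate of the chain.
spectral_gap = 1.0 - np.sort(np.abs(eigvals))[-2]

# Conditional reward distributions Pr(r | s, a), stored with shape (S, I, V).
rew = np.array([[[0.8, 0.2], [0.5, 0.5]],   # state s_1
                [[0.3, 0.7], [0.1, 0.9]]])  # state s_2

# Action observation matrix O of shape (I*V, S): row a*V + r holds Pr(r | . , a).
O = np.vstack([rew[:, a, r] for a in range(I) for r in range(V)])

# Smallest singular value of O; positive exactly when O has full column rank.
sigma_min = np.linalg.svd(O, compute_uv=False)[-1]
```

With these toy values, $\pi \boldsymbol{P} = \pi$ holds, the spectral gap is $0.3$ (eigenvalues $1$ and $0.7$), and $\sigma_{\min}(O) > 0$, a property that becomes relevant for the estimation error later on.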
The choice of the active MAB is determined by an underlying Markov chain with transition matrix $\boldsymbol{P} \in \mathbb{R}^{S \times S}$. More precisely, the probability over the next active MAB $s_{t+1}$ is determined by the distribution $\boldsymbol{P}(s_t, \cdot) \in \Delta(\mathbb{S})$ and is thus independent of the chosen action $a_t$. The setting we consider assumes that the agent is not able to observe the active MAB at each step, and the objective is to learn the transition matrix $\boldsymbol{P}$ characterizing the underlying process while knowing the observation model $O$.

Learning objective As already seen, the agent does not observe the sequence of MAB instances, but by deriving an estimate of the transition matrix $\boldsymbol{P}$, a belief representation over the current active MAB $s \in \mathbb{S}$ can be defined. In the following, we report the update rule of the belief vector $\mathbf{b}_t \in \Delta(\mathbb{S})$ under the knowledge of the observation model $O$ and the transition model $\boldsymbol{P}$. The update of the belief derives from the typical correction and prediction steps of the Bayes rule, where the correction step adjusts the current belief $\mathbf{b}_t$ using the reward $r_t$ obtained by pulling arm $a_t$, and the prediction step computes the new belief $\mathbf{b}_{t+1}$ by simulating a transition step of the chain. More formally, for each element $\mathbf{b}_{t+1}(s)$ of the belief vector $\mathbf{b}_{t+1}$, the update step is as follows:

$$\mathbf{b}_{t+1}(s)=\frac{\sum_{s'\in\mathbb{S}}\mathbf{b}_{t}(s')\,O\big((a_{t},r_{t}),s'\big)\,\boldsymbol{P}(s',s)}{\sum_{s''\in\mathbb{S}}\mathbf{b}_{t}(s'')\,O\big((a_{t},r_{t}),s''\big)}.\tag{3}$$
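The correction-and-prediction recursion above, together with the greedy arm choice it feeds into (introduced next), can be sketched in a few lines of numpy. All numbers are toy values, and the row convention `O[a*V + r, s] = Pr(r | s, a)` is an indexing assumption of this illustration:

```python
import numpy as np

S, I, V = 2, 2, 2                       # toy sizes (hypothetical)
r_vals = np.array([0.0, 1.0])           # the finite reward set V
P = np.array([[0.9, 0.1],               # transition matrix of the latent chain
              [0.2, 0.8]])
O = np.array([[0.8, 0.3],               # row (a=0, r=0): Pr(r=0 | s, a=0) per state
              [0.2, 0.7],               # row (a=0, r=1)
              [0.5, 0.1],               # row (a=1, r=0)
              [0.5, 0.9]])              # row (a=1, r=1)

def belief_update(b, a, r_idx):
    """Bayes correction with the observed reward, then one prediction step."""
    corrected = b * O[a * V + r_idx, :]  # weight each state by Pr(r | s, a)
    corrected = corrected / corrected.sum()
    return corrected @ P                 # propagate the chain one step

def greedy_action(b):
    """Pull the arm maximizing the expected instantaneous reward <mu(a), b>."""
    mu = np.array([[r_vals @ O[a * V:(a + 1) * V, s] for s in range(S)]
                   for a in range(I)])   # mu[a, s] = sum_r r * O[(a, r), s]
    return int(np.argmax(mu @ b))

b = np.array([0.5, 0.5])                 # uniform initial belief
b = belief_update(b, a=0, r_idx=1)       # reward r = 1 observed after pulling arm 0
a_next = greedy_action(b)
```

The same recursion run with an estimated transition matrix $\widehat{\boldsymbol{P}}$ in place of $\boldsymbol{P}$ yields the estimated belief $\widehat{\mathbf{b}}_t$ that appears in the regret definition.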
After having defined the update rule of the belief vector $\mathbf{b}_t$, we introduce, for each action $a \in \mathbb{I}$, the vector $\boldsymbol{\mu}(a) \in \mathbb{R}^{S}$, where the element $\mu(a, s)$ referring to state $s \in \mathbb{S}$ contains the expected reward obtained when pulling arm $a$ while being in state $s$. More formally:

$$\mu(a,s)=\sum_{r\in\mathbb{V}}r\;O\big((a,r),s\big).\tag{4}$$

Given the belief $\mathbf{b}_t$ over the states, the objective of the agent is to pull the action that maximizes the instantaneous expected reward, such that:

$$a_{t}=\operatorname*{arg\,max}_{a\in\mathbb{I}}\sum_{s\in\mathbb{S}}\mu(a,s)\,\mathbf{b}_{t}(s)=\operatorname*{arg\,max}_{a\in\mathbb{I}}\,\langle\boldsymbol{\mu}(a),\mathbf{b}_{t}\rangle,\tag{5}$$

where $\langle\cdot,\cdot\rangle$ denotes the scalar product between the two vectors. From the considerations reported above, we are now ready to formulate the notion of regret we try to minimize:

$$\mathfrak{R}_{T}=\sum_{t=1}^{T}\Big[\max_{a\in\mathbb{I}}\,\langle\boldsymbol{\mu}(a),\mathbf{b}_{t}\rangle-\max_{a\in\mathbb{I}}\,\langle\boldsymbol{\mu}(a),\widehat{\mathbf{b}}_{t}\rangle\Big],\tag{6}$$

where $\mathbf{b}_t$ and $\widehat{\mathbf{b}}_t$ denote the belief vectors updated using the real transition matrix $\boldsymbol{P}$ and the estimated one $\widehat{\boldsymbol{P}}$, respectively. Here, we use the symbol $\mathfrak{R}$ to characterize the regret defined in Equation 6, in order to discriminate it from the standard notion of regret $\mathcal{R}$ introduced in Section 3.1.

²In Appendix F, we will see how this formulation can be extended to continuous distributions.

## 4.1 Assumptions

We now introduce some assumptions that should hold in our setting:

Assumption 4.1. *The smallest element of the transition matrix defining the Markov chain is* $\epsilon := \min_{s,s'\in\mathbb{S}} \boldsymbol{P}(s,s') > 0$.

This assumption ensures a non-null probability of transitioning from any state to any other in one step. It is possible to show that under this assumption, the induced Markov chain is ergodic, thus guaranteeing the existence of a unique stationary distribution, as shown in Proposition 3.1. Under the ergodicity condition, the chain reaches its stationary distribution $\pi$ geometrically fast, regardless of its initial distribution (Krishnamurthy, 2016).
Our assumption on the minimum entry is not a necessary condition for the two aforementioned motivations but a sufficient one. However, we require this condition to bound the error between the belief computed using the real transition matrix and an estimated one. This result is presented in Proposition E.6 and builds on the original result presented in De Castro et al. (2017). This one-step reachability assumption is always present in works dealing with partial observability that show results in terms of regret in non-episodic scenarios. Notably, it has been used in similar works such as Zhou et al. (2021); Jiang et al. (2023); Mattila et al. (2020) and also employed in the more complex POMDP setting (Xiong et al., 2022). Works not using this assumption either do not need it since they use a less powerful class of policies, such as memoryless ones³ (Azizzadenesheli et al., 2016), or they directly impose an error of the estimated belief that adequately decreases with the number of collected samples (Jafarnia Jahromi et al., 2022).

Assumption 4.2. *The action observation matrix $O \in \mathbb{R}^{IV \times S}$ is full column rank.*

This second assumption is related to the identifiability of the parameters of the problem and has been largely used in works employing spectral decomposition techniques (Zhou et al., 2021; Azizzadenesheli et al., 2016; Hsu et al., 2012). A robust version of this assumption, called weakly-revealing⁴, is also present in other works involving learning parameters in POMDPs (Liu et al., 2022; Jin et al., 2020). In the following, we will see that this is a necessary condition in order to recover the matrix $\boldsymbol{P}$. Indeed, we will see that the error of the estimation procedure has an inverse dependency on the minimum singular value $\sigma_{\min}(O)$ of the action observation matrix $O$, and through Assumption 4.2, we implicitly require that $\sigma_{\min}(O) > 0$.

## 5 Proposed Approach

As clarified in the previous section, our goal is to minimize the regret formulated in Equation 6.
To reach this objective, we need to define a good estimate P̂ of the transition matrix, which in turn results in a more accurate update of the belief vector b̂_t. We will now show how the transition model can be learned by exploiting the knowledge of the observation model O (Section 5.1), and we will present the SL-EC algorithm that makes use of the presented estimation approach in order to minimize the regret (Section 5.2).

## 5.1 Transition Model Estimation

The Markov chain estimation procedure presented in this section holds under weaker assumptions than those presented in Section 4.1. In particular, we relax the one-step reachability assumption (Assumption 4.1) and only require the ergodicity of the transition matrix P.

Stationary Distribution of Consecutive States We start with a consideration about the transition matrix that defines the chain. Building on Proposition 3.1, an ergodic chain admits a unique stationary distribution π. From the uniqueness of this distribution, it can be easily shown that there also exists a unique stationary distribution over consecutive states, which we represent with a matrix W ∈ ∆(S²) of dimension S × S. Its elements are obtained as W(s, s′) = π(s)P(s, s′). By defining with Π = diag(π) the diagonal matrix of size S × S having the values of the stationary distribution π along its diagonal, we can express the matrix W of the stationary distribution of consecutive states as follows:

$$\mathbf{W}=\mathbf{\Pi}\mathbf{P},\tag{7}$$

which is obtained by multiplying each row of the transition matrix P by the associated probability value of the stationary distribution.

³ A memoryless policy defines the action to choose only based on the last observation seen. For this reason, it does not require a notion of belief over the states.
⁴ The α-weakly revealing assumption defines a lower bound α on the minimum singular value of the observation matrix O, such that σ_min(O) ≥ α.
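The construction in Equation 7 can be sketched in a few lines, assuming a hypothetical ergodic chain; here π is obtained as the normalized leading left eigenvector of P:

```python
import numpy as np

# Hypothetical ergodic 3-state chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: normalized left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Equation 7: W = Pi P, with Pi = diag(pi).
W = np.diag(pi) @ P

assert np.isclose(W.sum(), 1.0)          # W is a distribution over S x S
assert np.allclose(W.sum(axis=1), pi)    # marginalizing over s' recovers pi
```

The second assertion reflects the fact that each row of W sums to π(s), since the rows of P sum to one.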
The reverse procedure that allows retrieving matrix P from W is defined by the following equation:

$$\mathbf{P}(s,s^{\prime})=\frac{\mathbf{W}(s,s^{\prime})}{\sum_{s^{\prime\prime}\in\mathbb{S}}\mathbf{W}(s,s^{\prime\prime})},$$

which shows that the rows of matrix P are obtained by normalizing the rows of matrix W such that they sum to 1, as required for stochastic matrices. The next paragraph shows how the matrix W of the stationary distribution of consecutive states relates to the stationary distribution of consecutive rewards.

Stationary Observation-State Relation Let's choose an arm a ∈ I: we denote with d_a ∈ ∆(V) the stationary distribution of rewards conditioned on pulling action a once the chain has mixed.⁵ Vector d_a has dimension V and its elements are characterized as follows:

$$\mathbf{d}_{a}(r)=\sum_{s\in\mathbb{S}}\mathbf{O}\big((a,r),s\big)\,\boldsymbol{\pi}(s)\qquad\forall\,r\in\mathbb{V},\tag{8}$$

where we recall that π(s) represents the probability of state s under the stationary distribution of the chain and O((a, r), s) represents the probability of observing reward r while pulling action a in state s. A similar rationale can be extended to consecutive rewards (r, r′) ∈ V² conditioned on pulling a pair of consecutive actions (a, a′) ∈ I². We denote with d_{a,a′} ∈ ∆(V²) the distribution over consecutive rewards conditioned on pulling the pair of arms (a, a′). We represent it with a vector of size V² and define it as follows:

$$\mathbf{d}_{a,a^{\prime}}\big((r,r^{\prime})\big)=\sum_{s,s^{\prime}\in\mathbb{S}^{2}}\mathbf{O}\big((a,r),s\big)\,\mathbf{O}\big((a^{\prime},r^{\prime}),s^{\prime}\big)\,\mathbf{W}(s,s^{\prime})\qquad\forall\,r,r^{\prime}\in\mathbb{V},\tag{9}$$

where we recall that matrix W ∈ ∆(S²) represents the stationary distribution of consecutive states.
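The reverse relation that recovers P from W by row normalization can be checked directly; the sketch below assumes a hypothetical ergodic chain:

```python
import numpy as np

# Hypothetical ergodic chain and its stationary distribution.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

W = np.diag(pi) @ P                      # forward direction (Equation 7)

# Reverse direction: normalize each row of W so that it sums to 1.
P_rec = W / W.sum(axis=1, keepdims=True)

assert np.allclose(P_rec, P)             # exactly recovers the transition matrix
```

This is the same normalization that Algorithm 1 applies to the estimated Ŵ in its last step.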
By considering the different vectors of type d_{a,a′}, we define the vector:

$$\mathbf{d}=\big(\mathbf{d}_{a,a^{\prime}}\big)_{(a,a^{\prime})\in\mathbb{I}^{2}},\tag{10}$$

where the term on the right denotes the concatenation of the vectors d_{a,a′} for all (a, a′) ∈ I², and the resulting vector d has size I²V². We now define a new matrix A ∈ R^{I²V²×S²}, to which we will refer as the reference matrix. It extends the information contained in the action observation matrix O to consecutive pairs of elements and is characterized as follows:

$$\mathbf{A}=\mathbf{O}\otimes\mathbf{O},\tag{11}$$

where the symbol ⊗ refers to the Kronecker product (Loan, 2000). Since we assume knowledge of the observation model O, we can directly compute the reference matrix by applying the Kronecker operator. As a last step before presenting the main result, we vectorize⁶ matrix W to obtain the vector w ∈ ∆(S²) of dimension S². By using the quantities just defined, we can finally reformulate Equation 9 so that it extends to all pairs of actions. Using vector notation, we have:

$$\mathbf{d}=\mathbf{A}\mathbf{w}.\tag{12}$$

Basically, this equation relates the stationary probability distribution of consecutive observations to the stationary probability distribution of consecutive latent states. The next paragraph shows how to obtain an estimate d̂ of vector d, from which, by reversing Equation 12, an estimate ŵ of the stationary distribution of consecutive states can be computed.

Transition Model Estimation We will now see how to concretely compute an estimate of w using Equation 12. Going back to the vectors d_{a,a′}, we can build an estimate d̂_{a,a′} for each pair of actions (a, a′) ∈ I². For this purpose, let's take a pair of actions (a, a′) and repeatedly pull it.

⁵ The distribution of states in a mixed chain corresponds by definition to its stationary distribution π.
⁶ The vectorization operation used here creates a new vector w by concatenating the rows of matrix W.
We can count the number of occurrences of each pair of observed rewards (r, r′) ∈ V² and store this information in a suitable count vector n_{a,a′} of size V². We can then easily derive an estimate of the vector d_{a,a′} as follows:

$$\widehat{\mathbf{d}}_{a,a^{\prime}}=\frac{\mathbf{n}_{a,a^{\prime}}}{N},\tag{13}$$

where N represents the number of times the pair of consecutive arms (a, a′) has been pulled. We propose an estimation procedure that pulls each pair of arms (a, a′) ∈ I² in a round-robin fashion and stores the observed pairs of rewards in the corresponding count vector n_{a,a′}. The choice of a round-robin approach yields interesting properties in the theoretical analysis, as will be shown later in Section 6. By executing N different rounds, meaning that each pair of arms is pulled exactly N times, and by exploiting the knowledge of the reference matrix A, we can derive:

$$\widehat{\mathbf{w}}=\mathbf{A}^{\dagger}\widehat{\mathbf{d}}=\mathbf{A}^{\dagger}\frac{\mathbf{n}}{N},\tag{14}$$

where A† is the Moore–Penrose inverse of the reference matrix A, while the vectors d̂ and n are obtained by concatenating the different vectors d̂_{a,a′} and n_{a,a′}, as also done in Equation 10. The second equality is derived by extending Equation 13 to the concatenated vectors. The stated equation shows that the estimation procedure involves solving a simple least-squares problem, which can be done in a computationally efficient way. Once an estimate ŵ is computed, the corresponding matrix Ŵ can be obtained by reverting the vectorization operation, and eventually an estimate P̂ of the transition model is computed using Equation 7. The pseudocode of the presented estimation procedure is detailed in Algorithm 1. We acknowledge that other approaches can be devised for choosing the action policy used during estimation. Some approaches, such as the one in Kinyanjui et al. (2023), have been devised for the Latent Bandit setting and face a pure exploration problem.
However, their method is tailored to the stationary setting, and they update their policy as new information is acquired. In our scenario, instead, it is necessary to select an action policy a priori and keep it constant during the interaction with the environment: approaches that work in this direction and that could potentially be used in our setting are those based on Experimental Design (Kiefer & Wolfowitz, 1960).

## 5.1.1 Arm Selection Strategy

In Algorithm 1, we propose a simple approach for choosing the arms to pull: each pair of arms is pulled the same number of times during the exploration phase using a deterministic schedule. However, it can be shown that the estimation procedure proposed in Section 5.1 extends to a more flexible arm selection policy. We may randomize the arm choice by assigning non-uniform probabilities to each pair of arms. In principle, this allows exploiting the knowledge of the known reward distribution of each arm, for example, initially giving a higher probability to the pairs of arms that are more rewarding. Such an arm selection policy may be beneficial if we plug our estimation approach into an iterative two-phase exploration and exploitation algorithm, such as the one used in Zhou et al. (2021).

Offline arm selection In problems with a large number of available arms, a round-robin approach among all possible pairs of arms may be detrimental, as it treats all arms equivalently. There may be cases where some actions are less useful for state identification. The extreme case is an action that induces the same observation distribution for all the Bandit instances.
Indeed, pulling that action will not provide any additional information on the current MAB, and its only effect will be to slow down the estimation procedure.

Algorithm 1: Estimation Procedure
Input: Action observation matrix O, number of rounds N
1: Build reference matrix A using Equation 11
2: Initialize count vectors n_{a,a′} with zeros for all (a, a′) ∈ I²
3: k = 0
4: while k < N do
5:   t = 2kI²
6:   foreach (a, a′) ∈ I² do
7:     Pull arm a_t = a
8:     Observe reward r_t = r
9:     Pull arm a_{t+1} = a′
10:    Observe reward r_{t+1} = r′
11:    n_{a,a′}(r, r′) = n_{a,a′}(r, r′) + 1
12:    t = t + 2
13:  k = k + 1
14: Compute d̂_{a,a′} for all (a, a′) ∈ I² using Equation 13
15: Obtain d̂ by concatenating all the vectors d̂_{a,a′} (as done in Equation 10)
16: Estimate ŵ from Equation 14
17: Reshape vector ŵ to obtain matrix Ŵ
18: Compute P̂ using Equation 7

In general, actions that induce *similar* observation distributions for all the MABs provide less information than actions that induce highly different distributions. A more convenient approach, in this case, is to select a subset of arms to be used during the exploration phase. Intuitively, the arm selection procedure should promote diversity among arms conditioned on the latent states, with the objective of increasing the identifiability deriving from the actions. It turns out that we can quantify the information loss we suffer by selecting specific arms, given the knowledge of the action observation matrix O. In particular, in Section 6, devoted to the theoretical analysis, we will see that the quality of the estimation highly depends on the minimum singular value σ_min(O) of the action observation matrix O. We can thus use this value to drive the choice of the best subset of arms. In particular, fixing a number J < I of arms to use among those available, the choice of the best subset of size J can be done as follows.
We consider all the possible subsets of arms of size J and, for each of them, derive a reduced action observation matrix G of size JV × S, obtained by removing from the original matrix O all the rows associated with actions not belonging to the considered subset. Having defined a new action observation matrix for each generated subset, a good candidate is the subset yielding the reduced action observation matrix G with the highest σ_min(G). Understandably, this approach requires that the reduced action observation matrix G derived from the selected subset be full column rank, thus satisfying Assumption 4.2.

## 5.2 SL-EC Algorithm

Having established an estimation procedure for the transition matrix P̂, we now provide an algorithm that uses this approach in a regret minimization framework. We consider a finite horizon T for our problem. We propose an algorithm called *Switching Latent Explore then Commit* (SL-EC) that proceeds using an EC approach, where the exploration phase is devoted to finding the best estimate of the transition matrix P̂, while during the exploitation phase we maximize the instantaneous expected reward following the formulation provided in Equation 5. The exploration phase lasts for T0 steps, where T0 is optimized w.r.t. the total horizon T, as will be seen in Section 6. The pseudocode of the SL-EC Algorithm is presented in Algorithm 2.
Algorithm 2: SL-EC Algorithm
Input: Observation model O, exploration horizon T0, total horizon T
1: Define number of rounds N = T0/(2I²)
2: P̂ = EstimationProcedure(O, N) (Algorithm 1)
3: b0 = UniformOverStates()
4: Compute b̂_{T0} using the samples collected during Algorithm 1
5: t ← T0
6: while t ≤ T do
7:   a_t = arg max_{a∈I} ⟨µ(a), b̂_t⟩
8:   Observe reward r_t
9:   b̂_{t+1} = UpdateBelief(b̂_t, a_t, r_t) (Equation 3)
10:  t = t + 1

Basically, the exploration phase pulls each pair of arms in a round-robin fashion and uses the estimation procedure presented in Algorithm 1. When the exploration phase is over, an estimate of the transition matrix P̂ is computed. After that, a belief vector b0 is initialized by assigning uniform probability to all states (Line 3), and it is updated using Equation 3 and the estimated P̂, considering the history of samples collected from the beginning up to T0 (Line 4). Finally, the exploitation phase starts, as described in the pseudocode of the algorithm.

## 6 Theoretical Analysis

Having defined the estimation procedure of the transition model in Section 5.1 and having introduced the SL-EC algorithm, we now provide theoretical guarantees for them.

## 6.1 Analysis Of Estimation Procedure In Algorithm 1

We start with a concentration bound on the transition matrix P̂ computed by the estimation procedure in Algorithm 1. As already highlighted, this estimation procedure only requires the ergodicity of the chain, thus relaxing Assumption 4.1.

Lemma 6.1. *Suppose Assumption 4.2 holds and suppose that the Markov chain with transition matrix* P *is ergodic, so that* π_min := min_{s∈S} π(s) > 0 *with* π ∈ ∆(S) *being the stationary distribution of the chain.*
*By assuming that the chain starts from an arbitrary distribution* ν ∈ ∆(S)*, by pulling each pair of arms in a round-robin fashion for* N *rounds, and by using the estimation procedure reported in Algorithm 1, we have that with probability at least* 1 − δ *the estimation error of the transition matrix* P *satisfies:*

$$\|\mathbf{P}-\widehat{\mathbf{P}}\|_{F}\leq\frac{2I}{\sigma_{\min}^{2}(\mathbf{O})\,\pi_{\min}}\sqrt{\frac{2S\big(C+\log(CI^{2}/\delta)\big)}{(1-\lambda^{2I^{2}})N}},\tag{15}$$

where ∥·∥_F denotes the Frobenius norm (Golub & Van Loan, 1996), σ_min(O) the minimum singular value of the action observation matrix O, the constant C is defined as C := ∥ν/π∥_∞ with ν/π the vector of the element-wise ratio between the two probability distributions, and λ is the modulus of the second largest eigenvalue of matrix P.

As stated in the Lemma, N denotes the number of times each pair of arms is pulled; hence, the stated error guarantee holds when interacting with the environment for a total of 2I²N steps, where the I² term arises from the total number of pairs of arms and the constant 2 accounts for each pair consisting of two pulls. As a last remark, we note that the Lemma assumes that the chain starts from an arbitrary distribution. This fact leads to the additional constant C in the bound; indeed, when the chain starts from the stationary distribution, we have C = 1. This result comes from Proposition E.4, which uses a concentration result derived from Fan et al. (2021).

Here, we provide a sketch of the proof of the presented Lemma. A more detailed version of this proof is reported in Appendix B.

Sketch of the proof. The proof of Lemma 6.1 builds on two principal results.
The former is a relation linking the estimation error of matrix P to the estimation error of matrix W, while the latter is a concentration bound on the estimated Ŵ around the true W. Concerning the first result, we have:

$$\|\mathbf{P}-\widehat{\mathbf{P}}\|_{F}\leq\frac{2\sqrt{S}\,\|\mathbf{W}-\widehat{\mathbf{W}}\|_{F}}{\pi_{\min}}.\tag{P.1}$$

This result follows from a sequence of algebraic manipulations and makes use of Lemma E.1 appearing in Appendix E. We now need to bound ∥W − Ŵ∥_F. To do so, we resort to the vectorized versions w and ŵ of the two matrices and use the identity ∥W − Ŵ∥_F = ∥w − ŵ∥_2. We proceed as follows:

$$\|\mathbf{w}-\widehat{\mathbf{w}}\|_{2}=\big\|\mathbf{A}^{\dagger}(\mathbf{d}-\widehat{\mathbf{d}})\big\|_{2}\leq\|\mathbf{A}^{\dagger}\|_{2}\,\|\mathbf{d}-\widehat{\mathbf{d}}\|_{2}=\frac{1}{\sigma_{\min}(\mathbf{A})}\|\mathbf{d}-\widehat{\mathbf{d}}\|_{2}=\frac{1}{\sigma_{\min}^{2}(\mathbf{O})}\|\mathbf{d}-\widehat{\mathbf{d}}\|_{2},\tag{P.2}$$

where the first equality follows from Equation 14. The inequality uses the consistency property of the spectral norm applied to A†, while the last equality uses a property of the Kronecker product for which it holds that:

$$\sigma_{\min}(\mathbf{A})=\sigma_{\min}(\mathbf{O})\,\sigma_{\min}(\mathbf{O})=\sigma_{\min}^{2}(\mathbf{O}).$$

Let's now consider the estimation error of each vector d_{a,a′}, the stationary distribution over consecutive rewards conditioned on pulling the pair of arms (a, a′). From Equation 10, we know that by concatenating these vectors we obtain the quantity d.
Thus, by definition, we have:

$$\|\mathbf{d}-\widehat{\mathbf{d}}\|_{2}=\sqrt{\sum_{(a,a^{\prime})\in\mathbb{I}^{2}}\|\mathbf{d}_{a,a^{\prime}}-\widehat{\mathbf{d}}_{a,a^{\prime}}\|_{2}^{2}}.\tag{P.3}$$

The estimation error of each d_{a,a′} can be bounded using the result in Proposition E.4, inspired by the work of Hsu et al. (2012), which bounds the estimation error of categorical distributions when the observed samples derive from a Markov chain. With probability at least 1 − δ/I², we have:

$$\|\mathbf{d}_{a,a^{\prime}}-\widehat{\mathbf{d}}_{a,a^{\prime}}\|_{2}\leq\sqrt{\left(\frac{1+\lambda^{2I^{2}}}{1-\lambda^{2I^{2}}}\right)\frac{C+\log(CI^{2}/\delta)}{N}}.$$

The exponent 2I² applied to the modulus λ of the second largest eigenvalue is introduced thanks to the adoption of the round-robin procedure for the choice of combinations of arms: each pair of arms is pulled every 2I² steps of the Markov process, thus resulting in a faster mixing of the subsampled chain. For more details, please refer to Appendix B. By combining the last bound with P.2 and P.3 and using a union bound over the estimation errors of all vectors d_{a,a′}, we have that with probability at least 1 − δ:

$$\|\mathbf{w}-\widehat{\mathbf{w}}\|_{2}\leq\frac{1}{\sigma_{\min}^{2}(\mathbf{O})}\sqrt{\left(\frac{1+\lambda^{2I^{2}}}{1-\lambda^{2I^{2}}}\right)\frac{I^{2}\big(C+\log(CI^{2}/\delta)\big)}{N}}\leq\frac{I}{\sigma_{\min}^{2}(\mathbf{O})}\sqrt{\frac{2\big(C+\log(CI^{2}/\delta)\big)}{(1-\lambda^{2I^{2}})N}}.$$

Finally, by putting together the bound in P.1 with the one just obtained, we obtain the result stated in the Lemma.

Dependency on the Problem Parameters Lemma 6.1 shows the dependencies of the bound obtained by the proposed estimation approach. The estimation error scales almost linearly with the number of arms I.
This may seem concerning when dealing with problems involving a high number of arms. However, as observed in Section 5.1.1, when the number of arms is large, this dependency can be reduced by using the offline arm selection strategy. We further have a dependency on the minimum value π_min of the induced stationary distribution, which is common in the partially observable setting. The dependency on the minimum singular value σ_min(O) is related to Assumption 4.2: it quantifies how identifiable the different states are given the information provided by the observations. This dependency characterizes the class of weakly-revealing POMDPs and is unavoidable in order to have a tractable problem (Chen et al., 2023). The dependency on the modulus λ of the second largest eigenvalue derives from the fact that the observed samples are not independent but come from a Markov chain. This dependency is thus unavoidable, and we believe it to be tight for the considered setting. It derives from recent results of Fan et al. (2021), which improve over existing concentration results for samples coming from Markov chains. For a thorough comparison of our estimation approach with standard spectral decomposition techniques, we refer to Appendix D.

Continuous Reward Distributions The presented setting tackles the case of discrete observations. Handling continuous reward distributions within this framework is not feasible if we apply our approach as is. However, we can discretize the observation distributions and treat each discretized distribution as a categorical one. The discretization involves dividing the continuous observation distributions into a predetermined number U of distinct consecutive intervals.
Each interval is assigned a probability value that represents the likelihood that a sample drawn from the continuous distribution belongs to that interval. Through this discretization procedure, we can define an action observation matrix of dimension IU × S and then apply Algorithm 1. More details on this aspect can be found in Appendix F.

## 6.2 Analysis Of The SL-EC Algorithm

Having established the results on the estimated matrix P̂, we can now provide regret guarantees for Algorithm 2. We recall that the oracle we compare against is aware of both the observation model O and the transition model P but does not observe the hidden state. As shown in the definition of the regret in Equation 6, it builds a belief over the states using the formulation defined in Equation 3 and selects the arm that maximizes the expected instantaneous reward. The derived regret upper bound is provided in the following:

Theorem 6.1. *Suppose Assumptions 4.1 and 4.2 hold and suppose that the Markov chain with transition matrix* P *has stationary distribution* π ∈ ∆(S)*. By assuming that the chain starts from an arbitrary distribution* ν ∈ ∆(S) *and by considering a finite horizon* T*, there exists a constant* T0*, with* T > T0*, such that with probability at least* 1 − δ*, the regret of the SL-EC Algorithm satisfies:*

$$\mathfrak{R}(T)\leq2\left(\frac{2LI^{2}}{\sigma_{\min}^{2}(\mathbf{O})\,\pi_{\min}}\sqrt{\frac{S\big(C+\log(CI^{2}/\delta)\big)}{(1-\lambda^{2I^{2}})}}\cdot T\right)^{2/3},\tag{16}$$

where L = 4S(1−ϵ)²/ϵ³ + √S is a constant used to bound the error in the estimated belief (more details in Proposition E.6 in Appendix E).

The presented regret has an order of O(T^{2/3}) w.r.t. the horizon T, as is common when using an Explore then Commit algorithm. A detailed proof of this theorem can be found in Appendix B. The presented bound on the regret can be achieved by appropriately choosing the exploration horizon T0.
More specifically, we set it as follows:

$$T_{0}=\left(\frac{2LTI^{2}}{\sigma_{\min}^{2}(\mathbf{O})\,\pi_{\min}}\sqrt{\frac{S\big(C+\log(CI^{2}/\delta)\big)}{(1-\lambda^{2I^{2}})}}\right)^{2/3}.\tag{17}$$

To compute T0, we need information about the minimum value π_min of the stationary distribution and about the modulus λ of the second largest eigenvalue. If they are not available, a slightly different version of the bound can be derived so that T0 can be optimized by only requiring the knowledge of ϵ from Assumption 4.1. More details are reported in Section B.3 of Appendix B.

Figure 1: (a) Difference between the estimated and real transition matrix with an increasing number of samples; the metric used is ∥·∥_F (10 runs, 95% c.i.). (b) Difference between the real and estimated transition matrix using two different subsets of arms of size J = 3 from the 8 available, on a problem with 5 states; the metric used is ∥·∥_F (10 runs, 95% c.i.).

Choice of Explore then Commit The choice of an Explore then Commit type of algorithm is mainly due to its simplicity. We believe that this scaling of the regret with respect to time cannot be further improved within the considered class of problems, mainly due to the identifiability condition reported in Assumption 4.2. Indeed, this class of problems includes worst-case instances where we are guaranteed to acquire information only by pulling all the available arms, so a forced exploration phase is always required. A similar approach has been used in Zhou et al. (2021): they devise the SEEU algorithm, which alternates between full exploration and full exploitation phases, reaching a Õ(T^{2/3}) regret guarantee. Instead of a phased algorithm, we opted for an EC approach.
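As a sketch, the exploration horizon of Equation 17 can be computed directly once the problem constants are known; all numerical values below are hypothetical placeholders:

```python
import numpy as np

def exploration_horizon(T, I, S, L, sigma_min_O, pi_min, lam, C, delta):
    """Exploration horizon T0 from Equation 17 (all inputs assumed known)."""
    inner = np.sqrt(S * (C + np.log(C * I**2 / delta)) / (1.0 - lam**(2 * I**2)))
    return ((2.0 * L * T * I**2) / (sigma_min_O**2 * pi_min) * inner) ** (2.0 / 3.0)

# Hypothetical constants for illustration only.
T0 = exploration_horizon(T=1e6, I=4, S=3, L=50.0,
                         sigma_min_O=0.5, pi_min=0.2, lam=0.6, C=1.0, delta=0.05)
print(round(T0))
```

Note that T0 grows as T^{2/3}, so the fraction of the horizon devoted to exploration vanishes as T grows; in the experiments, the theoretical value is additionally scaled down as discussed in Section 7.2.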
## 7 Numerical Simulations

In this section, we provide numerical simulations on synthetic and semi-synthetic data based on the MovieLens 1M (Harper & Konstan, 2015) dataset, demonstrating the effectiveness of the proposed Markov chain estimation procedure. Specifically, we show the efficiency of the offline arm selection procedure described in Section 5.1.1 and conduct a comparison between our SL-EC Algorithm and several baselines in non-stationary settings. In Section 7.3, we provide additional experiments that highlight the performance difference between our approach and a modified technique based on Spectral Decomposition. Finally, in Appendix A, we provide further sets of experiments showing the regret comparison under different exploration horizons (Appendix A.2) and numerical simulations showing the estimation error incurred when the provided observation model is misspecified (Appendix A.3).

## 7.1 Estimation Error Of Transition Matrix

The first set of experiments shows the error incurred by the estimation procedure of the transition matrix, in relation to the number of samples considered and the set of actions used for estimation. The left side of Figure 1 illustrates the estimation error of the transition matrix for Switching Bandit instances with an increasing number of states. In particular, we fix the total number of actions I = 10 and the number of observations V = 10, and consider three instances with S = 5, S = 10, and S = 15 states. As expected, as the number of states increases, the problem becomes more complex and more samples are needed to improve the estimation. Figure 1 reports the Frobenius norm ∥·∥_F of the error between the true and the estimated transition matrix.
We can notice that the estimation procedure is particularly efficient, leading to low error values even with a limited number of samples, as can be observed from the steep error drop in the first part of the plot.

Figure 2: Plots of regret comparing the SL-EC Algorithm with some non-stationary bandit algorithms using: (a) synthetic data with parameters S = 3 states, I = 4 actions, and V = 5 observations (5 runs, 95% c.i.); (b) data from MovieLens assuming S = 5 states, I = 18 actions, and V = 5 observations (5 runs, 95% c.i.).

The right plot in Figure 1, instead, shows the estimation error obtained by using different subsets of arms. As mentioned in previous sections, it is not always beneficial to use all the available actions during the estimation procedure; selecting a subset of actions may be preferable. Furthermore, we show that by selecting specific subsets of arms, we can improve the estimation w.r.t. using other subsets. For this experiment, we consider J = 3 arms among the I = 8 available for a Switching MAB instance with S = 5 states. We then identify the optimal subset of arms of size J and initiate the estimation process using the selected subset. In order to find the best one, we generate all matrices of type G, as described in Section 5.1.1, and choose the matrix with the highest σ_min(G); the subset of arms generating that matrix is used for estimation. The estimation error of the best subset of arms is represented in the plot with the red line, while the green line shows the estimation error of the subset having the lowest σ_min(G). The figure clearly exhibits the performance difference between the two choices, thereby validating our claims. Additional details about the characteristics of the matrices used in the experiments are provided in Appendix A.1.
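The estimation experiment of Section 7.1 can be reproduced in miniature. The sketch below uses a small hand-crafted instance (2 arms, 3 states, 3 reward values, all matrices hypothetical) and follows Algorithm 1: round-robin pulls of arm pairs, least-squares inversion of d = Aw, and row normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
S, I, V = 3, 2, 3

# Hypothetical transition matrix (ergodic) and action observation matrix O:
# one V x S column-stochastic block per arm.
P = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
O = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.7, 0.2, 0.1],
              [0.2, 0.1, 0.7],
              [0.1, 0.7, 0.2]])

A = np.kron(O, O)                        # reference matrix (Equation 11)

N = 4000                                 # rounds: each arm pair pulled N times
counts = np.zeros((I * V, I * V))
s = 0                                    # hidden state, switching at every step
for _ in range(N):
    for a in range(I):                   # round-robin over all pairs of arms
        for a2 in range(I):
            r = rng.choice(V, p=O[a * V:(a + 1) * V, s])
            s = rng.choice(S, p=P[s])
            r2 = rng.choice(V, p=O[a2 * V:(a2 + 1) * V, s])
            s = rng.choice(S, p=P[s])
            counts[a * V + r, a2 * V + r2] += 1

d_hat = counts.flatten() / N                          # Equation 13, concatenated
w_hat, *_ = np.linalg.lstsq(A, d_hat, rcond=None)     # Equation 14
W_hat = np.clip(w_hat.reshape(S, S), 1e-12, None)     # small numerical safeguard
P_hat = W_hat / W_hat.sum(axis=1, keepdims=True)      # row normalization
print(np.linalg.norm(P - P_hat))                      # Frobenius error
```

The clipping of negative entries in Ŵ is a pragmatic safeguard added here (the paper's procedure does not need it in the analysis); with more rounds N, the printed Frobenius error shrinks, mirroring the curves of Figure 1(a).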
## 7.2 Algorithms Comparisons

In this second set of experiments, we compare the regret suffered by our SL-EC approach with other algorithms specifically designed for non-stationary environments. Following the recent work of Zhou et al. (2021), we consider the following baseline algorithms: the simple *ϵ-greedy* heuristic; a sliding-window algorithm such as *SW-UCB* (Garivier & Moulines, 2011), which is generally able to deal with non-stationary settings; and the *Exp3.S* (Auer et al., 2002) algorithm. The parameters for all the baseline algorithms have been properly tuned according to the settings considered. For the specific experiments considered, we adopted scaled values for the exploration horizon T0 w.r.t. the result derived from the theory.⁷ It is worth noting that, unlike our SL-EC algorithm, the baselines do not assume knowledge of the observation model or the underlying Markov chain. In contrast, our approach uses the observation model to estimate the transition matrix and to update the belief over the current state. Additionally, we compare our approach with the particle filter algorithm proposed in Hong et al. (2020b) for non-stationary Latent Bandits. They consider two settings: one with complete knowledge of both the observation and transition models, and another that incorporates priors on the parameters of the models to account for uncertainty. We compare against a mixture of these two settings by providing their algorithm with full information about the observation model (as in our case) and an informative prior on the true transition model. The comparison is made in terms of the empirical cumulative regret R̂(t) averaged over multiple independent runs.

⁷ We used the value suggested by Equation 17 divided by (10L)^{2/3}. Scaling values obtained from theory is common in the scientific literature and mostly translates into bigger multiplicative constants in the final regret bound, or into similar bounds holding with smaller probability.
In our case, under the reduced exploration value, the regret presented in Theorem 6.1 increases by a multiplicative factor of (10L)^{1/3}.

## 7.2.1 Synthetic Experiments

These experiments have been conducted on various problem configurations with different numbers of states S, actions I, and observations V. The regret results for one configuration are shown in Figure 2(a). From the figure, it is clear that most of the baseline algorithms display a linear time dependence of the regret. This is expected, since these algorithms do not take into account the underlying Markov chain that governs the process. The particle filter algorithm, despite being given a good initial prior on the transition model, does not achieve the same performance as SL-EC in the long run. Conversely, we notice a quite different behavior for our algorithm: in line with an Explore then Commit approach, it initially accumulates a large regret and then experiences a drastic slope change when the exploitation phase begins. The regret shown in each plot is the average over all runs. For further information regarding the generation of the transition and observation models, as well as the hyperparameters used for the baseline algorithms, we refer the reader to Appendix A.1. As a remark, our algorithm outperforms the others when the absolute spectral gap 1 − λ of the chain has values closer to 1. Indeed, if this is not the case, simple exploration heuristics such as ϵ-greedy lead to comparable performance. A clear example is when the transition matrix P defining the chain assigns equal probability to all transitions. In this scenario, all states can be considered independent and identically distributed, and we get no advantage from the knowledge of the matrix P over the use of an algorithm such as ϵ-greedy.
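The degenerate case just mentioned is easy to see directly: with a uniform transition matrix, the one-step prediction of any belief is again the uniform distribution, so knowledge of P carries no information about the next state. A minimal check:

```python
import numpy as np

S = 4
P_uniform = np.full((S, S), 1.0 / S)   # every transition equally likely

rng = np.random.default_rng(0)
b = rng.random(S)
b /= b.sum()                           # arbitrary belief over states

# One-step prediction of the belief, b P: for the uniform chain it is
# always the uniform distribution, whatever b is.
assert np.allclose(b @ P_uniform, np.full(S, 1.0 / S))
```

Hence, tracking the chain gives no edge over a stationary heuristic in this regime, which matches the remark above.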
## 7.2.2 MovieLens Experiments

We also perform experiments on semi-synthetic data based on MovieLens 1M (Harper & Konstan, 2015), a well-known collaborative filtering dataset in which users rate movies, each belonging to a specific set of genres. We adopt a procedure similar to the one used in Hong et al. (2020b). The dataset is initially filtered to include only users who rated at least 100 movies and movies that have been rated by at least 100 users. After that, we combine the available information to obtain a table where each row contains the mean of the ratings for each observed genre for each user (the user-genre-rating table). If the user did not rate any movie belonging to a specific genre, the cell is empty. From the obtained matrix, we select 70% of all ratings as a training dataset and use the remaining 30% as a test set. The sparse matrices so obtained are completed using least-squares matrix completion (Mnih & Salakhutdinov, 2007) with rank 10, leading to a low prediction error. Having fixed the appropriate rank, we use the predictions on the empty cells of the original user-genre-rating matrix to fill the entire table. We define a switching bandit instance by using the notion of a *superuser*, inspired by Hong et al. (2020b). We use k-means to cluster users based on the rows of the user-genre-rating matrix. The users belonging to the same cluster define a superuser that embeds a set of users with similar tastes. The information about the users belonging to the same cluster is then combined and used to generate categorical distributions over the ratings, given each superuser and each possible genre (our actions). We choose k = 5 for the number of superusers, as it yields clusters of the most similar sizes, and we use I = 18 actions, since this is the number of identified genres. The number of observations V = 5 corresponds to the 5 possible ratings that a movie can get.
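A minimal sketch of this preprocessing pipeline, using a synthetic low-rank stand-in for the actual MovieLens user-genre table, and using a simple alternating-least-squares routine and a bare-bones k-means as stand-ins for the completion and clustering steps (the helper names `complete` and `kmeans` are ours, and the sizes and rank are scaled down for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the user-genre rating table; in the paper this
# table is built from MovieLens 1M. `mask` marks observed cells.
n_users, n_genres, rank = 60, 18, 3
ratings = rng.normal(size=(n_users, rank)) @ rng.normal(size=(n_genres, rank)).T
mask = rng.random((n_users, n_genres)) < 0.7

def complete(R, mask, r=3, iters=50, reg=1e-2):
    """Rank-r completion by alternating least squares (a simplified
    stand-in for the rank-10 completion step used in the paper)."""
    n, m = R.shape
    A = rng.normal(scale=0.1, size=(n, r))
    B = rng.normal(scale=0.1, size=(m, r))
    for _ in range(iters):
        for i in range(n):
            idx = mask[i]
            A[i] = np.linalg.solve(B[idx].T @ B[idx] + reg * np.eye(r),
                                   B[idx].T @ R[i, idx])
        for j in range(m):
            idx = mask[:, j]
            B[j] = np.linalg.solve(A[idx].T @ A[idx] + reg * np.eye(r),
                                   A[idx].T @ R[idx, j])
    return A @ B.T

filled = complete(ratings, mask)

def kmeans(X, k=5, iters=20):
    """Bare-bones k-means: each cluster of users defines one superuser."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

superuser = kmeans(filled, k=5)  # cluster label per user
```

After this step, the rating histograms of the users in each cluster would be aggregated into the categorical observation distributions of the corresponding superuser.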
The transition matrix that governs the dynamics with which superusers alternate is defined by giving higher probabilities to transitions between similar states and higher weights to self-loops, in order to avoid too frequent changes. The interaction proceeds as follows. At each step, a new superuser st is sampled based on st−1 and the transition model. The agent chooses an action at corresponding to a genre to propose and receives a rating sampled from the categorical distribution defined by the vector O((at, ·), st). As in the synthetic case, our algorithm is compared to the other baselines. From Figure 2(b), we can see that SL-EC still outperforms the other baselines over the considered horizon. However, we highlight that our goal is not to beat the baselines, since the comparison is not entirely fair, as most of them do not take the underlying Markov process into account; rather, we aim to show the difference w.r.t. other state-of-the-art algorithms. More details about the experiments on MovieLens are reported in Appendix A.

| 2 States | 3K samples | 6K samples | 9K samples | 15K samples |
|------------|-----------------|-----------------|-----------------|-----------------|
| SD O | 0.0493 (0.0097) | 0.0379 (0.0103) | 0.0335 (0.0097) | 0.0259 (0.0081) |
| SD T | 0.0342 (0.0185) | 0.0189 (0.0097) | 0.0149 (0.0032) | 0.0101 (0.007) |
| Alg. 1 | 0.0234 (0.015) | 0.02 (0.0203) | 0.0119 (0.009) | 0.008 (0.0032) |
| 3 States | 150K samples | 300K samples | 600K samples | 900K samples |
| SD O | 0.0165 (0.0044) | 0.0113 (0.0036) | 0.0097 (0.0033) | 0.0085 (0.0018) |
| SD T | 0.1547 (0.0517) | 0.154 (0.0532) | 0.1544 (0.0534) | 0.1541 (0.0532) |
| Alg. 1 | 0.0066 (0.0026) | 0.0046 (0.0012) | 0.0037 (0.0018) | 0.0031 (0.0008) |
| 5 States | 150K samples | 300K samples | 600K samples | 900K samples |
| SD O | 0.0681 (0.0178) | 0.0513 (0.0111) | 0.0354 (0.0127) | 0.0283 (0.0082) |
| SD T | 0.2409 (0.0633) | 0.2484 (0.0584) | 0.243 (0.0603) | 0.2407 (0.0601) |
| Alg. 1 | 0.0283 (0.0054) | 0.0195 (0.0036) | 0.0137 (0.0033) | 0.0115 (0.0034) |

Table 1: Comparison with Nearly Deterministic Models.

## 7.3 Numerical Comparisons With A Modified Spectral Decomposition Technique

The focus of this last set of experiments is to show the difference between a modified spectral decomposition (SD) technique and our estimation approach detailed in Algorithm 1. Among their various applications, SD techniques are typically used for learning with hidden Markov models (HMMs) when no information about the observation and transition models is available. Zhou et al. (2021) use these techniques to obtain estimates of both the observation and the transition model. It is important to highlight that SD methods are rarely used in practice because of their computational and sample complexity. Indeed, the related works of Zhou et al. (2021) and Azizzadenesheli et al. (2016) include only proof-of-concept experiments with 2 hidden states and 2 possible actions. To make the comparison fairer, we consider a modified SD technique that is provided with information about the observation model in order to aid the estimation process, as explained below. The original SD technique to which we refer follows the procedure of Anandkumar et al. (2014) for HMMs and uses the Robust Tensor Power (RTP) method for orthogonal tensor decomposition. In typical SD techniques, data is collected by sampling an action at each time step and updating the computed statistics with the observed realization. With the modified SD technique, at each step we do not simply update the statistics with the observation obtained when pulling the arm; instead, we provide information about the observation distribution of all the available arms, conditioned on the underlying current state.
In this way, it is as if all the arms were pulled at each step, receiving full information about their associated reward distributions given the underlying state. We perform various experiments by fixing the number of arms (I = 20) and the number of possible rewards (V = 5) for each arm, and by varying the number of states. Each experiment is performed over 10 different runs, where for each run a transition and an observation model are generated. For each experiment, our estimation procedure uses 3 arms among the 20 available, selected using the offline arm selection strategy. The transition and observation matrices are created in two different ways: the former focuses on nearly deterministic matrices (Table 1), while the latter considers more stochasticity for both of them (Table 2). The results of the experiments are structured in the following way. Each of the two tables contains mini-tables representing sets of experiments characterized by a different number of states. For a fixed number of states, each mini-table shows three rows: the first (indicated with *SD O*) contains the Frobenius norm of the estimation error of the observation matrix with the modified SD technique, the second (indicated with *SD T*) contains the Frobenius norm of the estimation error of the transition matrix with the modified SD technique, and the third contains the Frobenius norm of the estimation error of the transition matrix computed with Algorithm 1. For each experiment, we report the mean error over the 10 runs and one standard deviation in parentheses.
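The reported metric can be sketched as follows. This is a hypothetical example with a made-up 3×3 transition matrix and noisy estimates standing in for the outputs of the modified SD technique or Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def frobenius_error(P_hat, P):
    """Frobenius norm of the estimation error, as reported in Tables 1-2."""
    return np.linalg.norm(P_hat - P, ord="fro")

# Hypothetical setup: 10 runs, each yielding an estimate of a 3x3
# transition matrix (here the estimates are just noisy copies).
P_true = np.array([[0.8, 0.1, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8]])
errors = [frobenius_error(P_true + rng.normal(scale=0.01, size=(3, 3)), P_true)
          for _ in range(10)]

# Each table cell reports the mean error over runs and one standard
# deviation in parentheses.
cell = f"{np.mean(errors):.4f} ({np.std(errors, ddof=1):.4f})"
```

The string `cell` has exactly the format of one table entry, e.g. "0.0301 (0.0052)" (the numbers depend on the noise draw).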
The modified SD technique clearly enhances the accuracy of estimating the observation model compared to standard SD approaches: this is evident from the relatively low estimation errors observed in the *SD O* rows.

| 2 States | 150K samples | 210K samples | 270K samples |
|------------|-----------------|-----------------|-----------------|
| SD O | 0.1500 (0.2639) | 0.1411 (0.2741) | 0.1455 (0.2665) |
| SD T | 0.1488 (0.1536) | 0.1699 (0.1742) | 0.1576 (0.1702) |
| Alg. 1 | 0.0145 (0.0175) | 0.0145 (0.0134) | 0.0125 (0.0103) |
| 3 States | 300K samples | 600K samples | 900K samples |
| SD O | 0.2987 (0.2128) | 0.3078 (0.2177) | 0.2594 (0.2309) |
| SD T | 0.3916 (0.2804) | 0.4425 (0.2637) | 0.4187 (0.2728) |
| Alg. 1 | 0.0077 (0.003) | 0.0063 (0.0023) | 0.0052 (0.002) |

Table 2: Comparison with Higher Model Stochasticity.

We present this information to illustrate that the comparison between our estimation procedure and SD approaches is now fairer due to the modified SD technique employed. Having clarified this aspect, we focus on the estimation error of the transition model for the two methods: this information is separated from *SD O* by a dashed line, and the experiments with lower estimation errors are shown in bold. The results of the first set of experiments are reported in Table 1. As already anticipated, both the observation and transition matrices are almost deterministic, placing high probability on a specific observation/state and low probability on all the others. For transition matrices, the highest probability is assigned to staying in the same state. Near-determinism simplifies the problem by making states more distinguishable. Inspecting the results, it is clear that Algorithm 1 outperforms the modified SD technique in almost all scenarios. Comparable results are only achieved in the experiment with 2 states.
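A plausible sketch of how such nearly deterministic models could be generated is given below. The exact generation procedure is described in Appendix A.1; the helper names and the parameters `p_stay` and `p_peak` are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def nearly_deterministic_transition(S, p_stay=0.95):
    """Row-stochastic matrix with most mass on the self-loop, mirroring
    the near-deterministic setting of Table 1 (sketch, not the paper's
    exact generator)."""
    P = np.full((S, S), (1.0 - p_stay) / (S - 1))
    np.fill_diagonal(P, p_stay)
    return P

def nearly_deterministic_observation(S, I, V, p_peak=0.9):
    """For each (action, state) pair, put most probability mass on one
    observation and spread the rest uniformly (sketch)."""
    O = np.full((I, S, V), (1.0 - p_peak) / (V - 1))
    peaks = rng.integers(V, size=(I, S))
    for a in range(I):
        for s in range(S):
            O[a, s, peaks[a, s]] = p_peak
    return O

P = nearly_deterministic_transition(5)
O = nearly_deterministic_observation(5, 20, 5)  # S=5, I=20, V=5 as in Sec. 7.3
```

Lowering `p_stay` and `p_peak` moves these generators toward the higher-stochasticity regime of Table 2, where distinguishing states from observations becomes harder.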
Table 2 instead reports the experimental results obtained using transition and observation matrices with less peaked distributions, i.e., higher stochasticity. The discrepancy between our approach and the modified SD technique is more evident in this scenario. This can be justified by the theoretical comparison reported in Appendix D, where it can be observed that, compared to our estimation approach, SD techniques have a higher-order dependency on the minimum singular values of both the observation and the transition models. Thus, when the observation matrix is more stochastic, its σmin(O) typically decreases, which results in a higher estimation error. Indeed, it can be noticed that the estimation error is significant and the number of samples required for this set of experiments is much higher than for the nearly deterministic case. Experiments involving a higher number of states were not able to reach convergence with a number of samples of the order of 10^5, and attempts to increase this quantity ran into memory limitations on the hardware used (an 11th-gen Intel Core i7 with 16 GB of RAM). Again, we emphasize that SD techniques are explicitly meant to work in a different, intrinsically more complex setting, where no information about either the transition or the observation model is provided. However, with this set of experiments, we wanted to show that even when knowledge of the observation model is available, directly using this information in SD techniques does not lead to performance comparable to our approach.

## 8 Discussion And Conclusions

This paper studies a latent bandit problem with latent states changing over time according to an underlying unknown Markov chain. Each state is represented by a different bandit instance that is unobserved by the agent.
As is common in the latent bandit literature, we assumed knowledge of the observation model relating each MAB to the reward distributions of its actions, and, under some mild assumptions, we presented a novel estimation technique using the information derived from consecutive pulls of pairs of arms. To the best of our knowledge, we are the first to present an estimation procedure of this type, aimed at directly estimating the stationary distribution w of consecutive states. The approach is easy to use and does not require specific hyperparameters to be set. We provided an offline arm selection strategy that selects the best subset of arms to speed up the estimation process. We analyzed how the problem parameters affect the complexity of the problem, and we showed how our estimation approach can be extended to handle models with continuous observation distributions. We used the presented technique in our SL-EC algorithm, which follows an Explore then Commit approach and for which we proved an O(T^{2/3}) regret bound. The experimental evaluation confirmed our theoretical findings, showing advantages over baseline algorithms designed for non-stationary MABs and good estimation performance even on larger problems. Furthermore, we compared our approach both empirically and theoretically (Appendix D) with SD techniques, taking into account the differences between the two procedures. We identify several future research directions for the presented work, such as designing new algorithms able to exploit the flexibility in the exploration policy afforded by the proposed procedure, possibly in an optimistic way. It may also be interesting to deepen the understanding of this problem when dealing with continuous reward models, designing optimal ways to discretize them in order to achieve faster estimation.
We could also consider the extension to the continuous state-space setting (e.g., linear MDPs): among the main challenges in this scenario are the adoption of a different representation for the reference matrix, which would otherwise not be computable with infinitely many states, and the redefinition of the stationary distribution over consecutive states. In such a case, it might be beneficial to directly estimate the feature functions by means of which the linear MDP is defined. Finally, it might be worth considering a contextual version of the proposed setting. Depending on the assumptions made, for example, whether the context is discrete or continuous, or whether it is related to the latent state, this aspect may add another dimension to the observation space. Redefining the reference matrix to take this feature into account will likely lead to more informative components and help with the estimation process.

## References

Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. *Journal of Machine Learning Research*, 15(1):2773–2832, 2014.

Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. *SIAM Journal on Computing*, 32(1):48–77, 2002.

Peter Auer, Pratik Gajane, and Ronald Ortner. Adaptively tracking the best bandit arm with an unknown number of distribution changes. In *Proceedings of the Thirty-Second Conference on Learning Theory*, volume 99 of *Proceedings of Machine Learning Research*, pp. 138–158, 2019.

Kamyar Azizzadenesheli, Alessandro Lazaric, and Anima Anandkumar. Reinforcement learning of POMDPs using spectral methods. In *Annual Conference on Computational Learning Theory*, 2016.

Omar Besbes, Yonatan Gur, and Assaf Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. In *Advances in Neural Information Processing Systems*, 2014.

Yang Cao, Zheng Wen, Branislav Kveton, and Yao Xie. Nearly optimal adaptive procedure with change detection for piecewise-stationary bandit. In *International Conference on Artificial Intelligence and Statistics*, 2018.

Fan Chen, Huan Wang, Caiming Xiong, Song Mei, and Yu Bai. Lower bounds for learning in revealing POMDPs. In *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 5104–5161. PMLR, 2023.

Yohann De Castro, Élisabeth Gassiat, and Sylvain Le Corff. Consistent estimation of the filtering and marginal smoothing distributions in nonparametric hidden Markov models. *IEEE Transactions on Information Theory*, 63(8):4758–4777, 2017.

Jianqing Fan, Bai Jiang, and Qiang Sun. Hoeffding's inequality for general Markov chains and its applications to statistical learning. *Journal of Machine Learning Research*, 22(139):1–35, 2021.

William Feller. *An Introduction to Probability Theory and Its Applications, Vol. I*. Wiley, 1968.

Tanner Fiez, Shreyas Sekar, and Lillian J. Ratliff. Multi-armed bandits for correlated Markovian environments with smoothed reward feedback. *arXiv preprint*, 2018.

Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for switching bandit problems. In *Proceedings of the 22nd International Conference on Algorithmic Learning Theory*, pp. 174–188, 2011.

Gene H. Golub and Charles F. Van Loan. *Matrix Computations*. The Johns Hopkins University Press, third edition, 1996.

Jiaxing Guo, Qian Sang, and Niklas Karlsson. Adaptive seasonality estimation for campaign optimization in online advertising. In *2021 American Control Conference (ACC)*, pp. 1450–1455, 2021.

F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. *ACM Transactions on Interactive Intelligent Systems*, 5(4), 2015.

Steven L. Heston and Ronnie Sadka. Seasonality in the cross-section of stock returns. *Journal of Financial Economics*, 87(2):418–445, 2008.

Joey Hong, Branislav Kveton, Manzil Zaheer, Yinlam Chow, Amr Ahmed, and Craig Boutilier. Latent bandits revisited. In *Advances in Neural Information Processing Systems*, volume 33, pp. 13423–13433. Curran Associates, Inc., 2020a.

Joey Hong, Branislav Kveton, Manzil Zaheer, Yinlam Chow, Amr Ahmed, Mohammad Ghavamzadeh, and Craig Boutilier. Non-stationary latent bandits. *CoRR*, abs/2012.00386, 2020b.

Daniel Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. *Journal of Computer and System Sciences*, 78(5):1460–1480, 2012.

Mehdi Jafarnia Jahromi, Rahul Jain, and Ashutosh Nayyar. Online learning for unknown partially observable MDPs. In *Proceedings of the 25th International Conference on Artificial Intelligence and Statistics*, 2022.

Bowen Jiang, Bo Jiang, Jian Li, Tao Lin, Xinbing Wang, and Chenghu Zhou. Online restless bandits with unobserved states. In *Proceedings of the 40th International Conference on Machine Learning*, 2023.

Chi Jin, Sham M. Kakade, Akshay Krishnamurthy, and Qinghua Liu. Sample-efficient reinforcement learning of undercomplete POMDPs. In *Proceedings of the 34th International Conference on Neural Information Processing Systems*. Curran Associates, Inc., 2020.

J. Kiefer and J. Wolfowitz. The equivalence of two extremum problems. *Canadian Journal of Mathematics*, 12:363–366, 1960.

Newton Mwai Kinyanjui, Emil Carlsson, and Fredrik D. Johansson. Fast treatment personalization with latent bandits in fixed-confidence pure exploration. *Transactions on Machine Learning Research*, 2023. URL https://openreview.net/forum?id=NNRIGE8bvF.

Aryeh Kontorovich, Boaz Nadler, and Roi Weiss. On learning parametric-output HMMs. In *Proceedings of the 30th International Conference on Machine Learning*, volume 28 of *Proceedings of Machine Learning Research*, pp. 702–710. PMLR, 2013.

Vikram Krishnamurthy. *Partially Observed Markov Decision Processes: From Filtering to Controlled Sensing*. Cambridge University Press, 2016.

Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, and Shie Mannor. Tractable optimality in episodic latent MABs. In *Advances in Neural Information Processing Systems*, volume 35, pp. 23634–23645. Curran Associates, Inc., 2022.

Tor Lattimore and Csaba Szepesvári. *Bandit Algorithms*. Cambridge University Press, 2020.

Fang Liu, Joohyung Lee, and Ness B. Shroff. A change-detection based framework for piecewise-stationary multi-armed bandit problem. In *AAAI Conference on Artificial Intelligence*, 2017.

Qinghua Liu, Alan Chung, Csaba Szepesvári, and Chi Jin. When is partially observable reinforcement learning not scary? 2022.

Charles F. Van Loan. The ubiquitous Kronecker product. *Journal of Computational and Applied Mathematics*, 123(1):85–100, 2000.

Krishnadas M., K. P. Harikrishnan, and G. Ambika. Recurrence measures and transitions in stock market dynamics. *Physica A: Statistical Mechanics and its Applications*, 608:128240, 2022.

Omid Madani. On the computability of infinite-horizon partially observable Markov decision processes. 1999.

Odalric-Ambrym Maillard and Shie Mannor. Latent bandits. In *Proceedings of the 31st International Conference on Machine Learning*, 2014.

Robert Mattila, Cristian Rojas, Eric Moulines, Vikram Krishnamurthy, and Bo Wahlberg. Fast and consistent learning of hidden Markov models by incorporating non-consecutive correlations. In *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 6785–6796. PMLR, 2020.

Colin McDiarmid. On the method of bounded differences. In *London Mathematical Society Lecture Note Series*, pp. 148–188. Cambridge University Press, 1989.

Andriy Mnih and Russ R. Salakhutdinov. Probabilistic matrix factorization. In *Advances in Neural Information Processing Systems*, volume 20. Curran Associates, Inc., 2007.

Ronald Ortner, Daniil Ryabko, Peter Auer, and Rémi Munos. Regret bounds for restless Markov bandits. *Theoretical Computer Science*, 558:62–76, 2014.

Martin L. Puterman. *Markov Decision Processes: Discrete Stochastic Dynamic Programming*. John Wiley & Sons, Inc., first edition, 1994.

Giorgia Ramponi, Amarildo Likmeta, Alberto Maria Metelli, Andrea Tirinzoni, and Marcello Restelli. Truly batch model-free inverse reinforcement learning about multiple intentions. In *Proceedings of the Twenty-Third International Conference on Artificial Intelligence and Statistics*, 2020.

Aleksandrs Slivkins and Eli Upfal. Adapting to a changing environment: the Brownian restless bandits. In *Annual Conference on Computational Learning Theory*, 2008.

Francesco Trovò, Stefano Paladino, Marcello Restelli, and Nicola Gatti. Sliding-window Thompson sampling for non-stationary settings. *Journal of Artificial Intelligence Research*, 68:311–364, 2020.

Yi Xiong, Ningyuan Chen, Xuefeng Gao, and Xiang Zhou. Sublinear regret for learning POMDPs. 2022.

Li Zhou and Emma Brunskill. Latent contextual bandits and their application to personalized recommendations for new users. In *Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence*, pp. 3646–3653. AAAI Press, 2016.

Xiang Zhou, Yi Xiong, Ningyuan Chen, and Xuefeng Gao. Regime switching bandits. In *Advances in Neural Information Processing Systems*, 2021.
Review 1:

Summary: This paper introduces a novel approach to the switching latent bandits problem, where an agent must solve multiple multi-armed bandit (MAB) problems evolving according to an unknown underlying Markov chain. The proposed explore-then-commit algorithm handles this challenge by estimating the transition matrix during exploration and selecting actions optimally based on the believed latent state during exploitation. The theoretical analysis establishes a regret bound of $O(T^{2/3})$, demonstrating the algorithm's competitiveness compared to existing approaches. Empirical evaluations validate its effectiveness through numerical simulations and real-data experiments, showcasing superior performance over alternative methods.

Strengths and Weaknesses:

Strengths: The problem setup is interesting, and the paper is well-structured. The writing is comprehensive, and the notation is clear. Experiments are reported on synthetic and real data.

Weaknesses:

Algorithm and regret bound: I am concerned about the $O(T^{2/3})$ regret guarantee, which is standard for explore-and-commit algorithms, and the analysis is straightforward. I am missing the novelty of the algorithmic ideas and analysis techniques, especially given that the work is predominantly theoretical. I wonder why the authors did not try a UCRL-type approach (a UCB-based method) to achieve an $O(\sqrt{T})$ bound? This appears to be a major limitation of the work. In fact, a few related works, e.g., Azizzadenesheli et al. (2016), achieved $O(\sqrt{T})$ regret, and the contribution of the current work over these related works is unclear. It would be beneficial to have a table listing all the related works, their modeling assumptions, objectives, and performance guarantees to understand the precise contribution of this work.
Dependencies on the problem parameters: In the final regret bound of Theorem 6.1, the dependency on the problem-dependent parameters, e.g., $\epsilon$, $\sigma_{\min}$, $\lambda$, etc., requires more discussion. Are these dependencies tight? For example, I am surprised by the $O(1/\epsilon^2)$ dependency; why do we need to explore all $P(s,s')$, especially if the rewards associated with these states are small? This seems non-intuitive. Some discussion of the tightness of the derived regret bounds would be useful.

Assumptions: Following from the above point, it is also important to understand the necessity of the assumptions, e.g., Assumptions 4.1 and 4.2 --- can they be relaxed if the dependencies on the parameters are not tight? Also, how practical are these assumptions for real-world problems? Can you give a real-world scenario in which they are satisfied? In general, the motivation of the setting needs to be explained better; the one-paragraph discussion of financial markets and online advertising is not enough to understand the exact mapping of the problem framework to these practical examples; a more detailed explanation would be useful.

Minor: The citation Kwon et al. (2022) seems to have typos. Check the formatting of the bibliography.

Requested Changes: Please consider the detailed feedback provided under Weaknesses above, specifically the concerns around regret optimality, modeling assumptions, novelty against existing techniques (with a table), and the justification of the motivation.

Broader Impact Concerns: The work is predominantly theoretical; I do not see any specific broader impact concerns.

==================================================

Review 2:

Summary: This paper studies a latent bandit problem where the latent state evolves according to an unknown Markov chain. The authors assume perfect knowledge of the reward distribution of each arm, conditioned on the latent state.
They introduce a novel estimation method for the transition matrix that leverages this knowledge. This result is later used in the proposed explore-then-commit algorithm SL-EC. The algorithm enjoys a $\tilde{O}(T^{2/3})$ regret bound, and the authors illustrate that it has an empirical advantage over baselines.

Strengths and Weaknesses:

### Strengths:
- This paper studies an interesting problem that, to the best of my knowledge, is underexplored in the literature.
- The paper is well-written.
- The estimator seems novel.

### Weaknesses:
- The approach requires full knowledge of the reward model for each latent state, which is restrictive. Are there any real-world problems where this is true? Does your algorithm work well even when we only have access to an approximation of the reward distributions?
- The $T^{2/3}$ scaling is undesirable, and I think it might be unavoidable when using explore-then-commit. Why go for explore-then-commit rather than approaches based on UCB or Thompson sampling, which usually enjoy much better regret? Are there any particular challenges in applying these approaches to this problem?
- Both $\sigma_{\min}$ and $\pi_{\min}$ can be very small in practice, and my understanding is that this could make the bound in Thm 6.1 vacuous, since it could happen that $T_0 > T$ even for moderately large $T$. It seems like this undesirable scaling is due to the fact that we try to perfectly estimate $P$; do we really need to do that to have good regret?
- The regret bound holds only under the assumption that the chain starts from its stationary distribution. The authors state that similar results can be shown without this assumption. Why not have this stronger result in the paper?
- The empirical evaluation can be improved. SL-EC is not evaluated using the budget suggested by Eq 17 and instead uses scaled values. The amount of scaling is not presented in the paper.
I understand that what is suggested by theory is often conservative in practice, but having a version of SL-EC that uses the budget suggested by Eq 17 would help make sense of the theoretical results.
- Plotting $R(t)$ for a single horizon, as in Figure 2, is not very informative. We already know that the algorithm has linear regret up to $t=T_0$ and then either zero regret or, with low probability, linear regret. I suggest that the authors also plot the regret over several horizons, i.e., $R(T)$, to illustrate the dependence on $T$ (since the budget $T_0$ is a function of $T$). This would tell us for which horizons SL-EC has an advantage over baselines.
- The confidence intervals in the empirical section look too good to be true given the small number of seeds, e.g., see Fig 2, where only 5 seeds are used. I wonder if this is due to the fact that the cumulative regret has a multimodal distribution and the CI wrongly assumes a Gaussian distribution. Note that with probability $1-\delta$ the algorithm suffers no regret after $T_0$, and with probability $\delta$ it suffers linear regret. Thus, with probability $(1-\delta)^k$, $k$ being the number of seeds, the sample variance in instant regret for any $t>T_0$ will be zero, which results in an incorrect CI around the cumulative regret with a very small width. I suggest that the authors run the experiments using more random seeds.

Requested Changes: I have the following requested changes.

### Critical
- Discuss whether the scaling in $\sigma_{\min}$, $\pi_{\min}$, and $T$ is optimal or if it can be improved. If possible, please state a lower bound on the cumulative regret.
- Address the weaknesses in the empirical section (see weaknesses). An experiment where the reward model is misspecified would also be nice to see.

### Would improve the paper
- I think the paper would benefit from a discussion of applications where the reward model is perfectly known.
- It would improve the theory if the results did not require the chain to start from the stationary distribution.
- Discuss why explore-then-commit is used for this problem and what the challenges are in implementing more competitive approaches, like those based on optimism or Thompson sampling.

Broader Impact Concerns: No concerns.

==================================================

Review 3:

Summary: The authors consider the problem of latent bandits, where the rewards of the arms depend on an unobserved state, and where the learner knows the conditional distribution of the reward given both the state and the chosen action. The problem consists of estimating the state transition probabilities through the rewards. This also enables accurate estimation of the belief state. The authors propose an algorithm which starts by (possibly) pruning the set of arms, then explores them uniformly to infer the state transition probability matrix by inverting the (linear) relationship between the state transition probability matrix and the reward transition probability matrix.

Strengths and Weaknesses:

Strengths
- The paper is easy to understand.
- The authors present algorithms, regret upper bounds, and numerical simulations to illustrate the efficacy of the approach.

Weaknesses
- I have doubts about the regret definition/model/objective: the authors are purely concerned with making sure that the estimated belief state ${\hat b}_t$ is an accurate estimate of the actual belief state $b_t$, but they do not care at all about the collected rewards. In fact, if there exists a very informative action (one that allows us to quickly guess the state) that has 0 reward, playing it incurs no regret. Not only is this counterintuitive, but I'm not even sure that the problem at hand is a bandit problem. It is essentially just an estimation problem.
The regret definition would make sense if the agent were forced to play the "estimated belief" policy maximizing $\mu(a)^\top {\hat b}_t$, but it is not the case here and the learner may play as she wishes.
- The strategy (uniform exploration) seems overly simplistic and probably highly suboptimal. The authors argue that, in order to counteract this problem, when there are too many non-informative arms one can simply exclude arms a priori and concentrate uniform exploration on a well-chosen, small subset of arms, but I do not believe that this is true in general. Consider the following example where one has $K$ states $\mathbb{S}=\\{1,...,K\\}$, $K-1$ arms $\mathbb{I}=\\{1,...,K-1\\}$, and binary rewards $\mathcal{V}=\\{0,1\\}$ such that $O((a,1),s) = 1$ if $a=s$ and $O((a,1),s) = 0$ if $a \ne s$. Essentially, sampling action $a$ is equivalent to asking the question "is the system in state $a$?" and getting an exact answer. For the algorithm to be consistent, we cannot exclude any arm from the set of arms; otherwise, if $a$ is excluded, it becomes impossible to estimate (for instance) the probabilities $P(s,s')$ with $s=a$. So one has to sample uniformly over all the arms if we use the authors' strategy. Now consider two fixed states $s_1$ and $s_2$, a positive number $\epsilon$ arbitrarily small, and $P$ such that $P(s,s') \le \epsilon$ if $s,s' \not\in \\{s_1,s_2\\}$ (i.e., the chain concentrates mostly on states $s_1$ and $s_2$). Then clearly, whenever we select actions that are not in $\{s_1,s_2\}$, we acquire little to no information about $P$ and collect no reward either. So at most a fraction $4/K^2$ of the collected samples are useful, and the regret is at least $O(K^2)$ greater than what one would obtain with an optimal approach. I believe that, to design good algorithms for this problem (like in all bandit problems), one must sample adaptively to adjust the exploration strategy to the emerging information.
- Why is Assumption 4.1 necessary? This seems like a very strong assumption, clearly unnecessary for ergodicity. For instance, very simple Markov chains like birth-and-death processes do not verify this assumption. This assumption greatly reduces the applicability of the results.
- While the writing is very clear, I believe that the article is much too long. For instance, Section 5 ("Proposed Approach") is almost 3 pages of simple definitions and properties, especially since the proposed approach is quite intuitive to a reader with a basic understanding of Markov chains: one simply estimates the transition probability matrix of the rewards, and then inverts a linear system to retrieve the transition probabilities of the states.

Minor remarks:
- "The regret of a policy $\theta$ on a bandit instance is defined as:" -> the authors do not define what a "policy" is at that point.
- "is the unique stationary distribution of the chain" -> it might be worth reminding the reader that the stationary distribution is the solution to $\pi = \pi P$ at this point of the text.
- "Each MAB is characterized by a finite set of discrete arms ..." This phrase is misleading because it sounds like there may be different sets of arms for different MAB instances, whereas here, all MABs have the same set of arms and rewards.
- "The distribution of rewards Pr(·|s, a) conditioned on MAB instance s and action a is categorical" By categorical, do you mean discrete?
- "Stationary Distribution of Consecutive States" Why not simply define $W(s,s') = \pi(s) P(s,s')$?

Requested Changes: See above.

Broader Impact Concerns: Not applicable.

==================================================

Metareview:

Recommendation: Accept as is

Comment: While initially this paper wasn't unanimously appreciated by the reviewers, the author response has convinced one of the reviewers to change their mind and eventually support acceptance.
One reviewer remained skeptical about the significance of the results, and in particular the strength of the assumptions that the authors have made. While I very much share these concerns and believe that the strength of the results leaves much to be desired, the paper does live up to the TMLR criteria for publication, and I am thus recommending the paper for acceptance. I nevertheless encourage the authors to keep working on this problem until a more satisfactory result can be achieved. ==================================================
# Hermes: Hybrid Error-Corrector Model With Inclusion Of External Signals For Nonstationary Fashion Time Series

Étienne David *etienne.david@heuritech.com*
SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France

Jean Bellot *jean.bellot@heuritech.com*
Heuritech, 6 Rue de Braque, 75003 Paris

Sylvain Le Corff *sylvain.le_corff@sorbonne-universite.fr*
LPSM, Sorbonne Université, UMR CNRS 8001, 75005, Paris

Reviewed on OpenReview: *https://openreview.net/forum?id=4ofFo7D5GL&noteId=QnOhD911vc*

## Abstract

Developing models and algorithms to predict nonstationary time series is a long-standing statistical problem. It is crucial for many applications, in particular for the fashion and retail industries, to make optimal inventory decisions and avoid massive wastes. By tracking thousands of fashion trends on social media with state-of-the-art computer vision approaches, we propose a new model for fashion time series forecasting. Our contribution is twofold. We first provide publicly1 a dataset gathering 10000 weekly fashion time series. As influence dynamics are the key to emerging trend detection, we associate with each time series an external weak signal representing behaviours of influencers. Secondly, to leverage such a dataset, we propose a new hybrid forecasting model1. Our approach combines per-time-series parametric models with seasonal components and a global recurrent neural network to include sporadic external signals. This hybrid model provides state-of-the-art results on the proposed fashion dataset, on the weekly time series of the M4 competition (Makridakis et al., 2018), and illustrates the benefit of the contribution of external weak signals.

## 1 Introduction

Multivariate time series forecasting is a widespread statistical problem with many applications, see for instance Särkkä (2013); Douc et al. (2014); Zucchini et al. (2017) and the numerous references therein.
Parametric generative models provide explainable predictions with statistical guarantees owing to a precise modeling of the predictive distributions of new data based on a record of past observations. Calibrating these models, for instance using maximum likelihood inference, often requires a fair amount of tuning so as to design time-series-specific models able to provide accurate forecasts and sharp confidence intervals. Depending on the use case, the statistical properties of the signal and the available data, many families of models have been proposed for time series. The exponential smoothing model (Brown & Meyer, 1961), the Trigonometric Box-Cox transform, ARMA errors, Trend, and Seasonal components model (TBATS) (Livera et al., 2011), or the ARIMA model with the Box-Jenkins approach (Box et al., 2015) are for instance very popular parametric generative models. Hidden Markov models (HMM) are also widespread and presuppose that the available observations are defined using missing data describing the dynamical system. This hidden state is assumed to be a Markov chain such that at each time step the received observation is a random function of the corresponding latent data. Although hidden states are modeled as a Markov chain, the observations arising therefrom have a complex statistical structure. In various applications where signals exhibit nonstationarities such as trends and seasonality, classical HMM are not well suited. However, Touron (2017) recently proposed seasonal HMM, assuming that the transition probabilities between the states, as well as the emission distributions, are not constant in time but evolve in a periodic manner. Strong consistency results were established in Touron (2019) and Expectation-Maximization-based numerical experiments were proposed. Although these works provide promising results, HMM are computationally expensive to train and are not yet well studied for seasonal sequences with thousands of components.

1https://github.com/etidav/HERMES
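Among the parametric families above, exponential smoothing is the simplest; as a concrete, hedged illustration, the sketch below implements a level-only simple exponential smoothing forecaster (the function name and the flat-forecast convention are ours, not from any of the referenced packages):

```python
def ses_fit_forecast(y, alpha=0.3, horizon=4):
    """Simple exponential smoothing: the level follows
    l_t = alpha * y_t + (1 - alpha) * l_{t-1};
    the h-step-ahead forecast is flat at the last level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return [level] * horizon

# On a constant series the level equals that constant,
# so the forecast is the constant itself.
forecast = ses_fit_forecast([5.0] * 20, alpha=0.5, horizon=3)
```

Richer members of the family (trend, seasonality, Box-Cox as in TBATS) add further components to this recursion.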
In many fields, single or few time series have become thousands of sequences with various statistical properties. In this new context, classical time-series-specific statistical models show limitations when dealing with numerous heterogeneous data. By contrast, recurrent neural networks and recent sequence-to-sequence deep learning architectures offer very appealing numerical alternatives thanks to their capability of leveraging any kind of heterogeneous multivariate data, see for instance Hochreiter & Schmidhuber (1997); Vaswani et al. (2017); Siami-Namini et al. (2018); Li et al. (2019); Lim et al. (2019); Salinas et al. (2020). The DeepAR model proposed in Salinas et al. (2020) provides a global model from many time series based on a multi-layer recurrent neural network with LSTM cells. More recently, applications using the Transformer model have been proposed (Li et al., 2019). The Temporal Fusion Transformers (TFT) approach is a direct alternative to the DeepAR model (Lim et al., 2019). Unfortunately, all these solutions suffer from two main weaknesses. Firstly, many of them are black boxes, as the final forecast usually does not come with a statistical guarantee, although a few recent works focused on measuring uncertainty in recurrent neural networks, see Martin et al. (2021). Secondly, without a fine preprocessing and well-chosen hyperparameters, these methods may lead to poor results and be outperformed by traditional statistical models, see Makridakis et al. (2018). In this paper, we consider an emerging time series forecasting application referred to as *fashion trends* prediction. In the fashion and retail industries, accurately anticipating consumers' needs is vital and wrong decisions can lead to massive wastes. With the explosion of social networks and the recent advances in image recognition, it is possible to translate the visibility of fashion items on social media over time into time series.
Consequently, models and algorithms can be trained to accurately anticipate and predict consumer behaviour. In Ma et al. (2020), a dataset is provided using social media pictures and an image recognition framework to detect several types of clothes: 2000 fashion time series are proposed with a weekly seasonality. However, only 3 years of historical data are available (144 data points), which may not be sufficient for some statistical approaches. In Ma et al. (2020), another dataset is presented gathering 8000 fashion sequences with the historical data increased to 5 years. Nevertheless, only 120 values are available for each fashion time series and the overall volume remains low for a large part of the sequences, resulting in a lot of noise and no clear patterns. In this paper, we propose a new fashion dataset overcoming the weaknesses of the two previous ones. Based on recent image recognition algorithms (Ren et al., 2015; Chollet, 2017), we built a large fashion dataset containing 10000 weekly sequences of fashion trends on social media with 5 years of historical data from 01-01-2015 to 30-12-2019. This dataset has very appealing properties: all time series have the same length (261 data points), there is no missing value and there are no sparse time series, even for niche trends. Concerning fashion dynamics, some of them appear to be really volatile, with nonlinear changes of dynamics resulting from the emergence of new tendencies. In this context, understanding early signals of the appearance of a trend is one of the keys to accurately forecasting the future of fashion. Consequently, the originality of our dataset comes from the fact that additional external weak signals are introduced. With our fashion expertise, we detected several groups of highly influential fashion users. Analyzing their specific behaviours on social media, we associate with each time series an external weak signal representing the same fashion trends on this sub-category of users.
They are called weak signals because they often correspond to alerts or events that are too sparse or too incomplete to allow, on their own, an accurate estimation of their impact on the prediction of the target signal. Exploring this new way of representing fashion, we aim at designing a model able to deal with such a large dataset, leverage complex external weak signals and finally provide the most accurate forecasts. Recurrent neural networks are appealing to tackle our forecasting problem due to their capability of leveraging external data. Recently, hybrid models combining deep neural network (DNN) architectures with widespread statistical models to deal with seasonality and trends have been proposed, see for instance Zhang (2003); Jianwei et al. (2019); Bandara et al. (2020).

![2_image_0.png](2_image_0.png)

Figure 1: From social media to fashion time series. a) A complete image dataset of 150 million pictures is collected from social media users localized in 5 strategic markets. b) A visual recognition pipeline is applied to the images. Global fashion items are detected with a collection of fine-grain attributes. c) Results are aggregated by fashion trend over time and normalized in order to remove social media bias.

The approach providing the most striking results was proposed in Smyl (2020) in the context of the M4 forecasting competition (Makridakis et al., 2020). Given a large dataset, a per-time-series multiplicative exponential smoothing model was introduced to estimate simple but fundamental components for each time series and compute a first prediction. Then a global recurrent neural network was trained on the entire dataset to correct the errors of the previous exponential smoothing models. Following this work, we present in this paper HERMES, a new hybrid recurrent model for time series forecasting with inclusion of external signals.
This new architecture is decomposed into two parts: First, a per-time-series parametric statistical model is trained on each sequence. Then, a global recurrent neural network is trained to evaluate and correct the forecasts of this first collection of models. The ability to deal with external signals reveals the real potential of the hybrid approach: a global neural network, able to leverage large amounts of heterogeneous data, deal with any kind of external weak signal, learn context and finally correct the weaknesses and errors of the parametric models. The paper is organized as follows. Section 2 presents the new fashion dataset provided with this article. Then, the proposed forecasting approach is presented in Section 3. Section 4 describes the experiments and comparisons with several benchmarks on 2 different use cases: the fashion dataset and the M4 competition weekly dataset. Finally, a general conclusion and some research perspectives are given in Section 5.

## 2 From Social Media To Fashion Time Series

## 2.1 Translate Fashion To Data

An image dataset of 150 million pictures is collected from different social media platforms such as Instagram or Weibo. We targeted 5 strategic markets for the retail industry using post localisation: the United States, Europe, Japan, Brazil and China. In compliance with privacy and data protection requirements, only public accounts are selected and no potentially private information is used, saved or revealed during the whole process. The second step consists in creating a powerful visual recognition framework able to detect clothing details on pictures, such as the type of clothing, the form, the size, the color, the texture, etc. To do so, the following framework is designed.

![3_image_0.png](3_image_0.png)

Figure 2: A shoes trend of the fashion dataset. In black the main signal and in orange its associated *fashion-forward* weak signal.
The sudden explosion of the influencers signal at the end of 2018 announces the future burst of the trend in the mass market.

1. First, an object detection model is trained to detect the position, the size and the general type of possibly multiple fashion items on a picture. This localization model is based on the Faster-RCNN architecture introduced in Ren et al. (2015). Starting from weights trained on MS-COCO (Lin et al., 2014), the model is fine-tuned on our data with a standard setup following the original paper.
2. Additionally, several visual recognition models are trained to classify a rich collection of 350 fashion details. We train one classifier for each category of fashion item: one for pants, another for tops, a third for shoes, etc. These models are all based on the Xception architecture introduced in Chollet (2017). To train them, large amounts of social media pictures (between 200k and 800k training images depending on the category) have been manually tagged to constitute meaningful training datasets depending on the classification task. Architectures are first initialized with public weights trained on ImageNet (Russakovsky et al., 2014) and then fine-tuned on the manually labeled dataset corresponding to their task.

At inference time, we first apply the localisation model, which predicts boxes of generic fashion items (tops, pants, shoes, dresses, etc.) for each image. Then, each fashion item is cropped from its full image, resized to the classifiers' input size (299 × 299 px) and fed into the related classifier: a top will be fed into the model trained on tops, etc. We obtain for each image a set of boxes, each associated with a general category and a set of fine-grain attributes describing this object. As a final step, fashion experts aggregate those attributes to define relevant trends for the fashion and retail industry. We call them fashion trends.
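The aggregation from per-image attribute predictions to weekly trend counts can be sketched as follows; the detection records, trend definitions, and the `aggregate` helper are hypothetical stand-ins for the real pipeline outputs:

```python
from collections import defaultdict

# Each detection: (week_index, set of predicted attributes); hypothetical data.
detections = [
    (0, {"dress", "mini", "a-line"}),
    (0, {"dress", "maxi"}),
    (1, {"dress", "mini", "a-line"}),
    (1, {"sneakers"}),
]

# A trend is a conjunction of attributes (e.g. the "Mini A-line dress" trend).
trends = {
    "mini_a_line_dress": {"dress", "mini", "a-line"},
    "sneakers": {"sneakers"},
}

def aggregate(detections, trends):
    """Count, per trend and per week, the detections matching all required attributes."""
    counts = defaultdict(lambda: defaultdict(int))
    for week, attrs in detections:
        for name, required in trends.items():
            if required <= attrs:  # all required attributes were detected
                counts[name][week] += 1
    return {name: dict(weeks) for name, weeks in counts.items()}

series = aggregate(detections, trends)
```

Applied to the 96 million analyzed posts, this kind of aggregation yields one weekly count sequence per trend.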
They can be just one attribute detected by a visual recognition model, for example the sneakers fashion trend. They can also be a combination of several attributes, for instance the Mini A-line dress fashion trend, which is a combination of several attributes (dress length, shape, category, ...), each one detected with a specific visual recognition model. The 150 million social media pictures are analyzed with this visual recognition pipeline. Out of those images, we detected clothes in 96 million posts, making up the final dataset used in this paper. We aggregate results by fashion trend definition over time and thousands of trends are finally translated from social media to time series. We denote by $y^{c,g,m,j}$ the final raw sequence representing the fashion trend $j$ of the cloth type $c$ for the gender $g$ on market $m$. At each time $t$, $y_t^{c,g,m,j}$ represents the number of pictures posted in the market $m$ during week $t$ in which computer vision algorithms detected the fashion trend $j$ of the cloth type $c$ for the gender $g$. As an illustration, an example of a fashion time series is given in Figure 2.

## 2.2 Removing Social Media Bias

Due to the increasing use of social media and continuous changes in users' behaviours, a normalization step is applied to the raw sequences $y^{c,g,m,j}$ in order to remove bias. The pre-processing of data presented in this section does not constitute a technical contribution. This step is presented in depth to explain how the dataset provided with this paper is constructed. Let $\tilde{y}^{c,g,m}$ be the global sequence of the cloth type $c$ for the gender $g$ on market $m$ (e.g., the evolution of skirts in general for females in Europe). With the R package stats, the Seasonal-Trend decomposition using LOESS (Cleveland et al., 1990) is used to remove the seasonal component of $\tilde{y}^{c,g,m}$. The resulting deseasonalized signal is called $\bar{y}^{c,g,m}$.
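The paper relies on STL for this deseasonalization step; as a rough, hedged stand-in, the sketch below replaces each point with a centered moving average over one seasonal period (a much cruder smoother than LOESS-based STL, shrinking the window at the series ends):

```python
def deseasonalize(y, period=52):
    """Naive stand-in for STL deseasonalization: a centered moving
    average over one full period; the averaging window shrinks near
    the ends of the series."""
    half = period // 2
    out = []
    for t in range(len(y)):
        lo = max(0, t - half)
        hi = min(len(y), t + half + 1)
        window = y[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A period-4 alternating series is flattened in its interior.
smooth = deseasonalize([1.0, 3.0, 1.0, 3.0], period=4)
```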
Finally, for any fashion trend $j$, the following normalized sequence is defined for all $0 \leq t \leq T$:

$$y_{t}^{i}=\frac{y_{t}^{c,g,m,j}}{\bar{y}_{t}^{c,g,m}}\;,\qquad(1)$$

where $T$ denotes the number of available time steps and $i$ represents a unique identifier summarizing the four indices $c$, $g$, $m$ and $j$. The time series $y^{c,g,m,j}$ is divided by the deseasonalized signal $\bar{y}^{c,g,m}$ and not by $\tilde{y}^{c,g,m}$ in order to avoid removing the seasonality of all the fashion trend sequences. With this normalizing step, most of the social media bias is removed and the final normalized sequences are expressed in share of category. As an illustration, an example of the normalization process is displayed in Figure 3. In this example, the raw and normalized time series of the Jersey top trend for females in China ($y^{c,g,m,j}$ with c=Top, g=Female, m=China and j=Jersey top) are presented. To compute the normalized signal, the raw time series of the Jersey top trend for females in China is divided by the deseasonalized raw time series representing the global Top trend for females in China ($\bar{y}^{c,g,m}$ with c=Top, g=Female and m=China).

## 2.3 Weak Signal

In theoretical fashion dynamics (Rogers, 1962), different categories of adopters follow a trend in succession, resulting in several adoption waves. To catch the early signal of the emergence of a trend, 6000 social media influencers were selected by hand by fashion experts. Aggregating them, a specific "fashion-oriented" panel is created. With the same methodology as for the main panel described in Section 2.1 and Section 2.2, a normalized time series representing each fashion trend on this specific population is created. We name this weak signal the *fashion-forwards* signal. For each fashion sequence $\{y^i_t\}_{1\leq t\leq T}$, let $\{y^{f,i}_t\}_{1\leq t\leq T}$ be the normalized sequence representing the behaviours of influencers regarding the fashion trend $i$.
As we want to detect shifts between the main signal and the fashion-forward signal, the following input is computed for the hybrid model: for all $t \in \{1, \ldots, T\}$ and any fashion trend $i$,

$$w_{t}^{f,i}={\frac{y_{t}^{f,i}}{y_{t}^{f,i}+y_{t}^{i}}}\,,$$

where $T$ denotes the number of available time steps. Values close to 0.5 indicate a similar behaviour between the influencers panel and the general panel. For instance, an emerging fashion shoes trend with its *fashion-forwards* weak signal is represented in Figure 2.

## 2.4 Fashion Dataset

With this paper, we provide publicly1 a sample of 10000 normalized fashion trends for men and women, over 9 different categories and 5 different markets. Each sequence has 261 time steps, from 2015-01-05 to 2019-12-31, with weekly values and no missing values. This collection of 10000 fashion trends was selected in order to represent finely the issues faced by the fashion industry. For instance, some sequences show complex behaviours with sudden changes, referred to as emerging or declining trends. A central point of this work is to accurately detect and forecast such trends. In addition, each fashion time series is linked with its associated normalized fashion-forward signal as presented in the section above. An overview of the dataset can be found in Table 1. All the trend names are anonymized and for each trend, only the following macro information is revealed: the geolocalisation, the gender and the cloth type.

![5_image_0.png](5_image_0.png)

Figure 3: Example of the difference between the raw sequence and the normalized one for the Jersey top fashion trend for females in China. In this example, the Jersey top fashion raw time series for females in China is normalized by the deseasonalized global top raw time series for females in China. The final normalized time series is then expressed as a share of the global category.
(Top) Time series representing the raw signal of the Jersey top fashion trend for females in China. (Bottom) Time series representing the normalized signal of the Jersey top fashion trend for females in China.

Table 1: Fashion time series overview. For each pair geozone/category, the table gives the number of trends (Female/Male).

| | Top | Pants | Short | Skirt | Dress | Coat | Shoes | Color | Texture |
|---------------|-----------|---------|---------|-------|-------|---------|----------|---------|---------|
| United States | 411/208 | 149/112 | 47/22 | 29/- | 20/- | 208/151 | 293/86 | 38/44 | 85/81 |
| Europe | 409/228 | 134/114 | 48/21 | 28/- | 20/- | 211/159 | 303/78 | 41/42 | 87/74 |
| Japan | 403/218 | 136/107 | 49/31 | 28/- | 23/- | 185/149 | 311/78 | 46/42 | 92/65 |
| China | 424/202 | 147/114 | 46/29 | 27/- | 27/- | 178/161 | 310/78 | 41/47 | 88/77 |
| Brazil | 431/222 | 134/117 | 49/27 | 30/- | 28/- | 203/152 | 311/76 | 48/41 | 107/84 |
| Total | 2078/1078 | 700/564 | 239/130 | 142/- | 118/- | 985/772 | 1528/396 | 214/216 | 459/381 |

## 3 Hermes: A New Hybrid Model For Time Series Forecasting

We introduce a new hybrid approach for time series forecasting composed of two parts: a collection of per-time-series parametric models, and a global error-corrector neural network trained on all time series. Per-time-series parametric models are used in particular to learn local behaviours and to normalize sequences by removing trends and seasonality. Then, a recurrent neural network driven by the weak signals is trained to correct these per-time-series models. Consider $N > 1$ time series. For all $1 \leqslant n \leqslant N$ and $1 \leqslant t \leqslant T$, let $y_t^n$ be the value of the $n$-th sequence at time $t$ and $y^n = \{y_t^n\}_{1 \leqslant t \leqslant T}$ be all the values of this sequence. The objective of this paper is to propose a model to forecast all time series over a given time frame $h \in \mathbb{N}$, i.e., we aim at sampling $\{y^n_{T+1:T+h}\}_{1 \leqslant n \leqslant N}$ based on $\{y^n_{1:T}\}_{1 \leqslant n \leqslant N}$.
## 3.1 Per-Time-Series Predictors

For all $1 \leqslant n \leqslant N$, we denote by $f^n(\cdot; \theta^n_{predictor})$ the parametric model of the $n$-th sequence, where $\theta^n_{predictor}$ are unknown parameters. Given the sequences $\{y^n_{1:T}\}_{1 \leqslant n \leqslant N}$ and the estimated parameters $\{\theta^n_{predictor}\}_{1 \leqslant n \leqslant N}$, the time-series-specific forecasts $\{\hat{y}^{pred,n}_{T+1:T+h|T}\}_{1 \leqslant n \leqslant N}$ are, for all $n \in \{1, \ldots, N\}$ and all $i \in \{1, \ldots, h\}$,

$$\hat{y}_{T+i|T}^{pred,n}=f^{n}(y_{1:T}^{n};\theta_{predictor}^{n})_{i}\;.\qquad(2)$$

During the M4 competition, the hybrid model of Smyl (2020) was based on a multiplicative exponential smoothing model as the time-series-specific predictor. However, on sporadic time series, this choice leads to poor results and instability. In this paper, a more general framework able to deal with any kind of per-time-series model is provided. Thus, the choice of the parametric model can be adjusted depending on the nature of the time series. The only limitation is the computational time, as we aim at forecasting thousands of time series simultaneously. For instance, hidden Markov models provide a very interesting framework, but inference of such models requires computationally costly iterative procedures such as Expectation-Maximization-based algorithms, often combined with Monte Carlo estimates of unknown expectations. Choosing these approaches as per-time-series predictors would considerably slow the overall training process of the hybrid model. In Section 4, we introduce three variants of the hybrid framework using different per-time-series predictors to highlight the adaptability of our approach. The first one is based on an exponential smoothing model as a reference similar to the baseline of Smyl (2020), the second one uses Thetam as per-time-series predictor (Hyndman et al., 2020) and the last one uses a TBATS model (Livera et al., 2011). For nonstationary time series, changes of behaviour are not always predictable using the past of the sequence.
In some cases, these changes depend on external variables not considered by univariate parametric models. The difficulty is that the exact influence of the external variables on the main signal is mostly unknown. This motivates the introduction of a global RNN trained on all time series and able to consider and leverage external signals.

## 3.2 Error-Corrector Recurrent Model

The second part of the model is a global RNN, trained on all the $N$ sequences to correct the weaknesses of the first per-time-series parametric models. This task requires a thorough data pre-processing, as recurrent neural network training is highly sensitive to the scale of the data and requires well-designed inputs. Let $w \in \mathbb{N}$ be the window size; usually this window is proportional to the forecast horizon, $w \propto h$. The RNN input is defined as the following normalized, deseasonalized and rescaled sequence $z^n_T = \{z^n_{T-w+i|T}\}_{1 \leqslant i \leqslant w}$: for all $1 \leqslant n \leqslant N$ and $1 \leqslant i \leqslant w$,

$$z_{T-w+i|T}^{n}:=\frac{y_{T-w+i}^{n}-\hat{y}_{T+k|T}^{pred,n}}{\bar{y}_{T}^{n}}\,,\quad\bar{y}_{T}^{n}=\frac{1}{w}\sum_{i=1}^{w}y_{T-w+i}^{n}\,,$$

where $k = i - h\lfloor i/h \rfloor$ with $\lfloor \cdot \rfloor$ the floor function.

![7_image_0.png](7_image_0.png)

Figure 4: HERMES forecast example on a time series representing the vertical stripes texture fashion trend for females in Brazil. In green the prediction of the TBATS per-time-series predictor. In red the final forecast of our HERMES hybrid model.

With the numerator $y^n_{T-w+i} - \hat{y}^{pred,n}_{T+k|T}$, the per-time-series prediction is included in the RNN input and all the fundamental patterns already learned by this first predictor are removed from the time series. Then the denominator $\bar{y}^n_T$ is used to rescale all inputs to the same level, as the time series can have different scales. Another option could have been to divide $y^n_{T-w+i}$ directly by $\hat{y}^{pred,n}_{T+k|T}$, but with time series hitting 0 this option is not valid.
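The windowed RNN input can be sketched directly; the per-time-series forecast `pred` is taken as given, and reading the index $k$ as a cyclic tiling of the $h$-step forecast over the window is our interpretation of the formula, not an implementation from the authors' code:

```python
def rnn_input(y, pred, w, h):
    """Windowed, prediction-corrected, rescaled RNN input.
    y: observed series up to time T (list of floats);
    pred: the h per-time-series forecast values, tiled over the
    window of size w (our reading of the index k)."""
    y_bar = sum(y[-w:]) / w  # window-level mean used for rescaling
    z = []
    for i in range(1, w + 1):
        k = (i - 1) % h  # tile the h-step forecast over the window
        z.append((y[len(y) - w + i - 1] - pred[k]) / y_bar)
    return z

# A perfectly predicted constant series yields a zero input.
z = rnn_input([2.0] * 8, pred=[2.0, 2.0], w=4, h=2)
```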
Let $\mathrm{RNN}(\cdot; \theta_{corrector})$ be the recurrent neural network model where $\theta_{corrector}$ are unknown parameters. Given the RNN input sequences $\{z^n_T\}_{1 \leqslant n \leqslant N}$ and the global RNN estimated parameters $\theta_{corrector}$, the correction terms $\{\hat{y}^{corr,n}_{T+1:T+h|T}\}_{1 \leqslant n \leqslant N}$ are, for all $n \in \{1, \ldots, N\}$ and all $i \in \{1, \ldots, h\}$,

$$\hat{y}^{corr,n}_{T+i|T}=\mathrm{RNN}(z^{n}_{T};\theta_{corrector})_{i}\cdot\bar{y}^{n}_{T}\;.\qquad(3)$$

Thus, if no external signals are available, the final HERMES forecast is, for all $1 \leqslant n \leqslant N$ and all $i \in \{1, \ldots, h\}$,

$$\hat{y}_{T+i|T}^{n}=\hat{y}_{T+i|T}^{pred,n}+\hat{y}_{T+i|T}^{corr,n}=f^{n}(y_{1:T}^{n};\theta_{predictor}^{n})_{i}+\mathrm{RNN}(z_{T}^{n};\theta_{corrector})_{i}\cdot\bar{y}_{T}^{n}\,.$$

## 3.3 Weak Signal

In addition to the $N$ target time series, $K \times N$ external sequences indexed from $0$ to $T$ are now considered. For all $1 \leqslant n \leqslant N$, $1 \leqslant k \leqslant K$ and $1 \leqslant t \leqslant T$, let $w^{n,k}_t$ be the value of the $k$-th external sequence at time $t$ associated with the sequence $y^n$. Let $\mathbf{w}^n = \{\{w^{n,k}_t\}_{1 \leqslant t \leqslant T}\}_{1 \leqslant k \leqslant K}$ be all the values of the external signals. In addition, let $\mathbf{w}^n_T = \{\{w^{n,k}_{T-w+i}\}_{1 \leqslant i \leqslant w}\}_{1 \leqslant k \leqslant K}$ be only the last $w$ terms of the external sequences. Concatenating $z^n_T$ and $\mathbf{w}^n_T$, a new input for the RNN is defined:

$$\mathbf{x}_{T}^{n}=\{x_{T-w+i|T}^{n}\}_{1\leqslant i\leqslant w}=\{z_{T-w+i|T}^{n},w_{T-w+i}^{n,1},\ldots,w_{T-w+i}^{n,K}\}_{1\leqslant i\leqslant w}\,.$$

Finally, for all $1 \leqslant n \leqslant N$ and for all $i \in \{1, \ldots, h\}$, the final prediction becomes:

$$\hat{y}_{T+i|T}^{n}=\hat{y}_{T+i|T}^{pred,n}+\hat{y}_{T+i|T}^{corr,n}=f^{n}(y_{1:T}^{n};\theta_{predictor}^{n})_{i}+\mathrm{RNN}(\mathbf{x}_{T}^{n};\theta_{corrector})_{i}\cdot\bar{y}_{T}^{n}\;.\qquad(4)$$

An illustration of the proposed model is displayed in Figure 5 and a first forecast example is given in Figure 4.
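The two-stage combination (statistical forecast plus rescaled network correction) can be sketched with stub components; `snaive` and `zero_corr` below are hypothetical stand-ins for a fitted per-time-series model and a trained corrector network:

```python
def hermes_forecast(y, predictor, corrector, w, h):
    """HERMES-style combination: per-time-series forecast plus a
    correction rescaled by the window-level mean."""
    y_bar = sum(y[-w:]) / w        # rescaling level
    pred = predictor(y, h)         # per-time-series forecast, length h
    corr = corrector(y, h)         # network output, length h (stub here)
    return [pred[i] + corr[i] * y_bar for i in range(h)]

# Stubs: a last-period repetition predictor and a zero corrector
# (what an untrained network would ideally output on easy series).
snaive = lambda y, h: y[-h:]
zero_corr = lambda y, h: [0.0] * h

out = hermes_forecast([1.0, 2.0, 3.0, 4.0], snaive, zero_corr, w=2, h=2)
```

With a zero correction the combination reduces to the per-time-series forecast, which is exactly the intended fallback behaviour of the hybrid design.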
![8_image_0.png](8_image_0.png)

Figure 5: Architecture of the hybrid model with weak signals. The proposed framework can be decomposed into 5 steps: i) provide a time series. ii) (a) fit a first statistical model on the provided time series, (b) compute a first prediction and (c) preprocess the time series for the global RNN. iii) If available, external signals can be added as part of the RNN input. iv) With a pre-trained RNN, compute a correction of the first statistical prediction. v) Compute the final forecast by adding the first time series prediction and the RNN correction.

## 4 Experimental Results

In this section, the performance of the hybrid model is assessed and compared with several other approaches. The HERMES framework is evaluated on two different use cases.

- A first application is proposed on the fashion dataset. The HERMES model is trained to forecast the fashion time series one year ahead (h=52). This use case is mostly guided by the fashion and retail industry, where clothes collections are usually prepared one year or more in advance. As additional signals representing influencers' behaviours are available, this allows us to focus on the ability of our framework to leverage these weak signals.
- A second application is proposed on the weekly dataset of the M4 competition (Makridakis et al., 2018). Based on the competition's rules, the forecasting horizon is set to 13 and no external signals are available. Furthermore, the M4 weekly time series come from different sectors and have variable lengths.

## 4.1 Fashion Use Case

## 4.1.1 Training

The fashion dataset is split into three blocks: *train*, *eval* and *test* sets. The first 3 years are used as the *train* set, the 4th year is kept for the *eval* set and the *test* set is made of the last year. The hybrid model is trained to compute a one-year-ahead prediction, with h equal to 52, and the window size w is fixed at 104.
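The temporal split described above (first 3 years for training, year 4 for evaluation, year 5 for testing, with 52-week years) can be sketched as:

```python
def temporal_split(y, year=52):
    """Split a weekly series into train (years 1-3), eval (year 4),
    test (year 5); the extra week beyond 5 * 52 = 260 points in the
    261-point dataset is simply left out in this sketch."""
    assert len(y) >= 5 * year
    train = y[: 3 * year]
    eval_ = y[3 * year : 4 * year]
    test = y[4 * year : 5 * year]
    return train, eval_, test

series = list(range(261))  # 5 years of weekly values, as in the dataset
train, eval_, test = temporal_split(series)
```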
Using the first two years of the *train* set, a first parametric model is fitted for each time series. With the resulting collection of local models, a forecast of the third year is computed for each sequence. Corrector inputs are finally computed and the RNN is trained to correct this first collection of third-year forecasts. For the *eval* set, the per-time-series predictors are fitted a second time using the first three years, and forecasts of the fourth year are computed. The *eval* set is used during training to control the learning of the RNN model and prevent overfitting. The per-time-series predictors are fitted a last time for the *test* set using the first four years. The final accuracy measures of all our models are computed on this *test* set. As an illustration, an example of our split is shown in Figure 6. Note that we have just enough historical data to perform the proposed train/eval/test split. Therefore, the common solution of applying a rolling window to enlarge the different splits cannot be used: only one input/output pair is provided per time series for each split.

For the first parametric per-time-series models, the existing Python libraries statsmodels and tbats are used to estimate the different parameters $\theta^{n}_{predictor}$. The architecture of the recurrent neural network error-corrector is composed of 3 LSTM layers of shape 50 and a final Dense layer to provide the correct output dimension. A classical Adam optimizer is used, with the learning rate and batch size set using a grid search. The loss function is defined as follows:

$$\ell(y_{T+1:T+h}^{n},\widehat{y}_{T+1:T+h|T}^{n})=\frac{1}{\bar{y}_{T}^{n}}\sum_{i=1}^{h}|y_{T+i}^{n}-\widehat{y}_{T+i|T}^{n}|\,.$$

This choice of L1 loss function is motivated by its robustness to outliers, which matters for some fashion time series with very specific behaviours.
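The normalized L1 loss above can be sketched as follows (a minimal numpy version; all numeric values are hypothetical):

```python
import numpy as np

def hermes_l1_loss(y_true, y_hat, y_bar):
    """L1 loss over the forecast horizon, normalized by the mean level
    \\bar{y}_T^n of the last observed year, so that series with very
    different scales contribute comparably."""
    return np.sum(np.abs(np.asarray(y_true) - np.asarray(y_hat))) / y_bar

# Toy example: absolute errors 0.5, 0.0 and 1.0, mean level 0.5.
loss = hermes_l1_loss([1.0, 2.0, 3.0], [1.5, 2.0, 2.0], y_bar=0.5)  # -> 3.0
```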
The loss and the previous hyperparameters are all set with a complete grid search and have to be adapted to the use case. See C.1 for additional results concerning the choice of loss function and C.2 for a complete grid search example. The code is developed in Python using the Tensorflow library and is publicly available1. It supports GPUs to speed up the training process.

## 4.1.2 Benchmarks, Hybrid Models And Metrics

As benchmarks, several widespread statistical methods and deep learning approaches were selected. Using the R package forecast and the Python packages statsmodels and tbats, predictions are computed for each time series with the following methods: *snaive*, *ets*, *stlm*, *thetam*, *tbats* and *auto.arima*. The forecast of

1https://github.com/etidav/HERMES

![10_image_0.png](10_image_0.png)

Figure 6: Temporal split for our training process. The first three years define our training set. The fourth year is used as our eval set and the final year is reserved for the test set.

the *snaive* method is simply the repetition of the last past period. The *ets* model is an additive exponential smoothing with a level component and a seasonal component. The *stlm* approach uses a multiplicative decomposition and models the seasonally adjusted time series with an exponential smoothing model. The *thetam* model decomposes the original signal into θ-lines, predicts each one separately and recombines them to produce the final forecast, and *tbats* uses a trigonometric seasonality. Finally, *auto.arima* is the R implementation of the ARIMA model with an automatic selection of the best parameters. A complete description and references for these models can be found in Hyndman et al. (2020). We also compare our model with recent deep learning architectures for time series.
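As a concrete illustration of the simplest baseline above, the *snaive* forecast (repeating the last observed period) can be sketched as:

```python
import numpy as np

def snaive_forecast(y, m, h):
    """Seasonal naive forecast: repeat the last observed period of
    length m over the horizon h."""
    last_period = np.asarray(y)[-m:]
    reps = int(np.ceil(h / m))
    return np.tile(last_period, reps)[:h]

# Toy series with seasonal period m = 4, forecast horizon h = 6.
forecast = snaive_forecast([1, 2, 3, 4, 5, 6, 7, 8], m=4, h=6)
```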
The Prophet model introduced in Taylor & Letham (2017) is implemented with the package of the same name, prophet, and three models based on recurrent neural networks, called *lstm*, *lstm-ws* and *deepar*, are used. *lstm* and *lstm-ws* are both composed of 3 LSTM layers of shape 50 and a final Dense layer of shape 52. The first one (*lstm*) has only access to the main signal, while the second one (*lstm-ws*) also has access to the external signals. The last approach is the recent state-of-the-art DeepAR model introduced in Salinas et al. (2020). The Python package GluonTS (Alexandrov et al., 2020) was used to train DeepAR on the fashion dataset, with a grid search to tune the main hyperparameters.

A strength of the proposed HERMES framework is that it can handle any kind of parametric model. Thus, three versions of HERMES are proposed on the fashion dataset, using different local parametric models. The first one uses as predictors an additive exponential smoothing model, as a reference close to Smyl (2020). The second one is based on the Thetam parametric model (Hyndman et al., 2020). Finally, the last one relies on the TBATS model of Livera et al. (2011) and achieves the highest accuracy on the fashion dataset. These variants are called *hermes-ets*, *hermes-thetam* and *hermes-tbats*, respectively, according to the choice of per-time-series model. For each of them, a variation including the weak signals (-ws) is presented.

To compare the different methods, we use the Mean Absolute Scaled Error (MASE) for seasonal time series. As our sequences have completely different scales, from $10^{-5}$ to $10^{-1}$, this metric was chosen to compute a fair error measure, independent of the scale of the sequence and suited for our seasonal fashion time series.
The MASE metric is defined as follows, with $T$ the length of the time series, $m$ the seasonal period and $h$ the horizon:

$$\mathrm{MASE}={\frac{T-m}{h}}{\frac{\sum_{j=1}^{h}\left|Y_{T+j}-{\hat{Y}}_{T+j}\right|}{\sum_{i=1}^{T-m}\left|Y_{i}-Y_{i-m}\right|}}\,.$$

Table 2: Results summary on the 10000 time series of the fashion dataset. For each metric, the average over all our time series is computed. For approaches using neural networks, 10 models are trained with different seeds; the mean and the standard deviation of the 10 results are displayed. Models with a ∗ in their name have access to the external signal.

| Model | MASE ↓ mean | MASE ↓ std | ACCURACY ↑ mean | ACCURACY ↑ std |
|---|---|---|---|---|
| snaive | 0.881 | - | 0.357 | - |
| thetam | 0.845 | - | 0.463 | - |
| arima | 0.826 | - | 0.464 | - |
| ets | 0.807 | - | 0.449 | - |
| prophet | 0.786 | - | 0.485 | - |
| stlm | 0.770 | - | 0.482 | - |
| hermes-ets-ws* | 0.769 | 0.005 | 0.501 | 0.007 |
| hermes-thetam | 0.764 | 0.003 | 0.497 | 0.005 |
| hermes-thetam-ws* | 0.760 | 0.004 | 0.520 | 0.010 |
| hermes-ets | 0.758 | 0.001 | 0.490 | 0.006 |
| deepar | 0.752 | 0.018 | 0.459 | 0.015 |
| tbats | 0.745 | - | 0.453 | - |
| lstm-ws* | 0.728 | 0.004 | 0.500 | 0.008 |
| lstm | 0.724 | 0.003 | 0.498 | 0.007 |
| hermes-tbats | 0.715 | 0.002 | 0.488 | 0.008 |
| hermes-tbats-ws* | 0.712 | 0.004 | 0.510 | 0.005 |

Detecting emerging and declining trends is a crucial issue for the fashion industry. A correct or incorrect prediction can lead to good returns or to massive waste due to overstock and unsold clothes. In addition to the MASE accuracy metric, the different methods are therefore also evaluated on a classification task, with a particular focus on the differences between methods with and without weak signals.
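The MASE defined above can be computed as follows (a minimal sketch with a hypothetical toy series):

```python
import numpy as np

def mase(y_future, y_hat, y_history, m):
    """Mean Absolute Scaled Error: mean absolute forecast error scaled
    by the in-sample mean absolute error of the seasonal naive method
    with period m, matching the formula above."""
    y_future, y_hat, y_history = map(np.asarray, (y_future, y_hat, y_history))
    T, h = len(y_history), len(y_hat)
    num = np.sum(np.abs(y_future - y_hat)) / h
    denom = np.sum(np.abs(y_history[m:] - y_history[:-m])) / (T - m)
    return num / denom

# Toy series with seasonal period m = 2:
# scaling error 2.0, forecast error 0.5, so MASE = 0.25.
score = mase([5.0, 6.0], [4.0, 6.0], [1.0, 2.0, 3.0, 4.0], m=2)
```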
In a given year, an increasing trend is defined as a trend that grows by more than 5% on average with respect to the previous year. In the same way, a decreasing trend is defined as a trend that declines by 5% or more on average. Other trends are classified as flat. With this threshold, the proposed fashion dataset is almost balanced on the *test* set: there are 3087 increasing trends, 3342 decreasing trends and 3571 flat trends. To compare the different methods on this classification task, the accuracy metric, defined as the percentage of correct classifications, is used.

## 4.1.3 Results

10000 fashion time series, global accuracy. For the two metrics and for each model, we compute the average over all sequences in the final year. Results are displayed in Table 2. For methods using neural networks, 10 models are trained with different seeds; the average and the standard deviation of their results are displayed. Among the statistical models, TBATS largely dominates the alternatives in terms of MASE. This is one of the main reasons why this model is used as the predictor of the best HERMES candidate. All HERMES versions show substantial improvements over their per-time-series predictors in terms of MASE. *hermes-tbats* outperforms the two other variants and all the benchmarks on the fashion dataset. This result highlights two important features of the proposed hybrid model: i) the final accuracy of the model is strongly impacted by the choice of the per-time-series predictor, and ii) if the per-time-series predictor is well chosen with regard to the nature of the time series, the resulting HERMES model outperforms other state-of-the-art approaches. Regarding the impact of the weak signals, Table 2 highlights a clear improvement of the accuracy metric when weak signals are included.
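The yearly trend labelling described above (a 5% threshold on the ratio of yearly averages) can be sketched as:

```python
import numpy as np

def classify_trend(prev_year, cur_year, threshold=0.05):
    """Label a yearly trend: 'inc' if the average level grows by more
    than 5% with respect to the previous year, 'dec' if it declines by
    5% or more, and 'flat' otherwise."""
    growth = np.mean(cur_year) / np.mean(prev_year) - 1.0
    if growth > threshold:
        return "inc"
    if growth < -threshold:
        return "dec"
    return "flat"

# Toy weekly series (52 values per year) with +20%, -10% and +1% changes.
labels = [classify_trend([1.0] * 52, [x] * 52) for x in (1.2, 0.9, 1.01)]
```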
Figure 7 displays some examples of *hermes-tbats* predictions and illustrates some TBATS weaknesses that can be corrected by the hybrid approach.

10000 fashion time series, classification task. Classification results for the *tbats* model and the hybrid method *hermes-tbats* are given in Table 3. We note an impressive decrease in impactful errors.

![12_image_0.png](12_image_0.png)

Figure 7: *hermes-tbats* forecast examples. In green, the prediction of the per-time-series predictor tbats. In red, the final forecast of our HERMES hybrid model *hermes-tbats*. (Top) Time series representing a top fashion trend for females in the United States. (Bottom) Time series representing the horizontal stripes texture fashion trend for females in China.

Table 3: *tbats*, *hermes-tbats* and *hermes-tbats-ws* models confusion matrices.

| tbats | pred-dec | pred-flat | pred-inc |
|---|---|---|---|
| true-dec | 902 | 2113 | 327 |
| true-flat | 351 | 2920 | 300 |
| true-inc | 300 | 2078 | 709 |

| hermes-tbats | pred-dec | pred-flat | pred-inc |
|---|---|---|---|
| true-dec | 1261 | 1960 | 121 |
| true-flat | 549 | 2823 | 199 |
| true-inc | 214 | 2004 | 869 |

| hermes-tbats-ws | pred-dec | pred-flat | pred-inc |
|---|---|---|---|
| true-dec | 1956 | 1245 | 141 |
| true-flat | 1257 | 2087 | 227 |
| true-inc | 358 | 1620 | 1109 |

1000 time series fashion dataset:

| Model | MASE ↓ mean | MASE ↓ std | ACCURACY ↑ mean | ACCURACY ↑ std |
|---|---|---|---|---|
| snaive | 0.871 | - | 0.383 | - |
| thetam | 0.837 | - | 0.484 | - |
| arima | 0.821 | - | 0.472 | - |
| ets | 0.801 | - | 0.469 | - |
| prophet | 0.788 | - | 0.476 | - |
| hermes-ets | 0.767 | 0.004 | 0.482 | 0.009 |
| hermes-thetam | 0.766 | 0.002 | 0.476 | 0.009 |
| hermes-ets-ws* | 0.766 | 0.004 | 0.507 | 0.013 |
| stlm | 0.765 | - | 0.493 | - |
| hermes-thetam-ws* | 0.763 | 0.005 | 0.501 | 0.009 |
| lstm | 0.740 | 0.007 | 0.487 | 0.014 |
| deepar | 0.738 | 0.017 | 0.465 | 0.013 |
| tbats | 0.734 | - | 0.466 | - |
| lstm-ws* | 0.731 | 0.005 | 0.492 | 0.012 |
| hermes-tbats | 0.721 | 0.002 | 0.487 | 0.014 |
| hermes-tbats-ws* | 0.717 | 0.004 | 0.500 | 0.010 |

100 time series fashion dataset:

| Model | MASE ↓ mean | MASE ↓ std | ACCURACY ↑ mean | ACCURACY ↑ std |
|---|---|---|---|---|
| snaive | 0.876 | - | 0.330 | - |
| thetam | 0.822 | - | 0.470 | - |
| arima | 0.814 | - | 0.400 | - |
| hermes-thetam | 0.812 | 0.009 | 0.446 | 0.031 |
| lstm | 0.810 | 0.015 | 0.446 | 0.049 |
| hermes-thetam-ws* | 0.810 | 0.008 | 0.479 | 0.024 |
| deepar | 0.804 | 0.024 | 0.393 | 0.029 |
| hermes-ets-ws* | 0.792 | 0.003 | 0.386 | 0.010 |
| hermes-ets | 0.790 | 0.004 | 0.374 | 0.005 |
| lstm-ws* | 0.789 | 0.010 | 0.485 | 0.036 |
| ets | 0.786 | - | 0.400 | - |
| prophet | 0.767 | - | 0.490 | - |
| tbats | 0.745 | - | 0.440 | - |
| stlm | 0.742 | - | 0.450 | - |
| hermes-tbats | 0.741 | 0.005 | 0.462 | 0.021 |
| hermes-tbats-ws* | 0.737 | 0.004 | 0.486 | 0.027 |

Table 4: Results summary on 1000 time series and 100 time series of the fashion dataset. The MASE average over all the time series is computed. For the approaches using a neural network, 10 models with different seeds are trained; the mean and the standard deviation of the 10 results are displayed.
Models with a ∗ in their name have access to the external signal.

By impactful errors, we mean forecasting an increase instead of a decrease and vice versa. The *hermes-tbats* model divides this error rate by 3 in comparison to *tbats*, with only a slight decrease in the number of correct increase/decrease predictions. Moreover, with our weak signals, *hermes-tbats-ws* catches twice as many correct increase/decrease predictions as its counterpart without weak signals, while keeping a relatively low number of impactful errors.

Size of the dataset. In addition to the results on the whole fashion dataset, the robustness of the HERMES model is analyzed when it is trained on smaller datasets. Two experiments are performed on subsamples of 1000 and 100 randomly selected time series, respectively. Results are given in Table 4. The hybrid framework *hermes-tbats-ws* achieves the best global accuracy on both datasets. We can note that the accuracy of the full neural network *lstm* decreases when the dataset size decreases. On the small dataset of 100 time series, a local statistical model like *tbats* or *stlm* largely outperforms deep-model-based approaches such as *lstm* or *deepar*. In fact, providing sharp predictions from scratch is a complex task, and high-dimensional recurrent neural networks require large amounts of data to do so. By contrast, the HERMES approach can rely on its first statistical part and consequently needs less data to be trained and to achieve good performance. We can nevertheless note that the gain brought by the error-corrector recurrent model decreases as the size of the dataset diminishes.

![14_image_0.png](14_image_0.png)

Figure 8: One of the shortest sequences of the M4 weekly dataset (93 time steps). In order to fit its predictor, the last complete year of the train set is duplicated to reach a total length of 300 time steps.

## 4.2 M4 Competition Use Case

We also assessed the performance of HERMES using the M4 weekly dataset (Makridakis et al., 2020).
The M4 dataset gathers 359 weekly time series and has three main differences compared to the proposed fashion dataset. Firstly, the sequences have different lengths, ranging from 93 to 2610 time steps. Secondly, the 359 time series come from different sectors, such as finance or industry. Accordingly, they have very distinct scales and dynamics. Thirdly, compared to the previous fashion application, the prediction horizon is set to 13 for the weekly dataset and no additional external signals are provided.

## 4.2.1 Training

The M4 dataset is preprocessed as follows. As some sequences are short (93 time steps), they limit the window size w of the RNN. Consequently, 300 time steps are kept for each sequence: shorter sequences are duplicated in order to reach a length of 300 and longer sequences are cropped so as to keep the last 300 time steps. An overview of our train, eval, test split and the resizing of the shortest sequences is given in Figure 8. Secondly, several M4 weekly time series have a large volume and a high level of variability. Consequently, Equation 2 of the HERMES framework is changed to:

$$\widehat{y}_{T+i|T}^{pred,n}=\exp\left(f^{n}(\log\left(y_{1:T}^{n}\right);\theta_{predictor}^{n})_{i}\right)\,.\qquad(5)$$

This simple modification significantly increases the accuracy of the per-time-series predictors tested on the M4 weekly dataset while reducing the fitting time. As for the fashion dataset, a complete grid search is done on the M4 weekly dataset to set the hyperparameters of the HERMES architecture. The horizon h is set to 13 and the window size w to 104. For the RNN part, the same architecture as described in Section 4.1.2 is used. The Adam optimizer is used and the MASE is directly used as the loss function.
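The log-space trick of Equation (5) can be sketched as follows; the naive "repeat last value" predictor below is a hypothetical stand-in for the actual per-time-series model:

```python
import numpy as np

def fit_predict_log(y, fit_predict):
    """Eq. (5): apply the per-time-series predictor in log space and
    map the forecast back with exp, which tames the large volumes and
    high variability of some M4 weekly series. `fit_predict` is any
    function mapping a history to a forecast."""
    return np.exp(fit_predict(np.log(np.asarray(y))))

# Hypothetical stand-in predictor: repeat the last (log-)value, horizon 3.
naive = lambda y_log: np.full(3, y_log[-1])
forecast = fit_predict_log([1.0, np.e, np.e ** 2], naive)  # every entry e^2
```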
As some of the M4 weekly time series have many years of historical data, a rolling window is finally applied on the train set so as to increase the number of examples and improve the training process. The number of sliding windows, the learning rate, the batch size, the RNN architecture and the input size are set using a grid search and detailed in C.2.

## 4.2.2 Evaluation

The proposed model is evaluated along with a rich collection of benchmarks provided by the M4 competition, encompassing statistical models and neural network approaches. In addition, the hybrid model of S. Smyl, named *Uber*, is added. For a complete description and references of the benchmark models and the hybrid model *Uber*, see Makridakis et al. (2020) and Smyl (2020).

Table 5: Results summary on the M4 weekly dataset. For each metric, the average over all our time series is computed. For approaches using a neural network, 10 models are trained with different seeds; the mean and the standard deviation of the 10 results are displayed.

| Model | SMAPE mean | SMAPE std | MASE mean | MASE std | OWA mean | OWA std |
|---|---|---|---|---|---|---|
| MLP | 21.349 | - | 13.568 | - | 3.608 | - |
| RNN | 15.220 | - | 5.132 | - | 1.755 | - |
| snaive | 9.161 | - | 2.777 | - | 1.000 | - |
| SES | 9.012 | - | 2.685 | - | 0.975 | - |
| Theta | 9.093 | - | 2.637 | - | 0.971 | - |
| Holt | 9.708 | - | 2.420 | - | 0.966 | - |
| Com | 8.944 | - | 2.432 | - | 0.926 | - |
| Damped | 8.866 | - | 2.404 | - | 0.917 | - |
| Uber Smyl (2020) | 7.817 | - | 2.356 | - | 0.851 | - |
| tbats | 7.409 | - | 2.204 | - | 0.801 | - |
| hermes-tbats | 7.383 | 0.016 | 2.191 | 0.010 | 0.797 | 0.002 |
| Pawlikowski, et al. | 6.919 | - | 2.158 | - | 0.766 | - |
| Petropoulos & Svetunkov | 6.726 | - | 2.133 | - | 0.751 | - |
| Darin & Stellwagen | 6.582 | - | 2.107 | - | 0.739 | - |
| fforma-hermes | 6.614 | - | 2.058 | - | 0.732 | - |

As a HERMES candidate, a version using TBATS is proposed and called *hermes-tbats*. We propose a focus on the top three models reaching the highest accuracy on the M4 weekly dataset. These three methods are based on ensembling and combine various approaches. The first model is presented in Darin & Stellwagen (2020) and called *Darin & Stellwagen*. The second model is introduced in Petropoulos & Svetunkov (2020) and called *Petropoulos & Svetunkov*. Finally, a description of the third model, called *Pawlikowski, et al.*, can be found in Pawlikowski & Chorowska (2020). An ensembling combining four HERMES variations is also proposed. It is based on the FFORMA algorithm introduced in Montero-Manso et al. (2020) and called *fforma-hermes*. A complete description of the training process of the proposed ensembling is given in B.3. Following the M4 competition methodology, all the candidates are evaluated according to the MASE, SMAPE and OWA measures. A complete definition of these metrics is proposed in Makridakis et al. (2020) and summarized in B.1. See also B.1 for additional information about the M4 weekly dataset.

## 4.2.3 Results

The final results for the M4 weekly dataset are displayed in Table 5. The HERMES approach *hermes-tbats* outperforms all the benchmarks. This result is partially induced by the use of TBATS per-time-series predictors, which achieve impressively good results on the test set. Regarding the hybrid model proposed by S. Smyl, its accuracy remains low in comparison to *tbats* and *hermes-tbats*. For the ensembling methods, the proposed FFORMA model with four HERMES variations, *fforma-hermes*, reaches the same high level of accuracy as the top three methods of the competition on the weekly dataset.
The results provided by *hermes-tbats* confirm that the HERMES model is well suited for a large collection of forecasting tasks, even difficult ones with small datasets, heterogeneous time series and no useful external signals. Moreover, the accuracy gap between the proposed hybrid model and the approach proposed in Smyl (2020) illustrates the importance of a global framework able to leverage any kind of per-time-series predictor depending on the use case. Finally, our model can easily be included as part of an ensembling method to improve the final robustness and accuracy of the predictions.

## 4.3 Discussion

The results of the two previous forecasting tasks illustrate that the proposed hybrid method is a general framework, easily adaptable to different use cases and able to compete with other state-of-the-art methods. With its neural network component, it is also able to leverage any kind of external signal, which could become essential in more and more forecasting applications. The experiments highlight some limitations that may be the topic of future works.

- In its current form, the use of the neural network prevents the interpretability of the final prediction. The use of the external signal cannot be easily assessed and no confidence intervals are provided with the hybrid framework. Future works can focus on designing more interpretable models, using for instance Bayesian neural networks, or combining neural networks with state space models.

- The proposed hybrid approach can be computationally intensive, for instance compared to classical neural-network-based models. A solution to this limitation is to propose new procedures to jointly train both parts of the model with mini-batches of observations.

- In the fashion use case, the inclusion and use of the external signals could be improved, as the external signals are relevant only for a few fashion time series.
An ongoing work is to use several neural networks specialized in different uses of the external signals. Depending on the past of the time series and the external signals, a hidden state could be used to switch between models.

## 5 Conclusion

In this paper, we propose a new hybrid model for non-stationary time series forecasting. By mixing the performance of local parametric models and a global neural network, *hermes-tbats* clearly outperforms traditional statistical methods and full neural network models on two forecasting tasks. Furthermore, this new model is well suited to deal with external signals. With a fine preprocessing and a well-designed architecture, the proposed hybrid framework succeeds at leveraging complex extra data and reaches promising accuracy levels.

In addition, a fashion dataset gathering a sample of 10000 time series and a collection of weak signals is provided. We show that it is possible to represent fashion data with time series and introduce statistical models based on social media data. By making it publicly available, we hope that it will enhance the diversity of datasets for time series forecasting and pave the way for further explorations.

As a possible future work, designing new models for the weak signals would improve their inclusion in the HERMES architecture. Focusing on the examples with important changes of behaviour, a fine analysis of the impact of the collection of weak signals is the topic of ongoing works. In the same way, an interesting improvement of the hybrid framework would be to introduce not a single but several neural networks trained to correct different kinds of weaknesses. A perspective is to add a latent discrete label to dynamically select the regime shifts.

## References

Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C.
Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, and Yuyang Wang. GluonTS: Probabilistic and neural time series modeling in Python. *Journal of Machine Learning Research*, 21(116):1–6, 2020. URL http://jmlr.org/papers/v21/19-820.html.

Kasun Bandara, Christoph Bergmeir, and Hansika Hewamalage. LSTM-MSNet: Leveraging forecasts on sets of related time series with multiple seasonal patterns. *IEEE Transactions on Neural Networks and Learning Systems*, 2020.

George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. *Time series analysis: forecasting and control*. John Wiley & Sons, 2015.

Robert G. Brown and Richard F. Meyer. The fundamental theorem of exponential smoothing. *Operations Research*, 9(5):673–685, 1961.

François Chollet. Xception: Deep learning with depthwise separable convolutions. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1251–1258, 2017.

Robert B Cleveland, William S Cleveland, Jean E McRae, and Irma Terpenning. STL: A seasonal-trend decomposition. *J. Off. Stat.*, 6(1):3–73, 1990.

Sarah Goodrich Darin and Eric Stellwagen. Forecasting the M4 competition weekly data: Forecast Pro's winning approach. *International Journal of Forecasting*, 36(1):135–141, 2020.

Randal Douc, Éric Moulines, and David Stoffer. *Nonlinear time series: theory, methods and applications with R examples*. Chapman and Hall/CRC, 2014.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997.

Rob J Hyndman, George Athanasopoulos, Christoph Bergmeir, Gabriel Caceres, Leanne Chhay, Mitchell O'Hara-Wild, Fotios Petropoulos, Slava Razbash, and Earo Wang. Package 'forecast'. https://cran.r-project.org/web/packages/forecast/forecast.pdf, 2020.

E Jianwei, Jimin Ye, and Haihong Jin. A novel hybrid model on the prediction of time series and its application for the gold price analysis and forecasting.
*Physica A: Statistical Mechanics and its Applications*, 527:121454, 2019. Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. *arXiv preprint* arXiv:1907.00235, 2019. Bryan Lim, Sercan O Arik, Nicolas Loeff, and Tomas Pfister. Temporal fusion transformers for interpretable multi-horizon time series forecasting. *arXiv preprint arXiv:1912.09363*, 2019. Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. Microsoft COCO: Common objects in context, 2014. Alysha M. De Livera, Rob J. Hyndman, and Ralph D. Snyder. Forecasting time series with complex seasonal patterns using exponential smoothing. *Journal of the American Statistical Association*, 106(496):1513– 1527, 2011. Yunshan Ma, Yujuan Ding, Xun Yang, Lizi Liao, Wai Keung Wong, and Tat-Seng Chua. Knowledge enhanced neural fashion trend forecasting. In *Proceedings of the 2020 International Conference on Multimedia* Retrieval, pp. 82–90, 2020. Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The M4 competition: Results, findings, conclusion and way forward. *International Journal of Forecasting*, 34(4):802–808, 2018. Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The M4 competition: 100,000 time series and 61 forecasting methods. *International Journal of Forecasting*, 36(1):54–74, 2020. Alice Martin, Charles Ollion, Florian Strub, Sylvain Le Corff, and Olivier Pietquin. The Monte Carlo Transformer: a stochastic self-attention model for sequence prediction. *arXiv preprint arXiv:2007.08620*, 2021. Pablo Montero-Manso, George Athanasopoulos, Rob J. Hyndman, and Thiyanga S. Talagala. FFORMA: Feature-based forecast model averaging. *International Journal of Forecasting*, 36(1):86–92, 2020. M4 Competition. 
Maciej Pawlikowski and Agata Chorowska. Weighted ensemble of statistical models. *International Journal* of Forecasting, 36(1):93–97, 2020. Fotios Petropoulos and Ivan Svetunkov. A simple combination of univariate models. International Journal of Forecasting, 36(1):110–115, 2020. ISSN 0169-2070. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. *Advances in neural information processing systems*, 28, 2015. Everett M Rogers. *Diffusion of innovations*. Simon and Schuster, 1962. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge, 2014. David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. *International Journal of Forecasting*, 36(3):1181–1191, 2020. Simo Särkkä. *Bayesian filtering and smoothing*. Cambridge university press, 2013. Sima Siami-Namini, Neda Tavakoli, and Akbar Siami Namin. A comparison of ARIMA and LSTM in forecasting time series. In *2018 17th IEEE International Conference on Machine Learning and Applications* (ICMLA), pp. 1394–1401, 2018. Slawek Smyl. A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting. *International Journal of Forecasting*, 36(1):75–85, 2020. Sean Taylor and Benjamin Letham. Forecasting at scale. *The American Statistician*, 72, 09 2017. doi: 10.1080/00031305.2017.1380080. Augustin Touron. Modeling rainfalls using a seasonal hidden Markov model, 2017. Augustin Touron. Consistency of the maximum likelihood estimator in seasonal hidden Markov models. Statistics and Computing, 29(5):1055–1075, 2019. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 
Attention is all you need. *31st Conference on Neural Information Processing Systems (NeurIPS 2017)*, 2017.

Xiaozhe Wang, Kate Smith, and Rob Hyndman. Characteristic-based clustering for time series data. *Data Mining and Knowledge Discovery*, 13:335–364, 2006.

G Peter Zhang. Time series forecasting using a hybrid ARIMA and neural network model. *Neurocomputing*, 50:159–175, 2003.

Walter Zucchini, Iain L MacDonald, and Roland Langrock. *Hidden Markov models for time series: an introduction using R*. CRC press, 2017.

## A Fashion Dataset: A Study Of Subsamples Of Time Series Sharing The Same Behaviour

A study of four subsamples of the fashion dataset is proposed in this section. They are defined as follows.

- **disrupting time series**: in the retail and fashion industries, a strategic issue is to correctly anticipate new trends that will burst or collapse in the next weeks, months or years. To detect this subsample of trends in the fashion dataset, the following approach is proposed: using the *snaive* model, a prediction of the last year of data is computed for each trend, together with the associated MASE. The disrupting time series are defined as the 1000 time series where the *snaive* prediction has the highest MASE.

- **stable time series**: by contrast, a group of stable trends is presented. They are the 1000 time series where the *snaive* prediction has the lowest MASE.

- **seasonal time series**: we compute for each trend the seasonality strength metric introduced in Wang et al. (2006). The seasonal time series group comprises the 1000 fashion time series with the highest seasonality strength.

- **noisy time series**: finally, several time series of the fashion dataset represent niche trends only worn and posted on social media by a few people. The average volume of these sequences is low, resulting in sporadic and noisy time series that are difficult to forecast. In order to detect them, we use the seasonality and trend strength metrics presented in Wang et al. (2006).
For each time series, the mean of these two quantities is computed, and the noisy time series group is defined as the 1000 sequences with the lowest average.

Table 6 displays the MASE of each model introduced in Section 4 on the 4 subsamples of time series. Several relevant results can be highlighted. First, models using the influencer weak signals strongly outperform the other candidates on the disrupting time series. This result is even more visible with the HERMES approach, where models using the weak signals always outperform the ones without. These results empirically illustrate the impact of weak signals on the final predictions. Second, the results of *hermes-tbats* on stable and seasonal time series highlight the interest of using a hybrid model. This is even more visible on seasonal time series, where hybrid approaches largely outperform the neural network models of the benchmark. Finally, on noisy time series, the neural network models reach the highest accuracy. Furthermore, for all the HERMES approaches, we can note that the final hybrid predictions and the per-time-series predictions are the same.

## B M4 Weekly Dataset, Ensembling Training And Results

## B.1 M4 Weekly Dataset

The M4 weekly dataset is a collection of 359 time series with contrasting behaviours and sizes. An overview of the dataset is given in Table 7, and some example sequences are shown in Figure 9. This use case is not ideally suited for the HERMES approach, as the dataset is small and there is no clear link between the time series. Moreover, no additional external signals are available that could help the RNN part correct the first errors of the per-time-series predictors.

## B.2 M4 Accuracy Metrics

The M4 competition proposes 3 metrics to evaluate the different approaches: the mean absolute scaled error (MASE), the symmetric mean absolute percentage error (SMAPE) and the overall weighted average (OWA).
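As an illustrative sketch (with synthetic data; the 52-week seasonality and the series counts are assumptions for weekly series, not the paper's actual pipeline), the snaive-based subsample ranking of Appendix A and the metrics just listed can be written as:

```python
import numpy as np

def snaive_forecast(y, season, horizon):
    """Seasonal-naive forecast: repeat the last observed seasonal cycle."""
    last_cycle = y[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last_cycle, reps)[:horizon]

def mase(y_train, y_test, y_pred, season):
    """Mean absolute scaled error, scaled by the in-sample seasonal-naive error."""
    scale = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return np.mean(np.abs(y_test - y_pred)) / scale

def smape(y_test, y_pred):
    """Symmetric mean absolute percentage error."""
    return 2 * np.mean(np.abs(y_test - y_pred) / (np.abs(y_test) + np.abs(y_pred)))

def owa(mase_m, smape_m, mase_bench, smape_bench):
    """Overall weighted average: mean of MASE and SMAPE relative to the benchmark."""
    return 0.5 * (mase_m / mase_bench + smape_m / smape_bench)

# Rank synthetic weekly series by the snaive MASE on their last year:
# highest MASE -> "disrupting", lowest MASE -> "stable".
rng = np.random.default_rng(0)
series = [np.abs(rng.normal(1.0, 0.2, size=5 * 52)).cumsum() for _ in range(20)]
scores = []
for y in series:
    y_train, y_test = y[:-52], y[-52:]
    scores.append(mase(y_train, y_test, snaive_forecast(y_train, 52, 52), 52))
order = np.argsort(scores)
stable_idx, disrupting_idx = order[:5], order[-5:]
```

With these helpers, the worked example from Appendix B.2 follows directly: `owa(7.383, 2.191, 9.161, 2.777)` is approximately 0.797.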
MASE and SMAPE are defined as follows:

$$\mathrm{MASE}={\frac{T-m}{h}}\,{\frac{\sum_{j=1}^{h}\left|Y_{T+j}-\hat{Y}_{T+j}\right|}{\sum_{i=m+1}^{T}\left|Y_{i}-Y_{i-m}\right|}}\,,\qquad\mathrm{SMAPE}={\frac{2}{h}}\sum_{j=1}^{h}{\frac{\left|Y_{T+j}-\hat{Y}_{T+j}\right|}{\left|Y_{T+j}\right|+\left|\hat{Y}_{T+j}\right|}}\,,$$

where h is the forecast horizon and m the length of the seasonality. The final OWA is computed by following these steps: i) compute the average MASE and SMAPE of a model; ii) divide the previous results by the MASE and SMAPE computed with the benchmark method *snaive*; iii) compute the OWA as the average of the relative MASE and SMAPE obtained in step ii). As an example on the M4 weekly dataset, the method *hermes-tbats* has a MASE of 7.383 and a SMAPE of 2.191. The benchmark method *snaive* has a MASE of 9.161 and a SMAPE of 2.777. Thus the OWA of *hermes-tbats* equals 0.797.

## B.3 FFORMA Ensembling With HERMES Variations

In this section, a complete description of the proposed ensembling on the M4 weekly dataset is provided. First, 4 HERMES variations are trained using different per-time-series predictors. The first one

Table 6: Results summary on 4 different subsamples of fashion time series: i) disrupting time series, ii) stable time series, iii) seasonal time series and iv) noisy time series. Models with a ∗ in their name have access to the external signal.
**Disrupting time series:**

| model | MASE mean | std |
|---|---|---|
| snaive | 1.455 | - |
| thetam | 1.314 | - |
| ets | 1.27 | - |
| arima | 1.256 | - |
| tbats | 1.229 | - |
| hermes-thetam | 1.209 | 0.005 |
| hermes-ets | 1.202 | 0.007 |
| stlm | 1.198 | - |
| hermes-tbats | 1.195 | 0.01 |
| prophet | 1.193 | - |
| deepar | 1.18 | 0.03 |
| lstm | 1.15 | 0.01 |
| hermes-thetam-ws* | 1.145 | 0.019 |
| hermes-ets-ws* | 1.131 | 0.024 |
| hermes-tbats-ws* | 1.092 | 0.007 |
| lstm-ws* | 1.086 | 0.009 |

**Stable time series:**

| model | MASE mean | std |
|---|---|---|
| prophet | 0.629 | - |
| thetam | 0.615 | - |
| ets | 0.611 | - |
| arima | 0.565 | - |
| hermes-ets-ws* | 0.555 | 0.007 |
| snaive | 0.536 | - |
| deepar | 0.531 | 0.024 |
| hermes-thetam-ws* | 0.522 | 0.004 |
| hermes-ets | 0.518 | 0.002 |
| stlm | 0.513 | - |
| hermes-thetam | 0.508 | 0.005 |
| tbats | 0.501 | - |
| lstm-ws* | 0.492 | 0.007 |
| hermes-tbats-ws* | 0.477 | 0.008 |
| lstm | 0.47 | 0.004 |
| hermes-tbats | 0.451 | 0.002 |

**Seasonal time series:**

| model | MASE mean | std |
|---|---|---|
| snaive | 0.895 | - |
| ets | 0.895 | - |
| prophet | 0.851 | - |
| deepar | 0.836 | 0.035 |
| lstm-ws* | 0.829 | 0.014 |
| thetam | 0.826 | - |
| lstm | 0.823 | 0.013 |
| hermes-thetam | 0.815 | 0.008 |
| tbats | 0.81 | - |
| hermes-ets-ws* | 0.809 | 0.01 |
| hermes-thetam-ws* | 0.808 | 0.008 |
| arima | 0.805 | - |
| stlm | 0.786 | - |
| hermes-ets | 0.785 | 0.003 |
| hermes-tbats-ws* | 0.777 | 0.012 |
| hermes-tbats | 0.772 | 0.003 |

**Noisy time series:**

| model | MASE mean | std |
|---|---|---|
| snaive | 0.842 | - |
| hermes-ets-ws* | 0.739 | 0.005 |
| hermes-ets | 0.726 | 0.002 |
| ets | 0.721 | - |
| thetam | 0.717 | - |
| stlm | 0.715 | - |
| prophet | 0.698 | - |
| arima | 0.696 | - |
| hermes-thetam-ws* | 0.672 | 0.003 |
| hermes-thetam | 0.669 | 0.001 |
| deepar | 0.661 | 0.007 |
| hermes-tbats-ws* | 0.647 | 0.006 |
| tbats | 0.646 | - |
| hermes-tbats | 0.644 | 0.003 |
| lstm-ws* | 0.637 | 0.004 |
| lstm | 0.636 | 0.003 |

![21_image_0.png](21_image_0.png)

Figure 9: Examples of time series from the M4 weekly dataset. From top to bottom: time series called W10 from the Other category, W20 from the Macro category and W220 from the Finance category.

Table 7: M4 weekly dataset overview. For each category, the number of sequences, the average length and the minimum length are given.

| Category | Nb. of sequences | Avg. length | Min. length |
|---|---|---|---|
| Demographic | 24 | 1659 | 1615 |
| Finance | 164 | 1237 | 260 |
| Industry | 6 | 834 | 356 |
| Macro | 41 | 1264 | 522 |
| Micro | 112 | 473 | 93 |
| Other | 12 | 1598 | 470 |

![22_image_0.png](22_image_0.png)

Figure 10: Forecast examples of HERMES variations on 2 time series of the M4 weekly dataset. At the top, the *W133* time series is displayed with the prediction of the per-time-series predictor *thetam* (green) and the final forecast of the HERMES hybrid model *hermes-thetam* (red). At the bottom, the *W262* time series is represented with the corresponding prediction of the per-time-series predictor *ets-add* (green) and the HERMES correction of *hermes-ets-add* (red).

Table 8: Results summary on the M4 weekly dataset of the HERMES variations. For each metric, the average over all the time series is computed. For approaches using a neural network, 10 models are trained with different seeds; the mean and the standard deviation of the 10 results are displayed. For the statistical models *ets-add*, *ets-mul* and *thetam*, the Python package statsmodels is used. The Python package tbats is used for the *tbats* approach.
| model | SMAPE mean | std | MASE mean | std | OWA mean | std |
|---|---|---|---|---|---|---|
| ets-mul | 8.933 | - | 2.412 | - | 0.922 | - |
| hermes-ets-mul | 8.889 | 0.021 | 2.377 | 0.016 | 0.913 | 0.004 |
| ets-add | 8.929 | - | 2.410 | - | 0.921 | - |
| hermes-ets-add | 8.880 | 0.022 | 2.377 | 0.016 | 0.913 | 0.004 |
| thetam | 7.609 | - | 2.377 | - | 0.843 | - |
| hermes-thetam | 7.590 | 0.012 | 2.359 | 0.010 | 0.839 | 0.002 |
| tbats | 7.409 | - | 2.204 | - | 0.801 | - |
| hermes-tbats | 7.383 | 0.016 | 2.191 | 0.010 | 0.797 | 0.002 |

called *hermes-tbats* uses TBATS and is presented in Section 4.2. The second version is called *hermes-thetam* and uses the Thetam method provided with the Python package statsmodels. The two remaining variations use as per-time-series predictors an additive or a multiplicative exponential smoothing and are called *hermes-ets-add* and *hermes-ets-mul*, respectively. As for Thetam, the Python package statsmodels is used to fit the different exponential smoothing models. Concerning the HERMES architecture, for simplicity, the hyperparameters described in Section 4.2 are used for each version, although a grid search could have been run for each of them. 10 models are trained per version with different seeds, and the best one based on the eval set is kept for the ensemble model. Second, the FFORMA ensembling introduced in Montero-Manso et al. (2020) is used to combine the 4 HERMES methods. The R package M4metalearning containing the FFORMA model is used directly without changing the hyperparameters, imported in Python with the library Rpy2 and combined with the HERMES code base.
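FFORMA itself learns per-series combination weights with a gradient-boosted meta-learner trained on time-series features. As a minimal, hypothetical sketch of the final combination step only (not the actual M4metalearning API, and with uniform weights standing in for the learned ones), the forecasts of the four HERMES variations could be combined as:

```python
import numpy as np

def combine(forecasts, weights):
    """Weighted average of model forecasts.

    forecasts: array of shape (n_models, horizon)
    weights:   array of shape (n_models,); normalized here to sum to 1.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ np.asarray(forecasts, dtype=float)

# Four hypothetical 13-week forecasts (one per HERMES variation).
forecasts = np.array([
    np.full(13, 1.0),   # e.g. hermes-ets-add
    np.full(13, 2.0),   # e.g. hermes-ets-mul
    np.full(13, 3.0),   # e.g. hermes-thetam
    np.full(13, 4.0),   # e.g. hermes-tbats
])
combined = combine(forecasts, [1.0, 1.0, 1.0, 1.0])  # uniform weights
```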
## B.4 M4 Weekly Dataset Results

In addition to the results provided in Section 4.2, Table 8 displays the results of all the HERMES variations included in the FFORMA ensembling, as well as the accuracy of the per-time-series predictors. In each case, the HERMES approaches improve the accuracy of their predictors. These improvements may appear slight, but they are justified given the absence of links between the time series and of additional useful external signals. Nevertheless, efficient corrections can be obtained on some examples, as displayed in Figure 10.

## C Training Parameters And Loss

## C.1 Loss Grid Search On The Fashion Dataset

Using deep learning models in time series forecasting is an appealing way to achieve higher accuracy. However, it induces two main issues. First, it requires a large enough dataset to train the model, as illustrated in Section 4. Second, a dataset can hide contrasting time series in terms of scale, noise and behaviour. These differences can impact training performance. For the HERMES architecture, four candidate losses were considered for the training: the Mean Absolute Error (MAE), the Mean Square Error (MSE), the Scaled Mean Absolute Error (SMAE) and the Scaled Mean Square Error (SMSE). The loss functions

![24_image_0.png](24_image_0.png)

Figure 11: MASE accuracy for the *hermes-tbats-ws* model depending on the loss used during the RNN training. For each loss, 10 models with different seeds have been trained. The mean and the standard deviation are represented with a point and a vertical line.
are defined as follows:

$$\mathrm{MAE}=\frac{1}{h}\sum_{i=1}^{h}\left|y_{T+i}^{n}-\widehat{y}_{T+i|T}^{n}\right|,\qquad\mathrm{MSE}=\frac{1}{h}\sum_{i=1}^{h}\left(y_{T+i}^{n}-\widehat{y}_{T+i|T}^{n}\right)^{2},$$

$$\mathrm{SMAE}=\frac{1}{\widehat{y}_{T}^{n}}\sum_{i=1}^{h}\left|y_{T+i}^{n}-\widehat{y}_{T+i|T}^{n}\right|,\qquad\mathrm{SMSE}=\frac{1}{\widehat{y}_{T}^{n}}\sum_{i=1}^{h}\left(y_{T+i}^{n}-\widehat{y}_{T+i|T}^{n}\right)^{2}.$$

For each loss, 10 *hermes-tbats-ws* models have been trained with different seeds, and the resulting mean and standard deviation are given in Figure 11. The Scaled Mean Absolute Error reaches the lowest MASE and was selected to train all the HERMES models on the fashion dataset.

## C.2 Parameters Grid Search On The M4 Weekly Dataset

In addition to the loss function, the HERMES model also depends on several hyperparameters that must be set correctly in order to reach satisfactory performance. For instance, an overview of the grid search over the learning rate, batch size and number of windows per time series for the M4 weekly dataset is shown in Figure 12. For each parameter, a collection of 10 *hermes-tbats* models has been trained with different seeds and the final OWA was calculated. As in Figure 11, the mean and the standard deviation of each group of 10 trainings are computed. For the final *hermes-tbats* model of the M4 weekly dataset, the following set of parameters was selected: 3 windows per time series were used as the train set, the batch size was set to 8 and the learning rate was fixed to 0.005.

![25_image_0.png](25_image_0.png)

Figure 12: OWA for the *hermes-tbats* model on the eval set of the M4 weekly dataset. 5 hyperparameters used during the RNN training are tested: the number of moving windows per time series (top left), the learning rate (top right), the batch size (middle), the window size for the RNN input (bottom left) and the dimension of the LSTM layers output (bottom right).
For each parameter, 10 models with different seeds have been trained. The mean and the standard deviation of the OWA on the eval set are represented with a point and a vertical line.
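The four candidate losses from Appendix C.1 follow directly from their definitions; in this sketch, `scale` stands for the per-series normalizer (written $\widehat{y}_{T}^{n}$ in the text) of the scaled variants, and the code is an illustration rather than the paper's training implementation:

```python
import numpy as np

def mae(y, y_hat):
    """Mean absolute error over the forecast horizon."""
    return np.mean(np.abs(y - y_hat))

def mse(y, y_hat):
    """Mean square error over the forecast horizon."""
    return np.mean((y - y_hat) ** 2)

def smae(y, y_hat, scale):
    """Scaled MAE: sum of absolute errors divided by a per-series scale."""
    return np.sum(np.abs(y - y_hat)) / scale

def smse(y, y_hat, scale):
    """Scaled MSE: sum of squared errors divided by a per-series scale."""
    return np.sum((y - y_hat) ** 2) / scale

y = np.array([1.0, 2.0])
y_hat = np.array([0.0, 0.0])
```

The scaled variants normalize each series by its own magnitude, which mitigates the scale differences between time series mentioned above.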
Review 1:

Summary:
This paper makes two contributions. First, it provides a large fashion time series dataset generated from a massive collection of images. Second, it proposes a forecasting model in the specific setting of fashion trend prediction. Each fashion trend is basically modeled independently as a univariate signal, but the model is designed to incorporate "weak signals," which may result in some dependency between the univariate time series.

Strengths and Weaknesses:
Strengths
- The dataset is very interesting and should be valuable to the community.
- Introduces many fashion-specific heuristic preprocesses, based on domain knowledge.

Weaknesses
- The description is often hard to follow, possibly due to language issues.
- The novelty of the prediction model looks weak.

Requested Changes:
Improve the description of Section 2.
- Clearly define terms like trend and cloth type using examples.
- Section 2 should be described in a top-down manner if the resulting dataset is claimed to be one of the main contributions. Describe first what the resulting data looks like, then explain how you generated it.
- It is not clear if the authors are claiming technical contributions in the preprocessing approaches. If so, the authors need to validate the specific choices like (1). The figure looks like a cherry-picked example.

Section 3
- The univariate model adopted is not clearly described. What is the model?
- The term "error correction" sounds misleading as there is no such explicit algorithm to compute the error. Perhaps the second term of (3) should be called just the correction term.
- I need methodological validation of (4). Apparently, the scaling of the input of the RNN is critical for the RNN term to serve as the coefficient of the correction term.

Broader Impact Concerns:
The dataset is valuable and the paper makes a good contribution to the community.
==================================================

Review 2:

Summary:
- **Constructing a new fashion time series.** This paper presents large-scale time series data of fashion trends by processing photos posted on social networking services (SNS). The data is obtained from markets in five countries. A unique feature is the inclusion of posts by influencers tagged by domain experts.
- **Neural network models for forecasting fashion time series.** The authors propose a method that combines parametric models that capture the local behavior of time series and RNN models to include external signals as input.
- **Numerical experiments.** The effectiveness of the proposed method is evaluated using original and public data.

Strengths and Weaknesses:
S1. Creating and publishing a new fashion time series is significant because it promotes research in machine learning and data mining. The data is interesting in that it includes influencer data tagged by domain experts.
S2. Although the proposed method is somewhat incremental, it attempts to design for the characteristics of the newly created data.
S3. Experiments show that the proposed method performs better than the baseline in forecasting fashion trends.

W1. The method of dataset creation (especially human processing) is unclear. Discussion of the nature and reliability of the dataset is also lacking.
W2. There is room for improvement in the explanation of the proposed method.
W3. Although the authors compared the proposed method with various baselines in the experiments, there is insufficient discussion of the advantages and disadvantages of the proposed method. The results need to be organized and discussed.

Requested Changes:
- The reliability of the data is important for the use of this data by third parties. Please explain in detail how the human processing was done. Also, to what extent can machine errors occur in the image processing? Do humans correct them?
- In publishing the data, it would be good to add a little more analysis of the characteristics of the data. For example, basic statistics for each market, statistics for the weak signal $w_t^{f, i}$, etc.
- It is written that $i$ represents the index of fashion trends, but it is unclear what exactly it refers to. Does it correspond to a product/item?
- I didn't know what statistic $\hat{y}^{c,g,m}$ was. Could you please explain it in detail?
- It may be a good idea to modify the structure of Section 3.
  - It is better to explain the hybrid model (4) first. Then, elaborate on the motivation behind the formulation of (4). The output of the RNN appears to represent the weights in summing $f^n(\cdot)$ and $\bar{y}_T^n$. Is that correct? Also, when is one of $y^{pred}$ and $y^{corr}$ more dominant than the other?
  - The cost function described in Section 4.1 should be moved to Section 3.
- It would be good to add a discussion of the limitations of this study.
  - Other external factors could be various, such as advertisements. This study focuses on the correlation between weak signals and trends, not causality. This should be clearly stated.
  - As seen in Figure 2, the effect of the external factor is accompanied by a time delay. It would be interesting to analyze how such time delays vary by fashion category and market. It would also be interesting to incorporate this into machine learning models.
- In Table 2, it would be good to clarify which methods can handle weak signals and which cannot. Then, please clarify by experiment that it is effective to construct a hybrid model by the weighted sum of two terms as in the proposed method (4).
- Minor issues:
  - The term "causal inference" appears in the abstract, but it is not discussed in this study and should be changed to another term.
  - The proposed method, HERMES, does not specify what it stands for.
  - Please cite Figure 1 in the text.
  - Please change the "t" in "At each time t" in Section 2.1 to italic.
Broader Impact Concerns: N/A

==================================================

Review 3:

Summary:
This work proposes a hybrid framework for time-series forecasting. Another contribution of the work is a new time-series dataset, which tracks aggregated social media posts reflecting different fashion trends in various clothing categories and regions over a period of five years. The dataset additionally tracks social media posts of influencers across trends, categories and regions, which is to be utilised as an auxiliary signal for forecasting common-user behavior/posts. The authors fine-tune pre-trained image classifiers to set up an automated image segmentation and categorization pipeline. The pipeline is used to detect and classify fashion articles in posted images into a predefined set of categories. Images within each category are analyzed by fashion experts to identify and track various trends over time. At the core, the hybrid forecasting framework uses a statistical forecasting model, which is optimized individually for each time-series. A recurrent neural network (RNN) is trained across all time-series to correct prediction errors of the per time-series statistical models. The authors employ available implementations of a number of statistical models in their framework. Benchmarking results on the proposed dataset indicate that the hybrid combination of a statistical model and the error-corrector RNN outperforms other such combinations as well as individual instances of statistical and RNN models. The authors also compare variations of the RNN trained with and without the influencer activity signals to demonstrate that the best results are obtained when the signals are included as input. Experiments on smaller subsets of the fashion dataset are performed to demonstrate the applicability of the proposed model to small datasets. Finally, the hybrid forecasting model and its ensemble version are shown to outperform other comparable approaches.
Strengths and Weaknesses:
Strengths:
- The paper is clearly written and easy to follow.
- Contribution of a new time-series dataset in the fashion domain.
- Competitive results on the M4 benchmark as well as on the proposed fashion dataset.
- The work evaluates a number of models.

Weaknesses:
- The proposed method only provides point-forecasts. It does not provide any uncertainty bounds around predictions.
- Model optimization and evaluation only done for a single split across time (3 years train, 1 year validation, 1 test).
- Results on smaller subsets of data do not show model performance with weak signals included as input. Trend classification performance also not reported for smaller datasets.

Requested Changes:
The evaluation would be more complete if performance for the model trained with weak signals is reported for smaller subsets of data. Also the classification performance should be reported for smaller datasets. The authors should also consider reporting results for multiple temporal splits of data.

It seems that there is no concept of trend birth and death in the proposed dataset. It would be nice if the authors can clarify if that is true and why is that the case.

Broader Impact Concerns:
The work releases a new dataset in the public domain, which is derived from social media activity of a large number of users across multiple regions. It would be in order if the authors clarify whether the data collection or the release of the fashion dataset has any implications for the privacy of the individuals whose data were utilized for this work.

==================================================

Review 4:

Summary:
This paper presents a new hybrid model for time series forecasting, called HERMES, that combines per-time-series parametric models with a global recurrent neural network (RNN) to correct the errors of the former.
The paper also introduces a new fashion dataset that contains 10000 weekly fashion time series with external weak signals derived from social media influencers. The paper evaluates the performance of HERMES on the fashion dataset and the M4 weekly dataset, and compares it with several statistical and deep learning benchmarks. The paper shows that HERMES can leverage the external weak signals to improve the accuracy of forecasting and the detection of emerging and declining trends. The paper also demonstrates that HERMES is robust to different sizes of datasets and different choices of per-time-series models. The paper contributes to the literature on hybrid models for time series forecasting and provides a novel application to the fashion industry.

Strengths and Weaknesses:
Strengths:
1. The paper proposes a novel hybrid model that can leverage external weak signals to improve the forecasting accuracy and the detection of emerging and declining trends.
2. The paper introduces a new fashion dataset that is large, diverse, and rich in information. The dataset can be useful for other researchers and practitioners in the fashion industry.
3. The paper evaluates the performance of HERMES on two datasets and compares it with several statistical and deep learning benchmarks. The paper shows that HERMES outperforms the alternatives in terms of MASE and accuracy metrics.

Weaknesses:
1. The paper does not provide a clear motivation or literature review for the choice of per-time-series models.
2. The paper could benefit from a more detailed discussion of the limitations of the proposed approach and potential avenues for future work.
3. The paper could also benefit from a more thorough comparison with existing methods, including a discussion of their strengths and weaknesses in relation to the proposed approach.

Requested Changes:
1. The paper should provide a clear motivation or literature review for the choice of per-time-series models.
2.
The paper should explain why exponential smoothing and TBATS are chosen as the predictors, and how they compare to other possible choices such as ARIMA or HMM.
3. The paper should compare with more existing methods.
4. The paper should explain how HERMES handles different types of time series, such as stationary, non-stationary, seasonal, or sporadic.

Broader Impact Concerns:
1. The paper should discuss the potential positive and negative impacts of the proposed model and dataset on society and the environment.
2. The paper should address the privacy and consent issues of using social media images and data to construct the fashion dataset and the weak signals.
3. The paper should explain how the data is collected, processed, and anonymized, and what the ethical and legal implications of this process are.
4. The paper should also acknowledge the potential risks or biases of using social media data to represent fashion trends, such as cultural diversity, inclusivity, or sustainability.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: This paper proposes a new fashion time-series dataset and an RNN-based time-series forecasting method. Specifically, it first offers an extensive fashion time series dataset generated from a large collection of images. Secondly, it introduces a forecasting model tailored to the realm of predicting fashion trends. While each fashion trend is treated as an independent univariate signal, the model is constructed to accommodate "weak signals," potentially introducing interdependencies among these individual time series. Overall, the paper is well written. The issues raised by the reviewers have been adequately resolved. Therefore, I also recommend accepting it. As a minor change, the size of the figures in Figure 12 is not consistent. Please make it consistent in the final version.

==================================================
# The Sparse Matrix-Based Random Projection: An Analysis Of Matrix Sparsity For Classification

Anonymous authors

Paper under double-blind review

## Abstract

In this paper, we study the sparse $\{0, \pm 1\}$-matrix based random projection, which has been widely applied in classification to reduce the data dimension. For such sparse matrices, it is of computational interest to explore the minimum number of nonzero entries $\pm 1$ that supports achieving the best or nearly best classification performance. To achieve this, we analyze the impact of matrix sparsity on the $\ell_1$ distance between projected data points. The analysis is inspired by principal component analysis, which says that a larger distance between projected data points should better capture the variation among the original data, and thus yield better classification performance. Theoretically, the $\ell_1$ distance between projected data points is related not only to the sparsity of the sparse matrices, but also to the distribution of the original data. Without loss of generality, we consider two typical data distributions, the Gaussian mixture distribution and the two-point distribution, which have been widely used to model the distributions of real data. With these two data distributions, we estimate the $\ell_1$ distance between projected data points. It is found that sparse matrices with only one or at most dozens of nonzero entries per row can provide comparable or even larger $\ell_1$ distances than other, denser matrices, for matrix sizes $m \geq O(\sqrt{n})$. Accordingly, a similar performance trend should also be obtained in classification. This is confirmed with classification experiments on real data of different types, including image, text, gene and binary quantization data.

## 1 Introduction

Random projection is an important unsupervised dimensionality reduction technique that simply projects high-dimensional data to low-dimensional subspaces by multiplying the data with random matrices (Johnson & Lindenstrauss, 1984).
The projection can approximately preserve the pairwise $\ell_2$ distance between original data points, or in other words preserve the structure of the original data, and is thus applicable to classification (Bingham & Mannila, 2001; Fradkin & Madigan, 2003; Wright et al., 2009). To achieve the $\ell_2$ distance preservation property, random projection matrices need to follow certain distributions, such as Gaussian matrices (Dasgupta & Gupta, 1999) and sparse $\{0, \pm 1\}$-ternary matrices (shortly called sparse matrices hereafter) (Achlioptas, 2003). In practical applications, sparse matrices are preferred for their much lower complexity in both storage and computation. Considering that random projection is often applied to computationally intensive large-scale classification tasks, it is highly desirable to minimize its complexity. For this purpose, we propose to explore the minimum number of nonzero entries $\pm 1$ that enables the projected data to achieve the best or nearly best classification performance. To the best of our knowledge, no previous study has investigated this problem. Existing research on random projection is mainly devoted to exploring the distribution of random matrices that well holds the distance preservation property; more precisely, keeping the *expectation* of the pairwise distance between original data points unchanged after random projection and rendering the *variance* relatively small (Dasgupta & Gupta, 1999; Achlioptas, 2003). For a sparse matrix with properly scaled entries, it has been proved that the distance preservation property holds in the $\ell_2$ norm (Achlioptas, 2003; Li et al., 2006), but not in the $\ell_1$ norm (Brinkman & Charikar, 2003; Li, 2007).
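The $\ell_2$ distance preservation property discussed above is easy to verify empirically. The following is an illustrative sketch (with synthetic data; the sparse matrix here is an Achlioptas-style ternary matrix with density 1/3, one standard choice from the cited line of work, not the $k$-sparse construction analyzed later in this paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 400
x = rng.normal(size=n)  # difference of two data points, x = h - h'

# Dense Gaussian projection, scaled so that E||Rx||_2^2 = ||x||_2^2.
R_gauss = rng.normal(size=(m, n)) / np.sqrt(m)

# Achlioptas-style sparse ternary projection: entries sqrt(3) * {+1, 0, -1}
# with probabilities 1/6, 2/3, 1/6, scaled by 1/sqrt(m).
vals = rng.choice([1.0, 0.0, -1.0], size=(m, n), p=[1 / 6, 2 / 3, 1 / 6])
R_sparse = np.sqrt(3) * vals / np.sqrt(m)

d0 = np.linalg.norm(x)
ratio_gauss = np.linalg.norm(R_gauss @ x) / d0
ratio_sparse = np.linalg.norm(R_sparse @ x) / d0
# Both ratios concentrate around 1: the l2 distance is approximately preserved.
```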
Here it is noteworthy that although the $\ell_2$ distance preservation property enables random projection to be applied in classification, it can hardly be used to analyze the impact of matrix sparsity (namely the number of nonzero entries) on the follow-on classification, since the classification accuracy depends on the discrimination between projected data points, rather than on the invariance of the data structure. For instance, it has been proved that the $\ell_2$ distance preservation property tends to become worse as the matrix becomes sparser (Li et al., 2006), namely as it contains fewer nonzero entries $\pm 1$. However, it is empirically observed that a sparser matrix structure does not imply worse classification performance; and usually, very sparse matrices, such as those with only one or dozens of nonzero entries per row, can achieve comparable or even better classification performance than other, denser matrices. For this intriguing behavior, in this paper we provide reasonable theoretical explanations by analyzing the *variation* of the $\ell_1$ distance between projected data points. From the early research on principal component analysis (PCA) (Jolliffe, 2002), it is known that the projection onto a *larger* principal component will yield *larger* pairwise distances (equivalently, larger variances) for the projected data points, while the larger distance tends to *better* capture the variation (i.e., statistical information) of the original data (Jolliffe & Cadima, 2016), and thus provide *better* classification performance (Turk & Pentland, 1991). In sparse matrix based random projection, the $\ell_1$ distance between projected data points is related not only to the sparsity of the random matrices, but also to the distribution of the original high-dimensional data. To analyze the impact of matrix sparsity on the $\ell_1$ distance, we first need to model the distribution of the original data.
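The PCA intuition invoked above can be checked numerically: projecting anisotropic data onto the leading principal component yields a larger variance (hence larger pairwise distances) of the projected points than projecting onto a minor component. A small illustrative sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Anisotropic Gaussian data: standard deviation 3 along axis 0, 1 along axis 1.
X = rng.normal(size=(1000, 2)) * np.array([3.0, 1.0])

cov = np.cov(X.T)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
minor, major = eigvecs[:, 0], eigvecs[:, 1]  # minor / leading components

var_major = np.var(X @ major)  # close to 9 (the leading eigenvalue)
var_minor = np.var(X @ minor)  # close to 1
```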
Without loss of generality, we consider two typical data distributions: the Gaussian mixture distribution and the two-point distribution. The former has been widely used to model the distribution of natural data (Torralba & Oliva, 2003; Weiss & Freeman, 2007) or their sparse transforms (Wainwright & Simoncelli, 1999; Lam & Goodman, 2000), while the latter can be used to model the distribution of binary data, which often occurs in various quantization tasks (Gionis et al., 1999; Hubara et al., 2016; Yang et al., 2019). Benefiting from these two general distributions, as shown later, our theoretical analysis results can be applied to a variety of real data. Given the two data distributions, by varying the sparsity of the sparse matrices, we analyze the *expected* $\ell_1$ distance between projected data points and obtain the following two results: 1) the maximum distance tends to be achieved by sparse matrices with only one nonzero entry per row when the discrimination between the two classes of original data is sufficiently high; 2) the distance tends to converge to a constant value as the number of nonzero entries per row increases, and relatively small convergence errors are already achieved when the sparse matrices contain only dozens of nonzero entries per row. To summarize, these two results imply that sparse matrices with only one or at most dozens of nonzero entries per row perform comparably to or even better than denser matrices in the task of enlarging the expected $\ell_1$ distance between projected data points. Accordingly, these matrices should also exhibit similar performance trends in classification. Note that the above analysis is built upon the *expectation* of the $\ell_1$ distance. To enable the expected distance to be approximated by an actual matrix of size $m \times n$, we need $m \geq O(\sqrt{n})$.
In the experiments, we verify the performance advantage of the aforementioned sparse matrices by conducting classification experiments on real data of different types, including image, text, gene, and binary quantization data. The major contributions of this work can be summarized as follows:

- For random projection based on sparse $\{0, \pm 1\}$-matrices, we investigate for the first time the impact of matrix sparsity on classification, by analyzing random projection from the viewpoint of *distance variation* rather than the conventional *distance preservation*. The analysis is inspired by the early research on PCA (Jolliffe & Cadima, 2016; Turk & Pentland, 1991): larger distances between projected data points should better account for the variation among the original data and hence yield better classification performance.

- By theoretical and numerical analysis, we find that sparse matrices with only one, or at most dozens of, nonzero entries per row tend to achieve comparable or even better classification performance than denser matrices, provided the original data follow a Gaussian mixture or two-point distribution and the matrices have size $m \geq O(\sqrt{n})$. This implies that we can drastically reduce the complexity of random projection matrices without losing, or while even improving, classification performance.

- The above analytical results are closely verified by simulations and experiments. The high *consistency* between theory and practice can be attributed to the good generalizability of the two aforementioned distributions adopted to model the original data, which has been recognized in the modeling of various types of data (Torralba & Oliva, 2003; Weiss & Freeman, 2007; Wainwright & Simoncelli, 1999; Lam & Goodman, 2000).

## 2 Problem Formulation

Consider the random projection of two data points $\mathbf{h}, \mathbf{h}' \in \mathbb{R}^n$ over a sparse random matrix $\mathbf{R} \in \{0, \pm 1\}^{m \times n}$.
For the matrix $\mathbf{R}$, we attempt to estimate the minimum number of nonzero entries $\pm 1$ that maximizes the expected $\ell_1$ distance $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ between the projections of $\mathbf{h}$ and $\mathbf{h}'$, where $\mathbf{x} = \mathbf{h} - \mathbf{h}'$. As discussed before, the maximum $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ is expected to provide the best classification performance. To determine the minimum sparsity, we need to investigate the trend of $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ as the sparsity of $\mathbf{R}$ varies. It can be seen that this estimation depends on the distributions of both the matrix $\mathbf{R}$ and the data $\mathbf{h}$. So in the following, we first model the distributions of sparse matrices and real data, and then give the $\ell_1$ estimation model.

Notation. Throughout the work, we denote a matrix by a bold upper-case letter $\mathbf{R} \in \mathbb{R}^{m \times n}$, a vector by a bold lower-case letter $\mathbf{r} = (r_1, r_2, \ldots, r_n)^\top \in \mathbb{R}^n$, and a scalar by a lower-case letter $r_i$ or $r$. Sometimes, we use the bold letter $\mathbf{r}_i \in \mathbb{R}^n$ to denote the $i$-th row of $\mathbf{R} \in \mathbb{R}^{m \times n}$. For ease of presentation, we defer all proofs to Appendix A.

## 2.1 The Distribution Of Sparse Matrices

The sparse random matrix $\mathbf{R}$ we aim to study is specified in Definition 1. It has a parameter $k$ counting the number of nonzero entries per row, and is simply called $k$-sparse to distinguish between matrices of different sparsity. Instead of the form $\mathbf{R} \in \{0, \pm 1\}^{m \times n}$, the definition introduces a scaling parameter $\sqrt{\frac{n}{mk}}$ to make the matrix entries have zero mean and unit variance. With this distribution, the matrix holds the $\ell_2$ distance preservation property, i.e., it keeps the expected $\ell_2$ distance between original data points unchanged after random projection (Achlioptas, 2003). Note that the scaling parameter can be omitted in practical applications for easier computation; the omission does not change the relative distances between projected data points and thus does not affect the subsequent classification performance.

Definition 1 ($k$-sparse random matrix).
A $k$-sparse random matrix $\mathbf{R} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^{m \times n}$ is defined by the following structural properties:

- each row vector $\mathbf{r} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^n$ contains exactly $k$ nonzero entries, $1 \leq k \leq n$;
- the positions of the $k$ nonzero entries are arranged uniformly at random;
- each nonzero entry takes the bipolar values $\pm\sqrt{\frac{n}{mk}}$ with equal probability.

## 2.2 The Distribution Of Original Data

For the original high-dimensional data $\mathbf{h} \in \mathbb{R}^n$, as discussed before, we investigate two typical distributions, the two-point distribution and the Gaussian mixture distribution. Since the expected $\ell_1$ distance $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ is directly related to the pairwise difference $\mathbf{x}$ between the two original data points $\mathbf{h}$ and $\mathbf{h}'$, namely $\mathbf{x} = \mathbf{h} - \mathbf{h}' = (x_1, x_2, \ldots, x_n)^\top$, we describe the distribution of $\mathbf{x}$ for original data following the two given distributions.

## 2.2.1 Two-Point Distribution

Suppose that the two high-dimensional data points $\mathbf{h}, \mathbf{h}' \in \{\mu_1, \mu_2\}^n$ have each entry independently following a two-point distribution, where $\mu_1$ and $\mu_2$ are two arbitrary constants. Then the difference $\mathbf{x}$ between $\mathbf{h}$ and $\mathbf{h}'$ has each entry $x_i$ independently following a ternary discrete distribution

$$x_i \sim \mathcal{T}(\mu, p, q) \tag{1}$$

with probability mass on $t \in \{-\mu, 0, \mu\}$ under the probabilities $\{q, p, q\}$, where $\mu = \mu_1 - \mu_2$ and $p + 2q = 1$.
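Both ingredients defined so far — the $k$-sparse matrix of Definition 1 and the ternary difference distribution (1) — are straightforward to sample. The sketch below is our own illustration with arbitrarily chosen sizes; it also checks the effect of the $\sqrt{n/(mk)}$ scaling, namely the $\ell_2$ preservation in expectation noted in Section 2.1, $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_2^2 = \|\mathbf{x}\|_2^2$.

```python
import numpy as np

rng = np.random.default_rng(1)

def k_sparse_matrix(m, n, k, rng):
    """Sample R in {0, +-sqrt(n/(m k))}^{m x n} per Definition 1: each row has
    exactly k nonzeros at uniformly random positions, each +-sqrt(n/(m k))
    with equal probability."""
    R = np.zeros((m, n))
    scale = np.sqrt(n / (m * k))
    for i in range(m):
        idx = rng.choice(n, size=k, replace=False)
        R[i, idx] = rng.choice([-scale, scale], size=k)
    return R

def ternary_difference(n, p, mu, rng):
    """Entries i.i.d. from T(mu, p, q): {-mu, 0, mu} with probabilities {q, p, q}."""
    q = (1 - p) / 2
    return rng.choice([0.0, mu, -mu], size=n, p=[p, q, q])

m, n, k = 50, 200, 3
x = ternary_difference(n, p=1 / 3, mu=1.0, rng=rng)
# Averaging ||Rx||_2^2 over many draws of R should recover ||x||_2^2.
sq_norms = [np.sum((k_sparse_matrix(m, n, k, rng) @ x) ** 2) for _ in range(2000)]
print(np.mean(sq_norms), np.sum(x ** 2))
```

The two printed values should nearly coincide; for any fixed $\mathbf{x}$, each row contributes $\|\mathbf{x}\|_2^2/m$ in expectation under the chosen scaling.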
## 2.2.2 Gaussian Mixture Distribution

When the two data points $\mathbf{h}, \mathbf{h}' \in \mathbb{R}^n$ have each entry independently following a Gaussian mixture distribution, the difference $\mathbf{x} = \mathbf{h} - \mathbf{h}'$ remains a Gaussian mixture (Andrews & Mallows, 1974), which allows each entry $x_i$ to be modeled as

$$x_i \sim \mathcal{M}(\mu, \sigma^2, p, q) \tag{2}$$

with the probability density function

$$f(t) = p f_{\mathcal{N}}(t; 0, \sigma^2) + q f_{\mathcal{N}}(t; \mu, \sigma^2) + q f_{\mathcal{N}}(t; -\mu, \sigma^2) \tag{3}$$

where $f_{\mathcal{N}}(t; \mu, \sigma^2)$ denotes the density function of $t \sim \mathcal{N}(\mu, \sigma^2)$, and the parameters are subject to $p, q \geq 0$ and $p + 2q = 1$.

## 2.3 The $\ell_1$ Distance Estimation Model

With the distributions defined for the original data points $\mathbf{h}, \mathbf{h}' \in \mathbb{R}^n$ and the $k$-sparse random matrix $\mathbf{R} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^{m \times n}$, our goal is to analyze how the expected $\ell_1$ distance $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ (with $\mathbf{x} = \mathbf{h} - \mathbf{h}'$) changes with the matrix sparsity $k$, and to determine the minimum sparsity $k$ that corresponds to the maximum or nearly maximum $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$. Notice that $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1 = m \mathrm{E}|\mathbf{r}^\top \mathbf{x}|$, since each row $\mathbf{r} \in \mathbb{R}^n$ of $\mathbf{R}$ follows an independent and identical distribution by Definition 1. This equivalence implies that $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ shares the same trend as $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ when $k$ varies. For ease of analysis, in the following we therefore investigate the change of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ rather than $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ against varying $k$.

## 3 The $\ell_1$ Distance Estimation With Two-Point Distributed Data

In this section, we investigate the change of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ against varying matrix sparsity $k$, provided that the original data $\mathbf{h}, \mathbf{h}'$ are drawn from two-point distributions, such that their difference $\mathbf{x} = \mathbf{h} - \mathbf{h}'$ has i.i.d. entries $x_i \sim \mathcal{T}(\mu, p, q)$, as specified in (1).

## 3.1 Theoretical Analysis

Theorem 1. Let $\mathbf{r}$ be a row vector of a $k$-sparse random matrix $\mathbf{R} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^{m \times n}$, and $\mathbf{x} \in \mathbb{R}^n$ with i.i.d.
entries $x_i \sim \mathcal{T}(\mu, p, q)$. It can be derived that

$$\mathrm{E}|\mathbf{r}^\top \mathbf{x}| = 2\mu\sqrt{\frac{n}{mk}} \sum_{i=0}^{k} C_k^i p^i q^{k-i} \left\lceil \frac{k-i}{2} \right\rceil C_{k-i}^{\lceil \frac{k-i}{2} \rceil} \tag{4}$$

and

$$\mathrm{Var}(|\mathbf{r}^\top \mathbf{x}|) = \frac{2q\mu^2 n}{m} - \frac{4\mu^2 n}{mk} \left( \sum_{i=0}^{k} C_k^i p^i q^{k-i} \left\lceil \frac{k-i}{2} \right\rceil C_{k-i}^{\lceil \frac{k-i}{2} \rceil} \right)^2 \tag{5}$$

where $C_k^i$ is the binomial coefficient $\binom{k}{i}$ and $\lceil \alpha \rceil = \min\{\beta : \beta \geq \alpha, \beta \in \mathbb{Z}\}$. By (4), $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ satisfies the following two properties:

(P1) When $p \leq 0.188$, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ has its maximum at $k = 1$.

(P2) When $k \to \infty$, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ converges to a constant:

$$\lim_{k \to \infty} \frac{\sqrt{m}}{\mu\sqrt{n}} \mathrm{E}|\mathbf{r}^\top \mathbf{x}| = 2\sqrt{q/\pi}, \tag{6}$$

and the convergence error for finite $k$ is upper-bounded by

$$\left| \frac{\sqrt{m}}{\mu\sqrt{n}} \mathrm{E}|\mathbf{r}^\top \mathbf{x}| - 2\sqrt{q/\pi} \right| \leq \frac{\sqrt{\pi} + \sqrt{2}}{\sqrt{\pi k}}. \tag{7}$$

Figure 1: The value of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ calculated by (4) with $p = 1/3$ (a) and $p = 2/3$ (b), and estimated by statistical simulation with $p = 1/3$ (c) and $p = 2/3$ (d), provided $x_i \sim \mathcal{T}(\mu, p, q)$, $\mu = 1$.

Remarks on Theorem 1: P1 and P2 characterize the trends of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ against varying matrix sparsity $k$, which are further discussed as follows.

- By P1, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ achieves its maximum value at $k = 1$ if the probability $p$ of $x_i = 0$ is sufficiently small ($\leq 0.188$). This condition means that the difference $\mathbf{x}$ between the two data points $\mathbf{h}$ and $\mathbf{h}'$ should have a sufficient number of nonzero entries, which also implies that the two data points $\mathbf{h}$ and $\mathbf{h}'$ should be sufficiently distinguishable from each other.
Then we can say that for two discriminative data distributions, the best classification performance should be attainable with very sparse random matrices of sparsity $k = 1$, by virtue of the maximum $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ achieved at $k = 1$.

- By P2, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ converges to a constant that depends on the data distribution and matrix size as $k$ tends to infinity. Note that in (6) we describe the convergence with $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ instead of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$: both quantities share the same trend against varying $k$, but the former has fewer parameters, involving only $k$ and $p$. Moreover, the convergence error, namely the difference between the values of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ for finite and infinite $k$, is upper-bounded in (7), and the bound indicates a convergence speed of $O(1/\sqrt{k})$. From the bound (7), it is easy to further derive that

$$\frac{\left| \frac{\sqrt{m}}{\mu\sqrt{n}} \mathrm{E}|\mathbf{r}^\top \mathbf{x}| - 2\sqrt{q/\pi} \right|}{2\sqrt{q/\pi}} \leq \eta, \quad \text{if } k \geq \frac{(\sqrt{\pi} + \sqrt{2})^2}{4q\eta^2} \tag{8}$$

where $\eta$ can be an arbitrary positive constant. It is seen that $\eta$ upper-bounds the ratio between the convergence error and the convergence value (the convergence error ratio for short), and for any given bound $\eta$, there exists a lower bound on the sparsity $k$ that guarantees it. This means that beyond the lower bound on $k$ derived for a small $\eta$, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ takes similar values for different $k$, and accordingly, different $k$ should yield similar classification performance. Then we can say that sparse matrices with small $k$ (around the lower bound) provide classification performance comparable to that of denser matrices with larger $k$. Note that the lower bound on $k$ derived in (8) contains slack; in practice the required $k$ tends to be much smaller, as demonstrated in the following numerical analysis.
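Both the maximum at $k = 1$ (P1) and the limit in (6) can be checked by evaluating the closed form (4) directly. The sketch below is our own illustration; it computes the normalized expectation $\mathrm{E}|\mathbf{r}^\top\mathbf{x}|/(\mu\sqrt{n/m}) = \frac{2}{\sqrt{k}}\sum_i C_k^i p^i q^{k-i} \lceil\frac{k-i}{2}\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}$.

```python
from math import comb, ceil, sqrt, pi

def normalized_expected_l1(k, p):
    """E|r^T x| / (mu * sqrt(n/m)) evaluated from Eq. (4):
    (2/sqrt(k)) * sum_i C(k,i) p^i q^(k-i) * ceil((k-i)/2) * C(k-i, ceil((k-i)/2))."""
    q = (1 - p) / 2
    s = sum(comb(k, i) * p**i * q**(k - i)
            * ceil((k - i) / 2) * comb(k - i, ceil((k - i) / 2))
            for i in range(k + 1))
    return 2 * s / sqrt(k)

# p = 0.15 satisfies the P1 condition (p <= 0.188), so the maximum sits at k = 1;
# p = 2/3 > 1/3, so the maximum moves away from k = 1 (cf. P3).
for p in (0.15, 2 / 3):
    vals = [normalized_expected_l1(k, p) for k in range(1, 101)]
    limit = 2 * sqrt(((1 - p) / 2) / pi)   # the k -> infinity limit from (6)
    print(p, vals[0], max(vals), limit)
```

At $k = 1$ the formula reduces to $2q$, and the values for large $k$ approach the limit $2\sqrt{q/\pi}$ well within the slack of bound (7).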
## 3.2 Numerical Analysis

To examine more closely the trends of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ against varying matrix sparsity $k$ (derived in P1 and P2), we directly compute the value of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ by (4). Note that besides the parameter $k$, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ also involves the parameter $p$, the probability of $x_i = 0$ as specified in (1). So we investigate $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ over $k \in [1, 500]$ for different $p \in (0, 1)$. For brevity, we only provide the results for $p = 1/3$ and $2/3$ in Figs. 1(a) and (b); see the supplement for more results. Based on the numerical results, we revisit the two trends described in P1 and P2 and obtain more positive conclusions:

Figure 2: The convergence error ratios of three different $k \in \{10, 20, 30\}$ over varying $p$, derived for two-point distributed data (a) and Gaussian mixture data (b) by computing the left-hand side of the inequality on $\eta$ shown in Eqs. (8) and (14), respectively. The parameters involved in the computation are set as introduced in the corresponding numerical analysis.

(P3) When $p \leq 1/3$, such as the case of $p = 1/3$ shown in Fig. 1(a), $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ tends to achieve its maximum value at $k = 1$, but at larger $k$ when $p > 1/3$, such as the case of $p = 2/3$ illustrated in Fig. 1(b). Compared to the theoretical result P1, the numerical result relaxes the upper bound on $p$ from $0.188$ to $1/3$, enlarging the data space that allows the maximum $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ to be reached at $k = 1$. More precisely, the relaxed condition requires each entry $x_i$ of $\mathbf{x}$ to be nonzero with a probability greater than $2/3$, instead of greater than $0.812$ (as required by P1). Superficially, this reduces the demand on the number of nonzero entries occurring in the difference vector $\mathbf{x}$ between two data points $\mathbf{h}$ and $\mathbf{h}'$; essentially, it reduces the requirement on the discrimination between $\mathbf{h}$ and $\mathbf{h}'$.

(P4) With increasing $k$, as in the two cases of $p = 1/3$ and $2/3$ shown in Figs.
1(a) and (b), $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ tends to converge to the limit value $2\sqrt{q/\pi}$ derived in (6), where $q = (1-p)/2$. Furthermore, the convergence is fast, allowing small convergence errors to be reached with small $k$, typically in the range of a few tens. For instance, in Fig. 2(a) we derive the convergence error ratios as defined in (8), which take values close to zero when $k \geq 20$ and $p$ is relatively small. Recall that a small $p$ corresponds to original data with high discrimination. As the data discrimination decreases, larger $k$ is needed to achieve small convergence errors.

Besides the expectation $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ of pairwise distances discussed above, the variance $\mathrm{Var}(|\mathbf{r}^\top \mathbf{x}|)$ of pairwise distances derived in (5) is also a factor that may affect classification performance. By computing (5), interestingly, we find that with increasing $k$, $\mathrm{Var}(|\mathbf{r}^\top \mathbf{x}|)$ exhibits a trend opposite to that of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$; see the supplement for details. In other words, a larger expectation corresponds to a smaller variance. Since both larger expectations and smaller variances are favorable to classification, the two factors yield consistent estimates of classification performance.

## 3.3 Statistical Simulation

To verify the correctness of Theorem 1, including the expression (4) for $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ and its two properties P1 and P2, we estimate the expectation $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ (against varying $k$) by averaging over statistically generated samples of $\mathbf{r}$ and $\mathbf{x}$. If the theorem is correct, the statistical simulation results should be consistent with the numerical analysis results P3 and P4 (derived from Theorem 1). The simulation proceeds as follows. First, we randomly generate $10^6$ pairs of $\mathbf{r}$ and $\mathbf{x}$ from their respective distributions, i.e., $\mathbf{r} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^n$ with $k$ nonzero entries randomly placed, and $\mathbf{x}$ with i.i.d. $x_i \sim \mathcal{T}(\mu, p, q)$.
Then, the average value of $|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ is taken as the final estimate of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$. The parameters for the distributions of $\mathbf{r}$ and $\mathbf{x}$ are set as follows: $m = 1$, $n = 10^4$, $\mu = 1$, and $p = 1/3$ or $2/3$. The data dimension $n = 10^4$ allows us to increase $k$ from $1$ to $10^4$. The average value of $|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ at each $k$ is provided in Figs. 1(c) and (d), for the cases of $p = 1/3$ and $2/3$, respectively. Note that the choices of $m$, $n$, and $\mu$ do not affect the trend of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$ against $k$.

Comparing the numerical analysis results and the simulation results provided in Fig. 1, namely contrasting (a) vs. (c) and (b) vs. (d), it is seen that the two kinds of results exhibit similar trends for $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/(\mu\sqrt{n/m})$. The similarity between them validates Theorem 1, as well as the numerical analysis results P3 and P4. Moreover, it is noteworthy that what we estimate is an *expected* distance $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ (equivalently, $m\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$), rather than the actual distance $\|\mathbf{R}\mathbf{x}\|_1$ obtained with a given matrix sample. To approximate the expected distance, by Property 1 the actual matrices should have size $m \geq O(\sqrt{n})$.

Property 1. Let $\mathbf{r}_i$ be the $i$-th row of a $k$-sparse random matrix $\mathbf{R} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^{m \times n}$, and $\mathbf{x} \in \mathbb{R}^n$ with i.i.d. entries $x_i \sim \mathcal{T}(\mu, p, q)$. Suppose $z = \frac{1}{m}\|\mathbf{R}\mathbf{x}\|_1 = \frac{1}{m}\sum_{i=1}^{m} |\mathbf{r}_i^\top \mathbf{x}|$. For arbitrarily small $\varepsilon, \delta > 0$, we have the probability $\Pr\{|z - \mathrm{E}z| \leq \varepsilon\} \geq 1 - \delta$ if $\frac{m^2}{m+1} \geq \frac{q\mu^2 n}{\varepsilon^2 \delta}$; and the condition can be relaxed to $m^2 \geq \frac{2q\mu^2 n}{\varepsilon^2 \delta}$ for a given $\mathbf{x}$.

## 4 The $\ell_1$ Distance Estimation With Gaussian Mixture Data

In this section, we consider the case that the original data $\mathbf{h}, \mathbf{h}'$ are drawn from Gaussian mixture distributions, such that their difference $\mathbf{x} = \mathbf{h} - \mathbf{h}'$ has i.i.d. entries $x_i \sim \mathcal{M}(\mu, \sigma^2, p, q)$, as specified in (2). With such data, we analyze the change of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ against varying matrix sparsity $k$.

## 4.1 Theoretical Analysis

Theorem 2. Let $\mathbf{r}$ be a row vector of a $k$-sparse random matrix $\mathbf{R} \in \{0, \pm\sqrt{\frac{n}{mk}}\}^{m \times n}$, and $\mathbf{x} \in \mathbb{R}^n$ with i.i.d.
entries $x_i \sim \mathcal{M}(\mu, \sigma^2, p, q)$. It can be derived that

$$\mathrm{E}|\mathbf{r}^\top \mathbf{x}| = 2\mu\sqrt{\frac{n}{mk}}\, T_1 + \sigma\sqrt{\frac{2n}{\pi m}}\, T_2 - 2\mu\sqrt{\frac{n}{mk}}\, T_3 \tag{9}$$

with

$$T_1 = \sum_{i=0}^{k} C_k^i p^i q^{k-i} \left\lceil \frac{k-i}{2} \right\rceil C_{k-i}^{\lceil \frac{k-i}{2} \rceil}$$

$$T_2 = \sum_{i=0}^{k} C_k^i p^i q^{k-i} \sum_{j=0}^{k-i} C_{k-i}^j \, e^{-\frac{(k-i-2j)^2 \mu^2}{2k\sigma^2}}$$

$$T_3 = \sum_{i=0}^{k} C_k^i p^i q^{k-i} \sum_{j=0}^{k-i} C_{k-i}^j \, |k-i-2j| \, \Phi\!\left( -\frac{|k-i-2j|\mu}{\sqrt{k}\,\sigma} \right)$$

and

$$\mathrm{Var}(|\mathbf{r}^\top \mathbf{x}|) = \frac{n}{m}(\sigma^2 + 2q\mu^2) - \left( \mathrm{E}|\mathbf{r}^\top \mathbf{x}| \right)^2 \tag{10}$$

where $\Phi(\cdot)$ is the distribution function of $\mathcal{N}(0, 1)$. Further, we have

$$\mathrm{E}|\mathbf{r}^\top \mathbf{x}| \leq \mu\sqrt{\frac{n}{m}} + \sigma\sqrt{\frac{2n}{\pi m}} \tag{11}$$

and

$$\lim_{k \to \infty} \frac{\sqrt{m}}{\sqrt{n}} \mathrm{E}|\mathbf{r}^\top \mathbf{x}| = \sqrt{\frac{2}{\pi}(\sigma^2 + 2q\mu^2)} \tag{12}$$

with the convergence error for finite $k$ upper-bounded by

$$\left| \frac{\sqrt{m}}{\sqrt{n}} \mathrm{E}|\mathbf{r}^\top \mathbf{x}| - \sqrt{2(\sigma^2 + 2q\mu^2)/\pi} \right| \leq \frac{4\sigma^3 \left[ p + 2q(1 + \mu^2/\sigma^2)^{3/2} \right]}{(\sigma^2 + 2q\mu^2)\sqrt{\pi k}} + \frac{\sqrt{2}\left[ 3\sigma^4 + 2q(6\sigma^2\mu^2 + \mu^4) \right]}{\sqrt{(\sigma^2 + 2q\mu^2)\pi k}}. \tag{13}$$

Figure 3: The value of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ calculated by (9) with $p = 1/2$ (a) and $p = 2/3$ (b), and estimated by statistical simulation with $p = 1/2$ (c) and $p = 2/3$ (d), provided $x_i \sim \mathcal{M}(\mu, \sigma^2, p, q)$, $\mu = 1$ and $\sigma = 1/3$.

Remarks on Theorem 2: In Eqs. (12) and (13), we obtain two results analogous to P2 of Theorem 1. First, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ converges to a constant with speed $O(1/\sqrt{k})$.
Second, by (13), we can derive the lower bound on $k$ that ensures the convergence error ratio is upper-bounded by any given constant $\eta$:

$$\frac{\left| \frac{\sqrt{m}}{\sqrt{n}} \mathrm{E}|\mathbf{r}^\top \mathbf{x}| - \sqrt{2(\sigma^2 + 2q\mu^2)/\pi} \right|}{\sqrt{2(\sigma^2 + 2q\mu^2)/\pi}} \leq \eta, \quad \text{if } k \geq \left( \frac{4\sigma^3\left[ p + 2q(1 + \mu^2/\sigma^2)^{3/2} \right]}{(\sigma^2 + 2q\mu^2)^{3/2}\sqrt{2}\,\eta} + \frac{3\sigma^4 + 2q(6\sigma^2\mu^2 + \mu^4)}{(\sigma^2 + 2q\mu^2)\,\eta} \right)^2. \tag{14}$$

As discussed in the remarks on Theorem 1, the lower bound on $k$ derived for small $\eta$ indicates a matrix sparsity $k$ that provides classification performance comparable to larger values of $k$. Usually, as shown in the following numerical analysis, this lower bound is small, allowing us to obtain sparse matrices. Moreover, the numerical analysis demonstrates that, similarly to P1 of Theorem 1, $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ also attains its maximum at $k = 1$ when the data distribution parameter $p$ specified in (2) takes relatively small values.

## 4.2 Numerical Analysis

In this part, we directly compute the value of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ by (9). Note that $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ involves four parameters: $k$, $p$, $\mu$, and $\sigma$. In computing (9), we fix $\mu = 1$ and vary the other parameters in the ranges $\sigma/\mu \in (0, 1/3)$, $p \in (0, 1)$, and $k \in [1, 500]$. For ease of simulation, we upper-bound $\sigma/\mu$ by $1/3$, in view of the fact that $\sigma/\mu$ is usually not large for real data, while larger bounds empirically also work. Empirically, the trend of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ is not sensitive to $\sigma/\mu$, but is sensitive to $p$, namely the probability that an entry $x_i$ of the data difference $\mathbf{x}$ takes the value zero, as specified in (2). In Figs.
3(a) and (b), we provide two typical results for $p = 1/2$ and $2/3$, and observe two properties similar to P3 and P4:

(P5) When $p \leq 1/2$, such as the case of $p = 1/2$ and $\sigma/\mu = 1/3$ shown in Fig. 3(a), $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ tends to attain its maximum at $k = 1$, but at larger $k$ when $p > 1/2$, such as the case of $p = 2/3$ and $\sigma/\mu = 1/3$ shown in Fig. 3(b). Compared to the bound derived in P3 for two-point distributed data, the requirement on the probability of $x_i$ being nonzero is thus relaxed from $2/3$ to $1/2$ for Gaussian mixture data. This implies a wider range of data distributions for which the maximum $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ is obtained at $k = 1$.

(P6) With increasing $k$, as in the two results shown in Figs. 3(a) and (b), $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ converges to the limit value derived in (12). Similarly to the convergence discussed in P4 for two-point distributed data, the convergence error ratio defined in (14) approaches zero for small $k$, such as $k = 20$ shown in Fig. 2(b), especially when $p$ is relatively small, namely when the original data have relatively high discrimination.

For P5 and P6, their similarity to P3 and P4 is not surprising, since the two-point distribution $x_i \sim \mathcal{T}(\mu, p, q)$ can be viewed as an extreme case of the Gaussian mixture distribution $x_i \sim \mathcal{M}(\mu, \sigma^2, p, q)$ with $\sigma \to 0$. Thanks to the good generalizability of Gaussian mixture models, as will be seen in our experiments, the two properties analyzed above hold for a variety of real data. Again, note that the matrix row number should satisfy $m \geq O(\sqrt{n})$, such that the actual distance $\|\mathbf{R}\mathbf{x}\|_1$ computed with a single random matrix sample approximates the expected distance $\mathrm{E}\|\mathbf{R}\mathbf{x}\|_1$ (equivalently, $m\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$) derived with (9). The analysis is similar to Property 1 and thus omitted here.

## 4.3 Statistical Simulation

As in Section 3.3, we here verify the correctness of Theorem 2, including the expression (9) for $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|$ and its convergence (12), by performing statistical simulation on $\mathbf{x}$ and $\mathbf{r}$.
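A compact version of such a simulation (our own sketch, with far smaller $n$ and fewer trials than used in the paper) might look as follows; it checks the $k \to \infty$ limit (12) for $\mu = 1$, $\sigma = 1/3$, $p = 1/2$ and $m = 1$.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_normalized_l1(n, k, mu, sigma, p, trials=10000, rng=rng):
    """Monte Carlo estimate of sqrt(m/n) * E|r^T x| (with m = 1) for x having
    i.i.d. entries from the Gaussian mixture M(mu, sigma^2, p, q)."""
    q = (1 - p) / 2
    scale = np.sqrt(n / k)          # nonzero row entries are +-sqrt(n/(m k))
    total = 0.0
    for _ in range(trials):
        centers = rng.choice([0.0, mu, -mu], size=n, p=[p, q, q])
        x = centers + sigma * rng.standard_normal(n)
        idx = rng.choice(n, size=k, replace=False)   # nonzero positions of r
        signs = rng.choice([-1.0, 1.0], size=k)
        total += abs(scale * (signs @ x[idx]))
    return total / (trials * np.sqrt(n))

mu, sigma, p = 1.0, 1 / 3, 1 / 2
q = (1 - p) / 2
limit = np.sqrt(2 * (sigma**2 + 2 * q * mu**2) / np.pi)   # Eq. (12)
print(mc_normalized_l1(1000, 100, mu, sigma, p), limit)
```

For these parameters the limit is $\sqrt{2(1/9 + 1/2)/\pi} \approx 0.624$, and the estimate at $k = 100$ should already be close to it.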
The simulation results should agree with the numerical analysis results P5 and P6 if the theorem is correct. In the simulation, we estimate the value of $\mathrm{E}|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ by drawing $10^6$ pairs of $\mathbf{x}$ and $\mathbf{r}$ from their respective distributions and computing the average of $|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ as the estimate. The parameters of the distributions of $\mathbf{x}$ and $\mathbf{r}$ are set as follows: $m = 1$, $n = 10^4$, $\mu = 1$, $\sigma = 1/3$, and $p = 1/2$ or $2/3$. The data dimension $n = 10^4$ allows us to vary $k$ from $1$ to $10^4$. The average value of $|\mathbf{r}^\top \mathbf{x}|/\sqrt{n/m}$ at each $k$ is presented in Figs. 3(c) and (d), for $p = 1/2$ and $2/3$, respectively. Comparing the numerical analysis results and the simulation results shown in Fig. 3, namely contrasting (a) vs. (c) and (b) vs. (d), it can be seen that the two kinds of results are roughly consistent with each other. The consistency validates Theorem 2, as well as the numerical analysis results P5 and P6.

## 5 Experiments

In this section, we aim to verify that the impact of varying matrix sparsity $k$ on classification is consistent with its impact on the $\ell_1$ distance between projected data points, as analyzed in Theorems 1 and 2. More precisely, our goal is to demonstrate that sparse matrices with only one or at most dozens of nonzero entries per row provide comparable or even better classification performance than denser matrices, under the matrix size constraint $m \geq O(\sqrt{n})$.

## 5.1 Data

Without loss of generality, we evaluate four different types of data: the image dataset YaleB (Georghiades et al., 2001; Lee et al., 2005), the text dataset Newsgroups (Joachims, 1997), the gene dataset AMLALL (Golub et al., 1999), and the binary image dataset MNIST (Deng, 2012). The former three kinds of data can be modeled by Gaussian mixtures, while the last one follows the two-point distribution. The data settings are as follows. YaleB contains $40 \times 30$-sized face images of 38 persons, with about 64 faces per person.
Newsgroups consists of 20 categories of 3000-dimensional text data, with 500 samples per category. AMLALL contains 25 samples from patients suffering from acute myeloid leukemia (AML) and 47 samples from patients suffering from acute lymphoblastic leukemia (ALL), each sample expressed as a 7129-dimensional gene vector. MNIST involves 10 classes of $28 \times 28$-sized handwritten digit images, with 6000 samples per class and each image pixel binarized to 0 or 1. Note that we reduce the dimension of the data in YaleB and Newsgroups for ease of simulation; this does not influence our comparative study.

## 5.2 Implementation

The random projection based classification is implemented by first multiplying the original data with $k$-sparse random matrices and then classifying the resulting projections with a classifier. To faithfully reflect the impact of the varying data distance on classification, we adopt the simple nearest neighbor classifier (NNC) (Cover & Hart, 1967), whose performance depends entirely on the pairwise distances between data points, without extra operations to improve data discrimination. In fact, our analysis of the effect of matrix sparsity on classification can also be verified with more sophisticated classifiers, such as SVMs (Cortes & Vapnik, 1995); see Appendix B.

Figure 4: Classification accuracy of the sparse matrix based and Gaussian matrix based random projections for image data (YaleB, DCT features), with varying matrix sparsity $k \in [1, 30]$, three projection ratios $m/n = 1\%$, $10\%$, and $50\%$, and two distance metrics $\ell_1$ and $\ell_2$.

For each dataset, we enumerate all possible class pairs to perform binary classification. In each class, one half of the samples are randomly selected for training and the rest for testing.
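To make the pipeline concrete, the following sketch is our own minimal illustration on synthetic two-class Gaussian data (not on the paper's datasets, and without any repetition-and-vote stabilization): it runs the project-then-1-NN scheme with the $\ell_1$ metric, omitting the $\sqrt{n/(mk)}$ scaling as permitted in Section 2.1.

```python
import numpy as np

rng = np.random.default_rng(3)

def k_sparse_matrix(m, n, k, rng):
    # Rows with k random +-1 entries; the sqrt(n/(m k)) scaling is omitted,
    # since it does not change relative distances (Section 2.1).
    R = np.zeros((m, n))
    for i in range(m):
        idx = rng.choice(n, size=k, replace=False)
        R[i, idx] = rng.choice([-1.0, 1.0], size=k)
    return R

def nnc_accuracy(train, labels, test, test_labels, R):
    """1-nearest-neighbor classification in the projected space, l1 metric."""
    P_train, P_test = train @ R.T, test @ R.T
    dists = np.abs(P_test[:, None, :] - P_train[None, :, :]).sum(axis=2)
    pred = labels[np.argmin(dists, axis=1)]
    return float(np.mean(pred == test_labels))

# Synthetic two-class data (an assumption for illustration): class means +-0.5.
n, per_class = 500, 100
X = np.vstack([m_ + 0.8 * rng.standard_normal((per_class, n)) for m_ in (0.5, -0.5)])
y = np.repeat([0, 1], per_class)
idx = rng.permutation(2 * per_class)
tr, te = idx[:100], idx[100:]

for k in (1, 10, 100):
    R = k_sparse_matrix(m=50, n=n, k=k, rng=rng)
    print(k, nnc_accuracy(X[tr], y[tr], X[te], y[te], R))
```

On such well-separated data, sparsities from $k = 1$ up to $k = 100$ should all give similarly high accuracy, mirroring the trend discussed above.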
To suppress the instability of random matrices and obtain relatively stable classification performance, as in (Bingham & Mannila, 2001), we repeat the random projection based classification 5 times for each sample and make the final classification decision by vote. For comparison, the performance of Gaussian matrix based random projection is provided. Although the classification performance of sparse matrices is analyzed with the $\ell_1$ distance, we also test the performance with the popular $\ell_2$ distance.

## 5.3 Results

The classification results for the four kinds of data are provided in Figs. 4–7, respectively. For each kind of data, we evaluate the classification performance of sparse matrices with varying sparsity $k \in [1, 30]$, three projection ratios $m/n = 1\%$, $10\%$, and $50\%$, and two distance metrics $\ell_1$ and $\ell_2$. Note that the data dimensions $n$ tested here are on the order of thousands. At such a scale of $n$, the condition $m \geq O(\sqrt{n})$ is satisfied for $m/n = 10\%$ and $50\%$, but violated for $m/n = 1\%$.

Let us first examine the cases satisfying $m \geq O(\sqrt{n})$, namely $m/n = 10\%$ and $50\%$ as shown in Figs. 4–7(b)(c). The four kinds of data all achieve their best performance with relatively small matrix sparsity $k$ ($< 30$), such as $k = 1$ in Fig. 4(c) and $k = 15$ in Fig. 5(c). In the case of $m/n = 1\%$, which violates the condition $m \geq O(\sqrt{n})$, as shown in Figs. 4–7(a), all four kinds of data except AMLALL fail to reach their top performance within $k < 30$. For AMLALL with $m/n = 1\%$, as illustrated in Fig. 6(a), the desired decreasing performance trend fails to appear and the performance is poor at $k = 1$, in contrast to the cases of $m/n = 10\%$ and $m/n = 50\%$ shown in Figs. 6(b)(c).
Overall, the experimental results on the four kinds of data all agree with our theoretical analysis: sparse matrices with only one, or at most dozens of, nonzero entries per row achieve comparable or even better classification performance than denser matrices, under the size constraint $m \geq O(\sqrt{n})$. The trend of classification performance against varying matrix sparsity $k$ is also consistent with our theoretical analysis. More precisely, it can be seen from Figs. 4–7(b)(c) that the classification on the four datasets quickly converges to stable performance with increasing matrix sparsity $k$. The difference between them mainly lies in the initial stage of the convergence.

Figure 5: Classification accuracy of the sparse matrix based and Gaussian matrix based random projections for text data (Newsgroups), with varying matrix sparsity $k \in [1, 30]$, three projection ratios $m/n = 1\%$, $10\%$, and $50\%$, and two distance metrics $\ell_1$ and $\ell_2$.

Figure 6: Classification accuracy of the sparse matrix based and Gaussian matrix based random projections for gene data (AMLALL), with varying matrix sparsity $k \in [1, 30]$, three projection ratios $m/n = 1\%$, $10\%$, and $50\%$, and two distance metrics $\ell_1$ and $\ell_2$.

Specifically, as illustrated in Figs. 4(b)(c) and 6(b)(c), the convergence curves on the datasets YaleB and AMLALL both exhibit a declining trend in the initial region of increasing $k$, consistent with the numerical analysis result depicted in Fig. 3(a) (discussed in P5 and P6). The curves on the other two datasets, Newsgroups and MNIST, as shown in Figs. 5(b)(c) and 7(b)(c), both exhibit an initially increasing trend with $k$, aligning with the numerical analysis results illustrated in Fig. 3(b) (P5 and P6) and Fig. 1(b) (P3 and P4).
Figure 7: Classification accuracy of the sparse matrix based and Gaussian matrix based random projections for binary image data (MNIST, binarized pixels), with varying matrix sparsity $k \in [1, 30]$, three projection ratios $m/n = 1\%$, $10\%$, and $50\%$, and two distance metrics $\ell_1$ and $\ell_2$.

Although the classification performance of sparse matrices is analyzed with the $\ell_1$ distance, the results also hold for the $\ell_2$ distance, as seen by comparing the upper-row and bottom-row results in Figs. 4–7. This generalization can be attributed to the closeness of the two metrics (Gionis et al., 1999; Figiel et al., 1977). Moreover, the experiments show that sparse matrices perform comparably to or even better than the popularly used Gaussian matrices. This allows us to replace Gaussian matrices with sparse matrices for much lower complexity.

## 6 Conclusion

For random projection based on sparse $\{0, \pm 1\}$-matrices, we have analyzed the impact of matrix sparsity on classification. We find that sparse matrices with only one, or at most dozens of, nonzero entries per row can provide comparable or even better classification performance than denser matrices, when the matrices have size $m \geq O(\sqrt{n})$ and the original data are sufficiently discriminative. Moreover, it is empirically observed that sparse matrices also compare favorably with the popularly used Gaussian matrices; furthermore, the performance advantage we establish with the $\ell_1$ distance also holds with the $\ell_2$ distance. These results imply that such sparse matrices have wide applications. Finally, it is noteworthy that our theoretical analysis exhibits high consistency with the experiments on real data of different types, owing to the good generalizability of the typical data distributions adopted in our statistical analysis.
Besides the contribution to random projection, our analysis of the classification performance of sparse matrices helps to understand the competitive performance of deep ternary networks, which are generated by ternarizing the parameters and/or activations of full-precision networks and enjoy very sparse structures (Li et al., 2016; Zhu et al., 2017; Wan et al., 2018; Marban et al., 2020; Rokh et al., 2023). Despite suffering from significant quantization errors, deep ternary networks interestingly exhibit acceptable performance loss and can sometimes even provide performance gains. The reason for this intriguing phenomenon remains unclear. Considering that deep networks can be modeled as a cascade of random projections (Giryes et al., 2016), our analysis of sparse matrix based random projection can be viewed as a layerwise analysis of deep ternary networks. The sparse ternary matrices shown here to have good classification performance partly explain the good performance of sparse ternary networks.

## References

D. Achlioptas. Database-friendly random projections: Johnson–Lindenstrauss with binary coins. *J. Comput. Syst. Sci.*, 66(4):671–687, 2003.

David F. Andrews and Colin L. Mallows. Scale mixtures of normal distributions. *Journal of the Royal Statistical Society: Series B (Methodological)*, 36(1):99–102, 1974.

Ella Bingham and Heikki Mannila. Random projection in dimensionality reduction: applications to image and text data. In *Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 245–250, 2001.

Bo Brinkman and Moses Charikar. On the impossibility of dimension reduction in $\ell_1$. *Journal of the ACM*, pp. 766–788, 2003.

Corinna Cortes and Vladimir Vapnik. Support-vector networks. *Machine Learning*, 20(3):273–297, 1995.

T. Cover and P. Hart. Nearest neighbor pattern classification. *IEEE Transactions on Information Theory*, 13(1):21–27, 1967.

S. Dasgupta and A. Gupta.
An elementary proof of the Johnson–Lindenstrauss lemma. Technical Report 99-006, UC Berkeley, 1999.

Li Deng. The MNIST database of handwritten digit images for machine learning research. *IEEE Signal Processing Magazine*, 29(6):141–142, 2012.

T. Figiel, J. Lindenstrauss, and V. D. Milman. The dimension of almost spherical sections of convex bodies. *Acta Mathematica*, 139:53–94, 1977.

Dmitriy Fradkin and David Madigan. Experiments with random projections for machine learning. In *Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 517–522, 2003.

Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 23(6):643–660, 2001.

Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Similarity search in high dimensions via hashing. In *Proceedings of the 25th International Conference on Very Large Data Bases*, 1999.

Raja Giryes, Guillermo Sapiro, and Alex M. Bronstein. Deep neural networks with random Gaussian weights: A universal classification strategy? *IEEE Transactions on Signal Processing*, 64(13):3444–3457, 2016.

T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. *Science*, 286(5439):531–537, 1999.

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In *Proceedings of the 30th International Conference on Neural Information Processing Systems*, 2016.

Thorsten Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In *Proceedings of the Fourteenth International Conference on Machine Learning*, pp. 143–151, 1997.

W. B. Johnson and J.
Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. *Contemp. Math.*, 26:189–206, 1984.

Ian T. Jolliffe. *Principal Component Analysis*. Springer, 2002.

Ian T. Jolliffe and Jorge Cadima. Principal component analysis: a review and recent developments. *Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences*, 374(2065):20150202, 2016.

Edmund Y. Lam and Joseph W. Goodman. A mathematical analysis of the DCT coefficient distributions for images. *IEEE Transactions on Image Processing*, 9(10):1661–1666, 2000.

Kuang-Chih Lee, Jeffrey Ho, and David J. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 27(5):684–698, 2005.

Fengfu Li, Bin Liu, Xiaoxing Wang, Bo Zhang, and Junchi Yan. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016.

P. Li, T. J. Hastie, and K. W. Church. Very sparse random projections. In *Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, 2006.

Ping Li. Very sparse stable random projections for dimension reduction in $\ell_\alpha$ ($0 < \alpha \leq 2$) norm. In *Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, 2007.

Arturo Marban, Daniel Becking, Simon Wiedemann, and Wojciech Samek. Learning sparse & ternary neural networks with entropy-constrained trained ternarization (EC2T). In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, pp. 722–723, 2020.

Babak Rokh, Ali Azarpeyvand, and Alireza Khanteymoori. A comprehensive survey on model quantization for deep neural networks in image classification. *ACM Transactions on Intelligent Systems and Technology*, 14(6):1–50, 2023.

Nathan Ross. Fundamentals of Stein's method. *Probability Surveys*, 8:210–293, 2011.

Pante Stănică. Good lower and upper bounds on binomial coefficients. *JIPAM. 
Journal of Inequalities in* Pure & Applied Mathematics [electronic only], 2, 2001.

Antonio Torralba and Aude Oliva. Statistics of natural image categories. *Network: Computation in Neural Systems*, 14(3):391–412, 2003.

M. A. Turk and A. P. Pentland. Face recognition using eigenfaces. In *Proceedings, 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*, pp. 586–591, 1991.

Aad W. Van der Vaart. *Asymptotic Statistics*. Cambridge University Press, 2000.

Martin J. Wainwright and Eero P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In *Proceedings of the 12th International Conference on Neural Information Processing Systems*, 1999.

Diwen Wan, Fumin Shen, Li Liu, Fan Zhu, Jie Qin, Ling Shao, and Heng Tao Shen. TBN: Convolutional neural network with ternary inputs and binary weights. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 315–332, 2018.

Yair Weiss and William T. Freeman. What makes a good model of natural images? In *2007 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1–8. IEEE, 2007.

J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 31:210–227, 2009.

J. Yang, X. Shen, J. Xing, X. Tian, H. Li, B. Deng, J. Huang, and X. Hua. Quantization networks. In *IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2019.

Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained ternary quantization. In *International Conference on Learning Representations*, 2017.

## A Appendix

## A.1 Proof Of Theorem 1

Proof. In the following, we sequentially prove (4), (5), P1 and P2.

Proofs of (4) and (5): With the distributions of $\boldsymbol{r}$ and $\boldsymbol{x}$, we can write $|\boldsymbol{r}^{\top}\boldsymbol{x}| = \mu\sqrt{\frac{n}{mk}}\left|\sum_{i=1}^{k} z_i\right|$, where $z_i \in \{-1, 0, 1\}$ with probabilities $\{q, p, q\}$.
Then, it can be derived that

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|=\mu\sqrt{\frac{n}{mk}}\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\sum_{j=0}^{k-i}C_{k-i}^{j}|k-i-2j|,\tag{15}$$

in which $\sum_{j=0}^{k-i}C_{k-i}^{j}|k-i-2j|$ can be expressed as

$$\sum_{j=0}^{k-i}\left(C_{k-i}^{j}|k-i-2j|\right)=2\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil},\tag{16}$$

where $\lceil\alpha\rceil=\min\{\beta:\beta\geq\alpha,\beta\in\mathbb{Z}\}$. Combining (15) and (16), we obtain (4). Next, we can derive the variance of $|\boldsymbol{r}^{\top}\boldsymbol{x}|$:

$$\mathrm{Var}(|\boldsymbol{r}^{\top}\boldsymbol{x}|)=\mathrm{Var}(\boldsymbol{r}^{\top}\boldsymbol{x})-\left(\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\right)^{2}=\frac{2q\mu^{2}n}{m}-\frac{4\mu^{2}n}{mk}\left(\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}\right)^{2}.\tag{17}$$

Proof of P1: This part aims to prove $\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k=1}>\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k>1}$, where the subscript $k=1$ denotes the case of $\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|$ with $k=1$, and the subscript $k>1$ means the case of $k$ taking any integer value greater than 1. In the following, we calculate and compare $\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|$ in the two cases. For the case of $k=1$, by (4), it is easy to derive that

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k=1}=2q\mu\sqrt{\frac{n}{m}}.\tag{18}$$

Next, consider the case of $k>1$. By (4), $\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k>1}$ is $\mu\sqrt{\frac{n}{m}}$ times the sum over $i$ of the terms $\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}$. To bound these terms, we consider separately two cases: $k-i$ even or odd, as detailed below.

Case 1: Suppose $k-i$ is even. We have

$$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}\leq\frac{1}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}(k-i)2^{k-i}\sqrt{\frac{2}{(k-i)\pi}}\leq\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i},\tag{19}$$

since $C_{2\gamma}^{\gamma}\leq\frac{2^{2\gamma}}{\sqrt{\gamma\pi}}$ for any positive integer $\gamma$ (Stănică, 2001).

Case 2: Suppose $k-i$ is odd.
We have

$$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}\leq\frac{1}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}(k-i)2^{k-i}\sqrt{\frac{2}{(k-i-1)\pi}}=\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i}\frac{k-i}{\sqrt{k(k-i-1)}}.\tag{20}$$

Given $k\geq5$, we further have

$$\frac{k-i}{\sqrt{k(k-i-1)}}<1\ \ \text{for}\ \ 2\leq i\leq k-2,$$

and for $i=k-1$ or $k$,

$$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}<\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i}.$$

To sum up, when $k-i$ is odd,

$$\frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}\leq\begin{cases}\sqrt{\frac{2}{\pi}}C_{k}^{i}p^{i}(2q)^{k-i},&k\geq5,\ i\geq2,\\ \frac{2}{\sqrt{k}}C_{k}^{i}p^{i}q^{k-i}(k-i)C_{k-i-1}^{\frac{k-i-1}{2}},&\text{otherwise}.\end{cases}\tag{21}$$

According to the results (19) and (21) derived in the above two cases, $\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k>1}$ can be computed in terms of two cases, $2\leq k\leq4$ and $k\geq5$. For the case of $2\leq k\leq4$, by (4), we have

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|=\begin{cases}\dfrac{\mu\sqrt{n}}{\sqrt{2m}}(4q^{2}+4pq),&k=2,\\ \dfrac{\mu\sqrt{n}}{\sqrt{3m}}(12q^{3}+12pq^{2}+6p^{2}q),&k=3,\\ \dfrac{\mu\sqrt{n}}{\sqrt{m}}(12q^{4}+24pq^{3}+12p^{2}q^{2}+4p^{3}q),&k=4,\end{cases}\tag{22}$$

and for the case of $k\geq5$, with (19) and (21), we have

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\leq\mu\sqrt{\frac{2n}{\pi m}}+\mu\sqrt{\frac{n}{m}}(2q)^{5}\left(\frac{3\sqrt{5}}{8}-\sqrt{\frac{2}{\pi}}\right).\tag{23}$$

By (18), (22) and (23), we can derive that

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k=1}>\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k>1}$$

holds under the condition $p\leq0.188$. This proves P1. In what follows, we elaborate the proof of (23) by considering the two cases of $k$ being even or odd.

Case 1: Suppose $k\geq5$ and $k$ is even.
Combining (19) and (21), we have

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\leq\mu\sqrt{\frac{n}{m}}C_{k}^{1}p(2q)^{k-1}\left(\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k}{2}-1}-\sqrt{\frac{2}{\pi}}\right)+\mu\sqrt{\frac{2n}{\pi m}}\sum_{i=0}^{k}C_{k}^{i}p^{i}(2q)^{k-i}.\tag{24}$$

Denote $h_{1}(k)=\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k}{2}-1}$. Since

$$\frac{h_{1}(k+2)}{h_{1}(k)}=\frac{k+1}{\sqrt{k(k+2)}}>1,$$

we have

$$h_{1}(k)=\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k}{2}-1}\leq\lim_{k\to\infty}h_{1}(k)=\sqrt{\frac{2}{\pi}}.\tag{25}$$

Then, it follows from (24) and (25) that

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\leq\mu\sqrt{\frac{2n}{\pi m}}.\tag{26}$$

Case 2: Suppose $k\geq5$ and $k$ is odd. Combining (19) and (21), we have

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\leq\mu\sqrt{\frac{n}{m}}C_{k}^{0}(2q)^{k}\left(\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k-1}{2}}-\sqrt{\frac{2}{\pi}}\right)+\mu\sqrt{\frac{2n}{\pi m}}\sum_{i=0}^{k}C_{k}^{i}p^{i}(2q)^{k-i}.\tag{27}$$

Denote $h_{2}(k)=\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k-1}{2}}$. Since

$$\frac{h_{2}(k+2)}{h_{2}(k)}=\frac{\sqrt{k(k+2)}}{k+1}<1,$$

we have

$$h_{2}(k)=\frac{\sqrt{k}}{2^{k-1}}C_{k-1}^{\frac{k-1}{2}}\leq h_{2}(5)=\frac{\sqrt{5}}{2^{4}}C_{4}^{2}.\tag{28}$$

Then, it follows from (27) and (28) that

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\leq\mu\sqrt{\frac{2n}{\pi m}}+\mu\sqrt{\frac{n}{m}}(2q)^{5}\left(\frac{3\sqrt{5}}{8}-\sqrt{\frac{2}{\pi}}\right).$$

Proof of P2: For ease of analysis, we first define the function

$$g(\boldsymbol{r}^{\top}\boldsymbol{x};k,p)=\frac{\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|_{k}}{\mu\sqrt{n/m}}=\mathbb{E}\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|,\tag{29}$$

where the $z_{i}$ are independently and identically distributed with

$$z_{i}\in\{-1,0,1\}\ \ \text{with probabilities}\ \ \{q,p,q\}.\tag{30}$$
By the Lindeberg–Lévy central limit theorem, we have

$$\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\rightsquigarrow Z,$$

where $Z\sim\mathcal{N}(0,2q)$. Then, based on (23), we have for $k\geq5$,

$$\mathbb{E}\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|\leq\sqrt{\frac{2}{\pi}}+(2q)^{5}\left(\frac{3\sqrt{5}}{8}-\sqrt{\frac{2}{\pi}}\right).$$

It means that

$$\lim_{M\to+\infty}\lim_{k\to+\infty}\mathbb{E}\left[\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|\mathbf{1}\left\{\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|>M\right\}\right]=0.$$

Hence, $\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}$ is an asymptotically uniformly integrable sequence. According to Theorem 2.20 in (Van der Vaart, 2000), we obtain

$$\lim_{k\to+\infty}\frac{\sqrt{m}}{\mu\sqrt{n}}\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|=\lim_{k\to+\infty}\mathbb{E}\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|=\mathbb{E}|Z|=2\sqrt{\frac{q}{\pi}}.$$

Next, let us investigate the error of the above convergence with respect to $k$. Following the definitions and properties described in Eqs. (29) and (30), we further suppose $t_{i}=\frac{1}{\sqrt{2q}}z_{i}$ and $Q\sim\mathcal{N}(0,1)$, and get

$$\left|\frac{\sqrt{m}}{\mu\sqrt{n}}\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|-2\sqrt{q/\pi}\right|=\left|\mathbb{E}\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}z_{i}\right|-\mathbb{E}|Z|\right|=\sqrt{2q}\left|\mathbb{E}\left|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}t_{i}\right|-\mathbb{E}|Q|\right|\leq\sqrt{2q}\,d_{w}\!\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}t_{i},\,Q\right),$$
where $d_{w}(\nu,\upsilon)$ denotes the Wasserstein metric, with the form

$$d_{w}(\nu,\upsilon)=\sup_{h\in\mathcal{H}}\left|\int h(x)\,d\nu(x)-\int h(x)\,d\upsilon(x)\right|,\qquad\mathcal{H}=\left\{h:\mathbb{R}\to\mathbb{R}:|h(x)-h(y)|\leq|x-y|\right\}.$$

By Theorem 3.2 in Ross (2011), since the $t_{i}$ are i.i.d. with $\mathbb{E}t_{i}=0$, $\mathbb{E}t_{i}^{2}=1$ and $\mathbb{E}|t_{i}|^{4}<\infty$, we have

$$d_{w}\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}t_{i},\,Q\right)\leq\frac{1}{k^{3/2}}\sum_{i=1}^{k}\mathbb{E}|t_{i}|^{3}+\frac{\sqrt{2}}{\sqrt{\pi}k}\sqrt{\sum_{i=1}^{k}\mathbb{E}t_{i}^{4}}=\frac{1}{\sqrt{2qk}}+\frac{\sqrt{2}}{\sqrt{2q\pi k}},$$

and then

$$\left|\frac{\sqrt{m}}{\mu\sqrt{n}}\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|-2\sqrt{q/\pi}\right|\leq\frac{\sqrt{\pi}+\sqrt{2}}{\sqrt{\pi k}}.$$

## A.2 Proof Of Property 1

Proof. This problem can be addressed using Chebyshev's inequality, which requires us to first derive $\mathbb{E}z$ and $\mathrm{Var}(z)$. Note that $\mathbb{E}z=\mathbb{E}\big(\frac{1}{m}\sum_{i=1}^{m}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\big)=\mathbb{E}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|$ has been derived in (4). In the sequel, we need to first solve $\mathrm{Var}(z)=\mathbb{E}z^{2}-(\mathbb{E}z)^{2}$, where

$$\mathbb{E}z^{2}=\mathbb{E}\Big(\frac{1}{m}\sum_{i=1}^{m}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\Big)^{2}=\frac{1}{m^{2}}\mathbb{E}\Big(\sum_{i=1}^{m}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|^{2}\Big)+\frac{1}{m^{2}}\mathbb{E}\Big(\sum_{i\neq j}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|\Big)=\frac{2q\mu^{2}n}{m^{2}}+\frac{m-1}{2m}\mathbb{E}\big(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|\big).\tag{31}$$

For the second term in the above result, it holds that

$$\mathbb{E}\big(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|\big)\leq\mathrm{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|)+\big(\mathbb{E}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\big)^{2}=\mathrm{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|)+(\mathbb{E}z)^{2},\tag{32}$$

by the covariance property
$$\mathrm{Cov}\big(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|,|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|\big)=\mathbb{E}\big(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|\big)-\mathbb{E}|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|\cdot\mathbb{E}|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|=\rho\sqrt{\mathrm{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|)}\cdot\sqrt{\mathrm{Var}(|\boldsymbol{r}_{j}^{\top}\boldsymbol{x}|)}=\rho\,\mathrm{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|),\tag{33}$$

where $\rho\in(-1,1)$ is the correlation coefficient. Substituting (31) into $\mathrm{Var}(z)=\mathbb{E}z^{2}-(\mathbb{E}z)^{2}$, by the inequality (32) and (17), we can derive

$$\mathrm{Var}(z)\leq\frac{2q\mu^{2}n}{m^{2}}+\frac{m-1}{2m}\big[\mathrm{Var}(|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|)+(\mathbb{E}z)^{2}\big]-(\mathbb{E}z)^{2}=\frac{2q\mu^{2}n}{m^{2}}+\frac{m-1}{2m}\cdot\frac{2q\mu^{2}n}{m^{2}}-(\mathbb{E}z)^{2}=\frac{(m+1)q\mu^{2}n}{m^{2}}-(\mathbb{E}z)^{2}.\tag{34}$$

With the above inequality for $\mathrm{Var}(z)$, we can further explore the condition under which the desired probability

$$\Pr\{|z-\mathbb{E}z|\leq\varepsilon\}\geq1-\delta\tag{35}$$

holds. By Chebyshev's inequality, (35) is achieved if $\mathrm{Var}(z)/\varepsilon^{2}\leq\delta$; according to (34), this condition is satisfied when $\frac{m^{2}}{m+1}\geq\frac{q\mu^{2}n}{\varepsilon^{2}\delta}$. In the above analysis, we consider a random $\boldsymbol{x}$. For a given $\boldsymbol{x}$, the condition for (35) can be further relaxed to $m^{2}\geq\frac{2q\mu^{2}n}{\varepsilon^{2}\delta}$, since in this case $|\boldsymbol{r}_{i}^{\top}\boldsymbol{x}|$ is independent across $i\in[m]$, such that $\mathrm{Var}(z)$ becomes (17) divided by $m$.

## A.3 Proof Of Theorem 2

Proof. First, we derive the absolute moment of $z\sim\mathcal{N}(\mu,\sigma^{2})$ as

$$\mathbb{E}|z|=\sqrt{\frac{2}{\pi}}\sigma e^{-\frac{\mu^{2}}{2\sigma^{2}}}+\mu\left(1-2\Phi\left(-\frac{\mu}{\sigma}\right)\right),\tag{36}$$

which will be used in the sequel. With the distributions of $\boldsymbol{r}$ and $\boldsymbol{x}$, we have $|\boldsymbol{r}^{\top}\boldsymbol{x}|=\sqrt{\frac{n}{mk}}\left|\sum_{i=1}^{k}x_{i}\right|$.
For easier expression, let $y=\sum_{i=1}^{k}x_{i}$; then the distribution of $y$ can be expressed as

$$f(y)=\sum_{i=0}^{k}\sum_{j=0}^{k-i}C_{k}^{i}C_{k-i}^{j}p^{i}q^{k-i}\frac{1}{\sqrt{2\pi k}\sigma}e^{-\frac{(y-(k-i-2j)\mu)^{2}}{2k\sigma^{2}}}.$$

Then, by (36) we can derive that

$$\begin{aligned}\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|&=\sqrt{\frac{n}{mk}}\sum_{i=0}^{k}\sum_{j=0}^{k-i}C_{k}^{i}C_{k-i}^{j}p^{i}q^{k-i}\int_{-\infty}^{+\infty}\frac{|y|}{\sqrt{2\pi k}\sigma}e^{-\frac{(y-(k-i-2j)\mu)^{2}}{2k\sigma^{2}}}\,dy\\&=2\mu\sqrt{\frac{n}{mk}}\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\left\lceil\frac{k-i}{2}\right\rceil C_{k-i}^{\lceil\frac{k-i}{2}\rceil}-2\mu\sqrt{\frac{n}{mk}}\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\sum_{j=0}^{k-i}C_{k-i}^{j}\Phi\left(-\frac{|k-i-2j|\mu}{\sqrt{k}\sigma}\right)\\&\quad+\sigma\sqrt{\frac{2n}{\pi m}}\sum_{i=0}^{k}C_{k}^{i}p^{i}q^{k-i}\sum_{j=0}^{k-i}C_{k-i}^{j}e^{-\frac{(k-i-2j)^{2}\mu^{2}}{2k\sigma^{2}}},\end{aligned}$$

where $\Phi(\cdot)$ is the distribution function of $\mathcal{N}(0,1)$. The above equation and (18), (22), (23) together lead to

$$\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\leq\mu\sqrt{\frac{n}{m}}+\sigma\sqrt{\frac{2n}{\pi m}}.$$

Next, we can derive the variance of $|\boldsymbol{r}^{\top}\boldsymbol{x}|$ as

$$\mathrm{Var}(|\boldsymbol{r}^{\top}\boldsymbol{x}|)=\mathrm{Var}(\boldsymbol{r}^{\top}\boldsymbol{x})-\left(\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\right)^{2}=\frac{n}{m}(\sigma^{2}+2q\mu^{2})-\left(\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|\right)^{2}.$$

Finally, the convergence of $\frac{\sqrt{m}}{\mu\sqrt{n}}\mathbb{E}|\boldsymbol{r}^{\top}\boldsymbol{x}|$ shown in (12) and (13) can be derived in a similar way to the proof of P2 in Theorem 1.

## B Appendix

In Figs. 8–11, we test the SVM (with linear kernel) classification accuracy of the sparse ternary matrix with varying matrix sparsity $k$ (and compression ratio $m/n$) on four different types of data. The performance trends of the SVM with respect to the matrix sparsity $k$ are similar to those of the KNN classifier illustrated in the body of the paper, and thus consistent with our theoretical analysis.
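The shape of this experiment can be reproduced on synthetic data with a short, dependency-free sketch (our own construction: two Gaussian classes stand in for the real datasets, and a 1-nearest-neighbour classifier under the $\ell_1$ distance replaces the linear SVM so that only numpy is needed):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 512, 64, 4

# Two synthetic classes separated by a mean shift (stand-in for real data)
X = np.vstack([rng.standard_normal((100, n)),
               rng.standard_normal((100, n)) + 1.0])
y = np.repeat([0, 1], 100)

# Sparse ternary projection: k nonzero +-1 entries per row
R = np.zeros((m, n))
for i in range(m):
    R[i, rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
R *= np.sqrt(n / (m * k))

def nn1_accuracy(Xtr, ytr, Xte, yte):
    # 1-nearest-neighbour under the l1 distance
    d = np.abs(Xte[:, None, :] - Xtr[None, :, :]).sum(-1)
    return float(np.mean(ytr[d.argmin(1)] == yte))

idx = rng.permutation(len(y))
tr, te = idx[:150], idx[150:]
acc_full = nn1_accuracy(X[tr], y[tr], X[te], y[te])
acc_proj = nn1_accuracy(X[tr] @ R.T, y[tr], X[te] @ R.T, y[te])
```

As in Figs. 8–11, when the classes are sufficiently discriminative, accuracy in the compressed space stays close to that on the uncompressed data.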
![20_image_0.png](20_image_0.png) Figure 8: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for image data (YaleB, DCT features), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%. ![20_image_1.png](20_image_1.png) Figure 9: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for text data (Newsgroups), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%. ![20_image_2.png](20_image_2.png) Figure 10: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for gene data (AMLALL), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%. ![20_image_3.png](20_image_3.png) Figure 11: Classification accuracy of the sparse matrix-based and Gaussian matrix-based random projections for binary image data (MNIST, binarized pixels), with varying matrix sparsity k ∈ [1, 30], three different projection ratios m/n = 1%, 10% and 50%.
# Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs

Anonymous authors

Paper under double-blind review

## Abstract

Memory complexity and data scarcity have so far prohibited learning solution operators of partial differential equations (PDEs) at high resolutions. We address these limitations by introducing a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization, called the multi-grid tensorized Fourier neural operator (*MG-TFNO*). *MG-TFNO* scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena, through a decomposition of both the input domain and the operator's parameter space. Our contributions are threefold: i) we enable parallelization over input samples with a novel multi-grid-based domain decomposition, ii) we represent the parameters of the model in a high-order latent subspace of the Fourier domain, through a global tensor factorization, resulting in an extreme reduction in the number of parameters and improved generalization, and iii) we propose architectural improvements to the backbone FNO. Our approach can be used in any operator learning setting. We demonstrate superior performance on the turbulent Navier-Stokes equations, where we achieve less than half the error with over 150× compression. The tensorization combined with the domain decomposition yields over 150× reduction in the number of parameters and 7× reduction in the domain size, without loss of accuracy, while additionally enabling parallelism.

## 1 Introduction

Real-world scientific computing problems often require repeatedly solving large-scale and high-resolution partial differential equations (PDEs). For instance, in weather forecasting, large systems of differential equations are solved to forecast the future state of the weather.
Due to inherent and aleatoric uncertainties, multiple repeated runs are carried out by meteorologists every day to quantify prediction uncertainties. Conventional PDE solvers constitute the mainstream approach used to tackle such computational problems. However, these methods are known to be slow and memory-intensive. They require an immense amount of computing power, are unable to learn and adapt based on observed data, and oftentimes require sophisticated tuning (Slingo & Palmer, 2011; Leutbecher & Palmer, 2008; Blanusa et al., 2022). Neural operators are a new class of models that aim at tackling these challenging problems (Li et al., 2020a). They are maps between function spaces whose trained models emulate the solution operators of PDEs (Kovachki et al., 2021b). In the context of PDEs, these deep learning models are orders of magnitude faster than conventional solvers, can easily learn from data, can incorporate physically relevant information, and have recently enabled solving problems deemed unsolvable with the current state of available PDE methodologies (Liu et al., 2022; Li et al., 2021c). Among neural operator models, Fourier neural operators (FNOs) in particular have seen successful application in scientific computing for the task of learning the solution operators of PDEs, as well as in computer vision for classification, in-painting, and segmentation (Li et al., 2021b; Kovachki et al., 2021a; Guibas et al., 2021). By leveraging spectral theory, FNOs have successfully advanced frontiers in weather forecasts, carbon storage, and seismology (Pathak et al., 2022; Wen et al., 2022; Yang et al., 2021).

![1_image_0.png](1_image_0.png) Figure 1: **Comparison of the performance on the relative $L^2$ and $H^1$ test errors (lower is better)**, on a log-scale, of our approach compared with both our improved backbone (FNO) and the original FNO, on Navier-Stokes.
Our approach enables large compression of both inputs and parameters, while outperforming regular FNO. While FNOs have shown tremendous speed-up over classical numerical methods, their efficacy can be limited due to the rapid growth in memory needed to represent complex operators. In the worst case, large memory complexity is required and is, in fact, unavoidable due to the need for resolving fine-scale features globally. However, many real-world problems possess a local structure not currently exploited by neural operator methods. For instance, consider a weather forecast where predictions for the next hour depend heavily on the weather conditions in local regions and only minimally on global weather conditions. Incorporating and learning this local structure of the underlying PDEs is the key to overcoming the curse of memory complexity. In this work, we propose a new, scalable neural operator that addresses these issues by leveraging the structure in both the domain space and the parameter space, Figure 2. Specifically, we introduce the multi-grid tensorized Fourier neural operator (*MG-TFNO*), a model that exploits locality in physical space through a novel multi-grid domain decomposition approach to compress the input domain size by up to 7×, while leveraging the global interactions of the model parameters to compress them by over 100× without any loss of accuracy. In the input space, to predict the solution in any region of the domain, *MG-TFNO* decomposes the input domain into small local regions to which hierarchical levels of global information are added in a multi-grid fashion. Since a local prediction depends most strongly on its immediate spatial surroundings, farther-field information is progressively downsampled to lower resolutions based on its distance from the region of interest. Thus, *MG-TFNO* allows parallelization over the input domain, as it relies on high-resolution data only locally and on coarse-resolution data globally.
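The decomposition just described can be sketched in a few lines of numpy (an illustrative construction under our own assumptions: a scalar 2D field, periodic padding to match the torus domain used later, and average pooling for the coarsening). Each local patch is paired with progressively larger, progressively coarser views of its surroundings:

```python
import numpy as np

def multigrid_patches(field, top, left, s, levels):
    """Stack of `levels` s-by-s views around a local patch: level 0 is
    the full-resolution crop; level l covers a 2**l-times larger region,
    centered on the patch, downsampled back to s-by-s (periodic padding)."""
    out = []
    for l in range(levels):
        size = s * 2**l
        # shift so the enlarged window is centered on the level-0 patch
        dy = top - (size - s) // 2
        dx = left - (size - s) // 2
        rolled = np.roll(np.roll(field, -dy, axis=0), -dx, axis=1)
        win = rolled[:size, :size]
        # average-pool by a factor of 2**l back to s x s
        out.append(win.reshape(s, 2**l, s, 2**l).mean(axis=(1, 3)))
    return np.stack(out)  # shape (levels, s, s)

field = np.random.default_rng(0).standard_normal((64, 64))
patch = multigrid_patches(field, top=16, left=16, s=16, levels=3)
```

Each patch then carries `levels` channels of context and can be processed independently, which is what enables the parallelization over the input domain.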
Due to its state-of-the-art performance on PDE problems and its efficient FFT-based implementation, we use the FNO as the backbone architecture for our method. It is worth noting that the multi-grid approach is readily amenable to neural network settings and, moreover, any other neural operator architecture can be used in place of the FNO as a backbone. In the parameter space, we exploit the spatiotemporal structure of the underlying PDE solution operator by parameterizing the convolutional weights within the Fourier domain with a low-rank tensor factorization.

![2_image_0.png](2_image_0.png) Figure 2: **Overview of our approach**. First (left), a multi-grid approach is used to create coarse-to-fine inputs that capture high-resolution details in a local region while still encoding global context. The resulting regions are fed to a tensorized Fourier operator (middle), the parameters of which are jointly represented in a single latent space via a low-rank tensor factorization (here, a Tucker form). Here, $\mathcal{F}$ denotes the Fourier transform. Finally, the outputs (right) are stitched back together to form the full result. Smoothness in the output is ensured via the choice of the loss function.

Specifically, we impose a coupling between all the weights in Fourier space by jointly parameterizing them with a single tensor, learned in a factorized form such as Tucker or Canonical-Polyadic (Kolda & Bader, 2009). This coupling allows us to limit the number of parameters in the model without limiting its expressivity. On the contrary, this low-rank regularization of the model mitigates over-fitting and improves generalization. Intuitively, our method can be thought of as a fully-learned implicit scheme capable of converging in a small, fixed number of iterations. Due to the global nature of the integral kernel transform, the FNO avoids the Courant–Friedrichs–Lewy (CFL) condition plaguing explicit schemes, allowing convergence in only a few steps (Courant et al., 1928).
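To make the joint low-rank parameterization concrete, here is a plain-numpy sketch of a Tucker-form weight tensor (the shapes and ranks are illustrative choices of ours, not the configuration used in the experiments); in practice the factorized form is used directly, and the full tensor need never be materialized:

```python
import numpy as np

def tucker_reconstruct(core, factors):
    """Full tensor from a Tucker core and per-mode factor matrices."""
    W = core
    for mode, U in enumerate(factors):
        # mode-`mode` product: contract U (s x r) with that mode of W
        W = np.moveaxis(np.tensordot(U, np.moveaxis(W, mode, 0), axes=(1, 0)), 0, mode)
    return W

rng = np.random.default_rng(0)
shape = (16, 16, 32, 32)   # (modes_x, modes_y, in_ch, out_ch) -- illustrative
ranks = (4, 4, 8, 8)
core = rng.standard_normal(ranks)
factors = [rng.standard_normal((s, r)) for s, r in zip(shape, ranks)]

W = tucker_reconstruct(core, factors)
n_full = int(np.prod(shape))
n_tucker = int(np.prod(ranks)) + sum(s * r for s, r in zip(shape, ranks))
```

Here the factorized form stores roughly two orders of magnitude fewer parameters than the dense tensor, and every mode-unfolding of the reconstructed tensor has rank bounded by the corresponding Tucker rank, which is the low-rank regularization discussed above.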
Our weight coupling ensures maximum communication between the steps, mitigating possible redundancies in the learned kernels and reducing the complexity of the optimization landscape. In summary, we make the following contributions:

- **We propose architectural improvements to the backbone**, which we validate through thorough ablations.
- **We propose** *MG-TFNO*, a novel neural operator parameterized in the spectral domain by a single low-rank factorized tensor, allowing its size to grow linearly with the size of the problem.
- **Our tensor operator achieves better performance with a fraction of the parameters**: we outperform FNO on solving the turbulent Navier-Stokes equations with a more than 400× weight compression ratio, Figure 6.
- **Our method overfits less and does better in the low-data regime**. In particular, it outperforms FNO with less than half the training samples, Figure 8.
- **We introduce a novel multi-grid domain decomposition approach**, a technique which allows the operator to predict the output only on local portions of the domain, thus reducing the memory usage by an order of magnitude with no performance degradation.
- **Combining tensorization with multi-grid domain decomposition leads to** *MG-TFNO*, which is more efficient in terms of task performance, computation, and memory. *MG-TFNO* achieves 2.5× lower error with 10× model weight compression and 1.8× domain compression.
- **A unified codebase** to run all configurations and variations of FNO and *MG-TFNO* will be released, along with the Navier-Stokes data used in this paper.

## 2 Background

Here, we review related works and introduce the background necessary to explain our approach. Many physical phenomena are governed by PDEs, and a wide range of scientific and engineering computation problems are based on solving these equations. In recent years, a new perspective has emerged that formulates these problems as machine learning problems, in which solutions to PDEs are learned from data.
Prior works mainly focused on using neural networks to learn the solution maps of PDEs (Guo et al., 2016; Zhu & Zabaras, 2018; Adler & Oktem, 2017; Bhatnagar et al., 2019; Gupta et al., 2021). The use of neural networks in these prior works limits them to a fixed grid and narrows their applicability to PDEs, for which maps between function spaces are desirable. Multiple attempts have been made to address this limitation. For example, mesh-free methods have been proposed that output mesh-free solutions locally (Lu et al., 2019; Esmaeilzadeh et al., 2020), but they are still limited to a fixed input grid. A new deep learning paradigm, neural operators, has been proposed to learn maps between function spaces (Li et al., 2020a; Kovachki et al., 2021b). Neural operators are discretization-invariant maps: the input functions can be presented in any discretization, mesh, resolution, or basis, and the output functions can be evaluated at any point in the domain. Variants of neural operators deploy a variety of Nyström approximations to develop new neural operator architectures. Among these, multi-pole neural operators (Li et al., 2020b) utilize the multi-pole approach to develop a computationally efficient neural operator architecture. Inspired by spectral methods, Fourier-based neural operators have shown significant applicability in practical applications (Li et al., 2021b; Yang et al., 2021; Wen et al., 2022; Rahman et al., 2022a), and the architectures have been used in neural networks for vision and text tasks (Guibas et al., 2021; Dao et al., 2022). Principal component analysis and U-shaped methods have also been considered (Bhattacharya et al., 2020; Liu et al., 2022; Rahman et al., 2022b; Yang et al., 2022). It has also been shown that neural operators can be trained solely from PDEs, resulting in physics-informed neural operators and opening new avenues for hybrid data-and-equation methods (Li et al., 2021c) to tackle problems in scientific computing.
Decomposing the domain into smaller subdomains is at the core of many methods in the computational sciences (Chan & Mathew, 1994) and has been extensively developed in deep learning (Dosovitskiy et al., 2020). Prior deep learning methods propose to decompose the finite-dimensional input vector into multiple patches, perform local operations, and aggregate the results globally (Dosovitskiy et al., 2020; Guibas et al., 2021). Such methods do not decompose the output domain and directly predict the entire output vector. In contrast, *MG-TFNO* works on function spaces and not only decomposes the input domain, but also decomposes the domain of the output functions, separately predicting the output on each subdomain.

As we move beyond learning from simple structures to solving increasingly complex problems, the data we manipulate becomes more structured. To efficiently manipulate these structures, we need to go beyond matrix algebra and leverage their spatiotemporal structure. For the purposes of this paper, tensors are multidimensional arrays that generalize the concept of matrices to more than two modes (dimensions). For instance, RGB images are encoded as third-order (three-dimensional) tensors, videos are fourth-order tensors, and so on. Tensor methods generalize linear algebraic methods to these higher-order structures. They have been very successful in various applications in computer vision, signal processing, data mining, and machine learning (Panagakis et al., 2021; Janzamin et al., 2019; Sidiropoulos et al., 2017; Papalexakis et al., 2016). Using tensor decomposition (Kolda & Bader, 2009), previous works have been able to compress and improve deep networks for vision tasks. Either a weight matrix is tensorized and factorized Novikov et al.
(2015), or tensor decomposition is applied directly to the convolutional kernels before fine-tuning to recover lost accuracy, which also allows for an efficient reparametrization of the network (Lebedev et al., 2015; Kim et al., 2016; Gusak et al., 2019). There is a tight link between efficient convolutional blocks, tensor factorizations, and factorized higher-order structures (Kossaifi et al., 2020). Similar strategies have been applied to multi-task learning (Bulat et al., 2020a) and NLP (Papadopoulos et al., 2022; Cordonnier et al., 2020). Of all these prior works, none has been applied to neural operators. In this work, we propose the first application of tensor compression to operator learning and propose the Tensorized Fourier Neural Operator (*TFNO*).

| Variable | Meaning | Dimensionality |
|------------|----------------------------------------------------|---------------------------------------|
| T | Tensor of weights in the Fourier domain | ℂ^{α×···×α×n×m} |
| W | Weight tensor parameterizing the entire operator | ℂ^{α×···×α×n×n×2^{d−1}L} |
| A | Input function space | Infinite |
| U | Output function space | Infinite |
| a | Input function | Infinite |
| u | Output function | Infinite |
| D_A | Domain of function a | d |
| D_U | Domain of function u | d |
| d_A | Dimension of the co-domain of the input functions | 1 |
| d_U | Dimension of the co-domain of the output functions | 1 |
| F | Fourier transform | Infinite |
| F^{−1} | Inverse Fourier transform | Infinite |
| L | Number of integral operation layers | In ℕ |
| l | Layer index | Between 1 and L |
| σ | Point-wise activation operation | Infinite |
| b | Bias vector | n |
| v | Function at each layer | Infinite |
| α | Number of kept frequencies in Fourier space | Between 1 and ½ min{s_1, · · · , s_d} |

Table 1: **Table of notation**
## 3 Methodology

Here, we briefly review operator learning as well as the Fourier Neural Operator (FNO), on which we build to introduce our proposed Tensorized Fourier Neural Operator (*TFNO*) as well as the multi-grid domain decomposition, which together form our proposed *MG-TFNO*.

## 3.1 Operator Learning

Let A := {a : D_A → R^{d_A}} and U := {u : D_U → R^{d_U}} denote the input and output function spaces, respectively. Each function a in the input function space A is a map from a bounded, open set D_A ⊂ R^d to the d_A-dimensional Euclidean space. Any function in the output function space U is a map from a bounded, open set D_U ⊂ R^d to the d_U-dimensional Euclidean space. In this work, we consider the case D = D_A = D_U ⊂ R^d. We aim to learn an operator G : A → U, which is a mapping between the two function spaces. In particular, given a dataset of N points {(a_j, u_j)}_{j=1}^N, where the pair (a_j, u_j) are functions satisfying G(a_j) = u_j, we build an approximation of the operator G. As a backbone operator learning model, we use neural operators, as they are consistent and universal learners in function spaces. For an overview of theory and implementation, we refer the reader to Kovachki et al. (2021b). We specifically use the FNO (Li et al., 2021b) and give details in the forthcoming section.

## 3.2 Notation

We summarize the notation used throughout the paper in Table 1.

## 3.3 Fourier Neural Operators

For simplicity, we will work on the d-dimensional unit torus T^d and first describe a single, pre-activation FNO layer mapping R^m-valued functions to R^n-valued functions. Such a layer constitutes the mapping G : L^2(T^d; R^m) → L^2(T^d; R^n) defined as

$$\mathcal{G}(v)=\mathcal{F}^{-1}\big{(}\mathcal{F}(\kappa)\cdot\mathcal{F}(v)\big{)},\qquad\forall\,v\in L^{2}(\mathbb{T}^{d};\mathbb{R}^{m})\tag{1}$$

where κ ∈ L^2(T^d; R^{n×m}) is a function constituting the layer parameters and F, F^{−1} are the Fourier transform and its inverse, respectively.
The Fourier transform of the function κ is parameterized directly by some fixed number of Fourier modes, denoted α ∈ N. To implement equation 1, F, F^{−1} are replaced by the discrete fast Fourier transforms F̂, F̂^{−1}. Let v̂ ∈ R^{s_1×···×s_d×m} denote the evaluation of the function v on a uniform grid discretizing T^d with s_j ∈ N points in each direction. We replace F(κ) with a weight tensor T ∈ ℂ^{s_1×···×s_d×n×m} consisting of the Fourier modes of κ, which are parameters to be learned. To ensure that κ is parameterized as an R^{n×m}-valued function with a fixed, maximum number of wavenumbers α < ½ min{s_1, · · · , s_d} that is independent of the discretization of T^d, we leave as learnable parameters only the first α entries of T in each direction and enforce that T have conjugate symmetry. In particular, we parameterize half the corners of the d-dimensional hyperrectangle with 2^{d−1} hypercubes of side length α. That is, T is made up of the free-parameter tensors T̃_1, · · · , T̃_{2^{d−1}} ∈ ℂ^{α×···×α×n×m} situated in half of the corners of T. Each corner diagonally opposite a tensor T̃_j is assigned the conjugate transpose values of T̃_j. All other values of T are set to zero. This is illustrated in the middle-top part of Figure 2 for the case d = 2 with T̃_1 and T̃_2. We will use the notation T(k, · · ·) = T̃_k for any k ∈ [2^{d−1}]. The discrete version of equation 1 then becomes the mapping Ĝ : R^{s_1×···×s_d×m} → R^{s_1×···×s_d×n} defined as

$$\hat{\mathcal{G}}(\hat{v})=\hat{\mathcal{F}}^{-1}\big{(}\mathbf{T}\cdot\hat{\mathcal{F}}(\hat{v})\big{)},\qquad\forall\,\hat{v}\in\mathbb{R}^{s_{1}\times\cdots\times s_{d}\times m}\tag{2}$$

where the · operation is the matrix-multiplication contraction along the last dimension.
Specifically, we have

$$(\mathbf{T}\cdot\hat{\mathcal{F}}(\hat{v}))(l_{1},\ldots,l_{d},j)=\sum_{i=1}^{m}\mathbf{T}(l_{1},\ldots,l_{d},j,i)\,(\hat{\mathcal{F}}(\hat{v}))(l_{1},\ldots,l_{d},i).\tag{3}$$

From equation 2, a full FNO layer is built by adding a point-wise linear action on v̂ and a bias term, and applying a non-linear activation. In particular, from an input v̂ ∈ R^{s_1×···×s_d×m}, the output q̂ ∈ R^{s_1×···×s_d×n} is given as

$${\hat{q}}(l_{1},\cdots,l_{d},:)=\sigma{\big(}\mathbf{Q}{\hat{v}}(l_{1},\cdots,l_{d},:)+{\hat{\mathcal{G}}}({\hat{v}})(l_{1},\cdots,l_{d},:)+b{\big)}$$

with σ : R → R a fixed, non-linear activation applied element-wise, and b ∈ R^n, Q ∈ R^{n×m}, T̃_1, · · · , T̃_{2^{d−1}} ∈ ℂ^{α×···×α×n×m} the learnable parameters of the layer. The full FNO model consists of L ∈ N such layers, each with weight tensors T_1, · · · , T_L that have learnable parameters T̃^{(l)}_k = T_l(k, · · ·) for any l ∈ [L] and k ∈ [2^{d−1}]. In the case n = m for all layers, we introduce the joint parameter tensor W ∈ ℂ^{α×···×α×n×n×2^{d−1}L} so that

$$\mathbf{W}\left(\ldots,2^{d-1}(l-1)+k+1\right)={\tilde{\mathbf{T}}}_{k}^{(l)}.$$

A perusal of the above discussion reveals that there are (2^d α^d + 1)mn + n total parameters in each FNO layer. Note that, since m and n constitute the respective input and output channels of the layer, the number of parameters can quickly explode due to the exponential scaling factor 2^d α^d if many wavenumbers are kept. Preserving a large number of modes could be crucial for applications where the spectral decay of the input or output functions is slow, such as in image processing or the modeling of multi-scale physics. In the following section, we describe a tensorization method that is able to mitigate this growth without sacrificing approximation power.
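To make the layer construction above concrete, the following is a minimal, self-contained sketch of a 2D spectral convolution layer in PyTorch (our own illustrative code, not the authors' released implementation; class and variable names are ours). It keeps only the first α modes per direction; `rfft2`/`irfft2` handle the conjugate symmetry of real-valued inputs implicitly, so only the 2^{d−1} = 2 corner blocks of the weight tensor are stored:

```python
import torch

class SpectralConv2d(torch.nn.Module):
    """Minimal 2D FNO layer sketch (illustrative, not the released code).
    rfft2/irfft2 handle conjugate symmetry of real inputs, so only the
    2^{d-1} = 2 corner blocks T1, T2 of the weight tensor are stored."""

    def __init__(self, in_channels, out_channels, alpha):
        super().__init__()
        self.alpha = alpha
        scale = 1.0 / (in_channels * out_channels)
        shape = (alpha, alpha, out_channels, in_channels)
        self.T1 = torch.nn.Parameter(scale * torch.randn(*shape, dtype=torch.cfloat))
        self.T2 = torch.nn.Parameter(scale * torch.randn(*shape, dtype=torch.cfloat))
        self.Q = torch.nn.Linear(in_channels, out_channels)  # point-wise linear path + bias

    def forward(self, v):
        # v: (batch, s1, s2, in_channels), channels last for simplicity
        b, s1, s2, m = v.shape
        v_hat = torch.fft.rfft2(v, dim=(1, 2))
        q_hat = torch.zeros(b, s1, s2 // 2 + 1, self.T1.shape[2], dtype=torch.cfloat)
        a = self.alpha
        # channel contraction of equation 3, restricted to the kept corner modes
        q_hat[:, :a, :a] = torch.einsum("bxyi,xyji->bxyj", v_hat[:, :a, :a], self.T1)
        q_hat[:, -a:, :a] = torch.einsum("bxyi,xyji->bxyj", v_hat[:, -a:, :a], self.T2)
        q = torch.fft.irfft2(q_hat, s=(s1, s2), dim=(1, 2))
        return torch.nn.functional.gelu(q + self.Q(v))  # skip + bias + activation

layer = SpectralConv2d(3, 8, alpha=4)
out = layer(torch.randn(2, 32, 32, 3))
print(out.shape)  # torch.Size([2, 32, 32, 8])
```

Note that the learnable weights have shape (α, α, n, m) regardless of the grid size (s_1, s_2), which is exactly the discretization independence discussed above.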
![6_image_0.png](6_image_0.png)

Figure 3: **Original FNO and Improved Backbone Architecture.** The original FNO architecture (Li et al., 2021b) is composed simply of a spectral convolution, with a (linear) skip connection to recover high-frequency information and handle non-periodic inputs (3(a)). We improve the architecture as detailed in Section 3.4. In particular, we have a version with a double (sequential) skip connection (3(b)), while our best architecture uses nested skip connections and can be made both with and without preactivation (subfigures 3(c) and 3(d), respectively). The latter, subfigure 3(d), is our best architecture.

## 3.4 Architectural Improvements

Our proposed approach uses the FNO as a backbone. To improve its performance, we first study various aspects of the Fourier neural architecture and perform thorough ablations to validate each aspect. In particular, we propose improvements to the base architecture that improve performance.

Normalization in neural operators While normalization techniques, such as batch normalization (Ioffe & Szegedy, 2015), have proven very successful in training neural networks, additional consideration must be given when applying them to neural operators in order to preserve their properties, notably discretization invariance. Specifically, the normalization cannot depend on the spatial variables and therefore has to be either a global or a function-wise normalization. We investigate several configurations using instance normalization (Ulyanov et al., 2016) and layer normalization (Ba et al., 2016), in conjunction with the use of preactivation (He et al., 2016).

Channel mixing FNO relies on a global convolution realized in the spectral domain. Inspired by previous works, e.g., Guibas et al. (2021), we propose adding an MLP in the *original* space after each spectral convolution. In practice, we found that a two-layer bottleneck MLP works well, e.g.
we decrease the codimension by half in the first linear layer before restoring it in the second one.

Boundary conditions Fourier neural operators circumvent the limitation of traditional Fourier methods to inputs with periodic boundaries only. This is achieved through a local linear transformation added to the spectral convolution, which can be seen as a linear skip connection. We investigate replacing it with an identity skip connection and a soft-gated skip connection (Bulat et al., 2020b). We also investigate the impact of domain padding, found by Li et al. (2021b) to improve results, especially for non-periodic inputs, and of padding for the multi-grid decomposition. We show in Figure 3 the original FNO architecture (Li et al., 2021b) (subfigure 3(a)), the improved version with double (sequential) skip connections (subfigure 3(b)), and our best architecture, both with and without preactivation (subfigures 3(c) and 3(d), respectively).

## 3.5 Tensor Fourier Neural Operators

In the previous section, we introduced a unified formulation of FNO in which the whole operator is parametrized by a single parameter tensor W. This enables us to introduce the tensor operator, which parameterizes W efficiently with a low-rank tensor factorization. We introduce the method for the case of a Tucker decomposition, for its flexibility; other decompositions, such as Canonical Polyadic, can be readily integrated. This joint parametrization applies a low-rank constraint on the entire tensor W, thus regularizing the model. Its advantages are i) a huge reduction in the number of parameters, ii) better generalization and an operator less prone to overfitting, with superior performance for low compression ratios (up to 200×) and very little performance degradation when largely compressing (> 450×) the model, and iii) better performance in the low-data regime. In practice, we express W in a low-rank factorized form, e.g., Tucker or CP.
In the case of a Tucker factorization with rank (R_1, · · · , R_d, R_I, R_O, R_L), where R_L controls the rank across layers, R_I and R_O control the rank across the input and output co-dimensions, respectively, and R_1, · · · , R_d control the rank across the dimensions of the operator:

$$\mathbf{W}=\sum_{r_{1}=1}^{R_{1}}\cdots\sum_{r_{d}=1}^{R_{d}}\sum_{r_{i}=1}^{R_{I}}\sum_{r_{o}=1}^{R_{O}}\sum_{r_{l}=1}^{R_{L}}\mathbf{G}(r_{1},\cdots,r_{d},r_{i},r_{o},r_{l})\,\mathbf{U}^{(1)}(:,r_{1})\circ\cdots\circ\mathbf{U}^{(d)}(:,r_{d})\circ\mathbf{U}^{(I)}(:,r_{i})\circ\mathbf{U}^{(O)}(:,r_{o})\circ\mathbf{U}^{(L)}(:,r_{l}).\tag{4}$$

Here, ∘ denotes the outer product, G is the core tensor of size R_1 × · · · × R_d × R_I × R_O × R_L, and U^{(1)}, · · · , U^{(d)}, U^{(I)}, U^{(O)}, U^{(L)} are factor matrices of size (α × R_1), · · · , (α × R_d), (I × R_I), (O × R_O), (L × R_L), respectively. Note that the mode (dimension) corresponding to the layers can be left uncompressed by setting R_L = L and U^{(L)} = Id, which leads to layerwise compression. Also note that having a rank of 1 along any of the modes would mean that the slices along that mode differ only by a (multiplicative) scaling parameter. During the forward pass, we can pass T directly in factorized form to each layer by selecting the corresponding rows in U^{(L)}. While the contraction in equation 3 can be done using the reconstructed tensor, it can also be done directly by contracting F̂(v̂) with the factors of the decomposition. For small, adequately chosen ranks, this can result in computational speedups. A visualization of the Tucker decomposition of a third-order tensor can be seen in Figure 4.
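As an illustration of equation 4, the following numpy sketch (with toy sizes of our choosing, not the paper's settings) reconstructs a fifth-order weight tensor from a Tucker core and factor matrices with a single einsum, and reports the resulting parameter compression:

```python
import numpy as np

# Toy sizes (ours): d = 2 spatial modes of size alpha, co-dimensions I, O,
# and the layer mode L, factorized with Tucker ranks (R1, R2, RI, RO, RL).
alpha, I, O, L = 8, 16, 16, 4
R1, R2, RI, RO, RL = 3, 3, 4, 4, 2

G = np.random.randn(R1, R2, RI, RO, RL)   # core tensor
U1 = np.random.randn(alpha, R1)           # factor matrices (mode size x rank)
U2 = np.random.randn(alpha, R2)
UI = np.random.randn(I, RI)
UO = np.random.randn(O, RO)
UL = np.random.randn(L, RL)

# W = G x_1 U1 x_2 U2 x_3 UI x_4 UO x_5 UL, written as a single einsum
W = np.einsum("abcde,xa,yb,ic,od,le->xyiol", G, U1, U2, UI, UO, UL)
print(W.shape)  # (8, 8, 16, 16, 4)

full = alpha * alpha * I * O * L
factorized = G.size + U1.size + U2.size + UI.size + UO.size + UL.size
print(full / factorized)  # parameter compression ratio, here over 100x
```

The einsum performs all five n-mode products at once; in practice a library such as TensorLy provides this reconstruction, and the contraction with F̂(v̂) can be done directly on the factors instead.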
Note that we can rewrite the entire weight parameter for this Tucker case, equivalently, using the more compact n-mode product as:

$$\mathbf{W}=\mathbf{G}\times_{1}\mathbf{U}^{(1)}\cdots\times_{d}\mathbf{U}^{(d)}\times_{d+1}\mathbf{U}^{(I)}\times_{d+2}\mathbf{U}^{(O)}\times_{d+3}\mathbf{U}^{(L)}.$$

We can efficiently perform an inverse FFT after contraction with the tensorized kernel. For any layer l, the (j_1, j_2) coordinate of the matrix-valued convolution function κ_l(x) is

$$[\kappa_{l}(x)]_{j_{1},j_{2}}=\sum_{i_{1}=1}^{\alpha}\cdots\sum_{i_{d}=1}^{\alpha}\sum_{r_{1}=1}^{R_{1}}\cdots\sum_{r_{d}=1}^{R_{d}}\sum_{r_{i}=1}^{R_{I}}\sum_{r_{o}=1}^{R_{O}}\sum_{r_{l}=1}^{R_{L}}\mathbf{G}(r_{1},\cdots,r_{d},r_{i},r_{o},r_{l})\,\mathbf{U}^{(1)}(i_{1},r_{1})\cdots\mathbf{U}^{(d)}(i_{d},r_{d})\,\mathbf{U}^{(I)}(j_{1},r_{i})\,\mathbf{U}^{(O)}(j_{2},r_{o})\,\mathbf{U}^{(L)}(l,r_{l})\,\exp\Big(2\pi\mathrm{i}\sum_{k=1}^{d}x_{k}i_{k}\Big).$$

![7_image_0.png](7_image_0.png)

Figure 4: **Illustration of a Tucker decomposition.** For clarity, we show W as a third-order tensor weight.

This joint factorization along the entire operator allows us to leverage redundancies both locally and across the entire operator. This leads to a large reduction in the memory footprint, with only a fraction of the parameters. It also acts as a low-rank regularizer on the operator, facilitating training. Finally, through the global parametrization, we introduce skip connections that allow gradients to flow through the latent parametrization to all the layers jointly, leading to better optimization.

![8_image_1.png](8_image_1.png)

(a) **Predicting with padded regions.** The local region in the input is padded and used to predict the corresponding region in the output. (b) **MG-Domain Decomposition.** Progressively larger spatial regions are added to a local region by subsampling.

Figure 5: **Domain decomposition in space (5(a)) and our multi-grid based approach (5(b)).** White squares represent the region of interest while yellow squares represent the larger embeddings.
Importantly, this formulation is general and works with any tensor factorization. For instance, we also explore a Canonical-Polyadic (CP) decomposition, which can be seen as a special case of Tucker with a super-diagonal core. In that case, we set a single rank R and express the weights as a weighted sum of R rank-1 tensors. Concretely:

$$\mathbf{W}=\sum_{r=1}^{R}\lambda_{r}\,\mathbf{U}^{(1)}(:,r)\circ\cdots\circ\mathbf{U}^{(d)}(:,r)\circ\mathbf{U}^{(I)}(:,r)\circ\mathbf{U}^{(O)}(:,r)\circ\mathbf{U}^{(L)}(:,r)\tag{5}$$

![8_image_0.png](8_image_0.png)

where U^{(1)}, · · · , U^{(d)}, U^{(I)}, U^{(O)}, U^{(L)} are factor matrices of size (α × R), · · · , (α × R), (I × R), (O × R), (L × R), respectively, and λ ∈ R^R. Note that CP, contrary to Tucker, has a single rank parameter shared between all the dimensions. This means that, to keep the number of parameters the same, R needs to be very high, which leads to memory issues. This makes CP more suitable for large compression ratios, and indeed, we found that it leads to better performance at high compression / very low rank. In this paper, we also explore the tensor-train decomposition (Oseledets, 2011). A rank-(1, R_1, · · · , R_d, R_I, R_O, R_L, 1) TT factorization expresses W as:

$$\mathbf{W}(i_{1},\cdots,i_{d},i_{c},i_{o},i_{l})=\mathbf{G}_{1}(i_{1})\cdots\mathbf{G}_{d}(i_{d})\,\mathbf{G}_{I}(i_{c})\,\mathbf{G}_{O}(i_{o})\,\mathbf{G}_{L}(i_{l}),$$

where each factor of the decomposition G_k is a third-order tensor of size R_k × I_k × R_{k+1}, so that each G_k(i_k) is an R_k × R_{k+1} matrix. In the experimental section 4.3, we show results of *TFNO* trained with Tucker, TT, and CP factorizations.

Separable Fourier Convolution The proposed tensorization approach introduces a factorization of the weights in the spectral domain. When a CP decomposition (Kolda & Bader, 2009) is used, this induces separability over the learned kernel.
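A corresponding sketch for the CP case of equation 5 (again with toy sizes of our choosing): the weight is a λ-weighted sum of R outer products of matching factor columns, which a single einsum expresses directly:

```python
import numpy as np

alpha, I, O, L, R = 8, 16, 16, 4, 10      # toy sizes (ours), single shared CP rank R
lam = np.random.randn(R)
U1, U2 = np.random.randn(alpha, R), np.random.randn(alpha, R)
UI, UO, UL = np.random.randn(I, R), np.random.randn(O, R), np.random.randn(L, R)

# Weighted sum over r of the outer products of matching factor columns
W = np.einsum("r,xr,yr,ir,or,lr->xyiol", lam, U1, U2, UI, UO, UL)
print(W.shape)  # (8, 8, 16, 16, 4)
```

Because all modes share the single rank R, the parameter count grows only linearly in R, which is why CP suits the very high compression ratios discussed above.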
We propose to make this separability explicit by not performing any channel mixing in the spectral domain and relying on the MLP introduced above to do so. The separable spectral convolution can be thought of as a depthwise convolution performed in the Fourier domain, i.e., without any channel mixing. The mixing between channels is instead done in the spatial domain. This results in a significant reduction in the number of parameters while having minimal impact on performance (we found it necessary to increase the depth of the network, however, to ensure that the network retains enough capacity).

## 3.6 Multi-Grid Domain Decomposition

Having introduced our decomposition in the operator's parameter space, we now introduce our novel multi-grid approach to decompose the problem domain. Domain decomposition is a method commonly used to parallelize classical solvers for time-dependent PDEs. It is based on the principle that the solution for a fixed local region in space depends mostly on the input at the same local region (Chan & Mathew, 1994). In particular, since the time step h > 0 of the numerical integrator is small, the solution u(x, t + h), for any point x ∈ D and t ∈ R_+, depends most strongly on the points u(y, t) for all y ∈ B(x, r(h)), where B(x, r(h)) denotes the ball centered at x with radius r(h). This phenomenon is easily seen for the case of the heat equation where, in one dimension, the solution satisfies

$$\begin{split}u(x,t+h)&\propto\int_{-\infty}^{\infty}\exp\left({\frac{-(x-y)^{2}}{4h}}\right)u(y,t)\,\mathrm{d}y\\ &\approx\int_{x-4h}^{x+4h}\exp\left({\frac{-(x-y)^{2}}{4h}}\right)u(y,t)\,\mathrm{d}y\end{split}$$

with the approximation holding since 99.9937% of the kernel's mass is contained within B(x, 4h). While some results exist, there is no general convergence theory for this approach; its empirical success has, however, made it popular for various numerical methods (Albin & Bruno, 2011).
To exploit this localization, the domain D is split into q ∈ N pairwise-disjoint regions D_1, · · · , D_q so that D = ∪_{j=1}^q D_j. Each region D_j is then embedded into a larger one Z_j ⊃ D_j so that points away from the center of D_j have enough information to be well approximated. A model can then be trained so that the approximation G(a|_{Z_j})|_{D_j} ≈ u|_{D_j} holds for all j ∈ [q]. This idea is illustrated in Figure 5(a), where D = [0, 1]^2 and all D_j, Z_j are differently sized squares. This allows the model to be run fully in parallel, hence its time and memory complexities are reduced linearly in q.

Multi-Grid. Domain decomposition works well in classical solvers when the time step h > 0 is small because the mapping u(·, t) ↦ u(·, t + h) is close to the identity. However, the major advancement made by machine learning-based operator methods for PDEs is that a model can approximate the solution, in one shot, for very large times, i.e., h > 1. But, for larger h, the size of Z_j relative to D_j must increase to obtain the same approximation accuracy, independently of model capacity. This causes any computational savings made by the decomposition approach to be lost. To mitigate this, we propose a multi-grid based domain decomposition approach where global information is added hierarchically at different resolutions. While our approach is inspired by the classical multi-grid method, it is not based on the V-cycle algorithm (McCormick, 1985). For ease of presentation, we describe this concept when a domain D = T^2 is uniformly discretized by 2^s × 2^s points, for some s ∈ N, but note that generalizations can readily be made. Given a final level L ∈ N, we first sub-divide the domain into 2^{2L} total regions, each of size 2^{s−L} × 2^{s−L}, and denote them D^{(0)}_1, · · · , D^{(0)}_{2^{2L}}. We call this the zeroth level. Then, around each D^{(0)}_j, for any j ∈ [2^{2L}], we consider the square D^{(1)}_j of size 2^{s−L+1} × 2^{s−L+1} that is equidistant, in every direction, from each boundary of D^{(0)}_j.
We then subsample the points in D^{(1)}_j uniformly by a factor of 1/2 in each direction, making D^{(1)}_j have 2^{s−L} × 2^{s−L} points. We call this the first level. We continue this process by considering the squares D^{(2)}_j of size 2^{s−L+2} × 2^{s−L+2} around each D^{(1)}_j and subsampling them uniformly by a factor of 1/4 in each direction to again yield squares with 2^{s−L} × 2^{s−L} points. The process is repeated until the L-th level is reached, wherein D^{(L)}_j is the entire domain subsampled by a factor of 2^{−L} in each direction. The process is illustrated for the case L = 2 in Figure 5(b). Since we work with the torus, the region of the previous level is always at the center of the current level. The intuition behind this method is that, since the dependence on points inside a local region diminishes the further we are from that region, it is enough to have coarser information as we go farther. We combine this multi-grid method with the standard domain decomposition approach by building appropriately padded squares Z^{(l)}_j of size (2^{s−L} + 2p) × (2^{s−L} + 2p) around each D^{(l)}_j, where p ∈ N is the amount of padding to be added in each direction.

![10_image_0.png](10_image_0.png)

Figure 6: **Tensorization: error in log scale as a function of the compression ratio.** We compare the tensorized neural operator with an FNO with the same number of parameters (trimmed). We achieve over 100× compression ratio with better performance than the original FNO.

![10_image_1.png](10_image_1.png)

Figure 7: **MG-Domain Decomposition: error as a function of the domain compression ratio.** We compare MG-TFNO with different numbers of multi-grid regions, both with and without weight tensor compression, to a full-field FNO model. We achieve over 7× input space compression, 10× parameter space compression ratios, and better performance than the original FNO.

We then take the evaluations of the input function a at each level and concatenate them as channels.
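The level construction above can be sketched as follows (an illustrative helper of our own, not the authors' code): for one level-0 region, each higher level takes a window twice as large, centered on the region, subsampled by an additional factor of 1/2 per level, with indices wrapped to respect the torus:

```python
import numpy as np

def multigrid_patch(a, i0, j0, L):
    """Stack L+1 multi-grid levels for one level-0 region (illustrative helper).
    a: (2^s, 2^s) field on the torus; (i0, j0): top-left corner of the level-0
    region of size m = 2^{s-L}. Level l is a window 2^l times larger, centered
    on the region, subsampled by a factor of 1/2^l in each direction."""
    m = a.shape[0] >> L                 # region size 2^{s-L}
    ci, cj = i0 + m // 2, j0 + m // 2   # region center
    levels = []
    for l in range(L + 1):
        step = 1 << l                   # subsampling stride 2^l
        half = (m * step) // 2          # half-width of the level-l window
        rows = (ci - half + step * np.arange(m)) % a.shape[0]  # wrap around the torus
        cols = (cj - half + step * np.arange(m)) % a.shape[1]
        levels.append(a[np.ix_(rows, cols)])
    return np.stack(levels)             # the L+1 levels become input channels

a = np.random.randn(64, 64)             # s = 6
patch = multigrid_patch(a, 0, 0, L=2)   # level 2 covers the whole domain
print(patch.shape)  # (3, 16, 16)
```

Level 0 is the region itself, and the last level is the full domain at coarse resolution, matching the construction in the text; padding by p points per side would be added analogously before stacking.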
In particular, we train a model Ĝ so that Ĝ(a|_{Z^{(0)}_j}, a|_{Z^{(1)}_j}, · · · , a|_{Z^{(L)}_j}) ≈ u|_{D^{(0)}_j}. Since the model only operates on each padded region separately, we reduce the total number of grid points used from 2^{2s} to (2^{s−L} + 2p)^2 and define the domain compression ratio as the quotient of these numbers. Furthermore, note that, assuming a is R^{d_A}-valued, a model that does not employ our multi-grid domain decomposition uses inputs with d_A channels, while our approach builds inputs with (L + 1)d_A channels. In particular, the number of input channels scales only logarithmically in the number of regions, hence global information is added at very little additional cost. Indeed, FNO models are usually trained with internal widths much larger than d_A, hence the extra input channels cause almost no additional memory overhead.

## 4 Experiments

In this section, we first introduce the data, experimental setting, and implementation details before empirically validating our approach through thorough experiments and ablations.

## 4.1 Data

We experiment on a dataset of 10K training samples and 2K test samples of the two-dimensional Navier-Stokes equation with Reynolds number 500. We also experiment with the one-dimensional viscous Burgers' equation.

Navier-Stokes. We consider the vorticity form of the two-dimensional Navier-Stokes equation,

$$\partial_{t}\omega+\nabla^{\perp}\phi\cdot\nabla\omega=\frac{1}{\text{Re}}\Delta\omega+f,\qquad x\in\mathbb{T}^{2},\,t\in(0,T]\tag{6}$$
$$-\Delta\phi=\omega,\quad\int_{\mathbb{T}^{2}}\phi=0,\qquad x\in\mathbb{T}^{2},\,t\in(0,T]$$

with initial condition ω(0, ·) = 0, where T^2 ≅ [0, 2π)^2 is the torus, f ∈ L̇^2(T^2; R) is a forcing function, and Re > 0 is the Reynolds number. Then ω(t, ·) ∈ Ḣ^s(T^2; R), for any t ∈ (0, T] and s > 0, is the unique weak solution to equation 6 (Temam, 1988). We consider the non-linear operator mapping f ↦ ω(T, ·) with T = 5 and fix the Reynolds number Re = 500.
We define the Gaussian measure µ = N(0, C) on the forcing functions, where we take the covariance C = 27(−∆ + 9I)^{−4}, following the setting in De Hoop et al. (2022). Input data is obtained by generating i.i.d. samples from µ via a KL-expansion onto the eigenfunctions of C (Powell et al., 2014). Solutions to equation 6 are then obtained by a pseudo-spectral scheme (Chandler & Kerswell, 2013).

Burgers' Equation. We consider the one-dimensional Burgers' equation on the torus,

$$\partial_{t}u+uu_{x}=\nu u_{xx},\qquad x\in\mathbb{T},\;t\in(0,T]\tag{7}$$
$$u|_{t=0}=u_{0},\qquad\quad x\in\mathbb{T}$$

for initial condition u_0 ∈ L^2(T; R) and viscosity ν > 0. Then u(t, ·) ∈ H^s(T; R), for any t ∈ R_+ and s > 0, is the unique weak solution to equation 7 (Evans, 2010). We consider the non-linear operator u_0 ↦ u(T, ·) with T = 0.5 or 1 and fix ν = 0.01. We define the Gaussian measure µ = N(0, C), where we take the covariance C = 3^{5/2}(−d²/dx² + 9I)^{−3}. Input data is obtained by generating i.i.d. samples from µ via a KL-expansion onto the eigenfunctions of C. Solutions to equation 7 are then obtained by a pseudo-spectral solver using Heun's method. We use 8K samples for training and 2K for testing.

## 4.2 Implementation Details

Implementation We use PyTorch (Paszke et al., 2017) for implementing all the models. The tensor operations are implemented using TensorLy (Kossaifi et al., 2019) and TensorLy-Torch (Kossaifi, 2021). Our code was released under the permissive MIT license as a Python package that is well-tested and comes with extensive documentation, to encourage and facilitate downstream scientific applications. It is available at https://github.com/neuraloperator/neuraloperator.

![12_image_0.png](12_image_0.png)

Figure 8: **Error as a function of the number of training samples (left) and training vs. testing loss (right).** We compare *TFNO* with a regular FNO.
Note that, on the left, we show the L^2 test error while, for training, the H^1 loss is used, which is compared with the H^1 test error on the right. Our approach generalizes better while requiring fewer training samples.

Hyper-parameters We train all models via gradient backpropagation using a mini-batch size of 16 and the Adam optimizer, with a learning rate of 10^{−3} and weight decay of 10^{−4}, for 500 epochs, decreasing the learning rate every 100 epochs by a factor of 1/2. The model width is set in all cases to 64 except when specified otherwise (for the Trimmed FNO), meaning that the input is first lifted (with a linear layer) from the number of input channels to that width. The projection layer projects from the width to 256, and a prediction linear layer outputs the predictions. 10,000 samples were used for training, as well as a separate set of 2,000 samples for testing. All experiments are done on an NVIDIA Tesla V100 GPU. To disentangle the effect of each of our components, the comparisons between the original FNO, the MG-FNO, *TFNO*, and the *MG-TFNO* were conducted in the same setting, with a mini-batch size of 32, modes of 42 and 21 for the height and width, respectively, and an operator width of 64. For the comparison between our best models, we use all the modes (64 and 32) and a mini-batch size of 16, which leads to improved performance for all models but longer training times. For each comparison, the same setting and hyper-parameters were used for all models.

Training the operator. Since *MG-TFNO* predicts local regions which are then stitched together to form a global function without any communication, aliasing effects can occur where one output prediction does not flow smoothly into the next. To prevent this, we train our model using the H^1 Sobolev norm (Czarnecki et al., 2017; Li et al., 2021a). By matching derivatives, training with this loss prevents discontinuities from occurring, and the output prediction is smooth.
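A spectral implementation of a relative H1 loss might look as follows (our own sketch, not the authors' released code): the squared error is weighted by (1 + |k|²) in Fourier space, so mismatches in first derivatives are penalized alongside the function values:

```python
import torch

def relative_h1_loss(pred, target):
    """Relative H1 Sobolev loss on the 2D torus (illustrative sketch).
    Fourier weights (1 + |k|^2) penalize errors in the function and its
    first derivatives. Inputs: (batch, s1, s2)."""
    s1, s2 = pred.shape[-2:]
    k1 = torch.fft.fftfreq(s1, d=1.0 / s1)   # integer wavenumbers along dim -2
    k2 = torch.fft.rfftfreq(s2, d=1.0 / s2)  # half-spectrum wavenumbers along dim -1
    weight = 1.0 + k1[:, None] ** 2 + k2[None, :] ** 2
    err_hat = torch.fft.rfft2(pred - target)
    tgt_hat = torch.fft.rfft2(target)
    num = torch.sum(weight * err_hat.abs() ** 2, dim=(-2, -1))
    den = torch.sum(weight * tgt_hat.abs() ** 2, dim=(-2, -1))
    return torch.mean(torch.sqrt(num / den))

u = torch.randn(4, 32, 32)
print(relative_h1_loss(u, u).item())  # 0.0
```

Setting the weight to 1 everywhere recovers a relative L^2 loss; the (1 + |k|²) weighting is what discourages the high-frequency discontinuities at patch boundaries described above.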
## 4.3 Experimental Results

In this section, we compare our approach with both the regular FNO (Li et al., 2021b) and the Factorized-FNO (Tran et al., 2023), which separately applies the FFT along each mode before combining the results. In all cases, our approach achieves superior performance with a fraction of the parameters, as can be seen in Table 2.

| Method | L^2 test error (%) | # Params | Model CR | Input CR |
|--------------------------|--------|------------|------------|------------|
| FNO Li et al. (2021b) | 1.34% | 67M | - | - |
| FFNO Tran et al. (2023) | 1.15% | 1M | 67× | - |
| TFNO (CP) | 0.29% | 890K | 75× | - |
| TFNO (CP) | 0.47% | 447K | 150× | - |
| MG-TFNO (CP) | 0.49% | 447K | 40× | 1.9× |
| MG-TFNO (Tucker) | 0.42% | 447K | 19× | 1.9× |

Table 2: Compression of both the model (*TFNO*) and the input domain (*MG-TFNO*).

Tensorizing: better compression. In Figure 6, we show the performance of our approach (*TFNO*) compared to the original FNO for varying compression ratios. In the Trimmed-FNO, we adjust the width in order to match the number of parameters in our *TFNO*. We focus on the width of the network as it was shown to be the most important parameter (De Hoop et al., 2022). Our method massively outperforms the Trimmed-FNO at every single fixed parameter count. Furthermore, even for very large compression ratios, our *TFNO* outperforms the full-parameter FNO model. This is likely due to the regularizing effect of the tensor factorization on the weights, showing that many of the parameters in the original model are redundant.

Tensorizing: better generalization. Figure 8 (left) shows that our *TFNO* generalizes better with fewer training samples. Indeed, at every fixed amount of training samples, the *TFNO* massively outperforms the full-parameter FNO model. Even when only using half the samples, our *TFNO* outperforms the FNO trained on the full dataset.
Furthermore, Figure 8 (right) shows that our *TFNO* overfits significantly less than FNO, demonstrating the regularizing effect of the tensor decomposition. This result is invaluable in the PDE setting, where very few training samples are typically available due to the high computational cost of traditional PDE solvers.

Multi-Grid Domain Decomposition. In Table 2, we compare our *MG-TFNO* with the baseline FNO and the *TFNO*. *MG-TFNO* enables compressing not only the weight tensor but also the input domain. On the other hand, preserving resolution invariance requires padding the patches, which decreases performance, resulting in a tradeoff between input domain compression and prediction accuracy. We also show the impact of multi-grid domain decomposition on performance in Figure 7. We find that lower compression ratios (corresponding to a larger amount of padding in the decomposed regions) perform better, which is unsurprising since more information is incorporated into the model. More surprisingly, we find that using a larger number of regions (16) performs consistently better than using a smaller number (4), and both can outperform the full-field FNO. This can be due to the fact that: i) the domain decomposition acts as a form of data augmentation, exploiting the translational invariance of the PDE, and more regions yield larger amounts of data, and ii) the output space of the model is simplified, since a function can have high frequencies globally but may only have low frequencies locally. Consistently, we find that the tensor compression in the weights acts as a regularizer and improves performance across the board.

Architectural improvements to the backbone In addition to the ablation performed on our *MG-TFNO*, we also investigate architectural improvements to the FNO backbone; see Section 3.4 for details.
In particular, we find that, while instance normalization decreases performance, layer normalization helps, especially when used in conjunction with a pre-activation. Adding an MLP similarly improves performance; we found that a bottleneck (expansion factor of 0.5) works well in practice, resulting in an absolute improvement of 0.87% in relative L2 error. We found that the ordering of normalization, activation, and weights (including pre-activation) did not have a significant impact on performance. Finally, when not using multi-grid domain decomposition, the inputs are periodic and padding is not necessary. In that case, not padding the input improves performance. We use all these improvements for the backbone of the best version of our *MG-TFNO*, Figure 1, where we show that our improved backbone significantly outperforms the original FNO, while our approach significantly outperforms both with a small fraction of the parameters, opening the door to the application of *MG-TFNO* to high-resolution problems.

## 4.4 Ablation Studies

In this section, we further study the properties of our model through ablation studies. We first look at how TFNO suffers less from overfitting thanks to the low-rank constraints before comparing its performance with

Table 3: Impact of our architectural improvements.

| Method | Layers | L2 test error | H1 test error | # Params | Model CR |
|-----------------------|---|-------|-------|-------------|------|
| FNO Li et al. (2021b) | 4 | 1.34% | 3.78% | 67,142,657 | - |
| FNO Li et al. (2021b) | 6 | 0.90% | 2.59% | 100,705,409 | 0.7× |
| FNO Li et al. (2021b) | 8 | 0.73% | 2.09% | 134,268,161 | 0.5× |
| TFNO (CP) | 4 | 0.47% | 1.20% | 447,105 | 150× |
| TFNO (CP) | 6 | 0.27% | 0.74% | 662,081 | 101× |
| TFNO (CP) | 8 | 0.22% | 0.59% | 877,057 | 77× |
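The sub-block MLP with an expansion factor of 0.5 amounts to a channel-wise two-layer network whose hidden width is half the channel width, so it contracts rather than expands. A minimal sketch (NumPy; the activation and initialization are illustrative assumptions, not the paper's exact choices):

```python
import numpy as np

def bottleneck_mlp(x, w1, w2):
    # Channel-wise two-layer MLP with a contraction ("bottleneck"):
    # hidden width = 0.5 * channels instead of the usual 2-4x expansion.
    h = np.maximum(x @ w1, 0.0)  # ReLU stand-in; the actual activation is an assumption
    return h @ w2

channels = 64
rng = np.random.default_rng(0)
w1 = rng.standard_normal((channels, channels // 2)) * 0.1
w2 = rng.standard_normal((channels // 2, channels)) * 0.1
x = rng.standard_normal((1024, channels))  # flattened spatial points x channels
y = bottleneck_mlp(x, w1, w2)
assert y.shape == x.shape  # shape-preserving, with far fewer weights
```

With 64 channels, the bottleneck uses 2 · 64 · 32 = 4,096 weights per block, versus 32,768 for a conventional 4× expansion, while keeping the block shape-preserving.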
Table 4: **Resolution invariance of *TFNO*.** Since the model is an operator, it is resolution invariant. In particular, we trained our model at resolution 128 × 128, tested it on unseen samples at various resolutions, and show that it generalizes with virtually no loss of performance to higher resolutions unseen during training.

| Method | 128 × 128 L2 | 128 × 128 H1 | 256 × 256 L2 | 256 × 256 H1 | 512 × 512 L2 | 512 × 512 H1 | 1024 × 1024 L2 | 1024 × 1024 H1 |
|------------|-------|-------|-------|-------|-------|-------|-------|-------|
| CP TFNO | 0.3% | 0.87% | 0.3% | 0.93% | 0.3% | 0.93% | 0.3% | 0.93% |
| CP MG-TFNO | 0.49% | 1.2% | 0.49% | 1.3% | 0.49% | 1.5% | 0.49% | 1.6% |

various tensor decompositions. Finally, we perform ablation studies for our multi-grid domain decomposition on Burgers' equation.

## 4.4.1 Resolution Invariance

TFNO is resolution invariant, meaning that it can be trained at one resolution and tested at a different one. To illustrate this, we show zero-shot super-resolution results: we trained our best model on images of resolution 128 × 128 and tested it on unseen samples at higher resolutions (256 × 256, 512 × 512, and 1024 × 1024); see Table 4. As can be seen, our method performs as well on unseen, higher-resolution test samples as it does on the training resolution, confirming the resolution-invariance property of our neural operator.

## 4.4.2 Training On Higher-Resolution With Multi-Grid

One important advantage of our multi-grid domain decomposition is that it enables training much larger models on large inputs by distributing over patches. We demonstrate this by training at a larger resolution (a 512 × 512 discretization) and using the largest FNO and TFNO that fit in memory on a V100 GPU. For the original FNO, this corresponds to a width of 12, first row in Table 5.
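The resolution invariance discussed in Section 4.4.1 comes from the spectral parametrization: the learned weights act on a fixed number of Fourier modes, regardless of how finely the input is sampled. A minimal 1D sketch (NumPy; the single linear spectral layer and mode count are illustrative, not the paper's architecture):

```python
import numpy as np

def spectral_apply(u, weights):
    """Reweight the k lowest Fourier modes of a 1D signal with fixed
    per-mode parameters; the parameter count does not depend on the
    sampling resolution."""
    k = weights.shape[0]
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:k] = u_hat[:k] * weights  # truncate to k modes and reweight
    return np.fft.irfft(out_hat, n=u.shape[0])

rng = np.random.default_rng(0)
weights = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# The same band-limited function sampled at two resolutions
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)
x_lo = np.linspace(0, 1, 128, endpoint=False)
x_hi = np.linspace(0, 1, 512, endpoint=False)
y_lo = spectral_apply(f(x_lo), weights)
y_hi = spectral_apply(f(x_hi), weights)
# Same parameters, two discretizations: the outputs agree on shared points
assert np.allclose(y_hi[::4], y_lo)
```

Because the unnormalized FFT scale cancels between `rfft` and `irfft`, the same eight complex weights produce the same output function at 128 and 512 sample points.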
We then compare its performance with the multi-grid approach, using a neural operator as large as fits into the same V100 GPU, i.e., each width in the table has been optimized to be as large as memory allows. As we can see, our approach allows us to fit a larger model and reaches a much lower relative L2 error.

## 4.4.3 Overfitting And Low-Rank Constraint

Here, we show that lower ranks (higher compression ratios) lead to reduced overfitting. In Figure ??, we show the training and testing H1 errors for our TOP with Tucker decomposition at varying compression ratios (2×, 49×, and 172×). We can see that, while the test error does not vary much, the gap between training and test errors shrinks as we decrease the rank. While being the most flexible, Tucker does not perform as well at higher compression ratios; in those extreme cases, CP and Tensor-Train lead to lower errors.

Table 5: The multi-grid approach saves memory by distributing patches in the domain space, thus reaching a lower relative error.

| Model | Width | Patches | Padding | L2 error |
|----------------|-------|---------|---------|-----|
| FNO | 12 | 0 | 0 | 6.1 |
| MG-FNO | 42 | 4 | 70 | 2.9 |
| MG-FNO | 66 | 4 | 53 | 2.4 |
| MG-FNO | 88 | 16 | 40 | 1.8 |
| Tucker MG-TFNO | 80 | 16 | 46 | 1.3 |

![15_image_0.png](15_image_0.png)

Figure 9: **Train/test curves for a TOP with a CP factorization (a) and with a TT factorization (b).**

## 4.4.4 Tensor-Train And TOP

Our approach is independent of the choice of tensor decomposition. We already showed that Tucker is the most flexible and works well across all ranks. We also showed that, while memory-demanding at high rank, a CP decomposition leads to better performance at low rank. Our method can also be used in conjunction with other decompositions, such as tensor-train.
To illustrate this, we show the convergence behavior of TNO with a Tensor-Train decomposition for a compression ratio of 178 in Figure 9(b). We also compare in Table 6 our *TFNO* with different tensor decompositions.

Table 6: **Relative L2 test error of our *TFNO* for different tensor decompositions.** We empirically found that Tucker works best for small compression ratios; CP excels at large compression ratios (≈ 100×) but becomes computationally heavy for smaller ones; TT tends to be unstable at low compression ratios but preserves good performance at extreme compression ratios (> 500×).

| Method | L2 test error | # Params | Model CR |
|-----------------------|-------|-------|------|
| FNO Li et al. (2021b) | 1.12% | 67 M | 0× |
| TFNO [Tucker] | 0.37% | 28 M | 2.3× |
| TFNO [CP] | 0.46% | 808 K | 83× |
| TFNO [TT] | 1.18% | 117 K | 574× |

Table 7: **Ablation comparing the relative L2 test error of our *MG-TFNO* approach with its parts, *TFNO* and MG-FNO, and the regular FNO, on Navier-Stokes.** CR stands for compression ratio. Tensorization and multi-grid domain decomposition both individually improve performance while enabling space savings. The two techniques combined lead to further improvements, enabling large compression of both input and parameters while outperforming the regular FNO.

| Method | L2 test error | # Params | Model CR | Domain CR |
|------------------------|-------|-------|------|-------|
| FNO (Li et al., 2021b) | 2.54% | 58 M | 0× | 0× |
| TFNO [Tucker] | 1.39% | 41 M | 1.5× | 0× |
| TFNO [CP] | 2.24% | 130 K | 482× | 0× |
| MG-FNO | 1.43% | 58 M | 0× | 1.4× |
| MG-TFNO [Tucker] | 0.85% | 5.5 M | 10× | 1.78× |
| MG-TFNO [Tucker] | 1.89% | 5.5 M | 10× | 7× |

## 4.4.5 Decomposing Domain And Weights: MG-TFNO

Tensorization and multi-grid domain decomposition not only improve performance individually; their advantages compound and lead to a strictly better algorithm that scales well to higher-resolution data by decreasing both the number of parameters in the model and the size of the inputs, thereby improving performance as well as the memory and computational footprint.
Table 7 compares FNO with tensorization alone, multi-grid domain decomposition alone, and our joint approach combining the two, *MG-TFNO*. In all cases, we keep 40 Fourier coefficients for the height and 24 for the width and use an operator width of 64. Consistent with our other experiments, we find that the tensor compression in the weights acts as a regularizer and improves performance across the board. Our results imply that, under full parallelization, the memory footprint of the model's inference can be reduced by 7× and the size of its weights by 10× while also improving performance.

## 4.4.6 Burgers' Equation

![16_image_0.png](16_image_0.png)

Figure 10: **Error on Burgers' equation with T = 0.5 (left) and T = 1 (right) as a function of domain compression ratio using standard domain decomposition without our multi-grid approach.** The radius indicates the size, in physical space, of the padding added to each region.

We test the efficacy of the standard domain decomposition approach by training on two separate Burgers problems: one with a final time T = 0.5 and one with T = 1. As described in Section 3.6, we expect that for T = 1, each region requires more global information, so significantly more padding needs to be used in order to reach the same error. The results of Figure 10 indeed confirm this. The domain compression ratios needed for the approach to reach the performance of the full-field model are higher, indicating the need to incorporate global information. These results motivate our multi-grid domain decomposition approach.
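To make the padding-versus-compression tradeoff concrete, the following sketch splits a field into padded regions and computes the resulting input-domain compression ratio. It is illustrative only: the function names and the periodic padding are our assumptions, not the paper's multi-grid implementation.

```python
import numpy as np

def extract_padded_patches(field, n_per_side, pad):
    """Split a 2D field into n_per_side**2 regions, each carrying `pad`
    extra cells of surrounding context (periodic wrap at the boundary)."""
    h, w = field.shape
    ph, pw = h // n_per_side, w // n_per_side
    wrapped = np.pad(field, pad, mode="wrap")  # periodic boundary
    patches = []
    for i in range(n_per_side):
        for j in range(n_per_side):
            r0, c0 = i * ph, j * pw
            patches.append(wrapped[r0:r0 + ph + 2 * pad,
                                   c0:c0 + pw + 2 * pad])
    return patches

def domain_compression(n, n_per_side, pad):
    # Ratio of full-field size to the size of one padded region
    region = n // n_per_side
    return (n * n) / (region + 2 * pad) ** 2

field = np.arange(128 * 128, dtype=float).reshape(128, 128)
patches = extract_padded_patches(field, n_per_side=4, pad=32)
assert len(patches) == 16 and patches[0].shape == (96, 96)

for pad in (0, 8, 16, 32):
    print(f"pad={pad:2d}: {domain_compression(128, 4, pad):.2f}x compression")
```

With 4 × 4 regions on a 128 × 128 field, a padding of 32 cells yields roughly a 1.78× input compression, while a padding of 8 yields about 7.1×; more padding provides more global context but less compression, matching the trend in Figure 10.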
## 5 Conclusion

In this work, we introduced i) a novel tensor operator (*TFNO*) as well as a multi-grid domain decomposition approach, which together form *MG-TFNO*, ii) an operator model that outperforms the FNO with a fraction of the parameters and memory requirements, and iii) architectural improvements to the FNO. Our method scales better, generalizes better, and requires fewer training samples to reach the same performance, while the multi-grid domain decomposition enables parallelism over huge inputs. This paves the way to applications on very high-resolution data; in future work, we plan to deploy *MG-TFNO* to large-scale weather forecasts, for which existing deep learning models are prohibitively expensive.

## References

Jonas Adler and Ozan Öktem. Solving ill-posed inverse problems using iterative deep neural networks. *Inverse Problems*, 2017.

Nathan Albin and Oscar P. Bruno. A spectral FC solver for the compressible Navier–Stokes equations in general domains I: Explicit time-stepping. *Journal of Computational Physics*, 230(16):6248–6270, 2011.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Saakaar Bhatnagar, Yaser Afshar, Shaowu Pan, Karthik Duraisamy, and Shailendra Kaushik. Prediction of aerodynamic flow fields using convolutional neural networks. *Computational Mechanics*, pp. 1–21, 2019.

Kaushik Bhattacharya, Bamdad Hosseini, Nikola B Kovachki, and Andrew M Stuart. Model reduction and neural networks for parametric PDEs. *arXiv preprint arXiv:2005.03180*, 2020.

Mackenzie L Blanusa, Carla J López-Zurita, and Stephan Rasp. The role of internal variability in global climate projections of extreme events. *arXiv preprint arXiv:2208.08275*, 2022.

Adrian Bulat, Jean Kossaifi, Georgios Tzimiropoulos, and Maja Pantic. Incremental multi-domain learning with network latent tensor factorization. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp.
10470–10477, 2020a.

Adrian Bulat, Jean Kossaifi, Georgios Tzimiropoulos, and Maja Pantic. Toward fast and accurate human pose estimation via soft-gated skip connections. In *2020 15th IEEE International Conference on Automatic Face & Gesture Recognition*, 2020b.

Tony F. Chan and Tarek P. Mathew. Domain decomposition algorithms. *Acta Numerica*, 3:61–143, 1994.

Gary J. Chandler and Rich R. Kerswell. Invariant recurrent solutions embedded in a turbulent two-dimensional Kolmogorov flow. *Journal of Fluid Mechanics*, 722:554–595, 2013.

Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. Multi-head attention: Collaborate instead of concatenate. *arXiv preprint arXiv:2006.16362*, 2020.

R. Courant, K. Friedrichs, and H. Lewy. Über die partiellen Differenzengleichungen der mathematischen Physik. *Mathematische Annalen*, 100(1):32–74, 1928.

Wojciech M Czarnecki, Simon Osindero, Max Jaderberg, Grzegorz Swirszcz, and Razvan Pascanu. Sobolev training for neural networks. *Advances in Neural Information Processing Systems*, 30, 2017.

Tri Dao, Beidi Chen, Nimit S Sohoni, Arjun Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, and Christopher Ré. Monarch: Expressive structured matrices for efficient and accurate training. In *International Conference on Machine Learning*, pp. 4690–4721. PMLR, 2022.

Maarten De Hoop, Daniel Zhengyu Huang, Elizabeth Qian, and Andrew M Stuart. The cost-accuracy trade-off in operator learning with neural networks. *arXiv preprint arXiv:2203.13181*, 2022.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Soheil Esmaeilzadeh, Kamyar Azizzadenesheli, Karthik Kashinath, Mustafa Mustafa, Hamdi A Tchelepi, Philip Marcus, Mr Prabhat, Anima Anandkumar, et al.
Meshfreeflownet: A physics-constrained deep continuous space-time super-resolution framework. In *SC20: International Conference for High Performance Computing, Networking, Storage and Analysis*, pp. 1–15. IEEE, 2020. Lawrence C. Evans. *Partial differential equations*. American Mathematical Society, 2010. John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive fourier neural operators: Efficient token mixers for transformers. *arXiv preprint arXiv:2111.13587*, 2021. Xiaoxiao Guo, Wei Li, and Francesco Iorio. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. Gaurav Gupta, Xiongye Xiao, and Paul Bogdan. Multiwavelet-based operator learning for differential equations. *Advances in Neural Information Processing Systems*, 34:24048–24062, 2021. Julia Gusak, Maksym Kholiavchenko, Evgeny Ponomarev, Larisa Markeeva, Philip Blagoveschensky, Andrzej Cichocki, and Ivan Oseledets. Automated multi-stage compression of neural networks. Oct 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14, pp. 630–645. Springer, 2016. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. pmlr, 2015. Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar, et al. Spectral learning on matrices and tensors. Found. and Trends® *in Mach. Learn.*, 12(5-6):393–536, 2019. Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. 2016. Tamara G Kolda and Brett W Bader. 
Tensor decompositions and applications. *SIAM Rev.*, 51(3):455–500, 2009. Jean Kossaifi. Tensorly-torch. https://github.com/tensorly/torch, 2021. Jean Kossaifi, Yannis Panagakis, Anima Anandkumar, and Maja Pantic. Tensorly: Tensor learning in python. *Journal of Machine Learning Research (JMLR)*, 20(26), 2019. Jean Kossaifi, Antoine Toisoul, Adrian Bulat, Yannis Panagakis, Timothy M Hospedales, and Maja Pantic. Factorized higher-order CNNs with an application to spatio-temporal emotion estimation. pp. 6060–6069, 2020. Nikola Kovachki, Samuel Lanthaler, and Siddhartha Mishra. On universal approximation and error bounds for fourier neural operators. *Journal of Machine Learning Research*, 22(290):1–76, 2021a. Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. *arXiv preprint* arXiv:2108.08481, 2021b. Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan V. Oseledets, and Victor S. Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. 2015. Martin Leutbecher and Tim N Palmer. Ensemble forecasting. *Journal of computational physics*, 227(7): 3515–3539, 2008. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations. *arXiv* preprint arXiv:2003.03485, 2020a. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, and Anima Anandkumar. Multipole graph neural operator for parametric partial differential equations. Advances in Neural Information Processing Systems, 33:6755–6766, 2020b. Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Markov neural operators for learning chaotic systems. arXiv preprint arXiv:2106.06898, 2021a. 
Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. In *International Conference on Learning Representations*, 2021b.

Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. *arXiv preprint arXiv:2111.03794*, 2021c.

Burigede Liu, Nikola Kovachki, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, Andrew M Stuart, and Kaushik Bhattacharya. A learning-based multiscale method and its application to inelastic impact problems. *Journal of the Mechanics and Physics of Solids*, 158:104668, 2022.

Lu Lu, Pengzhan Jin, and George Em Karniadakis. DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. *arXiv preprint arXiv:1910.03193*, 2019.

S. F. McCormick. Multigrid methods for variational problems: General theory for the V-cycle. *SIAM Journal on Numerical Analysis*, 22(4):634–643, 1985.

Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov. Tensorizing neural networks. pp. 442–450, 2015.

I. V. Oseledets. Tensor-train decomposition. *SIAM J. Sci. Comput.*, 33(5):2295–2317, September 2011.

Yannis Panagakis, Jean Kossaifi, Grigorios G. Chrysos, James Oldfield, Mihalis A. Nicolaou, Anima Anandkumar, and Stefanos Zafeiriou. Tensor methods in computer vision and deep learning. *Proceedings of the IEEE*, 109(5):863–890, 2021. doi: 10.1109/JPROC.2021.3074329.

Christos Papadopoulos, Yannis Panagakis, Manolis Koubarakis, and Mihalis Nicolaou. Efficient learning of multiple NLP tasks via collective weight factorization on BERT. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pp. 882–890, 2022.
Evangelos E Papalexakis, Christos Faloutsos, and Nicholas D Sidiropoulos. Tensors for data mining and data fusion: Models, applications, and scalable algorithms. *ACM Trans. Intell. Syst. and Technol. (TIST)*, 8 (2):1–44, 2016. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017. Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators. *arXiv preprint* arXiv:2202.11214, 2022. Catherine E. Powell, Gabriel Lord, and Tony Shardlow. *An Introduction to Computational Stochastic PDEs*. Texts in Applied Mathematics. Cambridge University Press, United Kingdom, 1 edition, August 2014. ISBN 9780521728522. Md Ashiqur Rahman, Manuel A Florez, Anima Anandkumar, Zachary E Ross, and Kamyar Azizzadenesheli. Generative adversarial neural operators. *arXiv preprint arXiv:2205.03017*, 2022a. Md Ashiqur Rahman, Zachary E Ross, and Kamyar Azizzadenesheli. U-no: U-shaped neural operators. arXiv preprint arXiv:2204.11127, 2022b. Nicholas D Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E Papalexakis, and Christos Faloutsos. Tensor decomposition for signal processing and machine learning. Transactions Signal Processing, 65(13):3551–3582, 2017. Julia Slingo and Tim Palmer. Uncertainty in weather and climate prediction. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369(1956):4751–4767, 2011. Roger Temam. *Infinite-dimensional dynamical systems in mechanics and physics*. Applied mathematical sciences. Springer-Verlag, New York, 1988. Alasdair Tran, Alexander Mathews, Lexing Xie, and Cheng Soon Ong. Factorized fourier neural operators. 
In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=tmIiMPl4IPa.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. *arXiv preprint arXiv:1607.08022*, 2016.

Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson. U-FNO—an enhanced Fourier neural operator-based deep-learning model for multiphase flow. *Advances in Water Resources*, 163:104180, 2022.

Yan Yang, Angela F Gao, Jorge C Castellanos, Zachary E Ross, Kamyar Azizzadenesheli, and Robert W Clayton. Seismic wave propagation and inversion with neural operators. *The Seismic Record*, 1(3):126–134, 2021.

Yan Yang, Angela F Gao, Jorge C Castellanos, Zachary E Ross, Kamyar Azizzadenesheli, and Robert W Clayton. Accelerated full seismic waveform modeling and inversion with U-shaped neural operators. *arXiv preprint arXiv:2209.11955*, 2022.

Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. *Journal of Computational Physics*, 2018. ISSN 0021-9991.
Review 1: Summary: The paper proposes a Multi-Grid Tensorized Fourier Neural Operator (MG-TFNO) to address the challenge of learning high-resolution PDEs with memory and data constraints. The key ideas are (1) multi-grid decomposition and (2) tensor decompositions; exploiting local structure and applying low-rank regularization, which together allow compression and regularization. The paper evaluates the proposed method and compares against FNO on Navier-Stokes equations, and achieves better performance with big reductions in domain size, although the training data resolution is quite low. Strengths and Weaknesses: **Strengths** As dataset sizes continue to grow in deep learning, GPU memory bottlenecks are becoming increasingly problematic. Methods like Fourier Neural Operators (FNOs) that track many frequency modes have even worse memory scaling. This paper proposes using tensor decomposition techniques to find lower-rank representations of the model weights. This can help reduce memory requirements and improve scaling for FNO-based methods on large datasets. **Weaknesses** The experiments are the major concern. The training data resolution of 128x128 is quite low, and the blurriness is quite obvious in the figures. Given that the major purported benefit of the proposed approach is in applications to higher-resolution data, this choice of experiments is somewhat surprising. Similarly, it would be good to actually visualize the higher-resolution evaluations to understand what kind of predictions are being made and what the underlying data actually looks like at higher resolution. It's very likely that most of the smaller details are missing in the higher-resolution predictions, which are not showing up in the averaged reported metrics. Moreover, as the training loss function has changed too, it makes it hard to compare the approach against many other existing results in the field.
While reducing the parameter count is nice to see, it's not the be-all and end-all for real-world applications, as what matters is the overall FLOPs of the models. It would be good to understand the trade-offs involved here and also report the wall-clock time of these different methods. In general, the claims about FNOs allowing evaluation "at any point in the domain" are somewhat misleading. FNOs require uniform grids and can't work with arbitrary points in the grid. Similarly, not comparing against alternative standard deep learning methods like U-Nets might be interesting too. General decomposition ideas should apply to those methods as well? ConvNets in general do work with "image" sizes they haven't been trained on and at best might require some standard interpolation operations. Requested Changes: As in the weaknesses above, more experimental evidence is necessary to support the motivations in the abstract/introduction. At a minimum, some scaling experiments should be done, as it's unclear what applications with modern hardware require the number of parameters to be less than 10M, while a lot of the experiments use less than 1M params. Broader Impact Concerns: NA ================================================== Review 2: Summary: The authors propose a tensorized Fourier neural operator (FNO) for reducing the memory complexity of an FNO. In addition, they propose the use of a multigrid domain compression technique that enables scaling to large spatial resolutions. They also propose improvements to the FNO backbone and perform ablation studies for their proposed improvements. Strengths and Weaknesses: Strengths: 1. The use of tensorization to minimize the memory complexity, and implicitly regularizing the FNO, is smart and a welcome addition. 2. Multigrid decomposition of inputs also enables faster exchange of information at different scales. 3. The paper is well written and the experiments are largely complete. Weaknesses: 1.
The comparisons between methods are a bit selective. For example - the authors should provide wall-time comparisons between all the different methods. They should also add a curve for FFNO to Figure 9. The number of parameters is not an acceptable substitute for wall-time costs for training different methods. Also - in all tables of the paper - it is unclear whether the numbers are statistically significant. Requested Changes: Changes requested: 1. Please add a schematic for a 2D case to explain the "half corners" concept in Section 3.3. 2. Please mention, also in 3.3, that the discretization independence only applies to uniform grids with periodic boundary conditions. 3. In Section 2, please add a literature review about graph-based methods, which can also handle various discretizations. The authors say that the input to neural operators can be in "any discretization, mesh, resolution, or basis" - this is false. Can their trained neural operator be used for overset meshes or immersed boundary meshes? Can they be used with data generated from unstructured adaptive mesh refinement? 4. Please mention, in Section 3 before 3.2, that the N points in the training dataset refer to N PDE solves. 5. Please add equations for the Separable Fourier Convolution subsection on Page 9 - they mention an "MLP introduced above" - but this is not clear. 6. Please mention how much time it took to generate the training data - based on that number - how many times must a surrogate FNO be deployed before the original training data cost is amortized. This must be mentioned explicitly. 7. Please add figures for convergence of various compared networks with respect to wall time as well as epochs. 8. Please add FFNO results to Figures 8 and 9. Broader Impact Concerns: None ================================================== Review 3: Summary: This study offers a modification of the Fourier Neural Operator (FNO) that consists of three parts: 1.
The principal construction of the layer is different, with skip connections, convolutions, and spectral convolutions reordered in a certain way. 2. In the Fourier space, the linear transformation is given by a tensor written in a low-rank format. 3. Input features on a structured grid are coarsened in a way that resembles multigrid and domain decomposition. Target is also predicted on parts of the domain. The proposed modifications are carefully tested in multiple ablation studies. PDEs used are Burgers and Navier-Stokes equations in 2D. Strengths and Weaknesses: # Strengths 1. In general, the explanations are sufficient for understanding. 2. Motivation is clear. 3. The ablation study is comprehensive. # Weaknesses 1. Limited benchmarks. 2. Omissions and unclear parts. 3. Inaccuracies and misleading part. We provide more details in the next section of the review. Requested Changes: **1. Limited benchmarks.** The most questionable part of the study is the choice of PDEs: Burgers and Navier-Stokes. I see a few problems: 1. Both of these equations already appeared in the original study on FNO. They are not novel and not challenging. Both of them belong to the realm of fluid dynamics, which is well-studied from a computational perspective. 2. As empirically demonstrated in https://arxiv.org/abs/2203.13181, FNO shows no overfitting on the Navier-Stokes equation and generally works quite well for this problem. 3. All considered data (features and targets) is smooth. Given that, it is not clear whether the conclusions of the authors will stand if one applies the proposed architecture to more challenging problems. **2. Omissions and unclear parts.** There are several unclear parts that I kindly ask authors to clarify: 1. Part "3.6 Multi-Grid Domain Decomposition" is hard to follow. Unfortunately, Figure 5a does not clarify the description given in this part. In the figure, we see smoothing supplemented with coarsening, whereas in the text, we read about subsampling. 
Besides that, it is unclear from the text how the coarsening works. Can the authors provide a clear picture where coarsening, levels, and domain intersection are shown? 2. Schemes in Figure 3 are either wrong or contain non-standard terminology. For example, panel (a) shows classical FNO architecture with one block representing a skip connection. In the original paper https://arxiv.org/abs/2010.08895, classical FNO contains convolution with kernel size 1 in place of this skip connection. The authors may mean precisely the same with their "skip connection" block. "Skip connection" has a definitive meaning in the deep learning community. I suggest the authors clarify this part or better rename the block. Besides, I would like to point out that the article https://arxiv.org/abs/2301.11509v1 contains related modifications of the FNO backbone. I suggest the authors discuss the mentioned paper. 3. In Section 4.2. authors briefly mention they use $H_1$ Sobolev norm to train modified FNO. This is a crucial part since neural operators predict targets on separate blocks, and to stitch these local fields to a global one, the model is forced to match derivatives on the interfaces. The part needs further discussion and clarification. The main problem is the solution can fail to belong to $H_1$. In particular, Burger's equation considered by authors can fail to have a solution in a strong sense. This is not a problem per se, but authors need to clarify how they are going to use $H_1$ loss in this context. What if the solution does not have a derivative, e.g., it is an advection equation with discontinuous data or an elliptic equation with a piecewise continuous diffusion coefficient (this leads to a piecewise continuous solution)? Is it possible to obtain meaningful results with the proposed domain decomposition approach? **3. Inaccuracies and misleading parts.** These problems are relatively minor, but they decrease the overall quality of the manuscript: 1. 
The quality of all figures is poor. Is it possible for the authors to provide vector graphics?
2. Page 2: the sentence with "memory complexity" is poorly phrased.
3. Page 3: the claim that the parametrization reduces the complexity of the optimization landscape is unsubstantiated.
4. Page 4: the part "The input functions to neural operators can be presented in any discretization, mesh, resolution, or basis. The output functions can be evaluated at any point in the domain." is misleading. There are limitations in all neural operators. The FNO considered in the paper requires a uniform structured grid and cannot process functions on arbitrary grids or be evaluated at any point of the domain. At best, the extension https://arxiv.org/abs/2207.05209 can do that when a good mapping to a rectangular logical domain is possible to supply or learn.
5. Page 4: the claim on a physics-informed operator is misleading. If we open the referenced paper https://arxiv.org/abs/2111.03794, we can find that the physics-informed neural operator failed to learn the Navier-Stokes equation without the exact solutions provided by classical solvers. It can indeed learn on Burgers and elliptic equations, but the paper itself is evidence that PDE data alone may not be enough.
6. Page 10: the domain decomposition that the authors describe is given in a very narrow sense of MPI-like message passing. This term has a much broader meaning in scientific computing. For example, it is easy to recall the application to the elliptic equation https://www.sciencedirect.com/science/article/abs/pii/B978012407475050022X where it serves quite a different purpose. Similarly, the whole example the authors provide does not seem to have much in common with domain decomposition. The locality of the equation is evident, for example, with a mundane finite-difference discretization. Why bother splitting on multiple intervals and writing down integrals?
7.
Page 10: Part "However, the major advancement made by machine learning-based operator methods for PDEs is that a model can approximate the solution, in one shot, for very large times i.e. $h > 1$." Is this an advantage at all? The scheme that the authors consider does not allow for event tracking, since it is not possible to trace the changes between these large time intervals. Suppose I want to know whether or not the wind creates some critical pressure on the blade of a turbine. How should I use a scheme that performs enormous time steps? Can the authors provide specific problems where this property helps?
8. Page 14: reference to an undefined table
9. Page 15: reference to an undefined table
10. Page 15: reference to an undefined figure
11. Page 15: the TOP abbreviation is not defined

Broader Impact Concerns: NA

==================================================

Review 4:
Summary: This paper proposes MG-TFNO, with several improvements over FNO, including domain decomposition, parameter tensorization, and architecture search. Specifically, in domain decomposition, the authors propose to apply the FNO on multiscale padded patches. To achieve parameter efficiency, they tensorize the FNO parameters as the multiplication of core and factor matrices. An architecture search is also explored to improve performance. Experimentally, MG-TFNO surpasses FNO and F-FNO in both parameter efficiency and performance and beats FNO in data efficiency.

Strengths and Weaknesses:
## Strengths
1. The proposed MG-TFNO can reduce the parameters of FNO effectively without accuracy loss.
2. In multi-grid domain decomposition, the padded-patch design is interesting, as it can enhance the local prediction.
3. The parameter tensorization is reasonable and can also facilitate model training.
4. Code is provided to ensure reproducibility.

## Weaknesses
1. The motivation of scaling FNO to large resolutions with data and memory efficiency is not well supported.
(1) The authors conflate parameter efficiency with computation efficiency. I would like to point out that parameter efficiency is not equal to memory or computation efficiency, especially GPU memory. Besides, splitting data into small patches may also not benefit computation efficiency, since the multiscale architecture has to be introduced to recover the global information. To support this motivation, comparisons to FNO and F-FNO in GPU memory and running time are required.
(2) Is parameter efficiency necessary for FNO? As the authors describe in Table 7, the model size of FNO is 67 MB; I do not consider this a large model. To strengthen this motivation, they need to plot how the parameter count of FNO changes as the input resolution is enlarged. Besides, why does TFNO beat F-FNO? Some explanation is expected.
(3) The multi-grid domain decomposition damages the model's scaling capability. As presented in Table 4, adding "MG-" enlarges the prediction error.
2. The experiments are insufficient.
(1) The main experiments (Table 2 and Table 5) on Navier-Stokes are at resolutions of 128$\times$128 and 512$\times$512 (I guess, since they do not clarify the dataset resolution in Table 2). More experiments on larger inputs are expected, such as 1024$\times$1024.
(2) More baselines are expected, such as U-NO [1] and LSM [2], which also employ a multiscale design. Besides, F-FNO also explores the architecture design. Are there any new findings beyond F-FNO?
[1] U-NO: U-shaped Neural Operators, TMLR 2023
[2] Solving High-Dimensional PDEs with Latent Spectral Models, ICML 2023
(3) The performance of TFNO is not persuasive. As shown in Figure 9, FNO surpasses TFNO by a large margin.
3. This paper has some serious writing issues, making it hard to read.
- In Figure 3(d), the left skip connection is linked to the GeLu block. Is this a mistake or a special design? Also, "GeLu" should be "GeLU".
- In Figure 3(a), I think the official FNO also contains a convolution branch.
- In Section 3.6, it would be better if the authors formalized the Multi-Grid part as equations.
- The captions of Figures 6 and 7 should be placed below the figures.
- The data resolution of Table 2 is not clearly described.
- Some hyperlinks to figures and tables are not displayed well.
4. There are some unsupported claims.
- For the "slightly enabling parallelism" claim in the abstract, some metric measuring the parallelism of the model is needed.
- For "grow linearly with the size of the problem" in the introduction, a curve of the model parameter count w.r.t. different input resolutions should be plotted.

Requested Changes: As stated above, the authors should provide more evidence to support their motivation, experiments, and claims. A major revision of the writing is also required. All the details are provided in the section above.

Broader Impact Concerns: This paper only focuses on technical designs, so there are no ethical risks. Since the proposed method improves parameter efficiency, it can benefit mobile applications.

==================================================

Metareview:
Recommendation: Reject
Comment: The paper targets a neural operator that can simulate high-resolution PDEs. While the goal is attractive, the presented solution, i.e., the Tensorized FNO, is not well supported by theoretical or empirical evidence. Reviewers raised many concerns about the weak evaluation, the poor presentation, the limited novelty, and the improper positioning of this paper in the literature. Since no response was provided, these concerns remained unresolved, and the paper is not acceptable.

==================================================
# IDEAL: Interpretable-By-Design Algorithms For Learning From Foundation Feature Spaces

Anonymous authors
Paper under double-blind review

## Abstract

Many of the existing transfer learning methods rely on parametric tuning and lack interpretation of decision making. However, the advance of foundation models (FMs) makes it possible to avoid such parametric tuning by taking advantage of pretrained feature spaces. In this study, we define a framework called IDEAL (Interpretable-by-design DEep learning ALgorithms), which tackles the problem of interpretable transfer learning by recasting the standard supervised classification problem as a function of similarity to a set of prototypes derived from the training data. This framework generalises previously known prototypical approaches such as ProtoPNet, xDNN, and DNC. Using the IDEAL approach, we can decompose the overall problem into two inherently connected stages: A) feature extraction (FE), which maps the raw features of the real-world data into a latent space, and B) identification of representative prototypes and decision making based on similarity and association between the query and the prototypes. This addresses the issue of interpretability (stage B) while retaining the benefits of the tremendous achievements offered by deep learning (DL) models (e.g., vision transformers, ViT), which are often pre-trained on huge datasets such as IG-3.6B + ImageNet-1K or LVD-142M (stage A). On a range of datasets (CIFAR-10, CIFAR-100, CalTech101, STL-10, Oxford-IIIT Pet, EuroSAT), we demonstrate, through an extensive set of experiments, how the choice of the latent space, prototype selection, and finetuning of the latent space affect accuracy and generalisation of the models in transfer learning scenarios for different backbones. Building upon this knowledge, we demonstrate that the proposed framework helps achieve an advantage over state-of-the-art baselines in class-incremental learning.
Finally, we analyse the interpretations provided by the proposed IDEAL framework, as well as the impact of confounding in transfer learning, demonstrating that the proposed approach **without** finetuning improves the performance on confounded data over finetuned counterparts. The key findings can be summarized as follows: (1) the setting allows interpretability through prototypes, while also mitigating the issue of confounding bias, (2) the lack of finetuning helps circumvent the issue of catastrophic forgetting, allowing efficient class-incremental transfer learning, and (3) ViT architectures narrow the gap between finetuned and non-finetuned models, allowing for transfer learning in a fraction of the time **without** finetuning of the feature space on a target dataset with iterative supervised methods.

## 1 Background

Deep-learning (DL) models can be formulated as deeply embedded functions of functions (Angelov & Gu (2019), Rosenblatt et al. (1962)), optimised through backpropagation (Rumelhart et al. (1986)):

$$\hat{y}(\mathbf{x})=f_{n}(\ldots(f_{1}(\mathbf{x};\theta_{1}))\ldots;\theta_{n}),\tag{1}$$

where $f_{n}(\ldots(f_{1}(\mathbf{x};\theta_{1}))\ldots;\theta_{n})$ is a layered function of the input $\mathbf{x}$, which has a generic enough, fixed parameterisation $\theta_{\cdot}$ to predict desirable outputs $\hat{y}$. However, this problem statement has the following limitations:

![1_image_0.png](1_image_0.png)

Figure 1: Difference between (a) a standard deep-learning model, and (b) the proposed prototype-based approach, IDEAL. Dataset credit: CIFAR-10 (Krizhevsky & Hinton (2009))

(1) transfer learning typically requires finetuning (Kornblith et al. (2019)) using error back-propagation (EBP) on the target problem and data of interest
(2) such a formulation does not depend upon the training data, so the contribution of these samples towards the output $\hat{y}$ is unclear, which hinders interpretability. For interpretable architectures, such as ProtoPNet (Chen et al. (2019)), finetuning leads to confounding interpretations (Bontempelli et al.
(2022))
(3) finally, for lifelong learning problems, such finetuning creates obstacles such as catastrophic forgetting (Parisi et al. (2019))

The emergence of foundation models, aimed at better generalisation and facilitating transfer learning, allows for mitigating point (1). Studies such as DINOv2 (Oquab et al. (2023)) demonstrate competitive results for ViT-based (Dosovitskiy et al. (2020)) architectures on transfer learning tasks on a range of datasets with linear finetuning. However, these works address neither interpretability nor lifelong learning. In this work, to jointly address all three above-mentioned limitations, we propose a generic framework for prototypical transfer learning called IDEAL (Interpretable-by-design DEep learning ALgorithms). Through this framework, we extensively study the benefits and trade-offs of prototypical transfer learning without finetuning across different architectures and tasks. Our solution for transfer learning, which generalises xDNN (Angelov & Soares (2020)) and ProtoPNet (Chen et al. (2019)), can be summarised in the following form:

$$\hat{y}=g(\mathbf{x};\boldsymbol{\theta},\mathbb{P}),\tag{2}$$

where $\mathbb{P}$ is a set of prototypes. We consider a more restricted version of the function $g(\cdot)$:

$$\hat{y}=g(\mathbf{x};\theta_{\{d,h\}},\mathbb{X})=h(d(\mathbf{x},\mathbf{p};\theta_{d})|_{\mathbf{p}\in\mathbb{P}};\theta_{h}),\tag{3}$$

where $d$ is some form of (dis)similarity function (which can include DL feature extractors), and $\theta_{d}$ and $\theta_{h}$ are parameterisations of the functions $d$ and $h$, respectively. The methods from this research draw from cognitive science and the way humans learn, namely using examples of previous observations and experiences (Zeithamova et al. (2008)). Prototype-based models have long been used in different learning systems: k nearest neighbours (Radovanovic et al.
(2010)); decision trees (Nauta et al. (2021)); rule-based systems (Angelov & Zhou (2008)); case-based reasoning (Kim et al. (2014)); sparse kernel machines (Tipping (1999)). The advantages of prototype-based models have been advocated, for example, in Bien & Tibshirani (2011). The first prototypical architecture, learning both distances and prototypes, was proposed in Snell et al. (2017) and more recently developed in Chen et al. (2019); Angelov & Soares (2020) and Wang et al. (2023). In this paper, we demonstrate the efficiency of the proposed framework: it is compact, easy to interpret by humans, fast to train and adapt in a lifelong learning setting, and benefits from a latent data space learnt from a generic dataset transferred to a different, more specific domain. Specifically, we make the following contributions:

- we define the framework called IDEAL, which transforms a given non-interpretable latent space into an interpretable one based on prototypes derived from the training set without finetuning, and quantify the performance gap between such a model, its finetuned counterpart, and standard DL architectures.
- we demonstrate the benefits of the proposed framework on transfer and lifelong learning scenarios. Namely, in a fraction of the training time and **without finetuning** of latent features, the proposed models achieve performance competitive with standard DL techniques.
- we demonstrate the model's interpretability on classification and lifelong learning tasks, and show that **without** finetuning, the resulting models achieve **better** performance on confounded CUB data compared to finetuned counterparts (Wah et al. (2011); Bontempelli et al. (2022))

We apply this generic IDEAL framework to a set of standard DL architectures such as ViT (Dosovitskiy et al. (2020); Singh et al. (2022)), VGG (Simonyan & Zisserman (2014)), ResNet (He et al.
(2016)) and xDNN (Angelov & Soares (2020)) and evaluate the methodology on a range of well-known datasets such as CIFAR-10, CIFAR-100, CalTech101, EuroSAT, Oxford-IIIT Pet, and STL-10.

## 2 Related Work

**Explainability and interpretability** The ever more complicated DL models (Krizhevsky et al. (2012); Dosovitskiy et al. (2020)) do not keep pace with the demands for human-understandable interpretability (Rudin (2019)). Interpretability of deep neural networks is especially important in a number of applications: automotive (Kim & Canny (2017)), medical (Ahmad et al. (2018)), and Earth observation (Zhang et al. (2022)), alongside others. Demand for such models is necessitated by the pursuit of safety (Wei et al. (2022)) as well as by ethical concerns (Peters (2022)). Some of the pioneering approaches to explaining deep neural networks involve *post hoc* methods; these include saliency models such as the saliency map visualisation method (Simonyan et al. (2014)) as well as Grad-CAM (Selvaraju et al. (2017)). However, saliency-based explanations may be misleading and not represent the causal relationship between the inputs and outputs (Atrey et al. (2019)), representing instead the biases of the model (Adebayo et al. (2018)). An arguably better approach is to construct interpretable-by-design (*ante hoc*) models (Rudin (2019)). These models could use different principles, such as (1) interpretable-by-design architectures (Böhle et al. (2022)), which are designed to provide interpretations at every step of the architecture, as well as (2) prototype-based models, which perform decision making as a function of (dis)similarity to existing prototypes (Angelov & Soares (2020)). One of the limitations of the prototype-based methods is that they are often still based on non-interpretable similarity metrics. This can be considered an orthogonal open problem, which can be addressed by providing interpretable-by-design DL architectures (Böhle et al. (2022)).
**Symbolic and sparse learning machines** The idea of prototype-based machine learning is closely related to symbolic methods (Newell et al. (1959)), and draws upon case-based reasoning (Kim et al. (2014)) and sparse learning machines (Poggio & Girosi (1998)), which are designed to learn a linear (with respect to parameters) model that is (in general, nonlinearly) dependent on a subset of training data samples. At the centre of many such methods is the kernel trick (Schölkopf et al. (2001)), which involves mapping training and inference data into a space with a different inner product within a reproducing kernel Hilbert space (Aronszajn (1950)). Such models include support vector machines (SVMs) for classification (Boser et al. (1992)) and support vector regression (SVR) models (Smola & Schölkopf (2004)) for regression, as well as relevance vector machines (RVMs), which have demonstrated improvements in sparsity (Tipping (2001)). Prototype-based models (Snell et al. (2017)) proposed to use a single prototype per class in a few-shot supervised learning scenario. Another study by Li et al. (2018) suggested prototype-based learning for interpretable case-based reasoning. Building upon it, Chen et al. (2019) developed the ProtoPNet model, which classifies an image by dissecting it into a number of patches, which are then compared to prototypes for decision making using end-to-end supervised training. A more recent model, xDNN (Angelov & Soares (2020)), selects one or multiple prototypes per class through a non-iterative online procedure which uses data density. In contrast to the proposed setting, it uses finetuning on a downstream dataset and only uses weak backbone models such as VGG-16. Versions of xDNN also define prototypes at the level of segments (Soares et al. (2021)) and image pixels (Zhang et al. (2022)). The concept of xDNN was used in the end-to-end prototype-based learning method DNC (Wang et al. (2023)).
In contrast to xDNN and DNC, we consider the **lifelong learning** scenario and investigate the properties of models trained on generic datasets and **not finetuned** on the target data. The closest works to this study are the prototype-based models ProtoPNet (Chen et al. (2019)), DNC (Wang et al. (2023)), and xDNN (Angelov & Soares (2020)). In fact, the proposed framework generalises these methods, as shown in Section 3.3. These works, however, are focused on end-to-end training and are not motivated by the challenge of transfer learning.

**Large deep-learning classifiers** In contrast to DNC (Wang et al. (2023)) and ProtoPNet (Chen et al. (2019)), the proposed framework goes beyond the end-to-end learning concept. Instead, it takes advantage of the feature space of large classifiers such as ResNet (He et al. (2016)), VGG (Simonyan & Zisserman (2014)), and SWAG-ViT (Singh et al. (2022)), and shows that with carefully selected prototypes one can achieve, on a number of datasets, a performance comparable to end-to-end trained models in offline and online (lifelong) learning scenarios, with or even **without finetuning and end-to-end learning**; the resulting models are thus very fast and computationally efficient, yet interpretable.

**Continual learning** Continual learning models solve a number of related problems (van de Ven et al. (2022)). *Task-incremental learning* addresses the problem of incrementally learning known tasks, with the intended task explicitly input into the algorithm (Ruvolo & Eaton (2013); Li & Hoiem (2017); Kirkpatrick et al. (2017)). *Domain-incremental learning* (Wang et al. (2022a); Lamers et al. (2023)) addresses the problem of learning when the domain is changing and the algorithm is not informed about these changes. This includes such issues as *concept drift*, when the input data distribution is non-stationary (Widmer & Kubat (1996)). *Class-incremental learning* (Yan et al. (2021); Wang et al. (2022b)) is the problem of an ever-expanding number of classes of data.
In this paper, we only focus on this last problem. However, one can see how the prototype-based approaches could help solve the other two problems by circumventing catastrophic forgetting (French (1999)) through incremental updates of the prototypes (Baruah & Angelov (2012)).

**Clustering** Breaking the iterative nature of end-to-end learning is critically important for enabling continual learning; within the proposed concept, this is achieved by employing clustering to determine the prototypes. Therefore, we use both online (ELM (Baruah & Angelov (2012)), which is an online version of mean-shift (Comaniciu & Meer (2002))) and offline (k-means (MacQueen et al. (1967))) methods. Although there are a number of other online clustering methods, e.g., the stochastic Chinese restaurant process Bayesian nonparametric approach (Aldous et al. (1983)), they usually require a significant amount of time to run, and therefore we did not consider them.

## 3 Methodology

## 3.1 Problem Statement

Two different definitions of the problem statement are considered: offline and online (lifelong) learning.

**Offline learning** Consider the following optimisation problem:

$$\operatorname*{arg\,min}_{\mathbb{P}=\mathbb{P}(\mathbb{X}),\,\boldsymbol{\theta}_{\{d,h\}}}\sum_{(\mathbf{x},y)\in(\mathbb{X},\mathbb{Y})}l(h(d(\mathbf{x},\mathbf{p};\boldsymbol{\theta}_{d})|_{\mathbf{p}\in\mathbb{P}};\boldsymbol{\theta}_{h}),y),\tag{4}$$

where $(\mathbb{X},\mathbb{Y})$ is a tuple of inputs and labels, respectively, and $\mathbb{P}$ is a set of prototypes derived from the data $\mathbb{X}$ (e.g., by selecting a set of representative examples or by clustering). Brute-force optimisation for the problem of selecting a set of representative examples is equivalent to finding a solution of the best-subset selection problem, which is an NP-hard problem (Natarajan (1995)).
While there are methods for solving such subset selection problems in limited cases such as sparse linear regression (Bertsimas et al. (2016)), it still remains computationally inefficient in the general case (polynomial complexity is claimed in Zhu et al. (2020)) and/or is solved only in a limited (i.e., linear) setting. The common approach to dealing with such a selection problem is to replace the original optimisation problem (Equation 4) with a surrogate one, where the prototypes $\mathbb{P}$ are provided by a data-distribution (Angelov & Soares (2020)) or a geometric, e.g., clustering (Wang et al. (2023)), technique. Then, once the prototypes are selected, the optimisation problem becomes:

$$\operatorname*{arg\,min}_{\boldsymbol{\theta}_{\{d,h\}}}\sum_{(\mathbf{x},y)\in(\mathbb{X},\mathbb{Y})}l(h(d(\mathbf{x},\mathbf{p};\boldsymbol{\theta}_{d})|_{\mathbf{p}\in\mathbb{P}};\boldsymbol{\theta}_{h}),y).\tag{5}$$

**Online (lifelong) learning** Instead of solving a single objective for a fixed dataset, the problem is transformed into a series of optimisation problems for a progressively growing set $\mathbb{X}$:

$$\left\{\operatorname*{arg\,min}_{\boldsymbol{\theta}_{\{d,h\}}}\sum_{(\mathbf{x},y)\in(\mathbb{X}_{n},\mathbb{Y}_{n})}l(h(d(\mathbf{x},\mathbf{p};\boldsymbol{\theta}_{d})|_{\mathbf{p}\in\mathbb{P}_{n}};\boldsymbol{\theta}_{h}),y)\right\}_{n=1}^{N},\quad\mathbb{X}_{n}=\mathbb{X}_{n-1}\cup\{\mathbf{x}_{n}\},\ \mathbb{X}_{1}=\{\mathbf{x}_{1}\}.\tag{6}$$

Once the prototypes are found, the problem only requires light-weight optimisation steps, as described in Algorithms 1 and 2.

Algorithm 1: Training and testing (offline)
Data: training data $\mathbb{X}=\{\mathbf{x}_{1}\ldots\mathbf{x}_{N}\}$;
Result: prototype-based classifier $c(\mathbf{x};\mathbb{P},\boldsymbol{\theta})$
$\mathbb{P}\leftarrow$ FindPrototypes$(\{\mathbf{x}_{1}\ldots\mathbf{x}_{N}\})$; // prototype selection function FindPrototypes: $\mathbb{X}\to\mathbb{P}$
$\boldsymbol{\theta}\leftarrow$ SelectParameters$(\mathbb{X},\mathbb{Y},\boldsymbol{\theta})$; // SelectParameters is a solution of Eq. 5
$\hat{\mathbb{Y}}_{T}\leftarrow\{h(d(\mathbf{x},\mathbf{p};\theta_{d})|_{\mathbf{p}\in\mathbb{P}};\theta_{h})\}_{\mathbf{x}\in\mathbb{X}_{T}}$;

Algorithm 2: Training and testing (online)
Data: training data $\mathbb{X}=\{\mathbf{x}_{1}\ldots\mathbf{x}_{N}\}$;
Result: prototype-based classifier $h(d(\mathbf{x},\mathbf{p};\theta_{d})|_{\mathbf{p}\in\mathbb{P}};\theta_{h})$
$\mathbb{P}\leftarrow\{\}$;
for $\{\mathbf{x},y\}\in\mathbb{X}$ do
  $\hat{y}=h(d(\mathbf{x},\mathbf{p};\theta_{d})|_{\mathbf{p}\in\mathbb{P}};\theta_{h})$;
  $\mathbb{P}\leftarrow$ UpdatePrototypes$(\mathbb{P},\mathbf{x})$; // prototype update function UpdatePrototypes: $\mathbb{P}\times\mathbb{X}\to\mathbb{P}$
  $\boldsymbol{\theta}\leftarrow$ UpdateParameters$(\mathbb{X},\mathbb{Y},\boldsymbol{\theta})$; // UpdateParameters is a solution of Eq. 6
end

## 3.2 Choice Of Functions d And h

While we define the framework in generic terms, we limit our analysis to the special case of a Euclidean distance and a winner-takes-all function. This helps focus on quantifying the trade-offs of accuracy, interpretability, and generalisation between the model without finetuning, on the one hand, and state-of-the-art, fully finetuned models, on the other. Although it may be possible to further improve the performance by finding better architectural choices, we decided to focus on the simple parameterisation of the framework with the Euclidean distance and winner-takes-all decision making. Throughout the experiments, we use the negative Euclidean distance between the feature vectors:

$$d(\mathbf{x},\mathbf{p};\theta_{d})=-\ell^{2}(\phi(\mathbf{x};\theta_{d}),\phi(\mathbf{p};\theta_{d})),\tag{7}$$

where $\phi$ is the feature extractor output. For the scenario without finetuning, $\theta_{d}$ is frozen: $\phi(\cdot)=\phi(\cdot;\theta_{d})$, $\theta_{d}=\mathrm{const}$. Similarities bounded between $(0,1]$ could be obtained by, for example, taking the exponential of the similarity function or normalising it. Except for the experiment in Figure 4, where $h$ is implemented as k-NN, the function $h$ is a winner-takes-all operator:

$$h(\cdot)=\mathrm{CLASS}\Big(\operatorname*{arg\,max}_{\mathbf{p}\in\mathbb{P}}d(\cdot,\mathbf{p};\theta_{d})\Big)\tag{8}$$

Note that the lack of finetuning makes the loss function trivial, as the model does not have any free parameters $\theta_{\{d,h\}}$.

## 3.3 Difference From The Other Prototype-Based Frameworks

Existing prototype-based models, such as ProtoPNet (Chen et al. (2019)), DNC (Wang et al.
(2023)) and xDNN (Angelov & Soares (2020)), focus on end-to-end training for the purpose of interpretability by design and not on transfer learning from existing pretrained models. All of them can also be considered specific cases of the presented framework. None of these models aims to address transfer learning, in contrast to this paper's attention to the trade-offs between finetuned and non-finetuned models.

xDNN (Angelov & Soares (2020)) is a special case of our IDEAL formulation with

$$d(\mathbf{x},\mathbf{p};\theta_{d})=-{\mathcal{C}}(\phi(\mathbf{x};\theta_{d}),\phi(\mathbf{p};\theta_{d})),\tag{9}$$

where $\mathcal{C}$ is a Cauchy similarity. It optimises the coefficients $\theta_{d}$ as a part of its finetuning procedure prior to the model training, and its decision making is defined according to the winner-takes-all procedure as per Equation 8.

DNC (Wang et al. (2023)) selects prototypes at every optimisation step using an online version of the Sinkhorn-Knopp clustering algorithm (Cuturi (2013)) and defines $l$ in Equation 6 as a softmax cross-entropy loss.

ProtoPNet (Chen et al. (2019)), in contrast to the former two methods, operates over patches and not the full images:

$$d(\mathbf{x},\mathbf{p};\theta_{d})=\max_{\tilde{\mathbf{x}}\in\mathrm{patches}(\mathbf{x})}\left(\log(\ell^{2}(\tilde{\mathbf{x}},\mathbf{p})+1)-\log(\ell^{2}(\tilde{\mathbf{x}},\mathbf{p})+\epsilon)\right),\tag{10}$$

where $\epsilon$ is a parameter. ProtoPNet also defines a decision-making function $h$ as follows and jointly optimises the prototypes $\mathbb{P}$ and the parameters $\theta_{\{d,h\}}$ using a cross-entropy loss:

$$h(\cdot)=\mathrm{FC}\left(\begin{bmatrix}d(\cdot,\mathbf{p}_{1})\\ d(\cdot,\mathbf{p}_{2})\\ \vdots\\ d(\cdot,\mathbf{p}_{|\mathbb{P}|})\end{bmatrix}_{\mathbf{p}_{i}\in\mathbb{P},\,i\in[1,\ldots,|\mathbb{P}|]}\right)\tag{11}$$

![6_image_0.png](6_image_0.png)

Figure 2: Experimental setup.
Top: standard DL model; middle: proposed framework with no finetuning; bottom: proposed framework **with** finetuning.

## 3.4 Prototype Selection Through Clustering

Prototype selection through standard clustering methods, such as k-means (Steinhaus et al. (1956)), is used, for example, by Zhang et al. (2022) and DNC (Wang et al. (2023)). However, these methods have one serious limitation: they utilise the averaging of cluster values, so the prototypes $\mathbb{P}$ do not, in general, belong to the original training dataset $\mathbb{X}$. It is still possible, however, to attribute the prediction to the set of the cluster members. The possible options for such prototype selection are summarised below. Standard *black-box* classifiers do not offer interpretability through prototypes. Prototypes selected through k-means are non-interpretable on their own account, as discussed above; however, it is possible to attribute the similarity to the members of the clusters. Finally, one can select, as prototypes, real training samples closest to the cluster centroids. This way, it is possible to attribute the decision to a number of real image prototypes, ranked by their similarity to the query image. Such a choice between averaged and real centroids can create, as we show in the experimental section, a trade-off between interpretability and performance (see Section 4.2).
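The choice above can be made concrete with a small sketch. The following toy example is illustrative only and not the paper's implementation: `kmeans`, `real_prototypes`, and the 2-D tuples are hypothetical stand-ins for clustering high-dimensional backbone features. It contrasts averaged k-means centroids, which generally are not members of the training set, with prototypes snapped to the nearest real sample:

```python
import random

def dist2(a, b):
    # squared Euclidean distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    # coordinate-wise average of a non-empty list of tuples
    return tuple(sum(v) / len(pts) for v in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: returns *averaged* centroids, which in general
    do not belong to the training set and are thus not interpretable
    as individual samples."""
    centroids = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

def real_prototypes(points, centroids):
    """Replace each averaged centroid by the nearest real training sample,
    so every prototype can be shown to a human as an actual image."""
    return [min(points, key=lambda p: dist2(p, c)) for c in centroids]

# toy latent features of one class: two well-separated blobs
feats = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
cents = kmeans(feats, k=2)
protos = real_prototypes(feats, cents)
assert all(p in feats for p in protos)     # real prototypes: actual samples
assert any(c not in feats for c in cents)  # averaged centroids: generally not
```

Snapping to the nearest real sample keeps every prototype attributable to an actual training image, at the possible cost of slightly less representative centroids, which is one way to view the interpretability-performance trade-off mentioned above.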
## 4 Experiments

Throughout the experimental scenarios, we contrast three settings (see Figure 2):
A) Standard DL pipeline, involving training on generic datasets as well as finetuning on the target ("downstream") task or data - both with iterative error backpropagation
B) IDEAL **without finetuning**: the proposed prototype-based IDEAL method, involving clustering in the latent feature space with a subsequent decision-making process, such as winner-takes-all analysis or k nearest neighbours, as outlined in Algorithms 1 and 2
C) IDEAL **with finetuning**: same as B), with the only difference being that the clustering is performed in a latent feature space which is formed by finetuning on the target dataset (from the "downstream" task) using iterative error backpropagation. Unlike A), setting C) provides interpretable prototypes

The empirical questions are presented below. Questions 1 and 2 confirm that the method delivers competitive results **even without finetuning**. Building upon this initial intuition, we develop the key questions 3, 4, and 5, analysing the performance in lifelong learning scenarios and the interpretations proposed by IDEAL, respectively. For reproducibility, the full parameterisation is described in Section A of the Appendix.

Question 1. *How does the performance of the IDEAL framework **without** finetuning compare with the well-known deep learning frameworks?* Section 4.2 and Appendix B show, with a concise summary in Figure 3 and Figure 5, that the gaps between the finetuned and non-finetuned IDEAL framework are consistently much smaller for vision transformer backbones (a few percentage points) than for ResNets and VGG (tens of percent). Furthermore, Figure 6 shows that the training time expenditure is more than an order of magnitude smaller compared to the finetuning time.

Question 2.
*To what extent does finetuning of the feature space for the target problem lead to overfitting?* In Section 4.3, Figures 8, 9 and 10, we demonstrate the issue of overfitting on the target spaces by finetuning on CIFAR-10 and testing on CIFAR-100, both in performance and through visualising the feature space. Interestingly, we also show in Table 3 of the Appendix that, while the choice of prototypes greatly influences the performance of the IDEAL framework **without** finetuning of the backbone, it does not make any significant impact for the finetuned models (i.e., does not improve upon random selection).

Question 3. *How does the IDEAL framework **without** finetuning compare in the class-incremental learning setting?* In Section 4.4, we build upon questions 1 and 2 and demonstrate that the small gap between pretrained and finetuned ViT models ultimately enables us to solve class-incremental learning scenarios, improving upon well-known baseline methods. The IDEAL framework **without** finetuning shows performance on a number of class-incremental learning problems comparable to task-level finetuning. Notably, on the CIFAR-100 benchmark, the proposed method provides 83.2% and 69.93% on ViT-L and ResNet-101 respectively, while the state-of-the-art method of Wang et al. (2022b) only reports 65.86%.

Question 4. *How does the IDEAL framework provide insight and interpretation?* In Section 4.5, we present the analysis of interpretations provided by the method. In Figures 12, 13 and 14, we demonstrate the qualitative experiments showing the human-readable interpretations provided by the model for both lifelong learning and offline scenarios.

Question 5. *Can models **without** finetuning bring an advantage over the finetuned ones in terms of accuracy and help identify misclassifications due to confounding (i.e., spurious correlations in the input)?*
While, admittedly, the model only approaches but does not reach the same level of accuracy for the same backbone without finetuning on standard benchmarks such as CIFAR-10, it delivers better performance in cases with confounded data (with spurious correlations in the input). In Section 4.6, Table 1, we demonstrate, building upon the intuition from Question 2, that finetuning leads to overfitting on confounded data and leads to confounded predictions and interpretations. We also demonstrate that, in this setting, IDEAL **without finetuning** improves the F1 score over the finetuned baseline and provides interpretations for wrong predictions due to the confounding.

![8_image_0.png](8_image_0.png)

Figure 3: Comparison of the proposed IDEAL framework (**without** finetuning) on the CIFAR-10 data set with different prototype selection methods (random, the clustering used in xDNN (Soares et al. (2021)) and the k-means method) vs the baseline DNN

![8_image_1.png](8_image_1.png)

![8_image_2.png](8_image_2.png)

Figure 4: Comparison of results on CIFAR-10 (ViT, k nearest neighbours)

## 4.1 Experimental Setting

**Datasets** CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton (2009)), STL-10 (Coates et al. (2011)), Oxford-IIIT Pet (Parkhi et al. (2012)), EuroSAT (Helber et al. (2018; 2019)), CalTech101 (Li et al. (2006)).

**Feature extractors** We consider a number of feature extractor networks, such as VGG-16 (Simonyan & Zisserman (2014)), ResNet50 (He et al. (2016)), ResNet101 (He et al. (2016)), ViT-B/16 (Dosovitskiy et al. (2020), henceforth referred to as ViT) and ViT-L/16 (Dosovitskiy et al. (2020), henceforth referred to as ViT-L), with or **without** finetuning; the pre-trained latent spaces for the ViT models were obtained using the SWAG methodology (Singh et al. (2022)); the computations for the feature extractors have been conducted using a single V100 GPU.
**Prototype selection techniques** We include the results for such clustering techniques as k-means, k-means with the nearest data point (referred to as k-means (nearest)), and two online clustering methods: xDNN (Angelov & Soares (2020)) and ELM (Baruah & Angelov (2012)).

**Baselines** We explore trade-offs between standard deep neural networks and different architectural choices (averaged prototypes vs real-world examples) in Section 4.2, and present an expanded analysis in Appendix B.

## 4.2 Offline Classification

We found that the gap between the finetuned and non-finetuned models on a range of tasks decreases for modern, high-performance architectures such as ViT (Dosovitskiy et al. (2020)). For CIFAR-10, these findings are highlighted in Figure 3. While finetuned VGG-16's accuracy is close to that of ViT and other recent models, different prototype selection techniques **without** finetuning (the one used in xDNN, k-means clustering, and random selection) all give accuracy between 60 and 80%. The picture is totally different for ViT, where k-means prototype selection **without finetuning** provides an accuracy of 95.59% against finetuned ViT's own performance of 98.51%. While the results above report on the performance of k-means clustering used as a prototype selection technique, the experimental results in Figure 4 explore choosing the nearest prototype to the k-means cluster centroid for interpretability reasons.

![9_image_0.png](9_image_0.png)

![9_image_1.png](9_image_1.png)

![9_image_2.png](9_image_2.png)

Figure 6: Comparison of training time expenditure on CIFAR-10 (left) and CIFAR-100 (right) with and without finetuning (ViT)
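The two decision rules used over the selected prototypes, winner-takes-all and its k-nearest-neighbours relaxation, can be sketched as follows. This is a minimal numpy illustration with toy two-dimensional "features" and our own function names; the actual experiments use sklearn's KNeighborsClassifier rather than this hand-rolled vote:

```python
import numpy as np
from collections import Counter

def predict_wta(query, protos, proto_labels):
    """Winner-takes-all: return the label of the single closest prototype
    under the l2 distance."""
    d = np.linalg.norm(protos - query, axis=1)
    return proto_labels[int(d.argmin())]

def predict_knn(query, protos, proto_labels, k=3):
    """k nearest neighbours over the prototype set: majority vote among
    the k closest prototypes."""
    d = np.linalg.norm(protos - query, axis=1)
    nearest = [proto_labels[i] for i in np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]

# toy prototypes in a 2-d "latent space" with their class labels
protos = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = ["cat", "cat", "dog", "dog"]
```

The kNN vote pools evidence from several prototypes, which is what makes the slightly noisier real-point (k-means (nearest)) prototypes competitive with averaged centroids.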
Although it is clear (with further evidence presented in Appendix B) that the performance when selecting the prototypes nearest to the k-means centroids lags slightly behind the direct use of the centroids (denoted simply as k-means), it is possible to bring this performance closer by replacing the winner-takes-all decision making approach (Equation (8)) with the k nearest neighbours method. For this purpose, we utilise sklearn's KNeighborsClassifier function. The abridged results for classification **without** finetuning for different tasks are presented in Figure 5 (one can find a full version for different methods in Section B). Below, we analyse in more detail only the results using ViT as a feature extractor forming the latent data space. One can see in Figure 7 that: (1) **without finetuning**, on a number of tasks the model shows competitive performance, and (2) with finetuning of the backbone, the difference between the standard backbone and the proposed model is insignificant within the confidence interval. In Figure 6, one can see the comparison of the time expenditure between the finetuned and **non-finetuned** model. We conducted (see Appendix C) a sensitivity analysis experiment by varying the number of prototypes for CIFAR-10 on the ResNet101 backbone by changing the value of k for the k-means method. In Appendix B, we also show the results with the online clustering method ELM (Baruah & Angelov (2012)), which does not require the number of clusters to be pre-defined and instead uses a radius meta-parameter which affects granulation.

## 4.3 Demonstration Of Overfitting In The Finetuned Feature Spaces And The Prototype Selection Impact

One clear advantage of transfer learning without finetuning is the dramatically lower computational cost, reflected in the time expenditure. However, there is also another advantage. The evidence shows that the finetuned feature space generalises less well.
In Figures 8 and 9, one can see a comparison of the tSNE plots between the finetuned and **non-finetuned** versions of the method. While the finetuned method achieves clear separation on this task, using the same features to transfer to another task (from CIFAR-10 to CIFAR-100) leads to a sharp decrease in performance (see Figure 10). While for the finetuned backbone, predictably, the results are not far off the standard DL models, they also show no significant difference between different types of prototype selection, including random (see Figure 7). This can be explained by the previous discussion of Figures 8 and 9, which suggests that finetuning gives a clear separation of features, so the features of the same class stay close. For the **non-finetuned** results, meanwhile, the difference in accuracy between random and non-random prototype selection is drastic, reaching around 24% for VGG16.

Figure 5: Results **without** finetuning for various problems (ViT)

![10_image_0.png](10_image_0.png)

![10_image_1.png](10_image_1.png)

Figure 7: Comparison of results with ViT (Dosovitskiy et al. (2020)) as a feature extractor on a number of datasets. Random, xDNN and k-means denote different prototype selection methods

![11_image_0.png](11_image_0.png)

Figure 8: tSNE plots for original (top-left) vs finetuned (top-right) features of ResNet101, k-means prototypes; original (bottom left) vs finetuned (bottom right), ResNet101, random prototype selection, CIFAR-10

![11_image_1.png](11_image_1.png)

Figure 9: tSNE plots for original (top-left) vs finetuned (top-right) features of ViT, k-means prototypes; original (bottom left) vs finetuned (bottom right), ViT, random prototype selection, CIFAR-10

![11_image_2.png](11_image_2.png)

Figure 10: Comparison between the model performance on CIFAR-100 without finetuning and with finetuning on CIFAR-10 for different prototype selection methods
This finding remains consistent for a number of vision benchmarks. In Figure 5 and Appendix B, one can see that simple k-means prototype selection in the latent space can significantly improve the performance; as the number of prototypes increases, this difference decreases but is still present.

## 4.4 Continual Learning

The evidence from the previous sections motivates us to extend the analysis to continual learning problems. Given the much smaller gap between the finetuned and non-finetuned ViT models, can the IDEAL framework without finetuning compete with the state-of-the-art class-incremental learning baselines? It turns out the answer is affirmative. We repeat the setting from Rebuffi et al. (2017) (Section 4, iCIFAR-100 benchmark) using IDEAL without finetuning the latent space of the ViT-L model. The hyperparameters of the proposed methods are given in Appendix A. This benchmark gradually adds the new classes with a class increment of 10, until it reaches 100 classes. The results, shown in Figure 11a, highlight the excellent performance of the proposed method when the number of prototypes is set to 10% of the data. As one can see in Appendix C, even a much lower number of prototypes (below 1000, or even just 10 per class on average) can still lead to competitive results. While we observe 64.18 ± 0.16%, 69.93 ± 0.23% and 82.20 ± 0.23% for ResNet-50, ResNet-101, and ViT-L respectively, Wang et al. (2022b) report in their Table 1, for the best-performing method for class-incremental learning, based on a ViT architecture and contrastive learning, an accuracy of just 65.86 ± 1.24% (with a buffer size of 1000), while the original benchmark model iCaRL (Rebuffi et al. (2017)) reaches, according to Wang et al. (2022b), only 50.49 ± 0.18%. To demonstrate the consistent performance, we expanded the iCIFAR-100 protocol to other datasets, namely class-incremental versions of Caltech101 and CIFAR-10, which we refer to as iCaltech101 and iCIFAR-10.
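The class-incremental protocol with a frozen feature space reduces to the following sketch (illustrative numpy code with toy two-dimensional features; the class and method names are ours, and IDEAL selects prototypes by clustering rather than by taking the first samples). Because adding a class only appends prototypes and performs no gradient updates, earlier classes are never overwritten:

```python
import numpy as np

class IncrementalPrototypeClassifier:
    """Minimal sketch of class-incremental learning over a frozen latent
    space: new classes only append prototypes, so there is no catastrophic
    forgetting of the classes learnt earlier."""
    def __init__(self):
        self.protos, self.labels = [], []

    def add_class(self, label, features, n_protos=2):
        # simplified: take the first n features as prototypes
        for f in features[:n_protos]:
            self.protos.append(np.asarray(f, dtype=float))
            self.labels.append(label)

    def predict(self, query):
        # winner-takes-all over all stored prototypes
        d = [np.linalg.norm(p - query) for p in self.protos]
        return self.labels[int(np.argmin(d))]

clf = IncrementalPrototypeClassifier()
clf.add_class("plane", [[0.0, 0.0], [0.1, 0.0]])
clf.add_class("car", [[5.0, 5.0], [5.1, 5.0]])
```

Adding a third class later (e.g., `clf.add_class("bird", ...)`) leaves the predictions for "plane" and "car" untouched, which is the mechanism exploited in the class-incremental experiments.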
Figure 11 shows robust performance on iCaltech101 and iCIFAR-10. We use a class increment value of ten (eleven for the last step) and two for iCaltech101 and iCIFAR-10, respectively. We see that for iCaltech101, the model performance changes insignificantly when adding the new classes, and all three datasets demonstrate performance similar to offline classification (see Section 4.2).

## 4.5 Study Of Interpretability

In Figures 12 and 13, we demonstrate the visual interpretability of the proposed model through both the most similar and most dissimilar prototypes. In addition, the results could be interpreted linguistically (see Appendix D). Figure 13 shows a number of quantitative examples for multiple datasets, Caltech101, STL-10 and Oxford-IIIT Pets, all corresponding to the non-finetuned feature space scenario according to the experimental setup from Appendix A. We see that on a range of datasets, without any finetuning, the proposed IDEAL approach provides semantically meaningful interpretations. Furthermore, as there has been no finetuning, the ℓ2 distances are defined in exactly the same feature space and, hence, can be compared like-for-like between datasets (see subfigures 13a-13f). The experiment in Figure 13 provides an additional reason to use our approach **without** finetuning, as it demonstrates that the incorrectly classified data tend to have a larger distance to the closest prototypes than the correctly classified ones. Finally, Figure 14 outlines the evolution of predictions for the class-incremental learning scenario. For the sake of demonstration, we used the same setting as the one for the class-incremental lifelong learning detailed in Appendix A and Section 4.4, taking CIFAR-10 for class-incremental learning using the ViT model with an increment batch of two classes. We trace the best-matching, middle and worst-matching prototypes (according to the ℓ2 metric) through the stages of class-incremental learning.
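The prototype ranking behind these interpretation figures can be sketched as follows (a simplified, numpy-only illustration with hypothetical prototype names; the actual figures rank image prototypes in the ViT latent space):

```python
import numpy as np

def rank_prototypes(query, protos, names):
    """Rank stored prototypes by their l2 distance to the query feature
    vector and return the best, middle and worst match."""
    order = np.argsort([np.linalg.norm(p - query) for p in protos])
    best, mid, worst = order[0], order[len(order) // 2], order[-1]
    return names[best], names[mid], names[worst]

# toy 2-d "features" of three prototypes with illustrative class names
protos = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
names = ["auklet", "albatross", "jay"]
best, mid, worst = rank_prototypes(np.array([0.2, 0.0]), protos, names)
```

Because the distances are computed in one fixed feature space, the same ranking is comparable like-for-like across datasets, as noted above.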
For the successful predictions, while the best matching prototypes tend to be constant, the worst matching ones change over time when the class changes.

![13_image_0.png](13_image_0.png)

Figure 11: Accuracy of IDEAL in class-incremental learning experiments for different backbones (ViT-L, ResNet-101 and 50); comparison with Wang et al. (2022b) and Rebuffi et al. (2017)

![14_image_0.png](14_image_0.png)

Figure 12: Interpreting the predictions of the proposed model (k-means (nearest), CIFAR-10, ViT)

## 4.6 Impact Of Confounding On Interpretations

The phenomenon of confounding takes its origin in causal modelling and is informally described, as per Greenland et al. (1999), as *'a mixing of effects of extraneous factors (called confounders) with the effects of interest'*. In many real-world scenarios, images contain confounding features, such as watermarks or naturally occurring spurious correlations ('seagulls always appear with the sea in the background'). The challenge for interpretable models is therefore multi-fold: (1) these models need to be resistant to such confounders; (2) should these confounders interfere with the performance of the model, the model should highlight them in the interpretations. To model confounding, we use the experimental setup from Bontempelli et al. (2022), which involves inpainting training images of three out of five selected classes of the CUB dataset with geometric figures (squares) which correlate with, but are not caused by, the original data (e.g., every image of the Crested Auklet class is marked in the training data with a blue square). In Table 1, we compare the experimental results between the original (Wah et al. (2011)) and confounded (Bontempelli et al. (2022)) versions of the CUB dataset. We use the same original pre-trained feature spaces as stated in Appendix A. The finetuned spaces are obtained through finetuning on confounded CUB data from Bontempelli et al. (2022) for 15 epochs.
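The inpainting step of this protocol can be sketched as below (an illustrative numpy snippet; the square's position, size and colour here are our assumptions for demonstration, not the exact parameters of Bontempelli et al. (2022)):

```python
import numpy as np

def add_confounder(image, size=5, value=(0, 0, 255)):
    """Paint a solid coloured square into the top-left corner of an HxWx3
    image, mimicking the confounded-CUB setup in which selected training
    classes are marked with a geometric figure correlated with, but not
    caused by, the class."""
    confounded = image.copy()
    confounded[:size, :size] = value
    return confounded

# toy all-black "training image" of a confounded class
clean = np.zeros((32, 32, 3), dtype=np.uint8)
marked = add_confounder(clean)
```

A model that overfits the finetuned feature space can latch onto this square instead of the bird itself, which is exactly the failure mode examined in Table 1.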
The results in Table 1 demonstrate a clear advantage of the models **without finetuning** on the confounded dataset for both k-means and k-means (nearest) in the case of ViT. This gap, however, is much narrower for VGG-16 and ResNet-50. This is consistent with the results in Section 4.2, which demonstrate a larger finetuning performance advantage for these models compared with ViT. Furthermore, the non-finetuned model does not show improvements over finetuning in the k-means (nearest) scenario for VGG-16 and ResNet-50, in stark contrast with the ViT results. We demonstrate the interpretations for the confounding experiment in Figure 15. While the model **without** finetuning successfully predicts the correct confounded class, Black-footed Albatross, the finetuned model fails in this scenario and predicts a similar class, Sooty Albatross, which does not contain the confounder mark. On the other hand, the finetuned model performs similarly or better on the original (not confounded) data. These results further build upon the hypothesis from Question 2 and demonstrate that the use of the proposed framework can help address the phenomenon of confounding. Figure 16 gives an intuition behind the improvements in performance of the non-finetuned model. It shows that, in the finetuned scenario, the confounded training data stands further away from the testing data, which does not contain the confounder mark. In the scenario without finetuning, this does not happen, and the training and testing data are matched more closely, even in the presence of a confounder. The Sinkhorn approximation of the Wasserstein-2 distance has been implemented using the `SamplesLoss(loss='sinkhorn', p=2, blur=1e-5)` function from the `geomloss` Python library.

## 5 Conclusion

Our work shows that interpretable, prototype-based models over the latent spaces of ViT models **without** finetuning, learnt on large generic datasets, work surprisingly well in a number of scenarios.
In an extensive set of experiments, we find the following.

Contemporary ViT models drastically narrow the gap between the finetuned and non-finetuned models, making it possible to avoid finetuning altogether and still obtain competitive results on a number of benchmarks. To give an example, for the VGG-16 backbone, the accuracy difference between the best-performing finetuned and non-finetuned scenarios on CIFAR-10 is 16.61%. The situation is drastically different for the ViT backbone, where this difference is just 2.94%.

The findings in the previous paragraph indicate that, without finetuning, we can circumvent the problem of catastrophic forgetting in class-incremental learning. If the models can achieve competitive performance even **without** finetuning, one can use this advantage to solve a number of problems of lifelong learning without iterative updates and, hence, without catastrophic forgetting. The experimental results show the strong empirical advantage of such an approach, allowing us to achieve, using a ViT-L backbone, a lead of 16.34% on the well-known iCIFAR-100 benchmark.

The IDEAL framework, proposed in this paper, allows for interpretations through similarities in the latent feature space, which are comparable not only within one dataset but also between datasets. We find that the closest prototypes in the case of misclassification tend to be further away from the input, and, using qualitative analysis, we demonstrate how the IDEAL framework allows one to interpret the decision making process in both offline and class-incremental learning scenarios.

Finetuning results in consistently inferior performance when compared to non-finetuned models in the face of confounding bias. Our initial findings quantify the margin of feature space overfitting in a simple experiment, showing that the ViT model **without** finetuning has a 5.63% advantage on CIFAR-100 over the model finetuned on CIFAR-10.
We then build upon this observation to show, quantitatively and qualitatively, how the models **without** finetuning outperform the purpose-finetuned counterparts on confounded data. Notably, the model with a ViT backbone **without** finetuning achieves a 14.1% lead over the finetuned model on the confounded CUB dataset with prototypes selected using k-means clustering.

## Broader Impact Statement

The IDEAL framework, proposed in this paper, goes beyond the paradigm, standard for the field, of first training and then finetuning complex models on new tasks, where both stages use expensive GPU compute to improve the model performance. We show that contemporary architectures, trained with extensive datasets, can deliver performance comparable to task-level finetuning in a class-incremental learning setting. This can have a profound impact on the democratisation of high-performance machine learning models and their implementation on Edge devices and on board autonomous vehicles, as well as address important problems of environmental sustainability by avoiding the considerable energy needed to train and finetune new latent representations, providing instead a way to re-use existing models. Furthermore, the proposed framework can help define a benchmark on how deep-learning latent representations generalise to new tasks. This approach also naturally extends to task- and, potentially, domain-incremental learning, enabling the learning of new concepts. It demonstrates that, with large and complex enough latent spaces, relatively simple strategies of prototype selection, such as clustering, can deliver results comparable with the state of the art in a fraction of the time and compute effort.

| Feature space | Prototype selection | VGG16 | ResNet-50 | ViT |
|-----------------|-----------------------|--------------|--------------|--------------|
| *Confounded data (Bontempelli et al. (2022))* | | | | |
| Finetuned | N/A, backbone network | 73.99 ± 2.91 | 70.42 ± 2.68 | 69.06 ± 4.40 |
| Non-finetuned | k-means | 78.52 ± 1.31 | 76.68 ± 1.63 | 80.70 ± 2.26 |
| Finetuned | k-means | 73.19 ± 1.43 | 67.16 ± 2.25 | 66.58 ± 5.81 |
| Non-finetuned | k-means (nearest) | 64.13 ± 1.37 | 67.68 ± 0.90 | 82.88 ± 2.17 |
| Finetuned | k-means (nearest) | 71.00 ± 2.92 | 69.03 ± 1.19 | 73.99 ± 5.19 |
| *Original data* | | | | |
| Finetuned | N/A, backbone network | 83.66 ± 1.16 | 83.49 ± 1.22 | 93.92 ± 1.31 |
| Non-finetuned | k-means | 80.01 ± 1.27 | 80.10 ± 1.66 | 90.67 ± 1.13 |
| Finetuned | k-means | 81.98 ± 1.53 | 79.38 ± 2.87 | 92.85 ± 1.70 |
| Non-finetuned | k-means (nearest) | 72.11 ± 1.62 | 72.64 ± 1.87 | 88.57 ± 0.96 |
| Finetuned | k-means (nearest) | 78.90 ± 2.77 | 80.05 ± 2.64 | 92.80 ± 1.77 |

Table 1: F1 score comparison for the CUB dataset (Wah et al. (2011)), in %; confidence intervals calculated over five runs; all k-means runs use 10% (15) clusters/prototypes; the better results within each category are highlighted, taking the confidence interval into account. While finetuning has strong performance benefits for the original data, on the confounded data the non-finetuned model has an edge over the finetuned one for all architectures with k-means; for k-means (nearest), the non-finetuned model still performs clearly better than the finetuned counterpart with the ViT architecture.

Importantly, unlike most of the state-of-the-art approaches, as described in the Related work section of this paper, the proposed framework directly provides interpretability in a linguistic and visual form and provides improved resistance to spurious correlations (confounding bias) in input features. However, while this study can help advance the transparency and trustworthiness of machine learning models, one needs to duly take into account considerations of privacy and security risks pertinent to deep-learning feature spaces and prototype-based learning.
In many cases, such as, notably, for medical applications, there may be a need to preserve the privacy of the prototypes and the training data, as exposing prototypes to the users may be unethical or illegal (Lucieri et al. (2023)). Deep-learning models' latent spaces themselves, as well as the data they are trained upon, may be biased or unfair (Birhane et al. (2023)). Another risk is the potential for adversarial attacks (Biggio et al. (2013)), affecting either the feature space representation or the distances between prototypes.

## Limitations

One of the limitations of this study is that it focuses on the final latent representations and does not analyse the intermediate layers of the common neural network architectures. Future work should consider addressing this issue to improve our understanding of models' inference at a granular level.

## References

Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. *Advances in neural information processing systems*, 31, 2018.

Muhammad Aurangzeb Ahmad, Carly Eckert, and Ankur Teredesai. Interpretable machine learning in healthcare. In *Proceedings of the 2018 ACM international conference on bioinformatics, computational biology, and health informatics*, pp. 559–560, 2018.

David Aldous, Illdar Ibragimov, and Jean Jacod. *Ecole d'Ete de Probabilites de Saint-Flour XIII, 1983*, volume 1117. Springer, 1983.

Plamen Angelov and Xiaowei Gu. *Empirical approach to machine learning*. Springer, 2019.

![17_image_0.png](17_image_0.png)

Figure 13: Interpreting the predictions (k-means (nearest), OxfordIIITPets/STL-10/Caltech101, ViT)

![18_image_0.png](18_image_0.png)

Figure 14: iCIFAR-10 class-incremental learning: evolution of prototype ranking

![19_image_0.png](19_image_0.png)

Figure 15: Comparing the interpretations of the non-finetuned and finetuned model with confounding on confounded CUB (Bontempelli et al.
(2022)) dataset

![20_image_1.png](20_image_1.png)

![20_image_0.png](20_image_0.png)

(a) tSNE plot: finetuned model. The clean testing data from confounded classes are better aligned with similar clean classes than with confounded ones

(b) tSNE plot: model without finetuning. The models show better distribution matching, including for similar classes such as different species of Albatross. In both tSNE plots, the density estimation is shown for tSNE embedded points

![20_image_2.png](20_image_2.png)

(c) Wasserstein-2 distance heatmap (Sinkhorn approximation): finetuned model, training (vertical) to testing (horizontal) distance. Black-footed albatross (testing distribution) is closer to a non-confounded Sooty albatross training distribution

(d) Wasserstein-2 distance heatmap (Sinkhorn approximation): model without finetuning. In contrast to the finetuned model, the similar classes' distributions are close yet closely match between training and testing classes.

Figure 16: Intuitive explanation behind better performance of non-finetuned model

Plamen Angelov and Eduardo Soares. Towards explainable deep neural networks (xdnn). *Neural Networks*, 130:185–194, 2020.

Plamen Angelov and Xiaowei Zhou. Evolving fuzzy-rule-based classifiers from data streams. *IEEE Transactions on Fuzzy Systems*, 16(6):1462–1475, 2008.

Nachman Aronszajn. Theory of reproducing kernels. *Transactions of the American Mathematical Society*, 68(3):337–404, 1950.

Akanksha Atrey, Kaleigh Clary, and David Jensen. Exploratory not explanatory: Counterfactual analysis of saliency maps for deep reinforcement learning. *arXiv preprint arXiv:1912.05743*, 2019.

Rashmi Dutta Baruah and Plamen Angelov. Evolving local means method for clustering of streaming data. In *2012 IEEE international conference on fuzzy systems*, pp. 1–8. IEEE, 2012.

Dimitris Bertsimas, Angela King, and Rahul Mazumder. Best subset selection via a modern optimisation lens. *The Annals of Statistics*, 44(2):813–852, 2016.
Jacob Bien and Robert Tibshirani. Prototype selection for interpretable classification. *The Annals of Applied Statistics*, pp. 2403–2424, 2011.

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In *Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III 13*, pp. 387–402. Springer, 2013.

Abeba Birhane, Vinay Prabhu, Sang Han, Vishnu Naresh Boddeti, and Alexandra Sasha Luccioni. Into the laions den: Investigating hate in multimodal datasets. *arXiv preprint arXiv:2311.03449*, 2023.

Moritz Böhle, Mario Fritz, and Bernt Schiele. B-cos networks: alignment is all we need for interpretability. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10329–10338, 2022.

Andrea Bontempelli, Stefano Teso, Katya Tentori, Fausto Giunchiglia, and Andrea Passerini. Concept-level debugging of part-prototype networks. *arXiv preprint arXiv:2205.15769*, 2022.

Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik. A training algorithm for optimal margin classifiers. In *Proceedings of the fifth annual workshop on Computational learning theory*, pp. 144–152, 1992.

Chaofan Chen, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan K Su. This looks like that: deep learning for interpretable image recognition. *Advances in neural information processing systems*, 32, 2019.

Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011.

Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 24(5):603–619, 2002.
Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. *Advances in neural information processing systems*, 26, 2013.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.

Robert French. Catastrophic forgetting in connectionist networks. *Trends in cognitive sciences*, 3(4):128–135, 1999.

Sander Greenland, Judea Pearl, and James M Robins. Confounding and collapsibility in causal inference. *Statistical Science*, 14(1):29–46, 1999.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.

Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Introducing eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. In *IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium*, pp. 204–207. IEEE, 2018.

Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. *IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing*, 2019.

Been Kim, Cynthia Rudin, and Julie A Shah. The bayesian case model: A generative approach for case-based reasoning and prototype classification. *Advances in neural information processing systems*, 27, 2014.

Jinkyu Kim and John Canny. Interpretable learning for self-driving cars by visualizing causal attention. In *Proceedings of the IEEE international conference on computer vision*, pp. 2942–2950, 2017.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. *Proceedings of the national academy of sciences*, 114(13):3521–3526, 2017.

Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 2661–2671, 2019.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in Neural Information Processing Systems*, 25, 2012.

Christiaan Lamers, René Vidal, Nabil Belbachir, Niki van Stein, Thomas Bäeck, and Paris Giampouras. Clustering-based domain-incremental learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 3384–3392, 2023.

Fei-Fei Li, Rob Fergus, and Pietro Perona. One-shot learning of object categories. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 28(4):594–611, 2006.

Oscar Li, Hao Liu, Chaofan Chen, and Cynthia Rudin. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 32, 2018.

Zhizhong Li and Derek Hoiem. Learning without forgetting. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 40(12):2935–2947, 2017.

Adriano Lucieri, Andreas Dengel, and Sheraz Ahmed. Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical ai. *Frontiers in Bioinformatics*, 3, 2023.

James MacQueen et al. Some methods for classification and analysis of multivariate observations.
In *Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability*, volume 1, pp. 281–297. Oakland, CA, USA, 1967.

Balas Kausik Natarajan. Sparse approximate solutions to linear systems. *SIAM Journal on Computing*, 24(2):227–234, 1995.

Meike Nauta, Ron Van Bree, and Christin Seifert. Neural prototype trees for interpretable fine-grained image recognition. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14933–14943, 2021.

Allen Newell, John C Shaw, and Herbert A Simon. Report on a general problem solving program. In *IFIP Congress*, volume 256, pp. 64. Pittsburgh, PA, 1959.

Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. *arXiv preprint arXiv:2304.07193*, 2023.

German Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. *Neural Networks*, 113:54–71, 2019.

Omkar Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In *2012 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 3498–3505. IEEE, 2012.

Uwe Peters. Explainable ai lacks regulative reasons: why ai and human decision-making are not equally opaque. *AI and Ethics*, pp. 1–12, 2022.

Tomaso Poggio and Federico Girosi. A sparse representation for function approximation. *Neural Computation*, 10(6):1445–1454, 1998.

Milos Radovanovic, Alexandros Nanopoulos, and Mirjana Ivanovic. Hubs in space: Popular nearest neighbors in high-dimensional data. *Journal of Machine Learning Research*, 11(sept):2487–2531, 2010.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph Lampert. icarl: Incremental classifier and representation learning. In *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 5533–5542. IEEE, 2017.
Frank Rosenblatt et al. *Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms*, volume 55. Spartan Books, Washington, DC, 1962.

Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nature Machine Intelligence*, 1(5):206–215, 2019. doi: 10.1038/s42256-019-0048-x. URL https://doi.org/10.1038/s42256-019-0048-x.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. *Nature*, 323(6088):533–536, 1986.

Paul Ruvolo and Eric Eaton. Ella: An efficient lifelong learning algorithm. In *International Conference on Machine Learning*, pp. 507–515. PMLR, 2013.

Bernhard Schölkopf, Ralf Herbrich, and Alex J Smola. A generalized representer theorem. In *Computational Learning Theory: 14th Annual Conference on Computational Learning Theory, COLT 2001 and 5th European Conference on Computational Learning Theory, EuroCOLT 2001, Amsterdam, The Netherlands, July 16–19, 2001, Proceedings 14*, pp. 416–426. Springer, 2001.

Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 618–626, 2017.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. *arXiv preprint arXiv:1409.1556*, 2014.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. 2014.

Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens van der Maaten. Revisiting weakly supervised pre-training of visual perception models. In *CVPR*, 2022.

Alex Smola and Bernhard Schölkopf. A tutorial on support vector regression.
*Statistics and Computing*, 14:199–222, 2004.

Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. *Advances in Neural Information Processing Systems*, 30, 2017.

Eduardo Soares, Plamen Angelov, and Ziyang Zhang. An explainable approach to deep learning from ct-scans for covid identification. 2021.

Hugo Steinhaus et al. Sur la division des corps matériels en parties. *Bull. Acad. Polon. Sci.*, 1(804):801, 1956.

Michael Tipping. The relevance vector machine. *Advances in Neural Information Processing Systems*, 12, 1999.

Michael Tipping. Sparse bayesian learning and the relevance vector machine. *Journal of Machine Learning Research*, 1(Jun):211–244, 2001.

Gido van de Ven, Tinne Tuytelaars, and Andreas S Tolias. Three types of incremental learning. *Nature Machine Intelligence*, pp. 1–13, 2022.

Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011.

Wenguan Wang, Cheng Han, Tianfei Zhou, and Dongfang Liu. Visual recognition with deep nearest centroids. In *International Conference on Learning Representations (ICLR)*, 2023.

Yabin Wang, Zhiwu Huang, and Xiaopeng Hong. S-prompts learning with pre-trained transformers: An occam's razor for domain incremental learning. *Advances in Neural Information Processing Systems*, 35:5682–5695, 2022a.

Zhen Wang, Liu Liu, Yajing Kong, Jiaxian Guo, and Dacheng Tao. Online continual learning with contrastive vision transformer. In *Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XX*, pp. 631–650. Springer, 2022b.

Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R Varshney, Elizabeth Daly, and Moninder Singh. On the safety of interpretable machine learning: A maximum deviation approach. *Advances in Neural Information Processing Systems*, 35:9866–9880, 2022.

Gerhard Widmer and Miroslav Kubat. Learning in the presence of concept drift and hidden contexts.
*Machine Learning*, 23:69–101, 1996.

Shipeng Yan, Jiangwei Xie, and Xuming He. Der: Dynamically expandable representation for class incremental learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3014–3023, 2021.

Dagmar Zeithamova, W Todd Maddox, and David M Schnyer. Dissociable prototype learning systems: evidence from brain imaging and behavior. *Journal of Neuroscience*, 28(49):13194–13201, 2008.

Ziyang Zhang, Plamen Angelov, Eduardo Soares, Nicolas Longepe, and Pierre Philippe Mathieu. An interpretable deep semantic segmentation method for earth observation. *arXiv preprint arXiv:2210.12820*, 2022.

Junxian Zhu, Canhong Wen, Jin Zhu, Heping Zhang, and Xueqin Wang. A polynomial algorithm for best-subset selection problem. *Proceedings of the National Academy of Sciences*, 117(52):33117–33123, 2020.

## A Experimental Setup

In this work, all experiments were conducted in PyTorch 2.0.0. The pre-trained models used in these experiments were obtained from TorchVision1, while the finetuned models were obtained from three different sources:

1. *Models from MMPreTrain*2. Specifically, ResNet50 and ResNet101 finetuned on CIFAR-10, and ResNet50 finetuned on CIFAR-100.
2. *Finetuned TorchVision models*. Finetuning was conducted by continuing the error backpropagation (EBP) across all network layers. These models include VGG-16 and Vision Transformer (ViT) finetuned on CIFAR-10, as well as ResNet101, VGG-16, and ViT finetuned on CIFAR-100. For the ResNet101 and VGG-16 models, we ran the training for 200 epochs, while the Vision Transformer models were trained for 10 epochs. The Stochastic Gradient Descent (SGD) optimizer was employed for all models, with a learning rate of 0.0005 and a momentum value of 0.9.
3. *Linearly finetuned TorchVision models*. In this case, only the linear classifier was trained, and all the remaining layers of the network were fixed.
For these models, we conducted training for 200 epochs for ResNet50, ResNet101, and VGG-16, and 25 epochs for the ViT models. We adopted the Stochastic Gradient Descent (SGD) optimizer, with a learning rate of 0.001 and a momentum parameter set at 0.9.

We utilized k-means clustering and random selection methods, setting the number of prototypes for each class at 10% of the training data for the corresponding classes. In addition, we also set it to 12 per class and conducted experiments for ResNet50, ResNet101, and VGG-16 on the CIFAR-10 and CIFAR-100 datasets, enabling us to evaluate the impact of varying the number of prototypes. For the ELM online clustering method, we experimented with varying radius values for each specific dataset and backbone network. We selected a radius value that would maintain the number of prototypes within the range of 0–20% of the training data. In the experiments without finetuning on the CIFAR-10 dataset, we set the radius to 8, 10, 19, and 12 for the ResNet50, ResNet101, VGG-16, and Vision Transformer (ViT) models, respectively. The radius was adjusted to 8, 11, 19, and 12 for these models when conducting the same tasks without finetuning on CIFAR-100. For the STL10, Oxford-IIIT Pets, EuroSAT, and CalTech101 datasets, the radius was set to 13 across all ELM experiments. In contrast, the xDNN model did not require hyper-parameter settings, as it is inherently a parameter-free model. We performed all experiments for Sections 4.2 and 4.4 of the main paper 5 times and report mean values and standard deviations for our results, with the exception of the finetuned backbone models, for which we performed finetuning only once (or sourced finetuned models as detailed above). For the class-incremental learning experiments in Section 4.4, we use the k-means clustering method for prototype selection and set the number of prototypes to 10% of the training data.
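The per-class k-means prototype selection described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a plain Lloyd's-iteration k-means is used instead of a library call, and the function names (`kmeans`, `select_prototypes`) are ours:

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain Lloyd's iterations; returns k cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every feature vector to its closest centroid
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members) > 0:  # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids

def select_prototypes(features, labels, fraction=0.10):
    """Per-class k-means: the number of prototypes per class is a
    fraction of that class's training data (10% in the experiments)."""
    prototypes, proto_labels = [], []
    for c in np.unique(labels):
        class_feats = features[labels == c]
        k = max(1, int(fraction * len(class_feats)))
        prototypes.append(kmeans(class_feats, k))
        proto_labels.extend([c] * k)
    return np.vstack(prototypes), np.array(proto_labels)
```

With `fraction=0.10`, a class with 5,000 training images yields 500 prototypes, matching the 10% setting used in the experiments.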
Each time incremental classes are added, the existing prototypes remain unchanged, and the algorithm appends the prototypes for the new classes to the existing set. All experiments were executed 10 times to allow a robust comparison with the benchmark results. To ensure a consistent and stable training environment, every experiment was run on a single NVIDIA V100 GPU from a cluster.

## B Complete Experimental Results

Tables 2–9 contain extended experimental results for multiple benchmarks and feature extractors. These results further support the findings of the main paper.

1https://pytorch.org/vision/main/models.html
2https://github.com/open-mmlab/mmpretrain

| FE | method | accuracy (%) | #prototypes | time, s |
|----|--------|--------------|-------------|---------|
| ResNet50 | random | 65.55 ± 1.93 | 120 (0.24%) | 85 |
| | random | 80.40 ± 0.37 | 5,000 (10%) | 85 |
| | ELM | 81.17 ± 0.04 | 5,500 (11%) | 365 |
| | xDNN | 81.44 ± 0.33 | 115 (0.23%) | 103 |
| | k-means | 84.12 ± 0.19 | 120 (0.24%) | 201 |
| | k-means | 86.65 ± 0.15 | 5,000 (10%) | 1,138 |
| ResNet101 | random | 78.08 ± 1.38 | 120 (0.24%) | 129 |
| | random | 87.66 ± 0.25 | 5,000 (10%) | 129 |
| | ELM | 88.22 ± 0.09 | 7,154 (14.31%) | 524 |
| | xDNN | 88.13 ± 0.42 | 118 (0.24%) | 145 |
| | k-means | 90.19 ± 0.15 | 120 (0.24%) | 245 |
| | k-means | 91.50 ± 0.07 | 5,000 (10%) | 1,194 |
| VGG-16 | random | 50.13 ± 2.37 | 120 (0.24%) | 95 |
| | random | 65.06 ± 0.32 | 5,000 (10%) | 95 |
| | ELM | 72.31 ± 0.08 | 1,762 (3.52%) | 215 |
| | xDNN | 70.03 ± 0.96 | 103 (0.21%) | 132 |
| | k-means | 74.48 ± 0.16 | 120 (0.24%) | 346 |
| | k-means | 75.94 ± 0.15 | 5,000 (10%) | 2,362 |
| ViT | random | 93.23 ± 0.11 | 5,000 (10%) | 597 |
| | ELM | 90.61 ± 0.14 | 6,685 (13.37%) | 889 |
| | xDNN | 93.59 ± 0.12 | 112 (0.2%) | 606 |
| | k-means | 95.59 ± 0.08 | 5,000 (10%) | 925 |
| ViT-L | k-means | 96.48 ± 0.05 | 5,000 (10%) | 4,375 |
| | k-means (nearest) | 95.62 ± 0.07 | 5,000 (10%) | 4,352 |

Table 2: CIFAR-10 classification task comparison for the case of no finetuning of the feature extractor

Table 2 presents the data behind Figure 3 of the main paper. It also highlights the performance of the k-means model on the ViT-L latent space when the nearest real training data point to the k-means cluster centre is selected (labelled k-means (nearest)). One can also see that, even with a small number of selected prototypes, the algorithm delivers competitive performance without finetuning.

Table 3 compares different latent spaces and gives the number of free (optimised) parameters for the scenario of finetuning the models. With a small number of additional parameters, namely the number of possible prototypes, one can transform the opaque architectures into ones interpretable through proximity and similarity to prototypes within the latent space (highlighted in the interpretability column). Tables 4–9 repeat the same analysis, expanded from Figure 5 of the main paper, for different datasets. The results show remarkable consistency with the previous conclusions and further back up the claims of generalisation to different classification tasks.

## C Sensitivity Analysis For The Number Of Prototypes

Figure 17 further backs up the previous evidence that even with a small number of prototypes, the accuracy is still high. It shows, however, that there is a trade-off between the number of prototypes and accuracy. It also shows that, after a few hundred prototypes per class on the CIFAR-10 and CIFAR-100 tasks, the performance does not increase and may even slightly decrease, indicating saturation.

## D Linguistic Interpretability Of The Proposed Framework Outputs

To back up the interpretability claim, we present two additional interpretability scenarios complementing the one in Section 4.5 of the main text.
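The k-means (nearest) variant discussed above replaces each cluster centre with the closest real training sample, so every prototype is an actual image rather than a synthetic centroid. A minimal sketch, assuming the centroids and the per-class feature matrix have already been computed (the function name is illustrative, not the paper's code):

```python
import numpy as np

def nearest_real_prototypes(centroids, class_features):
    """Replace every k-means centroid by the nearest real training
    sample; returns the sample indices and the selected feature vectors."""
    # pairwise distances, shape (n_centroids, n_samples)
    dists = np.linalg.norm(centroids[:, None, :] - class_features[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    return idx, class_features[idx]
```

The returned indices can be mapped back to the original training images, which is what makes the resulting prototypes directly showable to a user.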
| FE | method | accuracy (%) | #parameters | #prototypes | time, s | interpretability |
|----|--------|--------------|-------------|-------------|---------|------------------|
| ResNet50 | ResNet50 | 95.55 (80.71∗) | ∼ 25M (20K) | | 36,360 (13,122∗) | ✗ |
| | random | 94.92 ± 0.02 | ∼ 25M + 50K | 120 (0.24%) | 36,360 + 24 | ✓ |
| | random | 95.32 ± 0.09 | ∼ 25M + 50K | 5,000 (10%) | 36,360 + 24 | ✓ |
| | xDNN | 95.32 ± 0.12 | ∼ 25M + 50K | 111 (0.22%) | 36,360 + 43 | ✓ |
| | k-means | 94.91 ± 0.14 | ∼ 25M + 50K | 120 (0.24%) | 36,360 + 208 | ✓ |
| | k-means | 95.50 ± 0.06 | ∼ 25M + 50K | 5,000 (10%) | 36,360 + 1,288 | ✓ |
| ResNet101 | ResNet101 | 95.58 (84.44∗) | ∼ 44M (20K) | | 36,360 | ✗ |
| | random | 95.47 ± 0.06 | ∼ 44M + 50K | 120 (0.24%) | 36,360 + 37 | ✓ |
| | random | 95.51 ± 0.01 | ∼ 44M + 50K | 5,000 (10%) | 36,360 + 37 | ✓ |
| | xDNN | 95.50 ± 0.10 | ∼ 44M + 50K | 107 (0.21%) | 36,360 + 54 | ✓ |
| | k-means | 95.55 ± 0.03 | ∼ 44M + 50K | 120 (0.24%) | 36,360 + 231 | ✓ |
| | k-means | 95.51 ± 0.04 | ∼ 44M + 50K | 5,000 (10%) | 36,360 + 1,357 | ✓ |
| VGG-16 | VGG-16 | 92.26 (83.71∗) | ∼ 138M (41K) | | 40,810 | ✗ |
| | random | 87.48 ± 0.72 | ∼ 138M + 50K | 120 (0.24%) | 40,810 + 94 | ✓ |
| | random | 90.86 ± 0.19 | ∼ 138M + 50K | 5,000 (10%) | 40,810 + 94 | ✓ |
| | xDNN | 91.42 ± 0.25 | ∼ 138M + 50K | 102 (0.20%) | 40,810 + 123 | ✓ |
| | k-means | 92.24 ± 0.10 | ∼ 138M + 50K | 120 (0.24%) | 40,810 + 369 | ✓ |
| | k-means | 92.55 ± 0.16 | ∼ 138M + 50K | 5,000 (10%) | 40,810 + 2,408 | ✓ |
| ViT | ViT | 98.51 (96.08∗) | ∼ 86M (8K) | | 15,282 (15,565∗) | ✗ |
| | random | 98.56 ± 0.02 | ∼ 86M + 50K | 5,000 (10%) | 15,282 + 598 | ✓ |
| | xDNN | 98.00 ± 0.14 | ∼ 86M + 50K | 117 (0.23%) | 15,282 + 607 | ✓ |
| | k-means | 98.53 ± 0.04 | ∼ 86M + 50K | 5,000 (10%) | 15,282 + 938 | ✓ |

Table 3: CIFAR-10 classification task comparison for the case of finetuned models (∗ denotes linear finetuning of the DL model)

![27_image_0.png](27_image_0.png)

Figure 17: Accuracy sensitivity to the number of per-class prototypes (k-means, ResNet101, no finetuning)

| FE | method | accuracy (%) | #prototypes | time, s |
|----|--------|--------------|-------------|---------|
| ResNet50 | random | 41.66 ± 0.74 | 1,200 (2.4%) | 82 |
| | random | 54.37 ± 0.43 | 10,000 (20%) | 82 |
| | ELM | 57.94 ± 0.11 | 7,524 (15.05%) | 129 |
| | xDNN | 58.25 ± 0.64 | 884 (1.77%) | 98 |
| | k-means | 62.67 ± 0.26 | 1,200 (2.4%) | 124 |
| | k-means | 64.07 ± 0.37 | 10,000 (20%) | 258 |
| ResNet101 | random | 50.25 ± 0.71 | 1,200 (2.4%) | 128 |
| | random | 61.90 ± 0.41 | 10,000 (20%) | 128 |
| | ELM | 64.42 ± 0.12 | 4,685 (9.37%) | 161 |
| | xDNN | 64.60 ± 0.39 | 878 (1.76%) | 143 |
| | k-means | 68.59 ± 0.40 | 1,200 (2.4%) | 170 |
| | k-means | 70.04 ± 0.12 | 10,000 (20%) | 310 |
| VGG-16 | random | 26.16 ± 0.24 | 1,200 (2.4%) | 94 |
| | random | 37.74 ± 0.48 | 10,000 (20%) | 94 |
| | ELM | 48.53 ± 0.05 | 2,878 (5.76%) | 122 |
| | xDNN | 47.78 ± 0.41 | 871 (1.74%) | 119 |
| | k-means | 51.99 ± 0.24 | 1,200 (2.4%) | 175 |
| | k-means | 52.55 ± 0.27 | 1,200 (2.4%) | 437 |
| ViT | random | 72.39 ± 0.21 | 10,000 (20%) | 604 |
| | ELM | 69.94 ± 0.06 | 8,828 (17.66%) | 642 |
| | xDNN | 76.24 ± 0.24 | 830 (1.66%) | 613 |
| | k-means | 79.12 ± 0.28 | 10,000 (20%) | 673 |
| ViT-L | k-means | 82.18 ± 0.14 | 10,000 (20%) | 3,905 |
| | k-means (nearest) | 78.75 ± 0.29 | 10,000 (20%) | 3,909 |

Table 4: CIFAR-100 classification task comparison for the case of no finetuning of the feature extractor

| FE | method | accuracy (%) | #parameters | #prototypes | time, s | interpretability |
|----|--------|--------------|-------------|-------------|---------|------------------|
| ResNet50 | ResNet50 | 79.70 (56.39∗) | ∼ 25M (205K) | | 36,360 (13,003∗) | ✗ |
| | random | 78.94 ± 0.17 | ∼ 25M + 50K | 1,200 (2.4%) | 36,360 + 28 | ✓ |
| | random | 79.52 ± 0.17 | ∼ 25M + 50K | 10,000 (20%) | 36,360 + 28 | ✓ |
| | xDNN | 79.75 ± 0.12 | ∼ 25M + 50K | 859 (1.72%) | 36,360 + 45 | ✓ |
| | k-means | 79.84 ± 0.07 | ∼ 25M + 50K | 1,200 (2.4%) | 36,360 + 82 | ✓ |
| | k-means | 79.77 ± 0.07 | ∼ 25M + 50K | 10,000 (20%) | 36,360 + 219 | ✓ |
| ResNet101 | ResNet101 | 84.38 (63.18∗) | ∼ 44M (205K) | | 45,619 (18,955∗) | ✗ |
| | random | 82.26 ± 0.15 | ∼ 44M + 50K | 1,200 (2.4%) | 45,619 + 175 | ✓ |
| | random | 80.75 ± 0.19 | ∼ 44M + 50K | 10,000 (20%) | 45,619 + 175 | ✓ |
| | xDNN | 81.13 ± 0.16 | ∼ 44M + 50K | 831 (1.66%) | 45,619 + 191 | ✓ |
| | k-means | 83.03 ± 0.06 | ∼ 44M + 50K | 1,200 (2.4%) | 45,619 + 220 | ✓ |
| | k-means | 83.14 ± 0.19 | ∼ 44M + 50K | 10,000 (20%) | 45,619 + 439 | ✓ |
| VGG-16 | VGG-16 | 75.08 (62.74∗) | ∼ 138M (410K) | | 41,038 (17,098∗) | ✗ |
| | random | 53.83 ± 0.91 | ∼ 138M + 50K | 1,200 (2.4%) | 41,038 + 92 | ✓ |
| | random | 64.17 ± 0.36 | ∼ 138M + 50K | 10,000 (20%) | 41,038 + 92 | ✓ |
| | xDNN | 72.63 ± 0.11 | ∼ 138M + 50K | 907 (1.81%) | 41,038 + 120 | ✓ |
| | k-means | 73.83 ± 0.16 | ∼ 138M + 50K | 1,200 (2.4%) | 41,038 + 199 | ✓ |
| | k-means | 73.73 ± 0.23 | ∼ 138M + 50K | 10,000 (20%) | 41,038 + 460 | ✓ |
| ViT | ViT | 90.29 (82.79∗) | ∼ 86M (77K) | | 15,536 (15,423∗) | ✗ |
| | random | 89.90 ± 0.10 | ∼ 86M + 50K | 10,000 (20%) | 15,536 + 621 | ✓ |
| | xDNN | 89.17 ± 0.18 | ∼ 86M + 50K | 809 (1.61%) | 15,536 + 630 | ✓ |
| | k-means | 90.48 ± 0.05 | ∼ 86M + 50K | 10,000 (20%) | 15,536 + 695 | ✓ |

Table 5: CIFAR-100 classification task comparison for the case of finetuned models (∗ denotes linear finetuning of the DL model)

| FE | method | accuracy (%) | #prototypes | time, s |
|----|--------|--------------|-------------|---------|
| ViT | random | 98.55 ± 0.09 | 500 (10%) | 61 |
| | ELM | 95.27 ± 0.03 | 271 (5.42%) | 63 |
| | xDNN | 98.63 ± 0.12 | 84 (1.68%) | 62 |
| | k-means | 99.32 ± 0.03 | 500 (10%) | 65 |
| ViT-L | k-means | 99.71 ± 0.02 | 500 (10%) | 377 |
| | k-means (nearest) | 99.56 ± 0.05 | 500 (10%) | 377 |

Table 6: STL10 classification task comparison for the case of no finetuning (linear finetuning of the ViT gives 98.97%)

| FE | method | accuracy (%) | #prototypes | time, s |
|----|--------|--------------|-------------|---------|
| ViT | random | 90.82 ± 0.53 | 365 (9.92%) | 48 |
| | ELM | 90.85 ± 0.03 | 122 (3.32%) | 49 |
| | xDNN | 96.30 ± 0.23 | 239 (6.49%) | 49 |
| | k-means | 94.07 ± 0.20 | 365 (9.92%) | 50 |
| ViT-L | k-means | 95.78 ± 0.19 | 365 (9.92%) | 279 |
| | k-means (nearest) | 94.76 ± 0.30 | 740 (9.92%) | 279 |

Table 7: OxfordIIITPets classification task comparison for the case of no finetuning (linear finetuning of ViT gives 94.41%)

| FE | method | accuracy (%) | #prototypes | time, s |
|----|--------|--------------|-------------|---------|
| ViT | random | 82.67 ± 0.54 | 2,154 (9.97%) | 266 |
| | ELM | 83.69 ± 0.01 | 528 (2.44%) | 277 |
| | xDNN | 85.24 ± 1.05 | 102 (0.47%) | 269 |
| | k-means | 91.30 ± 0.16 | 2,154 (9.97%) | 330 |
| ViT-L | k-means | 88.93 ± 0.22 | 2,154 (9.97%) | 1,685 |
| | k-means (nearest) | 83.97 ± 0.16 | 2,154 (9.97%) | 1,685 |

Table 8: EuroSAT classification task comparison for the case of no finetuning (linear finetuning gives 95.17%)

| FE | method | accuracy (%) | #prototypes | time, s |
|----|--------|--------------|-------------|---------|
| ViT | random | 89.42 ± 0.32 | 649 (9.35%) | 96 |
| | ELM | 91.12 ± 0.07 | 516 (7.43%) | 97 |
| | xDNN | 94.61 ± 0.94 | 579 (8.34%) | 97 |
| | k-means | 94.46 ± 0.44 | 649 (9.35%) | 99 |
| ViT-L | k-means | 96.08 ± 0.34 | 649 (9.35%) | 515 |
| | k-means (nearest) | 93.74 ± 0.42 | 649 (9.35%) | 517 |

Table 9: CalTech101 classification task comparison (linear finetuning gives 96.26%)

First, we show the symbolic decision rules in Figure 18. These symbolic rules are created using the ViT-L backbone, with the prototypes selected as the nearest real images to the k-means cluster centroids, in a no-finetuning scenario for the OxfordIIITPets dataset. Second, in Figure 19 we show how the overall pipeline of the proposed method can be summarised in an interpretable-through-prototypes fashion. We show the normalised distance, obtained by dividing by the sum of distances to all prototypes. This improves the perception and gives relative numbers, bounded between 0 and 1, for the prototype images.

IF Q ∼ (prototype image 1) OR Q ∼ (prototype image 2) OR ... THEN 'Abyssinian'
IF Q ∼ (prototype image 1) OR Q ∼ (prototype image 2) OR ... THEN 'American Bulldog'

Figure 18: An example of symbolic decision rules (OxfordIIITPets), Q denotes the query image

![31_image_0.png](31_image_0.png)

![31_image_1.png](31_image_1.png)

Figure 19: Interpreting the model predictions (k-means (nearest), 500 clusters per class, CIFAR-10, ViT)
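The prototype-based interpretation summarised in Figure 19 can be sketched as follows: a query is assigned the label of its nearest prototype in the latent space, and the reported scores are the distances normalised by the sum of distances to all prototypes, giving relative numbers bounded between 0 and 1. This is an illustrative sketch with hypothetical function names, not the paper's code:

```python
import numpy as np

def predict_with_explanation(query, prototypes, proto_labels):
    """Nearest-prototype prediction plus normalised distances: each
    distance is divided by the sum over all prototypes, so the scores
    are relative and bounded between 0 and 1."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    normalised = dists / dists.sum()
    winner = int(dists.argmin())
    return proto_labels[winner], winner, normalised
```

The index of the winning prototype can be mapped back to a real training image, which is what allows explanations of the form shown in Figure 19.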
Review 1: Summary: The paper proposes a framework called IDEAL (Interpretable-by-design DEep learning ALgorithms) which recasts the standard supervised classification problem into a function of similarity to a set of prototypes derived from the training data while taking advantage of the existing latent spaces of large neural networks forming so-called Foundation Models (FM). Strengths and Weaknesses: **Strengths**: - The paper proposes a novel and generic framework called IDEAL that can be applied to various deep learning architectures and datasets with/without fine-tuning and demonstrates its advantages over existing methods in terms of performance, interpretability, and transfer and lifelong learning. - The paper provides a comprehensive evaluation of the proposed framework on multiple datasets and tasks and shows that it can achieve good results without finetuning the feature space, handle class-incremental learning scenarios, and generate prototype-based explanations and symbolic rules. **Weaknesses**: - The paper does not provide a clear introduction for an unfamiliar audience. Also, the background section is not coherent: it does not define what the proposed method is and jumps to the "efficiency of the proposed method". - The paper provides several figures that give a clear explanation of the proposed method, but it would be great to reduce the redundancy and provide one figure with a visual abstract of the work. Requested Changes: - The authors need to add an introduction and a coherent story at the beginning of the paper. - Algorithm 1 needs to include more information on how the method works. - The authors need to compare their method with previous work and state the novelty of this work explicitly. Broader Impact Concerns: The proposed research has the potential to advance the field of interpretable deep learning and provide more transparent and trustworthy models for various applications; the authors could consider this as well.
Also, they could add possible negative impacts of their framework, such as the misuse or abuse of the prototype-based explanations, the privacy and security risks of exposing the prototypes to the users, or the ethical and social implications of using large foundation models that may encode biases or prejudices. I do not see any major concern regarding the broader impact of this paper, but I encourage the authors to address these issues in their future work. ================================================== Review 2: Summary: The paper proposes a new interpretable neural network method named IDEAL. Like xDNN and ProtoPNet, IDEAL uses prototypes to make the network interpretable and doesn't require fine-tuning on downstream tasks, since it can use a k-NN classifier. Experiments show the effectiveness of IDEAL. Strengths and Weaknesses: ### Strengths - The paper reports various experiment results ### Weaknesses - It is hard to understand the differences and advantages of IDEAL over previous approaches - There are a lot of papers on prototype-based interpretable models, but no formula or summary explains the novel points of IDEAL compared to the others. - The experiments look like a self-comparison with the prototype selection method from xDNN. Does IDEAL outperform other methods in performance or interpretability measures? - The **without finetuning** setting looks similar to a k-NN classifier, which is not novel in DNN classification. Is there any special reason that other methods can't be used as a k-NN classifier? - The method section needs to be improved with details - I recommend the authors move Eqs. (7) and (8) to the method section. It is hard to understand d and h without these formulas in the method section. - I need more information related to Algorithms 1 and 2. What are the details of the functions `FindPrototype`, `SelectParameters`, `UpdateParameters`, and `UpdatePrototype`? Requested Changes: - It is hard to understand what IDEAL is. I request that the proposed method IDEAL be defined precisely.
- After defining IDEAL, compare it with previous approaches such as xDNN, ProtoPNet, and DNC using formulas and categorizations. - Compare the performance of IDEAL with previous methods in the experiments section. - Add explanations and experiments for why the k-NN classifier is novel in IDEAL. Broader Impact Concerns: Since it is hard to understand the contributions and novel points of the proposed method, the paper is not ready for consideration of broader impact concerns. ================================================== Review 3: Summary: The study introduces IDEAL (Interpretable-by-design DEep learning ALgorithms), a framework enhancing deep learning (DL) explainability and addressing limitations of existing models like ProtoPNet and xDNN. IDEAL transforms supervised classification into a similarity function with prototypes derived from training data, utilizing the latent spaces of large neural networks. Key findings include: IDEAL's interpretability through prototypes, mitigation of confounding bias, avoidance of catastrophic forgetting in class-incremental learning, and efficient transfer learning without finetuning the feature space. The study demonstrates these advantages over traditional models across multiple datasets. Strengths and Weaknesses: Strengths 1. Comprehensive Experiments: The study conducts extensive experiments across a variety of datasets (CIFAR-10, CIFAR-100, CalTech101, STL-10, Oxford-IIIT Pet, EuroSAT), demonstrating the effectiveness of the IDEAL framework in different contexts. 2. Improved Interpretability: The use of prototypes in the IDEAL framework enhances the interpretability of deep learning models, a critical aspect often overlooked in traditional deep learning approaches. Weaknesses 1. Unclear Mitigation of Confounding Bias: The paper does not sufficiently clarify how the proposed IDEAL method mitigates the issue of confounding bias, leaving a gap in understanding its effectiveness in this regard. 2.
Lack of Distinction from Existing Models: There is a lack of clarity on how IDEAL significantly differs from existing prototype-based models like ProtoPNet and xDNN, especially regarding its claim of going beyond end-to-end learning and utilizing large classifiers' feature spaces. 3. Overstated Title: The title of the paper suggests a broader scope than what is presented. The focus on lifelong/continuing learning in the title does not accurately reflect the core content and contributions of the proposed framework. 4. Interpretability of Models vs. Predictions: The paper emphasizes the interpretability of predictions but does not adequately address the interpretability of the models themselves, which is a more crucial aspect in the context of explainable AI. Requested Changes: 1. The paper could benefit from clearer organization and more precise language to enhance readability and understanding, particularly in explaining complex concepts and comparisons with existing models. 2. Given the strengths and weaknesses identified, the paper shows promise but requires significant revisions to address the weaknesses, particularly in clarifying its unique contributions and improving the overall clarity and coherence of the manuscript. Broader Impact Concerns: N/A ================================================== Review 4: Summary: In this paper, the authors note that current deep learning methods that rely on parametric tuning lack explainability. Based on this observation, they propose a framework called IDEAL to solve the aforementioned problem. Briefly speaking, the IDEAL algorithm recasts the conventional supervised classification problem into a function of similarity to a set of prototypes derived from training data while taking advantage of the existing latent spaces of foundation models.
It is claimed that the proposed IDEAL method is interpretable via prototypes, mitigates the issue of confounding bias, and is able to circumvent the issue of catastrophic forgetting. Further, the proposed IDEAL method also narrows the gap between finetuned and non-finetuned models, allowing for transfer learning in a fraction of the time without finetuning of the feature space on a target dataset. Strengths and Weaknesses: __Cons:__ - From my perspective, the paper mainly focuses on finetuning a given pre-trained model, since the main concerns mentioned in this paper are closely related to transfer learning (e.g. the 3 limitations mentioned in Section 1). Thus, it would be better to constrain the topic to transfer learning instead of the entirety of deep learning. - The formulations of this paper are not clear or comprehensive. - The background part of this paper is not clear. The motivation is lacking here. It is hard to understand why Eq. (1) has the 3 limitations and why this problem is worth studying. - The paper claims that interpretable explanations are provided. However, I do not see any of them before the experiment section. - The novelty of this paper is poor. In fact, the prototypical learning paradigm has been widely used in many previous works, such as the prototypical networks proposed for the few-shot learning problem. Compared with previous works like prototypical nets that directly leverage label information to generate prototypes, this paper selects samples from the training data. I do not think such a strategy is better. __Questions:__ 1. It seems that Eq. (1) does not clearly express the learning process of a model. To be specific, the notation of $f(x|\theta)$ is confusing. It somewhat resembles the expression of a conditional probability. However, we usually express the encoding process of a model in the form of $f_{\theta}(x)$. 2. Since $d$ is a similarity function, what is $h$? (see Eq. (3)) 3. We do not see much content regarding interpretability in this paper.
Thus, how do you demonstrate the claims in the background section?

__Minor concerns:__
1. In Fig. 1(a), there is a $f_2(\cdot|\theta_1)$. I think it should be $f_2(\cdot|\theta_2)$.

Requested Changes:
- The topic of the paper should be narrowed to transfer learning/fine-tuning pretrained models instead of deep learning, which is a broad domain.
- The formulations, such as Eq. (1), require more polishing.
- More explanations and theoretical analyses are required to reveal that the IDEAL method is interpretable.
- More work is required to improve the contribution of this paper.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Reject

Comment: The paper is motivated by the lack of interpretation in many learning algorithms, in particular in deep transfer learning. The paper then moves to make use of pre-trained foundation models and proposes a framework called IDEAL (Interpretable-by-design DEep learning ALgorithms). IDEAL leverages powerful foundation models for feature extraction and is thus expected to be better than previous prototype learning methods. The paper conducts a series of experiments analyzing how the choices of latent space, prototype selection, and finetuning of the latent space affect accuracy and generalisation of the models in transfer learning scenarios for different backbones. The paper also mentions a few other benefits possibly brought by the proposed IDEAL. Three reviewers give thorough reviews of the paper. They all have common concerns about the selling points of the paper: the motivation and organization of the paper are unclear, such that it is difficult to evaluate what the key messages from the paper are. It is also difficult to identify advantages of the proposed IDEAL compared with previous prototype learning methods. They also raise issues about the technical writing, e.g., the confusing notations of $f(x|\theta)$, Eq. (3), etc.
The authors are expected to narrow down the paper scope, propose clearer technical contributions, and compare with the closely related methods. ==================================================
# MaskOCR: Scene Text Recognition with Masked Vision-Language Pre-Training

Pengyuan Lyu, Chengquan Zhang, ShanShan Liu, Meina Qiao, Yangliu Xu, Liang Wu, Kun Yao, Junyu Han, Errui Ding, Jingdong Wang

VIS, Baidu Inc.

lvpyuan@gmail.com, {zhangchengquan, liushanshan07, qiaomeina, xuyangliu, wuliang11, yaokun01, hanjunyu, dingerrui}@baidu.com, wangjingdong@outlook.com

Reviewed on OpenReview: *https://openreview.net/forum?id=KNAWoKKpi3*

## Abstract

Text images contain both visual and linguistic information. However, existing pre-training techniques for text recognition mainly focus on either visual representation learning or linguistic knowledge learning. In this paper, we propose a novel approach, MaskOCR, to unify vision and language pre-training in the classical encoder-decoder recognition framework. We adopt the masked image modeling approach to pre-train the feature encoder using a large set of unlabeled real text images, which allows us to learn strong visual representations. In contrast to introducing linguistic knowledge with an additional language model, we directly pre-train the sequence decoder. Specifically, we transform text data into synthesized text images to unify the data modalities of vision and language, and enhance the language modeling capability of the sequence decoder using a proposed masked image-language modeling scheme. Notably, the encoder is frozen during the pre-training phase of the sequence decoder. Experimental results demonstrate that our proposed method achieves superior performance on benchmark datasets, including Chinese and English text images.

## 1 Introduction

Text recognition, which involves transcribing visual information into text, is a crucial technology for bridging vision and language. It has a wide range of applications, including visual search, document digitization, and more. Significant progress has been made in the field of text recognition.
However, it remains a challenging task due to the difficulty of recognizing fine-grained categories of text, which can vary in fonts, colors, and other factors, coupled with the relative scarcity of labeled real-world data. As an alternative approach, synthetic data has been used in previous studies (e.g., Jaderberg et al. (2014a); Jaderberg et al. (2016); Shi et al. (2017a; 2019); Yu et al. (2020); Fang et al. (2021)), and meaningful results have been obtained. Nonetheless, recognition performance is still restricted by the domain gap between synthetic and real-world data. Efforts have been made to reduce the need for real labeled data through pre-training, which can be broadly divided into two categories: strengthening visual representation learning with unlabeled real images, and introducing language priors with a language model. In previous studies (e.g., Liu et al. (2022); Yang et al. (2022)), contrastive learning and masked image modeling technologies were employed for pre-training using a large amount of unlabeled image data to obtain better visual representations. In other studies (e.g., Qiao et al. (2020); Fang et al. (2021)), a pre-trained language model was used to guide recognition model learning or correct recognition model predictions. These methods have achieved promising results, thanks to their incorporation of vision or language priors. However, there are two limitations to these approaches. First, they tend to focus solely on either visual representation learning or linguistic knowledge learning, while text images inherently contain both visual and linguistic information. Neglecting either type of prior knowledge may result in a loss of accuracy. Second, previous studies (e.g., Qiao et al. (2020); Fang et al. (2021)) introduced language priors into the recognition model with a detached language model that blocked gradient flow between the recognition and language models, potentially leading to suboptimal performance.

Figure 1: Comparison of previous pre-training methods. A classical recognition model consists of a feature encoder and a sequence decoder, responsible for extracting visual representations and decoding text sequences from those representations, respectively. Previous pre-training methods have primarily focused on pre-training the feature encoder for better visual representation learning (a) or equipping a separate language model to introduce language priors (b). In this paper, we take a different approach by simultaneously and naturally integrating vision and language pre-training into the encoder-decoder recognition model (c).

In this paper, we explore the utilization of both visual and language priors through pre-training to enhance text recognition performance. Our approach unifies vision and language pre-training within the classical encoder-decoder recognition framework. Specifically, we pre-train the feature encoder on a large set of unlabeled real text images using masked image modeling, which enables the extraction of better visual representations through self-supervised mechanisms. Additionally, we directly pre-train the sequence decoder to improve language modeling capabilities. To achieve this, we first transform the text corpus into synthesized text images to unify the data modalities and then use a proposed masked image-language modeling technology to learn the linguistic representation. During language pre-training, we fix the pre-trained feature encoder and only update the sequence decoder. This strategy benefits from the language pre-training task, which explores language rules while the pre-trained encoder is not affected by the synthesized text images.
We validate the effectiveness of our proposed approach on Chinese and English text images through extensive experiments and detailed discussion. Our proposed method achieves state-of-the-art performance and significantly surpasses previous methods, particularly on Chinese benchmarks. The main contributions of this paper can be summarized as follows:

- We propose a masked vision-language pre-training method that unifies vision and language representation learning within the classical encoder-decoder recognition framework.
- Our vision-language pre-training approach contributes to a significant improvement in text recognition. Specifically, with the proposed pre-training technology, we observe a 5% performance gain on the challenging Chinese benchmark BCTR. Furthermore, compared to previous state-of-the-art methods, we achieve superior results on both Chinese and English datasets.

## 2 Related Work

Text recognition. Text recognition can be achieved through various approaches, including character-based Wang et al. (2011); Jaderberg et al. (2014b); Lyu et al. (2018); Liao et al. (2019), word-based Jaderberg et al. (2016), and sequence-based Shi et al. (2017a; 2019); Yu et al. (2020); Fang et al. (2021); Xue et al. (2021) methods. Character-based methods recognize text images by performing character localization, classification, and grouping. Word-based methods treat each word as a category and recognize text through image classification. Sequence-based methods treat text recognition as a sequence labeling problem. This approach uses methods such as CTC Graves et al. (2006) and attention mechanisms Shi et al. (2019); Lyu et al. (2019); Yu et al. (2020); Fang et al. (2021) to align the input image patch sequence with the output character sequence. Recently, the sequence-based solution has been the most widely studied framework due to its flexibility and ease of ground-truth labeling.
The sequence-based architecture consists of two main modules: a feature encoder and a sequence decoder. The feature encoder learns visual representations for text images, while the sequence decoder maps the representations into character sequences, with or without the aid of a language module.

Pre-training. Representation pre-training has been shown to be beneficial for improving downstream tasks, such as supervised or self-supervised pre-training on ImageNet for computer vision, and self-supervised pre-training for natural language processing. Self-supervised pre-training, which does not require labeled data, has gained significant attention in recent years. Several approaches have been proposed to achieve self-supervised pre-training, such as siamese networks trained using contrastive learning in He et al. (2020); Chen et al. (2020; 2021c), and masked autoencoders for both vision and NLP models in He et al. (2022); Bao et al. (2022); Devlin et al. (2019), where the models improve their representations by predicting masked content. In the method proposed by Rust et al. (2023), pre-training is conducted on text images using Masked Image Modeling to enhance the feature extraction capabilities of the visual encoder, which is similar to the first stage of our method. However, a key divergence from Rust et al. (2023) lies in our method's incorporation of Masked Image-Language Modeling for additional pre-training of the decoder. This addition aims to enhance the semantic reasoning ability of the decoder when interpreting character sequences. In recent years, there has been significant interest in the field of vision-language pre-training, whereby models are trained to jointly understand both visual and textual information. Various approaches have been proposed for aligning visual and language representations, including image-text contrastive learning Radford et al. (2021); Jia et al. (2021), image-text matching Li et al. (2021a); Wang et al.
(2021b), masked language modeling Zhou et al. (2020); Wang et al. (2021a), and language modeling Li et al. (2022); Yu et al. (2022). These pre-trained models can then be fine-tuned for specific downstream tasks, such as visual question answering, image captioning, image retrieval, and text-to-image generation.

Pre-training for text recognition. Several methods have been studied to employ pre-training for text recognition. In Jaderberg et al. (2014a); Jaderberg et al. (2016); Shi et al. (2017a; 2019); Yu et al. (2020); Fang et al. (2021); Li et al. (2021b), the recognition model is pre-trained on synthesized data using supervised learning. However, despite this approach, the recognition performance is still limited by the domain gap between synthesized and real data. To improve the recognition accuracy, some methods use unlabeled real images or a text corpus. For instance, Liu et al. (2022) introduces self-supervised contrastive pre-training to learn representations from the input real images, while Yang et al. (2022) integrates contrastive learning and masked image modeling to pre-train the recognition model. Notably, the aforementioned methods are mainly focused on visual representation learning. In Fang et al. (2021) and Qiao et al. (2020), a separate pre-trained language model is used to enhance the language modeling capability. Compared to previous techniques that concentrate solely on either visual or language representation learning, our proposed approach combines vision and language pre-training into a single, integrated model. This approach naturally integrates with the encoder-decoder recognition model, leading to improved representation quality for both vision and language.

Vision-language pre-training for document understanding. Some document understanding methods utilize vision-language pre-training to model both visual and linguistic information. For instance, the MVLM module in LayoutLM Xu et al.
(2020) is proposed to align visual and textual representations, while VLPT Song et al. (2022) uses image-text contrastive learning to enhance visual representations. Our proposed vision-language pre-training approach differs from LayoutLM and VLPT in several ways. Specifically, LayoutLM takes multi-modal information (visual tokens and textual tokens) as input to the network and learns to process both image and text information. In contrast, our approach only takes the image as input and utilizes language to assist in learning the decoder. Moreover, our approach differs from VLPT in that VLPT mainly uses masked language modeling and cross-modal contrastive schemes to learn the image encoder, while our approach employs masked image modeling to learn the text-image encoder and language pre-training with masking for the decoder.

Figure 2: Encoder-decoder transformer for text recognition. The encoder extracts a sequence of patch representations, and the decoder maps the patch representations to a sequence of representations, followed by a linear layer to recognize the sequence of characters.

## 3 Approach

In this section, we first present our encoder-decoder recognition model, which is built on transformers. After that, the visual pre-training of the encoder and the language pre-training of the decoder are described respectively.

## 3.1 Encoder-Decoder Transformer For Text Recognition

We adopt the encoder-decoder transformer architecture for text recognition. The encoder extracts a sequence of patch representations, and the decoder maps the patch representations to text representations. Figure 2 illustrates the encoder-decoder transformer architecture.

Encoder. The encoder receives an image, I ∈ R^(3×H×W), as the input. We partition the image vertically into a set of M vertical patches, [p1, p2, . . . , pM]. The size of each patch is H × W/M. We process the flattened patches using linear projection to get patch embeddings.
We add 1D positional embeddings, which are sufficient since we partition the images vertically. We use the ViT Dosovitskiy et al. (2021), consisting of a sequence of multi-head self-attention and FFN units, as the encoder and learn the patch-level representations, F = [f1, f2, . . . , fM], for the text image.

Decoder. We form the text recognition decoder by following the decoder style of DETR Carion et al. (2020), which was originally designed for object detection. The decoder transforms the N input embeddings, C = [c1, c2, . . . , cN], called character queries, into output embeddings in parallel, which are then independently mapped into characters through a linear classifier.

Loss. The labels of the text recognition task are ordered strings. So unlike DETR, which uses a matching mechanism to assign ground truth to each output, we directly allocate the label of each character prediction in order. We denote the character predictions by Y = [y1 y2 . . . yN], assuming N is larger than the number of characters in the text image. We consider the ground truth as Y* = [y*1 y*2 . . . y*N], padded with an end-of-sentence symbol [EOS]. The loss function is formulated as follows,

$$\ell(\mathbf{Y},\mathbf{Y}^{*})={\frac{1}{L+1}}\sum_{l=1}^{L+1}\operatorname{CE}(\mathbf{y}_{l},\mathbf{y}_{l}^{*}),\qquad(1)$$

where CE(·, ·) is the cross-entropy loss and L is the number of characters in the text image. To balance the number of [EOS] and other characters, we only employ the loss function on the characters as well as the first [EOS].

Figure 3: Visual pre-training on encoder. We adopt a masked image modeling approach to pre-train the encoder for text image representation learning. In this example, six image patches (top) are visible patches, and the other four (bottom) to be predicted are masked patches.
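As an illustration, the per-sequence loss in Eq. (1) can be sketched in a few lines of plain Python. This is our own minimal sketch, not the authors' implementation; the function names and argument layout are hypothetical:

```python
import math

def cross_entropy(logits, target):
    """CE of one character prediction: -log softmax(logits)[target]."""
    m = max(logits)                                   # stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_z - logits[target]

def recognition_loss(logits, target_ids, eos_id):
    """Eq. (1): average CE over the L ground-truth characters plus the
    first [EOS]; predictions beyond the first [EOS] are ignored.
    logits: N rows of class scores; target_ids: length N, padded with eos_id."""
    L = target_ids.index(eos_id)                      # characters before the first [EOS]
    total = sum(cross_entropy(logits[l], target_ids[l]) for l in range(L + 1))
    return total / (L + 1)
```

Note that L is recovered from the padded ground truth itself, and the l = L + 1 term of Eq. (1) corresponds to the first [EOS] position.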
## 3.2 Masked Image Modeling On Encoder For Visual Pre-Training

We follow the masked image modeling framework, which has recently been studied for general image representation pre-training, and adopt the context autoencoder-style method Chen et al. (2022) to pre-train the encoder for text image representation learning. The context autoencoder separates the decoding role from the encoder and drives the encoder to focus on representation learning, which is a better choice for our visual representation learning. The encoder pre-training process is as follows. The input text image is randomly masked with a given masking ratio, and the remaining parts, which are termed visible patches, are sent to the encoder, generating the representations of the visible patches. Then, the representations of the visible patches are fed into a latent contextual regressor with mask queries, predicting the representations Zm for the masked patches, which are expected to be close to the representations Z*m of the masked patches directly computed from the encoder. Last, the representations of the predicted masked patches go into the image decoder, predicting the targets Tm. Figure 3 illustrates the encoder pre-training architecture. We use the patch RGB values, processed with layer normalization (Gaussian normalization), to form the targets. The loss function for encoder pre-training is a combination of a representation alignment loss and a prediction loss and is given as follows,

$$\ell_{t}(\mathbf{T}_{m},\mathbf{T}_{m}^{*})+\lambda\,\ell_{z}(\mathbf{Z}_{m},\mathbf{Z}_{m}^{*}),\qquad(2)$$

where both ℓt(·, ·) and ℓz(·, ·) are the MSE loss, and λ is the trade-off parameter, set to 0.05 in our implementation.

## 3.3 Masked Image-Language Modeling On Decoder For Language Pre-Training

Unifying vision and language pre-training in a single model is not easy, since the granularities of image and text are different and the semantics of vision and language are not aligned.
The core designs of our method to overcome this problem are as follows: 1) We transform text into images via text-to-image generation, which eliminates the modality difference between image and text. 2) We propose masked image-language modeling, which randomly masks some characters of the input image and predicts the masked characters from the unmasked part. 3) We also design a sequential pre-training mechanism that freezes the pre-trained encoder to handle the domain gap between real and synthesized images, so that the language representation is enhanced while the pre-trained encoder, which has learned better visual representations, is not affected.

Figure 4: Language pre-training on decoder. The whole pipeline is similar to the one in Figure 2. The difference is that the input to the encoder is the set of visible patches. The visible patches are formed by masking the image patches that correspond to the target characters (the patch number may be greater than the character number). The input representations to the decoder are a combination of encoded representations and zero representations added to the positions (gray boxes) where the masked patches are. The prediction targets are the characters that are masked. The encoder is fixed when performing decoder pre-training.

In detail, we transform the text data into images via the public synthesis tool Text Render¹. Given a text corpus, some fonts, and some background images, the synthesized text images as well as the location annotations of characters can be generated. To further enhance the language modeling capability of the decoder, we adopt the idea of masked language modeling and introduce a masked image-language modeling scheme. As shown in Figure 4, we randomly mask some characters and accordingly the patches, and send the remaining visible patches to the encoder, obtaining the representations of the visible patches, Fv.
Then, we insert the zero representations Fm at the positions corresponding to the masked patches, and feed the combined representations F = [Fv Fm], with the corresponding positional embeddings added, into the decoder, predicting the text Y¯ = [y¯1 y¯2 . . . y¯N]. The loss function is similar to BERT Devlin et al. (2019) and is computed only over the masked characters. We formulate it as follows,

$$\ell(\bar{\mathbf{Y}},\mathbf{Y}^{*})={\frac{1}{L}}\sum_{l=1}^{L}\mathrm{CE}(\bar{\mathbf{y}}_{n_{l}},\mathbf{y}_{n_{l}}^{*}),\qquad(3)$$

where L is the number of masked characters, and {n1, n2, . . . , nL} are the positions of the masked characters. Considering that the style of the synthesized text images might differ from that of real text images, we keep the encoder (learned from encoder pre-training) frozen and only optimize the decoder, so that the pre-training stage of the decoder does not affect the representation quality of the pre-trained encoder.

## 4 Experiment

## 4.1 Datasets

Chinese text line images. The pre-training set consists of 100 million unlabeled text line images collected from practical scenarios for visual pre-training, and 100 million synthetic text line images for language pre-training. The real images are collected from documents and street views, and the text in them is almost entirely Chinese. We collect a text corpus from Chinese corpus², and generate 100 million images with 64 commonly used fonts using Text Render. Specifically, for each synthetic sample, the text transcription as well as the character bounding boxes are given.

¹ https://github.com/oh-my-ocr/text_renderer
² https://github.com/crownpku/awesome-chinese-nlp

We first pre-train the encoder and decoder serially on the collected real images and the synthetic images, and then finetune our model on the large-scale Chinese text image benchmark BCTR Chen et al. (2021b). BCTR consists of four subsets (scene, web, document, and handwriting) and provides 1.4 million fully labeled images in total.
The scene subset (Sce) is derived from several scene text datasets, including RCTW Shi et al. (2017b), ReCTS Zhang et al. (2019), LSVT Sun et al. (2019), ArT Chng et al. (2019), and CTW Yuan et al. (2019), resulting in 636,455 images. The web subset (Web) is constructed based on the MTWI He et al. (2018) dataset and contains 140,589 text images. The document subset (Doc) is composed of 500,000 synthetic text images generated by Text Render in document style. The handwriting subset (Hand) is collected from the handwriting dataset SCUT-HCCDoc Zhang et al. (2020), and 116,643 text images are included.

English text word images. We follow Yang et al. (2022) and collect about 15.8 million unlabeled English word images from CC-OCR Yang et al. (2021) for visual pre-training. In addition, we also synthesize 100 million English word images for language pre-training. Similarly, we collect a corpus from WikiText103 Merity et al. (2017) and generate synthetic images with Text Render and 10 commonly used English fonts. Following Shi et al. (2019); Yu et al. (2020); Fang et al. (2021); Wang et al. (2021c); Zhang et al. (2022), two synthetic datasets, MJSynth Jaderberg et al. (2014a) and SynthText Gupta et al. (2016), are used for the training of downstream recognition tasks. Besides, we also collect 2.78 million real labeled images from TextOCR Singh et al. (2021) and Open Images Dataset v5³, as in Yang et al. (2022), to verify the effectiveness of pre-training when finetuned on real images. We evaluate our model on six public scene text datasets: ICDAR 2013 (IC13) Karatzas et al. (2013), Street View Text (SVT) Wang et al. (2011), IIIT5K-Words (IIIT5K) Mishra et al. (2012), ICDAR 2015 (IC15) Karatzas et al. (2015), Street View Text-Perspective (SVTP) Phan et al. (2013), and CUTE80 (CUTE) Risnumawan et al. (2014). The samples in the first three datasets are all regular text images, while the remaining datasets may contain perspective or curved text images.
## 4.2 Implementation Details

Encoder-decoder transformer. The image patches are fed into a linear projection layer and then sent to the ViT. Two ViT structures are studied: ViT-S (12 transformer blocks with dimension 384), and ViT-B (12 transformer blocks with dimension 768). The decoder consists of four decoder layers, each of which includes a self-attention unit, a cross-attention unit, and an FFN unit. Each attention module is a 12-head attention with dimension 384. We train the encoder-decoder transformer with the AdamW optimizer Loshchilov & Hutter (2019), cosine learning rate decay Loshchilov & Hutter (2017), a weight decay of 0.05, a drop path ratio of 0.1, and a batch size of 512. When the model is trained from scratch, the learning rate is set to 1e−3. Otherwise, the model is optimized with an initial learning rate of 1e−4. We set the training epochs to 120 and 20 for the Chinese text line recognition model and the English word recognition model, with a warm-up of 5 epochs and 0.5 epochs respectively.

Visual pre-training on encoder. The latent contextual regressor consists of four regressor layers. Each layer includes a cross-attention unit and an FFN unit. The image decoder consists of four layers, and each layer includes a self-attention unit and an FFN unit. Each attention module is also a 12-head attention with dimension 384. Following He et al. (2022), we use the normalized pixel values of each masked patch as the prediction target. We optimize the model with the AdamW optimizer and set the learning rate with the linear learning rate scaling rule Goyal et al. (2017): lr = base_lr × batchsize / 256. By default, base_lr is set to 1.5e−4 with cosine learning rate decay and a 0.5-epoch warm-up. We train the encoder for 10 epochs for Chinese data pre-training and 30 epochs for English data pre-training, with a batch size of 4096.

Language pre-training on decoder.
We mask some characters with a ratio of 0.15 and accordingly mask the patches that contain the characters. This may result in a different number of masked patches for different text images, as one character may correspond to a different number of patches. We adopt masked attention to replace the original attention in the encoder with the parameters unchanged. We pre-train the decoder for 5 epochs with a batch size of 512, an initial learning rate of 1e−4, a 0.5-epoch warm-up, and a cosine learning rate decay.

³ https://storage.openvinotoolkit.org/repositories/openvino_training_extensions/datasets/open_images_v5_text

Table 1: Ablation about vision-language pre-training. "Scratch" means the model is trained from scratch. "V" and "L" mean visual pre-training and language pre-training respectively.

|         | Sce  | Web  | Doc  | Hand | Avg  |
|---------|------|------|------|------|------|
| Scratch | 68.8 | 70.7 | 98.6 | 49.4 | 75.8 |
| V       | 72.3 | 73.7 | 99.2 | 62.5 | 79.8 |
| L       | 71.0 | 72.4 | 98.8 | 54.5 | 77.7 |
| V + L   | **73.9** | **74.8** | **99.3** | **63.7** | **80.8** |

Data preprocessing. Since the Chinese text line images vary greatly in width, we resize the height of the input image to 32 with the aspect ratio kept and pad the width of the input images to 400. For the English word samples, we directly resize all input images to 32 × 128. We set the width of the split vertical patch to 4 for all datasets by default. During the training of downstream recognition, some data augmentations like rotation, distortion, and color jitter are also used.

Evaluation. We evaluate BCTR by first processing the predictions and ground truth with the following rules, as in Chen et al. (2021b): (i) convert the full-width characters to half-width characters; (ii) convert all traditional Chinese characters to simplified characters; (iii) convert all English characters to lowercase; (iv) remove all spaces. After that, we compute the accuracy at the sentence level over each subset and the whole dataset (Avg). To evaluate the six scene English text datasets, we follow Shi et al. (2019); Yu et al. (2020); Fang et al.
(2021); Wang et al. (2021c); Zhang et al. (2022) and evaluate the recognition performance of our model with case-insensitive word accuracy. We also report the average accuracy (Avg) over all samples.

## 4.3 Ablation Studies

In this section, we conduct ablation studies on the BCTR dataset to verify the effectiveness of our proposed pre-training method. All experiments are conducted on 8 A100 GPUs with ViT-B as the encoder.

The effectiveness of vision-language pre-training. To verify the effectiveness of vision-language pre-training, four models are trained: (i) The model is randomly initialized. (ii) The encoder is initialized with visual pre-training, and the remaining parts are randomly initialized. (iii) The model is initialized with language pre-training, which is conducted on synthetic data. (iv) The model is initialized with vision-language pre-training. The results are presented in Table 1 and demonstrate the effectiveness of vision-language pre-training. Specifically, the superiority of the "V" model over the "Scratch" model indicates that visual pre-training of the encoder can lead to an improvement of up to 4%. In addition, language pre-training also yields better performance, achieving a 1.9% improvement over the "Scratch" model. Moreover, visual pre-training and language pre-training are complementary, as evidenced by the 1% improvement achieved by the "V + L" model over the "V" model. These results provide strong evidence that language pre-training is a valuable approach.

Evaluation of representation quality. To evaluate the quality of representations learned by our pre-trained model, we conducted linear probing experiments, and the results are shown in Table 2. We fixed the pre-trained encoder and decoder, which were pre-trained using vision-language pre-training, and only updated the parameters of the remaining linear layer. Surprisingly, the model achieved an average accuracy of 47.8%, indicating that with vision-language pre-training, the encoder and decoder are already capable of learning meaningful representations. We also examined the quality of representations learned through visual pre-training, which fixes the pre-trained encoder and fine-tunes the decoder and linear layer of the recognition model. The model achieved an average accuracy of 73.6%, demonstrating that visual pre-training also enhances the representation quality of the encoder.

Table 2: Evaluation of representation quality. We consider two cases: (i) To evaluate the representation quality of the encoder with vision pre-training, we fix the pre-trained encoder and finetune the remaining part of the recognition model; we term this setting "V linear probing". (ii) We conduct "V + L linear probing", which fixes the pre-trained encoder and decoder to evaluate the representation quality of vision-language pre-training. Only the remaining linear layer is updated in this setting.

|                      | Sce  | Web  | Doc  | Hand | Avg  |
|----------------------|------|------|------|------|------|
| V linear probing     | 62.8 | 67.5 | 98.1 | 54.3 | 73.6 |
| V + L linear probing | 39.1 | 54.7 | 66.3 | 28.1 | 47.8 |

Table 3: Studying whether the pre-trained encoder is simultaneously retrained during decoder pre-training. (i) The model is trained from scratch. (ii) The pre-trained encoder is retrained during the decoder pre-training. (iii) The pre-trained encoder is fixed when conducting decoder pre-training.

|                 | Sce  | Web  | Doc  | Hand | Avg  |
|-----------------|------|------|------|------|------|
| Scratch         | 68.8 | 70.7 | 98.6 | 49.4 | 75.8 |
| Retrain encoder | 69.0 | 71.4 | 99.0 | 53.3 | 76.7 |
| Fix encoder     | **73.9** | **74.8** | **99.3** | **63.7** | **80.8** |

The effectiveness of our serial pre-training mechanism. To address the large domain gap between real and synthesized images, we fixed the pre-trained encoder when conducting language pre-training on the decoder. A comparison of our serial pre-training mechanism with the encoder retrained from pre-trained weights is shown in Table 3, which demonstrates the effectiveness of our approach.
Specifically, when retraining the encoder from pre-trained weights during decoder pre-training, an average accuracy of 76.7% was achieved, outperforming the scratch model (75.8%). However, this approach yielded a 4.1% drop in accuracy compared to our serially pre-trained model (80.8%), suggesting that the pre-trained encoder is impaired by the synthesized text images if it is retrained during decoder pre-training. This emphasizes the large domain gap between synthetic and real data.

Masking ratio of visual pre-training. We explored the effect of different masking ratios (0.30, 0.45, 0.60, and 0.75) for visual pre-training. The results are presented in Table 4; the optimal masking ratio for the downstream recognition task is 0.45. It is worth noting that our optimal masking ratio is lower than that reported in MAE He et al. (2022) (0.75). This discrepancy could be attributed to the higher information density of the text images used in our experiments.

Table 4: Ablation on the masking ratio of visual pre-training. In our experiments, the optimal masking ratio for the downstream recognition task is 0.45.

| Masking ratio | Sce  | Web  | Doc  | Hand | Avg  |
|---------------|------|------|------|------|------|
| 0.30          | 71.5 | 73.1 | 99.1 | 61.8 | 79.3 |
| 0.45          | **72.3** | **73.7** | **99.2** | **62.5** | **79.8** |
| 0.60          | 72.0 | 73.6 | 99.1 | 60.7 | 79.4 |
| 0.75          | 72.2 | 73.4 | 99.1 | 59.1 | 79.2 |

Table 5: Ablation on the masking strategy in language pre-training. Masking helps performance, particularly for challenging scenarios such as scene and handwritten text recognition, where accurate recognition is more difficult to achieve. "M" means trained with the masking strategy and "V" means trained with visual pre-training.
| M   | V   | Sce   | Web   | Doc   | Hand   | Avg   |
|-----|-----|-------|-------|-------|--------|-------|
| ×   | ×   | 69.3  | 71.4  | 98.6  | 49.3   | 76.1  |
| √   | ×   | 71.0  | 72.4  | 98.8  | 54.5   | 77.7  |
| ×   | √   | 73.1  | 74.6  | 99.3  | 63.6   | 80.5  |
| √   | √   | **73.9** | **74.8** | **99.3** | **63.7** | **80.8** |

Table 6: Comparison with supervised pre-training. (i) The model is trained without pre-training. (ii) The model is pre-trained on synthetic data with supervised pre-training. (iii) The model is pre-trained with our vision-language pre-training.

|                         | Sce  | Web  | Doc  | Hand | Avg  |
|-------------------------|------|------|------|------|------|
| Scratch                 | 68.8 | 70.7 | 98.6 | 49.4 | 75.8 |
| Supervised pre-training | 69.3 | 71.4 | 98.6 | 49.3 | 76.1 |
| Ours                    | **73.9** | **74.8** | **99.3** | **63.7** | **80.8** |

Masking strategy of language pre-training. We examine the impact of the masking strategy on language pre-training in Table 5. The results show that the language models trained with the masking strategy outperform those trained without it, both when only language pre-training is conducted and when visual pre-training is integrated. The results highlight the efficacy of the masking strategy in capturing the underlying linguistic rules during pre-training, particularly for challenging scenarios such as scene and handwritten text recognition.

Comparison with supervised pre-training. To demonstrate the superiority of our vision-language pre-training, we conduct a comparison with supervised pre-training; the results are presented in Table 6. Specifically, we pre-trained the recognition model on 100 million synthetic images and fine-tuned it on the BCTR dataset. Our findings show that, compared to the model trained from scratch, the model with supervised pre-training only achieved a marginal improvement, indicating that supervised pre-training with synthetic images is unnecessary when there is an abundance of real labeled data.
However, our proposed vision-language pre-training resulted in a 5% improvement, highlighting its effectiveness.

Table 7: Ablation on the vertical patch size.

| Patch size | Sce  | Web  | Doc  | Hand | Avg  |
|------------|------|------|------|------|------|
| 32 × 4     | 68.8 | 70.7 | 98.6 | 49.4 | 75.8 |
| 32 × 8     | 64.0 | 67.3 | 97.5 | 43.3 | 72.2 |

Vertical patch size. In our study, we evaluated the performance of two different patch sizes without pre-training; the results are presented in Table 7. We found that the larger patch size leads to worse performance, dropping the accuracy by 3.6%. We hypothesize that this may be due to the higher information density of the embedded tokens with the larger patch size, making them more difficult to learn.

Generalizability. To demonstrate the effectiveness and generalizability of our proposed vision-language pre-training, we conducted experiments on a widely used text recognition decoder, the Connectionist Temporal Classification (CTC) decoder. We utilized multiple transformer encoder layers as the sequence decoder and trained the model using CTC loss. During language pre-training, we randomly masked some characters and predicted the entire text sequence using CTC loss.

Figure 5: Example results of masked image modeling. The first row shows the input images; the remaining two rows show the masked images and the reconstructed images.

| oscr        | oscar      | 扩建府目          | 扩建项目          |
|-------------|------------|-------------------|-------------------|
| vvilla      | villa      | 毫元关系 木发铜锅 | 毫无关系 木炭铜锅 |
| piizza      | pizza      |                   |                   |
| disnep      | disney     | 质量是天键        | 质量是关键        |
| strawbering | strawberry | 吉乐工作室        | 声乐工作室        |

![10_image_0.png](10_image_0.png)

Figure 6: Visualization of text recognition results. The images in the first column and the fourth column are the input images. The results in the second column and the fifth column are predicted by the models trained from scratch.
The results in the third column and the sixth column are output by the models with vision-language pre-training.

Table 8: Ablation on different decoders.

| Decoders   | Scratch | Vision-language Pre-training |
|------------|---------|------------------------------|
| CTC-style  | 76.7    | 80.1                         |
| DETR-style | 75.8    | 80.8                         |

To validate the effectiveness of our approach, we trained two models: one trained from scratch and the other initialized with vision-language pre-training. Our results in Table 8 demonstrate that the model initialized with vision-language pre-training achieves higher accuracy, with a 3.4% performance gain over the scratch model (76.7% vs. 80.1%). These findings support the robustness and generalizability of our proposed model in the encoder-decoder recognition framework.

Qualitative analysis. We visualize some example results of masked image modeling in Figure 5. The input images are randomly masked with a masking ratio of 0.45; as a result, part of a character or even a whole character may not be visible to the encoder. The semantically plausible reconstructions in the third row show that meaningful visual representations are learned by the encoder. We also show some results of the models without and with vision-language pre-training in Figure 6. As shown, when trained from scratch, the model is not robust to artistic fonts, occluded text, and visually similar characters. Benefiting from the visual and linguistic prior knowledge of real images and text corpora, the model with vision-language pre-training handles the above-mentioned scenarios well.

## 4.4 Comparison With State-Of-The-Art Methods

Comparison with the state-of-the-art pre-training method. We compare our method with the previous state-of-the-art visual pre-training method DiG Yang et al. (2022) fairly, under the same data setting.
Specifically, we first pre-train our model on CC-OCR, MJSynth, and SynthText, following Yang et al. (2022), and fine-tune the pre-trained model on varying percentages of real labeled samples (1%, 10%, and 100%) sourced from TextOCR and Open Images Dataset v5. Our model's performance is evaluated and compared against DiG in Table 9. We observe that DiG pre-training led to a significant improvement in performance compared to the DiG baseline results obtained without pre-training. However, it is unclear whether this improvement is solely due to DiG pre-training or to differences in the training settings. To reduce this uncertainty, we trained a commonly used recognition model, CRNN Shi et al. (2017a)4, using the same data settings as our model, and achieved results of 63.5%, 79.6%, and 89.4%. Notably, CRNN was published in 2015, and its performance is generally not competitive with most recent recognition methods. The results of CRNN in Table 9 are significantly higher than the DiG baseline, suggesting that the observed performance gains may be primarily due to differences in the training settings used for the DiG baseline rather than to the DiG pre-training. For these reasons, we directly compare the performance with pre-training. Remarkably, our proposed approach significantly outperforms DiG in all cases.

Table 9: Comparison with the state-of-the-art pre-training method DiG. "Scratch" denotes that the model is trained from scratch, "V" means fine-tuned from visual pre-training, and "V + L" means fine-tuned from vision-language pre-training. The percentages 1%, 10%, and 100% refer to the proportions of real data used during the fine-tuning stage.

|              |         | 1%   | 10%  | 100% | #Params |
|--------------|---------|------|------|------|---------|
| CRNN         | Scratch | 63.5 | 79.6 | 89.4 | 25M     |
| DiG (ViT-S)  | Scratch | 9.2  | 73.4 | 91.4 | 36M     |
| DiG (ViT-S)  | V       | 84.6 | 92.0 | 94.6 | 36M     |
| Ours (ViT-S) | Scratch | 72.3 | 90.6 | 95.2 | 31M     |
| Ours (ViT-S) | V       | 87.7 | 92.6 | 95.3 | 31M     |
| Ours (ViT-S) | V + L   | **90.9** | **93.8** | **95.6** | 31M |
Most notably, our method achieves an exceptional accuracy of 90.9% with only 27.8K (1%) real labeled images during fine-tuning, providing substantial evidence that our approach significantly reduces the demand for labeled data. Furthermore, the performance gains observed in our model, equipped with a pre-trained encoder and decoder, further underscore the effectiveness of our pre-training mechanism.

Chinese text line recognition. We evaluate the ability of our model to recognize Chinese text lines on the BCTR dataset. We set the number of character queries N to 40 since most of the samples in BCTR have fewer than 40 characters. We show the results of our method and some representative methods on the BCTR dataset in Table 10. When training from scratch, our method with ViT-S as the encoder outperforms all previous methods while having the smallest model size. Specifically, our method is better than the previous best method TransOCR Chen et al. (2021a) by 2.8% (75.6% vs. 72.8%). When training with the pre-trained encoder and decoder, our models surpass the previous best results by large margins. In detail, our method shows steady improvement as the model size increases and improves over the state of the art by 5.3% and 8.0%, respectively.

English scene text recognition. Following Shi et al. (2017a; 2019), we set the number of character queries N to 25, which exceeds the lengths of most English words. Since scene text in natural scenes often appears with distortions or irregular layouts, we employ a spatial transformer network Jaderberg et al. (2015), as adopted in Shi et al. (2019), to rectify the input image and train it jointly with our recognizer. Text recognition in the early days faced the challenge of limited annotated data, which led to the common practice of training on synthetic data and testing on real data.
However, as data has accumulated over the past decade and unsupervised techniques have advanced, pre-training on synthetic data and testing on real data has become increasingly impractical. Firstly, there is now an abundance of millions of real data samples available, providing sufficient resources for training recognition models. Secondly, domain differences between synthetic and real data can cause differences in the effectiveness of training; therefore, the performance demonstrated by training on synthetic data may not be representative of training on real data.

4https://github.com/PaddlePaddle/PaddleOCR

Table 11: Comparison with state-of-the-art methods on English scene text benchmarks. "VP" denotes visual pre-training, "LP" denotes language pre-training, and "AD" indicates whether synthetic or real annotated data is used for training.

| Methods                        | VP   | LP   | AD    | IC13   | SVT   | IIIT5K   | IC15   | SVTP   | CUTE   | Avg   | #Params   |
|--------------------------------|------|------|-------|--------|-------|----------|--------|--------|--------|-------|-----------|
| ASTER Shi et al. (2019)        | ×    | ×    | synth | 91.8   | 89.5  | 93.4     | 76.1   | 78.5   | 79.5   | 86.7  | -         |
| TextScanner Wan et al. (2020)  | ×    | ×    | synth | 92.9   | 90.1  | 93.9     | 79.4   | 84.3   | 83.3   | 84.4  | -         |
| PIMNet Qiao et al. (2021)      | ×    | ×    | synth | 95.2   | 91.2  | 95.2     | 83.5   | 84.3   | 84.4   | 90.5  | -         |
| SRN Yu et al. (2020)           | ×    | ×    | synth | 95.5   | 91.5  | 94.8     | 82.7   | 85.1   | 87.8   | 90.4  | 55M       |
| VisionLan Wang et al. (2021c)  | ×    | ×    | synth | 95.7   | 91.7  | 95.8     | 83.7   | 86.0   | 88.5   | 91.2  | 33M       |
| ABINet Fang et al. (2021)      | ×    | ×    | synth | 94.9   | 90.4  | 94.6     | 81.7   | 84.2   | 86.5   | 89.8  | 24M       |
| I2C2W Xue et al. (2021)        | ×    | ×    | synth | 95.0   | 91.7  | 94.3     | 82.8   | 83.1   | 93.1   | 90.2  | -         |
| Ours (ViT-S)                   | ×    | ×    | synth | 97.7   | 93.7  | 95.4     | 86.6   | 89.0   | 87.5   | 92.5  | 31M       |
| Ours (ViT-B)                   | ×    | ×    | synth | 96.8   | 94.7  | 95.3     | 87.1   | 89.3   | 90.6   | 92.7  | 97M       |
| SEED Qiao et al. (2020)        | ×    | √    | synth | 92.8   | 89.6  | 93.8     | 80.0   | 81.4   | 83.6   | 88.3  | -         |
| ABINet Fang et al. (2021)      | ×    | √    | synth | 97.4   | 93.5  | 96.2     | 86.0   | 89.3   | 89.2   | 92.7  | 37M       |
| ConCLR Zhang et al. (2022)     | ×    | √    | synth | 97.7   | 94.3  | 96.5     | 85.4   | 89.3   | 91.3   | 92.8  | 37M       |
| PerSec Liu et al. (2022)       | √    | ×    | synth | 97.2   | 94.6  | 96.3     | 84.4   | 89.5   | 90.2   | 92.4  | -         |
| DiG (ViT-S) Yang et al. (2022) | √    | ×    | synth | 97.1   | 93.4  | 96.7     | 87.1   | 90.1   | 88.5   | 93.1  | 36M       |
| DiG (ViT-B) Yang et al. (2022) | √    | ×    | synth | 96.9   | 94.6  | 96.7     | 87.1   | 91.0   | 91.3   | 93.4  | 52M       |
| TrOCR Li et al. (2021b)        | √    | √    | synth | 98.3   | 93.2  | 91.0     | 84.0   | 91.0   | 89.6   | 90.3  | 558M      |
| Ours (ViT-S)                   | √    | √    | synth | 97.7   | 94.0  | 95.8     | 87.5   | 90.2   | 89.2   | 93.0  | 31M       |
| Ours (ViT-B)                   | √    | √    | synth | 98.1   | 94.9  | 95.8     | 87.5   | 89.8   | 90.3   | 93.1  | 97M       |
| DiG (ViT-S) Yang et al. (2022) | √    | ×    | real  | 97.3   | 96.1  | 97.7     | 88.6   | 91.6   | 96.2   | 94.6  | 36M       |
| DiG (ViT-B) Yang et al. (2022) | √    | ×    | real  | 97.6   | 96.5  | 97.6     | 88.9   | 92.9   | 96.5   | 94.9  | 52M       |
| MAERec-S Jiang et al. (2023)   | √    | ×    | real  | -      | -     | -        | -      | -      | -      | 94.6  | 21M       |
| Ours (ViT-S)                   | √    | √    | real  | 97.8   | 96.9  | 98.0     | 90.2   | 94.9   | 96.2   | 95.6  | 31M       |
| Ours (ViT-B)                   | √    | √    | real  | 98.2   | 96.9  | 98.0     | 90.1   | 94.6   | 95.8   | 95.6  | 97M       |

Table 10: Comparison with representative methods on the BCTR dataset. "VP" denotes visual pre-training and "LP" denotes language pre-training.

| Methods                      | VP   | LP   | Sce   | Web   | Doc   | Hand   | Avg   | #Params   |
|------------------------------|------|------|-------|-------|-------|--------|-------|-----------|
| CRNN Shi et al. (2017a)      | ×    | ×    | 53.4  | 54.5  | 97.5  | 46.4   | 67.0  | -         |
| ASTER Shi et al. (2019)      | ×    | ×    | 54.5  | 52.3  | 93.1  | 38.9   | 64.7  | -         |
| SAR Li et al. (2019)         | ×    | ×    | 62.5  | 54.3  | 93.8  | 31.4   | 67.3  | -         |
| SRN Yu et al. (2020)         | ×    | ×    | 60.1  | 52.3  | 96.7  | 18.0   | 65.0  | -         |
| TransOCR Chen et al. (2021a) | ×    | ×    | 63.3  | 62.3  | 96.9  | 53.4   | 72.8  | 84M       |
| Ours (ViT-S)                 | ×    | ×    | 68.6  | 70.3  | 98.5  | 49.1   | 75.6  | 36M       |
| Ours (ViT-B)                 | ×    | ×    | 68.8  | 70.7  | 98.6  | 49.4   | 75.8  | 100M      |
| Ours (ViT-S)                 | √    | √    | 71.4  | 72.5  | 98.8  | 55.6   | 78.1  | 36M       |
| Ours (ViT-B)                 | √    | √    | 73.9  | 74.8  | 99.3  | 63.7   | 80.8  | 100M      |
Thirdly, current unsupervised methods usually pre-train on real data to learn good feature representations, and fine-tuning on synthetic data may impair the effectiveness of pre-training. Consequently, we advocate pre-training on unlabeled real data and fine-tuning on labeled real data for evaluating model performance. As most methods are trained only on synthetic data, we also provide a comparison with this approach for reference.

We report the results of our method and existing methods in Table 11 and compare them in detail. When trained on "synth" (MJSynth + SynthText) without pre-training, our method significantly surpasses the previous methods and achieves the best performance. When trained on synthetic data with pre-training, our method achieves results comparable to the previous best model DiG Yang et al. (2022). Besides, when fine-tuned on real datasets (TextOCR + Open Images Dataset v5) as in Yang et al. (2022); Jiang et al. (2023), our method achieves the best performance (95.6% vs. 94.9% vs. 94.6%). The extensive comparisons demonstrate the excellence of our proposed method. Note that some methods that also achieve good performance but are not related to pre-training are not listed. These methods, which perform semi-supervised learning on real data Fang et al. (2021); Zheng et al. (2022), refine the results with an iterative mechanism Fang et al. (2021); Bautista & Atienza (2022), or predict text at multiple granularities Wang et al. (2022), are complementary to our approach and may further benefit it. We also observe that the improvement brought by pre-training is more pronounced for Chinese benchmarks than for English benchmarks.

Table 12: Runtime comparison. "FPS" denotes the number of images processed per second.

| Methods                   | Avg   | #Params | Flops  | FPS  |
|---------------------------|-------|---------|--------|------|
| ABINet Fang et al. (2021) | 92.7  | 37M     | 2.97G  | 545  |
| DiG Yang et al. (2022)    | 94.6  | 36M     | 15.74G | 188  |
| Ours (ViT-S)              | 95.6  | 31M     | 1.17G  | 1160 |
This could be attributed to several reasons. Firstly, the current English recognition datasets contain fewer character classes and shorter text lengths, making English recognition relatively easier than Chinese recognition. Secondly, the accuracy of current models on English benchmarks is already high and near saturation, with only marginal gains from pre-training, as shown by the small improvement margin (0.4%) between DiG (ACM MM 2022) and ABINet (CVPR 2021). Lastly, pre-training is more beneficial for challenging scenarios where the original model performs poorly. In many real-world scenarios that are more challenging and lack sufficient labeled data, such as the handwriting set of BCTR, the performance gain from pre-training can be substantial.

Runtime comparison. We compare the runtime of our method with two other competitive recognition methods with parameter counts similar to ours. The runtime is evaluated on an A100 GPU with a batch size of 64; the results are shown in Table 12. Compared to ABINet and DiG, our method has the smallest number of parameters, the highest accuracy, and the fastest running speed. We measured the computational complexity of the three networks to analyze the difference in inference speed. ABINet has 2.97G Flops, DiG has 15.74G Flops, and MaskOCR has 1.17G Flops, showing a positive correlation with speed. We further investigated why DiG's computational complexity far exceeds that of ABINet and MaskOCR under similar parameter counts. We found that DiG uses a relatively small patch size (4 × 4), while our approach employs a patch size of 32 × 4. This results in a much longer feature sequence being input to the ViT in DiG, and since the cost of self-attention grows quadratically with sequence length, the computational complexity increases sharply.

## 5 Conclusion

The core of the proposed approach for text recognition is that we pre-train the recognition model, including both the encoder and the decoder, to learn visual and linguistic representations.
The visual pre-training benefits from large-scale real text images, which are easily available without the need for text annotations. The language pre-training benefits from synthetic text images, which are also easily available and for which character-level annotations are easily obtained. Experiments verify the effectiveness of our proposed vision-language pre-training.

## References

Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: BERT pre-training of image transformers. In ICLR, 2022.

Darwin Bautista and Rowel Atienza. Scene text recognition with permuted autoregressive sequence models. In ECCV, volume 13688, pp. 178–196, 2022.

Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, pp. 213–229, 2020.

Jingye Chen, Bin Li, and Xiangyang Xue. Scene text telescope: Text-focused scene image super-resolution. In CVPR, pp. 12026–12035, 2021a.

Jingye Chen, Haiyang Yu, Jianqi Ma, Mengnan Guan, Xixi Xu, Xiaocong Wang, Shaobo Qu, Bin Li, and Xiangyang Xue. Benchmarking chinese text recognition: Datasets, baselines, and an empirical study. CoRR, abs/2112.15093, 2021b.

Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, and Jingdong Wang. Context autoencoder for self-supervised representation learning. CoRR, abs/2202.03026, 2022.

Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. CoRR, abs/2003.04297, 2020.

Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In ICCV, pp. 9620–9629, 2021c.

Chee Kheng Chng, Errui Ding, Jingtuo Liu, Dimosthenis Karatzas, Chee Seng Chan, Lianwen Jin, Yuliang Liu, Yipeng Sun, Chun Chet Ng, Canjie Luo, Zihan Ni, ChuanMing Fang, Shuaitao Zhang, and Junyu Han. ICDAR2019 robust reading challenge on arbitrary-shaped text - rrc-art. In ICDAR, pp.
1571–1576, 2019. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pp. 4171–4186, 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. Shancheng Fang, Hongtao Xie, Yuxin Wang, Zhendong Mao, and Yongdong Zhang. Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition. In CVPR, pp. 7098–7107, 2021. Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imagenet in 1 hour. CoRR, abs/1706.02677, 2017. Alex Graves, Santiago Fernández, Faustino J. Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML, pp. 369– 376, 2006. Ankush Gupta, Andrea Vedaldi, and Andrew Zisserman. Synthetic data for text localisation in natural images. In CVPR, pp. 2315–2324, 2016. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9726–9735, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. In CVPR, pp. 15979–15988, 2022. Mengchao He, Yuliang Liu, Zhibo Yang, Sheng Zhang, Canjie Luo, Feiyu Gao, Qi Zheng, Yongpan Wang, Xin Zhang, and Lianwen Jin. ICPR2018 contest on robust reading for multi-type web images. In ICPR, pp. 7–12, 2018. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Warm Restarts. In ICLR, 2017. 
Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. CoRR, abs/1406.2227, 2014a. Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Deep features for text spotting. In ECCV, pp. 512–528, 2014b. Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In NeurIPS, pp. 2017–2025, 2015. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In Marina Meila and Tong Zhang (eds.), ICML, volume 139, pp. 4904–4916, 2021. Qing Jiang, Jiapeng Wang, Dezhi Peng, Chongyu Liu, and Lianwen Jin. Revisiting scene text recognition: A data perspective. In ICCV, pp. 20543–20554, October 2023. Dimosthenis Karatzas, Faisal Shafait, Seiichi Uchida, Masakazu Iwamura, Lluis Gomez i Bigorda, Sergi Robles Mestre, Joan Mas, David Fernández Mota, Jon Almazán, and Lluís-Pere de las Heras. ICDAR 2013 robust reading competition. In ICDAR, pp. 1484–1493, 2013. Dimosthenis Karatzas, Lluis Gomez-Bigorda, Anguelos Nicolaou, Suman K. Ghosh, Andrew D. Bagdanov, Masakazu Iwamura, Jiri Matas, Lukas Neumann, Vijay Ramaseshan Chandrasekhar, Shijian Lu, Faisal Shafait, Seiichi Uchida, and Ernest Valveny. ICDAR 2015 competition on robust reading. In ICDAR, pp. 1156–1160, 2015. Hui Li, Peng Wang, Chunhua Shen, and Guyu Zhang. Show, attend and read: A simple and strong baseline for irregular text recognition. In AAAI, pp. 8610–8617, 2019. Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Gotmare, Shafiq R. Joty, Caiming Xiong, and Steven ChuHong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In NeurIPS, pp. 9694–9705, 2021a. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. 
BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In ICML, volume 162, pp. 12888–12900, 2022. Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei A. F. Florêncio, Cha Zhang, Zhoujun Li, and Furu Wei. Trocr: Transformer-based optical character recognition with pre-trained models. CoRR, abs/2109.10282, 2021b. Minghui Liao, Jian Zhang, Zhaoyi Wan, Fengming Xie, Jiajun Liang, Pengyuan Lyu, Cong Yao, and Xiang Bai. Scene text recognition from two-dimensional perspective. In AAAI, pp. 8714–8721, 2019. Hao Liu, Bin Wang, Zhimin Bao, Mobai Xue, Sheng Kang, Deqiang Jiang, Yinsong Liu, and Bo Ren. Perceiving stroke-semantic context: Hierarchical contrastive learning for robust scene text recognition. In AAAI, 2022. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2019. Pengyuan Lyu, Minghui Liao, Cong Yao, Wenhao Wu, and Xiang Bai. Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. In ECCV, volume 11218, pp. 71–88, 2018. Pengyuan Lyu, Zhicheng Yang, Xinhang Leng, Xiaojun Wu, Ruiyu Li, and Xiaoyong Shen. 2d attentional irregular scene text recognizer. CoRR, abs/1906.05708, 2019. Max Jaderberg and Karen Simonyan and Andrea Vedaldi and Andrew Zisserman. Reading text in the wild with convolutional neural networks. IJCV, pp. 1–20, 2016. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017. Anand Mishra, Karteek Alahari, and C. V. Jawahar. Scene text recognition using higher order language priors. In Richard Bowden, John P. Collomosse, and Krystian Mikolajczyk (eds.), BMVC, pp. 1–11, 2012. Trung Quy Phan, Palaiahnakote Shivakumara, Shangxuan Tian, and Chew Lim Tan. Recognizing text with perspective distortion in natural scenes. In ICCV, pp. 569–576, 2013. Zhi Qiao, Yu Zhou, Dongbao Yang, Yucan Zhou, and Weiping Wang. 
SEED: semantics enhanced encoderdecoder framework for scene text recognition. In CVPR, pp. 13525–13534, 2020. Zhi Qiao, Yu Zhou, Jin Wei, Wei Wang, Yuan Zhang, Ning Jiang, Hongbin Wang, and Weiping Wang. Pimnet: A parallel, iterative and mimicking network for scene text recognition. In ACM MM, pp. 2046– 2055, 2021. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, volume 139, pp. 8748–8763, 2021. Anhar Risnumawan, Palaiahnakote Shivakumara, Chee Seng Chan, and Chew Lim Tan. A robust arbitrary text detection system for natural scene images. Expert Syst. Appl., pp. 8027–8048, 2014. Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott. Language modelling with pixels. In ICLR, 2023. Baoguang Shi, Xiang Bai, and Cong Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. TPAMI, pp. 2298–2304, 2017a. Baoguang Shi, Cong Yao, Minghui Liao, Mingkun Yang, Pei Xu, Linyan Cui, Serge J. Belongie, Shijian Lu, and Xiang Bai. ICDAR2017 competition on reading chinese text in the wild (RCTW-17). In ICDAR, pp. 1429–1434, 2017b. Baoguang Shi, Mingkun Yang, Xinggang Wang, Pengyuan Lyu, Cong Yao, and Xiang Bai. ASTER: an attentional scene text recognizer with flexible rectification. TPAMI, pp. 2035–2048, 2019. Amanpreet Singh, Guan Pang, Mandy Toh, Jing Huang, Wojciech Galuba, and Tal Hassner. Textocr: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text. In CVPR, pp. 8802–8812, 2021. Sibo Song, Jianqiang Wan, Zhibo Yang, Jun Tang, Wenqing Cheng, Xiang Bai, and Cong Yao. Visionlanguage pre-training for boosting scene text detectors. In CVPR, pp. 15660–15670. IEEE, 2022. 
Yipeng Sun, Dimosthenis Karatzas, Chee Seng Chan, Lianwen Jin, Zihan Ni, Chee Kheng Chng, Yuliang Liu, Canjie Luo, Chun Chet Ng, Junyu Han, Errui Ding, and Jingtuo Liu. ICDAR 2019 competition on large-scale street view text with partial labeling - RRC-LSVT. In ICDAR, pp. 1557–1562, 2019. Zhaoyi Wan, Minghang He, Haoran Chen, Xiang Bai, and Cong Yao. Textscanner: Reading characters in order for robust scene text recognition. In AAAI, pp. 12120–12127, 2020. Jianfeng Wang, Xiaowei Hu, Zhe Gan, Zhengyuan Yang, Xiyang Dai, Zicheng Liu, Yumao Lu, and Lijuan Wang. UFO: A unified transformer for vision-language representation learning. CoRR, abs/2111.10023, 2021a. Kai Wang, Boris Babenko, and Serge J. Belongie. End-to-end scene text recognition. In ICCV, pp. 1457–1464, 2011. Peng Wang, Cheng Da, and Cong Yao. Multi-granularity prediction for scene text recognition. In ECCV, volume 13688, pp. 339–355, 2022. Wenhui Wang, Hangbo Bao, Li Dong, and Furu Wei. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. CoRR, abs/2111.02358, 2021b. Yuxin Wang, Hongtao Xie, Shancheng Fang, Jing Wang, Shenggao Zhu, and Yongdong Zhang. From two to one: A new scene text recognizer with visual language modeling network. In ICCV, pp. 14174–14183, 2021c. Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Rajesh Gupta, Yan Liu, Jiliang Tang, and B. Aditya Prakash (eds.), KDD, pp. 1192–1200, 2020. Chuhui Xue, Shijian Lu, Song Bai, Wenqing Zhang, and Changhu Wang. I2C2W: image-to-character-to-word transformers for accurate scene text recognition. CoRR, abs/2105.08383, 2021. Mingkun Yang, Minghui Liao, Pu Lu, Jing Wang, Shenggao Zhu, Hualin Luo, Qi Tian, and Xiang Bai. Reading and writing: Discriminative and generative modeling for self-supervised text recognition. In ACMMM, pp. 4214–4223, 2022. 
Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florêncio, Lijuan Wang, Cha Zhang, Lei Zhang, and Jiebo Luo. TAP: text-aware pre-training for text-vqa and text-caption. In CVPR, pp. 8751–8761, 2021. Deli Yu, Xuan Li, Chengquan Zhang, Tao Liu, Junyu Han, Jingtuo Liu, and Errui Ding. Towards accurate scene text recognition with semantic reasoning networks. In CVPR, pp. 12110–12119, 2020. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. CoRR, abs/2205.01917, 2022. Tai-Ling Yuan, Zhe Zhu, Kun Xu, Cheng-Jun Li, Tai-Jiang Mu, and Shi-Min Hu. A large chinese text dataset in the wild. JCST, pp. 509–521, 2019. Hesuo Zhang, Lingyu Liang, and Lianwen Jin. Scut-hccdoc: A new benchmark dataset of handwritten chinese text in unconstrained camera-captured documents. PR, pp. 107559, 2020. Rui Zhang, Mingkun Yang, Xiang Bai, Baoguang Shi, Dimosthenis Karatzas, Shijian Lu, C. V. Jawahar, Yongsheng Zhou, Qianyi Jiang, Qi Song, Nan Li, Kai Zhou, Lei Wang, Dong Wang, and Minghui Liao. ICDAR 2019 robust reading challenge on reading chinese text on signboard. In ICDAR, pp. 1577–1581, 2019. Xinyun Zhang, Binwu Zhu, Xufeng Yao, Qi Sun, Ruiyu Li, and Bei Yu. Context-based contrastive learning for scene text recognition. In AAAI, 2022. Caiyuan Zheng, Hui Li, Seon-Min Rhee, Seungju Han, Jae-Joon Han, and Peng Wang. Pushing the performance limit of scene text recognizer without human annotation. In CVPR, pp. 14096–14105, 2022. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. Unified visionlanguage pre-training for image captioning and VQA. In AAAI, pp. 13041–13049, 2020.
Review 1: Summary: This paper proposes a masked image-language modeling pretraining scheme to learn a strong encoder-decoder model for the task of text recognition. The proposed pretraining scheme applies masked language modeling to the image encoder over a set of unlabeled real text images, and pretrains the language decoder by transforming a text corpus into images. The method performs well on several Chinese and English benchmarks. Strengths and Weaknesses: In general I like the simplicity of the approach, its strong results, and high inference throughput. ## Strengths: - The proposed approach is simple and intuitive. - The model seems to scale well, with results improving substantially from 36M to 100M parameters. - The ablations are well conducted and informative, and showcase the benefits of the pretraining schemes and architectural design choices. ## Weaknesses: - What is the reason for the proposed model being so much faster than the other baselines, despite parameter counts being comparable (Table 10)? Could you be more specific in identifying reasons for the speedup? - The proposed pretraining scheme appears to be the same as the one proposed in [1], but there is no discussion of that paper. Can you provide more explicit comparisons against this work, and highlight the differences? - Minor typos: “Encode-decoder” -> “Encoder-decoder” (Figure 2 caption), Sec 4.4 (72.8% v.s. 75.6% has incorrect formatting) **References** [1] Rust, Phillip, et al. "Language modelling with pixels." ICLR 2023. Requested Changes: Would strengthen the work: - Discussion about model throughput, and why exactly it’s faster than prior approaches Critical: - Comparisons against Rust et al. (2023) Broader Impact Concerns: There are no major issues with the ethical implications of the work. It studies text recognition and is relatively low risk. 
================================================== Review 2: Summary: This paper presents a novel approach to text recognition that combines vision and language pre-training in a classical encoder-decoder framework. The authors propose masked image modelling for the encoder and masked image-language modelling for the decoder. The approach involves pre-training the encoder on unlabeled real text images and the decoder on synthesized text images, thereby enhancing both visual and linguistic representation learning. The paper presents extensive experiments on Chinese and English text images, demonstrating superior performance over state-of-the-art methods. Strengths and Weaknesses: First, this paper, entitled "Text Recognition with Masked Vision-Language Pre-training", focuses primarily on the area of scene text recognition, a detail that could be more explicitly reflected in the title for clarity. While the paper presents a method that integrates vision and language pre-training within an encoder-decoder architecture, it notably omits any discussion of various models developed for document recognition, which are closely related to the field of scene text recognition. Furthermore, the related work section would benefit greatly from a more explicit exploration of the links between these two fields. Second, the approach presented in the paper, which integrates vision and language pre-training within an encoder-decoder architecture for text recognition, must be reconsidered in the context of the paper introducing the TrOCR model, particularly its 2023 version as presented at the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23). It's noteworthy that while the paper cites the 2021 version of TrOCR, it overlooks the significantly updated and improved 2023 iteration. This oversight raises concerns regarding the originality and contemporary relevance of the proposed approach. 
Indeed, the core of the paper is centered on the proposed pretraining methodology, which 1) combines text-to-image conversion, 2) masked image-language modeling, and 3) a strategic use of a frozen encoder to effectively bridge the gap between visual and textual data interpretation. The first and second propositions are already used in the 2021 TrOCR paper. The third proposition seems to be new compared to TrOCR and to improve the results, as shown in Table 2. However, the main contribution of this article comes down to this aspect, which is very limited. Third, in this text recognition model, there is a necessary dependency on a separate text detection model to provide the positions of the text, a dependency that should be explicitly stated. It's important to anticipate a potential drop in performance when these two models are combined, as they are not optimized together. This raises a critical question: given the interdependencies and potential performance trade-offs, is it still worth optimizing the text recognition module in isolation? This broader perspective should be considered when assessing the overall effectiveness and progress of approaches in this area. It's worth noting that there is an emerging trend within the document recognition community towards the development of fully integrated models that combine both text detection and recognition (End-to-End Document Recognition and Understanding with Dessurt in ECCV 2022, DAN: a Segmentation-free Document Attention Network for Handwritten Document Recognition in PAMI 2023). The field's shift towards integrated models highlights the importance of addressing the issues of interdependence and potential performance trade-offs when text detection and recognition processes are not optimized together. Requested Changes: - In section 3.1, it is written "We partition the image vertically into a set of M vertical patches, [p1, p2, . . . , pM]." This supposes that the text orientation on the image is given. 
- The loss (Eq. 1) is purely sequential, which implies that a single insertion makes the full sequence considered incorrect. It is surprising that the model can be trained with this loss, whereas losses based on alignment such as CTC have generally been used in the document recognition community. - Section 4.2 Evaluation: the metric used for BCTR is sentence-level accuracy, which seems very strict (one character error discards the full sentence). Could you confirm? The metric is different for the English datasets. - Table 3: the model is not very sensitive to the masking ratio, which is good. What are the results with a masking ratio at 0.75, which is reported as the best for MAE? - Table 7: what are the 1% 10% 100% for? - Table 10: what is the time unit? Broader Impact Concerns: NA ================================================== Review 3: Summary: This paper discusses a pretraining strategy for encoder-decoder-based OCR systems and proposes a training procedure consisting of three steps: pretraining of the encoder with real images, pretraining of the decoder with synthetic images while fixing the encoder part, and finetuning of the whole system. The pretraining criteria are basically the prediction of masked regions of pixels and texts for the encoder and decoder, respectively. The key design principle is that the pretraining of the encoder is done with real images and the pretraining of the decoder is done with synthetic images while freezing the encoder. This allows all the components to be pretrained effectively. The extensive experiments show the effectiveness of the approach and it reports strong numbers on OCR benchmarks in English and Chinese. Strengths and Weaknesses: The key contributions of the paper are 1) the use of real images for pretraining of the encoder, 2) the use of synthetic images for pretraining of the decoder while freezing the encoder, 3) both encoder and decoder are pretrained. 
None of these is probably very novel by itself, but the combination, and providing a strategy supported by thorough experiments, have good value. In particular, showing a practical strategy to pretrain the decoder with synthetic images seems important. It gives an insight into how to use synthetic images for training OCR systems. The experiments are extensive and it is great that the methods are evaluated on two languages. The presentation could be improved in terms of writing and how the results are summarized, but I do not find a major weakness in the paper. A possible weakness is that the comparisons with prior studies may not be very fair, or may not contrast the differences in terms of algorithm. It is a common problem shared across the field that numbers are simply copied from other papers. As discussed in the paper, we often observe significantly better results by just retraining a simple weak model like a ResNet. It is great to have the results of GiG from this perspective. Requested Changes: It seems the conclusion from the results in Tables 1, 2, 5 is that we should never train the encoder part with synthetic data only. Does that sound reasonable? What if we combine real images with synthetic data when finetuning, etc.? It would be great to have discussions on this. There are many tables with the same result in the first row (i.e. Scratch). I'm wondering if it is a better idea to create a consolidated table listing all results. There are several numbers that are just mentioned in the text. Those could be put in tables, too. If it makes the readability worse, you can ignore this. There seem to be many unnatural expressions in the text, although I was able to understand the meaning. I would recommend proof-reading by a third party. Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Accept as is Comment: Interesting and novel combination of methods. 
OCR researchers would likely find it valuable to see the experiments that show how to pretrain the decoder with synthetic images. Lots of interesting ablations. ==================================================
# Recovering Exact Support In Federated Lasso Without Optimization

Adarsh Barik *abarik@nus.edu.sg* Institute of Data Science, National University of Singapore

Jean Honorio *jhonorio@unimelb.edu.au* School of Computing and Information Systems, The University of Melbourne

Reviewed on OpenReview: *https://openreview.net/forum?id=S5y26tKlf2*

## Abstract

Federated learning provides a framework to address the challenges of distributed computing, data ownership, and privacy over a large number of distributed clients with low computational and communication capabilities. In this paper, we study the problem of learning the exact support of sparse linear regression in the federated learning setup. We provide a simple communication-efficient algorithm that only needs one-shot communication with the centralized server to compute the exact support by majority voting. Our method does not require the clients to solve any optimization problem and thus can be run on devices with low computational capabilities. Our method is naturally robust to the problems of client failure, model poisoning, and straggling clients. We formally prove that our method requires a number of samples per client that is polynomial with respect to the support size, but independent of the dimension of the problem. We require the number of distributed clients to be logarithmic in the dimension of the problem. For certain classes of predictor variables (e.g. mutually independent, correlated Gaussian, etc.), the overall sample complexity matches the optimal sample complexity of the non-federated centralized setting. Furthermore, our method is easy to implement and has an overall polynomial time complexity.

## 1 Introduction

Modern-day edge devices, with their data acquisition and storage ability, have pushed the need for distributed computing beyond the realms of data centers. 
Devices such as mobile phones, sensor systems in vehicles, wearable technology, and smart homes, within their limited storage and processing capabilities, can constantly collect data and perform simple computations. However, due to data privacy concerns and limitations on network bandwidth and power, it becomes impractical to transmit all the collected data to a centralized server and conduct centralized training. The nascent field of federated learning (Konečný et al., 2015; 2016; Brendan et al., 2017; Mohri et al., 2019; Li et al., 2020a) tries to address these concerns. As described by Konečný et al. (2016), federated learning is a machine learning setting where the goal is to train a high-quality centralized model with training data distributed over a large number of clients. Unlike the data centers, the clients collect data samples independently but in a non-i.i.d. fashion. The clients may be highly unbalanced, i.e., the number of samples per client may vary significantly. The clients may also have hardware-related constraints. Although the number of clients could be quite large, each client is typically a simple device that has access to a very small number of data samples and can only conduct very basic computations due to limitations on its processing and power capabilities. Furthermore, since battery power is at a premium, the communication between the client and the centralized server acts as a major bottleneck. Due to these constraints, it is common to encounter straggling and faulty clients in the federated learning setting. In this work, we study the problem of exact support recovery of sparse linear regression in federated learning without solving any optimization problem. Support recovery in sparse models is of great importance in machine learning as it relates to feature selection. 
In our setting, none of the clients has access to the necessary number of data samples required for exact support recovery or possesses the computational capabilities to run complex algorithms. Furthermore, we only allow for one-shot communication between the clients and the centralized server, i.e., clients can send information to the centralized server only once. We propose a novel yet simple algorithm for this setting which uses majority voting and show that local clients can collaboratively recover the exact support of the sparse linear regression model with provable theoretical guarantees.

## 1.1 Related Work

The problem of support recovery in sparse linear regression has been well studied for the centralized setting in compressive sensing (See e.g., Foucart & Rauhut, 2017 and references therein) and sparse regression (See e.g., Wainwright, 2009b and references therein). In compressive sensing, for independent sub-Gaussian predictors, $\Omega(s \log d)$ samples are necessary for exact support recovery of a $d$-dimensional parameter vector with $s$ non-zero entries. For sparse regression, Wainwright (2009b) provided the same information-theoretic lower bound for correlated Gaussian predictors. To the best of our knowledge, such a bound does not exist for the general case of *correlated sub-Gaussian* predictors. In the federated setting, data is divided across multiple clients. We define the overall sample complexity in the federated setting as the summation of the sample complexity across all clients. Table 1 shows a comparison of the overall sample complexity of our method in the federated setting to that of the tightest bounds available in the centralized setting running lasso. The federated learning framework has been used in many empirical studies (Konečný et al., 2015; 2016). 
As it inherently facilitates distributed computing, it lends itself to be used in a vast range of applications which include but are not limited to deep networks (Brendan et al., 2017), neural networks (Yurochkin et al., 2019; Wang et al., 2020), principal component analysis (Grammenos et al., 2020) and fair resource allocation (Li et al., 2020b). There are also empirical studies that analyze adversarial attacks under the federated learning setting (Bhagoji et al., 2019). On the theoretical side, there are several application-based algorithms that provide convergence rate guarantees. For example, He et al. (2018) provide convergence rate guarantees for linear classification and regression models, Smith et al. (2017b;a) provide similar guarantees for lasso and multi-task learning respectively. Mohri et al. (2019) provide Rademacher-based generalization bounds. Our estimation method at the clients looks similar to marginal regression (See Fan et al., 2008; Genovese et al., 2012 and the works that follow). However, compared to them, we focus on exact support recovery in the federated learning setup. Besides, our analysis and theoretical guarantees hold in the non-asymptotic setting. A detailed survey on the challenges and applications of federated learning can be found in McMahan et al. (2021) and Wang et al. (2021). Table 1: Comparison of our overall sample complexity of support recovery in sparse regression in the federated setting with existing work in the centralized setting. Notation: s is the number of non-zero entries in the regression parameter vector and d is its dimension. The terms which are independent of s and d are not shown in the order notation. 
| Predictor type | Bound in the centralized setting | Our bound |
|-------------------------|------------------------------------|-----------------------------|
| Mutually independent | $\Omega(s \log d)$ (Wainwright, 2009b) | $\Omega(s \log d)$, Theorem 1 |
| Correlated Gaussian | $\Omega(s \log d)$ (Wainwright, 2009b) | $\Omega(s \log d)$, Appendix G |
| Correlated sub-Gaussian | Not known | $\Omega(s^2 \log s \log d)$, Theorem 2 |

## 1.2 Our Contribution

All the works mentioned above are interesting in their own domain. The existing theoretical works provide guarantees for convergence rates (which guarantee a small mean squared error in the training set provided enough iterations) or generalization bounds (which guarantee a small mean squared error in the testing set provided enough samples). However, the final solution may not match exactly with the true parameter vector. In this work, we provide provable theoretical guarantees for the exact recovery of the support of the true sparse parameter vector of linear regression in federated learning. Support recovery, i.e., correctly detecting the zero and nonzero entries of the parameter vector, is arguably a more challenging task. We show that for some special classes of predictor variables, which include mutually independent or correlated Gaussian random variables, we can do exact support recovery with at least $\Omega(\log d)$ clients and only $\Omega(s)$ data samples per client. Notice that in this case, the overall sample complexity is $\Omega(s \log d)$, which matches the optimal sample complexity of the centralized setting. We also provide novel theoretical results for a general class of correlated sub-Gaussian predictors where we show that if the number of clients is at least $\Omega(\log d)$ and each client has access to at least $\Omega(s^2 \log s)$ data samples, then the support can be recovered exactly with high probability. We propose a simple yet effective method for exact support recovery and prove that the method is *correct* and efficient in terms of *time* and *sample complexity*. 
McMahan et al. (2021) and Wang et al. (2021) provided several key properties which make federated learning preferable over centralized systems. Our method fulfills many of these key properties:

- **No optimization - low computation:** We do not solve any optimization problem at the client level. All the computations are simple and let us use our method in devices with low computational power.
- **One-shot communication and privacy:** Our method is communication-efficient. We only need one round of communication of at most $d$ bits from the client to the centralized server. As communication is kept to a minimum, very little information about the client is passed to the centralized server.
- **Fault tolerance and aversion to model poisoning and straggling:** Our method is naturally robust to client node failure and averse to rogue and straggling clients.

## 2 Notation And Problem Setup

In this section, we collect the notation which we use throughout this paper. We also formally define the support recovery problem for sparse linear regression in federated learning. Let $w^* \in \mathbb{R}^d$ be a $d$-dimensional parameter with sparsity $s$, i.e., only $s$ out of $d$ entries of $w^*$ are nonzero. We use $[r]$ as a shorthand notation to denote the set $\{1, 2, \cdots, r\}$. Let $S^*$ be the true support set, i.e., $S^* = \{r \mid w^*_r \neq 0, r \in [d]\}$. We denote the corresponding complementary non-support set as $S^{*c} = \{r \mid w^*_r = 0, r \in [d]\}$. We will assume that $\min_{j \in S^*} |w^*_j| > w_l > 0$ and $\|w^*\|_\infty < w_h$. The first condition is the well-known minimum weight condition (Wainwright, 2009b) which ensures that the non-zero entries of $w^*$ are not arbitrarily close to zero, which would make inference difficult for any method. The second condition can be written in terms of any $\ell_p$-norm where $p \geq 1$. We chose the $\ell_\infty$-norm to keep our analysis simple and clean. In federated learning, the data is divided across multiple clients. Assume that there are $g$ clients, each with $n_i$ independent samples, for $i \in [g]$. 
Note that the data distribution across the $g$ clients need not be identical. Each client $i \in [g]$ contains each data sample in the format $(X_i, y_i)$ where $X_i \in \mathbb{R}^d$ are the predictor variables and $y_i \in \mathbb{R}$ is the response variable. The data generation process for each client $i \in [g]$ is as follows:

$$y_{i}=X_{i}^{\intercal}w^{*}+e_{i}\,,\tag{1}$$

where $e_i$ is a zero-mean sub-Gaussian additive noise with variance proxy $\eta_i^2$, where $\eta_i > 0$. Note that all the clients share the same parameter vector $w^*$. The $j$-th entry of $X_i$ is denoted by $X_{ij}$, $\forall i \in [g], j \in [d]$. Each entry $X_{ij}$ of $X_i$ is a zero-mean sub-Gaussian random variable with variance proxy $\rho_i^2$, where $\rho_i > 0$. We denote the covariance matrix for $X_i$ as $\Sigma^i \in \mathbb{R}^{d \times d}$ with diagonal entries $\Sigma^i_{jj} = (\sigma^i_{jj})^2$, $\forall j \in [d]$, and non-diagonal entries $\Sigma^i_{jk} = \sigma^i_{jk}$, $\forall j, k \in [d], j \neq k$. If the predictor variables are mutually independent, then $\sigma^i_{jk} = 0$, $\forall i \in [g], j, k \in [d], j \neq k$. The $t$-th sample of the $i$-th client is denoted by $(X^t_i, y^t_i)$, $\forall i \in [g], t \in [n_i]$. We note that $X^t_i \in \mathbb{R}^d$ and $y^t_i \in \mathbb{R}$, and denote the $j$-th entry of $X^t_i$ as $X^t_{ij}$. Notice that the data distributions for $(X_i, y_i)$ can vary a lot across the clients by varying $\rho_i$ and $\eta_i$, as well as the specific sub-Gaussian probability distribution. The class of sub-Gaussian variates includes for instance Gaussian variables, any bounded random variable (e.g., Bernoulli, multinomial, uniform), any random variable with strictly log-concave density, and any finite mixture of sub-Gaussian variables. Similarly, data samples can be distributed unevenly across the clients by varying $n_i$. In subsequent sections, we use $P(A)$ to denote the probability of the event $A$ and $E(A)$ to denote the expectation of the random variable $A$. Figure 1a shows our setup and compares it with a centralized server running lasso in Figure 1b. Notice how each client only sends a maximum of $d$ bits to the centralized server in Figure 1a and maintains the confidentiality of locally collected data. 
This is unlike the centralized setting where the centralized server has access to all the data.

![3_image_0.png](3_image_0.png)

![3_image_1.png](3_image_1.png)

Figure 1: (Left, 1a) Support recovery in our federated sparse regression framework. (Right, 1b) Support recovery in the centralized sparse regression framework using lasso.

## 3 Problem Statement

For our problem, we assume that each client has access to $n_i = o(s \log d)$ samples, $\forall i \in [g]$. That is, the number of samples $n_i$ grows strictly slower than $s \log d$. Otherwise, the support can be trivially recovered by using compressed sensing methods in the client with $n_i = O(s \log d)$, which is the order of the necessary and sufficient number of samples required for exact support recovery in the linear regression setup (Wainwright, 2009a;b). Furthermore, we assume that each of our clients can only do very simple computations and can only do one-shot communication with the centralized server, i.e., each client can only send at most $d$ bits to the centralized server. Considering the above requirements, we are interested in answering the following question:

Problem Statement 1 (Exact Support Recovery). *Given that each client contains $n_i = o(s \log d)$ data samples generated through the process described in equation (1), is it possible to efficiently recover the true support of the $s$-sparse shared parameter vector $w^* \in \mathbb{R}^d$ by collecting $d$ bits of information from every client only once, with provable theoretical guarantees?*

The efficiency in exact recovery means that the sample complexity per client should be $o(s \log d)$ and that our algorithm should have polynomial time complexity and should also be easy to implement.

## 4 Our Method

In this section, we present a simple algorithm to solve Problem 1. Our main idea is that estimation at the client level can be incorrect for every client, but this information can still be aggregated in a careful manner to compute the true support. 
## 4.1 Client Level Computations

Each client tries to estimate the support of $w^*$ using the $n_i$ independent samples available to it. As mentioned previously, $n_i$, $\forall i \in [g]$, is not sufficient to compute the correct support of $w^*$ using any method possible (Wainwright, 2009a). Let $\hat{w}_i \in \mathbb{R}^d$ be the estimate of $w^*$ computed by each client $i$. Let $S_i = \{j \mid \hat{w}_{ij} \neq 0, j \in [d]\}$ be the support of $\hat{w}_i$. Each client communicates the computed support (at most $d$ bits) to a centralized server which then computes the final support of $w^*$. The centralized server receives $S_i$ from each client and computes the final support $S = f(S_1, S_2, \cdots, S_g)$. Each client $i$, $\forall i \in [g]$, computes $\hat{w}_i$ in the following way:

$$\forall i\in[g],j\in[d],\;{\hat{w}}_{ij}={\frac{1}{{\hat{\sigma}}_{ij}}}\mathrm{sign}({\hat{\alpha}}_{ij})\max(0,|{\hat{\alpha}}_{ij}|-\lambda_{ij}),\tag{2}$$

where $\hat{w}_{ij}$ is the $j$-th entry of $\hat{w}_i$ and $\lambda_{ij} > 0$ is a regularization parameter. While it is possible to compute a feasible $\lambda_{ij}$ for each client $i$ and entry $j$, we will present a more practical choice of a single $\lambda_{ij} = \lambda$ across all clients and entries. Moreover, computing $\hat{\sigma}_{ij}$ is not required to estimate the support, but we keep all the terms in equation (2) for clarity and completeness. We define $\hat{\sigma}_{ij}$ and $\hat{\alpha}_{ij}$ as follows:

$$\hat{\sigma}_{ij}\triangleq\frac{1}{n_{i}}\sum_{t=1}^{n_{i}}(X_{ij}^{t})^{2},\quad\hat{\alpha}_{ij}\triangleq\frac{1}{n_{i}}\sum_{t=1}^{n_{i}}y_{i}^{t}X_{ij}^{t}\tag{3}$$

These are simple calculations and can be done in $O(d n_i)$ run time at each client. If $n_i$ can be kept small (which we will show later), this can be done even by a device with low computational ability. The choice of this exact form of $\hat{w}_{ij}$ in equation (2) is not arbitrary. To get the intuition behind our choice, consider the following $\ell_1$-regularized (sparse) linear regression problem at each client. 
$$(\forall i\in[g]),\ \hat{w}_{i}=\arg\min_{w}\frac{1}{n_{i}}\sum_{t=1}^{n_{i}}(w^{\intercal}X_{i}^{t}-y_{i}^{t})^{2}+\|\Lambda_{i}\odot w\|_{1},\tag{4}$$

where $\|\cdot\|_1$ denotes the $\ell_1$ norm of a vector and $\odot$ denotes the Hadamard product between vectors. The $j$-th entry of the regularizer vector $\Lambda_i \in \mathbb{R}^d$ is $\lambda_{ij}$. We can write equation (4) in expanded form as:

$$(\forall i\in[g]),\ {\hat{w}}_{i}=\arg\min_{w}\ w^{\intercal}\left({\frac{1}{n_{i}}}\sum_{t=1}^{n_{i}}X_{i}^{t}X_{i}^{t\intercal}\right)w-2w^{\intercal}\left({\frac{1}{n_{i}}}\sum_{t=1}^{n_{i}}y_{i}^{t}X_{i}^{t}\right)+\sum_{j=1}^{d}\lambda_{ij}|w_{j}|,\tag{5}$$

Now, we intentionally replace $\sum_{t=1}^{n_i} X_i^t X_i^{t\intercal}$ with $\mathrm{diag}\big(\sum_{t=1}^{n_i} X_i^t X_i^{t\intercal}\big)$. This allows us to write equation (5) as a sum of $d$ independent optimization problems:

$$(\forall i\in[g]),\ {\hat{w}}_{i}=\arg\min_{w}\sum_{j=1}^{d}w_{j}^{2}\left({\frac{1}{n_{i}}}\sum_{t=1}^{n_{i}}(X_{ij}^{t})^{2}\right)-2\sum_{j=1}^{d}w_{j}\left({\frac{1}{n_{i}}}\sum_{t=1}^{n_{i}}y_{i}^{t}X_{ij}^{t}\right)+\sum_{j=1}^{d}\lambda_{ij}|w_{j}|,\tag{6}$$

and subsequently, we get equation (2) as the solution. This is an improper estimator that has the advantage of working well when there are very few samples, i.e., $n_i = O(1)$ with respect to the dimension $d$. It is known that estimating the covariance as needed in the mean squared error requires $n_i$ in $O(\log d)$ (See Lemma 1 in Ravikumar et al., 2011). Our simple estimator avoids any computation (or estimation) of the covariance matrix which, in any case, would be incorrect if each client has access to only a few samples. Each client $i$ sends the support $S_i$ of $\hat{w}_i$ to the centralized server. Even in the worst-case scenario, each client only sends $d$ bits to the centralized server.

## 4.2 Information Aggregation And Constructing The Final Support

We aggregate the supports $S_i$, $\forall i \in [g]$, from all the clients and construct the final support. 
Before we get to the construction of the final support, we define a random variable $R_{ij}$, $\forall i \in [g], j \in [d]$, which takes value 1 if $j \in S_i$ and 0 otherwise. Thus, the random variable $R_{ij}$ indicates whether entry $j$ is in the support $S_i$ of client $i$. Using the random variables $R_{ij}$, we construct the final support $S$ using majority voting. This is done by computing the median of $R_{ij}$ across $i \in [g]$. If the median is 1, then we conclude that $j$ is in the support; otherwise, we conclude that $j$ is not in the support. More formally, we define a random variable $R_j \triangleq \frac{1}{g}\sum_{i=1}^{g} R_{ij}$, and if $R_j \geq \frac{1}{2}$, then we conclude that $j \in S$. Otherwise, if $R_j < \frac{1}{2}$, then we conclude that $j \notin S$. The above procedure can be compactly written as Algorithm 1.

Algorithm 1: getExactSupport

{Part I: Runs in client $i$, $\forall i \in [g]$}
Input: Data samples $(X^t_i, y^t_i)$, $\forall t \in [n_i]$
Output: Locally estimated support of shared parameter $w^*$
for each $j \in [d]$ do
  Compute $\hat{w}_{ij}$ using equations (2) and (3)
  if $\hat{w}_{ij} \neq 0$ then $R_{ij} \leftarrow 1$ else $R_{ij} \leftarrow 0$
end for
Send $R_{ij}$ to the centralized server, $\forall j \in [d]$

{Part II: Runs in centralized server}
Input: $R_{ij}$, $\forall i \in [g], j \in [d]$
Output: Globally estimated support $S$ for shared parameter $w^*$
$S \leftarrow \{\}$
for each $j \in [d]$ do
  Compute $R_j = \frac{1}{g}\sum_{i=1}^{g} R_{ij}$
  if $R_j \geq \frac{1}{2}$ then $S \leftarrow S \cup \{j\}$
end for

## 5 Main Result And Analysis

In this section, we describe and analyze our theoretical results. We present our results in two different settings. In the first setting, we assume that the predictor variables are mutually independent. We tackle the more general case of correlated predictors in the second setting. We deal with the special case of correlated Gaussian predictors in Appendix G. Detailed proofs for lemmas and theorems are available in the supplementary material. 
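To make the procedure concrete, here is a minimal NumPy sketch of the two parts of Algorithm 1, assuming standard Gaussian (mutually independent) predictors; the sizes $d$, $s$, $g$, $n_i$ and the threshold $\lambda$ below are illustrative choices for a quick simulation, not values prescribed by Theorem 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (hypothetical, not from the paper's experiments):
d, s = 20, 3      # dimension and sparsity
g, n_i = 50, 30   # number of clients and samples per client
lam = 0.5         # a single regularization parameter lambda for all clients

# True s-sparse shared parameter w* (data model of equation (1)).
w_star = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
w_star[support] = rng.choice([-1.0, 1.0], size=s)

# Part I: each client soft-thresholds its d marginal statistics (eqs. (2)-(3))
# and sends d bits R_ij to the server.
R = np.zeros((g, d), dtype=int)
for i in range(g):
    X = rng.standard_normal((n_i, d))                 # mutually independent predictors
    y = X @ w_star + 0.1 * rng.standard_normal(n_i)   # sub-Gaussian noise e_i
    alpha_hat = (y[:, None] * X).mean(axis=0)         # alpha_hat_ij = (1/n_i) sum_t y_i^t X_ij^t
    sigma_hat = (X ** 2).mean(axis=0)                 # sigma_hat_ij = (1/n_i) sum_t (X_ij^t)^2
    w_hat = np.sign(alpha_hat) * np.maximum(0.0, np.abs(alpha_hat) - lam) / sigma_hat
    R[i] = (w_hat != 0).astype(int)

# Part II: the server keeps every index that wins the majority vote.
R_bar = R.mean(axis=0)                                # R_j = (1/g) sum_i R_ij
S_hat = set(np.flatnonzero(R_bar >= 0.5).tolist())

print("recovered:", sorted(S_hat), "true:", sorted(support.tolist()))
```

Each individual client votes incorrectly on some entries with nonzero probability, but with $g$ clients the majority vote concentrates, which is exactly the mechanism formalized in Section 5.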
## 5.1 Mutually Independent Predictors

In this setting, the predictor variables are mutually independent of each other in all the clients, i.e., $\forall i \in [g]$, $E(X_{ij}X_{ik}) = 0$, $\forall j, k \in [d], j \neq k$. In this setting, we state the following result:

Theorem 1 (Mutually Independent Predictors). *For federated support learning in linear regression, as described in Section 3, with at least $g = \Omega(\log d)$ clients and mutually independent predictor variables, if each client has at least $n_i = \Omega(s)$ i.i.d. data samples and the following condition holds:*

$$\max_{i\in[g]}\frac{C}{\sqrt{s}}\left(8\rho_{i}^{2}\sqrt{\sum_{k\in S^{*}}w_{k}^{*2}}+8|\eta_{i}\rho_{i}|\right)<\lambda<\min_{j\in S^{*},i\in[g]}|w_{j}^{*}(\sigma_{jj}^{i})^{2}|-\frac{C}{\sqrt{s}}\left(8|w_{j}^{*}|\rho_{i}^{2}+8\rho_{i}^{2}\sqrt{\sum_{k\in S^{*},k\neq j}w_{k}^{*2}}+8|\eta_{i}\rho_{i}|\right)\tag{7}$$

*where $C > 0$ is an absolute constant independent of $n_i$, $s$ and $d$, then Algorithm 1 recovers the exact support of the shared parameter vector $w^*$ with probability at least $1 - O(\frac{1}{d})$.*

Proof. Recall that $R_j = \frac{1}{g}\sum_{i=1}^{g} R_{ij}$, where $R_{ij}$ is defined in Section 4.2. We prove that, with high probability, $R_j \geq \frac{1}{2}$, $\forall j \in S^*$, and $R_j < \frac{1}{2}$, $\forall j \in S^{*c}$. We will provide the proof in two parts. First, we deal with entries $j$ which are in the support of $w^*$, i.e., $j \in S^*$, and then we will deal with $j \in S^{*c}$.

For entries $j$ **in support** $S^*$. We begin our proof by first stating the following lemma.

Lemma 1. *For all $j \in S^*$, let $E(R_j) > \frac{1}{2}$. With probability at least $1 - 2\exp(-2g(-\frac{1}{2} + E(R_j))^2 + \log s)$, simultaneously $\forall j \in S^*$, we have $R_j \geq \frac{1}{2}$.*

For $g = \Omega(\log d)$, Lemma 1 holds with probability at least $1 - O(\frac{1}{d})$. Next we show that for any $j \in S^*$, $E(R_j)$ is indeed greater than $\frac{1}{2}$.

Lemma 2. 
*For $i \in [g]$, $j \in S^\star$, and some $0 < \delta \leq 1$, if the predictors are mutually independent of each other and $0 < \lambda_j^i < |w_j^\star (\sigma_{jj}^i)^2| - 8|w_j^\star|\rho_i^2\delta - 8\rho_i^2\sqrt{\sum_{k \in S^\star, k \neq j} w_k^{\star 2}}\,\delta - 8|\eta_i \rho_i|\delta$, then we have $E(R_j) \geq 1 - \frac{6}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2)$.*

In the above lemma, we assume $(\sigma_{jj}^i)^2 > 0$ for all $i \in [g]$ for clarity of exposition. In a more general setting, since $(\sigma_{jj}^i)^2 \geq 0$ is the population covariance, it would be easy to detect which clients $i$ have $\sigma_{jj}^i = 0$ through the empirical covariance, which would also be zero. Assuming that the proportion of clients for which $\sigma_{jj}^i = 0$ is not too big, one could trivially extend our lemma above by using only the clients $i$ for which $(\sigma_{jj}^i)^2 > 0$.

**For entries $j$ in non-support $S^{\star c}$.** Similar to the entries in the support, we begin this part by stating the following result for entries in the non-support.

Lemma 3. *For all $j \in S^{\star c}$, let $E(R_j) < \frac{1}{2}$. With probability at least $1 - 2\exp\left(-2g\left(\frac{1}{2} - E(R_j)\right)^2 + \log(d - s)\right)$, simultaneously $\forall j \in S^{\star c}$, we have $R_j \leq \frac{1}{2}$.*

Again, if $g = \Omega(\log d)$, then Lemma 3 holds with probability at least $1 - O(\frac{1}{d})$. It remains to show that for all $j \in S^{\star c}$, $E(R_j)$ is smaller than $\frac{1}{2}$. In particular, we use the result from the following lemma.

Lemma 4. *For $i \in [g]$, $j \in S^{\star c}$, and $0 < \delta \leq 1$, if the predictors are mutually independent of each other and if $\lambda_j^i > 8\delta\rho_i^2\sqrt{\sum_{k \in S^\star} w_k^{\star 2}} + 8|\eta_i\rho_i|\delta$, then we have $E(R_j) \leq \frac{4}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2)$.*

It is important to note that the statements of Lemma 2 and Lemma 4 are not high-probability statements, and therefore a union bound is not required for them. We notice that as long as $\frac{4}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2) \leq \frac{1}{2}$, then $E(R_j) \leq \frac{1}{2}$, $\forall j \in S^{\star c}$. Similarly, $E(R_j) \geq \frac{1}{2}$, $\forall j \in S^\star$, as long as $\frac{6}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2) \leq \frac{1}{2}$. The results from Lemmas 2 and 4 guarantee that the statements of Lemmas 1 and 3 hold. Choosing $n_i = \Omega(\frac{1}{\delta^2})$ and $\delta = \frac{C}{\sqrt{s}}$, $C > 0$, we prove Theorem 1.
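For intuition, the concentration step behind Lemmas 1 and 3 is a standard Hoeffding-plus-union-bound argument. A sketch (not the full proof): since $R_j$ is an average of $g$ independent random variables bounded in $[0, 1]$, for any fixed $j \in S^\star$,

$$\Pr\left[R_j < \tfrac{1}{2}\right] = \Pr\left[R_j - E(R_j) < -\left(E(R_j) - \tfrac{1}{2}\right)\right] \leq \exp\left(-2g\left(E(R_j) - \tfrac{1}{2}\right)^2\right),$$

and a union bound over the $s$ entries of $S^\star$ multiplies this failure probability by $s = e^{\log s}$, which is exactly the additive $\log s$ term in the exponent of Lemma 1; symmetrically, the $d - s$ non-support entries yield the $\log(d - s)$ term of Lemma 3.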
## 5.2 Correlated Predictors

The concentration inequalities used in the mutually independent predictors case do not lend themselves directly to the correlated predictors case, which makes this analysis more challenging. As described previously, the covariance matrix of $X^i$ is denoted by $\Sigma^i \in \mathbb{R}^{d \times d}$, with diagonal entries $\Sigma_{jj}^i = (\sigma_{jj}^i)^2$, $\forall j \in [d]$, and off-diagonal entries $\Sigma_{jk}^i = \sigma_{jk}^i$, $\forall j, k \in [d]$, $j \neq k$. While some of the lemmas from the previous subsection can be reused, we develop new technical lemmas for this setting. Below, we state the main results for this setting.

Theorem 2 (Correlated Predictors). *For federated support learning in linear regression, as described in Section 3, with at least $g = \Omega(\log d)$ clients and correlated predictor variables, if each client has $n_i = \Omega(s^2 \log s)$, $s > 1$, i.i.d. data samples and the following condition holds:*

$$\max_{j \in S^{\star c},\, i \in [g]} \Big|\sum_{k \in S^\star} w_k^\star \sigma_{jk}^i\Big| + \frac{C}{s}\left(\sum_{k \in S^\star} 8\sqrt{2}\,|w_k^\star|\left(1 + 4\max_j \frac{\rho_i^2}{(\sigma_{jj}^i)^2}\right)\max_j (\sigma_{jj}^i)^2 + 8|\eta_i\rho_i|\right) < \lambda < \min_{j \in S^\star,\, i \in [g]} \Big|w_j^\star(\sigma_{jj}^i)^2 + \sum_{k \in S^\star, k \neq j} w_k^\star \sigma_{jk}^i\Big| - \frac{C}{s}\left(8|w_j^\star|\rho_i^2 + \sum_{k \in S^\star, k \neq j} 8\sqrt{2}\,|w_k^\star|\left(1 + 4\max_j \frac{\rho_i^2}{(\sigma_{jj}^i)^2}\right)\max_j (\sigma_{jj}^i)^2 + 8|\eta_i\rho_i|\right) \quad (8)$$

*where $C > 0$ is an absolute constant independent of $n_i$, $s$, and $d$, then Algorithm 1 recovers the exact support of the shared parameter vector $w^\star$ with probability at least $1 - O(\frac{1}{d})$.*

Proof. Recall that $R_j = \frac{1}{g}\sum_{i=1}^{g} R_{ij}$, where $R_{ij}$ is defined in Section 4.2. We will again prove that, with high probability, $R_j \geq \frac{1}{2}$, $\forall j \in S^\star$, and $R_j < \frac{1}{2}$, $\forall j \in S^{\star c}$. Some of the results from Section 5.1 carry over without any changes; we provide new results for the remaining parts. First, we deal with entries $j$ which are in the support of $w^\star$, i.e., $j \in S^\star$, and then we deal with $j \in S^{\star c}$.

**For entries $j$ in support $S^\star$.** We observe that Lemma 1 holds even in this case. Thus, we start our proof by stating the following lemma.

Lemma 5. *For $i \in [g]$, $j \in S^\star$, and some $0 < \delta \leq \frac{1}{\sqrt{2}}$, if $0 < \lambda_j^i < |w_j^\star(\sigma_{jj}^i)^2 + \sum_{k \in S^\star, k \neq j} w_k^\star \sigma_{jk}^i| - 8|w_j^\star|\rho_i^2\delta - \sum_{k \in S^\star, k \neq j} 8\sqrt{2}\,|w_k^\star|\left(1 + 4\max_j \frac{\rho_i^2}{(\sigma_{jj}^i)^2}\right)\max_j (\sigma_{jj}^i)^2\,\delta - 8|\eta_i\rho_i|\delta$, then we have $E(R_j) \geq 1 - \frac{4s}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2)$.*

**For entries $j$ in non-support $S^{\star c}$.** Again, Lemma 3 follows directly. Thus, we present the following lemma to show that $E(R_j) < \frac{1}{2}$ for the entries in the non-support.

Lemma 6. *For $i \in [g]$, $j \in S^{\star c}$, and some $0 < \delta \leq \frac{1}{\sqrt{2}}$, if $\lambda_j^i > |\sum_{k \in S^\star} w_k^\star \sigma_{jk}^i| + \sum_{k \in S^\star} 8\sqrt{2}\,|w_k^\star|\left(1 + 4\max_j \frac{\rho_i^2}{(\sigma_{jj}^i)^2}\right)\max_j (\sigma_{jj}^i)^2\,\delta + 8|\eta_i\rho_i|\delta$, then we have $E(R_j) \leq \frac{4s+2}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2)$.*

The statements of Lemma 5 and Lemma 6 are not high-probability statements, and therefore a union bound is not required for them. Note that, as long as we have $(4s + 2)\frac{1}{g}\sum_{i=1}^{g}\exp(-n_i\delta^2) < \frac{1}{2}$, we will have $E(R_j) > \frac{1}{2}$, $\forall j \in S^\star$, and $E(R_j) < \frac{1}{2}$, $\forall j \in S^{\star c}$. The results from Lemmas 5 and 6 ensure that Lemmas 1 and 3 hold. Choosing $n_i = \Omega(\frac{1}{\delta^2}\log s)$ and $\delta = \frac{C}{s}$, $C > 0$, we prove Theorem 2.

## 5.3 Time Complexity

Each client performs $O(d n_i)$ basic calculations. Thus, the time complexity at each client is $O(sd)$ for mutually independent predictors and $O(s^2 d \log s)$ for correlated predictors. The centralized server gathers $d$ bits of information from $g$ clients in $O(dg) = O(d\log d)$ time.

## 6 Choice Of Regularizer

The conditions in equations (7) and (8) provide sufficient theoretical conditions on the regularizers $\lambda_j^i = \lambda$ for exact support recovery. It remains to be shown whether such a choice of $\lambda$ is feasible. If we let $\min_{j \in S^\star}\sigma_{jj}^i = \sigma_l^i$, then it can be shown that equation (7) has a feasible solution as long as we choose a $C$ such that $\frac{C}{\sqrt{s}} < \frac{w_l(\sigma_l^i)^2}{16\rho_i^2 w_h\sqrt{s} + 8 w_h\rho_i^2 + 16\eta_i\rho_i}$. This setting of $\lambda$ in equation (7) can be contrasted with the setting of the regularizer $\lambda_{\text{lasso}}$ of a centralized-server lasso problem in Theorem 1 of Wainwright (2009b). In particular, the lower bound on $\lambda$, $O(1) + O(\sqrt{1/n_i})$, is analogous to the lower bound on the centralized-server lasso regularizer $\lambda_{\text{lasso}}$, i.e., $\lambda_{\text{lasso}} > O\Big(\sqrt{\frac{\log d}{\sum_{i=1}^{g} n_i}}\Big)$. However, our choice of $\lambda$ is independent of $d$. In Section 8, we empirically validate the existence of a feasible range for $\lambda$.

A similar analysis can be carried out for the feasibility of equation (8). Let $M_{AB} \in \mathbb{R}^{|A|\times|B|}$ be the matrix constructed by restricting the rows of $M$ to entries in $A$ and the columns of $M$ to entries in $B$, for two sets $A, B \subseteq [d]$. Furthermore, for a matrix $M \in \mathbb{R}^{p\times q}$, let $\|M\|_\infty := \max_{i\in[p]}\sum_{j\in[q]}|M_{ij}|$. We assume that $\|\Sigma^i_{S^{\star c}S^\star}\|_\infty + \|\Sigma^i_{S^\star S^\star}\|_\infty < \epsilon$ for some constant $\epsilon > 0$. We further assume that $\epsilon$ is small enough that $w_l(\sigma_l^i)^2 - w_h\epsilon > 0$. This assumption is similar to the *incoherence* assumption in Wainwright (2009b): it ensures that predictors within the support are not highly correlated, and that predictors outside the support do not exert a high influence on the predictors within the support. For ease of notation, we denote the term $\left(1 + 4\max_r \frac{\rho_i^2}{(\sigma_{rr}^i)^2}\right)\max_r(\sigma_{rr}^i)^2$ by a positive constant $k_5$. We observe that if $\frac{C}{s} < \frac{w_l(\sigma_l^i)^2 - w_h\epsilon}{16\sqrt{2}k_5 s + 8 w_h\rho_i^2 + 16\eta_i\rho_i}$, then the choice of $\lambda$ in equation (8) is feasible. Notice how the upper bound $\lambda < w_l(\sigma_l^i)^2 - w_h\epsilon$ is similar to a combination of the minimum-weight condition and the incoherence condition from Wainwright (2009b). Similarly, the lower bound $\lambda > O(1) + O\big(\sqrt{\frac{\log s}{n_i}}\big)$ is analogous to the setting of the centralized-server lasso regularizer in Wainwright (2009b). We provide the following illustrative example to explain the similarity between the technical conditions required for the feasibility of equation (8) and the standard mutual incoherence condition.

## 6.1 An Illustrative Example

Consider the data generation process given in equation (1), i.e., $y = X^T w^\star + e$. We have dropped the subscript $i$ for brevity, as the following argument holds for any client $i \in [g]$.
We take $d = 3$; the parameter vector $w^\star \in \mathbb{R}^3$ is

$$w^\star = \begin{bmatrix} a \\ a \\ 0 \end{bmatrix}$$

where $a > 0$, i.e., $S^\star = \{1, 2\}$ and $S^{\star c} = \{3\}$. We assume that the entries of the design matrix, $X_k$, are in $\{-1, 1\}$, $\forall k = 1, 2, 3$. Furthermore, let $E(X_1 X_2) = q$, $E(X_2 X_3) = p$, $E(X_1 X_3) = 0$, $E(X_1) = 0$, and $E(X_3) = 0$, where $E(\cdot)$ denotes the expected value. Let $\Sigma = E(X X^T)$, where $X = [X_1, X_2, X_3]^T \in \{-1, 1\}^3$. Then,

$$\Sigma = \begin{bmatrix} 1 & q & 0 \\ q & 1 & p \\ 0 & p & 1 \end{bmatrix}$$

Thus, the feasibility criterion for $\lambda$ in equation (8) becomes:

$$|w_1^\star \sigma_{31} + w_2^\star \sigma_{32}| + A(w^\star, \Sigma, \rho, \eta)\,\frac{C}{s} < \min\!\left(|w_1^\star \sigma_{11}^2 + w_2^\star \sigma_{12}| + B(w^\star, \Sigma, \rho, \eta)\,\frac{C}{s},\; |w_2^\star \sigma_{22}^2 + w_1^\star \sigma_{21}| + D(w^\star, \Sigma, \rho, \eta)\,\frac{C}{s}\right)$$

where $A(w^\star, \Sigma, \rho, \eta)$, $B(w^\star, \Sigma, \rho, \eta)$, and $D(w^\star, \Sigma, \rho, \eta)$ are upper bounded by $O(s)$, and thus these products can be made arbitrarily small by choosing an appropriate $C$. In the worst-case scenario, the above equation becomes (by substituting $w^\star$ and $\Sigma$):

$$a|p| < a - a|q| + E(w^\star, \Sigma, \rho, \eta)\,\frac{C}{s}$$

Again, $E(w^\star, \Sigma, \rho, \eta)\,\frac{C}{s}$ can be made arbitrarily small by choosing an appropriate $C$. The above feasibility criterion simplifies to:

$$|p| + |q| < 1 + \epsilon, \quad (9)$$

where $\epsilon$ is an arbitrarily small quantity. We compare this with the mutual incoherence condition from Wainwright (2009b), which requires $\|\Sigma_{S^{\star c}S^\star}\Sigma_{S^\star S^\star}^{-1}\|_\infty \leq 1 - \alpha$ for some $\alpha \in (0, 1]$. This simplifies to

$$\left\| \begin{bmatrix} 0 & p \end{bmatrix} \begin{bmatrix} 1 & q \\ q & 1 \end{bmatrix}^{-1} \right\|_\infty < 1,$$

which is equivalent to

$$|p| + |q| < 1. \quad (10)$$

Notice how for small values of $p$ and $q$ (say $p = 0.2$, $q = 0.1$) both equations (9) and (10) are easy to satisfy, while for high values of $p$ and $q$ (say $p = 0.7$, $q = 0.6$) both conditions fail to hold.
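The equivalence between the two conditions can be checked numerically. The sketch below (a small illustration, assuming the $3\times 3$ covariance $\Sigma$ from the example with $S^\star = \{1, 2\}$ and $S^{\star c} = \{3\}$) evaluates the incoherence quantity $\|\Sigma_{S^{\star c}S^\star}\Sigma_{S^\star S^\star}^{-1}\|_\infty$ for the two parameter choices discussed above:

```python
import numpy as np

def incoherence(p, q):
    """|Sigma_{S^c S} Sigma_{S S}^{-1}|_inf for the 3x3 example,
    with support rows/columns {1, 2} and non-support row {3}."""
    Sigma_SS = np.array([[1.0, q], [q, 1.0]])   # restriction to S* x S*
    Sigma_ScS = np.array([[0.0, p]])            # restriction to S*^c x S*
    M = Sigma_ScS @ np.linalg.inv(Sigma_SS)
    return np.abs(M).sum(axis=1).max()          # max row-wise l1 norm

# Small correlations satisfy both (9) and (10) ...
assert incoherence(0.2, 0.1) < 1 and abs(0.2) + abs(0.1) < 1
# ... while large correlations violate both.
assert incoherence(0.7, 0.6) > 1 and abs(0.7) + abs(0.6) > 1
```

Working out the algebra, the computed quantity equals $|p|/(1-|q|)$, so requiring it to be below 1 is indeed the same as $|p| + |q| < 1$.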
## 7 Discussion On Robustness

Since our method only relies on the correct calculation of the median, it is naturally robust to the failure of some clients. To simulate the effect of model poisoning (Bhagoji et al., 2019) and stragglers, we consider that a proportion $0 < \beta < \frac{1}{2}$ of clients have gone rogue (or are straggling) and transmit wrong information to the centralized server. For the worst-case scenario, we assume that they report the complement of the support, i.e., they always send a bit "1" for entries in the non-support and a bit "0" for entries in the support. To accommodate this change in the case of correlated predictors, we slightly change the statements of Lemmas 5 and 6. Now we have, $\forall j \in S^\star$, $E(R_j) \geq (1-\beta) - \frac{4s}{g}\sum_{i=1}^{(1-\beta)g}\exp(-n_i\delta^2)$, and $\forall j \in S^{\star c}$, $E(R_j) \leq \frac{4s+2}{g}\sum_{i=1}^{(1-\beta)g}\exp(-n_i\delta^2) + \beta$. It is easy to see that, as long as we have $n_i > \frac{1}{\delta^2}\log\left(\frac{(8s+4)(1-\beta)}{1-2\beta}\right)$ data samples per client, we still have $E(R_j) > \frac{1}{2}$, $\forall j \in S^\star$, and $E(R_j) < \frac{1}{2}$, $\forall j \in S^{\star c}$, and all our results still hold. Similarly, for mutually independent predictors, our results hold as long as we have $n_i > \frac{1}{\delta^2}\log\left(\frac{12(1-\beta)}{1-2\beta}\right)$.

## 8 Validating Theory With Synthetic Experiments

In this section, we validate our theoretical results by conducting computational experiments. We provide the results for the experiments where the predictors are correlated. Data in each client is generated by following the generative process described in equation (1). Note that the predictors and the error term in different clients follow different sub-Gaussian distributions. To make the setting more general, we keep the correlation between entries in the support different from the correlation between an entry in the support and an entry in the non-support, and these further vary across clients. The regularization parameter $\lambda^i$ for each client is chosen such that the condition in Theorem 2 is satisfied for every client. All results reported here are averaged over 30 independent runs.
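Before the experiments, the worst-case rogue-client argument of Section 7 can be illustrated with a minimal simulation. This is a sketch with hypothetical sizes, in which honest clients are idealized as always reporting the correct support bits; only the median's breakdown point is being exercised:

```python
import numpy as np

def majority_support(votes):
    """Server-side majority vote over a g x d bit matrix."""
    return set(np.where(votes.mean(axis=0) >= 0.5)[0])

def simulate(g, d, true_support, beta):
    """Honest clients report the true support bits; a beta-fraction of
    rogue clients report the bitwise complement (worst case of Sec. 7)."""
    honest = np.zeros(d, dtype=int)
    honest[list(true_support)] = 1
    votes = np.tile(honest, (g, 1))
    votes[: int(beta * g)] = 1 - honest  # rogue clients flip every bit
    return majority_support(votes)

S_star = {2, 5, 7}
# Any rogue fraction beta < 1/2 leaves the majority vote unchanged ...
assert simulate(g=20, d=10, true_support=S_star, beta=0.3) == S_star
# ... while beta >= 1/2 can overturn it.
assert simulate(g=20, d=10, true_support=S_star, beta=0.6) != S_star
```

In the theory, honest clients are only correct with high probability, which is why the sample-size condition on $n_i$ above additionally depends on $\beta$.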
We conduct two separate experiments to verify that $n_i = \Omega(s^2\log s)$ independent samples per client and a total of $g = \Omega(\log d)$ clients are sufficient to recover the true support.

Figure 2: Phase transition curves. (a) Left: exact support recovery, averaged across 30 runs, against a varying number of samples per client for dimensions $d = 500, 1000,$ and $2000$, sparsity $s = 3$, and $g = \Omega(\log d)$ clients. (b) Right: exact support recovery, averaged across 30 runs, against a varying number of clients for sparsity $s = 10, 20, 40,$ and $50$, dimension $d = 1000$, and $n = \max(30, \Omega(s^2\log s))$ samples per client.

## 8.1 Exact Support Recovery Against Number Of Samples Per Client

This experiment was conducted for a varying number of predictors ($d = 500, 1000,$ and $2000$). For each of them, we fixed the number of clients to $g = 2\log d$. The sparsity $s$ was kept fixed at 3. The number of samples per client, $n_i$, was varied with a control parameter $C$ as $10C s^2\log s$. The performance of our method is measured by assigning the value 1 for exact recovery and 0 otherwise. We can see in Figure 2a that recovery initially remains at 0 and then exhibits a sharp jump, after which recovery becomes 1. Notice how all three curves align almost perfectly. This validates the result of our theorem and shows that, given $g = \Omega(\log d)$ clients, $n_i = \Omega(s^2\log s)$ samples per client are sufficient to recover the true support.

## 8.2 Exact Support Recovery Against Number Of Clients

The second experiment was conducted for a varying number of non-zero entries ($s = 10, 20, 40,$ and $50$) in the support of $w^\star$. The experiments were run for a setup with $d = 1000$ predictors. We fixed the number of samples per client, $n_i$, to $\max(30, \Omega(s^2\log s))$. This ensures that a minimum of 30 samples is available to each client, which is in line with our previous experiment, where exact recovery was achieved at around 30 samples per client.
The number of clients $g$ is varied with a control parameter $C$ as $10C\log d$. As in the previous experiment, performance is measured by assigning the value 1 for exact recovery and 0 otherwise. We can again see in Figure 2b that recovery initially remains at 0 and then goes to 1 as we increase the number of clients. We also notice that all four curves align nicely. This validates that, given $n_i = \Omega(s^2\log s)$ independent samples per client, $g = \Omega(\log d)$ clients are sufficient to recover the true support.

## 8.3 Robustness To Straggling Clients

Recall that, since our method only relies on the correct calculation of the median, it is naturally robust to the failure of some clients. Our next experiment simulates the effect of having a proportion $0 < \beta < \frac{1}{2}$ of straggling clients which transmit wrong information to the centralized server. For the worst-case scenario, we assume that they report the complement of the support, i.e., they always send a bit "1" for entries in the non-support and a bit "0" for entries in the support. Table 2 shows that our method is robust for proportions up to $\beta = 0.3$ of straggling clients.

Table 2: Exact support recovery averaged across 30 runs for different proportions $\beta$ of straggling clients, for dimension $d = 1000$, sparsity $s = 3$, $g = \Omega(\log d)$ clients, and $n_i = \Omega(s^2\log s)$ samples per client.

| $\beta$                     | 0.10 | 0.20 | 0.30 | 0.35 | 0.40 |
|-----------------------------|------|------|------|------|------|
| Mean exact support recovery | 100% | 100% | 100% | 80%  | 33%  |

## 8.4 Comparison With Centralized Lasso

We compared our method to the centralized-server lasso (Wainwright, 2009b), which, unlike our method, has access to all the data. Table 3 shows that both our method and the centralized-server lasso successfully recover the true support, but our method requires less computation.
| $d$  | Recovery (our method) | Recovery (centralized lasso) | Runtime (our method) | Runtime (centralized lasso) |
|------|-----------------------|------------------------------|----------------------|-----------------------------|
| 500  | 100%                  | 100%                         | 1.0                  | 4.1                         |
| 1000 | 100%                  | 100%                         | 2.4                  | 11.6                        |
| 2000 | 100%                  | 100%                         | 4.6                  | 26.7                        |

Table 3: Exact support recovery and runtime averaged across 30 runs for dimensions $d = 500, 1000,$ and $2000$, sparsity $s = 3$, $g = \Omega(\log d)$ clients, and $n_i = \Omega(s^2\log s)$ samples per client. For easy comparison, runtimes were normalized with respect to our method at $d = 500$.

## 9 Real World Experiment

In this section, we demonstrate the effectiveness of our method in determining the support of a sparse linear regression setup on a real-world data set. We used the BlogFeedback data set (Buza, 2014) from https://archive.ics.uci.edu/ml/datasets/BlogFeedback. This data set contains features extracted from blog posts, and the task is to predict how many comments a post will receive using these features. We divided the data into training and test data by choosing 80% of all samples at random to be training data. The details of the data set are as follows:

- Number of training samples: 41917
- Number of test samples: 10480
- Number of features (after removing all-zero columns): 276

## 9.1 Comparing Recovered Support With Centralized Lasso

Since the true support of the parameter vector is unknown for real-world data, we constructed a "centralized" support for comparison by running lasso on the complete training data set. This simulates a centralized server with access to all the data. The "centralized" support contains 42 elements. We want to compare the support recovered by our method, called the "federated" support, with this "centralized" support.
## 9.2 Performance Measures

We defined the following performance measure for comparison:

$$\text{Jaccard Index} = \frac{\text{Number of common elements in the "federated" and "centralized" supports}}{\text{Number of elements in the union of the "centralized" and "federated" supports}} \quad (11)$$

## 9.3 Case 1

For the first experiment, we divided the data set randomly into 419 clients, with each client containing 100 samples (except the last one, which contains more to account for the imbalance). This is a highly distributed setting where each client has access to a small number of samples. We conducted our experiment using $\lambda = 0.08$. Our method recovered a support with 48 elements, with a Jaccard index of 0.76. To compare generalization on the test data set, we computed the parameter vectors $w_{\text{fed}}$ and $w_{\text{cen}}$ by running simple linear regression on the training samples restricted to the "federated" support and the "centralized" support, respectively. After that, the mean squared error (MSE) was computed on the test samples using the recovered $w_{\text{fed}}$ and $w_{\text{cen}}$. We observed that the MSE for our method, 0.71545, is slightly better than the MSE for the "centralized" support, 0.71615.

## 9.4 Case 2

For the second experiment, we divided the data set randomly into 41 clients, with each client containing 1000 samples (except the last one, which contains more to account for the imbalance). This is a setting with only a few clients. We again conducted our experiment using $\lambda = 0.08$. Our method recovered a support with 49 elements, with a Jaccard index of 0.8. We followed the same procedure as in Case 1 to compute the MSE on the test samples. We observed that the MSE for our method, 0.70525, is slightly better than the MSE for the "centralized" support, 0.70548.

We see that in both cases, our method recovered a set of support elements similar to the support recovered by lasso running on a centralized server. Our method also generalizes to the test data set in a similar manner, even performing better in terms of MSE.
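For concreteness, the Jaccard index of equation (11) can be computed directly from the two recovered supports. The sketch below uses hypothetical supports whose sizes mirror Case 1 (48 and 42 elements), not the actual recovered feature sets:

```python
def jaccard(federated, centralized):
    """Jaccard index between two recovered supports (equation (11))."""
    federated, centralized = set(federated), set(centralized)
    return len(federated & centralized) / len(federated | centralized)

# Hypothetical example: 38 shared features out of 48 + 42 recovered.
fed = set(range(48))        # 48 elements in the "federated" support
cen = set(range(10, 52))    # 42 elements in the "centralized" support
print(round(jaccard(fed, cen), 2))  # -> 0.73
```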
Table 4: Jaccard index for different regularizer values $\lambda$.

| $\lambda$ | 0.04 | 0.06 | 0.08 | 0.10 | 0.12 | 0.14 | 0.16 | 0.18 | 0.20 |
|-----------|------|------|------|------|------|------|------|------|------|
| Case 1    | 0.46 | 0.83 | 0.76 | 0.73 | 0.76 | 0.77 | 0.63 | 0.53 | 0.42 |
| Case 2    | 0.56 | 0.71 | 0.80 | 0.82 | 0.75 | 0.78 | 0.83 | 0.74 | 0.76 |

## 9.5 Robustness To The Choice Of Regularizer

To evaluate the robustness of our method to the choice of the regularizer $\lambda$, we tested a range of regularizer values on the real-world data. Table 4 shows that our method is relatively robust across a wide range of regularizer values (i.e., $0.06 \leq \lambda \leq 0.14$).

## 10 Concluding Remark

In this paper, we propose a simple and easy-to-implement method for learning the exact support of the parameter vector of a linear regression problem in a federated learning setup. We provide theoretical guarantees for the correctness of our method and show that it runs in polynomial sample and time complexity. Furthermore, our method is robust to client failures, model poisoning, and straggling clients. As a future direction, it would be interesting to apply our ideas to other estimation problems involving sparse regression, such as non-parametric regression (Ravikumar et al., 2007), learning probabilistic graphical models (Ravikumar et al., 2010), and diffusion networks (Daneshmand et al., 2014). These problems are often handled in the centralized setting, and it would be interesting to tackle them in the federated setting without compromising performance.

## References

Bhagoji, A. N., Chakraborty, S., Mittal, P., and Calo, S. Analyzing federated learning through an adversarial lens. *International Conference on Machine Learning*, 2019.

McMahan, H. B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. Communication-efficient learning of deep networks from decentralized data.
In *Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2017.

Foucart, S. and Rauhut, H. A mathematical introduction to compressive sensing. *Bull. Am. Math*, 54(2017):151–165, 2017.

He, L., Bian, A., and Jaggi, M. COLA: Decentralized linear learning. In *Advances in Neural Information Processing Systems*, pp. 4536–4546, 2018.

Konečný, J., McMahan, B., and Ramage, D. Federated optimization: Distributed optimization beyond the datacenter. *Neural Information Processing Systems, Workshop on Optimization for Machine Learning*, 2015.

Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. Federated learning: Strategies for improving communication efficiency. *Neural Information Processing Systems, Workshop on Private Multi-Party Machine Learning*, 2016.

Li, T., Sahu, A. K., Talwalkar, A., and Smith, V. Federated learning: Challenges, methods, and future directions. *IEEE Signal Processing Magazine*, 37(3):50–60, 2020a.

Li, T., Sanjabi, M., and Smith, V. Fair resource allocation in federated learning. *International Conference on Learning Representations (ICLR)*, 2020b.

McDiarmid, C. On the method of bounded differences. *Surveys in Combinatorics*, 141(1):148–188, 1989.

Mohri, M., Sivek, G., and Suresh, A. T. Agnostic federated learning. *International Conference on Machine Learning*, 2019.

Ravikumar, P., Wainwright, M. J., Raskutti, G., Yu, B., et al. High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. *Electronic Journal of Statistics*, 5:935–980, 2011.

Smith, V., Chiang, C.-K., Sanjabi, M., and Talwalkar, A. S. Federated multi-task learning. In *Advances in Neural Information Processing Systems*, pp. 4424–4434, 2017a.

Smith, V., Forte, S., Ma, C., Takáč, M., Jordan, M. I., and Jaggi, M. CoCoA: A general framework for communication-efficient distributed optimization. *The Journal of Machine Learning Research*, 18(1):8590–8638, 2017b.

Wainwright, M. J.
Chapter 2: Basic tail and concentration bounds, 2015.

Wainwright, M. J. Information-theoretic bounds on sparsity recovery in the high-dimensional and noisy setting. *IEEE Transactions on Information Theory*, 55:5728–5741, December 2009a.

Wainwright, M. J. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (lasso). *IEEE Transactions on Information Theory*, 55(5):2183–2202, 2009b.

Wang, H., Yurochkin, M., Sun, Y., Papailiopoulos, D., and Khazaeni, Y. Federated learning with matched averaging. *International Conference on Learning Representations (ICLR)*, 2020.

Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, T. N., and Khazaeni, Y. Bayesian nonparametric federated learning of neural networks. *International Conference on Machine Learning*, 2019.

Grammenos, A., Mendoza-Smith, R., Mascolo, C., and Crowcroft, J. Federated principal component analysis. *Neural Information Processing Systems*, 2020.

Hsu, D., Kakade, S., and Zhang, T. A tail inequality for quadratic forms of subgaussian random vectors. *Electronic Communications in Probability*, 2012.

Buza, K. Feedback prediction for blogs. In *Data Analysis, Machine Learning and Knowledge Discovery*, Springer, 2014.

Daneshmand, H., Gomez-Rodriguez, M., Song, L., and Schoelkopf, B. Estimating diffusion network structures: Recovery conditions, sample complexity & soft-thresholding algorithm. In *International Conference on Machine Learning*, pp. 793–801, 2014.

Ravikumar, P., Liu, H., Lafferty, J., and Wasserman, L. SpAM: Sparse additive models. In *Proceedings of the 20th International Conference on Neural Information Processing Systems*, pp. 1201–1208. Curran Associates Inc., 2007.

Ravikumar, P., Wainwright, M. J., Lafferty, J. D., et al. High-dimensional Ising model selection using l1-regularized logistic regression. *The Annals of Statistics*, 38(3):1287–1319, 2010.

McMahan, H. B., et al. Advances and open problems in federated learning.
*Foundations and Trends® in Machine Learning*, 2021.

Wang, J., Charles, Z., Xu, Z., Joshi, G., McMahan, H. B., Al-Shedivat, M., Andrew, G., Avestimehr, S., Daly, K., Data, D., et al. A field guide to federated optimization. *arXiv preprint arXiv:2107.06917*, 2021.

Genovese, C. R., Jin, J., Wasserman, L., and Yao, Z. A comparison of the lasso and marginal regression. *The Journal of Machine Learning Research*, 2012.

Fan, J. and Lv, J. Sure independence screening for ultrahigh dimensional feature space. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 2008.
Review 1: Summary: The paper proposes a novel method for learning the exact support of a sparse linear regression model in a federated learning setting, where the data is distributed across multiple clients with low computational and communication capabilities. The method does not require the clients to solve any optimization problem or estimate the covariance matrix, but only uses simple calculations and one-shot communication of d bits to the centralized server. The support is estimated at the central node using majority voting and the authors provide theoretical guarantees for the exact support recovery under various conditions on the number of clients, the number of samples per client, the sparsity level, and the regularization parameter. The method is potentially robust to client failures, model poisoning, and straggling clients, and can handle different classes of predictor variables, such as mutually independent, correlated Gaussian, or correlated sub-Gaussian. The paper also demonstrates the effectiveness of the method on synthetic and real-world datasets and shows that it can recover a similar or better support than the centralized lasso estimator without needing to aggregate all the data at the central node. Strengths and Weaknesses: Strengths: 1. The work proposes a novel and simple method for exact support recovery of sparse linear regression in the federated learning setup, which does not require any optimization problem or complex computation at the client level. 2. The work provides provable theoretical guarantees for the sample complexity and time complexity of the proposed method and shows that it matches or improves the optimal bounds of the centralized setting for some classes of predictor variables. Weaknesses: 1. 
My main concern is that it seems from (7) and (8) that we need to know the true $w^*$, $\sigma$, and $\rho$ to choose a regularization parameter $\lambda$ such that the solution satisfies the theoretical guarantees, but in practice we will not know $w^*$, $\sigma$, $\rho$. Moreover, even in the experiments, $\lambda$ is set to 0.08 without any justification for how it is chosen. Please provide some explanation/heuristic that can be used to choose $\lambda$ in practice, or provide an ablation study illustrating the effect of varying $\lambda$ on the solution so that we can see the consequences of choosing a "bad" $\lambda$. 2. I have a similar concern for the discussion on robustness in Section 7, where it seems like we will need to know the proportion $\beta$ of clients that have gone rogue to know whether we have sufficient $n_i$ to trust our output. Is it realistic to assume that we will know $\beta$ beforehand? 3. There are no experiments showing that the method is indeed robust to client failures, data poisoning, or straggling, even though Section 7 claims that it will be robust to these issues. I would recommend adding an experiment for at least one of these scenarios to strengthen the claims in Section 7. 4. There are also no comparisons with other federated LASSO approaches such as Smith et al. (2017b; a) from Section 1.1. Even if none of them consider support recovery explicitly, I would suggest comparing with the support of the LASSO solution to illustrate the accuracy and computational efficiency of the proposed approach over baselines. Requested Changes: In addition to addressing the points highlighted under weaknesses above, I would also recommend illustrating some real-world applications and benefits of accurate support recovery in the introduction of the paper to illustrate the importance of the problem being solved, since currently that may not be clear to a reader unfamiliar with the rest of the literature in this area. 
Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper enables federated learning for the LASSO problem. In particular, it shows how to optimize this model and provides theoretical analysis for the convergence rates and statistical bounds. Strengths and Weaknesses: 1. The writing is good. It is easy to follow. 2. This paper provides a theoretical analysis for the federated lasso problem. Requested Changes: 1. This paper claims the proposed algorithm has no optimization for finding the optimal solution. This is not true. In fact, it uses the soft-thresholding approach to solve the lasso problem. Thus, it overclaims the contribution. 2. The proposed algorithm is weird. It seems there is only one iteration. However, the standard soft-thresholding approach is an iterative approach, which requires many iterations. Why does the proposed algorithm just need one iteration? 3. What unique challenges are there in the theoretical analysis compared with standard lasso problems? Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper presents a new distributed algorithm for support recovery for LASSO in the federated setting where individual users have different data distributions. The algorithm is shown to have low computational cost for each of the users and requires a simple majority vote at the central server, near-optimal statistical complexity (however, requiring that every user has $\widetilde{\Omega} (s)$ samples in the uncorrelated case, and $\widetilde{\Omega} (s^2)$ samples in the correlated case), and is also robust. The algorithm is theoretically shown to work when the individual user covariances are diagonal and non-diagonal. The theoretical results are rounded out with an experimental analysis of the algorithm. Strengths and Weaknesses: The paper presents a new distributed algorithm for LASSO in the federated setting. 
The main point I take away from this paper is that the algorithm is computationally very efficient and has a very small communication footprint (only requiring a one-shot transmission of a support). I think the results in the paper are interesting, both to the federated learning community, as well as for designing better algorithms for LASSO, as the algorithm can be run as a parallel algorithm on $\log(d)$ threads on a single server to solve the LASSO problem. It would be very interesting to add in a section in the paper comparing the computational cost of this algorithm with other state-of-the-art fast algorithms for LASSO in the centralized setting. The communication footprint of the algorithm is very small, and I believe the analysis can be improved since each user returns a support size which is also $\widetilde{O} (s)$ with high probability (requires a proof), in which case, the support of each user can be communicated in $\widetilde{O} (s \log(d))$ bits by transmitting their locations. I believe this should be asymptotically optimal. Since the algorithm aggregates the individual supports by a majority rule, existing approaches can be used to show that the algorithm is robust to stragglers, as the authors show. This is nice, as in the federated setting, individual devices may drop in and out of the network, and may be subject to differing latencies (stragglers), or be malicious. In general, looking at the analysis in the paper for the uncorrelated case, it appears that as long as $\sigma_{jj}^i$ is not too small for any $(i,j)$, and $w_i$ is not too small, by choosing a sufficiently small value of $\delta$, the algorithm's correctness can be established. However, in this case, it is confusing to me why the algorithm fails when $\sigma_{jj}^i = 0$ for some $i$. 
In this case, even though the support is not identifiable from the data present with user $i$, since this user's data cannot be used to tell whether $w_j$ is or isn't $0$, the presence of other users with $\sigma_{jj}^i \ne 0$ should allow the support to be identifiable. I believe that the current algorithm should still succeed in this case, and this follows from the same reasons that the algorithm is robust. This allows the minimum over all $i \in [g]$ to be changed into the $(1-\beta)^{th}$ percentile smallest minimum in the upper bound on $\lambda$. In the limiting case where $\sigma_{jj}^i = 0$ for all $i$ and some $j$, the current analysis seems to be tight. The picture seems more confusing in the correlated case. I fail to find a non-trivial example of parameters where eq. (8) holds. Notice that the lower bound on $\lambda$ is of the form $\max_{j \in S_c^\star, i \in [g]} | \sum_{k \in S^\star} w_k^\star \sigma_{jk}^i| + \cdots$, while the upper bound is of the form $\min_{j \in S^\star, i \in [g]} |w_j^\star (\sigma_{jj}^i)^2 + \sum_{k \in S^\star, k\ne j} w_k^\star \sigma_{jk}^i| - \cdots$, where the $\cdots$ terms contain all the dependency on $\delta$, and by extension, $n_i$. The $\min$ in the upper bound can be relaxed to $\min_{j \in S^\star_c}$. However, even then, if $\sigma_{jj}^i < 1$ for any $i$, the upper and lower bounds on $\lambda$ contradict each other and correctness can no longer be established for the algorithm, let alone finding a suitable value of $\lambda$. This indicates to me that the analysis in the correlated case might be loose. I do not have an intuition about whether the algorithm might fail in the case (although the experiments seem to not suggest this possibility). Overall, I think the contribution in the paper is nice, however, not the tightest possible.
I would have appreciated it if the authors spent some more effort in making the analysis in the paper tighter, and providing some more insights about when the bounds work, and when they fail. It's nice to see an experimental section and a study into the practical performance of the algorithm. The writing of the paper is ok, but the appendix can be improved quite significantly. It would help to include a little bit of structure as to what lemmas prove what results, and reduce the number of lemmas (since many of them seem to repeat). I would appreciate it if the authors put some effort into this, as the current version of the appendix is not up to publication standard in my opinion. Minor: 1. $n_i = o(s \log(d))$ is not the same as $n_i \le k s \log(d)$ for some constant $k > 0$. 2. In the paragraph right before 4.2 there is a typo: $n_i$ is independent of $O(1)$ with respect of dimension $d$. 3. Use a different notation than $e_i$ for noise (e.g., $Z_i$). It's easy for a reader to confuse this with the standard basis vectors. 4. How are Lemmas 10 and 11 different from Lemmas 8 and 9? The statements are identical? (same for Lemmas 12 and 13). It would be nice to streamline the appendix and remove bloat in the form of repeating lemmas. Requested Changes: See strengths/weaknesses. Broader Impact Concerns: None. ================================================== Review 4: Summary: The paper considers the problem of learning the support of a distribution over a distributed set of agents. The novelty of this paper resides in the analysis of the convergence results and complexities provided. The paper showcases the method in a real-world dataset. Strengths and Weaknesses: # Strengths **S1** - The paper presents a simple and clear algorithm to obtain the exact support for the federated Lasso problem that requires little computation. The advantage of this method lies in its simplicity and its theoretical guarantees of convergence.
**S2** - The paper presents a numerical experiment with real-world data that validates the theoretical claims put forward by the authors. **S3** - The authors' ‘one-shot communication' framework makes the method widely applicable in practice. Combined with the very low computation required, this method seems very well suited for a wide range of IoT applications. # Weaknesses **W1** - The paper seems to cover a very limited scenario where the data is generated according to a linear model. Even though these results are important to understanding federated learning under Lasso regularization, the results provided in this work have very limited applicability in practice. **W2** - The paper's structure makes it difficult to understand. Even though Lasso is a widely known problem, the exact problem formulation is never explicitly written. This makes the first read of the paper rough and unclear. I would suggest explicitly writing the problem formulation. **W3** - The choice of a single Lambda for all agents seems like an arbitrary solution that is key in the method but that is not explained. The authors say: *“we will present a more practical choice of a single lambda across all clients and entries”* What are the implications of this decision and what is the intuition for it? Requested Changes: # Minor Comments **Lambda -** It seems to me that Lambda depends on quantities that cannot be evaluated in practice. As is common in ML, the value of Lambda can be obtained empirically, but how sensitive the algorithm is to the correct choice of Lambda is unclear to me. The experiments section indicates that Lambda=0.08 was used, but nothing is said of why this value has been chosen or how it has been selected. I would suggest the authors consider providing more information regarding the choice of lambda in practice. **Modifications** “It is obvious that for” → “For “ Broader Impact Concerns: None.
================================================== Metareview: Recommendation: Accept with minor revision Comment: I have selected "accept with minor revision" though the revisions I'd like to see are on the more substantial end of the spectrum. Multiple reviewers mentioned that the manuscript generally needs to be improved for quality and ease of understanding. In particular, the organization of lemmas in the work is currently confusing. Cementing this, along with more explicit problem statements, settings of interest, and relations to prior works on LASSO (especially in distributed settings) would greatly benefit the work. I would also ask the authors to clean up Section 6 in particular, on the choice of the regularizer. Multiple reviews mentioned that this is a crucial point. Therefore it seems that making this clearer would be particularly beneficial. Currently, much of Section 6 is stated informally, in a paragraph structure. I believe it'd behoove the authors to make formal theorem/lemma statements that can be pointed to directly. Additionally, better discussion of related theory in (Wainwright, 2009b) would be useful - can you extract the theorems in question and pose them in the nomenclature of this paper, to better enhance audience understanding? ==================================================
# Beyond Distribution Shift: Spurious Features Through The Lens Of Training Dynamics

Nihal Murali *nihal.murali@pitt.edu*
Intelligent Systems Program, University of Pittsburgh

Aahlad Puli *aahlad@nyu.edu*
Department of Computer Science, New York University

Ke Yu *yu.ke@pitt.edu*
Intelligent Systems Program, University of Pittsburgh

Rajesh Ranganath *rajeshr@cims.nyu.edu*
Department of Computer Science, New York University

Kayhan Batmanghelich *batman@bu.edu*
Department of Electrical and Computer Engineering, Boston University

Reviewed on OpenReview: *https://openreview.net/forum?id=Tkvmt9nDmB*

## Abstract

Deep Neural Networks (DNNs) are prone to learning spurious features that correlate with the label during training but are irrelevant to the learning problem. This hurts model generalization and poses problems when deploying them in safety-critical applications. This paper aims to better understand the effects of spurious features through the lens of the learning dynamics of the internal neurons during the training process. We make the following observations: (1) While previous works highlight the harmful effects of spurious features on the generalization ability of DNNs, we emphasize that not all spurious features are harmful. Spurious features can be "*benign*" or "*harmful*" depending on whether they are "harder" or "easier" to learn than the core features for a given model. This definition is model and dataset dependent. (2) We build upon this premise and use *instance difficulty* methods (like Prediction Depth (Baldock et al., 2021)) to quantify "easiness" for a given model and to identify this behavior during the training phase. (3) We empirically show that the harmful spurious features can be detected by observing the learning dynamics of the DNN's early layers. In other words, easy features learned by the initial layers of a DNN early during the training can (potentially) hurt model generalization.
We verify our claims on medical and vision datasets, both simulated and real, and justify the empirical success of our hypothesis by showing the theoretical connections between Prediction Depth and information-theoretic concepts like V-usable information (Ethayarajh et al., 2021). Lastly, our experiments show that monitoring only accuracy during training (as is common in machine learning pipelines) is insufficient to detect spurious features. We, therefore, highlight the need for monitoring early training dynamics using suitable instance difficulty metrics.

## 1 Introduction

DNNs tend to rely on spurious features even in the presence of *core* features that generalize well, which poses serious problems when deploying them in safety-critical applications such as finance, healthcare, and autonomous driving (Geirhos et al., 2020; Oakden-Rayner et al., 2020; DeGrave et al., 2021). A feature is termed spurious if it is correlated with the label during training but is irrelevant to the learning problem (Saab et al., 2022; Izmailov et al., 2022). Previous works use a distribution shift approach to explain this phenomenon (Kirichenko et al., 2022; Wiles et al., 2021; Bellamy et al., 2022; Adnan et al., 2022). However, we get additional insights by emphasizing the learnability and difficulty of these features. We find that not all spurious features are harmful. Spurious features can be "*benign*" or "*harmful*" depending upon their difficulty with respect to signals that generalize well. We show how monitoring example difficulty metrics like Prediction Depth (PD) (Baldock et al., 2021) can reveal the harmful spurious features quite early during training. Early detection of such features is important as it can help develop intervention schemes to fix the problem early. To the best of our knowledge, we are the first to use the training dynamics of the model to detect spurious feature learning.
The premises that support our hypothesis are as follows: *(P1)* Spurious features hurt generalization only when they are *"easier"* to learn than the core features (see Fig-1). *(P2)* Initial layers of a DNN tend to learn easy features, whereas the later layers tend to learn the harder ones (Zeiler & Fergus, 2014; Baldock et al., 2021). *(P3)* Easy features are learned much earlier than the harder ones during training (Mangalam & Prabhu, 2019; Rahaman et al., 2019). Premises *(P1-3)* lead us to conjecture that: "Monitoring the easy features learned by the initial layers of a DNN early during the training can help identify the harmful spurious features." We make the following observations. First, we show that spurious features can be benign (harmful) depending upon whether they are more challenging (easier) to learn than the core features. Second, we empirically show that our hypothesis works well on medical and vision datasets (sections-4.2,A.3), both simulated and real, regardless of the DNN architecture used. We justify this empirical success by theoretically connecting prediction depth with information-theoretic concepts like V-usable information (Ethayarajh et al., 2021) (sections-3,A.1). Lastly, our experiments highlight that monitoring only accuracy during training, as is common in machine learning pipelines, is insufficient to detect spurious features. In addition, we need to monitor the learning dynamics of the model using suitable instance difficulty metrics to detect harmful spurious features (section-4.3). This will not only save time and computational costs, but also help develop reliable models that do not rely on spurious features. ## 2 Related Work Not all spurious features hurt generalization: Geirhos et al. (2020) define spurious features as features that exist in standard benchmarks but fail to hold in more challenging test conditions. Wiles et al. 
(2021) view spurious feature learning as a distribution shift problem where two or more attributes are correlated during training but are independent in the test data. Bellamy et al. (2022) use causal diagrams to explain spurious correlations as features that are correlated with the label during training but not during deployment. All these papers characterize spurious correlations purely as a consequence of distribution shift; methods exist to build models robust to such shifts (Arjovsky et al., 2019; Krueger et al., 2021; Puli et al., 2022). The distribution shift viewpoint cannot distinguish between benign and harmful spurious features. In contrast, we stress the learnability and difficulty of spurious features, which helps separate the benign spurious features from the harmful ones (see Fig-1). Previous works like Shah et al. (2020); Scimeca et al. (2021) hint at this by saying that DNNs are biased towards simple solutions, and Dagaev et al. (2021) use the "too-good-to-be-true" prior to emphasize that simple solutions are unlikely to be valid across contexts. Veitch et al. (2021) distinguish various model features using tools from causality and stress test the models for counterfactual invariance. Other works in natural language inference, visual question answering, and action recognition also assume that simple solutions could be harmful in nature (Sanh et al., 2020; Li & Vasconcelos, 2019; Clark et al., 2019; Cadene et al., 2019; He et al., 2019). We take this line of thought further by hypothesizing that simple solutions or, more explicitly, easy features, which affect the early training dynamics of the model, can be harmful in nature. We suggest using suitable example difficulty metrics to measure this effect.

![2_image_0.png](2_image_0.png)

Figure 1: An illustration of how the distribution shift viewpoint cannot distinguish between the benign and the harmful spurious features.
This viewpoint suggests that training and testing are different graphical models between input (x), output (y), and the spurious feature (s). If x can predict s, and y is not correlated to s on the test data, then s is viewed as a spurious feature. (A) The figure shows two scenarios for even-odd classification. Scenario 1 shows a dataset where all even numbers have a spurious composite number (located at the top-left), and odd numbers have a prime number. Scenario 2 shows a dataset where all odd numbers have a spurious white patch. The spurious white patch is harmful in nature as it is an easy feature that the model can learn, causing poor performance on the test data. In contrast, classifying prime numbers, as shown in scenario 1, is challenging, so the model ignores such benign spurious features, which do not affect test-data performance. This shows not all spurious features hurt generalization.

Estimating Example Difficulty: There are different metrics in the literature for measuring instance-specific difficulty (Agarwal et al., 2022; Hooker et al., 2019; Lalor et al., 2018). Jiang et al. (2020) train many models on data subsets of varying sizes to estimate a consistency score that captures the probability of predicting the true label for a particular example. Toneva et al. (2018) define example difficulty as the minimum number of iterations needed for a particular example to be predicted correctly in all subsequent iterations. Agarwal et al. (2022) propose a VoG (variance-of-gradients) score which captures example difficulty by averaging the pre-softmax activation gradients across training checkpoints and image pixels. Feldman & Zhang (2020) use a statistical viewpoint of measuring example difficulty and develop influence functions to estimate the actual leave-one-out influences for various examples. Ethayarajh et al. (2021) use an information-theoretic approach to propose a metric called pointwise V-usable information (PVI) to compute example difficulty.
Baldock et al. (2021) define prediction depth (PD) as the minimum number of layers required by the DNN to classify a given input and use this to compute instance difficulty. In our experiments, we use the PD metric to provide a proof of concept for our hypothesis.

Monitoring Training Dynamics: Other works that monitor training dynamics have a significantly different goal than ours. While we monitor training dynamics to detect the harmful spurious features, Rabanser et al. (2022) use neural network training dynamics for selective classification. They use the disagreement between the ground truth label and the intermediate model predictions to reject examples and obtain a favorable accuracy/coverage trade-off. Feng & Tu (2021) use a statistical physics-based approach to study the training dynamics of stochastic gradient descent (SGD). While they study the effect of mislabeled data on SGD training dynamics, we study the effect of spurious features on the early-time learning dynamics of DNNs. Hu et al. (2020) use the early training dynamics of neural networks to show that a simple linear model can often mimic the learning of a two-layer fully connected neural network. Adnan et al. (2022) have a similar goal as ours and use mutual information to monitor spurious feature learning. However, computing mutual information is intractable for high-dimensional data, and hence their work is only limited to infinite-width neural networks that offer tractable bounds for this computation. Our work, on the contrary, is more general and holds for different neural network architectures and datasets.

## 3 Background And Methodology

In this section, we formalize the notion of spurious features, task difficulty, and harmful vs. benign features. Let $P_{tr}$ and $P_{te}$ be the training and test distributions defined over the random variables X (input), y (label), and s (*latent* spurious feature).
Spurious Feature (s): A latent feature s is called spurious if it is correlated with label y in the training data but not in the test data. Specifically, the joint probability distributions $P_{tr}$ and $P_{te}$ can be factorized as follows:

$$\begin{aligned}P_{tr}(\mathbf{X},\mathbf{y},\mathbf{s})&=P_{tr}(\mathbf{X}|\mathbf{s},\mathbf{y})\,P_{tr}(\mathbf{s}|\mathbf{y})\,P_{tr}(\mathbf{y})\\P_{te}(\mathbf{X},\mathbf{y},\mathbf{s})&=P_{tr}(\mathbf{X}|\mathbf{s},\mathbf{y})\,P_{te}(\mathbf{s})\,P_{tr}(\mathbf{y}).\end{aligned}$$

The variable s appears to be related to y but is not. This is shown in Fig-1. We also introduce notation for task difficulty. The difficulty of a task depends on the model and data distribution (X, y).

Task Difficulty (Ψ): Let $\Psi^{P}_{M}(\mathbf{X},\mathbf{y})$ indicate the difficulty of predicting label y from input X for a model M, such that $\mathbf{X},\mathbf{y}\sim P$. We give examples of two metrics that can be used for measuring $\Psi^{P}_{M}$.

Example-1 for $\Psi^{P}_{M}$ **(Prediction Depth):** Baldock et al. (2021) proposed the notion of Prediction Depth (PD) to estimate example difficulty. The PD of an input is defined as the minimum number of layers the model requires to classify the input. The lower the PD of an input, the easier it is to classify. It is computed by building k-NN classifiers on the embedding layers of the model, and the earliest layer after which all subsequent k-NN predictions remain the same is the PD of the input. Figure-2 illustrates how we compute prediction depth. PD can be mathematically defined as follows:

$$\mathrm{PD}(x)=\min_{k}\{k \mid f_{knn}^{k}(x)=f_{knn}^{i}(x);\; i>k\}\qquad(1)$$

where $f_{knn}^{i}$ is the k-NN classifier at layer-i (see Appendix-A.6 for details), $\phi^{i}$ is the feature embedding for the given input at layer-i, and N is the index of the final layer of the model. We also use the notion of undefined PD to work with models that are not fully trained. We treat k-NN predictions close to 0.5 (for a binary classification setting) as invalid.
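As an illustration of Eq-1, the PD of a query can be computed by running a k-NN classifier on each layer's embeddings and taking the earliest layer after which the prediction never changes. The following is a minimal NumPy sketch under simplifying assumptions of our own (Euclidean distance, a small probe set, function names are ours); the actual setup follows Appendix-A.6:

```python
import numpy as np

def knn_predict(probe_feats, probe_labels, query_feat, k=30):
    """Majority vote among the k nearest probe-set embeddings (Euclidean)."""
    dists = np.linalg.norm(probe_feats - query_feat, axis=1)
    nearest = probe_labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

def prediction_depth(query_feats, probe_feats, probe_labels, k=30):
    """query_feats[i]: embedding of the query at layer i+1 (1-D array).
    probe_feats[i]: (N, d_i) embeddings of the probe set at layer i+1.
    Returns the 1-indexed earliest layer after which all subsequent
    k-NN predictions agree with the final one, as in Eq. 1."""
    preds = [knn_predict(probe_feats[i], probe_labels, query_feats[i], k)
             for i in range(len(query_feats))]
    depth = len(preds)
    # Walk backwards while predictions still match the final prediction.
    for i in reversed(range(len(preds))):
        if preds[i] == preds[-1]:
            depth = i + 1
        else:
            break
    return depth
```

For instance, if the per-layer k-NN predictions for a query are [0, 1, 1], the prediction depth is 2: layer 2 is the earliest layer after which the prediction is stable.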
Figure-3 illustrates how to read the PD plots used in our experiments.

Example-2 for $\Psi^{P}_{M}$ (V**-Usable Information):** The mutual information between input and output, I(X; Y), is invariant with respect to lossless encryption of the input, i.e., I(τ(X); Y) = I(X; Y). Such a definition assumes unbounded computation and is counter-intuitive as a measure of task difficulty: heavy encryption of X leaves I(X; Y) unchanged even though it clearly makes the task harder. The notion of *"Usable Information"* introduced by Xu et al. (2020) assumes bounded computation based on the model family V under consideration. Usable information is measured under a framework called predictive V*-information* (Xu et al., 2020). Ethayarajh et al. (2021) introduce pointwise V*-information* (PVI) for measuring example difficulty:

$$\mathrm{PVI}(x\to y)=-\log_{2}g[\phi](y)+\log_{2}g^{\prime}[x](y),\quad\mathrm{s.t.}\;\;g,g^{\prime}\in\mathcal{V}\qquad(2)$$

The function g is trained on (*ϕ, y*) input-label pairs, where ϕ is a null input that provides no information about the label y. g′ is trained on (*x, y*) pairs from the training data. Lower PVI instances are harder for V and vice-versa. Since the first term in Eq-2 corresponding to g is independent of the input x, we only consider the second term having g′ in our experiments.

![4_image_0.png](4_image_0.png)

Figure 2: The prediction depth of an image for a given model tells us the minimum number of layers needed by the model to classify the image. A subset of the training data is used to collect feature embeddings at each layer. The KNN classifier uses these feature embeddings at each layer to classify the input query. Prediction depth for the input query is the number of layers after which the KNN predictions converge.

Harmful Spurious Feature: The spurious feature s is harmful for model M iff $\Psi^{P_{tr}}_{M}(\mathbf{X},\mathbf{y}) < \Psi^{P_{tr}}_{M}(\mathbf{X},\mathbf{y}\mid do(\mathbf{s}))$.
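Once g and g′ are trained, Eq-2 reduces to a difference of log-probabilities of the true label. The sketch below is an illustrative simplification of our own: the null model g[ϕ] is approximated by the empirical label marginal (Ethayarajh et al. (2021) actually train g on null inputs):

```python
import numpy as np

def pvi(p_null_y, p_cond_y):
    """Pointwise V-information (Eq. 2) for one or more examples.
    p_null_y: probability the null model g[phi] assigns to the true label y.
    p_cond_y: probability the conditional model g'[x] assigns to y.
    Higher PVI = easier example for the model family V."""
    return -np.log2(np.asarray(p_null_y)) + np.log2(np.asarray(p_cond_y))

def dataset_pvi(probs_cond, labels):
    """Illustrative shortcut: approximate g[phi] by the empirical label
    marginal, so the null prediction for each example is p(y = label)."""
    labels = np.asarray(labels)
    marginal = np.bincount(labels) / len(labels)
    return pvi(marginal[labels], probs_cond)
```

For a balanced binary task the null model assigns 0.5 to either label, so a conditional prediction of 0.9 for the true label yields PVI = 1 + log2(0.9) ≈ 0.85 bits.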
We use the causal operator "do(s)" to denote a causal intervention on the spurious feature s to break the correlation between spurious features and the labels. This is equivalent to removing the arrow from y to s as shown in Figure-1B. In other words, if removing the spurious correlation between s and y from the training data makes it harder for model M to predict the true label y, then the spurious feature is "harmful". The above formalization clarifies that a spurious feature may be harmful to one model but benign for another, as its definition is closely tied to the dataset, model, and task. Figure-4 illustrates this point. In what follows, we relate the notions of PD and V-usable information. We use $\mathcal{V}_{cnn}$ (of finite depth and width) in our proof as our experiments mainly use CNN architectures.

Proposition 1: (Informal) Consider two datasets: $D_s \sim P_{tr}(\mathbf{X},\mathbf{y})$ with spurious features and $D_i \sim P_{te}(\mathbf{X},\mathbf{y})$ without them. For some mild assumptions on PD (see Appendix-A.1), if the mean PD of $D_s$ is less than the mean PD of $D_i$, then the $\mathcal{V}_{cnn}$-usable information for $D_s$ is larger than the $\mathcal{V}_{cnn}$-usable information for $D_i$: $I^{D_s}_{\mathcal{V}_{cnn}}(X \to Y) > I^{D_i}_{\mathcal{V}_{cnn}}(X \to Y)$. See proof in Appendix-A.1.

The proposition intuitively implies that a sufficient gap between the mean PD of spurious and core features can result in spurious features having more $\mathcal{V}_{cnn}$-usable information than core features. In such a scenario, the model will be more inclined to learn spurious features instead of core ones. This proposition justifies using the PD metric to detect spurious features during training, as demonstrated in the following experiments.

![5_image_0.png](5_image_0.png)

Figure 3: Examples of PD plots (for DenseNet-121) at different stages of the training process. The red bar indicates samples with undefined PD, and the dotted vertical line indicates the mean PD of the plot. Notice that the undefined samples (shown in red) slowly accumulate in layer 88 as training progresses.
This is because the model needs more time to learn the challenging samples which accumulate at higher prediction depth, i.e., later layers. ![5_image_1.png](5_image_1.png) Figure 4: One model's food is another model's poison! A spurious feature may be benign for one model but harmful for another. Consider the task of classifying digits into "even" or "odd" classes. All even (odd) digits in training data have a spurious white patch in the bottom left (right) corner. This correlation doesn't exist in the test data as the spurious white patch is randomly placed. Case-1 shows the Deep Set (Zaheer et al., 2017) model, which patchifies the input image, processes each patch separately, and combines the features. The model's output is invariant to the location of the white patch. However, the MLP model shown in case-2 is location-sensitive; hence, it learns to associate the location of the spurious white patch with the label. Consequently, the MLP model is susceptible to learning such spurious correlations and performs poorly on the test data. So the spurious white patch is *benign* for Deep Sets but *harmful* for the MLP model. ## 4 Experiments We set up four experiments to evaluate our hypothesis. *First*, we consider two kinds of datasets, one where the spurious feature is easier than the core feature for a ResNet-18 model and another where the spurious feature is harder than the core for the same model. We train a classifier on each dataset and observe that the ResNet-18 model can learn the easy spurious feature but not the harder one. This experiment demonstrates that not all spurious features hurt generalization, but only those spurious features that are easier for the model than the core features are harmful. *Second*, we use medical and vision datasets to demonstrate how monitoring the learning dynamics of the initial layers can reveal harmful spurious features. 
We show that an early peak in the PD plot may correspond to harmful spurious features and must raise suspicion. Visualization techniques like grad-CAM can provide intuition about what feature is being used at that layer. *Third*, we show how harmful spurious features can often be detected relatively early during training. This is because initial layers which learn such spurious features converge very early during the training. We observe this by monitoring PD plots across training epochs. In all of our experiments, the PD plot reveals the spurious feature within two epochs of training. *Fourth*, we show that datasets with easy spurious features have more "usable information" (Ethayarajh et al., 2021) compared to their counterparts without such features. Due to higher usable information, the model requires fewer layers to classify the images with spurious features. We use this experiment to empirically justify Proposition-1 outlined in Appendix-A.1. We also conduct additional experiments (see Appendix-A.10) to justify the use of the PD metric in our work.

![6_image_0.png](6_image_0.png) ![6_image_1.png](6_image_1.png)

Table 1: Results for the Dominoes experiment averaged across 4 runs. Numbers in brackets show mean-PD (dataset difficulty). Core-only accuracy indicates the model's reliance on core features. Models achieve high core-only accuracy when spurious features are harder than core features.

## 4.1 Not All Spurious Features Hurt Generalization!

We use the Dominoes binary classification dataset (formed by concatenating two datasets vertically; see Fig-4) similar to the setup of Kirichenko et al. (2022). The bottom (top) image acts as the core (spurious) feature. Images are of size 64 × 32. We construct three pairs of domino datasets such that each pair has both a hard and an easy spurious feature with respect to the common core feature (see Table-1). We use classes {0,1} for MNIST and SVHN, {coat,dress} for FMNIST, and {airplane, automobile} for CIFAR10.
We also include two classes from Kuzushiji-MNIST (or KMNIST) and construct a modification of this dataset called KMNpatch, which has a spurious patch feature (5x5 white patch on the top-left corner) for one of the two classes of KMNIST. The spurious features are perfectly correlated with the target. The order of dataset difficulty based on the mean-PD is as follows: KMNpatch(1.1) < MNIST(2.2) < FMNIST(3.9) < KMNIST(5) < SVHN(5.9) < CIFAR10(6.8). We use a ResNet18 model and measure the validation and core-only accuracies. The validation accuracy is measured on a held-out dataset sampled from the same distribution. For the core-only (test) accuracy, we blank out the spurious feature (top-half image) by replacing it with zeroes (same as Kirichenko et al. (2022)). The higher the core-only accuracy, the lesser the model's reliance on the spurious feature. Table 1 shows that all datasets achieve a high validation accuracy as expected. The core-only accuracy stays high (>98%) for datasets where the spurious feature is harder to learn than the core feature, indicating the model's high reliance on core features. When the spurious feature is easier than the core, the model learns to leverage it, and hence the core-only accuracy drops to nearly random chance (∼50%).

![7_image_0.png](7_image_0.png)

Figure 6: Top row shows three reference datasets and their corresponding prediction depth (PD) plots. The datasets are ordered based on their difficulty (measured using mean PD shown by dotted vertical lines): KMN w/ patch(1.1) < MNIST(2.2) < FMNIST(3.9) < KMN w/o patch(5) < SVHN(5.9) < CIFAR10(6.8). The bottom row shows the effect of the spurious patch on the KMNIST dataset. The yellow region on the axis indicates the expected difficulty of classifying KMNIST. While the original KMNIST lies in the yellow region, the spurious patch significantly reduces the task difficulty. The Grad-CAM shows that the model focuses on the spurious patch.
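The Dominoes construction and the core-only evaluation above amount to simple array operations. A hedged NumPy sketch (function names are ours; both halves are assumed pre-resized to 32 × 32):

```python
import numpy as np

def make_domino(spurious_top, core_bottom):
    """Stack a 32x32 spurious image on top of a 32x32 core image,
    yielding a 64x32 Dominoes example."""
    return np.concatenate([spurious_top, core_bottom], axis=0)

def core_only(domino):
    """Blank out the spurious top half with zeros to measure core-only
    accuracy, as in Kirichenko et al. (2022)."""
    out = domino.copy()
    out[: out.shape[0] // 2] = 0.0
    return out
```

A model that relies on the core feature keeps its accuracy on `core_only` images; a model that latched onto the spurious top half drops to chance.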
Interestingly, the KMNpatch-MN results have a much higher core-only accuracy (∼69%) and a larger standard deviation. We explain this phenomenon in Appendix-A.7. This experiment demonstrates that not all spurious features hurt generalization, and only those that are easier than core features are harmful.

## 4.2 Monitoring Initial Layers Can Reveal The Harmful Spurious Features

Synthetic Spurious Feature in Toy Dataset: To provide a proof of concept, we demonstrate our method on the Kuzushiji-MNIST (KMNIST) (Clanuwat et al., 2018) dataset comprising Japanese Hiragana characters. The dataset has ten classes and images of size 28 × 28, similar to MNIST. We insert a white patch (spurious feature) at a particular location for each of the ten classes. The location of the patch is class-specific. We train two VGG16 models, one on the KMNIST with a spurious patch (Msh) and another on the original KMNIST without the patch (M*orig*). Fig-6 shows the prediction depth (PD) plots for this experiment. The vertical dashed lines show the mean PD for each plot. Fig-6D shows that KMNIST with patch has a much lower mean PD than all the other datasets, and the PD plot is also highly skewed towards the initial layers. This suspicious behavior is because the white patch is a very easy feature, and hence the model only needs a single layer to detect it. The Grad-CAM maps for layer-1 confirm this by showing that Msh focuses mainly on the patch (see Fig-6D), and hence the test accuracy on the original KMNIST images is very low (∼8%). The PD plot for M*orig* (see Fig-6E) is not as skewed toward lower depth as the plot for Msh. This is expected as M*orig* is not looking at the spurious patch and therefore utilizes more layers to make the prediction. The mean PD for M*orig* suggests that the original KMNIST is harder than Fashion-MNIST but easier than CIFAR10. M*orig* also achieves a higher test accuracy (∼98%).
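The class-specific patch insertion described above can be sketched as follows. The patch locations are placeholders of our own choosing; the text only specifies that each of the ten classes gets a 5 × 5 white patch at its own fixed location:

```python
import numpy as np

# Hypothetical per-class top-left corners for a 28x28 image; the paper's
# actual locations are unspecified, so these are illustrative only.
PATCH_LOCS = {c: (2 * c, 23 - 2 * c) for c in range(10)}

def add_spurious_patch(img, label, size=5):
    """Insert a white (max-intensity) size x size patch at the
    class-specific location, making patch position predictive of the label."""
    r, c = PATCH_LOCS[label]
    out = img.copy()
    out[r:r + size, c:c + size] = 1.0
    return out
```

Randomizing the patch location independently of the label would implement the do(s) intervention of Section 3, breaking the spurious correlation.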
This experiment demonstrates how models that learn spurious features (Msh) exhibit PD plots that are suspiciously skewed towards the initial layers. The skewed PD plot should raise concerns, and visualization techniques like Grad-CAM can aid our intuition about what spurious feature the model may be utilizing at any given layer. Semi-Synthetic Spurious Feature in Medical Datasets: We follow the procedure by DeGrave et al. (2021) to create the ChestX-ray14/GitHub-COVID dataset. This dataset comprises Covid19 positive images from Github Covid repositories and negative images from ChestX-ray14 dataset (Wang et al., 2017b). In addition, we also create the Chex-MIMIC dataset following the procedure by Puli et al. (2022). This dataset comprises 90% images of Pneumonia from Chexpert (Irvin et al., 2019) and 90% healthy images from MIMIC-CXR (Johnson et al., 2019). We train two DenseNet121 models, M*covid* on the ChestXray14/GitHub-COVID dataset, and M*chex* on the Chex-MIMIC dataset. We use DenseNet121, a common and standard architecture for medical image analysis. Images are resized to 512 × 512. ![8_image_0.png](8_image_0.png) Figure 7: PD plots for two DenseNet-121 models trained on Chex-MIMIC and ChestX-ray14/GitHub-COVID datasets are shown in the figure, along with their corresponding Grad-CAM visualizations. Both PD plots exhibit a very high peak in the initial layers (1 to 4), indicating that the models use very easy features to make the predictions. Fig-7 shows the PD plots for M*chex* and M*covid*. Both the plots are highly skewed towards initial layers, similar to the KMNIST with spurious patch in Fig-6D. This again indicates that the models are using very easy features to make the predictions, which is concerning as the two tasks (pneumonia and covid19 detection) are hard tasks even for humans. 
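The layer-wise Grad-CAM maps used throughout this section follow the standard recipe of Selvaraju et al. (2017): weight a layer's activation maps by their spatially pooled gradients and take a ReLU'd sum. A minimal PyTorch sketch (the tiny stand-in network and helper names are illustrative; the paper additionally attaches a soft-kNN head, described in Appendix-A.2, to probe intermediate layers):

```python
import torch
import torch.nn as nn

def grad_cam(model, layer, x, class_idx):
    """Minimal Grad-CAM: capture the layer's activations and gradients via
    hooks, pool the gradients spatially, and form a ReLU'd weighted sum."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        model.zero_grad()
        model(x)[0, class_idx].backward()
    finally:
        h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = torch.relu((w * acts["a"]).sum(dim=1))    # (N, H, W) localization map
    return cam / (cam.max() + 1e-8)

# tiny stand-in network; the paper's setting would use e.g. a DenseNet-121
net = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(4, 2, 3, padding=1),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
cam = grad_cam(net, net[0], torch.randn(1, 1, 16, 16), class_idx=0)
```

Passing an early layer (as done here for layer-1 or layer-4) and checking where the map concentrates is how one distinguishes lung-focused features from source-specific artifacts.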
Examining the Grad-CAM maps at layer-4 reveals that these models focus on spurious features outside the lung region, which are irrelevant because both diseases are known to affect mainly the lungs. The reason for this suspicious behavior is that, in both datasets, the healthy and diseased samples were acquired from two different sources. This creates a spurious feature because source-specific attributes or tokens are predictive of the disease and can be easily learned, as pointed out by DeGrave et al. (2021).

Real Spurious Feature in Medical Dataset: For this experiment, we use the NIH dataset (Wang et al., 2017a), which has the well-known chest drain spurious feature (for Pneumothorax detection) (Oakden-Rayner et al., 2020). Chest drains are used to treat positive Pneumothorax cases. Therefore, the presence of a chest drain in the lung is positively correlated with the presence of Pneumothorax and can be exploited by the deep learning model (Oakden-Rayner et al., 2020). Appendix-A.5 outlines the procedure we use to obtain chest drain annotations for the NIH dataset. We train a DenseNet121 model (Mnih) for Pneumothorax detection on NIH images of size 128 × 128.

Fig-8A shows the PD plot for Mnih. We observe that the distribution is not as skewed as the plots in the previous experiments. This is because all the images come from a single dataset (NIH). However, we do see suspicious peaks at the initial layers. Pneumothorax classification is challenging even for radiologists; hence, peaks at the initial layers raise suspicion. The Grad-CAM maps in Figs-8B & 8C reveal that the initial layers look at irrelevant artifacts and chest drains in the image. This provides evidence that the initial layers are looking at the harmful spurious features. Fig-8D shows that the AUC performance is 0.94 when the diseased patients have a chest drain and 0.74 when they don't. In both cases, the set of healthy patients remains the same. This observation is consistent with the findings of Oakden-Rayner et al. (2020) and indicates that the model looks at chest drains to classify positive Pneumothorax cases.

![9_image_0.png](9_image_0.png)

Figure 8: Spurious correlations in the NIH dataset. (A) PD plot for DenseNet-121 trained on NIH shows prominent peaks in the initial layers. (B, C) Grad-CAM reveals that the initial layers use irrelevant and spurious artifacts and chest drains for classification. (D) The chest drain spurious feature affects the AUC performance of the model. The X-axis (Y-axis) shows the false positive (true positive) rate.

The above experiments demonstrate how a peak located in the initial layers of the PD plot should raise suspicion, especially when the classification task is challenging. Visualization techniques like Grad-CAM can further aid our intuition and help identify harmful spurious features being learned by the model. This approach works well even for realistic scenarios and challenging spurious features (like the chest drain for Pneumothorax classification), as shown above. Appendix-A.3 includes additional results on vision datasets.

## 4.3 Detecting Harmful Spurious Features Early

Fig-9 shows the evolution of the PD plot across epochs for Mnih (the model used in Fig-8). This visualization helps us observe the training dynamics of the various layers. The red bar in the PD plots shows the samples with undefined prediction depths. These plots reveal several useful insights into the learning dynamics of the model. First, we see three prominent peaks in epoch-1 at layers 4, 40, and 88 (see Fig-9A). The magnitude of the initial peaks (layers 4 & 40) remains nearly constant throughout the training. These peaks correspond to spurious features, as discussed in the previous section. This indicates that harmful spurious features can often be identified early (epoch-1 in this case).
Fig-10 shows the PD plots at epoch-2 for other datasets with spurious features. It is clear from Fig-10 that the suspiciously high peak at the initial layer is visible in the *second epoch* itself. The Grad-CAM maps reveal that this layer looks at irrelevant artifacts in the dataset. This behavior is seen in all datasets shown in Fig-10. Secondly, we also see that accuracy and AUC plots are not sufficient to detect spurious features during training. We need to monitor the training dynamics using suitable metrics (like PD) to detect this behavior. Thirdly, the red peak (undefined samples) decreases in magnitude with time, and we see a proportional increase in the layer-88 peak. This corroborates well with the observation that later layers take more time to converge (Rahaman et al., 2019; Mangalam & Prabhu, 2019; Zeiler & Fergus, 2014). Therefore, samples with higher PD are initially undefined and do not appear in the PD plot. Nonetheless, samples with lower PD show up very early during the training, which helps us detect the harmful spurious features early. Early detection can consequently help develop intervention schemes that fix the problem early. ![10_image_0.png](10_image_0.png) Figure 9: Evolution of PD plot across epochs shows the training dynamics of the DNN on the NIH dataset. The initial peaks (layers-4&40) are relatively stable throughout training, whereas the later peaks (layer-88) change with time. The initial layers learn spurious features, which can be detected early during the training. Samples with undefined PD (shown in red) take more time to converge and eventually accumulate in the later layers (layer 88 in this case). ![10_image_1.png](10_image_1.png) Figure 10: Epoch-2 PD plots for various datasets with spurious features. The high spurious peak in the initial layer is visible in all the datasets indicating that the harmful spurious features can be detected early during the training. 
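The PD plots above are built from k-NN probes attached to each layer. A minimal NumPy sketch of the underlying computation, following the definition of Baldock et al. (2021) (the toy one-dimensional embeddings and helper names are illustrative; the paper uses k = 29 and the settings described in Appendix-A.6):

```python
import numpy as np

def knn_predict(train_x, train_y, query_x, k=3):
    """k-NN probe on (flattened) layer embeddings: L1 distance, majority vote."""
    d = np.abs(query_x[:, None, :] - train_x[None, :, :]).sum(-1)
    nn_idx = np.argsort(d, axis=1)[:, :k]
    return (train_y[nn_idx].mean(axis=1) > 0.5).astype(int)

def prediction_depth(query_embs, train_embs, train_y, final_pred, k=3):
    """PD of each query = earliest layer from which the k-NN probe agrees
    with the model's final prediction at every subsequent layer."""
    preds = np.stack([knn_predict(t, train_y, q, k)
                      for q, t in zip(query_embs, train_embs)])  # (layers, N)
    agree = preds == final_pred[None, :]
    # suffix_ok[l] is True iff the probe agrees at layers l, l+1, ..., last
    suffix_ok = np.flip(np.cumprod(np.flip(agree, axis=0), axis=0),
                        axis=0).astype(bool)
    pd = suffix_ok.argmax(axis=0).astype(float)
    pd[~suffix_ok[-1]] = np.nan      # probe never settles: undefined PD
    return pd

# toy setup: two 1-D clusters (class 0 near 0, class 1 near 10), three "layers";
# the hard query's layer-0 embedding sits in the wrong cluster, so its PD is 1
ty = np.array([0, 0, 0, 1, 1, 1])
tx = np.array([[0.0], [0.1], [-0.1], [10.0], [10.1], [9.9]])
train_embs = [tx, tx, tx]
q_easy = [np.array([[0.2]])] * 3
q_hard = [np.array([[9.8]]), np.array([[0.2]]), np.array([[0.2]])]
```

Histogramming the returned depths over a probe set yields the PD plots shown in Figs-9 and 10, and rerunning the computation each epoch gives the training-dynamics view.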
## 4.4 Prediction Depth ∼ V-Usable Information

Table-2 measures the influence of spurious features on NIH and KMNIST using the PD and PVI metrics. All diseased patients in the "NIH w/ Spurious feat." dataset have a chest drain, whereas all diseased patients in the "NIH w/o Spurious feat." dataset have no chest drain. The set of healthy patients is common to the two datasets. The KMNIST datasets are the same as those used in Section-4.2. We use VGG16 for KMNIST and DenseNet121 for NIH. Other training details are the same as in Section-4.2.

Table 2: Effect of spurious features on Prediction Depth and the negative conditional V-entropy ($-H_{\mathcal{V}_{cnn}}(Y \mid X)$). The label marginal distributions are the same with or without the spurious feature, and thus the negative conditional V-entropy is proportional to the V-information.

| Dataset | mean PD | $-H_{\mathcal{V}_{cnn}}(Y \mid X)$ |
|---------------------------|-----------|------------------|
| NIH w/ Spurious feat. | 53.43 | -0.1171 |
| NIH w/o Spurious feat. | 75.33 | -0.2321 |
| KMNIST w/ Spurious feat. | 1.06 | -0.0024 |
| KMNIST w/o Spurious feat. | 5.25 | -0.0585 |

Table-2 shows that datasets with spurious features (Ds) have smaller mean PD values than their counterparts without such features (Di). Proposition-1 (see Section-3, Appendix-A.1) shows that a sufficient gap between the mean PDs of Ds and Di causes the V-information of Ds to be greater than that of Di. Table-2 confirms this in a medical-imaging dataset with a real chest drain spurious feature, and we see that the mean "usable information" increases when there is a spurious feature. This implies that the model learns spurious features because they have more usable information than the core features. We investigate this relationship between PD and V-information in more detail in Appendix-A.4. Ethayarajh et al. (2021) also show that V-information is positively correlated with test accuracy. This explains the significant change in AUC observed in Fig-8D.
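Once a trained model $f$ and a label-marginal null model are in hand, the conditional V-entropy and PVI quantities in Table-2 reduce to log-likelihood bookkeeping (Ethayarajh et al., 2021; Xu et al., 2020). A hedged NumPy sketch (the toy probabilities are illustrative; in practice `probs` would come from the trained network's softmax outputs):

```python
import numpy as np

def conditional_v_entropy(probs, labels):
    """H_V(Y|X): mean negative log-likelihood (bits) of the true label
    under the best model f in V (probs: (N, C) predicted distributions)."""
    return float(-np.mean(np.log2(probs[np.arange(len(labels)), labels])))

def pvi(probs, null_probs, labels):
    """Pointwise V-information: gain in label log-likelihood from seeing the
    input, relative to a null model that only knows the label marginal."""
    n = np.arange(len(labels))
    return -np.log2(null_probs[labels]) + np.log2(probs[n, labels])

# toy numbers: a balanced binary task where the model is fairly confident
null_probs = np.array([0.5, 0.5])              # label marginal
probs = np.array([[0.9, 0.1], [0.2, 0.8]])     # f(x) for two samples
labels = np.array([0, 1])
```

Because the label marginals match across the paired datasets in Table-2, comparing $-H_{\mathcal{V}}(Y \mid X)$ directly compares V-information.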
Proposition 1 bridges the gap between the notions of PD and V-usable information. This connection between V-information and PD indicates that monitoring early training dynamics using PD not only helps detect the harmful spurious features but also bears insights into the dataset's difficulty (in information-theoretic terms) for a given model class. ## 5 Conclusion We study spurious features by monitoring the early training dynamics of deep neural networks (DNNs). We empirically show that not all spurious features hurt generalization, but only those that are easier than the core features do. So spurious features can be "benign" or "harmful" depending on whether they are harder or easier than core features for a given model. Our hypothesis is: "Monitoring the easy features learned by the initial layers of a DNN early during the training can help identify harmful spurious features." We validate this hypothesis on real medical and vision datasets. We show that spurious features are learned quite early during the training and one can detect them by monitoring the early training of DNNs using instance difficulty metrics like Prediction Depth (PD). Further, we show a theoretical connection between PD and V-information to support our empirical results. Datasets with spurious features have more V-information as compared to their counterparts without such features causing the model to learn the spurious feature. To conclude, relying only on accuracy plots is insufficient for detecting harmful spurious features, and we recommend monitoring the DNN training dynamics using instance difficulty metrics like PD for the early detection of such features. ## References Mohammed Adnan, Yani Ioannou, Chuan-Yung Tsai, Angus Galloway, HR Tizhoosh, and Graham W Taylor. Monitoring shortcut learning using mutual information. *arXiv preprint arXiv:2206.13034*, 2022. Chirag Agarwal, Daniel D'souza, and Sara Hooker. Estimating example difficulty using variance of gradients. 
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10368– 10378, 2022. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. Robert Baldock, Hartmut Maennel, and Behnam Neyshabur. Deep learning through the lens of example difficulty. *Advances in Neural Information Processing Systems*, 34:10876–10889, 2021. David Bellamy, Miguel A Hernán, and Andrew Beam. A structural characterization of shortcut features for prediction. *European Journal of Epidemiology*, 37(6):563–568, 2022. Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. Rubi: Reducing unimodal biases for visual question answering. *Advances in neural information processing systems*, 32, 2019. Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for nearest neighbor classification. *Advances* in Neural Information Processing Systems, 27, 2014. Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical japanese literature. *arXiv preprint arXiv:1812.01718*, 2018. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. *arXiv preprint arXiv:1909.03683*, 2019. Nikolay Dagaev, Brett D Roads, Xiaoliang Luo, Daniel N Barry, Kaustubh R Patil, and Bradley C Love. A too-good-to-be-true prior to reduce shortcut reliance. *arXiv preprint arXiv:2102.06406*, 2021. Alex J DeGrave, Joseph D Janizek, and Su-In Lee. Ai for radiographic covid-19 detection selects shortcuts over signal. *Nature Machine Intelligence*, 3(7):610–619, 2021. Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. Information-theoretic measures of dataset difficulty. *arXiv preprint arXiv:2110.08420*, 2021. Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. 
*Advances in Neural Information Processing Systems*, 33:2881–2891, 2020. Yu Feng and Yuhai Tu. Phases of learning dynamics in artificial neural networks in the absence or presence of mislabeled data. *Machine Learning: Science and Technology*, 2(4):043001, 2021. Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020. Ingo Gühring, Mones Raslan, and Gitta Kutyniok. Expressivity of deep neural networks. arXiv preprint arXiv:2007.04759, 2020. He He, Sheng Zha, and Haohan Wang. Unlearn dataset bias in natural language inference by fitting the residual. *arXiv preprint arXiv:1908.10763*, 2019. Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. What do compressed deep neural networks forget? *arXiv preprint arXiv:1911.05248*, 2019. Wei Hu, Lechao Xiao, Ben Adlam, and Jeffrey Pennington. The surprising simplicity of the early-time learning dynamics of neural networks. *Advances in Neural Information Processing Systems*, 33:17116– 17128, 2020. Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pp. 590–597, 2019. Pavel Izmailov, Polina Kirichenko, Nate Gruver, and Andrew G Wilson. On feature learning in the presence of spurious correlations. *Advances in Neural Information Processing Systems*, 35:38516–38532, 2022. Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P Lungren, Andrew Y Ng, et al. Radgraph: Extracting clinical entities and relations from radiology reports. *arXiv preprint arXiv:2106.14463*, 2021. 
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, and Michael C Mozer. Characterizing structural regularities of labeled data in overparameterized models. *arXiv preprint arXiv:2002.03206*, 2020. Alistair EW Johnson, Tom J Pollard, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Yifan Peng, Zhiyong Lu, Roger G Mark, Seth J Berkowitz, and Steven Horng. Mimic-cxr-jpg, a large publicly available database of labeled chest radiographs. *arXiv preprint arXiv:1901.07042*, 2019. Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. *arXiv preprint arXiv:2204.02937*, 2022. David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, pp. 5815–5826. PMLR, 2021. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. *Advances in neural information processing systems*, 30, 2017. John P Lalor, Hao Wu, Tsendsuren Munkhdalai, and Hong Yu. Understanding deep learning performance through an examination of test set difficulty: A psychometric case study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing, volume 2018, pp. 4711. NIH Public Access, 2018. Yi Li and Nuno Vasconcelos. Repair: Removing representation bias by dataset resampling. In *Proceedings* of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9572–9581, 2019. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. *Advances in neural* information processing systems, 30, 2017. Karttikeya Mangalam and Vinay Uday Prabhu. Do deep neural networks learn shallow learnable examples first? 2019. 
Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. Deep deterministic uncertainty: A new simple baseline. In *Proceedings of the IEEE/CVF Conference on Computer Vision* and Pattern Recognition, pp. 24384–24394, 2023. Luke Oakden-Rayner, Jared Dunnmon, Gustavo Carneiro, and Christopher Ré. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. In *Proceedings of the ACM conference on health, inference, and learning*, pp. 151–159, 2020. Aahlad Manas Puli, Lily H Zhang, Eric Karl Oermann, and Rajesh Ranganath. Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. In *International Conference on Learning* Representations, 2022. Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, and Nicolas Papernot. Selective classification via neural network training dynamics. *arXiv preprint arXiv:2205.13532*, 2022. Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In *International Conference on Machine* Learning, pp. 5301–5310. PMLR, 2019. Khaled Saab, Sarah Hooper, Mayee Chen, Michael Zhang, Daniel Rubin, and Christopher Ré. Reducing reliance on spurious features in medical image classification with spatial specificity. 2022. Victor Sanh, Thomas Wolf, Yonatan Belinkov, and Alexander M Rush. Learning from others' mistakes: Avoiding dataset biases without modeling them. *arXiv preprint arXiv:2012.01300*, 2020. Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, and Sangdoo Yun. Which shortcut cues will dnns choose? a study from the parameter-space perspective. *arXiv preprint arXiv:2110.03095*, 2021. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. 
In Proceedings of the IEEE international conference on computer vision, pp. 618–626, 2017. Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, and Praneeth Netrapalli. The pitfalls of simplicity bias in neural networks. *Advances in Neural Information Processing Systems*, 33:9573–9585, 2020. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018. Joost Van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal. Uncertainty estimation using a single deep deterministic neural network. In *International conference on machine learning*, pp. 9690–9700. PMLR, 2020. Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. Counterfactual invariance to spurious correlations: Why and how to pass stress tests. *arXiv preprint arXiv:2106.00545*, 2021. Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestxray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In *Proceedings of the IEEE conference on computer vision and pattern* recognition, pp. 2097–2106, 2017a. Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestxray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2097–2106, 2017b. Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre Alvise-Rebuffi, Ira Ktena, Taylan Cemgil, et al. A fine-grained analysis on distribution shift. *arXiv preprint arXiv:2110.11328*, 2021. Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, and Stefano Ermon. A theory of usable information under computational constraints. *arXiv preprint arXiv:2002.10689*, 2020. 
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. *Advances in neural information processing systems*, 30, 2017.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In *European conference on computer vision*, pp. 818–833. Springer, 2014.

Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, and Haoxin Liu. Nico++: Towards better benchmarking for domain generalization. *arXiv preprint arXiv:2204.08040*, 2022.

## A Appendix

## A.1 Proof Of Proposition-1

Proposition A.1. *Given two datasets, $D_s$ with spurious features and $D_i$ without them, we assume the following:*

1. *(Well-Trained Model Assumption) The part of the network from any representation to the label is one of the functions that compute V-information.*

2. *(Function Class Complexity Assumption) Assume that there exists a $K \in \{1, \ldots, N\}$ such that $\mathcal{V}_{cnn}$ of depth $N - K$ is deep enough to be a strictly larger function class than $\mathcal{V}_{knn}$ with a fixed neighbor size (29 in this paper). Assume that this $\mathcal{V}_{knn}$ is a larger function class than a linear function.*

3. *(Controlled Confidence Growth Assumption) For both datasets $D \in \{D_s, D_i\}$, assume that for all $k \in \{1, \cdots, N\}$, $\tau \leq \mathcal{I}^{D}_{\mathcal{V}_{knn}}(\phi_k) - \mathcal{I}^{D}_{\mathcal{V}_{knn}}(\phi_{k-1}) \leq \epsilon$.*

4. *(Prediction Depth Separation Assumption) Let $L$ be an integer such that $L \leq K$ and $L < N - \psi \max_y(-\log p(Y = y))$. Note that $p(Y = y)$ is simply the prevalence of class $y$. Let there exist a gap in the prediction depths of samples in $D_s$ and $D_i$: a $\psi \in (0, 0.5)$ such that a $1 - \psi$ fraction of $D_s$ has prediction depth $\leq L$ and a $1 - \psi$ fraction of $D_i$ has prediction depth $> K$.*

Then, for a model class of N-layer CNNs, we show that the $\mathcal{V}_{cnn}$-information for $D_s$ is greater than the $\mathcal{V}_{cnn}$-information for $D_i$:

$$\mathcal{I}_{\mathcal{V}_{cnn}}^{D_{s}}(X\to Y)\geq\mathcal{I}_{\mathcal{V}_{cnn}}^{D_{i}}(X\to Y).$$

Proof. Before proceeding to the proof, we attempt to justify and reason about the above assumptions.
Assumption-1 states a property of trained neural networks in the context of usable information. Let $f(X)$ be a trained neural network. Consider splitting the network into the representation $\phi_k(X)$ at the $k$-th layer and the rest of the network as a function applied to $\phi_k(X)$, i.e., $f(X) = f_k \circ \phi_k(X)$. Then we assume that $f_k(\cdot)$ is the function that achieves the $\mathcal{V}_{cnn\text{ of depth }N-k}$-information between $\phi_k(X)$ and $Y$ (Ethayarajh et al., 2021; Xu et al., 2020). The function that computes V-information must achieve a minimum cross-entropy (Ethayarajh et al., 2021). So if we train $f(X)$ by minimizing the cross-entropy loss, $f_k(\cdot)$ must converge to a function that achieves the $\mathcal{V}_{cnn\text{ of depth }N-k}$-information between $\phi_k(X)$ and $Y$.

Assumption-2 implies that the CNN class in $\mathcal{V}_{cnn}$ is deep enough such that the network after the $K$-th layer can approximate a k-NN classifier with 29 neighbors ($K$ here is the same as the $K$ in Assumption-4). This is also a reasonable assumption (Chaudhuri & Dasgupta, 2014; Gühring et al., 2020). Chaudhuri & Dasgupta (2014) lower bound the error for k-NN classifiers for a fixed k, and Gühring et al. (2020) show the depth expressivity of CNN classifiers.

Assumption-3 states that the difference in $\mathcal{V}_{knn}$-information between intermediate layers does not explode indefinitely and thus can be bounded by some positive quantities $\tau$ and $\epsilon$.

Assumption-4 is also easily satisfied. For example, if the smallest-prevalence class in the dataset has a prevalence greater than $\frac{1}{1000}$, then Assumption-4 boils down to saying $L < N - 0.5 \max_y(-\log p(Y = y)) = N - 3.45$, where $L$ is the low PD value caused by spurious features in $D_s$, and $N$ is the total number of layers in the CNN. All our datasets satisfy the class prevalence $> \frac{1}{1000}$ constraint. Even diseases like Pneumothorax, which are rare, have a class prevalence of at least $\frac{1}{30}$ in both NIH and MIMIC-CXR. And $L < N - 3.45$ is easily satisfied in all our experiments.
For example, see Fig-5, where 80% (or $1 - \psi = 0.8$) of the samples have PD $\leq 16$. So $L = 16$, $N = 121$ (for DenseNet-121) easily satisfies $16 < 121 - 3.45$.

Now we elaborate on the proof of the proposition given the four assumptions. We proceed in two parts: first, we lower bound the $\mathcal{V}_{cnn}$-information for $D_s$, and then we upper bound it for $D_i$. Summing the bound in Assumption-3 over layers $k = A+1, \ldots, B$ implies:

$$(B-A)\,\tau \;\leq\; \sum_{k=A+1}^{B}\left[\mathcal{I}^{D}_{\mathcal{V}_{knn}}(\phi_k) - \mathcal{I}^{D}_{\mathcal{V}_{knn}}(\phi_{k-1})\right] \;\leq\; (B-A)\,\epsilon$$

Note: $(A, B)$ are just placeholders for the min and max indices over which the summation is defined. They are replaced by $(L, K)$ and $(K, N)$ below when lower bounding $\mathcal{I}^{D_s}_{\mathcal{V}_{cnn}}$ and upper bounding $\mathcal{I}^{D_i}_{\mathcal{V}_{cnn}}$, respectively.

PD-PVI connection. Note that, by definition, when the prediction depth of a sample $X$ is $k$, then $PVI_{knn}(\phi_k(X)) \geq \delta$ but $PVI_{knn}(\phi_{k-1}(X)) < \delta$. This follows from how we compute PD (see Section-3 in the main paper and Appendix-A.6).

Lower bounding $\mathcal{I}^{D_s}_{\mathcal{V}_{cnn}}$:

$$\begin{aligned}
\mathcal{I}^{D_s}_{\mathcal{V}_{cnn}} &= \mathcal{I}^{D_s}_{\mathcal{V}_{cnn}\text{ of depth }N-K}(\phi_K) &&\{\text{Assumption-1}\}\\
&\geq \mathcal{I}^{D_s}_{\mathcal{V}_{knn}}(\phi_K) &&\{\text{Assumption-2}\}\\
&= \mathcal{I}^{D_s}_{\mathcal{V}_{knn}}(\phi_L) + \sum_{k=L+1}^{K}\left[\mathcal{I}^{D_s}_{\mathcal{V}_{knn}}(\phi_k) - \mathcal{I}^{D_s}_{\mathcal{V}_{knn}}(\phi_{k-1})\right] &&\{\text{Telescoping Sum}\}\\
&\geq \mathcal{I}^{D_s}_{\mathcal{V}_{knn}}(\phi_L) + (K-L)\,\tau &&\{\text{Assumption-3}\}\\
&\geq \psi \min_{\substack{X,Y\in D_s\\ pd > L}} PVI_{knn}(X\to Y) + (1-\psi)\min_{\substack{X,Y\in D_s\\ pd \leq L}} PVI_{knn}(X\to Y) + (K-L)\,\tau &&\{\text{Prediction Depth Separation}\}\\
&\geq 0\cdot\psi + \delta\cdot(1-\psi) + (K-L)\,\tau &&\{\text{Prediction Depth Separation}\}
\end{aligned}$$

Upper bounding $\mathcal{I}^{D_i}_{\mathcal{V}_{cnn}}$:

$$\begin{aligned}
\mathcal{I}^{D_i}_{\mathcal{V}_{cnn}} &\leq \mathcal{I}^{D_i}_{\mathcal{V}_{knn}}(\phi_N) &&\{\text{Assumption-2}\}\\
&= \mathcal{I}^{D_i}_{\mathcal{V}_{knn}}(\phi_K) + \sum_{k=K+1}^{N}\left[\mathcal{I}^{D_i}_{\mathcal{V}_{knn}}(\phi_k) - \mathcal{I}^{D_i}_{\mathcal{V}_{knn}}(\phi_{k-1})\right] &&\{\text{Telescoping Sum}\}\\
&\leq \mathcal{I}^{D_i}_{\mathcal{V}_{knn}}(\phi_K) + (N-K)\,\epsilon &&\{\text{Assumption-3}\}\\
&\leq (N-K)\,\epsilon + \psi \max_{\substack{X,Y\in D_i\\ pd(X) \leq K}} PVI_{knn}(\phi_K(X)\to Y) + (1-\psi)\max_{\substack{X,Y\in D_i\\ pd(X) > K}} PVI_{knn}(\phi_K(X)\to Y) &&\{\text{Prediction Depth Separation}\}\\
&\leq (N-K)\,\epsilon + \psi \max_{y}\left(-\log p(Y=y)\right) + (1-\psi)\,\delta &&\{PVI \leq -\log p(Y=y);\ \text{PD-PVI connection for } pd > K\}
\end{aligned}$$

The proof follows by comparing the lower bound on $\mathcal{I}^{D_s}_{\mathcal{V}_{cnn}}$ and the upper bound on $\mathcal{I}^{D_i}_{\mathcal{V}_{cnn}}$. Intuitively, this means that when there is a sufficiently large gap in the mean PD between $D_s$ and $D_i$, the V-information of $D_s$ exceeds that of $D_i$, which is why the model prefers learning the spurious feature instead of using the core features for the task.

## A.2 Grad-CAM Visualization

PD plots help us understand which model layers are actively used for classifying different images. To further aid our intuition, we visualize the Grad-CAM outputs for an arbitrary layer $k$ of the model by attaching a soft-KNN head. Let $g_{knn}$ denote the soft and differentiable version of k-NN. We compute $g_{knn}$ as follows:

$$g_{knn}(\phi^{k}_{q};\ \phi^{k}_{i\in\{1,2,\ldots,m\}})=\frac{\sum_{j\in\mathcal{N}(\phi^{k}_{q},1)}\exp\left(-\|\phi^{k}_{q}-\phi^{k}_{j}\|/s\right)}{\sum_{j\in\mathcal{N}(\phi^{k}_{q},:)}\exp\left(-\|\phi^{k}_{q}-\phi^{k}_{j}\|/s\right)}\qquad(1)$$

This function makes the KNN differentiable and can be used to compute Grad-CAM (Selvaraju et al., 2017). We use the L1 norm for all distance computations. $\phi^k_q$ corresponds to the feature at layer-$k$ for the query image $x_q$. Let $\phi^k_{i\in\{1,2,\ldots,m\}}$ be the training data for the KNN. Let $\mathcal{N}$ denote the neighborhood function. $\mathcal{N}(\phi^k_q, :)$ returns the indices of the K-nearest neighbors of $\phi^k_q$.
$\mathcal{N}(\phi^k_q, 1)$ returns the indices of images with positive label ($y = 1$) from the set of K-nearest neighbors of $\phi^k_q$. $s$ is the median of the set of L1 norms $\{\|\phi^k_q - \phi^k_j\|\}$ for $j \in \mathcal{N}(\phi^k_q, :)$.

## A.3 Vision Experiments

We use the *NICO++* (Non-I.I.D. Image dataset with Contexts) dataset (Zhang et al., 2022) to create multiple spurious datasets (Cow vs. Bird; Dog vs. Lizard) such that the context/background is spuriously correlated with the target. NICO++ is a non-I.I.D. image dataset that uses context to differentiate between the test and train distributions. This forms an ideal setup to investigate which spurious correlations the model learns during training. We follow the procedure outlined by Puli et al. (2022) to create datasets with spurious correlations (90% prevalence) in the training data. The test data has the relationship between the spurious attributes and the true labels flipped. This is similar to the Chex-MIMIC dataset illustrated in Section-4.2. We test our hypothesis using ResNet-18 and VGG16. We train our models for 30 epochs using an Adam optimizer and a base learning rate of 0.01. We choose the best checkpoint using early stopping.

![17_image_0.png](17_image_0.png)

Figure 11: Cow vs. Birds classification on the NICO++ dataset. (A) Training data contains images of cows on grass and birds on water (correlation strength=0.9). The model achieves 97.2% training accuracy. (B) PD plot for ResNet-18 reveals a spurious peak at layer-1, indicating the model's heavy reliance on very simple (potentially spurious) features. (C) Grad-CAM plots for layer-1 reveal that the model mainly relies on the spurious background to make its predictions. (D) Consequently, the model achieves a test accuracy of only 59.9% on test data where the spurious correlation is flipped (i.e., cows (birds) are found on water (grass)).

Figures-11,13 show PD plots and train/test accuracies for models that learn the spurious background feature present in the NICO++ dataset.
While all models achieve > 85% training accuracy, they generalize poorly (59.9% and 63.9%, see Figs-11 and 13) on the test data where the spurious correlation is flipped. This can be anticipated simply by observing the PD plots for the model on the training data. The plots are skewed towards the initial layers, indicating that the model relies heavily on very simple (potentially spurious) features for the task. Grad-CAM maps also confirm that the model often focuses on the background context rather than the foreground object of interest.

![18_image_0.png](18_image_0.png)

Figure 12: Balanced dataset for the Cow vs. Birds classification task on the NICO++ dataset. (A) The training dataset contains a balanced distribution of cows and birds found on water and grass (each group has an equal number of images). (B) The balanced dataset shifts the PD plot towards the later layers (compared to Fig-11B), indicating that the model relies less on spurious features. (C) This consequently results in an improved test accuracy of 80.2% (as compared to 59.9% in Fig-11D for the spurious dataset).

![18_image_1.png](18_image_1.png)

Figure 13: Dog vs. Lizard classification with a spurious background feature on the NICO++ dataset. (A) Training data contains images of outdoor dogs and lizards on rock (correlation strength=0.9). The spurious background color/texture reveals the foreground object. The model achieves 87.2% training accuracy. (B) PD plot for ResNet-18 reveals a spurious peak at layer-1, indicating the model's reliance on simple (potentially spurious) features. (C) The low test accuracy (63.9%) confirms this. The test data has the spurious correlation flipped (i.e., images contain dogs on rock and lizards found outdoors).

We further observe in Fig-12 that balancing the training data (to remove the spurious correlation) results in a model with improved test accuracy (80.2%), as expected.
This is also reflected in the PD plot (Fig-12B), where we see that the distribution of the peaks, as well as the mean PD, shift proportionately towards the later layers, indicating that the model now relies less on the spurious features. By monitoring PD plots during training and using suitable visualization techniques, we show that one can obtain useful insights about the spurious correlations that the model may be learning. This can also help the user make an educated guess about the generalization behavior of the model during deployment.

## A.4 Empirical Relationship: PD vs. V-Information

In Section-4.4 we explore the relationship between PD and V-information. To empirically confirm these results, we further investigate this relationship on four additional datasets: KMNIST, FMNIST, SVHN, and CIFAR10. We train a VGG16 model on these datasets for ten epochs using an Adam optimizer and a base learning rate of 0.01. We use a bar plot to show the correlation between PD and V-entropy. We group PD into intervals of size four and compute the mean V-entropy for samples lying in each PD interval.

![19_image_0.png](19_image_0.png)

Figure 14: The bar plots show a positive correlation between PD and conditional V-entropy. Samples with higher PD also have a higher V-entropy, resulting in lower usable information for models like VGG16.

In Section-4.4, we find that PD is positively correlated with conditional V-entropy, and the results shown in Fig-14 further confirm this observation. Instance difficulty increases with PD, and the usable information decreases with an increase in V-entropy. It is, therefore, clear from Fig-14 that samples with a higher difficulty (PD value) have lower usable information, which is not only intuitive but also provides empirical support to Proposition-1 in Appendix-A.1.

## A.5 Chest Drain Annotations For NIH Dataset

To reproduce the results by Oakden-Rayner et al.
(2020), we need chest drain annotations for the NIH dataset (Wang et al., 2017a), which are not natively provided. To obtain them, we use the MIMIC-CXR dataset (Johnson et al., 2019), which has rich meta-data information in radiology reports. We collaborate with radiologists to identify terms related to Pneumothorax from the MIMIC-CXR reports. These include pigtail catheters, pleural tubes, chest tubes, thoracostomy tubes, etc. We collect chest drain annotations for MIMIC-CXR by parsing the reports for these terms using the RadGraph NLP pipeline (Jain et al., 2021). Using these annotations, we train a DenseNet121 model to detect chest drains relevant to Pneumothorax. Finally, we run this trained model on the NIH dataset to obtain the needed chest drain annotations. We use these annotations to get the results shown in Fig-8D, which closely reproduce the results obtained by Oakden-Rayner et al. (2020).

## A.6 Notion Of Undefined Prediction Depth

Section-3 shows how we compute PD in our experiments. While fully trained models give valid PD values, our application requires working with arbitrary deep-learning models that are not necessarily fully trained. We, therefore, introduce the notion of undefined PD by treating k-NN predictions close to 0.5 (for a binary classification setting) as invalid. We define a δ such that |f_knn(x) − 0.5| < δ implies an invalid k-NN output. We use δ = 0.1 and k = 29 in our experiments. If any k-NN predictions for the last three layers are invalid, we treat the PD of the input image as undefined. To work with high-resolution images (like 512 × 512), we downsample the spatial resolution of all training embeddings to 8 × 8 before using the k-NN classifiers on the intermediate layers. We empirically see that our results are insensitive to k in the range [5, 30].
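The undefined-PD rule above can be sketched as follows (a minimal sketch, not the paper's code: `knn_probs` is a hypothetical per-layer k-NN estimate of P(y = 1) for one image, layers are 0-indexed, and treating an invalid intermediate prediction as disagreement is our simplifying choice):

```python
import numpy as np

DELTA = 0.1  # validity margin from the text

def prediction_depth(knn_probs, delta=DELTA):
    """Return the earliest layer from which every k-NN prediction agrees
    with the final layer's prediction, or None (undefined PD) when any of
    the last three layers is invalid, i.e. |p - 0.5| < delta."""
    knn_probs = np.asarray(knn_probs, dtype=float)
    if np.any(np.abs(knn_probs[-3:] - 0.5) < delta):
        return None  # undefined PD: a late-layer k-NN output is too close to 0.5
    final = knn_probs[-1] > 0.5
    depth = len(knn_probs) - 1
    for layer in range(len(knn_probs) - 1, -1, -1):
        valid = abs(knn_probs[layer] - 0.5) >= delta
        if valid and (knn_probs[layer] > 0.5) == final:
            depth = layer  # prediction already settled at this layer
        else:
            break
    return depth
```

An image whose early-layer k-NN outputs already agree with the final prediction gets a small PD; an image whose late-layer outputs hover near 0.5 gets no PD at all.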
## A.7 A PD Perspective For Feature Learning

Table 1 shows that the core-only accuracy stays high (>98%) for datasets where the spurious feature is harder to learn than the core feature. When the spurious feature is easier than the core, the model learns to leverage it, and hence the core-only accuracy drops to nearly random chance (∼50%). Interestingly, the KMNpatch-MN results have a much higher core-only accuracy (∼69%) and a more significant standard deviation. This is because the choice of features that the model learns depends on the PD distributions of the core and spurious features. We provide three different perspectives on why KMNpatch-MN runs have better results.

PD Distribution Perspective: The KMNpatch-MN domino dataset has a smaller difference in the core-spurious mean PDs (2.2 − 1.1 = 1.1), as compared to other datasets (e.g., MN-KMN has a difference of 5 − 2.2 = 2.8 in their mean PDs). The closer the PD distributions of the core and spurious features are, the more the model treats them equivalently. Therefore, in the case of KMNpatch-MN, we empirically observe that different initializations (random seeds) lead to different choices the model makes in terms of core or spurious features. This is why the standard deviation of KMNpatch-MN is high (20.03) compared to the other experiments.

Theoretical Perspective (Proposition-1): This is not surprising and, in fact, corroborates quite well with Proposition-1 in Appendix-A.1. The Prediction Depth Separation Assumption suggests that without a sufficient gap in the mean PDs of the core and spurious features, one cannot concretely assert anything about their ordinal relationship in terms of their usable information. In other words, spurious features will have higher usable information (for a given model) than the core features only if the spurious features have a sufficiently lower mean PD as compared to the core features.
On the other hand, as the core and spurious features become comparable in terms of their difficulty, the model begins to treat them equivalently.

Loss Landscape Perspective: *(this is a conjecture; we do not have empirical evidence)* The loss landscape is a function of the model and the dataset. The solutions in the landscape that are reachable by the model depend on the optimizer and the training hyperparameters. Given a model and a set of training hyperparameters, we conjecture that the diversity (in terms of the features that the model learns during training) of the solutions in the landscape increases as the distance (difference in mean PD) between the core and spurious features decreases. This diversity manifests as the model's choice of using core vs. spurious features and could potentially result in a higher standard deviation of core-only accuracy across initializations.

## A.8 Code Reproducibility

The code for this project is publicly available at: https://github.com/batmanlab/TMLR23_Dynamics_of_Spurious_Features

## A.9 PD Plot Consistency

We perform several random runs for the experiments shown in Fig-6 and compute the probability density function of the PD plots in Fig-6 using kernel density estimation (KDE). Fig-15 shows the resulting PD envelopes (computed using KDE), as well as the original histograms of the different random runs. We can see the consistency of the PD plots for any given dataset across runs involving different random seeds. The overall ordering of the datasets according to difficulty, computed by mean PD, remains the same. This shows that PD can be used as a reliable measure to estimate dataset difficulty and to detect spurious features.

## A.10 Ensemble Uncertainty As A Measure Of Instance Difficulty

The main idea of our paper can be extended to other instance difficulty metrics and visualization techniques and is not limited to PD or grad-CAM (see Sec-3).
We provide a proof of concept in this section by using techniques different from PD and grad-CAM to detect spurious features. However, there are several added benefits of using sophisticated metrics like PD, and we highlight them here.

![21_image_0.png](21_image_0.png)

Figure 15: Consistency of PD plots across four random runs. The overall ordering of dataset difficulty based on mean PD remains the same as in Fig-6.

## A.10.1 Toy Dataset (Synthetic Spurious Feature)

We re-run the experiments shown in Fig-6 with a new set of metrics. To compute instance difficulty, we train an ensemble of linear models on the various datasets shown in Fig-6 and compute the uncertainty over the softmax outputs for the various images. The uncertainty of the ensemble can be used as a proxy for instance difficulty. We compute the uncertainty by taking an expectation over the softmax entropy of the ensemble models (as is common in the uncertainty quantification literature (Lakshminarayanan et al., 2017; Mukhoti et al., 2023; Van Amersfoort et al., 2020)), and we use this metric instead of PD. We order the datasets based on mean entropy, and use SHAP (Lundberg & Lee, 2017) to visualize the early peaks in order to detect spurious features. Results are shown in Fig-16. We find that the overall order of the dataset difficulty is the same as in Fig-6, and the entropy plots also look similar to the PD plots computed previously. The spurious patch in the KMNIST dataset significantly decreases the entropy of the ensemble models, as expected. We visualize these images using SHAP and find that the model heavily relies on the spurious patch. Upon removing the patch from the KMNIST dataset (see Fig-16E), the mean entropy of the dataset significantly increases as the model now has to rely mainly on the core features.
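The expectation-over-softmax-entropy computation described above can be sketched as follows (a minimal sketch; the function name is ours):

```python
import numpy as np

def ensemble_entropy(softmax_outputs):
    """softmax_outputs: (n_models, n_classes) softmax predictions of the
    ensemble members for one input. Returns the expectation, over members,
    of the softmax entropy; higher values indicate harder instances."""
    p = np.asarray(softmax_outputs, dtype=float)
    # clip avoids log(0); each row's entropy is -sum p log p
    entropies = -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=1)
    return float(entropies.mean())
```

A confident (near one-hot) ensemble yields entropy near 0, while maximally uncertain members yield log(n_classes), matching the role of mean entropy as a difficulty score in Fig-16.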
We plot the KMNIST images (with the spurious patch) along with their SHAP visualizations for various classes in Fig-17 to show how the model significantly relies on the spurious patch for all the classes.

![22_image_0.png](22_image_0.png)

Figure 16: Ensemble uncertainty as a measure of dataset difficulty and SHAP for visualizing spurious features. Top row shows three reference datasets and their corresponding entropy plots. The datasets are ordered based on their difficulty (measured using mean entropy, shown by dotted vertical lines): KMN w/ patch (0.107) < MNIST (0.491) < FMNIST (0.527) < KMN w/o patch (1.04) < CIFAR10 (1.593). The bottom row shows the effect of the spurious patch on the KMNIST dataset. The yellow region on the axis indicates the expected difficulty of classifying KMNIST. While the original KMNIST lies in the yellow region, the spurious patch significantly reduces the task difficulty. The SHAP plots show that the model focuses on the spurious patch.

## A.10.2 Medical Dataset (Real Spurious Feature)

We now compare the ensemble entropy method with PD on the real medical dataset (NIH) with real spurious features (like chest drains). The setup is exactly the same as in Sec-4.2 (Real Spurious Feature in Medical Dataset). Additionally, we try to detect the simple artifacts and chest drains (as shown in Fig-8) using the ensemble entropy method. We increase the model capacity of the ensemble by adding a convolutional layer and a ReLU layer before the linear classification (Conv–ReLU–Linear). This not only helps the ensemble models detect more complex spurious features (as compared to the simple 1-layer linear models in Section-A.10.1) but also leads to smoother and better grad-CAM plots, which helps us better debug the spurious features the model is using. The setup for the PD experiments, however, remains the same as in Sec-4.2. Fig-18 shows that both the PD and the ensemble entropy methods can detect the simple spurious features in the NIH dataset.
However, Fig-19 shows that the PD method can additionally detect more complex spurious features like chest drains in the NIH dataset, whereas the ensemble entropy method is not able to do so, as it comprises simple convolutional neural networks that have low model capacity and can therefore only detect simple spurious artifacts. To further validate whether the simple convolutional model used above can learn chest drains, we set up a simple classification experiment: using this simple baseline model, we try to classify whether an X-ray image in the MIMIC-CXR dataset (Johnson et al., 2019) contains chest drains/tubes. We collect chest drain annotations for the MIMIC-CXR dataset by parsing the radiology reports using the RadGraph NLP pipeline (Jain et al., 2021). We collaborate with radiologists to identify terms related to Pneumothorax (like pigtail catheters, pleural tubes, chest tubes, thoracostomy tubes, etc.). Using these annotations, we train the simple convolutional model for 40 epochs, and the model only achieves an AUC of 0.58 (random guessing gives an AUC of 0.5). This experiment demonstrates that a simple convolutional model is not capable of detecting complex spurious features like chest drains. Therefore, the gradCAM plots for the ensemble entropy method fail to detect chest drains in the NIH dataset (as shown in Fig-19).

![23_image_0.png](23_image_0.png)

Figure 17: SHAP plots for visualizing what linear models learn on the KMNIST dataset with a spurious patch. The figure shows how the model significantly relies on the spurious patch for each of the classes shown.

![23_image_1.png](23_image_1.png)

Figure 18: Simple spurious artifacts in NIH dataset. GradCAM plots show that both methods, entropy and PD, can detect simple spurious artifacts in the NIH dataset.

![24_image_0.png](24_image_0.png)

Figure 19: Challenging spurious features (like chest drains) in NIH dataset. Grad-CAM plots for the PD method reveal the spurious chest drain in each of the NIH chest X-ray images, whereas the entropy method, which comprises an ensemble of simple baseline convolutional architectures, cannot detect more complex spurious features like chest drains. This experiment justifies the use of the PD metric over the ensemble uncertainty of simple baseline models having low model capacity.

This section shows that our work is not limited to PD or GradCAM. It is defined for any example difficulty metric (see Sec-3). We use PD as a proof of concept, but our hypothesis simply suggests that by monitoring the example difficulty of training data points and by visualizing the examples (particularly those with low difficulty), we can detect spurious features that hurt model generalization. The better the example difficulty metric and visualization technique, the better we can detect such spurious features. However, there are several additional benefits of using more sophisticated methods like PD, which are clearly illustrated in this section (see Fig-19). PD measures sample difficulty in a model-specific manner. It takes the model architecture into account, and we show why this is important in Fig-4. The ensemble of linear models approach shown here is not model-specific. It comprises simple linear baseline models and hence cannot detect more challenging spurious features like chest drains. Using sophisticated techniques like PD, one can also attempt to broadly identify the layers responsible for learning spurious features. This can further help develop suitable intervention schemes that remove the spurious feature representations from those layers. This is a promising future direction to address the issue of unlearning spurious features.
One may use an ensemble of multi-layer neural networks to compute uncertainty over samples, but training an ensemble of larger models involves significant computational overhead and is time-consuming. Additionally, we develop useful theoretical connections between PD and information-theoretic concepts like usable information (see Appendix-A.1) to explain the empirical success we obtain in our experiments. The above benefits justify our use of the PD metric in this paper.
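For reference, the AUC comparison used in Section-A.10.2 (e.g., the chest-drain classifier's 0.58 vs. the 0.5 of random guessing) can be computed with the rank-based (Mann–Whitney) formulation; a minimal sketch with hypothetical inputs:

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive receives a
    higher score than a randomly chosen negative (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranker scores 1.0, a constant or random one scores 0.5, so values like 0.58 indicate performance barely above chance.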
Review 1:

Summary: This work investigates *shortcuts*, which describe a particular kind of spurious correlation that dominates the prediction for models over class-relevant features. Based on the statement in related work that "all spurious correlations are shortcuts", the work shows that spurious correlations do not qualify as *shortcuts* when they do not provide a pattern that is easier to learn than the class-relevant features. A method from previous work (Prediction Depth) is used as a proof of concept to empirically quantify the instance difficulty over datasets. Using PD, the work empirically shows that "easy" features are learned within the first two epochs of training. A formal connection between PD and V-usable information from another previous work is demonstrated, backed with some empirical results. This work highlights the importance of monitoring the instance difficulty over datasets to detect shortcuts.

Strengths and Weaknesses:

### Strengths

- The paper is very well written, polished, and an enjoyable read.
- The figures are of high quality and highlight the results well.
- The difficulty of various datasets, mostly synthetic, with one real-world experiment, is demonstrated very well.
- Given the multiple recent works claiming all spurious correlations are supposedly *shortcuts*, this work challenges those over-hasty assumptions, which is an important contribution to the field.
- The work motivates the important habit of checking for *shortcuts*, which is not reflected in the test accuracy alone.

### Weaknesses

- (critical) The work does not report error bars or the number of trials. Reporting these is critical to ensure the statistical significance of the empirical experiments.
- (critical) The work verifies the hypothesis using the (somewhat complex) prediction depth.
In the experiments, the main indicator for *shortcuts* is the number of samples with a PD of 1, i.e., the number of samples that can be classified with kNN after a single linear transformation (plus non-linear activation?). The empirical results would be stronger with a weaker, linear baseline to measure the instance difficulty, somewhat similar to He et al. (2019), which would most likely also find the static artifact shortcuts in the synthetic toy datasets.
- (non-critical) The Prediction Depth assumes that the data points are transformed in feature space such that they lie close when they will ultimately be classified as the same class. For the first (linear) layer and simple static artifacts, this may be (intuitively) true. There is no guarantee that this is true for higher layers. This is arguably an issue of PD, which is used as a proof-of-concept approach to measure instance difficulty. Given that this is the only approach used (so far), it would be helpful if this were discussed, maybe in Appendix A.1.
- (non-critical) Definition 3 defines spurious correlations as shortcuts if feature $s$ is strictly *easier* than the class-feature. This definition becomes somewhat challenging when assuming that classes may be activated by features of various difficulties, where the spurious feature lies within the least and most difficult class-defining features. This is somewhat going in the direction of the transposed statement "Not all shortcuts are spurious correlations" (this is obviously not true given the definition of shortcuts in this work).

Requested Changes:

- (critical) The work must report the number of trials and error bars to support the claim of statistical significance.
- (critical) A linear baseline (similar to He et al. 2019) to quantify the instance difficulty should be used to provide an additional, even simpler instance difficulty metric (and justify the use of the more complex PD metric).
- (non-critical) A discussion of the class-relevant locality required for the kNN assumption in higher-layer feature space would be helpful.
- (non-critical) A discussion of the case where class-relevant features have various difficulties may be an interesting addition.

### Misc. (non-critical):

- While I am happy the paper comes with code, for the sake of reproducibility I would urge the authors to structure their code as a Python module using `setuptools`, without relying too heavily on jupyter notebooks. Adhering to some code styling (PEP8, e.g. via black) and keeping to some docstring style (googledoc, numpydoc, etc.) does wonders for readability and will thus boost reproducibility and accelerate subsequent research.
- (figure-4) (sections-4.2,A.3) looks somewhat odd
- Def2: Let ... indicate~~s~~
- The text captions of the PD figures are a little too small in all figures after Fig. 2

Broader Impact Concerns: None.

==================================================

Review 2:

Summary: The paper develops a distinction between shortcuts and spurious correlations in general, by emphasizing the dependence of shortcuts on learnability. The authors present empirical and theoretical analyses to support these claims. To formalize learnability, the authors start by introducing the notion of Prediction Depth (PD), which states that an example is easier if it can be classified by the earlier embedding layers in a neural network. From here, the authors state that a dataset or task is easier if its mean PD is lower. Obviously, these definitions depend intricately on the model class. The connection back to features from PD is given by examining the histogram of PDs over the dataset. The authors argue that a peak in this histogram at earlier layers suggests the existence of shortcut features.
By examining these histograms over the course of training, the authors claim these shortcut features are learned early in training, since the peaks in the histogram at early layers remain constant. Finally, a theoretical connection between PD and V-usable information is made.

Strengths and Weaknesses:

**Strengths**

- Overall, the experiments are quite interesting and have a reasonable mix of datasets.
- The figures present the paper's results reasonably well and are easier to follow than the text.
- The general point that it is important to consider the learnability of spurious features is valid.

**Weaknesses**

- **Lack of clarity in definitions and analysis.** The authors make the following definition of shortcuts: spurious features that are easy to learn. Unfortunately, this definition rests crucially on the definition of ease of learning and is highly model dependent.
- **PD as definition of difficulty.** First, this definition has no dependence on the labels, which seems odd. Even trivial models could return a PD of 1. For example, imagine that the first embedding layer is just a constant function. This would be true regardless of how intuitively “difficult” a task is. The inline equation is also confusing. There is a minimization and maximization, but only one variable (n) is used. This should be rewritten, since you want the minimum n that maximizes the function.
- **Model dependence.** PD depends intricately on the architecture of the neural network. Indeed, it seems highly likely that changing the architecture could lead to contradictory results, where two datasets have a different ordering on their mean PD for two different architectures.
- **Poorly written.** The analysis is quite hard to follow and contains many circular claims.
For example, the line “This experiment demonstrates that not all spurious correlations are shortcuts, but only those spurious features that are easier than the core features are potential shortcuts.” Under the authors' own terms, this is just the definition of a shortcut and not something that should be empirically tested.
- **Definition 2.** This is not a definition, it simply introduces notation. In particular, “difficulty of predicting” is not defined and neither is “model.” What does “model” refer to here? Is this a model class, a trained model, etc.?

Requested Changes: I feel the paper needs significant rewriting. As discussed above in the Strengths and Weaknesses section, there are major difficulties with the definitions used. These lead to potentially circular claims and an analysis of results that I found difficult to follow.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: In this study, the authors discovered that spurious features that are easier than the core features are shortcuts. They also found that Prediction Depth serves as an effective metric for detecting spurious features. Moreover, their findings revealed that shortcuts are learnt by early layers and in the early learning phase. Lastly, the authors underscore the importance of monitoring early training with Prediction Depth.

Strengths and Weaknesses:

Strengths:
1. The paper's topic holds interest and relevance across multiple domains within machine learning, such as adversarial learning and semi-supervised learning.
2. To validate their hypothesis, the authors conduct experiments on extensive datasets.

Weaknesses:
1. Given that they are termed 'shortcuts', it seems intuitive that spurious features would be less challenging than core features. The inherent suggestion in their names already implies this meaning, so the first claim might not provide particularly meaningful insights.
2.
Definition-1 is different from "Shortcuts are spurious features that perform well on standard benchmarks … real-world setting". Standard benchmarks should include the test set. Should the distribution deviate substantially from the test set, the results would evidently indicate the necessity to investigate these spurious features further. I think the challenge lies in identifying spurious features that exist in both the training and test datasets.
3. The aforementioned issue brings up further questions that challenge the second and third claims made by the authors. Are all spurious features relatively simple patterns, like the 'block' used in the experiments? Could there exist spurious features with more complex patterns that remain easier to learn than the core features? If so, these features should be learned in the later layers and during later phases of learning. Therefore, while Prediction Depth (PD) might be useful for identifying evident spurious features, its utility for detecting such features in real-world problems may be limited. For instance, is PD capable of identifying any spurious features in databases such as ImageNet or COCO?

Requested Changes:
1. Fonts in some figures are too small to see.

Broader Impact Concerns: None

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: Reviewers found the initial manuscript, though it handles an interesting problem, unclear and lacking important experiments to support the claims of the paper. The authors, in discussion with the reviewers, updated the manuscript, including important additional experiments and rewriting the paper to make it more understandable.
In the words of one of the reviewers from the discussion, "Given the updated manuscript with a clearer definition of the material, as well as the added analysis with the alternative, simple model-based instance difficulty and the multiple trials for the PD-plots, I feel confident that this manuscript is in a good shape and sufficiently supports its claims. It motivates the importance of checking at least for obvious spurious correlations picked up by the model, which the test accuracy does not reflect, and thus provides a sufficient contribution to the field." I am happy to recommend acceptance of the current draft with minor revisions to add more references, as mentioned during the discussion phase.

==================================================
# No More Pesky Hyperparameters: Offline Hyperparameter Tuning For RL

| Han Wang∗† | han8@ualberta.ca |
|-------------------|----------------------|
| Archit Sakhadeo∗† | sakhadeo@ualberta.ca |
| Adam White† | amw8@ualberta.ca |
| James Bell† | jbell1@ualberta.ca |
| Vincent Liu† | vliu1@ualberta.ca |
| Xutong Zhao† | xutong@ualberta.ca |
| Puer Liu† | puer@ualberta.ca |
| Tadashi Kozuno† | kozuno@ualberta.ca |
| Alona Fyshe† | alona@ualberta.ca |
| Martha White† | whitem@ualberta.ca |

Reviewed on OpenReview: *https://openreview.net/forum?id=AiOUi3440V*

## Abstract

The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time-consuming. We focus on hyperparameter tuning from offline logs of data, to fully specify the hyperparameters for an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model to identify promising hyperparameters. Though such a natural idea is (likely) being used in industry, it has yet to be systematically investigated. We identify several criteria to make this strategy effective, and develop an approach that satisfies these criteria. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails.

## 1 Introduction

Reinforcement learning (RL) agents are extremely sensitive to the choice of hyperparameters that regulate speed of learning, exploration, degree of bootstrapping, amount of replay, and so on. The vast majority of work in RL is focused on new algorithmic ideas and improving performance—in both cases assuming near-optimal hyperparameters.
Indeed, the vast majority of empirical comparisons involve well-tuned implementations or report the best performance after a hyperparameter sweep. Although progress has been made towards eliminating the need for tuning with adaptive methods (White & White, 2016; Xu et al., 2018; Mann et al., 2016; Zahavy et al., 2020; Jacobsen et al., 2019; Kingma & Ba, 2014; Papini et al., 2019), widely used agents employ dozens of hyperparameters, and tuning is critical to their success (Henderson et al., 2018).

∗These authors contributed equally to this work.
†Computing Science, Alberta Machine Intelligence Institute (Amii), University of Alberta, Edmonton, Alberta, Canada.

The reason domain specialization and hyperparameter sweeps are possible—and perhaps why our algorithms are so dependent on them—is that most empirical work in RL is conducted in simulation. Simulators are critical for research because they facilitate rapid prototyping of ideas and extensive analysis. On the other hand, simulators allow us to rely on features of simulation not possible in the real world, such as exhaustively sweeping different hyperparameters. Often, it is not acceptable to test poor hyperparameters on a real system, where they could cause serious failures. In many cases, interaction with the real system is limited, or in more extreme cases, only data collected from a human operator is available. Recent experiments confirm that significant sensitivity to hyperparameters is exhibited on real robots as well (Mahmood et al., 2018). It is not surprising that one of the major roadblocks to applied RL is extreme hyperparameter sensitivity. Fortunately, there is an alternative to evaluating algorithms on the real system: using previously logged data under an existing controller (human or otherwise). This offline data provides some information about the system, which could be used to evaluate and select hyperparameters without interacting with the real world.
Hyperparameters are general, and can even include a policy initialization that is adjusted online. We call this problem *Data2Online*. This problem setting differs from the standard offline or batch RL setting because the goal is to select hyperparameters offline for the agent to use to *learn online in deployment*, as opposed to learning a *policy* offline. Typically in offline RL a policy is learned on the batch of data, using a method like Fitted Q Iteration (FQI) (Ernst et al., 2005; Riedmiller, 2005; massoud Farahmand et al., 2009), and the resulting fixed policy is deployed without being updated. Our setting is less stringent, as the policy continually adapts during deployment. Intuitively, selecting just the hyperparameters for further online learning should not suffer from the same hardness problems as offline RL (see (Wang et al., 2021) for hardness results), because the agent has the opportunity to gather more data online and adjust its policy. We discuss a strategy to use offline data for selecting hyperparameters for learning from scratch in deployment. The idea is simple: we use the data to learn a *calibration model*, and evaluate hyperparameters in the calibration model. For example, consider designing a learning system for controlling a water treatment plant, given only a set of data logs, visualized in Figure 1. We want an agent to control pump speeds and chemical treatments to clean the water with minimal energy usage—but how do we set the learning rate and other hyperparameters of this agent? We can learn a calibration model offline from data logs previously collected while human operators controlled the plant. The agent can be tested with different hyperparameter settings in the calibration model, to gauge which resulted in best online performance (e.g., reward accumulation or stable learning). The agent with these best hyperparameters is then deployed on the real plant. 
![1_image_0.png](1_image_0.png)

Figure 1: Data2Online: each hyperparameter setting is denoted by Λ. The plant imagined by the agent (via calibration model) is intentionally pixelated to reflect approximation of the true plant.

This idea is natural, and versions of it are likely being used in industry already. For example, one could first construct a high-fidelity simulator and then use RL. This strategy was taken for impressive applications like flying balloons (Bellemare et al., 2020) and controlling plasma (Degrave et al., 2022). Still other work has *learned* slightly less high-fidelity simulators, using (recurrent) neural networks, to prototype and compare algorithms, such as for gas turbine control (Schaefer et al., 2007). These are natural steps, to understand which algorithms are effective, before deploying on a real system! The primary novelty in our work is to investigate learning and using calibration models as a black-box algorithm for Data2Online, particularly in settings where we do not have access to a physics simulator of the deployment scenario. Instead of considering the use of a simulator as part of the engineer's prototyping, we consider the learning and use of a calibration model as part of an automated process. The input to the automation pipeline is logged data from interaction under the current (human) operator and the output is an agent that learns in deployment (not a static policy). It may seem like a hopeless endeavour, considering the evidence in Sim2Real and in model-based RL suggesting small errors in models can result in poor policies (Talvitie, 2017; Höfer et al., 2021). However, the key distinction for our setting is that we identify hyperparameters, rather than learning a complete policy. The calibration model need not be a perfect simulator to be useful for identifying reasonable hyperparameters, whereas learning a transferable policy likely requires accurate models.
This automated process is what we call Data2Online, and there has as yet been very little work understanding its theoretical limits, algorithms, and potential failure modes. To the best of our knowledge, there is only one other work that considers how to use a fixed offline dataset to evaluate an agent that learns online in deployment (Mandel et al., 2016). Their approach uses a novel strategy to reason about how the policy changes during learning, but is only effective for short horizon problems. In this paper, we first introduce our Data2Online strategy, and outline conditions on the calibration model and learning agents for this strategy to be effective. We bound the difference between the real-environment value of the hyperparameters chosen in the calibration model and that of the true best hyperparameters, in terms of the calibration model error and the length of interaction. We then develop a non-parametric calibration model, based on k-nearest neighbors and a carefully chosen distance metric, that satisfies the conceptual criteria needed for the calibration model. We investigate the approach in three problems with different types of learning dynamics, two different learning agents, under different offline data collection policies, and with ablations on the key components of our proposed calibration model. We conclude by highlighting that grid search can be replaced with any hyperparameter optimization algorithm, and that this further improves performance in the real environment.

## 2 Related Problem Settings

Offline RL involves learning from a dataset, but with a different goal: to deploy a fixed (near-optimal) policy. As a result, the hyperparameter selection approaches for offline RL are quite different. One strategy that has been used is to evaluate different policies corresponding to different hyperparameter settings, under what has been introduced as the Offline Policy Selection problem (Yang et al., 2020).
This setting differs from our setting in that they evaluate the utility of the learned policy, rather than the utility of hyperparameters for learning online, in deployment. Some work in offline RL examines learning from data, and then adapting the policy online (Ajay et al., 2021; Yang & Nachum, 2021; Lee et al., 2021), including work that alternates between data collection and high confidence policy evaluation (Chandak et al., 2020a;b). Our problem is complementary to these, as a strategy is needed to select their hyperparameters. In Sim2Real the objective is to construct a high fidelity simulator of the deployment setting, and then transfer the policy and, in some cases, continue to fine tune in deployment. We focus on learning the calibration model from collected data, whereas in Sim2Real the main activity is designing and iterating on the simulator itself (Peng et al., 2018). Again, however, approaches for Data2Online are complementary, and even provide another avenue to benefit from the simulator developed in Sim2Real to pick hyperparameters for fine tuning. Domain adaptation in RL involves learning on a set of source tasks, to transfer to a target task. The most common goal has been to enable zero-shot transfer, where the learned policy is fixed and deployed in the target task (Higgins et al., 2017; Xing et al., 2021). Our problem has some similarity to domain adaptation, in that we can think of the calibration model as the source task and the real environment as the target task. Domain adaptation, however, is importantly different from our Data2Online problem: (a) in our setting we train in a *learned* calibration model, not a real environment, and need a mechanism to learn that model; (b) the relationship between our source and target is different from the typical relationship in domain adaptation; and (c) our goal is to select and transfer hyperparameters, not learn and transfer policies.
Learning from demonstration (LfD) and imitation learning involve attempting to mimic or extract a policy at least as good as a demonstrator. If the agent is learning to imitate online, then it is unrealistic to assume the demonstrator would generate enough training data to facilitate hyperparameter sweeps. If the learner's objective is to imitate from a dataset, then this is exactly the problem studied in this paper. Unfortunately, hyperparameter tuning in LfD is usually not addressed; instead it is common to use explicit sweeps (Merel et al., 2017; Barde et al., 2020; Ghasemipour et al., 2019; Behbahani et al., 2019) or manual, task-specific tuning (Finn et al., 2017). Hyperparameter tuning, however, is a major obstacle to the deployment of LfD methods (Ravichandar et al., 2020). Finally, there is a large literature on hyperparameter selection in RL. Most works introduce meta algorithms that learn hyperparameters, including work on meta-descent for stepsizes (Sutton, 1992; Xu et al., 2018; Jacobsen et al., 2019) and selecting the trace parameter (Downey & Sanner, 2010; Mann et al., 2016; White & White, 2016). These algorithms could be beneficial for offline hyperparameter selection, because they help reduce sensitivity to hyperparameters; but they are not a complete solution as they still have hyperparameters to tune. Other work has provided parameter-free methods that have theoretically defined formulas for hyperparameters (Papini et al., 2019). Deriving such algorithms is important, but is typically algorithm specific and requires time to extend to broader classes of algorithms. More recently, work has examined online hyperparameter selection, using off-policy learning to assess the utility of different hyperparameters in parallel (Paul et al., 2019; Tang & Choromanski, 2020).
Otherwise, much of the work has been focused on settings where it is feasible to obtain multiple runs under different hyperparameters—such as in simulation—with the goal to improve on simple grid search (Srinivas et al., 2010; Bergstra & Bengio, 2012; Snoek et al., 2012; Li et al., 2018; Jaderberg et al., 2017; Falkner et al., 2018; Parker-Holder et al., 2020).

## 3 Problem Formulation

In RL, an agent learns to make decisions through interaction with an environment. We formulate the problem as a Markov Decision Process (MDP), described by the tuple $(\mathcal{S}, \mathcal{A}, R, P)$. $\mathcal{S}$ is the state space and $\mathcal{A}$ the action space. $R : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is the reward, a scalar returned by the environment. The transition probability $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0, 1]$ describes the probability of transitioning to a state, for a given state and action. On each discrete timestep $t$ the agent selects an action $A_t$ in state $S_t$, the environment transitions to a new state $S_{t+1}$ and emits a scalar reward $R_{t+1}$. The agent's objective is to find a policy that maximizes future reward. A policy $\pi : \mathcal{S} \times \mathcal{A} \to [0, 1]$ defines how the agent chooses actions in each state. The objective is to maximize future discounted reward or the *return*, $G_t \doteq R_{t+1} + \gamma_{t+1} G_{t+1}$, for a discount $\gamma_{t+1} \in [0, 1]$ that depends on the transition $(S_t, A_t, S_{t+1})$ (White, 2017). For continuing problems, the discount may simply be a constant less than 1. For episodic problems the discount might be 1 during the episode, and become zero when $S_t, A_t$ leads to termination. Common approaches to learn such a policy are Q-learning and Expected Sarsa, which approximate the action-values—the expected return from a given state and action—and Actor-Critic methods that learn a parameterized policy (see (Sutton & Barto, 2018)). We assume that in the *Data2Online* setting the agent has access to an offline log of data that it can use to initialize hyperparameters before learning online.
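To make the notation above concrete, here is a minimal sketch (with a hypothetical toy environment, not from the paper) of logging experience tuples and computing the return $G_t$ with transition-dependent discounts:

```python
# Sketch: logging tuples (S, A, R', S', gamma') from a toy episodic environment
# and computing G_t = R_{t+1} + gamma_{t+1} * G_{t+1} by backward recursion.

class ToyEnv:
    """Tiny 3-step episodic environment, purely for illustration."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return 0
    def step(self, a):
        self.t += 1
        done = (self.t == 3)
        # returns s', R_{t+1}, gamma_{t+1} (discount is 0 on the terminal transition)
        return self.t, 1.0, 0.0 if done else 0.9

def collect_log(env, policy, n_episodes):
    """Build an offline log D of tuples (S, A, R', S', gamma')."""
    log = []
    for _ in range(n_episodes):
        s, gamma = env.reset(), None
        while gamma != 0.0:
            a = policy(s)
            s2, r, gamma = env.step(a)
            log.append((s, a, r, s2, gamma))
            s = s2
    return log

def returns(rewards, discounts):
    """Backward recursion G_t = R_{t+1} + gamma_{t+1} * G_{t+1}."""
    g, out = 0.0, []
    for r, gamma in zip(reversed(rewards), reversed(discounts)):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

log = collect_log(ToyEnv(), policy=lambda s: 0, n_episodes=1)
G = returns([tup[2] for tup in log], [tup[4] for tup in log])
print(G)  # mathematically [2.71, 1.9, 1.0]
```

Setting the discount to zero on the terminal transition is what makes the episodic case a special case of the transition-dependent discount formulation.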
This log consists of $n_{data}$ tuples of experience $\mathcal{D} = \{(S_i, A_i, R_{i+1}, S_{i+1}, \gamma_{i+1})\}_{i=1}^{n_{data}}$, generated by interaction in the environment by a previous controller or controllers.¹ For example, an agent that will use Expected Sarsa might want to use this data to decide on a suitable stepsize $\alpha$, the number of layers $l$ in the neural network (NN) architecture for the action-values and even an initialization $\theta_0$ for the NN parameters—namely a policy initialization. There are several options for each hyperparameter combination, $\lambda = (\alpha, l, \theta_0)$, resulting in a set of possible hyperparameters $\Lambda$ to consider. This set can be discrete or continuous, depending on the underlying ranges. For example, the agent might want to consider any $\alpha \in [0, 1]$ and a $\theta_0$ only from a set of three possible choices. Procedurally, the Data2Online algorithm is given the dataset $\mathcal{D}$ and the set of hyperparameters $\Lambda$, and outputs a selected hyperparameter setting $\tilde{\lambda}$. A good choice is one that is within $\varepsilon$-optimal of the best hyperparameters

$$\text{Perf}(\tilde{\lambda}) \geq \max_{\lambda \in \Lambda} \text{Perf}(\lambda) - \varepsilon \qquad (1)$$

where $\text{Perf}(\lambda)$ is the online performance of the agent, when deployed with the given hyperparameters. Typically, this will be the cumulative reward for continuing problems and average return for episodic problems, for a fixed number of steps $T$. Many hyperparameters may allow the agent to perform well, so we focus on nearly-optimal performance under $\tilde{\lambda}$ rather than on identifying the best hyperparameters. The central question for this Data2Online problem is: how can the agent use this log of data to select hyperparameters before learning in deployment? This is no easy task.

¹Going back to our water treatment example, this data might have been collected over the last two years using actions from a human operator. We expect the data is likely from a reasonable or nearly optimal policy. Before letting our agent learn on and control the water treatment plant, it is sensible to use this already available data to improve its performance online.
The agent cannot query $\text{Perf}(\lambda)$. It is not evaluating only a *fixed* policy, for which we could use the large literature on Off-policy Policy Evaluation; it is evaluating a *learning* policy. In the remainder of this paper, we introduce and test a new algorithm for this Data2Online problem.

## 4 Data2Online Using Calibration Models

This section introduces the idea of calibration models and how they can be used for hyperparameter selection. We first discuss how to use the calibration model to select hyperparameters, before providing a way to learn the calibration model. We then discuss certain criteria on the calibration model and agent algorithm that make this strategy more appropriate. We conclude with some theoretical characterization of the error in hyperparameter selection, based on inaccuracies in the calibration model.

## 4.1 Using Calibration Models To Select Hyperparameters

A calibration model is a simulator of the environment—learned from an offline batch of data—used to specify (or calibrate) hyperparameters in the agent. With the calibration model, the agent can test different hyperparameter settings. It evaluates the online performance of different hyperparameter choices in the calibration model and selects the best one. It can then be deployed into the true environment without any remaining hyperparameters to tune. The basic strategy is simple: we train a calibration model, then do grid search in the calibration model and pick the top hyperparameter, as summarized in Algorithm 1. For each hyperparameter, we obtain a measure of the online performance of the agent across $n_{runs}$ in the calibration model, assuming it gets to learn for $n_{steps}$ of interaction. The pseudocode for AgentPerfInEnv is in Algorithm 2, for the episodic setting where we report average returns during learning. Note that we add a cutoff in the evaluation scheme to guarantee that at least 30 episodes are seen by the agent during training in the calibration model.
We cut off episodes that run longer than $n_{steps}/30$ steps, teleporting the agent back to a start state.

    Algorithm 1 Hyperparameter Selection with Calibration Models using Grid Search
    Input: Λ: hyperparameter set for learner Agent
           D: the offline data log
           n_steps: number of interactions or steps
           n_runs: number of runs
    Train calibration model C with D
    for λ in Λ do
        Perf[λ] = AgentPerfInEnv(C, Agent(λ), n_steps, n_runs)   (see Algorithm 2)
    end for
    Return: argmax_{λ ∈ Λ} Perf[λ]

Many components in this approach are modular, and can be swapped with other choices. For example, instead of expected return during learning (online performance), optimizing the hyperparameters might be more desirable to find the best policy after a budget of steps. This would make sense if cumulative reward during learning in deployment was not important. We might also want a more robust agent, and instead of expected return, we may want median return. Finally, the grid search can be replaced with a more efficient hyperparameter selection method; we discuss this further in Section 7.

    Algorithm 2 AgentPerfInEnv
    Input: C: calibration model, Agent: learner, n_steps: number of steps, n_runs: number of runs
    n_cutoff ← n_steps / 30        // ensure there are at least 30 episodes
    ReturnsAcrossRuns = []
    for i = 1 ... n_runs do
        t ← 0, G ← 0, Returns = []
        s ← random start state from C
        for j = 1 ... n_steps do
            Obtain a from Agent(s), obtain s', r = C(s, a), give s', r to Agent
            G ← G + γ^t r
            t ← t + 1
            if s' is terminal or t > n_cutoff then
                Append G to Returns
                s ← random start state from C, t ← 0, G ← 0
            end if
        end for
        ReturnsAcrossRuns[i] ← average(Returns)
    end for
    Return: average(ReturnsAcrossRuns)

We can also make this hyperparameter search more robust to error in the calibration model by obtaining performance across an ensemble of calibration models. This involves using $n_{ensembles}$ random subsets of the log data, say by dropping at random 10% of samples, and training $n_{ensembles}$ calibration models.
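A self-contained sketch of Algorithms 1 and 2 follows. The calibration model and agent here are toy stand-ins (not the paper's code): grid search over the stepsize of a tabular Q-learning agent inside a tiny model, scoring each setting by average return during learning:

```python
# Toy grid search over a stepsize alpha, scored by average return during
# learning inside a stand-in calibration model (Algorithms 1-2 in miniature).

import random

class ToyCalibrationModel:
    """Stand-in model: action 1 ends the episode with reward +1;
    action 0 stays put with a small step penalty."""
    def start(self):
        return 0
    def step(self, s, a):
        if a == 1:
            return 0, 1.0, True
        return 0, -0.01, False

class QAgent:
    """Tabular epsilon-greedy Q-learning; the hyperparameter is alpha."""
    def __init__(self, alpha, eps=0.1, seed=0):
        self.alpha, self.eps, self.q = alpha, eps, {}
        self.rng = random.Random(seed)
    def act(self, s):
        if self.rng.random() < self.eps:
            return self.rng.choice([0, 1])
        return max([0, 1], key=lambda a: self.q.get((s, a), 0.0))
    def update(self, s, a, r, s2, done):
        target = r if done else r + max(self.q.get((s2, b), 0.0) for b in [0, 1])
        q = self.q.get((s, a), 0.0)
        self.q[(s, a)] = q + self.alpha * (target - q)

def agent_perf_in_env(model, make_agent, n_steps, n_runs):
    """Average (over runs) of the average episodic return during learning."""
    run_avgs = []
    for run in range(n_runs):
        agent = make_agent(run)
        s, g, episode_returns = model.start(), 0.0, []
        for _ in range(n_steps):
            a = agent.act(s)
            s2, r, done = model.step(s, a)
            agent.update(s, a, r, s2, done)
            g += r
            s = model.start() if done else s2
            if done:
                episode_returns.append(g)
                g = 0.0
        run_avgs.append(sum(episode_returns) / max(len(episode_returns), 1))
    return sum(run_avgs) / n_runs

model = ToyCalibrationModel()   # in practice: learned from the offline log
perf = {alpha: agent_perf_in_env(model,
                                 lambda seed, a=alpha: QAgent(a, seed=seed),
                                 n_steps=500, n_runs=3)
        for alpha in [0.0, 0.5]}
best_alpha = max(perf, key=perf.get)
print(best_alpha)
```

With alpha = 0 the agent never learns and wanders via exploration only, so its average return is lower than with alpha = 0.5; the per-model scores computed this way could also be aggregated across an ensemble of calibration models, as discussed next.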
The hyperparameter performance can either be averaged across these models, or a more risk-averse criterion could be used like worst-case performance. Using an average across models is like using a set of source environments to select hyperparameters—rather than a single source—and so could improve transfer to the real environment. These are all additions to this new Data2Online strategy. The central idea is to use this calibration model to evaluate hyperparameters, as if we had a simulator. We, therefore, investigate this idea first in its simplest form with expected returns, grid search and only one calibration model.

## 4.2 When Is This Approach Effective?

This section highlights three conceptual criteria for designing the calibration model and selecting agents for which Data2Online should be effective. This includes 1) stability under model iteration, 2) handling actions with low data coverage and 3) selecting agent algorithms that only have initialization hyperparameters, namely those that affect early learning but diminish in importance over time. Producing reasonable transitions under many steps of model iteration is key for the calibration model. The calibration model is iterated for many steps, because the agent interacts with the calibration model as if it were the true environment—for an entire learning trajectory. An agent often relies on random exploration to discover good rewards before learning can adapt the policy. Thus the first few episodes may be extremely long before a goal state is found. In Mountain Car, for example, a linear function approximation agent takes over 65,000 steps to wander to the goal, and a nonlinear function approximator may take even longer.² It is key, therefore, that the calibration model be *stable* and *self-correcting*. A stable model is one where, starting from any initial state in a region, the model remains in that region.
A self-correcting model is one that, even if it produces a few non-real states, comes back to the space of real states. Otherwise, model iteration can produce increasingly nonsensical states, as has been repeatedly shown in model-based RL (Talvitie, 2017; Jafferjee et al., 2020; Abbas et al., 2020; Chelu et al., 2020).

²It might seem like a solution is to have short episode cutoffs. However, some deployment scenarios may not allow short cutoffs, and we want our calibration model to reflect the real world as best we can. Further, in continuing problems there are no terminations at all. Finally, cutting off episodes typically makes the exploration problem much easier and thus may indeed impact the hyperparameters selected.

The model also needs to handle actions with no coverage, or low coverage. For unvisited or unknown states, the model simply does not include such states. The actions, however, can be queried from each state. If an action has not been taken in a state, nor a similar state, the model cannot produce a reasonable transition. Any choice will have limitations, because inherently we are filling in this data gap with an inductive bias. A common choice in offline RL is to assume pessimistic transitions: if an action is not observed, it is assumed to have a bad outcome. This ensures the learned, fixed policy avoids these unknown areas. The choice is even more nuanced in Data2Online. Just like offline RL, it can be sensible to avoid these unknown actions, to answer: in the space known to the agent, what hyperparameters allow the agent to learn quickly? But another plausible alternative is that we want to select hyperparameters to encourage the agent to explore unknown areas, since once deployed the agent can update its policy in these unknown areas. In other words, a principle of optimism could also be sensible. Selecting the right inductive bias will depend on the environment and the types of hyperparameters we are selecting.
This question will likely be one of the largest questions in Data2Online, similarly to how it remains an open question in offline RL. The third criterion is a condition on the agent, rather than the model. Practically, we can only test each hyperparameter setting for a limited number of steps in the calibration model. So, the calibration model is only simulating early learning. This suggests that this approach will be most effective if we tune initialization hyperparameters: those that provide an initial value for a constant but wash away over time. Examples include an initial learning rate which is then adapted; a policy initialization; and an initial architecture that is grown and pruned over time. These criteria are conceptual, based on reasoning through when we expect success or failure. We use these conceptual criteria to propose an appropriate approach to learn a calibration model in the next section. In addition to conceptual reasoning, theoretical understanding of the Data2Online problem setting is also critical. We provide a first step in that direction in the next section.

## 4.3 Theoretical Insights

This problem has aspects that both make it harder and potentially easier than a standard offline RL problem. One aspect that makes this problem harder is that we have to evaluate a learning policy offline, rather than a fixed learned one. A fixed policy can be assessed using policy evaluation, and there exists a variety of theoretical results on the quality of those estimates, many based on the foundational simulation lemma (Kearns & Singh, 2002). No such results exist for evaluating a policy that will be changing online. At the same time, intuitively, the problem could be easier than the offline RL problem, because the policy adapts online. Instead, we only have to select from a potentially small number of hyperparameters, rather than from a potentially large policy space.
For example, it may be relatively easy to identify the best stepsize out of a set of three stepsizes. Further, if a policy learning algorithm is chosen that is robust to its hyperparameter settings, then the problem may be even simpler. For example, it may be simple to select the initial stepsize for an adaptive stepsize algorithm, where the initial stepsize only influences the magnitude of updates for the first few steps. We first extend the foundational simulation lemma to the Data2Online setting, in Theorem 1. Then, in Theorem 2, we show how to use this result to bound how far the value of the hyperparameters chosen in the learned calibration model is from the best hyperparameters. Finally, we discuss how it might be possible to formalize this second intuition, for future theoretical investigation. We start by defining some needed terms. An online learner can be characterized by a history dependent policy (see Chapter 38 of (Lattimore & Szepesvári, 2020)). A history dependent policy is $\pi = (\pi_0, \pi_1, \pi_2, \ldots)$ where $\pi_t : \mathcal{H}_t \to \Delta(\mathcal{A})$ and $\mathcal{H}_t = (\mathcal{S} \times \mathcal{A} \times \mathbb{R})^t \times \mathcal{S}$ is the history at time step $t$. For simplicity, we assume the rewards are deterministic in $[0, r_{max}]$ and the MDP has one initial state $s_0$. The online learning agent interacts with the environment for $T$ steps in total, in either a continuing or fixed-horizon setting. The value function for this online learner $\pi$ is the sum of rewards from time $t$ to the end of learning at $T - 1$

$$V_{t}^{\pi}(h_{t})=\mathbb{E}\left[\sum_{t^{\prime}=t}^{T-1}r(S_{t^{\prime}},A_{t^{\prime}})\mid H_{t}=h_{t}\right]$$

where the expectation is under $A_t \sim \pi_t(\cdot \mid H_t)$, $S_{t+1} \sim P(\cdot \mid S_t, A_t)$. Note that $V^{\pi}_0(s_0)$ is the $T$ step objective that we use to select hyperparameters. For the fixed-horizon setting where episodes are of length *horizon*, $T = K \cdot horizon$ where $K$ is the number of episodes.
In this setting, the expectation is under $S_{t+1} \sim P(\cdot \mid S_t, A_t)$ if $t$ is not the last step of an episode and $S_{t+1} = s_0$ if $t$ is the last step of an episode. Dividing $V^{\pi}_0(s_0)$ by $K$ gives the average episodic return over $K$ episodes.

Theorem 1 (Simulation Lemma for Online Learners). *Assume the rewards* $r(s, a)$ *are deterministic in* $[0, r_{max}]$ *and the MDP has one initial state* $s_0$. *Suppose we have a learned model* $(\hat{P}, \hat{r})$ *such that*

$$\|\hat{P}(\cdot\mid s,a)-P(\cdot\mid s,a)\|_{1}\leq\varepsilon_{p}\quad\text{and}\quad|r(s,a)-\hat{r}(s,a)|\leq\varepsilon_{r}\quad\text{for all }s\in\mathcal{S},a\in\mathcal{A}$$

*and that* $\hat{r}(s, a) \in [0, r_{max}]$. *Let* $\hat{V}^{\pi}_t(\cdot)$ *denote the value function under the learned model. Then for any history dependent policy* $\pi$*, we have that*

$$|V_{0}^{\pi}(s_{0})-\hat{V}_{0}^{\pi}(s_{0})|\leq\varepsilon_{r}T+\frac{\varepsilon_{p}r_{max}T^{2}}{2}.$$

Proof. We follow the proof of the simulation lemma. Since the rewards are deterministic, the history does not need to contain rewards, that is, $\mathcal{H}_t = (\mathcal{S} \times \mathcal{A})^t \times \mathcal{S}$. For any $h_t = (s_0, a_0, \ldots, s_t) \in \mathcal{H}_t$ with $t < T - 1$,

$$V_{t}^{\pi}(h_{t})=\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})r(s_{t},a)+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\sum_{s^{\prime}\in\mathcal{S}}P(s_{t},a,s^{\prime})V_{t+1}^{\pi}((h_{t},a,s^{\prime}))$$

$$\hat{V}_{t}^{\pi}(h_{t})=\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\hat{r}(s_{t},a)+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\sum_{s^{\prime}\in\mathcal{S}}\hat{P}(s_{t},a,s^{\prime})\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))$$

and for the last step we have

$$V_{T-1}^{\pi}(h_{T-1})=\sum_{a\in\mathcal{A}}\pi_{T-1}(a\mid h_{T-1})r(s_{T-1},a)\quad\text{and}\quad\hat{V}_{T-1}^{\pi}(h_{T-1})=\sum_{a\in\mathcal{A}}\pi_{T-1}(a\mid h_{T-1})\hat{r}(s_{T-1},a).$$

We prove the simulation lemma from the last step.
For $t = T - 1$,

$$\left|V_{T-1}^{\pi}(h_{T-1})-\hat{V}_{T-1}^{\pi}(h_{T-1})\right|=\left|\sum_{a\in\mathcal{A}}\pi_{T-1}(a\mid h_{T-1})r(s_{T-1},a)-\sum_{a\in\mathcal{A}}\pi_{T-1}(a\mid h_{T-1})\hat{r}(s_{T-1},a)\right|\leq\varepsilon_{r}.$$

For all $t < T - 1$,

$$\begin{aligned}
|V_{t}^{\pi}(h_{t})-\hat{V}_{t}^{\pi}(h_{t})| &= \Big|\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})r(s_{t},a)+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\sum_{s^{\prime}\in\mathcal{S}}P(s_{t},a,s^{\prime})V_{t+1}^{\pi}((h_{t},a,s^{\prime}))\\
&\qquad-\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\hat{r}(s_{t},a)-\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\sum_{s^{\prime}\in\mathcal{S}}\hat{P}(s_{t},a,s^{\prime})\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))\Big|\\
&\leq \varepsilon_{r}+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\Big|\sum_{s^{\prime}\in\mathcal{S}}P(s_{t},a,s^{\prime})V_{t+1}^{\pi}((h_{t},a,s^{\prime}))-\sum_{s^{\prime}\in\mathcal{S}}\hat{P}(s_{t},a,s^{\prime})\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))\Big|\\
&= \varepsilon_{r}+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\Big|\sum_{s^{\prime}\in\mathcal{S}}\big(P(s_{t},a,s^{\prime})-\hat{P}(s_{t},a,s^{\prime})\big)V_{t+1}^{\pi}((h_{t},a,s^{\prime}))\\
&\qquad+\sum_{s^{\prime}\in\mathcal{S}}\hat{P}(s_{t},a,s^{\prime})\big(V_{t+1}^{\pi}((h_{t},a,s^{\prime}))-\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))\big)\Big|\\
&\leq \varepsilon_{r}+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\Big|\sum_{s^{\prime}\in\mathcal{S}}\big(P(s_{t},a,s^{\prime})-\hat{P}(s_{t},a,s^{\prime})\big)\underbrace{V_{t+1}^{\pi}((h_{t},a,s^{\prime}))}_{\leq(T-t-1)r_{max}}\Big|\\
&\qquad+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})\Big|\sum_{s^{\prime}\in\mathcal{S}}\hat{P}(s_{t},a,s^{\prime})\big(V_{t+1}^{\pi}((h_{t},a,s^{\prime}))-\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))\big)\Big|\\
&\leq \varepsilon_{r}+\sum_{a\in\mathcal{A}}\pi_{t}(a\mid h_{t})(T-t-1)r_{max}\sum_{s^{\prime}\in\mathcal{S}}\big|P(s_{t},a,s^{\prime})-\hat{P}(s_{t},a,s^{\prime})\big|+\max_{s^{\prime}\in\mathcal{S}}\big|V_{t+1}^{\pi}((h_{t},a,s^{\prime}))-\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))\big|\\
&\leq \varepsilon_{r}+\varepsilon_{p}r_{max}(T-t-1)+\max_{a,s^{\prime}}\big|V_{t+1}^{\pi}((h_{t},a,s^{\prime}))-\hat{V}_{t+1}^{\pi}((h_{t},a,s^{\prime}))\big|.
\end{aligned}$$

Therefore, $|V_{0}^{\pi}(s_{0})-\hat{V}_{0}^{\pi}(s_{0})|\leq\varepsilon_{r}+\varepsilon_{p}r_{max}(T-1)+\cdots+\varepsilon_{r}+\varepsilon_{p}r_{max}\cdot 1+\varepsilon_{r}\leq\varepsilon_{r}T+\frac{\varepsilon_{p}r_{max}T^{2}}{2}$.

Theorem 1 says that if we have a good model of the environment, we can evaluate the $T$ step objective for any online learner with bounded error. In particular, we can control this evaluation error by controlling the error of the learned model.
Note that $v_{max} \doteq T r_{max}$ is the maximum value, so the last term $\frac{\varepsilon_{p} r_{max} T^{2}}{2}$ can be interpreted as $\frac{\varepsilon_{p} v_{max} T}{2}$, meaning the bound scales with $T$: $\left(\varepsilon_{r} + \frac{\varepsilon_{p} v_{max}}{2}\right)T$. Back to our problem setting. Let $\Lambda$ be the set of hyperparameters and $\pi_{\lambda}$ be a learner's policy with $\lambda \in \Lambda$. In our algorithm, we choose the best hyperparameters based on $\tilde{V}^{\pi}_0(s_0)$, which is an estimator for $\hat{V}^{\pi}_0(s_0)$ obtained by running $n$ runs with $\hat{P}$. Let $\tilde{\lambda} = \arg\max_{\lambda\in\Lambda} \tilde{V}^{\pi_{\lambda}}_0(s_0)$ be the hyperparameters returned by our algorithm and $\lambda^{*} = \arg\max_{\lambda\in\Lambda} V^{\pi_{\lambda}}_0(s_0)$ be the true best hyperparameters in the set. The following theorem shows that our hyperparameters will not be too far from the best hyperparameters in terms of the $T$ step objective.

Theorem 2. *Under the same conditions as Theorem 1, with probability* $1 - \delta$*, we have*

$$V_{0}^{\pi_{\lambda^{*}}}(s_{0})-V_{0}^{\pi_{\tilde{\lambda}}}(s_{0})\leq 2\varepsilon_{r}T+\varepsilon_{p}r_{max}T^{2}+r_{max}T\sqrt{\frac{2\ln\left(4/\delta\right)}{n}}=\underbrace{(2\varepsilon_{r}+\varepsilon_{p}v_{max})T}_{approximation\ error}+\underbrace{v_{max}\sqrt{\frac{2\ln\left(4/\delta\right)}{n}}}_{estimation\ error}.$$

Proof. By Hoeffding's inequality, for a given $\pi$, we have with probability $1 - \delta/2$ that

$$|\hat{V}_{0}^{\pi}(s_{0})-\tilde{V}_{0}^{\pi}(s_{0})|\leq T r_{max}\sqrt{\frac{\ln{(4/\delta)}}{2n}}$$

because the return in each run, used to compute the sample average $\tilde{V}^{\pi}_0(s_0)$, is in $[0, T r_{max}]$. Using the union bound, we can say this inequality holds for both $\pi_{\lambda^{*}}$ and $\pi_{\tilde{\lambda}}$, with probability $1 - \delta$. The source of this difference is from using a limited number of runs to approximate $\hat{V}^{\pi}_0(s_0)$. As we increase the number of runs $n$, the difference between our estimator $\tilde{V}^{\pi}_0(s_0)$ and $\hat{V}^{\pi}_0(s_0)$ goes to zero. Now we can reason about the hyperparameters chosen using $\tilde{\lambda} = \arg\max_{\lambda\in\Lambda} \tilde{V}^{\pi_{\lambda}}_0(s_0)$.
$$\begin{aligned}
V_{0}^{\pi_{\lambda^{*}}}(s_{0})-V_{0}^{\pi_{\tilde{\lambda}}}(s_{0}) &= V_{0}^{\pi_{\lambda^{*}}}(s_{0})-\tilde{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})+\tilde{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})-V_{0}^{\pi_{\tilde{\lambda}}}(s_{0})\\
&\leq V_{0}^{\pi_{\lambda^{*}}}(s_{0})-\tilde{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})+\tilde{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})-V_{0}^{\pi_{\tilde{\lambda}}}(s_{0})\\
&= V_{0}^{\pi_{\lambda^{*}}}(s_{0})-\hat{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})+\hat{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})-\tilde{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})+\tilde{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})-\hat{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})+\hat{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})-V_{0}^{\pi_{\tilde{\lambda}}}(s_{0})\\
&\leq |V_{0}^{\pi_{\lambda^{*}}}(s_{0})-\hat{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})|+|\hat{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})-V_{0}^{\pi_{\tilde{\lambda}}}(s_{0})|+|\hat{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})-\tilde{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})|+|\tilde{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})-\hat{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})|\\
&\leq 2\max_{\lambda\in\Lambda}|V_{0}^{\pi_{\lambda}}(s_{0})-\hat{V}_{0}^{\pi_{\lambda}}(s_{0})|+2Tr_{max}\sqrt{\frac{\ln{(4/\delta)}}{2n}}\\
&\leq 2\varepsilon_{r}T+\varepsilon_{p}r_{max}T^{2}+r_{max}T\sqrt{\frac{2\ln{(4/\delta)}}{n}}.
\end{aligned}$$

The first inequality uses that $\tilde{V}_{0}^{\pi_{\tilde{\lambda}}}(s_{0})\geq\tilde{V}_{0}^{\pi_{\lambda^{*}}}(s_{0})$ by the definition of $\tilde{\lambda}$; the last inequality follows from Theorem 1.

![9_image_0.png](9_image_0.png)

Figure 2: The interaction between the calibration model and the agent in each episode.

This result is a sanity check that we can reason about error in identifying hyperparameters based on model error. However, it has several limitations. One limitation is that the result is for continuing problems and fixed-horizon episodic problems, but not variable length episodic problems. The analysis does not address variable length episodic problems, because they would impact both the histories on which policies are conditioned as well as the definition of the value function for this learning policy. The more important limitation, however, is that the bound depends on the length of interaction $T$, and worse, on the squared length of interaction $T^2$ if we assume $v_{max} \doteq T r_{max}$. Even if episodes are short, we expect the agent to learn for thousands of steps and so $T$ can be quite large. It may be difficult to obtain sufficiently low $\varepsilon$ (model error) to control for this accumulating error over many steps. We could potentially obtain a better result by considering smoothness in performance with respect to hyperparameters. Empirical studies suggest performance changes smoothly as a function of hyperparameters, when hyperparameter sensitivity plots are shown. We hypothesize there exists a subset of hyperparameters such that $V^{\pi_{\lambda}}_0(s_0)$ is smooth w.r.t.
the hyperparameters in the subset, and hyperparameters outside the subset have very low $V^{\pi_{\lambda}}_0(s_0)$. Therefore, the error bound from Theorem 2 just needs to be smaller than the performance gap between hyperparameters in the good subset and hyperparameters outside the good subset to guarantee finding a nearly optimal hyperparameter setting. This direction is an important next step.

## 5 Stable Calibration Models With KNNs

We develop a non-parametric k-nearest neighbor (KNN) calibration model that (a) ensures the agent always produces real states and (b) remains in the space of states observed in the data, and so is stable under many iterations. The idea is simple: the entire offline data log constitutes the model, with trajectories obtained by chaining together transitions. Figure 2 shows the interaction between the calibration model and the agent in each episode. The KNN model is easy to use, which is relevant for those trying to get things working in the real world. In addition, the KNN calibration model is extremely lightweight and allows fast simulation, which is essential for our application: training possibly hundreds of RL agents with different hyperparameter configurations from scratch. There are, however, several nuances in using this strategy to obtain calibration models. In particular, the method relies heavily on the distance metric used to find nearest neighbors. Further, the dataset is limited, and may have poor coverage for actions in certain states. We start by introducing the basic approach, and then discuss these two nuances in the following two subsections. We conclude by contrasting the approach to other ways to learn the calibration model, particularly with neural networks.

## 5.1 The KNN Calibration Model

The calibration model needs to produce a (stochastic) outcome next state and reward $r, s'$, given a state and action $s, a$.
We can produce novel trajectories from a dataset by noting that if a state-action pair $(s_t, a_t)$ is similar to $(s_i, a_i)$ for a stored tuple $(s_i, a_i, r_i, s'_i)$, then it is plausible that $(r_i, s'_i)$ could also have been observed from $(s_t, a_t)$. To allow for stochastic transitions, the $k$ most similar pairs to $(s_t, a_t)$ can be found, and the next state and reward selected amongst these by sampling proportionally to similarity.

Algorithm 3: Learn KNN Calibration Model
Input: dataset $\mathcal{D}$ with tuples of the form $(S, A, S', R)$; number of nearest neighbors $k$ (default $k = 3$)
Constructs: representation $\psi$ for distances, KD-trees $Trees$ for fast nearest-neighbor search, default return $R_{default}$
1. $\psi \leftarrow$ LaplaceRepTraining($\mathcal{D}$) (Algorithm 11)
2. $Trees \leftarrow$ KDTreeConstruction($\psi$, $\mathcal{D}$) (Algorithm 5)
3. Extract starting states $B \subseteq \mathcal{D}$
4. Set $R_{default}$ to the minimum return in the dataset (pessimistic default return)

Algorithm 4: Sample KNN Calibration Model
Input: state $s_t$, action $a$; if no action is given, the procedure returns a start state
1. If no action is given, return a sample $s \in B$ drawn uniformly at random.
2. Find the $k$ nearest neighbors to $(s_t, a)$, to get potential next states and rewards: $(s'_i, r_i, d_i)_{i=1}^{k} \leftarrow$ KDTreeSearch($\psi(s_t)$, $a$, $Trees$, $k$) (Algorithm 7)
3. If the closest neighbor is far ($\min_i d_i >$ threshold), return a default return and terminate: $r \leftarrow R_{default}$, $s' \leftarrow$ terminal; return $(r, s')$.
4. Sample $i \in [1, k]$ according to probabilities $p_i = 1 - \frac{d_i}{\sum_{j\in[1,k]} d_j}$ and return $(r_i, s'_i)$.

More formally, given the current state-action pair $(s_t, a_t)$, the model searches through all tuples $(s, a, s', r)$ and selects the $k$ nearest neighbors, according to similarity between $(s_t, a_t)$ and $(s, a)$. (We discuss how to compute similarity in Section 5.2.) Let $\{(s_i, a_i, r_i, s'_i)\}_{i=1}^{k}$ correspond to these tuples, and $d_i$ to the distance between $(s_t, a_t)$ and $(s_i, a_i)$. Then these $(r_i, s'_i)$ are all possible outcome rewards and next states, where the likelihood corresponds to similarity to $(s_i, a_i)$.
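The core of the sampling step in Algorithm 4 can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: it uses a brute-force neighbor search in place of the k-d tree, and the function name `sample_knn_transition` and the `threshold` and `r_default` values are hypothetical.

```python
import numpy as np

def sample_knn_transition(query, keys, rewards, next_states, k=3,
                          threshold=1.0, r_default=-500.0, rng=None):
    """Sample (reward, next_state) from the k nearest stored transitions.

    keys[i] is the (Laplace-encoded) state-action key of stored tuple i.
    Brute-force search stands in for the k-d tree of Algorithm 4."""
    rng = rng or np.random.default_rng(0)
    dists = np.linalg.norm(np.asarray(keys) - np.asarray(query), axis=1)
    idx = np.argsort(dists)[:k]
    d = dists[idx]
    # Insufficient coverage: pessimistic default return, terminal next state.
    if d.min() > threshold:
        return r_default, None
    # Sample proportionally to similarity: p_i = 1 - d_i / sum_j d_j,
    # renormalized (assumes the k distances are not all zero).
    w = 1.0 - d / d.sum()
    i = rng.choice(idx, p=w / w.sum())
    return rewards[i], next_states[i]
```

Chaining such samples, starting from a uniformly drawn start state, produces a full episode in the calibration model.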
If $(s_t, a_t)$ is very similar to $(s_i, a_i)$, then $(r_i, s'_i)$ is a likely outcome. Otherwise, the more dissimilar the pair, the less likely it is that $(r_i, s'_i)$ is a plausible outcome. The tuple $i$ is sampled proportionally to $p_i = 1 - \frac{d_i}{\sum_{j\in[1,k]} d_j}$, where a smaller distance indicates higher similarity. This procedure is summarized in Figure 2. At the start of each episode, a start state $s_0$ is sampled randomly from the set of start states in the dataset. The agent takes its first action $a_0$, to get the first pair $(s_0, a_0)$, and the $k$ nearest neighbors are found. This process continues until the agent reaches a terminal state, or the episode is cut off and the agent is teleported back to a start state. An overview of learning the KNN calibration model is given in Algorithm 3, and of sampling the model in Algorithm 4.

There are several details worth mentioning in the algorithms. First, a KNN model relies heavily on an appropriate distance. For example, for input states that correspond to an $(x, y)$ position, Euclidean distance can be a poor measure of similarity. If there is a wall in the environment, two states might be similar in Euclidean distance, but far in terms of reachability, with notably different dynamics. We ameliorate this by learning a new representation—called a Laplace representation—$\psi(s)$ and using Euclidean distance in this new space that better reflects similarity in transition dynamics, as described in Section 5.2.

Second, there may be no similar pairs in the data for a given $(s, a)$. The state is one that is observed in the dataset, but the action may not be, since it is selected by the agent running in the calibration model. When the next outcome state $s_{t+1}$ is chosen from $s_t$, the agent selects action $\tilde a_{t+1}$. The dataset might contain multiple transitions from states like $s_{t+1}$—including of course the transition that includes $s_{t+1}$—but these may be for only a subset of the actions.
If none of these transitions uses $\tilde a_{t+1}$, then the dataset has insufficient coverage to infer what might occur when taking that action in the environment. When this occurs in Algorithm 4—when the closest point (minimum distance) is too far away (above a threshold)—we set the return to a default return and terminate the episode. We discuss an appropriate choice for this default return in Section 5.3.

Finally, we want to ensure that the model is efficient to query, even if we have a large dataset. For the discrete action setting, it is possible to get an $O(1)$ look-up by caching the nearest neighbors upfront. For $n$ datapoints, for each action we construct a table with $n$ rows and $k$ columns for the nearest neighbors, where each neighbor is stored as its index from 1 to $n$. Each transition consists of jumping between rows, using these indices. The detailed pseudocode is provided in Appendix A.1. More generally, for continuous actions, we can use a k-d tree (Bentley, 1975) to search for the $k$ nearest neighbors. The k-d tree takes the transformed state-action pair $(\psi(s), a)$ as the key for the search. For a dataset of size $n$, it costs $O(n \log n)$ to construct the k-d tree and $O(\log n)$ to query for a nearest neighbor. This low computational complexity is key to allowing us to use all of the data to create our calibration model.

## 5.2 Improving The Distance Metric For The Knn

It is not hard to imagine examples where Euclidean distance on states or observations does not appropriately reflect similarity of states. For example, in a maze environment, if inputs correspond to an $(x, y)$ position, two nearby points in Euclidean distance may actually be on opposite sides of a wall, and thus far apart in terms of transition dynamics. Similarly, Euclidean distance does not apply to images, since pixel-wise differences can make every image look very different from all the others in the dataset.
Instead, we exploit a standard approach in metric learning: we first map the inputs to a new space where Euclidean ($\ell_2$) distance is meaningful. In particular, we would like a new representation $\psi(s)$ where states $s_i$ and $s_j$ that have similar outcomes in terms of states and rewards are mapped to similar vectors $\psi(s_i) \approx \psi(s_j)$, and ones with dissimilar outcomes are mapped to different representations. Such representations that reflect similarity in terms of transition dynamics have been explored under what are called *Laplace representations* (Wu et al., 2019). The approach relies on having a stored trajectory that maintains the order of interaction. The objective includes two components: an *attractive term* that encourages two states that are nearby in the trajectory to have similar representations, and a *repulsive term* that encourages randomly sampled states to have different representations. For a neural network $\psi_\theta$ with parameters $\theta$, the last layer of the NN $\psi_\theta(s)$ has loss

$$\sum_{s_{t}\sim\mathcal{D}}\|\psi_{\theta}(s_{t})-\psi_{\theta}(s_{t+1})\|_{2}^{2}+\sum_{s_{i},s_{j}\sim\mathcal{D}}\left(\left(\psi_{\theta}(s_{i})^{\top}\psi_{\theta}(s_{j})\right)^{2}-\|\psi_{\theta}(s_{i})\|_{2}^{2}-\|\psi_{\theta}(s_{j})\|_{2}^{2}\right)$$

The inclusion of the representation norms $-\|\psi_{\theta}(s_{i})\|_{2}^{2}$ ensures that the representation is not simply decreased to zero to satisfy the first attractive term. Minimizing this objective encourages $\|\psi_{\theta}(s_{t})-\psi_{\theta}(s_{t+1})\|_{2}^{2}$ to be small for states right beside each other in the trajectory—temporally close. The second term $(\psi_{\theta}(s_{i})^{\top}\psi_{\theta}(s_{j}))^{2}$ is the repulsive term that encourages random pairs to have orthogonal representations. It is possible for $s_t, s_{t+1}$ to be randomly selected for the second term, but this is not that likely among the possible $n^2$ pairs; the first term dominates, ensuring these nearby points have similar representations. More details on learning the Laplace representation are given in Appendix A.1.1.
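To make the objective concrete, here is a small numpy sketch of the empirical loss for one stored trajectory. This is illustrative only: in the paper $\psi_\theta$ is a neural network trained by gradient descent, and the function name `laplace_loss` and the way random pairs are drawn are our assumptions.

```python
import numpy as np

def laplace_loss(psi, n_pairs=None, rng=None):
    """Empirical Laplace-representation objective for one trajectory.

    psi: (T, d) array whose rows are the representations psi_theta(s_t).
    The attractive term pulls temporally adjacent states together; the
    repulsive term pushes randomly sampled pairs toward orthogonal,
    non-degenerate representations."""
    rng = rng or np.random.default_rng(0)
    T = len(psi)
    n_pairs = n_pairs or T
    # Attractive term: sum_t ||psi(s_t) - psi(s_{t+1})||^2
    attract = np.sum((psi[1:] - psi[:-1]) ** 2)
    # Repulsive term over random pairs (s_i, s_j):
    # (psi(s_i)^T psi(s_j))^2 - ||psi(s_i)||^2 - ||psi(s_j)||^2
    i = rng.integers(0, T, size=n_pairs)
    j = rng.integers(0, T, size=n_pairs)
    dots = np.einsum('nd,nd->n', psi[i], psi[j])
    repulse = np.sum(dots ** 2) - np.sum(psi[i] ** 2) - np.sum(psi[j] ** 2)
    return attract + repulse
```

In training, this loss would be evaluated on minibatches and minimized over $\theta$ with a standard optimizer.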
The distance for a state-action pair is defined differently for discrete and continuous actions. For discrete actions, two actions are considered similar only when they are exactly the same. The resulting distance is

$$d((s_{i},a_{i}),(s_{j},a_{j}))=\begin{cases}d_{s}(s_{i},s_{j})&\text{if}\ a_{i}=a_{j}\\ \infty&\text{else}\end{cases}\quad\text{for}\ d_{s}(s_{i},s_{j})\doteq\|\psi(s_{i})-\psi(s_{j})\|_{2}^{2}\,.$$

In practice, we simply keep separate data structures for each action to find nearest neighbors. For continuous action problems, the Laplace representation can actually be learned on $(s, a)$ directly, to obtain $\psi(s, a)$.

## 5.3 Insufficient Data Coverage

We do not require the dataset to have perfect state and action space coverage. We only query the KNN calibration model from states $s$ that are in the dataset, by construction. But, for a given action $a$, there may be no state-action pair that is similar to $(s, a)$, and so there is insufficient information about the outcome for that pair. What then should the model return? A natural choice is to truncate the episode, provide a default return—as if the agent had managed to visit future states in the environment—and transition back to the start state. This synthetic interaction in the calibration model is inaccurate, so we encourage the agent to learn within the parts of the calibration model that meaningfully reflect the true environment and avoid these unknown areas. This suggests using a pessimistic default return. The default return can be set to the minimal return experienced in the dataset. When the agent reaches these unknown state-action pairs, it receives a low return, and on the next episode it is less likely to come back to this unknown state-action pair. Pessimism has also been used in offline RL, but for a subtly different purpose than here.
The goal of pessimism in offline RL is to avoid unknown areas, as it is unlikely for the fixed policy to be good in a region that it has never observed, and further, that unknown region may be dangerous. It is much safer to stay within the data distribution and simply improve performance within that region. For us, the policy can adapt online if it reaches unknown areas, so it is not necessary to ensure the agent avoids them in the environment. But we avoid encouraging the agent to visit these unknown areas in the calibration model because they are not reflective of the true environment, potentially skewing hyperparameter selection. For example, if the agent was instead encouraged to visit these state-action pairs (using optimism), then it might find an unknown state-action pair and spend most of its time visiting it. The hyperparameters would be tuned to work well for this interaction, with short episodes and (default) Monte Carlo returns from this state-action pair that do not require any bootstrapping. Our primary purpose with this choice, therefore, is to make interaction in the calibration model more similar to interaction in the environment, under the unavoidable limitations of insufficient data coverage.

## 5.4 Alternative Calibration Models

We can contrast this KNN calibration model to two natural alternatives: a kernel density estimator (KDE) model and a neural network model. A KDE model is a non-parametric estimator that has even been investigated for model-based RL (Pan et al., 2018). Like our KNN calibration model, it should also stably remain within the region defined by the dataset. However, unlike the KNN calibration model, a KDE calibration model could produce non-existent states. It effectively interpolates between observed datapoints, and so results in significant generalization. If we consider again the example with $(x, y)$ positions in a gridworld with walls, then the KDE calibration model could produce transitions within the wall.
Another alternative is to use a neural network (NN) to learn a calibration model. The dataset can be used to learn the expected next state and reward, for a given state and action, using regression on inputs $(s, a)$ and targets $(r, s')$. Or, to obtain a distribution over next states and rewards, a conditional distribution can be learned using mixture density networks or stochastic networks. Simulators have been learned in RL on real data, particularly with recurrent NNs (RNNs) to handle partial observability, such as for gas turbine control (Schaefer et al., 2007; Schäfer, 2008), drawing on the larger literature using RNNs for system identification (Barabanov & Prokhorov, 2002; Yu, 2004; Mohajerin, 2012). At the same time, learning transition dynamics with NNs in RL can be challenging and can cause issues when the models are used as simulators. Such NN models can produce non-existent states, just like the KDE model. With an extremely long rollout, the prediction error accumulates, and the model may generate states that are out of distribution or become unstable. Several works in model-based RL have illustrated that iterating such models can produce less and less plausible outcome states (Talvitie, 2017; Jafferjee et al., 2020; Abbas et al., 2020; Chelu et al., 2020). This is particularly problematic in the Data2Online setting, where the number of steps of iteration is much larger than what is typically used in model-based RL. The calibration model must mimic the real deployment environment. Initially, the agent relies on random exploration to discover good rewards before learning can adapt the policy, and thus the first few episodes simulated by the model may be extremely long, even thousands of steps. Avoiding these issues with iterating NN models is an active area of research.
One direction for model-based RL has been to train models to be correct over multiple steps of iteration (Talvitie, 2014; Venkatraman et al., 2015; Talvitie, 2017; Williams et al., 2017; Ke et al., 2019). Other work has looked at constraining the architecture to ensure stability (Manek & Kolter, 2019; Lawrence et al., 2020; Takeishi & Kawahara, 2021; Drgona et al., 2022). Such advances are likely to accelerate with the growth in sequence modeling and generation. In this work, we leverage the power of neural networks to improve the distance metric within our KNN model. Arguably, this distance metric does much of the heavy lifting, with mostly simplistic rules layered on top to transition between samples. The combination allows us to leverage the ability of NNs to scale to high-dimensional inputs, and the simplicity and interpretability of KNNs. It provides an easy-to-use alternative to learning NNs or RNNs from scratch, which can often require significant expertise. And, by design, it is guaranteed to remain stable and only produce states that have been observed. In some cases an end-to-end RNN model may be more effective; nonetheless, this KNN approach with a NN metric expands the types of models users can consider for their application.

## 6 Experiments

We conducted a battery of experiments to provide a rounded assessment of when an approach can or cannot be expected to reliably select good hyperparameters for online learning. We investigate varying the data collection policy and the size of the data logs to mimic a variety of deployment scenarios, ranging from a near-optimal operator to random data. We explore selecting hyperparameters of different types for several different agents, and investigate a non-stationary setting where the environment changes from data collection to deployment. We begin with the simplest question: how does our approach compare to simple baselines and with different choices of calibration model type?
To extensively test the reliability of our approach, we deploy the algorithm on variants of Acrobot, Puddle World, and Cartpole (Sutton & Barto, 2018). All three environments are episodic and have a low-dimensional continuous state and discrete actions. Small environments allow extensive experimentation, which is critical for hyperparameter analysis and achieving statistically significant results. In addition, recent studies have shown that conclusions from small classic control environments match those generated in larger-scale environments like Atari (Ceron & Castro, 2021). Experiments were conducted on a cluster and a powerful workstation using ∼8327 CPU hours and no GPUs. Full lists of all the hyperparameters can be found in the appendix.

## 6.1 Experiment 1: Comparing Calibration Models

In this experiment we investigate the benefits of our approach with different choices of model in two classic control environments. We compare our KNN calibration model with the learned Laplace similarity metric to an NN model trained to predict the next state and reward given the input state and action observed in the calibration data. In addition, we also test an NN calibration model that takes the *Laplacian encoding* (see Section 5) of the current state as input and predicts the next state and reward, to provide the network with a better transition-aware input representation. We used two continuous-state, discrete-action, episodic deployment environments, Acrobot and Puddle World, as described in the appendix and in introductory texts (Sutton & Barto, 2018). In this first experiment we select the hyperparameters for a linear softmax-policy Expected Sarsa agent (from here on, linear Sarsa) from data generated by a simple policy with good coverage. The agent uses tile coding to map the continuous state variables to binary feature vectors (see Sutton & Barto, 2018, for a detailed discussion of tile coding).
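For readers unfamiliar with tile coding, a minimal sketch of the idea: each tiling is a uniform grid shifted by a fraction of a tile width, and the state activates exactly one tile per tiling. The function name `tile_code` and the index-flattening scheme are ours; practical implementations (as discussed in Sutton & Barto, 2018) typically use hashing and asymmetric offsets.

```python
import numpy as np

def tile_code(s, low, high, n_tilings=8, tiles_per_dim=4):
    """Map a continuous state to the indices of its active binary features.

    Returns one active feature index per tiling; the binary feature vector
    has a 1 at each returned index and 0 elsewhere."""
    s = np.asarray(s, dtype=float)
    low, high = np.asarray(low, float), np.asarray(high, float)
    width = (high - low) / tiles_per_dim
    active = []
    for t in range(n_tilings):
        offset = (t / n_tilings) * width            # shift each tiling slightly
        coords = np.floor((s - low + offset) / width).astype(int)
        coords = np.clip(coords, 0, tiles_per_dim)  # one extra tile for the shift
        # Flatten (tiling, grid coordinates) into a single feature index.
        idx = t
        for c in coords:
            idx = idx * (tiles_per_dim + 1) + int(c)
        active.append(idx)
    return active
```

Because nearby states share many active tiles, a linear function over these binary features generalizes locally, which is what makes linear Sarsa effective in these environments.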
This on-policy Sarsa agent learns quickly but is sensitive to several important hyperparameters. We investigate several dimensions of hyperparameters, including the step-size and momentum parameters of the Adam optimizer, the temperature parameter of the policy, and the value function weight initialization. We choose these hyperparameters because their impact on performance is somewhat transient and can be overcome by continued learning; this reflects our desire for the agent to continually learn and adapt in deployment. We used a near-optimal policy for each environment to collect data for building the calibration models.

Figure 3: **Hyperparameter transfer with different calibration models.** Each subplot shows the performance of two calibration models compared against two baselines: FQI and random (described in text). The dotted horizontal line indicates the average performance of the best hyperparameter setting in the sweep in the deployment environment. Each box shows the distribution summarizing the true performance in deployment of the best hyperparameters selected in each run of the experiment. We plot the return per episode, thus **higher is better**. The LHS of each subplot uses the LHS y-axis and the RHS (separated by the dotted vertical orange line) uses the RHS red y-axis. In each subplot the bold line represents the median, the boxes represent the 25th and 75th percentiles, and outliers are represented as circles. Low variance indicates that the system reliably chooses the same hyperparameters each run. The performance of the random baseline characterizes the maximum possible variation. Recall that the performance of each hyperparameter combination is precomputed offline and is thus not a source of variation in our setup.
The near-optimal data collection policy for Acrobot can solve the task in 100 steps, and the near-optimal data collection policy in Puddle World achieves an average return of -25. In both cases the policy provides the system with many examples of successful trajectories to the goal states in the 5000-transition data log. Our evaluation pipeline involves several steps. First, we evaluate the *true performance* (steps per episode for Acrobot and return per episode in Puddle World) of each hyperparameter combination in the deployment environment: running for 15,000 steps in Acrobot and 30,000 steps in Puddle World, averaging over 30 runs. We then use the data collection policy to generate the calibration data log and learn each model. We record the *true performance* of the selected hyperparameters to summarize the performance. This whole process (running the data collection policy to generate a data log, learning the calibration model, and evaluating the hyperparameters) is repeated 30 times, giving 30 datasets with 30 corresponding hyperparameter selections. The statistic of interest is the median and distribution of the *true performance* for the hyperparameters selected across runs. In the ideal case, if there is one truly best set of hyperparameters, the system will choose those every time and the variance in *true performance* would be zero. We also included two baselines. The first is randomly selecting hyperparameters, called Random, to get a sense of the spread in performance; all methods should outperform Random. We also include an offline RL algorithm, called Fitted-Q Iteration (FQI) (Ernst et al., 2005), that learns a policy from the calibration data and then deploys the learned policy, fixed, in the deployment environment. For the FQI baseline we simply plot the distribution of performance of each of the 30 extracted policies on the deployment environment. For each policy, we average the performance over 30 random seeds.
We tested FQI with a tile coded representation and a NN representation; the tile coded version performed best and we report that result. Figure 3 summarizes the results. In both environments the KNN calibration model performed well, selecting the same hyperparameters as would a sweep directly in the deployment environment. The NN calibration models perform poorly overall. The NN calibration model using raw inputs (no Laplacian encoding) was not as effective, so we only include results for the NN with the Laplacian encoding in Figure 3 and relegate the other to Appendix D.1. Their performance can be unstable, choosing hyperparameters with good performance in some runs, but often choosing poor hyperparameters. FQI generally performs worse than even Random. Note that we spent quite a bit of time improving FQI, as well as optimizing over several step-size and regularization hyperparameters. This suggests the calibration data log is too limited to extract a good policy and deploy it without additional learning, but the data appears useful for selecting hyperparameters with the KNN calibration model.

Figure 4: **The role of data logs.** Plots (a) and (b) show the median deployment performance of hyperparameters selected from calibration models constructed from different-sized data logs (with 25% and 75% quartiles). Plots (c) and (d) show the median deployment performance under different policies used to collect data logs for constructing the calibration model. In both subplots, **higher is better**. In these bar plots the median is shown by the top of the colored bar, and the quartiles are shown by the black whiskers. Overall, the utility of the calibration model appears largely insensitive to the data log size and policy in these environments; hyperparameters that perform well in deployment can be selected.

We also used our approach to tune both step-size parameters of a linear Actor-critic agent with tile coding.
The KNN calibration model was able to select top-performing hyperparameters for Actor-critic in both Acrobot and Puddle World—though the agent performed worse than linear Sarsa (results in Appendix D.2).

## 6.2 Experiment 2: Varying Data Collection Policies

The objective of this experiment was to evaluate the robustness of our approach to changing both the amount of offline data available and the quality of the policy used to collect the data. We experimented with three different policies corresponding to near-optimal, *medium*, and *naive* performance to collect 5000 transitions for training our KNN Laplacian calibration model. The near-optimal policy was identical to the one used in the previous experiment. The medium policy was designed to achieve roughly half the visits to goal states after 5000 steps (approximately 90 for Puddle World & 25 for Acrobot) compared to the near-optimal policy. The naive policy was designed such that it achieved significantly fewer visits (approximately 35 for Puddle World & 12 for Acrobot). We also tried different data log sizes of 500, 1000, and 5000 samples using the medium policy, all shown in Figure 4. The results in Figure 4 show that our approach is largely insensitive to data log size and policy in these classic environments. Even 500 transitions contain enough coverage of the state space and successful terminations to produce a useful calibration model. This is in stark contrast to the FQI results in Experiment 1, where a policy trained offline from the same size data log failed to solve either task. Exploration in both these environments is not challenging; therefore, the success of the calibration model is not surprising. This positive outcome, however, reflects that it may be simpler to pick hyperparameters *in some environments*. In Experiment 4, we investigate a failure case in Cartpole.

## 6.3 Experiment 3: When The Environment Changes

Learning online is critical when we expect the environment to change.
This can happen due to wear and tear on physical hardware or un-modelled seasonal changes, or the environment may appear non-stationary to the agent because the agent's state representation does not model all aspects of the true state of the MDP. In this latter case it is often best to track rather than converge; to never stop learning (see Sutton et al., 2007). In our problem setting, the deployment environment could change significantly between (a) calibration data collection and (b) the deployment phase. Intuitively, we would expect batch approaches that simply deploy a fixed policy learned from data to do poorly in such settings. The following experiment simulates such a scenario with the Acrobot environment. The experiment involves two variants of the environment. As before, we collected 5000 transitions using the near-optimal policy in Acrobot and then applied our approach to select good hyperparameters for the linear Sarsa agent. Unlike before, we evaluate the hyperparameters selected on a second, *changed* Acrobot environment. In the changed Acrobot environment we doubled the length and mass of the first link. Our two-phase setup changes the dynamics of Acrobot but does not prevent learning reasonably good policies, as the results below show. This whole process was repeated 30 times (generating 30 datasets with a corresponding 30 calibration models) to aggregate the results presented in Figure 5. We included three baselines to help contextualize the results: (1) transferring the policy from the first environment, (2) transferring the policy learned in the calibration model, and (3) FQI. The first baseline, called *Sarsa (True)*, simply transfers the policy learned in the first Acrobot environment to the changed Acrobot environment (no calibration model was used, hence the label *true*).
The second baseline, called *Sarsa (Calibration)*, simply uses the best-performing policy learned by Sarsa in our calibration model, where the calibration model is created with data from the first Acrobot environment. Finally, we also included a FQI baseline. We trained a policy using FQI with tile coding on the data generated from the first environment (the same data used to build the calibration model). Then we evaluated the policy learned by FQI on the changed Acrobot environment. These baselines are meant to illustrate how performance might be affected if the environment dynamics changed but a prelearned policy was applied without taking the changes into account, perhaps because no one noticed the environment had changed. In all three baselines the policy evaluated in the second environment is fixed (no learning in deployment).

Figure 5: **Selecting hyperparameters in the face of nonstationarity.** The plot above summarizes the performance of our approach compared with three fixed-policy transfer approaches (described in text). Selecting hyperparameters for deployment works well even when the environment changes between calibration data collection and deployment. Deploying fixed policies, on the other hand, performs poorly by comparison.

The results in Figure 5 highlight that transferring fixed policies can be problematic when the environment changes. Our calibration-based approach performs best and appears robust under the abrupt non-stationarity tested here. Clearly, the difference between the two environments is significant enough that transferring a policy directly learned on the first environment (the Sarsa (True) baseline) performs worse than using our approach to select hyperparameters in the calibration model and then learning a new policy from scratch.
Interestingly, learning and transferring a policy from the calibration model was worse than training on the first environment or training from the calibration data (as in FQI). It is not surprising that transferring hyperparameters and learning in deployment is more robust than transferring fixed policies in these non-stationary settings.

## 6.4 Experiment 4: A Failure Case

Our approach is not robust to all environments and data collection schemes. In this section we investigate when it can fail. One obvious way our approach can fail is if the agent's performance in the calibration model is always the same: no matter what hyperparameter we try, the system thinks they all perform similarly. To illustrate this phenomenon we use the Cartpole environment. In Cartpole, the agent must balance a pole in an unstable equilibrium as long as it can. If the cart reaches the end of the track or the pole reaches a critical angle, failure is inevitable. Near-optimal policies can balance the pole for hundreds of steps, rarely experiencing failures, and thus visit only a small fraction of the state-action space. A data log collected from the near-optimal policy would likely produce a calibration model where failures are impossible and all hyperparameters appear excellent. We test this hypothesis by looking at three data collection policies.

Figure 6: **Success and failure in Cartpole.** This plot shows the performance of three different calibration models constructed from random, near-optimal and medium policies; the return per run is plotted, so higher (closer to zero) is better. **Left**: the performance of the hyperparameters in deployment as picked by different calibration models. **Right**: each model's evaluation of all hyperparameters across all 30 runs. Ideally the distribution of performance would match that of the hyperparameter performance in the deployment environment—black dots far right.
We used a random policy, a near-optimal policy with random initial pole angles, and a medium policy that was half as good as the near-optimal policy (twice as many failures). We expect the random policy to provide useful data for the calibration model, whereas the near-optimal policy will cause failure for the reason above. The medium policy helps us understand sensitivity to this issue: we might expect the issue to be resolved with a less optimal policy. Figure 6 indeed shows that the dynamics of Cartpole combined with particular data collection policies can render the calibration model ineffective. We see in the left-hand plot that the hyperparameter chosen with the calibration model using random data performs somewhat reasonably, but fails for both the medium and near-optimal policies. Even with random starting states, the calibration model built from the near-optimal policy fails: the calibration model never simulated dropping the pole. The random policy produced the best calibration model. Unsurprisingly, the random policy drops the pole every few steps, and thus the log contained many failures and higher state coverage. Nevertheless, the performance was still poor because there were no examples of balancing the pole for many consecutive steps: the model constructed from random data was still a poor model of the true deployment environment. We can see this further by looking at the performance estimates under the three calibration models. The right-hand plot in Figure 6 shows the performance of all the hyperparameters according to the calibration model (before deployment). The blue dots show that most hyperparameters appear good in calibration when the model is constructed with data from a near-optimal policy. At the other extreme, the grey dots show a large spread in performance of hyperparameters when the model is constructed with data from the random policy.
Note that the grey dots extend much lower on the y-axis than the blue and orange dots corresponding to the other two policies. This indicates that calibration with the medium and near-optimal policy models incorrectly inflates the performance of many hyperparameter combinations, whereas calibration with the random-policy model potentially undervalues some hyperparameter combinations. One could certainly argue that many applications might not exhibit this behavior—especially since it is largely caused by a task with two modes of operation (failing or balancing). Extracting a policy initialization from the calibration data (perhaps via behavior cloning) and then using this initial policy in both hyperparameter selection and deployment could avoid these problems in Cartpole, but we leave these experiments to future work. Regardless, this experiment provides an important reminder that we will not anticipate all situations in the deployment of RL in the real world; there is no general black-box strategy for deployment and failures will happen.

## 7 Moving Beyond Grid Search

The calibration model is an offline artifact that we can use as we like without impacting the deployment environment. We can use the model in smarter ways to discover and evaluate good hyperparameters for deployment. In fact, we can leverage the large literature on *algorithm configuration*, which explicitly deals with efficient methods to search over hyperparameters. In this section, we explain how to incorporate these approaches and test two strategies as replacements for grid search.

## 7.1 Improving The Hyperparameter Search

A variety of hyperparameter search approaches were introduced under sequential model-based optimization (SMBO) (Hutter et al., 2011), but methods built on Bayesian optimization (BO) (Snoek et al., 2012) have become more popular.
Complementary to these approaches are those that direct computation to promising hyperparameters and stop performance evaluations of poor hyperparameters early, as in Hyperband (Li et al., 2018), or that design the algorithm to do both (Klein et al., 2017; Falkner et al., 2018). All these approaches attempt to find the maximum of the performance function, assuming that function is expensive to query. BO algorithms approximate the performance function f(λ), and use this approximation to decide what hyperparameter setting λ to test next. The general approach is to (1) maintain a posterior distribution over the performance function f, (2) find a candidate set of optimal hyperparameters λc according to a criterion such as expected improvement under the current posterior over f, (3) evaluate λc, obtaining y = f(λc), and (4) update the posterior with the sample (λc, y). Once the algorithm terminates—typically by reaching a time limit—the best λc out of all the candidates tested is returned, according to the maximal y. The primary purpose of learning the posterior over f is to direct which λc should be tested, though some algorithms do solve an optimization at the very end of this procedure to find λ with the maximal posterior mean (see Frazier (2018) for a nice overview). Due to the importance of hyperparameter optimization for machine learning—in a growing field called AutoML—the development of BO methods has been focused on large numbers of hyperparameters, for training large models on large datasets. For this highly expensive setting, it is worth carefully crafting advanced algorithms that minimize the need to train and evaluate large models. These complex methods can then be released within packages, to facilitate their use, as they may be difficult to implement from scratch.

**BO in our experiments**: We use an open-source package (Nogueira, 2014–), which uses Gaussian processes for optimizing the hyperparameter setting.
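The four-step loop above is straightforward to sketch. The following minimal illustration (our own, not the package's implementation) uses a GP surrogate with an RBF kernel and a UCB acquisition evaluated over a pool of random candidates; all names and defaults are illustrative:

```python
import numpy as np

def rbf(A, B, length=0.3):
    # Squared-exponential kernel between two sets of points.
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1) / length ** 2)

def gp_posterior(X, y, Xc, noise=1e-4):
    # Posterior mean and variance of the GP over f at candidate points Xc.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xc)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)  # RBF prior variance is 1
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, bounds, n_init=5, n_iter=20, beta=2.576, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    # (0) initialize with random queries
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        Xc = rng.uniform(lo, hi, size=(256, len(lo)))        # random candidate pool
        mu, var = gp_posterior(X, y, Xc)                     # (1) posterior over f
        x_next = Xc[np.argmax(mu + beta * np.sqrt(var))]     # (2) UCB acquisition
        y_next = f(x_next)                                   # (3) evaluate candidate
        X, y = np.vstack([X, x_next]), np.append(y, y_next)  # (4) update posterior data
    return X[np.argmax(y)]  # best candidate actually evaluated
```

Maximizing a toy objective such as f(λ) = −(λ − 0.3)² over [0, 1] concentrates the evaluated candidates near 0.3; in practice f(λ) would be the expensive, stochastic return of the agent trained in the calibration model with hyperparameters λ.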
We chose to use upper confidence bounds, with a confidence level of 2.576—the default in the package—as the acquisition method. The queue is initialized with 5 random samples and the algorithm is run for 200 iterations. For our setting, each evaluation is not as complex and we need not use such advanced approaches. Instead, our primary goal is simply to answer: if we allow hyperparameters to be optimized over a continuous set, can we improve on a basic grid search? For this question, we also test two simple approaches: random search and the cross-entropy method (CEM). Random search involves simply testing a fixed number of hyperparameter settings and selecting the one with maximal performance. Though simple, it is a common baseline because it has been shown to be competitive with more complex search strategies (Bergstra & Bengio, 2012). CEM (Rubinstein, 1999) is an approach to global optimization of functions, like BO, but is based on a simpler strategy of (1) maintaining a distribution over the inputs (hyperparameters) and (2) increasing the likelihood, under this distribution, of the top percentile of sampled values according to the performance function. The distribution is simple to sample and the percentile easy to compute, making this approach simpler to use than BO. CEM has not been used for hyperparameter optimization, to the best of our knowledge. Likely the reason is that BO strategies provide a more powerful way to carefully reason about what candidate points to sample. CEM instead slowly narrows a distribution over hyperparameters, and does not reason about confidence intervals nor about a criterion (acquisition function) to identify ideal candidates.
Nonetheless, we include it as a simpler strategy in-between naive random search and the more advanced BO search, to investigate the performance of using calibration models with a hyperparameter search.

![19_image_0.png](19_image_0.png)

Figure 7: **Finding good hyperparameters via black-box optimization in calibration.** Here we compare four different strategies for optimizing hyperparameters: (1) grid search (which we have used in all previous experiments), (2) random search, (3) our CEM procedure, and (4) the Bayesian optimization (BO) approach. The y-axis is the same as the one in Figure 3. Generally, all four approaches perform well, highlighting the robustness of using the calibration model and transferring hyperparameters to deployment. Random search is better than or comparable to grid search, but this is not surprising, as random search is typically found to be a strong baseline in hyperparameter optimization. Note the *True performance* in the plots above represents the deployment performance of the best hyperparameter combination from a discrete set; the same set used by grid search. CEM, BO and random search can obtain higher deployment performance because they search a continuous range and thus find better hyperparameter settings. This is one of the major benefits of using better optimization methods in calibration than grid search.

We emphasize that it is not critical which hyperparameter optimization approach is used within our framework; any method can be swapped in.

**CEM in our experiments**: Our setting has two nuances compared to the typical setting where CEM is used: our function is expensive to query and we only get a stochastic sample. We provide a modified CEM algorithm that still reflects the general framework for CEM, but uses an incremental update—similar to stochastic gradient descent—to account for the stochasticity in our function query. The algorithm is summarized in Algorithm 13 in Appendix C.1.
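A rough sketch of this incremental CEM idea (ours for illustration; the exact procedure is Algorithm 13 in Appendix C.1, and all names and defaults here are illustrative) might look like:

```python
import numpy as np

def cem_search(f, bounds, n_iter=30, pop=20, elite_frac=0.2, eta=0.3, seed=0):
    # CEM for hyperparameter search with noisy evaluations: a Gaussian over
    # hyperparameters is slowly narrowed toward the elite samples.
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    mean, std = (lo + hi) / 2.0, (hi - lo) / 2.0
    n_elite = max(1, int(elite_frac * pop))
    best_x, best_y = None, -np.inf
    for _ in range(n_iter):
        # (1) sample hyperparameters from the current distribution
        X = np.clip(rng.normal(mean, std, size=(pop, len(lo))), lo, hi)
        y = np.array([f(x) for x in X])      # one noisy evaluation each
        elite = X[np.argsort(y)[-n_elite:]]  # (2) top percentile of samples
        # incremental update: move only partway toward the elite statistics
        mean = (1 - eta) * mean + eta * elite.mean(axis=0)
        std = (1 - eta) * std + eta * elite.std(axis=0)
        if y.max() > best_y:
            best_x, best_y = X[np.argmax(y)], y.max()
    return best_x, best_y
```

Here eta plays the role of the stepsize in the incremental update: rather than refitting the distribution to the elites each iteration, the mean and standard deviation move only partway toward the elite statistics, which smooths out the noise in the stochastic function queries.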
## 7.2 Experiment 5: Hyperparameter Tuning With Alternative Optimization Approaches

In this section we compare grid search, random search, Bayesian optimization, and our simple CEM approach for hyperparameter selection. Are these approaches interchangeable in our setup? Does searching a continuous hyperparameter space result in performance gains, and at what cost? The goal of the experiment is to highlight that alternative hyperparameter optimization approaches beyond a basic grid search are possible and to investigate if they are beneficial. In this experiment, we use the same settings as above, but now optimize the temperature τ and stepsize α as continuous values in the ranges [0.0001, 5.0] and (0.0, 0.1] respectively for Acrobot, and [0.0001, 10.0] and [0.0, 1.0] respectively for Puddle World. The random search approach simply generates k possible hyperparameters from the continuous ranges above and evaluates each in parallel in the calibration model. The best-performing hyperparameters according to the calibration phase are used in deployment. Both random search and CEM use 100 iterations, to make the computation comparable to grid search, while Bayesian optimization uses 200 iterations. Random search, Bayesian optimization and CEM outperform grid search, as we can see in Figure 7. The performance improvements are especially stark in Puddle World. Even when tuning only on the calibration model, the agent can outperform the best hyperparameters found by a grid search on the true environment. This is why the return for CEM is higher than the dotted line showing the performance of the best hyperparameters within the set used for the grid search. These results are promising, in that they show that more carefully optimizing hyperparameters on the calibration model helps rather than hurts.
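The random-search protocol just described can be sketched as follows, where `evaluate_in_calibration` is a hypothetical stand-in for training the agent with the given hyperparameter setting in the calibration model and returning its average return:

```python
import numpy as np

def random_search(evaluate_in_calibration, ranges, k=100, seed=0):
    # Draw k hyperparameter settings uniformly from the continuous ranges and
    # keep the one that performs best in the calibration model.
    rng = np.random.default_rng(seed)
    lo = np.array([r[0] for r in ranges])
    hi = np.array([r[1] for r in ranges])
    candidates = rng.uniform(lo, hi, size=(k, len(ranges)))
    scores = [evaluate_in_calibration(c) for c in candidates]  # embarrassingly parallel
    return candidates[int(np.argmax(scores))]

# Illustrative stand-in objective; real use would run the agent in the
# calibration model. The ranges mirror the Acrobot setting above.
best = random_search(lambda c: -abs(c[0] - 1.0) - abs(c[1] - 0.05),
                     ranges=[(0.0001, 5.0), (0.0001, 0.1)], k=100)
```

Because each candidate evaluation is independent, the k runs in the calibration model can be executed in parallel.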
A possible hypothesis, a priori, could have been that optimizing more carefully to the calibration model could cause odd or very poor hyperparameters to be chosen and that the restricted set in the grid search actually helped avoid this failure. These results suggest otherwise, and in fact highlight that our previous results could have shown even more consistent performance with a smarter hyperparameter algorithm. The experiments in this paper highlight the generality and flexibility of our approach. The calibration model can take different forms. Data collection can be done in a number of different ways. Hyperparameters can be systematically searched or optimized. In the end, numerous other specializations and algorithmic innovations could be included to further optimize performance in real deployment scenarios.

## 8 Conclusion

In this work, we introduced the Data2Online problem: selecting hyperparameters from a log of data, for deployment in a real environment. The basic idea is to learn a calibration model from the data log, and then allow the agent to interact in the calibration model to identify good hyperparameters. Essentially, the calibration model is treated just like the real environment. We provide a simple approach, using k-nearest neighbors, to obtain a calibration model that is stable under many iterations and only produces real states. We then conduct a battery of tests, under different data regimes. Naturally, as the first work explicitly tackling this problem, we have only scratched the surface of options. There is much more to understand about when this strategy will be effective, and when it might fail. As we highlight throughout, this problem should be more feasible than offline RL, which requires the entire policy to be identified from a log rather than just suitable hyperparameters for learning.
Our own experiments highlight that offline methods that attempted to learn and deploy a fixed policy performed poorly, whereas identifying reasonable hyperparameters was a much easier problem with consistently good performance across many policies and even small datasets. Nonetheless, we did identify one failure case, where the data resulted in a model that made the environment appear too easy and so most hyperparameters looked similar. Much more work can be done, theoretically and empirically, to understand the Data2Online problem.

## References

Zaheer Abbas, Samuel Sokota, Erin Talvitie, and Martha White. Selective dyna-style planning under limited model capacity. In *International Conference on Machine Learning*, 2020.

Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, and Ofir Nachum. Opal: Offline primitive discovery for accelerating offline reinforcement learning. In *International Conference on Learning Representations*, 2021.

N.E. Barabanov and D.V. Prokhorov. Stability analysis of discrete-time recurrent neural networks. *IEEE Transactions on Neural Networks*, 13(2), 2002.

Paul Barde, Julien Roy, Wonseok Jeon, Joelle Pineau, Derek Nowrouzezahrai, and Christopher Pal. Adversarial soft advantage fitting: Imitation learning without policy optimization. In *Advances in Neural Information Processing Systems*, 2020.

Feryal Behbahani, Kyriacos Shiarlis, Xi Chen, Vitaly Kurin, Sudhanshu Kasewa, Ciprian Stirbu, Joao Gomes, Supratik Paul, Frans A Oliehoek, Joao Messias, et al. Learning from demonstration in the wild. In *International Conference on Robotics and Automation*, 2019.

Marc G. Bellemare, Salvatore Candido, Pablo Samuel Castro, Jun Gong, Marlos C. Machado, Subhodeep Moitra, Sameera S. Ponda, and Ziyu Wang. Autonomous navigation of stratospheric balloons using reinforcement learning. *Nature*, 588(7836), 2020.

Jon Louis Bentley. Multidimensional binary search trees used for associative searching. *Communications of the ACM*, 18(9):509–517, 1975.
James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. *Journal of Machine Learning Research*, 13:281–305, 2012.

Johan Samir Obando Ceron and Pablo Samuel Castro. Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research. In *International Conference on Machine Learning*, 2021.

Yash Chandak, Scott M Jordan, Georgios Theocharous, Martha White, and Philip S Thomas. Towards safe policy improvement for non-stationary MDPs. In *Advances in Neural Information Processing Systems*, 2020a.

Yash Chandak, Georgios Theocharous, Shiv Shankar, Martha White, Sridhar Mahadevan, and Philip S Thomas. Optimizing for the future in non-stationary MDPs. In *International Conference on Machine Learning*, 2020b.

Veronica Chelu, Doina Precup, and Hado P van Hasselt. Forethought and hindsight in credit assignment. In *Advances in Neural Information Processing Systems*, 2020.

Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de las Casas, Craig Donner, Leslie Fritz, Cristian Galperti, Andrea Huber, James Keeling, Maria Tsimpoukelli, Jackie Kay, Antoine Merle, Jean-Marc Moret, Seb Noury, Federico Pesamosca, David Pfau, Olivier Sauter, Cristian Sommariva, Stefano Coda, Basil Duval, Ambrogio Fasoli, Pushmeet Kohli, Koray Kavukcuoglu, Demis Hassabis, and Martin Riedmiller. Magnetic control of tokamak plasmas through deep reinforcement learning. *Nature*, 602(7897), 2022.

C. Downey and S. Sanner. Temporal difference Bayesian model averaging: A Bayesian perspective on adapting Lambda. In *International Conference on Machine Learning*, 2010.

Jan Drgona, Aaron Tuor, and Draguna Vrabie. Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees. *arXiv preprint arXiv:2004.11184*, 2022.

Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning.
*Journal of Machine Learning Research*, 6:503–556, 2005.

Stefan Falkner, A. Klein, and F. Hutter. BOHB: Robust and efficient hyperparameter optimization at scale. In *International Conference on Machine Learning*, 2018.

Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. In *Conference on Robot Learning*, 2017.

Peter I Frazier. A tutorial on Bayesian optimization. *arXiv preprint arXiv:1807.02811*, 2018.

Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In *Conference on Robot Learning*, 2019.

Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In *AAAI Conference on Artificial Intelligence*, 2018.

Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. In *International Conference on Machine Learning*, 2017.

Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Melissa Mozifian, Florian Golemo, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, C. Karen Liu, Jan Peters, Shuran Song, Peter Welinder, and Martha White. Sim2Real in Robotics and Automation: Applications and Challenges. *IEEE Transactions on Automation Science and Engineering*, 18(2), 2021.

Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In *International Conference on Learning and Intelligent Optimization*, 2011.

Andrew Jacobsen, Matthew Schlegel, Cam Linke, Thomas Degris, Adam White, and Martha White. Meta-descent for online, continual prediction. In *AAAI Conference on Artificial Intelligence*, 2019.
Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population based training of neural networks. *arXiv preprint arXiv:1711.09846*, 2017.

Taher Jafferjee, Ehsan Imani, Erin Talvitie, Martha White, and Michael Bowling. Hallucinating value: A pitfall of dyna-style planning with imperfect environment models. *arXiv preprint arXiv:2006.04363*, 2020.

Nan Rosemary Ke, Amanpreet Singh, Ahmed Touati, Anirudh Goyal, Yoshua Bengio, Devi Parikh, and Dhruv Batra. Learning dynamics model in reinforcement learning by incorporating the long term future. In *International Conference on Learning Representations*, 2019.

Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. *Machine Learning*, 49(2):209–232, 2002.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, and Frank Hutter. Fast Bayesian optimization of machine learning hyperparameters on large datasets. In *International Conference on Artificial Intelligence and Statistics*, 2017.

Tor Lattimore and Csaba Szepesvári. *Bandit Algorithms*. Cambridge University Press, 2020.

Nathan Lawrence, Philip Loewen, Michael Forbes, Johan Backstrom, and Bhushan Gopaluni. Almost Surely Stable Deep Dynamics. In *Advances in Neural Information Processing Systems*, volume 33, 2020.

Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, and Jinwoo Shin. Offline-to-online reinforcement learning via balanced replay and pessimistic Q-ensemble. In *Conference on Robot Learning*, 2021.

Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. *Journal of Machine Learning Research*, 18(1):6765–6816, 2018.

A. Rupam
Mahmood, Dmytro Korenkevych, Gautham Vasan, W. Ma, and J. Bergstra. Benchmarking reinforcement learning algorithms on real-world robots. In *Conference on Robot Learning*, 2018.

Travis Mandel, Yun-En Liu, Emma Brunskill, and Zoran Popović. Offline evaluation of online reinforcement learning algorithms. In *AAAI Conference on Artificial Intelligence*, 2016.

Gaurav Manek and J Zico Kolter. Learning stable deep dynamics models. In *Advances in Neural Information Processing Systems*, 2019.

Timothy A. Mann, Hugo Penedones, Shie Mannor, and T. Hester. Adaptive Lambda least-squares temporal difference learning. *arXiv preprint arXiv:1612.09465*, 2016.

Amir-massoud Farahmand, Mohammad Ghavamzadeh, Csaba Szepesvári, and Shie Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In *American Control Conference*, 2009.

Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. *arXiv preprint arXiv:1707.02201*, 2017.

N. Mohajerin. *Identification and Predictive Control Using Recurrent Neural Networks*. PhD thesis, Orebro University, 2012.

Fernando Nogueira. Bayesian Optimization: Open source constrained global optimization tool for Python, 2014–. URL https://github.com/fmfn/BayesianOptimization.

Yangchen Pan, Muhammad Hamad Zaheer, Adam White, Andrew Patterson, and Martha White. Organizing experience: A deeper look at replay mechanisms for sample-based planning in continuous state domains. In *International Joint Conference on Artificial Intelligence*, 2018.

Matteo Papini, Matteo Pirotta, and Marcello Restelli. Smoothing policies and safe policy gradients. *arXiv preprint arXiv:1905.03231*, May 2019.

Jack Parker-Holder, Vu Nguyen, and Stephen J. Roberts. Provably efficient online hyperparameter optimization with population-based bandits.
In *Conference and Workshop on Neural Information Processing Systems*, 2020.

Supratik Paul, Vitaly Kurin, and Shimon Whiteson. Fast efficient hyperparameter tuning for policy gradient methods. In *Advances in Neural Information Processing Systems*, 2019.

Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In *International Conference on Robotics and Automation*, 2018.

Harish Ravichandar, Athanasios S Polydoros, Sonia Chernova, and Aude Billard. Recent advances in robot learning from demonstration. *Annual Review of Control, Robotics, and Autonomous Systems*, 3:297–330, 2020.

Martin Riedmiller. Neural Fitted Q Iteration - First experiences with a data efficient neural reinforcement learning method. In *European Conference on Machine Learning*, 2005.

Reuven Rubinstein. The cross-entropy method for combinatorial and continuous optimization. *Methodology and Computing in Applied Probability*, 1(2):127–190, 1999.

Anton Maximilian Schaefer, Daniel Schneegass, Volkmar Sterzing, and Steffen Udluft. A Neural Reinforcement Learning Approach to Gas Turbine Control. In *International Joint Conference on Neural Networks*, 2007.

Anton Maximilian Schäfer. *Reinforcement Learning with Recurrent Neural Networks*. PhD thesis, Universität Osnabrück, 2008.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In *Advances in Neural Information Processing Systems*, 2012.

Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: no regret and experimental design. In *International Conference on Machine Learning*, 2010.

Richard S Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In *AAAI Conference on Artificial Intelligence*, 1992.

Richard S Sutton.
Generalization in reinforcement learning: Successful examples using sparse coarse coding. In *Advances in Neural Information Processing Systems*, 1995.

Richard S Sutton and Andrew G Barto. *Reinforcement Learning: An Introduction*. MIT Press, 2018.

Richard S Sutton, Anna Koop, and David Silver. On the role of tracking in stationary environments. In *International Conference on Machine Learning*, 2007.

Naoya Takeishi and Yoshinobu Kawahara. Learning Dynamics Models with Stable Invariant Sets. In *AAAI Conference on Artificial Intelligence*, 2021.

Erik Talvitie. Self-correcting models for model-based reinforcement learning. In *AAAI Conference on Artificial Intelligence*, 2017.

Erin Talvitie. Model regularization for stable sample rollouts. In *International Conference on Uncertainty in Artificial Intelligence*, 2014.

Yunhao Tang and Krzysztof Choromanski. Online hyper-parameter tuning in off-policy learning via evolutionary strategies. *arXiv preprint arXiv:2006.07554*, 2020.

Arun Venkatraman, Martial Hebert, and J Andrew Bagnell. Improving multi-step prediction of learned time series models. In *AAAI Conference on Artificial Intelligence*, 2015.

Ruosong Wang, Dean P. Foster, and Sham M. Kakade. What are the statistical limits of offline RL with linear function approximation? In *International Conference on Learning Representations*, 2021.

Martha White. Unifying task specification in reinforcement learning. In *International Conference on Machine Learning*, 2017.

Martha White and Adam White. A greedy approach to adapting the trace parameter for temporal difference learning. In *International Conference on Autonomous Agents and Multiagent Systems*, 2016.

Grady Williams, Nolan Wagener, Brian Goldfain, Paul Drews, James M Rehg, Byron Boots, and Evangelos A Theodorou. Information theoretic MPC for model-based reinforcement learning. In *International Conference on Robotics and Automation*, 2017.

Yifan Wu, George Tucker, and Ofir Nachum.
The Laplacian in RL: Learning representations with efficient approximations. In *International Conference on Learning Representations*, 2019.

Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, and Jeffrey L. Krichmar. Domain adaptation in reinforcement learning via latent unified state representation. In *AAAI Conference on Artificial Intelligence*, 2021.

Zhongwen Xu, Hado van Hasselt, and David Silver. Meta-gradient reinforcement learning. In *Advances in Neural Information Processing Systems*, 2018.

Mengjiao Yang and Ofir Nachum. Representation matters: Offline pretraining for sequential decision making. In *International Conference on Machine Learning*, 2021.

Mengjiao Yang, Bo Dai, Ofir Nachum, George Tucker, and Dale Schuurmans. Offline policy selection under uncertainty. *arXiv preprint arXiv:2012.06919*, 2020.

Wen Yu. Nonlinear system identification using discrete-time recurrent neural networks with stable learning algorithms. *Information Sciences*, 158, 2004.

Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado P van Hasselt, David Silver, and Satinder Singh. A self-tuning actor-critic algorithm. In *Advances in Neural Information Processing Systems*, 2020.
Review 1:

Summary: The paper proposes Data2Online, a method of selecting hyperparameters based on a log of offline data. The method is tested empirically on three environments: Acrobot, Puddle World and Cartpole, providing some evidence of its usefulness.

Strengths and Weaknesses:

Strengths:
- the presented method is simple and elegant
- overall the paper is very well executed
- the paper is well-motivated (i.e. addresses an important problem)
- experiments cover various scenarios

Weaknesses:
- it is not clear if the suggested solution solves the stated problem
- it is unclear how the solution would scale (to more difficult problems)

In more detail, the paper proposes the *Data2Online* method, which, in a nutshell, builds a calibration model used for tuning hyperparameters. The standing assumption on which the method hinges is that the quality of such a model might be relatively weak. I enjoyed reading the paper; the text's quality is high, the structure is well thought out, and, importantly, the experiments cover various situations (e.g., failure case). Nevertheless, I have doubts about the practical significance. I assume that the ultimate goal is sample efficiency understood in a broad sense, meaning that one takes into account all samples used, including hyperparameter search. The cost of collecting the offline dataset also needs to be considered (or argued to be irrelevant). Let us consider two extremal cases: near-optimal data, and random exploration data. In the first one, the problem has been solved before (and the price has been paid). Random exploration data are cheap but typically have poor coverage, therefore making it possible only to find hyperparameters for the initial stages of training. However, perhaps tuning these initial stages would be cheap enough with standard online tuning? I would imagine that it is (might be) possible to benefit from the proposed method; however, at the moment, I feel the evidence is poor.
The environments are somewhat simple (even a few steps suffice to provide good coverage). A few more details (most of the comments regarding text are nit-picks):
- I find Section 1 and Section 2 slightly too long; for example, I'd consider removing some parts of the formalization
- I find Section 4.2 a little bit detached from the rest (perhaps some forward references would help?)
- I find Section 4.3 mostly 'decorative'. Theoretical analysis is able to provide only very pessimistic estimates, which, in my view, do not help much in understanding the method. I'd suggest moving it to the appendix
- I like a lot the idea of KNN models.
- I like the solid execution of the experimental section. This softens a little bit my criticism of the choice of environments, i.e. having relatively small and fast ones made it possible to provide a careful statistical evaluation.

Requested Changes: I wonder if the authors can provide stronger arguments supporting that their method is likely to be practically useful. I could imagine various ways. The first one, perhaps the most standard, would be to run more experiments, including more complex environments. There might also be some cheaper ways. Experiments in Section 6.3 suggest that there might be interesting multi-task/CL transfer. Perhaps, one could argue that the calibration data can be reused multiple times (effectively decreasing the cost of their collection). Last but not least, it is possible to pinpoint sample efficiency benefits even with the current data.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper presents an offline hyperparameter selection scheme based on a non-parametric calibration model for offline RL with online adaptation. The authors present a calibration model trained with KNN, simulate learned policies under the learned calibration model and then pick the hyperparameter that yields the best simulated performance.
The authors compare the method to a calibration model trained with neural networks, a prior offline RL method with an offline tuning scheme (FQI), and offline RL with random hyperparameter selections, and show that the proposed method achieves better performance in two control tasks with varying data quality.

Strengths and Weaknesses: I think the idea of the method is intuitive and easy to follow. It is very natural to consider a model-based offline hyperparameter selection rule. The method is well-motivated and the empirical evidence shows that the proposed method is sensible and can outperform random hyperparameters, offline RL with offline hyperparameter tuning based on Q-values, and offline RL with NN-parameterized dynamics models. However, I do have several concerns, which are listed as follows. First, I think it is a bit unclear why the authors consider the offline-to-online setting throughout the whole experiment section. The method seems to focus on hyperparameter selection for offline RL without using any online interactions and I don't see why this method is particularly fitted to the offline-to-online scenario. It may be able to apply to the setting where we train RL policies with different hyperparameters either online or offline and then perform offline hyperparameter selection, but I'm not sure if the case where we train the policy online and then select hyperparameters offline is necessary. Moreover, I think the experiments need to include more comparisons such as [1] and challenging domains such as DM Control, DM Locomotion, Manipulation Playground and tasks from the D4RL benchmark, which will be pivotal to see the effectiveness of the approach. Finally, I believe that it is important to test the effectiveness of the method on different (offline) RL algorithms. It would be particularly interesting to test how the calibration models help pick hyperparameters for standard offline model-free RL algorithms (e.g. CQL, IQL, etc.) and model-based methods (e.g.
MOReL, MOPO, COMBO, etc.)

[1] Paine, Tom Le, et al. "Hyperparameter selection for offline reinforcement learning." arXiv preprint arXiv:2007.09055 (2020).

Requested Changes: See the above section.

Broader Impact Concerns: None

==================================================

Review 3:

Summary: The paper deals with the possibility to choose hyperparameters for reinforcement learning based on a model of the environment. This model is created from pre-recorded data of the environment. This model is called "calibration model" in this paper, which I think is a very appropriate name. One problem is that the paper presents this technique as being new, although it is state of the art. Especially the insight "The calibration model need not be a perfect simulator to be useful for identifying reasonable hyperparameters, whereas learning a transferable policy typically requires accurate models.", though completely correct and very important in practice, is not a new insight. However, since there is little published systematic research on this technique, this paper can help to fill this gap. In this paper, experiments are performed on two very simple benchmarks. As a data-based model, a k-nearest neighbors (KNN) approach improved by metric learning is used and compared to a NN that also uses the variables of the learned metric. Furthermore, some limitations are discussed and theoretical considerations are provided.

Strengths and Weaknesses:

**Strengths**\
The topic is important.\
There is a lack of published studies on experience with this technique.\
The use of KNN models improved by metric learning using Laplace representation as calibration models is, as far as I know, new.\
The term calibration model, which as far as I know is introduced in this paper, is very fitting.

**Weaknesses**\
The biggest problem in my opinion is that the paper presents the discussed technique as generally new, although it is state of the art.
The selection of appropriate reinforcement learning algorithms (and this includes the hyperparameters of the algorithms) based on a model learned on real data is known from patent specification US8099181 [1], for example. Furthermore, I know this approach to be common practice in the application of RL to real applications.

[1] https://patentimages.storage.googleapis.com/c5/5d/d6/b03f78fa17ce13/US8099181.pdf

Several statements are made for which no sufficient evidence is given in their generality:

• The statement „We propose a new approach to tune hyperparameters from offline logs of data, to fully specify the hyperparameters for an RL agent that learns online in the real world" is correct if one thinks of the specific design of the KNN with Laplace representation. However, since the statement continues with the sentence „The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model to identify promising hyperparameters", it is claimed that the whole approach of selecting hyperparameters for reinforcement learning based on a model of the environment, where this model is created using pre-recorded data of the environment, is novel, which is not the case.

• This is repeated with „We propose a novel strategy to use offline data for selecting hyperparameters". Again this sentence is correct if one thinks of the specific design of the KNN with Laplace representation. By the subsequent sentence „The idea is simple: we use the data to learn a calibration model, and evaluate hyperparameters in the calibration model" it is again claimed that the approach is new in general.

• Likewise „Though this is a simple and natural idea, to the best of our knowledge, it is the first general approach for Data2Online". This approach already existed before. What is new is the special design of the calibration model as KNN with Laplace representation.
Then, in the subsequent sentence, it is also admitted that the use of models is known: „It is common in reinforcement learning to learn models for offline policy evaluation or for planning". The justification for why their own approach is new, „These approaches, however, do not need to tackle a key problem we consider in this work: iterating the model for thousands of learning steps.", is not convincing.

1. Why should thousands of time steps be necessary in general? After all, the data on which the calibration model was learned is available, so one can always start from a randomly selected real state from that data set during learning and then iterate the model, say, only 100 time steps.

2. It is not shown that the proposed model is better suited for thousands of iterations than established methods of system identification, like RNN, e.g. [2], [3], and references therein. My conviction is that RNN are much better able to approximate the environment than KNN with Laplace representation. The performed comparison with NN is not sufficient. First, the NN used are not state of the art, in particular not RNN, and second, the environments studied are extremely low dimensional, with very simple dynamics. In my experience, a KNN-based approach will be structurally inferior to an RNN, especially in high dimensionality.

• The statement „There is only one other work considering how to use offline data to evaluate an online agent (Mandel et al., 2016)" is not true in this form. See [4], [5], [6].

• Also irritating is the sentence „We are hopeful that, with more research, NN models will become a viable choice for learning calibration models". NN are an established technique of system identification, see [2], [3] and references therein, which has also been used in high-dimensional environments. In the present work, no sufficient evidence was provided that the proposed KNN-based models have advantages over state-of-the-art RNN.
One more note to avoid misunderstandings: NN and RNN are not generally limited to identifying deterministic systems: ensembles and NN with stochastic components (e.g., Bayesian Neural Networks [7]) have been used successfully in stochastic environments.

[2] J. Sjöberg, H. Hjalmarsson, L. Ljung, Neural Networks in System Identification, 1994\
[3] D. Ha, J. Schmidhuber, Recurrent world models facilitate policy evolution, 2018\
[4] T. Gabel, M. Riedmiller, Reducing policy degradation in neurodynamic programming, 2006\
[5] M. Migliavacca et al., Fitted policy search: Direct policy search using a batch reinforcement learning approach, 2010\
[6] A. Hans, S. Duell, S. Udluft, Agent self-assessment: Determining policy quality without execution, 2011\
[7] S. Depeweg, Modeling Epistemic and Aleatoric Uncertainty with Bayesian Neural Networks and Latent Variables, 2018

Requested Changes: The paper can contribute to scientific progress 1) by naming and discussing the known approach for which published studies are scarce, and 2) by presenting KNN with Laplace representation. I consider 1) to be the more important contribution. However, to be valuable, all inaccurate or insufficiently substantiated claims (see weaknesses) must be reformulated so that they are true and sufficiently substantiated. In particular, because this means not making the claim that the approach is generally novel, it is necessary to completely reword the Abstract, Introduction, and other parts of the text. Possibly, the title should also be chosen to be more modest and appropriate to the contribution actually made. Extensive experiments in more difficult and high-dimensional environments and comparisons with state-of-the-art system identifiers would drastically increase the usefulness of the paper. However, I think it is understandable if this is seen as future work.

Broader Impact Concerns: No concerns.
==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: This work proposes a procedure for tuning the hyperparameters of online reinforcement learning algorithms, using a model estimated from offline interaction data. After describing the proposed approach, the authors show empirical results on three traditional reinforcement-learning testbeds. The three expert reviewers who assessed this work agree that the proposed approach is simple and intuitive, the topic is relevant, and the paper is clearly written and well executed. On the other hand, they pointed out a series of weaknesses related to the novelty of the proposed approach and its scalability to more complex/real-world domains. Most of these issues have been effectively addressed by the authors in their rebuttal, and the authors have to update their paper accordingly. In particular, since some misunderstandings were common among multiple reviewers, this is a symptom that the writing was not clear enough about those points, and the authors need to take particular care in adjusting them. Concerning the experimental part, the authors mention that this work was inspired by a real application on a water treatment use case. It would be wonderful if the authors could show results in this domain, even if only in a qualitative way in order not to violate any non-disclosure agreement. In conclusion, according to the TMLR guidelines, we are glad to accept this manuscript, while strongly encouraging the authors to make a final effort to address the reviewers' concerns.

==================================================
# Bayesian Quantification With Black-Box Estimators

Albert Ziegler *albert@xbow.com*
XBOW, Head of AI
Uppsala, Sweden

Paweł Czyż *pawelpiotr.czyz@ai.ethz.ch*
ETH AI Center and Department of Biosystems Science and Engineering
ETH Zürich
Zürich, Switzerland

Reviewed on OpenReview: *https://openreview.net/forum?id=Ft4kHrOawZ*

## Abstract

Understanding how different classes are distributed in an unlabeled data set is important for the calibration of probabilistic classifiers and for uncertainty quantification. Methods like adjusted classify and count, black-box shift estimators, and invariant ratio estimators use an auxiliary, and potentially biased, black-box classifier trained on a different data set to estimate the class distribution on the current data set and yield asymptotic guarantees under weak assumptions. We demonstrate that these algorithms are closely related to inference in a particular probabilistic graphical model approximating the assumed ground-truth generative process, and we propose a Bayesian estimator. We then discuss an efficient Markov chain Monte Carlo sampling scheme for the introduced model and show an asymptotic consistency guarantee in the large-data limit. We compare the introduced model against the established point estimators in a variety of scenarios and show that it is competitive with, and in some cases superior to, the non-Bayesian alternatives.

## 1 Introduction

Consider a medical test predicting illness (classification label Y ), such as influenza, based on symptoms (features X). This can often be modeled as an anti-causal problem1 (Schölkopf et al., 2012), where Y causally affects X. Under the usual i.i.d. assumption, one can approximate the probabilities P(Y | X) using a large enough training data set. However, the performance on real-world data may be lower than expected due to data shift: the issue that real-world data come from a different probability distribution than the training data.
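The Bayes'-theorem rescaling of the training-time posteriors under a known test prior, mentioned below in the context of Saerens et al. (2001), can be sketched in a few lines. This is an illustrative snippet, not code from the paper; the function name and the toy numbers are assumptions.

```python
import numpy as np

def rescale_posterior(p_train_y_given_x, pi_train, pi_test):
    """Rescale train-time class posteriors to a new label prior.

    Under prior probability shift, P_test(X | Y) = P_train(X | Y), so
    P_test(Y = y | X = x) is proportional to
    P_train(Y = y | X = x) * pi_test[y] / pi_train[y].
    """
    w = np.asarray(p_train_y_given_x, dtype=float) \
        * np.asarray(pi_test, dtype=float) / np.asarray(pi_train, dtype=float)
    return w / w.sum(axis=-1, keepdims=True)

# A classifier trained with balanced classes, applied where class 1 is rare:
p = rescale_posterior([0.5, 0.5], pi_train=[0.5, 0.5], pi_test=[0.9, 0.1])
```

An ambiguous prediction under the training prior becomes a confident one once the shifted prior is taken into account.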
For example, well-calibrated classifiers trained during early stages of the COVID-19 pandemic will underestimate the incidence of the illness at the time of a surge in infections. The paradigmatic case of data shift is *prior probability shift*, where the context (e.g., training and test phase) influences the distribution of the target label Y , although the generative mechanism generating X from Y is left unchanged. In other words, Ptrain(X | Y ) = Ptest(X | Y ), although Ptrain(Y ) may differ from Ptest(Y ). If Ptest(Y ) is known, then Ptest(Y | X) can be calculated by rescaling Ptrain(Y | X) according to Bayes' theorem (see Saerens et al. (2001, Sec. 2.2) or Schölkopf et al. (2012, Sec. 3.2)). However, Ptest(Y ) is usually unknown and needs to be estimated having access only to a finite sample from the distribution Ptest(X). This task is known as *quantification* (González et al., 2017; Forman, 2008). Although quantification has found applications in adjusting classifier predictions, it is an important problem in its own right. For example, imagine an inaccurate but cheap COVID-19 test, which can be taken by a significant fraction of the population on a weekly basis.

1 While influenza causes high fever, in many medical problems the causal relationships are much more complex (Castro et al., 2020).

![1_image_0.png](1_image_0.png)

Figure 1: Left: High-dimensional model Mtrue. Dashed arrows are used to create low-dimensional representations Ci and C′j using a given black-box function f. Filled nodes represent observed random variables. The top row represents the labeled data set and the bottom row represents the unlabeled data set. Middle: Tractable approximation Mapprox (Sec. 3). Right: Model Msmall enabling fast inference when f is a black-box classifier or a clustering algorithm (Sec. 3.2).
While this test may not be sufficient to determine whether a particular person is contagious, the estimate of the true number of positive cases could be used by epidemiologists to monitor the reproduction number and by the health authorities to inform public policy.2 We advocate treating the quantification problem using Bayesian modeling, which provides uncertainty around the Ptest(Y ) estimate. This uncertainty can be used directly if the distribution on the whole population is of interest, or it can be used to calibrate a probabilistic classifier to yield a more informed estimate for the label of a particular observation. A Bayesian approach was already proposed by Storkey (2009, Sec. 6). However, that proposal relies on a generative model P(X | Y ), which can be misspecified or computationally intractable in high-dimensional settings. Hence, quantification is usually approached either via the expectation-maximization (EM) algorithm (Saerens et al., 2001) or a family of closely-related algorithms known as invariant ratio estimators (Vaz et al., 2019), black-box shift estimators (Lipton et al., 2018), or adjusted classify and count (Forman, 2008), which replace the generative model P(X | Y ) with a (potentially biased) classifier. Tasche (2017), Lipton et al. (2018), and Vaz et al. (2019) proved that these algorithms are asymptotically consistent (they rediscover Ptest(Y ) in the limit of infinite data) under weak assumptions and derived asymptotic bounds on the related error. Our contributions are:

1. We show a connection between the quantification algorithms employing a black-box (and potentially biased) classifier and Bayesian inference in a probabilistic graphical model approximating the ground-truth generative process.

2. We present a tractable Bayesian approach, which is well-suited for low-data problems.
Established alternatives provide asymptotic estimates of error bounds, but may be far off for small samples (to the point that some of the estimates for Ptest(Y ) may be negative). The Bayesian approach explicitly quantifies the uncertainty and does not suffer from the negative-values problem. Moreover, it is possible to incorporate expert knowledge via the choice of the prior distribution.

3. We prove that the *maximum a posteriori* inference in the considered model is asymptotically consistent under weak assumptions.

2 Note however that testing strategies may be adapted to the outbreaks, which in turn induce correlations between observed data, violating the usual assumption that the data are exchangeable. We discuss contraindications in Sec. 5.

## 2 The Quantification Problem And Existing Solutions

Consider a classification problem with labels Y = {1, 2, . . . , L} and observed features coming from a measurable space X. A given object is then represented by two random variables (r.v.): a Y-valued r.v. representing the label and an X-valued r.v. representing the measured features. We consider an anti-causal problem in which there exists a probabilistic mechanism P(X | Y ) responsible for generating the features from the label. We focus on two populations sharing the same generative mechanism, assuming that there exist probability distributions Ptrain(X, Y ) and Ptest(X, Y ) over X × Y such that Ptest(X | Y = y) = Ptrain(X | Y = y) for every y ∈ Y. In the literature this assumption is referred to as *prior probability shift* (Storkey, 2009). We will write K∗y for the conditional distribution P(X | Y = y), which is the generative mechanism shared by both populations. The *quantification problem* (González et al., 2017) asks whether, given finite i.i.d. samples from the distributions Ptrain(X, Y ) and Ptest(X), it is possible to determine the distribution Ptest(Y ).
In principle, if the data are abundant, one can use samples from Ptrain(X, Y ) to determine the conditional distributions K∗y and then find all probability vectors Ptest(Y ) which are compatible with the distribution of the features Ptest(X), written as a mixture distribution of the generative mechanisms K∗y :

$$P_{\mathrm{test}}(X)=\sum_{y\in{\mathcal{Y}}}P_{\mathrm{test}}(Y=y)\,K_{y}^{*}.\tag{1}$$

The uniqueness of the vector Ptest(Y ) follows under a strict linear independence assumption on the measures K∗y (Garg et al., 2020). We review the notion of strict linear independence in Appendix A, but for a finite discrete space X it reduces to the linear independence of the probability vectors K∗y , which allows constructing the left inverse of the P(X | Y ) matrix. In practice, however, it is not possible to fully reconstruct the distributions Ptest(X) and K∗y from finite samples, and principled statistical approaches are needed. To formalize the problem, consider the probabilistic graphical model Mtrue in Fig. 1: the probability vectors Ptrain(Y ) and Ptest(Y ) are modeled with r.v. π and π′ valued in the probability simplex ∆L−1 = {y ∈ (0, 1)L : y1 + · · · + yL = 1} and the ground-truth generative processes K∗y are modeled via parametric distributions Ky(·; θ) with a parameter vector θ. We observe N pairs of r.v. (Xi, Yi) for i ∈ {1, . . . , N} sampled independently according to the model

$$Y_{i}\mid\pi\sim\mathrm{Categorical}(L,\pi),\qquad X_{i}\mid Y_{i},\theta\sim K_{Y_{i}}(\cdot\,;\theta).$$

Additionally, we observe N′ r.v. X′j for j ∈ {1, . . .
, N′} sampled independently from the mixture distribution

$$X_{j}^{\prime}\mid\pi^{\prime},\theta\sim\sum_{y=1}^{L}\pi_{y}^{\prime}\,K_{y}(\cdot;\theta),$$

or, if latent variables Y′j valued in Y are introduced,

$$Y_{j}^{\prime}\mid\pi^{\prime}\sim\mathrm{Categorical}(L,\pi^{\prime}),\qquad X_{j}^{\prime}\mid Y_{j}^{\prime},\theta\sim K_{Y_{j}^{\prime}}(\cdot;\theta).$$

## 2.1 Likelihood-Based Methods

Our work draws on two major groups of quantification methods, with the description of other approaches deferred to Appendix D. The first group proceeds by considering a generative probabilistic model of the data. Peters & Coberly (1976) assume that each Ky(·; θ) is a multivariate normal distribution. Then, they estimate θ using the labeled data {Xi, Yi} and find the maximum likelihood solution for π′ by an iterative optimization algorithm. Storkey (2009) discusses approaching quantification problems within a fully Bayesian estimation, which requires marginalization of θ, and notices that such marginalization may not generally be tractable for complex generative models Ky(·; θ). Moreover, a tractable generative model of high-dimensional data is likely to be misspecified, which may compromise Bayesian inference (Watson & Holmes, 2016; Lyddon et al., 2018). Saerens et al. (2001) observe that specifying high-dimensional distributions Ky(·; θ) may be avoided if one instead has access to an oracle probabilistic classifier r : X → ∆L−1 such that each r(x) = P(Yi | Xi = x, π). Then, they show how to use a candidate value π′ to recalibrate r(x) and marginalize the latent variables Y′j in the Expectation-Maximization (EM) manner, which iteratively optimizes π′, targeting the maximum likelihood estimate. In Appendix D.1 we give a detailed treatment of this algorithm, together with two simple extensions: when a Dirichlet prior is used for P(π′), EM targets the *maximum a posteriori* (MAP) estimate of the posterior distribution P(π′ | {X′j}, r).
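The EM iteration in the spirit of Saerens et al. (2001) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name, the fixed iteration count in place of a convergence criterion, and the toy example are assumptions.

```python
import numpy as np

def em_prior_shift(r_test, pi_train, n_iter=100):
    """EM-style estimation of pi_test from train-time classifier posteriors.

    r_test: (N', L) array of posteriors r(x'_j) on the unlabeled test data.
    pi_train: (L,) label prior implicitly used by the classifier.
    """
    r = np.asarray(r_test, dtype=float)
    pi0 = np.asarray(pi_train, dtype=float)
    pi = pi0.copy()
    for _ in range(n_iter):
        # E-step: recalibrate the posteriors to the current prior estimate.
        w = r * (pi / pi0)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the new prior estimate is the average responsibility.
        pi = w.mean(axis=0)
    return pi

# Toy oracle posteriors: 68 points that look like class 0, 32 like class 1.
r_test = np.array([[0.8, 0.2]] * 68 + [[0.2, 0.8]] * 32)
pi_est = em_prior_shift(r_test, pi_train=np.array([0.5, 0.5]))
```

On this toy input the iteration converges to a shifted prior that puts most mass on class 0, even though the classifier itself was trained with balanced classes.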
Moreover, we describe a Gibbs sampler allowing drawing from the posterior P(π′ | {X′j}, r). However, this posterior is generally different from P(π′ | {X′j}, {Xi, Yi}), which requires marginalization over r, and the performance of the EM algorithm depends on access to the oracle classifier r, which has to be well-calibrated (Garg et al., 2020). As modern classification methods, such as neural networks, are often miscalibrated (Guo et al., 2017), one has to leverage the labeled data set {Xi, Yi} to improve calibration. Alexandari et al. (2020) introduce calibration methods which can yield state-of-the-art results using the expectation-maximization algorithm. Although both approaches are grounded in a sound likelihood framework, Peters & Coberly (1976) require learning high-dimensional generative models Ky(X; θ) = P(X | Y = y, θ) and Saerens et al. (2001) assume access to a well-calibrated oracle classifier P(Yi | Xi, π). Moreover, each iteration of all mentioned approaches requires operations involving all N′ variables X′j. This may limit the scalability of either algorithm to large data sets.

## 2.2 Methods Involving An Auxiliary Black-Box Classifier

The second group of approaches is based around a modification of Eq. 1 and assumes access to a given auxiliary black-box mapping: consider a given measurable space C and a measurable mapping3 f : X → C. For example, f can be a pretrained feature extractor (such as a large language model), a clustering algorithm, or a generic classifier, trained on a large data set with possibly a different set of categories. Then, one can define new observed r.v. Ci = f(Xi) and C′j = f(X′j), which in Fig. 1 corresponds to the part of the diagram with dashed arrows. Note that the new variables act only as a summary statistic and do not increase the amount of information available. Namely, given {(Xi, Yi)} and {X′j}, the r.v. π′ is independent of {Ci} and {C′j}, i.e., π′ ⊥⊥ {Ci}, {C′j} | {(Xi, Yi)}, {X′j}.
However, the prior probability shift assumption implies that the distributions P(Ci | Yi = y) = P(C′j | Y′j = y) are equal for an arbitrary label y ∈ Y and indices i, j. In particular, Eq. 1 can be used with the original features X replaced by the newly introduced representations C = f(X). As they are of lower dimension than X, it may be easier to approximate the required probabilities with the available data samples. For example, Vaz et al. (2019) propose invariant ratio estimators, which generalize the earlier approaches of adjusted classify and count (Forman, 2008; Tasche, 2017) and its variant introduced by Bella et al. (2010). Namely, for a given mapping f : X → R^{L−1}, one constructs

$$\hat{f}^{\prime}=\frac{1}{N^{\prime}}\sum_{j}C^{\prime}_{j},\quad\hat{F}_{:,y}=\frac{1}{|S_{y}|}\sum_{i\in S_{y}}C_{i},\quad\text{where}S_{y}=\{i\in\{1,\ldots,N\}:Y_{i}=y\},$$

and solves the set of equations given by $\hat{f}' = \hat{F}\hat{\pi}'$ and $\pi'_1 + \cdots + \pi'_L = 1$. In Appendix D.2 we review the closely-related algorithms employing black-box classifiers and based on matrix inversion (solving a set of linear equations), including the popular algorithm of Lipton et al. (2018). Estimators employing black-box classifiers offer four advantages over likelihood-based methods. First, as the auxiliary mapping f can produce low-dimensional representations, estimating the probabilities appearing in Eq. 1 may be more accurate. Second, Peters & Coberly (1976) require training a potentially high-dimensional generative model and Saerens et al. (2001) require a well-calibrated oracle probabilistic classifier, which may be hard to obtain in practice. Third, each optimization step in a likelihood-based method requires O(N′) operations. The black-box method f has to be applied only once to each X′j to construct the summary statistic, which is then used for solving a linear set of equations (cf. Eq. 1).
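For the special case where f outputs hard labels, a member of this family of estimators can be sketched as a linear solve against an estimated confusion-style matrix. This is an illustrative least-squares variant (the function name and the example numbers are assumptions), not the exact estimator of any single cited paper.

```python
import numpy as np

def black_box_shift_estimate(c_train, y_train, c_test, L, K):
    """Point estimate of pi_test from hard outputs of a black-box classifier f.

    c_train, y_train: f(X_i) and labels Y_i on the labeled set
                      (values in 0..K-1 and 0..L-1, respectively).
    c_test: f(X'_j) on the unlabeled set.
    """
    # F[k, y] estimates P(C = k | Y = y) from the labeled sample.
    F = np.zeros((K, L))
    for c, y in zip(c_train, y_train):
        F[c, y] += 1.0
    F /= F.sum(axis=0, keepdims=True)
    # q[k] estimates P_test(C = k) from the unlabeled sample.
    q = np.bincount(c_test, minlength=K) / len(c_test)
    # Least-squares solve of q = F pi'; for small samples the solution may
    # leave the probability simplex, one of the issues discussed in the text.
    pi_test, *_ = np.linalg.lstsq(F, q, rcond=None)
    return pi_test

# Labeled set: f is 90% accurate; unlabeled set drawn with pi_test = (0.7, 0.3).
y_tr = np.array([0] * 50 + [1] * 50)
c_tr = np.array([0] * 45 + [1] * 5 + [0] * 5 + [1] * 45)
c_te = np.array([0] * 66 + [1] * 34)
pi_est = black_box_shift_estimate(c_tr, y_tr, c_te, L=2, K=2)
```

With these counts the estimated matrix is well-conditioned and the solve recovers the shifted prevalence exactly; with a nearly singular matrix the same solve becomes unstable, which motivates the Bayesian treatment in Sec. 3.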
Finally, even when P(X | Y ) is not invariant (i.e., the prior probability shift assumption does not hold), for an appropriate dimension reduction method f it may hold that the distribution of low-dimensional representations, P(C | Y ), is invariant (Arjovsky et al., 2019). Lipton et al. (2018) call the invariance of P(C | Y ) the *weak prior probability shift assumption*. However, at the same time, methods employing black-box dimension reduction methods f have three undesirable properties. First, dimension reduction methods may incur a loss of information (Fedorov et al., 2009; Harrell, 2015, Sec. 1.3). In particular, even if the ground-truth distributions K∗y = P(X | Y = y) are strictly linearly independent, the pushforward distributions P(C | Y = y) do not have to be. Second, solving Eq. 1 requires approximating probability distributions based on the laws of large numbers: although these methods have desirable asymptotic behaviour, likelihood-based methods explicitly work with a given finite sample. Finally, solving a linear set of equations is not numerically stable when P(C | Y ) has a large condition number. In the next section we show how to solve the last two issues within our proposed Bayesian framework.

3 Although for the clarity of the exposition we will use a notation corresponding to a measurable function f, the results hold mutatis mutandis for an arbitrary Markov kernel (Klenke, 2014, Sec. 8.3), so that f does not need to be deterministic.

## 3 Bayesian Quantification With Black-Box Shift Estimators

We work in the setting of Fig. 1 with N labeled examples (X1, Y1), . . . , (XN , YN ) and N′ unlabeled examples X′1, . . . , X′N′ obtained under the prior probability shift assumption. Additionally, we assume that we work with a given dimension reduction mapping f : X → C.
A fully Bayesian treatment (Storkey, 2009) relies on an assumed parametric generative mechanism Ky(X; θ) = P(X | Y = y, θ) and marginalizes over all possible values of the parameter θ to obtain the values of the latent variables π and π′. From the graphical structure in Fig. 1 we note that the posterior factorizes as

$$P(\pi^{\prime},\pi\mid\{(X_{i},Y_{i})\},\{X_{j}^{\prime}\})=P(\pi\mid\{Y_{i}\})\cdot P(\pi^{\prime}\mid\{X_{j}^{\prime}\},\{(X_{i},Y_{i})\}),$$

and

$$P(\pi^{\prime}\mid\{X^{\prime}_{j}\},\{(X_{i},Y_{i})\})\propto P(\pi^{\prime})\cdot\int\prod_{i}K_{Y_{i}}(X_{i};\theta)\cdot\prod_{j}\sum_{y}\pi^{\prime}_{y}K_{y}(X^{\prime}_{j};\theta)\,\mathrm{d}P(\theta).\tag{2}$$

The posterior P(π | {Yi}) is analytically tractable when a Dirichlet prior P(π) is used, so the difficulty in quantification lies in finding P(π′ | {X′j}, {(Xi, Yi)}). If θ is of moderate dimension, this distribution can be approximated using Markov chain Monte Carlo (MCMC) algorithms (Betancourt, 2017) by jointly sampling π′ and θ from the posterior P(π′, θ | {(Xi, Yi)}, {X′j}) and retaining only the π′ component. However, in complex problems involving high-dimensional θ variables and large sample sizes N and N′, MCMC methods become computationally expensive, which may limit their applicability (Betancourt, 2015; Bardenet et al., 2017; Izmailov et al., 2021). Moreover, if the parametric kernels Ky(·; θ) are misspecified, which is arguably often the case in high-dimensional problems, the resulting inference may be compromised (Watson & Holmes, 2016; Lyddon et al., 2018); we investigate this issue in Sec. 4.3. Both tractability and robustness to model misspecification can be simultaneously addressed by employing the provided black-box feature extractor f to replace Xi with Ci and X′j with C′j : Lewis et al. (2021) propose to improve robustness to model misspecification in regression models by conditioning on an insufficient summary statistic, rather than on the original data.
In our case, we consider the conditional distribution

$$P(\pi^{\prime}\mid\{C^{\prime}_{j}\},\{(C_{i},Y_{i})\})\propto P(\pi^{\prime})\cdot\int\prod_{i}\tilde{K}_{Y_{i}}(C_{i};\varphi)\cdot\prod_{j}\sum_{y}\pi^{\prime}_{y}\tilde{K}_{y}(C^{\prime}_{j};\varphi)\,\mathrm{d}P(\varphi),\tag{3}$$

where K˜y(·; φ) are distributions on the low-dimensional space C, rather than on the high-dimensional space X, parameterized by a vector φ. Although it is possible to take φ = θ and define K˜(·; φ) to be the pushforward measure of K(·; θ) by the dimension reduction method f, we generally hope that a low-dimensional distribution K˜(·; φ) may require fewer parameters and φ will be of a much lower dimension than θ, making the integral in Eq. 3 more tractable than the one in Eq. 2. Apart from improved tractability, conditioning on a summary statistic may improve the robustness due to the easier specification of low-dimensional distributions K˜(·; φ). Finally, even if the prior probability shift assumption does not hold, i.e., P(X | Y ) is not invariant, the distribution of low-dimensional representations P(C | Y ) may be invariant (Arjovsky et al., 2019), which in the notation of Lipton et al. (2018) corresponds to the weak prior probability shift assumption. On the other hand, conditioning on an insufficient statistic loses information: the trivial approximation C = {1} and f(x) = 1 forgets all available information and results in the posterior being the same as the prior, P(π′ | {Ci, Yi}, {C′j}) = P(π′), even in the limit of infinite data. Although the outlined methodology of approximating the intractable inference with a simpler model given a black-box dimension reduction method f is general, below we analyse the simplest possible model, where C = {1, 2, . . . , K} and f is given by a black-box classifier, or a clustering algorithm, trained on a potentially very different data set.

## 3.1 The Discrete Model

Consider C = {1, 2, . . . , K} and a given black-box function f : X → C.
For example, f can be a miscalibrated classification algorithm trained on an entirely different data set (in particular, it is possible that K ̸= L) or a function assigning points to predefined clusters. If K < L, we are not able to identify Ptest(Y ) based on the outputs of f. However, as we find the full Bayesian posterior, rather than providing a point estimate based on matrix inversion, the posterior distribution on π′ may shrink along specific dimensions, providing an accurate estimate of the class prevalence for several classes. On the other hand, if K ≥ L and there is a strong enough correlation between the outputs of f and the ground-truth labels, the guarantees on methods employing black-box shift classifiers and matrix inversion (Tasche, 2017; Lipton et al., 2018; Vaz et al., 2019) ensure identifiability of the prevalence vector π′ in the large data limit, as we will see in Theorem 3.1. In this case, the model K˜y(·; φ) is particularly simple, independently of the true data-generating process K∗y : as each of the K˜y(·; φ) distributions is supported on the finite set C, they have to be categorical distributions. Namely, φ = (φyk) is a matrix modeling the ground-truth probability table P(C = k | Y = y), and the model will not be misspecified provided that the weak prior probability shift assumption holds and that the prior on φ, π and π′ is positive on the simplices ∆K−1 (for each φy:) and ∆L−1 (for π and π′). The approximate model Mapprox takes the form

$$\pi,\pi^{\prime},\varphi\sim P(\pi,\pi^{\prime},\varphi),$$
$$Y_{i}\mid\pi\sim\mathrm{Categorical}(L,\pi),\qquad C_{i}\mid Y_{i},\varphi\sim\mathrm{Categorical}(K,\varphi_{Y_{i}:}),$$
$$Y_{j}^{\prime}\mid\pi^{\prime}\sim\mathrm{Categorical}(L,\pi^{\prime}),\qquad C_{j}^{\prime}\mid Y_{j}^{\prime},\varphi\sim\mathrm{Categorical}(K,\varphi_{Y_{j}^{\prime}:}).$$

We do not put specific requirements on the prior P(π, π′, φ) other than being positive on the probability simplices: for example, the Dirichlet distribution or the logistic normal distribution can be used.
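A quick way to get intuition for the generative process above is to simulate it. The sketch below draws one synthetic data set from the discrete model using uniform Dirichlet priors, which is merely an illustrative choice; the function and variable names are assumptions, not notation from the paper.

```python
import numpy as np

def simulate_discrete_model(N, N_prime, L, K, rng):
    """Draw one synthetic data set from the discrete model, using uniform
    Dirichlet priors on pi, pi', and each row phi_y (an illustrative choice)."""
    pi = rng.dirichlet(np.ones(L))
    pi_prime = rng.dirichlet(np.ones(L))
    phi = rng.dirichlet(np.ones(K), size=L)   # phi[y, k] = P(C = k | Y = y)
    y = rng.choice(L, size=N, p=pi)                      # labeled set
    c = np.array([rng.choice(K, p=phi[yi]) for yi in y])
    y_prime = rng.choice(L, size=N_prime, p=pi_prime)    # latent test labels
    c_prime = np.array([rng.choice(K, p=phi[yj]) for yj in y_prime])
    return pi, pi_prime, phi, y, c, c_prime

rng = np.random.default_rng(0)
pi, pi_prime, phi, y, c, c_prime = simulate_discrete_model(
    200, 300, L=3, K=4, rng=rng)
```

Note that only y, c, and c_prime would be observed in practice; pi_prime and phi are exactly the latent quantities the posterior is meant to recover.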
If precise problem-specific information is not available, we recommend using weakly informative prior distributions (Gelman et al., 2013, Sec. 2.9), such as the uniform prior over each simplex. However, we note that if Dirichlet priors are used, the model conceptually resembles a very low-dimensional variant of latent Dirichlet allocation (Pritchard et al., 2000; Blei et al., 2003), with the observed r.v. Yi and Ci providing information on the φ matrix. In Section 3.2 we will show how to construct a scalable sufficient statistic and perform efficient inference using Hamiltonian Markov chain Monte Carlo methods (Betancourt, 2017). Although for K < L the model is not identifiable, for K ≥ L it shares asymptotic properties similar to the alternatives based on matrix inversion. In Appendix C we prove the following result:

Theorem 3.1. *Assume that:*

1. *The weak prior probability shift assumption holds, i.e., Ptrain(C | Y ) = Ptest(C | Y ) is invariant between the populations.*
2. *The ground-truth matrix φ∗ = (P(C = k | Y = y))yk is of rank L and all its entries are strictly positive.*
3. *The ground-truth prevalence vectors π∗ = Ptrain(Y ) and π′∗ = Ptest(Y ) have only strictly positive entries.*
4. *The prior P(π, π′, φ) is continuous and strictly positive on the whole space ∆L−1 × ∆L−1 × ∆K−1 × · · · × ∆K−1.*

*Then, for every δ > 0 and ε > 0, there exist N and N′ large enough such that with probability at least 1 − δ the* maximum a posteriori *estimate (π̂, π̂′, φ̂) is in the ε-neighborhood of the true parameter values (π∗, π′∗, φ∗).*

Compared to the traditional approaches, we do not explicitly invert the matrix P(C | Y ) (modeled with φ), as any degeneracy is simply reflected in the posterior, showing that we did not learn anything new about the prevalence of some classes. However, if the full-rank condition holds, the *maximum a posteriori* estimate asymptotically recovers the true parameters.
This result is similar to the standard results on methods employing matrix inversion, which typically assume conditions 1–3. As an additional condition, we require that the prior is positive on the whole space, which ensures that the ground-truth parameters lie in a region with positive density. This result is conceptually similar to the classical Bernstein–von Mises theorem linking Bayesian and frequentist inference in the large data limit, and in particular depends on the assumption that the model is not misspecified. However, we stress that in Bayesian statistics one should use the whole posterior distribution, rather than relying on a single point estimate. In Appendix C.1 we provide a further discussion of this point.

## 3.2 Fast Inference In The Discrete Model

One advantage of methods employing black-box classifiers and matrix inversion over likelihood-based methods is that they require O(N + N′) computation time to preprocess the data, after which the estimate is generated by matrix inversion, which is polynomial in K and L. Likelihood-based methods, however, generally require at least O(N′) operations per likelihood evaluation. As such, these methods may not scale to large data sets or to settings where extensive resampling methods, such as the bootstrap (Efron, 1979; Tasche, 2019), are required. For the discrete Bayesian model, however, it is possible to preprocess the data in O(N + N′) time and then evaluate the likelihood and its gradient in polynomial time in terms of K and L, without any further dependence on N or N′. In this section we show how to construct a sufficient statistic for π, π′, and φ whose size is independent of N and N′ and which allows one to perform efficient inference with existing sampling methods. Define a K-tuple (N′k)k∈C of r.v. summarizing the unlabeled data set by N′k = |{j ∈ {1, . . . , N′} : C′j = k}|, which can be constructed in O(K) memory and O(N′) time. Then, for each y ∈ Y, we define a K-tuple of r.v.
(Fyk)k∈C, such that Fyk = |{i ∈ {1, . . . , N} : Yi = y and Ci = k}|, which requires O(LK) memory and O(N) time. Finally, we define an L-tuple of r.v. (Ny)y∈Y by Ny = Fy1 + · · · + FyK. In Appendix B we prove that the likelihood P({Yi, Ci}, {C′j} | π, π′, φ) is proportional to the likelihood P((Ny), (N′k), (Fyk) | π, π′, φ) in a smaller model, Msmall:

$$
\begin{aligned}
(N_y) \mid \pi &\sim \mathrm{Multinomial}(N, \pi), \\
(F_{y:}) \mid N_y, \varphi &\sim \mathrm{Multinomial}(N_y, \varphi_{y:}), \\
(N'_k) \mid \pi', \varphi &\sim \mathrm{Multinomial}(N', \varphi^T \pi').
\end{aligned}
$$

Hence, by the factorization theorem of Halmos & Savage (1949), we have constructed a sufficient statistic for the inference of π, π′, and φ whose size is independent of N and N′. In turn, we can use the likelihood of Msmall (rather than Mapprox) to sample π, π′, and φ from the posterior, allowing us to perform each likelihood evaluation in O(KL), rather than O(N + N′), time. Moreover, the gradient of the likelihood is available, so we can use any of the existing Hamiltonian Markov chain Monte Carlo algorithms (Betancourt, 2017; Hoffman & Gelman, 2014).

## 4 Experimental Results

![7_image_0.png](7_image_0.png)

Figure 2: Uncertainty of the prevalence estimates in the nearly non-identifiable model. Samples from the proposed Bayesian posterior quantify uncertainty better than bootstrapping point estimators based on matrix inversion.

We evaluate the proposed method in four aspects. In Sec. 4.1 we analyze the benefits of using the Bayesian approach, rather than matrix inversion, in problems which are identifiable but where the matrix P(C | Y ) has a large condition number, requiring a large number of samples to be estimated accurately. In Sec. 4.2 we compare the posterior mean with the existing point prediction methods employing black-box classifiers using extensive simulated data sets. In Sec.
4.3 we investigate how the posterior in the approximate model, P(π′ | {Yi, Ci}, {C′j}), compares to the posterior P(π′ | {Yi, Xi}, {X′j}) with properly specified, as well as misspecified, generative models. Finally, in Sec. 4.4 we present an application of quantification methods to the problem of estimating cell type prevalence from single-cell RNA sequencing data. In all settings, we assume no specific knowledge of the problem (which could be used for principled prior elicitation) and use uniform distributions as the priors for the π, π′, and φy: vectors.

## 4.1 Tighter Estimation For Hard-To-Identify Model

Bayesian inference does not require an explicit inversion of the P(C | Y ) matrix. We therefore hypothesise that it may be preferable in cases where P(C | Y ) has a large condition number. Hence, in this section we consider a case with L = K = 3 and a given black-box classifier which can only weakly distinguish between classes 2 and 3, i.e., the ground-truth matrix P(C | Y ) is given by

$$
\varphi^* = (\varphi^*_{yk}) = \begin{pmatrix} 0.96 & 0.02 & 0.02 \\ 0.02 & 0.50 & 0.48 \\ 0.02 & 0.48 & 0.50 \end{pmatrix}.
$$

Although the matrix is full-rank (and asymptotic identifiability for all methods employing black-box classifiers holds), access to only a finite sample may limit the practical ability to accurately estimate the prevalence. We simulated a data set with N = N′ = 1000 data points and ground-truth prevalence vectors π∗ = (1/3, 1/3, 1/3) and π′∗ = (0.5, 0.35, 0.15). In Fig. 2 we plot posterior samples from the proposed Bayesian model together with the bootstrapped (Efron, 1979) predictions of three methods employing black-box classifiers and performing explicit inversion: the restricted and unrestricted invariant ratio estimators (RIR and UIR, respectively; Vaz et al. (2019)) and the black-box shift estimator of Lipton et al. (2018) (BBS).
We used S = 100 bootstrap samples drawn with the stratified bootstrapping procedure introduced for quantification problems by Tasche (2019); in Appendix E we additionally reproduce this experiment varying the sampled data sets. We see that the Bayesian approach identifies the component π′1, leaving large uncertainty on the entries π′2 and π′3. On the other hand, all bootstrapped methods struggle with estimating any of the components. Moreover, UIR and BBS often result in estimates with negative entries. As we further illustrate in Appendix E, this behaviour is typical for low sample sizes, and the bootstrapped predictions become stable for N = N′ = 10⁴ samples. However, the proposed Bayesian approach does not suffer from these low-data issues, appropriately quantifying uncertainty.

![8_image_0.png](8_image_0.png)

Figure 3: Mean absolute error for quantification using simulated categorical black-box classifiers under different scenarios; the proposed Bayesian approach (BAY) is shown in blue.

## 4.2 Simulations Employing The Discrete Model

Although we advocate for quantifying uncertainty around the prediction of π′, quantification is typically posed as a point estimation problem. We therefore compare the point predictions of the three matrix-inversion methods mentioned before (RIR, UIR, and BBS) with the posterior mean in the Bayesian model (BAY) and a simple baseline known as "classify and count" (CC; Tasche (2017) proves that this method may not converge to the ground-truth value even in the high-data limit).

**Experimental design** In this paragraph we describe the experimental design with the default parameter values. They are changed one at a time, as described in the following paragraphs, and for each setting we simulate labeled and unlabeled data sets S = 50 times and, for each method, record the mean absolute error (MAE) between the ground-truth value π′∗ and the point estimate π̂′.
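A minimal sketch of the CC baseline and the MAE metric (the label array below is illustrative):

```python
import numpy as np

def classify_and_count(predicted_labels, L):
    """CC estimate: the empirical frequency of each predicted class."""
    counts = np.bincount(predicted_labels, minlength=L)
    return counts / counts.sum()

def mean_absolute_error(pi_true, pi_hat):
    return np.abs(pi_true - pi_hat).mean()

# Illustrative predicted labels on an unlabeled set with L = 3 classes.
preds = np.array([0, 0, 0, 1, 1, 2, 0, 1, 0, 0])
pi_hat = classify_and_count(preds, L=3)
print(pi_hat)  # [0.6 0.3 0.1]
print(mean_absolute_error(np.array([0.5, 0.35, 0.15]), pi_hat))
```

As Tasche (2017) notes, CC ignores the classifier's misclassification rates and is therefore generally inconsistent under prior probability shift; it is included here only as a baseline.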
Using root mean squared error (RMSE) does not qualitatively change the results (see Appendix E.2). We fix the data set sizes N = 10³ and N′ = 500 and use L = K = 5 as the default setting. The ground-truth prevalence vectors are parameterized as π∗ = (1/L, . . . , 1/L) and

$$
\pi'^*(r) = \left(r, \frac{1-r}{L-1}, \ldots, \frac{1-r}{L-1}\right).
$$

By default, we use r = 0.7. The ground-truth matrix P(C | Y ) is parameterized as φ∗yy = q and φ∗yk = (1 − q)/(K − 1) for k ̸= y when K ≥ L, with the default value q = 0.85. Whenever K < L, we use φ∗yk = 1/K for y ∈ {K + 1, K + 2, . . . , L} to obtain valid probability vectors.

**Changing prevalence** We investigate the impact of increasing the prior probability shift (the difference between π and π′) by changing r = π′1 ∈ {0.5, 0.6, 0.7, 0.8, 0.9} and summarize the results in Fig. 3a. CC is adversely impacted by a strong data shift. The other estimators all perform similarly to one another.

**Changing data set size** We investigate whether the algorithms converge to the ground-truth value in the large data limit. We vary N′ ∈ {10, 50, 100, 500, 10³, 10⁴}. As shown in Fig. 3b, the large data limit appears very similar (except for CC), agreeing with the asymptotic identifiability guarantees for BBS, RIR, and our MAP estimates, although BBS appears slightly less accurate than the others in the low data regime.

**Changing classifier quality** We investigate the impact of classifier quality by varying q ∈ {0.55, 0.65, 0.75, 0.85, 0.95} and show the results in Fig. 3c. All considered methods converge to zero error for high quality, but the convergence of CC is much slower than for the other algorithms.

**Changing the classification granularity** We change K ∈ {2, 3, 5, 7, 9}, creating a setting in which a given black-box classifier, trained on a different data distribution, is still informative about some of the classes, but provides different information. In particular, the CC estimator cannot be used for K ̸= L.
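To make the K < L case concrete, a small hypothetical example (the matrix below is not the φ∗ used in the experiments): with fewer classifier outputs than classes, distinct prevalence vectors can induce exactly the same distribution of C, so no method that only observes C can distinguish them.

```python
import numpy as np

# Hypothetical K = 2 classifier outputs for L = 3 ground-truth classes:
# phi^T is a 2x3 matrix, so rank(phi) <= 2 < L and identifiability is lost.
phi = np.array([
    [0.9, 0.1],  # P(C | Y = 1)
    [0.5, 0.5],  # P(C | Y = 2)
    [0.1, 0.9],  # P(C | Y = 3)
])

pi_a = np.array([0.2, 0.5, 0.3])
pi_b = np.array([0.3, 0.3, 0.4])  # a different prevalence vector

# Both prevalence vectors induce the same observable distribution of C,
# because pi_b - pi_a lies in the null space of phi^T.
assert np.allclose(phi.T @ pi_a, phi.T @ pi_b)
```

The Bayesian posterior reflects this degeneracy by staying wide along the unidentifiable directions, while still shrinking along directions that the classifier outputs do constrain.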
Although the original formulation of BBSE and IR assumes K = L, we proceed with the least-squares solution. Our choice of φ∗ given above guarantees that the classifier for K > L = 5 will contain at least as much information as a classifier with a smaller number of classes. Conversely, for K < L, the information about some of the classes will be insufficient even in the large data regime: it is not possible for the matrix P(C | Y ) to have rank L, and asymptotic consistency does not generally hold. The results are shown in Fig. 3d. While all methods considered (apart from CC) suffer little error for K ≥ L, we note that our model-based approach can still learn something about the classes for which the classifier is informative enough, while the techniques based on matrix inversion are less effective. Additionally, we should stress that the Bayesian approach gives the whole posterior distribution on π′ (which can still appropriately shrink along well-recoverable dimensions).

![9_image_0.png](9_image_0.png)

Figure 4: First two plots: conditional Student distributions P(X | Y ) and the training and test populations. Third plot: posterior samples under four different Bayesian models. Fourth plot: coverage of credible intervals.

**Changing the number of classes** We jointly change L = K ∈ {2, 3, 5, 7, 9}. We plot the results in Fig. 3e. Again, classify and count obtains markedly worse results, with smaller differences between the other methods.

**Model misspecification** Finally, we study whether the considered approaches are robust to breaking the weak prior probability shift assumption: the unlabeled data are sampled according to a different P(C | Y ) distribution, parameterized by q′. The weak prior probability shift assumption corresponds to the setting q′ = q, which is marked with a dashed black line in Fig. 3f.
Although in this case asymptotic identifiability guarantees do not hold, we believe this to be an important case which may occur in practice (when additional distributional shifts are present). We see that the performance of the BBS, IR, and BAY estimates deteriorates for large discrepancies between q and q′. However, for |q − q′| ≤ 0.05, the median error of BBS, IR, and BAY is still arguably tame, so we hope that these methods can be employed even if the prior probability shift assumption is only approximately correct. Note that in the case q′ > q (i.e., the classifier has better predictive accuracy on the unlabeled data set than on the labeled data set, which we think rarely occurs in practice), CC outperforms the other methods.

## 4.3 Uncertainty Assessment In A Misspecified Model Of A Mixture Of Student Distributions

As mentioned in Sec. 3.1, using a black-box function f : X → C to reduce the dimensionality to a set {1, 2, . . . , K} not only improves the tractability of the problem, but also has the potential to make the model more robust to misspecification by replacing the prior probability shift assumption (invariance of P(X | Y )) with its weak version (invariance of P(C | Y )) and by learning the parameters of a categorical discrete distribution, rather than the parameters of a potentially misspecified distribution Ky(X; θ). On the other hand, Mapprox loses information from the problem, so we do not expect it to be as appropriate as a properly specified generative model for P(X | Y ). To investigate the properties of the Mapprox approach, we generated low-dimensional data, so that Bayesian inference in Mtrue is still tractable: we consider a mixture of two Student t-distributions presented in Fig. 4, from which we sample N = N′ = 10³ points. We implemented a Gaussian mixture model and a Student mixture distribution in NumPyro (Phan et al., 2019), setting weakly informative priors on their parameters (see Appendix E.3).
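Generating data from such a mixture can be sketched as follows (the component weights, locations, scales, and degrees of freedom below are illustrative placeholders, not the values used in Fig. 4):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_student_mixture(n, weights, locs, scales, df):
    """Draw n points from a mixture of shifted/scaled Student t components."""
    ys = rng.choice(len(weights), size=n, p=weights)   # latent class labels
    xs = locs[ys] + scales[ys] * rng.standard_t(df, size=n)
    return ys, xs

weights = np.array([0.5, 0.5])   # mixture weights (illustrative)
locs = np.array([-1.0, 1.0])     # component locations (illustrative)
scales = np.array([0.5, 0.5])    # component scales (illustrative)
ys, xs = sample_student_mixture(1000, weights, locs, scales, df=5.0)
```

Fitting a Gaussian mixture to such heavy-tailed data is what produces the misspecification studied in this section.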
For Mapprox, we partitioned the real axis into K bins: (−∞, −4), [−4, a1), [a1, a2), . . . , [aK−3, 4), [4, ∞), with all intervals (apart from the first and the last one) of equal length. In Fig. 4 we see that using 10 bins yields a posterior very similar to that of the well-specified model and that using 5 bins yields a wider posterior, which agrees with the perspective that the discretization results in information loss. However, the misspecified Gaussian mixture model concentrates around a wrong value. We repeated the simulation S = 200 times, excluded the runs with convergence issues (see Appendix E.3), and checked the frequentist coverage of the highest-density credible intervals. The coverage of the discretized models, as well as of the properly specified Student mixture, agrees well with the expected value. However, the credible intervals from the misspecified Gaussian mixture are systematically too narrow. As we demonstrate in Appendix E.3, the misspecification is less problematic for lower sample sizes and more problematic for large sample sizes. We conclude that, in the considered setting, conditioning on an insufficient statistic, as proposed by Lewis et al. (2021) for regression problems, is a viable solution also for quantification. However, it is not as data-efficient as a properly specified generative model if one is available. We did not compare the obtained posteriors in high-dimensional problems due to the high computational costs associated with running MCMC on high-dimensional generative models.

## 4.4 Prevalence Estimation In Real World Data

![10_image_0.png](10_image_0.png)

Figure 5: First two panels: principal components of the feature vectors X colored by the biopsy sample and cell types. Third panel: inferred cell type proportions. Fourth panel: inferred cell type proportions with vascular cells and neurons merged into one type.

As a more practical application of the proposed quantification method, we consider single-cell RNA sequencing data.
Darmanis et al. (2017) collected biopsy specimens from four glioblastoma multiforme tumors, corresponding to four different populations of cells. Each cell belongs to one of six healthy types (astrocyte, neuron, oligodendrocyte, OPC, myeloid, or vascular) or is a malignant cancer cell, yielding L = 7 distinct cell types in total. Each cell is sequenced to yield a gene expression vector X with 23,368 entries. Based on the gene expression vectors and known marker genes, the cells have been assigned to the considered cell types; we treat the provided annotations as ground-truth labels. In Fig. 5 we plotted the first two principal components of the whole data set to visualise the distribution of features within each sample and in relation to the cell type. We note that although Darmanis et al. (2017) used TPM normalization (Zhao et al., 2021) to normalize the features X and the cells seem to roughly cluster within cell types, the sample-specific effects are still visible (see also Appendix E.4), so that the prior probability shift assumption, of invariant P(X | Y ), is violated. We consider a semi-realistic scenario in which one wants to estimate cell prevalence in an automated fashion employing a given black-box cell type classifier. We treat the first two samples as an auxiliary cell atlas on which a generic black-box cell type classifier was trained (we use a random forest), the third sample as the available labeled data set, {(Xi, Yi)}, and the fourth sample as the unlabeled data set, {X′j}, for the quantification problem. Although the prior probability shift assumption is violated, we can hope that the labels predicted by the random forest are more invariant and that methods working under the weak prior probability shift assumption may still yield reasonable estimates.
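A hedged sketch of the estimation step of this pipeline (a simplified stand-in, not the authors' implementation): estimate φ on the labeled sample from the classifier's hard predicted labels, then recover π′ from the predicted-label frequencies on the unlabeled sample by least squares, in the spirit of black-box shift estimation.

```python
import numpy as np

def prevalence_from_predicted_labels(Y_labeled, C_labeled, C_unlabeled, L, K):
    """Estimate the test prevalence pi' from hard predicted labels.

    Estimates phi[y, k] = P(C = k | Y = y) on the labeled sample and solves
    phi^T pi' = p(C) on the unlabeled sample in the least-squares sense.
    (A real implementation should also project the result onto the simplex.)
    """
    phi_hat = np.zeros((L, K))
    for y in range(L):
        mask = Y_labeled == y
        phi_hat[y] = np.bincount(C_labeled[mask], minlength=K) / max(mask.sum(), 1)
    p_c = np.bincount(C_unlabeled, minlength=K) / len(C_unlabeled)
    pi_prime_hat, *_ = np.linalg.lstsq(phi_hat.T, p_c, rcond=None)
    return pi_prime_hat
```

With a perfectly accurate classifier this reduces to classify and count; with a noisy classifier the inversion corrects for the misclassification rates, provided the estimated matrix remains (left-)invertible, which is exactly what fails below for the two rarest cell types.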
We consider quantification methods using predicted labels (the posterior mean in the proposed model, BBS, RIR, and CC) as well as two methods accepting probabilities, rather than labels: Expectation-Maximization (EM) and a soft variant of the restricted invariant ratio estimator (RIR (soft)). We do not use the recalibration techniques proposed by Alexandari et al. (2020), instead using the vanilla probabilities provided by the random forest. Generally, the posterior mean captures the true cell type prevalence well, showing large uncertainty around the vascular cells and neurons. Similar performance is obtained by the simple CC baseline, owing to the good performance of the classifier. Interestingly, the methods using estimated probabilities (RIR (soft) and EM) obtained the worst performance. We hypothesise that this is due to the violated prior probability shift assumption and that using discrete labels, rather than continuous probabilities, improves the robustness. As the methods employing matrix inversion and a black-box classifier (BBS and RIR) failed due to non-invertibility of the estimated matrices, we then decided to merge the two least prevalent cell types occurring in the data set (vascular cells and neurons) into a single "Other" class. In that case, RIR and BBS obtain performance on par with the posterior mean and CC.

## 5 Discussion

The presented approach generalizes the point estimates provided by black-box shift estimators and invariant ratio estimators to the Bayesian inference setting. This allows one to *quantify uncertainty* and *use existing knowledge* about the problem by prior specification. Moreover, by the construction of the sufficient statistic, our approach remains tractable even in the large-data limit (for either data set considered). In all our experiments, the suggested estimator obtained at least as good performance as the existing methods, outperforming them in the *K < L* case, where the number of modeled classes differs from the "true" number of classes.
Compared to point estimates with asymptotic guarantees, our approach "knows what it does not know", meaning that the posterior is meaningful even if the matrix P(C | Y ) is not (left-)invertible, and it sharpens only for the prevalence values of those classes for which the feature extractor f is sufficiently informative. Moreover, the proposed approach aligns with the approaches employing black-box feature extractors f, which can be trained on a different data set and may not be calibrated properly. This is particularly useful when a hard, fully black-box classifier is given without the possibility of retraining it, which is an increasingly common theme in modern AI applications, where classifiers are often large models performing sophisticated processing, and are often proprietary and only available through APIs. However, the method we introduce is not free from challenges. As in all Bayesian inference, care is required regarding the modeling assumptions: whether the discrete model is applicable and what prior should be used. In particular, even the weak prior probability shift assumption may not hold (e.g., if the labeled and unlabeled data sets were collected under radically different conditions, or the labeled and unlabeled data sets have different classes Y). Additionally, Bayesian inference often carries a model choice problem, and different choices for K or the discretization method f may yield different posteriors on the prevalence vector π′, especially in the low data regime. If the generative model P(X | Y, θ) is well-specified and tractable, we suggest using it instead of an approximation P(C | Y, φ). If it is not tractable, we suggest using the available classifier with K classes, observing the quality of the φ = P(C | Y ) matrix, and perhaps training one's own classifier on some hold-out data set. The experiments performed in Section 4.2 and Theorem 3 of Lipton et al.
(2018) suggest that improving the classification accuracy results in more accurate prevalence quantification. Finally, we rely on Markov chain Monte Carlo sampling schemes, which explore an O(KL)-dimensional parameter space. Although in this manuscript we focus on problems with a moderate number of classes, there exist complex data sets, such as ImageNet (Deng et al., 2009), with thousands of categories. We note that this can pose additional computational challenges for the MCMC samplers (Betancourt, 2015; Bardenet et al., 2017; Izmailov et al., 2021), which we do not investigate in this work.

## Statement Of Broader Impact

This article discusses a Bayesian method of quantifying the prevalence of different classes in an unlabeled data set. We note that in general the parameter posterior conditioned on the full data view X can be different from the posterior conditioned on some representation C = f(X); in cases where a reliable model P(X | Y ) is available and the inference is tractable, we suggest using it instead of our discretized method. Secondly, the model need not apply: perhaps prior probability shift is not the only distribution shift occurring in the problem, or the data may not be exchangeable. In epidemiology, for example, outbreaks induce correlations between the health statuses of different people that can easily extend to the sampling. Finally, even if all the assumptions hold, recalibrating a probabilistic classifier with quantification may have undesirable consequences regarding fairness (Plecko & Bareinboim, 2022).

## Code Availability And Reproducibility

We ensured reproducibility by designing all experiments as Snakemake workflows (Mölder et al., 2021). The code and workflows used to run the experiments and generate the figures are available in the https://github.com/pawel-czyz/labelshift repository.

## Acknowledgments

We would like to thank Ian Wright for valuable comments on the manuscript. This publication was supported by GitHub, Inc.
and ETH AI Center. We would like to thank both institutions. ## References Amr M. Alexandari, Anshul Kundaje, and Avanti Shrikumar. Maximum likelihood with bias-corrected calibration is hard-to-beat at label shift adaptation. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant Risk Minimization. arXiv e-prints, art. arXiv:1907.02893, Jul 2019. Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, and Animashree Anandkumar. Regularized learning for domain adaptation under label shifts. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/ forum?id=rJl0r3R9KX. Rémi Bardenet, Arnaud Doucet, and Chris Holmes. On Markov chain Monte Carlo methods for tall data. Journal of Machine Learning Research, 18(47):1–43, 2017. URL http://jmlr.org/papers/v18/15-205. html. Antonio Bella, Cesar Ferri, José Hernández-Orallo, and María José Ramírez-Quintana. Quantification via probability estimators. In *2010 IEEE International Conference on Data Mining*, pp. 737–742, 2010. doi: 10.1109/ICDM.2010.75. J.M. Bernardo and A.F.M. Smith. *Bayesian Theory*. Wiley Series in Probability and Statistics. Wiley, 1994. ISBN 9780471494645. Michael Betancourt. The fundamental incompatibility of scalable Hamiltonian Monte Carlo and naive data subsampling. In Francis Bach and David Blei (eds.), *Proceedings of the 32nd International Conference on* Machine Learning, volume 37 of *Proceedings of Machine Learning Research*, pp. 533–540, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/betancourt15.html. Michael Betancourt. A conceptual introduction to Hamiltonian Monte Carlo, 2017. URL https://arxiv. org/abs/1701.02434. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. *J. Mach. Learn. Res.*, 3 (null):993–1022, mar 2003. 
ISSN 1532-4435. Mathieu Blondel, Akinori Fujino, and Naonori Ueda. Large-scale multiclass support vector machine training via euclidean projection onto the simplex. In *2014 22nd International Conference on Pattern Recognition*, pp. 1289–1294, 2014. doi: 10.1109/ICPR.2014.231. Daniel C. Castro, Ian Walker, and Ben Glocker. Causality matters in medical imaging. *Nature Communications*, 11:3673ff., 7 2020. Spyros Darmanis, Steven A. Sloan, Derek Croote, Marco Mignardi, Sophia Chernikova, Peyman Samghababi, Ye Zhang, Norma Neff, Mark Kowarsky, Christine Caneda, Gordon Li, Steven D. Chang, Ian David Connolly, Yingmei Li, Ben A. Barres, Melanie Hayden Gephart, and Stephen R. Quake. Single-cell rnaseq analysis of infiltrating neoplastic cells at the migrating front of human glioblastoma. *Cell Reports*, 21(5):1399–1410, 2017. ISSN 2211-1247. doi: https://doi.org/10.1016/j.celrep.2017.10.030. URL https: //www.sciencedirect.com/science/article/pii/S2211124717314626. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. B. Efron. Bootstrap methods: Another look at the jackknife. *The Annals of Statistics*, 7(1):1 - 26, 1979. doi: 10.1214/aos/1176344552. URL https://doi.org/10.1214/aos/1176344552. Valerii Fedorov, Frank Mannino, and Rongmei Zhang. Consequences of dichotomization. *Pharmaceutical* Statistics, 8(1):50–61, 2009. doi: https://doi.org/10.1002/pst.331. URL https://onlinelibrary.wiley. com/doi/abs/10.1002/pst.331. George Forman. Quantifying counts and costs via classification. *Data Mining and Knowledge Discovery*, 17: 164–206, October 2008. Saurabh Garg, Yifan Wu, Sivaraman Balakrishnan, and Zachary Lipton. A unified view of label shift estimation. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 3290–3300. 
Curran Associates, Inc., 2020. A. Gelman, J.B. Carlin, H.S. Stern, D.B. Dunson, A. Vehtari, and D.B. Rubin. Bayesian Data Analysis, Third Edition. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis, 2013. ISBN 9781439840955. URL https://books.google.pl/books?id=ZXL6AQAAQBAJ. Andrew Gelman, Aki Vehtari, Daniel Simpson, Charles C. Margossian, Bob Carpenter, Yuling Yao, Lauren Kennedy, Jonah Gabry, Paul-Christian Bürkner, and Martin Modrák. Bayesian workflow, 2020. Pablo González, Alberto Castaño, Nitesh Chawla, and Juan del Coz. A review on quantification learning. ACM Computing Surveys, 50:1–40, 09 2017. doi: 10.1145/3117807. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In *Proceedings of the 34th International Conference on Machine Learning - Volume 70*, ICML'17, pp. 1321–1330. JMLR.org, 2017. Paul R. Halmos and L. J. Savage. Application of the Radon-Nikodym Theorem to the Theory of Sufficient Statistics. *The Annals of Mathematical Statistics*, 20(2):225 - 241, 1949. doi: 10.1214/aoms/1177730032. URL https://doi.org/10.1214/aoms/1177730032. F.E. Harrell. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis. Springer Series in Statistics. Springer International Publishing, 2015. ISBN 9783319194257. URL https://books.google.pl/books?id=94RgCgAAQBAJ. Matthew D. Hoffman and Andrew Gelman. The no-u-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. *Journal of Machine Learning Research*, 15(47):1593–1623, 2014. URL http: //jmlr.org/papers/v15/hoffman14a.html. Pavel Izmailov, Sharad Vikram, Matthew D Hoffman, and Andrew Gordon Gordon Wilson. What are Bayesian neural network posteriors really like? In Marina Meila and Tong Zhang (eds.), *Proceedings of* the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 4629–4640. PMLR, 18–24 Jul 2021. 
URL https://proceedings.mlr.press/v139/ izmailov21a.html. Nikolay Karpov, Alexander Porshnev, and Kirill Rudakov. NRU-HSE at SemEval-2016 task 4: Comparative analysis of two iterative methods using quantification library. In *Proceedings of the 10th International* Workshop on Semantic Evaluation (SemEval-2016), pp. 171–177, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/S16-1025. URL https://www.aclweb.org/ anthology/S16-1025. Achim Klenke. *Probability Theory: A Comprehensive Course*. Springer, 2014. John K. Kruschke. Bayesian analysis reporting guidelines. *Nature Human Behaviour*, 5(10):1282–1291, Oct 2021. ISSN 2397-3374. doi: 10.1038/s41562-021-01177-7. URL https://doi.org/10.1038/ s41562-021-01177-7. John R. Lewis, Steven N. MacEachern, and Yoonkyung Lee. Bayesian Restricted Likelihood Methods: Conditioning on Insufficient Statistics in Bayesian Regression (with Discussion). *Bayesian Analysis*, 16 (4):1393 - 2854, 2021. doi: 10.1214/21-BA1257. URL https://doi.org/10.1214/21-BA1257. Zachary C. Lipton, Yu-Xiang Wang, and Alexander J. Smola. Detecting and correcting for label shift with black box predictors. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of *Proceedings of Machine Learning Research*, pp. 3128–3136. PMLR, 2018. Simon Lyddon, Stephen Walker, and Chris C Holmes. Nonparametric learning from Bayesian models with randomized objective functions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. Alejandro Moreo, Andrea Esuli, and Fabrizio Sebastiani. QuaPy: a Python-based framework for quantification. In *Proceedings of the 30th ACM International Conference on Information & Knowledge Management*, pp. 4534–4543, 2021. 
F Mölder, KP Jablonski, B Letcher, MB Hall, CH Tomkins-Tinch, V Sochat, J Forster, S Lee, SO Twardziok, A Kanitz, A Wilm, M Holtgrewe, S Rahmann, S Nahnsen, and J Köster. Sustainable data analysis with Snakemake. *F1000Research*, 10(33), 2021. doi: 10.12688/f1000research.29032.1. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. C. Peters and W.A Coberly. The numerical evaluation of the maximum-likelihood estimate of mixture proportions. *Communications in Statistics - Theory and Methods*, 5:1127–1135, 1976. Du Phan, Neeraj Pradhan, and Martin Jankowiak. Composable effects for flexible and accelerated probabilistic programming in numpyro. *arXiv preprint arXiv:1912.11554*, 2019. Drago Plecko and Elias Bareinboim. Causal fairness analysis, 2022. Jonathan K Pritchard, Matthew Stephens, and Peter Donnelly. Inference of Population Structure Using Multilocus Genotype Data. *Genetics*, 155(2):945–959, 06 2000. ISSN 1943-2631. doi: 10.1093/genetics/ 155.2.945. URL https://doi.org/10.1093/genetics/155.2.945. Marco Saerens, Patrice Latinne, and Christine Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. *Neural Computation*, 14:14–21, 2001. Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning. In *Proceedings of the 29th International Coference on International Conference* on Machine Learning, ICML'12, pp. 459–466, Madison, WI, USA, 2012. Omnipress. ISBN 9781450312851. Shai Shalev-Shwartz and Yoram Singer. Efficient learning of label ranking by soft projections onto polyhedra. J. Mach. Learn. Res., 7:1567–1599, dec 2006. ISSN 1532-4435. Amos Storkey. 
When training and test sets are different: Characterizing learning transfer. In Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence (eds.), *Dataset Shift in Machine Learning*, chapter 1, pp. 3–28. The MIT Press, 2009. ISBN 0262170051.

Dirk Tasche. Fisher consistency for prior probability shift. *Journal of Machine Learning Research*, 18(95):1–32, 2017. URL http://jmlr.org/papers/v18/17-048.html.

Dirk Tasche. Confidence intervals for class prevalences under prior probability shift. *Machine Learning and Knowledge Extraction*, 1(3):805–831, 2019. ISSN 2504-4990. doi: 10.3390/make1030047. URL https://www.mdpi.com/2504-4990/1/3/47.

Afonso Fernandes Vaz, Rafael Izbicki, and Rafael Bassi Stern. Quantification under prior probability shift: the ratio estimator and its extensions. *Journal of Machine Learning Research*, 20(79):1–33, 2019. URL http://jmlr.org/papers/v20/18-456.html.

James Watson and Chris Holmes. Approximate Models and Robust Decisions. *Statistical Science*, 31(4):465–489, 2016. doi: 10.1214/16-STS592. URL https://doi.org/10.1214/16-STS592.

Jack Xue and Gary Weiss. Quantification and semi-supervised classification methods for handling changes in class distribution. In *Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 897–906, 01 2009. doi: 10.1145/1557019.1557117.

Andy B. Yoo, Morris A. Jette, and Mark Grondona. Slurm: Simple Linux utility for resource management. In Dror Feitelson, Larry Rudolph, and Uwe Schwiegelshohn (eds.), *Job Scheduling Strategies for Parallel Processing*, pp. 44–60, Berlin, Heidelberg, 2003. Springer Berlin Heidelberg. ISBN 978-3-540-39727-4.

Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In *Proceedings of the 30th International Conference on Machine Learning*, ICML'13, pp. III-819–III-827. JMLR.org, 2013.
Yingdong Zhao, Ming-Chung Li, Mariam M. Konaté, Li Chen, Biswajit Das, Chris Karlovich, P. Mickey Williams, Yvonne A. Evrard, James H. Doroshow, and Lisa M. McShane. TPM, FPKM, or normalized counts? A comparative study of quantification measures for the analysis of RNA-seq data from the NCI patient-derived models repository. *Journal of Translational Medicine*, 19(1):269, Jun 2021. ISSN 1479-5876. doi: 10.1186/s12967-021-02936-w. URL https://doi.org/10.1186/s12967-021-02936-w.

Albert Ziegler and Paweł Czyż. Unsupervised recalibration, 2020. URL https://arxiv.org/abs/1908.09157.

## A Strict Linear Independence Of Measures

We use the following definition:

Definition A.1. *Let* K1, . . . , KL *be probability measures on a measurable space* X. *We say that they are* strictly linearly independent *if for every non-zero* λ ∈ R^L *there exists a measurable set* Aλ *such that* λ1K1(Aλ) + · · · + λLKL(Aλ) ≠ 0.

This definition is especially useful for proving identifiability of a well-specified mixture model:

Theorem A.2. *Let* K1, . . . , KL *be strictly linearly independent probability measures on a space* X. *Then, for every two vectors of mixture weights* π, π′ ∈ ∆^{L−1} *such that*

$$\pi_{1}K_{1}+\cdots+\pi_{L}K_{L}=\pi_{1}^{\prime}K_{1}+\cdots+\pi_{L}^{\prime}K_{L},$$

*it follows that* π = π′.

Proof. Consider the difference λ = π − π′ ∈ R^L. If it were not the zero vector, then for some set Aλ we would have π1K1(Aλ) + · · · + πLKL(Aλ) ≠ π′1K1(Aλ) + · · · + π′LKL(Aλ). Note also that Aλ is of positive measure with respect to at least one of the mixture measures, meaning that for a well-specified model the discrepancy will eventually be visible according to the law of large numbers.

Although the definition above looks different from the original one proposed by Garg et al. (2020), they are essentially equivalent:

Lemma A.3. *Let* µ *be a* σ*-finite measure on* X *such that the Radon–Nikodym derivatives* ky = dKy/dµ *exist. Then, the probability measures* K1, . . .
, KL *are strictly linearly independent if and only if*

$$\int_{\mathcal{X}}\left|\sum_{y\in\mathcal{Y}}\lambda_{y}k_{y}(x)\right|\,\mathrm{d}\mu(x)\neq0$$

*for every non-zero* λ ∈ R^L.

Proof. Consider any λ ≠ 0 and write ν = λ1K1 + · · · + λLKL for the signed measure. Using the standard rules of Radon–Nikodym calculus, the condition on the integral is equivalent to |ν|(X) ≠ 0. If there exists Aλ such that ν(Aλ) ≠ 0, then |ν|(X) ≥ |ν|(Aλ) ≥ |ν(Aλ)| > 0. Conversely, assume that |ν|(X) ≠ 0 and take the Hahn decomposition of X, with X = P ∪ N such that P ∩ N = ∅, ν(P) ≥ 0, and ν(N) ≤ 0. We now define the Hahn–Jordan decomposition of ν into two positive measures, ν⁺(A) = ν(A ∩ P) and ν⁻(A) = −ν(A ∩ N), with the properties ν = ν⁺ − ν⁻ and |ν| = ν⁺ + ν⁻. We conclude that |ν|(X) = ν(P) − ν(N) ≠ 0, so that at least one of the sets P and N can be taken as Aλ.

The above characterization of strict linear independence gives the following result:

Lemma A.4. *Assume that* X *is a standard Borel space and* µ *is a strictly positive measure. Further, assume that the Radon–Nikodym derivatives* k1, . . . , kL *are continuous functions, treated as vectors in the space of all continuous functions* C(X, R). *Then, if* k1, . . . , kL *are linearly independent,* K1, . . . , KL *are strictly linearly independent.*

Proof. Take any λ ≠ 0 and write u = |λ1k1 + · · · + λLkL| ∈ C(X, R). From the linear independence it follows that there exists x0 ∈ X such that u(x0) > 0. We can use the continuity of u to find an open neighborhood A of x0 such that for all x ∈ A we have u(x) > u(x0)/2.
As u is non-negative and µ is strictly positive, we have µ(A) > 0 and

$$\int_{\mathcal{X}}u(x)\,\mathrm{d}\mu(x)\geq\int_{A}u(x)\,\mathrm{d}\mu(x)\geq{\frac{u(x_{0})}{2}}\cdot\mu(A)>0.$$

## B Derivation Of The Sufficient Statistic

Starting from the joint probability

$$P(\pi,\pi',\varphi,\{Y_i,C_i\},\{Y'_j,C'_j\})=P(\pi,\pi',\varphi)\times\prod_{i=1}^N P(C_i\mid\varphi,Y_i)P(Y_i\mid\pi)\times\prod_{j=1}^{N'}P(C'_j\mid\varphi,Y'_j)P(Y'_j\mid\pi'),$$

we need to derive

$$P(\pi,\pi',\varphi\mid\{Y_{i},C_{i}\},\{C'_{j}\})\propto P(\{Y_{i},C_{i}\},\{C'_{j}\}\mid\pi,\pi',\varphi)\,P(\pi,\pi',\varphi).$$

The observed likelihood is given by marginalization over the Y′j variables:

$$P(\{Y_{i},C_{i}\},\{C'_{j}\}\mid\pi,\pi',\varphi)=\underbrace{\prod_{i=1}^{N}P(C_{i}\mid\varphi,Y_{i})P(Y_{i}\mid\pi)}_{A}\times\underbrace{\sum_{l_{1}\in\mathcal{Y}}\cdots\sum_{l_{N'}\in\mathcal{Y}}\prod_{j=1}^{N'}P(C'_{j}\mid\varphi,Y'_{j}=l_{j})P(Y'_{j}=l_{j}\mid\pi')}_{B}.$$

Each of these terms will be calculated separately. We start with

$$A:=\prod_{i=1}^{N}P(C_{i}=c_{i}\mid\varphi,Y_{i}=y_{i})P(Y_{i}=y_{i}\mid\pi)=\underbrace{\prod_{i=1}^{N}P(C_{i}=c_{i}\mid\varphi,Y_{i}=y_{i})}_{A_{1}}\times\underbrace{\prod_{i=1}^{N}P(Y_{i}=y_{i}\mid\pi)}_{A_{2}}.$$

The term A2 is simple to calculate: as P(Yi = yi | π) = πyi, we have

$$A_{2}=\prod_{i=1}^{N}\pi_{y_{i}}=\prod_{l=1}^{L}(\pi_{l})^{n_{l}},$$

where n_l is the number of i ∈ {1, . . . , N} such that yi = l. In particular, up to a factor N!/(n1! · · · nL!), this is the PMF of the multinomial distribution parametrized by π evaluated at (n1, . . . , nL). To calculate A1 we need to observe that P(Ci = k | φ, Yi = l) = φlk. Hence,

$$A_{1}=\prod_{i=1}^{N}P(C_{i}=c_{i}\mid\varphi,Y_{i}=y_{i})=\prod_{l=1}^{L}\prod_{k=1}^{K}(\varphi_{l k})^{f_{l k}},$$

where flk is the number of i ∈ {1, . . . , N} such that yi = l and ci = k. Observe that nl = fl1 + · · · + flK.
In particular, up to the factor

$$\prod_{l=1}^{L}{\frac{n_{l}!}{f_{l1}!\cdots f_{l K}!}},$$

this corresponds to the product of PMFs of L multinomial distributions parametrized by the probabilities φ_{l,:} evaluated at f_{l,:}. Recall that

$$B:=\sum_{l_{N'}\in{\mathcal{Y}}}\cdots\sum_{l_{1}\in{\mathcal{Y}}}\prod_{j=1}^{N'}P(C_{j}^{\prime}=c_{j}^{\prime}\mid\varphi,Y_{j}^{\prime}=l_{j})P(Y_{j}^{\prime}=l_{j}\mid\pi^{\prime}).$$

We can use the sum-product identity

$$\sum_{l_{N'}\in{\mathcal{Y}}}\cdots\sum_{l_{1}\in{\mathcal{Y}}}\prod_{j=1}^{N'}f_{j}(l_{j})=\prod_{j=1}^{N'}\sum_{l\in{\mathcal{Y}}}f_{j}(l)$$

to reduce:

$$B=\prod_{j=1}^{N'}\sum_{l\in{\mathcal{Y}}}P(C_{j}^{\prime}=c_{j}^{\prime}\mid\varphi,Y_{j}^{\prime}=l)P(Y_{j}^{\prime}=l\mid\pi^{\prime}).$$

Because both C′j and Y′j are parametrized with categorical distributions, we have

$$P(C_{j}^{\prime}=k\mid\varphi,Y_{j}^{\prime}=l)=\varphi_{l k}\qquad\text{and}\qquad P(Y_{j}^{\prime}=l\mid\pi^{\prime})=\pi_{l}^{\prime},$$

so

$$\sum_{l\in{\mathcal{Y}}}P(C_{j}^{\prime}=k\mid\varphi,Y_{j}^{\prime}=l)P(Y_{j}^{\prime}=l\mid\pi^{\prime})=(\varphi^{T}\pi^{\prime})_{k}.$$

Hence,

$$B=\prod_{j=1}^{N'}(\varphi^{T}\pi^{\prime})_{c_{j}^{\prime}}=\prod_{k=1}^{K}\left((\varphi^{T}\pi^{\prime})_{k}\right)^{n_{k}^{\prime}},$$

where n′k is the number of j ∈ {1, . . . , N′} such that c′j = k. In particular, up to a factor of N′!/(n′1! · · · n′K!), this is the PMF of the multinomial distribution parametrized by the probabilities φᵀπ′ evaluated at (n′1, . . . , n′K).

## C Proof Of Asymptotic Identifiability

In this section we prove Theorem 3.1. We first need to establish two simple lemmas regarding approximate left inverses:

Lemma C.1. *Choose any norms on the space of linear maps* R^L → R^K *and* R^K → R^L. *Suppose* K ≥ L *and that* A0 : R^L → R^K *is of full rank* L.
*Then, for every* ε > 0 *there exists* δ > 0 *such that if* A: R^L → R^K *is any matrix with*

$$\|A-A_{0}\|<\delta,$$

*then the left inverse* A^{-1} := (AᵀA)^{-1}Aᵀ *exists and*

$$\|A^{-1}-A_{0}^{-1}\|<\varepsilon.$$

Proof. First note that indeed the choice of norms does not matter, as all norms on finite-dimensional vector spaces are equivalent. Then, observe that the rank is a lower semi-continuous function, so that for sufficiently small δ the map A will be of rank L as well. Finally, it is clear that the chosen formula for the left inverse is continuous as a function of A.

Lemma C.2. *If* K ≥ L *and the matrix* A0 : R^L → R^K *is of full rank* L, *then for every* ε > 0 *there exist numbers* δ > 0 *and* ν > 0 *such that for every linear mapping* A: R^L → R^K *and vector* v ∈ R^L, *if*

$$\|A-A_{0}\|<\delta\qquad\text{and}\qquad\|A v-A_{0}v_{0}\|<\nu,$$

*then* ‖v − v0‖ < ε.

Proof. Again, the norm on either space can be chosen arbitrarily without any loss of generality. We will choose the p-norm for vectors and the induced matrix norms. From the previous lemma we know that for any chosen β > 0 we can take δ > 0 such that A is left-invertible and ‖B − B0‖ < β, where B = A^{-1} and B0 = A0^{-1} are the left inverses in the form defined before. Write w = Av and w0 = A0v0. We have

$$\begin{aligned}\|v-v_{0}\|&=\|Bw-B_{0}w_{0}\|\\&=\|(Bw-B_{0}w)+(B_{0}w-B_{0}w_{0})\|\\&=\|(B-B_{0})w+B_{0}(w-w_{0})\|\\&\leq\|(B-B_{0})w\|+\|B_{0}(w-w_{0})\|\\&\leq\|B-B_{0}\|\cdot\|w\|+\|B_{0}\|\cdot\|w-w_{0}\|\\&\leq\beta\|w\|+\|B_{0}\|\nu.\end{aligned}$$

We can bound each of these two terms by ε/3 by choosing appropriate β and ν. Then, we can find δ yielding the appropriate β.

Now the proof of Theorem 3.1 will proceed in two steps:

1. We show that for any prescribed probability we can find N and N′ large enough that the maximum likelihood solution will be close to the true parameter values.
2.
Then, we show that for reasonable priors the maximum a posteriori solution will almost surely asymptotically converge to the maximum likelihood solution.

Let us assume that the data was sampled from the model with true parameters π∗, π′∗, φ∗ and take δ > 0 and ε > 0. For any ν > 0 we can use the fact that the log-likelihood is given by

$$\ell(\pi,\pi^{\prime},\varphi)=\sum_{l\in{\mathcal{Y}}}N_{l}\log\pi_{l}+\sum_{k\in{\mathcal{C}}}\sum_{l\in{\mathcal{Y}}}F_{l k}\log\varphi_{l k}+\sum_{k\in{\mathcal{C}}}N_{k}^{\prime}\log(\varphi^{T}\pi^{\prime})_{k},$$

and by the strong law of large numbers we can find N and N′ large enough that with probability at least 1 − δ we will have ‖π̂ − π∗‖ < ν, ‖φ̂ − φ∗‖ < ν, and ‖φ̂ᵀπ̂′ − φ∗ᵀπ′∗‖ < ν, where π̂, φ̂, and π̂′ are the maximum likelihood estimates. Based on the previously established lemmas, we conclude that we can pick ν small enough that ‖π̂ − π∗‖ < ε, ‖φ̂ − φ∗‖ < ε, and ‖π̂′ − π′∗‖ < ε.

Now note that if we assume the PDF of the prior P(π, π′, φ) to be continuous, we can take a compact neighborhood of (π∗, π′∗, φ∗) inside ∆^{L−1} × ∆^{L−1} × ∆^{K−1} × · · · × ∆^{K−1} with probability mass arbitrarily close to 1. Then, the log-prior defined on this set will be bounded and the *maximum a posteriori* estimate can be made arbitrarily close to the maximum likelihood estimate with any desired probability.

## C.1 Should One Rely On The *Maximum A Posteriori* Estimate?

Although in the section above we discuss why the *maximum a posteriori* (MAP) estimate is consistent in the large-sample limit, we advise against using it: as the data is finite, the fully Bayesian approach is to use the full posterior (approximated by the Markov chain Monte Carlo samples) to understand the uncertainty associated with the provided estimates.
In settings in which the posterior distribution has to be summarized by a single point estimate (e.g., for the comparison with estimators providing point estimates in Section 4.2), we advise using the posterior mean, which is available directly from the MCMC samples. Conversely, the MAP estimate requires optimization, rather than sampling, and may not be well-defined (e.g., in non-identifiable problems with a non-invertible P(C | Y) matrix or when the sample size is not large enough). Moreover, the posterior mean may be preferable over the MAP estimate on the basis of Bayesian decision theory. We present a short version of the standard decision-theoretic argument and refer to Bernardo & Smith (1994, Sec. 5.1.5) for a more detailed treatment. An approach employed in Bayesian decision theory is to define a loss function ℓ(π̂′, π′∗) quantifying the risk associated with choosing the estimate π̂′ when in reality the estimand attains the value π′∗. Various applications involve different loss functions, to be specified by the decision maker: for example, if the component π′₁ describes the prevalence of a viral disease, one may prefer a loss function assigning higher loss to predictions underestimating the disease prevalence (i.e., π̂′₁ < π′∗₁ should result in a higher loss than π̂′₁ > π′∗₁). Bayesian decision theory then argues to provide an estimate minimizing the expected loss, ∫ ℓ(π̂′, π′) dP(π′ | data). The MAP estimate (provided that it exists) corresponds to the limit of the losses ℓ_{0−1,ϵ}(π̂′, π′) = 1[‖π̂′ − π′‖ > ϵ] for ϵ → 0⁺. The resulting discontinuous 0–1 loss may not be of direct interest in applied problems: for finite data the estimates are expected to differ from the ground-truth value, and one may instead try to quantify by how much the estimate differs. For the ℓ₂(π̂′, π′) = ‖π̂′ − π′‖² loss, the minimum of the expected loss is attained at the posterior mean.
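This property is easy to check numerically: given Monte Carlo samples from any posterior over π′, the Monte Carlo estimate of the expected ℓ₂ loss is minimized at the sample mean. A minimal sketch, in which the Dirichlet draws below are a hypothetical stand-in for MCMC samples (not the model from the paper):

```python
import random

random.seed(0)

# Stand-in for MCMC draws from a posterior over the prevalence vector pi'
# (hypothetical numbers; the argument works for any set of posterior samples).
raw = [[random.gammavariate(a, 1.0) for a in (2.0, 5.0, 3.0)] for _ in range(10_000)]
samples = [[x / sum(draw) for x in draw] for draw in raw]  # normalized: Dirichlet draws

def expected_l2_loss(candidate, posterior_samples):
    """Monte Carlo estimate of E[||candidate - pi'||^2] under the posterior."""
    total = sum(
        sum((s_i - c_i) ** 2 for s_i, c_i in zip(draw, candidate))
        for draw in posterior_samples
    )
    return total / len(posterior_samples)

posterior_mean = [sum(draw[i] for draw in samples) / len(samples) for i in range(3)]

# Moving away from the posterior mean (while staying on the simplex) increases
# the expected squared loss by exactly the squared norm of the perturbation.
perturbed = [m + d for m, d in zip(posterior_mean, (0.01, -0.005, -0.005))]
print(expected_l2_loss(posterior_mean, samples) < expected_l2_loss(perturbed, samples))  # True
```

The inequality holds deterministically: expanding the quadratic shows that the Monte Carlo loss at the mean plus a zero-sum perturbation exceeds the loss at the mean by the squared norm of the perturbation.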
At the same time, we stress that it is preferable to analyze the whole posterior rather than to rely on a single point estimate, especially since in reality the decision maker may not have access to an oracle loss function and may consider different candidate losses. In particular, in Section 4.2 and Appendix E.2 we quantify the algorithm performance using two losses for which the minimum may not be attained at the posterior mean. Nevertheless, the experimental results presented there confirm that the (arguably suboptimal) choice of the posterior mean as the point estimate is on par with the point estimates provided by other methods.

## D Quantification Algorithms

In this section we provide additional details on existing quantification methods, expanding on the description in Sec. 2.

## D.1 Expectation-Maximization And The Gibbs Sampler

In this section we analyse the expectation-maximization algorithm introduced by Saerens et al. (2001), which assumes access to a well-calibrated probabilistic classifier providing the probabilities r(x) = P(Y | X = x, π). We assume a Dirichlet prior for P(π′), so that the expectation-maximization targets the maximum a posteriori estimate of the distribution P(π′ | {X′j}, r). The original algorithm of Saerens et al. (2001) then corresponds to the uniform prior, P(π′) = Dirichlet(π′ | 1, 1, . . . , 1). Finally, we will show how to adjust the expectation-maximization algorithm to obtain a Gibbs sampler, sampling from the posterior P(π′ | {X′j}, r). To improve readability we will generally drop the conditioning on r, leaving it implicit.

## D.1.1 Expectation-Maximization

Saerens et al.
(2001) notice that if one has a well-calibrated classifier P(Y | X, π), then one also has access to the distribution P(Y | X, π′):

$$P(Y=y\mid X=x,\pi^{\prime})\propto P(Y=y,X=x\mid\pi^{\prime})=P(X=x\mid Y=y,\pi^{\prime})\,P(Y=y\mid\pi^{\prime})=P(X=x\mid Y=y)\,\pi_{y}^{\prime},$$

where the proportionality constant does not depend on y. Analogously,

$$P(Y=y\mid X=x,\pi)\propto P(X=x\mid Y=y)\,\pi_{y}.$$

As P(X = x | Y = y) is the same in both expressions, we can take their ratio and obtain

$$P(Y=y\mid X=x,\pi^{\prime})\propto P(Y=y\mid X=x,\pi)\,{\frac{\pi_{y}^{\prime}}{\pi_{y}}},$$

where the proportionality constant does not depend on y. This yields unnormalized probabilities, which can easily be rescaled so that they sum up to 1.

Expectation-maximization is an iterative algorithm finding a stationary point of the log-posterior

$$\log P(\pi^{\prime}\mid\{X^{\prime}_{j}=x^{\prime}_{j}\})=\log P(\pi^{\prime})+\log P(\{X^{\prime}_{j}=x^{\prime}_{j}\}\mid\pi^{\prime})=\log P(\pi^{\prime})+\sum_{j=1}^{N^{\prime}}\log P(X^{\prime}_{j}=x^{\prime}_{j}\mid\pi^{\prime}).$$

In particular, by running the optimization procedure several times, we can aim at finding the maximum a posteriori estimate. Assume that at the current iteration the proportion vector is π′⁽ᵗ⁾. Then,

$$\begin{aligned}\log P(X^{\prime}_{j}=x^{\prime}_{j}\mid\pi^{\prime})&=\log\sum_{y=1}^{L}P(X^{\prime}_{j}=x^{\prime}_{j},Y^{\prime}_{j}=y\mid\pi^{\prime})\\&=\log\sum_{y=1}^{L}P(Y^{\prime}_{j}=y\mid\pi^{\prime(t)},X^{\prime}_{j}=x^{\prime}_{j})\,\frac{P(X^{\prime}_{j}=x^{\prime}_{j},Y^{\prime}_{j}=y\mid\pi^{\prime})}{P(Y^{\prime}_{j}=y\mid\pi^{\prime(t)},X^{\prime}_{j}=x^{\prime}_{j})}\\&\geq\sum_{y=1}^{L}P(Y^{\prime}_{j}=y\mid\pi^{\prime(t)},X^{\prime}_{j}=x^{\prime}_{j})\log\frac{P(X^{\prime}_{j}=x^{\prime}_{j},Y^{\prime}_{j}=y\mid\pi^{\prime})}{P(Y^{\prime}_{j}=y\mid\pi^{\prime(t)},X^{\prime}_{j}=x^{\prime}_{j})},\end{aligned}$$

where the bound follows from Jensen's inequality.
Hence,

$$\log P(\{X^{\prime}_{j}=x^{\prime}_{j}\}\mid\pi^{\prime})=\sum_{j=1}^{N^{\prime}}\log P(X^{\prime}_{j}=x^{\prime}_{j}\mid\pi^{\prime})\geq\sum_{j=1}^{N^{\prime}}\sum_{y=1}^{L}P(Y^{\prime}_{j}=y\mid\pi^{\prime(t)},X^{\prime}_{j}=x^{\prime}_{j})\log\frac{P(X^{\prime}_{j}=x^{\prime}_{j},Y^{\prime}_{j}=y\mid\pi^{\prime})}{P(Y^{\prime}_{j}=y\mid\pi^{\prime(t)},X^{\prime}_{j}=x^{\prime}_{j})}.$$

Now let

$$Q(\pi^{\prime},\pi^{\prime(t)})=\log P(\pi^{\prime})+\sum_{j=1}^{N^{\prime}}\sum_{y=1}^{L}P(Y_{j}^{\prime}=y\mid\pi^{\prime(t)},X_{j}^{\prime}=x_{j}^{\prime})\log\frac{P(X_{j}^{\prime}=x_{j}^{\prime},Y_{j}^{\prime}=y\mid\pi^{\prime})}{P(Y_{j}^{\prime}=y\mid\pi^{\prime(t)},X_{j}^{\prime}=x_{j}^{\prime})}$$

be a lower bound on the log-posterior. We will define the value π′⁽ᵗ⁺¹⁾ by optimizing this lower bound, i.e., π′⁽ᵗ⁺¹⁾ := argmax_{π′} Q(π′, π′⁽ᵗ⁾). Define the auxiliary variables ξjy = P(Y′j = y | π′⁽ᵗ⁾, X′j = x′j), which can be calculated using the probabilistic classifier r as above. Hence,

$$Q(\pi^{\prime},\pi^{\prime(t)})=\log P(\pi^{\prime})+\sum_{j=1}^{N^{\prime}}\sum_{y=1}^{L}\left(\xi_{j y}\log P(X_{j}^{\prime}=x_{j}^{\prime},Y_{j}^{\prime}=y\mid\pi^{\prime})-\xi_{j y}\log\xi_{j y}\right).$$

Note that the term ξjy log ξjy does not depend on π′, so it does not have to be included in the optimization. Similarly, we can write log P(X′j = x′j, Y′j = y | π′) = log P(X′j = x′j | Y′j = y) + log π′y and notice that log P(X′j = x′j | Y′j = y) also does not depend on π′. Hence, we have to optimize the expression

$$\log P(\pi^{\prime})+\sum_{j=1}^{N^{\prime}}\sum_{y=1}^{L}\xi_{j y}\log\pi_{y}^{\prime},$$

where P(π′) is modeled as the Dirichlet distribution Dirichlet(π′ | α1, . . . , αL). Hence, the optimization objective becomes

$$\sum_{y=1}^{L}\left((\alpha_{y}-1)+\sum_{j=1}^{N^{\prime}}\xi_{j y}\right)\log\pi_{y}^{\prime},$$

with the constraint π′1 + · · · + π′L = 1. Saerens et al. (2001) use the technique of Lagrange multipliers.
However, we can optimise the first L − 1 coordinates and write π′L = 1 − (π′1 + · · · + π′L−1). In this case, if we differentiate with respect to π′y, we obtain Ay/π′y − AL/π′L = 0, where Ay = αy − 1 + Σ_{j=1}^{N′} ξjy. Hence, π′y = kAy for some constant k > 0. As

$$\sum_{y=1}^{L}A_{y}=\sum_{y=1}^{L}\alpha_{y}-L+\sum_{j=1}^{N^{\prime}}\sum_{y=1}^{L}\xi_{j y}=\sum_{y=1}^{L}\alpha_{y}-L+N^{\prime},$$

we obtain

$$\pi_{y}^{\prime}=\frac{1}{(\alpha_{1}+\cdots+\alpha_{L})+N^{\prime}-L}\left(\alpha_{y}-1+\sum_{j=1}^{N^{\prime}}\xi_{j y}\right),$$

which is taken as the next iteration value, π′⁽ᵗ⁺¹⁾. The procedure is repeated until the sequence π′⁽ᵗ⁾ (approximately) converges to a point.

## D.1.2 Gibbs Sampler

As is typical for expectation-maximization algorithms, it is possible to implement a Gibbs sampler targeting a sample from the posterior P(π′ | {X′j}), rather than its mode (the maximum a posteriori estimate). The Gibbs sampler iteratively samples from the high-dimensional distribution P(π′, {Y′j} | {X′j}). Note that for a Dirichlet prior we have

$$P(\pi^{\prime}\mid\{Y_{j}^{\prime}=y_{j},X_{j}^{\prime}\})=\mathrm{Dirichlet}\left(\pi^{\prime}\,\bigg{|}\,\alpha_{1}+\sum_{j=1}^{N^{\prime}}\mathbf{1}[y_{j}=1],\ldots,\alpha_{L}+\sum_{j=1}^{N^{\prime}}\mathbf{1}[y_{j}=L]\right).$$

The assignments of individual points are then sequentially sampled as

$$Y^{\prime}_{k}\sim P(Y^{\prime}_{k}\mid\{Y^{\prime}_{1},\ldots,Y^{\prime}_{k-1},Y^{\prime}_{k+1},\ldots,Y^{\prime}_{N^{\prime}}\},\{X^{\prime}_{j}\},\pi^{\prime}),$$

which is possible due to the equality

$$P(Y_{k}^{\prime}\mid X_{k}^{\prime},\pi^{\prime})=\mathrm{Categorical}(\xi_{k1},\ldots,\xi_{k L}),$$

where ξky = P(Y′k = y | X′k = xk, π′), which is obtained using the probabilistic classifier r similarly as above.

## D.2 Estimators Employing Auxiliary Black-Box Classifiers

In this section we briefly review the main algorithms employing a black-box classifier.
## D.2.1 Classify And Count

When C = Y and f: X → Y is a classifier trained for the given problem with good accuracy, the simplest approach is to count its predictions on the unlabeled data set and normalize by the total number of examples. However, as Tasche (2017) shows, this approach need not correctly estimate P(Y) even in the limit of infinite data.

## D.2.2 Adjusted Classify And Count

Consider the case of an imperfect binary classifier, with Y = C = {+, −}. The true and false positive rates are defined by

$$\begin{aligned}\mathrm{TPR}&=P(C=+\mid Y=+),\\\mathrm{FPR}&=P(C=+\mid Y=-),\end{aligned}$$

and can be estimated using the labeled data set. If θ = Ptest(Y = +), we have

$$P_{\mathrm{test}}(C=+)=\mathrm{TPR}\cdot\theta+\mathrm{FPR}\cdot(1-\theta),$$

which can be estimated by applying the classifier to the unlabeled data set and counting the positive outputs. If we assume that TPR ≠ FPR, i.e., the classifier has any predictive power, we obtain

$$\theta={\frac{P_{\mathrm{test}}(C=+)-\mathrm{FPR}}{\mathrm{TPR}-\mathrm{FPR}}}.$$

Then, Ptest(C = +) is estimated by counting the predictions of the classifier on the unlabeled data set. As Tasche (2017) showed, this estimator is consistent in the limit of infinite data. Two generalizations, extending it to problems with more classes, are known as the invariant ratio estimator and the black-box shift estimator.

## D.2.3 Invariant Ratio Estimator

Vaz et al. (2019) introduce the invariant ratio estimator, generalizing the adjusted classify and count approach as well as the "soft" version of it proposed by Bella et al. (2010). Consider any function f: X → R^{L−1}. For example, if u: X → Y is a classifier predicting outputs in the set {1, . . .
, L}, we may define f as the one-hot encoding of the first L − 1 labels and assign the zero vector to the last label:

$$f(x)=\begin{cases}(1,0,\ldots,0)&\text{if }u(x)=1,\\(0,1,\ldots,0)&\text{if }u(x)=2,\\\quad\vdots&\\(0,0,\ldots,1)&\text{if }u(x)=L-1,\\(0,0,\ldots,0)&\text{if }u(x)=L.\end{cases}$$

Analogously, for a soft classifier u: X → ∆^{L−1} ⊂ R^L, f may be defined as f_k(x) = u_k(x) for k ∈ {1, . . . , L − 1}. Then the *unrestricted* estimator π̂′ ∈ R^L is given by solving the linear system

$$\begin{cases}{\hat{F}_{11}\pi_{1}^{\prime}+\cdots+\hat{F}_{1L}\pi_{L}^{\prime}}&{=\hat{f}_{1}^{\prime}}\\{\quad\vdots}&{}\\{\hat{F}_{L-1,1}\pi_{1}^{\prime}+\cdots+\hat{F}_{L-1,L}\pi_{L}^{\prime}}&{=\hat{f}_{L-1}^{\prime}}\\{\pi_{1}^{\prime}+\cdots+\pi_{L}^{\prime}}&{=1,}\end{cases}$$

where

$${\hat{f}}_{k}^{\prime}={\frac{1}{N^{\prime}}}\sum_{j=1}^{N^{\prime}}f_{k}(x_{j}^{\prime})\qquad\text{and}\qquad{\hat{F}}_{k l}={\frac{1}{|S_{l}|}}\sum_{x\in S_{l}}f_{k}(x),$$

and S_l is the subset of the labeled data set with yi = l. Note that adjusted classify and count is a special case of the invariant ratio estimator for a hard classifier. Similarly, the algorithm proposed by Bella et al. (2010) is a special case of the invariant ratio estimator for a soft classifier. The generalization to K ≠ L is immediate, with F̂ becoming a (K − 1) × L matrix and f̂′ becoming a vector of dimension K − 1. Finally, Vaz et al. (2019) introduce a restricted estimator π̂′_R ∈ ∆^{L−1}, which is given by the projection of the unrestricted estimate onto the probability simplex. In our implementation we use the projection via sorting algorithm (Shalev-Shwartz & Singer, 2006; Blondel et al., 2014).
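Putting the pieces together, the restricted invariant ratio estimator can be sketched as follows. This is a minimal illustration, not the authors' implementation: the map `f`, the label convention {1, …, L}, and the least-squares solve of the overdetermined-looking system are our assumptions, while the simplex projection follows the sorting scheme of Shalev-Shwartz & Singer (2006):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex via sorting
    (Shalev-Shwartz & Singer, 2006)."""
    u = np.sort(v)[::-1]                       # sort coordinates in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    # Largest k with u_k - (cumsum_k - 1)/k > 0 determines the threshold theta.
    rho = np.nonzero(u - (css - 1.0) / ks > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def restricted_invariant_ratio(f, x_labeled, y_labeled, x_unlabeled, L):
    """Restricted invariant ratio estimator for a map f: X -> R^(L-1).

    Solves F_hat @ pi' = f_hat' together with sum(pi') = 1 (via least squares),
    then projects the unrestricted solution onto the simplex.
    Labels are assumed to lie in {1, ..., L} with every class present."""
    f_labeled = np.array([f(x) for x in x_labeled])        # shape (N, L-1)
    y = np.array(y_labeled)
    F_hat = np.stack(
        [f_labeled[y == l].mean(axis=0) for l in range(1, L + 1)], axis=1
    )                                                      # shape (L-1, L)
    f_prime_hat = np.mean([f(x) for x in x_unlabeled], axis=0)
    # Append the normalization constraint pi'_1 + ... + pi'_L = 1 as one more row.
    A = np.vstack([F_hat, np.ones(L)])
    b = np.append(f_prime_hat, 1.0)
    pi_unrestricted, *_ = np.linalg.lstsq(A, b, rcond=None)
    return project_to_simplex(pi_unrestricted)
```

For a binary hard classifier encoded as `f(x) = [1.0]` for a positive prediction and `[0.0]` otherwise, this reduces to the adjusted classify and count estimator, with the projection clipping the estimate into the simplex on finite samples.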
## D.2.4 Black-Box Shift Estimator

The black-box shift estimator is also based on the observation that

$$P_{\mathrm{test}}(C)=P(C\mid Y)\,P_{\mathrm{test}}(Y),$$

where the P(C | Y) matrix can be estimated using the labeled data set. Instead of solving this matrix equation directly by finding the (left) inverse, Lipton et al. (2018) estimate the pointwise ratio R(Y) = Ptest(Y)/Ptrain(Y) by rewriting the equation as

$$P_{\mathrm{test}}(C)=P_{\mathrm{train}}(C,Y)\,R(Y),$$

and estimate the joint probability matrix Ptrain(C, Y) using the labeled data set. Then, the equation can be solved for R(Y). By pointwise multiplication with Ptrain(Y) (estimated using the labeled data set), the prevalence vector Ptest(Y) is found. Note that this approach naturally generalizes to the K ≠ L case. Lipton et al. (2018) study the case K = L and derive asymptotic error bounds. More recently, Azizzadenesheli et al. (2019) introduced a regularized variant of this approach.

## D.2.5 Unsupervised Recalibration

Ziegler & Czyż (2020) study the quantification problem from the perspective of recalibration of a given probabilistic classifier. Their method can be interpreted partly as a black-box shift estimator and partly as a likelihood-based estimator. Namely, they propose to use a black-box classifier to predict the labels Ci = f(Xi) and C′j = f(X′j) and to estimate the probability table P(C | Y) by using the plug-in estimator. However, they note that solving Eq. 1 explicitly may suffer from numerical issues when the condition number is high, and instead they optimize the multinomial likelihood of the observed counts C′j.

## D.3 Other Algorithms

Other quantification methods include the CDE-Iterate algorithm of Xue & Weiss (2009), which can obtain good empirical performance on selected problems (Karpov et al., 2016). However, as Tasche (2017, Sec. 3.4) showed, it is not asymptotically consistent. Zhang et al.
(2013) describe a kernel mean matching approach with a provable theoretical guarantee. However, as Lipton et al. (2018, Sec. 6) observed, kernel-based methods may be challenging to scale to large data sets. Finally, Moreo et al. (2021) present a Python package for quantification problems.

## E Experimental Details And Additional Experiments

In this section we provide additional details on the experimental protocols used in Sec. 4, together with additional experimental results. The experiments described in Appendices E.1, E.4, and E.5 were run on a laptop with 32 GiB RAM and 16 CPU cores clocked at 4680 MHz and finished in under six hours. The experiments described in Appendices E.2 and E.3 involve runs over many random seeds and need larger computing power (unless the number of random seeds is reduced). We ran them sequentially on a cluster equipped with 384 GiB RAM and 128 CPU cores clocked at 2.25–3.7 GHz. As the cluster is shared between different researchers and uses the Slurm workload manager (Yoo et al., 2003) to distribute runs among different projects, this provides an upper bound on the computational resources used. These experiments finished in five hours.

## E.1 Nearly Non-Identifiable Model

We reproduced the experiment described in Sec. 4.1 S = 5 times, varying the random seed to obtain different data samples (and, subsequently, different posterior and bootstrap samples) as well as the data set size under the N = N′ constraint. We noted that methods based on matrix inversion raised an error whenever a matrix estimated from the bootstrap sample was singular. We dropped such bootstrap samples. In Fig. 6 we use N = N′ = 100, in Fig. 7 we use N = N′ = 10³, and in Fig. 8 we use N = N′ = 10⁴. We generally see that the bootstrap for N = N′ ∈ {10², 10³} can result in negative prevalence estimates. On the other hand, the restricted invariant ratio estimator (RIR) often does not appropriately estimate the first component.
Hence, we consider the Bayesian posterior preferable in low-data settings. For N = N′ = 10⁴ we see that the performance of all methods is comparable. For each simulated data set, we ran four Markov chains with 500 warm-up steps and 1000 samples each using the NUTS algorithm of Hoffman & Gelman (2014). To flag potential convergence issues, we calculated the potential scale reduction factor R̂ (Gelman et al., 2013, Sec. 11.4). For each variable it did not exceed 1.01. The effective sample size was over 3,000 for all parameters and across all experimental settings.

## E.2 Discrete Categorical Model Benchmark

The default parameters introduced in Sec. 4.2 are gathered in Table 1. For each data set we used four Markov chains with 500 warm-up steps and 1000 samples collected per chain. The maximal R̂ did not exceed 1.01 and the minimal effective sample size was 772. Fig. 9 presents the outcomes of the experiment where the mean absolute error metric has been replaced with the root mean squared error:

$$\mathrm{RMSE}(\hat{\pi}^{\prime},\pi^{\prime*})=\sqrt{\frac{1}{L}\sum_{y}(\hat{\pi}_{y}^{\prime}-\pi_{y}^{\prime*})^{2}}.$$

Qualitatively, the conclusions do not change.

## E.3 Misspecified Model

We repeated the experiment described in Sec. 4.3 for N = N′ ∈ {10², 10³, 10⁴} samples, with the results presented in Fig. 10. For each generated data set and each method we ran four Markov chains with 1,500

Figure 6: For N = N′ = 100 samples the posterior is not very precise around the first component. We note that for unrestricted estimators the bootstrap samples result in negative probability estimates.

Figure 7: For N = N′ = 10³ the Bayesian posterior concentrates around the ground-truth value of the first component. However, bootstrap samples often yield negative probability estimates.
Figure 8: For N = N′ = 10⁴ samples the first component is perfectly determined. Bootstrap samples capture the uncertainty well and are qualitatively similar to the samples from the Bayesian posterior.

Figure 9: Quantification using simulated categorical black-box classifiers under different scenarios.

Figure 10: Experiments with a mixture of Student distributions. First column: conditional Student distributions P(X | Y). Second column: train (labeled) and test (unlabeled) distributions. Third column: posterior according to different models. Fourth column: coverage of high-density credible intervals measured over S = 200 simulations. Top row: N = N′ = 100 samples. Middle row: N = N′ = 10³ samples. Bottom row: N = N′ = 10⁴ samples.

warm-up steps and 2,000 samples collected per chain. We noticed occasional convergence issues: runs with at least one parameter such that R̂ ≥ 1.02 were excluded from the coverage calculation (see Table 2). The convergence of all models is satisfactory (with the minimal number of retained simulations being 157 out of the 200 conducted), although we noticed that more runs were excluded for larger sample sizes and that misspecification (i.e., the Gaussian mixture model) resulted in more runs being excluded. We see that the coverage of the high-density credible intervals of the discrete model generally agrees with the nominal value. However, the posterior in the discrete model can be wider than in the properly specified model using the generative mechanisms P(X | Y).
Moreover, we see that the discretized quantification method may be preferable to a misspecified model when the sample size is large: although for N = N′ = 100 the (misspecified) Gaussian mixture model has coverage close to the nominal value, for N = N′ ∈ {10³, 10⁴} the coverage deteriorates quickly.

**Parameters of the ground-truth mixture model.** We used (X | Y = 1) ∼ T(0, 0.5², 3) and (X | Y = 2) ∼ T(1, 0.5², 4), where T(µ, σ², ν) is the location-scale t-distribution, i.e., the pushforward of the standard Student t-distribution T(0, 1, ν) with ν degrees of freedom under the affine mapping x ↦ µ + σx. We sampled the number of labeled samples with label Y = 1 as N₁ ∼ Binomial(N, 0.5) and then defined the number of samples with label Y = 2 as N − N₁. We generated the unlabeled data set analogously, but with class prevalence 0.2 rather than 0.5.

**Priors on the Gaussian and Student mixture models.** In both cases we used the uniform prior, π′ ∼ Dirichlet(1, 1), on the prevalence vector. We placed a half-Cauchy prior on the scale parameters, σᵢ ∼ HalfCauchy(1), and modeled the location parameters as µᵢ ∼ N(0, 3²). Additionally, the Student mixture had a positive prior on the degrees of freedom, νᵢ ∼ Γ(1, 1). The Gaussian mixture model can be treated as a special case of this model with the constraint νᵢ = ∞ for both components.

## E.4 Single-Cell Data Analysis

We downloaded the TPM-normalized (Zhao et al., 2021) data sequenced by Darmanis et al. (2017) from the Curated Cancer Cell Atlas. We applied the x ↦ log(1 + x) transform to all entries. In Fig. 11 we visualize P(X | Y ) by projecting the gene expression X onto the first four principal components (calculated with all samples pooled together). We see that the distribution P(X | Y ) differs between the samples, although the cell types roughly cluster together, and the random forest classifier may distinguish well between the different subtypes.
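The ground-truth generating mechanism above can be reproduced in a few lines. This is a hedged sketch (the function and variable names are ours, not the paper's code), using NumPy's standard-t sampler pushed through the affine map x ↦ µ + σx.

```python
import numpy as np

def sample_mixture(n, prevalence, rng):
    """Draw labels Y ∈ {1, 2} and features X from the location-scale
    t-mixture above: (X | Y=1) ~ T(0, 0.5^2, 3), (X | Y=2) ~ T(1, 0.5^2, 4)."""
    params = {1: (0.0, 0.5, 3), 2: (1.0, 0.5, 4)}   # (mu, sigma, nu)
    n1 = rng.binomial(n, prevalence)                # N1 ~ Binomial(n, prevalence)
    y = np.concatenate([np.ones(n1, dtype=int), np.full(n - n1, 2)])
    x = np.empty(n)
    for label, (mu, sigma, nu) in params.items():
        mask = y == label
        x[mask] = mu + sigma * rng.standard_t(nu, size=int(mask.sum()))
    return x, y

rng = np.random.default_rng(0)
x_lab, y_lab = sample_mixture(1000, prevalence=0.5, rng=rng)  # labeled set
x_unl, y_unl = sample_mixture(1000, prevalence=0.2, rng=rng)  # unlabeled set
```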
As a random forest we used the scikit-learn implementation (Pedregosa et al., 2011, v. 1.4.1) with default hyperparameters and 20 trees. Before training the random forest, we reduced the dimensionality by projecting the training data onto the first 50 principal components. This projection (onto the components defined by the training data) is applied to the other samples before predictions are made with the random forest. As in App. E.1, in this experiment we ran four chains with 500 warm-up steps and 1,000 samples each. To flag potential convergence issues, we calculated the potential scale reduction factors $\hat{R}$ to two decimal places; they did not exceed 1.02. The minimal corresponding effective sample size was 298.

Table 1: Default parameters used in the experiments.

| N    | N′  | r   | q    | L | K |
|------|-----|-----|------|---|---|
| 1000 | 500 | 0.7 | 0.85 | 5 | 5 |

Table 2: Number of simulations (out of 200) with $\hat{R} < 1.02$ for each sample size N = N′ and model.

| Sample Size | Discrete (10 bins) | Discrete (5 bins) | Gaussian | Student |
|-------------|--------------------|-------------------|----------|---------|
| 100         | 200                | 200               | 192      | 200     |
| 1000        | 199                | 199               | 174      | 200     |
| 10000       | 199                | 196               | 157      | 197     |

Figure 11: Projections onto the first four principal components of the whole data set. Each column shows the projections of one sample onto the 1st and 2nd (first row) or the 3rd and 4th (second row) components, coloured by cell type. We see distributional differences and conclude that P(X | Y ) is not invariant between samples.

## E.5 Sensitivity To The Prior Choice

In Sec. 4 we used uniform priors over all simplices, which are meant to act as reference priors (Gelman et al., 2013, Sec. 2.8) for problems without precise domain-specific information. In practice, the inference
may depend on the choice of the prior, and a sensitivity analysis should be performed (see Kruschke (2021) or Gelman et al. (2020, Sec. 6.3)). We illustrate the prior choice problem in an example with L = K = 2 classes, an imperfect classifier P(C = 1 | Y = 1) = P(C = 2 | Y = 2) = 0.7, and prevalence vectors π = (0.5, 0.5) and π′ = (0.7, 0.3). We sampled three data sets, differing in the number of points, N = N′ ∈ {50, 500, 5000}, and fitted three models, differing in the choice of the prior. We used symmetric Dirichlet priors Dirichlet(α, α, . . . , α) over all simplices. While this choice simultaneously controls the parameters of the φ matrix and the π′ vector, which may often not be desired in practice, we hope that it provides meaningful information on the sensitivity of the inference to the choice of the prior. For α = 1 this prior is uniform and has been studied in Sec. 4. Choosing α < 1 encourages more sparsity: the prior on the π′ vector concentrates near the boundary of the simplex (i.e., the vectors (1, 0) and (0, 1)). Simultaneously, the model assumes that the φ matrix corresponds to sharper predictions. For example, this means that one of the entries {P(C = 1 | Y = 1), P(C = 2 | Y = 1)} should be larger than the other, i.e., the true positive rate and the false negative rate should be very different. For α > 1 the prior distributes the mass around the center, preferring balanced prevalence vectors π′; under this prior, the false negative rate and the true positive rate are similar. We varied α ∈ {0.1, 1, 10} and performed Bayesian inference. As in Appendices E.1 and E.4, we ran four Markov chains with 500 warm-up steps and 1,000 samples each. To flag potential convergence issues, we calculated the potential scale reduction factors $\hat{R}$, which did not exceed 1.02. The minimal effective sample size was 362. The posterior of the different models is visualised in Fig. 12.
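The qualitative effect of the symmetric Dirichlet(α, . . . , α) prior described above can be checked numerically. This is a small illustration of our own (the 0.05 corner threshold and the variable names are ours), measuring how much prior mass lies near the corners of the 2-simplex for each α.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric Dirichlet(alpha, alpha) priors on the 2-simplex: alpha < 1 pushes
# prior mass toward the corners (sparse prevalence vectors), alpha = 1 is the
# uniform prior used in Sec. 4, and alpha > 1 prefers balanced vectors.
near_corner = {}
for alpha in (0.1, 1.0, 10.0):
    draws = rng.dirichlet([alpha, alpha], size=100_000)[:, 0]
    near_corner[alpha] = float(np.mean((draws < 0.05) | (draws > 0.95)))
    print(f"alpha={alpha:>4}: P(pi'_1 within 0.05 of a corner) = {near_corner[alpha]:.3f}")
```

For the uniform prior (α = 1) the corner mass is 0.1 by construction; the sparse prior puts far more mass there and the α = 10 prior almost none, matching the discussion above.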
We see that for a small sample size (N = N′ = 50), the sparse prior (α = 0.1) puts more mass at the boundary. At the same time, the predictions under the α = 1 and α = 10 priors are nearly identical. For larger N = N′ the effect of the prior vanishes, with the posterior concentrating around the true value, which is in line with Theorem 3.1.

Figure 12: Sensitivity of the Bayesian inference with respect to the Dirichlet prior with concentration parameter α under different sample sizes N = N′. We plot the posterior in the form of a histogram, together with a vertical line representing the posterior mean. The dashed line represents the ground-truth value.
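The vanishing prior effect can be illustrated with a toy model that is much simpler than the latent-variable model above: a conjugate Dirichlet–multinomial in which the prevalence is observed directly, so the posterior standard deviation is available in closed form and shrinks at the 1/√N rate regardless of the concentration parameter α. This is only an analogy of our own, not the paper's model.

```python
import numpy as np

# In the conjugate two-class case, a Dirichlet(alpha, alpha) prior and counts
# (n1, n2) yield a Beta(alpha + n1, alpha + n2) posterior on the prevalence.
def posterior_sd(n1, n2, alpha=1.0):
    a, b = alpha + n1, alpha + n2
    return float(np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1))))

for n in (50, 500, 5000):
    n1 = int(0.7 * n)                 # ground-truth prevalence 0.7
    sds = {alp: posterior_sd(n1, n - n1, alp) for alp in (0.1, 1.0, 10.0)}
    print(n, {alp: round(sd, 4) for alp, sd in sds.items()})
```

As N grows, the posterior standard deviations for all three values of α converge toward the same √(0.7 · 0.3 / N) limit, mirroring the behaviour seen in Fig. 12.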
Review 1: Summary: This is a paper well suited to TMLR since the focus is on "technical correctness over subjective significance" [jmlr.org/tmlr]. A problem we might care about ("quantification") is addressed competently, and the method is described reasonably clearly. Related work and competing methods are described properly. I am not sufficiently well versed in this problem to be sure that the competing approaches are the SOTA, so will have to trust the authors on this. The authors are careful with their claims, including those concerning their proposed method. The word "may" appears very frequently. Strengths and Weaknesses: As a result of taking a Bayesian approach to the problem at hand, this paper has a pleasing simplicity. The assumed model is conveniently represented as a DAG, and "learning" reduces to finding (or approximating) a posterior distribution over the unknown(s) of interest. The advance over the Bayesian approach (to the same problem) of Storkey is that it does not rely on a generative model P(X|Y). Whether a generative mechanism for (PX|Y) might be misspecified is highly contingent: presumably one could have real situations where it is possible to specify it reasonably correctly. As the authors correctly state: "M_{approx} loses information from the problem, so we do not expect it to be as appropriate as a properly specified generative model for P(X|Y)." The authors "work with a given dimension reduction mapping", and, as they explain, much depends on the appropriateness of the choice of mapping and whether it leads to a sufficient statistic (as the authors do explain). A motivation for dimensionality reduction (and thus losing information) is that: "However, for high-dimensional $\theta$, or when N and N' are large, MCMC will generally not be tractable". This is a vague statement about using MCMC for this problem; it would be good to see evidence of problems with MCMC (by eg citing some paper showing it, or doing the experiments yourselves). 
The statement: "We did not compare obtained posteriors in high-dimensional problems due to high computational costs associated with running MCMC on high-dimensional generative models." is not evidence. It is true that $\tilde{K}_{y}(.;\phi)$ (and thus M_approx) is particularly simple which is indeed an advantage of the presented method. In Section 3.2 the categorical distributions are effectively replaced with multinomial ones, which makes sense. The experiments are fairly small. It would be good to see what happens with large problems - and to see where, if anywhere, we hit the limits of scalability for the presented method. Also, there is no discussion of convergence (or lack thereof) of the MCMC samplers. I see that the authors use numpyro to do the MCMC; does this package not report, eg rhat values? MINOR POINTS The "panels" in Fig 3 are individual sub-figures. I recommend just labelling them as 3(a) - 3(f) and refer to them thus (rather than "the first panel of the second row", etc). The sixth "panel" (concerning model misspecification) is never explicitly referenced in the text. "To formalize the problem, consider a probabilistic graphical model in Fig. 1" State that the model referred to is M_true, rather than requiring the reader to deduce this from the text that follows this statement. Requested Changes: 1. Report on the convergence status of MCMC runs 2. Do at least one experiment on a sufficiently large instance to cause computational problems for the presented method (or prove that no such instance exists). Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper formulated the quantification problem as a Bayesian inference problem, and proposed a Bayesian approach that utilizes blackbox feature extractor and allows exact uncertainty quantification, which is particularly favorable in small sample scenario. Strengths and Weaknesses: Strength: Clear problem formulation and motivation. 
The Bayesian approach is natural and provides a principled way for uncertainty quantification of the estimated P_test(Y) The proposed method is assessed and demonstrated from various aspects with both simulated data and scRNA data. Weakness: For the proposed method to be practically useful, there seems to be a lack of discussion on the choice of prior and feature extractor. Requested Changes: Critical questions to clarify in the paper: - How robust is the inference to the prior specification? When there is limited prior knowledge, what are some recommendations on the prior? - How to choose the blackbox feature extractor f in practice? Minor changes / questions: - Clarify the notation P_lab and P_unl - Theorem 3.1 proved some desirable properties of MAP. Then why is the posterior mean instead of MAP used as the point estimate in Section 4.2? Broader Impact Concerns: The main contributions of the paper is on the methodology so to the best of my understanding, there is no major potential broader impact concerns. ================================================== Review 3: Summary: This paper studies the quantification problem and generalizes the point estimate methods like the black-box shift estimators to the Bayesian setting. The authors theoretically analyze the asymptotical consistency of the proposed maximum a posterior inference under weak assumptions. In particular, the proposed approach explicitly measures the uncertainty and does not suffer from the negative value problem in the low-data regime. Empirical evaluation on synthetic and real-world datasets shows that the proposal is competitive with existing non-Bayesian quantification methods. Strengths and Weaknesses: *Strengths*: - This paper addresses an interesting problem - the quantification problem which tries to estimate the class prevalence under the prior shift. 
- This paper carefully analyzes the connections and pros and cons among different approaches to the quantification problem, and extends the black-box shift estimator to the Bayesian setting which mitigates several existing issues. - The authors have demonstrated the claimed improvement including the robustness and superior performance in low-data scenarios through empirical evaluations.   *Weaknesses*: - Overall, the paper is not easy to follow, and the presentation could be improved. - The experiments could be more thorough and include datasets that are used in the literature. Please see Requested Changes for details. Requested Changes: - On the presentation side, the authors might want to provide a more coherent description of the proposed method, highlighting the connections and the differences from existing method and reasons why it has the ability to resolve issues of existing methods. It also would be nice to have some comparisons between the "weak assumptions" used in Theorem 3.1 and the literature. Moreover, more detailed description of the full efficient Markov chain Monte Carlo sampling would be much appreciated to be included in Section 3.2. - On the experimental side, the authors might want to 1) group the experiments according to the improvements of the proposed method and provide some discussion about the performance comparisons; 2) give clear definition of "weak uniform priors" used in the experiment and potentially some additional experiments to demonstrate the sensitivity to the different prior specifications; 3) include experiments used in the existing literature so that the audience could have a direct comparison; 4) present some summary quantitative metrics since the lines corresponds to different methods shown in Figure 3 and 5 are difficult to differentiate. - In Section 5, the authors "stress the importance of a principal shift in perspective" about treating f as an auxiliary feature extraction method. 
But since this has been introduced in the literature especially the black-box shift estimator method, the authors might want to give a clearer argument regarding additional insight/perspective advocated in this work. - Some notations: 1) "lab"/"unl" looks ambiguous to me, might consider "source"/"target" or just "train"/"test"; 2) it should be $C_{j}^{'}$ instead of $X_{j}^{'}$ in equation (3). Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper presents a sound and interesting approach to the quantification problem. The methodology is reasonable, and the authors provide sufficient support for the claims made in the paper. The topic is relevant to the field of distribution shifts, as well as statistics. Thus, the paper meets both the evidence and the audience criteria of TMLR. All reviewers are unanimously leaning towards acceptance. The consensus is that the paper is sound and provides a meaningful contribution to TMLR. During the discussion period, the authors addressed almost all of the concerns raised by the reviewers. **Requested revision**: Please address the remaining questions of reviewer 6Rrn ([link](https://openreview.net/forum?id=Ft4kHrOawZ&noteId=ZNteGLwnus)). In particular, please describe the compute necessary for running the methods. Ideally, please run the requested missing experiments with multiple MCMC chains and $\hat R$ estimates, or provide a discussion of why it is impractical to run these experiments. ==================================================
# Improved Group Robustness Via Classifier Retraining On Independent Splits

Thien Hang Nguyen *nguyen.thien@northeastern.edu*
Northeastern University, Boston, MA

Hongyang R. Zhang *ho.zhang@northeastern.edu*
Northeastern University, Boston, MA

Huy Le Nguyen *hu.nguyen@northeastern.edu*
Northeastern University, Boston, MA

Reviewed on OpenReview: *https://openreview.net/forum?id=KgfFAI9f3E*

## Abstract

Deep neural networks trained by minimizing the average risk can achieve strong average performance. Still, their performance for a subgroup may degrade if the subgroup is underrepresented in the overall data population. Group distributionally robust optimization (Sagawa et al., 2020a), or group DRO in short, is a widely used baseline for learning models with strong worst-group performance. We note that this method requires group labels for every example at training time and can overfit to small groups, requiring strong regularization. Given a limited amount of group labels at training time, Just Train Twice (Liu et al., 2021), or JTT in short, is a two-stage method that infers a pseudo group label for every unlabeled example first, then applies group DRO based on the inferred group labels. The inference process is also sensitive to overfitting, sometimes involving additional hyperparameters. This paper designs a simple method based on the idea of classifier retraining on independent splits of the training data. We find that using a novel sample-splitting procedure achieves robust worst-group performance in the fine-tuning step. When evaluated on benchmark image and text classification tasks, our approach consistently performs favorably to group DRO, JTT, and other strong baselines when either group labels are available during training or are only given in validation sets. Importantly, our method only relies on a single hyperparameter, which adjusts the fraction of labels used for training feature extractors vs. training classification layers.
We justify the rationale of our splitting scheme with a generalization-bound analysis of the worst-group loss.

## 1 Introduction

Deep neural networks are usually developed with examples in test sets that follow the same distribution as the training set. The performance of deep networks worsens when the test set distribution differs from the training set distribution. This problem has been studied in a large body of literature under the name of out-of-distribution (OOD) generalization. It is crucial in safety-critical applications such as self-driving cars (Filos et al., 2020) and medical image classification (Oakden-Rayner et al., 2020). Tackling distribution shifts for out-of-distribution generalization is one of the most important problems for the real-world deployment of deep learning models.

A notable setting where distribution shifts occur is the group-shift setting, where different data groups may have a distribution shift (Sagawa et al., 2020a). In this setting, there are predefined attributes that divide the input space into different groups of interest. Here, the goal is to find a model that performs well across several predefined groups. Prior work has observed that deep networks learned by empirical risk minimization suffer from poor worst-group performance despite good average-group performance.

Figure 1: Worst-group learning curve on the Waterbird dataset for group DRO (**left**) and our method (**right**) from the setting in Section 4.2. In the left figure, the validation accuracy of group DRO becomes unstable as the number of epochs increases beyond 100 while the training accuracy remains close to 100%. Our approach instead uses 30% of the training data with group labels for fine-tuning the classification layer of a deep network that is obtained via minimizing the average risk using the rest of the training data without group labels (see also Algorithm 1).
This allows our method to reuse features learned in the first stage while improving generalization compared with group DRO.

The difficulty of learning group-robust deep networks can be attributed to the phenomenon of shortcut learning (Geirhos et al., 2020) or spurious correlation (Sagawa et al., 2020a; Arjovsky et al., 2019). Shortcut learning posits that minimizing the empirical risk favors models that discriminate based on simpler, spurious features of the data. However, one would like the learning algorithm to produce a model that uses features and correlations that perform well not only on the training distribution but also on all potential distributions that a task may generate, such as a worst-group distribution.

In recent years, the group-shift setting has received considerable attention. Sagawa et al. (2020a) investigates distributionally robust optimization (Ben-Tal et al., 2013) in this setting and introduces group distributionally robust optimization to optimize for the worst-group error directly. Since then, this approach has been widely used for training group-robust models, where it produces strong results that many follow-up works have used as a common baseline (for example, see Liu et al. (2021); Nam et al. (2020); Zhang et al. (2022) and references therein). However, the worst-group error minimization problem is sensitive to small groups (Sagawa et al., 2020a) and requires group labels for all examples at training time.

Follow-up works in the group-shift setting have considered methods to reduce the number of group labels needed (Liu et al., 2021; Creager et al., 2021; Zhang et al., 2022; Nam et al., 2022). These methods usually follow the framework of first inferring pseudo group labels using a referenced model (pseudo-labeling) and then applying a group-robust algorithm, like minimizing the worst-group loss, on the pseudo-labeled data. These methods show promising results, sometimes on par with methods with access to group labels during training.
The caveat is that methods in this space usually involve additional hyperparameters, such as those from contrastive learning (Zhang et al., 2022) and semi-supervised learning (Nam et al., 2022). These are nontrivial complexities added to the group DRO procedure. The purpose of our paper is to investigate whether it is possible to develop group-robust models with as few group labels as possible while alleviating the need for expensive parameter tuning.

**Our Contributions.** We answer the above question in the affirmative by designing a simple approach called Classifier Retraining on Independent Splits, or CROIS in short. We replace the pseudo-labeling phase and instead use group labels only for fitting the final classifier layer. Our method achieves good robust performance without relying on various hyperparameters and extensive parameter tuning. We note that this concern has been voiced by other researchers in the community, with the goal of prioritizing simple, reproducible research over complex methods (Gulrajani & Lopez-Paz, 2021). Our method takes advantage of the good features learned by empirical risk minimization (Kang et al., 2019; Menon et al., 2021b) while overcoming the deficiency of its memorization behavior (Sagawa et al., 2020b). We utilize the training data as two independent splits: one group-unlabeled split to train the feature extractor and one group-labeled split to retrain only the classifier with a robust algorithm like group DRO. We say a network contains good features for a certain task to mean that there exists a linear classifier utilizing the deep network's features that performs well on the task, where features refer to the inputs to the deep network's final linear layer. We demonstrate through ablation studies that using independent splits is crucial for robust classifier retraining.
Furthermore, our method applies group DRO to only a low-capacity linear layer, which reduces group DRO's sensitivity to small groups as well as the amount of data needed for group DRO to generalize well. For empirical evidence, see Figure 1 and Figure 3. For various benchmark data sets whose group labels are only partially given during training, we show strong experimental results on Waterbird, CelebA, MultiNLI, and CivilComments, improving upon existing methods, including Just Train Twice (Liu et al., 2021) and Spread Spurious Attribute (Nam et al., 2022). The highlight of our method is that it involves only a single parameter (determining the data splitting fractions), and it completely eliminates the pseudo-labeling stage. We demonstrate a surprising result: using only a fraction of the group labels during training, our method shows performance competitive with group DRO run on fully labeled groups. Our results reinforce several recent works showing that deep networks contain good features on image and text classification tasks (Menon et al., 2021b; Kirichenko et al., 2022).

The simplicity of our procedure also allows us to cast it naturally into a formal learning theory framework. We state such a setting and develop a simple generalization bound on the worst-group loss. This result provides some justification for the hyperparameter p, which balances the amount of data used for feature learning against that used for classifier retraining.

The rest of this paper is organized as follows. In Section 2, we discuss related work. In Section 3, we describe the design of our method. In Section 4, we present our experimental results. In Section 5, we provide a generalization bound for the worst-group loss using standard Rademacher complexity techniques. Lastly, we conclude the paper in Section 6. Appendix A provides additional details to support our experimental results. Appendix B states the proofs for our theoretical claims.
## 2 Related Work

There are three main settings for the group-shift problem: (1) full availability of group labels, (2) limited availability of group labels, and (3) no availability of group labels, all referring to the training stage. Other related areas include domain generalization and long-tailed classification.

**Fully-labeled group labels during training.** Most methods here revolve around up-weighting minority groups, subsampling majority groups (Sagawa et al., 2020b), or performing group DRO (Sagawa et al., 2020a). Follow-up works include integrating data augmentation via generative models or selective augmentation (Yao et al., 2022) into a robust training pipeline.

**Partially-labeled group labels during training.** In this setting, the approach of inferring group labels for the group-unlabeled data remains the most popular. These pseudo group labels are usually generated by training a referenced model that performs the labeling. For example, Liu et al. (2021) utilizes a low-capacity model that creates groups by labeling whether an example is correctly classified by the referenced model or not. Similarly, works such as Creager et al. (2021); Dagaev et al. (2021); Krueger et al. (2021); Nam et al. (2022; 2020) are variants of this approach of inferring pseudo group labels. These methods then use a group-robust algorithm like group DRO (Sagawa et al., 2020a) or Invariant Risk Minimization (Arjovsky et al., 2019) to retrain new deep nets with the newly generated pseudo group labels.

**No group labels during training.** This setting removes the ability to validate using knowledge of potential groups, which makes the problem more difficult, as it is unclear which correlations to look for during training. Some theoretical works in this space include Lahoti et al. (2020). Sohoni et al. (2020) proposes a widely used empirical approach in this setting and popularized the pseudo-labeling and retraining strategy. This setting is related to domain generalization.
Gulrajani & Lopez-Paz (2021) shows through large-scale experiments that most out-of-distribution generalization methods do not improve over empirical risk minimization given the same amount of tuning and the same model selection criterion.

**Long-tailed classification.** The long-tailed problem concerns certain classes having significantly fewer training examples than others (see, for example, Zhang et al. (2021) for a survey). Yang et al. (2021) uses random matrix theory to obtain intriguing insights into learning from imbalanced classes, such as the non-monotonicity of adding source data on transfer. Some techniques from the long-tail literature, like margin adjustment and distillation, have been applied to the group-shift setting to account for group imbalances (Sagawa et al., 2020a; Lukasik et al., 2021; Kini et al., 2021). Li et al. (2023a;b) recently proposed a task modeling and boosting framework that aggregates multiple learned models to counteract the class imbalance problem.

**Representation learning in deep networks.** Investigating the power of the features of deep nets has been of great interest in the long-tail setting (Liu et al., 2019; Menon et al., 2021a; Kang et al., 2019) as well as in the group-shift setting (Menon et al., 2021b). Kang et al. (2019) is one of the first works to provide extensive evidence for the hypothesis that deep nets contain good features, through experiments on several long-tailed vision datasets in which different strategies for obtaining feature extractors and fine-tuning the classification layer are examined. There, an ERM-trained feature extractor combined with a non-parametric method of rescaling¹ the classifier layer achieves (then) state-of-the-art results on all three datasets, showing evidence that the features of deep nets can be used to distinguish between rare and frequent classes. These insights are central to the development of our method.

**Memorization in deep learning.**
It is now well known that high-capacity deep nets can memorize training examples (Zhang et al., 2017). In the group-shift setting, this behavior has been investigated by Sagawa et al. (2020b), who provide empirical and theoretical justifications for deep nets' memorization of minority groups' training examples. This memorization behavior has also been observed in other settings, including data imbalance (Feldman & Zhang, 2020), noisy labels (Ju et al., 2022), and fine-tuning of pretrained models (Ju et al., 2023). We point out that, concurrently with our work, Kirichenko et al. (2022) similarly discovered that classifier retraining on independent splits improves group robustness. While the main idea of retraining the classifier using independent splits is similar, Kirichenko et al. (2022) focuses more on exploring the features learned by deep networks. In contrast, our work focuses more on controlling model capacity for group DRO, where we limit its use to only the final linear layer. Thus, we believe that our method is of independent interest.

## 3 Method

This section describes the design of our method. First, we lay out the problem setup and its motivation. Then, we describe our approach, which splits the dataset into independent parts to train the features and the classifier separately.

## 3.1 Preliminaries

For a classification task T of predicting labels in Y from inputs in X, we are given training examples $\{(x_i, y_i)\}_{i=1}^{n}$ drawn as independent samples from some training distribution Dtrain. In the domain generalization setting, we want good performance on some unknown test distribution Dtest that is different from but related to Dtrain through the task T. More explicitly, we wish to find a classifier f from some hypothesis space F using Dtrain such that the classification error $L(f) = \mathbb{E}_{x,y \sim D_{test}}[f(x) \neq y]$ of f w.r.t.
Dtest is low. In the group-shift setting (Sagawa et al., 2020a), we further assume that associated with each data point x is an *attribute* a(x) (some sub-property or statistic of x) from a set of possible attributes A. These attributes, along with the labels, form the set of possible groups G = A × Y that each example can take. We denote an input x's group label as g(x) ∈ G. We then define the classification error of a predictor f (w.r.t. a fixed implicit distribution) restricted to a group g ∈ G to be $L_g(f) := \mathbb{E}_{x,y \mid g(x)=g}[f(x) \neq y]$. The notion of worst-group error upper bounds the error of f w.r.t. any group: $L_{wg}(f) := \max_{g \in G} L_g(f)$. Using this notation, the group-shift problem aims to discover a classifier in $\arg\min_{f \in F} \{L_{wg}(f)\} = \arg\min_{f \in F} \{\max_{g \in G} L_g(f)\}$. We observe that the group-shift problem is just a particular case of the domain generalization problem in which Dtest is the distribution consisting of only the points (x, y) with g(x) restricted to the worst group of f in G.

¹Rescaling each row of the linear classifier using the row's norm to some power. See Kang et al. (2019).

**Algorithm 1** Classifier Retraining on Independent Splits (CROIS)

**Input:** Training data $D_L$ with group labels and training data $D_U$ without group labels. Classifier retraining algorithm R (default: group DRO). Optional splitting parameter p (default: 1).

1: *Obtain validation sets* by partitioning $D_L$ into $D'_L$ and $D_L^{(val)}$, and $D_U$ into $D'_U$ and $D_U^{(val)}$.
2: (Optional) *Add more unlabeled data* via split proportion p: partition $D'_L$ into two parts, $D_1$ and $D_2$, such that $|D_1| = (1-p) \cdot |D'_L|$ and $|D_2| = p \cdot |D'_L|$. Set $D'_L \leftarrow D_2$ and $D'_U \leftarrow D'_U \cup D_1$.
3: *Obtain the initial model* f by running empirical risk minimization on $D'_U$ and selecting the best model in terms of average accuracy on $D_L^{(val)} \cup D_U^{(val)}$.
4: *Perform classifier retraining* R with feature extractor f on $D'_L$, and select the best model in terms of worst-group accuracy on $D_L^{(val)}$ as the final output.
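The splitting in steps 1–2 of Algorithm 1 amounts to index bookkeeping. The sketch below is our own (the names and the validation fraction are ours, and implementation details such as stratification are not shown):

```python
import numpy as np

def crois_splits(d_labeled, d_unlabeled, p=1.0, val_frac=0.2, seed=0):
    """Sketch of steps 1-2 of Algorithm 1 at the index level: carve out
    validation sets, then move a (1 - p) fraction of the group-labeled
    training split into the unlabeled pool used for ERM feature learning."""
    rng = np.random.default_rng(seed)

    def split(idx, frac):
        idx = rng.permutation(idx)
        k = int(round(frac * len(idx)))
        return idx[k:], idx[:k]                 # (training part, validation part)

    dl_train, dl_val = split(np.arange(len(d_labeled)), val_frac)
    du_train, du_val = split(np.arange(len(d_unlabeled)), val_frac)

    # Optional step 2: |D1| = (1 - p) |D'_L| examples join the unlabeled pool.
    k = int(round((1 - p) * len(dl_train)))
    d1, d2 = dl_train[:k], dl_train[k:]
    return {"erm_unlabeled": (du_train, d1),    # D'_U ∪ D1 (feature learning)
            "retrain_labeled": d2,              # remaining D'_L (classifier retraining)
            "val_labeled": dl_val,
            "val_unlabeled": du_val}

splits = crois_splits(list(range(1000)), list(range(4000)), p=0.3)
```

With p = 0.3, only 30% of the group-labeled training split is reserved for classifier retraining; the rest is treated as unlabeled data for feature learning, mirroring the trade-off the single hyperparameter p controls.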
Here, group distributionally robust optimization (group DRO) solves this objective with a minimax optimization procedure that alternates between updating the model's weights and the relaxed weights of the groups.

Spurious correlations and memorization. As an example, consider the Waterbird dataset (Sagawa et al., 2020a), which was constructed by combining images of water/land birds from the CUB dataset (Welinder et al., 2010) with water/land backgrounds from the PLACES dataset (Zhou et al., 2017). The task is to distinguish whether an image of a bird shows a waterbird or a landbird. In terms of our problem setup, the type of bird forms the labels Y, and the backgrounds are the attributes A for each type of bird. Altogether, these form four groups: G = Y × A = {waterbird, landbird} × {water, land}. The dataset is constructed so that the proportion of birds on matching backgrounds is significantly higher than that of birds on mismatched backgrounds. As a consequence, the backgrounds are spuriously correlated with the labels: predicting the background alone already achieves a high average accuracy w.r.t. the train distribution. As expected, for models trained by empirical risk minimization, the groups with the highest error are the minority groups where the background mismatches the type of the bird, suggesting that the model predicts using the background instead of the bird. Furthermore, the fact that these high-capacity models achieve *zero* training error leads to the conclusion that they not only utilize spurious features like the background to make their predictions but also must have memorized the minority groups during training (Sagawa et al., 2020b). These problems are common when there is data imbalance in overparametrized networks (Feldman & Zhang, 2020; Li & Zhang, 2021) or when there is label noise (Ju et al., 2022). In the next section, we propose a method to circumvent these issues.
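The alternating group-weight update at the heart of group DRO can be sketched as follows (a minimal numpy illustration of the exponentiated-gradient step from Sagawa et al. (2020a); the function name and step size are our own):

```python
import numpy as np

def group_dro_step(group_losses, q, eta_q=0.1):
    """One group DRO update: exponentially upweight groups with high loss,
    then form the reweighted loss that the model weights are trained on."""
    group_losses = np.asarray(group_losses, dtype=float)
    q = q * np.exp(eta_q * group_losses)  # mirror-ascent step on group weights
    q = q / q.sum()                       # project back onto the simplex
    weighted_loss = float(np.dot(q, group_losses))  # backpropagated in practice
    return q, weighted_loss
```

In the full algorithm, this group-weight step alternates with a gradient step on the model parameters with respect to the reweighted loss.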
## 3.2 Our Approach

Algorithm 1 presents an outline of our main method: *Classifier Retraining on Independent Splits*, or *CROIS* for short. Given group-labeled data and group-unlabeled data, our method involves the following steps:

1. Organize the data into one *group-labeled* split D′L and one *group-unlabeled* split D′U.
2. Obtain a feature extractor trained by empirical risk minimization on the group-unlabeled split D′U.
3. Perform robust classifier retraining on the group-labeled split D′L, where classifier retraining refers to fine-tuning the final linear layer of a deep network.

In the setting where group labels are limited (as in Section 4.1), |DL| is much smaller than |DU|, and we do not need to set p < 1. There, we are primarily concerned with partitioning DL into D′L and D(val)L. On the other hand, when group labels are available for a large portion of the training dataset (as in Section 4.2) and |DU| is much smaller than |DL|, the optional parameter p in step 2 controls the size of D′U used to obtain a feature extractor and the number of group labels in D′L used at train time.

![5_image_0.png](5_image_0.png)

Figure 2: tSNE projection (Van der Maaten & Hinton, 2008) of the features of ResNet-50 on *seen* (**left**) versus *unseen* (**right**) examples from Waterbird.² The features of the minority groups (orange and yellow) appear better separated from the majority groups on the *unseen* examples than on the seen examples. Using unseen examples plays a major role in our method's ability to improve worst-group performance via robust classifier retraining.

Good features of deep nets. As discussed in the related works section, there is extensive empirical evidence that deep nets trained by ERM contain features that can distinguish the minority classes from the majority classes in both the long-tailed setting and the group-shift setting.
This suggests that a key to the group-shift problem is correcting the classifier layer, which forms the basis for the first phase of our method. The most efficient way to utilize data for fine-tuning the final layer was left open, and our work focuses on exploring this aspect in more depth.

Fine-tuning the classifier layer and utilizing group information. There is a range of possible strategies to utilize group labels. At one extreme, one can rescale using only minimal information, such as the group sizes; at the other extreme, one can maximally utilize group information by training the whole network with group DRO. Our work explores the space between these extremes by limiting the use of group labels to fine-tuning only the final layer. Recall that in Kang et al. (2019), rescaling the classifier works best, whereas intuition suggests a data-dependent method like classifier retraining should work better. We hypothesize that this is related to the next issue of our discussion.

Memorization behavior of deep nets. As discussed in the related works section, there is strong evidence for deep nets memorizing training examples, which is often believed to be one of the main causes of poor robust performance. One way to circumvent memorization is to control the model's capacity by incorporating some combination of high ℓ2 regularization, early stopping, and other correctional parameters, as has been done in Sagawa et al. (2020a). However, methods that are sensitive to hyperparameter configurations and require excessive tuning are undesirable, sometimes leading to reproducibility concerns (Gulrajani & Lopez-Paz, 2021). Our method, instead, does not require excessive tuning. It also extends to numerous settings depending on the availability of group labels. Tackling this memorization problem is crucial, and we achieve this using independent splits. Since the loss of memorized (i.e., already correctly classified) examples must be low, their gradients contain little helpful information.
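This vanishing-signal effect is easy to see for the logistic loss: the gradient magnitude of a confidently correct (memorized) example is exponentially small in its margin. A simple numerical check (our own illustration, not from the paper):

```python
import math

def logistic_grad_magnitude(margin):
    """|dl/dm| of the logistic loss l(m) = ln(1 + exp(-m)) at signed margin
    m = y * f(x): a large positive margin (confidently correct, i.e., a
    memorized example) yields a near-zero gradient and thus little signal."""
    return 1.0 / (1.0 + math.exp(margin))
```

At margin 0 (undecided) the gradient magnitude is 0.5, while at margin 10 it has already shrunk below 1e-4.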
Furthermore, the features of memorized examples might not represent their group at test time: Figure 2 presents a visualization comparing the features of seen versus unseen examples. Thus, combining this observation with the evidence for deep nets containing good features, our method performs robust classifier retraining on *unseen* examples (D′L in Algorithm 1) in the hope of learning a classifier that utilizes features more representative of examples at test time. Our experimental results confirm these intuitions: the results in Table 3 show that robust classifier retraining without an independent split indeed performs worse. Finally, as a side benefit, the independent split lends itself to theoretical analysis, which we provide in Section 5. This further supports the soundness of our method.

²Half of the data is used to obtain a feature extractor, while the other half is used to obtain the features of the unseen examples.

## 4 Experiments

We conduct experiments in two settings: (1) where group labels are only available from the validation split of the datasets (as in Liu et al. (2021); Nam et al. (2022)); and (2) where a fraction of group labels is available from the training split, and all group labels are available from the validation split. Our implementation in PyTorch can be found at https://github.com/timmytonga/crois.

Setup. We use a similar setup to Liu et al. (2021) and Sagawa et al. (2020a). To demonstrate the ease of tuning of our method, unless noted otherwise (e.g., Table 2 and parameter p in Table 3), we fix the hyperparameters of both the empirical risk minimization (ERM) and the robust classifier retraining phases, reusing standard parameters for ERM (see Appendix A for full hyperparameters and model details). Further results of our method with tuned hyperparameters are presented in Section A.6 of the Appendix.

Datasets. We experiment on four datasets:
- **Waterbird** (Sagawa et al., 2020a).
Combining the bird images from the CUB dataset (Welinder et al., 2010) with water or land backgrounds from the PLACES dataset (Zhou et al., 2017), the task is to classify whether an image contains a *landbird* or a *waterbird* without confounding with the background. There are 4795 total training examples, whereas the minority group (*waterbird* on *land* background) has only 56 examples. We report the weighted test *average accuracy* due to the skewed nature of the val and test sets, to be consistent with Sagawa et al. (2020a).
- **CelebA** (Liu et al., 2015) is a popular image dataset of celebrity faces. The task is to classify whether the celebrity in the image is *blond* or *not blond*, with *male* or *not male* as the confounding attribute. There are 162770 total training examples, and the smallest group (*blond, male*) has 1387 examples.
- **MultiNLI** (Williams et al., 2017) is a natural language inference dataset for determining whether a sentence's hypothesis is *entailed* by, is *neutral* with, or *contradicts* its premise. The spurious attribute is the presence of negation words like *no, never*, or *nothing*. This task has 6 groups, with 206175 total training examples and 1521 examples in the minority group (*is entailed* and *contains negation*).
- **CivilComments-WILDS** (Koh et al., 2021) is a natural language dataset where the task is to classify whether a sentence is *toxic* or *non-toxic*. There are 8 demographics (*male, female, white, black, LGBTQ, Muslim, Christian,* and *other religion*) forming 16 groups that *overlap*, because a comment can contain multiple demographics. Following Koh et al. (2021), we evaluate all 16 groups but only use the attribute *black* along with the label in training. There are 269038 total training examples, with 1045 minority examples from (*other religion, toxic*).

## 4.1 Result With Validation Group Labels

Setup.
In this section, we consider the setting where group labels DL are available only from the standard validation split; these group labels can be used for training (Nam et al., 2022) or model selection (Liu et al., 2021). Here, the training split is treated as the group-unlabeled set DU. Most methods in this setting employ some pseudo-labeling approach to generate pseudo group labels, which are then used to train a new network via a robust algorithm like group DRO. In contrast, our method simply uses half of DL for classifier retraining (D′L) and the other half for model selection (D(val)L) and does not rely on pseudo-labeling. Our method also reuses the initial model for the retraining phase, making it closer to a single-phase procedure with additional fine-tuning.

Results. In Table 1, we compare our method against JTT (Liu et al., 2021) and SSA (Nam et al., 2022), where we report the mean and one standard deviation of the test average accuracy (*Avg Acc*) and worst-group accuracy (*Wg Acc*) across 3 random seeds. There, our method outperforms JTT on all 4 datasets and SSA on 3 datasets using default parameters. Note that, unlike our method and SSA, JTT only uses available group labels for model selection. However, JTT requires training many models across two phases, which can be expensive. Furthermore, JTT's model selection can be quite sensitive (see Section 5.4 of Liu et al. (2021)). SSA alleviates this problem of JTT by utilizing group labels more efficiently to infer pseudo-labels. Finally, our method dispenses with pseudo-labeling altogether while still achieving competitive performance.

Table 1: Experimental results for the setting when only group labels from the validation split are used. Results for JTT (Just Train Twice) and SSA (Spread Spurious Attributes) are taken from Liu et al. (2021) and Nam et al. (2022), respectively. The numbers in parentheses denote one standard deviation from the mean across 3 random seeds.
See Table 18 in Appendix A.7 for comparison with additional baselines.

| | Waterbird | | CelebA | | MultiNLI | | CivilComments | |
|--------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Method | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc |
| JTT | 93.9 | 86.7 | 88.0 | 88.1 | 78.6 | 72.6 | 91.1 | 69.3 |
| SSA | 92.2 (0.87) | 89.0 (0.55) | 92.8 (0.11) | 89.8 (1.28) | 79.9 (0.87) | 76.6 (0.66) | 88.2 (1.95) | 69.9 (2.02) |
| CROIS (ours) | 92.1 (0.29) | 90.9 (0.12) | 91.6 (0.61) | 88.5 (0.87) | 81.4 (0.06) | 77.4 (1.21) | 90.6 (0.20) | 70.3 (0.34) |

Table 2: Worst-group test accuracy for partial group labels from the validation split. Results for SSA and JTT are taken from Table 3 of Nam et al. (2022). The standard deviation is reported based on three independent runs. See Table 19 in Appendix A.7 for comparison with additional baselines.

| % of group labels from the | CelebA | | | Waterbird | | |
|------------------------------|------------|------------|-------------|------------|------------|------------|
| validation split | 20% | 10% | 5% | 20% | 10% | 5% |
| JTT (Liu et al., 2021) | 81.1 | 81.1 | 82.2 | 84.0 | 86.9 | 76.0 |
| SSA (Nam et al., 2022) | 88.9 | 90.0 | 86.7 | 88.9 | 88.9 | 87.1 |
| CROIS's Wg Acc | 89.6 (0.4) | 87.6 (0.6) | 87.3 (1.0) | 90.4 (1.0) | 88.2 (0.9) | 87.8 (1.3) |
| CROIS's Avg Acc | 90.8 (0.2) | 91.6 (0.3) | 87.8 (1.6) | 92.4 (0.5) | 93.0 (0.7) | 88.7 (1.6) |

Discussion. We clarify the differences between JTT and our method. First, the initial phase of JTT *infers* pseudo group labels for the group-unlabeled data. This phase requires careful hyperparameter tuning and capacity control using the group-labeled validation set to produce accurate pseudo group labels (as noted in Section 5.4 of Liu et al. (2021)).
On the other hand, our method trains a *single* model and simply retrains the last layer with any available group labels. Second, JTT's final performance is limited by group DRO's performance on the full network, which can be worsened by mislabeled pseudo-labels from the first phase. In contrast, we demonstrate in Section 4.2 that, by limiting group DRO to only the last layer, our method is competitive with full group DRO even when using only a fraction of group labels and minimal tuning. Compared with SSA (Nam et al., 2022), our method does not rely on pseudo-labeling. SSA consists of a pseudo-labeling phase followed by a robust training phase using the inferred group labels. In the first phase, SSA trains a separate network that *predicts the group* rather than the class. By treating the pseudo-labeling problem as semi-supervised learning, SSA's pseudo-labeling capability improves upon JTT's. Our results show that our method outperforms SSA on 3 out of 4 datasets while reusing default parameters.

Reducing validation split size. Following the setup in JTT (Liu et al., 2021) and SSA (Nam et al., 2022), we vary the size of the validation split (20%, 10%, and 5% of the original) to test whether our results still hold in these settings. We consider both the Waterbird and CelebA datasets. Note that, for this setting, our method must be additionally *tuned* to account for the increased difficulty of the reduced quantity of group labels. Nevertheless, the smaller number of examples, along with training just the last layer, makes the extra tuning less expensive (details and setup in Section A.8). We present our results (along with error bars) in Table 2, where our method outperforms JTT and SSA at various percentage levels.

## 4.2 Result With Partial Training Group Labels

Next, we consider the setting where group labels are available from both the training split and the validation split.
In contrast to Section 4.1, the standard validation split is used *only* for model selection (D(val)L) and not for classifier retraining (D′L) here. We compare our method using some fraction of the training split's group labels against group DRO using all the group labels.

Table 3: Comparison between our method CROIS and group DRO. NCRT refers to naive classifier retraining, i.e., when an independent split is *not used* during the retraining phase. Results marked with † are taken from (Sagawa et al., 2020a). ∗For Waterbird, we omit the result for p = 0.05 due to the small dataset size and the inability to sample any minority-group example for robust retraining.

| | Waterbird | | CelebA | | MultiNLI | | CivilComments | |
|--------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Method | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc |
| ERM | 96.9 | 69.8 | 95.6 | 44.4 | 82.8 | 66.0 | 92.1 | 63.2 |
| GDRO | 93.2† | 86.0† | 91.8† | 88.3† | 81.4† | 77.7† | 89.6 (0.23) | 70.5 (2.10) |
| CROIS: p = group-labeled fraction used for retraining (with the 1 − p unlabeled fraction for the ERM phase) | | | | | | | | |
| 0.05 | ∗ | ∗ | 91.9 (0.50) | 88.9 (1.10) | 81.8 (0.15) | 73.8 (1.54) | 90.8 (0.40) | 63.3 (7.82) |
| 0.10 | 95.4 (1.10) | 83.5 (3.24) | 91.3 (0.36) | 90.3 (0.82) | 80.8 (0.51) | 75.3 (2.06) | 89.5 (1.81) | 68.7 (1.72) |
| 0.30 | 90.8 (0.35) | 89.6 (1.15) | 91.3 (0.44) | 90.6 (0.95) | 80.0 (0.31) | 77.9 (0.17) | 89.7 (0.33) | 68.6 (1.53) |
| 0.50 | 90.4 (0.95) | 89.5 (0.59) | 91.9 (0.35) | 88.2 (2.10) | 79.8 (0.26) | 74.4 (1.00) | 89.5 (0.70) | 71.0 (1.50) |
| NCRT | 96.5 | 75.2 | 93.9 | 69.2 | 82.3 | 67.9 | 90.3 | 67.6 |
Again, we fix the parameters of our method to the standard empirical risk minimization parameters to demonstrate its ease of tuning (see Appendix A).

Setup. We study our method with different amounts of training group labels, determined by the parameter p. This means that a (1 − p) fraction of the training split is used to obtain a feature extractor in the first phase as D′U (using no group labels), and the remaining p fraction of group labels forms D′L for robust classifier retraining. This setup allows examining the trade-off between the quality of the feature extractor and the amount of data available for classifier retraining. Additionally, to demonstrate the importance of retraining with *unseen* examples, we experiment with robust classifier retraining using the same data as the first phase, i.e., *without* independent splits, denoted as *NCRT* in the table.

The parameter p. In practice, we expect |DL| to be much smaller than |DU|, as in Section 4.1. There, Table 2 suggests that reasonable robust performance can be achieved with a small fraction of group labels. In this setup, however, since group labels are abundant (DU = ∅ and DL is large), we treat p as a tunable parameter that controls the sizes of D′U and D′L. Furthermore, using a p fraction of the available group labels simulates obtaining group labels for a random fraction of the data under a budget constraint on group labels.

Results. In Table 3, our method outperforms group DRO on both image datasets and yields competitive performance to group DRO on the two text datasets when using only a fraction of group labels and reusing default hyperparameters. Our result implies that comparable or even better robust performance than group DRO can be obtained by collecting group labels for roughly 30% of the available training data (modulo validation). One exception is severe group imbalance, as in CivilComments (where the minority group constitutes only 0.4% of the dataset).
There, a higher fraction of group-labeled data is beneficial for obtaining more minority-group examples. Hence, a more efficient sampling method that includes more minority examples (e.g., filtering by labels first) would be beneficial in practice. Finally, the results for naive classifier retraining also show the importance of using an independent split for classifier retraining.³

Trade-off between feature extractor and amount of group-labeled data for robust retraining. From the results across the datasets, allocating more examples towards training the feature extractor (lower p) generally yields higher average accuracies. The worst-group error after classifier retraining has a more complex interaction with p, as it depends on both the quality of the feature extractor and the amount of group-labeled examples available for classifier retraining. While varying the proportion p in our experiments gives a rough estimate of this trade-off, we hypothesize that the availability of minority-group examples matters most for obtaining a robust classifier. We further support this intuition with an ablation study in Section A.5, where removing non-minority examples has an insignificant impact on the final group-robust performance.

³In Sagawa et al. (2020a), group adjustment is observed to improve Waterbird's worst-group accuracy to 90.5%. We also notice an improvement when incorporated here and obtain a 90.3% ± 0.62 test worst-group accuracy. We also observe that, similarly to Sagawa et al. (2020a), the adjustment only works for Waterbird but not for CelebA or MultiNLI.

![9_image_0.png](9_image_0.png)

Figure 3: While group DRO often requires high ℓ2 regularization to avoid overfitting, setting ℓ2 too high might cause instability in group DRO's minimax optimization procedure (**left**).

Table 4: Comparison between reweighting, subsampling, and group DRO as classifier retraining algorithms on top of the same feature extractor (p = 0.30).

| Retraining | Waterbird | | CelebA | | MultiNLI | | CivilComments | |
|--------------|-------------|----------|------------|----------|---------|--------|---------|--------|
| Method | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc |
| Reweighting | 95.2 | 87.1 | 92.1 | 85.0 | 78.9 | 67.0 | 88.3 | 56.4 |
| Subsampling | 95.8 | 81.1 | 91.6 | 86.1 | 78.6 | 64.0 | 91.8 | 59.3 |
| GDRO | 91.4 | 90.2 | 91.6 | 90.4 | 80.3 | 78.0 | 89.7 | 69.1 |

Alleviating group DRO's sensitivity towards model capacity. Group DRO's need for model capacity control via either ℓ2 regularization or early stopping is well noted in the literature (Sagawa et al., 2020a). In Table 17, we compare group DRO's and our method's sensitivity to different ℓ2 regularization strengths. While our method's performance on Waterbird is relatively uniform, group DRO is more sensitive to different ℓ2 settings on CelebA. When ℓ2 = 1 for CelebA, group DRO fails altogether (see Figure 3). In contrast, our method achieves consistent performance across different ℓ2 settings. Our method controls model capacity by limiting group DRO to only the last layer, which alleviates group DRO's tendency to overfit and simplifies parameter tuning (as in Figure 1 for Waterbird).

## 4.3 Ablation Studies

Obtaining a good feature extractor. An ablation study on the effects of different validation accuracies and initial algorithms on the feature extractor's quality (measured by robust performance after classifier retraining) is presented in Section A.4. Similarly to previous works Kang et al.
(2019), empirical risk minimization provides the best features compared with reweighting or group DRO (both of which require group labels). We find a positive correlation between validation average accuracy and feature quality. This then serves as a proxy for our method's model selection criterion in the first phase, significantly simplifying parameter tuning over other two-phase methods in the group-shift setting.

Group DRO is better than reweighting and subsampling for classifier retraining. Table 4 contains results for different classifier retraining methods. We observe that group DRO produces the best group-robust performance (since group DRO is designed for this setting, after all). Reweighting and subsampling seem effective on the vision datasets but fail on the NLP datasets.

Table 5: Comparison between retraining with group DRO on the full network (*Full*) versus last-layer retraining (LL).

| GDRO | Waterbird | | CelebA | | MultiNLI | | CivilComments | |
|------------|-------------|----------|------------|----------|---------|--------|---------|--------|
| p = 0.30 | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc | Avg Acc | Wg Acc |
| LL (CROIS) | 91.4 | 90.2 | 91.6 | 90.4 | 80.3 | 78.0 | 89.7 | 69.1 |
| Full | 90.5 | 79.8 | 91.6 | 78.3 | 80.8 | 75.1 | 90.4 | 69.1 |

Classifier retraining outperforms full retraining with group DRO. Classifier retraining plays a central role in our method. In Table 5, we compare fine-tuning with group DRO on the full DNN versus just the last layer for an independent split with p = 0.30. We see that group DRO on the last layer is much better than full group DRO on most datasets (except CivilComments). The main difference, however, is that while last-layer retraining requires little additional tuning, we must search over different regularization strengths for group DRO when applied to the full network.
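For concreteness, last-layer retraining with group DRO on frozen features reduces to a small robust logistic-regression loop. The following is a toy numpy sketch under our own simplifications (binary labels, full-batch updates, a linear head only); it is not the paper's implementation:

```python
import numpy as np

def retrain_last_layer(features, labels, groups, n_groups,
                       lr=0.5, eta_q=0.5, steps=300):
    """Group DRO restricted to a linear head w on frozen features.
    `labels` in {0, 1}; `groups` in {0, ..., n_groups - 1}."""
    n, d = features.shape
    w = np.zeros(d)
    q = np.ones(n_groups) / n_groups
    for _ in range(steps):
        probs = 1.0 / (1.0 + np.exp(-(features @ w)))
        group_losses, grad = np.zeros(n_groups), np.zeros(d)
        eps = 1e-12
        for g in range(n_groups):
            m = groups == g
            # Average logistic loss and its gradient on group g.
            group_losses[g] = -np.mean(
                labels[m] * np.log(probs[m] + eps)
                + (1 - labels[m]) * np.log(1 - probs[m] + eps))
            grad += q[g] * features[m].T @ (probs[m] - labels[m]) / m.sum()
        q *= np.exp(eta_q * group_losses)  # upweight the worst groups
        q /= q.sum()
        w -= lr * grad
    return w

# Toy "frozen features": dimension 0 is predictive, dimension 1 is spurious.
X = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]] * 10)
y = np.array([1, 1, 0, 0] * 10)
g = np.array([0, 1, 1, 0] * 10)
w = retrain_last_layer(X, y, g, n_groups=2)
preds = (X @ w > 0).astype(int)
```

On this toy data, the retrained head relies on the predictive dimension and ignores the spurious one, which is the behavior the decoupled second phase is meant to produce.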
Our method is quite robust to different parameter settings and regularization strengths (details in Appendix A.6).

Deep nets learned by minimizing the average risk contain good features. The positive result for our decoupled training procedure provides strong evidence that deep nets contain good features for the group-shift problem. While this is consistent with findings in the literature on vision datasets (Kang et al., 2019; Menon et al., 2021b), our work further provides some of the first evidence for this hypothesis in non-vision tasks, where the same result would not have been possible without an independent split, as evidenced by the result for naive classifier retraining in Table 3.

Simplified model selection. The model selection criterion of picking the model with the best average validation accuracy simplifies hyperparameter tuning compared to other two-phase methods. This criterion was chosen based mainly on the ablation experiments in Section A.4 of the Appendix, where we observe that higher average validation accuracy generally suggests better features.

## 5 Theoretical Justification Of Data Splitting With Generalization Bounds

In this section, we complement our empirical results with Rademacher-complexity-based generalization bounds for both phases of our method: one for the standard average loss of the deep net's feature extractor and a worst-group generalization bound for linear classifiers. These serve as an explanation for balancing the right proportion of samples between the two phases, especially when the data imbalance between subgroups is significant. We provide an upper bound on the worst-group loss that depends on the worst-group sample size. We then argue that the average generalization error lower-bounds the worst-group generalization error, and that a balanced splitting of the samples is beneficial to sufficiently control the worst-group error.

Generalization bound on average losses.
We first present a standard generalization bound for deep nets. Given a dataset S := {(x1, y1), . . . ,(xn, yn)}, consider a function class F consisting of L-layer feedforward neural networks with ρ-Lipschitz activations, along with the function class composed with the dataset, F|S:

$$\mathcal{F}:=\left\{x\rightarrow\sigma_{L}\left(W_{L}\sigma_{L-1}\left(\cdots\sigma_{1}\left(W_{1}x\right)\cdots\right)\right)\mid\left\|W_{i}^{T}\right\|_{1,\infty}\leq B\right\},$$
$$\mathcal{F}_{|S}:=\left\{\left(\ell\left(f(x_{1}),y_{1}\right),\ldots,\ell\left(f(x_{n}),y_{n}\right)\right)\mid f\in\mathcal{F}\right\}.$$

If we collect the samples $x_1, \ldots, x_n$ into the rows of $X \in \mathcal{X}^{n \times d}$ and if the activations σi are ρ-Lipschitz with σi(0) = 0, then we can show the following generalization bound for F:

Proposition 1 (See, e.g., Telgarsky (2021)). Suppose that f ∈ [a, b] for all f ∈ F for some finite *a < b*. Then with probability at least 1 − δ

$$\operatorname*{sup}_{f\in\mathcal{F}}\mathbb{E}\left[f(Z)\right]-{\frac{1}{n}}\sum_{i=1}^{n}f(z_{i})\leq{\frac{2}{n}}\left\|X\right\|_{2,\infty}\left(2\rho B\right)^{L}{\sqrt{2\ln\left(d\right)}}+3\left(b-a\right){\sqrt{\frac{\ln\left(2/\delta\right)}{2n}}}.$$

We defer the proof to Appendix B. We can view a simple convolution layer at depth l with kernels $K_{1}^{(l)}, \ldots, K_{k_{l}}^{(l)}$ as linear layers of Toeplitz matrices $W_{1}^{(l)}, \ldots, W_{k_{l}}^{(l)}$ representing the kernels. Then the (1, ∞)-norm for a convolutional net with L layers scales with the size of the kernels rather than with the dimension, as it does for standard linear layers:

$$\left\|W_{j}^{(l)}\right\|_{1,\infty}=\sum_{i}\left|K_{j,i}^{(l)}\right|.$$

With this view, we can apply the bound above to the convolutional networks used in our experiments.

Generalization bound for worst-group losses. Next, we derive a generalization bound for the worst-group loss of linear classifiers. Suppose we are working with a binary classification problem with G groups, where for a sample x ∈ X we have g(x) ∈ [G].
Consider the hypothesis class of linear classifiers with bounded norm, i.e.,

$$\mathcal{H}:=\left\{w\in\mathbb{R}^{d}\mid\|w\|\leq B\right\}.$$

Since the 0-1 loss ℓ(x, y) = 1[x ̸= y] is hard to work with directly, we instead consider the logistic loss

$$\ell(x,y)=\ln\left(1+\exp\left(-yx\right)\right).$$

Hence, the function class, along with our dataset S := {(x1, y1), . . . ,(xn, yn)}, that we consider for generalization is

$$\mathcal{F}:=\left\{\left(x,y\right)\rightarrow\ell\left(w^{T}x,y\right)\mid\|w\|\leq B\right\},$$
$$\mathcal{F}_{|S}:=\left\{\left(\ell\left(w^{T}x_{1},y_{1}\right),\ldots,\ell\left(w^{T}x_{n},y_{n}\right)\right)\mid\|w\|\leq B\right\}.$$

We can further partition the dataset into its G groups S1, S2, . . . , SG, where Si := {(x, y) ∈ S | g(x) = i}, with sizes ni each. Let

$${\hat{R}}_{i}(f):={\frac{1}{n_{i}}}\sum_{(x_{j},y_{j})\in S_{i}}\ell(f(x_{j}),y_{j}),\qquad R_{i}(f):=\mathbb{E}_{(X,Y)|g(X)=i}\left[\ell(f(X),Y)\right]$$

denote the empirical and population group losses of f. We can then show the following result:

Theorem 2. If F is a class of bounded linear classifiers and ℓ *is the logistic loss, then for binary* classification with G *groups, we have that with probability at least* 1 − δ:

$$\sup_{f\in\mathcal{F}}\max_{i\in G}R_{i}(f)-\hat{R}_{i}(f)\leq\max_{i\in G}\frac{2}{n_{i}}B\cdot\left\|X^{(i)}\right\|_{F}+3\left(\ln(2)+B\right)\sqrt{\frac{\ln\left(2G/\delta\right)}{2n_{i}}}.$$

We defer the proof to Appendix B.

Two-stage training and data balancing: empirical validation. Note that since the worst-group loss upper-bounds the average loss, the average loss is a natural lower bound on the worst-group loss. The generalization gap for the worst-group loss is influenced not only by its hypothesis class and the amount of data per group, but also indirectly by the lower bound given by the average loss used to train the feature extractor.
Hence, this suggests that splitting the data into different proportions (via the parameter p) across the two phases of our method is a means to trade off between the worst-group and average loss generalization gaps. Note that, as with most existing generalization bounds for deep networks, these bounds are most likely not quantitatively informative and serve mainly as suggestions for algorithm design. We present some supporting empirical evidence in Table 6, showing the worst-group loss and average loss generalization gap for various splitting proportions p on CelebA and Waterbird. The generalization gap is reported at the best validation epoch. There, we obtain the best generalization when the splitting is "balanced" (p = 0.30), as in our other experiments.

Table 6: Generalization gap between different splitting proportions of the training data for both phases.

| | CelebA | | | Waterbird | | |
|------------------------|-------|-------|-------|-------|------|-------|
| Splitting proportion p | 0.1 | 0.3 | 0.5 | 0.1 | 0.3 | 0.5 |
| Test worst-group loss | 0.287 | 0.275 | 0.312 | 0.389 | 0.27 | 0.313 |
| Train worst-group loss | 0.249 | 0.269 | 0.236 | 0.183 | 0.25 | 0.295 |
| Generalization gap | 0.038 | 0.006 | 0.076 | 0.206 | 0.02 | 0.018 |

## 6 Conclusion

In this paper, we propose classifier retraining on independent splits as a simple method to reduce the number of group annotations needed to improve worst-group performance, as well as to alleviate group DRO's requirement for careful control of model capacity (Sagawa et al., 2020a). Our experimental results show the effectiveness of our method on four standard datasets across two settings and provide evidence that deep nets contain good features for the group-shift problem.

Future Work.
The richness of deep net features can potentially help in solving the seemingly harder group-agnostic setting (where no group labels are available): since we have shown that reasonable robustness can be achieved with relatively few group labels, the practitioner can focus on obtaining a robust classifier given a feature extractor, which brings the problem closer in reach. On a broader note, while most work in representation learning focuses on producing good features (whether with supervised, unsupervised, or self-supervised approaches), further examination of different ways to perform classifier retraining in different settings (as in our work) could give a fuller picture of the feature quality of different methods.

Broader Impact Statement. Worst-group robustness is closely related to fairness in AI, where issues like the biases of machine learning models are considered (Hardt et al., 2016; Ding et al., 2021). Our method seeks to improve the group robustness of deep learning models, which is important as machine learning models become more ubiquitous.

## Acknowledgement

Thanks to Pavel Izmailov and Michael Zhang for several discussions related to this paper. T. N. is partially supported by a seed/proof-of-concept grant from the Khoury College of Computer Sciences, Northeastern University.

## References

Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.

Aharon Ben-Tal, Dick Den Hertog, Anja De Waegenaere, Bertrand Melenberg, and Gijs Rennen. Robust solutions of optimization problems affected by uncertain probabilities. *Management Science*, 59(2):341–357, 2013.

Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com.

Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In *International Conference on Machine Learning*, pp. 2189–2200.
PMLR, 2021.

Nikolay Dagaev, Brett D Roads, Xiaoliang Luo, Daniel N Barry, Kaustubh R Patil, and Bradley C Love. A too-good-to-be-true prior to reduce shortcut reliance. *arXiv preprint arXiv:2102.06406*, 2021.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. In *Advances in Neural Information Processing Systems*, 2021.

Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. In *NeurIPS*, 2020.

Angelos Filos, Panagiotis Tigkas, Rowan McAllister, Nicholas Rhinehart, Sergey Levine, and Yarin Gal. Can autonomous vehicles identify, recover from, and adapt to distribution shifts? In *International Conference on Machine Learning*, pp. 3145–3153. PMLR, 2020.

Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020.

Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In *ICLR*. OpenReview.net, 2021.

Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, and Jianhua Yao. Umix: Improving importance weighting for subpopulation shift via uncertainty-aware mixup. *arXiv preprint arXiv:2209.08928*, 2022.

Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. *Advances in Neural Information Processing Systems*, 29:3315–3323, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.

Haotian Ju, Dongyue Li, and Hongyang R Zhang.
Robust fine-tuning of deep neural networks with hessian-based generalization guarantees. In *International Conference on Machine Learning*, pp. 10431–10461. PMLR, 2022.

Haotian Ju, Dongyue Li, Aneesh Sharma, and Hongyang R Zhang. Generalization in graph neural networks: Improved pac-bayesian bounds on graph diffusion. In *International Conference on Artificial Intelligence and Statistics*, pp. 6314–6341. PMLR, 2023.

Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decoupling representation and classifier for long-tailed recognition. In *International Conference on Learning Representations*, 2019.

Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, and Christos Thrampoulidis. Label-imbalanced and group-sensitive classification under overparameterization. *Advances in Neural Information Processing Systems*, 34:18970–18983, 2021.

Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. *arXiv preprint arXiv:2204.02937*, 2022.

Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, pp. 5637–5664. PMLR, 2021.

David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In *International Conference on Machine Learning*, pp. 5815–5826. PMLR, 2021.

Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed Chi. Fairness without demographics through adversarially reweighted learning. *Advances in Neural Information Processing Systems*, 33:728–740, 2020.

Daniel Levy, Yair Carmon, John C Duchi, and Aaron Sidford.
Large-scale methods for distributionally robust optimization. *Advances in Neural Information Processing Systems*, 33:8847–8860, 2020.

Dongyue Li and Hongyang R Zhang. Improved regularization and robustness for fine-tuning in neural networks. *Advances in Neural Information Processing Systems*, 34:27249–27262, 2021.

Dongyue Li, Haotian Ju, Aneesh Sharma, and Hongyang R Zhang. Boosting multitask learning on graphs through higher-order task affinities. *arXiv preprint arXiv:2306.14009*, 2023a.

Dongyue Li, Huy Nguyen, and Hongyang Ryan Zhang. Identification of negative transfers in multitask learning using surrogate models. *Transactions on Machine Learning Research*, 2023b.

Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In *International Conference on Machine Learning*, pp. 6781–6792. PMLR, 2021.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *Proceedings of International Conference on Computer Vision (ICCV)*, December 2015.

Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 2537–2546, 2019.

Michal Lukasik, Srinadh Bhojanapalli, Aditya Krishna Menon, and Sanjiv Kumar. Teacher's pet: understanding and mitigating biases in distillation. *arXiv preprint arXiv:2106.10494*, 2021.

Aditya Krishna Menon, Sadeep Jayasumana, Ankit Singh Rawat, Himanshu Jain, Andreas Veit, and Sanjiv Kumar. Long-tail learning via logit adjustment. In *International Conference on Learning Representations*, 2021a.

Aditya Krishna Menon, Ankit Singh Rawat, and Sanjiv Kumar. Overparameterisation and worst-case generalisation: friend or foe? In *International Conference on Learning Representations*, 2021b.
Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: Training debiased classifier from biased classifier. *arXiv preprint arXiv:2007.02561*, 2020.

Junhyun Nam, Jaehyung Kim, Jaeho Lee, and Jinwoo Shin. Spread spurious attribute: Improving worst-group accuracy with spurious attribute estimation. In *International Conference on Learning Representations*, 2022.

Luke Oakden-Rayner, Jared Dunnmon, Gustavo Carneiro, and Christopher Ré. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. In *Proceedings of the ACM conference on health, inference, and learning*, pp. 151–159, 2020.

Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In *International Conference on Learning Representations*, 2020a.

Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. In *International Conference on Machine Learning*, pp. 8346–8356. PMLR, 2020b.

Nimit Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. No subclass left behind: Fine-grained robustness in coarse-grained classification problems. *Advances in Neural Information Processing Systems*, 33, 2020.

Matus Telgarsky. Deep learning theory lecture notes, 2021. URL https://mjt.cs.illinois.edu/dlt/index.pdf.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of Machine Learning Research*, 9(11), 2008.

P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.

Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. *arXiv preprint arXiv:1704.05426*, 2017.
Fan Yang, Hongyang R Zhang, Sen Wu, Weijie J Su, and Christopher Ré. Analysis of information transfer from heterogeneous sources via precise high-dimensional asymptotics. *arXiv preprint arXiv:2010.11750v2*, 2021.

Huaxiu Yao, Yu Wang, Sai Li, Linjun Zhang, Weixin Liang, James Zou, and Chelsea Finn. Improving out-of-distribution robustness via selective augmentation. *arXiv preprint arXiv:2201.00299*, 2022.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In *ICLR*. OpenReview.net, 2017.

Michael Zhang, Nimit S Sohoni, Hongyang R Zhang, Chelsea Finn, and Christopher Ré. Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations. *ICML*, 2022.

Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. Deep long-tailed learning: A survey. *arXiv preprint arXiv:2110.04596*, 2021.

Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2017.

## A Experimental Details

## A.1 Infrastructure

We performed our experiments on 2 PCs with one NVIDIA RTX3070 and one NVIDIA RTX3090. Our implementation is built on top of the code base from Liu et al. (2021). Experimental data is collected with the help of Weights and Biases (Biewald, 2020).

## A.2 Models

We use ResNet50 (He et al., 2016) with ImageNet initialization and batch normalization for CelebA and Waterbird. We use pretrained BERT (Devlin et al., 2018) for MultiNLI and CivilComments.

## A.3 Hyperparameters

We use the original train-val-test split in all datasets and report the *test* results. Cross-entropy is used as the base loss for all objectives. SGD with momentum (set to 0.9) is used for the vision datasets, while the AdamW optimizer with dropout and a fixed linearly-decaying learning rate is used for BERT.
We use a batch size of 16 for CivilComments and 32 for the rest of the datasets. We do not use any additional data augmentation or learning rate scheduler in our results. Table 7 contains the hyperparameters used in our experiments in Sections 4.1 and 4.2. Note that these are the standard parameters for obtaining an ERM model on these datasets, as in previous works (Sagawa et al., 2020a; Liu et al., 2021). The only difference is that we train Waterbird and CelebA for slightly fewer epochs, since we found no further increase in validation accuracies beyond them. In our experiments, unless noted, we do not tune any other hyperparameters. For the second phase of CivilComments, we do not use the default regularization but opt for 0, since the linear layer already has low capacity. However, adding further regularization does not seem to have much of an effect, as shown in Section A.6.

Table 7: Hyperparameters used in the experiments. The slash indicates the parameters used in the first phase (feature extractor) versus the second phase (classifier retraining).

|                   | Waterbird | CelebA    | MultiNLI          | CivilComments |
|-------------------|-----------|-----------|-------------------|-----------|
| Learning Rate     | 10−4/10−4 | 10−4/10−4 | 2 × 10−5/2 × 10−5 | 10−5/10−5 |
| ℓ2 Regularization | 10−4/10−4 | 10−4/10−4 | 0/0               | 10−2/0    |
| Number of Epochs  | 250/250   | 20/20     | 20/20             | 6/6       |

## A.4 Ablation Studies: Obtaining A Good Feature Extractor

In this section, we examine the different factors that can potentially impact the quality of the feature extractor.

Impact of the Feature Extractor's Algorithm. We provide evidence that ERM-trained models produce the best features for worst-group robustness. We conduct an experiment on Waterbird where, instead of using ERM to obtain a feature extractor, we perform group DRO or reweighing in the first phase. The results are presented in Table 8.
While using reweighing or group DRO for the first phase defeats the purpose of reducing the number of group labels needed (whereas ERM needs none), it is informative to examine the features alone. There, we see that even though ERM does not use group labels, it provides the best features for robust classifier retraining on an independent split.

Table 8: Effects of different methods for obtaining a feature extractor on test average accuracy and test worst-group accuracy (with ResNet50 on Waterbird).

| Feature extractor via   | Test Avg Acc   | Test Wg Acc   |
|-------------------------|----------------|---------------|
| Reweighing              | 90.1           | 88.8          |
| Group DRO               | 90.8           | 88.6          |
| ERM                     | 90.5           | 90.2          |

Impact of Early Stopping and Validation Accuracies on the Feature Extractor. We present an ablation study of how different early stopping epochs (Figure 4), average validation accuracies (Figure 5, left), and worst-group validation accuracies (Figure 5, right) of the initial ERM-trained model affect the group DRO classifier retraining phase of CROIS. The results here are from performing CROIS with p = 0.30 on Waterbird across a wide variety of epochs. Table 9 presents the full data generated for this section.

![17_image_0.png](17_image_0.png)

Figure 4: The effect of using different epochs for the feature extractor (phase 1) on classifier retraining's (phase 2) test accuracies.

![17_image_1.png](17_image_1.png)

Figure 5: The effect of different *validation average accuracies* (**left**) and *validation worst-group accuracies* (**right**) from the feature extractor (phase 1) on classifier retraining's (phase 2) test accuracies.

## A.5 Further Studies On Robust Classifier Retraining

Impact of robust retraining on independent splits. In this section, we examine how the model's predictions on DU and DL change before and after robust classifier retraining on an independent split (with p = 0.30). Tables 10 and 11 show the accuracy on DU and DL for Waterbird and CelebA.
The "Points changed" column indicates the number of points for which the model changes its prediction after robust retraining, per group, along with the total number of examples in that group (percentage in parentheses). The worst group is underlined in the tables.

Table 9: CROIS with group DRO (p = 0.30) on Waterbird. Average (*Avg Acc*) and worst-group (*Wg Acc*) accuracies for various epochs of the feature extractor ("Phase 1") and the corresponding test accuracies for classifier retraining ("Phase 2"). While training for longer epochs seems to help with average and worst-group accuracy in phase 2, the benefit is small. Hence, simply selecting the model with the best validation average accuracy (epoch 131, denoted *Best* here) yields good enough features and simplifies our training procedure and model selection criteria.

| CROIS (p = 0.3)   | Feature Extractor (Phase 1) |        | Classifier Retraining (Phase 2) |        |          |         |
|-------------------|---------|--------|---------|--------|----------|---------|
| Phase 1 Epoch     | Val Avg | Val WG | Val Avg | Val WG | Test Avg | Test WG |
| 0                 | 91.3    | 0.05   | 87.8    | 85.6   | 86.6     | 85.2    |
| 1                 | 94      | 13.5   | 88      | 87.6   | 86       | 85.5    |
| 2                 | 95.2    | 20.3   | 89.6    | 88     | 88.3     | 86.9    |
| 3                 | 95.7    | 25.6   | 90.1    | 88     | 88.5     | 88.2    |
| 4                 | 95.8    | 25.6   | 88.9    | 88.2   | 87.1     | 86.9    |
| 5                 | 96.4    | 32.3   | 89.2    | 88.7   | 87.3     | 87.1    |
| 6                 | 96.5    | 38.4   | 90.5    | 88.7   | 89.3     | 88.9    |
| 7                 | 96.3    | 29.3   | 90.6    | 88.7   | 89.3     | 88.5    |
| 8                 | 97.1    | 38.4   | 90.1    | 89.3   | 88       | 87.7    |
| 9                 | 97.1    | 42.9   | 90.4    | 89.9   | 88.2     | 87.9    |
| 10                | 96.7    | 40.6   | 91.2    | 90.2   | 89.4     | 88      |
| 20                | 97.3    | 54.9   | 91.4    | 91     | 90.3     | 89.5    |
| 50                | 97.4    | 55     | 91.5    | 91     | 90.4     | 89.7    |
| 100               | 97.2    | 57.1   | 90.5    | 90.2   | 89.5     | 89.2    |
| 131 (Best)        | 97.6    | 55.6   | 90.9    | 90.2   | 89.7     | 89.6    |
| 150               | 97.2    | 60.2   | 90.7    | 90.2   | 89.6     | 89.4    |
| 200               | 97.5    | 52.6   | 91      | 90.2   | 88.8     | 88.2    |
| 250               | 97.2    | 59.4   | 91      | 90.4   | 88.3     | 87.7    |

Table 10: Model's predictions on DU and DL for Waterbird *before* and *after* robust classifier retraining on an independent split. "Before" denotes the ERM training phase. The worst group is underlined.

| Waterbird (p = 0.3)   | Accuracy on DU |       |                  | Accuracy on DL |       |                  |
|-----------------------|--------|-------|------------------|--------|-------|------------------|
|                       | Before | After | Points changed   | Before | After | Points changed   |
| Avg Acc               | 100    | 94.5  | 184/3356 (5.48%) | 96.4   | 89.6  | 162/1439 (11.3%) |
| Group 0 (73.0%)       | 100    | 92.6  | 180/2430 (7.41%) | 99.6   | 88.2  | 122/1068 (11.4%) |
| Group 1 (3.84%)       | 100    | 99.3  | 1/141 (0.71%)    | 76.7   | 100   | 10/43 (23.3%)    |
| Group 2 (1.17%)       | 100    | 100   | 0/38 (0.00%)     | 44.4   | 100   | 10/18 (55.6%)    |
| Group 3 (22.0%)       | 100    | 99.6  | 3/747 (0.40%)    | 90.8   | 92.3  | 20/310 (6.45%)   |

Table 11: Model's predictions on DU and DL for CelebA *before* and *after* robust classifier retraining on an independent split. "Before" denotes the ERM training phase. The worst group is underlined.

| CelebA (p = 0.30)   | Accuracy on DU |       |                     | Accuracy on DL |       |                    |
|---------------------|--------|-------|---------------------|--------|-------|--------------------|
|                     | Before | After | Points changed      | Before | After | Points changed     |
| Avg Acc             | 96.5   | 92.4  | 7537/113939 (6.61%) | 95.6   | 92.0  | 3274/48831 (6.70%) |
| Group 0 (43.7%)     | 96.7   | 91.7  | 2609/50311 (5.19%)  | 95.8   | 91.3  | 1011/21318 (4.74%) |
| Group 1 (41.4%)     | 99.6   | 92.2  | 3489/46652 (7.48%)  | 99.6   | 92.2  | 1493/20222 (7.38%) |
| Group 2 (14.1%)     | 89.9   | 95.2  | 1000/16012 (6.25%)  | 86.8   | 93.7  | 517/6868 (7.53%)   |
| Group 3 (0.87%)     | 46.3   | 91.8  | 439/964 (45.4%)     | 35.5   | 95.3  | 253/423 (59.8%)    |

Interestingly, after robust retraining, the worst group almost always switches from the minority group to the majority group regardless of the data split.

The importance of minority examples: subsampled retraining.
As alluded to in Section 4.2, group imbalance seems to play an important role in the robust performance of CROIS. To further demonstrate this point, we consider how CROIS performs when the second phase is subsampled versus when it is allowed additional non-minority examples.

Table 12: Performance of CROIS when retraining is on a subsampled split versus the full split. Subsampling the split does not seem to impact CROIS's performance, indicating the importance of the availability of minority examples.

| Dataset (minority group size, fraction)   | Subsampled retraining |        | Full retraining |        |
|-------------------------------------------|---------|--------|---------|--------|
|                                           | Avg Acc | Wg Acc | Avg Acc | Wg Acc |
| CelebA (n = 423, 0.87%)                   | 90.6    | 89.7   | 91.9    | 88.9   |
| Waterbird (n = 18, 1.2%)                  | 91.3    | 87.6   | 90.3    | 88.9   |

As the results in Table 12 show, there is no significant difference between the two sampling strategies, suggesting that the availability of minority-group examples plays an important role in robust classifier retraining.

## A.6 Hyperparameter Tuning: CROIS vs. Group DRO

In this section, we examine in more depth CROIS's sensitivity to hyperparameter tuning in comparison to group DRO.

Further hyperparameter exploration on CROIS. In the main body, we demonstrated CROIS's effectiveness even when simply reusing the hyperparameters used to train an ERM model. Here, we present results for additional parameter tuning of the robust classifier retraining phase. These results provide empirical evidence for CROIS's potential as well as its robustness to different hyperparameter settings.

ℓ2 **regularization.** We investigate whether *additional* regularization is helpful for classifier retraining with group DRO on CelebA (Table 13) and Waterbird (Table 14). We further examine the effects of regularization on CivilComments (Table 15) to support our choice in Section A. The results in the tables contain the mean and one standard deviation across 3 random seeds.

Table 13: Effects of ℓ2 regularization on classifier retraining with group DRO on CelebA.

|         | p = 0.50    |             | p = 0.30    |             | p = 0.10    |             |
|---------|-------------|-------------|-------------|-------------|-------------|-------------|
| ℓ2 Reg. | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      |
| 1       | 91.6 (0.06) | 87.4 (2.76) | 90.9 (1.13) | 88.4 (0.78) | 91.2 (0.35) | 90.0 (0.70) |
| 10−2    | 92.1 (0.32) | 87.6 (1.56) | 91.8 (0.21) | 87.5 (3.54) | 92.0 (0.44) | 88.3 (2.39) |
| 10−4    | 91.9 (0.35) | 88.2 (2.10) | 91.3 (0.44) | 90.6 (0.95) | 91.3 (0.36) | 90.3 (0.82) |
| 0       | 92.2 (0.25) | 86.8 (2.30) | 91.5 (0.07) | 87.8 (3.96) | 92.1 (0.47) | 88.1 (2.73) |

Table 14: Effects of ℓ2 regularization on classifier retraining with group DRO on Waterbird.

|         | p = 0.50    |             | p = 0.30    |             | p = 0.10    |             |
|---------|-------------|-------------|-------------|-------------|-------------|-------------|
| ℓ2 Reg. | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      |
| 1       | 89.6 (0.93) | 88.9 (1.37) | 91.7 (0.44) | 89.8 (0.68) | 94.5 (0.50) | 85.8 (0.15) |
| 10−2    | 90.4 (1.31) | 89.3 (0.61) | 90.6 (1.25) | 89.2 (1.31) | 95.1 (0.40) | 86.3 (0.83) |
| 10−4    | 90.4 (0.95) | 89.5 (0.59) | 90.8 (0.35) | 89.6 (1.15) | 95.4 (1.10) | 83.5 (3.24) |
| 0       | 89.8 (0.53) | 89.3 (0.70) | 90.6 (1.31) | 89.1 (1.16) | 95.1 (0.35) | 86.4 (0.90) |

Table 15: Effects of ℓ2 regularization on classifier retraining with group DRO on CivilComments.

|         | p = 0.50    |             | p = 0.30    |             | p = 0.10    |             |
|---------|-------------|-------------|-------------|-------------|-------------|-------------|
| ℓ2 Reg. | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      |
| 1       | 88.8 (1.30) | 70.0 (1.63) | 89.5 (0.35) | 66.4 (1.99) | 89.3 (1.69) | 68.9 (2.64) |
| 10−2    | 89.4 (0.99) | 70.6 (0.42) | 89.5 (0.29) | 68.5 (0.87) | 88.6 (1.70) | 70.2 (1.63) |
| 0       | 89.5 (0.70) | 71.0 (1.50) | 89.7 (0.33) | 68.6 (1.53) | 89.5 (1.81) | 68.7 (1.72) |

Learning rate. We examine the effects of different learning rates for CROIS on CelebA and Waterbird in Table 16. A lower learning rate seems to be more beneficial.

Table 16: Effects of varying the learning rate on CROIS (p = 0.30) for CelebA and Waterbird. We fix the ℓ2 regularization to 10−4. The best value per row is in bold.

| Learning rate                   | 10−5     | 10−4 | 10−3     | 10−2 | 10−1 |
|---------------------------------|----------|------|----------|------|------|
| CelebA, average accuracy        | **92.2** | 91.4 | 91.2     | 90.1 | 91.2 |
| CelebA, worst-group accuracy    | 88.3     | 90   | **90.3** | 87.8 | 82.8 |
| Waterbird, average accuracy     | 89.7     | 90.6 | **94.2** | 93.2 | 93.9 |
| Waterbird, worst-group accuracy | **89.5** | 88.9 | 87.1     | 88.8 | 78.5 |

Sensitivity to model capacity: CROIS versus group DRO. In this section, we compare the test performance of CROIS and group DRO under different ℓ2 regularization configurations on CelebA and Waterbird in Table 17. On CelebA, group DRO is quite sensitive to model capacity, while it is less so on Waterbird. We note that group DRO fails to converge to a good stationary point when ℓ2 = 1 on CelebA (see Figure 3).

Table 17: Worst-group test accuracy for group DRO and CROIS (p = 0.30) with different ℓ2 regularization.

|         | CelebA |      |      |      |      |      | Waterbird |      |      |      |      |      |
|---------|------|------|------|------|------|------|------|------|------|------|------|------|
| ℓ2 reg. | 0    | 10−4 | 10−3 | 10−2 | 10−1 | 1    | 0    | 10−4 | 10−3 | 10−2 | 10−1 | 1    |
| GDRO    | 81.7 | 81.7 | 81.7 | 83.9 | 87.8 | 0.00 | 86.8 | 86.8 | 86.8 | 86.8 | 87.1 | 86.5 |
| CROIS   | 90.6 | 91.5 | 90.0 | 90.0 | 90.3 | 90.0 | 90.3 | 90.6 | 90.6 | 90.6 | 90.0 | 88.2 |
## A.7 Additional Comparison To Other Methods

We provide additional baselines for comparison in the tables below.

Additional baselines for group labels from only the validation set. In Table 18, we compare CROIS against JTT (Liu et al., 2021) and SSA (Nam et al., 2022), as well as additional baselines: CVaR DRO (Levy et al., 2020), LfF (Nam et al., 2020), EIIL (Creager et al., 2021), CnC (Zhang et al., 2022), and UMIX (Han et al., 2022). There, we report the mean and one standard deviation of the test average accuracy (*Avg Acc*) and worst-group accuracy (*Wg Acc*) across three random seeds.

Table 18: Experimental results for the setting when only group labels from the validation set are used. Results for C-DRO (CVaR DRO), LfF, EIIL, JTT, and SSA are from Nam et al. (2022). Results for UMIX are from Han et al. (2022). Results for CnC are from Zhang et al. (2022). The numbers in parentheses denote one standard deviation from the mean across 3 random seeds.

|        | Waterbird   |             | CelebA      |             | MultiNLI    |             | CivilComments |             |
|--------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Method | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      |
| C-DRO  | 96.0        | 75.9        | 82.5        | 64.4        | 82.0        | 68.0        | 92.5        | 60.5        |
| LfF    | 91.2        | 78.0        | 85.1        | 77.2        | 80.8        | 70.2        | 92.5        | 58.8        |
| EIIL   | 91.2        | 78.0        | 85.1        | 77.2        | 80.8        | 70.2        | 92.5        | 58.8        |
| JTT    | 93.9        | 86.7        | 88.0        | 88.1        | 78.6        | 72.6        | 91.1        | 69.3        |
| UMIX   | 93.0 (0.5)  | 90.0 (1.1)  | 90.1 (0.4)  | 85.3 (4.1)  | N/A         | N/A         | 90.6 (0.4)  | 70.1 (0.9)  |
| CnC    | 90.9 (0.1)  | 88.5 (0.3)  | 89.9 (0.9)  | 88.8 (0.9)  | N/A         | N/A         | 81.7 (0.5)  | 68.9 (2.1)  |
| SSA    | 92.2 (0.87) | 89.0 (0.55) | 92.8 (0.11) | 89.8 (1.28) | 79.9 (0.87) | 76.6 (0.66) | 88.2 (1.95) | 69.9 (2.02) |
| CROIS  | 92.1 (0.29) | 90.9 (0.12) | 91.6 (0.61) | 88.5 (0.87) | 81.4 (0.06) | 77.4 (1.21) | 90.6 (0.20) | 70.3 (0.34) |

Additional baselines for group labels from the training set. In the setting where group labels are available from the training set, we compare our method against additional baselines such as LISA (Yao et al., 2022) and CAMEL in Table 19.

Table 19: Additional comparison where group labels from the training set are available. Results for LISA are taken from Yao et al. (2022). Results for CAMEL are taken from Model Patching. Here, p denotes the group-labeled fraction used for classifier retraining (with the remaining 1 − p unlabeled fraction used for the ERM phase).

|                  | Waterbird   |             | CelebA      |             | MultiNLI    |             | CivilComments |             |
|------------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Method           | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      | Avg Acc     | Wg Acc      |
| ERM              | 96.9        | 69.8        | 95.6        | 44.4        | 82.8        | 66.0        | 92.1        | 63.2        |
| GDRO             | 93.2†       | 86.0†       | 91.8†       | 88.3†       | 81.4†       | 77.7†       | 89.6 (0.23) | 70.5 (2.10) |
| LISA             | 91.8 (0.3)  | 89.2 (0.6)  | 92.4 (0.4)  | 89.3 (1.1)  | N/A         | N/A         | 89.2 (0.9)  | 72.6 (0.1)  |
| CAMEL            | 90.9 (0.9)  | 89.1 (0.4)  | N/A         | N/A         | N/A         | N/A         | N/A         | N/A         |
| CROIS (p = 0.10) | 95.4 (1.10) | 83.5 (3.24) | 91.3 (0.36) | 90.3 (0.82) | 80.8 (0.51) | 75.3 (2.06) | 89.5 (1.81) | 68.7 (1.72) |
| CROIS (p = 0.30) | 90.8 (0.35) | 89.6 (1.15) | 91.3 (0.44) | 90.6 (0.95) | 80.0 (0.31) | 77.9 (0.17) | 89.7 (0.33) | 68.6 (1.53) |

## A.8 Fraction of the Validation Set: Implementation Details from Section 4.1

Following the setup in Section 4.1 and the setup as in Liu et al. (2021); Nam et al.
(2022), we further reduce the validation set to only a small fraction: 5%, 10%, and 20%. We investigate CROIS's performance in this setting with very few group labels on CelebA and Waterbird in Section 4.1. We note that the highly reduced sample size poses a new challenge and makes it harder to simply reuse the default parameters.

- Tuning ℓ2 **regularization:** With so little data, overfitting can become a bigger problem, even when training only a low-capacity linear classifier. Hence, we tune higher values of the ℓ2 regularization across {10−4, 10−2, 1, 10}.
- **Tuning the learning rate:** We also tune the learning rate across {10−5, 10−4, 10−3, 10−2} instead of simply reusing the default parameters.
- **The use of group labels and model selection:** Since the number of examples for classifier retraining is now significantly reduced, it might be wasteful to further split the available group labels for validation. Instead, we use all available group labels for robust classifier retraining and perform model selection in the second phase via the *train worst-group accuracy*. The low-capacity linear layer and higher ℓ2 regularization allow us to avoid overfitting when performing model selection this way. The feature extractor from the first phase is selected via the best average accuracy on the full group-unlabeled validation set.
- **Smaller batch size:** Since group DRO requires group-balanced sampling, a batch size greater than the number of examples in some group would cause duplicate sampling of minority-group examples within the same step, artificially increasing the weight of that group. We therefore also tune the batch size over a grid of powers of 2 smaller than the size of the smallest group or the default batch size (e.g., we search across {4, 8, 16} if the size of the smallest group is 17).

In Table 2, we present the results for CROIS with the above modifications and compare them to CROIS and JTT.
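The batch-size concern in the last bullet can be checked directly: with group-balanced sampling, each batch draws batch_size/G examples per group, so a per-group quota exceeding the smallest group's size forces duplicates within a single batch. The sampler below is our illustration of this effect, not the paper's implementation:

```python
import numpy as np

def group_balanced_batch(indices_by_group, batch_size, rng):
    """Draw one batch with an equal number of examples per group.
    Duplicates within the batch are forced whenever the per-group
    quota exceeds a group's size."""
    per_group = batch_size // len(indices_by_group)
    batch = []
    for idx in indices_by_group:
        replace = per_group > len(idx)  # forced duplicate sampling
        batch.extend(rng.choice(idx, size=per_group, replace=replace))
    return np.asarray(batch)

# hypothetical index sets: majority group 0 has 100 examples, minority group 1 has 17
indices_by_group = [np.arange(0, 100), np.arange(100, 117)]
rng = np.random.default_rng(0)
big = group_balanced_batch(indices_by_group, batch_size=64, rng=rng)    # quota 32 > 17: duplicates
small = group_balanced_batch(indices_by_group, batch_size=16, rng=rng)  # quota 8 <= 17: no duplicates
print(len(np.unique(big[big >= 100])), len(np.unique(small[small >= 100])))
```

With batch size 64, the minority group contributes 32 slots but at most 17 distinct examples, which is exactly the implicit up-weighting the tuned smaller batch sizes avoid.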
There, robust retraining for CelebA is performed with an ℓ2 regularization of 0.1, batch size of 8, and learning rate 10−5. For Waterbird, we found that batch size 8, weight decay 1, and learning rate 10−5 are best for 20% and 10% reduction. For 5% reduction, we further reduce the batch size to 4 (since the minority group only has 7 examples) and increase the weight decay to 10. The results show that CROIS maintains its robust performance even at greatly reduced group labels. This implies that even a few (minority) examples can help debiase the final layer classifier with proper configurations. ## B Proofs Of Theorem 1 Proof of Theorem 1. We look to apply the standard Rademacher Complexity generalization bound on the function class F|S. Recall that the standard Rademacher Complexity generalization bound (see for example Theorem 13.1 in Telgarsky (2021) and references therein) gives that for a function class F where for each f ∈ F, f ∈ [*a, b*], and dataset S = {z1, z2*, . . . , z*n} with n i.i.d. samples from some population distribution, we have that with probability at least 1 − δ $$\operatorname*{sup}_{f\in\mathcal{F}}\mathbb{E}\left[f(Z)\right]-{\frac{1}{n}}\sum_{i=1}^{n}f(z_{i})\leq{\frac{2}{n}}R(\mathcal{F}_{|S})+3\left(b-a\right){\sqrt{\frac{\ln\left(2/\delta\right)}{2n}}},$$ where for Rademacher random variables ϵi *∈ {−*1, +1}, we have that $$R({\mathcal{F}}_{|S}):=\mathbb{E}_{\epsilon}\left[\operatorname*{sup}_{f\in{\mathcal{F}}}\sum_{i=1}^{n}\epsilon_{i}f(z_{i})\right]$$ is the Rademacher complexity of F. Now, we simply bound the Radamacher Complexity of F|S (Theorem 14.1 of Telgarsky (2021)): $$R({\mathcal{F}}_{|S})\leq\left\|X\right\|_{2,\infty}\left(2\rho B\right)^{L}{\sqrt{2\ln\left(d\right)}}.$$ This gives us the generalization bound. Proof of Theorem 2. If we collect samples x1*, . . . 
, xn into rows of $X \in \mathbb{R}^{n\times d}$, then Theorem 13.3 from Telgarsky (2021) gives that
$$R\left({\mathcal{F}}_{|S}\right)\leq B\cdot\left\|X\right\|_{F}.$$
Similarly, if we collect the samples of Si into rows $X^{(i)} \in \mathbb{R}^{n_i\times d}$, then we have
$$R\left({\mathcal{F}}_{|S_{i}}\right)\leq B\cdot\left\|X^{(i)}\right\|_{F}.$$
Then combining with the Rademacher-complexity generalization bound gives that with probability at least 1 − δ,
$$\operatorname*{sup}_{f\in{\mathcal{F}}}\mathbb{E}\left[f(Z)\right]-{\frac{1}{n}}\sum_{i=1}^{n}f(z_{i})\leq{\frac{2}{n}}B\cdot\|X\|_{F}+3\left(\ln(2)+B\right){\sqrt{\frac{\ln\left(2/\delta\right)}{2n}}},$$ where the range of f ∈ F|S follows from ℓ(⟨w, xy⟩) ≤ ln(2) + ⟨w, xy⟩ ≤ ln(2) + B. For a group i ∈ G with training samples X(i) and f being linear classifiers with max norm B, we have that with probability at least 1 − δ,
$$\sup_{f\in\mathcal{F}}\mathbb{E}_{(X,Y)|g(X)=i}\left[\ell(f(X),Y)\right]-\frac{1}{n_{i}}\sum_{j}\ell(f(x_{j}),y_{j})\leq\frac{2}{n_{i}}B\cdot\left\|X^{(i)}\right\|_{F}+3\left(\ln(2)+B\right)\sqrt{\frac{\ln\left(2/\delta\right)}{2n_{i}}}.$$
Now, letting $\hat{R}_i(f) := \frac{1}{n_i}\sum_j \ell(f(x_j), y_j)$ and $R_i(f) := \mathbb{E}_{(X,Y)|g(X)=i}[\ell(f(X), Y)]$ denote the empirical and population group loss of f, and taking a union bound over all the groups, we have that with probability at least 1 − δ,
$$\max_{i\in G}\sup_{f\in\mathcal{F}}R_{i}(f)-\hat{R}_{i}(f)\leq\max_{i\in G}\frac{2}{n_{i}}B\cdot\left\|X^{(i)}\right\|_{F}+3\left(\ln(2)+B\right)\sqrt{\frac{\ln\left(2G/\delta\right)}{2n_{i}}}.$$
Since the RHS is finite, we can swap the max and sup on the LHS and obtain
$$\operatorname*{sup}_{f\in\mathcal{F}}\operatorname*{max}_{i\in G}R_{i}(f)-\hat{R}_{i}(f)\leq\operatorname*{max}_{i\in G}\frac{2}{n_{i}}B\cdot\left\|X^{(i)}\right\|_{F}+3\left(\ln(2)+B\right)\sqrt{\frac{\ln\left(2G/\delta\right)}{2n_{i}}}.$$
Implications and takeaways: Comparison to standard pretraining and finetuning.
As mentioned in the related works section as well as during the discussion of the motivation for our method, pretraining and then finetuning is a now well-known and established strategy in many domains. Our work differs most significantly from this standard strategy through:

1. The use of independent splits: Traditional pretraining and finetuning reuses the dataset for both phases, with the possibility of additional labels (contrastive learning, long-tailed learning, etc.). In our paper, we demonstrate through extensive experiments the importance of independent splits when performing classifier retraining.
2. The use of a group-robust algorithm for finetuning: We mainly utilize group DRO for the classifier retraining phase. In contrast, most other works utilize strategies like reweighing or subsampling for finetuning. We demonstrated in our experiments that group DRO yields the best robust performance over other methods.

Our work provides evidence that the features of ERM-trained DNNs are rich enough to solve the group-shift problem (when an abundant amount of group labels is available to retrain the classifier) and that one of the major reasons for the poor worst-group performance of an ERM-trained DNN lies within its classifier layer. We then further demonstrate that even a few group labels can sufficiently "fix" the classifier to achieve better group-robust performance. This knowledge can potentially be useful for solving the seemingly much harder group-agnostic setting (where no group label is available) by allowing the practitioner to focus on obtaining a robust classifier given an ERM-trained feature extractor. Our experiments further show that reasonable robustness can be achieved with relatively few group labels (that are not used to obtain the feature extractor), which brings the problem closer within reach.
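To make the two design choices above concrete, here is a minimal sketch of the second phase: a linear classifier retrained on frozen features with an online group-DRO update in the style of Sagawa et al. The binary-label setting, the manual gradient updates, and all hyperparameters are our own simplifications for illustration, not the paper's configuration:

```python
import numpy as np

def retrain_last_layer_group_dro(Z, y, g, n_groups, lr=0.5, eta=0.1,
                                 steps=300, l2=1e-2):
    """Phase two sketched: keep the feature extractor frozen (Z holds its
    outputs on the group-labeled split) and retrain only a linear
    classifier with group DRO, up-weighting the worst group online.
    Binary labels y in {0, 1}; g holds integer group ids.
    All hyperparameters here are illustrative assumptions."""
    n, d = Z.shape
    w, b = np.zeros(d), 0.0
    q = np.ones(n_groups) / n_groups              # adversarial group weights
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))    # sigmoid predictions
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        group_loss = np.array([losses[g == k].mean() for k in range(n_groups)])
        q *= np.exp(eta * group_loss)             # exponentiated-gradient step
        q /= q.sum()
        wts = q[g] / np.bincount(g)[g]            # weight each example by its group
        grad = (p - y) * wts                      # d(weighted BCE)/d(logits)
        w -= lr * (Z.T @ grad + l2 * w)
        b -= lr * grad.sum()
    return w, b
```

On a separable toy problem with a 40-example majority group and a 6-example minority group, this recovers a classifier that is correct on both groups, mirroring the observation that a few group labels suffice to fix the last layer.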
Review 1: Summary: This paper presents a simple method to improve the group robustness of a classifier. The main idea is to first train a feature extractor through a conventional ERM procedure, and then retrain only the last layer (classification layer) on the group-labeled split. The resulting procedure is much simpler than existing methods, which often require complicated procedures such as pseudo-group-labeling or tuning sensitive hyperparameters, and it does not require tons of group-labeled examples. Through extensive empirical validation, the proposed method (coined CROIS) is demonstrated to outperform existing methods and, more importantly, to be easier to use, in the sense that it requires less hyperparameter tuning and is less sensitive to the various choices to be accounted for in classifier training for group robustness. The paper also presents a simple generalization bound for the worst group loss. Strengths and Weaknesses: Strengths - The paper is well-written and easy to follow. - The proposed method is simple and versatile; I like that it requires less hyperparameter tuning or training tricks. - The experimental results are promising. Weaknesses - As the authors already admitted in the paper, there is a concurrent work (Kirichenko et al., 2022) presenting a similar method. I'm not sure about the journal's policy on concurrent work, but I don't want to give a discount due to the concurrent work. - The theoretical analysis is a bit weak; it is a rather straightforward application of standard theory to a classification problem with groups, and does not seem to provide further insight about the empirical findings. Requested Changes: The argument in section 5.3 is not that convincing. While it is true that the average group loss is a lower bound on the worst group loss, since it is a lower bound, it is not clear how reducing the average group loss is related to improving the worst group loss.
Moreover, the bound itself does not tell us anything about the tradeoff between the splitting proportion $p$ and the worst group loss, nor anything about the quality of the feature extractor. This is to be expected, to be honest, because the bound itself is not that informative in the first place. I'd like to suggest toning down the arguments in section 5.3, or providing more empirical observations to support them; for instance, one may test with varying $p$ values, measure the quality of the feature extractors via average group accuracies, and show the correlation between the average group generalization bound and the worst group generalization bound. Broader Impact Concerns: The paper does not raise potential broader impact concerns. ================================================== Review 2: Summary: This paper proposes a simple method named CROIS for learning a model to address the challenge of group-shift settings, i.e., that the performance on underrepresented classes in the training set might be low. The motivation of CROIS comes from the fact that DNNs trained by empirical risk minimization can provide good features, even though they might exploit spurious features. The proposed method is composed of two steps: first train a model without the group labels, and then train using group distributionally robust optimization (GDRO) with the group labels. The experiment results show that the method can improve the performance on the underrepresented group in the training set. Strengths and Weaknesses: Strengths: - The paper is well-motivated and well-written. - The idea is straightforward yet effective. - The method has theoretical support. Weakness: - The selection of hyper-parameters, especially $p$, needs to be done manually. It would be better to find a more principled way to pick this hyper-parameter; in my opinion, it controls a very important trade-off. - The proposed method seems to use the validation set's group labels in Tables 1 and 2.
This might be impractical for real-world applications, as group labels are not always available. Also, it could lead to an unfair comparison with other baseline methods. Requested Changes: To better address or clarify the concern of using the validation set's data for training. Broader Impact Concerns: No ================================================== Review 3: Summary: This paper proposed a sample-splitting method named _classifier retraining on independent splits_ (CROIS) for improving the worst group performance, using both group-labeled and group-unlabeled data. The author aimed to reduce the need for group labels and hyperparameter tuning. The proposed method was evaluated on several datasets. Strengths and Weaknesses: ## Strengths - This work aims to use as few group labels as possible, which is nice - The sample-splitting procedure can be combined with other robust training methods - The proposed method is simple and can be easily implemented ## Weaknesses - The motivation "ERM-trained models contain good features (?), features of memorized examples are bad, regularization is costly -> independent splits" is not sufficiently explained and thus not very convincing. The soundness of this approach should be clarified. - There is no assumption on the relationship between the group-labeled split and the group-unlabeled split. - The theoretical analysis seems not related to the proposed method? The analysis provides very limited insights and guarantees. Requested Changes: - I was surprised that the first sentence was erroneous. Being "independent and identically distributed" is a property of a collection of random variables, not two distributions (unless we are talking about distributions of distributions, which seems not to be the case here). - "Similar to other problems in OOD generalization, DNNs learned by ERM...": models similar to problems? (+ grammatical error) - Group DRO is the standard: please provide support.
- Please write the full name when an acronym is used for the first time. - "This framework encapsulates many problems" as well as all imaginable supervised learning problems, so it adds no information. In general, please proofread the manuscript and improve the academic writing. Please improve the explanation of the motivation of the proposed method. Broader Impact Concerns: The author did not present a Broader Impact Statement. The worst group performance is closely related to fairness in machine learning. The author should discuss how the proposed method may affect robustness and fairness if used for sensitive data. ================================================== Metareview: Recommendation: Accept as is Comment: The paper proposes a new approach to group robustness that requires minimal group annotation, and has accompanying theoretical guarantees. Three reviewers found the paper to be intuitive, well executed, and to be of sufficient interest to the community. ==================================================
# CDA: Contrastive-Adversarial Domain Adaptation

Anonymous authors Paper under double-blind review

## Abstract

Recent advances in unsupervised domain adaptation (UDA) reveal that adversarial learning on deep neural networks can learn domain-invariant features to reduce the shift between source and target domains. While such adversarial approaches achieve domain-level alignment, they ignore the class (label) shift. When class-conditional data distributions significantly differ between the source and target domain, they can generate ambiguous features near class boundaries that are more likely to be misclassified. In this work, we propose a two-stage model for UDA called Contrastive-adversarial Domain Adaptation (CDA). While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains, resulting in well-separated decision boundaries. Furthermore, the proposed contrastive framework is designed as a plug-and-play module that can be easily embedded with existing adversarial methods for domain adaptation. We conduct experiments on two widely used benchmark datasets for domain adaptation, namely, Office-31 and Digits-5, and demonstrate that CDA achieves state-of-the-art results on both datasets.

## 2 Introduction

Deep neural networks (DNNs) have significantly improved the state-of-the-art in many machine learning problems (Dong et al., 2021). When trained on large-scale labeled datasets, DNNs can learn semantically meaningful features that can be used to solve various downstream tasks such as object classification, detection, and language processing (Yosinski et al., 2014; Zhuang et al., 2020; Yadav & Ganguly, 2020). However, DNNs need to be qualified with caveats (Belkin et al., 2019) - they are understood to be brittle and tend to generalize poorly to new datasets (Neyshabur et al., 2017; Wilson & Izmailov, 2020).
Even a small shift compared to the training data can cause the deep network to make spurious predictions on the target domain. This phenomenon is known as domain shift (You et al., 2019; Ben-David et al., 2010), where the marginal probability distribution of the underlying data changes across different datasets or domains. A typical solution is to fine-tune a model trained on a sufficiently labeled dataset by leveraging the limited number of labeled samples from the target dataset (Chu et al., 2016; Long et al., 2017). However, in real-world problems, it might be expensive, or in some instances impossible (Singla et al., 2019), to collect sufficient labeled data in the intended (target) domain, leaving the fine-tuning or *transferring* process challenging to execute. Learning a model that reduces the dataset shift between training and testing distributions is known as domain adaptation (Ben-David et al., 2006). When no labeled data is available in the target domain, it is called unsupervised domain adaptation (UDA) (Ganin & Lempitsky, 2015; Wilson & Cook, 2020), which is the focus of this work. While the earliest domain adaptation methods worked with fixed feature representations, recent advances in deep domain adaptation (DDA) embed domain adaptation modules within deep learning architectures. Thus, domain adaptation and feature learning are achieved simultaneously (end-to-end) in a single training process. One of the most well-known approaches to DDA is the use of adversarial learning for reducing the discrepancy between the source and target domain (Ganin et al., 2016; Tzeng et al., 2017; Pei et al., 2018; Long et al., 2018). Adversarial domain adaptation (ADA) approaches domain adaptation as a minimax game, similar to how Generative Adversarial Networks (GANs) (Creswell et al., 2018) work.
![1_image_0.png](1_image_0.png)

Figure 1: Illustration of the improvements proposed by CDA for unsupervised domain adaptation (UDA). (A) Existing adversarial methods for UDA align the source and target domain only at the domain level, ignoring class boundaries. (B) In comparison, CDA achieves both domain- and class-level alignment in a multi-step training regime. In step 1, CDA performs supervised contrastive learning on the labeled source domain, resulting in better intra-class compactness and well-separated decision boundaries for the target domain to align. In the next step, adversarial learning leads to domain-level alignment, while cross-domain contrastive learning pulls target samples to align with similar samples from the source domain and pushes away dissimilar clusters.

An auxiliary domain discriminator is trained to distinguish latent feature embeddings from the source and target domains. At the same time, a deep neural network learns feature representations that are indistinguishable by the domain discriminator. In other words, the deep network, comprising a generator and a dense head, and the domain discriminator try to fool each other, resulting in latent features that cannot be distinguished by which domain they come from. Although ADA achieves domain-level alignment, it fails to capture the multimodal structure within a specific domain's data distribution (Wang et al., 2020; Zhang et al., 2019). Even if a domain discriminator is fully confused, there is no guarantee of class-level alignment. In scenarios where class-conditional distributions across domains are significantly different, ADA can generate ambiguous features near class boundaries that are more likely to be misclassified (see Figure 1) (Chen et al., 2019a). Some of the recent works have tried to tackle the problem of class-level alignment via training separate domain discriminators Pei et al. (2018) Wang et al.
(2019); however, this gives rise to convergence issues amid the lack of an equilibrium guarantee. Other works directly encode class information in the domain adaptation module (Long et al., 2018; Chen et al., 2019b). In this work, we propose a novel two-stage domain adaptation mechanism called Contrastive-adversarial Domain Adaptation (CDA). CDA leverages the mechanism of contrastive learning (Le-Khac et al., 2020; Purushwalkam & Gupta, 2020) for achieving class-level alignment in tandem with adversarial learning, which focuses on domain-level alignment. The idea of contrastive learning is to learn an embedding space where similar data samples - and corresponding features - lie close to each other, while dissimilar samples are pushed away. Although contrastive learning has been most successfully used in self-supervised learning (Chen et al., 2020; Grill et al., 2020; Caron et al., 2021) tasks, the underlying idea can be exploited to solve domain adaptation. The contrastive module improves intra-class compactness (stage-I) and class-conditioned alignment (stage-II), while ADA focuses on the overall domain-level alignment. The expected outcome is a more tightly coupled domain alignment that is class-aware.

![2_image_0.png](2_image_0.png)

Figure 2: An overview of the two-stage CDA framework. In stage-I (A), we perform supervised contrastive learning (CL) using the labeled source dataset. The motivation is to achieve better intra-class compactness and well-separated decision boundaries to make class-level alignment in stage-II (B) easier to perform. Stage-II is where the actual domain adaptation (DA) occurs, using a combination of adversarial and cross-domain contrastive loss. The overall CDA objective function comprises multiple losses that are optimized in tandem to achieve DA. For a detailed explanation, see section 3. (figure best viewed in color).
We conduct experiments on two benchmark datasets for UDA (Office-31 and Digits-5) to demonstrate that CDA achieves state-of-the-art results.

## 2.1 Contributions

The key contributions of this work can be summarized as follows:

- We propose a novel two-stage deep domain adaptation method (CDA) that combines contrastive and adversarial approaches for unsupervised domain adaptation (UDA).
- Experiments show the efficacy of our proposed method by achieving state-of-the-art results on well-known benchmark datasets for UDA.
- The proposed contrastive module can be easily embedded within existing adversarial domain adaptation methods for improved performance.

## 3 Related Work

## 3.1 Unsupervised Domain Adaptation (UDA)

The central idea of UDA is to learn domain-invariant feature representations. While the earliest (shallow) approaches worked with fixed features, current methods combine the expressiveness of deep neural networks with domain adaptation for end-to-end learning (Ganin & Lempitsky, 2015; Long et al., 2017; Chen et al., 2019a). There is extensive literature on deep domain adaptation methods, ranging from moment matching to more recent adversarial approaches. Both approaches aim to minimize the discrepancy between the source and target domain. While moment-matching methods explicitly minimize the difference using a loss function such as Maximum Mean Discrepancy (MMD) (Long et al., 2015; Long et al., 2017), adversarial methods seek to reduce the discrepancy using an adversarial objective which pits two networks against each other - a generator and a discriminator. For domain adaptation, the generator's goal is to produce latent features the domain discriminator cannot classify correctly. Doing so generates domain-invariant feature representations, i.e., the target domain gets aligned with the source domain. A common criticism of the earliest ADA methods was that they only result in domain-level alignment and ignore class-specific distributions.
Recent works have built on the seminal work of Ganin & Lempitsky (2015) in the context of ADA - they attempt to incorporate class-level information in the model for achieving a more tightly coupled alignment across domains (Long et al., 2018; Chen et al., 2019b; Pei et al., 2018).

## 3.2 Contrastive Learning

Contrastive learning (CL) has achieved state-of-the-art results in self-supervised representation learning (Chen et al., 2020; Grill et al., 2020). The goal of CL is to learn a model where feature representations of similar samples lie close to each other in the latent space, and dissimilar samples lie further apart. In the absence of labels, an augmented version of a sample is generated to create a positive (similar) pair. The other samples in the training minibatch become negative pairs. Entropy-based loss functions that simultaneously maximize the similarity of positive pairs and minimize the similarity of negative pairs are used. Recent works (Caron et al., 2021) have shown that contrastive learning can learn semantically meaningful feature representations that can be used to solve various downstream tasks, and can even outperform representations learned in supervised settings (Caron et al., 2020).

## 3.3 Contrastive Learning For UDA

Recent works have applied the core principle of CL to domain adaptation tasks. Carlucci et al. (2019) used a pretext task (solving a jigsaw puzzle) for self-supervision to solve domain adaptation. Kim et al. (2021) proposed cross-domain self-supervised learning, which Yue et al. (2021) extended to align cluster-based class prototypes across domains for few-shot learning. Singh (2021) used CL with strongly augmented pairs to reduce the intra-domain discrepancy. Picking the appropriate augmentations for CL is heuristic and may not generalize to other datasets with the same model. We avoid data augmentation using a two-stage CL approach.
To the best of our knowledge, this is the first work that systematically integrates CL with adversarial methods for the problem of unsupervised domain adaptation.

## 4 Contrastive-Adversarial Domain Adaptation

## 4.1 Problem Formulation

In UDA, we aim to transfer a model learned on a labeled source domain to an unlabeled target domain. We assume that the marginal probability distributions of the two domains are not equal, i.e., $P(\mathcal{X}_s) \neq P(\mathcal{X}_t)$. We are given a labeled source dataset $D_s = (\mathcal{X}_s, \mathcal{Y}_s) = \{(x_s^i, y_s^i)\}_{i=1}^{n_s}$ and an unlabeled dataset in the target domain $D_t = \mathcal{X}_t = \{x_t^i\}_{i=1}^{n_t}$ with $n_s$ and $n_t$ samples, respectively. Both $\{x_s^i\}$ and $\{x_t^i\}$ belong to the same set of $N$ classes with $P(\mathcal{X}_s) \neq P(\mathcal{X}_t)$. The goal is to predict labels for test samples in the target domain using the model $(G, C): \mathcal{X}_t \to \mathcal{Y}_t$ trained on $D_s \cup D_t$. The trained model includes a feature generator $G: \mathcal{X}_t \to \mathbb{R}^d$ and a classifier $C: \mathbb{R}^d \to \mathbb{R}^N$, where $d$ is the dimension of the intermediate features produced by the generator.

## 4.2 Model Overview

CDA is a two-stage model with three major components - a feature generator G, a classifier C, and an auxiliary domain classifier D (Figure 2). Further, a contrastive module is spaced between G and C. Broadly, there are two objectives achieved by the CDA model: 1) domain-level alignment using adversarial learning, and 2) class-level alignment using contrastive learning. The following sections describe the mechanism of each objective in detail.

## 4.3 Domain-Level Adversarial Learning

Adversarial learning aims to learn domain-invariant features by training the feature generator G and domain discriminator D with competing (minimax) objectives. The adversarial component is adapted from the seminal work of Ganin & Lempitsky (2015) (DANN), which originally proposed the idea.
Algorithm 1: Contrastive-adversarial Domain Adaptation

    Input: labeled source dataset Ds = {Xs, Ys}, unlabeled target dataset Dt = {Xt},
           max epochs E, iterations per epoch K, model (C, D, G)
    Output: trained model (G, C)
    for e = 1 to E do
        for k = 1 to K do
            Sample batch {xs, ys} from Ds and compute LSupCL + LCE using Eqn. 3
            if e ≥ E′ then
                Sample batch {xt} from Dt and compute LAdv using Eqn. 1
            end
            if e ≥ E′′ then
                LSupCL = 0
                Generate pseudo-labels yt and compute LCrossCL using Eqn. 5
            end
            Compute LTotal using Eqn. 7
            Backpropagate and update C, D and G
        end
    end

As a first step in the zero-sum game, G takes the labeled source and unlabeled target domain inputs and generates feature embeddings zs and zt. In the next step, D takes the feature embeddings and attempts to classify them as either coming from the source or target domain. The goal of G is to fool the discriminator such that the output feature embeddings cannot be classified correctly by D. This is achieved by training D and G with an adversarial loss LAdv with gradient reversal (for G). For a given source sample xs ∼ Xs and target sample xt ∼ Xt, LAdv can be formulated as a binary cross-entropy loss:
$$\mathcal{L}_{Adv}(\mathcal{X}_{s},\mathcal{X}_{t})=\sum_{\substack{\mathbf{x}_{s}\sim\mathcal{X}_{s}\\ \mathbf{x}_{t}\sim\mathcal{X}_{t}}}\left[\log\left(\mathcal{D}\left(\mathcal{G}\left(\mathbf{x}_{t}\right)\right)\right)+\log\left(1-\mathcal{D}\left(\mathcal{G}\left(\mathbf{x}_{s}\right)\right)\right)\right]\tag{1}$$
with the following objective:
$$\operatorname*{min}_{\mathcal{G}}\operatorname*{max}_{\mathcal{D}}\left(\mathcal{L}_{Adv}\right)\tag{2}$$
In other words, G tries to minimize LAdv while D learns to maximize it.
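A minimal numerical sketch of Eq. (1) and the gradient reversal trick follows; this is our toy illustration, not the paper's implementation (DANN realizes the reversal as an autograd layer inside the network):

```python
import numpy as np

def adv_loss(d_src, d_tgt):
    """Eq. (1): binary cross-entropy on discriminator outputs
    D(G(x)) in (0, 1). D ascends this objective; G descends it."""
    return np.sum(np.log(d_tgt) + np.log(1.0 - d_src))

def grad_reversal(grad, lam=1.0):
    """Backward pass of a gradient reversal layer: the forward pass is
    the identity, and the incoming gradient is scaled by -lam, so a
    single backward pass makes D maximize and G minimize L_Adv."""
    return -lam * grad

# A fully confused discriminator outputs 0.5 on every sample:
d = np.array([0.5, 0.5])
print(adv_loss(d, d))  # 4 * log(0.5), about -2.77
```

At the equilibrium where D outputs 0.5 everywhere, the loss is maximal for D and minimal for G, matching the min-max objective in Eq. (2).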
The theoretical argument is that convergence will result in domain-invariant feature embeddings. However, such an adversarial approach only results in domain-level alignment without considering the complex multi-mode class distributions present in the source and target domain. Even when the domain discriminator is fully confused, there is no guarantee that the classifier can successfully discriminate target samples based on the class labels. The absence of class-level alignment results in under-transfer or negative transfer when the class-conditional distributions are significantly different across the two domains.

## 4.4 Class-Discriminative Contrastive Learning

To generate feature embeddings that are not only domain-invariant but also class-discriminative across the two domains, CDA proposes a contrastive learning-based (CL) module. For clarification, the CL module is not a neural network per se. It is an intermediary component that links G, D, and C, and where the proposed two-stage contrastive objective is optimized.

Stage I: The CL module performs supervised contrastive learning on the source domain. In every batch, samples from the same class are considered positive pairs, while samples from different classes are automatically assigned as negative pairs. Training progresses by optimizing a modified InfoNCE loss (Chen et al., 2020), where NCE stands for Noise-contrastive Estimation (see Eq. 4). Although CL is best associated with self-supervised representation learning, recent works (Khosla et al., 2020) have shown that minimizing a contrastive loss can outperform the standard cross-entropy loss for supervised classification tasks. The idea is that clusters of samples belonging to the same class are pulled together in the embedding space while simultaneously pushing apart clusters of samples from different classes, creating well-separated decision boundaries for better aligning the target domain samples in the next step.
The combined objective function during stage-I is as follows:
$$\mathcal{L}_{StageI}=\mathcal{L}_{SupCL}+\mathcal{L}_{CE}\tag{3}$$
$$\mathcal{L}_{SupCL}(\mathcal{X}_{s},\mathcal{Y}_{s})=-\sum_{\mathbf{z},\mathbf{z}^{+}\in D_{s}}\log\frac{\exp(\mathbf{z}^{\top}\mathbf{z}^{+}/\tau)}{\exp(\mathbf{z}^{\top}\mathbf{z}^{+}/\tau)+\sum_{\mathbf{z}^{-}\in D_{s}}\exp(\mathbf{z}^{\top}\mathbf{z}^{-}/\tau)}\tag{4}$$
where LCE is the standard cross-entropy loss for multiclass classification and LSupCL is the supervised contrastive loss applied to samples from the labeled source domain. The variable z denotes the l2-normalized latent embedding generated by G for an input sample xs. The variable τ is the temperature (hyperparameter), which affects how the model learns from hard negatives (Chuang et al., 2020).

Stage II: For class-level alignment, CDA performs cross-domain contrastive learning. It is based on the understanding that samples belonging to the same class across the two domains should cluster together in the latent embedding space. Unlike supervised CL in stage-I, samples from the same class across domains are considered positive pairs, and samples from different classes become negative pairs. However, this requires labels for the target domain, which are not available. Some of the current methods in this space generate pseudo-labels using k-means clustering (Singh, 2021). Clustering on the source domain is either performed once during preprocessing or every few epochs during training, and target labels are assigned based on the nearest cluster centroid. We argue that both approaches are sub-optimal and propose making target label generation part of the training process itself, without the need to perform clustering.
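The stage-I supervised contrastive objective of Eq. (4) can be sketched directly on toy embeddings; averaging over anchor-positive pairs is our simplification:

```python
import numpy as np

def sup_con_loss(Z, y, tau=0.1):
    """Supervised contrastive loss in the spirit of Eq. (4): for each
    anchor z and same-class positive z+, the denominator adds all
    different-class negatives z-. Z is l2-normalized row-wise; y holds
    class labels. The mean over pairs is an illustrative choice."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = np.exp(Z @ Z.T / tau)
    loss, n_pairs = 0.0, 0
    for i in range(len(y)):
        neg = sim[i, y != y[i]].sum()           # sum over negatives z-
        for j in range(len(y)):
            if i != j and y[i] == y[j]:         # positive pair (z, z+)
                loss -= np.log(sim[i, j] / (sim[i, j] + neg))
                n_pairs += 1
    return loss / max(n_pairs, 1)
```

On tight, well-separated classes (e.g., two classes on orthogonal axes) the loss is near zero, while shuffling the labels makes it large, which is exactly the intra-class compactness that stage-I targets.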
$$\mathcal{L}_{CrossCL}(\mathcal{X}_{s},\mathcal{Y}_{s},\mathcal{X}_{t})=-\sum_{i=1}^{N}\log\frac{\exp(\mathbf{z}_{s}^{i\top}\mathbf{z}_{t}^{i}/\tau)}{\exp(\mathbf{z}_{s}^{i\top}\mathbf{z}_{t}^{i}/\tau)+\sum_{k=1,k\neq i}^{N}\exp(\mathbf{z}_{s}^{i\top}\mathbf{z}_{t}^{k}/\tau)}\tag{5}$$
where LCrossCL is the cross-domain contrastive loss in stage-II, and zs and zt are the l2-normalized embeddings from the source and target domain, respectively. The superscripts i and k identify the class labels (pseudo-labels in the case of the target domain).

## 4.5 CDA: Overall Framework

In CDA, we take a multi-step approach to optimize multiple objective functions during training. In the first stage, we train only on the source domain for the first E′ epochs (hyperparameter) to ensure the model reaches a certain level of classification accuracy. Next, we initiate the process for domain-level alignment as described above. We add LAdv to the overall objective function using a time-varying weighting scheme λ. Once we have achieved well-separated clustering in the source domain and some level of domain alignment, we gradually introduce the last loss function LCrossCL. The (pseudo) target labels are obtained by executing a forward pass on the model (G, C): yt = argmax(C(G(xt))). Some target samples are expected to be misclassified initially, but as training continues and target samples get aligned, the decision boundaries are updated accordingly, and model performance improves with each iteration. LCrossCL pulls same-class clusters in the two domains closer to each other and pushes different clusters further apart. Finally, we also employ a standard cross-entropy loss LCE during the entire training process to keep track of the classification task.
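The cross-domain loss of Eq. (5) can be sketched as follows; using one l2-normalized class-mean embedding per domain and class is our own simplification (the loss can equally be computed over individual same-class sample pairs), and the pseudo-labels stand in for yt = argmax(C(G(xt))):

```python
import numpy as np

def cross_con_loss(Zs, ys, Zt, yt_pseudo, n_classes, tau=0.1):
    """Eq. (5), sketched with class-mean embeddings: the positive pair
    is (z_s^i, z_t^i) for the same class i across domains, and the
    negatives are the target embeddings of the other classes k != i.
    yt_pseudo plays the role of the pseudo-labels from a forward pass."""
    def protos(Z, y):
        P = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
        return P / np.linalg.norm(P, axis=1, keepdims=True)
    sim = np.exp(protos(Zs, ys) @ protos(Zt, yt_pseudo).T / tau)
    # positives on the diagonal (same class i), negatives for k != i
    return -np.mean(np.log(np.diag(sim) / sim.sum(axis=1)))
```

When the pseudo-labels match the true class structure, same-class clusters across domains produce a small loss; swapping the pseudo-labels inflates it, which is the training signal that pulls target clusters toward their source counterparts.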
The overall training objective can be formulated as follows:
$$\mathcal{L}_{Total}=\mathcal{L}_{StageI}+\mathcal{L}_{StageII}\tag{6}$$
$$\mathcal{L}_{Total}=\mathcal{L}_{SupCL}+\mathcal{L}_{CE}+\lambda\,\mathcal{L}_{Adv}+\beta\,\mathcal{L}_{CrossCL}\tag{7}$$
where
$$\lambda=\begin{cases}0&\text{for epoch }0\leq e<E^{\prime}\\ \frac{2}{1+\exp(-\gamma p)}-1&\text{for epoch }e\geq E^{\prime}\end{cases}\tag{8}$$
$$\beta=\begin{cases}0&\text{for epoch }e\leq E^{\prime\prime}\\ \min\left(1,\alpha\cdot\frac{e-E^{\prime\prime}}{E^{\prime\prime}}\right)&\text{for epoch }E^{\prime\prime}<e\leq E\end{cases}\tag{9}$$
and E′ and E′′ (with E′′ ≥ E′) indicate the epochs when stage-I ends and when LCrossCL is introduced into the objective function, respectively. At any given epoch, only one type of contrastive learning is performed, i.e., for e ≥ E′′, LSupCL = 0 (see Algorithm 1). The scaling variables λ and β (hyperparameters) control the rate at which LAdv and LCrossCL are added to the overall objective function to maintain the stability of the training process; their values increase from 0 to 1.

## 5 Experiments

## 5.1 Datasets

We use two public benchmark datasets to evaluate our method: Office-31 is a common UDA benchmark that contains 4,110 images from three distinct domains - Amazon (A, with 2,817 images), DSLR (D, with 498 images), and Webcam (W, with 795 images). Each domain consists of 31 object classes. Our method is evaluated by performing UDA on each pair of domains, which generates 6 different tasks (Table 1). Digits-5 comprises a set of five datasets of digits 0-9 (MNIST, MNIST-M, USPS, SVHN, and Synthetic Digits) most commonly used to evaluate domain adaptation models. We use four of the five datasets and generate 3 different tasks (Table 2).
Both MNIST and MNIST-M contain 60,000 training and 10,000 testing samples each. SVHN is a more complex real-world image dataset with 73,257 samples for training and 26,032 samples for testing. The digits in SVHN are captured from house numbers in Google Street View images. SVHN has an additional class for the digit '10', which is ignored to match the label range of the other datasets. Finally, USPS is a smaller dataset with 7,291 training and 2,007 testing samples. We use all the available training samples for each task.

![7_image_0.png](7_image_0.png)

Figure 3: t-SNE visualizations for DANN and CDA to extract the contribution of the proposed contrastive module in learning domain-invariant yet class-discriminative embeddings. The analysis is for the MNIST (source) → MNIST-M (target) experiment. Each color represents one of the digits (0-9). (best viewed in color).

## 5.2 Baselines

We compare the performance of CDA with the following well-known method: (a) **DANN**, which originally proposed the idea of adversarial learning for domain adaptation; and with state-of-the-art methods that go beyond domain-level alignment: (b) **MADA** and (c) **iCAN**, which use multiple domain discriminators to capture the multimode structures in the data distribution; (d) **CDAN** and (e) **CDAN+BSP**, which condition the domain discriminator on class-discriminative information obtained from the classifier; (f) **GTA**, which proposes an adversarial image generation approach to directly learn the shared feature embeddings; (g) **GVB**, which proposes a gradually vanishing bridge mechanism for adversarial-based domain adaptation; (h) **ADDA**, which uses a separate discriminative loss in addition to the adversarial loss to facilitate class-level alignment; and (i) **MCD**, which uses task-specific classifiers and maximizes the discrepancy between them.

## 5.3 Implementation Details

Network Architecture: We use a ResNet-50 model pre-trained on ImageNet as the feature generator G.
The last fully connected (FC) layer in ResNet-50 is replaced with a new FC layer to match the dimensions of the intermediate feature embedding. Both the classifier C and the domain discriminator D are three-layer dense networks (512 → 256 → 128) with output dimensions of 10 (for 10 classes) and 1 (for identifying the domain), respectively.

Training Details: The CDA network is trained using the AdamW optimizer with a batch size of 32 and 128 for the Office-31 and Digits-5 datasets, respectively. The initial learning rate is set to 5e−4; a learning rate scheduler is used with a step decay of 0.8 every 20 epochs. We use one NVIDIA V100 GPU for the experiments. For a detailed discussion, see the supplementary material.

## 5.4 Results

The results on the Office-31 and Digits-5 datasets are reported in Tables 1 and 2, respectively. Our proposed method outperforms several baselines across different UDA tasks. Moreover, CDA achieves the best average accuracies on both datasets. Where CDA does not surpass the state-of-the-art accuracy, it achieves results comparable to the best score.

Table 1: Classification Accuracy on Office-31 Dataset

| Method | A→D | A→W | D→A | D→W | W→A | W→D | Avg. |
|---|---|---|---|---|---|---|---|
| DANN Ganin & Lempitsky (2015) | 79.5 | 81.8 | 65.2 | 96.4 | 63.2 | 99.1 | 80.8 |
| MADA Pei et al. (2018) | 87.8 | 90.0 | 70.3 | 97.4 | 66.4 | 99.6 | 85.2 |
| iCAN Zhang et al. (2018) | 90.1 | 92.5 | 72.1 | 98.8 | 69.6 | 100 | 87.2 |
| CDAN Long et al. (2018) | 91.7 | 93.1 | 71.3 | 98.6 | 69.3 | 100 | 87.3 |
| CDAN+BSP Chen et al. (2019b) | 93.0 | 93.3 | 73.6 | 98.2 | 72.6 | 100 | 88.4 |
| GTA Sankaranarayanan et al. (2018) | 87.7 | 89.5 | 72.8 | 97.9 | 71.4 | 99.8 | 86.5 |
| GVB Cui et al. (2020) | 95.0 | 94.8 | 73.4 | 98.7 | 73.7 | 100 | 89.3 |
| CDA (ours) | 93.6 | 94.0 | 74.7 | 98.6 | 78.9 | 100 | 89.9 |

Table 2: Classification Accuracy on Digits-5 Dataset

| Method | MNIST→MNIST-M | MNIST→USPS | SVHN→MNIST |
|---|---|---|---|
| DANN Ganin & Lempitsky (2015) | 84.1 | 90.8 | 81.9 |
| ADDA Tzeng et al. (2017) | - | 89.4 | 76.0 |
| CDAN Long et al. (2018) | - | 95.6 | 89.2 |
| CDAN+BSP Chen et al. (2019b) | - | 95.0 | 92.1 |
| MCD Saito et al. (2018) | - | 96.5 | 96.2 |
| CDA (ours) | 96.6 | 97.4 | 96.8 |

\* Best accuracy shown in bold and the second best as underlined.

A direct comparison can be made with DANN (see Section 4.5), with which CDA shares the same adversarial component, to highlight the effectiveness of the contrastive module. On average, CDA improves the accuracy on *Office-31* and *Digits-5* by approximately 9% and 11%, respectively, compared to DANN. Furthermore, CDA significantly outperforms two well-known approaches, MADA and CDAN, that also explicitly align domains at the class level.

## 5.5 CDA Hyperparameters

The choice of hyperparameters for training the CDA model is presented in Table 3. All values are searched using the random search method. In addition to the variables presented, we use a learning rate scheduler with a step decay of 0.8 every 10 (20) epochs for training CDA on *Office-31* (*Digits-5*). We also use a dropout value of 0.2-0.5 in the second-to-last dense layer in the classifier C and domain discriminator D.

## 5.6 Ablation Study

One of the key contributions of this work is that the proposed contrastive module can be easily embedded within existing adversarial domain adaptation methods for improved performance. We demonstrate this using the DANN architecture by Ganin et al. as the baseline model and embed the contrastive module to improve the average classification score by 9% and 11% on the *Office-31* and *Digits-5* datasets, respectively.
The training procedure (for DANN) requires minimal changes to adapt to the additional contrastive module (see Algorithm 2). We plot the t-SNE embeddings corresponding to the last layer in the respective classifiers (of DANN and CDA) for the MNIST to MNIST-M task (Figure 3). It can be seen that the contrastive module improves the adaptation performance. For DANN, although the source and target domains align with each other, labels are not well discriminated. The reason is that the original DANN approach does not consider class-discriminative information and only aligns at the domain level. As a result, feature embeddings near the class boundaries are prone to being misclassified, resulting in lower classification accuracy on the target domain, as can be seen in the case of DANN in Table 2. For CDA, the contrastive module first increases the inter-class

Table 3: CDA Hyperparameters

| Variable | Description | Value |
|---|---|---|
| lr | Learning rate | 5e-4 |
| bs | Batch size | 32, 128* |
| τ | Temperature scaling in contrastive loss | 0.5 |
| E | Total no. of training epochs | 90, 200* |
| E′ | No. of epochs when stage-I of training ends | 25, 40* |
| E′′ | No. of epochs when $\mathcal{L}_{CrossCL}$ is added to the overall objective function | 35, 60* |
| λ | Scaling factor for adversarial loss $\mathcal{L}_{Adv}$ | varies+ |
| β | Scaling factor for $\mathcal{L}_{CrossCL}$ | varies+ |

\* The first value corresponds to the *Office-31* dataset and the second value is for experiments involving *Digits-5*.
\+ The values increase from 0 to 1 using Eqs. 8 and 9, starting at epochs E′ and E′′, respectively.
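The ramp behavior referenced in the table footnote (Eqs. 8 and 9) can be sketched as follows. The specific values of γ and α, and the definition of the training progress p as e/E, are illustrative assumptions of ours rather than the authors' exact settings:

```python
import math

def lambda_schedule(e, E_prime, gamma=10.0, E_total=100):
    """Ramp for the adversarial weight (Eq. 8).

    Assumes training progress p = e / E_total in [0, 1];
    gamma controls how quickly lambda saturates toward 1.
    """
    if e < E_prime:
        return 0.0
    p = e / E_total
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0

def beta_schedule(e, E_dprime, alpha=1.0):
    """Ramp for the cross-domain contrastive weight (Eq. 9)."""
    if e <= E_dprime:
        return 0.0
    return min(1.0, alpha * (e - E_dprime) / E_dprime)
```

Both weights are 0 during stage-I and grow monotonically toward 1 afterwards, which matches the "varies+" entries in the table.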
Algorithm 2: Contrastive Module Embedded in DANN

```
Input : labeled source dataset Ds = {Xs, Ys}, unlabeled target dataset Dt = {Xt},
        max epochs E, iterations per epoch K, model (C, D, G)
Output: trained model (G, C)
for e = 1 to E do
    for k = 1 to K do
        Sample batch {xs, ys} from Ds and compute L_SupCL + L_CE
        if e ≥ E′ then
            Sample batch {xt} from Dt and compute L_Adv + L_CE
            if e ≥ E′′ then
                L_SupCL = 0
                Generate pseudo-labels yt and compute L_CrossCL
            end
        end
        Compute L_Total; backpropagate and update C, D, and G
    end
end
```

Note: The additional steps for training the contrastive module are shown in blue. For DANN, E′ = E′′ = 0.

separation in the source domain. It then aligns samples belonging to the same class across domains close to each other, leading to well-separated decision boundaries and improved classification accuracy. We conclude that with minimal tweaks to the training process, the proposed contrastive module in CDA can be embedded in existing adversarial methods for UDA for improved performance (see Figure 4 in Appendix A).

## 6 Conclusion

This paper proposes a new method for unsupervised domain adaptation (UDA) called Contrastive-adversarial Domain Adaptation (CDA). CDA improves upon existing adversarial methods for UDA by using a simple two-stage contrastive learning module that achieves well-separated class-level alignment in addition to the domain-level alignment achieved by adversarial approaches. CDA achieves this end-to-end in a single training regime, unlike some existing approaches. Furthermore, the contrastive module is proposed as a standalone component that can be embedded into existing adversarial methods for UDA. Our proposed method achieves better performance than several state-of-the-art methods on two benchmark datasets, demonstrating the effectiveness of our approach. Lastly, this work further motivates an emerging research area exploring the synergy between contrastive learning and domain adaptation.
## References Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. *Proceedings of the National Academy of Sciences*, 116(32):15849– 15854, 2019. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. *Advances in neural information processing systems*, 19, 2006. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. *Machine learning*, 79(1):151–175, 2010. Fabio M Carlucci, Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In *Proceedings of the IEEE/CVF Conference on Computer Vision* and Pattern Recognition, pp. 2229–2238, 2019. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. *Advances in Neural Information* Processing Systems, 33:9912–9924, 2020. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9650–9660, 2021. Chao Chen, Zhihong Chen, Boyuan Jiang, and Xinyu Jin. Joint domain alignment and discriminative feature learning for unsupervised deep domain adaptation. In *Proceedings of the AAAI conference on artificial* intelligence, volume 33, pp. 3296–3303, 2019a. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. 
In *International conference on machine learning*, pp. 1081–1090. PMLR, 2019b. Brian Chu, Vashisht Madhavan, Oscar Beijbom, Judy Hoffman, and Trevor Darrell. Best practices for fine-tuning visual classifiers to new domains. In *European conference on computer vision*, pp. 435–442. Springer, 2016. Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. *Advances in neural information processing systems*, 33:8765–8775, 2020. Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A Bharath. Generative adversarial networks: An overview. *IEEE Signal Processing Magazine*, 35(1):53–65, 2018. Shuhao Cui, Shuhui Wang, Junbao Zhuo, Chi Su, Qingming Huang, and Qi Tian. Gradually vanishing bridge for adversarial domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12455–12464, 2020. Shi Dong, Ping Wang, and Khushnood Abbas. A survey on deep learning and its applications. *Computer* Science Review, 40:100379, 2021. Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In *International conference on machine learning*, pp. 1180–1189. PMLR, 2015. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. *The journal of* machine learning research, 17(1):2096–2030, 2016. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. *Advances in Neural Information Processing Systems*, 33:21271–21284, 2020. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. 
Advances in Neural Information Processing Systems, 33:18661–18673, 2020. Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A Plummer, Stan Sclaroff, and Kate Saenko. Cds: Cross-domain self-supervised pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9123–9132, 2021. Phuc H Le-Khac, Graham Healy, and Alan F Smeaton. Contrastive representation learning: A framework and review. *IEEE Access*, 8:193907–193934, 2020. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In *International conference on machine learning*, pp. 97–105. PMLR, 2015. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In *International conference on machine learning*, pp. 2208–2217. PMLR, 2017. Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. *Advances in neural information processing systems*, 31, 2018. Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. *Advances in neural information processing systems*, 30, 2017. Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial domain adaptation. In Thirty-second AAAI conference on artificial intelligence, 2018. Senthil Purushwalkam and Abhinav Gupta. Demystifying contrastive self-supervised learning: Invariances, augmentations and dataset biases. *Advances in Neural Information Processing Systems*, 33:3407–3418, 2020. Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3723–3732, 2018. Swami Sankaranarayanan, Yogesh Balaji, Carlos D Castillo, and Rama Chellappa. Generate to adapt: Aligning domains using generative adversarial networks. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8503–8512, 2018. Ankit Singh. Clda: Contrastive learning for semi-supervised domain adaptation. *Advances in Neural Information Processing Systems*, 34, 2021. Ankush Singla, Elisa Bertino, and Dinesh Verma. Overcoming the lack of labeled data: Training intrusion detection models using transfer learning. In *2019 IEEE International Conference on Smart Computing* (SMARTCOMP), pp. 69–74. IEEE, 2019. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 7167–7176, 2017. Ximei Wang, Liang Li, Weirui Ye, Mingsheng Long, and Jianmin Wang. Transferable attention for domain adaptation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 5345–5352, 2019. Yimu Wang, Renjie Song, Xiu-Shen Wei, and Lijun Zhang. An adversarial domain adaptation network for cross-domain fine-grained recognition. In *Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision*, pp. 1228–1236, 2020. Andrew G Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. *Advances in neural information processing systems*, 33:4697–4708, 2020. Garrett Wilson and Diane J Cook. A survey of unsupervised deep domain adaptation. *ACM Transactions* on Intelligent Systems and Technology (TIST), 11(5):1–46, 2020. Nishant Yadav and Auroop R Ganguly. A deep learning approach to short-term quantitative precipitation forecasting. In *Proceedings of the 10th International Conference on Climate Informatics*, pp. 8–14, 2020. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? *Advances in neural information processing systems*, 27, 2014. Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Universal domain adaptation. 
In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 2720–2729, 2019.

Xiangyu Yue, Zangwei Zheng, Shanghang Zhang, Yang Gao, Trevor Darrell, Kurt Keutzer, and Alberto Sangiovanni Vincentelli. Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 13834–13844, 2021.

Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3801–3809, 2018.

Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In *International Conference on Machine Learning*, pp. 7404–7413. PMLR, 2019.

Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. A comprehensive survey on transfer learning. *Proceedings of the IEEE*, 109(1):43–76, 2020.

## A Appendix

## A.1 CDA Hyperparameters

The choice of hyperparameters for training the CDA model is presented in Table 4. All values are searched using the random search method. In addition to the variables presented, we use a learning rate scheduler with a step decay of 0.8 every 10 (20) epochs for training CDA on *Office-31* (*Digits-5*). We also use a dropout value of 0.2-0.5 in the second-to-last dense layer in the classifier C and domain discriminator D.

Table 4: Hyperparameters

| Variable | Description | Value |
|---|---|---|
| lr | Learning rate | 5e-4 |
| bs | Batch size | 32, 128* |
| τ | Temperature scaling in contrastive loss | 0.5 |
| E | Total no. of training epochs | 90, 200* |
| E′ | No. of epochs when stage-I of training ends | 25, 40* |
| E′′ | No. of epochs when $\mathcal{L}_{CrossCL}$ is added to the overall objective function | 35, 60* |
| λ | Scaling factor for adversarial loss $\mathcal{L}_{Adv}$ | varies+ |
| β | Scaling factor for $\mathcal{L}_{CrossCL}$ | varies+ |

\* The first value corresponds to the *Office-31* dataset and the second value is for experiments involving *Digits-5*.
\+ The values increase from 0 to 1 using Eqs. 8 and 9 (main text), starting at epochs E′ and E′′, respectively.

## A.2 Training Process

We use the AdamW optimizer with a learning rate scheduler and a multi-step objective function for training CDA. For the first E′ epochs, we only optimize the supervised contrastive loss and the standard cross-entropy loss (for the classification task of interest) on the labeled source domain. The loss up to epoch E′ is:

$${\mathcal{L}}_{e<E^{\prime}}={\mathcal{L}}_{SupCL}+{\mathcal{L}}_{CE}$$

From epoch E′ to E′′, the adversarial loss $\mathcal{L}_{Adv}$ is gradually added to the objective function using a ramping function λ with value 0 at E′ and 1 at E′′. Between E′ and E′′, the overall loss function is:

$${\mathcal{L}}_{E^{\prime}\leq e<E^{\prime\prime}}=\lambda\,{\mathcal{L}}_{Adv}+{\mathcal{L}}_{SupCL}+{\mathcal{L}}_{CE}$$

After epoch E′′, the supervised contrastive loss $\mathcal{L}_{SupCL}$ is replaced by the cross-domain contrastive loss, accompanied by a similar ramping function β to maintain the training stability.
From epoch E′′ to the end of model training at epoch E, the overall loss function is:

$${\mathcal{L}}_{E^{\prime\prime}<e\leq E}=\lambda\,{\mathcal{L}}_{Adv}+\beta\,{\mathcal{L}}_{CrossCL}+{\mathcal{L}}_{CE}$$

![14_image_0.png](14_image_0.png)

Figure 4: Illustrative diagram to demonstrate how the proposed contrastive loss module can be added to an existing domain adaptation model such as DANN (Ganin et al.). The components in gray denote parts of DANN, and the contrastive module is shown in purple. Feature embeddings generated by the DANN generator pass through the contrastive module before moving to the domain discriminator and downstream task classifier. Depending on the training stage, the contrastive module minimizes either a supervised contrastive loss (stage I) using only the labeled source domain or a cross-domain contrastive loss (stage II) using both the source and target domain samples. (best viewed in color)
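As a summary of the staged objective in Appendix A.2, the epoch-dependent loss selection can be sketched as a small dispatch function. The dictionary interface and names here are our assumptions for illustration, not the authors' implementation:

```python
def total_loss(e, losses, E_prime, E_dprime, lam, beta):
    """Stage-aware objective from Appendix A.2 (a sketch).

    losses: dict with precomputed scalar loss terms under keys
    'sup_cl', 'cross_cl', 'adv', and 'ce'; lam and beta are the
    current ramp values (Eqs. 8 and 9).
    """
    if e < E_prime:
        # Stage I: source-only supervised contrastive + cross-entropy.
        return losses['sup_cl'] + losses['ce']
    if e < E_dprime:
        # Adversarial domain alignment ramps in via lam.
        return losses['sup_cl'] + losses['ce'] + lam * losses['adv']
    # Stage II: SupCL is replaced by the cross-domain contrastive loss.
    return losses['ce'] + lam * losses['adv'] + beta * losses['cross_cl']
```

At every epoch exactly one contrastive term is active, mirroring the constraint that $\mathcal{L}_{SupCL}=0$ for e ≥ E′′.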
Review 1: Summary: The paper presents a novel two-stage deep domain adaptation method that combines both contrastive and adversarial approaches for unsupervised domain adaptation (UDA). The authors demonstrate the effectiveness of their proposed method through extensive experiments on well-known benchmark datasets for UDA. The results show that the proposed method outperforms the state-of-the-art approaches and achieves excellent performance. The methodology used in the experiments is well-designed and the results are clearly presented and discussed. The authors also provide insightful analysis of the limitations and challenges faced by traditional domain adaptation methods. The contribution of this paper to the field of unsupervised domain adaptation is significant and provides valuable insights for researchers and practitioners in the area of machine learning. The paper is well-written and the authors have demonstrated their expertise in the field. Overall, this paper is a valuable contribution to the field of machine learning and unsupervised domain adaptation. The authors have done a thorough job of evaluating the performance of their proposed method and have demonstrated its effectiveness in achieving state-of-the-art results. Strengths and Weaknesses: # Strengths The paper presents a new perspective on unsupervised domain adaptation by focusing on class-level adaptation and category boundary. The authors have identified a significant problem in the field and proposed an innovative solution by incorporating contrastive learning for class alignment. The writing quality and presentation of the paper are exceptional, with clear and concise language that effectively communicates the ideas and results. The authors have conducted a thorough literature review and provided visual illustrations of the method, making it easy for readers to understand the structure of the proposed approach. 
The tables presenting quantitative results are well-organized and highlight the key takeaways. Overall, this paper provides valuable food for thought for the adaptation community and offers a unique and innovative solution to a significant problem. The authors have demonstrated their expertise in the field and the writing quality and presentation of the paper are excellent. I would recommend it for publication at the conference. # Weakness The experiments in the paper are limited to only two common datasets, which may not be sufficient to fully demonstrate the effectiveness of the proposed approach. To strengthen the validity of the results, it is recommended that the authors consider providing positive results on additional datasets, such as domain adaptation from synthetic to real data or from natural to corrupted images. The ablation part of the paper could be further improved by exploring the functions of each part of the CAD algorithm. For example, it would be interesting to compare the performance of simple supervised learning with the proposed class-level contrast learning and to investigate the impact of dropping the pseudo-label training part. Additionally, the effectiveness and convenience of the contrastive module could be better demonstrated by providing more experiments on the combination of the module with other different methods. The authors could share the changes in performance, speed, computation, and other relevant aspects to provide a comprehensive evaluation of the module's utility. Finally, there are some typos to be fixed: - A small typo in Figure 2 (a), it should be L_{CE}, not LCE. - On page 6, line 2, the equation reference 'Noise-contrastive Estimation (see Eq. )' is omitted. - In equation 5, what does it mean that 'i \neq k = 1'? Overall, this paper presents an interesting and innovative approach to unsupervised domain adaptation. 
However, to fully demonstrate its effectiveness and convenience, the authors may need to consider addressing the limitations and weaknesses identified above. 1. Peng X, Usman B, Kaushik N, et al. Visda: The visual domain adaptation challenge[J]. arXiv preprint arXiv:1710.06924, 2017. 2. Hendrycks D, Dietterich T. Benchmarking neural network robustness to common corruptions and perturbations[J]. arXiv preprint arXiv:1903.12261, 2019. Requested Changes: The paper presents a new approach for unsupervised domain adaptation, focusing on class-level adaptation and category boundary. However, to ensure the quality of the paper, it is recommended that the authors address several areas for improvement. In other words, a revision is requested. Firstly, the authors should carefully check for possible typos and fix them to ensure the clarity and coherence of the paper. Secondly, the authors should consider adding more datasets to comprehensively demonstrate the efficacy of the proposed CAD method. Including additional datasets, such as domain adaptation from synthetic to real data or from natural to corrupted images, would strengthen the validity of the results. Thirdly, a holistic ablation study should be performed to analyze the functionalities of each component in the proposed method. The authors could consider decomposing CAD and analyzing each component to provide a comprehensive understanding of the method. Fourthly, the authors should prove the effectiveness of the contrastive module by adding more experiments. It would be helpful to provide an intuitive comparison between the proposed method and other methods, showing the change in performance, speed, computation, and other relevant aspects. Finally, to make the paper more meaningful, the authors should consider leaving more space for valuable content and minimizing the amount of space occupied by hyper-parameters and repeated equations. 
The authors could write them in lines or leave them in the appendix, and include a GitHub link for reference. Broader Impact Concerns: I don't have any concerns about the broader impact of this work. ================================================== Review 2: Summary: This study considered contrastive learning in domain adaptation. The NCE loss, in particular, is incorporated into adversarial domain adaptation. The NCE loss, which was designed for unsupervised domain adaptation, took into account two stages of contrastive loss: (1) contrastive learning in the source domain, and (2) contrastive loss between the source and target domains in the same class. The empirical evaluations are evaluated in two benchmarks. Strengths and Weaknesses: ### Review Summary This paper's goal is to integrate contrastive learning and domain adaptation, which is a valuable contribution to TMLR. Unfortunately, the paper has major flaws, such as significant over-claiming, technical concerns, and related work. Based on these, this reviewer believes that the paper should be resubmitted with significant major revisions. ------------------------------- Based on TMLR guidelines, my reviews are as follows: 1. **Would some individuals in TMLR's audience be interested in the findings of this paper?** Yes. Contrastive learning is now a well-known and widely used technique in unsupervised learning. Understanding its influences in domain adaptation would be a worthy and interesting contribution. 2. **Are the claims made in the submission supported by accurate, convincing and clear evidence?** No. The main points are as follows: **Over/incorrect/inaccurate claims** - In abstract *class-conditional data distributions significantly differ between the source and target domain, it can generate ambiguous features near class boundaries that are more likely to be misclassified.* This is incorrect. 
If the conditional data distributions are significantly different, the joint optimal risk will be arbitrarily large, according to Ben David et al 2010. There is NO approach that will work in an unsupervised DA (or Impossible case.) Furthermore, this paper assumes only the well-known covariate shift (P(X) varies with being P(Y|X) invariant), no conditional shift, correct? What causes the significant conditional shift? - Introduction *This phenomenon is known as domain shift You et al. (2019)Ben-David et al. (2010), where the marginal probability distribution of the underlying data changes across different datasets or domains.* Domain shift is not defined correctly. Indeed, domain shift simply refers to a shift in distribution (could be joint/marginal/conditional) rather than a shift in marginal distribution. - Introduction *however, it gives rise to convergence issues amidst a lack of equilibrium guarantee.* I couldn't figure out how these approaches couldn't ensure equilibrium. Indeed, adversarial loss, as defined in general by min-max training, cannot guarantee equilibrium in a non-convex setting. These are, in my opinion, rather weak arguments. **Technical concerns** - [Concerns on two-stage training.] According to the algorithm description, the two stages appear to train source contrastive loss first, then source -target contrastive loss. I couldn't understand why these losses weren't trained together. What are the main challenges? Have you conducted any ablation studies? How to choose E^{prime} and E^{prime,prime} in your algorithm? - [Concerning on pseudo-labels] The issue persists in stage 2. Because we use target pseudo-labels for source-target contrastive learning. Such a class-based alignment is still unreliable. We couldn't have a correct matching if the pseudo-labels were wrong, right? I believe this section requires more in-depth examination. 
**Empirical results/Justifications** To meet the TMLR requirements, I believe this section requires significant revisions. Currently, there is little practical support/justification. My main points are as follows. (a) Office 31 and digits 5 datasets were reported in tabs 1 and 2. However, in digit 5, many baselines values are missing. Did the authors implement all of the baselines for a fair comparison, or did they simply use the reported results? Because the training strategy in the paper is different, I would recommend **reimplementing** all of the baselines. This paper started by training source classifiers and then added adversarial loss, which is different from baselines. As a result, I believe that comparisons are unfair. (b) Aside from accuracy and a variety of TSNE. Additional empirical analysis/justifications are severely lacking. I'm still not sure why this approach works better in this situation. (c) Empirically results only report mean, the variance (or Std) is missing. This is important and critical in the empirical paper. (d) More benchmarks are suggested. It is acceptable to use either office 31 or digits 5. However, the purpose of this paper is to systematically understand contrastive loss and domain adaptation from an empirical standpoint. Additional baselines, such as Office-home and DomainNet, I believe, are required for evaluation. -------------- **Further explanations Q&A**: It seems that in the guideline, *it (review) should not be used as a reason to reject work that isn't considered “significant” or “impactful” because it isn't achieving a new state-of-the-art on some benchmark.* Why did this reviewer request more benchmarks or empirical studies? **A**: I agree that there is no need to achieve any state-of-the-art or compare very recent baselines for TMLR. This is not the same as adding more datasets or conducting additional empirical analyses/studies. 
This paper, in particular, claimed that it *systematically integrates CL with adversarial methods* (Sec 3.3. Related work). To support this claim, systematic empirical studies evaluating the impact of CL should be conducted. For example, it would be interesting to see the limitations of CL in domain adaptation. Based on this, I think the empirical assessments are insufficient. --------------- **Other minor points** Sec 4.4 "NCE stands for Noise-contrastive Estimation (see Eq. )" The NCE loss lacks an exact reference. Requested Changes: See detailed comments in Q2: Are the claims made in the submission supported by accurate, convincing and clear evidence? Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper investigated contrastive learning for domain adaptation. Specifically, they applied supervised contrastive learning and cross-domain contrastive learning with labeled source samples and pseudo-labeled target samples. Experiments are conducted on two benchmark datasets. Strengths and Weaknesses: Strengths: 1. Introducing contrastive learning into DA seems interesting. 2. Experiments are conducted on two benchmark datasets. Weaknesses: 1. The writing could be improved. For example, there are two Introduction Sections on page one. In Fig 2, the L_{CE} in stage one is mis-written as LCE. 2. The contribution needs to be clarified. Contrastive learning has been adopted in DA in [1], which should be discussed and compared in this paper. 3. Experiments are conducted on two small-sized datasets. More empirical evidence on larger datasets (e.g., OfficeHome and DomainNet) is expected. [1] Contrastive adaptation network for unsupervised domain adaptation, CVPR2019 Requested Changes: 1. A revision of the paper writing. 2. Discussion with the suggested related work. 3. Experiments on larger-sized datasets. 
Broader Impact Concerns: Not applicable ================================================== Metareview: Recommendation: Reject Comment: The role of contrastive learning in domain adaptation is interesting and could be helpful for the community. However, at this stage, the paper's evaluation raises many concerns from the reviewers. As there is no rebuttal to address these concerns, the AE does not have grounds to accept this paper. ==================================================
# Federated Learning With Reduced Information Leakage And Computation

Reviewed on OpenReview: *https://openreview.net/forum?id=ZJ4A3xhADV*

*These authors contributed equally to this work.

## Abstract

Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data. Although local data is not exposed directly, privacy concerns nonetheless exist as clients' sensitive information can be inferred from intermediate computations. Moreover, such information leakage accumulates substantially over time as the same data is repeatedly used during the iterative learning process. As a result, it can be particularly difficult to balance the privacy-accuracy trade-off when designing privacy-preserving FL algorithms. This paper introduces Upcycled-FL, a simple yet effective strategy that applies first-order approximation at every even round of model update. Under this strategy, half of the FL updates incur no information leakage and require much less computational and transmission costs. We first conduct the theoretical analysis on the convergence (rate) of Upcycled-FL and then apply two perturbation mechanisms to preserve privacy. Extensive experiments on both synthetic and real-world data show that the Upcycled-FL strategy can be adapted to many existing FL frameworks and consistently improve the privacy-accuracy trade-off.1

## 1 Introduction

Federated learning (FL) has emerged as an important paradigm for learning models in a distributed fashion, whereby data is distributed across different clients and the goal is to jointly learn a model from the distributed data. This is facilitated by a central server, and the model can be learned through iterative interactions

1Code available at https://github.com/osu-srml/Upcycled-FL. 
Tongxin Yin∗ *tyin@umich.edu*
Department of Electrical and Computer Engineering, University of Michigan

Xuwei Tan∗ *tan.1206@osu.edu*
Department of Computer Science and Engineering, The Ohio State University

Xueru Zhang *zhang.12807@osu.edu*
Department of Computer Science and Engineering, The Ohio State University

Mohammad Mahdi Khalili *khalili.17@osu.edu*
Department of Computer Science and Engineering, The Ohio State University

Mingyan Liu *mingyan@umich.edu*
Department of Electrical and Computer Engineering, University of Michigan

between the central server and clients: at each iteration, each client performs certain computations using its local data; the updated local models are collected and aggregated by the server; the aggregated model is then sent back to clients for them to update their local models; and so on until the learning task is deemed accomplished. Although each client's data is not shared directly with the central server, there is a risk of information leakage in that a third party may infer individual sensitive information from (intermediate) computational outcomes. Information leakage occurs whenever the client's local gradients are shared with third parties. Importantly, the information leakage (or privacy loss) **accumulates** as data is repeatedly used during the iterative learning process: with more computational outcomes derived from individual data, third parties have more information to infer sensitive data, which poses higher privacy risks for individuals. An example is Huang et al. (2021), which shows that eavesdroppers can conduct gradient inversion attacks to recover clients' data from the gradients. 
In this paper, we use differential privacy (DP), proposed by Dwork (2006), a de facto standard for preserving individual data privacy in data analysis, ranging from simple tasks such as data collection and statistical analysis (Zhang et al., 2022b; Ghosh & Roth, 2011; Khalili & Vakilinia, 2021; Amin et al., 2019; Liu et al., 2021; Khalili et al., 2021c; 2019) to complex machine learning and optimization tasks (Cai et al., 2024; Khalili et al., 2021b;a; Huang et al., 2020; Jagielski et al., 2019; Zhang et al., 2019b; 2018a;b; 2019c). Compared to other privacy preservation techniques, it can (i) rigorously quantify the total privacy leakage for complex algorithms such as FL; (ii) defend against attackers regardless of their background knowledge; and (iii) provide heterogeneous privacy guarantees for different clients. To achieve a certain DP guarantee, we need to perturb FL algorithms (e.g., by adding noise to the output, objective, or gradient of local clients), and the perturbation needed for achieving the privacy requirement of each client grows as the total information leakage increases. Because the added perturbation inevitably reduces algorithm accuracy, it can be difficult to balance the privacy-accuracy trade-off in FL. This paper proposes a novel strategy for FL called Upcycled Federated Learning (Upcycled-FL),2 in which clients' information leakage can be reduced such that it only occurs during *half* of the updates. This is attained by modifying the *even* iterations of a baseline FL algorithm with a first-order approximation, which allows us to update the model using existing model parameters from previous iterations without using the client's data. Moreover, the updates in even iterations only involve addition/subtraction operations on existing model parameters at the central server. Because Upcycled-FL does not require local training and transmission in half of the iterations, the transmission costs and the training time can be reduced significantly. 
It turns out that Upcycled-FL, by reducing the total information leakage, requires less perturbation to attain a certain level of privacy and can enhance the privacy-accuracy trade-off significantly. We emphasize that the idea of "upcycling information" is orthogonal to both the baseline FL algorithm and the DP perturbation method. It can be applied to any FL algorithm that involves local optimization at the clients. In this paper, we apply the Upcycled-FL strategy to multiple existing FL algorithms and evaluate them on both synthetic and real-world datasets. For DP perturbation methods, we consider both output and objective perturbation as examples to quantify the privacy loss, while other DP methods can also be used. It is worth noting that although differentially private federated learning has been extensively studied in the literature, e.g., (Asoodeh et al., 2021; Chuanxin et al., 2020; Zhang et al., 2022a; Zheng et al., 2021; Wang et al., 2020b; Kim et al., 2021; Zhang et al., 2019a; Baek et al., 2021; Wu et al., 2022; Girgis et al., 2021; Truex et al., 2020; Hu et al., 2020; Seif et al., 2020; Zhao et al., 2020; Wei et al., 2020; Triastcyn & Faltings, 2019), all these algorithms need clients' local data to update the model, and the information leakage inevitably occurs at every iteration. This is fundamentally different from this work, where we propose a novel strategy that effectively reduces information leakage in FL. 
In addition to private federated learning, several approaches have been proposed in the DP literature to improve the privacy-accuracy trade-off, e.g., privacy amplification by *sampling* (Balle et al., 2018; Beimel et al., 2014; Hu et al., 2021; Wang et al., 2019; Kasiviswanathan et al., 2011; Wang et al., 2015; Abadi et al., 2016), leveraging non-private public data (Avent et al., 2017; Papernot et al., 2016), *shuffling* (Úlfar Erlingsson et al., 2019), *using a weaker privacy notion* (Bun & Steinke, 2016), and *using tighter privacy composition analysis tools* (Abadi et al., 2016). However, none of these strategies affect the algorithmic properties of the learning algorithms. By contrast, our method improves the privacy-accuracy trade-off by modifying a property of the FL algorithm itself (i.e., reducing the total information leakage); this improvement in the algorithmic property is independent of the privacy notion/mechanism or the analysis method.

2The word "upcycle" refers to reusing material so as to create higher-quality things than the original.

Our main contributions are summarized as follows.

- We propose Upcycled-FL (Algorithm 1), a novel strategy with reduced information leakage and computation that is broadly applicable to many existing FL algorithms.
- As an example, we apply our strategy to FedProx (Li et al., 2020) and conduct convergence (rate) analysis (Section 5, Theorem 5.6), where we identify a sufficient condition for the convergence of Upcycled-FL.
- As an example, we apply two differential privacy mechanisms (i.e., output perturbation and objective perturbation) to Upcycled-FL and conduct privacy analysis (Section 6, Theorem 6.2).
- We evaluate the effectiveness of Upcycled-FL on both synthetic and real data (Section 7). Extensive experiments show that Upcycled-FL can be adapted to many existing federated algorithms to achieve better performance; it effectively improves the accuracy-privacy trade-off by reducing information leakage. 
## 2 Related Work

Differential privacy in federated learning. Differential privacy has been widely used in federated learning to provide privacy guarantees (Asoodeh et al., 2021; Chuanxin et al., 2020; Zhang et al., 2022a; Zheng et al., 2021; Wang et al., 2020b; Kim et al., 2021; Zhang et al., 2019a; Baek et al., 2021; Wu et al., 2022; Girgis et al., 2021; Truex et al., 2020; Hu et al., 2020; Seif et al., 2020; Zhao et al., 2020; Wei et al., 2020; Triastcyn & Faltings, 2019). For example, Zhang et al. (2022a) uses the Gaussian mechanism for a federated learning problem and proposes an incentive mechanism to encourage users to share their data and participate in the training process. Zheng et al. (2021) introduces f-differential privacy, a generalized version of Gaussian differential privacy, and proposes a federated learning algorithm satisfying this new notion. Wang et al. (2020b) proposes a new mechanism called Random Response with Priori (RRP) to achieve local differential privacy and applies this mechanism to text data by training a Latent Dirichlet Allocation (LDA) model using a federated learning algorithm. Triastcyn & Faltings (2019) adapts the Bayesian privacy accounting method to the federated setting and proposes a joint accounting method for estimating client-level and instance-level privacy simultaneously and securely. Wei et al. (2020) presents a private scheme that adds noise to parameters at randomly selected devices before aggregating and provides a convergence bound. Kim et al. (2021) combines the Gaussian mechanism with gradient clipping in federated learning to improve the privacy-accuracy trade-off. Asoodeh et al. (2021) considers a different setting where only the last update is publicly released and the central server and other devices are assumed to be trustworthy. However, all these algorithms need clients' local data to update the model, and the information leakage inevitably occurs at every iteration. 
This is fundamentally different from Upcycled-FL, which reuses existing results for half of the iterations and significantly reduces information leakage and computation.

Tackling heterogeneity in federated learning. It is worth mentioning that Upcycled-FL also empirically outperforms existing baseline algorithms under device and statistical heterogeneity. In real-world scenarios, local data are often non-identically distributed across different devices; different devices are also often equipped with different specifications and computation capabilities. Such heterogeneity often causes instability in the model performance and leads to divergence. Many approaches have been proposed to tackle this issue in FL. For example, FedAvg (McMahan et al., 2017) uses a random selection of devices at each iteration to reduce the negative impact of statistical heterogeneity; however, it may fail to converge when heterogeneity increases. Other methods include FedProx (Li et al., 2020), a generalization and re-parameterization of FedAvg that adds a proximal term to the objective function to penalize deviations of the local model from the previous aggregation, and FedNova (Wang et al., 2020a), which re-normalizes local updates before aggregation to eliminate objective inconsistency. It turns out that Upcycled-FL exhibits superior performance in the presence of heterogeneity because gradients encapsulate information on data heterogeneity, and reusing them leads to a boost in performance.

## 3 Problem Formulation

Consider an FL system consisting of a central server and a set $\mathcal{I}$ of clients. Each client $i$ has its local dataset $\mathcal{D}_i$, and these datasets can be non-i.i.d. across the clients. 
The goal of FL is to learn a model $\omega \in \mathbb{R}^d$ from data $\cup_{i\in\mathcal{I}}\mathcal{D}_i$ by solving the following optimization:

$$\operatorname*{min}_{\omega}f(\omega):=\sum_{i\in\mathcal{I}}p_{i}F_{i}(\omega;\mathcal{D}_{i})=\mathbb{E}\left[F_{i}(\omega;\mathcal{D}_{i})\right],\qquad(1)$$

where $p_i = \frac{|\mathcal{D}_i|}{\sum_{j\in\mathcal{I}}|\mathcal{D}_j|}$ is the size of client $i$'s data as a fraction of the total data samples, $\mathbb{E}[\cdot]$ is defined as the expectation over clients, and $F_i(\omega;\mathcal{D}_i)$ is the local loss function associated with client $i$, which depends on the local dataset $\mathcal{D}_i$. In this work, we allow $F_i(\omega;\mathcal{D}_i)$ to be possibly non-convex.

FL Algorithm. Let $\omega_i^t$ be client $i$'s local model parameter at time $t$. In FL, the model is learned through an iterative process: at each time step $t$, 1) *local computations:* each active client updates its local model $\omega_i^t$ using its local data $\mathcal{D}_i$; 2) *local model broadcasts:* local models (or gradients) are then uploaded to the central server; 3) *model aggregation:* the central server aggregates the results received from clients to update the global model parameter $\overline{\omega}^t = \sum_{i\in\mathcal{I}} p_i\omega_i^t$; 4) *model updating:* the aggregated model is sent back to clients and is used for updating local models at $t+1$. During the learning process, each client's local computation is exposed to third parties at every iteration: its models/gradients need to be uploaded to the central server, and the global models calculated based on them are shared with all clients. It is thus critical to ensure that FL is privacy-preserving. In this work, we consider differential privacy as the notion of privacy.

Differential Privacy (Dwork, 2006). Differential privacy (DP) centers around the idea that the output of a certain computational procedure should be statistically similar given singular changes to the input, thereby preventing meaningful inference from observing the output. In FL, the information exposed by each client $i$ includes all intermediate computations $\{\omega_i^t\}_{t=1}^T$. 
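As a purely illustrative sketch of the four-step iterative process above (not the paper's code), the following snippet simulates a FedAvg-style system with synthetic least-squares clients; all names and constants are our own assumptions.

```python
import numpy as np

# Illustrative sketch of the FL loop (steps 1-4 above) with synthetic
# least-squares clients; all names and constants are our own choices.
rng = np.random.default_rng(0)
d = 3
w_true = rng.normal(size=d)
clients = []
for _ in range(4):
    X = rng.normal(size=(20, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=20)))

def local_update(w, X, y, lr=0.3, epochs=10):
    """Step 1: a few gradient steps on F_i(w) = ||X w - y||^2 / (2|D_i|)."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

sizes = np.array([len(y) for _, y in clients])
p = sizes / sizes.sum()                      # p_i = |D_i| / sum_j |D_j|
w_global = np.zeros(d)
for t in range(30):
    # steps 2-3: upload local models and aggregate with weights p_i
    local = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = sum(pi * wi for pi, wi in zip(p, local))
    # step 4: w_global is sent back as the starting point of round t + 1
```

On this toy problem the aggregated model approaches the data-generating parameter, which is all the sketch is meant to show; real FL systems differ in client sampling, communication, and model class.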
Consider a randomized FL algorithm $\mathcal{A}(\cdot)$ that generates a sequence of private local models $\{\widehat{\omega}_i^t\}_{t=1}^T$; we say it satisfies $(\varepsilon,\delta)$-differential privacy for client $i$ over $T$ iterations if the following holds for any possible output $O \in \mathbb{R}^d\times\cdots\times\mathbb{R}^d$, and for any two neighboring local datasets $\mathcal{D}_i$, $\mathcal{D}'_i$:

$$\Pr\left(\{\widehat{\omega}_i^t\}_{t=0}^T \in O \mid \mathcal{D}_i\right) \leq \exp(\varepsilon)\cdot\Pr\left(\{\widehat{\omega}_i^t\}_{t=0}^T \in O \mid \mathcal{D}'_i\right) + \delta,$$

where $\varepsilon \in [0,\infty)$ bounds the privacy loss, and $\delta \in [0,1]$ loosely corresponds to the probability that the algorithm fails to bound the privacy loss by $\varepsilon$. Two datasets are neighboring datasets if they differ in at most one data point.

## 4 Proposed Method: Upcycled-FL

Main idea. Fundamentally, the accumulation of information leakage over iterations stems from the fact that the client's data $\mathcal{D}_i$ is used in every update. If the updates can be made without directly using this original data, but only from computational results that already exist, then the information leakage originating from these updates will be zero, and meanwhile, the computational cost may be reduced significantly. Based on this idea, we propose Upcycled-FL, which reuses earlier computations in a new update and significantly reduces total information leakage and computational cost. Note that Upcycled-FL is not a specific algorithm but an effective strategy that can be used with any existing FL algorithm.

Upcycling model update. Next, we present Upcycled-FL and illustrate how the client's total information leakage is reduced under this method. For an FL system with the objective shown in Eqn. (1), client $i$'s local objective function is given by $F_i(\omega;\mathcal{D}_i)$. Under Upcycled-FL, we apply *first-order approximation* to $F_i(\omega;\mathcal{D}_i)$ at **even** iterations during federated training (while odd updates remain unchanged). Specifically, at the $2m$-th iteration, we expand $F_i(\omega;\mathcal{D}_i)$ at $\omega_i^{2m-1}$ (the local model in the previous iteration). 
Based on the Taylor series expansion, we have:

$$\begin{array}{rcl}F_{i}(\omega;\mathcal{D}_{i})&=&F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})+\nabla F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})^{T}(\omega-\omega_{i}^{2m-1})+\mathcal{O}(||\omega-\omega_{i}^{2m-1}||^{2})\\&\approx&F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})+\nabla F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})^{T}(\omega-\omega_{i}^{2m-1})+\frac{\lambda_{m}}{2}||\omega-\omega_{i}^{2m-1}||^{2}\end{array}\qquad(2)$$

for some constant $\lambda_m \geq 0$, which may differ across iterations $2m$. Then, for an FL algorithm, its model update at the $2m$-th iteration under the Upcycled-FL strategy can be attained by replacing $F_i(\omega;\mathcal{D}_i)$ with its approximation in Eqn. (2), while the updates at odd iterations remain the same. We illustrate this using the following two examples.

Example 4.1 (FedAvg (McMahan et al., 2017) under the Upcycled-FL strategy). *In FedAvg, client $i$ at each iteration updates the local model by optimizing its local objective function, i.e., $\omega_i^t = \arg\min_{\omega} F_i(\omega;\mathcal{D}_i)$, $\forall t$. Under the Upcycled-FL strategy, client $i$'s updates become:*

$$\omega_{i}^{t}=\begin{cases}\arg\operatorname*{min}_{\omega}\nabla F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})^{T}\omega+\frac{\lambda_{m}}{2}||\omega-\omega_{i}^{2m-1}||^{2},&\text{if}\quad t=2m\\\arg\operatorname*{min}_{\omega}F_{i}(\omega;\mathcal{D}_{i}),&\text{if}\quad t=2m-1\end{cases}$$

Example 4.2 (FedProx (Li et al., 2020) under the Upcycled-FL strategy). *In FedProx, a proximal term is added to the local objective function to stabilize the algorithm under heterogeneous clients (Algorithm 2), i.e., client $i$ at each iteration updates the local model $\omega_i^t = \arg\min_{\omega} F_i(\omega;\mathcal{D}_i) + \frac{\mu}{2}||\omega-\overline{\omega}^{t-1}||^2$, $\forall t$.* 
*Under the Upcycled-FL strategy, client $i$'s updates become:*

$$\omega_{i}^{t}=\begin{cases}\arg\min_{\omega}\nabla F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})^{T}\omega+\frac{\lambda_{m}}{2}||\omega-\omega_{i}^{2m-1}||^{2}+\frac{\mu}{2}||\omega-\overline{\omega}^{2m-1}||^{2},&\text{if}\quad t=2m\\\arg\min_{\omega}F_{i}(\omega;\mathcal{D}_{i})+\frac{\mu}{2}||\omega-\overline{\omega}^{2m-2}||^{2},&\text{if}\quad t=2m-1\end{cases}\qquad(3)$$

Next, we demonstrate how the information is upcycled under the above idea. As an illustrating example, we focus on FedProx as given in Example 4.2. Note that in the even update of Eqn. (3), the only term that depends on the dataset $\mathcal{D}_i$ is $\nabla F_i(\omega_i^{2m-1};\mathcal{D}_i)$, which can be derived directly from the previous odd iteration. Specifically, according to the first-order condition, the following holds at odd iterations:

$$\omega_{i}^{2m-1}=\arg\operatorname*{min}_{\omega}F_{i}(\omega;\mathcal{D}_{i})+\frac{\mu}{2}||\omega-\overline{\omega}^{2m-2}||^{2}\;\Longrightarrow\;\nabla F_{i}(\omega_{i}^{2m-1};\mathcal{D}_{i})+\mu(\omega_{i}^{2m-1}-\overline{\omega}^{2m-2})=0.\qquad(4)$$

Plugging $\nabla F_i(\omega_i^{2m-1};\mathcal{D}_i)$ into the even update of (3), we obtain the estimated update from the odd update:

$$\omega_{i}^{2m}\,\approx\,\arg\operatorname*{min}_{\omega}\,\mu(\overline{\omega}^{2m-2}-\omega_{i}^{2m-1})^{T}\omega+\frac{\lambda_{m}}{2}||\omega-\omega_{i}^{2m-1}||^{2}+\frac{\mu}{2}||\omega-\overline{\omega}^{2m-1}||^{2},\qquad(5)$$

where the sign "≈" is due to the approximation in Eqn. (2). By the first-order condition, the even update (5) can be reduced to:

$$\omega_{i}^{2m}\approx\omega_{i}^{2m-1}+\frac{\mu}{\mu+\lambda_{m}}\left(\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}\right).$$

It turns out that with the first-order approximation, the dataset $\mathcal{D}_i$ is not used in the even updates. 
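The reduction from Eqn. (5) to the closed form above is just the first-order condition of a strongly convex quadratic. As a sanity check with made-up numbers (not the paper's code), one can verify numerically that the closed-form even update minimizes the objective in Eqn. (5):

```python
import numpy as np

# Sanity check (illustrative values): the closed-form even update equals the
# minimizer of the quadratic objective in Eqn. (5).
rng = np.random.default_rng(1)
mu, lam = 1.0, 2.0
w_prev = rng.normal(size=4)      # omega_i^{2m-1}
bar1 = rng.normal(size=4)        # bar{omega}^{2m-1}
bar2 = rng.normal(size=4)        # bar{omega}^{2m-2}

# Closed form: w_prev + mu/(mu+lam) * (bar1 - bar2)
w_closed = w_prev + mu / (mu + lam) * (bar1 - bar2)

# Minimize Eqn. (5) by gradient descent on its objective; the gradient is
#   mu*(bar2 - w_prev) + lam*(w - w_prev) + mu*(w - bar1)
w = np.zeros(4)
for _ in range(2000):
    grad = mu * (bar2 - w_prev) + lam * (w - w_prev) + mu * (w - bar1)
    w -= 0.01 * grad
```

The gradient-descent minimizer and the closed form agree to numerical precision, confirming the algebra above.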
Because these updates do not require access to the client's data, they can be conducted at the central server directly. That is, the central server updates the global model by aggregating:

$$\overline{\omega}^{2m}=\sum_{i\in\mathcal{I}}p_{i}\omega_{i}^{2m}\approx\sum_{i\in\mathcal{I}}p_{i}\left[\omega_{i}^{2m-1}+\frac{\mu}{\mu+\lambda_{m}}\left(\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}\right)\right]=\overline{\omega}^{2m-1}+\frac{\mu}{\mu+\lambda_{m}}\left(\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}\right).$$

Therefore, under the Upcycled-FL strategy, even updates only involve *addition/subtraction* operations on the existing global models from previous iterations (i.e., $\overline{\omega}^{2m-1}$, $\overline{\omega}^{2m-2}$) without the need for a local training epoch: both the computational cost and the transmission cost are reduced significantly. Note that the first-order approximation is only applied to even iterations, while the odd iterations remain as in the original algorithm to ensure that Eqn. (4) holds. The entire updating procedure of Upcycled-FL is summarized in Algorithm 1. Because $\mathcal{D}_i$ is only used in odd iterations, information leakage only happens during odd updates. Intuitively, the reduced information leakage requires less perturbation to attain a certain level of privacy guarantee, which further results in higher accuracy and an improved privacy-accuracy trade-off. In the following sections, we first analyze the convergence property of Upcycled-FL and then apply privacy mechanisms to satisfy differential privacy. 
Figure 1: Upcycled-FL can be considered from two perspectives: (i) it can be regarded as reusing the intermediate updates of local models to reduce the total information leakage (panel (a)); or (ii) it can be regarded as a global aggregation method with a larger global update, which accelerates the learning process with the same information leakage under the same training iterations (panel (b)).

Discussion. Indeed, if we view two consecutive steps (odd $2m-1$ followed by even $2m$) of Upcycled-FL as a single iteration $t$, then Upcycled-FedProx and FedProx will incur the same information leakage but will differ in the phase of *global model aggregation*, as shown in Figure 1(b). Specifically, instead of simply aggregating the global model by averaging over local updates (i.e., $\sum_{i\in\mathcal{I}}p_i\omega_i^t$), Upcycled-FL not only takes the average of local updates but also pushes the aggregation further along the updating direction (i.e., $\sum_{i\in\mathcal{I}}p_i\omega_i^t - \overline{\omega}^{t-1}$). We present the difference between the Upcycled-FL update strategy and the regular aggregation strategy from these two perspectives in Figure 1. As Upcycled-FL only accesses client data at odd iterations, it halves the communication cost compared to standard FL methods with the same number of iterations.

Algorithm 1 Proposed aggregation strategy: Upcycled-FL
1: **Input:** $\lambda_m > 0$, $\mu > 0$, $\{\mathcal{D}_i\}_{i\in\mathcal{I}}$, $\overline{\omega}^0$
2: **for** $m = 1$ **to** $M$ **do**
3: The central server sends the global model parameters $\overline{\omega}^{2m-2}$ to all the clients.
4: A subset of clients is selected to be active, and each active client updates its local model by finding an (approximate) minimizer of the local objective function: $\omega_i^{2m-1} \leftarrow \arg\min_{\omega} F_i(\omega;\mathcal{D}_i)$ *or other local objective functions*
5: Clients send local models to the central server. 
6: The central server updates the global model by aggregating all local models, $\overline{\omega}^{2m-1} = \sum_{i\in\mathcal{I}} p_i\omega_i^{2m-1}$, and then performs the even update $\overline{\omega}^{2m} = \overline{\omega}^{2m-1} + \frac{\mu}{\mu+\lambda_m}\left(\overline{\omega}^{2m-1} - \overline{\omega}^{2m-2}\right)$

## 5 Convergence Analysis

In this section, we analyze the convergence of Upcycled-FL. For illustrative purposes, we focus on analyzing the convergence of Upcycled-FedProx. Note that we do not require the local functions $F_i(\cdot)$ to be convex. Moreover, we consider practical settings where data are *non-i.i.d.* across different clients. Similar to Li et al. (2020), we introduce a measure below to quantify the dissimilarity between clients in the federated network.

Definition 5.1 (B-Dissimilarity (Li et al., 2020)). The local loss function $F_i$ is $B$-dissimilar if, $\forall \omega$, we have $\mathbb{E}[||\nabla F_i(\omega)||^2] \leq ||\nabla f(\omega)||^2 B^2$, where $\mathbb{E}[\cdot]$ denotes the expectation over clients (see Eqn. (1)).

Parameter $B \geq 1$ captures the statistical heterogeneity across different clients: when all clients are homogeneous with i.i.d. data, we have $B = 1$ for all local functions; the larger the value of $B$, the more dissimilarity among clients.

Assumption 5.2. The local loss functions $F_i$ are $B$-dissimilar and $L$-Lipschitz smooth.

Note that $B$-dissimilarity can be satisfied if the divergence between the gradient of the local loss function and that of the aggregated global function is bounded, as stated below.

Lemma 5.3. $\forall i$, there exists $B$ such that $F_i$ is $B$-dissimilar if $||\nabla F_i(\omega) - \nabla f(\omega)|| \leq \kappa_i$, $\forall \omega$, for some $\kappa_i$.

Assumption 5.4. $\forall i$, $h_i(\omega;\overline{\omega}^t) := F_i(\omega;\mathcal{D}_i) + \frac{\mu}{2}||\omega - \overline{\omega}^t||^2$ are $\rho$-strongly convex.

The above assumptions are fairly standard. They first appeared in Li et al. (2020) and are adopted in subsequent works such as T Dinh et al. (2020); Khaled et al. (2020); Pathak & Wainwright (2020). Note that the strong convexity assumption is not on the local objective $F_i(\omega;\mathcal{D}_i)$, but on the regularized function $F_i(\omega;\mathcal{D}_i) + \frac{\mu}{2}||\omega - \overline{\omega}^t||^2$, i.e., the assumption can be satisfied by selecting a sufficiently large $\mu > 0$. 
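To make Algorithm 1 concrete, here is a minimal simulation of Upcycled-FedProx on quadratic local losses $F_i(\omega) = \frac{1}{2}\|\omega - c_i\|^2$, for which the odd proximal step has a closed form; everything here (the loss family, constants, full participation) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

# Illustrative Upcycled-FedProx run (Algorithm 1) with quadratic local losses
# F_i(w) = 0.5 * ||w - c_i||^2, whose proximal odd step is closed-form:
#   argmin_w F_i(w) + (mu/2)||w - bar_w||^2 = (c_i + mu * bar_w) / (1 + mu).
rng = np.random.default_rng(2)
d, n_clients, mu, lam = 3, 5, 1.0, 2.0
centers = [rng.normal(size=d) for _ in range(n_clients)]
p = np.full(n_clients, 1.0 / n_clients)

bar_w = np.zeros(d)                          # bar{omega}^0
for m in range(1, 21):
    # odd iteration 2m-1: clients solve the proximal step, server aggregates
    local = [(c + mu * bar_w) / (1 + mu) for c in centers]
    bar_odd = sum(pi * wi for pi, wi in zip(p, local))
    # even iteration 2m: server-only update; bar_w still holds bar{omega}^{2m-2}
    bar_w = bar_odd + mu / (mu + lam) * (bar_odd - bar_w)
```

On these losses the global optimum is the mean of the $c_i$, and the iterate converges to it; the even step touches no client data and costs no communication, which is the point of the strategy.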
Indeed, as shown in Section 7, our algorithm still converges even when Assumptions 5.2 and 5.4 do not hold (e.g., for DNNs). Next, we conduct the theoretical analysis of the convergence rate.

Lemma 5.5. Let $S_m$ be a set of $K$ randomly selected clients that are updated (i.e., active clients) at iterations $2m-1$ and $2m$, and let $\mathbb{E}_{S_m}[\cdot]$ be the expectation with respect to the choice of clients. Then, under Assumptions 5.2 and 5.4, we have

$$\mathbb{E}_{S_{m}}[f(\overline{\omega}^{2m+1})]\leq f(\overline{\omega}^{2m-1})-\mathbf{C_{1}}||\nabla f(\overline{\omega}^{2m-1})||^{2}+\mathbf{C_{2}}\cdot h_{m}^{1}+\mathbf{C_{3}}\cdot h_{m}^{2},$$

where

$$\begin{array}{rcl}h_{m}^{1}&:=&||\nabla f(\overline{\omega}^{2m-1})||\cdot||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||;\\h_{m}^{2}&:=&||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||^{2}.\end{array}$$

The details of the terms $\mathbf{C_1}$, $\mathbf{C_2}$, $\mathbf{C_3}$ (expressed as functions of $L$, $B$, $\frac{1}{\mu}$, $\frac{1}{\rho}$, $\frac{1}{K}$, $\frac{\mu}{\mu+\lambda_m}$) are given in Appendix C, Eqn. (11)-(13). Lemma 5.5 characterizes the relation between the values of the global objective function over *two consecutive odd iterations*. It is easy to verify that $\mathbf{C_2}, \mathbf{C_3} \geq 0$. By rearranging and telescoping, we get the following convergence rate of Upcycled-FedProx.

Theorem 5.6 (Convergence rate of Upcycled-FedProx). *Under Assumptions 5.2 and 5.4, if $\mathbf{C_1} > 0$, we have*

$$\min_{m\in[M]}\mathbb{E}\left[||\nabla f(\overline{\omega}^{2m-1})||^{2}\right]\leq\frac{1}{M}\sum_{m=0}^{M}\mathbb{E}\left[||\nabla f(\overline{\omega}^{2m-1})||^{2}\right]\leq\frac{f(\overline{\omega}^{0})-f(\overline{\omega}^{*})}{M\mathbf{C_{1}}}+\frac{\sum_{m=0}^{M}\mathbf{C_{2}}h_{m}^{1}}{M\mathbf{C_{1}}}+\frac{\sum_{m=0}^{M}\mathbf{C_{3}}h_{m}^{2}}{M\mathbf{C_{1}}},$$

where $\overline{\omega}^0$ and $\overline{\omega}^*$ denote the initial and the optimal global model parameters, respectively. Both terms $\mathbf{C_2}$ and $\mathbf{C_3}$ are decreasing in $\frac{\lambda_m}{\mu}$.

Theorem 5.6 implies that the tunable $\mu$ and $\lambda_m$ are key hyper-parameters that control the convergence (rate) and robustness of Upcycled-FedProx. 
Recall that $\mu$ penalizes the deviation of the local model $\omega_i^{2m}$ from the global aggregated model $\overline{\omega}^{2m-1}$, while $\lambda_m$ penalizes the deviation of the local model $\omega_i^{2m}$ from its previous update $\omega_i^{2m-1}$. Because $\mathbf{C_1} := \mathbf{C_1}\left(L, B, \frac{1}{\mu}, \frac{1}{\rho}, \frac{1}{K}\right)$ does not depend on $\lambda_m$ (by Eqn. (11)), for proper $\mu$ and local functions $F_i$, the condition $\mathbf{C_1} > 0$ in Theorem 5.6 can hold for any $\lambda_m$. However, $\lambda_m$ could affect the convergence rate via the terms $\mathbf{C_2} := \mathbf{C_2}\left(L, B, \frac{1}{\mu}, \frac{1}{\rho}, \frac{1}{K}, \frac{\mu}{\mu+\lambda_m}\right)$ and $\mathbf{C_3} := \mathbf{C_3}\left(L, \frac{1}{\mu}, \frac{1}{\rho}, \frac{1}{K}, \frac{\mu}{\mu+\lambda_m}\right)$. Specifically, as the ratio $\frac{\lambda_m}{\mu}$ increases, both $\mathbf{C_2}$ and $\mathbf{C_3}$ decrease (by Eqn. (12)-(13)), which results in a tighter convergence rate bound. We empirically examine the impacts of $\mu$ and $\lambda_m$ in Section 7. It is worth noting that the convergence rate also depends on data heterogeneity, captured by the dissimilarity $B$. According to Eqn. (11), $\mathbf{C_1} > 0$ must hold when $B = 0$ (i.i.d. clients). Although $\mathbf{C_1}$ may become negative as $B$ increases, the experiments in Section 7 show that Upcycled-FedProx can still converge when data is highly heterogeneous.

Assumption 5.7. $||\overline{\omega}^{2m-1} - \overline{\omega}^{2m-2}|| \leq h$, $\forall m$, and $||\nabla f(\omega)|| \leq d$, $\forall \omega$.

Assumption 5.7 is standard and has been used when proving the convergence of FL algorithms (Li et al., 2019; Yang et al., 2022); it requires that the difference of aggregated weights between two consecutive iterations and the gradient $||\nabla f(\omega)||$ are bounded. Under this assumption, we have the following corollary.

Corollary 5.8 (Convergence to the stationary point). *Under Assumptions 5.2, 5.4, and 5.7, for fixed $\mu$, $K$, if $\lambda_m$ is taken such that $\frac{\mu}{\mu+\lambda_m} = O\left(\frac{1}{\sqrt{M}}\right)$, then the convergence rate of Upcycled-FedProx reduces to $O\left(\frac{1}{\sqrt{M}}\right)$.*

Corollary 5.8 provides guidance on selecting the value of $\lambda_m$ properly to guarantee the convergence of Upcycled-FedProx, i.e., by taking an increasing sequence $\{\lambda_m\}_{m=1}^M$. Intuitively, increasing $\lambda_m$ during training helps stabilize the algorithm, because the deviation of local models from the previous update gets penalized more under a larger $\lambda_m$. 
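As a quick numeric reading of Corollary 5.8 (with our own toy numbers, not values from the paper): choosing $\lambda_m$ so that $\frac{\mu}{\mu+\lambda_m} = \frac{1}{\sqrt{M}}$ exactly gives $\lambda_m = \mu(\sqrt{M} - 1)$.

```python
import math

# Illustrative choice of lambda satisfying Corollary 5.8:
#   mu / (mu + lam) = 1/sqrt(M)  =>  lam = mu * (sqrt(M) - 1)
def lam_for_rate(mu: float, M: int) -> float:
    return mu * (math.sqrt(M) - 1.0)

mu, M = 1.0, 100
lam = lam_for_rate(mu, M)
step = mu / (mu + lam)   # the even-update coefficient; equals 1/sqrt(M)
```

For $M = 100$ rounds this makes the even-update coefficient $0.1$; larger $M$ (or an increasing sequence $\lambda_m$) shrinks the even step, matching the stabilization intuition above.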
## 6 Private Upcycled-FL

In this section, we present a privacy-preserving version of Upcycled-FL. Many perturbation mechanisms can be adopted to achieve differential privacy, such as *objective perturbation* (Chaudhuri et al., 2011; Kifer et al., 2012), *output perturbation* (Chaudhuri et al., 2011; Zhang et al., 2017), *gradient perturbation* (Bassily et al., 2014; Wang et al., 2017), etc. In this section, we use output and objective perturbation as examples to illustrate that FL algorithms combined with the Upcycled-FL strategy, by reducing the total information leakage, can attain a better privacy-accuracy trade-off. Note that both output and objective perturbation methods are used to generate private updates at odd iterations, which can then be used directly for even updates.

Output perturbation: the private odd updates $\widehat{\omega}_i^{2m-1}$ are generated by first *clipping* the local models $\omega_i^{2m-1}$ and then adding a random noise vector $n_i^m$ to the clipped model:

$$\begin{array}{rl}\text{Clip odd update:}&\xi(\omega_{i}^{2m-1})=\dfrac{\omega_{i}^{2m-1}}{\max\left(1,\frac{||\omega_{i}^{2m-1}||_{2}}{\tau}\right)}\\\text{Perturb with noise:}&\widehat{\omega}_{i}^{2m-1}=\xi(\omega_{i}^{2m-1})+n_{i}^{m}\end{array}$$

where the parameter $\tau > 0$ is the clipping threshold; the clipping ensures that if $||\omega_i^{2m-1}||_2 \leq \tau$, the update remains the same; otherwise, it is scaled to have norm $\tau$.

Objective perturbation: a random linear term $\langle n_i^m, \omega\rangle$ is added to the objective function in the odd $(2m+1)$-th iteration, and the private local model $\widehat{\omega}_i^{2m+1}$ is found by solving a *perturbed* optimization. 
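The clip-then-perturb step of output perturbation can be sketched as follows, using Gaussian noise as in Theorem 6.2; all values here are illustrative, not from the paper.

```python
import numpy as np

# Output perturbation sketch: clip the odd local model to norm tau, then add
# Gaussian noise N(0, sigma^2 I). All values are illustrative.
rng = np.random.default_rng(3)

def clip(w: np.ndarray, tau: float) -> np.ndarray:
    # xi(w) = w / max(1, ||w||_2 / tau)
    return w / max(1.0, float(np.linalg.norm(w)) / tau)

def privatize(w: np.ndarray, tau: float, sigma: float) -> np.ndarray:
    return clip(w, tau) + rng.normal(scale=sigma, size=w.shape)

w = np.array([3.0, 4.0])            # ||w||_2 = 5 > tau, so it gets rescaled
w_clipped = clip(w, tau=1.0)
w_private = privatize(w, tau=1.0, sigma=0.1)
```

Clipping bounds the sensitivity of each odd update to a single data point's change, which is what lets the Gaussian noise scale in Theorem 6.2 be calibrated to $\tau$.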
Taking Upcycled-FedProx as the example, we have:

$$\hat{\omega}_{i}^{2m+1}=\arg\operatorname*{min}_{\omega}F_{i}(\omega;{\mathcal{D}}_{i})+\frac{\mu}{2}||\omega-\overline{{{\omega}}}^{2m}||^{2}+\langle n_{i}^{m},\omega\rangle.$$

Given the noisy $\widehat{\omega}_i^{2m-1}$ generated by either method, the private even updates $\widehat{\omega}_i^{2m}$ can be computed directly at the central server using the noisy aggregation $\sum_{i\in I} p_i \widehat{\omega}_i^{2m-1}$.

Privacy Analysis. Next, we conduct privacy analysis and theoretically quantify the total privacy loss of private Upcycled-FL. Because even updates are computed directly from already private intermediate results without using the dataset $\mathcal{D}_i$, no privacy leakage occurs at even iterations. This can be formally stated as the following lemma.

Lemma 6.1. *For any* m = 1, 2, · · · , *if the total privacy loss up to iteration* 2m − 1 *can be bounded by* $\varepsilon_m$, *then the total privacy loss up to the* 2m*-th iteration can also be bounded by* $\varepsilon_m$.

Lemma 6.1 is straightforward; it can be derived directly by leveraging a property of differential privacy called *immunity to post-processing* (Dwork et al., 2014), i.e., a differentially private output followed by any data-independent computation remains differentially private. Based on Lemma 6.1, we can quantify the total privacy loss of private Upcycled-FL. We adopt the moments accountant method (Abadi et al., 2016) for output perturbation, and the analysis method in (Chaudhuri et al., 2011; Zhang & Zhu, 2016; Zhang et al., 2018b) for objective perturbation. Here, we focus on settings where the local loss function is $F_i(\omega_i, \mathcal{D}_i) := \frac{1}{|\mathcal{D}_i|}\sum_{d\in \mathcal{D}_i} \hat{F}_i(\omega_i, d)$ for some $\hat{F}_i$, and the privacy guarantee is presented in Theorem 6.2 (output perturbation) and Theorem 6.3 (objective perturbation) below. The total privacy loss in the following theorem is composed using the moments accountant method (Abadi et al., 2016).

Theorem 6.2.
*Consider private Upcycled-FL over* 2M *iterations under output perturbation with noise* $n_i^m \sim \mathcal{N}(0, \sigma^2 I)$. *Then for any* $\varepsilon \geq \frac{M\tau^2}{2\sigma^2|\mathcal{D}_i|^2}$, *the algorithm is* (ε, δ)*-DP for agent* i *for*

$$\delta=\exp\left(-\frac{M\tau^{2}}{2\sigma^{2}|\mathcal{D}_{i}|^{2}}\Big{(}\frac{\varepsilon\sigma^{2}|\mathcal{D}_{i}|^{2}}{M\tau^{2}}-\frac{1}{2}\Big{)}^{2}\right).$$

*Equivalently, for any* $\delta\in[0,1]$, *the algorithm is* (ε, δ)*-DP for agent* i *for*

$$\varepsilon=2\sqrt{\frac{M\tau^{2}}{2\sigma^{2}|\mathcal{D}_{i}|^{2}}\log(\frac{1}{\delta})}+\frac{M\tau^{2}}{2\sigma^{2}|\mathcal{D}_{i}|^{2}}.$$

Theorem 6.3. *Consider private Upcycled-FL over* 2M *iterations under objective perturbation with noise* $n_i^m \sim \exp(-\alpha_i^m ||n_i^m||_2)$. *Suppose* $\hat{F}_i$ *is a generalized linear model (Iyengar et al., 2019; Bassily et al., 2014)³ that satisfies* $||\nabla \hat{F}_i(\omega; d)|| < u_1$, $|\hat{F}''_i| \leq u_2$. *Let the feature vectors be normalized such that their norm is no greater than 1, and suppose* $u_2 \leq 0.5|\mathcal{D}_i|\mu$ *holds. Then the algorithm satisfies* (ε, 0)*-DP for agent* i, *where* $\varepsilon = \sum_{m=0}^{M} \frac{2\alpha_i^m u_1 \mu + 2.8 u_2}{|\mathcal{D}_i|\mu}$.

The assumptions on $\hat{F}_i$ are again fairly standard in the literature, see, e.g., (Chaudhuri et al., 2011; Zhang & Zhu, 2016; Zhang et al., 2018b). Theorems 6.2 and 6.3 show that the total privacy loss experienced by each agent accumulates over iterations and that the privacy loss only comes from odd iterations. In contrast, for differentially private FedProx, the accumulated privacy loss would come from all iterations. Therefore, to achieve the same privacy guarantee, private Upcycled-FL requires much less perturbation per iteration than private FedProx. As a result, accuracy can be improved significantly. Experiments in Section 7 show that Upcycled-FL significantly improves the privacy-accuracy trade-off compared to other methods.

## 7 Experiments

In this section, we empirically evaluate the performance of Upcycled-FL by combining it with several popular FL methods.
We first consider non-private algorithms to examine the convergence (rate) and robustness of Upcycled-FL against statistical/device heterogeneity. Then, we adopt both output and objective perturbation to evaluate private Upcycled-FL.

## 7.1 Datasets And Networks

We conduct experiments on both synthetic and real data, as detailed below. More details of each dataset are given in Appendix F.1.

³In supervised learning, a sample d = (*x, y*) corresponds to a feature and label pair. The function $\hat{F}_i(\omega, d)$ is a generalized linear model if it can be written as a function of $\omega^T x$ and y.

Synthetic data. Using the method in Li et al. (2020), we generate four synthetic datasets with increasing statistical heterogeneity: Syn(iid), Syn(0,0), Syn(0.5,0.5), and Syn(1,1). We use *logistic regression* for the synthetic data.

Real data. We adopt two real datasets: 1) FEMNIST, a federated version of EMNIST (Cohen et al., 2017); here, a *multilayer perceptron* (MLP) consisting of two linear layers with a hidden dimension of 14×14, interconnected by ReLU activation functions, is used to learn from FEMNIST. 2) Sentiment140 (Sent140), a text sentiment analysis task on tweets (Go et al., 2009); here, a bidirectional LSTM with 256 hidden dimensions and 300 embedding dimensions is used to train on the Sent140 dataset.

## 7.2 Experimental Setup

All experiments are conducted on a server equipped with multiple NVIDIA A5000 GPUs, two AMD EPYC 7313 CPUs, and 256GB of memory. The code is implemented with Python 3.8 and PyTorch 1.13.0 on Ubuntu 20.04. We employ SGD as the local optimizer with a momentum of 0.5 and set the number of local update epochs E to 10 at each iteration m. Note that without privacy concerns, any classifier and loss function can be plugged into Upcycled-FL. However, if we adopt objective perturbation as privacy protection, the loss function must also satisfy the assumptions in Theorem 6.3. We use the cross-entropy loss throughout all experiments.
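As a concrete illustration of the local solve that each selected client performs on the synthetic data (logistic regression with a FedProx-style proximal term), the following is a minimal sketch. The function name `local_update_fedprox`, the plain gradient-descent loop, and all hyperparameter values are simplifications for exposition, not the paper's exact training code:

```python
import numpy as np

def local_update_fedprox(w_global, X, y, mu=0.5, lr=0.1, epochs=10):
    """One client's approximate local solve of
        min_w  F_i(w) + (mu/2) * ||w - w_global||^2
    for binary logistic regression, via full-batch gradient descent
    (a simplified stand-in for the E = 10 local SGD epochs)."""
    w = w_global.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y)      # logistic-loss gradient
        grad += mu * (w - w_global)        # gradient of the proximal term
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
w_new = local_update_fedprox(np.zeros(3), X, y)
```

The proximal term keeps `w` close to the current global model `w_global`, which is the regularization whose strength µ the convergence analysis in Section 5 trades off against $\lambda_m$.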
To simulate device heterogeneity, we randomly select a fraction of devices to train at each round and assume there are stragglers that cannot train for full rounds; both devices and stragglers are selected with a fixed random seed to ensure they are the same for all algorithms.

Baselines. To evaluate the effectiveness of our strategy, we apply the Upcycled-FL method to seven representative methods in federated learning. We use grid search to find the optimal hyperparameters for all methods to ensure a fair comparison. These methods include:

- FedAvg (McMahan et al., 2017): FedAvg learns the global model by averaging the clients' local models.
- FedAvgM (Hsu et al., 2019): FedAvgM introduces *momentum* while averaging local models to improve convergence rates and model performance, especially in non-i.i.d. settings.
- FedProx (Li et al., 2020): FedProx adds a proximal term to the local objective function, which enables more robust convergence when data is non-i.i.d. across different clients.
- Scaffold (Karimireddy et al., 2020): Scaffold uses control variates to correct the local updates, which helps in dealing with non-i.i.d. data and accelerates convergence.
- FedDyn (Acar et al., 2021): FedDyn considers a dynamic regularization term to align the local model updates more closely with the global model.
- pFedMe (T Dinh et al., 2020): pFedMe is a personalization method to handle client heterogeneity. We set the hyperparameter k in pFedMe to 5 to accelerate the training.
- FedYogi (Reddi et al., 2021): FedYogi considers adaptive optimization for the global model aggregation.

Unless explicitly stated otherwise, the results we report are averaged over all devices. More details of the experimental setup are in Appendix F.1.

## 7.3 Results

Convergence and Heterogeneity.
Because the even iterations of Upcycled-FL involve only addition/subtraction operations, with no transmission overhead and almost no computational cost, we train the Upcycled version of each FL algorithm for **double** the iterations compared to the baselines in approximately the same training time in this experiment. We evaluate the convergence rate and accuracy of Upcycled-FL under different dataset and heterogeneity settings. In each iteration, 30% of devices are selected, with 90% stragglers. Table 1 compares the average accuracy of different algorithms when the device heterogeneity is high (90% stragglers). The results show that all baselines are enhanced by our method. Notably, while FedAvg achieves good performance on Syn(iid), it is not robust on heterogeneous data, e.g., Syn(0,0), Syn(0.5,0.5), and Syn(1,1). Nonetheless, Upcycled-FedAvg makes it comparable with the regular FedProx algorithm, which shows that our strategy can also mitigate the performance deterioration induced by data heterogeneity. When

Table 1: Average accuracy and standard deviation with 90% stragglers on the test dataset in the non-private setting over four runs: models are trained on synthetic data (Syn) for 80 iterations (160 for the Upcycled version), on FEMNIST for 150 iterations (300 for the Upcycled version), and on Sent140 for 80 iterations (160 for the Upcycled version). We use grid search to find the optimal results for all methods.
| Method | Syn(iid) | Syn(0,0) | Syn(0.5,0.5) | Syn(1,1) | FEMNIST | Sent140 |
|-------------------|------------|--------------|------------|------------|------------|------------|
| FedAvg | 98.06±0.07 | 79.28±0.61 | 81.58±0.43 | 80.40±1.28 | 81.38±3.54 | 76.11±0.11 |
| Upcycled-FedAvg | 98.83±0.29 | 81.46±0.48 | 82.89±0.17 | 81.49±0.53 | 82.10±1.11 | 76.32±0.45 |
| FedAvgM | 98.43±0.07 | 80.29±0.83 | 82.60±0.37 | 80.59±1.28 | 80.15±3.72 | 75.7±0.85 |
| Upcycled-FedAvgM | 98.72±0.47 | 81.74±0.42 | 83.13±0.12 | 81.37±0.82 | 81.30±5.23 | 74.88±2.29 |
| FedProx | 96.52±0.07 | 80.72±0.77 | 81.99±0.55 | 81.19±0.19 | 79.35±0.65 | 73.94±0.13 |
| Upcycled-FedProx | 97.62±0.32 | 80.88±0.97 | 83.10±0.83 | 81.94±0.57 | 80.33±3.43 | 74.25±0.34 |
| Scaffold | 97.51±0.24 | 80.26±1.54 | 82.44±1.66 | 74.91±2.67 | 76.83±2.97 | 76.34±0.56 |
| Upcycled-Scaffold | 98.68±0.12 | 81.10±0.57 | 82.64±1.39 | 76.14±1.28 | 77.88±5.36 | 77.34±0.22 |
| FedDyn | 97.00±0.19 | 81.62±0.97 | 80.64±0.81 | 77.27±2.95 | 81.76±0.98 | 75.97±0.35 |
| Upcycled-FedDyn | 98.32±0.08 | 82.41±1.04 | 82.89±0.80 | 80.03±3.02 | 83.33±0.71 | 76.03±0.58 |
| pFedMe | 96.30±0.14 | 89.15±0.24 | 89.43±0.67 | 93.06±0.27 | 71.73±4.30 | 72.81±0.85 |
| Upcycled-pFedMe | 96.77±0.12 | 89.08±0.22 | 89.55±0.62 | 93.12±0.23 | 76.66±3.37 | 74.07±0.74 |
| FedYogi | 99.30±0.35 | 81.20±2.64 | 79.49±1.30 | 78.95±1.98 | 73.53±7.73 | 77.02±0.07 |
| Upcycled-FedYogi | 99.41±0.32 | 81.65±2.44 | 80.84±1.30 | 80.17±0.84 | 75.64±3.17 | 77.59±0.26 |

data is i.i.d., FedProx with the proximal term $\frac{\mu}{2}||\omega - \overline{\omega}^t||^2$ may hurt the performance compared with FedAvg. However, the proximal term can help stabilize the algorithm and significantly improve performance in practical settings when data is heterogeneous; these observations are consistent with (Li et al., 2020).
Importantly, the Upcycled-FL strategy further makes FedProx more robust to statistical heterogeneity, as it attains consistent improvements across all settings.

![10_image_0.png](10_image_0.png)

Figure 2: Comparison of average loss and standard deviation between Upcycled-FL methods and the original FL algorithms in the non-private setting under approximately the same training time. The training time refers to the time needed for a given number of iterations. Upcycled-FL does not require an update in the even iterations, allowing Upcycled-FL to train with doubled iterations.

Figures 2(a), 2(b), and 2(c) compare the convergence behavior on Syn(iid), Syn(0.5,0.5), and FEMNIST. Note that, by using the aggregation rule in Figure 1(b), the training time of Upcycled-FL is almost the same as that of the baselines, without introducing extra cost. We observe that under the same training time (the number of iterations for the baselines), the Upcycled-FL strategy benefits the baselines (achieving lower loss during training) in most cases. This improvement is significant when the dataset is i.i.d. The loss trends are consistent with the results in Table 1. We provide more results on the other three datasets in Appendix F.3.

![11_image_0.png](11_image_0.png)

Figure 3: Comparison of average loss and standard deviation of private Upcycled-FL and private FL methods using **output perturbation**. The noise parameter σ is 1.0 for all baselines, while σ of the Upcycled version is set to 0.8. Taking the iid dataset as an example, ϵ¯ = 1.40 for the Upcycled version, which ensures stronger privacy than the original method with ϵ¯ = 1.59.

Privacy-Accuracy Trade-off. We next inspect the privacy-accuracy trade-off of private Upcycled-FL and compare it with private baselines. Although we adopt both objective and output perturbation to achieve differential privacy, other techniques can also be used.
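For output perturbation, the total privacy loss of a configuration can be evaluated with the closed form of Theorem 6.2, which is presumably how privacy levels such as the ϵ¯ values quoted in the figure captions are obtained. A minimal sketch (the function name `epsilon_output_perturbation` and all parameter values are illustrative, not the experiments' exact configuration):

```python
import math

def epsilon_output_perturbation(M: int, tau: float, sigma: float,
                                n_i: int, delta: float) -> float:
    """Total (epsilon, delta)-DP loss for agent i after 2M iterations under
    output perturbation, via the second closed form in Theorem 6.2:
        rho = M * tau^2 / (2 * sigma^2 * |D_i|^2)
        eps = 2 * sqrt(rho * log(1/delta)) + rho."""
    rho = (M * tau ** 2) / (2.0 * sigma ** 2 * n_i ** 2)
    return 2.0 * math.sqrt(rho * math.log(1.0 / delta)) + rho

# Illustrative comparison: a smaller sigma (less noise) costs more privacy...
eps_a = epsilon_output_perturbation(M=80, tau=1.0, sigma=1.0, n_i=100, delta=1e-5)
eps_b = epsilon_output_perturbation(M=80, tau=1.0, sigma=0.8, n_i=100, delta=1e-5)
# ...but halving the number of noisy iterations M more than compensates.
eps_c = epsilon_output_perturbation(M=40, tau=1.0, sigma=0.8, n_i=100, delta=1e-5)
```

This matches the mechanism behind the captions: only odd iterations of Upcycled-FL leak privacy, so it can afford a smaller σ than the baselines and still end up with a smaller total ϵ.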
For each parameter setting, we still conduct a grid search and perform 4 independent runs of the experiments. To precisely quantify privacy, no stragglers are considered in this experiment. We report results using both output perturbation and objective perturbation. Figures 3 and 4 show the performance of private Upcycled-FL and the private baselines on synthetic data under the two types of perturbation, respectively. We also report results on real data in Appendix F.4. Here, we carefully set the perturbation strength of each algorithm such that the privacy loss ϵ for private Upcycled-FL is strictly less than that of the original methods. As expected, private Upcycled-FL is more stable than the baselines and attains a lower loss value under smaller ϵ. This is because private Upcycled-FL, with less information leakage, requires much less perturbation to attain the same privacy guarantee as FL methods under privacy constraints. We also observe that, in general, Upcycled-FL can be used to augment all baseline methods with or without client local heterogeneity.

![12_image_0.png](12_image_0.png)

Figure 4: Comparison of average loss and standard deviation of private Upcycled-FL and private FL methods using **objective perturbation**. Under objective perturbation, the noise parameter α is 10 for all baselines, while α of the Upcycled version is set to 20 to ensure stronger privacy than the original versions. Taking the iid dataset as an example, ϵ¯ associated with these noise parameters is 7.36 for FedProx and 7.25 for Upcycled-FedProx (when µ = 0.5).

## 8 Conclusion

This paper proposes Upcycled-FL, a novel plug-in federated learning strategy under which information leakage and computation costs can be reduced significantly. We theoretically examined the convergence (rate) of Upcycled-FedProx, a special case in which the Upcycled-FL strategy is applied to the well-known FedProx algorithm.
Extensive experiments on synthetic and real data further show that Upcycled-FL can be combined with common FL algorithms and enhance their robustness on heterogeneous data while attaining a much better privacy-accuracy trade-off under common differential privacy mechanisms.

## Acknowledgments

This material is based upon work supported by the U.S. National Science Foundation under awards IIS-2040800, IIS-2112471, IIS-2202699, IIS-2301599, and CMMI-2301601, and by grants from the Ohio State University's Translational Data Analytics Institute and College of Engineering Strategic Research Initiative.

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318, 2016.

Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. arXiv preprint arXiv:2111.04263, 2021.

Kareem Amin, Travis Dick, Alex Kulesza, Andres Munoz, and Sergei Vassilvitskii. Differentially private covariance estimation. Advances in Neural Information Processing Systems, 32, 2019.

Shahab Asoodeh, Wei-Ning Chen, Flavio P Calmon, and Ayfer Özgür. Differentially private federated learning: An information-theoretic perspective. In 2021 IEEE International Symposium on Information Theory (ISIT), pp. 344–349. IEEE, 2021.

Brendan Avent, Aleksandra Korolova, David Zeber, Torgeir Hovden, and Benjamin Livshits. BLENDER: Enabling local search with a hybrid differential privacy model. In 26th USENIX Security Symposium (USENIX Security 17), pp. 747–764, 2017.

Chunghun Baek, Sungwook Kim, Dongkyun Nam, and Jihoon Park. Enhancing differential privacy for federated learning at scale. IEEE Access, 9:148090–148103, 2021.

Borja Balle, Gilles Barthe, and Marco Gaboardi.
Privacy amplification by subsampling: tight analyses via couplings and divergences. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 6280–6290, 2018. Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pp. 464–473. IEEE, 2014. Amos Beimel, Hai Brenner, Shiva Prasad Kasiviswanathan, and Kobbi Nissim. Bounds on the sample complexity for private learning and private data release. Machine learning, 94(3):401–437, 2014. Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography Conference, pp. 635–658. Springer, 2016. Zhongteng Cai, Xueru Zhang, and Mohammad Mahdi Khalili. Privacy-aware randomized quantization via linear programming. In The 40th Conference on Uncertainty in Artificial Intelligence, 2024. URL https://openreview.net/forum?id=vWsf4L7rHq. Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3), 2011. Zhou Chuanxin, Sun Yi, and Wang Degang. Federated learning with gaussian differential privacy. In Proceedings of the 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence, pp. 296–301, 2020. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In 2017 international joint conference on neural networks (IJCNN), pp. 2921–2926. IEEE, 2017. Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages, and Programming, pp. 1–12. Springer, 2006. Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3-4):211–407, 2014. Arpita Ghosh and Aaron Roth. Selling privacy at auction. 
In Proceedings of the 12th ACM Conference on Electronic Commerce, EC '11, pp. 199–208, New York, NY, USA, 2011. Association for Computing Machinery. ISBN 9781450302616. doi: 10.1145/1993574.1993605. URL https://doi.org/10.1145/1993574.1993605.

Antonious Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, and Ananda Theertha Suresh. Shuffled model of differential privacy in federated learning. In International Conference on Artificial Intelligence and Statistics, pp. 2521–2529. PMLR, 2021.

Alec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision. CS224N project report, Stanford, 1(12):2009, 2009.

Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.

Jingchen Hu, Joerg Drechsler, and Hang J Kim. Accuracy gains from privacy amplification through sampling for differential privacy. arXiv preprint arXiv:2103.09705, 2021.

Rui Hu, Yuanxiong Guo, Hongning Li, Qingqi Pei, and Yanmin Gong. Personalized federated learning with differential privacy. IEEE Internet of Things Journal, 7(10):9530–9539, 2020.

Chunan Huang, Xueru Zhang, Rasoul Salehi, Tulga Ersal, and Anna G. Stefanopoulou. A robust energy and emissions conscious cruise controller for connected vehicles with privacy considerations. In 2020 American Control Conference (ACC), pp. 4881–4886, 2020. doi: 10.23919/ACC45564.2020.9147406.

Yangsibo Huang, Samyak Gupta, Zhao Song, Kai Li, and Sanjeev Arora. Evaluating gradient inversion attacks and defenses in federated learning. Advances in Neural Information Processing Systems, 34:7232–7241, 2021.

Roger Iyengar, Joseph P Near, Dawn Song, Om Thakkar, Abhradeep Thakurta, and Lun Wang. Towards practical differentially private convex optimization. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 299–316. IEEE, 2019.
Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. Differentially private fair learning. In International Conference on Machine Learning, pp. 3000–3008. PMLR, 2019. Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In International conference on machine learning, pp. 5132–5143. PMLR, 2020. Shiva Prasad Kasiviswanathan, Homin K Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011. Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Tighter theory for local sgd on identical and heterogeneous data. In International Conference on Artificial Intelligence and Statistics, pp. 4519–4529. PMLR, 2020. Mohammad Mahdi Khalili and Iman Vakilinia. Trading privacy through randomized response. In IEEE INFOCOM 2021-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 1–6. IEEE, 2021. Mohammad Mahdi Khalili, Xueru Zhang, and Mingyan Liu. Contract design for purchasing private data using a biased differentially private algorithm. In Proceedings of the 14th Workshop on the Economics of Networks, Systems and Computation, pp. 1–6, 2019. Mohammad Mahdi Khalili, Xueru Zhang, and Mahed Abroshan. Fair sequential selection using supervised learning models. Advances in Neural Information Processing Systems, 34:28144–28155, 2021a. Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, and Somayeh Sojoudi. Improving fairness and privacy in selection problems. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8092–8100, 2021b. Mohammad Mahdi Khalili, Xueru Zhang, and Mingyan Liu. Designing contracts for trading private and heterogeneous data using a biased differentially private algorithm. IEEE Access, 9:70732–70745, 2021c. 
Daniel Kifer, Adam Smith, and Abhradeep Thakurta. Private convex empirical risk minimization and highdimensional regression. In Conference on Learning Theory, pp. 25–1. JMLR Workshop and Conference Proceedings, 2012. Muah Kim, Onur Günlü, and Rafael F Schaefer. Federated learning with local differential privacy: Trade-offs between privacy, utility, and communication. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2650–2654. IEEE, 2021. Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429–450, 2020. Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of fedavg on non-iid data. In International Conference on Learning Representations, 2019. Xiyang Liu, Weihao Kong, Sham Kakade, and Sewoong Oh. Robust and differentially private mean estimation. Advances in neural information processing systems, 34:3887–3901, 2021. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communicationefficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pp. 1273–1282. PMLR, 2017. Nicolas Papernot, Martín Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016. Reese Pathak and Martin J Wainwright. Fedsplit: An algorithmic framework for fast federated optimization. Advances in Neural Information Processing Systems, 33:7057–7066, 2020. Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=LkFG3lB13U5. Mohamed Seif, Ravi Tandon, and Ming Li. 
Wireless federated learning with local differential privacy. In 2020 IEEE International Symposium on Information Theory (ISIT), pp. 2604–2609. IEEE, 2020. Canh T Dinh, Nguyen Tran, and Josh Nguyen. Personalized federated learning with moreau envelopes. Advances in Neural Information Processing Systems, 33:21394–21405, 2020. Aleksei Triastcyn and Boi Faltings. Federated learning with bayesian differential privacy. In 2019 IEEE International Conference on Big Data (Big Data), pp. 2587–2596. IEEE, 2019. Stacey Truex, Ling Liu, Ka-Ho Chow, Mehmet Emre Gursoy, and Wenqi Wei. Ldp-fed: Federated learning with local differential privacy. In Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, pp. 61–66, 2020. Di Wang, Minwei Ye, and Jinhui Xu. Differentially private empirical risk minimization revisited: Faster and more general. Advances in Neural Information Processing Systems, 30, 2017. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, and H Vincent Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. Advances in neural information processing systems, 33: 7611–7623, 2020a. Yansheng Wang, Yongxin Tong, and Dingyuan Shi. Federated latent dirichlet allocation: A local differential privacy based framework. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 6283–6290, 2020b. Yu-Xiang Wang, Stephen Fienberg, and Alex Smola. Privacy for free: Posterior sampling and stochastic gradient monte carlo. In International Conference on Machine Learning, pp. 2493–2502. PMLR, 2015. Yu-Xiang Wang, Borja Balle, and Shiva Prasad Kasiviswanathan. Subsampled rényi differential privacy and analytical moments accountant. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1226–1235. PMLR, 2019. Kang Wei, Jun Li, Ming Ding, Chuan Ma, Howard H Yang, Farhad Farokhi, Shi Jin, Tony QS Quek, and H Vincent Poor. 
Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, 15:3454–3469, 2020. Xiang Wu, Yongting Zhang, Minyu Shi, Pei Li, Ruirui Li, and Neal N Xiong. An adaptive federated learning scheme with differential privacy preserving. Future Generation Computer Systems, 127:362–372, 2022. Haibo Yang, Xin Zhang, Prashant Khanduri, and Jia Liu. Anarchic federated learning. In International Conference on Machine Learning, pp. 25331–25363. PMLR, 2022. Jiale Zhang, Junyu Wang, Yanchao Zhao, and Bing Chen. An efficient federated learning scheme with differential privacy in mobile edge computing. In International Conference on Machine Learning and Intelligent Communications, pp. 538–550. Springer, 2019a. Jiaqi Zhang, Kai Zheng, Wenlong Mou, and Liwei Wang. Efficient private erm for smooth objectives. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 3922–3928, 2017. Lefeng Zhang, Tianqing Zhu, Ping Xiong, Wanlei Zhou, and Philip Yu. A robust game-theoretical federated learning framework with joint differential privacy. IEEE Transactions on Knowledge and Data Engineering, 2022a. Tao Zhang and Quanyan Zhu. Dynamic differential privacy for admm-based distributed classification learning. IEEE Transactions on Information Forensics and Security, 12(1):172–187, 2016. Xueru Zhang, Mohammad Mahdi Khalili, and Mingyan Liu. Improving the privacy and accuracy of admmbased distributed algorithms. In International Conference on Machine Learning, pp. 5796–5805. PMLR, 2018a. Xueru Zhang, Mohammad Mahdi Khalili, and Mingyan Liu. Recycled admm: Improve privacy and accuracy with less computation in distributed algorithms. In 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 959–965. IEEE, 2018b. Xueru Zhang, Chunan Huang, Mingyan Liu, Anna Stefanopoulou, and Tulga Ersal. 
Predictive cruise control with private vehicle-to-vehicle communication for improving fuel consumption and emissions. IEEE Communications Magazine, 57(10):91–97, 2019b.

Xueru Zhang, Mohammad Mahdi Khalili, and Mingyan Liu. Recycled admm: Improving the privacy and accuracy of distributed algorithms. IEEE Transactions on Information Forensics and Security, 15:1723–1734, 2019c.

Xueru Zhang, Mohammad Mahdi Khalili, and Mingyan Liu. Differentially private real-time release of sequential data. ACM Transactions on Privacy and Security, 26(1):1–29, 2022b.

Yang Zhao, Jun Zhao, Mengmeng Yang, Teng Wang, Ning Wang, Lingjuan Lyu, Dusit Niyato, and Kwok-Yan Lam. Local differential privacy-based federated learning for internet of things. IEEE Internet of Things Journal, 8(11):8836–8853, 2020.

Qinqing Zheng, Shuxiao Chen, Qi Long, and Weijie Su. Federated f-differential privacy. In International Conference on Artificial Intelligence and Statistics, pp. 2251–2259. PMLR, 2021.

Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta. Amplification by shuffling: From local to central differential privacy via anonymity. In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2019. URL https://arxiv.org/abs/1811.12469.

## A Notation Table

| Symbol | Meaning |
|---|---|
| $I$ | set of agents |
| $\mathcal{D}_i$ | dataset of agent i |
| $p_i$ | size of agent i's data as a fraction of total data samples |
| $\omega_i^t$ | agent i's local model parameter at time t |
| $\overline{\omega}^t$ | aggregated model at central server at time t |
| $\widehat{\omega}_i^t$ | differentially private version of $\omega_i^t$ |
| $F_i$ | local objective function of agent i |
| $f$ | overall objective function |
| $n_i^t$ | random noise added to agent i at time t |
| $\mu$ | hyper-parameter for proximal term in FedProx and Upcycled-FedProx |
| $\lambda_m$ | hyper-parameter for first-order approximation at even iteration 2m in Upcycled-FL |
| $\tau$ | clipping threshold for output perturbation |

## B Lemmas

Lemma B.1. *Define* $\widetilde{\omega}^t = \mathbb{E}_i(\omega_i^t)$.
*Suppose the conditions in Theorem 5.6 hold. Then the following holds* ∀m:

$$\begin{aligned}f(\widetilde{\omega}^{2m+1})\leq\;& f(\overline{\omega}^{2m-1})-\widehat{C}_{1}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho}\Big)||\nabla f(\overline{\omega}^{2m-1})||^{2}\\&+\widehat{C}_{2}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)||\nabla f(\overline{\omega}^{2m-1})||\cdot||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||\\&+\widehat{C}_{3}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||^{2}\end{aligned}$$

where the coefficients satisfy

$$\begin{aligned}\widehat{C}_{1}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho}\Big)&=\frac{1}{\mu}-\frac{LB}{\mu^{2}\rho}-\frac{LB^{2}}{2\rho^{2}}\\\widehat{C}_{2}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)&=\Big(\frac{L^{2}}{\mu^{2}\rho}+\frac{L+\mu}{\mu^{2}}+\frac{L(L+\rho)B}{\rho^{2}}\Big)\frac{\mu}{\mu+\lambda_{m}}\\\widehat{C}_{3}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)&=\frac{L(L+\rho)^{2}}{2\rho^{2}}\cdot\frac{\mu^{2}}{(\mu+\lambda_{m})^{2}}\end{aligned}$$

Lemma B.2. *Let* $S_m$ *be the set of* K *randomly selected local devices that get updated at iterations* 2m − 1 *and* 2m, *and* $\mathbb{E}_{S_m}[\cdot]$ *be the expectation with respect to the choice of devices. Then we have*

$$\begin{aligned}\mathbb{E}_{S_{m}}[f(\overline{\omega}^{2m+1})]\leq\;& f(\widetilde{\omega}^{2m+1})+\widetilde{C}_{1}\Big(B,L,\frac{1}{K},\frac{1}{\rho}\Big)||\nabla f(\overline{\omega}^{2m-1})||^{2}\\&+\widetilde{C}_{2}\Big(B,L,\frac{1}{K},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)||\nabla f(\overline{\omega}^{2m-1})||\cdot||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||\\&+\widetilde{C}_{3}\Big(B,L,\frac{1}{K},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||^{2}\end{aligned}$$

where the coefficients satisfy

$$\begin{aligned}\widetilde{C}_{1}\Big(B,L,\frac{1}{K},\frac{1}{\rho}\Big)&=\frac{2B^{2}}{K\rho^{2}}+\frac{2LB+\rho}{\rho}\sqrt{\frac{2}{K}}\frac{B}{\rho}\\\widetilde{C}_{2}\Big(B,L,\frac{1}{K},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)&=\Big(\frac{4LB}{K\rho^{2}}+\frac{2LB+\rho}{\rho}\sqrt{\frac{2}{K}}\frac{L}{\rho}+\frac{2L(L+\rho)}{\rho}\sqrt{\frac{2}{K}}\frac{B}{\rho}\Big)\frac{\mu}{\mu+\lambda_{m}}\\\widetilde{C}_{3}\Big(B,L,\frac{1}{K},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)&=\Big(\frac{2}{K}\frac{L^{2}}{\rho^{2}}+\frac{2L(L+\rho)}{\rho}\sqrt{\frac{2}{K}}\frac{L}{\rho}\Big)\Big(\frac{\mu}{\mu+\lambda_{m}}\Big)^{2}\end{aligned}$$

## C Proofs

Proof of Lemma B.1.
Since the local functions $F_i$ are $L$-Lipschitz smooth, expanding around $\overline{\omega}^{2m-1}$ we have

$$f(\widetilde{\omega}^{2m+1})\leq f(\overline{\omega}^{2m-1})+\langle\nabla f(\overline{\omega}^{2m-1}),\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}\rangle+\frac{L}{2}||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||^{2}$$
$$=f(\overline{\omega}^{2m-1})+\Big\langle\nabla f(\overline{\omega}^{2m-1}),-\frac{1}{\mu}\nabla f(\overline{\omega}^{2m-1})+\Phi^{2m+1}\Big\rangle+\frac{L}{2}||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||^{2}$$
$$\leq f(\overline{\omega}^{2m-1})-\frac{1}{\mu}||\nabla f(\overline{\omega}^{2m-1})||^{2}+\frac{1}{\mu}||\nabla f(\overline{\omega}^{2m-1})||\cdot||\Phi^{2m+1}||+\frac{L}{2}||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||^{2}$$

where

$$\Phi^{2m+1}=\frac{1}{\mu}\nabla f(\overline{\omega}^{2m-1})+\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}=\mathbb{E}_{i}\left[\frac{1}{\mu}\nabla F_{i}(\overline{\omega}^{2m-1})+\omega_{i}^{2m+1}-\overline{\omega}^{2m-1}\right]\tag{7}$$

By the first-order condition, the following holds at the $(2m+1)$-th iteration:

$$\omega_{i}^{2m+1}-\overline{\omega}^{2m-1}=-\frac{1}{\mu}\nabla F_{i}(\omega_{i}^{2m+1})+\overline{\omega}^{2m}-\overline{\omega}^{2m-1}$$

Plugging this into Eqn. (7), we have

$$\Phi^{2m+1}=\mathbb{E}_{i}\Big[\frac{1}{\mu}\Big(\nabla F_{i}(\overline{\omega}^{2m-1})-\nabla F_{i}(\omega_{i}^{2m+1})\Big)+\overline{\omega}^{2m}-\overline{\omega}^{2m-1}\Big]$$

By $L$-Lipschitz smoothness and Jensen's inequality, we have

$$||\Phi^{2m+1}||\leq\mathbb{E}_{i}\Big[\frac{1}{\mu}||\nabla F_{i}(\overline{\omega}^{2m-1})-\nabla F_{i}(\omega_{i}^{2m+1})||\Big]+||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||\tag{8}$$
$$\leq\mathbb{E}_{i}\Big[\frac{L}{\mu}||\overline{\omega}^{2m-1}-\omega_{i}^{2m+1}||\Big]+||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||$$
$$\leq\mathbb{E}_{i}\Big[\frac{L}{\mu}||\omega_{i}^{2m+1}-\overline{\omega}^{2m}||\Big]+\frac{L+\mu}{\mu}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||$$

Since the $h_i$ are $\rho$-strongly convex, $F_i$ is $L$-Lipschitz smooth, and $\omega_i^{2m+1}=\arg\min_{\omega}h_i(\omega;\overline{\omega}^{2m})$, we have

$$||\omega_{i}^{2m+1}-\overline{\omega}^{2m}||\leq\frac{1}{\rho}||\nabla h_{i}(\omega_{i}^{2m+1};\overline{\omega}^{2m})-\nabla h_{i}(\overline{\omega}^{2m};\overline{\omega}^{2m})||=\frac{1}{\rho}||0-\nabla F_{i}(\overline{\omega}^{2m})||$$
$$\leq\frac{L}{\rho}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{1}{\rho}||\nabla F_{i}(\overline{\omega}^{2m-1})||$$

Plugging into Eqn. (8),

$$||\Phi^{2m+1}||\leq\frac{L}{\mu\rho}\mathbb{E}_{i}[||\nabla F_{i}(\overline{\omega}^{2m-1})||]+\Big(\frac{L^{2}}{\mu\rho}+\frac{L+\mu}{\mu}\Big)||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||$$

Consider the following term:

$$||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||=||\mathbb{E}_{i}[\omega_{i}^{2m+1}]-\overline{\omega}^{2m-1}||\leq\mathbb{E}_{i}[||\omega_{i}^{2m+1}-\overline{\omega}^{2m-1}||]$$
$$\leq\mathbb{E}_{i}[||\omega_{i}^{2m+1}-\overline{\omega}^{2m}||+||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||]$$
$$\leq\frac{L+\rho}{\rho}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{1}{\rho}\mathbb{E}_{i}\left[||\nabla F_{i}(\overline{\omega}^{2m-1})||\right]\tag{9}$$

Because $F_i$ is $B$-dissimilar, we have

$$\mathbb{E}_{i}\Big[||\nabla F_{i}(\overline{\omega}^{2m-1})||\Big]\leq B||\nabla f(\overline{\omega}^{2m-1})||$$

Combining this with Eqn. (9) gives

$$||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\leq\frac{L+\rho}{\rho}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{B}{\rho}||\nabla f(\overline{\omega}^{2m-1})||\tag{10}$$

Therefore,

$$f(\widetilde{\omega}^{2m+1})\leq f(\overline{\omega}^{2m-1})-\frac{1}{\mu}||\nabla f(\overline{\omega}^{2m-1})||^{2}+\frac{1}{\mu}||\nabla f(\overline{\omega}^{2m-1})||\cdot||\Phi^{2m+1}||+\frac{L}{2}||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||^{2}$$
$$\leq f(\overline{\omega}^{2m-1})-\Big(\frac{1}{\mu}-\frac{LB}{\mu^{2}\rho}-\frac{LB^{2}}{2\rho^{2}}\Big)||\nabla f(\overline{\omega}^{2m-1})||^{2}$$
$$+\Big(\frac{L^{2}}{\mu^{2}\rho}+\frac{L+\mu}{\mu^{2}}+\frac{L(L+\rho)B}{\rho^{2}}\Big)||\nabla f(\overline{\omega}^{2m-1})||\cdot||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{L(L+\rho)^{2}}{2\rho^{2}}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||^{2}$$

After applying the first-order approximation in even iterations, we have

$$\overline{\omega}^{2m}-\overline{\omega}^{2m-1}=\frac{\mu}{\mu+\lambda_{m}}(\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2})$$

Therefore,

$$f(\widetilde{\omega}^{2m+1})\leq f(\overline{\omega}^{2m-1})-\Big(\frac{1}{\mu}-\frac{LB}{\mu^{2}\rho}-\frac{LB^{2}}{2\rho^{2}}\Big)||\nabla f(\overline{\omega}^{2m-1})||^{2}$$
$$+\Big(\frac{L^{2}}{\mu^{2}\rho}+\frac{L+\mu}{\mu^{2}}+\frac{L(L+\rho)B}{\rho^{2}}\Big)\frac{\mu}{\mu+\lambda_{m}}||\nabla f(\overline{\omega}^{2m-1})||\cdot||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||$$
$$+\frac{L(L+\rho)^{2}}{2\rho^{2}}\frac{\mu^{2}}{(\mu+\lambda_{m})^{2}}||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||^{2}$$

Lemma B.1 is proved. $\square$

## Proof Of Lemma B.2

Proof. Because each local function $F_i$ is $L$-Lipschitz smooth, $f$ is locally Lipschitz continuous, i.e.,

$$f(\overline{\omega}^{2m+1})\leq f(\widetilde{\omega}^{2m+1})+L_{0}||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||$$

where $L_0$ is the local Lipschitz constant. Moreover, we have

$$L_{0}\leq||\nabla f(\overline{\omega}^{2m-1})||+L\big(||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||+||\overline{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\big)$$

Therefore,

$$\mathbb{E}_{S_{m}}[f(\overline{\omega}^{2m+1})]\leq f(\widetilde{\omega}^{2m+1})+\mathbb{E}_{S_{m}}\Big[\Big(||\nabla f(\overline{\omega}^{2m-1})||+L\big(||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||+||\overline{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\big)\Big)||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||\Big]$$
$$=f(\widetilde{\omega}^{2m+1})+\Big(||\nabla f(\overline{\omega}^{2m-1})||+L||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\Big)\mathbb{E}_{S_{m}}[||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||]+L\,\mathbb{E}_{S_{m}}\big[||\overline{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\cdot||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||\big]$$
$$\leq f(\widetilde{\omega}^{2m+1})+\Big(||\nabla f(\overline{\omega}^{2m-1})||+2L||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\Big)\mathbb{E}_{S_{m}}[||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||]+L\,\mathbb{E}_{S_{m}}\big[||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||^{2}\big]$$

When $K$ devices are randomly selected, by Eqn. (9), we have

$$\mathbb{E}_{S_{m}}\big[||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||^{2}\big]\leq\frac{1}{K}\mathbb{E}_{i}\big[||\omega_{i}^{2m+1}-\widetilde{\omega}^{2m+1}||^{2}\big]\leq\frac{2}{K}\mathbb{E}_{i}\big[||\omega_{i}^{2m+1}-\overline{\omega}^{2m}||^{2}\big]$$
$$\leq\frac{2}{K}\mathbb{E}_{i}\Big[\frac{L^{2}}{\rho^{2}}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||^{2}+\frac{1}{\rho^{2}}||\nabla F_{i}(\overline{\omega}^{2m-1})||^{2}+\frac{2L}{\rho^{2}}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||\cdot||\nabla F_{i}(\overline{\omega}^{2m-1})||\Big]$$
$$\leq\frac{2}{K}\frac{L^{2}}{\rho^{2}}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||^{2}+\frac{2B^{2}}{K\rho^{2}}||\nabla f(\overline{\omega}^{2m-1})||^{2}+\frac{4LB}{K\rho^{2}}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||\cdot||\nabla f(\overline{\omega}^{2m-1})||$$
$$=\frac{2}{K}\Big(\frac{L}{\rho}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{B}{\rho}||\nabla f(\overline{\omega}^{2m-1})||\Big)^{2}$$

By Jensen's inequality,

$$\mathbb{E}_{S_{m}}\big[||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||\big]\leq\sqrt{\mathbb{E}_{S_{m}}\big[||\overline{\omega}^{2m+1}-\widetilde{\omega}^{2m+1}||^{2}\big]}\leq\sqrt{\frac{2}{K}}\Big(\frac{L}{\rho}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{B}{\rho}||\nabla f(\overline{\omega}^{2m-1})||\Big)$$

By Eqn. (10),

$$||\nabla f(\overline{\omega}^{2m-1})||+2L||\widetilde{\omega}^{2m+1}-\overline{\omega}^{2m-1}||\leq\frac{2L(L+\rho)}{\rho}||\overline{\omega}^{2m}-\overline{\omega}^{2m-1}||+\frac{2LB+\rho}{\rho}||\nabla f(\overline{\omega}^{2m-1})||$$

Re-organizing and applying $\overline{\omega}^{2m}-\overline{\omega}^{2m-1}=\frac{\mu}{\mu+\lambda_{m}}(\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2})$, we have

$$\mathbb{E}_{S_{m}}[f(\overline{\omega}^{2m+1})]\leq f(\widetilde{\omega}^{2m+1})+\Big(\frac{2}{K}\frac{L^{2}}{\rho^{2}}+\frac{2L(L+\rho)}{\rho}\sqrt{\frac{2}{K}}\frac{L}{\rho}\Big)\Big(\frac{\mu}{\mu+\lambda_{m}}\Big)^{2}||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||^{2}$$
$$+\Big(\frac{4LB}{K\rho^{2}}+\frac{2LB+\rho}{\rho}\sqrt{\frac{2}{K}}\frac{L}{\rho}+\frac{2L(L+\rho)}{\rho}\sqrt{\frac{2}{K}}\frac{B}{\rho}\Big)\frac{\mu}{\mu+\lambda_{m}}||\overline{\omega}^{2m-1}-\overline{\omega}^{2m-2}||\cdot||\nabla f(\overline{\omega}^{2m-1})||$$
$$+\Big(\frac{2B^{2}}{K\rho^{2}}+\frac{2LB+\rho}{\rho}\sqrt{\frac{2}{K}}\frac{B}{\rho}\Big)||\nabla f(\overline{\omega}^{2m-1})||^{2}$$

Lemma B.2 is proved. $\square$

## Proof of Lemma 5.5

Proof. Lemma 5.5 can be proved by combining Lemmas B.1 and B.2, where

$$C_{1}:=C_{1}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{1}{K}\Big)=\widehat{C}_{1}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho}\Big)-\widetilde{C}_{1}\Big(B,L,\frac{1}{K},\frac{1}{\rho}\Big)$$
$$=\frac{1}{\mu}-\frac{LB}{\mu^{2}\rho}-\frac{LB^{2}}{2\rho^{2}}-\frac{2B^{2}}{K\rho^{2}}-\frac{2LB+\rho}{\rho}\sqrt{\frac{2}{K}}\frac{B}{\rho}\tag{11}$$

$$C_{2}:=C_{2}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{1}{K},\frac{\mu}{\mu+\lambda_{m}}\Big)=\widehat{C}_{2}\Big(L,B,\frac{1}{\mu},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)+\widetilde{C}_{2}\Big(B,L,\frac{1}{K},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)$$
$$=\Big(\frac{L^{2}}{\mu^{2}\rho}+\frac{L+\mu}{\mu^{2}}+\frac{L(L+\rho)B}{\rho^{2}}+\frac{4LB}{K\rho^{2}}+\frac{4L^{2}B+\rho L(1+2B)}{\rho^{2}}\sqrt{\frac{2}{K}}\Big)\frac{\mu}{\mu+\lambda_{m}}\tag{12}$$

$$C_{3}:=C_{3}\Big(L,\frac{1}{\mu},\frac{1}{\rho},\frac{1}{K},\frac{\mu}{\mu+\lambda_{m}}\Big)=\widehat{C}_{3}\Big(L,\frac{1}{\mu},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)+\widetilde{C}_{3}\Big(L,\frac{1}{K},\frac{1}{\rho},\frac{\mu}{\mu+\lambda_{m}}\Big)$$
$$=\Big(\frac{L(L+\rho)^{2}}{2\rho^{2}}+\frac{2L^{2}}{K\rho^{2}}+\frac{2L^{2}(L+\rho)}{\rho^{2}}\sqrt{\frac{2}{K}}\Big)\Big(\frac{\mu}{\mu+\lambda_{m}}\Big)^{2}\tag{13}$$

$\square$

## D Proof Of Theorem 5.6

This can be proved directly from Lemma 5.5 by averaging over all $M$ odd iterations.

## E Proof Of Theorem 6.2

WLOG, consider the case where the local devices get updated in every iteration and the algorithm runs for $2M$ iterations in total. We use uppercase letters $X$ and lowercase letters $x$ to denote random variables and their realizations, and $P_X(\cdot)$ to denote the probability distribution of $X$. To simplify notation, we drop the index $i$, as we are only concerned with one agent. Following Abadi et al. (2016), for a mechanism $\mathcal{M}$ with output $o$ and inputs $d$ and $\hat{d}$, let the random variable $c(o;\mathcal{M},d,\hat{d})=\log\frac{\Pr(\mathcal{M}(d)=o)}{\Pr(\mathcal{M}(\hat{d})=o)}$ denote the privacy loss at $o$, and

$$\alpha_{\mathcal{M}}(\lambda)=\operatorname*{max}_{d,\hat{d}}\log\mathbb{E}_{o\sim\mathcal{M}(d)}\{\exp(\lambda c(o;\mathcal{M},d,\hat{d}))\}$$

For two neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$ of agent $i$, by Lemma 6.1, the total privacy loss is only contributed by odd iterations. Thus, for any sequence of private (clipped) models $\widehat{\omega}^t$ generated by mechanisms $\{\mathcal{M}^m\}_{m=1}^{M}$ over $2M$ iterations, there is:

$$c(\widehat{\omega}^{0:2M};\{\mathcal{M}^{m}\}_{m=1}^{M},\mathcal{D},\mathcal{D}')=\log\frac{P_{\widehat{\Omega}^{0:2M}}(\widehat{\omega}^{0:2M}|\mathcal{D})}{P_{\widehat{\Omega}^{0:2M}}(\widehat{\omega}^{0:2M}|\mathcal{D}')}$$
$$=\sum_{m=0}^{M}\log\frac{P_{\widehat{\Omega}^{2m+1}}(\widehat{\omega}^{2m+1}|\mathcal{D},\widehat{\omega}^{0:2m})}{P_{\widehat{\Omega}^{2m+1}}(\widehat{\omega}^{2m+1}|\mathcal{D}',\widehat{\omega}^{0:2m})}+\log\frac{P_{\widehat{\Omega}^{0}}(\widehat{\omega}^{0}|\mathcal{D})}{P_{\widehat{\Omega}^{0}}(\widehat{\omega}^{0}|\mathcal{D}')}$$
$$=\sum_{m=0}^{M}c(\widehat{\omega}^{2m+1};\mathcal{M}^{m},\widehat{\omega}^{0:2m},\mathcal{D},\mathcal{D}')$$

where $\widehat{\omega}^{0:t}=\{\widehat{\omega}^{\tau}\}_{\tau=0}^{t}$ and $\widehat{\Omega}^{t}$ is the random variable whose realization is $\widehat{\omega}^{t}$. Since $\widehat{\omega}^{0}$ is randomly generated and independent of the dataset, we have $P_{\widehat{\Omega}^{0}}(\widehat{\omega}^{0}|\mathcal{D})=P_{\widehat{\Omega}^{0}}(\widehat{\omega}^{0}|\mathcal{D}')$.
Moreover, the following holds for any $\lambda$:

$$\log\mathbb{E}_{\widehat{\omega}^{0:2M}}\left\{\exp(\lambda c(\widehat{\omega}^{0:2M};\{\mathcal{M}^{m}\}_{m=1}^{M},\mathcal{D},\mathcal{D}'))\right\}$$
$$=\log\mathbb{E}_{\widehat{\omega}^{0:2M}}\left\{\exp\Big(\lambda\sum_{m=0}^{M}c(\widehat{\omega}^{2m+1};\mathcal{M}^{m},\widehat{\omega}^{0:2m},\mathcal{D},\mathcal{D}')\Big)\right\}$$
$$=\sum_{m=0}^{M}\log\mathbb{E}_{\widehat{\omega}^{2m+1}}\left\{\exp(\lambda c(\widehat{\omega}^{2m+1};\mathcal{M}^{m},\widehat{\omega}^{0:2m},\mathcal{D},\mathcal{D}'))\right\}\tag{14}$$

Therefore, $\alpha_{\{\mathcal{M}^{m}\}_{m=1}^{M}}(\lambda)\leq\sum_{m=1}^{M}\alpha_{\mathcal{M}^{m}}(\lambda)$ also holds.

We first bound each individual $\alpha_{\mathcal{M}^{m}}(\lambda)$. Consider two neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$. The private (clipped) model $\widehat{\omega}^{2m+1}$ is generated by the mechanism $\mathcal{M}^{m}(\mathcal{D})=\xi(\omega^{2m+1})+N=\frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\eta(d)+N$ with function $||\eta(\cdot)||_{2}\leq\tau$ and Gaussian noise $N\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})$. Without loss of generality, let $\mathcal{D}'=\mathcal{D}\cup\{d_{n}\}$, $\eta(d_{n})=\pm\tau e_{1}$, and $\sum_{d\in\mathcal{D}}\eta(d)=0$. Then $\mathcal{M}^{m}(\mathcal{D})$ and $\mathcal{M}^{m}(\mathcal{D}')$ are distributed identically except for the first coordinate, and the problem reduces to a one-dimensional problem:

$$c(\widehat{\omega}^{2m+1};\mathcal{M}^{m},\widehat{\omega}^{0:2m},\mathcal{D},\mathcal{D}')=\log\frac{P_{\widehat{\Omega}^{2m+1}}(\widehat{\omega}^{2m+1}|\mathcal{D},\widehat{\omega}^{0:2m})}{P_{\widehat{\Omega}^{2m+1}}(\widehat{\omega}^{2m+1}|\mathcal{D}',\widehat{\omega}^{0:2m})}=\log\frac{P_{N}(n)}{P_{N}(n\pm\frac{\tau}{|\mathcal{D}|})}\leq\frac{\tau}{2|\mathcal{D}|\sigma^{2}}\Big(2|n|+\frac{\tau}{|\mathcal{D}|}\Big)$$

where $n$ is the realized noise, i.e., $\frac{1}{|\mathcal{D}|}\sum_{d\in\mathcal{D}}\eta(d)+n=\widehat{\omega}^{2m+1}$. Therefore,

$$\alpha_{\mathcal{M}^{m}}(\lambda)=\log\mathbb{E}_{N\sim\mathcal{N}(0,\sigma^{2})}\Big\{\exp\Big(\frac{\lambda\tau}{2|\mathcal{D}|\sigma^{2}}\Big(2N+\frac{\tau}{|\mathcal{D}|}\Big)\Big)\Big\}$$
$$=\log\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\sigma}\exp\Big(-\frac{1}{2\sigma^{2}}\Big(n-\frac{\lambda\tau}{|\mathcal{D}|}\Big)^{2}\Big)\cdot\exp\Big(\frac{\tau^{2}}{2|\mathcal{D}|^{2}\sigma^{2}}(\lambda^{2}+\lambda)\Big)dn=\frac{\tau^{2}\lambda(\lambda+1)}{2|\mathcal{D}|^{2}\sigma^{2}}$$

Hence

$$\alpha_{\{\mathcal{M}^{m}\}_{m=1}^{M}}(\lambda)\leq\sum_{m=1}^{M}\alpha_{\mathcal{M}^{m}}(\lambda)=\frac{M\tau^{2}\lambda(\lambda+1)}{2|\mathcal{D}|^{2}\sigma^{2}}$$

Using the tail bound [Theorem 2, Abadi et al. (2016)], for any $\varepsilon\geq\frac{M\tau^{2}}{2|\mathcal{D}|^{2}\sigma^{2}}$, the algorithm is $(\varepsilon,\delta)$-differentially private for

$$\delta=\operatorname*{min}_{\lambda:\lambda\geq0}\;h(\lambda)=\operatorname*{min}_{\lambda:\lambda\geq0}\exp\left(\frac{M\tau^{2}\lambda(\lambda+1)}{2|\mathcal{D}|^{2}\sigma^{2}}-\lambda\varepsilon\right)$$

To find $\lambda^{*}=\operatorname*{argmin}_{\lambda:\lambda\geq0}h(\lambda)$, taking the derivative of $h(\lambda)$ and setting it to zero gives the solution $\bar{\lambda}=\frac{\varepsilon|\mathcal{D}|^{2}\sigma^{2}}{M\tau^{2}}-\frac{1}{2}\geq0$, and $h''(\bar{\lambda})>0$ implies $\lambda^{*}=\bar{\lambda}$. Plugging in gives:

$$\delta=\exp\left(\Big(\frac{M\tau^{2}}{4|\mathcal{D}|^{2}\sigma^{2}}-\frac{\varepsilon}{2}\Big)\Big(\frac{\varepsilon|\mathcal{D}|^{2}\sigma^{2}}{M\tau^{2}}-\frac{1}{2}\Big)\right)\tag{15}$$

Similarly, for any $\delta\in[0,1]$, the algorithm is $(\varepsilon,\delta)$-differentially private for

$$\varepsilon=\operatorname*{min}_{\lambda:\lambda\geq0}\;h_{1}(\lambda)=\operatorname*{min}_{\lambda:\lambda\geq0}\;\frac{M\tau^{2}(\lambda+1)}{2|\mathcal{D}|^{2}\sigma^{2}}+\frac{1}{\lambda}\log\left(\frac{1}{\delta}\right)=2\sqrt{\frac{M\tau^{2}}{2|\mathcal{D}|^{2}\sigma^{2}}\log\Big(\frac{1}{\delta}\Big)}+\frac{M\tau^{2}}{2|\mathcal{D}|^{2}\sigma^{2}}$$

## Proof Of Theorem 6.3

Proof. WLOG, consider the case where the local devices get updated in every iteration and the algorithm runs for $2M$ iterations in total. We use uppercase letters $X$ and lowercase letters $x$ to denote random variables and their realizations, and $P_X(\cdot)$ to denote the probability distribution of $X$. To simplify notation, we drop the index $i$, as we are only concerned with one agent, and use $\omega^{t}$ to denote the private output $\widehat{\omega}^{t}$. For two neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$ of agent $i$, by Lemma 6.1, the total privacy loss is only contributed by odd iterations.
Thus, the ratio of joint probabilities (the privacy loss) is given by:

$$\frac{P_{\Omega^{0:2M}}(\omega^{0:2M}|\mathcal{D})}{P_{\Omega^{0:2M}}(\omega^{0:2M}|\mathcal{D}')}=\frac{P_{\Omega^{0}}(\omega^{0}|\mathcal{D})}{P_{\Omega^{0}}(\omega^{0}|\mathcal{D}')}\cdot\prod_{m=0}^{M}\frac{P_{\Omega^{2m+1}}(\omega^{2m+1}|\omega^{0:2m},\mathcal{D})}{P_{\Omega^{2m+1}}(\omega^{2m+1}|\omega^{0:2m},\mathcal{D}')}\tag{16}$$

where $\omega^{0:t}:=\{\omega^{s}\}_{s=1}^{t}$ and $\Omega^{t}$ denotes the random variable of $\omega^{t}$. Since $\omega^{0}$ is randomly generated and independent of the dataset, we have $P_{\Omega^{0}}(\omega^{0}|\mathcal{D})=P_{\Omega^{0}}(\omega^{0}|\mathcal{D}')$. Consider the $(2m+1)$-th iteration; by the first-order condition, we have:

$$n^{m}=-\nabla F_{i}(\omega^{2m+1};\mathcal{D})-\mu(\omega^{2m+1}-\overline{\omega}^{2m}):=g(\omega^{2m+1};\mathcal{D})$$

Given $\omega^{0:2m}$, $n^{m}$ and $\omega^{2m+1}$ are in bijection, and the relation is captured by the one-to-one mapping $g:\mathbb{R}^{d}\to\mathbb{R}^{d}$ defined above. By the Jacobian transformation, we have

$$P_{\Omega^{2m+1}}(\omega^{2m+1}|\omega^{0:2m},\mathcal{D})=P_{N^{m}}(g(\omega^{2m+1};\mathcal{D}))\cdot|\operatorname*{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D})))|$$

Therefore,

$$\frac{P_{\Omega^{2m+1}}(\omega^{2m+1}|\omega^{0:2m},\mathcal{D})}{P_{\Omega^{2m+1}}(\omega^{2m+1}|\omega^{0:2m},\mathcal{D}')}=\frac{P_{N^{m}}(g(\omega^{2m+1};\mathcal{D}))}{P_{N^{m}}(g(\omega^{2m+1};\mathcal{D}'))}\cdot\frac{|\mathrm{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D})))|}{|\mathrm{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D}')))|}$$

Let $n^{m}:=g(\omega^{2m+1};\mathcal{D})$ and $n^{m'}:=g(\omega^{2m+1};\mathcal{D}')$ be the noise vectors that result in the output $\omega^{2m+1}$ under the neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$, respectively. WLOG, let $d_{1}\in\mathcal{D}$ and $d_{1}'\in\mathcal{D}'$ be the data points in the two datasets that are different, and $\mathcal{D}\setminus d_{1}=\mathcal{D}'\setminus d_{1}'$.
Because the noise vector satisfies $N^{m}\sim\exp(-\alpha^{m}||n^{m}||)$, we have

$$\frac{P_{N^{m}}(g(\omega^{2m+1};\mathcal{D}))}{P_{N^{m}}(g(\omega^{2m+1};\mathcal{D}'))}\leq\exp(\alpha^{m}||n^{m}-n^{m'}||)=\exp(\alpha^{m}||\nabla F_{i}(\omega^{2m+1};\mathcal{D}')-\nabla F_{i}(\omega^{2m+1};\mathcal{D})||)$$
$$=\exp\left(\frac{\alpha^{m}}{|\mathcal{D}|}||\nabla F_{i}(\omega^{2m+1};d_{1})-\nabla F_{i}(\omega^{2m+1};d_{1}')||\right)\leq\exp\left(\frac{2\alpha^{m}u_{1}}{|\mathcal{D}|}\right)\tag{17}$$

The Jacobian matrix is

$$\mathbf{J}(g(\omega^{2m+1};\mathcal{D}))=-\nabla^{2}F_{i}(\omega^{2m+1};\mathcal{D})-\mu\mathbf{I}_{d}:=A\tag{18}$$

Further define the matrix

$$A_{\Delta}=\mathbf{J}(g(\omega^{2m+1};\mathcal{D}'))-A=\frac{1}{|\mathcal{D}|}\Big(\nabla^{2}F_{i}(\omega^{2m+1};d_{1})-\nabla^{2}F_{i}(\omega^{2m+1};d_{1}')\Big)$$

Then

$$\frac{|\mathrm{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D})))|}{|\mathrm{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D}')))|}=\frac{|\mathrm{det}(A)|}{|\mathrm{det}(A_{\Delta}+A)|}=\frac{1}{|\mathrm{det}(I+A^{-1}A_{\Delta})|}=\frac{1}{|\prod_{k=1}^{r}(1+\lambda_{k}(A^{-1}A_{\Delta}))|}$$

where $\lambda_{k}(A^{-1}A_{\Delta})$ denotes the $k$-th largest eigenvalue of the matrix $A^{-1}A_{\Delta}$. Under generalized linear models, $A_{\Delta}$ has rank at most 2. Because $-\frac{u_{2}}{|\mathcal{D}|\mu}\leq\lambda_{k}(A^{-1}A_{\Delta})\leq\frac{u_{2}}{|\mathcal{D}|\mu}$ and $\mu$, $u_{2}$, $|\mathcal{D}|$ satisfy $\frac{u_{2}}{|\mathcal{D}|\mu}\leq0.5$, we have

$$\frac{|\mathrm{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D})))|}{|\mathrm{det}(\mathbf{J}(g(\omega^{2m+1};\mathcal{D}')))|}\leq\frac{1}{|1-\frac{u_{2}}{|\mathcal{D}|\mu}|^{2}}=\exp\Big(-2\ln\Big(1-\frac{u_{2}}{|\mathcal{D}|\mu}\Big)\Big)\leq\exp\Big(\frac{2.8u_{2}}{|\mathcal{D}|\mu}\Big)\tag{19}$$

where the last inequality holds because $-\ln(1-x)<1.4x$, $\forall x\in[0,0.5]$. Combining Eqn. (16), (19) and (17), we have

$$\frac{P_{\Omega^{0:2M}}(\omega^{0:2M}|\mathcal{D})}{P_{\Omega^{0:2M}}(\omega^{0:2M}|\mathcal{D}')}\leq\prod_{m=0}^{M}\exp\left(\frac{2\alpha^{m}u_{1}}{|\mathcal{D}|}\right)\cdot\exp\left(\frac{2.8u_{2}}{|\mathcal{D}|\mu}\right)=\exp\left(\sum_{m=0}^{M}\frac{2\alpha^{m}u_{1}\mu+2.8u_{2}}{|\mathcal{D}|\mu}\right)$$

Theorem 6.3 is proved. $\square$

## F Experiments

## F.1 Details Of Datasets

Table 2: Details of the datasets. Numbers in parentheses represent the amount of test data. All numbers are rounded to integers.

| Dataset | Samples | # of devices | Samples per device (mean) | Samples per device (stdev) |
| --- | --- | --- | --- | --- |
| Syn(iid) | 6726 (683) | 30 | 224 | 166 |
| Syn(0,0) | 13791 (1395) | 30 | 460 | 841 |
| Syn(0.5,0.5) | 8036 (818) | 30 | 268 | 410 |
| Syn(1,1) | 10493 (1063) | 30 | 350 | 586 |
| FEMNIST | 16421 (1924) | 50 | 328 | 273 |
| Sent140 | 32299 (8484) | 52 | 621 | 105 |

Synthetic. The synthetic data is generated using the same method as in Li et al. (2020). We briefly describe the generating steps here. For each device $k$, $y_k$ is computed from a softmax function: $y_k = \operatorname{argmax}(\operatorname{softmax}(W_k x_k + b_k))$. $W_k$ and $b_k$ are drawn from the same Gaussian distribution with mean $u_k$ and variance 1, where $u_k \sim \mathcal{N}(0, \beta)$. $x_k \sim \mathcal{N}(v_k, \Sigma)$, where $v_k$ is drawn from a Gaussian distribution with mean $B_k \sim \mathcal{N}(0, \gamma)$ and variance 1, and $\Sigma$ is diagonal with $\Sigma_{j,j} = j^{-1.2}$. In such a setting, $\beta$ controls how much the local models differ from each other, and $\gamma$ controls how much the local data at each device differs from that of the other devices. In our experiments, we take $k = 30$, $x \in \mathbb{R}^{20}$, $W \in \mathbb{R}^{10\times20}$, and $b \in \mathbb{R}^{10}$. We generate 4 datasets in total: Syn(iid), Syn(0,0) with $\beta = 0$ and $\gamma = 0$, Syn(0.5,0.5) with $\beta = 0.5$ and $\gamma = 0.5$, and Syn(1,1) with $\beta = 1$ and $\gamma = 1$.
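To make the generation recipe above concrete, the following is a minimal NumPy sketch of one synthetic device's data. It is our illustrative reconstruction, not the paper's released code: the function name and per-device sample count are ours, and we treat β and γ as the variances of the corresponding Gaussians.

```python
import numpy as np

def gen_device_data(beta, gamma, n_samples=200, d=20, n_classes=10, rng=None):
    """Sample one device's (x, y) pairs following the Syn(beta, gamma) recipe."""
    rng = np.random.default_rng() if rng is None else rng
    # Model heterogeneity: u_k ~ N(0, beta); W_k, b_k ~ N(u_k, 1)
    u_k = rng.normal(0.0, np.sqrt(beta)) if beta > 0 else 0.0
    W_k = rng.normal(u_k, 1.0, size=(n_classes, d))
    b_k = rng.normal(u_k, 1.0, size=n_classes)
    # Data heterogeneity: B_k ~ N(0, gamma); v_k ~ N(B_k, 1); Sigma_jj = j^{-1.2}
    B_k = rng.normal(0.0, np.sqrt(gamma)) if gamma > 0 else 0.0
    v_k = rng.normal(B_k, 1.0, size=d)
    sigma = np.diag(np.arange(1, d + 1) ** -1.2)
    x = rng.multivariate_normal(v_k, sigma, size=n_samples)
    # Labels via argmax of the logits (softmax is monotone, so this equals
    # argmax(softmax(W_k x + b_k)))
    y = np.argmax(x @ W_k.T + b_k, axis=1)
    return x, y

x, y = gen_device_data(beta=0.5, gamma=0.5)
```

Repeating this for each of the 30 devices with shared (β, γ) yields one Syn(β, γ) dataset; Syn(0,0) makes all devices draw from identical model and data distributions.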
In the output perturbation experiments, we set σ to 1.0 for the baseline methods and to 0.8 for the Upcycled baselines, ensuring that the privacy budget ϵ for the baselines is always greater than that for the Upcycled baselines (e.g., the baseline ϵ̄ = 0.773 and the Upcycled baseline ϵ̄ = 0.683 for Syn(iid)). In the objective perturbation experiments, we set α to 10 for the baselines and 20 for the Upcycled baselines to achieve similar levels of information leakage, while still maintaining that ϵ for the Upcycled baselines is less than for the others. This constraint is maintained across all experiments.

FEMNIST: Similar to Li et al. (2020), we subsample 10 lowercase characters ('a'–'j') from EMNIST (Cohen et al., 2017) and distribute 5 classes to each device. There are 50 devices in total. The input is a 28×28 image. In the privacy experiments, σ is set to 0.27 for the baselines and 0.2 for the Upcycled baselines, and α is set to 100 for the baselines and 200 for the Upcycled baselines, respectively.

Sent140: A text sentiment analysis task on tweets (Go et al., 2009). The input is a sequence of length 25 and the output is the probabilities of 2 classes. Here, σ is set to 0.27 for the baselines and 0.2 for the Upcycled baselines, and α is set to 15 and 30, respectively.

A brief summary of the datasets can be found in Table 2.

## F.2 Details Of Algorithm Fedprox

Here we present the detailed algorithm of FedProx:

Algorithm 2 FedProx (Li et al., 2020)

1: **Input:** µ > 0, {Di}i∈I, ω 0

2: **for** t = 1 to T **do**

3: The central server sends the current global model parameter ω^t to all the clients.

4: A subset of clients become active, and each active client updates its local model by finding an (approximate) minimizer of its local loss function: ω_i^{t+1} = arg min_ω F_i(ω; D_i) + (µ/2)||ω − ω^t||².

5: Each client sends its local model to the server.
6: The central server updates the global model by aggregating all local models.

7: **end for**

## F.3 Convergence On All Datasets

Here we present the convergence and the accuracy on the testing dataset in the final iteration for all datasets under the 90% straggler and 30% straggler scenarios.

## F.3.1 90% Straggler

![26_image_0.png](26_image_0.png)

![26_image_1.png](26_image_1.png)

(f) Sent140

Figure 5: Convergence of Upcycled-FL and regular FL methods with approximately the same training time under 90% straggler.

## F.3.2 30% Straggler

We also conduct an experiment with a 30% straggler scenario to examine the convergence of our method. Table 3 demonstrates that our conclusions remain valid even when the straggler rate is relatively low.

Table 3: Average accuracy with 30% straggler on the testing dataset over four runs; the experimental setting is the same as in Table 1.

| Method | Syn(iid) | Syn(0,0) | Syn(0.5,0.5) | Syn(1,1) | FEMNIST | Sent140 |
| --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 98.50±0.18 | 80.30±1.01 | 82.60±0.72 | 78.81±2.10 | 83.06±1.30 | 74.29±0.12 |
| Upcycled-FedAvg | 99.30±0.25 | 80.77±1.09 | 83.09±0.14 | 80.74±2.71 | 84.41±0.09 | 75.18±0.50 |
| FedAvgM | 98.50±0.07 | 81.08±2.33 | 82.89±0.12 | 80.64±2.64 | 81.60±4.57 | 74.61±2.28 |
| Upcycled-FedAvgM | 99.12±0.17 | 82.20±1.80 | 82.68±0.19 | 80.43±2.51 | 82.31±3.17 | 74.16±2.84 |
| FedProx | 97.22±0.24 | 80.88±0.67 | 82.31±1.05 | 80.13±2.76 | 81.57±1.46 | 74.23±0.31 |
| Upcycled-FedProx | 98.24±0.21 | 81.52±0.82 | 83.05±0.14 | 81.73±0.84 | 82.85±1.60 | 74.83±0.14 |
| Scaffold | 98.28±0.14 | 79.95±1.67 | 79.54±0.72 | 72.27±4.68 | 76.85±5.55 | 75.76±0.21 |
| Upcycled-Scaffold | 99.52±0.18 | 79.46±1.62 | 81.83±2.08 | 73.34±1.13 | 77.69±2.39 | 77.01±0.30 |
| FedDyn | 98.17±0.08 | 82.06±0.61 | 80.97±1.04 | 78.65±3.83 | 84.39±0.95 | 75.88±0.11 |
| Upcycled-FedDyn | 98.57±0.22 | 82.08±0.68 | 82.80±1.42 | 81.11±2.64 | 85.34±2.47 | 75.90±0.33 |
| pFedme | 96.45±0.14 | 91.05±0.60 | 89.64±1.04 | 92.88±0.67 | 76.15±3.08 | 72.87±0.26 |
| Upcycled-pFedme | 96.89±0.27 | 91.06±0.61 | 89.40±1.13 | 92.64±0.90 | 77.36±3.59 | 73.48±0.49 |
| FedYogi | 99.38±0.28 | 81.06±2.53 | 80.65±1.32 | 79.14±1.63 | 79.98±7.83 | 77.63±0.29 |
| Upcycled-FedYogi | 99.60±0.14 | 81.77±2.06 | 81.72±1.02 | 80.32±0.76 | 81.96±4.28 | 77.78±0.85 |

## F.3.3 30% Straggler

![28_image_0.png](28_image_0.png)

Figure 6: Convergence of Upcycled-FL and regular methods with approximately the same training time under 30% straggler.

## F.4 Additional Privacy Experiments

In addition to the privacy experiments on synthetic datasets, we also report output perturbation and objective perturbation results on the real-world datasets FEMNIST and Sent140 in Figures 7 and 8. Due to limited computational resources, we conduct these experiments using a subset of the previously established baselines.

![29_image_0.png](29_image_0.png)

Figure 7: Comparison of private Upcycled-FL and private FL methods using **output perturbation** (left) and **objective perturbation** (right) on FEMNIST.

![29_image_1.png](29_image_1.png)

Figure 8: Comparison of private Upcycled-FL and private FL methods using **output perturbation** (left) and **objective perturbation** (right) on Sent140.

## F.5 Experiments On Full-Scale Lowercase Letters Of FEMNIST

We conduct a convergence experiment with full-scale lowercase letters (26 classes) from the FEMNIST dataset, as shown in Table 4. We also conduct privacy experiments comparing our method with FedProx, as shown in Figure 9.
Table 4: Average accuracy on the large-scale FEMNIST dataset (lowercase letters).

| Method | 30% straggler | 90% straggler |
| --- | --- | --- |
| FedAvg | 90.90 ± 4.55 | 88.50 ± 5.07 |
| Upcycled-FedAvg | 91.26 ± 4.06 | 88.69 ± 4.94 |
| FedAvgM | 91.02 ± 4.47 | 89.13 ± 4.35 |
| Upcycled-FedAvgM | 91.05 ± 4.36 | 89.28 ± 4.05 |
| FedProx | 91.91 ± 0.38 | 89.02 ± 4.53 |
| Upcycled-FedProx | 92.08 ± 0.72 | 89.44 ± 4.30 |
| Scaffold | 89.69 ± 3.92 | 89.11 ± 3.74 |
| Upcycled-Scaffold | 89.77 ± 4.01 | 89.72 ± 3.90 |
| FedDyn | 92.88 ± 0.30 | 91.42 ± 0.24 |
| Upcycled-FedDyn | 93.56 ± 0.30 | 92.64 ± 0.37 |
| pFedme | 81.36 ± 4.63 | 80.72 ± 1.47 |
| Upcycled-pFedme | 85.62 ± 1.43 | 85.14 ± 0.93 |
| FedYogi | 87.64 ± 1.05 | 85.61 ± 0.56 |
| Upcycled-FedYogi | 88.89 ± 1.84 | 86.48 ± 1.30 |

Table 5: Training time for output perturbation experiments on Syn(iid).

| Time/s | FedAvg | FedAvgM | FedProx | Scaffold | FedDyn | pFedme | FedYogi |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 147.50 ± 6.69 | 147.47 ± 7.49 | 139.91 ± 10.81 | 153.88 ± 12.70 | 181.49 ± 17.60 | 467.42 ± 12.87 | 153.53 ± 13.72 |
| Upcycled | 86.44 ± 9.23 | 84.95 ± 3.39 | 80.59 ± 3.29 | 95.09 ± 13.89 | 99.82 ± 3.32 | 259.54 ± 18.27 | 92.82 ± 2.24 |

![30_image_0.png](30_image_0.png)

Figure 9: Comparison of private Upcycled-FedProx and private FedProx using **output perturbation** (left) and **objective perturbation** (right).

## F.6 Training Time

We report the average training time to compare the communication cost in Table 5.
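The privacy budgets quoted in the perturbation experiments follow the output-perturbation bound of Theorem 6.2, ε = 2·sqrt((Mτ²/(2|D|²σ²))·log(1/δ)) + Mτ²/(2|D|²σ²). The sketch below evaluates that closed form numerically; the parameter values are hypothetical placeholders, not the paper's actual M, τ, or |D|.

```python
import math

def output_perturbation_epsilon(M, tau, n, sigma, delta):
    """Privacy budget for M noisy odd-iteration releases (Theorem 6.2 form):
    eps = 2*sqrt(a*log(1/delta)) + a, with a = M*tau^2 / (2*n^2*sigma^2),
    where n = |D| is the local dataset size."""
    a = M * tau**2 / (2 * n**2 * sigma**2)
    return 2 * math.sqrt(a * math.log(1 / delta)) + a

# Halving the number of communicated noisy rounds M shrinks the budget,
# which is the leakage saving Upcycled-FL targets by skipping even rounds.
eps_full = output_perturbation_epsilon(M=100, tau=1.0, n=500, sigma=1.0, delta=1e-5)
eps_half = output_perturbation_epsilon(M=50, tau=1.0, n=500, sigma=1.0, delta=1e-5)
assert eps_half < eps_full
```

The same function also shows the expected trade-off against noise: increasing σ for the baselines, as done in the experiments, lowers their ε.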
Review 1:

Summary: The paper proposes a new federated training method to reduce the communication cost and information leakage. Specifically, the paper proposes to only transfer the updates and merge on the odd iterations, and uses the local model to do the update on the even iterations. The paper uses FedProx as an example to derive the upcycled version. Theoretical analyses are conducted to show the convergence rate of the proposed method and also briefly cover the privacy guarantee if differentially private updates are utilized. The experimental results show the proposed Upcycled-FL could achieve better convergence on both synthetic data and real-world data.

Strengths and Weaknesses:

Pros:
1. The paper is easy to follow and the proposed method is simple.
2. The experimental results show the proposed method is effective.

Cons:
1. The proposed update is unclear to me in the case that a client is selected in an even round but is not selected in the following odd round. Will the model be updated if the client is not selected?
2. I am not an expert in the differential privacy field. However, I believe the information leakage will not be reduced by simply half if only half of the updates need communication. If that assumption were true, the best strategy would be to use as little synchronization as possible, and the proposed method could be quickly adapted to using only 1/3, 1/4, ... of the rounds.
3. The experimental results in Figures 2, 3, 4 seem odd. I can accept that the proposed method would be better in terms of training time; however, it also achieves a better rate in terms of iterations. The proposed method uses less information but achieves a better convergence rate, which does not make sense to me.
4. The experiments on heterogeneous data distributions are not enough. The paper only uses synthetic data to simulate the heterogeneous data distribution case.
However, it would be more convincing if it could be done on real-world data; this is also the approach adopted by a lot of other federated learning methods to show their effectiveness. Partitioning the data with a Dirichlet distribution would be a good way, and the hyperparameter of the Dirichlet distribution could be utilized to control the level of heterogeneity.

Requested Changes:
1. Have a more thorough and formal discussion about the information leakage problem.
2. Explain why half the information is the sweet spot for the update, rather than even fewer synchronization iterations.
3. Experiments on real-world data with Dirichlet-distributed partitions.

Broader Impact Concerns: None.

==================================================

Review 2:

Summary: This paper proposes Upcycled-FL, a simple yet effective method to reduce the communication cost and information leakage. The main idea is to reduce the communication rounds by updating the even rounds with previous updates via a first-order expansion. The main method is provided with convergence analysis, privacy guarantees, and a comparison with multiple baselines.

Strengths and Weaknesses:

Strengths:
1. The paper proposes a simple method that seems to work well, with clear motivation and explanation.
2. The algorithm is supported with convergence guarantees and privacy analysis.
3. The method is compared with a lot of baseline FL algorithms.

Weaknesses:
1. Only two examples are provided in Section 4 (FedAvg and FedProx). The authors should add a general modification description of FL methods.
2. The method could be more general. Instead of only even/odd alternation, can we think about more general update rules? For example, we can change the frequency to 3 or more (i.e., communicate every 3 updates).
3. Line 6 of Algorithm 1 seems like a momentum mechanism or extrapolation. The authors may want to explain the connection to existing optimization theory on this.
4. Assumptions 5.2 and 5.4 are standard but may not be practical.
Could the authors explain how they are satisfied in the experiments?
5. In addition to the convergence and privacy analysis, I think communication cost is another important factor. There should be some analysis of the communication.
6. The main privacy results are in Appendix B, which I think are important and should be discussed in the main text.
7. The main experiment datasets are either synthetic or small. The authors should try larger-scale datasets for federated learning.
8. The improvement after adding Upcycled seems a bit minor. This suggests that Upcycled-FL may not be very effective.
9. Missing baseline: FedYogi (https://arxiv.org/abs/2003.00295).
10. Missing a table comparing the communication cost, although it is partially reflected in Table 1.
11. Privacy experiments are only done on synthetic datasets. What about standard datasets like FEMNIST and Sent140?

Requested Changes: See above.

Broader Impact Concerns: I don't have broader impact concerns.

==================================================

Review 3:

Summary: This paper proposes a Federated Learning (FL) strategy to minimize information leakage. It achieves this by accessing data only in odd rounds, while for even rounds, it updates the model by applying a first-order approximation based on the previous rounds' model. The proposed technique looks simple but effective.

Strengths and Weaknesses:

Strength: The improvement is achieved with a simple technique and can be applied to general FL algorithms.

Weakness: There is a lack of experiments on various datasets, and the improvement achieved by the proposed technique is not significant.

Requested Changes: There are a few comments.
1. Predicting the next update by 'upcycling information' may be less accurate as the data held by each client is non-iid. However, the experimental results seem to show that, on the contrary, there is further improvement. Could you explain the reason why that is?
2. 30% of devices are selected in each iteration.
Does the proposed technique still show better results even when fewer devices are selected?
3. Why upcycle only in even iterations? Isn't it possible to do it at different intervals, like every 3 or 4 iterations? How would accuracy or convergence change in that case?
4. The figures are too difficult to read. Please think about ways to present the experimental results more clearly.
5. "More rlated work is discussed in Appendix 2" on page 2, but there is no Appendix 2.

Broader Impact Concerns: There are no concerns about the ethical implications of the work.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: After reading the paper, I noticed some minor points of improvement I will list below:
- typo: "datatsets"
- I wonder if the $\approx$ in eq. (5) should be $=$? Or if it truly is an approximation, it should be made clear where it comes from.
- "selected clients got updated", I guess missing "which"?
- "we apply our Upcycled-FL method to six" I guess it's "seven" now
- "Femnist" >> "FEMNIST", caption of Table 1
- Privacy guarantees need to be clearly communicated in Fig. 4. I would also suggest mentioning that the training is non-private in Table 1 and Fig. 2
- Report what the $\pm$ in Table 1 denotes. Similarly, what are the error bars depicting in Figs. 2, 3 and 4?

==================================================
# SFT: Sampling-Based Foundational Transformer

Anonymous authors

Paper under double-blind review

## Abstract

The versatility of the self-attention mechanism has earned transformers great success in almost all data modalities, despite their quadratic complexity and training difficulty. To apply transformers across different data modalities, practitioners have to make clever, data-modality-dependent constructions. In this paper, we propose one efficient transformer that can work on multiple data modalities (point cloud, graph, and sequence) and constraints (rotational invariance) without any modifications. The existence of such a model is important, as contemporary foundational modeling requires operability on multiple data sources. For efficiency on a large number of tokens, our model relies on our context-aware sampling-without-replacement mechanism, achieving both linear asymptotic computational complexity and a real inference-time gain. For efficiency in the training convergence rate, and to ease the pain of meticulous hyperparameter tuning, we rely on our newly discovered pseudoconvex formulation of the transformer layer. As a foundational model, we achieve competitive results on many benchmarks while being faster in inference than other, very specialized models. We release our source code in the supplementary materials.

## 1 Introduction

Transformers (Vaswani et al., 2017) have been successfully applied to a wide variety of tasks on many data modalities, namely sequence (Vaswani et al., 2017; Radford et al., 2019; Devlin et al., 2019), vision (Dosovitskiy et al., 2021; Carion et al., 2020; Liu et al., 2023), speech (Gulati et al., 2020; Gong et al., 2021; Koutini et al., 2022), time-series (Liu et al., 2022; Jin et al., 2024), point cloud (Lee et al., 2019; Zhao et al., 2021; Yang et al., 2019), and graph (Yun et al., 2019; Rampášek et al., 2022; Shirzad et al., 2023; Ngo et al., 2023).
Their success can be attributed to the versatility and parallelizability of self-attention modules in long-range modeling: transformers can transform the whole sequence simultaneously, like convolutional neural networks (CNNs) but with an unlimited receptive field. This alone allows the model to scale up to very large sizes, enabling large language modeling. While versatile and parallelizable, transformers have some limitations: heavy architectural modifications when adapting to other data modalities, inherent quadratic complexity that limits scalability, and certain training difficulties. The first limitation has become pressing as recent developments in deep learning move towards foundational modeling. Large foundational models need more data beyond text (e.g., SQL databases, knowledge graphs, etc.), and end-users want to input different data modalities into these models just as in a normal chat between humans (e.g., "Look at my cute dog <dog.png> with her puppies <puppies.png>."). The exceptionally high-quality and abundant (though often private) knowledge-graph data, and the relationships between different data modalities, form an interesting research direction. The very first step towards such models is designing one single architecture for a large number of data modalities and evaluating its efficacy on each of them. In this work, we focus on solving the mentioned limitations with our new relative positional encoding scheme, which works (adequately) for many kinds of data modalities. As our research resources are limited, we restrict our scope to two common archetypes of relationships between tokens: **dense low rank** (point cloud) and **sparse high rank** (sequence and graph). Graphs also pose a different challenge: their structural patterns are much more diverse than those of sequences. For training and evaluation efficiency, we lean towards developing sparse global attention and some tweaks that trivialize the training process.
We are interested in sparse global attention because its simplicity allows applicability to (almost) all data modalities and transformer variants, a subquadratic asymptotic complexity, and a **lower practical runtime**. We notice that most sparse attention mechanisms either utilize a fixed sparse pattern (Beltagy et al., 2020; Child et al., 2019), rely on random sparse patterns (Shirzad et al., 2023), or combine the two (Zaheer et al., 2020). Our initial idea is to use a differentiable sampling method to learn a subsampling pattern. Based on Gumbel-Softmax reparameterization, we design sampling **without** replacement through neural importance-score computation. This way, the sparse global self-attention can learn to attend to important tokens and is usable on non-sequential data structures like point clouds and graphs. The attention non-linearity we design is derived from the maxout network (Goodfellow et al., 2013). We combine it with a ReLU-based probability function instead of the Softmax function previously seen in transformers. The combination ultimately makes each of our transformer cells pseudoconvex in its parameters while scaling linearly. This greatly alleviates the hyperparameter-tuning pain of transformers. We further show that our maxout attention non-linearity combined with the ReLU-based probability function allows better relative information aggregation. To demonstrate the effectiveness of relative information aggregation via positional encoding, we apply it to rotation-invariant point cloud tasks, with coordinate information masked out of the input sequence. In this setting, only transformed Euclidean distance is introduced to the attention matrix. This technique (the ReLU-based probability function and the convex attention non-linearity) can be applied to any other self-attention-based model to increase model expressivity. In this paper, we conduct extensive experiments on point cloud, graph, and sequence datasets.
For point clouds, we benchmark the model via classification (Wu et al., 2015) and semantic segmentation (Chang et al., 2015; Armeni et al., 2016) tasks, in both the conventional setting and rotation-insensitive ones. For graphs, we evaluate our method on two peptide datasets (Singh et al., 2015) and one computer vision dataset (Everingham et al., 2010). For sequences, we use the Long Range Arena benchmark (Tay et al., 2021), featuring four classification tasks and one retrieval task on these datasets: ListOps (Nangia & Bowman, 2018), IMDb review (Maas et al., 2011), ACL Anthology Network (Radev et al., 2009), grayscaled CIFAR-10 (Krizhevsky & Hinton, 2009), and Pathfinder (Linsley et al., 2018). We achieve competitive results on these benchmarks as a foundational model working on both point clouds and sequences, compared to the specialized models, while featuring a faster empirical runtime. In short, our contributions can be summarized as follows:

- Efficient sparse self-attention with linear complexity in the number of tokens,
- A differentiable and parallelizable sampling-without-replacement method that is optimization-friendly,
- A pseudoconvex transformer layer that is stable during training, supported by theoretical gradient analysis,
- A zero-additional-computing-cost rotation-invariant add-on for point cloud transformers,
- Outperformance of many well-known methods on standard benchmarks.

We provide the detailed theoretical analysis in Appendix B.

## 2 Related Works

Our work provides an efficient transformer variant that works on three data modalities: point clouds, graphs, and sequences; therefore, this section provides context on the relevant literature on efficient transformers, point cloud learning, graph learning, and sequential learning.

Efficient Transformers. The scalability of transformers is hindered by the quadratic complexity of self-attention modules.
Hence, numerous works have delved into sparsity (Beltagy et al., 2020; Ainslie et al., 2020; Zaheer et al., 2020; Shirzad et al., 2023), low-rank projection (Wang et al., 2020), hashing-based methods (Kitaev et al., 2020), and kernel-based methods (Choromanski et al., 2021; Katharopoulos et al., 2020), trading some expressivity of transformers for a sub-quadratic construction. Our review of contemporary efficient transformers is influenced by the survey of Tay et al. (2022). Linformer (Wang et al., 2020) is based on the low-rank assumption of self-attention matrices, realized via projection matrices. As one of the earliest works on efficient transformers, it suffers from a fixed sequence length and limited scalability. Subsequent work like the Linear Transformer (Katharopoulos et al., 2020) replaces the similarity kernel, previously a softmax of a dot-product score (or any other kernel), with one made of a linear probability cage. The modification frees them from recomputing the product of the key and value matrices, reducing the attention computational cost to O(N), where N is the number of tokens. Methods like (Beltagy et al., 2020) utilize three choices of fixed sparsity patterns: sliding window, dilated sliding window, and global+sliding window. Similarly, (Zaheer et al., 2020) uses a combination of different sparse patterns at once: random attention, window attention, and global attention. On graphs, (Shirzad et al., 2023) also combines different sparsity patterns, but constrains the randomly generated one to be an expander graph. The hard-coded sparsity effectively linearizes the complexity of transformers and is empirically fast. However, intuitively, no pattern or finite set of patterns fits all problems; therefore, there is also a line of work dedicated to differentiable sparsity.
(Roy et al., 2021) drew a connection between the MIPS problem (Maximum Inner Product Search) and nearest-neighbor search when both the query and key vectors are unit vectors. With their online k-means algorithm, their method attends each query vector to every key vector belonging to the same cluster, bringing the complexity down to O(n^1.5), where n is the number of tokens. Empirically, however, their method is not fast, as it needs a k-means construction and extensive specialized implementations of sparse operators. We see that the literature currently lacks a fast learnable sparsity, and this is what we provide, based on a sampling method. The work of (Lee et al., 2019) is most similar to our global attention scheme, as it is also based on sampling; their method stacks multi-head attention modules on one another to perform sampling, whereas our method learns a sparsity pattern for global attention, which has proven more robust.

Point Clouds. Deep learning has been applied to point-cloud-related problems, including 3-D shape classification, 3-D point cloud segmentation, and others. Our literature review on point cloud methods is inspired by (Guo et al., 2021; Zhang et al., 2023; Gerken et al., 2023) and other contemporary literature, covering a wide range of geometric deep learning. Data-structure-wise, point cloud works can be classified into three types: point/mesh, voxel, and multi-view, each handled by different deep learning methods. Multi-view-based methods like (Su et al., 2015; Yu et al., 2018) process a point cloud via a sequence of rendered 2-D images of the object from different views, enjoying the early breakthroughs of deep-learning-based computer vision (He et al., 2016). Voxel-based methods (Maturana & Scherer, 2015; Qi et al., 2016) operate on a 3-D grid representing the object, also using convolutional neural networks. Voxel-based methods are compute-intensive: the number of voxels grows cubically with the grid resolution.
Therefore, numerous methods like (Riegler et al., 2017) are dedicated to sparsifying the voxel grid. OctNet (Riegler et al., 2017) sparsifies the voxel grid using an octree to allocate more memory and computation to denser areas of the grid. In contrast, point/mesh-based methods like the works of (Charles et al., 2017; Qi et al., 2017; Zhao et al., 2021) use unprocessed inputs (3-D coordinates with/without surface normal vectors). These works often share several components: local permutation-invariant point operators, global feature aggregation, and point downsampling operators. Method-wise, point cloud works can be classified into two types: local-aggregation-based methods and global-aggregation-based methods. Local aggregators in the point cloud literature usually feature convolutional layers, while global aggregators typically rely on point-to-point self-attention. Convolution-powered local aggregators can take many forms depending on the data structure: 2-D convolutional layers (Su et al., 2015; Yu et al., 2018) for multi-view, 3-D convolutional layers (Maturana & Scherer, 2015; Qi et al., 2016) for voxel grids, graph convolutional layers (Shi & Rajkumar, 2020) for point graphs (usually constructed using kNN or a distance ball), and kNN-powered local feature aggregators (or point convolution operators) (Qi et al., 2017; Xu et al., 2018). PointConv (Wu et al., 2019) and Point Transformer (Zhao et al., 2021) also feature point interpolation operators as an aggregation method. Global feature aggregation in the point cloud literature is dominated by self-attention (Vaswani et al., 2017), popularized by its success in modeling long-range dependencies in sequence processing. Point Transformer (Zhao et al., 2021) is one of the works featuring long-range dependency modeling.
It also takes into account each point's locality via nearest-neighbor max pooling, which allows the model to operate on higher-level features as they progress through deeper layers.

SO(3) Point Clouds. As point clouds exist in 3-D Euclidean space, rotating an object should not change what the object is. This property, the rotational invariance of some point cloud models, is an active area of research. Early research relied on data augmentation to achieve rotational invariance; however, covering all possible rotations is not achievable. Subsequent works rely on modules satisfying one of two characteristics: rotational invariance (Sun et al., 2019; Li et al., 2022; 2023) or rotational equivariance (Chen et al., 2021; Anderson et al., 2019; Deng et al., 2021; Kaba et al., 2023). Equivariance-based methods for point clouds, as a generalization of invariance-based ones, feature diverse mechanisms satisfying the equivariance constraint f(r(x, α)) = r(f(x), α), where f, r, and α are the equivariant mapping, the rotation function, and the rotation angle, respectively. As an early work, similar to the existing equivariance literature on other data structures like images, (Chen et al., 2021) develops a spherical point convolution operation. (Anderson et al., 2019; Thomas et al., 2018; Cohen et al., 2018) propose learning in the Fourier domain by utilizing spherical tensors and the Fast Fourier Transform on the SO(3) group. This allows them to retain perfect information about the whole point cloud at each layer, sequentially, without a complicated multi-path model structure. (Deng et al., 2021) extends each neural node to a 3-D representation, with different linear projections and non-linearities. Their method is elegant and simple enough to apply to any other point cloud framework. It sees performance degradation on (Charles et al., 2017) but retains full performance on (Su et al., 2015).
In the rotational-invariance literature, Euclidean distance is the most commonly used feature (Jing et al., 2021; Maron et al., 2019; Li et al., 2023), for good reason: it is simple, and it contains all the geometric information of any point cloud (Satorras et al., 2021). (Li et al., 2023) further shows that a GNN is incapable of distinguishing certain point cloud cases given only the distance matrix, and proves that a k-tuple distance is capable of capturing any k-order geometric features. In line with other GNN-based rotational-invariance methods, we show that rotational invariance via Euclidean features can be efficiently applied to transformers in the form of relative positional encodings, with our architectural changes. In addition, we provide an empirical analysis of our transformer's performance and efficiency on popular point cloud, sequence, and graph benchmarks with relative positional encodings.

Graphs. Graphs are used to represent relational data in a variety of use cases, e.g., small atomic structures (Wu et al., 2018; Townshend et al., 2022) for drug discovery, knowledge graphs (Hogan et al., 2021; Ko et al., 2021; Choudhary et al., 2021) for reliable information retrieval, etc. Modern graph pattern recognition relies on deep learning, following the emergence of the Message Passing Neural Network (MPNN) (Gilmer et al., 2017), a learning paradigm in which the computation of each node feature relies solely on its locality. The Graph Convolutional Network (GCN) (Scarselli et al., 2009; Kipf & Welling, 2017) is the earliest work of this kind, where each node aggregates the features of adjacent nodes using a permutation-invariant operator. (Hy et al., 2018; 2019) extend the GCN model to higher orders while respecting permutation symmetry by proposing the use of covariant tensor operators. The Graph Attention Network (GAT) (Veličković et al., 2018) improves on GCN by introducing a mechanism to select important nodes.
These two classical graph neural networks suffer from overfocusing on locality, which prevents them from drawing meaningful long-range relationships (Dwivedi et al., 2022b; Ngo et al., 2023; Trang et al., 2024). Researchers have designed two mechanisms to tackle this within MPNN: virtual nodes (Pham et al., 2017) and k-hop neighborhoods (Nikolentzos et al., 2020). Virtual nodes are nodes that connect to all other nodes. This design provides information shortcuts between any two distant nodes, allowing global information to be propagated throughout all graph nodes. This concept was later explored and proven to be equivalent to low-rank-approximation-based transformer networks (Cai et al., 2023). A k-hop MPNN aggregates information not only from the adjacent nodes but also from the set of nodes reachable from the central node by a path of at most k edges. This increases the local receptive field of each graph node, which alleviates the mentioned problem, similar to the large-convolutional-kernel trend in computer vision (Ding et al., 2022). Transformers (Vaswani et al., 2017), operating on the complete-graph domain (Bronstein et al., 2021), overcome the mentioned problem purely through architectural design. While transformers are useful in extremely large-scale learning tasks like language modeling, their performance lags behind traditional graph transformers on graph datasets, which often have few graphs with large numbers of nodes. Therefore, a considerable body of literature is devoted to crafting better handcrafted encodings as graph inductive biases (Ying et al., 2021; Dwivedi et al., 2022a; Rampášek et al., 2022). Graphormer (Ying et al., 2021) is one of the first graph transformers. It introduces several types of encodings: centrality encoding (how important each node is) and spatial encoding (how important each node's locality is), which resemble how humans infer useful information from graphs.
GraphGPS (Rampášek et al., 2022) designs a general pipeline for graph modeling by combining many prominent techniques: positional encodings (local PE, global PE, and relative PE), structural encodings, and multiple graph block types (MPNN layers (Brossard et al., 2021; Bresson & Laurent, 2018) and attention layers (Vaswani et al., 2017; Choromanski et al., 2021; Zaheer et al., 2020)). Furthermore, (Ngo et al., 2023) proposes a multiscale version of the graph transformer that learns to construct a hierarchy of coarsened graphs, together with a new form of positional encoding via the graph wavelet transform. Our work focuses more on the scalability of the quadratic-complexity graph transformer and the representational power of our relative information aggregation. This problem has recently been addressed by (Shirzad et al., 2023), where the attention mechanism is sparsified by virtual nodes and expander graphs. Learnable sparsity for sub-quadratic graph transformers is a relatively new technique, recently explored by (Trang et al., 2024). While the idea is new and intriguing, it suffers from a low convergence rate caused by the stacked Gumbel-Softmax sampling layers used to smooth the discrete hierarchy. Here, we design a more optimization-friendly differentiable sampler that can be stacked, and a much simpler sparsity pattern (sampling versus hierarchy).

## 3 Background

Variables (scalars, vectors, matrices, and tensors) are all implemented as **tensors**. In practice, we use many tensor operators, namely matrix multiplication, tensor concatenation, tensor stacking, common activation functions (ReLU, Softplus, exp), slicing, sorting along a dimension, gather, transpose, and permute. These operators are documented in common deep learning frameworks such as PyTorch and TensorFlow. Nevertheless, we try to limit the usage of tensors in our mathematical equations and resort to scalars, vectors, and matrices whenever possible.
In all of our equations, there are two occasions where we use the Stack function: to define the sampling operator and to illustrate learnable sampling with replacement; therefore, we provide the definition of the Stack function here. The Stack function receives n d-dimensional vectors and returns a matrix of shape n × d. The elements of the resulting matrix are expressed as follows:

$$\operatorname{Stack}(v_1, v_2, \ldots, v_n)[i, j] = v_i[j]. \tag{1}$$

The self-attention mechanism is a core component of transformer models for capturing long-range dependencies. Let X = {x_1, x_2, ..., x_n} ∈ R^{n×d} represent a sequence of n d-dimensional vectors. The query, key, and value matrices Q, K, and V are created using three learnable linear transformations Linear(X): R^d → R^d. Then, using Q, K, and V, the mechanism creates another sequence by aggregating tokens satisfying certain criteria, expressed as follows:

$$\operatorname{Attn}(\mathbf{X}) = \operatorname{Prob}\left(\operatorname{Score}(\mathbf{Q}, \mathbf{K})\right)\mathbf{V}, \tag{2}$$

where Prob and Score are the attention non-linearity and attention score functions, respectively. In practice, Q, K, and V are all tensors with batch and attention-head dimensions; however, they are processed the same way concurrently across all batch elements and attention heads. Therefore, it is safe to formulate a single attention head, with Q, K, V, and Prob(Score(Q, K)) as matrices. Q, K, V are matrices of size n × d, where n and d in the following equations denote the number of tokens and the number of token dimensions, respectively. The functions Prob: R^{n×n} → R^{n×n} and Score: R^{n×d} × R^{n×d} → R^{n×n} can be any matrix-to-matrix transformations (linear or non-linear). This definition generalizes what a transformer network can be.
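To make the generalized formulation concrete, here is a minimal pure-Python sketch of the Attn(X) = Prob(Score(Q, K)) V factorization with pluggable Prob and Score functions, instantiated with the vanilla scaled dot-product and softmax. The helper names (`matmul`, `attention`, etc.) are illustrative, not from the paper's code, and batching and attention heads are omitted:

```python
import math

def matmul(A, B):
    # Naive matrix product for small illustrative examples.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def score_dot(Q, K):
    # Scaled dot-product score: Score(Q, K) = Q K^T / sqrt(d).
    d = len(Q[0])
    return [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K] for qr in Q]

def prob_softmax(S):
    # Row-wise softmax: the vanilla choice of Prob.
    out = []
    for row in S:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        z = sum(e)
        out.append([v / z for v in e])
    return out

def attention(Q, K, V, prob=prob_softmax, score=score_dot):
    # Attn = Prob(Score(Q, K)) V, with Prob and Score pluggable.
    return matmul(prob(score(Q, K)), V)
```

Swapping `prob` or `score` for other matrix-to-matrix transformations yields the transformer variants discussed in the surrounding text.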
The earliest transformer (Vaswani et al., 2017) uses the softmax and scaled dot-product functions as Prob and Score. Replacements of Prob or Score relative to the vanilla transformer (Katharopoulos et al., 2020; So et al., 2021; Wortsman et al., 2023; Shen et al., 2023) have been created to address quadratic complexity, over-centralized attention, and ease of training, or are motivated purely by empirical results. In our work, we introduce changes to both Prob and Score to address the mentioned problems and further improve the theoretical expressivity of transformers with regard to relative positional encodings.

Figure 1: Our proposed sampling-based transformer variant. (PreLN) indicates LayerNorm in the Pre-LN transformer variant, otherwise identity; (PostLN) indicates LayerNorm in the Post-LN transformer variant, otherwise identity. Red (our model) and blue (vanilla transformer) highlight the differences between our model and the vanilla transformer (with attention bias). The sampler is described in Figure 2. Hyperparameters: b (batch size), s (number of tokens), d_model (number of token dimensions), d_relative (number of dimensions of relative information), h (number of attention heads).

## 4 Method

## 4.1 Overview

Our method is a composition of a neural sampling-without-replacement method, modifications to the self-attention mechanism with rigorous theoretical justification, and supporting layers. Where Linformer (Wang et al., 2020) applies a low-rank approximation to the key and value vectors, we instead sample the key and value vectors, as elaborated in Section 4.2.
Next, Section 4.3 describes our easy-to-optimize attention score combined with relative positional encodings via a leaky probability formulation. Finally, the layer ends with a feed-forward network transforming the vector representations of the tokens. The method is summarized in Figure 1.

## 4.2 Differentiable Sampling Without Replacement

We formally define the operator that samples k of n real m-dimensional vectors as a function Sam(X): R^{n×m} → R^{k×m}. Sampling is a linear transformation: the sampled vectors are given by the matrix product PX between a selection matrix P and the input token matrix X. However, not every linear operation is a sampling operation; P must satisfy certain properties. Specifically, for the linear transformation to be an operator that samples k of n real m-dimensional vectors, P must be representable as a stack of k one-hot vectors, defined as follows:

$$\mathbf{P} = \operatorname{Stack}_{j=1}^{k}(\operatorname{onehot}(i_j, n), \operatorname{dim} = 0) = \begin{bmatrix} \operatorname{onehot}(i_0, n) \\ \operatorname{onehot}(i_1, n) \\ \vdots \\ \operatorname{onehot}(i_{k-1}, n) \end{bmatrix}. \tag{3}$$

In the above equation, (i_0, i_1, ..., i_{k−1}) can be thought of as indices into the token matrix. In line with conventions from digital circuits (Harris & Harris, 2012), machine learning, and its programming languages (Paszke et al., 2019), we define onehot as a function that outputs a vector (one-dimensional tensor) of binary values in {0, 1}. Our onehot function takes two arguments: an index i and the length n of the output vector. The i-th value of the output vector is one and the rest are zeros. For clarity, we give an example (indices counting from 1): onehot(2, 3) = (0, 1, 0). There are two types of simple sampling mechanisms: sampling with replacement and sampling without replacement.
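The selection-matrix view of sampling in Equation 3 can be sketched in a few lines of plain Python; the helper names below (`selection_matrix`, `sample`) are illustrative, not from the paper's code:

```python
def onehot(i, n):
    # onehot(i, n): length-n binary vector with a one at index i (1-indexed, as in the text).
    return [1 if j == i - 1 else 0 for j in range(n)]

def selection_matrix(indices, n):
    # P: one onehot row per sampled index (Equation 3). For sampling
    # without replacement, the indices must be pairwise distinct.
    return [onehot(i, n) for i in indices]

def sample(P, X):
    # Sam(X) = P X: the matrix product simply gathers the selected rows of X.
    return [[sum(p * x for p, x in zip(row, col)) for col in zip(*X)] for row in P]
```

For example, `sample(selection_matrix([3, 1], 3), X)` returns the third and first rows of a 3-row token matrix X, in that order.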
For an operator to be a sampling-without-replacement operator, the indices must be unique, i.e., i_j ≠ i_k for all j ≠ k. We sample the vectors based on importance scores: a vector's importance score represents its likelihood of being picked. This implies that our method does not need to specify the number of sampled tokens, which is more versatile than the learned projection matrices explored in previous work (Wang et al., 2020). The number of parameters of our importance-score-based sampling is also smaller than that of projection matrices: d × 1 versus k × n, where d, n, k are the number of token dimensions, tokens, and sampled tokens (or projected dimensions), respectively. However, setting a constant number of sampled tokens allows better parallelization, supported by PyTorch's compile method, which fuses the code into a single optimized kernel. The importance scores are computed via a learnable linear transformation Linear: R^m → R. We use the Gumbel distribution to add randomness to the importance scores, similar to (Jang et al., 2017). The Gumbel probability density function (PDF) and the importance score computation are given as follows:

$$\varphi(x) = e^{-(x + e^{-x})},$$
$$z = \operatorname{Linear}_S(\mathbf{X}) = \mathbf{X} W_S + B_S,$$

where φ: R → R is the PDF of the Gumbel distribution. To estimate gradients through the non-differentiable choosing operator, Gumbel-Softmax (Jang et al., 2017) uses the reparameterization trick. In the forward pass, it returns a hard choice using argmax, and in the backward pass, it uses a differentiable proxy for the gradient, as described in Equations 5 and 6, respectively. In Equation 6, τ controls the trade-off between the gradient variance and the optimizability of the hardmax function. As τ transitions from 0+ to +∞, samples from Equation 6 transition from a categorical distribution (accurate but vanishing gradients) to a uniform distribution (strong but inaccurate gradient signal).
While this causes discrepancies between the forward and backward passes (Jang et al., 2017), it provides a cheap and reliable enough method to optimize for categorical stochastic variables. The procedure to choose one vector from the value matrix is given in the following equations:

$$\mathbf{V} = \mathbf{X} W_V + B_V, \tag{4}$$
$$s(\mathbf{V}, z, g) = \mathbf{V}[\operatorname{argmax}_i(z[i] + g[i])], \tag{5}$$
$$\nabla s(\mathbf{V}, z, g) = \nabla\left(\sum_{i=1}^{n} \frac{\exp\left(\frac{z[i] + g[i]}{\tau}\right)}{\sum_{j=1}^{n} \exp\left(\frac{z[j] + g[j]}{\tau}\right)} \mathbf{V}[i]\right), \tag{6}$$

where s, V, z, g, and τ are the sampling function, the value matrix, the score vector, the Gumbel sample vector, and the temperature scaling, respectively; ∇ denotes the gradient operator. The problem is that Gumbel-Softmax gives off very small gradients due to its exponential nature. This is fine for its intended usage, i.e., to be placed at the last layer or in a few specialized modules, but not for a vital component in repeated sequential model cells. We can linearize (or polynomialize, to be precise) the probability formula by raising the Softplus-ed score to the power of τ^{-1}, τ ∈ R^+, instead of dividing the score by τ to control the trade-off.
The following equations give the formulation of the reparameterization trick via Softplus:

$$\operatorname{Softplus}(x) = \log(1 + \exp(x)), \tag{7}$$
$$s(\mathbf{V}, z, g) = \mathbf{V}[\operatorname{argmax}_i(z[i] + g[i])], \tag{8}$$
$$\nabla s(\mathbf{V}, z, g) = \nabla\left(\sum_{i=1}^{n} \frac{\operatorname{Softplus}\left(z[i] + g[i]\right)^{\tau^{-1}}}{\sum_{j=1}^{n} \operatorname{Softplus}\left(z[j] + g[j]\right)^{\tau^{-1}}} \mathbf{V}[i]\right). \tag{9}$$

From here, we can derive the trivial sampling-with-replacement procedure by repeatedly applying Equation 5 or Equation 8, as follows:

$$z = \operatorname{Linear}_S(\mathbf{X}) = \mathbf{X} W_S + B_S, \tag{10}$$
$$\operatorname{Sam}(\mathbf{X}, k) = \operatorname{Stack}_{i=1}^{k}\left(s(\mathbf{X}, z, g_i), \operatorname{dim} = 0\right), \tag{11}$$

where each vector g_i contains n independent samples from the Gumbel distribution, and s is a stochastic function returning a (different) token each time it is called in the forward pass, as defined in Equation 5 or Equation 8. Stacking the sampled vectors along the first dimension creates the sampled token matrix. While we do not use sampling with replacement in our method, we found that, for training stability, this method requires the gradients passing through the sampling function to be multiplied by 1/k. The drawback of sampling with replacement is repeated sampled vectors. Essentially, this causes the effective sampling rate to stay low even with a large number of samples, which greatly limits model capacity. Recognizing this limitation of sampling with replacement, we construct a data-driven differentiable sampling without replacement that is parallelizable, in contrast to existing sequential approaches.
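The forward pass of the with-replacement sampler (Equations 10 and 11) can be sketched in plain Python. This is a hedged illustration: the helper names are hypothetical, only the forward hard pick is executed, and the Softplus-power weights of Equation 9 (used as the gradient proxy in the backward pass) are exposed as a separate function since there is no autograd here:

```python
import math
import random

def gumbel():
    # Standard Gumbel sample via inverse transform: g = -log(-log(u)).
    return -math.log(-math.log(random.random()))

def softplus_weights(perturbed, tau=1.0):
    # Gradient-proxy weights in the spirit of Equation 9:
    # Softplus of the perturbed scores raised to 1/tau, then normalized.
    w = [math.log1p(math.exp(p)) ** (1.0 / tau) for p in perturbed]
    s = sum(w)
    return [wi / s for wi in w]

def sample_with_replacement(X, z, k):
    # Forward pass of Equations 10-11: each of the k independent draws
    # hard-picks the argmax of the Gumbel-perturbed importance scores.
    samples = []
    for _ in range(k):
        perturbed = [zi + gumbel() for zi in z]
        i = max(range(len(z)), key=lambda j: perturbed[j])
        samples.append(X[i])
    return samples
```

Because each draw is independent, the same token can be picked repeatedly, which is exactly the drawback discussed above.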
Our method sorts the vectors by importance score, selects the k vectors with the highest scores into a set S1 and k random vectors into a set S2 satisfying S1 ∩ S2 = ∅, concatenates S1 and S2 into a set S of k binary bins, and finally selects k vectors using Equation 8 and Equation 9. While the sorting procedure is not differentiable, the whole procedure ensures that the important tokens iteratively receive higher scores than the unimportant ones through the Softplus probability bins and gradient descent. In practice, we find that letting the sampled tokens be duplets (linear combinations of a token from S1 and a token from S2) is good enough and much easier to optimize. We therefore accept the noise from the S2 components instead of performing true sampling with the reparameterization trick, which would result in a lower convergence rate and require intensive tuning of the hyperparameter τ; τ is set to 1.0 throughout all experiments. The algorithm is described in Figure 2.

With relative positional encodings, sparse or dense, the sampling method is similar. To inject relative positional encoding densely (e.g., injecting Euclidean distance matrix information onto attention maps, or injecting token positional encodings), it suffices to concatenate the embeddings reserved for relative positional encoding (e.g., point cloud coordinates) with the input of the sampler function. This is possible under the assumption that there exists a function that linearly or non-linearly decomposes the relative positional encoding matrix of shape (*n, n*) into two matrices of shape (*n, d*). The sampler relies solely on the contextual meaning of token embeddings and does not depend on the relative positional information. This is by design and rests on an assumption (which can be observed in Figure 3 and Figure 8): relative positional information gradually transmits to the token embeddings through the transformer layers.
Sparse relative positional encoding, however, is much more complicated to implement for parallel processing. We provide a linear-computational-complexity algorithm that can be run on CPU (or with PyTorch sparse tensors, though not efficiently enough to be justified):

- Step 1: Sample from the token embeddings using the pseudocode provided in Figure 2.
- Step 2: Get the indices and probability scores (the weights of the binary linear combinations) of the top-k tokens and the randomly sampled tokens.
- Step 3: Construct two relative positional matrices Atop, Arand of shape (*n, k*), where *n, k* are the number of tokens and the number of sampled tokens, respectively.
- Step 4: Fill Atop, Arand with XR[indices]. XR here is a sparse matrix data structure.
- Step 5: Transform each relative positional matrix through the needed learnable layers provided in Section 4.3. This ordering prevents the layers that transform the relative features from ending up learning the noisy relative features.
- Step 6: Multiply the two matrices Atop, Arand by their corresponding probability scores from Step 2.
- Step 7: Return Atop + Arand.

![8_image_0.png](8_image_0.png)

Figure 2: Our proposed differentiable sampling without replacement.

In practice, we treat the sparse relative positional matrix XR as a dense matrix/tensor for parallelizability (and so that it can be compiled into a single kernel via PyTorch's compile method). This is an engineering limitation of our work.
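Steps 2–7 above amount to gathering two index sets of relative features and blending them with the sampler's probability scores. A toy NumPy sketch with hypothetical indices and scores (XR treated as dense, as in our implementation; all concrete values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
XR = rng.normal(size=(n, n))              # dense stand-in for the sparse matrix XR
idx_top = np.array([0, 2, 5])             # Step 2: top-k token indices (illustrative)
idx_rand = np.array([1, 4, 7])            # Step 2: random token indices (disjoint)
p_top = np.array([0.7, 0.6, 0.9])         # Step 2: probability scores (illustrative)

A_top = XR[:, idx_top]                    # Steps 3-4: gather, shape (n, k)
A_rand = XR[:, idx_rand]
# Steps 6-7: weighted "duplet" combination of top-k and random features
A = p_top * A_top + (1.0 - p_top) * A_rand
```

Any learnable transformation (Step 5) would be applied to `A_top` and `A_rand` before the final blend, so that it never sees the mixed, noisy features.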
## 4.3 Attention Matrix Construction

To recall, any generic self-attention is expressed as follows:

$$\begin{array}{l}{{\mathbf{Q}=\mathrm{Linear}_{\mathbf{Q}}(\mathbf{X})=\mathbf{X}W_{Q}+B_{Q},}}\\ {{\mathbf{K}=\mathrm{Linear}_{\mathbf{K}}(\mathbf{X})=\mathbf{X}W_{K}+B_{K},}}\\ {{\mathbf{V}=\mathrm{Linear}_{\mathbf{V}}(\mathbf{X})=\mathbf{X}W_{V}+B_{V},}}\\ {{\mathrm{Attn}(\mathbf{X})=\mathbf{X}+\mathrm{Prob}\left(\mathrm{Score}(\mathbf{Q},\mathbf{K})\right)\mathbf{V}.}}\end{array}$$

In the vanilla transformer (Vaswani et al., 2017), the function Prob (the probability function, which normalizes the attention matrix over the tokens) and the function Score (the attention nonlinearity) are softmax and the scaled dot product, respectively. We formulate attention as a single head for the reasons given in Section 3. Q, K, V are matrices of size n×d, where n and d denote the number of tokens and the token dimension, respectively. Equation 12 and Equation 13 construct Equation 14 using Equation 2; subsequent attention constructions may specify only the functions Prob and Score.

It is well known that transformers are hard to optimize (Liu et al., 2020). We propose the conjecture that the ease of network training is strongly related to componentwise convexity and the scale of the backpropagated gradients. This is based on a stark difference between transformers and other network architectures: convolutional neural networks and multi-layer perceptrons are all pseudoconvex, but self-attention modules are not.
Here,

$$\mathrm{Prob}(\mathbf{A})[i,j]=\frac{\exp(\mathbf{A}[i,j])}{\sum_{k=1}^{n}\exp(\mathbf{A}[i,k])+\epsilon},\tag{12}$$

$$\mathrm{Score}(\mathbf{Q},\mathbf{K})[i,j]=\frac{\sum_{k=1}^{d}\mathbf{Q}[i,k]\,\mathbf{K}[j,k]}{\sqrt{d}},\tag{13}$$

$$\mathrm{Attn}(\mathbf{X})=\mathbf{X}+\mathrm{Prob}(\mathrm{Score}(\mathbf{Q},\mathbf{K}))\mathbf{V}.\tag{14}$$

We reformulate the attention matrix such that, for a single layer, the output is pseudoconvex with respect to the weights. We discovered that the pairwise Maxout attention nonlinearity (derived from the Maxout activation (Goodfellow et al., 2013)) is convex. When combined with the attention module (including the feed-forward network), the whole transformer layer is pseudoconvex, with an informal proof in Section 4.4 and a formal one in Appendix B.2. The following equations express the leaky factor, the ReLU probability function, and the pairwise Maxout attention nonlinearity (which takes the element-wise maximum, sums over the feature dimensions, and scales by $\sqrt{d}$), respectively:

$$\mathbf{C}=\mathrm{Linear}_{\mathbf{C}}(\mathbf{X})=\mathbf{X}W_{C}+B_{C},$$

$$\mathrm{Score}(\mathbf{Q},\mathbf{K})[i,j]=\frac{\sum_{k=1}^{d}\max(\mathbf{Q}[i,k],\mathbf{K}[j,k])}{\sqrt{d}},$$

$$\mathrm{Prob}(\mathbf{A},\mathbf{C})[i,j]=\frac{\mathrm{ReLU}(\mathbf{A}[i,j])}{\sum_{r=1}^{n}\left(\mathrm{ReLU}(\mathbf{A}[i,r])+\mathrm{Softplus}(\mathbf{C}[r,1])\right)+\epsilon},$$

$$\mathrm{Attn}(\mathbf{X})=\mathbf{X}+\mathrm{Prob}(\mathrm{Score}(\mathbf{Q},\mathbf{K}),\mathbf{C})\mathbf{V}.$$

While the leaky factor was initially introduced for numerical stability, it also allows better relative information aggregation. In practice, we use a linear combination for relative positional encoding alongside the leaky probability function. The relative positional encoding matrix receives a different linear transformation for each layer, projecting n-dimensional vectors into h-dimensional vectors.
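To make the reformulated layer concrete, here is a minimal single-head NumPy sketch of the pairwise Maxout score with the leaky ReLU-probability function (biases omitted for brevity; we read the Softplus leaky term as summed over tokens, which is one reading of the index r above and an assumption of this sketch):

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def maxout_leaky_attention(X, Wq, Wk, Wv, Wc, eps=1e-6):
    """Single-head sketch: pairwise Maxout score, ReLU probability, leaky factor C."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    C = X @ Wc                                   # leaky factor, shape (n, 1)
    n, d = Q.shape
    # Score[i, j] = sum_k max(Q[i, k], K[j, k]) / sqrt(d)
    S = np.maximum(Q[:, None, :], K[None, :, :]).sum(axis=-1) / np.sqrt(d)
    A = np.maximum(S, 0.0)                       # ReLU
    denom = A.sum(axis=-1, keepdims=True) + softplus(C).sum() + eps
    P = A / denom                                # rows sum to less than one: the "leak"
    return X + P @ V, P

rng = np.random.default_rng(1)
n, d = 6, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Wc = rng.normal(size=(d, 1))
out, P = maxout_leaky_attention(X, Wq, Wk, Wv, Wc)
```

Because the denominator adds the strictly positive Softplus(C) mass, each probability row sums to strictly less than one; this is the leak discussed above.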
With $\mathbf{X}_{R}$ being the relative positional encoding matrix, the following equations express how we incorporate relative positional encoding into the attention scores:

$$\mathbf{R}_{\mathrm{mul}}=\mathrm{Softplus}(\mathrm{Linear}_{\mathbf{R}_{\mathrm{mul}}}(\mathbf{X}_{R})),$$

$$\mathbf{R}_{\mathrm{add}}=\mathrm{Linear}_{\mathbf{R}_{\mathrm{add}}}(\mathbf{X}_{R}),$$

$$\mathrm{Score}(\mathbf{Q},\mathbf{K})[i,j]=\frac{\sum_{k=1}^{d}\max(\mathbf{Q}[i,k],\mathbf{K}[j,k])}{\sqrt{d}}\,\mathbf{R}_{\mathrm{mul}}[i,j]+\mathbf{R}_{\mathrm{add}}[i,j],$$

$$\mathrm{Prob}(\mathbf{A},\mathbf{C})[i,j]=\frac{\mathrm{ReLU}(\mathbf{A}[i,j])}{\sum_{r=1}^{n}\left(\mathrm{ReLU}(\mathbf{A}[i,r])+\mathrm{Softplus}(\mathbf{C}[r,1])\right)+\epsilon},$$

$$\mathrm{Attn}(\mathbf{X})=\mathbf{X}+\mathrm{Prob}(\mathrm{Score}(\mathbf{Q},\mathbf{K}),\mathbf{C})\mathbf{V}.$$

Leakiness injects rank. With any non-leaky probability function, self-attention fails to distinguish any two tokens of a rank-1 uniform input, a failure commonly known as the over-smoothing phenomenon (Noci et al., 2022; Dong et al., 2023). This case proves that the approximation power of relative-positional-encoding-based (RPE-based) transformers is far from universal (Luo et al., 2022), as opposed to absolute-positional-encoding-based (APE-based) transformers (Yun et al., 2020). This barrier limits the performance of transformers on relative positional data structures such as graphs and point clouds. In contrast, with the addition of the positive leaky component C, we empirically show that the token representations of our RPE-based transformer gradually gain rank through the layers. We name this phenomenon **rank injection**. To visualize rank injection, in Figure 3 we use 30 samples of random weights to measure the rank progression of the token representation of one random normal relative matrix over the layers.

## 4.4 Theoretical Analysis Summary

To theoretically justify the effectiveness of our proposed model, we analyze its convexity properties. Even though, rigorously, probability distribution functions cannot exhibit actual convexity (Tsurumi, 1966), a weaker yet significant function class that also draws heavy attention in optimization theory (Liu et al., 2012; Qin et al., 2016) is that of **pseudoconvex functions**, defined as follows:

Definition 4.1.
Let ∇ denote the gradient operator and let $S \subset \mathbb{R}^{n}$ be a convex open set. A function $f: S \to \mathbb{R}$ is said to be pseudoconvex on S if, for any $x_1, x_2 \in S$ such that $\langle \nabla f(x_1), x_2 - x_1 \rangle \geq 0$, it holds that $f(x_2) \geq f(x_1)$.

Pseudoconvex functions satisfy a very important lemma: every stationary point is also a global minimizer. To visualize this concept, we sketch a convex function and a pseudoconvex function in Figure 4.

![10_image_0.png](10_image_0.png)

Figure 3: Rank progression of token representations with 256 tokens and embedding size 512 through 100 randomly initialized single-head SFT layers with the leaky probability function and pairwise Maxout attention nonlinearity.

In application to deep learning, pseudoconvexity not only tackles tricky optimization challenges such as saddle points and local minima but also improves the overall performance of artificial neural networks.

![10_image_1.png](10_image_1.png)

Figure 4: Comparison between a convex function $z = \frac{x^2+y^2+10}{10}$ (top left) and a pseudoconvex function $z = \frac{10(x^2+y^2)}{x^2+y^2+1}$ (top right) and their corresponding heatmaps with gradient vector fields (bottom). The pseudoconvex function greatly resembles its convex counterpart regarding the search for the local minimum.

In summary of our theoretical analysis, we first show the convexity deficiencies of two well-known attention settings, namely vanilla attention (Vaswani et al., 2017) and ReLU-based attention with dot product (Shen et al., 2023), by handpicking representative counterexamples. The detailed statements and corresponding proofs are provided in Appendix B.1. Next, we rigorously show that SFT is pseudoconvex with both linear and GeLU activations in the FFN layer. Informally, we present the result as follows.

Theorem 4.2. **(informal)** *The SFT layer with linear or GeLU FFN activation and no sampling is componentwise pseudoconvex with respect to certain combinations of weights.*
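The defining implication of Definition 4.1 can also be checked numerically on the pseudoconvex example from Figure 4 (a brute-force sanity check over random point pairs, not a proof):

```python
import numpy as np

def f(p):
    # Pseudoconvex example from Figure 4: f(x, y) = 10(x^2 + y^2) / (x^2 + y^2 + 1)
    r2 = float(np.dot(p, p))
    return 10.0 * r2 / (r2 + 1.0)

def grad_f(p):
    r2 = float(np.dot(p, p))
    return 20.0 * p / (r2 + 1.0) ** 2

# Definition 4.1: <grad f(x1), x2 - x1> >= 0 must imply f(x2) >= f(x1).
rng = np.random.default_rng(0)
violations = 0
for _ in range(10_000):
    x1, x2 = rng.uniform(-3.0, 3.0, size=(2, 2))
    if np.dot(grad_f(x1), x2 - x1) >= 0.0 and f(x2) < f(x1) - 1e-9:
        violations += 1
```

The check passes because the function increases radially, so the gradient condition forces the second point to lie no closer to the origin than the first.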
We provide the full theorem and proof in Appendix B.2. To compensate for the non-decreasing gradient behaviour of pseudoconvex functions compared to their convex counterparts, we also derive an adaptive lower bound and a global upper bound on the expectation of the Frobenius norm of the gradients of SFT, often referred to as gradient norms for short, and analyze their complexity with respect to the number of tokens. The boundedness of the gradient norms indicates their magnitude and addresses the robustness of SFT against challenges such as sharp/flat points, i.e., places at which the gradients are too large/small.

Theorem 4.3. **(informal)** *Let $S_a = S_a(W_Q, W_K, W_V)$ be the SFT attention layer output in Algorithm ??. Then $\mathbb{E}\left\|\frac{\partial S_a}{\partial W_V}\right\|_F^2$ and $\mathbb{E}\left\|\frac{\partial S_a}{\partial W_Q}\right\|_F^2$ have complexity $\Theta(n)$ and $\Theta(1)$, respectively, where n is the number of tokens.*

Since the full proof of Theorem 4.3 is quite involved, we sketch it as follows:

- Adaptive lower bound:
  - **Step 1:** Derive a fractional form f(W)/g(W), where W represents the set of parameters,
  - **Step 2:** Convexify the fractional form via the convex envelope derived by Tawarmalani & Sahinidis (2002),
  - **Step 3:** Evaluate the lower bound using the Jensen inequality,
  - **Step 4:** Simplify where necessary.
- Global upper bound: employ the norm product inequality and the Jensen inequality for concave functions.

The full statement and proof are given in Appendix B.3.

## 4.5 Complexity Analysis

Consider the input tokens $\mathbf{X} \in \mathbb{R}^{n \times d}$ and the query and key weights $W_Q, W_K \in \mathbb{R}^{d \times d_a}$. The computational cost of each step is as follows:

- Linear transformations for Q, K, and V: O(nd²),
- Importance scores of the queries and keys: O(nd),
- Selecting the top-k importance scores: O(n log n) with sorting or O(n) with the quickselect algorithm,
- Attention between n queries and the top-k most important keys: O(nk d_a),
- FFN network: O(nd²).
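The O(n) top-k selection step above admits a simple realization via introselect-style partial partitioning; for instance, using NumPy's `argpartition` rather than a hand-rolled quickselect (a sketch with illustrative sizes):

```python
import numpy as np

# O(n) average-case top-k index selection: partition the scores so the k
# largest occupy the last k slots, without fully sorting all n scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=1024)           # importance scores, n = 1024
k = 64
topk = np.argpartition(scores, -k)[-k:]  # unordered indices of the k largest
```

The returned indices are unordered within the top-k set, which is sufficient here since attention is permutation-invariant over the selected keys.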
Given these complexities and considering long-ranged data structures, which imply n ≫ d, the asymptotic computational complexity of a single SFT layer is O(n). The asymptotic space complexity of a single SFT layer is also O(n).

![12_image_0.png](12_image_0.png)

Figure 5: Inference time (in seconds) of the networks on the ModelNet40 classification test split on 1 A100 GPU and 8 CPUs with a batch size of 32. The Vector Neuron (Deng et al., 2021) and Canonicalization (Kaba et al., 2023) frameworks are applied to PointNet (Charles et al., 2017) and DGCNN (Wang et al., 2019). The leaky attention function is applied to our SFT model. For a fair comparison, our model in this plot does not use PyTorch's compile method to speed up (using it halves the inference time on the A100). The results of the other methods are taken from Kaba et al. (2023).

## 5 Experiments

## 5.1 Objectives

The core of any foundational model is its efficacy in modeling diverse data modalities, and the fundamental difference between data modalities lies in the types of relationships between tokens. Therefore, we conduct extensive experiments to measure the ability of our attention mechanism to model dense low-rank relationships (point clouds) and sparse high-rank relationships (graphs and sequences). Since we propose a pseudoconvex mechanism, we also conduct a comparative experiment measuring the real runtime of our transformer formulation against the vanilla transformer. In short, we answer the three following questions:

- How effectively can leakiness encode relative information into the tokens' representations?
- How effectively can our sampling-based global attention model long-range interactions?
- How significantly does our pseudoconvex formulation aid the model convergence rate?

## 5.2 Relative Learning

We conduct relative learning experiments on two particular archetypes, dense low-rank and sparse high-rank, because most real-world data falls into one of these two groups.
## 5.2.1 Dense Low-Rank Relative Learning

To measure the ability of our model to capture dense low-rank relationships, we test it on two point cloud tasks involving two datasets, object classification on ModelNet40 (Wu et al., 2015) and part segmentation on ShapeNetPart (Chang et al., 2015), with and without a rotational invariance constraint. The motivation for choosing point clouds for dense low-rank relative learning is simple: the rank of the Euclidean distance matrix of p-dimensional points is at most p + 2 (Gower, 1985), and point cloud coordinates can be reconstructed from their Euclidean distance matrix through the multidimensional scaling algorithm. Our experiments under the rotational invariance constraint specifically remove all point cloud coordinates from the input tokens, forcing the model to use only relative information to discriminate. By making the model rely solely on relative information, the experiment validates the generalizability of the information obtained through the rank injection phenomenon. We further compare against other dedicated rotationally invariant models in Table 1; our model shows comparable performance to these dedicated methods while running inference **hundreds of percent** faster (computational time measured in Figure 5). Note that our relative positional encoding scheme **can be dropped into any point cloud transformer**, making it rotationally invariant at negligible additional cost (because all point cloud transformers already feature relative positional encoding). We also conduct experiments on point cloud datasets without the rotational invariance constraint to measure how effective our method as a whole is at modeling non-sequential data (Table 2). It shows competitive results against other point cloud models. This is because our model does not include any data-modality-specific inductive bias (such as the use of kNN as a locality heuristic) and is susceptible to overfitting in scenarios where data is not abundant.
Table 1: Experimental results measuring the effectiveness of the leaky probability function in transferring relative information to the tokens' representations. The table shows shape classification results on the ModelNet40 dataset (Wu et al., 2015) and part segmentation results on the ShapeNetPart dataset (Chang et al., 2015) under the rotational invariance constraint, along with popular baselines from the rotational invariance/equivariance literature. z/z, z/SO(3), and SO(3)/SO(3) signify a model trained with 3D coordinates (no rotational invariance constraint), a model trained with rotationally invariant features or via a rotationally equivariant transformation, and a model trained with random rotation data augmentation, respectively.

| Method | ModelNet40 z/z↑ | ModelNet40 z/SO(3)↑ | ShapeNetPart z/SO(3)↑ | ShapeNetPart SO(3)/SO(3)↑ |
| SFCNN (Rao et al., 2019) | 91.4 | 84.8 | - | - |
| TFN (Thomas et al., 2018) | 88.5 | 85.3 | 76.8 | 76.2 |
| RI-Conv (Zhang et al., 2019) | 86.5 | 86.4 | 75.3 | 75.3 |
| SPHNet (Poulenard et al., 2019) | 87.7 | 86.6 | - | - |
| ClusterNet (Chen et al., 2019) | 87.1 | 87.1 | - | - |
| GC-Conv (Zhang et al., 2020) | 89.0 | 89.1 | 77.2 | 77.3 |
| RI-Framework (Li et al., 2022) | 89.4 | 89.4 | 79.2 | 79.4 |
| VN-PointNet (Deng et al., 2021) | 77.5 | 77.5 | 72.4 | 72.8 |
| VN-DGCNN (Deng et al., 2021) | 89.5 | 89.5 | 81.4 | 81.4 |
| CN(NL)-PointNet (Kaba et al., 2023) | 79.9 | 79.6 | 73.5 | 73.6 |
| CN(NL)-DGCNN (Kaba et al., 2023) | 88.7 | 88.8 | 78.4 | 78.5 |
| SFT (Ours) | 91.1 | 87.2 | 78.3 | - |

## 5.2.2 Sparse High-Rank Relative Learning

To measure the ability of our model to capture sparse high-rank relationships, we test it on two data modalities: sequences and graphs.
For sequential tasks, we choose the Long-Range-Arena benchmark (Tay et al., 2021), a standard for measuring the effectiveness of efficient transformer schemes in sequential modeling. For graph tasks, we choose three datasets, Peptides-func (Singh et al., 2015), Peptides-struct (Singh et al., 2015), and PascalVOC-sp (Everingham et al., 2010), from the Long-Range-Graph-Benchmark (Dwivedi et al., 2022b), which is the equivalent of the Long-Range-Arena benchmark for graph transformers.

The reason we chose these two data modalities to represent sparse high-rank relative learning is intuitive: both the causal relationships of sequences and graph adjacency matrices are very sparse yet high-rank. The competitive

| Method | ModelNet40 mAcc↑ | ModelNet40 OA↑ | ShapeNetPart c. IoU↑ | ShapeNetPart i. IoU↑ |
|------------------------------------------|------------------|----------------|----------------------|----------------------|
| PointNet (Charles et al., 2017) | 86.2 | 89.2 | 80.4 | 83.7 |
| Set Transformer (Lee et al., 2019) | - | 90.4 | - | - |
| PointNet++ (Qi et al., 2017) | - | 91.9 | 81.9 | 85.1 |
| SpecGCN (Wang et al., 2018) | - | 92.1 | - | - |
| PointCNN (Li et al., 2018) | 88.1 | 92.2 | 84.6 | 86.1 |
| DGCNN (Wang et al., 2019) | 90.2 | 92.2 | 82.3 | 85.1 |
| PointWeb (Zhao et al., 2019) | 89.4 | 92.3 | - | - |
| SpiderCNN (Xu et al., 2018) | - | 92.4 | 81.7 | 85.3 |
| PointConv (Wu et al., 2019) | - | 92.5 | 82.8 | 85.7 |
| Point2Sequence (Liu et al., 2019) | 90.4 | 92.6 | - | 85.2 |
| KPConv (Thomas et al., 2019) | - | 92.9 | 85.1 | 86.4 |
| InterpCNN (Mao et al., 2019) | - | 93.0 | 84.0 | 86.3 |
| Point Transformer (Zhao et al., 2021) | 90.6 | 93.7 | 83.7 | 86.6 |
| Sequoia (Trang et al., 2024) | 88.4 | 92.0 | 80.6 | 83.8 |
| SFT (Ours) | 88.2 | 91.1 | 81.3 | 84.5 |

Table 2: Experimental results measuring the effectiveness of our sparse attention on non-sequential data structures, particularly spatial data structures.
The table shows shape classification results on the ModelNet40 dataset (Wu et al., 2015) and part segmentation results on the ShapeNetPart dataset (Chang et al., 2015), along with popular baselines from the point cloud analysis literature.

experimental results of our model on the Long-Range-Arena and Long-Range-Graph benchmarks are shown in Table 4 and Table 3, respectively. In sequential tasks, our model achieves competitive results against many other efficient transformers and the full-attention transformer. However, it should be noted that we did not reach the performance level of models designed specifically for sequential tasks, such as MEGA (Ma et al., 2023) and S4 (Gu et al., 2022); the inductive biases of EMA (exponential moving average) in MEGA and of SSMs cannot be applied to other data structures such as point clouds and graphs. In graph tasks, our model achieves relatively good results on the peptide datasets compared to other efficient transformers and the full-attention transformer, and superior results to all classical message passing neural networks. However, since our transformer construction for graphs is simple and does not rely on local information aggregation like GraphGPS (Rampášek et al., 2022), it does not perform well on the PascalVOC-sp dataset. This shows a limitation of our method's relative information aggregation.

## 5.2.3 Performance Gap Between Relative And Absolute Positional Information Aggregation

Additionally, we compare two variants of our model (with and without absolute positions) on both the dense low-rank and sparse high-rank experiments. We notice a significant performance gap on all experimented datasets (ModelNet40, ShapeNetPart, Peptides-func, Peptides-struct, and PascalVOC-sp), shown in Table 5. This suggests that our relative modeling performance is inferior to introducing relations directly into the token embeddings, despite our formulation allowing more complicated constraints to be implemented.
We suspect this is related to how we construct relative information, since we use only two linear layers to transform it. We have experimented with a multi-layer perceptron for transforming relative information, but it proved too slow in practice: transforming k·n vectors (or an entire attention map), where k and n are the number of sampled tokens and the number of tokens, respectively, is an extremely costly operation. We believe efficient relative information transformation is an interesting open question for future research.

Table 3: (Long Range Graph Benchmark) Performance on peptide and computer vision graph datasets. The performance of our model on Peptides-func, Peptides-struct, and the computer-vision-based PascalVOC-sp dataset is measured using average precision (AP), mean absolute error (MAE), and F1-score, respectively.

| Method | Pept-func ↑ | Pept-struct ↓ | PasVOC-sp ↑ |
|--------------------------------------------|---------------|-----------------|---------------|
| GCN (Kipf & Welling, 2017) | 0.5930 | 0.3496 | 0.1268 |
| GCNII (Chen et al., 2020) | 0.5543 | 0.3471 | 0.1698 |
| GINE (Brossard et al., 2021) | 0.5498 | 0.3547 | 0.1265 |
| GatedGCN (Bresson & Laurent, 2018) | 0.5864 | 0.3420 | 0.1265 |
| GatedGCN (RWPE) (Bresson & Laurent, 2018) | 0.6069 | 0.3357 | 0.1265 |
| Transformer (LapPE) (Vaswani et al., 2017) | 0.6326 | 0.2529 | 0.2694 |
| Transformer (RWPE) (Vaswani et al., 2017) | 0.6502 | 0.2620 | 0.2718 |
| SAN (LapPE) (Kreuzer et al., 2021) | 0.6384 | 0.2683 | 0.3230 |
| SAN (RWPE) (Kreuzer et al., 2021) | 0.6562 | 0.2545 | 0.3216 |
| GPS (Rampášek et al., 2023) | 0.6535 | 0.2500 | 0.3748 |
| Exphormer (Shirzad et al., 2023) | 0.6527 | 0.2481 | 0.3975 |
| GPS-Sequoia-RWPE (Trang et al., 2024) | 0.6755 | 0.2453 | 0.3379 |
| SFT (Ours) | 0.6674 | 0.2661 | 0.1961 |
| SFT-RWPE (Ours) | 0.6902 | 0.2655 | 0.2181 |

| Method | ListOps | Text | Retrieval | Image | Pathfinder |
|--------------------------------------|-----------|--------|-------------|---------|--------------|
| BigBird (Zaheer et al., 2020) | 36.05 | 64.02 | 59.29 | 40.83 | 74.87 |
| Reformer (Kitaev et al., 2020) | 37.27 | 56.10 | 53.40 | 38.07 | 68.50 |
| Performer (Choromanski et al., 2021) | 18.01 | 65.40 | 53.82 | 42.77 | 77.05 |
| Linformer (Wang et al., 2020) | 35.70 | 53.94 | 52.27 | 38.56 | 76.34 |
| Luna-256 (Ma et al., 2021) | 37.98 | 65.78 | 79.56 | 47.86 | 78.55 |
| Transformer (Vaswani et al., 2017) | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 |
| Sequoia (Trang et al., 2024) | 37.70 | 75.10 | 67.04 | 49.88 | 87.30 |
| S4 (Gu et al., 2022) | 88.65 | 76.02 | 87.09 | 86.09 | 86.05 |
| MEGA (Ma et al., 2023) | 63.14 | 90.43 | 91.25 | 90.44 | 96.01 |
| SFT-Relative (Ours) | 39.95 | 64.54 | 71.33 | 47.65 | 78.39 |

Table 4: (Long Range Arena) Accuracy on the full suite of Long Range Arena (LRA) tasks.

## 5.3 Learning Curve Analysis

We conduct a learning curve analysis to examine the effect of our pseudoconvex formulation on the convergence rate, as well as the effect of the sampling rate on the performance of our model. Since learning curves can be too noisy to interpret, we include curves of the cumulative maximum attained performance for easier interpretation.
## 5.3.1 Convergence Speed

To verify our claim on the effectiveness of our componentwise pseudoconvex transformer module, we conduct comparative experiments between three models: (1) the model using the leaky-ReLU probability function with the Maxout attention score, (2) the model using the softmax probability function with the scaled-dot attention score, and (3) the vanilla transformer without relative information. The models are trained on the ModelNet40 dataset.

| | ModelNet40↑ | ShapeNetPart↑ | Peptides-func↑ | Peptides-struct↓ | PascalVOC-sp↑ |
|-----------------|-----------------|------------------|--------------------|-----------------|------|
| Relative | 87.2 | 78.3 | 0.67 | 0.2661 | 0.20 |
| Absolute | 91.1 | 81.3 | 0.69 | 0.2655 | 0.22 |
| Performance Gap | 3.9 | 3.0 | 0.02 | -0.001 | 0.02 |

Table 5: Performance gap between relative and absolute positional information aggregation.

![16_image_0.png](16_image_0.png)

Figure 6: Comparison between the learning curves of SFT with leaky-ReLU + Maxout (orange), with softmax + dot product (blue), and the vanilla transformer without relative information (green) on ModelNet40 through 256 epochs. The actual test accuracy curve is shown (right), while the maximum accuracy over all epochs is visualized for training (left) and testing (middle).

The comparison between (1) and (2) shows the effectiveness of the pseudoconvex formulation against the traditional softmax dot product. The comparison between (1, 2) and (3) shows the necessity of including relative information. The experiments are shown in Figure 6. The higher convergence rate of our model verifies the effectiveness of our pseudoconvex formulation against the vanilla attention formulation, and the better performance of the two models using relative information against vanilla attention verifies the usefulness of relative information.
## 5.3.2 Performance-Efficiency Tradeoff On Sampling Rate

To examine the performance-efficiency tradeoff, we trained our model with different sampling rates (0.4%, 6.25%, 12.5%, 25%, and 50%), measured the train and test accuracy per epoch, and measured the runtime of the models at the different sampling rates. In Figure 7, we show the performance of SFT at the five sampling rates. Our results suggest that a higher sampling rate is likely to increase the convergence speed and performance. However, it should be noted that increasing the sampling rate from 25% to 50% does not result in better test accuracy despite the clearly higher sampling rate, and that an extremely low sampling rate (0.4%) can still achieve reasonable performance. To precisely measure the runtime of the models, we run with one CPU thread. SFT layers with various numbers of sampled points are compared against the vanilla transformer (measured using the built-in PyTorch implementation). The input is a batch of four sequences of length 1024. The result is shown in Table 6: our model is more efficient than the vanilla transformer when the sampling percentage is below 50%. It should be noted that this does not exclude the computation of the MLP module within the transformer block and that the built-in vanilla transformer does not support relative information. The computational cost of the modules is discussed in Section 5.4.

![17_image_0.png](17_image_0.png)

Figure 7: The influence of the sampling rate on the performance of SFT at five different sampling rates. The training (left) and testing (right) accuracies are the maxima over 256 epochs.
| | SFT (Ours) | | | | | Transformer |
|-----------|-------|-------|-------|-------|-------|-------|
| #Sampling | 32 | 64 | 128 | 256 | 512 | 1024 |
| %Sampling | 3.13% | 6.25% | 12.5% | 25.0% | 50.0% | 100% |
| Runtime↓ | 0.30s | 0.32s | 0.34s | 0.46s | 0.64s | 0.64s |

Table 6: Model runtime measured in seconds (averaged over 10 runs).

## 5.4 Computational Cost Breakdown

To provide a more comprehensive insight into the computational cost of the submodules of an SFT layer, we measure the computational cost of each SFT submodule (in GFLOPS), with results in Table 7. The layer processes 1024 256-dimensional tokens. The Q, K, V, C columns form the QKVC block; Relative, W_cat, and Remainder belong to the multi-head attention block; the last column is the MLP.

| Sampling rate | Sampling | Q | K | V | C | Relative | W_cat | Remainder | MLP |
|---------------|----------|-------|-------|-------|--------|----------|-------|-----------|-----|
| 100% | 0.0037 | 0.268 | 0.268 | 0.268 | 0.016 | 0.268 | 0.268 | 0.044 | 2.2 |
| 50% | 0.0037 | 0.268 | 0.134 | 0.134 | 0.0084 | 0.134 | 0.268 | 0.022 | 2.2 |
| 25% | 0.0037 | 0.268 | 0.067 | 0.067 | 0.0042 | 0.067 | 0.268 | 0.009 | 2.2 |
| 12.5% | 0.0037 | 0.268 | 0.034 | 0.034 | 0.0021 | 0.034 | 0.268 | 0.004 | 2.2 |
| 6.25% | 0.0037 | 0.268 | 0.017 | 0.017 | 0.0011 | 0.017 | 0.268 | 0.002 | 2.2 |

From Table 7, it can be inferred that the sampling cost is negligible compared to the other submodules. However, even with only two linear layers, the relative positional encoding module still has a high computational cost. Without relative positional encoding, the model can be even faster; however, its applicability is then hindered (e.g., it can no longer process the graph data modality, and performance drops on the point cloud data modality).
Table 7: Submodule SFT Computational Cost

![18_image_0.png](18_image_0.png)

Figure 8: Similarity and rank of token embeddings through the layers for 32 randomly selected data points.

## 5.5 Rank Injection Phenomena

In Figure 3, we explored how the rank of the input token embeddings gradually increases under random initialization. However, at a computationally feasible number of layers (1-20), the rank of the input token embeddings is unfortunately still small and does not carry enough information to distinguish between different tokens. Here, we revisit the phenomenon from a different perspective: a fully trained model. We measure the cosine similarity between tokens and the matrix rank layer by layer to show how information flows from the relative positional encoding to the token embeddings. The model and task we chose is rotationally invariant point cloud classification on ModelNet40, which features 1024 256-dimensional tokens. Figure 8 shows that the token embeddings start to become dissimilar around the 5th layer. Similarly, the matrix rank of the token embeddings rises through the layer transformations, **near to the maximum possible rank** (the token embedding dimension is 256). This shows that the model can generate complex spatial relationships between tokens, since the input relative positional encoding is low-rank (Gower, 1985).

## 6 Conclusion

In this paper, we made three contributions: a data-driven differentiable sampling-without-replacement method, a self-attention formulation with pseudoconvexity and bounded gradient norms, and a leaky probability function for relative positional encoding. Two of our three contributions aim at lowering the complexity of training transformers, while the last one significantly improves the expressivity of transformers regarding relative positional encoding.
We have empirically demonstrated the usefulness of the three modules and have shown at least one application: achieving rotational invariance at zero additional computing cost, a desired property for multiple point cloud problems (e.g., molecular modeling). By easing the pain of hyperparameter tuning, our method may pave the way for very deep transformer networks to be trained at small facilities. In short, we make transformers more powerful and easier to train.

Limitations and Future Works. We list the limitations and prospects of our work as follows:

- We have yet to incorporate dedicated point cloud modules into our model, as well as to use the leaky positional encoding in prominent state-of-the-art point cloud transformers.
- Our model's efficiency bottleneck lies in its relative positional encoding computation: at full attention, the relative positional encoding takes as much as 50% of the multi-head attention computational cost. Future work would be to incorporate a more sophisticated relative positional encoding that works well on all data modalities (point clouds, sequences, images, graphs, ...).
- We have not yet experimented on heterogeneous datasets. Since SFT has shown effectiveness on a wide range of data modalities (point clouds, sequences, graphs), learning to process tokens from different types of data modalities is an interesting question. Doing so could provide unprecedented generative ability, since it allows more precise information mediums such as graphs to describe relationships, tables to describe statistical data, and point clouds to describe 3D structures.

## References

Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. ETC: Encoding long and structured inputs in transformers. 
In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), *Proceedings of the* 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 268–284, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.19. URL https://aclanthology.org/2020.emnlp-main.19. Brandon Anderson, Truong Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings. neurips.cc/paper_files/paper/2019/file/03573b32b2746e6e8ca98b9123f2249b-Paper.pdf. Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 2016. Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv:2004.05150, 2020. Xavier Bresson and Thomas Laurent. Residual gated graph convnets, 2018. Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Velickovic. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. *CoRR*, abs/2104.13478, 2021. URL https://arxiv.org/abs/2104.13478. Rémy Brossard, Oriel Frigo, and David Dehaene. Graph convolutions that can finally model local structure, 2021. Chen Cai, Truong Son Hy, Rose Yu, and Yusu Wang. On the connection between MPNN and graph transformer. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), *Proceedings of the 40th International Conference on Machine Learning*, volume 202 of *Proceedings of Machine Learning Research*, pp. 3408–3430. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/cai23b.html. A. Cambini and L. Martein. 
*Generalized Convexity and Optimization: Theory and Applications*. Lecture Notes in Economics and Mathematical Systems. Springer Berlin Heidelberg, 2008. ISBN 9783540708759. URL https://books.google.com.vn/books?id=P_h3jgEACAAJ. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (eds.), *Computer Vision - ECCV 2020*, pp. 213–229, Cham, 2020. Springer International Publishing. ISBN 978-3-030-58452-8. Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. R. Qi Charles, Hao Su, Mo Kaichun, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In *2017 IEEE Conference on Computer Vision and Pattern Recognition* (CVPR), pp. 77–85, 2017. doi: 10.1109/CVPR.2017.16. Chao Chen, Guanbin Li, Ruijia Xu, Tianshui Chen, Meng Wang, and Liang Lin. Clusternet: Deep hierarchical cluster network with rigorously rotation-invariant representation for point cloud analysis. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4989–4997, 2019. doi: 10.1109/CVPR.2019.00513. Haiwei Chen, Shichen Liu, Weikai Chen, Hao Li, and Randall Hill. Equivariant point network for 3d point cloud analysis. In *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 14509–14518, 2021. doi: 10.1109/CVPR46437.2021.01428. Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks, 2020. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. *URL https://openai.com/blog/sparse-transformers*, 2019. 
Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J Colwell, and Adrian Weller. Rethinking attention with performers. In *International Conference on* Learning Representations, 2021. URL https://openreview.net/forum?id=Ua6zuk0WRH. Shivani Choudhary, Tarun Luthra, Ashima Mittal, and Rajat Singh. A survey of knowledge graph embedding and their applications, 2021. Taco S. Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical CNNs. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hkbd5xZRb. Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas Guibas. Vector neurons: A general framework for so(3)-equivariant networks. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12180–12189, 2021. doi: 10.1109/ICCV48922.2021.01198. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423. Xiaohan Ding, Xiangyu Zhang, Jungong Han, and Guiguang Ding. Scaling up your kernels to 31×31: Revisiting large kernel design in cnns. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11953–11965, 2022. doi: 10.1109/CVPR52688.2022.01166. Yihe Dong, Jean-Baptiste Cordonnier, and Andreas Loukas. Attention is not all you need: Pure attention loses rank doubly exponentially with depth, 2023. 
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference* on Learning Representations, 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. In International Conference on Learning Representations, 2022a. URL https://openreview.net/forum?id=wTTjnvGphYj. Vijay Prakash Dwivedi, Ladislav Rampášek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022b. URL https://openreview.net/forum?id= in7XC5RcjEn. Mark Everingham, Luc Gool, Christopher K. Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. *Int. J. Comput. Vision*, 88(2):303–338, jun 2010. ISSN 0920-5691. doi: 10.1007/s11263-009-0275-4. URL https://doi.org/10.1007/s11263-009-0275-4. Jan E. Gerken, Jimmy Aronsson, Oscar Carlsson, Hampus Linander, Fredrik Ohlsson, Christoffer Petersson, and Daniel Persson. Geometric deep learning and equivariant neural networks. *Artificial Intelligence* Review, 56(12):14605–14662, Dec 2023. ISSN 1573-7462. doi: 10.1007/s10462-023-10502-7. URL https: //doi.org/10.1007/s10462-023-10502-7. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pp. 1263–1272. JMLR.org, 2017. Yuan Gong, Yu-An Chung, and James Glass. AST: Audio Spectrogram Transformer. In *Proc. Interspeech* 2021, pp. 571–575, 2021. 
doi: 10.21437/Interspeech.2021-698. Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Sanjoy Dasgupta and David McAllester (eds.), Proceedings of the 30th International Conference on Machine Learning, volume 28 of *Proceedings of Machine Learning Research*, pp. 1319–1327, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. URL https://proceedings.mlr.press/v28/goodfellow13.html. J.C. Gower. Properties of euclidean and non-euclidean distance matrices. *Linear Algebra and its Applications*, 67:81–97, 1985. ISSN 0024-3795. doi: https://doi.org/10.1016/0024-3795(85)90187-9. URL https://www.sciencedirect.com/science/article/pii/0024379585901879. Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=uYLFoz1vlAC. Anmol Gulati, Chung-Cheng Chiu, James Qin, Jiahui Yu, Niki Parmar, Ruoming Pang, Shibo Wang, Wei Han, Yonghui Wu, Yu Zhang, and Zhengdong Zhang. Conformer: Convolution-augmented transformer for speech recognition. In *Proc. Interspeech 2020*, 2020. Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun. Deep learning for 3d point clouds: A survey. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43(12):4338–4364, 2021. doi: 10.1109/TPAMI.2020.3005434. David Harris and Sarah Harris. *Digital Design and Computer Architecture, Second Edition*. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2nd edition, 2012. ISBN 0123944244. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016. doi: 10.1109/CVPR.2016.90. 
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia D'amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, and Antoine Zimmermann. Knowledge graphs. *ACM Computing Surveys*, 54(4):1–37, July 2021. ISSN 1557-7341. doi: 10.1145/3447772. URL http://dx.doi.org/10.1145/3447772. Truong Son Hy, Shubhendu Trivedi, Horace Pan, Brandon M. Anderson, and Risi Kondor. Predicting molecular properties with covariant compositional networks. *The Journal of Chemical Physics*, 148(24): 241745, 06 2018. ISSN 0021-9606. doi: 10.1063/1.5024797. URL https://doi.org/10.1063/1.5024797. Truong Son Hy, Shubhendu Trivedi, Horace Pan, Brandon M Anderson, and Risi Kondor. Covariant compositional networks for learning graphs. In *Proc. Int. Workshop on Mining and Learning with Graphs* (MLG), 2019. Vsevolod Ivanov. First order characterizations of pseudoconvex functions. *Serdica Mathematical Journal*, 27 (3):203–218, 2001. URL http://eudml.org/doc/11534. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=rkE3y85ee. Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen. Time-LLM: Time series forecasting by reprogramming large language models. In *The Twelfth International Conference on Learning Representations*, 2024. URL https://openreview.net/forum?id=Unb5CVPtae. Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael John Lamarre Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=1YLJDvSx6J4. 
Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, and Siamak Ravanbakhsh. Equivariance with learned canonicalization functions. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning, ICML'20. JMLR.org, 2020. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017. Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451, 2020. Hyunwoong Ko, Paul Witherell, Yan Lu, Samyeon Kim, and David W. Rosen. Machine learning and knowledge graph based design rule construction for additive manufacturing. *Additive Manufacturing*, 37:101620, 2021. ISSN 2214-8604. doi: https://doi.org/10.1016/j.addma.2020.101620. URL https: //www.sciencedirect.com/science/article/pii/S2214860420309921. Khaled Koutini, Jan Schlüter, Hamid Eghbal-zadeh, and Gerhard Widmer. Efficient training of audio transformers with patchout. In *Interspeech 2022, 23rd Annual Conference of the International Speech* Communication Association, Incheon, Korea, 18-22 September 2022, pp. 2753–2757. ISCA, 2022. doi: 10.21437/Interspeech.2022-227. URL https://doi.org/10.21437/Interspeech.2022-227. Devin Kreuzer, Dominique Beaini, William L. Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention, 2021. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical Report 0, University of Toronto, Toronto, Ontario, 2009. URL https://www.cs.toronto.edu/~kriz/ learning-features-2009-TR.pdf. Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. 
Set transformer: A framework for attention-based permutation-invariant neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 3744–3753. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/lee19d.html. Xianzhi Li, Ruihui Li, Guangyong Chen, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. A rotationinvariant framework for deep point cloud analysis. IEEE Transactions on Visualization and Computer Graphics, 28(12):4503–4514, dec 2022. ISSN 1077-2626. doi: 10.1109/TVCG.2021.3092570. URL https: //doi.org/10.1109/TVCG.2021.3092570. Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. CesaBianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/ f5f8590cd58a54e94377e6ae2eded4d9-Paper.pdf. Zian Li, Xiyuan Wang, Yinan Huang, and Muhan Zhang. Is distance matrix enough for geometric deep learning? In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. URL https: //openreview.net/forum?id=QwQ5HhhSNo. Drew Linsley, Junkyung Kim, Vijay Veerabadran, Charles Windolf, and Thomas Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/ 2018/file/ec8956637a99787bd197eacd77acce5e-Paper.pdf. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. 
Understanding the difficulty of training transformers. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 5747–5763, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.463. URL https://aclanthology.org/2020.emnlp-main.463. Qingshan Liu, Zhishan Guo, and Jun Wang. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. *Neural Networks*, 26:99–109, 2012. Xinhai Liu, Zhizhong Han, Yu-Shen Liu, and Matthias Zwicker. Point2sequence: Learning the shape representation of 3d point clouds with an attention-based sequence to sequence network. In *Proceedings* of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'19/IAAI'19/EAAI'19. AAAI Press, 2019. ISBN 978-1-57735-809-1. doi: 10.1609/aaai. v33i01.33018778. URL https://doi.org/10.1609/aaai.v33i01.33018778. Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Exploring the stationarity in time series forecasting. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. URL https://openreview.net/ forum?id=ucNDIDRNjjv. Shengjie Luo, Shanda Li, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, and Di He. Your transformer may not be as powerful as you expect. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 4301–4315. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/ 1ba5f64159d67775a251cf9ce386a2b9-Paper-Conference.pdf. Jerry Ma and Denis Yarats. 
On the adequacy of untuned warmup for adaptive optimization. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8828–8836, 2021. Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. Luna: Linear unified nested attention. *Advances in Neural Information Processing Systems*, 34:2441–2453, 2021. Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer. Mega: Moving average equipped gated attention. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=qNLe3iq2El. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Dekang Lin, Yuji Matsumoto, and Rada Mihalcea (eds.), Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL https://aclanthology.org/P11-1015. Jan R. Magnus and Heinz Neudecker. *Matrix Differential Calculus with Applications in Statistics and Econometrics*. John Wiley, second edition, 1999. ISBN 0471986321 9780471986324 047198633X 9780471986331. Jiageng Mao, Xiaogang Wang, and Hongsheng Li. Interpolated convolutional networks for 3d point cloud understanding. In *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 1578–1587, 2019. doi: 10.1109/ICCV.2019.00166. Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings. neurips.cc/paper_files/paper/2019/file/bb04af0f7ecaee4aae62035497da1387-Paper.pdf. Daniel Maturana and Sebastian Scherer. 
Voxnet: A 3d convolutional neural network for real-time object recognition. In *2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pp. 922–928, 2015. doi: 10.1109/IROS.2015.7353481. Nikita Nangia and Samuel Bowman. ListOps: A diagnostic dataset for latent tree learning. In Silvio Ricardo Cordeiro, Shereen Oraby, Umashanthi Pavalanathan, and Kyeongmin Rim (eds.), Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pp. 92–99, New Orleans, Louisiana, USA, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-4013. URL https://aclanthology.org/N18-4013. Nhat Khang Ngo, Truong Son Hy, and Risi Kondor. Multiresolution graph transformers and wavelet positional encoding for learning long-range and hierarchical structures. *The Journal of Chemical Physics*, 159(3): 034109, 07 2023. ISSN 0021-9606. doi: 10.1063/5.0152833. URL https://doi.org/10.1063/5.0152833. Giannis Nikolentzos, George Dasoulas, and Michalis Vazirgiannis. k-hop graph neural networks. *Neural* Networks, 130, 07 2020. doi: 10.1016/j.neunet.2020.07.008. Lorenzo Noci, Sotiris Anagnostidis, Luca Biggio, Antonio Orvieto, Sidak Pal Singh, and Aurelien Lucchi. Signal propagation in transformers: Theoretical perspectives and the role of rank collapse, 2022. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings. 
neurips.cc/paper_files/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf. Trang Pham, Truyen Tran, Hoa Dam, and Svetha Venkatesh. Graph classification via deep learning with virtual nodes, 2017. Adrien Poulenard, Marie-Julie Rakotosaona, Yann Ponty, and Maks Ovsjanikov. Effective rotation-invariant point cnn with spherical harmonics kernels. In *2019 International Conference on 3D Vision (3DV)*, pp. 47–56, 2019. doi: 10.1109/3DV.2019.00015. Charles R. Qi, Hao Su, Matthias Nießner, Angela Dai, Mengyuan Yan, and Leonidas J. Guibas. Volumetric and multi-view cnns for object classification on 3d data. In *2016 IEEE Conference on Computer Vision* and Pattern Recognition (CVPR), pp. 5648–5656, 2016. doi: 10.1109/CVPR.2016.609. Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/ 2017/file/d8bf84be3800d12f74d8b05e9b89836f-Paper.pdf. Sitian Qin, Xiudong Yang, Xiaoping Xue, and Jiahui Song. A one-layer recurrent neural network for pseudoconvex optimization problems with equality and inequality constraints. IEEE transactions on cybernetics, 47(10):3063–3074, 2016. Dragomir R. Radev, Pradeep Muthukrishnan, and Vahed Qazvinian. The ACL Anthology network corpus. In Min-Yen Kan and Simone Teufel (eds.), *Proceedings of the 2009 Workshop on Text and Citation Analysis* for Scholarly Digital Libraries (NLPIR4DL), pp. 54–61, Suntec City, Singapore, August 2009. Association for Computational Linguistics. URL https://aclanthology.org/W09-3607. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. 
Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a General, Powerful, Scalable Graph Transformer. *Advances in Neural Information* Processing Systems, 35, 2022. Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer, 2023. Yongming Rao, Jiwen Lu, and Jie Zhou. Spherical fractal convolutional neural networks for point cloud recognition. In *2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 452–460, 2019. doi: 10.1109/CVPR.2019.00054. Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In *2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 6620–6629, 2017. doi: 10.1109/CVPR.2017.701. Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. Efficient content-based sparse attention with routing transformers. *Transactions of the Association for Computational Linguistics*, 9:53–68, 2021. doi: 10.1162/tacl_a_00353. URL https://aclanthology.org/2021.tacl-1.4. Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 9323–9332. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/satorras21a.html. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. *IEEE Transactions on Neural Networks*, 20(1):61–80, 2009. doi: 10.1109/TNN.2008. 2005605. Kai Shen, Junliang Guo, Xu Tan, Siliang Tang, Rui Wang, and Jiang Bian. A study on relu and softmax in transformer, 2023. Weijing Shi and Raj Rajkumar. Point-gnn: Graph neural network for 3d object detection in a point cloud. 
In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1708–1716, 2020. doi: 10.1109/CVPR42600.2020.00178. P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser. The princeton shape benchmark. In Proceedings Shape Modeling Applications, 2004., pp. 167–178, 2004. doi: 10.1109/SMI.2004.1314504. Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J. Sutherland, and Ali Kemal Sinop. Exphormer: Sparse transformers for graphs. In *Proceedings of the 40th International Conference on* Machine Learning, ICML'23. JMLR.org, 2023. Sandeep Singh, Kumardeep Chaudhary, Sandeep Kumar Dhanda, Sherry Bhalla, Salman Sadullah Usmani, Ankur Gautam, Abhishek Tuknait, Piyush Agrawal, Deepika Mathur, and Gajendra P.S. Raghava. SATPdb: a database of structurally annotated therapeutic peptides. *Nucleic Acids Research*, 44(D1):D1119–D1126, 11 2015. ISSN 0305-1048. doi: 10.1093/nar/gkv1114. URL https://doi.org/10.1093/nar/gkv1114. Sidak Pal Singh, Gregor Bachmann, and Thomas Hofmann. Analytic insights into structure and rank of neural network hessian maps. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=otDgw7LM7Nn. David So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V Le. Searching for efficient transformers for language modeling. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 6010–6022. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/ 2021/file/2f3c6a4cd8af177f6456e7e51a916ff3-Paper.pdf. Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In *2015 IEEE International Conference on Computer Vision (ICCV)*, pp. 945–953, 2015. doi: 10.1109/ICCV.2015.114. Xiao Sun, Zhouhui Lian, and Jianguo Xiao. 
Srinet: Learning strictly rotation-invariant representations for point cloud classification and segmentation. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, pp. 980–988, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368896. doi: 10.1145/3343031.3351042. URL https://doi.org/10.1145/3343031.3351042. Mohit Tawarmalani and Nikolaos Sahinidis. *Convexification and global optimization in continuous and* mixed-integer nonlinear programming. Theory, algorithms, software, and applications, volume 65. 01 2002. ISBN 978-1-4419-5235-6. doi: 10.1007/978-1-4757-3532-1. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena : A benchmark for efficient transformers. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id= qVyeW-grC2k. Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. Efficient transformers: A survey. ACM Comput. Surv., 55(6), dec 2022. ISSN 0360-0300. doi: 10.1145/3530811. URL https://doi.org/10.1145/3530811. Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, François Goulette, and Leonidas Guibas. Kpconv: Flexible and deformable convolution for point clouds. In *2019 IEEE/CVF* International Conference on Computer Vision (ICCV), pp. 6410–6419, 2019. doi: 10.1109/ICCV.2019.00651. Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018. Raphael J. L. Townshend, Martin Vögele, Patricia Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon Anderson, Stephan Eismann, Risi Kondor, Russ B. Altman, and Ron O. Dror. Atom3d: Tasks on molecules in three dimensions, 2022. 
Thuan Nguyen Anh Trang, Khang Nhat Ngo, Hugo Sonnery, Thieu Vo, Siamak Ravanbakhsh, and Truong Son Hy. Scalable hierarchical self-attention with learnable hierarchy for long-range interactions. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id= qH4YFMyhce. Kazuyuki Tsurumi. Some remarks on polynomial and rational convexity. *Science Reports of the Tokyo Kyoiku* Daigaku, Section A, 9(209/213):111–115, 1966. ISSN 03713539. URL http://www.jstor.org/stable/ 43698682. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Proceedings of the 31st International Conference on* Neural Information Processing Systems, NIPS'17, pp. 6000–6010, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks, 2018. Chu Wang, Babak Samari, and Kaleem Siddiqi. Local spectral graph convolution for point set feature learning. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 52–66, 2018. Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. *arXiv preprint arXiv:2006.04768*, 2020. Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph cnn for learning on point clouds. *ACM Trans. Graph.*, 38(5), oct 2019. ISSN 0730-0301. doi: 10.1145/3326362. URL https://doi.org/10.1145/3326362. Mitchell Wortsman, Jaehoon Lee, Justin Gilmer, and Simon Kornblith. Replacing softmax with relu in vision transformers, 2023. Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9613–9622, 2019. doi: 10.1109/CVPR.2019.00985. Zhenqin Wu, Bharath Ramsundar, Evan N. 
Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S. Pappu, Karl Leswing, and Vijay Pande. Moleculenet: A benchmark for molecular machine learning, 2018. Zhirong Wu, S. Song, A. Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1912–1920, Los Alamitos, CA, USA, jun 2015. IEEE Computer Society. doi: 10.1109/CVPR. 2015.7298801. URL https://doi.ieeecomputersociety.org/10.1109/CVPR.2015.7298801. Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao. Spidercnn: Deep learning on point sets with parameterized convolutional filters. In Computer Vision - ECCV 2018: 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VIII, pp. 90–105, Berlin, Heidelberg, 2018. Springer-Verlag. ISBN 978-3-030-01236-6. doi: 10.1007/978-3-030-01237-3_6. URL https://doi.org/10. 1007/978-3-030-01237-3_6. Jiancheng Yang, Qiang Zhang, Bingbing Ni, Linguo Li, Jinxian Liu, Mengdie Zhou, and Qi Tian. Modeling point clouds with self-attention and gumbel subset sampling. In *2019 IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), pp. 3318–3327, 2019. doi: 10.1109/CVPR.2019.00344. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? In Thirty-Fifth Conference on Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=OeWooOxFwDa. Tan Yu, Jingjing Meng, and Junsong Yuan. Multi-view harmonized bilinear network for 3d object recognition. In *2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 186–194, 2018. doi: 10.1109/CVPR.2018.00027. Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J. Reddi, and Sanjiv Kumar. Are transformers universal approximators of sequence-to-sequence functions?, 2020. 
Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J. Kim. Graph Transformer Networks. Curran Associates Inc., Red Hook, NY, USA, 2019. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. Big bird: Transformers for longer sequences. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 17283–17297. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/ c8512d142a2d849725f31a9a7a361ab9-Paper.pdf. Huang Zhang, Changshuo Wang, Shengwei Tian, Baoli Lu, Liping Zhang, Xin Ning, and Xiao Bai. Deep learning-based 3d point cloud classification: A systematic survey and outlook. *Displays*, 79:102456, 2023. ISSN 0141-9382. doi: https://doi.org/10.1016/j.displa.2023.102456. URL https://www.sciencedirect. com/science/article/pii/S0141938223000896. Zhiyuan Zhang, Binh-Son Hua, David W. Rosen, and Sai-Kit Yeung. Rotation invariant convolutions for 3d point clouds deep learning. In *2019 International Conference on 3D Vision (3DV)*, pp. 204–213, 2019. doi: 10.1109/3DV.2019.00031. Zhiyuan Zhang, Binh-Son Hua, Wei Chen, Yibin Tian, and Sai-Kit Yeung. Global context aware convolutions for 3d point cloud understanding. In *2020 International Conference on 3D Vision (3DV)*, pp. 210–219, 2020. doi: 10.1109/3DV50981.2020.00031. Hengshuang Zhao, Li Jiang, Chi-Wing Fu, and Jiaya Jia. Pointweb: Enhancing local neighborhood features for point cloud processing. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5560–5568, 2019. doi: 10.1109/CVPR.2019.00571. Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, and Vladlen Koltun. Point transformer. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 16239–16248, 2021. doi: 10.1109/ ICCV48922.2021.01595. 
## A Notations

**Numbers and Arrays**

- $a$: A scalar
- $\mathbf{a}$: A vector
- $\mathbf{A}$: A matrix
- $\mathbf{A}^{\top}$: Transpose of matrix $\mathbf{A}$
- $n$: The length of the input sequence
- $d$: Embedding dimension

**Sets**

- $\mathbb{R}^{a}$: Set of real vectors with length $a$
- $\mathbb{R}^{a\times b}$: Set of real matrices with size $a \times b$
- $[m]$: Set of all integers from 1 to $m$
- $[a, b]$: Closed interval from $a$ to $b$

**Indexing**

- $\mathbf{a}[i]$: $i$-th element of vector $\mathbf{a}$, with indexing starting from 1
- $\mathbf{A}[i, j]$: Element $(i, j)$ of matrix $\mathbf{A}$
- $\mathbf{A}[:, j]$: Column $j$ of matrix $\mathbf{A}$
- $\mathbf{A}[i]$: Row $i$ of matrix $\mathbf{A}$
- $\mathbf{A}[:k]$: The first $k$ rows of matrix $\mathbf{A}$
- $\mathbf{A}[k:]$: All rows from row $k$ of matrix $\mathbf{A}$

**Functions and Operators**

- $\mathbf{A} \odot \mathbf{B}$: Element-wise Hadamard product between two matrices $\mathbf{A}$ and $\mathbf{B}$
- $\mathbf{A}\mathbf{B}$: Matrix multiplication of two matrices $\mathbf{A}$ and $\mathbf{B}$
- $\|x\|_F$: Frobenius norm of $x$
- $\nabla$: Gradient operator
- $\langle \mathbf{a}, \mathbf{b} \rangle$: Scalar product between vectors $\mathbf{a}$ and $\mathbf{b}$

## B Theoretical Analysis

## B.1 Nonconvexity Of Other Attention Settings

As mentioned in Section 4.4, pseudoconvex functions are defined as follows.

Definition B.1. *Let $\nabla$ denote the gradient operator and let $S \subset \mathbb{R}^n$ be a convex open set. A function $f : S \to \mathbb{R}$ is pseudoconvex if, for any $x_1, x_2 \in S$ such that $\langle \nabla f(x_1), x_2 - x_1 \rangle \geq 0$, it holds that $f(x_2) \geq f(x_1)$.*

In this section, we show the non-pseudoconvexity of two Transformer variations: the well-known vanilla Transformer (Vaswani et al., 2017) and our ReLU-based attention with dot product (Shen et al., 2023). Specifically, we point out simplified counterexamples with well-defined inputs. Note that these counterexamples can easily be generalized to arbitrary settings; we handpicked them for simplicity.

Theorem B.2. *Let $\mathbf{S}(W_Q, W_K) = \mathrm{SM}(\mathbf{X}W_Q W_K^{\top}\mathbf{X}^{\top})\mathbf{X}W_V + \mathbf{X}$ be the vanilla attention layer, where SM is the softmax function with respect to query and key. Then there exists a setting in which $\mathbf{S}$ is not pseudoconvex with respect to all pairs of weight entries.*

Proof. Let $\mathbf{X} \in \mathbb{R}^{3\times 3}$, $W_Q, W_K \in \mathbb{R}^{3}$ and $W_V \in \mathbb{R}^{3\times 3}$. We consider a counterexample by letting $\mathbf{X} = W_V = I_3$.
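Before the analytic argument, the claimed non-quasiconvexity can also be checked numerically. The sketch below evaluates the softmax output $e^{w_1}/\sum_k e^{w_k}$ that a single entry of $\mathbf{S}$ reduces to in this setting; the specific evaluation points are our own choice for illustration.

```python
import math

# Softmax output that one entry of S reduces to in the counterexample:
# s(w) = exp(w[0]) / sum_k exp(w[k]).
def s(w):
    return math.exp(w[0]) / sum(math.exp(x) for x in w)

# Hand-picked evaluation points (w_Q1 fixed to 0, as in the proof):
a = (0.0, 2.0, -10.0)
b = (0.0, -10.0, 2.0)
mid = tuple((x + y) / 2 for x, y in zip(a, b))  # their midpoint

# s is small at both endpoints but large at the midpoint, so the sublevel
# set {w : s(w) <= 0.2} contains a and b but not their midpoint --
# hence s is not quasiconvex (and therefore not pseudoconvex).
assert s(mid) > max(s(a), s(b))
print(round(s(a), 3), round(s(b), 3), round(s(mid), 3))  # -> 0.119 0.119 0.965
```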
Calculating the output, we have

$$\mathbf{S}(W_{Q},W_{K})[i,j]={\frac{e^{w_{Qi}w_{Kj}}}{\sum_{k=1}^{3}e^{w_{Qi}w_{Kk}}}}\qquad\forall i,j\in\{1,2,3\},$$

where $w_{Qi}$ and $w_{Kj}$ are the $i$-th and $j$-th entries of the vectors $W_Q$ and $W_K$, respectively. From here, we prove that $\mathbf{S}$ is not quasiconvex, and thus not pseudoconvex, with respect to all pairs of scalar weight entries.

- $\mathbf{S}$ is not quasiconvex with respect to the pair $(w_{Qs}, w_{Qr})$ for any $s \neq r$ and $s, r \in \{1, 2, 3\}$.

Since we are considering convexity within the entries of the query, it is reasonable to fix $W_K = [1\;1\;1]^{\top}$ and consider the first entry of $\mathbf{S}$, which is simply

$$\mathbf{s}(w_{Q1},w_{Q2},w_{Q3})={\frac{e^{w_{Q1}}}{e^{w_{Q1}}+e^{w_{Q2}}+e^{w_{Q3}}}}.$$

Without loss of generality, let us fix $w_{Q1} = 0$ and consider the convexity with respect to the pair $(w_{Q2}, w_{Q3})$. By taking the sublevel set defined by some level $\alpha \in (0, 1)$, one can easily observe that

$$L_{\mathbf{s}}(\alpha):=\left\{(w_{Q2},w_{Q3})\in\mathbb{R}^{2}\quad{\mathrm{s.t.}}\quad e^{w_{Q2}}+e^{w_{Q3}}\geq{\frac{1}{\alpha}}-1\right\}$$

is evidently not a convex set in two-dimensional space. Therefore, $\mathbf{S}$ is not quasiconvex w.r.t. $(w_{Q2}, w_{Q3})$.

- $\mathbf{S}$ is not quasiconvex with respect to the pair $(w_{Qs}, w_{Kr})$ for any $s, r \in \{1, 2, 3\}$.

Without loss of generality, let $w_{Qs} = w_{Ks} = 0$ for $s \in \{2, 3\}$ and consider the function representing the $[2, 1]$ entry of the output $\mathbf{S}(W_Q, W_K)$:

$${\bf s}(w_{Q1},w_{K1})=\frac{1}{e^{w_{Q1}w_{K1}}+2}.$$

Similarly, by taking the sublevel set defined by some level $\alpha \in (0, 1)$, it is obvious that this set is not convex. Hence, the proof is complete. $\square$

Not only does vanilla attention fail to exhibit the pseudoconvexity property; attention settings involving the dot product fail as well.

Theorem B.3.
*Let $\mathbf{R}(W_{Q},W_{K})=\mathrm{PR}(\mathbf{X}W_{Q}W_{K}^{\top}\mathbf{X}^{\top})\mathbf{X}W_{V}+\mathbf{X}$ be the ReLU-based attention with dot product, where*

$$\mathrm{PR}(\mathbf{A})[i,j]={\frac{\mathrm{ReLU}(\mathbf{A}[i,j])}{\sum_{k=1}^{n}\mathrm{ReLU}(\mathbf{A}[i,k])}}.$$

*Then there exists a setting in which $\mathbf{R}(W_Q, W_K)$ is not pseudoconvex w.r.t. any pair of weight entries $(w_{Qs}, w_{Kr})$.*

The proof of this theorem is similar to the second counterexample in Theorem B.2 and is therefore omitted.

## B.2 Convexity Of Sft

Even though a complete convexity analysis is not possible due to the structure of the probability distribution function Prob, it is possible to establish the efficiency of this function during the training process via its pseudoconvexity, as all stationary points of a pseudoconvex function are also global minimizers. The pseudoconvexity of the rational function class is verified by the following lemma.

Lemma B.4. *(Cambini & Martein, 2008) Let $z(x, c) = \frac{f(x)}{g(x)}$ be the ratio of two differentiable functions $f$ and $g$ defined on an open convex set $S \subset \mathbb{R}^n$. If $f$ is convex and $g$ is positive and affine, then $z$ is a pseudoconvex function.*

Since ReLU is not differentiable everywhere, the pseudoconvexity of Prob has to be analyzed on separate orthants $O_I$, where $I \subset [n]$ is defined via the non-negativity of the input entries. Lemma B.4 directly implies the following theorem.

Theorem B.5. *The probability distribution function* $\mathrm{Prob} : \mathbb{R}^{n} \times \mathbb{R}^{+} \to \mathbb{R}^{n}$ *defined as*

$$\mathrm{Prob}(\mathbf{x},c)[i]={\frac{\mathrm{ReLU}(\mathbf{x}[i])}{\sum_{k}\mathrm{ReLU}(\mathbf{x}[k])+c}},$$

*where $i \in [n]$, is a pseudoconvex function in the orthant $O_I$ for all $I \subset [n]$.*

Proof.
Without loss of generality, consider the Prob function in the orthant $O_1 = O_{[n]\setminus\{1\}}$, which means that for any $\mathbf{x} \in O_1$

$$\mathrm{Prob}(\mathbf{x},c)[i]={\left\{\begin{array}{ll}{0}&{{\mathrm{if~}}i=1,}\\{{\frac{\mathbf{x}[i]}{\sum_{k}\mathbf{x}[k]+c}}}&{{\mathrm{otherwise.}}}\end{array}\right.}$$

From here, it is evident that the claim follows from Lemma B.4, since $\mathbf{x}[k]$ is non-negative for all $k \neq 1$. Therefore, Prob is pseudoconvex in the orthant $O_1$ and, by the same argument, in all orthants of $\mathbb{R}^n$.

The probability distribution function Prob can only be pseudoconvex in each individual orthant $O_I$. However, due to the complete separation property, stability during training is preserved. Consequently, we derive the pseudoconvexity of SFT. But first, let us consider a modified version of SFT with a linear activation in the FFN layer:

Theorem B.6. *The SFT block in Algorithm ?? with linear FFN activation and no sampling has the following componentwise properties:*

- *Pseudoconvex with respect to $W_Q$, $B_Q$, $W_K$ and $B_K$;*
- *Pseudoconvex with respect to $W_{R_1}$, $B_{R_1}$, $W_{R_2}$ and $B_{R_2}$;*
- *Pseudoconvex with respect to $W_C$, $B_C$.*

Proof. Let $\mathbf{M} = SFT(W_Q, B_Q, W_K, B_K, W_{R_1}, W_{R_2}, W_C)$ be the output of an SFT block in Algorithm ??.

- $\mathbf{M}$ is pseudoconvex with respect to $W_Q$, $W_K$, $B_Q$ and $B_K$.

Consider the orthant $O_{Q+K} \cap O_{Q-K}$ of $\mathbf{Q}$ and $\mathbf{K}$, where $O_{Q-K}$ and $O_{Q+K}$ are the orthants in which the score function $\mathrm{Score}(\mathbf{Q}, \mathbf{K})$ is differentiable and affine. Thus, it follows from Theorem B.5 that each entry of the output of the Prob function can be written in the form $f(W_Q, W_K, B_Q, B_K)/(g(W_Q, W_K, B_Q, B_K) + C)$, where $f$ and $g$ are affine and $g$ is positive. As the remaining parts of the SFT layer are linear transformations, they do not affect the affinity of $f$. Therefore, Lemma B.4 establishes the pseudoconvexity of the SFT block with respect to the inspected weights.

- $\mathbf{M}$ is pseudoconvex with respect to $W_{R_1}$, $W_{R_2}$, $B_{R_1}$ and $B_{R_2}$.

This proof is similar to that for $W_Q$ and $W_K$ and is therefore omitted.
- $\mathbf{M}$ is pseudoconvex with respect to $W_C$ and $B_C$.

For the sake of simplicity, consider the $i$-th entry of $\mathbf{C}$, denoted as $\mathbf{C}[i] = \mathbf{X}[i]W_C + b_C$, where $\mathbf{1}_d b_C = B_C$. Let there exist $W_{C1}$ and $W_{C2}$ such that:

$$\nabla\mathbf{M}(W_{C1})(W_{C2}-W_{C1})\geq0.\tag{15}$$

To prove the pseudoconvex dependency of $\mathbf{M}$ with respect to $W_C$ and $B_C$, consider the dependency of $\mathbf{P}(W_C)$ on $W_C$ (the pseudoconvexity w.r.t. $b_C$ is analogous), where

$$\mathbf{P}(W_{C})=SFT({\overline{{W_{Q}}}},{\overline{{B_{Q}}}},{\overline{{W_{K}}}},{\overline{{B_{K}}}},{\overline{{W_{R_{1}}}}},{\overline{{W_{R_{2}}}}},W_{C},{\overline{{b_{C}}}}),$$

and the overlined weights are treated as constants. This function can be written in a simpler form for each of its entries:

$$\mathbf{P}(W_{C})[i,j]={\frac{C_{1}}{C_{2}+\mathrm{SP}(\mathbf{X}[i]W_{C}+C_{4})}}+C_{3},$$

where $C_1$, $C_2$, $C_3$ and $C_4$ are all constants, $C_2 > 0$, and SP denotes the softplus non-linearity. Returning to (15), we have:

$$-\frac{C_{1}}{(C_{2}+\text{SP}(\mathbf{X}[i]W_{C1}+C_{4}))^{2}}\text{SG}(\mathbf{X}[i]W_{C1}+C_{4})\mathbf{X}[i](W_{C2}-W_{C1})\geq0,\tag{16}$$

where SG denotes the sigmoid function $\mathrm{SG}(x) = 1/(1 + \exp(-x))$. Assume that $C_1 > 0$ (the case $C_1 < 0$ is similar); then it follows from equation 16 that:

$$\mathbf{X}[i](W_{C2}-W_{C1})\leq0.$$

Thus, for every $W_{C2}$ and $W_{C1}$ satisfying equation 15, we have:

$$\mathbf{P}(W_{C2})\geq\mathbf{P}(W_{C1}).$$

By definition, $\mathbf{M}$ is pseudoconvex with respect to $W_C$ and $b_C$.

However, in our proposed model, the FFN activation is GeLU. In order to show the pseudoconvexity of our original SFT block, we analyze it via its quasiconvexity with the following theorem:

Theorem B.7. *(Ivanov, 2001) A function* $f : \mathbb{R}^m \to \mathbb{R}^n$ *that is quasiconvex is also pseudoconvex if the set of stationary points coincides with the set of global minimizers.*

From here, the pseudoconvexity of our model is proved.

Theorem B.8.
*The SFT block with FFN GeLU activation and no sampling exhibits the same properties as in Theorem B.6.*

Proof. Let $\sigma(\cdot)$ be the shorthand notation for the GeLU non-linearity and $\mathbf{M}(\cdot)$ be the function of the SFT block with linear FFN activation. Then, the function of our proposed SFT block is $\mathbf{G} := \sigma \circ \mathbf{M}$. For the sake of simplicity, let $\mathbf{G}(\cdot)$ and $\mathbf{M}(\cdot)$ be single-matrix-input, scalar-output functions, which can represent any weight and output entry, respectively.

- Pseudoconvexity w.r.t. the query, key, $R_1$ and $R_2$ weights.

Since the output of $\mathbf{M}(\cdot)$ is a fractional function with respect to these weights, as illustrated in the proof of Theorem B.6, it has no stationary points. Furthermore, due to the uniqueness of the global minimizer of $\sigma$, one can easily derive that $\mathbf{G}$, if quasiconvex, is also pseudoconvex by Theorem B.7. Now we prove the quasiconvexity of $\mathbf{G}(\cdot)$. Given that $\sigma$ is a quasiconvex scalar non-linearity, we consider the sublevel set of $\mathbf{G}$, defined as

$$L_{\mathbf{G}}(\alpha)=\{\mathbf{W}\ \mathrm{s.t.}\ \mathbf{G}(\mathbf{W})\leq\alpha\}=\{\mathbf{W}\ \mathrm{s.t.}\ u_{1}(\alpha)\leq\mathbf{M}(\mathbf{W})\leq u_{2}(\alpha)\},$$

where $u_1(\cdot)$ and $u_2(\cdot)$ represent the lower and upper scalar bounds of $\sigma$ at some level input. However, as illustrated in Theorem B.6, $\mathbf{M}(\mathbf{W})$ is a fractional function with respect to $\mathbf{W}$, and so it is both pseudoconvex and pseudoconcave. Hence, $\mathbf{M}$ is also both quasiconvex and quasiconcave, which means that if we let

$$L^{1}_{\mathbf{M}}(\alpha_{1})=\{\mathbf{W}\ \mathrm{s.t.}\ \mathbf{M}(\mathbf{W})\geq\alpha_{1}\}\quad\text{and}\quad L^{2}_{\mathbf{M}}(\alpha_{2})=\{\mathbf{W}\ \mathrm{s.t.}\ \mathbf{M}(\mathbf{W})\leq\alpha_{2}\}$$

be the superlevel and sublevel sets of $\mathbf{M}(\mathbf{W})$, respectively, then they are convex sets for all $\alpha_1$ and $\alpha_2$. Therefore,

$$L_{\mathbf{G}}(\alpha)=L_{\mathbf{M}}^{1}(u_{1}(\alpha))\cap L_{\mathbf{M}}^{2}(u_{2}(\alpha))$$

is also a convex set, and the quasiconvexity of $\mathbf{G}(\mathbf{W})$ follows.

- Pseudoconvexity w.r.t. the leaky attention weights $W_C$ and $b_C$.
Consider the function $\mathbf{P}(W_C)$ defined in Theorem B.6. By following the same proof steps, one can show that $-\mathbf{P}(W_C)$ is also pseudoconvex; as a result, $\mathbf{P}(W_C)$ is pseudoconcave. It is also evident that $\mathbf{P}(W_C)$ has no stationary points. Hence, by replicating the proof for the other weights, it follows that the SFT block is also pseudoconvex with respect to $W_C$, and likewise with respect to $b_C$.

## B.3 Gradient Analysis Of Sft

To theoretically analyze the effectiveness of SFT against flat and sharp points, we evaluate a local adaptive lower bound and a global upper bound for the gradient norm squared of the weights.

Theorem B.9. *Let $\mathbf{Sa} = \mathbf{Sa}(W_Q, W_K, W_V) = \mathrm{Prob}(\mathrm{Score}(\mathbf{X}W_Q, \mathbf{X}W_K), \mathbf{C})\mathbf{X}W_V + \alpha\mathbf{X}$ be the SFT self-attention layer output, with the simplification that $\mathbf{C}$ is treated as a positive constant matrix. Assume that all weights are initialized via He initialization (He et al., 2015), where each entry independently follows the normal distribution $\mathcal{N}(0, \sigma^2)$, and let $W_D = W_Q - W_K$. Then, for any positive matrices $W^L_{QQ}, W^U_{QQ}, W^L_{KK}, W^U_{KK}, W^L_{DD}, W^U_{DD}$ such that $W^L_{QQ} < W^U_{QQ}$, $W^L_{KK} < W^U_{KK}$, $W^L_{DD} < W^U_{DD}$ and $W_QW_Q^{\top} \in [W^L_{QQ}, W^U_{QQ}]$, $W_KW_K^{\top} \in [W^L_{KK}, W^U_{KK}]$ and $W_DW_D^{\top} \in [W^L_{DD}, W^U_{DD}]$, there exist two positive scalar-valued functions $f_V, f_Q$ and positive constants $C_Q, C_V$ such that:*

$$\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{V}}\right\|_{F}^{2}\in\left(f_{V}(W_{QQ}^{L},W_{QQ}^{U},W_{KK}^{L},W_{KK}^{U}),C_{V}\right),\qquad\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{Q}}\right\|_{F}^{2}\in\left(f_{Q}(W_{DD}^{L},W_{DD}^{U},W_{KK}^{L},W_{KK}^{U}),C_{Q}\right),$$

*and*

$$f_{V}(W_{QQ}^{L},W_{QQ}^{U},W_{KK}^{L},W_{KK}^{U})=\Omega(n),\qquad C_{V}=O(n),$$
$$f_{Q}(W_{DD}^{L},W_{DD}^{U},W_{KK}^{L},W_{KK}^{U})=\Omega(1),\qquad C_{Q}=O(1).$$

Proof. First, let us note some facts from matrix calculus and properties of the Kronecker product $\otimes$ (Magnus & Neudecker, 1999; Singh et al., 2021).
Given some matrices $\mathbf{A} \in \mathbb{R}^{s_1\times s_2}$, $\mathbf{B} \in \mathbb{R}^{s_2\times s_3}$, $\mathbf{C} \in \mathbb{R}^{s_3\times s_4}$, $\mathbf{D} \in \mathbb{R}^{s_4\times s_5}$ and a matrix variable $\mathbf{W} \in \mathbb{R}^{s_2\times s_2}$, we have:

$${\frac{\partial\mathbf{AWB}}{\partial\mathbf{W}}}=\mathbf{A}\otimes\mathbf{B}^{\top}.$$

Additionally, we mention some useful properties of the trace operator, denoted as $\operatorname{tr}(\cdot)$:

$$\operatorname{tr}(\mathbf{A}\otimes\mathbf{B})=\operatorname{tr}(\mathbf{A})\operatorname{tr}(\mathbf{B}),\qquad(\mathbf{A}\mathbf{C})\otimes(\mathbf{B}\mathbf{D})=(\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\otimes\mathbf{D}).$$

We also recall the Jensen inequality for the expectation of convex and concave functions. Let $\mathbb{E}$ denote the expectation operator and $X$ be a random variable. Then, for any scalar function $\phi : \mathbb{R} \to \mathbb{R}$,

- $\mathbb{E}(\phi(X)) \geq \phi(\mathbb{E}(X))$ if $\phi$ is convex,
- $\mathbb{E}(\phi(X)) \leq \phi(\mathbb{E}(X))$ if $\phi$ is concave.

Now we are ready for the proof. Let us consider the gradient norm w.r.t. $W_V$. For brevity, we use the notations

$$\mathbf{T}_{i}=\mathrm{Score}(\mathbf{X}[i]W_{Q},\mathbf{X}W_{K}),\qquad\mathbf{R}_{i}=\mathrm{Prob}(\mathbf{T}_{i}).$$

Note that

$$\begin{aligned}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{V}}\right\|_{F}^{2}&=\sum_{i=1}^{n}\left\|\frac{\partial\mathbf{Sa}[i]}{\partial W_{V}}\right\|_{F}^{2}=\sum_{i=1}^{n}\left\|\mathbf{R}_{i}\mathbf{X}\otimes I_{d}\right\|_{F}^{2}=\sum_{i=1}^{n}\operatorname{tr}\left(\left((\mathbf{R}_{i}\mathbf{X})\otimes I_{d}\right)\left((\mathbf{X}^{\top}\mathbf{R}_{i}^{\top})\otimes I_{d}\right)\right)\\&=\sum_{i=1}^{n}\operatorname{tr}\left((\mathbf{R}_{i}\mathbf{X}\mathbf{X}^{\top}\mathbf{R}_{i}^{\top})\otimes I_{d}\right)=\sum_{i=1}^{n}d\operatorname{tr}\left(\mathbf{R}_{i}\mathbf{X}\mathbf{X}^{\top}\mathbf{R}_{i}^{\top}\right)=\sum_{i=1}^{n}d\left\|\mathbf{R}_{i}\mathbf{X}\right\|_{F}^{2}=\sum_{i=1}^{n}\sum_{j=1}^{n}d\left\|\mathbf{R}_{i}[j]\mathbf{X}[j]\right\|_{F}^{2}.\end{aligned}$$

However, we have:

$$\mathbf{R}_{i}[j]={\frac{\mathrm{ReLU}\left(\operatorname*{max}(\mathbf{Q}[i],\mathbf{K}[j])\right)}{\sum_{k=1}^{n}\mathrm{ReLU}\left(\operatorname*{max}(\mathbf{Q}[i],\mathbf{K}[k])\right)+\mathbf{C}[i]}}.$$

In order to linearize the max operator, let $e_{ij} \in \{0, 1\}$ be the query-key indicator such that:

$$\operatorname*{max}(\mathbf{Q}[i],\mathbf{K}[j])=e_{ij}\mathbf{Q}[i]+(1-e_{ij})\mathbf{K}[j].$$
Similar to $e_{ij}$, let $d_{ij} \in \{0, 1\}$ represent the ReLU non-linearity by having

$$\mathrm{ReLU}\left(\operatorname*{max}(\mathbf{Q}[i],\mathbf{K}[j])\right)=d_{ij}\operatorname*{max}(\mathbf{Q}[i],\mathbf{K}[j])=d_{ij}\left(e_{ij}\mathbf{Q}[i]+(1-e_{ij})\mathbf{K}[j]\right).$$

By this formulation, we have

$$\begin{aligned}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{V}}\right\|_{F}^{2}&=d\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{d_{ij}e_{ij}\mathbf{Q}[i]\mathbf{Q}^{\top}[i]+d_{ij}(1-e_{ij})\mathbf{K}[j]\mathbf{K}^{\top}[j]}{\left(\sum_{k=1}^{n}d_{ik}e_{ik}\mathbf{Q}[i]+d_{ik}(1-e_{ik})\mathbf{K}[k]+\mathbf{C}[i]\right)^{2}}\left\|\mathbf{X}[j]\right\|_{F}^{2}\\&=d\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{d_{ij}e_{ij}\operatorname{tr}(\mathbf{X}^{\top}[i]\mathbf{X}[i]W_{Q}W_{Q}^{\top})+d_{ij}(1-e_{ij})\operatorname{tr}(\mathbf{X}^{\top}[j]\mathbf{X}[j]W_{K}W_{K}^{\top})}{\left(\sum_{k=1}^{n}d_{ik}e_{ik}\mathbf{X}[i]W_{Q}+d_{ik}(1-e_{ik})\mathbf{X}[k]W_{K}+\mathbf{C}[i]\right)^{2}}\left\|\mathbf{X}[j]\right\|_{F}^{2}\\&\geq\frac{d}{3n}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{d_{ij}e_{ij}\operatorname{tr}(\mathbf{X}^{\top}[i]\mathbf{X}[i]W_{Q}W_{Q}^{\top})+d_{ij}(1-e_{ij})\operatorname{tr}(\mathbf{X}^{\top}[j]\mathbf{X}[j]W_{K}W_{K}^{\top})}{\sum_{k=1}^{n}d_{ik}e_{ik}\operatorname{tr}(\mathbf{X}^{\top}[i]\mathbf{X}[i]W_{Q}W_{Q}^{\top})+d_{ik}(1-e_{ik})\operatorname{tr}(\mathbf{X}^{\top}[k]\mathbf{X}[k]W_{K}W_{K}^{\top})+\mathbf{C}[i]^{2}}\left\|\mathbf{X}[j]\right\|_{F}^{2}.\end{aligned}$$

To express this more concisely, we use the following shorthand notations:

$$\mathbf{A}_{Qi}={\frac{d}{3n}}\sum_{j=1}^{n}d_{ij}e_{ij}\left\|\mathbf{X}[j]\right\|_{F}^{2}\mathbf{X}^{\top}[i]\mathbf{X}[i],\qquad\mathbf{A}_{Ki}={\frac{d}{3n}}\sum_{j=1}^{n}d_{ij}(1-e_{ij})\left\|\mathbf{X}[j]\right\|_{F}^{2}\mathbf{X}^{\top}[j]\mathbf{X}[j],$$

$$\mathbf{B}_{Qi}=\sum_{k=1}^{n}d_{ik}e_{ik}\mathbf{X}^{\top}[i]\mathbf{X}[i],\qquad\mathbf{B}_{Ki}=\sum_{k=1}^{n}d_{ik}(1-e_{ik})\mathbf{X}^{\top}[k]\mathbf{X}[k].$$

By doing this, we derive that:

$$\left\|{\frac{\partial\mathbf{Sa}}{\partial W_{V}}}\right\|_{F}^{2}\geq\sum_{i=1}^{n}{\frac{\operatorname{tr}(\mathbf{A}_{Qi}W_{QQ})+\operatorname{tr}(\mathbf{A}_{Ki}W_{KK})}{\operatorname{tr}(\mathbf{B}_{Qi}W_{QQ})+\operatorname{tr}(\mathbf{B}_{Ki}W_{KK})+\mathbf{C}[i]^{2}}},$$

where $W_{QQ} = W_QW_Q^{\top}$ and $W_{KK} = W_KW_K^{\top}$. We now make use of the concept of the convex envelope, which is the greatest convex underestimator of a given function; in addition, the local minima of the convex envelope are also global minima of the primal function. However, in this proof, we only focus on the convexity properties.
Specifically, consider the non-convex lower bound function

$$\mathbf{LB}_{i}(W_{QQ},W_{KK})=\frac{\operatorname{tr}(\mathbf{A}_{Qi}W_{QQ})+\operatorname{tr}(\mathbf{A}_{Ki}W_{KK})}{\operatorname{tr}(\mathbf{B}_{Qi}W_{QQ})+\operatorname{tr}(\mathbf{B}_{Ki}W_{KK})+\mathbf{C}[i]^{2}}.\tag{17}$$

Based on the convex envelope of bivariate rational functions derived in (Tawarmalani & Sahinidis, 2002), we can convexify the multivariate rational function $\mathbf{LB}_i$ by considering the convexity with respect to each entry one at a time. For convenience, we let $\mathrm{Af}_i(W_{QQ}, W_{KK})$ denote the affine denominator of equation 17. As a result, we can derive a componentwise convex lower bound of $\mathbf{LB}_i$, called $\mathbf{CCLB}_i$, which can be expressed in the form

$$\mathbf{CCLB}_{i}(W_{QQ},W_{KK})=\frac{r_{i}}{\mathrm{Af}_{i}(W_{QQ},W_{KK})}\prod_{s,r,p,q=1}^{d}\mathrm{Cci}_{Qsr}\!\left(\frac{\left(w_{Qsr}+\sqrt{w_{Qsr}^{L}w_{Qsr}^{U}}\right)^{2}}{\left(\sqrt{w_{Qsr}^{L}}+\sqrt{w_{Qsr}^{U}}\right)^{2}}\right)\mathrm{Cci}_{Kpq}\!\left(\frac{\left(w_{Kpq}+\sqrt{w_{Kpq}^{L}w_{Kpq}^{U}}\right)^{2}}{\left(\sqrt{w_{Kpq}^{L}}+\sqrt{w_{Kpq}^{U}}\right)^{2}}\right),$$

where:

- $w_{Qsr}$, $w_{Kpq}$ are the $[s, r]$ and $[p, q]$ entries of $W_{QQ}$ and $W_{KK}$, respectively,
- $w^{L}_{Qsr}$ and $w^{U}_{Qsr}$ are the lower and upper bounds of $w_{Qsr}$,
- $w^{L}_{Kpq}$ and $w^{U}_{Kpq}$ are the lower and upper bounds of $w_{Kpq}$,
- $r_i$ is a constant independent of the bounds,
- $\mathrm{Cci}_{Qsr}(\cdot)$ is the identity function if $w_{Qsr}$ is concave in the $\mathbf{LB}_i$ function, and 1 otherwise,
- $\mathrm{Cci}_{Kpq}(\cdot)$ is defined similarly to $\mathrm{Cci}_{Qsr}(\cdot)$.
With this formulation, we take the expectation of the gradient norm squared and use the Jensen inequality, which leads to:

$$\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{V}}\right\|_{F}^{2}\geq\sum_{i=1}^{n}\frac{r_{i}}{\mathrm{Af}_{i}(\sigma^{2}I_{n},\sigma^{2}I_{n})}\prod_{s,r,p,q=1}^{d}\mathrm{Cci}_{Qsr}\!\left(\frac{\left(\sigma^{2}\delta_{sr}+\sqrt{w_{Qsr}^{L}w_{Qsr}^{U}}\right)^{2}}{\left(\sqrt{w_{Qsr}^{L}}+\sqrt{w_{Qsr}^{U}}\right)^{2}}\right)\mathrm{Cci}_{Kpq}\!\left(\frac{\left(\sigma^{2}\delta_{pq}+\sqrt{w_{Kpq}^{L}w_{Kpq}^{U}}\right)^{2}}{\left(\sqrt{w_{Kpq}^{L}}+\sqrt{w_{Kpq}^{U}}\right)^{2}}\right)\tag{18}$$

$$\triangleq f_{V}(W_{QQ}^{L},W_{QQ}^{U},W_{KK}^{L},W_{KK}^{U})=\Omega(n),\tag{19}$$

where $\delta_{sr} = 1$ if $s = r$ and 0 otherwise. Now let us consider the gradient norm with respect to the query:

$$\left\|{\frac{\partial\mathbf{Sa}}{\partial W_{Q}}}\right\|_{F}^{2}=\sum_{i=1}^{n}\left\|{\frac{\partial\mathbf{Sa}[i]}{\partial W_{Q}}}\right\|_{F}^{2}.$$

Consider the gradient norm for each $i \in [n]$. By applying the chain rule, we have:

$$\left\|\frac{\partial\mathbf{Sa}[i]}{\partial W_{Q}}\right\|_{F}^{2}=\left\|\frac{\partial\mathbf{Sa}[i]}{\partial\mathbf{R}_{i}}\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\frac{\partial\mathbf{T}_{i}}{\partial\mathbf{Q}[i]}\frac{\partial\mathbf{Q}[i]}{\partial W_{Q}}\right\|_{F}^{2}=\left\|W_{V}^{\top}\mathbf{X}^{\top}\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\mathbf{e}_{i}\mathbf{X}[i]\right\|_{F}^{2},$$

where $\mathbf{e}_i = (e_{ij})_{j\in[n]}$ is the query-key binary gate vector. Continuing the formulation,

$$\begin{aligned}\mathbb{E}\left\|\frac{\partial\mathbf{Sa}[i]}{\partial W_{Q}}\right\|_{F}^{2}&=\mathbb{E}\operatorname{tr}\left(W_{V}W_{V}^{\top}\mathbf{X}^{\top}\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\mathbf{e}_{i}\mathbf{X}[i]\mathbf{X}^{\top}[i]\mathbf{e}_{i}^{\top}\frac{\partial\mathbf{R}_{i}^{\top}}{\partial\mathbf{T}_{i}^{\top}}\mathbf{X}\right)\\&=\sigma^{2}\left\|\mathbf{X}[i]\right\|_{F}^{2}\operatorname{tr}\left(\mathbf{X}\mathbf{X}^{\top}\,\mathbb{E}\!\left[\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\mathbf{e}_{i}\mathbf{e}_{i}^{\top}\frac{\partial\mathbf{R}_{i}^{\top}}{\partial\mathbf{T}_{i}^{\top}}\right]\right)\\&=\sigma^{2}\left\|\mathbf{X}[i]\right\|_{F}^{2}\,\mathbb{E}\left\|\mathbf{X}^{\top}\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\mathbf{e}_{i}\right\|_{F}^{2}=\sum_{k=1}^{d}\sigma^{2}\left\|\mathbf{X}[i]\right\|_{F}^{2}\,\mathbb{E}\left\|\mathbf{X}[:,k]^{\top}\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\mathbf{e}_{i}\right\|_{F}^{2}.\end{aligned}$$

Let $W_D = W_Q - W_K$ and $\mathbf{Z}_{i}=\frac{\partial\mathbf{R}_{i}}{\partial\mathbf{T}_{i}}\mathbf{e}_{i}$; we have:
$$\mathbf{Z}_{i}[j]={\frac{d_{ij}e_{ij}\left(\sum_{\substack{k=1\\k\neq j}}^{n}d_{ik}\mathbf{T}_{i}[k]-d_{ik}\mathbf{T}_{i}[j]\right)}{\left(\sum_{k=1}^{n}d_{ik}\mathbf{T}_{i}[k]+\mathbf{C}[i]\right)^{2}}}.$$

And since $\mathbf{T}_{i}[k]=e_{ik}\mathbf{Q}[i]+(1-e_{ik})\mathbf{K}[k]=e_{ik}\mathbf{X}[i]W_{D}+\left((1-e_{ik})\mathbf{X}[k]+e_{ik}\mathbf{X}[i]\right)W_{K}$, we derive that:

$$\begin{aligned}\left\|\mathbf{X}[:,k]^{\top}\mathbf{Z}_{i}\right\|_{F}^{2}&=\frac{\sum_{j=1}^{n}\mathbf{X}[j,k]^{2}d_{ij}e_{ij}\left(\sum_{\substack{k=1\\k\neq j}}^{n}d_{ik}(1-e_{ik})\right)^{2}\left(\mathbf{X}[i]W_{D}\right)^{2}}{\left(\sum_{k=1}^{n}d_{ik}e_{ik}\mathbf{X}[i]W_{D}+d_{ik}\left((1-e_{ik})\mathbf{X}[k]+e_{ik}\mathbf{X}[i]\right)W_{K}+\mathbf{C}[i]\right)^{4}}\\&\geq\frac{1}{9n^{2}}\,\frac{\sum_{j=1}^{n}\mathbf{X}[j,k]^{2}d_{ij}e_{ij}\left(\sum_{\substack{k=1\\k\neq j}}^{n}d_{ik}(1-e_{ik})\right)^{2}\left(\mathbf{X}[i]W_{D}\right)^{2}}{\left(\sum_{k=1}^{n}d_{ik}e_{ik}\left(\mathbf{X}[i]W_{D}\right)^{2}+d_{ik}\left(\left((1-e_{ik})\mathbf{X}[k]+e_{ik}\mathbf{X}[i]\right)W_{K}\right)^{2}+\mathbf{C}[i]^{2}\right)^{2}}.\end{aligned}$$

Now let $W_{DD} = W_DW_D^{\top}$ and confine $W_{DD}$ to a hypercube $(W^{L}_{DD}, W^{U}_{DD})$, similarly to $W_{QQ}$ and $W_{KK}$. We have

$$\left(\mathbf{X}[i]W_{D}\right)^{2}=\operatorname{tr}(\mathbf{X}[i]^{\top}\mathbf{X}[i]W_{D}W_{D}^{\top})\leq\left\|\mathbf{X}[i]\right\|_{F}^{2}\left\|W_{DD}\right\|_{F}\leq\left\|\mathbf{X}[i]\right\|_{F}^{2}\,C\!\left(W_{DD}^{L},W_{DD}^{U}\right),$$

where $C(\cdot, \cdot)$ is some scalar-valued positive function depending only on the bounds of $W_{DD}$. By applying similar evaluations for $W_{KK}$ and following the techniques used in the analysis of the $W_V$ gradient norm, it is sufficient to deduce that, for some bound-independent constant matrices $A_{Di}$, $B_{Di}$ and $B_{Ki}$, we have:

$$\left\|\mathbf{X}[:,k]^{\top}\mathbf{Z}_{i}\right\|_{F}^{2}\geq C\left(W_{DD}^{L},W_{DD}^{U},W_{KK}^{L},W_{KK}^{U}\right){\frac{\operatorname{tr}(A_{Di}W_{DD})}{\operatorname{tr}(B_{Di}W_{DD})+\operatorname{tr}(B_{Ki}W_{KK})+\mathbf{C}[i]^{2}}}.$$

The rest of the proof from this point is analogous to the previous part. Therefore, we deduce another adaptive lower bound for the gradient norm of $W_Q$: there exists a non-negative scalar function $f_Q$ such that:

$$\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{Q}}\right\|_{F}^{2}\geq f_{Q}\left(W_{DD}^{L},W_{DD}^{U},W_{KK}^{L},W_{KK}^{U}\right)=\Omega(1).\tag{20}$$

Regarding the upper bounds, we obtain them by acquiring sufficiently large universal constants.
Simply by the norm product inequality, we have:

$$\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{V}}\right\|_{F}^{2}\leq d\max_{i}\|\mathbf{X}[i]\|_{F}^{2}\sum_{i=1}^{n}\|\mathbf{R}_{i}\|_{F}^{2}\leq dn\max_{i}\|\mathbf{X}[i]\|_{F}^{2}\triangleq C_{V}=O(n).\tag{21}$$

Combining Eq. 19 and Eq. 21, the complexity of $\mathbb{E}\left\|{\frac{\partial\mathbf{Sa}}{\partial W_{V}}}\right\|_{F}^{2}$ is derived to be $\Theta(n)$. Also,

$$\mathbb{E}\left\|{\frac{\partial\mathbf{Sa}}{\partial W_{Q}}}\right\|_{F}^{2}\leq\sum_{i=1}^{n}\sum_{k=1}^{d}\sigma^{2}\|\mathbf{X}[i]\|_{F}^{2}\,\mathbb{E}\left\|\mathbf{X}[:,k]^{\top}\mathbf{Z}_{i}\right\|_{F}^{2}.$$

Since $W_D = W_Q - W_K$, we get $\mathrm{Var}(W_D) = \mathrm{Var}(W_Q) + \mathrm{Var}(W_K) = 2\sigma^2 I_n$. Alongside this, notice that the indicators $d_{ij}$, $i, j \in [n]$, can be treated as binary gates that filter out negative values; therefore:

$$\begin{aligned}\mathbb{E}\left\|\mathbf{X}[:,k]^{\top}\mathbf{Z}_{i}\right\|_{F}^{2}&\leq\mathbb{E}\,\frac{\sum_{j=1}^{n}\mathbf{X}[j,k]^{2}d_{ij}e_{ij}\left(\sum_{\substack{k=1\\k\neq j}}^{n}d_{ik}\left(1-e_{ik}\right)\right)^{2}\left(\mathbf{X}[i]W_{D}\right)^{2}}{\left(\sum_{k=1}^{n}d_{ik}e_{ik}\mathbf{X}[i]W_{D}+\mathbf{C}[i]\right)^{4}}\\&\leq\mathbb{E}\,\frac{\sum_{j=1}^{n}\mathbf{X}[j,k]^{2}d_{ij}e_{ij}\left(\sum_{\substack{k=1\\k\neq j}}^{n}d_{ik}\left(1-e_{ik}\right)\right)^{2}}{\left(\sum_{k=1}^{n}d_{ik}e_{ik}\mathbf{X}[i]W_{D}+\mathbf{C}[i]\right)^{2}\left(\frac{\mathbf{C}[i]}{\mathbf{X}[i]W_{D}}+\sum_{k=1}^{n}d_{ik}e_{ik}\right)^{2}}\\&\leq\mathbb{E}\,\frac{\sum_{j=1}^{n}\mathbf{X}[j,k]^{2}d_{ij}e_{ij}\left(\sum_{\substack{k=1\\k\neq j}}^{n}d_{ik}\left(1-e_{ik}\right)\right)^{2}}{4\,\mathbf{C}[i]^{2}\left(\sum_{k=1}^{n}d_{ik}e_{ik}\right)^{4}}.\end{aligned}$$

Finally, the global upper bound for the gradient norm squared of $W_{Q}$ is obtained:

$$\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{Q}}\right\|_{F}^{2}\leq\sum_{i=1}^{n}\sum_{k=1}^{d}\sigma^{2}\|\mathbf{X}[i]\|_{F}^{2}\,\mathbb{E}\left(\frac{\sum_{j=1}^{n}\mathbf{X}[j,k]^{2}d_{ij}e_{ij}\left(\sum_{k=1}^{n}d_{ik}\left(1-e_{ik}\right)\right)^{2}}{4\mathbf{C}[i]^{2}\left(\sum_{k=1}^{n}d_{ik}e_{ik}\right)^{4}}\right)\triangleq C_{Q}=O(1).\tag{22}$$

From Eq. 20 and Eq. 22, we conclude that the complexity of $\mathbb{E}\left\|\frac{\partial\mathbf{Sa}}{\partial W_{Q}}\right\|_{F}^{2}$ is $\Theta(1)$.
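The Kronecker-product step $\|(\mathbf{R}_i\mathbf{X})\otimes I_d\|_F^2 = d\,\|\mathbf{R}_i\mathbf{X}\|_F^2$ used at the start of the proof can be checked numerically. The following sketch builds $\mathbf{R}$ with the Prob function from Theorem B.5 (random scores stand in for $\mathrm{Score}(\mathbf{X}W_Q, \mathbf{X}W_K)$; the dimensions and the leak constant $c$ are arbitrary choices of ours) and compares the explicit Jacobian norm against the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
X = rng.normal(size=(n, d))

def prob(x, c):
    # Prob(x, c)[i] = ReLU(x[i]) / (sum_k ReLU(x[k]) + c), as in Theorem B.5
    r = np.maximum(x, 0.0)
    return r / (r.sum() + c)

scores = rng.normal(size=(n, n))  # stand-in for Score(X W_Q, X W_K)
R = np.stack([prob(scores[i], c=0.5) for i in range(n)])

# Row i of Sa is R[i] X W_V + alpha X[i]; its Jacobian w.r.t. W_V is
# (R[i] X) kron I_d, hence ||dSa/dW_V||_F^2 = d * ||R X||_F^2.
lhs = sum(np.linalg.norm(np.kron(R[i] @ X, np.eye(d))) ** 2 for i in range(n))
rhs = d * np.linalg.norm(R @ X) ** 2
assert np.isclose(lhs, rhs)
```

The identity holds for any row matrix $\mathbf{R}$, which is why the proof can pull the factor $d$ out of the trace.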
## C Datasets

The ModelNet40 dataset (Wu et al., 2015) consists of 12,311 pre-aligned shapes divided into 40 classes, where the train and test sets consist of 9,843 and 2,468 instances, respectively. Unlike previous CAD datasets (Shilane et al., 2004), ModelNet40 is the pioneering large-scale 3D CAD dataset that is diverse in terms of both classes and samples per class.

The ShapeNetPart dataset consists of 16,881 pre-aligned 3D shapes from 16 categories and is part of a larger dataset, ShapeNetCore (51,300 3D models) (Chang et al., 2015). The 3D shapes in the ShapeNetPart dataset are annotated with 50 segmentation parts in total, representing virtual real-world 3D semantic models. ShapeNet provides a diverse variety of shape annotations and corresponding shapes. The full ShapeNet dataset contains a multitude of annotations, including upright and front orientation vectors, parts and keypoints, and shape symmetries, but we only consider the part-segmentation task in our work.

The Long Range Arena benchmark (LRA) (Tay et al., 2021) is a composition of five tasks: ListOps (Nangia & Bowman, 2018), IMDB review (Maas et al., 2011), ACL Anthology Network (Radev et al., 2009), grayscaled CIFAR-10 (Krizhevsky & Hinton, 2009), and Pathfinder (Linsley et al., 2018). All five are classification tasks, and they feature very long sequences: ListOps (2,048 tokens), IMDB review (1,024 tokens), ACL Anthology Network (4,096 tokens), grayscaled CIFAR-10 (1,024 tokens), and Pathfinder (1,024 tokens). All five tasks involve tackling long-range data structures and manifold categorizations, challenging networks' generalization ability as well as their memory and time efficiency.

We used three datasets from the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022b): Peptides-func (Singh et al., 2015), Peptides-struct (Singh et al., 2015), and PascalVOC-sp (Everingham et al., 2010).
Peptides-func is a Graph-Multiclass-Classification task featuring graphs with an average of 150 nodes. Similarly, Peptides-struct is a Graph-Regression task with the same average number of nodes. PascalVOC-sp is a Node-Classification task with an average of 479 nodes per graph. While the Peptide datasets measure the long-range relationship deduction of models, the PascalVOC-sp dataset measures local pattern recognition through the Node-Classification task. This combination provides a sufficiently broad evaluation of efficient transformers' performance on the graph data modality.

## D Reproducibility

This section provides information to assist the reproducibility of our work and explains the technical details of our experimental code. In our work, we conducted the following experiments:

- Experiment to measure the performance of our model in Object-Classification on ModelNet40 (1.1)
- Experiment to measure the performance of our model in Semantic-Segmentation on ShapeNetPart (1.2)
- Experiment to measure the performance of our model in Object-Classification on ModelNet40 under the rotational-invariance constraint (1.3)
- Experiment to measure the performance of our model in Semantic-Segmentation on ShapeNetPart under the rotational-invariance constraint (1.4)
- Experiment to measure the performance of our model in Graph-Multiclass-Classification on Peptides-func (2.1)
- Experiment to measure the performance of our model in Graph-Regression on Peptides-struct (2.2)
- Experiment to measure the performance of our model in Graph-Node-Classification on PascalVOC-sp (2.3)
- Experiment to measure the performance of our model in LRA-Benchmark-ListOps (3.1)
- Experiment to measure the performance of our model in LRA-Benchmark-Text (3.2)
- Experiment to measure the performance of our model in LRA-Benchmark-Retrieval (3.3)
- Experiment to measure the performance of our model in LRA-Benchmark-Image (3.4)
- Experiment to measure the performance of our model in
LRA-Benchmark-Pathfinder (3.5)
- Experiment to show the effectiveness of our attention formulation on ModelNet40 (4.1)
- Experiment to analyze the performance-efficiency tradeoff on ModelNet40 (4.2)
- Experiment to measure model efficiency (4.3)
- Experiment to investigate the rank injection phenomenon (5)

## D.1 Experiment Details

This subsection provides information on the experiment settings of both our main experiments and our ablation studies. All experiments on ModelNet40 (1.1, 1.3, 4.1, 4.2) use a uniform point cloud sampling technique to sample 1,024 points as input tokens. Experiments on ShapeNetPart do not use the uniform sampling technique; they randomly sample 2,500 points. All point cloud experiments (1.1, 1.2, 1.3, 1.4, 4.1, 4.2) under the rotational-invariance constraint use both point cloud coordinates and surface normal vectors. Under this constraint, the input token values are replaced by tensors filled with ones, and the relative positional embedding matrix is a squared Euclidean distance matrix of the point coordinates concatenated with the pairwise dot products of the surface normal vectors; these two features are rotation-invariant. All experiments on the LRG-Benchmark (2.1, 2.2, 2.3) use the graph adjacency matrix as relative information between tokens. The subscripted adjacency matrices are fed into each layer based on the output of the sampling module. All experiments on the LRA-Benchmark (3.1, 3.2, 3.3, 3.4, 3.5) use a learnable linear combination of Sinusoid1D as relative positional encoding; the sinusoid vectors are linearly combined, then a pairwise Hadamard product is used to construct the relative matrix between tokens.
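A minimal NumPy sketch of this construction: a Sinusoid1D matrix followed by the pairwise Hadamard product between positional tokens. The learnable linear combination, the sampler, and the per-head projection are omitted, and all sizes are illustrative.

```python
import numpy as np

def sinusoid1d(n, d):
    # S[i, 2j] = sin(i / 10000^(j/d)), S[i, 2j+1] = cos(i / 10000^(j/d))
    S = np.zeros((n, d))
    pos = np.arange(n)[:, None]
    j = np.arange(d // 2)[None, :]          # pair index j
    S[:, 0::2] = np.sin(pos / 10000 ** (j / d))
    S[:, 1::2] = np.cos(pos / 10000 ** (j / d))
    return S

def pairwise_hadamard(S):
    # A[i, j, k] = S[i, k] * S[j, k]: relative tensor of shape (n, n, d)
    return S[:, None, :] * S[None, :, :]

S = sinusoid1d(n=6, d=4)
A = pairwise_hadamard(S)
assert A.shape == (6, 6, 4)
assert np.isclose(A[1, 2, 0], S[1, 0] * S[2, 0])
```

In the actual model, the sinusoid matrix is first passed through the sampling module, so the second axis of the relative tensor has the sampled length rather than the full sequence length.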
This method uses a sinusoidal matrix S of shape n × d, defined as follows:

$$\mathbb{S}[i,2j]=\sin\left(\frac{i}{10000^{j/d}}\right),\tag{23}$$

$$\mathbb{S}[i,2j+1]=\cos\left(\frac{i}{10000^{j/d}}\right),\tag{24}$$

where n and d are the number of tokens and the number of dimensions, respectively. The sinusoid matrix is fed into the sampler, resulting in matrices of sampled positional tokens Stop and Srand. The relative position matrices are constructed by a pairwise Hadamard product, which results in tensors Atop and Arand of shape (n, n′, d):

$$\mathbb{A}[i,j,k]=\mathbb{S}[i,k]*\mathbb{S}[j,k]\tag{25}$$

A learnable linear layer is then applied to A, transforming it into a tensor of shape (n, n′, h), where h is the number of attention heads. The remaining steps regarding relative positional encoding can be found in Section 4.3, where we describe the method to generate XR.

## D.2 Implementation Details

This subsection provides information on how we organize and implement our experimental software. All the experiments involving model training use TorchCompile to compile the entire model into one optimized kernel. This reduces the computational cost of our models by around 40%. Our implementation for the graph data modality is not yet able to support very large graphs due to the usage of subscripted adjacency matrices. This could be fixed in the future by implementing a direct sparse operator that converts edge lists into subscripted adjacency matrices, ensuring linear runtime. Our code is split into three modules:

- Data-related Modules: data loaders and data augmentation modules for all of our experimented tasks.
- Layers: code for our differentiable sampling-without-replacement module, the SFT-layer with various configurations, and the task heads.
The code for our differentiable sampling-without-replacement module is separated from the other code; therefore, it is possible to plug our sampling module into more optimized transformer implementations, such as the PyTorch one.
- Models: methods to parse point cloud models, sequential models, and graph models.

The experiments in which we calculate the FLOPs of submodules (4.3) use the Calflop Python library. In this experiment, we disable TorchCompile.

## D.3 Training Strategies

This subsection provides information on how we train our models across tasks, including the choice of optimizers, optimizer hyperparameters, learning rate schedulers, and data augmentations. All of our models use the AdamW optimizer, the StepLR learning rate scheduler, max gradient norm clipping, and Untuned Linear Learning-Rate Warmup (Ma & Yarats, 2021). Therefore, the training-strategy-related hyperparameters are:

- Optimizer
- Number of Epochs
- Batch Size
- Learning Rate
- L2-Weight-Decay
- Learning Rate Scheduler
- Step Size
- Decay Rate
- Max Gradient Norm Clipping
- Max Gradient Norm

Point cloud tasks use data preprocessing and data augmentation techniques as follows:

- Data preprocessing: All of the point clouds are centralized
- Data preprocessing: The point cloud size is normalized by dividing the point coordinates by the distance from the point cloud centroid to the farthest point
- Data augmentation: Randomly change the size of point clouds by multiplying the point cloud coordinates with a uniformly sampled scalar in the range [0.6, 1.4]

Peptide tasks use data preprocessing as follows:

- Data preprocessing: Node features go through a 17-class one-hot vectorization
- Data preprocessing: Edge features are concatenated with a tensor of ones

Here, we list the training hyperparameters for each experiment in Table 8.

## D.4 Architectural Specification

This subsection provides information on the architectural hyperparameters of models across tasks.
All of the models we experimented with use a feedforward network expansion rate of 4 (this implies the FFN module in SFT transforms d-dimensional tokens into 4d-dimensional tokens and then back to d-dimensional tokens). To combat overfitting, we deviate slightly from the vanilla transformer: we introduce dropout layers with different dropout rates. A dropout layer for the Q and K vectors is called attention score dropout, a dropout layer after the multi-head attention and the feedforward network is called token embedding dropout, and a dropout layer after the feedforward network's dimension-expansion linear layer is called feedforward dropout. The classification head used on the ModelNet40 dataset is slightly different: it uses our differentiable sampling to sample 512 tokens and then uses a maximum aggregation to aggregate information. All of the classification heads used in our experiments, except the one on ModelNet40, use a linear layer with softmax aggregation. Table 9 shows the architectural hyperparameters of the experiments.

| Experiment | 1.1 | 1.2 | 1.3 | 1.4 | 2.1 | 2.2 | 2.3 | 3.1 | 3.2 | 3.3 | 3.4 | 3.5 | 4.1 | 4.2 | 4.3 | 5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of Epochs | 600 | 600 | 1000 | 1000 | 600 | 600 | 600 | 50 | 200 | 80 | 50 | 200 | 600 | 600 | 600 | 1000 |
| Batch Size | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 64 | 64 | 32 | 64 | 64 | 32 | 32 | 32 | 32 |
| Learning Rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.0008 | 0.0003 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| L2 Weight Decay | 0.1 | 0.1 | 0.1 | 0.1 | 0.17 | 0.12 | 0.1 | 0.01 | 0.01 | 0.01 | 0.02 | 0.01 | 0.1 | 0.1 | 0.1 | 0.1 |
| Step Size | 30 | 30 | 30 | 30 | 30 | 30 | 30 | 20 | 20 | 20 | 20 | 20 | 30 | 30 | 30 | 30 |
| Decay Rate | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.8 | 0.8 | 0.8 | 0.8 |
| Max Gradient Norm | 1.0 | 1.0 | 1.0 | 1.0 | 16.0 | 16.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |

Table 8: Training Hyperparameters of Experiments

| Experiment | 1.1 | 1.2 | 1.3 | 1.4 | 2.1 | 2.2 | 2.3 | 3.1 | 3.2 | 3.3 | 3.4 | 3.5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of Sampled Tokens | 256 | 256 | 256 | 256 | 75 | 75 | 150 | 256 | 256 | 256 | 256 | 256 |
| Drop Token Probability | 0.5 | 0.2 | 0.5 | 0.2 | 0.3 | 0.3 | 0.3 | 0.0 | 0.3 | 0.0 | 0.3 | 0.0 |
| Token Embedding Size | 256 | 256 | 256 | 256 | 144 | 144 | 160 | 128 | 128 | 128 | 128 | 128 |
| Number of Attention Heads | 16 | 16 | 16 | 16 | 16 | 16 | 16 | 4 | 4 | 4 | 4 | 4 |
| Attention Score Dropout | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.0 | 0.1 | 0.0 | 0.1 | 0.0 |
| Token Embedding Dropout | 0.1 | 0.1 | 0.1 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.1 | 0.1 | 0.1 |
| Feedforward Dropout | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
| Normalization Layer Position | Post | Post | Post | Post | Post | Post | Post | Pre | Post | Post | Pre | Pre |
| Number of Layers | 8 | 8 | 8 | 8 | 4 | 4 | 4 | 6 | 4 | 5 | 8 | 6 |

Table 9: Architectural Hyperparameters of Experiments
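To make the two classification-head variants described above concrete, here is a minimal NumPy sketch; the differentiable sampler is replaced by a plain slice as a stand-in, and all function names are ours, not from the codebase:

```python
import numpy as np

def max_aggregation_head(tokens: np.ndarray, W: np.ndarray, k: int = 512) -> np.ndarray:
    """ModelNet40-style head: keep k tokens, max-aggregate over them, then classify.
    The paper samples the k tokens with its differentiable sampler; here we simply
    take the first k as a stand-in."""
    pooled = tokens[:k].max(axis=0)                  # (d,) max over the kept tokens
    logits = pooled @ W                              # (num_classes,)
    return np.exp(logits) / np.exp(logits).sum()     # softmax over classes

n, d, num_classes = 1024, 256, 40
rng = np.random.default_rng(0)
probs = max_aggregation_head(rng.normal(size=(n, d)),
                             rng.normal(size=(d, num_classes)) * 0.01)
print(probs.shape)  # (40,)
```

The heads used on the other tasks skip the max aggregation and apply the linear layer with softmax directly.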
Review 1:

Summary:
Transformer attention efficiency is a long-standing problem which the current paper also tries to solve. The authors propose three new modifications to make attention linear (or n log n, where n is the sequence length): 1) a modification of attention to sub-sample keys (which gives sparse attention) by first sorting the keys based on a scoring function and then taking the top-k: here the authors propose a scheme for this selection so that gradients through the sampling can be computed to properly propagate through the top keys; 2) a non-linearity instead of softmax - in combination with the sparsity this provides a pseudo-convexity property for the new attention block, which should improve training stability (ease of hyper-parameter search) and speed up convergence; 3) a new relative positional embedding scheme, which now accepts graphs, point clouds, and long sequences for the transformer model. The authors show the efficiency of the proposed model by proving the pseudo-convexity property and a bound on the gradient norms, and further demonstrate results with an empirical analysis on point clouds, long-range graph benchmarks, and long-range arena benchmarks. Empirical results show that the model performs among the top models (but does not outperform the SOTA in each domain).

Strengths and Weaknesses:
**Strengths**
- interesting idea/trick on making top-k sampling without replacement differentiable (not sure though how novel this is, but the idea/technique is definitely worth sharing / discussing)
- construction of the transformer block to be pseudo-convex, as pseudo-convexity is a nice property to have
- theoretical and empirical results, which include benchmarking across several domains

**Weaknesses**
See requested changes for all details of every weakness.
- the paper writing needs a lot of improvements: a lot of inconsistency between naming; math notation is very unclear and inconsistent - it is really hard to read and parse all theoretical formulations (took me a lot of time, though in most cases I got the idea / what authors mean but formality is really ambiguous).
- I think the authors didn't demonstrate how the new formulation of attention prevents transformer training difficulty. Just saying the model is pseudo-convex now - I think is not enough. Moreover, it is not clear if the whole model is pseudo-convex, as I can have some other blocks before or after the transformer blocks. Also, pseudo-convexity restricts the model a lot, and we see in the empirical results that the model cannot outperform some SOTA models; then I don't get the point of introducing so restrictive a model if in the end we restrict its expressiveness.
- Pseudo-convex property in Def 4.1 is wrongly given thus the proof on pseudo-convexity is questionable.
- There is absence of many ablations:
  - ablate the introduced relative positional embedding - why do we need a new one? what happens if we use RoPE or a relative sinusoidal one or a relative learnable one? even for the introduced relative positional embedding I didn't get whether it is learnable or sinusoidal? Why do we need such a modification of the positional embedding?
  - what happens if we don't use $C$? in that case we lose pseudo-convexity (if I assume that all derivations in the paper are correct), but how about performance and model capacity then?
  - no analysis on the robustness to hyper-parameters and comparison between deep / non-deep models, pre-layernorm / post-layernorm - as they are the main source of instabilities and issues of transformer training.
  - what is the performance / speed if no sampling of keys is happening (assuming that we have full attention)? do we outperform standard attention?
- comparison with standard attention on regular tasks with no keys subsampling to see that the idea on the transformer modification itself works (here comparison should be fair in the sense of same positional embedding e.g.) - If I missed it, then it should be clearly stated / expanded - why do we need introduced modifications compared to prior works? I got about new variant of differentiable sampling, but why we need pseudo-convexity (as we restrict the family too much then) and new variant of relative positional embedding (as no comparison or deeper analysis are given) are not clear. Requested Changes: > the paper writing needs a lot of improvements: a lot of inconsistency between naming; math notation is very unclear and inconsistent - it is really hard to read and parse all theoretical formulations (took me a lot of time, though in most cases I got the idea / what authors mean but formality is really ambiguous). - be consistent with usage: "foundation" or "foundational", camel case or lower case for "Transformers", "relational" positional embedding is not used in the literature - it should be "relative" - be consistent on usage too, "hyperparameters" or "hyper-parameters", spaces between words and references, references with citep / citet depending on the case (right now it is inconsistent) - typos: - "an empirical analysis of the ability of our transformer" - ability on what? - "messing passing" -> "message passing" - "in the Appendix" -> "in Appendix" - "one of the first transformer" -> "one of the first transformers" - Linear(X) - is not clear as then it is map of vector to vector, and here input is tensor. - page 4 "PE" is not introduced I believe - page 9 "optimzations" -> "optimizations" - page 11 "attention from between"? - introduction - why there is no references to vision and speech e.g. as applications for transformers to sequences? 
- "sparsity patters" - I think variants of linear attention are not necessary sparsity patterns, it is just the linearization of the attention operation, so would be nice to discuss this too. - Figure 1: randomization is not clear from the figure, the color for rows doesn't correspond to the color of "randomization" word. Also $C$ is incorrectly shown as it is based on the $V$ but not on $X$. - "over-centralized attention" - what is it? I don't know this definition. - Sec 4.1. reference to Section .. - missed section index. - Math notation is very unclear: - tensors, vectors, scalars are not consistently marked with bolt / not bolt. Indexing is not clear for the vectors / tensors. - $X = (x_1, .., x_n)\in \mathbb{R}^{n\times d}$ -> $X = (x_1, .., x_n)^T\in \mathbb{R}^{n\times d}$ - "Sam", "S" - why different notation in Sec 4.2? also $nm$, $km$ - add \times. - Eq 2 is unclear, as not clear what onehot exactly doing and what is $x_i$ here as it is not bolt (I know what eq 2 is doing, but formally is it very poorly notated). - Usage of $*$ between matrices / tensors for matrix multiplication is bad notation, it is a convolution then op, not matmul. - Algorithm 1 row 7 - Sample is not defined - Eq 3 where are vectors, scalars? what is $\tau$? do you set temperature to infinity to be able to sample only one vector instead of linear combination? - Eq 4 - how do you select only one vector and not linear combination? - Eq. 8 - proof / discussion how you got it? - Algorithm 2 row 3 - how do you concat vector and $\mathbf{S}$ which is scalar? All notations between scalars and vectors are messed up here. Please either use pytorch pseudo-code or do correct math notations / ops here. It is really better to have pytorch pseudo-code, it took me a while to parse what is written. $\mathcal{S}_1[\mathbf{S}]$ notation is also very unclear. Row 16 - do we return linear combination of vectors? otherwise how do we get that only top vector is selected from the softplus op? 
Why do we need also Gumbel in row 2? - Eq 9 - again notation on tensors, vectors, index of the tensors are poor. - What is $max(\mathbf{X_i}, \mathbf{Y_j})$? is it max per coordinate? - What is $\mathbf{X_r}$? learnable? sinusoidal? - "It is well-known that transformers are hard to optimize" - the reference here is about post-LayerNorm models and very deep models, however in general pre-LayerNorm and smaller models are fine to optimize. In era of LLMs we know now (let me know if you need references) that bigger models (not deeper) it is also hard and a lot of instabilities occurs. I would reference here to more works, which mention these later issues. > Pseudo-convex property in Def 4.1 is wrongly given thus the proof on pseudo-convexity is questionable. It should be at the end of the definition $f(x_2) \geq f(x_1)$, while in the paper it is written $f(x_1) \geq f(x_2)$. The latter is wrong. The same definition is used in the proof in the Appendix, thus the correctness of the proof is under question now as incorrect definition is used. > There is absence of many ablations: As I specified, it will be stronger paper if proper ablations are done to show why we need every component or what better properties we have for the model as soon as it doesn't outperform SOTA models. **Other questions** - why do you consider post-layernorm models? - what sample rate is used in all reported empirical results? - why in the end model doesn't outperform prior works? what is then the point of this new architecture? - "The higher convergence rate of our model ..." - where and how do you see higher convergence rate? - Figure 5: what is the difference between "softmax dot" and "vanilla"? is it positional embedding? - "vanilla transformer does not support relational information" - what do you mean exactly? I can add relpos into transformer, moreover it is learning it anyway. - Table 6: what are numbers for 100% sampling rate? what if you use relpos for vanilla transformer? 
- page 38: "learnable linear combination of sinusoid1D" - what is that? could you give exact formulation what positional embedding is used? - page 39: "we introduce dropout" - can you confirm if the same is used for other model? also why your model overfits and you need this extra dropout? could it be that your model is performing better only because of this dropout? **Note**: I did a pass over Appendix with the proof and only didn't do deep reading for Appendix A3 as found the error (possible?) in the definition and thus proof for the earlier part. Broader Impact Concerns: Overall the work is theoretical and empirical regarding the basic modeling in the machine learning. As other models it can lead to any outcomes in the future, but with the current formulation it is more about core ML. Authors listed limitations, I would add that broader testing of the system is needed across domains and applications. ================================================== Review 2: Summary: The paper addresses two major challenges in the use of transformers for sequence processing: the high computational cost of self-attention and the complexity of training transformers. To tackle these issues, the authors propose two innovative mechanisms: a neural-guided down-sampling method for self-attention and a new attention non-linearity that is both linear-scaled and convex. The evaluation of the proposed method was based on point clouds, graphs, and long-range sequences. The source code is provided to help the objective evaluation. Strengths and Weaknesses: Strengths: - The paper is well-written. The organization of the paper is clear. Most technique details are easy to follow. - There are concrete theoretical analyses of the proposed method. 
Weakness:
- My main concern with the paper is the lack of discussion on applying the proposed method in pretraining or fine-tuning the long-context reasoning ability of large language models, which seems to be the most closely matched scenario for the proposed method. The lack of this technical discussion and the corresponding evaluation leaves some logical flaws in the motivation of this paper.

Requested Changes:
Add additional experiments about the performance of fine-tuning the long-context reasoning ability of large language models.

Broader Impact Concerns: Not applicable.

==================================================

Review 3:

Summary:
The authors introduced a set of modifications to the transformer architecture that includes (1) a sampling method for self-attention that significantly reduces the computational complexity in terms of the number of tokens and (2) a new non-linearity that leads to convex characteristics. The new non-linearity has some supporting theoretical results.

Strengths and Weaknesses:
I have quite a few clarifications I would like to resolve with the authors before returning to summarizing the strengths and weaknesses. In particular, since I'm not an expert on the training of transformers, I will mainly focus on evaluating the theoretical component of this work for its correctness and clarity.

Requested Changes:
I will use this section to ask questions, since I will also request many of the clarifications to be included in the paper as well.
1. Overall, I would like the authors to define all mathematical notations used precisely. This may feel pedantic, but I am having a lot of trouble understanding key components of this work at the moment. I will list the ones that most directly confused me:
1a. Is the $\*$ symbol used for matrix multiplication? In Line 15 of Algorithm 2, is $\*$ here also matrix multiplication?
1b. What is the onehot function? What is lower case $x_0$ in equation 2, maybe a row of $X$?
Where does $P$ show up in Algorithm 1? 1c. In an equation between (2) and (3), you defined Gumbel(0,1) as a function, but then in line 6 of Algorithm 1, you are adding a Gumbel(0,1)? Does this mean a sample from the Gumbel distribution? What is the dimension of this Gumbel(0,1) object in Algorithm 1? 1d. In line 7 of Algorithm 1, what does it mean to sample from $S$? What is this object $\pi$? What is the operation $K[\pi]$? 1e. Similarly, you did not define $F_1, F_2$, Prob, Score. What are these functions? 1f. Right before equation 6, you said "we can derive the trivial sampling with replace procedure by repeatedly applying Equation 3 or Equation 5." However, do you mean here drawing a sample from the distribution defined by equation 3 or 5? 1g. You did not define the ImportanceScore or the Concat function in equation 6 and 7. 2. Secondly, for Theorem 4.2 and 4.3, what is the architecture being analyzed here? Is this for one single layer of the attention block? 3. In Theorem 4.2, can you intuitively summarize what "certain combinations of weights" mean? I don't think referring to the Appendix is helpful here for the reader. 4. Theorem 4.3, when you use the big $\Theta$ notation, what is the variable going to infinity? Is it just the number of tokens $n$? I will stop here for now, as there is already a lot to be clarified. I will return to the discussion and attempt to evaluate the work again once these questions are addressed. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: The paper presents novel ideas on improving the efficiency of self-attention which is an important challenge for Transformers, along with a new positional encoding, and generalizations to handle other data modalities. While all the reviewers (and I) agree that the paper has good potential, there were several issues raised that were not sufficiently addressed in the revisions. 
In particular, the issues were
- readability of technical content: this has improved but still needs work
- over-claiming and lack of support for certain claims: the jump from theory to empirical is not clearly motivated or backed
- overall structure of the paper: there seems to be a lot packed into the paper, but not easy to parse
- weak empirical results: lack of ablations for the main claims, and unclear performance gains

Given these, I recommend reject for the current submission and encourage the authors to take the reviewer feedback, incorporate it into an updated version, and re-submit. With improved readability, and adjustment of claims, I believe the paper would be good for TMLR.

==================================================
# A Systematic Approach To Universal Random Features In Graph Neural Networks

Billy Joe Franks billy.franks@rptu.de
Department of Computer Science
University of Kaiserslautern-Landau (RPTU)

Markus Anders
Department of Mathematics
TU Darmstadt

Marius Kloft
Department of Computer Science
University of Kaiserslautern-Landau (RPTU)

Pascal Schweitzer
Department of Mathematics
TU Darmstadt

Reviewed on OpenReview: *https://openreview.net/forum?id=AXUtAIX0Fn*

## Abstract

Universal random features (URF) are state of the art regarding practical graph neural networks that are provably universal. There is great diversity regarding terminology, methodology, benchmarks, and evaluation metrics used among existing URF. Not only does this make it increasingly difficult for practitioners to decide which technique to apply to a given problem, but it also stands in the way of systematic improvements. We propose a new comprehensive framework that captures all previous URF techniques. On the theoretical side, among other results, we formally prove that under natural conditions all instantiations of our framework are universal. The framework thus provides a new simple technique to prove universality results. On the practical side, we develop a method to systematically and automatically train URF. This in turn enables us to impartially and objectively compare all existing URF. New URF naturally emerge from our approach, and our experiments demonstrate that they improve the state of the art.

## 1 Introduction

Structured data is omnipresent in the modern world. Graphs are a fundamental data type in machine learning (ML) on structured data and used in a variety of learning algorithms (Shervashidze et al., 2011; Nickel et al., 2016; Hamilton et al., 2017), including graph neural networks (GNNs) (Zhou et al., 2020; Wu et al., 2021).
GNNs, and especially message passing neural networks (MPNNs) (Gilmer et al., 2017), have found widespread applications, from drug design to friendship recommendation in social networks (Song et al., 2019; Tang et al., 2020). However, severe limitations of MPNNs have recently been proven formally. In fact, MPNNs are at most as expressive as *color refinement* (Xu et al., 2019; Morris et al., 2019). Color refinement - also known as the 1-dimensional Weisfeiler-Leman algorithm - is a simple algorithm inducing a non-universal similarity measure on graphs. Thus, MPNNs fail to be universal function approximators, a fundamental property known for multilayer perceptrons since the 1980s (Cybenko, 1989). Addressing this lack, universal random features (URF) (Murphy et al., 2019; Dasoulas et al., 2020; Sato et al., 2021; Abboud et al., 2021) were developed, which provably enable MPNNs to be universal function approximators. URF enhance the nodes of the graph with random values, thus artificially breaking its intrinsic symmetries and facilitating the distinction of previously indistinguishable nodes. URF represent the state of the art in practical, efficient, and universal GNNs. Recently, numerous variations of URF have been developed (Murphy et al., 2019; Dasoulas et al., 2020; Sato et al., 2021). No systematic comparison of these URF is available, and it is unclear how their trainability, generalizability, and expressivity (Raghu et al., 2017) compare to one another, because of inconsistent terminology, differing or incomplete data sets for benchmarking, and orthogonal evaluation metrics. This makes it difficult for practitioners to decide which technique to apply. It also stands in the way of systematic improvements. Our contributions.
We propose two systematic approaches to dealing with URF: 1) The individualization-refinement node initialization (IRNI) framework, a theoretical description that encompasses all currently known URF schemes (and even proposes new ones, see Figure 1). The foundation for our framework is graph isomorphism theory, particularly the individualization-refinement (IR) paradigm. 2) A tuning technique based on Bayesian hyperparameter optimization (BHO), designed specifically to tune methods based on the IRNI framework. Armed with these new systematic approaches, we make the following contributions:

- We demonstrate that our framework IRNI is comprehensive. In fact, we show that all previous URF techniques are captured.
- We formally prove that all instantiations of the IRNI framework that satisfy a natural compatibility condition are universal and equivariant, even ones not considered so far. Furthermore, we prove that already a very limited version of IRNI is universal on almost all graphs, including all 3-connected planar ones. We also quantify the amount of randomness required to ensure universality for all but an exponentially small fraction of all graphs.
- We apply and systematically test "ensembling over randomness" on all URF. This is a particular form of ensembling that has previously been applied in some but not all URF (Murphy et al., 2019; Dasoulas et al., 2020). Our crucial new insight is that ensembling over randomness significantly improves the performance of all URF, even in cases in which it had not been considered previously.
- The IRNI framework also suggests a new, most natural URF directly related to practical graph isomorphism solvers.
- We compare all previous URF methods using the same BHO tuning approach. This leads to multiple new state-of-the-art performances. In particular, all previous URF methods are improved upon.
Additionally, we discover that even though random node initialization (RNI) was reported to be harder to train than other URF (Abboud et al., 2021), our tuning approach resolves this issue. In our evaluation, and somewhat surprisingly, we find that currently, there seems to be no clear best strategy in practice. There are strengths and weaknesses to all considered URF. However, this marks the first direct comparison of all the different URF and thus enables a more informed choice of which method to use in practice. Our hope is that our systematic approach also eases future development of URF, in particular for developing and evaluating new algorithmic ideas. Individualization refinement in a nutshell: The individualization-refinement framework is a general technique to devise algorithms for tasks revolving around symmetry. Specifically, it can be used to compute isomorphisms, automorphisms (or symmetries), and canonical forms of graphs or other combinatorial objects. The idea is first to use efficient subroutines to distinguish vertices according to simple structural properties preserved by symmetries. An example, which in fact, is similar to what is computed by color refinement (also known as the 1-dimensional Weisfeiler-Leman algorithm), iteratively collects information on the degree of neighbors of a vertex and the degrees of the neighbors of the neighbors and so on. These efficient subroutines must be isomorphism invariant and refine the partition of vertices into indistinguishable parts (and are therefore called refinements). ![2_image_0.png](2_image_0.png) Figure 1: Summary of our new approach: we describe a framework based on "random IR tree paths", called IRNI. Through its parameters, the model is able to exactly model many previous techniques. We prove that any IRNI-enhanced GNN is universal. Once a state is reached where refinements no longer yield new information, an IR algorithm artificially distinguishes a vertex v with a so-called individualization. 
This individualization breaks apparent symmetry and allows us to recurse by applying another refinement step, and so on. Since the individualization is artificial, to preserve isomorphism-invariance we also have to perform the individualization in a backtracking manner with all vertices which have not been distinguished from the vertex v at this point. The backtracking process finishes once all vertices have been distinguished, making symmetry computations trivial. We refer to Section 4 for a more in-depth explanation of the IR machinery necessary for this work. Why individualization refinement. We explain why the IR-framework is a natural choice for the development of efficient universal GNNs. (For general strengths and weaknesses beyond ML see (McKay & Piperno, 2014; Neuen & Schweitzer, 2017).) First of all, the introduction of graph isomorphism techniques in the context of machine learning on graphs already led to two success stories, namely the efficient Weisfeiler-Leman (WL) kernels (Shervashidze et al., 2011) based on color refinement (1-WL) and the more theoretical higher-order graph neural networks (Morris et al., 2019) based on higher-dimensional WL. However, when it comes to practical graph isomorphism, the use of the 2-dimensional WL is already prohibitive. This is not only due to excessive running time but also due to excessive memory consumption. In the world of isomorphism testing, higher-dimensional Weisfeiler-Leman algorithms remain on the theoretical side. In truth, without fail, modern solvers are IR algorithms (McKay, 1981; Junttila & Kaski, 2011; McKay & Piperno, 2014; Anders & Schweitzer, 2021a). They only use color refinement and, instead of higher-dimensional WL, use individualizations to achieve universality. In contrast to higher-dimensional WL, the IR approach is generic, universal, and practical.
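The color refinement subroutine at the core of these solvers is simple enough to sketch directly: repeatedly replace each vertex's color by a signature built from its own color and the multiset of its neighbors' colors, until the partition stops refining (a minimal Python version; real IR solvers use far more engineered refinements and data structures):

```python
from itertools import count

def color_refinement(adj: dict[int, list[int]]) -> dict[int, int]:
    """1-WL / color refinement: iterate until the color partition is stable."""
    colors = {v: 0 for v in adj}                      # start with a uniform coloring
    while True:
        # new signature = (own color, sorted multiset of neighbor colors)
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        relabel, fresh, new_colors = {}, count(), {}
        for v in adj:
            if signatures[v] not in relabel:
                relabel[signatures[v]] = next(fresh)  # assign a fresh color id
            new_colors[v] = relabel[signatures[v]]
        # refinement only ever splits classes, so an equal count means stability
        if len(set(new_colors.values())) == len(set(colors.values())):
            return new_colors
        colors = new_colors

# A path on 4 vertices: endpoints get one color, inner vertices another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(path))
```

On any regular graph (e.g. a triangle), this procedure never splits the initial uniform coloring, which is exactly the kind of situation where an IR solver must resort to individualization.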
Uncontested for more than 50 years, IR algorithms have the fastest computation times and acceptable memory consumption for graph isomorphism (McKay & Piperno, 2014). In that sense, the IRNI approach we introduce in this paper transfers, for the first time, universal graph isomorphism techniques that are truly used in practice into a machine learning context. An important consequence of this is that ML practitioners can now readily transfer existing IR-related results to ML on graphs (Sec. 4.3).

## 2 Related Work

Graphs are a powerful means of representing semantic information. Since graphs are a very general data type of which most other data types are special cases, graphs have many applications. They can be used to extend or combine other data types like text, images, or time series (Noble & Cook, 2003; Vazirgiannis et al., 2018), and there are also data sets specific to graphs. Most commonly, these data sets are related to biology or chemistry; however, computer vision, social networks, and synthetic graphs without a related field are also present (Morris et al., 2020a). Neural learning on structured data like graphs was first introduced in (Baskin et al., 1997; Sperduti & Starita, 1997). More recently, a more specific deep learning model was pioneered for graphs, the graph neural network (GNN) (Gori et al., 2005; Scarselli et al., 2009). The GNN led to the development of a multitude of related models (Duvenaud et al., 2015; Li et al., 2016), usually referred to simply as GNNs. GNNs allow for the joint training of graph feature extraction and classification, which previous models did not. Gilmer et al. (2017) gave a very general characterization of GNNs, called message-passing neural networks (MPNN), which captures most GNN models. Lately, multiple concepts from other domains of deep learning have been transferred to GNNs, such as the attention mechanism (Velickovic et al., 2018) and hierarchical pooling (Ying et al., 2018).
Cybenko (1989) proved the first universality result for one of the earliest deep learning models, the smallest possible multilayer perceptron (MLP). This result has since been expanded to different activation functions (Leshno et al., 1993; Barron, 1994), to width-bounded MLPs (Lu et al., 2017; Liang & Srikant, 2017), and more recently to differently layered artificial neural networks such as the convolutional neural network (Zhou, 2020). Analogous results had been lacking for MPNNs, which are now well established to be non-universal (Xu et al., 2019; Morris et al., 2019; Abboud et al., 2021). Following this finding, multiple attempts were made to establish universal ML models on graph data. Murphy et al. (2019) proposed relational pooling (RP), Sato et al. (2021) proposed random node initialization (RNI), and Dasoulas et al. (2020) proposed the colored local iterative procedure (CLIP), all of which provide universality to MPNNs (Abboud et al., 2021) and are closely related to one another as methods using universal random features (URF). Morris et al. (2019; 2020b) proposed k-GNNs based on the k-dimensional Weisfeiler-Leman algorithm and expanded on them by proposing the δ-k-GNN, a local approximation variant of the k-GNN. Maron et al. (2019a;b) proposed provably powerful graph networks, which are 2-WL powerful, and invariant graph networks, which are proven to be universal; see Morris et al. (2021) for further pointers. In this paper, we only consider methods that are scalable, practical, and universal at the same time. For a comparison between universal and non-universal approaches, we refer to existing literature (Dasoulas et al., 2020; Morris et al., 2019; 2020b; Maron et al., 2019a). As expected, non-universal approaches can outperform universal approaches in tasks that do not require high expressivity. However, non-universal approaches fail to achieve high performance on tasks that do require it.
Therefore, only methods employing random features that grant universality (Murphy et al., 2019; Dasoulas et al., 2020; Sato et al., 2021), i.e., universal random features (URF), are considered. Some readers might wonder if typical graph data augmentation schemes should also be considered here; examples would include deleting or adding edges and nodes. While such changes to graphs can induce the capacity to distinguish graphs from one another, we expect that the universality of these methods is hard to prove. For deletions, this is essentially due to the various reconstruction conjectures, which so far have no proof: it is currently unknown whether the set of all graphs with one node/edge deleted can be used to reconstruct the original graph, which would be required for these operations to be universal. As for node addition, the IRNI framework subsumes this operation, since coloring a set of nodes is equivalent to node addition. Lastly, edge addition can be viewed as removing edges from the inverted graph and therefore has the same issues as edge deletion. For these reasons, we do not consider the vast number of data augmentations that are not provably universal (Puny et al., 2020; Papp et al., 2021; Ding et al., 2022).

## 3 Background

Here we cover the existing background necessary to understand the IRNI framework. Specifically, we define graphs as well as how they are used in ML, followed by colorings of graphs. We then cover color refinement, a fundamental backbone of individualization refinement, the practical algorithm that the IRNI framework is based on. This is followed by a definition of GIN networks, the specific MPNN model we use. Lastly, we cover the specifics of RP, RNI, and CLIP, the related methods that can be described in the IRNI framework and that we compare later on.

Figure 2: A run of naive refinements on a graph.
The sequence of naive refinements ends in the coarsest equitable coloring, which cannot be refined further using the naive refinement. Note how each coloring is finer than the previous one.

## 3.1 Graphs And Colorings

We consider undirected, finite graphs G = (V, E) which consist of a set of vertices V ⊆ N and a set of edges E ⊆ V², where E is symmetric. From this point onward, let n := |V| and V = {1, . . . , n}. Additionally, we let G denote the set of all graphs, while Gn denotes the set of all graphs on n vertices. In ML contexts, graphs typically carry a node representation in R^d, which we denote by X = {x1, . . . , xn}. IR tools require these node representations to be discrete. In other contexts, discretization can be difficult and techniques are being actively researched (Morris et al., 2016). However, the discretization is not critical for our purpose since we only require this encoding to compute a tuple of nodes (w1, w2, . . .) from an IR algorithm. After that, our approach continues on the original node representation. Let enc : R^d × G → N be an arbitrary isomorphism-invariant encoding of the node representations. In practice, it is best to choose an encoding for which enc(xv, G) = enc(xw, G) if and only if xv = xw; however, this is not a requirement for any of the results we present. Elaborating further, if enc(xv, G) = enc(xw, G) even though xv ≠ xw, then more nodes might be "individualized" than are strictly necessary, still resulting in a universal and equivariant method as described in Sec. 4. A (node) *coloring* is a surjective map π : V → {1, . . . , k}. We interpret the node representations as colors using enc, i.e., π(i) := enc(xi, G). We call π^(−1)(i) ⊆ V the i-th cell for i ∈ {1, . . . , k}. If |π(V)| = n, then π is discrete; this means every node has a unique color under π. Note that in the following, we always use "discrete" in this sense.
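To make the cell and discreteness terminology concrete, here is a minimal sketch; representing a coloring as a Python dict from node to color is our own assumption, not notation from the paper:

```python
def cells(coloring):
    """Group nodes by color: the i-th cell is the preimage of color i."""
    out = {}
    for v, c in coloring.items():
        out.setdefault(c, set()).add(v)
    return out

def is_discrete(coloring):
    """A coloring is discrete iff every node has a unique color."""
    return len(set(coloring.values())) == len(coloring)

pi = {1: 0, 2: 0, 3: 1}  # nodes 1 and 2 share a color, so pi is not discrete
```

This is only a notational aid; the actual encodings used by IR tools are more involved.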
Furthermore, we say a coloring π is *finer* than π′ if π(v) = π(v′) =⇒ π′(v) = π′(v′) holds for every v, v′ ∈ V. We may also say π′ is *coarser* than π. See Figure 2, where each consecutive coloring of the graph is finer than the previous one. We should remark that the IR machinery generalizes to directed and edge-colored graphs. One example of achieving this is by subdividing each edge with additional colored vertices, where the additional vertices model the edge color or edge direction (see McKay & Piperno for more details). Alternatively, said features can also be added to IR algorithms directly (Piperno, 2018). We denote by NG(v) the neighborhood of node v in graph G.

## 3.2 Color Refinement

We now define color refinement, which all practical graph isomorphism solvers use. A coloring π is *equitable* if for every pair of (not necessarily distinct) colors i, j, the number of j-colored neighbors is the same for all i-colored vertices. Equitable colorings are precisely the colorings for which color refinement cannot be employed to distinguish nodes further. For a colored graph (G, π) there is (up to renaming of colors) a unique coarsest equitable coloring finer than π (McKay, 1981). This is precisely the coloring computed by color refinement. A more algorithmic way to describe the refinement is to define for a colored graph (G, π) the naively refined graph (G, π^r) where π^r(v) := (π(v), {{π(v′) | v′ ∈ NG(v)}}). The naive refinement r is applied exhaustively, i.e., until vertices cannot be partitioned further. The result is precisely the coarsest equitable coloring. Figure 2 illustrates the color refinement process. Note that color refinement, and hence coarsest equitable colorings, are strongly related to MPNNs, as is explained below.
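Before moving on, the exhaustive application of the naive refinement r can be sketched as follows. This is a minimal illustration (the adjacency-dict representation is an assumption on our part), not the optimized partition-refinement routines used by actual IR tools:

```python
def color_refinement(adj, colors):
    """Apply the naive refinement r exhaustively: a node's new color is its
    old color together with the (sorted) multiset of its neighbors' colors.
    Stops at the coarsest equitable coloring."""
    colors = dict(colors)
    while True:
        signatures = {
            v: (colors[v], tuple(sorted(colors[w] for w in adj[v])))
            for v in adj
        }
        # Rename signatures to consecutive integers (canonical by sorting).
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        refined = {v: palette[signatures[v]] for v in adj}
        if len(set(refined.values())) == len(set(colors.values())):
            return refined  # no cell was split: coloring is equitable
        colors = refined

# A path on 4 nodes: the endpoints end up in one cell, the inner nodes in another.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = color_refinement(adj, {v: 0 for v in adj})
```

Since the refinement only ever splits cells, the loop terminates as soon as the number of colors stops growing, which is exactly the coarsest equitable coloring described above.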
## 3.3 Message Passing Neural Networks

Formally, a message passing update can be formulated as:

$$x_{v,t+1} := \mathrm{combine}\left(x_{v,t}, \mathrm{aggregate}\left(\{\{x_{w,t} \mid w \in N_{G}(v)\}\}\right)\right), \qquad (1)$$

where G is a graph and x_{v,t} is the vector representation of node v at time t. In this context, time t is usually interpreted as the layer, and the aggregate function is typically required to be invariant under isomorphisms. We want to point out the similarity between the refinement r in color refinement and the message passing update (Equation 1). MPNNs are just computing on the colors computed by color refinement, which is why MPNNs are at most as powerful as color refinement in distinguishing graphs. We refer to Xu et al. (2019) and Morris et al. (2019) for a formal comparison. Common instances of MPNNs are graph convolutional networks (Duvenaud et al., 2015), graph attention networks (Velickovic et al., 2018), and graph isomorphism networks (GIN) (Xu et al., 2019). We use GIN throughout this paper, arguably the most efficient of these models (Dasoulas et al., 2020). A GIN is characterized by being simple and yet as powerful as the color refinement algorithm. Given an arbitrary multi-layer perceptron MLP^(k) in layer k, a learnable parameter ϵ^(k) of layer k, and the input representation h_v^(0) of node v, GIN updates its node representations h_v^(k) in layer k as follows:

$$h_{v}^{(k)} := \mathrm{MLP}^{(k)}\left(\left(1+\epsilon^{(k)}\right)h_{v}^{(k-1)}+\sum_{u\in N_{G}(v)}h_{u}^{(k-1)}\right). \qquad (2)$$

## 3.4 Random Features

Intuitively, using random features is a process that introduces randomized information into the input before processing it with machine learning techniques. For example, one might delete a random node or attach random numbers to feature vectors. We do not require a more formal notion since, in this paper, we are only interested in universal random features (URF).
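As an illustration of the "attach random numbers" idea, the following sketch augments a node feature matrix in two common ways; the function names and the choice of a standard normal distribution are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def attach_random_features(x, d):
    """Concatenate d random (here: standard normal) features to every
    node representation, in the style of random node initialization."""
    n = x.shape[0]
    return np.concatenate([x, rng.standard_normal((n, d))], axis=1)

def attach_random_permutation(x):
    """Concatenate a one-hot encoding of a uniformly random permutation
    of the n nodes, in the style of relational pooling."""
    n = x.shape[0]
    onehot = np.eye(n)[rng.permutation(n)]
    return np.concatenate([x, onehot], axis=1)

x = np.zeros((5, 3))
x_rni = attach_random_features(x, d=2)  # shape (5, 3 + 2)
x_rp = attach_random_permutation(x)     # shape (5, 3 + 5)
```

Both augmentations are drawn fresh per forward pass; the next paragraphs make the specific schemes precise.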
We use URF to refer to methods that provably enable universality if used in conjunction with MPNNs. We next summarize all existing URF techniques. RP attaches a random permutation to the graph by attaching to each node a one-hot encoding of its image under the permutation. For RNI, let G be a graph with node representations {x1, . . . , xn} and d ∈ N a constant; random node initialization (RNI) concatenates d features sampled from a random distribution X to each node: ∀v ∈ {1, . . . , n} : xv ← concatenate(xv, r1, . . . , rd), with r1, . . . , rd ∼ X. CLIP first applies color refinement and then individualizes each node of each color class by assigning a one-hot encoding of a natural number to it: if C is the set of all nodes of one color class, then each node v ∈ C is randomly assigned a unique number in {0, . . . , |C| − 1}, which is then one-hot encoded and concatenated onto its node representation xv.

## 4 Random Features From Individualization Refinement

First, we describe individualization-refinement trees, followed by the individualization-refinement node initialization (IRNI) framework. Then, we demonstrate that RNI, CLIP, and RP can be expressed as manifestations of this framework. Lastly, we give several theoretical insights into the framework and prove a general universality result for these methods under a natural compatibility constraint.

## 4.1 Individualization Refinement Trees

Individualization-refinement (IR) trees are the backtracking trees of practical graph isomorphism algorithms. We use randomly sampled leaves of these trees as random features. These leaves correspond to sequences of nodes (w1, . . . , wk) of the graph, which we translate into features of the MPNN. One central property of such a sequence is that distinguishing its nodes from the other nodes in the graph and applying color refinement yields a discrete coloring.

Figure 3: In IR, refinement and individualization are alternatingly applied.
This continues until the coloring of the graph becomes discrete. Two nodes connected by a squiggly line are considered one node of the IR tree: they illustrate the coloring of the graph before and after color refinement.

In the following, we describe all the necessary ingredients of IR for the purposes of this paper. The IR paradigm is a complex machinery refined over many decades into sophisticated software libraries. We refer to McKay & Piperno (2014) and Anders & Schweitzer (2021a) for an exhaustive description. Refinement. The most crucial subroutine of an IR algorithm is the *refinement* Ref : G × Π × V∗ → Π, where Π denotes the set of all vertex colorings of G and V∗ denotes a string of vertices (∗ is the Kleene star). A refinement must satisfy two properties: it must be invariant under isomorphism, and it must individualize the vertices in ν ∈ V∗, i.e., for π′ = Ref(G, π, ν) it holds for all v ∈ ν that π′^(−1)(π′(v)) = {v}. Our definition of refinement is slightly more general than that of McKay & Piperno (2014), which leads to a slight technicality that we discuss in the appendix. In practice, IR tools use color refinement as their refinement (see Sec. 3.2). We denote color refinement as CR(G, π, ϵ), where ϵ denotes the empty sequence (explained further below). Individualization. IR algorithms make use of *individualization*, a process that artificially forces a node into its own color class, distinguishing it. To record which vertices have been individualized, we use a sequence ν = (v1, . . . , vk) ∈ V∗. We modify color refinement so that CR(G, π, ν) is the unique coarsest equitable coloring finer than π in which every node in ν is a singleton with its own artificial color. Artificial distinctions caused by individualizations are thus taken into account. Cell selector. In a backtracking fashion, the goal of an IR algorithm is to reach a discrete coloring using color refinement and individualization. For this, color refinement is first applied.
If this does not yield a discrete coloring, individualization is applied, branching over all vertices in one non-singleton cell. The task of the *cell selector* is to (isomorphism-invariantly) pick this non-singleton cell. Figure 3 illustrates this process. While many choices within certain restrictions are possible, one example, which we will also use later on, is the selector that always chooses the first, largest non-singleton cell of π. We use the notation Sel to refer to a cell selector. IR trees. We first give a formal definition of the IR tree ΓRef,Sel(G, π), followed by a more intuitive explanation. Nodes of ΓRef,Sel(G, π) are sequences of vertices of G, and the root is the empty sequence ϵ = (). If ν = (v1, . . . , vk) is a node in ΓRef,Sel(G, π) and C = Sel(G, Ref(G, π, ν)) is the selected cell, then the set of children of ν is {(v1, . . . , vk, v) | v ∈ C}, i.e., all extensions of ν by one node v of the selected cell C. The root represents the graph (with no individualizations) after refinement (see Figure 3). A node ν represents the graph after all nodes in ν have been individualized, followed by refinement (see Figure 3). A root-to-ν walk of the tree is naturally identified with a sequence of individualizations in the graph: in each step i of the walk, one more node vi belonging to a non-trivial color class C is individualized (followed by refinement). The sequence of individualizations (v1, . . . , vk) uniquely determines the node of the IR tree in which this walk ends. This is why we identify the name of a node with the sequence of individualizations necessary to reach it: the sequence of individualizations necessary to reach ν is (v1, . . . , vk) = ν. ΓRef,Sel(G, π, ν) denotes the subtree of ΓRef,Sel(G, π) rooted in ν. We remark that the notation (G, π)^φ simply means we apply φ to G and π, as noted in the lemma below.
Isomorphism invariance of the IR tree follows from isomorphism invariance of Sel and Ref: Lemma 1 (McKay & Piperno (2014)). Let φ : V → V denote an automorphism of (G, π), i.e., (G, π)^φ = (φ(V), φ(E), φ(π)) = (V, E, π) = (G, π). Let Aut(G, π) denote all automorphisms of (G, π). Then, if ν is a node of ΓRef,Sel(G, π) and φ ∈ Aut(G, π), then ν^φ is a node of ΓRef,Sel(G, π) and ΓRef,Sel(G, π, ν)^φ = ΓRef,Sel(G, π, ν^φ). Generally, we refer to a process or object as *isomorphism-invariant* whenever it produces the same result for isomorphic inputs. We want to remark again that leaves of an IR tree correspond to discrete colorings of a graph. In fact, the set of all leaves of an IR tree forms a complete isomorphism invariant: two isomorphic graphs will have the same set of leaves, whereas two non-isomorphic graphs are guaranteed to have distinct sets of leaves (see Lemma 4 of McKay & Piperno (2014)). Random IR walks. There are various ways to traverse and use IR trees. Traditionally, solvers (e.g., nauty) solely used deterministic strategies, such as depth-first traversal (McKay & Piperno, 2014). Only recently have competitive strategies based solely on random traversal, i.e., dejavu (Anders & Schweitzer, 2021a), emerged (see Section 4.3). We make use of this recent development by using *random root-to-leaf walks* of the IR tree ΓRef,Sel(G, π). We begin such a walk in the root node of ΓRef,Sel(G, π). We repeatedly choose uniformly at random a child of the current node until we reach a leaf ν of the tree. Then, we return the leaf ν = (w1, . . . , wk). A crucial property is that since ΓRef,Sel(G, π) is isomorphism-invariant (Lemma 1), random walks of ΓRef,Sel(G, π) are isomorphism-invariant as well: Lemma 2 (Anders & Schweitzer (2021a)). As a random variable, the graph colored with the coloring of the leaf resulting from a random IR walk is isomorphism-invariant.
Lemma 2 also holds when restricting random walks to prefixes of a certain length d. We stress that random IR walks are conceptually unrelated to random walks in the graph itself considered elsewhere (Nikolentzos & Vazirgiannis, 2020). Our next step is to insert the sequence of nodes defined by random IR walks into MPNNs.

## 4.2 Individualization Refinement Node Initialization

Let G be a graph with node representations {x1, . . . , xn} and d ∈ N a constant. Individualization-refinement node initialization (IRNI) computes a random IR walk w = (w1, . . . , wk) in ΓRef,Sel(G, π), where π(i) := enc(xi, G). If k ≥ d, we take a prefix of length d, i.e., (w1, . . . , wd), providing d nodes to be individualized. IRNI then concatenates d features that are either 0 or 1 depending on this prefix: we set ∀v ∈ {1, . . . , n} : xv ← concatenate(xv, 1_{w1=v}, . . . , 1_{wd=v}), which means that the j-th added feature of node v is set to 1 if v is the j-th node that was individualized (i.e., wj = v) and 0 otherwise. This guarantees that node v is individualized if and only if it appears in the prefix. If k < d, then we simply "fill up" the walk with nodes in an isomorphism-invariant manner using the discrete coloring Ref(G, π, (w1, . . . , wk)): we add nodes in order of their color, first the node with the smallest color, then the one with the second smallest color, and so forth. We abbreviate IRNI with constant d as d-IRNI. Due to the dependence on an underlying IR tree, both the refinement Ref and the cell selector Sel are natural hyperparameters of IRNI. Unless stated otherwise, we assume that an arbitrary refinement and an arbitrary cell selector are used. If we want to state a specific refinement or cell selector, we do so using d-IRNI(Ref) and d-IRNI(Ref, Sel), respectively. IRNI depends on the random walk in the IR tree, and we will prove its universality in Theorem 4. Thus, IRNI is a URF (analogous to RNI, CLIP, and RP).
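The ingredients of IRNI can be sketched end to end on a toy graph. This is a simplified illustration under our own assumptions (naive color refinement as Ref, the largest non-singleton cell as Sel, adjacency dicts as graph representation), not the dejavu-based implementation used in the experiments:

```python
import random

def refine(adj, colors, nu):
    """Sketch of CR(G, pi, nu): give each node in nu its own artificial
    color, then refine naively until the coloring is equitable."""
    colors = dict(colors)
    base = 1 + max(colors.values())
    for i, v in enumerate(nu):
        colors[v] = base + i  # artificial singleton colors
    while True:
        sig = {v: (colors[v], tuple(sorted(colors[w] for w in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(colors.values())):
            return new
        colors = new

def select_cell(colors):
    """Cell selector Sel: the largest non-singleton cell, if any."""
    cs = {}
    for v in sorted(colors):
        cs.setdefault(colors[v], []).append(v)
    nonsingleton = [c for c in cs.values() if len(c) > 1]
    return max(nonsingleton, key=len) if nonsingleton else None

def random_ir_walk(adj, colors):
    """Random root-to-leaf walk: individualize a uniformly random node of
    the selected cell until the refined coloring is discrete."""
    nu = []
    while True:
        cell = select_cell(refine(adj, colors, nu))
        if cell is None:  # coloring is discrete: nu corresponds to a leaf
            return nu
        nu.append(random.choice(cell))

def irni_features(n, walk, d):
    """d-IRNI indicator features: column j is 1 exactly at node walk[j]."""
    feats = [[0] * d for _ in range(n)]
    for j, w in enumerate(walk[:d]):
        feats[w][j] = 1
    return feats

# 4-cycle: one individualization leaves its two neighbors tied, so every
# walk has length 2 and then the coloring is discrete.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
walk = random_ir_walk(adj, {v: 0 for v in adj})
feats = irni_features(4, walk, d=2)
```

The indicator columns encode exactly the sampled walk, and drawing fresh walks per forward pass supplies the randomness of the scheme. (The "fill up" step for walks shorter than d is omitted here for brevity.)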
This justifies ensembling over this randomness, which we will refer to as ensembling over randomness (EoR). Specifically, we average the predictions of an MPNN over several draws of some URF. We now define some specific instances of IRNI by applying different refinements. Let the trivial refinement TR(G, π, ν) only individualize the vertices ν in π, followed by no further refinement. A random walk of ΓTR(G, π) thus picks a random permutation of vertices that respects only the initial color classes of π (i.e., the first vertex will always be of the first selected color of π, and so forth). In the case of uncolored graphs (G, π) where π is the trivial coloring, random walks truly become random permutations of the vertices of G. In this case, it follows that for G ∈ Gn, n-IRNI(TR) is equivalent to RP. However, we can enforce this even for colored graphs by actively ignoring the colors of π, resulting in what we call the *oblivious refinement* OR(G, π, ν) := TR(G, V(G) ↦ 1, ν). In this case, random walks are always random permutations of the vertices of G. Hence, it follows that for G ∈ Gn, n-IRNI(OR) is RP. We remark that the only difference between RNI and RP is in the encoding of the individualizations. Based on TR and CR, we define CTR(G, π, ν) := TR(G, CR(G, π, ϵ), ν). Note that CTR applies color refinement to the graph, followed by trivial refinement of nodes in the resulting color classes. By definition, for G ∈ Gn, n-IRNI(CTR) is thus an alternative description of CLIP.

## 4.3 IR Algorithms And IRNI

We now discuss the relationship between IR algorithms and MPNNs using IRNI. First, we remark that in terms of solving graph isomorphism, the use of repeated random IR walks has recently been proven to be a near-optimal traversal strategy of IR trees (Anders & Schweitzer, 2021b), where near-optimal refers to a logarithmic gap between the lower and upper bound.
This is in contrast to deterministic traversal strategies such as depth-first search or breadth-first search, which have a quadratic overhead (Anders & Schweitzer, 2021b). URF are thus closely related to these optimal strategies. The ensembling defined in the previous section even seems to mimic the way the aforementioned near-optimal IR algorithm operates (Anders & Schweitzer, 2021b). In fact, the currently fastest practical graph isomorphism algorithm, dejavu (Anders & Schweitzer, 2021a), uses essentially the same strategy. Moreover, the use of random IR walks has additional inherent benefits, such as "implicit automorphism pruning", i.e., the automatic exploitation of symmetry in the input (Anders & Schweitzer, 2021a). This translates to MPNNs with IRNI, in that if individualizations across multiple random walks are made on nodes that are symmetrical to each other, this does not introduce any additional randomness: due to their isomorphism-invariance (or equivariance), symmetrical nodes are indistinguishable by MPNNs by design. Previously, we discussed that a crucial property of MPNNs is that their result is isomorphism-invariant, i.e., it only depends on the isomorphism type. This is not true in the deterministic sense for IRNI. However, because of Lemma 2, the result only depends on the isomorphism type and the randomness. Lemma 3. Let Ref be any refinement. Let f be the function computed by an MPNN with d-IRNI(Ref), mapping a graph G and a random seed s ∈ Ω from the sample space of random IR walks Ω to a value f(G, s) ∈ R. Then for every permutation φ, the random variables s ↦ f(G, s) and s ↦ f(φ(G), s) have the same distribution. Proof. The result follows directly from the isomorphism-invariance of MPNNs and Lemma 2. Nevertheless, we give a more extensive proof here. First, we recall that MPNNs are isomorphism-invariant (or equivariant): for every permutation φ, by definition, an MPNN produces the same result for G as for φ(G).
Hence, we only need to prove that the augmentations made through random IR walks satisfy the claim. We note that the IR tree of G is ΓRef(G) and the IR tree of φ(G) is ΓRef(φ(G)) = φ(ΓRef(G)) (see Lemma 1 in McKay & Piperno (2014)). Intuitively, this means that, ignoring the permutation φ, we are drawing randomly from the same distribution. Hence, randomly sampling walks from these trees will result in the same nodes, except for applying the permutation φ: if ν ∈ ΓRef(G), then φ(ν) ∈ ΓRef(φ(G)). We remark that G individualized with ν is isomorphic to φ(G) individualized with φ(ν). This corresponds to the augmentation made to the MPNN in IRNI. Hence, overall, it follows that the MPNN must give the same result for G augmented with ν as for φ(G) augmented with φ(ν). We now provide a universality theorem for IRNI, in a similar fashion as Abboud et al. (2021) do for RNI. Let us first recall a definition of Abboud et al. (2021): let Gn denote the class of all n-vertex graphs and f : Gn → R. We say that a randomized function X that associates with a graph G ∈ Gn a random variable X(G) is an (ϵ, δ)-approximation of f if for all G ∈ Gn it holds that Pr(|f(G) − X(G)| ≤ ϵ) ≥ 1 − δ. The theorem proven by Abboud et al. (2021) is based on the fact that RNI fully individualizes a graph with high probability, and all fully individualized representations of a graph together constitute a complete isomorphism invariant. The crucial insight we exploit is that IR trees constitute a complete isomorphism invariant as well; specifically, even the set of all leaves of an IR tree suffices. Since the power of MPNNs is limited by color refinement, the only additional requirement is that the refinement used for random walks must be at most as powerful as color refinement. Theorem 4. Let Ref be a refinement that computes colorings coarser than or equal to CR, i.e., for any graph G = (V, E), coloring π, and ν ∈ V∗, Ref(G, π, ν) is coarser than or equal to CR(G, π, ν).
Let n ≥ 1 and let f : Gn → R be invariant. Then, for all ϵ, δ > 0, there is an MPNN with (n − 1)-IRNI(Ref) that (ϵ, δ)-approximates f. Proof. We prove the theorem using a combination of Theorem 2 from (Morris et al., 2019), the universality result for RNI given in (Abboud et al., 2021), and the basic definition of IR trees. Since graphs have n nodes, all possible random IR walks considered by (n − 1)-IRNI(Ref) are random IR walks ending in a leaf node of ΓRef(G, π) (see Section 4.1). If we were to individualize the sequence of nodes (w1, . . . , wk) corresponding to a leaf and apply the refinement Ref, the coloring of the entire graph would become discrete. By assumption, color refinement always produces colorings finer than or equal to Ref, so applying color refinement also produces a discrete coloring. By the definition of (n − 1)-IRNI(Ref), the nodes contained in {w1, . . . , wk} all have distinct features not shared by any of the other nodes in the graph. This means that the nodes in {w1, . . . , wk} are indeed initially individualized in the MPNN. Now, Theorem 2 of Morris et al. (2019) (see also (Xu et al., 2019)) shows that there is an MPNN that produces the same partitioning into colors that color refinement would, i.e., in our case, yields a discrete partitioning of vertices. In other words, we can assume that the graph is individualized. This suffices to apply the universality result of Abboud et al. (2021) (see Lemma A2, Lemma A3, and Lemma A4 in (Abboud et al., 2021), which build upon (Barceló et al., 2020)), which solely depends on individualizing the graph. In particular, the proof of Abboud et al. (2021) builds a C^2 sentence which identifies discretely colored graphs (Lemma A3). In turn, a disjunction identifying any possible discretely colored graph for a given graph is constructed (Lemma A4).
Since the set of leaves of an IR tree is a complete isomorphism invariant, it suffices to build this disjunction over only those discretely colored graphs that correspond to a leaf in the IR tree. The hyperparameters of IR open up more opportunities to transfer results into the realm of MPNNs. We give one such example: we argue that with a specific cell selector, invariant functions on 3-connected planar graphs can be approximated with 4-IRNI(CR). Theorem 5. Let Pn denote the class of 3-connected planar graphs. Let n ≥ 1 and let f : Pn → R be invariant. Then, for all ϵ, δ > 0, there is a cell selector Sel (which does not depend on n) and an MPNN with 4-IRNI(CR, Sel) that (ϵ, δ)-approximates f. Proof. First of all, we argue that individualizing a node of degree at most 5 and three of its neighbors surely suffices to make the coloring of the graph discrete: this follows from Lemma 22 of Kiefer et al. (2017), which proves that individualizing 3 vertices on any common face followed by color refinement suffices to make the coloring discrete.
Note that

| Method | PROTEINS | MUTAG | NCI1 | TRI | TRIX | EXP | CEXP | CSL |
|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| None | 0.68 ± 0.06 | 0.89 ± 0.06 | 0.81 ± 0.02 | 0.50 ± 0.00 | 0.50 ± 0.00 | 0.50 ± 0.01 | 0.74 ± 0.02 | 0.50 ± 0.00 |
| RNI | 0.66 ± 0.02 | 0.89 ± 0.04 | 0.81 ± 0.01 | 0.99 ± 0.01 | 0.99 ± 0.00 | 0.97 ± 0.03 | 0.95 ± 0.10 | 0.85 ± 0.06 |
| CLIP | 0.65 ± 0.05 | 0.85 ± 0.09 | 0.81 ± 0.01 | 0.99 ± 0.00 | 0.81 ± 0.05 | 0.99 ± 0.04 | 0.99 ± 0.02 | 1.00 ± 0.01 |
| RP | 0.74 ± 0.04 | 0.86 ± 0.07 | 0.81 ± 0.01 | 0.99 ± 0.00 | 0.82 ± 0.03 | 0.96 ± 0.02 | 0.97 ± 0.02 | 1.00 ± 0.00 |
| IRNI(CR) | 0.75 ± 0.04 | 0.85 ± 0.05 | 0.82 ± 0.02 | 0.99 ± 0.01 | 0.73 ± 0.04 | 0.99 ± 0.04 | 0.95 ± 0.14 | 1.00 ± 0.00 |
| RNIEoR | 0.69 ± 0.05 | 0.94 ± 0.03 | 0.85 ± 0.02 | 1.00 ± 0.00 | 1.00 ± 0.00 | 0.99 ± 0.01 | 0.98 ± 0.05 | 0.93 ± 0.06 |
| CLIPEoR | 0.67 ± 0.03 | 0.92 ± 0.05 | 0.82 ± 0.02 | 1.00 ± 0.00 | 0.95 ± 0.05 | 1.00 ± 0.00 | 0.97 ± 0.08 | 1.00 ± 0.00 |
| RPEoR | 0.78 ± 0.04 | 0.84 ± 0.12 | 0.87 ± 0.02 | 1.00 ± 0.00 | 0.95 ± 0.05 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.01 |
| IRNI(CR)EoR | 0.74 ± 0.04 | 0.87 ± 0.08 | 0.82 ± 0.02 | 1.00 ± 0.00 | 0.94 ± 0.05 | 0.99 ± 0.02 | 0.97 ± 0.06 | 0.99 ± 0.02 |
| SOTAURF | 0.81 ± 0.03 | 0.95 ± 0.03 | 0.88 ± 0.01 | 0.91 ± NA | 0.93 ± NA | NA | NA | NA |
| SOTA∗ URF | 0.77 ± 0.04 | 0.94 ± 0.04 | NA | NA | NA | 0.98 ± 0.02 | NA | 0.91 ± 0.07 |

Table 1: The AUROC of a GIN network with RNI, CLIP, RP, IRNI(CR), and without any (None) of these on selected data sets. EoR indicates the use of ensembling over randomness. Bold entries indicate statistically significant best values; for this, EoR is treated as separate from no EoR. SOTAURF indicates the state-of-the-art AUROC. SOTA∗URF indicates state-of-the-art accuracies; note that these are not directly comparable.
Specifically, SOTAURF and SOTA∗URF refer to the results from the previous evaluations in (Murphy et al., 2019; Sato et al., 2021; Dasoulas et al., 2020; Abboud et al., 2021), from which we always chose the best value. individualizing a node of degree at most 5 and three of its neighbors surely individualizes three vertices on a common face. Using the arguments from the proof of Theorem 4 again, we can see that this would indeed suffice to show the claim. It remains to be shown that there is a cell selection strategy achieving the above. In the first step, the cell selector chooses an (isomorphism-invariant) color class consisting of vertices of degree at most 5. We remark that, due to Euler's formula, the average degree of a planar graph is less than 6, so such a node always exists. In the next step, we choose a non-trivial class containing only neighbors of the individualized node v of degree at most 5. We argue that unless all neighbors of v have been individualized, there is a non-trivial color class consisting of neighbors of v. Indeed, if there is a non-trivial class containing neighbors of v, the class may only contain such neighbors, since color refinement distinguishes neighbors of v from non-neighbors of v. Here we use that v is individualized. We repeat the step of choosing a non-singleton class of neighbors of v and individualizing a node within it. If at any point no non-trivial class of neighbors exists, we are done: this means that the neighbors are fully discrete. This in turn suffices to show the claim. More results of this kind can be shown. For example, it is known that strongly regular graphs require at most O(√n log n) individualizations (Babai, 1980). In fact, for all but an exponentially small fraction of graphs, d-IRNI(CR) with small d suffices. Theorem 6. There is an absolute constant c > 1 such that the following holds. Let n ≥ 1 and d ∈ N; then there is a graph class G′n containing all but at most a 1/c^(dn) fraction of all graphs for which the following holds.
Let f : G′n → R be invariant. Then, for all ϵ, δ > 0, there is an MPNN with d-IRNI(CR) that (ϵ, δ)-approximates f.

Proof. To prove the theorem, we use the same technique as before. We only need to observe that, for most graphs, after color refinement is applied, d arbitrary individualizations in non-singleton cells cause discretization of the graph. This, however, is a classic theorem by Babai & Kucera (1979, Theorem 4.1), which shows that the fraction of graphs for which this fails is at most 1/c^{dn}.

## 5 Experiments

We compare the URF schemes RNI, CLIP, RP, and IRNI(CR) and verify their increase in expressivity. We do so by applying them to synthetically crafted, hard data sets as well as standard practical data sets. Furthermore, we propose an automated training approach used throughout the benchmarks.

Network architectures and optimization. For all experiments, we use the same general architecture and the Adam optimizer, and we report the area under the receiver operating characteristic curve (AUROC). We optimize each method using a Bayesian hyperparameter search in the same hyperparameter space. To estimate the performance, we use Monte Carlo cross-validation with an outer test loop and an inner validation loop, estimating nested 10 × 9-fold cross-validation. The Bayesian hyperparameter search is capped at evaluating 50 points in hyperparameter space. To encourage the models to optimize faster as well as to avoid overfitting, we add a penalty to the AUROC estimate based on some hyperparameters. The reported test AUROC does not include these penalties. To compute the node sequence for IRNI(CR) as well as the color refinement for CLIP, we use dejavu (Anders & Schweitzer, 2021a). These choices are specified further in the appendix. In the following, we refer to a GIN without any URF as None, while we refer to a GIN with some initialization as RNI, CLIP, RP, or IRNI(CR), depending on the URF that is used. Each of these methods is also limited in the number of dimensions added.

Data sets.
We evaluate the different models on data sets used in prior work on URF, specifically EXP, CEXP, TRI, TRIX, CSL, PROTEINS, MUTAG, and NCI1 (Srinivasan et al.; Borgwardt et al., 2005; Wale & Karypis, 2006; Murphy et al., 2019; Sato et al., 2021; Abboud et al., 2021). EXP, CEXP, TRI, TRIX, and CSL are synthetic data sets made up of graphs not distinguishable by the color refinement algorithm. TRI and TRIX contain 3-regular graphs and use the same training set while differing in the test set. The task is to detect triangles. EXP and CEXP consist of graphs carefully constructed so that each graph is in a pair that is indistinguishable by color refinement while encoding a satisfiable and an unsatisfiable formula, respectively. For CEXP, 50% of all satisfiable graphs are modified to be distinguishable by color refinement from their unsatisfiable counterparts. CSL consists of 41-cycles with regular skip connections according to 10 co-primes of 41. Each co-prime defines one class for the CSL task. The results of this experiment can be found in Table 1.

## 6 Discussion

We notice that on the synthetic hard data sets TRI, TRIX, EXP, CEXP, and CSL, all methods improve the discriminatory power compared to not using any form of URF. We now compare the methods based on the three primary parameters: encoding, ensembling, and refinement. Concerning the encoding, on TRIX and CSL there seem to be noticeable differences between RNI and the other methods. The poorer performance of CLIP, RP, and IRNI(CR) on TRIX is easily explained. Since the task is to detect triangles locally in the graph and the graph is regular, an individualization close to the triangle is required to be able to detect it. RNI individualizes everywhere simultaneously, while CLIP, RP, and IRNI(CR) only individualize locally. This means the detection of triangles depends on random chance. EoR helps since it increases this chance. The difference on CSL is not so clearly explainable.
We do not know why RNI performs significantly worse here. Looking more specifically at the difference between RP and RNI, we remark that RP substantially outperforms RNI on PROTEINS, while RNI outperforms RP on MUTAG. EoR appears to improve the performance almost always. As such, we would advise its consideration whenever one of these URF is used in practice. Notice that its use does not increase training time and only linearly increases prediction time. EoR also appears to guarantee that at least one of RNI, CLIP, RP, or IRNI(CR) will outperform models without this expressivity increase. Considering the methods that use additional refinement to reduce the introduced randomness, namely CLIP and IRNI(CR), we observe that they outperform RP on MUTAG, while RP outperforms the other two on PROTEINS and NCI1. On the synthetic hard data sets, in particular with ensembling, the three methods perform very similarly. Overall, no method appears to be the uniformly best method for practical use. We suspect that overfitting to the different features introduced by RNI, CLIP, RP, and IRNI(CR) plays a significant role in which of these URF performs best.

## 7 Conclusion

We introduced IR as it applies to machine learning in the form of the IRNI framework. This enables the development of many URF based on selecting the refinement, the cell selector, and how to encode individualizations into the network. No URF introduced so far is the clear front runner. However, the Bayesian hyperparameter optimization (BHO) tuning approach presented here feasibly allows for the optimization of the IRNI hyperparameters in addition to other model hyperparameters. The IRNI hyperparameters also serve as the unifying IRNI umbrella, as we were able to describe all existing and new URF based on them. Moreover, IRNI has a rigorous theoretical foundation ensuring equivariance and universality. We hope this will aid in systematic improvements in future research regarding GNNs.
Practically, our findings imply that for each new graph learning task all mentioned URF need to be evaluated to determine the best fit.

## Acknowledgement

We thank the reviewers for their constructive feedback that helped improve the paper. The authors acknowledge support by the Carl-Zeiss Foundation, the BMWK award 01MK20014U, the DFG awards KL 2698/2-1, KL 2698/5-1, KL 2698/6-1 and KL 2698/7-1, and the BMBF awards 01|S18051A, 03|B0770E, and 01|S21010C. The research leading to these results has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (EngageS: grant agreement No. 820148).

## References

Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In Zhi-Hua Zhou (ed.), *Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21*, pp. 2112–2118. International Joint Conferences on Artificial Intelligence Organization, 8 2021. doi: 10.24963/ijcai.2021/291. URL https://doi.org/10.24963/ijcai.2021/291. Main Track. Markus Anders and Pascal Schweitzer. Engineering a fast probabilistic isomorphism test. In Martin Farach-Colton and Sabine Storandt (eds.), *Proceedings of the Symposium on Algorithm Engineering and Experiments, ALENEX 2021, Virtual Conference, January 10-11, 2021*, pp. 73–84. SIAM, 2021a. doi: 10.1137/1.9781611976472.6. URL https://doi.org/10.1137/1.9781611976472.6. Markus Anders and Pascal Schweitzer. Search Problems in Trees with Symmetries: Near Optimal Traversal Strategies for Individualization-Refinement Algorithms. In Nikhil Bansal, Emanuela Merelli, and James Worrell (eds.), 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198 of *Leibniz International Proceedings in Informatics (LIPIcs)*, pp. 16:1–16:21, Dagstuhl, Germany, 2021b. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
ISBN 978-3-95977-195-5. doi: 10.4230/LIPIcs.ICALP.2021.16. URL https://drops.dagstuhl.de/opus/volltexte/2021/14085. László Babai. On the complexity of canonical labeling of strongly regular graphs. *SIAM J. Comput.*, 9(1):212–216, 1980. doi: 10.1137/0209018. URL https://doi.org/10.1137/0209018. László Babai and Ludek Kucera. Canonical labelling of graphs in linear average time. In *20th Annual Symposium on Foundations of Computer Science*, San Juan, Puerto Rico, 29-31 October 1979, pp. 39–46. IEEE Computer Society, 1979. doi: 10.1109/SFCS.1979.8. URL https://doi.org/10.1109/SFCS.1979.8. Pablo Barceló, Egor V. Kostylev, Mikaël Monet, Jorge Pérez, Juan L. Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=r1lZ7AEKvB. Andrew R Barron. Approximation and estimation bounds for artificial neural networks. *Machine learning*, 14(1):115–133, 1994. Igor I Baskin, Vladimir A Palyulin, and Nikolai S Zefirov. A neural device for searching direct correlations between structures and properties of chemical compounds. Journal of chemical information and computer sciences, 37(4):715–721, 1997. Karsten M Borgwardt, Cheng Soon Ong, Stefan Schönauer, SVN Vishwanathan, Alex J Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. *Bioinformatics*, 21(suppl_1):i47–i56, 2005. George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989. George Dasoulas, Ludovic Dos Santos, Kevin Scaman, and Aladin Virmaux. Coloring graph neural networks for node disambiguation. In Christian Bessiere (ed.), *Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20*, pp. 2126–2132. International Joint Conferences on Artificial Intelligence Organization, 7 2020.
doi: 10.24963/ijcai.2020/294. URL https://doi.org/10.24963/ijcai.2020/294. Main track. Kaize Ding, Zhe Xu, Hanghang Tong, and Huan Liu. Data augmentation for deep graph learning: A survey. arXiv preprint arXiv:2202.08235, 2022. David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/f9be311e65d81a9ad8150a60844bb94c-Paper.pdf. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Doina Precup and Yee Whye Teh (eds.), *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 1263–1272. PMLR, 06–11 Aug 2017. URL http://proceedings.mlr.press/v70/gilmer17a.html. Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2, pp. 729–734 vol. 2, 2005. doi: 10.1109/IJCNN.2005.1555942. William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. *IEEE Data Eng. Bull.*, 40(3):52–74, 2017. URL http://sites.computer.org/debull/A17sept/p52.pdf. Tommi A. Junttila and Petteri Kaski. Conflict propagation and component recursion for canonical labeling. In Alberto Marchetti-Spaccamela and Michael Segal (eds.), *Theory and Practice of Algorithms in (Computer) Systems* - First International ICST Conference, TAPAS 2011, Rome, Italy, April 18-20, 2011. Proceedings, volume 6595 of *Lecture Notes in Computer Science*, pp. 151–162. Springer, 2011. doi: 10.1007/978-3-642-19754-3_16.
URL https://doi.org/10.1007/978-3-642-19754-3_16. Sandra Kiefer, Ilia Ponomarenko, and Pascal Schweitzer. The weisfeiler-leman dimension of planar graphs is at most 3. In 32nd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2017, Reykjavik, Iceland, June 20-23, 2017, pp. 1–12. IEEE Computer Society, 2017. doi: 10.1109/LICS.2017.8005107. URL https://doi.org/10.1109/LICS.2017.8005107. Moshe Leshno, Vladimir Ya. Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. *Neural Networks*, 6(6):861– 867, 1993. ISSN 0893-6080. doi: https://doi.org/10.1016/S0893-6080(05)80131-5. URL https://www. sciencedirect.com/science/article/pii/S0893608005801315. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http: //arxiv.org/abs/1511.05493. Shiyu Liang and R. Srikant. Why deep neural networks for function approximation? In *5th International* Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=SkpSlKIel. Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The expressive power of neural networks: A view from the width. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS'17, pp. 6232–6240, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019a. 
URL https: //proceedings.neurips.cc/paper/2019/file/bb04af0f7ecaee4aae62035497da1387-Paper.pdf. Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In *International Conference on Learning Representations*, 2019b. URL https://openreview.net/forum? id=Syx72jC9tm. Brendan D. McKay. Practical graph isomorphism. In 10th. Manitoba Conference on Numerical Mathematics and Computing (Winnipeg, 1980), pp. 45–87, 1981. Brendan D. McKay and Adolfo Piperno. Nauty and traces user guide. https://cs.anu.edu.au/people/Brendan.McKay/nauty/nug25.pdf. Brendan D. McKay and Adolfo Piperno. Practical graph isomorphism, ii. *J. Symb. Comput.*, 60:94–112, January 2014. ISSN 0747-7171. doi: 10.1016/j.jsc.2013.09.003. URL https://doi.org/10.1016/j.jsc. 2013.09.003. Christopher Morris, Nils M. Kriege, Kristian Kersting, and Petra Mutzel. Faster kernels for graphs with continuous attributes via hashing. In *2016 IEEE 16th International Conference on Data Mining (ICDM)*, pp. 1095–1100, 2016. doi: 10.1109/ICDM.2016.0142. Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. *Proceedings of* the AAAI Conference on Artificial Intelligence, 33(01):4602–4609, Jul. 2019. doi: 10.1609/aaai.v33i01. 33014602. URL https://ojs.aaai.org/index.php/AAAI/article/view/4384. Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. Tudataset: A collection of benchmark datasets for learning with graphs. *CoRR*, abs/2007.08663, 2020a. URL https://arxiv.org/abs/2007.08663. Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 21824– 21840. 
Curran Associates, Inc., 2020b. URL https://proceedings.neurips.cc/paper/2020/file/ f81dee42585b3814de199b2e88757f5c-Paper.pdf. Christopher Morris, Matthias Fey, and Nils Kriege. The power of the weisfeiler-leman algorithm for machine learning with graphs. In Zhi-Hua Zhou (ed.), Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pp. 4543–4550. International Joint Conferences on Artificial Intelligence Organization, 8 2021. doi: 10.24963/ijcai.2021/618. URL https://doi.org/10.24963/ijcai.2021/618. Survey Track. Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. In *International Conference on Machine Learning*, pp. 4663–4673. PMLR, 2019. Daniel Neuen and Pascal Schweitzer. Benchmark graphs for practical graph isomorphism. In Kirk Pruhs and Christian Sohler (eds.), 25th Annual European Symposium on Algorithms, ESA 2017, September 4-6, 2017, Vienna, Austria, volume 87 of *LIPIcs*, pp. 60:1–60:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017. doi: 10.4230/LIPIcs.ESA.2017.60. URL https://doi.org/10.4230/LIPIcs.ESA.2017.60. Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. *Proceedings of the IEEE*, 104(1):11–33, 2016. doi: 10.1109/JPROC.2015. 2483592. Giannis Nikolentzos and Michalis Vazirgiannis. Random walk graph neural networks. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural* Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/ 2020/hash/ba95d78a7c942571185308775a97a3a0-Abstract.html. Caleb C. Noble and Diane J. Cook. Graph-based anomaly detection. 
In *Proceedings of the Ninth ACM* SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, pp. 631–636, New York, NY, USA, 2003. Association for Computing Machinery. ISBN 1581137370. doi: 10.1145/ 956750.956831. URL https://doi.org/10.1145/956750.956831. Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. Dropgnn: random dropouts increase the expressiveness of graph neural networks. *Advances in Neural Information Processing Systems*, 34:21997–22009, 2021. Adolfo Piperno. Isomorphism test for digraphs with weighted edges. In Gianlorenzo D'Angelo (ed.), 17th International Symposium on Experimental Algorithms, SEA 2018, June 27-29, 2018, L'Aquila, Italy, volume 103 of *LIPIcs*, pp. 30:1–30:13. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2018. doi: 10.4230/LIPIcs.SEA.2018.30. URL https://doi.org/10.4230/LIPIcs.SEA.2018.30. Omri Puny, Heli Ben-Hamu, and Yaron Lipman. Global attention improves graph networks generalization. arXiv preprint arXiv:2006.07846, 2020. Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. In *international conference on machine learning*, pp. 2847–2854. PMLR, 2017. Ryoma Sato, Makoto Yamada, and Hisashi Kashima. *Random Features Strengthen Graph Neural Networks*, pp. 333–341. 2021. doi: 10.1137/1.9781611976700.38. URL https://epubs.siam.org/doi/abs/10.1137/ 1.9781611976700.38. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. *IEEE Transactions on Neural Networks*, 20(1):61–80, 2009. doi: 10.1109/TNN. 2008.2005605. Nino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler-Lehman graph kernels. *Journal of Machine Learning Research*, 12:2539–2561, 2011. URL http://dl.acm.org/citation.cfm?id=2078187. Weiping Song, Zhiping Xiao, Yifan Wang, Laurent Charlin, Ming Zhang, and Jian Tang. 
Session-based social recommendation via dynamic graph attention networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19, pp. 555–563, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450359405. doi: 10.1145/3289600.3290989. URL https: //doi.org/10.1145/3289600.3290989. Alessandro Sperduti and Antonina Starita. Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8(3):714–735, 1997. doi: 10.1109/72.572108. A Srinivasan, S Muggleton, M Sternberg, and R King. Theories for mutagenicity: a study of first-order and feature-based induction. *Artificial Intelligence*, 85(1):2. Bowen Tang, Skyler T Kramer, Meijuan Fang, Yingkun Qiu, Zhen Wu, and Dong Xu. A self-attention based message passing neural network for predicting molecular lipophilicity and aqueous solubility. Journal of Cheminformatics, 12(1):1–9, 2020. Michalis Vazirgiannis, Fragkiskos D. Malliaros, and Giannis Nikolentzos. GraphRep: Boosting text mining, NLP and information retrieval with graphs. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, pp. 2295–2296, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450360142. doi: 10.1145/3269206.3274273. URL https: //doi.org/10.1145/3269206.3274273. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=rJXMpikCZ. Nikil Wale and George Karypis. Acyclic subgraph based descriptor spaces for chemical compound retrieval and classification. Technical report, University of Minnesota, 2006. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. 
A comprehensive survey on graph neural networks. *IEEE Transactions on Neural Networks and Learning Systems*, 32(1): 4–24, 2021. doi: 10.1109/TNNLS.2020.2978386. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id= ryGs6iA5Km. Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L. Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In *Proceedings of the 32nd International* Conference on Neural Information Processing Systems, NIPS'18, pp. 4805–4815, Red Hook, NY, USA, 2018. Curran Associates Inc. Ding-Xuan Zhou. Universality of deep convolutional neural networks. *Applied and Computational Harmonic* Analysis, 48(2):787–794, 2020. ISSN 1063-5203. doi: https://doi.org/10.1016/j.acha.2019.06.004. URL https://www.sciencedirect.com/science/article/pii/S1063520318302045. Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. *AI Open*, 1: 57–81, 2020. ISSN 2666-6510. doi: https://doi.org/10.1016/j.aiopen.2021.01.001. URL https://www. sciencedirect.com/science/article/pii/S2666651021000012. ## A Refinement Definition We discuss the slight technicality in our definition of refinement in the individualization-refinement paradigm. Compared to McKay & Piperno (2014), we are lacking the requirement that π ′ = Ref(*G, π, ν*) must be finer than π, making our definition of refinements slightly more general. With respect to the results, this implies that for (non-trivially) colored graphs, discrete refined colorings might not respect the initial coloring. This can potentially lead to issues with automorphisms and canonization since, essentially, the isomorphism invariant implied by the refinement is too weak. 
However, there is a simple fix using other components of the IR framework: we make use of invariants, which are not immediately relevant for IRNI itself. Whenever a coloring π′ of a node ν in the tree is discrete, we can use the complete isomorphism invariant (Gπ′, ππ′) (note that since π′ is discrete, it also defines a permutation on the vertices of G). I.e., we identify the node ν with the invariant (Gπ′, ππ′). As mentioned above, this does not influence any of the results of this paper directly; it only serves to make the description sound in terms of the further discussion in McKay & Piperno (2014). We refer to McKay & Piperno (2014) for an in-depth discussion of invariants.

| Parameter | batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR |
|-----------|------------|--------|---------------|--------------|----------|--------|------------|-----------|-----|
| Minimum | 8 | 32 | 1e-6 | 1e-10 | 16 | 2 | 1 | 0.01 | 1 |
| Maximum | 256 | 512 | 1e-2 | 1e-0.3 | 128 | 10 | 5 | 1.0 | 64 |
| Data-Type | Integer | Integer | Real | Real | Integer | Integer | Integer | Real | Integer |
| Penalty | 1-(x/256) | x/512 | None | None | x/128 | x/10 | None | None | x/64 |

Table 2: The hyperparameter space used for the experiments and how each parameter influences the hyperparameter optimization.

## B Relational Pooling

Relational pooling (RP; Murphy et al., 2019) is more general than discussed in this paper. In its initial formulation, (1/|V|!) Σπ∈S|V| f(Gπ, Xπ), it is not tractable and thus does not fit into the comparison considered here. In the original paper, three methods to make RP tractable are discussed. The first uses canonization, which itself is generally not tractable, and thus is not considered.
The third uses a fixed-parameter tractable (FPT) formulation (essentially describing a less expressive version of the k-dimensional WL algorithm), which is generally not universal unless the FPT parameter is not fixed, and even then its costs are polynomial in the FPT parameter, which is computationally too expensive for this comparison. Only the second discussed option, which is originally motivated by stochastically estimating the initial formulation, is both universal and computationally practical, which is why it is considered here. In particular, it is also the only version implemented and tested in the original paper.

## C Edge Coloring

We discuss the differences when using edge colors, since the MUTAG data set contains edge labels, which we consider in the experiments. In order to handle edge colors in dejavu for color refinement and random IR walks, we made the following modifications. We encode edge colors as vertex colors: we subdivide each edge of the graph using a vertex and color that vertex according to the color of the edge. Then, we ensure that the cell selector never selects nodes that were inserted to subdivide an edge, i.e., the solver can still only individualize vertices that truly correspond to the original vertices of the graph. We want to note that there is a more efficient, albeit more involved, way of resolving this issue (Piperno, 2018).

## D Color Encoding

We discuss precisely how we encoded colors for the data sets used in this work. First, notice that we need to encode labels both for nodes and for edges. We used the same method to encode colors for both. Almost all node and edge features can be considered binary numbers due to how the data sets are encoded. The only exception to this rule is the PROTEINS data set, which has one natural number followed by a binary number. Thus, we simply read the node and edge features as a binary number, where the first bit has the highest order and the last bit has the lowest order.
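A minimal sketch of this binary reading (our own illustration; the function name is hypothetical):

```python
def encode_features_as_color(bits):
    """Read a binary feature vector as one natural number:
    the first bit is the highest-order bit, the last bit the
    lowest-order bit."""
    color = 0
    for bit in bits:
        color = (color << 1) | int(bit)
    return color

# Distinct feature vectors of the same length map to distinct colors.
assert encode_features_as_color([0, 0, 1]) == 1
assert encode_features_as_color([0, 1, 0]) == 2
assert encode_features_as_color([1, 0, 1]) == 5
```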
This ensures that node and edge features are encoded as different natural numbers if the original features were different. However, due to the size of the natural numbers this produces, specifically for the NCI1 data set with its 37 node features, we additionally apply a modulo operation with 12345.

## E Experiments

Network architecture. Each MLP in each GIN layer has three layers (input, hidden, output) that widen to a fixed number of features as soon as possible and remain at that width throughout all subsequent layers. The input and hidden layers of every MLP are followed by a batch-norm operation. For graph classification tasks, we use global mean pooling followed by a linear transform and dropout with p = 0.5. We use the node embeddings after each GIN layer in this way and sum over all of them for the final graph representation. As activation function, we use only ReLU. For node classification tasks, we do the same as before without the global mean pooling.

Bayesian hyperparameter optimization. We optimize over the batch size, the number of epochs, the learning rate, the weight decay, the number of features the GIN layers expand to and operate on, the number of GIN layers, the number of dimensions added for the node initializations, the fraction of epochs after which the learning rate is halved consecutively, and the number of random samples for EoR. Table 2 describes the hyperparameter space used for the experiments. For an in-depth description of each parameter:

- Batch size refers to the batch size used during training. The smaller it is, the more it penalizes the evaluation metric during the Bayesian hyperparameter search.
- Epochs refers to the number of epochs used during training. The larger it is, the more it penalizes.
- Learning rate refers to the initial learning rate used inside the Adam optimizer.
- Weight decay refers to the usual weight decay parameter of the Adam optimizer.
- Features refers to the number of features the MLPs inside the GIN layers expand to. Each node has this many features after the first MLP layer of the first GIN layer. The more features, the larger the penalty.
- Layers refers to the number of layers of the GIN network. The more layers, the larger the penalty.
- Dimensions refers to the number of dimensions each node's features are expanded by for the new features introduced by the different URF methods. This parameter is the same as the d in d-IRNI.
- Step size is a relative parameter that determines how the learning rate is changed during training depending on the number of epochs. For instance, if the number of epochs is set to 100, then a step size of 0.1 means the learning rate is multiplied by 0.5 every 10 epochs, a step size of 0.5 means the learning rate is multiplied by 0.5 once after 50 epochs, and a step size of 1 indicates the learning rate is never dropped.
- EoR refers to the number of random samples used to estimate the output of the network. For instance, for an EoR of 10, the input is modified 10 times i.i.d. using the URF of choice. All 10 inputs are then passed through the network, and the final prediction is the average over the 10 outputs.

Tables 4 to 19 show the mean and standard deviation of the best-found hyperparameters across their seeds for all the data sets and methods. For the learning rate and weight decay, only the exponent is given.

Monte Carlo cross-validation. Each data set is split using stratified 10-fold cross-validation with random shuffling. The first fold is used as the test set and the 9 others are used for Bayesian hyperparameter optimization. After the best model is found, it is trained on all 9 folds and its performance on the test set is reported. This is repeated 10 times with different random shuffles. If the data set already provides a test set, then the Bayesian hyperparameter optimization is repeated for 10 different seeds.
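The outer loop just described can be sketched as follows (a minimal, self-contained illustration with our own function names; model training and hyperparameter search are omitted):

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=0):
    """Split sample indices into k folds with per-class shuffling,
    so each fold roughly preserves the label distribution."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds

def outer_split(labels, shuffle_seed):
    """One Monte Carlo repetition: the first fold is the test set,
    the remaining nine folds go to hyperparameter optimization."""
    folds = stratified_folds(labels, k=10, seed=shuffle_seed)
    test = folds[0]
    train = [idx for fold in folds[1:] for idx in fold]
    return train, test

labels = [0] * 50 + [1] * 50
for repeat in range(10):  # 10 different random shuffles
    train, test = outer_split(labels, shuffle_seed=repeat)
    # each repetition yields a class-balanced 90/10 split
    assert len(test) == 10 and len(train) == 90
```

Each repetition reshuffles the same data, which is what makes this a Monte Carlo estimate rather than a single k-fold estimate.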
In the inner loop, which we referred to before as just bayesian hyperparameter optimization, the data is split using stratified 9-fold cross-validation. The first fold is used as a validation set and the other 8 folds are used to train the model, after which its performance is reported on the validation set. This is repeated 3 times with different random shuffles. This essentially estimates nested 10 × 9-fold cross-validation. The 10 and 3 were chosen based on a time budget. The performance on PROTEINS, MUTAG, and NCI1 is particularly sensitive to the variance in the performance estimate, so we expect to see an improved performance if more estimates are used. Test system and time budget. The system that was used to do the experiments mentioned in this work is made up of: - \#60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 4.15.0-55-generic | Parameter | batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | |-------------|----------------|-----------------|-----------------|----------------|---------------|-------------|--------------|-------------| | None | 152.70 ± 83.17 | 278.10 ± 185.29 | −3.21 ± 0.76 | −5.12 ± 2.72 | 76.00 ± 35.85 | 5.80 ± 1.78 | 3.30 ± 1.68 | 0.53 ± 0.32 | | RNI | 86.30 ± 82.32 | 303.50 ± 131.03 | −3.53 ± 0.89 | −5.72 ± 2.14 | 59.50 ± 41.13 | 5.30 ± 3.13 | 3.30 ± 1.27 | 0.63 ± 0.29 | | CLIP | 141.90 ± 93.62 | 224.20 ± 140.10 | −3.61 ± 1.06 | −7.41 ± 2.70 | 60.40 ± 38.66 | 3.80 ± 1.33 | 3.60 ± 1.50 | 0.58 ± 0.30 | | RP | 192.70 ± 52.98 | 262.50 ± 162.44 | −2.90 ± 0.66 | −6.76 ± 3.28 | 55.00 ± 35.34 | 2.80 ± 1.17 | 4.10 ± 1.04 | 0.55 ± 0.33 | | IRNI | 141.70 ± 90.51 | 308.30 ± 165.17 | −2.97 ± 1.17 | −7.45 ± 2.51 | 72.40 ± 48.34 | 3.00 ± 1.34 | 3.20 ± 1.47 | 0.45 ± 0.25 | | randomness. 
- 2 Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
- 754 GiB system memory
- 10 GeForce RTX 2080 Ti

The experiments took approximately 20 days without testing and approximately 1 month with testing, although the machine was not always under full load. Table 3 shows the computation time in seconds for 1 seed for each of the methods on each separate dataset.

| Method | PROTEINS | MUTAG | NCI1 | TRI | TRIX | EXP | CEXP | CSL |
|-------------------------------|---------------|--------------|----------------|---------------|---------------|---------------|---------------|--------------|
| None | 25672 ± 5426 | 7275 ± 5532 | 121487 ± 63237 | 10878 ± 2328 | 10809 ± 2288 | 20735 ± 9345 | 19652 ± 5739 | 2286 ± 409 |
| RNI | 34499 ± 16968 | 7802 ± 2471 | 101154 ± 29446 | 53897 ± 27607 | 46855 ± 21908 | 53381 ± 35114 | 58668 ± 19114 | 9999 ± 4512 |
| CLIP | 36189 ± 10374 | 12268 ± 3955 | 151603 ± 36904 | 52437 ± 19898 | 48608 ± 20161 | 65597 ± 13000 | 55558 ± 13902 | 10697 ± 3146 |
| RP | 26042 ± 8782 | 8783 ± 2234 | 126282 ± 36043 | 56640 ± 35035 | 48930 ± 34016 | 52605 ± 29282 | 53750 ± 25979 | 6321 ± 1917 |
| IRNI | 33383 ± 13685 | 7793 ± 1560 | 156347 ± 35831 | 42454 ± 13367 | 40597 ± 15643 | 64123 ± 13412 | 50042 ± 14155 | 10937 ± 1895 |
| RNIEoR | 26593 ± 9390 | 7810 ± 3740 | 106401 ± 30762 | 26767 ± 10270 | 25630 ± 14628 | 34604 ± 12037 | 33747 ± 9654 | 8832 ± 6506 |
| CLIPEoR | 65225 ± 31132 | 8418 ± 2048 | 118096 ± 17091 | 24487 ± 4718 | 28594 ± 6917 | 38565 ± 7954 | 82525 ± 17467 | 6810 ± 1807 |
| RPEoR | 22203 ± 4547 | 6309 ± 2749 | 117571 ± 43888 | 21233 ± 5526 | 22385 ± 7073 | 31831 ± 13730 | 37978 ± 12934 | 6513 ± 2271 |
| IRNIEoR | 33337 ± 12360 | 8204 ± 3483 | 160859 ± 55380 | 33799 ± 4711 | 61533 ± 21959 | 52990 ± 13094 | 75701 ± 18933 | 5528 ± 1360 |

Python libraries. 
We used Python 3.8.10 to implement all the models and conduct all the experiments, together with the following libraries:
- dejavu-gi 0.1.3 (for IRNI(CR))
- networkx 2.6.3 (for constructing TRI and TRIX)
- numpy 1.21.4
- scikit-learn 1.0.1
- scikit-optimize 0.9.0 (for Bayesian hyperparameter optimization)
- scipy 1.7.3
- torch 1.10.0
- torch-geometric 2.0.2 (specifically for graph-related machine learning)

Random seed. For the experiments we used the random seeds 0 through 9 as input to our code. However, our experiments might not be perfectly reproducible, as dejavu, the package we use to calculate random IR paths, does not allow its seed to be set.

dataset without EoR:

| Parameter | batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size |
|-------------|----------------|-----------------|-----------------|----------------|---------------|-------------|--------------|-------------|
| None | 149.70 ± 86.61 | 143.60 ± 98.14 | −2.51 ± 0.54 | −5.26 ± 2.77 | 49.20 ± 30.00 | 6.80 ± 2.23 | 2.70 ± 1.42 | 0.33 ± 0.25 |
| RNI | 153.40 ± 86.97 | 430.10 ± 102.38 | −3.69 ± 0.31 | −6.46 ± 2.01 | 63.40 ± 36.21 | 9.10 ± 1.45 | 1.40 ± 0.92 | 0.87 ± 0.14 |
| CLIP | 190.50 ± 49.88 | 308.00 ± 90.24 | −3.38 ± 0.77 | −7.61 ± 2.04 | 54.30 ± 30.84 | 8.30 ± 1.55 | 2.10 ± 1.37 | 0.66 ± 0.23 |
| RP | 130.30 ± 85.78 | 353.30 ± 99.67 | −3.65 ± 0.28 | −5.25 ± 2.11 | 75.90 ± 27.93 | 8.80 ± 1.08 | 4.60 ± 0.66 | 0.53 ± 0.27 |
| IRNI | 169.90 ± 87.46 | 245.80 ± 157.52 | −3.13 ± 0.89 | −6.69 ± 1.85 | 49.00 ± 34.81 | 8.00 ± 2.05 | 2.80 ± 1.54 | 0.51 ± 0.30 |

| dataset without EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | | |--------------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------| | None | 205.90 ± 67.09 | 172.10 ± 166.92 | −4.29 ± 1.82 | −4.40 ± 3.54 | 52.10 ± 40.55 | 4.30 ± 2.33 | 2.60 ± 1.80 
| 0.56 ± 0.39 | | RNI | 164.40 ± 84.80 | 425.10 ± 79.47 | −2.95 ± 0.50 | −7.35 ± 1.75 | 63.00 ± 37.26 | 8.80 ± 1.40 | 2.30 ± 1.68 | 0.64 ± 0.28 | | CLIP | 223.20 ± 51.71 | 307.30 ± 120.73 | −2.78 ± 0.59 | −7.11 ± 2.50 | 63.50 ± 41.48 | 7.60 ± 1.20 | 1.80 ± 1.17 | 0.42 ± 0.23 | | RP | 139.10 ± 86.74 | 378.10 ± 115.58 | −3.59 ± 0.40 | −7.03 ± 2.21 | 84.90 ± 41.42 | 8.80 ± 1.25 | 4.70 ± 0.46 | 0.52 ± 0.24 | | IRNI | 174.50 ± 78.69 | 210.80 ± 159.69 | −3.20 ± 0.38 | −7.32 ± 2.63 | 33.20 ± 12.40 | 8.20 ± 1.54 | 1.80 ± 1.08 | 0.61 ± 0.25 | | dataset without EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | | |--------------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------| | None | 256.00 ± 0.00 | 32.00 ± 0.00 | −4.15 ± 1.90 | −4.15 ± 4.71 | 16.00 ± 0.00 | 2.00 ± 0.00 | 3.80 ± 1.83 | 0.50 ± 0.49 | | RNI | 126.40 ± 67.20 | 431.20 ± 43.66 | −2.62 ± 0.30 | −7.63 ± 2.11 | 94.80 ± 24.86 | 9.40 ± 0.80 | 2.00 ± 1.34 | 0.46 ± 0.19 | | CLIP | 120.70 ± 83.98 | 378.60 ± 77.44 | −2.65 ± 0.45 | −7.15 ± 1.79 | 67.90 ± 16.12 | 8.10 ± 1.45 | 4.00 ± 0.89 | 0.64 ± 0.26 | | RP | 168.20 ± 96.77 | 445.90 ± 54.74 | −2.63 ± 0.44 | −6.80 ± 1.63 | 80.40 ± 32.04 | 8.50 ± 1.12 | 4.40 ± 1.02 | 0.69 ± 0.28 | | IRNI | 185.10 ± 83.23 | 451.80 ± 75.61 | −2.59 ± 0.31 | −9.16 ± 1.23 | 76.40 ± 29.88 | 8.90 ± 1.30 | 4.10 ± 0.94 | 0.63 ± 0.21 | | dataset without EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | | |--------------------------------------------|-----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------| | None | 256.00 ± 0.00 | 32.00 ± 0.00 | −4.15 ± 1.90 | −4.15 ± 4.71 | 16.00 ± 0.00 | 2.00 ± 0.00 | 3.80 ± 1.83 | 0.50 ± 0.49 | | RNI | 132.50 ± 91.25 | 454.30 ± 55.92 | −2.61 ± 0.48 | −8.01 ± 2.22 | 
92.10 ± 26.43 | 9.50 ± 0.67 | 1.60 ± 1.28 | 0.66 ± 0.29 | | CLIP | 129.30 ± 105.04 | 436.60 ± 52.67 | −2.68 ± 0.31 | −7.31 ± 2.06 | 78.60 ± 25.15 | 8.40 ± 1.36 | 3.80 ± 0.75 | 0.54 ± 0.33 | | RP | 161.40 ± 102.91 | 397.30 ± 98.27 | −2.55 ± 0.38 | −7.80 ± 1.65 | 80.60 ± 21.90 | 8.80 ± 1.40 | 4.40 ± 0.80 | 0.56 ± 0.22 | | IRNI | 147.70 ± 93.12 | 437.30 ± 87.21 | −2.67 ± 0.41 | −6.83 ± 2.14 | 78.30 ± 27.22 | 8.10 ± 1.51 | 3.80 ± 0.98 | 0.68 ± 0.29 | | dataset without EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | | | |--------------------------------------------|----------------|-----------------|----------------|--------------|----------------|--------------|-------------|-------------|-------------| | None | 75.20 ± 60.04 | 270.60 ± 138.40 | −3.78 ± 0.85 | −5.43 ± 2.02 | 91.10 ± 28.83 | 7.00 ± 2.05 | 3.40 ± 1.11 | 0.65 ± 0.32 | 1.00 ± 0.00 | | RNI | 103.70 ± 59.22 | 309.80 ± 84.42 | −2.97 ± 0.73 | −5.22 ± 1.99 | 89.50 ± 32.23 | 6.00 ± 1.55 | 3.00 ± 1.73 | 0.62 ± 0.23 | 1.00 ± 0.00 | | CLIP | 131.75 ± 58.16 | 403.25 ± 117.57 | −3.28 ± 0.79 | −5.79 ± 1.93 | 97.38 ± 30.59 | 7.38 ± 1.58 | 2.50 ± 1.66 | 0.50 ± 0.19 | 1.00 ± 0.00 | | ORNI | 157.00 ± 81.81 | 362.70 ± 134.81 | −3.15 ± 0.72 | −6.06 ± 2.20 | 69.00 ± 22.95 | 5.40 ± 1.69 | 3.30 ± 1.19 | 0.45 ± 0.21 | 1.00 ± 0.00 | | IRNI | 93.12 ± 83.31 | 369.38 ± 93.77 | −3.88 ± 0.61 | −6.40 ± 2.27 | 105.50 ± 26.80 | 8.12 ± 2.32 | 1.12 ± 0.33 | 0.57 ± 0.29 | 1.00 ± 0.00 | | dataset without EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | | |--------------------------------------------|----------------|-----------------|----------------|--------------|----------------|--------------|-------------|-------------| | None | 157.10 ± 78.41 | 329.80 ± 152.31 | −3.67 ± 0.63 | −6.36 ± 2.25 | 72.30 ± 32.85 | 5.70 ± 3.07 | 2.90 ± 1.51 | 0.61 ± 0.30 | | RNI | 145.10 ± 90.67 | 315.40 ± 150.22 | −3.25 ± 0.78 | 
−8.43 ± 2.22 | 55.40 ± 30.11 | 5.80 ± 2.79 | 3.60 ± 1.43 | 0.76 ± 0.24 | | CLIP | 142.20 ± 94.73 | 208.90 ± 139.70 | −3.11 ± 0.91 | −7.12 ± 2.84 | 102.30 ± 17.73 | 6.30 ± 2.93 | 2.90 ± 1.14 | 0.68 ± 0.27 | | RP | 181.90 ± 64.39 | 256.20 ± 131.57 | −3.07 ± 0.77 | −6.27 ± 3.09 | 55.10 ± 29.18 | 6.60 ± 2.97 | 3.80 ± 0.98 | 0.61 ± 0.30 | | IRNI | 167.20 ± 78.88 | 371.80 ± 120.38 | −3.21 ± 1.09 | −5.81 ± 3.25 | 68.10 ± 40.89 | 5.80 ± 3.09 | 2.80 ± 1.47 | 0.62 ± 0.27 | | dataset with EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | | |-----------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------|---------------| | RNI | 221.50 ± 53.90 | 359.50 ± 91.93 | −3.35 ± 0.58 | −6.44 ± 3.08 | 63.60 ± 35.13 | 7.70 ± 1.35 | 2.50 ± 1.28 | 0.68 ± 0.35 | 29.30 ± 20.25 | | CLIP | 196.80 ± 53.20 | 225.50 ± 93.81 | −2.94 ± 0.33 | −6.43 ± 2.73 | 51.70 ± 20.40 | 4.80 ± 1.60 | 3.20 ± 1.54 | 0.49 ± 0.26 | 25.60 ± 9.79 | | RP | 198.60 ± 72.36 | 249.90 ± 107.72 | −3.30 ± 0.72 | −6.20 ± 2.35 | 53.20 ± 29.50 | 6.90 ± 1.64 | 3.40 ± 0.92 | 0.60 ± 0.30 | 23.10 ± 16.59 | | IRNI | 165.80 ± 70.00 | 236.20 ± 120.13 | −3.28 ± 0.89 | −6.77 ± 1.84 | 44.70 ± 19.11 | 7.20 ± 2.09 | 2.60 ± 1.11 | 0.48 ± 0.28 | 20.60 ± 19.02 | | dataset with EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | | |-----------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------|---------------| | RNI | 203.30 ± 60.54 | 264.40 ± 93.60 | −2.50 ± 0.44 | −6.91 ± 2.52 | 45.20 ± 22.30 | 5.70 ± 2.33 | 3.20 ± 1.08 | 0.54 ± 0.24 | 31.60 ± 14.83 | | CLIP | 187.10 ± 73.91 | 140.70 ± 97.46 | −2.94 ± 0.54 | −6.43 ± 3.27 | 49.60 ± 29.86 | 4.30 ± 1.35 | 3.80 ± 1.33 | 0.62 ± 0.32 | 36.50 ± 16.41 | | RP | 
190.20 ± 78.03 | 182.80 ± 124.46 | −3.16 ± 0.68 | −5.02 ± 2.93 | 53.10 ± 27.94 | 3.80 ± 1.54 | 4.00 ± 1.18 | 0.64 ± 0.18 | 32.20 ± 18.42 | | IRNI | 199.20 ± 57.18 | 187.90 ± 111.60 | −2.60 ± 0.62 | −5.01 ± 2.37 | 52.30 ± 21.19 | 5.50 ± 2.25 | 4.20 ± 1.17 | 0.55 ± 0.27 | 26.10 ± 16.02 | | dataset with EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | | |-----------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------|---------------| | RNI | 200.40 ± 56.35 | 236.70 ± 85.80 | −2.54 ± 0.52 | −5.66 ± 2.51 | 51.90 ± 29.79 | 5.90 ± 1.87 | 3.30 ± 1.19 | 0.58 ± 0.22 | 37.90 ± 17.17 | | CLIP | 205.40 ± 43.87 | 219.50 ± 118.76 | −2.79 ± 0.48 | −5.77 ± 2.64 | 38.90 ± 25.93 | 4.50 ± 1.63 | 4.00 ± 1.18 | 0.58 ± 0.28 | 25.90 ± 16.35 | | RP | 214.00 ± 47.44 | 214.90 ± 137.52 | −2.43 ± 0.45 | −7.03 ± 2.55 | 31.30 ± 22.45 | 5.30 ± 2.28 | 4.40 ± 0.66 | 0.44 ± 0.27 | 25.30 ± 15.07 | | IRNI | 198.30 ± 69.54 | 170.00 ± 139.94 | −2.72 ± 0.73 | −6.79 ± 2.77 | 30.70 ± 12.45 | 3.90 ± 1.45 | 3.60 ± 1.62 | 0.31 ± 0.29 | 34.40 ± 15.81 | | dataset with EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | | |-----------------------------------------|-----------------|-----------------|----------------|--------------|----------------|--------------|-------------|-------------|---------------| | RNI | 153.00 ± 86.19 | 266.60 ± 120.89 | −4.46 ± 0.33 | −5.09 ± 1.85 | 96.10 ± 22.78 | 8.20 ± 1.54 | 3.20 ± 1.17 | 0.56 ± 0.29 | 44.60 ± 17.35 | | CLIP | 162.50 ± 76.87 | 383.90 ± 109.63 | −4.04 ± 0.77 | −7.20 ± 2.36 | 86.70 ± 35.18 | 7.90 ± 1.87 | 2.40 ± 1.20 | 0.41 ± 0.31 | 35.30 ± 14.49 | | RP | 138.10 ± 85.82 | 246.50 ± 59.54 | −4.54 ± 0.41 | −5.02 ± 2.15 | 102.60 ± 26.59 | 7.00 ± 1.90 | 3.60 ± 0.80 | 0.55 ± 0.30 | 43.00 ± 10.61 | | IRNI | 174.10 ± 104.05 | 356.70 ± 
137.42 | −3.68 ± 0.47 | −7.05 ± 2.42 | 102.60 ± 18.67 | 7.50 ± 2.01 | 1.80 ± 0.98 | 0.61 ± 0.33 | 7.50 ± 12.67 | | dataset with EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | | |-----------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------|---------------| | RNI | 110.70 ± 80.39 | 221.50 ± 132.31 | −3.57 ± 0.69 | −6.83 ± 2.79 | 78.20 ± 35.47 | 6.40 ± 2.69 | 2.90 ± 1.45 | 0.59 ± 0.24 | 34.40 ± 15.88 | | CLIP | 111.70 ± 88.57 | 269.30 ± 114.02 | −3.65 ± 0.61 | −4.71 ± 3.12 | 80.30 ± 34.81 | 6.90 ± 2.21 | 2.80 ± 1.25 | 0.42 ± 0.36 | 14.50 ± 14.12 | | RP | 153.10 ± 79.12 | 255.70 ± 173.48 | −2.96 ± 0.89 | −5.14 ± 2.90 | 84.20 ± 46.18 | 7.30 ± 2.49 | 2.70 ± 1.10 | 0.68 ± 0.29 | 35.80 ± 16.77 | | IRNI | 158.40 ± 84.20 | 310.00 ± 129.19 | −3.65 ± 1.01 | −4.81 ± 2.33 | 76.00 ± 43.38 | 5.60 ± 3.14 | 2.10 ± 1.37 | 0.61 ± 0.18 | 39.30 ± 21.87 | | Parameter | batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | |-------------|----------------|-----------------|-----------------|----------------|---------------|-------------|--------------|-------------|---------------| | RNI | 180.80 ± 72.66 | 265.90 ± 147.07 | −3.98 ± 1.03 | −6.88 ± 2.31 | 62.10 ± 33.05 | 5.60 ± 2.87 | 4.60 ± 0.49 | 0.67 ± 0.30 | 36.10 ± 17.31 | | CLIP | 92.10 ± 69.64 | 194.30 ± 153.41 | −3.42 ± 1.00 | −3.85 ± 2.41 | 63.70 ± 37.81 | 5.90 ± 2.77 | 3.60 ± 1.20 | 0.45 ± 0.30 | 21.00 ± 17.40 | | RP | 157.00 ± 78.16 | 281.10 ± 153.80 | −2.50 ± 0.28 | −4.69 ± 3.19 | 66.00 ± 39.44 | 2.80 ± 1.17 | 2.90 ± 1.04 | 0.41 ± 0.25 | 41.60 ± 17.45 | | IRNI | 163.70 ± 74.50 | 296.30 ± 169.70 | −2.83 ± 1.00 | −7.44 ± 2.49 | 57.50 ± 46.20 | 3.90 ± 1.76 | 3.80 ± 1.25 | 0.43 ± 0.41 | 18.50 ± 13.34 | | dataset without EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | 
dimensions | step size | | |--------------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------| | None | 256.00 ± 0.00 | 32.00 ± 0.00 | −4.15 ± 1.90 | −4.15 ± 4.71 | 16.00 ± 0.00 | 2.00 ± 0.00 | 3.80 ± 1.83 | 0.50 ± 0.49 | | RNI | 74.90 ± 86.57 | 481.70 ± 58.11 | −3.15 ± 0.45 | −5.27 ± 2.27 | 90.00 ± 35.89 | 8.30 ± 1.35 | 1.70 ± 1.42 | 0.62 ± 0.26 | | CLIP | 179.10 ± 73.78 | 312.10 ± 91.59 | −2.25 ± 0.35 | −6.33 ± 2.00 | 66.40 ± 36.34 | 7.00 ± 1.67 | 3.50 ± 1.12 | 0.58 ± 0.18 | | RP | 185.00 ± 70.37 | 319.30 ± 88.94 | −2.50 ± 0.48 | −6.31 ± 2.48 | 53.00 ± 29.85 | 7.50 ± 1.43 | 4.20 ± 0.87 | 0.42 ± 0.24 | | IRNI | 191.90 ± 58.75 | 240.00 ± 114.25 | −2.86 ± 0.53 | −6.67 ± 2.83 | 37.70 ± 14.21 | 4.70 ± 1.62 | 4.00 ± 0.89 | 0.41 ± 0.23 | | dataset with EoR Parameter batch size | | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | |-----------------------------------------|-----------------|-----------------|-----------------|----------------|---------------|-------------|--------------|-------------|---------------| | RNI | 164.50 ± 105.35 | 464.60 ± 102.96 | −3.18 ± 0.58 | −7.46 ± 2.07 | 89.70 ± 41.53 | 9.10 ± 1.04 | 1.70 ± 1.42 | 0.58 ± 0.23 | 50.50 ± 10.13 | | CLIP | 190.70 ± 73.12 | 366.50 ± 124.09 | −2.55 ± 0.45 | −7.67 ± 1.34 | 59.10 ± 29.38 | 6.70 ± 1.00 | 3.70 ± 1.49 | 0.37 ± 0.25 | 16.20 ± 14.53 | | RP | 187.40 ± 75.99 | 319.70 ± 92.42 | −2.39 ± 0.42 | −6.95 ± 2.11 | 70.80 ± 23.32 | 7.20 ± 0.98 | 3.20 ± 1.40 | 0.54 ± 0.28 | 11.50 ± 11.60 | | IRNI | 193.40 ± 54.69 | 217.40 ± 74.28 | −2.63 ± 0.45 | −6.22 ± 3.23 | 37.40 ± 14.84 | 6.20 ± 1.94 | 3.90 ± 1.04 | 0.43 ± 0.32 | 17.10 ± 11.57 | | dataset with EoR Parameter batch size | epochs | learning rate | weight decay | features | layers | dimensions | step size | EoR | | 
|-----------------------------------------|----------------|-----------------|----------------|--------------|---------------|--------------|-------------|-------------|---------------|
| RNI | 172.10 ± 66.95 | 404.80 ± 66.97 | −3.78 ± 0.39 | −6.29 ± 2.17 | 77.00 ± 36.91 | 8.30 ± 1.55 | 1.30 ± 0.46 | 0.74 ± 0.24 | 22.50 ± 19.99 |
| CLIP | 193.70 ± 64.15 | 304.90 ± 121.05 | −3.05 ± 0.72 | −5.41 ± 2.55 | 44.90 ± 24.97 | 7.90 ± 0.94 | 2.40 ± 1.02 | 0.70 ± 0.32 | 23.80 ± 19.20 |
| RP | 193.70 ± 79.71 | 235.20 ± 96.69 | −3.57 ± 0.46 | −6.56 ± 2.48 | 68.70 ± 40.74 | 7.60 ± 1.56 | 3.20 ± 1.33 | 0.73 ± 0.26 | 25.30 ± 13.50 |
| IRNI | 190.40 ± 63.72 | 224.10 ± 155.54 | −3.02 ± 0.67 | −5.86 ± 3.11 | 46.60 ± 21.89 | 8.10 ± 1.58 | 2.40 ± 1.20 | 0.53 ± 0.32 | 16.20 ± 14.04 |

## F Trainability

Here we add an evaluation of the data from training all the models. More specifically, we consider the trainability of all the methods on the evaluated datasets. One aspect of trainability is: how easily can the model's hyperparameters be optimized? This question can be answered with BHO by evaluating how many hyperparameter optimization steps are necessary to reach a satisfactory performance. Alternatively, when comparing two models, it can be answered by which model needs fewer steps to achieve a better performance. Considering our main experiment, we visualize the best-found performance at each BHO evaluation for each dataset and model. For each dataset and model, we have all BHO evaluations, from which we compute each seed's best-found performance after each BHO step. We then average over seeds and plot the number of BHO steps against the best AUROC performance found up to that point in Figures 4 and 5. We follow the nomenclature of Table 1 in describing the datasets and the models. Note that the visualized performances are biased, as we consider the maximum performance over multiple evaluations. 
Also, the best-found performances are not equal to the performance in Table 1, as these figures use the validation-set performances of the Monte Carlo cross-validation. A clear difference is notable between the synthetic datasets TRI, TRIX, EXP, CEXP, and CSL and the practical datasets PROTEINS, MUTAG, and NCI1. On the practical datasets, each model converges independently to its own optimal performance, making comparisons between the models more difficult. On the synthetic datasets, this comparison is more straightforward, as the models mostly converge to the same optimum. Notably, RNI converges more slowly than all other models. Additionally, considering the other models, a rough order of trainability can be surmised: (1) IRNI, (2) CLIP, (3) ORNI, (4) RNI; however, this order is less significant than RNI's poor trainability. This order would coincide with the amount of randomness in each model. If we consider for each model the size of the IR tree on any given graph, the order of tree sizes is the same. IRNI requires the smallest number of individualizations to reach completely distinguished graphs. ORNI and RNI have maximal trees, since they ignore any color information present during IR tree construction. CLIP's first step is the same as IRNI's, after which it can be roughly compared to ORNI and RNI, so its tree size lies between the others. Lastly, RNI can be considered more random than ORNI, since it uses continuous random variables, which results in a random space of infinite size. The possible graphs for IRNI, CLIP, and ORNI are always finite. From this, we conclude that using less randomness improves trainability.

![23_image_0.png](23_image_0.png)

Figure 4: These plots show the mean over seeds of the best-found performance (y-axis) after a given number of BHO steps (x-axis), as well as its standard deviation, for the BHO training from Table 1 for the models without EoR. 
![24_image_0.png](24_image_0.png)

Figure 5: These plots show the mean over seeds of the best-found performance (y-axis) after a given number of BHO steps (x-axis), as well as its standard deviation, for the BHO training from Table 1 for the models with EoR.
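The best-found-performance curves plotted in Figures 4 and 5 can be computed with a running maximum over each seed's BHO evaluations, followed by averaging over seeds. This is an illustrative sketch of that computation, not the evaluation code itself:

```python
import numpy as np

def trainability_curve(scores):
    """scores: array of shape (n_seeds, n_bho_steps) holding the validation
    AUROC of each BHO evaluation, in evaluation order. Returns the mean and
    standard deviation over seeds of the best performance found after each
    BHO step."""
    # Running maximum: best performance found up to each step, per seed.
    best_so_far = np.maximum.accumulate(scores, axis=1)
    return best_so_far.mean(axis=0), best_so_far.std(axis=0)
```

The running maximum is what makes the curves monotone and, as noted above, biased towards higher values.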
Review 1: Summary: The paper proposes IRNI, an individualization mechanism that builds on the combination of random node features and individualization refinement algorithms. Additionally, the authors propose a training framework based on Bayesian parameter optimization for hyperparameter tuning. The authors present a rigorous theoretical study and analysis of the proposed method, followed by a comparison of different URF methods with IRNI on several real-world and synthetic datasets. Strengths and Weaknesses: Strengths: * The paper addresses an open problem in the interesting topic of the use of random node features in GNNs. * The paper (after the revisions) is easy to follow, even for a reviewer unfamiliar with the IR algorithm. Weaknesses: * Although this is not a proper weakness (but rather an observation), it is interesting that there is still no clearly better method as far as random node features are concerned. I have read the discussion of the authors after the experimental section, but perhaps it would be possible to elaborate on why and when one would choose one method over the other? I think that this is an important point in order to obtain a systematic way to work with random node features. * In the context of random node features, the paper can benefit from discussing [1]. Questions to the authors: * By taking random walk IR, is the 'level of randomness' decreased, at least with respect to the considered graph adjacency matrix? * Can the authors please explain how the proposed method is related to positional encoding methods that use propagation of node features [2],[3]? Specifically, with respect to the random walk IR algorithm? * Why are the results in Table 1 different between 'SOTA_{URF}' and the best of the methods {RNI, CLIP, RP, IRNI(CP)}? Are those simply results obtained in a different paper under the same settings, or is there another difference? 
[1] Global attention improves graph networks generalization [2] From Local to Global: Spectral-Inspired Graph Neural Networks [3] Graph Positional Encoding via Random Feature Propagation Requested Changes: Please see my questions in my main review. Broader Impact Concerns: None ================================================== Review 2: Summary: The paper describes a novel GNN that uses random features and extends previous works. The authors show it is a universal approximator and evaluate it on several benchmarks. Strengths and Weaknesses: Strengths: - The work, to the best of my knowledge, is novel - The method is well grounded in theory, makes sense intuitively, and works on several benchmarks Weaknesses: - The writing should be improved. The main issue with the writing is that the authors use non-trivial terms they don't explain, or explain them only after first using them, which makes it confusing for the reader. Examples: (1) Universal random features (URF) are never properly defined (I assume random features that make the inference a universal approximator) (2) The IR and RNI acronyms are not defined (3) "Discrete" is used with a non-standard meaning, which is confusing (4) "Individualization refinement (IR) trees are the backtracking trees of practical graph isomorphism algorithms." isn't clear and should be explained better - The proofs are more like proof sketches. There should be more detailed proofs in the supplementary (what about a proof for Lemma 3?) 
- "Our crucial new insight is that ensembling over randomness significantly improves the performance of all URF, even in cases in which it had not been considered previously." I don't think the experiments support this strong claim. Requested Changes: - The writing should be improved, and terms should be clearly explained - More detailed proofs Broader Impact Concerns: NA ================================================== Review 3: Summary: This paper introduces Individualization Refinement Node Initialization (IRNI), a unified framework for randomized node initialization algorithms for GNNs. It is based on the Individualization-Refinement algorithm. Existing universal random features (URF), including Relational Pooling (RP), Random Node Initialization (RNI), and Colored Local Iterative Procedure (CLIP), fall into this framework. Using this framework, this paper systematically proposes a new URF. On the theoretical side, this paper proves universal approximation theorems in the following scenarios: - Universality of the IRNI whose refinement is coarser than the color refinement. - Universality for invariant functions on 3-connected planar graphs. - Universality of a depth-fixed IRNI with the color refinement for the invariant functions on fixed-sized graphs (excluding an exponentially small number of graphs). Empirically, this paper evaluates several URFs within the IRNI framework, whose hyperparameters are optimized by Bayesian Hyperparameter Optimization on five artificial datasets and two real datasets. Strengths and Weaknesses: **Strengths** This paper provides a framework that enables us to treat existing URFs in a unified manner. It offers the following advantages: - It provides a systematic method of making new URFs. - It provides a unified approach for proving universal approximation theorems for randomized feature initialization. 
- It provides the parameterization of URFs (by choice of refinement and selection algorithms), enabling us to evaluate these parameters' impact on empirical prediction performance. **Weaknesses** - There is room for improvement in the writing. Specifically, I would suggest the following improvements (see Requested Changes for details): - Write the basics of IR (in a self-contained manner, if possible). - Provide definitions for undefined mathematical terms and notations. - Elaborate on the theorems' proofs. - IRNI(CR), a new URF derived from the IRNI framework, does not outperform other URFs in the numerical experiments, although its universality is shown. **Claim and Evidence** As far as I have checked, I found no apparent incorrect point in the proofs. However, since I am unfamiliar with the Individualization-Refinement algorithm, I am not confident that I checked correctly. The paper claims to propose a new Bayesian Hyperparameter Optimization method. Indeed, as far as I know, the optimization is new in that it adds a penalty computed from the hyperparameter values. However, if I understand correctly, the experiments use the usual nested cross-validation. Therefore, I have a question about whether this optimization scheme is novel. **Audience** The expressive power of GNNs is one of the main topics in GNN research. Universality is the most common way to evaluate models' expressive power. Several data initialization methods have been proposed to make GNNs universal (i.e., URFs). However, the relationship between them needs to be clarified. This paper provides a unified method to handle URFs, deepening our understanding. Therefore, this paper is of interest to TMLR audiences. **Clarity** As mentioned in the Weaknesses section, I think the writing has room for improvement. First, although this paper introduces some components of IR in Sections 4.1 and 4.2, if I understand correctly, there is no explanation of how IR works and its basics. 
Readers without knowledge of IR may have difficulty grasping the paper's contents. I suggest adding the basic knowledge about IR (e.g., in the Appendix) in a self-contained manner or providing pointers to the necessary references. Second, some mathematical terms and notations are used without definitions (see Requested Changes for details). Finally, it would be more understandable if the proofs of the theorems on IRNI universality (Theorems 4--6) were more detailed than they currently are. Requested Changes: P.1 - Section 1, Paragraph 2: enhance -> enhances P.2 - Section 1, Paragraph 7 (Paragraph titled *Why individualization refinement*): This paragraph explains why IR is used. However, since there is no explanation of what IR is, readers may think this paragraph is abrupt. Therefore, I suggest describing IR before this paragraph. P.5 - Section 3.1, Paragraph 4: The reference McKay & Piperno has a form different from the other references. - Section 3.2, Paragraph 1: [...] colors $i$, $j$ the number of [...] -> [...] $i$, $j$, [...] (add a comma) P.6 - Section 3.4, Paragraph 2: - $\boldsymbol{x}_v \leftarrow \boldsymbol{x}_v \circ (r_1, \ldots, r_d)$. I think $\circ$ usually denotes function composition. Since this usage of $\circ$ is unusual, I suggest explaining its definition. - [...] by assigning it a one-hot encoding of natural numbers. -> [...] by assigning a one-hot encoding of natural numbers to it. - $x_v$ -> $\boldsymbol{x_v}$ - Section 4.1, Paragraph 3: - Write the definition of $V^{\ast}$. - Write explicitly that $\nu \in V^{\ast}$. - I think $\mathrm{Ref}(G, \pi, \nu)^{-1} (v) = \{v\}$ is a little difficult to interpret. Since the value range of $\mathrm{Ref}(G, \pi, \nu)$ is $\{1, \ldots, k\}$, $v$ is incompatible as an argument of $\mathrm{Ref}(G, \pi, \nu)^{-1}$. 
P.7 - Section 4.1, Lemma 1: - Since $\mathrm{Aut}(G, \pi)$ is only used in this lemma, the description would be more straightforward if we do not introduce the notation $\mathrm{Aut}(G, \pi)$ (Nevertheless, I think it is better to define the automorphism of a graph.) - Superscript $\varphi$ is undefined (e.g., $(G, \pi)^{\varphi}$) - $(G, \pi)^{\varphi} = [...] = G$: there is a notational inconsistency regarding whether we should include the coloring $\pi$ as a component of the graph $G$ or not. - Section 4.1, Lemma 2: The concept of isomorphism-invariant is undefined. P.8 - Section 4.2, Paragraph 3: *IRNI depends on the random walk in the IR tree and is thus a URF [...].*: URF, by definition (P.1, Section 1, Paragraph 2), makes an MPNN a universal function approximator. However, since it is not shown that IRNI is a universal approximator before this sentence, it may not be appropriate to call IRNI a URF, at least at this point. - Section 4.3, Paragraph 1: *[...] The use of repeated random IR walks has recently been proven to be a near-optimal traversal strategy of IR trees*: I suggest explaining what you mean by near-optimal here. Also, I would like some references supporting this claim. P.9, Section 4.3, Theorem 5 - *Individualizing 3 vertices on a common face followed by color refinement suffices to make the coloring discrete.*: Is it correct to understand that the choice of face and vertices is arbitrary? In other words, does selecting *any* face in the planar graph and individualizing *any* 3 vertices on that face suffice to make the coloring discrete? P.9--10, Theorems 4--6 - The concept of $(\delta, \varepsilon)$-approximation is not mathematically defined. P.10, Section 4.3, Theorem 6 - $f$ is assumed to be invariant. Does it imply that the domain $\mathcal{G}'\_{n}$ is invariant under the action of the graph automorphism? 
Also, is it assumed that any graph in $\mathcal{G}'\_{n}$ has size $n$, that is, $\mathcal{G}'\_{n} \subset \mathcal{G}\_{n}$? P.17, Appendix E - I suggest describing the implementation details of the models used in the numerical experiments, specifically the libraries used to implement the models, the Bayesian hyperparameter optimization, and the evaluation code. Broader Impact Concerns: No concerns about broader impacts. ================================================== Review 4: Summary: The existing terminology, methods, and benchmarks for universal random features are highly diverse, which prevents the application of URF in practice. This paper proposes a new framework capturing all previous URF and provides a comprehensive comparison of all URF. On the theoretical side, the authors formally prove the universality of all instantiations of the proposed framework IRNI under natural conditions. Strengths and Weaknesses: Strengths 1. This paper is well-organized and well-written. The contributions are clearly illustrated in the Introduction, and background on several crucial concepts is introduced. 2. The theoretical contributions look good to me. The authors reveal the relationship between IR algorithms and MPNNs using IRNI. Weaknesses 1. The authors mention that there is no systematic comparison among these URF and that the trainability, generalizability, and expressivity are unclear. What is trainability? How do you measure it? 2. It would be better to highlight either the theoretical contribution or the comprehensive benchmark. The current version is insufficient to systematically compare all URF, particularly in terms of trainability and generalizability. The experiments are only conducted with GIN. Requested Changes: 1. Provide more details on the trainability of URF. 2. Highlight the main contribution appropriately. 
Broader Impact Concerns: No

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper introduces a novel approach which unifies existing random feature initialization methods using the individualization-refinement algorithm and shows the universality of Message Passing Neural Networks with IRNI for several settings. They also present a Bayesian approach for hyper-parameter tuning. The paper was appreciated by all reviewers, but they noted that its writing could be improved. The authors responded very well to the questions and revised the paper to the satisfaction of the reviewers, who now all agree on accepting it. While I do not think the paper needs a minor revision, I encourage the authors to add the clarifications to the last reviewer Urhj (and positioning/references) in the camera-ready version of the paper.

==================================================
# Constrained Parameter Inference As A Principle For Learning Nasir Ahmad *n.ahmad@donders.ru.nl* Department of Artificial Intelligence, Donders Institute, Radboud University Ellen Schrader *e.schrader@donders.ru.nl* Department of Artificial Intelligence, Donders Institute, Radboud University Marcel van Gerven m.vangerven@donders.ru.nl Department of Artificial Intelligence, Donders Institute, Radboud University Reviewed on OpenReview: *https: // openreview. net/ forum? id= CUDdbTT1QC* ## Abstract Learning in neural networks is often framed as a problem in which targeted error signals are directly propagated to parameters and used to produce updates that induce more optimal network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. We propose constrained parameter inference (COPI) as a new principle for learning. The COPI approach assumes that learning can be set up in a manner where parameters infer their own values based upon observations of their local neuron activities. We find that this estimation of network parameters is possible under the constraints of decorrelated neural inputs and top-down perturbations of neural states for credit assignment. We show that the decorrelation required for COPI allows learning at extremely high learning rates, competitive with that of adaptive optimizers, as used by BP. We further demonstrate that COPI affords a new approach to feature analysis and network compression. Finally, we argue that COPI may shed new light on learning in biological networks given the evidence for decorrelation in the brain. ## 1 Introduction Learning can be defined as the ability of natural and artificial systems to adapt to changing circumstances based on their experience. In biological and artificial neural networks this requires updating of the parameters that govern the network dynamics (Richards et al., 2019). 
A principled way of implementing learning in artificial neural networks is through the backpropagation of error (BP) algorithm (Linnainmaa, 1970; Werbos, 1974). BP is a gradient-based method which uses reverse-mode automatic differentiation to compute the gradients that are needed for individual parameter updating (Baydin et al., 2018). This approach relies on the repeated application of forward and backward passes through the network. In the forward (inference) pass, network activity is propagated forward to compute network outputs. In the backward (learning) pass, the loss gradient associated with the network outputs is propagated in the reverse direction for parameter updating. While effective, BP makes use of the transmission of gradients using biologically implausible non-local operations, and multiple separated network passes (Grossberg, 1987; Crick, 1989; Lillicrap et al., 2020). Alternative approaches, such as Hebbian learning and subspace methods circumvent this problem yet are restricted to unsupervised learning and do not afford (deep) credit assignment (Brea & Gerstner, 2016; Pehlevan et al., 2015). Here, we propose constrained parameter inference (COPI) as a new principle for learning. COPI uses information that can be made locally available at the level of individual parameters whose values are being inferred under certain constraints. Note that COPI is distinct from methods that rely on measuring gradients through activity differences (see the NGRAD hypothesis (Lillicrap et al., 2020)), in that no difference in activity needs to be computed to determine parameter updates. Specifically, by constructing a mixed network activity state - in the BP case a simple summation of the forward and backward passes for output units - parameters can infer their own optimal values by observation of node activities alone. 
This is distinct from many proposed biologically plausible methods which require parameters to measure differences in some activity, either physically using separate compartments/signals, or across time between two phases (Bengio, 2014; Scellier & Bengio, 2017; Ernoult et al., 2020; Whittington & Bogacz, 2017; Sacramento et al., 2018; Payeur et al., 2021). Thus, COPI provides a framework which might in the future enable online continuous learning where parameter updates are based upon single state measurements. Furthermore, the COPI algorithm is not tied to any particular credit assignment method. In this sense it assumes that credit can be assigned to units (by a user's method of choice) and simply describes how parameters should update their values given a network state observation. Credit assignment is integrated into COPI by top-down perturbations. The form of the perturbation is precisely what determines which credit-assignment algorithm is being used for learning within the system, whether based on backpropagation, feedback alignment (Lillicrap et al., 2016; Nøkland, 2016), target propagation (Bengio, 2014; Ahmad et al., 2020) or otherwise. Thus, COPI does not address credit assignment as such but rather proposes a general approach for learning based upon a single mixed-state regime. In the following, we demonstrate that COPI provides a powerful framework for learning which is at least as effective as backpropagation of error while having the potential to rely on local operations only. Moreover, as will be shown, COPI allows for efficient linear approximations that facilitate feature visualisation and network compression. Hence, it also provides benefits in terms of interpretable and efficient deep learning. This has direct implications for modern machine learning as COPI can be used as a replacement for the parameter-updating step in backpropagation applications across a wide range of settings.
## 2 Methods

In this section, we develop the constrained parameter inference (COPI) approach and describe its use for parameter estimation in feedforward neural networks.

## 2.1 Deep Neural Networks

Let us consider deep neural networks consisting of $L$ layers. The input-output transformation in layer $l$ is given by $y_l = f(a_l) = f(W_l x_l)$ with output $y_l$, activation function $f$, activation $a_l = W_l x_l$, input $x_l$ and weight matrix $W_l \in \mathbb{R}^{K_l \times K_{l-1}}$, where $K_l$ indicates the number of units in layer $l$. As usual, the input to a layer $l > 1$ is given by the output of the previous layer, that is, $x_l = y_{l-1}$. Learning in deep neural networks amounts to determining for each layer in the network a weight update $\Delta_{W_l}$ such that the update rule

$$W_l \leftarrow W_l + \eta \Delta_{W_l}$$

converges towards those weights that minimize a loss $\ell$ for some dataset $\mathcal{D}$ given a suitable learning rate $\eta > 0$. Locally, the optimum by gradient descent (GD) is to take a step in the direction of the negative expected gradient of the loss. That is, $\Delta_{W_l}^{\mathrm{gd}} = -\mathbb{E}\left[\nabla_{W_l} \ell\right]$, where, in practice, the expectation is taken under the empirical distribution.

## 2.2 Constrained Parameter Inference In Feedforward Systems

Here, we develop an alternative approach and relate it directly to both stochastic gradient descent and local parameter inference. Note that the key transformation in a deep feedforward neural network is carried out by a weight matrix given by $a_l = W_l x_l$. Suppose we know the target activation $z_l$ for this transformation. This can be expressed as an alternative transformation

$$z_{l}=W_{l}^{*}x_{l}$$

for some desired weight matrix $W_l^*$. Ideally, we would like to use a learning algorithm which guarantees convergence of the current weight matrix to the desired weight matrix. A straightforward proposal is to carry out a decay from the current weight values to the desired weight values, such that the weight update is of the form

$$\Delta_{W_{l}}=\mathbb{E}\left[W_{l}^{*}-W_{l}\right]=W_{l}^{*}-W_{l}\,.\tag{1}$$
Of course, the key goal is to achieve this weight update without making use of the (unknown) desired weights. How to achieve this is described in the following sections.

## 2.3 Learning The Forward Weights

Let us rewrite the desired weight matrix in the following way:

$$W_{l}^{*}=W_{l}^{*}\left(\mathbb{E}\left[x_{l}x_{l}^{\top}\right]\mathbb{E}\left[x_{l}x_{l}^{\top}\right]^{-1}\right)=\mathbb{E}\left[W_{l}^{*}x_{l}x_{l}^{\top}\right]\mathbb{E}\left[x_{l}x_{l}^{\top}\right]^{-1}=\mathbb{E}\left[z_{l}x_{l}^{\top}\right]\mathbb{E}\left[x_{l}x_{l}^{\top}\right]^{-1}$$

with $\mathbb{E}\left[x_{l}x_{l}^{\top}\right]$ the (sample) autocorrelation matrix. We here assume this matrix to be invertible, though this condition is later shown to be unnecessary. If we plug this back into Eq. (1) then we obtain

$$\Delta_{W_{l}}=\mathbb{E}\left[z_{l}x_{l}^{\top}\right]\mathbb{E}\left[x_{l}x_{l}^{\top}\right]^{-1}-W_{l}\,,\tag{2}$$

allowing the weight update to be expressed in terms of target outputs, $z_l$, rather than (unknown) desired weights. This is an expression of the least-squares optimization algorithm. Let us assume for the moment that the inputs $x_l$ are distributed such that they have zero covariance and unit variance, i.e., the inputs are whitened. This implies that the autocorrelation matrix is given by the identity matrix, that is, $\mathbb{E}\left[x_{l}x_{l}^{\top}\right] = I$. In this case, Eq. (2) reduces to the simple update rule

$$\Delta_{W_{l}}=\mathbb{E}\left[z_{l}x_{l}^{\top}\right]-W_{l}\,.$$

In practice, it may be unreasonable (and perhaps even undesirable) to assume perfectly whitened input data. A more realistic and achievable scenario is one in which we make the less restrictive assumption that the data is decorrelated rather than whitened. This implies that the autocorrelation matrix is diagonal, and that $\mathbb{E}\left[x_{l}x_{l}^{\top}\right] = \mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)$ with $x_{l}^{2}$ the vector of squared elements of $x_l$. Right-multiplying both sides of Eq.
(2) by this expression, and assuming that $\mathbb{E}\left[x_{l}x_{l}^{\top}\right]$ is indeed diagonal, we obtain

$$\Delta_{W_{l}}\,\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)=\mathbb{E}\left[z_{l}x_{l}^{\top}\right]-W_{l}\,\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\,.$$

This matrix multiplication amounts to a rescaling of the columns of $\Delta_{W_l}$, and thereby a relative scaling of their learning rates. This finally leads to our constrained parameter inference (COPI) learning rule

$$\Delta_{W_{l}}^{\mathrm{copi}}=\mathbb{E}\left[z_{l}x_{l}^{\top}\right]-W_{l}\,\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\,,\tag{3}$$

which is solely composed of a Hebbian correlational learning term and a weight decay term. COPI receives its name from the fact that there are two constraints in play: first, the availability of a target or 'desired' state $z_l$ for each layer and, second, the requirement that the inputs $x_l$ are decorrelated.

## 2.4 Input Decorrelation

We did not yet address how to ensure that the inputs to each layer are decorrelated. To this end, we introduce a new decorrelation method which transforms the potentially correlation-rich outputs $y_{l-1}$ of a layer into decorrelated inputs $x_l$ to the following layer using the transformation

$$x_{l}=R_{l}y_{l-1}\,,$$

where $R_l$ is a decorrelating 'lateral' weight matrix. We set out to reduce the correlation in the output data $x_l$ by measurement of its correlation and inducing a shift toward lower correlation. In particular, consider a desired change in a given sample of the form

$$x_{l}\gets x_{l}-\eta\left(\mathbb{E}\left[x_{l}x_{l}^{\top}\right]-\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\right)x_{l}\,,$$

where the expectations could be taken over the empirical distribution. We can shift this decorrelating transform from the output activities $x_l$ to the decorrelating matrix $R_l$.
To do so, consider substituting $x_l = R_l y_{l-1}$, such that we may write

$$\begin{aligned} x_{l} &\gets x_{l}-\eta\left(\mathbb{E}\left[x_{l}x_{l}^{\top}\right]-\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\right)x_{l} \\ &\gets R_{l}y_{l-1}-\eta\left(\mathbb{E}\left[x_{l}x_{l}^{\top}\right]-\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\right)R_{l}y_{l-1} \\ &\gets \left(R_{l}-\eta\left(\mathbb{E}\left[x_{l}x_{l}^{\top}\right]-\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\right)R_{l}\right)y_{l-1}\,. \end{aligned}$$

We converge to the optimal decorrelating matrix using an update $R_l \leftarrow R_l + \eta\Delta_{R_l}^{\mathrm{copi}}$, where

$$\Delta_{R_{l}}^{\mathrm{copi}}=-\left(\mathbb{E}\left[x_{l}x_{l}^{\top}\right]-\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\right)R_{l}=-\left(\mathbb{E}\left[x_{l}q_{l}^{\top}\right]-\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)R_{l}\right)\tag{4}$$

with $q_l = R_l x_l$. Note that this decorrelating transform can also be derived rigorously as a gradient descent method, see Appendix A. The update can be made more local and therefore more biologically plausible by an approximation, which we explore here. To make the information locally available for the update of the decorrelation, we assume that this correlation reduction can be carried out in either direction - with the target of correlation reduction and source being exchangeable. This assumption allows us to arrive at a more biologically plausible learning rule given by

$$\Delta_{R_{l}}^{\mathrm{bio-copi}}=-\left(\mathbb{E}\left[q_{l}x_{l}^{\top}\right]-R_{l}\,\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)\right)\tag{5}$$

with $q_l = R_l x_l$. Equation (5) has exactly the same form as the COPI rule (3) for learning the forward weights, though now acting to update the lateral weights in the same manner as the COPI forward learning rule - using target states $q_l$.
These target states are now the total amount of decorrelating signal being provided to a given unit. In effect, the lateral weights are also constantly inferring correlational structure within the activities of a layer of units but, given the negatively-signed update, they update their values to reduce correlation instead. This rule is more biologically plausible than the theoretically derived decorrelation rule since an update of a lateral weight $r_{ij}$ connecting unit $j$ to unit $i$ relies on $q_i x_j$ rather than $x_i q_j$. That is, an update local to unit $i$ only requires access to its own target state rather than the target state of other units. Furthermore, the weight decay term is now scaled based upon the pre-synaptic activity, which is propagated via the synaptic connection, rather than the post-synaptic activity. As this information is available to the post-synaptic unit and synaptic connection, respectively, it can be used for parameter updating. Both the theoretically derived rule and its more biologically plausible formulation consistently reduce correlation in the output states $x_l$, as shown in Appendix A.

## 2.5 Error Signals

Equation (3) expresses learning of forward weights in terms of target states $z_l$. However, without access to targets for each layer of a deep neural network model, one may wonder how this learning approach could be applied in the first place. To this end, we assume that the target states can be expressed as

$$z_{l}=a_{l}+\alpha\delta_{l}$$

with $\delta_l$ an error signal which perturbs the neuron's state in a desired direction and $\alpha$ a gain term controlling the strength of the perturbation. Note that if we can directly access the $\delta_l$, then the optimal weight change to push future responses in the desired direction $\delta_l$ is simply $\delta_l x_l^{\top}$. However, here we assume that the weight-modification process cannot directly access $\delta_l$, but only 'sees' the perturbed activities $z_l = a_l + \alpha\delta_l$.
In this case, as explained above, COPI is necessary to produce correct weight changes and push future responses towards these target activities. Different error signals can be used to induce effective perturbations. According to stochastic gradient descent (SGD), the optimal perturbation is given by

$$\delta_{l}^{\mathrm{sgd}}=-{\frac{d\ell}{d a_{l}}}$$

as this guarantees that the neuronal states are driven in the direction of locally-decreasing loss. The error signal at the output layer is given by $\delta_{L}^{\mathrm{sgd}} = -\frac{\partial\ell}{\partial a_{L}} = -\mathrm{diag}(f'(a_{L}))\frac{\partial\ell}{\partial y_{L}}$. Starting from $\delta_{L}^{\mathrm{sgd}}$, the layer-specific perturbation can be computed via backpropagation by propagating the error signal from output to input according to $\delta_{l}^{\mathrm{sgd}} = \frac{\partial a_{l+1}}{\partial a_{l}}\,\delta_{l+1}^{\mathrm{sgd}}$ with $\frac{\partial a_{l+1}}{\partial a_{l}} = \mathrm{diag}(f'(a_{l}))\,R_{l+1}^{\top}W_{l+1}^{\top}$. While gradient-based error signals provide a gold standard for the optimal perturbation, we may want to replace this by more biologically-plausible credit assignment methods. These methods typically use the same error signal $\delta_L \triangleq \delta_{L}^{\mathrm{sgd}}$ for the output layer but propagate the error signal in the input direction using different proposal mechanisms. As an illustration of such an alternative error signal, we consider feedback alignment (FA) (Lillicrap et al., 2016), which supposes that the perturbation from the previous layer can be propagated through fixed random top-down weights $B_{l+1}$ instead of the transposed weights $(W_{l+1}R_{l+1})^{\top}$, as a way to address the so-called weight transport problem (Grossberg, 1987). Hence, in FA, the layer-wise perturbations are propagated by

$$\delta_{l}^{\mathrm{fa}}=\mathrm{diag}(f^{\prime}(a_{l}))B_{l+1}\delta_{l+1}^{\mathrm{fa}}\,.$$

In our analyses, we will restrict ourselves to comparing error signals provided by backpropagation and feedback alignment only.
Note, however, that other credit assignment methods such as direct feedback alignment (Nøkland, 2016) or target propagation and its variations (Bengio, 2014; Dalm et al., 2023) can be seamlessly integrated in our setup if desired.

## 2.6 Stochastic Copi

Stochastic COPI replaces the expectations over the empirical distributions in Eqs. (3) and (4) by single data points, analogous to stochastic gradient descent (SGD). COPI training on single data points proceeds by computing the stochastic weight updates. For all COPI implementations, the forward weight updates are given by

$$\Delta_{W_{l}}^{\mathrm{copi}}=z_{l}x_{l}^{\top}-W_{l}\,\mathrm{diag}\left(x_{l}^{2}\right)$$

with target states $z_l = a_l + \alpha\delta_l$, given some suitable error signal $\delta_l$. The decorrelating lateral weight updates are given by

$$\Delta_{R_{l}}^{\mathrm{copi}}=-\left(x_{l}q_{l}^{\top}-\mathrm{diag}\left(x_{l}^{2}\right)R_{l}\right)$$

with $q_l = R_l x_l$. In practice, as usual, we train on minibatches instead of individual data points. See Algorithm 1 for a pseudo-algorithm which uses gradient-based error signals and a quadratic loss. For comparison against SGD, it is instructive to consider (stochastic) COPI applied to a single-layer neural network. Recall that the SGD update of a single-layer network is given by $\Delta_{W}^{\mathrm{sgd}} = -\frac{d\ell}{da}x^{\top}$. We can manipulate this expression in order to relate SGD to COPI as follows:

$$\Delta_{W}^{\mathrm{sgd}}=-\frac{d\ell}{da}x^{\top}=\left(a-\frac{d\ell}{da}\right)x^{\top}-ax^{\top}=\left(a+\delta^{\mathrm{sgd}}\right)x^{\top}-W\left(xx^{\top}\right)\,.\tag{6}$$
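To make the stochastic updates concrete, the following is a minimal NumPy sketch (our illustration, not the authors' implementation): a single linear layer whose weights infer a fixed target mapping from perturbed activities alone, while the lateral matrix simultaneously decorrelates the inputs. The layer sizes, learning rates, and the oracle perturbation $\delta = W^{*}x - a$ (standing in for a backpropagated error signal) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K_in, K_out, batch = 8, 4, 256

W = np.zeros((K_out, K_in))                        # forward weights, inferred by COPI
R = np.eye(K_in)                                   # decorrelating lateral weights
W_star = rng.normal(size=(K_out, K_in))            # hypothetical target mapping (oracle)
M = rng.normal(size=(K_in, K_in)) / np.sqrt(K_in)  # mixing matrix producing correlated inputs

eta_W, eta_R, alpha = 0.1, 0.05, 1.0
init_err = np.abs(W - W_star).max()

for _ in range(2000):
    y_prev = M @ rng.normal(size=(K_in, batch))    # correlated layer inputs y_{l-1}
    x = R @ y_prev                                 # decorrelated inputs x_l = R_l y_{l-1}
    a = W @ x                                      # activations a_l = W_l x_l
    z = a + alpha * (W_star @ x - a)               # perturbed target state z_l = a_l + alpha*delta_l

    # Forward COPI rule: Delta_W = E[z x^T] - W diag(E[x^2])
    W += eta_W * ((z @ x.T) / batch - W * (x**2).mean(axis=1))

    # Lateral COPI rule: Delta_R = -(E[x x^T] - diag(E[x^2])) R
    C = (x @ x.T) / batch
    R += eta_R * (-(C - np.diag(np.diag(C))) @ R)

final_err = np.abs(W - W_star).max()               # W has moved toward W_star
offdiag = np.abs(C - np.diag(np.diag(C))).max()    # inputs end up (nearly) decorrelated
```

Since the decorrelation drives $\mathbb{E}\left[x x^{\top}\right]$ toward $\mathrm{diag}\left(\mathbb{E}\left[x^{2}\right]\right)$, the fixed point of the forward rule approaches $W^{*}$; with a backpropagated or feedback-alignment $\delta_l$ in place of the oracle signal, the same loop realizes the full algorithm.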
**Algorithm 1** COPI training using gradient-based error signals and a quadratic loss:

```
▷ Parameters: learning rates ηR and ηW; gain term α; number of epochs; batch size
for each epoch do
  for each batch = {(y0, y*)} ⊂ data do
    for layer l from 1 to L do                          ▷ Forward pass
      xl = Rl yl−1                                      ▷ Decorrelate the input data
      al = Wl xl                                        ▷ Compute activation
      yl = f(al)                                        ▷ Compute output
    end for
    ℓ = ||yL − y*||²                                    ▷ Compute loss
    for layer l from L to 1 do                          ▷ Backward pass
      δl = −dℓ/dal if l = L else δl = (∂al+1/∂al) δl+1  ▷ Compute learning signal
    end for
    for layer l from L to 1 do                          ▷ Update parameters
      Wl ← Wl + ηW ((al + αδl) xl⊤ − Wl diag(xl²))      ▷ Update forward weights
      Rl ← Rl − ηR (xl (Rl xl)⊤ − diag(xl²) Rl)         ▷ Update lateral weights
    end for
  end for
end for
return network
```

This update looks similar to the (stochastic) COPI update $\Delta_{W}^{\mathrm{copi}} = \left(a+\alpha\delta^{\mathrm{sgd}}\right)x^{\top} - W\,\mathrm{diag}\left(x^{2}\right)$. The key difference, however, is that, in contrast to SGD, COPI ensures that the inputs are decorrelated, as realized by the COPI update $\Delta_{R}^{\mathrm{copi}}$. Therefore, the weight decay term is unaffected by sample-wise input cross-correlations. The weight decay term for COPI is Hebbian in nature since the update of $w_{ij}$ relies on $[W\,\mathrm{diag}\left(x^{2}\right)]_{ij} = w_{ij}x_{j}^{2}$, which is local to the synapse. In contrast, SGD is non-Hebbian in nature since it relies on $[W\left(xx^{\top}\right)]_{ij} = \sum_{k} w_{ik}x_{k}x_{j}$, which is not local to the synapse.
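This locality distinction can be verified numerically; a small NumPy check (with arbitrary shapes chosen for illustration) confirms that the COPI decay term factorizes per synapse, whereas the SGD term mixes all synapses of a row:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))   # weights w_ij
x = rng.normal(size=5)        # (decorrelated) inputs x_j

# COPI decay term: [W diag(x^2)]_ij = w_ij * x_j^2 -> local to synapse ij
copi_term = W @ np.diag(x**2)
assert np.allclose(copi_term, W * x**2)

# SGD decay term: [W (x x^T)]_ij = sum_k w_ik x_k x_j -> depends on all synapses w_ik of unit i
sgd_term = W @ np.outer(x, x)
manual = np.array([[sum(W[i, k] * x[k] * x[j] for k in range(5)) for j in range(5)]
                   for i in range(3)])
assert np.allclose(sgd_term, manual)
```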
## 2.7 Linear Approximation Of Non-Linear Transformations

An additional benefit of the decorrelating properties of COPI is that it allows for the efficient approximation of one or more non-linear transformations by a linear matrix. As will be shown later, this has applications in interpretable and efficient deep learning. Let us consider a neural network as a non-linear transformation $y = f(x)$. We are interested in computing its linear approximation, given by $y = Bx$. Suppose we have access to empirical data $X = \left[x^{(1)}, \ldots, x^{(N)}\right]$ and $Y = \left[y^{(1)}, \ldots, y^{(N)}\right]$ such that $y^{(n)} = f\left(x^{(n)}\right)$ is the $n$-th input-output pair. The standard ordinary least squares solution for $B$ is given by

$$B = YX^{\top}\left(XX^{\top}\right)^{-1}.$$

However, this requires the computation of a matrix inverse, which can be prohibitive for large matrices. Networks trained using COPI can instead make use of the property that inputs are decorrelated by construction. This implies that $\left(XX^{\top}\right)^{-1} = \mathrm{diag}\left(1/x_{1}x_{1}^{\top}, \ldots, 1/x_{M}x_{M}^{\top}\right) \triangleq C$ with $x_m$ the $m$th row of $X$. This allows us to compute a transformation from such decorrelated inputs as

$$B = YX^{\top}C\,.$$

Hence, in networks with decorrelated layer-wise inputs or activities, such as produced by COPI, linear approximations of transformations can be efficiently computed without computing inverses. We will use this property in Section 3.2 for both feature visualization and network compression. Note that this is ultimately the mechanism by which COPI also operates, though in a sample/batch-wise manner with a changing output distribution due to fluctuating learning signals. Specifically, consider the COPI algorithm at its fixed point, such that $\Delta_{W_{l}}^{\mathrm{copi}} = \mathbb{E}\left[z_{l}x_{l}^{\top}\right] - W_{l}\,\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right) = 0$. Under this condition, we can re-arrange to obtain $W_{l} = \mathbb{E}\left[z_{l}x_{l}^{\top}\right]\mathrm{diag}\left(\mathbb{E}\left[x_{l}^{2}\right]\right)^{-1}$, which is equivalent to our above formulation though under the assumption of a fixed desired output $z_l$.
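As an illustration of the inverse-free estimate (a NumPy sketch with synthetic data; the decorrelated inputs are simulated directly rather than taken from a trained network), $B = YX^{\top}C$ closely matches the ordinary least-squares solution when the rows of $X$ are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 6, 20000

# Simulated decorrelated inputs: independent rows with distinct variances.
X = rng.normal(size=(M, N)) * rng.uniform(0.5, 1.5, size=(M, 1))
# Outputs of some non-linear transformation we wish to approximate linearly.
Y = np.tanh(rng.normal(size=(3, M)) @ X)

# Ordinary least squares: B = Y X^T (X X^T)^{-1} requires a matrix inverse.
B_ols = Y @ X.T @ np.linalg.inv(X @ X.T)

# Inverse-free estimate: (X X^T)^{-1} ~ diag(1 / x_m x_m^T) for decorrelated rows.
C = np.diag(1.0 / np.sum(X**2, axis=1))
B_copi = Y @ X.T @ C

max_dev = np.abs(B_ols - B_copi).max()  # small: sample cross-correlations are O(1/sqrt(N))
```

The diagonal shortcut costs only a row-wise sum instead of an $M \times M$ inversion, which is what makes the layer-removal procedure in Section 3.2 cheap.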
## 3 Results

In the following, we analyse both the convergence properties of COPI and the benefits of the decorrelated representations at every network layer.

## 3.1 COPI Performance On Standard Benchmarks

![6_image_0.png](6_image_0.png)

Figure 1: **COPI vs BP performance on standard computer vision classification tasks**. A) Train/test accuracy and loss of a seven-layer (see graphical depiction), fully-connected, feedforward deep neural network model trained and tested on the handwritten MNIST dataset. B) Train/test accuracy of a five-layer, fully-connected, feedforward deep neural network model trained and tested on the CIFAR-10 dataset. All networks were run with five random seeds and envelopes show standard deviation across these networks.

To validate COPI as a principle for learning, we compared it against backpropagation by training fully-connected deep feedforward neural networks on the MNIST handwritten digit dataset (LeCun et al., 2010) and the CIFAR-10 image dataset (Krizhevsky & Hinton, 2009). COPI was simulated together with both gradient-based ($\delta_l^{\mathrm{sgd}}$) and feedback alignment-based ($\delta_l^{\mathrm{fa}}$) perturbations with a loss function composed of the quadratic loss between network outputs and the one-hot encoded labels as 'desired' output. To clarify the benefits of the COPI decorrelation process, we additionally trained networks in which the forward weights $W_l$ were updated by BP using vanilla SGD and lateral weights $R_l$ were introduced and trained by the COPI decorrelating algorithm, labelled 'BP (decorr)'. Furthermore, we obtained baselines with backpropagation alone (no decorrelation) combined with the Adam optimizer (Kingma & Ba, 2014). Figure 1 shows the results of these simulations using learning parameters as described in Appendix B. As can be seen, gradient-based COPI learning is extremely effective.
During training, COPI achieves higher accuracy and lower loss than even an adaptive optimization approach (Adam) on the more challenging CIFAR-10 dataset. When BP is combined with decorrelation, training loss remains consistently lower for COPI across both datasets. We can only attribute this benefit to the explicit difference in the forward COPI and BP rules, where COPI relies on the built-in assumption that the inputs have a decorrelated form. During testing, we observe that COPI slightly outperforms BP (adam) in terms of accuracy on CIFAR-10 and performs consistently better than BP with decorrelation on both datasets. The performance of the more biologically plausible BIO-COPI variant is close to identical to that of regular COPI. COPI learning using feedback alignment was also feasible, albeit less effective, consistent with the literature (Nøkland, 2016). This demonstrates that different error signals can easily be plugged into the COPI framework when desired. Note further that results generalize to different network depths, as shown in Appendix C, as well as to other loss functions, as shown in Appendix D for the cross-entropy loss.

Table 1: Peak performance ± standard deviation of the networks shown in Figure 1. Also provided in brackets is the mean epoch at which the networks reached 99% of peak performance.
| Method | MNIST (train) | MNIST (test) | CIFAR-10 (train) | CIFAR-10 (test) |
|---|---|---|---|---|
| bp (adam) | 1.0 ± 0.0 (6) | 0.9838 ± 0.0004 (5) | 0.9998 ± 0.0001 (53) | 0.5619 ± 0.0023 (36) |
| bp (decorr) | 1.0 ± 0.0 (3) | 0.9812 ± 0.0009 (3) | 1.0 ± 0.0 (8) | 0.5616 ± 0.0047 (8) |
| copi (bp) | 1.0 ± 0.0 (3) | 0.9834 ± 0.0007 (3) | 1.0 ± 0.0 (7) | 0.5729 ± 0.0016 (10) |
| copi (fa) | 1.0 ± 0.0 (7) | 0.9740 ± 0.0010 (4) | 1.0 ± 0.0 (13) | 0.5207 ± 0.0022 (6) |
| bio-copi (bp) | 1.0 ± 0.0 (3) | 0.9835 ± 0.0009 (3) | 1.0 ± 0.0 (8) | 0.5730 ± 0.0040 (9) |

Arguably, the largest gain is obtained in terms of convergence speed when using COPI as a learning mechanism. In general, we find that (BIO-)COPI and decorrelating BP (which uses the COPI mechanism for decorrelation) are able to learn much faster than conventional BP with an adaptive optimizer (Adam). As can be seen in Table 1, models employing decorrelation reach close to peak performance (within 99% of peak performance) much more rapidly. This is not simply due to the choice of learning rate, since performance drops at higher learning rates when using Adam.

## 3.2 Decorrelation For Feature Analysis And Network Compression

The COPI algorithm's requirement for decorrelation at every network layer is not only a restriction but also proves beneficial in a number of ways. We explore the decorrelation, as well as the analyses and computations that it enables. The proposed decorrelation method produces a representation similar to existing whitening methods, such as ZCA (Bell & Sejnowski, 1997). Figure 2A provides a visualisation of a randomly selected set of data samples from the MNIST dataset. From top to bottom are shown: the unprocessed samples, samples processed by the first decorrelating layer of a network trained on MNIST with the COPI algorithm (see MNIST networks described in Figure 1A), and finally a visualisation of samples transformed by a ZCA transform computed on the whole training set.
As can be seen, there is qualitative similarity between the COPI- and ZCA-processed data samples. Remaining differences are attributed to the fact that COPI does not scale the individual elements of these samples (pixels) for unit variance, i.e. whitening, but instead produces decorrelation alone in a distributed fashion. Beyond the input transformation, COPI allows for visualisation of features deeper in the network by exploiting decorrelated layer-wise inputs. That is, we may use the decorrelated input training dataset $X$ and corresponding unit activations $A_l$ to form a linear approximation $A_l = B_l X$ of the receptive field of units deep in the network using the procedure described in Section 2.7. Here, the $i$-th row of $B_l$ provides an average, linear, feature response for unit $i$ in layer $l$ of the network. Figure 2C shows such extracted features from a random selection of 100 units from the second, fourth and sixth layer and 10 units from the seventh layer of a network trained using COPI on MNIST. These results are from a single network used to produce the results shown in Figure 2A.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 2: **The effect and utility of decorrelation within COPI network layers for feature readout and network compression.** A) Visualisation of the MNIST dataset (top), the decorrelating transformation produced by the COPI algorithm (middle), and the whitening transformation produced by ZCA whitening (bottom). B) The decorrelated layer-wise inputs of COPI-trained networks could also be used to efficiently infer linear mappings between distant layers of the network. This allows the removal of network layers and their replacement with an inferred, linear approximation of those intermediate layers. Plotted are the performances of the seven-layer MNIST-trained COPI networks (left) and five-layer CIFAR-10-trained COPI networks (right) from Figure 1. This network compression process is repeated for five randomly seeded networks and error bars show standard deviation across these repeats. Layers are removed from the output layer backwards. Here, the left-most bars (seven/five layers for MNIST/CIFAR-10) correspond to the initial unmodified networks, which are provided for comparison. C) Using the decorrelated network inputs of the COPI networks (see middle row in A), the decorrelated inputs, $x$, from the entire training set could be (pixel-wise) correlated with the network's layer-wise outputs, $a_l$ (node-wise). This produced a linear approximation of the units' preferred features from different network layers.

This allows us to observe an increasingly digit-oriented feature preference for units as we go deeper into the network, as well as the presence of digit/anti-digit-selective units. This same mechanism of producing a linear approximation of receptive fields also provides a computationally efficient method to approximate the transformation produced by multiple layers of a COPI-trained network with a linear matrix. That is, we may employ the COPI principle to infer a linear matrix $B_l$ approximating the transformation across multiple network layers such that $A_l \approx B_l X_k$ where $X_k$ is the decorrelated output of some layer $k < l$. This approach allowed us to rapidly convert MNIST and CIFAR-10 trained networks into networks consisting of any smaller number of layers, effectively providing a straightforward approach for network compression. Figure 2B shows the performance impact of such conversions for the seven-layer network trained on MNIST depicted in Figure 1A and the five-layer network trained on CIFAR-10 shown in Figure 1B. Note that this approximation is done in a single step using the network's response to the training data and does not require any retraining. Network performance is shown to stay relatively high despite the approximation and removal of layers.
In fact, for CIFAR-10, we even see that this approximation returns some small gain in test-set performance. Note that layers are removed sequentially from the end of the network and, as can be seen, there is a significant drop in performance when all layers of each network are approximated, indicating that the transformation in the first layer is crucial for achieving high performance levels. This is consistent with the change in performance when retraining networks consisting of a smaller number of layers, as shown in Appendix C.

## 4 Discussion

In this paper, we introduced constrained parameter inference as a new approach to learning in feedforward neural networks. We derived an effective local learning rule and showed that, under the right conditions, individual weights can infer their own values. The locality required the removal of confounding influences between unit activities within every layer of the neural network, and to this end, we derived an efficient decorrelation rule. We further assumed error signals were available to perturb unit activations towards more desirable states from which the system could learn. The resulting algorithm allowed us to effectively train deep feedforward neural networks, with performance competitive with that of backpropagation for both gradient-based and feedback alignment signals. Furthermore, our setup enables much higher effective learning rates than are possible with vanilla BP and thus allows us to learn at speeds exceeding those possible even using adaptive optimizers. This may contribute to reducing the carbon footprint of training large network models (Strubell et al., 2019). The algorithm also allows for more interpretable deep learning via the visualisation of deep decorrelated features (Rudin, 2019; Ras et al., 2022) and could contribute to efficient deep learning as it facilitates network compression (Wang, 2021).
Going forward, it is important to expand the tasks to which COPI is applied and investigate its application to a broader class of network architectures. For example, COPI is in principle compatible with other network components such as convolutional layers, but requires careful consideration as to how to carry out decorrelation in an optimal manner. From a theoretical standpoint, COPI relates to unsupervised methods for subspace learning (Oja, 1982; Földiák & Young, 1998; Pehlevan et al., 2015). In particular, the form of the learning rule we propose bears a resemblance to Oja's rule (Oja, 1982), though it focuses on inference of parameters in the face of perturbations instead of latent factor extraction. See Appendix E for a comparison. Aside from unsupervised methods, the inference of parameters based upon input and output activities has been previously proposed to overcome the weight-transport problem (Akrout et al., 2019; Ahmad et al., 2021; Guerguiev et al., 2019). In particular, these methods attempt to learn the feedback connectivity required for backpropagation via random stimulation of units and a process of weight inference. Our method similarly attempts to carry out weight inference, but does so without random stimulation and with the purpose of learning the forward model in combination with top-down perturbations. It is also interesting to note that our decorrelating mechanism captures some of the key elements of batch normalization (Ioffe & Szegedy, 2015; Huang et al., 2018). First, vanilla batch normalization makes use of demeaning, a natural outcome of our decorrelation. Furthermore, whitening of batches has recently been shown to be an extremely effective batch-wise processing stage, yielding state-of-the-art performance on a number of challenging classification tasks (Huang et al., 2018), and reduction of covariance between hidden unit activities has been found to be a generalisation-encouraging regularizer (Cogswell et al., 2015).
However, unlike all of these methods, our method is not batch-computed and is instead a fixed component of the network architecture, learned over the course of the whole dataset and integrated as a network component. COPI may also shed light on learning and information processing in biological systems. There is both experimental and theoretical evidence that input decorrelation is a feature of neural processing through a number of mechanisms including inhibition, tuning curves, attention, and eye movements (Franke et al., 2017; Bell & Sejnowski, 1997; Pitkow & Meister, 2012; Segal et al., 2015; Vogels et al., 2011; Abbasi-Asl et al., 2016; Cohen & Maunsell, 2009; Dodds et al., 2019; Graham et al., 2006). In particular, center-surround filters of the LGN appear to produce a form of whitening. Whitening also appears to be key for sparse coding of visual inputs (King et al., 2013). To what extent there is decorrelation between all units projecting to a neuron is of course questionable, though COPI has the potential for modification to account for global or local correlations. The more biologically plausible decorrelation rule described in Section 2.4 suggests how the decorrelation rule here might operate in a local fashion. Beyond this, inhibitory and excitatory balance (Denève & Machens, 2016) has been formulated in a fashion which can be viewed as encouraging decorrelation. Learning rules which capture excitatory/inhibitory balance, such as the one by Vogels et al. (2011), rely on correlative inhibition between units, which in turn reduce the inter-unit covariance. Such detailed balance has been observed across cortical areas and so it does not seem unreasonable to consider this as a method to encourage decorrelation of not just the input but also downstream 'hidden' layers of neural circuits. When considering biological plausibility, the current work assumes that error signals are available and do not interfere with ongoing network activity. 
This means that we rely on a two-phase credit assignment process. For a fully online implementation, the error machinery should be integrated into a single mixed pass, which is an area for future exploration. We conclude that constrained parameter inference allows for efficient and effective training of deep feedforward neural networks while also providing a promising route towards biologically plausible deep learning.

## References

Reza Abbasi-Asl, Cengiz Pehlevan, Bin Yu, and Dmitri Chklovskii. Do retinal ganglion cells project natural scenes to their principal subspace and whiten them? In 2016 50th Asilomar Conference on Signals, Systems and Computers, pp. 1641–1645, November 2016. Nasir Ahmad, Marcel A J van Gerven, and Luca Ambrogioni. GAIT-prop: A biologically plausible learning rule derived from backpropagation of error. *Advances in Neural Information Processing Systems*, 33, December 2020. Nasir Ahmad, Luca Ambrogioni, and Marcel A J van Gerven. Overcoming the weight transport problem via spike-timing-dependent weight inference. *Neurons, Behavior, Data Analysis, and Theory*, 5(3):1–20, August 2021. Mohamed Akrout, Collin Wilson, Peter C Humphreys, Timothy Lillicrap, and Douglas Tweed. Deep learning without weight transport. *ArXiv:1904.05391*, April 2019. Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. *Journal of Machine Learning Research*, 18:1–43, January 2018. Anthony J Bell and Terrence J Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, December 1997. Yoshua Bengio. How auto-encoders could provide credit assignment in deep networks via target propagation. ArXiv, July 2014. Johanni Brea and Wulfram Gerstner. Does computational neuroscience need new synaptic learning paradigms? *Current Opinion in Behavioral Sciences*, 11:61–66, October 2016.
Michael Cogswell, Faruk Ahmed, Ross B Girshick, C L Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. *International Conference on Learning Representations*, May 2015. Marlene R Cohen and John H R Maunsell. Attention improves performance primarily by reducing interneuronal correlations. *Nature Neuroscience*, 12(12):1594–1600, December 2009. Francis Crick. The recent excitement about neural networks. *Nature*, 337:129–132, 1989. Sander Dalm, Nasir Ahmad, Luca Ambrogioni, and Marcel A J van Gerven. Gradient-adjusted incremental target propagation provides effective credit assignment in deep neural networks. *Transactions on Machine* Learning Research, pp. 1–12, 2023. Sophie Denève and Christian K Machens. Efficient codes and balanced networks. *Nature Neuroscience*, 19 (3):375–382, March 2016. Eric Mcvoy Dodds, Jesse Alexander Livezey, and Michael Robert DeWeese. Spatial whitening in the retina may be necessary for V1 to learn a sparse representation of natural scenes. *BioRxiv*, pp. 776799, September 2019. Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, and Benjamin Scellier. Equilibrium propagation with continual weight updates. *ArXiv*, April 2020. Peter Földiák. Forming sparse representations by local anti-Hebbian learning. *Biological Cybernetics*, 64(2): 165–170, December 1990. Peter Földiák and Malcolm P Young. Sparse coding in the primate cortex. In *The Handbook of Brain Theory* and Neural Networks. MIT Press, October 1998. Katrin Franke, Philipp Berens, Timm Schubert, Matthias Bethge, Thomas Euler, and Tom Baden. Inhibition decorrelates visual feature representations in the inner retina. *Nature*, 542(7642):439–444, February 2017. Daniel J Graham, Damon M Chandler, and David J Field. Can the theory of "whitening" explain the center-surround properties of retinal ganglion cell receptive fields? *Vision Research*, 46(18):2901–2913, September 2006. Stephen Grossberg. 
Competitive learning: From interactive activation to adaptive resonance. Cognitive Science, 11(1):23–63, January 1987. ISSN 0364-0213. Jordan Guerguiev, Konrad P Kording, and Blake A Richards. Spike-based causal inference for weight alignment. *ArXiv*, October 2019. Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. *Proceedings of the IEEE* Computer Society Conference on Computer Vision and Pattern Recognition, pp. 791–800, April 2018. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. *ArXiv*, pp. 1–11, 2015. Paul D King, Joel Zylberberg, and Michael R DeWeese. Inhibitory interneurons decorrelate excitatory cells to drive sparse code formation in a spiking model of V1. *Journal of Neuroscience*, 33(13):5475–5485, March 2013. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ArXiv*, December 2014. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, July 2009. Yann LeCun, Corinna Cortes, and Christopher J C Burges. MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010. Timothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. Random synaptic feedback weights support error backpropagation for deep learning. *Nature Communications*, 7:13276, November 2016. Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman, and Geoffrey Hinton. Backpropagation and the brain. *Nature Reviews Neuroscience*, 21(6):335–346, June 2020. Seppo Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. *Master's Thesis (in Finnish), Univ. Helsinki*, pp. 6–7, 1970. Arild Nøkland. Direct feedback alignment provides learning in deep neural networks. *Advances in Neural* Information Processing Systems, 29, December 2016. 
Erkki Oja. Simplified neuron model as a principal component analyzer. *Journal of Mathematical Biology*, 15(3):267–273, November 1982. Alexandre Payeur, Jordan Guerguiev, Friedemann Zenke, Blake A Richards, and Richard Naud. Burstdependent synaptic plasticity can coordinate learning in hierarchical circuits. *Nature Neuroscience*, 24(7): 1010–1019, July 2021. Cengiz Pehlevan, Tao Hu, and Dmitri B Chklovskii. A Hebbian/anti-Hebbian neural network for linear subspace learning: A derivation from multidimensional scaling of streaming data. *ArXiv*, March 2015. Xaq Pitkow and Markus Meister. Decorrelation and efficient coding by retinal ganglion cells. Nature Neuroscience, 15(4):628–635, March 2012. Gabrielle Ras, Ning Xie, Marcel A J van Gerven, and Derek Doran. Explainable deep learning: A field guide for the uninitiated. *Journal of Artificial Intelligence Research*, 73:329–397, January 2022. Blake A Richards, Timothy P Lillicrap, Philippe Beaudoin, Yoshua Bengio, Rafal Bogacz, Amelia Christensen, Claudia Clopath, Rui Ponte Costa, Archy de Berker, Surya Ganguli, Colleen J Gillon, Danijar Hafner, Adam Kepecs, Nikolaus Kriegeskorte, Peter Latham, Grace W Lindsay, Kenneth D Miller, Richard Naud, Christopher C Pack, Panayiota Poirazi, Pieter Roelfsema, João Sacramento, Andrew Saxe, Benjamin Scellier, Anna C Schapiro, Walter Senn, Greg Wayne, Daniel Yamins, Friedemann Zenke, Joel Zylberberg, Denis Therien, and Konrad P Kording. A deep learning framework for neuroscience. Nature Neuroscience, 22(11):1761–1770, October 2019. Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. *Nature Machine Intelligence*, 1(5):206–215, May 2019. João Sacramento, Rui Ponte Costa, Yoshua Bengio, and Walter Senn. Dendritic cortical microcircuits approximate the backpropagation algorithm. *Advances in Neural Information Processing Systems*, 31: 8721–8732, December 2018. Benjamin Scellier and Yoshua Bengio. 
Equilibrium propagation: Bridging the gap between energy-based models and backpropagation. *Frontiers in Computational Neuroscience*, 11:24, May 2017. Irina Yonit Segal, Chen Giladi, Michael Gedalin, Michele Rucci, Mor Ben-Tov, Yam Kushinsky, Alik Mokeichev, and Ronen Segev. Decorrelation of retinal response to natural scenes by fixational eye movements. Proceedings of the National Academy of Sciences U.S.A., 112(10):3110–3115, March 2015. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. *ArXiv*, 2019. Tim P Vogels, Henning Sprekeler, Friedemann Zenke, Claudia Clopath, and Wulfram Gerstner. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. *Science*, 334(6062):1569–1573, December 2011. Shiqiang Wang. Efficient deep learning. *Nature Computational Science*, 1(3):181–182, March 2021. ISSN 26628457. Paul Werbos. *Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences*. PhD thesis, Harvard University, Cambridge, MA, 1974. James C R Whittington and Rafal Bogacz. An approximation of the error backpropagation algorithm in a predictive coding network with local Hebbian synaptic plasticity. *Neural Computation*, 29(5):1229–1262, May 2017.

## A COPI Decorrelation As Gradient Descent

In the main text, we provided a description of the custom learning rule for decorrelation which forms a part of the COPI learning approach. Here we expand upon this description and frame the same derivation in terms of gradient descent upon a specific loss function. The COPI learning algorithms require a decorrelated input, meaning that our decorrelation method should minimise the off-diagonal values of $\mathbb{E}[xx^T]$, where x represents the (vector) input data to any given layer and the expectation is taken empirically over a whole dataset.
To this end, we can define an element-wise quadratic loss function $l_i$, representing the total undesirable correlation induced by a unit, indexed i, with respect to all other units, indexed j, within a single sample, such that:

$$l_{i}=\frac{1}{2}\sum_{j:\;j\neq i}\left(x_{i}x_{j}\right)^{2}\;.$$

The derivative of this expression can then be taken with respect to $x_i$ in order to identify how to modify the activity of unit $x_i$ so as to reduce this loss. Specifically,

$${\frac{\partial l_{i}}{\partial x_{i}}}=\sum_{j:\,j\neq i}\left(x_{i}x_{j}\right)x_{j}\;,$$

showing that, via stochastic gradient descent, we can produce greater decorrelation by computing the product between unit activities and removing a unit-activity-proportional measure from each unit, $x_i \leftarrow x_i - \eta \frac{\partial l_i}{\partial x_i}$, where $\eta$ is a learning rate. Vectorizing this stochastic gradient descent across all units allows us to write an update for our data x such that

$$x\leftarrow x-\eta\left(x x^{T}-\mathrm{diag}\left(x^{2}\right)\right)x\,,$$

with learning rate $\eta$, where $\mathrm{diag}(\cdot)$, as used in the main text, indicates constructing a square matrix of zeros with the given values upon the diagonal. Finally, as in the main text, we can assume that x is constructed from some transformation, x = Ry, such that we can recast this update in terms of the decorrelating transformation matrix, R, where

$$R y\gets R y-\eta\left(x x^{T}-\mathrm{diag}(x^{2})\right)R y\quad\Rightarrow\quad R y\gets\left[R-\eta\left(x x^{T}-\mathrm{diag}(x^{2})\right)R\right]y\,,$$

providing an equivalent to our derived update rule for decorrelation, $\Delta_{R}^{\mathrm{copi}}=-\eta\left(xx^{T}-\mathrm{diag}(x^{2})\right)R$, as introduced in the main text. One may ask why we constructed the specific decorrelation rule described above, rather than using an alternative existing rule.
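As a quick numerical check of the update just derived, the batch-averaged rule drives the off-diagonal entries of the empirical correlation matrix toward zero. The following minimal NumPy sketch is our own illustration (dimensions, rates, and data are arbitrary choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 2000

# Correlated toy inputs y (rows are samples); all choices are illustrative.
M = rng.normal(size=(d, d)) / np.sqrt(d)
Y = rng.normal(size=(n, d)) @ M

def off_diag_energy(X):
    """Sum of squared off-diagonal entries of the empirical E[x x^T]."""
    C = X.T @ X / len(X)
    return np.sum((C - np.diag(np.diag(C))) ** 2)

R = np.eye(d)
eta = 0.05
before = off_diag_energy(Y @ R.T)
for _ in range(200):
    X = Y @ R.T                      # x = R y for every sample
    C = X.T @ X / len(X)             # empirical E[x x^T]
    # batch average of Delta_R = -eta * (x x^T - diag(x^2)) R
    R -= eta * (C - np.diag(np.diag(C))) @ R
after = off_diag_energy(Y @ R.T)
```

After a few hundred batch updates the off-diagonal energy is a small fraction of its initial value, while the diagonal (the per-unit variances) is left unscaled, consistent with decorrelation rather than full whitening.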
For that matter, one may ask why we chose to take the derivative of our decorrelation loss with respect to unit activities, x, when deriving this rule, instead of directly with respect to the parameters of the transformation matrix, R. First, this derivation allowed the production of a simple, Hebbian-like update and allowed us to formulate, admittedly by approximation, similar and elegant learning rules for forward and lateral weight matrices. This was important as a promising start in order to work toward methods for online and local learning of these transformations. Second, on a more rigorous note, the learning rule we propose produces reductions in inter-unit correlations which are not affected by the scale (eigenvalues) of the matrix R. This is a property that is induced by our choice of taking the derivative of our decorrelation loss with respect to the unit activities, x, rather than the matrix elements of R. Note that we can take the derivative of our above loss with respect to a single element of our decorrelating matrix, $R_{ij}$, in the following manner:

$${\frac{\partial l_{i}}{\partial R_{ij}}}={\frac{\partial l_{i}}{\partial x_{i}}}{\frac{d x_{i}}{d R_{ij}}}=\sum_{k:\,k\neq i}\left(x_{i}x_{k}\right)x_{k}\,y_{j}\,.$$

However, reducing correlations by taking the full derivative with respect to the elements of R, or via alternative existing methods which have been proposed for decorrelation through simple anti-Hebbian learning (Földiák, 1990; Pehlevan et al., 2015), results in a reduction in correlation which is affected by the scale of the matrix R.

Figure 3: The reduction in correlation in a toy dataset when leveraging various decorrelation rules. To produce this plot, a dataset was randomly sampled from a multivariate Gaussian distribution and an initial decorrelating matrix R also sampled. This data, with samples y, is processed by the decorrelating matrix R to form outputs, x.
A set of methods was then used to compute a single update to the decorrelating matrix, R, and the magnitude of the reduction in the loss function (the mean of the loss $\ell = \|xx^{\top} - \mathrm{diag}(x^{2})\|_{2}^{2}$ over all datapoints in the dataset) was computed and plotted here. Various scalings were then applied to the matrix R and input data y, which maintained the output data such that x = (cR)(y/c), and the process of computing the efficacy of the different learning rules was repeated. As shown, only the proposed COPI decorrelating method produces a consistent reduction in the loss function, regardless of the scale of the matrix R or the input data y. It is for this reason that this rule is desirable when applied in conjunction with other learning rules which require a relative scaling. The y-scale of this plot is omitted since its scale is arbitrarily dependent upon the initial sampling of data, and this plot is intended to be illustrative.

We can show this effect empirically in a set of simple simulations measuring the magnitude of correlation reduction induced by various learning rules; see Figure 3. In order to produce these results, we first construct a dataset by randomly sampling from a 100-dimensional multivariate normal distribution with a randomly sampled (arbitrary) covariance matrix. We further initialised a matrix $R \in \mathbb{R}^{100\times100}$ composed of the identity matrix, I, plus random noise added to every element, drawn from a uniform distribution on [-0.1, 0.1]. This matrix, R, is used to process the input data, y, in order to attempt to produce a decorrelated output x = Ry, as in the methods of the main part of this paper.
In order to simulate an alternative scaling of the matrix R without affecting the output data distribution, we simulate a rescaling of R by removing a constant factor from the input data and scaling R by this factor, x = (cR)(y/c). With this setup, we could then demonstrate how various methods for learning the matrix R (with various scalings applied) reduce the loss function

$${\mathcal{L}}={\frac{1}{N}}\sum_{n=1}^{N}\ell^{(n)}={\frac{1}{N}}\sum_{n=1}^{N}\left\|x^{(n)}\left(x^{(n)}\right)^{\top}-\mathrm{diag}\left(\left(x^{(n)}\right)^{2}\right)\right\|_{2}^{2}\,,$$

where n indexes the N samples in the empirical dataset. In Figure 3, the COPI learning rule for decorrelation is compared to the derivative of this loss with respect to the elements of the matrix R, $\partial l_i/\partial R_{ij}$ above, and also against a simple anti-Hebbian learning rule (Földiák, 1990; Pehlevan et al., 2015), where $\Delta_{R}^{\text{anti-hebbian}}=-\left(xx^{T}-\mathrm{diag}(x^{2})\right)$. As can be seen, the proposed COPI learning rule is the only decorrelating learning rule which reduces the loss function by a consistent amount given some output distribution for x, regardless of the relative scaling of the decorrelating matrix R and the input data y. Having such a decorrelation method, free from learning-rate interference through the scale of the matrix R or the unused pre-decorrelation variable y, is crucial for the COPI learning system, since the forward and decorrelating learning rules interact and must be balanced in their speed of learning to avoid runaway correlations affecting the efficacy of the forward learning rule.

## B Learning Setup And Parameters

Note that for all simulations which made use of decorrelation (all COPI networks and BP with decorrelation), the networks were first trained for a single epoch with only the decorrelation rule active. This allowed the network to reach a decorrelated state (the desired state for this learning) before forward weight updating was started.
All hidden layers used the leaky rectified linear unit (leaky-ReLU) activation function. Training was carried out in mini-batches of size 50 for all simulations (stochastic updates computed within these mini-batches are averaged during application). Network execution and training are described in Algorithm 1. Learning rate parameters are described in Table 2. All code used to produce the results in this paper is available at: https://github.com/nasiryahm/ConstrainedParameterInference.

Table 2: **Parameters for CIFAR-10 and MNIST trained networks (cf. Figure 1).**

| Parameter | BP (Adam) | COPI (with BP/FA gradients) / BP (with decorr) |
|--------------------------|-------------|--------------------------------------------------|
| LeakyReLU negative slope | 0.1 | 0.1 |
| Learning rate ηW | 0.0001 | 0.0001 |
| Learning rate ηR | 0.0001 | 0.0001 |
| Gain parameter α | 1.0 | 1000.0 |
| Adam parameter β1 | 0.9 | - |
| Adam parameter β2 | 0.999 | - |
| Adam parameter ϵ | 10^{-8} | - |

## C Training Networks Of Various Depths

Figure 4: **Train and test accuracy of networks of various depths trained by COPI (BP) vs BP with the Adam optimiser.** Performance of networks ranging from one to seven layers trained and tested using the MNIST handwritten digit set. The networks are composed such that each hidden layer (where present) consists of 500 units, with an input layer of size 784 and an output layer of size 10.

The main text explored networks with fixed depths. Here we present additional results in which we trained and tested networks of between one and seven layers on the MNIST handwritten digit set. These networks were each trained for 100 epochs and their performance measured after training. Parameters used for training followed the setup described in Appendix B. As can be observed, the networks of extremely shallow depth (one and two layers) have a measurably lower final performance compared to training by BP with the Adam optimizer.
We found in our experimentation that this reflects the relative difficulty of decorrelating the input layer of our network, and the importance of the features in the first layer of the network. A similar effect of the importance of the first layer of the network for performance can be observed in the main text in Figure 2, where, when approximating a deep network with a linear layer, we observed a significant drop in performance when all layers (including the first layer) were removed.

## D COPI With Categorical Cross-Entropy Loss

In the main text, we explored a quadratic loss in the mathematical derivation and simulation. However, this does not imply that COPI can only be applied with the quadratic loss function; in fact, any arbitrary loss function applied to the outputs can be used to produce a gradient-based target. In particular, take any arbitrary loss which is solely a function of the outputs of a network, ℓ = f(yL). By default, we would compute the derivative with respect to the outputs of the network as dℓ/dyL. This arbitrary formulation differs from a quadratic loss with a target, tL, since the gradient of a quadratic loss is proportional to yL − tL. However, it is possible to reformulate an arbitrary loss function computed on the outputs in terms of a target in the following manner:

$$\begin{split}\frac{d\ell}{dy_{L}}&=\frac{d\ell}{dy_{L}}+(y_{L}-y_{L})\\ &=y_{L}-\left(y_{L}-\frac{d\ell}{dy_{L}}\right)\\ &=y_{L}-t_{L}^{*}\,,\end{split}$$

where $t_{L}^{*}$ is a target formulated for this layer.

Figure 5: **The performance, measured by training and test accuracy/loss, of a five-layer fully-connected feedforward neural network architecture trained using a categorical cross-entropy loss.** Plotted are various learning approaches combining decorrelation, our algorithm (COPI), and standard stochastic gradient descent by backpropagation of error (BP).
All results are shown for training on the MNIST handwritten digit classification task. All networks were run with five random seeds and envelopes show standard deviation across these networks. Results for BIO-COPI are not shown given its near-identical performance to regular COPI.

We made use of such a target-formulation approach (though in terms of the derivative of the output hidden state aL) to train the same network architecture used for the networks of Figure 1A with a categorical cross-entropy loss. This again demonstrates favorable performance and convergence speed when training networks using COPI, though these simulations have not been as thoroughly optimized by parameter search.

## E Relation Between COPI And Oja's Rule

Let us consider the (BIO-)COPI update for single synaptic weights, given by

$$\begin{array}{l}{{\Delta_{w_{ij}}^{\mathrm{copi}}=z_{i}x_{j}-w_{ij}x_{j}^{2}=\left(\sum_{k}w_{ik}x_{k}\right)x_{j}-w_{ij}x_{j}^{2}+\delta_{i}x_{j}}}\\ {{\Delta_{r_{ij}}^{\mathrm{copi}}=-\left(q_{i}x_{j}-r_{ij}x_{j}^{2}\right)=\left(\sum_{k}(-r_{ik})x_{k}\right)x_{j}-(-r_{ij})x_{j}^{2}\,.}}\end{array}$$

The first term in both expressions is a Hebbian update which relies on the states of the pre-synaptic units xj and post-synaptic units zi or qi only. The second term in both expressions takes the form of a weight decay. This functional form is similar to Oja's rule (Oja, 1982), which states

$$\Delta_{m_{ij}}^{\mathrm{oja}}=y_{i}x_{j}-m_{ij}y_{i}^{2}=\left(\sum_{k}m_{ik}x_{k}\right)x_{j}-m_{ij}y_{i}^{2}$$

for y = Mx. COPI differs from Oja's rule in that the forward weight update has an additional term δixj and the weight decay for both the forward and lateral weights depends on the (squared) pre-synaptic rather than post-synaptic firing rates.
The functional impact of the difference in the weight decay scaling (by post- vs pre-synaptic firing rates) is that Oja's rule scales the weight decay in order to normalize the scale of the weights which target a post-synaptic neuron. By comparison, COPI scales the weight decay so as to best infer the scale which the weights should take in order to reproduce the post-synaptic neuron activity given the observed pre-synaptic neuron activity.
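The weight-inference character of the COPI forward update, as opposed to Oja-style normalization, can be illustrated numerically. Below is a minimal NumPy sketch of our own construction (not the paper's code): with exactly whitened inputs standing in for COPI's decorrelated x, and perturbations δ set to zero, iterating the batch-averaged update $\Delta_{w_{ij}} = z_i x_j - w_{ij} x_j^2$ recovers the weights that generated the target activities:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n = 16, 8, 4000

# Exactly whitened inputs (rows), standing in for COPI's decorrelated x.
X = rng.normal(size=(n, d_in))
L = np.linalg.cholesky(X.T @ X / n)
X = X @ np.linalg.inv(L).T            # empirical covariance is now identity

W_true = rng.normal(size=(d_out, d_in))
Z = X @ W_true.T                      # target (post-perturbation) activities z

W = np.zeros((d_out, d_in))
eta = 0.1
for _ in range(300):
    # Batch average of Delta_w_ij = z_i x_j - w_ij x_j^2
    W += eta * (Z.T @ X / n - W * (X ** 2).mean(axis=0))
err = np.max(np.abs(W - W_true))
```

At the fixed point of this update, each weight equals the regression coefficient of z on the corresponding decorrelated input, which is exactly W_true here; this is the "weights infer their own values" property described in the discussion.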
Review 1: Summary: The paper proposes a novel learning algorithm to infer parameters of an artificial neural network. In this framework, learning can be established by updating the forward weight values based on observed local neuron activities, under the constraint that layer-wise neural inputs should be decorrelated via learned lateral weights (hence the name "constrained parameter inference": COPI). The proposed algorithm is also agnostic to the credit assignment method used to compute the layer-wise learning signal, where more bio-plausible local methods can be explored besides a standard gradient-based error signal. Experiments are performed with fully-connected networks on the MNIST and CIFAR-10 tasks, and COPI is compared against standard backpropagation based parameter optimization. Further contributions on using the outcome of the decorrelation constraint for network compression are discussed. Strengths and Weaknesses: Strengths: The paper is clearly written, the derived learning algorithm is quite simple and yet effective. There are nice and clearly drawn bio-plausibility relationships throughout the manuscript regarding e.g., the local learning rules, or lateral weight updates to decorrelate layer-wise inputs. Weaknesses: I believe the major weakness that requires the most attention is the coverage of experimental simulations. Currently, not all aspects of the theoretically discussed and presented components of the learning algorithm are evaluated in the manuscript. Requested Changes: 1) It is not really clear to what extent the introduction of target propagation (TP) is relevant in the manuscript. Since it is introduced as a compatible credit assignment approach here, can the authors add in simulations (Fig 1 & Table 1) and show how well it works when one simulates COPI together with TP-based perturbations?
2) Independent of the network compression experiments from Sec 3.2, can the authors demonstrate how important is the layer-wise depth of the FC network to learning with COPI from scratch? The networks in Fig 1, especially for MNIST, can perhaps be simulated with level-wise shallower architectures for this experiment. 3) Authors' proposal in Appendix D remains empirically untouched. How well does this approximation perform? 4) One of the interesting contributions is the network compression capability that arises from the produced linear approximations of receptive fields. However without addressing Appendix F, it is not possible to grasp how this really works in Section 3.2. I believe a restructuring of the manuscript to address this would be helpful. 5) I found the algorithmic summary in Appendix A quite useful. I think moving Algorithm 1 to the main manuscript might be beneficial in depicting how subsections of Sec 2 come together in forming the learning algorithm that is mainly proposed in the manuscript. Broader Impact Concerns: No specific concern. ================================================== Review 2: Summary: The authors propose a novel method for learning in neural networks. This method can learn directly and locally at each synapse from the activities (and desired activities) of the neurons, assuming that 1) desired activities (or desired directions of change in activities) are available for each neuron, and 2) inputs are uncorrelated. The method essentially performs Hebbian learning between inputs and desired outputs, with a weight decay term modulated by the variance of each input. The authors propose a method to learn decorrelating layers that decorrelate the output of each main layer, facilitating assumption 2. Various experiments suggest that the method is competitive with backpropagation with Adam. Strengths and Weaknesses: Strength: - The method is new, to my knowledge. The problem of plausible learning methods is important.
Weaknesses: - The method seems to make requirements that put its purpose into question: if the required information described in the paper is provided, then traditional methods become local Hebbian-like learning rules, making the proposed method seemingly redundant. - Secondarily, it is not clear that the experiments are fair, because IIUC they oppose COPI with weight decay against BP without weight decay. In more detail: The authors describe COPI as a local rule that (by assuming decorrelation) can learn directly from local activities and desired/target activities at any node. But the way these desired activities are obtained is by providing a desired *direction*, or *increment*, much like the error gradient (over the weights) of BP, or the surrogate gradient of FB alignment. This desired direction of change is called little-delta in the text, in accordance with the existing literature on gradient descent. But if you have such a little-delta (desired direction of change in activity), then plain gradient learning is already purely local and indeed "Hebbian", with the delta rather than the activities: deltaW = x * little-delta. What, then, is gained at all by using the more complicated COPI? IIUC the specific form of COPI, and especially its assumption of decorrelated inputs, is only necessary because it tries to express learning as a function of raw activities and desired activities themselves, rather than the desired delta. Indeed, this seems to be exactly what the "Correspondence to stochastic gradient descent" section says: the W(xx^T) term only pops up when we rewrite the plain gradient update, dW = delta * x, as a function of actual activities a (and desired activities a + delta) instead. But then the method assumes that we already have access to the deltas! So what have we gained by using the added complexity and assumptions of COPI?
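The locality point above can be checked with a small numpy sketch (the dimensions and data here are hypothetical, for illustration only; this is not code from the paper): given the little-deltas, the plain gradient update deltaW = x * little-delta is a batch-averaged outer product, and each weight's change uses only its own pre- and postsynaptic quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 8, 4, 16

x = rng.normal(size=(batch, n_in))       # presynaptic activities
delta = rng.normal(size=(batch, n_out))  # desired directions of change ("little-delta")

# Plain gradient-style update: a batch-averaged outer product of delta and x.
dW = delta.T @ x / batch                 # shape (n_out, n_in)

# Locality: each entry dW[j, i] depends only on neuron i's activity and
# neuron j's delta, i.e. on quantities available at that synapse.
j, i = 2, 5
local = np.mean(delta[:, j] * x[:, i])
assert np.isclose(dW[j, i], local)
```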
Experiments are interesting, but difficult to interpret: The authors claim the decorrelation layer improves performance across the board. But BP with Adam seems to outperform BP with decorrelation in terms of test error/accuracy in Figure 1a and (eventually) 1b? I may be missing something. The authors observe that COPI beats BP (when both have similarly trained decorrelation layers) and attribute this to COPI having "the built-in assumption that the inputs have a decorrelated form". But IIUC COPI includes a weight decay term. Was there a weight decay in the BP version? If there is no weight decay in the BP experiments, this might be a source of the difference in results. This is particularly acute since at least some of the failure of BP in the figures seems to be overfitting (test error increasing), especially considering the large size of the networks. Requested Changes: - Please explain what exactly COPI brings, when it requires access to desired directions of changes in activities that already allow for local, Hebbian-like learning. - In experiments, include a weight decay component in BP, comparable to that of COPI, and report results. - Last paragraph on p.6: please clarify the statement that decorrelation improves performance also for BP, which seems to conflict with my understanding of Figure 1a and (in the long run) 1b. - I don't understand the visualization part (second to last paragraph of p. 8). Why does decorrelation help here? How exactly is Figure 2b generated? Broader Impact Concerns: I do not see any broader impact concerns. ================================================== Review 3: Summary: The paper describes a learning rule in a variant of a classical fully connected multi-layer perceptron. The key difference in the network architecture is to add a matrix $R$ prior to each matrix multiplication with the weights $W$, leading to an effective network activity defined by $f(WRx)$, where $x$ is the input for this layer.
Then the learning rule is defined so that $Rx$ generates a decorrelated (i.e. whitened) version of $x$ while $W$ minimizes the loss function (this defines the BP with decorrelation algorithm reported in the results). To further exhibit a variant of the weight updates for $R$ and $W$ that is "more biologically plausible", the COPI learning rule replaces the covariance $\mathbb{E}[x x^T]$ with its diagonal $diag(x^2)$. The learning rules are then compared on MNIST and CIFAR with many fully connected layers; a strong finding is that decorrelated BP and COPI learn much faster than regular BP for the same performance level. Strengths and Weaknesses: I think that the topic of efficient bio-plausible rules for decorrelation in deep networks is important, and I find the model $y=f(WRx)$ to be a simple and elegant solution. The plausible update of $R$, which avoids some problems, is therefore interesting as well, although I would have liked to know specifically what plausibility issue it resolves. I find the results on the learning speed of decorrelated BP and COPI to be very promising, and I would love to see a bit more analysis on that. Here are suggestions for control experiments: (a) what if the learning rate parameter $\eta$ is optimized individually for classical BP and decorrelated BP: would a higher learning rate be more beneficial for BP? (just reporting the next power of ten, to prove it is not that easy to make BP faster, would be sufficient) (b) I understand that the authors report the result of classical BP as the case $y=f(WRx)$ with $R$ optimized with SGD. What about $f(Wx)$: is the learning rate better? Is the final accuracy on CIFAR better? Requested Changes: For the technical description in the main text: the equations and the text are understandable and clear, although I believe they could be written in a lighter fashion.
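As a toy reconstruction of the decorrelation idea discussed above (the update form, learning rate, and data here are my own assumptions for illustration, not the authors' exact rule), an anti-Hebbian update on $R$ drives the outputs $z = Rx$ toward a decorrelated state in which $\mathbb{E}[zz^T]$ equals its own diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 3-D inputs x: a random linear mix of independent sources.
A = rng.normal(size=(3, 3))
x = rng.normal(size=(2000, 3)) @ A.T

def offdiag(C):
    # Mean absolute off-diagonal entry of a covariance matrix.
    return np.abs(C - np.diag(np.diag(C))).mean()

R = np.eye(3)
lr = 0.01
before = offdiag((x @ R.T).T @ (x @ R.T) / len(x))
for _ in range(2000):
    z = x @ R.T
    C = z.T @ z / len(z)
    # Anti-Hebbian decorrelating update: push E[z z^T] toward diag(E[z^2]),
    # i.e. toward decorrelated outputs z = R x.
    R -= lr * (C - np.diag(np.diag(C))) @ R
after = offdiag((x @ R.T).T @ (x @ R.T) / len(x))
assert after < 0.2 * before   # off-diagonal covariance has shrunk
```

At the fixed point of this update, the covariance of $z$ is diagonal, which is the constraint the paper's forward weights are assumed to rely on.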
More problematic, I find that the general story of the paper is sometimes detached from the mathematical content and could be improved in three ways: (1) I find the general framing of the paper around "constrained parameter optimization" to be quite misleading, because I thought it would perform some kind of constrained optimization, but the technical part of the paper is not described in this way. (2) I also think that the paper would benefit from a clearer definition of the meaning of "biologically plausible" learning rule, which is what this paper solves, with a clear statement about why that is problematic for biology: concretely, which terms in BP or BP with decorrelation are problematic, and how does COPI solve them? (BIOCOPI is also defined in the appendix, but what is the difference? And why does that matter?) (3) Unfortunately, I did not understand the whole discussion about network compression. I believe that this usually relates to weight pruning or something like that, but I did not see how that is relevant for the theoretical method and the experiments. Also, I did not understand the figure in the main text about "removed layers". What does that mean in the experiments, and why is it interesting? For the figures reporting the accuracy, I would recommend zooming in on the relevant range (for MNIST 85% - 100%) because this is where everything is happening, and I would have liked to see those learning curves more clearly. Also, I cannot see the filters in Figure 2B because they are too small. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept as is Comment: All reviewers agreed that the revised version of the manuscript can be accepted for publication. There was some discussion on restrictive assumptions of the work.
It was noted that the proposed algorithm is useful when the plasticity process does not have access to desired directions of activity changes, and a paragraph clarifying this was added in a final revision as requested. As this was clarified, I recommend acceptance. ==================================================
# Meta-Learning Approach For Joint Multimodal Signals With Multimodal Iterative Adaptation

Sehun Lee∗ *yhytoto12@snu.ac.kr*
Department of Computer Science and Engineering
Seoul National University

Wonkwang Lee∗ *wonkwang.lee@snu.ac.kr*
Department of Computer Science and Engineering
Seoul National University

Gunhee Kim *gunhee@snu.ac.kr*
Department of Computer Science and Engineering
Seoul National University

Reviewed on OpenReview: *https://openreview.net/forum?id=LV04KBaIQt*

## Abstract

In the pursuit of effectively modeling real-world joint multimodal signals, learning to learn multiple Implicit Neural Representations (INRs) jointly has gained attention to overcome data scarcity and enhance fitting speed. However, predominant methods based on multimodal encoders often underperform due to their reliance on direct data-to-parameter mapping functions, bypassing the optimization steps necessary for capturing the complexities of real-world signals. To address this gap, we propose Multimodal Iterative Adaptation (MIA), a novel framework that combines the strengths of multimodal fusion with optimization-based meta-learning. The key idea is to enhance the learning of INRs by facilitating the exchange of cross-modal knowledge among learners during the iterative optimization processes, improving generalization and enabling a more nuanced adaptation to complex signals. To achieve this, we introduce State Fusion Transformers (SFTs), an attention-based meta-learner designed to operate in the backward pass of the learners, aggregating learning states, capturing cross-modal relationships, and predicting enhanced parameter updates for the learners. Our extensive evaluation in various real-world multimodal signal regression setups shows that MIA outperforms existing baselines in both generalization and memorization performances. Our code is available at https://github.com/yhytoto12/MIA.
## 1 Introduction

Implicit neural representations (INRs) are a class of neural networks designed to represent signals or data as continuous coordinate-to-feature mapping functions (*e.g.* an image is a function I(*x, y*) = (*r, g, b*) mapping 2D coordinates to color intensity values). INRs offer several advantages over traditional data representations (*e.g.* discrete 2D arrays for images), including the ability to recover inherently continuous signals that are often sampled sparsely and stored discretely (*e.g.* videos are the recordings of dynamic scenes that change continuously over space and time) and better scaling properties with signal resolution (Dupont et al., 2022a). Since a variety of real-world signals can be represented as such continuous functions, or even inherently arise from them, numerous efforts have been made to formulate data as functions and represent them with INRs in a wide variety of domains or modalities, including audio (Sitzmann et al., 2020), time series (Fons et al., 2022), images (Sitzmann et al., 2020), videos (Chen et al., 2021a), 3D geometries (Park et al., 2019), 3D scenes (Mildenhall et al., 2020), and 3D motions (Pumarola et al., 2021).

![1_image_0.png](1_image_0.png)

Figure 1: Left: Multimodal encoder-based methods predict INR parameters directly from signals, omitting optimization steps for rapid adaptation, often failing to fully capture complexities of real-world signals (*e.g.* blurry RGB prediction). Middle: Unimodal methods struggle with generalization in data-sparse scenarios due to their lack of cross-modal interactions (*e.g.* inaccurate Sketch prediction). Right: Our Multimodal Iterative Adaptation (MIA) enhances learner interaction and cross-modal knowledge exchange during the backward optimization pass, leading to improved fitting and generalization.

They have shown promise in various practical applications that require the precise modeling of signals and their inherently continuous properties, such as generation (Yu et al., 2022), compression (Kim et al., 2022b), super-resolution (Chen et al., 2021b), video super-slomo (Chen et al., 2022b), and novel-view synthesis (Takikawa et al., 2021). As the field of INRs advances, learning to learn a group of multiple INRs jointly has gained attention recently to enhance convergence and data efficiency when modeling joint multimodal signals (Kim et al., 2022a; Shen et al., 2023). Such signals, often arising from correlated multimodal distributions with interdependent information, are crucial in diverse areas such as embodied agents (Xia et al., 2018), climate analysis (Wang et al., 2019), healthcare (Muhammad et al., 2021), neuroscience (Ulrich et al., 2015), drug discovery (Chan et al., 2023), and smart-grid systems (Kamarthi et al., 2022). In modeling such joint multimodal signals, the primary approach involves encoder-based meta-learning methods (Kim et al., 2022a; Shen et al., 2023) that use multimodal encoders to predict INR parameters directly from data through attention-based fusion. Concurrently, optimization-based methods (Tancik et al., 2021; Dupont et al., 2022a), though typically unimodal, focus on learning an effective parameter initialization to speed up convergence and enhance generalization. These methods iteratively optimize INR parameters from this initialization using gradient descent algorithms at test time. Nevertheless, both approaches come with their own limitations, as illustrated in Figure 1. Encoder methods primarily rely on direct data-to-parameter mappings without optimization steps for rapid adaptation, often failing to capture complex signals (e.g., underfitting high-frequency details) (Kim et al., 2019).
In contrast, unimodal methods, despite their rapid signal-fitting capability thanks to iterative optimization from good initializations, lack mechanisms for leveraging interacting multimodal structures in data, limiting their generalization in data-scarce situations (Kim et al., 2022a). To bridge these gaps, we present a novel optimization-based meta-learning framework called Multimodal Iterative Adaptation (MIA). Unlike the encoder-based methods, MIA builds upon the advantages of optimization-based meta-learning methods, *i.e.* meta-learning good initializations and adapting them to signals through iterative optimization steps for accurate fitting. In contrast to the unimodal counterparts, however, MIA further empowers each modality learner to interact and inform one another during adaptation. This fosters a rich exchange of information and synergistic learning of INRs across different modalities, leading to enhanced generalization capability with limited observations. The core of MIA is an attention-based meta-learner coined State Fusion Transformers (SFTs), particularly designed to operate in the backward pass of the INR learners (*i.e.* within the space of learners' parameters and gradients). SFTs are meta-learned to achieve three goals: (1) aggregating the current *learning states* (*i.e.* parameters and gradients) of the learners, (2) capturing modality-specific and cross-modal relationships between those states through attention mechanisms, and (3) utilizing this knowledge to predict better parameter updates for each learner, facilitating cross-modal interactions at each iterative adaptation step. It is worth noting that attention mechanisms are well studied across diverse multimodal learning contexts and applications.
However, their potential to uncover and utilize interdependencies within multiple optimization problems and their trajectories (often represented by *states* of the learners for each optimization step) remains relatively unexplored. Our work contributes uniquely to the literature by being the first to investigate the integration of those potential multimodal structures within the scope of jointly learning multiple correlated INRs, showing how optimizing for one modality can inform and enhance solutions for others. While doing so, we detail recipes for effectively identifying and leveraging these multimodal relationships, including techniques for scaling gradients properly for each modality and integrating original, unimodal, and multimodal learner states. We believe these insights offer interesting new directions in the study of interacting optimization problems, especially useful when data scarcity could lead to bad optimization solutions. In experiments, we apply our technique to prior unimodal INR meta-learning frameworks such as Functa and Composers, and evaluate them on a variety of multimodal signal regression scenarios, including modeling 1D synthetic functions, ERA5 global climate data, 2D CelebA visual images, and Audiovisual-MNIST data. The results consistently demonstrate that our MIA significantly enhances the learning capabilities of the unimodal baselines. Moreover, it outperforms the encoder-based baselines in terms of both generalization and memorization performance. We also conduct in-depth and wide-ranging analysis studies to validate the necessity of each component in SFTs for achieving these improvements.

## 2 Related Work

## 2.1 Meta-Learning For INRs.
Fitting INRs to a function from randomly initialized weights has been found to be notoriously inefficient; it requires hundreds or thousands of optimization steps to converge (Sitzmann et al., 2020; Mildenhall et al., 2020), and learned representations do not generalize well on novel input coordinates under few-shot learning regimes (Jain et al., 2021). To overcome such limitations, there have been increasing studies applying meta-learning to facilitate the learning of INRs. The methods typically include either (1) meta-learning an encoder to infer the parameters of INRs directly from observed data (Chen & Wang, 2022; Kim et al., 2023) or (2) an optimization-based approach where a weight initialization of INRs is learned to enable rapid adaptation to signals with a few optimization steps (Tancik et al., 2021; Bauer et al., 2023).

## 2.2 Multimodal Learning.

Multimodal learning aims to integrate data or representations from multiple modalities to enhance the robustness and accuracy of learning systems, leveraging the complementary information available across different data types. Most of the work in the literature typically focuses on learning multimodal encoders to infer enhanced representations from observed data, where attention mechanisms are utilized for multimodal fusion (Liang et al., 2022; Bachmann et al., 2022). Unlike this traditional focus on enhancing representations of models in their forward pass, we explore a novel setup where we utilize attention modules to enhance the learning of the models in their backward pass at each iterative optimization step.

## 2.3 Learning To Optimize (L2O).

Our method also aligns with the works in L2O, another meta-learning domain that focuses on enhancing an optimization algorithm rather than weight initialization. Its idea is to empower an optimizer with neural networks that meta-learn useful prior knowledge on the optimization process (Andrychowicz et al., 2016). To achieve that idea, Metz et al.
(2022a) utilize various state features of learners, including parameters, gradients, and their higher-order statistics. In addition, Kang et al. (2023) meta-learn a gradient preconditioning method for improved convergence, and Baik et al. (2023) introduce a meta-learned mechanism to estimate weight decaying and learning rate factors for improved generalization. Distinct from these methods, our work takes an interesting orthogonal direction by exploring the potential of leveraging inter-modal interactions among state features of learners to address data scarcity issues.

## 3 Preliminaries

## 3.1 Implicit Neural Representations (INRs)

INRs are a class of neural networks, often parameterized as a stack of MLPs, that are designed to approximate a function $f : x \mapsto y$ mapping the coordinates $x$ to the features $y$. We interchangeably use data, signals, and functions throughout the paper. Given a set of $P$ coordinate-feature pairs $\mathcal{D} = \{(x^{i}, y^{i})\}_{i=1}^{P}$ of a function, an INR $f_{\rm INR}(\cdot;\theta)$ parameterized by $\theta$ is optimized to minimize a loss function as below:

$${\cal L}({\cal D};\theta)=\frac{1}{P}\sum_{i=1}^{P}||f_{\rm INR}(x^{i};\theta)-y^{i}||_{2}^{2}.\tag{1}$$

Despite the recent advances, optimizing individual INRs from randomly initialized weights is notoriously inefficient (Sitzmann et al., 2020; Mildenhall et al., 2020). Also, they often do not generalize well when observations are sparse (Tancik et al., 2021; Jain et al., 2021), *i.e.* when $P$ is small. To address these problems, optimization-based meta-learning techniques have been proposed.

## 3.2 Meta-Learning Approach

Two notable examples are Functa (Dupont et al., 2022b;a; Bauer et al., 2023) and Composers (Kim et al., 2023). These methods build upon CAVIA (Zintgraf et al., 2018) and implement two key ideas: (1) Meta-learning INR parameters $\theta$ that capture data-agnostic priors, enabling faster adaptation when modeling each signal.
(2) Introducing additional context parameters $\phi \in \mathbb{R}^{S \times D}$, a set of $D$-dimensional features that are adapted to each signal. These features encapsulate data-specific variations and conditions of the meta-learned INRs $f_{\rm INR}(\cdot;\theta)$ via modulation schemes (Perez et al., 2018; Schwarz et al., 2023) during the adaptation stage. Formally, given a dataset $\mathcal{D} = \{\mathcal{D}_n\}_{n=1}^{N}$ of $N$ functions, the objective in Eq. 1 is extended to accommodate both $\theta$ and $\phi_n$ for each function $\mathcal{D}_n$ as follows:

$${\mathcal{L}}({\mathcal{D}}_{n};\theta,\phi_{n})={\frac{1}{P_{n}}}\sum_{i=1}^{P_{n}}||f_{\mathrm{INR}}(x_{n}^{i};\theta,\phi_{n})-y_{n}^{i}||_{2}^{2}.\tag{2}$$

Then, INR weights $\theta$ and initial context parameters $\phi$ are optimized with the meta-objective as below:

$$\theta,\phi=\operatorname*{argmin}_{\theta,\phi}\mathbb{E}_{n}\left[\mathcal{L}(\mathcal{D}_{n};\theta,\phi_{n}^{(K)})\right],\ \text{where}\tag{3}$$

$$\phi_{n}^{(k+1)}=\phi_{n}^{(k)}-\alpha\cdot\nabla_{\phi_{n}^{(k)}}\mathcal{L}(\mathcal{D}_{n};\theta,\phi_{n}^{(k)}),\tag{4}$$

for $k = 0, \ldots, K-1$. $\alpha$ is a learning rate and $\phi_n^{(0)} = \phi$. Intuitively, the objective can be cast as a bi-level optimization problem: (1) In the inner loop (Eq. 4), each INR learner is initialized with weights $\theta$ and $\phi_n^{(0)}$ and then adapted to each function $\mathcal{D}_n$ through $\phi_n^{(k)}$ ($k = 1, \ldots, K$). (2) In the outer loop (Eq. 3), the meta-learner updates the INR weights $\theta$ and initial context parameters $\phi$ so that each fitted INR $f_{\rm INR}(\cdot;\theta,\phi_n^{(K)})$ recovers the signal $\mathcal{D}_n$ well within $K$ steps.

## 4 Approach

We delve into our optimization-based meta-learning framework for learning multiple INRs jointly when modeling multimodal signals. We first extend previous meta-learning frameworks to multimodal setups. Then, we introduce our MIA, its core components SFTs, and the meta-learning objectives. We present a schematic illustration in Figure 2 and provide a meta-learning algorithm of our MIA in Algorithm 1. We consider a dataset $\mathcal{D} = \{\mathcal{D}_n\}_{n=1}^{N}$ with $N$ pairs of multimodal signals. Each pair $\mathcal{D}_n = \{\mathcal{D}_{nm}\}_{m=1}^{M}$ is composed of signals from $M$ different modalities. Each signal $\mathcal{D}_{nm} = \{(x_{nm}^{i}, y_{nm}^{i})\}_{i=1}^{P_{nm}}$ contains $P_{nm}$ coordinate-feature pairs, which could vary across signals.

![4_image_0.png](4_image_0.png)

Figure 2: An illustration of our proposed Multimodal Iterative Adaptation (MIA). Left: At each step $k$, each INR learner ($f_{\theta_m}$) calculates the loss ($\mathcal{L}_{nm}^{(k)}$) given the weights ($\phi_{nm}^{(k)}$) and data ($x_{nm}, y_{nm}$) during the forward pass (black arrows), followed by computing gradients ($g_{nm}^{(k)}$) to update the weights. Before updating the weights with the gradients (red arrows), we enhance them via State Fusion Transformers (SFTs) (blue arrows). Right: SFTs aggregate the states ($\{\phi_{nm}^{(k)}, g_{nm}^{(k)}\}_m$), capture the unimodal and cross-modal dependencies via USFTs and MSFTs, and fuse this knowledge to compute enhanced parameter updates ($\{\bar{g}_{nm}^{(k)}\}_m$) via Fusion MLPs.

## 4.1 A Naive Framework For Multimodal Signals

We begin with a simple baseline that combines per-modality meta-learners together while treating each of them separately. This framework consists of a set of independent per-modality INRs $f_{\rm INR}(\cdot;\theta_m,\phi_m)$ with parameters $\theta = \{\theta_m\}_{m=1}^{M}$ and initial context parameters $\phi = \{\phi_m\}_{m=1}^{M}$. Then, we modify the meta-objective to promote both memorization and generalization performances during the adaptation.
For this, we split each data into support $\mathcal{D}_{nm}^{\rm train}$ and query $\mathcal{D}_{nm}^{\rm val}$ sets, and use the following bi-level optimization algorithm to meta-learn the parameters for each modality:

$$\theta_{m},\phi_{m}=\mathop{\rm argmin}_{\theta_{m},\phi_{m}}\mathbb{E}_{n}\left[\mathcal{L}(\mathcal{D}_{nm}^{\rm all};\theta_{m},\phi_{nm}^{(K)})\right],\ \mbox{where}\tag{5}$$

$$\phi_{nm}^{(k+1)}=\phi_{nm}^{(k)}-\alpha_{m}\cdot g_{nm}^{(k)},\ \ \ g_{nm}^{(k)}=\nabla_{\phi_{nm}^{(k)}}\mathcal{L}(\mathcal{D}_{nm}^{\rm train};\theta_{m},\phi_{nm}^{(k)}),\tag{6}$$

for $k = 0, \ldots, K-1$ with $\phi_{nm}^{(0)} = \phi_m$. $\alpha_m$ is a learning rate for $\phi_{nm}^{(k)}$ of each modality.

## 4.2 Multimodal Iterative Adaptation (MIA)

We enable cross-modal interaction among the learners through our MIA for enhanced parameter updates. Our rationale is grounded in the fact that the parameters of one modality learner carry information on a comprehensive encoding of the signals within a modality, whereas the gradients highlight both the adequacy of this encoding via magnitude and the direction for improvement. Thus, by allowing access to the states from other modalities through our MIA, we enable each learner to benefit from a broader, richer context for its own updates. Essentially, this process encourages learners to not only optimize their individual parameters but also to contribute to and benefit from a collective improvement across modalities. This process is described as:

$$\bar{g}_{n1}^{(k)},\ldots,\bar{g}_{nM}^{(k)}=U_{\xi}\bigl(\bigl\{\bigl(g_{nm}^{(k)},\phi_{nm}^{(k)}\bigr)\bigr\}_{m=1}^{M}\bigr),\tag{7}$$

where $U_\xi$ denotes State Fusion Transformers (SFTs), a collection of meta-learned modules.
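A shape-level sketch of one fused update $U_\xi$ from Eq. 7 can be written in numpy as follows (the dimensions are hypothetical, a single attention head is used, and random matrices stand in for the meta-learned projection, USFT, MSFT, and fusion modules; this is a sketch of the data flow, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
M, S, D_phi, D_z = 2, 4, 8, 16   # modalities, context slots, param dim, state dim

def mk_attn():
    return {k: rng.normal(0.0, 0.1, (D_z, D_z)) for k in ("q", "k", "v")}

def attend(Z, W):
    # One self-attention head over a sequence of state vectors Z: (T, D_z).
    Q, K, V = Z @ W["q"], Z @ W["k"], Z @ W["v"]
    A = np.exp(Q @ K.T / np.sqrt(D_z))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

phi = [rng.normal(size=(S, D_phi)) for _ in range(M)]   # contexts phi_m
g = [rng.normal(size=(S, D_phi)) for _ in range(M)]     # raw gradients g_m

# (1) Per-modality projection of concatenated (gradient, parameter) states.
W_proj = [rng.normal(0.0, 0.1, (2 * D_phi, D_z)) for _ in range(M)]
z = [np.concatenate([g[m], phi[m]], axis=-1) @ W_proj[m] for m in range(M)]

# (2) USFTs: attention within each modality; MSFT: shared attention across all.
W_u = [mk_attn() for _ in range(M)]
z_hat = [attend(z[m], W_u[m]) for m in range(M)]
z_all = attend(np.concatenate(z_hat, axis=0), mk_attn())
z_tilde = [z_all[m * S:(m + 1) * S] for m in range(M)]

# (3) Fusion maps [z || z_hat || z_tilde] to enhanced updates g_bar.
W_fuse = [rng.normal(0.0, 0.1, (3 * D_z, D_phi)) for _ in range(M)]
g_bar = [np.concatenate([z[m], z_hat[m], z_tilde[m]], axis=-1) @ W_fuse[m]
         for m in range(M)]
assert all(gb.shape == (S, D_phi) for gb in g_bar)      # one update per learner
```

With meta-learned weights in place of the random matrices, the resulting $\bar{g}_{nm}^{(k)}$ would be used instead of the raw gradients to adapt each learner's context parameters.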
## 4.3 State Fusion Transformers (SFTs)

As shown in Figure 2, SFTs enhance the learning of INRs via a three-step process: (1) aggregating the states of the unimodal learners, (2) capturing modality-specific and cross-modal information within those states to promote the knowledge exchange across them via attention mechanisms, and (3) utilizing this knowledge to predict better parameter updates for the learners. SFTs consist of Unimodal State Fusion Transformers (USFTs), Multimodal State Fusion Transformers (MSFTs), and Fusion MLPs.

Algorithm 1: Meta-Learning Algorithm of MIA.
Require: Batch size B; inner steps K; outer learning rates λ_Θ (Θ = {θ, ϕ, ξ, η}).
1:  Initialize Θ = {θ, ϕ, ξ, η}.
2:  repeat
3:    Sample a batch of B joint multimodal signals {{D_nm}_{m=1}^M}_{n=1}^B.
4:    Sample a sampling ratio R_nm ∼ U(R_m^min, R_m^max) for all n, m.
5:    Split D_nm into D_nm^train, D_nm^val, where |D_nm^train| = P_nm × R_nm and |D_nm^val| = P_nm.
6:    for n = 1, ..., B do
7:      for k = 0, ..., K−1 do
8:        for m = 1, ..., M do
9:          g_nm^(k) = ∇_{ϕ_nm^(k)} L(D_nm^train; θ_m, ϕ_nm^(k))             ▷ Eq. 6
10:         g_nm^(k) ← exp(η_m) · g_nm^(k)                                   ▷ Eq. 12
11:         z_nm^(k) = ProjectMLP_m(g_nm^(k), ϕ_nm^(k); ξ_m)
12:         ẑ_nm^(k) = USFT_m(z_nm^(k); ξ_m)                                 ▷ Eq. 8
13:       end for
14:       z̃_n1^(k), ..., z̃_nM^(k) = MSFT(ẑ_n1^(k), ..., ẑ_nM^(k); ξ_s)      ▷ Eq. 9
15:       for m = 1, ..., M do
16:         ḡ_nm^(k) = FusionMLP_m([z_nm^(k) ∥ ẑ_nm^(k) ∥ z̃_nm^(k)]; ξ_m)   ▷ Eq. 10
17:         ϕ_nm^(k+1) ← ϕ_nm^(k) − ḡ_nm^(k)                                 ▷ Eq. 11
18:       end for
19:     end for
20:   end for
21:   L_outer = (1/B) Σ_{n=1}^B Σ_{m=1}^M L(D_nm^val; θ_m, ϕ_nm^(K)).        ▷ Eq. 11
22:   Θ ← Θ − λ_Θ ∇_Θ L_outer, where Θ = {θ, ϕ, ξ, η}.                       ▷ Eq. 11
23: until converged

For each adaptation step $k$, we first compute per-modality state representations $z_{nm}^{(k)} \in \mathbb{R}^{S_m \times D_z}$ from the gradients and parameters of each modality learner. We concatenate the context parameters $\phi_{nm}^{(k)} \in \mathbb{R}^{S_m \times D_\phi}$ and gradients $g_{nm}^{(k)} \in \mathbb{R}^{S_m \times D_\phi}$ along the feature dimension, followed by per-modality projection MLPs. Then, these state representations are fed into USFTs with per-modality $L_1$ transformer blocks ($\xi_m$):

$$\hat{z}_{nm}^{(k)}=\mathsf{USFT}_{m}(z_{nm}^{(k)};\xi_{m}),\ \text{for}\ m=1,\ldots,M,\tag{8}$$

where $\hat{z}_{nm}^{(k)}$ are updated state representations by USFTs, which model dependencies among a set of states within a modality. Next, we combine these per-modality representations into a sequence $\hat{z}_n^{(k)} = \{\hat{z}_{nm}^{(k)}\}_{m=1}^{M}$ and input them to MSFTs with shared $L_2$ transformer blocks ($\xi_s$). These enhanced state representations $\tilde{z}_n^{(k)}$ further capture cross-modal relationships:

$$\tilde{z}_{n}^{(k)}=\mathsf{MSFT}(\hat{z}_{n}^{(k)};\xi_{s}),\ \text{where}\ \hat{z}_{n}^{(k)}=\{\hat{z}_{nm}^{(k)}\}_{m=1}^{M}.\tag{9}$$

The final step is to integrate them all and estimate enhanced parameter updates for the learners. We
We concatenate the computed per-modality state representations z (k) nm, zˆ (k) nm, z˜ (k) nm ∈ R S×Dz along the feature dimension, followed by processing each feature with Fusion MLPs: $$\bar{g}_{n m}^{(k)}=\mathrm{FusionMLP}(\bar{z}_{n m}^{(k)};\xi_{m}),\mathrm{~where~}\bar{z}_{n m}^{(k)}=[z_{n m}^{(k)}\parallel\hat{z}_{n m}^{(k)}\parallel\hat{z}_{n m}^{(k)}].$$ nm]. (10) Here, g¯ (k) n = {g¯ (k) nm} M m=1 is the predicted weight updates for the unimodal learners. They are used to adapt ϕ (k) nm for each inner step k = 0*, . . . , K* − 1, while the parameters of SFTs ξ = ξs ∪ {ξm}M m=1 along with those $$({\mathfrak{g}})$$ $$(10)$$ of INRs (θ and ϕ) are meta-learned during outer-optimization: $$\theta,\phi,\xi=\operatorname*{argmin}_{\theta,\phi,\xi}\,\mathbb{E}_{n,m}\left[\mathcal{L}(\mathcal{D}_{n m}^{\mathrm{val}},\theta_{m},\phi_{n m}^{(K)})\right],\;\mathrm{where}\;\phi_{n m}^{(k+1)}=\phi_{n m}^{(k)}-\bar{g}_{n m}^{(k)}.$$ ## 4.4 Gradient Scaling The direct use of gradients as inputs to neural networks could be problematic due to the presence of extremely small values, leading to a bottleneck in learning process. Also, the gradient distributions could differ significantly across modalities, hindering their effective multimodal fusion. To remedy this, we meta-learn a scaling constant for each modality and rescale the gradients before feeding them into projection MLPs: $$(11)$$ $$g_{n m}^{(k)}\leftarrow\exp\left(\eta_{m}\right)\cdot g_{n m}^{(k)},$$ $$(12)$$ nm, (12) where we exponentiate the scaling constants {ηm}M m=1 to ensure positive rescaling and maintain gradient directions. ## 5 Experiments We demonstrate our proposed MIA improves performance of state-of-the-art meta-learned INR models. We refer the reader to Appendix A for more qualitative samples. Baselines. We adopt Functa (Dupont et al., 2022a) and Composers (Kim et al., 2023) as the base INR models, collectively referred to as CAVIA in subsequent sections. 
Then, we build our framework and other baselines upon this CAVIA. We include recent L2O methods such as MetaSGD (Li et al., 2017), GAP (Kang et al., 2023), and ALFA (Baik et al., 2023) as baselines integrated into CAVIA. These approaches aim to enhance the learners' convergence and generalization by modulating gradients, yet they do not account for cross-modal interactions. We also compare against the state-of-the-art encoder-based multimodal meta-learning models, HNPs (Shen et al., 2023) and MTNPs (Kim et al., 2022a), which are based on Neural Processes (NPs) (Garnelo et al., 2018). While HNPs are mostly demonstrated on multimodal classification, MTNPs are designed specifically for multimodal signal regression tasks like ours, using encoders with specialized attention and pooling instead of iterative optimization. We set K = 3 for optimization-based methods. See Appendix D.2 for detailed discussions of the baselines. Datasets and Metrics. Our method is evaluated across four datasets: (1) multimodal 1D synthetic functions (Kim et al., 2022a), (2) multimodal 2D CelebA images (Xia et al., 2021), (3) the ERA5 global climate dataset (Hersbach et al., 2019), and (4) the Audiovisual-MNIST (AV-MNIST) dataset (Vielzeuf et al., 2018). For training, we construct the support set $\mathcal{D}_{nm}^{\rm train}$ by sampling $P_{nm} \times R_{nm}$ coordinate-feature pairs from each full dataset $\mathcal{D}_{nm}$. The sampling ratio $R_{nm}$ varies within predefined ranges $[R_m^{\min}, R_m^{\max}]$, independently drawn for each signal. The full dataset serves as the query set $\mathcal{D}_{nm}^{\rm val}$ during validation to assess both memorization and generalization capabilities. We measure performance using mean squared errors (MSEs), averaged over 5 runs, across different ranges of sampling ratios. More details, including the number of signals (N), modalities (M), and coordinate-value pairs (P) for each dataset, can be found in Appendix D.1.

## 5.1 Multimodal 1D Synthetic Function Regression

Setups.
Following prior works (Finn et al., 2017; Guo et al., 2020; Kim et al., 2022a), we conduct experiments in a simple yet controlled 1D synthetic function regression setting. Specifically, we first define a canonical form of parametric multimodal functions:

$$y_{nm}=a_{n}\cdot\texttt{Act}_{m}(b_{n}\cdot x_{nm}+c_{n})+d_{n},\ \text{where}\ \texttt{Act}_{m}\in\{\texttt{Sine},\texttt{Gaussian},\texttt{Tanh},\texttt{ReLU}\}.\tag{13}$$

Each function $\mathcal{D}_{nm}$ is instantiated with the shared parameters $(a_n, b_n, c_n, d_n)$ and a modality-specific non-linear activation function $\texttt{Act}_m(\cdot)$. We follow Kim et al. (2022a) to construct the dataset: we define a uniform grid of $P_{nm}=200$ coordinates within the range $x\in[-5,5]$, sample $N=1000$ sets of function parameters $(a_n, b_n, c_n, d_n)$ shared across modalities, and add per-modality Gaussian noise $\epsilon_{nm}\sim\mathcal{N}(0, 0.02)$ to control the cross-modal correlations. We use $[R^{\min}, R^{\max}]=[0.01, 0.1]$ for all modalities.

Table 1: Quantitative comparisons on the multimodal 1D synthetic functions. We report the normalized MSEs (×10−2) computed over distinct ranges of sampling ratios $[R^{\min}, R^{\max}]$, averaged over 5 random seeds. The multimodal methods are marked with the dagger (†) symbol.

| Method | Sine 0.01–0.02 | Sine 0.02–0.05 | Sine 0.05–0.10 | Gaussian 0.01–0.02 | Gaussian 0.02–0.05 | Gaussian 0.05–0.10 | Tanh 0.01–0.02 | Tanh 0.02–0.05 | Tanh 0.05–0.10 | ReLU 0.01–0.02 | ReLU 0.02–0.05 | ReLU 0.05–0.10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Functa | 45.45 | 16.58 | 3.316 | 19.83 | 4.603 | 1.043 | 23.02 | 3.788 | 0.587 | 84.49 | 13.54 | 2.992 |
| w/ MetaSGD | 44.69 | 16.46 | 3.326 | 19.75 | 4.334 | 1.055 | 26.16 | 4.528 | 0.718 | 69.86 | 10.45 | 2.044 |
| w/ GAP | 44.76 | 16.49 | 3.297 | 19.54 | 4.518 | 1.052 | 23.46 | 3.799 | 0.555 | 63.17 | 10.12 | 1.862 |
| w/ ALFA | 48.09 | 18.67 | 5.160 | 19.09 | 4.910 | 1.580 | 22.45 | 4.200 | 0.872 | 53.83 | 8.303 | 1.536 |
| w/ HNPs† | 16.95 | 8.397 | 2.547 | 1.634 | 0.912 | 0.293 | 2.844 | 0.673 | **0.091** | 9.247 | **0.772** | **0.095** |
| w/ MTNPs† | 13.34 | 4.718 | 1.871 | 2.019 | 1.513 | 1.285 | 4.678 | 0.794 | 0.340 | 17.75 | 1.315 | 0.398 |
| w/ MIA† | **6.386** | **2.058** | **0.547** | **1.057** | **0.571** | **0.281** | **1.285** | **0.378** | 0.131 | **5.069** | 1.012 | 0.115 |
| Composers | 37.90 | 17.40 | 5.539 | 6.916 | 3.495 | 1.584 | 14.90 | 3.974 | 0.975 | 50.02 | 11.75 | 3.426 |
| w/ MetaSGD | 38.26 | 17.28 | 5.427 | 6.887 | 3.221 | 1.388 | 14.99 | 3.938 | 0.981 | 51.97 | 11.65 | 3.530 |
| w/ GAP | 37.53 | 17.52 | 5.397 | 6.630 | 3.409 | 1.526 | 14.40 | 3.828 | 0.978 | 50.90 | 10.85 | 3.128 |
| w/ ALFA | 36.53 | 14.87 | 4.115 | 5.650 | 2.770 | 1.154 | 14.18 | 3.426 | 0.799 | 42.96 | 6.814 | 1.481 |
| w/ HNPs† | 20.38 | 9.480 | 2.413 | 2.091 | 1.075 | 0.342 | 4.497 | 0.991 | **0.111** | 19.13 | 2.698 | 0.129 |
| w/ MTNPs† | 16.62 | 4.859 | 0.766 | 2.256 | 1.252 | 0.708 | 4.670 | 0.743 | 0.121 | 11.47 | **0.897** | **0.114** |
| w/ MIA† | **5.564** | **1.844** | **0.627** | **0.975** | **0.528** | **0.237** | **1.257** | **0.343** | 0.128 | **4.715** | 0.943 | 0.156 |

Results. We present the quantitative results in Table 1, where we report MSEs normalized by the per-function scale parameter, $\mathrm{MSE}=\frac{1}{N}\sum_{n=1}^{N}\frac{1}{a_{nm}^{2}}\|\hat{y}_{nm}-y_{nm}\|_{2}^{2}$, following Kim et al. (2022a). The methods with no ability to handle multimodal signals jointly (CAVIA, MetaSGD, GAP, ALFA) fail to approximate the functions, showing high MSEs in all ranges of sampling ratios.
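The data-generating process of Eq. (13), the per-signal support sampling, and the normalized MSE reported in Table 1 can be sketched as follows. The range from which the shared parameters $(a, b, c, d)$ are drawn is an assumption for illustration (chosen positive here so the normalization is well defined), as the text does not specify it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Modality-specific activations Act_m from Eq. (13).
ACTIVATIONS = {
    "Sine": np.sin,
    "Gaussian": lambda x: np.exp(-x ** 2),
    "Tanh": np.tanh,
    "ReLU": lambda x: np.maximum(x, 0.0),
}

def make_task(x, noise_std=0.02):
    # Shared parameters (a, b, c, d) across modalities; the sampling range
    # is an illustrative assumption, not taken from the paper.
    a, b, c, d = rng.uniform(0.5, 2.0, size=4)
    signals = {
        name: a * act(b * x + c) + d + rng.normal(0.0, noise_std, size=x.shape)
        for name, act in ACTIVATIONS.items()
    }
    return signals, a

def sample_support(x, y, r_min=0.01, r_max=0.10):
    # Per-signal sampling ratio R drawn uniformly from [R_min, R_max].
    r = rng.uniform(r_min, r_max)
    idx = rng.choice(x.size, size=max(1, int(round(x.size * r))), replace=False)
    return x[idx], y[idx]

def normalized_mse(y_hat, y, a):
    # MSE normalized by the squared scale parameter, as reported in Table 1.
    return np.mean((y_hat - y) ** 2) / a ** 2

x = np.linspace(-5.0, 5.0, 200)   # uniform grid of P = 200 coordinates
task, a = make_task(x)
xs, ys = sample_support(x, task["Sine"])
```

The full grid plays the role of the query set, while `(xs, ys)` is the (very small) support set the learners adapt on.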
In contrast, the multimodal methods (HNPs, MTNPs, and ours) reconstruct each signal more precisely, even with extremely small support sets (e.g., R < 0.02). Moreover, while HNPs and MTNPs show strong fitting performance on smooth, low-curvature signals (*i.e.* Tanh and ReLU) when data is sufficient, our method achieves the best overall performance among the multimodal methods, verifying its effectiveness in utilizing cross-modal interactions to enhance generalization.

## 5.2 Multimodal 2D CelebA Dataset

Setups. We follow MTNP (Kim et al., 2022a) and conduct experiments on real-world 32 × 32 image function regression settings. In particular, we consider three different visual modalities of 2D facial data on CelebA (Liu et al., 2015), namely RGB images (Karras et al., 2018), surface normal maps, and sketches (Xia et al., 2021).

Results. Table 3 presents the quantitative results. The encoder baseline MTNPs outperforms unimodal baselines like CAVIA, MetaSGD, GAP, and ALFA in low-shot settings (R ≤ 0.25). However, with more substantial support (R ≥ 0.50), MTNPs fall behind ALFA, likely due to the common underfitting issues seen in encoder-based meta-learning approaches, especially with complex target functions featuring high-frequency details (Kim et al., 2019; 2022a; Guo et al., 2023; Shen et al., 2023). The poor performance of HNPs can be explained similarly, though they perform better in the simpler 1D signal regressions. The original paper on HNPs (Shen et al., 2023) might not have observed this degeneracy, as it focuses on multimodal classification and uses pretrained image features (Simonyan & Zisserman, 2014; He et al., 2016). In contrast, our MIA consistently outperforms all baselines across various support set sizes. Figure 4 demonstrates that MIA not only excels in generalization under data-scarce conditions but also retains high-frequency details effectively when data is sufficient.

## 5.3 Multimodal Climate Data

Setups.
ERA5 (Hersbach et al., 2019) is a global climate dataset that provides hourly estimates of a wide range of atmospheric variables measured globally over a grid of equally spaced latitudes and longitudes. Out of all the variables, we consider measurements of temperature, pressure, and humidity. Following Dupont et al. (2022c), we resize each data point to a 46 × 90 resolution.

Figure 3: Results on the multimodal 2D CelebA image function regression. We report the MSEs (×10−3) computed over distinct ranges of sampling ratios $[R^{\min}, R^{\max}]$. The multimodal methods are marked with the dagger (†) symbol.

| Method | RGB 0.00–0.25 | RGB 0.25–0.50 | RGB 0.50–0.75 | RGB 0.75–1.00 | Normals 0.00–0.25 | Normals 0.25–0.50 | Normals 0.50–0.75 | Normals 0.75–1.00 | Sketches 0.00–0.25 | Sketches 0.25–0.50 | Sketches 0.50–0.75 | Sketches 0.75–1.00 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Functa | 13.32 | 4.008 | 2.900 | 2.408 | 5.079 | 2.401 | 2.028 | 1.859 | 13.57 | 6.966 | 5.368 | 4.756 |
| w/ MetaSGD | 13.02 | 3.830 | 2.685 | 2.182 | 4.923 | 2.268 | 1.864 | 1.682 | 12.72 | 6.278 | 4.532 | 3.839 |
| w/ GAP | 12.84 | 3.726 | 2.543 | 2.024 | 4.805 | 2.184 | 1.762 | 1.570 | 12.43 | 6.023 | 4.166 | 3.407 |
| w/ ALFA | 11.83 | 3.257 | 1.957 | 1.362 | 4.115 | 1.806 | 1.285 | 1.014 | 10.80 | 4.801 | 2.463 | 1.283 |
| w/ HNPs† | 19.02 | 10.24 | 9.728 | 9.433 | 14.34 | 14.38 | 14.35 | 14.23 | 20.36 | 20.45 | 20.44 | 20.57 |
| w/ MTNPs† | 9.871 | 4.807 | 4.105 | 3.644 | 3.983 | 2.552 | 2.339 | 2.221 | 9.680 | 6.568 | 5.395 | 4.819 |
| w/ MIA† | 6.946 | 2.563 | 1.627 | 1.135 | 2.979 | 1.530 | 1.118 | 0.869 | 7.667 | 4.042 | 2.142 | 1.011 |
| Composers | 22.41 | 12.41 | 11.09 | 10.24 | 8.613 | 6.583 | 6.415 | 6.292 | 19.17 | 15.73 | 14.88 | 14.63 |
| w/ MetaSGD | 20.11 | 10.25 | 9.000 | 8.268 | 8.218 | 5.979 | 5.753 | 5.601 | 18.95 | 15.43 | 14.49 | 14.20 |
| w/ GAP | 20.07 | 10.05 | 8.785 | 8.039 | 8.149 | 5.847 | 5.616 | 5.461 | 18.58 | 15.24 | 14.39 | 14.15 |
| w/ ALFA | 15.12 | 4.887 | 3.376 | 2.681 | 5.444 | 2.399 | 1.773 | 1.469 | 12.57 | 5.984 | 3.416 | 2.124 |
| w/ HNPs† | 17.88 | 12.40 | 12.13 | 11.74 | 7.363 | 6.419 | 6.375 | 6.324 | 17.70 | 17.31 | 17.20 | 17.34 |
| w/ MTNPs† | 9.902 | 4.957 | 4.269 | 3.813 | 4.184 | 2.747 | 2.545 | 2.437 | 9.791 | 6.425 | 5.163 | 4.544 |
| w/ MIA† | 9.764 | 3.418 | 1.913 | 1.017 | 3.749 | 1.763 | 1.062 | 0.526 | 9.505 | 4.708 | 2.336 | 0.855 |

![8_image_0.png](8_image_0.png)

Figure 4: Three qualitative comparisons on CelebA. Compared to the baselines, MIA generalizes better in the extremely low-shot settings (1st and 2nd panes) and preserves richer high-frequency details and nuances (2nd and 3rd panes) when sufficient data is observed.

Results. As indicated in Table 5, some unimodal baselines like GAP and ALFA perform well, even surpassing the multimodal methods across various sampling ratios. This performance can be attributed to the relatively stable nature of atmospheric variables over time and regions (Howells & Katz, 2019), where a fast memorization capability is beneficial. Notably, our method surpasses all baselines, affirming its capability to handle real-world multimodal climate data effectively. Supporting evidence of its superior performance is also presented in Figure 7.

## 5.4 Multimodal Audio-Visual AV-MNIST Dataset

Setups. Unlike the previous setups, modeling audiovisual signals presents unique challenges arising from the heterogeneity in coordinate systems and the lack of explicit spatiotemporal alignment between modalities. We employ the AV-MNIST dataset (Vielzeuf et al., 2018), a collection of greyscale digit images (LeCun et al., 2010) and their pronounced audios (Jackson, 2016). We use the original images at a 28 × 28 resolution, while we trim the audios to a sampling rate of 2 kHz and a duration of 1 second.

Results.
In Table 6, both the Functa-based CAVIA and MTNPs struggle with the audio modality, likely due to difficulties in handling high-frequency content and noise. Furthermore, the encoder baseline HNPs fails on both audio and visual signals, possibly due to inadequate fitting of complex audio without the support of expressive pretrained features, which also affects performance on simpler image functions. In contrast, our method excels in both modalities across all sampling ratios, as confirmed by Figure 7. Our approach not only accurately reproduces audio signals but also effectively predicts digit classes from minimal image support, showcasing robust fitting and cross-modal generalization capabilities.

Figure 5: Results on ERA5 dataset in MSEs (×10−4) across different ranges of sampling ratios $[R^{\min}, R^{\max}]$. The multimodal methods are marked with the dagger (†) sign.

| Method | Temperature 0.00–0.25 | Temperature 0.25–0.50 | Temperature 0.50–1.00 | Pressure 0.00–0.25 | Pressure 0.25–0.50 | Pressure 0.50–1.00 | Humidity 0.00–0.25 | Humidity 0.25–0.50 | Humidity 0.50–1.00 |
|---|---|---|---|---|---|---|---|---|---|
| Functa | 8.56 | 4.04 | 3.79 | 2.56 | 1.39 | 1.32 | 33.7 | 24.2 | 22.6 |
| w/ MetaSGD | 6.89 | 3.29 | 3.10 | 1.93 | 1.01 | 0.96 | 33.1 | 21.6 | 19.5 |
| w/ GAP | 6.30 | 2.90 | 2.69 | 1.95 | 0.98 | 0.92 | 31.2 | 18.8 | 16.0 |
| w/ ALFA | 4.03 | 1.64 | 1.30 | 0.61 | 0.19 | 0.15 | 28.5 | 15.1 | 11.2 |
| w/ HNPs† | 35.0 | 35.2 | 34.6 | 2.85 | 2.82 | 2.82 | 49.6 | 49.6 | 49.6 |
| w/ MTNPs† | 4.44 | 3.65 | 3.55 | 1.42 | 1.34 | 1.32 | 28.8 | 26.0 | 25.4 |
| w/ MIA† | 2.03 | 1.02 | 0.74 | 0.46 | 0.17 | 0.13 | 16.0 | 11.7 | 8.47 |
| Composers | 8.36 | 7.58 | 7.51 | 2.70 | 2.50 | 2.50 | 46.9 | 43.2 | 42.8 |
| w/ MetaSGD | 9.65 | 8.47 | 8.38 | 2.41 | 2.29 | 2.28 | 57.5 | 48.9 | 48.4 |
| w/ GAP | 8.88 | 7.74 | 7.65 | 2.77 | 2.43 | 2.43 | 46.0 | 42.1 | 41.5 |
| w/ ALFA | 7.31 | 6.05 | 6.36 | 1.95 | 1.86 | 1.83 | 38.5 | 34.3 | 34.7 |
| w/ HNPs† | 39.6 | 40.0 | 39.3 | 4.18 | 4.16 | 4.16 | 49.4 | 49.5 | 49.5 |
| w/ MTNPs† | 4.52 | 3.61 | 3.49 | 1.38 | 1.32 | 1.31 | 28.5 | 25.3 | 24.4 |
| w/ MIA† | 3.93 | 2.48 | 1.40 | 1.13 | 0.68 | 0.41 | 24.1 | 16.6 | 7.77 |

Figure 6: Results on AV-MNIST in MSEs (×10−3) across different ranges of sampling ratios $[R^{\min}, R^{\max}]$. The dagger (†) sign denotes multimodal methods.

| Method | Images 0.00–0.25 | Images 0.25–0.50 | Images 0.50–1.00 | Audios 0.25–0.50 | Audios 0.50–0.75 | Audios 0.75–1.00 |
|---|---|---|---|---|---|---|
| Functa | 29.7 | 7.98 | 3.84 | 2.98 | 3.03 | 3.00 |
| w/ MetaSGD | 29.8 | 7.94 | 3.71 | 0.95 | 0.50 | 0.31 |
| w/ GAP | 29.8 | 7.97 | 3.76 | 0.93 | 0.47 | 0.29 |
| w/ ALFA | 28.1 | 6.84 | 3.12 | 0.79 | 0.36 | 0.20 |
| w/ HNPs† | 50.3 | 16.2 | 10.5 | 2.99 | 3.03 | 3.00 |
| w/ MTNPs† | 28.8 | 11.4 | 6.88 | 2.37 | 2.30 | 2.23 |
| w/ MIA† | 19.3 | 4.85 | 2.08 | 0.41 | 0.21 | 0.13 |
| Composers | 33.4 | 9.86 | 2.77 | 1.02 | 0.47 | 0.22 |
| w/ MetaSGD | 34.6 | 11.4 | 4.48 | 1.00 | 0.46 | 0.22 |
| w/ GAP | 34.4 | 11.2 | 4.15 | 1.06 | 0.53 | 0.29 |
| w/ ALFA | 31.9 | 8.27 | 2.46 | 1.09 | 0.48 | 0.22 |
| w/ HNPs† | 45.6 | 18.6 | 14.8 | 2.99 | 3.03 | 3.00 |
| w/ MTNPs† | 30.4 | 13.2 | 8.70 | 2.78 | 2.80 | 2.77 |
| w/ MIA† | 18.7 | 4.24 | 1.32 | 0.65 | 0.29 | 0.15 |

![9_image_0.png](9_image_0.png)

Figure 7: Qualitative comparisons in ERA5 and AV-MNIST. Left: Our method achieves the lowest errors (shown in viridis color) in modeling real-world climate data. Right: Our method accurately predicts the digit type with little image data and recovers the audio faithfully.

## 6 Analysis

In this section, we delve into an in-depth analysis of our method.

## 6.1 Ablation Study

Impact of modules. We investigate the roles of the three modules (*i.e.* USFTs, MSFTs, and Fusion MLPs) of SFTs in terms of both memorization and generalization performance. We construct ablated methods by gradually augmenting Composers with USFTs, MSFTs, and Fusion MLPs, one at a time. We evaluate their fitting performance over the provided support sets and the unobserved non-support parts separately, indicating memorization and generalization performance, respectively. We report the relative performance improvement (Maninis et al., 2019; Kim et al., 2022a) achieved by each method compared to the vanilla Composers, averaged across all signals and modalities. As shown in Table 2a, USFTs specialize in enhancing modality-specific patterns that are useful for memorization, while MSFTs further emphasize shared information for generalization. Finally, Fusion MLPs effectively integrate the representations from USFTs and MSFTs, achieving superior performance on both memorization and generalization. Note that the reported improvements here do not specifically account for the sampling ratios of each modality, whereas the efficacy of the MSFTs becomes more apparent in cases of large imbalance in these sampling ratios across modalities. For a detailed analysis of this aspect, we refer the readers to the discussions on Figure 8b and Table 8. Also, for comparisons and discussions of all possible ablated methods, please see Appendix E.5.

Table 2: Ablation study on SFTs in terms of generalization (G) and memorization (M) capability. We report the relative improvement (↑) achieved by ablated methods over vanilla Composers.

(a) Modules of SFTs:

| USFTs | MSFTs | Fusion MLPs | CelebA G | CelebA M | AVMNIST G | AVMNIST M |
|---|---|---|---|---|---|---|
| ✗ | ✗ | ✗ | 0.0 | 0.0 | 0.0 | 0.0 |
| ✓ | ✗ | ✗ | 58.3 | 92.1 | 60.7 | 84.1 |
| ✓ | ✓ | ✗ | 61.5 | 92.5 | 63.9 | 84.6 |
| ✓ | ✓ | ✓ | 61.4 | 93.0 | 65.6 | 88.7 |

(b) Learners' states:

| Params | Grads | Scaling | CelebA G | CelebA M | AVMNIST G | AVMNIST M |
|---|---|---|---|---|---|---|
| ✓ | ✗ | ✗ | N/A | N/A | N/A | N/A |
| ✗ | ✓ | ✗ | 54.3 | 68.7 | 12.6 | 13.0 |
| ✗ | ✓ | ✓ | 60.6 | 91.6 | 61.7 | 87.3 |
| ✓ | ✓ | ✓ | 61.4 | 93.0 | 65.6 | 88.7 |

![10_image_0.png](10_image_0.png)

Figure 8: Do our MSFTs identify and leverage cross-modal relationships as expected? Left: Pearson correlation $C_{ij}$ between the attention weights assigned to modality i in MSFTs and the support set size for modality j. A positive correlation indicates a tendency of modality i to attend more to modality j when it has sufficient data. Right: Relative performance improvement (↑) when multimodal support set sizes increase. Colors indicate support set sizes of the target modality, while the x-axis represents those of the source modalities.

Impact of learners' states. We also study the impact of the states utilized by SFTs by ablating parameters, gradients, or the meta-learned gradient scaling from SFTs. Similar to the previous analysis, we report their relative performance improvement compared to Composers. Table 2b reports the results. We find that the parameters-only method fails entirely (reported as N/A).
In contrast, the gradients-only method shows improvements over vanilla Composers, and this improvement is significantly boosted by the meta-learned gradient scaling. This suggests that multimodal gradients indeed contain useful information for predicting better update directions for each INR learner, and that a proper scaling technique is crucial for their effective utilization. Finally, the best performance is achieved when employing all of them. This implies that the learners' current knowledge of the signals (*i.e.* parameters) can be further incorporated to provide SFTs with a more comprehensive view of the optimization process, leading to better parameter adjustment.

## 6.2 Adaptive Cross-Modal Utilization Of MSFTs

We validate whether our MSFTs can (1) identify valuable cross-modal patterns in learners' states and (2) utilize them to enhance the learning of INRs. For the first aspect, we examine how the attention patterns of MSFTs relate to the quality of the support sets. We calculate the Pearson correlation coefficient $C_{ij}$, which reflects the relationship between the attention weights assigned to the learner's state representations for modality i and the support set size for modality j. Notably, as presented in Figure 8a, we observe strong positive correlations along the diagonal. This reveals that, when observations within a specific modality suffice, MSFTs refrain from disrupting its state representations with those from other modalities. Conversely, the strong negative correlations off the diagonal imply that MSFTs tend to compensate for suboptimal state representations driven by limited data in one modality by attending more to the other modalities.

Table 3: Relative performance improvements (↑), parameter counts (↓), memory usage (↓), and meta-training time per iteration (↓) for the methods considered in our paper. Notably, encoder baselines were excluded due to out-of-memory issues.
The analysis illustrates the increased costs of SFT integration, which are mitigated at higher resolutions and under parameter alignment.

| Image Resolution | Method | Relative Performance Improvement (%) | # Params | Memory (GB) | Training Time (ms/it) |
|---|---|---|---|---|---|
| 32 × 32 | CAVIA | 0.00 | 199K | 3.00 | 60.0 |
| | MetaSGD | 2.02 | 224K | 3.01 | 60.2 |
| | GAP | -2.05 | 224K | 3.01 | 66.8 |
| | ALFA | 55.1 | 299K | 3.01 | 67.2 |
| | CAVIA-Large | -16.5 | 2.08M | 7.39 | 435.9 |
| | MetaSGD-Large | -10.3 | 2.08M | 7.39 | 449.2 |
| | GAP-Large | -0.69 | 2.08M | 8.00 | 425.6 |
| | ALFA-Large | 50.30 | 2.17M | 7.69 | 154.5 |
| | HNP | -4.60 | 1.90M | 58.47 | 8107.1 |
| | MTNP | 53.4 | 11.85M | 44.55 | 2031.3 |
| | MIA | 69.6 | 1.93M | 3.61 | 136.2 |
| 128 × 128 | CAVIA | 0.00 | 298K | 23.69 | 765.33 |
| | ALFA | 8.86 | 644K | 24.43 | 755.90 |
| | HNP | N/A | 2.01M | OOM | N/A |
| | MTNP | N/A | 21.38M | OOM | N/A |
| | MIA | 68.9 | 2.04M | 25.62 | 942.22 |

To examine the second aspect, we assess the performance on one target modality while the sizes of the multimodal support sets from the other modalities are increased. As in the previous experiments, we report the relative improvement achieved by our method compared to that obtained without the supports from the other modalities. Figure 8b visualizes the results: in general, increased availability of multimodal support sets contributes to better performance in the target modality (*i.e.* increasing gains along the x-axis), while the benefit from multimodal observations becomes more significant when the size of the target support set is smaller (*i.e.* blue lines). Based on this analysis, we confirm MSFTs' ability to correctly identify and leverage cross-modal interactions, which is particularly beneficial when modeling signals with scarce observations.
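The correlation analysis above can be reproduced in miniature. The attention weights below are synthetic and constructed to depend on the attended modality's support-set size, purely to illustrate how $C_{ij}$ is computed; in the actual analysis the weights come from the MSFTs.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 3, 500                                   # modalities, tasks

# Support-set sizes per task and modality, and toy attention weights that
# (by construction) grow with the attended modality's support size.
support_sizes = rng.integers(10, 200, size=(T, M)).astype(float)
attn = support_sizes + rng.normal(0.0, 5.0, size=(T, M))

def pearson(u, v):
    # Sample Pearson correlation between two 1D arrays.
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# C[i, j]: correlation between attention on modality i and support size of j.
C = np.array([[pearson(attn[:, i], support_sizes[:, j])
               for j in range(M)] for i in range(M)])
```

With this construction the diagonal of `C` is strongly positive while the off-diagonal entries stay near zero, mirroring the diagonal pattern reported for the MSFTs.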
## 6.3 Computational Overhead

Our SFTs add computational complexity over vanilla CAVIA during both the meta-training and meta-testing phases. To justify this, we compare the computational resources required for both phases separately.

Meta-training Computational Overhead. Table 3 outlines the number of parameters, memory consumption, and required meta-training time per iteration for each method, measured on the 2D multimodal CelebA dataset. Additionally, we analyze larger baseline models (CAVIA-Large, MetaSGD-Large, GAP-Large, and ALFA-Large) with parameter counts adjusted to match our MIA, along with models trained on 128 × 128 image resolutions. Unfortunately, the encoder-based multimodal baselines encountered out-of-memory (OOM) issues at this larger resolution setting, even with a batch size of one, and were excluded from the comparison. As shown in the table, the integration of SFTs into our MIA framework slightly increases memory overhead and doubles the training time compared to unimodal frameworks. However, the additional resource requirements become less significant with higher-resolution inputs and are offset when baseline parameter counts are aligned with those of MIA. This efficiency stems from the fact that SFTs maintain a computational cost that is constant in the sample size, in contrast to the linear complexity of INRs, and quadratic in the number of INR context parameters, which scales more favorably with signal resolution, as noted by Dupont et al. (2022a).

Meta-testing Computational Overhead. Finally, we examine the time required by the optimization-based methods (CAVIA, GAP, and ALFA) to match the 3-step adaptation performance of our method during the meta-testing phase, with respect to varying support set sizes. The results are depicted in Figure 9, utilizing the RGB modality in the 2D multimodal CelebA dataset.

![12_image_0.png](12_image_0.png)

Figure 9: Inference time comparisons for baselines to achieve 3-step adaptation performances of our MIA within a 100-step optimization limit. Left: Cases where the baselines successfully match our performances. Right: For the cases where the baselines fail to match ours, we plot times for their best-achieved performances.

We showcase two distinct scenarios separately: one on the left, where the baselines successfully match our performance, and another on the right, where they fail to do so, all within the constraint of a 100-step optimization limit. In cases where the baselines do not match our performance, we report the inference time for their early-stopped best-achieved MSEs. Based on the figures, we conclude: (1) Even with 100 optimization steps, matching our 3-step performance remains a challenge for the baselines. (2) The baselines exhibit a tendency towards early overfitting with smaller support sets or underfitting with more data. (3) Even when they successfully match our performance, they often require more inference time (up to 421 ms) and more optimization steps.

## 7 Discussion

In this paper, we introduce a new optimization-based meta-learning framework for INRs, substantially enhancing the performance of existing methods. This improvement is primarily attributed to our novel Multimodal Iterative Adaptation (MIA), which enables separate INR learners to share their optimization processes through State Fusion Transformers (SFTs). This unique multimodal adaptation scheme leverages cross-modal interactions among learner states, facilitating rapid adaptation and boosting performance. Despite the improvement, we also acknowledge the fundamental challenges of scaling up meta-learning algorithms. In particular, the high memory requirement of bi-level optimization, such as retaining all computational graphs in the inner loop to compute exact second-order derivatives for training the meta-learners in the outer loop, hinders their application to more complex high-definition signals.
To overcome such limitations, we recognize that recent advancements in the scalable meta-learning domain present viable paths toward enhancing the scalability of meta-learning frameworks, including ours. Notably, techniques like context pruning during meta-training (Tack et al., 2023), efficient estimation of second-order gradients (Chen et al., 2022a; Metz et al., 2022b; Choe et al., 2023; Jain et al., 2023), and the design of more efficient transformer-based meta-learners (Jain et al., 2023) have demonstrated considerable promise. Through these techniques, meta-learning algorithms have been effectively extended to more complex real-world scenarios, including the successful scaling of meta-learned INRs to 1024 × 1024 images and 256 × 256 × 32 videos (Tack et al., 2023), and the optimization of medium- to large-scale models such as BERTs (Devlin et al., 2019), RoBERTas (Liu et al., 2019), ViT-H (Dosovitskiy et al., 2021), and even models exceeding 11B+ parameters like T5-XXL (Raffel et al., 2020) (Chen et al., 2022a; Metz et al., 2022b; Choe et al., 2023; Jain et al., 2023). Additionally, exploring more efficient (Choromanski et al., 2021) or local (Liu et al., 2021) attention mechanisms, along with curriculum learning strategies such as a two-stage meta-learning scheme in which we first meta-learn independent unimodal frameworks with USFTs for each modality, followed by joint learning of the entire multimodal framework augmented with MSFTs, could be a promising avenue for further reducing the computational overhead of our SFTs. The integration of these techniques for scaling up our framework presents an intriguing direction for future research, which we intend to explore in subsequent work.

## Acknowledgment

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.
RS-2021-II211343, Artificial Intelligence Graduate School Program (Seoul National University)), the IITP grant funded by the Korea government (MSIT) (No. RS-2022-II220156, Fundamental research on continual meta-learning for quality enhancement of casual videos and their 3D metaverse transformation), LG AI Research (Learning Robust and General-Purpose Multimodal Representation), and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C2005573).

## References

Milad Abdollahzadeh, Touba Malekzadeh, and Ngai-Man Cheung. Revisit multimodal meta-learning through the lens of multi-task learning. In *NeurIPS*, 2021.

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. *NeurIPS*, 29, 2016.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization, 2016.

Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. MultiMAE: Multi-modal multi-task masked autoencoders. In *ECCV*, 2022.

Sungyong Baik, Myungsub Choi, Janghoon Choi, Heewon Kim, and Kyoung Mu Lee. Learning to learn task-adaptive hyperparameters for few-shot learning. *TPAMI*, 2023.

Matthias Bauer, Emilien Dupont, Andy Brock, Dan Rosenbaum, Jonathan Richard Schwarz, and Hyunjik Kim. Spatial functa: Scaling functa to imagenet classification and generation, 2023.

Lucian Chan, Marcel Verdonk, and Carl Poelking. Embracing assay heterogeneity with neural processes for markedly improved bioactivity predictions. In *NeurIPS 2023 Workshop on New Frontiers of AI for Drug Discovery and Development*, 2023.

Hao Chen, Bo He, Hanyu Wang, Yixuan Ren, Ser Nam Lim, and Abhinav Shrivastava. Nerv: Neural representations for videos. In *NeurIPS*, 2021a.

Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Ahmed Awadallah, and Zhangyang Wang. Scalable learning to optimize: A learned optimizer can train big models.
In *Computer Vision - ECCV 2022*, 2022a. Yinbo Chen and Xiaolong Wang. Transformers as meta-learners for implicit neural representations. In ECCV, 2022. Yinbo Chen, Sifei Liu, and Xiaolong Wang. Learning continuous image representation with local implicit image function. In *CVPR*, 2021b. Zeyuan Chen, Yinbo Chen, Jingwen Liu, Xingqian Xu, Vidit Goel, Zhangyang Wang, Humphrey Shi, and Xiaolong Wang. Videoinr: Learning video implicit neural representation for continuous space-time super-resolution. In *CVPR*, 2022b. Sang Keun Choe, Sanket Vaibhav Mehta, Hwijeen Ahn, Willie Neiswanger, Pengtao Xie, Emma Strubell, and Eric Xing. Making scalable meta learning practical. In *NeurIPS*, 2023. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy J. Colwell, and Adrian Weller. Rethinking attention with performers. In *ICLR*, 2021. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), June 2019. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *ICLR*, 2021. Emilien Dupont, Hyunjik Kim, S. M. Ali Eslami, Danilo Jimenez Rezende, and Dan Rosenbaum. From data to functa: Your data point is a function and you can treat it like one. In *ICML*, 2022a. Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Golinski, Yee Whye Teh, and Arnaud Doucet. Coin++: Neural compression across modalities. 2022b. Emilien Dupont, Yee Whye Teh, and Arnaud Doucet. 
Generative models as distributions of functions. In AISTATS, 2022c. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In *ICML*, 2017. Elizabeth Fons, Alejandro Sztrajman, Yousef El-Laham, Alexandros Iosifidis, and Svitlana Vyetrenko. Hypertime: Implicit neural representations for time series. In *NeurIPS Workshop*, 2022. Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J Rezende, SM Eslami, and Yee Whye Teh. Neural processes. In *ICML Workshop*, 2018. Pengsheng Guo, Chen-Yu Lee, and Daniel Ulbricht. Learning to branch for multi-task learning. In *ICML*, 2020. Zongyu Guo, Cuiling Lan, Zhizheng Zhang, Yan Lu, and Zhibo Chen. Versatile neural processes for learning implicit neural representations. In *ICLR*, 2023. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. H Hersbach, B Bell, P Berrisford, G Biavati, A Horányi, J Muñoz Sabater, J Nicolas, C Peubey, R Radu, I Rozum, et al. Era5 monthly averaged data on single levels from 1979 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS), 2019. T. Alfred Howells and J. I. Katz. Long term trends in atmospheric pressure and its variance. arXiv: Atmospheric and Oceanic Physics, 2019. Zohar Jackson. Spoken digit, 2016. URL https://github.com/Jakobovski/free-spoken-digit-dataset. Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In *ICCV*, 2021. Deepali Jain, Krzysztof Marcin Choromanski, Kumar Avinava Dubey, Sumeet Singh, Vikas Sindhwani, Tingnan Zhang, and Jie Tan. Mnemosyne: Learning to train transformers with transformers. In *NeurIPS*, 2023. URL https://openreview.net/forum?id=Fdfyga5i0A. Harshavardhan Kamarthi, Lingkai Kong, Alexander Rodriguez, Chao Zhang, and B Aditya Prakash. Camul: Calibrated and accurate multi-view time-series forecasting. 
In *Proceedings of the ACM Web Conference* 2022, 2022. Suhyun Kang, Duhun Hwang, Moonjung Eo, Taesup Kim, and Wonjong Rhee. Meta-learning with a geometry-adaptive preconditioner. In *CVPR*, 2023. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In *ICLR*, 2018. Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In *CVPR*, 2018. Chiheon Kim, Doyup Lee, Saehoon Kim, Minsu Cho, and Wook-Shin Han. Generalizable implicit neural representations via instance pattern composers. In *CVPR*, 2023. Donggyun Kim, Seongwoong Cho, Wonkwang Lee, and Seunghoon Hong. Multi-task processes. In *ICLR*, 2022a. Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. Attentive neural processes. In *ICLR*, 2019. Subin Kim, Sihyun Yu, Jaeho Lee, and Jinwoo Shin. Scalable neural video representations with learnable positional features. In *NeurIPS*, 2022b. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2014. Yann LeCun, Corinna Cortes, and CJ Burges. Mnist handwritten digit database. *ATT Labs [Online].* Available: http://yann.lecun.com/exdb/mnist, 2010. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In *NeurIPS*, 2018. Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few shot learning, 2017. Paul Pu Liang, Yiwei Lyu, Xiang Fan, Jeffrey Tsaw, Yudong Liu, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, and Russ Salakhutdinov. High-modality multimodal transformer: Quantifying modality & interaction heterogeneity for high-modality representation learning. *TMLR*, 2022. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
Roberta: A robustly optimized bert pretraining approach, 2019. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *ICCV*, 2021. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *ICCV*, December 2015. Kevis-Kokitsi Maninis, Ilija Radosavovic, and Iasonas Kokkinos. Attentive single-tasking of multiple tasks. In *CVPR*, 2019. Luke Metz, C Daniel Freeman, James Harrison, Niru Maheswaranathan, and Jascha Sohl-Dickstein. Practical tradeoffs between memory, compute, and performance in learned optimizers. In *CoLLAs*, 2022a. Luke Metz, James Harrison, C. Daniel Freeman, Amil Merchant, Lucas Beyer, James Bradbury, Naman Agrawal, Ben Poole, Igor Mordatch, Adam Roberts, and Jascha Sohl-Dickstein. Velo: Training versatile learned optimizers by scaling up, 2022b. Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In *ECCV*, 2020. Ghulam Muhammad, Fatima Alshehri, Fakhri Karray, Abdulmotaleb El Saddik, Mansour Alsulaiman, and Tiago H. Falk. A comprehensive survey on multimodal medical signals fusion for smart healthcare systems. Information Fusion, 2021. Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In *CVPR*, 2019. Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. In *AAAI*, 2018. Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In *CVPR*, 2021. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 
Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 2020. Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, and Jinwoo Shin. Modality-agnostic variational compression of implicit neural representations. In *ICML*, 2023. Jiayi Shen, Xiantong Zhen, Marcel Worring, et al. Episodic multi-task learning with heterogeneous neural processes. *NeurIPS*, 2023. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In *ICLR*, 2014. Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. In *NeurIPS*, 2020. Ya Sun, Sijie Mai, and Haifeng Hu. Learning to learn better unimodal representations via adaptive multimodal meta-learning. *IEEE Transactions on Affective Computing*, 2023. Jihoon Tack, Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin, and Jonathan Richard Schwarz. Learning large-scale neural fields via context pruned meta-learning. In *NeurIPS*, 2023. Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler. Neural geometric level of detail: Real-time rendering with implicit 3d shapes. In *CVPR*, 2021. Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. In *NeurIPS*, 2020. Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned initializations for optimizing coordinate-based neural representations. In *CVPR*, 2021. Kyle R Ulrich, David E Carlson, Kafui Dzirasa, and Lawrence Carin. Gp kernels for cross-spectrum analysis. In *Advances in Neural Information Processing Systems*, 2015. 
Valentin Vielzeuf, Alexis Lechervy, Stephane Pateux, and Frederic Jurie. Centralnet: a multilayer approach for multimodal fusion. In *ECCV Workshops*, September 2018. Risto Vuorio, Shao-Hua Sun, Hexiang Hu, and Joseph J. Lim. Toward multimodal model-agnostic meta-learning. In *NeurIPS Workshop*, 2018. Risto Vuorio, Shao-Hua Sun, Hexiang Hu, and Joseph J. Lim. Multimodal model-agnostic meta-learning via task-aware modulation. In *NeurIPS*, 2019. Bin Wang, Jie Lu, Zheng Yan, Huaishao Luo, Tianrui Li, Yu Zheng, and Guangquan Zhang. Deep uncertainty quantification: A machine learning approach for weather forecasting. In *SIGKDD*, 2019. Fei Xia, Amir R. Zamir, Zhi-Yang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson env: real-world perception for embodied agents. In *CVPR*, 2018. Weihao Xia, Yujiu Yang, Jing-Hao Xue, and Baoyuan Wu. Tedigan: Text-guided diverse face image generation and manipulation. In *CVPR*, 2021. Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, and Jinwoo Shin. Generating videos with dynamics-aware implicit generative adversarial networks. In *ICLR*, 2022. Luisa M. Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Caml: Fast context adaptation via meta-learning. In *ICML*, 2018.

## Appendix

This section provides more results and details that could not be included in the main paper.

## A More Qualitative Results

![18_image_0.png](18_image_0.png)

Figure 10: Qualitative comparisons on 1D synthetic functions. The black and blue lines represent the ground-truth and approximated signals recovered by each method, respectively, while red crossmarks pinpoint the locations of the provided support points. While all the methods operate well on relatively smooth and low-curvature signals (*e.g.* Tanh and ReLU), the baselines either fail entirely in the Sine and Gaussian modalities (*i.e.* the unimodal approaches) or struggle to approximate them correctly (*i.e.* the multimodal baselines).
Unlike the baselines, our MIA fits almost perfectly to all functions, verifying its capability in fusing multiple sources of information to improve performance.

![19_image_0.png](19_image_0.png)

Figure 11: Qualitative comparisons on CelebA 2D visual modalities. Compared to the baselines, our MIA generalizes better in that it successfully discovers the sunglasses from the observed Normals and Sketches and transfers this knowledge to the RGBs for generalization (see the upper pane). Similarly, as shown in the lower pane, MIA captures the eyeglasses in the RGBs and Normals and then transfers this knowledge to recover the Edges more accurately.

![20_image_0.png](20_image_0.png)

Figure 12: Qualitative comparisons on CelebA 2D visual modalities. Compared to the baselines, our MIA can capture sophisticated nuances and render high-frequency details contained in each modality signal thanks to its inheritance of iterative optimization processes from the unimodal methods. In contrast, the multimodal encoder-based methods produce overly smooth predictions or suffer from underfitting problems, due to their direct data-to-parameter mapping without optimization steps.

![21_image_0.png](21_image_0.png)

Figure 13: Qualitative comparisons on ERA5 global climate data. The first row of each pane indicates the ground-truth signals and provided support points, while the rest represents the approximations and their absolute errors achieved by each method. Our MIA shows superior generalization capability compared to the existing baselines, demonstrating its versatility in applications for real-world climate estimation.

![22_image_0.png](22_image_0.png)

Figure 14: Qualitative comparisons on ERA5 global climate data. The first row of each pane indicates the ground-truth signals and provided support points, while the rest represents the approximations and their absolute errors achieved by each method.
Our MIA shows superior convergence speed than the existing baselines, which struggle to approximate the climate data precisely and show high errors even when abundant observations are available. ![23_image_0.png](23_image_0.png) audio signals when little data is available from images (left two columns), while enjoying superior fitting performances when sufficient data is available (rightmost column). ## B Details Of The Inr Frameworks In this work, we adopt the meta-learning frameworks for INRs proposed in Functa (Dupont et al., 2022a; Bauer et al., 2023) and Composer (Kim et al., 2023). They builds upon CAVIA (Zintgraf et al., 2018), where two separate parameters are meta-learned: the INR parameters θ and the context parameters ϕ ∈ R S×Dϕ . In particular, the parameters θ of INRs are meta-learned to capture underlying structure of data or dataagnostic shared information across signals. In addition, the context parameters ϕ ∈ R S×Dϕ , a set of Dϕdimensional features, are adapted to each signal and encode per-signal variations or characteristics, which are then utilized to condition the parameters of INRs to recover the signals they are modeling. Besides the shared concepts, they differ in how they construct and utilize the context parameters to condition the INR parameters, which we detail below. Functa. In Functa, the context parameters ϕn ∈ R S×Dϕ for each signal are constructed as a spatiallyarranged grid of Dϕ-dimensional features, where each of them encodes local variations in a signal. This grid of features is then utilized to modulate the activations of each INR layer. To do so, the set of features is first processed by an additional linear convolutional module as ψn = fconv(ϕn; θconv), followed by bilinear/trilinear interpolation to compute a layer-wise affine transformation parameters ψn(x) = Interp(x; ψn) that scale and/or shift the activations of each layer in INRs given an input coordinate x. 
In the original paper, the authors of Functa opt for a shift-only modulation scheme since it shows better rate-distortion (compression vs. performance) trade-offs than using both. In contrast, we adopt a scale-only modulation scheme since it performed slightly better in our earlier experiments. Also, we do not modulate the activations of the first and last layers, since we empirically found that this stabilizes training without hurting performance.

Composers. Unlike Functa, which adopts a local grid of features and a FiLM-like modulation scheme (Perez et al., 2018) applied to multiple layers of the INR, Composers introduce a set of non-local features $V_n \in \mathbb{R}^{S \times D_\phi}$ that are used as a low-rank approximation $W_n = U V_n$ of the parameters $W_n$ of a single layer in the INR. Here, $U \in \mathbb{R}^{S \times D_\phi}$ is another low-rank factor that is part of the INR parameters $\theta$ and captures global patterns shared across various signals, which are composed and modulated in a data-specific manner via $V_n$. Following the original work (Kim et al., 2023), we approximate the second layer of the INR using $U$ and $V_n$. Throughout all experiments, we use a 5-layer MLP with 128 hidden dimensions and ReLU non-linearities in between as the base INR architecture for both Functa and Composers. We also use random Fourier features (Tancik et al., 2020) with $\sigma = 30$ to encode positional information, except for the experiments on the 1D synthetic dataset.

## C Details On State Fusion Transformers (Sfts)

We construct USFTs and MSFTs using the transformer block of ViT (Dosovitskiy et al., 2021), where each is parameterized by one transformer block (i.e., $L_1 = L_2 = 1$) with a width of 192, an MLP dimension of 192, and 3 attention heads. In addition, we set the dimension of the state representations $z^{(k)}_{nm} \in \mathbb{R}^{S_m \times D_z}$ to $D_z = 192$ for both USFTs and MSFTs in all experiments.
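The core of such a block is multi-head self-attention over the state tokens; in the MSFT, attention runs over the concatenated tokens of all modalities, which is what lets the learners exchange information. A minimal numpy sketch with the stated width and head count (a toy single attention layer; the real ViT block also has LayerNorm, an MLP, and residual connections, and all weights here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
M, S, width, heads = 3, 4, 192, 3   # modalities, tokens per modality, width, heads
head_dim = width // heads           # 64

tokens = rng.normal(size=(M * S, width))  # concatenated state representations
Wq, Wk, Wv, Wo = (rng.normal(size=(width, width)) * width ** -0.5
                  for _ in range(4))

def attention(x):
    """One multi-head self-attention layer over all M*S tokens."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    outs = []
    for h in range(heads):
        sl = slice(h * head_dim, (h + 1) * head_dim)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(head_dim)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
        outs.append(w @ v[:, sl])           # per-head weighted values
    return np.concatenate(outs, axis=-1) @ Wo

out = attention(tokens)  # fused states, same (M*S, width) shape
```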
To compute the state representations $z^{(k)}_{nm}$ for each modality, we first scale the gradients by Eq. (12), and then apply LayerNorm (Ba et al., 2016) to the parameters $\phi_{nm} \in \mathbb{R}^{S_m \times D_\phi}$ and the scaled gradients $g_{nm} \in \mathbb{R}^{S_m \times D_\phi}$ separately to stabilize the training. Then, we concatenate them along the feature dimension (i.e., $[\phi_{nm} \,\|\, g_{nm}] \in \mathbb{R}^{S_m \times 2D_\phi}$) and project them to the hidden space of the USFTs, obtaining the state representations $z^{(k)}_{nm}$ via a two-layer MLP with a hidden dimension of 192 and ReLU activations, which is dedicated to each modality. In addition, we add positional embeddings to the state representations before feeding them into the USFTs, using sinusoidal positional encodings (Mildenhall et al., 2020) for Functa and learnable embeddings for Composers. Finally, we parameterize the Fusion MLPs with a 3-layer MLP with a hidden dimension of 192 and ReLU non-linear activations, and we embed LayerNorm into the first and penultimate layers of the Fusion MLPs to stabilize the training. We also attach PyTorch-style pseudo-code for MIA in Listing 1 and for the SFTs in Listing 2.

```python
def inner_optimization_step(inr_model_dict, sft_model, modes,
                            ctx_params_dict, x_spt_dict, y_spt_dict):
    '''Code for Equations (6) and (7).
    Arguments:
        inr_model_dict: A dictionary of INRs for each modality.
        sft_model: State Fusion Transformers.
        modes: A list of modalities (e.g. ['rgb', 'normal', 'sketch']).
        ctx_params_dict: A dictionary of the context parameters of INRs for each modality.
        x_spt_dict: A dictionary of the provided support coordinates for each modality.
        y_spt_dict: A dictionary of the provided support features for each modality.
    '''
    grad_dict = dict()
    for mode in modes:
        # for each modality, obtain predicted features given support coordinates.
        y_pred = inr_model_dict[mode](x_spt_dict[mode], ctx_params_dict[mode])
        loss = mse_loss(y_pred, y_spt_dict[mode])
        # compute gradients w.r.t. the context parameters.
        grad_dict[mode] = torch.autograd.grad(
            loss,
            ctx_params_dict[mode],
            create_graph=True  # Set to `True` during meta-training, otherwise `False`.
        )[0]
    # Use SFTs to
    # 1. consider cross-modal interactions among the learners
    # 2. and enhance their weight updates.
    # Please refer to Listing 2 for the details.
    grad_dict = fuse_states(sft_model, modes, grad_dict, ctx_params_dict)
    for mode in modes:
        # update context parameters using the enhanced gradients for each modality.
        ctx_params_dict[mode] = ctx_params_dict[mode] - grad_dict[mode]
    return ctx_params_dict
```

Listing 1: PyTorch-style pseudo-code for the inner optimization step via MIA.

```python
def fuse_states(sft_model, modes, grad_dict, ctx_params_dict):
    '''Code for Equations (8), (9), (10), and (12).
    Arguments:
        sft_model: State Fusion Transformers.
        modes: A list of modalities.
        grad_dict: A dictionary of the gradients w.r.t. the context parameters for each modality.
        ctx_params_dict: A dictionary of the context parameters of INRs for each modality.
    '''
    ori_state_dict = dict()
    uni_state_dict = dict()
    multi_state_dict = dict()
    states = []
    for mode in modes:
        # scale gradients from each modality with the meta-learned scaling constant.
        grad_dict[mode] *= sft_model.grad_scaling[mode]
        # project gradients and context params to the hidden space
        ori_state_dict[mode] = sft_model.proj_mlp[mode](grad_dict[mode],
                                                        ctx_params_dict[mode])
        # add positional embedding and apply USFT for each modality
        state = ori_state_dict[mode] + sft_model.pos_emb[mode]
        uni_state_dict[mode] = sft_model.usft[mode](state)
        states.append(uni_state_dict[mode])

    # concatenate states for all modalities and apply MSFT
    states = torch.cat(states, dim=1)
    states = sft_model.msft(states)
    for mode in modes:
        # calculate enhanced gradients by applying FusionMLP for each modality
        multi_state_dict[mode] = split(states, mode)
        state = torch.cat([ori_state_dict[mode],
                           uni_state_dict[mode],
                           multi_state_dict[mode]], dim=-1)
        grad_dict[mode] = sft_model.fusion_mlp[mode](state)
    return grad_dict
```

Listing 2: PyTorch-style pseudo-code for the State Fusion Transformers (SFTs).

Table 4: List of common configurations for each dataset.

| Hyperparameters | Synthetic | CelebA | ERA5 | AVMNIST |
|---|---|---|---|---|
| modalities | Sine, Gaussian, Tanh, ReLU ($M = 4$) | RGB, Normal, Sketch ($M = 3$) | Temperature, Pressure, Humidity ($M = 3$) | Images, Audios ($M = 2$) |
| number of signals ($N$), train / test | 900 / 100 | 27143 / 2821 | 11327 / 2328 | 60000 / 10000 |
| batch size | 64 | 32 | 16 | 32 |
| epochs | 16,000 | 300 | 300 | 300 |
| resolution | (200, 1) | RGB (32, 32, 3), Normal (32, 32, 3), Sketch (32, 32, 1) | (46, 90, 1) | Images (28, 28, 1), Audios (2000, 1) |
| number of original function samples ($P$) | 200 | RGB 1024, Normal 1024, Sketch 1024 | 4140 | Images 784, Audios 2000 |
| sampling ratio $[R^{\min}, R^{\max}]$ | [0.01, 0.1] | [0.001, 1] | [0.001, 1] | Images [0.001, 1], Audios [0.250, 1] |
| outer learning rate | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ |
| momentum $(\beta_1, \beta_2)$ for Adam | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) |
| total inner steps $K$ for optimization-based methods | 3 | 3 | 3 | 3 |
| scale for uncertainty lr | 1 | 1 | 0.1 | 10 |
| width/depth of INRs | 128/5 | 128/5 | 128/5 | 128/5 |
| dimension of context parameters $\phi$ (Functa) | (8, 16) | (8, 8, 16) | (8, 16, 16) | Images (8, 8, 4), Audios (64, 32) |
| dimension of context parameters $\phi$ (Composer) | (8, 128) | (64, 128) | (128, 128) | Images (64, 128), Audios (64, 128) |
| $\sigma$ for Fourier features | None | 30.0 | 30.0 | 30.0 |
| bootstrapping factor for evaluation | 10 | 4 | 4 | 4 |

## D More Experimental Details

## D.1 Common Details

In all experiments, we interpret data as a set of coordinate-feature pairs and normalize the values of both coordinates and features. Specifically, coordinates are normalized to the range $[-1, 1]^d$, where $d$ varies with the dimensionality of the dataset. For features, normalization varies by data type: values are normalized to $[0, 1]$ for image and climate data, and to $[-1, 1]$ for audio signals. Normalization is performed by computing the minimum and maximum values across all data points for each dataset, which are then used to scale each data point accordingly. In the case of the 1D synthetic functions, we do not normalize the features; instead, we use the normalized MSE for evaluation.

We subsample coordinate-feature pairs from the full set during the meta-training phase to promote generalization as well as memorization. For each signal $\mathcal{D}_{nm}$, the sampling ratio $R_{nm}$ is drawn independently and identically from a uniform distribution $\mathcal{U}(R^{\min}_m, R^{\max}_m)$ for each modality $m$ to construct the support sets $\mathcal{D}^{\text{train}}_{n}$. For evaluation, we employ a bootstrapping technique on the meta-test dataset: we iteratively sample the support and query sets multiple times for each data point to mitigate potential variance arising from the sampling process. For the 1D synthetic experiments, we apply a bootstrapping factor of 10, while for the other scenarios, the factor is set to 4. In all experiments, we use the Adam optimizer (Kingma & Ba, 2014) for meta-optimization, with a learning rate of $10^{-4}$ and momentum parameters $(\beta_1, \beta_2) = (0.9, 0.999)$. We also apply the uncertainty-aware loss weighting technique (Kendall et al., 2018) for all multimodal methods. Please refer to Table 4 for the full list of common configurations for each dataset.

**Algorithm 2** Meta-Learning Algorithm for Optimization-based Baselines.

Require: Batch size $B$; inner steps $K$; outer learning rate $\lambda_\Theta$.
1: Initialize $\Theta$.
2: repeat
3: Sample a batch of $B$ joint multimodal signals $\{\{\mathcal{D}_{nm}\}_{m=1}^{M}\}_{n=1}^{B}$.
4: Sample $R_{nm} \sim \mathcal{U}(R^{\min}_m, R^{\max}_m)$ for all $n, m$.
5: Split $\mathcal{D}_{nm}$ into $\mathcal{D}^{\text{train}}_{nm}$, $\mathcal{D}^{\text{val}}_{nm}$, where $|\mathcal{D}^{\text{train}}_{nm}| = P_{nm} \times R_{nm}$ and $|\mathcal{D}^{\text{val}}_{nm}| = P_{nm}$.
6: for $n = 1, \ldots, B$ do
7: for $k = 0, \ldots, K - 1$ do
8: for $m = 1, \ldots, M$ do
9: $g^{(k)}_{nm} = \nabla_{\phi^{(k)}_{nm}} \mathcal{L}(\mathcal{D}^{\text{train}}_{nm}; \theta_m, \phi^{(k)}_{nm})$
10: $\phi^{(k+1)}_{nm} \leftarrow \text{update}(\phi^{(k)}_{nm}, g^{(k)}_{nm})$ ▷ Inner optimization
11: end for
12: end for
13: end for
14: $\mathcal{L}_{\text{outer}} = \frac{1}{B}\sum_{n=1}^{B}\sum_{m=1}^{M}\mathcal{L}(\mathcal{D}^{\text{val}}_{nm}; \theta_m, \phi^{(K)}_{nm})$.
15: $\Theta \leftarrow \Theta - \lambda_\Theta \nabla_\Theta \mathcal{L}_{\text{outer}}$. ▷ Outer optimization
16: until converged

## D.2 Baselines

In this section, we provide further explanation of the baselines and their implementation details. To facilitate a better comparison with MIA, we include pseudo-code for the optimization-based meta-learning baselines (CAVIA, MetaSGD, GAP, ALFA) in Algorithm 2. In this context, the meta-learnable parameters $\Theta$ and the $\text{update}(\cdot)$ function differ according to each baseline's optimization procedure.

CAVIA. We use a global fixed learning rate of $\alpha = 1.0$ to adapt the context parameters of the CAVIA-like methods (Functa and Composers) in the inner loop for all experiments. Here, $\Theta = \{\theta, \phi\}$, and the update rule is defined as $\text{update}(\phi^{(k)}_{nm}, g^{(k)}_{nm}) = \phi^{(k)}_{nm} - \alpha_m g^{(k)}_{nm}$.

MetaSGD. Li et al. (2017) propose to use a meta-learned per-parameter learning rate $\alpha \in \mathbb{R}^{S \times D_\phi}$ in the inner-loop adaptation phase, which is optimized along with the meta-learner in the outer-loop optimization phase.
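The difference between the CAVIA-style and MetaSGD-style inner updates can be illustrated on a toy objective (all values below are hypothetical and chosen only to make the two rules visible; this is not the actual training code):

```python
import numpy as np

# toy inner-loop objective: L(phi) = 0.5 * ||phi - t||^2, so grad(phi) = phi - t
t = np.array([1.0, -2.0])

def grad(phi):
    return phi - t

phi0 = np.zeros(2)

# CAVIA-style update: a single global, fixed learning rate alpha = 1.0
phi_cavia = phi0 - 1.0 * grad(phi0)

# MetaSGD-style update: meta-learned per-parameter learning rates,
# applied elementwise (Hadamard product) to the gradient
alpha = np.array([0.5, 1.0])
phi_metasgd = phi0 - alpha * grad(phi0)
```

With the global rate, both coordinates take the same step; with per-parameter rates, each coordinate of the context parameters moves at its own meta-learned speed.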
We apply this technique to adapt the context parameters of the CAVIA-based frameworks, initializing the learning rates to 1.0 in all experiments. In this setup, $\Theta = \{\theta, \phi, \alpha\}$, and the update rule is defined as $\text{update}(\phi^{(k)}_{nm}, g^{(k)}_{nm}) = \phi^{(k)}_{nm} - \alpha_m \circ g^{(k)}_{nm}$.

GAP. Kang et al. (2023) propose to accelerate the optimization process via a Geometry-Adaptive Preconditioner (GAP), which preconditions the gradients $g^{(k)}$ at inner step $k$ by manipulating their singular values with meta-learned parameters $\mathbf{M}$. The procedure can be written as:

$$\tilde{g}^{(k)}=\mathbf{U}^{(k)}(\mathbf{M}\cdot\mathbf{\Sigma}^{(k)})\mathbf{V}^{(k)\top},\quad\text{where } g^{(k)}=\mathbf{U}^{(k)}\mathbf{\Sigma}^{(k)}\mathbf{V}^{(k)\top}. \tag{14}$$

In the original paper (Kang et al., 2023), a gradient matrix unfolding technique is introduced to facilitate SVD on the gradients of convolutional weights. In contrast to the original setup, the shape of our context parameters and their associated gradient matrices is $\mathbb{R}^{S \times D_\phi}$. Therefore, we do not use this unfolding technique and define the meta-parameters as $\mathbf{M} = \mathrm{diag}(\mathrm{Sp}(M_1), \ldots, \mathrm{Sp}(M_{\min(S, D_\phi)}))$. In addition, we experiment with *Approximate GAP* as well and report the best performance achieved by the two methods. This approximate version bypasses the need for SVD when calculating the preconditioned gradients via $\tilde{g}^{(k)} \simeq \mathbf{M} g^{(k)}$. Here, $\Theta = \{\theta, \phi, \mathbf{M}\}$, and the update rule is defined as $\text{update}(\phi^{(k)}_{nm}, g^{(k)}_{nm}) = \phi^{(k)}_{nm} - \tilde{g}^{(k)}_{nm}$.

ALFA. ALFA meta-learns the weight update procedure along with the weights to facilitate learning. Unlike GAP, ALFA introduces an additional meta-learned neural network $h(\phi^{(k)}, g^{(k)}; \xi)$ that dynamically predicts learning rates $\alpha^{(k)}$ and weight-decay terms $\beta^{(k)}$ in a data-specific manner for each inner step $k$, given the current parameters $\phi^{(k)}$ of the learners and their gradients $g^{(k)}$.
The resulting weight update rule can be described as follows:

$$\phi^{(k+1)}=\beta^{(k)}\cdot\phi^{(k)}-\alpha^{(k)}\cdot g^{(k)},\quad\text{where } g^{(k)}=\nabla_{\phi^{(k)}}\mathcal{L}^{(k)}. \tag{15}$$

To construct the meta-learned network $h$, we follow the original setup (Baik et al., 2023) and parameterize it with a 2-layer MLP with ReLU activations. Also, we reduce the context parameters and gradients by averaging them along the feature dimension (i.e., $g, \phi \in \mathbb{R}^{S \times D_\phi} \rightarrow \bar{g}, \bar{\phi} \in \mathbb{R}^{S}$) before feeding them into the meta-learned network $h$. Finally, we augment the predicted learning rates and decay terms $\alpha^{(k)}, \beta^{(k)} \in \mathbb{R}^{S}$ with additional meta-learned weights $\alpha_0, \beta_0 \in \mathbb{R}^{S}$, as suggested in the paper, and set the initial values of $\alpha_0, \beta_0$ to 1. In summary, $\Theta = \{\theta, \phi, \xi\}$, and the update rule is defined as $\text{update}(\phi^{(k)}_{nm}, g^{(k)}_{nm}) = \beta^{(k)}_{nm} \cdot \phi^{(k)}_{nm} - \alpha^{(k)}_{nm} \cdot g^{(k)}_{nm}$, where $\alpha^{(k)}_{nm}, \beta^{(k)}_{nm} = h(\phi^{(k)}_{nm}, g^{(k)}_{nm}; \xi_m)$.

MTNPs. Multitask Neural Processes (MTNPs) (Kim et al., 2022a) are another class of meta-learning approaches for INRs based on Neural Processes (Garnelo et al., 2018), aimed at modeling multimodal signal distributions, similar to ours. This method replaces iterative optimization steps with feed-forward encoder networks that directly predict the parameters of INRs from observed signals. For this, MTNPs adopt a dual-stream, hierarchical fusion approach to capture cross-modal relationships among signals. The first stream, driven by a latent encoder, is tasked with capturing the uncertainty in recovering the entire function from partial observations and with inferring global latent variables shared by the observed data points for each signal in a modality ($\mathcal{D}_{nm}$). The second stream is guided by a deterministic encoder and is responsible for extracting local per-coordinate representations specific to each data point $(x_{nm}, y_{nm})$ belonging to a signal in a modality.
In addition, to improve the expressive power of the model, each stream is composed of a stack of specialized hierarchical multi-head attention blocks, where the earlier part captures the dependencies among data points within a modality and the latter discovers their potential cross-modal relationships. For instance, in the first (latent) stream, cross-modal relationships among the global latent variables of each modality are captured. Similarly, multimodal dependencies among local representations belonging to the same coordinate are considered in the second (deterministic) stream. We refer interested readers to Section 3 and Figure 2 in the original paper of MTNPs (Kim et al., 2022a). To evaluate MTNPs in our setup, we apply two modifications to adapt the method to our experiments: (1) we change the output dimension of the latent encoder so that it directly predicts the context parameters $\phi_{nm} \in \mathbb{R}^{S_{nm} \times D_\phi}$ that condition the parameters of the INRs; (2) for the experiments on AVMNIST, we omit the module in the deterministic stream that captures cross-modal interactions among the axis-aligned local representations, since there is no one-to-one correspondence between the image and audio coordinates. We use the official implementation¹ by the authors to construct MTNPs.

HNPs. Heterogeneous Neural Processes (HNPs) (Shen et al., 2023) target general multimodal meta-learning setups and are applicable to both multimodal signal regression and multimodal classification scenarios. Similar to MTNPs, HNPs have modality-specific and modality-agnostic inference modules, where the former processes per-modality observations and produces their representations, while the latter fuses those representations to capture cross-modal relationships.
Unlike MTNPs, however, HNPs introduce additional meta-learned modality-specific and modality-agnostic latent priors, $\omega$ and $\nu$, which are fed into the modality-specific and modality-agnostic inference modules, respectively, to induce the corresponding latent variables $z$ and $w$. Finally, the per-modality decoder takes $w$ to approximate the target function. In our problem setup, we slightly modify HNPs and formulate $w$ to model the parameters of the INR learners $\phi$, and use Functa and Composer for the decoder framework. We use the official implementation² provided by the authors to construct HNPs.

¹https://github.com/GitGyun/multi_task_neural_processes

## E Additional Analysis

## E.1 Correlation Between Attention Pattern And Observation Quality

Table 5: Pearson correlation coefficients ($C_{ij}$) between the attention scores of MSFTs assigned to the learner's state representations for modality $i$ and the support set size for modality $j$.

| $C$ | Functa: Sine | Gaussian | Tanh | ReLU | Composers: Sine | Gaussian | Tanh | ReLU |
|---|---|---|---|---|---|---|---|---|
| Sine | 0.561 | −0.393 | −0.434 | −0.357 | 0.620 | −0.520 | −0.455 | −0.523 |
| Gaussian | −0.089 | 0.290 | −0.074 | −0.020 | −0.406 | 0.473 | −0.262 | −0.222 |
| Tanh | −0.163 | −0.068 | 0.319 | −0.036 | −0.493 | −0.439 | 0.597 | −0.421 |
| ReLU | −0.144 | −0.082 | −0.165 | 0.270 | −0.265 | −0.226 | −0.117 | 0.353 |

(a) Coefficients on multimodal 1D synthetic functions.

| $C$ | Functa: RGBs | Normals | Sketches | Composers: RGBs | Normals | Sketches |
|---|---|---|---|---|---|---|
| RGBs | 0.779 | −0.645 | −0.579 | 0.826 | −0.837 | −0.671 |
| Normals | −0.794 | 0.843 | −0.381 | −0.826 | 0.838 | −0.529 |
| Sketches | −0.923 | −0.891 | 0.973 | −0.856 | −0.829 | 0.933 |

(b) Coefficients on multimodal 2D CelebA images.

Figure 8a and Table 5 present when and how SFTs facilitate the interaction among the learners, quantified by the Pearson correlation coefficient $C_{ij}$. This coefficient measures the correlation between the attention scores of MSFTs assigned to a learner's state representations for modality $i$ and the support set size for modality $j$. Here, we describe the detailed methodology used to calculate this correlation. Given a joint multimodal signal $\mathcal{D}_n = \{\mathcal{D}_{nm}\}_{m=1}^{M}$, the $K$-step adaptation of the context parameters $\{\phi_{nm}\}_{m=1}^{M} \in \mathbb{R}^{S \times D_\phi}$ with MSFTs of $H$-head attention produces an attention score map $A_n$ of shape $K \times H \times MS \times MS$. We reshape this attention score map to $(K \times H \times S \times S) \times M \times M$, followed by reducing the leading dimensions to obtain an average attention score matrix $\bar{A}_n$ of shape $M \times M$. This matrix quantifies the average interactions among the learners, where the $(i, j)$ element amounts to the average attention score directed from the learner of modality $i$ towards the learner of modality $j$. Finally, we compute the Pearson correlation coefficient $C$ between these average attention score maps and the sizes of the support sets $\{|\mathcal{D}_{nm}|\}_m$ across $N$ sets of joint multimodal signals, where $C_{ij}$ amounts to the correlation between $\{\bar{A}_n(i, j)\}_n$ and $\{|\mathcal{D}_{nj}|\}_n$.

## E.2 Evolving Attention Maps Through Optimization Steps

We also investigate the evolving interactions among the learners through the optimization steps $k = 1, \cdots, K$, as presented in Figure 17. Here, instead of computing the Pearson correlation coefficient between attention maps and support set sizes as above, we quantify the degree of interaction among the learners solely by analyzing how the attention patterns within MSFTs evolve over the optimization steps.
This procedure involves the following steps: (1) We first reshape the attention score map An for a signal, defined in Section E.1, to R^((H×S×S)×K×M×M). (2) Then, we average this map over the first dimension to obtain A¯n ∈ R^(K×M×M). (3) Finally, this map is aggregated further for all signals and then averaged to A¯ = (1/N) Σn A¯n = {A¯k} for k = 1, . . . , K. We visualize and analyze how this A¯k evolves with k.

2https://github.com/autumn9999/HNPs

Figure 17: Evolving interactions at each optimization step k = 1, 2, 3. Each (i, j) element in the matrices indicates the average attention score assigned by the learner of modality i to the learner of modality j. Please refer to Appendix E.2 for the detailed method for computing these matrices. The patterns show that learners tend to interact extensively with each other in the beginning (high off-diagonal attention scores in Step 1) and gradually attend more to themselves towards the end (high diagonal attention scores in Step 3). This highlights MIA's adaptability in ensuring a balanced utilization of multimodal interactions, emphasizing the necessity of applying multimodal adaptation iteratively for each optimization step.

Note that this final attention map contains values in the range [0, 1], where higher values indicate significant cross-modal information exchange between distinct modality learners at each step, while a value of zero indicates no interaction. The results in the figure reveal that the interactions among the learners mostly occur at the beginning of the optimization (high off-diagonal attention scores at step 1). Then, we find that the learners gradually focus more on their own states towards the end (high diagonal attention scores at steps 2 and 3). This analysis shows that SFTs ensure a balanced utilization of multimodal interactions.
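Steps (1)-(3) above, together with the signal-level correlation of Section E.1, can be sketched in NumPy as follows; the shapes (K, H, S, M, N) and the randomly generated attention maps and support sizes are hypothetical placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, S, M, N = 3, 4, 8, 4, 50  # steps, heads, context size, modalities, signals

def blockwise(A_n):
    # Split the (M*S, M*S) attention axes of A_n (K, H, M*S, M*S) into modality blocks
    return A_n.reshape(K, H, M, S, M, S)

def per_step_attention(A_n):
    # Steps (1)-(2): reorder to ((H x S x S), K, M, M) and average the leading dims
    A = blockwise(A_n).transpose(1, 3, 5, 0, 2, 4)  # (H, S, S, K, M, M)
    return A.reshape(-1, K, M, M).mean(axis=0)      # (K, M, M)

signals = [rng.random((K, H, M * S, M * S)) for _ in range(N)]

# Step (3): aggregate over all signals -> one M x M interaction map per step k
A_bar = np.mean([per_step_attention(A_n) for A_n in signals], axis=0)  # (K, M, M)

# Section E.1 variant: correlate per-signal average maps with support set sizes
maps = np.stack([blockwise(A_n).transpose(0, 1, 3, 5, 2, 4).reshape(-1, M, M).mean(axis=0)
                 for A_n in signals])                   # (N, M, M)
sizes = rng.integers(1, 50, size=(N, M)).astype(float)  # placeholder |D_nm|
C = np.array([[np.corrcoef(maps[:, i, j], sizes[:, j])[0, 1] for j in range(M)]
              for i in range(M)])
```

With real attention maps, the diagonal of `C` would be expected to correlate positively with the own-modality support size, mirroring Table 5.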
Figure 18: Learning trajectory of the RGBs context parameters for the 3-step adaptation process with or without MIA, depending on the support set size of the target (RGBs) and the sources (Normals and Sketches). For each plot, the green dotted line represents the trajectory of CAVIA adapted with gradient descent (GD). The solid lines represent the trajectories adapted with MIA, where the different colors reflect the support set sizes of the source modalities (Normals and Sketches). The scalar values on the left side of the legend refer to the support set ratio of the source modalities, and those on the right side indicate the achieved reconstruction performance (MSEs).

## E.3 Analysis Study On Learning Trajectories Via MIA

We investigate how MIA adjusts the optimization trajectories of the context parameters during the adaptation process compared to CAVIA. To do this, we freeze the trained weights of CAVIA and integrate it with our SFTs; we then meta-learn only our SFTs. This ensures that the trajectory space of the context parameters of both CAVIA and our method is aligned, thereby enabling a comparison of trajectories between the two methods. After that, we visualize the learning trajectories of both methods by applying Principal Component Analysis (PCA) to the obtained context parameters, as described in Li et al. (2018). Figure 18 illustrates that as the amount of observations from additional modalities increases, the learning trajectory with MIA shifts towards more effective solutions (lower MSE loss), moving away from poor solutions. This improvement is attributed to the SFTs' ability to enhance the sub-optimal gradients derived from limited observations by utilizing information from other modalities' learners. Conversely, when adapting through gradient descent (GD), the learning trajectory of CAVIA remains constant regardless of observations in other modalities due to the absence of cross-modal interaction.
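A minimal sketch of the PCA-based trajectory projection described above; the trajectories here are synthetic stand-ins for the actual context-parameter iterates, and the PCA is implemented directly via SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
steps, dim = 4, 256  # a K-step trajectory of high-dimensional context parameters
traj_gd = np.cumsum(rng.normal(size=(steps, dim)), axis=0)   # stand-in for plain GD (CAVIA)
traj_mia = np.cumsum(rng.normal(size=(steps, dim)), axis=0)  # stand-in for MIA updates

# Fit PCA (via SVD) on the union of all visited parameters, then project each trajectory
X = np.vstack([traj_gd, traj_mia])
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def project(traj):
    # 2-D PCA coordinates of each optimization step
    return (traj - mean) @ Vt[:2].T

xy_gd, xy_mia = project(traj_gd), project(traj_mia)
```

Plotting `xy_gd` and `xy_mia` as connected points then yields trajectory plots in the style of Figure 18.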
Table 6: Results on the CelebA dataset in MSEs (×10−3) and averaged unimodal attention scores ((1/N) Σn A¯n(m, m)) of MSFTs when injecting noise into the gradients of other modalities' learners with varying noise level γ.

| | Functa | | | Composers | | |
| γ | RGBs | Normals | Sketches | RGBs | Normals | Sketches |
| 0.00000 | 0.948 / 0.502 | 0.769 / 0.579 | 0.578 / 0.710 | 0.667 / 0.635 | 0.289 / 0.690 | 0.255 / 0.637 |
| 0.00001 | 0.976 / 0.525 | 0.774 / 0.644 | 0.581 / 0.720 | 0.664 / 0.630 | 0.288 / 0.696 | 0.255 / 0.643 |
| 0.00005 | 0.982 / 0.527 | 0.774 / 0.651 | 0.582 / 0.719 | 0.664 / 0.631 | 0.288 / 0.709 | 0.253 / 0.651 |
| 0.00010 | 0.987 / 0.527 | 0.775 / 0.653 | 0.582 / 0.718 | 0.673 / 0.656 | 0.290 / 0.727 | 0.253 / 0.658 |
| 0.00050 | 1.056 / 0.539 | 0.786 / 0.664 | 0.583 / 0.726 | 0.708 / 0.720 | 0.293 / 0.748 | 0.256 / 0.681 |
| 0.00100 | 1.092 / 0.549 | 0.796 / 0.677 | 0.584 / 0.741 | 0.714 / 0.724 | 0.293 / 0.752 | 0.256 / 0.687 |
| 0.00500 | 1.122 / 0.560 | 0.819 / 0.705 | 0.587 / 0.772 | 0.718 / 0.727 | 0.294 / 0.755 | 0.256 / 0.690 |
| 0.01000 | 1.124 / 0.561 | 0.823 / 0.709 | 0.588 / 0.775 | 0.717 / 0.727 | 0.294 / 0.755 | 0.256 / 0.690 |
| 0.10000 | 1.126 / 0.562 | 0.826 / 0.712 | 0.589 / 0.777 | 0.718 / 0.727 | 0.294 / 0.755 | 0.256 / 0.691 |
| 1.00000 | 1.127 / 0.562 | 0.826 / 0.713 | 0.589 / 0.778 | 0.718 / 0.727 | 0.294 / 0.755 | 0.256 / 0.691 |

## E.4 Analysis Study On Negative Transfer From Other Modalities

In this study, we explore the impact of noisy state information from a source modality m′ on a target modality m. Our evaluation focuses on the robustness of SFTs against the potential negative transfer of incorrect information from source modalities to the target. To assess this, we inject varying levels of Gaussian noise ϵnm′ ∼ N (0, γI) into the gradients g(k)nm′ of the source modalities before they are fed into the SFTs.
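The noise-injection step described above might look like the following sketch; the modality names, gradient shapes, and dictionary layout are hypothetical placeholders:

```python
import numpy as np

def perturb_source_gradients(grads, target, gamma, rng):
    # Add eps ~ N(0, gamma * I) to each source modality's gradient,
    # leaving the target modality's gradient untouched.
    return {
        m: g if m == target else g + rng.normal(0.0, np.sqrt(gamma), size=g.shape)
        for m, g in grads.items()
    }

rng = np.random.default_rng(0)
grads = {m: rng.normal(size=(128,)) for m in ("rgb", "normal", "sketch")}
noisy = perturb_source_gradients(grads, target="rgb", gamma=1e-3, rng=rng)
```

The perturbed gradients would then be fed to the SFTs in place of the clean source gradients.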
As the noise level γ increases, the relevance of the state information from the sources to the target learner diminishes. To quantify the effect of the noise, we measure the average MSEs and the average attention scores ((1/N) Σn A¯n(m, m)) within the target learners' states, while varying the noise level from 10−5 to 1.0. As shown in Table 6, while there is a marginal rise in MSEs upon the introduction of noise, no additional performance degradation is observed beyond a certain noise level. Interestingly, as the gradient noise from learners of other modalities increases, there is a concurrent increase in unimodal attention scores for the learner of the target modality. This indicates that SFTs inherently have the ability to handle potential negative transfers across modalities.

## E.5 More In-Depth Ablation Study For Components Of SFTs

| | USFTs | MSFTs | FusionMLPs | Generalization | Memorization |
| (1) | ✗ | ✗ | ✗ | 0.00 | 0.00 |
| (2) | ✗ | ✗ | ✓ | 38.9 | 54.2 |
| (3) | ✓ | ✗ | ✗ | 44.4 | 63.6 |
| (4) | ✓ | ✗ | ✓ | 43.1 | 68.0 |
| (5) | ✗ | ✓ | ✗ | 86.8 | 71.6 |
| (6) | ✗ | ✓ | ✓ | 86.8 | 76.7 |
| (7) | ✓ | ✓ | ✗ | 86.7 | 75.5 |
| (8) | ✓ | ✓ | ✓ | 88.7 | 81.6 |

Table 7: Ablation study on the components in SFTs with respect to generalization and memorization ability. We report the relative performance improvement (↑) achieved by the ablated methods (2-8) over vanilla Composers (1) on the multimodal 1D synthetic function dataset.

This section provides more in-depth ablation study results that could not be included in Table 2a. We first present the ablation study on the synthetic dataset that encompasses all possible component combinations to investigate how each of them contributes to performance. The results are shown in Table 7.
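The relative performance improvements reported in the ablation tables can be reproduced with a small helper; note that the exact normalization (relative MSE reduction over the vanilla baseline) is our assumption, and the values below are placeholders:

```python
def relative_improvement(base_mse, method_mse):
    # Percentage improvement of an ablated method over the vanilla baseline
    # (lower MSE is better, so improvement = relative error reduction).
    return 100.0 * (base_mse - method_mse) / base_mse

vanilla_mse, ablated_mse = 5.284, 1.017  # placeholder values
gain = relative_improvement(vanilla_mse, ablated_mse)
```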
The table shows the substantial impact of FusionMLPs in enhancing the weight updates of vanilla Composers when used independently (1 vs 2) or in combination with USFTs, MSFTs, or both (3 vs 4, 5 vs 6, 7 vs 8). This demonstrates the general versatility of FusionMLPs in enhancing gradient directions or magnitudes. Moreover, we observe that incorporating USFTs significantly enhances memorization capabilities (1-2 vs 3-4). This emphasizes the advantageous role of USFTs in capturing modality-specific patterns within the learners' states. In contrast, MSFTs excel in leveraging cross-modal interactions among the learners' states, leading to a substantial improvement in the generalization performance of Composers (3-4 vs 5-8). Lastly, the best performance is achieved when utilizing a combination of USFTs, MSFTs, and FusionMLPs, validating the unique and indispensable roles of each component within SFTs.

Next, we extend the ablation study in Table 2a and present more in-depth analysis results on the multimodal 2D CelebA dataset. We first compare the meta-training overheads of the ablated methods, namely the number of parameters, memory consumption, and meta-training time per iteration. The results are summarized in Table 8a. The table shows that the overheads of our full model are comparable to those of the other ablated methods.

Finally, we delve into the role of MSFTs and Fusion MLPs in enhancing the generalization capability of CAVIA and preventing negative transfer during the cross-modal interactions conducted within MSFTs. For this, we compare the relative performance improvement of the ablated methods for each target modality (either RGB, Normal, or Sketch) over vanilla CAVIA while varying the sampling ratios of observable support sets from both the target and source modalities separately. The sampling ratios for the target and sources are set to either 0.01 (limited) or 0.25 (sufficient). The results are presented in Table 8b and Table 8c.
The results demonstrate that, when the support for the target modality is highly limited (*i.e.*, R = 0.01), CAVIA augmented with MSFTs significantly outperforms its unimodal counterparts, CAVIA and CAVIA with USFTs, regardless of the sampling ratios of the source modalities. This suggests that MSFTs are indeed essential for enhancing generalization capabilities when observations are limited. On the other hand, when observations for the target are sufficient (*i.e.*, R = 0.25) and those from the sources are limited (*i.e.*, R = 0.01), CAVIA with USFTs achieves better performance than CAVIA with MSFTs or CAVIA with both USFTs and MSFTs. This could be attributed to potential negative transfer from insufficient sources to the target. Finally, our full model alleviates this issue and achieves the best performance overall, indicating that Fusion MLPs are necessary to mitigate potential negative transfer during cross-modal interactions.

## E.6 Comparisons With Other Multimodal Meta-Learning Studies

Multimodal meta-learning is not new in the domain of meta-learning. However, existing studies (Vuorio et al., 2018; 2019; Abdollahzadeh et al., 2021; Sun et al., 2023) differ significantly from our work in terms of the notion of modality and the problem setups. This not only makes direct comparisons challenging but also hinders fair comparisons, since evaluating their frameworks or ours within each other's context requires substantial modifications to the methodologies. Nonetheless, in this section, we describe the problem setups and methods these studies focus on, followed by their comparative evaluation results in the context of our joint multimodal signal modeling scenarios. MMAML & KML. Unlike the prevalent focus of meta-learning and meta-testing on single-domain problems, exemplified by few-shot classification tasks on a single individual dataset like Omniglot, mini-Imagenet, or CUB datasets, works such as Vuorio et al.
(2018; 2019) and Abdollahzadeh et al. (2021) extend their scope to encompass multiple domains of datasets. For example, they explore simple regression problems across a union of sinusoidal, linear, and tanh functions, few-shot classification tasks combining the Omniglot, mini-Imagenet, and CUB datasets, or reinforcement learning scenarios across various environments such as Point Mass, Reacher, and Ant. In these studies, each domain, whether a dataset or an environment, is treated as a distinct modality, leading to their concept of multimodal meta-learning. Importantly, this concept of multimodality differs from the traditional understanding related to data types, as explicitly clarified in Section 3 of Abdollahzadeh et al. (2021).

Table 8: Comparisons of computation overheads and performances across ablated methods. Overheads are measured by parameter counts (↓), memory usage (↓), and meta-training time per iteration (↓). Performances are reported as relative performance improvements (↑).

| | | | RGB: 0.01 | | Normal: 0.01 | | Sketch: 0.01 | | RGB: 0.25 | | Normal: 0.25 | | Sketch: 0.25 | |
| USFTs | MSFTs | Fusion MLPs | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 |
| ✗ | ✗ | ✗ | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| ✓ | ✗ | ✗ | 23.01 | 23.01 | 31.22 | 31.22 | 27.02 | 27.02 | 24.76 | 24.76 | 27.58 | 27.58 | 23.53 | 23.53 |
| ✗ | ✓ | ✗ | 23.24 | 50.87 | 29.13 | 44.30 | 29.38 | 50.85 | 16.68 | 29.19 | 23.32 | 28.97 | 22.21 | 32.17 |
| ✓ | ✓ | ✗ | 26.11 | 55.44 | 32.31 | 49.13 | 30.93 | 52.88 | 20.82 | 32.43 | 26.89 | 32.20 | 23.89 | 33.82 |
| ✓ | ✓ | ✓ | 29.13 | 57.98 | 32.19 | 50.11 | 30.83 | 53.13 | 22.75 | 34.74 | 27.51 | 33.36 | 23.85 | 34.25 |

(b) Ablation study based on Functa

| | | | RGB: 0.01 | | Normal: 0.01 | | Sketch: 0.01 | | RGB: 0.25 | | Normal: 0.25 | | Sketch: 0.25 | |
| USFTs | MSFTs | Fusion MLPs | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 | Source: 0.01 | Source: 0.25 |
| ✗ | ✗ | ✗ | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| ✓ | ✗ | ✗ | 38.65 | 38.65 | 38.33 | 38.33 | 23.76 | 23.76 | 73.35 | 73.35 | 50.29 | 50.29 | 51.13 | 51.13 |
| ✗ | ✓ | ✗ | 38.55 | 48.20 | 37.68 | 45.73 | 23.58 | 37.16 | 72.19 | 73.39 | 50.51 | 51.89 | 48.82 | 52.48 |
| ✓ | ✓ | ✗ | 40.70 | 50.47 | 38.89 | 48.73 | 26.37 | 41.05 | 72.75 | 74.05 | 52.21 | 53.78 | 50.64 | 54.64 |
| ✓ | ✓ | ✓ | 41.54 | 52.70 | 41.05 | 49.88 | 26.44 | 39.85 | 74.06 | 75.16 | 53.80 | 55.09 | 50.75 | 54.10 |

(c) Ablation study based on Composers

| | | | Functa | | | Composers | | |
| USFTs | MSFTs | Fusion MLPs | # Params | Memory (GB) | Training Time (ms/it) | # Params | Memory (GB) | Training Time (ms/it) |
| ✗ | ✗ | ✗ | 373K | 8.54 | 251.3 | 224K | 3.00 | 60.00 |
| ✓ | ✗ | ✗ | 1.22M | 8.76 | 266.7 | 1.26M | 3.27 | 124.8 |
| ✗ | ✓ | ✗ | 772K | 8.75 | 264.6 | 819K | 3.30 | 115.1 |
| ✓ | ✓ | ✗ | 1.44M | 8.92 | 274.7 | 1.49M | 3.48 | 133.9 |
| ✓ | ✓ | ✓ | 1.89M | 9.07 | 279.3 | 1.93M | 3.61 | 136.2 |

(a) Computation overheads

These works highlight that meta-learning a single initialization may be suboptimal due to the multimodal nature of the problem. To address this, they introduce an additional meta-learned module known as a task encoder.
The role of this task encoder is to identify the latent modality of the observed data and predict modulation parameters. These parameters guide the learned single initialization towards modality-specific initializations. Experimental results indicate that the task encoder indeed learns to identify the modality (or dataset domain), and consequently, adapting from modality-specific initializations yields better performance than relying on a modality-agnostic single initialization. However, it is important to note that in these papers, each data point is assumed to be sampled i.i.d. from the union of datasets. As a result, the learner adapts independently to each data point, without explicitly leveraging cross-modal relationships among modalities or incorporating mechanisms for multimodal fusion. For these reasons, we categorize these approaches as inherently unimodal meta-learning methods. To adapt their work to our problem setup, we parameterize their task encoder with the same transformer architecture backbone as our SFTs, which is shared across all modalities, and use it to directly predict the parameters of INRs. Then, we adapt those predicted parameters for k additional optimization steps. AMML. Unlike the previously mentioned studies, Sun et al. (2023) delve into a scenario where multimodality relates specifically to data types. Their primary focus lies in addressing multimodal sentiment analysis problems, where the learner is tasked with classifying discrete sentiment scores ranging from 1 to 7 or binary sentiment classes (positive or negative). Since this classification relies on diverse sources of information represented in different modalities, encompassing texts, images, and audio, the task is referred to as multimodal inference in their work. Table 9: Quantitative comparisons on the multimodal 1D synthetic functions. We report the normalized MSEs (×10−2) computed over distinct ranges of sampling ratios, averaged over 5 random seeds.
| Modality | RGBs | | | Normals | | | Sketches | | | | | | | |------------|---------|-------|-------|-----------|-------|-------|------------|-------|-------|-------|-------|-------|------| | min | 0.00 | 0.25 | 0.50 | 0.75 | 0.00 | 0.25 | 0.50 | 0.75 | 0.00 | 0.25 | 0.50 | 0.75 | | | Range | R R max | 0.25 | 0.50 | 0.75 | 1.00 | 0.25 | 0.50 | 0.75 | 1.00 | 0.25 | 0.50 | 0.75 | 1.00 | | Functa | 13.44 | 4.117 | 3.052 | 2.577 | 5.067 | 2.448 | 2.092 | 1.928 | 13.29 | 7.065 | 5.704 | 5.209 | | | w/ MMAML | 11.22 | 3.775 | 2.772 | 2.264 | 4.311 | 2.266 | 1.872 | 1.672 | 10.96 | 6.021 | 4.452 | 3.751 | | | w/ AMML | 8.110 | 2.909 | 1.853 | 1.334 | 3.341 | 1.689 | 1.243 | 1.000 | 8.192 | 4.332 | 2.278 | 1.124 | | | w/ MIA | 6.946 | 2.563 | 1.627 | 1.135 | 2.979 | 1.530 | 1.118 | 0.869 | 7.667 | 4.042 | 2.142 | 1.011 | | | Composers | 26.40 | 15.83 | 14.30 | 13.21 | 6.979 | 4.830 | 4.630 | 4.517 | 17.99 | 14.36 | 13.27 | 12.86 | | | w/ MMAML | 12.35 | 4.228 | 2.806 | 2.029 | 4.902 | 2.567 | 2.028 | 1.725 | 11.29 | 5.757 | 3.646 | 2.521 | | | w/ AMML | 11.28 | 5.195 | 3.971 | 3.241 | 4.209 | 2.336 | 1.745 | 1.339 | 9.813 | 5.084 | 2.727 | 1.319 | | | w/ MIA | 9.764 | 3.418 | 1.913 | 1.017 | 3.749 | 1.763 | 1.062 | 0.526 | 9.505 | 4.708 | 2.336 | 0.855 | | | Modality | Sine | | | Gaussian | | Tanh | | ReLU | | | | | | |------------|---------|-------|-------|------------|-------|--------|-------|--------|-------|-------|-------|-------|------| | | min | 0.01 | 0.02 | 0.05 | 0.01 | 0.02 | 0.05 | 0.01 | 0.02 | 0.05 | 0.01 | 0.02 | 0.05 | | Range | R R max | 0.02 | 0.05 | 0.10 | 0.02 | 0.05 | 0.10 | 0.02 | 0.05 | 0.10 | 0.02 | 0.05 | 0.10 | | Functa | 44.26 | 16.07 | 3.319 | 18.81 | 4.388 | 0.953 | 22.61 | 3.667 | 0.586 | 65.29 | 10.79 | 2.157 | | | w/ MMAML | 43.35 | 10.09 | 1.233 | 30.87 | 4.270 | 0.698 | 17.20 | 2.351 | 0.312 | 72.79 | 5.630 | 0.261 | | | w/ AMML | 10.12 | 4.462 | 1.717 | 1.253 | 0.959 | 0.638 | 1.719 | 0.541 | 0.267 | 7.063 | 1.254 | 0.305 | | | w/ MIA | 
6.386 | 2.058 | 0.547 | 1.057 | 0.571 | 0.281 | 1.285 | 0.378 | 0.131 | 5.069 | 1.012 | 0.115 | | | Composers | 37.40 | 16.70 | 5.284 | 5.923 | 3.149 | 1.460 | 14.81 | 4.053 | 1.029 | 48.49 | 11.98 | 3.232 | | | w/ MMAML | 44.26 | 11.63 | 1.570 | 31.09 | 4.664 | 0.895 | 16.46 | 2.330 | 0.309 | 74.75 | 5.845 | 0.294 | | | w/ AMML | 14.57 | 9.549 | 7.706 | 1.699 | 1.462 | 1.086 | 2.411 | 1.021 | 0.725 | 10.19 | 2.340 | 0.907 | | | w/ MIA | 5.564 | 1.844 | 0.627 | 0.975 | 0.528 | 0.237 | 1.257 | 0.343 | 0.128 | 4.715 | 0.943 | 0.156 | | In their work, the authors underscore the limitations of existing methodologies, pointing out their inability to consider the heterogeneous convergence properties of each unimodal encoder network. Naively adapting these encoders using identical optimization algorithms (*e.g.*, SGD with the same learning rate) results in suboptimal unimodal encoder networks. Consequently, the fusion of these suboptimal representations yields unsatisfactory outcomes for the final prediction. To address this, Sun et al. (2023) propose to include the independent adaptation of each modality-specific encoder using the unimodal classification loss within the inner loop. Following this inner-loop phase, a multimodal fusion network integrates the extracted representations derived from these individually adapted unimodal encoders. Finally, the fused representations are utilized for predicting class labels for sentiment analysis. It is also worth mentioning that Transformer structures are employed for the independent modality-specific encoders (such as the pre-trained BERT for the text encoder and vanilla Transformers for the image and acoustic encoders), whereas they are not utilized for multimodal fusion.
Since their framework is not directly applicable to our joint multimodal function regression scenarios, we investigate the effectiveness of their adaptation-in-the-inner-loop followed by fusion-in-the-outer-loop scheme by applying our method's MIA only in the final optimization step. Results. Each compared method is run 5 times with different random seeds on the Synthetic and CelebA datasets, and the results are averaged. The results are shown in Tables 9 and 10. From the tables, we conclude the following: (1) MMAML greatly improves the memorization performance of CAVIA (Functa and Composers) thanks to the task encoder network, while it fails to generalize better than CAVIA due to the inherently unimodal nature of MMAML. (2) AMML further improves the generalization performance of CAVIA thanks to its multimodal-fusion-in-the-outer-loop scheme. However, its performance still falls short of our MIA, demonstrating the effectiveness of joint multimodal iterative adaptation of the learners during the adaptation stages. Table 10: Quantitative comparisons on the multimodal 2D CelebA image function regression. We report MSEs (×10−3) computed over distinct ranges of sampling ratios, averaged over 5 random seeds.
Review 1: Summary: The paper introduces Multimodal Iterative Adaptation (MIA), a framework that advances the learning of Implicit Neural Representations (INRs) by integrating multimodal fusion with optimization-based meta-learning. MIA addresses the limitations of encoder-based methods by enabling cross-modal knowledge exchange during iterative optimization, which enhances the generalization of INRs to complex signals. The framework's centerpiece, State Fusion Transformers (SFTs), is an attention-based meta-learner that operates during the backward pass, aggregating learning states and predicting improved parameter updates. This approach allows for a more nuanced adaptation to multimodal signals and overcomes data scarcity challenges. Strengths and Weaknesses: Strengths: * MIA is claimed to be the first to explore the integration of multimodal structures within the learning of multiple correlated INRs, offering a new perspective on optimizing across modalities. * By allowing modalities to inform each other, MIA significantly improves generalization capabilities, even with limited data, outperforming unimodal and encoder-based baselines. * The paper presents a thorough evaluation of MIA across various real-world multimodal signal regression scenarios, demonstrating its superior performance in both generalization and memorization. Weaknesses: * Although the paper is generally well-written, it would be better for the authors to provide an application example of the INR setting where the data is scarce to help the audience better understand the targeted tasks, especially readers who are not familiar with INR research. * While the setting focuses on the optimization of INRs when the observations are sparse, it is unclear how sparse the four datasets used in the experiments are. It would be better for the authors to provide some quantitative measurements regarding $P$ of each dataset to further demonstrate that their setting matches their motivation.
Meanwhile, how many INR learners are used (the $n$ and $m$) should also be specified for each dataset. * According to the results in Table 2 for the ablation study regarding the impact of modules, it seems that USFTs are the most important module in SFTs, as adding the other two modules only yields relatively marginal improvements over the USFTs-only variant. It would be beneficial for the authors to elaborate more on the rationale behind the design of both MSFTs and Fusion MLPs, especially when considered together with the additional computational complexity brought by adding MSFTs and Fusion MLPs. This would be interesting for practitioners employing the proposed method in industrial applications. * One minor suggestion: given that the designs of MIA and SFTs are relatively intuitive, it would be interesting to see some theoretical grounding behind the designs in line with the current motivation. Requested Changes: Please kindly refer to the Strengths And Weaknesses. Broader Impact Concerns: No broader impact is discussed. ================================================== Review 2: Summary: This paper introduces a framework called Multimodal Iterative Adaptation (MIA) to improve the learning of Implicit Neural Representations (INRs) by leveraging cross-modal knowledge exchange. Current encoder-based and unimodal optimization methods for learning INRs are insufficient for capturing the complexities of real-world multimodal signals, e.g., sketch prediction. The proposed MIA framework enhances INRs by facilitating cross-modal knowledge exchange during the optimization process, using State Fusion Transformers designed to operate in the backward pass of the learners. Extensive experiments across various multimodal datasets demonstrate that MIA significantly improves generalization and memorization performance compared to existing methods. Strengths and Weaknesses: Strengths: - The paper is well-written in general.
- Studying better methods for representation learning over multi-modal inputs is of great importance. - The experimental results of the proposed MIA approach look promising. Weaknesses: - After reading the entire paper, I'm still not fully convinced of the importance of INRs in practice. Why is it an important technique that deserves more research effort? And how can it improve our machine learning pipeline (e.g., by either improving learning efficiency or generalization)? I see it more as a writing/paper organization issue. One way to improve it is to add more motivating examples and discussions in the introduction. - Most of the multimodal experiments presented in this paper are actually still limited to visual input. I wonder if MIA can also be applied to more popular multimodal inputs like vision-language models, e.g., LLaVA [1]? - The overall procedure of the proposed MIA approach is not quite clear. It would be great to add an algorithm box. [1] https://arxiv.org/abs/2304.08485 Requested Changes: I summarize the requested changes below (Please see the Weaknesses section above for more details.): - Add more motivating examples and discussions for the proposed MIA approach in the introduction. - Explore the compatibility of MIA to vision-language models. - Add algorithm boxes for MIA and the major baseline methods. Broader Impact Concerns: I don't find any Broader Impact Concerns in the current manuscript. ================================================== Review 3: Summary: This work proposes Multimodal Iterative Adaptation (MIA), a framework for learning Implicit Neural Representations (INRs) for joint multimodal signals. The two primary existing approaches for this problem are (i) encoder-based meta-learning methods that leverage multimodality using attention-based fusion, or (ii) unimodal optimization-based methods that focus on learning a good initialization to speed up convergence. 
However, both approaches have limitations: the former suffers from slow adaptation and the inability to capture complex signals, while the latter is unable to leverage the multimodal data for better generalization in data-scarce settings. To tackle this, MIA proposes the best of the two worlds: an optimization-based meta-learning framework with an attention-mechanism, namely, the State Fusion Transformer (SFT). The SFT aggregates learning states and allows the different modalities to interact during adaptation, and results in better parameter updates during meta-learning. The authors conduct a comprehensive experimental study on various multimodal settings, and show that MIA outperforms other approaches in terms of both generalization and memorization. Strengths and Weaknesses: Strengths - The paper makes a nice contribution in the multimodal INR literature. The approach is a clever combination of the two primary paradigms in multimodal INR (encoder-based and optimization-based), and it is well-motivated. - The SFT is clearly structured into three stages, where each stage corresponds to a different functionality (unimodal learning, cross-modal learning, and state fusion). - The experimental evaluation is quite extensive. The authors compare MIA to several baseline methods, and they try two different base INR models, namely, Functa or Composers. They also use 4 datasets, 3 of which are real. With few exceptions, MIA outperforms the other methods, with the gap being particularly pronounced in the more challenging low-sampling regime. - The authors include an interesting ablation study that discusses key aspects of the MIA framework, e.g., the relative impact of each module of the SFT, the cross-modal utilization of SFTs, and the computational overhead of MIA. The findings in the ablation study seem to confirm that the SFT can identify useful cross-modal patterns, and make use of them to improve performance in the target modallity. 
Weaknesses - The MIA framework seems hard to scale to more complex signals, e.g., images larger than 128x128. This is a fundamental challenge of meta-learning algorithms, as the authors acknowledge, but it still significantly limits the applicability of the proposed framework in complex real settings. - Table 2(a) suggests that the relative improvement of MIA mainly comes from the unimodal learning with USFT. The multimodal MSFT provides very small additional improvement, while MP Fusion also results in rather modest improvement. Given that this framework specifically addresses the multi-modality challenge, one may have expected that a much larger part of the relative improvement would be attributed to the cross-modal interactions, but it seems that unimodal learning can in fact primarily account for the performance improvement. Requested Changes: I do not have major concerns with respect to this work. Some questions to the authors: - Have they tried experiments with more complex datasets? If not, what was the main obstacle? If yes, how did they perform? - Can the authors comment on the fact that unimodal learning in USFT mainly accounts for the performance improvement? It seems that cross-modal interactions help, but to a much lesser degree. - In Figure 8(a) and Table 5, the Pearson correlation coefficient is negative in the off-diagonal entries, whereas in Figure 17(b) the off-diagonal entries are positive. Does Figure 17 depict something different from the Pearson correlation? Broader Impact Concerns: No concern. ================================================== Metareview: Recommendation: Accept as is Comment: In this paper, the authors presented a new meta-learning framework for multimodal modelling. Specifically, a Multimodal Iterative Adaptation (MIA) approach was proposed that combined multimodal fusion with optimization-based meta-learning. Extensive experimental evaluations on several real-world scenarios show the effectiveness of the proposed method.
The paper is generally well-written and easy to follow. Three expert reviewers were invited to review the paper, and both strengths and weaknesses were raised. Through back-and-forth discussions and revision, most of the concerns were well addressed by the authors in the revised version, and all the reviewers are satisfied with the revision. In the end, a positive score (with two *Leaning Accept* and one *Accept*) was recommended. On the other hand, both the reviewers and AE found that the interesting approach and the contributions made in this paper could be of interest to a broad audience in TMLR. As a result, the AE is pleased to inform that the paper has been accepted for publication in TMLR. ==================================================
# Constraining Generative Models For Engineering Design With Negative Data Anonymous authors Paper under double-blind review ## Abstract Generative models have recently achieved remarkable success and widespread adoption in society, yet they still often struggle to generate realistic and accurate outputs. This challenge extends beyond language and vision into fields like engineering design, where safety-critical engineering standards and non-negotiable physical laws tightly constrain what outputs are considered acceptable. In this work, we introduce two approaches to guide models toward constraint-satisfying outputs using 'negative data' - examples of what to avoid. Our negative data generative models (NDGMs) outperform state-of-the-art NDGMs by 4x in constraint satisfaction and easily outperform classic generative models using 8x less data in certain problems. To demonstrate this, we rigorously benchmark our NDGMs against 14 baseline models across numerous synthetic and real engineering problems, such as ship hulls with hydrodynamic constraints and vehicle design with impact safety constraints. Our benchmarks showcase both the best-in-class performance of our new NDGM models and the widespread dominance of NDGMs over classic generative models in general. In doing so, we advocate for the more widespread use of NDGMs in engineering design tasks. ## 1 **Introduction** Generative models have demonstrated impressive results in vision, language, and speech. However, even with massive datasets, they struggle with precision, generating physically impossible, factually incorrect, or otherwise 'invalid' samples. Most users can easily point to examples: Anatomical inaccuracies, imbalanced objects in natural scenes, erroneous text responses, etc. This invalidity can be thought of as a form of constraint violation - in the ideal scenario, generative models would be constrained to only generate valid samples. 
While this constraint violation is a nuisance in image or text synthesis, it becomes a paramount concern in domains like engineering design with high-stakes (including safety-critical) constraints. A generative model synthesizing designs for car or airplane components, for example, may be subject to geometric restrictions (such as avoiding disconnected or colliding components), functional requirements (such as load-bearing capacity or maximum weight), industry standards, and manufacturing limitations. As generative models are increasingly applied to engineering problems, their blatant violation of ubiquitous, objective, and non-negotiable constraints is becoming increasingly problematic. ![0_image_0.png](0_image_0.png) Figure 1: Many real-world data distributions have gaps in their support caused by constraints. Generative models classically estimate these distributions using constraint-satisfying (positive) samples. We analyze training methods for generative models that additionally leverage constraint-violating (negative) data to more accurately estimate the density of in-distribution (positive) data. For example, by examining bike frames with disconnected components, a model can better learn to generate geometrically valid frames. We hypothesize that challenges with constraint violation in generative models are largely attributable to the fact that generative models are classically shown only 'positive' (constraint-satisfying) data points during training, and are never exposed to 'negative' (constraint-violating) data points to avoid. Completely satisfying constraints using this training approach is equivalent to learning a binary classification problem with only one class present in the data, a challenging task. Instead, by studying negative data in addition to positive data, generative models can better avoid constraint-violating samples during generation (Fig. 1). 
This aligns with their distribution-matching objective since negative data points should have zero density in the original real-world distribution that the model is trying to mimic. We will refer to these models as *Negative-Data* Generative Models, or NDGMs. We conceptualize and test two new NDGM formulations. These formulations dominate both simple baselines and the current state-of-the-art (SOTA) NDGMs on highly non-convex test problems and more complex engineering problems. In certain tests, our NDGMs generate 4x as many valid (positive) samples as SOTA models. In benchmarking 16 training formulations over 15 test problems, we additionally identify several broader conclusions. (1) Unlike our new NDGMs, we find that existing formulations for NDGMs sometimes fail to surpass simple baselines. In particular, while SOTA models excel in simpler problems, they fall short of our method in more complex problems. (2) We find that NDGMs in general dominate conventional models. We therefore advocate for the more widespread use of NDGMs (including existing ones) over vanilla models. (3) We demonstrate that negative data can be significantly more informative than positive data. In some problems, we achieve a 40x improvement in constraint satisfaction by augmenting the dataset by 6% using negative data, indicating that NDGMs often improve sample efficiency over conventional models. Contributions: (i) We introduce two "Negative-Data Generative Models" (NDGMs) which tie or outperform the existing SOTA in constraint satisfaction in all synthetic problems and the five most challenging engineering problems tested. (ii) We curate an extensive set of benchmarks comprising several synthetic problems and a dozen engineering tasks, featuring real constraints from engineering standards. We test 16 different NDGM formulations and baselines on these tests, the largest benchmark of NDGMs to our knowledge. 
(iii) We demonstrate that simple baselines dominate conventional generative models in constraint satisfaction. In fact, we show that the SOTA NDGMs cannot consistently outperform simple baselines when dealing with fine-grained constraints, even in low-dimensional settings. (iv) We show that NDGMs can significantly outperform vanilla models using ∼90% less data. We thus advocate for the more widespread use of NDGMs in engineering tasks over vanilla counterparts. ## 2 **Background** In this section, we discuss constraint satisfaction in generative models and then divergence minimization in generative models. For more related work, see Appendix A. ## 2.1 **Constraints In Engineering And Design** Constraints are ubiquitous in design. A designer creating ship hulls, for example, must adhere to a medley of geometric constraints, performance requirements, and safety regulations from authoritative bodies. Generating constraint-satisfying designs can be exceedingly difficult. As many practitioners turn to data-driven generative models to tackle engineering problems (Regenwetter et al., 2022a), this difficulty remains (Woldseth et al., 2022; Regenwetter et al., 2023) (as we demonstrate, even a generative model that sees 30k examples of valid ship hulls can only generate valid hulls with a 2% success rate). The overwhelming majority of deep generative models in design do not actively consider constraints (Woldseth et al., 2022; Regenwetter et al., 2022a), despite constraint satisfaction being an explicit goal in many of the design problems they address (Oh et al., 2019; Nie et al., 2021; Bilodeau et al., 2022; Chen et al., 2022; Chen & Fuge, 2019; Cheng et al., 2021). 
Several engineering design datasets (Regenwetter et al., 2022b; Bagazinski & Ahmed, 2023; Giannone & Ahmed, 2023; Mazé & Ahmed, 2023) feature constraint-violating designs, and many others have checks for validity (Whalen et al., 2021; Wollstadt et al., 2022), allowing datasets of invalid (negative) designs to be generated. In some cases, datasets of positive examples are generated through search by rejecting and discarding negative samples (Bagazinski & Ahmed, 2023; Regenwetter et al., 2022b), making negative data essentially free. In any problem where negative data is available or can be generated, NDGMs can be applied. ## 2.2 **Divergence Minimization In Generative Models** Before discussing divergence minimization in NDGMs, we first discuss divergence minimization in conventional generative models. Let pp(x) be the (positive) data distribution and pθ(x) the distribution sampled by the generative model. Given N samples from pp(x), the objective of generative modeling is to find a setting θ∗ of θ such that, for an appropriate choice of discrepancy measure, pθ∗ ≈ pp. A common choice for this discrepancy measure is the Kullback–Leibler or KL divergence: $$\mathbb{KL}[p_{\theta}\|p_{p}]=\int p_{\theta}(\mathbf{x})\left[\log{\frac{p_{\theta}(\mathbf{x})}{p_{p}(\mathbf{x})}}\right]d\mathbf{x}.\tag{1}$$ To minimize the discrepancy, we find θ∗ as the solution to the following optimization problem: $$\theta^{*}=\arg\operatorname*{min}_{\theta}\mathbb{KL}[p_{\theta}\|p_{p}].\tag{2}$$ In practice, direct optimization of equation 2 is often intractable. 
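When both log-densities can be evaluated, the KL divergence in equation 1 can be estimated by Monte Carlo using samples from pθ. The following is a minimal illustrative sketch (our addition, not the paper's code) using two 1-D Gaussians as stand-ins for pθ and pp, checked against the closed-form Gaussian KL:

```python
import math
import random

random.seed(0)

mu_t, s_t = 0.0, 1.0  # stand-in for the model distribution p_theta
mu_p, s_p = 1.0, 2.0  # stand-in for the positive data distribution p_p

def log_gauss(x, mu, s):
    # log-density of a 1-D Gaussian N(mu, s^2)
    return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2.0 * math.pi))

# Monte Carlo estimate of KL[p_theta || p_p] = E_{x ~ p_theta}[log p_theta(x) - log p_p(x)]
n = 100_000
xs = [random.gauss(mu_t, s_t) for _ in range(n)]
kl_mc = sum(log_gauss(x, mu_t, s_t) - log_gauss(x, mu_p, s_p) for x in xs) / n

# Closed-form KL between two Gaussians, for comparison
kl_exact = math.log(s_p / s_t) + (s_t**2 + (mu_t - mu_p) ** 2) / (2 * s_p**2) - 0.5
```

Minimizing this quantity over the parameters of pθ is exactly the objective in equation 2; for deep generative models the log-densities are not available in closed form, which is why tractable bounds and plug-in estimators are used instead.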
As such, it is common in deep generative modeling to learn θ by using either a tractable lower-bound to a slightly different variant of equation 2 (Kingma & Welling, 2013; Burda et al., 2015; Ho et al., 2020; Sønderby et al., 2016) or by using plug-in or direct estimators of the divergence measure (Casella & Berger, 2002; Sugiyama et al., 2012a; Gutmann & Hyvärinen, 2010; Srivastava et al., 2017; Goodfellow et al., 2014; Srivastava et al., 2023; Poole et al., 2019). In both of these cases, under certain conditions, it theoretically holds that θ → θ∗ as N → ∞. However, since N is limited, there remains a finite discrepancy between the model and data distributions. This mismatch often manifests in pθ allocating high probability mass in regions where pp may not have significant empirical support. In domains such as engineering design, where invalid (negative) designs tend to be very close to the valid (positive) designs, this leads to the generation of invalid designs with high probability. This lack of precision underpins the relatively limited success of deep generative models in the engineering design domain (Regenwetter et al., 2023). Divergence minimization in GANs. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014; Arjovsky et al., 2017; Mohamed & Lakshminarayanan, 2016; Srivastava et al., 2017; Nowozin et al., 2016) are a powerful framework for generating realistic and diverse data samples. GANs have two main components: a generator fθ, which generates samples according to the density pθ, and a discriminator fϕ, which is a binary classifier. The generator learns to generate synthetic data samples by transforming random noise into meaningful outputs, while the discriminator aims to distinguish between real and generated samples. 
The standard GAN loss can be written as: $${\mathcal{L}}(\theta,\phi)=\mathbb{E}_{p_{p}(\mathbf{x})}[\log f_{\phi}(\mathbf{x})]+\mathbb{E}_{p_{\theta}(\mathbf{x})}[1-\log(f_{\phi}(\mathbf{x}_{\theta}))].\tag{3}$$ Training a GAN involves iterating over minθ maxϕ L(θ, ϕ). GANs can also be interpreted in terms of estimating the density ratio (Gutmann & Hyvärinen, 2010; Srivastava et al., 2017) between the data and the generated distribution r(x) = pp(x)/pθ(x). This ratio can be estimated by a discriminative model as rϕ = fϕ(x)/(1 − fϕ(x)), and rϕ = 1 gives us pθ = pp. The optimal discriminator prediction and generator distribution are: $$f_{\phi}(\mathbf{x})=\frac{p_{p}(\mathbf{x})}{(p_{\theta}(\mathbf{x})+p_{p}(\mathbf{x}))},\ p_{\theta}^{*}(\mathbf{x})=p_{p}(\mathbf{x}).\tag{4}$$ Divergence minimization in other generative models. Many other types of generative models similarly minimize divergence between pθ and pp. These models include popular likelihood-based models like Variational Autoencoders (VAEs) (Kingma & Welling, 2013) and Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020). We will not discuss the mathematics behind divergence minimization for these likelihood-based models, but we do benchmark several variants in our results. In general, we refer to unaugmented GANs, VAEs, and DDPMs as 'vanilla models' throughout the paper. ## 3 **Negative-Data Generative Models** In this section, we discuss the NDGM framework. We explain how generative models can be adjusted to exploit negative data to improve constraint satisfaction. Let pn denote the *negative distribution*, i.e., the distribution of constraint-violating datapoints. Instead of training using only the positive distribution pp, we now seek to train a generative model using both pp and pn. In this section, we discuss several existing methods to do so. 
These methods range from simple baselines like rejection sampling to state-of-the-art formulations like discriminator overloading. ## 3.1 **Class Conditioning (CC)** One approach to incorporating implicit constraints is through conditioning, which is popular in many design generation problems (Nie et al., 2021; Behzadi & Ilieş, 2021; Mazé & Ahmed, 2023; Malviya, 2020; Heyrani Nobari et al., 2021). In conditional modeling, a generative model typically conditions on the constraints denoted as c and learns a conditional distribution, p(x|c), where x represents the generated output. 'Off-the-shelf' class-conditional models can be simple NDGMs, where the positive and negative data each constitute one class. During inference, the model attempts to satisfy constraints by generating conditionally positive samples. Other conditional variants such as auxiliary-classifier GANs (AC-GANs) (Odena et al., 2017) can also be used. Broadly speaking, the negative data formulation for generative models can be seen as a specific case of class-conditional generation. However, as we demonstrate in this paper, generic class-conditional training formulations for generative models are not as effective as specialized NDGMs. ## 3.2 **Pre-Trained Classifier (PC)** One common approach for active constraint satisfaction involves pre-training a supervised model to predict constraint satisfaction and querying this model during training, inference, or postprocessing. Often, this model predicts constraint violation likelihood, though it can also predict intermediates that are combined in a more complex constraint check (Wang et al., 2022). Typically, this classifier fψ learns: $$f_{\psi}(\mathbf{x})={\frac{p_{n}(\mathbf{x})}{p_{n}(\mathbf{x})+p_{p}(\mathbf{x})}}.\tag{5}$$ This frozen classifier can be incorporated into the training of a generative model by adding an auxiliary loss LPC to the generative model's loss LGM to calculate a total loss LTot = LGM + λLPC, as in (Regenwetter & Ahmed, 2022). Here, λ is some weighting parameter and LPC is expressed as: $${\mathcal{L}}_{PC}=\mathbb{E}_{p_{\theta}(\mathbf{x})}[\log f_{\psi}(\mathbf{x})].\tag{6}$$ Pre-trained classifiers can also be applied during inference in certain models, such as in diffusion model guidance (Mazé & Ahmed, 2023; Giannone & Ahmed, 2023). This pre-trained classifier can alternatively be appended to a vanilla model as a rejection sampling layer - a simple but surprisingly effective baseline. We abbreviate pre-trained classifier loss, guidance, and rejection sampling as CL, G, and Rej, respectively, in our testing. ## 3.3 **Discriminator Overloading (DO)** Discriminator overloading is a technique to directly incorporate negative data into GAN model training. This formulation was proposed in two of the first papers to train a generative model using both positive and negative data (though we have made slight modifications for generality): Rumi-GAN (Asokan & Seelamantula, 2020) and Negative Data Augmentation GAN (NDA-GAN) (Sinha et al., 2021). We refer to these formulations as 'discriminator overloading' since the discriminator is 'overloaded' by learning to discriminate between (1) positives and (2) fakes or negatives. As such, the discriminator estimates: $$f_{\phi}(\mathbf{x})={\frac{p_{p}(\mathbf{x})}{\lambda p_{\theta}(\mathbf{x})+(1-\lambda)p_{n}(\mathbf{x})+p_{p}(\mathbf{x})}},\tag{7}$$ with λ being a weighting parameter. As usual, the generator attempts to generate samples that are classified as real, in this case indicating that they look similar to positive data and dissimilar to negative data. 
The loss is expressed as: $${\mathcal{L}}(\theta,\phi)=\mathbb{E}_{p_{p}(\mathbf{x})}[\log f_{\phi}(\mathbf{x})]+\mathbb{E}_{p_{\theta}(\mathbf{x})}[1-\log(f_{\phi}(\mathbf{x}_{\theta}))]+\mathbb{E}_{p_{n}(\mathbf{x})}[1-\log(f_{\phi}(\mathbf{x}))].\tag{8}$$ As we will show, discriminator overloading is effective. However, instead of conflating the negatives and fakes, we propose two formulations to learn the ratios between the positive, negative, and fake distributions individually. As we demonstrate, this adjustment yields superior performance. ## 4 **Proposed Negative Data Formulation** In this section, we propose several novel formulations for NDGMs. The training pseudocode for each is included in Appendix C, while more details on the mathematical formulation and derivations are included in Appendix B. Instead of learning a density ratio between pp and an amalgamation of pn and pθ, as in discriminator overloading, we instead propose methods to learn pairwise density ratios between the three distributions. The most straightforward method to do so is with a multi-class discriminator. ## 4.1 **Multi-Class Discriminator (MC)** Noting that multi-class classifiers are strong density ratio estimators (Srivastava et al., 2023), we propose an NDGM variant using a multi-class discriminator model that learns three classes: positive, negative, and fake. This discriminator is implicitly estimating all relevant ratios: $$f_{\phi,c}(\mathbf{x})=\frac{p_{c}(\mathbf{x})}{p_{p}(\mathbf{x})+p_{\theta}(\mathbf{x})+p_{n}(\mathbf{x})}\quad\forall\,c\in\{p,n,\theta\}.\tag{9}$$ However, these ratios are not learned independently. Note that fϕ,p is a reweighted version of Eq. 7. Though this multi-class formulation is similar to discriminator overloading, instead of showing the discriminator a weighted amalgamation of fakes and negatives (as in DO), the multi-class discriminator instead treats fakes and negatives as separate classes, and can potentially refine its knowledge by distinguishing them. 
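As a small numeric sanity check of Eq. 9 (our sketch, with three illustrative 1-D Gaussians standing in for pp, pn, and pθ), the optimal multi-class discriminator yields class probabilities that sum to one, and any pairwise ratio of its outputs recovers the corresponding density ratio, e.g., fϕ,p/fϕ,n = pp/pn:

```python
import math

def gauss_pdf(x, mu, s):
    # density of a 1-D Gaussian N(mu, s^2)
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

# Illustrative stand-ins for the positive, negative, and generator densities.
densities = {
    "p": lambda x: gauss_pdf(x, 0.0, 1.0),      # p_p
    "n": lambda x: gauss_pdf(x, 2.0, 0.5),      # p_n
    "theta": lambda x: gauss_pdf(x, 0.5, 1.5),  # p_theta
}

def optimal_discriminator(x):
    """Eq. 9: f_{phi,c}(x) = p_c(x) / (p_p(x) + p_theta(x) + p_n(x))."""
    total = sum(d(x) for d in densities.values())
    return {c: d(x) / total for c, d in densities.items()}

x = 0.7
f = optimal_discriminator(x)
# Pairwise ratios of discriminator outputs equal the pairwise density ratios --
# the quantities an NDGM generator can exploit to avoid negative regions.
ratio_pn = f["p"] / f["n"]
```

In training, a learned softmax classifier only approximates these outputs, but the identity above is what makes the multi-class discriminator a pairwise density-ratio estimator.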
## 4.2 **Double Discriminator (DD)** Though using a single multi-class discriminator to learn pairwise density ratios between pp, pn, and pθ is simpler, we can also accomplish the task using multiple discriminative models. For example, fϕ can estimate the ratio pp/pθ (this is a standard GAN discriminator), while fψ estimates pn/pθ. The loss is then expressed as: $$\mathcal{L}(\theta,\phi,\psi)=\mathbb{E}_{p_{p}(\mathbf{x})}[\log f_{\phi}(\mathbf{x})]+\mathbb{E}_{p_{\theta}(\mathbf{x})}[1-\log(f_{\phi}(\mathbf{x}_{\theta}))]-\lambda\mathbb{E}_{p_{n}(\mathbf{x})}[\log f_{\psi}(\mathbf{x})]-\lambda\mathbb{E}_{p_{\theta}(\mathbf{x})}[1-\log(f_{\psi}(\mathbf{x}_{\theta}))].\tag{10}$$ Here, λ ∈ [0, 1] is a tuning parameter adjusting the weight of the negative data's contribution to the loss and avoiding instability. Optimal discriminators learn: $$f_{\phi}(\mathbf{x})=\frac{p_{p}(\mathbf{x})}{(p_{\theta}(\mathbf{x})+p_{p}(\mathbf{x}))},\ \ f_{\psi}(\mathbf{x})=\frac{p_{n}(\mathbf{x})}{(p_{\theta}(\mathbf{x})+p_{n}(\mathbf{x}))}.\tag{11}$$ The rationale behind the double discriminator algorithm is intuitive when viewed as an expansion of a vanilla GAN. The generator now wants its samples to be classified as positive by the original discriminator and not be classified as negative by the extra discriminator. This simple double discriminator variant is benchmarked in the appendix (abbreviated DD-a). In practice, however, we find that an alternate formulation that combines this simple two-discriminator concept with discriminator overloading (DO) works better in many cases (we call this DD in the main paper and DD-b in the appendix). As we show in Appendix B, this alternative formulation has a mathematical basis that directly relates to the original GAN training objective. 
The alternative formulation consists of the classic discriminator fϕ estimating pp/pθ and an overloaded discriminator fψ estimating (pp + pθ)/pn. The total loss function is then expressed as: $$\mathcal{L}(\theta,\phi,\psi)=\mathbb{E}_{p_{p}(\mathbf{x})}[\log f_{\phi}(\mathbf{x})]+\mathbb{E}_{p_{\theta}(\mathbf{x})}[1-\log(f_{\phi}(\mathbf{x}_{\theta}))]+\lambda\mathbb{E}_{p_{p}(\mathbf{x})}[\log f_{\psi}(\mathbf{x})]+\lambda\mathbb{E}_{p_{\theta}(\mathbf{x})}[\log f_{\psi}(\mathbf{x}_{\theta})]+\lambda\mathbb{E}_{p_{n}(\mathbf{x})}[1-\log(f_{\psi}(\mathbf{x}))].\tag{12}$$ Once again, λ is a weighting parameter modulating the contribution of the negative data. Appendix B contains more detailed derivations and comparisons to similar formulations. For more details on the training algorithms, see Appendix C. ## 5 **Experiments** We now present experiments on (i) 2D densities, where we benchmark 16 different models including ours, the SOTA, baseline NDGMs, and vanilla models; (ii) 9 diverse engineering tasks with different levels of complexity; and (iii) a block stack problem where we investigate multiple detailed constraints. For more experiments on 2D densities, engineering tasks, and block stacks, see Appendices D, G, and F. For additional experiments on diversity-augmented generation, see Appendix E. ## 5.1 **Negative Data For Densities With Constraints** We first showcase our approach using two easy-to-visualize but highly non-convex 2D test problems. Problem 1 is an adaptation of a classic multi-modal test problem made significantly more challenging with the addition of small negative regions in the centers of each mode. Problem 2 is a simple uniform distribution with many discontinuous circular regions of invalidity in a grid pattern. 10k positive and negative data points are randomly sampled, as shown in Figures 5a and 5b. 
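A dataset in the spirit of Problem 2 can be constructed as follows. This is our illustrative sketch; the unit square, grid spacing, and hole radius are assumed values rather than the paper's exact parameters:

```python
import random

random.seed(0)
SPACING, RADIUS = 0.25, 0.08  # assumed grid spacing and invalid-circle radius

def is_negative(x, y):
    # A point is constraint-violating if it lies inside the circular
    # invalid region centred on the nearest grid node.
    cx = round(x / SPACING) * SPACING
    cy = round(y / SPACING) * SPACING
    return (x - cx) ** 2 + (y - cy) ** 2 <= RADIUS ** 2

# Uniform samples over the unit square, split into positive/negative data.
points = [(random.random(), random.random()) for _ in range(10_000)]
positive = [p for p in points if not is_negative(*p)]
negative = [p for p in points if is_negative(*p)]
```

Because the constraint checker labels every sample, both the positive set (for classic training) and the negative set (for NDGM training) come from the same sampling pass, mirroring how negative data is often "essentially free" in practice.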
Architecture and training details are included in Appendix D. ![5_image_0.png](5_image_0.png) Figure 2: Generated Distributions from several generative models on two highly non-convex test problems (Problem 1 on top, Problem 2 on the bottom). Positive data points and samples are shown in blue and negative ones in black. Our proposed NDGM models (GAN-MC, GAN-DD) generate significantly fewer negative samples. Models. We test 16 variants of GAN, VAE, and DDPM models. Among these are: Vanilla models (GAN, VAE, DDPM), trained only on positive data; Class conditional models (GAN-CC, VAE-CC) trained on both datasets in a binary class conditional setting as in Sec 3.1; Models augmented with a frozen pre-trained classifier to steer models during training through a classification loss (GAN-CL, VAE-CL, DDPM-CL); DDPM with pre-trained classifier guidance only during inference (DDPM-G); Vanilla models trained on only positive data, but augmented with a rejection sampling layer using a frozen classifier pre-trained on both positive and negative data (GAN-Rej, VAE-Rej, DDPM-Rej); Auxiliary Classifier GAN (GAN-AC); GAN with discriminator overloading as in NDA-GANs (Sinha et al., 2021) and Rumi-GANs (Asokan & Seelamantula, 2020) (GAN-DO); Our multi-class-discriminator GAN (Sec. 4.1) (GAN-MC); Our GAN variant with two discriminators (Sec. 4.2) (GAN-DD); GAN-DD-a is included in Appendix D.3. Table 1: Mean scores across two highly non-convex test problems. The **best** result is in bold. The next two best are underlined. Both of the formulations we propose (GAN-MC, GAN-DD) outperform the previous state-of-the-art (GAN-DO) in all metrics. Each model is tested three times for each of the two toy densities. 
| Model | Invalidity (%) ↓ | MMD (10−3 ) ↓ | F1 ↑ | |---------------|--------------------|-----------------|--------| | VAE | 12.2 | 3.19 | 0.89 | | VAE-CC | 14.6 | 3.13 | 0.88 | | VAE-CL | 0.29 | 5.06 | 0.86 | | VAE-Rej | 1.30 | 4.25 | 0.90 | | DDPM | 7.74 | 3.83 | 0.91 | | DDPM-CL | 9.41 | 3.60 | 0.90 | | DDPM-G | 5.20 | 4.19 | 0.89 | | DDPM-Rej | 1.11 | 3.65 | 0.90 | | GAN | 6.22 | 4.96 | 0.84 | | GAN-CC | 6.20 | 4.75 | 0.90 | | GAN-AC | 2.87 | 3.16 | 0.94 | | GAN-CL | 0.31 | 2.30 | 0.95 | | GAN-Rej | 0.43 | 4.88 | 0.86 | | GAN-DO (SOTA) | 0.46 | 4.33 | 0.93 | | GAN-MC (Ours) | 0.30 | 2.26 | 0.94 | | GAN-DD (Ours) | 0.28 | 2.03 | 0.95 | Metrics. We score each model on three metrics. 1) Invalidity - the fraction of generated samples that violate the constraints (negative samples). 2) Maximum Mean Discrepancy (MMD), a common distributional similarity metric. 3) F1 score for generative models, as proposed in (Sajjadi et al., 2018). We present an expanded study with more metrics in Appendix D.3. Results. Figure 5 plots the datasets and the generated distributions of several select models (vanilla GAN and the two NDGMs we propose). Our GAN-MC and GAN-DD variants both achieve near-perfect constraint satisfaction compared to the vanilla model. Plots for all 16 models are included in Appendix D.3. Table 1 presents scores across all models (expanded tables included in Appendix D.3). Both our GAN-MC and GAN-DD variants score within the top three models for each of the metrics. Furthermore, they both outperform GAN-DO, the previous state-of-the-art formulation, in every metric. Note that GAN-DO underperforms certain baselines, such as training with a frozen classifier loss (GAN-CL). In general, most NDGM baselines like classifier loss (-CL) and rejection sampling (-Rej) significantly outperform vanilla models. In all, our proposed GAN-MC and GAN-DD models achieve the highest performance across all metrics. 
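For reference, the MMD score used above can be computed with a simple kernel estimator. The sketch below (ours, not the paper's implementation) uses a biased squared-MMD estimate with a Gaussian kernel and an assumed bandwidth on 1-D samples:

```python
import math
import random

def mmd2(xs, ys, bandwidth=1.0):
    """Biased squared-MMD estimate between two 1-D sample sets (Gaussian kernel)."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2.0 * bandwidth**2))
    kxx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)]  # stand-in for test data
good = [random.gauss(0, 1) for _ in range(200)]  # well-matched "generator" samples
bad = [random.gauss(2, 1) for _ in range(200)]   # mismatched "generator" samples
```

A well-matched generator yields a much smaller score than a mismatched one, which is why lower MMD is better in Table 1.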
## 5.2 **How Much Negative Data Is Enough?** Table 2: Comparison of **invalidity metric** for NDGM models trained with different numbers of positive datapoints (Np) and negative datapoints (Nn). NDGMs can generate many times fewer constraint-violating samples, even when trained on orders of magnitude less data. A GAN-DD is benchmarked when Nn > 0, otherwise a vanilla GAN is benchmarked. Scores are averaged over four instantiations. **Lower is better.**

| Model | Nn | Problem 1: Np = 1K | Np = 4K | Np = 16K | Problem 2: Np = 1K | Np = 4K | Np = 16K |
|---|---|---|---|---|---|---|---|
| GAN | 0 | 10.3% | 10.0% | 12.3% | 2.4% | 2.3% | 5.9% |
| GAN-DD | 1K | 0.6% | 0.3% | 0.3% | 0.8% | 0.6% | 0.6% |
| GAN-DD | 4K | 0.2% | 0.3% | 0.4% | 0.2% | 0.3% | 0.5% |
| GAN-DD | 16K | 0.2% | 0.1% | 0.3% | 0.2% | 0.2% | 0.1% |

In the realm of generative models, it is theoretically possible to exactly recover the underlying data distribution, px, when provided with an infinite amount of data, model capacity, and computational resources. However, in practical scenarios where data throughput and computing are not only finite but limited, like engineering design and scientific research, simply increasing the volume of data is not a viable strategy to improve constraint satisfaction. Fortunately, we find that NDGMs can be significantly more data-efficient than vanilla generative models. In Table 2, we present empirical evidence to support our arguments. By solely increasing the amount of positive data without incorporating negative data (first row - vanilla GAN), we observed no reduction in the invalidity metric, despite a 16x increase in positive data. 
Conversely, when we introduce a modest proportion of negative data (6% - Nn = 1K, Np = 16K), we can achieve a 10-40x reduction in the rate of invalid sample generation. Furthermore, we can even remove much of the positive data with only minor performance drops. Notably, even with 1k positive and 1k negative data points, NDGMs generate 7-20x fewer invalid samples compared to models trained on 16K positive data points. These experiments demonstrate that NDGMs can significantly (7-20x) outperform vanilla models using a fraction (13%) of the data. Importantly, practitioners seeking to improve their generative models may achieve much more value by collecting negative data, rather than additional positive data. ## 5.3 **Handling Connectivity And Stability Constraints** Block-stacking problems have long been studied as a case study in 'intuitive physics' (Battaglia et al., 2013; Riochet et al., 2020), on which many predictive and generative computational approaches have been tested (Hamrick et al., 2018; Smith et al., 2019b). In this study, we address an intuitive block-stacking problem featuring connectivity and stability constraints. These constraints are reflective of common engineering constraints related to interfacing of mechanical components and assembly-level functional requirements. Therefore, the block stacking problem is a representative, yet intuitive case study for engineering applications. ![6_image_0.png](6_image_0.png) Figure 3: Overview of constraints in stacked blocks problem. Our goal is to generate valid stacks of blocks (left) that are (I) connected and (II) stable. Stacks that violate either the connectivity or stability constraint are considered invalid. The constraints are defined as follows: (i) connectivity: Blocks must stack without floating or intersecting up to a prescribed tolerance, and (ii) stability: Any block (or sub-stack) must overlap with its support in such a way that its 'center of mass' falls in the supporting blocks' area. 
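These two checks can be sketched for axis-aligned blocks as follows. This is our simplified reading of the constraints (a single-block stability check over one support, with the 0.9 mm tolerance from Table 3), not the authors' exact implementation; full stability additionally requires every sub-stack's combined centre of mass to be supported:

```python
TOL = 0.9  # mm: maximum allowed gap or penetration between touching edges

def connected(lower_top_y, upper_bottom_y, tol=TOL):
    # (i) connectivity: the upper block neither floats above nor
    # intersects the lower block beyond the tolerance.
    return abs(upper_bottom_y - lower_top_y) <= tol

def stable(block_x, block_w, support_x, support_w):
    # (ii) stability (simplified): the upper block's centre of mass
    # falls within the supporting block's footprint.
    com = block_x + block_w / 2.0
    return support_x <= com <= support_x + support_w

# Hypothetical two-block interface: 0.5 mm gap, centre of mass over the support.
ok = connected(10.0, 10.5) and stable(2.0, 4.0, 0.0, 8.0)
floating = not connected(10.0, 12.0)  # a 2 mm gap violates connectivity
```

A stack is labeled positive only when every interface passes both checks; any single violation sends the sample to the negative dataset.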
Therefore, the positive data consists of stable & connected stacks, while the negative data consists of unstable & connected, stable & disconnected, and unstable & disconnected stacks. Table 3: Constraint satisfaction on block stacking problem. Blocks are floating or intersecting if the distance between their touching edges is larger than 0.9 mm (the minimum distance between constraints in the negative data is 1 mm). b: base block; m: middle block; t: top block. For floating yb < ym and for intersecting yb > ym. Fulfilling Constraints. We train a vanilla model using only the positive data, and two NDGMs: a GAN-DO (as in (Sinha et al., 2021) and (Asokan & Seelamantula, 2020)) and our proposed GAN-DD, which leverages Eq. 12. We test on 20 splits (1000 samples each) and report scores in Table 3, where we break down constraint satisfaction into individual scores. GAN-DD outperforms the base model and the GAN-DO by a large margin in most constraint-satisfaction scores, with fewer intersecting and floating blocks, and in particular on global connectivity between boxes, indicating that the GAN-DD approach is effective in improving constraint satisfaction in situations where precision is important. Factoring in stability, we see an even larger gap between the baselines and our model, emphasizing the challenges in fulfilling multiple sets of fine-grained constraints. These experiments are reported in Appendix F, alongside more visualizations. 
| Metrics | GAN | GAN-DO | GAN-DD |
|---|---|---|---|
| ↓ Median(\|yb − ym\|) | 2.78 mm | 5.12 mm | 0.54 mm |
| ↓ Median(\|ym − yt\|) | 2.12 mm | 1.91 mm | 0.83 mm |
| ↓ Floating(yb, ym) | 20.44 % | 14.47 % | 13.78 % |
| ↓ Floating(ym, yt) | 38.04 % | 20.73 % | 13.94 % |
| ↓ Intersect(yb, ym) | 64.59 % | 77.20 % | 0.00 % |
| ↓ Intersect(ym, yt) | 43.80 % | 54.82 % | 30.79 % |
| ↑ Connected(yb, ym) | 14.96 % | 8.32 % | 86.21 % |
| ↑ Connected(ym, yt) | 18.15 % | 24.44 % | 55.26 % |
| ↑ Connected & Stable | 3.28 % | 8.85 % | 36.02 % |

![7_image_0.png](7_image_0.png) We also plot the distance distribution between blocks with and without negative data (Fig. 4). In the absence of negative data, the relative placement of blocks is much less precise, resulting in significant overlap (negative distance values) or gaps (positive distance values). When leveraging negative data, even when the constraints are not fulfilled, the errors have a much smaller magnitude, providing additional qualitative evidence of the effectiveness and importance of using negative data for fine-grained constraint satisfaction. ## 5.4 **Negative Data In Engineering Tasks** Generative models are commonly used to tackle engineering problems with constraints (Oh et al., 2019; Nie et al., 2021), but are often criticized for their inability to satisfy them (Woldseth et al., 2022; Regenwetter et al., 2023). To assess how our NDGMs fare in real engineering problems, we next benchmark the same 16 methods as in Sec. 5.1 on a dozen diverse engineering tasks, which are discussed in detail in Appendix G. 
These problems span numerous engineering disciplines, including assorted industrial design tasks (compression spring, gearbox, heat exchanger, pressure vessel), structural and material design tasks (Ashby chart, cantilever beam, reinforced concrete, truss, welded beam), and several complex high-level design problems: ship hulls with hydrodynamic constraints; bike frames with loading requirements; automobile chassis with performance requirements in impact testing. A variety of constraints are applied, including engineering standards from authoritative bodies like the American Concrete Institute (ACI), the American Society of Mechanical Engineers (ASME), and the European Enhanced Vehicle-Safety Committee (EEVC).

Figure 4: Block placement by NDGM (w/ negative) vs. vanilla model (w/o negative). The two vertical grey lines indicate the acceptable tolerance such that the constraints are considered satisfied. Our GAN-DD greatly reduces the overlap or air gap between blocks compared to a GAN, demonstrating its aptitude for constraint satisfaction.

We include the scores across the same 16 models tested in Sec. 5.1, all 12 engineering problems, and a larger set of metrics in Appendix G. Shown in Table 4 are invalidity scores over a subset of models and problems (problems where a vanilla model is already > 99% successful in generating positive samples are considered 'solved' and are only shown in Appendix G). The median score over three training runs is shown. In every problem, either the discriminator overloading GAN (GAN-DO) or our multiclass discriminator model (GAN-MC) is the top performer, indicating that negative-data GANs significantly outperform the vanilla GAN. However, on the problems where the vanilla model struggles, our GAN-MC model outperforms GAN-DO. Specifically, GAN-DO is only the top performer on problems where the vanilla model is already at least 97% successful in generating positive samples.
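The invalidity score used throughout these benchmarks is simply the fraction of generated designs that fail a constraint check. A small sketch of that metric (the oracle here is a toy placeholder, not one of the paper's engineering constraints):

```python
from typing import Any, Callable, Iterable


def invalidity_rate(samples: Iterable[Any],
                    violates: Callable[[Any], bool]) -> float:
    """Fraction of generated samples that violate at least one constraint."""
    samples = list(samples)
    if not samples:
        raise ValueError("no samples to evaluate")
    return sum(violates(s) for s in samples) / len(samples)


# Toy example: designs are scalars, the (hypothetical) constraint is x >= 0.
designs = [0.3, -0.1, 1.2, -0.5, 0.0]
print(invalidity_rate(designs, lambda x: x < 0))  # 0.4 (2 of 5 violate)
```

In practice, the oracle would wrap a domain-specific check (e.g., an ACI or ASME design rule), and the reported score is the median of this rate over three training runs.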
Poor validity scores are expected on the ship hull dataset because most constraints are not represented in the negative data (thus, for the majority of constraints, NDGMs have no natural advantage over vanilla models). However, GAN-MC still manages to generate valid designs at 2.8x the rate of the GAN and over 1.6x the rate of GAN-DO. Notably, simple baselines (rejection sampling, classifier loss) also perform much better than vanilla models, as demonstrated in Appendix G. Additionally, we find that likelihood-based models outperform GANs in constraint satisfaction but lag behind in distributional similarity in many problems. All in all, NDGMs display a widespread dominance over their vanilla counterparts across a variety of engineering tasks, with our proposed GAN-MC excelling in more challenging ones.

## 5.5 **Negative Data In High-Dimensional Problems**

Table 4: Constraint satisfaction on 9 engineering benchmarks. The percentage (%) of generated samples violating constraints is shown. **Best** is bolded. Smaller (↓) is better. Scores are averaged over three instantiations. Problems are sorted by the invalidity of the baseline GAN model. Our GAN-MC outperforms GAN-DO on problems that the vanilla GAN struggles with (i.e., hard engineering problems where NDGMs are especially needed).

| Dataset | GAN | GAN-DO | GAN-MC |
|---|---|---|---|
| Compression Spring | 2.01 | **0.31** | 0.55 |
| Ashby Chart | 2.35 | **2.24** | 3.22 |
| Pressure Vessel | 2.64 | **0.05** | 0.38 |
| Welded Beam | 2.86 | **0.64** | 1.25 |
| Bike Frame | 6.02 | 7.32 | **5.89** |
| Heat Exchanger | 7.75 | 6.41 | **4.64** |
| Cantilever Beam | 8.22 | 5.27 | **4.67** |
| Car Impact | 10.43 | 6.00 | **5.33** |
| Ship Hull | 98.0 | 96.4 | **94.3** |

Having tested a variety of tabular engineering problems, we next consider whether our proposed methods can translate to higher-dimensional domains such as images.
We examine a common engineering design problem known as topology optimization (TO), which seeks to optimally distribute material in a spatial domain to achieve a certain objective (often minimizing mechanical compliance) (Sigmund & Maute, 2013). Simply put, TO is often used to create structures with high rigidity and low weight. The use of generative models for TO is very popular (Shin et al., 2023), but existing methods have been criticized for significant shortcomings (Woldseth et al., 2022) related to constraint satisfaction, such as generated topologies not being fully connected. Disconnected topologies tend to be highly sub-optimal and are impractical to fabricate.

Figure 5: Examples of Positive and Negative Topologies.

![8_image_0.png](8_image_0.png)

Table 5: Constraint satisfaction on the topology optimization problem. **Best** is bolded. Our GAN-MC outperforms the GAN, both generating fewer invalid designs and generating designs with less severe constraint violations. However, the quality of the negative data has a significant impact on GAN-MC's performance. When trained on "harder" negative data (rejected samples from a vanilla model), it performs better than when trained on "easier" procedurally-generated negative data.

| | Invalidity (%) ↓ | Invalidity (Pixels) ↓ |
|---|---|---|
| GAN | 36.3 | 1.28 |
| GAN-MC (Synthetic negative data) | 24.4 | 0.38 |
| GAN-MC (Rejected negative data) | 16.0 | 0.29 |

To address this, we train NDGMs using disconnected topologies as negative data, using the classification guidance dataset from Mazé & Ahmed (2023). This dataset features procedurally-generated synthetic negatives with artificially-added floating components. We also replace the negatives in the dataset with disconnected topologies generated by a vanilla GAN trained on the positive data.
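Identifying whether a generated topology is disconnected (and hence a candidate negative) amounts to a connected-components pass over the binary material grid; the "Invalidity (Pixels)" metric then counts solid pixels outside the largest component. A minimal stdlib-only sketch under those assumptions (the grid format and function name are ours):

```python
from collections import deque


def disconnected_pixels(grid):
    """Solid pixels outside the largest 4-connected component.

    grid: 2D list of 0/1 material occupancy. A topology counts as
    fully connected (valid) iff this returns 0.
    """
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not seen[i][j]:
                # Flood-fill one component with BFS.
                queue, size = deque([(i, j)]), 0
                seen[i][j] = True
                while queue:
                    r, c = queue.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                sizes.append(size)
    return sum(sizes) - max(sizes) if sizes else 0


# A 3-pixel L-shape plus a floating 2-pixel fragment: 2 disconnected pixels.
grid = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 0, 1]]
assert disconnected_pixels(grid) == 2
```

Rejection with such a check is also how disconnected samples from a vanilla GAN can be harvested as "harder" negatives.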
These rejected negatives are "harder" negatives (closer to the positive distribution) and are hence more informative than the synthetic negatives. In evaluating models, we measure the proportion of generated topologies with disconnected components, as well as the average number of disconnected pixels in each generated topology. A vanilla GAN generates many more invalid topologies than either GAN-MC variant. As expected, the stronger negatives generated through rejection sampling result in superior performance. Compared to the stronger GAN-MC, the vanilla GAN generates 2.3x as many invalid topologies and violates constraints by 4.4x the severity. Additional details are included in Appendix H. ## 6 **Discussion & Conclusion** Deconflating fakes and negatives. We presented two new NDGM formulations which separately estimate several density ratios to better learn a true data distribution, rather than conflating fakes and negatives as done in the current SOTA. Our models dominate in highly non-convex 2D problems and outperform baselines in the 5 most challenging engineering problems tested. Our methods also excel in a block stacking problem, generating 11x and 4x more valid stacks than vanilla models and the previous SOTA, respectively. Finally, our methods outperform baselines by 4x in an image-based topology optimization problem. GANs versus diffusion models using negative data. Despite the growing popularity of diffusion models, GANs remain state of the art in many engineering design problems. Having benchmarked DDPMs in several of the problems tested, we find this statement to hold true in the context of NDGMs. Although negative-data-augmented DDPMs surpassed our GAN models in some metrics, this typically came at the expense of others. Conversely, GANs outperformed across all metrics in several problems (Sec. 5.1, for example). 
We look forward to future research which advances the capabilities of negative data diffusion models and makes them more viable in engineering design. NDGMs are underutilized. We believe NDGMs are underutilized in engineering design. This assertion is substantiated by several observations: 1) The widespread use of vanilla models in engineering design (Regenwetter et al., 2022a). 2) The relatively low cost of collecting negative data versus positive data in many engineering contexts. 3) The overwhelming dominance of NDGMs over vanilla models in our engineering benchmarks. 4) The data-efficiency improvements we demonstrated using negative data. Though our methods achieved SOTA performance in many problems, even the simple baseline NDGMs that we tested significantly outperform their vanilla counterparts. Therefore, we generally advocate for the increased use of NDGMs in engineering design. Generating high-quality negative data. Selecting strategies to generate negative data is an important research question. In the final case study on topology optimization, rejection sampling resulted in "stronger" negative data than the procedural generation method. It also required access to an oracle (constraint evaluator), which may be unavailable or prohibitively expensive in some applications. However, there are not always cheap, viable procedural generation approaches for negative data either. Effective negative data generation remains largely problem-dependent and the relative quality of negative data generation approaches is not necessarily straightforward. We anticipate that domain-agnostic methods to generate high-quality negative data could pair well with NDGMs and expand their impact. Limitations. As we demonstrate, NDGMs are sensitive to the quality of their negative training data. Although negative data is often cheaper than positive data in engineering design problems, generating high-quality negative data may be challenging in some domains. 
In other domains, sourcing any kind of negative data may be impossible. In domains where high-quality negative data is unavailable, NDGMs will naturally be impractical. Conclusion. In this paper, we presented two new Negative-Data Generative Models (NDGMs). We demonstrated that these models outperform more than a dozen other formulations in extensive benchmarks across several test problems and a dozen real engineering problems. We displayed that simple baseline NDGMs also achieve strong performance compared to vanilla models, demonstrating the general potency of NDGMs. Notably, we showed that NDGMs can often be much more data-efficient than classic models. ## References Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, ICML'17, pp. 214–223. PMLR, JMLR.org, 2017. Siddarth Asokan and Chandra Seelamantula. Teaching a gan what not to learn. Advances in Neural Information Processing Systems, 33:3964–3975, 2020. Noah J Bagazinski and Faez Ahmed. Ship-d: Ship hull dataset for design optimization using machine learning. arXiv preprint arXiv:2305.08279, 2023. Peter W Battaglia, Jessica B Hamrick, and Joshua B Tenenbaum. Simulation as an engine of physical scene understanding. *Proceedings of the National Academy of Sciences*, 110(45):18327–18332, 2013. Mohammad Mahdi Behzadi and Horea T. Ilieş. GANTL: Toward Practical and Real-Time Topology Optimization With Conditional Generative Adversarial Networks and Transfer Learning. Journal of Mechanical Design, 144(2), 12 2021. ISSN 1050-0472. doi: 10.1115/1.4052757. URL https://doi.org/10. 1115/1.4052757. 021711. Martin Philip Bendsøe and Noboru Kikuchi. Generating optimal topologies in structural design using a homogenization method. *Computer Methods in Applied Mechanics and Engineering*, 71(2):197–224, 11 1988. ISSN 00457825. doi: 10.1016/0045-7825(88)90086-2. URL https://linkinghub.elsevier.com/ retrieve/pii/0045782588900862. 
Camille Bilodeau, Wengong Jin, Tommi Jaakkola, Regina Barzilay, and Klavs F Jensen. Generative models for molecular discovery: Recent advances and challenges. *Wiley Interdisciplinary Reviews: Computational* Molecular Science, 12(5):e1608, 2022. Ramin Bostanabad, Yichi Zhang, Xiaolin Li, Tucker Kearney, L Catherine Brinson, Daniel W Apley, Wing Kam Liu, and Wei Chen. Computational microstructure characterization and reconstruction: Review of the state-of-the-art techniques. *Progress in Materials Science*, 95:1–41, 2018. Andrew Brock, Theodore Lim, James Millar Ritchie, and Nick Weston. Context-aware content generation for virtual environments. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 50084, pp. V01BT02A045. American Society of Mechanical Engineers, 2016. Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. *arXiv preprint* arXiv:1509.00519, 2015. Ruijin Cang, Hechao Li, Hope Yao, Yang Jiao, and Yi Ren. Improving direct physical properties prediction of heterogeneous materials from imaging data via convolutional neural network and a morphology-aware generative model. *Computational Materials Science*, 150:212–221, 2018. George Casella and Roger L. Berger. *Statistical Inference*. Duxbury, Pacific Grove, CA, 2002. Michael Chang, Alyssa L Dayan, Franziska Meier, Thomas L Griffiths, Sergey Levine, and Amy Zhang. Neural constraint satisfaction: Hierarchical abstraction for combinatorial generalization in object rearrangement. arXiv preprint arXiv:2303.11373, 2023. Hongrui Chen and Xingchen Liu. Geometry enhanced generative adversarial networks for random heterogeneous material representation. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC-21, Virtual, Online, Aug 2021. ASME. Qiuyi Chen, Jun Wang, Phillip Pope, Wei Chen, and Mark Fuge. 
Inverse design of two-dimensional airfoils using conditional generative models and surrogate log-likelihoods. *Journal of Mechanical Design*, 144(2): 021712, 2022. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. Wei Chen and Faez Ahmed. Mo-padgan: Reparameterizing engineering designs for augmented multi-objective optimization. *Applied Soft Computing*, 113:107909, 2021a. Wei Chen and Faez Ahmed. Padgan: Learning to generate high-quality novel designs. Journal of Mechanical Design, 143(3):031703, 2021b. Wei Chen and Mark Fuge. Béziergan: Automatic generation of smooth curves from interpretable lowdimensional parameters. *arXiv preprint arXiv:1808.08871*, 2018. Wei Chen and Mark Fuge. Synthesizing designs with interpart dependencies using hierarchical generative adversarial networks. *Journal of Mechanical Design*, 141(11):111403, 2019. Wei Chen, Kevin Chiu, and Mark Fuge. Aerodynamic design optimization and shape exploration using generative adversarial networks. In *AIAA Scitech 2019 Forum*, pp. 2351, 2019. Yu Cheng, Yongshun Gong, Yuansheng Liu, Bosheng Song, and Quan Zou. Molecular design in drug discovery: a comprehensive review of deep generative models. *Briefings in bioinformatics*, 22(6):bbab344, 2021. Kristy Choi, Chenlin Meng, Yang Song, and Stefano Ermon. Density ratio estimation via infinitesimal classification. In *International Conference on Artificial Intelligence and Statistics*, pp. 2552–2573. PMLR, 2022. Matthew Dering, James Cunningham, Raj Desai, Michael A Yukish, Timothy W Simpson, and Conrad S Tucker. A physics-based virtual environment for enhancing the quality of deep generative designs. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 51753, pp. V02AT03A015. American Society of Mechanical Engineers, 2018. 
Shrinath Deshpande and Anurag Purwar. Computational creativity via assisted variational synthesis of mechanisms using deep generative models. *Journal of Mechanical Design*, 141(12), 2019. Shrinath Deshpande and Anurag Purwar. An Image-Based Approach to Variational Path Synthesis of Linkages. *Journal of Computing and Information Science in Engineering*, 21(2), 10 2020. ISSN 1530-9827. doi: 10.1115/1.4048422. URL https://doi.org/10.1115/1.4048422. 021005. Nikolaos Dionelis, Sotirios A Tsaftaris, and Mehrdad Yaghoobi. Omasgan: Out-of-distribution minimum anomaly score gan for anomaly detection. In *2022 Sensor Signal Processing for Defence Conference (SSPD)*, pp. 1–5. IEEE, 2022. Mohamed Elfeki, Camille Couprie, Morgane Riviere, and Mohamed Elhoseiny. Gdpp: Learning diverse generations using determinantal point processes. In *International conference on machine learning*, pp. 1774–1783. PMLR, 2019. Cristóbal Esteban, Stephanie L Hyland, and Gunnar Rätsch. Real-valued (medical) time series generation with recurrent conditional gans. *arXiv preprint arXiv:1706.02633*, 2017. Amir Hossein Gandomi and Xin-She Yang. Benchmark problems in structural optimization. In *Computational* optimization, methods and algorithms, pp. 259–281. Springer, 2011. Amir Hossein Gandomi, Xin-She Yang, and Amir Hossein Alavi. Mixed variable structural optimization using firefly algorithm. *Computers & Structures*, 89(23-24):2325–2336, 2011. Giorgio Giannone and Faez Ahmed. Diffusing the optimal topology: A generative optimization approach. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 87301, pp. V03AT03A012. American Society of Mechanical Engineers, 2023. Giorgio Giannone, Akash Srivastava, Ole Winther, and Faez Ahmed. Aligning optimization trajectories with diffusion models for constrained design generation. *Advances in Neural Information Processing Systems*, 36, 2024. 
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in neural information processing* systems, NIPS'14, pp. 2672–2680, Cambridge, MA, USA, 2014. MIT Press. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 297–304. JMLR Workshop and Conference Proceedings, 2010. Jessica B Hamrick, Kelsey R Allen, Victor Bapst, Tina Zhu, Kevin R McKee, Joshua B Tenenbaum, and Peter W Battaglia. Relational inductive bias for physical construction in humans and machines. arXiv preprint arXiv:1806.01203, 2018. Amin Heyrani Nobari, Wei (Wayne) Chen, and Faez Ahmed. RANGE-GAN: Design Synthesis Under Constraints Using Conditional Generative Adversarial Networks. *Journal of Mechanical Design*, pp. 1–16, 09 2021. ISSN 1050-0472. doi: 10.1115/1.4052442. URL https://doi.org/10.1115/1.4052442. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020. URL https://proceedings.neurips. cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf. Fergus Imrie, Anthony R Bradley, Mihaela van der Schaar, and Charlotte M Deane. Deep generative models for 3d linker design. *Journal of chemical information and modeling*, 60(4):1983–1995, 2020. Cole Jetton, Matthew Campbell, and Christopher Hoyle. Constraining the Feasible Design Space in Bayesian Optimization With User Feedback. *Journal of Mechanical Design*, 146(4):041703, 11 2023. ISSN 1050-0472. doi: 10.1115/1.4063906. URL https://doi.org/10.1115/1.4063906. Mahmut Kaya and Hasan Şakir Bilge. Deep metric learning: A survey. 
*Symmetry*, 11(9):1066, 2019. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Jin-Woong Lee, Nam Hoon Goo, Woon Bae Park, Myungho Pyo, and Kee-Sun Sohn. Virtual microstructure design for steels using generative adversarial networks. *Engineering Reports*, 3(1):e12274, 2021. Baotong Li, Congjia Huang, Xin Li, Shuai Zheng, and Jun Hong. Non-iterative structural topology optimization using deep learning. *Computer-Aided Design*, 115:172–180, 2019. ISSN 0010-4485. doi: https://doi.org/10.1016/j.cad.2019.05.038. URL https://www.sciencedirect.com/science/article/ pii/S001044851930185X. Runze Li, Yufei Zhang, and Haixin Chen. Learning the aerodynamic design of supercritical airfoils through deep reinforcement learning. *AIAA Journal*, pp. 1–14, 2021. Xiang Li, Shaowu Ning, Zhanli Liu, Ziming Yan, Chengcheng Luo, and Zhuo Zhuang. Designing phononic crystal with anticipated band gap through a deep learning based data-driven method. *Computer Methods* in Applied Mechanics and Engineering, 361:112737, 2020. Siyan Liu, Zhi Zhong, Ali Takbiri-Borujeni, Mohammad Kazemi, Qinwen Fu, and Yuhao Yang. A case study on homogeneous and heterogeneous reservoir porous media reconstruction by using generative adversarial networks. *Energy Procedia*, 158:6164–6169, 2019. Zhaocheng Liu, Lakshmi Raju, Dayu Zhu, and Wenshan Cai. A hybrid strategy for the discovery and design of photonic structures. *IEEE Journal on Emerging and Selected Topics in Circuits and Systems*, 10(1): 126–135, 2020. Manoj Malviya. A systematic study of deep generative models for rapid topology optimization. 2020. F. Mazé and F. Ahmed. Diffusion models beat gans on topology optimization. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Washington, DC, 2023. URL https://arxiv.org/abs/2208. 09591. Olof Mogren. 
C-rnn-gan: Continuous recurrent neural networks with adversarial training. arXiv preprint arXiv:1611.09904, 2016. Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. *arXiv preprint* arXiv:1610.03483, 2016. Lukas Mosser, Olivier Dubrule, and Martin J Blunt. Reconstruction of three-dimensional porous media using generative adversarial neural networks. *Physical Review E*, 96(4):043309, 2017. Ramaravind K Mothilal, Amit Sharma, and Chenhao Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pp. 607–617, 2020. Zhenguo Nie, Tong Lin, Haoliang Jiang, and Levent Burak Kara. Topologygan: Topology optimization using generative adversarial networks based on physical fields over the initial domain. Journal of Mechanical Design, 143(3):031715, 2021. Amin Heyrani Nobari, Wei Chen, and Faez Ahmed. Pcdgan: A continuous conditional diverse generative adversarial network for inverse design. *arXiv preprint arXiv:2106.03620*, 2021. Amin Heyrani Nobari, Wei Chen, and Faez Ahmed. Range-constrained generative adversarial network: Design synthesis under constraints using conditional generative adversarial networks. Journal of Mechanical Design, 144(2), 2022. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. *Advances in neural information processing systems*, 29, 2016. Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. In *International conference on machine learning*, pp. 2642–2651. PMLR, 2017. Sangeun Oh, Yongsu Jung, Ikjin Lee, and Namwoo Kang. Design automation by integrating generative adversarial networks and topology optimization. In *International Design Engineering Technical Conferences* and Computers and Information in Engineering Conference, volume 51753, pp. V02AT03A008. 
American Society of Mechanical Engineers, 2018. Sangeun Oh, Yongsu Jung, Seongsin Kim, Ikjin Lee, and Namwoo Kang. Deep generative design: Integration of topology optimization and generative models. *Journal of Mechanical Design*, 141(11), 2019. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Wamiq Para, Shariq Bhat, Paul Guerrero, Tom Kelly, Niloy Mitra, Leonidas J Guibas, and Peter Wonka. Sketchgen: Generating constrained cad sketches. *Advances in Neural Information Processing Systems*, 34: 5077–5088, 2021. Ben Poole, Sherjil Ozair, Aaron Van Den Oord, Alex Alemi, and George Tucker. On variational bounds of mutual information. In *International Conference on Machine Learning*, pp. 5171–5180. PMLR, 2019. Sharad Rawat and MH Herman Shen. Application of adversarial networks for 3d structural topology optimization. Technical report, SAE Technical Paper, 2019. Lyle Regenwetter and Faez Ahmed. Design target achievement index: A differentiable metric to enhance deep generative models in multi-objective inverse design. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 86236, pp. V03BT03A046. American Society of Mechanical Engineers, 2022. Lyle Regenwetter, Brent Curry, and Faez Ahmed. BIKED: A dataset and machine learning benchmarks for data-driven bicycle design. In *International Design Engineering Technical Conferences and Computers and* Information in Engineering Conference, IDETC-21, Virtual, Online, Aug 2021. ASME. Lyle Regenwetter, Amin Heyrani Nobari, and Faez Ahmed. Deep generative models in engineering design: A review. *Journal of Mechanical Design*, 144(7):071704, 2022a. Lyle Regenwetter, Colin Weaver, and Faez Ahmed. Framed: Data-driven structural performance analysis of community-designed bicycle frames, 2022b. Lyle Regenwetter, Akash Srivastava, Dan Gutfreund, and Faez Ahmed. 
Beyond statistical similarity: Rethinking metrics for deep generative models in engineering design. *arXiv preprint arXiv:2302.02913*, 2023. Benjamin Rhodes, Kai Xu, and Michael U Gutmann. Telescoping density-ratio estimation. *Advances in* neural information processing systems, 33:4905–4916, 2020. Ronan Riochet, Mario Ynocente Castro, Mathieu Bernard, Adam Lerer, Rob Fergus, Véronique Izard, and Emmanuel Dupoux. Intphys: A framework and benchmark for visual intuitive physics reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. Stuart J Russell. *Artificial intelligence a modern approach*. Pearson Education, Inc., 2010. Mikael Sabuhi, Ming Zhou, Cor-Paul Bezemer, and Petr Musilek. Applications of generative adversarial networks in anomaly detection: a systematic literature review. *Ieee Access*, 9:161003–161029, 2021. Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. *Advances in neural information processing systems*, 31, 2018. Ari Seff, Wenda Zhou, Nick Richardson, and Ryan P Adams. Vitruvion: A generative model of parametric cad sketches. *arXiv preprint arXiv:2109.14124*, 2021. Shashank Sharma and Anurag Purwar. Path synthesis of defect-free spatial 5-ss mechanisms using machine learning. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 83990, pp. V010T10A034. American Society of Mechanical Engineers, 2020. Conner Sharpe and Carolyn Conner Seepersad. Topology design with conditional generative adversarial networks. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 59186, pp. V02AT03A062. American Society of Mechanical Engineers, 2019. Seungyeon Shin, Dongju Shin, and Namwoo Kang. Topology optimization via machine learning and deep learning: A review. 
*Journal of Computational Design and Engineering*, 10(4):1736–1766, 2023. Dule Shu, James Cunningham, Gary Stump, Simon W Miller, Michael A Yukish, Timothy W Simpson, and Conrad S Tucker. 3d design using generative adversarial networks and physics-based validation. *Journal of* Mechanical Design, 142(7):071701, 2020. Ole Sigmund and Kurt Maute. Topology optimization approaches: A comparative review. Structural and Multidisciplinary Optimization, 48(6):1031–1055, 2013. ISSN 1615-147X. doi: 10.1007/s00158-013-0978-6. Abhishek Sinha, Kumar Ayush, Jiaming Song, Burak Uzkent, Hongxia Jin, and Stefano Ermon. Negative data augmentation. *arXiv preprint arXiv:2102.05113*, 2021. Kevin Smith, Lingjie Mei, Shunyu Yao, Jiajun Wu, Elizabeth Spelke, Josh Tenenbaum, and Tomer Ullman. Modeling expectation violation in intuitive physics with coarse probabilistic object representations. Advances in neural information processing systems, 32, 2019a. Kevin A. Smith, Lingjie Mei, Shunyu Yao, Jiajun Wu, Elizabeth S. Spelke, Joshua B. Tenenbaum, and Tomer David Ullman. Modeling expectation violation in intuitive physics with coarse probabilistic object representations. In *Neural Information Processing Systems*, 2019b. Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In *Advances in neural information processing systems*, pp. 3738–3746, 2016. Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, and Charles Sutton. Veegan: Reducing mode collapse in gans using implicit variational learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 30, pp. 3308–3318. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/ 2017/file/44a2e0804995faf8d2e3b084a1e2db1d-Paper.pdf. Akash Srivastava, Seungwook Han, Kai Xu, Benjamin Rhodes, and Michael U. Gutmann. 
Estimating the density ratio between distributions with high discrepancy using multinomial logistic regression. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id= jM8nzUzBWr. Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. *Density ratio estimation in machine learning*. Cambridge University Press, 2012a. Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. *Density ratio estimation in machine learning*. Cambridge University Press, 2012b. Ren Kai Tan, Nevin L Zhang, and Wenjing Ye. A deep learning–based method for the design of microstructural materials. *Structural and Multidisciplinary Optimization*, 61(4):1417–1438, 2020. Yingheng Tang, Keisuke Kojima, Toshiaki Koike-Akino, Ye Wang, Pengxiang Wu, Mohammad Tahersima, Devesh Jha, Kieran Parsons, and Minghao Qi. Generative deep learning model for a multi-level nano-optic broadband power splitter. In *2020 Optical Fiber Communications Conference and Exhibition (OFC)*, pp. 1–3. IEEE, 2020. Sofia Valdez, Carolyn Seepersad, and Sandilya Kambampati. A framework for interactive structural design exploration. In *International Design Engineering Technical Conferences and Computers and Information* in Engineering Conference, IDETC-21, Virtual, Online, Aug 2021. ASME. Jun Wang, Wei Wayne Chen, Daicong Da, Mark Fuge, and Rahul Rai. Ih-gan: A conditional generative model for implicit surface-based inverse design of cellular structures. *Computer Methods in Applied Mechanics* and Engineering, 396:115060, 2022. Liwei Wang, Yu-Chin Chan, Faez Ahmed, Zhao Liu, Ping Zhu, and Wei Chen. Deep generative modeling for mechanistic-based learning and design of metamaterial systems. *Computer Methods in Applied Mechanics* and Engineering, 372:113377, 2020. Eamon Whalen, Azariah Beyene, and Caitlin Mueller. Simjeb: simulated jet engine bracket dataset. In Computer Graphics Forum, volume 40, pp. 9–17. Wiley Online Library, 2021. 
Rebekka V Woldseth, Niels Aage, J Andreas Bærentzen, and Ole Sigmund. On the use of artificial neural networks in topology optimisation. *Structural and Multidisciplinary Optimization*, 65(10):294, 2022. Patricia Wollstadt, Mariusz Bujny, Satchit Ramnath, Jami J Shah, Duane Detwiler, and Stefan Menzel. Carhoods10k: An industry-grade data set for representation learning and design optimization in engineering applications. *IEEE Transactions on Evolutionary Computation*, 26(6):1221–1235, 2022. Tianju Xue, Thomas J Wallin, Yigit Menguc, Sigrid Adriaenssens, and Maurizio Chiaramonte. Machine learning generative models for automatic design of multi-material 3d printed composite solids. *Extreme* Mechanics Letters, 41:100992, 2020. Xin-She Yang and Amir Hossein Gandomi. Bat algorithm: a novel approach for global engineering optimization. Engineering computations, 29(5):464–483, 2012. Zijiang Yang, Xiaolin Li, L Catherine Brinson, Alok N Choudhary, Wei Chen, and Ankit Agrawal. Microstructural materials design via deep adversarial learning methodology. *Journal of Mechanical Design*, 140(11), 2018. Emre Yilmaz and Brian German. Conditional generative adversarial network framework for airfoil inverse design. In *AIAA aviation 2020 forum*, pp. 3185, 2020. Yonggyun Yu, Taeil Hur, Jaeho Jung, and In Gwun Jang. Deep learning for determining a near-optimal topological design without any iteration. *Structural and Multidisciplinary Optimization*, 59(3):787–799, 2019. Muhammad Zaigham Zaheer, Jin-ha Lee, Marcella Astrid, and Seung-Ik Lee. Old is gold: Redefining the adversarially learned one-class classifier training paradigm. In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 14183–14193, 2020. Hui Zhang, Lei Yang, Changjian Li, Bojian Wu, and Wenping Wang. Scaffoldgan: Synthesis of scaffold materials based on generative adversarial networks. *Computer-Aided Design*, 138:103041, 2021. ISSN 0010-4485. 
doi: https://doi.org/10.1016/j.cad.2021.103041. URL https://www.sciencedirect.com/science/article/pii/S001044852100052X.

Wentai Zhang, Zhangsihao Yang, Haoliang Jiang, Suyash Nigam, Soji Yamakawa, Tomotake Furuhata, Kenji Shimada, and Levent Burak Kara. 3d shape synthesis for conceptual design and optimization using variational autoencoders. In *International Design Engineering Technical Conferences and Computers and Information in Engineering Conference*, volume 59186, pp. V02AT03A017. American Society of Mechanical Engineers, 2019.

## A **Related Work**

Constraints in Engineering Problems. Generally, we can categorize the constraint information of engineering problems into four types (Regenwetter et al., 2023). (i) *No Constraint Information*: No information about constraints is given or can be collected, and learning constraints is typically infeasible or extremely challenging in a finite data regime. (ii) *'Negative' Dataset of Invalid Designs*: A collection of constraint-violating negative designs is available. Our method leverages such negative data to learn a constraint-satisfying generative model. The value of negative data is largely governed by their relative difficulty: hard negatives fall near the positive data manifold, while easy negatives lie deep within constraint-violating regions. (iii) *Constraint Check*: A black-box 'oracle' that determines whether a design satisfies constraints is available. This check may be computationally expensive, limiting its use. (iv) *Closed-form Constraints*: An inexpensive closed-form constraint is available. In such scenarios, direct optimization is often favored over generative models in design problems. In other cases, constraint-enforcing rules can be built into the model structure, an approach used in some generative models for molecular design (Cheng et al., 2021; Imrie et al., 2020). We note that each level of constraint information is strictly more informative than the previous.
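As an illustration of case (iii), a black-box constraint check can be wrapped in simple rejection sampling, which is also the spirit of the rejection-filtered ("-Rej") baselines reported in the experiments below. This is a minimal sketch: the unit-disk oracle and Gaussian proposal are illustrative stand-ins, not part of our actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def constraint_check(x):
    """Hypothetical black-box oracle (case iii): here, a unit-disk constraint."""
    return np.linalg.norm(x) <= 1.0

def rejection_sample(generator, n, max_tries=100_000):
    """Filter generator outputs through the oracle until n valid samples remain."""
    out = []
    for _ in range(max_tries):
        x = generator()
        if constraint_check(x):
            out.append(x)
            if len(out) == n:
                break
    return np.array(out)

# Every sample returned satisfies the constraint by construction,
# but each oracle call may be expensive, which limits this approach.
samples = rejection_sample(lambda: rng.normal(0.0, 0.7, 2), 100)
assert samples.shape == (100, 2)
```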
In this paper, we focus on the scenario in which a limited dataset of negative samples is available (ii) or can be generated using an oracle (iii), but closed-form constraints are not available. This scenario is common in applications such as structural design, mobility design (e.g., cars, bikes, ships, airplanes), and material synthesis.

Density Ratio Estimation. Density Ratio Estimation (DRE) (Sugiyama et al., 2012b) is a critical technique in machine learning, particularly when evaluating distributions is not feasible or is computationally expensive (Mohamed & Lakshminarayanan, 2016). DRE techniques are heavily employed for generative modeling and score matching estimation (Goodfellow et al., 2014; Gutmann & Hyvärinen, 2010; Srivastava et al., 2023; Choi et al., 2022). In the context of GANs (Goodfellow et al., 2014; Arjovsky et al., 2017), the DRE methodology forms the underlying basis for their operation. A well-known technique for DRE is probabilistic classification (Sugiyama et al., 2012b), where a binary classifier is used to learn the ratio. However, accurate DRE from finite samples can be challenging, especially in high dimensions. To overcome this challenge, prior works have employed a divide-and-conquer approach. An example of this is the Telescoping Density Ratio Estimation (TRE) method (Gutmann & Hyvärinen, 2010; Rhodes et al., 2020), which divides the problem into a sequence of easier DRE sub-problems. Despite its success, this approach has limitations, especially as the number of intermediate bridge distributions is increased. The noise contrastive estimator (NCE; Gutmann & Hyvärinen, 2010) and hybrid generative models (Srivastava et al., 2023; 2017; Rhodes et al., 2020) are also based on the density ratio as their underlying methodology, providing a flexible paradigm for large-scale generative modeling.

Generative Models for Engineering Design.
Generative models have recently seen extensive use in design generation tasks (Regenwetter et al., 2022a). Generative Adversarial Networks (GANs), for example, have been applied across many of these applications. In topology optimization, GANs (Li et al., 2019; Rawat & Shen, 2019; Oh et al., 2018; 2019; Sharpe & Seepersad, 2019; Nie et al., 2021; Yu et al., 2019; Valdez et al., 2021) are often used to create optimal topologies, bypassing the need for iterative solvers like SIMP. In computational materials design, GANs (Tan et al., 2020; Yang et al., 2018; Zhang et al., 2021; Mosser et al., 2017; Lee et al., 2021; Liu et al., 2019), VAEs (Cang et al., 2018; Li et al., 2020; Liu et al., 2020; Wang et al., 2020; Xue et al., 2020; Tang et al., 2020; Chen & Liu, 2021), and other models are used to generate synthetic data to better learn process-structure-property relations (Bostanabad et al., 2018). A variety of generative models have been applied to 2D shape synthesis problems (Yilmaz & German, 2020; Chen & Fuge, 2018; Chen et al., 2019; Chen & Fuge, 2019; Nobari et al., 2022; Li et al., 2021; Dering et al., 2018), such as airfoil design, and to 3D shape synthesis problems (Shu et al., 2020; Nobari et al., 2022; Brock et al., 2016; Zhang et al., 2019), such as mechanical component synthesis in engineering design. Finally, generative models have been proposed as a method to tackle various miscellaneous product and machine design tasks (Deshpande & Purwar, 2019; Sharma & Purwar, 2020; Regenwetter et al., 2021; Deshpande & Purwar, 2020).

Constraint Satisfaction in Machine Learning. From a general point of view, Constraint Satisfaction Problems (CSPs) have long been studied in computer science and optimization, e.g., for optimal allocation, graph search, games, and path planning (Russell, 2010). However, such constraints are mostly related to algorithmic complexity and memory allocation.
In generative design (Regenwetter et al., 2022a; Sigmund & Maute, 2013), constraint satisfaction has a different goal: we want to obtain designs with high performance while also achieving diversity (distribution coverage) by leveraging a probabilistic model. Recently, Neural Constraint Satisfaction (Chang et al., 2023) has been proposed to deal with objects in a scene to solve intuitive physics problems (Smith et al., 2019a; Hamrick et al., 2018). In the CAD domain, structured models to handle constraints have been proposed (Seff et al., 2021; Para et al., 2021). Conditional generative models have been proposed for structural topology optimization (Nie et al., 2021), leveraging physical fields (Nie et al., 2021; Mazé & Ahmed, 2023), dense approximations (Giannone & Ahmed, 2023), and trajectory alignment (Giannone et al., 2024) for high-quality candidate generation. These approaches rely on explicit constraint satisfaction. Instead, we focus on implicit constraint satisfaction, leveraging a dataset of invalid configurations to enhance the model's capacity to generate valid designs.

Anomaly Detection. Anomaly detection attempts to solve a one-class classification problem (anomalous vs. not) (Sabuhi et al., 2021), much like constraint handling in generative models (positive vs. negative). Several approaches have generated synthetic negative data as a stand-in for anomalies to train models (Zaheer et al., 2020; Dionelis et al., 2022). However, unlike anomaly detection, active constraint handling focuses on training a generative model to avoid negative samples, rather than simply identifying them. Furthermore, the point of training on negative data is to avoid the challenging one-class classification problem. Negative data has also been studied in the context of retrieval, using triplet losses (Kaya & Bilge, 2019) and contrastive estimators (Gutmann & Hyvärinen, 2010) for representation learning (Oord et al., 2018; Chen et al., 2020).
## B **Negative Data Derivations & Density Ratio**

Let pn denote the *negative distribution*, i.e., the distribution of constraint-violating datapoints. Instead of training using only the positive distribution pp, we now seek to train a generative model pθ using both pp and pn. Assuming mutual absolute continuity of pp, pθ, and pn, and starting from first principles, we can re-write Eq. 2 as:

$$\arg\min_{\theta}\int p_{\theta}(\mathbf{x})\left[\log p_{\theta}(\mathbf{x})-\log p_{p}(\mathbf{x})\right]d\mathbf{x}$$
$$=\arg\min_{\theta}\int p_{\theta}(\mathbf{x})\left[\log p_{\theta}(\mathbf{x})-\log p_{p}(\mathbf{x})+\left(\log\frac{p_{n}(\mathbf{x})}{p_{n}(\mathbf{x})}\right)\right]d\mathbf{x}$$
$$=\arg\min_{\theta}\int p_{\theta}(\mathbf{x})\left[\log\frac{p_{\theta}(\mathbf{x})}{p_{n}(\mathbf{x})}-\log\frac{p_{p}(\mathbf{x})}{p_{n}(\mathbf{x})}\right]d\mathbf{x}.\tag{13}$$

While the minimizer of Eq. 13 is the same as that of Eq. 2, i.e., $p_{\theta}^{*} = p_{p}$, the model is now directly incentivized to allocate the same amount of probability mass to the samples from pn as does the data distribution pp. This ensures that when trained using finite N, the model does not allocate high probability mass to invalid samples. In other words, training under Eq. 13 encourages the model to minimize its discrepancy with respect to pp such that its discrepancy with respect to pn matches exactly that of pp and pn. Another important benefit of the reformulation in Eq. 13 is that in cases where sampling from pn is inexpensive (such as in the engineering design domain), the sample efficiency of the model with respect to samples from pp improves, as shown in the next section.

## B.1 **Double Discriminator (DD) Formulation**

We now re-write Eq.
13 using the equivalent formulation:

$$\operatorname*{arg\,min}_{\theta}\int p_{\theta}(\mathbf{x})\left({\frac{1}{2}}\left[\log{\frac{p_{\theta}(\mathbf{x})}{p_{n}(\mathbf{x})}}-\log{\frac{p_{p}(\mathbf{x})}{p_{n}(\mathbf{x})}}\right]+{\frac{1}{2}}\left[\log{\frac{p_{\theta}(\mathbf{x})}{p_{p}(\mathbf{x})}}\right]\right)d\mathbf{x}.\tag{14}$$

Our goal is to minimize the Kullback-Leibler (KL) divergence between our model pθ(x) and the actual distribution pp(x), while simultaneously distancing our model pθ(x) from any invalid designs represented by pn(x). Given that we lack access to the explicit functional forms of these distributions, we employ density ratio estimation (Sugiyama et al., 2012b; Srivastava et al., 2023) as a means of model learning. In particular, we use $f_{\phi}$ to estimate $r(p_{p},p_{\theta})=\frac{p_{p}(\mathbf{x})}{p_{\theta}(\mathbf{x})}$, $f_{\psi}$ to estimate $r(p_{\theta},p_{n})=\frac{p_{\theta}(\mathbf{x})}{p_{n}(\mathbf{x})}$, and $f_{\xi}$ to estimate $r(p_{p},p_{n})=\frac{p_{p}(\mathbf{x})}{p_{n}(\mathbf{x})}$. Leveraging the connection between density ratio and probabilistic classification (Sugiyama et al., 2012b), we can write (assuming balanced classes):

$$r(\mathbf{x})={\frac{p_{p}(\mathbf{x})}{p_{n}(\mathbf{x})}}={\frac{p_{p}(\mathbf{y}|\mathbf{x})}{p_{n}(\mathbf{y}|\mathbf{x})}}={\frac{p_{p}(\mathbf{y}|\mathbf{x})}{1-p_{p}(\mathbf{y}|\mathbf{x})}},\tag{15}$$

where given a sample x, pp(y|x) represents the probability of it being a valid design, whereas pn(y|x) signifies the probability of it constituting an invalid design within the framework of a binary classifier. Notice that pn(y|x) = 1 − pp(y|x). We can apply the same reasoning to the other two ratios. In situations where pp(x) and pn(x) cannot be quickly evaluated but we can easily collect samples from them, we can resort to directly estimating the ratios rϕ, rψ, rξ using discriminative models to estimate the class probability.
This approach is facilitated by employing the following identity:

$$p_{p}(\mathbf{y}|\mathbf{x})=\sigma(\log r(\mathbf{x})).\tag{16}$$

We see that there is a direct correspondence between the density ratio of the two distributions and the valid-class probability. The following is a natural parameterization for the density ratio estimators:

$$\begin{array}{l}{{f_{\phi}(\mathbf{x};p_{p},p_{\theta})=\sigma(\log r_{\phi}(\mathbf{x}))}}\\ {{f_{\psi}(\mathbf{x};p_{\theta},p_{n})=\sigma(\log r_{\psi}(\mathbf{x}))}}\\ {{f_{\xi}(\mathbf{x};p_{p},p_{n})=\sigma(\log r_{\xi}(\mathbf{x})),}}\end{array}\tag{17}$$

to estimate the class probability, or equivalently fϕ(x) = log rϕ(x) to estimate the logits. Learning the density ratio estimators can be performed with the binary cross-entropy:

$$\mathcal{F}_{\phi}(\mathbf{x};\theta)=\mathbb{E}_{p_{p}(\mathbf{x})}\log\left[f_{\phi}(\mathbf{x})\right]+\mathbb{E}_{p_{\theta}(\mathbf{x})}\log\left[1-f_{\phi}(\mathbf{x})\right]=\mathbb{E}_{p_{p}(\mathbf{x})}\log\left[\sigma(\log r_{\phi}(\mathbf{x}))\right]+\mathbb{E}_{p_{\theta}(\mathbf{x})}\log\left[1-\sigma(\log r_{\phi}(\mathbf{x}))\right].\tag{18}$$

The density ratio can be estimated by sampling from pp(x) and pθ(x), and subsequently learning a discriminator fϕ using these samples. In practice, pθ is learned leveraging adversarial training (Goodfellow et al., 2014) and an auxiliary set of classifiers to push away samples from pn. Additionally, we use parameter sharing between fψ and fξ to help the discriminator learn the difference between valid and invalid samples early during training.
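The probabilistic-classification view of DRE can be sketched end-to-end on a toy example: for N(1,1) vs. N(−1,1) the true log ratio is log r(x) = 2x, so a logistic classifier trained with a binary cross-entropy objective in the style of Eq. 18 should recover that ratio in its logit. The distributions, learning rate, and iteration count below are illustrative choices, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from a "valid" and an "invalid" distribution.
# For N(1,1) vs. N(-1,1), the true log density ratio is log r(x) = 2x.
xp = rng.normal(1.0, 1.0, 20000)   # positives, label 1
xn = rng.normal(-1.0, 1.0, 20000)  # negatives, label 0
x = np.concatenate([xp, xn])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xn)])

# Logistic regression trained by gradient descent on the BCE loss:
# with balanced classes, the classifier logit converges to log r(x).
w, b = 0.0, 0.0
for _ in range(2000):
    logits = w * x + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - y                 # d(BCE)/d(logit)
    w -= 0.1 * np.mean(grad * x)
    b -= 0.1 * np.mean(grad)

# The learned logit w*x + b should approximate the true log ratio 2x.
assert abs(w - 2.0) < 0.3 and abs(b) < 0.3
```

Evaluating `np.exp(w * x + b)` then yields the density ratio estimate r(x) itself.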
$$\max_{\phi}{\cal F}_{\phi}(\mathbf{x};\mathbf{\theta})$$
$$\max_{\psi,\xi}{\cal F}_{\psi}(\mathbf{x};\mathbf{\theta})+{\cal F}_{\xi}(\mathbf{x};\mathbf{\theta})\tag{19}$$
$$\min_{\mathbf{\theta}}{\cal F}_{\phi}(\mathbf{x};\mathbf{\theta})-{\cal F}_{\psi,\xi}(\mathbf{x};\mathbf{\theta}).$$

By employing this formulation during training, we strive to push rϕ towards 1, thereby maximizing entropy, while encouraging rψ and rξ to be large and equal, consequently minimizing entropy. It is crucial to note that in the absence of parameter sharing, it is not strictly necessary to jointly train Fξ. If we manage to learn a robust model using ϕ and θ, then pθ approximates pp, and we can confidently rely on Fψ for constraint satisfaction. Nonetheless, based on empirical evidence, we observed improved performance and training stability, particularly during the early training stage, when we shared weights and supplied the auxiliary discriminator with valid and generated samples. This procedure assists the discriminator ϕ in differentiating invalid from generated samples while simultaneously situating the generated samples within the validity region. By implementing this method, we learn a generative model that produces samples closely aligned with the training distribution of valid samples and distant from the invalid distribution. This yields a generative model that respects constraints: this is the GAN-DD-b model presented in the main paper. See Algorithm 3 for training details. The alternate double discriminator formulation (GAN-DD-a) is a variant of this approach:

$$\operatorname*{arg\,min}_{\theta}\int p_{\theta}(\mathbf{x})\left(-\lambda\left[\log{\frac{p_{\theta}(\mathbf{x})}{p_{n}(\mathbf{x})}}\right]+\left[\log{\frac{p_{\theta}(\mathbf{x})}{p_{p}(\mathbf{x})}}\right]\right)d\mathbf{x}.\tag{20}$$

Given suitable values of λ, we can effectively learn a generative model and aptly differentiate the generated samples from invalid data.
Upon testing this formulation with 2D densities, we also observed promising results in terms of both coverage and constraint satisfaction, reducing the number of invalid samples by approximately an order of magnitude. See Algorithm 2 for training details.

## B.2 **Multi-Class Discriminator (MC) Formulation**

Noting that multi-class classifiers are strong density ratio estimators (Srivastava et al., 2023), we also propose a variant using a multiclass discriminator model. By defining a single multiclass classifier fϕ and assigning pseudo-labels 2 to invalid, 1 to valid, and 0 to generated samples, we can write:

$${\mathcal{F}}_{\phi}^{\mathrm{mc}}(\mathbf{x};\theta)=\mathbb{E}_{p_{p}(\mathbf{x})}\log\left[f_{\phi}(\mathbf{x})_{1}\right]+\mathbb{E}_{p_{n}(\mathbf{x})}\log\left[f_{\phi}(\mathbf{x})_{2}\right]+\mathbb{E}_{p_{\theta}(\mathbf{x})}\log\left[f_{\phi}(\mathbf{x})_{0}\right],\tag{21}$$

where we assume one-hot encoding for the classes, the cross-entropy loss as a scoring mechanism, and fϕ(x)c denotes the predicted probability of class c. We can then maximize this loss with respect to ϕ, learning good discriminators between valid, invalid, and generated samples, and minimize it with respect to θ. Given that we are using a single classifier, the discriminator implicitly estimates all the relevant ratios (Srivastava et al., 2023). Pushing the generated samples close to the valid samples will also push them far from the invalid samples. We experiment with this formulation on the 2D densities with good results. However, for more complex distributions and a wider variety of constraints, it is reasonable to allocate different levels of capacity to model learning and constraint satisfaction by using different discriminators. Additionally, when we want to generalize our methods to generative models other than GANs, like DDPM, it makes sense to instantiate a separate classifier for guidance focused solely on fulfilling the constraints. See Algorithm 1 for training details.
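The multi-class cross-entropy terms behind this formulation can be sketched as follows, using the pseudo-labels above (0 = generated, 1 = valid, 2 = invalid). The toy logits are illustrative; in practice these terms are computed on discriminator outputs during adversarial training.

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def gan_mc_losses(logits_valid, logits_invalid, logits_gen):
    """Cross-entropy terms of the multi-class formulation (Eq. 21 style).

    Pseudo-labels: class 0 = generated, class 1 = valid, class 2 = invalid.
    The discriminator minimizes d_loss; the generator minimizes g_loss,
    i.e. it pushes its samples toward the 'valid' class.
    """
    lsm_v = log_softmax(logits_valid)
    lsm_i = log_softmax(logits_invalid)
    lsm_g = log_softmax(logits_gen)
    d_loss = -(lsm_v[:, 1].mean() + lsm_i[:, 2].mean() + lsm_g[:, 0].mean())
    g_loss = -lsm_g[:, 1].mean()
    return d_loss, g_loss

# A discriminator that classifies all three sample types correctly
# attains a lower loss than an uninformative one.
good = gan_mc_losses(np.array([[0., 5., 0.]]), np.array([[0., 0., 5.]]),
                     np.array([[5., 0., 0.]]))
flat = gan_mc_losses(np.zeros((1, 3)), np.zeros((1, 3)), np.zeros((1, 3)))
assert good[0] < flat[0]
```

Note that the confidently "correct" discriminator yields a large generator loss, which is exactly the signal that pushes generated samples toward the valid class.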
## B.3 **Comparison To AC-GAN**

Since our double-discriminator variant has two discriminative models, including one that learns to distinguish positive and negative samples, it bears a superficial similarity to AC-GANs (Odena et al., 2017) applied to binary class-conditional problems. In this section, we demonstrate that GAN-DD is distinct from AC-GANs. Consider an AC-GAN model applied as an NDGM. AC-GAN trains two discriminative models which share some weights, fϕ and fψ, which learn two ratios:

$$r_{\phi}(\mathbf{x})=\frac{p_{real}(\mathbf{x})}{p_{\theta}(\mathbf{x})}\tag{22}$$
$$r_{\psi}(\mathbf{x})=\frac{p_{p}(\mathbf{x})}{p_{n}(\mathbf{x})}\tag{23}$$

An important observation is that AC-GAN's "real" class consists of the full distribution across all classes, hence it is comprised of both positive and negative data: p*real*(x) = pp(x) + pn(x). The optimal discriminators now learn:

$$f_{\phi}(\mathbf{x})={\frac{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})}{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})+p_{\theta}(\mathbf{x})}}\tag{24}$$
$$f_{\psi}(\mathbf{x})=\frac{p_{p}(\mathbf{x})}{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})}\tag{25}$$

Assuming perfect discriminators, the generator loss is then expressed as:

$$\begin{aligned}{\cal L}(\theta,\phi,\psi)&=\mathbb{E}_{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})}[\log f_{\psi}(\mathbf{x})]+\mathbb{E}_{p_{\theta}(\mathbf{x})}[1-\log f_{\psi}(\mathbf{x}_{\theta})]-\mathbb{E}_{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})}[\log f_{\phi}(\mathbf{x})]-\mathbb{E}_{p_{\theta}(\mathbf{x})}[\log f_{\phi}(\mathbf{x}_{\theta})]\\&=\mathbb{E}_{p_{\theta}(\mathbf{x})}\left[\log\frac{p_{p}(\mathbf{x}_{\theta})}{p_{p}(\mathbf{x}_{\theta})+p_{n}(\mathbf{x}_{\theta})}-1-\log\frac{p_{p}(\mathbf{x}_{\theta})+p_{n}(\mathbf{x}_{\theta})}{p_{p}(\mathbf{x}_{\theta})+p_{n}(\mathbf{x}_{\theta})+p_{\theta}(\mathbf{x}_{\theta})}\right]\end{aligned}\tag{26}$$

In contrast, the GAN-MC formulation's discriminator learns three density ratios. One of these is the reciprocal of rϕ (Eq. 22).
$$r_{\zeta,\theta}(\mathbf{x})=\frac{p_{\theta}(\mathbf{x})}{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})}\tag{27}$$

However, the ratio that is actually used for generator training is a different one:

$$r_{\zeta,p}(\mathbf{x})={\frac{p_{p}(\mathbf{x})}{p_{n}(\mathbf{x})+p_{\theta}(\mathbf{x})}}\tag{28}$$

This leads an optimal discriminator to learn:

$$f_{\zeta,p}(\mathbf{x})={\frac{p_{p}(\mathbf{x})}{p_{p}(\mathbf{x})+p_{n}(\mathbf{x})+p_{\theta}(\mathbf{x})}}\tag{29}$$

Assuming the discriminator is perfectly optimal, the generator optimizes:

$${\cal L}(\theta,\zeta)=\mathbb{E}_{p_{\theta}(\mathbf{x})}[\log f_{\zeta,p}(\mathbf{x}_{\theta})]=\mathbb{E}_{p_{\theta}(\mathbf{x})}\left[\log\frac{p_{p}(\mathbf{x}_{\theta})}{p_{p}(\mathbf{x}_{\theta})+p_{n}(\mathbf{x}_{\theta})+p_{\theta}(\mathbf{x}_{\theta})}\right].\tag{30}$$

This is indeed equivalent to Eq. 26, indicating that AC-GAN can also function as an NDGM. However, it is critical to observe that this equivalence is contingent on **perfect discriminative models**. It does not indicate that the methods are equivalent. In fact, it can be easily seen that many of the other NDGMs tested in the paper (simple baselines, existing NDGMs, etc.) have this exact final loss, contingent on perfect models. The difference in performance between different adversarial NDGM formulations arises from the difference in discriminator training and discriminative performance.

## C **Pseudocode**

Pseudocode for the multiclass discriminator (GAN-MC) and double discriminator variants (GAN-DD-a, GAN-DD-b) is shown below. Diversity loss is included and is discussed in Section E. For non-diversity-augmented variants, γ is set to 0.
Algorithm 1 GAN-MC Training Procedure

```
while step ≤ n_steps do
    Sample P_batch ∼ P_dataset and N_batch ∼ N_dataset
    Sample ϵ ∼ N(0, 1)
    G_batch = Generator(ϵ)
    DP_preds = Discriminator(P_batch)
    DN_preds = Discriminator(N_batch)
    DG_preds = Discriminator(G_batch)
    loss_fn = CategoricalCrossEntropy()
    D_loss = loss_fn(DG_preds, 0) + loss_fn(DP_preds, 1) + loss_fn(DN_preds, 2)
    Diversity_loss = DPP(G_batch)
    G_loss = loss_fn(DG_preds, 1) + γ · Diversity_loss
    Optimize(Discriminator, D_loss)
    Optimize(Generator, G_loss)
    step = step + 1
end while
```

Algorithm 2 GAN-DD-a Training Procedure

```
while step ≤ n_steps do
    Sample P_batch ∼ P_dataset and N_batch ∼ N_dataset
    Sample ϵ ∼ N(0, 1)
    G_batch = Generator(ϵ)
    DP_preds = Discriminator(P_batch)
    DG_preds = Discriminator(G_batch)
    AN_preds = Aux_Discriminator(N_batch)
    AG_preds = Aux_Discriminator(G_batch)
    loss_fn = BinaryCrossEntropy()
    D_loss = loss_fn(DG_preds, 0) + loss_fn(DP_preds, 1)
    A_loss = loss_fn(AG_preds, 0) + loss_fn(AN_preds, 1)
    Diversity_loss = DPP(G_batch)
    G_loss = loss_fn(DG_preds, 1) − λ · loss_fn(AG_preds, 1) + γ · Diversity_loss
    Optimize(Discriminator, D_loss)
    Optimize(Aux_Discriminator, A_loss)
    Optimize(Generator, G_loss)
    step = step + 1
end while
```

Algorithm 3 GAN-DD-b Training Procedure

```
while step ≤ n_steps do
    Sample P_batch ∼ P_dataset and N_batch ∼ N_dataset
    Sample ϵ ∼ N(0, 1)
    G_batch = Generator(ϵ)
    DP_preds = Discriminator(P_batch)
    DG_preds = Discriminator(G_batch)
    AP_preds = Aux_Discriminator(P_batch)
    AN_preds = Aux_Discriminator(N_batch)
    AG_preds = Aux_Discriminator(G_batch)
    loss_fn = BinaryCrossEntropy()
    D_loss = loss_fn(DG_preds, 0) + loss_fn(DP_preds, 1)
    A_loss = 0.5 · loss_fn(AG_preds, 0) + 0.5 · loss_fn(AP_preds, 0) + loss_fn(AN_preds, 1)
    Diversity_loss = DPP(G_batch)
    G_loss = loss_fn(DG_preds, 1) + β · loss_fn(AG_preds, 1) + γ · Diversity_loss
    Optimize(Discriminator, D_loss)
    Optimize(Aux_Discriminator, A_loss)
    Optimize(Generator, G_loss)
    step = step + 1
end while
```

## D **2D Densities**

## D.1 **2D Datasets And Test Problems**

We discuss more details on the 2D datasets presented in Sec. 5.1.

## D.1.1 **2D Problem 1**

This problem is labeled Problem 1 in the main paper. Data points are randomly sampled from one of six modes, each of which is a regular 2D Gaussian. Distribution centers are spaced at an equal radius. Points in close proximity to the center of any of the distributions are labeled as negatives, and others are labeled as positives. Sampling is performed until 10k positive samples and 10k negative samples are acquired, and the excess samples of the oversampled class are discarded.

## D.1.2 **2D Problem 2**

This problem is labeled Problem 2 in the main paper. Datapoints are uniformly sampled. A square grid of 'centerpoints' is overlaid over the distribution. Any datapoint in close enough proximity to a 'centerpoint' is considered negative, while any others are considered positive. Sampling is performed until 10k positive samples and 10k negative samples are acquired, and the excess samples of the oversampled class are discarded.

## D.2 **Setup And Training**

All tested networks (encoder, decoder, generator, DDPM noise model, auxiliary discriminator) are deep networks with two hidden layers of 400 neurons each and ReLU activations. A batch size of 256 is used throughout. Models are trained using the Adam optimizer (Kingma & Ba, 2014) with learning rates of 3e-4, 5e-4, and 1e-3 for GANs, VAEs, and DDPMs, respectively. Models are trained for 3500 epochs. The noise dimension for the GAN is set at 2, while the latent dimension for the VAE is set at 16. The VAE's KL divergence loss term is weighted at 0.05. The VAE's auxiliary classifier is pretrained and the validity weight parameter λ is set at 0.2. The GAN's validity weight parameter λ is set at 0.4.

## D.3 **Additional Results**

We include a set of results on 2D density experiments expanding on our results from the main paper.
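The sampling procedure of Problem 1 in Sec. D.1.1 can be sketched as follows; the mode radius, Gaussian spread, and negative-label threshold are illustrative values, since the description above does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_problem1(n_per_class=10_000, radius=1.0, std=0.15, neg_radius=0.1):
    """Six Gaussian modes spaced at an equal radius; points too close to any
    mode center are labeled negative. radius/std/neg_radius are illustrative."""
    angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
    centers = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
    pos, neg = [], []
    while len(pos) < n_per_class or len(neg) < n_per_class:
        c = centers[rng.integers(6)]
        x = c + rng.normal(0.0, std, 2)
        # Proximity to any center determines the constraint-violation label.
        is_neg = np.min(np.linalg.norm(centers - x, axis=1)) < neg_radius
        (neg if is_neg else pos).append(x)
    # Discard the excess samples of the oversampled class.
    return np.array(pos[:n_per_class]), np.array(neg[:n_per_class])

pos, neg = sample_problem1(n_per_class=200)
assert pos.shape == (200, 2) and neg.shape == (200, 2)
```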
Descriptions of the models can be found in the main paper. Mean scores and standard deviations are reported over 3 instantiations in Table 6.

Evaluation Metrics. We utilize several of the metrics proposed in (Regenwetter et al., 2023) for constraint-satisfaction and distributional similarity in generative models. To measure performance, we calculate several scores obtained from precision-recall curves (Sajjadi et al., 2018), including F1, F0.1, F10, and the area under the curve (abv. AUC-PR). F0.1 roughly captures coverage (the degree to which the generated distribution covers the span of the dataset), while F10 roughly captures precision (not straying beyond the boundaries of the data). F1 and AUC-PR roughly serve to assess overall distributional similarity. We calculate the mean distance to the nearest dataset point for each generated sample as another simple estimate for accuracy (abv. NDS). Similarly, we calculate the mean distance to the nearest generated sample for each point in the dataset (abv. NGS) as a simple estimate for coverage. Finally, we calculate Maximum Mean Discrepancy (MMD) and the proportion of invalid generated samples (abv. Validity).

Table 6: Extended results for 2D density experiments. Mean scores and standard deviations over three instantiations are shown. Best models are determined using a two-sample t-test with 95% confidence and boldfaced. Our models (GAN-DD, GAN-MC, GAN-RM) surpass the previous state of the art (GAN-DO) in most metrics.
| Metric | Validity ↓ | MMD ↓ | NDS ↓ | NGS ↓ | F1 ↑ | F10 ↑ | F0.1 ↑ | AUC-PR ↑ |
|---|---|---|---|---|---|---|---|---|
| *Problem 1* | | | | | | | | |
| GAN | 0.093±0.028 | 0.008±0.002 | 0.018±0.002 | 0.027±0.003 | 0.725±0.022 | 0.968±0.010 | 0.931±0.051 | 0.800±0.035 |
| GAN-CC | 0.074±0.018 | 0.005±0.002 | 0.016±0.000 | 0.020±0.007 | 0.821±0.071 | 0.982±0.008 | 0.947±0.062 | 0.876±0.097 |
| GAN-CL | 0.005±0.005 | 0.003±0.001 | 0.014±0.001 | 0.014±0.000 | 0.943±0.013 | 0.997±0.001 | 0.996±0.000 | 0.990±0.003 |
| GAN-Rej | 0.008±0.005 | 0.010±0.003 | 0.015±0.001 | 0.021±0.003 | 0.751±0.035 | 0.971±0.005 | 0.925±0.025 | 0.822±0.045 |
| VAE | 0.134±0.013 | 0.004±0.000 | 0.033±0.001 | 0.013±0.000 | 0.828±0.000 | 0.932±0.005 | 0.988±0.001 | 0.888±0.004 |
| VAE-CC | 0.191±0.025 | 0.004±0.000 | 0.030±0.002 | 0.026±0.002 | 0.800±0.013 | 0.943±0.004 | 0.986±0.002 | 0.874±0.012 |
| VAE-CL | 0.001±0.001 | 0.005±0.000 | 0.030±0.001 | 0.039±0.004 | 0.828±0.009 | 0.925±0.007 | 0.988±0.002 | 0.881±0.010 |
| VAE-Rej | 0.024±0.010 | 0.006±0.001 | 0.031±0.002 | 0.017±0.004 | 0.826±0.013 | 0.924±0.006 | 0.989±0.001 | 0.872±0.011 |
| DDPM | 0.083±0.003 | 0.002±0.000 | 0.018±0.000 | 0.010±0.000 | 0.912±0.003 | 0.994±0.000 | 0.997±0.000 | 0.978±0.001 |
| DDPM-CL | 0.120±0.004 | 0.002±0.000 | 0.019±0.000 | 0.010±0.000 | 0.893±0.005 | 0.995±0.000 | 0.995±0.000 | 0.973±0.002 |
| DDPM-G | 0.038±0.000 | 0.003±0.000 | 0.038±0.000 | 0.010±0.000 | 0.867±0.003 | 0.981±0.001 | 0.995±0.000 | 0.947±0.002 |
| DDPM-Rej | 0.012±0.006 | 0.002±0.000 | 0.017±0.000 | 0.010±0.000 | 0.895±0.007 | 0.994±0.001 | 0.996±0.000 | 0.972±0.003 |
| GAN-DO | 0.004±0.001 | 0.005±0.002 | 0.014±0.000 | 0.015±0.001 | 0.912±0.013 | 0.992±0.001 | 0.993±0.004 | 0.974±0.008 |
| GAN-MC | 0.005±0.001 | 0.003±0.001 | 0.016±0.000 | 0.018±0.002 | 0.915±0.023 | 0.991±0.002 | 0.993±0.001 | 0.976±0.010 |
| GAN-DD-a | 0.004±0.005 | 0.002±0.000 | 0.015±0.001 | 0.015±0.003 | 0.924±0.036 | 0.994±0.003 | 0.986±0.016 | 0.973±0.027 |
| GAN-DD-b | 0.002±0.000 | 0.002±0.000 | 0.016±0.002 | 0.015±0.002 | 0.928±0.015 | 0.992±0.005 | 0.995±0.001 | 0.983±0.008 |
| *Problem 2* | | | | | | | | |
| GAN | 0.018±0.007 | 0.002±0.000 | 0.022±0.001 | 0.020±0.000 | 0.964±0.004 | 0.998±0.000 | 0.998±0.000 | 0.996±0.001 |
| GAN-CC | 0.041±0.011 | 0.002±0.001 | 0.024±0.001 | 0.022±0.003 | 0.959±0.012 | 0.997±0.001 | 0.997±0.001 | 0.994±0.003 |
| GAN-CL | 0.006±0.004 | 0.002±0.000 | 0.022±0.000 | 0.021±0.001 | 0.958±0.007 | 0.997±0.001 | 0.997±0.000 | 0.994±0.002 |
| GAN-Rej | 0.003±0.002 | 0.002±0.000 | 0.021±0.000 | 0.020±0.000 | 0.969±0.002 | 0.998±0.000 | 0.998±0.001 | 0.997±0.000 |
| VAE | 0.104±0.002 | 0.002±0.000 | 0.029±0.000 | 0.018±0.000 | 0.960±0.002 | 0.998±0.000 | 0.998±0.000 | 0.995±0.000 |
| VAE-CC | 0.101±0.003 | 0.002±0.000 | 0.029±0.000 | 0.018±0.000 | 0.958±0.003 | 0.997±0.000 | 0.998±0.000 | 0.995±0.000 |
| VAE-CL | 0.005±0.001 | 0.005±0.001 | 0.023±0.000 | 0.050±0.004 | 0.901±0.021 | 0.993±0.001 | 0.991±0.003 | 0.969±0.009 |
| VAE-Rej | 0.004±0.001 | 0.002±0.000 | 0.022±0.000 | 0.018±0.001 | 0.972±0.003 | 0.998±0.000 | 0.998±0.001 | 0.997±0.000 |
| DDPM | 0.068±0.006 | 0.006±0.001 | 0.026±0.000 | 0.017±0.000 | 0.907±0.005 | 0.995±0.000 | 0.995±0.001 | 0.974±0.003 |
| DDPM-CL | 0.069±0.005 | 0.005±0.000 | 0.026±0.000 | 0.017±0.000 | 0.911±0.002 | 0.995±0.000 | 0.995±0.000 | 0.975±0.002 |
| DDPM-G | 0.065±0.002 | 0.005±0.000 | 0.026±0.000 | 0.017±0.000 | 0.907±0.001 | 0.994±0.001 | 0.995±0.000 | 0.973±0.000 |
| DDPM-Rej | 0.006±0.004 | 0.005±0.000 | 0.022±0.000 | 0.017±0.000 | 0.914±0.006 | 0.995±0.000 | 0.995±0.001 | 0.977±0.003 |
| GAN-DO | 0.004±0.002 | 0.003±0.001 | 0.022±0.000 | 0.025±0.001 | 0.948±0.012 | 0.997±0.001 | 0.996±0.001 | 0.991±0.003 |
| GAN-MC | 0.002±0.001 | 0.002±0.000 | 0.022±0.000 | 0.027±0.002 | 0.953±0.013 | 0.997±0.001 | 0.997±0.001 | 0.993±0.004 |
| GAN-DD-a | 0.003±0.001 | 0.003±0.001 | 0.022±0.000 | 0.022±0.002 | 0.949±0.004 | 0.997±0.000 | 0.997±0.001 | 0.992±0.001 |
| GAN-DD-b | 0.004±0.001 | 0.002±0.001 | 0.022±0.000 | 0.022±0.000 | 0.962±0.008 | 0.997±0.001 | 0.997±0.001 | 0.995±0.002 |

![27_image_0.png](27_image_0.png)

Valid (positive) generated samples are colored blue, while invalid (negative) generated samples are colored black. Constraint-satisfying regions are indicated in the background of plots in blue, while constraint-violating regions are colored white.

![28_image_0.png](28_image_0.png)

## E **Encouraging Diverse Generation In NDGMs**

NDGMs tend to have high precision, but may struggle with recall. This tendency arises because a conservative NDGM will avoid fringe regions of the distribution, resulting in incomplete coverage. One approach to improve recall is to explicitly encourage diversity of generated samples. Diversity is often a desired goal in generative modeling for engineering design applications (Chen & Ahmed, 2021b;a; Regenwetter & Ahmed, 2022; Regenwetter et al., 2023). As (Chen & Ahmed, 2021b) and (Chen & Ahmed, 2021a) note, incorporating diversity can also help models generalize and avoid mode collapse. Diversity was first explicitly incorporated into deep generative models for design engineering in (Chen & Ahmed, 2021b) using a Determinantal Point Process (DPP). DPP-based diversity measures have been used in a variety of generative applications in design (Chen & Ahmed, 2021b; Nobari et al., 2021) and elsewhere (Elfeki et al., 2019; Mothilal et al., 2020). The DPP loss is calculated using a positive semi-definite DPP kernel S. Entries of this matrix are calculated using some modality- and problem-dependent similarity kernel, such as the Euclidean distance kernel.
The (i, j)-th element of S can be expressed in terms of the similarity kernel k and samples xi and xj as:

$$S_{i,j}=k(x_{i},x_{j}),$$

and the loss as:

$${\mathcal{L}}_{d i v}=-{\frac{1}{B}}\,\log\operatorname*{det}(S)=-{\frac{1}{B}}\sum_{i=1}^{B}\log\lambda_{i},$$

where λi is the i-th eigenvalue of S and B is the number of samples in the batch. The loss is incorporated by appending it to the overall loss term of the generative model LGM:

$${\mathcal{L}}_{t o t}={\mathcal{L}}_{G M}+\gamma\,{\mathcal{L}}_{d i v}$$

Adding this loss can help the generative model achieve better coverage, an observation supported by our experiments below.

## E.1 **Methodology**

We train a GAN-DD-a, a GAN-DD-b, and a GAN-MC model, each augmented with a diversity weight, as indicated in Sec. C. These models are labeled GAN-DD-a-DA, GAN-DD-b-DA, and GAN-MC-DA, respectively. To demonstrate that other models can be similarly augmented with diversity, we also train a VAE-CL with diversity (VAE-CL-DA). The diversity weight γ is set at 0.7 for GANs and 0.05 for VAEs. Architecture, dataset, and training parameters are unchanged from Appendix D.2.

## E.2 **Results**

Table 7 shows scores over a variety of distributional similarity metrics and validity. The diversity-augmented double discriminator GAN variant is the strongest performer across most objectives. Plots of generated distributions are included in Figure 8. Visually, diversity-augmented models better distribute samples over the distribution, but sometimes generate more negative samples. We also include percentage differences between diversity-augmented (DA) NDGMs and their non-DA counterparts in Table 8. Diversity improves the double discriminator (GAN-DD) variant on most metrics. However, the GAN-MC and VAE-CL models mainly improve only in recall-related scores (NGS, F0.1), while suffering in others, most notably validity.
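The DPP diversity loss defined in this section can be sketched as follows; the RBF similarity kernel and its bandwidth are illustrative placeholders for the modality-dependent kernel k, and the small diagonal jitter is a numerical-stability assumption.

```python
import numpy as np

def dpp_diversity_loss(samples, gamma_rbf=1.0):
    """L_div = -(1/B) log det(S) for an RBF similarity kernel.

    samples: (B, d) batch of generated designs. A batch whose samples are
    nearly identical yields a near-singular S and hence a large loss.
    """
    B = samples.shape[0]
    sq_dists = np.sum((samples[:, None, :] - samples[None, :, :]) ** 2, axis=-1)
    S = np.exp(-gamma_rbf * sq_dists)   # positive semi-definite DPP kernel
    S = S + 1e-6 * np.eye(B)            # jitter for numerical stability
    _, logdet = np.linalg.slogdet(S)
    return -logdet / B

rng = np.random.default_rng(0)
spread = rng.normal(0.0, 1.0, (32, 2))       # diverse batch
collapsed = rng.normal(0.0, 0.01, (32, 2))   # near mode collapse
# A collapsed batch is penalized much more heavily than a diverse one,
# which is the mechanism that encourages coverage.
assert dpp_diversity_loss(collapsed) > dpp_diversity_loss(spread)
```

In training, this term is simply added to the generator loss with weight γ, as in the L_tot equation above.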
Table 7: Scores of vanilla models, NDGMs, and Diversity-Augmented (DA) NDGMs across a variety of distributional similarity metrics and the validity metric.

| | Validity ↓ | MMD ↓ | NDS ↓ | NGS ↓ | F1 ↑ | F10 ↑ | F0.1 ↑ | AUC-PR ↑ |
|---|---|---|---|---|---|---|---|---|
| GAN-Vanilla | 0.0622 | 0.0050 | 0.0199 | 0.0241 | 0.8417 | 0.9845 | 0.9807 | 0.9006 |
| GAN-DD-a (ours) | 0.0028 | 0.0022 | 0.0184 | 0.0177 | 0.9435 | 0.9960 | 0.9970 | 0.9918 |
| GAN-DD-a-DA (ours) | **0.0023** | 0.0025 | **0.0182** | **0.0156** | **0.9556** | 0.9970 | **0.9974** | **0.9937** |
| GAN-DD-b (ours) | 0.0028 | **0.0020** | 0.0188 | 0.0183 | 0.9491 | 0.9959 | 0.9964 | 0.9813 |
| GAN-DD-b-DA (ours) | 0.0026 | 0.0022 | 0.0191 | 0.0176 | 0.9514 | **0.9972** | 0.9963 | 0.9828 |
| GAN-MC (ours) | 0.0030 | 0.0023 | 0.0190 | 0.0220 | 0.9357 | 0.9941 | 0.9950 | 0.9854 |
| GAN-MC-DA (ours) | 0.0054 | 0.0022 | 0.0200 | 0.0170 | 0.9351 | 0.9949 | 0.9958 | 0.9848 |
| VAE-Vanilla | 0.1223 | 0.0032 | 0.0309 | 0.0158 | 0.8942 | 0.9662 | 0.9931 | 0.9431 |
| VAE-CL | 0.0029 | 0.0051 | 0.0261 | 0.0456 | 0.8619 | 0.9579 | 0.9895 | 0.9226 |
| VAE-CL-DA | 0.0035 | 0.0055 | 0.0295 | 0.0340 | 0.8608 | 0.9571 | 0.9925 | 0.9190 |

Table 8: Percentage (%) differences between diversity-augmented (DA) NDGMs and their non-DA counterparts. Improvements are bolded. For objectives that are maximized at 1 (F1, F0.1, F10, AUC-PR), we calculate the percentage difference as (snew − sold)/(1 − sold).

| Percent (%) Difference | Validity ↓ | MMD ↓ | NDS ↓ | NGS ↓ | F1 ↑ | F10 ↑ | F0.1 ↑ | AUC-PR ↑ |
|----------------------------|--------------|---------|---------|---------|--------|---------|----------|------------|
| GAN-DD-a-DA (vs. GAN-DD-a) | -14.5 | 13.3 | -0.8 | -11.4 | 21.4 | 24.9 | 13.5 | 23.6 |
| GAN-DD-b-DA (vs. GAN-DD-b) | -7.27 | 8.9 | 1.6 | -3.5 | 4.6 | 32.0 | -1.7 | 16.7 |
| GAN-MC-DA (vs. GAN-MC) | 81.7 | -1.2 | 5.2 | -22.6 | -1.0 | 14.3 | 14.9 | -4.5 |
| VAE-CL-DA (vs. VAE-CL) | 22.8 | 8.3 | 13.1 | -25.5 | -0.8 | -1.9 | 28.6 | -4.7 |

![31_image_0.png](31_image_0.png)

datasets are shown in the first two panes, respectively. Valid (positive) generated samples are colored blue, while invalid (negative) generated samples are colored black.
Constraint-satisfying regions are indicated in the background of the plots in blue, while constraint-violating regions are colored white. Diversity-augmented (DA) NDGMs achieve much better recall than their non-DA counterparts, though they occasionally generate more negative samples. Our models are bolded.

## F **Block Stacking: Details And Additional Experiments**

## F.1 **Training And Dataset Details**

For GAN-DO, we select λ = 0.75 as the best-performing hyperparameter. For GAN-DD, we select λ = 0.5 as the best-performing hyperparameter. We also benchmark an autoregressive GAN model (GAN-AR) (Mogren, 2016; Esteban et al., 2017). The architecture, dataset, and training details are shown below.

Table 9: Relevant hyperparameters for block stacking models. C: connectivity. S: stability. FM: floating material. VFE: volume fraction error. CE: compliance error.

| | GAN-DO/GAN-DD | GAN-AR |
|-----------------|-------------------|-------------------|
| Dimension | 12 | 12 |
| Valid Set | 10K | 10K |
| Invalid Set | 10K | 10K |
| Evaluation Set | 1K | 1K |
| Constraints | C+S | C+S |
| Generator | MLP (16-32-12) | LSTM (64-12) |
| Discriminator | MLP (128-64-32-1) | MLP (128-64-32-1) |
| Batch size | 200 | 200 |
| Iterations | 30K | 50K |
| Learning rate | 1e−3 | 1e−3 |
| Optimizer | Adam | Adam |

Example positive and negative configurations from the dataset are visualized in Sec. F.3 and F.4.

## F.2 **Fulfilling Multiple Sets of Constraints**

We consider the more challenging problem where stacks must simultaneously satisfy connectivity and stability constraints. S-C (stable-connected) configurations are used as positive data, while S-D, U-C, and U-D configurations are pooled to constitute the negative data. Results are summarized in the lower half of Tables 11 and 12. We see that models trained using only positive data perform poorly in constraint satisfaction scores across the board.
Autoregressive models (GAN-AR; Mogren, 2016; Esteban et al., 2017) tend to work better than the standard GAN (second row) but still fall behind GAN-DD. GAN-DD connects and stabilizes an order of magnitude more configurations than the GAN (fifth row) trained on only positive (connected and balanced) configurations. Interestingly, GAN-AR (sixth row) and GAN-DO (seventh row) achieve relatively high scores on stability, with GAN-DO performing better than GAN-DD. However, when the model is challenged to fulfill both constraints, GAN-AR generates almost exclusively invalid configurations, and GAN-DO satisfies all constraints less than 1/4 as frequently as GAN-DD, even when presented with the same amount of positive and negative data. We note that the benefits of negative data only extend to constraints that were included in the negative data. As shown in the upper half of Table 11, models trained on the relaxed case, including GAN-DD (fourth row), struggle to satisfy constraints not represented in the negative dataset, since they do not see representative examples of unstable configurations in the negative data.

Table 10: Base model vs. models trained with negative data. We test on 20 splits (1000 samples each) and evaluate metrics that quantify precision and constraint satisfaction. We consider boxes floating or intersecting if the distance between the points is larger than 0.9 units (the minimum distance between constraints in the negative data is 1 unit). b: base block; m: middle block; t: top block. For floating yb < ym and for intersecting yb > ym.
| Metrics | GAN (w/o Negative) | GAN-DO (w/ Negative) | GAN-DD (w/ Negative) |
|------------------------|----------------------|------------------------|------------------------|
| ↓ median(\|yb − ym\|) | 2.78 ± 0.26 u | 5.12 ± 0.37 u | 0.54 ± 0.03 u |
| ↓ median(\|ym − yt\|) | 2.12 ± 0.22 u | 1.91 ± 0.22 u | 0.83 ± 0.05 u |
| ↓ no-overlap(xb, xm) | 0.41 ± 0.64 % | 12.85 ± 3.41 % | 1.82 ± 0.79 % |
| ↓ no-overlap(xm, xt) | 0.31 ± 0.66 % | 17.89 ± 3.70 % | 3.90 ± 2.01 % |
| ↓ floating(yb, ym) | 20.44 ± 3.73 % | 14.47 ± 2.83 % | 13.78 ± 4.07 % |
| ↓ floating(ym, yt) | 38.04 ± 4.49 % | 20.73 ± 4.11 % | 13.94 ± 2.17 % |
| ↓ intersect(yb, ym) | 64.59 ± 4.10 % | 77.20 ± 3.97 % | 0.00 ± 0.00 % |
| ↓ intersect(ym, yt) | 43.80 ± 4.47 % | 54.82 ± 5.01 % | 30.79 ± 4.05 % |
| ↑ connected(yb, ym) | 14.96 ± 3.04 % | 8.32 ± 2.11 % | 86.21 ± 4.07 % |
| ↑ connected(ym, yt) | 18.15 ± 3.06 % | 24.44 ± 3.22 % | 55.26 ± 4.05 % |

| | Pos.: Connected | Pos.: Stable | Neg.: Disconnected | Neg.: Unstable | Stability ↑ | Connectivity ↑ | Both ↑ |
|---------------|----|----|----|----|---------|---------|---------|
| GAN | ✓ | ✗ | ✗ | ✗ | 2.44 % | 2.80 % | 0.19 % |
| GAN-AR | ✓ | ✗ | ✗ | ✗ | 0.085 % | 13.23 % | 0.00 % |
| GAN-DO | ✓ | ✗ | ✓ | ✗ | 0.39 % | 6.39 % | 0.00 % |
| GAN-DD (ours) | ✓ | ✗ | ✓ | ✗ | 3.75 % | 45.30 % | 1.03 % |
| GAN | ✓ | ✓ | ✗ | ✗ | 83.20 % | 4.83 % | 3.28 % |
| GAN-AR | ✓ | ✓ | ✗ | ✗ | 32.16 % | 8.12 % | 0.07 % |
| GAN-DO | ✓ | ✓ | ✓ | ✓ | 84.65 % | 12.63 % | 8.85 % |
| GAN-DD (ours) | ✓ | ✓ | ✓ | ✓ | 70.35 % | 41.10 % | 36.02 % |

| | GAN (w/o Negative) | GAN-DD (w/ Negative) |
|-----------------------------------|-------------------------|------------------------|
| ↑ Connected(yb, ym) (Ia) | 21.13 ± 3.04 | 100 ± 0.00 |
| ↑ Connected(ym, yt) (Ib) | 22.68 ± 3.06 | 41.10 ± 4.79 |
| ↑ Stable (II) | 83.20 ± 4.09 | 70.35 ± 3.72 |
| ↑ Connected (I) | 4.83 ± 1.72 (−88.24 %) | 41.10 ± 4.80 |
| ↑ Connected and Stable (I and II) | 3.28 ± 1.34 (−90.89 %) | 36.02 ± 3.88 |

Table 11: Overview of block stacking results. The upper half of the table shows results when the stability constraint is ignored during training (and unstable configurations are not designated as negative data to GAN-DO and GAN-DD). When stability is not considered during training, no generative model can reliably fulfill the stability constraint. The lower half shows results where both disconnected stacks and unstable stacks are considered negative (and are provided to GAN-DO and GAN-DD). GAN-DD improves constraint satisfaction by an order of magnitude over most baselines.

Table 12: Handling multiple sets of constraints. Positive data is connected (constraint set I) and stable (constraint set II). Negative data is composed of two negative sets: Connected-Unstable and Disconnected-Stable.

Table 13: Overview. Using negative data improves constraint satisfaction by an order of magnitude. "Pseudo-balanced" stacks would be stable if the other constraints were satisfied.

| | Pos.: Connected | Pos.: Balanced | Neg.: Disconnected | Neg.: Unbalanced | Pseudo-Balanced ↑ | Balanced ↑ | Connected ↑ |
|--------------|----|----|----|----|--------|----------|----------|
| Positive Set | ✓ | ✗ | ✗ | ✗ | 2.50 % | - | 100.00 % |
| Positive Set | ✓ | ✓ | ✗ | ✗ | 0.00 % | 100.00 % | 100.00 % |
| Negative Set | ✗ | ✗ | ✓ | ✗ | 0.00 % | 100.00 % | 0.00 % |
| Negative Set | ✗ | ✗ | ✗ | ✓ | 2.50 % | 0.00 % | 100.00 % |

![35_image_0.png](35_image_0.png)

## F.3 Training Set - Positive

Figure 9: Positive Data for Constraint I. Example of positive configurations for constraint I (connection). Blocks fulfilling only the connection constraint (I).
Specifically, the blocks are stacked one on top of the other without any floating or intersection, but in generally unstable configurations. That is, the blocks in the data are arranged so that no block floats (i.e., each block is fully supported by the blocks below) and no blocks intersect (i.e., overlap with other blocks).

![35_image_1.png](35_image_1.png)

Figure 10: Positive Data for Constraints I+II. Example of positive configurations for constraints I (connection) and II (stability). Blocks fulfilling both the connection (I) and stability (II) constraints. Specifically, the blocks are stacked one on top of the other without any floating or intersection, in a stable configuration. That is, the blocks in the data are arranged so that no block floats (i.e., each block is fully supported by the blocks below), no blocks intersect (i.e., overlap with other blocks), and the stack has an internal center of mass.

## F.4 Training Set - Negative

![36_image_0.png](36_image_0.png)

Figure 11: Negative Data. Example of negative data for block stacking. The negative data consists of two categories: hard negatives (top) and easy negatives (bottom). The top section of the figure contains examples of hard negatives, which are generated by violating the constraints of the block stacking problem by a small degree (1 to 5 units). The constraints include the requirement that the blocks should not intersect or float above each other. These hard negatives are designed to be challenging for the model to learn from, as they are close to satisfying the constraints but still violate them. The bottom section of the figure contains examples of easy negatives, which are generated by violating the constraints by a large degree (1 to 20 units). These examples are easier for the model to learn from, as the violations are more pronounced and easier to detect.
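As an illustration of this generation procedure, a hard or easy negative can be synthesized from a valid stack by shifting one block (and everything above it) by a small or large offset. This is a sketch under our own assumptions (uniform offsets, random violation direction), not the exact generation script:

```python
import numpy as np

def make_negative(y_valid, hard=True, rng=None):
    """Perturb a valid stack (block y-positions, bottom to top) so that
    exactly one interface violates the no-floating/no-intersection
    constraints: hard negatives violate by 1-5 units, easy by 1-20."""
    if rng is None:
        rng = np.random.default_rng()
    y = np.array(y_valid, dtype=float)
    lo, hi = (1, 5) if hard else (1, 20)
    idx = rng.integers(1, len(y))      # never move the base block
    offset = rng.uniform(lo, hi)
    sign = rng.choice([-1.0, 1.0])     # -1: intersect, +1: float
    y[idx:] += sign * offset           # shift this block and all above it
    return y
```

Shifting the whole upper sub-stack keeps all other interfaces intact, so only the chosen interface is violated, matching the "close to satisfying the constraints" character of hard negatives.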
## F.5 Samples

![37_image_0.png](37_image_0.png)

![37_image_1.png](37_image_1.png)

Figure 12: GAN Samples. Samples from a model trained only on positive data. The model generates reasonable samples, but in most cases the constraints are not satisfied. Precision at the boundary is challenging to enforce.

![37_image_2.png](37_image_2.png)

Figure 13: GAN-DD Samples with Negative Constraints I. Samples from a model trained on positive and negative data for constraint I (connection) using our divergence formulation. The model generates reasonable samples, and in most cases the connectivity constraints are satisfied. The model can generate blocks that neither intersect nor float (up to a small tolerance of 0.9 u). However, because we do not rely on negative data for constraint II (stability), the generated samples do not fulfill this second set of constraints, and the generated blocks are connected but in general unstable. Precision at the boundary is enforced better than when training only on positive data.

![38_image_0.png](38_image_0.png)

Figure 14: GAN-DD Samples with Negative Constraints I+II. Samples from a model trained on positive and negative data for constraint I (connection) and constraint II (stability) using our divergence formulation. The model generates reasonable samples; in most cases the connectivity constraints are satisfied and the blocks are stacked in a stable configuration. The model can generate blocks that neither intersect nor float (up to a small tolerance of 0.9 u) and whose center of mass is internal to the structure. Because we rely on negative data for constraints I and II (connectivity and stability), the generated samples fulfill both sets of constraints, and the generated blocks are connected and stable. Precision at the boundary is enforced better than when training only on positive data. This visualization corroborates the need for negative designs when dealing with constraint satisfaction in generative models.
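The evaluation criterion referenced in these captions — an interface counts as floating or intersecting only when the gap or overlap exceeds the 0.9 u tolerance — can be sketched as follows. The function names and the face-pair stack representation are our own assumptions:

```python
def classify_interface(top_of_lower, bottom_of_upper, tol=0.9):
    """Classify one block interface: 'floating' if the gap exceeds tol,
    'intersecting' if the overlap exceeds tol, else 'connected'."""
    gap = bottom_of_upper - top_of_lower
    if gap > tol:
        return "floating"
    if gap < -tol:
        return "intersecting"
    return "connected"

def stack_is_connected(interfaces, tol=0.9):
    """interfaces: list of (top_of_lower, bottom_of_upper) pairs,
    one per adjacent block pair in the stack."""
    return all(classify_interface(a, b, tol) == "connected"
               for a, b in interfaces)
```

Because the minimum constraint violation in the negative data is 1 unit, a 0.9 u tolerance cleanly separates tolerated imprecision from genuine violations.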
## G **Assorted Engineering Problems**

## G.1 **Datasets And Problems**

In this section, we present details on the 12 engineering problems and datasets used for benchmarking.

## G.1.1 **Ashby Chart**

Taken from (Jetton et al., 2023), this problem explores physically feasible combinations of material properties, according to known physical materials from an Ashby chart. The constraint function is a combination of an analytical constraint and a lookup from an Ashby chart. Material properties considered are density, yield strength, and Young's modulus. Material classes included are foams, natural materials, polymers, composites, ceramics, and metals. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.2 **Bike Frame**

The FRAMED dataset (Regenwetter et al., 2022b) is comprised of 4292 in-distribution (positive) human-designed bicycle frame models. FRAMED also contains 3242 constraint-violating (negative) designs, some of which were human-designed and some of which were synthesized by generative models. FRAMED also contains 10095 generative-model-synthesized valid designs that are not assumed to be in-distribution and are thus unused in this benchmark. Constraints are comprised of a set of empirical geometric checks and a black-box 3D reconstruction check. Constraints are unified using an all-or-nothing approach. Validity scores on this dataset are only evaluated using empirical checks.

## G.1.3 **Cantilever Beam**

This problem considers the design of a five-component stepped cantilever beam. The thickness and height of each of the five components are the design variables, while the lengths of each component are given (fixed). Taken from (Gandomi & Yang, 2011), this problem has numerous geometric constraints and an overall constraint limiting the total deflection of the design under a simple concentrated load at the tip of the beam. The optimization objective is not utilized.
1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.4 **Car Impact**

This problem quantifies the performance of a car design under a side impact scenario based on European Enhanced Vehicle-Safety Committee (EEVC) procedures (Gandomi et al., 2011). The car chassis is represented by 11 design parameters. Several critical deflection, load, and velocity thresholds are specified over several components of a crash dummy, constituting 10 constraints. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.5 **Compression Spring**

This problem, taken from (Gandomi & Yang, 2011), centers around the design of a helical compression spring parameterized over coil diameter, wire diameter, and number of spring coils. A constraint on free length and a constraint on displacement under a compressive load are specified. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.6 **Gearbox**

This gearbox (speed-reducer) design problem, taken from (Gandomi & Yang, 2011), features 7 parameters describing key geometric components like shaft diameters, number of teeth on gears, and face width of gears. Nine constraints are given, spanning considerations like bending stress on gear teeth, transverse stress and deflection on shafts, and surface stresses. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.7 **Heat Exchanger**

This problem, sourced from (Yang & Gandomi, 2012), considers the design of a heat exchanger, involving eight design parameters and six constraints focused on geometric validity. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.
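The dataset-construction recipe repeated across these problems — sample the design space uniformly, label each sample with the constraint function, and keep 1k samples per class — can be sketched generically. The `feasible` constraint below is a made-up illustrative check, not one of the benchmark formulations:

```python
import numpy as np

def build_dataset(constraint_fn, bounds, n_per_class=1000, rng=None):
    """Uniformly sample within per-dimension bounds and split samples
    into constraint-satisfying (positive) and violating (negative) sets."""
    if rng is None:
        rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pos, neg = [], []
    while len(pos) < n_per_class or len(neg) < n_per_class:
        x = rng.uniform(lo, hi)
        (pos if constraint_fn(x) else neg).append(x)
    return np.array(pos[:n_per_class]), np.array(neg[:n_per_class])

# Hypothetical beam-like constraint for illustration only:
# a slenderness limit and a minimum section-stiffness proxy.
def feasible(x):
    thickness, height = x
    return height / thickness <= 20 and thickness * height**2 >= 5.0
```

Note that this rejection-style split becomes expensive when one class is rare, which is exactly the regime where curated negative datasets (as in FRAMED and SHIPD) are valuable.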
## G.1.8 **Pressure Vessel**

This cylindrical pressure vessel design problem is taken from (Gandomi & Yang, 2011). The pressure vessel is parametrized according to four parameters, namely the cylinder thickness, spherical head thickness, inner radius, and cylinder length. Four geometric and structural constraints are specified in accordance with American Society of Mechanical Engineers (ASME) design codes. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.9 **Reinforced Concrete**

Taken from (Gandomi & Yang, 2011), this problem centers around the design of a simply supported concrete beam under a simple distributed load case. The beam is parameterized using a cross-sectional area, base length, and height and is subject to a safety requirement indicated in the American Concrete Institute (ACI) 319-77 code. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.10 **Ship Hull**

The SHIPD dataset (Bagazinski & Ahmed, 2023) is comprised of 30k valid (positive) ship hull designs and 20k invalid (negative) ship hull designs. The SHIPD dataset includes numerous constraints spanning geometric rules and functional performance targets, focusing on various types of hydrodynamic performance.

## G.1.11 **Truss**

Taken from (Yang & Gandomi, 2012), this truss design problem considers the design of a three-beam truss parameterized by the lengths of two of the beams (symmetry specifies the length of the third). The system is subject to one geometric constraint and two maximum stress constraints. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.1.12 **Welded Beam**

Taken from (Gandomi & Yang, 2011), this problem concerns a cantilever beam welded to a flat surface under a simple concentrated load at the tip of the beam.
The beam is parametrized using a weld thickness, welded joint length, beam thickness, and beam width. Five structural constraints are given, specifying a maximum shear stress, bending stress, buckling load, and deflection, as well as a geometric constraint on the beam. The optimization objective is not utilized. 1k positive samples and 1k negative samples are selected using uniform random sampling.

## G.2 **Training And Architecture Details**

We train the same 16 models tested on the 2D test problems. Model details are the same as described in Appendix D.2. Models trained on "bike frame" and "ship hull" are trained for 2000 epochs (since the datasets are larger), and models trained on the other 10 problems are trained for 5000 epochs.

## G.3 **Metrics**

We measure validity, maximum mean discrepancy (MMD), and F1 score, as discussed in Sec. 5.1 of the main paper.

## G.4 **Extended Results and Discussion**

Included in Tables 14 to 19 are Validity, MMD, and F1 scores for both adversarial and likelihood-based models for the 12 engineering problems. Several key takeaways can be extracted:

- NDGMs achieve significantly better validity scores than vanilla models.
- Likelihood-based models generally achieve better validity scores, lower MMD scores, and similar F1 scores.
- Vanilla GANs generally achieve better MMD scores than negative data GANs. This trend is not mirrored in likelihood-based models.
- As discussed in the main paper, discriminator overloading (GAN-DO) and multiclass discriminator (GAN-MC) GANs are dominant on the validity metric among adversarial models.
- VAE with classifier loss (VAE-CL) is dominant on the validity metric among likelihood-based models.

We stress that these takeaways are highly dataset-dependent and should not be taken as a general analysis of the models from a standpoint of broad applicability. Underscoring this point: The models that perform best in these engineering-related tests are not the same models that perform best on the non-convex 2D test problems nor the block-stacking problem.
Many of these engineering problems are fairly convex, lending themselves to different optimal models. | GAN | GAN-CC | GAN-CL | GAN-Rej | GAN-DO | GAN-D2 | GAN-MC | GAN-RM | | |---------------------|----------|----------|-----------|----------|----------|----------|----------|-------| | Ashby Chart | 2.35 | 3.05 | 0.87 | 4.12 | 1.28 | 3.76 | 3.17 | 2.37 | | Bike Frame | 4.98 | 14.28 | 3.08 | 6.05 | 5.11 | 3.18 | 1.09 | 2.91 | | Cantilever Beam | 8.22 | 7.13 | 6.54 | 5.42 | 5.97 | 6.39 | 4.53 | 6.23 | | Compression Spring | 2.23 | 3.73 | 2.26 | 1.22 | 0.31 | 2.63 | 0.18 | 2.49 | | Car Impact | 10.43 | 10.72 | 7.55 | 8.67 | 5.89 | 8.23 | 4.67 | 8.26 | | Gearbox | 0.57 | 1.48 | 0.19 | 0.12 | 0.01 | 0.09 | 0.09 | 0.14 | | Heat Exchanger | 8.65 | 5.98 | 7.47 | 10.09 | 6.23 | 9.09 | 4.86 | 8.56 | | Pressure Vessel | 2.84 | 2.69 | 0.58 | 0.73 | 0.01 | 1.05 | 0.42 | 1.45 | | Reinforced Concrete | 0.66 | 2.25 | 0.58 | 0.33 | 0.03 | 0.49 | 0.28 | 0.55 | | Ship Hull | 97.44 | 96.85 | 97.37 | 86.69 | 96.67 | 98.82 | 96.51 | 99.74 | | Truss | 0.04 | 0.34 | 0.09 | 0.06 | 0 | 0 | 0.73 | 0 | | Welded Beam | 2.5 | 2.73 | 2.18 | 1.85 | 0.63 | 1.95 | 0.66 | 1.63 | Table 14: Validity scores for adversarial models on engineering problems. **Best** is bolded and next two best are underlined. Lower (↓) is better. Median scores over three runs are reported. 
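For reference, the MMD metric reported throughout these tables can be sketched with a biased estimator and an RBF kernel; the kernel choice and bandwidth here are our own assumptions, and the benchmark's exact settings may differ:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased MMD^2 estimate between samples X and Y:
    E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)], with an RBF kernel k."""
    def k(A, B):
        sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Lower values indicate that the generated distribution matches the reference, which is why validity-focused NDGMs can trade a slightly worse MMD for far fewer constraint violations.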
| | VAE | VAE-CC | VAE-CL | VAE-Rej | DDPM | DDPM-CL | DDPM-G | DDPM-Rej |
|---------------------|------|--------|--------|---------|-------|---------|--------|----------|
| Ashby Chart | 0.45 | NA | 0.33 | 0.51 | 1.77 | 1.66 | 13.93 | 1.89 |
| Bike Frame | 0.88 | NA | 0.73 | 0.63 | 0.86 | 1.52 | 2.46 | 0.83 |
| Cantilever Beam | 1.02 | 1.1 | 0.54 | 1.1 | 1.3 | 1.14 | 2.07 | 1.56 |
| Compression Spring | 0.34 | 0.39 | 0.14 | 0.22 | 1.28 | 11.32 | 3.07 | 1.32 |
| Car Impact | 1.42 | 1.35 | 0.39 | 1.09 | 1.86 | 2.58 | 2.72 | 1.94 |
| Gearbox | 0.15 | 0.02 | 0.01 | 0 | 0.08 | 0.02 | 0.12 | 0.01 |
| Heat Exchanger | 1.24 | 1.71 | 1.15 | 1.41 | 1.95 | 2.7 | 3.53 | 2.2 |
| Pressure Vessel | 0.26 | 0.16 | 0.05 | 0.06 | 0.7 | 0.67 | 0.95 | 0.52 |
| Reinforced Concrete | 0.12 | 0.15 | 0.01 | 0.08 | 0.37 | 0.36 | 0.5 | 0.42 |
| Ship Hull | 0 | 31.76 | 0 | 0 | 85.37 | 84.76 | 91.2 | 84.76 |
| Truss | 0.08 | 0.01 | 0.01 | 0.01 | 0.3 | 0.19 | 0.58 | 0.13 |
| Welded Beam | 0.41 | 0.28 | 0.08 | 0.16 | 1.01 | 1.25 | 1.51 | 0.63 |

| | GAN | GAN-CC | GAN-CL | GAN-Rej | GAN-DO | GAN-D2 | GAN-MC | GAN-RM |
|---------------------|-------|--------|--------|---------|--------|--------|--------|--------|
| Ashby Chart | 1.255 | 1.996 | 2.212 | 1.304 | 1.516 | 1.607 | 1.441 | 1.309 |
| Bike Frame | 14.88 | 98.26 | 20.70 | 16.40 | 29.07 | 15.52 | 55.95 | 18.73 |
| Cantilever Beam | 2.642 | 2.511 | 2.898 | 2.948 | 3.170 | 2.639 | 3.402 | 2.596 |
| Compression Spring | 1.446 | 1.673 | 1.198 | 1.480 | 1.133 | 1.497 | 1.529 | 1.567 |
| Car Impact | 3.165 | 2.556 | 2.716 | 3.395 | 2.991 | 2.612 | 3.440 | 2.674 |
| Gearbox | 2.005 | 3.037 | 2.301 | 1.895 | 4.741 | 2.318 | 3.663 | 2.570 |
| Heat Exchanger | 2.411 | 3.479 | 2.174 | 2.102 | 3.295 | 2.269 | 4.860 | 2.359 |
| Pressure Vessel | 1.714 | 1.891 | 2.363 | 1.724 | 6.658 | 1.947 | 3.019 | 1.719 |
| Reinforced Concrete | 1.182 | 1.615 | 1.525 | 1.456 | 3.584 | 1.624 | 1.795 | 1.946 |
| Ship Hull | 5.320 | 19.76 | 6.542 | 4.506 | 8.588 | 8.998 | 6.029 | 12.47 |
| Truss | 1.026 | 1.506 | 1.006 | 1.310 | 1.575 | 3.375 | 8.552 | 4.739 |
| Welded Beam | 1.675 | 2.306 | 1.632 | 1.384 | 4.575 | 1.799 | 3.687 | 2.651 |

Table 15: Validity scores for likelihood-based models on engineering problems. **Best** is bolded and next two best are underlined. Lower (↓) is better. Median scores over three runs are reported.

Table 16: MMD scores for adversarial models on engineering problems. **Best** is bolded and next two best are underlined. Lower (↓) is better. Median scores over three runs are reported.

| | VAE | VAE-CC | VAE-CL | VAE-Rej | DDPM | DDPM-CL | DDPM-G | DDPM-Rej |
|---------------------|-------|--------|--------|---------|-------|---------|--------|----------|
| Ashby Chart | 3.240 | NA | 3.343 | 3.018 | 10.86 | 10.45 | 8.643 | 10.25 |
| Bike Frame | 42.76 | NA | 53.74 | 49.18 | 2.479 | 2.494 | 2.499 | 2.487 |
| Cantilever Beam | 5.870 | 5.631 | 5.858 | 5.583 | 4.401 | 4.497 | 3.809 | 4.288 |
| Compression Spring | 3.203 | 2.725 | 3.079 | 2.810 | 20.46 | 21.41 | 20.28 | 20.78 |
| Car Impact | 4.756 | 4.951 | 5.145 | 4.565 | 3.943 | 3.956 | 3.334 | 3.845 |
| Gearbox | 5.450 | 5.468 | 6.055 | 5.771 | 3.276 | 3.551 | 4.333 | 3.499 |
| Heat Exchanger | 6.962 | 6.389 | 7.075 | 6.729 | 6.677 | 6.787 | 4.500 | 6.835 |
| Pressure Vessel | 3.931 | 4.056 | 4.223 | 3.897 | 6.593 | 7.107 | 8.043 | 7.165 |
| Reinforced Concrete | 3.095 | 3.673 | 3.786 | 3.473 | 10.93 | 10.46 | 12.66 | 10.88 |
| Ship Hull | 1001 | 32.66 | 1001 | 1001 | 2.135 | 2.133 | 2.132 | 2.134 |
| Truss | 1.604 | 1.610 | 1.416 | 1.502 | 9.985 | 9.535 | 30.25 | 10.11 |
| Welded Beam | 4.539 | 4.011 | 4.577 | 4.261 | 7.722 | 8.442 | 9.185 | 8.456 |

Table 17: MMD scores for likelihood-based models on engineering problems. **Best** is bolded and next two best are underlined. Lower (↓) is better. Median scores over three runs are reported.
| | GAN | GAN-CC | GAN-CL | GAN-Rej | GAN-DO | GAN-D2 | GAN-MC | GAN-RM |
|---------------------|-------|--------|--------|---------|--------|--------|--------|--------|
| Ashby Chart | 0.967 | 0.947 | 0.951 | 0.963 | 0.963 | 0.960 | 0.960 | 0.964 |
| Bike Frame | 0.684 | 0.214 | 0.663 | 0.675 | 0.253 | 0.666 | 0.506 | 0.692 |
| Cantilever Beam | 0.940 | 0.938 | 0.930 | 0.934 | 0.914 | 0.924 | 0.914 | 0.937 |
| Compression Spring | 0.953 | 0.956 | 0.957 | 0.954 | 0.958 | 0.959 | 0.955 | 0.949 |
| Car Impact | 0.927 | 0.930 | 0.931 | 0.922 | 0.934 | 0.936 | 0.897 | 0.928 |
| Gearbox | 0.945 | 0.930 | 0.941 | 0.954 | 0.903 | 0.943 | 0.939 | 0.947 |
| Heat Exchanger | 0.942 | 0.929 | 0.948 | 0.954 | 0.929 | 0.948 | 0.894 | 0.942 |
| Pressure Vessel | 0.957 | 0.942 | 0.943 | 0.962 | 0.904 | 0.955 | 0.944 | 0.946 |
| Reinforced Concrete | 0.964 | 0.952 | 0.956 | 0.960 | 0.932 | 0.959 | 0.951 | 0.956 |
| Ship Hull | 0.043 | 0.012 | 0.020 | 0.054 | 0.044 | 0.026 | 0.053 | 0.019 |
| Truss | 0.958 | 0.946 | 0.954 | 0.954 | 0.937 | 0.888 | 0.869 | 0.891 |
| Welded Beam | 0.958 | 0.937 | 0.967 | 0.968 | 0.921 | 0.954 | 0.927 | 0.939 |

| | VAE | VAE-CC | VAE-CL | VAE-Rej | DDPM | DDPM-CL | DDPM-G | DDPM-Rej |
|---------------------|-------|--------|--------|---------|-------|---------|--------|----------|
| Ashby Chart | 0.953 | NA | 0.953 | 0.950 | 0.859 | 0.855 | 0.876 | 0.856 |
| Bike Frame | 0.899 | NA | 0.897 | 0.890 | 0.780 | 0.778 | 0.786 | 0.754 |
| Cantilever Beam | 0.966 | 0.958 | 0.958 | 0.959 | 0.935 | 0.939 | 0.929 | 0.928 |
| Compression Spring | 0.946 | 0.957 | 0.954 | 0.956 | 0.795 | 0.785 | 0.795 | 0.794 |
| Car Impact | 0.955 | 0.960 | 0.947 | 0.958 | 0.938 | 0.939 | 0.927 | 0.938 |
| Gearbox | 0.958 | 0.965 | 0.963 | 0.963 | 0.969 | 0.968 | 0.863 | 0.965 |
| Heat Exchanger | 0.965 | 0.952 | 0.962 | 0.963 | 0.899 | 0.902 | 0.871 | 0.879 |
| Pressure Vessel | 0.956 | 0.960 | 0.954 | 0.959 | 0.887 | 0.884 | 0.872 | 0.884 |
| Reinforced Concrete | 0.946 | 0.945 | 0.953 | 0.940 | 0.846 | 0.851 | 0.837 | 0.848 |
| Ship Hull | 0.033 | 0.906 | 0.033 | 0.034 | 0.879 | 0.887 | 0.871 | 0.873 |
| Truss | 0.952 | 0.959 | 0.961 | 0.948 | 0.876 | 0.880 | 0.793 | 0.874 |
| Welded Beam | 0.953 | 0.956 | 0.954 | 0.956 | 0.875 | 0.866 | 0.867 | 0.870 |

Table 18: F1 scores for adversarial models on engineering problems. **Best** is bolded and next two best are underlined. Higher (↑) is better. Median scores over three runs are reported.

Table 19: F1 scores for likelihood-based models on engineering problems. **Best** is bolded and next two best are underlined. Higher (↑) is better. Median scores over three runs are reported.

## H **Details On Topology Optimization Experiments**

![43_image_0.png](43_image_0.png)

In this appendix, we include extra details on the datasets, models, training, and results of the topology optimization (TO) experiments.

## H.1 **Dataset Details**

The GAN was trained exclusively on 32436 valid (connected) topologies generated through iterative optimization (SIMP) (Bendsøe & Kikuchi, 1988). The GAN-MC variants are trained on a medley of disconnected topologies generated by iterative optimization (2564) and either procedurally generated synthetic topologies (35000) or GAN-generated disconnected topologies (92307). Synthetic topologies were sourced directly from the classification dataset of (Mazé & Ahmed, 2023). The GAN used to generate disconnected topologies for rejection was the exact GAN benchmarked in the paper. Topologies were checked for continuity, and rejected samples were added to the negative dataset. All positive and negative data were multiplied by 8 using simple data augmentation consisting of rotations and flips before training any model. The various data sources are visualized below.

Figure 15: Visualization of positive data and various sources of negative data used to train GAN and GAN-MC on TO problems.
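The 8× augmentation described above corresponds to the dihedral symmetries of a square image: four rotations, each with an optional flip. A sketch:

```python
import numpy as np

def augment_8fold(topology):
    """Return the 8 dihedral variants of a 2D topology array:
    rotations by 0/90/180/270 degrees, each with a horizontal flip."""
    variants = []
    for k in range(4):
        rot = np.rot90(topology, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants
```

For a generic (asymmetric) topology, all eight variants are distinct, which is what multiplies the effective dataset size by 8.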
## H.2 **Model Details**

The model architectures of the GAN and GAN-MC are identical except for the final output dimension of the discriminator. Both the generator and discriminator are simple 5-layer convolutional neural networks. The generator has 3.6M parameters, while the discriminator has 2.8M parameters. For more architectural details, we refer the reader to the codebase. The latent dimension is 100, the batch size is 128, and the learning rate for both models is 2e-4, using the Adam optimizer.

## H.3 **Visualization**

![44_image_0.png](44_image_0.png)

We visualize several samples generated by GAN and GAN-MC, annotating constraint violations. Note that several violations are circled in some topologies. Each floating section contributes to the pixel-count invalidity score, but is not double-counted for the binary invalidity score. The topologies generated by GAN-MC have visibly fewer invalidities compared to the topologies generated by the GAN.

Figure 16: Topologies generated by GAN with constraint violations annotated.

![44_image_1.png](44_image_1.png)

Figure 17: Topologies generated by GAN-MC trained on rejected negative data with constraint violations annotated.
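The continuity check used to label topologies (and the per-section accounting behind the pixel-count invalidity score) can be sketched with a flood fill over solid pixels. The thresholded-array representation and function name are our own assumptions:

```python
import numpy as np
from collections import deque

def count_components(topology, threshold=0.5):
    """Count 4-connected components of solid pixels; a connected
    topology has exactly one, and each extra component is a
    floating section of material."""
    solid = topology > threshold
    seen = np.zeros_like(solid, dtype=bool)
    H, W = solid.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if solid[i, j] and not seen[i, j]:
                n += 1
                q = deque([(i, j)])
                seen[i, j] = True
                while q:  # breadth-first flood fill
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < H and 0 <= y < W and solid[x, y] and not seen[x, y]:
                            seen[x, y] = True
                            q.append((x, y))
    return n
```

A topology is accepted as connected when this count is 1; rejected samples can then be recycled as GAN-generated negative data, as described in H.1.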
Review 1:

Summary: This work introduces two new Negative-Data Generative Models (NDGMs) that match or surpass state-of-the-art methods. The proposed NDGMs can significantly outperform vanilla models using less data. Meanwhile, to evaluate them, the paper also includes an extensive benchmark set across many synthetic problems and engineering tasks.

Strengths and Weaknesses:

Strengths:
1. The proposed NDGMs significantly enhance constraint satisfaction across a variety of synthetic and challenging engineering problems.
2. NDGMs achieve superior performance using much less data compared to other models, highlighting their data efficiency.
3. The paper includes a comprehensive benchmark, providing a robust evaluation framework for NDGMs.

Weaknesses:
1. The use of dual and multi-class discriminators may result in higher computational costs and longer training times, particularly for large-scale datasets.
2. Although the methods are effective, the proposed multi-category discriminator and dual-discriminator methods are not technically significant innovations. However, the results are good.
3. A simple application of GANs in the field of Engineering Design. Whether it is inspiring for other fields is uncertain.

Requested Changes: No change request.

Broader Impact Concerns: I'm not sure how much of an impact this work will have on the field of Engineering Design. I am not an expert in the field of Engineering Design.

==================================================

Review 2:

Summary: For generative modeling with constraints, the paper proposes two approaches that use negative data to guide models toward constraint-satisfying outputs. With extensive benchmarks, the NDGM models outperform baseline models.

Strengths and Weaknesses:

**Strengths**
1) The NDGM formulations are quite easy to understand, and the authors provide extensive benchmarks.
2) The experiments showed improved constraint satisfaction rates and improved data efficiency.
**Weakness**
1) This paper mainly focuses on GAN models instead of diffusion models. It would be interesting to compare with other diffusion models.
2) In the main paper, there is no discussion of sampling diversity.

Requested Changes: It would be great to add a sample diversity discussion in the main paper, so that readers can understand the tradeoffs of the new approaches more easily.

Broader Impact Concerns: n/a

==================================================

Review 3:

Summary:
- contributions
  - Develops two approaches to incorporating negative data into generative models (specifically GANs):
    - multi-class model: Trains a multi-class classifier to distinguish positive, negative, and generated samples as an auxiliary GAN loss.
    - double discriminator: Trains two discriminators, one for positive samples and one for negative samples. A weighted combination of these is used in the GAN training loss.
  - Evaluates several negative data generative model variants (and baseline models that do not use negative samples) across toy examples, an engineering task suite, a block stacking task, and topology optimization for engineering design.
  - Evaluates some of the scaling properties obtained by adding negative samples.
- new knowledge
  - GANs outperform DDPM (representing the class of diffusion models) on engineering tasks with hard constraints.
  - Modest amounts of negative samples can substantially improve model performance in terms of satisfying hard constraints (at least for some tasks).
  - Negative samples can vary in how much they improve performance based on the form of the negative samples (being "easy" or "hard"). Specifically contrasts samples generated from heuristically generated synthetic data and samples generated by rejection sampling from a baseline generative model.

Strengths and Weaknesses:

# strengths
- Provides strong evidence for the importance of negative data to a breadth of engineering tasks.
The breadth of evidence is compelling and there is a healthy variety of alternative models tested.
- Contributes two conceptually straightforward GAN extensions with reasonably good empirical performance.

# weaknesses
- Lack of reporting of statistical variability in the main text (the supplement includes some)
  - There is no reporting of statistical tests of differences among models when claims are made about which model or models are better (worse) for a given task. The appendix tables include measures of statistical variation, so the text should include empirical estimates of effect sizes (and statistical significance) when comparing models. This will strengthen the claims about differences among models.
- The two contributed models are not included in all of the evaluations.
  - GAN-MC and GAN-DD-a (and possibly GAN-DD-b) should be included in all tables in the main text as they are the main technical contributions of the paper.
  - It would help to include the DDPM model variants in all results as well. Or a table clearly summarizing their performance. The claim that diffusion models are inferior on these tasks is interesting and provocative, so providing strong evidence of the consistency of this trend (or specificity to particular evaluations) will be valuable. The text also lacks a single easy reference summary for readers to assess the evidence in favor of the inferiority of diffusion models across the many evaluations.
- (minor) The benchmarks do not include any sense of what "good" performance is.
  - Is there a human baseline or other baseline to include to compare these generative methods to?
  - A "gold standard" accepted by engineers in the respective tasks?
  - This would strengthen the ecological validity of these tasks.

Requested Changes:

# critical
- In the main text tables please include the following:
  - Estimates of statistical variability for all results reported in tables.
  - Estimates of effect sizes and statistical significance for any differences claimed.
For example, if the claim is GAN-MC outperforms GAN, include an estimate of the magnitude of improvement.
- Include GAN-DO, GAN-DD, GAN-MC in all mainline reported results.
  - These are the contributed methods (and a strong baseline in the case of GAN-DO) that are claimed contributions, and thus merit comprehensive evaluation.

# strengthen
- Ideally also include DDPM in all the evaluations. The claim that DDPM is inferior is very interesting and merits more comprehensive evaluation and summarization.
- (less critical) The potential of negative samples to improve generative models is fascinating. I wonder about two scaling studies:
  - 1) Extending the results reported in Table 2 to include other methods (particularly DDPM given its popularity as a model elsewhere)
  - 2) Extending the scaling range to cases where models train on more negative samples than positive. In at least some engineering domains it is easy to generate negative samples but positive samples are scarce. How well do the sample efficiency results extend to the regime of far more negative than positive samples?
    - The GAN-DD results from Table 2 are quite promising in this regard. 4x as many positive samples (from 1K) only yields a ~0.1 percentage point improvement (if any) for models with 4K or 16K negative samples. If possible this same scaling would be interesting to apply to the topology optimization problem, given the ability to generate negative samples there.

Other notes to help strengthen the submission:
- Some experiments would benefit from matching the number of training samples used in total (positive and negative). Some comparisons are somewhat "unfair" in providing the model access to additional data (as negative samples) that is not used by vanilla baselines.
- (reiterating above) A general remark on the tables: Please report the number of trials for each table (ideally in the caption).
The tables should include reports of statistical variability (often included in the appendix) and some statistical test of differences for any differences among models being claimed. This remark applies to all the tables provided and claims about differences in the main text.
- Table 1
  - It would be interesting to see these results but training the vanilla models (GAN, VAE, DDPM) with additional positive samples to match the total number of samples given to the negative models. For example, providing 20k positive samples to the vanilla models to match the 10k positive and 10k negative. The idea is to be somewhat more "fair" in granting access to the same amount of data to learn from, where the only difference is whether the samples come from the negative distribution.
- Table 2
  - I find it interesting that 4K positive samples often outperform 16K for both problems and most models (except GAN-DD 1K). Any idea why this might be?
  - This table should include GAN-MC results for the same scaling test.
  - It would be interesting to see DDPM performance on this task: do they scale more poorly than GANs? Could the problem be that DDPMs require more data to start being effective (hence the negative main result in the experiments so far)?
  - This table should at least include GAN-MC results.
- Table 3
  - This table should include GAN-MC results.
- Table 4
  - This table should include GAN-DD results.
- Table 5
  - This table should include GAN-DO and GAN-DD for comparison.
  - How many pixels are in the output images? It is hard to interpret a score of 0.29 pixels without that information.
- Tables 14-19
  - These tables include "GAN-D2" and "GAN-RM", but I could not find either defined in the text.
  - These tables lack reporting of variability over multiple trials (in addition to lacking statistical tests of differences for the claims being made in the text).
- Section 5.1 - "Figure 5 plots the datasets and the generated distributions of several select models (vanilla GAN and the two NDGMs we propose)" - "Figure 5" should be "Figure 2" Broader Impact Concerns: Broader impact is not directly addressed in an explicit Broader Impact Statement section. It does not seem necessary as the goal of the work is enabling better control over the outputs of generative models. ==================================================
# Black-Box Prompt Learning For Pre-Trained Language Models

Shizhe Diao *sdiaoaa@connect.ust.hk*
The Hong Kong University of Science and Technology

Zhichao Huang *zhuangbx@connect.ust.hk*
The Hong Kong University of Science and Technology

Ruijia Xu *rxuaq@connect.ust.hk*
The Hong Kong University of Science and Technology

Xuechun Li *xul021@ucsd.edu*
University of California, San Diego

Yong Lin *ylindf@connect.ust.hk*
The Hong Kong University of Science and Technology

Xiao Zhou *xzhoubi@connect.ust.hk*
The Hong Kong University of Science and Technology

Tong Zhang∗ *tongzhang@ust.hk*
The Hong Kong University of Science and Technology

Reviewed on OpenReview: *https: // openreview. net/ forum? id= IvsGP7xRvm*

## Abstract

The increasing scale of general-purpose Pre-trained Language Models (**PLMs**) necessitates the study of more efficient adaptation across different downstream tasks. In this paper, we establish Black-box Discrete Prompt Learning (**BDPL**) to resonate with pragmatic interactions between the cloud infrastructure and edge devices. In particular, instead of fine-tuning the model in the cloud, we adapt PLMs by prompt learning, which efficiently optimizes only a few parameters of the discrete prompts. Moreover, we consider the scenario in which we do not have access to the parameters and gradients of the pre-trained models, except for their outputs given inputs. This black-box setting secures the cloud infrastructure from potential attacks and misuse that could cause a single-point failure, and is thus preferable to the white-box counterpart for current infrastructures. Under this black-box constraint, we apply a variance-reduced policy gradient algorithm to estimate the gradients of the parameters of the categorical distribution of each discrete prompt. In light of our method, user devices can efficiently tune their tasks by querying the PLMs within a bounded number of API calls.
Our experiments on RoBERTa and GPT-3 demonstrate that the proposed algorithm achieves significant improvement on eight benchmarks in a cloud-device collaboration manner. Finally, we conduct in-depth case studies to comprehensively analyze our method in terms of various data sizes, prompt lengths, training budgets, optimization objectives, prompt transferability, and explanations of the learned prompts.1

∗Joint with Google research

1The code is available at https://github.com/shizhediao/Black-Box-Prompt-Learning.

Table 1: Comparison of different tuning methods. **Frozen**: the pre-trained model is frozen and will not be updated. **Black-Box**: there is no access to the parameters and gradients of the pre-trained model. **Discrete**: the prompts are discrete tokens (compared with soft prompts). **Interpretable**: the prompts are readable and interpretable. **Learnable**: the prompts are parametric and learnable with explicit or estimated gradients (compared with manual prompts). N/A: not applicable since the corresponding descriptions are for prompt learning.
| Methods | Frozen | Black-Box | Discrete | Interpretable | Learnable |
|---------------------------------------|----------|-------------|------------|-----------------|-------------|
| Vanilla FineTuning | | | N/A | N/A | N/A |
| GPT-3's FineTuning3 | ✓ | ✓ | N/A | N/A | N/A |
| FeatureProbe (Peters et al., 2019) | ✓ | | ✓ | | |
| ManualPrompt | ✓ | ✓ | ✓ | ✓ | |
| InContextLearning (Brown et al., 2020) | ✓ | ✓ | ✓ | ✓ | |
| PromptTuning (Lester et al., 2021) | ✓ | | ✓ | | |
| P-Tuning v2 (Liu et al., 2021a) | ✓ | | ✓ | | |
| AutoPrompt (Shin et al., 2020) | ✓ | ✓ | ✓ | ✓ | |
| BBT (Sun et al., 2022) | ✓ | ✓ | ✓ | | |
| BDPL (ours) | ✓ | ✓ | ✓ | ✓ | ✓ |

## 1 **Introduction**

Large Pre-trained Language Models (PLMs) have demonstrated impressive versatility across a wide spectrum of downstream tasks, via either fine-tuning (FT) (Devlin et al., 2019; Liu et al., 2019; Lewis et al., 2020; Zhang et al., 2020; Yang et al., 2020; Diao et al., 2020; Pan et al., 2022) or prompt-based learning (PL) (Gao et al., 2021; Liu et al., 2021b; Schick & Schütze, 2021; Li & Liang, 2021; Liu et al., 2023). Traditionally, these two tuning paradigms are conducted in a white-box setting, where the parameters and gradients are accessible since the model is usually open-sourced and can be duplicated on user devices. Although white-box methods have made remarkable progress, the increasing scale of PLMs renders this setting implausible. Nowadays, huge PLMs opt to serve users as commercial APIs deployed in the cloud, such as OpenAI GPT-32. In particular, the service providers hide their model parameters and expose the query and prediction interface, which is termed the black-box setting in this paper. Although solving NLP problems with APIs in a black-box setting is considerably challenging, it is indeed aligned with the new norm of the current interplay between the cloud infrastructure and edge devices.
Specifically, from the position of cloud providers, it is reasonable to restrict access to pre-trained model parameters, since commercial, ethical, legal, security, and other concerns might be raised (Bommasani et al., 2021). First, under the white-box setting, the weaknesses and biases rooted in the underlying PLMs are at higher risk of being misused for harmful purposes. Second, the centralizing nature of PLMs exposes them to potential attacks that could cause a single-point failure (Krishna et al., 2020). To this end, the black-box setting is more compelling for securing the cloud infrastructure. As for the interests of user devices, the black-box paradigm grants them a more economical option. Otherwise, if we had access to the model's gradients, it would require transmitting gradients from cloud to device, causing high transmission costs. With this basic setting in mind, we further elaborate on a more pragmatic scenario, i.e., discrete prompt learning under the black-box constraint. In particular, we opt for the prompt learning mechanism instead of its fine-tuning counterpart, partly because prompt learning is more cost-effective, tuning fewer parameters, and can eliminate the gap between pre-training and downstream transfer. Moreover, black-box fine-tuning3 requires users to upload their private labels and save the fine-tuned model on the server, which strongly relies on the cloud provider as the single point of trust for the security of private data and models. On the contrary, our black-box prompt learning allows users to store the private labels and tune the prompt locally, preventing potential data leakage and protecting the users' commercial interests. For instance, in our setting, each user device can query the output from the cloud and then update its prompts separately on its own data.
2https://openai.com/api/

3https://beta.openai.com/docs/guides/fine-tuning

It is noteworthy that we optimize discrete prompts, which are more interpretable and thus empower users from different backgrounds to develop their own applications with PLMs. On the contrary, continuous prompt learning methods, e.g., BBT (black-box tuning (Sun et al., 2022)), yield prompts that are difficult to interpret. Moreover, these methods cannot be directly applied to prediction APIs because APIs only accept discrete inputs (e.g., GPT-3). However, the discrete nature of our BDPL allows commercial prediction APIs to directly take the learned prompt tokens without pain. Overall, our established Black-Box Discrete Prompt Learning (BDPL) is closely in accordance with the recent progress of huge PLMs; its comparison with the existing settings can be found in Table 1. As shown in the table, BDPL can be specified under the constraints that the PLMs are frozen and both their parameters and gradients are invisible, while sharing the virtue of optimizing discrete, interpretable, and learnable prompt tokens simultaneously. To embrace this paradigm shift of tuning PLMs, we design a policy-gradient-inspired framework that can be optimized without relying on the parameters and gradients of the pre-trained models. Specifically, we characterize the prompt learning procedure as a discrete token selection problem, where the proper prompt tokens are sampled according to a categorical distribution. Because the PLM's parameters are invisible and gradients cannot be back-propagated, the categorical distribution needs to be optimized by a gradient-free algorithm. We resort to the policy gradient algorithm to estimate the gradients without back-propagation. Moreover, to eliminate the high-variance issue of the policy gradient, we adopt a variance-reduced policy gradient estimator.
Experimental results on two kinds of datasets, *i.e.*, datasets without domain shift and datasets with domain shift, demonstrate the effectiveness of the proposed black-box discrete prompt learning, which significantly improves the performance over a generic pre-trained model and outperforms all baseline models on eleven datasets. The results confirm that incorporating black-box prompt learning for pre-trained models is an effective and efficient solution to PLM adaptation. We also present further analyses by investigating the effects of different training data sizes, prompt lengths, training budgets, and objectives. Our analyses demonstrate the robustness, scalability, and transferability of the proposed method. The contributions of our work are as follows:

- We propose a new setting called black-box prompt learning, where we only have access to the output of prediction APIs without the need to access the PLM's parameters or gradients. The black-box prompt is optimized without the requirement of tuning pre-trained models, saving the fine-tuning costs.
- We propose a new black-box discrete prompt learning (BDPL) method to solve this new problem, and demonstrate its effectiveness in dealing with domain shifts on various tasks.
- We conduct comprehensive analyses on eleven benchmark datasets under cloud-device collaboration settings, demonstrating its effectiveness for commercial APIs. BDPL has a much wider range of applications than previous methods, such as transfer learning, model personalization, and decentralized training.

## 2 **Approach**

In our setting, the input is a sentence $S = s_1 s_2 \cdots s_l \cdots s_m$ with $s_l$ indicating the $l$-th token, and the output corresponds to its category $y$. Our goal is to learn $n$ discrete prompt tokens $T = t_1 t_2 \cdots t_i \cdots t_n = V[j_1] V[j_2] \cdots V[j_i] \cdots V[j_n]$, which are prepended to the input sentence to create the user query $[T, S]$.
Note that $V$ represents the vocabulary list consisting of a total of $N$ tokens, and $t_i = V[j_i]$ is the $i$-th token in $T$ and the $j_i$-th token in $V$. The overall architecture is shown in Figure 1. During the black-box training, we freeze the prediction model $G$ with a stop-gradient strategy and only optimize the discrete prompts $T$. Here, we assume an independent categorical distribution for each prompt index $j_i \sim \mathrm{Cat}(p_i)$, where the random variable $j_i$ is sampled with the probability distribution $p_i = [p_{i,1}, \cdots, p_{i,N}]$ over the $N$ token indexes, where $p_i \in \mathcal{C}$ and $\mathcal{C} = \{p : \|p\|_1 = 1,\, 0 \preceq p \preceq 1\}$. Since the $p_i$ are independent of each other, the joint probability of the whole discrete prompt is $P(T) = \Pi_{i=1}^{n} P(t_i) = \Pi_{i=1}^{n} p_{i,j_i}$. Because the prediction model's parameters are invisible and gradients cannot be back-propagated to the prompts, it is no longer possible to directly update the prompts by back-propagating through $\nabla_{p_i}\mathcal{L}(G([T, S], y))$, where $y$ is the label. Inspired by the policy gradient algorithm in discrete optimization, we resort to estimating the gradients without back-propagation to accomplish **black-box** training.

![3_image_0.png](3_image_0.png)

Figure 1: Schematic illustrations of the comparisons across various tuning paradigms and the cloud-device interplay at the training phase of our algorithm. **Left:** (a) Vanilla fine-tuning and prompt tuning can be conducted on user devices in a white-box manner since the PLMs are feasible to be duplicated at user devices. After tuning, users can still access services of PLMs on the device. (b) Increasing scale hinders the democratizing of PLMs. In GPT-3's fine-tuning, users have to upload the input and associated labels to the server. After fine-tuning, the model is saved on the server. (c) In our black-box discrete prompt learning setting, users send queries to the server and then rely on the PLMs' predictions to update their discrete prompts on the devices using gradient-free optimization.
**Right:** In our framework, the user query is created by concatenating the discrete prompt and the input sentence, where the prompt tokens are sampled from their categorical distributions respectively. After calculating the loss between the PLMs' predictions and the input labels, we apply a variance-reduced policy gradient algorithm to estimate the gradient of the categorical distribution and update it accordingly.

For brevity, we denote $\mathcal{L}(G([T, S], y))$ as $\mathcal{L}(T)$, since $S, y$ can be deemed constants here. By virtue of the policy gradient estimator (PGE), we can optimize the loss function via forward propagation with:

$$\mathbb{E}_{T}\left[\mathcal{L}(T)\right]=\int\mathcal{L}(T)P(T)\,\mathrm{d}T, \quad (1)$$

and estimate the gradient of $p_i$ by:

$$\begin{aligned}\nabla_{p_{i}}\mathbb{E}_{T}\left[\mathcal{L}(T)\right]&=\int\mathcal{L}(T)\nabla_{p_{i}}P(T)\,\mathrm{d}T\\ &=\int\mathcal{L}(T)\frac{P(T)}{P(T)}\nabla_{p_{i}}P(T)\,\mathrm{d}T\\ &=\int P(T)\mathcal{L}(T)\nabla_{p_{i}}\log P(T)\,\mathrm{d}T\\ &=\mathbb{E}_{P(T)}\left[\mathcal{L}(T)\nabla_{p_{i}}\log\Pi_{j=1}^{n}P(t_{j})\right]\\ &=\mathbb{E}_{P(T)}\left[\mathcal{L}(T)\nabla_{p_{i}}\log P(t_{i})\right]\end{aligned} \quad (2)$$

The $j$-th component of $\nabla_{p_i}\log P(t_i)$ can be solved explicitly by:

$$\nabla_{p_{i,j}}\log P(t_{i})=\nabla_{p_{i,j}}\log p_{i,j_{i}} \quad (3)$$

Algorithm 1: The black-box discrete optimization procedure.

Require: Input batch $S$, label batch $Y$, parameters of the categorical distributions $p_1, \cdots, p_n$, prediction model $G$, loss function $\mathcal{L}$.
1: **for** $k \le I$ **do**
2: $\quad$ Sample $j_1^{(k)} \sim \mathrm{Cat}(p_1), \cdots, j_n^{(k)} \sim \mathrm{Cat}(p_n)$
3: $\quad$ $T^{(k)} = t_1^{(k)} \cdots t_n^{(k)} = V[j_1^{(k)}] \cdots V[j_n^{(k)}]$
4: **end for**
5: $\mathcal{L}_{avg} = \frac{1}{I}\sum_{k=1}^{I}\mathcal{L}(G[T^{(k)}, S], Y)$
6: **for** $i \le n$ **do**
7: $\quad$ $g_{p_i}^{vr} = \frac{1}{I-1}\sum_{k=1}^{I}\nabla_{p_i}\log P(t_i^{(k)})\,(\mathcal{L}(G[T^{(k)}, S], Y) - \mathcal{L}_{avg})$
8: $\quad$ $p_i \leftarrow \mathrm{proj}_{\mathcal{C}}(p_i - \eta \cdot g_{p_i}^{vr})$
9: **end for**
10: **return** $p_1, \cdots, p_n$

When $j = j_i$, it is obvious that $\nabla_{p_{i,j}}\log P(t_i) = \frac{1}{p_{i,j_i}}$. When $j \neq j_i$, equation (3) is calculated by:

$$\begin{aligned}\nabla_{p_{i,j}}\log P(t_i) &= \nabla_{p_{i,j}}\log\Big(1-\sum_{k=1,k\neq j_i}^{N} p_{i,k}\Big)\\ &= -\frac{1}{1-\sum_{k=1,k\neq j_i}^{N} p_{i,k}}\\ &= -\frac{1}{p_{i,j_i}}\end{aligned} \quad (4)$$

However, consistent with previous policy gradient applications (Sutton et al., 1999; Rezende et al., 2014; Jang et al., 2017; Zhou et al., 2021), we observed that the conventional PGE suffers from high variance, which makes it challenging to converge in practice. Therefore, we adopt a variance-reduced policy gradient estimator (VR-PGE), as described in Williams (1992); Dong et al. (2020); Zhou et al. (2021). The estimated gradient is calculated by:

$$g_{p_{i}}^{vr}=\frac{1}{I-1}\sum_{k=1}^{I}\left(\mathcal{L}(T^{(k)})-\frac{1}{I}\sum_{j=1}^{I}\mathcal{L}(T^{(j)})\right)\nabla_{p_{i}}\log P(t_{i}^{(k)}) \quad (5)$$

where $T^{(k)}, k = 1, \cdots, I$ are sampled independently from $P(T)$. Thus, the prompt token distribution $p_i$ can be updated by a projected stochastic gradient descent algorithm:

$$p_{i}\leftarrow\mathrm{proj}_{\mathcal{C}}\left(p_{i}-\eta\cdot g_{p_{i}}^{vr}\right),\quad i=1,\cdots,n \quad (6)$$

where $\eta$ is the learning rate of prompt learning, $I$ is the sample size, and $\mathrm{proj}_{\mathcal{C}}$ is the projection calculation (details are presented in the Appendix). Here we introduce the detailed training procedure for updating the prompts using our proposed VR-PGE, whose mini-batch version is displayed in Algorithm 1.
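To make the update concrete, the steps of Algorithm 1 with the variance-reduced estimate of Equation (5) and the projection of Equation (6) can be sketched in plain Python as follows. The toy black-box loss, the function names, and the choice of a Euclidean projection onto the simplex are our own illustrative assumptions, not the authors' released implementation.

```python
import random

def sample_categorical(rng, p):
    """Draw an index j ~ Cat(p) from a categorical distribution (p sums to 1)."""
    r, acc = rng.random(), 0.0
    for j, pj in enumerate(p):
        acc += pj
        if r < acc:
            return j
    return len(p) - 1

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (proj_C in Eq. (6))."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        css += uk
        t = (css - 1.0) / k
        if uk - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def vr_pge_step(ps, loss_fn, I=8, eta=0.05, rng=None):
    """One mini-batch update of Algorithm 1: sample I prompt sequences, average
    their losses, and apply the variance-reduced gradient estimate of Eq. (5).
    loss_fn stands in for the black-box loss L(G[T, S], Y)."""
    rng = rng or random.Random()
    samples = [[sample_categorical(rng, p) for p in ps] for _ in range(I)]
    losses = [loss_fn(s) for s in samples]
    l_avg = sum(losses) / I                      # line 5 of Algorithm 1
    new_ps = []
    for i, p in enumerate(ps):
        g = [0.0] * len(p)
        for k in range(I):
            ji = samples[k][i]
            inv = 1.0 / max(p[ji], 1e-8)
            w = (losses[k] - l_avg) / (I - 1)    # variance-reduced weight
            for j in range(len(p)):
                # grad of log P(t_i): 1/p_{i,j_i} at j = j_i, else -1/p_{i,j_i} (Eqs. (3)-(4))
                g[j] += w * (inv if j == ji else -inv)
        new_ps.append(project_simplex([pj - eta * gj for pj, gj in zip(p, g)]))
    return new_ps
```

Iterating `vr_pge_step` over the batches then realizes the projected stochastic gradient descent of Equation (6) without ever back-propagating through the prediction model.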
Assuming the input data is divided into $B$ batches, within each batch we perform $I$ iterations of sampling to reduce the variance of the estimation. Specifically, at the $k$-th iteration within each batch, we first sample the sequence of prompt tokens $T^{(k)} = V[j_1^{(k)}] V[j_2^{(k)}] \cdots V[j_n^{(k)}]$ according to the joint distribution $P(T)$. Once $T^{(k)}$ is created, we prepend it to the input sentence $S$ and feed the query $[T^{(k)}, S]$ into the black-box pre-trained language model $G$, which returns the prediction. In light of the model prediction and the ground-truth label $Y$, we then calculate the loss $\mathcal{L}(G[T^{(k)}, S], Y)$. The estimated gradients $g_{p_i}^{vr}$ for each $p_i$ are then obtained by executing Equation (5) after sampling all $I$ prompt sequences for the training batch. Finally, the categorical distributions are updated by Equation (6).

**Vocabulary Construction** A natural question is how to construct the vocabulary $V$ and what its size $N$ should be. Inspired by the observation in Diao et al. (2021), which revealed the importance of domain-specific and task-specific words and ngrams in representation learning, we introduce such important ngrams as prompt candidates. Therefore, we adopt pointwise mutual information (PMI) to construct the vocabulary of candidate prompt tokens in an unsupervised way. For each sentence in the training set, we calculate the PMI by

$$\operatorname{PMI}(\bar{x},\widehat{x})=\log\frac{p(\bar{x}\widehat{x})}{p(\bar{x})p(\widehat{x})}, \quad (7)$$

where $\bar{x}$ and $\widehat{x}$ are two adjacent words in the sentence, and $p(x)$ is the probability of an n-gram $x$. If the PMI score between these two adjacent words is high, they have a high probability of co-occurrence and are more likely to form an n-gram, suggesting they are good collocation pairs. If the PMI score is lower than a threshold $\sigma$, a delimiter is inserted between $\bar{x}$ and $\widehat{x}$. As a result, the sentence will be segmented by several delimiters.
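A minimal sketch of this PMI-based segmentation is below, estimating unigram and bigram probabilities from a token sequence and splitting wherever the PMI of adjacent words falls below the threshold $\sigma$. The function name and the corpus-level probability estimates are our own assumptions; the paper additionally filters the resulting segments by a frequency threshold $f$.

```python
from collections import Counter
import math

def pmi_segment(tokens, sigma=0.0):
    """Split a token sequence wherever PMI(adjacent words) < sigma (Eq. (7)),
    returning the resulting n-gram segments as candidate prompt tokens."""
    total = len(tokens)
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    n_bi = max(len(tokens) - 1, 1)

    def pmi(a, b):
        p_ab = bi[(a, b)] / n_bi
        p_a, p_b = uni[a] / total, uni[b] / total
        return math.log(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

    segments, cur = [], [tokens[0]]
    for a, b in zip(tokens, tokens[1:]):
        if pmi(a, b) >= sigma:
            cur.append(b)                   # high PMI: extend the current n-gram
        else:
            segments.append(" ".join(cur))  # low PMI: insert a delimiter
            cur = [b]
    segments.append(" ".join(cur))
    return segments
```

In a full pipeline, the segments collected over all training sentences would then be filtered by frequency to form the candidate list $V$ of size $N$.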
Finally, we obtain a list of ngrams $V$ by extracting the consecutive words after segmentation that appear with a frequency of at least $f$. As for the size $N$, we observe that a large $N$ causes an unstable optimization process and even divergence. Therefore, we keep $N$ between 50 and 200.

## 3 **Experimental Settings**

In this section, we first introduce the datasets and evaluation metrics (§3.1), followed by the baseline models (§3.2). Lastly, we describe the implementation details (§3.3).

## 3.1 **Datasets And Evaluation Metrics**

In order to examine the model's ability in generic classification tasks as well as domain-specific classification tasks, we include seven datasets from the GLUE benchmark (Wang et al., 2019): MNLI (Williams et al., 2018), QQP (Iyer et al., 2017), SST-2 (Socher et al., 2013), MRPC (Dolan & Brockett, 2005), CoLA (Warstadt et al., 2019), QNLI (Wang et al., 2019), and RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), as well as four domain-specific datasets: CitationIntent (Jurgens et al., 2018), SciERC (Luan et al., 2018), RCT (Dernoncourt & Lee, 2017), and HyperPartisan (Kiesel et al., 2019), from specific domains including computer science, biomedical science, and news, following Gururangan et al. (2020); Diao et al. (2021). The statistics of these datasets are shown in Table 2. Considering the data sparsity issue and large query costs4 in cloud-device collaboration, we conduct our experiments in a popular and more realistic setting - few-shot learning, where huge models have shown their powerful ability (Brown et al., 2020). We follow Perez et al. (2021) to simulate a true k-shot learning setting. We randomly sample k data from the original training set for each class to construct the training set and another, different k data to construct the validation set. The original validation set will be used as the test set.
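The few-shot sampling protocol above can be sketched as follows; `k_shot_split` and all names are our own illustrative assumptions, not the authors' code.

```python
import random
from collections import defaultdict

def k_shot_split(dataset, k, seed=0):
    """Simulate a true k-shot setting (following Perez et al., 2021): draw k examples
    per class for training and a disjoint k per class for validation; the original
    validation set then serves as the test set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append((x, y))
    train, dev = [], []
    for y in sorted(by_class):
        items = by_class[y]
        if len(items) < 2 * k:
            raise ValueError(f"class {y} needs at least 2k = {2 * k} examples")
        rng.shuffle(items)
        train.extend(items[:k])
        dev.extend(items[k:2 * k])
    return train, dev
```

Fixing the seed makes the sampled k-shot splits reproducible across runs, which matters when averaging results over several random seeds.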
Because the sizes of the QQP and RCT validation sets are too large, we randomly sample 1K data to save costs. We adopt the Matthews Correlation Coefficient for CoLA, the F1-score for QQP, MRPC, CitationIntent, SciERC, HyperPartisan, and RCT, and accuracy for SST-2, RTE, QNLI, IMDB, CR, MR, and MPQA, following Wang et al. (2019); Diao et al. (2021). MNLI results are an average of MNLI-match and MNLI-mismatch accuracy.

## 3.2 **Baselines**

For GPT-3-based models, because previous white-box tuning methods and black-box continuous prompt tuning methods (e.g., BBT) cannot be applied to GPT-3, we compare our model with the following baselines.

- **GPT-3's FineTuning**5: a GPT-3 inference API that is fine-tuned entirely on a labeled dataset (black-box).
- **ManualPrompt**: a GPT-3 inference API with manually composed prompts to conduct the zero-shot evaluation. The human-written prompts are shown in Appendix A.5 (black-box).
- **InContextLearning** (Brown et al., 2020): a GPT-3 inference API with a set of examples containing training sentences and labels as the input prefix, which is then prepended to the input texts (black-box).

For RoBERTa-based models, we adopt RoBERTa-large as the backbone and the following tuning methods.

- **Vanilla FineTuning** (Liu et al., 2019): a RoBERTa-large model that is fine-tuned entirely on a labeled dataset (white-box).

4For example, a single training epoch on 10,000 sentences with 300,000 tokens costs 6 USD for GPT-3-Davinci, not to mention tens of hundreds of rounds of training.

5https://beta.openai.com/docs/guides/fine-tuning

Table 2: The statistics of seven datasets in the generic domain and four datasets in the specific domain. CI, SE, and HP denote CitationIntent, SciERC, and HyperPartisan, respectively. |L|: number of classes for classification tasks. Note that we sample the few-shot training split and development split from the original training split for the few-shot setting, as described in Section 3.1.
| Dataset | |L| | |Train| | |Dev| | |Test| | Type | Metrics | Domain | |-----------|-------|-----------------------|---------|----------|-------------------------|----------------|------------------| | | | Generic Tasks | | | | | | | MNLI | 3 | 393K | 9.8K | 9.8K | NLI | acc. | fiction, reports | | QQP | 2 | 364K | 40K | 391K | paraphrase | F1 | Quora | | SST-2 | 2 | 6.7K | 872 | 1.8K | sentiment | acc. | movie reviews | | MRPC | 2 | 3.7K | 408 | 1.7K | paraphrase | F1 | news | | CoLA | 2 | 8.6K | 1K | 1K | acceptability | Matthews corr. | books, articles | | QNLI | 2 | 105K | 5.5K | 5.5K | NLI | acc. | Wikipedia | | RTE | 2 | 2.5K | 277 | 3K | NLI | acc. | news, Wikipedia | | | | Domain-Specific Tasks | | | | | | | CI | 6 | 1.6K | 114 | 139 | citation intent | F1 | computer science | | SE | 7 | 3.2K | 455 | 974 | relation classification | F1 | computer science | | RCT | 5 | 180K | 30K | 30K | abstract sentence roles | F1 | biomedical | | HP | 2 | 516 | 64 | 65 | review helpfulness | F1 | reviews | - **PromptTuning** (Lester et al., 2021): a frozen RoBERTa-large model with continuous prompt embeddings prepended to the input, and learned by gradients (white-box). - **P-Tuning v2** (Liu et al., 2021a): a frozen RoBERTa-large model with continuous prompt embeddings prepended to each layer, and learned by gradients (white-box). - **AutoPrompt** (Shin et al., 2020): a frozen RoBERTa-large model with discrete prompts optimized based on gradient-guided search (white-box). - **FeatureProbe** (Peters et al., 2019): a frozen RoBERTa-large model outputs the features given inputs and a newly added classification layer is trained with the gradients (white-box). - **ManualPrompt**: a frozen RoBERTa-large model with manually composed prompts to conduct the zero-shot evaluation. The human-written prompts are shown in Appendix A.5 (black-box). 
- **InContextLearning** (Brown et al., 2020): a frozen RoBERTa-large model with a set of examples containing training sentences and labels as the input prefix, which is then prepended to the input texts (black-box).
- **BBT** (Sun et al., 2022): a frozen RoBERTa-large model with continuous prompts that are optimized by the covariance matrix adaptation evolution strategy (black-box).
- **RLPrompt** (Deng et al., 2022): a frozen RoBERTa-large model with discrete prompts that are generated by a policy network and optimized by a reward function (black-box).

## 3.3 **Implementation**

For GPT-3, we conduct experiments with four variants: GPT-3-Ada, GPT-3-Babbage, GPT-3-Curie, and GPT-3-Davinci. The batch size for training and evaluation is set to 4 to satisfy the query length limit (i.e., 2048 tokens). We call the APIs directly from OpenAI's services6. For RoBERTa-large, we initialize the model with pre-trained weights from Huggingface's Transformers library7. The batch sizes for training and evaluation are set to 16 and 32, respectively. The number of API calls is limited to 8000 across all datasets. For BDPL, we optimize the prompts with AdamW (Loshchilov & Hutter, 2019) for 30 epochs with a learning rate of 1 × 10−4. The prompt length is 50, and the size of the candidate prompt list N is 100. Other hyper-parameters are detailed in Appendix A.3.

6https://openai.com/api/

7https://github.com/huggingface/transformers

| Method | MNLI | QQP | SST-2 | MRPC | CoLA | QNLI | RTE | CI | SE | RCT | HP | Avg. | $Cost |
|--------|------|-----|-------|------|------|------|-----|-----|-----|-----|-----|------|-------|
| **GPT-3 Ada** | | | | | | | | | | | | | |
| FT | 38.5 (0.8) | 44.5 (1.4) | 71.6 (1.2) | 45.7 (4.3) | 0.0 (0.0) | 49.8 (2.1) | 52.7 (1.2) | 27.7 (3.2) | 3.5 (0.3) | 57.0 (4.2) | 24.4 (1.4) | 37.8 | 5.6 |
| MP | 26.5 (0.9) | 31.2 (1.8) | 63.1 (1.3) | 35.6 (2.1) | 0.0 (0.0) | 45.6 (1.5) | 47.3 (2.0) | 26.9 (2.0) | 1.2 (0.6) | 15.8 (2.4) | 15.2 (1.8) | 28.0 | 0.5 |
| ICL | 36.3 (0.7) | 40.3 (1.3) | 64.6 (0.8) | 40.5 (1.5) | 1.3 (1.4) | 48.8 (1.7) | 48.7 (1.3) | 28.2 (0.3) | 2.5 (0.4) | 22.7 (2.6) | 20.0 (2.3) | 32.2 | 5.7 |
| BDPL | 37.1 (0.7) | 45.1 (0.5) | 68.8 (0.9) | 43.2 (2.5) | 2.0 (0.4) | 51.2 (0.4) | 52.7 (1.3) | 28.3 (1.5) | 3.8 (0.3) | 45.7 (2.9) | 22.4 (1.7) | 36.4 | 3.2 |
| **GPT-3 Babbage** | | | | | | | | | | | | | |
| FT | 40.7 (0.5) | 46.2 (1.4) | 87.4 (1.5) | 66.4 (1.7) | 0.3 (0.1) | 50.9 (0.2) | 52.3 (1.0) | 5.2 (0.4) | 4.1 (1.0) | 61.1 (5.2) | 33.3 (1.3) | 40.7 | 8.5 |
| MP | 28.9 (0.8) | 34.1 (1.2) | 83.5 (1.2) | 62.4 (3.2) | 0.2 (0.1) | 48.8 (1.4) | 51.2 (0.6) | 31.4 (2.8) | 1.7 (0.5) | 21.7 (2.3) | 27.2 (1.5) | 35.6 | 0.6 |
| ICL | 35.7 (0.9) | 45.2 (1.9) | 86.2 (1.4) | 65.4 (1.7) | 2.6 (0.0) | 48.3 (0.9) | 51.5 (0.4) | 13.1 (1.5) | 2.5 (0.9) | 36.7 (1.8) | 32.2 (1.4) | 38.1 | 7.1 |
| BDPL | 41.0 (0.6) | 50.4 (1.5) | 86.4 (1.1) | 67.7 (1.2) | 2.8 (0.1) | 52.1 (0.3) | 53.1 (1.0) | 40.2 (2.5) | 3.2 (0.8) | 45.2 (2.2) | 30.4 (2.3) | 43.0 | 4.0 |
| **GPT-3 Curie** | | | | | | | | | | | | | |
| FT | 42.2 (2.8) | 53.3 (1.4) | 88.9 (3.1) | 76.3 (2.1) | 3.4 (1.3) | 49.0 (1.3) | 54.5 (1.7) | 28.4 (1.9) | 5.1 (0.8) | 50.6 (1.3) | 43.3 (1.5) | 45.0 | 42.3 |
| MP | 34.5 (1.9) | 44.3 (2.1) | 84.2 (1.4) | 73.3 (1.2) | 2.0 (0.6) | 47.2 (0.9) | 44.0 (1.2) | 19.2 (1.3) | 2.8 (0.3) | 31.0 (1.4) | 37.1 (1.5) | 38.1 | 2.5 |
| ICL | 38.0 (2.1) | 47.2 (2.3) | 87.0 (1.9) | 81.0 (1.3) | 2.8 (0.0) | 46.8 (1.1) | 46.2 (1.6) | 15.2 (1.9) | 4.8 (1.3) | 50.1 (2.3) | 39.0 (2.3) | 41.6 | 28.5 |
| BDPL | 42.5 (1.9) | 52.0 (1.5) | 88.0 (2.2) | 82.6 (1.1) | 4.0 (1.3) | 50.1 (0.8) | 55.8 (1.3) | 25.5 (1.8) | 3.4 (1.8) | 49.6 (2.4) | 39.4 (1.6) | 44.8 | 16.2 |
| **GPT-3 Davinci** | | | | | | | | | | | | | |
| FT | 60.2 (3.8) | 67.8 (2.1) | 92.9 (2.4) | 84.6 (1.3) | 55.3 (1.5) | 54.2 (2.5) | 57.0 (1.2) | 35.4 (1.7) | 10.3 (2.3) | 51.6 (2.7) | 60.1 (1.8) | 57.2 | 423.2 |
| MP | 40.2 (2.5) | 39.2 (1.6) | 86.7 (2.7) | 69.7 (2.1) | 55.2 (2.4) | 28.0 (1.3) | 55.3 (1.9) | 25.6 (2.0) | 4.9 (1.0) | 26.8 (1.9) | 52.6 (1.4) | 44.0 | 25.0 |
| ICL | 52.7 (2.9) | 55.6 (3.4) | 87.2 (3.3) | 82.4 (1.7) | 56.7 (2.0) | 17.9 (1.7) | 56.6 (2.3) | 30.1 (3.0) | 9.2 (1.5) | 44.4 (2.2) | 55.4 (2.5) | 49.8 | 206.1 |
| BDPL | 54.6 (2.4) | 57.8 (2.1) | 89.3 (3.0) | 83.4 (1.4) | 58.4 (1.4) | 56.2 (1.5) | 57.2 (0.8) | 34.6 (2.0) | 6.6 (2.1) | 48.8 (2.5) | 58.5 (2.4) | 55.0 | 161.5 |

Table 3: The overall performance of black-box prompt learning and baselines on eleven datasets with GPT-3. We report average scores across three random seeds, with standard deviations in parentheses. Avg. denotes the average score across all tasks. $Cost denotes the monetary cost in US dollars for calling GPT-3's API during training and inference. FT: GPT-3's FineTuning. MP: ManualPrompt. ICL: InContextLearning.

## 4 **Experimental Results**

The overall results on eleven datasets are reported in Tables 3 and 4. We first verify our proposed method's effectiveness in a purely black-box setting with GPT-3 APIs. As Table 3 shows, BDPL is clearly superior across the eleven datasets. Compared with ManualPrompt and InContextLearning, BDPL achieves significant improvements of 8.35% and 4.35%, averaged over the Ada, Babbage, Curie, and Davinci models. BDPL also achieves performance comparable to GPT-3's fine-tuning, which incurs high monetary costs and requires uploading the user's data. With Babbage, BDPL even outperforms GPT-3's fine-tuning. Since the experiments are conducted in the few-shot setting, where only a small amount of data is available to fine-tune the model's parameters, overfitting can seriously deteriorate the performance of large models like GPT-3, so that fine-tuning falls behind BDPL. Although careful adjustment of the fine-tuning procedure may mitigate overfitting and improve accuracy, it requires substantial manual effort and monetary cost, which is impractical for cloud-device collaboration.
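The $Cost column reflects that, in this black-box setting, every training and evaluation step is an API round trip: sampled discrete prompt tokens are prepended to the raw input, the text is sent to the prediction API, and label-word probabilities are read back. A minimal sketch of that round trip, with the API mocked and all candidate tokens and verbalizer entries made up for illustration (none of these names come from the paper):

```python
import random

# Hypothetical candidate prompt tokens and verbalizer (label words).
CANDIDATES = ["news", "review", "analysis", "sentence", "topic"]
VERBALIZER = {"positive": "great", "negative": "terrible"}

def mock_api(text):
    """Stand-in for a prediction API: returns a probability per label
    word. A real call would query e.g. a GPT-3 endpoint instead."""
    score = 1.0 if "delight" in text.lower() or "great" in text.lower() else 0.0
    return {"great": score, "terrible": 1.0 - score}

def query(prompt_indices, x):
    """One black-box round trip: prepend the sampled discrete prompt
    tokens to the input and read off the most likely label."""
    prompt = " ".join(CANDIDATES[i] for i in prompt_indices)
    probs = mock_api(f"{prompt} {x}")
    return max(VERBALIZER, key=lambda label: probs[VERBALIZER[label]])

random.seed(0)
prompt_indices = [random.randrange(len(CANDIDATES)) for _ in range(3)]
print(query(prompt_indices, "An absolute delight from start to finish."))
# prints "positive"
```

In the real pipeline, the mocked call would be a request to the prediction service, and the prompt indices would be sampled from the learned categorical distributions rather than uniformly at random.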
Moreover, we observe that ManualPrompt and InContextLearning with less capable versions of GPT-3 (e.g., Ada and Babbage) fail on some challenging datasets (e.g., CoLA and SE), whereas BDPL performs well on them. As the model capacity increases (from Ada to Davinci), ManualPrompt and InContextLearning can also solve these tasks, which is consistent with recent observations of large models' emergent abilities (Wei et al., 2022). From this perspective, our method offers another option besides increasing model size: an efficient way for less capable models to perform challenging tasks.

| Method | MNLI | QQP | SST-2 | MRPC | CoLA | QNLI | RTE | CI | SE | RCT | HP | Avg. |
|--------|------|-----|-------|------|------|------|-----|-----|-----|-----|-----|------|
| **White-Box Methods** | | | | | | | | | | | | |
| FT | 50.8 (1.2) | 60.8 (1.9) | 86.5 (2.0) | 78.4 (1.3) | 20.4 (1.9) | 53.2 (1.8) | 55.6 (2.5) | 37.4 (1.7) | 23.1 (1.6) | 45.2 (5.2) | 55.5 (2.3) | 51.5 |
| PromptTuning | 36.5 (0.9) | 50.2 (1.5) | 70.7 (2.6) | 52.7 (3.4) | 8.0 (0.7) | 53.5 (1.6) | 56.3 (1.6) | 34.4 (2.6) | 28.6 (2.5) | 36.7 (3.1) | 47.4 (3.2) | 43.2 |
| P-Tuning v2 | 44.2 (1.7) | 57.4 (2.4) | 80.4 (1.2) | 62.4 (2.0) | 8.9 (2.7) | 51.5 (1.3) | 53.1 (1.7) | 31.4 (4.2) | 24.6 (2.5) | 35.4 (3.9) | 55.4 (4.2) | 45.9 |
| AutoPrompt | 40.1 (1.5) | 45.7 (1.3) | 71.5 (2.1) | 63.8 (3.1) | 5.4 (2.3) | 50.2 (1.3) | 52.1 (1.6) | 27.9 (2.9) | 21.5 (2.5) | 29.6 (2.5) | 40.6 (3.8) | 40.8 |
| FeatureProbe | 46.5 (1.8) | 56.3 (1.1) | 79.5 (1.6) | 68.9 (1.7) | 15.6 (1.2) | 50.5 (0.2) | 54.1 (2.5) | 22.3 (2.0) | 20.8 (3.6) | 31.2 (4.7) | 60.1 (2.6) | 46.0 |
| **Black-Box Methods** | | | | | | | | | | | | |
| ManualPrompt | 35.9 (1.3) | 49.8 (0.9) | 77.2 (2.1) | 70.4 (1.6) | 0.6 (0.0) | 49.2 (1.1) | 48.2 (0.6) | 12.3 (2.4) | 9.6 (1.4) | 11.7 (1.5) | 35.7 (1.6) | 36.4 |
| ICT | 37.2 (1.6) | 50.1 (0.9) | 82.8 (2.1) | 72.1 (2.3) | 1.1 (0.4) | 50.8 (0.5) | 49.3 (2.3) | 14.6 (1.7) | 9.2 (1.5) | 25.8 (1.6) | 38.5 (2.4) | 39.2 |
| BBT | 40.6 (2.5) | 55.2 (3.1) | 85.3 (3.9) | 66.4 (3.7) | 5.5 (2.7) | 55.4 (3.2) | 52.6 (2.2) | 17.4 (5.4) | 16.4 (0.9) | 31.7 (1.5) | 47.2 (4.8) | 43.1 |
| RLPrompt | 42.8 (3.2) | 53.7 (2.2) | 88.4 (1.9) | 68.9 (2.1) | 5.0 (1.1) | 52.6 (1.4) | 51.8 (1.8) | 19.2 (3.3) | 18.8 (1.5) | 30.1 (2.7) | 44.9 (2.4) | 43.3 |
| BDPL | 42.5 (1.8) | 56.4 (1.9) | 87.6 (2.1) | 78.1 (3.7) | 4.6 (1.2) | 53.1 (1.1) | 53.5 (0.9) | 24.0 (1.3) | 21.5 (2.0) | 36.6 (3.2) | 45.6 (3.4) | 45.8 |

Table 4: The overall performance of black-box prompt learning and baselines on eleven datasets with RoBERTa-large. We report average scores across three random seeds, with standard deviations in parentheses. Avg. denotes the average score across all tasks. FT: Vanilla FineTuning. ICT: InContextLearning.

In addition to the auto-regressive model, we also conduct experiments with an encoder-only model, RoBERTa-large. Because the weights of RoBERTa-large are released and gradients can be leveraged, several white-box baselines are included for comparison. First, our model outperforms all black-box methods, demonstrating the effectiveness of our proposed black-box discrete prompt optimization. Second, BDPL achieves performance comparable to white-box prompt-based methods with both discrete and continuous prompts. Notably, BDPL even outperforms some white-box methods (e.g., PromptTuning and AutoPrompt). We attribute this to the overfitting of white-box methods on the given few-shot examples, whereas BDPL does not suffer from severe overfitting thanks to its exploration mechanism. We perform an ablation study in Section E to reveal the effect of data size on accuracy. Among the black-box methods, which remain applicable when gradients are unavailable, InContextLearning and BBT achieve 2.82% and 6.66% average improvements over ManualPrompt. BDPL outperforms the previous black-box methods BBT and RLPrompt by an average of 2.7% and 2.5%, respectively.
Compared with BBT, BDPL not only performs better but is also more practical because of its discrete nature: BBT optimizes continuous prompts, which cannot be fed directly into current prediction APIs. We also note that there is still a large gap between FineTuning and all other methods: FineTuning updates the full model with gradients over a huge number of parameters and thus serves as an upper bound for all methods.

Across the eleven tasks, BDPL is as effective on the domain-specific datasets as on the generic ones. While domain shift is known to make tasks harder for models, BDPL offers an effective solution for domain-specific datasets.

## 5 **Analysis**

We analyze several aspects of BDPL, including the effects of different training data sizes, prompt lengths, training budgets, and learning objectives. In addition, we examine the transferability of our learned prompts in a transfer learning setting and provide an interpretation of the learned prompts. We choose GPT-3 Babbage as the backbone model in the following discussion.

## 5.1 **Ablation Study**

**Effects of Training Data Size** First, we analyze the effects of four different training data sizes: 4-shot, 8-shot, 16-shot, and 32-shot. Experiments are conducted on the MRPC and RCT datasets. As shown in the left part of Figure 2 (a) and (b), with increasing training data size, the performance of FT, InContextLearning, and BDPL improves on both MRPC and RCT, which is consistent with the assumption that more data enables more thorough training. Compared with the baselines, our model achieves consistent improvement over ManualPrompt and InContextLearning, verifying its effectiveness and scalability across different data sizes.

![9_image_0.png](9_image_0.png)

Figure 2: The effects of training data size, prompt length, and the number of API calls on MRPC and RCT. FT, MP, and ICL denote GPT-3's FineTuning, ManualPrompt, and InContextLearning, respectively.

![9_image_1.png](9_image_1.png) ![9_image_2.png](9_image_2.png)

Figure 3: **(a) Ablations of the loss function.** CE and Hinge represent cross-entropy loss and hinge loss, respectively. **(b) Transfer learning performance.** SST-2 is the source task while IMDB and CR are two tasks in the target domain.

**Effects of Prompt Length** Prompt-based methods are known to be sensitive to many aspects of prompts, including contexts (Jiang et al., 2020), orders (Lu et al., 2022), and lengths (Lester et al., 2021), and inappropriately designed prompts lead to poor performance. Here we study the effects of different prompt lengths on MRPC and RCT. Since the maximum input length of the GPT-3 API is 2048 tokens, too many prompt tokens (e.g., more than 100) cause additional costs and can even make queries fail. Therefore, we conduct experiments with lengths of 10, 25, 50, and 75. The results are shown in Figure 2 (c) and (d). With increasing prompt length, the performance first increases and then decreases once the prompt length reaches 75 in most cases. We conclude that the approximately best prompt length is 50: a shorter prompt limits the representation capacity, while a longer prompt may pick up noise contained in the training data and is harder to optimize.

**Effects of Training Budgets** The training budget is an essential factor for efficient tuning methods: an efficient method is expected to achieve good performance with as small a training budget as possible. We measure the budget by the number of prediction API calls and report the performance of our models under different numbers of API calls in Figure 2 (c) and (d). With an increasing number of API calls, BDPL gains performance under all settings because of more thorough training.
All settings converge within 12,000 API calls. We also find that the 50-token prompt performs well at first, but the gap between the 75-token and 50-token prompts narrows as the training budget grows, and the 75-token prompt finally achieves competitive performance. This suggests that with an insufficient training budget, it is better to use fewer prompt tokens, which are easier to optimize.

**Effects of Different Objectives** In the previous experiments, the prompts are optimized with the cross-entropy loss; here we explore the effectiveness of our model with different objectives. With the same setting as our main experiment, we conduct further experiments with the hinge loss on four datasets: MRPC, QNLI, CI, and RCT. We find that our model achieves comparable results with both objectives. As shown in Figure 3 (a), the model with hinge loss outperforms the one with cross-entropy loss on MRPC, CI, and RCT, but underperforms it on QNLI. On average, our approach with hinge loss works as well as with cross-entropy loss. It is flexible enough to work with different objectives, and we expect it to extend to other human-designed objectives.

| Task | Prompt + Input | Prediction | Label |
|------|----------------|------------|-------|
| CoLA | <s> Our friends won't buy this analysis, let alone the next one we propose . </s> | Unacceptable | Acceptable |
| CoLA | as time game second family group company take full at way only <s> Our friends won't buy this analysis, let alone the next one we propose . </s> | Acceptable | Acceptable |
| RTE | <s> one of the dead was a child, said a doctor at Civil Hospital Karachi. </s> <s> A doctor was killed by his parents . </s> | Entailment | Not Entailment |
| RTE | got because people during go N or work both support come also <s> one of the dead was a child, said a doctor at Civil Hospital Karachi. </s> <s> A doctor was killed by his parents . </s> | Not Entailment | Not Entailment |
| CI | <s> This appeared to solve the problem, and the results presented later for the average degree of generalisation do not show an over-generalisation compared with those given in Li and Abe ( 1998 ) . </s> | Background | CompareOrContrast |
| CI | last ie may man life show F best most state well around <s> This appeared to solve the problem, and the results presented later for the average degree of generalisation do not show an over-generalisation compared with those given in Li and Abe ( 1998 ) . </s> | CompareOrContrast | CompareOrContrast |
| RCT | <s> It is not clear whether these patients would benefit from antifungal treatment .</s> | Results | Background |
| RCT | go such time part event city use found season play news people <s> It is not clear whether these patients would benefit from antifungal treatment .</s> | Background | Background |

Figure 4: Four correctly predicted examples by BDPL. We display the prompts and the salience map of the token <s>. The prompts are in green and the input tokens are in red. The salient tokens are highlighted with a blue background, where a darker color denotes a more dominant weight for the prediction.

**Effects of Transfer Learning** A critical advantage of discrete prompts is the possibility of transferring prompts learned on one task to another, because discrete tokens share the same text space, rather than a task-specific latent space as continuous prompts do.
To verify the transferability of black-box optimized prompt tokens, we conduct experiments on three sentiment analysis datasets (*i.e.*, SST-2 (Socher et al., 2013), IMDB (Maas et al., 2011), and CR (Hu & Liu, 2004)) with the GPT-3-Babbage model in the 16-shot setting. First, we use SST-2 as the source task following Vu et al. (2022) and optimize the discrete prompt tokens with our proposed BDPL. Then we take the selected prompt tokens and simply prepend them to the inputs of the target tasks, IMDB and CR. Our setting assumes no training data in the target domain, so we directly test the performance on the target task. Following Wang et al. (2021a), for CR, we randomly sample 2,000 instances as the test set. The results are shown in Figure 3 (b). Consistent with the observations in Section 4, BDPL outperforms ManualPrompt on the source task by a large margin. Moreover, the learned prompts are helpful on both target tasks, demonstrating that our black-box method is robust under transfer learning settings. These results demonstrate the potential of *prompt transfer*, a promising practical application of BDPL, especially when many edge devices share a similar task but have no training data: we can learn the black-box prompts in a general domain and then transfer them to the target domain.

## 5.2 **Prompt Explanation**

To intuitively understand the prompts, we visualize the salience maps using the Language Interpretability Tool (LIT) (Tenney et al., 2020). We choose the CoLA, RTE, CI, and RCT datasets because their sentences are easy for humans to interpret. The comparisons between models with and without discrete prompts are shown in Figure 4. By adding discrete prompt tokens, the model is able to find coherence-related clues. For example, CoLA aims to distinguish the acceptability of sentences.
The grammar-related word 'won't' and the phrase 'let alone' dominate its prediction, which is consistent with the decision process of human beings. Similar observations hold for the other datasets. Due to space limitations, further visualized examples are omitted. Based on considerable empirical evidence, we conclude that BDPL can capture helpful information to guide the model. We notice that most of the optimized prompts are readable but incomprehensible: useful for improving the model, but semantically confusing to humans. Ilyas et al. (2019b) find that neural networks rely on *non-robust* features to achieve the highest possible accuracy. These *non-robust* features are usually semantically meaningless to humans, yet a well-performing model can be trained using only these features. We argue that while the optimized prompts found by our method are meaningless to humans, they help the models make more accurate predictions. In contrast, forcing the prompts to have semantic meaning may remove useful information and lead to degraded performance. This observation is consistent with previous discrete prompt learning studies (Shin et al., 2020).

## 6 **Related Work**

In this section, we review prompt learning for pre-trained language models and black-box optimization.

## 6.1 **Prompts For Pre-Trained Models**

Large pre-trained language models are of great importance, and the standard paradigm is to pre-train a language model on a large unlabeled corpus and then fine-tune it on different supervised tasks. This approach yields strong improvements on many downstream tasks, but it requires substantial computational resources to update all parameters and stores a separate copy of the model for each task. Therefore, prompt-based learning, which does not require tuning the large model, has been proposed to address this problem.
Based on the format of the prompts, prompt-based learning can be categorized into two kinds: discrete prompts (Wallace et al., 2019; Shin et al., 2020; Jiang et al., 2020; Gao et al., 2021; Ben-David et al., 2022) and continuous prompts (Zhong et al., 2021; Qin & Eisner, 2021; Hambardzumyan et al., 2021; Liu et al., 2021b; Han et al., 2021; Li & Liang, 2021). A discrete prompt is usually a sequence of tokens or natural language phrases, while a continuous prompt is designed as a sequence of vectors. However, all of these studies are limited to a white-box setting, which requires access to all the parameters of a pre-trained model so that gradients can be back-propagated to optimize the prompts. Recently, Sun et al. (2022) proposed a black-box tuning method, but it optimizes continuous prompts, which is impractical in real applications because most commercial APIs do not accept continuous vectors as input. Our method, black-box prompt learning, provides a truly black-box solution with discrete optimization, which optimizes a set of discrete prompts without accessing the pre-trained model. Some concurrent works (Deng et al., 2022; Hou et al., 2022) also explore black-box tuning methods for large language models. PromptBoosting (Hou et al., 2022) ensembles a large number of weak learners with the AdaBoost algorithm to pair pre-generated prompts with different elements of the LM's output distribution. It learns the verbalizer instead of discrete prompts of the language model, which is complementary to BDPL. RLPrompt (Deng et al., 2022) generates discrete prompts with a policy network optimized by a reward function. In contrast, we apply a variance-reduced policy gradient estimator to optimize a few independent categorical probabilities that select the appropriate prompt tokens. In addition, we are the first to show that black-box prompt learning methods generalize to real-world large language models like GPT-3.
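The contrast with RLPrompt can be made concrete: each prompt position keeps an independent categorical distribution over candidate tokens, several prompts are sampled per step, and the mean loss over those samples serves as the baseline that reduces the variance of the policy gradient estimate. The following is a self-contained sketch under simplifying assumptions — a toy loss stands in for the black-box API, and all parameter names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(t):
    z = np.exp(t - t.max(axis=1, keepdims=True))  # numerically stable
    return z / z.sum(axis=1, keepdims=True)

def vr_pg_step(theta, loss_fn, n_samples=4, lr=0.3):
    """One update of the independent categorical prompt distributions
    with a variance-reduced (baseline-subtracted) policy gradient.

    theta: (prompt_len, n_candidates) unnormalized log-probabilities.
    loss_fn: loss of a sampled prompt; in BDPL this would come from the
    API, here it is any callable on token indices.
    """
    probs = softmax(theta)
    samples, losses = [], []
    for _ in range(n_samples):
        idx = np.array([rng.choice(len(p), p=p) for p in probs])
        samples.append(idx)
        losses.append(loss_fn(idx))

    baseline = np.mean(losses)  # sample-mean baseline reduces variance
    grad = np.zeros_like(theta)
    for idx, loss in zip(samples, losses):
        score = -probs.copy()                   # d/d(theta) log p(idx)
        score[np.arange(len(idx)), idx] += 1.0  # = one_hot(idx) - probs
        grad += (loss - baseline) * score / (n_samples - 1)
    return theta - lr * grad  # descend the estimated gradient

# Toy "black-box" loss: the best prompt picks token index 0 everywhere.
toy_loss = lambda idx: float(np.sum(idx))

theta = np.zeros((3, 5))  # 3 prompt positions, 5 candidate tokens each
for _ in range(400):
    theta = vr_pg_step(theta, toy_loss)

expected_loss = (softmax(theta) * np.arange(5)).sum()
print(f"expected loss: {expected_loss:.2f} (uniform sampling gives 6.00)")
```

Because the advantage `loss - baseline` vanishes once all samples agree, the updates stabilize as the categorical distributions concentrate on low-loss tokens; no gradient ever flows through the model itself.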
## 6.2 **Black-Box Optimization**

One application of black-box optimization is the score-based black-box adversarial attack (Ilyas et al., 2018; 2019a; Huang & Zhang, 2020; Andriushchenko et al., 2020; Cheng et al., 2019), where the model is likewise invisible to the attacker. These studies use zeroth-order optimization methods such as the natural evolution strategy (NES) (Wierstra et al., 2014) to optimize the input and increase the loss to fool the model. Instead of deteriorating a model's performance as in an adversarial attack, our goal is to find inputs that improve accuracy. The policy gradient (Sutton et al., 1999), which also belongs to black-box optimization, is widely used in reinforcement learning to find the best policy. In contrast to NES, which can only search in continuous spaces, the policy gradient admits discrete policies and can be used to find optimal discrete prompts. BDPL uses black-box optimization methods to find the optimal prompts, which is a novel application direction for these methods. Another line of research to adapt a black-box model is knowledge distillation (KD) (Hinton et al., 2015), which trains a student model on the outputs of a large model. KD can be used to learn black-box models (Nguyen et al., 2022; Wang, 2021), perform domain adaptation (Liang et al., 2022), and adversarially attack a model (Zhang et al., 2022). Despite the wide applications of black-box KD, learning models with KD still requires a large number of queries and much data for training the student network. In contrast, our proposed approach, BDPL, is much more lightweight and only needs to train a few prompts with a small amount of data. Moreover, under the scenario of cloud-device collaboration, BDPL needs only negligible computation on the edge devices, while KD has to train the local student network with large computational resources.
Therefore, BDPL is more practical for the scenario of cloud-device collaboration.

## 7 **Conclusion**

This paper proposes a novel setting for text categorization, namely black-box prompt learning, where a large pre-trained model is invisible so that gradients cannot be back-propagated to update the prompts. Compared with the standard pre-training then fine-tuning paradigm, our approach only requires updating very few parameters. Compared with previous prompt-based methods, our approach does not require visibility of the pre-trained model and thus provides more flexibility in practical applications. We propose a black-box prompt learning method, BDPL, which employs a variance-reduced policy gradient estimator to approximate the gradients and update the prompts. Experimental results demonstrate that our approach outperforms all black-box methods and is comparable with white-box methods, illustrating the effectiveness of black-box optimization. Experiments in the transfer learning setting further show the potential of our approach in realistic scenarios, where the pre-trained model is deployed in the cloud and prompt learning can be implemented on each device. In the future, we would like to explore the effectiveness of our proposed methods on more commercial classifiers, such as Google Cloud APIs and Microsoft Azure APIs. Black-box prompt learning for large multi-modal models (Wang et al., 2021b; Singh et al., 2022; Wang et al., 2021c; Zhou et al., 2022; Wang et al., 2022; Diao et al., 2023) is another important scenario to explore in future work.

## Acknowledgments

We thank the anonymous reviewers for their valuable suggestions. This work was supported by the General Research Fund (GRF) of Hong Kong (No. 16310222 and No. 16201320). Shizhe Diao, Ruijia Xu, and Yong Lin were supported by the Hong Kong Ph.D. Fellowship Scheme (HKPFS).

## References

Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein.
Square attack: a query-efficient black-box adversarial attack via random search. In *Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII*, pp. 484–501. Springer, 2020.

Eyal Ben-David, Nadav Oved, and Roi Reichart. PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains. *Transactions of the Association for Computational Linguistics*, 10:414–433, 04 2022. ISSN 2307-387X. doi: 10.1162/tacl_a_00468. URL https://doi.org/10.1162/tacl_a_00468.

Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth PASCAL recognizing textual entailment challenge. In *TAC*, 2009.

Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. *ArXiv preprint*, abs/2108.07258, 2021. URL https://arxiv.org/abs/2108.07258.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.

Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Improving black-box adversarial attacks with a transfer-based prior. In Hanna M.
Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 10932–10942, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/32508f53f24c46f685870a075eaaa29c-Abstract.html.

Ido Dagan, Oren Glickman, and Bernardo Magnini. The PASCAL recognising textual entailment challenge. In *Machine Learning Challenges Workshop*, pp. 177–190. Springer, 2005.

Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, and Zhiting Hu. RLPrompt: Optimizing discrete text prompts with reinforcement learning. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pp. 3369–3391, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.emnlp-main.222.

Franck Dernoncourt and Ji Young Lee. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In *Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)*, pp. 308–313, Taipei, Taiwan, 2017. Asian Federation of Natural Language Processing. URL https://aclanthology.org/I17-2052.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)*, pp. 4171–4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Shizhe Diao, Jiaxin Bai, Yan Song, Tong Zhang, and Yonggang Wang.
ZEN: Pre-training Chinese text encoder enhanced by n-gram representations. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 4729–4740, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.425. URL https://aclanthology.org/2020.findings-emnlp.425.

Shizhe Diao, Ruijia Xu, Hongjin Su, Yilei Jiang, Yan Song, and Tong Zhang. Taming pre-trained language models with n-gram representations for low-resource domain adaptation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 3336–3349, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.259. URL https://aclanthology.org/2021.acl-long.259.

Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, and Jiawei Wang. Write and paint: Generative vision-language models are unified modal learners. In *International Conference on Learning Representations*, 2023.

William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In *Proceedings of the Third International Workshop on Paraphrasing (IWP2005)*, 2005. URL https://aclanthology.org/I05-5002.

Zhe Dong, Andriy Mnih, and George Tucker. DisARM: An antithetic gradient estimator for binary latent variables. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/d880e783834172e5ebd1868d84463d93-Abstract.html.

Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners.
In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 3816–3830, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.295. URL https://aclanthology.org/2021.acl-long.295.

Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL recognizing textual entailment challenge. In *Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing*, pp. 1–9, Prague, 2007. Association for Computational Linguistics. URL https://aclanthology.org/W07-1401.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. Don't stop pretraining: Adapt language models to domains and tasks. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 8342–8360, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.740. URL https://aclanthology.org/2020.acl-main.740.

R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second PASCAL recognising textual entailment challenge. In *Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment*, volume 7, 2006.

Karen Hambardzumyan, Hrant Khachatrian, and Jonathan May. WARP: Word-level Adversarial ReProgramming. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 4921–4933, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.381. URL https://aclanthology.org/2021.acl-long.381.

Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. PTR: Prompt Tuning with Rules for Text Classification. *ArXiv preprint*, abs/2105.11259, 2021.
URL https://arxiv.org/abs/2105.11259. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. *ArXiv preprint*, abs/1503.02531, 2015. URL https://arxiv.org/abs/1503.02531. Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, and Yang Zhang. PromptBoosting: Black-box text classification with ten forward passes. *ArXiv preprint*, abs/2212.09257, 2022. URL https://arxiv.org/abs/2212.09257. Minqing Hu and Bing Liu. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 168–177, 2004. Zhichao Huang and Tong Zhang. Black-box adversarial attack with transferable model-based embedding. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=SJxhNTNYwB. Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of *Proceedings of Machine Learning Research*, pp. 2142–2151. PMLR, 2018. URL http://proceedings.mlr.press/v80/ilyas18a.html. Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Prior convictions: Black-box adversarial attacks with bandits and priors. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019a. URL https://openreview.net/forum?id=BkMiWhR5K7. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B.
Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada*, pp. 125–136, 2019b. URL https://proceedings.neurips.cc/paper/2019/hash/e2c420d928d4bf8ce0ff2ec19b371514-Abstract.html. Shankar Iyer, Nikhil Dandekar, Kornél Csernai, et al. First quora dataset release: Question pairs. data.quora.com, 2017. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=rkE3y85ee. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438, 2020. doi: 10.1162/tacl_a_00324. URL https://aclanthology.org/2020.tacl-1.28. David Jurgens, Srijan Kumar, Raine Hoover, Dan McFarland, and Dan Jurafsky. Measuring the evolution of a scientific field through citation frames. *Transactions of the Association for Computational Linguistics*, 6:391–406, 2018. doi: 10.1162/tacl_a_00028. URL https://aclanthology.org/Q18-1028. Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. SemEval-2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pp. 829–839, Minneapolis, Minnesota, USA, 2019. Association for Computational Linguistics. doi: 10.18653/v1/S19-2145. URL https://aclanthology.org/S19-2145. Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, and Mohit Iyyer. Thieves on sesame street! model extraction of bert-based apis. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.
URL https://openreview.net/forum?id=Byl5NREFDr. Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 3045–3059, Online and Punta Cana, Dominican Republic, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.243. URL https://aclanthology.org/2021.emnlp-main.243. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*, pp. 7871–7880, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://aclanthology.org/2020.acl-main.703. Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582–4597, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.353. URL https://aclanthology.org/2021.acl-long.353. Jian Liang, Dapeng Hu, Jiashi Feng, and Ran He. DINE: Domain adaptation from single and multiple black-box predictors. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8003–8013, 2022. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35, 2023. Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang.
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. *ArXiv preprint*, abs/2110.07602, 2021a. URL https://arxiv.org/abs/2110.07602. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. GPT Understands, Too. *ArXiv preprint*, abs/2103.10385, 2021b. URL https://arxiv.org/abs/2103.10385. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv preprint, abs/1907.11692, 2019. URL https://arxiv.org/abs/1907.11692. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7. Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 8086–8098, Dublin, Ireland, 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.556. URL https://aclanthology.org/2022.acl-long.556. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3219–3232, Brussels, Belgium, 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1360. URL https://aclanthology.org/D18-1360. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. 
In *Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies*, pp. 142–150, Portland, Oregon, USA, 2011. Association for Computational Linguistics. URL https://aclanthology.org/P11-1015. Dang Nguyen, Sunil Gupta, Kien Do, and Svetha Venkatesh. Black-box few-shot knowledge distillation. In European Conference on Computer Vision, pp. 196–211. Springer, 2022. Rui Pan, Shizhe Diao, Jianlin Chen, and Tong Zhang. ExtremeBERT: A toolkit for accelerating pretraining of customized BERT. *ArXiv preprint*, abs/2211.17201, 2022. URL https://arxiv.org/abs/2211.17201. Ethan Perez, Douwe Kiela, and Kyunghyun Cho. True few-shot learning with language models. *Advances in Neural Information Processing Systems*, 34:11054–11070, 2021. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. To tune or not to tune? adapting pretrained representations to diverse tasks. In *Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)*, pp. 7–14, Florence, Italy, 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-4302. URL https://aclanthology.org/W19-4302. Guanghui Qin and Jason Eisner. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 5203–5212, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.410. URL https://aclanthology.org/2021.naacl-main.410. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pp. 1278–1286. JMLR.org, 2014. URL http://proceedings.mlr.press/v32/rezende14.html. Timo Schick and Hinrich Schütze.
It's not just size that matters: Small language models are also few-shot learners. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 2339–2352, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.185. URL https://aclanthology.org/2021.naacl-main.185. Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4222–4235, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.346. URL https://aclanthology.org/2020.emnlp-main.346. Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. FLAVA: A foundational language and vision alignment model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15638–15650, 2022. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, Seattle, Washington, USA, 2013. Association for Computational Linguistics. URL https://aclanthology.org/D13-1170. Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. Black-box tuning for language-model-as-a-service. *ArXiv preprint*, abs/2201.03514, 2022. URL https://arxiv.org/abs/2201.03514. Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. *Advances in neural information processing systems*, 12, 1999.
Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, and Ann Yuan. The language interpretability tool: Extensible, interactive visualizations and analysis for NLP models. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pp. 107–118, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-demos.15. URL https://aclanthology.org/2020.emnlp-demos.15. Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou', and Daniel Cer. SPoT: Better frozen model adaptation through soft prompt transfer. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pp. 5039–5059, Dublin, Ireland, 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.346. URL https://aclanthology.org/2022.acl-long.346. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarial triggers for attacking and analyzing NLP. In *Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)*, pp. 2153–2162, Hong Kong, China, 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1221. URL https://aclanthology.org/D19-1221. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In *7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019*. OpenReview.net, 2019. URL https://openreview.net/forum?id=rJ4km2R5t7. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang.
Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. *ArXiv preprint*, abs/2202.03052, 2022. URL https://arxiv.org/abs/2202.03052. Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. Entailment as Few-Shot Learner. *ArXiv preprint*, abs/2104.14690, 2021a. URL https://arxiv.org/abs/2104.14690. Wenhui Wang, Hangbo Bao, Li Dong, and Furu Wei. VLMo: Unified vision-language pre-training with mixture-of-modality-experts. *ArXiv preprint*, abs/2111.02358, 2021b. URL https://arxiv.org/abs/2111.02358. Zi Wang. Zero-shot knowledge distillation from a decision-based black-box model. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event*, volume 139 of *Proceedings of Machine Learning Research*, pp. 10675–10685. PMLR, 2021. URL http://proceedings.mlr.press/v139/wang21a.html. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. SimVLM: Simple visual language model pretraining with weak supervision. *arXiv preprint*, 2021c. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641, 2019. doi: 10.1162/tacl_a_00290. URL https://aclanthology.org/Q19-1040. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022. URL https://openreview.net/forum?id=yzkSU5zdwD. Survey Certification. Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jan Peters, and Jürgen Schmidhuber. Natural Evolution Strategies. *The Journal of Machine Learning Research*, 15(1):949–980, 2014. Adina Williams, Nikita Nangia, and Samuel Bowman.
A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122, New Orleans, Louisiana, 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1101. URL https://aclanthology.org/N18-1101. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229–256, 1992. Ze Yang, Wei Wu, Can Xu, Xinnian Liang, Jiaqi Bai, Liran Wang, Wei Wang, and Zhoujun Li. StyleDGPT: Stylized response generation with pre-trained language models. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 1548–1559, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.140. URL https://aclanthology.org/2020.findings-emnlp.140. Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, and Chao Wu. Towards efficient data free black-box adversarial attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15115–15125, 2022. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. DIALOGPT: Large-scale generative pre-training for conversational response generation. In *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations*, pp. 270–278, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-demos.30. URL https://aclanthology.org/2020.acl-demos.30. Zexuan Zhong, Dan Friedman, and Danqi Chen. Factual probing is [MASK]: Learning vs. learning to recall. In *Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, pp. 5017–5033, Online, 2021.
Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.398. URL https://aclanthology.org/2021.naacl-main.398. Wangchunshu Zhou, Yan Zeng, Shizhe Diao, and Xinsong Zhang. VLUE: A multi-task multi-dimension benchmark for evaluating vision-language pre-training. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning Research*, pp. 27395–27411. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/zhou22n.html. Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, and Tong Zhang. Efficient neural network training via forward and backward propagation sparsification. *Advances in Neural Information Processing Systems*, 34:15216–15229, 2021.

## A **Implementation Details**

## A.1 **Computing Infrastructure**

Experiments on GPT-3 directly call its APIs and require no GPU for computation. Experiments on RoBERTa are conducted on NVIDIA 2080Ti GPUs with 11GB memory.

## A.2 **Evaluation Measures**

For tasks from the GLUE benchmark, we adopt the Matthews correlation coefficient for CoLA, F1 for MRPC, and accuracy for RTE and QNLI, following their original metric choices. For CitationIntent, SciERC, RCT, and HyperPartisan, we adopt macro-F1 as the evaluation metric.

## A.3 **Bounds Of Hyper-parameters**

Table 5: Bounds of hyper-parameters.

| Hyper-parameter | GPT-3 | RoBERTa |
|---|---|---|
| number of epochs | 30 | 30 |
| train batch size | 4 | 32 |
| eval and test batch size | 4 | 16 |
| prompt length | {10, 12, 25, 50, 75} | |
| learning rate | [1e-5, 1e-3] | |
| dropout | 0.1 | |
| learning rate optimizer | AdamW | |
| loss type | {hinge loss, cross-entropy loss} | |

## A.4 **Configuration Of Best Model**
Table 6: Configuration of the best model for each dataset.

| Dataset | MNLI | QQP | SST-2 | MRPC | CoLA | QNLI | RTE | CI | SE | RCT | HP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| prompt length | 10 | 25 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 |
| learning rate | 2e-4 | 1e-4 | 2e-4 | 1e-4 | 3e-4 | 2e-4 | 1e-4 | 1e-4 | 1e-4 | 1e-4 | 1e-4 |

The configuration of the best model for each dataset is shown in Table 6.

## A.5 **Manual Templates**

The manual templates are shown in Table 7.

Table 7: Prompts and label descriptions of the ManualPrompt method. Most of them are from Gao et al. (2021).

| Dataset | Template |
|---|---|
| MNLI | sentence1 entailment? [MASK], sentence2. (yes/no) |
| QQP | sentence1 ? [MASK], sentence2. (yes/no) |
| SST-2 | sentence1. It was [MASK]. (great/terrible) |
| MRPC | sentence1 ? [MASK], sentence2. (yes/no) |
| CoLA | sentence1. correct? [MASK]. (yes/no) |
| QNLI | sentence1 entailment? [MASK], sentence2. (yes/no) |
| RTE | sentence1 entailment? [MASK], sentence2. (yes/no) |
| CI | sentence1. What is the intent? [MASK]. (background/compare/extends/future/motivation/uses) |
| SE | sentence1. What is the relation? [MASK]. (compare/conjunction/evaluate/feature/hyponym/part/used) |
| RCT | sentence1. It is [MASK]. (background/conclusion/method/objective/result) |
| HP | sentence1. It is [MASK]. (yes/no) |
| IMDB | sentence1. It was [MASK]. (great/terrible) |
| CR | sentence1. It was [MASK]. (great/terrible) |

## B **Projection Calculation**

Algorithm 2 Projection from z to C
Require: a vector z.
1: Solve v1∗ from 1⊤[min(1, max(0, z − v1∗1))] − 1 = 0.
2: p∗ ← min(1, max(0, z − v1∗1)).
Output: p∗.

The projection from z to C can be calculated by Algorithm 2.

Proof. The projection from z to the set C can be formulated as the following optimization problem:

$$\min_{\mathbf{p}\in\mathbb{R}^{n}}\frac{1}{2}\|\mathbf{p}-\mathbf{z}\|^{2},\tag{8}$$

$$\text{s.t.}\quad\mathbf{1}^{\top}\mathbf{p}=1\ \text{and}\ 0\leq p_{i}\leq1.\tag{9}$$

We then solve the problem with the Lagrangian multiplier method:

$$L(\mathbf{p},v)=\frac{1}{2}\|\mathbf{p}-\mathbf{z}\|^{2}+v(\mathbf{1}^{\top}\mathbf{p}-1)=\frac{1}{2}\|\mathbf{p}-(\mathbf{z}-v\mathbf{1})\|^{2}+v(\mathbf{1}^{\top}\mathbf{z}-1)-\frac{n}{2}v^{2},$$

with 0 ≤ p_i ≤ 1.
Minimizing L with respect to p (subject to 0 ≤ p_i ≤ 1), we obtain

$$\tilde{\mathbf{p}}=\mathbf{1}_{\mathbf{z}-v\mathbf{1}\geq1}+(\mathbf{z}-v\mathbf{1})_{1>\mathbf{z}-v\mathbf{1}>0}.\tag{10}$$

Then we have

$$\begin{aligned}g(v)=L(\tilde{\mathbf{p}},v)&=\frac{1}{2}\|[\mathbf{z}-v\mathbf{1}]_{-}+[\mathbf{z}-(v+1)\mathbf{1}]_{+}\|^{2}+v(\mathbf{1}^{\top}\mathbf{z}-1)-\frac{n}{2}v^{2}\\&=\frac{1}{2}\|[\mathbf{z}-v\mathbf{1}]_{-}\|^{2}+\frac{1}{2}\|[\mathbf{z}-(v+1)\mathbf{1}]_{+}\|^{2}+v(\mathbf{1}^{\top}\mathbf{z}-1)-\frac{n}{2}v^{2},\end{aligned}$$

and its derivative is

$$\begin{aligned}g'(v)&=\mathbf{1}^{\top}[v\mathbf{1}-\mathbf{z}]_{+}+\mathbf{1}^{\top}[(v+1)\mathbf{1}-\mathbf{z}]_{-}+(\mathbf{1}^{\top}\mathbf{z}-1)-nv\\&=\mathbf{1}^{\top}\min(1,\max(0,\mathbf{z}-v\mathbf{1}))-1.\end{aligned}$$

It is easy to verify that g′(v) is monotonically decreasing in v, so the equation g′(v) = 0 can be solved with a bisection method, yielding the solution v1∗. Finally, we have

$$\mathbf{p}^{*}=\mathbf{1}_{\mathbf{z}-v_{1}^{*}\mathbf{1}\geq1}+(\mathbf{z}-v_{1}^{*}\mathbf{1})_{1>\mathbf{z}-v_{1}^{*}\mathbf{1}>0}\tag{11}$$

$$=\min(1,\max(0,\mathbf{z}-v_{1}^{*}\mathbf{1})).\tag{12}$$
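For concreteness, Algorithm 2 can be sketched numerically. The following is an illustrative NumPy implementation (the function and variable names are our own, not from the paper's released code); it runs bisection on g′(v) = 0 from the proof above and then applies the clipping in step 2:

```python
import numpy as np

def project_onto_C(z, tol=1e-10):
    """Illustrative sketch of Algorithm 2: project z onto
    C = {p : 1^T p = 1, 0 <= p_i <= 1}.

    g'(v) = 1^T min(1, max(0, z - v*1)) - 1 is monotonically
    decreasing in v, so its root v1* can be found by bisection.
    """
    z = np.asarray(z, dtype=float)

    def g_prime(v):
        return np.minimum(1.0, np.maximum(0.0, z - v)).sum() - 1.0

    # Bracket the root: g'(z.min() - 1) >= 0 and g'(z.max()) = -1 < 0.
    lo, hi = z.min() - 1.0, z.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g_prime(mid) > 0:
            lo = mid
        else:
            hi = mid
    v_star = 0.5 * (lo + hi)
    # Step 2 of Algorithm 2: p* = min(1, max(0, z - v1* 1)).
    return np.minimum(1.0, np.maximum(0.0, z - v_star))
```

For example, `project_onto_C([0.5, 0.5, 0.5])` returns the uniform distribution [1/3, 1/3, 1/3], and every output sums to 1 with all entries in [0, 1].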
| |-------------------------------------|-------------------------|---------|------------------------|----------------------------------------|----------------------------------------|--------|------------------------|------|--------| | GPT-3 Babbage | | | | | | | | | | | FT | 40.70.5 46.21.4 87.41.5 | 66.41.7 | 0.30.1 | 50.90.2 | 52.31.0 | 5.20.4 | 4.11.0 61.15.2 33.31.3 | 40.7 | | | MP | 28.90.8 34.11.2 83.51.2 | 62.43.2 | 0.21.0 | 48.81.4 | 51.20.6 31.42.8 1.70.5 21.72.3 27.21.5 | 35.6 | | | | | ICL | 35.70.9 45.21.9 86.21.4 | 65.41.7 | 2.60.0 | 48.30.9 | 51.50.4 13.11.5 2.50.9 36.71.8 32.21.4 | 38.1 | | | | | BDPL | 41.00.6 50.41.5 86.41.1 | 67.71.2 | 2.80.1 | 52.10.3 | 53.11.0 40.22.5 3.20.8 45.22.2 30.42.3 | 43.0 | | | | | BDPL-infix | 25.11.1 35.61.8 80.22.3 | 60.31.4 | 0.50.2 | 50.80.3 | 51.21.0 15.22.9 2.00.7 10.50.9 20.21.7 | 32.0 | | | | | BDPL-suffix 39.11.7 47.52.1 85.63.1 | 64.22.6 | 1.10.4 | 51.81.6 | 52.91.2 23.71.6 2.90.7 43.12.3 28.81.2 | 40.1 | | | | | Table 8: The effects of different prompt token positions. We report average scores across three random seeds, with standard deviations as subscripts. Avg. denotes the average score across all tasks. FT: GPT-3's FineTuning. MP: ManualPrompt. ICL: InContextLearning. ## C **Effects Of Prompt Positions** In our main experiments, we follow existing prompt-based learning studies (Li & Liang, 2021) to prepend some prompt tokens to the original sequence. We also investigate the effects of prompt positions. First, we introduce two new baselines based on GPT-3-babbage: suffix-tuning (placing prompt tokens after the original sequence) and infix-tuning (placing prompt tokens in the middle of the sequence). The results are shown in 8. From the results, we observed that prefix-tuning outperforms suffix-tuning and infix-tuning by a large margin. We attribute this performance drop to the position embedding of the learned prompts. 
Compared with infix-tuning and suffix-tuning, prefix-tuning keeps the prompt tokens at the same positions throughout the learning process, whereas infix-tuning and suffix-tuning require the prompts to adapt to changing positions. In addition, inserting prompt tokens in the middle of the sequence may break its semantic meaning, leading to even worse results than ManualPrompt. These observations are consistent with prefix-tuning (Li & Liang, 2021), so we adopt prefix-tuning in our main method.

## D **Case Studies**

More case studies for prompt explanation are provided in Figure 5.

## E **Performance With More Data**

It is not immediately obvious how black-box methods can outperform white-box methods (i.e., PromptTuning and AutoPrompt). However, similar observations were reported in previous and concurrent black-box studies: for example, BBT outperforms PromptTuning and AutoPrompt, and RLPrompt outperforms AutoPrompt. Based on our experiments, we attribute this phenomenon to white-box methods overfitting the given few-shot examples. First, we found that PromptTuning and AutoPrompt have lower training losses, which suggests that white-box methods tend to overfit the small training set (our experiments are 16-shot). Furthermore, we gradually increased the amount of training data to verify this conjecture, going from 16-shot to 32-shot, 64-shot, and 128-shot. The results are shown in Figure 6. As the amount of training data increases, the white-box methods eventually outperform the black-box methods, which indicates that white-box methods are preferable given sufficient data.
Based on these experiments, we believe black-box methods are well suited to few-shot settings because their exploration mechanism can mitigate overfitting.

Figure 5: Four correctly predicted examples by BDPL (from CoLA, RTE, CI, and RCT). We display the prompts and the salience map of the token <s>. The prompts are in green and the input tokens are in red. The salient tokens are highlighted in a blue background, where the darker color denotes the more dominant weights for the prediction.

Figure 6: The effects of training data size on (a) MRPC and (b) RCT with the RoBERTa-large model. PromptTuning and AutoPrompt are two white-box methods. BBT and BDPL are two black-box methods.
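The few-shot scaling protocol in this section (16-, 32-, 64-, and 128-shot runs) can be sketched as follows. This helper is illustrative only: the `(text, label)` list format, the per-label sampling convention, and the function name `sample_k_shot` are our assumptions, not the paper's actual data pipeline:

```python
import random

def sample_k_shot(dataset, k, seed=0):
    """Sample a k-shot training subset: k examples per label.

    `dataset` is assumed to be a list of (text, label) pairs;
    this format is an illustrative assumption.
    """
    rng = random.Random(seed)
    by_label = {}
    for text, label in dataset:
        by_label.setdefault(label, []).append((text, label))
    subset = []
    for label in sorted(by_label):
        examples = by_label[label][:]
        rng.shuffle(examples)
        subset.extend(examples[:k])  # keep k examples for this label
    rng.shuffle(subset)
    return subset
```

Running this for k in {16, 32, 64, 128} with a fixed seed produces the kind of nested data-scaling sweep reported in Figure 6.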
Review 1:

Summary: The paper introduces a method called Black-box Discrete Prompt Learning (BDPL) for adapting Pre-trained Language Models (PLMs) for different downstream tasks in a cloud-device collaboration setting. BDPL optimizes PLMs through prompt learning, which adjusts a few discrete parameters rather than fine-tuning the entire model. BDPL was tested on RoBERTa and GPT-3 and showed significant improvement on multiple benchmarks, with case studies also conducted to analyze the method under various conditions.

Strengths and Weaknesses: As explained, the black-box prompt learning setting could be more realistic when we can only access large language models' outputs without other information such as gradients. However, it doesn't make sense how and why black-box prompt learning could outperform white-box prompt learning in terms of accuracy and cost, because it is a more restrictive setting. Moreover, I wonder whether the compared baselines are strong enough.

Requested Changes:
- Could you provide a plausible (and theoretical) explanation of how black-box prompt learning could perform better than white-box prompt learning?
- Can you compare other prompt learning methods with reinforcement learning [1, 2]?
  [1] Deng et al., RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning
  [2] Hou et al., PromptBoosting: Black-Box Text Classification with Ten Forward Passes
- What's the difference between ManualPrompt and InContextLearning?
- The prompt length N looks large (50~200). Isn't it expensive?
- The length of the prompts in Figure 4 is short. Why?
- Page 7: the first InContextLearning should be GPT-3, not RoBERTa-large.
- Question: how do you initialize prompt tokens? From random tokens?

Broader Impact Concerns: The black-box setting of BDPL is preferable for protecting the cloud infrastructure from potential attacks and misuse, which could have a positive impact on security and reliability.
================================================== Review 2: Summary: This paper looks into the problem of adapting pretrained models to downstream tasks through prompt learning and without the access to the model weights and gradient information, the so called black-box setting where pre-trained LMs are exposed through only inference APIs. In particular, the paper looks into the black-box prompt learning scenario, where the end devices can query the outputs of the pre-trained model and use that to optimize prompts locally. Finally, the paper focuses on discrete prompts learning because existing cloud LM prediction APIs only accept discrete prompts as inputs. Under this setup, the paper formulates this black-box prompt learning as a discrete token selection problem and optimizes a categorical distribution via a policy gradient algorithm (gradient-free). Furthermore, the paper also introduces a variance-reduced policy gradient estimator to reduce the high variance issue in policy gradient optimization. Evaluation on RoBERTa and GPT-3 APIs show that the proposed method can effectively and efficiently adapt pretrained models through black-box manner. Strengths and Weaknesses: Strengths: - The paper introduces the black-box prompt learning problem, which to the best of the reviewer's knowledge, is a new and interesting problem. - The paper takes practical constraints into consideration, such as the cloud LM APIs only taking discrete tokens and no easy access to model weights/gradients into consideration when designing and evaluating their approach. - A comprehensive evaluation of different tuning methods (10 methods). - The paper is clearly written and easy to follow. Weaknesses: - Lack of discussion with alternative methods for black-box adaptation. - Evaluation datasets appear to be cherry-picked. - Lack of explanation on why some baseline results are noticeably worse than their original reported results. 
Requested Changes: The paper overall looks into a very interesting problem and makes a good attempt at resolving the problem that may lead to an impactful direction. There are a few comments on the work: 1. While the proposed black-box prompt learning is indeed an interesting problem, the paper does not discuss too much about alternative methods for adapting pre-trained LMs without accessing model parameters. For example, it might be worthwhile discussing how the proposed method relates to knowledge distillation, which also learns a separate model by querying the outputs of the teacher model (in this case, the pre-trained PLM). In particular, KD also does not require the client to have access to model weights/gradients. The resulting student model can be used for adaptation. Do we really need to formulate a new problem when existing techniques might already help resolve the problem? 2. It is unclear why the paper only uses a subset of the GLUE datasets for evaluation, e.g., MNLI, QQP, SST-2, and STSB in GLUE are not included. This raises questions on whether the reported datasets are cherry-picked. To be more convincing, it would be better to report the remaining GLUE task results. 3. The paper compares its approach with white-box fine-tuning methods, and it seems that the black-box prompt learning methods are getting similar accuracy as the white-box fine-tuning method in several cases. For example, BDPL is getting <1 point difference and sometimes better performance on MRPC, QNLI, RTE on both GPT-3 and RoBERTa. There are two issues related to these evaluation results: (i) the fine-tuning results for RoBERTa on these tasks are noticeably worse by a large margin than their originally reported scores (e.g., please refer to Table 5 in the RoBERTa paper and the GLUE leaderboard); (ii) MRPC, CoLA, RTE have large variances that make their results hard to interpret without confidence intervals.
To be more convincing, the authors might want to explain why the RoBERTa fine-tuning results on those GLUE tasks are much worse than the original paper and include the other GLUE tasks together with standard deviation to the results. Broader Impact Concerns: None ================================================== Review 3: Summary: This paper introduces a black-box prompt learning method, where some generated discrete prompt tokens are prepended to the original sequence to improve the quality of the generated sequences. The learning procedure leverages a no back-propagation based approach. The model is evaluated on various tasks and datasets and shows an advanced performance boost. Strengths and Weaknesses: Strengths: - This is a very interesting paper that introduces a novel discrete prompt-learning method, this method enables a more efficient and secure way of applying large-scale language models. - The empirical study is very solid. The proposed methods are compared with various alternatives under different tasks and datasets. The performance boost is significant. Weakness: - In general, I just have some small concerns about some design decisions without justification. For example, why the learned prompt tokens are placed before the original sequences? Alternatives could be to place them after the original sequence, or insert some learned tokens between the original sequences. I am not suggesting the proposed method is inappropriate, I am just curious if such alternatives have been considered, if so, why they did not work. Requested Changes: - Provide some justification for the algorithmic design decisions of the proposed methods. - Release the code for review if possible. Broader Impact Concerns: Not applicable. 
================================================== Metareview: Recommendation: Accept as is Comment: This paper studies a new setting named black-box prompt learning and proposes a method BDPL for the setting, which achieves good performance on several public datasets. Reviewers are all positive about the paper and their concerns are well addressed through discussions and revisions. I recommend acceptance. ==================================================
# A Single Transformer For Scalable Vision-Language Modeling

Anonymous authors
Paper under double-blind review

## Abstract

We present SOLO, a single transformer for Scalable visiOn-Language mOdeling. Current large vision-language models (LVLMs) such as LLaVA mostly employ heterogeneous architectures that connect pre-trained visual encoders with large language models (LLMs) to facilitate visual recognition and complex reasoning. Although such models achieve remarkable performance with relatively lightweight training, we identify four primary scalability limitations: (1) The visual capacity is constrained by pre-trained visual encoders, which are typically an order of magnitude smaller than LLMs. (2) The heterogeneous architecture complicates the use of established hardware and software infrastructure. (3) The study of scaling laws on such architectures must consider three separate components (visual encoder, connector, and LLM), which complicates the analysis. (4) The use of existing visual encoders typically requires following a pre-defined specification of image input pre-processing, for example, by reshaping inputs to fixed-resolution square images. This inflexibility can create bottlenecks and impede scalability. A unified single Transformer architecture, like SOLO, effectively addresses these scalability concerns in LVLMs; however, its limited adoption in the modern context likely stems from the absence of reliable training recipes that balance both modalities and ensure stable training for billion-scale models. In this paper, we introduce the first open-source training recipe for developing SOLO, an open-source 7B LVLM with a single Transformer architecture, using moderate academic resources (8 x A100 80GB GPUs). The training recipe involves initializing from LLMs, sequential pre-training on ImageNet and web-scale data, and instruction fine-tuning on our curated high-quality datasets.
On extensive evaluation, SOLO demonstrates performance comparable to LLaVA-v1.5-7B, particularly excelling in visual mathematical reasoning.

## 1 Introduction

Large vision-language models (LVLMs) demonstrate remarkable performance on downstream tasks (Li et al., 2023c; Zhu et al., 2023; Liu et al., 2023c; Chen et al., 2023c; Kim & Ji, 2024). They can effectively extract visual information (Wang et al., 2023b) and follow human instructions to generate insightful responses (Li et al., 2023b; Chen et al., 2024d). Two established approaches for vision-language modeling include: (1) Connecting pre-trained visual encoders (Dosovitskiy et al., 2021b; Radford et al., 2021) and large language models (LLMs) (Touvron et al., 2023; Jiang et al., 2023) via a learned projection module that maps the visual embeddings to the embedding space of LLMs (Dai et al., 2023; Gao et al., 2023; Liu et al., 2023c), or via an intermediate symbolic layer (Wang et al., 2024c). (2) Leveraging a pre-trained visual encoder to extract features and aligning feature embeddings with a pre-defined codebook (Esser et al., 2021) to convert each image into a sequence of *discrete visual tokens*, thus enabling LVLMs to process both images and language tokens (Wang et al., 2022b; Peng et al., 2022; Anil et al., 2023; Team, 2024; Diao et al., 2023). However, despite their effectiveness, these approaches have limitations that make them hard to scale. We consider an architecture scalable when it can demonstrate sustained performance improvement through scaling (*e.g.,* more computational resources and/or data) without hitting any inherent bottleneck in the machine learning system.
The bottleneck in prevalent LVLMs is evident across four dimensions (§2.1), primarily due to their reliance on a pre-trained visual encoder:

![1_image_0.png](1_image_0.png)

Figure 1: (*Previous work*) The mainstream approaches for vision-language modeling rely on pre-trained visual encoders for visual feature extraction, which exhibits scalability limitations. (*Our work*) We advocate for a unified transformer architecture that processes both images and text, employing a simple linear projection to directly handle raw image pixels. <vision>, </vision>, and <vrow_sep> are special tokens designed explicitly for visual modality encoding.

(1) **Constrained visual capabilities:** The visual capacities of a pre-trained vision encoder are largely pre-determined and limited by the data distribution and volume used during pre-training. Because visual encoders are significantly smaller than LLMs (often more than ten times smaller), they can be a performance bottleneck in solving complex vision-language tasks. (2) **Challenges in large-scale training and deployment:** The heterogeneous architecture of LVLMs with vision encoders poses challenges in adapting existing training and inference frameworks and in utilizing existing hardware, which is mostly tailored and optimized for single Transformer architectures. (3) **Multiple components complicate the scaling analysis:** The analysis of scaling laws, which are crucial for the development of foundation models, is complicated by the necessity to consider the size of several distinct components independently: the visual encoder, the connector, and the LLMs. (4) **Limited image pre-processing flexibility:** Most vision encoders pre-define a specification of how image inputs should be pre-processed. For example, widely used visual backbones such as CLIP-ViT-336 (Radford et al., 2021) require a square image input with a resolution of 336 × 336.
The inflexibility of image pre-processing can cause bottlenecks that hinder scalability. To address these limitations, we present SOLO, which employs a **single Transformer architecture for unified and end-to-end vision-language modeling**. SOLO accepts both raw image patches (in *pixels*) and texts as inputs, without using a separate pre-trained vision encoder (Fig. 1). This simplifies the model design and enhances the scalability and adaptability of the LVLM architecture. By simplifying from a multi-component LVLM to a single Transformer model, this architecture is unconstrained by the capabilities of visual encoders, easier to train and deploy using existing hardware and software, allows a more straightforward scaling law analysis, and can easily scale to image data with diverse resolutions and aspect ratios. SOLO, with a 7-billion parameter count, is initialized from Mistral LLM v0.1 (Jiang et al., 2023) and leverages its extensive pre-trained knowledge. This modeling strategy is inspired by the foundational modeling framework of VisualBERT (Li et al., 2019) and industry efforts to scale unified LVLMs to the billion scale (Bavishi et al., 2023). Despite its simplicity and scalability, its limited contemporary adoption can be attributed to the lack of reliable training recipes, as balancing vision and language modalities in unified LVLMs often leads to training divergence. This paper details the first open-source recipe for developing scalable unified LVLMs, using modest academic computational resources, specifically 8 NVIDIA A100 80GB GPUs (§3). Our training recipe involves initializing with pre-trained LLMs, sequential pre-training on ImageNet and web-scale datasets, and instruction fine-tuning on our curated high-quality data mixture.
While it still lags behind recent state-of-the-art LVLMs on evaluation benchmarks, SOLO exhibits performance on par with LLaVA-v1.5-7B (§4) and the variant LLaVA-7B∗ (§6), which is created following our training recipe in a controlled setting. In particular, SOLO distinguishes itself in the domain of visual mathematical reasoning. Further scalability analysis reveals better scaling behaviors of SOLO and the scalability of our flexible image pre-processing pipeline (§6.2). In addition, through comprehensive ablation studies, we validate the design choices of our training recipe. Our empirical results confirm that the sequential pre-training on ImageNet and web-scale datasets and instruction fine-tuning on our carefully curated data mixture are both essential for the training of such single Transformer LVLMs (§5). Interestingly, we find that after removing the first stage of pre-training on ImageNet, the LVLM produces outputs of drastically different quality while exhibiting a similar image-conditioned language modeling loss (§5.1, Fig. 3).

## 2 Tackling Scalability Limitations Via Integrated Architectures

## 2.1 Scalability Limitations In Existing LVLMs

The scalability constraints of existing LVLMs are currently articulated from four critical perspectives that limit their efficiency in utilizing expanded computational resources and larger datasets due to bottlenecks in the system design:

Fixed and Constrained Visual Capabilities The fixed nature of visual encoders severely limits the adaptability of LVLMs to novel visual data distributions and more complex vision-language tasks, since these encoders are trained on specific distributions and training objectives. Current approaches address this issue by continuing the training of visual encoders (Bai et al., 2023) or by integrating features derived from various visual encoders (Lin et al., 2023).
Nonetheless, the scope of data used for continued pre-training is substantially less than that used initially, which only marginally enhances encoder adaptability, and employing multiple encoders complicates the process of image feature extraction, thereby impeding the scalability of LVLMs. Moreover, the smaller scale of visual encoders compared to LLMs frequently results in the visual understanding component becoming a bottleneck. Consequently, visual representation learning is limited to the smaller visual encoders, hindering the full utilization of LLM capabilities in existing LVLMs.

Challenges in Large-Scale Training and Deployment The heterogeneous architecture with multiple components complicates the implementation of machine learning systems for large-scale training and deployment. (1) **Training challenge**: At large training scale (*i.e.,* multi-node clusters), it is necessary to distribute not only the Transformer-based LLMs but also the vision model and the MLP connector across multiple devices, employing techniques such as tensor and pipeline parallelism. Prevalent LVLMs therefore cannot directly use existing industry-grade training frameworks optimized for the Transformer architecture (Shoeybi et al., 2019; Cano et al., 2023), necessitating the development of new tensor-sharding mechanisms. In addition, AI alignment typically employs algorithms such as Proximal Policy Optimization (PPO) (Schulman et al., 2017), which necessitate simultaneously maintaining multiple models (*e.g.,* reward and critic models) in GPU memory and cause difficulty in the algorithm implementations for heterogeneous architectures. (2) **Deployment challenge**: The heterogeneous architecture complicates the deployment process due to similar model and tensor sharding challenges as described above. Consequently, this hampers the large-scale serving of existing LVLMs.
Moreover, existing specialized AI chips (Techcrunch) and inference libraries, such as vLLM (Kwon et al., 2023) and MLC-LLM (team, 2023), are mostly optimized for Transformer architectures, presenting significant challenges in the deployment of these models on end devices.

Multiple Components Complicate the Scaling Analysis The complexity introduced by the multiple components of LVLMs is a significant barrier to understanding and improving these systems. Each component (the visual encoder, the connector, and the language model) operates with its own parameters and training strategies (Radford et al., 2021; 2019; Brown et al., 2020a), which can lead to a lack of cohesion in the overall model behavior. Scaling laws are crucial for guiding the development of large foundational models by forecasting the performance of a target model using data from several sampled models that are significantly smaller in size (Kaplan et al., 2020; Bahri et al., 2021). However, applying these approaches to existing LVLMs requires simultaneous consideration and scaling of various components, thereby increasing complexity.

Limited Image Pre-Processing Flexibility The strict requirements for image pre-processing imposed by the specifications of visual encoders may create bottlenecks that hinder scalability. For instance, the requirement of a consistent input resolution can make it difficult to process images that are naturally high-resolution or have non-standard aspect ratios without compromising the quality or representational fidelity of the input. Current mitigation strategies involve splitting the original image into multiple sub-images, independently extracting visual features from each sub-image using pre-trained visual encoders, and subsequently aggregating the representation embeddings (Xu et al., 2024a; Liu et al., 2024a; Dong et al., 2024).
However, these approaches seem ad hoc and can be suboptimal, as the visual backbone is not pre-trained to handle such inputs, potentially impairing the effective handling of high-resolution images.

## 2.2 Unified Vision-Language Modeling With Integrated Architectures

We revisit the foundational modeling framework of VisualBERT (Li et al., 2019), initially proposed in the early stages of research on pre-trained vision-language models. The key idea is to use one single Transformer, initialized from BERT (Devlin et al., 2018) in VisualBERT, to uniformly process the image patches and language tokens. Fuyu-8B exemplifies the industry's effort to scale this modeling approach (Li et al., 2019) to billion-scale models (Bavishi et al., 2023). However, the limited widespread implementation of this unified architecture may be due to the lack of an established training recipe, as only the pre-trained model is released by Bavishi et al. (2023) without training details. Training such unified LVLMs presents significant challenges in balancing the two modalities and maintaining stable training, for which clear solutions are currently lacking. In this paper, we present SOLO with full details of its unified and integrated architecture design and training recipe.

## 3 SOLO: Scalable Vision-Language Modeling

SOLO consolidates image and language capabilities into a single model, enables data-driven determination of visual representations and parameter allocation across visual and language modalities, simplifies the scaling law analysis, and flexibly handles high-resolution images and those with uncommon aspect ratios. For large-scale training (§3.2), SOLO also seamlessly integrates with established software frameworks for large-scale Transformer pre-training (Shoeybi et al., 2019).

## 3.1 Model Architecture

The architecture of SOLO is shown in Fig. 1, which diverges from earlier models primarily in the extraction of visual features.
Instead of resizing the image to a fixed resolution adapted to pre-trained image encoders, SOLO keeps the original resolution and aspect ratio of each image. The feature extraction involves splitting the image into patches with a pre-defined size. Through a trainable linear projection, these raw image patches (in *pixels*) are transformed to obtain continuous embeddings that represent the visual features of the images. Thus, we can integrate image and language processing within a single model. We maintain a list of special tokens designed explicitly for visual modality encoding: <vision> and </vision> tokens mark the beginning and end of a span of image patches respectively; <vrow_sep> acts as a row separator within the image patches and helps the model distinguish between different rows of image patches, aiding in structured visual understanding.

![3_image_0.png](3_image_0.png)

Figure 2: The input image resize algorithm to maintain the aspect ratio.

| Training Stage | Dataset | #Instances | #Images | #Tokens | #Text Tokens | #Vision Tokens |
|---|---|---|---|---|---|---|
| Stage-1 | ImageNet21K (Ridnik et al., 2021b) | 74,283 | 13,151,276 | 2,423,203,108 | 212,745,573 | 2,210,457,535 |
| | SlimPajama, subset (Soboleva et al., 2023) | 120,839 | 0 | 4,340,877,587 | 4,340,877,587 | 0 |
| | Total | 195,122 | - | 6,764,080,695 | 4,553,623,160 (67.32%) | 2,210,457,535 (32.68%) |
| Stage-2 | Capfusion (subset) (Yu et al., 2024) | 204,978 | 23,681,864 | 6,664,351,863 | 1,172,726,505 | 5,491,625,358 |
| | Websight (Laurençon et al., 2024) | 71,579 | 1,922,671 | 2,300,945,215 | 1,087,060,511 | 1,213,884,704 |
| | CC3M (Sharma et al., 2018b) | 32,760 | 2,331,439 | 1,064,477,314 | 76,092,147 | 988,385,167 |
| | Detailed Captions (lz) | 6,225 | 368,767 | 202,016,770 | 44,788,200 | 157,228,570 |
| | LLaVAR (Zhang et al., 2023b) | 3,602 | 422,315 | 117,448,784 | 31,390,556 | 86,058,228 |
| | DVQA (Kafle et al., 2018) | 2,917 | 200,000 | 94,853,796 | 55,653,796 | 39,200,000 |
| | OCR-VQA (Mishra et al., 2019) | 1,593 | 165,746 | 51,920,705 | 21,161,018 | 30,759,687 |
| | FigureQA (Kahou et al., 2017) | 1,526 | 100,000 | 49,586,305 | 24,803,256 | 24,783,049 |
| | SlimPajama, a different subset (Soboleva et al., 2023) | 120,385 | 0 | 4,300,998,161 | 4,300,998,161 | 0 |
| | Total | 445,565 | - | 14,846,598,913 | 6,814,674,150 (45.90%) | 8,031,924,763 (54.10%) |
| Stage-3 | ALLaVA-LAION (Chen et al., 2024a) | 13,725 | 438,992 | 442,509,490 | 176,660,898 | 265,848,592 |
| | ALLaVA-VLFLAN (Xu et al., 2024b) | 4,469 | 207,549 | 144,577,377 | 77,835,919 | 66,741,458 |
| | LLaVAR (Zhang et al., 2023b) | 3,602 | 422,315 | 117,448,784 | 31,390,556 | 86,058,228 |
| | DVQA (Kafle et al., 2018) | 2,917 | 200,000 | 94,853,796 | 55,653,796 | 39,200,000 |
| | FigureQA (Kahou et al., 2017) | 1,526 | 100,000 | 49,586,305 | 24,803,256 | 24,783,049 |
| | SlimPajama, a different subset (Soboleva et al., 2023) | 12,085 | 0 | 430,688,442 | 430,688,442 | 0 |
| | Total | 38,324 | - | 1,279,664,194 | 797,032,867 (62.28%) | 482,631,327 (37.72%) |

Table 1: Summary of datasets used in the three stages of pre-training. Each image patch counts as a vision token. The number of *instances* is calculated after packing examples into sequences of length 32K.

Formally, we define the patch size P and the maximal resolution M. For an image of dimensions (*W, H*), it is resized to (W′, H′) to ensure divisibility by P. Fig.
2 details the resizing process, which adjusts W and H to the nearest multiples of P while preserving the original aspect ratio to the extent possible and complying with the constraints imposed by M. Subsequently, the image is divided into N patches, where N = (W′/P) × (H′/P), each with dimensions P × P × 3. A trainable linear projector then maps each patch from a flattened P × P × 3 vector to an output dimension compatible with the embedding space of LLMs, extracting N embeddings as the image's feature representation. These visual embeddings, along with special visual modality tokens and embeddings of the text tokens, are concatenated and processed through a single Transformer, facilitating unified vision-language modeling. Notably, compared to prevalent LVLMs, this modeling strategy facilitates a much earlier fusion of visual and language modalities, allowing LVLMs to extract relevant information conditioned on the given instructions. In our implementation, we initialize SOLO from the Mistral-7B-v0.1 base LLM. The max resolution M of processed images is set to 1024. The patch size P is set to 32.

## 3.2 Training Recipe

We describe our approach for training unified billion-scale LVLMs, including pre-training (§3.2.1) and instruction fine-tuning (§3.2.2). For both stages, we optimize exclusively the language modeling loss on natural language tokens, without optimizing loss on image patches and special image tokens (*e.g.,* <vision>). We substantiate the essential ingredients of our recipe in §5.

## 3.2.1 Pre-Training

We introduce a three-stage pre-training curriculum that progressively enhances the visual capabilities of LVLMs while preserving their fundamental language capabilities. We present the datasets and their statistics for each stage in Tab. 1.

Stage-1 ImageNet Pre-Training for Initialization We leverage ImageNet21K (Ridnik et al., 2021a), encompassing a broad spectrum of fine-grained visual categories, for the initial pre-training stage.
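The patch pipeline of §3.1 (resize both sides to multiples of P with the longer side capped at M, split into N = (W′/P) × (H′/P) patches, then linearly project each flattened P × P × 3 patch) can be sketched as follows. This is a minimal NumPy illustration under our own rounding assumption for the Fig. 2 procedure, not SOLO's released code:

```python
import numpy as np

def resize_dims(w, h, p=32, m=1024):
    """Scale (w, h) so the longer side is at most m, then round each side
    to the nearest multiple of p (at least one patch per side)."""
    scale = min(1.0, m / max(w, h))
    w, h = w * scale, h * scale
    to_mult = lambda x: max(p, int(round(x / p)) * p)
    return to_mult(w), to_mult(h)

def patchify(img, p=32):
    """Split an (H, W, 3) image into a (rows, cols, p*p*3) array of flattened patches."""
    h, w, c = img.shape
    assert h % p == 0 and w % p == 0
    rows, cols = h // p, w // p
    patches = img.reshape(rows, p, cols, p, c).swapaxes(1, 2)
    return patches.reshape(rows, cols, p * p * c)

# Each flattened patch is mapped into the LLM embedding space by one
# trainable linear projection (here: a random stand-in matrix).
d_model = 4096
proj = np.random.randn(32 * 32 * 3, d_model).astype(np.float32) * 0.02

w2, h2 = resize_dims(1280, 720)            # longer side capped at 1024
img = np.zeros((h2, w2, 3), dtype=np.float32)
patches = patchify(img)                    # (rows, cols, 3072)
embeddings = patches.reshape(-1, 32 * 32 * 3) @ proj
```

In the full model, the resulting rows of patch embeddings would be separated by <vrow_sep> tokens and the whole span wrapped in <vision> … </vision> before being concatenated with the text-token embeddings.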
In this process, we train SOLO to predict *only* fine-grained labels in natural language tokens (class names of images, e.g., "golden retriever") conditioned on the image patches, thereby developing visual representations that initialize subsequent pre-training runs. In §5, we demonstrate the critical role of this stage in training unified LVLMs: when this stage is removed, the LVLM pre-trained on web-scale data from Stage 2 (*e.g.,* captioning) fails to generate meaningful captions (Fig. 4).

Stage-2 Pre-Training on Web-Scale Data ImageNet21K, composed chiefly of visual concept data annotated by humans, faces scalability constraints in both knowledge breadth and data volume. In Stage-2, we scale up the pre-training data to encompass web-scale data, primarily consisting of image-caption pairs from sources like Capfusion (Yu et al., 2024) and CC3M (Sharma et al., 2018a). Additionally, we include synthetically generated web pages with associated HTML code from Websight (Laurençon et al., 2024) to improve OCR performance, and we also include a small set of supervised datasets to improve data diversity. In this stage, the language modeling loss is applied uniformly across all language tokens, encompassing captions, HTML code, and questions and responses within the supervised datasets.

Stage-3 Annealing Following MiniCPM (Hu et al., 2024a), we perform a final annealing stage to conclude the pre-training. In this stage, we incorporate a limited selection of supervised datasets, either down-sampled or omitted from the instruction fine-tuning dataset mixture (*e.g.,* ALLaVA, Chen et al. 2024a), to prime SOLO for the subsequent instruction fine-tuning stage. The primary purpose of this stage is to transition SOLO from noisy web data to high-quality data mixtures.
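Across all stages, §3.2 optimizes the language-modeling loss exclusively on natural-language tokens, never on image patches or special vision tokens. A standard way to implement this is to mask the labels at vision positions; the snippet below is our own illustration (the -100 ignore-label convention follows common practice and is an assumption, not a detail taken from the paper):

```python
import numpy as np

IGNORE = -100  # conventional "no loss" label id

def lm_labels(token_ids, is_vision):
    """Next-token labels: shift left by one, and ignore positions whose
    target is a vision patch or special vision token."""
    labels = token_ids[1:] + [IGNORE]              # predict token t+1 at position t
    return [IGNORE if v else l for l, v in zip(labels, is_vision[1:] + [True])]

def masked_ce(logits, labels):
    """Mean cross-entropy over non-ignored positions only."""
    keep = [i for i, l in enumerate(labels) if l != IGNORE]
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -float(np.mean([logp[i, labels[i]] for i in keep]))

# toy sequence: <vision> patch patch </vision> "a" "cat"
ids       = [10, 11, 12, 13, 4, 5]
is_vision = [True, True, True, True, False, False]
labels = lm_labels(ids, is_vision)
# loss is computed only where the target is a text token ("a", "cat")
loss = masked_ce(np.zeros((6, 20)), labels)
```

With uniform (all-zero) logits over a 20-token vocabulary, the masked loss equals log 20, averaged over the two text targets only.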
Balancing Text and Vision Capability Through Language Corpus Blending Initializing with a base LLM and performing full-parameter training necessitates carefully preserving the inherent language comprehension abilities while performing image representation learning, since most real-world vision-language tasks require text-only capabilities such as instruction comprehension and complex reasoning. At each stage of SOLO's pre-training, we mix in a non-trivial proportion of text-only pre-training data (SlimPajama, Soboleva et al. 2023) to maintain the text capability. We present more empirical results on how the data mixture affects image and text loss trade-offs in §F.

Pre-Training Infrastructure We modify the standard Megatron-LLM (Cano et al., 2023) to support arbitrary image patch inputs. We use one node with 8 NVIDIA A100 80GB GPUs for pre-training. We use 2-way tensor parallelism (Shoeybi et al., 2019) and 4-way data parallelism for training. Following Shoeybi et al. (2019), we adopt a distributed optimizer to shard optimizer states across different GPUs for memory efficiency. We obtain a training throughput of 20k tokens per second.

Training Hyperparameters We use a global batch size of 128 examples (*i.e.,* 4M tokens), and each pre-training example is packed to 32,768 tokens. We adopt a learning rate of 5e-5 with cosine decay to a minimum learning rate of 5e-6 and warm up for 200 steps. We use a weight decay of 0.1. For training efficiency, we pack shorter sequences into one longer sequence and re-adjust the attention mask to ensure that tokens from different examples cannot attend to each other. The training process consists of 1525 steps in Stage 1, 3480 steps in Stage 2, and 300 steps in Stage 3.

## 3.2.2 Instruction Fine-Tuning

Dataset Curation We meticulously select a diverse range of supervised datasets for instruction fine-tuning, aiming to enhance performance across various domains of vision-language tasks.
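The sequence-packing scheme described under Training Hyperparameters above amounts to a block-diagonal causal attention mask; a minimal NumPy sketch of how such a mask can be built (our illustration, not SOLO's training code):

```python
import numpy as np

def packed_causal_mask(lengths):
    """Boolean (T, T) mask for a packed sequence made of segments with the
    given lengths: position i may attend to j iff j <= i AND both positions
    belong to the same original example."""
    seg = np.repeat(np.arange(len(lengths)), lengths)   # segment id per token
    t = len(seg)
    causal = np.tril(np.ones((t, t), dtype=bool))
    same_example = seg[:, None] == seg[None, :]
    return causal & same_example

# two examples (lengths 3 and 2) packed into one sequence of 5 tokens;
# the first token of the second example attends only to itself
mask = packed_causal_mask([3, 2])
```

Position ids are typically reset at each segment boundary as well; that detail is our assumption, as the paper only states the attention-mask adjustment.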
Our dataset selection strategy is based on the empirical analysis derived in Laurençon et al. (2024); Lin et al. (2024); Lu et al. (2024), and is mainly driven by the objective to cover a comprehensive range of data types, including language-only data, detailed image captions, scientific documents, tables, documents, charts, OCR and text-rich images, and general visual question-answering (VQA) tasks. In §A, we present the datasets and their statistics in Tab. 3 and more details regarding data curation.

Implementation Details To efficiently conduct experiments with various data mixtures for iterative testing, we utilize DeepSpeed (Rasley et al., 2020), as implemented in Accelerate (Gugger et al., 2022), for instruction fine-tuning. The global batch size is configured at 640, with a weight decay parameter of 0.1. We train for 1 epoch with a maximum learning rate of 1e-5, which follows a linear warm-up phase and transitions to a cosine decay schedule.

| Level | Model | Visual | Language | MMStar | MME | SEED | ScienceQA | MathVista | AI2D |
|---|---|---|---|---|---|---|---|---|---|
| Level-1 | OpenFlamingo v2 | (C) ViT-L/14 | MPT-7B | 26.9 | 607.2 | 28.8 | 44.8 | 18.6 | 31.7 |
| | MiniGPT-4-v2 | EVA-G | Llama2-13B | 21.3 | 968.4 | 29.4 | 54.7 | 23.1 | 30.5 |
| | VisualGLM | EVA-CLIP | ChatGLM-6B | 5.9 | 738.1 | 47.0 | 56.1 | 21.9 | 41.2 |
| | InstructBLIP | EVA-G | Vicuna-7B | 32.7 | 1391.4 | 44.5 | 54.1 | 24.4 | 40.6 |
| | LLaVA-v1-7b | (C) ViT-L/14 | Llama-7B | 27.1 | 1075.5 | 50.4 | 61.8 | 25.2 | 48.3 |
| Level-2 | LLaVA-v1.5-7b | (C) ViT-L/14 | Vicuna-V1.5-7B | 33.1 | 1808.4 | 65.8 | 69.2 | 25.6 | 55.5 |
| | mPLUG-OWL v2 | (C) ViT-L/14 | Llama2-7B | 34.8 | 1786.4 | 64.5 | 69.5 | 25.4 | 55.7 |
| | XComposer | EVA-G | InternLM-7B | 6.9 | 1874.2 | 66.1 | 89.8 | 29.8 | 56.9 |
| | MiniCPM-V | SigLIP-400M | MiniCPM-2.4B | 38.6 | 1650.2 | 65.6 | 77.0 | 30.6 | 56.3 |
| Level-3 | Monkey | ViT-BigHuge | Qwen-7B | 37.0 | 1759.9 | 64.3 | 72.1 | 33.5 | 62.5 |
| | LLaVA-Next | (C) ViT-L/14 | Mistral-7B | 38.4 | 1821.2 | 72.4 | 73.0 | 34.6 | 69.0 |
| | MiniCPM-v2 | SigLIP-400M | MiniCPM-2.4B | 39.1 | 1808.2 | 67.1 | 80.7 | 39.8 | 62.9 |
| | DeepSeek-VL | Hybrid | DeepSeek-7B | 40.5 | 1765.4 | 70.1 | 80.9 | 36.9 | 65.3 |
| Ours | SOLO | - | Mistral-7B | 35.5 | 1260.0 | 64.4 | 73.3 | 34.4 | 61.4 |

Table 2: Evaluation results of SOLO and open-source LVLMs on six vision-language benchmarks.

## 4 Comparison To Existing LVLMs

## 4.1 Model

We select various open-source LVLMs for comparison to better understand the capabilities of SOLO. Based on the release time and capabilities of LVLMs, we select 3 groups of LVLMs to better understand the current development phase of SOLO. Level-1 LVLMs represent the pioneering generation, which initiated the integration of visual encoders with pre-trained LLMs, with releases prior to October 2023. Level-2 LVLMs, released before early 2024, typically feature a more refined selection of instruction fine-tuning data to enhance performance. Level-3 marks the state-of-the-art (SoTA) LVLMs, released within the last five months, incorporating advanced training recipes, superior LLM backbones, and support for high-resolution images.

- **Level-1**: (1) OpenFlamingo v2 (Awadalla et al., 2023), (2) MiniGPT4 v2 (Chen et al., 2023a), (3) VisualGLM (Du et al., 2022), (4) InstructBLIP (Dai et al., 2023), (5) LLaVA v1 (Liu et al., 2023a).
- **Level-2**: (6) LLaVA v1.5 (Liu et al., 2024a), (7) mPLUG-Owl v2 (Ye et al., 2024), (8) InternLM-XComposer (Zhang et al., 2023a), (9) MiniCPM-v1 (Hu et al., 2023).
- **Level-3**: (10) Monkey (Li et al., 2024b), (11) LLaVA-NEXT (Liu et al., 2024b), (12) MiniCPM-v2 (Hu et al., 2024b), (13) DeepSeek-VL (Lu et al., 2024).

Each LVLM may have multiple variants based on different LLM sizes and architectures. If possible, we opt for the variant equipped with a 7B Mistral LLM.
For the remaining LVLMs, we select the variant whose configuration most closely aligns with our specifications (a Mistral-7B LLM). We directly present the evaluation results of existing LVLMs from the leaderboard (OpenCompass) when available, to ensure a fair comparison.

## 4.2 Benchmarks

We select a wide range of benchmarks, encompassing both general vision-language tasks and specific task-oriented datasets, for evaluation and analysis. For general vision-language capability evaluation, we choose MMStar (Chen et al., 2024b), MME (Fu et al., 2024), and SEED-Bench (Li et al., 2024a). Specifically, MMStar measures elite vision-indispensable capabilities, MME measures both perception and cognition capabilities, and SEED-Bench spans 12 evaluation dimensions covering various aspects of LVLM capabilities. For scientific document understanding, we choose AI2D (Kembhavi et al., 2016) and ScienceQA (Lu et al., 2022a). For visual mathematical reasoning, we choose MathVista (Lu et al., 2023). We adopt VLMEvalKit (Contributors, 2023) to perform the unified evaluation.

## 4.3 Results

The experimental results are shown in Tab. 2. We find that SOLO significantly outperforms Level-1 LVLMs and performs comparably to Level-2 LVLMs, while slightly underperforming Level-3 LVLMs.

![7_image_0.png](7_image_0.png)

Figure 3: Image captioning loss using two differently initialized checkpoints: (1) caption-only pre-training (green) initialized from the LLM; (2) two-stage pre-training (blue) initialized from the Stage-1 ImageNet pre-trained LVLM.

Figure 4: Qualitative analysis of caption-only pre-training and SOLO's two-stage pre-training. Comparisons are made on two checkpoints with comparable vision-language modeling loss (*i.e.,* 2.1). Specifically, we select the caption-only checkpoint at pre-training step 150, and SOLO at step 100.

Furthermore,
SOLO excels in task-oriented benchmarks, especially in areas requiring scientific knowledge and mathematical reasoning, due to its successful integration of image representation and complex reasoning within a single unified model. Overall, while SOLO does not yet meet the SoTA performance of the leading LVLMs (Level-3) within the prevalent multi-component LVLM framework, it marks a substantial progression in unified vision-language modeling. This establishes SOLO as a viable candidate for future developments aimed at closing the performance gap with SoTA LVLMs, while offering more flexibility and scalability by avoiding the issues of prior architectures (§2.1).

## 5 Validating Key Ingredients In Our Recipe

## 5.1 LVLMs Generate Meaningless Captions Without Stage-1 Pre-Training

We assess the necessity of Stage-1 pre-training by comparing the Stage-2 LVLM checkpoints *with* and *without* Stage-1 ImageNet pre-training. In Fig. 3, we observe that these two variants achieve overall similar pre-training loss curves on vision-language modeling and (text) language modeling.

**Select Checkpoints for Comparison** We select two checkpoints for comparison: one using caption-only pre-training (Stage-2 only) and the other utilizing SOLO's two-stage pre-training, both of which achieve an equivalent vision-language modeling loss of 2.1.

**Qualitative Comparison** We *randomly* select one example for qualitative analysis (see Fig. 4). Despite the equivalent loss of the selected checkpoints in Fig. 3, we find that without ImageNet pre-training (Stage-2 only), the model generates irrelevant and meaningless image captions, indicating training divergence.

![8_image_0.png](8_image_0.png) ![8_image_1.png](8_image_1.png) ![8_image_2.png](8_image_2.png) ![8_image_3.png](8_image_3.png)

(b) The effectiveness of Stage-2 pre-training on web-scale data for knowledge breadth and data volume. (d) The performance across training steps on the fine-tuning dataset.
Figure 5: The evaluation performance of various ablations to validate key ingredients of our recipe. The MME scores are normalized for better illustration.

**Quantitative Comparison** We perform a quantitative comparison on the same two checkpoints by training them on the instruction fine-tuning data mixture for 800 steps (see Fig. 5a). Compared to the two-stage pre-trained SOLO, we observe a performance degradation across multiple benchmarks on the checkpoint without Stage-1 pre-training, further validating the importance of the first stage.

**Discussion** We hypothesize that discrepancies between a model's vision and language capabilities can lead to the observed behaviors. Specifically, when there is a significant imbalance, such as with the Mistral 7B model, which possesses advanced language abilities but lacks vision understanding, the model may reduce loss by replicating caption patterns, including redundant text tokens irrelevant to the visual content. For instance, in a caption like "This is a dog", the essential element is "dog". Focusing solely on minimizing language modeling loss without a robustly initialized vision representation may lead the model to favor generic phrases like "This is a" over the more discriminative "dog", because the former includes more tokens and thus disproportionately influences the overall language modeling loss. Pre-training on ImageNet at Stage 1, which emphasizes predicting only the "dog" token, helps the model develop a solid visual representation, effectively narrowing the gap between vision and language capabilities and mitigating this issue. In addition, the results indicate that pre-training loss on vision-language data does not reliably indicate the performance of LVLMs. A detailed analysis is provided in §D, which also demonstrates that training loss on the instruction fine-tuning data mixture is similarly unreliable as an indicator.
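The token-weighting argument above can be made concrete with a toy calculation; all probabilities below are hypothetical and only illustrate why a model that copies generic caption patterns can reach a lower average loss than one that grounds the single content token:

```python
import math

def mean_nll(probs):
    """Average negative log-likelihood over the caption tokens."""
    return sum(-math.log(p) for p in probs) / len(probs)

# Reference caption "This is a dog": 3 generic tokens + 1 visually grounded token.
# Both hypothetical models start from a baseline probability of 0.1 per token.
baseline = 0.1
learns_generics = [0.99, 0.99, 0.99, baseline]    # copies "This is a", ignores the image
learns_visual   = [baseline, baseline, baseline, 0.9]  # grounds "dog" in the image

print(mean_nll(learns_generics))  # ~0.58: lower loss despite failing on "dog"
print(mean_nll(learns_visual))    # ~1.75
```

Because the generic tokens outnumber the content token three to one, gradient descent on the averaged loss favors the pattern-copying solution, which is exactly the failure mode Stage-1 pre-training counteracts.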
![9_image_0.png](9_image_0.png)

Figure 6: Stage-2 language modeling loss when trained on (1) the image captioning objective only (blue); (2) the image captioning objective and the image classification objective used in Stage 1 (orange).

## 5.2 Stage-2 Pre-Training On Web-Scale Data

**Stage-2 Pre-Training Improves Performance on Top of Stage-1** We verify the effectiveness of Stage-2 pre-training on web-scale data by comparing the performance of two LVLMs. Each model is fine-tuned for 800 steps using the same instruction fine-tuning data mixture but initialized differently: one from the end of Stage 1 and the other from the end of Stage 2. In Fig. 5b, we observe a significant improvement on all evaluation datasets after pre-training on web-scale data, showing the substantial advantages of Stage-2 pre-training compared to solely using ImageNet data (Stage 1 only).

**Combining ImageNet and Captioning at Stage-2 Hurts Performance** In addition, it is pertinent to ask whether ImageNet21K data can be combined with web-scale data in Stage 2. We include an ablation with SOLO trained on both ImageNet21K and the web-scale data in the second stage. Fig. 6 illustrates the training curves for comparison. The results suggest that while ImageNet pre-training effectively establishes an initial visual representation, it may not be optimal for subsequent Stage-2 pre-training on web-scale data, as it potentially impedes the optimization of vision-language modeling on image captions (*i.e.,* the vision-language modeling loss stops improving). This discrepancy may arise from the divergence between image classification and captioning capabilities, the former being emphasized in the first stage. This two-stage approach aligns with the principles of continual curriculum learning, where the model must maintain proficiency in familiar tasks while integrating new ones. This conclusion is also supported by our evaluation on downstream tasks (see Fig. 5c).
We train two different checkpoints, with and without ImageNet21K data in the second stage, on the instruction fine-tuning data mixture (§3.2.2) for 800 steps. Note that we select two checkpoints with comparable vision-language modeling losses for analysis. The results indicate that incorporating ImageNet21K data in the second stage may detrimentally impact overall performance by inhibiting adaptation to and learning from web-scale data.

## 5.3 Performance Boost Via Instruction Fine-Tuning

We evaluate the performance of SOLO at different training steps throughout the instruction fine-tuning stage (see Fig. 5d). The results indicate a consistent improvement in SOLO's performance with prolonged training on the fine-tuning dataset, although the MME scores exhibit some fluctuations. This outcome contrasts with the findings of Liu et al. (2024a), where performance quickly plateaus upon training with a limited subset of the fine-tuning dataset. This illustrates the increased scalability of SOLO during the instruction fine-tuning stage, suggesting that acquiring additional high-quality supervised datasets for fine-tuning could consistently enhance performance.

## 5.4 Additional Validation Experiments

We present results demonstrating the effectiveness of Stage-3 annealing in §B and validating the curated data mixture for instruction fine-tuning in §C. We also find that balancing vision and language capabilities during pre-training is challenging at the 7B scale, and present our analysis in §F.

## 6 Further Analysis

## 6.1 Controlled Analysis Of Fuyu And LLaVA

We conduct a controlled analysis to compare SOLO with LLaVA and Fuyu (see Fig. 7). SOLO with our training recipe consistently outperforms Fuyu-8B, which adopts the same unified modeling strategy, across all evaluation benchmarks. To facilitate a controlled comparison with LLaVA, we develop LLaVA-7B∗, which integrates CLIP-ViT-336 and Mistral-7B-base-v0.1, utilizing our specific training procedure and data.
The results reveal that LLaVA-7B∗ achieves performance similar to LLaVA-v1.5-7B (Liu et al., 2024a), indicating that our training recipe, which utilizes large-scale datasets and extensive training, may not significantly impact LLaVA-style LVLMs. Notably, while LLaVA-7B∗ excels in general visual-language tasks, SOLO demonstrates superior capabilities in visual mathematical reasoning, with overall performance being similar.

![10_image_0.png](10_image_0.png)

Figure 7: The controlled analysis of Fuyu-8B and LLaVA-7B∗.

## 6.2 Scalability Analysis

**SOLO Shows Better Scaling Behaviors Compared to LLaVA** We perform instruction fine-tuning on the pre-trained LLaVA obtained in §6.1 and compare its scaling behavior with SOLO by measuring the performance improvement per training token (see Fig. 8). For evaluation purposes, we fine-tune both models for 50 steps to obtain starting points, as the pre-trained models are ineffective at following instructions. We measure the performance improvement per token every 500 steps and average these measurements to obtain the metric. We observe that SOLO demonstrates better scaling behavior than LLaVA across all evaluation benchmarks, since it more effectively transforms the increasing number of training tokens into performance gains.

**SOLO Demonstrates Improved Performance when Scaling up Image Resolution** We train SOLO on different image resolutions during the instruction fine-tuning stage for 1,000 steps due to compute limits (see Fig. 9). The image resolution during inference matches that used in the instruction fine-tuning stage. We find that SOLO continues to improve with increasing image resolution, especially on the visual mathematical reasoning task. In addition, there is no significant difference in performance between the fixed 1024-square resolution and the adapted resolution with 1024 as the maximum used in SOLO. This demonstrates the efficiency and scalability of our flexible image pre-processing pipeline.
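The adapted-resolution pre-processing can be sketched as follows; the patch size of 32 and the rounding behavior are illustrative assumptions, not SOLO's exact pipeline:

```python
def adapted_resolution(width, height, max_side=1024, patch=32):
    """Downscale only if needed so the longer side is at most `max_side`,
    then round each side down to a multiple of the patch size.

    Returns the adapted (width, height) and the resulting patch count,
    which determines the image's sequence length in the Transformer.
    """
    scale = min(1.0, max_side / max(width, height))  # never upscale
    w, h = int(width * scale), int(height * scale)
    w, h = (w // patch) * patch, (h // patch) * patch
    return w, h, (w // patch) * (h // patch)

print(adapted_resolution(2048, 512))  # -> (1024, 256, 256): long side capped at 1024
print(adapted_resolution(640, 480))   # -> (640, 480, 300): small image kept at native size
```

Unlike a fixed 1024-square resize, this keeps the aspect ratio and spends no extra sequence length on images that are already small.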
## 7 Related Work

**Model Architecture** Existing research advances the development of LVLMs capable of addressing diverse tasks via a unified interface that can directly generate natural language, thus avoiding task-specific modifications (Wang et al., 2021; 2022a; Li et al., 2023c). Utilizing pre-trained LLMs (Brown et al., 2020b; Bubeck et al., 2023) as the language component paired with pre-trained visual encoders (Radford et al., 2021; Dosovitskiy et al., 2021a), recent approaches further enhance the instruction-following, user-friendly response generation, and complex reasoning abilities of LVLMs (Liu et al., 2023c; Zhu et al., 2023; Dai et al., 2023; Alayrac et al., 2022; Li et al., 2023a; Ye et al., 2024). Concurrently, Wang et al. (2022b); Peng et al. (2022); Anil et al. (2023); Team (2024); Ge et al. (2023) propose to first learn a codebook in an initial stage to discretize the continuous embeddings extracted by visual encoders into a sequence of image tokens. These approaches enable a uniform vision-language modeling strategy for image and language tokens. However, the dependence on pre-trained visual encoders restricts the scalability of LVLMs. In this study, we address this challenge by re-adopting the conventional vision-language modeling approach that utilizes a single Transformer for both image and text processing (Li et al., 2019). Furthermore, while Bavishi et al. (2023) extend this approach to billion-scale models, they do not disclose the specifics of their training processes. We address this gap by offering reproducible training recipes, complete with publicly released code, for scalable vision-language modeling on a 7-billion-parameter LVLM.

![11_image_0.png](11_image_0.png)

Figure 8: We compare the scaling behaviors of SOLO and LLaVA by measuring the improvement in benchmark performance per token.

Figure 9: The performance of SOLO when trained and tested on different image resolutions. R@X denotes a resolution of X.
**Training Data** Typically, LVLMs leverage extensive image-caption pair datasets (Lin et al., 2014a; Schuhmann et al., 2021; 2022; Yu et al., 2024; Chen et al., 2023b) to train a projector or a codebook that maps continuous image features into the embedding space of LLMs, thereby aligning the two modalities (Li et al., 2023c; Gong et al., 2023; Zeng et al., 2023; Sun et al., 2023a). Furthermore, large-scale vision-language instruction tuning datasets (Su et al., 2023; Wei et al., 2023; Liu et al., 2023b; Gong et al., 2023; Gao et al., 2023; Li et al., 2023a) and feedback datasets (Li et al., 2023d; Sun et al., 2023b; Chen et al., 2024c; Zhang et al., 2024b) are utilized to further boost the fundamental capabilities of LVLMs and align them with human preferences, ensuring their ability to comprehend instructions and generate user-friendly responses. In this work, we propose a recipe that encompasses the selection of pre-training and instruction fine-tuning datasets, along with the corresponding multi-stage paradigms, to facilitate the training of billion-scale LVLMs with a single-Transformer architecture. Related work on LVLM evaluation benchmarks is discussed in §E.

## 8 Conclusion

This work revisits the simple vision-language modeling framework with a single Transformer. We argue that this approach effectively mitigates the scalability limitations inherent in prevailing models. With academic resources, we build SOLO, a 7B LVLM initialized from the Mistral LLM. We detail the training recipe and conduct extensive analysis and evaluation to validate the ingredients in our recipe. Experimental results show that SOLO demonstrates performance comparable to LLaVA-v1.5, supporting continued investigation into this unified vision-language modeling approach for improved scalability.
## Limitations And Broader Impact Statement The investigation into large-scale vision-language modeling using a unified transformer architecture remains nascent, with our model not yet reaching optimal performance across diverse benchmarks. Continued advancements in the direction of unified LVLMs for scalable vision-language modeling are anticipated. However, although developing LVLMs with strong capabilities brings significant advancements in AI, it also poses potential negative impacts. One concern is the risk of misuse, where the model could be employed for malicious purposes, such as generating misleading content that could manipulate public opinion or deceive individuals. Additionally, the model may inadvertently exacerbate biases present in the training data, leading to unfair or discriminatory outcomes in decision-making processes. ## References Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pp. 8076–8084, 2019. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022. Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Slav Petrov, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen, Emily Pitler, Timothy P. 
Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard, Paul Ronald Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, Ryan Doherty, Eli Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov, Ivo Danihelka, Becca Roelofs, Anaïs White, Anders Andreassen, Tamara von Glehn, Lakshman Yagati, Mehran Kazemi, Lucas Gonzalez, Misha Khalman, Jakub Sygnowski, and et al. Gemini: A family of highly capable multimodal models. CoRR, abs/2312.11805, 2023. doi: 10.48550/ARXIV.2312.11805. URL https://doi.org/10.48550/arXiv.2312.11805. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pp. 2425–2433, 2015. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023. Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. Explaining neural scaling laws. arXiv preprint arXiv:2102.06701, 2021. Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. 2023. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. Introducing our multimodal models, 2023. URL https://www.adept.ai/blog/fuyu-8b. Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusinol, Ernest Valveny, CV Jawahar, and Dimosthenis Karatzas. Scene text visual question answering. 
In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4291–4301, 2019. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020a. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 2020b. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. CoRR, 2023. Alejandro Hernández Cano, Matteo Pagliardini, Andreas Köpf, Kyle Matoba, Amirkeivan Mohtashami, Olivia Simin Fan, Axel Marmet, Deniz Bayazit, Igor Krawczuk, Zeming Chen, Francesco Salvi, Antoine Bosselut, and Martin Jaggi. epfllm megatron-lm, 2023. URL https://github.com/epfLLM/Megatron-LLM. Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for a lite vision-language model. arXiv preprint arXiv:2402.11684, 2024a. Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478, 2023a. Lin Chen, Jisong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. arXiv preprint arXiv:2311.12793, 2023b. 
Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024b. Yangyi Chen, Xingyao Wang, Manling Li, Derek Hoiem, and Heng Ji. Vistruct: Visual structural knowledge extraction via curriculum guided code-vision representation. In Proc. The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP2023), 2023c. Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, and Ajay Divakaran. Dress: Instructing large vision-language models to align and interact with humans via natural language feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14239–14250, 2024c. Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, and Ajay Divakaran. Measuring and improving chain-of-thought reasoning in vision-language models. In Proc. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL2024), 2024d. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. OpenCompass Contributors. Opencompass: A universal evaluation platform for foundation models. https: //github.com/open-compass/opencompass, 2023. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven C. H. Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. CoRR, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, and Jiawei Wang. 
Write and paint: Generative visionlanguage models are unified modal learners. In The Eleventh International Conference on Learning Representations, 2023. Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023. Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Bin Wang, Linke Ouyang, Songyang Zhang, Haodong Duan, Wenwei Zhang, Yining Li, et al. Internlm-xcomposer2-4khd: A pioneering large vision-language model handling resolutions from 336 pixels to 4k hd. arXiv preprint arXiv:2404.06512, 2024. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv:2010.11929 [cs], 2021a. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Iclr, 2021b. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320–335, 2022. Zhengxiao Du, Aohan Zeng, Yuxiao Dong, and Jie Tang. Understanding emergent abilities of language models from the loss perspective. arXiv preprint arXiv:2403.15796, 2024. Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp. 12873–12883. 
Computer Vision Foundation / IEEE, 2021. doi: 10. 1109/CVPR46437.2021.01268. URL https://openaccess.thecvf.com/content/CVPR2021/html/Esser_ Taming_Transformers_for_High-Resolution_Image_Synthesis_CVPR_2021_paper.html. Francis Ferraro, Nasrin Mostafazadeh, Lucy Vanderwende, Jacob Devlin, Michel Galley, Margaret Mitchell, et al. A survey of current datasets for vision and language research. arXiv preprint arXiv:1506.06833, 2015. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024. Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, Jianfeng Gao, et al. Vision-language pre-training: Basics, recent advances, and future trends. Foundations and Trends® in Computer Graphics and Vision, 14(3–4):163–352, 2022. Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023. Yuying Ge, Yixiao Ge, Ziyun Zeng, Xintao Wang, and Ying Shan. Planting a seed of vision in large language model. arXiv preprint arXiv:2307.08041, 2023. Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6904–6913, 2017. Sylvain Gugger, Lysandre Debut, Thomas Wolf, Philipp Schmid, Zachary Mueller, Sourab Mangrulkar, Marc Sun, and Benjamin Bossan. Accelerate: Training and inference at scale made simple, efficient and adaptable. 
https://github.com/huggingface/accelerate, 2022. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. Jinyi Hu, Yuan Yao, Chongyi Wang, Shan Wang, Yinxu Pan, Qianyu Chen, Tianyu Yu, Hanghao Wu, Yue Zhao, Haoye Zhang, et al. Large multilingual models pivot zero-shot multimodal learning across languages. arXiv preprint arXiv:2308.12038, 2023. Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, Xinrong Zhang, Zhen Leng Thai, Kai Zhang, Chongyi Wang, Yuan Yao, Chenyang Zhao, Jie Zhou, Jie Cai, Zhongwu Zhai, Ning Ding, Chao Jia, Guoyang Zeng, Dahai Li, Zhiyuan Liu, and Maosong Sun. Minicpm: Unveiling the potential of small language models with scalable training strategies. CoRR, abs/2404.06395, 2024a. doi: 10.48550/ARXIV.2404.06395. URL https://doi.org/10.48550/ arXiv.2404.06395. Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, et al. Minicpm: Unveiling the potential of small language models with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024b. Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 6700–6709, 2019. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023. Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizations via question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5648–5656, 2018. 
Kushal Kafle, Robik Shrestha, and Christopher Kanan. Challenges and prospects in vision and language research. Frontiers in Artificial Intelligence, 2:28, 2019. Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Ákos Kádár, Adam Trischler, and Yoshua Bengio. Figureqa: An annotated figure dataset for visual reasoning. arXiv preprint arXiv:1710.07300, 2017. Yasindu Kamizuru. Diagram image to text. URL https://huggingface.co/datasets/Kamizuru00/ diagram_image_to_text. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14, pp. 235–251. Springer, 2016. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In Proceedings of the IEEE Conference on Computer Vision and Pattern recognition, pp. 4999–5007, 2017. Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in neural information processing systems, 33:2611–2624, 2020. Jeonghwan Kim and Heng Ji. Finer: Investigating and enhancing fine-grained visual concept recognition in large vision language models. In arxiv, 2024. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. 
In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017. LAION. Laion-gpt-4v. URL https://huggingface.co/datasets/laion/gpt4v-dataset. Hugo Laurençon, Léo Tronchon, Matthieu Cord, and Victor Sanh. What matters when building vision-language models? arXiv preprint arXiv:2405.02246, 2024. Hugo Laurençon, Léo Tronchon, and Victor Sanh. Unlocking the conversion of web screenshots into html code with the websight dataset, 2024. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023a. Bohao Li, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, and Ying Shan. Seed-bench-2: Benchmarking multimodal large language models. CoRR, abs/2311.17092, 2023b. doi: 10.48550/ARXIV. 2311.17092. URL https://doi.org/10.48550/arXiv.2311.17092. Bohao Li, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, and Ying Shan. Seedbench: Benchmarking multimodal large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13299–13308, 2024a. Junnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. CoRR, 2023c. Lei Li, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, and Lingpeng Kong. Silkie: Preference distillation for large visual language models. arXiv preprint arXiv:2312.10665, 2023d. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 
Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023e. Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and Xiang Bai. Monkey: Image resolution and text label are important things for large multi-modal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26763–26773, 2024b. Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, and Song Han. Vila: On pre-training for visual language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26689–26699, 2024. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, 2014a. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context. In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V. Springer, 2014b. Ziyi Lin, Chris Liu, Renrui Zhang, Peng Gao, Longtian Qiu, Han Xiao, Han Qiu, Chen Lin, Wenqi Shao, Keqin Chen, et al. Sphinx: The joint mixing of weights, tasks, and visual embeddings for multi-modal large language models. arXiv preprint arXiv:2311.07575, 2023. Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 11:635–651, 2023a. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023b. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. CoRR, 2023c. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26296–26306, 2024a. Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024b. URL https://llava-vl.github.io/ blog/2024-01-30-llava-next/. Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of ocr in large multimodal models. arXiv preprint arXiv:2305.07895, 2023d. Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Yaofeng Sun, et al. Deepseek-vl: towards real-world vision-language understanding. arXiv preprint arXiv:2403.05525, 2024. Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. arXiv preprint arXiv:2110.13214, 2021. Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. In NeurIPS, 2022a. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022b. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. lz. Detailed caption. URL https://huggingface.co/datasets/echo840/Detailed_Caption. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. 
In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pp. 3195–3204, 2019. Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark for question answering about charts with visual and logical reasoning. arXiv preprint arXiv:2203.10244, 2022. Minesh Mathew, Viraj Bagal, Rubèn Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. Infographicvqa. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1697–1706, 2022. Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. Ocr-vqa: Visual question answering by reading text in images. In 2019 international conference on document analysis and recognition (ICDAR), pp. 947–952. IEEE, 2019. Jason Obeid and Enamul Hoque. Chart-to-text: Generating natural language descriptions for charts by adapting the transformer model. arXiv preprint arXiv:2010.09142, 2020. OpenCompass. Opencompass. URL https://huggingface.co/spaces/opencompass/open_vlm_ leaderboard. Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, and Furu Wei. Beit v2: Masked image modeling with vector-quantized visual tokenizers. CoRR, abs/2208.06366, 2022. doi: 10.48550/ARXIV.2208.06366. URL https://doi.org/10.48550/arXiv.2208.06366. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pp. 2641–2649, 2015. Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari. Connecting vision and language with localized narratives. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16, pp. 647–664. Springer, 2020. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 
Language models are unsupervised multitask learners. 2019. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 3505–3506, 2020. Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring models and data for image question answering. Advances in neural information processing systems, 28, 2015. Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. arXiv preprint arXiv:2104.10972, 2021a. Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses, 2021b. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open largescale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278–25294, 2022. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. 
In European Conference on Computer Vision, pp. 146–162. Springer, 2022. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2018a. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2556–2565, 2018b. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019. Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pp. 742–758. Springer, 2020. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8317–8326, 2019. Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. https://www.cerebras.net/blog/ slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama, 2023. URL https: //huggingface.co/datasets/cerebras/SlimPajama-627B. Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. Pandagpt: One model to instructionfollow them all. arXiv preprint arXiv:2305.16355, 2023. 
Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative pretraining in multimodality. arXiv preprint arXiv:2307.05222, 2023a. Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented rlhf. arXiv preprint arXiv:2309.14525, 2023b. Benny J Tang, Angie Boggust, and Arvind Satyanarayan. Vistext: A benchmark for semantically rich chart captioning. arXiv preprint arXiv:2307.05356, 2023. Chameleon Team. Chameleon: Mixed-modal early-fusion foundation models. CoRR, abs/2405.09818, 2024. doi: 10.48550/ARXIV.2405.09818. URL https://doi.org/10.48550/arXiv.2405.09818. MLC team. MLC-LLM, 2023. URL https://github.com/mlc-ai/mlc-llm. Techcrunch. Etched is building an AI chip that only runs one type of model. https://techcrunch.com/2024/06/25/etched-is-building-an-ai-chip-that-only-runs-transformer-models/. Accessed: 2024-07-06. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. Shagun Uppal, Sarthak Bhagat, Devamanyu Hazarika, Navonil Majumder, Soujanya Poria, Roger Zimmermann, and Amir Zadeh. Multimodal research in vision and language: A review of current and emerging trends. Information Fusion, 77:149–171, 2022. Junke Wang, Lingchen Meng, Zejia Weng, Bo He, Zuxuan Wu, and Yu-Gang Jiang. To see is to believe: Prompting gpt-4v for better visual instruction tuning. arXiv preprint arXiv:2311.07574, 2023a. Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. arXiv preprint arXiv:2402.14804, 2024a. 
Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. OFA: unifying architectures, tasks, and modalities through a simple sequence-tosequence learning framework. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA. Pmlr, 2022a. Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, and Furu Wei. Image as a foreign language: Beit pretraining for all vision and vision-language tasks. CoRR, abs/2208.10442, 2022b. doi: 10.48550/ARXIV.2208.10442. URL https://doi.org/10.48550/arXiv.2208.10442. Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji. Executable code actions elicit better llm agents. arXiv preprint arXiv:2402.01030, 2024b. Zhenhailong Wang, Ansel Blume, Sha Li, Genglin Liu, Jaemin Cho, Zineng Tang, Mohit Bansal, and Heng Ji. Paxion: Patching video-language foundation models with action knowledge. In Proc. 2023 Conference on Neural Information Processing Systems (NeurIPS2023) [Spotlight Paper], 2023b. Zhenhailong Wang, Joy Hsu, Xingyao Wang, Kuan-Hao Huang, Manling Li, Jiajun Wu, and Heng Ji. Text-based reasoning about vector graphics. In arxiv, 2024c. Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. arXiv preprint arXiv:2108.10904, 2021. Lai Wei, Zihao Jiang, Weiran Huang, and Lichao Sun. Instructiongpt-4: A 200-instruction paradigm for fine-tuning minigpt-4. arXiv preprint arXiv:2308.12067, 2023. Chris Wendler. Renderedtext. URL https://huggingface.co/datasets/wendlerc/RenderedText. Shujin Wu, Yi R Fung, Sha Li, Yixin Wan, Kai-Wei Chang, and Heng Ji. Macaroon: Training vision-language models to be your engaged partners. arXiv preprint arXiv:2406.14137, 2024. 
Ruyi Xu, Yuan Yao, Zonghao Guo, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, Maosong Sun, and Gao Huang. Llava-uhd: an lmm perceiving any aspect ratio and high-resolution images. arXiv preprint arXiv:2403.11703, 2024a. Zhiyang Xu, Chao Feng, Rulin Shao, Trevor Ashby, Ying Shen, Di Jin, Yu Cheng, Qifan Wang, and Lifu Huang. Vision-flan: Scaling human-labeled tasks in visual instruction tuning. arXiv preprint arXiv:2402.11690, 2024b. Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Anwen Hu, Haowei Liu, Qi Qian, Ji Zhang, and Fei Huang. mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13040–13051, 2024. Qiying Yu, Quan Sun, Xiaosong Zhang, Yufeng Cui, Fan Zhang, Yue Cao, Xinlong Wang, and Jingjing Liu. Capsfusion: Rethinking image-text data at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14022–14032, 2024. Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. arXiv preprint arXiv:2308.02490, 2023. Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, et al. Advancing llm reasoning generalists with preference trees. arXiv preprint arXiv:2404.02078, 2024. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. arXiv preprint arXiv:2311.16502, 2023. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019. Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. 
What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023. Duzhen Zhang, Yahan Yu, Chenxing Li, Jiahua Dong, Dan Su, Chenhui Chu, and Dong Yu. Mm-llms: Recent advances in multimodal large language models. arXiv preprint arXiv:2401.13601, 2024a. Pan Zhang, Xiaoyi Dong, Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Shuangrui Ding, Songyang Zhang, Haodong Duan, Wenwei Zhang, Hang Yan, Xinyue Zhang, Wei Li, Jingwen Li, Kai Chen, Conghui He, Xingcheng Zhang, Yu Qiao, Dahua Lin, and Jiaqi Wang. Internlm-xcomposer: A vision-language large model for advanced text-image comprehension and composition. arXiv preprint arXiv:2309.15112, 2023a. Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, and Tong Sun. Llavar: Enhanced visual instruction tuning for text-rich image understanding. arXiv preprint arXiv:2306.17107, 2023b. Yongting Zhang, Lu Chen, Guodong Zheng, Yifeng Gao, Rui Zheng, Jinlan Fu, Zhenfei Yin, Senjie Jin, Yu Qiao, Xuanjing Huang, Feng Zhao, Tao Gui, and Jing Shao. Spa-vl: A comprehensive safety preference alignment dataset for vision language model, 2024b. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing visionlanguage understanding with advanced large language models. CoRR, 2023. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4995–5004, 2016. ## Appendix A Details Of Instruction-Tuning Data Curation The curated instruction fine-tuning data mixture is shown in Tab. 3. Each category is chosen to address specific challenges and capabilities of SOLO. 
For instance, datasets like UltraInteract-SFT (Yuan et al., 2024) and CodeAct-General (Wang et al., 2024b) enable the refinement of language processing and reasoning abilities, while visually rich datasets such as LVIS-Instruct4V (Wang et al., 2023a) and Localized Narratives (Pont-Tuset et al., 2020) enhance the model's basic image understanding and recognition abilities. Scientific document datasets like TQA (Kembhavi et al., 2017) are included to bolster the model's ability to parse and reason with academic visual information. Furthermore, OCR and text-heavy image datasets such as TextCaps (Sidorov et al., 2020) and OCR-VQA (Mishra et al., 2019) provide a crucial source for the model's ability to interpret text within complex images. By selecting datasets with a broad range of complexities, sizes, and focuses, we ensure a robust fine-tuning process that prepares SOLO to handle real-world applications effectively, reflecting a deep and detailed understanding of both vision and language data. Additionally, we conduct a thorough manual inspection and comparisons of the fine-tuning datasets, employing random sampling techniques on some datasets such as DVQA (Kafle et al., 2018) and FigureQA (Kahou et al., 2017) to guarantee diversity and prevent data imbalance. ## B Stage-3 Annealing We directly perform instruction fine-tuning on the pre-trained SOLO finished at Stage-2 to understand the effect of the annealing stage. The results shown in Fig. 10a indicate that introducing an annealing stage to conclude pre-training can slightly promote the performance across all evaluation benchmarks. ## C Effectiveness Of Curated Data Mixture For Instruction Fine-Tuning We conduct an ablation study to validate the curated data mixture for instruction fine-tuning. The ablations included are: (1) Without GPT-4V Data: All data generated by GPT-4V, including detailed captions and instructional fine-tuning samples, is excluded from the fine-tuning mixture. 
(2) With Additional OCR Data: Additional OCR data from LLaVAR is incorporated into the fine-tuning mixture to enhance OCR capabilities, which are crucial for tasks that require extracting text information from charts. (3) With More GPT-4V Data: Data from GPT-4V used in the third stage of pre-training is added to the fine-tuning mixture. (4) Extended Training Duration: SOLO is trained for an additional epoch to investigate the effects of prolonged training. The results are presented in Fig. 10b. Our analysis indicates that incorporating additional OCR data does not significantly enhance performance in scientific document comprehension or visual mathematical reasoning tasks, which rely extensively on visual text understanding. This lack of improvement can be attributed to the discrepancy between general OCR data and the specific demands of scientific charts. Identifying effective methods for collecting OCR data pertinent to scientific chart comprehension remains a critical area for future research. Furthermore, incorporating this OCR data seems to adversely affect overall visual-language capabilities, as demonstrated by general benchmarks. Regarding the use of GPT-4V data, our findings suggest that a measured inclusion during the pre-training annealing stage enhances performance (see §B), whereas excessive incorporation during fine-tuning can hurt the overall performance. We also find that SOLO exhibits minimal performance decline when trained solely on existing supervised datasets, excluding all data generated by GPT-4V. This demonstrates that GPT-4V data is not essential for enhancing the core capabilities of SOLO. The results overall justify the choice of datasets included in the supervised fine-tuning data mixture. However, although we observe a continual performance improvement across training steps within one epoch (see Fig. 5d), prolonged training on repetitive samples could lead to overfitting and decreased performance. 
This suggests that while extended exposure to diverse training data generally enhances model performance, overfitting remains a critical challenge when models are exposed repeatedly to a limited data subset. Overall, the ablation study confirms the effectiveness of our curated data mixture.

![23_image_0.png](23_image_0.png)

Figure 10: The evaluation performance of various ablations to validate the effectiveness of Stage-3 pre-training and the fine-tuning data mixture. (a) The effectiveness of Stage-3 pre-training in priming SOLO for the instruction fine-tuning stage. (b) The ablation study of the fine-tuning data mixture.

![23_image_1.png](23_image_1.png)

Figure 11: The training curves and downstream performance evaluation of two variants of SOLO. We find that they show significant performance differences although achieving a similar loss on the instruction fine-tuning data mixture.

## D (Pre-)Training Loss On Vision-Language Data Is Not A Reliable Indicator Of Actual Performance

We find that neither the pre-training loss nor the instruction fine-tuning loss on vision-language data is a reliable estimator of LVLMs' actual performance. Support for this claim regarding the pre-training loss includes the observations detailed in Fig. 3, Fig. 4, and Fig. 5a. Despite achieving similar language modeling losses when conditioned on visual inputs, LVLMs exhibit markedly different behaviors and performance across various downstream tasks. This contrasts with findings from pure language modeling, where pre-training loss strongly correlates with downstream task performance (Du et al., 2024). We also demonstrate that the loss associated with the instruction fine-tuning data mixture does not reliably indicate task performance. We train a variant of SOLO with a learning rate of 1e-4, deviating from the prescribed rate of 1e-5 in our recipe. We show the training curves (Fig. 
11b) of these two variants. The two variants exhibit similar training behaviors and losses on the instruction fine-tuning data mixture, yet they display significant performance disparities in downstream evaluation benchmarks.

| Category | Dataset | #Sample |
|---|---|---|
| Language-Only | CodeAct-General (Wang et al., 2024b) | 71K |
| | UltraInteract-SFT (Yuan et al., 2024) | 288K |
| | UltraChat (Ding et al., 2023) | 207K |
| Detailed Image Caption | LVIS-Instruct4V (Wang et al., 2023a) | 223K |
| | ShareGPT4V (Chen et al., 2023b) | 102K |
| | LAION-GPT4V (LAION) | 12K |
| | Localized Narratives (Pont-Tuset et al., 2020) | 200K |
| | VSR (Liu et al., 2023a) | 2K |
| Scientific Document | TQA (Kembhavi et al., 2017) | 2K |
| | ScienceQA (Lu et al., 2022a) | 5K |
| Table, Document, and Chart | IconQA (Lu et al., 2021) | 27K |
| | TabMWP (Lu et al., 2022b) | 23K |
| | ChartQA (Masry et al., 2022) | 18K |
| | VisText (Tang et al., 2023) | 7K |
| | Chart2Text (Obeid & Hoque, 2020) | 27K |
| | DVQA (Kafle et al., 2018) | 20K |
| | FigureQA (Kahou et al., 2017) | 20K |
| OCR and Text-Rich Images | Diagram Image-to-Text (Kamizuru) | 300 |
| | Infographic VQA (Mathew et al., 2022) | 2K |
| | ST-VQA (Biten et al., 2019) | 17K |
| | TextCaps (Sidorov et al., 2020) | 22K |
| | TextVQA (Singh et al., 2019) | 22K |
| | OCR-VQA (Mishra et al., 2019) | 17K |
| | Rendered-Text (Wendler) | 10K |
| General VQA | HatefulMemes (Kiela et al., 2020) | 8.5K |
| | OK-VQA (Marino et al., 2019) | 9K |
| | AOK-VQA (Schwenk et al., 2022) | 16.5K |
| | TallyQA (Acharya et al., 2019) | 100K |
| | Visual7W (Zhu et al., 2016) | 14K |
| | COCO-QA (Ren et al., 2015) | 46K |
| | VQAV2 (Goyal et al., 2017) | 82K |
| | GQA (Hudson & Manning, 2019) | 72K |

Table 3: Summary of datasets used in the supervised fine-tuning stage.
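The curated mixture in Tab. 3 can be read as a set of sampling weights over datasets. Below is a minimal sketch of such a proportional sampler, assuming examples are drawn with probability proportional to the counts in Tab. 3 (only a subset of rows is shown; the sampler itself is our illustration, not the paper's training code):

```python
import random

# A subset of per-dataset example counts from Tab. 3 (illustrative only).
mixture = {
    "UltraInteract-SFT": 288_000,
    "LVIS-Instruct4V": 223_000,
    "Localized Narratives": 200_000,
    "TallyQA": 100_000,
    "VQAv2": 82_000,
}

def sample_dataset(mixture, rng=random):
    """Pick a dataset with probability proportional to its example count."""
    names = list(mixture)
    weights = [mixture[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Effective mixture proportions implied by the counts.
total = sum(mixture.values())
probs = {name: count / total for name, count in mixture.items()}
```

Under proportional sampling, the largest sources dominate, so manual subsampling of individual datasets (as done for DVQA and FigureQA in Appendix A) is one way to keep any single source from skewing the mixture.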
Overall, our analysis highlights the need to identify a dependable metric for evaluating LVLMs with the unified architecture in pre-training, particularly for establishing scaling laws in future research.

## E Related Work About LVLM Evaluation Benchmarks

The progress of LVLMs is guided and measured by the continuous development of evaluation benchmarks (Ferraro et al., 2015; Kafle et al., 2019; Gan et al., 2022; Chen et al., 2024d). Initially, evaluation primarily concentrated on fundamental visual-language skills, such as image captioning (Lin et al., 2014b; Plummer et al., 2015), basic visual information recognition (Antol et al., 2015; Goyal et al., 2017), compositional visual understanding (Hudson & Manning, 2019), and knowledge reasoning based on visual information (Marino et al., 2019; Schwenk et al., 2022). Current benchmarks are advancing to encompass more intricate capabilities, requiring LVLMs to perform detailed visual analysis and complex reasoning (Uppal et al., 2022; Zhang et al., 2024a). These benchmarks range from general assessments across various domains and skills (Li et al., 2024a; Fu et al., 2024; Chen et al., 2024b; Yu et al., 2023) to specific tests targeting particular abilities, such as scientific document understanding (Kembhavi et al., 2016; Lu et al., 2022a), mathematical reasoning (Lu et al., 2023; Wang et al., 2024a), multi-discipline understanding and reasoning (Yue et al., 2023; Wu et al., 2024), hallucination (Li et al., 2023e), and OCR ability (Liu et al., 2023d). In this work, we select the advanced general and skill-specific benchmarks for evaluation.

![25_image_0.png](25_image_0.png)

Figure 12: Stage 2 language modeling loss when trained on a mixture with different quantities of text data. 1x reflects the data mixture in Tab. 1; 2x and 3x represent mixtures with 2 or 3 times more text data compared to 1x while keeping the amount of vision data unchanged.
## F Balancing Vision And Language Capabilities During Pre-Training Is Challenging

We find that on a 7B scale, balancing vision and text capabilities can be challenging. Specifically, we observe that during Stage-2 pre-training, despite the inclusion of text-only pre-training data (§3.2.1) to maintain the language capability of the original LLM, the language modeling loss on the language-only pre-training subset still steadily increases as training continues. In Fig. 12, we introduce a setting where we gradually increase the proportion of text-only data per batch (Tab. 1) and monitor the language modeling loss for text. The results suggest that augmenting text data proportions does not alleviate the rise in language modeling loss, indicating challenges in achieving balanced vision and text capabilities in a 7B-scale model. To further understand the degradation in language ability of SOLO, we evaluate SOLO on standard LLM evaluation benchmarks, including MMLU (Hendrycks et al., 2020), GSM8k (Cobbe et al., 2021), HellaSwag (Zellers et al., 2019), and RACE (Lai et al., 2017). Our analysis includes comparisons with the backbone LLM of SOLO, specifically Mistral-7B-v0.1-base, as well as Mistral-7B-v0.1-Instruct (see Fig. 13). We observe a decline in language capabilities, particularly in knowledge-intensive benchmarks such as MMLU. There are two potential reasons: (1) Integrating vision capabilities may compromise language performance. (2) The quality of Mistral's pre-training corpus is better than the open-source SlimPajama we employ. Overall, the current results indicate a limitation in the current version of SOLO, as effective performance in real-world vision-language tasks often necessitates strong foundational language capabilities, including knowledge and reasoning. Thus, we plan to maintain the language capabilities of SOLO in the upcoming version by enriching the pre-training dataset with a higher-quality text corpus and increasing the proportion of text data. 
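The 1x/2x/3x settings behind Fig. 12 can be expressed as per-batch sample counts in which only the text-only portion scales. A minimal sketch follows; the concrete counts (48 vision-language samples, 16 text-only samples at 1x) are hypothetical placeholders, since the paper specifies ratios rather than these exact numbers:

```python
def batch_counts(vision_per_batch: int, text_per_batch_1x: int, multiplier: int):
    """Per-batch sample counts for the Fig. 12 setting: the vision-language
    portion is held fixed while the text-only portion is scaled by the
    multiplier (1x, 2x, 3x). Counts are illustrative assumptions."""
    return vision_per_batch, text_per_batch_1x * multiplier

# Vision data is unchanged across settings; only the text portion grows.
settings = {f"{m}x": batch_counts(48, 16, m) for m in (1, 2, 3)}
```

The observation in Fig. 12 is that increasing the multiplier this way does not stop the text-only language modeling loss from rising, which motivates the planned higher-quality text corpus.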
![25_image_1.png](25_image_1.png)

Figure 13: The evaluation of language capability of SOLO and the base Mistral LLM.
# Multitask Learning Can Improve Worst-Group Outcomes

Atharva Kulkarni∗ (atharvak@cs.cmu.edu)
Language Technologies Institute, School of Computer Science, Carnegie Mellon University

Lucio M. Dery∗ (ldery@cs.cmu.edu)
Computer Science Department, School of Computer Science, Carnegie Mellon University

Amrith Setlur (asetlur@cs.cmu.edu)
Machine Learning Department, School of Computer Science, Carnegie Mellon University

Aditi Raghunathan (raditi@cs.cmu.edu)
Computer Science Department, School of Computer Science, Carnegie Mellon University

Ameet Talwalkar (atalwalk@cs.cmu.edu)
Machine Learning Department, School of Computer Science, Carnegie Mellon University

Graham Neubig (gneubig@cs.cmu.edu)
Language Technologies Institute, School of Computer Science, Carnegie Mellon University

Reviewed on OpenReview: *https://openreview.net/forum?id=sPlhAIp6mk*

## Abstract

In order to create machine learning systems that serve a variety of users well, it is vital to not only achieve high average performance but also ensure equitable outcomes across diverse groups. However, most machine learning methods are designed to improve a model's average performance on a chosen end task without consideration for their impact on worst group error. Multitask learning (MTL) is one such widely used technique. In this paper, we seek not only to understand the impact of MTL on worst-group accuracy but also to explore its potential as a tool to address the challenge of group-wise fairness. We primarily consider the standard setting of fine-tuning a pre-trained model, where, following recent work (Gururangan et al., 2020; Dery et al., 2023), we multitask the end task with the pretraining objective constructed from the end task data itself. 
In settings with few or no group annotations, we find that multitasking often, but not consistently, achieves better worst-group accuracy than Just-Train-Twice (JTT; Liu et al. (2021)) - a representative distributionally robust optimization (DRO) method. Leveraging insights from synthetic data experiments, we propose to modify standard MTL by regularizing the joint multitask representation space. We run a large number of fine-tuning experiments across computer vision and natural language processing datasets and find that our regularized MTL approach consistently outperforms JTT on both average and worst-group outcomes. Our official code can be found here: https://github.com/atharvajk98/MTL-group-robustness. ## 1 Introduction As machine learning systems exert ever-increasing influence on the real world, it is paramount that they not only perform well on aggregate but also exhibit equitable outcomes across diverse subgroups characterized by attributes like race (Buolamwini & Gebru, 2018; Liang et al., 2021), gender (Buolamwini & Gebru, 2018; Srinivasan & Bisk, 2022) and geographic location (Jurgens et al., 2017; De Vries et al., 2019; Ayush et al., 2021). Therefore, it is important to understand the impact of widely-used machine learning techniques with respect to these desiderata. Multitask learning (MTL) (Caruana, 1997; Baxter, 2000; Ruder et al., 2019; Dery et al., 2021a) is one example of such a technique that features prominently in machine learning practitioners' toolbox for improving a model's aggregate performance. However, the effect of multitask learning on worst group outcomes is underexplored. In this paper, we both study the impact of MTL, *as is*, on worst group error and also consider whether modifications can be made to improve its effect on worst group outcomes. 
Traditionally, the problem of worst-group generalization has been tackled explicitly via methods such as distributionally robust optimization (DRO) (Ben-Tal et al., 2013; Duchi & Namkoong, 2018; Hashimoto et al., 2018; Sagawa et al., 2020a). In contrast to traditional empirical risk minimization, DRO aims to minimize the worst-case risk over a predefined set of distributions (the *uncertainty set*). Defining the uncertainty set usually (but not always) requires access to group annotations. Since average-performance-based approaches like MTL are typically designed without consideration of group annotations, our focus will be on settings with limited-to-no group annotations. In these settings, there exist a number of *generalized Reweighting* (GRW) algorithms for distributional robustness (Nam et al., 2020; Liu et al., 2021; Zhang et al., 2022; Nam et al., 2022; Qiu et al., 2023; Zhai et al., 2023; Izmailov et al., 2022). These approaches minimize the weighted average risk based on the weight assigned to each example. One such widely used method is the Just-Train-Twice (JTT) algorithm (Liu et al., 2021), which performs two model training runs: one to identify poorly performing examples and another run that upweights these examples. For our empirical explorations, we take JTT as a representative DRO method and use it to provide reference performance to situate our study of multitask learning. We focus our investigations of multitask learning on the ubiquitous setting of fine-tuning a pre-trained model. Here, a common way to improve end task average performance is to multitask the end task with the pretraining objective constructed over the task data itself (Gururangan et al., 2020; Dery et al., 2021b). We intuit that this multitasking approach could improve robustness to worst group outcomes since previous work like Hendrycks et al. (2019; 2020); Mao et al. 
(2020) has established a favorable connection between pretraining and robustness (both adversarial and out-of-distribution). We test our intuition by conducting preliminary experiments across a pair of computer vision and natural language tasks in two settings: one with limited group annotations and the other with none. Initial results (Table 1) reveal that multitasking shows promise in that it can improve worst group outcomes over ERM and JTT. However, these improvements are not consistent. Therefore, we are spurred to consider modifications to make it a more competitive tool against worst group outcomes.

Table 1: Standard multitasking improves worst group outcomes over ERM and JTT **but not** consistently. Experimental details can be found in Section A.1. Worst-group accuracies are reported as mean±std.

| Dataset | Method | No Group Labels (Worst-Group Acc) | Val Group Labels (Worst-Group Acc) |
|-------------|----------------|-----------------------------------|------------------------------------|
| Waterbirds | ERM | 80.1±4.6 | 85.4±1.4 |
| Waterbirds | JTT | 82.1±1.2 | 85.9±2.5 |
| Waterbirds | (MTL) ERM+MIM | 80.1±4.6 | 85.3±2.4 |
| Civil-Small | ERM | 51.6±5.6 | 67.4±2.1 |
| Civil-Small | JTT | 52.5±5.2 | 68.0±1.8 |
| Civil-Small | (MTL) ERM+MLM | 58.3±6.6 | 68.5±0.4 |

In order to build intuition about how to adapt MTL to target the worst group error, we conduct controlled experiments on two-layer linear models trained from scratch on synthetic data. We borrow the synthetic data setup introduced by Sagawa et al. (2020b) in which training data consists of two majority groups, where spurious features (features that are not required to robustly solve the end task) are predictive of the end task output, and two minority groups, where spurious features are uncorrelated with the output. Sagawa et al. (2020b) demonstrated that under certain conditions on the generative distribution of the input features, linear models trained on such data would provably rely on spurious features and thus, have poor worst group error. Working with this simplified setup where worst-group outcomes are easily inducible allows us to study MTL's effects more incisively. 
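The worst-group accuracy reported in Table 1 (and throughout the paper) is simply the minimum per-group accuracy, where each group is an (attribute, label) pair. A minimal sketch (function name and toy data are ours, purely for illustration):

```python
from collections import defaultdict

def worst_group_accuracy(preds, labels, attrs):
    """Worst-group accuracy: the minimum accuracy over groups g = (attribute, label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for p, y, s in zip(preds, labels, attrs):
        g = (s, y)                 # group is the (spurious attribute, label) pair
        total[g] += 1
        correct[g] += int(p == y)
    return min(correct[g] / total[g] for g in total)

# toy example: group ("a", 1) is perfectly predicted, group ("b", 0) is 50% correct
preds  = [1, 1, 0, 1]
labels = [1, 1, 0, 0]
attrs  = ["a", "a", "b", "b"]
print(worst_group_accuracy(preds, labels, attrs))  # 0.5
```

Note that average accuracy on this toy example is 0.75, so the worst-group metric is strictly more pessimistic whenever groups are uneven in difficulty.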
We instantiate reconstruction from noised input as our auxiliary task to perform multitask learning in the synthetic setup. This choice is partially informed* by the fact that many pre-training objectives like masked language modeling (MLM) (Devlin et al., 2018) and masked image modeling (He et al., 2022; Tong et al., 2022) are based on input reconstruction. When training solely on this auxiliary task, we uncover that regularizing the pre-output layer of the model is critical for ensuring that the model upweights the core features (features required to robustly solve the end task) over the spurious ones. This leads us to the following recipe for improving worst-group error: regularized multitask learning of the end task with the (appropriately chosen) auxiliary objective. Through a battery of experiments across natural language processing (NLP) and computer vision (CV) datasets, we demonstrate that multitasking the end task with the pre-training objective along with ℓ1 regularization on the shared, pre-prediction layer activations is competitive when pitted against state-of-the-art DRO approaches like JTT (Liu et al., 2021) and Bitrate-Constrained DRO (Setlur et al., 2023). Specifically, in settings where only validation group annotations are available, regularized MTL outperforms JTT and BR-DRO on 3/3 and 2/3 datasets, respectively. Our approach improves worst-group accuracy over ERM (by as much as ∼ 4%) and JTT (by ∼ 1%) in settings where group annotations are completely unavailable. Moreover, regularized MTL consistently outperforms both ERM and JTT on average performance, regardless of whether group annotations are available or not. Thus, within the prevailing framework of utilizing pre-trained models for downstream fine-tuning, our results demonstrate that regularized multitask learning can be a simple yet versatile and robust tool for improving both average and worst-group outcomes. 
## 2 Informal Motivation For Our Regularized MTL Method

Problem Setup / Preliminaries: Let each input example x ∈ X have a classification label y ∈ Y and a demographic attribute s ∈ S. Each group g = (s, y) ∈ G is defined by the label y and the attribute s, such that s spuriously correlates with y. Thus, the sample space of G is the Cartesian product of Y and S. Our goal is to learn a model that minimizes the worst-group error. We evaluate the models based on their worst-group accuracy (WGA), i.e., the minimum predictive accuracy of our model across all groups. We are interested in the setting where the spurious attribute s, and consequently, the group identity g, are unavailable (or available to a very limited degree) at training time, as annotating spurious attributes is typically expensive.

Why would we expect multitask learning to help mitigate worst group outcomes? It would be naive to assume that multitasking the end task with any auxiliary task would prevent poor group outcomes. In order to better understand intuitively which auxiliary tasks may be helpful, we first provide an example using the data generation process and linear model setup presented in Sagawa et al. (2020b)'s work on the effect of spurious and core features on worst-group accuracy.

When do models incur high worst group error? Sagawa et al. (2020b) describe a simple data-generating distribution that defines, for each example, a label y ∈ {−1, 1}, a spurious attribute s ∈ {−1, 1}, and features x. 
The features are described as either core features x_core if they are associated with the label y, or spurious features x_spur if they are associated with the spurious attribute s:

$$x_{\mathrm{core}}\mid y\ \sim\ {\cal N}\left(y\mathbf{1},\ \sigma_{\mathrm{core}}^{2}I_{d_{c}}\right),\qquad x_{\mathrm{spur}}\mid s\ \sim\ {\cal N}\left(s\mathbf{1},\ \sigma_{\mathrm{spur}}^{2}I_{d_{s}}\right),\qquad x=[x_{\mathrm{core}};x_{\mathrm{spur}}]\in\mathbb{R}^{d},\quad d=d_{c}+d_{s}\tag{1}$$

We can then define a linear model, parameterized by wˆ, that predicts the label given the features:

$${\hat{y}}^{(i)}={\hat{\mathbf{w}}}\cdot x^{(i)}\tag{2}$$

Note that because the core features x_core are the ones associated with the label to be predicted, they are the ones that the model *should* use in order to attain high predictive accuracy.

*We will delve deeper into other motivations in Section 2.

The cross-product of the space of possible labels y = ±1 and spurious attributes s = ±1 divides the samples generated from this distribution into four *groups*. When some groups are more frequent than others in the training data, a correlation between {y, s} is created. Further still, in the presence of the above correlation, if the spurious features have lower variance with respect to the data generating process (Equation 1), i.e., σ²_spur ≤ σ²_core, linear models will tend to rely more on (assign a higher weight to) the spurious features over the core ones (Sagawa et al., 2020b). This learned reliance on the spurious features - instead of the core features that are truly predictive of the label - results in poor worst-group error.

Why does reconstruction help? Considering the above, for an auxiliary task to be helpful, it should discourage the model from using spurious features by showing a stronger preference for core features in exactly the case when σ²_spur ≤ σ²_core. 
In this paper, we argue that one class of tasks that fulfills this criterion are *reconstruction tasks*, where we predict original input features from noised versions. For instance, in the example above, if we add noise with a constant variance of σ²_noise over each dimension, it results in noised inputs that have variances σ̃²_spur = σ²_spur + σ²_noise ≤ σ̃²_core = σ²_core + σ²_noise per spurious and core feature dimension, respectively. Under the assumption that both true labels y = 1 and y = −1 are equally probable, and in the simplest case where we are reconstructing features independently of each other with a linear predictor x̂_i = w_i x̃_i (where x̃ is the noised input), the Bayes optimal weight on a feature i would be (see Appendix A.3 for the full proof):

$$\mathbf{w}_{i}^{\mathrm{{bayes}}}={\frac{\sigma_{i}^{2}+0.5\left(\mu_{i|y=1}^{2}+\mu_{i|y=-1}^{2}\right)}{\sigma_{i}^{2}+0.5\left(\mu_{i|y=1}^{2}+\mu_{i|y=-1}^{2}\right)+\sigma_{\mathrm{noise}}^{2}}}\tag{3}$$

where µ_{i|y=±1} are the per-dimension means from Equation 1. Note that w^bayes_i is larger for dimensions with higher variances σ²_i, assuming µ²_{i|y=±1} are symmetric across core and spurious features (i.e., for all i). Thus, this reconstruction task places more weight on the core features in exactly the setting where a linear predictor for the end task would prefer to use the spurious features. Note that for this preference of the core features to be effectively realized, the auxiliary task needs to be sufficiently up-weighted, but not so much so that the end task is not learned at all.

Why is regularization necessary? Given an auxiliary task with the above property of preferring core to spurious features (under σ²_spur ≤ σ²_core), a model with sufficient capacity can still rely on spurious features for solving the end task. 
We can incentivize the model to mostly use core features by applying sufficient regularization to the parts of the model that are shared between the two tasks (such as shared feature extractors). The restricted capacity encourages the model to rely on features that would cause it to do well on **both** tasks, which would be the core features. Based on the intuition established in this section, we propose a simple yet effective method for improving worst-group outcomes: *Multitasking the end task with the pre-training objective - which tends to be a reconstruction task - while regularizing the shared (pre-prediction) layer.* In Section 3, we will test this intuition through synthetic data experiments, and in Sections 4 and 5, we demonstrate its empirical efficacy through natural data experiments.

## 3 Synthetic Data Experiments

We initiate our investigation with an empirical study in a simplified context, involving the training of a two-layer linear model on synthetic data. This exploration is designed to substantiate the informal intuition introduced in Section 2 through concrete empirical findings.

## 3.1 Data Generating Distribution

We base our experiment on the data generation distribution from Equation 1, where features are divided into core and spurious ones. As an instantiation, we consider the end task data defined by T_end = {(x_i, y_i)}_{i∈[N]} where we have N total samples. Here d_c = 1; d_s = 1 =⇒ d = 2. The data is dominated by samples where {s = y} and thus, we have two majority groups G_{s=y=1} and G_{s=y=−1}, each with n_maj/2 samples. The two minority groups are when {s = −y}, each with n_min/2 samples: G_{s=−y=1}, G_{s=−y=−1}. Due to the fact that n_maj > n_min, the attribute s is highly correlated with the label y in the training data and thus, is a spurious feature when considering the true data generation distribution. The end task is to predict the true label y_i from the given input data x_i. Figure 1 shows data sampled from the generative process described in Equation 1. 
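The generative process of Equation 1 can be sketched in a few lines of numpy, using the parameter values of this section (σ²_core = 0.6, σ²_spur = 0.1, n_maj = 900, n_min = 100); the function names are ours, purely for illustration:

```python
import numpy as np

def sample_group(n, y, s, var_core=0.6, var_spur=0.1, d_c=1, d_s=1, rng=None):
    """Sample n points from one group (y, s) of the distribution in Equation 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_core = rng.normal(y, np.sqrt(var_core), size=(n, d_c))  # x_core | y ~ N(y1, var_core I)
    x_spur = rng.normal(s, np.sqrt(var_spur), size=(n, d_s))  # x_spur | s ~ N(s1, var_spur I)
    x = np.hstack([x_core, x_spur])                           # x = [x_core; x_spur]
    return x, np.full(n, y)

def make_dataset(n_maj=900, n_min=100, seed=0):
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    # two majority groups (s = y) and two minority groups (s = -y)
    for y, s, n in [(1, 1, n_maj // 2), (-1, -1, n_maj // 2),
                    (1, -1, n_min // 2), (-1, 1, n_min // 2)]:
        x, lab = sample_group(n, y, s, rng=rng)
        xs.append(x)
        ys.append(lab)
    return np.vstack(xs), np.concatenate(ys)

X, y = make_dataset()
print(X.shape, y.shape)  # (1000, 2) (1000,)
```

Because n_maj ≫ n_min, the sign of the spurious coordinate matches y on 90% of the training points, which is exactly the correlation the end-task predictor latches onto.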
We produce 1000 samples in R² with σ²_core = 0.6 and σ²_spur = 0.1. n_min = 100 and n_maj = 900, making the spurious feature highly correlated with the true label.

![4_image_0.png](4_image_0.png)

Figure 1: Visualization of synthetic training data (1000 points).

![4_image_1.png](4_image_1.png)

Figure 2: Predictors learned when we train on the end task only. Examples visualized are the balanced test samples created from Equation 1.

We train on T_end only to confirm that the resulting model has poor worst-group outcomes. Since we will eventually be doing multitasking, we use a two-layer linear model where the first layer is a linear featurizer - that will eventually be shared between all tasks being multitasked - and the second layer is a prediction head dedicated to the end task. This shared-featurizer-but-separate-head architecture is common in modern multitask learning (Yu et al., 2020; Michel et al., 2021; Dery et al., 2021a). For simplicity, the featurizer layer f(·) is a diagonal linear function parameterized by a*:

$$f:\mathbb{R}^{d}\to\mathbb{R}^{d}\mid f_{(\mathbf{a})}(x)=\left(\mathbf{diag}(\mathbf{a})\right)x\tag{4}$$

And the final output prediction layer is given by

$$y_{\mathrm{pred}}^{\mathrm{end}}=\left(w^{\mathrm{end}}\right)^{T}f(x)=(w^{\mathrm{end}})^{T}{\bigg(}\mathbf{diag}(\mathbf{a}){\bigg)}x=\left({\hat{w}}^{\mathrm{end}}\right)^{T}x$$

Note that we have effectively parameterized a linear model with a decomposed formulation, which will be helpful once we proceed to multitasking. The end task loss is binary cross-entropy with ℓ2 regularization on w^end.

*the diagonal parameterization allows us to easily read off how much weight is assigned to core features versus spurious ones.

## 3.2 Training On The End Task Only

![5_image_0.png](5_image_0.png)

Figure 3: The ratio log(a_spur/a_core) for 2 (extreme) choices of ∥a∥1 across 4 hyperparameter settings (learning rate × batch size). 
![5_image_1.png](5_image_1.png)

Figure 4: Multitask learning architecture used in Section 3.4. We use a shared intermediate layer and two separate prediction heads for T_aux and T_end.

The end task loss is given by the following equation, where σ is the sigmoid function:

$$\mathcal{L}_{\text{end}}\left(w^{\text{end}},\mathbf{a}\right)=-\frac{1}{N}\sum_{(x_{i},y_{i})\in\mathbf{T}_{\text{end}}}\left[y_{i}\cdot\log\left(\sigma\left(y_{\text{pred}}^{\text{end}}\right)\right)+(1-y_{i})\cdot\log\left(1-\sigma\left(y_{\text{pred}}^{\text{end}}\right)\right)\right]+\frac{\lambda}{2}\|w^{\text{end}}\|^{2}\tag{5}$$

We fit the model solely to the end task by running batched stochastic gradient descent on L_end. We use a batch size of 64, a learning rate of 10⁻³, λ = 1, and 500 epochs. We use 100 generated points as validation data for model selection. As can be seen in Figure 2, training on the end task only can result in a predictor that achieves poor worst group error. This occurs even when varying the norm of the featurizer parameter a.

## 3.3 Training On Auxiliary Data Only

As motivated in Section 2, we proceed to introduce a reconstruction-based auxiliary task. The auxiliary task data is defined by T_aux = {(x̃_i, x_i)}_{i∈[M]} where we have M total samples. M unlabelled points (with respect to the end task) are taken from the distribution described by Equation 1. Noise of the form

$$\epsilon_{\mathrm{noise}}\ \sim\ {\mathcal N}\left(0,\ \sigma_{\mathrm{noise}}^{2}I_{d}\right)\quad|\quad{\tilde{x}}=x+\epsilon_{\mathrm{noise}}\tag{6}$$

is applied to each point. The task is to reconstruct x_i from x̃_i. Reusing the featurizer from Equation 4, we define the following prediction model:

$$x_{\mathrm{pred}}^{\mathrm{aux}}=\left(W^{\mathrm{aux}}\right)^{T}f_{(\mathbf{a})}({\tilde{x}})$$

W^aux parameterizes the auxiliary prediction head, which we regularize to {W^aux ∈ R^{d×d} | ∥W^aux∥²_F = 1}. 
Finally, our reconstruction loss is given by:

$${\cal L}_{\rm recon}\left(W^{\rm aux},\mathbf{a}\right)=\frac{1}{2M}\sum_{(\tilde{x}_{i},x_{i})\in{\bf T}_{\rm aux}}\|x_{i}-\left(W^{\rm aux}\right)^{T}f_{(\mathbf{a})}(\tilde{x}_{i})\|^{2}\tag{7}$$

Using the synthetic data instantiation in Figure 1, we apply noise from N(0, I₂), i.e., σ²_noise = 1, on each of the 1000 training points to get the training data for the auxiliary task*. We fit the model solely to the auxiliary task by running batched stochastic gradient descent on L_recon. We use learning rates in the set {10⁻², 10⁻³} and batch sizes in the set {64, 256}. We consider two cases where the intermediate layer a has low versus high capacity, as reflected in its ℓ1-norm. Low ℓ1-norm - ∥a∥1 = |a_spur| + |a_core| = 0.1 - means restricted capacity, since this constraint (along with ∥W^aux∥_F = 1) results in models that cannot fit the training data perfectly. High ℓ1-norm (∥a∥1 = 10) means that the model is expressive enough to perfectly fit the training data.

*Note that while we could generate more points for the auxiliary task, we would like to mimic the setting where we refrain from introducing external data (data beyond end task training data) since methods like JTT and BR-DRO do not utilize them.

Figure 3 provides insight into the learned intermediate layer in either case. When the model has enough capacity, there is no competition between the core and spurious features. This means solutions where the spurious feature is weighted more than the core feature are feasible as long as the core feature weight is enough to reconstruct the noised core features well. However, under restricted capacity, where the learned weights of the core and spurious features are in direct competition, the model has to put more weight on the core features in order to achieve a lower auxiliary loss (as motivated in Section 2). This can be seen in Figure 3. 
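As a numerical sanity check on the intuition from Section 2, Equation 3 can be evaluated with the synthetic parameters used here (σ²_core = 0.6, σ²_spur = 0.1, σ²_noise = 1, per-dimension means ±1); the helper below is our illustration, not the authors' code:

```python
def bayes_weight(var_i, mu_pos, mu_neg, var_noise):
    """Bayes-optimal linear reconstruction weight from Equation 3."""
    signal = var_i + 0.5 * (mu_pos**2 + mu_neg**2)
    return signal / (signal + var_noise)

w_core = bayes_weight(0.6, 1.0, -1.0, 1.0)  # core feature: higher variance
w_spur = bayes_weight(0.1, 1.0, -1.0, 1.0)  # spurious feature: lower variance
print(round(w_core, 3), round(w_spur, 3))  # 0.615 0.524
```

So the Bayes-optimal reconstructor puts more weight on the core feature (≈0.615 vs ≈0.524), exactly the opposite preference to the end-task linear predictor in this regime.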
Thus, for the auxiliary task to be effective at forcing a model to use core features over spurious ones, the model's capacity must be reasonably restricted.

## 3.4 Multitasking With Regularization

Given the findings from Section 3.3, we proceed to multitask L_end and L_recon along with regularization on the shared layer a. Let A(τ) = {a ∈ R^d | ∥a∥1 = τ} be a set of ℓ1-norm constrained vectors; we solve the following multitask optimization problem:

$$\tilde{W}^{\text{aux}},\tilde{w}^{\text{end}},\tilde{\mathbf{a}}=\operatorname*{argmin}_{\|W^{\text{aux}}\|_{F}^{2}=1,\ \mathbf{a}\in\mathbf{A}(\tau)}\ \alpha\cdot\mathcal{L}_{\text{recon}}\left(W^{\text{aux}},\mathbf{a}\right)+\mathcal{L}_{\text{end}}\left(w^{\text{end}},\mathbf{a}\right)\tag{8}$$

We implement the multitask model illustrated in Figure 4. We use the same set of hyper-parameters as used in Section 3.3 and perform joint stochastic gradient descent on both T_end and T_aux. When τ is chosen to be small enough, model capacity is restricted, and the model is forced to rely chiefly on the core features to do well on both the end and auxiliary tasks. Figure 5 evinces this. When the norm of the shared layer is high (∥a∥1 = 10), the end task can still predominantly rely on the spurious features, leading to poor worst group error. On the other hand, when model capacity is reasonably restricted by setting ∥a∥1 = 0.1, we see from Figure 5 (left) that we can achieve improved worst group accuracy. Thus, we can effectively leverage the reconstruction auxiliary task in this simplified setting by applying sufficient regularization to ensure improved worst-group outcomes.

Table 2: Summary of results from Sections 3.3 and 3.4. Regularized multitasking leads to improved worst-group outcomes. 
| Method | ∥a∥1 = 0.1 (Worst-Group Acc) | ∥a∥1 = 10 (Worst-Group Acc) |
|-----------------|------------------------------|-----------------------------|
| End task only | 64.15 | 48.30 |
| Regularized MTL | 94.02 | 0.0 |

![6_image_0.png](6_image_0.png)

Figure 5: Depicted are the learned half-spaces for the multitask model under τ = {0.1, 10} and α = 10. Restricting the capacity of the shared feature space is critical for multitasking to be effective for improving worst group error. Examples visualized are 1000 balanced test examples sampled from Equation 1.

## 4 Details For Natural Data Experiments

We have made a case for using regularized multitask learning to combat poor worst-group performance through empirical explorations in a simplified, synthetic setting. In this section, we review the experimental details for our investigations of tasks of more practical interest.

## 4.1 Datasets

We conduct experiments across three datasets. To relieve the burden of compute, we introduce a fourth dataset, a smaller, sub-sampled version of one of the original datasets for ablations.

1. **Waterbirds:** This image classification dataset was introduced by Sagawa et al. (2020a). The task is to distinguish between species of land and water birds. It consists of bird images sourced from the CUB dataset (Wah et al., 2011) and superimposed on land or water backgrounds from the Places dataset (Zhou et al., 2018). The label (type of bird) is spuriously correlated with the background, resulting in 4 groups. Since this is a small dataset (4795 train examples), we also use it for ablations.

2. **MultiNLI:** This is a natural language inference dataset. The task is to classify whether the second sentence is entailed by, contradicts, or is neutral with respect to the first sentence (Williams et al., 2018). Following Sagawa et al. (2020a), we utilize the presence of negation words as a spurious attribute, leading to the creation of a total of 6 groups.

3. **Civilcomments:** The Civilcomments dataset is a toxicity classification dataset that contains comments from online forums Borkan et al. 
(2019); Koh et al. (2021). Along with the toxicity label, each text is annotated with additional overlapping sub-group labels of 8 demographic identities: male, female, LGBTQ, Christian, Muslim, other religions, Black, and White. As per Koh et al. (2021) and Sagawa et al. (2020a), we define 16 overlapping groups by taking the Cartesian product of the binary toxicity label and each of the above 8 demographic identities.

4. **Civilcomments-small:** As Civilcomments is a large dataset of about 448000 datapoints, we create a sub-group stratified subset of 5% for conducting ablations and other detailed experiments. Our subset contains 13770, 2039, and 4866 datapoints in our train, validation, and test split, respectively.

## 4.2 Multitask Model And Training Details

We follow the parameter sharing paradigm (Ruder, 2017; Sener & Koltun, 2018) where both T_end and T_aux share the same model body, parameterized by θ_base. We instantiate task-specific heads, parameterized by θ_end and θ_aux, respectively. We introduce ℓ1 regularization to the last layer activations immediately before the per-task prediction heads*. Specifically, let h^end, h^aux ∈ R^d be the output representations generated by the base model, which are fed into their respective task-specific heads. Our final multitask learning objective is expressed as follows:

$$\mathcal{L}_{\rm final}=\mathcal{L}_{\rm end}+\alpha_{\rm aux}\cdot\mathcal{L}_{\rm aux}+\alpha_{\rm reg}\left(||h^{\rm end}||_{1}+||h^{\rm aux}||_{1}\right)\tag{9}$$

We cross-validate optimizing L_final with different weighting schemes. We choose α_aux and α_reg from the set {e⁻¹, e⁰, e¹}. Note that whilst we optimize L_final, we care only about improving worst-group error on T_end. We use the pretrained BERT_base (Devlin et al., 2018) and ViT_base (Dosovitskiy et al., 2020) as the shared base models for NLP and CV tasks, respectively. 
We leverage the base models' self-supervised pretraining objectives, namely, masked language modeling (MLM) and masked image modeling (MIM), for our auxiliary transfer task T_aux, as in Dery et al. (2021a). These auxiliary objectives are based on end-task data itself (unless specified otherwise). We do this to maintain an apples-to-apples comparison with our chosen baselines, which do not use external data. As Section 5 will show, we obtain performance improvements even in this setting. In Section 5.4, we show that our improvements when using task-only data are predicated on sufficient prior pre-training. More details on the multitask model and batching scheme are presented in A.1.

*In contrast to synthetic data experiments, where the norm constraint is applied to pre-prediction layer weights, in this section we directly apply the norm constraint to the features, indirectly constraining model weights. In the synthetic experiment, core and spurious features have a one-to-one mapping to model weights, enabling direct regularization of the features, but that is no longer possible in more complex models.

For training, we vary the fine-tuning learning rate within the ranges of {10⁻³, 10⁻⁴} for Waterbirds, and {10⁻⁴, 10⁻⁵} for the text datasets. We experiment with batch sizes in the set {4, 8, 16, 32}. We use the same batch sizes for T_end and T_aux. We train for 50 epochs for the NLP datasets and 200 epochs for Waterbirds, with an early stopping patience of 10, as per the check-pointing scheme explained in Section 4.2. We use the Adam optimizer for NLP datasets with decoupled weight decay regularization of 10⁻² (Loshchilov & Hutter, 2017). Consistent with recent studies on ViT (Dosovitskiy et al., 2020; Steiner et al., 2022), we use SGD with a momentum of 0.9 (Sutskever et al., 2013) to fine-tune Waterbirds. We run each hyperparameter configuration across 5 seeds and report the averaged results. 
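The objective in Equation 9 is straightforward to sketch. The snippet below assumes the per-task losses and the pre-head activations have already been computed by the shared encoder; all names are illustrative and not from the authors' code:

```python
import numpy as np

def regularized_mtl_loss(l_end, l_aux, h_end, h_aux, alpha_aux=1.0, alpha_reg=np.e**-1):
    """L_final = L_end + alpha_aux * L_aux + alpha_reg * (||h_end||_1 + ||h_aux||_1),
    i.e. Equation 9: the multitask loss plus l1 regularization on the shared,
    pre-prediction-head activations."""
    l1_penalty = np.abs(h_end).sum() + np.abs(h_aux).sum()
    return l_end + alpha_aux * l_aux + alpha_reg * l1_penalty

# toy values: activation l1-norms are 1.0 each
h_end = np.array([0.5, -0.5])
h_aux = np.array([1.0, 0.0])
loss = regularized_mtl_loss(l_end=0.7, l_aux=0.4, h_end=h_end, h_aux=h_aux,
                            alpha_aux=1.0, alpha_reg=1.0)
print(loss)  # 0.7 + 0.4 + (1.0 + 1.0) = 3.1
```

Penalizing activations rather than weights matches the footnote above: in a deep model there is no one-to-one mapping from features to weights, so the constraint is applied to the representations directly.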
We report the ERM, JTT, and groupDRO results for Civilcomments and MultiNLI from Idrissi et al. (2022), as the authors conducted extensive hyperparameter tuning across all these methods. However, since Idrissi et al. (2022) report results on Waterbirds using a ResNet-50 model (He et al., 2016) and our experiments employ ViT, we re-run all baselines using ViT with a consistent set of hyperparameters, as mentioned above.

Evaluation Details We assess all methods and datasets using two model selection strategies:

1. **Val-GP:** This strategy requires group annotations in the validation data during training. Here, we select the model based on the maximum worst-group accuracy on the validation data.

2. **No-GP:** This strategy requires no access to any group annotations during training. We select the model based on the average validation accuracy.

Baseline Methods Since we evaluate our method based on its ability to generalize to the worst-performing groups, we benchmark it against three popular methods found in the group generalization literature, plus an oracle. These methods either directly or indirectly optimize for worst-group improvements.

1. **Empirical Risk Minimization (ERM):** This is the standard approach of minimizing the average loss over all the training data. No group information is used during training except when the **Val-GP** strategy is used for model selection.

2. **Just Train Twice (JTT):** It presents a two-step approach for worst group generalization (Liu et al., 2021). JTT first trains a standard ERM model for T epochs to identify misclassified datapoints. Then, a second model is trained on a reweighted dataset constructed by upweighting the misclassified examples by α_up. It does not use group information during training except for the **Val-GP** strategy.

3. **Bit-rate Constrained DRO (BR-DRO):** Traditionally, in the two-player formulation of DRO, the adversary can use complex reweighting functions, resulting in overly pessimistic solutions. 
In contrast, BR-DRO (Setlur et al., 2023) constrains the adversary's complexity based on information theory under a data-independent prior. While BR-DRO offers weaker robustness without performance guarantees for arbitrary reweighting, it is less pessimistic and suitable for simpler distribution shifts, characterized by a reweighting function contained in a simpler complexity class. BR-DRO does not use group information during training except for the **Val-GP** setting.

4. **Group-DRO:** Group distributionally robust optimization minimizes the maximum loss across all the sub-groups (Sagawa et al., 2020a). This optimization method incorporates group annotations during training. Similar to prior works (Liu et al., 2021; Idrissi et al., 2022; Setlur et al., 2023), we treat it as an oracle, as this is the only method that uses group annotations.

## 5 Results And Discussion

In this section, we provide empirical evidence demonstrating the effectiveness of our regularized MTL approach in mitigating worst-group error while maintaining average performance across different scenarios.

## 5.1 Multitasking Is Competitive With Bespoke DRO Methods

We first compare our approach with previously proposed methods for tackling worst-group accuracy. Table 3 details the performance of various methods across the tasks of interest for the Val-GP setting. As expected, groupDRO yields the highest worst-group accuracy, as it directly optimizes for it. Our MTL approach outperforms JTT and BR-DRO on two datasets (MNLI and Waterbirds) while performing comparably with BR-DRO on the CivilComments dataset.

Table 3: Mean and standard deviations of the test worst-group accuracies across all the methods under consideration. Regularized MTL consistently reduces the gap between ERM and groupDRO when considering worst-group accuracy. 
| Method | Group Labels | Civilcomments | MNLI | Waterbirds |
|------------------------|----------------|-----------------|----------|--------------|
| ERM | Val Only | 61.3±2.0 | 67.6±1.2 | 85.4±1.4 |
| JTT | Val Only | 67.8±1.6 | 67.5±1.9 | 85.9±2.5 |
| BR-DRO | Val Only | 68.9±0.7 | 68.5±0.8 | 86.7±1.3 |
| ERM + MT + L1 | Val Only | 68.2±3.2 | 69.7±1.5 | 87.5±2.7 |
| groupDRO (Upper Bound) | Train and Val | 69.9±1.2 | 78.0±0.7 | 93.9±0.7 |

Given the competitive results in Table 3, we argue that our regularized MTL formulation is an attractive option over JTT and BR-DRO. Multitasking already features prominently in many ML code bases. Thus, introducing our simple regularization modification to existing MTL implementations represents a smaller technical overhead compared to introducing JTT or BR-DRO to target worst-group error. Also, as we will see in Section 5.2 below, regularized MTL is a single approach capable of improving both worst-group and average accuracy.

## 5.2 Multitasking Improves Both Average And Worst-Group Performance Even In The Absence Of Group Annotations

![9_image_0.png](9_image_0.png)

Figure 6: Comparison of the performance of different approaches with respect to average and worst-group accuracy on the Waterbirds dataset under val-GP and no-GP settings. Regularized MTL improves both average and worst-group accuracy even without group annotations. All methods enjoy a lift when validation group annotations are available.

Though previous works typically assume that practitioners have access to the group annotations on the validation set (Liu et al., 2021; Kirichenko et al., 2022), we are interested in settings where no such annotations are available. This covers many tasks of practical interest since, in some cases, it may be prohibitively cost-intensive (financially and in terms of human labor) to acquire group annotations even for the smaller validation set (Paranjape et al., 2023). 
Consequently, we present a comparative performance analysis in Figure 6, encompassing settings with and without access to group annotations. With respect to worst-group accuracy, our regularized MTL approach outperforms JTT and achieves a lift of ≈ 2% over ERM when group annotations are absent, a trend consistent across both the Waterbirds and Civilcomments-small datasets. While this ≈ 2% lift remains when validation group annotations are introduced, the benefit from group-labeled data is more pronounced (≈ 5%−15%). This boost can be worthwhile to practitioners who have the resources to obtain some group annotations. Moreover, it becomes evident from Figure 6 that our method not only yields superior worst-group performance but also improves average performance.

## 5.3 Are Both Regularization And Multitasking Jointly Necessary?

Table 4: Disentangling the impact of L1 regularization and the SSL objective on worst-group accuracy. We see that regularized multitasking is necessary for gains in both average and worst-group performance.
| Dataset | Method | Avg Acc (No Groups) | WG Acc (No Groups) | Avg Acc (Val Groups) | WG Acc (Val Groups) |
|---------------------|------------|----------|----------|----------|----------|
| Waterbirds | JTT | 95.6±0.3 | 82.1±1.2 | 94.0±0.5 | 85.9±2.5 |
| | ERM | 95.5±0.2 | 80.1±4.6 | 94.1±0.7 | 85.4±1.4 |
| | + L1 | 95.6±0.3 | 82.0±5.4 | 94.7±0.9 | 86.4±1.4 |
| | + MIM | 95.3±0.4 | 80.1±4.6 | 95.0±0.6 | 85.3±2.4 |
| | + MIM + L1 | 95.8±0.3 | 83.3±3.4 | 95.4±0.4 | 87.5±2.7 |
| Civilcomments-Small | JTT | 83.3±0.2 | 52.5±5.2 | 81.3±0.8 | 68.0±1.8 |
| | ERM | 83.9±0.4 | 51.6±5.6 | 81.4±1.0 | 67.4±2.1 |
| | + L1 | 83.7±0.4 | 51.6±4.0 | 80.3±0.7 | 66.3±1.6 |
| | + MLM | 83.9±1.2 | 58.3±6.6 | 80.3±0.7 | 68.5±0.4 |
| | + MLM + L1 | 84.4±0.4 | 53.7±4.3 | 82.0±0.5 | 69.4±1.7 |

In this section, we conduct an ablation to verify whether *both* multitask learning and regularizing the final-layer joint embedding space are necessary to improve average and worst-group performance. Our results are captured in Table 4. When assessing worst-group accuracy, we find that regularizing the final embedding space during ERM can, at times, result in worse performance compared to training via standard ERM (66.3±1.6 vs 67.4±2.1 on Civilcomments-small with validation group labels). On the other hand, multitasking without regularization can fail to improve over ERM, as evinced by the lack of improvement on Waterbirds. The regularized MTL approach is the only setting consistently improving on both datasets with and without validation group annotations. In line with these findings, we observe that the joint L1 regularization and multitask learning setup yields the highest average accuracy in most cases.
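The joint objective ablated in this section can be written as a single training loss. The following is a minimal sketch, not the authors' implementation; the function and argument names are ours, NumPy stands in for framework-specific tensors, and the L1 penalty is normalized by the size of the shared embedding (one reading of the normalization described in Appendix A.1):

```python
import numpy as np

def regularized_mtl_loss(end_task_loss, aux_loss, shared_embedding,
                         lambda_aux=1.0, lambda_reg=0.01):
    """Combine the end-task loss, the auxiliary self-supervised loss
    (e.g., MLM/MIM), and a size-normalized L1 penalty on the shared
    final-layer embedding."""
    l1_penalty = np.abs(shared_embedding).sum() / shared_embedding.size
    return end_task_loss + lambda_aux * aux_loss + lambda_reg * l1_penalty
```

Setting `lambda_aux = 0` recovers the ERM + L1 ablation row, and `lambda_reg = 0` recovers plain multitasking, matching the rows of Table 4.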
## 5.4 Impact Of Pre-Training

Table 5: Waterbirds: Impact of pre-training on average and worst-group accuracy.

| Pretrained | Method | Avg Acc (No Groups) | WG Acc (No Groups) | Avg Acc (Val Groups) | WG Acc (Val Groups) |
|-----|----------------|----------|-----------|----------|----------|
| No | ERM | 65.1±0.5 | 4.5±1.6 | 53.3±0.7 | 10.1±2.9 |
| | JTT | 67.0±5.3 | 10.8±12.2 | 56.2±2.1 | 49.9±4.0 |
| | ERM + MIM + L1 | 67.0±2.3 | 1.65±0.7 | 53.5±2.7 | 12.0±3.2 |
| Yes | ERM | 95.5±0.2 | 80.1±4.6 | 94.1±0.7 | 85.4±1.4 |
| | JTT | 95.6±0.3 | 82.1±1.2 | 94.0±0.5 | 85.9±2.5 |
| | ERM + MIM + L1 | 95.8±0.3 | 83.3±3.4 | 95.4±0.4 | 87.5±2.7 |

Table 6: Civilcomments-small: Impact of pre-training on average and worst-group accuracy.

| Pretrained | Method | Avg Acc (No Groups) | WG Acc (No Groups) | Avg Acc (Val Groups) | WG Acc (Val Groups) |
|-----|----------------|----------|----------|----------|----------|
| No | ERM | 80.7±0.8 | 31.1±7.2 | 74.4±0.9 | 54.0±3.7 |
| | JTT | 79.6±0.6 | 34.9±8.9 | 74.3±1.2 | 58.7±1.3 |
| | ERM + MLM + L1 | 80.7±0.6 | 31.3±7.6 | 74.2±0.3 | 56.2±0.9 |
| Yes | ERM | 83.9±0.4 | 51.6±5.6 | 81.4±1.0 | 67.4±2.1 |
| | JTT | 83.3±0.2 | 52.5±5.2 | 81.3±0.8 | 68.0±1.8 |
| | ERM + MLM + L1 | 84.4±0.4 | 53.7±4.3 | 82.0±0.6 | 69.4±1.7 |

Fine-tuning pre-trained models is arguably the de-facto paradigm in machine learning (Devlin et al., 2018; Dosovitskiy et al., 2020; Dery et al., 2021b). Consequently, our experiments so far have exclusively focused on pre-trained models. In this section, we wish to understand the effect of deviating from this paradigm on our MTL approach. Thus, we compare against JTT and ERM when the model is trained from scratch instead of starting with a pre-trained model. Tables 5 and 6 depict our results on Waterbirds and Civilcomments-small, respectively. Our results show that pre-training is critical for setting up regularized MTL as a viable remedy against poor worst-group outcomes.
We posit the following explanation for this outcome. Note that our informal motivation in Section 2 presupposes an ability to solve the auxiliary task to a reasonable degree. Solving the MLM and MIM tasks effectively from scratch with only the inputs of the relatively small supervised dataset is difficult. This poor performance on the auxiliary task translates to an inability to constrain the use of the spurious features on the end-task. Consistent with prior works (Tu et al., 2020; Wiles et al., 2022), our recommendation to practitioners is to use our approach during the fine-tuning of pre-trained models to be maximally effective.

Another consequence of our findings is that caution is warranted in interpreting the results of previous work on DRO in light of the new paradigm of mostly using pre-trained models. Most previous results on DRO have examined the setting of training from scratch, and as Tables 5 and 6 show, DRO methods significantly outperform competitors in that setting. However, the originally outsized gains in worst-group error shrink significantly when we move to pre-trained models, where our method shows superior performance.

## 6 Related Work

- **Multitask Learning.** Multitask learning is a common strategy for ML practitioners to improve the average performance of their models (Ruder, 2017; Ruder et al., 2019; Liu et al., 2019). While works like Hendrycks et al. (2019; 2020) and Mao et al. (2020) have shown that multitasking can improve the adversarial and out-of-distribution robustness of models, the impact of multitasking on worst-group outcomes has been relatively unexplored. Makino et al. (2022) propose generative multitask learning (GMTL), a method that bolsters robustness to target shift by conditioning the input on all available targets, thereby addressing challenges associated with target-causing confounders and spurious dependencies between input and targets.
However, it is important to note that their approach necessitates all target annotations during training, a requirement we do not assume in our scenario. Our work is inspired by Gururangan et al. (2020), who introduce constructing auxiliary objectives directly from end-task data for continued pretraining (they dub this Task-Adaptive Pre-training - TAPT). Following Dery et al. (2021b; 2023), we multitask this auxiliary task with the end-task. However, unlike these studies, our focus is on improving the worst-case group accuracy of the final model. The work by Michel et al. (2021) explores the balancing of worst and average performance in multitask learning. In their study, they focus on a set of equally important end tasks, striving for proficient model performance across all of them. In contrast, our work delves into the asymmetrical multitask setting, where the presence of the auxiliary task is determined by its contribution to enhancing our target metric on the end task.

- **Robustness using group demographics.** Our multitask learning approach is primarily designed for settings with limited-to-no group annotations. However, many DRO approaches assume the presence of group annotations for all training points. Among the approaches that leverage group information, *Group Distributionally Robust Optimization* (Sagawa et al., 2020a) is the most popular technique; it tries to minimize the maximum loss over the sub-groups. Goel et al. (2021) presented *Model Patching*, a data augmentation method designed to enhance the representation of minority groups. *FISH*, proposed by Shi et al. (2022), focuses on domain generalization via inter-domain gradient matching. In settings where group annotations are expensive (financially or in terms of human resources) to procure, these methods are not viable options.
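The group-DRO objective referenced above can be sketched in a few lines. This is a simplified illustration with names of our choosing: it uses a hard max over per-group mean losses, whereas Sagawa et al. (2020a) optimize a smoothed version with exponentiated-gradient group weights:

```python
import numpy as np

def group_dro_objective(example_losses, group_ids):
    """Return the worst (largest) mean loss over the annotated
    sub-groups -- the quantity group-DRO minimizes."""
    losses = np.asarray(example_losses, dtype=float)
    groups = np.asarray(group_ids)
    per_group = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(per_group)
```

Because the objective depends on `group_ids` for every training example, it cannot be applied when group annotations are unavailable, which motivates the annotation-free alternatives below.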
- **Robustness without group demographics.** Extensive research has been dedicated to addressing the challenges of worst-group generalization in the more realistic scenario where access to group annotations during training is unavailable. *GEORGE* (Sohoni et al., 2020) adopts a clustering-based methodology to unveil latent groups within the dataset and subsequently employs groupDRO for improved robustness. Learning from Failure (LfF) (Nam et al., 2020) introduces a two-stage strategy. In the first stage, an intentionally biased model aims to identify minority instances where spurious correlations do not apply. In the second stage, the identified examples are given increased weight during the training of a second model. The Just Train Twice (JTT) method (Liu et al., 2021) follows a similar principle by training a model that minimizes loss over a reweighted dataset. This dataset is constructed by up-weighting training examples misclassified during the initial few epochs. Our regularized MTL approach has several advantages over these methods, even though they are all deployed in the same limited-to-no group annotation settings. As we have demonstrated, our approach can improve both worst-group and average performance, unlike the other approaches, which target worst-group error only. Secondly, due to the widespread usage of multitask learning by many ML practitioners, implementing our modification represents minimal overhead compared to introducing one of the above bespoke approaches.

## 7 Conclusion

In this work, we presented an empirical investigation of the impact of multitasking on worst-group outcomes. We found that deploying multitasking, *as is*, does not consistently improve upon worst-performing groups. We have shown that while DRO methods, like JTT, display superior performance when models are trained from scratch, this is not the case in the currently more widespread setting of fine-tuning a pre-trained model.
Specifically, when fine-tuning, our method - regularized multitasking of the end-task with the pre-training objective constructed over end-task data - leads to improvements in worst-case group accuracy over JTT. Our work has demonstrated that it is possible to design a single, simple method that improves both worst-case group accuracy and average accuracy regardless of the availability of group annotations. Since multitask learning is already a standard part of many practitioners' toolboxes, and our modification to adapt it against worst-group error is simple, our approach requires minimal overhead to integrate into existing systems compared to bespoke DRO approaches. We thus encourage practitioners to introduce our modification to their MTL pipelines as an essentially free way of improving worst-group performance without sacrificing gains in average performance.

In order to keep an apples-to-apples comparison with DRO approaches, we have primarily focused on multitasking with auxiliary objectives based on end-task data only. For future work, it would be interesting to explore the impact of multitasking with auxiliary objectives based on external data more deeply. It would also be interesting to leverage meta-learning to dynamically adapt the auxiliary tasks towards improving worst-case group outcomes (Dery et al., 2021b; 2023). Additionally, we leave the study of the generalizability of our method to adversarial robustness, domain shift, and label shift to future work.

## 8 Acknowledgements

This work was supported in part by the Tang AI Innovation Fund, Defence Science and Technology Agency Singapore, National Science Foundation grants IIS1705121, IIS1838017, IIS2046613, IIS2112471, and funding from Meta, Morgan Stanley, Amazon, Google, the Schmidt Early Career Fellowship, and Apple. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of any of these funding agencies.
## References Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. Muppet: Massive multi-task representations with pre-finetuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 5799–5811, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main. 468. URL https://aclanthology.org/2021.emnlp-main.468. Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, Marshall Burke, David Lobell, and Stefano Ermon. Geography-aware self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10181–10190, 2021. Jonathan Baxter. A model of inductive bias learning. *Journal of artificial intelligence research*, 12:149–198, 2000. Aharon Ben-Tal, Dick Den Hertog, Anja De Waegenaere, Bertrand Melenberg, and Gijs Rennen. Robust solutions of optimization problems affected by uncertain probabilities. *Management Science*, 59(2):341– 357, 2013. Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. In *Companion Proceedings of The* 2019 World Wide Web Conference, WWW '19, pp. 491–500, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366755. doi: 10.1145/3308560.3317593. URL https://doi.org/ 10.1145/3308560.3317593. Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In *Conference on fairness, accountability and transparency*, pp. 77–91. PMLR, 2018. Rich Caruana. Multitask learning. *Machine learning*, 28:41–75, 1997. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. 
Terrance De Vries, Ishan Misra, Changhan Wang, and Laurens Van der Maaten. Does object recognition work for everyone? In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition* workshops, pp. 52–59, 2019. Lucio M Dery, Yann Dauphin, and David Grangier. Auxiliary task update decomposition: The good, the bad and the neutral. *arXiv preprint arXiv:2108.11346*, 2021a. Lucio M Dery, Paul Michel, Ameet Talwalkar, and Graham Neubig. Should we be pre-training? an argument for end-task aware training as an alternative. *arXiv preprint arXiv:2109.07437*, 2021b. Lucio M. Dery, Paul Michel, Mikhail Khodak, Graham Neubig, and Ameet Talwalkar. AANG : Automating auxiliary learning. In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=vtVDI3w_BLL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. John Duchi and Hongseok Namkoong. Learning models with uniform performance via distributionally robust optimization. *arXiv preprint arXiv:1810.08750*, 2018. Karan Goel, Albert Gu, Yixuan Li, and Christopher Re. Model patching: Closing the subgroup performance gap with data augmentation. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=9YlaeLfuhJF. Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan. Finetune like you pretrain: Improved finetuning of zero-shot vision models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19338–19347, 2023. 
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964, 2020. Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang. Fairness without demographics in repeated loss minimization. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1929–1938. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/ v80/hashimoto18a.html. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *Proceedings of the IEEE/CVF conference on computer vision and pattern* recognition, pp. 16000–16009, 2022. Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In *International conference on machine learning*, pp. 2712–2721. PMLR, 2019. Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. Pretrained transformers improve out-of-distribution robustness. *arXiv preprint arXiv:2004.06100*, 2020. Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, and David Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In Bernhard Schölkopf, Caroline Uhler, and Kun Zhang (eds.), Proceedings of the First Conference on Causal Learning and Reasoning, volume 177 of *Proceedings of* Machine Learning Research, pp. 336–351. PMLR, 11–13 Apr 2022. URL https://proceedings.mlr. press/v177/idrissi22a.html. Pavel Izmailov, Polina Kirichenko, Nate Gruver, and Andrew G Wilson. On feature learning in the presence of spurious correlations. 
*Advances in Neural Information Processing Systems*, 35:38516–38532, 2022. David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. Incorporating dialectal variability for socially equitable language identification. In Regina Barzilay and Min-Yen Kan (eds.), *Proceedings of the 55th Annual* Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 51–57, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-2009. URL https: //aclanthology.org/P17-2009. Maurice G Kendall. A new measure of rank correlation. *Biometrika*, 30(1/2):81–93, 1938. Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. Last layer re-training is sufficient for robustness to spurious correlations. *arXiv preprint arXiv:2204.02937*, 2022. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton Earnshaw, Imran Haque, Sara M Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. Wilds: A benchmark of in-the-wild distribution shifts. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on* Machine Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 5637–5664. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/koh21a.html. Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov. Towards understanding and mitigating social biases in language models. In *International Conference on Machine Learning*, pp. 6565–6576. PMLR, 2021. Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. 
In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine* Learning, volume 139 of *Proceedings of Machine Learning Research*, pp. 6781–6792. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/liu21f.html. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4487–4496, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1441. URL https://aclanthology.org/P19-1441. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017. Taro Makino, Krzysztof Geras, and Kyunghyun Cho. Generative multitask learning mitigates targetcausing confounding. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), *Advances in Neural Information Processing Systems*, volume 35, pp. 36546–36558. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/ ece182f93af26c64187ba3f7dfd4309a-Paper-Conference.pdf. Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, and Carl Vondrick. Multitask learning strengthens adversarial robustness. In *Computer Vision–ECCV 2020: 16th European* Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pp. 158–174. Springer, 2020. Paul Michel, Sebastian Ruder, and Dani Yogatama. Balancing average and worst-case accuracy in multitask learning. *arXiv preprint arXiv:2110.05838*, 2021. Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin. Learning from failure: Debiasing classifier from biased classifier. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 20673–20684. Curran Associates, Inc., 2020. 
URL https://proceedings.neurips.cc/paper_files/paper/2020/ file/eddc3427c5d77843c2253f1e799fe933-Paper.pdf. Junhyun Nam, Jaehyung Kim, Jaeho Lee, and Jinwoo Shin. Spread spurious attribute: Improving worst-group accuracy with spurious attribute estimation. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=_F9xpOrqyX9. Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, and Hannaneh Hajishirzi. AGRO: Adversarial discovery of error-prone groups for robust optimization. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=IrzkT99fDJH. Shikai Qiu, Andres Potapczynski, Pavel Izmailov, and Andrew Gordon Wilson. Simple and fast group robustness by automatic feature reweighting. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of *Proceedings of Machine Learning Research*, pp. 28448– 28467. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/qiu23c.html. Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017. Sebastian Ruder, Matthew E Peters, Swabha Swayamdipta, and Thomas Wolf. Transfer learning in natural language processing. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: Tutorials, pp. 15–18, 2019. Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks. In *International Conference on Learning Representations*, 2020a. URL https://openreview. net/forum?id=ryxGuJrFvS. Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. 
In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of* Machine Learning Research, pp. 8346–8356. PMLR, 13–18 Jul 2020b. URL https://proceedings.mlr. press/v119/sagawa20a.html. Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings. neurips.cc/paper_files/paper/2018/file/432aca3a1e345e339f35a30c8f65edce-Paper.pdf. Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, and Sergey Levine. Bitrate-constrained DRO: Beyond worst case robustness to unknown group shifts. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/ forum?id=2QzNuaRHn4Z. Yuge Shi, Jeffrey Seely, Philip Torr, Siddharth N, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=vDwBW49HmO. Nimit Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. No subclass left behind: Fine-grained robustness in coarse-grained classification problems. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 19339–19352. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/ file/e0688d13958a19e087e123148555e4b4-Paper.pdf. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: Masked sequence to sequence pretraining for language generation. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings* of the 36th International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning* Research, pp. 
5926–5936. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/song19d. html. Tejas Srinivasan and Yonatan Bisk. Worst of both worlds: Biases compound in pre-trained vision-andlanguage models. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing* (GeBNLP), pp. 77–85, Seattle, Washington, July 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.gebnlp-1.10. URL https://aclanthology.org/2022.gebnlp-1.10. Andreas Peter Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? data, augmentation, and regularization in vision transformers. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id= 4nPswr1KcP. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Sanjoy Dasgupta and David McAllester (eds.), Proceedings of the 30th International Conference on Machine Learning, volume 28 of *Proceedings of Machine Learning Research*, pp. 1139–1147, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. URL https://proceedings.mlr.press/ v28/sutskever13.html. Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. *Advances in neural information processing systems*, 35: 10078–10093, 2022. Lifu Tu, Garima Lalwani, Spandana Gella, and He He. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621–633, 2020. doi: 10.1162/tacl_a_00335. URL https://aclanthology.org/2020. tacl-1.40. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds200-2011 dataset. 2011. Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre-Alvise Rebuffi, Ira Ktena, Krishnamurthy Dj Dvijotham, and Ali Taylan Cemgil. 
A fine-grained analysis on distribution shift. In *International* Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=Dl4LetuLdyK. Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In *Proceedings of the 2018 Conference of the North American Chapter* of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10. 18653/v1/N18-1101. URL https://aclanthology.org/N18-1101. Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. *Advances in Neural Information Processing Systems*, 33:5824–5836, 2020. Runtian Zhai, Chen Dan, J Zico Kolter, and Pradeep Kumar Ravikumar. Understanding why generalized reweighting does not improve over ERM. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ashPce_W8F-. Michael Zhang, Nimit S Sohoni, Hongyang R Zhang, Chelsea Finn, and Christopher Re. Correct-ncontrast: a contrastive approach for improving robustness to spurious correlations. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of Machine Learning Research*, pp. 26484–26516. PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/zhang22z.html. Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 40(6): 1452–1464, 2018. doi: 10.1109/TPAMI.2017.2723009. ## A Appendix A.1 Training Details For Tprim, we employ a task-specific classification head of a single-layer multi-layer perceptron (MLP). 
For Taux, we leverage the pre-trained MLM and MIM heads from BERT and ViT, respectively. The embedding of the [CLS] token from the base model is passed through this MLP for classification. To facilitate effective multitask training, we adopt a task-heterogeneous batching scheme (Aghajanyan et al., 2021). This facilitates the accumulation of gradients across tasks prior to each parameter update, contributing to improved training efficiency and convergence. Lastly, to ensure proper scaling, the L1 loss is normalized by the number of parameters in the shared representation.

## A.2 Going Beyond The Pre-Training Objective

Table 7: Impact of different pre-training objectives on average and worst-group accuracy. Not all auxiliary tasks can help improve worst-group performance.

| Dataset | Method | Avg Acc (No Groups) | WG Acc (No Groups) | Avg Acc (Val Groups) | WG Acc (Val Groups) |
|---------------------|---------------|----------|----------|----------|----------|
| Waterbirds | ERM | 95.5±0.2 | 80.1±4.6 | 94.1±0.7 | 85.4±1.4 |
| | + MIM + L1 | 95.8±0.3 | 83.3±3.4 | 95.4±0.4 | 87.5±2.7 |
| | + SimCLR + L1 | 96.1±0.3 | 84.0±3.4 | 95.5±0.7 | 87.2±1.6 |
| Civilcomments-Small | ERM | 83.9±0.4 | 51.6±5.6 | 81.4±1.0 | 67.4±2.1 |
| | + MLM + L1 | 84.4±0.4 | 53.7±4.3 | 82.0±0.5 | 69.4±1.7 |
| | + CLM + L1 | 83.3±0.7 | 50.9±4.9 | 81.1±0.9 | 67.3±1.4 |

Previous works on multitasking with self-supervised objectives suggest that different auxiliary objectives have disparate impacts on end-task performance (Dery et al., 2023). Curious about the impact of the choice of auxiliary objective, we explore going beyond the model's original pre-training objective. For the Waterbirds dataset, we experiment with SimCLR - a contrastive prediction task based on determining whether two distinct augmented images originate from the same base image (Chen et al., 2020).
For BERT experiments on Civilcomments-small, we substitute the standard masked language modeling (MLM) task with causal language modeling (CLM) as the auxiliary task. From the results in Table 7, we observe that SimCLR's performance closely resembles that of the MIM pre-training objective, whereas CLM shows relatively inferior results compared to MLM. We hypothesize that BERT's intrinsic bidirectional attention mechanism and non-autoregressive nature are not ideally suited for causal language modeling (Song et al., 2019), resulting in the model underperforming in our multitask setup. Given the sensitivity of model performance to the choice of replacement objective, we proffer a practical recommendation to practitioners: use the pre-training objective as the auxiliary task. This aligns with recent work on best practices for fine-tuning pre-trained models (Goyal et al., 2023).

## A.3 Bayes Optimal Model For Dimension-Independent Reconstruction Under Noised Inputs

$$\ell(w_i)=\frac{1}{2}\mathbb{E}\left[\left(x_i-w_i\tilde{x}_i\right)^2\right]=\frac{1}{2}\mathbb{E}\left[x_i^2-2w_ix_i\tilde{x}_i+\left(w_i\tilde{x}_i\right)^2\right]$$

$$\frac{\partial\ell(w_i)}{\partial w_i}=-\mathbb{E}\left[x_i\tilde{x}_i\right]+w_i\,\mathbb{E}\left[\tilde{x}_i^2\right]$$

The optimal weighting $w_i^*$ is achieved when $\frac{\partial\ell(w_i)}{\partial w_i}=0$:

$$w_i^*=\frac{\mathbb{E}\left[x_i\tilde{x}_i\right]}{\mathbb{E}\left[\tilde{x}_i^2\right]}=\frac{\mathbb{E}\left[x_i(x_i+\epsilon_i)\right]}{\mathbb{E}\left[(x_i+\epsilon_i)^2\right]}=\frac{\mathbb{E}\left[x_i^2\right]+\mathbb{E}\left[x_i\right]\mathbb{E}\left[\epsilon_i\right]}{\mathbb{E}\left[x_i^2\right]+2\,\mathbb{E}\left[x_i\right]\mathbb{E}\left[\epsilon_i\right]+\mathbb{E}\left[\epsilon_i^2\right]}$$

Noting that $\mathbb{E}\left[\epsilon_i\right]=0$ and $\mathbb{E}\left[x_i^2\right]=\sigma_i^2+0.5\left(\mu_{i|y=1}^2+\mu_{i|y=-1}^2\right)$, we obtain

$$w_i^*=\frac{\sigma_i^2+0.5\left(\mu_{i|y=1}^2+\mu_{i|y=-1}^2\right)}{\sigma_i^2+0.5\left(\mu_{i|y=1}^2+\mu_{i|y=-1}^2\right)+\sigma_{\mathrm{noise}}^2}$$

## B Broader Impact Statement

In terms of broader impact, our work has beneficial implications for group fairness and mitigating demographic-based bias in machine learning systems. Specifically, we present a simple approach to improving worst-case group error.
This means that our work contributes positively to certain groups that would otherwise be impacted by the poor performance of ML models. We have also provided a new lens through which to view the problem of the impacts of multitasking, showing that it is a viable tool against poor group-based outcomes. However, our method introduces compute overhead due to the optimization of multiple objectives. This results in increased power consumption and, thus, greenhouse emissions during the training of models. Nevertheless, introducing our simple regularization modification to existing MTL implementations represents a negligible technical overhead compared to introducing JTT or BR-DRO to target worst-group error.

Table 8: Sensitivity analysis of all the methods to different hyperparameters wrt worst-group accuracy.

| Dataset | Method | Seed | Learning Rate | Batch Size | Up | T | λaux | λreg |
|---------------------|---------------|---------|---------|---------|---------|--------|--------|---------|
| Waterbirds | ERM | 0.0734 | 0.5717 | −0.4384 | − | − | − | − |
| Waterbirds | JTT | 0.0573 | − | −0.380 | 0.0246 | 0.1862 | − | − |
| Waterbirds | ERM + MT + L1 | −0.0314 | − | 0.0142 | − | − | 0.0943 | 0.4114 |
| Waterbirds | groupDRO | −0.1688 | 0.196 | −0.1384 | − | − | − | − |
| Civilcomments-Small | ERM | 0.0617 | −0.8 | 0.1029 | − | − | − | − |
| Civilcomments-Small | JTT | −0.0315 | − | 0.2104 | −0.0047 | 0.1111 | − | − |
| Civilcomments-Small | ERM + MT + L1 | 0.0677 | − | 0.0453 | − | − | 0.0601 | −0.2955 |
| Civilcomments-Small | groupDRO | −0.0392 | −0.7352 | −0.1726 | − | − | − | − |

Sensitivity to Hyperparameters: To evaluate the sensitivity of worst-group accuracy to various hyperparameters across different methods, we employ Kendall's rank coefficient τ (Kendall, 1938). To adhere to our computation budget, we opt for the optimal learning rate utilized in ERM for both JTT and ERM+MT+L1. Our sensitivity analysis in Table 8 reveals distinct preferences in learning rates.
The ViT model applied to Waterbirds demonstrates a preference for higher learning rates. Conversely, when trained on the Civilcomments-Small dataset, BERT exhibits a preference for lower learning rates. The epoch at which upsampling begins is also a sensitive hyperparameter for JTT. The sensitivity to different seed values is negligible across all the methods. Notably, our method displays the least sensitivity to changes in batch size. However, it is noteworthy that our method is most responsive to variations in the regularization parameter (λreg), with lower values resulting in higher worst-group accuracy.
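As a concrete illustration of the sensitivity measure used above, the following sketch implements Kendall's rank coefficient (the simple τ-a variant, which ignores ties) and applies it to a hypothetical hyperparameter sweep. The sweep values are illustrative only, not taken from Table 8.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs; no tie handling."""
    assert len(x) == len(y) and len(x) > 1
    num = 0
    for i, j in combinations(range(len(x)), 2):
        prod = (x[i] - x[j]) * (y[i] - y[j])
        if prod > 0:
            num += 1      # concordant pair
        elif prod < 0:
            num -= 1      # discordant pair
    return num / (len(x) * (len(x) - 1) / 2)

# Hypothetical sweep: worst-group accuracy vs. regularization strength lambda_reg.
lam_reg = [0.01, 0.05, 0.1, 0.5, 1.0]
wg_acc = [88.1, 87.0, 85.2, 83.9, 80.5]   # strictly decreasing in lam_reg
print(kendall_tau(lam_reg, wg_acc))       # -1.0 (perfectly anti-correlated ranks)
```

A negative τ, as in this toy sweep, corresponds to the "lower λreg, higher worst-group accuracy" pattern described above; for production use, a tie-aware τ-b implementation (e.g., from SciPy) is preferable.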
Review 1:

Summary: The paper proposes to use masked image/language modeling as an auxiliary task to improve the worst-group performance in datasets with subgroup shifts. On a synthetic dataset, the auxiliary task is shown to be quite effective when the model is trained with a proper regularizer. On 4 real-world datasets, the method achieves better worst-group accuracy than JTT and BR-DRO when there are no group labels.

Strengths and Weaknesses: Strengths: 1. The paper is the first to demonstrate that subgroup shift robustness can be improved by masked image/language modeling as the auxiliary task, as far as I know. 2. In the synthetic data experiment, the contrast between Fig. 4 and Fig. 5 looks compelling. Weaknesses: 1. Section 3.3 uses the weight norm as the measure for model complexity, while Section 4.2 uses the last-layer feature norm, which means there is a gap between the argument in the synthetic data and in the real data. Resolving the inconsistency is important to understand the effect of the regularization term, which is quite essential as Fig. 5 shows. 2. The real data experiment on image data is only done on a small synthetic dataset; it is unclear whether the method will scale to larger datasets like WILDS-iWildCam [2]. 3. Some papers on multi-task learning for out-of-distribution generalization might be relevant to the paper, e.g., [1].

[1] Albuquerque, Isabela, et al. "Improving out-of-distribution generalization via multi-task self-supervised pretraining." arXiv preprint arXiv:2003.13525 (2020).
[2] https://wilds.stanford.edu/datasets/

Requested Changes: Elaborate on the gap between the model complexity measure in synthetic and real-world data, and provide some thoughts on why the feature regularization is so important for MLM to work well. Test if the method is scalable to large-scale vision datasets.
Broader Impact Concerns: Not applicable

==================================================

Review 2:

Summary: The authors propose that multitask learning applied to a discriminative task and an input-reconstruction task can improve worst-case and average-case accuracy for the discriminative task. There exist two types of features: core and spurious. Solely training on the discriminative task can result in the core features being ignored. When the input-reconstruction task is simultaneously optimized with strong regularization, this ensures the core features are also learned, thus improving robustness.

Strengths and Weaknesses: This is a strong paper that delivers a simple but meaningful message in a very clear way. Toy problems are used skillfully to present a narrative that is easy to follow, and left me believing that the method should work, even before seeing the empirical results. The empirical results are also strong, and support the authors' claims. I appreciated the level of detail the authors devoted to their discussion of model selection, which I believe is crucial for a paper about robustness. The only weakness I found is the authors' lack of discussion on their choice of L1 norm regularization (as opposed to other forms, such as L2 norm or dropout). The choice of regularization seems important given how critical it is to their proposed approach.

Requested Changes: I would add some discussion about the choice of regularization, and why you went with the L1 norm. I also suggest discussing https://arxiv.org/abs/2202.04136 in the MTL section of the related work. It is about improving robustness to shifts in p(y, y') in multitask learning, where y and y' are the targets. I believe it's related to this work, since yours is about improving robustness to shifts in p(y, s), where s is the spurious attribute in Section 2. Both can be framed as improving robustness to unobserved confounding that is particular to MTL.
Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: The paper investigates how multitask learning (MTL) affects worst-group outcomes, i.e., the performance of a model on the most disadvantaged subgroups of data. The paper proposes regularized MTL, which combines the end task with a pre-training objective and applies ℓ1 regularization to the shared representation space, as a simple and effective method to improve worst-group outcomes. The paper provides both synthetic and natural data experiments to show that regularized MTL can induce the model to rely more on the core features and achieve better worst-group accuracy than distributionally robust optimization (DRO) methods.

Strengths and Weaknesses: Strengths: The paper addresses an important and timely problem of group fairness in machine learning, especially in the context of pre-trained models and MTL. The paper demonstrates the effectiveness of the proposed method on three natural datasets across computer vision and natural language processing tasks and shows that it consistently outperforms several state-of-the-art DRO methods on both worst and average group outcomes, regardless of the availability of group annotations. The paper performed several ablation experiments on the effect of their method and of pre-training.

Weaknesses: As of this version, the paper lacks structure; the introduction contains methods and results as well. The paper does not provide a clear and rigorous definition of the problem and the proposed method. The paper does not provide a clear explanation of why pre-training is critical for the success of the proposed method, and how it affects the trade-off between core and spurious features. The paper does not compare the proposed method with other MTL methods that use different auxiliary tasks or different regularization techniques, which could provide more insights into the effectiveness of the proposed method.
The paper needs to address the potential limitations or drawbacks of the proposed method, such as the computational cost, the sensitivity to the pre-training task and hyperparameters, or the generalization to other domains or tasks.

Requested Changes: The paper should be restructured to define the problem first and then introduce the method, results, and supporting evidence. It should improve the readability and clarity of some sections, such as the problem formulation, the synthetic data experiments, and the appendix. For example, the paper could define the notation and the terminology more explicitly, and provide more explanations and examples for the synthetic data experiments. The paper could also reorganize the appendix and provide more details and proofs for some of the claims and results. The paper should mention the contribution and novelty of the work explicitly in the introduction. The paper should compare the proposed method with other MTL methods that use different auxiliary tasks or different regularization techniques. This would help to understand the advantages and disadvantages of the proposed method, and how it relates to the existing MTL literature.

Broader Impact Concerns: The paper does not have a Broader Impact Statement section. The paper should add a Broader Impact Statement section that discusses the potential ethical, social, or environmental implications of the work, both positive and negative. Some possible broader impact concerns of the work are: The paper could have a positive impact on improving safety and group fairness and reducing discrimination in machine learning applications, especially those that use pre-trained models and MTL. The paper could also inspire more research on understanding and mitigating the effects of spurious correlations and feature selection on group outcomes.
================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper aims to improve the worst-case performance of a given model across groups by introducing a masked reconstruction loss. After building intuition in toy settings, the proposed approach is compared to various baselines on standard benchmarks. Reviewers generally found the approach convincing and appreciated the clarity and structure of the paper, though one reviewer made suggestions about the presentation that should be considered. Otherwise, the various reviewers' concerns were addressed during the rebuttal. ==================================================
# On The Choice Of Learning Rate For Local SGD

Lukas Balles‡ *lukas.balles@aleph-alpha.com* Aleph Alpha, Heidelberg, Germany. Work done at AWS.

Prabhu Teja S‡ *prbuteja@amazon.de* Amazon Web Services, Berlin, Germany.

Cédric Archambeau *cedric.archambeau@helsing.ai* Helsing, Berlin, Germany. Work done at AWS.

Reviewed on OpenReview: *https://openreview.net/forum?id=DPvwr4HJdt*

## Abstract

Distributed data-parallel optimization accelerates the training of neural networks, but requires constant synchronization of gradients between the workers, which can become a bottleneck. One way to reduce communication overhead is to use Local SGD, where each worker asynchronously takes multiple local gradient steps, after which the model weights are averaged. In this work, we discuss the choice of learning rate for Local SGD, showing that it faces an intricate trade-off. Unlike in the synchronous case, its gradient estimate is biased, with the bias dependent on the learning rate itself. Thus, using learning rate scaling techniques designed for faster convergence in the synchronous case with Local SGD results in performance degradation, as previously observed. To analyze the manifestation of this bias, we study the convergence behaviour of Local SGD and synchronous data-parallel SGD when using their optimal learning rates. Our experiments show that the optimal learning rate for Local SGD differs substantially from that of SGD, and that when using it, the performance of Local SGD matches that of SGD. However, this performance comes at the cost of added training iterations, rendering Local SGD faster than SGD only when communication is much more time-consuming than computation. This suggests that Local SGD may be of limited practical utility.
## 1 Introduction

Gradient-based optimization techniques like Stochastic Gradient Descent (SGD) (Robbins & Monro, 1951) and its variants (Qian, 1999; Sutskever et al., 2013; Kingma & Ba, 2015) have contributed enormously to the success of deep learning models in the past decade. With the increasing scale of both models and datasets, distributed training has become commonplace. The predominant distributed training paradigm is *data-parallel* training, where each of K workers computes a gradient using an independently drawn minibatch of data in parallel. These individual gradients are then averaged across all workers before an optimizer update is applied, effectively increasing the batch size by a factor of K. The use of larger batch sizes results in an improved gradient estimate, its variance being inversely proportional to the total batch size (Bottou et al., 2018). However, fully capitalizing on this reduced variance requires higher learning rates. Goyal et al. (2018) propose to increase the learning rate linearly with the number of workers. The goal is to achieve what is called perfect linear scaling, i.e., to converge in K times fewer iterations when using K workers. This has been shown to succeed for a moderate number of workers,

‡Equal Contribution.

![1_image_0.png](1_image_0.png)

Figure 1: Why is Local SGD's performance lower than SGD's? Existing experimental works on Local SGD (Lin et al., 2020) use linear scaling that was designed for synchronous SGD and thus see poorer performance when the number of local steps H is large (Local-Lin). When using an automatic learning rate scaling method (Local-Ada) based on optimal learning rates, we find that Local SGD performs similarly for a large range of K and H. These results are for a Wide ResNet-28 being trained on ImageNet32. Each line represents a *K, H* configuration, and accuracy increases moving away from the center.
![1_image_1.png](1_image_1.png)

![1_image_2.png](1_image_2.png)

Figure 2: When is Local SGD useful in practice? With adaptive learning rate scaling methods modulating the number of training iterations, we examine when Local SGD (Local-Ada) is faster than SGD (Acc-Ada). We see that Local SGD is faster than the synchronous case only when communication costs much more than computation (m ≥ 2), thus making Local SGD useful only in scenarios of extreme communication costs. We show on the x-axis the communication-to-computation cost ratio of a hypothetical system, and pseudo-wall-clock time on the y-axis (see §6.2). The plots in shades correspond to various H, and the dark lines to the minimum skyline for each cost factor m and K across all values of H. The cross-over from SGD to Local SGD happens only for high costs (m).

but performance deteriorates for large K. In general, using a larger K requires a learning rate increase by a factor *smaller* than K and, consequently, training for *more* than 1/K times the number of iterations. AdaScale (Johnson et al., 2020) uses estimates of gradient variance and magnitude to compute a local scaling factor for each update step, which is used to modulate the learning rate, as well as to *stretch* the number of training iterations. While data-parallel training can significantly speed up model training, it necessitates synchronization among workers at each iteration. As the number of workers increases, the communication time required to synchronize begins to dominate the computation time (see Ortiz et al., 2021, Figure 1). This is exacerbated by compute infrastructure with low bandwidth and/or high-latency connections between workers. A straightforward remedy for communication overhead is gradient accumulation, where each worker averages gradients from H > 1 minibatches locally before synchronization. However, this increases the effective batch size by a factor of K × H, which quickly enters a regime of diminishing returns.
It has been shown to deteriorate performance, even for moderate values of H, when using linear learning rate scaling (Lin et al., 2020). An alternative approach to reduce communication overhead is Local SGD (Stich, 2019; Yun et al., 2022), which performs H > 1 gradient steps locally on each worker, after which the model weights are averaged and synchronized across the K workers. While theoretical work has shown that Local SGD converges at the same rate as synchronous data-parallel SGD, empirical studies (Lin et al., 2020; Ortiz et al., 2021) have observed deteriorating model performance when using Local SGD. In this paper, we study this apparent discrepancy between the theory and empirical findings on Local SGD. Specifically, we answer the following three questions:

1. *Why does Local SGD's performance lag that of SGD?* Empirical works that study the performance of Local SGD use learning rate scaling methods borrowed from the SGD literature (see §2). We surface that this is problematic for Local SGD. As in the synchronous case, increasing the number of workers reduces the variance of the virtual gradient estimate used by Local SGD, which warrants a learning rate increase. As we show, Local SGD's gradient estimate is *biased*, and the bias depends on the learning rate itself. Thus, a naive application of known learning rate scaling techniques may adversely affect the quality of Local SGD's gradient estimate and, thus, its convergence behavior; this explains the results of Lin et al. (2020) and Ortiz et al. (2021), who use linear learning rate scaling and show that Local SGD performs worse than standard SGD (see §4).

2. *Can the performance gap between Local SGD and SGD be bridged?* We examine the optimal learning rates for Local SGD and SGD, and devise automatic learning rate scaling methods that scale the learning rate based on gradient statistics.
In this process, we recover the well-known technique for SGD called AdaScale (Johnson et al., 2020), which was previously proposed as a heuristic (see §5). We show that, using our proposed scaling technique, Local SGD reliably maintains the target accuracy across a wide range of values for K and H, as shown in Figure 1 in blue. This is in contrast to previous works that observe deteriorating performance when using Local SGD with linear learning rate scaling, as shown in the green curve in Figure 1 (see §6.1).

3. *When is Local SGD empirically preferable to SGD?* When using automatic learning rate scaling methods, we show that Local SGD converges faster than large-batch SGD realized through gradient accumulation for large H. However, this does not trivially translate into wall-clock speedups, as these automatic learning rate scaling methods also modulate the number of training iterations. In Figure 2, we show that Local SGD improves wall-clock time convergence compared to synchronous data-parallel SGD, when comparing under optimal learning rate scaling, only in extreme scenarios where communication is substantially more time-consuming than computation. This finding casts fundamental doubt on the practicality of Local SGD, especially when scaling only the learning rate, and is one we find missing in prior works (see §6.2).

## 2 Related Work

The literature on distributed optimization is vast, so we focus on the most closely related works. For instance, we do not discuss fully asynchronous methods like HogWild (Recht et al., 2011) or approaches to compress gradients for communication (e.g., Alistarh et al., 2017; Basu et al., 2019). Cao et al. (2023) present a survey of such methods.

Learning rate scaling To our knowledge, the first practical recommendation for learning rate modulation in distributed optimization was proposed by Krizhevsky (2014), who introduced a *linear scaling rule* in terms of the number of workers.
They found it to work well for small numbers of workers but observed performance drops for larger numbers. Goyal et al. (2018) showed that linear scaling could be made to work at larger scales with the use of a warmup phase that gradually increases the learning rate towards the target value. They found the duration of the warmup to be critical to the performance. As an alternative to linear scaling, Krizhevsky (2014) as well as Hoffer et al. (2017) experimented with a less aggressive square-root heuristic, which ultimately did not prove successful. Going beyond simple scaling factors, AdaScale (Johnson et al., 2020) adaptively scales the learning rate based on estimates of the gradient variance and magnitude and has been shown to retain performance for a much higher number of workers. We will discuss this method in detail in §5. Methods for improved generalization, like cyclical learning rates (Smith, 2017), can be seen as orthogonal to our work, as we investigate optimal scaling factors for the learning rates.

Large-batch optimizers Optimizers like Lars (You et al., 2017) and Lamb (You et al., 2020) have been proposed with the explicit goal of handling large batch sizes, but their utility has been questioned. Nado et al. (2021) find that, with suitable hyperparameters, both SGD and Adam match the performance of Lars and Lamb. Based on the assumption that small-batch training generalizes better, Adasum (Maleki et al., 2021) tries to mimic small-batch training at larger batch sizes by using a more complex operation to fuse the gradients instead of summation. Refuting the claims that the stochastic nature of SGD is important for its performance, Geiping et al. (2022) show that, with explicit regularization, even full-batch gradient descent can attain the performance of small-batch SGD.
Local SGD The method now known as Local SGD has been studied both empirically (Povey et al., 2014; Zhang et al., 2016; Lin et al., 2020; Gu et al., 2023) and theoretically (Dekel et al., 2012; Zhou & Cong, 2018; Stich, 2019; Haddadpour et al., 2019; Khaled et al., 2020; Woodworth et al., 2020; Deng et al., 2022; Yun et al., 2022). Deep learning models trained using Local SGD have been shown to achieve lower performance than with fully synchronous training. Lin et al. (2020) remedy this with post-Local SGD, where they switch to Local SGD after training the network with synchronous SGD for a certain number of iterations. Ortiz et al. (2021) show that the point of the switch is a sensitive hyperparameter. They change the global averaging step to a moving average, as in Wang et al. (2020), and find that it partially alleviates that sensitivity. Note that both Lin et al. (2020) and Ortiz et al. (2021) attempt to achieve perfect linear scaling (i.e., reducing the number of iterations by a factor of K) and apply the linear learning rate scaling rule of Goyal et al. (2018) to Local SGD. A closely related work by Wang & Joshi (2019) shows that the number of local steps H needs to be decreased as the model gets closer to convergence. They consider a fixed, known learning rate and optimize H, whereas we do the opposite. Thus, the two approaches are flip sides of the same coin. Recently, Gu et al. (2023) found that Local SGD performs like SGD when using a small learning rate and training *long enough*. This scenario, while useful for analysis, is of limited practical utility, as in practice we are interested in reaching a high enough performance in the shortest time possible.

Federated learning A plethora of Local SGD variants have been used and studied in federated learning (Wang et al., 2021), where workers access fixed subsets of data, which are not necessarily iid.
Under these conditions of data heterogeneity, Murata & Suzuki (2021) remark that the gradient estimate of Local SGD is biased but do not quantify it. Works in federated learning that examine learning rate scaling have focussed on heuristics like linear and square-root scaling presented above (e.g., Charles et al., 2021). Data heterogeneity poses additional challenges compared to our setting and is not considered here.

## 3 Preliminaries

In this section, we introduce some preliminary material and define notation.

## 3.1 Problem Setup

Training a neural network requires solving an empirical risk minimization problem with an objective of the following form:

$$f(\mathbf{w})=\frac{1}{N}\sum_{i=1}^{N}f(\mathbf{w};\mathbf{x}_{i}),\tag{1}$$

where $\mathbf{w}$ are the model weights and $f(\cdot;\mathbf{x}_{i})$ denotes the loss of the i-th training example. Throughout this paper, we assume f to be L-smooth, i.e., its gradient is Lipschitz: $\|\nabla f(\mathbf{w}')-\nabla f(\mathbf{w})\|\leq L\|\mathbf{w}'-\mathbf{w}\|$ for all $\mathbf{w},\mathbf{w}'$ in the domain of f. This implies the well-known local quadratic bound $f(\mathbf{w}')\leq f(\mathbf{w})+\nabla f(\mathbf{w})^{T}(\mathbf{w}'-\mathbf{w})+\frac{L}{2}\|\mathbf{w}'-\mathbf{w}\|^{2}$, which we will use in our analysis. Gradient descent iteratively minimizes the objective with updates of the form $\mathbf{w}_{t+1}=\mathbf{w}_{t}-\gamma_{t}\nabla f(\mathbf{w}_{t})$, where $\gamma_{t}$ is the learning rate at iteration t. For large datasets, it is inefficient to compute the gradient over all data points in each iteration. Instead, we resort to Stochastic Gradient Descent (SGD), where we approximate the gradient using a stochastic gradient

$$\mathbf{g}_{t}=\frac{1}{B}\sum_{i\in B}\nabla f(\mathbf{w}_{t};\mathbf{x}_{i}),\tag{2}$$

computed on a randomly-sampled minibatch $B\subset\{1,2,\cdots,N\}$ of size $|B|=B\ll N$. In synchronous data-parallel SGD, each worker computes a stochastic gradient $\mathbf{g}_{t}^{k}$ using an independent minibatch. These are then averaged, $\tilde{\mathbf{g}}_{t}=\frac{1}{K}\sum_{k=1}^{K}\mathbf{g}_{t}^{k}$, before an optimization step is applied.
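The variance reduction obtained by averaging K independent gradient estimates can be checked numerically. The following pure-Python sketch is illustrative: the noisy "gradients" are just Gaussian draws around a true gradient of zero, and all constants are ours.

```python
import random

def var_of_avg_grad(K, sigma, rng, n=20000):
    """Empirical variance of the average of K iid noisy scalar gradients (true gradient = 0)."""
    samples = [sum(rng.gauss(0.0, sigma) for _ in range(K)) / K for _ in range(n)]
    m = sum(samples) / n
    return sum((s - m) ** 2 for s in samples) / n

rng = random.Random(0)
v1 = var_of_avg_grad(1, 1.0, rng)   # single worker: variance ~ sigma^2
v8 = var_of_avg_grad(8, 1.0, rng)   # eight workers: variance ~ sigma^2 / 8
print(v1 / v8)  # close to K = 8, since Var[avg] ~= sigma^2 / K
```

This 1/K behavior of the averaged estimate is exactly what motivates increasing the learning rate with the number of workers.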
Each $\mathbf{g}_{t}^{k}$ is an unbiased estimate of the gradient, $\mathbb{E}_{t}[\mathbf{g}_{t}^{k}]=\nabla f_{t}:=\nabla f(\mathbf{w}_{t})$, with a variance that we denote as $\sigma_{t}^{2}:=\mathrm{Var}_{t}[\mathbf{g}_{t}^{k}]$. Consequently, the averaged gradient $\tilde{\mathbf{g}}_{t}$ unbiasedly estimates $\nabla f_{t}$ with a variance of $\sigma_{t}^{2}/K$. Here and throughout this paper, $\mathbb{E}_{t}$ denotes the conditional expectation given $\mathbf{w}_{t}^{k}$, $\forall k\in\{1,2,\cdots,K\}$.

## 3.2 Local SGD

Synchronous data-parallel SGD is equivalent to each worker taking a local gradient step, followed by an averaging of the model weights. This requires a synchronization of the weights after every iteration, which may cause a large communication overhead, depending on the number of workers and the compute infrastructure. In Local SGD, each worker takes H > 1 local gradient steps before the weights are averaged:

$$\mathbf{w}_{t+1}^{k}=\begin{cases}\frac{1}{K}\sum_{k=1}^{K}(\mathbf{w}_{t}^{k}-\gamma_{t}\mathbf{g}_{t}^{k}),&\text{if }H\mid t+1,\\ \mathbf{w}_{t}^{k}-\gamma_{t}\mathbf{g}_{t}^{k},&\text{otherwise.}\end{cases}\tag{3}$$

This reduces the communication frequency without affecting the rate of convergence (Stich, 2019). In practice, each of those H iterations can be replaced by a more complex update step, such as using momentum or even adaptive gradient methods like Adam (Singh et al., 2021; Wang et al., 2020).

## 4 Local SGD's Learning Rate Conundrum

The analysis of Local SGD is based on the virtual sequences

$$\bar{\mathbf{w}}_{t}:=\frac{1}{K}\sum_{k=1}^{K}\mathbf{w}_{t}^{k},\quad\bar{\mathbf{g}}_{t}:=\frac{1}{K}\sum_{k=1}^{K}\mathbf{g}_{t}^{k},\tag{4}$$

which are the averages of the per-worker iterates and gradients at each iteration. These sequences are tools for the mathematical analysis and are not computed by Local SGD at every iteration. They evolve as $\bar{\mathbf{w}}_{t+1}=\bar{\mathbf{w}}_{t}-\gamma_{t}\bar{\mathbf{g}}_{t}$, resembling an SGD trajectory with an "implicit" gradient estimate $\bar{\mathbf{g}}_{t}$.
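To make the update rule in Equation (3) concrete, here is a minimal single-machine simulation of Local SGD on the scalar quadratic f(w) = w²/2 (so ∇f(w) = w) with Gaussian gradient noise. All parameter values are illustrative, not taken from the paper's experiments.

```python
import random

def local_sgd(K=4, H=5, rounds=20, gamma=0.1, sigma=0.5, w0=5.0, seed=0):
    """Simulate Eq. (3): each worker takes H local SGD steps, then weights are averaged."""
    rng = random.Random(seed)
    w = [w0] * K                      # per-worker scalar weights
    for _ in range(rounds):
        for _ in range(H):            # H local steps with noisy gradient w + eps
            w = [wk - gamma * (wk + rng.gauss(0.0, sigma)) for wk in w]
        w = [sum(w) / K] * K          # synchronization: average the model weights
    return w[0]                       # all workers agree after the final average

final_w = local_sgd()
print(abs(final_w) < 1.0)  # the iterates contract toward the minimum at w = 0
```

Setting H = 1 recovers synchronous data-parallel SGD, since the weights are then averaged after every single gradient step; between synchronization points, the per-worker iterates drift apart, which is the source of the bias analyzed in §4.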
Theoretical works on Local SGD show that this sequence behaves almost like synchronous data-parallel SGD under certain restrictions on the learning rate. This indicates that the implicit gradient estimate $\bar{\mathbf{g}}_{t}$ approximates the gradient $\nabla f(\bar{\mathbf{w}}_{t})$ *just as well* as a hypothetical standard stochastic gradient $\tilde{\mathbf{g}}_{t}$ with $\mathbb{E}_{t}[\tilde{\mathbf{g}}_{t}]=\nabla f(\bar{\mathbf{w}}_{t})$ and a variance of

$$\mathbb{E}_{t}[\|\tilde{\mathbf{g}}_{t}-\nabla f(\bar{\mathbf{w}}_{t})\|^{2}]=\frac{\sigma_{t}^{2}}{K}.\tag{5}$$

The meaning of "just as well" is relatively opaque in existing works on Local SGD. Here, we make it explicit. As we will see, the quality of $\bar{\mathbf{g}}_{t}$ is influenced by the preceding steps taken since the most recent synchronization point. We, therefore, consider a single constant step size γ across H consecutive steps, starting from a synchronization point, and assume a local bound on the gradient variance. This is formalized as follows.

Assumption 4.1. Assume we run H consecutive steps of Local SGD using a constant step size γ, starting from a synchronization point t : H | t. Assume the gradient variance across these H steps is bounded, i.e., for all $k\in[K]$ and $t'\in[t,t+H)$, we have $\mathrm{Var}_{t'}[\mathbf{g}_{t'}^{k}]\leq\bar{\sigma}_{t}^{2}$ for some $\bar{\sigma}_{t}$.

With that, we can derive a bound on the mean-squared error (MSE) of Local SGD's implicit gradient estimate $\bar{\mathbf{g}}$. The proof is given in Appendix C.1.

Proposition 4.2. *Under Assumption 4.1, the MSE of Local SGD's implicit gradient estimate (Eq. 4) satisfies*

$$\mathbb{E}_{t}\left[\|\bar{\mathbf{g}}_{t'}-\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}\right]\leq\underbrace{\frac{\bar{\sigma}_{t}^{2}}{K}}_{\text{Variance}}+\underbrace{(H-1)L^{2}\gamma^{2}\bar{\sigma}_{t}^{2}}_{\text{Bias}}\tag{6}$$

*for all* $t'\in[t,t+H)$. The bias originates from the fact that the per-worker gradients, $\mathbf{g}_{t}^{k}$ in Equation (4), are computed at slightly different locations in the parameter space due to differences in the local trajectories.
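The right-hand side of Equation (6) can be evaluated directly to see when the bias term swamps the variance term. The constants below (L, σ̄, H) are illustrative, not taken from the paper's experiments.

```python
def mse_bound(gamma, K, H, L, sigma):
    """Right-hand side of Eq. (6): variance term + learning-rate-dependent bias term."""
    variance = sigma**2 / K
    bias = (H - 1) * L**2 * gamma**2 * sigma**2
    return variance + bias

# Illustrative constants: L = 10, sigma = 1, H = 8 local steps.
L_, sigma_, H_ = 10.0, 1.0, 8
for K in (1, 8, 64):
    small = mse_bound(0.01, K, H_, L_, sigma_)  # small step size: variance dominates
    large = mse_bound(0.1, K, H_, L_, sigma_)   # large step size: bias dominates
    print(K, round(small, 4), round(large, 4))
```

With the small step size, growing K keeps shrinking the bound; with the large step size, the γ²-dependent bias dominates and adding workers barely helps, which is the trade-off the text describes.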
The extent of this diffusion depends on the number of local steps H and, crucially, the learning rate γ used in the preceding steps. The published convergence theory for Local SGD assumes restrictions on the learning rate γ which control the bias term such that the error is dominated by the variance term and its 1/K behavior. Therefore, those results have to be understood in the following way: for a given value of H, there is a small enough learning rate γ at which Local SGD converges at the same speed as synchronous data-parallel SGD. However, this learning rate will not be optimal, neither for synchronous data-parallel SGD nor for Local SGD. Equation (6) shows that Local SGD faces a fundamentally different trade-off when setting the learning rate compared to synchronous data-parallel SGD. Increasing K decreases the variance, which warrants a learning rate increase. However, a larger learning rate increases the bias term. The poor practical performance of Local SGD for large values of H observed in empirical studies (Lin et al., 2020; Ortiz et al., 2021) can in part be explained by this: since they adopt the linear scaling rule for Local SGD, for large values of K and H, the bias term will dominate the total error in Equation (6). The same argument explains the observations of Wang & Joshi (2019): when the learning rate is fixed, the only way of reducing the gradient error (and therefore facilitating convergence) is to reduce the communication interval H, leading to their heuristic. In the next section, we propose a learning rate scaling rule for Local SGD that directly tackles the trade-off surfaced in Equation (6).

## 5 Adaptive Learning Rate Scaling For Synchronous And Local SGD

To gauge the real potential of Local SGD, we need to be able to scale its learning rate appropriately. We have seen in the previous section that Local SGD faces a fundamentally different trade-off with respect to the learning rate, compared to synchronous SGD.
In this section, we first derive an optimal learning rate scaling technique for SGD. We then carry over the ideas to derive an optimal learning rate scaling technique for Local SGD.

## 5.1 Optimal Learning Rate Scaling For SGD - AdaScale

From the assumption of Lipschitz gradients in §3.1, f satisfies

$$\mathbb{E}[f(\mathbf{w}_{t+1})]\leq f(\mathbf{w}_{t})-\gamma_{t}\nabla f_{t}^{T}\cdot\mathbb{E}[\mathbf{g}_{t}]+\frac{L\gamma_{t}^{2}}{2}\mathbb{E}[\|\mathbf{g}_{t}\|^{2}]\tag{7}$$

for two consecutive iterates $\mathbf{w}_{t},\mathbf{w}_{t+1}$ with an SGD step. The optimal learning rate that maximizes the expected decrease over one step is given by

$$\gamma_{t}=\frac{1}{L}\cdot\frac{\|\nabla f_{t}\|^{2}}{\|\nabla f_{t}\|^{2}+\sigma_{t}^{2}}.\tag{8}$$

This optimal learning rate requires L, which is difficult to estimate and thus not used in practice. Consequently, when the variance of our gradient estimate is reduced by a factor of 1/K due to an increase in the number of workers to K, the optimal learning rate changes by the following factor:

$$r_{t}=\frac{\|\nabla f_{t}\|^{2}+\sigma_{t}^{2}}{\|\nabla f_{t}\|^{2}+\frac{\sigma_{t}^{2}}{K}}\in[1,K).\tag{9}$$

This is the *gain ratio* proposed by Johnson et al. (2020). Thus, we reinterpret the AdaScale gain ratio as the ratio of optimal learning rates for the case of K workers to 1 worker. We present a detailed derivation in Appendix B. For $\|\nabla f_{t}\|^{2}\ll\sigma_{t}^{2}$, we recover the linear scaling rule of Goyal et al. (2018) with a gain ratio of $r_{t}\approx K$. In practice, however, the gain ratio will be smaller than K. When $\|\nabla f_{t}\|^{2}\gg\sigma_{t}^{2}$, the gradient estimate is already very accurate, and the gain ratio is close to 1, implying that using a higher number of workers has no benefit.
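The gain ratio in Equation (9) is straightforward to compute once ∥∇f_t∥² and σ_t² are estimated. The sketch below forms standard unbiased estimates from K per-worker gradients; the function and variable names are ours, and this is only a sketch of the idea, not AdaScale's actual (moving-average based) estimation code.

```python
def gain_ratio(worker_grads):
    """Estimate r_t = (|grad|^2 + var) / (|grad|^2 + var/K) from K per-worker gradients."""
    K = len(worker_grads)
    d = len(worker_grads[0])
    mean = [sum(g[j] for g in worker_grads) / K for j in range(d)]
    # Unbiased variance estimate: (1/(K-1)) * sum_k ||g_k - mean||^2
    var = sum(sum((g[j] - mean[j]) ** 2 for j in range(d)) for g in worker_grads) / (K - 1)
    # E||mean||^2 = ||grad f||^2 + var/K, so subtract var/K (clipped at 0) to debias.
    mean_sq = sum(m * m for m in mean)
    grad_sq = max(mean_sq - var / K, 0.0)
    return (grad_sq + var) / (grad_sq + var / K)

# Noise-dominated gradients (mean ~ 0) -> r_t approaches K (linear scaling regime).
noisy = [[1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [-1.0, -1.0]]
print(gain_ratio(noisy))  # 4.0
```

In the opposite regime, when all workers report (nearly) identical gradients, the variance estimate vanishes and the gain ratio collapses to 1, matching the two limiting cases discussed above.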
This is related to the concept of a critical batch size (McCandlish et al., 2018), which is the point after which a further increase in batch size has diminishing returns in terms of convergence speed.

In addition to using $r_t$ as a scaling factor for the learning rate, Johnson et al. (2020) propose the concept of *scale-invariant iterations*. When using the standard linear scaling rule (Goyal et al., 2018), the iteration counter is incremented by a factor of K for a forward-backward pass. However, AdaScale interprets the gain ratio $r_t$ as the effective number of workers used at iteration t. Incorporating this into the training process, they maintain an accumulator $s_t = \sum_{t'=0}^{t-1} r_{t'}$, which replaces the standard iteration counter. Since $r_t < K$, the use of scale-invariant iterations increases the total number of passes over the training set (true epochs) and "stretches" the learning rate schedule accordingly; see Lines 1 and 7 of Algorithm 2 in the Appendix. Thus, scaling to a larger number of workers may necessitate more true epochs but will typically still result in a substantial wall-clock time speedup, as shown by Johnson et al. (2020). AdaScale estimates $\|\nabla f_t\|^2$ and $\sigma_t^2$ from the K iid per-worker gradients. We adopt their estimation procedure for our scaling method for Local SGD.

## 5.2 Optimal Learning Rate Scaling for Local SGD - LocalAdaScale

We now derive a learning rate scaling method for Local SGD, named LocalAdaScale, using the same principle underlying our derivation of AdaScale. We first derive an optimal step size for Local SGD, depending on the number of workers K and local steps H. A gain ratio is obtained by dividing by the optimal step size for the base case (K = H = 1) and will be used exactly as in AdaScale, both as a scaling factor for the learning rate and as the basis for a scale-invariant iteration counter.
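The scale-invariant iteration counter described in §5.1 can be sketched as follows; the step-decay schedule and the constant gain value are illustrative stand-ins (in AdaScale, the gain is re-estimated at every step):

```python
# Sketch of scale-invariant iterations: the accumulator s (not the true
# step counter t) drives the learning rate schedule and the stopping
# criterion. Schedule and gain value are illustrative stand-ins.
def lr_schedule(s, base_lr=0.1, decay_every=1000):
    return base_lr * (0.1 ** int(s // decay_every))

S = 2000      # budget in scale-invariant iterations
gain = 1.25   # stand-in for the estimated gain ratio r_t (< K)
s, t = 0.0, 0
while s < S:
    lr = gain * lr_schedule(s)  # scaled learning rate for this true step
    # ... perform the SGD step with lr ...
    s += gain
    t += 1

# Linear scaling with K = 8 would advance the counter by 8 per step and
# finish in S / 8 = 250 true steps; advancing by the smaller gain
# "stretches" the schedule to S / 1.25 true steps.
print(t)  # 1600
```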
**Optimal step size for Local SGD** Finding the optimal step size for Local SGD is more complicated than for synchronous data-parallel SGD, since the step size used in previous steps influences the quality of the (implicit) gradient estimate used in the current step. To simplify this, we adopt Assumption 4.1 from §4 and consider H consecutive steps using a constant step size. As we show in Appendix C.2, these steps lead to an expected decrease in function value, which is bounded as follows:

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]\leq f(\bar{\mathbf{w}}_{t})-H\left(\frac{\gamma}{2}\bar{G}_{t}+\frac{\gamma}{2}\bar{A}_{t}-\frac{\gamma^{2}L}{2}\bar{A}_{t}-\frac{\gamma^{2}L}{2}\frac{\bar{\sigma}_{t}^{2}}{K}-\frac{\gamma^{3}L^{2}}{4}(H-1)\bar{\sigma}_{t}^{2}\right),\tag{10}$$

where

$$\bar{G}_{t}=\frac{1}{H}\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[\|\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}],\qquad\bar{A}_{t}=\frac{1}{H}\sum_{t'=t}^{t+H-1}\frac{1}{K}\sum_{k}\mathbb{E}_{t}[\|\nabla f(\mathbf{w}_{t'}^{k})\|^{2}].\tag{11}$$

This expected function decrease bound features the cumbersome terms $\bar{G}_t$ and $\bar{A}_t$, which are the expected squared gradient magnitudes along the trajectories of the virtual averaged iterates and of the per-worker iterates, respectively. These quantities are, in principle, dependent on γ, since the choice of step size influences the gradient magnitude along that trajectory. We ignore this secondary effect by assuming $\bar{A}_t \approx \|\nabla f(\bar{\mathbf{w}}_t)\|^2 \approx \bar{G}_t$, independent of γ. This is fulfilled if the gradient magnitude stays approximately constant over the H subsequent steps. We verify this experimentally in Figure 7 in the Appendix. Using this assumption gives us an *approximate* upper bound:

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]\;\lessapprox\;f(\bar{\mathbf{w}}_{t})-H\underbrace{\left(\gamma\bar{G}_{t}-\frac{\gamma^{2}L}{2}\bar{G}_{t}-\frac{\gamma^{2}L}{2}\frac{\bar{\sigma}_{t}^{2}}{K}-\frac{\gamma^{3}L^{2}}{4}(H-1)\bar{\sigma}_{t}^{2}\right)}_{\bowtie}.\tag{12}$$

In the expected decrease bound in Equation (10), the bias discussed in §4 manifests as the γ³ term. To illustrate that, Appendix D provides a similar H-step bound for synchronous data-parallel SGD, where this term is absent.
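As a sanity check on the derivation, the bracketed per-step decrease term in Equation (12) can be maximized numerically over γ and compared with the closed-form maximizer of Equation (13); the constants below are arbitrary illustrative values:

```python
import math

# Illustrative values for the quantities appearing in Equation (12).
L, G, sigma2, K, H = 1.0, 1.0, 4.0, 8, 16

def decrease(gamma):
    """Bracketed per-step decrease term in Equation (12)."""
    return (gamma * G
            - gamma**2 * L / 2 * G
            - gamma**2 * L / 2 * sigma2 / K
            - gamma**3 * L**2 / 4 * (H - 1) * sigma2)

# Grid search for the maximizer.
grid = [i * 1e-5 for i in range(1, 100_000)]
gamma_numeric = max(grid, key=decrease)

# Closed-form maximizer of Equation (13).
a = G + sigma2 / K
gamma_closed = (1 / L) * 2 * G / (a + math.sqrt(a**2 + 3 * (H - 1) * G * sigma2))

print(gamma_numeric, gamma_closed)  # agree up to the grid resolution
```

Setting the derivative of the bracket to zero gives a quadratic in γ whose positive root, after rationalizing, is exactly Equation (13).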
We derive an approximately optimal step size for Local SGD by maximizing the term $\bowtie$ in Equation (12), leading to

$$\gamma_{t}=\frac{1}{L}\cdot\frac{2\bar{G}_{t}}{\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}+\sqrt{\left(\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}\right)^{2}+3(H-1)\bar{G}_{t}\bar{\sigma}_{t}^{2}}}.\tag{13}$$

See Appendix C.3 for a derivation.

**Gain ratio** Analogous to our derivation of AdaScale, we can now define a gain ratio by dividing the optimal step size in Equation (13) by the base case (H = K = 1):

$$\rho_{t}=\frac{2\left(\bar{G}_{t}+\bar{\sigma}_{t}^{2}\right)}{\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}+\sqrt{\left(\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}\right)^{2}+3(H-1)\bar{G}_{t}\bar{\sigma}_{t}^{2}}}.\tag{14}$$

When H = 1, this gain ratio recovers AdaScale. To illustrate the difference in behavior between AdaScale and LocalAdaScale, we plot the gain ratio ρ in Figure 3 for increasing values of $\bar{\sigma}^2/\bar{G}$. The larger H, the smaller the gain ratio at any given value of $\bar{\sigma}^2/\bar{G}$. This is explained by the fact that the gradient estimates get worse with H, as seen in Proposition 4.2. Looking at the inset plot: in the deterministic case, i.e., when there is no variance ($\bar{\sigma}^2 \to 0$) and thus no advantage to using multiple workers, we see that ρ = 1 for both AdaScale and LocalAdaScale. For a large variance $\bar{\sigma}^2 \gg \bar{G}$, the gain ratio approaches K for all values of H, recovering the linear scaling rule. However, for larger H, it takes substantially higher values of $\bar{\sigma}^2/\bar{G}$ for that regime to be reached.

**Algorithm 1** Automatic learning rate scaling for Local SGD - LocalAdaScale.

**Input:** Initialization $\mathbf{w}_0$, step size $\gamma_t$, #workers K, #local steps H, scale-invariant budget S, $t = 0$, $s = 0$, grad_cache = [], $\rho \gets 1$.

1: **while** $s \leq S$ **do** ▷ Scale-invariant budget not exhausted.
2: &emsp;**for** $k \in [K]$ **do** ▷ On each worker.
3: &emsp;&emsp;Compute $\mathbf{g}_t^k$ using a batch of data. ▷ Gradient at t.
4: &emsp;&emsp;**if** $H \mid t$ **then** ▷ One step after model sync.
5: &emsp;&emsp;&emsp;grad_cache[k] $\gets \mathbf{g}_t^k$ ▷ Save gradient.
6: &emsp;&emsp;**if** $H \mid (t + 1)$ **then** ▷ Average every H steps.
7: &emsp;&emsp;&emsp;$\mathbf{w}_t^k \gets \frac{1}{K}\sum_{j=1}^K \mathbf{w}_t^j$
8: &emsp;&emsp;&emsp;$\bar{G}_t, \bar{\sigma}_t^2 \gets$ grad_stats(grad_cache) ▷ Eq. (15).
9: &emsp;&emsp;&emsp;Compute ρ as in Equation (14).
10: &emsp;&emsp;$\mathbf{w}_{t+1}^k \gets \mathbf{w}_t^k - \rho\,\gamma_s\,\mathbf{g}_t^k$ ▷ Local update.
11: &emsp;&emsp;$s \gets s + \rho$
12: &emsp;&emsp;$t \gets t + 1$
13: **return** the last iterate $\mathbf{w}_t$.

![7_image_0.png](7_image_0.png)

Figure 3: LocalAdaScale gain ratio from Equation (14) with K = 8. We plot $\bar{\sigma}^2/\bar{G}$ on the x-axis and the gain ratio on the y-axis. In the inset, we show the plot for very small values of the ratio $\bar{\sigma}^2/\bar{G}$. We see that synchronous SGD (AdaScale, H = 1) and Local SGD (LocalAdaScale, H > 1) have different scaling behavior. AdaScale reaches its maximum gain factor for much lower values of $\bar{\sigma}^2/\bar{G}$ than LocalAdaScale. In practice, this translates to Local SGD needing a lower learning rate and more iterations than SGD. When this is violated, Local SGD exhibits poorer convergence behavior, further evidencing the arguments in §4.

**Implementing LocalAdaScale** To implement LocalAdaScale, we need to estimate $\bar{G}_t$ and $\bar{\sigma}_t^2$. We take the approach of estimating these quantities only at synchronization points, and then using the resulting gain throughout the following H local steps. This is in line with our assumption that gradient magnitude and variance are approximately constant over H steps. At a synchronization point t, we have access to K per-worker gradients computed at the *same* location.
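A direct transcription of Equation (14) makes the limiting behavior easy to check: with H = 1 it coincides with the AdaScale gain ratio of Equation (9), and the gain shrinks as H grows:

```python
import math

def local_adascale_gain(G, sigma2, K, H):
    """Gain ratio rho_t of Equation (14) for K workers and H local steps."""
    a = G + sigma2 / K
    return 2 * (G + sigma2) / (a + math.sqrt(a**2 + 3 * (H - 1) * G * sigma2))

G, sigma2, K = 1.0, 4.0, 8
adascale = (G + sigma2) / (G + sigma2 / K)       # Equation (9)
print(local_adascale_gain(G, sigma2, K, H=1))    # equals the AdaScale gain
print([local_adascale_gain(G, sigma2, K, H) for H in (1, 4, 16, 64)])
# The gain decreases monotonically with H, as in Figure 3.
```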
This allows us to estimate gradient variance and mean exactly as in AdaScale:

$$\bar{\sigma}_{t}^{2}\approx\frac{1}{K-1}\sum_{k=1}^{K}\|\mathbf{g}_{t}^{k}\|^{2}-\frac{K}{K-1}\|\bar{\mathbf{g}}_{t}\|^{2},\qquad\bar{G}_{t}\approx\|\bar{\mathbf{g}}_{t}\|^{2}-\frac{1}{K}\bar{\sigma}_{t}^{2}.\tag{15}$$

Following AdaScale, these estimates are smoothed using an exponential moving average for additional stability. The algorithms LocalAdaScale and AdaScale are summarized in Algorithm 1, as LocalAdaScale is equivalent to AdaScale when H = 1. When using Local SGD, computing the estimates in Equation (15) requires the synchronization of gradients in addition to the weights, doubling the amount of data communicated. We partially alleviate this issue as follows. We cache the gradients computed right after synchronization and delay their communication (and thereby the computation of the gain ratio) until the *next* synchronization step.¹ This way, we can synchronize weights and gradients simultaneously and only incur the *latency* overhead once. Of course, this does not alleviate communication time incurred due to *bandwidth* limitations. Note, however, that we devised LocalAdaScale primarily as a tool for the analysis of Local SGD and did not further optimize our implementation. In future work, one could devise approximate versions of LocalAdaScale that avoid additional communication.

## 6 Experiments

Our experiments compare Local SGD to the gradient accumulation baseline under two scaling approaches, amounting to a total of four different methods. We compare these methods under identical numbers of workers (K) and communication/synchronization intervals (H):

1. Gradient accumulation of H steps with linear scaling (Acc-Lin).
2. Gradient accumulation of H steps with AdaScale (Acc-Ada).
3. Local SGD with H local steps with linear scaling (Local-Lin).
4. Local SGD with H local steps with adaptive scaling (Local-Ada), the method derived in §5.
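Before turning to the results, the estimation step of Equation (15) (the `grad_stats` call in Line 8 of Algorithm 1) can be transcribed directly; for illustration, gradients are plain Python lists of floats here rather than framework tensors:

```python
def grad_stats(grads):
    """Estimate sigma_bar^2 and G_bar as in Equation (15) from K
    per-worker gradients computed at the same iterate."""
    K = len(grads)
    dim = len(grads[0])
    g_bar = [sum(g[i] for g in grads) / K for i in range(dim)]
    sq = lambda v: sum(x * x for x in v)           # squared Euclidean norm
    sigma2 = sum(sq(g) for g in grads) / (K - 1) - K / (K - 1) * sq(g_bar)
    G = sq(g_bar) - sigma2 / K
    return G, sigma2

# Identical per-worker gradients: zero variance, G_bar equals ||g||^2.
print(grad_stats([[3.0, 4.0]] * 4))  # variance ≈ 0, G_bar ≈ 25
```

The $-\bar{\sigma}_t^2/K$ correction in $\bar{G}_t$ removes the upward bias of $\|\bar{\mathbf{g}}_t\|^2$ as an estimate of the true squared gradient norm.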
These four methods process the same number of samples between two consecutive synchronizations. The linear scaling variants perform exactly n true epochs, where n is set based on prior work for each model/dataset combination (see Appendix H.1). The adaptive scaling variants are allocated a budget of n *scale-invariant* epochs. Recall from §5.1 that this results in a variable number of true epochs. For linear scaling, we follow prior work and perform a linear warmup from $\gamma_{\text{base}}$ to the final value of $K\gamma_{\text{base}}$ for 5% of the total iteration budget. For AdaScale, we use the implementation from FairScale (FairScale authors, 2021), which supports gradient accumulation. We compare these methods at different values for the number of workers K, as well as the communication interval H. The latter is used as the number of local steps or accumulation steps, respectively. We train a ResNet-18 (He et al., 2016) on CIFAR-10 (Krizhevsky, 2009), a WideResNet-28-2 (Zagoruyko & Komodakis, 2016) on ImageNet-32 (Chrabaszcz et al., 2017), and a ResNet-50 on ImageNet (Deng et al., 2009; Russakovsky et al., 2015). We use base learning rate schedules from previously published works, which are well-tuned to the respective architecture and dataset and which we take to be the optimal learning rates for the base case, i.e., K = 1, H = 1. Note that we are comparing learning rate *scaling* techniques, not specific learning rate schedules. Additional details of our experimental setup can be found in Appendix H.1.

¹In Appendix E, we perform an ablation study showing that this delayed communication of gradients does not alter the behavior of the method significantly.

## 6.1 Local SGD with Automatic Learning Rate Scaling: LocalAdaScale Maintains Target Accuracy

In Figure 4, we show the test accuracy reached by the four methods. The target performance for each experiment is the performance we get when training on one worker (K = 1) with the standard hyperparameter settings (see Appendix H).
For CIFAR-10, it is a top-1 accuracy of 93%; for ImageNet-32, a top-5 accuracy of 69%; and for ImageNet, a top-5 accuracy of 93%. See Appendix H.1 for attributions. We see that Acc-Lin breaks down very quickly for all datasets. Performance starts dropping when the total scaling (K × H) exceeds 32, in line with the findings of Goyal et al. (2018). This is overcome by using AdaScale (Acc-Ada), which maintains the target accuracy even for large H across all our experiments. In the Local SGD family, Local-Lin performs considerably better, but we still see notable drops in test accuracy around H ≥ 8. Finally, adaptive learning rate scaling for Local SGD maintains the target performance across all K and H considered. Thus, we have demonstrated that appropriate learning rate scaling can fix the performance gap of Local SGD observed in previous works using Local-Lin (Lin et al., 2020; Ortiz et al., 2021).

## 6.2 Should I Use Local SGD?

We have established with the results in Figure 4 that both Local SGD and gradient accumulation can maintain high accuracy in the presence of appropriate learning rate scaling. As described in §5, Acc-Ada and Local-Ada not only modulate the learning rate but also increase the number of iterations by using scale-invariant epochs. Therefore, the question arises whether the reduced communication time makes up for the increased number of iterations. Our experiments help answer this question. In Figure 5, we compare the number of iterations required by each method. We see that, as H increases, both adaptive scaling methods increase the number of iterations quite drastically. For H < 8, Acc-Ada converges in slightly fewer iterations than Local-Ada, whereas Local-Ada is more iteration-efficient for more infrequent communication (larger H). See also the tabulated results in Appendix H for details. But which H should we choose in practice?
Answering this question requires us to make assumptions about the relative cost of communication and computation. We let m denote the relative communication overhead, i.e., the ratio of the time taken for one communication round to the time taken for one minibatch gradient computation without any synchronization. This is determined by the hardware and, therefore, beyond our control on the algorithmic side. Given the total number of epochs $n^{(K,H)}$ observed in our experiments for Local-Ada and Acc-Ada (Figure 5), we compute a pseudo-wall-clock training time as follows:

$$T^{(K,H)}\;\propto\;n^{(K,H)}\cdot\Big(1+\underbrace{\frac{m}{H}}_{\text{communication}}\Big).\tag{16}$$

In Equation (16), we do not show the dependence on per-device batch size B and dataset size explicitly in the term for the number of iterations, as they are constant scalars across methods. We plot this wall-clock time in Figure 6 for CIFAR-10 and ImageNet-32 at K = 24 and H ∈ {1, 2, 8}; additional results may be found in Appendix H.3. For CIFAR-10, we see that SGD is faster than Local SGD for all H when m ≤ 3, and Local SGD otherwise. However, a different picture emerges for ImageNet-32: small values of H (say 2, 8) for both gradient accumulation and Local SGD result in faster convergence than SGD. This is possibly because the overall batch size (B × 24) is smaller than the critical batch size, and thus gradient accumulation results in faster convergence. Local SGD is faster than SGD (for some H) when the dotted lines are below the solid ones in Figure 6, and we see that Local SGD is faster than SGD (with gradient accumulation) only when m ≥ 4. Additional plots may be found in Figure 11 in the Appendix. Such large values of m are atypical in most training setups. Ortiz et al. (2021) show that even when using K = 256, m stays around 3.
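The pseudo-wall-clock comparison can be sketched in a few lines; the epoch counts below are illustrative stand-ins, not the measured values of Figure 5:

```python
def pseudo_wall_clock(n_epochs, H, m):
    """Pseudo-wall-clock time in the spirit of Equation (16): iterations
    (proportional to epochs) times the per-iteration cost, with one
    communication round of cost m amortized over H steps."""
    return n_epochs * (1 + m / H)

# Illustrative epoch counts: adaptive scaling stretches Local SGD's
# schedule (more epochs) relative to synchronous SGD.
epochs_sync, epochs_local = 100, 160
for m in (0.5, 4):
    t_sync = pseudo_wall_clock(epochs_sync, H=1, m=m)
    t_local = pseudo_wall_clock(epochs_local, H=8, m=m)
    print(f"m={m}: sync={t_sync:.0f}, local={t_local:.0f}")
# Low overhead (m = 0.5): synchronous SGD wins (150 vs 170).
# High overhead (m = 4):  Local SGD wins (500 vs 240).
```

The crossover point depends on how much the schedule is stretched: Local SGD only pays off once the amortized communication savings exceed the cost of its extra epochs.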
![10_image_0.png](10_image_0.png)

Figure 4: Test accuracies achieved for different numbers of workers (K) and communication intervals (H). Linear scaling with gradient accumulation deteriorates quickly. Linear scaling with Local SGD is more robust but suffers for large values of H. Using adaptive scaling, both gradient accumulation and Local SGD maintain the target performance across a large range of values for K and H.

![10_image_1.png](10_image_1.png)

Figure 5: Number of epochs used by each method. While linear scaling operates under a fixed budget, the adaptive scaling methods *stretch* the learning rate schedule, which increases the total number of epochs. For values of H ≤ 8, Acc-Ada uses slightly fewer epochs than Local-Ada. For large values of H, Local-Ada is drastically more iteration-efficient.

A substantially larger communication overhead is plausible for some setups, e.g., crowd-sourced or volunteer computing (Ryabinin & Gusev, 2020). However, such scenarios come with a host of other issues, such as stragglers and fault tolerance, that have propelled their own lines of research (Learning@home team, 2020; Borzunov et al., 2022a;b; Ryabinin et al., 2021; Blanchard et al., 2017). Therefore, our findings cast doubt on the practical utility of Local SGD in commonplace distributed training environments.

## 7 Conclusion

**Summary of Findings** The previously published literature on Local SGD suffers from a certain discrepancy between theory and practice. Previous theoretical results have been interpreted as: "Local SGD behaves just like synchronous data-parallel SGD". In contrast to that, empirical studies have reported performance degradation when using Local SGD compared to synchronous SGD. We show that this discrepancy may be attributed to the choice of learning rate. Theoretical results have assumed restrictive upper bounds on the learning rate. This recovers the behavior of synchronous SGD at the same learning rate, but that learning rate is clearly non-optimal.
Empirical works have used learning rate scaling techniques tailored to synchronous SGD. We show that this is likewise non-optimal, since Local SGD faces a fundamentally different trade-off than synchronous data-parallel SGD: the error of its gradient estimate depends on the learning rate itself.

![11_image_0.png](11_image_0.png)

Figure 6: Which method is faster? We plot pseudo-wall-clock time T on the y-axis for different assumed values of the relative communication overhead m; see Eq. (16). We see that LocalAdaScale (using large values of H) converges faster for high communication overheads. For lower m, synchronous SGD with few gradient accumulation steps is preferable.

To further study the behavior of Local SGD, we devise an optimal learning rate scaling method, called LocalAdaScale, mirroring the AdaScale method for synchronous SGD. Our experiments demonstrate that LocalAdaScale bridges the gap in performance between Local SGD and synchronous SGD and that Local SGD converges in fewer iterations than gradient accumulation for large communication intervals. However, in wall-clock time, we find that Local SGD is faster than synchronous data-parallel SGD only for very high communication overheads, shedding new light on the practicality of Local SGD.

**Limitations and Future Work** The optimality of the learning rate in this work is based on the training loss and not the test loss. Thus, the claims of optimality have to be seen in the limited context of training performance and optimization, not generalization. Several techniques that target improved generalization (Orvieto et al., 2022) have been investigated, and studying them for Local SGD is beyond the scope of the current work. Our paper is limited to analyzing the effect of the learning rate while keeping other hyperparameters fixed. Tuning other hyperparameters, such as momentum, may be useful to improve convergence or generalization. We also performed our comparison at a fixed communication interval H.
A comparison of Local SGD vs. gradient accumulation under varying H (e.g., as in post-local SGD (Lin et al., 2020)) may be interesting. The optimal step size for Local SGD in Equation (13) involves multiple approximations. Firstly, like AdaScale, it is based on a Lipschitz bound, which may be loose for some objective functions. Secondly, we assume the gradient magnitude along H consecutive Local SGD steps to be approximately constant. Finally, we estimate the gradient magnitude using imperfect empirical estimates, rendering our learning rate scaling only approximately optimal. We have restricted our analysis to the case of a homogeneous data distribution on each worker. An extension of our learning rate scaling technique to the federated learning scenario with heterogeneous data would be interesting future work. Our experiments are limited to computer vision tasks and to models based on convolutional nets. Thus, our findings on the value of Local SGD might be limited to those conditions; broader generalizations require further experimentation. As discussed in §5, our implementation of LocalAdaScale synchronizes both the model weights and gradients, which *adds* communication compared to plain Local SGD. We did not attempt to alleviate this, since we devised LocalAdaScale primarily as a tool for the analysis of Local SGD. In future work, one could devise approximate versions of LocalAdaScale that avoid additional communication. A possible avenue would be to approximate the gradient magnitude and variance from the *pseudo-gradients* (Reddi et al., 2021) given by the displacement $\mathbf{w}_{t+H} - \mathbf{w}_t$. Beyond that, our work can be extended in several ways. While we have compared gradient accumulation and local steps as alternatives, one may realize a desired communication interval H using a combination of both. (For example, 8 local steps, each using 4 gradient accumulations, achieves a communication interval of 32.)
Relatedly, one may alter the communication interval for different phases of the optimization process. Both decisions will influence learning rate scaling and may, in turn, be informed by gradient statistics.

## References

Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Communication-efficient SGD via gradient quantization and encoding. *Advances in Neural Information Processing Systems*, 30:1709–1720, 2017.

Lukas Balles, Javier Romero, and Philipp Hennig. Coupling adaptive batch sizes with learning rates. In Gal Elidan, Kristian Kersting, and Alexander T. Ihler (eds.), *Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence, UAI 2017, Sydney, Australia, August 11-15, 2017*. AUAI Press, 2017. URL http://auai.org/uai2017/proceedings/papers/141.pdf.

Debraj Basu, Deepesh Data, Can Karakus, and Suhas Diggavi. Qsparse-local-SGD: Distributed SGD with quantization, sparsification, and local computations. *arXiv:1906.02367 [cs, math, stat]*, November 2019.

Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, and Julien Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. *Advances in Neural Information Processing Systems*, 30, 2017.

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative inference and fine-tuning of large models. *arXiv preprint arXiv:2209.01188*, 2022a. URL https://arxiv.org/abs/2209.01188.

Alexander Borzunov, Max Ryabinin, Tim Dettmers, Quentin Lhoest, Lucile Saulnier, Michael Diskin, Yacine Jernite, and Thomas Wolf. Training transformers together. In *NeurIPS 2021 Competitions and Demonstrations Track*, pp. 335–342. PMLR, 2022b.

Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. *SIAM Review*, 60(2):223–311, 2018. doi: 10.1137/16M1080173. URL https://doi.org/10.1137/16M1080173.
Xuanyu Cao, Tamer Başar, Suhas Diggavi, Yonina C. Eldar, Khaled B. Letaief, H. Vincent Poor, and Junshan Zhang. Communication-efficient distributed learning: An overview. *IEEE Journal on Selected Areas in Communications*, 41(4):851–873, 2023. doi: 10.1109/JSAC.2023.3242710.

Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, and Virginia Smith. On large-cohort training for federated learning. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, 2021. URL https://openreview.net/forum?id=Kb26p7chwhf.

Kai Chen and Qiang Huo. Scalable training of deep learning machines by incremental block training with intra-block parallel optimization and blockwise model-update filtering. In *2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 5880–5884. IEEE, 2016.

Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. *arXiv preprint arXiv:1707.08819*, 2017.

Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction using mini-batches. *Journal of Machine Learning Research*, 13(1), 2012.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In *CVPR09*, 2009.

Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Local SGD optimizes overparameterized neural networks in polynomial time. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera (eds.), *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics*, volume 151 of *Proceedings of Machine Learning Research*, pp. 6840–6861. PMLR, 28–30 Mar 2022. URL https://proceedings.mlr.press/v151/deng22a.html.

FairScale authors. FairScale: A general purpose modular PyTorch library for high performance and large scale training. https://github.com/facebookresearch/fairscale, 2021.
Jonas Geiping, Micah Goldblum, Phil Pope, Michael Moeller, and Tom Goldstein. Stochastic training is not necessary for generalization. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=ZBESeIUB5k.

Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. *arXiv:1706.02677 [cs]*, April 2018.

Xinran Gu, Kaifeng Lyu, Longbo Huang, and Sanjeev Arora. Why (and when) does local SGD generalize better than SGD? In *The Eleventh International Conference on Learning Representations*, 2023. URL https://openreview.net/forum?id=svCcui6Drl.

Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, and Viveck Cadambe. Local SGD with periodic averaging: Tighter analysis and adaptive synchronization. In *Advances in Neural Information Processing Systems*, pp. 11080–11092, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016*, pp. 770–778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.

Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: Closing the generalization gap in large batch training of neural networks. In *Proceedings of the 31st International Conference on Neural Information Processing Systems*, NIPS'17, pp. 1729–1739, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.

Tyler Johnson, Pulkit Agrawal, Haijie Gu, and Carlos Guestrin. AdaScale SGD: A user-friendly algorithm for distributed training. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 4911–4920.
PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/johnson20a.html.

Ahmed Khaled, Konstantin Mishchenko, and Peter Richtarik. Tighter theory for local SGD on identical and heterogeneous data. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*, pp. 4519–4529. PMLR, 26–28 Aug 2020. URL https://proceedings.mlr.press/v108/bayoumi20a.html.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), *3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings*, 2015. URL http://arxiv.org/abs/1412.6980.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. *arXiv preprint arXiv:1404.5997*, 2014.

Learning@home team. Hivemind: A library for decentralized deep learning. https://github.com/learning-at-home/hivemind, 2020.

Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, and Martin Jaggi. Don't use large mini-batches, use local SGD. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=B1eyO1BFPr.

Saeed Maleki, Madan Musuvathi, Todd Mytkowicz, Olli Saarikivi, Tianju Xu, Vadim Eksarevskiy, Jaliya Ekanayake, and Emad Barsoum. Scaling distributed training with adaptive summation. *Proceedings of Machine Learning and Systems*, 3, 2021.

Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. *arXiv:1812.06162 [cs, stat]*, December 2018.

David E Muller. A method for solving algebraic equations using an automatic computer. *Mathematical Tables and Other Aids to Computation*, 10(56):208–215, 1956.

Tomoya Murata and Taiji Suzuki.
Bias-variance reduced local SGD for less heterogeneous federated learning. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 7872–7881. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/murata21a.html.

Zachary Nado, Justin M. Gilmer, Christopher J. Shallue, Rohan Anil, and George E. Dahl. A large batch optimizer reality check: Traditional, generic optimizers suffice across batch sizes. *arXiv:2102.06356 [cs, stat]*, June 2021.

Jose Javier Gonzalez Ortiz, Jonathan Frankle, Mike Rabbat, Ari Morcos, and Nicolas Ballas. Trade-offs of local SGD at scale: An empirical study. *arXiv preprint arXiv:2110.08133*, 2021.

Antonio Orvieto, Hans Kersting, Frank Proske, Francis Bach, and Aurelien Lucchi. Anticorrelated noise injection for improved generalization. In *International Conference on Machine Learning*, pp. 17094–17116. PMLR, 2022.

Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur. Parallel training of DNNs with natural gradient and parameter averaging. *arXiv preprint arXiv:1410.7455*, 2014.

Ning Qian. On the momentum term in gradient descent learning algorithms. *Neural Networks*, 12(1):145–151, 1999.

Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. *Advances in Neural Information Processing Systems*, 24, 2011.

Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and Hugh Brendan McMahan. Adaptive federated optimization. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=LkFG3lB13U5.

Herbert Robbins and Sutton Monro. A stochastic approximation method. *The Annals of Mathematical Statistics*, pp. 400–407, 1951.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. *International Journal of Computer Vision*, 115(3):211–252, 2015.

Max Ryabinin and Anton Gusev. Towards crowdsourced training of large neural networks using decentralized mixture-of-experts. *Advances in Neural Information Processing Systems*, 33:3659–3672, 2020.

Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, and Gennady Pekhimenko. Moshpit SGD: Communication-efficient decentralized training on heterogeneous unreliable devices. *Advances in Neural Information Processing Systems*, 34:18195–18211, 2021.

Navjot Singh, Deepesh Data, Jemin George, and Suhas Diggavi. SQuARM-SGD: Communication-efficient momentum SGD for decentralized optimization. *IEEE Journal on Selected Areas in Information Theory*, 2(3):954–969, 2021.

Leslie N Smith. Cyclical learning rates for training neural networks. In *2017 IEEE Winter Conference on Applications of Computer Vision (WACV)*, pp. 464–472. IEEE, 2017.

Sebastian U. Stich. Local SGD converges fast and communicates little. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=S1g2JnRcFX.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In *International Conference on Machine Learning*, pp. 1139–1147. PMLR, 2013.

Jianyu Wang and Gauri Joshi. Adaptive communication strategies to achieve the best error-runtime trade-off in local-update SGD. *Proceedings of Machine Learning and Systems*, 1:212–229, 2019.

Jianyu Wang, Vinayak Tantia, Nicolas Ballas, and Michael Rabbat. SlowMo: Improving communication-efficient distributed SGD with slow momentum. 2020. URL https://openreview.net/forum?id=SkxJ8REYPH.

Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H.
Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, and Wennan Zhu. A Field Guide to Federated Optimization. *arXiv:2107.06917 [cs]*, July 2021. 4 Blake Woodworth, Kumar Kshitij Patel, Sebastian Stich, Zhen Dai, Brian Bullins, Brendan Mcmahan, Ohad Shamir, and Nathan Srebro. Is local SGD better than minibatch SGD? In *Proceedings of the 37th* International Conference on Machine Learning, pp. 10334–10343. PMLR, November 2020. 4 Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017. 3 Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. In *International Conference on Learning Representations*, 2020. URL https: //openreview.net/forum?id=Syx4wnEtvH. 3 Chulhee Yun, Shashank Rajput, and Suvrit Sra. Minibatch vs local SGD with shuffling: Tight convergence bounds and beyond. In *International Conference on Learning Representations*, 2022. URL https:// openreview.net/forum?id=LdlwbBP2mlq. 2, 4 Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In *BMVC*, 2016. 9 Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, and Christopher Ré. 
Parallel SGD: When does averaging help? *arXiv preprint arXiv:1606.07365*, 2016. 4

Fan Zhou and Guojing Cong. On the convergence properties of a K-step averaging stochastic gradient descent algorithm for nonconvex optimization. In *Proceedings of the 27th International Joint Conference on Artificial Intelligence*, pp. 3219–3227, 2018. 4

## Contents

1 Introduction
2 Related Work
  3.1 Problem Setup
  3.2 Local SGD
4 Local SGD's Learning Rate Conundrum
5 Adaptive Learning Rate Scaling for Synchronous and Local SGD
  5.1 Optimal learning rate scaling for SGD - AdaScale
  5.2 Optimal learning rate scaling for Local SGD - LocalAdaScale
6 Experiments
  6.1 Local SGD with automatic learning rate scaling LocalAdaScale maintains Target Accuracy
  6.2 Should I Use Local SGD?
7 Conclusion
A Glossary of Symbols
B Derivation of AdaScale
C Details on Local SGD
  C.1 MSE of Local SGD's Gradient Estimate (Equation 6)
  C.2 Expected Decrease (Equation 10)
  C.3 Optimal Step Size (Equation 13)
D Expected Decrease for H Steps of Synchronous SGD
E Avoiding Stale Gradients in Computing the Gain Ratio
F How accurate are our approximations in Equation (14)?
G Need to Modulate Step Size for Larger Batch Sizes
  G.1 With Constant Learning Rate
  G.2 Using AdaScale
H Details of Experiments
  H.1 Experimental Setup
  H.2 Tabulated Results
  H.3 Additional Results for Pseudo-Wall Clock Time
  H.4 Behavior of the Gain Ratio
## A Glossary of Symbols

| Symbol | Meaning |
|---|---|
| $K$ | Number of workers |
| $H$ | Number of local steps |
| $\gamma_{\text{base}}$ | Unscaled learning rate for $K = 1$ |
| $B$ | Batch size per worker / base batch size |
| $(\cdot)_t$ | Quantity $(\cdot)$ at time $t$ |
| $\nabla f_t$ | Gradient of function $f$ at $\mathbf{w}_t$ |
| $\sigma_t^2$ | Variance of gradient estimation at time $t$ |
| $\mathbf{g}_t^k$ | Gradient estimate on worker $k$ at time $t$ |
| $\bar{G}_t, \bar{\sigma}_t$ | Virtual gradient magnitude and variance estimate for Local SGD |
| $\mathbf{w}_t^k$ | Model weights on worker $k$ at time $t$ |

Table 1: Glossary of symbols used.

## B Derivation of AdaScale

In this section, we derive the AdaScale rule of Johnson et al. (2020) using ideas from Balles et al. (2017). To the best of our knowledge, this view of AdaScale as the learning rate scaling that maximizes the expected function decrease is novel. Pseudo-code for AdaScale is shown in Algorithm 2. Since $f$ is $L$-smooth, it satisfies

$$f(y)\leq f(x)+\nabla f(x)^{T}(y-x)+\frac{L}{2}\|y-x\|^{2}\tag{17}$$

for $x, y \in \operatorname{dom}(f)$. Plugging in $x = \mathbf{w}_t$ and $y = \mathbf{w}_{t+1} = \mathbf{w}_t - \gamma_t\mathbf{g}_t$, where $\mathbf{g}_t$ is the stochastic gradient at time $t$, and computing the expected value of $f(\mathbf{w}_{t+1})$ given $\mathbf{w}_t$, we get

$$\mathbb{E}[f(\mathbf{w}_{t+1})]\leq f(\mathbf{w}_{t})-\gamma_{t}\nabla f_{t}^{T}\,\mathbb{E}[\mathbf{g}_{t}]+\frac{L\gamma_{t}^{2}}{2}\mathbb{E}[\|\mathbf{g}_{t}\|^{2}]\tag{18}$$

The optimal learning rate is the one that minimizes the right-hand side, thereby maximizing the expected decrease of the function value. Differentiating the right-hand side of Equation (18) and setting it to zero gives the optimal learning rate as

$$\gamma_{t}=\frac{\nabla f_{t}^{T}\,\mathbb{E}[\mathbf{g}_{t}]}{L\,\mathbb{E}[\|\mathbf{g}_{t}\|^{2}]}=\frac{1}{L}\frac{\bar{G}_{t}}{\sigma_{t}^{2}+\bar{G}_{t}}\tag{19}$$
where $\mathbb{E}_t[\mathbf{g}_t]=\nabla f_t$, $\|\nabla f_t\|^2 =: \bar{G}_t$, and $\mathbb{E}_t[\|\mathbf{g}_t\|^2]=\bar{G}_t+\sigma_t^2$. When we increase the batch size by a factor of $K$, the variance is reduced by a factor of $K$. Thus, the optimal learning rate becomes

$$\gamma_{t}^{K}=\frac{1}{L}\frac{\bar{G}_{t}}{\frac{\sigma_{t}^{2}}{K}+\bar{G}_{t}}\tag{20}$$

Thus, the relative change in the learning rate when the batch size is increased $K$ times is

$$r_{t}=\frac{\sigma_{t}^{2}+\bar{G}_{t}}{\frac{\sigma_{t}^{2}}{K}+\bar{G}_{t}}\tag{21}$$

which is termed the gain ratio. Evidently, it takes its maximum value of $K$ when $\bar{G}_t \ll \sigma_t^2$.

Algorithm 2 AdaScale for synchronous gradient descent

Input: Initialization w_0, learning rate function γ_t, #workers K, scale-invariant iteration budget S, s = 0, t = 0.
1: while s ≤ S do ▷ Scale-invariant budget not exhausted
2:   for k ∈ [K] do ▷ On each worker
3:     Compute g_t^k using a batch of data. ▷ Compute local gradients.
4:   Compute r_t ∈ [1, K] as given in Equation (21).
5:   Update w_{t+1} ← w_t − γ_{⌈s⌉} · r_t g̃_t ▷ g̃_t := (1/K) Σ_{k=1}^{K} g_t^k
6:   t ← t + 1. ▷ Iteration counter increment.
7:   s ← s + r_t. ▷ Budget is computed from the scaling obtained.
8: return Last iterate w_t.

## C Details on Local SGD

This section contains details regarding Local SGD that have been omitted from the main text, in particular the derivation of the MSE of its implicit gradient estimate (Appendix C.1) as well as the optimal step size (Appendix C.3). Local SGD was introduced in §3, in particular Equation (3). In practice, Local SGD is often used with local steps other than a vanilla SGD step, e.g., momentum variants. Algorithm 3 provides pseudocode; Line 5 calls an update function, which may apply any gradient-based optimization step, such as SGD, momentum SGD, or adaptive gradient methods like Adam.

Algorithm 3 Local SGD

Input: Initialization w_0, step size γ_t, #workers K, #local steps H, budget T
1: t ← 0
2: while t ≤ T do ▷ Training budget not exhausted.
3:   for k ∈ [K] do ▷ On each worker
4:     Compute g_t^k using a batch of data.
5:     w_{t+1}^k ← Update(w_t^k, γ_t, g_t^k). ▷ Local update
6:   if H | (t + 1) then ▷ Average every H steps
7:     w_{t+1}^k ← (1/K) Σ_{k'=1}^{K} w_{t+1}^{k'}
8:   t ← t + 1
9: return Averaged iterate (1/K) Σ_{k=1}^{K} w_T^k.

## C.1 MSE of Local SGD's Gradient Estimate (Equation 6)

We derive the error in estimating the gradient for Local SGD, stated in Proposition 4.2. We first give the following lemma, a variant of Lemma 1 in Khaled et al. (2020) using our local variance bound.

Lemma C.1. *Let Assumption 4.1 hold and let* $t' \in [t, t+H)$. *Define* $V_{t'} = \frac{1}{K}\sum_{k=1}^{K} \left\|\mathbf{w}_{t'}^{k} - \bar{\mathbf{w}}_{t'}\right\|^{2}$. *Then*

$$\mathbb{E}_{t}[V_{t'}]\leq(H-1)\gamma^{2}\bar{\sigma}_{t}^{2}\tag{22}$$

Proof. See Lemma 1 of Khaled et al. (2020).

We are now ready to prove Proposition 4.2.

Proof of Proposition *4.2.* Using the definition of $\bar{\mathbf{g}}_t$ (Equation (4)), for every $t' \in [t, t+H)$, we have

$$\mathbb{E}_{t'}\left[\|\bar{\mathbf{g}}_{t'}-\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}\right]=\mathbb{E}_{t'}\left[\left\|\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{g}_{t'}^{k}-\nabla f(\bar{\mathbf{w}}_{t'})\right)\right\|^{2}\right]\tag{23}$$

$$=\mathbb{E}_{t'}\left[\left\|\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{g}_{t'}^{k}-\nabla f(\mathbf{w}_{t'}^{k})\right)+\frac{1}{K}\sum_{k=1}^{K}\left(\nabla f(\mathbf{w}_{t'}^{k})-\nabla f(\bar{\mathbf{w}}_{t'})\right)\right\|^{2}\right]\tag{24}$$

$$=\underbrace{\mathbb{E}_{t'}\left[\left\|\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{g}_{t'}^{k}-\nabla f(\mathbf{w}_{t'}^{k})\right)\right\|^{2}\right]}_{\text{Term 1}}+\underbrace{\mathbb{E}_{t'}\left[\left\|\frac{1}{K}\sum_{k=1}^{K}\left(\nabla f(\mathbf{w}_{t'}^{k})-\nabla f(\bar{\mathbf{w}}_{t'})\right)\right\|^{2}\right]}_{\text{Term 2}}\tag{25}$$

where the cross term vanishes because $\mathbb{E}_{t'}[\mathbf{g}_{t'}^{k}-\nabla f(\mathbf{w}_{t'}^{k})]=0$. For Term 1, the variance bound in Assumption 4.1 gives us

$$\mathbb{E}_{t'}\left[\left\|\frac{1}{K}\sum_{k=1}^{K}\left(\mathbf{g}_{t'}^{k}-\nabla f(\mathbf{w}_{t'}^{k})\right)\right\|^{2}\right]=\frac{1}{K^{2}}\sum_{i,j}\mathbb{E}[(\mathbf{g}_{t'}^{i}-\nabla f(\mathbf{w}_{t'}^{i}))^{T}(\mathbf{g}_{t'}^{j}-\nabla f(\mathbf{w}_{t'}^{j}))]\leq\frac{\bar{\sigma}_{t}^{2}}{K}\tag{27}$$
For Term 2, we use Jensen's inequality and $L$-smoothness to get

$$\left\|\frac{1}{K}\sum_{k=1}^{K}\left(\nabla f(\mathbf{w}_{t'}^{k})-\nabla f(\bar{\mathbf{w}}_{t'})\right)\right\|^{2}\leq\frac{1}{K}\sum_{k=1}^{K}\left\|\nabla f(\mathbf{w}_{t'}^{k})-\nabla f(\bar{\mathbf{w}}_{t'})\right\|^{2}\leq L^{2}\underbrace{\frac{1}{K}\sum_{k=1}^{K}\left\|\mathbf{w}_{t'}^{k}-\bar{\mathbf{w}}_{t'}\right\|^{2}}_{=:V_{t'}}\tag{28}$$

Plugging that back in and taking the expectation $\mathbb{E}_t$, we find

$$\mathbb{E}_{t'}\left[\|\bar{\mathbf{g}}_{t'}-\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}\right]\leq\frac{\bar{\sigma}_{t}^{2}}{K}+L^{2}\mathbb{E}_{t}[V_{t'}]\leq\frac{\bar{\sigma}_{t}^{2}}{K}+(H-1)L^{2}\gamma^{2}\bar{\sigma}_{t}^{2}\tag{29}$$

$\square$

## C.2 Expected Decrease (Equation 10)

We now derive an upper bound on the expected decrease achieved by $H$ consecutive steps of Local SGD. This result was stated in Equation (10). The following proposition states the bound, where we factorize the learning rate as $\gamma = \frac{\eta}{L}$; this is solely for the readability of the proof.

Proposition C.2. *Let* $f$ *be* $L$-*smooth. We consider* $H$ *consecutive steps of Local SGD (Equation* (3)) *starting from a synchronization point* $H \mid t$. *We assume a fixed step size* $\gamma = \eta/L$ *across these* $H$ *steps. Then*

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]\leq f(\bar{\mathbf{w}}_{t})-H\left(\frac{\eta}{2L}\bar{G}_{t}+\frac{\eta}{2L}\bar{A}_{t}-\frac{\eta^{2}}{2L}\bar{A}_{t}-\frac{\eta^{2}}{2L}\frac{\bar{\sigma}_{t}^{2}}{K}-\frac{\eta^{3}}{4L}(H-1)\bar{\sigma}_{t}^{2}\right)\tag{30}$$

where

$$\bar{G}_{t}=\frac{1}{H}\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[\|\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}],\quad\bar{A}_{t}=\frac{1}{K}\sum_{k}\frac{1}{H}\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[\|\nabla f(\mathbf{w}_{t'}^{k})\|^{2}]\tag{31}$$

Proof.
From $L$-smoothness, we have for any $t'$

$$\mathbb{E}_{t'}[f(\bar{\mathbf{w}}_{t'+1})]\leq f(\bar{\mathbf{w}}_{t'})-\frac{\eta}{L}\nabla f(\bar{\mathbf{w}}_{t'})^{T}\,\mathbb{E}_{t'}[\bar{\mathbf{g}}_{t'}]+\frac{\eta^{2}}{2L}\mathbb{E}_{t'}[\|\bar{\mathbf{g}}_{t'}\|^{2}]$$
$$=f(\bar{\mathbf{w}}_{t'})-\frac{\eta}{L}\nabla f(\bar{\mathbf{w}}_{t'})^{T}\left(\frac{1}{K}\sum_{k}\nabla f(\mathbf{w}_{t'}^{k})\right)+\frac{\eta^{2}}{2L}\mathbb{E}_{t'}[\|\bar{\mathbf{g}}_{t'}\|^{2}]\tag{32}$$

Regarding the linear term, we have

$$\nabla f(\bar{\mathbf{w}}_{t'})^{T}\left(\frac{1}{K}\sum_{k}\nabla f(\mathbf{w}_{t'}^{k})\right)=\frac{1}{2}\left(\underbrace{\frac{1}{K}\sum_{k}\|\nabla f(\mathbf{w}_{t'}^{k})\|^{2}}_{=:A_{t'}}+\|\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}-\frac{1}{K}\sum_{k}\|\nabla f(\mathbf{w}_{t'}^{k})-\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}\right)$$
$$\geq\frac{1}{2}\left(A_{t'}+\|\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}-L^{2}\underbrace{\frac{1}{K}\sum_{k}\|\mathbf{w}_{t'}^{k}-\bar{\mathbf{w}}_{t'}\|^{2}}_{=:V_{t'}}\right)\tag{33}$$

Furthermore, we have

$$\mathbb{E}_{t'}[\|\bar{\mathbf{g}}_{t'}\|^{2}]=\|\mathbb{E}_{t'}[\bar{\mathbf{g}}_{t'}]\|^{2}+\mathbf{Var}_{t'}[\bar{\mathbf{g}}_{t'}]\leq\underbrace{\left\|\frac{1}{K}\sum_{k}\nabla f(\mathbf{w}_{t'}^{k})\right\|^{2}}_{\leq A_{t'}}+\frac{\bar{\sigma}_{t}^{2}}{K}\tag{34}$$

Plugging Equations (33) and (34) back into Equation (32), we get

$$\mathbb{E}_{t'}[f(\bar{\mathbf{w}}_{t'+1})]\leq f(\bar{\mathbf{w}}_{t'})-\frac{\eta}{2L}\|\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}-\frac{\eta}{2L}A_{t'}+\frac{L\eta}{2}V_{t'}+\frac{\eta^{2}}{2L}A_{t'}+\frac{\eta^{2}}{2L}\frac{\bar{\sigma}_{t}^{2}}{K}\tag{35}$$

Iterating this bound backward from $t' = t+H-1$ while taking the expectation $\mathbb{E}_t$ yields

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]=\mathbb{E}_{t}[\mathbb{E}_{t+H-1}[f(\bar{\mathbf{w}}_{t+H})]]$$
$$\leq\mathbb{E}_{t}\left[f(\bar{\mathbf{w}}_{t+H-1})-\frac{\eta}{2L}\|\nabla f(\bar{\mathbf{w}}_{t+H-1})\|^{2}-\frac{\eta}{2L}A_{t+H-1}+\frac{L\eta}{2}V_{t+H-1}+\frac{\eta^{2}}{2L}A_{t+H-1}+\frac{\eta^{2}}{2L}\frac{\bar{\sigma}_{t}^{2}}{K}\right]$$
$$\leq\ \ldots$$
$$\leq f(\bar{\mathbf{w}}_{t})-\frac{\eta}{2L}\underbrace{\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[\|\nabla f(\bar{\mathbf{w}}_{t'})\|^{2}]}_{=H\bar{G}_{t}}-\frac{\eta}{2L}\underbrace{\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[A_{t'}]}_{=H\bar{A}_{t}}+\frac{\eta^{2}}{2L}\underbrace{\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[A_{t'}]}_{=H\bar{A}_{t}}+\frac{\eta^{2}}{2L}H\frac{\bar{\sigma}_{t}^{2}}{K}+\frac{L\eta}{2}\underbrace{\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[V_{t'}]}_{(*)}\tag{36}$$

It remains to bound the term $(*)$. From Lemma C.1 we know that $V_{t'}$ obeys the recursion

$$\mathbb{E}_{t}[V_{t'}]\leq\mathbb{E}_{t}[V_{t'-1}]+\gamma^{2}\bar{\sigma}_{t}^{2},\quad\gamma=\frac{\eta}{L}\tag{37}$$

Since $V_t = 0$, we get $\mathbb{E}_t[V_{t'}]\leq(t'-t)\,\bar{\sigma}_{t}^{2}\,\frac{\eta^{2}}{L^{2}}$ and

$$\sum_{t'=t}^{t+H-1}\mathbb{E}_{t}[V_{t'}]\leq\bar{\sigma}_{t}^{2}\eta^{2}\frac{1}{L^{2}}\sum_{t'=t}^{t+H-1}(t'-t)=\bar{\sigma}_{t}^{2}\eta^{2}\frac{(H-1)H}{2L^{2}}\tag{38}$$

Plugging that back in gives

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]\leq f(\bar{\mathbf{w}}_{t})-\frac{\eta}{2L}H\bar{G}_{t}-\frac{\eta}{2L}H\bar{A}_{t}+\frac{\eta^{2}}{2L}H\bar{A}_{t}+\frac{\eta^{2}}{2L}H\frac{\bar{\sigma}_{t}^{2}}{K}+\frac{\eta^{3}}{4L}(H-1)H\bar{\sigma}_{t}^{2}\tag{39}$$

![22_image_0.png](22_image_0.png)

Figure 7: Examining the assumption in Equation (40). We plot the standard deviation of the full-gradient magnitude over H steps for a ResNet-9 (He et al., 2016) trained on CIFAR-10 (Krizhevsky, 2009). The gradient magnitude changes very little over the course of training, so the assumption that the magnitude remains constant over a small number H of gradient steps is a reasonable approximation.

This expected function decrease bound features the terms $\bar{G}_t$ and $\bar{A}_t$, which are expected squared gradient magnitudes along the trajectories of the virtual averaged iterates and the per-worker iterates, respectively. These quantities are not computed in practice and are, in principle, dependent on $\eta$, since the choice of step size influences the gradient magnitude along that trajectory.
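In an implementation, stand-in estimates for these statistics are formed from the $K$ per-worker stochastic gradients available at a synchronization point. The following pure-Python sketch shows one standard unbiased estimator; the exact estimator used by the method (Equation (15)) is not reproduced in this appendix, so treat the details here as illustrative assumptions.

```python
def grad_stats(worker_grads):
    """Estimate (G_bar, sigma2) from K stochastic gradients computed
    at the same iterate: sigma2 estimates the per-worker gradient
    variance E||g^k - grad f||^2, and G_bar estimates ||grad f||^2.

    worker_grads: list of K gradient vectors (lists of floats), K >= 2.
    """
    K, d = len(worker_grads), len(worker_grads[0])
    g_avg = [sum(g[i] for g in worker_grads) / K for i in range(d)]
    # Unbiased sample variance of a single worker's gradient.
    sigma2 = sum((g[i] - g_avg[i]) ** 2
                 for g in worker_grads for i in range(d)) / (K - 1)
    # E||g_avg||^2 = ||grad f||^2 + sigma2 / K, so subtract the bias.
    G_bar = max(sum(x * x for x in g_avg) - sigma2 / K, 0.0)
    return G_bar, sigma2
```

For example, two workers whose gradients deviate symmetrically from the true gradient recover it exactly: `grad_stats([[1.1, 0.0], [0.9, 0.0]])` yields the averaged gradient magnitude with the variance-induced bias removed.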
We make the simplifying assumption that

$$\bar{A}_{t}\approx\|\nabla f(\bar{\mathbf{w}}_{t})\|^{2}\approx\bar{G}_{t}\tag{40}$$

Equation (40) signifies that between two synchronizations the average virtual gradient magnitude and the local gradient magnitudes computed on each worker do not change too much and can be approximated by the gradient magnitude computed at the point of synchronization ($\bar{\mathbf{w}}_t$). We study the validity of this assumption in Figure 7. We train a small ResNet-9 on CIFAR-10 using SGD. We are interested in how the gradient magnitude (not the stochastic gradient magnitude) evolves over a short horizon of H steps, and thus we plot the ratio of the standard deviation of the gradient magnitude over H steps to the average gradient magnitude over those H steps, commonly termed the coefficient of variation. It is apparent from Figure 7 that the coefficient of variation stays well below 1 except at the start of training. This provides evidence that we can reuse the gradient magnitude for H steps due to its relative constancy.

Substituting Equation (40) into Equation (39) gives us

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]\leq f(\bar{\mathbf{w}}_{t})-H\left(\gamma\bar{G}_{t}-\frac{\gamma^{2}L}{2}\bar{G}_{t}-\frac{\gamma^{2}L}{2}\frac{\bar{\sigma}_{t}^{2}}{K}-\frac{\gamma^{3}L^{2}}{4}(H-1)\bar{\sigma}_{t}^{2}\right)\tag{41}$$

## C.3 Optimal Step Size (Equation 13)

We can now find the step size $\gamma$ that minimizes our bound on $\mathbb{E}_t[f(\bar{\mathbf{w}}_{t+H})]$ in Equation (41). Setting the derivative of the right-hand side with respect to $\gamma$ to zero reads

$$\bar{G}_{t}-\gamma L\bar{G}_{t}-\gamma L\frac{\bar{\sigma}_{t}^{2}}{K}-\frac{3}{4}\gamma^{2}L^{2}(H-1)\bar{\sigma}_{t}^{2}=0\tag{42}$$

This quadratic equation in $\gamma$ can be solved by the standard quadratic formula.
We instead use Muller's method (Muller, 1956), which gives the roots of a quadratic of the form $ax^2+bx+c=0$ as

$$x=\frac{2c}{-b\pm\sqrt{b^{2}-4ac}}\tag{43}$$

Ignoring the negative root, we get

$$\gamma=\frac{1}{L}\frac{2\bar{G}_{t}}{\bar{G}_{t}+\bar{\sigma}_{t}^{2}/K+\sqrt{(\bar{G}_{t}+\bar{\sigma}_{t}^{2}/K)^{2}+3(H-1)\bar{G}_{t}\bar{\sigma}_{t}^{2}}}\tag{44}$$

In our experiments, following Johnson et al. (2020), we use SGD with momentum. Momentum buffers are maintained locally on each machine and are not synchronized. We leave the investigation of different approaches to momentum buffers (Lin et al., 2020; Wang et al., 2020; Chen & Huo, 2016) to future work.

## D Expected Decrease for H Steps of Synchronous SGD

To elucidate the expected decrease bound for Local SGD in Equation (41), we contrast it with a similar bound for $H$ consecutive steps of synchronous data-parallel SGD using a constant learning rate $\gamma$. Analogously to our analysis for Local SGD, we assume the gradient variance and magnitude to be bounded across these $H$ steps,

$$\mathbf{Var}_{t'}[\mathbf{g}_{t'}^{k}]\leq\bar{\sigma}_{t}^{2},\quad\|\nabla f(\mathbf{w}_{t'})\|^{2}\geq\bar{G}_{t},\quad t'\in[t,t+H)\tag{45}$$

For the averaged gradient across $K$ workers, this results in $\mathbf{Var}_{t'}[\tilde{\mathbf{g}}_{t'}]\leq\frac{\bar{\sigma}_{t}^{2}}{K}$. By smoothness, we get

$$\mathbb{E}_{t'}[f(\mathbf{w}_{t'+1})]\leq f(\mathbf{w}_{t'})-\gamma\nabla f(\mathbf{w}_{t'})^{T}\,\mathbb{E}_{t'}[\tilde{\mathbf{g}}_{t'}]+\frac{L\gamma^{2}}{2}\mathbb{E}_{t'}\left[\|\tilde{\mathbf{g}}_{t'}\|^{2}\right]$$
$$=f(\mathbf{w}_{t'})-\gamma\|\nabla f(\mathbf{w}_{t'})\|^{2}+\frac{L\gamma^{2}}{2}\left(\|\nabla f(\mathbf{w}_{t'})\|^{2}+\mathbf{Var}_{t'}[\tilde{\mathbf{g}}_{t'}]\right)$$
$$\leq f(\mathbf{w}_{t'})-\left(\gamma\bar{G}_{t}-\frac{L\gamma^{2}}{2}\left(\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}\right)\right)\tag{46}$$
Using this recursively for $H$ steps from $t$ to $t + H$ while taking the expectation $\mathbb{E}_t$, we get

$$\mathbb{E}_{t}[f(\mathbf{w}_{t+H})]\leq f(\mathbf{w}_{t})-\underbrace{H\left(\gamma\bar{G}_{t}-\frac{\gamma^{2}L}{2}\bar{G}_{t}-\frac{\gamma^{2}L}{2}\frac{\bar{\sigma}_{t}^{2}}{K}\right)}_{\text{Expected decrease for SGD}}\tag{47}$$

Contrast this with Equation (30), restated here:

$$\mathbb{E}_{t}[f(\bar{\mathbf{w}}_{t+H})]\leq f(\bar{\mathbf{w}}_{t})-H\left(\gamma\bar{G}_{t}-\frac{\gamma^{2}L}{2}\bar{G}_{t}-\frac{\gamma^{2}L}{2}\frac{\bar{\sigma}_{t}^{2}}{K}-\frac{\gamma^{3}L^{2}}{4}(H-1)\bar{\sigma}_{t}^{2}\right)\tag{48}$$

We see that Local SGD has an additional cubic term that results from the biased estimation of gradients.

## E Avoiding Stale Gradients in Computing the Gain Ratio

To implement LocalAdaScale, we need to estimate $\bar{G}_t$ and $\bar{\sigma}_t^2$. In Algorithm 1, we proposed a method that synchronizes *once* every H steps to average the weights and the cached stale gradients simultaneously. As an ablation study, we compare this against an alternative strategy (Algorithm 4) that synchronizes the gradients one step after weight synchronization. While this incurs the latency overhead *twice*, the resulting gain ratio is computed with more recent gradient evaluations and should therefore be more accurate.

In Figures 8 and 9, we compare the two variants, Local-Ada and Local-Ada-NoStale. Across all datasets, we find very small differences between the two variants in the final accuracy obtained and in the number of epochs to convergence. The difference in the number of epochs between Local-Ada and Local-Ada-NoStale is explained by the fact that underestimating the gain ratio ρ due to using stale gradients results in longer training durations.
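Whichever synchronization scheme supplies the statistics, they enter the same step-size rule: the positive root of the optimality condition in Equation (42), written in Muller's form as in Equation (44). A minimal sketch, assuming estimates of $\bar{G}_t$, $\bar{\sigma}_t^2$, and the smoothness constant $L$ are given (the function name is illustrative):

```python
import math

def local_adascale_lr(G_bar, sigma2, K, H, L):
    """Step size from Equation (44): the positive root of
    G - gamma*L*(G + sigma2/K) - (3/4)*gamma^2*L^2*(H-1)*sigma2 = 0,
    written in Muller's numerically stable form."""
    b = G_bar + sigma2 / K
    disc = math.sqrt(b * b + 3.0 * (H - 1) * G_bar * sigma2)
    return 2.0 * G_bar / (L * (b + disc))
```

For H = 1 the discriminant collapses to b, and the rule reduces to AdaScale's optimal rate $\bar{G}_t / (L(\bar{G}_t + \sigma_t^2/K))$ from Equation (20).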
![24_image_0.png](24_image_0.png)

Figure 8: Comparison of test accuracies achieved for different numbers of workers (K) and communication intervals (H) by the two implementation variants of LocalAdaScale. Accuracy stays nearly identical between the two implementations, and the minor variations are within random-seed variation.

![24_image_1.png](24_image_1.png)

Figure 9: Number of epochs used by each method. When K > 8, both variants converge in a nearly identical number of epochs. Local-Ada trains for marginally more iterations, as it underestimates the gain ratio due to the use of cached gradients.

Algorithm 4 LocalAdaScale with two synchronizations - Local-Ada-NoStale

1: Input: Initialization w_0, step size γ_t, #workers K, #local steps H, scale-invariant budget S, t = 0, s = 0.
2: while s ≤ S do ▷ Scale-invariant budget not exhausted.
3:   for k ∈ [K] do ▷ On each worker
4:     Compute g_t^k using a batch of data. ▷ Gradient at t.
5:     if H | t then ▷ One step after model sync.
6:       Ḡ_t, σ̄_t² ← grad_stats(g_t^k) ▷ Equation (15)
7:       Compute ρ as in Equation (14).
8:     w_{t+1}^k ← w_t^k − ργ_{⌈s⌉} g_t^k. ▷ Local update.
9:   if H | (t + 1) then ▷ Average model every H steps
10:    w_{t+1}^j ← (1/K) Σ_{j'=1}^{K} w_{t+1}^{j'} ∀j.
11:  s ← s + ρ_t
12:  t ← t + 1
13: return The last iterate w_t.

## F How Accurate Are Our Approximations in Equation (14)?

We study the tightness of our approximations via a generalized version of Equation (14),

$$\rho_{t}(c)=\frac{2\left(\bar{G}_{t}+\bar{\sigma}_{t}^{2}\right)}{\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}+\sqrt{\left(\bar{G}_{t}+\frac{\bar{\sigma}_{t}^{2}}{K}\right)^{2}+3c(H-1)\bar{G}_{t}\bar{\sigma}_{t}^{2}}}\tag{49}$$

Here, $\rho_t(0)$ reduces to AdaScale and $\rho_t(1)$ is LocalAdaScale, so $\rho_t(c)$ for $c \in [0, 1]$ interpolates between AdaScale and LocalAdaScale. Modulating $c$ scales the learning rate as well as the number of training iterations.
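For a quick numerical feel for this interpolation, Equation (49) can be coded directly; a sketch (the function and argument names are illustrative):

```python
import math

def gain_ratio(G_bar, sigma2, K, H, c=1.0):
    """Generalized gain ratio rho_t(c) of Equation (49); c = 0 recovers
    AdaScale's gain (Equation (21)) and c = 1 gives LocalAdaScale's."""
    b = G_bar + sigma2 / K
    return 2.0 * (G_bar + sigma2) / (
        b + math.sqrt(b * b + 3.0 * c * (H - 1) * G_bar * sigma2))
```

The gain shrinks monotonically as c grows, i.e., LocalAdaScale scales the learning rate more conservatively than AdaScale for any H ≥ 2, and every ρ_t(c) stays below the perfect-scaling gain K.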
![25_image_0.png](25_image_0.png)

Figure 10: Ablation over c in Equation (49). We plot c vs. top-5 accuracy and the number of epochs on ImageNet32 for K = 16 and H = 16, 32. We see that c ≥ 0.6 recovers the target accuracy.

We show in Figure 10 the effect of c on performance. Using AdaScale's gain formula (c = 0) for Local SGD has the advantage of the fastest convergence, albeit with poorer generalization, as the learning rate used is likely too aggressive and, consequently, the training iterations too few. As c is increased from 0, performance improves, with diminishing returns once c ≥ 0.6. This threshold can vary with the dataset and network architecture in addition to the number of workers, and cannot be interpreted as universal. We see that our proposed method scales the learning rate conservatively, and our threshold works well even for the very high communication gap of H = 32. This figure is further evidence that scaling methods designed for synchronous data-parallel SGD do not perform well for Local SGD: the delayed communication (H ≥ 2) has to be accounted for, as seen in the case of c = 0, where there is a substantial performance difference between H = 16 and H = 32.

## G Need to Modulate Step Size for Larger Batch Sizes

## G.1 With Constant Learning Rate

Khaled et al. (2020) and Stich (2019) analyze the convergence of Local SGD under a different condition on the learning rate: either constant or decreasing with time, but not the optimal learning rate. We can analyze this choice by mapping that idea to SGD in Equation (18). Let γ be a learning rate that is small enough (additional upper bounds exist).
We can see that with this learning rate, the loss function decreases as

$$\mathbb{E}[f(\mathbf{w}_{t+1})\,|\,\mathbf{w}_{t}]\leq f(\mathbf{w}_{t})-\left(\gamma\|\nabla f_{t}\|^{2}-\frac{\|\nabla f_{t}\|^{2}+\sigma_{t}^{2}}{2}L\gamma^{2}\right)\tag{50}$$

Keeping the learning rate at the same value, when the number of workers goes up $K$-fold the variance is reduced by the same factor, and we get

$$\mathbb{E}[f(\mathbf{w}_{t+1})\,|\,\mathbf{w}_{t}]\leq f(\mathbf{w}_{t})-\left(\gamma\|\nabla f_{t}\|^{2}-\frac{\|\nabla f_{t}\|^{2}+\frac{\sigma_{t}^{2}}{K}}{2}L\gamma^{2}\right)\tag{51}$$

Comparing Equations (50) and (51), it is apparent that merely increasing the batch size yields an update with a larger reduction of the function value without having to adjust the learning rate at all. However, this is suboptimal, as a practitioner would be interested in tuning the learning rate to obtain the largest reduction (implying faster training). Herein lies the difference between prior works and our results.

## G.2 Using AdaScale

In Appendix B, we derived the optimal learning rate, which results in the largest decrease of the function value. Thus, when the number of workers is increased from 1 to K, any learning rate smaller than the optimal one will still decrease the function value in Equation (18), but by less than the optimal amount. We make this more formal here.
Substituting Equation (19) into Equation (18), we get

$$\mathbb{E}[f(\mathbf{w}_{t+1})\,|\,\mathbf{w}_{t}]\leq f(\mathbf{w}_{t})-\left(\frac{1}{L}\frac{\|\nabla f_{t}\|^{4}}{\sigma_{t}^{2}+\|\nabla f_{t}\|^{2}}-\frac{1}{2L}\frac{\|\nabla f_{t}\|^{4}}{\sigma_{t}^{2}+\|\nabla f_{t}\|^{2}}\right)$$
$$=f(\mathbf{w}_{t})-\frac{1}{2L}\underbrace{\frac{\|\nabla f_{t}\|^{4}}{\sigma_{t}^{2}+\|\nabla f_{t}\|^{2}}}_{\text{Reduction factor}}\tag{OptRednSGD-1 Worker}$$

Similarly, the optimal reduction in the function value when $K$ workers are used is

$$\mathbb{E}[f(\mathbf{w}_{t+1})\,|\,\mathbf{w}_{t}]\leq f(\mathbf{w}_{t})-\frac{1}{2L}\underbrace{\frac{\|\nabla f_{t}\|^{4}}{\frac{\sigma_{t}^{2}}{K}+\|\nabla f_{t}\|^{2}}}_{\text{Reduction factor}}\tag{OptRednSGD-K Worker}$$

The reduction factor in Equation (OptRednSGD-K Worker) is at least that in Equation (OptRednSGD-1 Worker), as its denominator is smaller while all other terms are identical. A learning rate adaptation is required to capitalize on the availability of K workers.

## H Details of Experiments

## H.1 Experimental Setup

**CIFAR-10** We used ResNet model code and training hyperparameters from the GitHub repository https://github.com/kuangliu/pytorch-cifar. They report a test accuracy of 93.02% for ResNet-18 in the single-worker base setting.

**ImageNet32** We used Wide ResNet model code from https://github.com/weiaicunzai/pytorch-cifar100 and training hyperparameters from the paper proposing ImageNet32 (Chrabaszcz et al., 2017). They report a top-5 test accuracy of 69.08% for WRN-28-2 in the single-worker base setting.

**ImageNet** We used the ResNet model from the torchvision library and training hyperparameters from Goyal et al. (2018). The torchvision library reports a top-5 test accuracy of 92.86%.
Training hyperparameters are listed in the following table:

| Dataset | γ_base | Momentum | Weight decay | LR schedule | Epochs |
|---|---|---|---|---|---|
| CIFAR-10 | 0.1 | 0.9 | 5 × 10⁻⁴ | Cosine decay | 200 |
| ImageNet32 | 0.01 | 0.9 | 5 × 10⁻⁴ | Step (×0.5 every 10 epochs) | 40 |
| ImageNet | 0.1 | 0.9 | 5 × 10⁻⁴ | Step (×0.1 every 30 epochs) | 90 |

## H.2 Tabulated Results

Tabulated results of our experiments may be found in Tables 2 to 4.

Table 2: Accuracy [%] (first four method columns) and number of epochs (last four) for CIFAR-10.

| K | H | Acc-Ada | Acc-Lin | Local-Ada | Local-Lin | Acc-Ada | Acc-Lin | Local-Ada | Local-Lin |
|---|---|---|---|---|---|---|---|---|---|
| 8 | 2 | 95.12 | 94.73 | 95.07 | 95.16 | 291 | 200 | 338 | 200 |
| 8 | 4 | 95.11 | 93.92 | 94.94 | 95.26 | 353 | 200 | 405 | 200 |
| 8 | 8 | 94.81 | 88.06 | 95.77 | 95.22 | 496 | 200 | 469 | 200 |
| 8 | 16 | 94.70 | 53.09 | 96.02 | 94.98 | 791 | 200 | 604 | 200 |
| 8 | 32 | 94.28 | 13.19 | 95.85 | 94.25 | 1450 | 200 | 813 | 200 |
| 8 | 64 | 94.53 | 10.70 | 95.71 | 93.61 | 2260 | 201 | 1056 | 200 |
| 16 | 2 | 95.35 | 94.13 | 95.11 | 95.24 | 360 | 200 | 422 | 200 |
| 16 | 4 | 95.15 | 87.36 | 95.55 | 94.69 | 502 | 200 | 525 | 200 |
| 16 | 8 | 94.65 | 36.52 | 95.71 | 93.82 | 812 | 200 | 666 | 200 |
| 16 | 16 | 94.43 | 14.18 | 95.74 | 93.35 | 1330 | 200 | 890 | 200 |
| 16 | 32 | 94.64 | 14.12 | 94.74 | 91.95 | 2499 | 201 | 1735 | 200 |
| 16 | 64 | 93.24 | 10.12 | 95.47 | 90.83 | 6157 | 202 | 1846 | 200 |
| 24 | 2 | 94.52 | 90.02 | 95.24 | 94.21 | 443 | 200 | 515 | 200 |
| 24 | 4 | 94.79 | 79.58 | 95.72 | 93.63 | 617 | 200 | 634 | 200 |
| 24 | 8 | 94.55 | 13.87 | 95.72 | 92.99 | 1006 | 200 | 813 | 200 |
| 24 | 16 | 93.65 | 10.76 | 95.39 | 91.68 | 1634 | 200 | 1132 | 200 |
| 24 | 32 | 93.96 | 11.08 | 95.92 | 86.69 | 3688 | 201 | 1671 | 200 |
| 24 | 64 | 92.73 | 11.48 | 95.28 | 66.22 | 8542 | 203 | 2535 | 200 |

Table 3: Accuracy [%] (first four method columns) and number of epochs (last four) for ImageNet32.

| K | H | Acc-Ada | Acc-Lin | Local-Ada | Local-Lin | Acc-Ada | Acc-Lin | Local-Ada | Local-Lin |
|---|---|---|---|---|---|---|---|---|---|
| | 2 | 69.01 | 68.00 | 69.41 | 68.32 | 49 | 40 | 56 | 40 |
| | 4 | 69.15 | 67.84 | 69.51 | 68.13 | 57 | 40 | 65 | 40 |
| | 8 | 69.48 | 62.59 | 69.06 | 67.31 | 74 | 40 | 75 | 40 |
| | 16 | 69.50 | 51.15 | 68.86 | 66.19 | 104 | 40 | 89 | 40 |
| | 32 | 69.91 | 41.72 | 68.89 | 64.76 | 161 | 40 | 113 | 40 |
| | 64 | 69.94 | 28.45 | 68.87 | 63.96 | 283 | 40 | 142 | 40 |
| | 2 | 68.88 | 67.35 | 69.24 | 68.27 | 58 | 40 | 72 | 40 |
| | 4 | 69.16 | 61.09 | 69.42 | 67.22 | 73 | 40 | 87 | 40 |
| | 8 | 69.66 | 53.42 | 68.97 | 65.80 | 104 | 40 | 101 | 40 |
| | 16 | 70.01 | 48.60 | 68.61 | 63.44 | 162 | 40 | 123 | 40 |
| | 32 | 70.02 | 29.78 | 68.41 | 60.93 | 286 | 40 | 163 | 40 |
| | 64 | 69.54 | 6.30 | 68.44 | 59.27 | 518 | 40 | 219 | 40 |
| | 2 | 68.99 | 66.51 | 69.51 | 67.02 | 65 | 40 | 84 | 40 |
| | 4 | 69.45 | 45.41 | 69.36 | 65.81 | 87 | 40 | 103 | 40 |
| | 8 | 69.71 | 47.79 | 68.60 | 64.05 | 133 | 40 | 121 | 40 |
| | 16 | 69.58 | 32.45 | 68.27 | 60.80 | 218 | 40 | 154 | 40 |
| | 32 | 69.77 | 7.91 | 67.91 | 56.53 | 409 | 40 | 209 | 40 |
| | 64 | 68.97 | 4.84 | 67.89 | 54.18 | 744 | 40 | 279 | 40 |

Table 4: Accuracy [%] (first four method columns) and number of epochs (last four) for ImageNet.

| K | H | Acc-Ada | Acc-Lin | Local-Ada | Local-Lin | Acc-Ada | Acc-Lin | Local-Ada | Local-Lin |
|---|---|---|---|---|---|---|---|---|---|
| | 2 | 92.32 | 92.66 | 92.93 | 92.31 | 108 | 90 | 122 | 90 |
| | 8 | 92.54 | 90.70 | 92.89 | 92.30 | 157 | 90 | 156 | 90 |
| | 32 | 92.14 | 59.23 | 92.98 | 90.72 | 328 | 90 | 240 | 90 |
| | 2 | 92.15 | 92.37 | 92.66 | 92.54 | 127 | 90 | 149 | 90 |
| | 8 | 92.54 | 85.40 | 92.85 | 91.64 | 213 | 90 | 217 | 90 |
| | 32 | 91.74 | 0.58 | 92.89 | 88.40 | 568 | 90 | 377 | 90 |
| | 2 | 92.84 | 91.81 | 92.83 | 92.49 | 141 | 90 | 177 | 90 |
| | 8 | 92.35 | 77.93 | 92.72 | 90.83 | 260 | 90 | 272 | 90 |
| | 32 | 91.12 | 0.62 | 92.24 | 84.08 | 761 | 90 | 536 | 90 |

## H.3 Additional Results for Pseudo-Wall Clock Time

![29_image_0.png](29_image_0.png)

![29_image_1.png](29_image_1.png)

In Figure 11, we plot the full communication tradeoffs for all the datasets and numbers of workers (K) considered.

Figure 11: When is LocalAdaScale preferable to Acc-Ada? On the x-axis, we plot the relative cost m of communication to computation, and on the y-axis, we plot pseudo-wallclock time T. We see that LocalAdaScale converges faster than gradient accumulation for higher H, and also for a higher cost of communication. For lower m, gradient accumulation with fewer steps is preferable to LocalAdaScale.

## H.4 Behavior of the Gain Ratio

Figures 12 and 13 show the gain ratios used by AdaScale and LocalAdaScale. We plot ρ (Eq. 14) for Local-Ada and the "effective gain" rH (Eq. 9) for Acc-Ada, accounting for the different scales of ρ and r. Since the different methods take different numbers of iterations, we adjust the x-axis to correspond to scale-invariant epochs. We see that both methods approach the maximum gain ratio of K for small values of H. The average gain ratio achieved by Acc-Ada is slightly higher for small H, resulting in fewer iterations, as observed before. For large H, this is reversed, and Local-Ada achieves higher average gains.
In all settings and for both methods, the gain ratio generally increases smoothly over time, reflecting a diminishing gradient magnitude.

![30_image_0.png](30_image_0.png)

Figure 12: Gain ratios for CIFAR-10 on ResNet-18. Each row corresponds to a communication interval H and each column to the number of workers K. Acc-Ada reaches higher gain ratios for small H and thus is more iteration-efficient than Local-Ada. This trend reverses for large H, where we see that Acc-Ada reaches higher scaling only towards the end of training. The x-axis has been linearly downsampled to fit into 200 epochs for visualization.

![30_image_1.png](30_image_1.png)

Figure 13: Gain ratios for ImageNet-32 on WRN-28-2. Each row corresponds to a communication interval H and each column to the number of workers K. Acc-Ada reaches higher gain ratios for small H and thus is more iteration-efficient than Local-Ada. This trend reverses for large H, where we see that Acc-Ada reaches higher scaling only towards the end of training. The x-axis has been linearly downsampled to fit into 40 epochs for visualization. The step structure of the gain is due to the step learning rate decay schedule and does not align across plots because LocalAdaScale uses scale-invariant epochs, which correspond to a different number of epochs in each (K, H) setting.
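The scale-invariant bookkeeping described above can be sketched as follows. This is an illustrative toy, not the paper's implementation: each physical step advances an effective-iteration counter by the current gain ratio, and training runs until the counter reaches the single-worker iteration budget. The name `gain_at` is a hypothetical stand-in for the online gain estimate (ρ or rH).

```python
def steps_to_budget(gain_at, budget):
    # Advance an effective-iteration counter by the current gain ratio
    # (between 1 and K) on every physical step; stop once the counter
    # reaches the single-worker iteration budget. Lower realized gains
    # therefore translate into more physical steps, as observed for
    # large H in the tables above.
    steps, progress = 0, 0.0
    while progress < budget:
        progress += gain_at(steps)
        steps += 1
    return steps
```

For example, with a constant gain of K = 4 and a budget of 100 effective iterations, 25 physical steps suffice, while a method whose gain averages 2 needs 50.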
Review 1: Summary: The standard distributed data-parallel training of neural nets requires constant synchronization of the gradients between workers. Using Local SGD is one way to reduce the communication overhead, but its gradient estimate is biased. This paper borrows the idea of AdaScale to study how the learning rate should be set automatically for Local SGD. More specifically, their method adjusts the learning rates according to the gradient statistics and may automatically add more training steps until loss converges. By comparing the empirical performances of SGD and Local SGD, the authors claim that Local SGD is empirically preferable to SGD only in extreme scenarios of communication being substantially more time-consuming than computation. Strengths and Weaknesses: Strengths: 1. This paper studies a well-motivated problem: how do we reduce communication overhead while maintaining the convergence speed? The authors tackle this problem by looking into the learning rates of Local SGD, one of the most popular communication-efficient optimizers. 2. This paper is well-written. The experiment and proof details are written very clearly. 3. The proposed method LocalAdaScale is grounded by theory and has been validated via thorough experiments on different datasets, numbers of local steps, and numbers of workers. Weaknesses: 1. My main concern is about the main claim of the paper, "Local SGD is of limited practical utility". In my understanding, what the authors indeed did is to design learning rate schedules with their best efforts (via LocalAdaScale), and found that Local SGD is not faster than SGD if the communication cost is not strikingly large. I believe a more fair summary of their efforts could be "tuning learning rates of Local SGD alone may not be able to gain much over SGD" because other hyperparameters may also be very important, but this paper doesn't include any empirical study on them. For example, although this paper mainly views Lin et al.
(2020) as a paper showing negative results of Local SGD, what they actually do in the paper is to propose a method, called "Post-local SGD", which is Local SGD with a special schedule of $H$: it starts with 1 and then increases to a larger constant halfway through training. In fact, Lin et al. (2020) found that this method can even beat SGD with the same number of training steps. So tuning the schedule of H, instead of tuning the schedule of learning rates, may improve the practical utility of Local SGD, but this paper doesn't have any related experiments. I would recommend the authors restrict their claim to tuning the learning rates alone. 2. Another issue of this paper is that it doesn't make a clear distinction between the previous efforts studying optimization and generalization. * Many tricks on learning rates, such as learning rate decay and warmup, are not only aiming for faster optimization. A folklore observation in deep learning is that starting training with a small learning rate can be much faster than that with a large learning rate, but in the end, the test accuracy is worse. The theory part of this paper only studies the learning rate from the optimization aspect, but in the experiments, the test accuracy is always reported. It is thus unclear whether the learning rates found by AdaScale are indeed "optimal" for test accuracy. * Previous works such as Lin et al. (2020) and Gu et al. (2023) are studying the generalization aspect of Local SGD. Their claim is that Local SGD (with LR and H set properly) can achieve better test accuracy, although the training loss could be a bit worse than SGD. All their experiments are conducted in standard setups with CIFAR-10 and ImageNet, so it is unfair to ignore the results and claim the scenarios being studied are "impractical" (Page 4). 
I would recommend the authors write more clearly that the current paper only studies Local SGD in terms of the convergence speed of training loss, and report the training loss in all their figures. Requested Changes: The paper is of high quality overall and I would like to vote for acceptance. However, I encourage the authors to rephrase their main claim on the practical utility of Local SGD and make a clearer distinction between optimization and generalization. Typos:
* Page 3, Contribution 3: "Local SGD convergences faster" -> converges
* Page 12, "our findings on the value of Local SGD might limited to those conditions." -> might be
Broader Impact Concerns: This work studies the optimization methods in deep learning and raises no ethical issues. ================================================== Review 2: Summary: This work studies the convergence of Local SGD. In particular, the authors focus on ERM problems and derive an upper bound on the expected decrease in the loss function in each iteration as a function of the number of workers $K$ and the number of local iterations $H$, when assuming L-smooth functions. They then derive a heuristic learning rate adaptation strategy that minimizes their upper bound on the expected training loss after $H$ local iterations. When $H=1$, they recover the AdaScale method, and thus they term their proposed strategy Local AdaScale. Following the AdaScale paper, the "effective" number of training iterations is tracked based on a specified "gain scale," which depends on the online gradient norms, an upper bound on the stochastic gradient variance, and the learning rate... coarsely speaking, using a smaller learning rate can, to first order, result in a slower iteration counter, and thus require more physical iterations to train for a given "effective" number of iterations. As a result, the proposed method not only adapts the Local SGD learning rate, but also the total number of Local SGD training iterations.
Numerical experiments are provided on supervised image classification with convolutional networks. The primary baselines are a) linear learning rate scaling with Synchronous SGD with $K$ workers and $H$ gradient accumulation steps, b) AdaScale learning rate scaling with Synchronous SGD with $K$ workers and $H$ gradient accumulation steps, c) linear learning rate scaling with Local SGD with $K$ workers and $H$ local gradient steps, d) Local AdaScale learning rate scaling with Local SGD with $K$ workers and $H$ local gradient steps. Empirically, the proposed Local AdaScale strategy results in better generalization on the validation set than linear learning rate scaling with Local SGD. However, the authors also find that Local SGD requires a significant imbalance between communication and computation time (at least 3x in many cases) to provide any wall-clock speedups compared to large mini-batch SGD, since Local AdaScale can require many more training iterations to reach a certain level of performance. Strengths and Weaknesses: ### Strengths * This work is very well written and easy to follow; indeed, it was a pleasure to read this paper. * The theory is also quite straightforward and provides a clear motivation for the proposed adaptive learning rate scaling. * Numerical experiments provide compelling evidence for the advantage of the proposed adaptive learning rate scaling method compared to the linear scaling heuristic. * The paper is also quite honest in taking a critical look at when one would expect Local SGD to be advantageous in practice, and the authors are not afraid to point out that extreme communication imbalance is required in practice to realize a speedup with Local SGD. ### Weaknesses * All arguments in the paper follow by minimizing an upper bound on the expected loss function after $H$ local iterations. However, the practical quantity of interest is the validation accuracy. 
Given the non-convexity of the considered problem, decreasing the training loss as quickly as possible often leads to suboptimal solutions, especially in the case of supervised image classification. How do we reconcile the theory derived with respect to the training loss, and the practical quantity of interest, generalization and validation accuracy?
* The proposed adaptation strategy requires double the communication overhead of Synchronous SGD with gradient accumulation; however, the authors are explicit about this limitation in the work.
* Numerical experiments are only conducted for supervised image classification with convolutional networks; however, the authors are also explicit about this limitation in the work.
* According to details in the appendix, momentum is used in all numerical experiments; however, it is not clear to me how this is incorporated in practice, especially in the case of Local SGD. I am concerned that local momentum buffers quickly fall out of sync, and this will degrade the performance of the Local SGD baseline with heuristic linear or square-root scaling of the learning rate. Are there both global and local momentum buffers? If so, how do they interact? If not, how do you synchronize momentum buffers? Would properly handling momentum buffers change the conclusions or decrease the performance gap? For instance, there is Slow-Momentum [1] or BMUF [2], which take special care in handling the momentum buffers in distributed optimization.
* Based on the training loss curves in the appendix, it appears to me that all methods are still trained with a step-wise learning-rate decay. Is this indeed the case, that all numerical experiments are conducted with a step-wise learning rate decay? How does such a learning-rate schedule interact with the proposed Local AdaScale learning-rate scaling rule, and how do you reconcile this gap between theory and practice?
* Why specifically focus on an AdaScale type derivation of the scaling rule, especially since the choice to minimize an upper bound on the training loss in each iteration is somewhat arbitrary given that the relevant quantity at the focus of this work is generalization/validation accuracy? For instance, why not consider Adagrad or Adam style updates? [1] Wang et al., Slowmo: Improving communication-efficient distributed sgd with slow momentum, ICLR 2020. [2] Chen et al., Scalable training of deep learning machines by incremental block training with intra-block parallel optimization and blockwise model-update filtering, ICASSP 2016. Requested Changes: * Please include a discussion and clarify how to reconcile the theory derived with respect to the training loss, and the practical quantity of interest, generalization and validation accuracy. * Please clarify and make it clear in the main paper how momentum is used in the numerical experiments. Are there both global and local momentum buffers? If so, how do they interact? If not, how do you synchronize momentum buffers? Would properly handling momentum buffers change the conclusions or decrease the performance gap? How would the results change if one incorporated existing techniques for correctly tracking the momentum buffers, as in SloMo? * Please clarify whether the numerical experiments are conducted with a pre-specified step-wise learning rate decay rule on top of the proposed Local AdaScale rule. If so, please discuss how you reconcile this gap between theory and practice? * Please include a broader discussion on learning rate adaptation methods and their use in the literature. My main concern with this work is the large gap between theory (motivation and derivation of an adaptive learning rate), and practice (unsynchronized momentum buffers, step-wise decay, validation accuracy vs training loss, etc.). 
I am concerned that the simple and well-written exposition may provide a misleading picture for researchers, students, or practitioners trying to understand the behaviour of Local SGD. Broader Impact Concerns: No concerns. ================================================== Review 3: Summary: This article studies the importance of the learning rate choice in the local SGD method for distributed optimization. Compared to the data-parallel SGD method, there is a bias in the update direction of this method, which makes the method fail to converge when the learning rate is not suitably chosen. An adaptive method is proposed, based on existing works for data-parallel SGD. Extensive numerical results show the advantage of using local SGD relative to data-parallel SGD when the communication between workers is costly. Strengths and Weaknesses: The article gives a good literature review of the topic. The presentation is quite clear and easy to read. The idea of using an adaptive learning rate as in AdaScale to reduce the bias in eq 6 seems to be new to the local SGD literature. It is nevertheless unclear how much bias is reduced in the proposed algorithm 1. Further clarifications are needed for me to understand the numerical results. Requested Changes: - In the numerical results in section 6, what is the definition of each iteration (in order to count n^{K,H} in eq 16) for the gradient accumulation methods? Is it the same k as the local SGD method? It is important to be precise on this. - As mentioned in the beginning of section 6.1, one seeks to achieve a quite high test accuracy in each of the 3 datasets. Did you use this accuracy to decide the value of n^{K,H}? If so, how? Also, why in Fig 5 can the local-lin method with K=24 still achieve such an accuracy (with a quite small number of epochs)? Is it contradictory to the result in Fig 4? Is the number of epochs = n^{K,H} / training data size?
- There is still a lack of understanding of why local SGD with the proposed learning rate scheduling can achieve good accuracy. Is it due to the reduced bias as mentioned in section 4? Some further discussion on this would be good. - In the abstract, the cost of added training iterations of local SGD is with respect to which method? Unlike in the synchronous case, which case are you referring to? In the introduction, when discussing whether local SGD is empirically preferable to SGD, you mentioned local SGD converges faster than large batch SGD. What is the measurement of the speed here (to be faster)? Broader Impact Concerns: The main result of the article remains empirical, but the topic is interesting, so it may open up new research directions in the future. ================================================== Metareview: Recommendation: Accept as is Comment: All three reviewers also recommend this be accepted, in particular after a slight adjustment in the claims. I agree with them. ==================================================
# Label Noise-Robust Learning Using A Confidence-Based Sieving Strategy

Reihaneh Torkzadehmahani *reihaneh.torkzadehmahani@tum.de*
Technical University of Munich

Reza Nasirigerdeh *reza.nasirigerdeh@tum.com*
Technical University of Munich
Helmholtz Munich

Daniel Rueckert *daniel.rueckert@tum.de*
Technical University of Munich
Imperial College London

Georgios Kaissis *g.kaissis@tum.de*
Technical University of Munich
Helmholtz Munich

Reviewed on OpenReview: *https://openreview.net/forum?id=3taIQG4C7H*

## Abstract

In learning tasks with label noise, improving model robustness against overfitting is a pivotal challenge because the model eventually memorizes labels, including the noisy ones. Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge. When training with noisy labels, the per-class confidence scores of the model, represented by the class probabilities, can be reliable criteria for assessing whether the input label is the true label or the corrupted one. In this work, we exploit this observation and propose a novel discriminator metric called *confidence error* and a *sieving* strategy called CONFES to differentiate between the clean and noisy samples effectively. We provide theoretical guarantees on the probability of error for our proposed metric. Then, we experimentally illustrate the superior performance of our proposed approach compared to recent studies on various settings, such as synthetic and real-world label noise. Moreover, we show CONFES can be combined with other state-of-the-art approaches, such as Co-teaching and DivideMix, to further improve model performance∗.

## 1 Introduction

The superior performance of deep neural networks (DNNs) in numerous application domains, ranging from medical diagnosis (De Fauw et al., 2018; Liu et al., 2019) to autonomous driving (Grigorescu et al., 2020), mainly relies on the availability of large-scale and high-quality data (Sabour et al., 2017; Marcus, 2018). Supervised machine learning in particular requires correctly annotated datasets to train highly accurate DNNs. However, such datasets are rarely available in practice due to labeling errors (leading to *label noise*) stemming from high uncertainty (Beyer et al., 2020) or lack of expertise (Peterson et al., 2019). In medical applications, for instance, there might be a disagreement between the labels assigned by radiology experts and those from the corresponding medical reports (Majkowska et al., 2020; Bernhardt et al., 2022), yielding datasets with noisy labels. Hence, it is indispensable to design and develop robust learning algorithms that are able to alleviate the adverse impact of noisy labels during training. Throughout this paper, we will refer to these methods as *label noise learning* methods.

∗The code is available at: https://github.com/reihaneh-torkzadehmahani/confes

In the literature, there are different types of label noise including symmetric noise, pairflip noise, and instance-dependent noise. In *symmetric noise* a sample is allocated a random label, while in *pairflip noise* the label of a sample is flipped into the adjacent label (Patrini et al., 2017; Xia et al., 2020; Bai et al., 2021). In real-world scenarios, a corrupted label assigned to a sample depends on the feature values and the true label of the sample, known as *instance-dependent noise* (Liu, 2021; Zhang et al., 2021b). Training DNNs in the presence of label noise can lead to memorization of noisy labels and consequently, reduction in model generalizability (Zhang et al., 2021a; Chen et al., 2021b). Some of the existing studies for dealing with label noise focus on learning the *noise distribution*. Patrini et al. (2017); Berthon et al. (2021); Yao et al. (2021); Xia et al. (2019); Yao et al.
(2020) model the noise distribution as a transition matrix, encapsulating the probability of clean labels being flipped into noisy ones and leverage loss correction to attenuate the effect of the noisy samples. Other studies (Cheng et al., 2020; Xia et al., 2021; Wei et al., 2020) learn the *clean label distribution* and capitalize on regularization or selection of reliable samples to cope with the noisy labels. A main challenge in this line of work, also known as *sample sieving* (or *sample selection*), is to find a reliable criterion (or metric) that can efficiently differentiate between clean and noisy samples. The majority of the previous studies (Jiang et al., 2018; Han et al., 2018; Yu et al., 2019) employ the *loss value* to this end, where the samples with small loss values are considered to likely be clean ones (*small-loss trick*). A prior study (Zheng et al., 2020) proposes a confidence-based criterion and shows the label is likely noisy if the model confidence in that label is low. Our work lies in this category of confidence-based sieving metrics. Since learning noise distributions is challenging, it is rarely used in practice. Sample sieving methods, on the other hand, have multiple unsolved problems: They might not always be capable of effectively filtering out noisy labels without supplementary assistance (e.g., additional model in Co-teaching). Moreover, their performance might not be satisfactory in the presence of certain types of noises such as instance-dependent or higher levels of noise. This motivates us to develop new metrics and learning algorithms that are more robust against various types and levels of label noise with minimal additional computational overhead. Contributions. Our main contributions can be summarized as follows: - We introduce a novel metric called *confidence error* to efficiently discriminate between clean and noisy labels. 
The confidence error metric is defined as the difference between the softmax outputs/logits of the predicted and original label of a sample. Moreover, we provide a theoretical bound on the probability of error for the proposed metric. Our theoretical analysis and observations indicate there exists a clear correlation between the confidence error value and the probability of being clean. That is, a sample with a lower confidence error has a much higher probability to be a clean sample than a noisy one. - We then integrate the confidence error criterion into a learning algorithm called *CONFidence Error* Sieving (CONFES) to robustly train DNNs in the instance-dependent, symmetric, and pairflip label noise settings. The CONFES algorithm computes the confidence error associated with training samples at the beginning of each epoch and only incorporates a subset of training samples with lower confidence error values during training (i.e., likely clean samples). - We validate our findings experimentally showing that CONFES significantly outperforms the stateof-the-art learning algorithms in terms of accuracy on typical benchmark datasets for label noise learning including CIFAR-10/100 (Krizhevsky et al., 2009) and Clothing1M (Xiao et al., 2015). The superiority of CONFES becomes particularly pronounced in scenarios where the noise level is high or when dealing with more intricate forms of noise such as instance-dependent noise. - We moreover demonstrate that combining CONFES with other learning algorithms including Coteaching (Han et al., 2018), JoCor (Wei et al., 2020), and DivideMix (Li et al., 2020) provides further accuracy gain, illustrating synergy between CONFES and the existing research endeavors in the field of learning with label noise. 
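The per-epoch sieving step described in the second contribution can be sketched as follows. This is a rough illustration under our own assumptions, not the authors' implementation; in particular, `keep_ratio` is a hypothetical hyperparameter standing in for however the fraction of retained samples is scheduled (e.g., from a noise-level estimate).

```python
import numpy as np

def confes_epoch_selection(confidence_errors, keep_ratio):
    # Keep the fraction of samples with the lowest confidence error
    # (the likely-clean ones) as this epoch's training subset.
    # `keep_ratio` is a hypothetical hyperparameter, not from the paper.
    n_keep = int(len(confidence_errors) * keep_ratio)
    keep = np.argsort(confidence_errors)[:n_keep]
    return np.sort(keep)
```

For instance, with confidence errors `[0.9, 0.0, 0.5, 0.1]` and a keep ratio of 0.5, the epoch would train on samples 1 and 3 only.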
## 2 Related Work Overcoming the memorization of noisy labels plays a crucial role in label noise learning and improves model generalization by making the training process more robust to label noise (Zhang et al., 2021a; Arpit et al., 2017; Natarajan et al., 2013). The research community mainly tackled the memorization problem by adjusting the loss function (known as *loss correction*), using *implicit/explicit regularization* techniques, or refining the training data and performing *sample sieving*. Adjusting the loss function according to the noise transition probabilities is an effective method for decreasing the adverse impact of noisy samples during the training but comes at the cost of accurate estimation of the transition matrix (Patrini et al., 2017). Previous studies have paved the way for this non-trivial estimation in different ways. For instance, T-Revision (Xia et al., 2019) estimates the transition matrix without requiring anchor points (the data points whose associated class is known almost surely), which play an important role in the effective learning of the transition matrix. Dual-T (Yao et al., 2020) first divides the transition matrix into two matrices that are easier to estimate and then aggregates their outputs for a more accurate estimation of the original transition matrix. Another line of work improves model generalization by introducing regularization effects suitable for learning with noisy labels. The regularization effect may be injected *implicitly* using methods such as data augmentation and inducing stochasticity. For example, *Mixup* (Zhang et al., 2018) augments the training data using a convex combination of a pair of examples and the corresponding labels to encourage the model to learn a simple interpolation between the samples. SLN (Stochastic Label Noise) (Chen et al., 2021a) introduces a controllable noise to help the optimizer skip sharp minima in the optimization landscape. 
Although the implicit regularization techniques have been proven effective in alleviating overfitting and improving generalization, they are insufficient to tackle the label noise challenge (Song et al., 2022). Thus, the community came up with *explicit* regularization approaches such as ELR (Early-Learning Regularization) (Liu et al., 2020) and CDR (Xia et al., 2021). ELR is based on the observation that at the beginning of training, there is an early-learning phase in which the model learns the clean samples without overfitting the noisy ones. Given that, ELR adds a regularization term to the Cross-Entropy (CE) loss, leading the model output toward its own (correct) predictions at the early-learning phase. Similarly, CDR first groups the model parameters into critical and non-critical in terms of their importance for generalization and then penalizes the non-critical parameters. A completely different line of work is sample sieving/selection, which aims to differentiate the clean samples from the noisy ones and employ only the clean samples in the training process. The previous works in this direction exploit loss-based or confidence-based metrics as the sample sieving criteria. MentorNet (Jiang et al., 2018) uses an extra pre-trained model (mentor) to help the main model (student) by providing it with small-loss samples. The decoupling algorithm (Malach & Shalev-Shwartz, 2017) trains two networks simultaneously using the samples on which the models disagree about the predicted label. Co-teaching (Han et al., 2018) cross-trains two models such that each of them leverages the samples with small-loss values according to the other model. Co-teaching+ (Yu et al., 2019) improves Co-teaching by considering clean samples as those that not only have small loss but also those on which the models disagree. 
JoCoR (Wei et al., 2020) first computes a joint loss to make the outputs of the two models become closer, and then it considers the samples with small loss as clean samples. The utilization of two models in MentorNet, decoupling, Co-teaching, Co-teaching+, and JoCoR roughly doubles the computational cost compared to traditional training methods. LRT (Zheng et al., 2020) employs the likelihood ratio between the model confidence in the original label and its own predicted label and then selects the samples according to their likelihood ratio values. Our work is closely related to LRT as both employ a confidence-based metric for sample sieving. However, our metric is an absolute metric that captures the difference between the model's confidence in the given label and the predicted label. In contrast, LRT is a relative metric that is more sensitive to the model's quality. Our study belongs to the category of sample selection methods and capitalizes on model confidence to discriminate between the clean and noisy samples akin to LRT, without the need for training an additional model as required by methods like Co-teaching.

## 3 Confes: Confidence Error Based Sieving

We first provide a brief background on the training process for the classification task. Then, we introduce the proposed confidence error metric and provide a theoretical analysis of its probability of error. Afterward, we present the CONFES algorithm, which capitalizes on confidence error for effective sample sieving.

## 3.1 Background

We assume a classification task on a training dataset $D = \{(x_i, y_i) \mid x_i \in X, y_i \in Y\}_{i=1}^{n}$, where n is the number of samples and X and Y are the feature and label (class) space, respectively. The neural network model $\mathcal{F}(X_b; \theta) \in \mathbb{R}^{m \times k}$ is a k-class classifier with trainable parameters θ that takes mini-batches $X_b$ of size m as input. In real life, a sample might be assigned the wrong label (e.g., due to human error).
Consequently, *clean* (noise-free label) training datasets might not be available in practice. Given that, we assume $\tilde{Y} = \{\tilde{y}_i\}_{i=1}^{n}$ and $\tilde{D} = \{(x_i, \tilde{y}_i)\}_{i=1}^{n}$ indicate the noisy labels and noisy dataset, respectively. The training process is conducted by minimizing the empirical loss (e.g., cross-entropy) using mini-batches of samples from the noisy dataset:

$$\min_{\theta}\mathcal{L}(\mathcal{F}(X_{b};\theta);\tilde{Y}_{b})=\min_{\theta}\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(\mathcal{F}(x_{i};\theta),\tilde{y}_{i}),\tag{1}$$

where $\mathcal{L}$ is the loss function and $(X_b, \tilde{Y}_b)$ is a mini-batch of samples with size m from the noisy dataset $\tilde{D}$. Table 1 provides a comprehensive summary of all the notations used in the theoretical analysis, along with their respective definitions.

Table 1: Summary of notations and their definitions

| Notation | Definition | Notation | Definition |
|---|---|---|---|
| $(x_i, \tilde{y}_i)$ | Sample i with features $x_i$ and possibly noisy label $\tilde{y}_i$ | $y_i'$ | Predicted label for sample i |
| n | Total number of samples | k | Total number of classes/labels |
| $\mathcal{F}(\cdot;\theta)$ | A classifier with weights θ | $H^{*}(\cdot)$ | The optimal Bayes classifier |
| $\sigma(\cdot)$ | Softmax activation function | $C^{(l)}$ | Model confidence for label l |
| $\mathcal{P}_j(\cdot)$ | True conditional probability for label j | $\tilde{\mathcal{P}}_j(\cdot)$ | Noisy conditional probability for label j |
| $\mathcal{L}(\cdot)$ | Loss function (e.g., cross-entropy) | $\tau_{lj}$ | The probability that label l is flipped to label j |
| v | The best prediction of the Bayes optimal classifier | w | The second-best prediction of the Bayes optimal classifier |
| α | Sieving threshold | $E_C(\cdot)$ | Confidence error for a sample |
| ϵ | Maximum approximation error of the classifier | ψ | A placeholder variable |
| $\mathcal{O}$ | Order/asymptotic notation (the upper bound of complexity) | µ, β, γ | Tsybakov noise condition variables (µ ∈ (0, 1], β, γ > 0) |

In the presence of label noise, the efficiency of the training process mainly depends on the capability of the model to distinguish between clean and noisy labels and to diminish the impact of noisy ones on the training process. In this study, we propose an elegant metric called *confidence error* for efficient sieving of the samples during training.

## 3.2 Confidence Error As The Sieving Metric

Consider a sample $s = (x_i, \tilde{y}_i)$ from the noisy dataset $\tilde{D}$. The k-class classifier $\mathcal{F}(x_i; \theta)$ takes $x_i$ as input and computes the weight value associated with each class as output. Moreover, assume $\sigma(\cdot)$ is the softmax activation function such that $\sigma(\mathcal{F}(x_i; \theta)) \in [0, 1]^{k}$ takes the classifier's output and computes the predicted probability for each class. We define the *model confidence* for a given label $l \in \{1, \ldots, k\}$ associated with sample s as the prediction probability assigned to the label:

$$C^{(l)}=\sigma(\mathcal{F}(x_{i};\theta))^{(l)}\tag{2}$$

The class with the maximum probability is considered as the predicted class, i.e., $y_i'$, for sample s:

$$y_{i}^{\prime}=\operatorname*{arg\,max}_{j\in\{1,\ldots,k\}}\sigma^{(j)}(\mathcal{F}(x_{i};\theta)),\tag{3}$$

The *confidence error* $E_C(s)$ for sample s is defined as the difference between the probability assigned to the predicted label $y_i'$ and the probability associated with the original label $\tilde{y}_i$:

$$E_{C}(s)=C^{(y_{i}^{\prime})}-C^{(\tilde{y}_{i})},\tag{4}$$

where $E_C(s) \in [0, 1]$. In other words, the confidence error states how far the model confidence in the original class is from the model confidence in the predicted class.
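A minimal sketch of this computation on a batch of classifier outputs (illustrative only; the array shapes and names are our own assumptions):

```python
import numpy as np

def confidence_error(logits, given_labels):
    # Eq. (2): softmax turns the classifier outputs into per-class
    # confidences C^(l); subtracting the row max is for stability.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Eq. (3): the predicted class is the argmax, so its probability
    # is simply the row maximum.
    pred_conf = probs.max(axis=1)
    # Confidence in the given (possibly noisy) label y~.
    given_conf = probs[np.arange(len(given_labels)), given_labels]
    # Eq. (4): E_C = C^(y') - C^(y~), in [0, 1]; zero when the given
    # label coincides with the prediction.
    return pred_conf - given_conf
```

When the given labels match the model's predictions the error is exactly zero, and it grows toward one as the model becomes confident in a different class than the given label.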
A confidence error of zero implies that the original and predicted classes are the same.

## 3.3 Probability Of Error

In the following, we theoretically prove that the probability that the confidence error wrongly identifies noisy labels as clean ones, and vice versa, is bounded. Presume H∗ is a Bayes optimal classifier that predicts the correct label according to the true conditional probability Pj(x) = Pr[y = j|x]. Consider v = H∗(x) = arg maxj Pj(x) as H∗'s best prediction and w = arg maxj,j̸=v Pj(x) as its second-best prediction. Define P˜j(x) as the noisy conditional probability and ϵ as the maximum approximation error of the classifier F:

$$\tilde{\mathcal{P}}_{j}(x)=\Pr[\tilde{y}=j|x]=\sum_{l=1}^{k}\Pr[\tilde{y}=j|y=l]\,\mathcal{P}_{l}(x)=\sum_{l=1}^{k}\tau_{lj}\,\mathcal{P}_{l}(x),\qquad\epsilon=\max_{x,j}\left|C^{(j)}-\tilde{\mathcal{P}}_{j}(x)\right|,\tag{5}$$

where τlj represents the probability that label l is flipped to label j. Presume the true conditional probability P meets the multi-class Tsybakov noise condition (Zheng et al., 2020), which guarantees the presence of a margin (region of uncertainty) around the decision boundary separating different classes. This implies that the true conditional probabilities are sufficiently far apart, so there is a reasonable level of distinguishability between the classes.

Lemma 1. *Given the true conditional probability P satisfying the multi-class Tsybakov noise condition, there exists*

$$\alpha=\min\left\{1,\;\min_{x}\left(\tau_{\tilde{y}\tilde{y}}\,\mathcal{P}_{w}(x)+\sum_{l\neq\tilde{y}}\tau_{l\tilde{y}}\,\mathcal{P}_{l}(x)\right)\right\}$$

*such that*

$$\Pr\left[\tilde{y}=H^{*}(x),\,C^{(\tilde{y})}(x)<\alpha\right]\leq\beta\left[\mathcal{O}(\epsilon)\right]^{\gamma},$$

*for constants* µ ∈ (0, 1], β, γ > 0, *and* ϵ < µ minj τjj.

Proof. The proof can be found in Zheng et al. (2020).

In simple terms, Lemma 1 states that if the model confidence in a given label is low, that label has only a limited probability of being correct.
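As a concrete instance (our illustration, assuming the standard symmetric noise model with rate η over k classes, i.e., τjj = 1 − η and τlj = η/(k − 1) for l ≠ j), the threshold in Lemma 1 specializes to

$$\alpha=\min\left\{1,\;\min_{x}\left[(1-\eta)\,\mathcal{P}_{w}(x)+\frac{\eta}{k-1}\big(1-\mathcal{P}_{\tilde{y}}(x)\big)\right]\right\},$$

since $\sum_{l\neq\tilde{y}}\mathcal{P}_{l}(x)=1-\mathcal{P}_{\tilde{y}}(x)$.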
The probability of correctness is determined by ϵ, the maximum approximation error of the model, which tends to be small in practical scenarios (Zheng et al., 2020). In other words, Lemma 1 implies that the error bound for the model confidence is small in practice.

Now, we provide the error bound for our proposed metric. We consider two possible error cases: (I) the label is noisy according to the optimal Bayes classifier H∗, but our metric recognizes it as clean, and (II) the label is clean according to H∗, but our metric identifies it as noisy. In the following theorem, we show that the probability of making either of these two errors is bounded.

Theorem 1. *Given that the true conditional probability P satisfies the multi-class Tsybakov noise condition for constants* µ ∈ (0, 1], β, γ > 0, *and* ϵ < µ minj τjj*, we have:*

*Case (I): Let the threshold*

$$\alpha=\max_{x}\left\{-\sigma^{(\tilde{y})}(x)+\tau_{y'y'}\,\mathcal{P}_{w}(x)+\sum_{l\neq y'}\tau_{ly'}\,\mathcal{P}_{l}(x)\right\},$$

*then:*

$$\Pr\left[\tilde{y}\neq H^{*}(x),\,E_{C}(x,\tilde{y})\leq\alpha\right]\leq\beta\left[\mathcal{O}(\epsilon)\right]^{\gamma}+\psi.\tag{6}$$

*Case (II): Let the threshold*

$$\alpha=\min_{x}\left\{\sigma^{(y')}(x)-\tau_{\tilde{y}\tilde{y}}\,\mathcal{P}_{w}(x)-\sum_{l\neq\tilde{y}}\tau_{l\tilde{y}}\,\mathcal{P}_{l}(x)\right\},$$

*then:*

$$\Pr\left[\tilde{y}=H^{*}(x),\,E_{C}(x,\tilde{y})>\alpha\right]\leq\beta\left[\mathcal{O}(\epsilon)\right]^{\gamma}.\tag{7}$$

Proof.
In **Case (I)**, the predicted label either coincides with the Bayes optimal classifier's prediction or it does not:

$$\begin{array}{rl}\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]=&\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha,H^{*}(x)=y'\right]\\&+\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha,H^{*}(x)\neq y'\right].\end{array}\tag{8}$$

Simplifying the terms and using the definition of H∗(x) yields:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{y'}(x),E_{C}(x,\tilde{y})\leq\alpha\right]+\Pr\left[\tilde{y}\neq v,v\neq y'\right].\tag{9}$$

By substituting the definition of the confidence error, we have:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{y'}(x),\sigma^{(y')}(x)-\sigma^{(\tilde{y})}(x)\leq\alpha\right]+\Pr\left[\tilde{y}\neq v,v\neq y'\right].\tag{10}$$

Then, we set ψ = Pr[y˜ ̸= v, v ̸= y′] and substitute σ^(y′)(x) with its lower bound P˜y′(x) − ϵ based on the definition of the maximum approximation error:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{y'}(x),\tilde{\mathcal{P}}_{y'}(x)\leq\alpha+\sigma^{(\tilde{y})}(x)+\epsilon\right]+\psi.\tag{11}$$

Next, we expand the P˜y′ term:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{y'}(x),\,\tau_{y'y'}\mathcal{P}_{y'}(x)+\sum_{l\neq y'}\tau_{ly'}\,\mathcal{P}_{l}(x)\leq\alpha+\sigma^{(\tilde{y})}(x)+\epsilon\right]+\psi,\tag{12}$$

and simplify the resulting inequality as follows:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{y'}(x)\leq\frac{\sigma^{(\tilde{y})}(x)+\alpha-\sum_{l\neq y'}\tau_{ly'}\,\mathcal{P}_{l}(x)}{\tau_{y'y'}}+\frac{\epsilon}{\tau_{y'y'}}\right]+\psi.\tag{13}$$

Then, we substitute the defined threshold α:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{y'}(x)\leq\mathcal{P}_{w}(x)+\frac{\epsilon}{\tau_{y'y'}}\right]+\psi.\tag{14}$$

Utilizing the multi-class Tsybakov noise condition completes the proof for the first case:

$$\Pr\left[\tilde{y}\neq H^{*}(x),E_{C}(x,\tilde{y})\leq\alpha\right]\leq\beta\left[\frac{\epsilon}{\tau_{y'y'}}\right]^{\gamma}+\psi.\tag{15}$$

Similarly, we calculate the bound for **Case (II)** as follows:

$$\begin{array}{rl}\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha\right]=&\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha,H^{*}(x)\neq y'\right]\\&+\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha,H^{*}(x)=y'\right].\end{array}\tag{16}$$

Upon simplifying the terms and substituting the definition of the confidence error, we have:

$$\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{\tilde{y}}(x),\sigma^{(y')}-\sigma^{(\tilde{y})}>\alpha\right]+\Pr\left[\tilde{y}=H^{*}(x)=y',\sigma^{(y')}-\sigma^{(\tilde{y})}>\alpha\right].\tag{17}$$

Substituting the definition of ϵ yields the following expression:
$$\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{\tilde{y}}(x),\sigma^{(y')}-\alpha>\sigma^{(\tilde{y})}\geq\tilde{\mathcal{P}}_{\tilde{y}}(x)-\epsilon\right]+0.\tag{18}$$

Then, we expand the P˜y˜ term,

$$\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{\tilde{y}}(x),\sigma^{(y')}-\alpha>\sigma^{(\tilde{y})}\geq\tau_{\tilde{y}\tilde{y}}\mathcal{P}_{\tilde{y}}(x)+\sum_{l\neq\tilde{y}}\tau_{l\tilde{y}}\,\mathcal{P}_{l}(x)-\epsilon\right],\tag{19}$$

and simplify it as follows:

$$\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha\right]\leq\Pr\left[\mathcal{P}_{w}(x)\leq\mathcal{P}_{\tilde{y}}(x)\leq\frac{\sigma^{(y')}(x)-\alpha-\sum_{l\neq\tilde{y}}\tau_{l\tilde{y}}\,\mathcal{P}_{l}(x)}{\tau_{\tilde{y}\tilde{y}}}+\frac{\epsilon}{\tau_{\tilde{y}\tilde{y}}}\right].\tag{20}$$

Substituting the threshold α based on its definition and employing the multi-class Tsybakov noise condition results in the following inequality:

$$\Pr\left[\tilde{y}=H^{*}(x),E_{C}(x,\tilde{y})>\alpha\right]\leq\beta\left[\frac{\epsilon}{\tau_{\tilde{y}\tilde{y}}}\right]^{\gamma},\tag{21}$$

which completes the proof of Theorem 1.

This theorem establishes that the probability of error in distinguishing between clean and noisy samples is bounded if we utilize the confidence error as the sieving criterion. In the following, we present CONFES, which capitalizes on the confidence error metric for sieving the samples in label noise scenarios.

## 3.4 CONFES Algorithm

Previous studies (Bai et al., 2021; Liu et al., 2020) show that deep neural networks tend to memorize noisy samples, which can have a detrimental effect on the model utility. Therefore, it is crucial to detect the noisy samples and alleviate their adverse impact, especially in the early steps of training.
The CONFES algorithm takes this into consideration by sieving the training samples using the confidence error metric and completely excluding the identified noisy samples during training. CONFES (Algorithm 1) consists of three main steps at each epoch: (1) sieving the samples, (2) building the refined training set, and (3) training the model.

**Algorithm 1:** Confidence error based sieving (CONFES)

**Input:** Noisy training dataset D˜ = {(xi, y˜i)}ni=1, model Fθ, number of training epochs T, initial sieving threshold α, number of warm-up epochs Tw, batch size m
**Output:** Trained model Fθ

1 for i = 0, . . . , T − 1 do
2 &nbsp;&nbsp; αi = max(α − i · α/Tw, 0) /* set sieving threshold */
3 &nbsp;&nbsp; Dci = {s ∈ D˜ | EC(s) ≤ αi} /* compute confidence error using Equation 4 and sieve clean samples */
4 &nbsp;&nbsp; Dai = Dci ⊕ {(xj, y˜j) ∈ Dci s.t. j = 1, . . . , size(D˜) − size(Dci)} /* build new dataset (clean ⊕ duplicates) */
5 &nbsp;&nbsp; for each mini-batch β = {(xj, y˜j)}mj=1 ∈ Dai do /* train the model on the new dataset */
6 &nbsp;&nbsp;&nbsp;&nbsp; update model Fθ on mini-batch β using Equation 1
7 return trained model Fθ

In the sieving step, the confidence error of each training sample is computed using Equation 4; then, the samples whose confidence error is less than or equal to αi (the sieving threshold at epoch i) are considered clean, whereas the remaining samples are assumed to be noisy and excluded from training. The per-epoch sieving threshold αi is computed from two hyper-parameters, the initial sieving threshold α and the number of warm-up epochs Tw: αi is linearly reduced from α to zero over the Tw warm-up epochs. The idea of an adaptive sieving threshold αi is based on the observation that generalization occurs in the initial epochs of training, while memorization gradually unfolds afterward (Stephenson et al., 2021; Liu et al., 2020).
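The sieving and duplication steps of one epoch (lines 2-4 of Algorithm 1) can be sketched as follows. This is our own minimal sketch: `conf_errors` is assumed to hold the per-sample confidence errors from Equation 4, and the function name is ours.

```python
import numpy as np

def confes_refined_indices(conf_errors, alpha, epoch, warmup_epochs, rng):
    # Line 2: linearly anneal the sieving threshold from alpha to 0
    alpha_i = max(alpha - epoch * alpha / warmup_epochs, 0.0)
    # Line 3: keep samples whose confidence error is at most alpha_i
    clean = np.flatnonzero(conf_errors <= alpha_i)
    # Line 4: pad with duplicates of clean samples so the refined dataset
    # is as large as the original one (assumes at least one clean sample)
    n_missing = conf_errors.shape[0] - clean.shape[0]
    duplicates = rng.choice(clean, size=n_missing, replace=True)
    return np.concatenate([clean, duplicates])
```

The model is then trained for one epoch on mini-batches drawn from the returned indices (line 6 of Algorithm 1).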
We capitalize on the warm-up mechanism with an adaptive sieving threshold by training the model on a carefully selected subset of samples, which are potentially clean according to their confidence error values, instead of using all samples *from the beginning* of training, laying a solid foundation for the learning process.

In the second step, a new training dataset is created by concatenating (⊕) the identified clean samples and their duplicates such that this dataset becomes as large as the initial training set. Sieving the clean samples reduces the number of training samples due to the exclusion of noisy samples; duplicating the clean samples accounts for this reduction and emphasizes learning the potentially clean samples. Moreover, in line with a previous study (Carlini et al., 2023) indicating that duplication is a strong promoter of learning, duplicating clean samples produces a strong learning signal, which improves the algorithm overall. Finally, the model is trained on this augmented dataset.

## 4 Evaluation

We draw a performance comparison between CONFES and recent baseline approaches in three label noise settings: symmetric, pairflip, and instance-dependent. In the following, we first describe the experimental setting and then present and discuss the comparison results. Moreover, we provide additional results regarding the effectiveness of the confidence error metric and the CONFES algorithm in sieving the samples, as well as the sensitivity of CONFES to its hyper-parameters.

## 4.1 Experimental Setup

Datasets. We utilize the CIFAR-10/100 datasets (Krizhevsky et al., 2009) and make them noisy using different types of synthetic label noise.
Furthermore, we incorporate the Clothing1M dataset (Xiao et al., 2015), a naturally noisy benchmark dataset widely employed in previous studies. CIFAR-10/100 contain 50,000 training samples and 10,000 test samples of shape 32 × 32 from 10/100 classes. For the CIFAR datasets, we perturb the training labels using the symmetric, pairflip, and instance-dependent label noise introduced in Xia et al. (2020), but keep the test set clean. Following the data augmentation/preprocessing procedure of previous works (Liu et al., 2020; Li et al., 2020), the training samples are horizontally flipped with probability 0.5, randomly cropped to size 32 × 32 with padding 4, and normalized using the mean and standard deviation of the dataset. Clothing1M is a real-world dataset of 1 million images of size 224 × 224 with noisy labels (with an estimated noise level of approximately 38% (Wei et al., 2022; Song et al., 2019)) and 10k clean test images from 14 classes. Following prior studies (Liu et al., 2020; Li et al., 2020), the data augmentation steps performed on the Clothing1M dataset include 256 × 256 resizing, 224 × 224 random cropping, and random horizontal flipping. In Clothing1M, the number of samples per class is imbalanced; we follow Li et al. (2020) and sample a class-balanced subset of the training dataset at each epoch.

State-of-the-art methods.
On all considered datasets, we compare CONFES with the most recent related studies, including (1) standard cross-entropy loss (CE); (2) Co-teaching (Han et al., 2018), which cross-trains two models, uses the small-loss trick for selecting clean samples, and exchanges them between the two models; (3) ELR (Liu et al., 2020), an early-learning regularization method that leverages the model output during the early-learning phase; (4) CORES2 (Cheng et al., 2020), a sample sieving approach that uses confidence regularization to lead the model towards more confident predictions; (5) PES (Bai et al., 2021), a progressive early-stopping strategy; (6) SLN (Chen et al., 2021a), which improves regularization by introducing stochastic label noise; and (7) LRT (Zheng et al., 2020), a confidence-based algorithm that leverages likelihood ratio values for sample selection. Co-teaching, CORES2, and LRT are based on sample sieving, whereas ELR and SLN are regularization-based methods. For all methods, the specific hyper-parameters are set according to the corresponding manuscript or the published source code, if available.

Parameter Settings and Computational Resources. We conduct the experiments on a single-GPU system equipped with an NVIDIA RTX A6000 graphics processor and 48GB of GPU memory. Our method is implemented in PyTorch v1.9. For all methods, we evaluate the average test accuracy over the last five epochs, and for Co-teaching, we report the average of this metric over its two networks. Following previous works (Li et al., 2020; Bai et al., 2021), we train the PreActResNet-18 (He et al., 2016) model on CIFAR-10 and CIFAR-100 using the SGD optimizer with a momentum of 0.9, a weight decay of 5e-4, and a batch size of 128. The initial learning rate is set to 0.02, which is decreased by 0.01 over 300 epochs using a cosine annealing scheduler (Loshchilov & Hutter, 2017). For the Clothing1M dataset, we adopt the setting from Li et al. (2020) and train a ResNet-50 model for 80 epochs.
The optimizer is SGD with a momentum of 0.9 and a weight decay of 1e-3. The initial learning rate is 0.002, which is reduced by a factor of 10 at epoch 40. At each epoch, the model is trained on 1000 mini-batches of size 32. Note that the ResNet-50 has been pretrained on ImageNet (Deng et al., 2009).

## 4.2 Results

CIFAR-10/100 datasets. Tables 2 and 3 list the test accuracy values for different noise types and noise rates on the CIFAR-10 and CIFAR-100 datasets, respectively. According to these tables, CONFES outperforms its competitors for all considered symmetric, pairflip, and instance-dependent noise types. Similarly, CONFES delivers higher accuracy than the competitors across the different noise rates. Moreover, as the noise level increases, the accuracy gap between CONFES and its competitors widens in favor of CONFES. Figure 1 illustrates the test accuracy versus epoch for the different learning algorithms. As shown in the figure, CONFES is robust against overfitting: the corresponding test accuracy continues to increase as training moves forward and stays at the maximum after the model converges. Some of the other algorithms, such as SLN and ELR, on the other hand, suffer from overfitting, as their final accuracy values are lower than the maximum accuracy they achieve.

![8_image_0.png](8_image_0.png)

Figure 1: **Test accuracy** for **PreAct-ResNet18** trained on **CIFAR-100**: CONFES is robust against overfitting, whereas some competitors, including SLN and ELR, suffer from overfitting; the noise level is 40%.

Table 2: Test accuracy on **CIFAR-10** for different noise types with noise level 40%.
| Method | Symmetric | Pairflip | Instance |
|---|---|---|---|
| CONFES (ours) | 90.62±0.2 | 86.18±0.3 | 90.28±0.2 |
| CE | 66.61±0.4 | 59.25±0.1 | 66.04±0.2 |
| Co-teaching (Han et al., 2018) | 87.42±0.2 | 84.57±0.2 | 86.90±0.1 |
| ELR (Liu et al., 2020) | 85.74±0.2 | 86.15±0.1 | 85.37±0.3 |
| CORES2 (Cheng et al., 2020) | 83.90±0.4 | 58.38±0.6 | 76.71±0.4 |
| LRT (Zheng et al., 2020) | 85.47±0.3 | 59.25±0.3 | 80.53±0.9 |
| PTD (Xia et al., 2020) | 72.05±0.9 | 58.34±0.8 | 65.97±0.9 |
| PES (Bai et al., 2021) | 90.55±0.1 | 85.56±0.1 | 85.63±0.5 |
| SLN (Chen et al., 2021a) | 83.69±0.2 | 85.26±0.5 | 67.71±0.4 |

Combining CONFES with state-of-the-art algorithms. Table 4 shows the accuracy values of Co-teaching, JoCoR, and DivideMix when the confidence error is used as the discriminator metric instead of the training loss. As shown in the table, combining these algorithms with CONFES enhances their accuracy by 2-5% compared to their baseline performance. This indicates that the confidence error is effective not only as the main building block of the proposed CONFES algorithm but also in combination with other state-of-the-art methods, including DivideMix, a complex method employing data augmentation and guessing or refining the noisy labels rather than excluding them, which helps utilize the noisy samples and learn their feature information.
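The swap described above amounts to replacing the per-sample loss with the confidence error in the small-loss selection rule of these methods. A minimal sketch with our own naming, where `keep_ratio` would typically be set to one minus the estimated noise rate, as in Co-teaching:

```python
import numpy as np

def select_small_confidence_error(conf_errors, keep_ratio):
    # Small-loss-style selection, but ranking by confidence error:
    # treat the keep_ratio fraction with the smallest E_C as clean.
    n_keep = int(len(conf_errors) * keep_ratio)
    return np.argsort(conf_errors)[:n_keep]
```

The returned indices replace the small-loss subset that these methods would otherwise train on or exchange between networks.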
Table 3: Test accuracy on **CIFAR-100** for various label noise types with different noise rates.

(a) Instance-dependent

| Method | 20% | 40% | 60% |
|---|---|---|---|
| CONFES (ours) | 73.59±0.2 | 69.68±0.2 | 59.48±0.1 |
| CE | 63.16±0.1 | 48.92±0.3 | 30.65±0.4 |
| Co-teaching (Han et al., 2018) | 71.12±0.3 | 66.55±0.3 | 57.18±0.2 |
| ELR (Liu et al., 2020) | 63.10±0.2 | 49.15±0.2 | 29.88±0.6 |
| CORES2 (Cheng et al., 2020) | 64.55±0.1 | 50.98±0.2 | 33.93±0.5 |
| LRT (Zheng et al., 2020) | 73.14±0.2 | 65.32±0.6 | 45.37±0.1 |
| MentorMix (Jiang et al., 2020) | 69.41±0.2 | 56.41±0.1 | 34.61±0.1 |
| PES (Bai et al., 2021) | 71.65±0.3 | 64.83±0.2 | 41.10±0.5 |
| SLN (Chen et al., 2021a) | 60.08±0.1 | 46.08±0.3 | 29.77±0.4 |

(b) Pairflip

| Method | 20% | 30% | 40% |
|---|---|---|---|
| CONFES (ours) | 73.12±0.1 | 71.34±0.2 | 62.37±0.4 |
| CE | 64.31±0.3 | 55.77±0.1 | 45.62±0.4 |
| Co-teaching (Han et al., 2018) | 69.59±0.2 | 64.04±0.4 | 55.42±0.5 |
| ELR (Liu et al., 2020) | 62.05±0.5 | 54.44±0.2 | 44.31±0.3 |
| CORES2 (Cheng et al., 2020) | 63.85±0.2 | 54.88±0.3 | 45.34±0.2 |
| LRT (Zheng et al., 2020) | 71.70±0.1 | 60.78±0.1 | 46.24±0.2 |
| MentorMix (Jiang et al., 2020) | 69.65±0.1 | 62.01±0.1 | 50.97±0.2 |
| PES (Bai et al., 2021) | 71.73±0.4 | 68.28±0.3 | 59.18±0.2 |
| SLN (Chen et al., 2021a) | 61.82±0.3 | 53.67±0.2 | 45.72±0.2 |

(c) Symmetric

| Method | 20% | 40% | 60% |
|---|---|---|---|
| CONFES (ours) | 73.89±0.1 | 69.63±0.2 | 60.65±0.1 |
| CE | 63.46±0.7 | 47.85±0.4 | 29.59±0.3 |
| Co-teaching (Han et al., 2018) | 71.54±0.3 | 66.26±0.1 | 58.82±0.1 |
| ELR (Liu et al., 2020) | 63.59±0.1 | 48.33±0.2 | 30.37±0.1 |
| CORES2 (Cheng et al., 2020) | 65.99±0.5 | 52.26±0.2 | 34.61±0.2 |
| LRT (Zheng et al., 2020) | 73.72±0.1 | 66.52±0.2 | 50.86±0.4 |
| MentorMix (Jiang et al., 2020) | 71.52±0.2 | 61.96±0.2 | 44.38±0.3 |
| PES (Bai et al., 2021) | 71.42±0.2 | 68.37±0.2 | 60.38±0.1 |
| SLN (Chen et al., 2021a) | 60.48±0.1 | 46.98±0.2 | 28.50±0.2 |

Table 4: Test accuracy for CONFES combined with other approaches on CIFAR-100 with noise rate 40%.

| Method | Symmetric | Pairflip | Instance |
|---|---|---|---|
| Co-teaching (Han et al., 2018) | 66.26±0.1 | 55.42±0.5 | 66.55±0.3 |
| CONFES-Co-teaching | 69.94±0.1 | 57.90±0.2 | 69.51±0.1 |
| Improvement | +3.68 | +2.48 | +2.96 |
| DivideMix (Li et al., 2020) | 74.63±0.2 | 74.9±0.1 | 66.79±0.3 |
| CONFES-DivideMix | 76.31±0.2 | 76.51±0.1 | 69.03±0.1 |
| Improvement | +1.68 | +1.61 | +2.24 |
| JoCoR (Wei et al., 2020) | 67.05±0.2 | 54.96±0.3 | 67.46±0.2 |
| CONFES-JoCoR | 70.48±0.2 | 59.61±0.1 | 70.24±0.4 |
| Improvement | +3.43 | +4.65 | +2.78 |

Table 5: Test accuracy on the Clothing1M dataset.

| Method | CE | ELR | CORES2 | PES | SLN | CONFES (ours) |
|---|---|---|---|---|---|---|
| Test accuracy | 69.21% | 71.39% | 69.50% | 69.18% | 72.80% | 73.24% |

Clothing1M dataset. Table 5 summarises the performance of the methods on the Clothing1M dataset. CORES2 and PES provide slight or no accuracy gain compared to the baseline cross-entropy training. CONFES, on the other hand, outperforms the competitors, including ELR and SLN.

Effectiveness of confidence error. We design an experiment to illustrate the effectiveness of the confidence error metric: we employ the SGD optimizer and the cross-entropy loss function to train PreActResNet-18 on CIFAR-100, where 40% of the labels are made noisy using instance-dependent noise. At the beginning of each epoch, the model computes the confidence error for all training samples and sorts them in ascending order by the confidence error value.
The model considers the first 60% of the samples, those with the lowest confidence error values, as clean and only incorporates them during training. This procedure is repeated for 200 epochs. As shown in Figure 2, the distributions of confidence error values for the clean and noisy samples become increasingly dissimilar as the training process proceeds. For instance, at epoch 50, a sample with a high confidence error (e.g., near 1.0) is much more likely to be noisy than clean. Likewise, a sample with a very low confidence error is probably clean. The extensions of this experiment to pairflip and symmetric label noise are available in Figures 6-7 in the Appendix. These observations are highly consistent with the theoretical analysis from Theorem 1, which states that the probability of error (wrongly identifying noisy labels as clean and vice versa) for the confidence error metric is bounded and low in practice.

![10_image_0.png](10_image_0.png)

Figure 2: **Distributions of confidence error values** for clean and noisy samples progressively diverge from each other as the training process continues. The experiment is conducted using PreAct-ResNet18 and CIFAR-100 with a noise level of 40%.

Why CONFES? We use our previous experimental setup and train the model with the naive cross-entropy method and the CONFES algorithm to answer this question. Figures 3a and 3b show the model confidence for the noisy, clean, and predicted labels (averaged over the corresponding samples) with cross-entropy and CONFES, respectively. According to Figure 3a, the confidence over noisy labels is very low at the early stages of cross-entropy training. However, as the training proceeds, the model's confidence in the noisy labels increases. At the end of training, the model confidence over predicted and noisy labels is close to each other. This indicates that the model has been misled by the noisy samples, wrongly considering them the true labels of the samples.
CONFES, on the other hand, utilizes the model confidence error to distinguish between clean and noisy samples and excludes the identified noisy samples during training. This results in consistently low confidence for the noisy samples but high confidence in the clean and predicted labels throughout all training stages, as shown in Figure 3b. This observation shows the importance of identifying noisy samples efficiently and keeping confidence in them as low as possible, as performed by the CONFES algorithm.

Furthermore, we employ the CONFES algorithm in the same setting as Figures 2 and 3 to calculate the confusion matrix and empirically examine the error made by CONFES in differentiating between clean and noisy labels in practice. Figure 4 shows the confusion matrix for the CONFES algorithm. According to the figure, CONFES is effective in recognizing the noisy samples from the beginning to the end of training, correctly identifying around 38% out of 40% of the noisy samples. On the other hand, the algorithm wrongly identifies many clean samples as noisy in the early epochs (around 27%). However, as training moves forward, CONFES becomes increasingly efficient at identifying the clean samples, correctly recognizing around 55% out of 60% of the clean samples at the last epoch. Figure 8 in the Appendix, moreover, visualizes the number of clean and noisy samples that CONFES identifies correctly for different noise types and noise rates, which is consistent with the confusion matrices.

![11_image_0.png](11_image_0.png)

Figure 3: **Effectiveness of CONFES**: Using naive cross-entropy training (a), the model confidence over noisy labels increases as the training moves forward, implying that the model is misled by the noisy samples. CONFES (b), however, differentiates the noisy samples from the clean ones and excludes the identified noisy samples during training.
This leads to very low model confidence over the noisy samples but high confidence in the clean and predicted labels throughout training.

![11_image_1.png](11_image_1.png)

Figure 4: **Confusion matrix for the CONFES algorithm**: In early epochs, CONFES correctly identifies the majority of the noisy labels (around 38% out of 40%) but wrongly identifies many clean labels as noisy (about 27%). As training proceeds, the algorithm not only remains effective in identifying the noisy labels (around 38% out of 40%) but also correctly recognizes the clean labels (about 55% out of 60%). The model is PreActResNet-18 trained on CIFAR-100 with instance-dependent label noise of rate 40%.

Sensitivity analysis of hyper-parameters. The initial sieving threshold α and the number of warm-up epochs Tw are the hyper-parameters of the proposed CONFES algorithm; the per-epoch sieving threshold is computed from them. For CIFAR-100, we set α=0.2 and Tw=30 for all noise types and noise rates. For CIFAR-10, the values of α and Tw are 0.1 and 25, respectively, for the symmetric and instance-dependent noise types. For Clothing1M, α and Tw are set to 0.05 and 3, respectively. We also investigate the sensitivity of CONFES to its hyper-parameters using the CIFAR-100 dataset with a noise rate of 40% for the symmetric, instance-dependent, and pairflip noise settings. To analyze the sensitivity to Tw, we set α = 0.2 and use four different values for the warm-up epochs: Tw ∈ {5, 20, 30, 50}. Similarly, we set Tw = 30 and employ four different values for the sieving threshold: α ∈ {0.1, 0.2, 0.3, 0.5}. As shown in Figure 5, the accuracy reductions using suboptimal hyper-parameter values compared to the best setting (α = 0.2 and Tw = 30) are at worst 1.6%, 2.3%, and 4.1% for the symmetric, instance-dependent, and pairflip noise settings, respectively.
This indicates that CONFES is relatively robust to the choice of hyper-parameter values, making it easy for practitioners to employ and tune.

![12_image_0.png](12_image_0.png)

![12_image_1.png](12_image_1.png)

Figure 5: **Sensitivity analysis of CONFES to its hyper-parameters** Tw (number of warm-up epochs) and α (initial sieving threshold) for different noise types. The dataset is CIFAR-100 with a noise rate of 40%.

## 5 Discussion And Conclusion

We present the confidence error metric to effectively discriminate between noisy and clean samples in label noise learning settings. Moreover, we theoretically prove that the probability of error for the proposed metric is bounded and experimentally show that it is small in practice. We integrate the confidence error metric into a learning algorithm called CONFES, which refines the training samples by keeping only the identified clean samples and filtering out the noisy ones. Our experimental results verify the robustness of CONFES under different noise types such as symmetric, pairflip, and instance-dependent, especially when noise levels are high. We also demonstrate that the confidence error can be employed by other algorithms, including Co-teaching and DivideMix, to further improve the model accuracy.

CONFES versus baseline methods. According to the experimental results, CONFES outperforms all baseline methods in the considered symmetric, pairflip, and instance-dependent noise settings. As the noise rate increases, the efficiency of the CONFES algorithm becomes more apparent (e.g., at a noise rate of 60% on CIFAR-100). Moreover, CONFES is robust to overfitting, unlike some of its competitors such as SLN and ELR. In terms of computational overhead, CONFES requires one additional forward pass for constructing the refined dataset, which only includes clean samples according to the confidence error metric.
However, methods such as Co-teaching (Han et al., 2018) employ two networks in the training process, which makes them substantially less computationally efficient than our approach. Although some methods such as PES (Bai et al., 2021) perform well in the presence of symmetric label noise, their accuracy decreases in more complex noise settings such as instance-dependent noise, which is not the case for CONFES. Additionally, the accuracy of some other baseline methods, such as LRT (Zheng et al., 2020) and MentorMix (Jiang et al., 2020), drastically drops in highly noisy settings (e.g., with a 60% noise rate). Approaches such as ELR (Liu et al., 2020) and PES (Bai et al., 2021) work well for "easy to classify" datasets such as CIFAR-10, but their efficiency decreases on more challenging datasets, including CIFAR-100 and Clothing1M. CONFES, on the other hand, outperforms the compared baselines across noise types (symmetric, instance-dependent, and pairflip), noise levels (i.e., 20%, 40%, and 60%), and datasets (CIFAR-10, CIFAR-100, and the challenging Clothing1M).

CONFES versus LRT. Our work is related to that of Zheng et al. (2020), which proposed a confidence-based metric called the likelihood ratio test (LRT) for sieving clean samples. Our proposed confidence error metric has at least two advantages over the likelihood ratio: (1) The confidence error enables the algorithm to start sieving samples in the early epochs of training. Using the sieving threshold αi, the algorithm incorporates only the samples with confidence error below αi into training instead of all samples. Based on our observations, applying a similar threshold to the likelihood ratio in the warm-up epochs delivers much lower accuracy than using all samples.
(2) The confidence error is a more effective metric than the likelihood ratio for differentiating the clean samples from the noisy ones according to our experimental results in Figure 10 in the Appendix, which are consistent with the accuracy results in the Evaluation section. Moreover, we empirically compared the probability of error for confidence error and LRT on the CIFAR-100 dataset with different types and levels of noise. The results (Figure 9 in the Appendix) show that confidence error has a much smaller error rate in identifying noisy samples than LRT, while its error rate in identifying clean samples is slightly worse. In sample sieving, misidentifying noisy samples as clean ones (false negatives) is much more detrimental to utility than wrongly recognizing clean samples as noisy (false positives). The latter issue can be alleviated with techniques such as clean data duplication, as employed by CONFES.

In the future, we can extend our work by incorporating techniques such as semi-supervised learning to perform label correction. We can also automate the selection of the sample sieve size by modeling confidence errors with soft clustering techniques, which would eliminate the need for the initial sieving threshold hyper-parameter (α). Furthermore, we can employ ensemble learning techniques, including an AdaBoost-like methodology: leveraging the proposed confidence error metric with multiple weak classifiers might improve the efficiency of sieving but can be computationally expensive.

## Acknowledgement

This work was supported by a Google Ph.D. Fellowship to R.T., as well as the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
## References Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In *International conference on machine learning*, pp. 233–242. PMLR, 2017. Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, and Tongliang Liu. Understanding and improving early stopping for learning with noisy labels. *Advances in Neural Information* Processing Systems, 34, 2021. Mélanie Bernhardt, Daniel C Castro, Ryutaro Tanno, Anton Schwaighofer, Kerem C Tezcan, Miguel Monteiro, Shruthi Bannur, Matthew P Lungren, Aditya Nori, Ben Glocker, et al. Active label cleaning for improved dataset quality under resource constraints. *Nature communications*, 13(1):1–11, 2022. Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, and Masashi Sugiyama. Confidence scores make instance-dependent label-noise learning possible. In *International Conference on Machine Learning*, pp. 825–836. PMLR, 2021. Lucas Beyer, Olivier J Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. Are we done with imagenet? *arXiv preprint arXiv:2006.07159*, 2020. Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations, 2023. Pengfei Chen, Guangyong Chen, Junjie Ye, Jingwei Zhao, and Pheng-Ann Heng. Noise against noise: stochastic label noise helps combat inherent label noise. In *International Conference on Learning Representations*, 2021a. Pengfei Chen, Junjie Ye, Guangyong Chen, Jingwei Zhao, and Pheng-Ann Heng. Robustness of accuracy metric and its inspirations in learning with noisy labels. In *Proceedings of the AAAI Conference on Artificial* Intelligence, 2021b. Hao Cheng, Zhaowei Zhu, Xingyu Li, Yifei Gong, Xing Sun, and Yang Liu. 
Learning with instance-dependent label noise: A sample sieve approach. *International Conference on Learning Representations*, 2020. Jeffrey De Fauw, Joseph R Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O'Donoghue, Daniel Visentin, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. *Nature medicine*, 24(9):1342–1350, 2018. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009. Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. A survey of deep learning techniques for autonomous driving. *Journal of Field Robotics*, 37(3):362–386, 2020. Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In *International Conference on Machine* Learning, pp. 2304–2313. PMLR, 2018. Lu Jiang, Di Huang, Mason Liu, and Weilong Yang. Beyond synthetic noise: Deep learning on controlled noisy labels. In *International Conference on Machine Learning*, pp. 4804–4815. PMLR, 2020. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. *Citeseer*, 2009. Junnan Li, Richard Socher, and Steven C.H. Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. In *International Conference on Learning Representations*, 2020. 
Sheng Liu, Jonathan Niles-Weed, Narges Razavian, and Carlos Fernandez-Granda. Early-learning regularization prevents memorization of noisy labels. *Advances in neural information processing systems*, 33: 20331–20342, 2020. Xiaoxuan Liu, Livia Faes, Aditya U Kale, Siegfried K Wagner, Dun Jack Fu, Alice Bruynseels, Thushika Mahendiran, Gabriella Moraes, Mohith Shamdas, Christoph Kern, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. *The lancet digital health*, 1(6):e271–e297, 2019. Yang Liu. The importance of understanding instance-level noisy labels. *arXiv e-prints*, pp. arXiv–2102, 2021. Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. Anna Majkowska, Sid Mittal, David F Steiner, Joshua J Reicher, Scott Mayer McKinney, Gavin E Duggan, Krish Eswaran, Po-Hsuan Cameron Chen, Yun Liu, Sreenivasa Raju Kalidindi, et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. *Radiology*, 294(2):421–431, 2020. Eran Malach and Shai Shalev-Shwartz. Decoupling "when to update" from "how to update". *Advances in Neural Information Processing Systems*, 30, 2017. Gary Marcus. Deep learning: A critical appraisal. *arXiv preprint arXiv:1801.00631*, 2018. Nagarajan Natarajan, Inderjit S Dhillon, Pradeep K Ravikumar, and Ambuj Tewari. Learning with noisy labels. *Advances in neural information processing systems*, 26, 2013. Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In *Proceedings of the IEEE conference* on computer vision and pattern recognition, pp.
1944–1952, 2017. Joshua C Peterson, Ruairidh M Battleday, Thomas L Griffiths, and Olga Russakovsky. Human uncertainty makes classification more robust. In *Proceedings of the IEEE/CVF International Conference on Computer* Vision, pp. 9617–9626, 2019. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. Dynamic routing between capsules. *Advances in neural* information processing systems, 30, 2017. Hwanjun Song, Minseok Kim, Dongmin Park, and Jae-Gil Lee. How does early stopping help generalization against label noise? In *ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning*, 2019. Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee. Learning from noisy labels with deep neural networks: A survey. *IEEE Transactions on Neural Networks and Learning Systems*, 2022. Cory Stephenson, suchismita padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, and SueYeon Chung. On the geometry of generalization and memorization in deep neural networks. In International Conference on Learning Representations, 2021. Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 13726–13735, 2020. Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, and Yang Liu. Learning with noisy labels revisited: A study using real-world human annotations. In *International Conference on Learning* Representations, 2022. Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, and Masashi Sugiyama. Are anchor points really indispensable in label-noise learning? *Advances in Neural Information Processing* Systems, 32, 2019. Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, Dacheng Tao, and Masashi Sugiyama. Part-dependent label noise: Towards instance-dependent label noise. Advances in Neural Information Processing Systems, 33:7597–7610, 2020. 
Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, Zongyuan Ge, and Yi Chang. Robust early-learning: Hindering the memorization of noisy labels. In International Conference on Learning Representations, 2021. Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2691–2699, 2015. Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, and Masashi Sugiyama. Dual t: Reducing estimation error for transition matrix in label-noise learning. *Advances in neural information* processing systems, 33:7260–7271, 2020. Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, and Kun Zhang. Instance-dependent label-noise learning under a structural causal model. *Advances in Neural Information Processing Systems*, 34, 2021. Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In *International Conference on Machine Learning*, pp. 7164–7173. PMLR, 2019. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. *Communications of the ACM*, 64(3):107–115, 2021a. Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In *International Conference on Learning Representations*, 2018. Yivan Zhang, Gang Niu, and Masashi Sugiyama. Learning noise transition matrix from only noisy labels via total variation regularization. In *International Conference on Machine Learning*, pp. 12501–12512. PMLR, 2021b. Songzhu Zheng, Pengxiang Wu, Aman Goswami, Mayank Goswami, Dimitris Metaxas, and Chao Chen. Errorbounded correction of noisy labels. In *International Conference on Machine Learning*, pp. 11447–11457. PMLR, 2020. 
## A Appendix

Further experiments on the effectiveness of confidence error. We extended the experiments associated with Figure 2 of the main manuscript to symmetric and pairflip label noise. Experiments are conducted using PreAct-ResNet18 on CIFAR-100 with a noise level of 40%.

![17_image_0.png](17_image_0.png)

Figure 6: Distributions of **confidence error** values for **pairflip** label noise.

![17_image_1.png](17_image_1.png)

Figure 7: Distributions of **confidence error** values for **symmetric** label noise.

Additional details on the experimental setup. For all experiments on the CIFAR-10, CIFAR-100, and Clothing1M datasets, general hyper-parameters such as the learning rate, batch size, and weight decay are specified in the original manuscript and summarized in Table 6. The method-specific hyper-parameters used in the experiments are set based on the corresponding manuscript or the published source code: Co-teaching (Han et al., 2018)∗, ELR (Liu et al., 2020)†, CORES2 (Cheng et al., 2020)‡, PES (Bai et al., 2021)§, SLN (Chen et al., 2021a)¶, DivideMix (Li et al., 2020)‖, JoCoR (Wei et al., 2020)∗∗, LRT (Zheng et al., 2020)††, MentorMix (Jiang et al., 2020)‡‡ and PTD (Xia et al., 2020)§§.

Instance-dependent label noise. In order to generate the instance-dependent label noise in the experiments, we followed the previous works Cheng et al. (2020); Yao et al. (2020); Bai et al. (2021); Chen et al. (2021a) and employed the following algorithm proposed in Xia et al. (2020):

∗https://github.com/bhanML/Co-teaching
†https://github.com/shengliu66/ELR
‡https://github.com/UCSC-REAL/cores
§https://github.com/tmllab/PES
¶https://github.com/chenpf1025/SLN
‖https://github.com/LiJunnan1992/DivideMix
∗∗https://github.com/hongxin001/JoCoR
††https://github.com/pingqingsheng/LRT
‡‡https://github.com/LJY-HY/MentorMix_pytorch
§§https://github.com/xiaoboxia/Part-dependent-label-noise

Algorithm 2: Instance-dependent Label Noise Generation taken from Xia et al.
(2020)

**Input:** Clean samples $\{(x_i, y_i)\}_{i=1}^{n}$, noise rate $\tau$
**Output:** Noisy samples $\{(x_i, \tilde{y}_i)\}_{i=1}^{n}$
1. Sample instance flip rates $q \in \mathbb{R}^{n}$ from the truncated normal distribution $\mathcal{N}(\tau, 0.1^2, [0, 1])$
2. Independently sample $w_1, \ldots, w_c$ from the standard normal distribution $\mathcal{N}(0, 1^2)$ and stack them into $W$
3. **for** $i = 1, \ldots, n$ **do**
4. $\quad p = x_i \cdot W$ /* generate instance-dependent flip rates */
5. $\quad p_{y_i} = -\infty$ /* control the diagonal entry of the instance-dependent transition matrix */
6. $\quad p = q_i \cdot \mathrm{softmax}(p)$ /* make the sum of the off-diagonal entries of the $y_i$-th row equal $q_i$ */
7. $\quad p_{y_i} = 1 - q_i$ /* set the diagonal entry to $1 - q_i$ */
8. $\quad$ Randomly choose a label from the label space according to the probabilities $p$ as the noisy label $\tilde{y}_i$
9. **return** Noisy samples $\{(x_i, \tilde{y}_i)\}_{i=1}^{n}$

Table 6: General training hyperparameters (common for all methods of comparison).

|                    | CIFAR-10         | CIFAR-100        | Clothing1M           |
|--------------------|------------------|------------------|----------------------|
| Model              | PreActResNet-18  | PreActResNet-18  | Pretrained ResNet-50 |
| Batch size         | 128              | 128              | 32                   |
| Learning rate (lr) | 2e-2             | 2e-2             | 2e-3                 |
| lr scheduler       | Cosine annealing | Cosine annealing | MultiStep            |
| lr decay factor    | 100              | 100              | 10 at epoch 40       |
| Weight decay       | 5e-4             | 5e-4             | 1e-3                 |
| Epochs             | 300              | 300              | 80                   |

![18_image_0.png](18_image_0.png)

Figure 8: The number of clean and noisy samples that CONFES correctly identifies on CIFAR-100 with different noise types and noise rates. The dashed lines represent the total number of clean and noisy samples. CONFES consistently achieves a high success rate in correctly distinguishing between clean and noisy samples.

![19_image_0.png](19_image_0.png)

Figure 9: Probability of error for CONFES (based on the confidence error metric) and the LRT algorithm (Zheng et al., 2020) (based on the likelihood ratio metric) on the CIFAR-100 dataset with different noise types and noise rates.
Here, (noisy, selected) means the sample is noisy but wrongly selected by the algorithm to be incorporated in training. Likewise, (clean, rejected) means the sample is clean but excluded by the algorithm during training. The probability of error is lower for CONFES in the former case, whereas it is slightly higher in the latter case. Note that the former case is more detrimental to performance than the latter one.

![20_image_0.png](20_image_0.png)

Figure 10: Distributions of the **likelihood ratio** employed in LRT (Zheng et al., 2020) and the proposed confidence error metric used in CONFES at epoch 200. The distributions of confidence error for noisy and clean samples are more dissimilar than those of the likelihood ratio, indicating that confidence error is a more effective metric than the likelihood ratio for sieving the samples. The experimental setup is the same as in Figure 2.
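Algorithm 2 above can be translated into a short NumPy routine. The sketch below is our reading of the procedure: clipping stands in for the truncated normal, and all names are ours rather than from the released code.

```python
import numpy as np

def instance_dependent_noise(X, y, num_classes, tau, seed=0):
    """Sketch of the noise-generation procedure of Xia et al. (2020) (Algorithm 2).

    X: (n, d) flattened clean features; y: (n,) clean labels; tau: noise rate.
    Clipping to [0, 1] approximates the truncated normal for the flip rates.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # 1. instance flip rates q_i ~ N(tau, 0.1^2) truncated to [0, 1]
    q = np.clip(rng.normal(tau, 0.1, size=n), 0.0, 1.0)
    # 2. one standard-normal projection per class, stacked into W
    W = rng.normal(0.0, 1.0, size=(d, num_classes))
    y_tilde = np.empty(n, dtype=int)
    for i in range(n):
        p = X[i] @ W                 # instance-dependent scores
        p[y[i]] = -np.inf            # exclude the true class from the softmax
        p = np.exp(p - p.max())
        p = q[i] * p / p.sum()       # off-diagonal mass of row y_i sums to q_i
        p[y[i]] = 1.0 - q[i]         # keep the given label with probability 1 - q_i
        y_tilde[i] = rng.choice(num_classes, p=p)
    return y_tilde
```

The flip probability of instance i is q_i, so τ controls the overall noise rate, while the projections W make the flip target depend on the instance's features.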
Review 1:

Summary: This paper presents a new metric aimed at discerning between noisy and clean labels. The theoretical foundation of the metric is also established, demonstrating its effectiveness in expectation by showcasing a low error rate for the identified labels. Furthermore, empirical evidence is provided to validate the metric's efficacy. Additionally, the paper proposes a method that strategically filters out noisy examples early in the learning process. The experimental results serve to substantiate the efficiency of this proposed approach and its ability to improve overall learning performance by leveraging the metric to handle noisy data.

Strengths and Weaknesses:

Strengths:
1. The paper proposes a novel metric to differentiate clean labeled data and noisy labeled data, which has been demonstrated empirically to be effective at this differentiation.
2. An empirically effective method is proposed based on the metric.
3. The paper is well-written and well-organized. It is easy to follow.

Weaknesses:
1. Some of the notations lack definitions in the theoretical results, such as the \mathfrak{O}.
2. The paper's motivation includes that two-model methods are less efficient, and this motivation may imply there is a trade-off between efficiency and accuracy. No such results have been presented in the paper.

Requested Changes:
1. Theoretically, the paper's proposed method is closely related to LRT [Zheng et al. 2020]. The paper even uses part of their theoretical results to form its own. So, beyond the empirical studies, what is the relationship between the current paper and LRT? Section 2 needs more discussion on this part.
2. Are there any empirical results demonstrating the efficiency of the proposed method compared to those two-model methods? I understand a trade-off is possible if the proposed method is faster but less accurate. However, if the proposed method is quicker and more accurate, I am wondering why this happens.
3.
The clarity of the presentation of the theoretical results needs to be improved: please define all notations clearly before presenting the concrete results.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The paper studies the problem of noisy label learning. The paper proposes a simple metric called the confidence error, which is essentially the difference between the predicted class probability and the label class probability, and uses this metric to identify clean labels to construct the training dataset. The paper shows theoretical properties of the proposed metric: under certain conditions, samples selected using the criterion have a bounded probability of being noisy. The paper conducts empirical experiments and shows that the proposed method achieves better performance compared to some baseline methods.

Strengths and Weaknesses:

Strengths:
1. The proposed method is simple and easy to implement.
2. The paper studies theoretical properties of the proposed method.
3. The paper shows that the proposed method has better empirical performance than many noisy-label learning baselines.
4. The proposed method can be combined with existing baseline methods to achieve better performance.

Questions and Weaknesses:
1. I am not very clear about the intuition behind the proposed metric. If the predicted class and the labeled class have almost equal confidence (but they are different classes), e.g., 0.5 and 0.5, then such a sample is always selected as a clean sample, but the given label is used for training. Why is this a valid decision?
2. The CONFES algorithm starts with a higher threshold, which selects more data points, and gradually decreases the threshold. In such a case, how many data points does the algorithm typically select in the end? Could it be the case that many samples never get selected and trained on?
3.
Suppose a noisy-labeled sample by chance has its predicted class equal to the given class; such a sample is selected, and it seems likely that it will keep predicting the given label and be selected in every epoch. Do you observe this happening in practice, and if so, how do you prevent it?
4. Can you give more justification for the duplication step of CONFES? Does duplication give better performance, and why?

Minor:
1. Page 3: two right parentheses in "The k-class/label classifier F(xi; θ))".

Requested Changes: Please address my questions and concerns in the questions and weaknesses part.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: This work leverages an insightful observation to introduce a groundbreaking discriminator metric known as "confidence error" and a filtering technique called CONFES, which significantly enhances the distinction between clean and noisy samples. It also establishes rigorous theoretical assurances regarding the probability of error associated with the novel metric. Furthermore, it conducts comprehensive experimental evaluations, wherein it demonstrates the remarkable performance superiority of its proposed methodology over recent investigations across different scenarios, encompassing both synthetic and real-world label noise.

Strengths and Weaknesses:

Strengths:
1. The paper presents an innovative alternative metric, distinct from previously used loss values, that offers a more effective means of sieving training examples.
2. Theoretically, it establishes rigorous assurances regarding the probability of error. Experimentally, the proposed method has impressive performance and can be plugged into other algorithms.
3. The organization and presentation of the paper are well done.

Weaknesses:
1.
It might be better to visualize the number of clean samples the proposed method sieves, since the core of the work is to use a new metric to select clean samples.
2. The reason why the warm-up is used needs to be further explained. Why not only warm up the model and then set the threshold?
3. The contributions seem a little incremental, even though the paper compares the method to LRT.

Requested Changes:
1. Visualize the number of clean samples the proposed method sieves.
2. Give more explanation of the warm-up.
3. Perhaps it would be beneficial to introduce an additional AdaBoost-like algorithm to further enhance the novelty of the approach.

Broader Impact Concerns: None.

==================================================

Metareview:

Recommendation: Accept as is

Comment: After a major revision, significant additional evidence for the proposed method has been included. This has been appreciated by all reviewers, resulting in a unanimous recommendation of acceptance.

==================================================
# Controlling Federated Learning For Covertness Adit Jain *aj457@cornell.edu* Department of Electrical and Computer Engineering, Cornell University Vikram Krishnamurthy vikramk@cornell.edu Department of Electrical and Computer Engineering, Cornell University Reviewed on OpenReview: *https://openreview.net/forum?id=g01OVahtN9* ## Abstract A learner aims to minimize a function f by repeatedly querying a distributed oracle that provides noisy gradient evaluations. At the same time, the learner seeks to hide arg min f from a malicious eavesdropper that observes the learner's queries. This paper considers the problem of *covert* or learner-private optimization, where the learner has to dynamically choose between learning and obfuscation by exploiting the stochasticity. The problem of controlling the stochastic gradient algorithm for covert optimization is modeled as a Markov decision process, and we show that the dynamic programming operator has a supermodular structure implying that the optimal policy has a monotone threshold structure. A computationally efficient policy gradient algorithm is proposed to search for the optimal querying policy without knowledge of the transition probabilities. As a practical application, our methods are demonstrated on a hate speech classification task in a federated setting where an eavesdropper can use the optimal weights to generate toxic content, which is more easily misclassified. Numerical results show that when the learner uses the optimal policy, an eavesdropper can only achieve a validation accuracy of 52% with no information and 69% when it has a public dataset with 10% positive samples compared to 83% when the learner employs a greedy policy. ## 1 Introduction 1.1 Main Results A learner aims to minimize a function f by querying an oracle repeatedly. At times k = 0, 1*, . . .*, the learner sends a query qk to the oracle, and the oracle responds with a noisy gradient evaluation rk. 
Ideally, the learner would use this noisy gradient in a stochastic gradient algorithm to update its estimate of the minimizer $\hat{x}_k$ as $\hat{x}_{k+1} = \hat{x}_k - \mu_k r_k$, where $\mu_k$ is the step size, and pose the next query as $q_{k+1} = \hat{x}_{k+1}$. However, the learner seeks to hide the arg min f from an eavesdropper. The eavesdropper observes the sequence of queries $(q_k)$ but does not observe the responses from the oracle. The eavesdropper is passive and does not directly affect the queries or the responses. How can the learner perform stochastic gradient descent to learn arg min f while hiding it from the eavesdropper?

This problem arises in federated learning (FL), where the central learner (e.g., an application) optimizes the loss function of a neural network by communicating with a distributed set of devices. The learner communicates the weights to the devices and receives noisy gradients of the loss function evaluated on local data, which the learner uses to update the weights. Our proposed approach is to control the stochastic gradient descent (SGD) using another stochastic gradient algorithm, namely a structured policy gradient algorithm (SPGA) that solves a resource allocation Markov decision process. The following two-step cross-coupled stochastic gradient algorithm summarizes our approach:

$$\text{Stochastic Gradient Descent:}\quad \hat{x}_{k+1} = \hat{x}_k - \mu_k\, G(r_k, y_k, q_k),$$
$$\text{Query using Policy from SPGA:}\quad q_{k+1} \sim P(\nu(y_{k+1}), \hat{x}_{k+1}). \tag{1}$$

Here $k$ is the time index, $\mu_k$ is the step size, $q_k$ is the query, and $y_k$ is the system state. The function $G$ is designed by the learner to update the estimate based on the noisy response $r_k$ (for example, $G$ can be $0$ when obfuscating and the noisy gradient $r_k$ otherwise). $P$ is the probability distribution of the query based on the transition probability kernel of the stationary policy, which decides whether to learn or obfuscate. The first equation in (1) updates the learner's arg min estimate, $\hat{x}_k$, and the second equation computes the next query $q_{k+1}$ using a policy $\nu$.
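As a toy illustration of the coupled scheme in (1), the sketch below runs the controlled SGD on the quadratic f(x) = ∥x∥²/2 with a fixed threshold policy that learns only when the oracle-noise state is low. The oracle model, noise levels, and fixed threshold are illustrative assumptions, not the paper's setup (where the threshold itself is learned by the SPGA):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_oracle(q, noise_sd):
    """Noisy gradient of the toy objective f(x) = ||x||^2 / 2 at the query q."""
    return q + rng.normal(0.0, noise_sd, size=q.shape)

def threshold_policy(state, phi):
    """Threshold-structured stationary policy: learn in low-noise oracle states."""
    return "learn" if state < phi else "obfuscate"

x_hat = np.ones(2)   # learner's estimate of arg min f (true arg min is 0)
mu, phi = 0.1, 1     # SGD step size and a fixed policy threshold (assumed given)
for k in range(500):
    state = rng.integers(0, 3)        # oracle-noise state (i.i.d. here for brevity)
    if threshold_policy(state, phi) == "learn":
        r = grad_oracle(x_hat, noise_sd=0.1 * (state + 1))
        x_hat = x_hat - mu * r        # G(r, y, q) = r: use the noisy gradient
        q = x_hat                     # a learning query reveals the current iterate
    else:
        q = rng.normal(size=2)        # an obfuscating query drawn away from x_hat
```

Although only roughly one third of the 500 queries are learning queries, the estimate still converges close to the minimizer, while the eavesdropper only sees the mixed query sequence in which most queries are uninformative.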
## Contributions:

- This paper proposes a framework for the learner to dynamically query the oracle for robust learning and covert optimization. We formulate a Markov decision process (MDP) to solve the learner's decision problem by exploiting the inherent stochasticity of the oracle. Structural results showing that the optimal policy has a threshold structure (Fig. 1(a)) are proven in Theorem 2 and Theorem 3. These structural results enable the use of efficient policy search methods. The framework can be extended to meta-learning problems like controlling learning for energy efficiency and quality control.
- A policy gradient algorithm is proposed which estimates the optimal stationary policy for the MDP. The optimal stationary policy controls the primary stochastic gradient descent of the learner, as described by (1) and shown in Fig. 1(b). The policy gradient algorithm has linear time complexity due to the threshold nature of the optimal policy and does not need knowledge of the transition probabilities. The policy gradient algorithm runs on a slower time scale and can adapt to changes in the system.
- The proposed methods are demonstrated on a novel application, covert federated learning (FL) on a text classification task using large language model embeddings. Our key numerical results are summarized in Table 2. It is shown that, when the learner uses the optimal policy instead of a greedy policy, the eavesdropper's estimate of the optimal weights does not generalize well, even if the eavesdropper uses a public dataset to validate the queries.1
- This paper considers two pragmatic aspects of empirical distributed learning environments: a stochastic oracle and an optimization queue. A stochastic oracle with different noise levels can model a non-i.i.d. client distribution in FL, and an optimization queue can model repeated optimization tasks like machine unlearning and distribution shifts.
Theorem 1 characterizes the number of successful updates required for convergence, and Lemma 1 analyzes the stability of the queue.

## 1.2 Motivation And Related Works

The main application of covert stochastic optimization is in distributed optimization, where the central learner queries a distributed oracle and receives gradient evaluations with which it optimizes f. Such a distributed oracle is part of federated learning, where deep learning models are optimized on mobile devices with distributed datasets (McMahan et al., 2017), and of pricing optimization, where customers are surveyed (Delahaye et al., 2017). In distributed optimization, the eavesdropper can pretend to be a local worker of the distributed oracle and obtain the queries posed (but not the aggregated responses). The eavesdropper can infer the local minima by observing the queries. These estimates can be used for malicious intent or competitive advantage, since obtaining reliable, balanced, labeled datasets is expensive. As an example, in hate speech classification in a federated setting (Meskys et al., 2019; Narayan et al., 2022), a hate

1A greedy policy poses learning queries until all the required successful updates are done to the model and then starts posing obfuscating queries.

![1_image_0.png](1_image_0.png)

(a) Threshold structure of the optimal policy with two actions.

![1_image_1.png](1_image_1.png)

(b) Coupled interaction of the two gradient algorithms.

Figure 1: To achieve covert optimization, we propose controlling the SGD by a policy gradient algorithm that exploits the policy structure.
| Symbol   | Description                                                            |
|----------|------------------------------------------------------------------------|
| k        | Time index for the stochastic gradient descent (SGD)                   |
| n        | Index for the stochastic policy gradient algorithm (SPGA)              |
| xˆ       | Learner's estimate of the minima                                       |
| zˆ       | Eavesdropper's estimate of the minima                                  |
| y        | State variable for the oracle state, learner state, and arrival state  |
| q        | Query posed to the oracle by the learner                               |
| u        | Action of the learner (Learning or Obfuscation)                        |
| r        | Noisy gradient evaluation by the oracle of the function at the query point |
| σk       | Bound on the variance of the noise added at time k                     |
| σ        | Tolerance parameter of the learner for the bound on the noise variance |
| ϕ        | True threshold parameter of the optimal policy                         |
| θ        | Threshold parameter of the SPGA tracking the true threshold parameter  |
| µ, κ     | Step sizes of the SGD and SPGA, respectively                           |

Table 1: Summary of mathematical notations used in this paper.

speech peddler can pretend to be a client and observe the trajectory of the learner to obtain the optimal weights minimizing the loss function. Using these optimal weights for discrimination, the eavesdropper can train a generator network to generate hate speech which will not be detected (Hartvigsen et al., 2022; Wang et al., 2018).

Learner-private or covert optimization: The problem of protecting the optimization process from eavesdropping adversaries has been examined in recent literature (Xu et al., 2021a; Tsitsiklis et al., 2021; Xu, 2018; Xu et al., 2021b). Tsitsiklis et al. (2021) and Xu et al. (2021b;a) obtain (*L, δ*)-type privacy bounds on the number of queries required to achieve a given level of learner-privacy and optimize with and without noise. The current state of the art (Xu et al., 2021a) in covert optimization dealt with convex functions in a noisy setting.
Although there is extensive work on deriving the theoretical proportion of queries needed for covert optimization, practically implementing covert optimization has not been dealt with. In contrast, our work considers a non-convex noisy setting with the problem of posing learning queries given a desired level of privacy. We want to schedule M model updates in N queries and obfuscate otherwise, ideally learning when the noise of the oracle is expected to be low. Theoretical bounds are difficult to derive for the non-convex case, but we empirically show that our formulation achieves covertness. The approach of this work aligns with hiding optimal policies in reinforcement learning when actions are visible (Liu et al., 2021; Pan et al., 2019; Dethise et al., 2019), information theory (Bloch, 2016), and preserving privacy while traversing graphs (Erturk & Xu, 2020)2.

|                | Scenario 1 (eavesdropper has no toxic samples) |                  | Scenario 2 (eavesdropper has toxic samples) |                  |
|----------------|-----------------------|------------------|-----------------------|------------------|
| Type of policy | Eavesdropper accuracy | Learner accuracy | Eavesdropper accuracy | Learner accuracy |
| Greedy         | 0.83                  | 0.84             | 0.83                  | 0.82             |
| Optimal        | 0.52                  | 0.81             | 0.69                  | 0.81             |

Table 2: The optimal policy for our MDP formulation achieves covertness: the eavesdropper's accuracy is reduced significantly compared to a greedy policy (−31%), even when the eavesdropper has samples to validate the queries (−14%). The learner's accuracy remains comparable in both scenarios, with a maximum difference of 3%.

**Motivation for stochastic oracle and optimization queue:** An important application of covert optimization is federated learning (FL), which is deployed in various machine learning tasks to improve the privacy of dataset owners and communication efficiency (Heusel et al., 2017; Chen et al., 2022a). FL involves non-i.i.d. data distribution across clients and time-varying client participation and data contribution, which motivate a dynamic stochastic oracle (Karimireddy et al., 2020; Jhunjhunwala et al., 2022; Doan et al., 2020; Chen et al., 2022b). Similar to recent work in a setup with Markovian noise in FL (Sun et al., 2018; Rodio et al., 2023), we model the bounds on the variance of the oracle noise as a discrete Markov chain and exploit the Markovian nature of the oracle's stochasticity to query dynamically. An optimization queue for training tasks is motivated by active learning, where the learner waits for training data to be annotated and performs optimization once there is enough annotated data (Bachman et al., 2017; Wu et al., 2020). Optimization tasks can also arise from purging a client's data, training in new contexts, distributional shifts, or the design of the learning algorithm (Tripp et al., 2020; Cai et al., 2021; Mallick et al., 2022; Ginart et al., 2019; Sekhari et al., 2021).

2Distinction from differential privacy: differential privacy (Lee et al., 2021; Dimitrov et al., 2022; Asi et al., 2021) is concerned with preserving the privacy of the local clients' data by mechanisms like adding noise and clipping gradients, whereas learner-private or covert optimization is used when the central learner is trying to prevent an eavesdropper from free-riding on the optimization process (Tsitsiklis et al., 2021).

## 1.3 Organization

In Section 2, the problem of minimizing a function while hiding the arg min is modeled as a controlled SGD. In Section 3, a finite horizon MDP is first formulated in which the learner has to perform M successful gradient updates of the SGD within N total queries to a stochastic oracle. The formulation is then extended to an infinite horizon constrained MDP (CMDP) for cases where the learner performs optimization multiple times, and a policy gradient algorithm is proposed to search for the optimal policy.
Numerical results are demonstrated on a classification task in Section 4. A discussion of the stochastic oracle, the dataset description, additional experimental results, and proofs are presented in the Appendix. Notations are summarized in Table 1.

**Limitations:** The assumption of unbiased responses and the independence of the eavesdropper and the oracle might not hold, especially if the eavesdropper can deploy multiple devices. Due to the lack of data on device participation in FL, it is difficult to verify the assumption of a Markovian oracle. The assumption of an equal eavesdropper prior for both the obfuscating and the learning SGD has not been theoretically verified. The assumption that the learner knows the eavesdropper's dataset distribution might only hold if it is a public dataset or if there are obviously costly classes.

## 2 Controlled Stochastic Gradient Descent For Covert Optimization

This section discusses the oracle, the learner, and the eavesdropper, and states the assumptions which are essential in modeling the problem as a Markov decision process in the next section. The following is assumed about the oracle O:

(A1): (**Bounded from below and Lipschitz continuous**) The oracle is a function f : R^d → R. f is continuous and bounded from below, f ≥ f*. f is continuously differentiable and its derivative is γ-Lipschitz continuous, i.e., ∥∇f(z) − ∇f(x)∥ ≤ γ∥z − x∥ ∀x, z ∈ R^d, where ∇f denotes the gradient of f and ∥·∥ is the l2-norm. At time k, for a query q_k ∈ R^d, the oracle returns a noisy evaluation r_k of ∇f at q_k with added noise η_k,

$$r_{k}(q_{k})=\nabla f(q_{k})+\eta_{k}.\tag{2}$$

(A2): (**Assumption on noise**) The noise η_k is such that r_k(q_k) is an unbiased estimator of ∇f(q_k), E[r_k(q_k)] = ∇f(q_k). The noise has a bounded variance, E[∥η_k∥²] ≤ σ_k². σ_k² is a constant that the oracle returns along with the response r_k(q_k).
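The oracle interface of (2) and (A2) can be sketched as follows; the quadratic objective and the Gaussian noise model are illustrative assumptions, since (A2) only requires unbiasedness and a bounded variance.

```python
import random

def make_oracle(grad_f, rng=None):
    """Noisy first-order oracle of (2): returns an unbiased estimate of
    grad f at the query point, together with the advertised variance bound
    sigma_k^2. Gaussian noise is an illustrative choice; (A2) only needs
    E[eta_k] = 0 and E[||eta_k||^2] <= sigma_k^2."""
    rng = rng or random.Random(0)

    def oracle(q, sigma_k):
        # Per-coordinate std scaled so the total noise variance is sigma_k^2.
        noise = [rng.gauss(0.0, sigma_k / len(q) ** 0.5) for _ in q]
        r = [g + e for g, e in zip(grad_f(q), noise)]
        return r, sigma_k ** 2

    return oracle

# Illustrative objective f(x) = 0.5 * ||x||^2, so grad f(x) = x.
oracle = make_oracle(lambda x: list(x))
r, var_bound = oracle([1.0, -2.0], sigma_k=0.1)
```

In FL, `oracle` would stand for an aggregation round over the participating clients, with `sigma_k` reflecting how many clients contributed.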
Assumptions (A1) and (A2) are regularity assumptions for analyzing query complexity results of the stochastic gradient descent and are found in standard first-order gradient descent analysis (Ghadimi & Lan, 2013; Ajalloeian & Stich, 2021). The assumption on the noise terms is slightly weaker than independence (Ghadimi & Lan, 2013). Our analysis can be extended to algorithms with better convergence rates, like the momentum-based Adam (Kingma & Ba, 2017). Besides simplicity of exposition, SGD is shown to have better generalizability (Zhou et al., 2020; Wilson et al., 2017). f can be an empirical loss function for a training dataset D, f(x; D) = Σ_{d_i ∈ D} G(x; d_i), D = {d_i}, where G(·; ·) is a loss function. In FL, the oracle is a collection of clients acting as a distributed dataset and compute. The objective of the learner L is to query the oracle and obtain an ϵ-close estimate xˆ such that

$$\mathbb{E}\left(\|\nabla f({\hat{x}})\|^{2}\right)\leq\epsilon,\tag{3}$$

given an initial estimate xˆ_0 ∈ R^d. The expectation E[·] here is with respect to the noise sequence and the external random measure (if any) used to compute the estimate. The learner has knowledge of the Lipschitz constant γ from (A1). At time k, for query q_k, the learner receives (r_k, σ_k²) from the oracle (the response r_k(q_k) is denoted as r_k). The learner iteratively queries the oracle with a sequence of queries q_1, q_2, . . . and correspondingly updates its estimate xˆ_1, xˆ_2, . . . such that it achieves its objective (3). In classical SGD, the learner iteratively updates its estimate based on the gradient evaluations at the previous estimate. Now, since the queries are visible and the learner has to obfuscate the eavesdropper, the learner can either query using its true previous estimate or obfuscate the eavesdropper as described later. The learner updates its estimates xˆ_1, xˆ_2, . . . based on whether the posed query q_k is for learning or not and on the received noise statistic σ_k. A learning query is denoted by action u_k = 1 and an obfuscating query by action u_k = 0. The learner chooses a noise constant σ3 and performs a controlled SGD with step size µ_k such that it updates its estimate only if σ_k² ≤ σ² and a learning query was posed, i.e., u_k = 1 (𝟙 denotes the indicator function),

$$\hat{x}_{k+1}=\hat{x}_{k}-\mu_{k}r_{k}\mathbb{1}\left(\sigma_{k}^{2}\leq\sigma^{2}\right)\mathbb{1}\left(u_{k}=1\right).\tag{4}$$

For formulating the MDP in the next section, we need the following definition and theorem, which characterize the finite nature of the optimization objective (3). The proof of the theorem follows by applying the gradient lemma and standard convergence results to the update step (Bottou, 2004; Ghadimi & Lan, 2013; Ajalloeian & Stich, 2021) and is given in Appendix B.1.

Definition 1. **Successful Update**: *At time k, an iteration of (4) is a successful update if* 𝟙(σ_k² ≤ σ²) 𝟙(u_k = 1) = 1.

Theorem 1. (**Required number of successful updates**) *Learner L querying an oracle O which satisfies assumptions (A1-2) using a controlled stochastic gradient descent with updates of the form (4) and a constant step size* µ_k = µ = min{ϵ/(2γσ²), 1/γ} *needs to perform M successful updates (Def. 1) to get ϵ-close to a critical point (3), where*

$$M=O\left(\frac{1}{\epsilon}+\frac{\sigma^{2}}{\epsilon^{2}}\right).$$

Let the M successful updates happen at time indices k_1, . . . , k_M; then the learner's estimate of the critical point, xˆ, is chosen as xˆ_{k_i} with probability µ_{k_i}/Σ_{j=1}^{M} µ_{k_j}, which is a uniform distribution for a constant step size. Theorem 1 characterizes the order of the number of successful gradient updates that need to be performed within the total communication slots available to achieve the learning objective (3).
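The controlled update (4) can be sketched as follows; the scalar objective and the noiseless gradients are illustrative assumptions, used only to show that the estimate moves exactly on successful updates.

```python
def controlled_sgd_step(x_hat, r_k, sigma_k_sq, sigma_sq, u_k, mu):
    """One step of the controlled SGD (4): the estimate is updated only if a
    learning query was posed (u_k == 1) and the advertised noise bound is
    within tolerance (sigma_k^2 <= sigma^2); otherwise it is unchanged."""
    if u_k == 1 and sigma_k_sq <= sigma_sq:
        return [x - mu * g for x, g in zip(x_hat, r_k)]
    return list(x_hat)

# Illustrative run on f(x) = x^2 (gradient 2x) with a noiseless oracle:
# obfuscating rounds (u_k = 0) leave the estimate untouched.
x_hat = [1.0]
for k in range(30):
    u_k = 1 if k % 3 == 0 else 0           # learn on every third query
    grad = [2.0 * x_hat[0]]
    x_hat = controlled_sgd_step(x_hat, grad, sigma_k_sq=0.0,
                                sigma_sq=1.0, u_k=u_k, mu=0.1)
```

After the 10 successful updates above, the estimate has contracted by a factor of 0.8 per update, while the 20 obfuscating rounds contributed nothing.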
If the optimization is not one-off, the learner maintains a queue of optimization tasks, where each optimization task requires up to order M successful updates. The obfuscation strategy used by the learner builds upon the strategy suggested in existing literature (Xu et al., 2021a; Xu, 2018). The learner queries either using the correct estimates from (4) or elsewhere in the domain to obfuscate the eavesdropper. To compute the obfuscating query, we propose that the learner run a parallel SGD, which also ensures that the eavesdropper does not gain information from the shape of the two trajectories. The obfuscating queries are generated using a suitably constructed function H and running a parallel SGD with an estimate zˆ_k,

$$\hat{z}_{k+1}=\hat{z}_{k}-\mu_{k}H(r_{k},\sigma_{k},u_{k}).\tag{5}$$

At time k, an eavesdropper E has access to the query sequence (q_1, . . . , q_k), which the eavesdropper uses to obtain an estimate zˆ of arg min f. We make the following assumptions on the eavesdropper:

(E1) The eavesdropper is assumed to be passive, omnipresent, independent of the oracle, and unaware of the chosen ϵ.

(E2) Given a sequence of N queries (q_1, q_2, . . . , q_N) which the eavesdropper observes, we assume that the eavesdropper can partition the query sequence into two disjoint SGD trajectory sequences I and J4.

(E3) In the absence of any additional information, the eavesdropper uses a proportional sampling estimator similar to Xu (2018), defined below.

3The noise constant σ helps characterize the number of queries required and decide whether a received response will be used for learning; this is in principle similar to the communication control done in Chen et al. (2022a); Sun et al. (2022). The probability of σ_k ≤ σ depends on the oracle state (defined in the next section). Such a construction enables our framework to characterize an oracle with varying noise levels.
4The parameter space is high-dimensional, and it is assumed that SGD trajectories do not intersect. The final weights used in production can be transmitted securely (using a much costlier, less efficient communication channel (Xu et al., 2023; Kairouz et al., 2021)).

Assumption (E1) is generally not true, especially if the eavesdropper is part of the oracle (for example, in FL), but we make this approximation since the number of clients is much greater than the single eavesdropper, and hence the oracle is still approximately Markovian. Given an equal prior over disjoint intervals, an eavesdropper using a proportional sampling estimator calculates the posterior probability of the minima lying in an interval proportional to the number of queries observed in the interval. Since this work considers two disjoint SGD trajectories, the eavesdropper's posterior probability of the minima belonging to one of the SGD trajectories, given equal priors, is proportional to the number of queries in the trajectories.

Definition 2. **Proportional sampling estimator**: *If the eavesdropper (with assumptions E1-2) observes queries belonging to two SGD trajectory sequences I and J and has an equal prior over both of them, then the eavesdropper's posterior belief of the learner's true estimate xˆ belonging to I is given by* P_I ≜ P(xˆ ∈ I | (q_1, q_2, . . . , q_N)) = |I|/(|I|+|J|).

Let K* be defined as K* = arg max_{K∈{I,J}} P_K and let B[−1] retrieve the last item of the sequence B. After N observed queries, the eavesdropper's maximum a posteriori estimate zˆ of the learner's true estimate xˆ is given by

$$\hat{z}=(q_{k}\mid q_{k}\in K^{*},\,k=1,\dots,N)[-1].\tag{6}$$

An eavesdropper using a proportional sampling estimator with an estimate of the form (6) can be obfuscated by ensuring a) that the eavesdropper has equal priors over the two trajectories and b) that the majority of the queries posed are obfuscating5.
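A minimal sketch of the proportional sampling estimator of Definition 2 and (6); for illustration the partition into the trajectories I and J is supplied by a labeling function, whereas under (E2) the eavesdropper recovers this partition from the query sequence itself.

```python
def map_estimate(queries, trajectory_of):
    """Eavesdropper's MAP estimate (6): pick the trajectory containing the
    majority of the observed queries (arg max of the posterior P_K) and
    return its last query."""
    counts = {}
    for q in queries:
        lab = trajectory_of(q)
        counts[lab] = counts.get(lab, 0) + 1
    k_star = max(counts, key=counts.get)
    return [q for q in queries if trajectory_of(q) == k_star][-1]

# Seven obfuscating queries (trajectory 'J', negative points) versus three
# learning queries (trajectory 'I'): the MAP estimate lands on 'J', i.e.,
# on the decoy trajectory rather than the learner's true one.
queries = [-1.0, 0.9, -0.8, -0.7, 0.6, -0.5, -0.4, 0.3, -0.2, -0.1]
z_hat = map_estimate(queries, lambda q: 'I' if q > 0 else 'J')
```

This is exactly the failure mode the learner engineers: as long as the obfuscating trajectory holds the majority of queries, (6) returns a point on the decoy.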
Rather than posing the learning queries randomly, we formulate an MDP in the next section, which exploits the stochasticity of the oracle for optimally scheduling the queries.

## 3 Controlling The Stochastic Gradient Descent Using A Markov Decision Process

This section formulates a Markov decision process that the learner L solves to obtain a policy for dynamically choosing between posing a learning or an obfuscating query. The MDP is formulated for a finite horizon and an infinite horizon case, and we prove structural results characterizing the monotone threshold structure of the optimal policy.

## 3.1 Finite Horizon Markov Decision Process

We first consider the finite horizon case, in which the learner wants to optimize once and needs M successful updates (obtained from Theorem 1 or chosen suitably) to achieve (3) within N (> M) queries. Such a formulation models the covert optimization of the current literature for a one-off FL task or a series of training tasks carried out one after the other. The state space S is the augmented state space of the oracle state space S^O and the learner state space S^L. The oracle is modeled to have W oracle states, S^O = {1, 2, . . . , W}. The learner state space has M states, S^L = {1, 2, . . . , M}, which denote the number of successful updates left to be evaluated by the learner. The augmented state space is S = S^O × S^L. The state with n queries remaining (out of N) is denoted by y_n = (y^O_n, y^L_n). As described in the obfuscation strategy, the learner can query either to learn using (4) or to obfuscate using (5); the action space is U = {0 = obfuscate, 1 = learn}. u_n denotes the action when n queries are remaining. Υ : S^O × U → [0, 1] denotes the probability of a successful update of (4) and depends on y^O_n and the action u_n, Υ(y^O_n, u_n) = P(σ_n² ≤ σ² | y^O_n) 𝟙(u_n = 1). Hence, the bounds on the noise variance in (A2) of the SGD are dynamic and modeled as a finite state Markov chain.
The learner state decreases by one if there is a successful update of (4); hence the transition probability between two states of the state space S is given by

$$\mathbb{P}(y_{n-1}|y_{n},u_{n})=\mathbb{P}(y_{n-1}^{O}|y_{n}^{O})\left(\Upsilon(y_{n}^{O},u_{n})\mathbb{1}\left(y_{n-1}^{L}=y_{n}^{L}-1\right)+(1-\Upsilon(y_{n}^{O},u_{n}))\mathbb{1}\left(y_{n-1}^{L}=y_{n}^{L}\right)\right).$$

With n queries left, for action u_n, the learner incurs a privacy cost c(u_n, y^O_n), which accounts for the increase in the useful information known to the eavesdropper. At the end of the N queries, the learner incurs a terminal learning cost l(y^L_0), which penalizes the remaining number of successful updates y^L_0. The following assumptions are made about the MDP:

(M1) The rows of the probability transition matrix between two states of the oracle are assumed to be first-order stochastic dominance orderable (Eq. (1) of Ngo & Krishnamurthy (2009)), i.e., Σ_{k≥l} P(y^O_{n+1} = k | y^O_n = j) ≤ Σ_{k≥l} P(y^O_{n+1} = k | y^O_n = i) ∀ i > j, l = 1, . . . , W. This restates the assumption that an oracle in a better state is more likely to stay in a better state, which is found to be empirically true for most computer networks.

(M2) c(u_n, y^O_n) is chosen to be decreasing in u_n and decreasing in the oracle state y^O_n to incentivize learning when the oracle is in a good state. The learner does not incur a privacy cost when it does not learn, i.e., c(0, y^O_n) = 0.

(M3) l(y^L_0) is increasing in y^L_0 and integer convex, l(y^L_0 + 2) − l(y^L_0 + 1) > l(y^L_0 + 1) − l(y^L_0), with l(0) = 0.

(M4) With n queries remaining, the learner has information on y_n, whereas the eavesdropper does not have any information.

For an FL setting, (M1) assumes that if client participation is high, it is unlikely to drop suddenly. (M2-M3) ensure that the optimal policy prioritizes learning in a better oracle state, i.e., when the variance is more likely to be bounded.

5This can be extended to obfuscating with multiple intervals by using auxiliary trajectories and choosing uniformly between them.
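The factorization of the transition kernel can be made explicit in a short sketch; the two-state oracle chain `P_oracle` and the success probabilities `upsilon` are hypothetical placeholders for P(y^O_{n−1} | y^O_n) and Υ(y^O, u).

```python
def transition_prob(y_next, y, u, P_oracle, upsilon):
    """Finite horizon transition kernel: the oracle state moves on its own
    Markov chain, and the learner state drops by one exactly when the update
    succeeds, which happens with probability upsilon[y_O][u] (zero for an
    obfuscating query, u = 0)."""
    yO_next, yL_next = y_next
    yO, yL = y
    succ = upsilon[yO][u]
    if yL_next == yL - 1:
        return P_oracle[yO][yO_next] * succ
    if yL_next == yL:
        return P_oracle[yO][yO_next] * (1.0 - succ)
    return 0.0

# Hypothetical two-state oracle, where state 1 is 'better' in the sense of (M1).
P_oracle = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
upsilon = {0: {0: 0.0, 1: 0.4}, 1: {0: 0.0, 1: 0.9}}
total = sum(transition_prob((o, l), (1, 5), 1, P_oracle, upsilon)
            for o in (0, 1) for l in (4, 5))
```

Summing over the reachable next states recovers probability one, a quick sanity check that the kernel is well defined.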
Integer convexity from (M3) is used to prove the structural results and ensures that learning is prioritized more when the learner queue state is larger. The asymmetry of information in (M4) is explained by (E1), since the eavesdropper is assumed to be part of a much larger oracle. The objective for the finite horizon MDP can be formulated as minimizing the state-action cost function Q_n(y, u),

$$V_{n}(y)=\operatorname*{min}_{u\in{\mathcal{U}}}Q_{n}(y,u),\quad\forall\,y\in{\mathcal{S}},\tag{7}$$

where Q_n(y, u) = c(u_n, y^O) + Σ_{y′∈S} P(y′ | y, u_n) V_{n−1}(y′) and V_n(y) is the value function with n queries remaining, with V_0(y) = l(y^L_0). The optimal decision rule is given by u*_n(y) = arg min_{u∈U} Q_n(y, u). The optimal policy ν* is the sequence of optimal decision rules, ν* = (u*_N, u*_{N−1}, . . . , u*_1).

## 3.2 Infinite Horizon Constrained Markov Decision Process

This subsection formulates the infinite horizon constrained MDP (CMDP), highlights the key differences from the finite horizon case, introduces the optimization queue, and proves its stability. The CMDP formulation minimizes the average privacy cost while satisfying a constraint on the average learning cost. An optimization queue models the learner performing optimization repeatedly, which is needed for purging specific data, distributional shifts, and active learning.

**Optimization queue:** The learner maintains a queue with the number of successful updates it needs to make, y^L_n at time n. In contrast to the finite horizon case, the learner receives new requests and appends the optimization tasks to its queue. At time n, y^E_n new required successful updates are added to the queue. y^E_n is an i.i.d. random variable with P(y^E_n = M) = δ, P(y^E_n = 0) = 1 − δ, and E[y^E_n] = δM6. To ensure that the queue is stable, we pose conditions on the success function Υ, the learning cost l, and the learning constraint Λ in Lemma 1.
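The resulting sufficient condition (stated as Lemma 1 below) is easy to check numerically; the parameter values here are illustrative assumptions.

```python
def queue_is_stable(delta, M, upsilon_min, Lam, l_0_W):
    """Sufficient condition of Lemma 1: the expected work arriving per slot
    (delta * M successful updates), scaled by the worst-case success
    probability upsilon_min, must leave slack relative to the learning-cost
    budget Lam = Lambda, with l_0_W = l(0, W) the learning cost of
    obfuscating in the best oracle state."""
    return delta * M / upsilon_min < 1.0 - Lam / l_0_W

# Illustrative numbers: rare retraining arrivals give a stable queue;
# raising the arrival rate delta tenfold violates the condition.
stable = queue_is_stable(delta=0.02, M=10, upsilon_min=0.5, Lam=0.2, l_0_W=1.0)
unstable = queue_is_stable(delta=0.2, M=10, upsilon_min=0.5, Lam=0.2, l_0_W=1.0)
```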
The state space for the new arrivals to the queue is S^E = {0, M}. The learner state space is now denumerable, S^L = {0, 1, . . . }. The oracle state space S^O is the same as before. The state space is now S = S^O × S^L × S^E. The state variables at time n (n denotes the time index in this section) are given by y_n = (y^O_n, y^L_n, y^E_n). The transition probability now has to incorporate the queue arrivals; with the same Υ as before, it can be written as

$$\mathbb{P}(y_{n+1}|y_{n},u_{n})=\mathbb{P}(y_{n+1}^{O}|y_{n}^{O})\mathbb{P}(y_{n+1}^{E})\left(\Upsilon(y_{n}^{O},u_{n})\mathbb{1}\left(y_{n+1}^{L}=y_{n}^{L}+y_{n}^{E}-1\right)+(1-\Upsilon(y_{n}^{O},u_{n}))\mathbb{1}(y_{n+1}^{L}=y_{n}^{L}+y_{n}^{E})\right).\tag{8}$$

The learning cost in the infinite horizon case is redefined as l(u_n, y^O_n); it is decreasing in u_n and increasing in y^O_n, which contrasts with c. The learning cost does not depend on y^L_n, except that when y^L_n = 0, l(u_n, y^O_n) = 0 ∀ u_n ∈ U, y^O_n ∈ S^O. The privacy cost c(u_n, y^O_n), assumptions (M1-2), and the action space U are the same as in the finite horizon case. In the infinite horizon case, a stationary policy is a map from the state space to the action space, ν : S → U. Hence the policy generates actions ν = (u_1, u_2, . . .). Let T denote the space of all stationary policies. The average privacy cost and the average learning cost, respectively, are

$$C_{y_{0}}(\nu)=\limsup_{N\to\infty}\frac{1}{N}\,\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}c(u_{n},y_{n}^{O})\mid y_{0}\right],\quad L_{y_{0}}(\nu)=\limsup_{N\to\infty}\frac{1}{N}\,\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}l(u_{n},y_{n}^{O})\mid y_{0}\right].$$

6The construction of y^E and the i.i.d. condition are for convenience; we only require that y^E is independent of the learner and the oracle state.

The constrained MDP can then be formulated as $\inf_{\nu\in T}C_{y_{0}}(\nu)$ s.t.
$L_{y_{0}}(\nu)\leq\Lambda\;\forall y_{0}\in\mathcal{S}$, (9)

where Λ is the constraint on the average learning cost and accounts for delays in learning. Since the optimization queue can potentially grow unboundedly, to ensure that the optimization tasks arriving later get evaluated, we state the following lemma.

Lemma 1. **(Queue stability)** *Let the smallest success probability be* Υ_min = min_{y^O∈S^O} Υ(u = 1, y^O). *If*

$$\frac{\delta M}{\Upsilon_{\mathrm{min}}}<1-\frac{\Lambda}{l(0,W)},$$

*then every policy satisfying the constraint in (9) induces a stable queue and a recurrent Markov chain.*

Lemma 1 ensures that the optimization queue is stable. Since the policy of always learning satisfies the constraint, the space of policies that induce a recurrent Markov chain is non-empty.

## 3.3 Structural Results

This subsection proves structural results for the optimal policies solving the MDP and the CMDP. The threshold structure of the optimal policy substantially reduces the search space and is used in devising the structured policy gradient algorithm. The following theorem proves that the optimal policy ν* solving the finite horizon MDP of (7) has a threshold structure with respect to the learner state y^L_n.

Theorem 2. *(Nature of the optimal policy ν\*) The optimal policy for the finite horizon MDP of (7) with assumptions (***M1-3***) is deterministic and monotonically increasing in the learner state y^L_n.*

Since the action space consists of two actions, a monotonically increasing policy is a threshold policy (Fig. 1(a)),

$$\nu^{*}(y_{n})=\begin{cases}0=\mathrm{obfuscate},&y_{n}^{L}<\chi(y_{n}^{O})\\ 1=\mathrm{learn},&\mathrm{otherwise}\end{cases},$$

where χ(y^O_n) is an oracle-state-dependent threshold that parametrizes the policy. The proof of Theorem 2 is in the Appendix and follows from Lemma 3, supermodularity, and the assumptions on the cost and the transition probability matrix.
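The threshold decision rule of Theorem 2 amounts to a lookup against the oracle-state-dependent threshold χ; the numerical thresholds below are hypothetical, with the better oracle state given the lower threshold in the spirit of (M2).

```python
def threshold_policy(y_L, y_O, chi):
    """Decision rule of Theorem 2: obfuscate (0) while the number of remaining
    successful updates y_L is below the oracle-state-dependent threshold
    chi[y_O]; otherwise learn (1)."""
    return 1 if y_L >= chi[y_O] else 0

# Hypothetical thresholds: the better oracle state (here state 2) has a lower
# threshold, so the learner starts learning earlier when the oracle is good.
chi = {1: 6, 2: 3}
actions_bad = [threshold_policy(y_L, 1, chi) for y_L in range(1, 9)]
actions_good = [threshold_policy(y_L, 2, chi) for y_L in range(1, 9)]
```

Both action sequences are monotonically increasing in y^L, which is exactly the structure the theorem guarantees.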
To characterize the structure of the optimal policy for the CMDP, we first study an unconstrained Lagrangian average cost MDP with the instantaneous cost w(u, y^O; λ) = c(u, y^O) + λ l(u, y^O), where λ is the Lagrange multiplier. The average Lagrangian cost for a policy ν is then given by

$$J_{y_{0}}(\nu,\lambda)=\operatorname*{lim}_{N\to\infty}\frac{1}{N}\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}w(u_{n},y_{n}^{O};\lambda)\mid y_{0}\right],\quad\forall\ y_{0}\in\mathcal{S}.$$

The corresponding average Lagrangian cost MDP and optimal stationary policy are

$$J_{y_{0}}^{*}(\lambda)=\operatorname*{inf}_{\nu\in{\mathcal{T}}}J_{y_{0}}(\nu,\lambda),\qquad\nu_{\lambda}^{*}=\arg\operatorname*{inf}_{\nu\in{\mathcal{T}}}J_{y_{0}}(\nu,\lambda).\tag{10}$$

Further, we treat the average Lagrangian cost MDP of (10) as a limiting case of the following discounted Lagrangian cost MDP when the discount factor β goes to 1,

$$J_{y_{0}}^{\beta}(\nu,\beta,\lambda)=\limsup_{N\to\infty}\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}\beta^{n}w(u_{n},y_{n}^{O};\lambda)\mid y_{0}\right].\tag{11}$$

Theorem 2 can then be extended to show that the optimal policy of (11) has a threshold structure in the learner state y^L_n. The average Lagrangian cost MDP of (10) is a limit of the discounted Lagrangian cost MDPs (11) with an appropriate sequence of discount factors (β_n) converging to 1, and therefore also has a threshold policy. The existence of a stationary policy for (10) is shown in previous work (Ngo & Krishnamurthy, 2010; Sennott, 1989) and is discussed in Appendix B.4. Hence we directly state a corollary from Sennott (1989) as a theorem, which shows that the stationary queuing policy for the CMDP in (9) is a randomized mixture of optimal policies for two average Lagrangian cost MDPs. Theorem 3.
**(Existence of randomized stationary policy)** (Sennott, 1993) *There exists an optimal stationary policy for the CMDP of (9), which is a randomized mixture of the optimal policies of two average Lagrangian cost MDPs,*

$$\nu^{*}=p\nu_{\lambda_{1}}^{*}+(1-p)\nu_{\lambda_{2}}^{*},\tag{12}$$

*where* ν*_{λ1} *and* ν*_{λ2} *are optimal policies for average Lagrangian cost MDPs of the form (10) and p is the probability with which* ν*_{λ1} *is chosen.*

Because of Theorem 3, the optimal policy for the CMDP is a randomized mixture of two threshold policies and therefore also has a threshold structure. A threshold policy of the form (12) has two threshold levels (denoted by ϕ_1 for the transition from action 0 to the randomized action and ϕ_2 for the transition from the randomized action to action 1) and can be written as

$$\nu^{*}(y)=\begin{cases}0,&y^{L}<\phi_{1}\\ 1\;\mathrm{with\;prob.}\;p,&\phi_{1}\leq y^{L}<\phi_{2}\\ 1,&\phi_{2}\leq y^{L}\end{cases}.\tag{13}$$

We next propose an efficient reinforcement learning algorithm that exploits this structure to learn the optimal stationary policy for the CMDP.

## 3.4 Structured Policy Gradient Algorithm

The optimal policies for both the finite horizon MDP and the infinite horizon MDP can be computed using existing techniques such as value iteration or linear programming, but these methods require knowledge of the transition probabilities. Hence, to search for the optimal policy without knowledge of the transition probabilities, we propose a policy gradient algorithm with linear time complexity in the learner state space. To efficiently find an optimal policy of the form (13), we use an approximate sigmoidal policy νˆ(y, Θ) constructed as follows,

$${\hat{\nu}}(y,\Theta)=\left({\frac{h}{1+\exp{\frac{-y^{L}+\theta_{1}}{\tau}}}}+{\frac{1-h}{1+\exp{\frac{-y^{L}+\theta_{2}}{\tau}}}}\right),\tag{14}$$

where θ_1 and θ_2 are parameters approximating the thresholds ϕ_1 and ϕ_2.
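The surrogate (14) can be sketched for a single oracle state as follows; the threshold values and mixing weight are illustrative assumptions.

```python
import math

def sigmoid_policy(y_L, theta1, theta2, h, tau):
    """Smooth surrogate (14) of the two-level threshold policy (13): the
    returned value is the probability of posing a learning query. As
    tau -> 0 it tends to 0 below theta1, to h between the two thresholds,
    and to 1 above theta2."""
    s = lambda theta: 1.0 / (1.0 + math.exp((-y_L + theta) / tau))
    return h * s(theta1) + (1.0 - h) * s(theta2)

# With a small tau the surrogate is nearly the discrete policy (13):
# thresholds at 3 and 7, mixing weight h = 0.4.
low = sigmoid_policy(1, theta1=3, theta2=7, h=0.4, tau=0.05)
mid = sigmoid_policy(5, theta1=3, theta2=7, h=0.4, tau=0.05)
high = sigmoid_policy(9, theta1=3, theta2=7, h=0.4, tau=0.05)
```

The smoothness in (θ_1, θ_2, h) is what makes the finite-difference gradient estimates of the next subsection meaningful.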
h approximates the mixing probability p. The parameter τ controls how closely the sigmoidal policy follows a discrete threshold policy. It can be shown that as τ → 0, h → p, θ_1 → ϕ_1, and θ_2 → ϕ_2, the approximate policy converges to the true policy, νˆ(y, Θ) → ν* (Ngo & Krishnamurthy, 2010; Kushner & Yin, 2003).

**Algorithm 1: Structured Policy Gradient Algorithm**

Input: initial policy parameters Θ_0, perturbation parameter ω, number of iterations K, step size κ, scale parameter ρ, learning cost l, privacy cost c
Output: policy parameters Θ_K

procedure ComputeStationaryPolicy(ω, K, κ, ρ)
&nbsp;&nbsp;&nbsp;&nbsp;for n ← 1 . . . K do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Γ ← Bernoulli(1/2) ▷ 3 × |S^O||S^E| i.i.d. Bernoulli random variables
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Θ⁺_n ← Θ_n + Γ ∗ ω, Θ⁻_n ← Θ_n − Γ ∗ ω, l̂ ← AvgCost(l, Θ_n)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;∆̂l ← AvgCost(l, Θ⁺_n) − AvgCost(l, Θ⁻_n), ∆̂c ← AvgCost(c, Θ⁺_n) − AvgCost(c, Θ⁻_n)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update Θ_n and ξ_n using (15)
&nbsp;&nbsp;&nbsp;&nbsp;end for
end procedure

procedure AvgCost(J, Θ)
&nbsp;&nbsp;&nbsp;&nbsp;νˆ ← PolicyFromParameters(Θ)
&nbsp;&nbsp;&nbsp;&nbsp;Ĵ ← (1/T) Σ_{t=1}^{T} J(νˆ(y_t), y_t)
end procedure

Figure 2: The learner L either learns from the oracle O (distributed clients) or obfuscates the eavesdropper E based on the oracle state (number of clients) and the queue state (number of successful training rounds). The learner (the central aggregator in FL) updates its queue state based on the response (size of the training data) and the action taken.

A policy gradient algorithm that finds an optimal policy of the form (12) for solving the CMDP of (9) is now proposed. For each oracle state, the parameters of the approximate policy (14) are (θ_1, θ_2, h). The complete set of policy parameters (3 × |S^O||S^E| parameters in total) is referred to as Θ. The procedure is summarized in Algorithm 1. Our policy gradient algorithm updates the parameter estimates Θ_n using (15) by taking approximate gradients of the average costs and of the constraint in (9), together with the multiplier ξ_n, which is also updated using (15). The parameters to be updated are chosen randomly using a vector Γ, the components of which have an i.i.d.
Bernoulli distribution. The chosen parameters are perturbed by +ω and −ω, and the approximate gradients of the privacy and learning costs are denoted by ∆̂c and ∆̂l, respectively. The approximate gradients of these costs are computed using a finite-difference method on the approximate costs. The approximate average costs are computed by interacting with the Markovian oracle for T timesteps. The approximate average learning cost is denoted by l̂. κ and ρ are suitably chosen step and scale parameters7.

$$\begin{array}{c}{{\Theta_{n+1}=\Theta_{n}-\kappa\left(\hat{\Delta}c+\hat{\Delta}l\times\operatorname*{max}\left[0,\xi_{n}+\rho\left(\hat{l}-\Lambda\right)\right]\right),}}\\ {{\xi_{n+1}=\operatorname*{max}\left[\left(1-\frac{\kappa}{\rho}\xi_{n}\right),\xi_{n}+\kappa\left(\hat{l}-\Lambda\right)\right].}}\end{array}\tag{15}$$

The SPGA algorithm can run on a faster time scale by interacting with the system in parallel to the SGD and updating the stationary policy parameters Θ. The stationary policy controls the query posed and updates the SGD estimate in (1). The computational complexity of the SPGA algorithm described here is O(|S|), i.e., the algorithm is linear in the state space and hence significantly more scalable than standard policy gradient methods (Kushner & Yin, 2003).

## 3.5 Alternative Discrete Optimization Formulation For Threshold Identification

Since the SPGA algorithm is not amenable to finite-time complexity analysis, an alternative approach for obtaining the threshold levels is to formulate the problem as a multi-armed bandit (MAB). Considering each possible configuration of the Y = |S^O||S^E| thresholds (each of which can take υ values) for the Lagrange-constrained problem of (11) as arms in a MAB problem, the minimum achievable regret is of the order O(υ^Y log T) (Lai & Robbins, 1985).
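A scalar sketch of one SPGA iteration of (15): the toy average-cost functions are illustrative stand-ins for the simulated AvgCost of Algorithm 1, with a single parameter the random coordinate selection reduces to always perturbing it by ±ω, and the ξ recursion follows (15) as stated.

```python
def spga_step(theta, xi, avg_c, avg_l, omega, kappa, rho, Lam):
    """One iteration of the structured policy gradient (15) for a single
    scalar threshold parameter theta. avg_c and avg_l stand in for the
    simulated average privacy and learning costs AvgCost(c, .) and
    AvgCost(l, .) of Algorithm 1."""
    dc = avg_c(theta + omega) - avg_c(theta - omega)  # finite-difference grads
    dl = avg_l(theta + omega) - avg_l(theta - omega)
    l_hat = avg_l(theta)
    theta_next = theta - kappa * (dc + dl * max(0.0, xi + rho * (l_hat - Lam)))
    xi_next = max(1.0 - (kappa / rho) * xi, xi + kappa * (l_hat - Lam))
    return theta_next, xi_next

# Toy costs: quadratic privacy cost, inactive learning constraint.
theta, xi = spga_step(1.0, 0.0, lambda t: t * t, lambda t: 0.0,
                      omega=0.1, kappa=0.5, rho=1.0, Lam=0.0)
```

With the quadratic privacy cost, the finite difference recovers the descent direction and the parameter moves from 1.0 toward the minimizer.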
The value of the Lagrange parameter can then be iterated over, similar to the value iteration of Ngo & Krishnamurthy (2010), to obtain the two Lagrange multipliers and the randomization probability that form the optimal policy of (13). The main disadvantages of a MAB approach compared to the SPGA are that it makes strong assumptions on the noise structure, it is unable to track changes in the underlying system, and its time complexity is exponential in the state space.

7The step size is constant if we want the SPGA to track changes in the underlying system and decreasing otherwise.

## 4 Hate Speech Classification In A Federated Setting

We now present a novel application of the covert optimization framework in designing a robust hate speech classifier, illustrated in Figure 2. The task of the learner L is to minimize a classification loss function f(x) and simultaneously hide the optimal classifier (neural network) parameters arg min f(x) from an eavesdropper E.

**State space:** In our federated setting, the oracle state y^O denotes the number of clients participating in each communication round, and more clients indicate a better state. Client participation is assumed to be Markovian since this is more general than i.i.d. participation and closer to a real-world scenario (Sun et al., 2018; Rodio et al., 2023). Although the stochastic aspect of the oracle is modeled on client participation and the quantity of the training data, it can also be modeled with respect to quality. For example, in hate speech detection and similar applications, unintended bias based on characteristics like race, gender, etc. often occurs (Dixon et al., 2018). If the oracle state is based on how diverse the training data is, we can train when the available data is good enough and obfuscate otherwise, using costs related to biased classification (Viswanath et al., 2022). The learner state y^L denotes the number of remaining successful gradient updates.
The learner decides the total number of required successful gradient updates based on convergence criteria like the one in Theorem 1. A gradient update is successful if the amount of available training data is above a threshold, similar to communication skipping using system parameters in Sun et al. (2022); Mishchenko et al. (2022). This is a proxy for the threshold on σ in (A2), since an exact noise bound is difficult to obtain in practice. The optimization queue receives new arrivals y^E for model retraining due to client requests for unlearning their data, data distribution shifts, and active learning (Bachman et al., 2017; Sekhari et al., 2021; Cai et al., 2021). The timescale for practical federated training ranges from a few hours to a few days (Hard et al., 2019). In hate speech classification, model retraining requests due to a shift in context arrive on a slower time scale, but purging requests for a client's data can be made every few hours. All devices, including the ones not participating in the training round, can make such a request, ensuring that y^E is independent of y^O (E1).

Action Space: Using labeled datasets, GANs can be trained to generate hate speech (Lin et al., 2017; de Masson d'Autume et al., 2019). However, since access to labeled data is difficult, an eavesdropper E can instead use the optimal weights as discriminator weights to train a generator (Wang et al., 2018). The action space U is to either send the correct learning neural network parameters or the obfuscating parameters. We discuss in the next section how to generate obfuscating parameters under two different eavesdropper information scenarios.

Costs: Although for this paper only a majority of queries need to be obfuscating, in general, the more learning queries an eavesdropper knows, the higher the probability that the eavesdropper figures out the optimal weights.
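A one-line sketch of this success criterion (hypothetical names; the 1/4 fraction matches the experimental choice in Section 4.2):

```python
def is_successful_round(n_available, n_total, frac=0.25):
    """Proxy for the noise bound sigma in (A2): a communication round
    counts as a successful gradient update only if enough training
    data is available; otherwise the round is discarded/skipped."""
    return n_available >= frac * n_total
```

The learner state y^L then decreases by one on each round for which this predicate holds and a learning query was posed.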
The eavesdropper can then generate hate speech and misinformation that are semantically more coherent (measured by metrics of readability and perplexity (Carrasco-Farré, 2022; Mimno et al., 2011)) and can be spread easily (Viswanath et al., 2022). Hence, the privacy cost in the MDP can be associated with metrics of the spread of malicious content, including prevalence (Wang et al., 2021) and contagion spread rates (Davani et al., 2021; Lawson et al., 2023). Analogously, the learning cost l can be associated with the same set of costs, since malicious content will go undetected if the classifier does not achieve the desired accuracy. For delays in forgetting a client's information, the learning cost is analogous to the value of private information, measured by metrics like the Shapley value (Kleinberg, 2001; Wang et al., 2020). Also, clients might want to remove their data because they incorrectly annotated it earlier, and keeping the retraining task in the queue can worsen the real-time accuracy (Davani et al., 2021; Klie et al., 2023). The privacy cost, the learning cost, and the learning constraint Λ of (9) can be chosen appropriately depending on the maximum ratio of queries the learner can afford to expose, which for the proportional sampling estimator is 0.5.

## 4.1 Numerical Results

In the first experiment, the MDP formulation and structural results are demonstrated on a hate speech classification task in the federated setting described above. The convergence of the threshold parameters in SPGA is investigated in the second experiment. It is empirically shown that the optimal policy solving (9) has a threshold structure in the oracle state, demonstrating that the optimal policy makes the learner learn more when the oracle is good. Additional benchmark experiments on the MNIST dataset, the dataset preprocessing, the architecture, and the assumptions are listed in the Appendix.
Before discussing the numerical study, we explain two scenarios with respect to the information that the eavesdropper has and the information the learner has about the eavesdropper. The results of the study are then presented under both scenarios.

![11_image_0.png](11_image_0.png)

Figure 3: Convergence of validation accuracies for Learner L and Eavesdropper E under greedy and optimal policy. The optimal policy helps dynamically learn and hide the optimal weights compared to a greedy policy of always learning.

Scenario 1: Eavesdropper does not have enough data: When the eavesdropper does not have enough data to choose between the two SGDs, the obfuscating queries can be posed in various ways, for example, by posing noisy queries sampled randomly from the domain $\mathbb{R}^d$ or by doing a mirrored gradient descent using the true queries in (5). Since the eavesdropper has no information, the prior over the two SGDs is the same. An extension of this scenario is when the learner is trying to hide hyperparameters, which are essential to training a neural network (Probst et al., 2019), from the eavesdropper. The learner can switch between the intended hyperparameter and a suitably chosen obfuscating hyperparameter.

Scenario 2: Eavesdropper has a subset of data, but the learner has information about the subset: In case the eavesdropper has access to a subset of the data, it can test out both trajectories on its dataset to see which one has a smaller empirical loss. Let the dataset of the eavesdropper be $D_0 \subset D$. If the learner knows $D_0$, then it can simulate an oracle with the function $f'(x) = f(x, D_0) = \frac{1}{|D_0|}\sum_{d \in D_0} G(x, d)$ to obtain noisy gradients $r'_k = \nabla f'(q_k) + \eta'_k$, where G is the loss function and $\eta'_k$ is suitably simulated noise. The obfuscating queries can then be obtained using the following SGD trajectory: $\hat{z}_k = \hat{z}_{k-1} - \mu_k r'_k \mathbb{1}(u_k = 0)$. This is the case, for example, when the eavesdropper uses a public dataset, since accessing a reliable and balanced dataset is otherwise costly.
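Under stated assumptions (a toy scalar loss G(x, d) = 0.5·(x − d)² and hypothetical names), the simulated-oracle obfuscating trajectory of Scenario 2 can be sketched as:

```python
import random

def simulated_oracle_grad(x, d0, sigma=0.01, rng=random):
    """Noisy gradient r'_k = grad f'(q_k) + eta'_k of the simulated
    oracle f'(x) = (1/|D0|) * sum_{d in D0} G(x, d), with the toy loss
    G(x, d) = 0.5 * (x - d)**2, so grad G(x, d) = x - d."""
    g = sum(x - d for d in d0) / len(d0)
    return g + rng.gauss(0.0, sigma)

def obfuscating_trajectory(z0, d0, steps, mu=0.1, seed=0):
    """Parallel SGD z_k = z_{k-1} - mu * r'_k run only on the
    eavesdropper's subset D0: its limit fits D0, not the full D."""
    rng = random.Random(seed)
    z = z0
    for _ in range(steps):
        z = z - mu * simulated_oracle_grad(z, d0, rng=rng)
    return z
```

Because the trajectory only minimizes the empirical loss over D0, the resulting weights fit the eavesdropper's subset rather than the full dataset, which is exactly why they do not generalize well.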
The case where the learner has incomplete information about the eavesdropper's dataset is left for future research. The case where the eavesdropper has access to all data is not relevant, since then the eavesdropper can carry out the optimization on its own. The weights obtained using the parallel SGD do not generalize well, since the empirical loss being minimized is for a subset of the data. This makes the prior over both SGDs more balanced, since the eavesdropper can no longer take advantage of its dataset. And because the parallel SGD is observed the majority of the time, the eavesdropper's estimate (6) corresponds to the parallel SGD.

## 4.2 Demonstration Of Mdp Framework On Covert Optimization In Hate Speech Classification

A hate speech classification task is considered where an extended version of a pre-trained BERT (Devlin et al., 2019; Turc et al., 2019) model is fine-tuned on a labeled dataset to classify textual data as toxic or not8. A federated setting with 20 clients whose data is non-overlapping is considered. Each client has 5443 training samples and 1443 validation samples. For the experimental results, we consider N = 45 communication rounds (or queries) and M = 16 successful model updates (∼ 34% of the total queries). To demonstrate the versatility of our formulation, we use the FedAvg algorithm, where the learner aggregates the weights of the individual clients rather than the weight updates (McMahan et al., 2017). The experiment is done with 10 random seeds to obtain error bounds. We use the size of the training data as a proxy for σ_k, since an exact σ_k is difficult to estimate. A threshold on the noise bound (σ) is approximated with a threshold on the number of data points, i.e., the learner discards any communication round when less than 1/4 of the entire dataset is available for training9. The underlying Markov chain of the device participation has three states, i.e., either W_1 = 1/4, W_2 = 1/2, or W_3 = 1 of the devices participate in any communication round. Each device can contribute any number of data points out of the available data points, and for the chosen criteria, the success function empirically comes out to be Υ(y^O, 1) = [0.1, 0.43, 0.95]. The transition probabilities between the oracle states are given by P^O = [0.8 0.2 0; 0.3 0.5 0.2; 0 0.2 0.8]. The privacy cost c is taken to be 0.3, 0.8, 1.8 for the respective oracle states, and the learning cost l to be 0.6. The cost is chosen by interacting with the system and observing the average proportion of queries obfuscated under the stationary policy, as shown in Appendix A.4.3. This cost can also be tuned in an online fashion based on the number of slots available, the learner's preference, and the percentage of queries the learner can afford to expose, hence enabling practical realizability of previous work on covert optimization (Xu et al., 2021a). Since this is a finite horizon MDP, the learner can use linear programming or value iteration to find an optimal policy using empirical estimates of the transition probabilities from past data, or can use the SPGA algorithm and interact with the system to find a stationary sub-optimal policy. We show our results using a stationary policy obtained by the SPGA algorithm; hence the learner does not know P(y_{n+1}|y_n, u_n). Figure 3(a) shows the convergence of the aggregated validation accuracies for the learner and the eavesdropper under Scenario 1 for a greedy and a stationary policy.

8The dataset used was made public by Jigsaw AI and can be found here. Hate speech classification is still an open problem and the achieved accuracy is barely satisfactory, but our aim was to show the application of our formulation. Our source code can be found on this anonymized link.
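As an illustrative check (hypothetical names), the oracle chain P^O and the empirical success function Υ(y^O, 1) reported above can be simulated to estimate the long-run fraction of rounds with enough data for a successful update:

```python
import random

P_O = [[0.8, 0.2, 0.0],      # transition matrix between oracle states
       [0.3, 0.5, 0.2],
       [0.0, 0.2, 0.8]]
SUCCESS = [0.1, 0.43, 0.95]  # empirical success function Upsilon(y^O, 1)

def estimate_success_rate(steps, seed=0):
    """Simulate the oracle Markov chain and count how often a round
    has enough training data for a successful gradient update."""
    rng = random.Random(seed)
    state, successes = 0, 0
    for _ in range(steps):
        successes += rng.random() < SUCCESS[state]
        state = rng.choices(range(3), weights=P_O[state])[0]
    return successes / steps
```

Under the stationary distribution (3/7, 2/7, 2/7) of this P^O, the long-run success probability is 3/7·0.1 + 2/7·0.43 + 2/7·0.95 ≈ 0.44.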
The loss function considered is the binary cross-entropy loss, and the reported accuracy is the validation accuracy score. The eavesdropper's accuracy is calculated using a balanced validation dataset of size 2886. It can be seen that although the learner's accuracy on average goes up to 0.85, the eavesdropper's accuracy only goes up to 0.52. Figure 3(b) shows Scenario 2 with an eavesdropper whose imbalanced dataset contains 10% toxic examples (one of the two classes). This dataset is assumed to be public, and the learner has complete access to it, which it uses to obfuscate the eavesdropper. In Figure 3(b), it is evident that the obfuscation achieved is weaker than when the eavesdropper had no toxic samples: the eavesdropper is able to achieve an accuracy of around 0.69, which is explained by the fact that the obfuscating parameters are trained on a sample of the entire dataset. In both cases, when the learner uses a greedy policy, the eavesdropper's accuracy is on par with the learner's accuracy. This demonstrates how, by using the optimal policy, the learner can prevent an eavesdropper from learning the optimal weights of the classifier.

## 4.2.1 Convergence Of Spga Algorithm And Numerical Structural Result

The convergence of Algorithm 1 to the true threshold parameters is investigated next. In addition to the previously defined parameters, a learning constraint Λ = 0.2 is imposed on the average learning rates, setting up a CMDP whose objective is given by (9). The arrival probability for M = 4 queries is δ = 0.1 10. For calculating the approximate average cost, we take a sample path of 100 timesteps. The results are averaged over 100 runs. The convergence of the threshold parameter ϕ_2 for different oracle states with arrival state E = 0 is plotted in Figure 4, along with the true threshold parameters found by linear programming.
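Equation (14) itself lies outside this excerpt; the sketch below uses an assumed sigmoidal threshold parameterization (with a hypothetical temperature τ) purely to illustrate why a negative converged threshold acts like a threshold at 0 on the nonnegative learner state:

```python
import math

def sigmoidal_policy(y_learner, theta, tau=0.5):
    """Probability of posing a learning query: a smooth approximation
    of a hard threshold at theta (illustrative stand-in for Eq. (14))."""
    return 1.0 / (1.0 + math.exp(-(y_learner - theta) / tau))

# A negative threshold (e.g., the converged value for oracle state 3)
# gives probability close to 1 for every feasible learner state y >= 0.
probs = [sigmoidal_policy(y, theta=-2.0) for y in range(11)]
```

With θ = −2 the policy already returns probability ≈ 0.98 at y = 0, so on the nonnegative learner state it is indistinguishable from a threshold policy with threshold 0.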
The approximate thresholds θ_2 of the sigmoidal policy of (14) converge close to the true threshold parameters ϕ_2 without knowledge of the transition probabilities. The threshold for oracle state 3 converges to a negative value, which, when plugged into the sigmoidal policy of (14), resembles a threshold policy with threshold at 0, since the learner state cannot be negative. It can be numerically seen that the threshold of the optimal policy decreases with increasing oracle state; that is, the optimal policy is nonincreasing in the oracle state, and hence the learner poses a learning query more often when the oracle is in a good state.

![12_image_0.png](12_image_0.png)

Figure 4: Convergence of threshold parameters θ to true threshold parameters ϕ in Alg. 1 for different oracle states. The optimal policy incentivizes learning more when the oracle is in a better state.

9Recent work in federated learning has proposed skipping training rounds when the distributed oracle is not good enough, leading to fewer communication rounds and better convergence rates (Chen et al., 2018; Sun et al., 2022; Mishchenko et al., 2022).

10This is slightly different from the theoretical model: since it is not possible to simulate an infinite buffer, we consider a queue of length 40, and to prevent a queue overflow, a high learning cost of 100 is imposed in case the queue is full. This consideration is similar to previous work on network queueing using a CMDP approach and does not change the threshold and monotone nature of the policy (Djonin & Krishnamurthy, 2007).

The parameters for the SPGA algorithm, along with an additional experiment with a constant step size, are given in Appendix A.6.

## 5 Conclusion

The problem of covert optimization is studied from a dynamic perspective, where the oracle is stochastic and the learner receives new optimization requests. The problem is modeled as a Markov decision process, and structural results are established for the optimal policy.
A linear-time policy gradient algorithm is proposed, and the application of our framework is demonstrated in a hate speech classification context. Future work can look at inverse RL techniques for the eavesdropper to infer the optimal learner policy, and more robust obfuscating gradient trajectories can be studied. The problem of covert optimization can also be investigated in a decentralized setting, where the problem is modeled as a switching control game with participants switching between learning and obfuscating others. Our suggested methodology can dynamically control learning in distributed settings for objectives like energy efficiency, client privacy, and client selection. An eavesdropper with finite memory and an objective of minimizing the average learning cost with a constraint on the average privacy cost can also be considered. This work's main broader ethical concerns are a) an increase in energy consumption to achieve covertness and b) the possibility that it helps a learner covertly train a classifier for questionable reasons, e.g., censorship.

## References

A. Ajalloeian and S. U. Stich. On the Convergence of SGD with Biased Gradients, May 2021. URL http://arxiv.org/abs/2008.00051. arXiv:2008.00051 [cs, math, stat].

H. Asi, V. Feldman, T. Koren, and K. Talwar. Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 393–403. PMLR, July 2021. URL https://proceedings.mlr.press/v139/asi21b.html. ISSN: 2640-3498.

P. Bachman, A. Sordoni, and A. Trischler. Learning Algorithms for Active Learning. In *Proceedings of the 34th International Conference on Machine Learning*, pp. 301–310. PMLR, July 2017. URL https://proceedings.mlr.press/v70/bachman17a.html. ISSN: 2640-3498.

M. R. Bloch. Covert Communication Over Noisy Channels: A Resolvability Perspective. *IEEE Transactions on Information Theory*, 62(5):2334–2354, May 2016. ISSN 1557-9654. doi: 10.1109/TIT.2016.2530089.
Conference Name: IEEE Transactions on Information Theory. L. Bottou. Stochastic Learning. In O. Bousquet, U. von Luxburg, and G. Rätsch (eds.), *Advanced Lectures on Machine* Learning: ML Summer Schools 2003, Canberra, Australia, February 2 - 14, 2003, Tübingen, Germany, August 4 - 16, 2003, Revised Lectures, Lecture Notes in Computer Science, pp. 146–168. Springer, Berlin, Heidelberg, 2004. ISBN 978-3-540-28650-9. doi: 10.1007/978-3-540-28650-9_7. URL https://doi.org/10.1007/ 978-3-540-28650-9_7. T. Cai, R. Gao, J. Lee, and Q. Lei. A Theory of Label Propagation for Subpopulation Shift. In Proceedings of the 38th International Conference on Machine Learning, pp. 1170–1182. PMLR, July 2021. URL https: //proceedings.mlr.press/v139/cai21b.html. ISSN: 2640-3498. C. Carrasco-Farré. The fingerprints of misinformation: how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions. *Humanities and Social Sciences Communications*, 9(1):1–18, May 2022. ISSN 2662-9992. doi: 10.1057/s41599-022-01174-9. URL https://www.nature.com/articles/ s41599-022-01174-9. Number: 1 Publisher: Palgrave. T. Chen, G. Giannakis, T. Sun, and W. Yin. LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/hash/ feecee9f1643651799ede2740927317a-Abstract.html. T. Chen, K. Zhang, G. B. Giannakis, and T. Ba¸sar. Communication-Efficient Policy Gradient Methods for Distributed Reinforcement Learning. *IEEE Transactions on Control of Network Systems*, 9(2):917–929, June 2022a. ISSN 2325-5870. doi: 10.1109/TCNS.2021.3078100. Z. Chen, S. Zhang, T. T. Doan, J.-P. Clarke, and S. T. Maguluri. Finite-Sample Analysis of Nonlinear Stochastic Approximation with Applications in Reinforcement Learning, January 2022b. URL http://arxiv.org/abs/ 1905.11425. arXiv:1905.11425 [cs, math]. A. M. 
Davani, M. Atari, B. Kennedy, and M. Dehghani. Hate Speech Classifiers Learn Human-Like Social Stereotypes, October 2021. URL http://arxiv.org/abs/2110.14839. arXiv:2110.14839 [cs]. C. de Masson d' Autume, S. Mohamed, M. Rosca, and J. Rae. Training Language GANs from Scratch. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings. neurips.cc/paper/2019/hash/a6ea8471c120fe8cc35a2954c9b9c595-Abstract.html. T. Delahaye, R. Acuna-Agost, N. Bondoux, A.-Q. Nguyen, and M. Boudia. Data-driven models for itinerary preferences of air travelers and application for dynamic pricing optimization. *Journal of Revenue and Pricing Management*, 16 (6):621–639, December 2017. ISSN 1477-657X. doi: 10.1057/s41272-017-0095-z. URL https://doi.org/ 10.1057/s41272-017-0095-z. A. Dethise, M. Canini, and S. Kandula. Cracking Open the Black Box: What Observations Can Tell Us About Reinforcement Learning Agents. In *Proceedings of the 2019 Workshop on Network Meets AI & ML*, NetAI'19, pp. 29–36, New York, NY, USA, August 2019. Association for Computing Machinery. ISBN 978-1-4503-6872-8. doi: 10.1145/3341216.3342210. URL https://dl.acm.org/doi/10.1145/3341216.3342210. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, May 2019. URL http://arxiv.org/abs/1810.04805. arXiv:1810.04805 [cs]. D. I. Dimitrov, M. Balunovic, N. Konstantinov, and M. Vechev. Data Leakage in Federated Averaging. Transactions on Machine Learning Research, November 2022. ISSN 2835-8856. URL https://openreview.net/forum? id=e7A0B99zJf&noteId=6wc4VYeT6N. L. Dixon, J. Li, J. Sorensen, N. Thain, and L. Vasserman. Measuring and Mitigating Unintended Bias in Text Classification. In *Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society*, AIES '18, pp. 67–73, New York, NY, USA, December 2018. Association for Computing Machinery. ISBN 978-1-4503-6012-8. 
doi: 10.1145/3278721.3278729. URL https://dl.acm.org/doi/10.1145/3278721.3278729. D. V. Djonin and V. Krishnamurthy. MIMO Transmission Control in Fading Channels—A Constrained Markov Decision Process Formulation With Monotone Randomized Policies. *IEEE Transactions on Signal Processing*, 55(10):5069–5083, October 2007. ISSN 1941-0476. doi: 10.1109/TSP.2007.897859. Conference Name: IEEE Transactions on Signal Processing. T. T. Doan, L. M. Nguyen, N. H. Pham, and J. Romberg. Convergence Rates of Accelerated Markov Gradient Descent with Applications in Reinforcement Learning, October 2020. URL http://arxiv.org/abs/2002.02873. arXiv:2002.02873 [math]. M. S. Erturk and K. Xu. Anonymous Stochastic Routing, December 2020. URL http://arxiv.org/abs/1911. 08875. arXiv:1911.08875 [cs, math]. S. Ghadimi and G. Lan. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming. *SIAM* Journal on Optimization, 23(4):2341–2368, January 2013. ISSN 1052-6234. doi: 10.1137/120880811. URL https://epubs.siam.org/doi/10.1137/120880811. Publisher: Society for Industrial and Applied Mathematics. A. Ginart, M. Guan, G. Valiant, and J. Y. Zou. Making AI Forget You: Data Deletion in Machine Learning. In *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/hash/ cb79f8fa58b91d3af6c9c991f63962d3-Abstract.html. A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage. Federated Learning for Mobile Keyboard Prediction, February 2019. URL http://arxiv.org/abs/1811. 03604. arXiv:1811.03604 [cs]. T. Hartvigsen, S. Gabriel, H. Palangi, M. Sap, D. Ray, and E. Kamar. ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection. In *Proceedings of the 60th Annual Meeting of the* Association for Computational Linguistics (Volume 1: Long Papers), pp. 3309–3326, Dublin, Ireland, May 2022. 
Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.234. URL https://aclanthology. org/2022.acl-long.234. M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/ paper/2017/hash/8a1d694707eb0fefe65871369074926d-Abstract.html. D. Jhunjhunwala, P. Sharma, A. Nagarkatti, and G. Joshi. Fedvarp: Tackling the variance due to partial client participation in federated learning. In *Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence*, pp. 906–916. PMLR, August 2022. URL https://proceedings.mlr.press/v180/jhunjhunwala22a. html. ISSN: 2640-3498. P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, R. G. L. D'Oliveira, H. Eichner, S. E. Rouayheb, D. Evans, J. Gardner, Z. Garrett, A. Gascón, B. Ghazi, P. B. Gibbons, M. Gruteser, Z. Harchaoui, C. He, L. He, Z. Huo, B. Hutchinson, J. Hsu, M. Jaggi, T. Javidi, G. Joshi, M. Khodak, J. Konecný, A. Korolova, F. Koushanfar, S. Koyejo, T. Lepoint, Y. Liu, P. Mittal, M. Mohri, R. ˇ Nock, A. Özgür, R. Pagh, M. Raykova, H. Qi, D. Ramage, R. Raskar, D. Song, W. Song, S. U. Stich, Z. Sun, A. T. Suresh, F. Tramèr, P. Vepakomma, J. Wang, L. Xiong, Z. Xu, Q. Yang, F. X. Yu, H. Yu, and S. Zhao. Advances and Open Problems in Federated Learning, March 2021. URL http://arxiv.org/abs/1912.04977. arXiv:1912.04977 [cs, stat]. S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 5132–5143. PMLR, November 2020. URL https://proceedings.mlr.press/v119/karimireddy20a.html. ISSN: 2640-3498. D. P. Kingma and J. Ba. 
Adam: A Method for Stochastic Optimization, January 2017. URL http://arxiv.org/ abs/1412.6980. arXiv:1412.6980 [cs]. J. Kleinberg. On the Value of Private Information. *TARK '01: Proceedings of the 8th conference on Theoretical aspects* of rationality and knowledge, July 2001. J.-C. Klie, B. Webber, and I. Gurevych. Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future. *Computational Linguistics*, 49(1):157–198, March 2023. ISSN 0891-2017. doi: 10.1162/coli_a_00464. URL https://doi.org/10.1162/coli_a_00464. H. Kushner and G. G. Yin. *Stochastic Approximation and Recursive Algorithms and Applications*. Springer Science & Business Media, July 2003. ISBN 978-0-387-00894-3. Google-Books-ID: EC2w1SaPb7YC. T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. *Advances in Applied Mathematics*, 6(1):4–22, March 1985. ISSN 01968858. doi: 10.1016/0196-8858(85)90002-8. URL https://linkinghub.elsevier. com/retrieve/pii/0196885885900028. M. A. Lawson, S. Anand, and H. Kakkar. Tribalism and tribulations: The social costs of not sharing fake news. Journal of Experimental Psychology: General, 152(3):611–631, March 2023. ISSN 1939-2222, 0096-3445. doi: 10.1037/xge0001374. URL http://doi.apa.org/getdoi.cfm?doi=10.1037/xge0001374. H. Lee, J. Kim, S. Ahn, R. Hussain, S. Cho, and J. Son. Digestive neural networks: A novel defense strategy against inference attacks in federated learning. *Computers & Security*, 109:102378, October 2021. ISSN 0167-4048. doi: 10.1016/j.cose.2021.102378. URL https://www.sciencedirect.com/science/article/pii/ S0167404821002029. K. Lin, D. Li, X. He, Z. Zhang, and M.-t. Sun. Adversarial Ranking for Language Generation. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/hash/ bf201d5407a6509fa536afc4b380577e-Abstract.html. Z. Liu, Y. Yang, T. Miller, and P. Masters. 
Deceptive Reinforcement Learning for Privacy-Preserving Planning, February 2021. URL http://arxiv.org/abs/2102.03022. arXiv:2102.03022 [cs]. A. Mallick, K. Hsieh, B. Arzani, and G. Joshi. Matchmaker: Data Drift Mitigation in Machine Learning for Large-Scale Systems. *Proceedings of Machine Learning and Systems*, 4:77–94, April 2022. URL https://proceedings. mlsys.org/paper/2022/hash/1c383cd30b7c298ab50293adfecb7b18-Abstract.html. R. Mattila, C. R. Rojas, V. Krishnamurthy, and B. Wahlberg. Computing monotone policies for Markov decision processes: a nearly-isotonic penalty approach, April 2017. URL http://arxiv.org/abs/1704.00621. arXiv:1704.00621 [cs]. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y. Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In *Proceedings of the 20th International Conference on Artificial* Intelligence and Statistics, pp. 1273–1282. PMLR, April 2017. URL https://proceedings.mlr.press/ v54/mcmahan17a.html. ISSN: 2640-3498. E. Meskys, J. Kalpokiene, P. Jurcys, and A. Liaudanskas. Regulating Deep Fakes: Legal and Ethical Considerations, December 2019. URL https://papers.ssrn.com/abstract=3497144. D. Mimno, H. Wallach, E. Talley, M. Leenders, and A. McCallum. Optimizing Semantic Coherence in Topic Models. EMNLP '11: Proceedings of the Conference on Empirical Methods in Natural Language Processing, July 2011. K. Mishchenko, G. Malinovsky, S. Stich, and P. Richtarik. ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! In *Proceedings of the 39th International Conference on Machine Learning*, pp. 15750–15769. PMLR, June 2022. URL https://proceedings.mlr.press/v162/mishchenko22b. html. ISSN: 2640-3498. K. Narayan, H. Agarwal, S. Mittal, K. Thakral, S. Kundu, M. Vatsa, and R. Singh. DeSI: Deepfake Source Identifier for Social Media. pp. 2858–2867, 2022. 
URL https://openaccess.thecvf.com/content/ CVPR2022W/FaDE-TCV/html/Narayan_DeSI_Deepfake_Source_Identifier_for_Social_ Media_CVPRW_2022_paper.html. M. H. Ngo and V. Krishnamurthy. Optimality of threshold policies for transmission scheduling in correlated fading channels. *IEEE Transactions on Communications*, 57(8):2474–2483, August 2009. ISSN 1558-0857. doi: 10.1109/ TCOMM.2009.08.070350. M. H. Ngo and V. Krishnamurthy. Monotonicity of Constrained Optimal Transmission Policies in Correlated Fading Channels With ARQ. *IEEE Transactions on Signal Processing*, 58(1):438–451, January 2010. ISSN 1941-0476. doi: 10.1109/TSP.2009.2027735. X. Pan, W. Wang, X. Zhang, B. Li, J. Yi, and D. Song. How You Act Tells a Lot: Privacy-Leaking Attack on Deep Reinforcement Learning. *Reinforcement Learning*, 2019. P. Probst, A.-L. Boulesteix, and B. Bischl. Tunability: Importance of Hyperparameters of Machine Learning Algorithms. Journal of Machine Learning Research, 20(53):1–32, 2019. ISSN 1533-7928. URL http://jmlr.org/ papers/v20/18-444.html. A. Rodio, F. Faticanti, O. Marfoq, G. Neglia, and E. Leonardi. Federated Learning under Heterogeneous and Correlated Client Availability, January 2023. URL http://arxiv.org/abs/2301.04632. arXiv:2301.04632 [cs]. S. M. Ross. *Introduction to Stochastic Dynamic Programming*. Academic Press, July 2014. ISBN 978-1-4832-6909-2. Google-Books-ID: bBLjBQAAQBAJ. A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh. Remember What You Want to Forget: Algorithms for Machine Unlearning. In *Advances in Neural Information Processing Systems*, volume 34, pp. 18075–18086. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/hash/ 9627c45df543c816a3ddf2d8ea686a99-Abstract.html. L. I. Sennott. Average Cost Optimal Stationary Policies in Infinite State Markov Decision Processes with Unbounded Costs. *Operations Research*, 37(4):626–633, 1989. ISSN 0030-364X. URL https://www.jstor.org/ stable/171262. Publisher: INFORMS. L. I. 
Sennott. Constrained Average Cost Markov Decision Chains. *Probability in the Engineering and Informational Sciences*, 7(1):69–83, January 1993. ISSN 1469-8951, 02699648. doi: 10.1017/S0269964800002795. URL https://www.cambridge.org/core/ journals/probability-in-the-engineering-and-informational-sciences/ article/abs/constrained-average-cost-markov-decision-chains/ 541480633345391975A7D320B59347E9. Publisher: Cambridge University Press. J. Sun, T. Chen, G. B. Giannakis, Q. Yang, and Z. Yang. Lazily Aggregated Quantized Gradient Innovation for Communication-Efficient Federated Learning. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 44 (4):2031–2044, April 2022. ISSN 1939-3539. doi: 10.1109/TPAMI.2020.3033286. T. Sun, Y. Sun, and W. Yin. On Markov Chain Gradient Descent, September 2018. URL http://arxiv.org/ abs/1809.04216. arXiv:1809.04216 [math, stat]. A. Tripp, E. Daxberger, and J. M. Hernández-Lobato. Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining. In *Advances in Neural Information Processing Systems*, volume 33, pp. 11259–11272. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/ hash/81e3225c6ad49623167a4309eb4b2e75-Abstract.html. J. N. Tsitsiklis, K. Xu, and Z. Xu. Private Sequential Learning. *Operations Research*, 69(5):1575–1590, September 2021. ISSN 0030-364X, 1526-5463. doi: 10.1287/opre.2020.2021. URL http://pubsonline.informs. org/doi/10.1287/opre.2020.2021. I. Turc, M.-W. Chang, K. Lee, and K. Toutanova. Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, September 2019. URL http://arxiv.org/abs/1908.08962. arXiv:1908.08962 [cs]. H. Viswanath, A. Shor, and Y. Kitaguchi. Quantifying Human Bias and Knowledge to guide ML models during Training, November 2022. URL http://arxiv.org/abs/2211.10796. arXiv:2211.10796 [cs]. S. Wang, H. Fang, M. Khabsa, H. Mao, and H. Ma. Entailment as Few-Shot Learner, April 2021. 
URL http: //arxiv.org/abs/2104.14690. arXiv:2104.14690 [cs]. T. Wang, J. Rausch, C. Zhang, R. Jia, and D. Song. A Principled Approach to Data Valuation for Federated Learning. In Q. Yang, L. Fan, and H. Yu (eds.), *Federated Learning: Privacy and Incentive*, Lecture Notes in Computer Science, pp. 153–167. Springer International Publishing, Cham, 2020. ISBN 978-3-030-63076-8. doi: 10.1007/ 978-3-030-63076-8_11. URL https://doi.org/10.1007/978-3-030-63076-8_11. Y. Wang, C. Wu, L. Herranz, J. Van De Weijer, A. Gonzalez-Garcia, and B. Raducanu. Transferring GANs: Generating Images from Limited Data. In V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss (eds.), Computer Vision – ECCV 2018, volume 11210, pp. 220–236. Springer International Publishing, Cham, 2018. ISBN 978-3-030-01230-4 978-3-030-01231-1. doi: 10.1007/978-3-030-01231-1_14. URL https://link.springer.com/10.1007/ 978-3-030-01231-1_14. Series Title: Lecture Notes in Computer Science. A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht. The Marginal Value of Adaptive Gradient Methods in Machine Learning. In *Advances in Neural Information Processing Systems*, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/hash/ 81b3833e2504647f9d794f7d7b9bf341-Abstract.html. Y. Wu, E. Dobriban, and S. Davidson. DeltaGrad: Rapid retraining of machine learning models. In Proceedings of the 37th International Conference on Machine Learning, pp. 10355–10366. PMLR, November 2020. URL https://proceedings.mlr.press/v119/wu20b.html. ISSN: 2640-3498. J. Xu, K. Xu, and D. Yang. Learner-Private Convex Optimization, October 2021a. URL http://arxiv.org/ abs/2102.11976. arXiv:2102.11976 [cs, math, stat]. J. Xu, K. Xu, and D. Yang. Optimal query complexity for private sequential learning against eavesdropping. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, pp. 2296–2304. PMLR, March 2021b. 
URL https://proceedings.mlr.press/v130/xu21f.html. ISSN: 2640-3498. J. Xu, K. Xu, and D. Yang. Learner-Private Convex Optimization. *IEEE Transactions on Information Theory*, 69(1): 528–547, January 2023. ISSN 1557-9654. doi: 10.1109/TIT.2022.3203989. Conference Name: IEEE Transactions on Information Theory. K. Xu. Query Complexity of Bayesian Private Learning. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/ hash/7bccfde7714a1ebadf06c5f4cea752c1-Abstract.html. P. Zhou, J. Feng, C. Ma, C. Xiong, S. C. H. Hoi, and W. E. Towards Theoretically Understanding Why Sgd Generalizes Better Than Adam in Deep Learning. In *Advances in Neural Information Processing Systems*, volume 33, pp. 21285–21296. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/ hash/f3f27a324736617f20abbf2ffd806f6d-Abstract.html. ## A Appendix A: Experimental Parameters And Methodology A.1 Dataset And Preprocessing We use Jigsaw's *Unintended Bias in Toxicity Classification* dataset for our experimental results. The dataset has ∼ 1.8 million public comments from the Civil Comments platform. The dataset was annotated by human raters for toxic conversational attributes, mainly rating the toxicity of each text on a scale of 0 to 1 and sub-categorizing for severe toxicity, obscene, threat, insult, identity attacks, and sexually explicit content. More information about the annotation process can be found on the Kaggle website for this dataset here. We consider the much simpler task of classifying the text as toxic or not. The toxicity is scored by human volunteers on a scale of 0 to 1, and we consider a comment toxic if the toxicity score is greater than 0.5. The original dataset is imbalanced with 1660540 non-toxic samples and 144334 toxic samples, and for each experimental run, we take a random balanced subset with 144334 toxic and non-toxic samples. 
The reason we do this is twofold: a) to reduce the time it takes to train our model and make it feasible to run the clients concurrently on our machine, and b) to study the accuracy that the eavesdropper achieves on the positive samples more closely. To achieve b), we could also have used a weighted accuracy function. We preprocess the data by removing special characters and contracting the word space (for example, replacing "can't" and "cant" with the same word).

## A.2 Architecture, Training Hyperparameters, And Loss Functions

Our training architecture involves the following layer sequence: a pre-trained BERT layer which outputs a 128-length embedding, a fully connected 128-neuron linear layer with ReLU activation, a dropout layer with a rate of $10^{-1}$, and finally a linear layer classifying the text as hate speech or not. The motivation for choosing this architecture is that it was the standard template in many of the submissions received in the competition. However, as highlighted before, our approach is agnostic to both the architecture and the convergence algorithm. We consider the logit loss function. We use the following hyperparameters for training: a learning rate of $10^{-3}$, a training batch size of 40, and a validation batch size of 20. To demonstrate the versatility of our methods, we optimize our neural network using Adam (Kingma & Ba, 2017) and run FedAvg (McMahan et al., 2017) instead of FedSGD. Using the preprocessed training data, we fine-tune our model to minimize the binary cross entropy (BCE) loss.

## A.3 Markov Decision Process Parameters

We consider 20 clients, and client participation is simulated using a Markov chain with the states being 5 clients, 10 clients, and 20 clients. In each training round, each participating client chooses between 0 and $N_{\text{batches}}$ batches. We perform N = 45 training rounds with M = 20 successful updates. Reported accuracies are evaluated on the validation dataset and averaged across clients.
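The client-participation process described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transition matrix `P` is an assumed placeholder (the actual matrix ships with the paper's code release), and `simulate_participation` is a hypothetical helper name.

```python
import numpy as np

# Sketch of the client-participation model in A.3 (illustrative only): the
# number of participating clients follows a Markov chain over the states
# {5, 10, 20}. The transition matrix P is an assumed placeholder, and
# `simulate_participation` is a hypothetical helper name.
STATES = [5, 10, 20]
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

def simulate_participation(n_rounds, rng, s0=0):
    """Return the number of participating clients in each training round."""
    state, counts = s0, []
    for _ in range(n_rounds):
        counts.append(STATES[state])
        state = rng.choice(len(STATES), p=P[state])
    return counts

rng = np.random.default_rng(0)
rounds = simulate_participation(45, rng)   # N = 45 training rounds, as in A.3
```

Any other row-stochastic matrix over the three states would serve equally well for the sketch; only the state set {5, 10, 20} is taken from the text.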
At any given communication round, we assume that the eavesdropper adopts whichever weight trajectory has occupied the majority of communication rounds up to that round.

## A.4 Additional Benchmark Experiments On The MNIST Dataset

We further conduct experiments on an image recognition task on the MNIST data in a federated setting to a) benchmark our methods on a standard dataset and b) study the effect of varying eavesdropper and Markov chain parameters on the effectiveness of our approach. The image recognition task is also far cheaper computationally than the hate speech classification task: 20 runs of the hate speech classification task took around 23 hours, whereas within the same time frame we could complete 1040 runs of the image classification task.

## A.4.1 Varying Eavesdropper Parameters

We use a Markov chain identical to that of Experiment 1. We set M = 30 successful gradient updates out of 120 communication rounds for this task. We report our results for 20 and 50 clients. Our experimental results are reported in Table 4 and Table 5, and we summarize our key findings below:

- We vary the size of the eavesdropper's training data relative to that of a participating client. We consider three cases: the eavesdropper's training set is 10%, 40%, or 100% of the size of a participating client's. We observe that as the eavesdropper's training set grows, its accuracy improves even with the obfuscated weights, since the eavesdropper can learn well enough on its own.
- We also vary the number of classes for which the eavesdropper has more samples and the proportion of data in these classes relative to the others. We consider three cases: 2, 5, and 8 "good" classes that compose 99% or 90% of the eavesdropper's data. We also benchmark against the case in which all classes are evenly distributed.
- We observe that both the number of good classes and a more balanced dataset improve the eavesdropper's accuracy.
- We conclude that for a limited-information eavesdropper, the optimal policy performs much better than a greedy policy, with a difference in eavesdropper accuracy of as much as 51% (for the case with 2 good classes forming 99% of the training data at 40% of the size).

## A.4.2 Results On FedSGD

For completeness, we also demonstrate our results on FedSGD, with 75 successful updates in 240 communication queries. We summarize our results in Table 3, averaged over 20 experiment runs. The eavesdropper is assumed to have 4 good classes composing 90% of the data and 10% of the training data size. We see that the pattern is similar to the FedAvg case but with slower convergence: the average learner accuracy goes up to 88%, while the eavesdropper is stalled at 56% when the learner uses the optimal policy.

## A.4.3 Varying Markov Chain Parameters

|                | FedSGD                |                  | 7 oracle states       |                  |
|----------------|-----------------------|------------------|-----------------------|------------------|
| Type of Policy | Eavesdropper accuracy | Learner accuracy | Eavesdropper accuracy | Learner accuracy |
| Greedy         | 0.89                  | 0.89             | 0.93                  | 0.88             |
| Optimal        | 0.56                  | 0.88             | 0.34                  | 0.88             |

Table 3: Additional results on the MNIST dataset using a) FedSGD and b) 7 oracle states demonstrate the versatility of our framework and its robustness to the choice of system parameters, respectively.

We consider a setup similar to Experiment 1 with 100 clients and fix the eavesdropper parameters to 2 good classes composing 99% of the data and 100% of the training data size. The number of oracle states is increased to 7 with device counts [36, 41, 45, 50, 55, 58, 100], and the threshold is set to 1/8th of the dataset. The empirical success probabilities are found to be [0.01, 0.12, 0.41, 0.75, 0.94, 0.96, 1].
The probability transition matrix has a structure similar to the previous one and can be found in the code. The results are summarized in Table 3, averaged over 30 runs, and we see trends similar to the case with 3 oracle states (the eavesdropper accuracy is much lower since the training data is now distributed over 100 clients, so the eavesdropper's data is not large enough). A possible reason for the eavesdropper achieving better accuracy under the greedy scheme is that the weights the eavesdropper estimates are optimal and the eavesdropper has more time slots to train.

![20_image_0.png](20_image_0.png)

Figure 5: Empirical results on the effect of the cost of learning on the average proportion of learning queries.

We also investigate the effect of the learning cost, l, on the average number of times action u = 1 (learn) is taken when using the stationary policy in Fig. 5. This helps illustrate how our learning cost can be chosen to achieve a desired level of learner-privacy (the percentage of queries that should be obfuscated), which can be set based on theoretical results (Xu et al., 2021b). The average cost is calculated over 1000 different simulation runs. The graph appears piecewise linear with jumps, which can be explained by the nature of the occupation-measure solution of the MDP: it is a vertex of the polytope defined by the corresponding linear program and jumps to the next vertex with an increase in the instantaneous cost.

## A.5 Discussion On A Policy Without Any Stochastic Considerations And Possible Limitations Of The Optimal Policy

In the controlled SGD setting presented in this paper, the learner discards certain queries which are evaluated as unsuccessful according to a chosen noise constant. Controlling for stochasticity helps filter out communication rounds that will perform poorly due to insufficient clients and training data.
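The thresholded-update idea can be illustrated with a toy sketch. This is our own minimal example on a quadratic objective, not the paper's implementation; `thresholded_sgd` and the per-round noise model are hypothetical.

```python
import numpy as np

# Toy sketch of the thresholding discussed in A.5 (assumed minimal example
# on f(x) = 0.5 * x^2, not the paper's implementation): a noisy gradient
# response r_k = grad_f(x_k) + eta_k is applied only when the per-round
# noise scale sigma_k falls below the chosen threshold sigma; otherwise the
# query is discarded and the iterate is left unchanged.
def grad_f(x):
    return x                                  # gradient of f(x) = 0.5 * x^2

def thresholded_sgd(x0, n_queries, sigma_thresh, mu, rng):
    x, successes = x0, 0
    for _ in range(n_queries):
        sigma_k = rng.uniform(0.0, 2.0)       # assumed per-round noise level
        if sigma_k < sigma_thresh:            # "successful update" filter
            x -= mu * (grad_f(x) + rng.normal(0.0, sigma_k))
            successes += 1
    return x, successes

rng = np.random.default_rng(1)
x_final, m = thresholded_sgd(x0=5.0, n_queries=500, sigma_thresh=0.5, mu=0.1, rng=rng)
```

Only the low-noise rounds contribute updates, mirroring how the learner poses learning queries in rounds with enough clients and data.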
When there is an appreciable disparity between the good and bad states, such filtering leads to a significant performance difference. Additionally, our assumption is more general than a single noise constant: the learner can control the noise threshold constant, σ, which enables the characterization of the finite number of updates needed to optimize f. An approach without such considerations is equivalent to using a random policy that places M learning queries among N total queries. Our approach can solve this task by modifying the SGD update step and updating the learner state regardless of σk. In a practical setting, such a scheme would still perform better than the random policy, since the obfuscation would be the same but the learning would be done when the practical parameters, like the number of clients and data points, are better. In our numerical experiments with the MNIST data, using a random policy that places 30 learning queries in 120 communication rounds, the learner obtained an average accuracy of 80.5%, against an accuracy of 92.5% obtained using the optimal policy. The learning performance of the proposed optimal policy is the same as that of the greedy policy, since the greedy policy uses the same thresholding (σk < σ). However, since the greedy method does not schedule the queries optimally, it poses far more learning queries and cannot achieve obfuscation, as illustrated in the numerical experiments. On the other hand, a random policy without thresholding, which poses M learning queries in N communication slots, achieves obfuscation but suffers in learning performance. Hence, the optimal policy can be seen as achieving a tradeoff between the two. The significant limitations of the proposed method are: 1. The gain from the optimal policy will not be significant if the ratio M/N is much higher than the average occupancy of the oracle in the good oracle states. 2.
The stochastic nature of the optimal policy implies that there is a probability that not enough queries are obfuscated.

## A.6 SPGA Algorithm Parameters And Additional Experiment

For analyzing the convergence of the SPGA algorithm, we choose the step size $\kappa_n = \frac{0.5}{n}$, the scale parameter $\rho = 20$, and the initial constraint parameter $\xi = 10$. The initial condition for the learner state is set to $y^L = 40$, and the oracle state is $y^O = 3$ when interacting with the system. In Fig. 7, we also demonstrate how, with a constant step size, the SPGA algorithm is able to track changes in the underlying policy parameters. Before iteration 2000, the underlying Markov chain has the same parameters as the previous SPGA experiment except for the arrival rate, which is 3% for M = 10 updates. After iteration 2000, this changes to M = 4 updates with an arrival rate of 10%. The success probabilities and the oracle state transitions are taken to be $\Upsilon = [0.1, 0.6, 0.9]$ and $P^O = [0.7\; 0.2\; 0.1;\; 0.3\; 0.1\; 0.7;\; 0.2\; 0.2\; 0.7]$. The results are averaged over 100 runs. It can be seen from the figure that even though the convergence is not as close as with the decreasing step size, the SPGA algorithm tracks changes in the system. This effect is more prominently visible for oracle state $y^O = 1$, since the change in the true parameters is significant.

Note on convergence: Proving convergence for the SPGA algorithm is difficult since policy search is not a convex problem, and it is difficult to make regularity assumptions for the average cost function of the CMDP with respect to the policy (which lives on a discrete space). Efficient linear programs with finite-sample results can be derived for solving the CMDP, but these methods require knowledge of the transition probabilities (Mattila et al., 2017). Hence, we use the structural results on the optimal policy to reduce the search space from $|S^O||S^L||S^E|$ to $3|S^O||S^E|$, where $|S^L|$ is the finite approximation of the countably infinite queue length.
This reduction at least makes it computationally feasible to minimize the cost function.

## A.7 Tradeoff Between Convergence And Obfuscation

For a finite-horizon case, the learner's convergence varies as $O(\gamma\sigma^2/\log m)$, where m is the number of successful updates already made. The obfuscation with respect to an eavesdropper using a proportional sampling and MAP estimator is a step function, as shown in Figure 6. As a reminder, this simple eavesdropper model works in our setting because the eavesdropper is assumed to have equal priors on both trajectories in the absence of any additional information. Existing theoretical results in learner-private optimization give more general probability bounds but have so far been extended only to the convex optimization case (using a Dirichlet process prior) (Xu et al., 2023). Without going into the technical details, the key idea of the bound is that, in order to ensure that the eavesdropper has a less than 1/ϱ probability of estimating the arg min, the learner must pose at least ϱ times the standard number of learning queries, where ϱ ∈ ℕ. Although tight upper and lower bounds are difficult to obtain for nonconvex functions even with regularity assumptions, the lower bounds for the convex case serve as a loose lower bound for the non-convex case too, making the obfuscation probability a staircase function.

![22_image_0.png](22_image_0.png)

![22_image_1.png](22_image_1.png)

Figure 6: Pictorial representation of the tradeoff between the learning objective and the obfuscation objective. (a) shows that obfuscation is a step function for the proportional-sampling-estimator-based eavesdropper. (b) shows the change in closeness to a critical point of f. For the infinite horizon case, the obfuscation has a similar structure, but we look at it with respect to the average number of learning queries and obfuscation on average.
If the conditions of Lemma 1 are satisfied, the optimization remains stable, the sequence of optimization tasks converges, and the learning objective is satisfied. Note that for the infinite horizon case, the underlying true empirical function f changes with a change in context or the removal of a data point. Our methods could also be extended to an SGD with a constant step size, in which the asymptotic convergence rate becomes slower as the average number of learning queries decreases, although an exact characterization is difficult.

Figure 7: Convergence of threshold parameters in Alg. 1 for different oracle states with constant step sizes. The trajectory of the parameter estimates is noisier and has only weak convergence guarantees, but it can track changes in the underlying true parameters.

| E Dataset Parameters |                        |                       | E Accuracy    |                | L Accuracy    |                |
|----------------------|------------------------|-----------------------|---------------|----------------|---------------|----------------|
| No. of Good Classes  | Prop. of Training Data | Prop. of Good Classes | Greedy Policy | Optimal Policy | Greedy Policy | Optimal Policy |
| 2 | 0.1 | 0.99 | 0.857 | 0.336 | 0.925 | 0.832 |
| 2 | 0.1 | 0.9  | 0.857 | 0.725 | 0.925 | 0.832 |
| 2 | 0.1 | 0.2  | 0.857 | 0.927 | 0.925 | 0.832 |
| 2 | 0.4 | 0.99 | 0.859 | 0.445 | 0.924 | 0.815 |
| 2 | 0.4 | 0.9  | 0.859 | 0.907 | 0.924 | 0.815 |
| 2 | 0.4 | 0.2  | 0.859 | 0.973 | 0.924 | 0.815 |
| 2 | 1   | 0.99 | 0.850 | 0.632 | 0.930 | 0.822 |
| 2 | 1   | 0.9  | 0.850 | 0.916 | 0.930 | 0.822 |
| 2 | 1   | 0.2  | 0.850 | 0.968 | 0.930 | 0.822 |
| 5 | 0.1 | 0.99 | 0.843 | 0.784 | 0.924 | 0.817 |
| 5 | 0.1 | 0.9  | 0.843 | 0.900 | 0.924 | 0.817 |
| 5 | 0.1 | 0.5  | 0.843 | 0.932 | 0.924 | 0.817 |
| 5 | 0.4 | 0.99 | 0.834 | 0.707 | 0.922 | 0.819 |
| 5 | 0.4 | 0.9  | 0.834 | 0.920 | 0.922 | 0.819 |
| 5 | 0.4 | 0.5  | 0.834 | 0.970 | 0.922 | 0.819 |
| 5 | 1   | 0.99 | 0.848 | 0.820 | 0.926 | 0.815 |
| 5 | 1   | 0.9  | 0.848 | 0.961 | 0.926 | 0.815 |
| 5 | 1   | 0.5  | 0.848 | 0.977 | 0.926 | 0.815 |
| 8 | 0.1 | 0.99 | 0.830 | 0.898 | 0.926 | 0.819 |
| 8 | 0.1 | 0.9  | 0.830 | 0.957 | 0.926 | 0.819 |
| 8 | 0.1 | 0.8  | 0.830 | 0.943 | 0.926 | 0.819 |
| 8 | 0.4 | 0.99 | 0.868 | 0.898 | 0.926 | 0.809 |
| 8 | 0.4 | 0.9  | 0.868 | 0.936 | 0.926 | 0.809 |
| 8 | 0.4 | 0.8  | 0.868 | 0.955 | 0.926 | 0.809 |
| 8 | 1   | 0.99 | 0.864 | 0.943 | 0.931 | 0.831 |
| 8 | 1   | 0.9  | 0.864 | 0.975 | 0.931 | 0.831 |
| 8 | 1   | 0.8  | 0.864 | 0.975 | 0.931 | 0.831 |

Table 4: Additional experiments on the MNIST data with 20 clients showcase how varying different eavesdropper parameters changes the accuracy the eavesdropper is able to achieve.

## A.8 Note On Arrival State Space

The arrival state space taken in the paper is a specific case for illustrative purposes. It could be a more general set, e.g., $S^E = \{M, 2M, 3M, 4M\}$, with a suitable probability distribution. Lemma 1 can be suitably adjusted to ensure a stable queue as long as the expected arrival rate is finite. We can then quantize the actual arrivals to the nearest arrival state. In most practical implementations of a stochastic gradient algorithm, apart from an epsilon-based convergence criterion, there is also a parameter for the maximum number of iterations, which can be thought of as . Additionally,
the implementation for our particular arrival state space {0, M} can be done by considering two states (an arrival or no arrival), and the structured policy gradient algorithm does not need the exact value of M to update the state parameters. The SPGA also adapts to changes in the underlying arrival probability.

## B Appendix B: Proofs

## B.1 Proof Of Theorem 1

Proof. Let the learner successfully update the minimum estimate at the time indices $k_1, \ldots, k_M$. Using the quadratic bound associated with the Lipschitz condition of A1 (the descent lemma), the following can be derived. We use a constant step size µ, but similar results can be derived for a decreasing step size.

Lemma 2. (Modified Descent Lemma) *For an oracle function f which satisfies assumption A1, the following holds for the successful updates (Def. 1) performed at times $k_1, \ldots, k_M$:*

$$\left(\mu-\frac{\mu^{2}\gamma}{2}\right)\sum_{m=1}^{M}\|\nabla f(x_{k_{m}})\|^{2}\leq f(x_{k_{0}})-f^{*}-\left(\mu-\mu^{2}\gamma\right)\sum_{m=1}^{M}\left\langle\nabla f(x_{k_{m}}),\eta_{k_{m}}\right\rangle+\frac{\mu^{2}\gamma}{2}\sum_{m=1}^{M}\|\eta_{k_{m}}\|^{2}.\tag{16}$$

We introduce the following expression to bound the expected gradient norm and obtain the desired result,

$$\Psi_{M}=\frac{1}{M}\sum_{m=1}^{M}\mathbb{E}\|\nabla f(x_{k_{m}})\|^{2}.$$

The expectation is with respect to the noise terms $\eta_{k_0}, \eta_{k_1}, \ldots, \eta_{k_M}$. $\Psi_M$ is equal to the expected gradient norm of a uniformly at random selected iterate, $\Psi_M = \mathbb{E}\|\nabla f(\hat{x})\|^2$ for $\hat{x}$ drawn uniformly at random from $\{x_{k_1}, \ldots, x_{k_M}\}$; the expectation is then with respect to both the random iterate and the noise terms. The bound on $\Psi_M$ is presented in the following corollary and is obtained by algebraic manipulation of Lemma 2, taking expectations on both sides and using assumptions (A1, A2).

| E Dataset Parameters |                        |                       | E Accuracy    |                | L Accuracy    |                |
|----------------------|------------------------|-----------------------|---------------|----------------|---------------|----------------|
| No. of Good Classes  | Prop. of Training Data | Prop. of Good Classes | Greedy Policy | Optimal Policy | Greedy Policy | Optimal Policy |
| 2 | 0.1 | 0.99 | 0.861 | 0.320 | 0.860 | 0.840 |
| 2 | 0.1 | 0.9  | 0.861 | 0.611 | 0.860 | 0.840 |
| 2 | 0.1 | 0.2  | 0.861 | 0.852 | 0.860 | 0.840 |
| 2 | 0.4 | 0.99 | 0.884 | 0.318 | 0.862 | 0.830 |
| 2 | 0.4 | 0.9  | 0.884 | 0.734 | 0.862 | 0.830 |
| 2 | 0.4 | 0.2  | 0.884 | 0.895 | 0.862 | 0.830 |
| 2 | 1   | 0.99 | 0.852 | 0.393 | 0.859 | 0.843 |
| 2 | 1   | 0.9  | 0.852 | 0.864 | 0.859 | 0.843 |
| 2 | 1   | 0.2  | 0.852 | 0.955 | 0.859 | 0.843 |
| 5 | 0.1 | 0.99 | 0.875 | 0.461 | 0.638 | 0.838 |
| 5 | 0.1 | 0.9  | 0.875 | 0.655 | 0.638 | 0.838 |
| 5 | 0.1 | 0.5  | 0.875 | 0.807 | 0.638 | 0.838 |
| 5 | 0.4 | 0.99 | 0.843 | 0.595 | 0.854 | 0.833 |
| 5 | 0.4 | 0.9  | 0.843 | 0.795 | 0.854 | 0.833 |
| 5 | 0.4 | 0.5  | 0.843 | 0.916 | 0.854 | 0.833 |
| 5 | 1   | 0.99 | 0.864 | 0.710 | 0.857 | 0.840 |
| 5 | 1   | 0.9  | 0.864 | 0.810 | 0.857 | 0.840 |
| 5 | 1   | 0.5  | 0.864 | 0.892 | 0.857 | 0.840 |
| 8 | 0.1 | 0.99 | 0.868 | 0.408 | 0.856 | 0.839 |
| 8 | 0.1 | 0.9  | 0.868 | 0.714 | 0.856 | 0.839 |
| 8 | 0.1 | 0.8  | 0.868 | 0.870 | 0.856 | 0.839 |
| 8 | 0.4 | 0.99 | 0.841 | 0.805 | 0.862 | 0.832 |
| 8 | 0.4 | 0.9  | 0.841 | 0.916 | 0.862 | 0.832 |
| 8 | 0.4 | 0.8  | 0.841 | 0.914 | 0.862 | 0.832 |
| 8 | 1   | 0.99 | 0.861 | 0.820 | 0.853 | 0.840 |
| 8 | 1   | 0.9  | 0.861 | 0.860 | 0.853 | 0.840 |
| 8 | 1   | 0.8  | 0.861 | 0.881 | 0.853 | 0.840 |

Table 5: Additional experiments on the MNIST data with 50 clients showcase how varying different eavesdropper parameters changes the accuracy the eavesdropper is able to achieve, and how our framework extends to a larger number of clients.

Corollary 1. After M successful updates of the SGD (Def.
1), for any step size $\mu \leq \frac{1}{\gamma}$, under assumptions (A1, A2) the following holds,

$$\Psi_{M}\leq\frac{2\left(\mathbb{E}f(x_{k_{0}})-f^{*}\right)}{M\mu}+\mu\gamma\sigma^{2}.\tag{17}$$

What remains to be shown is that, for an appropriate step size µ and number of successful updates M, $\Psi_M \leq O(\epsilon)$, so that the learning objective is achieved. To this end, we obtain the following conditions on the two summands of (17) by bounding each of them by $\epsilon/2$ and setting $F = \mathbb{E}f(x_{k_0}) - f^*$,

$$\frac{2F}{M\mu}\leq\frac{\epsilon}{2},\qquad\mu\gamma\sigma^{2}\leq\frac{\epsilon}{2}.$$

The second inequality bounds the step size, $\mu \leq \frac{\epsilon}{2\gamma\sigma^2}$, which along with $\mu \leq \frac{1}{\gamma}$ gives $\mu = \min\left\{\frac{\epsilon}{2\gamma\sigma^2}, \frac{1}{\gamma}\right\}$. Substituting this step size into the first inequality yields $\Psi_M \leq \epsilon$ for any $M \geq \max\left\{\frac{4F\gamma}{\epsilon}, \frac{8F\gamma\sigma^2}{\epsilon^2}\right\}$. This implies $M = O\left(\frac{1}{\epsilon} + \frac{\sigma^2}{\epsilon^2}\right)$ and completes the proof.

## B.1.1 Proof Of Lemma 2

Proof. We use the quadratic upper bound from the Lipschitz condition, the response in (2), and the successful gradient update step of (4) to obtain the following bound,

$$\begin{aligned}
f(x_{k_{m+1}})&\leq f(x_{k_{m}})-\mu\langle\nabla f(x_{k_{m}}),r_{k_{m}}\rangle+\frac{\mu^{2}\gamma}{2}\|r_{k_{m}}\|^{2}\\
&=f(x_{k_{m}})-\mu\langle\nabla f(x_{k_{m}}),\nabla f(x_{k_{m}})+\eta_{k_{m}}\rangle+\frac{\mu^{2}\gamma}{2}\|\nabla f(x_{k_{m}})+\eta_{k_{m}}\|^{2}\\
&=f(x_{k_{m}})-\left(\mu-\frac{\mu^{2}\gamma}{2}\right)\|\nabla f(x_{k_{m}})\|^{2}-\left(\mu-\mu^{2}\gamma\right)\langle\nabla f(x_{k_{m}}),\eta_{k_{m}}\rangle+\frac{\mu^{2}\gamma}{2}\|\eta_{k_{m}}\|^{2}.
\end{aligned}$$

Summing these inequalities up to $m = M$,

$$\left(\mu-\frac{\mu^{2}\gamma}{2}\right)\sum_{m=1}^{M}\|\nabla f(x_{k_{m}})\|^{2}\leq f(x_{k_{0}})-f(x_{k_{M+1}})-\left(\mu-\mu^{2}\gamma\right)\sum_{m=1}^{M}\left\langle\nabla f(x_{k_{m}}),\eta_{k_{m}}\right\rangle+\frac{\mu^{2}\gamma}{2}\sum_{m=1}^{M}\|\eta_{k_{m}}\|^{2}$$
$$\leq f(x_{k_{0}})-f^{*}-\left(\mu-\mu^{2}\gamma\right)\sum_{m=1}^{M}\left\langle\nabla f(x_{k_{m}}),\eta_{k_{m}}\right\rangle+\frac{\mu^{2}\gamma}{2}\sum_{m=1}^{M}\|\eta_{k_{m}}\|^{2}.$$

The last step follows from the function being lower bounded by $f^{*}$. This concludes the proof.

## B.1.2 Proof Of Corollary 1

Proof. Taking expectations in (16) from Lemma 2 with respect to the noise terms $\eta_{k_0}, \eta_{k_1}, \ldots, \eta_{k_M}$,

$$\left(\mu-\frac{\mu^{2}\gamma}{2}\right)\sum_{m=1}^{M}\mathbb{E}\left[\|\nabla f(x_{k_{m}})\|^{2}\right]\leq\mathbb{E}\left[f(x_{k_{0}})\right]-f^{*}-\left(\mu-\mu^{2}\gamma\right)\sum_{m=1}^{M}\mathbb{E}\left[\left\langle\nabla f(x_{k_{m}}),\eta_{k_{m}}\right\rangle\right]+\frac{\mu^{2}\gamma}{2}\sum_{m=1}^{M}\mathbb{E}\left[\|\eta_{k_{m}}\|^{2}\right].$$

Now, by assumption A2, $\mathbb{E}\left[\langle\nabla f(x_{k_{m}}),\eta_{k_{m}}\rangle\mid\eta_{k_{0}},\ldots,\eta_{k_{m-1}}\right]=0$, and by (A2) and the definition of a successful gradient step, $\mathbb{E}\left[\|\eta_{k_{m}}\|^{2}\right]\leq\sigma_{k_{m}}^{2}\leq\sigma^{2}$. The above therefore reduces to,

$$\left(\mu-{\frac{\mu^{2}\gamma}{2}}\right)\sum_{m=1}^{M}\mathbb{E}\left[\|\nabla f(x_{k_{m}})\|^{2}\right]\leq\mathbb{E}\left[f(x_{k_{0}})\right]-f^{*}+{\frac{\mu^{2}\gamma M\sigma^{2}}{2}}.$$

Denoting $F_{0}=\mathbb{E}f(x_{k_{0}})-f^{*}$ and using $\mu\gamma\leq1$, we obtain,

$$\frac{\mu}{2}\left(2-\mu\gamma\right)\sum_{m=1}^{M}\mathbb{E}\left[\|\nabla f(x_{k_{m}})\|^{2}\right]\leq F_{0}+\frac{\mu^{2}\gamma M\sigma^{2}}{2}\implies\frac{\mu}{2}\sum_{m=1}^{M}\mathbb{E}\left[\|\nabla f(x_{k_{m}})\|^{2}\right]\leq F_{0}+\frac{\mu^{2}\gamma M\sigma^{2}}{2},$$

where the implication uses $2-\mu\gamma\geq1$. Multiplying both sides by $2/(\mu M)$, we obtain the desired result,

$$\Psi_{M}\leq\frac{2F_{0}}{M\mu}+\mu\gamma\sigma^{2}.$$

## B.2 Proof Of Lemma 1

Proof. Let ν be a querying policy satisfying the constraint of (9); then

$$\Lambda\geq\operatorname*{lim}_{N\to\infty}\frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}l(0,y_{n}^{O}=W_{O})\mathbb{I}\left(a_{n}=0,y_{n}^{L}>0\right)\mid y_{0}\right].$$

If $\limsup_{N\to\infty}\frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}\mathbb{1}(y_{n}^{L}=0)\mid y_{0}\right]>0$, the queue is stable, since the queue returns to the state $y_{n}^{L}=0$ infinitely often and hence the Markov chain is also recurrent.
Otherwise, if $\limsup_{N\to\infty}\frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}\mathbb{1}(y_{n}^{L}=0)\mid y_{0}\right]=0$, the stability of the queue can be shown by proving that the average successful transmission rate (denoted by $r_{y_0}(\nu)$) under the policy ν is greater than the average arrival rate $\delta M$,

$$r_{y_{0}}(\nu)=\liminf_{N\to\infty}\frac{1}{N}\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}\Upsilon(u_{n},y_{n}^{O})1(u_{n}\neq0)|y_{0}\right]\geq\liminf_{N\to\infty}\frac{1}{N}\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}\Upsilon_{\min}1(u_{n}\neq0)|y_{0}\right]$$
$$\geq\Upsilon_{\min}\left(1-\limsup_{N\to\infty}\frac{1}{N}\mathbb{E}_{\nu}\left[\sum_{n=1}^{N}1(u_{n}=0,y_{n}^{L}>0)|y_{0}\right]\right)\geq\Upsilon_{\min}\left(1-\frac{\Lambda}{l(0,y^{O}=W)}\right)\geq\delta M.$$

Since the average successful learning rate is greater than the average query arrival rate, this induces a stable buffer and, by Foster's Theorem, a recurrent Markov chain.

## B.3 Proof Of Theorem 2

We first state the following lemma, which is key to proving Theorem 2.

Lemma 3. **(Monotonicity of the value function $V$)** *The value function $V_n(y)$ is decreasing in the number of queries left, $n$, and the oracle state, $y^O$, and increasing in the learner state, $y^L$.*

Proof. The strategy for proving the monotonicity of the value function with respect to the state variables is to use induction together with the assumptions on the cost function and the probability transition matrix.
The recursion for $V_{n+1}\left([y^{O},y^{L}]\right)$ from (7) is,

$$V_{n+1}\left([y^{O},y^{L}]\right)=\min_{u\in\mathcal{U}}c(u,y^{O})+\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O})\left(\Upsilon(y^{O},u)V_{n}\left([y^{O'},y^{L}-u]\right)+(1-\Upsilon(y^{O},u))V_{n}\left([y^{O'},y^{L}]\right)\right),$$

where $V_{0}\left([y^{O'},y^{L}]\right)=l(y^{L})$.

Monotonicity in $n$: The first step of the induction, $V_{1}\left([y^{O},y^{L}]\right)\leq V_{0}\left([y^{O'},y^{L}]\right)$, can be shown as,

$$V_{1}\left([y^{O},y^{L}]\right)=\min_{u\in\mathcal{U}}c(u,y^{O})+\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O})\left(\Upsilon(y^{O},u)l(y^{L})+(1-\Upsilon(y^{O},u))l(y^{L})\right)\leq Q_{1}\left([y^{O},y^{L}],0\right)=l(y^{L})=V_{0}\left([y^{O'},y^{L}]\right).$$

Now let $V_{n}\left([y^{O'},y^{L}]\right)\leq V_{n-1}\left([y^{O'},y^{L}]\right)$; then, using the recursion and the monotonicity of the cost, it is straightforward to show that the claim holds for $n+1$ since,

$$V_{n+1}\left([y^{O},y^{L}]\right)\leq\min_{u\in\mathcal{U}}c(u,y^{O})+\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O})\left(\Upsilon(y^{O},u)V_{n-1}\left([y^{O'},y^{L}-u]\right)+(1-\Upsilon(y^{O},u))V_{n-1}\left([y^{O'},y^{L}]\right)\right)=V_{n}\left([y^{O},y^{L}]\right).$$

Monotonicity in $y^L$: We use inductive reasoning again to prove that the value function is increasing in $y^L$. Note that $V_{0}\left([y^{O},y^{L}]\right)=l(y^{L})$ is increasing in $y^L$. Let $V_{n}\left([y^{O},y^{L}]\right)$ be increasing in $y^L$. From the definition of $V_{n+1}\left([y^{O},y^{L}]\right)$ it follows that $V_{n+1}$ is a sum of increasing functions of $y^L$, hence it is also increasing in $y^L$.

Monotonicity in $y^O$: Using the assumption of first-order stochastic dominance on $\mathbb{P}(y^{O'}|y^{O})$, we prove that $V_{n}\left([y^{O},y^{L}]\right)$ is non-increasing in $y^O$. $V_{0}\left([y^{O},y^{L}]\right)=l(y^{L})$ is non-increasing in $y^O$. Assume $V_{n}\left([y^{O},y^{L}]\right)$ is non-increasing in $y^O$. Then, for $y^{O}>1$, by the monotonicity of $c$, $c(u,y^{O})\leq c(u,y^{O}-1)$, and by the first-order stochastic dominance assumption and the induction assumption, $\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O})V_{n}\left([y^{O'},y^{L}]\right)\leq\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O}-1)V_{n}\left([y^{O'},y^{L}]\right)$. Then, for $y^{O}>1$,

$$V_{n+1}\left([y^{O},y^{L}]\right)=\min_{u\in\mathcal{U}}c(u,y^{O})+\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O})\left(\Upsilon(y^{O},u)V_{n}\left([y^{O'},y^{L}-u]\right)+(1-\Upsilon(y^{O},u))V_{n}\left([y^{O'},y^{L}]\right)\right)$$
$$\leq\min_{u\in\mathcal{U}}c(u,y^{O}-1)+\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O}-1)\left(\Upsilon(y^{O},u)V_{n}\left([y^{O'},y^{L}-u]\right)+(1-\Upsilon(y^{O},u))V_{n}\left([y^{O'},y^{L}]\right)\right)=V_{n+1}\left([y^{O}-1,y^{L}]\right).$$

Proof for the infinite horizon discounted MDP: Since the instantaneous cost $w$ is bounded, the value function sequence,

$$V_{n+1}^{\beta}\left([y^{O},y^{L},y^{E}]\right)=\min_{u\in\mathcal{U}}\left[w(u,y;\lambda)+\beta\sum_{\begin{subarray}{c}y^{O'}\in\mathcal{S}^{O}\\ y^{E'}\in\mathcal{S}^{E}\end{subarray}}\mathbb{P}(y^{O'}|y^{O})\mathbb{P}(y^{E'})\left(\Upsilon(y^{O'},u)V_{n}^{\beta}\left([y^{O'},y^{L}+y^{E}-u,y^{E'}]\right)+(1-\Upsilon(y^{O},u))V_{n}^{\beta}\left([y^{O'},y^{L}+y^{E},y^{E'}]\right)\right)\right],$$

converges for any initial $V_{0}^{\beta}\left([y^{O},y^{L},y^{E}]\right)$. Hence, we choose a $V_{0}^{\beta}\left([y^{O},y^{L},y^{E}]\right)$ which is increasing in $y^L, y^E$ and decreasing in $y^O$. Note that, by the assumptions on $c$ and $l$, $w(u,y;\lambda)$ is decreasing in $y^O$ and nondecreasing in $y^L$. Therefore, by induction, $V_{n}^{\beta}\left([y^{O},y^{L},y^{E}]\right)$ is increasing in $y^L$ and $y^E$, and by the assumption of first-order stochastic dominance on $\mathbb{P}(y^{O'}|y^{O})$ it is easy to see that $V_{n}^{\beta}\left([y^{O},y^{L},y^{E}]\right)$ is decreasing in $y^O$. Therefore, $V^{\beta}\left([y^{O},y^{L},y^{E}]\right)=V_{\infty}^{\beta}\left([y^{O},y^{L},y^{E}]\right)$ is increasing in $y^L$ and $y^E$ and decreasing in $y^O$. We use this result on $V^{\beta}\left([y^{O},y^{L},y^{E}]\right)$ in the discussion of the structural results on the infinite horizon average cost MDP. We now prove Theorem 2.

Proof.
To show that the optimal discounted cost policy is monotonically increasing in the learner state $y^L$, we will prove inductively that $Q_{n}\left([y^{O},y^{L}],u\right)$ is submodular in $(y^L,u)$ for all $y^L\geq1$. In other words, we prove that,

$$Q_{n+1}\left([y^{O},y^{L}],1\right)-Q_{n+1}\left([y^{O},y^{L}],0\right)=c(1,y^{O})-c(0,y^{O})+\sum_{y^{O'}\in\mathcal{S}^{O}}\mathbb{P}(y^{O'}|y^{O})\Upsilon(y^{O},1)\left[V_{n}\left([y^{O'},y^{L}-1]\right)-V_{n}\left([y^{O'},y^{L}]\right)\right]$$

is monotonically decreasing in the learner state $y^L$ for $y^L\geq1$ and all $n\geq0$, for a suitable initialization. This is a sufficient condition for a monotone threshold policy: if the state-action value difference decreases monotonically, it changes its sign over the learner state space $\mathcal{S}^L$ at most once, so action 0 is optimal up to a certain value of $y^L$ and action 1 is optimal beyond it.

For illustration, consider a finite horizon case in which the oracle state is fixed and, with abuse of notation, write the Q function only in terms of the learner state. Submodularity of the Q function in $(y^L,u)$ means that the difference $Q_{n}\left(y^{L},1\right)-Q_{n}\left(y^{L},0\right)$ is decreasing in $y^L$. The optimal action with $n$ queries remaining can be written as $u_{n}^{*}(y^{L})=\arg\min_{u}Q_{n}\left(y^{L},u\right)$. Therefore, if at some $\bar{y}^{L}$ the optimal action is 0, then for every $y^{L}\leq\bar{y}^{L}$ the optimal action has to be 0, since the difference between the Q functions only increases as $y^L$ decreases. Similarly, if at some $\bar{y}^{L}$ the optimal action is 1, then for all $y^{L}\geq\bar{y}^{L}$ the optimal action has to be 1. This gives rise to the threshold structure of the policy. This illustrative argument has been used to show threshold-policy results in previous work (Ngo & Krishnamurthy, 2010) and can be extended to the discounted Lagrangian cost MDP case.
$Q_{n+1}\left([y^O, y^L], 1\right) - Q_{n+1}\left([y^O, y^L], 0\right)$ has increasing differences in $y^L$ if $V_n\left([y^O, y^L]\right)$ has increasing differences in $y^L$. We prove this inductively. By the assumption of integer convexity, $V_0\left([y^O, y^L]\right) = l(y^L)$ has increasing differences in $y^L$. Assume $V_n\left([y^O, y^L]\right)$ has increasing differences in $y^L$; then $Q_{n+1}\left([y^O, y^L], u\right)$ is submodular in $(y^L, u)$. We will now show that $V_{n+1}\left([y^O, y^L]\right)$ has increasing differences in $y^L$, i.e.,

$$V_{n+1}\left([y^O, y^L + 1]\right) - V_{n+1}\left([y^O, y^L]\right) - \left( V_{n+1}\left([y^O, y^L]\right) - V_{n+1}\left([y^O, y^L - 1]\right) \right) \geq 0. \tag{18}$$

We let $Q_{n+1}\left([y^O, y^L + 1], u_1\right) = V_{n+1}\left([y^O, y^L + 1]\right)$, $Q_{n+1}\left([y^O, y^L], u_2\right) = V_{n+1}\left([y^O, y^L]\right)$ and $Q_{n+1}\left([y^O, y^L - 1], u_3\right) = V_{n+1}\left([y^O, y^L - 1]\right)$ for the minimizing actions $u_1, u_2$ and $u_3$. Now (18) can be written as

$$Q_{n+1}\left([y^O, y^L + 1], u_1\right) - Q_{n+1}\left([y^O, y^L], u_2\right) - \left( Q_{n+1}\left([y^O, y^L], u_2\right) - Q_{n+1}\left([y^O, y^L - 1], u_3\right) \right) \geq 0$$

$$\iff \underbrace{Q_{n+1}\left([y^O, y^L + 1], u_1\right) - Q_{n+1}\left([y^O, y^L], u_1\right)}_{A} + \underbrace{Q_{n+1}\left([y^O, y^L], u_1\right) - Q_{n+1}\left([y^O, y^L], u_2\right)}_{\text{by optimality} \, \geq 0} - \underbrace{\left( Q_{n+1}\left([y^O, y^L], u_2\right) - Q_{n+1}\left([y^O, y^L], u_3\right) \right)}_{\text{by optimality} \, \leq 0} - \underbrace{\left( Q_{n+1}\left([y^O, y^L], u_3\right) - Q_{n+1}\left([y^O, y^L - 1], u_3\right) \right)}_{B} \geq 0.$$

Now, rearranging the terms for $A$,

$$A = \sum_{y^{O'} \in \mathcal{S}^O} \mathbb{P}(y^{O'} \mid y^O) \left[ \Upsilon(y^O, u_1) \left( V_n\left([y^{O'}, y^L + 1 - u_1]\right) - V_n\left([y^{O'}, y^L - u_1]\right) \right) + (1 - \Upsilon(y^O, u_1)) \left( V_n\left([y^{O'}, y^L + 1]\right) - V_n\left([y^{O'}, y^L]\right) \right) \right] \geq \sum_{y^{O'} \in \mathcal{S}^O} \mathbb{P}(y^{O'} \mid y^O) \left[ V_n\left([y^{O'}, y^L]\right) - V_n\left([y^{O'}, y^L - 1]\right) \right] \geq B.$$

The second-to-last inequality is due to the induction hypothesis on $V_n\left([y^O, y^L]\right)$, and the last inequality follows from a similar expansion of $B$ together with the induction hypothesis. This theorem can be straightforwardly extended to the infinite horizon discounted MDP.
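Inequality (18) can likewise be checked numerically on the same kind of toy query MDP (again with illustrative dynamics and costs of our own choosing, not the exact model of this paper): every value-iteration backup preserves increasing differences of $V$ in the learner state.

```python
import numpy as np

# Check that V_{n+1} = min_u Q_{n+1} keeps increasing differences in y^L,
# i.e. inequality (18), on a toy query MDP (illustrative assumptions only).
L, GAMMA, C = 12, 0.6, 2.0
V = np.array([float(y * y) for y in range(L + 1)])    # V_0 = l, integer convex
for _ in range(20):
    ys = np.arange(L + 1)
    q0 = V[ys]                                                      # u = 0
    q1 = C + GAMMA * V[np.maximum(ys - 1, 0)] + (1 - GAMMA) * V[ys] # u = 1
    V = np.minimum(q0, q1)
    # second differences of V over y^L must stay nonnegative
    assert np.all(np.diff(V, 2) >= -1e-9)
print("increasing differences preserved at every backup")
```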
## B.4 On Threshold Structure Of Average Lagrangian Cost Optimal Policy

To show that the optimal policy of the unconstrained average cost MDP has a threshold structure, we first state the following lemma (Sennott, 1989).

Lemma 4. *Let $(\beta_k)$ be any increasing sequence of discount factors such that $\lim_{k \to \infty} \beta_k = 1$. Let $(\nu^*_{\beta_k})$ be the associated sequence of discounted optimal stationary policies. There exists a subsequence $(\alpha_k)$ of $(\beta_k)$ and a stationary policy $\nu$ that is the limit of $(\nu^*_{\alpha_k})$.*

An optimal policy given by Lemma 4 is an average cost optimal policy under suitable assumptions (Sennott, 1989). We verify these assumptions and characterize the average cost optimal policy in the following theorem:

Theorem 4. *Any stationary deterministic policy $\nu$ given by Lemma 4 is an average cost optimal policy. In particular, there exists a constant $\psi = \lim_{\beta \to 1} (1 - \beta) V^{\beta}(y)$ for every $y$, and a function $\Psi(y)$ with $-N \leq \Psi(y) \leq M_y$, such that*

$$\psi + \Psi(y) = \min_{u \in \mathcal{U}} \left\{ w(u, y; \lambda) + \sum_{y' \in \mathcal{S}} \mathbb{P}(y' \mid y, u) \Psi(y') \right\}.$$

*Furthermore, the stationary policy is average cost optimal with an average cost $\psi$.*

Proof. For any stationary policy $\nu$ to be average cost optimal, the following assumptions need to be satisfied (Sennott, 1989):

- Assumption 1: For every state $y$ and discount factor $\beta$, the optimal discounted cost $V^{\beta}(y)$ is finite.
- Assumption 2: There exists $N \geq 0$ such that $-N \leq \Psi_{\beta}(y) \triangleq V^{\beta}(y) - V^{\beta}(\mathbf{0})$, where $\mathbf{0}$ is a reference state.
- Assumption 3: There exists $M_y \geq 0$ such that $\Psi_{\beta}(y) \leq M_y$ for every $y$ and $\beta$, and for every $y$ there exists $u$ such that $\sum_{y' \in \mathcal{S}} \mathbb{P}(y' \mid y, u) < \infty$.
- Assumption 3': Assumption 3 holds and $\sum_{y' \in \mathcal{S}} \mathbb{P}(y' \mid y, u) M_{y'} < \infty$.

For a reference state $\mathbf{0} = [0, W, 0]$, the policy of always transmitting induces a stable buffer, and the expected time and cost for the first passage to state $\mathbf{0}$ are finite. Therefore, by Propositions 5(i) and 4(ii) of Sennott (1989) and from Ross (2014), Assumptions 1 and 3 are satisfied.
Assumption 3' is satisfied by the probability transition given in (8). Assumption 2 is satisfied because $V^{\beta}$ is increasing in $y^L, y^E$ and decreasing in $y^O$, and therefore $V^{\beta}(y) \geq V^{\beta}(\mathbf{0})\ \forall y \in \mathcal{S}$, as shown in Lemma 3. Due to the above lemma and theorem, and using the discussion in the proof of Lemma 3, the average cost optimal policy inherits the monotone threshold structure of the discounted optimal policy.
Review 1:
Summary: This paper considers covert FL, where the learner has two goals simultaneously: (1) learn the minimizer of a function $f$ based on the noisy gradients from distributed oracles by sending a query $q$, and (2) hide the minimizer of $f$ from a malicious eavesdropper. The authors cast this problem as controlling the SGD via a policy gradient algorithm that exploits the policy structure. The authors showed that the optimal policy has a monotone threshold structure. This paper also proposes an efficient policy gradient algorithm, which balances well between accuracy and robustness (against the eavesdropper) in practical scenarios, including the hate speech classification task.
Strengths and Weaknesses: Strengths
- The authors provided a proper motivation for the problem in the paper, especially the description of the problematic scenario where the eavesdropper behaves maliciously based on the revealed information about the hate speech classification model
- This paper provides some theoretical and empirical results on the optimal policy
Requested Changes: It would probably be better to improve the readability of the paper. Making a table summarizing important notations would be helpful. In figure/table captions, it is better to state what each mathematical notation is (e.g., \phi, \theta in Fig. 4)
Broader Impact Concerns: None
==================================================
Review 2:
Summary: The paper studies the covert optimization problem in federated learning, in which a learner dynamically decides when to query a stochastic gradient oracle. The authors model the decision-making problem of choosing stochastic gradients as a (constrained) Markov decision process and present several structural properties of the underlying optimal policies. To find an optimal policy, the authors propose a policy gradient method and demonstrate the performance of the proposed methods in experiments.
Strengths and Weaknesses:
Strengths:
- The authors apply the classical Markov decision process (MDP) framework to model the querying of a stochastic gradient oracle. This provides a general framework to study the eavesdropper issue in federated learning.
- The authors show that the optimal policies of the proposed MDPs have a threshold structure that can be used to reduce the policy search space. By approximating the policy using sigmoid functions, the authors propose a structured policy gradient method to learn an optimal query policy. This is a useful application of policy gradient methods in federated learning.
- The authors also provide experiments to show the performance of the proposed methods. The hate speech classification seems to be a new application of covert optimization.
Weaknesses:
- Modeling the stochastic gradient with controlled oracles as an MDP problem is intuitive. Can the authors provide more rigorous statements? Any simple examples of stochastic gradient methods?
- The assumptions made about the proposed MDPs can impose limitations. It is important to illustrate why they hold for the stochastic gradient method.
- The non-asymptotic convergence of the stochastic gradient method is quite standard in theory. However, non-asymptotic convergence is absent for the proposed policy gradient method. The computational complexity of the algorithm is also less discussed.
- The performance of the proposed policy is similar to that of the greedy policy. The pros and cons of the proposed method need to be clarified.
Requested Changes: Here are some other questions:
- What is the definition of $P$ in Equation (1)?
- It seems difficult to determine $M$ in MDPs. How can this be handled in the policy gradient algorithm and in experiments?
- It would be useful to illustrate the supermodular structure more and its implications for the optimal policy.
- Applying a policy trained by the policy gradient method to a stochastic gradient method can be difficult, due to the changing environment.
How does the policy trained by the policy gradient method generalize?
Broader Impact Concerns: No
==================================================
Review 3:
Summary: In this paper, a learner seeks to minimize a function f via gradient descent by querying a distributed oracle that provides noisy gradient evaluations. However, the learner seeks to hide the argmin of f from a malicious eavesdropper. Let me mention that I am not an expert in federated learning, but I do know a bit about covertness in the context of information theory.
Strengths and Weaknesses: The formulation and questions asked in this paper seem to be interesting and meaningful. The experimental evaluation is useful. The derivation of a Markov decision process as a means to control the stochastic gradient descent algorithm is also fairly novel.
Requested Changes: The reviewer, however, has the following major concerns about the submission.
1) There does not seem to be a theoretical guarantee for the Structured Policy Gradient algorithm in Section 3.4.
2) When researchers in information theory use the word "covert", it usually means something different. Please search for "covert communications" and look at this paper: https://ieeexplore.ieee.org/document/7407378/ Herein, a major consideration is to quantify the **tradeoff** between the level of covertness and throughput. In their investigations, the authors should also quantify the tradeoff between the convergence rate and the level of covertness (e.g., how well the argmin of f is actually hidden). This is not done in the present submission.
Broader Impact Concerns: NIL
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment:
## Summary and strengths
All reviewers were positive about the paper and pointed out that the problem is well-motivated, reasonably studied from the theoretical point of view, and given a sufficient experimental evaluation.
## Weaknesses
In short, the theory is somewhat limited, and only one problem is studied numerically.
**Extra details**. One weakness pointed out by Reviewer N6BD is that the paper's guarantees are asymptotic. The authors explained the difficulty of getting non-asymptotic guarantees in their response and Appendix A.6. While I understand the technical difficulty, it is, nevertheless, of higher interest to the TMLR community to have non-asymptotic guarantees. Reviewer N6BD also pointed out that the assumptions are a bit limiting, which I also agree with. The assumptions made about the optimization aspects (compact domain, bounded stochastic gradients) are not what the modern theory of SGD uses.
==================================================
# A Simple, Efficient And Scalable Contrastive Masked Autoencoder For Learning Visual Representations

Anonymous authors
Paper under double-blind review

## Abstract

Hybrid self-supervised learning methods that combine masked image modelling and contrastive learning have demonstrated state-of-the-art performance across many vision tasks. In this work we identify a property overlooked by previous hybrid methods: they can achieve considerable efficiency improvements compared to contrastive learning, whilst still outperforming the constituent contrastive and masked image modelling training components. To demonstrate this, we introduce CAN, a minimal and conceptually clean synthesis of (C) contrastive learning, (A) masked autoencoders, and (N) the noise prediction approach used in diffusion models. CAN is designed to be efficient, masking 50% of patches in *both* views, meaning that the overall FLOPs load of SimCLR is 70% higher than CAN for ViT-L backbones. Our combined approach outperforms its MAE and SimCLR constituent parts on an extensive set of downstream transfer learning and robustness tasks under both linear probe and finetune protocols, and pre-training on large scale datasets such as JFT-300M and ImageNet-21K. Code is provided in the supplementary material, and will be publicly released.

## 1 Introduction

Contrastive learning (Chen et al., 2020b) and masked image models such as MAE (He et al., 2022) employ very different learning mechanisms. The former learns to extract features that are invariant to certain semantics-preserving variations in data, while the latter reconstructs missing parts of an input, thereby learning spatial statistical correlations in data. Because of this, *hybrid* methods have recently been proposed that combine aspects of both with the goal of building a reinforced and improved training mechanism (Huang et al., 2022; Tao et al., 2022).
However, existing hybrid methods tend to suffer from two weaknesses compared to MAE: 1) training costs scale more poorly as model size increases, and 2) the re-introduction of complexity-increasing tricks such as multi-cropping and use of momentum updated target networks that are commonplace in contrastive learning. This increase in complexity is especially harmful to fast iteration of new models and methods given the increased adoption of web-scale training datasets (Yu et al., 2022; Radford et al., 2021; Jia et al., 2021) and the extreme accompanying costs. In this work we introduce CAN, a hybrid contrastive masked autoencoder designed with simplicity and efficiency as priorities. In the process our aim is to demonstrate that hybrid methods are not only a promising path to improved state-of-the-art performance (as prior work has shown) but can improve feature learning without higher computation costs or more complex training recipes. As well as a minimal fusion of contrastive learning and masked autoencoders, CAN additionally uses the denoising loss that has driven advances in diffusion models (Ho et al., 2020; Song et al., 2021). This loss predicts the *noise* added to an input image, introducing negligible overheads. Denoising offers a promising third complementary learning mechanism to contrastive learning and masked autoencoding by forcing the model to learn high-frequency information, whereas autoencoder reconstructions focus on low-frequency information (Hou et al., 2017). We show that CAN performs favourably according to key metrics: 1) performance-efficiency trade-off compared to contrastive learning and MAE, and 2) scalability to pre-training on large datasets. Indeed, CAN enjoys stronger performance than its constituent parts on their own, whilst using considerably fewer FLOPs

![1_image_0.png](1_image_0.png)

Figure 1: CAN enjoys a favourable performance-efficiency trade-off. **Left:** CAN scales more efficiently than SimCLR since it uses masked inputs.
**Middle and right:** CAN outperforms SimCLR and MAE on ImageNet linear probe and finetune evaluations for ViT-L models when pre-training on uncurated data such as JFT-300M.

than contrastive learning. This advantage continues to hold when pre-training on large datasets such as JFT-300M and ImageNet-21K, which consist of 300M and 14M images, respectively. For instance, evaluating JFT-trained ViT-L models using the top-1 accuracy of an ImageNet-trained linear probe, MAE achieves 64.1% and SimCLR achieves 73.4%, while CAN achieves 75.4%. In short, the advantages of CAN are:

1. **Simplicity.** CAN is a minimal synthesis of three powerful self-supervised learning methods: contrastive learning, masked autoencoders, and denoising.
2. **Efficiency.** CAN enjoys a favourable efficiency-performance trade-off (Figure 1), e.g., SimCLR uses 70% more FLOPs than CAN with ViT-L backbones.
3. **Scalability.** CAN scales well to training on large image datasets, such as JFT-300M and ImageNet-21K.

CAN is more efficient than SimCLR since it masks 50% of patches in each view. This also translates to faster run-times, with our largest training (ViT-L, 5000 epochs) taking 2 weeks for SimCLR and 1 week for CAN on our hardware. Our aim is to scale and solve SSL in a practical setting, specifically pre-training on large-scale datasets like JFT-300M and ImageNet-21K. We demonstrate that while pre-training on these large-scale datasets, we often outperform the MAE and SimCLR baselines by a significant margin across 15 downstream datasets encompassing linear evaluation, fine-tuning, few-shot learning, and robustness settings.

## 2 Related Work

Masked image models with Vision Transformers. The advent of the Vision Transformer (ViT) (Dosovitskiy et al., 2021b) provoked a focused effort to develop strong self-supervised learning frameworks for ViT backbones.
Works such as DINO (Caron et al., 2021) and MoCo-v3 (Chen et al., 2021b) demonstrated that techniques developed with ConvNet backbones in mind could also perform competitively using ViTs after proper tuning to suit the new architecture. ViT-specific methods have emerged since then, particularly masked image modelling (Bao et al., 2022; Chen et al., 2022; Xie et al., 2022), which use a mask-and-reconstruct training mechanism, taking inspiration from pre-training methods used in NLP (Devlin et al., 2018). This classical idea (Ballard, 1987) is enjoying a rejuvenation thanks to favourable efficiency when combined with the vision transformer architecture (Dosovitskiy et al., 2021b). Most notably, MAE (He et al., 2022) showed that classical masked autoencoding approaches could be used to pre-train ViTs *without* passing masked tokens through the encoder. This provides a significant efficiency boost; our method similarly takes advantage of this. Contrastive learning in computer vision. Self-supervision has received significant attention in computer vision as it offers a way to extract general purpose features without supervision. In particular, contrastive learning (van den Oord et al., 2018; Hénaff et al., 2020; Chen et al., 2020b; He et al., 2020; Tian et al., 2020; Chuang et al., 2020; Hénaff et al., 2021) has achieved state-of-the-art performance by enforcing invariance to augmentations, whilst using negative samples (Robinson et al., 2021a; Ge et al., 2021) to avoid trivial solutions by spreading the embedding out uniformly on the sphere (Wang & Isola, 2020). The contrastive pre-training task is conceptually very different from masked image models such as MAE, which learn spatial statistical dependencies. Another distinction is that autoencoders encourage information preservation in latent representations, whilst contrastive learning could suppress features (Chen et al., 2021a; Robinson et al., 2021b).
This leads us to hypothesize that the two approaches learn different, complementary data features. This motivates us to combine contrastive learning and masked image modelling so as to develop a reinforced pre-training task that enjoys the merits of each. **Denoising diffusion models.** Denoising autoencoders (DAE) (Vincent et al., 2010) learn to reconstruct clean data given a noisy input. By learning to map low-density data regions to high-density regions, a DAE learns the shape of the data manifold. This connection was made precise by Vincent (2011), who showed that DAEs learn the score-function $s(x) = \nabla_x \log p(x)$. This key observation underpins the significant recent advances in generative diffusion models, which use an estimate of the score-function to generate samples (Ho et al., 2020; Song et al., 2021). The recent success of DAEs in generative modelling has not yet translated to representation learning, with some exceptions (Asiedu et al., 2022; Zaidi et al., 2022). In this work we exploit a denoising autoencoder to eliminate the MAE inefficiency of reconstructing unmasked patches but never using them. Siamese masked image modelling. Several recent works propose approaches that combine ideas from masked image modelling and Siamese self-supervised learning. For instance, Huang et al. (2022) propose a combination of contrastive and masked reconstruction objectives using one masked view and one full (unmasked) view. Other recent works (Tao et al., 2022; Chen et al., 2022; Assran et al., 2022) use similar asymmetric designs. The key distinction between CAN and these works is that we strike a different balance, focusing on developing a *simple* and *efficient* method. For instance, we use *two masked views* and no momentum encoder. We hope the simplicity and efficiency of CAN, and our experiments showing its scalability, will make it easy to adapt and modify in future work.
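The score-function connection above can be made concrete with a standard identity (stated here for intuition; it is not an equation from this paper). If data is corrupted as $\tilde{x} = x + \sigma e$ with $e \sim \mathcal{N}(0, I)$, the mean-squared-error-optimal denoiser satisfies Tweedie's formula

$$\mathbb{E}[x \mid \tilde{x}] = \tilde{x} + \sigma^2 \nabla_{\tilde{x}} \log p_\sigma(\tilde{x}),$$

where $p_\sigma$ is the density of the noisy data. Equivalently, the optimal noise prediction is $-\sigma \nabla_{\tilde{x}} \log p_\sigma(\tilde{x})$, so a network trained to predict the added noise implicitly estimates the score of the noise-smoothed data distribution, which is exactly the quantity sampling-based diffusion models require.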
## 3 A Simple Contrastive Masked Autoencoder Framework

Our approach is a minimal synthesis of contrastive learning, the masked autoencoder (MAE) (He et al., 2022), and the denoising loss used in the training of diffusion models. We focus on simplicity and scalability, aiming to design a hybrid with as few complex or costly components as possible. We also aim to minimize wasted computation: in particular, the MAE decoder requires reconstructions of all patches, but only those of masked patches are used in the loss, a fact that CAN exploits. Below, first we detail the basic pipeline of generating views and passing masked inputs through the encoder and decoder, then explain the three objectives we use: contrastive, reconstruction, and denoising. The penultimate section describes the combined objective, and the final section discusses scalability.

## 3.1 Overview Of Method

Given a batch of $n$ images $\{x_i\}_{i=1}^n$, we generate two views $x^1_i, x^2_i \in \mathbb{R}^{h \times w \times 3}$ of each image without supervision using the same data augmentations as Chen et al. (2020b). Each image is then split into $T = (h/p) \times (w/p)$ non-overlapping patches of size $p \times p$: $x^1_{i,\text{patch}}, x^2_{i,\text{patch}} \in \mathbb{R}^{T \times p \times p \times 3}$ in preparation for input to the ViT encoder. We always assume that $p$ divides $h$ and $w$. Two masks $M^1_i, M^2_i \in \{0, 1\}^T$ are independently generated, with a 1 in coordinate $t \in \{1, \dots, T\}$ indicating that the $t$-th patch is masked. Each patch is masked independently with probability $r$, conditioned on always having exactly $T' = r \cdot T$ patches masked, which we assume is an integer. In all CAN experiments our default masking rate is $r = 50\%$ unless explicitly stated otherwise (note that for all MAE results we follow the exact settings as in (He et al., 2022), using the default $r = 75\%$). Following He et al. (2022), only the $T - T'$ *unmasked* patches are passed to the ViT encoder, which processes the two views in parallel.
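The patching and masking step just described can be sketched in a few lines of NumPy; the function names, image size, and patch size below are our illustrative choices, not the paper's code. Each view is split into $T$ patches, and exactly $T' = rT$ of them are masked, independently per view.

```python
import numpy as np

def patchify(x, p):
    """Split an (h, w, 3) image into T = (h/p)*(w/p) patches of shape (p, p, 3)."""
    h, w, c = x.shape
    assert h % p == 0 and w % p == 0
    return (x.reshape(h // p, p, w // p, p, c)
             .swapaxes(1, 2)
             .reshape(-1, p, p, c))

def random_mask(t, r, rng):
    """Binary mask over t patches with exactly int(r * t) ones (1 = masked)."""
    mask = np.zeros(t, dtype=int)
    mask[rng.choice(t, size=int(r * t), replace=False)] = 1
    return mask

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 32, 3))          # one augmented view
patches = patchify(x, p=8)                # T = 16 patches
m = random_mask(len(patches), r=0.5, rng=rng)
visible = patches[m == 0]                 # only these T - T' patches reach the encoder
print(patches.shape, int(m.sum()), visible.shape)  # (16, 8, 8, 3) 8 (8, 8, 8, 3)
```

Conditioning on exactly $T'$ masked patches (rather than masking each patch i.i.d.) keeps the encoder's input length fixed, which is what allows the two half-masked views to be processed in parallel.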
Masking a large fraction of patches from both views makes our method much more efficient (see Table 1) than contrastive methods that use two full views and recent works that use one full view and one masked view (Assran et al., 2022; Huang et al.,

![3_image_0.png](3_image_0.png)

Figure 2: **The CAN framework:** Two views of an image are generated, 50% of patches randomly masked in each, and noise is added to patches. An encoder is trained to solve three tasks: 1) **Reconstruction:** encoded patches are passed to a decoder that reconstructs missing patches, 2) **Denoise:** reconstructs the noise added to unmasked patches, and 3) **Contrast:** pooled patches are passed to a contrastive loss, using in-batch samples as negatives (Chen et al., 2020b).

2022). Finally, we collect the embeddings of unmasked tokens $z^1_i, z^2_i \in \mathbb{R}^{(T - T') \times d}$ and reshape them into $T \times d$ tensors by adding a learned [M] embedding to positions corresponding to masked tokens. The result is passed through a comparatively lightweight ViT decoder to produce outputs $\hat{x}^1_i, \hat{x}^2_i$ in image space $\mathbb{R}^{h \times w \times 3}$.

## 3.2 Contrastive Learning Objective

The embeddings $z^1_i, z^2_i \in \mathbb{R}^{(T - T') \times d}$ returned by the encoder are pooled via a simple mean along the first dimension to form $d$-dimensional embeddings, which are passed through a lightweight MLP projection head that maps into a lower dimensional space $\mathbb{R}^r$, $r < d$, and normalized to unit length to produce embeddings $u^1_i, u^2_i \in \mathbb{R}^r$ for $i = 1, \dots, n$. For the $i$th batch item we collect the other $2n - 2$ in-batch samples $\mathcal{N}_i = \{u^1_j, u^2_j\}_{j \neq i}$ to use as negatives, and compute the $\mathcal{L}_{\text{InfoNCE}}$ loss:

$$\frac{1}{2n} \sum_{v=1,2} \sum_{i=1}^{n} -\log \frac{e^{u_i^{1\top} u_i^{2} / \tau}}{e^{u_i^{1\top} u_i^{2} / \tau} + \sum_{u^- \in \mathcal{N}_i} e^{u_i^{v\top} u^- / \tau}}$$

where $\tau > 0$ is a temperature parameter, defaulting to 0.1.
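A compact NumPy sketch of the loss above (our illustration, not the paper's code): each embedding's positive is the other view of the same image, and the remaining $2n - 2$ in-batch embeddings act as negatives.

```python
import numpy as np

def info_nce(u1, u2, tau=0.1):
    """Symmetric InfoNCE over 2n unit-norm embeddings (two views of n images)."""
    n = u1.shape[0]
    u = np.concatenate([u1, u2], axis=0)             # (2n, r)
    logits = u @ u.T / tau                           # pairwise similarities
    np.fill_diagonal(logits, -np.inf)                # a sample is not its own negative
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # row i's positive: i <-> i + n
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
u1 = rng.normal(size=(4, 8)); u1 /= np.linalg.norm(u1, axis=1, keepdims=True)
u2 = rng.normal(size=(4, 8)); u2 /= np.linalg.norm(u2, axis=1, keepdims=True)
assert info_nce(u1, u1) < info_nce(u1, u2)   # perfectly aligned views give a lower loss
print(round(float(info_nce(u1, u2)), 3))
```

Each row's denominator contains the positive plus all $2n - 2$ negatives, matching the loss defined above.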
Our choice of InfoNCE objective is justified by recent work (Koppula et al., 2022) that found that a simple InfoNCE objective as in SimCLR scales to large datasets better than methods such as BYOL (Grill et al., 2020) or DINO (Caron et al., 2020).

## 3.3 Patch Reconstruction Objective

The outputs $\hat{x}^1_i, \hat{x}^2_i$, $i = 1, \dots, n$ of the ViT decoder are trained to reconstruct the missing patches of each image. As in He et al. (2022), we find it best to only compute the reconstruction loss on masked patches:

$$\mathcal{L}_{\text{rec}} = \frac{1}{2n} \sum_{v=1,2} \sum_{i=1}^{n} \left\| M^v_i \circ \left( x^v_i - \hat{x}^v_i \right) \right\|_2^2$$

where $\circ$ multiplies all pixels in the $t$th patch of the residual image $x^v_i - \hat{x}^v_i$ by $(M^v_i)_t \in \{0, 1\}$. Whilst computing the loss only on masked patches gives better performance, it indicates wasted computation since the decoder also produces reconstructions for unmasked patches. To avoid waste we propose an alternative objective specifically for unmasked patches, which we discuss next.

![4_image_0.png](4_image_0.png)

Figure 3: **Denoising:** Both the encoded patches and the noise level $\sigma$ are passed to the decoder by passing $\sigma$ through an MLP, and adding the result to each embedded token.

## 3.4 Denoising Objective

Inspired by the significant advances in diffusion modelling using *denoising* training objectives (Ho et al., 2020; Kingma et al., 2021) and their equivalent score-based counterparts (Song et al., 2021; Vincent, 2011), we revisit the suitability of denoising for self-supervised learning. We add independent isotropic Gaussian noise to each image, $x^v_i \leftarrow x^v_i + \sigma^v_i e^v_i$ with $e^v_i \sim \mathcal{N}(0, I)$ and $\sigma^v_i$ uniformly sampled from an interval $[0, \sigma_{\max}]$. This noisy input is masked and passed to the encoder as described in Section 3.1.
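Together with the noise addition just described, the reconstruction loss partitions each view's patches: masked patches carry the clean-image target, while unmasked patches carry the noise target $\sigma e$ (cf. Figure 2), so every decoder output contributes to exactly one of the two losses. A minimal NumPy sketch of this partition, with shapes and names of our own choosing:

```python
import numpy as np

def per_patch_sq_error(target, pred):
    """Squared error per patch; inputs have shape (T, p, p, 3)."""
    return ((target - pred) ** 2).reshape(target.shape[0], -1).sum(axis=1)

rng = np.random.default_rng(0)
T, p = 16, 8
x = rng.normal(size=(T, p, p, 3))            # clean patches of one view
sigma = 0.2                                  # illustrative noise level
e = rng.standard_normal(x.shape)
x_noisy = x + sigma * e                      # noisy input fed to the encoder
mask = np.zeros(T, dtype=int); mask[:8] = 1  # 1 = masked (hidden from encoder)

x_hat = rng.normal(size=x.shape)             # stand-in for the decoder outputs

# Reconstruction term: clean patches as target, masked positions only.
l_rec = (mask * per_patch_sq_error(x, x_hat)).sum()
# Denoising term: the added noise sigma * e as target, unmasked positions only.
l_denoise = ((1 - mask) * per_patch_sq_error(sigma * e, x_hat)).sum()
assert l_rec > 0 and l_denoise > 0
```

A decoder that outputs the clean patch at masked positions and the added noise at unmasked positions drives both terms to zero, which is the sense in which no decoder output is wasted.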
When passing encoded patches to the decoder we make a small addition to the method in Section 3.1 to provide the decoder with information on the noise level $\sigma^v_i$ to help it separate noise from the ground truth image. This is motivated by denoising diffusion methods, which pass both the noisy image and the noise level as inputs to the denoising model (Ho et al., 2020). We approach this by using $\sigma^v_i$ as a positional encoding in the decoder, similarly to Vaswani et al. (2017). First we produce a sinusoidal embedding of $\sigma^v_i$ in $\mathbb{R}^d$, which is passed through a lightweight 2-layer MLP with ReLU activations of constant width $d$ to produce a (learnable) embedding $p^v_i \in \mathbb{R}^d$, whose dimension matches the latent dimension of $z^v_i \in \mathbb{R}^{T \times d}$. We add the result to each embedded token (including missing tokens [M]) to provide noise-level information: $(z^v_i)_t \leftarrow (z^v_i)_t + p^v_i$ for $t = 1, \dots, T$, and pass the result to the decoder, producing $\hat{x}^v_i$. We define our denoising loss function, which is computed only on unmasked pixels:

$$\mathcal{L}_{\text{denoise}} = \frac{1}{2n} \sum_{v=1,2} \sum_{i=1}^{n} \left\| (1 - M^v_i) \circ \left( \sigma^v_i e^v_i - \hat{x}^v_i \right) \right\|_2^2$$

where $\circ$ multiplies pixels by the patch-level masking as in Section 3.3. Note that the reconstruction loss $\mathcal{L}_{\text{rec}}$ still uses the *clean* input $x$ as its target, with no noise added. The denoising loss is extremely lightweight, introducing only a very small overhead due to the MLP. We emphasize that the reconstruction of noise patches comes at zero additional cost since the decoder produces reconstructions of all patches, both masked and unmasked, but only reconstructions of masked patches are used in $\mathcal{L}_{\text{rec}}$. Finally, it has been observed in the diffusion modelling literature that although it is equivalent to train a denoising model to estimate the noise $e$ or to estimate the clean input $x$ (Vincent, 2011), there is an empirical gap, with the noise target faring better.
While we do not pursue it further, our testing corroborates this.

## 3.5 The Combined Objective Function

The overall CAN objective trains the encoder and decoder to optimize three losses combined:

$$\mathcal{L}_{\text{CAN}} = \lambda_{\text{InfoNCE}} \mathcal{L}_{\text{InfoNCE}} + \lambda_{\text{rec}} \mathcal{L}_{\text{rec}} + \lambda_{\text{denoise}} \mathcal{L}_{\text{denoise}}$$

where $0 \leq \lambda_{\text{InfoNCE}}, \lambda_{\text{rec}}, \lambda_{\text{denoise}}$ and $\lambda_{\text{InfoNCE}} + \lambda_{\text{rec}} + \lambda_{\text{denoise}} = 1$ weight the objectives. In practice we parameterize the weights by eliminating one variable using the equality constraint, taking $\lambda_{\text{rec}} = (1 - \lambda_{\text{InfoNCE}}) \cdot \lambda$ and $\lambda_{\text{denoise}} = (1 - \lambda_{\text{InfoNCE}}) \cdot (1 - \lambda)$, where $0 \leq \lambda \leq 1$. This parameterization makes it easy to control the weighting between the two reconstruction losses $\mathcal{L}_{\text{rec}}, \mathcal{L}_{\text{denoise}}$ on the one hand, and the contrastive loss $\mathcal{L}_{\text{InfoNCE}}$ on the other. We find that performance is robust to the choice of $\lambda$, and many choices of $\lambda_{\text{InfoNCE}}$ also work well (see Section 5).

| Method | Architecture | Epochs | IN-1K top-1 |
|---|---|---|---|
| MoCLR (Tian et al., 2021) | R50 | 5000 | 67.6 |
| BYOL (Grill et al., 2020) | R50 | 5000 | 67.9 |
| DnC (Tian et al., 2021) | R50 | 1000 | 67.9 |
| DnC (Tian et al., 2021) | R50 | 4500 | 70.7 |
| MoCLR (Tian et al., 2021) | R200×2 | 5000 | 74.2 |
| DnC (Tian et al., 2021) | R200×2 | 3000 | 77.3 |
| MAE† (He et al., 2022) | ViT-L | 1600 | 50.5 |
| MAE† (He et al., 2022) | ViT-L | 5000 | 64.1 |
| SimCLR† (Chen et al., 2020b) | ViT-B | 800 | 65.8 |
| SimCLR† (Chen et al., 2020b) | ViT-L | 800 | 72.6 |
| SimCLR† (Chen et al., 2020b) | ViT-L | 1600 | 73.1 |
| SimCLR† (Chen et al., 2020b) | ViT-L | 5000 | 73.4 |
| CAN (ours) | ViT-B | 800 | 67.1 |
| CAN (ours) | ViT-L | 800 | 72.8 |
| CAN (ours) | ViT-L | 1600 | 74.3 |
| CAN (ours) | ViT-L | 3000 | 75.3 |
| CAN (ours) | ViT-L | 5000 | 75.4 |

Table 1: **JFT-300M pre-training:** Comparison to the state of the art on ImageNet linear probe. CAN outperforms all methods except DnC, which uses a complicated multi-stage training process.
Computation is measured as ImageNet-equivalent epochs. †Our implementation of (Chen et al., 2020b) and (He et al., 2022).

## 3.6 Discussion On Efficiency

The efficiency of CAN arises from masking 50% of both views. We also omit certain design choices in the interests of efficiency: we do not use a momentum encoder or multiple views (multi-crop). Each of these components tends to add significant (2× or more) expense to training. Even without these components CAN achieves strong performance, outperforming its key constituent parts SimCLR and MAE.

## 4 Results

## 4.1 Pre-Training On Uncurated Data: JFT-300M

A key promise of self-supervised learning is to allow models to be trained on extremely large scale image datasets collected from the Web. Not only is such data likely to be *unannotated*, but also *uncurated*: images containing many objects, variable lighting, artifacts (e.g., watermarks), and so on. The large variation in images found online presents a major challenge to self-supervised learning, and it is not guaranteed that methods that work well on curated (and comparatively smaller) datasets such as ImageNet will work equally well on less curated data. To study how CAN scales to large datasets we use JFT-300M (Sun et al., 2017), a dataset of around 300 million images. Setup. Training time is measured in ImageNet-equivalent epochs: 1 epoch equals 1281167/[batch size] steps, the number of steps in one IN-1K epoch. Models are evaluated using linear probe and finetuning on IN-1K. All hyperparameters were tuned on IN-1K, besides the learning rate and weight decay, which we cut by a factor of 4 and 2, respectively, to stabilize training on JFT-300M. See Appendix C and Section 5 for details. Results. Figure 1 compares CAN to SimCLR and MAE baselines using ViT-L models.
CAN achieves a much better trade-off between efficiency (measured in FLOPs) and performance using ViT-L models for all three methods: SimCLR uses 70% more FLOPs than CAN, which consistently outperforms both SimCLR and MAE: for training ViT-L models for 5000 epochs, CAN achieves an IN-1K linear probe performance of 75.4%, compared to 73.4% for SimCLR and 64.1% for MAE. The relatively poorer linear probe performance of MAE on JFT-300M highlights the non-triviality of scaling from IN-1K to larger datasets and suggests that

| Method | Pre-training epochs | Encoder | No additional params. | Masked image | Finetune | Linear probe |
|---|---|---|---|---|---|---|
| from scratch | 100 | ViT-B | ✓ | ✗ | 79.1 | - |
| MoCo-v3 (Chen et al., 2021b) | 300 | ViT-B | ✗ | ✗ | 83.0 | 76.7 |
| DINO (Caron et al., 2021) | 1600 | ViT-B | ✗ | ✗ | 82.8 | 78.2 |
| CIM (Fang et al., 2022) | 300 | ViT-B | ✗ | ✗ | 83.1 | - |
| CAE (Chen et al., 2022) | 800 | ViT-B | ✗ | ✗ | 83.8 | 68.6 |
| CAE (Chen et al., 2022) | 1600 | ViT-B | ✗ | ✗ | 83.9 | 70.4 |
| BEiT (Bao et al., 2022) | 800 | ViT-B | ✗ | ✗ | 83.2 | 37.6* |
| SimMIM (Xie et al., 2022) | 800 | ViT-B | ✓ | ✗ | 83.8 | 56.7 |
| MAE (He et al., 2022) | 800 | ViT-B | ✓ | ✓ | 83.1 | - |
| MAE (He et al., 2022) | 1600 | ViT-B | ✓ | ✓ | 83.6 | 68.0 |
| CAN (ours) | 800 | ViT-B | ✓ | ✓ | 83.4 | 74.0 |
| CAN (ours) | 1600 | ViT-B | ✓ | ✓ | 83.6 | 74.8 |
| SimCLR† (Chen et al., 2020b) | 800 | ViT-L | ✓ | ✗ | 83.4 | 73.9 |
| MAE (He et al., 2022) | 800 | ViT-L | ✓ | ✓ | 84.9 | 73.5 |
| MAE† (He et al., 2022) | 800 | ViT-L | ✓ | ✓ | 83.7 | 71.4 |
| CAN (ours) | 800 | ViT-L | ✓ | ✓ | 84.7 | 76.2 |

Table 2: **Finetune and linear probe results with pre-training on ImageNet-1K.** Note that CAN does not use multi-crop augmentation or a momentum encoder. †Our implementation of (Chen et al., 2020b) and (He et al., 2022). *Quoted from Chen et al.
(2022). while MAE is scalable for *model size*, scalability to larger *datasets* requires further study. Figure 1 (right) gives finetuning results. CAN performs favourably: for a 5000 epoch pre-training schedule, CAN achieves an IN-1K linear probe performance of 86.1%, compared to 85.5% for SimCLR and 85.4% for MAE. CAN also enjoys better scaling with training schedule length than either MAE or SimCLR, with the dierence in performance becoming *larger* for longer schedules. We hypothesize that this is not coincidental, and that strong pre-training tasks like CAN play an important role in scalability. We also compare CAN to the current state of the art on JFT-300M pre-training in Table 1. Our best performance, 75.4% with ViT-L outperforms all methods besides DnC, with 77.3% (Tian et al., 2021) with R200◊2. However we note that CAN is *considerably* simpler than DnC, which involves training 10 separate "expert" models (each as large as the final model), and then using MoCLR (an improvement of SimCLR that adds a momentum encoder and more), using distillation to produce a single final model. Our calculations suggest that training a ViT-L with CAN is about 3◊ faster than training the considerably smaller ResNet50 with DnC in terms of wall clock time (see Appendix B for explanation). CAN on ViT-L outperforms MoCLR with R200◊2 backbone (similar parameter counts), where we note that MoCLR performs as well or better than BYOL and MoCo-v3 on IN-1K (Tian et al., 2021). ## 4.2 Pre-Training On Imagenet-21K We also consider the performance of CAN on pre-training on ImageNet-21K (IN-21K), a publicly available dataset of 14.2 million images Deng et al. (2009). We use the same hyperparameter settings as JFT-300M. We run a full set of evaluations on linear probe (Table 6), robustness (Figure 15), and few-shot learning (Figure 16) (see Sections 4.4 and 4.5 for details on few-shot and robustness evaluations). Results are reported in Appendix A.1. 
CAN also performs well with IN-21K pre-training: CAN finetuned on IN-1K shows better robustness than MAE and SimCLR in 8 out of 8 cases, and CAN achieves the best 25-shot performance on 6 out of 9 datasets.

## 4.3 Pre-Training On ImageNet-1K

Next we evaluate our method using ImageNet (IN-1K) pre-training to verify that it is also competitive in this setting. Results in Table 2 record the top-1 accuracy on IN-1K classification of finetuned models and linear probes. When finetuning, CAN achieves 83.6% with ViT-B, outperforming other contrastive approaches such as MoCo-v3 (83.0%), and is competitive with other state-of-the-art approaches such as CAE (83.9%). The linear probe performance of CAN is 74.8% using ViT-B, beating all masked image modelling methods, the best of which is CAE with 70.4% (Chen et al., 2022). CAN is only outperformed by MoCo-v3 and DINO, which use momentum encoders and two full image views, and in the case of DINO 10 multi-crop views.

Figure 4: **Few-shot:** ViT-L models pre-trained on JFT-300M for 5000 epochs are evaluated on 9 datasets in the few-shot setting (10-shot and 25-shot). CAN outperforms MAE and SimCLR.

Note that the *masked image* column indicates whether a method uses one or more full image views as input to the model, and the *no additional parameters* column indicates whether a method relies on other parameters besides the main encoder, e.g., from a pre-trained tokenizer or a momentum-updated target encoder. We also report results for our MAE implementation, which approximately matches the numbers reported in He et al. (2022), validating our MAE results on JFT-300M.

## 4.4 Few-Shot Learning

We use linear probes to evaluate the suitability of CAN for few-shot learning, following the protocol of Dosovitskiy et al. (2021a). We use the models pre-trained on JFT-300M for 5000 epochs whose ImageNet performance is recorded in Figure 1.
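The few-shot protocol fits a linear probe on k labeled examples per class over frozen features. A minimal sketch of the class-balanced episode sampling this implies (names are illustrative, not from the evaluation code):

```python
import random

def sample_k_shot(labels, k, seed=0):
    """Class-balanced few-shot episode: indices of exactly k examples per class.

    `labels` holds integer class labels of a frozen-feature dataset; a linear
    probe would then be fit on the selected examples. Illustrative only.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    episode = []
    for y in sorted(by_class):
        episode.extend(rng.sample(by_class[y], k))
    return episode

# a 25-shot episode over a toy 3-class dataset
toy_labels = [i % 3 for i in range(300)]
episode = sample_k_shot(toy_labels, 25)
```

Fixing the seed makes episodes reproducible across the methods being compared, so differences in probe accuracy reflect the representations rather than the sampled support sets.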
Results in Figure 4 for few-shot transfer learning on 9 other datasets show that the superior performance on IN-1K translates to strong performance on other tasks. We also note that our 25-shot ViT-L models beat the *full-shot* performance of both DnC and BYOL ResNet50 models (also trained for 5000 epochs on JFT-300M) on 6 out of 8 datasets (Tian et al., 2021). See Appendix A for many additional results, including pre-training on IN-21K.

## 4.5 Robustness To Distribution Shift

Finally, we consider the robustness of CAN to distribution shifts. We use ViT-L backbones trained for 5000 epochs on JFT-300M, which have been finetuned on IN-1K. Model performance is evaluated on a number of different validation sets with the same 1000 classes as IN-1K (Mao et al., 2022). Figure 5 reports results on the following validation sets, which cover a large variety of distribution shifts: original IN-1K (Deng et al., 2009), IN-v2 (Recht et al., 2019), IN-ReaL (Beyer et al., 2020), IN-Adversarial (Hendrycks et al., 2021b), IN-Rendition (Hendrycks et al., 2021a), ObjectNet (Barbu et al., 2019). CAN performs favourably under JFT-300M, IN-21K and IN-1K pre-training, beating SimCLR and MAE baselines in nearly all cases. See Appendix A for additional results.

Figure 5: **Robustness:** Evaluating performance under distribution shifts with respect to models finetuned on IN-1K. Validation performance of ViT-L models is reported on 7 different datasets.

## 5 Hyperparameter Analysis

We study the different components of CAN to better understand the effect of the different mechanisms, and to determine optimal parameter configurations. All ablations use ViT-B models trained for 100 epochs on IN-1K and evaluated with a linear probe on IN-1K unless explicitly said otherwise. We use the best loss weights and noise level from these experiments for the experiments in Section 4.

| Method | Contrastive loss (↓) | Reconstruction loss (↓) |
|---|---|---|
| SimCLR | 9.157 | — |
| MAE | — | 0.1658 |
| CAN (ours) | 9.143 | **0.1633** |

Table 3: **Loss complementarity.** CAN training achieves *lower* training loss for both the contrastive and reconstruction objectives than individual training. All methods use 50% masking for fair comparison.

Complementarity of contrastive and reconstruction losses. A key hypothesis motivating our work is that contrastive learning and masked autoencoder reconstruction may not only be compatible training objectives, but *complementary* ones. Table 3 compares the final training value of the contrastive loss L_InfoNCE and reconstruction loss L_rec when jointly trained (i.e., CAN) compared to only optimizing L_InfoNCE (SimCLR) or only L_rec (MAE). The results support the hypothesis: joint training achieves a lower loss on *both* objectives compared to individual training.

| None | +noise | +noise, +loss | Full |
|---|---|---|---|
| 67.9 | 68.6 | 68.4 | 68.9 |

Table 4: **Denoising objective.** "Full" denotes the entire method as described in Section 3.4.

| CN | CA | CAN (full) |
|---|---|---|
| 68.5 | 67.9 | 68.9 |

Table 5: **CAN loss terms.** We remove each of the three loss terms in CAN one by one.

Ablating CAN loss terms. CAN comprises three components: (C) contrastive, (A) masked autoencoder, and (N) denoising losses. We ablate each of the three components in Table 5, setting the loss weight to zero to "remove" a component. We use ViT-B models pre-trained for 100 epochs. Removing any component leads to worse performance, with removing the contrastive loss hurting the most.

Denoising method. Table 4 studies the effect of each of the components of the denoising method.
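The noise-corruption step this ablation dissects can be sketched in a few lines. This is a minimal illustration of our reading of the setup (per-view noise level drawn uniformly from [0, σ_max], with the noise itself as the prediction target); function and variable names are ours, not the authors' implementation:

```python
import random

def corrupt_view(patches, sigma_max, rng):
    """Add Gaussian noise with a randomly drawn level to one image view.

    Returns (noisy, noise, sigma). The denoising head is trained to predict
    `noise` from `noisy`, with `sigma` also made available to the decoder
    (here simply returned). A minimal sketch, assuming flattened patch values.
    """
    sigma = rng.uniform(0.0, sigma_max)               # sigma ~ U[0, sigma_max]
    noise = [sigma * rng.gauss(0.0, 1.0) for _ in patches]
    noisy = [x + n for x, n in zip(patches, noise)]
    return noisy, noise, sigma

rng = random.Random(0)
clean = [0.1, -0.2, 0.3]
noisy, eps, sigma = corrupt_view(clean, 0.05, rng)
# the prediction target is the noise `eps`, not the clean patches
```

Predicting the noise rather than the clean patches mirrors the parameterization used in denoising diffusion models, consistent with the finding below that clean-patch prediction works poorly.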
We use ViT-B models trained for 100 epochs on ImageNet, and consider four settings, each adding in more parts of the method: 1) CAN with no denoising, 2) adding noise to the input only, 3) adding noise and using the denoising loss, and 4) the full method with all of the described components, including using the noise level σ_vi as a positional encoding in the decoder. Results show that simply adding noise as a data augmentation improves performance by 0.7%, which can be improved to 1% by adding a reconstruction loss with the noise level passed as an argument. The noise level argument is necessary: the reconstruction loss without the noise level argument performs worse (68.4%) than noise with no reconstruction at all (68.6%). We emphasize that the improvement from denoising comes at minimal run time and memory cost, since it uses reconstructions produced by the decoder, which in the case of MAE are simply thrown away unused. We also tried predicting the clean patches instead of the noise, and found it worked poorly, corroborating similar findings in the diffusion literature.

Masking rate. Figure 6 reports the behavior of CAN and SimCLR under different masking rates on IN-1K and JFT-300M pre-training (for JFT-300M we use 800 epochs). The performance of SimCLR decreases as the masking rate increases, suggesting that masking is not an effective data augmentation. In contrast, the performance of CAN peaks at a non-zero masking rate, but at a much lower rate than the 75% used by MAE on IN-1K. This occurs because very low masking rates are preferred by the contrastive part of CAN, but severely damage the autoencoder part, as it can learn trivial solutions. The considerable efficiency improvement from masking 50% of patches more than compensates for the small drop in performance for a fixed number of epochs.

Figure 6: CAN and SimCLR with different masking rates. ViT-B models are pre-trained for 100 epochs on IN-1K (left), and 800 epochs on JFT-300M (right).

Figure 7: ViT-B models pre-trained on IN-1K for 100 epochs. **Left:** The best contrastive loss weight is small but non-zero. **Middle:** A wide range of σ_max values improve over no noise. **Right:** Performance is not sensitive to the denoising loss weight.

Contrastive loss weight. We vary the weight λ_InfoNCE used to balance the contributions of the contrastive and reconstruction losses. Recall that a larger λ_InfoNCE places higher weight on the contrastive loss. Results in Figure 7 show that the best weight is λ_InfoNCE = 0.03, which approximately balances the magnitudes of the two terms (see Table 3).

Denoising loss weight and noise level. We study the noise level interval [0, σ_max] from which input noise is sampled, and the weight λ balancing the denoising and reconstruction losses. Results in Figure 7 show that the best maximum noise level is σ_max = 0.05, and that similar performance is attained for different weights on the denoising loss.

## 6 Discussion

We present CAN, a simple, efficient and scalable self-supervised method for visual representation learning. CAN combines ideas from contrastive learning, masked autoencoding, and diffusion denoising into a single high-performing method. Extensive empirical results show that CAN scales with minimal changes to large uncurated datasets, outperforming SimCLR and MAE on a wide range of downstream tasks and evaluations, including ImageNet linear probes, few-shot learning, robustness, and finetuning. Our results suggest that combining different self-supervised methods can produce better results than the constituent parts alone. Further exploration of this search space appears a promising avenue for future work.

## References

Emmanuel Brempong Asiedu, Simon Kornblith, Ting Chen, Niki Parmar, Matthias Minderer, and Mohammad Norouzi. Decoder denoising pretraining for semantic segmentation. *preprint arXiv:2205.11423*, 2022.
Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. In preprint arXiv:2204.07141, 2022. Dana H Ballard. Modular learning in neural networks. In Association for the Advancement of Artificial Intelligence (AAAI), volume 647, pp. 279–284, 1987. Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. In *Int. Conf. on* Learning Representations (ICLR), 2022. Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 32, 2019. Lucas Beyer, Olivier J Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. Are we done with ImageNet? In *preprint arXiv:2006.07159*, 2020. Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, pp. 9912–9924, 2020. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Int. Conference on Computer Vision (ICCV), pp. 9650–9660, 2021. Mark Chen, Alec Radford, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *Int. Conference on Machine Learning (ICML)*, 2020a. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *Int. Conference on Machine Learning (ICML)*, pp. 1597–1607. PMLR, 2020b. Ting Chen, Calvin Luo, and Lala Li. Intriguing properties of contrastive losses.
In *Advances in Neural* Information Processing Systems (NeurIPS), volume 34, pp. 11834–11845, 2021a. Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin, Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo, Gang Zeng, and Jingdong Wang. Context autoencoder for self-supervised representation learning. In preprint arXiv:2202.03026, 2022. Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In *Int. Conference on Computer Vision (ICCV)*, pp. 9640–9649, 2021b. Ching-Yao Chuang, Joshua Robinson, Yen-Chen Lin, Antonio Torralba, and Stefanie Jegelka. Debiased contrastive learning. In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 33, pp. 8765–8775, 2020. Ekin Dogus Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le. RandAugment: Practical automated data augmentation with a reduced search space. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 3008–3017, 2020. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255. Ieee, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *North American Chapter of the Association for Computational* Linguistics (NAACL), 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Int. Conf. on Learning Representations (ICLR), 2021a. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 
An image is worth 16x16 words: Transformers for image recognition at scale. In *Int. Conf. on Learning Representations (ICLR)*, 2021b. Yuxin Fang, Li Dong, Hangbo Bao, Xinggang Wang, and Furu Wei. Corrupted image modeling for self-supervised visual pre-training. *preprint arXiv:2202.03382*, 2022. Songwei Ge, Shlok Kumar Mishra, Haohan Wang, Chun-Liang Li, and David Jacobs. Robust contrastive learning using negative samples with diminished semantics. In *Advances in Neural Information Processing* Systems (NeurIPS), volume abs/2110.14189, 2021. Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. preprint arXiv:1706.02677, 2017. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. *Advances in neural information processing systems*, 33:21271–21284, 2020. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 9729–9738, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 16000–16009, June 2022. Olivier Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aäron van den Oord. Data-efficient image recognition with contrastive predictive coding. In Int. Conference on Machine Learning (ICML), pp. 4182–4192, 2020. Olivier J Hénaff, Skanda Koppula, Jean-Baptiste Alayrac, Aäron Van den Oord, Oriol Vinyals, and João Carreira. Efficient visual pretraining with contrastive detection. In Int.
Conference on Computer Vision (ICCV), 2021. Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In *Int. Conference on Computer Vision (ICCV)*, pp. 8340–8349, 2021a. Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 15262–15271, 2021b. Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, and Deqing Sun. Pyramid adversarial training improves ViT performance. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13419–13429, 2022. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In *Advances in Neural* Information Processing Systems (NeurIPS), volume 33, pp. 6840–6851, 2020. Xianxu Hou, Linlin Shen, Ke Sun, and Guoping Qiu. Deep feature consistent variational autoencoder. In 2017 IEEE winter conference on applications of computer vision (WACV), pp. 1133–1141. IEEE, 2017. Zhicheng Huang, Xiaojie Jin, Chengze Lu, Qibin Hou, Ming-Ming Cheng, Dongmei Fu, Xiaohui Shen, and Jiashi Feng. Contrastive masked autoencoders are stronger vision learners. *arXiv:2207.13532v1*, 2022. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In *International Conference on Machine Learning*, pp. 4904–4916. PMLR, 2021. Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In Advances in Neural Information Processing Systems (NeurIPS), volume 34, pp. 21696–21707, 2021. Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby.
Big Transfer (BiT): General visual representation learning. In *ECCV*, 2020. Skanda Koppula, Yazhe Li, Evan Shelhamer, Andrew Jaegle, Nikhil Parthasarathy, Relja Arandjelovic, João Carreira, and Olivier Hénaff. Where should I spend my FLOPs? Efficiency evaluations of visual pre-training methods. *arXiv preprint arXiv:2209.15589*, 2022. Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. *preprint arXiv:1711.05101*, 2017a. Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In Int. Conf. on Learning Representations (ICLR), 2017b. Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, and Irfan Essa. Discrete representations strengthen vision transformer robustness. In Int. Conf. on Learning Representations (ICLR), 2022. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In *Int. Conference on Machine Learning (ICML)*, pp. 5389–5400. PMLR, 2019. Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning with hard negative samples. In *Int. Conf. on Learning Representations (ICLR)*, 2021a. Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, and Suvrit Sra. Can contrastive learning avoid shortcut solutions? In *Advances in Neural Information Processing Systems (NeurIPS)*, volume 34, pp. 4974–4986, 2021b. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In Int. Conf. on Learning Representations (ICLR), 2021.
Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your ViT? Data, augmentation, and regularization in vision transformers. *Transactions on Machine Learning Research (TMLR)*, 2021. Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In *Int. Conference on Computer Vision (ICCV)*, pp. 843–852, 2017. Chenxin Tao, Xizhou Zhu, Gao Huang, Yu Qiao, Xiaogang Wang, and Jifeng Dai. Siamese image modeling for self-supervised vision representation learning. In *preprint arXiv:2206.01204*, 2022. Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In Europ. Conference on Computer Vision (ECCV), pp. 776–794, 2020. Yonglong Tian, Olivier J Hénaff, and Aäron van den Oord. Divide and contrast: Self-supervised learning from uncurated data. In *Int. Conference on Computer Vision (ICCV)*, pp. 10063–10074, 2021. Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. preprint arXiv:1807.03748, 2018. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems* (NeurIPS), 2017. Pascal Vincent. A connection between score matching and denoising autoencoders. *Neural computation*, 23 (7):1661–1674, 2011. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and Léon Bottou. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. *Journal of machine learning research*, 11(12), 2010. Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *Int. Conference on Machine Learning (ICML)*, pp. 9929–9939. PMLR, 2020.
Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9653–9663, 2022. Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. In *ECCV*, 2017. Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. CoCa: Contrastive captioners are image-text foundation models. *preprint arXiv:2205.01917*, 2022. Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro SanchezGonzalez, Peter Battaglia, Razvan Pascanu, and Jonathan Godwin. Pre-training via denoising for molecular property prediction. *preprint arXiv:2206.00133*, 2022. Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Association for the Advancement of Artificial Intelligence (AAAI), 2020.
Review 1: Summary: This paper presents CAN, a hybrid self-supervised learning method that combines contrastive learning, masked autoencoders, and noise prediction from diffusion models. The paper demonstrates that the proposed approach improves efficiency and performance, even surpassing popular methods like MAE and SimCLR. Strengths and Weaknesses: Weaknesses: 1. The paper appears to be incomplete, as key elements such as Figure 2 and Figure 3 are missing from the main text, and it is difficult to follow with Table ??, Figure ?? and Appendix ??. Navigating the material becomes challenging due to these gaps. 2. The innovation of the proposed technique seems limited, primarily combining three existing loss components rather than introducing a substantially novel approach. Requested Changes: Kindly conduct a comprehensive review of the paper and ensure its completion. Broader Impact Concerns: n/a ================================================== Review 2: Summary: This paper presents an approach that combines several self-supervised learning methods into one framework. Specifically, the proposed representation learning method involves combining masked auto-encoders, contrastive learning and image denoising objectives. In contrast to existing methods, the proposed method is designed with a focus on training efficiency to allow scalability to large datasets. Strengths and Weaknesses: ## Strengths - The motivation for the problem addressed in this work is clearly explained and easy to follow. In order to fully take advantage of self-supervised learning, scalability is a crucial component. The proposed work presents an approach that is efficient while taking advantage of the state-of-the-art ideas in this domain. - The text of the paper is very well written and presents concise descriptions of the key ideas. While the idea of combining contrastive learning and masked image modeling is not novel, this work makes several novel proposals to improve efficiency.
The proposed ideas of masking multiple views, removing momentum encoders and multi-crop, and integrating denoising objectives are all novel and can provide useful guidance for future work. - The quantitative evaluation of the representation learned by CAN demonstrates clear superiority over all the component self-supervised learning objectives on multiple benchmarks. The experiments cover multiple datasets and multiple base architectures, demonstrating a convincing increase in performance. The ablative studies presented also provide useful insights into the contribution of the 3 components and design choices proposed in this hybrid self-supervised learning approach. ## Weakness I see only one major weakness of the proposed work. The entire text of the paper focuses on the improved efficiency of the proposed hybrid self-supervised learning approach. However, the paper doesn't present much experimental evidence about the claimed efficiency. Figure 1 demonstrates that the proposed approach is less efficient than MAEs and more efficient than SimCLR. But there is no other experiment investigating the efficiency of the approach. The experiments section focuses heavily on demonstrating the accuracy gains obtained by leveraging the proposed approach. Since the main claim is around efficiency, I hoped to see similar experiments with regards to efficiency. For example, there is no ablation on the computational cost of the design choices made in the proposed approach. The authors also clearly state and cite works to show that this is not the first hybrid self-supervised learning approach that combines MAEs and contrastive learning. The main contribution of this work is claimed to be a *more* efficient hybrid self-supervised learning approach. This claim is not backed by any experimental evidence. I think it is important to compare the efficiency of existing hybrid approaches to the proposed approach. Requested Changes: See weakness above.
Broader Impact Concerns: N/A ================================================== Review 3: Summary: The authors propose a new contrastive-MAE-hybrid self-supervised representation learning method, CAN, which is short for (C) contrastive learning, (A) masked autoencoders, and (N) noise prediction. The key idea is to combine contrastive objectives with masked autoencoders by masking augmented views, where a noise prediction task is further used to reinforce the high-frequency focus of MAEs. CAN is very simple. Extensive experiments on image classification with impressive results demonstrate that CAN is more training-efficient and scalable to the pretraining dataset scale. Strengths and Weaknesses: ### Strength - This work focuses on developing training-efficient and scalable self-supervised representation learning in a contrastive-MAE-hybrid fashion, where visual pretraining on large-scale datasets like JFT-300M is explored. The targeted problems of performance-efficiency trade-off, scalability, and hybrid pretraining are critical and relevant. In modern AI developments, this is very important when considering large-scale uncurated data from sources like the web. - The proposed method is simple, clear, and easy to follow. The idea of combining MAE and contrastive learning is relatively straightforward, while the idea of introducing denoising objectives for improving high-frequency focus is interesting to me. - Extensive experiments on image classification have been conducted, and impressive results have been achieved. ### Weakness - My first **major concern** lies in the technical novelty contribution. As mentioned in the Strength, CAN is quite simple without complicated designs. However, this does not change the fact that the major modification to combine contrastive learning and MAE upon baseline methods like SimCLR is to add masks and the MAE objective to augmented views.
Masking strategies have been proven to improve training efficiency [Li et al., 2023; Yang et al., 2023], so CAN seems quite straightforward as a hybrid representation learning method. However, I like the idea of denoising prediction for further improvement (which makes sense but is not significant, though). - My second **major concern** is about the experimental evaluation. The current experiments are conducted on large-scale datasets like JFT-300M, which is good (but an in-house dataset that is not available to the community). However, the main evaluation on downstream tasks only includes ImageNet-1K classification. As a foundation representation learning method, it is necessary to evaluate its representation transferring performance on tasks like object detection and semantic segmentation, since IN-1K Top-1 or linear probing performance cannot reveal all aspects of the representation nature. - As mentioned before, the method's modification is mainly masks and the MAE objective. Tab. 4-5 and Fig. 6 show that denoising prediction is not the essential part of the designs since only a marginal improvement is achieved. Besides, since contrastive learning and MAE baselines are critical, the results should be included in Tab. 5 for a more comprehensive comparison. From the result of AN, I would say that the MAE performance is very low, which is confusing and should be explained. - The main results are based on one contrastive baseline, SimCLR. I wonder about the performance of CAN applied to other methods like VICReg [Bardes, Ponce & LeCun, 2022]. - What about extending to multimodal contrastive learning? For example, CLIP? - Why are ImageNet-equivalent epochs a fair metric for the training efficiency comparison? Why not directly report the training GPU hours? It is a bit strange to me. - Can the authors provide more insights into the method, like some theoretical evidence? Why does denoising prediction help MAE, since MAE itself can be viewed as denoising autoencoding (DAE)?
- Missing citations and comparisons besides the works mentioned before. This paper focuses on hybrid SSL; however, no comparisons to the prior arts discussed in the related work have been included in the main results. Besides, some recent work on hybrid SSL is not compared or discussed [Jing, Zhu & LeCun, 2022; Qi et al., 2023].
- Minor: All appendix references are ???. The writing should be improved. For example, the experimental discussion should include more details and analysis, which may help strengthen the insights of this work.

[Li et al., 2023] Scaling Language-Image Pre-training via Masking. In CVPR.
[Yang et al., 2023] Attentive Mask CLIP. arXiv preprint.
[Bardes, Ponce & LeCun, 2022] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning. In ICLR.
[Jing, Zhu & LeCun, 2022] Masked Siamese ConvNets. arXiv preprint.
[Qi et al., 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining. In ICML.

Requested Changes: The requested changes are mainly described in the strengths and weaknesses part. Besides the revisions and clarifications requested above, there are some experiments that should be conducted to make the work more solid; I would suggest the following:
- Object detection experiments on COCO and semantic segmentation results on ADE20K.
- Possible experiments extending CAN to other (multimodal) contrastive learning methods.

Broader Impact Concerns: The broader impact is not discussed, but I do not have any critical concerns regarding this. ================================================== Metareview: Recommendation: Reject Comment: The reviewers acknowledged that the paper tackles an important topic. They appreciated the improved downstream performance when SimCLR and MAE are combined as the pretraining objective. However, there were two major concerns: 1) The main technical idea of combining SimCLR and MAE has been explored extensively in the community.
I am not using this as the reason for rejection, though: this criticism has already been acknowledged by the authors in the paper, and they emphasized that the main contribution of the paper is not to propose this as a new technique, but rather to show that it improves efficiency. 2) There is insufficient evidence to fully support the claimed efficiency gains. As noted by `htve`, Figure 1 demonstrates that the proposed approach is less efficient than MAEs and more efficient than SimCLR, but there is no other experiment investigating the efficiency of the approach. Since the focus of this paper is on efficiency, extensive experimental validation is necessary. Showing how the combined SimCLR & MAE objective scales with various factors (such as image resolution, number of tokens, batch size, dataset size, different compute budgets during both the pretraining and finetuning stages, and various downstream tasks, including image tasks such as classification, segmentation, and generation, and video tasks such as classification, language grounding, and tracking) could greatly enhance the quality of the submission. Given these limitations, we recommend rejection at this time. ==================================================
# Probabilistic Matching Of Real And Generated Data Statistics In Generative Adversarial Networks Anonymous authors Paper under double-blind review ## Abstract Generative adversarial networks constitute a powerful approach to generative modeling. While generated samples often are indistinguishable from real data, there is no guarantee that they will follow the true data distribution. For scientific applications in particular, it is essential that the true distribution is well captured by the generated distribution. In this work, we propose a method to ensure that the distributions of certain generated data statistics coincide with the respective distributions of the real data. In order to achieve this, we add a new loss term to the generator loss function, which quantifies the difference between these distributions via suitable f-divergences. Kernel density estimation is employed to obtain representations of the true distributions, and to estimate the corresponding generated distributions from minibatch values at each iteration. When compared to other methods, our approach has the advantage that the complete shapes of the distributions are taken into account. We evaluate the method on a synthetic dataset and a real-world dataset and demonstrate improved performance of our approach. ## 1 Introduction Generative adversarial networks (GANs) (Goodfellow et al., 2014) comprise a generator and a discriminator network trained adversarially until the generator manages to produce samples realistic enough to fool the discriminator. Since their conception, GANs have become a popular tool for generative modeling (Hong et al., 2019; Gui et al., 2021). The GAN framework is generally applicable and it is probably best known for its successes in image generation (Reed et al., 2016; Mathieu et al., 2016; Isola et al., 2017; Ledig et al., 2017). Although GANs have proven powerful, challenges such as mode collapse and non-convergence remain (Saxena and Cao, 2021). 
It is often the case that the generated samples, while realistic, stem from only a subspace of the true data distribution, or do not accurately reflect the relative frequencies with which they occur. For scientific applications in particular, such as in cosmology (Rodriguez et al., 2018; Villaescusa-Navarro et al., 2021) or high-energy physics (Paganini et al., 2018; Alanazi et al., 2021), where GANs may serve as differentiable surrogate models for expensive but highly accurate numerical simulations, having a good match between the distributions is essential (Kansal et al., 2023). It is this latter aspect that we tackle in this work, by matching properties of the generated distribution with those of the real data distribution. In particular, we consider statistics of the dataset such as the power spectrum components, and match their distributions. We incorporate these requirements in the form of probabilistic constraints since it is not properties of individual samples that are enforced, but collective characteristics of the dataset. The approach is chiefly aimed at applications in science, where suitable statistics to be matched can be chosen through domain knowledge. The only requirement on the statistics is that they need to be differentiable. The main ingredients of our approach are the following: we approximate both the distributions of the real data and the generated data statistics efficiently via kernel density estimation (KDE) (Silverman, 1986). In each iteration, the mismatch between true and generated distributions is then calculated through suitable f-divergences and added as an additional term to the generator loss. That way, we end up with a constrained generated distribution. Using f-divergences, as opposed to e.g. low-order moments of the distributions, has the advantage that the full shapes of the distributions are taken into account. In the following, we refer to our method as probabilistically constrained GAN (pcGAN).
## 2 Related Work The field of physics-informed machine learning, where prior knowledge is introduced into the ML model, has been an active area of research in recent years (Karniadakis et al., 2021; Cuomo et al., 2022). In the context of GANs, two main approaches for including prior knowledge in the model exist. In the first approach, the constrained values can be fed as additional inputs into the discriminator, such that it can explicitly use constraint fulfillment as a means to distinguish between real and generated data. In Stinis et al. (2019), GANs are employed for interpolation and extrapolation of trajectories following known governing equations. The generated trajectories are constrained to fulfill these equations by passing the constraint residuals as additional inputs to the discriminator; in order to prevent the discriminator from becoming too strong, some noise is added to the residuals of the real data, which might otherwise be very close to zero. When extrapolating, the GAN is applied iteratively from some initial condition; in order to train stably, it learns to predict the correct trajectory from slightly incorrect positions of the previous step. In Yang et al. (2019), a physically-informed GAN (PI-GAN) is developed to model groundwater flow. They make use of the same basic idea as physics-informed neural networks (Raissi et al., 2019) and employ automatic differentiation in order to obtain a partial differential equation (PDE) residual on the GAN output, which is in turn fed into the discriminator. By evaluating the GAN prediction at many different points and comparing to an equivalent ensemble of true values of the corresponding physical field, the GAN is constrained to adhere to a stochastic PDE. In the second approach, prior knowledge may be taken into account via additional loss terms in either discriminator or generator loss: in Khattak et al. 
(2018; 2019), GANs are employed to simulate detector signals for high-energy physics particle showers. Here, physical constraints such as the particle energy are taken into account via additional generator loss terms. In Yang et al. (2021), the incorporation of imprecise deterministic constraints into the GAN is investigated; e.g., the case where the GAN output is supposed to follow a PDE, but where the PDE parameters are not known accurately, could be formulated as an imprecise constraint. In a first step, deterministic constraints can be included by adding the constraint residuals as an additional loss term to the generator loss; they argue that it is better to add such terms to the generator since this strengthens the weaker party in the adversarial game, instead of giving an even larger advantage to the discriminator. In order to make the constraint imprecise, they do not require that the residuals go to zero, but instead only include residuals above a certain threshold value ϵ² in the loss. The work closest in aim to ours is probably that by Wu et al. (2020), where a statistical constrained GAN is introduced. They add an additional term to the generator loss function in order to constrain the covariance structure of the generated data to that of the true data. This additional term is a measure of similarity between the covariances, and they concluded that the Frobenius norm was the best choice for this purpose. They use their method to obtain better solutions for PDE-governed systems. Similar to Wu et al. (2020), our method also imposes probabilistic constraints via an additional term to the generator loss. However, there are significant differences: firstly, our method does not consider the covariance structure of the dataset in particular, but instead allows constraining arbitrary statistics of the data.
Secondly, our method uses f-divergences to match the distributions of true and generated data statistics explicitly and takes the complete shapes of the distributions into account, instead of only the second-order moments.

## 3 Background

The basic idea of generative adversarial networks (GANs) (Goodfellow et al., 2014) is to train a generator to generate samples of a given distribution and a discriminator (or critic) to distinguish between real and generated data.

![2_image_0.png](2_image_0.png)

Figure 1: The various representations involved in matching the statistic zs are depicted. The histogram in the background shows the true data distribution. **Left:** Representation of the true data distribution. **Middle left:** Representation of the generated data distribution with batch size 64 for different choices of σ in the kernel. **Middle right:** Representation of the generated data distribution for various batch sizes with optimal choice for σ (as determined via Algorithm 3 in the appendix). **Right:** Taking the recent minibatch history into account (here with ϵ = 0.9) can smoothen out fluctuations and lead to a more accurate representation. In this figure, a perfectly trained generator has been assumed, i.e. the minibatches have been sampled from real data.

During the training, both networks are pitted against each other in a minimax game with value function

$$\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim p_{data}(x)}\left[\log D(x)\right]+\mathbb{E}_{z\sim p_{z}(z)}\left[\log(1-D(G(z)))\right].\tag{1}$$

Here, D denotes the discriminator, G the generator, x samples drawn from the real data and z randomly generated latent space vectors serving as input to the generator; pdata and pz denote the real data distribution and the latent vector distribution, respectively.
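A minibatch estimate of the value function (1) can be sketched numerically; the sigmoid discriminator and affine generator below are illustrative stand-ins for neural networks, not the architectures used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    """Toy sigmoid discriminator; a stand-in for a neural network."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def generator(z, a):
    """Toy affine generator; a stand-in for a neural network."""
    return z + a

w = np.zeros(2)          # an untrained discriminator: D(x) = 0.5 everywhere
a = np.ones(2)
x_real = rng.normal(size=(128, 2))
z = rng.normal(size=(128, 2))

# Minibatch estimate of V(D, G) from Eq. (1).
v = (np.mean(np.log(discriminator(x_real, w)))
     + np.mean(np.log(1.0 - discriminator(generator(z, a), w))))
```

With the untrained discriminator outputting 0.5 everywhere, both expectation terms equal log(0.5), matching the well-known value of (1) at that point.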
Discriminator and generator are then trained alternatingly (with m ≥ 1 discriminator updates between each generator update); in (Goodfellow et al., 2014), it is shown that a stable equilibrium to the minimax problem (1) exists and that the optimal solution lies in the generator producing samples from the true data distribution. The standard GAN can be very difficult to train and often suffers from mode collapse. In Arjovsky et al. (2017), the Wasserstein GAN (WGAN) was introduced, where they suggest the earth-mover (EM) distance as a new loss for the GAN. They show that the discriminator and generator losses can then be expressed as

$$\mathcal{L}_{D}=D(x_{\mathrm{gen}})-D(x_{\mathrm{true}}),\tag{2a}$$

$$\mathcal{L}_{G}=-D(x_{\mathrm{gen}}),\tag{2b}$$

under the condition that the discriminator is Lipschitz continuous. Rather crudely, this is enforced by clipping the weights of the discriminator. In the end, the terms in (2) are approximated as expectations over minibatches. With this loss function, the discriminator can be interpreted as a critic that assigns scores to both true and generated samples. These scores are not constrained to any specific range and can therefore give meaningful feedback to the generator also when the discriminator is outperforming. Advantages of the WGAN include improved learning stability as well as meaningful loss curves (Gui et al., 2021). In this work, we also consider two other common variants of the GAN: firstly, the WGAN with gradient penalty (WGAN-GP) (Gulrajani et al., 2017), where the aforementioned weight clipping is avoided by instead imposing a penalty on the discriminator that is supposed to enforce Lipschitz continuity. Secondly, the spectrally normalized GAN (SNGAN) (Miyato et al., 2018), where Lipschitz continuity is ensured by constraining the spectral norm of each layer of the discriminator explicitly.
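A minimal numerical sketch of the WGAN losses (2a)-(2b) with weight clipping, using a toy linear critic in place of a network (the clipping threshold c = 0.01 is an assumed value):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic D(x) = w @ x + b; an illustrative stand-in for a network.
w = rng.normal(size=4)
b = 0.0
c = 0.01  # assumed clipping threshold

def critic(x):
    return x @ w + b

x_true = rng.normal(loc=1.0, size=(64, 4))   # minibatch of real samples
x_gen = rng.normal(loc=-1.0, size=(64, 4))   # minibatch of generated samples

# Critic and generator losses, Eqs. (2a) and (2b), as minibatch expectations.
loss_D = np.mean(critic(x_gen)) - np.mean(critic(x_true))
loss_G = -np.mean(critic(x_gen))

# Weight clipping to (crudely) enforce Lipschitz continuity.
w = np.clip(w, -c, c)
```

In a real training loop, the clipping would be applied to all critic parameters after every critic update, as in the original WGAN.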
## 4 Method

The aim of our method is to consider the distributions of Ns differentiable statistics z of the true dataset, such as e.g. components of the power spectrum (compare Appendix C.1), and to ensure that the same statistics, when extracted from the generated data, are distributed equally. In order to match true (ptrue) and generated (pgen) distributions, we modify the generator loss (2b) as follows:

$${\mathcal{L}}_{G}^{c}={\mathcal{L}}_{G}+\lambda\sum_{s=1}^{N_{s}}\lambda_{s}\,h(p_{\mathrm{true}}(z_{s}),p_{\mathrm{gen}}(z_{s})).\tag{3}$$

The function h is an f-divergence that quantifies the mismatch between ptrue and pgen, λ is a global weighting factor for the constraints, and the λs, for which $\sum_{s}\lambda_{s}=1$, allow weighting the constraints individually. Three important choices remain to be made: how to choose the function h, how to obtain suitable functional representations for ptrue and pgen, and how to adequately weight the different loss terms.

## 4.1 Quantifying The Mismatch

Let p, q be arbitrary probability density functions (PDFs). For f-divergences h, it holds that h(p, q) ≥ 0, with equality if and only if p = q. These properties justify the use of f-divergences for the function h in (3). A major advantage of using f-divergences, as opposed to e.g. the Wasserstein distance, is that they are efficient to calculate. The Kullback-Leibler (KL) divergence constitutes a straightforward choice for h and is defined as

$$h(p,q)=\operatorname{KL}(p||q)=\int_{-\infty}^{\infty}p(x)\log{\frac{p(x)}{q(x)}}\,dx.\tag{4}$$

The KL divergence is asymmetric and we consider the forward KL, also known as zero-avoiding, in order to ensure a complete overlap of areas with non-zero probability of the distributions; in case of the reverse, or zero-forcing, KL, the loss term would typically tend to match q to one of the peaks of p and hence fail to match the distributions in a way suitable for our purposes.
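A sketch of how the forward KL divergence (4) can be evaluated numerically on a grid; the small eps guard against division by zero is our own implementation choice, not part of the definition:

```python
import numpy as np

def kl_divergence(p, q, dx, eps=1e-12):
    """Forward KL(p || q), Eq. (4), on an equally spaced grid with spacing dx.

    The small eps guards against divisions by zero, the kind of numerical
    issue with density ratios mentioned in the text.
    """
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

# Two unit-variance Gaussian densities with means 0 and 1 on a grid.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)            # N(0, 1)
q = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)  # N(1, 1)
```

For these two Gaussians, the closed-form forward KL is (1 - 0)^2 / 2 = 0.5, which the grid approximation reproduces closely.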
The Jeffreys divergence (JD), which can be thought of as a symmetrized version of the KL divergence, as well as the total variation (TV) distance constitute further options:

$$h(p,q)=\mathrm{J}(p||q)=\frac{1}{2}\left(\mathrm{KL}(p||q)+\mathrm{KL}(q||p)\right),\tag{5}$$

$$h(p,q)=V(p-q)=\int_{-\infty}^{\infty}|p(x)-q(x)|\,dx.\tag{6}$$

An advantage of the latter choice is that no divisions by zero can occur, which may cause problems with the other two options. As an alternative to using f-divergences, we also discuss the maximum mean discrepancy (MMD) as a possible loss function in Appendix B.2. We show that there are drawbacks to using the MMD loss and that the method performs better when using f-divergences.

## 4.2 Obtaining Representations

In order to evaluate the loss terms in (3), means of extracting representations for both the true and generated PDFs are required. We denote these representations as p˜true and p˜gen. Note that p˜true will need to be determined only once, in advance of the GAN training, since it remains constant. In contrast to the true distribution, the generated distribution changes during GAN training, and hence p˜gen also needs to be determined anew after each generator update. Kernel density estimation (KDE) (Silverman, 1986) has proven effective for obtaining these representations. For the true distributions, we then get

$$\tilde{p}_{\mathrm{true}}(z_{s})=\frac{1}{N}\sum_{j=1}^{N}\frac{1}{\bar{\sigma}_{s}}K\left(\frac{z_{s}-z_{sj}}{\bar{\sigma}_{s}}\right),\tag{7}$$

where N denotes the number of datapoints, K the kernel function and σ̄s the bandwidth of the kernel. The choice $\bar{\sigma}_{s}=\frac{1}{200}(\max_{j}(z_{sj})-\min_{j}(z_{sj}))$ has proven to give accurate representations for the true distributions (compare e.g. the leftmost plots in Figs. 1, 2, and 5), as we typically have N ≫ 1000 and can afford to choose such a small value. Throughout the paper, we use Gaussian kernels with $K(x)=(2\pi)^{-1/2}e^{-x^{2}/2}$.
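A minimal sketch of the KDE representation (7); the bandwidth follows the one-two-hundredth-of-the-range rule from the text, and the standard-normal samples are a stand-in for actual statistic values z_sj:

```python
import numpy as np

def kde(grid, samples, bandwidth):
    """Gaussian kernel density estimate of Eq. (7) on a grid of z values."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # Gaussian kernel K
    return k.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
z = rng.normal(size=5000)  # stand-in for the statistic values z_sj of the dataset

# Bandwidth rule from the text: one two-hundredth of the sample range.
bw = (z.max() - z.min()) / 200
grid = np.linspace(z.min() - 1.0, z.max() + 1.0, 1000)
p_true_tilde = kde(grid, z, bw)
```

The resulting density is nonnegative and integrates to one (up to quadrature error), as a PDF representation should.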
We evaluate different kernel choices in Appendix B.1. We also approximate the generated distributions at each iteration using KDE, using the constraint values as obtained from the current minibatch samples. That is, we obtain the approximate generated PDFs as

$$\tilde{p}_{\rm gen}(z_{s})=\frac{1}{n_{\rm batch}}\sum_{j=1}^{n_{\rm batch}}\frac{1}{\sigma_{s}}K\left(\frac{z_{s}-z_{sj}}{\sigma_{s}}\right),\tag{8}$$

where nbatch denotes the batch size. For p˜gen(zs), choosing σs adequately is crucial and requires more thought than in the case of p˜true. This is due to the fact that there are much fewer samples available in the minibatches. The bandwidths σs are chosen separately for each constraint zs, under the criterion that p˜gen as obtained from minibatches drawn from the true data should have a mismatch as small as possible with p˜true. Since the optimal values of σs would be expected to depend both on the range of values of zs in the true dataset and the batch size, we parameterized them as

$$\sigma_{s}(n_{\mathrm{batch}})=\mathrm{std}(z_{s})/f_{\sigma}^{s}(n_{\mathrm{batch}}).\tag{9}$$

A detailed description of how to determine the optimal values for $f_{\sigma}^{s}$ is given in Appendix A.

**Algorithm 1** High-level algorithm

    Step 1: obtain p˜true via (7)
    Step 2: determine the optimal values f_σ^s in (9) (see Algorithm 3 in the appendix)
    Step 3: train the pcGAN (see Algorithm 2)

**Algorithm 2** Training the probabilistically constrained GAN (pcGAN)

    Input: untrained D and G; p˜true; data {xtrue}; h; λ
    Output: trained D and G
    for i = 1 to Nit do
        for k = 1 to m − 1 do
            sample xtrue; generate xgen
            L_D = mean(D(xgen) − D(xtrue))
            update D; clip weights
        end for
        generate xgen
        L_G^0 = −mean(D(xgen))
        for s = 1 to Ns do
            calculate statistics {zsj}, j = 1, ..., nbatch, from xgen
            determine p˜^i_gen(zs) according to (10)
            l_s = h(p˜true(zs), p˜^i_gen(zs))
        end for
        η = l − min(l) + 0.1 (max(l) − min(l))
        λ_s = η_s / Σ_{s'} η_{s'}
        L_G^c = λ Σ_s λ_s l_s
        L_G = L_G^0 + L_G^c
        update G
    end for
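The inner constraint loop of Algorithm 2 can be sketched as follows; the true densities, minibatch values, and the factors f_σ^s are hypothetical stand-ins (in the paper, f_σ^s is tuned via Algorithm 3 in the appendix), and the base generator loss is a placeholder:

```python
import numpy as np

def kde(grid, samples, bw):
    """Gaussian KDE on a grid, Eqs. (7)/(8)."""
    k = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / bw) ** 2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / bw

def kl(p, q, dx, eps=1e-12):
    """Forward KL divergence on a grid, Eq. (4); eps guards division by zero."""
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

rng = np.random.default_rng(0)
grid = np.linspace(-10, 10, 1001)
dx = grid[1] - grid[0]

# Hypothetical setup: two constrained statistics with assumed true densities,
# a minibatch of generated statistic values, and assumed tuned factors f_sigma^s.
p_true = [kde(grid, rng.normal(0.0, 1.0, 5000), 0.05),
          kde(grid, rng.normal(1.0, 2.0, 5000), 0.1)]
z_batch = [rng.normal(0.3, 1.0, 256), rng.normal(0.5, 2.0, 256)]
f_sigma = [8.0, 8.0]
lam = 500.0    # global constraint weight
loss_g0 = 0.0  # placeholder for the unconstrained loss -mean(D(x_gen))

# Inner loop of Algorithm 2: bandwidths via Eq. (9), mismatches l_s, weights.
l = np.array([
    kl(p_true[s], kde(grid, z_batch[s], np.std(z_batch[s]) / f_sigma[s]), dx)
    for s in range(2)
])
eta = l - l.min() + 0.1 * (l.max() - l.min())
lam_s = eta / eta.sum()
loss_g = loss_g0 + lam * np.sum(lam_s * l)
```

The weights λ_s sum to one by construction, and the constrained loss adds a nonnegative penalty on top of the base generator loss.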
In order to improve the accuracy of p˜gen (assuming that pgen does not change drastically between subsequent iterations), we can include information from the preceding minibatches via an exponentially decaying historical average:

$$\tilde{p}_{\mathrm{gen}}^{i}(z_{s})=(1-\epsilon)\,\tilde{p}_{\mathrm{gen}}(z_{s})+\epsilon\,\tilde{p}_{\mathrm{gen}}^{i-1}(z_{s}),\tag{10}$$

where the parameter ϵ defines the strength of the decay and i denotes the current iteration. In this way, the potentially strong fluctuations between minibatches are smoothened out, allowing for a more accurate representation of pgen. With representation (10) for the generated distribution and (7) for the true distribution, the one-dimensional integrals required for evaluating h(p˜true, p˜gen) in (3) can be carried out numerically. In Fig. 1, the various representations are illustrated.

## 4.3 Weighting The Constraints

The following heuristic scheme has proven effective in weighting the constraints according to how big their mismatches are relative to each other: first, we calculate η = l − min(l) + 0.1 (max(l) − min(l)), where l is a vector with components $l_{s}=h(\tilde{p}_{\mathrm{true}}(z_{s}),\tilde{p}_{\mathrm{gen}}^{i}(z_{s}))$. Then we assign $\lambda_{s}=\eta_{s}/\sum_{s^{\prime}}\eta_{s^{\prime}}$. The first term in the definition of η quantifies the constraint fulfillment relative to the best-fulfilled one and the second term prevents already well-matched constraints from ceasing to be included. The global weighting factor λ needs to be tuned separately. A high-level overview of the method is given in Algorithm 1 and the pcGAN training is detailed in Algorithm 2. Note that, while our training algorithm is based on the WGAN, the modified generator loss (3) is more general and can be used for other types of GANs as well.

## 5 Results

In this section, we present the results obtained with our model. We introduce a set of evaluation metrics and consider a synthetic example and a real-world dataset from physics.
The evaluation metrics are chosen to cover different aspects of the generated distribution and evaluate the GAN performance both in the sample space and in a lower-dimensional space of high-level features, as is common practice (Kansal et al., 2023). We compare the pcGAN to the unconstrained WGAN, WGAN-GP, SNGAN, and the statistical constrained GAN from Wu et al. (2020). We investigate the impact that training parameters have on the model performance and we combine the probabilistic constraint with the different GAN variants to evaluate its potential for improving their performance. More results are given in the appendices. In Appendix B.1, we investigate the impact that the choice of kernel for the KDE has on the model performance. In Appendix B.2, we evaluate how well the MMD loss would perform instead of the f-divergences for matching the constraints. A discussion on the training time required for the different models is given in Appendix B.4. Additional information on the datasets, the training parameters, and the high-level features is given in Appendix C.

## 5.1 Evaluation Metrics

To compare the different models, we consider four evaluation metrics: The Fréchet distance in the sample space, as an alternative to the widely used Fréchet Inception distance (Heusel et al., 2017). It quantifies the agreement of the first and second-order moments of the real and generated distribution and is calculated via

$$d_{F}^{2}=\|\mu-\mu^{\prime}\|^{2}+\mathrm{Tr}\left(\Sigma+\Sigma^{\prime}-2\sqrt{\Sigma\Sigma^{\prime}}\right),\tag{11}$$

where µ, Σ correspond to the true distribution and µ′, Σ′ to the generated distribution. The F1-score in the space of high-level features, which is defined as the harmonic mean between precision (P) and recall (R):

$$F_{1}=2\,\frac{PR}{P+R}.\tag{12}$$
In the context of generative modeling, the precision is the fraction of generated samples that lie in the real data manifold and the recall gives the fraction of real data samples that lie in the generated data manifold (Sajjadi et al., 2018). They are calculated as suggested in Kynkäänniemi et al. (2019), with choice k = 10 for the k-nearest neighbor. The agreement of the distributions of the constrained statistics, by calculating the average of the total variations of the differences between their histograms:

$$\bar{V}_{c}=\frac{1}{N_{s}}\sum_{s=1}^{N_{s}}V\left(p_{\mathrm{true}}^{\mathrm{hist}}(z_{s})-p_{\mathrm{gen}}^{\mathrm{hist}}(z_{s})\right),\tag{13}$$

where $p_{\mathrm{true}}^{\mathrm{hist}}$ and $p_{\mathrm{gen}}^{\mathrm{hist}}$ are given by the outline of the histograms in e.g. Fig. 2. Here, the histograms have been chosen instead of KDE, in order to use a quantity that is not directly constrained (and hence not in danger of being overfitted to).

¹ The code for the project will be made available on GitHub.

![6_image_0.png](6_image_0.png)

Figure 2: **(Synthetic example)** The distributions of three different power spectrum components ps as obtained by the different models are depicted, where the orange lines show the true distribution as obtained via KDE (7). From left to right, the histograms correspond to the real data, the pcGAN, the method of Wu et al. (2020), WGAN, WGAN-GP, and SNGAN. For the histograms, 20 000 generated samples have been considered (or the full dataset, in case of the real distribution). Parameters for the pcGAN: bs = 256, λ = 500, ϵ = 0.9, h = KL.

The agreement between the distributions of the Nf high-level features (here denoted as x).
We proceed in the same way as for the constrained statistics:

$$\bar{V}_{f}=\frac{1}{N_{f}}\sum_{f=1}^{N_{f}}V\left(p_{\mathrm{true}}^{\mathrm{hist}}(x_{f})-p_{\mathrm{gen}}^{\mathrm{hist}}(x_{f})\right).\tag{14}$$

To get an idea of the complexity of each metric, it helps to consider them in the following way: the F1 score takes the full shape of the data distribution into account, dF the first two moments, and V̄c and V̄f the marginal distributions of the constrained statistics and the chosen set of high-level features, respectively.

## 5.2 Synthetic Example

For our first experiment, we consider a superposition of sine waves. Each sample consists of two sine waves, $x=\frac{1}{2}\sum_{i=1}^{2}\sin(\omega_{i}t)$, with angular frequencies sampled randomly from $\omega_{i}\sim|\mathcal{N}(1,1)|$, and we generate 200 equally-spaced measurements in the interval t ∈ [0, 20]. In total, we create 100 000 samples of size 200 to serve as training data. We perform the Fourier transform for real-valued inputs for each time series in the dataset and we use the square roots of the power spectrum components (i.e., the absolute values of the Fourier coefficients) as the statistics to constrain when training the GAN; that is, we have 101 separate constraints (compare Appendix C.1). In Figure 2, results for the different GAN variants are depicted. The data generated by the pcGAN matches the true distributions very well. The method of Wu et al. (2020) comes in second, managing to cover the correct range of constraint values, but failing to adhere to the precise shapes of the PDFs. The unconstrained WGAN, WGAN-GP, and SNGAN are distinctly worse and tend to assign too much weight to the highest peak of the distribution. In Table 1, we evaluate the performance of the different models in terms of the evaluation metrics defined in Section 5.1.

|              | WGAN       | WGAN + pc (pcGAN) | Wu et al. | Wu et al. + pc | WGAN-GP    | WGAN-GP + pc  | SNGAN      | SNGAN + pc |
|--------------|------------|-------------------|-----------|----------------|------------|---------------|------------|------------|
| dF²/100 (↓)  | 48.94±6.65 | 12.32±2.99        | 3.81±2.13 | **2.49±0.75**  | 49.38±5.26 | 3.50±0.76     | 49.67±5.76 | 12.67±3.51 |
| F1 (↑)       | 0.15±0.06  | 0.18±0.05         | 0.16±0.03 | **0.20±0.06**  | 0.16±0.04  | 0.19±0.05     | 0.16±0.09  | 0.18±0.08  |
| V̄c (↓)      | 1.11±0.10  | 0.20±0.02         | 0.47±0.22 | **0.17±0.03**  | 1.07±0.09  | 0.20±0.01     | 1.40±0.12  | 0.34±0.04  |
| V̄f (↓)      | 1.70±0.10  | 0.80±0.05         | 0.95±0.12 | 0.75±0.05      | 1.61±0.15  | **0.70±0.05** | 1.63±0.11  | 0.88±0.08  |

Table 1: **(Synthetic example)** The different GAN variants and their combinations with the probabilistic constraint are evaluated via different performance metrics, defined in Section 5.1: the Fréchet distance dF, the F1 score, the agreement of the constraint distributions V̄c, and the agreement of the distributions of a selection of high-level features V̄f. The arrows indicate whether high or low values are better. Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are given. Bold font highlights best performance. Parameters: bs = 256, ϵ = 0.9, h = KL, and λ = [500, 500, 500, 2500], respectively, from left to right for the constrained variants.

![7_image_0.png](7_image_0.png)

Figure 3: **(Synthetic example)** Different values of the weighting coefficient λ are considered (with bs = 256, ϵ = 0.9, h = KL). Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are depicted.
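As a sketch of how the metric dF reported in Table 1 can be computed from estimated moments, following Eq. (11) and using SciPy's matrix square root (in practice, µ, Σ and µ′, Σ′ would be estimated from real and generated samples; the inputs below are illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance_sq(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between two Gaussians, Eq. (11)."""
    covmean = np.real(sqrtm(sigma1 @ sigma2))  # discard tiny imaginary parts
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2 * covmean))

mu, sigma = np.zeros(3), np.eye(3)
d_same = frechet_distance_sq(mu, sigma, mu, sigma)              # identical moments
d_diff = frechet_distance_sq(mu, sigma, mu + 1.0, 4 * np.eye(3))
```

For the second call, the closed form gives ||Δµ||² = 3 plus Tr(I + 4I − 2·2I) = 3, i.e., a total of 6.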
The results for the constraint distributions are well reflected here and the unconstrained versions of the GAN tend to perform worse than both the pcGAN and the method of Wu et al. (2020) on metrics other than the F1 score. When considering the F1 score, pcGAN is ahead of the unconstrained models. Between the pcGAN and Wu et al. (2020), pcGAN outperforms Wu et al. (2020) in all metrics apart from dF, where the method of Wu et al. (2020) is slightly better. This makes sense since dF only evaluates agreement of the first and second-order moments of the distribution; the latter are precisely what the method of Wu et al. (2020) constrains. In addition to the pcGAN, results for combinations of the other GAN variants with the probabilistic constraint are also given in the table. It is apparent that adding the constraint also leads to improved performance of the other models. In Appendix B.3, a plot visualizing the constraint fulfillment equivalent to Fig. 2 is given for the different constrained GANs. In Fig. 3, the impact of the global weighting factor λ is investigated. A clear improvement with increasing λ is visible for most of the metrics, up to λ ≈ 100. Overall, the value λ = 500 appears to be a good choice. The fact that dF and F1 also improve indicates that the constraints help to produce a better diversity of samples. In Fig. 4, we consider the impact that the batch size and the historical averaging have on the results. Both dF and constraint fulfillment improve with increasing batch size, although we observe diminishing returns for batch sizes larger than 256. The inclusion of historical averaging improves dF, with higher values of ϵ yielding larger improvements, whereas constraint fulfillment V̄c is only weakly affected by the choice of ϵ, and the metric V̄f is negatively affected. F1 seems to be largely unaffected by the batch size and somewhat negatively affected by large values of ϵ. The larger the batch size, the smaller the impact of historical averaging.

![8_image_0.png](8_image_0.png)

Figure 4: **(Synthetic example)** Different batch sizes with and without historical averaging are considered (with λ = 500, h = KL). The different colors indicate which points belong to the same batch size. Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are depicted.

In Table 2, the different options for the f-divergence h used for matching the statistics are evaluated. The results indicate that the Jeffreys divergence performs slightly better than the KL divergence, and the total variation is notably worse than the other two options. Furthermore, we observe that increasing the factor ϵ for the historical averaging tends to improve the results for the total variation and the KL divergence, but slightly decreases the performance in case of the Jeffreys divergence.

|              | h=TV, ϵ=0.5 | h=TV, ϵ=0.9 | h=KL, ϵ=0.5 | h=KL, ϵ=0.9   | h=JD, ϵ=0.5   | h=JD, ϵ=0.9 |
|--------------|-------------|-------------|-------------|---------------|---------------|-------------|
| dF²/100 (↓)  | 0.21±0.06   | 0.15±0.03   | 0.15±0.02   | **0.12±0.02** | 0.15±0.05     | 0.14±0.04   |
| F1 (↑)       | 0.15±0.04   | 0.13±0.03   | 0.17±0.05   | **0.18±0.04** | **0.18±0.07** | 0.16±0.06   |
| V̄c (↓)      | 0.29±0.05   | 0.25±0.06   | 0.19±0.02   | 0.19±0.02     | **0.14±0.01** | 0.16±0.02   |
| V̄f (↓)      | 0.81±0.07   | 0.80±0.08   | 0.76±0.05   | 0.79±0.06     | **0.75±0.07** | 0.86±0.08   |

Table 2: **(Synthetic example)** Different choices for the f-divergence h quantifying the mismatch between ptrue and pgen in (3) are considered (with bs = 256, λ = 500). Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are given. Bold font highlights best performance.

We conclude that the probabilistic constraint holds promise for improving the performance of many different GAN variants. When training the pcGAN, larger batch sizes are advantageous.
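The remaining divergence options compared in Table 2, Eqs. (5) and (6), can be sketched in the same discretized fashion (the eps guard in the KL helper is an implementation choice):

```python
import numpy as np

def kl(p, q, dx, eps=1e-12):
    """Forward KL divergence on a grid, Eq. (4)."""
    return np.sum(p * np.log((p + eps) / (q + eps))) * dx

def jeffreys(p, q, dx):
    """Jeffreys divergence, Eq. (5): the symmetrized KL divergence."""
    return 0.5 * (kl(p, q, dx) + kl(q, p, dx))

def total_variation(p, q, dx):
    """Total variation distance, Eq. (6): involves no divisions."""
    return np.sum(np.abs(p - q)) * dx

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)            # N(0, 1)
q = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)  # N(1, 1)
```

For this pair, the Jeffreys divergence equals the average of the two KL directions (0.5 each, hence 0.5), and the total variation stays within its theoretical bound of 2.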
For smaller batch sizes, the historical averaging can yield improvements. When choosing the f-divergence for matching the constraints, the KL divergence or the Jeffreys divergence should be selected rather than the total variation. The weighting parameter λ is essential to consider when tuning the pcGAN.

The architectures used for the discriminator and generator were inspired by the DCGAN architecture, and the Adam optimizer (Kingma and Ba, 2015) was used for optimization. A discussion on the runtime of the different models is given in Appendix B.4. A detailed description of the architecture, settings for the training procedure, and samples as obtained from the different models can be found in Appendix C.2.

## 5.3 IceCube-Gen2 Radio Signals

The IceCube neutrino observatory (Aartsen et al., 2017) and its planned successor IceCube-Gen2 (Aartsen et al., 2021) are located at the South Pole and make use of the huge ice masses present there in order to detect astrophysical high-energy neutrinos. Deep learning methodology has already been employed to extract information such as shower energy or neutrino direction from radio-detector signals (Glaser et al., 2023; Holmberg, 2022). Holmberg (2022) also investigated the use of GANs to simulate detector signals. We are going to consider the filtered Askaryan radio signals from Holmberg (2022), which were generated using the NuRadioMC code (Glaser et al., 2020) according to the ARZ algorithm (Alvarez-Muñiz et al., 2010). These signals take the form of 1D waveforms, and in our experiments we focus solely on the shape of these waves, not their absolute amplitudes; this is achieved by normalizing each signal to its maximum absolute value. We use the pcGAN to constrain the generated data on the distributions of the minimum and maximum values of the signals.

Figure 5: **(IceCube-Gen2)** The distributions of minimum and maximum values as obtained by different models are compared, where the orange lines show the true distribution as obtained via KDE (7). From left to right, the histograms correspond to the real data, the pcGAN, the method of Wu et al. (2020), WGAN, WGAN-GP, and SNGAN. For the histograms, 20 000 generated samples have been considered (or the full dataset, in case of the real distribution). Parameters for the pcGAN: bs = 256, λ = 2, ϵ = 0.9, h = KL.

|          | WGAN        | WGAN + pc (pcGAN) | Wu et al.  | Wu et al. + pc | WGAN-GP   | WGAN-GP + pc | SNGAN     | SNGAN + pc |
|----------|-------------|-------------------|------------|----------------|-----------|--------------|-----------|------------|
| d²F (↓)  | 25.34±12.85 | 16.71±9.52        | 15.82±8.15 | 14.21±6.20     | 5.36±6.09 | 6.88±4.80    | 9.02±7.07 | 7.09±3.17  |
| F1 (↑)   | 0.35±0.17   | 0.38±0.12         | 0.36±0.08  | 0.36±0.06      | 0.48±0.08 | 0.45±0.06    | 0.46±0.04 | 0.46±0.06  |
| V̄c (↓)  | 0.97±0.11   | 0.30±0.06         | 0.99±0.12  | 0.30±0.05      | 0.71±0.12 | 0.16±0.04    | 1.01±0.06 | 0.17±0.04  |
| V̄f (↓)  | 0.76±0.09   | 0.70±0.08         | 0.75±0.05  | 0.68±0.07      | 0.66±0.06 | 0.65±0.07    | 0.65±0.05 | 0.63±0.04  |

Table 3: **(IceCube-Gen2)** The different GAN variants and their combinations with the probabilistic constraint are evaluated via different performance metrics, defined in Section 5.1: the Fréchet distance dF, the F1 score, the agreement of the constraint distributions V̄c, and the agreement of the distributions of a selection of high-level features V̄f. The arrows indicate whether high or low values are better. Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are given. Bold font highlights best performance. Parameters: bs = 256, λ = 2, ϵ = 0.9, h = KL.

The results are depicted in Fig. 5. The pcGAN matches the characteristics of both the minimum and the maximum distribution well. In particular, it manages to match the spikes at -1 and 1 more accurately than any of the other models.
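The preprocessing and constrained statistics described above amount to the following minimal NumPy sketch; the function names are our own.

```python
import numpy as np

def normalize_signals(x):
    """Normalize each 1D waveform to its maximum absolute value, so that only
    the shape of the wave (not its absolute amplitude) is retained."""
    return x / np.abs(x).max(axis=-1, keepdims=True)

def minmax_statistics(x):
    """Per-sample minima and maxima, used as the constrained statistics for
    the IceCube-Gen2 experiment."""
    return x.min(axis=-1), x.max(axis=-1)
```

Since every normalized signal attains +1 or −1 at its absolute peak, the min/max distributions develop the spikes at −1 and 1 visible in Fig. 5.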
The distributions as obtained via the other models also exhibit the two peaks in each of the distributions but do not reproduce their precise shapes correctly. Out of the remaining models, WGAN-GP matches the distributions best, with only slightly less pronounced spikes at -1 and 1 than the pcGAN. A plot showing the constraint fulfillment for the different constrained GANs is given in Appendix B.3. In Table 3, the evaluation metrics are given for the different GAN variants together with their constrained versions. The constraints are matched well for all of the constrained models. In terms of the remaining metrics, adding the constraint yields improvements for WGAN, Wu et al., and SNGAN. For WGAN-GP, on the other hand, a slight decrease in performance can be observed. While WGAN-GP performs best on d²F and F1, WGAN-GP + pc is the best choice for overall performance when taking constraint fulfillment into account. The network architecture used for the GANs is based on that from Holmberg (2022). More details on the training procedure, as well as plots of generated samples, are given in Appendix C.3.

## 6 Conclusions And Future Work

We have presented the probabilistically constrained GAN (pcGAN), a method to incorporate probabilistic constraints into GANs. The method is expected to be particularly useful for scientific applications, where it is especially important for generated samples to represent the true distribution accurately and where suitable statistics to be matched can be identified through domain knowledge. For a given statistic z, this is achieved by adding the mismatch between the corresponding true and generated distribution, as quantified via a suitable f-divergence, to the generator loss. Kernel density estimation is employed to obtain representations for the distributions of the statistic. By adequately weighting the different loss terms, a large number of statistics can be matched simultaneously.
We have evaluated our method using two different datasets. Our experiments clearly demonstrate that the probabilistic constraint is effective at matching the chosen dataset statistics. In terms of the evaluation metrics, the pcGAN constitutes a significant improvement over the standard WGAN. Depending on the dataset under consideration, it can also outperform WGAN-GP, SNGAN, and the method of Wu et al. (2020). Combining the probabilistic constraint with GAN variants other than the WGAN also improves the respective models in most cases. For future work, it would be interesting to extend the method to consider the joint distribution of different statistics, in order to also include correlations between them in the constraint. Furthermore, it would be important to find a way to make the method compatible with conditional GANs in order to widen the range of possible applications. Finding automated ways for obtaining suitable statistics to match, e.g. by using features of classifier networks, could improve the approach and would allow for its application to situations where insufficient domain knowledge is available. In principle, the probabilistic constraint could also be added to other types of generative models. The main requirements would be that new samples are generated during each iteration of the training procedure and that a suitable spot for adding the loss term can be identified. Investigating the applicability of the approach to other generative models, such as autoencoders (Kingma and Welling, 2014) or denoising diffusion probabilistic models (Ho et al., 2020), therefore constitutes another promising avenue for future research.

## References

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, *Advances in Neural Information Processing Systems*, volume 27.
Curran Associates, Inc., 2014. Yongjun Hong, Uiwon Hwang, Jaeyoon Yoo, and Sungroh Yoon. How generative adversarial networks and their variants work: An overview. *ACM Computing Surveys (CSUR)*, 52(1):1–43, 2019. Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, and Jieping Ye. A review on generative adversarial networks: Algorithms, theory, and applications. *IEEE transactions on knowledge and data engineering*, 2021. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In *International conference on machine learning*, pages 1060–1069. PMLR, 2016. Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. 2016. 4th International Conference on Learning Representations, ICLR 2016. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pages 1125–1134, 2017. Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pages 4681–4690, 2017. Divya Saxena and Jiannong Cao. Generative adversarial networks (GANs) challenges, solutions, and future directions. *ACM Computing Surveys (CSUR)*, 54(3):1–42, 2021. Andres C Rodriguez, Tomasz Kacprzak, Aurelien Lucchi, Adam Amara, Raphaël Sgier, Janis Fluri, Thomas Hofmann, and Alexandre Réfrégier. Fast cosmic web simulations with generative adversarial networks. Computational Astrophysics and Cosmology, 5(1):1–11, 2018. 
Francisco Villaescusa-Navarro, Daniel Anglés-Alcázar, Shy Genel, David N Spergel, Rachel S Somerville, Romeel Dave, Annalisa Pillepich, Lars Hernquist, Dylan Nelson, Paul Torrey, et al. The CAMELS project: Cosmology and astrophysics with machine-learning simulations. *The Astrophysical Journal*, 915(1):71, 2021. Michela Paganini, Luke de Oliveira, and Benjamin Nachman. Accelerating science with generative adversarial networks: an application to 3d particle showers in multilayer calorimeters. *Physical review letters*, 120(4): 042003, 2018. Yasir Alanazi, Nobuo Sato, Tianbo Liu, Wally Melnitchouk, Pawel Ambrozewicz, Florian Hauenstein, Michelle P. Kuchera, Evan Pritchard, Michael Robertson, Ryan Strauss, Luisa Velasco, and Yaohang Li. Simulation of electron-proton scattering events by a feature-augmented and transformed generative adversarial network (FAT-GAN). In Zhi-Hua Zhou, editor, *Proceedings of the Thirtieth International Joint* Conference on Artificial Intelligence, IJCAI-21, pages 2126–2132, 2021. Raghav Kansal, Anni Li, Javier Duarte, Nadezda Chernyavskaya, Maurizio Pierini, Breno Orzari, and Thiago Tomei. Evaluating generative models in high energy physics. *Physical Review D*, 107(7):076017, 2023. Bernard W Silverman. *Density estimation for statistics and data analysis*, volume 26. CRC press, 1986. George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physicsinformed machine learning. *Nature Reviews Physics*, 3(6):422–440, 2021. Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and Francesco Piccialli. Scientific machine learning through physics–informed neural networks: where we are and what's next. *Journal of Scientific Computing*, 92(3):88, 2022. Panos Stinis, Tobias Hagge, Alexandre M Tartakovsky, and Enoch Yeung. Enforcing constraints for interpolation and extrapolation in generative adversarial networks. *Journal of Computational Physics*, 397:108844, 2019. 
Liu Yang, Sean Treichler, Thorsten Kurth, Keno Fischer, David Barajas-Solano, Josh Romero, Valentin Churavy, Alexandre Tartakovsky, Michael Houston, Mr Prabhat, et al. Highly-scalable, physics-informed GANs for learning solutions of stochastic PDEs. In 2019 IEEE/ACM Third Workshop on Deep Learning on Supercomputers (DLS), pages 1–11. IEEE, 2019. Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378:686–707, 2019. Gul Rukh Khattak, Sofia Vallecorsa, and Federico Carminati. Three dimensional energy parametrized generative adversarial networks for electromagnetic shower simulation. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 3913–3917, 2018. Gul Rukh Khattak, Sofia Vallecorsa, Federico Carminati, and Gul Muhammad Khan. Particle detector simulation using generative adversarial networks with domain related constraints. In *2019 18th IEEE* International Conference On Machine Learning And Applications (ICMLA), pages 28–33, 2019. Zeng Yang, Jin-Long Wu, and Heng Xiao. Enforcing imprecise constraints on generative adversarial networks for emulating physical systems. *Communications in Computational Physics*, 30(3):635–665, 2021. Jin-Long Wu, Karthik Kashinath, Adrian Albert, Dragos Chirila, Heng Xiao, et al. Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems. Journal of Computational Physics, 406:109209, 2020. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pages 214–223. PMLR, 2017. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein gans. *Advances in neural information processing systems*, 30, 2017. 
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. *arXiv preprint arXiv:1802.05957*, 2018. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in neural information processing systems, 30, 2017. Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. *Advances in neural information processing systems*, 31, 2018. Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. *Advances in Neural Information Processing Systems*, 32, 2019. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of International Conference on Learning Representations (ICLR), 2015. Mark G Aartsen, M Ackermann, J Adams, JA Aguilar, M Ahlers, M Ahrens, D Altmann, K Andeen, T Anderson, I Ansseau, et al. The IceCube neutrino observatory: instrumentation and online systems. Journal of Instrumentation, 12(03):P03012, 2017. Mark G Aartsen, R Abbasi, M Ackermann, J Adams, JA Aguilar, M Ahlers, M Ahrens, C Alispach, P Allison, NM Amin, et al. IceCube-Gen2: the window to the extreme universe. Journal of Physics G: Nuclear and Particle Physics, 48(6):060501, 2021. Christian Glaser, S McAleer, Sigfrid Stjärnholm, P Baldi, and SW Barwick. Deep-learning-based reconstruction of the neutrino direction and energy for in-ice radio detectors. *Astroparticle Physics*, 145:102781, 2023. Anton Holmberg. Fast simulations of radio neutrino detectors: Using generative adversarial networks and artificial neural networks, 2022. Christian Glaser, Daniel García-Fernández, Anna Nelles, Jaime Alvarez-Muñiz, Steven W Barwick, Dave Z Besson, Brian A Clark, Amy Connolly, Cosmin Deaconu, KD de Vries, et al. 
NuRadioMC: Simulating the radio emission of neutrinos from interaction to detector. *The European Physical Journal C*, 80:1–35, 2020. Jaime Alvarez-Muñiz, Andrés Romero-Wolf, and Enrique Zas. Čerenkov radio pulses from electromagnetic showers in the time domain. *Physical Review D*, 81(12):123009, 2010. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012. Yujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In International conference on machine learning, pages 1718–1727. PMLR, 2015. Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. *arXiv preprint arXiv:1505.03906*, 2015. Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. *Advances in neural information processing systems*, 30, 2017. Tommaso Dorigo, Andrea Giammanco, Pietro Vischia, Max Aehle, Mateusz Bawaj, Alexey Boldyrev, Pablo de Castro Manzano, Denis Derkach, Julien Donini, Auralee Edelen, et al. Toward the end-to-end optimization of particle physics instruments with differentiable programming. *Reviews in Physics*, 10: 100085, 2023. 
## A Algorithm To Determine f^s_σ

We propose a method to determine optimal values for f^s_σ that is based on the assumption that data sampled from the true distribution should on average have the best possible match between the true distribution and the KDE approximation obtained via the minibatches. The procedure is summarized in Algorithm 3. In order to determine the optimal value of f^s_σ for a given constraint zs in (9), we perform a grid search over possible values fσ. Introducing the standard deviation of zs into the definition of σ via c helps to narrow the range in which the optimal values f^s_σ lie. The grid is defined in the array a_fσ. For each value of fσ, we evaluate the mismatch between the true distribution p̃true(zs) (7) and the generated distribution p̃gen(zs) (8) via the f-divergence h. The minibatches are sampled from the true data since the aim is to obtain a mismatch as small as possible for true data. The obtained values for the mismatch are then averaged over Navg minibatches. Subsequently, the value fσ corresponding to the minimum mean value is determined; this value is the desired optimal value f^s_σ. Figures 8 and 9 illustrate the grid search.

Algorithm 3: Determining the optimal value of f^s_σ

    Input: true data {zs}; p̃true(zs), h, Navg
    Output: f^s_σ
    c = std(zs)
    a_fσ = logspace(−1, 2, 200)
    for iN ∈ [0, Navg) and i_fσ ∈ [0, len(a_fσ)) do
        sample minibatch {zs}
        σ = c · a_fσ[i_fσ]
        determine p̃gen(zs) according to (8) using σ
        H[i_fσ, iN] = h(p̃true(zs), p̃gen(zs))
    end for
    i_fσ = argmin(mean(H, dim = 1))
    f^s_σ = a_fσ[i_fσ]

## B Additional Results

In this appendix, we present additional experiments and results.
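As a concrete illustration of Algorithm 3 from Appendix A, the grid search can be sketched in Python as follows. The helper names, the normalization of the KDE to probability masses on a fixed grid, and the choice to reuse one minibatch across the whole bandwidth grid are our own assumptions; the KL direction h(p̃true, p̃gen) follows the algorithm.

```python
import numpy as np

def gaussian_kde(samples, grid, sigma):
    """Mixture-of-Gaussians estimate on a fixed grid, cf. (8); normalized to
    probability masses so that f-divergences can be evaluated on the grid."""
    d = (grid[:, None] - samples[None, :]) / sigma
    p = np.exp(-0.5 * d ** 2).sum(axis=1)
    return p / p.sum()

def kl(p, q, eps=1e-12):
    """Discretized KL divergence between probability masses p and q."""
    return float(np.sum((p + eps) * np.log((p + eps) / (q + eps))))

def optimal_f_sigma(z_true, grid, p_true, batch_size=256, n_avg=50, rng=None):
    """Grid search of Algorithm 3: pick the bandwidth factor f_sigma whose
    minibatch KDE matches p_true best, averaged over n_avg minibatches."""
    rng = np.random.default_rng() if rng is None else rng
    c = z_true.std()                      # scale sigma by the std of the statistic
    a_fs = np.logspace(-1, 2, 200)        # candidate factors f_sigma
    H = np.empty((len(a_fs), n_avg))
    for i_n in range(n_avg):
        batch = rng.choice(z_true, size=batch_size, replace=False)
        for i_f, fs in enumerate(a_fs):
            H[i_f, i_n] = kl(p_true, gaussian_kde(batch, grid, c * fs))
    return a_fs[H.mean(axis=1).argmin()]  # argmin over the averaged mismatch
```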
## B.1 The Choice Of Kernel

Here, we investigate the impact that the specific choice of kernel for approximating the distributions in (3) has on the performance of the pcGAN. We consider the following kernels (compare Table 4): Gaussian (G), uniform (u), Epanechnikov (epa), triweight (tri), and cosine (cos). The kernel bandwidths are obtained via (9) and Algorithm 3. The results are depicted in Table 5. The choice of kernel does not have a big impact on the model performance. Overall, the Gaussian kernel seems to be the best choice, as it consistently performs well for all of the metrics. A potential reason why the Gaussian kernel is superior can be found in its unbounded support. This means that there will always be some overlap between real and generated distributions, as obtained via KDE, enabling more informative gradients.

| Kernel | K(x)                                          |
|--------|-----------------------------------------------|
| G      | (1/√(2π)) e^(−x²/2)                           |
| u      | 1/2 if −1 ≤ x ≤ 1, else 0                     |
| epa    | (3/4)(1 − x²) if −1 ≤ x ≤ 1, else 0           |
| tri    | (35/32)(1 − x²)³ if −1 ≤ x ≤ 1, else 0        |
| cos    | (π/4) cos(πx/2) if −1 ≤ x ≤ 1, else 0         |

Table 4: Different choices of kernel.

|             | K = G      | K = u      | K = epa    | K = tri    | K = cos    |
|-------------|------------|------------|------------|------------|------------|
| d²F/100 (↓) | 13.39±3.08 | 19.73±3.57 | 15.33±4.60 | 15.15±5.70 | 11.69±1.83 |
| F1 (↑)      | 0.17±0.06  | 0.13±0.04  | 0.20±0.05  | 0.14±0.03  | 0.18±0.04  |
| V̄c (↓)     | 0.20±0.02  | 0.21±0.02  | 0.31±0.08  | 0.25±0.06  | 0.23±0.03  |
| V̄f (↓)     | 0.80±0.07  | 0.89±0.06  | 0.84±0.04  | 0.84±0.05  | 0.82±0.04  |

Table 5: **(Synthetic example)** The pcGAN with different choices of kernel K is evaluated via different performance metrics, defined in Section 5.1: the Fréchet distance dF, the F1 score, the agreement of the constraint distributions V̄c, and the agreement of the distributions of a selection of high-level features V̄f. The arrows indicate whether high or low values are better. Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are given. Bold font highlights best performance. Parameters: bs = 256, λ = 500, ϵ = 0.9, h = KL.

## B.2 Using Maximum Mean Discrepancy To Match Statistics

The maximum mean discrepancy (MMD) is a kernel-based statistical test that can be employed to determine whether two distributions are the same (Gretton et al., 2012). It has been used as a loss function to establish generative moment matching networks, a distinct class of generative models (Li et al., 2015; Dziugaite et al., 2015). While an adversarial approach has been suggested to improve MMD networks by learning more suitable kernels (Li et al., 2017), they constitute their own model class and not an extension of the GAN. In this appendix, we do not consider MMD networks but explore instead the effectiveness of using MMD as the loss function for matching the high-level statistics. That is, we use the MMD loss instead of f-divergences (compare Section 4.1) in (3). The kernel maximum mean discrepancy between two distributions is defined as

$${\rm MMD}^{2}=\frac{1}{\sigma}\,\mathbb{E}\left[K\left(\frac{X-X^{\prime}}{\sigma}\right)-2K\left(\frac{X-Y}{\sigma}\right)+K\left(\frac{Y-Y^{\prime}}{\sigma}\right)\right],\tag{15}$$

where X denotes real data and Y generated data. This leads to the following loss function, where we estimate the expectations over minibatches and omit constant terms (i.e., terms that do not contain Y):

$${\cal L}_{\rm MMD}=\frac{1}{M(M-1)\sigma}\sum_{m\neq m^{\prime}}K\left(\frac{y_{m}-y_{m^{\prime}}}{\sigma}\right)-\frac{2}{MN\sigma}\sum_{m=1}^{M}\sum_{n=1}^{N}K\left(\frac{y_{m}-x_{n}}{\sigma}\right),\tag{16}$$

where M is the number of generated samples and N the number of real samples in the current minibatches.

|             | σ0 = 1, λ = 0.1 | σ0 = 1, λ = 0.5 | σ0 = 1, λ = 1.0 | σ0 = 1, λ = 5.0 | σ0 = 0.5, λ = 0.5 | σ0 = 2, λ = 0.5 | σ0 = 5, λ = 0.5 | σ0 = [1, 2, 5], λ = 0.5 |
|-------------|-----------------|-----------------|-----------------|-----------------|-------------------|-----------------|-----------------|-------------------------|
| d²F/100 (↓) | 21.52±3.97      | 18.42±3.03      | 18.59±3.47      | 23.26±2.93      | 19.52±2.12        | 17.27±4.13      | 18.16±5.63      | 17.63±3.47              |
| F1 (↑)      | 0.14±0.05       | 0.11±0.04       | 0.11±0.03       | 0.10±0.07       | 0.13±0.04         | 0.17±0.06       | 0.18±0.04       | 0.13±0.04               |
| V̄c (↓)     | 0.56±0.05       | 0.71±0.09       | 0.76±0.10       | 0.96±0.06       | 0.50±0.03         | 0.90±0.12       | 1.05±0.15       | 0.77±0.05               |
| V̄f (↓)     | 1.20±0.11       | 1.13±0.06       | 1.07±0.07       | 1.01±0.06       | 1.08±0.06         | 1.10±0.08       | 1.10±0.13       | 1.07±0.05               |

Table 6: **(Synthetic example)** The pcGAN with MMD loss is evaluated for different weighting factors λ and kernel widths σ0 via different performance metrics, defined in Section 5.1: the Fréchet distance dF, the F1 score, the agreement of the constraint distributions V̄c, and the agreement of the distributions of a selection of high-level features V̄f. The arrows indicate whether high or low values are better. Ten runs have been conducted per model, and the mean values plus-or-minus one standard deviation are given. Bold font highlights best performance.

One drawback of this approach is the mixed loss term in equation (16); it would be too computationally costly to take the entire dataset into account at each iteration, which is why we also need to batch the real data. When using f-divergences in loss (3) of our approach, similar problems can be circumvented by evaluating ptrue once on a fixed grid in advance of the training.
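Concretely, the batched loss (16) with a Gaussian kernel can be sketched as follows; the variable names are ours, and the constant x–x′ terms are omitted as in (16).

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2)

def mmd_loss(y, x, sigma):
    """Batched estimate of the MMD loss (16) for one statistic, with a
    Gaussian kernel of bandwidth sigma.

    y: generated values of the statistic (length M).
    x: real values of the statistic (length N).
    """
    m, n = len(y), len(x)
    kyy = gaussian_kernel((y[:, None] - y[None, :]) / sigma)
    # exclude the diagonal m = m' terms from the first sum
    term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1) * sigma)
    kyx = gaussian_kernel((y[:, None] - x[None, :]) / sigma)
    term_yx = 2.0 * kyx.sum() / (m * n * sigma)
    return term_yy - term_yx
```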
Here, the same trick does not work, since the statistics as extracted from the generated data determine the points at which the kernel K needs to be evaluated. The results for the MMD loss are given in Table 6. We consider different weighting factors for the loss term, as well as Gaussian kernels with different bandwidths. The bandwidths of the Gaussian kernels for the different constraints are given by the values σs in (9) times the factors σ0 given in the table. When multiple factors are given, the sum of the corresponding Gaussian kernels is used. Both in terms of matching the constraints and in terms of the performance metrics, the method performs better than the standard WGAN, but worse than the pcGAN (compare Table 1).

Figure 6: **(Synthetic example)** The distributions of three different power spectrum components ps as obtained with the different GAN variants when combined with the probabilistic constraint are depicted, where the orange lines show the true distribution as obtained via KDE (7). For the histograms, 20 000 generated samples have been considered (or the full dataset, in case of the real distribution). Parameters: bs = 256, ϵ = 0.9, h = KL, and λ = [500, 500, 500, 2500], respectively, from left to right.

## B.3 Additional Plots

In Fig. 6, plots of the constraint fulfillment for the different GAN variants combined with the probabilistic constraint are given. It is apparent that all the constrained models reproduce the constraint distributions well, with WGAN-GP + pc giving the smoothest fit. The distributions obtained with SNGAN + pc are a bit more jagged than the other ones. In Fig. 7, an equivalent plot is given for the IceCube-Gen2 dataset. Again, all of the constrained GANs match the distributions very well.

## B.4 Runtime And Parameter Tuning

|                    | WGAN | WGAN + pc (pcGAN) | Wu et al. | Wu et al. + pc | WGAN-GP | WGAN-GP + pc | SNGAN | SNGAN + pc |
|--------------------|------|-------------------|-----------|----------------|---------|--------------|-------|------------|
| Synthetic (5.2)    | 1.20 | 1.95              | 1.38      | 2.06           | 1.91    | 2.68         | 1.40  | 2.07       |
| IceCube-Gen2 (5.3) | 0.37 | 0.46              | 0.45      | 0.46           | 0.51    | 0.56         | 0.60  | 0.67       |

Table 7: Runtime (in hours) for one run of 100 000 iterations.

The runtime required for one run of 100 000 iterations for the various models and for the different datasets is given in Table 7. The time required to extract the representations ptrue for all of the constrained statistics combined is negligible in comparison: for the synthetic dataset, it takes 66 seconds, and for the IceCube-Gen2 dataset 0.05 seconds. These numbers have been obtained on a system with an NVIDIA RTX 3060 Ti 8GB GPU, an Intel Core i7-7700K @ 4.2GHz CPU, and 16GB RAM. Overall, adding the probabilistic constraint increases the runtimes of the corresponding base GANs by around 40%–60% in the case of the synthetic dataset, where 101 statistics are matched. For the IceCube-Gen2 dataset, where only 2 statistics are matched, the increases in runtime are significantly lower at less than 30%.

Figure 7: **(IceCube-Gen2)** The distributions of minimum and maximum values as obtained with the different GAN variants when combined with the probabilistic constraint are depicted, where the orange lines show the true distribution as obtained via KDE (7). For the histograms, 20 000 generated samples have been considered (or the full dataset, in case of the real distribution). Parameters: bs = 256, ϵ = 0.9, h = KL, λ = 2.

Apart from the increased runtime for individual runs, new hyperparameters get introduced with the probabilistic constraint, which will cause additional upfront cost when tuning the model.
They include the global weighting coefficient λ, the parameter ϵ determining the amount of historical averaging, and the function h to quantify the mismatch between the distributions. In principle, the heuristic choices in the formula for λs can also be fine-tuned. The values used throughout the paper, e.g. those given in Tables 1 and 3, should serve as good starting points for these hyperparameters. The most important parameter to tune individually for each dataset is the global weighting coefficient λ in (3). Assuming that the underlying unconstrained GAN has already been well-tuned, obtaining good results with the probabilistically constrained GAN should be possible with 5-10 additional runs. The fine-tuning process of the pcGAN can add a sizable amount of time to the model development. In cases where the simulation would take minutes or hours to generate a single sample, and where thousands or millions of samples need to be generated, the pcGAN can still provide speedups. Apart from that, GANs have the advantage of being differentiable, which allows for their use in end-to-end optimization pipelines, e.g. for detector design (Dorigo et al., 2023). Hence, being faster than the traditional simulation is not always essential for GANs to be useful.

## C Details On The Experiments

In this appendix, we give additional information on the experiments conducted in Sections 5.2-5.3, in particular on the network architectures and the training parameters. The code for the project will be made available on GitHub; note, however, that only the data for the synthetic example is available there.

## C.1 Constraints And High-Level Features

We start by giving an overview of the different quantities that have been employed either as constraints or performance metrics. For the 1D signals x of length Nx = 200 in Sections 5.2 and 5.3, we used the minimum and maximum values, min = min(x_0, ..., x_{Nx−1}) and max = max(x_0, ..., x_{Nx−1}), the mean values, mean = (1/Nx) Σ_{i=0}^{Nx−1} x_i, the mean absolute values, mean(abs) = (1/Nx) Σ_{i=0}^{Nx−1} |x_i|, the number of zero crossings, Nzc, and the number of maxima, Nmax, of the curves. The discrete Fourier transform for real-valued inputs (as implemented in torch.fft.rfft) was utilized to obtain the complex Fourier coefficients for positive frequencies k ∈ [0, ⌊Nx/2⌋ + 1] below the Nyquist frequency,

$$X_{k}=\sum_{n=0}^{N_{x}-1}x_{n}e^{-i2\pi{\frac{k n}{N_{x}}}},\tag{17}$$

and the corresponding power spectrum components are obtained as S_k = |X_k|²/Nx. The total spectral energy is then calculated as E = Σ_{k=0}^{⌊Nx/2⌋+1} S_k. When employed as constraints, we did not constrain on the power spectrum components directly, but instead on ps[k] = √(Nx S_k). For the different experiments, we considered the following set of high-level features: mean, mean(abs), max−min, E, Nzc, and Nmax. For the IceCube-Gen2 experiment, we omitted the mean, since it did not exhibit interesting structure in its distribution.

## C.2 Synthetic Example (Section 5.2)

The synthetic dataset consists of 100 000 samples of size 200, generated as described in Section 5.2. For this example, we employed convolutional networks for both the discriminator and generator; details on the corresponding network architectures are given in Tables 9 and 10, respectively. In layers where both batch normalization and an activation function are listed in the column 'Activation', batch normalization is applied to the input of the layer whereas the activation function is applied to the output. Padding is employed in each layer such that the given output sizes are obtained; reflection padding is utilized.

Table 8: Hyperparameters used for the experiments.
| Experiment | Navg | Nit | lr | fsched | itsched | β1 | β2 | clamping |
|--------------------|------|---------|------|--------|---------|----|-----|------------|
| Synthetic (5.2) | 50 | 100 000 | 2e-4 | 0.5 | 70000 | 0 | 0.9 | [0, 0.005] |
| IceCube-Gen2 (5.3) | 50 | 100 000 | 5e-4 | 0.5 | 40000 | 0 | 0.9 | [0, 0.1] |

In Figure 8, the search for the best values $f_\sigma^s$ in (8) is illustrated for h = KL. It is apparent that there is a clear, batch-size-dependent minimum of the KL divergence for each constraint, with larger batch sizes tending towards larger values of $f_\sigma^s$; this is due to the fact that more samples in the minibatch allow for a more fine-grained approximation of the generated distribution. In the top right plot, optimal values of $f_\sigma^s$ are depicted for all components ps[i] of the power spectrum. The spike around i ≈ 10 is the result of some outliers in the values of the power spectrum components; they lead to a high standard deviation of the true distribution, which in turn requires a large $f_\sigma^s$ in order to obtain small enough standard deviations for the KDE to resolve the narrow peak well. In the bottom row, approximations of the generated distributions as obtained via the minibatches are depicted. It is apparent that the mixtures of Gaussians approximate them reasonably well, with larger batch sizes typically giving better results.

In Figure 10, samples from the true distribution as well as generated samples from the different GANs are depicted. All of the GANs produce reasonable-looking results, although upon closer inspection it becomes apparent that they do not necessarily constitute a superposition of two sine waves. Only the WGAN seems to have a tendency to produce rugged curves.
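For concreteness, the constraint quantities listed in Appendix C.1 can be computed along the following lines (a numpy sketch of our own; naming and exact conventions in the released code may differ):

```python
import numpy as np

def high_level_features(x):
    """Illustrative computation of the quantities from Appendix C.1 for a
    1D signal x of length N_x (conventions are ours, not the paper's code)."""
    Nx = len(x)
    feats = {
        "min": x.min(),
        "max": x.max(),
        "mean": x.mean(),
        "mean(abs)": np.abs(x).mean(),
        # number of sign changes between consecutive samples
        "N_zc": int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))),
        # number of local maxima (strictly larger than both neighbours)
        "N_max": int(np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))),
    }
    X = np.fft.rfft(x)              # complex Fourier coefficients X_k
    S = np.abs(X) ** 2 / Nx         # power spectrum components S_k
    feats["E"] = S.sum()            # total spectral energy
    feats["ps"] = np.sqrt(Nx * S)   # the quantities actually constrained
    return feats
```

In the experiments, such features are evaluated per sample, and their distributions are compared between real and generated data.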
## C.3 IceCube-Gen2 (Section 5.3)

For this example, we considered 50 000 IceCube-Gen2 radio-detector signals of size 200 (generated using the NuRadioMC code (Glaser et al., 2020) according to the ARZ algorithm (Alvarez-Muñiz et al., 2010)), normalized to their respective maximum absolute values. The networks employed are a mixture of convolutional and fully connected networks, based on the architectures used in Holmberg (2022); details on the discriminator and generator architectures are given in Tables 11 and 12, respectively. For the discriminator, the input to the network is first fed through four convolutional layers in parallel, the outputs of which are

![19_image_0.png](19_image_0.png)

Figure 8: **(Synthetic example)** Determining optimal values $f_\sigma^s$ for h = KL. **Top left:** The three plots on the top left depict the dependency of the KL divergence on the factor $f_\sigma$ for different power spectrum components; the curves have been averaged over 50 minibatches sampled from the original dataset. **Bottom left:** The first three plots in the bottom row depict the distribution of the constraint values together with their KDE representation, as well as curves obtained via (8) from minibatches of different size (not averaged); it is between them that the KL divergences in the top row have been calculated. **Top right:** Optimal values of the factor $f_\sigma^s$ are depicted for different batch sizes, where the index i gives the respective component of the power spectrum.

subsequently concatenated into one long array. The LeakyReLU activation function with factor 0.2 is applied. During training, we also check for a rare failure mode where the GAN generates only horizontal lines; if this happens, the training is restarted. In Figure 9, the process of determining optimal values for $f_\sigma^s$ is illustrated for the example h = KL. As for the synthetic example (compare Fig. 8), the KL divergences as a function of $f_\sigma$ exhibit clear minima that depend on the batch size.
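The principle behind this search can be sketched generically as follows (our own simplified stand-in: a minibatch is represented by a mixture of Gaussians with bandwidth f_sigma times the batch standard deviation and scored against a reference KDE with a discretized KL divergence; the paper's formula (8) and Algorithm 3 differ in detail):

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Mixture of Gaussians centred at the samples, evaluated on a 1D grid."""
    z = (grid[None, :] - samples[:, None]) / bandwidth
    dens = np.exp(-0.5 * z ** 2).sum(axis=0)
    return dens / (samples.size * bandwidth * np.sqrt(2 * np.pi))

def kl_on_grid(p, q, dx, eps=1e-12):
    """Discretized KL divergence between two densities on a uniform grid."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) * dx)

def best_f_sigma(reference, batch, grid, candidates, ref_bandwidth=0.2):
    """Pick the f_sigma whose minibatch mixture is closest (in KL) to a
    reference KDE of the true constraint distribution; the reference
    bandwidth of 0.2 standard deviations is an arbitrary choice of ours."""
    dx = grid[1] - grid[0]
    p = gaussian_kde(reference, grid, ref_bandwidth * reference.std())
    kls = [kl_on_grid(p, gaussian_kde(batch, grid, f * batch.std()), dx)
           for f in candidates]
    return candidates[int(np.argmin(kls))]
```

Too small an f_sigma yields a spiky minibatch estimate and too large an oversmoothed one; the minimum in between moves to larger values for larger batches, as in Figs. 8 and 9.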
In Figure 12, samples from the true distribution as well as generated samples from the different GANs are depicted. Altogether, most of the generated samples look good, with none of the models clearly outperforming the others.

## C.4 Training Parameters

Here, we summarize the training parameters used for the different experiments. $N_{it}$ gives the number of training iterations, lr the learning rate, and λ the weighting factor for the constraints in (3). The column 'clamping' gives the range to which the network parameters of the discriminator D were clamped in order to enforce the Lipschitz constraint in WGANs (Arjovsky et al., 2017). The ADAM optimizer (Kingma and Ba, 2015) was used for training the networks, with hyperparameter values β1 and β2; a scheduler was employed that reduced the learning rate by a factor of $f_{sched}$ after $it_{sched}$ iterations. The weighting factor for the statistical constraint from Wu et al. (2020) was chosen as λ_Wu = 1. The weighting factor for the gradient penalty in WGAN-GP was chosen as λ_GP = 10. The parameter m, which gives the number of discriminator updates per generator update, was chosen as 1.

![20_image_0.png](20_image_0.png)

Figure 9: **(IceCube-Gen2)** Determining optimal values $f_\sigma^s$ for h = KL. **Top left:** The two plots on the top left depict the dependency of the KL divergence on the factor $f_\sigma$; the curves have been averaged over 50 minibatches sampled from the original dataset. **Bottom left:** The first two plots in the bottom row depict the distributions of the constraint values together with their KDE representation, as well as curves obtained via (8) from minibatches of different size (not averaged); it is between them that the KL divergences in the top row have been calculated. **Top right:** Optimal values of the factor $f_\sigma^s$ are depicted for different batch sizes, where the index i gives the respective constraint.
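In plain Python, the scheduler and the weight clamping amount to the following (an illustration under our own naming, using the values for the synthetic experiment from Table 8; not the actual training code):

```python
def scheduled_lr(iteration, lr0, f_sched, it_sched):
    """Learning rate under the step scheduler of Appendix C.4: the rate is
    reduced by a factor of f_sched once it_sched iterations have passed."""
    return lr0 * f_sched if iteration >= it_sched else lr0

def clamp_weights(params, lo, hi):
    """WGAN weight clamping (Arjovsky et al., 2017): every discriminator
    parameter is restricted to [lo, hi], e.g. [0, 0.005] for the
    synthetic run (Table 8)."""
    return [min(max(p, lo), hi) for p in params]
```

In a framework such as PyTorch, the same effect is obtained with a step learning-rate scheduler and an in-place clamp of the discriminator parameters after each update.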
| Layer | Output Size | Kernel Size | Stride | Activation |
|---------|---------------|---------------|----------|-----------------|
| Input | 1 × 200 | | | |
| Conv | 32 × 99 | 3 | 2 | BatchNorm, ReLU |
| Conv | 32 × 99 | 3 | 1 | BatchNorm, ReLU |
| Conv | 32 × 99 | 3 | 1 | ReLU |
| Conv | 64 × 48 | 3 | 2 | BatchNorm, ReLU |
| Conv | 64 × 48 | 3 | 1 | BatchNorm, ReLU |
| Conv | 64 × 48 | 3 | 1 | ReLU |
| Conv | 128 × 23 | 3 | 2 | ReLU |
| Conv | 128 × 23 | 3 | 1 | ReLU |
| Conv | 128 × 23 | 3 | 1 | ReLU |
| Conv | 256 × 10 | 3 | 2 | ReLU |
| Conv | 256 × 10 | 3 | 1 | ReLU |
| Conv | 256 × 10 | 3 | 1 | ReLU |
| Flatten | 2560 | | | |
| Linear | 1 | | | |

Table 9: Discriminator architecture for the synthetic example.

| Layer | Output Size | Kernel Size | Stride | Activation |
|------------|---------------|---------------|----------|-----------------|
| Input | 1 × 5 | | | BatchNorm |
| ConvTransp | 256 × 25 | 3 | 16 | BatchNorm, Tanh |
| Conv | 256 × 25 | 3 | 1 | BatchNorm, Tanh |
| Conv | 256 × 25 | 3 | 1 | BatchNorm, Tanh |
| ConvTransp | 128 × 50 | 3 | 2 | BatchNorm, Tanh |
| Conv | 128 × 50 | 3 | 1 | BatchNorm, Tanh |
| Conv | 128 × 50 | 3 | 1 | BatchNorm, Tanh |
| ConvTransp | 64 × 100 | 3 | 2 | BatchNorm, Tanh |
| Conv | 64 × 100 | 3 | 1 | BatchNorm, Tanh |
| Conv | 64 × 100 | 3 | 1 | BatchNorm, Tanh |
| ConvTransp | 32 × 200 | 3 | 2 | BatchNorm, Tanh |
| Conv | 32 × 200 | 3 | 1 | BatchNorm, Tanh |
| Conv | 32 × 200 | 3 | 1 | Tanh |
| Conv | 1 × 200 | 3 | 1 | Tanh |

Table 10: Generator architecture for the synthetic example.

| Layer | Output Shape | Kernel Size | Stride | Activation |
|-------------|----------------|---------------|----------|--------------|
| Input | 1 × 200 | | | |
| Conv01 | 32 × 49 | 5 | 4 | LeakyReLU |
| Conv02 | 32 × 47 | 15 | 4 | LeakyReLU |
| Conv03 | 32 × 44 | 25 | 4 | LeakyReLU |
| Conv04 | 32 × 42 | 35 | 4 | LeakyReLU |
| Concatenate | 32 × 182 | | | |
| Conv | 1 × 182 | 1 | 1 | LeakyReLU |
| Linear | 92 | | | LeakyReLU |
| Linear | 45 | | | LeakyReLU |
| Linear | 20 | | | LeakyReLU |
| Linear | 1 | | | |

Table 11: Discriminator architecture for the IceCube-Gen2 data. The input is first fed through the layers Conv01-Conv04 in parallel and the outputs are subsequently concatenated into one long array.

| Layer | Output Size | Kernel Size | Stride | Activation |
|------------|---------------|---------------|----------|--------------|
| Input | 5 | | | |
| Linear | 24 | | | ReLU |
| Conv | 48 × 24 | 3 | 1 | ReLU |
| Conv | 48 × 24 | 3 | 1 | ReLU |
| ConvTransp | 24 × 49 | 3 | 2 | ReLU |
| Conv | 24 × 49 | 3 | 1 | ReLU |
| ConvTransp | 12 × 99 | 3 | 2 | ReLU |
| Conv | 12 × 99 | 3 | 1 | ReLU |
| ConvTransp | 6 × 199 | 3 | 2 | ReLU |
| Conv | 1 × 200 | 4 | 1 | |

Table 12: Generator architecture for the IceCube-Gen2 data.

![22_image_0.png](22_image_0.png)

![23_image_0.png](23_image_0.png)

![23_image_1.png](23_image_1.png)

Figure 11: Samples for the synthetic example as obtained from the different constrained models.

![24_image_0.png](24_image_0.png)

![25_image_0.png](25_image_0.png)
Review 1: Summary: This paper proposes a novel regularization mechanism for GANs based on divergences between some differentiable statistics of the true and generated distributions. In particular, in this work those statistics are chosen based on domain knowledge and estimated with KDEs. Some additional work is done in terms of "tricks" that are part of the method, such as how to choose kernel parameters, or the weighting of divergences. Empirically, the method is tested on time-series data, where it can be observed that the proposed method better recovers the desired statistics. A number of design choices are also tested. Strengths and Weaknesses: Overall I appreciate what the method is pushing for, and it seems like an elegant way of incorporating prior knowledge. The paper is clear and easy to read; I think I could almost reimplement this just from the text. While the empirical evaluation is limited, there's still some good work done there: the synthetic example is a good illustration, and some design choices are tested. Unfortunately, there are still some choices that seem not very well justified (see below). It also feels like there's a missed opportunity here to test on way more domains with different priors. This could be a much more general paper considering how many domains this method could apply to, and this lack of scope is perhaps the main weakness of the paper. On the novelty side, this is novel to me, but I'm not up-to-date enough on this corner of the GAN world to be an authority here. Requested Changes: The main change I'd like to see in this paper is better justification for some design choices, which appear to be tricks set by trial and error rather than methodically. For example, why set $\bar\sigma_s$ the way it is set? This is just a hyperparameter that should be empirically tuned. Similarly, the method to determine $f_\sigma^s$ appears very heuristic, with some constants sprinkled in Algorithm 3.
The method to determine $\lambda_s$ is just as suspicious; where does this $0.1$ value (or even really the whole $\eta$ formula) come from? Some text comments: - "A major advantage of using f-divergence..." this is fairly constrained for situations where one can easily integrate over x. This is rarely the case for a complex enough distribution (e.g. one parameterized by a deep model). - Equation 5 is wrong and shows the Jeffreys divergence rather than the JS divergence. The two are quite close but the difference matters numerically and empirically (what are the experiments using?) - "We chose to employ KDE with Gaussian kernels", this was a bit scary to read, but I was reassured later on by appendix B.1. I'd recommend presenting the method in a manner that's a bit more generic (but using Gaussian kernels as a running example may be helpful for readers). - Equation 10 suggests some recursive computation, is this really the case? If so is it truncated? - Writing `linspace(0, 20, 200)` belongs more in a technical report than a scientific paper. Consider revising. Broader Impact Concerns: No concerns. ================================================== Review 2: Summary: This paper proposes the probabilistically constrained GAN to improve the statistical fidelity of generated samples in GANs. The main contribution is a technique to match the distributions of certain statistics between real and generated data by adding a new loss term to the generator loss function. This approach aims to ensure that generated samples not only look realistic but also accurately reflect the true data distribution. Strengths and Weaknesses: Strengths: 1. This paper introduces a new way to constrain GANs by matching distributions of chosen statistics. 2. The use of f-divergences and kernel density estimation is well-justified. 3. The approach can be applied to various GAN architectures and allows for arbitrary differentiable statistics to be constrained. 4. 
The experiments cover both synthetic and real-world datasets, with a wide range of evaluation metrics and comparisons to existing methods. The paper also includes detailed ablation studies. Weaknesses: 1. The use of kernel density estimation may become computationally expensive for high-dimensional statistics. A more detailed discussion of the method's scalability would be beneficial. Additionally, the paper could be strengthened by demonstrating improvements on existing state-of-the-art GAN models for more complex tasks like image generation, rather than focusing primarily on toy datasets. 2. While the paper makes good use of KDE to approximate statistics, it relies on hand-crafted features. An interesting extension would be to explore the use of learned statistics from existing models, which could potentially capture more complex or nuanced aspects of the data distribution. This approach could draw connections to the rich history of analysis-by-synthesis methods in statistical modeling. Requested Changes: An intriguing aspect to consider is the potential adaptability of the proposed method to other probabilistic generative models, such as energy-based models (EBMs). It would be valuable to explore how the core principles of the pcGAN approach might be applied beyond the GAN framework. Could the authors comment on the feasibility and potential benefits of extending their proposed model to these broader classes of generative models? Broader Impact Concerns: I do not see any significant broader impact concerns that would necessitate adding a Broader Impact Statement or expanding an existing one. ================================================== Review 3: Summary: In the context of data generation for scientific applications, the authors introduce a method to align the distributions of statistics of generated data with those of real data.
Their approach is suited for Generative Adversarial Networks (GANs) and enhances the generator's loss function by incorporating a new term that measures distributional differences using f-divergences. By applying kernel density estimation, they represent the real distributions and estimate the corresponding generated distributions at each iteration. This method, referred to as the probabilistically constrained GAN (pcGAN), focuses on matching the overall shapes of distributions rather than individual sample properties. The approach is particularly beneficial for scientific applications, as domain-specific differentiable statistics can be selected. The authors perform an empirical evaluation on one synthetic and one real-world dataset. Strengths and Weaknesses: ## Strengths - The paper introduces a further form of supervision in training GANs for scientific applications to enforce a better match between generated and real data statistics. - The proposed idea is presented with clarity, and the authors' contribution is well-stated. - The authors provide the source code to reproduce their results. - The paper is well-written and well-organized. ## Weaknesses - The empirical evaluation is not convincing (see Requested Changes - 1): - The number of runs (3 per model) is not enough to produce a convincing empirical evaluation, as the differences in performance cannot be shown to be statistically significant with only 3 evaluations per model. - The experiments conducted on the real dataset are not exhaustive, since they do not include the evaluation of adding the probabilistic constraints to different GAN variants, as done for the synthetic dataset. - The numerical results reported in Figure 7 do not show a clear improvement of pcGAN over the other models. - The new loss term in pcGAN introduces overhead both in hyperparameter fine-tuning and in the training procedure. However, this aspect is not covered in the paper.
As pointed out by the authors in the introduction, "GANs may serve as surrogate models for expensive but highly accurate numerical simulations"; thus, the tradeoff between performance improvement and computational overhead is crucial for their method, but it is missing from the analysis. - The results are not well-presented (see Requested Changes - 2). Requested Changes: ### 1. Empirical Evaluation (Critical) - Increase the number of runs per model to strengthen the results. - Evaluate the addition of the probabilistic constraints to different GAN variants on real data. - In Section 5.2 (page 9) the authors claim "In terms of execution time, the pcGAN takes about twice as long to train as the unconstrained WGAN". This claim needs to be supported with numerical results. - Adding the new loss requires adjusting several parameters ($f_\sigma$, $\lambda$, $\epsilon$, batch size). How expensive is the fine-tuning process? ### 2. Results Presentation (Critical) The graphical representation chosen by the authors to report the numerical results is 1) not easy to read and 2) not compact, forcing the authors to move important results to the Appendix. For example, the authors describe four metrics used in the empirical evaluation, but in Figures 4 and 5 they report only two of them, without providing any explanation for the missing results. Thus, I strongly suggest that the authors: - replace Figures 3, 5, and 7 with tables, and include the results for all the metrics; - replace Figure 4 with a line chart with a confidence interval, and include the results for all the metrics. Broader Impact Concerns: None ==================================================
# On The Infinite-Depth Limit Of Finite-Width Neural Networks

Soufiane Hayou hayou@nus.edu.sg Department of Mathematics National University of Singapore

Reviewed on OpenReview: *https://openreview.net/forum?id=RbLsYz1Az9*

## Abstract

In this paper, we study the infinite-depth limit of finite-width residual neural networks with random Gaussian weights. With proper scaling, we show that by fixing the width and taking the depth to infinity, the pre-activations converge in distribution to a zero-drift diffusion process. Unlike the infinite-width limit, where the pre-activations converge weakly to a Gaussian random variable, we show that the infinite-depth limit yields different distributions depending on the choice of the activation function. We document two cases where these distributions have (different) closed-form expressions. We further show an intriguing change of regime phenomenon of the post-activation norms when the width increases from 3 to 4. Lastly, we study the sequential limit infinite-depth-then-infinite-width and compare it with the more commonly studied infinite-width-then-infinite-depth limit.

## 1 Introduction

The empirical success of over-parameterized neural networks has sparked a growing interest in the theoretical understanding of these models. The large number of parameters (millions if not billions) and the complex, non-linear nature of the neural computations make this hypothesis space highly non-trivial. However, in certain situations, increasing the number of parameters has the effect of 'placing' the network in some 'average' regime that simplifies the theoretical analysis. This is the case with the infinite-width asymptotics of random neural networks. The infinite-width limit of neural network architectures has been extensively studied in the literature (e.g. Neal (1995); Schoenholz et al. (2017); Yang (2020); Poole et al. (2016); Arora et al. (2019); Hayou et al.
(2019a)), and has led to many interesting theoretical and algorithmic innovations (see Appendix A for a comprehensive discussion). However, most works on this limit consider a fixed-depth network. *What about infinite depth?* Existing works on the infinite-depth limit can generally be divided into three categories:

- *Infinite-width-then-infinite-depth limit*: in this case, the width is taken to infinity first, and then the depth is taken to infinity. This is the infinite-depth limit of infinite-width neural networks. This limit was used in particular to derive the Edge of Chaos initialization scheme (Schoenholz et al., 2017; Poole et al., 2016), and to study the impact of the activation function (Hayou et al., 2019a) and the behaviour of the NTK (Hayou et al., 2020; Xiao et al., 2020).

- *The joint infinite-width-and-depth limit*: in this case, the depth-to-width ratio is fixed, and therefore the width and depth are jointly taken to infinity at the same time. There are few works that study the joint width-depth limit. For instance, in (Li et al., 2021), the authors showed that for a special form of residual neural networks (ResNets), the network output exhibits a (scaled) log-normal behaviour in this joint limit. This is different from the sequential limit where the width is taken to infinity first, followed by the depth, in which case the distribution of the network output is asymptotically normal (Schoenholz et al., 2017; Hayou et al., 2019a). In (Li et al., 2022), the authors studied the covariance kernel of an MLP in the joint limit and showed that it converges weakly to the solution of a stochastic differential equation (SDE). In Hanin & Nica (2020), the authors showed that in the joint limit, the NTK of an MLP remains random when the width and depth jointly go to infinity. This is different from the deterministic limit of the NTK where the width is taken to infinity before the depth (Hayou et al., 2020).
More recently, Hanin (2022) explored the impact of the depth-to-width ratio on the correlation kernel and the gradient norms in the case of an MLP architecture, and showed that this ratio can be interpreted as an effective network depth.

- *Infinite-depth limit of finite-width neural networks*: in both previous limits (the infinite-width-then-infinite-depth limit and the joint infinite-width-and-depth limit), the width goes to infinity. Naturally, one might ask what happens if the width is fixed and the depth goes to infinity. What is the limiting distribution of the network output at initialization? In (Hanin, 2019), the author showed that neural networks with bounded width are still universal approximators, which motivates the study of finite-width, large-depth neural networks. In (Peluchetti & Favaro, 2020), the authors showed that the pre-activations of a particular ResNet architecture converge weakly to a diffusion process in the infinite-depth limit. This is a consequence of the fact that ResNets can be seen as discretizations of SDEs (see Section 2).

In the present paper, we study the infinite-depth limit of finite-width ResNets with random Gaussian weights (an architecture that is different from the one studied in (Peluchetti & Favaro, 2020)). We are particularly interested in the *asymptotic behaviour of the pre/post-activation values*. Our contributions are four-fold:

1. Unlike the infinite-width limit, we show that the distribution of the pre-activations in the infinite-depth limit is not necessarily Gaussian. In the simple case of networks of width 1, we study two cases where we obtain known but completely different distributions by carefully choosing the activation function.

2. For the ReLU activation function, we introduce and discuss the phenomenon of *network collapse*. This phenomenon occurs when the pre-activations in some hidden layer have all non-positive values, which results in zero post-activations.
This leads to a stagnant network where increasing the depth beyond a certain level has no effect on the network output. For any fixed width, we show that in the infinite-depth limit, network collapse is a zero-probability event, meaning that almost surely, all post-activations in the network are non-zero.

3. For general-width networks, the distribution of the pre-activations is generally intractable. We focus on the norm of the post-activations with the ReLU activation function, and show that this norm approximately follows Geometric Brownian Motion (GBM) dynamics. We call this Quasi-GBM. We also shed light on a change of regime phenomenon that occurs when the width n increases from 3 to 4. For width n ≤ 3, resp. n ≥ 4, the logarithmic growth factor of the post-activations is negative, resp. positive.

4. We study the sequential limit infinite-depth-then-infinite-width, which is the converse of the more commonly studied infinite-width-then-infinite-depth limit, and show some key similarities between these two limits. We particularly show that the pre-activations converge to the solution of a McKean-Vlasov process, which has marginal Gaussian distributions; thus, we recover the Gaussian behaviour in this limit.

All the proofs are provided in the appendix and referenced in the main text. Empirical evaluations of these theoretical findings are also provided.

## 2 The Infinite-Depth Limit

Hereafter, we denote the width, resp. depth, of the network by n, resp. L. We also denote the input dimension by d. Let d, n, L ≥ 1, and consider the following ResNet architecture of width n and depth L:

$$Y_{0}=W_{in}x,\quad x\in\mathbb{R}^{d},$$
$$Y_{l}=Y_{l-1}+\frac{1}{\sqrt{L}}W_{l}\phi(Y_{l-1}),\quad l=1,\ldots,L,\tag{1}$$

where ϕ : ℝ → ℝ is the activation function, L ≥ 1 is the network depth, $W_{in}\in\mathbb{R}^{n\times d}$, and $W_{l}\in\mathbb{R}^{n\times n}$ is the weight matrix in the l-th layer. We assume that the weights are randomly initialized with iid Gaussian variables $W_{l}^{ij}\sim\mathcal{N}(0,\frac{1}{n})$, $W_{in}^{ij}\sim\mathcal{N}(0,\frac{1}{d})$.
For the sake of simplification, we only consider networks with no bias, and we omit the dependence of $Y_l$ on n in the notation. While the activation function is only defined for real numbers, we will abuse notation and write $\phi(z) = (\phi(z^{1}), \ldots, \phi(z^{k}))$ for any k-dimensional vector $z = (z^{1}, \ldots, z^{k}) \in \mathbb{R}^{k}$, for any k ≥ 1. We refer to the vectors $\{Y_l, l = 0, \ldots, L\}$ as the *pre-activations* and the vectors $\{\phi(Y_l), l = 0, \ldots, L\}$ as the *post-activations*. Hereafter, x ∈ ℝ^d is fixed, and we assume that x ≠ 0. The $1/\sqrt{L}$ scaling in Eq. (1) is not arbitrary. This specific scaling was shown to stabilize the norm of $Y_l$ as well as gradient norms in the large-depth limit (e.g. Hayou et al. (2021); Marion et al. (2022)). In the next result (which has been shown for the single-input case in Peluchetti & Favaro (2020)), we show that the infinite-depth limit of Eq. (1) exists (in the weak sense) and has the same distribution as the solution of a stochastic differential equation. For the sake of simplicity, we only state the result for the single-input case; the multiple-inputs case is provided in Appendix B.

Proposition 1. Assume that the activation function ϕ *is Lipschitz on* ℝ^n. Then, in the limit L → ∞, the process $X_{t}^{L} = Y_{\lfloor tL \rfloor}$, t ∈ [0, 1], *converges in distribution to the solution of the following SDE*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t},\quad X_{0}=W_{in}x,\tag{2}$$

where $(B_t)_{t\geq0}$ is a Brownian motion (Wiener process), independent of $W_{in}$. Moreover, if the activation function ϕ *is only locally Lipschitz, then* $X_{t}^{L}$ converges locally to $X_t$. More precisely, for any fixed r > 0, we consider the stopping times $\tau^{L} = \inf\{t \geq 0 : |X_{t}^{L}| \geq r\}$ *and* $\tau = \inf\{t \geq 0 : |X_{t}| \geq r\}$; then the stopped process $X_{t\wedge\tau^{L}}^{L}$ converges in distribution to the stopped solution $X_{t\wedge\tau}$ *of the above SDE.*

The proof of Proposition 1 is provided in Appendix B.6. We use classical results on the numerical approximations of SDEs.
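The recursion in Eq. (1) is straightforward to simulate directly; a minimal numpy sketch of our own (the default tanh activation is our choice of a Lipschitz nonlinearity, not the paper's):

```python
import numpy as np

def resnet_forward(x, L, n, phi=np.tanh, rng=None):
    """Simulate the pre-activations of Eq. (1):
    Y_0 = W_in x,  Y_l = Y_{l-1} + W_l phi(Y_{l-1}) / sqrt(L),
    with W_l ~ N(0, 1/n) and W_in ~ N(0, 1/d) entrywise."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    Y = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n, d)) @ x
    for _ in range(L):
        W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
        Y = Y + W @ phi(Y) / np.sqrt(L)
    return Y
```

With the $1/\sqrt{L}$ scaling, each increment has variance of order 1/L, so $Y_{\lfloor tL\rfloor}$ behaves like an Euler-Maruyama discretization of the diffusion in Proposition 1 as L grows.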
Proposition 1 shows that the infinite-depth limit of the finite-width ResNet (Eq. (1)) behaves like the solution of the SDE given in Eq. (2). In this limit, $Y_{\lfloor tL \rfloor}$ converges in distribution to $X_t$. Hence, properties of the solutions of Eq. (2) should theoretically be 'shared' by the pre-activations $Y_{\lfloor tL \rfloor}$ when the depth is large. For the rest of the paper, we study some properties of the solutions of Eq. (2). This requires the definition of filtered probability spaces, which we omit here. All the technical details are provided in Appendix B. We compare the theoretical findings with empirical results obtained by simulating the pre/post-activations of the original network Eq. (1). We refer to $X_t$, the solution of Eq. (2), as the infinite-depth network. The distribution of $X_1$ (the last layer in the infinite-depth limit) is generally intractable, unlike in the infinite-width-then-infinite-depth limit (Gaussian, Hayou et al. (2021)) or the joint infinite-depth-and-width limit (which involves a log-normal distribution in the case of an MLP architecture, Li et al. (2021)). Intuitively, one should not expect a universal behaviour (e.g. the Gaussian behaviour in the infinite-width case) of the solution of Eq. (2), as the latter is highly sensitive to the choice of the activation function, and different activation functions might yield completely different distributions of $X_1$. We demonstrate this in the next section by showing that we can recover closed-form distributions by carefully choosing the activation function. The main ingredient is the use of Itô's lemma. See Appendix B for more details.

## 3 Different Behaviours Depending On The Activation Function

In this section, we restrict our analysis to a width-1 ResNet with one-dimensional inputs, where each layer consists of a single neuron, i.e. d = n = 1. In this case, the process $(X_t)_{0\leq t\leq1}$ is one-dimensional and solves the following SDE: $dX_t = |\phi(X_t)|dB_t$, $X_0 = W_{in}x$.
We can get rid of the absolute value in the equation above, since the process $X_t$ has the same distribution as $\tilde{X}_t$, the solution of the SDE $d\tilde{X}_t = \phi(\tilde{X}_t)dB_t$. The intuition behind this is that the infinitesimal random variable '$dB_t$' is Gaussian distributed with zero mean and variance dt. Hence, it is a symmetric random variable and can absorb the sign of $\phi(X_t)$. The rigorous justification of this fact is provided in Theorem 7 in the Appendix. Hereafter in this section, we consider the process X, solution of the SDE

$$dX_{t}=\phi(X_{t})dB_{t},\quad X_{0}=W_{in}x.\tag{3}$$

Given a function $g \in C^2(\mathbb{R})$,¹ we use Itô's lemma (Lemma 4 in the appendix) to derive the dynamics of the process $g(X_t)$. We obtain

$$dg(X_{t})=\underbrace{\phi(X_{t})g^{\prime}(X_{t})}_{\sigma(X_{t})}dB_{t}+\underbrace{\frac{1}{2}\phi(X_{t})^{2}g^{\prime\prime}(X_{t})}_{\mu(X_{t})}dt.\tag{4}$$

In financial mathematics nomenclature, the function µ is called the *drift* and σ the *volatility* of the diffusion process. Itô's lemma is a valuable tool in stochastic calculus and is often used to transform and simplify SDEs to better understand their properties. It can also be used to find candidate functions g and activation functions ϕ such that the SDE Eq. (4) admits solutions with known distributions, which yields a closed-form distribution for $X_t$. We devote the rest of this section to this purpose.

## 3.1 ReLU Activation

ReLU is a piece-wise linear activation function. Let us first deal with the simpler case of linear activation functions. In the next result, we show that linear activation functions yield log-normal distributions. In this case, the process $X_t$ follows Geometric Brownian motion dynamics. Later in this section, we show that this result can be adapted to the case of the ReLU activation function given by $\phi(x) = \max(x, 0)$.

Proposition 2. Let x ∈ ℝ such that x ≠ 0.
Consider a linear activation function $\phi(y) = \alpha y + \beta$, where α > 0, β ∈ ℝ are constants. Let σ > 0 and define the function g by $g(y) = (\alpha y + \beta)^{\gamma}$, where $\gamma = \sigma\alpha^{-1}$. Consider the stochastic process $X_t$, solution of Eq. (3). Then, the process $g(X_t)$ is a solution of the SDE

$$dg(X_{t})=ag(X_{t})dt+\sigma g(X_{t})dB_{t},$$

where $a = \frac{1}{2}\sigma^{2}\gamma^{-1}(\gamma - 1)$. As a result, we have that for all t ∈ [0, 1],

$$g(X_{t})\sim g(X_{0})\exp\left(\left(a-\frac{1}{2}\sigma^{2}\right)t+\sigma B_{t}\right).$$

The proof of Proposition 2 is provided in Appendix E, and consists of using Itô's lemma and solving a differential equation. When the activation function is ReLU, we still obtain a log-normal distribution conditionally on the event that the initial value $X_0$ is positive.

Proposition 3. Let x ∈ ℝ such that x ≠ 0, and let ϕ be the ReLU activation function given by $\phi(z) = \max(z, 0)$ for all z ∈ ℝ. Consider the stochastic process $X_t$, the solution of Eq. (3). Then, the process X is a mixture of a Geometric Brownian motion and a constant process. More precisely, we have for all t ∈ [0, 1],

$$X_{t}\sim\mathbb{1}_{\{X_{0}>0\}}\,X_{0}\exp\left(-\frac{1}{2}t+B_{t}\right)+\mathbb{1}_{\{X_{0}\leq0\}}X_{0}.$$

Hence, given a fixed $X_0 > 0$, the process X is a Geometric Brownian motion.

The proof of Proposition 3 is provided in Appendix F. We show that, conditionally on $X_0 > 0$, with probability 1 the process $X_t$ is positive for all t ∈ [0, 1].² When $X_t > 0$, the ReLU activation is just the identity function, which justifies the similarity between this result and the one obtained with linear activations (Proposition 2). Conversely, if $X_0 < 0$, the process is constant, equal to $X_0$, since the updates '$dX_t$' are equal to zero in this case. A rigorous justification of this is given for general width n later in the paper (Lemma 1). An empirical verification of Proposition 2 is provided in Fig.
1, where we compare the theoretical results to simulations of the *neural paths* $(Y_l)_{1\leq l\leq L}$ and $(\log(Y_l))_{1\leq l\leq L}$ from the original (finite-depth) ResNet given by Eq. (1). We observe an excellent match with the theoretical predictions for depths $L = 50$ and $L = 100$. In the case of a small depth ($L = 5$), the theoretical distribution does not fit the empirical one (obtained by simulations) well,

¹Here $C^2(\mathbb{R})$ refers to the vector space of functions $g : \mathbb{R} \to \mathbb{R}$ that are twice differentiable and whose second derivatives are continuous.

²In Appendix F, we show that the stopping time $\tau = \inf\{t \geq 0 : X_t \leq 0\}$ is infinite almost surely, which is stronger than what we need. This is a classic result in stochastic calculus.

![4_image_0.png](4_image_0.png)

Figure 1: Empirical verification of Proposition 2. **(a), (b), (d)** Histograms of $\log(Y_L)$ based on $N = 5000$ simulations for depths $L \in \{5, 50, 100\}$ with $Y_0 = 1$. The estimated density (Gaussian kernel estimate) and the theoretical density (Gaussian) are illustrated on the same graphs. **(c), (e)** 30 simulations of the sequence $(\log(Y_l))_{l\leq L}$ (c) and the sequence $(Y_l)_{l\leq L}$ (e). We call such sequences *neural paths*. The results are reported for depth $L = 100$, with $Y_0 = 1$ and $\phi$ the ReLU activation. The theoretical mean of $\log(Y_l)$ is given by $m(l) = -\frac{l}{2L}$ and that of $Y_l$ is equal to $Y_0 = 1$. We also illustrate the 99% confidence intervals, based on the theoretical prediction for $\log(Y_l)$ (Proposition 2), and the empirical quantiles for $Y_l$. (f) Histogram of $Y_L$ based on $N = 5000$ simulations for depth $L = 100$.

which is expected, since the dynamics of $X$ describe (only) the infinite-depth limit of the ResNet. More figures are provided in Appendix L.

Remark: notice that the log-normal behaviour is a result of the fact that we only consider the case $n = 1$ (width one). Indeed, the single-neuron case forces ReLU to act like a linear activation when $X_0 > 0$, and like a 'zero' activation when $X_0 \leq 0$.
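The width-one prediction is easy to check by simulating the ResNet recursion of Eq. (1) directly. Below is a minimal numpy sketch (not the authors' code): with $Y_0 = 1 > 0$ and ReLU, Proposition 3 predicts $\log(Y_L) \approx \mathcal{N}(-1/2, 1)$ at $t = 1$ for large depth.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 200, 20000            # depth and number of Monte Carlo samples
Y = np.ones(N)               # Y_0 = 1 > 0, so Prop. 3 predicts a Geometric BM
for _ in range(L):
    w = rng.standard_normal(N)
    Y = Y + w * np.maximum(Y, 0.0) / np.sqrt(L)   # Eq. (1), width n = 1, ReLU

logY = np.log(Y[Y > 0])      # paths that stayed positive (almost all of them)
print(logY.mean(), logY.std())   # theory at t = 1: mean -1/2, std 1
```

At $L = 200$ the empirical mean and standard deviation of $\log(Y_L)$ land close to $-0.5$ and $1$, matching the Geometric Brownian motion limit; the small residual bias is the $O(1/L)$ discretization effect visible in Fig. 1 for $L = 5$.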
For general width $n \geq 1$, such behaviour does not hold in general; usually some coordinates of $X_t$ will be negative while others are non-negative, which implies that the volatility term $\|\phi(X_t)\|$ has a non-trivial dependence on $X_t$. We discuss this in more detail in Section 4. In the next section, we illustrate a case of an exotic (non-standard) activation function that yields a completely different closed-form distribution of $X_t$.

## 3.2 Exotic Activation

With a particular choice of the activation function $\phi$ and mapping $g$, the stochastic process $g(X_t)$ is the solution of a well-known type of SDE, the Ornstein–Uhlenbeck SDE. In this case, the activation function is non-standard and involves the inverse of the imaginary error function, a variant of the error function.

Proposition 4 (Ornstein–Uhlenbeck neural networks). *Let* $x \in \mathbb{R}$ *such that* $x \neq 0$. *Consider the following activation function* $\phi$

$$\phi(y)=\exp(h^{-1}(\alpha y+\beta)^{2}),$$

*where* $\alpha, \beta \in \mathbb{R}$ *are constants and* $h^{-1}$ *is the inverse function of the imaginary error function given by* $h(z) = \frac{2}{\sqrt{\pi}}\int_{0}^{z} e^{t^{2}}dt$. *Let* $g$ *be the function defined by*

$$g(y)=\alpha{\sqrt{\pi}}h^{-1}(\alpha y+\beta).$$

![4_image_1.png](4_image_1.png)

Figure 2: Exotic activation

*Let the process* $X_t$ *be the solution of Eq.* (3).³ *Then, the stochastic process* $g(X_t)$ *follows the Ornstein–Uhlenbeck dynamics on* $(0, 1]$ *given by*

$$dg(X_{t})=-a\,g(X_{t})dt+2a\,dB_{t},\quad g(X_{0})=g(W_{in}x),$$

*where* $a = \frac{\pi\alpha^{2}}{4}$. *As a result, conditionally on* $X_0$ *(fixed* $X_0$*), we have that for all* $t \in [0, 1]$,

$$g(X_{t})\sim{\mathcal N}\left(g(X_{0})e^{-a t},{\frac{\pi}{2}}(1-e^{-2a t})\right),$$

*and the process* $X_t$ *is distributed as*

$$X_{t}\sim\alpha^{-1}\left(h\left(\alpha^{-1}\pi^{-1/2}\,{\mathcal{N}}\left(g(X_{0})e^{-a t},{\frac{\pi}{2}}(1-e^{-2a t})\right)\right)-\beta\right).$$

Fig. 2 shows the graph of the function $\phi(y) = \exp(h^{-1}(y)^{2})$ mentioned in Proposition 4 with $\alpha = 1$ and $\beta = 0$.
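To make the exotic activation concrete, here is a minimal numerical sketch (not the authors' code) of $\phi(y) = \exp(h^{-1}(\alpha y + \beta)^2)$ with $\alpha = 1$, $\beta = 0$, taking $h$ to be the imaginary error function $h(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{t^2}dt$ and inverting it by bisection; the integration grid and bracketing interval are arbitrary choices:

```python
import numpy as np

def h(z, steps=4001):
    """Imaginary error function: (2/sqrt(pi)) * integral_0^z exp(t^2) dt."""
    t = np.linspace(0.0, z, steps)
    f = np.exp(t ** 2)
    # trapezoid rule; np.diff handles z < 0 (h is odd) automatically
    return (2.0 / np.sqrt(np.pi)) * float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))

def h_inv(y, lo=-6.0, hi=6.0, iters=80):
    """Invert the strictly increasing odd function h by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def phi(y):
    """Exotic activation of Proposition 4 with alpha = 1, beta = 0."""
    return np.exp(h_inv(y) ** 2)

print(phi(0.0))   # h^{-1}(0) = 0, so phi(0) = exp(0) = 1
```

Since $h$ is odd, $h^{-1}$ is odd and $\phi$ is even with minimum value $\phi(0) = 1$, which is the shape visible in Fig. 2.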
With this choice of the activation function, the infinite-depth network output $X_1$ has the distribution $g^{-1}\left(\mathcal{N}\left(g(X_0)e^{-a}, \frac{\pi}{2}(1-e^{-2a})\right)\right)$ (conditionally on $X_0$), where $g$ is given in the statement of the proposition. This distribution, although easy to simulate, is different from both the Gaussian distribution that we obtain in the infinite-width limit and the log-normal distribution associated with the ReLU activation. This confirms that not only do neural networks exhibit completely different behaviours when the ratio of depth to width is large, but also that, in this case, their behaviour is very sensitive to the choice of the activation function. The results of Proposition 4 are empirically confirmed in Fig. 3. The original ResNet given by Eq. (1) with depth $L = 100$ exhibits very similar behaviour to that of the SDE.

![5_image_0.png](5_image_0.png)

Figure 3: Empirical verification of Proposition 4. **(a), (b), (d)** Histograms of $g(Y_L)$ based on $N = 5000$ simulations for depths $L \in \{5, 50, 100\}$ with $Y_0 = 1$. The estimated density (Gaussian kernel estimate) and the theoretical density (Gaussian) are illustrated on the same graphs. **(c), (e)** 30 simulations of the neural paths $(g(Y_l))_{l\leq L}$ (c) and $(Y_l)_{l\leq L}$ (e). The results are reported for depth $L = 100$, with $Y_0 = 1$ and $\phi$ given in Proposition 4. The theoretical mean of $g(Y_l)$ (conditionally on $Y_0$) is approximated by $m(l) = g(Y_0)e^{-\frac{\pi l}{3L}}$ and that of $Y_l$ is equal to $Y_0 = 1$. We also illustrate the 99% confidence intervals, based on the theoretical prediction for $g(Y_l)$ (Proposition 4), and the empirical quantiles for $Y_l$. (f) Histogram of $Y_L$ based on $N = 5000$ simulations for depth $L = 100$.

³In Appendix D, we show that the activation function $\phi$ is only locally Lipschitz. Hence, the solution of this SDE exists only in the local sense, and the convergence in distribution of $Y_{\lfloor tL\rfloor}$ to $X_t$ is also in the local sense (Proposition 1). However, by continuity of the Brownian path, the stopping times $\tau_L$ and $\tau$ diverge almost surely when $r$ goes to infinity.
Therefore, the conclusion of Proposition 4 remains true for all $t \in [0, 1]$. Technical details are provided in Appendix D.

## 4 General Width n ≥ 1

Let $n \geq 1$ and $x \in \mathbb{R}^d$ such that $x \neq 0$. Consider the process $X$ given by the SDE

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t},\quad X_{0}=W_{in}x,\tag{5}$$

where $\phi$ is the activation function, and $B$ is an $n$-dimensional Brownian motion, independent from $W_{in}$. For $n \geq 2$, the coordinates of $X_t$ are dependent, which makes the distribution of $X_t$ generally intractable. This intractability is purely due to this dependence between the coordinates and not to the non-linearity itself. To understand this, we have added in Appendix K a comprehensive analysis of the case of the identity activation and other piece-wise activation functions.

Intuitively, if $\|\phi(X_s)\| = 0$ for some $s$ in Eq. (5), then for all $t \geq s$, $X_t = X_s$, since the increments $dX_t$ are all zero for $t \geq s$. This holds for any choice of the activation function $\phi$, provided that the process $X$ exists, i.e. the SDE has a unique solution. We summarize this in the next lemma.

Lemma 1 (Collapse). *Let* $x \in \mathbb{R}^d$ *such that* $x \neq 0$, *and* $\phi : \mathbb{R} \to \mathbb{R}$ *be a Lipschitz function. Let* $X$ *be the solution of the SDE given by Eq.* (5). *Assume that for some* $s \geq 0$, $\phi(X_s) = 0$. *Then, for all* $t \geq s$, $X_t = X_s$, *almost surely.*

Lemma 1 is a particular case of Lemma 7 in the Appendix. The proof consists of using the uniqueness of the solution of Eq. (5) when the volatility term is Lipschitz. This result is trivial in the finite-depth case (Eq. (1)). When there exists $s$ such that $\phi(X_s) = 0$, the process $X$ becomes constant (equal to $X_s$) for all $t \geq s$ (almost surely). We call this phenomenon *process collapse*. In the case of finite-depth networks (Eq. (1)), we call the same phenomenon *network collapse*. Understanding when, and whether, such an event occurs is useful since it has significant implications on the large-depth behaviour of neural networks.
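The finite-depth analogue of Lemma 1 is immediate to verify numerically: once every pre-activation of a hidden layer is non-positive, the ReLU residual update of Eq. (1) is exactly zero, so every subsequent layer returns the same vector. A minimal numpy sketch (not the authors' code, starting from a hypothetical collapsed state):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda y: np.maximum(y, 0.0)

L, n = 50, 3
Y = np.array([-0.3, -1.2, -0.5])          # hypothetical state with phi(Y) = 0
for _ in range(L):
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    Y = Y + (W @ relu(Y)) / np.sqrt(L)    # update is W @ 0 = 0: network collapse
print(Y)                                   # unchanged after 50 more layers
```

The same freezing happens for the continuous-time process $X_t$, except that Lemmas 2 and 3 below show the event triggering it has probability zero in the infinite-depth limit.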
Indeed, if such an event occurs, it would mean that increasing depth has no effect on the network output after some time $s$ (or, approximately, after layer index $\lfloor sL\rfloor$). In the next result, we show that under mild conditions on the activation function, process collapse is a zero-probability event.

## 4.1 Network Collapse

The next result gives (mild) sufficient conditions on the activation function so that the process $X$ almost surely does not collapse. In the proof, we use Itô's lemma in the multi-dimensional case, which states that for any function $g : \mathbb{R}^n \to \mathbb{R}$ that is $C^2(\mathbb{R}^n)$, we have that

$$dg(X_{t})=\nabla g(X_{t})^{\top}dX_{t}+\frac{1}{2n}\|\phi(X_{t})\|^{2}\mathrm{Tr}\left[\nabla^{2}g(X_{t})\right]dt.$$

Lemma 2. *Let* $x \in \mathbb{R}^d$ *such that* $x \neq 0$, *and consider the stochastic process* $X$ *given by the following SDE*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}=W_{in}x,$$

*where* $\phi : \mathbb{R} \to \mathbb{R}$ *is Lipschitz, injective,* $C^2(\mathbb{R})$, *and satisfies* $\phi(0) = 0$, *and* $\phi'$ *and* $\phi''\phi$ *are bounded on* $\mathbb{R}$, *and* $(B_t)_{t\geq0}$ *is an* $n$-*dimensional Brownian motion independent from* $W_{in} \sim \mathcal{N}(0, d^{-1}I)$. *Let* $\tau$ *be the stopping time given by* $\tau = \min\{t \geq 0 : \phi(X_t) = 0\}$. *Then, we have that*

$$\mathbb{P}\left(\tau=\infty\right)=1.$$

The proof of Lemma 2 is provided in Appendix G. Many standard activation functions satisfy the conditions of Lemma 2. Examples include the hyperbolic tangent $\tanh(z) = \frac{e^{2z}-1}{e^{2z}+1}$, and smooth versions of the ReLU activation such as GeLU, given by $\phi_{GeLU}(z) = z\Psi(z)$ where $\Psi$ is the cumulative distribution function of the standard Gaussian variable, and Swish (or SiLU), given by $\phi_{Swish}(z) = zh(z)$ where $h(z) = (1 + e^{-z})^{-1}$ is the Sigmoid function. The result of Lemma 2 can be extended to the case when $\phi$ is the ReLU function with minor changes.

Lemma 3.
*Consider the stochastic process* $X$ *given by the SDE*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}=W_{in}x,$$

*where* $\phi$ *is the ReLU activation function, and* $(B_t)_{t\geq0}$ *is an* $n$-*dimensional Brownian motion independent from* $W_{in} \sim \mathcal{N}(0, d^{-1}I)$. *Let* $\tau$ *be the stopping time given by*

$$\tau=\operatorname*{min}\{t\geq0:\|\phi(X_{t})\|=0\}=\operatorname*{min}\{t\geq0:\,\forall i\in[n],\,X_{t}^{i}\leq0\}.$$

*Then, we have that*

$$\mathbb{P}\left(\tau=\infty\,|\,\|\phi(X_{0})\|>0\right)=1.$$

*As a result, we have that*

$$\mathbb{P}(\tau=\infty)=1-2^{-n}.$$

The proof of Lemma 3 relies on a particular choice of a sequence of functions $(\phi_m)_{m\geq1}$ that approximate the ReLU activation $\phi$. Details are provided in Appendix G. The result of Lemma 3 shows that for all $T > 0$, with probability 1, if there exists $j \in [n]$ such that $X_0^j > 0$, then for all $t \in [0, T]$, there exists a coordinate $i$ such that $X_t^i > 0$, which implies that the volatility of the process $X$, given by $\frac{1}{\sqrt{n}}\|\phi(X_t)\|$, does not vanish in finite time $t$. Notably, this implies that for any $t \in [0, 1]$, the norm of the post-activations, given by $\|\phi(X_t)\|$, does not vanish (with probability 1). This is important as it ensures that the vector $\phi(X_t)$, which represents the post-activations in the infinite-depth network, does not vanish, and therefore the process $X_t$ does not get stuck at an absorbent point. The dependence between the coordinates of the process $X_t$ is crucial in this result. In the opposite case, where the coordinates of $X_t$ are independent, the event $\{\|\phi(X_t)\| = 0\}$ has probability $2^{-n}$. Notice also that this result holds only in the infinite-depth limit. For a finite-depth ResNet (Eq. (1)) with ReLU activation, it is not hard to show that the network collapse event $\{\exists l \in [L] \text{ s.t. } \|\phi(Y_l)\| = 0\}$ has non-zero probability. However, as the depth increases, the probability of network collapse goes to zero. Fig.
4 shows the probability of network collapse for a finite-width and finite-depth ResNet (Eq. (1)). As the depth $L$ increases, it becomes unlikely that the network collapses. This is in agreement with our theoretical prediction that the infinite-depth network, represented by the process $X_t$, has a zero-probability collapse event, conditionally on the fact that $\|\phi(X_0)\| > 0$. The probability of network collapse also decreases with width, which is expected, since it becomes less likely to have all pre-activations non-positive as the width increases.

Figure 4: Probability of the event $\{\exists l \in [L] \text{ such that } \phi(Y_l) = 0\}$ (collapse) for varying widths $n \in \{1, 2, 10\}$ and depths $L \in \{1, 3, \ldots, 19\}$. The probability and the 95% confidence intervals are estimated using $N = 5000$ samples.

## 4.2 Post-Activation Norm

As a result of Lemma 3, conditionally on $\|\phi(X_0)\| > 0$, we can safely consider manipulating functions that require positivity, such as the logarithm of the norm of the post-activations. In the next result, we show that the norm of the post-activations has a distribution that resembles the log-normal distribution. We call this the Quasi Geometric Brownian Motion distribution (Quasi-GBM).

Theorem 1 (Quasi-GBM behaviour of the post-activations norm). *We have that for all* $t \in [0, 1]$,

$$\|\phi(X_{t})\|=\|\phi(X_{0})\|\exp\left(\frac{1}{\sqrt{n}}\hat{B}_{t}+\frac{1}{n}\int_{0}^{t}\mu_{s}ds\right),\quad\text{almost surely},$$

![8_image_0.png](8_image_0.png)

Figure 5: Histogram of $\sqrt{n}\log(\|\phi(Y_L)\|/\|\phi(Y_0)\|)$ for depth $L = 100$ and different widths $n \in \{2, 3, 4, 6, 20, 100\}$ based on $N = 5000$ simulations. A Gaussian density estimate and (Gaussian) kernel density estimates are shown. We observe a great match between the best Gaussian estimate and the empirical distribution, which confirms the quasi-log-normal theoretical predictions from Theorem 1.
*where* $\mu_{s}=\frac{1}{2}\|\phi^{\prime}(X_{s})\|^{2}-1$, *and* $(\hat{B}_t)_{t\geq0}$ *is a one-dimensional Brownian motion. As a result, for all* $0 \leq s \leq t \leq 1$,

$$\mathbb{E}\left[\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{s})\|}\right)\Big|\,\|\phi(X_{0})\|>0\right]=\left(\frac{(1-2^{-n})^{-1}}{4}-\frac{1}{n}\right)(t-s).$$

*Moreover, for* $n\geq2$, *we have*

$$\mathrm{Var}\left[\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{s})\|}\right)\Big|\,\|\phi(X_{0})\|>0\right]\leq\left(n^{-1/2}+\Gamma_{s,t}^{1/2}\right)^{2}(t-s),$$

*where* $\Gamma_{s,t}=\frac{1}{4}\int_{s}^{t}\left(\left(\mathbb{E}\phi^{\prime}(X_{u}^{1})\phi^{\prime}(X_{u}^{2})-\frac{(1-2^{-n})^{2}}{4}\right)+n^{-1}\left(\frac{1-2^{-n}}{2}-\mathbb{E}\phi^{\prime}(X_{u}^{1})\phi^{\prime}(X_{u}^{2})\right)\right)du$.

Different tools from stochastic calculus and probability theory are used in the proof of Theorem 1. Technical details are provided in Appendix H. The first result in the theorem suggests that the norm of the post-activations has a quasi-log-normal distribution (conditionally on $X_0$). The first term in the exponential is Gaussian ($n^{-1/2}\hat{B}_t$) and the second term depends on $n^{-1}\mu_s$, which involves an average over $(\phi'(X_s^i))_{1\leq i\leq n}$. In the large-width limit, this average concentrates around its mean, as we will see in Theorem 2. In Fig. 5, we show the histogram of $\sqrt{n}\log(\|\phi(Y_L)\|/\|\phi(Y_0)\|)$ for depth $L = 100$ and varying widths $n$. Surprisingly, the log-normal approximation fits the empirical distribution very well even for small widths $n \in \{2, 3, 4, 6\}$, for which the term $n^{-1}\mu_s$ is not necessarily close to its mean.⁴ More interestingly, the result of Theorem 1 sheds light on an intriguing change of regime that occurs between widths $n = 3$ and $n = 4$. Indeed, for $n \leq 3$, the logarithmic growth factor of the norm of the post-activations $\|\phi(X_t)\|$ tends to decrease with depth on average, while it increases for $n \geq 4$.
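This change of regime can be read off directly from the expectation formula in Theorem 1: the per-unit-time drift of $\log\|\phi(X_t)\|$ is $(1-2^{-n})^{-1}/4 - 1/n$. A quick numerical check (not the authors' code):

```python
def drift(n: int) -> float:
    """Expected growth rate of log ||phi(X_t)|| per unit time (Theorem 1)."""
    return (1.0 - 2.0 ** (-n)) ** (-1) / 4.0 - 1.0 / n

for n in range(1, 7):
    print(n, drift(n))   # negative for n <= 3, positive for n >= 4
```

For $n = 1$ this recovers the Geometric Brownian motion drift $-1/2$ of Proposition 3, and the sign flips between $n = 3$ and $n = 4$.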
When $n = 4$, the average growth is positive, although very small. This change-of-regime phenomenon suggests that for $n \leq 3$, the random variable $\|\phi(X_t)\|/\|\phi(X_s)\|$ has significant probability mass in the region $(0, 1)$. This probability mass tends to 0 as $n$ increases, since $\|\phi(X_t)\|/\|\phi(X_s)\|$ converges to a deterministic constant (we will see this in the next theorem), and the variance upper bound in Theorem 1 converges to 0 when $n$ goes to infinity, which can be explained by the fact that $\mathbb{E}\phi'(X_u^1)\phi'(X_u^2) \xrightarrow{n\to\infty} 1/4$ (the coordinates become independent in the large-width limit; see the next theorem).

⁴We currently do not have a rigorous explanation for this effect. A possible explanation for this empirical result is that the integral over $\mu_s$ has some 'averaging' effect.

![9_image_0.png](9_image_0.png)

Figure 6: 30 simulations of the sequence $(\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|))_{1\leq l\leq L}$ for depth $L = 100$ and different widths $n \in \{2, 3, 4, 6, 20, 100\}$. Theoretical means from Theorem 1 are shown as red dashed lines and compared to their empirical counterparts. We observe that when the ratio $n/L$ increases (especially for $n = 100$), the empirical mean also increases and becomes significantly different from the theoretical prediction.

Experiments showing this concentration are provided in Appendix L.5. In Fig. 6, we simulate 30 neural paths (i.e. $(Y_l)_{1\leq l\leq L}$) for depth $L = 100$ and compute the logarithmic factor $\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$. An excellent match with the theoretical results is observed for widths $n \in \{2, 3, 4, 6, 20\}$. A mismatch between theory and empirical results appears when $n = 100$, which is expected, since the theoretical results of Theorem 1 yield good approximations only when $n \ll L$. Notice that the case $n = 1$ matches the result of Proposition 3. Indeed, the latter implies that conditionally on $\phi(X_0) > 0$, we have $\log(\phi(X_t)/\phi(X_0)) = \log(X_t/X_0) \sim -t/2 + B_t$, where $B$ is a one-dimensional Brownian motion, and where we have used the fact that $X_t > 0$ for all $t$.
This result can be readily obtained from Theorem 1 by setting $n = 1$. An interesting question is that of the infinite-width limit of the process $X_t$, which corresponds to the sequential limit infinite-depth-then-infinite-width of the ResNet $Y_{\lfloor tL\rfloor}$ (Eq. (1)). We discuss this in the next section.

## 4.3 Infinite-Width Limit Of Infinite-Depth Networks

In the next result, we show that when the width goes to infinity, the ratio $\|\phi(X_t)\|/\|\phi(X_0)\|$ concentrates around a layer-dependent ($t$-dependent) constant. In this limit, the coordinates of $X_t$ converge in $L_2$ to a McKean–Vlasov process, which allows us to recover the Gaussian behaviour of the pre-activations of the ResNet. We later compare this with the converse sequential limit infinite-width-then-infinite-depth, where the pre-activations are also normally distributed, and show that the variance of the limiting Gaussian coincides in the two limits.

Theorem 2 (Infinite-depth-then-infinite-width limit). *For* $0 \leq s \leq t \leq 1$, *we have*

$$\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{s})\|}\right)\mathbb{1}_{\{\|\phi(X_{0})\|>0\}}\xrightarrow[n\to\infty]{}\frac{t-s}{4},\quad\text{and}\quad\frac{\|\phi(X_{t})\|}{\|\phi(X_{s})\|}\mathbb{1}_{\{\|\phi(X_{0})\|>0\}}\xrightarrow[n\to\infty]{}\exp\left(\frac{t-s}{4}\right),$$

*where the convergence holds in* $L_1$. *Moreover, we have that*

$$\operatorname*{sup}_{i\in[n]}\mathbb{E}\left(\operatorname*{sup}_{t\in[0,1]}|X_{t}^{i}-{\tilde{X}}_{t}^{i}|^{2}\right)=\mathcal{O}(n^{-1}),$$

*where* $\tilde{X}_t^i$ *is the solution of the following (McKean–Vlasov) SDE*

$$d\tilde{X}_{t}^{i}=\left(\mathbb{E}\phi(\tilde{X}_{t}^{i})^{2}\right)^{1/2}dB_{t}^{i},\quad\tilde{X}_{0}^{i}=X_{0}^{i}.$$

*As a result, the pre-activations* $Y_{\lfloor tL\rfloor}^i$ *(Eq.*
(1)) *converge in distribution to a Gaussian distribution in the sequential limit infinite-depth-then-infinite-width:*

$$\forall i\in[n],\ \ Y_{\lfloor tL\rfloor}^{i}\xrightarrow{L\to\infty\ \mathrm{then}\ n\to\infty}\mathcal{N}(0,d^{-1}\|x\|^{2}\exp(t/2)).$$

The proof of Theorem 2 requires the use of a special variant of the law of large numbers for non-i.i.d. random variables, and a convergence result for particle systems from the theory of McKean–Vlasov processes. Details are provided in Appendix I. In neural network terms, Theorem 2 shows that the logarithmic growth factor of the norm of the post-activations, given by $\log\left(\|\phi(Y_{\lfloor tL\rfloor})\|/\|\phi(Y_{\lfloor sL\rfloor})\|\right)$, converges to $(t-s)/4$ in the sequential limit $L \to \infty$, then $n \to \infty$. More importantly, the pre-activations $Y_{\lfloor tL\rfloor}^i$ converge in distribution to a zero-mean Gaussian distribution in this limit, with a layer-dependent variance. In the converse sequential limit, i.e. $n \to \infty$, then $L \to \infty$, the limiting distribution of the pre-activations $Y_{\lfloor tL\rfloor}^i$ is also Gaussian, with the same variance. We show this in the following result, which uses Lemma 5 in (Hayou et al., 2021).

Theorem 3 (Infinite-width-then-infinite-depth limit). *Let* $t \in [0, 1]$. *Then, in the limit* $\lim_{L\to\infty}\lim_{n\to\infty}$ *(infinite width, then infinite depth), we have that*

$${\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_{0})\|}}\,\mathbb{1}_{\{\|\phi(Y_{0})\|>0\}}\longrightarrow\exp\left({\frac{t}{4}}\right),$$

*where the convergence holds in probability. Moreover, the pre-activations* $Y_{\lfloor tL\rfloor}^i$ *(Eq.* (1)) *converge in distribution to a Gaussian distribution in the sequential limit infinite-width-then-infinite-depth:*

$$\forall i\in[n],\ \ Y_{\lfloor tL\rfloor}^{i}\ \xrightarrow{n\to\infty\ \mathrm{then}\ L\to\infty}\ {\mathcal{N}}(0,d^{-1}\|x\|^{2}\exp(t/2)).$$

The proof of Theorem 3 is provided in Appendix J. We use existing results from Hayou et al. (2021) on the infinite-depth asymptotics of the neural network Gaussian process (NNGP).
It turns out that the order of the sequential limits (taking the width to infinity first and then the depth, or the converse) does not affect the limiting distribution, which is a Gaussian with variance $\propto \exp(t/2)$. Intuitively, by taking the width to infinity first, we make the coordinates independent from each other, and the processes $(Y_l^i)_{1\leq l\leq L}$ become i.i.d. Markov chains. Taking the infinite-depth limit after the infinite-width limit then consists of taking the infinite-depth limit of one-dimensional Markov chains. On the other hand, when we take the depth to infinity first, the coordinates $(X_t^i)_{1\leq i\leq n}$ remain dependent (through the volatility term $n^{-1/2}\|\phi(X_t)\|$), which results in the quasi-log-normal behaviour of the norm of the post-activations (Theorem 1). Taking the width to infinity then yields an asymptotic post-activation norm equal to $\|\phi(X_0)\|\exp(t/4)$ (Theorem 2), which is the same norm as in the converse limit (Theorem 3). It then remains to take the width to infinity to decouple the coordinates and obtain the Gaussian distribution (through the McKean–Vlasov dynamics). Knowing that the variance of the pre-activations is mainly determined by the norm of the post-activations (Eq. (5)), we can see why the variance is similar in both sequential limits.

## 5 Discussion On The Case Of Multiple Inputs

The result of Proposition 1 can be easily generalized to the multiple-input case, and the resulting dynamics is still an SDE. The generalization to the multiple-input case is given by Proposition 5 in the Appendix. An important question in the literature on infinite-width neural networks is the behaviour of the correlation of the pre-activations (or post-activations) for different inputs $a$ and $b$, which is given by $\frac{\langle Y_{\lfloor tL\rfloor}(a),Y_{\lfloor tL\rfloor}(b)\rangle}{\|Y_{\lfloor tL\rfloor}(a)\|\|Y_{\lfloor tL\rfloor}(b)\|}$. This correlation can be seen as a geometric measure of the information as it propagates through the network.
In the infinite-width-then-depth limit, this correlation (generally) converges to a degenerate limit (a constant value), which results in either a constant or a sharp landscape of the network output and causes gradient exploding/vanishing issues (Schoenholz et al., 2017; Yang & Schoenholz, 2017; Hayou et al., 2019a). Techniques such as block scaling (Hayou et al., 2021) or kernel shaping (Zhang et al., 2022; Martens et al., 2021) solve this problem and ensure that the correlation is well-behaved in the large-depth limit. In our case, when the width $n$ is finite and the depth $L$ is taken to infinity, we can define the correlation for two inputs $a \neq b$ and time $t \in [0, 1]$ by

$$c_{t}(a,b)\ {\stackrel{def}{=}}\ {\frac{\langle X_{t}(a),X_{t}(b)\rangle}{\|X_{t}(a)\|\|X_{t}(b)\|}}.$$

Using Itô's lemma, $c_t$ has dynamics of the form

$$dc_{t}(a,b)=\Psi(X_{t}(a),X_{t}(b))dB_{t},\tag{6}$$

![11_image_0.png](11_image_0.png)

for some non-trivial mapping $\Psi$. Unfortunately, this kind of dynamics (which is not an SDE) is generally intractable, and we are currently investigating these dynamics for future work. However, since we scale the ResNet blocks with the factor $1/\sqrt{L}$ (Eq. (1)), which is the same scaling that solves the degeneracy issue in the infinite-width-then-depth limit (Hayou et al., 2021), it should be expected that the correlation kernel $c_t$ does not converge to a degenerate limit. In Fig. 7, we simulate the correlation path in a ResNet of depth $L = 200$ and width $n = 20$. The paths exhibit some level of stochasticity, but no degeneracy can be observed. Understanding the correlation dynamics (Eq. (6)) in the infinite-depth limit of finite-width networks is an interesting open question. The infinite-width limit⁵ of these dynamics is also an interesting open question. We leave this for future work.
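A correlation path of this kind is straightforward to reproduce with the finite-depth recursion of Eq. (1). The numpy sketch below (not the authors' code) tracks $c_l$ for one pair of hypothetical random inputs, using the same weights for both forward passes:

```python
import numpy as np

rng = np.random.default_rng(2)
L, n, d = 200, 20, 5
relu = lambda y: np.maximum(y, 0.0)

W_in = rng.standard_normal((n, d)) / np.sqrt(d)     # W_in ~ N(0, d^{-1} I)
a, b = rng.standard_normal(d), rng.standard_normal(d)
Ya, Yb = W_in @ a, W_in @ b                         # shared input layer

corr = []
for _ in range(L):
    W = rng.standard_normal((n, n)) / np.sqrt(n)    # same weights for both inputs
    Ya = Ya + (W @ relu(Ya)) / np.sqrt(L)           # residual update of Eq. (1)
    Yb = Yb + (W @ relu(Yb)) / np.sqrt(L)
    corr.append(Ya @ Yb / (np.linalg.norm(Ya) * np.linalg.norm(Yb)))

print(corr[-1])
```

By Cauchy–Schwarz the path stays in $[-1, 1]$, and with the $1/\sqrt{L}$ scaling one typically observes stochastic but non-degenerate paths, as in Fig. 7.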
Figure 7: 10 simulations of the correlation path $\left(\frac{\langle Y_l(a),Y_l(b)\rangle}{\|Y_l(a)\|\|Y_l(b)\|}\right)_{1\leq l\leq L}$ for depth $L = 200$, width $n = 20$, and different pairs $(a, b)$ (different initial correlations $c_0$). The color code depends only on the initial correlation value $c_0$ (red for the largest correlation values).

## 6 Practical Implications

Our theoretical analysis has many interesting implications from a practical standpoint. Here we summarize some key insights from our results.

⁵The infinite-width limit of infinite-depth correlations

Initialization and stability in the large-depth limit. An important factor pertaining to the trainability of neural networks is the behaviour of the neurons (pre/post-activations). Ensuring that the neurons are well-behaved at initialization is crucial for training, since the first step of any gradient-based training algorithm depends on the values of the neurons at initialization. This has led to interesting developments in initialization schemes for MLPs, such as the Edge of Chaos (Poole et al., 2016; Schoenholz et al., 2017), which ensures that the variance of the pre-activations does not (exponentially) vanish or explode in the large-depth limit. In the case of ResNets, we know from the existing theory on the infinite-width limit of neural networks that scaling the residual blocks with $1/\sqrt{L}$ stabilizes the pre/post-activations in the large-depth limit (Hayou et al., 2021). Hence, we do not need a special initialization scheme with this scaling. However, one could argue that this (approximately) ensures stability *only* when the width is much larger than the depth. What about the other cases, when $n \approx L$ or $n \ll L$? The last case can be studied by fixing the width and taking the depth to infinity. In our paper, we not only show that the neurons remain stable in fixed-width large-depth networks, but we fully characterize their behaviour when the depth is infinite and show that it follows an SDE in this limit.
To summarize, we show that initializing the ResNet of Eq. (1) with standard Gaussian random variables and scaling the blocks with $1/\sqrt{L}$ ensures stability inside the network in large-depth (fixed-width) networks (notice that this is actually equivalent to scaling the variance of the initialization weights with $1/L$, which can be seen as an initialization scheme). Intuitively, by stabilizing the pre-activations, we also stabilize the gradients. To confirm this intuition, we show in Fig. 8 the evolution of the gradient norms as they back-propagate through the network. This experiment was conducted by fixing the last layer's gradient to a constant value and back-propagating the gradient from there. The result shows that the $1/\sqrt{L}$ scaling, along with standard Gaussian initialization, ensures well-behaved gradients, which is a desirable property for gradient-based training. Another interesting property of the Edge of Chaos initialization scheme for MLPs is that it ensures that the correlation kernel (the correlation between the pre-activations for different inputs) does not exponentially converge to a degenerate (constant) value.⁶ We discussed some aspects of the correlation kernel in Section 5 and showed empirically that with the $1/\sqrt{L}$ scaling, the correlation is well-behaved and does not converge to degenerate values (Fig. 7).

Figure 8: 10 simulations of the gradient norm for the scaled ResNet (Eq. (1)) and the non-scaled ResNet ($Y_l = Y_{l-1} + W_l\phi(Y_{l-1})$) for depth $L = 100$ and width $n = 10$. We normalize the gradient norms by the gradient norm of the last layer. The color code depends only on the ratio of the gradient norm at the first layer to that of the last layer (dark red for the largest values). Without scaling, the gradient norm explodes (highly likely).
The $1/\sqrt{L}$ scaling stabilizes the gradients as they backpropagate through the network.

Network collapse. Another issue that could occur in finite-width networks is that of network collapse, i.e. when the pre-activations in a hidden layer are all negative, which causes the post-activations to be all zero. In a ResNet (Eq. (1)), this implies that increasing depth beyond some level has no effect on the network output. This is problematic, since the weights in those 'inactive' layers have zero gradient and thus will not be updated when such an event occurs. A simple way to understand network collapse is to see what happens at initialization. When the width $n$ is sufficiently large, one can expect that such an event is unlikely to occur. What about small-width neural networks? We offer a simple answer to this question: for finite-width neural networks, increasing the depth $L$ ensures that such an event is unlikely to happen. This is true even for extremely small widths, e.g. $n = 2, 3$, which is counter-intuitive. Empirical results in Fig. 4 support this theoretical prediction.

⁶The correlation still converges to 1 with an EOC initialization. The benefit of the EOC lies in the fact that the convergence rate is much slower (polynomial vs exponential) (Schoenholz et al., 2017; Hayou et al., 2019a).

No universal kernel regime. An interesting application of fixed-depth infinite-width neural networks is the so-called Neural Network Gaussian Process (NNGP). This is the Gaussian process limit of neural networks, which can be used to perform posterior inference and obtain uncertainty estimates (Lee et al., 2018). The converse case, i.e. fixed-width infinite-depth, has however been poorly understood, and the question of whether the infinite-depth limit of finite-width networks has some universal behaviour has remained open. We addressed this question in this work and showed that the limit (in the case of the ResNet architecture of Eq. (1)) does not admit a universal distribution (e.g.
Gaussian process in the infinite-width limit). More precisely, this limit is highly sensitive to the choice of the activation function. What about infinite-depth-then-width? The infinite-depth limit of infinite-width neural networks has been studied in the literature (Hayou et al., 2019a; 2020). It is known that in this limit, the network behaves as a Gaussian process with a well-defined kernel. What about the converse limit, i.e. the infinite-width limit of infinite-depth networks? This has so far been an open question, and our work addresses one part of it. We show that the marginal distributions are zero-mean Gaussians with the same variance as in the infinite-width-then-depth limit. Characterizing the full covariance kernel is, however, still an open question (see Section 5 for a discussion of this topic).

## 7 Conclusion And Limitations

Understanding the limiting laws of randomly initialized neural networks is important on many levels. Primarily, understanding these limiting laws allows us to derive new designs that are immune to exploding/vanishing pre-activation/gradient phenomena. They also enable a deeper understanding of overparameterized neural networks, and (often) yield many interesting (and simple) justifications of the apparent advantages of overparameterization. So far, the focus has mainly been on the infinite-width limit (and the infinite-width-then-infinite-depth limit), with few developments on the joint limit. Our work adds to this stream of papers by studying the infinite-depth limit of finite-width neural networks. We showed that unlike the infinite-width limit, where we always obtain (under some mild conditions on the activation function) a Gaussian distribution, the infinite-depth limit is highly sensitive to the choice of the activation function; using Itô's lemma, we showed how we can obtain certain known distributions by carefully tuning the activation function.
For networks of general width, we showed an important characteristic of infinite-depth neural networks with general activation functions (including ReLU, conditionally on ∥ϕ(X_0)∥ > 0): the probability of process collapse is zero, meaning that with probability one, the process X_t does not get stuck at any absorbing point. This is not true for finite-depth ResNets, as we can see in Fig. 4, which highlights that as we increase depth, the collapse probability tends to decrease and eventually converges to zero in the infinite-depth limit, in agreement with our results. This work, although novel in many aspects, is still far from depicting a complete picture of the infinite-depth limit of finite-width networks. There are still numerous interesting open questions in this research direction. One of these is the dynamics of the gradient, and more specifically the behaviour of the NTK in the infinite-depth limit of finite-width neural networks. For instance, we already know that in the joint infinite-width-depth limit of MLPs, the NTK is random (Hanin, 2019); but what happens when the width is fixed and the depth goes to infinity? In the MLP case, a degenerate NTK should be expected. Hence, questions remain as to whether a suitable scaling leads to an interesting (non-degenerate) infinite-depth limit of the NTK, as is the case for the infinite-depth limit of the infinite-width NTK (Hayou et al., 2021).

## References

Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 322–332. PMLR, 2019.

Boris Hanin. Universal function approximation by deep neural nets with bounded width and ReLU activations. *Mathematics*, 7(10), 2019.

Boris Hanin.
Correlation functions in random fully connected neural networks at finite width. 2022.

Boris Hanin and Mihai Nica. Finite depth and width corrections to the neural tangent kernel. In *International Conference on Learning Representations*, 2020.

S. Hayou, A. Doucet, and J. Rousseau. On the impact of the activation function on deep neural networks training. In *International Conference on Machine Learning*, 2019a.

S. Hayou, A. Doucet, and J. Rousseau. Mean-field behaviour of neural tangent kernel for deep neural networks. *arXiv preprint arXiv:1905.13654*, 2020.

Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. Training dynamics of deep networks using stochastic gradient descent via neural tangent kernel. 2019b.

Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, and Judith Rousseau. Stable ResNet. In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*, volume 130 of *Proceedings of Machine Learning Research*, pp. 1324–1332. PMLR, 13–15 Apr 2021.

Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. The curse of depth in kernel regime. In Melanie F. Pradier, Aaron Schein, Stephanie Hyland, Francisco J. R. Ruiz, and Jessica Z. Forde (eds.), *Proceedings on "I (Still) Can't Believe It's Not Better!" at NeurIPS 2021 Workshops*, volume 163 of *Proceedings of Machine Learning Research*, pp. 41–47. PMLR, 2022.

Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, and Roman Novak. Infinite attention: NNGP and NTK for deep attention networks. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 4376–4386. PMLR, 2020.

Jonathan E. Ingersoll. *Theory of Financial Decision Making*. 1987.

Arthur Jacot. *Theory of Deep Learning: Neural Tangent Kernel and Beyond*. 2022. URL https://infoscience.epfl.ch/record/295831/files/EPFL_TH9825.pdf.
Arthur Jacot, Franck Gabriel, Francois Ged, and Clement Hongler. Freeze and chaos: NTK views on DNN normalization, checkerboard and boundary artifacts. In Bin Dong, Qianxiao Li, Lei Wang, and Zhi-Qin John Xu (eds.), *Proceedings of Mathematical and Scientific Machine Learning*, volume 190 of *Proceedings of Machine Learning Research*, pp. 257–270. PMLR, 2022.

Benjamin Jourdain, Sylvie Meleard, and Wojbor Woyczynski. Nonlinear SDEs driven by Lévy processes and related PDEs. *Latin American Journal of Probability and Mathematical Statistics*, 4, 08 2007.

Peter Kloeden and Eckhard Platen. *Numerical Solution of Stochastic Differential Equations*. Springer Berlin, Heidelberg, 1995.

J. Lee, Y. Bahri, R. Novak, S.S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. Deep neural networks as Gaussian processes. In *International Conference on Learning Representations*, 2018.

Mufan Li, Mihai Nica, and Dan Roy. The future is log-Gaussian: ResNets and their infinite-depth-and-width limit at initialization. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 7852–7864. Curran Associates, Inc., 2021.

Mufan Bill Li, Mihai Nica, and Daniel M. Roy. The neural covariance SDE: Shaped infinite depth-and-width networks at initialization. *arXiv*, 2022.

Fusheng Liu, Haizhao Yang, Soufiane Hayou, and Qianxiao Li. Connecting optimization and generalization via gradient flow path length. 2022.

Pierre Marion, Adeline Fermanian, Gérard Biau, and Jean-Philippe Vert. Scaling ResNets in the large-depth regime. *arXiv*, 2022.

James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. Rapid training of deep neural networks without skip connections or normalization layers using deep kernel shaping. *arXiv preprint arXiv:2110.01765*, 2021.

A.G. Matthews, J. Hron, M. Rowland, R.E. Turner, and Z. Ghahramani.
Gaussian process behaviour in wide deep neural networks. In *International Conference on Learning Representations*, 2018.

R.M. Neal. *Bayesian Learning for Neural Networks*, volume 118. Springer Science & Business Media, 1995.

Stefano Peluchetti and Stefano Favaro. Infinitely deep neural networks as diffusion processes. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics*, volume 108 of *Proceedings of Machine Learning Research*, pp. 1126–1136. PMLR, 2020.

B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli. Exponential expressivity in deep neural networks through transient chaos. *30th Conference on Neural Information Processing Systems*, 2016.

S.S. Schoenholz, J. Gilmer, S. Ganguli, and J. Sohl-Dickstein. Deep information propagation. In *International Conference on Learning Representations*, 2017.

Mariia Seleznova and Gitta Kutyniok. Analyzing finite neural networks: Can we trust neural tangent kernel theory? In Joan Bruna, Jan Hesthaven, and Lenka Zdeborova (eds.), *Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference*, volume 145 of *Proceedings of Machine Learning Research*, pp. 868–895. PMLR, 2022.

Soo Sung, Supranee Lisawadi, and Andrei Volodin. Weak laws of large numbers for arrays under a condition of uniform integrability. *J. Korean Math. Soc.*, 45:289–300, 02 2008.

Peter Tankov and Nizar Touzi. *Calcul Stochastique et Finance*. 2018. URL http://www.cmap.polytechnique.fr/~touzi/Poly-MAP552.pdf.

Lechao Xiao, Jeffrey Pennington, and Samuel Schoenholz. Disentangling trainability and generalization in deep neural networks. In Hal Daumé III and Aarti Singh (eds.), *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 10462–10472. PMLR, 2020.

G. Yang. Tensor Programs III: Neural matrix laws.
*arXiv preprint arXiv:2009.10685*, 2020.

G. Yang and S. Schoenholz. Mean field residual networks: On the edge of chaos. In *Advances in Neural Information Processing Systems*, pp. 7103–7114, 2017.

Guodong Zhang, Aleksandar Botev, and James Martens. Deep learning without shortcuts: Shaping the kernel with tailored rectifiers. In *International Conference on Learning Representations*, 2022.

Bernt Øksendal. *Stochastic Differential Equations*. 2003.

## A Discussion On The Infinite-Width Limit

The infinite-width limit of neural network architectures has been extensively studied in the literature and has led to many interesting theoretical and algorithmic innovations. We summarize these results below.

- *Initialization schemes*: for multi-layer perceptrons (MLPs), a new initialization scheme that stabilizes forward and backward propagation (in the infinite-width limit) was derived in (Poole et al., 2016; Schoenholz et al., 2017). This initialization scheme is known as the Edge of Chaos, and empirical results show that it significantly improves performance. In Yang & Schoenholz (2017); Hayou et al. (2021), the authors derived similar results for the ResNet architecture and showed that this architecture is placed on the Edge of Chaos *by default*, for any choice of the variances of the initialization weights.

- *Gaussian process behaviour*: multiple papers (e.g. Neal (1995); Lee et al. (2018); Yang (2020); Matthews et al. (2018); Hron et al. (2020)) studied the weak limit of neural networks when the width goes to infinity. The results show that a randomly initialized neural network (with Gaussian weights) behaves similarly to a Gaussian process, for a wide range of neural architectures and under mild conditions on the activation function. In Lee et al.
(2018), the authors leveraged this result and introduced the neural network Gaussian process (NNGP), which is a Gaussian process model with a neural kernel that depends on the architecture and the activation function. Bayesian regression with the NNGP surprisingly achieves performance close to that of an SGD-trained finite-width neural network. The large-depth limit of this Gaussian process was studied in Hayou et al. (2021), where the authors showed that with proper scaling, the infinite-depth (weak) limit is a Gaussian process with a universal kernel7.

- *Neural Tangent Kernel (NTK)*: the infinite-width limit of the NTK is the so-called NTK regime or lazy-training regime. This topic has been extensively studied in the literature. The optimization and generalization properties (among other aspects) of the NTK have been studied in Liu et al. (2022); Arora et al. (2019); Seleznova & Kutyniok (2022); Hayou et al. (2019b). The large-depth asymptotics of the NTK have been studied in (Hayou et al., 2020; 2022; Jacot et al., 2022; Xiao et al., 2020). We refer the reader to Jacot (2022) for a comprehensive discussion of the NTK.

## B Review Of Stochastic Calculus

In this section, we introduce the mathematical framework and tools required to handle stochastic differential equations (SDEs). We suppose that we have a probability space (Ω, F, P), where Ω is the event space, P is the probability measure, and F is the sigma-algebra associated with Ω. For n ≥ 1, we denote by B the standard n-dimensional Brownian motion and by (F_t)_{t≥0} its natural filtration. Equipped with (F_t)_{t≥0}, we say that (Ω, F, (F_t)_{t≥0}, P) is a filtered probability space. F_t is the collection of events that are measurable up to time t, i.e. that can be verified given knowledge of the Brownian motion B (and potentially some other independent source, such as the initial condition of a process X defined by a B-driven stochastic differential equation) up to time t.
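Before turning to the formal definitions, a minimal numerical sketch of the basic object B (ours, in NumPy; all variable names are our own): a Brownian path can be simulated as a cumulative sum of independent N(0, δ) increments, and the marginal B_t is N(0, t).

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 100, 1.0
dt = T / n_steps

# A Brownian path is the cumulative sum of independent N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)

# The marginal B_T is N(0, T): at T = 1, the empirical mean and variance
# over many paths should be close to 0 and 1, respectively.
print(B[:, -1].mean(), B[:, -1].var())
```

The same increments-then-cumsum construction underlies the Euler discretizations used throughout this appendix.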
We are now ready to define a special type of stochastic process known as Itô processes.

## B.1 Existence And Uniqueness

Definition 1 (Itô diffusion process). *A stochastic process* (X_t)_{t∈[0,T]} *valued in* R^n *is called an Itô diffusion process if it can be expressed as*

$$X_{t}=X_{0}+\int_{0}^{t}\mu_{s}\,ds+\int_{0}^{t}\sigma_{s}\,dB_{s},$$

*where* B *is an* n*-dimensional Brownian motion and* σ_t ∈ R^{n×n}, µ_t ∈ R^n *are predictable processes satisfying* $\int_0^T(\|\mu_s\|^2+\|\sigma_s\sigma_s^\top\|^2)\,ds<\infty$ *almost surely.*

7 A kernel is called universal when any continuous function on some compact set can be approximated arbitrarily well with kernel features.

The following result gives conditions under which a strong solution of a given SDE exists and is unique.

Theorem 4 (Thm 8.3 in Tankov & Touzi (2018)). *Let* n ≥ 1*, and consider the following SDE:*

$$dX_{t}=\mu(t,X_{t})\,dt+\sigma(t,X_{t})\,dB_{t},\quad X_{0}\in L^{2},$$

*where* B *is an* m*-dimensional Brownian motion for some* m ≥ 1*, and* µ : R^+ × R^n → R^n *and* σ : R^+ × R^n → R^{n×m} *are measurable functions satisfying*

1. *there exists a constant* K > 0 *such that for all* t ≥ 0 *and* x, x′ ∈ R^n,
$$\|\mu(t,x)-\mu(t,x^{\prime})\|+\|\sigma(t,x)-\sigma(t,x^{\prime})\|\leq K\|x-x^{\prime}\|;$$
2. *the functions* ∥µ(·, 0)∥ *and* ∥σ(·, 0)∥ *are in* L^2(R^+) *with respect to the Lebesgue measure on* R^+.

*Then, for all* T ≥ 0*, there exists a unique strong solution of the SDE above.*

## B.2 Itô's Lemma

The following result, known as Itô's lemma, is a classic result in stochastic calculus. We state a version of this result from Tankov & Touzi (2018). Other versions and extensions exist in the literature (e.g. Ingersoll (1987); Øksendal (2003); Kloeden & Platen (1995)).

Lemma 4 (Itô's lemma, Thm 6.7 in Tankov & Touzi (2018)). *Let* X_t *be an Itô diffusion process (Definition 1) of the form* dX_t = µ_t dt + σ_t dB_t, t ∈ [0, T], X_0 ∼ ν*, where* ν *is some given distribution. Let* f : R^+ × R^n → R *be* C^{1,2}([0, T], R^n) *(i.e.* C^1 *in the first variable* t *and* C^2 *in the second variable* x).
*Then, with probability* 1*, we have that*

$$f(t,X_{t})=f(0,X_{0})+\int_{0}^{t}\nabla_{x}f(s,X_{s})\cdot dX_{s}+\int_{0}^{t}\left(\partial_{t}f(s,X_{s})+\frac{1}{2}\operatorname{Tr}\left[\sigma_{s}^{\top}\nabla_{x}^{2}f(s,X_{s})\sigma_{s}\right]\right)ds,$$

*where* ∇_x f *and* ∇²_x f *refer to the gradient and the Hessian, respectively. This can also be expressed as an SDE:*

$$df(t,X_{t})=\nabla_{x}f(t,X_{t})\cdot dX_{t}+\left(\partial_{t}f(t,X_{t})+\frac{1}{2}\operatorname{Tr}\left[\sigma_{t}^{\top}\nabla_{x}^{2}f(t,X_{t})\sigma_{t}\right]\right)dt.$$

## B.3 Convergence Of Euler's Scheme To The Sde Solution

The following result gives a convergence rate of the Euler discretization scheme to the solution of the SDE.

Theorem 5 (Corollary of Thm 10.2.2 in Kloeden & Platen (1995)). *Let* d ≥ 1 *and consider the* R^d*-valued Itô process* X *(Definition 1) given by*

$$X_{t}=X_{0}+\int_{0}^{t}\mu(s,X_{s})\,ds+\int_{0}^{t}\sigma(s,X_{s})\,dB_{s},$$

*where* B *is an* m*-dimensional Brownian motion for some* m ≥ 1, X_0 *satisfies* E∥X_0∥² < ∞*, and* µ : R^+ × R^d → R^d *and* σ : R^+ × R^d → R^{d×m} *are measurable functions satisfying the following conditions:*

1. *There exists a constant* K > 0 *such that for all* t ∈ R *and* x, x′ ∈ R^d,
$$\|\mu(t,x)-\mu(t,x^{\prime})\|+\|\sigma(t,x)-\sigma(t,x^{\prime})\|\leq K\|x-x^{\prime}\|.$$
2. *There exists a constant* K′ > 0 *such that for all* t ∈ R *and* x ∈ R^d,
$$\|\mu(t,x)\|+\|\sigma(t,x)\|\leq K^{\prime}(1+\|x\|).$$
3. *There exists a constant* K′′ > 0 *such that for all* t, s ∈ R *and* x ∈ R^d,
$$\|\mu(t,x)-\mu(s,x)\|+\|\sigma(t,x)-\sigma(s,x)\|\leq K^{\prime\prime}(1+\|x\|)|t-s|^{1/2}.$$

*Let* δ ∈ (0, 1) *be such that* δ^{-1} ∈ N *(integer), and consider the times* t_k = kδ *for* k ∈ {1, . . . , δ^{-1}}.
*Consider the Euler scheme given by*

$$Y_{k+1}^{i}=Y_{k}^{i}+\mu^{i}(t_{k},Y_{k})\delta+\sum_{j=1}^{m}\sigma^{i,j}(t_{k},Y_{k})\Delta B_{k}^{j},\quad Y_{0}^{i}=X_{0}^{i},$$

*where* Y^i, µ^i, σ^{i,j} *denote the coordinates of these vectors for* i ∈ [d], j ∈ [m]*, and* ∆B_k^j ∼ N(0, δ)*. Then, we have that*

$$\mathbb{E}\sup_{t\in[0,1]}\|X_{t}-Y_{\lfloor t\delta^{-1}\rfloor}\|^{2}=\mathcal{O}(\delta).$$

We can extend the result of Theorem 5 to the case of locally Lipschitz drift and volatility functions µ and σ. For this purpose, let us first define local convergence.

Definition 2. *Let* (X^L)_{L≥1} *be a sequence of processes and* X *be a stochastic process. For* r > 0*, define the stopping times* τ^L = inf{t ≥ 0 : |X^L_t| ≥ r} *and* τ = inf{t ≥ 0 : |X_t| ≥ r}*. We say that* X^L *converges locally to* X *if for any* r > 0, X^L_{t∧τ^L} *converges to* X_{t∧τ}.

This definition is generic over the type of convergence; we will clearly specify the type of convergence whenever we use this notion of local convergence.

Lemma 5 (Locally Lipschitz coefficients). *Consider the same setting as Theorem 5 with the following conditions instead:*

1. *For any* r > 0*, there exists a constant* K > 0 *such that for all* t ∈ R *and* x, x′ ∈ R^d *with* ∥x∥, ∥x′∥ ≤ r,
$$\|\mu(t,x)-\mu(t,x^{\prime})\|+\|\sigma(t,x)-\sigma(t,x^{\prime})\|\leq K\|x-x^{\prime}\|.$$
2. *For any* r > 0*, there exists a constant* K′ > 0 *such that for all* t ∈ R *and* x ∈ R^d *satisfying* ∥x∥ ≤ r,
$$\|\mu(t,x)\|+\|\sigma(t,x)\|\leq K^{\prime}(1+\|x\|).$$
3. *For any* r > 0*, there exists a constant* K′′ > 0 *such that for all* t, s ∈ R *and* x ∈ R^d *satisfying* ∥x∥ ≤ r,
$$\|\mu(t,x)-\mu(s,x)\|+\|\sigma(t,x)-\sigma(s,x)\|\leq K^{\prime\prime}(1+\|x\|)|t-s|^{1/2}.$$
*Then, for any* r > 0*, we have that*

$$\mathbb{E}\sup_{t\in[0,1]}\|X_{t\wedge\tau}-Y_{\lfloor(t\wedge\tau_{\delta})\delta^{-1}\rfloor}\|^{2}=\mathcal{O}(\delta),$$

*where* τ_δ = inf{t ≥ 0 : ∥Y_{⌊tδ^{-1}⌋}∥ > r} *and* τ = inf{t ≥ 0 : ∥X_t∥ > r}.

We omit the proof here as it uses the same techniques as in Kloeden & Platen (1995), the only difference being that we consider the stopped process X^τ. By stopping the process, we force it to stay in a region where the coefficients are Lipschitz.

## B.4 Convergence Of Particles To The Solution Of Mckean-Vlasov Process

The next result gives sufficient conditions for a system of particles to converge to its mean-field limit, known as the McKean-Vlasov process.

Theorem 6 (McKean-Vlasov process, Corollary of Thm 3 in Jourdain et al. (2007)). *Let* n ≥ 1 *and consider the* R^n*-valued Itô process* X *(Definition 1) given by*

$$dX_{t}=\sigma(X_{t},\nu_{t}^{n})\,dB_{t},\quad X_{0}\ \textit{has iid components},$$

*where* B *is an* n*-dimensional Brownian motion,* $\nu_t^n\stackrel{def}{=}\frac{1}{n}\sum_{i=1}^{n}\delta_{\{X_t^i\}}$ *is the empirical distribution of the coordinates of* X_t*, and* σ *is real-valued, given by* σ(x, ν) = ∫ζ(x, y)dν(y) *for all* x ∈ R^n *and distributions* ν*. Assume that the function* ζ *is Lipschitz continuous. Then, we have that for all* T ∈ R^+,

$$\sup_{i\in[n]}\mathbb{E}\left(\sup_{t\leq T}|X_{t}^{i}-\tilde{X}_{t}^{i}|^{2}\right)=\mathcal{O}(n^{-1}),$$

*where* X̃^i *is the solution of the following McKean-Vlasov equation*

$$d\tilde{X}_{t}^{i}=\sigma(\tilde{X}_{t}^{i},\nu_{t}^{i})\,dB_{t}^{i},\quad\tilde{X}_{0}^{i}=X_{0}^{i},$$

*where* ν^i_t *is the distribution of* X̃^i.

## B.5 Other Results From Probability And Stochastic Calculus

The next (trivial) lemma was aptly used in Li et al. (2021) to derive the limiting distribution of the network output (multi-layer perceptron) in the joint infinite width-depth limit.
This simple result will also prove useful in our case of the finite-width-infinite-depth limit.

Lemma 6. *Let* W ∈ R^{n×n} *be a matrix of standard Gaussian random variables* W_{ij} ∼ N(0, 1)*. Let* v ∈ R^n *be a random vector, independent from* W*, satisfying* ∥v∥_2 = 1*. Then,* Wv ∼ N(0, I).

Proof. The proof follows from a simple characteristic function argument. By conditioning on v, we observe that Wv | v ∼ N(0, I). Let u ∈ R^n; we have that

$$\mathbb{E}_{W,v}[e^{i\langle u,Wv\rangle}]=\mathbb{E}_{v}[\mathbb{E}_{W}[e^{i\langle u,Wv\rangle}|v]]=\mathbb{E}_{v}[e^{-\frac{\|u\|^{2}}{2}}]=e^{-\frac{\|u\|^{2}}{2}}.$$

This concludes the proof, as the latter is the characteristic function of a Gaussian random vector with identity covariance matrix.

The next theorem gives conditions under which two Itô processes have the same distribution.

Theorem 7 (Variation of Thm 8.4.3 in Øksendal (2003)). *Let* (X_t)_{t∈[0,T]} *and* (Y_t)_{t∈[0,T]} *be two stochastic processes given by*

$$dX_{t}=b(X_{t})\,dt+\sigma(X_{t})\,dB_{t},\quad X_{0}=x\in\mathbb{R},$$
$$dY_{t}=b_{t}\,dt+v_{t}\,d\hat{B}_{t},\quad Y_{0}=X_{0},$$

*where* σ : R → R^{1×k}, (b_t)_{t≥0} *and* (v_t)_{t≥0} *are real-valued adapted stochastic processes,* v *is adapted to the filtration of the Brownian motion* (B̂_t)_{t≥0}, (B_t)_{t≥0} *is a* k*-dimensional Brownian motion, and* (B̂_t)_{t≥0} *is a* 1*-dimensional Brownian motion. Assume that* E[b_t|N_t] = b(Y_t)*, where* N_t = σ((Y_s)_{s≤t}) *is the* σ*-algebra generated by* {Y_s : s ≤ t}*, and that* v_t² = σ(Y_t)σ(Y_t)^⊤ *almost surely (in terms of the* dt × dP *measure, where* dt *is the natural Borel measure on* [0, T] *and* dP *is the probability measure associated with the probability space). Then,* X_t *and* Y_t *have the same distribution for all* t ∈ [0, T].

Proof. The proof of this theorem is the same as that of Thm 8.4.3 in Øksendal (2003) with small differences.
Indeed, our result is slightly different from that of Øksendal (2003) in the sense that here we consider Brownian motions of different dimensions, while in their theorem, the author considers the case where the Brownian motions involved in (X_t) and (Y_t) have the same dimension. However, both results make use of the so-called martingale problem, which characterizes weak uniqueness and hence the distribution of Itô processes8. The generator of X_t is given for f ∈ C²(R) by

$$\mathcal{G}(f)(x)=b(x)\frac{\partial f}{\partial x}+\frac{1}{2}\sigma(x)\sigma(x)^{\top}\frac{\partial^{2}f}{\partial x^{2}}.$$

Now define the process H(f) for f ∈ C²(R) by

$$\mathcal{H}(f)(t)=b_{t}\frac{\partial f}{\partial x}(Y_{t})+\frac{1}{2}v_{t}^{2}\frac{\partial^{2}f}{\partial x^{2}}(Y_{t}).$$

Let N_t = σ((Y_s)_{s≤t}) be the σ-algebra generated by {Y_s : s ≤ t}. Using Itô's lemma, we have that for s > t,

$$\begin{aligned}\mathbb{E}[f(Y_{s})|\mathcal{N}_{t}]&=f(Y_{t})+\mathbb{E}\left[\int_{t}^{s}\mathcal{H}(f)(r)dr\,\Big|\,\mathcal{N}_{t}\right]\\&=f(Y_{t})+\mathbb{E}\left[\int_{t}^{s}\mathbb{E}[\mathcal{H}(f)(r)|\mathcal{N}_{r}]dr\,\Big|\,\mathcal{N}_{t}\right]\\&=f(Y_{t})+\mathbb{E}\left[\int_{t}^{s}\mathcal{G}(f)(Y_{r})dr\,\Big|\,\mathcal{N}_{t}\right],\end{aligned}$$

where we have used the fact that E[b_r|N_r] = b(Y_r). Now define the process M by

$$M_{t}=f(Y_{t})-\int_{0}^{t}\mathcal{G}(f)(Y_{r})dr.$$

For s > t, we have that

$$\begin{aligned}\mathbb{E}[M_{s}|\mathcal{N}_{t}]&=f(Y_{t})+\mathbb{E}\left[\int_{t}^{s}\mathcal{G}(f)(Y_{r})dr\,\Big|\,\mathcal{N}_{t}\right]-\mathbb{E}\left[\int_{0}^{s}\mathcal{G}(f)(Y_{r})dr\,\Big|\,\mathcal{N}_{t}\right]\quad\text{(by It\^o's lemma)}\\&=f(Y_{t})-\mathbb{E}\left[\int_{0}^{t}\mathcal{G}(f)(Y_{r})dr\,\Big|\,\mathcal{N}_{t}\right]=M_{t}.\end{aligned}$$

Hence, M_t is a martingale (w.r.t. N_t). We conclude that Y_t has the same law as X_t by the uniqueness of the solution of the martingale problem (see 8.3.6 in Øksendal (2003)).
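As a numerical illustration of Theorem 7 (not part of the proof), here is a small Monte-Carlo sketch of ours with a constant volatility row vector σ = (0.6, 0.8): the process driven by a 2-dimensional Brownian motion and the 1-dimensional process with matched scalar volatility v = ∥σ∥ (so that v² = σσ^⊤) should have the same marginal law, here N(0, v²t).

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 20000, 200, 1.0
dt = T / n_steps
sigma = np.array([0.6, 0.8])   # constant volatility row vector (k = 2), our toy choice
v = np.linalg.norm(sigma)      # matched scalar volatility: v^2 = sigma sigma^T = 1

# X driven by a 2-dimensional Brownian motion: dX = sigma_1 dB^1 + sigma_2 dB^2.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))
X_T = (dB @ sigma).sum(axis=1)

# Y driven by a 1-dimensional Brownian motion with volatility v: dY = v dB_hat.
dB_hat = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
Y_T = v * dB_hat.sum(axis=1)

# Both marginals at T = 1 are N(0, v^2) = N(0, 1): compare empirical moments.
print(X_T.var(), Y_T.var())
```

The constant-volatility case is of course degenerate; the theorem's content is that the same dimension-reduction holds for state-dependent volatilities, which is exactly how the n-dimensional pre-activation SDE is reduced to a scalar log-norm SDE in the main text.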
The next result is a simple corollary of the existence and uniqueness of the strong solution of an SDE under Lipschitz conditions on the drift and the volatility. It basically shows that a zero-drift process collapses (becomes constant) once the volatility hits zero.

Lemma 7. *Let* g : R^n → R *be a Lipschitz function, and let* Z *be the solution of the stochastic differential equation*

$$dZ_{t}=g(Z_{t})\,dB_{t},\quad Z_{0}\in\mathbb{R}^{n}.$$

*If* g(Z_0) = 0*, then* Z_t = Z_0 *almost surely.*

Proof. This follows from the uniqueness of the strong solution of an SDE (Theorem 4).

8 We omit the details on the martingale problem here. We invite the curious reader to check Chapter 8 in Øksendal (2003) for further details.

## B.6 Proof Of Proposition 1

We are now ready to prove the following result.

Proposition 1. *Assume that the activation function* ϕ *is Lipschitz on* R^n*. Then, in the limit* L → ∞*, the process* X^L_t = Y_{⌊tL⌋}, t ∈ [0, 1]*, converges in distribution to the solution of the following SDE:*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|\,dB_{t},\quad X_{0}=W_{in}x,\tag{7}$$

*where* (B_t)_{t≥0} *is a Brownian motion (Wiener process). Moreover, we have that for any* t ∈ [0, 1] *and any Lipschitz function* Ψ : R^n → R, EΨ(Y_{⌊tL⌋}) = EΨ(X_t) + O(L^{-1/2})*, where the constant in* O *does not depend on* t*. Moreover, if the activation function* ϕ *is only locally Lipschitz, then* X^L_t *converges locally to* X_t*. More precisely, for any fixed* r > 0*, we consider the stopping times* τ^L = inf{t ≥ 0 : |X^L_t| ≥ r} *and* τ = inf{t ≥ 0 : |X_t| ≥ r}*; then the stopped process* X^L_{t∧τ^L} *converges in distribution to the stopped solution* X_{t∧τ} *of the above SDE.*

Proof. The proof is based on Theorem 5 in the appendix. It remains to express Eq. (1) in the required form and make sure all the conditions are satisfied for the result to hold. Using Lemma 6, we can write Eq.
(1) as

$$Y_{l}=Y_{l-1}+\frac{1}{\sqrt{L}}\sigma(Y_{l-1})\zeta_{l-1}^{L},$$

where $\sigma(y)\stackrel{def}{=}\frac{1}{\sqrt{n}}\|\phi(y)\|$ for all y ∈ R^n, and the ζ^L_l are iid Gaussian random vectors with distribution N(0, I). This is equal in distribution to the Euler scheme of the SDE in Eq. (7). Since σ trivially inherits the Lipschitz or local Lipschitz properties of ϕ, we conclude the convergence using Theorem 5 and Lemma 5. Now let Ψ be K-Lipschitz for some constant K > 0. We have that

$$|\mathbb{E}\Psi(Y_{\lfloor tL\rfloor})-\mathbb{E}\Psi(X_{t})|\leq K\,\mathbb{E}\sup_{t\in[0,1]}\|\bar{Y}_{\lfloor tL\rfloor}-X_{t}\|=\mathcal{O}(L^{-1/2}),$$

where Ȳ is the Euler scheme as in Theorem 5, and where we have used the fact that Y_{⌊tL⌋} and Ȳ_{⌊tL⌋} have the same distribution.

The result of Proposition 1 can be generalized to the case of multiple inputs with minimal changes in the proof. We summarize this result in the next proposition.

Proposition 5. *Let* x_1, x_2, . . . , x_k ∈ R^d *be non-zero inputs, and denote by* Y_l(x_i) *the pre-activation vector in layer* l *for the input* x_i*. Consider the vector* Y^k_l = (Y_l(x_1)^⊤, Y_l(x_2)^⊤, . . . , Y_l(x_k)^⊤)^⊤ ∈ R^{k·n} *consisting of the concatenation of the pre-activation vectors for all inputs* x_i*. Assume that the activation function* ϕ *is Lipschitz on* R^n.
*Then, in the limit* L → ∞*, the process* X^{L,k}_t = Y^k_{⌊tL⌋}, t ∈ [0, 1]*, converges in distribution to the solution of the following SDE:*

$$d\mathbf{X}^{k}_{t}=\frac{1}{\sqrt{n}}\Sigma(\mathbf{X}^{k}_{t})^{1/2}d\mathbf{B}_{t},\quad\mathbf{X}^{k}_{0}=((W_{in}x_{1})^{\top},\ldots,(W_{in}x_{k})^{\top})^{\top},\tag{8}$$

*where* (B_t)_{t≥0} *is a* kn*-dimensional Brownian motion (Wiener process), independent from* W_{in}*, and* Σ(X^k_t) *is the covariance matrix given by*

$$\Sigma(\mathbf{X}_{t}^{k})=\begin{pmatrix}\alpha_{1,1}I_{n}&\alpha_{1,2}I_{n}&\ldots&\alpha_{1,k}I_{n}\\ \alpha_{2,1}I_{n}&\alpha_{2,2}I_{n}&\ldots&\alpha_{2,k}I_{n}\\ \vdots&\vdots&\ddots&\vdots\\ \alpha_{k,1}I_{n}&\ldots&\ldots&\alpha_{k,k}I_{n}\end{pmatrix},$$

*where* $\alpha_{i,j}=\langle\phi(X_t^{k,i}),\phi(X_t^{k,j})\rangle$*, with* $(X_t^{k,1\top},\ldots,X_t^{k,k\top})^{\top}\stackrel{def}{=}\mathbf{X}_t^k$*. Moreover, if the activation function* ϕ *is only locally Lipschitz, then* X^{L,k}_t *converges locally to* X^k_t*. More precisely, for any fixed* r > 0*, we consider the stopping times* τ^L = inf{t ≥ 0 : |X^{L,k}_t| ≥ r} *and* τ = inf{t ≥ 0 : |X^k_t| ≥ r}*; then the stopped process* X^{L,k}_{t∧τ^L} *converges in distribution to the stopped solution* X^k_{t∧τ} *of the above SDE.*

Proof. The proof is similar to that of Proposition 1. The only difference lies in the definition of the Gaussian vector ζ^L_l. In this case, we have for all x_i

$$Y_{l}(x_{i})=Y_{l-1}(x_{i})+\frac{1}{\sqrt{L}}\frac{1}{\sqrt{n}}\zeta_{l-1}^{L}(Y_{l-1}(x_{i})),$$

where $\zeta_{l-1}^{L}(Y_{l-1}(x_{i}))\stackrel{def}{=}\sqrt{n}\,W_{l}\,\phi(Y_{l-1}(x_{i}))$. Concatenating these identities yields

$$\mathbf{Y}_{l}^{k}=\mathbf{Y}_{l-1}^{k}+\frac{1}{\sqrt{L}}\frac{1}{\sqrt{n}}\boldsymbol{\zeta}_{l-1}^{L},$$

where ζ^L_{l-1} is the concatenation of the vectors ζ^L_{l-1}(Y_{l-1}(x_i)) for i = 1, . . . , k. It is straightforward that the covariance matrix of the Gaussian vector ζ^L_{l-1} is given by the matrix Σ above (with X replaced by Y). We conclude using Theorem 5.
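Numerically, the correspondence behind Propositions 1 and 5 is direct: the ResNet recursion is, in distribution, an Euler scheme for the limiting SDE. The following sketch of ours (we read Eq. (1) as Y_l = Y_{l−1} + L^{−1/2} W_l ϕ(Y_{l−1}) with W_l entries N(0, 1/n), an assumption on the weight scaling) simulates both at small width n = 3 and checks the zero-drift (martingale) property E[X_t] = X_0:

```python
import numpy as np

rng = np.random.default_rng(2)
n, L, n_paths = 3, 400, 4000   # small width, large depth
relu = lambda z: np.maximum(z, 0.0)

# Shared nonzero starting pre-activation (stands in for W_in x).
y0 = np.array([1.0, -0.5, 0.3])

# ResNet recursion: Y_l = Y_{l-1} + (1/sqrt(L)) W_l relu(Y_{l-1}), W_l entries N(0, 1/n).
Y = np.tile(y0, (n_paths, 1))
for _ in range(L):
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n_paths, n, n))
    Y = Y + np.einsum('pij,pj->pi', W, relu(Y)) / np.sqrt(L)

# Euler scheme for the limiting SDE dX = (1/sqrt(n)) ||relu(X)|| dB with step 1/L.
X = np.tile(y0, (n_paths, 1))
for _ in range(L):
    dB = rng.normal(0.0, np.sqrt(1.0 / L), size=(n_paths, n))
    X = X + np.linalg.norm(relu(X), axis=1, keepdims=True) * dB / np.sqrt(n)

# Both processes have zero drift, so the empirical means should stay near y0.
print(Y.mean(axis=0), X.mean(axis=0))
```

Both constructions use the same step size 1/L, which is exactly the time rescaling t = l/L used to embed the L layers into [0, 1].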
## C Some Technical Results For The Proofs

## C.1 Approximation Of X

In the next lemma, we provide a stochastic process X^m approximating X that differs from X in the volatility term. The upper bound on the L² norm of the difference between X^m and X will prove useful in the proofs of other results. The proof of this lemma requires Gronwall's lemma, a tool that is often used in stochastic calculus.

Lemma 8. *Let* x ∈ R^d *such that* x ≠ 0*, let* m ≥ 1 *be an integer, and consider the two stochastic processes* X^m *and* X *given by*

$$\begin{cases}dX_{t}^{m}=\frac{1}{\sqrt{n}}\|\phi_{m}(X_{t}^{m})\|dB_{t}\,,&t\in[0,\infty),\quad X_{0}^{m}=W_{in}x,\\ dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,&t\in[0,\infty),\quad X_{0}=W_{in}x,\end{cases}$$

*where* $\phi_m(z)=\int_0^z h(mu)\,du$ *with* h *the sigmoid function* h(u) = (1 + e^{-u})^{-1}, ϕ *is the ReLU activation function, and* (B_t)_{t≥0} *is an* n*-dimensional Brownian motion. We have that*

$$\forall t\geq0,\ \mathbb{E}\left\|X_{t}^{m}-X_{t}\right\|^{2}\leq\frac{2nt}{m^{2}}e^{2t}.$$

Proof. Let t ≥ 0. We have that

$$\mathbb{E}\|X_{t}^{m}-X_{t}\|^{2}=\frac{1}{n}\,\mathbb{E}\left\|\int_{0}^{t}\left(\|\phi_{m}(X_{s}^{m})\|-\|\phi(X_{s})\|\right)dB_{s}\right\|^{2}.$$

Using the Itô isometry and the fact that (∥ϕ_m(X^m_s)∥ − ∥ϕ(X_s)∥)² ≤ ∥ϕ_m(X^m_s) − ϕ(X_s)∥², we obtain

$$\begin{aligned}\mathbb{E}\|X_{t}^{m}-X_{t}\|^{2}&\leq\int_{0}^{t}\mathbb{E}\|\phi_{m}(X_{s}^{m})-\phi(X_{s})\|^{2}ds\\&\leq2\int_{0}^{t}\mathbb{E}\|\phi_{m}(X_{s}^{m})-\phi(X_{s}^{m})\|^{2}ds+2\int_{0}^{t}\mathbb{E}\|\phi(X_{s}^{m})-\phi(X_{s})\|^{2}ds\\&\leq\frac{2nt}{m^{2}}+2\int_{0}^{t}\mathbb{E}\|X_{s}^{m}-X_{s}\|^{2}ds,\end{aligned}$$

where we have used Lemma 9 and the fact that ReLU is 1-Lipschitz. We conclude using Gronwall's lemma.

## C.2 Approximation Of Φ

The next lemma provides a simple upper bound on the distance between the ReLU activation ϕ and an approximating function ϕ_m that converges to ϕ in the limit of large m.

Lemma 9. *Consider the function* $\phi_m(z)=\int_0^z h(mu)\,du$ *for* z ∈ R *and* m ≥ 1*. We have that*

$$\sup_{z\in\mathbb{R}}|\phi_{m}(z)-\phi(z)|\leq\frac{1}{m}.$$

Proof. Let m ≥ 1 and z ∈ R. Assume first that z > 0.
We have that

$$|\phi_{m}(z)-\phi(z)|=\int_{0}^{z}\frac{e^{-mu}}{1+e^{-mu}}du\leq\int_{0}^{z}e^{-mu}du=\frac{1}{m}(1-e^{-mz})\leq\frac{1}{m}.$$

For the case where z ≤ 0, the proof is similar. We have that

$$|\phi_{m}(z)-\phi(z)|=\left|\int_{0}^{z}\frac{e^{mu}}{1+e^{mu}}du\right|\leq\left|\int_{0}^{z}e^{mu}du\right|=\frac{1}{m}(1-e^{mz})\leq\frac{1}{m},$$

which concludes the proof.

## C.3 Other Lemmas

The next lemma shows that the logarithmic growth factor $\log\frac{\|\phi_m(X_t^m)\|}{\|\phi_m(X_0^m)\|}$ converges to $\log\frac{\|\phi(X_t)\|}{\|\phi(X_0)\|}$ when m goes to infinity, where the convergence holds in L¹. The key ingredient is the use of uniform integrability coupled with convergence in probability, which is sufficient to conclude L¹ convergence. This result will help us conclude in the proof of Theorem 1.

Lemma 10. *Let* x ∈ R^d *such that* x ≠ 0*, let* m ≥ 1 *be an integer, and consider the two stochastic processes* X^m *and* X *given by*

$$\begin{cases}dX_{t}^{m}=\frac{1}{\sqrt{n}}\|\phi_{m}(X_{t}^{m})\|dB_{t}\,,&t\in[0,\infty),\quad X_{0}^{m}=W_{in}x,\\ dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,&t\in[0,\infty),\quad X_{0}=W_{in}x,\end{cases}$$

*where* $\phi_m(z)=\int_0^z h(mu)\,du$ *with* h *the sigmoid function* h(u) = (1 + e^{-u})^{-1}, ϕ *is the ReLU activation function, and* (B_t)_{t≥0} *is an* n*-dimensional Brownian motion. Then, conditionally on the fact that* ∥ϕ(X_0)∥ > 0*, we have that*

$$\forall t\geq0,\ \log\left(\frac{\|\phi_{m}(X_{t}^{m})\|}{\|\phi_{m}(X_{0}^{m})\|}\right)\stackrel{L^{1}}{\longrightarrow}\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{0})\|}\right).$$

Proof. Let t > 0. From Lemma 8, we know that X^m converges in L² to X. Using Lemma 9 and the fact that ReLU is 1-Lipschitz, we obtain

$$\mathbb{E}\|\phi_{m}(X_{t}^{m})-\phi(X_{t})\|^{2}\leq\frac{2n}{m^{2}}+2\mathbb{E}\|X_{t}^{m}-X_{t}\|^{2},$$

which implies that ϕ_m(X^m_t) converges in L² to ϕ(X_t). In particular, the convergence holds in probability.
Using this fact together with the continuous mapping theorem, we obtain that

$$\forall t\geq0,\ \log\left(\|\phi_{m}(X_{t}^{m})\|\right)\stackrel{\mathbb{P}}{\longrightarrow}\log\left(\|\phi(X_{t})\|\right).\tag{9}$$

Let us show the following:

$$\forall t\geq0,\ \log\left(\frac{\|\phi_{m}(X_{t}^{m})\|}{\|\phi_{m}(X_{0}^{m})\|}\right)\stackrel{\mathbb{P}}{\longrightarrow}\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{0})\|}\right).$$

Let ϵ > 0 and t > 0. We have

$$\begin{aligned}\mathbb{P}\left(\left|\log\left(\frac{\|\phi_{m}(X_{t}^{m})\|}{\|\phi_{m}(X_{0}^{m})\|}\right)-\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{0})\|}\right)\right|\geq\epsilon\right)&\leq\mathbb{P}\left(\left|\log\left(\|\phi_{m}(X_{t}^{m})\|\right)-\log\left(\|\phi(X_{t})\|\right)\right|\geq\epsilon/2\right)\\&\quad+\mathbb{P}\left(\left|\log\left(\|\phi_{m}(X_{0}^{m})\|\right)-\log\left(\|\phi(X_{0})\|\right)\right|\geq\epsilon/2\right),\end{aligned}$$

where the first term converges to zero by Eq. (9), and the second term converges to zero by Lemma 9. Hence, the convergence in probability holds. To conclude, it suffices to show that the sequence of random variables $Y_t^m=\log\left(\frac{\|\phi_m(X_t^m)\|}{\|\phi_m(X_0^m)\|}\right)$, m ≥ 1, is uniformly integrable. Let K > 0.
From the proof of Lemma 2, applied with $\phi_m$ in place of $\phi$, we have that

$$Y_{t}^{m}=\frac{1}{\sqrt{n}}\int_{0}^{t}\mu(X_{s}^{m})ds+\frac{1}{2n}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s}^{m})dB_{s}^{i},$$

where

$$\sigma_{i}(X_{s}^{m})=\frac{|\phi_{m}^{\prime}(X_{s}^{m,i})\phi_{m}(X_{s}^{m,i})|}{\|\phi_{m}(X_{s}^{m})\|},\quad\mu(X_{s}^{m})=\frac{1}{2}\sum_{i=1}^{n}\left(\phi_{m}^{\prime\prime}(X_{s}^{m,i})\phi_{m}(X_{s}^{m,i})+\phi_{m}^{\prime}(X_{s}^{m,i})^{2}\right)-\frac{\|\phi_{m}^{\prime}(X_{s}^{m})\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}.$$

Therefore,

$$\mathbb{E}|Y_{t}^{m}|^{2}\leq\frac{2}{n}\mathbb{E}\left(\int_{0}^{t}\mu(X_{s}^{m})ds\right)^{2}+\frac{1}{2n^{2}}\mathbb{E}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s}^{m})^{2}ds\leq\frac{2t}{n}\int_{0}^{t}\mathbb{E}\mu(X_{s}^{m})^{2}ds+\frac{1}{2n^{2}}\mathbb{E}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s}^{m})^{2}ds,\tag{10}$$

where we have used $(a+b)^2\leq2a^2+2b^2$, the Itô isometry, and the Cauchy–Schwarz inequality. Using the conditions on $\phi_m$, it is straightforward that the term $\frac{1}{2n^{2}}\mathbb{E}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s}^{m})^{2}ds$ is uniformly bounded. It remains to bound the first term. Similarly to the proof of Theorem 1, we condition on the regions of $|X_{s}^{m,i}|$ and obtain that the terms $\mathbb{E}\mu(X_{s}^{m})^{2}$ are uniformly bounded over m (we omit the proof here as it is just a repetition of the techniques used in the proof of Theorem 1). Therefore, we have that $\sup_{m\geq1}\mathbb{E}|Y_{t}^{m}|^{2}<\infty$, which implies uniform integrability. This concludes the proof.

Lemma 11. *Let* $x\in\mathbb{R}^d$ *such that* $x\neq0$*, and consider the stochastic process* $X$ *given by*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}=W_{in}x,$$

*where* $\phi$ *is the ReLU activation function, and* $(B_t)_{t\geq0}$ *is an* $n$*-dimensional Brownian motion independent from* $X_0$*. Then, conditionally on the fact that* $\|\phi(X_0)\|>0$*, we have that for all* $s\in[0,1]$, $i\in[n]$,

$$\mathbb{P}(|X_{s}^{i}|\leq\delta)=\mathcal{O}_{\delta\to0}(\delta),$$

*where the bound holds uniformly over* $s\in[0,1]$.

Proof.
We have that

$$X_{s}^{i}=X_{0}^{i}+\frac{1}{\sqrt{n}}\int_{0}^{s}\|\phi(X_{u})\|dB_{u}^{i}.$$

Since $\|\phi(X_u)\|>0$ for all u ≥ 0 almost surely, and by the independence of B and $X_0$, we can easily see that $X^i_s$ has no Dirac mass and is the sum of two continuous random variables (not independent), $X^i_0$ and $\frac{1}{\sqrt{n}}\int_0^s\|\phi(X_u)\|dB^i_u$, that have bounded density functions; thus $X^i_s$ has a bounded density function $h_s$. Hence, writing $\mathbb{P}(|X^i_s|\leq\delta)=\int_{-\delta}^{\delta}h_s(t)dt=\mathcal{O}(\delta)$ concludes the proof. The bound can be taken uniformly over $s\in[0,1]$ by taking $\sup_{s\in[0,1]}\|h_s\|_{\infty}$.

## D The Ornstein-Uhlenbeck (OU) Process

The OU process is the (unique) strong solution of the following diffusion

$$dX_{t}=a(b-X_{t})dt+\sigma dB_{t},\tag{11}$$

where $a,b,\sigma\in\mathbb{R}$ are constants, and B is a one-dimensional Brownian motion. In financial mathematics, the OU process is used as a model of short-term interest rates under the name of the Vasicek model. The OU process has a closed-form expression and its marginal distribution is Gaussian. The next lemma gives a full characterization of the marginal distributions of an OU process.

Lemma 12. *Eq.* (11) *admits the following solution*

$$X_{t}=X_{0}e^{-at}+b(1-e^{-at})+\sigma\int_{0}^{t}e^{-a(t-s)}dB_{s}.$$

*As a result, we have the following:*

- $X_t$ *is Gaussian.*
- $\mathbb{E}[X_t]=X_0e^{-at}+b(1-e^{-at})$.
- $\mathrm{Cov}(X_t,X_s)=\frac{\sigma^2}{2a}\left(e^{-a|t-s|}-e^{-a(t+s)}\right)$.

Proof. Consider the process $Z_t=e^{at}X_t$. Using Itô's lemma, we have that

$$dZ_{t}=aZ_{t}dt+e^{at}dX_{t}=abe^{at}dt+\sigma e^{at}dB_{t}.$$

Integrating between 0 and t yields

$$Z_{t}=Z_{0}+b(e^{at}-1)+\sigma\int_{0}^{t}e^{as}dB_{s}.$$

We conclude by multiplying both sides with $e^{-at}$. The result for $\mathbb{E}[X_t]$ is straightforward since $\mathbb{E}\left[\int_0^t e^{-a(t-s)}dB_s\right]=0$ by the properties of the Itô integral and the Brownian motion.
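As a numerical aside (not part of the proof), the closed-form mean and variance of Lemma 12 can be checked against an Euler–Maruyama discretisation of Eq. (11); all constants and step sizes below are illustrative choices.

```python
import numpy as np

# Euler-Maruyama simulation of the OU process dX = a(b - X) dt + sigma dB.
# Lemma 12: E[X_T] = X0 e^{-aT} + b(1 - e^{-aT}) and
#           Var[X_T] = sigma^2/(2a) * (1 - e^{-2aT}).
rng = np.random.default_rng(0)
a, b, sigma, x0 = 1.5, 0.5, 0.3, 2.0   # illustrative constants
T, n_steps, n_paths = 1.0, 1000, 20000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += a * (b - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

empirical_mean, empirical_var = x.mean(), x.var()
mean_th = x0 * np.exp(-a * T) + b * (1 - np.exp(-a * T))
var_th = sigma**2 / (2 * a) * (1 - np.exp(-2 * a * T))
print(empirical_mean, mean_th)  # the two should agree up to Monte-Carlo error
print(empirical_var, var_th)
```

The agreement is up to the Monte-Carlo error $\mathcal{O}(n_{\text{paths}}^{-1/2})$ and the discretisation bias $\mathcal{O}(dt)$.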
For the covariance, without loss of generality assume that t > s ≥ 0. We have that

$$\mathrm{Cov}(X_{t},X_{s})=\sigma^{2}\mathbb{E}\left[\int_{0}^{t}e^{-a(t-u)}dB_{u}\int_{0}^{s}e^{-a(s-u)}dB_{u}\right]=\sigma^{2}\mathbb{E}\left[\int_{0}^{s}e^{-a(t-u)}dB_{u}\int_{0}^{s}e^{-a(s-u)}dB_{u}\right]=\sigma^{2}\int_{0}^{s}e^{-a(t+s-2u)}du=\frac{\sigma^{2}}{2a}\left(e^{-a(t-s)}-e^{-a(t+s)}\right),$$

which completes the proof.

We would like to find sufficient conditions on the activation function ϕ and a function g such that the process $g(X_t)$ (Eq. (8)) follows the OU dynamics. For this purpose, we proceed by reverse-engineering the problem; using Itô's lemma (Eq. (4)), this is satisfied when there exist constants $a,b,\sigma$ such that

$$\begin{cases}\mu(y)=\frac{1}{2n}\phi(y)^{2}g^{\prime\prime}(y)=a(b-g(y))\\ \sigma(y)=\frac{1}{\sqrt{n}}\phi(y)g^{\prime}(y)=\sigma.\end{cases}$$

This implies that $\frac{g^{\prime\prime}(y)}{g^{\prime}(y)^{2}}=2a\sigma^{-2}(b-g(y))$. Letting $G=\int g$ be the primitive function of g, we obtain that G satisfies a differential equation of the form

$$\frac{1}{G^{\prime\prime}(y)}=\alpha y+\beta G(y)+\zeta,$$

where $\alpha,\beta,\zeta\in\mathbb{R}$ are constants. Let us consider the case where $\alpha=\zeta=0$ and $\beta\neq0$, i.e. $G^{\prime\prime}(y)=\frac{1}{\beta G(y)}$. Multiplying both sides by $G^{\prime}$ and integrating, we obtain $\frac{1}{2}G^{\prime}(y)^{2}=\beta^{-1}\log(|G(y)|)+\gamma$. A sufficient condition for this to hold is to have $G>0$ and G satisfying

$$\frac{G^{\prime}(y)}{\sqrt{\log(G(y))+\gamma}}=\zeta$$

for some constants $\zeta,\gamma$ (the constants are redefined here).
Integrating the left-hand side yields

$$\int_{-\infty}^{y}\frac{G^{\prime}(u)}{\sqrt{\log(G(u))+\gamma}}du=\int_{-\infty}^{G(y)}\frac{1}{\sqrt{\log(u)+\gamma}}du=\alpha\operatorname{Erfi}(\sqrt{\log(G(y))+\gamma})+\beta,$$

where Erfi is the imaginary error function⁹ given by

$$\operatorname{Erfi}(z)=\frac{2}{\sqrt{\pi}}\int_{0}^{z}e^{t^{2}}dt.$$

To alleviate the notation, we denote h := Erfi in the rest of this section. From the above, G should have the form

$$G(y)=\exp\left(\zeta+\left(h^{-1}(\alpha y+\beta)\right)^{2}\right),$$

where $\alpha,\beta,\zeta$ are all constants, and $h^{-1}$ is the inverse function of the imaginary error function. We conclude that the activation function ϕ should have the form

$$\phi(y)=\frac{2\sigma}{\alpha^{2}\pi}\exp(-\zeta+h^{-1}(\alpha y+\beta)^{2}).$$

In this case, the coefficients a and b are given by

$$b=0,\quad a=\frac{\sigma^{2}}{\alpha^{2}\pi}\exp(-2\zeta).$$

Letting $g=G^{\prime}$, the process $g(X_t)$ has the following dynamics

$$dg(X_{t})=-ag(X_{t})dt+\sigma dB_{t}.$$

Hence $g(X_t)$ is an OU process, and we can conclude that the network output in the infinite-depth limit $X_1$ satisfies

$$g(X_{1})\sim\mathcal{N}\left(g(X_{0})e^{-a},\frac{\sigma^{2}}{2a}\left(1-e^{-2a}\right)\right).$$

We can then infer the distribution of $X_1$ by a simple change of variable. Note that this distribution is non-trivial, and unlike the infinite-width limit of the same ResNet (Hayou et al. (2021)) where the distribution is Gaussian, here the distribution of the pre-activations is directly impacted by the choice of the activation function ϕ. However, with this particular choice of the activation function ϕ, the existence of the process X can only be proven in the local sense, because ϕ is only locally Lipschitz. Let us first show this in the next lemma. We will see how we can mitigate this issue later.

⁹Although the name might be misleading, the imaginary error function is real when the input is real.

Lemma 13.
*Let* $\phi:\mathbb{R}\to\mathbb{R}$ *be defined by*

$$\phi(y)=\exp(h^{-1}(\alpha y+\beta)^{2}),$$

*where* $\alpha,\beta\in\mathbb{R}$ *are two constants. We have that* ϕ *is locally Lipschitz, meaning that for any compact set* $K\subset\mathbb{R}$*, there exists* $C_K$ *such that* $\forall x,x^{\prime}\in K$, $|\phi(x^{\prime})-\phi(x)|\leq C_{K}|x^{\prime}-x|$.

Proof. It suffices to show that the derivative of ϕ is locally bounded to conclude. We have that

$$\phi^{\prime}(y)=2\alpha(h^{-1})^{\prime}(\alpha y+\beta)h^{-1}(\alpha y+\beta)\exp(h^{-1}(\alpha y+\beta)^{2})=\alpha\sqrt{\pi}\,h^{-1}(\alpha y+\beta).$$

Since $h^{-1}$ is continuous on $\mathbb{R}$, $\phi^{\prime}$ is bounded on any compact set of $\mathbb{R}$, which concludes the proof.

Now we can rigorously prove the following result.

Proposition 6. *Let* $x\in\mathbb{R}$ *such that* $x\neq0$*. Consider the following activation function* ϕ

$$\phi(y)=\frac{2\sigma}{\alpha^{2}\pi}\exp(-\zeta+h^{-1}(\alpha y+\beta)^{2}),$$

*where* $\alpha,\beta\in\mathbb{R}$ *and* $\sigma,\zeta>0$ *are constants. Let* g *be the function defined by*

$$g(y)=\alpha\sqrt{\pi}\exp(\zeta)h^{-1}(\alpha y+\beta).$$

*Consider the stochastic process* $X_t$ *defined by*

$$dX_{t}=|\phi(X_{t})|dB_{t},\quad X_{0}=W_{in}x.$$

*Then, we have that for all* $t\in[0,1]$,

$$g(X_{t})\sim\mathcal{N}\left(g(X_{0})e^{-at},\frac{\sigma^{2}}{2a}(1-e^{-2at})\right),$$

*where* $a=\frac{\sigma^{2}}{\alpha^{2}\pi}\exp(-2\zeta)$.

Proof. For N > 0, consider the stopping time $\tau_N$ defined by $\tau_{N}=\inf\{t\geq0:|X_{t}|\geq N\}$. Using the continuity of the paths of X, it is straightforward that $\lim_{N\to\infty}\tau_N=\infty$ almost surely. Let N > 0 be large enough. The SDE satisfied by the process X has a unique strong solution for $t\in[0,\tau_N)$ since the activation function ϕ is Lipschitz on the interval $(-N,N)$. By applying Itô's lemma for $t\in(0,\tau_N)$, we have that $dg(X_{t})=-ag(X_{t})dt+\sigma dB_{t}$ (from the previous results).
Using the fact that $\lim_{N\to\infty}\tau_N=\infty$ almost surely, and taking N large enough, we obtain that for all $t\in(0,1]$,

$$dg(X_{t})=-ag(X_{t})dt+\sigma dB_{t},$$

and we conclude using Lemma 12. $\square$

## E The Geometric Brownian Motion (GBM)

The GBM dynamics refers to stochastic differential equations of the form

$$dX_{t}=aX_{t}dt+\sigma X_{t}dB_{t},\tag{12}$$

where $a,\sigma$ are constants and B is a one-dimensional Brownian motion. This SDE has played a crucial role in financial mathematics and is often used as a model of stock prices. It admits a closed-form solution given in the next lemma.

Lemma 14. *Eq.* (12) *admits the following solution*

$$X_{t}=X_{0}\exp\left(\left(a-\frac{1}{2}\sigma^{2}\right)t+\sigma B_{t}\right).$$

*The distribution of* $X_t$ *is known as a log-Gaussian distribution. Moreover, the solution is unique.*

Proof. The existence and uniqueness of the solution follows from Theorem 4. Indeed, it suffices to have the drift and the volatility both Lipschitz to obtain the result, which is satisfied in the case of GBM. Now consider the process $Z_t=\log(X_t)$. Using Itô's lemma¹⁰, it is easy to verify that

$$dZ_{t}=\left(a-\frac{1}{2}\sigma^{2}\right)dt+\sigma dB_{t},$$

and we conclude by integrating both sides. $\square$

Now let us find sufficient conditions under which the infinite-depth network represented by the process X has a GBM behaviour. In order for this to hold, it suffices to have

$$\begin{cases}\mu(y)=\frac{1}{2n}\phi(y)^{2}g^{\prime\prime}(y)=ag(y)\\ \sigma(y)=\frac{1}{\sqrt{n}}\phi(y)g^{\prime}(y)=\sigma g(y).\end{cases}$$

This implies $\frac{g^{\prime\prime}}{g^{\prime2}}\propto\frac{1}{g}$, or equivalently $\frac{g^{\prime\prime}}{g^{\prime}}\propto\frac{g^{\prime}}{g}$, which in turn yields $\log(|g^{\prime}|)=\alpha\log(|g|)+\beta$, and therefore $|g^{\prime}|\propto|g|^{\zeta}$. Assuming that $g^{\prime},g>0$, we can easily verify that functions of the form $g(y)=\alpha(y+\beta)^{\gamma}$ where $\alpha,\beta,\gamma>0$ satisfy the requirements. Hence, the activation function should satisfy $\phi(y)=\sigma\gamma^{-1}(y+\beta)$, i.e.
the activation should be linear. In this case, we have $a=\frac{1}{2}\sigma^{2}\gamma^{-1}(\gamma-1)$ and the process $g(X_t)$ has the following GBM dynamics

$$dg(X_{t})=ag(X_{t})dt+\sigma g(X_{t})dB_{t}.$$

From Lemma 14, we conclude that

$$g(X_{1})\sim g(X_{0})\exp\left(\left(a-\frac{1}{2}\sigma^{2}\right)+\sigma B_{1}\right).$$

Observe that in the special case of $\gamma=1$, $\beta=0$, $\alpha=1$, we have $g(y)=y$ and $a=0$. In this case, we obtain $X_{1}\sim X_{0}\exp\left(-\frac{1}{2}\sigma^{2}+\sigma B_{1}\right)$. We summarize the previous results in the following proposition.

Proposition 7. *Let* $x\in\mathbb{R}$ *such that* $x\neq0$*. Consider the following activation function* ϕ

$$\phi(y)=\alpha y+\beta,$$

*where* $\alpha>0$, $\beta\in\mathbb{R}$ *are constants. Let* $\sigma>0$ *and define the function* g *by*

$$g(y)=(\alpha y+\beta)^{\gamma},$$

*where* $\gamma=\sigma\alpha^{-1}$*. Consider the stochastic process* $X_t$ *defined by*

$$dX_{t}=|\phi(X_{t})|dB_{t},\quad X_{0}=W_{in}x.$$

*Then, the process* $g(X_t)$ *satisfies the following GBM dynamics*

$$dg(X_{t})=ag(X_{t})dt+\sigma g(X_{t})dB_{t},$$

*where* $a=\frac{1}{2}\sigma^{2}\gamma^{-1}(\gamma-1)$*. As a result, we have that for all* $t\in[0,1]$,

$$g(X_{t})\sim g(X_{0})\exp\left(\left(a-\frac{1}{2}\sigma^{2}\right)t+\sigma B_{t}\right).$$

¹⁰Notice that here, $X_t$ should be positive in order to consider $\log(X_t)$. This is easy to show and the proof is similar to that of Lemma 15.

## F ReLU In The Case n = d = 1

Consider the process X given by the SDE

$$dX_{t}=\phi(X_{t})dB_{t},\quad t\in[0,1],\quad X_{0}>0,$$

where $\phi(z)=\max(z,0)$ for $z\in\mathbb{R}$ is the ReLU activation function. Note that we assume $X_0>0$ in this case; we will deal with the general case later in this section. It is straightforward that if $X_s\leq0$ for some $s\in[0,1]$, then for all $t\geq s$, $X_t=X_s$. This is because $dX_t=0\times dB_t$ whenever $X_t\leq0$. A rigorous justification is provided in Lemma 1. Hence, the event $\{X_s\leq0\}$ constitutes a stopping event after which the process becomes constant. We also say that 0 is an absorbing point of the process X.
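This absorption behaviour is easy to see numerically. The following sketch (illustrative, not part of the argument) discretises $dX_t=\max(X_t,0)\,dB_t$ with Euler–Maruyama: paths started at a negative value never move, while paths started at $X_0>0$ behave like a Geometric Brownian motion, so that $\mathbb{E}\log(X_1/X_0)\approx-1/2$.

```python
import numpy as np

# Euler-Maruyama discretisation of dX = max(X, 0) dB (ReLU diffusion, n = d = 1).
# Started below 0, the increment is identically 0, so the path is frozen;
# started above 0, the path is a GBM and log(X_1/X_0) has mean -1/2.
rng = np.random.default_rng(2)
T, n_steps, n_paths = 1.0, 1000, 40000
dt = T / n_steps

x_pos = np.full(n_paths, 1.0)    # paths started at X_0 = 1 > 0
x_neg = np.full(n_paths, -1.0)   # paths started at X_0 = -1 <= 0
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    x_pos += np.maximum(x_pos, 0.0) * dB
    x_neg += np.maximum(x_neg, 0.0) * dB   # increment is always 0 here

frozen = bool(np.all(x_neg == -1.0))      # negative starts never move
mean_log_growth = np.log(x_pos).mean()    # should be close to -T/2
print(frozen, mean_log_growth)
```

Note that the positive paths stay positive in this discretisation because a single Gaussian increment of size $-1$ (relative) is astronomically unlikely at this step size.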
A classic tool in stochastic calculus to deal with such situations is the notion of *stopping time*, which is a random variable that depends on the trajectory of X (or equivalently on the natural filtration $\mathcal{F}_t$ associated with the Brownian motion B). Consider the following stopping time

$$\tau=\inf\{t\in[0,1]\ \text{s.t.}\ X_{t}\leq0\}.\tag{13}$$

Observe that we have for all $t\in[0,\tau]$

$$dX_{t}=X_{t}dB_{t},$$

which implies that $X_t$ is a Geometric Brownian motion on the interval $[0,\tau]$. Hence, if $\tau>1$ (a.s.), the network output also has a log-normal distribution in the infinite-depth limit. In the next lemma, we show that $\tau=\infty$ with probability 1, which confirms the above.

Lemma 15. *Let* τ *be the stopping time defined by Eq.* (13)*. We have that*

$$\mathbb{P}(\tau=\infty)=1.$$

Proof. By continuity of the Brownian path and the ReLU function ϕ, the paths of the process X are also continuous¹¹, and we have that τ > 0 almost surely. From the observation above, taking the limit $t\to\tau^-$ and using the continuity, we obtain

$$X_{\tau}=X_{0}\exp\left(-\frac{1}{2}\tau+B_{\tau}\right).$$

For some $\omega\in\{\tau<\infty\}$, we have that $X_{\tau(\omega)}(\omega)=0$ (by continuity). Hence $-\frac{1}{2}\tau(\omega)+B_{\tau(\omega)}(\omega)=-\infty$. This happens with probability zero, which means that the event $\{\tau<\infty\}$ has probability zero. This concludes the proof.

Hence, with the ReLU activation function, given $X_0>0$, the network output is distributed as

$$X_{1}\sim X_{0}\exp\left(-\frac{1}{2}+B_{1}\right).$$

Now let us go back to the original setup for $X_0$. Recall that $X_0=W_{in}x$ for some $x\neq0$ and $W_{in}\sim\mathcal{N}(0,1)$. By conditioning on $X_0$ and observing that 0 is an absorbing point of the process X, we obtain that

$$X_{1}\sim\mathbb{1}_{\{X_{0}>0\}}X_{0}\exp\left(-\frac{1}{2}+B_{1}\right)+\mathbb{1}_{\{X_{0}\leq0\}}X_{0}.$$

We summarize these results in the next proposition.

Proposition 8.
*Let* $x\in\mathbb{R}$ *such that* $x\neq0$*, and let* ϕ *be the ReLU activation function given by* $\phi(z)=\max(z,0)$ *for all* $z\in\mathbb{R}$*. Consider the stochastic process* $X_t$ *defined by*

$$dX_{t}=\phi(X_{t})dB_{t},\quad X_{0}=W_{in}x.$$

*Then, the process* X *is a mixture of a Geometric Brownian motion and a constant process. More precisely, we have for all* $t\in[0,1]$

$$X_{t}\sim\mathbb{1}_{\{X_{0}>0\}}\,X_{0}\exp\left(-\frac{1}{2}t+B_{t}\right)+\mathbb{1}_{\{X_{0}\leq0\}}X_{0}.$$

*Hence, conditionally on* $X_0>0$*, the process* X *is a Geometric Brownian motion.*

## G Proof Of Lemma 2 And Lemma 3

Lemma 2. *Let* $x\in\mathbb{R}^d$ *such that* $x\neq0$*, and consider the stochastic process* X *given by the following SDE*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}=W_{in}x,$$

*where* $\phi:\mathbb{R}\to\mathbb{R}$ *is Lipschitz, injective,* $C^2(\mathbb{R})$*, satisfies* $\phi(0)=0$*, and* $\phi^{\prime}$ *and* $\phi^{\prime\prime}\phi$ *are bounded on* $\mathbb{R}$*, and* $(B_t)_{t\geq0}$ *is an* n*-dimensional Brownian motion independent from* $W_{in}\sim\mathcal{N}(0,d^{-1}I)$*. Let* τ *be the stopping time given by*

$$\tau=\min\{t\geq0:\|\phi(X_{t})\|=0\}.$$

*Then, we have that*

$$\mathbb{P}\left(\tau=\infty\right)=1.$$

¹¹This is a classic result in stochastic calculus. More rigorously, X can be chosen to have continuous paths with probability 1.

Proof. It is straightforward that with probability 1 we have $\|\phi(X_0)\|>0$, which implies that with probability 1, τ > 0. Let $t<\tau$.
Using Itô's lemma with the function $g(z)=\frac{1}{2}\log(\|\phi(z)\|^{2})$, we obtain

$$dg(X_{t})=\nabla g(X_{t})^{\top}dX_{t}+\frac{1}{2n}\|\phi(X_{t})\|^{2}\mathrm{Tr}(\nabla^{2}g(X_{t}))dt.$$

Therefore,

$$g(X_{t})-g(X_{0})=\frac{1}{\sqrt{n}}\int_{0}^{t}\mu(X_{s})ds+\frac{1}{2n}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s})dB_{s}^{i},$$

where $\sigma_{i}(X_{s})=\frac{|\phi^{\prime}(X_{s}^{i})\phi(X_{s}^{i})|}{\|\phi(X_{s})\|}$, and $\mu(X_{s})=\frac{1}{2}\sum_{i=1}^{n}\left(\phi^{\prime\prime}(X_{s}^{i})\phi(X_{s}^{i})+\phi^{\prime}(X_{s}^{i})^{2}\right)-\frac{\|\phi^{\prime}(X_{s})\circ\phi(X_{s})\|^{2}}{\|\phi(X_{s})\|^{2}}$, where ∘ refers to the Hadamard product of vectors, i.e. the coordinate-wise product. For some $\omega\in\{\tau<\infty\}$, using the path continuity of the process X and the continuity of g, we have that $\lim_{t\to\tau(\omega)^{-}}g(X_{t}(\omega))=-\infty$. Therefore, we should also have

$$\frac{1}{\sqrt{n}}\int_{0}^{\tau(\omega)}\mu(X_{s}(\omega))ds+\frac{1}{2n}\int_{0}^{\tau(\omega)}\sum_{i=1}^{n}\sigma_{i}(X_{s}(\omega))dB_{s}^{i}(\omega)=-\infty.$$

Hence, we have that

$$\mathbb{P}\left(\tau<\infty\right)\leq\mathbb{P}\left(\frac{1}{\sqrt{n}}\int_{0}^{t}\mu(X_{s})ds+\frac{1}{2n}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s})dB_{s}^{i}=-\infty\right)=\lim_{A\to\infty}\mathbb{P}\left(\frac{1}{\sqrt{n}}\int_{0}^{t}\mu(X_{s})ds+\frac{1}{2n}\int_{0}^{t}\sum_{i=1}^{n}\sigma_{i}(X_{s})dB_{s}^{i}\leq-A\right)=\lim_{A\to\infty}\mathbb{P}\left(\frac{1}{\sqrt{n}}\int_{0}^{t}\mu(X_{s})ds+\frac{1}{2n}\int_{0}^{t}\sigma(X_{s})d\hat{B}_{s}\leq-A\right),$$

where $\hat{B}$ is a one-dimensional Brownian motion, where we use Theorem 7, and $\sigma(X_{s})=\left(\sum_{i=1}^{n}\sigma_{i}(X_{s})^{2}\right)^{1/2}=\frac{\|\phi^{\prime}(X_{s})\circ\phi(X_{s})\|}{\|\phi(X_{s})\|}$. Using the conditions on ϕ, there exists a constant K > 0 such that $|\phi^{\prime}|\leq K$ and $|\phi^{\prime\prime}\phi|\leq K$.
With this we obtain for all $Z\in\mathbb{R}^n$

$$|\sigma(Z)|=\frac{\|\phi^{\prime}(Z)\circ\phi(Z)\|}{\|\phi(Z)\|}\leq K,$$

and

$$|\mu(Z)|=\left|\frac{1}{2}\sum_{i=1}^{n}\left(\phi^{\prime\prime}(Z^{i})\phi(Z^{i})+\phi^{\prime}(Z^{i})^{2}\right)-\frac{\|\phi^{\prime}(Z)\circ\phi(Z)\|^{2}}{\|\phi(Z)\|^{2}}\right|\leq\frac{1}{2}nK+\left(\frac{1}{2}n+1\right)K^{2}.$$

Hence, the random variable $\frac{1}{\sqrt{n}}\int_{0}^{t}\mu(X_{s})ds+\frac{1}{2n}\int_{0}^{t}\sigma(X_{s})d\hat{B}_{s}$ is finite with probability 1. We conclude that

$$\mathbb{P}\left(\tau=\infty\right)=1.$$

Lemma 3. *Consider the stochastic process* (8) *given by the SDE*

$$dX_{t}=\frac{1}{\sqrt{n}}\|\phi(X_{t})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}=W_{in}x,$$

*where* ϕ *is the ReLU activation function, and* $(B_t)_{t\geq0}$ *is an* n*-dimensional Brownian motion. Let* τ *be the stopping time given by*

$$\tau=\min\{t\geq0:\|\phi(X_{t})\|=0\}=\min\{t\geq0:\forall i\in[n],\ X_{t}^{i}\leq0\}.$$

*Then, we have that*

$$\mathbb{P}(\tau=\infty)=1-2^{-n}.$$

*As a result, we have that*

$$\mathbb{P}\left(\tau=\infty\,|\,\|\phi(X_{0})\|>0\right)=1.$$

Proof. Let $t_0>0$. Using Lemma 7, we know that if for some $t_1$, $\|\phi(X_{t_1})\|=0$, then for all $t\geq t_1$, we have that $X_t=X_{t_1}$ and $\|\phi(X_t)\|=0$. Hence, we have that

$$\mathbb{P}\left(\tau\leq t_{0}\,|\,\|\phi(X_{0})\|>0\right)=\mathbb{P}\left(\|\phi(X_{t_{0}})\|=0\,|\,\|\phi(X_{0})\|>0\right).$$

Let $m\geq1$ and consider the function $\phi_m(z)=\int_0^z h(mu)\,du$, where $h(t)=(1+e^{-t})^{-1}$ is the Sigmoid function¹². It is straightforward that $\phi_m$ satisfies the conditions of Lemma 2. Let $X^m$ be the solution of the following SDE (the solution exists and is unique since $\phi_m$ is trivially Lipschitz)

$$dX_{t}^{m}=\frac{1}{\sqrt{n}}\|\phi_{m}(X_{t}^{m})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}^{m}=W_{in}x.$$

We know from Lemma 8 that $X^m_t$ converges in $L^2$ to $X_t$ (uniformly over $t\in[0,T]$ for any $T>0$). In particular, this implies convergence in distribution.
Moreover, observe that for all t

$$\mathbb{E}\|\phi_{m}(X_{t}^{m})-\phi(X_{t})\|^{2}\leq\frac{2n}{m^{2}}+2\mathbb{E}\|X_{t}^{m}-X_{t}\|^{2},$$

where we used the triangular inequality and the upper bound from Lemma 9. Thus, we have that $\phi_m(X^m_t)$ converges in $L^2$ (and in distribution) to $\phi(X_t)$. Let $\delta_k=[1/(k+1),1/k)$ for $k\geq1$, and define $\delta_0=[1,\infty)$. For $m\geq1$, using Lemma 9, we have that

$$\mathbb{P}\left(\|\phi(X_{t_{0}})\|=0\,\cap\,\|\phi(X_{0})\|>0\right)\leq\sum_{k=0}^{\infty}\mathbb{P}\left(\|\phi_{m}(X_{t_{0}})\|\leq1/m\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right).$$

Given $k\geq0$, we have that for $m>n^{1/2}(k+1)$,

$$\mathbb{P}\left(\|\phi_{m}(X_{t_{0}})\|\leq1/m\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right)\leq\mathbb{P}\left(\|\phi_{m}(X_{t_{0}}^{m})\|\leq1/m+\log(m)/m\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right)+\mathbb{P}\left(\|\phi_{m}(X_{t_{0}}^{m})-\phi(X_{t_{0}})\|>\log(m)/m\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right)\tag{14}$$

Let us deal with the first term. Using Lemma 9, we have that

$$\mathbb{P}\left(\|\phi_{m}(X_{t_{0}}^{m})\|\leq1/m+\log(m)/m\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right)\leq\mathbb{P}\left(\|\phi_{m}(X_{t_{0}}^{m})\|\leq1/m+\log(m)/m\,\cap\,\|\phi_{m}(X_{0}^{m})\|\geq1/(k+1)-n^{1/2}m^{-1}\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right)\leq\mathbb{P}\left(\log\left(\frac{\|\phi_{m}(X_{t_{0}}^{m})\|}{\|\phi_{m}(X_{0}^{m})\|}\right)\leq-\log(m/(1+\log(m)))-\log((k+1)^{-1}-m^{-1}n^{1/2})\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right).$$

From Lemma 10, we know that the random variable $\log\left(\frac{\|\phi_m(X^m_{t_0})\|}{\|\phi_m(X^m_0)\|}\right)$ converges in $L^1$ and thus it is bounded in $L^1$ norm (over m). Therefore, a simple application of Markov's inequality yields that the probability above goes to 0 when m goes to ∞. The second term in Eq. (14) also converges to 0 using the $L^2$ convergence of $\phi_m(X^m_{t_0})$ to $\phi(X_{t_0})$ coupled with a simple application of Markov's inequality.
We therefore obtain that for all $k\geq0$,

$$\lim_{m\to\infty}\mathbb{P}\left(\|\phi_{m}(X_{t_{0}})\|\leq1/m\,\cap\,\|\phi(X_{0})\|\in\delta_{k}\right)=0.$$

Using the Dominated convergence theorem, we obtain that for all $t_0$, $\mathbb{P}\left(\|\phi(X_{t_{0}})\|=0\,\cap\,\|\phi(X_{0})\|>0\right)=0$, which implies that $\mathbb{E}\left[\mathbb{1}_{\{\tau>t\}}\,|\,\|\phi(X_{0})\|>0\right]=1$ for all $t>0$. Another application of the Dominated convergence theorem yields the result. The second part is straightforward by observing that $\mathbb{P}\left(\|\phi(X_{0})\|=0\right)=2^{-n}$.

¹²Note that $\phi_m$ has a closed-form formula given by $\phi_m(z)=m^{-1}(\log(1+e^{mz})-\log(2))$, which can be seen as a shifted and scaled version of the Softplus function. However, we do not need the closed-form formula in our analysis.

## H Proof Of Theorem 1

Theorem 1. *We have that for all* $t\in[0,1]$,

$$\|\phi(X_{t})\|=\|\phi(X_{0})\|\exp\left(\frac{1}{\sqrt{n}}\hat{B}_{t}+\frac{1}{n}\int_{0}^{t}\mu_{s}ds\right),\quad\text{almost surely},$$

*where* $\mu_s=\frac{1}{2}\|\phi^{\prime}(X_s)\|^2-1$*, and* $(\hat{B}_t)_{t\geq0}$ *is a one-dimensional Brownian motion. As a result, we have that for all* $0\leq s\leq t\leq1$,

$$\mathbb{E}\left[\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{s})\|}\right)\Big|\,\|\phi(X_{0})\|>0\right]=\left(\frac{1-2^{-n}}{4}-\frac{1}{n}\right)(t-s).$$

*Moreover, for* $n\geq2$*, we have*

$$\mathrm{Var}\left[\log\left(\frac{\|\phi(X_{t})\|}{\|\phi(X_{s})\|}\right)\Big|\,\|\phi(X_{0})\|>0\right]\leq\left(n^{-1/2}+\Gamma_{s,t}^{1/2}\right)^{2}(t-s),$$

*where* $\Gamma_{s,t}=\frac{1}{4}\int_{s}^{t}\left(\left(\mathbb{E}\phi^{\prime}(X_{u}^{1})\phi^{\prime}(X_{u}^{2})-\frac{(1-2^{-n})^{2}}{4}\right)+n^{-1}\left(\frac{1-2^{-n}}{2}-\mathbb{E}\phi^{\prime}(X_{u}^{1})\phi^{\prime}(X_{u}^{2})\right)\right)du$.

Proof. Let $t\in[0,1]$. Let us first consider the case where $\|\phi(X_0)\|=0$. Then for all t we have $\|\phi(X_t)\|=0$ and the result is trivial. We now consider the case where $\|\phi(X_0)\|>0$ (which happens with probability $1-2^{-n}$); all the expectations in this proof are conditional on this event.
Consider the function $g:\mathbb{R}^n\to\mathbb{R}$ given by

$$g(x)=\log(\|\phi(x)\|)=\frac{1}{2}\log(\|\phi(x)\|^{2}).$$

Ideally, we would like to use Itô's lemma and Lemma 3, which ensures that $\|\phi(X_t)\|$ remains positive on [0, 1], and obtain for all $t\in[0,1]$

$$dg(X_{t})\stackrel{dist}{=}\frac{1}{\sqrt{n}}d\hat{B}_{t}+\frac{1}{n}\mu_{t}dt,$$

where $\mu_t$ is some well-defined quantity. This would let us conclude. However, Itô's lemma requires that the function be $C^2(\mathbb{R}^n)$, which is violated by our choice of g. To mitigate this issue, we consider a sequence of functions $(g_m)_{m\geq1}$ that approximates the function g when m goes to infinity. For $m\geq1$, let $g_m$ be defined by

$$g_{m}(x)=\frac{1}{2}\log(\|\phi_{m}(x)\|^{2}),$$

where $\phi_m(t)=\int_0^t h(mu)\,du$ and $h(t)=(1+e^{-t})^{-1}$ is the Sigmoid function. We have that

$$\begin{cases}\frac{\partial g_{m}}{\partial x_{i}}(x)=\frac{h(mx_{i})\phi_{m}(x_{i})}{\|\phi_{m}(x)\|^{2}}\\ \frac{\partial^{2}g_{m}}{\partial x_{i}^{2}}(x)=\frac{mh(mx_{i})(1-h(mx_{i}))\phi_{m}(x_{i})+h(mx_{i})^{2}}{\|\phi_{m}(x)\|^{2}}-2\frac{h(mx_{i})^{2}\phi_{m}(x_{i})^{2}}{\|\phi_{m}(x)\|^{4}}\end{cases}$$

Let $X^m$ be the solution of the following SDE

$$dX_{t}^{m}=\frac{1}{\sqrt{n}}\|\phi_{m}(X_{t}^{m})\|dB_{t}\,,\quad t\in[0,\infty),\quad X_{0}^{m}=W_{in}x.$$

Using Itô's lemma, we have that

$$dg_{m}(X_{s}^{m})=\frac{1}{n}\mu_{s}^{m}ds+\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\sigma_{s}^{m,i}dB_{s}^{i},$$

where

$$\mu_{s}^{m}=\frac{1}{2}\sum_{i=1}^{n}\left(mh(mX_{s}^{m,i})(1-h(mX_{s}^{m,i}))\phi_{m}(X_{s}^{m,i})+h(mX_{s}^{m,i})^{2}\right)-\frac{\|h(mX_{s}^{m})\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}},$$

and $\sigma_{s}^{m,i}=\frac{|h(mX_{s}^{m,i})\phi_{m}(X_{s}^{m,i})|}{\|\phi_{m}(X_{s}^{m})\|}$. By Lemma 10, we know that $g_m(X^m_s)-g_m(X^m_0)$ converges in $L^1$ to $g(X_s)-g(X_0)$. Let us now compute the limit of $g_m(X^m_s)-g_m(X^m_0)$ from the equation above to conclude.
More precisely, let us show that for all $s\in(0,1)$,

$$\lim_{m\to\infty}\mathbb{E}\left|g_{m}(X_{s}^{m})-g_{m}(X_{0}^{m})-(Z_{s}-Z_{0})\right|=0,$$

where Z is the process given by

$$Z_{t}=g(X_{0})+\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{t}\sigma_{s}^{i}dB_{s}^{i}+\frac{1}{n}\int_{0}^{t}\mu_{s}ds,$$

where $\mu_s=\frac{1}{2}\|\phi^{\prime}(X_s)\|^2-1$, and $\sigma^i_s=\frac{\phi(X^i_s)}{\|\phi(X_s)\|}$. Let $t\in(0,1]$. Using the triangular inequality, the Itô isometry, and the Cauchy–Schwarz inequality, we have that

$$\mathbb{E}\left|g_{m}(X_{t}^{m})-g_{m}(X_{0}^{m})-(Z_{t}-Z_{0})\right|\leq\frac{1}{n}\int_{0}^{t}\mathbb{E}\left|\mu_{s}^{m}-\mu_{s}\right|ds+\frac{1}{\sqrt{n}}\mathbb{E}\left|\int_{0}^{t}\sum_{i=1}^{n}(\sigma_{s}^{i}-\sigma_{s}^{m,i})dB_{s}^{i}\right|\leq\frac{1}{n}\int_{0}^{t}\mathbb{E}\left|\mu_{s}^{m}-\mu_{s}\right|ds+\frac{1}{\sqrt{n}}\left(\mathbb{E}\int_{0}^{t}\sum_{i=1}^{n}(\sigma_{s}^{i}-\sigma_{s}^{m,i})^{2}ds\right)^{1/2}\tag{15}$$

We first deal with the term $\int_{0}^{t}\mathbb{E}\left|\mu_{s}^{m}-\mu_{s}\right|ds$ and show that it converges to 0. Let us show the following,

$$\forall s>0,\ \lim_{m\to\infty}\mathbb{E}\left|\mu_{s}^{m}-\mu_{s}\right|=0.$$

Let $s\in[0,1]$. We have that

$$\mu_{s}^{m}=\frac{1}{2}\sum_{i=1}^{n}J_{i}^{m}-G^{m},$$

where $J_{i}^{m}=mh(mX_{s}^{m,i})(1-h(mX_{s}^{m,i}))\phi_{m}(X_{s}^{m,i})+h(mX_{s}^{m,i})^{2}$, and $G^{m}=\frac{\|h(mX_{s}^{m})\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}$. Let us start with the term $G^m$. Observe that $G^m\leq1$ almost surely.
We have that

$$\mathbb{E}|1-G^{m}|=\mathbb{E}\left[\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}\mathbb{1}_{\{\min_{i}|X_{s}^{m,i}|\geq\log(m)/m\}}\right]+\mathbb{E}\left[\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}\mathbb{1}_{\{\min_{i}|X_{s}^{m,i}|<\log(m)/m\}}\right].$$

When $\min_i|x^i|\geq\log(m)/m$, we have that for all $i\in[n]$

$$(1-h(mx^{i})^{2})\leq2(1-h(mx^{i}))\leq2\exp(-m\times\log(m)/m)=2m^{-1}.$$

Therefore,

$$\mathbb{E}\left[\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}\mathbb{1}_{\{\min_{i}|X_{s}^{m,i}|\geq\log(m)/m\}}\right]\leq2m^{-1}.$$

For the remaining term, using the fact that $1-h^2\leq1$, we have that

$$\mathbb{E}\left[\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}\mathbb{1}_{\{\min_{i}|X_{s}^{m,i}|<\log(m)/m\}}\right]\leq\mathbb{P}\left(\min_{i}|X_{s}^{m,i}|<\log(m)/m\right)\leq\sum_{i=1}^{n}\mathbb{P}\left(|X_{s}^{m,i}|<\log(m)/m\right).$$

Now using Lemma 8, we have that

$$\mathbb{P}(\|X_{s}^{m}-X_{s}\|\geq\log(m)/m)\leq\frac{C_{n}}{\log(m)^{2}},$$

for some constant $C_n$ that depends on n. Therefore, for all i, we have

$$\mathbb{P}(|X_{s}^{m,i}|<\log(m)/m)\leq\mathbb{P}(|X_{s}^{i}|<2\log(m)/m)+\mathbb{P}(|X_{s}^{m,i}-X_{s}^{i}|\geq\log(m)/m)\leq\mathbb{P}(|X_{s}^{i}|<2\log(m)/m)+\frac{C_{n}}{\log(m)^{2}}.$$

Recall that $X_{s}^{i}=X_{0}^{i}+\frac{1}{\sqrt{n}}\int_{0}^{s}\|\phi(X_{u})\|dB_{u}^{i}$. By Lemma 11, we know that

$$\mathbb{P}(|X_{s}^{i}|<2\log(m)/m)=\mathcal{O}(\log(m)/m).$$

We conclude that $\lim_{m\to\infty}\mathbb{E}|1-G^{m}|=0$. Now let us show that for all i, $\lim_{m\to\infty}\mathbb{E}|J_{i}^{m}-\phi^{\prime}(X_{s}^{i})|=0$. Let $i\in[n]$ and $A_{s}^{m,i}=mh(mX_{s}^{m,i})(1-h(mX_{s}^{m,i}))\phi_{m}(X_{s}^{m,i})$.
We have that

$$\mathbb{E}|A_{s}^{m,i}|=\mathbb{E}\left[|A_{s}^{m,i}|\mathbb{1}_{\{|X_{s}^{m,i}|\leq2\log(m)/m\}}\right]+\mathbb{E}\left[|A_{s}^{m,i}|\mathbb{1}_{\{|X_{s}^{m,i}|>2\log(m)/m\}}\right]\leq m\times2\log(m)/m\times\mathbb{P}(|X_{s}^{m,i}|\leq2\log(m)/m)+m^{-1}\mathbb{E}|\phi_{m}(X_{s}^{m,i})|\leq2\log(m)\left(\mathbb{P}(|X_{s}^{i}|\leq3\log(m)/m)+\mathbb{P}(|X_{s}^{m,i}-X_{s}^{i}|\geq\log(m)/m)\right)+m^{-1}\mathbb{E}|\phi_{m}(X_{s}^{m,i})|\leq2\log(m)\,\mathbb{P}(|X_{s}^{i}|\leq3\log(m)/m)+\frac{2C_{n}}{\log(m)}+m^{-1}\mathbb{E}|\phi_{m}(X_{s}^{m,i})|,$$

where we have used Lemma 8 and Markov's inequality. Using Lemma 9 and the fact that the absolute value function is Lipschitz, we know that $\lim_{m\to\infty}\mathbb{E}|\phi_{m}(X_{s}^{m,i})|=\mathbb{E}|\phi(X_{s}^{i})|<\infty$. Therefore, the third term vanishes in the limit m → ∞. The second term $2C_{n}/\log(m)$ also vanishes. The first term also vanishes, using Lemma 11. Therefore, $\lim_{m\to\infty}\mathbb{E}|A_{s}^{m,i}|=0$.

Let us now deal with the last term in $J_{i}^{m}$. We have that

$$\mathbb{E}\left|h(mX_{s}^{m,i})^{2}-\phi^{\prime}(X_{s}^{i})\right|=\mathbb{E}\left[\left|h(mX_{s}^{m,i})^{2}-\phi^{\prime}(X_{s}^{i})\right|\mathbb{1}_{\{|X_{s}^{m,i}|\geq\log(m)/m\}}\right]+\mathbb{E}\left[\left|h(mX_{s}^{m,i})^{2}-\phi^{\prime}(X_{s}^{i})\right|\mathbb{1}_{\{|X_{s}^{m,i}|<\log(m)/m\}}\right]\leq3m^{-1}+2\mathbb{P}(|X_{s}^{m,i}|<\log(m)/m)\leq3m^{-1}+2\mathbb{P}(|X_{s}^{i}|<2\log(m)/m)+2\frac{C_{n}}{\log(m)^{2}}.$$

Using Lemma 11, we obtain that $\lim_{m\to\infty}\mathbb{E}|h(mX_{s}^{m,i})^{2}-\phi^{\prime}(X_{s}^{i})|=0$. Hence, we obtain that $\lim_{m\to\infty}\mathbb{E}|J_{i}^{m}-\phi^{\prime}(X_{s}^{i})|=0$. We conclude that $\lim_{m\to\infty}\mathbb{E}|\mu_{s}^{m}-\mu_{s}|=0$. Moreover, from the analysis above, it is easy to see that $\sup_{m\geq1,\,s\in(0,1]}\mathbb{E}|\mu_{s}^{m}-\mu_{s}|<\infty$.

We now deal with the second term $\left(\mathbb{E}\int_{0}^{t}\sum_{i=1}^{n}(\sigma_{s}^{i}-\sigma_{s}^{m,i})^{2}ds\right)^{1/2}$ from Eq. (15). For this part only, we define the stopping time $\tau_\epsilon$ for $\epsilon\in(0,\|\phi(X_0)\|\wedge\|\phi(X_0)\|^{-1})$ (recall that the analysis is conducted conditionally on the fact that $\|\phi(X_0)\|>0$) by

$$\tau_{\epsilon}=\inf\{t\geq0\ \text{s.t.}\ \|\phi(X_{t})\|\in[0,\epsilon]\cup[\epsilon^{-1},\infty)\}.$$

Notice that $\tau_\epsilon>0$ almost surely since $\|\phi(X_0)\|\in(\epsilon,\epsilon^{-1})$. Let $s\in(0,1]$.
We have that

$$\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\sum_{i=1}^{n}(\sigma_{s}^{i}-\sigma_{s}^{m,i})^{2}ds\leq2\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\sum_{i=1}^{n}\left(\frac{\phi(X_{s}^{i})}{\|\phi(X_{s})\|}-\frac{\phi_{m}(X_{s}^{m,i})}{\|\phi_{m}(X_{s}^{m})\|}\right)^{2}ds+2\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}ds.$$

The second term can be upper bounded in the following fashion

$$\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}ds\leq\int_{0}^{t}\mathbb{E}\frac{\|(1-h(mX_{s}^{m})^{2})^{1/2}\circ\phi_{m}(X_{s}^{m})\|^{2}}{\|\phi_{m}(X_{s}^{m})\|^{2}}ds=\int_{0}^{t}\mathbb{E}|1-G^{m}|ds,$$

where $G^m$ is defined above. We know that $\int_{0}^{t}\mathbb{E}|1-G^{m}|ds$ converges to 0 in the limit m → ∞ by the Dominated convergence theorem (the integrand is bounded). Let us show that the first term also vanishes. We have that

$$\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\sum_{i=1}^{n}\left(\frac{\phi(X_{s}^{i})}{\|\phi(X_{s})\|}-\frac{\phi_{m}(X_{s}^{m,i})}{\|\phi_{m}(X_{s}^{m})\|}\right)^{2}ds\leq2\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\sum_{i=1}^{n}\left(\frac{\phi(X_{s}^{i})-\phi_{m}(X_{s}^{m,i})}{\|\phi(X_{s})\|}\right)^{2}ds+2\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\sum_{i=1}^{n}\phi_{m}(X_{s}^{m,i})^{2}\left(\frac{1}{\|\phi(X_{s})\|}-\frac{1}{\|\phi_{m}(X_{s}^{m})\|}\right)^{2}ds\leq2\epsilon^{-2}\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\|\phi(X_{s})-\phi_{m}(X_{s}^{m})\|^{2}ds+2\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\sum_{i=1}^{n}\phi_{m}(X_{s}^{m,i})^{2}\left(\frac{1}{\|\phi(X_{s})\|}-\frac{1}{\|\phi_{m}(X_{s}^{m})\|}\right)^{2}ds.$$

The first term $2\epsilon^{-2}\mathbb{E}\int_{0}^{t\wedge\tau_{\epsilon}}\|\phi(X_{s})-\phi_{m}(X_{s}^{m})\|^{2}ds$ converges to 0 in the limit m → ∞ by Lemma 9 and Lemma 8. Let us deal with the second term. Define the event $E=\{\sup_{s\in(0,1]}\|X_{s}^{m}-X_{s}\|\leq\log(m)/m\}$ for m large enough such that $\log(m)/m<\epsilon$. Observe that on the event E, we have that for all $s\in(0,1]$, $\|\phi_{m}(X_{s}^{m})-\phi(X_{s})\|\leq(\sqrt{n}+\log(m))m^{-1}$.
Hence,
$$\mathbb{E}\,\mathbb{1}_E\int_0^{t\wedge\tau_\epsilon}\sum_{i=1}^n\phi_m(X^{m,i}_s)^2\left(\frac{1}{\|\phi(X_s)\|}-\frac{1}{\|\phi_m(X^m_s)\|}\right)^2 ds \leq \epsilon^{-2}(\sqrt{n}+\log(m))^2 m^{-2} \xrightarrow[m\to\infty]{} 0.$$
Moreover, letting $E^c$ be the complementary event of $E$, we have
$$\mathbb{E}\,\mathbb{1}_{E^c}\int_0^{t\wedge\tau_\epsilon}\sum_{i=1}^n\phi_m(X^{m,i}_s)^2\left(\frac{1}{\|\phi(X_s)\|}-\frac{1}{\|\phi_m(X^m_s)\|}\right)^2 ds \leq \mathbb{E}\,\mathbb{1}_{E^c}\int_0^{t\wedge\tau_\epsilon}\sum_{i=1}^n\phi_m(X^{m,i}_s)^2\left(\frac{2}{\|\phi(X_s)\|^2}+\frac{2}{\|\phi_m(X^m_s)\|^2}\right) ds \leq 2\epsilon^{-2}\,\mathbb{E}\,\mathbb{1}_{E^c}\int_0^{t\wedge\tau_\epsilon}\|\phi_m(X^m_s)\|^2 ds + 2\,\mathbb{P}(E^c).$$
Using the fact that $\|\phi_m(X^m_s)\| \leq \sqrt{n}/m + \|\phi(X_s)\| + \|X^m_s-X_s\|$ (by Lemma 9 and the fact that ReLU is Lipschitz), we obtain
$$\mathbb{E}\,\mathbb{1}_{E^c}\int_0^{t\wedge\tau_\epsilon}\|\phi_m(X^m_s)\|^2 ds \leq 3nm^{-2} + 3\sup_{s\leq 1}\mathbb{E}\|X^m_s-X_s\|^2 + 3\epsilon^{-2}\,\mathbb{P}(E^c).$$
The term $\sup_{s\leq 1}\mathbb{E}\|X^m_s-X_s\|^2$ converges to 0 by Lemma 8. Using Doob's martingale inequality on the submartingale $\|X^m_s-X_s\|$ (with respect to the natural filtration generated by the Brownian motion $B$)13, we obtain that
$$\mathbb{P}(E^c) \leq \frac{\mathbb{E}\|X^m_1-X_1\|^2}{m^{-2}\log(m)^2} = \frac{C_n}{\log(m)^2},$$
where we have used Lemma 8. We conclude that $\lim_{m\to\infty}\mathbb{E}\int_0^{t\wedge\tau_\epsilon}\sum_{i=1}^n(\sigma^i_s-\sigma^{m,i}_s)^2\,ds = 0$.

By observing that $\mathbb{E}\int_0^{t\wedge\tau_\epsilon}|\mu^m_s-\mu_s|\,ds \leq \int_0^t\mathbb{E}|\mu^m_s-\mu_s|\,ds$, a simple application of the Dominated convergence theorem yields $\lim_{m\to\infty}\mathbb{E}\int_0^{t\wedge\tau_\epsilon}|\mu^m_s-\mu_s|\,ds = 0$.
Hence, we proved that
$$\lim_{m\to\infty}\mathbb{E}\left|g_m(X^m_{t\wedge\tau_\epsilon}) - g_m(X^m_0) - (Z_{t\wedge\tau_\epsilon}-Z_0)\right| = 0.$$
From Lemma 10, we know that $g_m(X^m_{t\wedge\tau_\epsilon}) - g_m(X^m_0)$ converges in L1 to $g(X_{t\wedge\tau_\epsilon}) - g(X_0)$14, therefore $\mathbb{E}|g(X_{t\wedge\tau_\epsilon}) - g(X_0) - (Z_{t\wedge\tau_\epsilon}-Z_0)| = 0$, which implies that almost surely,
$$\log\left(\frac{\|\phi(X_{t\wedge\tau_\epsilon})\|}{\|\phi(X_0)\|}\right) = Z_{t\wedge\tau_\epsilon} - Z_0.$$
Recall that this holds for any $\epsilon$ small enough. Observe that $\tau_\epsilon$ is almost surely non-decreasing as we decrease $\epsilon$, hence $\tau_\epsilon$ has a limit almost surely. Using Lemma 3 and the continuity of the paths of $X_s$, we have that $\lim_{\epsilon\to 0^+}\tau_\epsilon = \infty$. Taking the limit $\epsilon\to 0^+$, we conclude that almost surely we have
$$\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_0)\|}\right) = Z_t - Z_0.$$

13The submartingale behaviour is a result of the convexity of the norm function and the fact that $X^m_s - X_s$ is a martingale, since $d(X^m_s - X_s) = \frac{1}{\sqrt{n}}(\phi_m(X^m_s)-\phi(X_s))dB_s$. A simple application of Jensen's inequality yields the result.

14The result of Lemma 10 holds when $t$ is replaced by $t\wedge\tau_\epsilon$ and the proof is exactly the same. We omit the proof to avoid redundancies.

Now observe that the coordinates of $X_t$ are identically distributed (not independent, since we condition on $\|\phi(X_0)\| > 0$). Thus, for all $i\in[n]$, $\mathbb{E}\phi'(X^i_s) = \mathbb{E}\phi'(X^1_s)$, where $X^1_s$ is the first coordinate of the vector $X_s$. Another key observation is that the event $\{X^1_s > 0\}$ is included in the event $\{\|\phi(X_0)\| > 0\}$ (Lemma 1). Hence, $\mathbb{P}(\{X^1_s > 0\}\cap\{\|\phi(X_0)\| > 0\}) = \mathbb{P}(X^1_s > 0)$, where the last term $\mathbb{P}(X^1_s > 0)$ is *free* from any conditioning on $\|\phi(X_0)\| > 0$. By observing that the random variable $X^1_s = X^1_0 + \frac{1}{\sqrt{n}}\int_0^s\|\phi(X_u)\|\,dB^1_u$ has a symmetric distribution around zero (by properties of $X^1_0$ and the Brownian motion $B$), we have that
$$\mathbb{P}(X^1_s > 0 \mid \|\phi(X_0)\| > 0) = \mathbb{P}(X^1_s > 0)\,\mathbb{P}(\|\phi(X_0)\| > 0)^{-1} = \frac{1}{2}(1-2^{-n})^{-1}.$$
We conclude that
$$\mathbb{E}\,\mu_s = \frac{n}{4}(1-2^{-n})^{-1} - 1,$$
which yields the desired result for the conditional mean by subtraction. Now let us deal with the variance. To alleviate the notation, we omit the conditioning on the event $\{\|\phi(X_0)\| > 0\}$; all the expectations below are taken conditionally on this event. Let $0\leq s\leq t\leq 1$ and let $\lambda > 0$. We have that
$$\mathrm{Var}\left[\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)\right] = \mathrm{Var}(Z_t-Z_s) \leq \frac{(1+\lambda^{-1})(t-s)}{n} + \frac{1+\lambda}{n^2}\,\mathbb{E}\left(\int_s^t(\mu_u-\mathbb{E}\mu_u)\,du\right)^2 \leq (t-s)\left(\frac{1+\lambda^{-1}}{n} + \frac{1+\lambda}{n^2}\int_s^t\mathrm{Var}\,\mu_u\,du\right),$$
where we have used the inequality $(a+b)^2 \leq (1+\lambda^{-1})a^2 + (1+\lambda)b^2$ and the Cauchy-Schwarz inequality. It remains to simplify $\mathrm{Var}\,\mu_u$. Let $p^u_1 = \mathbb{E}\phi'(X^1_u) = 2^{-1}(1-2^{-n})^{-1}$ and $p^u_2 = \mathbb{E}\phi'(X^1_u)\phi'(X^2_u)$. We have that
$$\mathbb{E}\mu_u^2 = \frac{1}{4}\mathbb{E}\|\phi'(X_u)\|^4 - \mathbb{E}\|\phi'(X_u)\|^2 + 1 = \frac{n(n-1)}{4}p^u_2 - \frac{3n}{4}p^u_1 + 1,$$
where we have used the exchangeability property of the family $\{\phi'(X^i_u),\ i=1,\ldots,n\}$. Thus, for the variance $\mathrm{Var}\,\mu_u$, we obtain
$$\mathrm{Var}\,\mu_u = \frac{n^2}{4}\left((p^u_2-(p^u_1)^2) + n^{-1}(p^u_1-p^u_2)\right).$$
Therefore,
$$\mathrm{Var}\left[\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)\right] \leq (t-s)\left(\frac{1+\lambda^{-1}}{n} + (1+\lambda)\Gamma_{s,t}\right),$$
where $\Gamma_{s,t} \stackrel{\text{def}}{=} \int_s^t\frac{1}{4}\left((p^u_2-(p^u_1)^2) + n^{-1}(p^u_1-p^u_2)\right)du$. Optimizing over $\lambda$ yields
$$\mathrm{Var}\left[\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)\right] \leq (t-s)\left(n^{-1/2}+\Gamma_{s,t}^{1/2}\right)^2.$$
The term $\Gamma_{s,t}$ can be shown to have $\mathcal{O}(n^{-1/2})$ asymptotic behaviour using tools from McKean-Vlasov theory. Thus, the variance term has (at most) $\mathcal{O}(n^{-1})$ behaviour.
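Both the conditioning probability $\mathbb{P}(\|\phi(X_0)\| > 0) = 1 - 2^{-n}$ and the shrinking-variance behaviour derived above are easy to probe numerically. The following Python sketch is an illustration only, not part of the proof; it assumes the discrete recursion $Y_{l+1} = Y_l + (nL)^{-1/2}W_l\,\mathrm{ReLU}(Y_l)$ (with $W_l$ of iid standard Gaussian entries) as the finite-depth counterpart of the dynamics, and checks that the empirical variance of $\log(\|\phi(Y_L)\|/\|\phi(Y_0)\|)$ shrinks markedly as the width grows:

```python
import numpy as np

def prob_conditioning_event(n, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(||relu(X_0)|| > 0) for X_0 with iid
    sign-symmetric Gaussian coordinates; the event fails only when all n
    coordinates are non-positive, which has probability 2^{-n}."""
    rng = np.random.default_rng(seed)
    x0 = rng.standard_normal((n_samples, n))
    return float(np.mean((x0 > 0).any(axis=1)))

def log_growth_variance(n, L=50, n_sims=300, seed=1):
    """Empirical variance of log(||relu(Y_L)|| / ||relu(Y_0)||) under the
    assumed recursion Y_{l+1} = Y_l + (n L)^{-1/2} W_l relu(Y_l),
    conditioned on ||relu(Y_0)|| > 0."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n_sims:
        y = rng.standard_normal(n)
        norm0 = np.linalg.norm(np.maximum(y, 0.0))
        if norm0 == 0.0:  # condition on ||phi(Y_0)|| > 0
            continue
        for _ in range(L):
            y = y + rng.standard_normal((n, n)) @ np.maximum(y, 0.0) / np.sqrt(n * L)
        norm_end = np.linalg.norm(np.maximum(y, 0.0))
        if norm_end > 0.0:  # the discrete scheme can (rarely) absorb at zero
            out.append(np.log(norm_end / norm0))
    return float(np.var(out))

p3 = prob_conditioning_event(n=3)      # theory: 1 - 2^{-3} = 0.875
var_narrow = log_growth_variance(n=4)
var_wide = log_growth_variance(n=64)   # should be markedly smaller
```

The depth $L = 50$ and the sample sizes are arbitrary choices for a quick check, not values used in the paper.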
## I Proof Of Theorem 2

In this section, we provide the proof of Theorem 2. We use the following law of large numbers, which does not require independence.

Theorem 8 (Corollary 3.1 in Sung et al. (2008)). Let $(Y^i_n)_{1\leq i\leq n,\,n\geq 1}$ *be a triangular array of random variables. Assume that the following holds:*

- $\sup_{n\geq 1}\frac{1}{n}\sum_{i=1}^n\mathbb{E}|Y^i_n| < \infty$;
- $\lim_{a\to\infty}\sup_{n\geq 1}\frac{1}{n}\sum_{i=1}^n\mathbb{E}|Y^i_n|\,\mathbb{1}_{\{|Y^i_n|>a\}} = 0$.

Then, we have that
$$\frac{1}{n}\sum_{i=1}^n\left(Y^i_n-\zeta^i_n\right) \longrightarrow 0,$$
where the convergence is in L1 and $\zeta^i_n = \mathbb{E}[Y^i_n \mid \mathcal{F}_{n,i-1}]$, with $\mathcal{F}_{n,j} = \sigma\{Y^k_n,\ 1\leq k\leq j\}$, *i.e. the sigma algebra generated by the variables* $\{Y^k_n,\ 1\leq k\leq j\}$, and $\mathcal{F}_{n,0} = \{\emptyset,\Omega\}$ *by definition.*

Let us now prove our result.

Theorem 2. For $0\leq s\leq t\leq 1$, *we have*
$$\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)\mathbb{1}_{\{\|\phi(X_0)\|>0\}} \xrightarrow[n\to\infty]{} \frac{t-s}{4}, \quad\text{and,}\quad \frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\,\mathbb{1}_{\{\|\phi(X_0)\|>0\}} \xrightarrow[n\to\infty]{} \exp\left(\frac{t-s}{4}\right),$$
where the convergence holds in L1. Moreover, we have that
$$\sup_{i\in[n]}\mathbb{E}\left(\sup_{t\in[0,1]}|X^i_t-\tilde{X}^i_t|^2\right) = \mathcal{O}(n^{-1}),$$
where $\tilde{X}^i_t$ is the solution of the following (McKean-Vlasov) SDE
$$d\tilde{X}^i_t = \left(\mathbb{E}\phi(\tilde{X}^i_t)^2\right)^{1/2}dB^i_t, \quad \tilde{X}^i_0 = X^i_0.$$
As a result, the pre-activations $Y^i_{\lfloor tL\rfloor}$ (Eq. (1)) *converge in distribution to a Gaussian distribution in the limit infinite-depth-then-infinite-width:*
$$\forall i\in[n],\ \ Y^i_{\lfloor tL\rfloor}\xrightarrow{L\to\infty\ \text{then}\ n\to\infty}\mathcal{N}(0, d^{-1}\|x\|^2\exp(t/2)).$$

Proof. Let $0\leq s\leq t\leq 1$.
From Theorem 1, we have that almost surely
$$\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|} = \exp\left(\frac{1}{\sqrt{n}}(\hat{B}_t-\hat{B}_s) + \frac{1}{n}\int_s^t\mu_u\,du\right).$$
We know that $\frac{1}{\sqrt{n}}(\hat{B}_t-\hat{B}_s)$ converges to zero almost surely (by continuity of Brownian paths) and in L1. Let us now deal with the second term $n^{-1}\int_s^t\mu_u\,du$. We have that $\frac{1}{n}\mu_u = \frac{1}{2}\cdot\frac{1}{n}\sum_{i=1}^n\phi'(X^i_u) - \frac{1}{n}$. Fix $u\in[s,t]$ and let $Z^i_n = \phi'(X^i_u)$ (recall that $X^i_u$ has an implicit dependence on $n$). Since $Z^i_n$ is uniformly bounded across $i$ and $n$, it is straightforward that the conditions of Theorem 8 are satisfied. Therefore, we have the following convergence in L1:
$$\frac{1}{n}\sum_{i=1}^n\left(Z^i_n-\zeta^i_n\right) \longrightarrow 0,$$
where $\zeta^i_n = \mathbb{E}[Z^i_n \mid \mathcal{F}_{n,i-1}]$. Recall from the proof of Theorem 1 that the event $\{X^i_u > 0\}$ is included in the event $\{\|\phi(X_0)\| > 0\}$. Another key observation that will allow us to conclude is that the distribution of $X^i_u$ given $\mathcal{F}_{n,i-1}$ is symmetric around 0, since the dependence is reflected only in the variance of the Brownian motion. Hence, $\zeta^i_n = \frac{1}{2}(1-2^{-n})^{-1}$ almost surely. Since $n^{-1}\sum_{i=1}^n\zeta^i_n = \frac{1}{2}(1-2^{-n})^{-1} \longrightarrow \frac{1}{2}$ (in L1), $n^{-1}\sum_{i=1}^n Z^i_n$ converges to $1/2$ in L1. Using the Dominated convergence theorem, we obtain the first result.

Let us now deal with the second result on the absolute growth factor. Let $N > 0$ and define the event
$$E_N = \left\{\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|} \leq \exp(N)\right\},$$
and let $E^c_N$ be its complementary event. For $N$ large enough, we have that
$$\mathbb{E}\left|\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}-\exp((t-s)/4)\right| = \mathbb{E}\left|\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}-\exp((t-s)/4)\right|\mathbb{1}_{E_N} + \mathbb{E}\left|\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}-\exp((t-s)/4)\right|\mathbb{1}_{E^c_N}$$
$$\leq \exp(N)\times\mathbb{E}\left|\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)-(t-s)/4\right| + \mathbb{E}\left|\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}-\exp((t-s)/4)\right|\mathbb{1}_{E^c_N}$$
$$\leq \exp(N)\times\mathbb{E}\left|\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)-(t-s)/4\right| + K\,\mathbb{P}(E^c_N),$$
where $K$ is a ($t$-dependent) constant and where we have used Theorem 1 to obtain that $\mathbb{E}\left[\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right]$ is finite.
Taking $n$ to infinity in the inequality above, we obtain that for $N$ large enough
$$\lim_{n\to\infty}\mathbb{E}\left|\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}-\exp((t-s)/4)\right| \leq K\,\mathbb{P}(E^c_N) \leq N^{-1}K\,\mathbb{E}\left|\log\left(\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}\right)\right|,$$
where we have used Markov's inequality. Since this is true for all $N$ large enough, we conclude that $\lim_{n\to\infty}\mathbb{E}\left|\frac{\|\phi(X_t)\|}{\|\phi(X_s)\|}-\exp((t-s)/4)\right| = 0$. The convergence to McKean-Vlasov dynamics is straightforward from Theorem 6, and the Gaussian distribution is given by Lemma 16.

## I.1 Some Technical Lemmas

Lemma 16. Let $x\in\mathbb{R}^d$ *such that* $x\neq 0$, and consider the real-valued (McKean-Vlasov) stochastic process $\tilde{X}$ *given by*
$$d\tilde{X}_t = \left(\mathbb{E}\phi(\tilde{X}_t)^2\right)^{1/2}dB_t, \quad t\in[0,\infty), \quad \tilde{X}_0 = \tilde{W}_{in}^\top x,$$
where $\phi$ is the ReLU activation function, $(B_t)_{t\geq 0}$ is a one-dimensional Brownian motion, and $\tilde{W}_{in}\sim\mathcal{N}(0, d^{-1}I)$. *We have the following:*
$$\forall t\geq 0,\ \tilde{X}_t\sim\mathcal{N}(0, d^{-1}\|x\|^2\exp(t/2)).$$

Proof. Let $t > 0$. From the SDE, it is clear that $\tilde{X}_t$ is Gaussian with zero mean and variance $\int_0^t\|\phi(\tilde{X}_s)\|^2_{L_2}\,ds$ (by Itô isometry). Since the distribution of $\tilde{X}_s$ is symmetric around zero, it is straightforward that for all $s > 0$, $\|\phi(\tilde{X}_s)\|^2_{L_2} = \frac{1}{2}\|\tilde{X}_s\|^2_{L_2}$. Using Itô's lemma, we obtain
$$d\tilde{X}_t^2 = 2\tilde{X}_t\,d\tilde{X}_t + \frac{1}{2}\|\tilde{X}_t\|^2_{L_2}\,dt.$$
Taking the expectation15 yields the following ordinary differential equation
$$d\|\tilde{X}_t\|^2_{L_2} = \frac{1}{2}\|\tilde{X}_t\|^2_{L_2}\,dt,$$
which has a closed-form solution given by
$$\|\tilde{X}_t\|^2_{L_2} = \|\tilde{X}_0\|^2_{L_2}\exp(t/2).$$
We conclude by observing that $\|\tilde{X}_0\|^2_{L_2} = \mathbb{E}\tilde{X}_0^2 = d^{-1}\|x\|^2$. $\square$

Lemma 17.
Let $x\in\mathbb{R}^d$ *such that* $x\neq 0$, let $m\geq 1$ be an integer, and consider the two real-valued (McKean-Vlasov) stochastic processes $\tilde{X}^m$ and $\tilde{X}$ *given by*
$$\begin{cases}d\tilde{X}^m_t = \left(\mathbb{E}\phi_m(\tilde{X}^m_t)^2\right)^{1/2}dB_t, & t\in[0,\infty), \quad \tilde{X}^m_0 = \tilde{W}_{in}^\top x,\\ d\tilde{X}_t = \left(\mathbb{E}\phi(\tilde{X}_t)^2\right)^{1/2}dB_t, & t\in[0,\infty), \quad \tilde{X}_0 = \tilde{W}_{in}^\top x,\end{cases}$$
where $\phi_m(z) = \int_0^z h(mu)\,du$, $h$ *is the Sigmoid function given by* $h(u) = (1+e^{-u})^{-1}$, $\phi$ *is the ReLU activation function,* $(B_t)_{t\geq 0}$ is a one-dimensional Brownian motion, and $\tilde{W}_{in}\sim\mathcal{N}(0, d^{-1}I)$. *We have the following:*
$$\forall t\geq 0,\ \mathbb{E}|\tilde{X}^m_t-\tilde{X}_t|^2 \leq \frac{2t}{m^2}e^{2t}.$$

Proof. The proof of Lemma 17 is similar to that of Lemma 8, with the only difference of replacing the Euclidean norm with the L2 norm in probability space. Let $t\geq 0$. We have that
$$\mathbb{E}|\tilde{X}^m_t-\tilde{X}_t|^2 = \mathbb{E}\left(\int_0^t(\|\phi_m(\tilde{X}^m_s)\|_{L_2}-\|\phi(\tilde{X}_s)\|_{L_2})\,dB_s\right)^2 = \int_0^t(\|\phi_m(\tilde{X}^m_s)\|_{L_2}-\|\phi(\tilde{X}_s)\|_{L_2})^2\,ds \leq \int_0^t\|\phi_m(\tilde{X}^m_s)-\phi(\tilde{X}_s)\|^2_{L_2}\,ds \leq \frac{2t}{m^2} + 2\int_0^t\|\tilde{X}^m_s-\tilde{X}_s\|^2_{L_2}\,ds,$$
where we have used the triangle inequality and Lemma 9. We conclude using Gronwall's lemma.

## J Proof Of Theorem 3

Theorem 3. Let $t\in[0,1]$. *Then, in the limit* $\lim_{L\to\infty}\lim_{n\to\infty}$ *(infinite width, then infinite depth), we have that*
$$\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\,\mathbb{1}_{\{\|\phi(Y_0)\|>0\}} \longrightarrow \exp\left(\frac{t}{4}\right),$$
where the convergence holds in probability.

15This should be understood as integrating the SDE, then taking the expectation, then differentiating once again.

Moreover, the pre-activations $Y^i_{\lfloor tL\rfloor}$ (Eq. (1)) converge in distribution to a Gaussian distribution in the limit infinite-width-then-infinite-depth:
$$\forall i\in[n],\ \ Y^i_{\lfloor tL\rfloor}\xrightarrow{n\to\infty\ \text{then}\ L\to\infty}\mathcal{N}(0, d^{-1}\|x\|^2\exp(t/2)).$$

Proof. Let $t\in[0,1]$.
It is straightforward that $\lim_{n\to\infty}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}} = 1$ almost surely. Moreover, we have that for all $t\in[0,1]$, $n^{-1}\|\phi(Y_{\lfloor tL\rfloor})\|^2$ converges in distribution to $\mathbb{E}\phi(Y^1_{\lfloor tL\rfloor})^2$ when $n$ goes to infinity (Yang, 2020; Hayou et al., 2021; Matthews et al., 2018). Since the limiting value is constant, the convergence also holds in probability. Now let $\epsilon > 0$. We have that
$$\mathbb{P}\left(\left|\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}}-\exp\left(\frac{t}{4}\right)\right|>\epsilon\right) \leq \mathbb{P}\left(\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}}-\exp\left(\frac{t}{4}\right)>\epsilon\right) + \mathbb{P}\left(\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}}-\exp\left(\frac{t}{4}\right)<-\epsilon\right).$$
Let us show that the first term on the right-hand side converges to 0 in the sequential limit 'infinite width then infinite depth'; the proof is similar for the second term. We have that
$$\mathbb{P}\left(\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}}-e^{\frac{t}{4}}>\epsilon\right) \leq \mathbb{P}\left(\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}}-\frac{\sqrt{\mathbb{E}\phi(Y^1_{\lfloor tL\rfloor})^2}}{\sqrt{\mathbb{E}\phi(Y^1_0)^2}}>\epsilon/2\right) + \mathbb{1}\left(\frac{\sqrt{\mathbb{E}\phi(Y^1_{\lfloor tL\rfloor})^2}}{\sqrt{\mathbb{E}\phi(Y^1_0)^2}}-\exp\left(\frac{t}{4}\right)>\epsilon/2\right),$$
where $\mathbb{1}(z > \epsilon/2) \stackrel{\text{def}}{=} \mathbb{1}_{\{z>\epsilon/2\}}$ (to alleviate the notation). Using the convergence in probability of $n^{-1}\|\phi(Y_{\lfloor tL\rfloor})\|^2$ to $\mathbb{E}\phi(Y^1_{\lfloor tL\rfloor})^2$, we obtain for all $L$
$$\lim_{n\to\infty}\mathbb{P}\left(\frac{\|\phi(Y_{\lfloor tL\rfloor})\|}{\|\phi(Y_0)\|}\mathbb{1}_{\{\|\phi(Y_0)\|>0\}}-\exp\left(\frac{t}{4}\right)>\epsilon\right) \leq \mathbb{1}\left(\frac{\sqrt{\mathbb{E}\phi(Y^1_{\lfloor tL\rfloor})^2}}{\sqrt{\mathbb{E}\phi(Y^1_0)^2}}-\exp\left(\frac{t}{4}\right)>\epsilon/2\right).$$
Using Lemma 5 in Hayou et al.
(2021), and the homogeneity of ReLU, we have that
$$\lim_{L\to\infty}\mathbb{E}\phi(Y^1_{\lfloor tL\rfloor})^2 = \frac{1}{2}q_t,$$
where $q_t : [0,1]\to\mathbb{R}^+$ is the solution of the ordinary differential equation $q'_t = \frac{1}{2}q_t$, which has a unique solution given by $q_t = q_0\exp(t/2)$. Dividing by $q_0$, taking the square root, and taking $L$ to infinity, we obtain the desired result. Regarding the convergence in distribution of the pre-activations: in the limit $n\to\infty$, the pre-activations become Gaussian with zero mean and variance $\mathbb{E}(Y^1_{\lfloor tL\rfloor})^2$. This variance converges to $q_t$ given above in the limit $L\to\infty$. The conclusion is straightforward using Slutsky's lemma.

## K Piece-Wise Linear Activation Functions

We have seen in Section 4 that the distribution of $X_t$ is generally intractable for $n\geq 2$. This is purely due to *finite width* $n\geq 2$ and not to the non-linearity of the activation function. To understand this, let us see what happens when the activation function is the identity function. In this case, the process $X_t$ is the solution of the following SDE
$$dX_t = \frac{1}{\sqrt{n}}\|X_t\|\,dB_t. \tag{16}$$
When $n = 1$, the SDE Eq. (16) has a closed-form solution given by the (conditional) GBM distribution (Proposition 2). For general $n\geq 2$, the entries of $X_t$ are dependent and the resulting dynamics (generally) do not admit closed-form solutions. However, we can obtain closed-form solutions for the norm $\|X_t\|$. Indeed, a simple application of Itô's lemma yields the following results.

Theorem 9 (Norms with the identity activation). *With the identity activation, we have that for all* $t\in[0,1]$,
$$\|X_t\| = \|X_0\|\exp\left(\frac{1}{\sqrt{n}}\hat{B}_t + \left(\frac{1}{2}-\frac{1}{n}\right)t\right), \quad\text{almost surely},$$
where $(\hat{B}_t)_{t\geq 0}$ *is a one-dimensional Brownian motion.*
As a result, we have that for all $0\leq s\leq t\leq 1$,
$$\mathbb{E}\left[\log\left(\frac{\|X_t\|}{\|X_s\|}\right)\right] = \left(\frac{1}{2}-\frac{1}{n}\right)(t-s).$$
The proof of Theorem 9 is straightforward using Itô's lemma; we omit it here.

**Role of the non-linearity.** By comparing the results of Theorem 1 and Theorem 9, we observe some differences between the case of ReLU and that of the identity activation function. With ReLU, the drift term in $\log(\|\phi(X_t)\|/\|\phi(X_s)\|)$ is given by $\frac{1}{n}\int_0^t\mu_s\,ds$, which is a stochastic term with mean given by $\left(\frac{1-2^{-n}}{4}-\frac{1}{n}\right)t$. With the identity activation, this drift term is *deterministic* and is equal to $\left(\frac{1}{2}-\frac{1}{n}\right)t$. This allows us to conclude the following:

- *Non-linearity induces a stochastic drift:* the non-linearity of ReLU induces stochasticity in the drift term of $\log(\|\phi(X_t)\|/\|\phi(X_0)\|)$, which results in the Quasi-GBM dynamics given by Theorem 1.
- *Non-linearity induces a change of regime:* with ReLU, the mean drift of $\log(\|\phi(X_t)\|/\|\phi(X_0)\|)$ is given by $\left(\frac{1-2^{-n}}{4}-\frac{1}{n}\right)t$, which is negative for $n = 1, 2, 3$. This induces the change of regime we discussed after Theorem 1 (having a negative mean drift implies that there is a significant mass of the distribution of $\|\phi(X_t)\|/\|\phi(X_0)\|$ in the regime $(0,1)$). With the identity activation function, the drift term is always non-negative for $n\geq 2$, and negative for $n = 1$. Thus, the change of regime occurs for some values $n\geq 2$ only when there is a non-linearity. We give more details about this observation in the next result.

To capture the effect of non-linearity in the regime-change phenomenon discussed above, we study the dynamics of the post-activation norm for a special class of piece-wise linear activations that includes both ReLU and the identity function. The result of Theorem 1 can be easily extended to the case of general piece-wise linear activation functions using the same proof techniques.
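Theorem 9 lends itself to a direct numerical check. The sketch below is illustrative only: it applies a plain Euler-Maruyama discretization (an implementation choice, not from the paper) to $dX_t = n^{-1/2}\|X_t\|\,dB_t$ and compares the empirical mean of $\log(\|X_t\|/\|X_0\|)$ with the predicted drift $\left(\frac{1}{2}-\frac{1}{n}\right)t$:

```python
import numpy as np

def mean_log_growth_identity(n=10, t=1.0, steps=400, n_sims=2000, seed=0):
    """Euler-Maruyama simulation of dX = n^{-1/2} ||X|| dB (identity activation).
    Theorem 9 predicts E[log(||X_t|| / ||X_0||)] = (1/2 - 1/n) t."""
    rng = np.random.default_rng(seed)
    dt = t / steps
    x = rng.standard_normal((n_sims, n))
    norm0 = np.linalg.norm(x, axis=1)
    for _ in range(steps):
        # the diffusion coefficient of every coordinate is n^{-1/2} ||X||
        sigma = np.linalg.norm(x, axis=1, keepdims=True) / np.sqrt(n)
        x = x + sigma * np.sqrt(dt) * rng.standard_normal((n_sims, n))
    return float(np.mean(np.log(np.linalg.norm(x, axis=1) / norm0)))

est = mean_log_growth_identity()  # theory: (1/2 - 1/10) * 1.0 = 0.4
```

The step count and sample size are arbitrary; with these values the estimate should sit within a few hundredths of the theoretical drift.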
We obtain the following result, which generalizes Theorem 1 and Theorem 9.

Theorem 10 (Post-activation norms for piece-wise linear activations). Let $\alpha,\beta\in\mathbb{R}$, and let $\phi_{\alpha,\beta}$ *be the activation function given by* $\phi_{\alpha,\beta}(z) = \alpha\,\text{ReLU}(z) + \beta\,\text{ReLU}(-z)$. *We have that for all* $t\in[0,1]$,
$$\|\phi_{\alpha,\beta}(X_t)\| = \|\phi_{\alpha,\beta}(X_0)\|\exp\left(\frac{1}{\sqrt{n}}\hat{B}_t + \frac{1}{n}\int_0^t\mu^{\alpha,\beta}_s\,ds\right), \quad\text{almost surely},$$
where $\mu^{\alpha,\beta}_s = \frac{1}{2}\sum_{i=1}^n(\alpha^2\mathbb{1}_{X^i_s\geq 0} + \beta^2\mathbb{1}_{X^i_s<0}) - 1$, and $(\hat{B}_t)_{t\geq 0}$ is a one-dimensional Brownian motion. As a result, we have that for all $0\leq s\leq t\leq 1$:

- if $\alpha = 0$, $\beta\neq 0$,
$$\mathbb{E}\left[\log\left(\frac{\|\phi_{0,\beta}(X_t)\|}{\|\phi_{0,\beta}(X_s)\|}\right)\,\Big|\,\|\phi_{0,\beta}(X_0)\|>0\right] = \left(\frac{\beta^2(1-2^{-n})}{4}-\frac{1}{n}\right)(t-s);$$
- if $\alpha\neq 0$, $\beta = 0$,
$$\mathbb{E}\left[\log\left(\frac{\|\phi_{\alpha,0}(X_t)\|}{\|\phi_{\alpha,0}(X_s)\|}\right)\,\Big|\,\|\phi_{\alpha,0}(X_0)\|>0\right] = \left(\frac{\alpha^2(1-2^{-n})}{4}-\frac{1}{n}\right)(t-s);$$
- if $\alpha\neq 0$, $\beta\neq 0$,
$$\mathbb{E}\left[\log\left(\frac{\|\phi_{\alpha,\beta}(X_t)\|}{\|\phi_{\alpha,\beta}(X_s)\|}\right)\right] = \left(\frac{\alpha^2+\beta^2}{4}-\frac{1}{n}\right)(t-s).$$

Theorem 10 generalizes the results for ReLU (Theorem 1, $\alpha = 1$, $\beta = 0$) and the identity activation (Theorem 9, $\alpha = -\beta = 1$). The discontinuity of the mean of $\log\frac{\|\phi_{\alpha,\beta}(X_t)\|}{\|\phi_{\alpha,\beta}(X_s)\|}$ at the poles $\alpha = 0$ (and $\beta\neq 0$) and $\beta = 0$ (and $\alpha\neq 0$) is due to the fact that the event $\{\|\phi_{\alpha,\beta}(X_0)\| = 0\}$ has non-zero probability in these cases and zero probability when $\alpha\neq 0$ and $\beta\neq 0$.

**Perturbation analysis around the identity function.** Consider the case when $\alpha = 1$ and $\beta = -(1-\varepsilon)$ for some $\varepsilon\ll 1$.
The mean logarithmic growth factor is given by
$$G^n_{s,t} = \left(\frac{1+(1-\varepsilon)^2}{4}-\frac{1}{n}\right)(t-s) \approx \left(\frac{1-\varepsilon}{2}-\frac{1}{n}\right)(t-s).$$
Observe that for $\varepsilon = 0$, we recover the result of Theorem 9 (identity activation). Hence, a small perturbation of the identity function has the effect of decreasing the factor $G^n_{s,t}$, which results in negative values of $G^n_{s,t}$ for certain values of $n$. Indeed, fixing $\alpha = 1$, notice that the minimum value of $G^n_{s,t}$ is obtained when $\beta\approx 0$, for which $\phi_{1,0} = $ ReLU. Notice that we can also control the change of regime by tuning the parameter $\alpha$; this allows us to control the sign of $G^n_{s,t}$ for any $n$. We leave the analysis of the practical implications of tuning $\alpha$ for future work.

## L Additional Experiments

## L.1 Geometric Brownian Motion

Additional histograms of $Y_L$ and $\log(Y_L)$ (Proposition 2) are shown in Fig. 9 and Fig. 10.

Figure 9: Empirical verification of Proposition 2. **(a), (c), (e)** Histograms of $\log(Y_L)$ based on N = 5000 simulations for depths L ∈ {5, 10, 50} with $Y_0 = 1$. Estimated density (Gaussian kernel estimate) and theoretical density (Gaussian) are illustrated on the same graphs. **(b), (d), (f)** Histograms of $Y_L$ based on N = 5000 simulations for depths L ∈ {5, 10, 50} with $Y_0 = 1$.
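These histograms can be reproduced in a few lines. The sketch below assumes the width-one recursion $y_{l+1} = y_l + L^{-1/2}w_l\,\mathrm{ReLU}(y_l)$ with $y_0 = 1$ as the discrete model behind the figures (an assumption on our part); started from a positive value, this is an Euler scheme for the GBM $dY = Y\,dB$, so $\log(Y_L)$ should be approximately Gaussian with mean $-1/2$ and unit variance at $t = 1$:

```python
import numpy as np

def log_output_samples(L, n_sims=5000, seed=0):
    """Width-one ReLU recursion y_{l+1} = y_l + L^{-1/2} w_l relu(y_l), y_0 = 1.
    Started from y_0 > 0, this is an Euler scheme for the GBM dY = Y dB,
    so at time t = 1 the law of log(Y_L) is approximately N(-1/2, 1)."""
    rng = np.random.default_rng(seed)
    y = np.ones(n_sims)
    for _ in range(L):
        y = y + rng.standard_normal(n_sims) * np.maximum(y, 0.0) / np.sqrt(L)
    return np.log(y)

samples = log_output_samples(L=100)
```

Plotting a histogram of `samples` against the $\mathcal{N}(-1/2, 1)$ density reproduces the qualitative picture in the figures.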
Figure 10: Empirical verification of Proposition 2. **(a), (c)** Histograms of $\log(Y_L)$ based on N = 5000 simulations for depths L ∈ {100, 200} with $Y_0 = 1$. Estimated density (Gaussian kernel estimate) and theoretical density (Gaussian) are illustrated on the same graphs. **(b), (d)** Histograms of $Y_L$ based on N = 5000 simulations for depths L ∈ {100, 200} with $Y_0 = 1$.

## L.2 Ornstein-Uhlenbeck Process

Additional histograms of $Y_L$ and $g(Y_L)$ (Proposition 4) are shown in Fig. 11 and Fig. 12.

Figure 11: Empirical verification of Proposition 4. **(a), (c), (e)** Histograms of $\log(Y_L)$ based on N = 5000 simulations for depths L ∈ {5, 10, 50} with $Y_0 = 1$. Estimated density (Gaussian kernel estimate) and theoretical density (Gaussian) are illustrated on the same graphs. **(b), (d), (f)** Histograms of $Y_L$ based on N = 5000 simulations for depths L ∈ {5, 10, 50} with $Y_0 = 1$.
Figure 12: Empirical verification of Proposition 4. **(a), (c)** Histograms of $\log(Y_L)$ based on N = 5000 simulations for depths L ∈ {100, 200} with $Y_0 = 1$. Estimated density (Gaussian kernel estimate) and theoretical density (Gaussian) are illustrated on the same graphs. **(b), (d)** Histograms of $Y_L$ based on N = 5000 simulations for depths L ∈ {100, 200} with $Y_0 = 1$.

## L.3 Histograms Of Non-Scaled Log-Norm Of Post-Activations

In Fig. 13, we show the histogram of $\log(\|\phi(Y_L)\|/\|\phi(Y_0)\|)$ based on N = 5000 simulations. We observe that as the width $n$ increases, the Gaussian approximation is no longer accurate, which is due to the fact that $\|\phi(Y_L)\|/\|\phi(Y_0)\|$ converges to a deterministic value (Theorem 2).

Figure 13: Histogram of $\log(\|\phi(Y_L)\|/\|\phi(Y_0)\|)$ for depth L = 100 and different widths n ∈ {2, 3, 4, 6, 20, 100}. Gaussian density estimate and (Gaussian) kernel density estimate are shown. As the width increases, we observe a deterioration of the match between the best Gaussian estimate and the empirical distribution. This is due to the fact that the norm of the post-activations concentrates around a deterministic value when $n$ goes to infinity (Theorem 2).

## L.4 Evolution Of $\sqrt{n}\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$

In Fig. 14, Fig. 15, Fig. 16, and Fig. 17, we show the histograms of $\sqrt{n}\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ for depth L = 100, hidden layers l ∈ {10, 30, 40, 60, 70, 90}, and widths n ∈ {2, 3, 20, 100}. We observe that the Gaussian distribution fits the last layers better. This was expected since the limiting distribution (Quasi-GBM) given in Theorem 1 is only valid for layer indices $\lfloor tL\rfloor$ when $L$ goes to infinity.
Thus, for small $l$, it should be expected that the Gaussian distribution would not be a good approximation.

Figure 14: Distribution of $\sqrt{n}\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 2.

Figure 15: Distribution of $\sqrt{n}\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 3.

Figure 16: Distribution of $\sqrt{n}\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 20.

Figure 17: Distribution of $\sqrt{n}\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 100.

## L.5 Evolution Of $\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Non-Scaled)

In Fig. 18, Fig. 19, Fig. 20, and Fig. 21, we show the non-scaled versions of the histograms from the previous section. We observe that the histogram concentrates around a single value (the distribution converges to a Dirac mass) as $n$ increases. This is a result of the asymptotic behaviour of the ResNet in the infinite-depth-then-infinite-width limit, as shown in Theorem 2.

Figure 18: Distribution of $\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 2.

Figure 19: Distribution of $\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 3.

Figure 20: Distribution of $\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 20.

Figure 21: Distribution of $\log(\|\phi(Y_l)\|/\|\phi(Y_0)\|)$ (Eq. (1)) for different layer indices, with depth L = 100 and width n = 100.
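The concentration visible in these histograms can be checked directly. The sketch below is an illustration only; it assumes the discrete recursion $Y_{l+1} = Y_l + (nL)^{-1/2}W_l\,\mathrm{ReLU}(Y_l)$ (with $W_l$ of iid standard Gaussian entries) and verifies that at width $n = 100$ and depth $L = 100$ the samples of $\log(\|\phi(Y_L)\|/\|\phi(Y_0)\|)$ cluster tightly around the limiting value $t/4 = 0.25$ from Theorem 2:

```python
import numpy as np

def log_norm_ratio_samples(n, L=100, n_sims=200, seed=0):
    """Samples of log(||relu(Y_L)|| / ||relu(Y_0)||) under the assumed
    recursion Y_{l+1} = Y_l + (n L)^{-1/2} W_l relu(Y_l),
    conditioned on ||relu(Y_0)|| > 0."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n_sims:
        y = rng.standard_normal(n)
        norm0 = np.linalg.norm(np.maximum(y, 0.0))
        if norm0 == 0.0:  # condition on ||phi(Y_0)|| > 0
            continue
        for _ in range(L):
            y = y + rng.standard_normal((n, n)) @ np.maximum(y, 0.0) / np.sqrt(n * L)
        norm_end = np.linalg.norm(np.maximum(y, 0.0))
        if norm_end > 0.0:  # the discrete scheme can (rarely) absorb at zero
            out.append(np.log(norm_end / norm0))
    return np.array(out)

wide = log_norm_ratio_samples(n=100)  # should cluster near t/4 = 0.25
```

A histogram of `wide` mirrors the Dirac-like concentration of Fig. 21; rerunning with small $n$ recovers the broad histograms of Fig. 18.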
Review 1:
Summary: The paper studies the distribution of the infinite-depth limit of activations of a fully connected ResNet. The authors showed that the distribution is, in general, not tractable and identified a few special cases where the distribution can actually be characterized.
Strengths and Weaknesses:
Strengths: the results are novel and have some fundamental importance in understanding deep learning, especially at initialization.
Weaknesses:
1. The authors should probably spend at least a little effort in demonstrating the possible usefulness of the theory, say, suggesting a new initialization trick.
2. There are a few other problems that I directly outline in the requested changes section.
Requested Changes:
1. There are two theoretical cases I would like to see, and I think would improve the comprehensibility of the theory:
- The case when the activation is linear. It seems like the distribution is analytically tractable here.
- The case when the activation is only perturbatively away from linearity; this would allow us to understand the leading effect of being nonlinear.
- Having these results, the authors should be able to make much more insightful discussions about the distribution -- for example, what part of the derived results can be attributed to having "depth" and what part can be attributed to being "nonlinear".
2. The authors have indeed demonstrated the correctness of the theory, but I think the authors should probably spend at least a little effort in demonstrating the possible usefulness of the theory, say, suggesting a new initialization trick.
- This does not have to be a very large-scale experiment; just suggesting a minor trick that works on a toy example is satisfactory to me.
3. I am against using the word "phase transition." The word phase transition comes from statistical physics and has a quite precise definition.
To talk about a phase transition, one first needs to define/identify the free energy, and one then needs to identify where it becomes nonanalytic, but this is not what the authors have achieved.
- I would suggest using the word "regime change," "qualitative change in behavior," "regime crossover," or similar wording.
After these changes, I would be happy to recommend acceptance.
Broader Impact Concerns: Fine
==================================================
Review 2:
Summary: This paper studies random `residual` neural networks in the following limits: the depth $L\to \infty$ and the width $n$ is either fixed and finite or goes to infinity. Based on the observation that random residual neural networks can be viewed as an Euler discretization scheme of SDEs, the authors convert the study of random residual networks to the study of continuous-time SDEs. As such, tools from stochastic calculus (e.g., Itô's formula) can be applied directly. The authors obtained several results, including:
- Single input, $d=n=1$ ($d$: input dimension, $n$: width of the network) case. Characterization of the distribution of the output for some special activations.
- Single input, $n>1$. The distribution is hard to get. Instead, the authors compute the distribution of the norm of the output, $\|X_t\|$.
- The authors also point out that the distributions of the norm $\|X_t\|$ are not the same under a different order of limits $d\to \infty$ then $n\to\infty$ v.s. $n\to\infty$ then $d\to\infty$.
Overall, the paper contains several interesting statistical insights, and the approaches of the paper seem rigorous. However, I find the results not very relevant, useful, or interesting from an ML perspective. It is unclear to me what this paper's main take-home `ML` message is. In my opinion, this paper may be more suitable for the stat community.
Strengths and Weaknesses:
## Strengths
- Contains several interesting statistical observations.
- Simulations seem to support the authors' insights.
## Weaknesses

The biggest concern is the relevance of the paper to ML. The major connection I can tell is: residual networks can be viewed as an Euler discretization scheme of SDEs. Follow-up results derived from this observation provide limited insights into ML, e.g., insights into:
- What is the distribution of the outputs (for multiple inputs), and how does this distribution affect the training and performance of networks? Note that the paper only handles a single input.
- How is this limit ($L\to\infty$ first) related to training / initializing of neural networks? My understanding is that this scaling limit is not popular in practice, and practitioners often scale the width and depth simultaneously. Moreover, the depth (in tens) << width (in thousands) in practice.
- What do the backward gradients (NTKs) look like, and what is their connection to network training?

I am not convinced that this paper is of great interest to the ML community without such or similar findings. I think a statistics venue would be more suitable for this paper.

Requested Changes: I don't have requested changes for this paper. The major concern I have is the lack of relevance to the ML community. I would like to see more insights that could strengthen our theoretical understanding of neural networks.

Broader Impact Concerns: NA.

==================================================

Review 3:

Summary: The paper studied ResNets at initialization with a particular scaling of $1/\sqrt{L}$ weighting on the fully connected component, and studied the limit as depth $L\to\infty$ for some fixed finite width $n$. The main contributions of the paper revolve around the limiting SDE in Proposition 1 and the consequences that follow from this SDE. In particular, the authors showed that the network does not collapse under some basic conditions, that the norm of post-activations behaves approximately like a geometric Brownian motion, and characterized the sequential depth-then-width limit.
Strengths and Weaknesses:

Strengths
1. The infinite-depth limit results are novel, as this regime is underexplored.
2. The characterization of the limit is fairly detailed, despite a somewhat intractable limiting SDE.

Weaknesses
1. I would personally prefer some more discussion towards how this work relates to other regimes, and what the authors suspect happens to the kernel for multiple inputs. More details in the requested changes section.

Requested Changes:

On the infinite-depth-then-width limit: As the authors described in the introduction, the order of the limits matters for neural networks. In particular, the width-then-depth limit typically loses some information and therefore is not equal to the joint limit. I'm curious if the authors have any thoughts towards whether or not the depth-then-width limit behaves similarly and loses some information compared to the joint limit.

On the degeneracy of C-maps: One common issue of infinite-depth analysis is that the C-maps become degenerate in the limit, and this usually leads to poor training stability unless a normalization method or deep kernel shaping is used [1]. In the case of the scaling in this paper, do the authors have any thoughts on whether or not it leads to degenerate C-maps? As far as I can tell, the contribution of the activation function should be bounded in the limit.

On the joint distribution over multiple inputs: Related to the earlier point, can the authors comment on any foreseeable technical challenges for extending towards the multiple-input setting? In particular, can techniques from [2] be used towards deriving an SDE for the covariance kernel?

Minor typo: in equation 4, I believe the drift is missing a factor of $1/2$.

References
1. Martens, James, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. "Rapid training of deep neural networks without skip connections or normalization layers using deep kernel shaping."
arXiv preprint arXiv:2110.01765 (2021).
2. Li, Mufan Bill, Mihai Nica, and Daniel M. Roy. "The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization." arXiv preprint arXiv:2206.02768 (2022).

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: The paper studies the infinite-depth limit of finite-width residual neural networks. The paper discusses several interesting results, including a characterization of the output distribution for some special cases. The reviewers noted several positive aspects of the paper, including:
- The novelty of the infinite-depth limit, as this regime is relatively under-explored.
- The simulations support the theoretical insights.
- Detailed characterization of special cases.
- Comparisons of the infinite-depth-then-infinite-width limit with the more popular infinite-width-then-infinite-depth limit.

The authors have also satisfactorily addressed the changes suggested by the reviewers, including:
- linear activation and perturbation analysis
- discussion of practical implications
- discussion of the joint distribution over multiple inputs

During the discussion, the consensus decision of the reviewers leaned towards acceptance. There was some discussion around relevance to TMLR and whether the paper would be a better fit for a stats journal, but I think that this paper is very relevant to the TMLR audience (see my justification under "Audience" above). I recommend accept as is. Congrats!

==================================================
# An Empirical Study Of Implicit Regularization In Deep Offline RL

Caglar Gulcehre∗, Srivatsan Srinivasan∗, Jakub Sygnowski, Georg Ostrovski, Mehrdad Farajtabar, Matt Hoffman, Razvan Pascanu, Arnaud Doucet

DeepMind

Reviewed on OpenReview: *https: // openreview. net/ forum? id= HFfJWx60IT*

## Abstract

Deep neural networks are the most commonly used function approximators in offline reinforcement learning. Prior works have shown that neural nets trained with TD-learning and gradient descent can exhibit implicit regularization that can be characterized by under-parameterization of these networks. Specifically, the rank of the penultimate feature layer, also called the *effective rank*, has been observed to drastically collapse during training. In turn, this collapse has been argued to reduce the model's ability to further adapt in later stages of learning, leading to diminished final performance. Such an association between the effective rank and performance makes the effective rank compelling for offline RL, primarily for offline policy evaluation. In this work, we conduct a careful empirical study of the relation between effective rank and performance on three offline RL datasets: bsuite, Atari, and DeepMind Lab. We observe that a direct association exists only in restricted settings and disappears in more extensive hyperparameter sweeps. Also, we empirically identify three phases of learning that explain the impact of implicit regularization on the learning dynamics, and we find that bootstrapping alone is insufficient to explain the collapse of the effective rank. Further, we show that several other factors could confound the relationship between effective rank and performance, and we conclude that studying this association under simplistic assumptions could be highly misleading.
∗Indicates joint first authors.

## 1 Introduction

The use of deep networks as function approximators in reinforcement learning (RL), referred to as Deep Reinforcement Learning (DRL), has become the dominant paradigm for solving complex tasks. Until recently, most DRL literature focused on the online RL paradigm, where agents must interact with the environment to explore and learn. This led to remarkable results on Atari (Mnih et al., 2015), Go (Silver et al., 2017), StarCraft II (Vinyals et al., 2019), Dota 2 (Berner et al., 2019), and robotics (Andrychowicz et al., 2020). Unfortunately, the need to interact with the environment makes these algorithms unsuitable and unsafe for many real-world applications, where any action taken can have serious ethical or harmful consequences or be costly. In contrast, in the offline RL paradigm (Fu et al., 2020; Fujimoto et al., 2018; Gulcehre et al., 2020; Levine et al., 2020), also known as batch RL (Ernst et al., 2005; Lange et al., 2012), agents learn from a fixed dataset previously logged by other (possibly unknown) agents. This ability makes offline RL more applicable to the real world. Recently, Kumar et al. (2020a) showed that offline RL methods coupled with TD-learning losses can suffer from an *effective rank* collapse of the penultimate layer's activations, which causes the network to become under-parameterized. They further demonstrated a significant fraction of Atari games where the collapse of the effective rank corresponded to performance degradation. Subsequently, Kumar et al. (2020a) explained the rank collapse phenomenon by analyzing a TD-learning loss with bootstrapped targets in the kernel and linear regression setups. In these simplified scenarios, bootstrapping leads to self-distillation, causing severe under-parametrization and poor performance, as also observed and analyzed by Mobahi et al. (2020). Nevertheless, Huh et al.
(2021) studied the rank of the representations in a supervised learning setting (image classification tasks) and argued that low rank leads to better performance. Thus, low-rank representations could act as an implicit regularizer.

![1_image_0.png](1_image_0.png)

Figure 1: [Atari] The rank and the performance in a broad vs. narrow hyperparameter sweep: Correlation between effective rank and agent's performance towards the end of training in different Atari games. We report the regression lines for the narrow sweep, which covers only a single offline RL algorithm with a small minibatch size (32) and a learning rate sweep similar to the hyperparameter sweep defined in RL Unplugged (Gulcehre et al., 2020), whereas in the broad setting, we included more data from different models and a larger hyperparameter sweep. In the narrow setup, there is a positive relationship between the effective rank and the agent's performance, but that relationship disappears, and almost reverses, in the broad data setup.

Typically, in machine learning, we rely on empirical evidence to extrapolate the rules or behaviors of our learned system from the experimental data. Often those extrapolations are done based on a limited number of experiments due to constraints on computation and time. Unfortunately, while extremely useful, these extrapolations might not always generalize well across all settings. While Kumar et al. (2020a) do not concretely propose a causal link between the rank and performance of the system, one might be tempted to extrapolate their results (agents performing poorly when their rank collapsed) to the existence of such a causal link, which we herein refer to as the *rank collapse hypothesis*. In this work, we conduct a careful large-scale empirical analysis of this potential causal link in the offline RL setting (also used by Kumar et al. (2020a)) and in the tandem RL (Ostrovski et al., 2021) setting.
The existence of this causal link would be beneficial for offline RL, as controlling the rank of the model could improve performance (see the regularization term explored in Kumar et al. (2020a)) or, as we investigate here, the effective rank could be used for model selection in settings where offline evaluation proves to be elusive.

Key Observation 1: The effective rank and the performance are correlated in restricted settings, but this correlation disappears when we increase the range of hyperparameters and the types of models that we are evaluating (Figure 1).[a]

[a] This is because other factors like hyperparameters and architecture can confound the rank of the penultimate layer; unless those factors are controlled carefully, the conclusions drawn from experiments based on the rank can be misleading.

Instead, we show that different factors affect a network's rank without affecting its performance. This finding indicates that unless all of these factors of variation are controlled - many of which we might still be unaware of - the rank alone might be a misleading indicator of performance. A deep Q-network exhibits three phases during training. We show that the rank can be used to identify different stages of learning in Q-learning if the other factors are controlled carefully. We believe that our study, similar to others (e.g. Dinh et al., 2017), re-emphasizes the importance of critically judging our understanding of the behavior of neural networks based on simplified mathematical models or empirical evidence from a limited set of experiments.

Key Observation 2: Deep Q-learning approaches go through three phases of learning: i) simple behaviors, ii) complex behaviors, iii) under-parameterization (Figure 2). These phases can be identified by the effective rank and performance on a given task.[a]

[a] The first two phases of learning happen during the training of all models we tested.
At times, the third phase of learning could potentially lead to the agent losing its representation capacity and, in turn, to poor performance.

![2_image_0.png](2_image_0.png)

Figure 2: **Lifespan of learning in deep Q-learning:** The plot on the left-hand side illustrates the evolution of the effective rank, and the plot on the right-hand side demonstrates the evolution of the performance during training. In the first phase, the model learns easy-to-learn behaviors that are simplistic by nature and ignore many factors of environmental variation. The effective rank collapses to a minimal value in the first phase since the model does not need a large capacity to learn the simple behaviors. In phase 2, the model learns more complex behaviors that we identify as those that obtain large returns when the policy is evaluated in the environment. Typically, in supervised learning, phase 2 is followed by overfitting. However, in offline RL (specifically, the TD-learning approaches that we tried here), we observed that it is often followed by underfitting/under-parameterization in phase 3.

We organize the rest of the paper, and our contributions, as follows:

- Section 2 presents the related work, and Section 3 describes our experimental protocol.
- In Sections 4, 8, 6, 7 and 9, we study the extent of the impact of different interventions, such as architectures, loss functions and optimization, on the causal link between the rank and agent performance. Some interventions in the model, such as introducing an auxiliary loss (e.g., CURL (Laskin et al., 2020) or SAM (Foret et al., 2021)) or changing the activation function, can increase the effective rank but do not necessarily improve the performance. This finding indicates that the rank of the penultimate layer is not enough to explain an agent's performance.
We also identify settings where the rank strongly correlates with the performance, such as DQN with ReLU activation and many learning steps over a fixed dataset.
- In Section 4.1, we show that a deep Q-network goes through three stages of learning, and those stages can be identified using the rank if the hyperparameters of the model are controlled carefully.
- Section 5 describes the main outcomes of our investigations. In particular, we analyze the impact of the interventions described earlier and provide counter-examples that contradict the *rank collapse hypothesis*, establishing that the link between rank and performance can be affected by several other confounding factors of variation.
- In Section 11, we ablate and compare the robustness of BC and DQN models with respect to random perturbations introduced only when evaluating the agent in the environment. We found that the offline DQN agent is more robust than the behavior cloning agent, which has a higher effective rank.
- Section 12 presents the summary of our findings and their implications for offline RL, along with potential future research directions.

## 2 Background

## 2.1 Effective Rank And Implicit Under-Regularization

The choice of architecture and optimizer can impose specific *implicit biases* that prefer certain solutions over others. The study of the impact of these implicit biases on the generalization of neural networks is often referred to as *implicit regularization*. There is a plethora of literature studying different sources of implicit regularization, such as the *initialization of parameters* (Glorot and Bengio, 2010; Li and Liang, 2018; He et al., 2015), *architecture* (Li et al., 2017; Huang et al., 2020), *stochasticity* (Keskar et al., 2016; Sagun et al., 2017), and *optimization* (Smith et al., 2021; Barrett and Dherin, 2020).
The rank of the feature matrix of a neural network as a source of implicit regularization has been an active area of study, specifically in the context of supervised learning (Arora et al., 2019; Huh et al., 2021; Pennington and Worah, 2017; Sanyal et al., 2018; Daneshmand et al., 2020; Martin and Mahoney, 2021). In this work, we study the phenomenon of implicit regularization through the effective rank of the last hidden layer of the network, which we formally define below.

For a feature matrix $\Phi \in \mathbb{R}^{N\times D}$, where $N \geq D$, with singular values $\sigma_1(\Phi) \geq \dots \geq \sigma_D(\Phi)$, the effective rank with threshold $\delta$ is

$$\mathrm{effective\ rank}_{\delta}(\Phi)=\min_{k}\left\{k:\frac{\sum_{i=1}^{k}\sigma_{i}(\Phi)}{\sum_{j=1}^{D}\sigma_{j}(\Phi)}\geq1-\delta\right\}. \quad (1)$$

Terminology and Assumptions. Note that the threshold value δ throughout this work has been fixed to 0.01, similar to Kumar et al. (2020a). Throughout this paper, we use the term *effective rank* to describe the rank of the last hidden layer's features only, unless stated otherwise. This choice is consistent with prior work (Kumar et al., 2020a), as the last layer acts as a representation bottleneck to the output layer. Kumar et al. (2020a) suggested that the effective rank of deep RL models trained with TD-learning objectives collapses because of i) implicit regularization, happening due to repeated iterations over the dataset with gradient descent; and ii) a self-distillation effect emerging due to bootstrapping losses. They supported their hypothesis with both theoretical analysis and empirical results.
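The effective rank in Equation 1 depends only on the singular-value spectrum of the feature matrix. A minimal NumPy sketch (our own illustration, not the authors' code), using the paper's threshold δ = 0.01:

```python
import numpy as np

def effective_rank(phi, delta=0.01):
    """Smallest k such that the top-k singular values of the feature
    matrix phi account for at least a (1 - delta) fraction of the total
    singular-value mass, as in Equation 1."""
    sigma = np.linalg.svd(phi, compute_uv=False)  # sorted in decreasing order
    cumulative = np.cumsum(sigma) / np.sum(sigma)
    # First index where the cumulative fraction reaches 1 - delta (1-indexed).
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)

rng = np.random.default_rng(0)
# A feature matrix with a single dominant direction collapses to rank 1,
# while i.i.d. Gaussian features retain a high effective rank.
collapsed = rng.normal(size=(256, 1)) @ rng.normal(size=(1, 64))
healthy = rng.normal(size=(256, 64))
print(effective_rank(collapsed))  # 1
print(effective_rank(healthy))    # close to the full rank of 64
```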
The theoretical analysis provided in their paper assumes a simplified setting, with infinitesimally small learning rates, batch gradient descent, and linear networks, in line with most theory papers on the topic. We focus our experiments on the effective rank definition by Kumar et al. (2020a) to make our results more comparable. However, in our preliminary experiments, we compared different rank measures, and they correlated perfectly with each other on Atari.

## 2.2 Auxiliary Losses

Additionally, we ablate the effect of using an auxiliary loss on an agent's performance and rank. Specifically, we chose the "Contrastive Unsupervised Representations for Reinforcement Learning" (CURL) (Laskin et al., 2020) loss that is designed for use in a contrastive self-supervised learning setting within standard RL algorithms:

$$L(s,a,r,\theta)=L_{Q}(s,a,r,\theta)+\lambda\, L_{CURL}(s,\hat{s},\theta). \quad (2)$$

Here, $L_Q$ refers to standard RL losses, and the CURL loss $L_{CURL}(s,\hat{s},\theta)$ is the contrastive loss between the features of the current observation $s$ and a randomly augmented observation $\hat{s}$, as described in Laskin et al. (2020). Besides this, we also ablate with Sharpness-Aware Minimization (SAM) (Foret et al., 2021), an approach that seeks parameters that have a uniformly low loss in their neighborhood, which leads to flatter local minima. We chose SAM as a means to better understand whether the geometry of loss landscapes helps inform the correlation between the effective rank and agent performance in offline RL algorithms. In our experiments, we mostly focus on analyzing the deep Q-learning (DQN) algorithm (Mnih et al., 2015) to simplify the experiments and facilitate deeper investigation into the rank collapse hypothesis.

## 2.3 Deep Offline RL

Online RL requires interactions with an environment to learn using random exploration.
However, online interactions with an environment can be unsafe and unethical in the real world (Dulac-Arnold et al., 2019). Offline RL methods do not suffer from this problem because they can leverage offline data to learn policies that enable the application of RL in the real world (Menick et al., 2022; Shi et al., 2021; Konyushkova et al., 2021). Here, we focused on offline RL due to its importance in real-world applications, and because previous work showed that the implicit regularization effect is more pronounced in offline RL (Kumar et al., 2020a; 2021a). Some of the early examples of offline RL algorithms are least-squares temporal difference methods (Bradtke and Barto, 1996; Lagoudakis and Parr, 2003) and fitted Q-iteration (Ernst et al., 2005; Riedmiller, 2005). Recently, value-based approaches to offline RL have been quite popular. Value-based approaches typically lower the value estimates for unseen state-action pairs, either through regularization (Kumar et al., 2020b) or uncertainty (Agarwal et al., 2020). One could also include R-BVE (Gulcehre et al., 2021) in this category, although it regularizes the Q function only on the rewarding transitions to prevent learning suboptimal policies. Similar to R-BVE, Mathieu et al. (2021) have also shown that methods using a single step of policy improvement work well on tasks with a very large action space and low state-action coverage. In this paper, due to their simplicity and popularity, we mainly study action-value-based methods: offline DQN (Agarwal et al., 2020) and offline R2D2 (Gulcehre et al., 2021). Moreover, we also sparingly use the Batch-Constrained deep Q-learning (BCQ) algorithm (Fujimoto et al., 2019), another popular offline RL algorithm that uses the behavior policy to constrain the actions taken by the target network. Most offline RL approaches we explained here rely on pessimistic value estimates (Jin et al., 2021; Xie et al., 2021; Gulcehre et al., 2021; Kumar et al., 2020b).
This is mainly because offline RL datasets lack exhaustive exploration, and extrapolating the values to states and actions not seen in the training set can result in extrapolation errors, which can be catastrophic with TD-learning (Fujimoto et al., 2018; Kumar et al., 2019). On the other hand, in online RL, it is common practice to use inductive biases that keep value functions optimistic to encourage exploration (Machado et al., 2015). We also experiment with the *tandem RL* setting proposed by Ostrovski et al. (2021), which employs two independently initialized online (active) and offline (passive) networks in a training loop where only the online agent explores and drives the data generation process. Both agents perform identical learning updates on the identical sequence of training batches in the same order. Tandem RL is a form of offline RL. Still, unlike the traditional offline RL setting on fixed datasets, in tandem RL the behavior policy can change over time, which can make the learning non-stationary. We are interested in this setting because the agent does not necessarily reuse the same data repeatedly, which was pointed out in Lyle et al. (2021) as a potential cause for the rank collapse.

## 3 Experimental Setup

To test and verify different aspects of the *rank collapse hypothesis* and its potential impact on agent performance, we ran a large number of experiments on bsuite (Osband et al., 2019), Atari (Bellemare et al., 2013) and DeepMind Lab (Beattie et al., 2016) environments. In all these experiments, we use the experimental protocol, datasets and hyperparameters from Gulcehre et al. (2020) unless stated otherwise. We provide the details of architectures and their default hyperparameters in Appendix A.11.

- **bsuite** - We run ablation experiments on bsuite in a fully offline setting with the same offline dataset as the one used in Gulcehre et al. (2021).
We use a DQN agent (as a representative TD-learning algorithm) with multi-layer feed-forward networks to represent the value function. bsuite provides us with a small playground environment that lets us test, with respect to terminal features and an agent's generalization performance, certain hypotheses that are computationally prohibitive in other domains (e.g., computing Hessians).
- **Atari** - To test whether some of our observations also hold with higher-dimensional input features such as images, we run experiments on the Atari dataset. Once again, we use an offline DQN agent as a representative TD-learning algorithm with a convolutional network as a function approximator. On Atari, we conducted large-scale experiments on different configurations:

(a) Small-scale experiments:
- **DQN-256-2M**: Offline DQN on Atari with minibatch size 256 trained for 2M gradient steps with four different learning rates: [3e-5, 1e-4, 3e-4, 5e-4]. We ran these experiments to observe whether our observations hold in the default training scenario identified in RL Unplugged (Gulcehre et al., 2020).
- **DQN-32-100M**: Offline DQN on Atari with minibatch size 32 trained for 100M gradient steps with three different learning rates: [3e-5, 1e-4, 3e-4]. We ran those experiments to explore the effects of reducing the minibatch size.

(b) Large-scale experiments:
- **Long run learning rate sweep (DQN-256-20M):** Offline DQN trained for 20M gradient steps with 12 different learning rates evenly spaced in log-space between 1e-5 and 1e-2, trained on minibatches of size 256. The purpose of these experiments is to explore the effect of a wide range of learning rates and longer training on the effective rank.
- **Long run interventions (DQN-interventions):** Offline DQN trained for 20M gradient steps on minibatches of size 256 with 128 different hyperparameter interventions on activation functions, dataset size, auxiliary losses, etc.
The purpose of these experiments is to understand the impact of such interventions on the effective rank over the course of long training.
- **DeepMind Lab** - While bsuite and Atari present relatively simple, fully observable tasks that require no memory, DeepMind Lab tasks (Gulcehre et al., 2021) are more complex, partially observable tasks where it is very difficult to obtain good coverage in the dataset even after collecting billions of transitions. We specifically conduct our observational studies on the effective ranks on the **DeepMind Lab** dataset, which has data collected from a well-trained agent on the SeekAvoid level. The dataset is collected by adding different levels of action exploration noise to a well-trained agent in order to get datasets with a larger coverage of the state-action space.

![6_image_0.png](6_image_0.png)

Figure 3: **Structural causal model (SCM) of different factors that we test: M** represents the model selection method that we use to determine h, which denotes the observed confounders that are chosen at the beginning of training, including the task, the model architecture (including depth and number of units), the learning rate, and the number of gradient steps to train. β is the effective rank of the penultimate layer. λ is the agent's performance, measured as the episodic returns the agent attains when evaluated in the environment. A represents the unobserved confounders that change during training but may affect the performance, such as the number of dead units, the parameter norms, and other underlying factors that can influence learning dynamics. We test the effect of each factor by interventions.

We illustrate the structural causal model (SCM) of the interactions between the different factors that we would like to test in Figure 3.
To explore the relationship between the rank and the performance, we intervene on h, which represents potential exogenous sources of **implicit regularization**, such as the architecture, the dataset size, and the loss function, including the auxiliary losses. The interventions on h result in a randomized controlled trial (RCT). A represents the unobserved factors that might affect the performance, denoted by λ, and the effective rank β, such as activation norms and the number of dead units. It is easy to justify the relationship between M, h, A and β. We argue that β is also confounded by A and h. We show the confounding effect of A on β with our interventions on β via auxiliary losses or architectural changes that increase the rank but do not affect the performance. We aim to understand the nature of the relationship between these terms and whether the SCM in the figure describes what we notice in our empirical exploration. We overload the term **performance** of the agent to refer to the episodic returns attained by the agent when it is evaluated online in the environment. In stochastic environments and datasets with limited coverage, an offline RL algorithm's online evaluation performance and generalization abilities would generally correlate (Gulcehre et al., 2020; Kumar et al., 2021c). The offline RL agents will need to generalize when they are evaluated in the environment due to:

- **Stochasticity** in the initial conditions and transitions of the environment. For example, in the Atari environment, the stochasticity arises from sticky actions, and on DeepMind Lab, it arises from the randomization of the initial positions of the lemons and apples.
- **Limited coverage:** The coverage of the dataset is often limited. Thus, an agent is very likely to encounter states and actions that it has never seen in the training dataset.

## 4 Effective Rank And Performance

Based on the results of Kumar et al.
(2020a), one might be tempted to extrapolate a positive causal link between the effective rank of the last hidden layer and the agent's performance, measured as the episodic returns attained when evaluated in the environment. We explore this potentially interesting relationship on a larger scale by adopting a proof-by-contradiction approach. We evaluated the agents with the hyperparameter setup defined for the Atari datasets in RL Unplugged (Gulcehre et al., 2020) and the hyperparameter sweep defined for DeepMind Lab (Gulcehre et al., 2021). For a narrow set of hyperparameters, this correlation exists, as observed in Figure 1. However, in both cases, we notice that a broad hyperparameter sweep makes the correlation between performance and rank disappear (see Figure 1 and the DeepMind Lab figure in Appendix A.3). In particular, we find hyperparameter settings that lead to low (collapsed) ranks with high performance (on par with the best performance reported in the restricted hyperparameter range) and settings that lead to high ranks but poor performance. This shows that the correlation between effective rank and performance cannot be trusted for offline policy selection. In the following sections, we further present specific ablations that help us understand the dependence of the effective rank vs. performance correlation on specific hyperparameter interventions.

## 4.1 Lifespan Of Learning With Deep Q-Networks

Empirically, we found the effective rank sufficient to identify three phases when training an offline DQN agent with a ReLU activation function (Figure 2). Although the effective rank may be sufficient to identify those stages, it still does not imply a direct causal link between the effective rank and the performance, as discussed in the following sections. Several other factors can confound the effective rank, making it less reliable as a guiding metric for offline RL unless those confounders are carefully controlled.
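The vanishing correlation under a broader sweep is essentially a confounding effect. The following toy example (entirely synthetic data, unrelated to the paper's actual experiments) sketches how a positive rank-performance correlation within each hyperparameter group can reverse once the groups are pooled:

```python
import numpy as np

rng = np.random.default_rng(1)

def corr(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# Two hypothetical hyperparameter groups. Within each group, the effective
# rank and the return move together, but group B has systematically higher
# ranks and lower returns (a group-level confounder).
rank_a = rng.normal(20.0, 3.0, size=200)
ret_a = 50.0 + 2.0 * (rank_a - 20.0) + rng.normal(0.0, 2.0, size=200)
rank_b = rng.normal(60.0, 3.0, size=200)
ret_b = 10.0 + 2.0 * (rank_b - 60.0) + rng.normal(0.0, 2.0, size=200)

within_a = corr(rank_a, ret_a)                 # strongly positive
within_b = corr(rank_b, ret_b)                 # strongly positive
pooled = corr(np.concatenate([rank_a, rank_b]),
              np.concatenate([ret_a, ret_b]))  # strongly negative
print(within_a, within_b, pooled)
```

This is a Simpson's-paradox-style construction: the group identity (a stand-in for hyperparameters and architecture) confounds the rank, so the pooled correlation says nothing about the within-group relationship.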
![7_image_0.png](7_image_0.png)

Figure 4: **The three phases of learning:** On the IceHockey and MsPacman RL Unplugged Atari games, we illustrate the different phases of learning with the offline DQN agent using a learning rate of 0.0004. The blue region in the plots identifies Phase 1, the green region Phase 2, and the red region Phase 3 of learning. IceHockey is one of the games where the expert that generated the dataset performs quite poorly, so the majority of the data is just random exploration data. The offline DQN performs very poorly on IceHockey and never manages to get out of Phase 1: the performance of the agent is poor and the effective rank of the network is low throughout training. On MsPacman, we can observe all three phases: the model transitions from Phase 1 into Phase 2 quickly, followed by the underfitting regime, where the effective rank collapses and the agent performs poorly.

We identified three phases of learning according to the stage at which they appear during training, the performance (returns obtained by the algorithm) when evaluated in the environment, and the effective rank:

- **Phase 1 (Simple behaviors):** These behaviors emerge early in training and give rise to low rewards when evaluated in the environment (often close to a random policy). The effective rank of the model first collapses to a small value, sometimes a single-digit value, and then gradually increases. Note that the term *simple behaviors* here refers to how easy a behavior is for an RL algorithm to learn. We hypothesized that this could be due to the implicit bias of SGD to learn functions of increasing complexity over training iterations (Kalimeris et al., 2019); therefore, early in training, the network relies on *simple behaviors* that are myopic to most of the variation in the data. Hence the model would have a low rank.
The rank collapse at the beginning of training happens very abruptly; in just a handful of gradient updates, the effective rank collapses to a single-digit number. However, this early rank collapse does not degrade the agent's performance. We call this rank collapse early in training the *self-pruning effect*.

- **Phase 2 (Complex behaviors):** These behaviors emerge later in training and achieve high rewards, often close to the best policy in the dataset, when evaluated in the environment. In this phase, the effective rank of the model first increases and then usually flattens. The model starts to learn more complex behaviors that achieve high returns when evaluated in the environment. We call these behaviors *complex behaviors* because of the difficulty of learning them: they emerge later during training and are sensitive to the hyperparameters.

- **Phase 3 (Underfitting/Underparameterization):** This is the last phase of the algorithm; in this phase, the effective rank collapses to a small value (often to 1), and the agent's performance collapses too. We call the third phase *underfitting* since the agent's performance usually drops, and the effective rank also collapses, which causes the agent to lose part of its capacity. This phase is not always observed in all settings (or the performance does not collapse with the effective rank towards the end of training), as we demonstrate in our different ablations. Typically, in supervised learning, Phase 2 is followed by over-fitting, but with offline TD-learning, we could not find any evidence of over-fitting. We believe this phase is primarily due to the target Q-network needing to extrapolate over the actions not seen during training, causing extrapolation errors as described by Kumar et al. (2019).
A piece of evidence to support this hypothesis is presented in Figure 29 in Appendix A.5, which suggests that the effective rank and the value error of the agent correlate well. In this phase, the low effective rank and poor performance are caused by a large number of dead ReLU units. Shin and Karniadakis (2020) also show that as a network accumulates dead units, it becomes under-parameterized, which can influence the agent's performance negatively. It is possible to identify these three phases in many of the learning curves we provide in this paper, and our first two phases agree with the work on SGD's implicit bias of learning functions of increasing complexity (Kalimeris et al., 2019). Given a fixed model and architecture, whether it is possible to observe all three phases during training fundamentally depends on:

1. **Hyperparameters:** The phases that an offline RL algorithm goes through during training depend on hyperparameters such as the learning rate and the early-stopping or training budget. For example, due to early stopping, the model may stop in the second phase; if the learning rate is too small, the parameters move much more slowly and the model may never get out of Phase 1. If the model is not large enough, it may never transition from Phase 1 into Phase 2.

2. **Data distribution:** The data distribution has a very big influence on the phases the agent goes through. For example, if the dataset only contains random exploratory data, an offline RL method may fail to learn complex behaviors from that data and, as a result, will never transition from Phase 1 into Phase 2.

3. **Learning paradigm:** The learning algorithm, including the optimizer and the loss function, can influence the phases the agent goes through during training. For example, we observed that Phase 3 only happens with the offline TD-learning approaches. It is possible to avoid Phase 3 (underfitting) by finding the correct hyperparameters.
We believe the third phase we observe might be due to non-stationarity in RL losses (Igl et al., 2020) caused by bootstrapping, or to errors propagating through bootstrapped targets (Kumar et al., 2021b). The underfitting regime only appears if the network is trained long enough. The quality and the complexity of the data that an agent learns from also play a key role in deciding which learning phases are observed during training. In Figure 4, we demonstrate the different phases of learning on the IceHockey and MsPacman games. On IceHockey, since the expert that generated the dataset performs poorly on that game, the offline DQN is stuck in Phase 1 and does not manage to learn complex behaviors that would push it to Phase 2, but on MsPacman, all three phases are present. We provide learning curves for all the online policy selection Atari games across twelve learning rates in Appendix A.8.

![9_image_0.png](9_image_0.png)

Figure 5: **[Atari]**: Bar charts of effective ranks with respect to the learning rates after 1M and 20M learning steps. After 1M gradient steps, the ranks are distributed almost like a bell-shaped curve, indicating that high and low learning rates have low ranks (Phases 1 and 3), while the learning rates in the middle are in Phase 2 and hence have higher ranks. After 20M learning steps, the mode of the distribution of the ranks skews towards the left, indicating low terminal ranks for many large learning rates. Namely, as we train the network longer, the terminal rank decreases, particularly for the large learning rates. The rank is low for the low learning rates because the model is stuck in Phase 1, whereas the large learning rates get into Phase 3 quickly, hence their low terminal ranks.

Figure 5 shows the relationship between the effective rank and twelve learning rates. In this figure, the effect of the learning rate on the different phases of learning is distinguishable.
For low learning rates, the ranks are low because the agent can never transition from Phase 1 to Phase 2, and for large learning rates, the effective ranks are low because the agent is in Phase 3. Therefore, the distribution of effective ranks over learning rates has a Gaussian-like shape, as depicted in the figure. The distribution of ranks shifts towards low learning rates as we train the agents longer, because slower models with low learning rates start entering Phase 2, and the models trained with large learning rates enter Phase 3.

## 4.2 The Effect Of Dataset Size

We can use the size of the dataset as a possible proxy metric for the coverage that the agent observes in the offline data. We uniformly sampled different proportions (from 5% of the transitions to the entire dataset) of the transitions in the RL Unplugged Atari benchmark dataset (Gulcehre et al., 2020) to understand how the agent behaves with different amounts of training data and whether this is a factor affecting the rank of the network. Figure 6 shows the evolution of rank and returns over the course of training. The effective rank and performance collapse severely with low data proportions, such as when learning on only 5% of the entire dataset.

![10_image_0.png](10_image_0.png)

Those networks can never transition from Phase 1 to Phase 2. However, as the proportion of the dataset subsampled increases, the agents learn more complex behaviors and get into Phase 2. The effective rank collapses less severely for the larger proportions of the dataset, and the agents tend to perform considerably better. In particular, we can see that in Phase 1, an initial decrease of the rank correlates with an increase in performance, which we speculate is due to the network reducing its reliance on spurious parts of the observations, leading to representations that generalize better across states.
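A minimal sketch of this subsampling protocol, assuming uniform sampling without replacement (the helper name `subsample_transitions` and the toy dataset are our own; the actual experiments sample from the full RL Unplugged Atari datasets):

```python
import numpy as np

def subsample_transitions(transitions, proportion, seed=0):
    """Draw a fixed proportion of transitions uniformly without replacement."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(proportion * len(transitions))))
    indices = rng.choice(len(transitions), size=n_keep, replace=False)
    return [transitions[i] for i in indices]

# Toy dataset of (state, action, reward, next_state) tuples.
dataset = [(s, 0, 0.0, s + 1) for s in range(1000)]
subset = subsample_transitions(dataset, proportion=0.05)  # the 5% split
```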
It is worth noting that the ordering of the policies by the agents' performance does not correspond to the ordering of the policies by effective rank throughout training. For example, offline DQN trained on the full dataset performs better than the agent trained on 50% of the dataset, while the agent trained using 50% of the data sometimes has a higher rank. A similar observation can be made for Zaxxon: at the end of training, the network trained on the full dataset underperforms compared to the one trained on 50% of the data, even though its rank is the same or higher.

Figure 6: **[Atari] Dataset size:** Evolution of ranks and returns as we vary the fraction of data available for the agent to train on. We see that the agent which sees very little data collapses both in terms of rank and performance. The agent which sees more of the data has good performance even while allowing some shrinkage of rank during training.

## 5 Interactions Between Rank And Performance

To understand the interactions and effects of different factors on the rank and performance of offline DQN, we performed several experiments and tested the effect of different hyperparameters on the effective rank and the agent's performance. Figure 3 shows the causal graph of the effective rank, its confounders, and the agent's performance. Ideally, we would like to intervene on each node of this graph to measure its effect. As the rank is a continuous random variable, it is not possible to intervene on the effective rank directly. Instead, we emulate interventions on the effective rank by thresholding β with τ. In the treatment case (λ(1)), we assume β > τ, and for λ(0), we have β ≤ τ. We can write the average treatment effect (ATE) (Holland, 1986) of setting the effective rank to a large value on the performance as:

$$\operatorname{ATE}(\lambda,\beta,\tau)=\mathbb{E}[\lambda|\lambda(1)]-\mathbb{E}[\lambda|\lambda(0)].\tag{3}$$
![11_image_0.png](11_image_0.png)

Let us note that this ATE(λ, β, τ) quantity does not necessarily measure the causal effect of β on λ, since we know that λ can be confounded by A (hidden confounders).

Figure 7: **[Atari]:** The correlation plot between the effective rank and the performance (measured in terms of episode returns by evaluating the agent in the environment) of offline DQN on the Asterix and Gravitar games over 256 different hyperparameter configurations trained for 2M learning steps. There is a strong correlation with the ReLU activation function, but the correlation disappears for the network with the *tanh* activation function. There is no significant correlation between the effective rank and the performance on the Asterix game with the complete data, but a positive correlation exists on the Gravitar game. These results are not affected by Simpson's paradox, since the subgroups of the data, when split with respect to activation functions, do not show a consistent correlation trend.

We study the impact of factors such as the activation function, learning rate, data split, target update steps, and CURL and SAM losses on the Asterix and Gravitar levels. We chose these two games since Asterix is an easy-to-explore Atari game with relatively dense rewards, while Gravitar is a hard-to-explore, sparse-reward setting (Bellemare et al., 2016). In Table 1, we present the results of intervening on β thresholded with different quantiles of the effective rank. Choosing a network with a high effective rank for a ReLU network has a statistically significant positive effect on the agent's performance across different quantiles on both Asterix and Gravitar. The agent's performance is measured in terms of normalized scores as described in Gulcehre et al. (2020).
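The quantile-based intervention described above can be sketched as a two-group difference of means. This is a minimal illustration only: the helper name, the normal-approximation standard error, and the synthetic ranks/returns are our own assumptions, not the paper's exact pipeline.

```python
import numpy as np

def ate_rank_threshold(ranks, returns, quantile=0.5):
    """Difference in mean performance between runs whose effective rank
    exceeds the quantile threshold tau (treatment, beta > tau) and the
    remaining runs (control, beta <= tau), with a normal-approximation
    standard error."""
    ranks = np.asarray(ranks, dtype=float)
    returns = np.asarray(returns, dtype=float)
    tau = np.quantile(ranks, quantile)
    treated, control = returns[ranks > tau], returns[ranks <= tau]
    ate = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated)
                 + control.var(ddof=1) / len(control))
    return float(ate), float(se)

# Synthetic sweep: the high-rank runs score higher than the low-rank ones.
ranks = [3, 5, 9, 30, 40, 50]
returns = [0.1, 0.2, 0.1, 1.0, 1.1, 0.9]
ate, se = ate_rank_threshold(ranks, returns, quantile=0.5)
```

A confidence interval such as ate ± 1.96 se then gives the kind of uncertainty estimate reported in Table 1.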
However, the results with the tanh activation function are mixed, and changing the rank does not have a statistically significant impact on the performance in most cases. In Figure 7, we show the correlations between the rank and the performance of the agent. The network with ReLU activations shows a strong correlation between rank and performance, whereas *tanh* does not. The experimental data can be prone to Simpson's paradox, which implies that a trend (a correlation in this case) may appear in several subgroups of the data but disappear or reverse when the groups are aggregated (Simpson, 1951). This can lead to misleading conclusions. We divided our data into equal-sized subgroups for hyperparameters and model variants, but we could not find a consistent trend that disappeared when the subgroups were combined, as in Figure 7. Thus, our experiments do not appear to be affected by Simpson's paradox.

Table 1: **Atari:** The average treatment effect of having a network with a rank above different quantiles. We report the average treatment effect (ATE), its uncertainty (95% confidence intervals using standard errors), and p-values for the Asterix and Gravitar games. A higher effective rank seems to have a smaller effect on Asterix than on Gravitar. Let us note that Gravitar is a sparse-reward problem, and Asterix is a dense-reward one. Overall, the effect is more prominent for ReLU than for tanh, and with tanh networks, it is not statistically significant. We boldface the ATEs where the effect is statistically significant (p < 0.05). The Type column of the table indicates the activation functions used in the experiments in which we performed the intervention. The "Combined" type corresponds to a combination of the tanh and ReLU experiments.
| Level | Type | ATE (q=0.25) | p-value | ATE (q=0.5) | p-value | ATE (q=0.75) | p-value | ATE (q=0.95) | p-value |
|---|---|---|---|---|---|---|---|---|---|
| Asterix | Combined | **0.019 ± 0.010** | 0.001 | 0.007 ± 0.013 | 0.207 | 0.005 ± 0.019 | 0.324 | -0.031 ± 0.011 | 0.999 |
| Asterix | ReLU | **0.079 ± 0.018** | 0.000 | **0.089 ± 0.025** | 0.000 | **0.112 ± 0.040** | 0.000 | **0.110 ± 0.085** | 0.017 |
| Asterix | Tanh | -0.014 ± 0.004 | 0.999 | 0.009 ± 0.005 | 0.996 | -0.005 ± 0.006 | 0.886 | **0.020 ± 0.017** | 0.023 |
| Gravitar | Combined | **0.954 ± 0.077** | 0.000 | **0.569 ± 0.098** | 0.000 | **0.496 ± 0.141** | 0.000 | **0.412 ± 0.206** | 0.001 |
| Gravitar | ReLU | **1.654 ± 0.150** | 0.000 | **1.146 ± 0.163** | 0.000 | **1.105 ± 0.256** | 0.000 | **1.688 ± 0.564** | 0.000 |
| Gravitar | Tanh | 0.028 ± 0.090 | 0.303 | **0.185 ± 0.118** | 0.005 | **0.242 ± 0.167** | 0.009 | 0.054 ± 0.265 | 0.367 |

| Intervention u | Control (u) | Treatment T(u) | Yt(u) − Yc(u) |
|---|---|---|---|
| Activation Function | ReLU | tanh | **66.97 ± 7.13** |
| Learning Rate | 3 × 10−4 | 3 × 10−5 | **-54.15 ± 7.54** |
| Target Update Steps | 2500 | 200 | -15.23 ± 8.21 |
| SAM Loss Weight | 0 | 0.05 | -10.05 ± 8.26 |
| Level | Asterix | Gravitar | -9.41 ± 8.25 |
| CURL Loss Weight | 0 | 0.001 | 8.09 ± 8.27 |
| Data Split | 100% | 5% | 5.11 ± 8.27 |

Table 2: **[Atari] Average Treatment Effect** of different interventions on Asterix and Gravitar: The quantity of interest Y is the terminal rank of the features. The activation function and learning rate influence the terminal feature ranks the most in our setup. We indicate them with boldface; changing the activation from ReLU to tanh improves the effective rank, whereas reducing the learning rate reduces the rank.
In this analysis, our control setting is trained with ReLU activations and a learning rate of 3 × 10−4, without any auxiliary losses, with training done on the full dataset on the level Asterix. We present the *Average Treatment Effect* (ATE) (the difference in ranks between the intervention being present and absent in an experiment) of changing each of these variables in Table 2. We find that changing the activation function from ReLU to tanh drastically affects the effective ranks of the features, which is in line with our earlier observations on other levels. In Figure 8, we present a heatmap of ATEs, where we demonstrate the change in effective rank when two variables are changed from the control simultaneously. Once again, the activation function and learning rate significantly affect the terminal ranks. We also observe some interesting combinations that lead the model to converge to a lower rank, for example, using the SAM loss with dropout.

![12_image_0.png](12_image_0.png)

Figure 8: **[Atari] ablations:** Effects of different pairs of interventions on the control set. The control is on Asterix without dropout, CURL, and SAM, on the full dataset, and with a target update period of 2500 steps.

These observations further reinforce our belief that multiple factors affect the phenomenon and the extent of rank collapse, and that mitigating rank collapse alone may never fully fix the agent's learning ability in different environments.

## 6 The Effect Of Activation Functions

The severe rank collapse of Phase 3 is apparent in our Atari models, which are simple convolutional neural networks with ReLU activation functions. When we study the same phenomenon on the SeekAvoid dataset, rank collapse does not seem to happen in the same way. It is important to note here that to solve those tasks effectively, the agent needs memory; hence, all networks have a recurrent LSTM core.
Since standard LSTMs use *tanh*(·) activations, investigating this setting helps us understand the role of the choice of architecture in the behavior of the model's rank. Figure 10 shows that the output features of the LSTM network on the DeepMind Lab dataset do not experience any detrimental effective rank collapse under different exploration noise when we use the *tanh*(·) activation function for the cell. However, if we replace the *tanh*(·) activation function of the LSTM cell with ReLU, or if we replace the LSTM with a feed-forward MLP using ReLU activations (as seen in Figure 10), the effective rank in both cases collapses to a small value at the end of training. This behavior shows that the choice of activation function has a considerable effect on whether the model's rank collapses throughout training and, subsequently, on its ability to learn expressive value functions and policies in the environment, as it is susceptible to entering Phase 3 of training.

Observation: Agents whose networks have ReLU units tend to have dead units, which causes the effective rank to collapse in Phase 3 of learning, while other activations, like tanh, do not suffer a similar collapse.

The activation functions influence both the network's learning dynamics and performance. As noted by Pennington and Worah (2017), the activation function can influence the rank of each layer at initialization. Figure 9 presents our findings on bsuite levels. In general, the effective rank of the penultimate layer with ReLU activations collapses faster, while ELU and *tanh*(·) tend to maintain a relatively higher rank than ReLU. As the effective rank goes down for the catch environment, the activations become sparser, and the units die. We illustrate the sparsity of the activations with Gram matrices over the features of the last hidden layer of a feedforward network trained on bsuite in Figure 37 in Appendix A.13.

![13_image_0.png](13_image_0.png)
Figure 9: **[bsuite] Effective ranks of different activation functions:** The magnitude of the drop in effective rank is more severe for **ReLU** and **sigmoid** activation functions than for **tanh**.

## 7 Optimization

The influence of the minibatch size and the learning rate on the learning dynamics is a well-studied phenomenon. Smith and Le (2017); Smith et al. (2021) argued that the ratio between the minibatch size and the learning rate relates to the implicit regularization of SGD and also affects the learning dynamics. Thus, it is evident that those factors would impact the effective rank and performance.

![14_image_0.png](14_image_0.png)

Figure 10: **[DeepMind lab SeekAvoid] Activation functions:** Evolution of ranks in a typical LSTM network with **tanh** activations at the cell of the LSTM on DeepMind Lab. We see that a typical LSTM network does not get into Phase 3. However, when we change the activation function of the cell gate from **tanh** to **ReLU**, the effective rank collapses to very small values. The effective rank also collapses when the LSTM is replaced with a feedforward network with ReLU activations.

Here, we focus on a setup where we see the correlation between effective rank and performance and investigate how it emerges. Our analysis and ablations in Table 2 and Figure 8 illustrate that the learning rate is one of the most prominent factors affecting the performance of the agent. In general, there is no strong and consistent correlation between the effective rank and the performance across different models and hyperparameter settings. However, in Figure 1, we showed that the correlation exists in a specific setting with a particular minibatch size and learning rate. Further, in Figure 7, we narrowed down that the correlation between the effective rank and the performance exists for the offline DQN with ReLU activation functions.
Thus, in this section, we focus on a regime where this correlation exists, the offline DQN with ReLU, and investigate how the learning rate and minibatch size affect the rank. We ran several experiments to explore the relationship between the minibatch size, learning rates, and rank. In this section, we report results on Atari in a few different settings: Atari-DQN-256-2M, Atari-DQN-32-100M, Atari-DQN-256-20M, and Atari-BC-256-2M. The dying ReLU problem is a well-studied issue in supervised learning (Glorot et al., 2011; Gulcehre et al., 2016) where, due to high learning rates or unstable learning dynamics, ReLU units can get stuck in the zero regime of the ReLU activation function. We compute the number of dead ReLU units of the penultimate layer of a network as the number of units with a zero activation value for all inputs. Increasing the learning rate increases the number of dead units, as can be seen in Figures 11 and 12. Even in BC, we observed that using large learning rates can cause dead ReLU units, rank collapse, and poor performance, as can be seen in Figure 13; hence, this behavior is not unique to offline RL losses. Nevertheless, models with TD-learning losses have catastrophic rank collapses and many dead units at lower learning rates than BC. Let us note that the effective rank depends on the number of units (D) and the number of dead units (η) at a layer. It is easy to see that the effective rank is upper-bounded by D − η. In Figure 14, we observe a strong correlation between the number of dead ReLU units of the penultimate layer of the ReLU network and its effective rank.

Observation: The pace of learning influences the number of dead units and the effective rank: the larger the learning rate and the smaller the minibatch, the more the number of dead units increases and the effective rank decreases. However, the performance is only poor when the effective rank is severely low.
![15_image_0.png](15_image_0.png)

Figure 11: **[Atari DQN-32-100M]** Learning curves of offline DQN for 100M gradient steps of training with a minibatch size of 32. Increasing the learning rate increases the number of dead units in the ReLU network. As a result, the increased learning rate also causes a more severe collapse, which aligns very well with the number of dead units.

We observed that this behavior can also happen with networks using saturating activation functions such as sigmoid. Finally, we look into how effective ranks shape up towards the end of training. We test 12 different learning rates to understand the interaction between the learning rates and the effective rank of the representations. We summarize our main results in Figure 5, where training offline DQN decreases the effective rank. The effective rank is low for both small and large learning rates. For higher learning rates, as we have seen earlier, training for longer leads to many dead ReLU units, which in turn causes the effective rank to diminish, as seen in Figures 11 and 12. Moreover, as seen in those figures, in Phase 1, the effective rank and the number of dead units are both low; thus, the rank collapse in Phase 1 is not caused by dead units. In Phase 3, the number of dead units is high, and the drastically low effective rank is caused by the network's large number of dead units. In Phase 3, we believe that both the under-parameterization and the poor performance are caused by the number of dead units in the ReLU DQN, which was shown to be the case by Shin and Karniadakis (2020) in supervised learning. Overall, we could only observe a high correlation between the effective rank and the agent's performance when we use ReLU activations in the network after long training. We also present more analysis on how controlling the loss landscape affects rank vs. performance in Appendix A.2 and A.4.
## 8 The Effect Of Loss Function

## 8.1 Q-Learning And Behavior Cloning

Behavior cloning (BC) is a method to learn the behavior policy from an offline dataset using supervised learning approaches (Pomerleau, 1989). Policies learned by BC mimic the behavior policy, and thus the performance of the learned BC agent is highly limited by the quality of the data the agent is trained on. We compare BC and Q-learning in Figure 15.

![16_image_0.png](16_image_0.png)

Figure 12: **[Atari DQN-256-3M]** Learning curves of offline DQN for 3M gradient steps of training with a minibatch size of 256. Increasing the minibatch size improves the performance of the network with larger learning rates.

![16_image_1.png](16_image_1.png)

Figure 13: **[Atari BC-256-2M]** Learning curves of BC after 2M gradient steps with a minibatch size of 256. For large enough learning rates, the rank of the BC agent also collapses. We hypothesize that the rank collapse is a side effect of learning in general and not only of TD-learning-based losses.

We confirm that with default hyperparameters, the BC agent's effective rank does not collapse at the end of training. In contrast, as shown by Kumar et al. (2020a), DQN's effective rank collapses. DQN outperforms BC even though its rank is considerably lower, and during learning, the rank is not predictive of the performance (Figure 15). This behavior indicates the unreliability of effective rank for offline model selection.

![17_image_0.png](17_image_0.png)

Figure 14: **[Atari]** Scatter plot of the correlation between the number of dead units and the effective rank at the end of training. We observe that the effective rank strongly correlates with the number of dead units. Also, using a larger learning rate increases the number of dead units in the network.

![17_image_1.png](17_image_1.png)

Figure 15: **[Atari] Offline DQN and Behavior Cloning (BC):** We compare BC and DQN agents on the Atari dataset.
We used the same architecture, dataset, and training protocols for both baselines. We used the hyperparameters defined in Gulcehre et al. (2020) for the comparisons. The rank of the DQN agent is significantly lower, and it achieves higher returns than BC.

## 8.2 CURL: The Effect Of Self-Supervision

We study whether adding an auxiliary loss term proposed for CURL (Laskin et al., 2020) (Equation 2) during training helps the model mitigate rank collapse. In all our Atari experiments, we use the CURL loss described in Laskin et al. (2020) without any modifications. Since the DeepMind Lab tasks require memory, we apply a similar CURL loss to the features aggregated with a mean of the states over all timesteps. In all experiments, we also sweep over the weight of the CURL loss.

![18_image_0.png](18_image_0.png)

Figure 16: **[Atari] Auxiliary losses:** Evolution of ranks as we increase the weight of auxiliary losses. We see that a strong weight for the auxiliary loss helps mitigate the rank collapse but prevents the model from learning useful representations.

Figure 16 shows the ranks and returns on Atari (for DeepMind Lab results, see Appendix A.6). In Atari games, using very large weights for the auxiliary loss (≈ 1) prevents rank collapse but simultaneously deteriorates the agent's performance. We speculate, borrowing intuitions from the supervised learning literature on the role of rank as an implicit regularizer (Huh et al., 2021), that in such scenarios, a large rank prevents the network from ignoring spurious parts of the observations, which affects its ability to generalize. On the other hand, moderate weights of the CURL auxiliary loss do not significantly change the rank or performance of the agent. Previously, Agarwal et al. (2021) showed that the CURL loss does not improve the performance of an RL agent on Atari in a statistically meaningful way. On DeepMind Lab games, we do not observe any rank collapse.
In none of our DeepMind Lab experiments do agents enter Phase 3 after Phases 1 and 2. This is due to the use of the tanh activation function in the LSTM, based on our investigation of the role of the activation functions in Section 6.

## 9 Tandem RL

Kumar et al. (2020a) propose, as one possible hypothesis for the observed rank collapse, the re-use of the same transition samples multiple times, which is particularly prevalent in the offline RL setting. A setting in which this hypothesis can be tested directly is the 'Tandem RL' of Ostrovski et al. (2021): here, a secondary ('passive') agent is trained from the data stream generated by an architecturally equivalent, independently initialized baseline agent, which itself is trained in a regular, online RL fashion. The passive agent tends to under-perform the active agent, despite an identical architecture, learning algorithm, and data stream. This setup presents a clean ablation setting in which both agents use data in the same way (in particular, they do not differ in their re-use of data samples), so any difference in the performance or rank of their representations cannot be directly attributed to the reuse of data. In Figure 17, we summarize the results of a Tandem-DQN using the Adam (Kingma and Ba, 2014) and RMSProp (Tieleman et al., 2012) optimizers. Despite the passive agent reusing data in a similar fashion to the online agent, we observe that it collapses to a lower rank. Moreover, the passive agent's performance tends to be significantly (in most cases, catastrophically) worse than the online agent's, which cannot be satisfactorily explained just by the extent of the difference in their effective ranks alone. We think that the Q-learning algorithm is not efficient enough to exploit the data generated by the active agent to learn complex behaviors that would put the passive agent into Phase 2.
We also noticed that, although Adam achieves better performance than RMSProp, the rank of the model trained with the Adam optimizer tends to be lower than that of the models trained with RMSProp.

Figure 17: **Atari, tandem RL:** We investigate the effect of the choice of optimizer, Adam or RMSProp, on the rank and the performance of the models, both for the active (solid lines) and passive agents (dashed lines). We observe that the rank of the passive agent is lower than that of the active agent for both Adam and RMSProp.

Figure 18: **Atari, Forked tandem RL:** We evaluate different loss functions in the forked tandem setting, where the passive agent is forked from the online agent and the online agent's parameters are frozen; only the passive agent's parameters continue to be updated. We forked the passive agent after it had seen 50M frames during training, which is denoted with dashed lines in the figure. We observe that using both BVE and MC return losses improves the agent's rank, but the agent's performance is still poor.

In Figure 18, we investigate the effect of different learning algorithms on the rank and the performance of an agent in the *forked tandem RL* setting. In the forked tandem setting, an agent is first trained for a fraction of its total training time. Then, the agent is 'forked' into an active and a passive agent, both starting with the same network weights; the active agent is 'frozen' and no longer trained but continues to generate data from its policy, which is used to train the passive agent for the remainder of the training time. Here, we see that the rank flat-lines when we fork the Q-learning agent, but the performance collapses dramatically. In contrast, although the ranks of BVE and of an agent trained with on-policy Monte Carlo returns go up, their performance still drops.
Nevertheless, the decline in performance for BVE and the agent trained with Monte Carlo returns is not as severe as for the Q-learning agent on most Atari games.

## 10 Offline Policy Selection

In Section 7, we found that, in the large learning rate sweep setting, the rank and the number of dead units correlate strongly with performance. Offline policy selection (Paine et al., 2020) aims to choose the best policy purely from offline data, without any online interactions. The apparent correlation between rank and performance raises the natural question of how well rank performs as an offline policy selection approach. We did this analysis on the DQN-256-20M experiments, where we previously observed a strong correlation between rank and performance. We ran the DQN network described in the DQN-256-20M experimental setting until the end of training and performed offline policy selection, choosing the learning rate that yields either the maximum effective rank or the minimum number of dead units.

Figure 19: **Atari DQN-256-20M:** This plot depicts the simple regret of a DQN agent with ReLU activations using the effective rank and the number of dead units. On the left, we show the simple regret when selecting the best learning rate using the effective rank; on the right, we show the simple regret achieved based on the number of dead units.

Figure 19 illustrates the simple regret on each Atari offline policy selection game. The simple regret of the recommended policy measures how close an agent is to the best policy, and it is computed as described in Konyushkova et al. (2021). A simple regret of 0 would mean that our offline policy selection mechanism successfully chooses the best policy, and 1 would mean that it chooses the worst policy. After 2M training steps, the simple regret with the number of dead units as a policy selection method is poor.
In contrast, the simple regret achieved by selecting the agent with the highest effective rank is good. The mean simple regret achieved by using the number of dead units as an offline policy selection (OPS) method is 0.45 ± 0.11, where the uncertainty ±0.11 is computed as the standard error across five seeds. In contrast, the simple regret achieved by using the effective rank as an OPS method is 0.24 ± 0.12. The effective ranks for most learning rates collapse as we train longer, since more models enter Phase 3 of learning and the number of dead units increases.

Figure 20: **Atari DQN-256-20M:** Here, we depict the percentage improvement of offline policy selection using rank individually over online policy selection using the median reward across nine Atari games to select the best learning rate. Using the effective rank as the offline policy selection method performs relatively well compared to online policy selection based on the median normalized score across Atari games.

After 20M learning steps, the mean simple regret computed using the effective rank as the OPS method becomes 0.40 ± 0.07, and the mean simple regret with the number of dead units for the OPS is 0.25 ± 0.12. Let us note that 2M learner steps is the more typical budget for training agents on the RL Unplugged Atari dataset. The number of dead units becomes a good OPS metric when the network is trained for long (20M steps in our experiment), where the rank becomes drastically small for most learning rate configurations. In contrast, the effective rank seems to be a good OPS metric earlier in training. Nevertheless, without prior information about the task and agent, it is challenging to conclude whether the number of dead units or the effective rank would be an appropriate OPS metric. Other factors, such as the number of training steps, activation functions, and other hyperparameters, confound the effective rank and the number of dead units.
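The two offline metrics and the selection-and-regret computation above can be sketched as follows. This assumes the srank_δ definition of effective rank from Kumar et al. (2020a) with δ = 0.01 and a regret normalized so that 0 means the best policy was chosen; the function names are ours, not those of Konyushkova et al. (2021).

```python
import numpy as np

def effective_rank(features, delta=0.01):
    """srank_delta: smallest k such that the top-k singular values of the
    feature matrix account for a (1 - delta) fraction of their sum."""
    s = np.linalg.svd(features, compute_uv=False)
    cumulative = np.cumsum(s) / np.sum(s)
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)

def dead_unit_count(features, tol=1e-8):
    """Number of penultimate-layer units that are (numerically) zero for
    every state in the batch, i.e. dead ReLU units."""
    return int(np.sum(np.all(np.abs(features) <= tol, axis=0)))

def select_by_metric(metrics, maximize=True):
    """OPS: pick the candidate (e.g. a learning rate) whose offline metric
    (effective rank, or negated dead-unit count) is best."""
    key = max if maximize else min
    return key(metrics, key=metrics.get)

def simple_regret(scores, chosen):
    """Normalized simple regret: 0 if the chosen policy is the best,
    1 if it is the worst (assumed normalization)."""
    best, worst = max(scores.values()), min(scores.values())
    if best == worst:
        return 0.0
    return (best - scores[chosen]) / (best - worst)
```

For example, given per-learning-rate effective ranks and (held-out) online scores, `simple_regret(scores, select_by_metric(ranks))` is the quantity reported in Figure 19.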
Thus, we believe the two metrics we tested are unreliable in general OPS settings without controlling for those other extraneous factors. In Figure 20, we compare the effective rank as an OPS method in terms of its percentage improvement over online policy selection that maximizes the median reward achieved by each agent across nine Atari games. The effective rank on Zaxxon, BeamRider, MsPacman, Pooyan, Robotank, and DoubleDunk performs competitively with online policy selection. The effective rank may be a complementary tool for networks with ReLU activation functions, and we believe it can be a useful metric to monitor during training, in addition to the number of dead units, to get a better picture of the performance of the agent.

## 11 Robustness To Input Perturbations

Several works, such as Sanyal et al. (2018), suggested that low-rank models can be more robust to input perturbations (specifically adversarial ones). It is difficult to measure just the effect of low-rank representations on the agent's performance, since rank itself is not a robust measure: it depends on factors such as the activation function and learning, which in turn can affect the generalization of the algorithm independently of rank. However, it is easier to validate the antithesis, namely *"Do more robust agents need to have higher effective rank than less robust agents?"*. We can easily test this hypothesis by comparing DQN and BC agents.

Robustness metric: We define the robustness metric ρ(p) based on three variables: i) the noise level p, ii) the noise distribution d(p), and iii) the score obtained by the agent when evaluated in the environment with noise level p, *score*[d(p)], where *score*[d(0)] represents the average score achieved by the agent without any perturbation applied. Then we can define ρ(p) as follows:

$$\rho(p)=1-\frac{\mathit{score}[d(0)]-\mathit{score}[d(p)]}{\mathit{score}[d(0)]}=\frac{\mathit{score}[d(p)]}{\mathit{score}[d(0)]}.\qquad(4)$$
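Equation 4 and a per-game summary statistic can be computed as in this sketch; the trapezoidal AUC over noise levels is our assumption about how the robustness curve is aggregated, not necessarily the exact computation behind the reported AUC numbers.

```python
def robustness(score_clean, score_noisy):
    """rho(p) from Equation 4: the fraction of the clean score retained
    under perturbation level p (1.0 means no degradation)."""
    return score_noisy / score_clean

def robustness_curve_auc(scores_by_level, score_clean):
    """Area under the robustness curve across noise levels, computed with
    the trapezoidal rule (an assumed aggregation of rho(p))."""
    levels = sorted(scores_by_level)
    rhos = [robustness(score_clean, scores_by_level[p]) for p in levels]
    auc = 0.0
    for (p0, r0), (p1, r1) in zip(zip(levels, rhos), zip(levels[1:], rhos[1:])):
        auc += 0.5 * (r0 + r1) * (p1 - p0)
    return auc
```

Note that ρ(p) assumes non-negative scores, which is why the IceHockey scores are shifted before the metric is applied.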
In our experiments, we compare DQN and BC agents, since we already know that BC has much larger ranks than DQN across all Atari games. We trained these agents on datasets without any data augmentation. The data augmentations are only applied to the inputs when the agent is evaluated in the environment. We evaluate our agents on the BeamRider, IceHockey, MsPacman, DemonAttack, Robotank, and RoadRunner games from the RL Unplugged Atari online policy selection games. We excluded the DoubleDunk, Zaxxon, and Pooyan games, since the BC agent's performance on those levels was poor and very close to that of a random agent, so the robustness metrics on those games would not be very meaningful. On IceHockey, the results can be negative; thus, we shifted the scores by 20 to make sure that they are non-negative. In our robustness experiments, we used the hyperparameters that were used in Gulcehre et al. (2020) on Atari for both DQN and BC.

Robustness to random shifts: In Figure 21, we investigate the robustness of DQN and BC agents with respect to random translation image perturbations applied to the observations, as described by Yarats et al. (2020). We evaluate the agents on the Atari environment with varying degrees of input shifting. We noticed that BC's performance deteriorates abruptly, whereas the performance of DQN, while also decreasing, remains better than that of BC.

Robustness to random scaling: In Figure 22, we explore the robustness of DQN and BC agents with respect to random scaling as the image perturbation method applied to the observations that are fed into our

Figure 21: **[Atari] Robustness to random shifts during evaluation:** We measure the robustness of BC and DQN to random shifts on six Atari games from RL Unplugged. The DQN and BC agents are trained offline without any perturbations on the inputs, only on the RL Unplugged datasets.
However, during evaluation, we perturbed the input images with "random shift" data augmentation. Overall, DQN is more robust than BC to the evaluation-time random-shift image perturbations that are not seen during training. The difference is more pronounced on the IceHockey and BeamRider games. DQN achieved a mean AUC (over different games) of 2.042, and BC achieved 1.26.

Figure 22: **[Atari] Robustness to random scaling during evaluation:** We measure the robustness of BC and DQN to random scaling of the inputs on six Atari games from RL Unplugged. The DQN and BC agents are trained offline without data augmentations on the RL Unplugged datasets. However, we perturbed the observations during evaluation with "random scale" data augmentation. We observed that DQN is more robust than BC to the evaluation-time random-scaling data augmentation. The difference is more pronounced in the IceHockey, BeamRider, and Robotank games. DQN achieved a mean AUC (over different games) of 1.60, and BC achieved 0.87.

deep Q-network. We randomly scale the image inputs as described in Chen et al. (2017). When evaluating the agent in an online environment, we test it with varying levels of image scaling. The results are again very similar to the random-shift experiments: the DQN agent is more robust to random perturbations in the online environment. These two experiments indicate that DQN is more robust than the BC agent to evaluation-time input perturbations, specifically random shifts and scaling. Thus, more robust representations do not necessarily require a higher effective rank.

## 12 Discussion

In this work, we empirically investigated the previously hypothesized connection between low effective rank and poor performance. We found that the relationship between effective rank and performance is not as simple as previously conjectured.
We discovered that an offline RL agent trained with Q-learning goes through three phases during training. In the first phase, which we call the self-pruning phase, the effective rank collapses to severely low values while the agent starts to learn basic behaviors from the dataset. Then, in the second phase, the effective rank starts going up, and in the third phase, the effective rank collapses again. Several factors, such as the learning rate, the activation functions, and the number of training steps, influence the occurrence, persistence, and severity of the three phases of learning. In general, a low rank is not always indicative of poor performance. Besides strong empirical evidence, we propose a hypothesis to explain the underlying phenomenon: not all features are useful for the task the neural network is trying to solve, and low rank might correlate with more robust internal representations that can lead to better generalization. Unfortunately, reasoning about what it means for the rank to be too low is hard in general, as the rank is agnostic to which directions of variation in the data are being ignored and to higher-order terms that hint towards a more compact representation of the data with fewer dimensions. Our results indicate that an agent's effective rank and performance correlate in restricted settings, such as ReLU activation functions, Q-learning, and a fixed architecture. However, as we showed in our experiments, this correlation is primarily spurious in other settings, since it disappears with simple modifications such as changing the activation function or the learning rate. We found several ways to improve the effective rank of the agent without improving its performance, such as using tanh instead of ReLU, adding an auxiliary loss (e.g., CURL), or changing the optimizer. These methods address the rank collapse but not the underlying learning deficiency that causes both the collapse and the poor performance.
Nevertheless, our results show that the dynamics of the rank and of agent performance through learning are still poorly understood; we need more theoretical investigation to understand the relationship between these two factors. We also observed in the tandem and offline RL settings that the rank collapses to a minimal value early in training, after which there is unexplained variance between agents in the later stages of learning. Overall, the cause and role of this early rank collapse remain unknown, and we believe understanding its potential effects is essential for understanding the practical learning dynamics of large-scale agents. The existence of low-rank but high-performing policies suggests that our networks can be over-parameterized for the tasks, and that parsimonious representations emerge naturally with TD-learning-based bootstrapping losses and ReLU networks in the offline RL setting. Discarding the dead ReLU units might enable more efficient inference. We believe this finding may inspire a new family of pruning algorithms.

Acknowledgements: We would like to thank Clare Lyle, Will Dabney, Aviral Kumar, Rishabh Agarwal, Tom le Paine, Mark Rowland and Yutian Chen for the discussions. We want to thank Mark Rowland, Rishabh Agarwal and Aviral Kumar for the feedback on the early draft version of the paper. We thank Sergio Gomez and Bjorn Winckler for their help with the infrastructure and the code-base at the inception of this project. We would like to thank the developers of Acme (Hoffman et al., 2020), Jax (Bradbury et al., 2018) and the Deepmind JAX ecosystem (Babuschkin et al., 2020) for developing the software infrastructure that enabled our experiments.

## References

Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In *International Conference on Machine Learning*, pages 104–114. PMLR, 2020. Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare.
Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34:29304–29320, 2021. OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. *The International Journal of Robotics Research*, 39(1):3–20, 2020. Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. Advances in Neural Information Processing Systems, 32:7413–7424, 2019. Igor Babuschkin, Kate Baumli, Alison Bell, Surya Bhupatiraju, Jake Bruce, Peter Buchlovsky, David Budden, Trevor Cai, Aidan Clark, Ivo Danihelka, Claudio Fantacci, Jonathan Godwin, Chris Jones, Ross Hemsley, Tom Hennigan, Matteo Hessel, Shaobo Hou, Steven Kapturowski, Thomas Keck, Iurii Kemaev, Michael King, Markus Kunesch, Lena Martens, Hamza Merzic, Vladimir Mikulik, Tamara Norman, John Quan, George Papamakarios, Roman Ring, Francisco Ruiz, Alvaro Sanchez, Rosalia Schneider, Eren Sezener, Stephen Spencer, Srivatsan Srinivasan, Luyu Wang, Wojciech Stokowiec, and Fabio Viola. The DeepMind JAX Ecosystem, 2020. URL http://github.com/deepmind/jax. David Barrett and Benoit Dherin. Implicit gradient regularization. In *International Conference on Learning* Representations, 2020. Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. Deepmind lab. *arXiv preprint arXiv:1612.03801*, 2016. Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. *Advances in Neural Information Processing Systems*, 29, 2016. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. 
*Journal of Artificial Intelligence Research*, 47:253–279, 2013. Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. DotA 2 with large scale deep reinforcement learning. *arXiv preprint arXiv:1912.06680*, 2019. James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. Steven Bradtke and Andrew Barto. Linear least-squares algorithms for temporal difference learning. *Machine* Learning, 22:33–57, 03 1996. Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. *arXiv preprint arXiv:1706.05587*, 2017. Hadi Daneshmand, Jonas Kohler, Francis Bach, Thomas Hofmann, and Aurelien Lucchi. Batch normalization provably avoids ranks collapse for randomly initialised deep networks. *Advances in Neural Information* Processing Systems, 2020. Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv preprint arXiv:1406.2572, 2014. Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In *International Conference on Machine Learning*, volume 70, pages 1019–1028. PMLR, 2017. Gabriel Dulac-Arnold, Daniel Mankowitz, and Todd Hester. Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901, 2019. Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. *Journal* of Machine Learning Research, 6:503–556, 2005. Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. 
Sharpness-aware minimization for efficiently improving generalization. *arXiv preprint arXiv:2010.01412*, 2021. Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. *arXiv preprint arXiv:2004.07219*, 2020. Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. *arXiv preprint arXiv:1802.09477*, 2018. Scott Fujimoto, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. Benchmarking batch deep reinforcement learning algorithms. *arXiv preprint arXiv:1910.01708*, 2019. Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via Hessian eigenvalue density. *arXiv preprint arXiv:1901.10159*, 2019. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *International Conference on Artificial Intelligence and Statistics*, pages 249–256. JMLR Workshop and Conference Proceedings, 2010. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In *International Conference on Artificial Intelligence and Statistics*, pages 315–323. JMLR Workshop and Conference Proceedings, 2011. Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. In *International Conference on Machine Learning*, pages 3059–3068. PMLR, 2016. Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gómez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, et al. RL unplugged: Benchmarks for offline reinforcement learning. *arXiv preprint arXiv:2006.13888*, 2020. Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, and Nando de Freitas. Regularized behavior value estimation. *arXiv preprint arXiv:2103.09575*, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In *Proceedings of the IEEE International Conference on Computer Vision*, pages 1026–1034, 2015. Matt Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, Sarah Henderson, Alex Novikov, Sergio Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Andrew Cowie, Ziyu Wang, Bilal Piot, and Nando de Freitas. Acme: A research framework for distributed reinforcement learning. *arXiv preprint arXiv:2006.00979*, 2020. Paul W Holland. Statistics and causal inference. *Journal of the American Statistical Association*, 81(396):945–960, 1986. Kaixuan Huang, Yuqing Wang, Molei Tao, and Tuo Zhao. Why do deep residual networks generalize better than deep feedforward networks?–a neural tangent kernel perspective. *arXiv preprint arXiv:2002.06262*, 2020. Minyoung Huh, Hossein Mobahi, Richard Zhang, Brian Cheung, Pulkit Agrawal, and Phillip Isola. The low-rank simplicity bias in deep networks. *arXiv preprint arXiv:2103.10427*, 2021. Maximilian Igl, Gregory Farquhar, Jelena Luketina, Wendelin Boehmer, and Shimon Whiteson. Transient non-stationarity and generalisation in deep reinforcement learning. *arXiv preprint arXiv:2006.05826*, 2020. Ying Jin, Zhuoran Yang, and Zhaoran Wang. Is pessimism provably efficient for offline RL? In *International Conference on Machine Learning*, pages 5084–5096. PMLR, 2021. Dimitris Kalimeris, Gal Kaplun, Preetum Nakkiran, Benjamin Edelman, Tristan Yang, Boaz Barak, and Haofeng Zhang. Sgd on neural networks learns functions of increasing complexity. *Advances in Neural Information Processing Systems*, 32, 2019. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. *arXiv preprint arXiv:1609.04836*, 2016.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *International Conference* on Learning Representations, 2014. Ksenia Konyushkova, Yutian Chen, Thomas Paine, Caglar Gulcehre, Cosmin Paduraru, Daniel J Mankowitz, Misha Denil, and Nando de Freitas. Active offline policy selection. Advances in Neural Information Processing Systems, 34, 2021. Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. In *Conference on Neural Information Processing Systems*, pages 11761– 11771, 2019. Aviral Kumar, Rishabh Agarwal, Dibya Ghosh, and Sergey Levine. Implicit under-parameterization inhibits data-efficient deep reinforcement learning. *arXiv preprint arXiv:2010.14498*, 2020a. Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. *arXiv preprint arXiv:2006.04779*, 2020b. Aviral Kumar, Rishabh Agarwal, Aaron Courville, Tengyu Ma, George Tucker, and Sergey Levine. Valuebased deep reinforcement learning requires explicit regularization. In *RL for Real Life Workshop & Overparameterization: Pitfalls and Opportunities Workshop, ICML*, 2021a. URL https://drive.google.com/ file/d/1Fg43H5oagQp-ksjpWBf_aDYEzAFMVJm6/view. Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, and Sergey Levine. Dr3: Value-based deep reinforcement learning requires explicit regularization. *arXiv preprint arXiv:2112.04716*, 2021b. Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, and Sergey Levine. A workflow for offline modelfree robotic reinforcement learning. *arXiv preprint arXiv:2109.10813*, 2021c. Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. *Journal of Machine Learning* Research, 4:1107–1149, 2003. Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. 
In Marco Wiering and Martijn van Otterlo, editors, *Reinforcement Learning: State-of-the-Art*, pages 45–73. Springer Berlin Heidelberg, 2012. Michael Laskin, Aravind Srinivas, and Pieter Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning. In *International Conference on Machine Learning*, pages 5639–5650. PMLR, 2020. Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*, 2020. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. *arXiv preprint arXiv:1712.09913*, 2017. Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. *arXiv preprint arXiv:1808.01204*, 2018. Clare Lyle, Mark Rowland, Georg Ostrovski, and Will Dabney. On the effect of auxiliary tasks on representation dynamics. In *International Conference on Artificial Intelligence and Statistics*, pages 1–9. PMLR, 2021. Marlos C Machado, Sriram Srinivasan, and Michael Bowling. Domain-independent optimistic initialization for reinforcement learning. In *Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence*, 2015. Charles H Martin and Michael W Mahoney. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. *Journal of Machine Learning Research*, 22(165): 1–73, 2021. Michael Mathieu, Sherjil Ozair, Srivatsan Srinivasan, Caglar Gulcehre, Shangtong Zhang, Ray Jiang, Tom Le Paine, Konrad Zolna, Richard Powell, Julian Schrittwieser, et al. Starcraft ii unplugged: Large scale offline reinforcement learning. In *Deep RL Workshop NeurIPS 2021*, 2021. Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy Campbell-Gillingham, Geoffrey Irving, and Nat McAleese. 
Teaching language models to support answers with verified quotes. *arXiv preprint arXiv:2203.11147*, 2022. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. *Nature*, 518(7540):529–533, 2015. Hossein Mobahi, Mehrdad Farajtabar, and Peter L Bartlett. Self-distillation amplifies regularization in Hilbert space. *arXiv preprint arXiv:2002.05715*, 2020. Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepezvari, Satinder Singh, et al. Behaviour suite for reinforcement learning. arXiv preprint arXiv:1908.03568, 2019. Georg Ostrovski, Pablo Samuel Castro, and Will Dabney. The difficulty of passive learning in deep reinforcement learning. *Advances in Neural Information Processing Systems*, 34, 2021. Tom Le Paine, Cosmin Paduraru, Andrea Michi, Caglar Gulcehre, Konrad Zolna, Alexander Novikov, Ziyu Wang, and Nando de Freitas. Hyperparameter selection for offline reinforcement learning. *arXiv preprint* arXiv:2007.09055, 2020. Jeffrey Pennington and Pratik Worah. Nonlinear random matrix theory for deep learning. *Advances in* Neural Information Processing Systems, 2017. Dean A Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Conference on Neural Information Processing Systems, pages 305–313, 1989. Martin Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In João Gama, Rui Camacho, Pavel B. Brazdil, Alípio Mário Jorge, and Luís Torgo, editors, *European Conference on Machine Learning*, pages 317–328, 2005. Levent Sagun, Léon Bottou, and Yann LeCun. Singularity of the hessian in deep learning. 
*arXiv preprint* arXiv:1611.07476, 2016. Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of over-parametrized neural networks. *arXiv preprint arXiv:1706.04454*, 2017. Amartya Sanyal, Varun Kanade, Philip HS Torr, and Puneet K Dokania. Robustness via deep low-rank representations. *arXiv preprint arXiv:1804.07090*, 2018. Tianyu Shi, Dong Chen, Kaian Chen, and Zhaojian Li. Offline reinforcement learning for autonomous driving with safety and exploration enhancement. *arXiv preprint arXiv:2110.07067*, 2021. Yeonjong Shin and George Em Karniadakis. Trainability of relu networks and data-dependent initialization. Journal of Machine Learning for Modeling and Computing, 1(1), 2020. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, and Adrian Bolton. Mastering the game of go without human knowledge. *Nature*, 550(7676):354–359, 2017. Edward H Simpson. The interpretation of interaction in contingency tables. Journal of the Royal Statistical Society: Series B (Methodological), 13(2):238–241, 1951. Samuel L Smith and Quoc V Le. A Bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017. Samuel L Smith, Benoit Dherin, David GT Barrett, and Soham De. On the origin of implicit regularization in stochastic gradient descent. *arXiv preprint arXiv:2101.12176*, 2021. Tijmen Tieleman, Geoffrey Hinton, et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. *COURSERA: Neural networks for machine learning*, 4(2):26–31, 2012. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Thirtieth AAAI conference on artificial intelligence, 2016. Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. 
Grandmaster level in StarCraft II using multi-agent reinforcement learning. *Nature*, 575(7782):350–354, 2019. Tengyang Xie, Ching-An Cheng, Nan Jiang, Paul Mineiro, and Alekh Agarwal. Bellman-consistent pessimism for offline reinforcement learning. *Advances in Neural Information Processing Systems*, 34, 2021. Denis Yarats, Ilya Kostrikov, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In *International Conference on Learning Representations*, 2020.

## A Appendix

## A.1 The Impact Of Depth

Pennington and Worah (2017) explored the relationship between the rank and the depth of a neural network at initialization and found that the rank of each layer's feature matrix decreases proportionally to the index of the layer in a deep network. Here, we test the performance of a feedforward network with 2, 8, and 16 layers on the bsuite tasks, trained using Double Q-learning (Van Hasselt et al., 2016), to see the effect of the number of layers on the effective rank. All our networks use ReLU activation functions and He initialization (He et al., 2015). Figure 23 illustrates that, at the end of training as well, the rank collapses proportionally as one progresses from lower layers to higher layers. The network's effective rank (the rank of the penultimate layer) drops to a minimal value on all three tasks regardless of the network's number of layers. The last layer of a network acts as a bottleneck; thus, a collapse of the effective rank reduces expressivity. Nevertheless, a deeper network with the same rank as a shallower one can learn to represent a larger class of functions (be less under-parametrized). The agents exhibit poor performance when the effective rank collapses to 1. At that point, all the ReLU units die, i.e., output zero irrespective of the input. Thus, on bsuite, the deeper networks (8- and 16-layer networks) performed worse than the 2-layer MLP.
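The initialization-time effect discussed above can be illustrated in a few lines. This is an illustrative numpy sketch (random He-initialized ReLU layers and srank_δ with δ = 0.01), not the trained bsuite networks of Figure 23.

```python
import numpy as np

def layerwise_effective_ranks(batch, widths, delta=0.01, seed=0):
    """Effective rank (srank_delta) of each layer's feature matrix in a
    randomly initialized ReLU MLP with He initialization."""
    rng = np.random.default_rng(seed)
    x, ranks = batch, []
    for fan_in, fan_out in zip([batch.shape[1]] + widths[:-1], widths):
        w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
        x = np.maximum(x @ w, 0.0)  # ReLU features of this layer
        s = np.linalg.svd(x, compute_uv=False)
        cum = np.cumsum(s) / np.sum(s)
        ranks.append(int(np.searchsorted(cum, 1.0 - delta) + 1))
    return ranks
```

Running this on a batch of random inputs typically shows the effective rank of the feature matrices decreasing from the lower layers toward the higher ones, in line with the observation of Pennington and Worah (2017).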
![29_image_0.png](29_image_0.png)

Figure 23: **[bsuite] The ranks and depths of the networks:** The evolution of the ranks across different layers of deep neural networks. The figure on the left (L) shows the ranks across different layers on catch. The figure at the center (C) shows the ranks across different layers on mountain_car. The figure on the right (R) is for cartpole.

## A.2 Spectral Density Of Hessian For Offline RL

Analyzing the eigenvalue spectrum of the Hessian is a common way to investigate the learning dynamics and the loss surface of deep learning methods (Ghorbani et al., 2019; Dauphin et al., 2014). Understanding the loss landscape and the eigenvalue spectrum of the Hessian can help us design better optimization algorithms. Here, we analyze the eigenvalue spectrum of a single-hidden-layer feedforward network trained on the bsuite Catch dataset from RL Unplugged (Gulcehre et al., 2020) to understand the loss landscape of a network with a low effective rank compared to a model with a higher rank at the end of training. As established in Figure 24, the ELU network has a significantly higher effective rank than the ReLU network. By comparing those two networks, we also look into the differences in the eigenvalue spectrum between networks with high and low rank. Since the network and the inputs are relatively low-dimensional, we computed the full Hessian over the dataset rather than a low-rank approximation. The eigenvalue spectrum of the Hessian with the ReLU and the ELU activation functions is shown in Figure 24. The rank collapse is faster for ReLU than for ELU. After 900k gradient updates, the ReLU network concentrates 92% of the eigenvalues of the Hessian around zero; this is due to the dead units in the ReLU network (Glorot et al., 2011). On the other hand, the ELU network has a few very large eigenvalues after the same number of gradient updates.
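The link between dead units and near-zero Hessian eigenvalues can be made concrete in a toy setting (the sizes below are made up for illustration; this is not the trained network from the figures): for a linear least-squares head on top of frozen features, the Hessian with respect to the head weights is exactly the feature Gram matrix, so every dead ReLU unit contributes an exactly-zero eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_units, n_dead = 256, 64, 48

# Post-activation features of a hidden layer; "dead" ReLU units output
# zero on every input, so their feature columns vanish entirely.
phi = np.maximum(rng.standard_normal((n_data, n_units)), 0.0)
phi[:, :n_dead] = 0.0

# For the loss 0.5 * mean((phi @ w - y)**2), the Hessian w.r.t. the
# last-layer weights w is exactly phi^T phi / n_data.
hessian = phi.T @ phi / n_data
evs = np.linalg.eigvalsh(hessian)

tol = 1e-7  # same near-zero tolerance as in Figure 25
near_zero = int(np.sum(np.abs(evs) < tol))
positive = int(np.sum(evs >= tol))
negative = int(np.sum(evs <= -tol))
print(near_zero, positive, negative)  # 48 16 0
```

The 48 dead columns produce 48 near-zero eigenvalues, mirroring the concentration of the ReLU network's spectrum around zero described above.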
In Figure 25, we summarize the distribution of the eigenvalues of the Hessian matrices of the ELU and ReLU networks. As a result, the Hessian and the feature matrix of the penultimate layer of the network will both be low-rank. Moreover, in this case, the landscape of the ReLU network might be flatter than that of the ELU network beyond the notion of wide basins (Sagun et al., 2016). This might mean that the ReLU network finds a simpler solution. Thus, the flatter landscape is consistent with the simpler function learned and the smaller capacity used at the end of training, which is induced by the lower-rank representations.

## A.3 DeepMind Lab: Performance Vs The Effective Ranks

In Figure 26, we show the correlation between the effective rank and the performance of R2D2, R2D2 trained with CURL, and R2D2 trained with SAM. Whether we look at each variant separately or as an aggregate, we do not see a strong correlation between the performance and the rank of an agent.

## A.4 SAM Optimizer: Does A Smoother Loss Landscape Affect The Behavior Of The Rank Collapse?

We study the relationship between smoother loss landscapes and rank collapse. We use the Sharpness Aware Minimization (SAM) (Foret et al., 2021) loss to potentially create flatter loss surfaces, in order to see if smoother loss landscapes affect the rank and performance dynamics differently. Figures 27 and 28 show the evolution of feature ranks and generalization performance in Atari and DeepMind lab, respectively. We do not observe a clear relation between the extent of smoothing the loss and the feature ranks or generalization performance.

![30_image_0.png](30_image_0.png)

Figure 24: **[bsuite Catch] spectral density of Hessian:** The visualization of the spectral density of the full Hessian of a network trained with 64 units using ReLU (left) and ELU (right) activation functions. The eigenvalues of the Hessian of the offline DQN with ReLU activation function are concentrated around 0 and most of them are less than 1.
The eigenvalues of the ELU network also concentrate around 0, with a few large outlier eigenvalues.

![30_image_1.png](30_image_1.png)

Figure 25: **[bsuite Catch] Hessian eigenvalues (evs) for offline DQN:** We visualize the percentages of positive, negative, and near-zero eigenvalues of the Hessian for offline DQN with ELU and ReLU activation functions on the Catch dataset from bsuite. If the absolute value of an eigenvalue is less than 1e-7, we consider it a near-zero eigenvalue. We can see that for the ELU network, near-zero, positive, and negative eigenvalues are almost evenly distributed. However, with the ReLU network, the majority of the eigenvalues are near-zero (90% of the evs are exactly zero), very few are negative (2%), and some are positive (7.1%).

## A.5 Effective Rank And The Value Error

A curious relationship is the one between the effective rank and the value error, because a potential cause of rank collapse (Phase 3) with TD-learning algorithms is the errors propagating through the bootstrapped targets. Figure 29 shows the correlation between the value error and the effective ranks. There is a strong anti-correlation between the effective ranks and the value errors of the agents, except on the levels where the expert agent that generated the dataset performs poorly at the end of training (e.g., IceHockey). This makes the hypothesis that the extrapolation error can cause rank collapse (or push the agent to Phase 3) more plausible.

![31_image_0.png](31_image_0.png)

Figure 26: [DeepMind lab] The effective rank and the performance: Correlations between feature ranks and episode returns for different exploration noises on the DeepMind lab dataset. We include data from three models: R2D2, R2D2 trained with CURL, and R2D2 trained with SAM. We do not observe a strong correlation between the effective rank and performance across different exploration noise levels in the data.
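The (anti-)correlations reported in this and the surrounding subsections come down to a Pearson correlation coefficient between two per-run quantities. A self-contained sketch on synthetic stand-in data (the slope and noise level are invented for illustration; real per-level measurements would replace them):

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs = 50

# Synthetic per-run measurements: effective rank decreasing with value
# error, mimicking the qualitative trend of Figure 29.
value_error = rng.uniform(0.0, 1.0, size=n_runs)
rank = 60.0 - 40.0 * value_error + rng.normal(0.0, 2.0, size=n_runs)

pearson_r = np.corrcoef(value_error, rank)[0, 1]
print(round(pearson_r, 2))  # strongly negative
```

A strongly negative coefficient here corresponds to the anti-correlation visible in Figure 29.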
![31_image_1.png](31_image_1.png)

Figure 27: [Atari] Sharpness Aware Minimization (SAM) Loss: Evolution of ranks as we increase the weight of the auxiliary losses. We see that some amount of weight on the SAM loss helps mitigate the extent of rank collapse, but we observe no clear relationship with the agent performance.

## A.6 CURL On DeepMind Lab

Figure 30 shows the CURL results on the DeepMind lab dataset. We could not find any consistent pattern across the various CURL loss weights.

## A.7 Learning Rate Evolution

In Figure 31, we also perform a hyperparameter selection by evaluating the model in the environment at various stages during training. As the offline DQN is trained longer, the optimal learning rate for the best agent performance when evaluated online goes down. Thus, when increasing the number of training steps of an agent, the learning rate needs to be adapted accordingly.

## A.8 Learning Curves In The Long Training Regime And The Different Phases Of Learning

In this subsection, we investigate the effect of changing learning rates on the effective rank and the performance of the agent on the RL Unplugged Atari online policy selection games. We train the offline DQN for

![32_image_0.png](32_image_0.png)

Figure 28: [DeepMind lab-SeekAvoid Snapshot] Sharpness Aware Minimization (SAM) Loss: We do not observe any rank collapse as we continue training with the LSTM networks used in the DeepMind lab dataset (because of the tanh activations discussed in Section 6), over a spectrum of different weights for the SAM loss.

![32_image_1.png](32_image_1.png)

Figure 29: [Atari]: These plots show the correlation between the value error and the effective rank for the offline DQN agent trained for 20M steps on the online policy selection games for Atari. There is an apparent anti-correlation between the effective rank and the value error.
Namely, as the value error of an agent when evaluated in the environment increases, the effective rank decreases. The correlation is significant on most Atari levels except IceHockey, where the expert agent that generated the dataset performs poorly. 20M learning steps, which is ten times longer than for typical offline Atari agents (Gulcehre et al., 2020). We evaluated twelve learning rates in $[10^{-5}, 10^{-2}]$, equally spaced in logarithmic space. We show the effective-rank and performance learning curves in Figure 32. It is easy to identify the different phases of learning in most of those curves.

![33_image_0.png](33_image_0.png)

Figure 30: **[DeepMind lab-SeekAvoid:] auxiliary losses:** Evolution of ranks as we increase the weight of the auxiliary loss. While some auxiliary loss helps the model perform well, there is no clear correlation between rank and performance.

![33_image_1.png](33_image_1.png)

Figure 31: **[Atari]**: Evolution of the optimal learning rate found by online evaluations in the environment. As the model is trained longer, the optimal learning rate found by online evaluations goes down.

## A.9 Effective Rank And The Performance

Figure 33 shows the correlation between the effective rank and the performance of an agent trained with a minibatch size of 32. On most Atari online policy selection games it is possible to see a very strong correlation, but on some games the correlation is not there. Figure 34 depicts the correlation between the effective rank and the performance of a DQN agent with ReLU activation function. There is a significant correlation on most Atari games. As discussed earlier, the long training setting with ReLU activation functions is where the effect of the rank on the performance is the strongest.

## A.10 Computation Of Feature Ranks

Here, we present the *Python* code-stub that we used across our experiments (similar to Kumar et al.
(2020a)) to compute the feature ranks of the pre-output layer's features:

```python
import numpy as np

def compute_rank_from_features(feature_matrix, rank_delta=0.01):
    """Computes rank of the features based on how many singular values are significant."""
```

![34_image_0.png](34_image_0.png)

Figure 32: **[Atari]**: Effective rank and performance curves on the Atari online policy selection games. We train the offline DQN agent for 20M learning steps and evaluated 12 learning rates in $[10^{-5}, 10^{-2}]$, equally spaced in logarithmic space. Let us note that in all games the rank goes down early in training (Phase 1), then goes up (Phase 2), and for some learning rates the effective rank collapses (Phase 3). Correspondingly, the performance is low at the beginning of training (Phase 1), goes up and stays high for a while (Phase 2), and sometimes the performance collapses (Phase 3).

![35_image_0.png](35_image_0.png)

Figure 33: **[Atari]** The correlation between ranks and returns for DQN trained with a minibatch size of 32. We ran each network with three learning rates and five different seeds, but we only show the mean across those five seeds here. We can see very strong correlations on some games, but the correlation is not consistent.

![35_image_1.png](35_image_1.png)

Figure 34: **[Atari]** The correlation between ranks and returns for a DQN network trained for 20M learning steps and 12 different learning rates with a minibatch size of 256. We can see that there is a significant positive correlation between the rank and the returns for most online policy selection games.
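The twelve-learning-rate sweep referenced in these figures (values in $[10^{-5}, 10^{-2}]$, equally spaced in logarithmic space) can be generated in a single call; whether the endpoints themselves are included in the grid is an assumption here, since the text does not spell it out.

```python
import numpy as np

# Twelve learning rates equally spaced in log10-space over [1e-5, 1e-2],
# endpoints included.
learning_rates = np.logspace(-5, -2, num=12)

print(learning_rates[0], learning_rates[-1], len(learning_rates))
```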
| Hyper-parameters              | bsuite | Atari | DeepMind lab               |
|-------------------------------|--------|-------|----------------------------|
| Training batch size           | 32     | 256   | 4 (episodes)               |
| Rank calculation batch size   | 512    | 512   | 512                        |
| Num training steps            | 1e6    | 2e6   | 2e4                        |
| Learning rate                 | 3e-4   | 3e-5  | 1e-3                       |
| Optimizer                     | Adam   | Adam  | Adam                       |
| Feedforward hidden layer size | 64     | 512   | 256                        |
| Num hidden layers             | 2      | 1     | 1                          |
| Activation                    | ReLU   | ReLU  | ReLU (tanh for LSTM gates) |
| Memory                        | None   | None  | LSTM                       |
| Discount                      | 0.99   | 0.99  | 0.997                      |

Table 3: The default hyper-parameters used in our work across different domains.

```python
    sing_values = np.linalg.svd(feature_matrix, compute_uv=False)
    cumsum = np.cumsum(sing_values)
    nuclear_norm = np.sum(sing_values)
    approximate_rank_threshold = 1.0 - rank_delta
    threshold_crossed = (
        cumsum >= approximate_rank_threshold * nuclear_norm)
    effective_rank = sing_values.shape[0] - np.sum(threshold_crossed) + 1
    return effective_rank
```

## A.11 Hyperparameters

Here, we list the standard set of hyper-parameters that were used in the different domains: bsuite, Atari, and DeepMind Lab, respectively. These are the default hyper-parameters, which may differ when stated so in our specific ablation studies. For the DeepMind Lab task, we use the same network that was used by Gulcehre et al. (2021). For all the Atari tasks, we use the same convolution torso that was used by Gulcehre et al. (2020), which involves three layers of convolution with ReLU activations in between.

- Layer 1 - Conv2d(channels=32, kernel_shape=[8, 8], stride=[4, 4])
- Layer 2 - Conv2d(channels=64, kernel_shape=[4, 4], stride=[2, 2])
- Layer 3 - Conv2d(channels=64, kernel_shape=[3, 3], stride=[1, 1])

## A.12 Bsuite Phase Transitions And Bottleneck Capacity

We illustrate the phase transitions of simple MLPs with ReLU activations.
In Figure 35, we have a network of size (64, bottleneck units, 64), where we vary the number of bottleneck units. In Figure 36, we have a network of size (64, bottleneck units), where we vary the number of bottleneck units. In both cases, having a smaller number of bottleneck units reduces the performance of the model, but the agents were able to solve the problem even when the penultimate layer's effective rank was small. With the larger learning rate (the right-hand-side figures (b)), the effective ranks tend to be lower.

## A.13 Activation Sparsity On Bsuite

In Figure 37, we show that the activations of the ReLU network become very sparse during the course of training. The sparsity of the ReLU units is significantly higher than that of the ELU units at the end of training.

![37_image_0.png](37_image_0.png)

Figure 35: **[bsuite] - Catch dataset:** Phase transition plots of a network with hidden layers of size (64, bottleneck units, 64). On the y-axis, we show log2(bottleneck units), and the x-axis is the number of gradient steps. The figures on the left-hand side (a) are trained with a learning rate of 8e-5 and the right-hand-side (b) experiments are trained with a learning rate of 3e-4. The first row shows the episodic returns. The second row shows the effective rank of the second layer (bottleneck units), and the third row shows the penultimate layer's effective rank. The effective rank collapses much faster with the learning rate of 3e-4 than with 8e-5. Low ranks can still yield good performance. A small number of bottleneck units causes the effective rank of the last layer to collapse faster. The performance of the network with a small number of bottleneck units is poor, and its effective rank is smaller.

![38_image_0.png](38_image_0.png)

Figure 36: **[bsuite] - Catch dataset:** Phase transition plots of a network with hidden layers of size (64, bottleneck units).
On the y-axis, we show log2(bottleneck units), and the x-axis is the number of gradient steps. The figures on the left-hand side (a) are trained with a learning rate of 8e-5 and the right-hand-side (b) experiments are trained with a learning rate of 3e-4. The first row shows the episodic returns. The second row shows the effective rank of the second layer (bottleneck units). The effective rank collapses much faster with the learning rate of 3e-4 than with 8e-5. Low ranks can still yield good performance. A small number of bottleneck units causes the effective rank of the layer to collapse faster. The performance of the network with a small number of bottleneck units is poor, and its effective rank is smaller.

![38_image_1.png](38_image_1.png)

Figure 37: **[bsuite Catch] Gram matrices of activations:** Gram matrices of the activations of a two-layer MLP with ReLU and ELU activation functions. The activations of the ReLU units become sparser compared to the ELU units at the end of training due to dead ReLU units.
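The sparsity gap visible in Figure 37 already appears for a randomly initialized layer (a toy sketch with made-up sizes, not the trained two-layer MLP from the figure): ReLU outputs exact zeros for roughly half of the pre-activations, while ELU essentially never does.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal((512, 64))
w = rng.standard_normal((64, 64)) * np.sqrt(2.0 / 64)
pre = x @ w  # pre-activations

relu = np.maximum(pre, 0.0)
elu = np.where(pre > 0.0, pre, np.exp(pre) - 1.0)

# Fraction of exactly-zero activations per nonlinearity.
relu_sparsity = float(np.mean(relu == 0.0))
elu_sparsity = float(np.mean(elu == 0.0))

# Gram matrices of the activations, as visualized in Figure 37.
gram_relu = relu @ relu.T
gram_elu = elu @ elu.T
print(relu_sparsity, elu_sparsity)
```

Dead units after training only make the ReLU Gram matrix sparser still, which is the effect the figure documents.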
Review 1: Summary: The authors conduct an extensive empirical investigation into the phenomenon of implicit underparametrization in offline (deep) Q-learning (introduced by Levine et al. in 2020), i.e., a situation where high-capacity function approximators trained via TD learning are unable to fit the data because the final layer of activations "collapses" to a highly restricted subspace. They conclude that while one can indeed observe this behavior in some settings, the existence of implicit underparametrization is highly sensitive to network hyperparameters such as the choice of activation function, learning rate, minibatch size, etc. They conclude that extrapolating from simplified models (where one can analytically prove that implicit underparametrization occurs) and specific hyperparameter settings (where one can verify its occurrence empirically) is misleading, and that the precise role of implicit underparametrization in the performance of deep RL methods is still unclear. Strengths and Weaknesses: I found the presentation to be clear and engaging. In particular, the authors were careful to distinguish between correlation and causation; clearly the latter is what we are interested in here, and so I appreciated that an explicit causal graph was hypothesized and tested. The authors seem to have done a good job of conducting a comprehensive set of empirical tests, and make a convincing case that it would be misleading to claim that implicit underparametrization/rank collapse provides a monocausal explanation of the failures of deep Q-learning. I should say however that I am far from an expert in this field, and don't consider myself well qualified to judge the strength of the empirical evidence marshalled by the authors. The authors suggest a three-phase picture of Q-learning: a first phase where "simple" behaviors are learned, a second phase where "complex" behaviors are learned, and a final phase of underparametrization.
Given the subjective terminology, it was unclear to me how exactly the authors defined these phases, and I would have appreciated more details. Requested Changes: Given that the authors of the original implicit underparametrization paper proved that the phenomenon occurs in kernel regression, it would have been interesting to examine the behavior of wide (but finite) networks. I should emphasize that I don't view the inclusion of tests of this regime to be critical for securing my recommendation for acceptance. Broader Impact Concerns: I don't see any particular ethical concerns with the paper. ================================================== Review 2: Summary: This submission provides an extensive empirical investigation of the relationship between effective rank and agents' performance in deep offline reinforcement learning. The authors propose a few hypotheses connecting ranks with different ingredients of deep neural networks. Experiments are provided to validate and discuss the hypotheses. Strengths and Weaknesses: Strengths 1. Investigating the relationship between the rank of the penultimate layer of neural networks and agents' performance is an interesting topic, and may motivate more follow-up work. 2. The authors did provide an extensive investigation of the topic from different angles, which is a good first attempt to tackle this problem. Weaknesses 1. The current version is overly empirical and hence it is difficult to draw any concrete conclusion. For example, for Key Observation 2 on Page 3, it is extremely difficult to verify that the three stages are necessary. Furthermore, the definitions for "simple" and "complex" are lacking and hence we cannot distinguish them with clear boundaries. 2. As mentioned above, this submission cannot draw any conclusion based on the results shown. It is true that there is some correlation between the effective rank and performance. However, we cannot know when it will or will not be the case from the provided results.
Requested Changes: It would be more convincing if the authors could provide either (1) theoretical justification of the relationship between rank and performance or (2) more convincing empirical justification to draw some "specific" conclusion. A few detailed questions and comments are provided as follows. 1. In the caption for Figure 5, the authors stated that "the ranks are distributed almost like a Gaussian". More rigor is necessary, instead of simple guesses. 2. The authors mentioned in the caption of Figure 16 that "auxiliary loss helps mitigate the rank collapse but prevents the model from learning useful representations". Based on my understanding, there are no steps to measure the usefulness of representations, and hence such a measure is necessary before drawing this conclusion. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper investigates the hypothesis that (effective) rank collapse causes poor performance in offline RL. Through an extensive set of experiments on offline RL benchmarks, the authors show that the results identified in past work only hold for a small range of hyperparameters, and so targeting rank collapse may not be a viable way to improve performance. They also identify several interesting phases of offline RL training related to the rank and performance. Strengths and Weaknesses: Below are my views on the strengths and weaknesses of the results. Note that I am not very familiar with RL and so am not very qualified to comment on the appropriateness or quality of the experiments. Strengths: - The paper identifies an interesting hypothesis and does a very thorough investigation of it. - The authors identify numerous hyperparameters that may affect the outcome and study their ablations. - The writing is fairly straightforward to understand. - In addition to the main objective of the work, there is also an interesting discussion and empirical investigation of training dynamics in RL.
Weaknesses: - One important discussion that seems missing to me is an explanation for why Kumar et al. (2020) observed improved performance when controlling rank collapse. - W.r.t. the definition of effective rank: I understand that the authors are aiming to align with Kumar et al. (2020), but since this "effective rank" quantity is effectively an attempt to discount small singular values, I wonder if a better quantity to study, since it is parameter-free, would be the stable rank (cf. the actual quantity used in Arora et al. (2018) and Sanyal et al. (2020)). Of course, these papers would argue instead for a smaller rather than a larger (stable) rank. Requested Changes: Changes that are critical for me to recommend acceptance: - Discussion: as discussed in the weaknesses, I believe the observation by Kumar et al. (2020) that controlling rank could improve performance should be better discussed. Is this also a question of hyperparameter range, or do you believe there are confounding factors? Changes that would strengthen the paper: - Figure 2: it would be useful for the reader to know whether the y-axes here start at zero (rank / returns) or not. - Section 2.2: I am confused by the usage of the word "ablate" here. Usually it means to remove certain components to check their effects, but here it seems to just mean that you use them? Perhaps the writing here could be made clearer. Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Accept as is Comment: Reviewer hEii raised some concerns about the submission being overly empirical and lacking theoretical grounding. While I agree that it would have been much stronger had it included backing theory, I think the empirical contribution is of sufficient novelty and significance for publication, and the text is completely transparent about its empirical nature. I therefore recommend acceptance. ==================================================
# Solving Nonconvex-Nonconcave Min-Max Problems Exhibiting Weak Minty Solutions

Axel Böhm *axel.boehm@univie.ac.at* University of Vienna, Austria

Reviewed on OpenReview: *https: // openreview. net/ forum? id= Gp0pHyUyrb*

## Abstract

We investigate a structured class of nonconvex-nonconcave min-max problems exhibiting so-called *weak Minty* solutions, a notion which was only recently introduced and has already proved powerful by simultaneously capturing different generalizations of monotonicity. We prove novel convergence results for a generalized version of the optimistic gradient method (OGDA) in this setting, matching the 1/k rate for the best iterate in terms of the squared operator norm recently shown for the extragradient method (EG). In addition, we propose an adaptive step size version of EG, which does not require knowledge of the problem parameters.

## 1 Introduction

The recent success of machine learning models which can be described by min-max optimization, such as generative adversarial networks (Goodfellow et al., 2014), adversarial learning (Madry et al., 2018), adversarial example games (Bose et al., 2020) or actor-critic methods (Pfau & Vinyals, 2016), has sparked interest in such saddle point problems. While methods have been identified which (mostly) work in practice, the setting in which the objective function is nonconvex in the minimization and nonconcave in the maximization component remains theoretically poorly understood and even shows intractability results (Daskalakis et al., 2021; Lee & Kim, 2021a). Recently, Daskalakis et al. (2020) studied a class of nonconvex-nonconcave min-max problems and observed that the extragradient method (EG) showed good convergence behavior in the experiments. Surprisingly, the problems did not seem to exhibit any of the known tame properties such as monotonicity, or Minty solutions. Later, Diakonikolas et al.
(2021) found the appropriate notion (see Assumption 1), which is weaker than the existence of a Minty solution (an assumption extensively used in the literature (Malitsky, 2020; Liu et al., 2020; 2021)) and also generalizes the concept of negative comonotonicity (Bauschke et al., 2020; Combettes & Pennanen, 2004; Lee & Kim, 2021a). Due to these unifying and generalizing properties the notion of weak Minty solutions was promptly studied in (Pethick et al., 2022; Lee & Kim, 2021b).

Assumption 1 (Weak Minty solution). *Given an operator* $F: \mathbb{R}^d \to \mathbb{R}^d$, *there exists a point* $u^* \in \mathbb{R}^d$ *and a parameter* $\rho > 0$ *such that*

$$\langle F(u),u-u^{*}\rangle\geq-{\frac{\rho}{2}}\|F(u)\|^{2}\quad\forall u\in\mathbb{R}^{d}.\tag{1}$$

Additionally, Diakonikolas et al. (2021) proved that a generalization of EG (Korpelevich, 1976) is able to solve problems which exhibit such solutions with a complexity of $\mathcal{O}(\varepsilon^{-1})$ for the squared operator norm. This modification, which they title EG+, is based on an aggressive extrapolation step combined with a conservative update step. Such a step size policy has already been explored in the context of a stochastic version of EG in (Hsieh et al., 2020). In a similar spirit we investigate a variant of the optimistic gradient descent ascent (OGDA) (Daskalakis et al., 2018; Popov, 1980)/Forward-Reflected-Backward (FoRB) (Malitsky & Tam, 2020) method. We pose the question, and give an affirmative answer to:

Can OGDA match the convergence guarantees of EG in the presence of weak Minty solutions?

In particular, we show that the following modification of the OGDA method, given for step size $a > 0$ and parameter $0 < \gamma \leq 1$ by

$$u_{k+1}=u_{k}-a{\Big(}(1+\gamma)F(u_{k})-F(u_{k-1}){\Big)},\quad\forall k\geq0,$$

is able to match the bounds of EG+ (Diakonikolas et al., 2021; Pethick et al., 2022):

$$u_{k}=\bar{u}_{k}-aF(\bar{u}_{k}),\quad\bar{u}_{k+1}=\bar{u}_{k}-\gamma aF(u_{k}),\quad\forall k\geq0,$$

by only requiring one gradient oracle call per iteration.
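As a quick numerical illustration of the recursion above (on a toy bilinear problem, f(x, y) = xy, which is monotone rather than weak Minty; the step size and iteration count are arbitrary choices, not values from the paper's analysis):

```python
import numpy as np

def F(u):
    # Saddle operator of f(x, y) = x * y, i.e. (df/dx, -df/dy).
    x, y = u
    return np.array([y, -x])

def ogda_plus(u0, a=0.3, gamma=1.0, iters=400):
    """u_{k+1} = u_k - a*((1 + gamma)*F(u_k) - F(u_{k-1}))."""
    u_prev, u = u0.copy(), u0.copy()
    for _ in range(iters):
        u_prev, u = u, u - a * ((1.0 + gamma) * F(u) - F(u_prev))
    return u

u_final = ogda_plus(np.array([1.0, 1.0]))
print(np.linalg.norm(F(u_final)))  # close to zero at the solution (0, 0)
```

With gamma = 1 this reduces to the standard OGDA method; on genuinely weak-Minty instances the interplay of gamma and a is exactly what the convergence analysis is about.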
In Figure 1 we see that, beyond the theoretical guarantees, OGDA+ can even provide convergence where EG+ does not. Note that OGDA is most commonly written in the form where γ = 1, see (Daskalakis et al., 2018; Malitsky & Tam, 2020; Böhm et al., 2022), with the exception of two recent works which have investigated a more general coefficient, see (Ryu et al., 2019; Mokhtari et al., 2020). While the previous references target the monotone setting, the true importance of γ only shows up in the presence of weak Minty solutions, as in this case we *require* it to be smaller than 1 to guarantee convergence, a phenomenon not present for monotone problems.

Connection to min-max. When considering a general (smooth) min-max problem

$$\min_{x}\max_{y}f(x,y),$$

the operator F mentioned in Assumption 1 arises naturally as $F(u) := [\nabla_x f(x,y), -\nabla_y f(x,y)]$ with $u = (x,y)$. However, by studying saddle point problems from this more general perspective of variational inequalities (VIs), see (SVI), via the operator F, we can simultaneously capture more settings such as certain equilibrium problems, see (Facchinei & Pang, 2007).

About the weak Minty parameter ρ. The parameter ρ in the definition of weak Minty solutions (1) plays a crucial role in the analysis and the experiments. In particular, it is necessary that the step size is larger than a term proportional to ρ, see for example Theorem 3.1 or (Pethick et al., 2022). At the same time, as is typical, the step size is constrained from above by the reciprocal of the Lipschitz constant of F. For example, since the authors of (Diakonikolas et al., 2021) require the step size to be less than $\frac{1}{L}$, their convergence statement only holds if $\rho < \frac{1}{4L}$ for the choice $\gamma = \frac{1}{2}$. This was later improved in (Pethick et al., 2022) to $\frac{1}{L}$ for γ even smaller. As in the monotone setting, however, OGDA requires a smaller step size than EG.
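For a concrete instance of this construction (a made-up convex-concave quadratic, not one of the paper's examples), one can assemble F from the partial gradients and check that it vanishes exactly at the saddle point:

```python
import numpy as np

# f(x, y) = 0.5*x**2 + x*y - 0.5*y**2 is convex in x, concave in y,
# with unique saddle point (0, 0).
def saddle_operator(u):
    x, y = u
    grad_x = x + y   # df/dx
    grad_y = x - y   # df/dy
    return np.array([grad_x, -grad_y])

print(saddle_operator(np.array([0.0, 0.0])))  # [0. 0.] at the saddle
print(saddle_operator(np.array([1.0, 2.0])))
```

A zero of F is exactly a (Stampacchia) solution of the unconstrained VI, which is why the operator view subsumes the min-max one.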
Nevertheless, through a different analysis we are able to match the most general condition on the weak Minty parameter, $\rho < \frac{1}{L}$, for appropriate γ and a.

## Contribution.

1. Building on the recently introduced notion of weak solutions to the Minty variational inequality, see (Diakonikolas et al., 2021), we prove a novel convergence rate of $\mathcal{O}(1/k)$ in terms of the squared operator norm for a modification of OGDA, which we name OGDA+, matching the one of EG.

2. Even under the stronger assumption that the operator is moreover monotone, we improve the possible range of step sizes for OGDA+ (Ryu et al., 2019) and recover the best known result for the standard method (γ = 1) (Gorbunov et al., 2022b).

3. We prove a complexity bound of $\mathcal{O}(\varepsilon^{-2})$ for a stochastic version of the OGDA+ method.

4. Additionally, we propose an adaptive step size version of EG+, which is able to obtain the same convergence guarantees without any knowledge of the Lipschitz constant of the operator F, and can therefore possibly even take larger steps in regions of low curvature and allow for convergence where a fixed step size policy does not.

![2_image_0.png](2_image_0.png)

Figure 1: Problem (7), with $\rho = \frac{1}{L}$, meaning that convergence is not covered by theory. Since the Lipschitz constant can be computed analytically, we choose the step size accordingly. Due to the linearity of the operator F there is no benefit in using linesearch, so only methods using constant step sizes are compared. Only OGDA+ **converges**.

## 1.1 Related Literature

Since there is an extensive literature on convergence rates in terms of a gap function or distance to a solution for monotone problems, as well as generalizations such as nonconvex-concave (Boţ & Böhm, 2020; Lin et al., 2020), convex-nonconcave (Xu et al., 2023) or under the Polyak-Łojasiewicz assumption, see (Yang et al., 2020), we will only focus on the nonconvex-nonconcave setting.

Weak Minty. Diakonikolas et al.
(2021) noticed that a particular parametrization of the von Neumann ratio game exhibits a new type of solution, which they titled weak Minty, without possessing any of the known properties such as (negative) comonotonicity or Minty solutions. They showed convergence in the presence of such solutions for EG if the extrapolation step size is twice as large as the update step. Later, Pethick et al. (2022) showed that the condition on the weak Minty parameter can be relaxed by reducing the length of the update step even further, and they do so in an adaptive way. In order not to require any other hyperparameters, they also propose a backtracking line search, which might come at the cost of additional gradient computations or the use of second-order information (in contrast to the adaptive step size we propose in Algorithm 3). In (Lee & Kim, 2021b) a different approach is taken by restricting the attention to the min-max setting and using multiple ascent steps per descent step, obtaining the same $\mathcal{O}(1/k)$ rate as EG.

Minty solutions. Many works have shown different approaches for when the problem at hand exhibits a Minty solution, see (MVI). The authors of (Liu et al., 2021) showed that weakly monotone VIs can be solved by successively adding a quadratic proximity term and repeatedly optimizing the resulting strongly monotone VI with any convergent method. In (Mertikopoulos et al., 2019) the convergence of the OGDA method was proven, but without any rate. In (Malitsky, 2020) it was noted that the convergence proof for the golden ratio algorithm (GRAAL) works without any modification. See also (Dang & Lan, 2015) for a non-Euclidean version of EG and (Liu et al., 2020) for adaptive methods. While the assumption that a Minty solution exists is a generalization of the monotone setting, it is difficult to find nonmonotone problems that do possess such solutions.
In our setting, see Assumption 1, the Minty inequality (MVI) is allowed to be violated at every point by a factor proportional to the squared operator norm.

**Negative comonotonicity.** While previously studied under the name of *cohypomonotonicity* (Combettes & Pennanen, 2004), the notion of negative comonotonicity was recently explored in (Bauschke et al., 2020). It provides a generalization of monotonicity, but in a direction different from the notion of Minty solutions, and only a few works have analyzed methods in this setting. The authors of (Lee & Kim, 2021a) studied an anchored version of EG and showed an improved convergence rate of O(1/k²) (in terms of the squared operator norm). Similarly, Cai et al. (2022) studied an accelerated version of the reflected gradient method (Malitsky, 2015). It is an open question whether such an acceleration is possible in the more general setting of weak Minty solutions (any Stampacchia solution to the VI given by a negatively comonotone operator is a weak Minty solution). Another interesting observation was made in (Gorbunov et al., 2022c), where a monotonically decreasing gradient norm was shown for cohypomonotone problems when using EG. However, we did not observe this in our experiments, highlighting the need to distinguish this class from problems with weak Minty solutions.

**Interaction dominance.** The authors of (Grimmer et al., 2022) investigate the notion of α-interaction dominance for nonconvex-nonconcave min-max problems and showed that the proximal-point method converges sublinearly if this condition holds in y and linearly if it holds in both components. Furthermore, Lee & Kim (2021a) showed that if a problem is interaction dominant in both components, then it is also negatively comonotone.

**Optimism.**
The beneficial effects of introducing the simple modification commonly known as optimism have recently sparked the interest of the machine learning community (Daskalakis et al., 2018; Daskalakis & Panageas, 2018; Liang & Stokes, 2019; Gidel et al., 2019). Its name originates from online optimization (Rakhlin & Sridharan, 2013a;b). The idea dates back even further (Popov, 1980) and has been studied in the mathematical programming community as well (Malitsky, 2015; Malitsky & Tam, 2020; Csetnek et al., 2019).

## 2 Preliminaries

## 2.1 Notions of Solution

We summarize the most commonly used notions of solutions appearing in the context of variational inequalities (VIs) and beyond. Note that these are commonly defined in terms of a constraint set C ⊂ R^d. A *Stampacchia*¹ (Kinderlehrer & Stampacchia, 2000) solution of the VI given by F : R^d → R^d is defined as a point u* such that

$$\langle F(u^{*}),u-u^{*}\rangle\geq0\quad\forall u\in C.\tag{SVI}$$

In particular, we only consider in this manuscript the unconstrained case C = R^d, in which case the above condition reduces to F(u*) = 0. Very much related, with a long tradition, is the following. A *Minty*¹ solution is given by a point u* ∈ C such that

$$\langle F(u),u-u^{*}\rangle\geq0\quad\forall u\in C.\tag{MVI}$$

If F is continuous, a Minty solution of the VI is always a Stampacchia solution. The reverse is in general not true but holds, for example, if the operator F is monotone. In particular, there exist nonmonotone problems with Stampacchia solutions but without any Minty solutions.

## 2.2 Notions of Monotonicity

The aim of this section is to recall some elementary and some more recent notions of monotonicity and the connection between those. We call an operator F **monotone** if

$$\langle F(u)-F(v),u-v\rangle\geq0.$$

Such operators arise naturally as the gradients of convex functions, from convex-concave min-max problems or from equilibrium problems.
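As a quick illustration (a minimal numpy sketch; the operator and helper names are ours, not from the text): the saddle operator F(u) = (∂ₓf, −∂ᵧf) of the bilinear convex-concave problem f(x, y) = xy is monotone, which can be checked numerically.

```python
import numpy as np

def F(u):
    # Saddle operator of f(x, y) = x * y: F(u) = (df/dx, -df/dy) = (y, -x).
    x, y = u
    return np.array([y, -x])

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.normal(size=2), rng.normal(size=2)
    # Monotonicity: <F(u) - F(v), u - v> >= 0 (for this bilinear example it is exactly 0).
    assert np.dot(F(u) - F(v), u - v) >= -1e-12
```

For this particular operator the inner product vanishes identically, the borderline case of monotonicity; adding a strongly convex-concave part to f would make it strictly positive.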
Two notions frequently studied that fall in this class are **strongly monotone** operators fulfilling

$$\langle F(u)-F(v),u-v\rangle\geq\mu\|u-v\|^{2}.$$

¹Sometimes Stampacchia and Minty solutions are referred to as *strong* and *weak* solutions respectively, see (Nesterov, 2007), but we will refrain from this nomenclature, as it is confusing in the context of *weak Minty* solutions.

They appear as gradients of strongly convex functions or strongly-convex-strongly-concave min-max problems. A second subclass of monotone operators are so-called **cocoercive** operators fulfilling

$$\langle F(u)-F(v),u-v\rangle\geq\beta\|F(u)-F(v)\|^{2}.\tag{2}$$

They appear, for example, as gradients of smooth convex functions, in which case (2) holds with β equal to the reciprocal of the gradient's Lipschitz constant.

**Leaving the monotone world.** Both subclasses of monotonicity introduced above can be used as starting points to venture into the non-monotone world. Since general non-monotone operators might exhibit erratic behavior like periodic cycles and spurious attractors (Hsieh et al., 2021), it makes sense to find settings that extend the monotone one but still remain tractable. First and foremost, there is the by now well-studied setting of ν-**weak monotonicity**

$$\langle F(u)-F(v),u-v\rangle\geq-\nu\|u-v\|^{2}.$$

Such operators arise as the gradients of the well-studied class (see (Davis & Drusvyatskiy, 2018)) of weakly convex functions - a rather generic class of functions as it includes all functions *without upward cusps*. In particular, every smooth function with Lipschitz gradient turns out to fulfill this property. On the other hand, extending the notion of cocoercivity to allow for negative coefficients, referred to as **cohypomonotonicity**, has received much less attention (Bauschke et al., 2020; Combettes & Pennanen, 2004) and is given by

$$\langle F(u)-F(v),u-v\rangle\geq-\gamma\|F(u)-F(v)\|^{2}.$$
Clearly, if there exists a Stampacchia solution for such an operator, then it also fulfills Assumption 1.

**Behavior w.r.t. the solution.** While the above properties are standard assumptions in the literature, it is usually sufficient to ask for the corresponding condition to hold when one of the arguments is a (Stampacchia) solution. This means that, instead of monotonicity, it is enough to ask for the operator F to be **star-monotone** (Pennanen, 2001), i.e.

$$\langle F(u),u-u^{*}\rangle\geq0,$$

in order to obtain standard convergence results, see (Gorbunov et al., 2022a), or **star-cocoercive** (Gorbunov et al., 2022a)

$$\langle F(u),u-u^{*}\rangle\geq\gamma\|F(u)\|^{2}.$$

In this spirit, we can provide a new interpretation of the assumption of the existence of a weak Minty solution as asking for the operator F to be **negatively star-cocoercive** (with respect to at least one solution). Furthermore, we want to point out that while the above star notions are sometimes required to hold for all solutions u*, in the following we only require them to hold for a single solution.

## 3 OGDA for Problems with Weak Minty Solutions

The generalized version of OGDA, whose name we equip, in the spirit of (Diakonikolas et al., 2021; Pethick et al., 2022), with a "+" to highlight the presence of the additional parameter γ, is given by:

Algorithm 1 OGDA+
Require: Starting point u0 = u−1 ∈ R^d, step size a > 0 and parameter 0 < γ ≤ 1.
for k = 0, 1, . . . do
  u_{k+1} = u_k − a((1 + γ)F(u_k) − F(u_{k−1}))

Theorem 3.1. *Let* F : R^d → R^d *be* L*-Lipschitz continuous satisfying Assumption 1 with* 1/L > ρ *and let* (u_k)_{k≥0} *be the iterates generated by Algorithm 1 with step size* a *satisfying* a > ρ *and*

$$aL\leq\frac{1-\gamma}{1+\gamma}.\tag{3}$$

![5_image_0.png](5_image_0.png)

Figure 2: **Polar game** from (Pethick et al., 2022) with 1/(8L) > ρ > 1/(10L). Shows the need to reduce γ in OGDA+.
*Then, for all* k ≥ 0

$$\min_{i=0,\ldots,k-1}\|F(u_{i})\|^{2}\leq\frac{1}{ka\gamma(a-\rho)}\|u_{0}+aF(u_{0})-u^{*}\|^{2}.$$

*In particular, as long as* ρ < 1/L *we can find a* γ *small enough such that the above bound holds.*

The first observation is that we would like to choose a as large as possible, as this allows us to treat the largest class of problems with ρ < a. In order to be able to choose the step size a large, we have to decrease γ, as evident from (3). This, however, degrades the speed of the algorithm as it makes the update steps smaller - the same effect can be observed (Pethick et al., 2022) for EG+ and is therefore not surprising. One could derive an optimal γ (i.e. minimizing the right hand side) from Theorem 3.1, which however results in an uninsightful cubic dependence on ρ. In practice, the strategy of decreasing γ until we get convergence, but not further, gives reasonable results. Furthermore, we want to point out that the condition ρ < 1/L is precisely the best possible bound for EG+ in (Pethick et al., 2022).

## 3.1 Improved Bounds Under Monotonicity

While the above theorem also holds if the operator F is monotone, we can modify the proof slightly to obtain a better dependence on the parameters:

Theorem 3.2. *Let* F : R^d → R^d *be monotone and* L*-Lipschitz. If* aL = (2−γ)/(2+γ) − ε *for* ε > 0*, then the iterates generated by* OGDA+ *fulfill*

$$\min_{i=0,\ldots,k-1}\|F(u_{i})\|^{2}\leq\frac{2}{ka^{2}\gamma^{2}\varepsilon}\|u_{0}+aF(u_{0})-u^{*}\|^{2}.$$

*In particular, we can choose* γ = 1 *and* a < 1/(3L).

There are different works discussing the convergence of OGDA in terms of the iterates or a gap function with a < 1/(2L), see for example (Malitsky & Tam, 2020). We, however, want to compare the above bound to more similar results on rates for the best iterate in terms of the operator norm.
The same rate as ours for OGDA is shown in (Chavdarova et al., 2021), but it requires the conservative step size bound a ≤ 1/(16L). This was later improved to a ≤ 1/(3L) in (Gorbunov et al., 2022b), where the bound even holds for the last iterate. However, all of these only deal with the case γ = 1. The only other reference that deals with a generalized (i.e. not necessarily γ = 1) version of OGDA is (Ryu et al., 2019). There, the resulting step size condition is a ≤ (2−γ)/(4L), which is strictly worse than ours for any γ. To summarize, not only do we show for the first time that the step size of a generalization of OGDA can go above 1/(2L), we also provide the least restrictive bound for any value of γ.

## 3.2 OGDA+ Stochastic

In this section we discuss the setting where, instead of the exact operator F, we only have access to a collection of independent estimators F̃(·, ξᵢ) at every iteration. We assume that the estimator F̃ is unbiased,

$$\mathbb{E}\big[\tilde{F}(u_{k},\xi_{i})\,\big|\,u_{k}\big]=F(u_{k}),$$

and has bounded variance,

$$\mathbb{E}\big[\|\tilde{F}(u_{k},\xi_{i})-F(u_{k})\|^{2}\big]\leq\sigma^{2}.$$

We show that we can still guarantee convergence by using batch sizes B of order O(ε^{-1}):

Algorithm 2 stochastic OGDA+
Require: Starting point u0 = u−1 ∈ R^d, step size a > 0, parameter 0 < γ ≤ 1 and batch size B.
for k = 0, 1, . . . do
  Sample i.i.d. (ξ_i)_{i=1}^{B} and compute the estimator g̃_k = (1/B) Σ_{i=1}^{B} F̃(u_k, ξ_i)
  u_{k+1} = u_k − a((1 + γ)g̃_k − g̃_{k−1})

Theorem 3.3. *Let* F : R^d → R^d *be* L*-Lipschitz satisfying Assumption 1 with* 1/L > ρ *and let* (u_k)_{k≥0} *be the sequence of iterates generated by stochastic* OGDA+*, with* a *and* γ *satisfying* ρ < a < (1−γ)/((1+γ)L)*. Then, to visit an* ε*-stationary point such that* min_{i=0,...,k−1} E[∥F̃(u_i, ξ)∥²] ≤ ε *we require*

$${\mathcal{O}}\left({\frac{1}{\varepsilon}}{\frac{1}{a\gamma(a-\rho)}}\mathbb{E}\big[\|u_{0}+a{\tilde{g}}_{0}-u^{*}\|^{2}\big]\operatorname*{max}\left\{1,{\frac{4\sigma^{2}}{aL}}{\frac{1}{\varepsilon}}\right\}\right)$$

*calls to the stochastic oracle* F̃*, with large batch sizes of order* O(ε^{-1}).
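A minimal numerical sketch of the stochastic update (numpy; the toy operator, noise model, and all parameter values are our own assumptions, not from the paper): we run stochastic OGDA+ with γ = 1 and aL < 1/3 (inside the monotone range of Theorem 3.2) on the monotone operator F(u) = u, with additive Gaussian oracle noise averaged over a minibatch.

```python
import numpy as np

def F(u):
    # Toy monotone operator: gradient of the convex function 0.5 * ||u||^2 (L = 1).
    return u

def stochastic_ogda_plus(u0, a=0.2, gamma=1.0, sigma=0.1, batch=64, iters=200, seed=0):
    # Algorithm 2: u_{k+1} = u_k - a * ((1 + gamma) * g_k - g_{k-1}),
    # where g_k averages `batch` noisy oracle calls F(u_k) + noise.
    rng = np.random.default_rng(seed)
    u = u0.copy()
    g_prev = F(u) + sigma * rng.normal(size=(batch, u.size)).mean(axis=0)
    for _ in range(iters):
        g = F(u) + sigma * rng.normal(size=(batch, u.size)).mean(axis=0)
        u = u - a * ((1 + gamma) * g - g_prev)
        g_prev = g
    return u

u_final = stochastic_ogda_plus(np.array([1.0, 1.0]))
# Residual shrinks to roughly the (batch-reduced) noise floor.
assert np.linalg.norm(F(u_final)) < 0.1 * np.linalg.norm(F(np.array([1.0, 1.0])))
```

Averaging over the batch reduces the oracle noise standard deviation by a factor √B, which is the mechanism behind the O(ε⁻¹) batch sizes in Theorem 3.3.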
In practice, large batch sizes of order O(ε^{-1}) are typically not desirable; rather, a small or decreasing step size is preferred. In the weak Minty setting this causes additional trouble due to the necessity of large step sizes to guarantee convergence. See in this context the heuristic variant proposed in (Pethick et al., 2022), which decreases the parameter corresponding to γ in our setting. Unfortunately, the current analysis does not allow for a variable γ.

## 4 EG+ with Adaptive Step Sizes

In this section we present Algorithm 3, which is able to solve the previously mentioned problems without any knowledge of the Lipschitz constant L, as it is typically difficult to compute in practice. Additionally, it is well known that rough estimates will lead to small step sizes and slow convergence behavior. In the presence of weak Minty solutions, however, there is additional interest in choosing large step sizes. We observed in Theorem 3.1, and in related works such as (Diakonikolas et al., 2021) and (Pethick et al., 2022), that a crucial ingredient in the analysis is that the step size is chosen larger than a multiple of the weak Minty parameter ρ to guarantee convergence at all. For these reasons we want to outline a method using adaptive step sizes, meaning that no step size needs to be supplied by the user and no line-search is carried out. Since the analysis of OGDA+ is already quite involved in the constant step size regime, we choose to equip EG+ with an *adaptive step size* which estimates the inverse of the (local) Lipschitz constant, see (4). Because the literature on adaptive methods, especially in the context of VIs, is so vast, we do not aim to give a comprehensive review but highlight only a few with especially interesting properties. In particular, we do not want to touch on methods with a linesearch procedure, which typically results in multiple gradient computations per iteration, such as (Tseng, 2000; Malitsky & Tam, 2020).
We use a simple and therefore widely used step size choice which naively estimates the local Lipschitz constant and forces a monotonically decreasing behavior. Such step sizes have been used extensively for monotone VIs, see (Yang & Liu, 2018; Boţ et al., 2020), and similarly in the context of the mirror-prox method, which corresponds to EG in the setting of (non-euclidean) Bregman distances, see (Antonakopoulos et al., 2019). A version of EG with a different adaptive step size choice has been investigated by Antonakopoulos et al. (2021); Bach & Levy (2019), with the unique feature that it is able to achieve the optimal rates for both smooth and nonsmooth problems without modification. However, these rates are only for monotone VIs and are in terms of the gap function. One of the drawbacks of adaptive methods resides in the fact that the step sizes are typically required to be nonincreasing, which results in poor behavior if a high curvature area was visited by the iterates before reaching a low curvature region. To the best of our knowledge, the only method which is allowed to use a nonmonotone step size to treat VIs, and does not use a possibly costly linesearch, is the golden ratio algorithm (Malitsky, 2020). It comes with the additional benefit of not requiring a global bound on the Lipschitz constant of F at all. While it is known that this method converges under the stronger assumption of the existence of Minty solutions, a quantitative convergence result is still open.

Algorithm 3 EG+ with adaptive step size
Require: Starting points u0, ū0 ∈ R^d, initial step size a0, parameters τ ∈ (0, 1) and 0 < γ ≤ 1.
1: for k = 0, 1, . . . do
2:   Find the step size:

$$a_{k}=\operatorname*{min}\left\{a_{k-1},{\frac{\tau\|u_{k-1}-{\bar{u}}_{k-1}\|}{\|F(u_{k-1})-F({\bar{u}}_{k-1})\|}}\right\}.$$
(4)

3:   Compute next iterate:

$$\begin{array}{c}{{u_{k}={\bar{u}}_{k}-a_{k}F({\bar{u}}_{k})}}\\ {{{\bar{u}}_{k+1}={\bar{u}}_{k}-a_{k}\gamma F(u_{k}).}}\end{array}$$

Clearly, a_k is monotonically decreasing by construction. Moreover, it is bounded away from zero by the simple observation that a_k ≥ min{a0, τ/L} > 0. The sequence therefore converges to a positive number which we denote by a∞ := lim_k a_k.

Theorem 4.1. *Let* F : R^d → R^d *be* L*-Lipschitz satisfying Assumption 1, where* u* *denotes any weak Minty solution, with* a∞ > 2ρ*, and let* (u_k)_{k≥0} *be the iterates generated by Algorithm 3 with* γ = 1/2 *and* τ ∈ (0, 1)*. Then, there exists a* k0 ∈ N *such that*

$$\operatorname*{min}_{i=k_{0},\ldots,k}\|F(u_{i})\|^{2}\leq{\frac{1}{k-k_{0}}}{\frac{L}{\tau({\frac{a_{\infty}}{2}}-\rho)}}\|{\bar{u}}_{k_{0}}-u^{*}\|^{2}.$$

Algorithm 3 presented above provides several benefits, but also some drawbacks. The main advantage resides in the fact that the Lipschitz constant of the operator F does not need to be known. Moreover, the step size choice presented in (4) might allow us to take steps much larger than what would be suggested by a global Lipschitz constant if the iterates never - or only during later iterations - visit the region of high curvature (large local L). In such cases these larger step sizes come with the additional advantage that they allow us to solve a richer class of problems, as we are able to relax the condition ρ < 1/(4L) in the case of EG+ to ρ < a∞/2, where a∞ = lim_k a_k ≥ τ/L. On the other hand, we face the problem that the bounds in Theorem 4.1 only hold after an unknown number of initial iterations, when a_k/a_{k+1} ≤ 1/τ is finally satisfied. In theory this might take long if the curvature around the solution is much higher than in the starting area, as this will force the need to decrease the step size very late into the solution process, resulting in the quotient a_k/a_{k+1} being too large. This drawback could be mitigated by choosing τ smaller.
However, this will result in poor performance due to small step sizes. Even for monotone problems, where this type of step size has been proposed, this problem could not be circumvented, and authors instead focused on convergence of the iterates without any rate.

## 5 Numerical Experiments

In the following we compare the EG+ method from (Diakonikolas et al., 2021) with the two methods we propose, OGDA+ and EG+ with adaptive step size, see Algorithm 1 and Algorithm 3 respectively. Last but not least, we also include the CurvatureEG+ method from (Pethick et al., 2022), which is a modification of EG+ that adaptively chooses the ratio of extrapolation and update step. In addition, a backtracking linesearch is performed with an initial guess made by second order information, whose extra cost we ignore in the experiments.

![8_image_0.png](8_image_0.png)

Figure 3: **Ratio game**. A particularly difficult parametrization of (5) suggested in (Daskalakis et al., 2020). Right: The sign of ⟨F(u), u − u*⟩ parametrized as u := (x, 1 − x, y, 1 − y), with yellow representing a negative sign and purple a positive one, highlighting the fact that the solution is not Minty. Nevertheless, all methods are able to converge.

## 5.1 Von Neumann's Ratio Game

We consider von Neumann's *ratio game* (von Neumann, 1945), recently explored in (Daskalakis et al., 2020; Diakonikolas et al., 2021). It is given by

$$\operatorname*{min}_{x\in\Delta^{m}}\operatorname*{max}_{y\in\Delta^{n}}V(x,y)={\frac{\langle x,R y\rangle}{\langle x,S y\rangle}},\tag{5}$$

where R ∈ R^{m×n} and S ∈ R^{m×n}_{+} with ⟨x, Sy⟩ > 0 for all x ∈ Δ^m, y ∈ Δ^n, where Δ^d := {z ∈ R^d : z_i ≥ 0, Σ_{i=1}^{d} z_i = 1} denotes the unit simplex. Expression (5) can be interpreted as the value V(π_x, π_y) of a stochastic game with a single state and mixed strategies.
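The operator associated with (5) is F(x, y) = (∇ₓV, −∇ᵧV), which follows from the quotient rule. A small hedged sketch (numpy; the particular 2×2 matrices R and S are our own toy choice, not the instance used in the experiments) computing this operator and validating it against central finite differences:

```python
import numpy as np

R = np.array([[1.0, 0.0], [-1.0, 2.0]])
S = np.array([[0.6, 0.4], [0.4, 0.6]])   # positive entries keep <x, S y> > 0 on the simplex

def V(x, y):
    return (x @ R @ y) / (x @ S @ y)

def F(x, y):
    # Operator of the min-max problem: (grad_x V, -grad_y V), via the quotient rule.
    num, den = x @ R @ y, x @ S @ y
    gx = (R @ y) / den - num * (S @ y) / den**2
    gy = (R.T @ x) / den - num * (S.T @ x) / den**2
    return gx, -gy

x, y = np.array([0.3, 0.7]), np.array([0.5, 0.5])
gx, gy = F(x, y)
eps = 1e-6
for i in range(2):
    e = np.eye(2)[i]
    fd_x = (V(x + eps * e, y) - V(x - eps * e, y)) / (2 * eps)
    fd_y = (V(x, y + eps * e) - V(x, y - eps * e)) / (2 * eps)
    assert abs(gx[i] - fd_x) < 1e-5 and abs(gy[i] + fd_y) < 1e-5
```

In the actual experiments the iterates additionally have to remain on the product of simplices; the sketch only checks the unconstrained operator evaluation.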
In Figure 3b we see an illustration of a particularly difficult instance of (5) discussed in (Daskalakis et al., 2020), highlighting the fact that the Stampacchia solution is not a Minty solution, even when restricted to an arbitrarily small ball around it (yellow area touching the solution). Interestingly, we still observe good convergence behavior, although an estimated ρ is more than ten times larger than the estimated Lipschitz constant.

## 5.2 Forsaken

A particularly difficult min-max toy example with a "Forsaken" solution was proposed in Example 5.2 of (Hsieh et al., 2021), and is given by

$$\operatorname*{min}_{x\in\mathbb{R}}\operatorname*{max}_{y\in\mathbb{R}}\;x(y-0.45)+\varphi(x)-\varphi(y),\tag{6}$$

where $\varphi(z)=\frac{1}{4}z^{2}-\frac{1}{2}z^{4}+\frac{1}{6}z^{6}$. This problem exhibits a Stampacchia solution at (x*, y*) ≈ (0.08, 0.4), but also two limit cycles not containing any critical point of the objective function. In addition, Hsieh et al. (2021) also observed that the limit cycle closer to the solution repels possible trajectories of iterates, thus "shielding" the solution. Later, Pethick et al. (2022) noticed that, restricted to the box ∥(x, y)∥∞ < 3/2, the above mentioned solution is weak Minty with ρ ≥ 2 · 0.477761, which is much larger than 1/L ≈ 0.08. In line with these observations, we can see in Figure 4 that none of the fixed step size methods with step size bounded by 1/L converge. In light of this observation, Pethick et al. (2022) proposed a backtracking linesearch which potentially allows for larger steps than predicted by the global Lipschitz constant. Similarly, our proposed adaptive step size version of EG+, see Algorithm 3, is also able to break through the repelling limit cycle and converge to the solution. On top of this, it does so at a faster rate and without the need of additional computations in the backtracking procedure.
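To make the adaptive rule concrete, here is a minimal sketch of Algorithm 3 applied to the Forsaken operator of (6) (numpy; the starting point, a0, and iteration count are our own choices, and we only check the structural properties of the step size sequence rather than reproduce the convergence plot):

```python
import numpy as np

def phi_prime(z):
    return 0.5 * z - 2 * z**3 + z**5

def F(u):
    # Operator of (6): F(x, y) = (d_x f, -d_y f) for f(x, y) = x(y - 0.45) + phi(x) - phi(y).
    x, y = u
    return np.array([y - 0.45 + phi_prime(x), -x + phi_prime(y)])

def adaptive_eg_plus(u_bar, a0=0.5, tau=0.9, gamma=0.5, iters=500):
    # Algorithm 3: a_k estimates tau / (local Lipschitz constant), forced nonincreasing.
    a, u_prev, ub_prev = a0, None, None
    steps = []
    for _ in range(iters):
        if u_prev is not None:
            denom = np.linalg.norm(F(u_prev) - F(ub_prev))
            if denom > 0:
                a = min(a, tau * np.linalg.norm(u_prev - ub_prev) / denom)
        u = u_bar - a * F(u_bar)            # extrapolation step
        u_prev, ub_prev = u, u_bar
        u_bar = u_bar - a * gamma * F(u)    # (shorter) update step, gamma = 1/2
        steps.append(a)
    return u_bar, np.array(steps)

u_final, steps = adaptive_eg_plus(np.array([0.9, -0.5]))
assert np.all(np.diff(steps) <= 0) and steps[-1] > 0   # a_k nonincreasing, stays positive
assert np.all(np.isfinite(u_final))
```

Because a_k is only forced down where the local curvature demands it, the step size can stay well above τ/L for a global Lipschitz constant L dominated by the polynomial growth of φ′ near the boundary of the box.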
![9_image_0.png](9_image_0.png)

Figure 4: **Forsaken.** An illustration of different methods on problem (6), originally suggested by Hsieh et al. (2021). Only Algorithm 3 and CurvatureEG+ are able to choose a step size large enough to withstand the repellent limit cycle.

## 5.3 Lower Bound Example

The following min-max problem was introduced in (Pethick et al., 2022) as a lower bound on the dependence between ρ and L for EG+:

$$\min_{x\in\mathbb{R}}\max_{y\in\mathbb{R}}\;\xi xy+\frac{\zeta}{2}(x^{2}-y^{2}).\tag{7}$$

In particular, Theorem 3.4 from (Pethick et al., 2022) states that EG+ (with any γ) and constant step size a = 1/L converges for this problem if and only if (0, 0) is a weak Minty solution with ρ < (1 − γ)/L, where ρ and L can be computed explicitly in the above example and are given by

$$L=\sqrt{\xi^{2}+\zeta^{2}}\quad\mathrm{and}\quad\rho=-2\frac{\zeta}{\xi^{2}+\zeta^{2}}.$$

Figure 1 is obtained by choosing ξ = √3 and ζ = −1, which gives exactly ρ = 1/L; the theory therefore predicts divergence of EG+ for any γ, which is exactly what is empirically observed. Although the general upper bound proved in Theorem 3.1 only states convergence in the case ρ < 1/L, we observe rapid convergence of OGDA+ for this example, showcasing that it can drastically outperform EG+ in some scenarios.

## 6 Conclusion

Many interesting questions remain in the realm of min-max problems - especially when leaving the convex-concave setting. Very recently, Gorbunov et al. (2022c) showed that the O(1/k) bounds on the squared operator norm for EG and OGDA for the **last iterate** (and not just the best one) hold even in the negative comonotone setting. Deriving a similar statement in the presence of merely weak Minty solutions is an open question. Overall, our analysis and experiments seem to provide evidence that there is little advantage of using OGDA+ over EG+ for most problems, as the lower iteration cost is offset by the smaller step size.
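The closed-form constants for problem (7) are easy to check numerically (a small numpy sketch; the function names are ours): for ξ = √3 and ζ = −1 the parametrization sits exactly on the boundary ρ = 1/L, and, since u₋₁ = u₀, the first OGDA+ step reduces to u₁ = u₀ − aγF(u₀).

```python
import numpy as np

def constants(xi, zeta):
    # Closed-form constants for problem (7): L = sqrt(xi^2 + zeta^2),
    # rho = -2 * zeta / (xi^2 + zeta^2).
    L = np.sqrt(xi**2 + zeta**2)
    rho = -2 * zeta / (xi**2 + zeta**2)
    return L, rho

# The parametrization of Figure 1: xi = sqrt(3), zeta = -1 gives rho = 1/L exactly.
L, rho = constants(np.sqrt(3.0), -1.0)
assert abs(L - 2.0) < 1e-12
assert abs(rho - 1.0 / L) < 1e-12

def F(u):
    # Operator of (7): (d_x f, -d_y f) for f(x, y) = xi*x*y + (zeta/2)*(x^2 - y^2).
    xi, zeta = np.sqrt(3.0), -1.0
    x, y = u
    return np.array([xi * y + zeta * x, -xi * x + zeta * y])

# With u_{-1} = u_0, the first OGDA+ update collapses: (1+gamma)F(u0) - F(u_{-1}) = gamma*F(u0).
u0, a, gamma = np.array([1.0, 0.0]), 0.3, 0.5
u1 = u0 - a * ((1 + gamma) * F(u0) - F(u0))
assert np.allclose(u1, u0 - a * gamma * F(u0))
```

We deliberately do not assert convergence of any method here, since this parametrization lies exactly outside the regime covered by the theory.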
One exception is given by problem (7) displayed in Figure 1, which is not covered by theory and for which OGDA+ is the only method able to converge. Lastly, we observe that the previous paradigm from pure minimization of "smaller step size ensures convergence" but "larger step size gets there faster", where the latter is typically constrained by the reciprocal of the gradient's Lipschitz constant, does not seem to hold true for min-max problems anymore. The analysis of different methods in the presence of weak Minty solutions shows that convergence can be lost if the step size is too small and sometimes needs to be larger than 1/L, which one can typically only hope for in adaptive methods. Our EG+ method with adaptive step size achieves this even without the additional cost of a backtracking linesearch as used for the CurvatureEG+ method of (Pethick et al., 2022).

## References

Kimon Antonakopoulos, Veronica Belmega, and Panayotis Mertikopoulos. An adaptive mirror-prox method for variational inequalities with singular operators. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/c2890d44d06bafb6c7b4aa194857ccbc-Paper.pdf.

Kimon Antonakopoulos, Veronica Belmega, and Panayotis Mertikopoulos. Adaptive extra-gradient methods for min-max optimization and games. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum?id=R0a0kFI3dJx.

Francis Bach and Kfir Y Levy. A universal algorithm for variational inequalities adaptive to smoothness and noise. In *Conference on Learning Theory*, pp. 164–194. PMLR, 2019.

Heinz H Bauschke, Walaa M Moursi, and Xianfu Wang. Generalized monotone operators and their averaged resolvents. *Mathematical Programming*, pp. 1–20, 2020.

Axel Böhm, Michael Sedlmayer, Ernö Robert Csetnek, and Radu Ioan Boţ.
Two steps at a time - taking GAN training in stride with Tseng's method. *SIAM Journal on Mathematics of Data Science*, 4(2):750–771, 2022. Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon LacosteJulien, and William L. Hamilton. Adversarial example games. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information* Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ 65586803f1435736f42a541d3a924595-Abstract.html. Radu Ioan Boţ and Axel Böhm. Alternating proximal-gradient steps for (stochastic) nonconvex-concave minimax problems. *arXiv:2007.13605*, 2020. Radu Ioan Boţ, Michael Sedlmayer, and Phan Tu Vuong. A relaxed inertial forward-backward-forward algorithm for solving monotone inclusions with application to GANs. *arXiv:2003.07886*, 2020. Yang Cai, Argyris Oikonomou, and Weiqiang Zheng. Accelerated single-call methods for monotone inclusions and constrained min-max optimization. *arXiv preprint arXiv:2206.05248*, 2022. URL https://arxiv. org/pdf/2210.03096.pdf. Tatjana Chavdarova, Michael I Jordan, and Manolis Zampetakis. Last-iterate convergence of saddle point optimizers via high-resolution differential equations. *arXiv preprint arXiv:2112.13826*, 2021. Patrick L Combettes and Teemu Pennanen. Proximal methods for cohypomonotone operators. *SIAM journal* on control and optimization, 43(2):731–742, 2004. Ernö Robert Csetnek, Yura Malitsky, and Matthew K Tam. Shadow Douglas–Rachford splitting for monotone inclusions. *Applied Mathematics & Optimization*, 80(3):665–678, 2019. Cong D Dang and Guanghui Lan. On the convergence properties of non-euclidean extragradient methods for variational inequalities with generalized monotone operators. Computational Optimization and applications, 60(2):277–310, 2015. 
Constantinos Daskalakis and Ioannis Panageas. The limit points of (optimistic) gradient descent in min-max optimization. In *Advances in Neural Information Processing Systems*, pp. 9236–9246, 2018. Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training GANs with optimism. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum? id=SJJySbbAZ. Constantinos Daskalakis, Dylan J Foster, and Noah Golowich. Independent policy gradient methods for competitive reinforcement learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 5527– 5540. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 3b2acfe2e38102074656ed938abf4ac3-Paper.pdf. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained minmax optimization. In *Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing*, pp. 1466–1478, 2021. Damek Davis and Dmitriy Drusvyatskiy. Stochastic subgradient method converges at the rate O(k −1/4) on weakly convex functions. *arXiv preprint arXiv:1802.02988*, 2018. Jelena Diakonikolas, Constantinos Daskalakis, and Michael Jordan. Efficient methods for structured nonconvex-nonconcave min-max optimization. In *International Conference on Artificial Intelligence and* Statistics, pp. 2746–2754. PMLR, 2021. Francisco Facchinei and Jong-Shi Pang. *Finite-dimensional variational inequalities and complementarity* problems. Springer Science & Business Media, 2007. Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, and Simon Lacoste-Julien. A variational inequality perspective on generative adversarial networks. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=r1laEnA5Ym. 
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. Eduard Gorbunov, Nicolas Loizou, and Gauthier Gidel. Extragradient method: O(1/k) last-iterate convergence for monotone variational inequalities and connections with cocoercivity. In *International Conference* on Artificial Intelligence and Statistics, pp. 366–402. PMLR, 2022a. Eduard Gorbunov, Adrien Taylor, and Gauthier Gidel. Last-iterate convergence of optimistic gradient method for monotone variational inequalities. 2022b. URL https://arxiv.org/abs/2205.08446. Eduard Gorbunov, Adrien Taylor, Samuel Horváth, and Gauthier Gidel. Convergence of proximal point and extragradient-based methods beyond monotonicity: the case of negative comonotonicity. *arXiv preprint* arXiv:2210.13831, 2022c. URL http://arxiv.org/abs/2210.13831v1. Benjamin Grimmer, Haihao Lu, Pratik Worah, and Vahab Mirrokni. The landscape of the proximal point method for nonconvex–nonconcave minimax optimization. *Mathematical Programming*, pp. 1–35, 2022. URL https://arxiv.org/pdf/2006.08667.pdf. Ya-Ping Hsieh, Panayotis Mertikopoulos, and Volkan Cevher. The limits of min-max optimization algorithms: Convergence to spurious non-critical sets. In *International Conference on Machine Learning*, pp. 4337– 4348. PMLR, 2021. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 16223–16234. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/ paper/2020/file/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-Paper.pdf. David Kinderlehrer and Guido Stampacchia. 
*An introduction to variational inequalities and their applications*. SIAM, 2000. GM Korpelevich. The extragradient method for finding saddle points and other problems. *Matecon*, 12: 747–756, 1976. Sucheol Lee and Donghwan Kim. Fast extra gradient methods for smooth structured nonconvex-nonconcave minimax problems. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021a. URL https://openreview.net/forum?id=AYAgKFl78z. Sucheol Lee and Donghwan Kim. Semi-anchored multi-step gradient descent ascent method for structured nonconvex-nonconcave composite minimax problems. *arXiv preprint arXiv:2105.15042*, 2021b. Tengyuan Liang and James Stokes. Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), *The 22nd International Conference on Artificial Intelligence and Statistics*, volume 89 of Proceedings of Machine Learning Research, pp. 907–915. PMLR, 2019. Tianyi Lin, Chi Jin, and Michael I Jordan. On gradient descent ascent for nonconvex-concave minimax problems. In *International Conference on Machine Learning*, pp. 6083–6093. PMLR, 2020. Mingrui Liu, Youssef Mroueh, Jerret Ross, Wei Zhang, Xiaodong Cui, Payel Das, and Tianbao Yang. Towards better understanding of adaptive gradient algorithms in generative adversarial nets. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJxIm0VtwH. Mingrui Liu, Hassan Rafique, Qihang Lin, and Tianbao Yang. First-order convergence theory for weaklyconvex-weakly-concave min-max problems. *The Journal of Machine Learning Research*, 2021. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In *International Conference on Learning Representations*, 2018. URL https://openreview.net/forum?id=rJzIBfZAb. Yura Malitsky. 
Projected reflected gradient methods for monotone variational inequalities. *SIAM Journal on Optimization*, 25(1):502–520, 2015.

Yura Malitsky. Golden ratio algorithms for variational inequalities. *Mathematical Programming*, 184(1):383–410, 2020.

Yura Malitsky and Matthew K. Tam. A forward-backward splitting method for monotone inclusions without cocoercivity. *SIAM Journal on Optimization*, 30(2):1451–1472, 2020.

Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, and Georgios Piliouras. Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile. In *International Conference on Learning Representations*, 2019.

Aryan Mokhtari, Asuman Ozdaglar, and Sarath Pattathil. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In *International Conference on Artificial Intelligence and Statistics*, pp. 1497–1507. PMLR, 2020.

Yurii Nesterov. Dual extrapolation and its applications to solving variational inequalities and related problems. *Mathematical Programming*, 109(2-3):319–344, 2007.

Teemu Pennanen. On the range of monotone composite mappings. *Journal of Nonlinear and Convex Analysis*, 2(2), 2001.

Thomas Pethick, Puya Latafat, Panos Patrinos, Olivier Fercoq, and Volkan Cevher. Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems. In *International Conference on Learning Representations*, 2022.

David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. *arXiv preprint arXiv:1610.01945*, 2016.

Leonid Denisovich Popov. A modification of the Arrow-Hurwicz method for search of saddle points. *Mathematical Notes of the Academy of Sciences of the USSR*, 28(5):845–848, 1980.

Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences.
In *Proceedings of the 26th Annual Conference on Learning Theory*, pp. 993–1019, 2013a.

Sasha Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In *Advances in Neural Information Processing Systems*, pp. 3066–3074, 2013b.

Ernest K. Ryu, Kun Yuan, and Wotao Yin. ODE analysis of stochastic gradient methods with optimism and anchoring for minimax problems and GANs. *arXiv preprint arXiv:1905.10899*, 2019.

Paul Tseng. A modified forward-backward splitting method for maximal monotone mappings. *SIAM Journal on Control and Optimization*, 38(2):431–446, 2000.

John von Neumann. A model of general economic equilibrium. *The Review of Economic Studies*, 13(1):1–9, 1945.

Zi Xu, Huiling Zhang, Yang Xu, and Guanghui Lan. A unified single-loop alternating gradient projection algorithm for nonconvex–concave and convex–nonconcave minimax problems. *Mathematical Programming*, pp. 1–72, 2023.

Jun Yang and Hongwei Liu. A modified projected gradient method for monotone variational inequalities. *Journal of Optimization Theory and Applications*, 179(1):197–211, 2018.

Junchi Yang, Negar Kiyavash, and Niao He. Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems. *arXiv preprint arXiv:2002.09621*, 2020.

## A Omitted Proofs

## A.1 OGDA+

For convenience we will sometimes use the notation $g_k$ for $F(u_k)$ for all $k \geq -1$.

Lemma A.1. *Let* $(u_k)$ *be the sequence of iterates generated by Algorithm 1. Then*

$$\|u_{k+1}+ag_{k}-u^{*}\|^{2}\leq\|u_{k}+ag_{k-1}-u^{*}\|^{2}+a\gamma\rho\|g_{k}\|^{2}+4a\gamma^{-1}\langle g_{k}-g_{k-1},u_{k}-u_{k+1}\rangle-a^{2}(1+2\gamma^{-1})\|g_{k}-g_{k-1}\|^{2}-(2\gamma^{-1}-1)\|u_{k+1}-u_{k}\|^{2}.\tag{8}$$

Proof.
From the update of the method we deduce for all $k \geq 0$

$$\begin{aligned}\|u_{k+1}+ag_{k}-u^{*}\|^{2}&=\|u_{k}-a\gamma g_{k}+ag_{k-1}-u^{*}\|^{2}\\&=\|u_{k}+ag_{k-1}-u^{*}\|^{2}-2\langle u_{k}-u^{*},a\gamma g_{k}\rangle-\langle 2ag_{k-1}-a\gamma g_{k},a\gamma g_{k}\rangle\\&\leq\|u_{k}+ag_{k-1}-u^{*}\|^{2}+a\gamma\rho\|g_{k}\|^{2}-\langle 2ag_{k-1}-a\gamma g_{k},a\gamma g_{k}\rangle,\end{aligned}\tag{9}$$

where we used the weak Minty assumption to deduce the last inequality. It remains to derive the following equality:

$$-\langle 2ag_{k-1}-a\gamma g_{k},a\gamma g_{k}\rangle=4a\gamma^{-1}\langle g_{k}-g_{k-1},u_{k}-u_{k+1}\rangle-(2\gamma^{-1}-1)\|u_{k+1}-u_{k}\|^{2}-a^{2}(1+2\gamma^{-1})\|g_{k}-g_{k-1}\|^{2}.\tag{10}$$

This can be seen by expressing every difference of iterates in terms of gradients according to Algorithm 1, giving

$$4a\gamma^{-1}\langle g_{k}-g_{k-1},u_{k}-u_{k+1}\rangle=4a\gamma^{-1}\langle g_{k}-g_{k-1},a(g_{k}-g_{k-1})+a\gamma g_{k}\rangle=4a^{2}\gamma^{-1}\|g_{k}-g_{k-1}\|^{2}+4a^{2}\langle g_{k},g_{k}-g_{k-1}\rangle$$

and

$$\begin{aligned}-(2\gamma^{-1}-1)\|u_{k+1}-u_{k}\|^{2}&=(1-2\gamma^{-1})\|a(g_{k}-g_{k-1})+a\gamma g_{k}\|^{2}\\&=(a^{2}-2a^{2}\gamma^{-1})\|g_{k}-g_{k-1}\|^{2}+(2\gamma a^{2}-4a^{2})\langle g_{k},g_{k}-g_{k-1}\rangle+(a^{2}\gamma^{2}-2a^{2}\gamma)\|g_{k}\|^{2}.\end{aligned}$$

Adding the previous two equalities to $-a^{2}(1+2\gamma^{-1})\|g_{k}-g_{k-1}\|^{2}$ proves (10). Combining (9) and (10) proves the desired statement.

We are actually going to show a slightly more general version of Theorem 3.1, which introduces an additional parameter $\lambda$. Note that for $\lambda=\gamma^{-1}$ we recover the statement of Theorem 3.1. This allows us to cover the analysis of the monotone case in one proof. In particular, $\lambda$ close to zero will yield the statement of Theorem 3.2.

Theorem A.1.
*Let* $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ *be* $L$*-Lipschitz continuous satisfying Assumption 1, where* $u^{*}$ *denotes any weak Minty solution, with* $a\lambda\gamma>\rho$ *for some* $0\leq\lambda\leq 2\gamma^{-1}$*, and let* $(u_{k})_{k\geq 0}$ *be the sequence of iterates generated by Algorithm 1 with*

$$aL\leq\frac{2-\lambda\gamma-\gamma}{2-\lambda\gamma+\gamma}.\tag{11}$$

*Then, for all* $k\geq 0$*,*

$$\frac{1}{k}\sum_{i=0}^{k-1}\|F(u_{i})\|^{2}\leq\frac{1}{ka\gamma(a\lambda\gamma-\rho)}\|u_{0}+aF(u_{0})-u^{*}\|^{2}.$$

In particular, as long as $\rho<\frac{1}{L}$ we can find a small enough $\gamma$ such that the above bound holds.

Proof. Using the definition of $u_{k+1}$ we can express

$$a\gamma g_{k}=u_{k}-u_{k+1}-a(g_{k}-g_{k-1}).\tag{12}$$

By applying norms on both sides, expanding the square and multiplying by $\lambda$, we obtain

$$a^{2}\lambda\gamma^{2}\|g_{k}\|^{2}=\lambda\|u_{k+1}-u_{k}\|^{2}+a^{2}\lambda\|g_{k}-g_{k-1}\|^{2}+2a\lambda\langle u_{k+1}-u_{k},g_{k}-g_{k-1}\rangle.\tag{13}$$

Adding (13) and Lemma A.1 we deduce

$$\begin{aligned}\|u_{k+1}+ag_{k}-u^{*}\|^{2}+a\gamma(a\lambda\gamma-\rho)\|g_{k}\|^{2}\leq\|u_{k}+ag_{k-1}-u^{*}\|^{2}&-a^{2}(1+2\gamma^{-1}-\lambda)\|g_{k}-g_{k-1}\|^{2}-(2\gamma^{-1}-1-\lambda)\|u_{k+1}-u_{k}\|^{2}\\&+(4a\gamma^{-1}-2a\lambda)\langle u_{k}-u_{k+1},g_{k}-g_{k-1}\rangle.\end{aligned}\tag{14}$$

Now, we get via Young's inequality

$$(4a\gamma^{-1}-2a\lambda)\langle u_{k}-u_{k+1},g_{k}-g_{k-1}\rangle\leq aL(2\gamma^{-1}-\lambda)\|u_{k}-u_{k+1}\|^{2}+L^{-1}a(2\gamma^{-1}-\lambda)\|g_{k}-g_{k-1}\|^{2}.$$
Combining the previous two inequalities gives

$$\|u_{k+1}+ag_{k}-u^{*}\|^{2}+a\gamma(a\gamma\lambda-\rho)\|g_{k}\|^{2}\leq\|u_{k}+ag_{k-1}-u^{*}\|^{2}+(aL^{-1}(2\gamma^{-1}-\lambda)-a^{2}(1+2\gamma^{-1}-\lambda))\|g_{k}-g_{k-1}\|^{2}-(2\gamma^{-1}-1-\lambda-La(2\gamma^{-1}-\lambda))\|u_{k+1}-u_{k}\|^{2}.\tag{15}$$

We now use the Lipschitz continuity of $F$ (strictly speaking we would have to assume that the coefficient in front of $\|g_{k}-g_{k-1}\|^{2}$ is positive, but if it is not we can simply discard this term and conclude the proof directly) to deduce that

$$\|u_{k+1}+ag_{k}-u^{*}\|^{2}+a\gamma(a\gamma\lambda-\rho)\|g_{k}\|^{2}\leq\|u_{k}+ag_{k-1}-u^{*}\|^{2}+(aL(2\gamma^{-1}-\lambda)-a^{2}L^{2}(1+2\gamma^{-1}-\lambda))\|u_{k}-u_{k-1}\|^{2}-(2\gamma^{-1}-1-\lambda-La(2\gamma^{-1}-\lambda))\|u_{k+1}-u_{k}\|^{2}.$$

The fact that the terms can be telescoped is, with $\alpha=La$, equivalent to

$$2\gamma^{-1}-1-\lambda-(2\alpha\gamma^{-1}-\alpha\lambda)\geq-\alpha^{2}(1+2\gamma^{-1})+\alpha^{2}\lambda+(2\alpha\gamma^{-1}-\alpha\lambda),$$

which can be simplified to

$$\alpha^{2}(2+\gamma-\lambda\gamma)-\alpha(4-2\lambda\gamma)+2-\gamma-\lambda\gamma\geq0.$$

By solving for $\alpha$ we get the condition

$$\alpha\leq\frac{2-\lambda\gamma-\gamma}{2-\lambda\gamma+\gamma},\tag{16}$$

where from the condition $2\gamma^{-1}-1-\lambda-(2\alpha\gamma^{-1}-\alpha\lambda)\geq0$ we deduce $\alpha\leq\frac{2-\lambda\gamma-\gamma}{2-\lambda\gamma}$, which is redundant in light of (16). The statement follows since we chose $u_{0}=u_{-1}$.

## A.2 Improved Bounds Under Monotonicity

For the reader's convenience we restate the theorem from the main text.

Theorem 3.2. *Let* $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ *be monotone and* $L$*-Lipschitz. If* $aL=\frac{2-\gamma}{2+\gamma}-\varepsilon$ *for* $\varepsilon>0$*, then the iterates generated by OGDA+ fulfill*

$$\frac{1}{k}\sum_{i=0}^{k-1}\|F(u_{i})\|^{2}\leq\frac{2}{ka^{2}\gamma^{2}\varepsilon}\|u_{0}+aF(u_{0})-u^{*}\|^{2}.$$

*In particular, we can choose* $\gamma=1$ *and* $a<\frac{1}{3L}$.

Proof of Theorem 3.2. From Theorem A.1 and the fact that $aL=\frac{2-\gamma}{2+\gamma}-\varepsilon$ we need to find an appropriate $\lambda>0$ such that $aL\leq\frac{2-\lambda\gamma-\gamma}{2-\lambda\gamma+\gamma}$.
Given $\varepsilon>0$ we therefore aim to find a $\lambda>0$ such that

$$\frac{2-\gamma}{2+\gamma}-\varepsilon\stackrel{!}{\leq}\frac{2-\lambda\gamma-\gamma}{2-\lambda\gamma+\gamma}.\tag{17}$$

By bringing both terms on one side and the same denominator we obtain the condition

$$\frac{2\lambda\gamma^{2}}{(\gamma+2)(\gamma+2-\lambda\gamma)}\stackrel{!}{\leq}\varepsilon.\tag{18}$$

Using the fact that $\lambda\leq2\gamma^{-1}$ and $\gamma\geq0$ we can upper bound the left hand side by $\lambda$. Choosing $\lambda=\varepsilon$ therefore ensures that the necessary condition on the step size (16) is satisfied and at the same time yields the dependence on $\varepsilon$ in the denominator of the right hand side in the statement of the theorem.

## A.3 OGDA+ Stochastic

For the stochastic analysis we use for convenience the notation $\Delta_{k+1}=\mathbb{E}\big[\|u_{k+1}-u_{k}\|^{2}\big]$, $\tilde{\sigma}^{2}=\mathbb{E}\big[\|\tfrac{1}{B}\sum_{j=1}^{B}\tilde{F}(u_{k},\xi_{j})-F(u_{k})\|^{2}\big]$ and $L_{k}=\mathbb{E}\big[\|u_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}\big]$.

Lemma A.2. *Let* $(u_{k})$ *be the sequence of iterates generated by stochastic OGDA+. Then, for any* $\lambda>0$*,*

$$L_{k+1}+a\gamma(a-\rho)\mathbb{E}\big[\|g_{k}\|^{2}\big]\leq L_{k}+4(1+\lambda)L^{-2}\tilde{C}_{1}\tilde{\sigma}^{2}+(1+\lambda^{-1})\tilde{C}_{1}\Delta_{k}-\tilde{C}_{2}\Delta_{k+1},$$

*where* $\tilde{C}_{1}:=\max\{0,aL\gamma^{-1}-a^{2}L^{2}(1+\gamma^{-1})\}$ *and* $\tilde{C}_{2}:=\gamma^{-1}-1-aL\gamma^{-1}$.

Proof of Lemma A.2.
From the update of the method we deduce for all $k\geq0$

$$\begin{aligned}\mathbb{E}\big[\|u_{k+1}+a\tilde{g}_{k}-u^{*}\|^{2}\big]&=\mathbb{E}\big[\|u_{k}-a\gamma\tilde{g}_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}\big]\\&=\mathbb{E}\big[\|u_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}-\langle 2a\tilde{g}_{k-1}-a\gamma\tilde{g}_{k},a\gamma\tilde{g}_{k}\rangle\big]-2a\mathbb{E}[\langle u_{k}-u^{*},\gamma\tilde{g}_{k}\rangle]\\&\leq\mathbb{E}\big[\|u_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}+a\gamma\rho\|g_{k}\|^{2}-\langle 2a\tilde{g}_{k-1}-a\gamma\tilde{g}_{k},a\gamma\tilde{g}_{k}\rangle\big],\end{aligned}\tag{19}$$

where we used $2\mathbb{E}[\langle u_{k}-u^{*},\tilde{g}_{k}\rangle]=2\mathbb{E}[\mathbb{E}[\langle u_{k}-u^{*},\tilde{g}_{k}\rangle\,|\,u_{k}]]=2\mathbb{E}[\langle u_{k}-u^{*},g_{k}\rangle]\geq-\rho\mathbb{E}[\|g_{k}\|^{2}]$ to deduce the last inequality. It remains to note that

$$-\langle 2a\tilde{g}_{k-1}-a\gamma\tilde{g}_{k},a\gamma\tilde{g}_{k}\rangle=4a\gamma^{-1}\langle\tilde{g}_{k}-\tilde{g}_{k-1},u_{k}-u_{k+1}\rangle-(2\gamma^{-1}-1)\|u_{k+1}-u_{k}\|^{2}-a^{2}(1+2\gamma^{-1})\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2},\tag{20}$$

which follows immediately in the same way we deduced (10).
Now using the definition of $u_{k+1}$ we get

$$a^{2}\gamma\mathbb{E}\big[\|g_{k}\|^{2}\big]=a^{2}\gamma\mathbb{E}\big[\|\mathbb{E}[\tilde{g}_{k}\,|\,u_{k}]\|^{2}\big]\leq a^{2}\gamma\mathbb{E}\big[\|\tilde{g}_{k}\|^{2}\big]=\mathbb{E}\big[\gamma^{-1}\|u_{k+1}-u_{k}\|^{2}\big]+\mathbb{E}\big[\gamma^{-1}a^{2}\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2}+2a\gamma^{-1}\langle u_{k+1}-u_{k},\tilde{g}_{k}-\tilde{g}_{k-1}\rangle\big].\tag{21}$$

Combining (19) to (21) we deduce

$$\begin{aligned}\mathbb{E}\big[\|u_{k+1}+a\tilde{g}_{k}-u^{*}\|^{2}\big]+a\gamma(a-\rho)\mathbb{E}\big[\|g_{k}\|^{2}\big]\leq\mathbb{E}\big[\|u_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}\big]&-a^{2}(1+\gamma^{-1})\mathbb{E}\big[\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2}\big]-(\gamma^{-1}-1)\mathbb{E}\big[\|u_{k+1}-u_{k}\|^{2}\big]\\&+2a\gamma^{-1}\mathbb{E}\big[\langle u_{k}-u_{k+1},\tilde{g}_{k}-\tilde{g}_{k-1}\rangle\big].\end{aligned}$$

We get via Young's inequality
$$2a\gamma^{-1}\langle u_{k}-u_{k+1},\tilde{g}_{k}-\tilde{g}_{k-1}\rangle\leq a\gamma^{-1}L\|u_{k}-u_{k+1}\|^{2}+L^{-1}a\gamma^{-1}\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2}.$$

Combining the previous two inequalities gives

$$\mathbb{E}\big[\|u_{k+1}+a\tilde{g}_{k}-u^{*}\|^{2}\big]+a\gamma(a-\rho)\mathbb{E}\big[\|g_{k}\|^{2}\big]\leq\mathbb{E}\big[\|u_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}\big]+(L^{-1}a\gamma^{-1}-a^{2}(1+\gamma^{-1}))\mathbb{E}\big[\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2}\big]-(\gamma^{-1}-1-La\gamma^{-1})\mathbb{E}\big[\|u_{k+1}-u_{k}\|^{2}\big].\tag{22}$$

Next, we need to estimate the difference of the gradient estimators via the difference of the true gradients:

$$\begin{aligned}\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2}&\leq\left(1+\frac{1}{\lambda}\right)\|g_{k}-g_{k-1}\|^{2}+(1+\lambda)\|g_{k}-\tilde{g}_{k}+\tilde{g}_{k-1}-g_{k-1}\|^{2}\\&\leq\left(1+\frac{1}{\lambda}\right)L^{2}\|u_{k}-u_{k-1}\|^{2}+2(1+\lambda)\left(\|g_{k}-\tilde{g}_{k}\|^{2}+\|\tilde{g}_{k-1}-g_{k-1}\|^{2}\right),\end{aligned}$$

where we used the Lipschitz continuity of the operator. Therefore, by taking the expectation, we obtain

$$\mathbb{E}\big[\|\tilde{g}_{k}-\tilde{g}_{k-1}\|^{2}\big]\leq\left(1+\frac{1}{\lambda}\right)L^{2}\mathbb{E}\big[\|u_{k}-u_{k-1}\|^{2}\big]+4(1+\lambda)\tilde{\sigma}^{2}.\tag{23}$$

Plugging (23) into (22) we deduce

$$\begin{aligned}\mathbb{E}\big[\|u_{k+1}+a\tilde{g}_{k}-u^{*}\|^{2}\big]+a\gamma(a-\rho)\mathbb{E}\big[\|g_{k}\|^{2}\big]\leq\mathbb{E}\big[\|u_{k}+a\tilde{g}_{k-1}-u^{*}\|^{2}\big]&+4(1+\lambda)\max\{0,L^{-1}a\gamma^{-1}-a^{2}(1+\gamma^{-1})\}\tilde{\sigma}^{2}\\&+\max\big\{0,(1+\lambda^{-1})(La\gamma^{-1}-a^{2}L^{2}(1+\gamma^{-1}))\big\}\Delta_{k}-(\gamma^{-1}-1-aL\gamma^{-1})\Delta_{k+1}.\end{aligned}$$

Theorem 3.3.
*Let* $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ *be* $L$*-Lipschitz satisfying Assumption 1 with* $\frac{1}{L}>\rho$ *and let* $(u_{k})_{k\geq0}$ *be the sequence of iterates generated by stochastic OGDA+, with* $a$ *and* $\gamma$ *satisfying* $\rho<a<\frac{1-\gamma}{1+\gamma}\frac{1}{L}$*. Then, to visit an* $\varepsilon$*-stationary point such that* $\min_{i=0,\dots,k-1}\mathbb{E}\big[\|F(u_{i})\|^{2}\big]\leq\varepsilon$*, we require*

$$\mathcal{O}\left(\frac{1}{\varepsilon}\frac{1}{a\gamma(a-\rho)}\mathbb{E}\big[\|u_{0}+a\tilde{g}_{0}-u^{*}\|^{2}\big]\max\left\{1,\frac{4\sigma^{2}}{aL}\frac{1}{\varepsilon}\right\}\right)$$

*calls to the stochastic oracle, with large batch sizes of order* $\mathcal{O}(\varepsilon^{-1})$.

Proof. Let us first note that the condition $\alpha\leq\frac{1-\gamma}{1+\gamma}$, with $\alpha:=aL$, implies that the positive part in $\tilde{C}_{1}$ is redundant, as the second term in the maximum satisfies

$$\alpha\gamma^{-1}-\alpha^{2}(1+\gamma^{-1})\geq\alpha\left(\gamma^{-1}-\frac{1-\gamma}{1+\gamma}\frac{1+\gamma}{\gamma}\right)=\alpha\big(\gamma^{-1}-(\gamma^{-1}-1)\big)=\alpha\geq0$$

and is therefore nonnegative. Next we remark that the statement

$$\tilde{C}_{2}\geq(1+\lambda^{-1})\tilde{C}_{1}\tag{24}$$

for some $\lambda>0$ is equivalent to $\tilde{C}_{2}>\tilde{C}_{1}$ (with strict inequality). This is, however, precisely the condition of Theorem A.1 but with strict inequality, which is the reason why we require the strict inequality $\alpha<\frac{1-\gamma}{1+\gamma}$ to ensure (24). So we can iteratively apply the statement of Lemma A.2 to deduce

$$a\gamma(a-\rho)\sum_{i=0}^{k-1}\mathbb{E}\big[\|g_{i}\|^{2}\big]\leq\mathbb{E}\big[\|u_{0}+a\tilde{g}_{0}-u^{*}\|^{2}\big]+4(1+\lambda^{-1})\sum_{i=0}^{k-1}\tilde{\sigma}_{i}^{2}.$$

We still need to estimate $\lambda^{-1}$ to find the right batch size in order to decrease the last summand to the desired accuracy.
By considering (24) we get

$$(1+\lambda^{-1})\leq\frac{\tilde{C}_{2}}{\tilde{C}_{1}}=\frac{\gamma^{-1}-1-\alpha\gamma^{-1}}{\alpha(\gamma^{-1}-\alpha-\alpha\gamma^{-1})}\stackrel{\alpha\leq1}{\leq}\frac{\gamma^{-1}-1-\alpha\gamma^{-1}}{\alpha(\gamma^{-1}-1-\alpha\gamma^{-1})}=\frac{1}{\alpha}.$$

By taking $B:=\max\{1,\frac{4\sigma^{2}}{\alpha\varepsilon}\}$ independent samples per iteration, we get the variance

$$\tilde{\sigma}^{2}\leq\frac{\alpha\varepsilon}{4},$$

and thus arrive at a total oracle call complexity as claimed.

## A.4 EG+ with Adaptive Step Size

Lemma A.3. *Let* $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ *be* $L$*-Lipschitz and satisfy Assumption 1. Then, for the iterates generated by Algorithm 3, it holds that*

$$\gamma^{-1}\|\bar{u}_{k+1}-u^{*}\|^{2}\leq\gamma^{-1}\|\bar{u}_{k}-u^{*}\|^{2}-a_{k}(a_{k}\gamma(1-\gamma)-\rho)\|F(u_{k})\|^{2}-\Big(1-\frac{\tau a_{k}}{a_{k+1}}\Big)(\|u_{k}-\bar{u}_{k}\|^{2}+\|u_{k}-\bar{u}_{k+1}\|^{2}).\tag{25}$$

Proof of Lemma A.3.
We start by using Assumption 1 and splitting the resulting inner product into three terms:

$$-\frac{a_{k}\rho}{2}\|F(u_{k})\|^{2}\leq\langle a_{k}F(u_{k}),u_{k}-u^{*}\rangle=\langle a_{k}F(u_{k}),\bar{u}_{k+1}-u^{*}\rangle+\langle a_{k}F(\bar{u}_{k}),u_{k}-\bar{u}_{k+1}\rangle+a_{k}\langle F(u_{k})-F(\bar{u}_{k}),u_{k}-\bar{u}_{k+1}\rangle.\tag{26}$$

By expressing $F(u_{k})$ via the definition of $\bar{u}_{k+1}$ and the three point identity we obtain

$$\langle a_{k}F(u_{k}),\bar{u}_{k+1}-u^{*}\rangle=\gamma^{-1}\langle\bar{u}_{k}-\bar{u}_{k+1},\bar{u}_{k+1}-u^{*}\rangle=\frac{\gamma^{-1}}{2}(\|\bar{u}_{k}-u^{*}\|^{2}-\|\bar{u}_{k+1}-\bar{u}_{k}\|^{2}-\|\bar{u}_{k+1}-u^{*}\|^{2}).\tag{27}$$

Similarly, by expressing $F(\bar{u}_{k})$ via the definition of $u_{k}$ we deduce

$$\langle a_{k}F(\bar{u}_{k}),u_{k}-\bar{u}_{k+1}\rangle=\langle\bar{u}_{k}-u_{k},u_{k}-\bar{u}_{k+1}\rangle=\frac{1}{2}\left(\|\bar{u}_{k+1}-\bar{u}_{k}\|^{2}-\|u_{k}-\bar{u}_{k}\|^{2}-\|u_{k}-\bar{u}_{k+1}\|^{2}\right).\tag{28}$$

Lastly, via the Cauchy-Schwarz inequality,

$$a_{k}\langle F(u_{k})-F(\bar{u}_{k}),u_{k}-\bar{u}_{k+1}\rangle\leq a_{k}\|F(u_{k})-F(\bar{u}_{k})\|\|u_{k}-\bar{u}_{k+1}\|\stackrel{(4)}{\leq}\frac{a_{k}\tau}{a_{k+1}}\|u_{k}-\bar{u}_{k}\|\|u_{k}-\bar{u}_{k+1}\|\leq\frac{a_{k}\tau}{2a_{k+1}}(\|u_{k}-\bar{u}_{k}\|^{2}+\|u_{k}-\bar{u}_{k+1}\|^{2}).\tag{29}$$

Combining (26) to (29) and multiplying by 2 we get

$$-a_{k}\rho\|F(u_{k})\|^{2}\leq\gamma^{-1}\left(\|\bar{u}_{k}-u^{*}\|^{2}-\|\bar{u}_{k+1}-u^{*}\|^{2}\right)-(\gamma^{-1}-1)\|\bar{u}_{k+1}-\bar{u}_{k}\|^{2}-\left(1-\frac{\tau a_{k}}{a_{k+1}}\right)\left(\|u_{k}-\bar{u}_{k}\|^{2}+\|u_{k}-\bar{u}_{k+1}\|^{2}\right).$$
Using the observation that $\gamma a_{k}F(u_{k})=\bar{u}_{k}-\bar{u}_{k+1}$ gives

$$\gamma^{-1}\|\bar{u}_{k+1}-u^{*}\|^{2}+a_{k}(a_{k}\gamma(1-\gamma)-\rho)\|F(u_{k})\|^{2}\leq\gamma^{-1}\|\bar{u}_{k}-u^{*}\|^{2}-\Big(1-\frac{\tau a_{k}}{a_{k+1}}\Big)(\|u_{k}-\bar{u}_{k}\|^{2}+\|u_{k}-\bar{u}_{k+1}\|^{2}).$$

We see that the largest possible range for $\rho$ is achieved for $\gamma=\frac{1}{2}$.

Theorem 4.1. *Let* $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ *be* $L$*-Lipschitz and satisfy Assumption 1, where* $u^{*}$ *denotes any weak Minty solution, with* $a_{\infty}>2\rho$*, and let* $(u_{k})_{k\geq0}$ *be the iterates generated by Algorithm 3 with* $\gamma=\frac{1}{2}$ *and* $\tau\in(0,1)$*. Then, there exists a* $k_{0}\in\mathbb{N}$*, such that*

$$\min_{i=k_{0},\dots,k}\|F(u_{i})\|^{2}\leq\frac{1}{k-k_{0}}\frac{L}{\tau(\frac{a_{\infty}}{2}-\rho)}\|\bar{u}_{k_{0}}-u^{*}\|^{2}.$$

Proof. As $a_{k}$ converges to $a_{\infty}$, the quotient $a_{k}/a_{k+1}$ converges to 1. In particular, there exists an index $k_{0}\in\mathbb{N}$ such that $a_{k}/a_{k+1}\leq\frac{1}{\tau}$ for all $k\geq k_{0}$, because $\tau<1$. We can therefore drop the last term in (25) and sum up to obtain

$$\|\bar{u}_{k+1}-u^{*}\|^{2}+\sum_{i=k_{0}}^{k}a_{i}\left(\frac{a_{i}}{2}-\rho\right)\|F(u_{i})\|^{2}\leq\|\bar{u}_{k_{0}}-u^{*}\|^{2}.$$

The desired statement follows by observing that $a_{i}\geq a_{\infty}\geq\tau/L$.

Note that the above proof for the adaptive version of EG+ provides an improvement in the dependence between $\rho$ and $L$ over the analysis of Diakonikolas et al. (2021), even in the constant step size regime.

## B Additional Statements And Proofs

For the sake of completeness we provide a proof of the elementary fact that Minty solutions are a stronger requirement than Stampacchia solutions.

Lemma B.1. *If* $F$ *is continuous, then every Minty solution is also a Stampacchia solution.*

Proof. Let $w^{*}$ be a solution to the Minty VI and $z=\alpha w^{*}+(1-\alpha)u$ for an arbitrary $u\in\mathbb{R}^{m}$ and $\alpha\in(0,1)$. Then

$$\langle F(\alpha w^{*}+(1-\alpha)u),(1-\alpha)(u-w^{*})\rangle\geq0.$$
This implies that

$$(1-\alpha)\langle F(\alpha w^{*}+(1-\alpha)u),u-w^{*}\rangle\geq0.$$

By dividing by $(1-\alpha)$ and then taking the limit $\alpha\to1$ we obtain that $w^{*}$ is a solution of the Stampacchia formulation.

## C Numerics

For all experiments, if not specified otherwise, we used for OGDA+ and the adaptive version of EG+ the parameter $\gamma=\frac{1}{2}$. For the step size choice of Algorithm 3 we use $\tau=0.99$. For the CurvatureEG+ method of Pethick et al. (2022) (with their notation) we use $\delta_{k}$ equal to $-\rho/2$, where $\rho$ is the weak Minty parameter, if it is known and less than $1/L$, and $-0.499$ times the step size otherwise. Furthermore, we set the parameters of the linesearch to $\tau=0.9$ and $\nu=0.99$.

## C.1 Ratio Game

The data used to generate the instance displayed in Figure 3 was suggested in Daskalakis et al. (2020) and is given by

$$R=\begin{pmatrix}-0.6&-0.3\\0.6&-0.3\end{pmatrix}\quad\text{and}\quad S=\begin{pmatrix}0.9&0.5\\0.8&0.4\end{pmatrix}.$$

This results in the following objective function for the min-max problem

$$V(x,y)=\frac{-1.2xy+0.9y-0.3}{0.4y+0.1x+0.4},$$

which gives rise to the optimality conditions

$$-0.12x^{2}-0.39x+0.48=0\quad\text{and}\quad-0.48y^{2}-0.57y+0.03=0$$

with the solution $(x^{*},y^{*})=(0.951941,0.050485)$. For the experiments we used an estimated Lipschitz constant $L=\frac{5}{3}$.

In Figure 5 we can see the reason for the slow convergence behavior of the CurvatureEG+ method observed in Figure 3. Not only is the step size computed by the backtracking procedure smaller than the one chosen by adaptive EG+, see (4), but the second step (update step) also uses an even smaller fraction of the already smaller extrapolation step size.

## C.1.1 Polar Game

For Figure 2 we used the so-called Polar Game introduced in Pethick et al. (2022), which is given by
$$F(x,y)=(\psi(x,y)-y,\psi(y,x)+x),\tag{30}$$

where $\psi(x,y)=\frac{1}{16}ax(-1+x^{2}+y^{2})(-9+16x^{2}+16y^{2})$ and the parameter $a>0$. In Figure 2 we used $a=\frac{1}{3}$.

![20_image_0.png](20_image_0.png)

Figure 5: Ratio game. An illustration of the step sizes used by different methods, as well as the ratio of extrapolation to update step. CurvatureEG+ chooses its own ratio adaptively and does so for this example in a seemingly overly conservative way, resulting in the slow convergence observed in Figure 3.
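As a quick sanity check of the ratio-game instance of Appendix C.1, the short pure-Python script below verifies that matrices $R$ and $S$ with entries read off from the rational objective $V(x,y)$ stated in the text reproduce that objective exactly, and that the reported solution satisfies both first-order optimality conditions (the helper names `bilinear` and `V` are ours, not from the paper):

```python
# Ratio-game instance from Appendix C.1; the matrix entries below are
# reconstructed to match the rational objective V(x, y) given in the text.
R = [[-0.6, -0.3],
     [ 0.6, -0.3]]
S = [[0.9, 0.5],
     [0.8, 0.4]]

def bilinear(M, x, y):
    """Compute p_x^T M p_y for mixed strategies p_x = (x, 1-x), p_y = (y, 1-y)."""
    px, py = (x, 1.0 - x), (y, 1.0 - y)
    return sum(px[i] * M[i][j] * py[j] for i in range(2) for j in range(2))

def V(x, y):
    """Value of the ratio game, V(x, y) = (p_x^T R p_y) / (p_x^T S p_y)."""
    return bilinear(R, x, y) / bilinear(S, x, y)

# The bilinear forms reproduce the stated objective for arbitrary strategies.
x, y = 0.3, 0.7
assert abs(V(x, y) - (-1.2*x*y + 0.9*y - 0.3) / (0.4*y + 0.1*x + 0.4)) < 1e-12

# The reported solution satisfies both stated optimality conditions.
xs, ys = 0.951941, 0.050485
assert abs(-0.12 * xs**2 - 0.39 * xs + 0.48) < 1e-4
assert abs(-0.48 * ys**2 - 0.57 * ys + 0.03) < 1e-4
```

The two quadratic optimality conditions are exactly the numerators of the partial derivatives of $V$ set to zero, which is why checking them at $(x^{*},y^{*})$ certifies the saddle point of the ratio game.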
Review 1:

Summary:
This paper studies the problem of finding a zero of an operator that is negatively cocoercive wrt a solution $u^*$. In other words, there exists a weak Minty solution $u^*$ of the problem. The main contribution is to develop a modification of OGDA called OGDA+ which provably converges at the rate $1/k$ in terms of the squared operator norm. In comparison, Diakonikolas et al 2021 recently developed a modification of the Extra Gradient algorithm called EG+ that matches the rate $1/k$. It was not known if such a modification could be applied to OGDA, which is a concurrent algorithm to EG. The authors also develop an adaptive version of EG+ which does not require knowledge of the Lipschitz constant. However, the established convergence rate holds only after an unknown number $k_0$ of steps.

Strengths and Weaknesses:
The paper is very well written, and the presentation is very clear. The introduction and the preliminaries clearly explain the problem, the concept of weak Minty and its connection to monotonicity, and the contribution in light of previous works (which I summarized in the previous paragraph). I appreciated reading this part and I believe that it could even be the beginning of a review paper on the topic. The theoretical claims are cast in theorems (summarized above) proven in the appendix. The only issue I see is that the convergence rate of the adaptive algorithm holds after an unknown number of iterations $k_0$. The proofs are rather standard optimization proofs, but clearly presented.

Finally, some honest simulations show the advantage of each proposed algorithm: the advantage of the adaptive method over the others on a toy problem exhibiting a repelling cycle around the solution, and a comparison of EG+ and OGDA+ (the proposed algorithm) where, on some problems OGDA+ is better, and on a difficult instance of a min-max problem from (Daskalakis et al. (2020); Diakonikolas et al. (2021)) EG+ is slightly better.
Requested Changes:
Paragraph after Th 3.1. Given a step size $a$, what is the optimal value of $\gamma$?

As I said, the preliminaries are well written but I have a small question: monotonicity implies star-monotonicity; however, negative cocoercivity does not imply negative star cocoercivity (because $F(u^*)$ might not be equal to zero), am I correct? So, negative star cocoercivity is not a generalization of negative cocoercivity. In general, it is not clear to me that the operator is zero at a weak Minty solution, i.e. $A(u_k) \to A(u^*) = 0$ (which should be true since you look at the squared operator norm).

First sentence after Th 3.3. What is the connection between large batch size and large step size? One call to the stochastic oracle is computing $g_k$ or $F(u,\xi)$?

**Typos**
- "we have to decrease $\gamma$ as evident from (10)". From (3) I guess?
- "more then ten times"
- "problem (30) displayed in Figure 1."

In general, check the numbering of the equations.

Broader Impact Concerns: NA

==================================================

Review 2:

Summary:
This paper considers the optimistic gradient descent ascent (OGDA) for a class of structured variational inequality problems and its applications to the min-max problem. The major difference lies in the weak Minty assumption. The authors present the convergence of the proposed algorithm in both deterministic and stochastic cases. Under the stronger assumption that the operator is even monotone, the authors show that the algorithm could use large step sizes.

Strengths and Weaknesses:
Strengths: This paper brings some new insights for solving variational inequalities. The authors prove the complete convergence and theoretical advantage of their proposed algorithms.

However, some weaknesses prevent me from voting acceptance of this paper.
1. The weak Minty solution is a direct weak version of cohypomonotonicity [Bauschke et al. (2020); Combettes & Pennanen (2004)], which greatly weakens the novelty.
2.
The authors fail to explain the necessity of studying the weak Minty solution problem. What kinds of machine learning problems enjoy such a property? Why are we interested in this property?

Requested Changes:
1. The authors need to explain the necessity of studying the weak Minty solution problem. More specifically, they need to provide sufficient conditions to check the weak Minty property. What kinds of machine learning problems enjoy such a property?
2. They need to explain the difference between cohypomonotonicity and the weak Minty property, that is, at least provide some examples that are weak Minty but not cohypomonotonic.

Broader Impact Concerns: I do not find any ethical concern.

==================================================

Review 3:

Summary:
The work focuses on weak Minty solutions of Lipschitz operators. The authors consider three methods: Optimistic Gradient Descent Ascent (OGDA), also known as Popov's method, its stochastic version, and the adaptive version of the celebrated Extragradient method (EG). The convergence is proven with respect to the averaged squared norm of the operator. The authors also provide several numerical experiments showing that OGDA can converge for some weak Minty problems when EG fails and illustrate the advantage of using the new version of adaptive EG.

Strengths and Weaknesses:
## Strengths
1. All the results are correct and new. I have checked the proofs in detail and found only minor inaccuracies. Overall, the paper is well-written as well.
2. For OGDA+ the authors derive $O(1/k)$ convergence rate for any $\rho < 1/L$ (here $\rho$ is the weak Minty constant and $L$ is the Lipschitz constant). This matches the best-known results for EG obtained by Pethick et al. (2022). It is worth mentioning that the proposed analysis differs from the existing ones known for OGDA and EG.
3.
The authors propose a version of EG+ with adaptive step size and derive $O(1/k)$ convergence rate for the proposed algorithm for finding weak Minty solutions. The proposed method shows good performance in the experiments as well.

## Weaknesses
In my opinion, weaknesses are minor and are outweighed by strengths.
1. (minor) The stochastic version of OGDA is analyzed only in the case of large $O(\varepsilon^{-2})$ batch sizes. Although it is a known problem of deriving $O(1/k)$ convergence rates in terms of the (expected) squared operator norm without using large batch sizes for stochastic EG and OGDA (even in the monotone case), it would be good for the paper to add this discussion somewhere in the text, e.g., right after Theorem 3.3.
2. (minor) The rate for Theorem 4.1 is derived only for $k > k_0$, where $k_0$ is unknown in general. Is it possible to estimate $k_0$ somehow, at least under some additional assumption? It would be interesting to have some upper bound on $k_0$ in some special cases (e.g., for monotone problems).

Requested Changes:
## Requests
1. If I am not mistaken, the analysis relies on the fact that the solution is unique. I suggest adding this clarification to the paper. Can the analysis be generalized to cover the case when there are multiple solutions (e.g., when (1) holds only for the closest solution to the considered point $u$)?
2. Page 2, "it is necessary that step size is larger than a multiple of $\rho$": could the authors provide references supporting this claim?
3. Page 2, the same paragraph: the result of Pethick et al. (2022) holds for any $\rho < 1/L$.
4. After inequality (2), "they appear as gradients of smooth convex functions": I think it is important to mention that cocoercive operators are not necessarily gradients of smooth convex functions (e.g., rotation operators with acute angles). The current formulation can confuse a reader.
5.
Section 3.2 requires some formalization: I suggest adding the pseudocode of the method and explicitly showing that it uses mini-batches.
6. Could the authors provide a complete derivation of the first formula in the proof of Theorem 3.2?
7. At the end of the proof of Theorem 3.2, the authors refer to the lemma of Opial. Can the authors provide a reference and complete formulation in the appendix?
8. Can the authors elaborate on how the last inequality from the proof of Lemma A.2 implies the statement of the lemma? For example, I believe one should mention that $a \leq L\gamma$ to get this result and also write $\max(0, \ldots)$ in front of $\sigma_{k-1}^2$ in the last inequality from the proof (similarly to the factor in front of $\Delta_k$).
9. Proof of Theorem 3.3. I request the authors to provide a complete derivation and formalize the proof a bit. In particular, the derivation of $\tilde{C}_2 \geq \tilde{C}_1$ is missing, instead of $||u_1 - u_0||^2$ one should have $\mathbb{E}||u_1 - u_0||^2$, and in (22) it should be somehow reflected through the notation that summands denote different samples.

## Suggestions, comments and questions
1. How is Theorem 3.2 related to the existing results for OGDA?
2. Is there a way to justify the necessity of large batch sizes in Theorem 3.3?
3. Why are $u_{k-1}$ and $\bar{u}_{k-1}$ used in line 2 of Algorithm 2 to estimate the Lipschitz constant? From my understanding, it mostly comes from the analysis, but it would be good to understand whether there is another reason explaining why we should use this pair of points, e.g., why $\bar{u}_k, u_{k-1}$ or $\bar{u}_k, \bar{u}_{k-1}$ are not used instead? Some numerical comparisons of different modifications can also be very interesting to see.
4. I suggest repeating the formulations of the theorems in the appendix right before their proofs for the readers' convenience (e.g., in Appendix A.2).

## Minor comments
1. The last word of the abstract ("paragraph") should be removed.
2.
Theorem 3.1, last sentence: "$\gamma$ small enough" $\to$ "small enough $\gamma$".
3. Theorem A.1, "Let ... and $(u_k)_{k\geq 0}$ {\color{red} be} the sequence": here "{\color{red} be}" is missing.
4. The authors missed periods after some formulas, e.g., (15), formula between (19) and (20), (20).
5. Lemma A.2, the second term in the left-hand side: factor $\gamma$ is missing. Moreover, $L_k$, $\sigma_k$ and $\tilde g_k$ are undefined.
6. Formula after (19): expectation is missing in the right-hand side.
7. The last step of the proof of Lemma A.3: it is better to write a complete derivation (though it is correct and can be verified). Also, it is better to explain why the choice of $\gamma = 1/2$ is the best and in what sense. I guess this is because it makes the factor in front of $||F(u_k)||^2$ as large as possible in the proof. However, there are other terms dependent on $\gamma$ as well.

Broader Impact Concerns: I have no broader impact concerns.

==================================================
Review 4:

Summary: This paper studies unconstrained min-max problems which have at least one weak-Minty solution. For such problems the authors study an extension of the optimistic gradient descent-ascent algorithm and show that it converges in a slightly larger parameter regime than the extragradient method. The authors also show the same convergence rate as for the latter method. Some other results were also introduced, but I think they were less important.

Strengths and Weaknesses: In my opinion, it is far-fetched to motivate such problems by GANs or adversarial networks. At the same time, there are few other classes of operators that have clear potential. For such classes of operators, the paper does provide a meaningful result. However, the paper writing has to be improved quite significantly.

Requested Changes: First of all, I do not understand why the paper's lines are not numbered. I only hope it was not the authors' fault.
More seriously, here is the list of questions:
1. Abstract. How can we oppose "recently introduce" and "able to capture"? Also "paragraph" in the end?
2. First paragraph, page 1: "an assumption", not "a".
3. Paragraph "About the weak Minty...", page 2. Any positive number is a multiple of $\rho$. It is not entirely clear to what $\frac{1}{2L}$ refers: to $\rho$ or to the bound on a step size?
4. page 2: What is "even monotone"? Do we also have "odd monotone"? What does it mean "significantly improved"? From 0.1 to 1000?
5. "Operator norm for monotone problems", page 3. There is no need to mention all non-interesting intermediate results. The goal is to make a paper more readable and not to please all potential reviewers.
6. In general, I feel that the number of citations is unnecessarily large. A lot of results for which the authors refer are quite trivial. For example, in page 6, the last two display equations are obvious, there is no need to mention another two papers that used them. There are plenty of such cases.
7. Why do we even need to introduce what VI is? We work exclusively with the equation $F(x) = 0$. Why do we need strong monotonicity or cocoercivity? Do we use them in the paper?
8. page 4 (bottom), "solution operator" is not defined.
9. page 5, why subgradients? We define $F$ to be a single-valued operator.
10. page 5, paragraph after Eq. 2. What is the point of this discussion? What has it to do with the paper's topic?
11. "very ill-behaved"?
12. smooth and $L$-smooth are different classes of functions.
13. Theorem 3.1: Why not optimize over $\gamma$? This will give a clearer dependence on $\rho$ and $L$.
14. "Effective step size"?
15. Theorem 3.2 is confusing. Isn't $\frac{1}{2L}$ the upper bound for step sizes when $\gamma = 1$? If I am right, I don't think it is that nice to formulate the theorem knowing in advance that with a bit more effort one can improve it.
16. I would not call Algorithm 2 adaptive.
This is how a term might lose its meaning. In this way, the subgradient method is adaptive or any method with linesearch is adaptive. Adaptivity and not requiring to know $L$ are quite different things. I don't think that a method with decreasing steps should be called adaptive. Also, its advantage is slightly overstated. Its theoretical convergence requires $a_{\infty}> 2\rho$, which in turn requires us to know $L$.

Broader Impact Concerns: ...............

==================================================
Metareview:

Recommendation: Accept with minor revision

Comment: Dear authors,

The reviewers agree that this is a good paper the community will benefit from. The paper tackles an interesting problem and is very well written. I agree with the recommendation of the reviewers, and recommend acceptance. Please apply the most reasonable changes requested by the reviewers for the camera-ready version of the paper. Most of them are very minor.

Congratulations and thanks for submitting your work to TMLR!

Best regards,
Action Editor TMLR

==================================================
# Controlling Confusion Via Generalisation Bounds

Anonymous authors

Paper under double-blind review

## Abstract

We establish new generalisation bounds for multiclass classification by abstracting to a more general setting of discretised error types. Extending the PAC-Bayes theory, we are hence able to provide fine-grained bounds on performance for multiclass classification, as well as applications to other learning problems including discretisation of regression losses. Tractable training objectives are derived from the bounds. The bounds are uniform over all weightings of the discretised error types and thus can be used to bound weightings not foreseen at training, including the full confusion matrix in the multiclass classification case.

## 1 Introduction

Generalisation bounds are a core component of the theoretical understanding of machine learning algorithms. For over two decades now, the PAC-Bayesian theory has been at the core of studies on generalisation abilities of machine learning algorithms. PAC-Bayes originates in the seminal work of McAllester (1998; 1999) and was further developed by Catoni (2003; 2004; 2007), among other authors—we refer to the recent surveys Guedj (2019) and Alquier (2021) for an introduction to the field. The outstanding empirical successes of deep neural networks in the past decade call for better theoretical understanding of deep learning, and PAC-Bayes emerged as one of the few frameworks allowing the derivation of meaningful (and non-vacuous) generalisation bounds for neural networks: the pioneering work of Dziugaite & Roy (2017) has been followed by a number of contributions, including Neyshabur et al. (2018), Zhou et al. (2019), Letarte et al. (2019), Pérez-Ortiz et al. (2021); Perez-Ortiz et al. (2021) and Biggs & Guedj (2021; 2022a;b), to name but a few.
Much of the PAC-Bayes literature focuses on the case of binary classification, or of multiclass classification where one only distinguishes whether each classification is correct or incorrect. This is in stark contrast to the complexity of contemporary real-world learning problems. This work aims to bridge this gap via generalisation bounds that provide information rich measures of performance at test time by controlling the probabilities of errors of any finite number of types, bounding combinations of these probabilities uniformly over all weightings. Previous results. We believe our framework of discretised error types to be novel. In the particular case of multiclass classification, little is known from a theoretical perspective and, to the best of our knowledge, only a handful of relevant strategies or generalisation bounds can be compared to the present paper. The closest is the work of Morvant et al. (2012) on a PAC-Bayes generalisation bound on the operator norm of the confusion matrix, to train a Gibbs classifier. We focus on a different performance metric, in the broader setting of discretised error types. Koço & Capponi (2013) suggest to minimise the confusion matrix norm with a focus on the imbalance between classes; their treatment is not done through PAC-Bayes. Laviolette et al. (2017) extend the celebrated C-bound in PAC-Bayes to weighted majority votes of classifiers, to perform multiclass classification. Benabbou & Lang (2017) present a streamlined version of some of the results from Morvant et al. (2012) in the case where some examples are voluntarily not classified (*e.g.*, in the case of too large uncertainty). More recently, Feofanov et al. (2019) derive bounds for a majority vote classifier where the confusion matrix serves as an error indicator: they conduct a study of the Bayes classifier. From binary to multiclass classification. A number of PAC-Bayesian bounds have been unified by a single general bound, found in Bégin et al. (2016). 
Stated as Theorem 1 below, it applies to binary classification. We use it as a basis to prove our Theorem 3, a more general bound that can be applied to, amongst other things, multiclass classification and discretised regression. While the proof of Theorem 3 follows similar lines to that given in Bégin et al. (2016), our generalisation to 'soft' hypotheses incurring any finite number of error types requires a non-trivial extension of a result found in Maurer (2004). This extension (Lemma 5), along with its corollary (Corollary 6), may be of independent interest. The generalisation bound in Maurer (2004), stated below as Corollary 2, is shown in Bégin et al. (2016) to be a corollary of their bound. In a similar manner, we derive Corollary 7 from Theorem 3. Obtaining this corollary is significantly more involved than the analogous derivation in Bégin et al. (2016) or the original proof in Maurer (2004), requiring a number of technical results found in Appendix B. Briefly, the results in Bégin et al. (2016) and Maurer (2004) consider an arbitrary input set $\mathcal{X}$, output set $\mathcal{Y} = \{-1, 1\}$, hypothesis space $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ and i.i.d. sample $S \in (\mathcal{X} \times \mathcal{Y})^m$. They then establish high-probability bounds on the discrepancy between the risk (probability of error on a new datapoint) of any stochastic classifier $Q$ (namely, a distribution on $\mathcal{H}$) and its empirical counterpart (the fraction of the sample $Q$ misclassifies). The bounds hold uniformly over all $Q$ and contain a complexity term involving the Kullback-Leibler (KL) divergence between $Q$ and a reference distribution $P$ on $\mathcal{H}$ (often referred to as a prior by analogy with Bayesian inference—see the discussion in Guedj, 2019). There are two ways in which the results in Bégin et al. (2016) and Maurer (2004) can be described as binary. First, as $\mathcal{Y}$ contains two elements, this is obviously an instance of binary classification.
But a more interesting and subtle way to look at this is that only two cases are distinguished—correct classification and incorrect classification. Specifically, since the two different directions in which misclassification can be made are counted together, the bound gives no information on which direction is more likely. More generally, the aforementioned bounds can be applied in the context of multiclass classification provided one maintains the second binary characteristic by only distinguishing correct and incorrect classifications rather than considering the entire confusion matrix. However, note that these bounds will not give information on the relative likelihood of the different errors. In contrast, our new results can consider the entire confusion matrix, bounding how far the true (read "expected over the data-generating distribution") confusion matrix differs from the empirical one, according to some metric. In fact, our results extend to the case of an arbitrary label set $\mathcal{Y}$, provided the number of different errors one distinguishes is finite. Formally, we let $\bigcup_{j=1}^{M} E_j$ be a user-specified disjoint partition of $\mathcal{Y}^2$ into a finite number of $M$ *error types*, where we say that a hypothesis $h \in \mathcal{H}$ makes an error of type $j$ on datapoint $(x, y)$ if $(h(x), y) \in E_j$ (by convention, every pair $(\hat{y}, y) \in \mathcal{Y}^2$ is interpreted as a predicted value $\hat{y}$ followed by a true value $y$, in that order). It should be stressed that some $E_j$ need not correspond to mislabellings—indeed, some of the $E_j$ may distinguish different correct labellings. We then count up the number of errors of each type that a hypothesis makes on a sample, and bound how far this empirical distribution of errors is from the expected distribution under the data-generating distribution (Theorem 3).
Thus, in our generalisation, the (scalar) risk and empirical risk ($R_D(Q)$ and $R_S(Q)$, defined in the next section) are replaced by $M$-dimensional vectors ($\mathbf{R}_D(Q)$ and $\mathbf{R}_S(Q)$), and our discrepancy measure $d$ is a divergence between discrete distributions on $M$ elements. Our generalisation therefore allows us to bound how far the true distribution of errors can be from the observed distribution of errors. If we then associate a loss value $\ell_j \in [0, \infty)$ to each $E_j$ we can derive a bound on the *total risk*, defined as the sum of the true error probabilities weighted by the loss values. In fact, the total risk is bounded with high probability uniformly over all such weightings. The loss values need not be distinct; we may wish to understand the distribution of error types even across error types that incur the same loss. For example, in the case of binary classification with $\mathcal{Y} = \{-1, 1\}$, we can take the usual partition into $E_1 = \{(-1, -1), (1, 1)\}$ and $E_2 = \{(-1, 1), (1, -1)\}$ and loss values $\ell_1 = 0$, $\ell_2 = 1$, or the fine-grained partition $\mathcal{Y}^2 = \{(-1, -1)\} \cup \{(1, 1)\} \cup \{(-1, 1)\} \cup \{(1, -1)\}$ and the loss values $\ell_1 = \ell_2 = 0$, $\ell_3 = 1$, $\ell_4 = 2$. More generally, for multiclass classification with $N$ classes and $\mathcal{Y} = [N]$, one may take the usual coarse partition into $E_1 = \{(\hat{y}, y) \in \mathcal{Y}^2 : \hat{y} = y\}$ and $E_2 = \{(\hat{y}, y) \in \mathcal{Y}^2 : \hat{y} \neq y\}$ (with $\ell_1 = 0$ and $\ell_2 = 1$), or the fully refined partition into $E_{i,j} = \{(i, j)\}$ for $i, j \in [N]$ (with correspondingly greater choice of the associated loss values), or something in-between. Note that we still refer to $E_j$ as an "error type" even if it contains elements that correspond to correct classification, namely if there exists $y \in \mathcal{Y}$ such that $(y, y) \in E_j$. As we will see later, a more fine-grained partition will allow more error types to be distinguished and bounded, at the expense of a looser bound.
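To make the discretised-error-type setup concrete, here is a minimal Python sketch (the three-class toy data and all names are our own, not from the paper) that builds the fully refined partition $E_{i,j} = \{(i, j)\}$, computes the empirical risk vector of a hypothesis from its predictions, and evaluates the total risk for some loss weighting:

```python
from collections import Counter

# Hypothetical 3-class example (classes 0, 1, 2) with the fully refined
# partition E_{i,j} = {(i, j)}: one error type per confusion-matrix cell.
# Pairs follow the convention (predicted label, true label).
error_types = [(i, j) for i in range(3) for j in range(3)]

def empirical_risk_vector(predictions, labels):
    """Fraction of the sample falling in each error type E_{i,j}."""
    m = len(labels)
    counts = Counter(zip(predictions, labels))
    return {e: counts.get(e, 0) / m for e in error_types}

# Toy sample: a hypothesis h predicts, we compare against true labels.
preds  = [0, 1, 2, 1, 0, 2]
labels = [0, 1, 1, 1, 2, 2]
risk = empirical_risk_vector(preds, labels)

# Total risk for a loss weighting; here the 0/1 loss on the diagonal.
loss = {e: 0.0 if e[0] == e[1] else 1.0 for e in error_types}
total = sum(loss[e] * risk[e] for e in error_types)
```

Because the bounds are uniform over all weightings, the `loss` dictionary could be swapped for a different weighting after training and the same high-probability guarantee would still apply.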
As a final example, for regression with $\mathcal{Y} = \mathbb{R}$, we may fix $M$ strictly increasing thresholds $0 = \lambda_1 < \lambda_2 < \cdots < \lambda_M$ and partition $\mathcal{Y}^2$ into $E_j = \{(y_1, y_2) \in \mathcal{Y}^2 : \lambda_j \leq |y_1 - y_2| < \lambda_{j+1}\}$ for $j \in [M-1]$, and $E_M = \{(y_1, y_2) \in \mathcal{Y}^2 : |y_1 - y_2| \geq \lambda_M\}$.

Outline. We set our notation in Section 2. In Section 3 we state and prove generalisation bounds in the setting of discretised error types: this significantly expands the previously known results from Bégin et al. (2016) by allowing for generic output sets $\mathcal{Y}$. Our main results are Theorem 3 and Corollary 7. To make our findings profitable to the broader machine learning community we then discuss how these new bounds can be turned into tractable training objectives in Section 4 (with a general recipe described in greater detail in Appendix A). The paper closes with perspectives for follow-up work in Section 5 and we defer to Appendix B the proofs of technical results.

## 2 Notation

For any set $A$, let $\mathcal{M}(A)$ be the set of probability measures on $A$. For any $M \in \mathbb{Z}_{>0}$, define $[M] := \{1, 2, \ldots, M\}$, the $M$-dimensional simplex $\triangle_M := \{\mathbf{u} \in [0, 1]^M : u_1 + \cdots + u_M = 1\}$ and its interior $\triangle_M^{>0} := \triangle_M \cap (0, 1)^M$. For $m, M \in \mathbb{Z}_{>0}$, define the integer counterparts $S_{m,M} := \{(k_1, \ldots, k_M) \in \mathbb{Z}_{\geq 0}^M : k_1 + \cdots + k_M = m\}$ and $S_{m,M}^{>0} := S_{m,M} \cap \mathbb{Z}_{>0}^M$. The set $S_{m,M}$ is the domain of the multinomial distribution with parameters $m, M$ and some $\mathbf{r} \in \triangle_M$, which is denoted $\mathrm{Mult}(m, M, \mathbf{r})$ and has probability mass function for $\mathbf{k} \in S_{m,M}$ given by

$$\operatorname{Mult}(\mathbf{k};m,M,\mathbf{r}):=\begin{pmatrix}m\\ k_{1}&k_{2}&\cdots&k_{M}\end{pmatrix}\prod_{j=1}^{M}r_{j}^{k_{j}},\quad\text{where}\quad\begin{pmatrix}m\\ k_{1}&k_{2}&\cdots&k_{M}\end{pmatrix}:=\frac{m!}{\prod_{j=1}^{M}k_{j}!}.$$

For $\mathbf{q}, \mathbf{p} \in \triangle_M$, let $\mathrm{kl}(\mathbf{q}\|\mathbf{p})$ denote the KL-divergence of $\mathrm{Mult}(1, M, \mathbf{q})$ from $\mathrm{Mult}(1, M, \mathbf{p})$, namely $\mathrm{kl}(\mathbf{q}\|\mathbf{p}) := \sum_{j=1}^{M} q_j \ln \frac{q_j}{p_j}$, with the conventions that $0 \ln \frac{0}{x} = 0$ for $x \geq 0$ and $x \ln \frac{x}{0} = \infty$ for $x > 0$.
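The multinomial pmf and the vector kl-divergence just defined, including the $0 \ln \frac{0}{x} = 0$ and $x \ln \frac{x}{0} = \infty$ conventions, can be sketched directly in Python (a hedged illustration using only the standard library; the function names are our own):

```python
import math

def mult_pmf(k, r):
    """Multinomial pmf Mult(k; m, M, r) with m = sum(k), M = len(k)."""
    m = sum(k)
    coef = math.factorial(m) / math.prod(math.factorial(kj) for kj in k)
    # Note: Python evaluates 0.0 ** 0 as 1.0, matching the 0^0 = 1
    # convention used in the proofs.
    return coef * math.prod(rj ** kj for kj, rj in zip(k, r))

def kl(q, p):
    """kl(q || p) with the conventions 0 ln(0/x) = 0 and x ln(x/0) = inf."""
    total = 0.0
    for qj, pj in zip(q, p):
        if qj == 0.0:
            continue           # 0 ln(0/x) = 0 for x >= 0
        if pj == 0.0:
            return math.inf    # x ln(x/0) = inf for x > 0
        total += qj * math.log(qj / pj)
    return total
```

For $M = 2$, `kl((q, 1 - q), (p, 1 - p))` recovers the conventional binary kl-divergence of the PAC-Bayes literature, as the next paragraph notes.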
For $M = 2$ we abuse notation and abbreviate $\mathrm{kl}((q, 1-q)\|(p, 1-p))$ to $\mathrm{kl}(q\|p)$, which is then the conventional definition of $\mathrm{kl}(\cdot\|\cdot) : [0, 1]^2 \to [0, \infty]$ found in the PAC-Bayes literature (as in Seeger, 2002, for example). Let $\mathcal{X}$ and $\mathcal{Y}$ be arbitrary input (*e.g.*, feature) and output (*e.g.*, label) sets respectively. Let $\bigcup_{j=1}^{M} E_j$ be a partition of $\mathcal{Y}^2$ into a finite sequence of $M$ *error types*, and to each $E_j$ associate a loss value $\ell_j \in [0, \infty)$. The only restriction we place on the loss values $\ell_j$ is that they are not all equal. This is not a strong assumption, since if they were all equal then all hypotheses would incur equal loss and there would be no learning problem: we are effectively ruling out trivial cases. Let $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ denote a hypothesis class, $D \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})$ a data-generating distribution and $S \sim D^m$ an i.i.d. sample of size $m$ drawn from $D$. For $h \in \mathcal{H}$ and $j \in [M]$ we define the empirical $j$*-risk* and true $j$*-risk* of $h$ to be $R_S^j(h) := \frac{1}{m}\sum_{(x,y)\in S} \mathbb{1}[(h(x), y) \in E_j]$ and $R_D^j(h) := \mathbb{E}_{(x,y)\sim D}\left[\mathbb{1}[(h(x), y) \in E_j]\right]$, respectively, namely, the proportion of the sample $S$ on which $h$ makes an error of type $E_j$ and the probability that $h$ makes an error of type $E_j$ on a new $(x, y) \sim D$. More generally, suppose $\mathcal{H} \subseteq \mathcal{M}(\mathcal{Y})^{\mathcal{X}}$ is a class of *soft* hypotheses of the form $H : \mathcal{X} \to \mathcal{M}(\mathcal{Y})$, where, for any $A \subseteq \mathcal{Y}$, $H(x)[A]$ is interpreted as the probability according to $H$ that the label of $x$ is in $A$. It is worth stressing that a soft hypothesis is still deterministic since a prediction is not drawn from the distribution it returns. We then define the empirical $j$*-risk* of $H$ to be $R_S^j(H) := \frac{1}{m}\sum_{(x,y)\in S} H(x)\left[\{\hat{y} \in \mathcal{Y} : (\hat{y}, y) \in E_j\}\right]$, namely the mean—over the elements $(x, y)$ of $S$—probability mass $H$ assigns to predictions $\hat{y} \in \mathcal{Y}$ incurring an error of type $E_j$ when labelling each $x$.
Further, we define the true j*-risk* of H to be R j D(H) := E(x,y)∼D -H(x)-{yˆ ∈ Y : (ˆ*y, y*) ∈ Ej}, namely the mean—over (*x, y*) ∼ D—probability mass H assigns to predictions yˆ ∈ Y incurring an error of type Ej when labelling each x. We will see in Section 4 that the more general hypothesis class *H ⊆ M*(Y) X is necessary for constructing a differentiable training objective. To each ordinary hypothesis h ∈ YX there corresponds a soft hypothesis H ∈ M(Y) X that, for each x ∈ X , returns a point mass on h(x). In this case, it is straightforward to show that R j S (h) = R j S (H) and R j D(h) = R j D(H) for all j ∈ [M], where we have used the corresponding definitions above for ordinary and soft hypotheses. Since, in addition, our results hold identically for both ordinary and soft hypotheses, we henceforth use the same notation h for both ordinary and soft hypotheses and their associated values R j S (h) and R j D(h). It will always be clear from the context whether we are dealing with ordinary or soft hypotheses and thus which of the above definitions of the empirical and true j-risks is being used. We define the *empirical risk* and *true risk* of a (ordinary or soft) hypothesis h to be RS(h) := (R1S (h)*, . . . , R*M S (h)) and RD(h) := (R1D(h)*, . . . , R*M D (h)), respectively. It is straightforward to show that RS(h) and RD(h) are elements of △M. Since S is drawn i.i.d. from D, the expectation of the empirical risk is equal to the true risk, namely ES[R j S (h)] = R j D(h) for all j and thus ES[RS(h)] = RD(h). Finally, we generalise to stochastic hypotheses Q ∈ M(H), which predict by first drawing a deterministic hypothesis h ∼ Q and then predicting according to h, where a new h is drawn for each prediction. 
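Concretely, the risk vector of a stochastic hypothesis is simply the $Q$-weighted average of the per-hypothesis risk vectors, as the next definitions formalise. A toy numeric sketch (all numbers are our own, chosen only for illustration):

```python
# A stochastic hypothesis Q is a distribution over hypotheses; its risk
# vector is the Q-weighted average of the per-hypothesis risk vectors.
# Toy setting: M = 3 error types, two hypotheses with known risk vectors.
R_h1 = (0.7, 0.2, 0.1)    # R_S(h1), an element of the simplex
R_h2 = (0.5, 0.4, 0.1)    # R_S(h2)
Q = (0.25, 0.75)          # weights Q(h1), Q(h2)

# R_S(Q) = E_{h~Q}[R_S(h)], a convex combination, so again in the simplex.
R_Q = tuple(Q[0]*a + Q[1]*b for a, b in zip(R_h1, R_h2))

# Total risk for a loss vector l: R^T(Q) = sum_j l_j R^j(Q).
loss = (0.0, 1.0, 2.0)
total_risk = sum(l*r for l, r in zip(loss, R_Q))
```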
Thus, we define the empirical $j$*-risk* and true $j$*-risk* of $Q$ to be the scalars $R_S^j(Q) := \mathbb{E}_{h\sim Q}[R_S^j(h)]$ and $R_D^j(Q) := \mathbb{E}_{h\sim Q}[R_D^j(h)]$, for $j \in [M]$, and simply the *empirical risk* and *true risk* of $Q$ to be the elements of $\triangle_M$ defined by $\mathbf{R}_S(Q) := \mathbb{E}_{h\sim Q}[\mathbf{R}_S(h)]$ and $\mathbf{R}_D(Q) := \mathbb{E}_{h\sim Q}[\mathbf{R}_D(h)]$. As before, since $S$ is i.i.d., we have (using Fubini this time) that $\mathbb{E}_S[\mathbf{R}_S(Q)] = \mathbf{R}_D(Q)$. Finally, given a loss vector $\boldsymbol{\ell} \in [0, \infty)^M$, we define the *total risk* of $Q$ by the scalar $R_D^T(Q) := \sum_{j=1}^{M} \ell_j R_D^j(Q)$. As is conventional in the PAC-Bayes literature, we refer to sample-independent and sample-dependent distributions in $\mathcal{M}(\mathcal{H})$ (*i.e.* stochastic hypotheses) as *priors* (denoted $P$) and posteriors (denoted $Q$) respectively, even if they are not related by Bayes' theorem.

## 3 Inspiration And Main Results

We first state the existing results in Bégin et al. (2016) and Maurer (2004) that we will generalise from just two error types (correct and incorrect) to any finite number of error types. These results are stated in terms of the scalars $R_S(Q) := \mathbb{E}_{h\sim Q}\left[\frac{1}{m}\sum_{(x,y)\in S} \mathbb{1}[h(x) \neq y]\right]$ and $R_D(Q) := \mathbb{E}_{h\sim Q}\,\mathbb{E}_{(x,y)\sim D}\,\mathbb{1}[h(x) \neq y]$ and, as we demonstrate, correspond to the case $M = 2$ of our generalisations.

Theorem 1. (Bégin et al., 2016, Theorem 4) *Let $\mathcal{X}$ be an arbitrary set and $\mathcal{Y} = \{-1, 1\}$. Let $D \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})$ be a data-generating distribution and $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ be a hypothesis class. For any prior $P \in \mathcal{M}(\mathcal{H})$, $\delta \in (0, 1]$, convex function $d : [0, 1]^2 \to \mathbb{R}$, sample size $m$ and $\beta \in (0, \infty)$, with probability at least $1 - \delta$ over the random draw $S \sim D^m$, we have that simultaneously for all posteriors $Q \in \mathcal{M}(\mathcal{H})$*

$$d\big(R_{S}(Q),R_{D}(Q)\big)\leq\frac{1}{\beta}\left[\mathrm{KL}(Q\|P)+\ln\frac{\mathcal{I}_{d}(m,\beta)}{\delta}\right],$$

*with $\mathcal{I}_d(m,\beta) := \sup_{r\in[0,1]}\left[\sum_{k=0}^{m} \mathrm{Bin}(k; m, r) \exp\left(\beta d\left(\frac{k}{m}, r\right)\right)\right]$, where $\mathrm{Bin}(k; m, r)$ is the binomial probability mass function $\mathrm{Bin}(k; m, r) := \binom{m}{k} r^k (1-r)^{m-k}$.*

Note the original statement in Bégin et al. (2016) is for a positive integer $m'$, but the proof trivially generalises to any $\beta \in (0, \infty)$.
One of the bounds that Theorem 1 unifies—which we also generalise—is that of Seeger (2002), later tightened in Maurer (2004), which we now state. It can be recovered from Theorem 1 by setting $\beta = m$ and $d(q, p) = \mathrm{kl}(q\|p) := q \ln \frac{q}{p} + (1-q) \ln \frac{1-q}{1-p}$.

Corollary 2. (Maurer, 2004, Theorem 5) *Let $\mathcal{X}$ be an arbitrary set and $\mathcal{Y} = \{-1, 1\}$. Let $D \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})$ be a data-generating distribution and $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ be a hypothesis class. For any prior $P \in \mathcal{M}(\mathcal{H})$, $\delta \in (0, 1]$ and sample size $m$, with probability at least $1 - \delta$ over the random draw $S \sim D^m$, we have that simultaneously for all posteriors $Q \in \mathcal{M}(\mathcal{H})$*

$$\operatorname{kl}\bigl(R_{S}(Q),R_{D}(Q)\bigr)\leq{\frac{1}{m}}\left[\operatorname{KL}(Q\|P)+\ln{\frac{2{\sqrt{m}}}{\delta}}\right].$$

We wish to bound the deviation of the empirical vector $\mathbf{R}_S(Q)$ from the unknown vector $\mathbf{R}_D(Q)$. Since in general the stochastic hypothesis $Q$ we learn will depend on the sample $S$, it is useful to obtain bounds on the deviation of $\mathbf{R}_S(Q)$ from $\mathbf{R}_D(Q)$ that are uniform over $Q$, just as in Theorem 1 and Corollary 2. In Theorem 1, the deviation $d(R_S(Q), R_D(Q))$ between the scalars $R_S(Q), R_D(Q) \in [0, 1]$ is measured by some convex function $d : [0, 1]^2 \to \mathbb{R}$. In our case, the deviation $d(\mathbf{R}_S(Q), \mathbf{R}_D(Q))$ between the vectors $\mathbf{R}_S(Q), \mathbf{R}_D(Q) \in \triangle_M$ is measured by some convex function $d : \triangle_M^2 \to \mathbb{R}$. In Section 3.2 we will derive Corollary 7 from Theorem 3 by selecting $\beta = m$ and $d(\mathbf{q}, \mathbf{p}) := \mathrm{kl}(\mathbf{q}\|\mathbf{p})$, analogous to how Corollary 2 is obtained from Theorem 1.

## 3.1 Statement And Proof Of The Generalised Bound

We now state and prove our generalisation of Theorem 1. The proof follows identical lines to that of Theorem 1 given in Bégin et al. (2016), but with additional non-trivial steps to account for the greater number of error types and the possibility of soft hypotheses.

Theorem 3. *Let $\mathcal{X}$ and $\mathcal{Y}$ be arbitrary sets and $\bigcup_{j=1}^{M} E_j$ be a disjoint partition of $\mathcal{Y}^2$. Let $D \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})$ be a data-generating distribution and $\mathcal{H} \subseteq \mathcal{M}(\mathcal{Y})^{\mathcal{X}}$ be a hypothesis class.*
*For any prior $P \in \mathcal{M}(\mathcal{H})$, $\delta \in (0, 1]$, jointly convex function $d : \triangle_M^2 \to \mathbb{R}$, sample size $m$ and $\beta \in (0, \infty)$, with probability at least $1 - \delta$ over the random draw $S \sim D^m$, we have that simultaneously for all posteriors $Q \in \mathcal{M}(\mathcal{H})$*

$$d\big{(}\mathbf{R}_{S}(Q),\mathbf{R}_{D}(Q)\big{)}\leq\frac{1}{\beta}\left[\text{KL}(Q\|P)+\ln\frac{\mathcal{I}_{d}(m,\beta)}{\delta}\right],\tag{1}$$

*where $\mathcal{I}_d(m,\beta) := \sup_{\mathbf{r}\in\triangle_M}\left[\sum_{\mathbf{k}\in S_{m,M}} \mathrm{Mult}(\mathbf{k}; m, M, \mathbf{r}) \exp\left(\beta d\left(\frac{\mathbf{k}}{m}, \mathbf{r}\right)\right)\right]$. Further, the bounds are unchanged if one restricts to an ordinary hypothesis class, namely if $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$.*

The proof is given below, after a discussion and some auxiliary results. One can derive multiple bounds from this theorem, all of which then hold simultaneously with probability at least $1 - \delta$. For example, one can derive bounds on the individual error probabilities $R_D^j(Q)$ or combinations thereof. It is this flexibility that allows Theorem 3 to provide far richer information on the performance of the posterior $Q$ on unseen data. For a more in-depth discussion of how such bounds can be derived, including a recipe for transforming the bound into a differentiable training objective, see Section 4 and Appendix A. To see that Theorem 3 is a generalisation of Theorem 1, note that we can recover it by setting $\mathcal{Y} = \{-1, 1\}$, $M = 2$, $E_1 = \{(-y, y) : y \in \mathcal{Y}\}$ and $E_2 = \{(y, y) : y \in \mathcal{Y}\}$. Then, for any convex function $d : [0, 1]^2 \to \mathbb{R}$, apply Theorem 3 with the convex function $d' : \triangle_M^2 \to \mathbb{R}$ defined by $d'((u_1, u_2), (v_1, v_2)) := d(u_1, v_1)$, so that Theorem 3 bounds $d'\big(\mathbf{R}_S(Q), \mathbf{R}_D(Q)\big) = d\big(R_S^1(Q), R_D^1(Q)\big)$, which equals $d(R_S(Q), R_D(Q))$ in the notation of Theorem 1.
Further,

$$\sum_{\mathbf{k}\in S_{m,2}}\operatorname{Mult}(\mathbf{k};m,2,\mathbf{r})\exp\left(\beta d^{\prime}\left({\frac{\mathbf{k}}{m}},\mathbf{r}\right)\right)=\sum_{k=0}^{m}\operatorname{Bin}(k;m,r_{1})\exp\left(\beta d\left({\frac{k}{m}},r_{1}\right)\right),$$

so that the supremum over $r_1 \in [0, 1]$ of the right-hand side equals the supremum over $\mathbf{r} \in \triangle_2$ of the left-hand side, which, when substituted into (1), yields the bound given in Theorem 1. Our proof of Theorem 3 follows the lines of the proof of Theorem 1 in Bégin et al. (2016), making use of the change of measure inequality Lemma 4. However, a complication arises from the use of soft classifiers $h \in \mathcal{M}(\mathcal{Y})^{\mathcal{X}}$. A similar problem is dealt with in Maurer (2004) when proving Corollary 2 by means of a lemma permitting the replacement of $[0, 1]$-valued random variables by corresponding $\{0, 1\}$-valued random variables with the same mean. We use a generalisation of this, stated as Lemma 5 (Lemma 3 in Maurer, 2004 corresponds to the case $M = 2$), the proof of which is not insightful for our purposes and thus deferred to Appendix B.1. An immediate consequence of Lemma 5 is Corollary 6, which is a generalisation of the first half of Theorem 1 in Maurer (2004). While we only use it implicitly in the remainder of the paper, we state it as it may be of broader interest. The consequence of Lemma 5 is that the worst case (in terms of bounding $d(\mathbf{R}_S(Q), \mathbf{R}_D(Q))$) occurs when $\mathbf{R}_{\{(x,y)\}}(h)$ is a one-hot vector for all $(x, y) \in S$ and $h \in \mathcal{H}$, namely when $\mathcal{H} \subseteq \mathcal{M}(\mathcal{Y})^{\mathcal{X}}$ only contains hypotheses that, when labelling $S$, put all their mass on elements $\hat{y} \in \mathcal{Y}$ that incur the same error type1. In particular, this is the case for hypotheses that put all their mass on a single element of $\mathcal{Y}$, equivalent to the simpler case $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ as discussed in Section 2. Thus, Lemma 5 shows that the bound given in Theorem 3 cannot be made tighter only by restricting to such hypotheses.
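The displayed reduction of the $M = 2$ multinomial sum to a binomial sum can be checked numerically. A small sketch, where we pick a simple convex $d$ and arbitrary values of $m$, $\beta$ and $r_1$ purely for the check:

```python
import math

def binom_pmf(k, m, r):
    """Binomial pmf Bin(k; m, r)."""
    return math.comb(m, k) * r**k * (1 - r)**(m - k)

def mult2_pmf(k1, k2, r1, r2):
    """Multinomial pmf Mult((k1, k2); m, 2, (r1, r2)) with m = k1 + k2."""
    m = k1 + k2
    coef = math.factorial(m) / (math.factorial(k1) * math.factorial(k2))
    return coef * r1**k1 * r2**k2

def d(q, p):
    """Any convex d on [0, 1]^2 works for the identity; take a quadratic."""
    return (q - p) ** 2

m, beta, r1 = 10, 5.0, 0.3
# Left-hand side: sum over k = (k, m - k) in S_{m,2} with r = (r1, 1 - r1);
# d'((u1, u2), (v1, v2)) = d(u1, v1) only sees the first coordinates.
lhs = sum(
    mult2_pmf(k, m - k, r1, 1 - r1) * math.exp(beta * d(k / m, r1))
    for k in range(m + 1)
)
# Right-hand side: binomial sum from Theorem 1.
rhs = sum(binom_pmf(k, m, r1) * math.exp(beta * d(k / m, r1)) for k in range(m + 1))
```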
1More precisely, when $\forall h \in \mathcal{H}\ \forall (x, y) \in S\ \exists j \in [M]$ such that $h(x)\left[\{\hat{y} \in \mathcal{Y} : (\hat{y}, y) \in E_j\}\right] = 1$.

Lemma 4. (Change of measure, Csiszár, 1975, Donsker & Varadhan, 1975) *For any set $\mathcal{H}$, any $P, Q \in \mathcal{M}(\mathcal{H})$ and any measurable function $\phi : \mathcal{H} \to \mathbb{R}$,*

$$\mathbb{E}_{h\sim Q}\,\phi(h) \leq \mathrm{KL}(Q\|P) + \ln \mathbb{E}_{h\sim P}\exp(\phi(h)).$$

Lemma 5. (Generalisation of Lemma 3 in Maurer, 2004) *Let $X_1, \ldots, X_m$ be i.i.d. $\triangle_M$-valued random vectors with mean $\boldsymbol{\mu}$ and suppose that $f : \triangle_M^m \to \mathbb{R}$ is convex. If $X_1', \ldots, X_m'$ are i.i.d. $\mathrm{Mult}(1, M, \boldsymbol{\mu})$ random vectors, then $\mathbb{E}[f(X_1, \ldots, X_m)] \leq \mathbb{E}[f(X_1', \ldots, X_m')]$.*

Corollary 6. (Generalisation of Theorem 1 in Maurer, 2004) *Let $X_1, \ldots, X_m$ be i.i.d. $\triangle_M$-valued random vectors with mean $\boldsymbol{\mu}$, and $X_1', \ldots, X_m'$ be i.i.d. $\mathrm{Mult}(1, M, \boldsymbol{\mu})$. Define $\bar{X} := \frac{1}{m}\sum_{i=1}^m X_i$ and $\bar{X}' := \frac{1}{m}\sum_{i=1}^m X_i'$. Then $\mathbb{E}[\exp(m\,\mathrm{kl}(\bar{X}\|\boldsymbol{\mu}))] \leq \mathbb{E}[\exp(m\,\mathrm{kl}(\bar{X}'\|\boldsymbol{\mu}))]$.*

Proof. (of Corollary 6) This is immediate from Lemma 5 since the average is linear, the kl-divergence is convex and the exponential is non-decreasing and convex.

Proof. (of Theorem 3) The case $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ follows directly from the more general case by taking $\mathcal{H}' := \{h' \in \mathcal{M}(\mathcal{Y})^{\mathcal{X}} : \exists h \in \mathcal{H} \text{ such that } \forall x \in \mathcal{X}\ h'(x) = \delta_{h(x)}\}$, where $\delta_{h(x)} \in \mathcal{M}(\mathcal{Y})$ denotes a point mass on $h(x)$. For the general case $\mathcal{H} \subseteq \mathcal{M}(\mathcal{Y})^{\mathcal{X}}$, using Jensen's inequality with the convex function $d(\cdot, \cdot)$ and Lemma 4 with $\phi(h) = \beta d(\mathbf{R}_S(h), \mathbf{R}_D(h))$, we see that for all $Q \in \mathcal{M}(\mathcal{H})$

$$\begin{aligned}\beta d\big{(}\mathbf{R}_{S}(Q),\mathbf{R}_{D}(Q)\big{)}&=\beta d\left(\mathbb{E}_{h\sim Q}\mathbf{R}_{S}(h),\mathbb{E}_{h\sim Q}\mathbf{R}_{D}(h)\right)\\ &\leq\mathbb{E}_{h\sim Q}\,\beta d\big{(}\mathbf{R}_{S}(h),\mathbf{R}_{D}(h)\big{)}\\ &\leq\text{KL}(Q\|P)+\ln\left(\mathbb{E}_{h\sim P}\exp\left(\beta d\big{(}\mathbf{R}_{S}(h),\mathbf{R}_{D}(h)\big{)}\right)\right)\\ &=\text{KL}(Q\|P)+\ln(Z_{P}(S)),\end{aligned}$$

where $Z_P(S) := \mathbb{E}_{h\sim P}\exp\left(\beta d(\mathbf{R}_S(h), \mathbf{R}_D(h))\right)$. Note that $Z_P(S)$ is a non-negative random variable, so that by Markov's inequality $\mathbb{P}_{S\sim D^m}\left(Z_P(S) \leq \frac{\mathbb{E}_{S'\sim D^m} Z_P(S')}{\delta}\right) \geq 1 - \delta$.
Thus, since $\ln(\cdot)$ is strictly increasing, with probability at least $1 - \delta$ over $S \sim D^m$, we have that simultaneously for all $Q \in \mathcal{M}(\mathcal{H})$

$$\beta d\big{(}\mathbf{R}_{S}(Q),\mathbf{R}_{D}(Q)\big{)}\leq\text{KL}(Q\|P)+\ln\frac{\mathbb{E}_{S^{\prime}\sim D^{m}}Z_{P}(S^{\prime})}{\delta}.\tag{2}$$

To bound $\mathbb{E}_{S'\sim D^m} Z_P(S')$, let $X_i := \mathbf{R}_{\{(x_i,y_i)'\}}(h) \in \triangle_M$ for $i \in [m]$, where $(x_i, y_i)'$ is the $i$'th element of the dummy sample $S'$. Noting that each $X_i$ has mean $\mathbf{R}_D(h)$, define the random vectors $X_i' \sim \mathrm{Mult}(1, M, \mathbf{R}_D(h))$ and $Y := \sum_{i=1}^m X_i' \sim \mathrm{Mult}(m, M, \mathbf{R}_D(h))$. Finally let $f : \triangle_M^m \to \mathbb{R}$ be defined by $f(x_1, \ldots, x_m) := \exp\left(\beta d\left(\frac{1}{m}\sum_{i=1}^m x_i, \mathbf{R}_D(h)\right)\right)$, which is convex since the average is linear, $d$ is convex and the exponential is non-decreasing and convex. Then, by swapping expectations (which is permitted by Fubini's theorem since the argument is non-negative) and applying Lemma 5, we have that $\mathbb{E}_{S'\sim D^m} Z_P(S')$ can be written as

$$\begin{aligned}\mathbb{E}_{S'\sim D^m} Z_P(S') &= \mathbb{E}_{S'\sim D^m}\,\mathbb{E}_{h\sim P}\exp\left(\beta d\big(\mathbf{R}_{S'}(h), \mathbf{R}_D(h)\big)\right)\\ &= \mathbb{E}_{h\sim P}\,\mathbb{E}_{S'\sim D^m}\exp\left(\beta d\big(\mathbf{R}_{S'}(h), \mathbf{R}_D(h)\big)\right)\\ &= \mathbb{E}_{h\sim P}\,\mathbb{E}_{X_1,\ldots,X_m}\exp\left(\beta d\left(\frac{1}{m}\sum_{i=1}^{m} X_i, \mathbf{R}_D(h)\right)\right)\\ &\leq \mathbb{E}_{h\sim P}\,\mathbb{E}_{X_1',\ldots,X_m'}\exp\left(\beta d\left(\frac{1}{m}\sum_{i=1}^{m} X_i', \mathbf{R}_D(h)\right)\right)\\ &= \mathbb{E}_{h\sim P}\,\mathbb{E}_{Y}\exp\left(\beta d\left(\frac{1}{m}Y, \mathbf{R}_D(h)\right)\right)\\ &= \mathbb{E}_{h\sim P}\sum_{\mathbf{k}\in S_{m,M}}\mathrm{Mult}\big(\mathbf{k}; m, M, \mathbf{R}_D(h)\big)\exp\left(\beta d\left(\frac{\mathbf{k}}{m}, \mathbf{R}_D(h)\right)\right)\\ &\leq \sup_{\mathbf{r}\in\triangle_{M}}\left[\sum_{\mathbf{k}\in S_{m,M}}\operatorname{Mult}\left(\mathbf{k};m,M,\mathbf{r}\right)\exp\left(\beta d\left({\frac{\mathbf{k}}{m}},\mathbf{r}\right)\right)\right],\end{aligned}$$

which is the definition of $\mathcal{I}_d(m, \beta)$. Inequality (1) then follows by substituting this bound on $\mathbb{E}_{S'\sim D^m} Z_P(S')$ into (2) and dividing by $\beta$.

## 3.2 Statement And Proof Of The Generalised Corollary

We now apply our generalised theorem with $\beta = m$ and $d(\mathbf{q}, \mathbf{p}) = \mathrm{kl}(\mathbf{q}\|\mathbf{p})$. This results in the following corollary, analogous to Corollary 2 (although the multi-dimensionality makes the proof much more involved, requiring multiple lemmas and extra arguments to make the main idea go through).
We give two forms of the bound since, while the second is looser, the first is not practical to calculate except when $m$ is very small.

Corollary 7. *Let $\mathcal{X}$ and $\mathcal{Y}$ be arbitrary sets and $\bigcup_{j=1}^{M} E_j$ be a disjoint partition of $\mathcal{Y}^2$. Let $D \in \mathcal{M}(\mathcal{X} \times \mathcal{Y})$ be a data-generating distribution and $\mathcal{H} \subseteq \mathcal{M}(\mathcal{Y})^{\mathcal{X}}$ be a hypothesis class. For any prior $P \in \mathcal{M}(\mathcal{H})$, $\delta \in (0, 1]$ and sample size $m$, with probability at least $1 - \delta$ over the random draw $S \sim D^m$, we have that simultaneously for all posteriors $Q \in \mathcal{M}(\mathcal{H})$*

$$\mathrm{kl}\big{(}\mathbf{R}_{S}(Q)\|\mathbf{R}_{D}(Q)\big{)}\leq\frac{1}{m}\left[\mathrm{KL}(Q\|P)+\ln\left(\frac{m!}{\delta m^{m}}\sum_{\mathbf{k}\in S_{m,M}}\prod_{j=1}^{M}\frac{k_{j}^{k_{j}}}{k_{j}!}\right)\right]\tag{3}$$

$$\leq\frac{1}{m}\left[\mathrm{KL}(Q\|P)+\ln\left(\frac{1}{\delta}\sqrt{\pi}\,e^{1/(12m)}\left(\frac{m}{2}\right)^{\frac{M-1}{2}}\sum_{i=0}^{M-1}\binom{M}{i}\left(\frac{2}{m}\right)^{\frac{i}{2}}\frac{1}{\Gamma\left(\frac{M-i}{2}\right)}\right)\right],\tag{4}$$

*where the second inequality holds provided $m \geq M$. Further, the bounds are unchanged if one restricts to an ordinary hypothesis class, namely if $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$.*

While analogous corollaries can be obtained from Theorem 3 by other choices of convex function $d$, the kl-divergence leads to convenient cancellations that remove the dependence of $\mathcal{I}_{\mathrm{kl}}(m, \beta, \mathbf{r})$ on $\mathbf{r}$, making $\mathcal{I}_{\mathrm{kl}}(m, \beta) := \sup_{\mathbf{r}\in\triangle_M} \mathcal{I}_{\mathrm{kl}}(m, \beta, \mathbf{r})$ simple to evaluate. Note (4) is logarithmic in $1/\delta$ (typical of PAC-Bayes bounds) and thus the confidence can be increased very cheaply. Ignoring logarithmic terms, (4) is $O(1/m)$, also as expected. As for $M$, a simple analysis shows that (4) grows only sublinearly in $M$, meaning $M$ can be made quite large provided one has a reasonable amount of data. To prove Corollary 7 we require Lemma 8, the proof of which is deferred to Appendix B.2.
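For small $m$, the exact complexity term inside the logarithm of (3) can be enumerated directly. A sketch (the helper names are our own), which is also a handy sanity check against the $\ln\frac{2\sqrt{m}}{\delta}$ term of Corollary 2 when $M = 2$:

```python
import math

def compositions(m, M):
    """Enumerate S_{m,M}: all M-tuples of non-negative integers summing to m."""
    if M == 1:
        yield (m,)
        return
    for first in range(m + 1):
        for rest in compositions(m - first, M - 1):
            yield (first,) + rest

def log_complexity_term(m, M, delta):
    """ln( m!/(delta m^m) * sum_{k in S_{m,M}} prod_j k_j^{k_j}/k_j! ).

    Relies on Python's 0 ** 0 == 1, matching the 0^0/0! = 1 convention.
    """
    s = sum(
        math.prod((kj ** kj) / math.factorial(kj) for kj in k)
        for k in compositions(m, M)
    )
    return math.log(math.factorial(m) * s / (delta * m ** m))
```

Since $|S_{m,M}| = \binom{m+M-1}{M-1}$, this enumeration is only feasible for small $m$, which is exactly why the looser closed form (4) is needed in practice.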
**Lemma 8.** *For integers* $M \geq 1$ *and* $m \geq M$, $\sum_{\mathbf{k}\in S^{>0}_{m,M}}\prod_{j=1}^{M}\frac{1}{\sqrt{k_{j}}}\leq\frac{\pi^{\frac{M}{2}}\, m^{\frac{M-2}{2}}}{\Gamma(\frac{M}{2})}$.

Proof. (of Corollary 7) Applying Theorem 3 with d(q, p) = kl(q∥p) (defined in Section 2) and β = m gives that with probability at least 1 − δ over S ∼ D^m, simultaneously for all posteriors Q ∈ M(H),

$$\mathrm{kl}\big(\mathbf{R}_{S}(Q)\|\mathbf{R}_{D}(Q)\big)\leq\frac{1}{m}\left[\mathrm{KL}(Q\|P)+\ln\frac{I_{\mathrm{kl}}(m,m)}{\delta}\right],$$

where $I_{\mathrm{kl}}(m, m) := \sup_{\mathbf{r}\in\triangle_M}\big[\sum_{\mathbf{k}\in S_{m,M}}\mathrm{Mult}(\mathbf{k}; m, M, \mathbf{r})\exp\big(m\,\mathrm{kl}(\frac{\mathbf{k}}{m}, \mathbf{r})\big)\big]$. Thus, to establish the first inequality of the corollary, it suffices to show that

$$I_{\mathrm{kl}}(m,m)\leq\frac{m!}{m^{m}}\sum_{\mathbf{k}\in S_{m,M}}\prod_{j=1}^{M}\frac{k_{j}^{k_{j}}}{k_{j}!}.\tag{5}$$

To see this, for each fixed $\mathbf{r} = (r_1, \ldots, r_M) \in \triangle_M$ let $J_{\mathbf{r}} = \{j \in [M] : r_j = 0\}$. Then $\mathrm{Mult}(\mathbf{k}; m, M, \mathbf{r}) = 0$ for any $\mathbf{k} \in S_{m,M}$ such that $k_j \neq 0$ for some $j \in J_{\mathbf{r}}$. For the other $\mathbf{k} \in S_{m,M}$, namely those such that $k_j = 0$ for all $j \in J_{\mathbf{r}}$, the probability term can be written as

$$\mathrm{Mult}(\mathbf{k}; m, M, \mathbf{r}) = \frac{m!}{\prod_{j=1}^{M} k_j!}\prod_{j=1}^{M} r_j^{k_j} = \frac{m!}{\prod_{j\notin J_{\mathbf{r}}} k_j!}\prod_{j\notin J_{\mathbf{r}}} r_j^{k_j},$$

and (recalling the convention that $0\ln\frac{0}{0} = 0$) the term $\exp(m\,\mathrm{kl}(\frac{\mathbf{k}}{m}, \mathbf{r}))$ can be written as

$$\exp\left(m\sum_{j=1}^{M}\frac{k_{j}}{m}\ln\frac{\frac{k_{j}}{m}}{r_{j}}\right)=\exp\left(\sum_{j\notin J_{\mathbf{r}}}k_{j}\ln\frac{k_{j}}{mr_{j}}\right)=\prod_{j\notin J_{\mathbf{r}}}\left(\frac{k_{j}}{mr_{j}}\right)^{k_{j}}=\frac{1}{m^{m}}\prod_{j\notin J_{\mathbf{r}}}\left(\frac{k_{j}}{r_{j}}\right)^{k_{j}},$$

where the last equality is obtained by recalling that the $k_j$ sum to $m$. Substituting these two expressions into the definition of $I_{\mathrm{kl}}(m, m)$ and only summing over those $\mathbf{k} \in S_{m,M}$ with non-zero probability, we obtain

$$\begin{aligned}
\sum_{\mathbf{k}\in S_{m,M}}\mathrm{Mult}(\mathbf{k}; m, M, \mathbf{r})\exp\left(m\,\mathrm{kl}\Big(\frac{\mathbf{k}}{m}, \mathbf{r}\Big)\right) &= \sum_{\substack{\mathbf{k}\in S_{m,M}:\\ \forall j\in J_{\mathbf{r}}\ k_j=0}}\mathrm{Mult}(\mathbf{k}; m, M, \mathbf{r})\exp\left(m\,\mathrm{kl}\Big(\frac{\mathbf{k}}{m}, \mathbf{r}\Big)\right)\\
&= \sum_{\substack{\mathbf{k}\in S_{m,M}:\\ \forall j\in J_{\mathbf{r}}\ k_j=0}}\frac{m!}{\prod_{j\notin J_{\mathbf{r}}} k_j!}\prod_{j\notin J_{\mathbf{r}}} r_j^{k_j}\cdot\frac{1}{m^{m}}\prod_{j\notin J_{\mathbf{r}}}\left(\frac{k_{j}}{r_{j}}\right)^{k_{j}}\\
&= \frac{m!}{m^{m}}\sum_{\substack{\mathbf{k}\in S_{m,M}:\\ \forall j\in J_{\mathbf{r}}\ k_j=0}}\prod_{j\notin J_{\mathbf{r}}}\frac{k_{j}^{k_{j}}}{k_{j}!}\\
&= \frac{m!}{m^{m}}\sum_{\substack{\mathbf{k}\in S_{m,M}:\\ \forall j\in J_{\mathbf{r}}\ k_j=0}}\prod_{j=1}^{M}\frac{k_{j}^{k_{j}}}{k_{j}!}\qquad\Big(\text{because }\tfrac{0^{0}}{0!}=1\Big)\\
&\leq \frac{m!}{m^{m}}\sum_{\mathbf{k}\in S_{m,M}}\prod_{j=1}^{M}\frac{k_{j}^{k_{j}}}{k_{j}!}.
\end{aligned}$$

Since this is independent of $\mathbf{r}$, it also holds after taking the supremum over $\mathbf{r}\in\triangle_M$ of the left hand side. We have thus established (5) and hence (3). Now, defining $f : \bigcup_{M=2}^{\infty} S_{m,M} \to \mathbb{R}$ by $f(\mathbf{k}) = \prod_{j=1}^{|\mathbf{k}|} k_j^{k_j}/k_j!$, we see that to establish inequality (4) it suffices to show that

$$\frac{m!}{m^{m}}\sum_{\mathbf{k}\in S_{m,M}}f(\mathbf{k})\leq\sqrt{\pi}\,e^{1/(12m)}\left(\frac{m}{2}\right)^{\frac{M-1}{2}}\sum_{z=0}^{M-1}\binom{M}{z}\frac{1}{\left(\pi m\right)^{z/2}\,\Gamma\left(\frac{M-z}{2}\right)}.\tag{6}$$

We show this by upper bounding each $f(\mathbf{k})$ individually using Stirling's formula:

$$\forall n\geq 1\quad \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n} < n! < \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}e^{\frac{1}{12n}}.$$

Since we cannot use this to upper bound $1/k_j!$ when $k_j = 0$, we partition the sum above according to the number of coordinates of $\mathbf{k}$ at which $k_j = 0$. Let $z$ index the number of such coordinates. Since $f$ is symmetric under permutations of its arguments,

$$\sum_{\mathbf{k}\in S_{m,M}}f(\mathbf{k})=\sum_{z=0}^{M-1}\binom{M}{z}\sum_{\mathbf{k}\in S_{m,M-z}^{>0}}f(\mathbf{k}).\tag{7}$$

For $\mathbf{k} \in S^{>0}_{m,M-z}$, Stirling's formula yields

$$f(\mathbf{k})\leq\prod_{j=1}^{M}\frac{k_j^{k_j}}{\sqrt{2\pi k_j}\left(\frac{k_j}{e}\right)^{k_j}}=\prod_{j=1}^{M}\frac{e^{k_j}}{\sqrt{2\pi k_j}}=\frac{e^{m}}{(2\pi)^{M/2}}\prod_{j=1}^{M}\frac{1}{\sqrt{k_j}}.$$

An application of Lemma 8 now gives

$$\sum_{\mathbf{k}\in S_{m,M-z}^{>0}}f(\mathbf{k})\leq\frac{e^{m}}{(2\pi)^{M/2}}\sum_{\mathbf{k}\in S_{m,M-z}^{>0}}\prod_{j=1}^{M}\frac{1}{\sqrt{k_{j}}}\leq\frac{e^{m}}{(2\pi)^{\frac{M}{2}}}\cdot\frac{\pi^{\frac{M-z}{2}}m^{\frac{M-z-2}{2}}}{\Gamma\left(\frac{M-z}{2}\right)}=\frac{e^{m}m^{\frac{M-2}{2}}}{2^{\frac{M}{2}}\left(\pi m\right)^{z/2}\Gamma\left(\frac{M-z}{2}\right)}.$$

Substituting this into equation (7) and bounding m!
using Stirling's formula, we have

$$\frac{m!}{m^{m}}\sum_{\mathbf{k}\in S_{m,M}}f(\mathbf{k})\leq\frac{\sqrt{2\pi m}\,e^{1/(12m)}}{e^{m}}\sum_{z=0}^{M-1}\binom{M}{z}\frac{e^{m}m^{\frac{M-2}{2}}}{2^{M/2}\left(\pi m\right)^{z/2}\Gamma\left(\frac{M-z}{2}\right)}$$

$$=\sqrt{\pi}\,e^{1/(12m)}\left(\frac{m}{2}\right)^{\frac{M-1}{2}}\sum_{z=0}^{M-1}\binom{M}{z}\frac{1}{\left(\pi m\right)^{z/2}\Gamma\left(\frac{M-z}{2}\right)},$$

which is (6), establishing (4) and therefore completing the proof.

## 4 Implied Bounds And Construction Of A Differentiable Training Objective

As already discussed, a multitude of bounds can be derived from Theorem 3 and Corollary 7, all of which then hold simultaneously with high probability. For example, suppose after a use of Corollary 7 we have a bound of the form $\mathrm{kl}(\mathbf{R}_S(Q)\|\mathbf{R}_D(Q)) \leq B$. The following proposition then yields the bounds $L_j \leq R^j_D(Q) \leq U_j$, where $L_j := \inf\{p \in [0, 1] : \mathrm{kl}(R^j_S(Q)\|p) \leq B\}$ and $U_j := \sup\{p \in [0, 1] : \mathrm{kl}(R^j_S(Q)\|p) \leq B\}$. Moreover, since in the worst case we have $\mathrm{kl}(\mathbf{R}_S(Q)\|\mathbf{R}_D(Q)) = B$, the proposition shows that the lower and upper bounds $L_j$ and $U_j$ are the tightest possible, since if $R^j_D(Q) \notin [L_j, U_j]$ then $\mathrm{kl}(R^j_S(Q)\|R^j_D(Q)) > B$, implying $\mathrm{kl}(\mathbf{R}_S(Q)\|\mathbf{R}_D(Q)) > B$. For a more precise version of this argument and a proof of Proposition 9, see Appendix B.3.

Proposition 9. *Let* $\mathbf{q}, \mathbf{p} \in \triangle_M$. *Then* $\mathrm{kl}(q_j\|p_j) \leq \mathrm{kl}(\mathbf{q}\|\mathbf{p})$ *for all* $j \in [M]$, *with equality when* $p_i = \frac{1-p_j}{1-q_j}q_i$ *for all* $i \neq j$.

As a second much more interesting example, suppose we can quantify how bad an error of each type is by means of a loss vector $\boldsymbol{\ell} \in [0, \infty)^M$, where $\ell_j$ is the loss we attribute to an error of type $E_j$. We may then be interested in bounding the *total risk* $R^T_D(Q) \in [0, \infty)$ of $Q$ which, recall, is defined by $R^T_D(Q) := \sum_{j=1}^M \ell_j R^j_D(Q)$. Indeed, given a bound of the form $\mathrm{kl}(\mathbf{R}_S(Q)\|\mathbf{R}_D(Q)) \leq B$, we can derive $R^T_D(Q) \leq \sup\{\sum_{j=1}^M \ell_j r_j : \mathbf{r} \in \triangle_M, \mathrm{kl}(\mathbf{R}_S(Q)\|\mathbf{r}) \leq B\}$. This motivates the following definition of $\mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c)$.
To see that this is indeed well-defined (at least when $\mathbf{u} \in \triangle^{>0}_M$), see the discussion at the beginning of Appendix B.4.

Definition 10. *For* $\mathbf{u} \in \triangle_M$, $c \in [0, \infty)$ *and* $\boldsymbol{\ell} \in [0, \infty)^M$, *define* $\mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c) = \sup\{\sum_{j=1}^M \ell_j v_j : \mathbf{v} \in \triangle_M, \mathrm{kl}(\mathbf{u}\|\mathbf{v}) \leq c\}$.

Can we calculate $\mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c)$ and hence $f_{\boldsymbol{\ell}}(\mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c))$ in order to evaluate the bound on the total risk? Additionally, if we wish to use the bound on the total risk as a training objective, can we calculate the partial derivatives of $f^*_{\boldsymbol{\ell}}(\mathbf{u}, c) := f_{\boldsymbol{\ell}}(\mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c))$ with respect to the $u_j$ and $c$ so that we can use gradient descent? Our Proposition 11 answers both of these questions in the affirmative, at least in the sense that it provides a speedy method for approximating these quantities to arbitrary precision provided $u_j > 0$ for all $j \in [M]$ and $c > 0$. Indeed, the only approximation step required is that of approximating the unique root of a continuous and strictly increasing scalar function. Thus, provided the $u_j$ themselves are differentiable, Corollary 7 combined with Proposition 11 yields a tractable and fully differentiable objective that can be used for training. More details on how this can be done, including an algorithm written in pseudocode, can be found in Appendix A. While somewhat analogous to the technique used in Clerico et al. (2022) to obtain derivatives of the one-dimensional kl-inverse, our proposition directly yields derivatives of the total risk by (implicitly) employing the envelope theorem (see for example Takayama, 1985). Since the proof of Proposition 11 is rather long and technical, we defer it to Appendix B.4.

Proposition 11. *Fix* $\boldsymbol{\ell} \in [0, \infty)^M$ *such that not all* $\ell_j$ *are equal, and define* $f_{\boldsymbol{\ell}} : \triangle_M \to [0, \infty)$ *by* $f_{\boldsymbol{\ell}}(\mathbf{v}) := \sum_{j=1}^M \ell_j v_j$.
*For all* $\tilde{\mathbf{u}} = (\mathbf{u}, c) \in \triangle^{>0}_M \times (0, \infty)$, *define* $\mathbf{v}^*(\tilde{\mathbf{u}}) := \mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c) \in \triangle_M$ *and let* $\mu^*(\tilde{\mathbf{u}}) \in (-\infty, -\max_j \ell_j)$ *be the unique solution to* $c = \phi_{\boldsymbol{\ell}}(\mu)$, *where* $\phi_{\boldsymbol{\ell}} : (-\infty, -\max_j \ell_j) \to \mathbb{R}$ *is given by* $\phi_{\boldsymbol{\ell}}(\mu) := \ln\big(-\sum_{j=1}^M \frac{u_j}{\mu+\ell_j}\big) + \sum_{j=1}^M u_j \ln(-(\mu+\ell_j))$, *which is continuous and strictly increasing. Then* $\mathbf{v}^*(\tilde{\mathbf{u}}) = \mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c)$ *is given by*

$$\mathbf{v}^{*}(\tilde{\mathbf{u}})_{j}=\frac{\lambda^{*}(\tilde{\mathbf{u}})u_{j}}{\mu^{*}(\tilde{\mathbf{u}})+\ell_{j}}\quad\text{for }j\in[M],\quad\text{where}\quad\lambda^{*}(\tilde{\mathbf{u}})=\left(\sum_{j=1}^{M}\frac{u_{j}}{\mu^{*}(\tilde{\mathbf{u}})+\ell_{j}}\right)^{-1}.$$

*Further, defining* $f^*_{\boldsymbol{\ell}} : \triangle^{>0}_M \times (0, \infty) \to [0, \infty)$ *by* $f^*_{\boldsymbol{\ell}}(\tilde{\mathbf{u}}) := f_{\boldsymbol{\ell}}(\mathbf{v}^*(\tilde{\mathbf{u}}))$, *we have that*

$$\frac{\partial f_{\boldsymbol{\ell}}^{*}}{\partial u_{j}}(\tilde{\mathbf{u}})=\lambda^{*}(\tilde{\mathbf{u}})\left(1+\ln\frac{u_{j}}{\mathbf{v}^{*}(\tilde{\mathbf{u}})_{j}}\right)\qquad\text{and}\qquad\frac{\partial f_{\boldsymbol{\ell}}^{*}}{\partial c}(\tilde{\mathbf{u}})=-\lambda^{*}(\tilde{\mathbf{u}}).$$

## 5 Perspectives

By abstracting to a general setting of discretised error types, we established a novel type of generalisation bound (Theorem 3) providing far richer information than existing PAC-Bayes bounds. Through our Corollary 7 and Proposition 11, our bound inspires a training algorithm (see Appendix A) suitable for many different learning problems, including structured output prediction (as investigated by Cantelobre et al., 2020, in the PAC-Bayes setting), multi-task learning and learning-to-learn (see *e.g.* Maurer et al., 2016). We will demonstrate these applications and our bound's utility for real-world learning problems in an empirical follow-up study.
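Stepping back briefly to Proposition 11: it reduces the computation of $\mathrm{kl}^{-1}_{\boldsymbol{\ell}}(\mathbf{u}|c)$ to finding the unique root of the scalar function $\phi_{\boldsymbol{\ell}}$. As a rough illustration (a pure-Python sketch with our own hypothetical names; the paper's Appendix A contains the authoritative pseudocode), one can bisect on $\phi_{\boldsymbol{\ell}}$ and read off $\mathbf{v}^*$ and the total-risk bound:

```python
import math

def kl_inverse_ell(u, ell, c, tol=1e-12):
    """Approximate v* = kl^{-1}_ell(u|c) from Proposition 11 by bisection on phi_ell.

    u   : empirical risk vector in the interior of the simplex (all u_j > 0)
    ell : loss vector, not all entries equal
    c   : radius of the kl ball, c > 0
    Returns (v_star, total_risk_bound) with total_risk_bound = f_ell(v_star).
    """
    top = -max(ell)  # phi_ell is defined and strictly increasing on (-inf, top)

    def phi(mu):
        s = -sum(uj / (mu + lj) for uj, lj in zip(u, ell))  # positive since mu + l_j < 0
        return math.log(s) + sum(uj * math.log(-(mu + lj)) for uj, lj in zip(u, ell))

    # Bracket the unique root of phi(mu) = c: phi -> 0 as mu -> -inf and -> +inf as mu -> top.
    hi = top - 1e-9
    lo = top - 1.0
    while phi(lo) > c:
        lo = top - 2 * (top - lo)
    while hi - lo > tol * max(1.0, abs(lo)):  # plain bisection
        mid = 0.5 * (lo + hi)
        if phi(mid) < c:
            lo = mid
        else:
            hi = mid
    mu_star = 0.5 * (lo + hi)

    lam = 1.0 / sum(uj / (mu_star + lj) for uj, lj in zip(u, ell))  # lambda* (negative)
    v_star = [lam * uj / (mu_star + lj) for uj, lj in zip(u, ell)]  # lies in the simplex
    return v_star, sum(lj * vj for lj, vj in zip(ell, v_star))
```

Given $\mu^*$ and $\lambda^*$, the partial derivatives stated in Proposition 11 follow directly, so the same routine could back a gradient-based training objective.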
Note we require i.i.d. data, which in practice is frequently not the case or is hard to verify. Further, the number of error types M must be finite. While in continuous scenarios it would be preferable to be able to quantify the entire distribution of loss values without having to discretise into finitely many error types, in the multiclass setting our framework is entirely suitable. ## References Pierre Alquier. User-friendly introduction to PAC-Bayes bounds. *arXiv preprint arXiv:2110.11216*, 2021. Amiran Ambroladze, Emilio Parrado-Hernández, and John Shawe-Taylor. Tighter PAC-Bayes bounds. In Bernhard Schölkopf, John C. Platt, and Thomas Hofmann (eds.), *Advances in Neural Information Processing* Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pp. 9–16. MIT Press, 2006. URL https: //proceedings.neurips.cc/paper/2006/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html. Loubna Benabbou and Pascal Lang. PAC-Bayesian generalization bound for multi-class learning. In NIPS 2017 Workshop. (Almost) 50 Shades of Bayesian Learning: PAC-Bayesian trends and insights, 2017. URL https://bguedj.github.io/nips2017/pdf/PAC-Bayes_2017_paper_3.pdf. Felix Biggs and Benjamin Guedj. Differentiable PAC-Bayes objectives with partially aggregated neural networks. *Entropy*, 23(10):1280, 2021. doi: 10.3390/e23101280. URL https://doi.org/10.3390/e23101280. Felix Biggs and Benjamin Guedj. On margins and derandomisation in PAC-Bayes. In *AISTATS*, 2022a. URL https://arxiv.org/abs/2107.03955. Felix Biggs and Benjamin Guedj. Non-vacuous generalisation bounds for shallow neural networks. *arXiv* preprint arXiv:2202.01627, 2022b. Luc Bégin, Pascal Germain, François Laviolette, and Jean-Francis Roy. PAC-Bayesian Bounds based on the Rényi Divergence. In Arthur Gretton and Christian C. 
Robert (eds.), *Proceedings of the 19th* International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pp. 435–444, Cadiz, Spain, 09–11 May 2016. PMLR. URL https://proceedings.mlr. press/v51/begin16.html. Théophile Cantelobre, Benjamin Guedj, María Pérez-Ortiz, and John Shawe-Taylor. A pac-bayesian perspective on structured prediction with implicit loss embeddings. *arXiv preprint arXiv:2012.03780*, 2020. Olivier Catoni. A PAC-Bayesian approach to adaptive classification. *preprint*, 840, 2003. Olivier Catoni. *Statistical Learning Theory and Stochastic Optimization: Ecole d'Eté de Probabilités de* Saint-Flour XXXI-2001. Springer, 2004. Olivier Catoni. *PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning*, volume 56 of *Institute of Mathematical Statistics (IMS) Lecture Notes - Monograph Series*. Institute of Mathematical Statistics, 2007. ISBN 9780940600720. URL https://books.google.fr/books?id= acnaAAAAMAAJ. Eugenio Clerico, George Deligiannidis, and Arnaud Doucet. Conditionally gaussian pac-bayes. In *International* Conference on Artificial Intelligence and Statistics, pp. 2311–2329. PMLR, 2022. Imre Csiszár. I-divergence geometry of probability distributions and minimization problems. *The Annals of* Probability, pp. 146–158, 1975. MD Donsker and SRS Varadhan. Large deviations for Markov processes and the asymptotic evaluation of certain markov process expectations for large times. In *Probabilistic Methods in Differential Equations*, pp. 82–88. Springer, 1975. Gintare Karolina Dziugaite and Daniel M Roy. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In Conference on Uncertainty in Artificial Intelligence [UAI], 2017. Gintare Karolina Dziugaite and Daniel M. Roy. Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of entropy-SGD and data-dependent priors. 
In Jennifer G. Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning, ICML 2018,* Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of *Proceedings of Machine Learning* Research, pp. 1376–1385. PMLR, 2018. URL http://proceedings.mlr.press/v80/dziugaite18a.html. Gintare Karolina Dziugaite, Kyle Hsu, Waseem Gharbieh, Gabriel Arpino, and Daniel Roy. On the role of data in PAC-Bayes. In Arindam Banerjee and Kenji Fukumizu (eds.), The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pp. 604–612. PMLR, 2021. URL http://proceedings.mlr. press/v130/karolina-dziugaite21a.html. Vasilii Feofanov, Emilie Devijver, and Massih-Reza Amini. Transductive bounds for the multi-class majority vote classifier. *Proceedings of the AAAI Conference on Artificial Intelligence*, 33:3566–3573, 2019. doi: 10.1609/aaai.v33i01.33013566. URL https://ojs.aaai.org/index.php/AAAI/article/view/4236. Benjamin Guedj. A Primer on PAC-Bayesian Learning. In *Proceedings of the second congress of the French* Mathematical Society, 2019. URL https://arxiv.org/abs/1901.05353. Sokol Koço and Cécile Capponi. On multi-class classification through the minimization of the confusion matrix norm. In Cheng Soon Ong and Tu Bao Ho (eds.), *Proceedings of the 5th Asian Conference on Machine* Learning, volume 29 of *Proceedings of Machine Learning Research*, pp. 277–292, Australian National University, Canberra, Australia, 13–15 Nov 2013. PMLR. URL https://proceedings.mlr.press/v29/ Koco13.html. François Laviolette, Emilie Morvant, Liva Ralaivola, and Jean-Francis Roy. Risk upper bounds for general ensemble methods with an application to multiclass classification. *Neurocomputing*, 219:15–25, 2017. ISSN 0925-2312. doi: https://doi.org/10.1016/j.neucom.2016.09.016. 
URL https://www.sciencedirect.com/ science/article/pii/S0925231216310177. Gaël Letarte, Pascal Germain, Benjamin Guedj, and Francois Laviolette. Dichotomize and generalize: PACBayesian binary activated deep neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems 32*, pp. 6872–6882. Curran Associates, Inc., 2019. Guy Lever, François Laviolette, and John Shawe-Taylor. Distribution-dependent PAC-Bayes priors. In International Conference on Algorithmic Learning Theory, pp. 119–133. Springer, 2010. Guy Lever, François Laviolette, and John Shawe-Taylor. Tighter PAC-Bayes bounds through distributiondependent priors. *Theoretical Computer Science*, 473:4–28, February 2013. ISSN 0304-3975. doi: 10.1016/j. tcs.2012.10.013. URL https://linkinghub.elsevier.com/retrieve/pii/S0304397512009346. Andreas Maurer. A note on the PAC-Bayesian theorem. *arXiv preprint cs/0411099*, 2004. Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. The benefit of multitask representation learning. *J. Mach. Learn. Res.*, 17:81:1–81:32, 2016. URL http://jmlr.org/papers/v17/15-242. html. David A McAllester. Some PAC-Bayesian theorems. In *Proceedings of the eleventh annual conference on* Computational Learning Theory, pp. 230–234. ACM, 1998. David A McAllester. PAC-Bayesian model averaging. In Proceedings of the twelfth annual conference on Computational Learning Theory, pp. 164–170. ACM, 1999. Shakir Mohamed, Mihaela Rosca, Michael Figurnov, and Andriy Mnih. Monte carlo gradient estimation in machine learning. *J. Mach. Learn. Res.*, 21(132):1–62, 2020. Emilie Morvant, Sokol Koço, and Liva Ralaivola. PAC-Bayesian generalization bound on confusion matrix for multi-class classification. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress, 2012. 
URL http://icml.cc/2012/papers/434.pdf. Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Skz_WfbCZ. Emilio Parrado-Hernández, Amiran Ambroladze, John Shawe-Taylor, and Shiliang Sun. PAC-Bayes bounds with data dependent priors. *J. Mach. Learn. Res.*, 13:3507–3531, 2012. URL http://dl.acm.org/citation.cfm?id=2503353. María Pérez-Ortiz, Omar Rivasplata, Benjamin Guedj, Matthew Gleeson, Jingyu Zhang, John Shawe-Taylor, Miroslaw Bober, and Josef Kittler. Learning PAC-Bayes priors for probabilistic neural networks. *arXiv preprint arXiv:2109.10304*, 2021. María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, and Csaba Szepesvári. Tighter risk certificates for neural networks. *Journal of Machine Learning Research*, 22(227):1–40, 2021. URL http://jmlr.org/papers/v22/20-879.html. Omar Rivasplata, Csaba Szepesvári, John Shawe-Taylor, Emilio Parrado-Hernández, and Shiliang Sun. PAC-Bayes bounds for stable algorithms with instance-dependent priors. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 9234–9244, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/386854131f58a556343e056f03626e00-Abstract.html. Matthias Seeger. PAC-Bayesian generalisation error bounds for Gaussian process classification. *Journal of Machine Learning Research*, 3(Oct):233–269, 2002. Akira Takayama. *Mathematical economics*. Cambridge University Press, 1985. Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, and Peter Orbanz.
Non-vacuous generalization bounds at the ImageNet scale: a PAC-Bayesian compression approach. In *7th International Conference on* Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=BJgqqsAct7.
Review 1:

Summary: The paper provides generalization bounds within the PAC-learning paradigm for multiclass classification by extending the work by Bégin et al. (2016), which was developed for binary classification. The extension also includes considering regression losses.

Strengths and Weaknesses:

Strengths
1. The work is presented nicely within the context of prior works.
2. The paper is well written and straightforward to follow.

Weaknesses
1. Experimental validation is missing.
2. The work is primarily based on prior results and it is not clear how significant the contribution is.
3. The estimation of the term I_d(m, β) is not provided and hence it is not clear how tight the upper bound is and whether it is useful.

Requested Changes: My research area is not theoretical and hence I am not fully sure about my assessment. I tried to follow the paper and the proofs seem correct to me, but I have two concerns:
1. The work is primarily based on previous works and it is not clear to me how impactful or novel the contributions of the paper are.
2. TMLR's audience includes many people with empirical background. I would like to see empirical explorations to validate the results both using synthetic data and real-world datasets. I think providing such results and validating the paper will significantly improve the contribution of the submission.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper establishes PAC-Bayes generalization bounds for multi-class and discretized regression. The main results concern the error bounds on disjoint partitions for the multi-class label set. The generalization error bound in this paper measures how far the true confusion matrix differs from the empirical one provided a user-specified metric. Then, the generalized theorem is applied to the KL-divergence. Finally, this paper also provides the construction of differentiable training objectives based on the new KL-divergence bound.
Strengths and Weaknesses:
1. The proof of Theorem 3 follows the same line as the proof of Theorem 1 in Bégin et al. (2016), with a slight modification for the usage of the randomization technique developed in Maurer (2004) for the soft classifiers. In my opinion, the main results of this paper are a direct combination of well-known facts. Therefore, the novelty of this paper seems to be insufficient.
2. In Section 4, the authors put forward the definition of kl_l^{-1}(u|c) to derive the tightest bounds. But kl_l^{-1}(u|c) seems complicated to compute due to the nonlinearity of the KL-divergence. So it is unclear to me how to compute kl_l^{-1}(u|c) from the data.

Requested Changes: In Section 4, the authors discuss how to turn these new bounds into tractable training objective functions. But no evaluations and comparisons are presented here. I think it is necessary to conduct experiments to compare the accuracy of the learning algorithms based on the new training objective functions to the ones based on the KL-divergence, see, e.g., Section 4 in Bégin et al. (2016).

Broader Impact Concerns: I have no concerns here.

==================================================

Review 3:

Summary: The authors establish new generalization error bounds for multiclass classification. This is done by extending some aspects of PAC-Bayes theory and by abstracting to a more general setting of so-called discretised error types.

Strengths and Weaknesses: Let me start by saying that this is my first review for TMLR. Hence, I am not sure what the standards of this journal are. I am judging novelty based on my experience as a reviewer for machine learning conferences.

Strengths:
+ The paper is certainly timely as there's now tremendous interest in PAC-Bayes bounds.
+ The paper is very clearly written and the contributions are evident.

Weaknesses:
- Unfortunately, the paper is not up to the standards for acceptance of a top machine learning conference.
The derivations of the main theorem in Theorem 3 follow almost verbatim from standard PAC-Bayes theory. This is done by a standard change of measure technique and subsequent control of a VC-like term. The generalization of Theorem 3 from Theorem 1 (Bégin's work) is too minor to warrant publication, as the Binomial in the final term used for binary classification is generalized to a Multinomial for multiclass classification.
- There is now a whole host of methods to bound generalization error beyond using PAC-Bayes theory. It would have been good if the authors could compare what they have to the state-of-the-art in the literature, e.g., VC-type bounds, information-theoretic type bounds (of Russo and Zou, and Xu and Raginsky, etc.).
- A numerical example to illustrate the theory developed would be useful in view of the fact that the theory is somewhat weak.

Requested Changes: Unfortunately, the contributions in the manuscript are not sufficiently novel for publication even after aesthetic changes are made. I believe that the authors should try to pursue stronger results to have a chance at publication. Hence, I have no suggested changes, as I don't think that any minor changes will have any material change on the decision.

Broader Impact Concerns: NIL

==================================================

Metareview:

Recommendation: Reject

Comment: The authors are encouraged to submit a new version, which provides more context as to why these contributions are not simple variations of existing results, as well as evidence regarding the implications of the results.

==================================================
# A Scalable Finite Difference Method For Deep Reinforcement Learning

Anonymous authors

Paper under double-blind review

## Abstract

Several low-bandwidth distributable black-box optimization algorithms in the family of finite differences, such as Evolution Strategies, have recently been shown to perform nearly as well as tailored Reinforcement Learning methods in some Reinforcement Learning domains. One shortcoming of these black-box methods is that they must collect information about the structure of the return function at every update, and can often employ only information drawn from a distribution centered around the current parameters. As a result, when these algorithms are distributed across many machines, a significant portion of total runtime may be spent with many machines idle, waiting for a final return and then for an update to be calculated. In this work we introduce a novel method to use older data in finite difference algorithms, which produces a scalable algorithm that avoids significant idle time or wasted computation.

## 1 Introduction

Reinforcement learning (RL) is a sub-field of machine learning that is concerned with finding an optimal policy π∗ to direct an agent through a Markov decision process (MDP) to maximize an objective J(π). In this work, we are interested in methods relating to the *policy gradient* (Sutton et al., 1999), where the policy is parameterized by a set of real parameters θ ∈ R^d. These methods search for π∗ by tuning θ through gradient ascent on J(πθ).

Black-box methods for policy optimization are increasingly common in the RL literature (Salimans et al., 2017; Mania et al., 2018; Such et al., 2017), where they can be competitive against more popular approaches under certain conditions (Stulp & Sigaud, 2012a). One such method is *Evolution Strategies* (Salimans et al., 2017) (ES).
ES is a distributable learning algorithm that optimizes a population of policies neighboring πθ by stochastically sampling perturbations from a distribution with mean θ and maximizing the expected reward of these perturbations. Unlike most purpose-built RL algorithms, ES does not take advantage of the minutiae of the MDP framework, instead leveraging only whole interactions with the decision process to compute updates. In spite of this comparative sparsity of information, ES has been shown to be competitive with powerful learning algorithms like TRPO (Schulman et al., 2015) and A2C (Mnih et al., 2016) in many environments. ES has a particular advantage when transmitting data to asynchronous machines connected to the system is costly because it is able to compress evaluations of a policy in an MDP to a pair of values, one integer and one floating-point, which requires very little network bandwidth to transmit. As a result, the vast majority of the network communication involved in the operation of the algorithm is in the transmission of parameter updates.

One shortcoming of ES is that all information used to compute an update must come from a perturbation of the current parameters. This places severe limitations on the speed at which ES can find an optimal policy because once a sufficient number of perturbations have been dispatched, connected workers must either wait for an update, cut off trajectories early, or have the information they collected discarded. In this work we introduce a method to incorporate information from prior versions of the policy by computing difference quotients even for out-of-distribution samples. This enables workers to compute the return of policies constantly, eliminating idle time.

## 2 Background

In this work we consider the undiscounted-return episodic online reinforcement learning context (see Sutton & Barto (2022, p. 68)).
## 2.1 Finite Difference Algorithms

In reinforcement learning, finite difference algorithms adjust the parameters of a policy such that an objective $J(\pi_\theta) = \mathbb{E}_{\tau\sim\pi_\theta}[R(\tau)]$ known as the *expected return* is maximized. In this expression, R(τ) is called the *return* of a trajectory τ, and the policy is a function πθ : S → A which maps states of a decision process to actions. The interaction of the policy πθ and the process produces the distribution of trajectories over which the expected return is defined. To simplify our notation, we abbreviate J(πθ) to J(θ). In this work we are only concerned with finite trajectories, i.e. finite sequences of state, action and reward triples of the form τ = {(s0, a0, r0), (s1, a1, r1), . . . , (sT, aT, rT)}, created by the interaction between an agent and an MDP starting from s0 and continuing until a terminal state is reached.

To maximize J(θ), the gradient ∇J(θu) can be used to iteratively tune the parameters θu, where u is the number of updates that have been applied to θ by the learning process. The simplest update rule for θu is

$$\theta_{u+1}=\theta_{u}+\eta\nabla J(\theta_{u}),$$

where η is a hyper-parameter called the *learning rate*. This update rule is known as stochastic gradient descent, although in the RL setting it is used to maximize an objective rather than minimize one, suggesting "ascent" rather than descent of the function J. Finite difference methods estimate the gradient ∇J(θu) by perturbing θu to form new parameters

$$\alpha=\theta_{u}+\delta.$$

A trajectory τα is then collected with the resulting policy and reward is computed using

$$R(\tau_{\alpha})=\sum_{i=0}^{T}r_{i}.$$

In this work, only a single trajectory is used to evaluate each perturbed parameter set α, so we may refer to R(τα) as R(α) without loss of specificity.
The scaled change in reward for a perturbation is then

$$\Delta R={\frac{R(\alpha)-R_{\mathrm{ref}}}{||\delta||}},$$

where Rref = R(θu) in the forward difference case or Rref = R(θu − δ) in the central difference (also known as antithetic sampling) case, where a factor of 1/2 is introduced to account for the alternative method of approximation (Peters & Schaal, 2008; Salimans et al., 2017). A gradient estimate gFD can then be accumulated over N perturbations as

$$\mathbf{g}_{\mathrm{FD}}={\frac{1}{N}}\sum_{i=1}^{N}\Delta R_{i}{\frac{\delta_{i}}{\left\|\delta_{i}\right\|}},\tag{1}$$

where N is a hyper-parameter called the *batch size*.

## 2.2 Evolution Strategies

ES (Salimans et al., 2017) is a black box method for optimizing a distribution of policies that estimates a quantity similar to gFD by stochastically perturbing the policy parameters θ with multi-variate Gaussian noise

$$\alpha=\theta_{u}+\sigma\epsilon,\tag{2}$$

where ϵ ∼ N(0, I) ∈ R^dim(θ) and 0 < σ < ∞. The ES gradient estimator is then

$$\mathbf{g}_{\mathrm{ES}}={\frac{1}{\sigma}}\mathbb{E}[R(\alpha)\epsilon]\approx{\frac{1}{\sigma N}}\sum_{i=1}^{N}R(\alpha_{i})\epsilon_{i}.$$

However, in practice Salimans et al. (2017) employed an unadjusted form of antithetic sampling in their implementation of ES, which changes gES to

$$\mathbf{g}_{\mathrm{ES}}=\frac{1}{\sigma}\mathbb{E}[(R(\theta_{u}+\delta)-R(\theta_{u}-\delta))\epsilon].$$

Notice that, while similar to gFD, gES does not scale its gradient estimate by the size of each perturbation as in (1), and in the antithetic case it also ignores the usual scaling factor of 1/2. In spite of these differences, ES still approximates a central-difference estimator of the policy gradient, as shown in recent work (Raisbeck et al., 2020). ES can be made into a highly scalable distributed algorithm by collecting perturbations ϵ and their associated rewards R(α) on independent asynchronous CPUs.
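To make the antithetic estimator gES concrete, here is a minimal single-process sketch (pure Python; `R` is a toy stand-in for an episodic return, and in the distributed setting each (ε, R(α)) pair would instead arrive from a worker):

```python
import random

def es_gradient(theta, R, sigma=0.1, n_pairs=50, rng=random):
    """Antithetic ES estimate g_ES ≈ (1/(σN)) Σ_i (R(θ+σε_i) − R(θ−σε_i)) ε_i.

    Matches the unadjusted antithetic form above: no 1/2 factor and no
    scaling by the perturbation size, unlike g_FD in (1).
    """
    d = len(theta)
    g = [0.0] * d
    for _ in range(n_pairs):
        eps = [rng.gauss(0.0, 1.0) for _ in range(d)]
        r_plus = R([t + sigma * e for t, e in zip(theta, eps)])
        r_minus = R([t - sigma * e for t, e in zip(theta, eps)])
        for j in range(d):
            g[j] += (r_plus - r_minus) * eps[j]
    return [gj / (sigma * n_pairs) for gj in g]
```

A gradient-ascent step is then θu+1 = θu + η g. For a quadratic toy return R(θ) = −||θ||², the expected estimate is −4θ, i.e. twice the true gradient −2θ, which illustrates the omitted 1/2 factor discussed above.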
This enables a learner CPU to collect (ϵ, R(α)) pairs from each worker CPU and compute gES to update θu as soon as a sufficient number of returns have arrived. In addition to the usual advantages of parallel computation, this method of distribution is desirable in cases where network communication is costly, because each pair of data can be compressed into two values, which is significantly less than what workers in other distributed RL algorithms like R2D2 (Kapturowski et al., 2019), SEED RL (Espeholt et al., 2019), IMPALA (Espeholt et al., 2018), and others must transmit to their respective learners.

## 3 Related Work

Numerous studies have investigated the application of black-box optimization algorithms to RL tasks. Stanley & Miikkulainen (2002) and Stanley et al. (2009) studied approaches to neuro-evolution which evolve both the structure of the policies and their parameters. Hausknecht et al. (2014) successfully applied neuro-evolution to Atari domains. Stulp & Sigaud (2012b) and Hansen et al. (2003) investigated methods of dynamically adapting the covariance matrix of the sampling distribution to facilitate faster learning under various conditions. Sehnke et al. (2010) proposed a method related to ES which estimates a likelihood gradient in parameter space. This work builds on ES (Salimans et al., 2017), which was able to compete with powerful RL algorithms in MuJoCo (Todorov et al., 2012) and Atari (Bellemare et al., 2012) domains. Also related to this work is importance mixing (Sun et al., 2009), which was applied in conjunction with ES by Pourchot et al. (2018). Their method continually reuses information from prior perturbations of the policy so long as they are proximal to the sampling distribution at the current update. Liu et al. (2019) established a method analogous to TRPO (Schulman et al., 2015) which enables sample reuse in ES by optimizing a surrogate objective.
The primary contribution of this work is showing that finite difference algorithms can use information from perturbations which are not proximal to the sampling distribution, at the cost of introducing a bias to the gradient approximation.

## 4 Learning Algorithm

A core issue in the implementation of ES is that trajectories in a decision process typically do not require a uniform amount of time to collect; changes in the agent and stochasticity in the environment can lead to dramatic differences in collection time. This means that some workers may take more time than others to return information to the learner, and this asynchronicity could lead to information loss if the learner has computed a new policy by the time a worker finishes testing a perturbation. To address this problem, Salimans et al. (2017) dynamically limited the number of time-steps an agent is allowed to interact with the decision process before a trajectory is prematurely terminated. While this solution reduces the problem of some machines waiting idle for potentially slow trajectories on other machines to be collected, it introduces a bias to the information used to estimate the reward gradient: the only trajectories with complete information are those that do not get cut off early, which artificially favors shorter trajectories. Further, this approach can only guarantee 50% usage of connected workers in the worst case (Salimans et al., 2017).

## 4.1 Using Delayed Information

To improve worst-case resource use and reduce the bias introduced by early termination, we introduce an approach that enables workers to continually compute and test parameters α without terminating episodes early or discarding data computed using perturbations of previous parameters. To do this, we incorporate returns computed from perturbations of prior policy parameters θu−n when estimating gFD, where θu−n is an earlier set of parameters (0 < n ≤ u).
This is possible if we treat perturbed parameters α from prior updates θu−n as perturbations of the current update θu which have also been biased by the sum ν of the updates to θ computed by the learning algorithm over the prior n update steps. We begin with a forward difference estimator of the policy gradient where we perturb the policy parameters in the same manner as ES with δ = σϵ,

$$\mathbf{g}_{\rm FD}=\frac{1}{N}\sum_{i=1}^{N}\left[R(\alpha_{i})-R(\theta_{u})\right]\frac{\sigma\epsilon_{i}}{||\sigma\epsilon_{i}||^{2}}.\tag{3}$$

Then, to allow returns from θu−n to contribute to gFD, we treat a reward sampled from a perturbed old policy R(θu−n + σϵ) as a reward sampled from the current policy whose perturbation ϵ has been biased:

$$R(\theta_{u-n}+\sigma\epsilon)=R(\theta_{u}+(\sigma\epsilon-\nu)),$$

where the bias ν is the difference between θu and θu−n,

$$\nu=\theta_{u}-\theta_{u-n}.$$

This allows us to treat all perturbations equally:

$$\begin{array}{r l}{\alpha}&{=\theta_{u}+\sigma\epsilon-\nu}\\ &{=\theta_{u}+\theta_{u-n}+\sigma\epsilon-\theta_{u}}\\ &{=\theta_{u-n}+\sigma\epsilon.}\end{array}$$

Note that for n = 0 this reduces to the perturbations used by ES in (2). Next we modify (3) to allow for returns from any θu−n by replacing the Gaussian noise ϵ with the biased Gaussian noise λ, where

$$\begin{array}{l}{{\lambda=\sigma\epsilon-\nu}}\\ {{\quad=\sigma\epsilon+\theta_{u-n}-\theta_{u},}}\end{array}$$

which yields our method to approximate ∇J(θu):

$${\bf g}_{\mathrm{DFD}}=\frac{1}{N}\sum_{i=1}^{N}[R(\alpha_{i})-R(\theta_{u})]\frac{\lambda_{i}}{||\lambda_{i}||^{2}}.\tag{4}$$

We call this method the *delayed finite difference* (DFD) gradient estimator.

## 4.2 DFD Implementation

We now provide algorithms for the central learner and asynchronous workers for DFD.
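As a numerical sanity check, the estimator in Eq. (4) can be exercised on a toy objective. This is a minimal sketch under illustrative assumptions: the quadratic `reward` function and a batch drawn from a single delayed parameter set, whereas in the actual system a batch may mix returns from several past updates.

```python
import numpy as np

rng = np.random.default_rng(1)

def reward(theta):
    # Illustrative deterministic return, maximized at theta = [1, 1, 1].
    return -np.sum((theta - 1.0) ** 2)

def g_dfd(theta_u, batch, sigma=0.02):
    # Eq. (4): each batch entry is (eps, R(theta_old + sigma*eps), theta_old).
    r_cur = reward(theta_u)
    g = np.zeros_like(theta_u)
    for eps, r, theta_old in batch:
        # Biased noise: lambda = sigma*eps - nu = sigma*eps + theta_{u-n} - theta_u.
        lam = sigma * eps + theta_old - theta_u
        g += (r - r_cur) * lam / np.dot(lam, lam)
    return g / len(batch)

# Returns were collected under an earlier parameter set theta_{u-n} ...
theta_old = np.zeros(3)
# ... and are reused after one small update has been applied.
theta_u = theta_old + 0.01
batch = []
for _ in range(64):
    eps = rng.standard_normal(3)
    batch.append((eps, reward(theta_old + 0.02 * eps), theta_old))
g = g_dfd(theta_u, batch)  # points roughly toward the maximizer at [1, 1, 1]
```

Despite every return in the batch being delayed, the resulting estimate still has a positive component along the true ascent direction on this toy objective.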
Data collected from our workers will contain a perturbation, its cumulative reward, the length of the trajectory on which it was evaluated, and the update to θ that was perturbed, i.e. R(α), ϵ, T, and u. Note that this is two more values than ES workers must transmit after each episode. To improve consistency in the magnitude of our gradient estimates, we standardize each batch of returns by subtracting the sample mean µR and dividing by the sample standard deviation σR of the rewards in that batch.

**Algorithm 1: DFD Learner**
- Input: σ, N, Tlim, update rule
- Initialize: θ0, Ttotal, u
- while Ttotal < Tlim do
  - Transmit θu, u to workers
  - Compute R(θu)
  - Collect N evaluations from workers
  - Compute batch statistics µR, σR
  - for i = 1 . . . N do
    - Ttotal ← Ttotal + Ti
    - λi = σϵi + θui − θu
    - R(αi) ← (R(αi) − µR)/σR
  - end for
  - Compute gDFD via (4)
  - Compute θu+1 via update rule
  - u ← u + 1
- end while

**Algorithm 2: DFD Worker**
- Input: σ
- while running do
  - Receive θu, u from learner
  - Sample ϵ ∼ N (0, I)
  - Compute α = θu + σϵ
  - Collect R(α), T from decision process
  - Transmit R(α), ϵ, T, u to learner
- end while

In settings where N perturbations can be evaluated by workers faster than the policy's reward R(θu) can be computed, evaluating R(θu) on the learner may result in unnecessary delays at each update while the learner tests the policy.
A simple approach would be to move the evaluation of R(θu) to the worker, such that occasionally R(θu) is collected instead of R(α); but since R(θu) must be known by the learner prior to each update, this would not alleviate the pausing issue. An alternative approach is to approximate R(θu) on the learner as the average of rewards R(α) from perturbations of the current policy, instead of measuring R(θu) directly. In cases where this is impossible (e.g. the batch contains only delayed data), or if there is not a sufficient number of perturbations from the current policy to compute a meaningful estimate of R(θu), the average reward over the entire batch of returns can be used as a biased estimate of R(θu) instead. The exact approach we used to estimate R(θu) is described in appendix A.2.

## 5 The Dynamics Of Delayed Information

When we consider incorporating delayed information into a finite differences gradient approximation, a pair of questions arise: how does the use of delayed information change our approximation, and, in particular, how does it affect the quality and bias of the gradient? We answer these questions in several parts. First, we examine the sources of bias in a standard finite differences gradient approximation. Second, we examine the bias introduced by the adjustments made by Salimans et al. (2017, Section 4) to resolve the efficiency issues mentioned in section 4. Third, we discuss the changes introduced by the inclusion of finite difference partial derivative approximations created using old perturbations. In particular, we discuss the way this removes a certain kind of bias, the new biases it introduces, and the increases in the speed of learning suggested by theoretical considerations and empirical results. In the end, we conclude that it is not possible to evaluate the general merits of the adjustment from a purely theoretical perspective.
As a result, our theoretical analysis consists of a qualitative description of the effect of various considerations on the performance of an optimizer. For simplicity, in this section we assume that R is a deterministic function of the policy. While the finite differences algorithm for partial derivative and gradient approximation is well-founded, it is not an unbiased estimator: by the definition of a differentiable function, finite difference approximations are only guaranteed to converge to the true gradient *in the limit* as the perturbation size goes to zero. For any fixed perturbation size, the finite differences are only an approximation to the true partial derivative. Interestingly, this is not true of the related evolution strategies gradient (Salimans et al., 2017); that is, although the two gradient approximators converge to one another under certain conditions (Raisbeck et al., 2020), the ES gradient approximator is an unbiased estimator of the "search gradient" (Wierstra et al., 2014), while the finite differences approximator is a biased estimator of the true gradient. Importantly, however, differentiable functions are defined by the property that this bias decreases with the size of the perturbations. Because of the inconsistent rate at which trajectories can be collected from a decision process, performing standard finite differences has an extremely poor worst-case performance, as described in section 4 and Salimans et al. (2017). To resolve this, Salimans et al. (2017) place a dynamically set limit on the length of a decision process. While their method guarantees a worst-case resource utilization of 50%, it also introduces a significant bias: information from episodes which happen to be longer is disproportionately ignored. In RL, where return is often significantly influenced by the length of an episode, this has the potential to be a serious problem, above and beyond the normal biases of a finite differences gradient approximator.
That is the state of affairs to which this work responds: is it possible to efficiently perform finite differences without discarding information from some episodes? We have answered in the affirmative, by noting that perturbations of previous sets of parameters θu−k can still furnish reasonable approximations of the partial derivative under some circumstances. In particular, by definition, for any γ there is some radius in which the partial derivative approximations are γ-accurate, and under many circumstances the information from perturbations of earlier sets of parameters will satisfy this requirement. For example, the triangle inequality guarantees that if the distance between the current parameters and a past parameter vector ||θu − θu−k|| and the perturbation size are each smaller than the radius β in which partial derivative approximations are γ-accurate, for a chosen γ, then these will provide a "good" contribution to the overall gradient estimation. Even when this requirement does not hold formally, well-behaved functions (e.g. Lipschitz) often have the property that finite differences with magnitude greater than β remain reasonable approximations of the partial derivative within a larger radius about the point. This contribution, however, comes with a caveat: for a symmetric distribution of perturbations, such as the Gaussian distributions employed in Evolution Strategies, the perturbations of prior parameters will not be uniformly distributed with respect to direction around the current parameters. Instead, they are directionally similar to the path of updates between θu and θu−k. In particular, for any vector v, because the distribution of ϵ is Gaussian, we have for current perturbations α

E [⟨(α − θu), v⟩] = E [⟨ϵ, v⟩] = 0.
For a delayed perturbation α′, we can see that the difference in policy parameters has a non-zero effect:

$$\begin{array}{r l}{\mathbb{E}\left[\langle(\alpha^{\prime}-\theta_{u}),(\theta_{u}-\theta_{u-k})\rangle\right]}&{=\mathbb{E}\left[\langle(\epsilon-\theta_{u}+\theta_{u-k}),(\theta_{u}-\theta_{u-k})\rangle\right]}\\ &{=\mathbb{E}\left[\langle\epsilon,(\theta_{u}-\theta_{u-k})\rangle\right]+\mathbb{E}\left[\langle(\theta_{u-k}-\theta_{u}),(\theta_{u}-\theta_{u-k})\rangle\right]}\\ &{=0+\langle(\theta_{u-k}-\theta_{u}),(\theta_{u}-\theta_{u-k})\rangle}\\ &{=-\langle(\theta_{u}-\theta_{u-k}),(\theta_{u}-\theta_{u-k})\rangle}\\ &{=-||\theta_{u}-\theta_{u-k}||^{2}.}\end{array}$$

That is, delayed perturbations have a component which is biased in the direction of the parameters from which they were drawn. In particular, because of the use of an average in the finite differences approximator which we used (and in Evolution Strategies), this means that, even if the delayed perturbations provide good partial derivative approximations, the resulting gradient will be more significant in the direction of the update from which the perturbation was drawn. Qualitatively, this bias has multiple effects. First, one might imagine that this could serve as a "check", verifying that the parameter changes undertaken improved the performance of the policy. Second, if the updates have worked to improve our function, we might find that the incorporation of delayed information acts as a kind of "momentum" (Rumelhart et al., 1986), providing an additional impetus in the direction of previous updates, under the condition that reward has improved as a result of the updates undertaken. It is difficult to make a general assertion about the net effect of these changes on the performance of the underlying finite differences optimizer from theory alone. In particular, the quality of partial derivative approximations produced by delayed information varies with the return function R, the size of perturbations σ, the parameters θu, the number of connected machines, the batch size N, and the number of updates k which have elapsed since the perturbation was generated, which is a function of the machines running the worker algorithm.
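The inner-product identity above can be checked numerically. A small Monte Carlo sketch, with arbitrary (assumed) dimensions and seed, draws delayed perturbations α′ = θu−k + σϵ and measures their average component along the update path; since the Gaussian noise has zero mean, the expectation reduces to −||θu − θu−k||² exactly as in the derivation.

```python
import numpy as np

rng = np.random.default_rng(2)

d, sigma = 10, 0.02
theta_u = rng.standard_normal(d)    # current parameters theta_u
theta_old = rng.standard_normal(d)  # earlier parameters theta_{u-k}
v = theta_u - theta_old             # the path of updates

# Monte Carlo estimate of E[<alpha' - theta_u, theta_u - theta_{u-k}>]
# over delayed perturbations alpha' = theta_{u-k} + sigma * eps.
alpha_minus_theta = theta_old + sigma * rng.standard_normal((100_000, d)) - theta_u
mean_inner = (alpha_minus_theta @ v).mean()

# The derivation predicts -||theta_u - theta_{u-k}||^2.
expected = -np.dot(v, v)
```

Here `mean_inner` agrees with `expected` up to Monte Carlo noise, confirming the negative bias toward the old parameters.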
As we demonstrate in the next section, empirical evidence indicates that the inclusion of delayed information is often beneficial. This holds *both* when the computational budget (total environment interactions) is held constant *and* when the number of updates is held constant; that is, the benefits of including information from delayed perturbations are not *just* in the greater efficiency of the system, but *also* in the direction of the updates themselves, which is likely a combination of two factors: 1) not ignoring information which would be drawn from longer episodes, and 2) the additional "momentum".

## 6 Experiments

To test our method, we studied ES, DFD, and a modification of our method which discards delayed information, which we refer to as FD, in four of the MuJoCo (Todorov et al., 2012) continuous control environments. Further, we ran PPO (Schulman et al., 2017) under the same conditions and in the same environments as the other methods to provide a point of reference for the performance of our method relative to modern RL algorithms. All methods were tested across the same 10 random seeds. At every update, the policy was used to collect 10 trajectories from the environment. The reward for the policy at that update was then computed as the average cumulative reward over those trajectories. Scores were normalized using min-max normalization relative to the highest average reward over the final 1M time-steps of training and the lowest average initial reward over the first 1M time-steps of training for each environment. All training curves were generated using the RLiable library (Agarwal et al., 2021), which plots the Inter-Quartile Mean (IQM) and point-wise 95% percentile stratified bootstrap confidence intervals over each random seed for each method. Hyper-parameters and full experimental details can be found in appendix A.
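The score normalization described above can be sketched as a small helper. The function name and arguments are illustrative assumptions; the `high` and `low` anchors correspond to the per-environment extremes described in the text.

```python
import numpy as np

def min_max_normalize(scores, high, low):
    """Map raw rewards onto [0, 1] for a single environment.

    high: highest average reward over the final 1M time-steps of training;
    low:  lowest average initial reward over the first 1M time-steps.
    """
    return (np.asarray(scores, dtype=float) - low) / (high - low)
```

Normalizing each method's curve against the same `high` and `low` makes scores comparable across environments with very different reward scales.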
## 6.1 Method Performance Study

Figure 1: **Methods Comparison.** An empirical study comparing ES, FD, and DFD in 4 continuous control tasks. Dotted lines represent the Inter-Quartile Mean and shaded regions show the point-wise 95% percentile stratified bootstrap confidence intervals over 10 random seeds.

We began by testing the performance of ES, FD, and DFD in these environments. We found that incorporating delayed information was typically beneficial to FD, resulting in significantly higher final rewards in two of the four environments. Further, learning appeared to occur faster under DFD than under FD in all environments, as seen in Figure 1.

Table 1: Policy reward over the final 1M time-steps per method, measured across 10 seeds: mean (standard deviation)

| Environment | PPO | ES | FD | DFD |
|----------------|------------------|----------------|----------------|----------------|
| Ant-v2 | 6601.32 (112.02) | 1657.4 (837.4) | 1717.4 (154.4) | 2345.3 (552.4) |
| HalfCheetah-v2 | 9230.85 (991.79) | 4821.9 (697.0) | 5266.2 (759.0) | 5110.0 (811.3) |
| Hopper-v2 | 2062.83 (359.64) | 1812.9 (671.6) | 3313.9 (137.2) | 3392.6 (200.6) |
| Walker2d-v2 | 5815.25 (610.37) | 1720.3 (731.1) | 2108.2 (603.4) | 2495.5 (756.6) |

To provide a point of reference to modern RL methods, we tested PPO under the same conditions as the other methods we examined. The average cumulative reward of the policy over the final 1M time-steps was measured for each method in each experiment, and the results can be found in Table 1. We found that DFD was typically superior to ES and was able to surpass PPO in one environment, though it was worse than PPO in the remaining three environments.
Table 2: Number of updates computed with and without delayed data, measured across 10 seeds: mean (standard deviation)

| Environment | FD | DFD |
|----------------|--------------|---------------|
| Ant-v2 | 130.0 (24.9) | 156.3 (29.2) |
| HalfCheetah-v2 | 729.5 (10.1) | 1250.0 (0) |
| Hopper-v2 | 867.1 (10.3) | 1398.1 (22.6) |
| Walker2d-v2 | 941.7 (30.2) | 1446.1 (55.7) |

The benefit of incorporating delayed information when updating the policy may come from the number of updates the optimizer can make within a fixed number of time-steps, the quality of those updates, or a combination thereof. In Table 2 we show the mean and standard deviation of the number of updates computed by FD and DFD in the environments we tested. Including delayed information when computing updates resulted in a 32.8% mean increase in the number of updates computed by the algorithm over the same number of time-steps in these settings. This increase will vary with the length of episodes in the environment, the time it takes for an update to be computed, and the number of workers connected.

## 6.2 Studying the Impact of Delayed Information

Figure 2: **Empirical Study.** Results of a synthetic experiment in the Walker2d-v2 environment examining the impact that incorporating delayed information has on the learning algorithm. The box-plots in each chart show the distribution of final policy rewards as the proportion of delayed information in each batch increases. At each update a batch of information was artificially constructed containing a fixed proportion of data from perturbations of a delayed policy, where the remainder of each batch was filled with data from perturbations of the current policy. Final policy rewards were measured as the average over the final 50 updates to the policy at the end of training. Each experiment was conducted over 10 random seeds.
To determine how incorporating delayed information impacts the quality of updates in DFD, we conducted a synthetic study in which batches of information were artificially held back at each update, enabling us to synthesize batches of data containing fixed proportions of delayed and current information when updating the policy. Our study examined delays of 1, 2, 4, and 8 updates, where at each delay we examined the impact of computing updates with batches composed of either 0% (entirely current), 25%, 50%, 75%, or 100% (entirely delayed) information. The remainder of each batch was filled with perturbations of the current parameters θu when necessary. This study was conducted over 10 random seeds for 800 updates each in the Walker2d-v2 MuJoCo environment. The final reward of the policy in each experiment was measured as the average cumulative reward over the final 50 updates to the policy. We found that incorporating some delayed information in each update was rarely harmful to the final performance of policies trained during this study. In some cases delayed information resulted in higher quality updates, as shown in Figure 2 when 50% of returns were delayed by 1 update. However, this benefit was sometimes reduced as the proportion of delayed information in each batch approached 1 and the delay increased. In particular, batches containing entirely delayed information resulted in the worst average performance regardless of the delay employed.

## 7 Discussion

Our experiments show that a black-box method can successfully leverage out-of-distribution information when estimating the gradient of an objective. While our method was able to improve over ES and FD, it was still behind PPO in most of the domains we examined.
Our method is able to make use of information from distributed workers that are unable to transmit information to the learner before an update is computed, which enables all workers to continually sample and test perturbations of the policy without artificially terminating episodes or pausing while an update is being computed. However, incorporating information that is too old, or using a high proportion of delayed information in each batch, will reduce the efficacy of the learning algorithm. Practitioners and future researchers should be careful to design systems using DFD such that the rate at which a batch of data can be collected is not significantly faster than the time it takes the learner to compute an update; otherwise, so much delayed information may be buffered by the learner that it never catches up to information from perturbations of the current policy. This would result in every batch containing only delayed information, which we found to be the worst performing case in our synthetic tests, as shown in Figure 2.

## 8 Conclusion

We have introduced a scalable method for black-box policy optimization using finite differences which is suitable for settings where communication between asynchronous computers is costly. Our method yields notable improvements over ES in continuous control, and does not prematurely terminate trajectories or stop workers from collecting data while the policy is being updated. While we found that incorporating delayed data was often beneficial in these settings, its inclusion introduces a bias to the estimation of ∇J(θu), as discussed in section 5. We found that while the bias introduced to the gradient estimates by delayed information is not always harmful, it can reduce the quality of learning in some extreme synthetic tests. There is still a clear gap between refined RL algorithms like PPO and black-box optimizers like ES.
However, scalable black-box algorithms like ES and FD still hold some advantages in the distributed compute setting, and enabling black-box methods to make use of out-of-distribution data is a step in the direction of closing the performance gap between the two approaches. Of interest to future work may be combining DFD with the importance-mixing method from Sun et al. (2009), so that perturbations which fall inside the parameter sampling distribution at the current update can be employed multiple times. Another interesting topic for future work may be investigating different choices for δ. We chose δ = σϵ as in ES, where ϵ is sampled from a multi-variate Gaussian distribution, but different methods for sampling ϵ may be worth considering. In that vein, one might consider categorically different methods of perturbing the policy, such as constructing perturbations in an agent space (Raisbeck et al., 2021), or the natural space described by Amari & Douglas (1998), rather than the space of parameters.

## References

Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G Bellemare. Deep reinforcement learning at the edge of the statistical precipice. *Advances in Neural Information Processing Systems*, 2021. URL https://nips.cc/virtual/2021/oral/26713.

S. Amari and S.C. Douglas. Why natural gradient? In *Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)*, volume 2, pp. 1213–1216 vol.2, 1998. doi: 10.1109/ICASSP.1998.675489. URL https://ieeexplore.ieee.org/document/675489.

Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. *CoRR*, abs/1207.4708, 2012. URL http://arxiv.org/abs/1207.4708.

Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu.
Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures, 2018. URL https://arxiv.org/abs/1802.01561. Lasse Espeholt, Raphaël Marinier, Piotr Stanczyk, Ke Wang, and Marcin Michalski. Seed rl: Scalable and efficient deep-rl with accelerated central inference, 2019. URL https://arxiv.org/abs/1910.06591. Nikolaus Hansen, Sibylle D. Müller, and Petros Koumoutsakos. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). *Evolutionary Computation*, 11(1): 1–18, 03 2003. ISSN 1063-6560. doi: 10.1162/106365603321828970. URL https://doi.org/10.1162/ 106365603321828970. Matthew Hausknecht, Joel Lehman, Risto Miikkulainen, and Peter Stone. A neuroevolution approach to general atari game playing. *IEEE Transactions on Computational Intelligence and AI in Games*, 6(4):355–366, 2014. doi: 10.1109/TCIAIG.2013.2294713. URL https://ieeexplore.ieee.org/document/6756960. Steven Kapturowski, Georg Ostrovski, Will Dabney, John Quan, and Remi Munos. Recurrent experience replay in distributed reinforcement learning. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=r1lyTjAqYX. Guoqing Liu, Li Zhao, Feidiao Yang, Jiang Bian, Tao Qin, Nenghai Yu, and Tie-Yan Liu. Trust region evolution strategies. *Proceedings of the AAAI Conference on Artificial Intelligence*, 33(01):4352–4359, Jul. 2019. doi: 10. 1609/aaai.v33i01.33014352. URL https://ojs.aaai.org/index.php/AAAI/article/view/4345. Horia Mania, Aurelia Guy, and Benjamin Recht. Simple random search provides a competitive approach to reinforcement learning, 2018. URL https://arxiv.org/abs/1803.07055. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. *arXiv*, 2016. doi: 10.48550/ARXIV.1602.01783. URL https://arxiv.org/abs/1602.01783. Jan Peters and Stefan Schaal. 
Reinforcement learning of motor skills with policy gradients. *Neural Networks*, 21 (4):682–697, 2008. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2008.02.003. URL https://www. sciencedirect.com/science/article/pii/S0893608008000701. Robotics and Neuroscience. Aloïs Pourchot, Nicolas Perrin, and Olivier Sigaud. Importance mixing: Improving sample reuse in evolutionary policy search methods. *CoRR*, abs/1808.05832, 2018. URL http://arxiv.org/abs/1808.05832. John C. Raisbeck, Matthew Allen, Ralph Weissleder, Hyungsoon Im, and Hakho Lee. Evolution strategies converges to finite differences, 2020. URL https://arxiv.org/abs/2001.01684. John C. Raisbeck, Matthew W. Allen, and Hakho Lee. Agent spaces, 2021. URL https://arxiv.org/abs/ 2111.06005. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. *Nature*, 323(6088):533–536, oct 1986. doi: 10.1038/323533a0. URL https://www.nature.com/ articles/323533a0. Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning, 2017. URL https://arxiv.org/abs/1703.03864. John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization, 2015. URL https://arxiv.org/abs/1502.05477. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017. URL https://arxiv.org/abs/1707.06347. Frank Sehnke, Christian Osendorfer, Thomas Rückstieß, Alex Graves, Jan Peters, and Jürgen Schmidhuber. Parameter-exploring policy gradients. *Neural Networks*, 23(4):551–559, 2010. ISSN 0893-6080. doi: https: //doi.org/10.1016/j.neunet.2009.12.004. URL https://www.sciencedirect.com/science/article/ pii/S0893608009003220. The 18th International Conference on Artificial Neural Networks, ICANN 2008. Kenneth O. Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. 
Evolutionary Computation, 10(2):99–127, 2002. doi: 10.1162/106365602320169811. URL https://ieeexplore.ieee. org/document/6790655. Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving largescale neural networks. *Artificial Life*, 15(2):185–212, 2009. doi: 10.1162/artl.2009.15.2.15202. URL https: //ieeexplore.ieee.org/document/6792316. Freek Stulp and Olivier Sigaud. Policy Improvement Methods: Between Black-Box Optimization and Episodic Reinforcement Learning. HAL, 2012a. URL https://hal.archives-ouvertes.fr/hal-00738463. Freek Stulp and Olivier Sigaud. Path integral policy improvement with covariance matrix adaptation. *CoRR*, abs/1206.4621, 2012b. URL http://arxiv.org/abs/1206.4621. Felipe Petroski Such, Vashisht Madhavan, Edoardo Conti, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning, 2017. URL https://arxiv.org/abs/1712.06567. Yi Sun, Daan Wierstra, Tom Schaul, and Juergen Schmidhuber. Efficient natural evolution strategies. In *Proceedings of* the 11th Annual Conference on Genetic and Evolutionary Computation, GECCO 09, pp. 539–546, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605583259. doi: 10.1145/1569901.1569976. URL https://doi.org/10.1145/1569901.1569976. Richard S. Sutton and Andrew G. Barto. *Reinforcement Learning: An Introduction*. The MIT Press, Cambridge, MA, USA, 2022. ISBN 9780262039246. URL http://incompleteideas.net/book/the-book.html. Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. Solla, T. Leen, and K. Müller (eds.), Advances in Neural Information Processing Systems, volume 12. MIT Press, 1999. URL https://proceedings.neurips.cc/paper/ 1999/file/464d828b85b0bed98e80ade0a5c43b0f-Paper.pdf. Emanuel Todorov, Tom Erez, and Yuval Tassa. 
Mujoco: A physics engine for model-based control. In *2012 IEEE/RSJ International Conference on Intelligent Robots and Systems*, pp. 5026–5033, 2012. doi: 10.1109/IROS.2012.6386109. URL https://ieeexplore.ieee.org/document/6386109.

Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jan Peters, and Jürgen Schmidhuber. Natural evolution strategies. *Journal of Machine Learning Research*, 15(27):949–980, 2014. URL http://jmlr.org/papers/v15/wierstra14a.html.

## A Experimental Details

All experiments were run on c6a.8xlarge Amazon Web Services server instances. For environments other than Ant-v2, we conducted three experiments in parallel on a single machine, where each experiment was given 4 vCPUs for workers and 4 vCPUs for the learner. For Ant-v2, we tested DFD, ES, and FD in parallel on three separate servers using 24 vCPUs each for workers. All experiments except Ant-v2 were conducted with the fixed hyper-parameters in Table 3. We chose these parameters because they are similar to the equivalent parameters in Mania et al. (2018) and Salimans et al. (2017). Ant-v2 required a significantly larger batch size of N = 400 for our method to solve and was provided 24 workers instead of 4. Each algorithm was tested over the same 10 random seeds.

## A.1 Policy Parameterization

Policies in our experiments parameterized an independent Gaussian for each element of the action vector. Rather than using a state-independent variance for each of these distributions, our policies produced both the mean and variance of each action distribution. This means that for an environment with A actions, the policy had 2A outputs. The means of each distribution were taken from the first half of a policy's output, and the variances were taken from the second half. The variance of each distribution was linearly transformed from the interval [−1, 1] given by the tanh activation function onto the interval [0, 1] before the distribution was constructed.
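As a minimal sketch (the function names are ours, not from the released code), the output-to-distribution mapping above can be written as:

```python
import numpy as np

def split_policy_output(output):
    """Split a 2A-dimensional tanh policy output into Gaussian parameters.

    The first half of the output gives the action means; the second half,
    still in [-1, 1] from the tanh activation, is mapped linearly onto
    [0, 1] to serve as the per-action variances.
    """
    out = np.asarray(output, dtype=float)
    a = out.shape[-1] // 2
    means = out[..., :a]
    variances = (out[..., a:] + 1.0) / 2.0  # [-1, 1] -> [0, 1]
    return means, variances

def sample_action(output, rng):
    """Sample one action from the independent per-element Gaussians."""
    means, variances = split_policy_output(output)
    return rng.normal(means, np.sqrt(variances))
```

For an environment with A = 2 actions, a policy output of length 4 yields two means and two variances, and the action is drawn element-wise from the resulting Gaussians.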
## A.2 Algorithmic Details

| Parameter | Value |
|---------------------------|--------------------------|
| Total Timesteps | 50,000,000 |
| Gradient Optimizer | Adam |
| σ | 0.02 |
| N | 40 |
| Adam β1 | 0.9 |
| Adam β2 | 0.999 |
| Adam ϵ | 1e-8 |
| Adam η | 0.01 |
| Reward Standardization | Yes |
| Policy Architecture | 64 tanh → 64 tanh → tanh |
| Policy Outputs | Diagonal Gaussian |
| Standardized Observations | Yes |
| Observations Clipped | [-5, 5] |
| Random Seeds | 124, 125, 126, ... 133 |
| Worker CPUs | 4 |

Table 3: Hyper-parameters used in our experiments

PPO was run with 16 parallel workers and the same policy architecture we used in our experiments. Adam's learning rate was decayed from 3e-4 to 0 over the 50M training steps in each environment. All other PPO parameters were set to the values described by Schulman et al. (2017) in their MuJoCo experiments. Since ES uses antithetic sampling, each noise vector ϵ constructs two parameter perturbations, θu + σϵ and θu − σϵ. To ensure both methods used the same number of perturbations in each update, we set the batch size in ES to N/2 in every experiment. Following the practice of ES, we maintained running statistics about the observations encountered by any policy during training and used them to standardize each observation by subtracting the running mean and dividing by the running standard deviation before providing an observation to a policy. After standardization, each element of the observation vector was clipped to the interval [−5, 5]. As mentioned in the main text, we standardized rewards on the learner at each update using statistics from each batch such that every batch of rewards had zero mean and unit variance. Further, we approximated R(θu) when computing gDFD by taking the mean of rewards from perturbations of the current policy in each batch.
That is,

$$R(\theta_{u})\approx\frac{1}{B}\sum_{i=1}^{B}R(\theta_{u}+\sigma\epsilon_{i}),\tag{5}$$

where B is the number of rewards in a batch of size N that were from perturbations of the current policy (e.g., θu + σϵ). In cases where there are few or no rewards from perturbations of the current policy, we estimated R(θu) as the average reward over the entire batch of data. In our experiments, we estimated R(θu) following (5) when B/N ≥ 0.2, and as the average reward over the entire batch otherwise. When collecting returns from connected workers, the learner continually accepted all available returns until there were at least N. If there were more than N returns available, the remaining returns were placed back on the buffer to be collected at the next iteration of the loop.
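The baseline estimate and its fallback rule can be sketched as follows; the function name and interface are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def estimate_baseline(rewards, is_current, frac_threshold=0.2):
    """Estimate R(theta_u) from a batch of perturbation returns.

    rewards: returns R(theta + sigma * eps) for the whole batch (size N)
    is_current: boolean mask marking returns that came from perturbations
                of the *current* policy theta_u (B = mask.sum())
    Uses the mean over current-policy perturbations when B/N is at least
    frac_threshold, and falls back to the full-batch mean otherwise,
    mirroring the rule described in the text.
    """
    rewards = np.asarray(rewards, dtype=float)
    mask = np.asarray(is_current, dtype=bool)
    n, b = rewards.size, mask.sum()
    if n == 0:
        raise ValueError("empty batch")
    if b / n >= frac_threshold:
        return rewards[mask].mean()  # mean over current-policy perturbations
    return rewards.mean()            # too few current-policy returns
```

Here `is_current` would be derived from the policy version attached to each worker's return, so delayed returns from older policies are excluded from the baseline whenever enough current-policy returns are available.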
Review 1:

Summary: The authors of this paper investigate and address a core problem of utilizing black-box optimization algorithms (e.g., evolution strategies) for deep reinforcement learning. Specifically, the authors leverage finite difference methods to derive a delayed finite difference (DFD) gradient estimator, which can address the sample-inefficiency issue of previous work using ES (Salimans et al., 2017). Further, the authors present the performance of incorporating the estimated gradient into the Adam optimizer and several modified formulations of stochastic gradient descent (modified SGD and dynamic SGD). The empirical performance shows promising results and significant improvements over naive black-box optimization methods (e.g., ES), which may further motivate researchers to study the benefits of the proposed techniques in this paper.

Strengths and Weaknesses:

Strengths:
- The paper presents a scalable method to address the sample-inefficiency problem of previous DRL with black-box optimization methods, where ES cannot leverage all the collected samples to improve the learned policy. I think this is a critical problem, and I have not seen many previous related works trying to address this issue.
- Since finite difference methods only approximate the gradient, it is also good to utilize adaptive step-size methods such as Adam to further improve performance. The experimental results demonstrate the benefits of utilizing such adaptive step-size strategies.
- The authors present the experiments using recently proposed systematic evaluation methods (Agarwal et al., 2021), which is important since most existing black-box estimation methods tend to have high variance.
- The several directions discussed in the conclusion are promising and interesting, especially given the promising results presented in this paper.
Weaknesses:
- As far as I know, even though this paper improves the performance of black-box gradient-based methods, its performance is still far from gradient-based methods such as PPO or off-policy methods such as SAC.
- The motivation for the proposed modified SGD and dynamic SGD should be discussed more. I cannot see the advantage of using these two methods (if I missed anything, please correct me :) ).
- Some ablation studies should also be conducted to help readers understand the proposed method's limitations and robustness. For example, how can we choose $\delta$ in these environments? Some important hyperparameters should be studied and discussed to help readers understand.
- Also, since finite differences are an approximate method, it would be great to see the approximation error between this approach and the true gradient.

Requested Changes:
- Add an ablation study on the critical hyperparameters, such as $\delta$.
- If possible, please add an error analysis figure showing the approximation error of the finite difference method.
- If possible, please add the training time and sample utilization of your method and of ES, which would further demonstrate the advantage of your method.
- If possible, please discuss why you introduce modified SGD and dynamic SGD. I also think it is possible to incorporate these strategies into Adam; how would the actual performance be?
- Please add PPO to your experiments to help people understand the current stage of black-box optimization methods.

Broader Impact Concerns: N/A.

==================================================

Review 2:

Summary: The paper contributes to Finite Difference and ES ("à la OpenAI") approaches to RL problems.
It brings two contributions:
- it provides a way to use "delayed returns" (information from trajectories obtained with an older version of the policy) in the computation of the finite difference gradient
- from a comparison between stochastic gradient descent (SGD) and Adam, it studies a modified version of SGD which only increases the magnitude of gradient steps and shows that this component is enough to explain most of the superiority of Adam over SGD.

An empirical study on four MuJoCo benchmarks reveals the (mild) impact of both contributions.

Strengths and Weaknesses:

Strengths:
- both contributions make sense and are presented rather clearly
- the empirical study uses rliable, hence avoiding some of the common pitfalls of many RL comparisons

Weaknesses:
- most importantly, the paper lacks a related work section; as a result, it ignores a lot of relevant work
- some readers might find some of these contributions too straightforward to be of interest. Honestly, I'm not sure that these ideas have never been published before
- the two contributions are rather independent: they could have been presented in two different papers, and it is unclear if their combination brings something special with respect to using them separately
- the "story telling" of the paper could be much improved.

All in all, my general feeling is that this paper may be turned into a useful contribution with a significant amount of work, but in its current form it lacks maturity; it is far from being ready to be published.

Requested Changes:
* The paper should have a related work section. Doing this will help the authors better delineate their contribution and strengthen it. They should at least have a look at the following papers. Several papers from the group of K. Choromanski lie in the same domain and consider other families of ES like CMA-ES and the Cross-Entropy Method (CEM), which do not use a gradient descent component.
Reading these papers and a few connected papers will help the authors get a broader view of the relationship between FD methods and ES methods.
- Choromanski, K. M., Pacchiano, A., Parker-Holder, J., Tang, Y., & Sindhwani, V. (2019). From complexity to simplicity: Adaptive es-active subspaces for blackbox optimization. Advances in Neural Information Processing Systems, 32.
- Choromanski, K., Rowland, M., Sindhwani, V., Turner, R., & Weller, A. (2018, July). Structured evolution with compact architectures for scalable policy optimization. In International Conference on Machine Learning (pp. 970-978). PMLR.

Though I cannot claim that the idea of using older trajectories has already been explored in gradient-based FD and ES methods, it is closely related to importance mixing in the context of CEM or CMA-ES:
- Pourchot, A., Perrin, N., & Sigaud, O. (2018). Importance mixing: Improving sample reuse in evolutionary policy search methods. arXiv preprint arXiv:1808.05832.

In the future work section, the authors mention combining ES and trust region methods; they should read:
- Liu, G., Zhao, L., Yang, F., Bian, J., Qin, T., Yu, N., & Liu, T. Y. (2019, July). Trust region evolution strategies. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 4352-4359).

These papers are starting points; the authors must also look for closely related papers.
* The authors should better consider whether their two contributions are related, and they should be more explicit in the introduction about the articulation between these contributions.
* About the DSGD contribution, it could be presented either as a dissection of the sources of efficiency in the Adam mechanisms (as currently put forward in the discussion) or as an alternative to Adam, in which case the paper should state more clearly whether there are unquestionable advantages to using DSGD instead of Adam (in terms of bandwidth, wall-clock time efficiency, or whatever...)
and this advantage should appear in a study.
* We can see immediately from the first paragraph of the introduction (the sentence "Most model-free ...") that the authors do not have strong expertise in non-ES-based RL methods. What are "algorithms like Q-learning"? Do they mean "derived from dynamic programming" (as mentioned in the second paragraph)? Also, the choice of references in the sentence is rather weak, and there would be more to say. My guess is that the authors added this perspective because they felt that speaking about "true" RL is necessary to be in the scope of TMLR, but I do not share this feeling: we see more and more "pure ES" papers at NeurIPS, for instance. So I encourage the authors to speak only about what they are experts in. I had the same feeling about the future work section: the authors may mention the relationship between reusing old trajectories and using a replay buffer, but I don't believe they are ready to cover such topics in depth.
* Unless I'm wrong, Figure 3 contains all the information already present in Figure 2. The authors could choose three blue-to-green colors for the curves of Figure 2 and three red-orange-pink colors for the additional ones and put everything into a single figure.
* I understand the authors' argument about comparing the best-performing policies, but it has been mentioned several times in the field that this is bad practice, as this performance indicator is very fragile (see e.g. the Henderson paper, "Deep RL that matters"): run it a second time with different seeds and you will most probably get different results...
* The greater sensitivity of Adam to initial conditions could be put forward more.
* In the abstract, "investigate a core problem": you should rather say which problem you investigate and let the reader decide whether it is core or not.
* In 3.1, remove all "will". You already did it elsewhere; using the present tense is better in science.
* For equations, use \eqref{label}, which produces (number).
* p4, the paragraph starting with "Collecting R(\theta_u)..." is still a little unclear to me. Could you be more explicit about the collection process? With a schema?
* Algorithms 1 and 2 are not referenced in the text; all floats should be referenced. Possibly, the algorithms could be explained.
* In Figure 1, shouldn't you use a log scale to reveal that SGD is not at 0? What is IQM in the y-axis label (also present in Figs. 2 and 3)?
* How did you compute the magnitude of gradient updates?
* The "sum of ranks" column in Table 2 is neither commented on nor explained.
* I would remove the discussion and rather use it as an articulation of the second contribution (see above).
* The future work section should be moved to the end of the conclusion (without a title).

Broader Impact Concerns: I see no specific concern about this work

==================================================

Review 3:

Summary: The authors introduce a method to address the problem of prematurely terminating long trajectories, thereby increasing sample utilization in evolution strategies (ES) algorithms. Typically, such returns are discarded to maintain the update schedule. However, these long trajectories create delayed returns which can be used to update the policy. The paper investigates how this additional information helps in obtaining better policies when not discarded.

The claimed contribution: introducing the delayed finite difference method, which allows using delayed returns, achieving higher utilization.

Strengths and Weaknesses:

**Strengths:**
- The idea of incorporating delayed returns into the updates increases the utilization of ES methods, which is beneficial for increasing the number of updates.

**Weaknesses:**
- **The problem is not clearly stated and is introduced belatedly.**
- The problem solved in the paper and the solution the authors provide are not clear from the abstract.
The authors mention that they investigate a core problem without stating what it is, namely the computational efficiency of such methods. Clear language should be used. Moreover, the problem is not even clearly motivated in the introduction.
- The problem is only properly introduced in Section 3, for the first time in the paper.
- **Unfair comparisons:**
- All claims and conclusions about how Adam is better than SGD in Fig. 1 may come from the fact that the authors did not tune the step size. Using a larger step size for SGD would likely improve its performance (this is likely what makes the modified SGD work, since it is essentially SGD with a scaled step size). For a fair comparison, a hyperparameter search must be done to select the best step size for SGD and compare it with Adam.
- The methods FD-MSGD, DFD-MSGD, and DFD+SGD are not compared in Fig. 3.
- How is the utilization measured? It is clearly affected by the choice of how to compute $R(\theta_u)$. The claim about 100% utilization is not justified. Therefore, an empirical analysis of utilization must be added.
- **Irrelevant work added to the main contribution, while essential work is left out:**
- The limitations of the work, such as the instability introduced by using returns from old policies, are not discussed. Moreover, the bias introduced by including the delayed returns is not discussed and is left for future work; however, this is a core part of the paper and needs to be discussed and analyzed.
- The main contribution of this paper is to provide a way of using delayed returns coming from long trajectories. The modified SGDs (MSGD and DSGD) distract readers from this contribution. In addition, the experiments on Adam and the arguments about the magnitude of the update are also unnecessary and erroneous, as discussed above. Lastly, the introduction of DSGD is not well placed in the literature, and there is no mention of any similar methods with a scalar adaptive step size.
- The authors use $U_{max}$ to prematurely terminate trajectories, which goes against the proposed method. A careful study of this parameter is needed.
- **Lack of experimental rigor and overstating the results:**
- The authors mention that "incorporating delayed returns is almost uniformly beneficial in the environment we studies." However, this statement is not supported by the experimental results shown in Table 2. The delayed finite difference approach only helps in two environments out of four. Moreover, the error bars overlap, so the improvement in these two environments is not statistically significant.
- It is doubtful that the magnitude of Adam's updates remains constant. The updates should depend on the landscape geometry surrounding the current parameter point, which changes with the number of time steps.
- The suggested modification of SGD is unnecessary and is introduced based on a flawed analysis of the results in Fig. 1, as discussed in the "unfair comparisons" section. There is no need for the authors' modified SGD when there is a simple solution of using a larger step size.
- In Fig. 1, the results with Adam, which depends on exact gradients, look identical to FD-Adam, which uses numerical gradients.
- Fig. 1 needs to be plotted on a log scale. The authors mention that SGD updates are not zero but only appear as zero since they are small. With a log scale, the results will be clearer.
- Why are aggregate results on dissimilar environments useful?
- **Ambiguous and imprecise wording:**
- The authors use the words return and reward interchangeably. Each quantity has its own precise definition in reinforcement learning. It is ambiguous to the reader when they are used incautiously.
- The word bandwidth is used without definition, and it is unclear from the statements what it means. How does your method, which performs more updates, have low bandwidth? Bandwidth is inversely proportional to the time interval between updates.
When we decrease the interval, the bandwidth increases. Moreover, the authors mention that ES methods require very low bandwidth compared with other distributed RL methods. What does this mean?
- In the text explaining Eq. 5, the authors refer to the quantity $\Delta R$ as the change in reward, whereas it should be the scaled change in returns.
- The performance measure IQM is undefined. It is necessary to state what it stands for and how it is calculated.
- The authors mention that collecting $R(\theta_u)$ may be costly. It is not clear why it is costly compared to $R(\alpha)$.

**Minor issues:**
- The authors use the same notation for vectors and scalars. These notations would be very challenging for many readers to follow.
- In Algorithm 1, it is missing how the terminal time step for each worker $n_i$ is related to the global time step $u$. Please update the algorithm to fix this.
- It is missing how the learner makes the workers compute $R(\theta_u)$. The authors mention that $R(\theta_u)$ is computed occasionally without stating the exact frequency. The authors suggested another modification to their algorithm by approximating $R(\theta_u)$ with an average over $R(\alpha)$ from all workers. It is unclear how this approximates $R(\theta_u)$.
- The step-size increment and decrement are denoted by $\epsilon_1$ and $\epsilon_2$, respectively. However, the authors use $\epsilon$ for noise. Please use different notations.
- What is the sum of ranks in Table 2?
- In the introduction, the authors mention that alternative gradient-based rules are often not considered. This statement is clearly wrong, since there is an entire research community working on finding better optimization methods.
- Consider removing the boxes surrounding the figures.

Requested Changes: The following are critical to secure a recommendation for acceptance:
- Use clear and precise language (as discussed in the strengths/weaknesses section).
- Focus on the main contribution of the paper, namely the use of delayed returns. Describe the problem early in the abstract and the introduction.
- Create a fair comparison between algorithms by doing a hyperparameter search. Present rigorous experiments and do not overstate the results.
- Remove the sections/experiments that are less relevant to the paper, as discussed in the strengths/weaknesses section.
- Studying the bias introduced by the delayed returns should be done in this paper, not in future work. Moreover, study the effect of $U_{max}$.
- The limitations of the work need to be discussed.
- The paper needs to add an empirical measure of utilization, since it depends on the different variations of the algorithm.
- Correct the errors (mentioned in the strengths/weaknesses section) and define the undefined quantities.
- Use clear notations, for example, distinguishing between scalars and vectors.

Broader Impact Concerns: No concerns.

==================================================

Metareview:

Recommendation: Reject

Comment: See comments on Claims and Evidence above.

==================================================
# Lightweight Vision Transformer Coarse-To-Fine Search Via Latency Profiling

Anonymous authors

Paper under double-blind review

## Abstract

Despite their impressive performance on various tasks, vision transformers (ViTs) are heavy for mobile vision applications. Recent works have proposed combining the strengths of ViTs and convolutional neural networks (CNNs) to build lightweight networks. Still, these approaches rely on hand-designed architectures with a pre-determined number of parameters, which requires re-running the training process to obtain different lightweight ViTs under different resource constraints. In this work, we address the challenge of finding optimal lightweight ViTs under constraints on model size and computational cost using neural architecture search. Using the proposed method, we first train a supernet, which is a hybrid architecture of CNNs and transformers. To efficiently search for the optimal architecture, we use a search algorithm that considers both model parameters and on-device deployment latency. This method analyzes network properties, hardware memory access patterns, and the degree of parallelism to directly and accurately estimate the network latency. To avoid extensive speed testing during the search process, we use a lookup table based on a detailed breakdown of the speed of each component and operation, which can be reused to evaluate the whole latency of each searched structure. This approach is more efficient than testing the speed of the whole model during the search process. With extensive experiments on ImageNet, we demonstrate that under similar parameters and FLOPs, our searched lightweight ViTs have higher accuracy and lower latency compared to state-of-the-art models. For example, our AutoViT_XXS (71.3% Top-1 accuracy and 10.2 ms latency) has 2.3% higher accuracy and 4.5 ms lower latency compared to MobileViT_XXS (69.0% Top-1 accuracy and 14.7 ms latency).
## 1 Introduction

The Vision Transformer (ViT) Dosovitskiy et al. (2020) exploits the self-attention mechanism inherited from the transformer architecture and has recently obtained state-of-the-art performance in vision tasks such as image classification Dosovitskiy et al. (2021); Touvron et al. (2021); Chen et al. (2021b), object detection Carion et al. (2020); Dai et al. (2021a); Amini et al. (2021); Misra et al. (2021); Dai et al. (2022), semantic segmentation Zheng et al. (2021); Cheng et al. (2021); Ding et al. (2021), image retrieval El-Nouby et al. (2021), and image enhancement Yang et al. (2020); Chen et al. (2021c); Lu et al. (2021). These efforts have focused on improving the overall performance of vision transformers. While the overall results are impressive, ViTs sacrifice lightweight model capacity and portability for high accuracy. Vision transformers are relatively unexplored for mobile vision tasks Mehta & Rastegari (2021) and are difficult to deploy to edge devices due to resource constraints. Popular ViT models are made small by simply scaling them down Touvron et al. (2021), with a significant reduction in model performance. This challenge can be addressed by carefully designing the architecture by hand to meet the various resource constraints of different devices. However, this process requires extensive trial and error to obtain viable candidates, especially for designing the hybrid CNN-Transformer ViTs that have emerged recently (d'Ascoli et al., 2021; Chu et al., 2021; Dai et al., 2021b; Liu et al., 2021; Graham et al., 2021; Wang et al., 2021b). While previous works have showcased improvements in on-device efficiency, they often do not account for real hardware latency when designing the models. Some contain mobile-unfriendly operations that limit performance improvement. LeViT Graham et al. (2021) employs a convolutional stem in place of the original patch stem.
While LeViT successfully reduces FLOPs, it does so at the expense of introducing redundant parameters. This becomes a significant constraint for edge devices with limited memory capacity. Besides, relying on FLOPs as a performance metric is misleading, since FLOPs do not necessarily correlate with latency. Additionally, reshaping (a data layout change) can cost GPU/NPU cycles that FLOP counts do not capture. Also, HardSwish is not inherently supported by the iPhone's CoreML. Sophisticated attention mechanisms, like the ones employed in the Long-Short Transformer Zhu et al. (2021), are notoriously hard to support or accelerate on mobile devices. Neural architecture search (NAS) has shown its benefit over manual design as a powerful technique for the automatic design of neural networks, especially for designing efficient models. Autoformer Chen et al. (2021d) is a dedicated one-shot architecture search framework for pure transformer structures. However, its limited search space leads to performance degradation for very lightweight models. NASViT Anonymous (2022) searches for a convolution-transformer hybrid structure. However, it has a prolonged training time: the gradient optimization method introduced to circumvent the conflicts between larger and smaller subnets is time-consuming. It also includes mobile-unfriendly operations such as the talking-head module and window attention. S3 Chen et al. (2021e) proposes an automatic search space design method to improve the effectiveness of the design space. However, its evolutionary search is time-consuming. Our work rectifies these deficiencies in existing solutions by introducing a truly efficient, hardware-oriented approach, optimized to adapt to the constraints of the target hardware and fulfill specific speed requirements.
To address the existing ViT search problem, we propose an efficient NAS approach that considers three perspectives: an optimal search space, training efficiency, and latency-guided search. Specifically, we design a new search space that enables the search for hybrid structures of convolution and transformer blocks, which outperform existing hybrid structures, as demonstrated in Figure 1. To improve training efficiency, we encode the search space into independent supernets, which reduces optimization interference between subnets of different sizes. Our experiments show that this approach leads to improved performance compared to existing methods. In the field of edge computing, model selection pivots critically on latency. Despite this, many prevailing search algorithms struggle to accurately evaluate model latency using only FLOPs and parameter counts. As such, a model that appears smaller as a result of a search process does not necessarily guarantee higher speed. In light of these limitations, some researchers have attempted to incorporate latency into the search criteria via a latency predictor Wang et al. (2020). However, developing an accurate predictor is not without its challenges: it requires hundreds, even thousands, of actual speed tests across diverse model structures to form a robust training dataset. If the search space is expansive, a proportional increase in actual test results is required to ensure accurate predictions across all possible architectures. To this end, we propose a coarse-to-fine search algorithm that uses latency as a primary indicator and parameter count as a memory threshold to explore the search space and find the optimal architecture. We leverage a latency profiling technique to improve efficiency. As a result, we identify models with significantly lower latency than the previous approach of relying on FLOPs and the number of parameters.
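As an illustration of the lookup-table latency estimate and the coarse-to-fine filtering described above, here is a minimal sketch; the candidate encoding, dictionary keys, and function names are our assumptions, not the paper's actual interface:

```python
def estimate_latency(ops, latency_lut):
    """Sum per-operation latencies from a profiled lookup table.

    ops: sequence of (op_name, config) keys describing a candidate network.
    latency_lut: maps (op_name, config) -> measured latency in ms, built once
                 from on-device profiling and reused for every candidate.
    """
    return sum(latency_lut[key] for key in ops)

def coarse_to_fine_search(candidates, param_budget, latency_budget, latency_lut):
    """Coarse stage: drop candidates over the parameter (memory) budget.
    Fine stage: keep candidates whose LUT-estimated latency meets the budget,
    then return the one with the highest predicted accuracy."""
    feasible = [c for c in candidates if c["params"] <= param_budget]
    fast = [c for c in feasible
            if estimate_latency(c["ops"], latency_lut) <= latency_budget]
    return max(fast, key=lambda c: c["accuracy"], default=None)
```

In practice, the table would be filled once by profiling each operator configuration on the target device, so evaluating a candidate reduces to dictionary lookups instead of an on-device speed test.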
![1_image_0.png](1_image_0.png)

Figure 1: Top-1 accuracy and sizes of different models on ImageNet. Our method achieves a better trade-off than other efficient CNN-based and transformer-based models.

Our contributions can be summarized as follows:

- **Hybrid Search Space:** We design a search space that combines CNNs and transformers, leveraging the strengths of both, to create lightweight ViTs suitable for mobile vision tasks.
- **Multi-Size Supernet Training Scheme:** To address the challenges of dealing with a large search space for vision transformers and to optimize subnets of different sizes, we propose a multi-size supernet training scheme that mitigates optimization interference between subnets and enables efficient training.
- **Latency-Aware Search Scheme:** We revisit the design principles of ViT and its variants through latency analysis and propose a latency profiling model, which can efficiently estimate the latency of network candidates. We propose a parameter-latency-oriented coarse-to-fine search strategy to find the optimal subnets for hardware deployment among the trained supernets.
- **Performance:** Our searched model achieves up to 79.2% accuracy on ImageNet with 6.3M parameters and 1.4 GFLOPs, exceeding the performance of existing models with similar resource budgets.

## 2 Related Work

Vision Transformers. ViT Dosovitskiy et al. (2020) is a pioneering work that uses only transformer blocks to solve various vision tasks. Compared to traditional CNN structures, ViT allows all positions in an image to interact through transformer blocks. In contrast, CNNs operate on a fixed-size window with restricted spatial interactions, which hinders their ability to capture relations at the pixel level in both the spatial and time domains. Since then, many new variants have been proposed. For example, DeiT Touvron et al. (2021), T2T-ViT Yuan et al. (2021b), and Mixer Tolstikhin et al.
(2021) tackle the data-inefficiency problem in ViT by training only on ImageNet. PiT Heo et al. (2021) replaces the uniform structure of the transformer with a depth-wise convolution pooling layer to reduce the spatial dimension and increase the channel dimension. SViTE Chen et al. (2021f) alleviates the training memory bottleneck and improves inference efficiency by co-exploring input token and attention head sparsity.

Neural Architecture Search. There has been increasing interest in designing efficient networks with neural architecture search (NAS). Among different methods, weight-sharing NAS has become popular due to its training efficiency Yu et al. (2020); Wang et al. (2021a); Sahni et al. (2021). These methods train one over-parameterized supernet whose weights are shared across all sub-networks in the search space to conduct the architecture search, significantly reducing the computational cost. This one-shot NAS approach usually contains two phases. In the initial phase, all candidate networks within the search space are refined using weight sharing, ensuring that every network concurrently achieves optimal performance by the conclusion of training. The subsequent phase employs conventional search methods, such as evolutionary algorithms, to identify the top-performing models under different resource limitations. BigNAS Yu et al. (2020) trains a supernet that covers all subnets in the architecture search space with weight sharing. AttentiveNAS Wang et al. (2021a) improves existing two-stage NAS with attentive sampling of networks on the best or the worst Pareto front.

Efficient ViT Design. There has been an increasing emphasis on developing efficient ViTs through both NAS-driven designs and hand-crafted approaches. Each method has showcased efficiency gains, as evidenced by hardware evaluations. Regarding hand-crafted designs, LeViT Graham et al. (2021) replaces the original patch stem with a convolutional stem, enhancing inference speed for image classification. MOAT Yang et al. 
(2022) seamlessly integrates the advantages of mobile convolution and self-attention within a single block through a meticulous redesign. Regarding NAS-driven designs, HAT Wang et al. (2020) searches for an encoder-decoder transformer structure and requires additional retraining or finetuning of the optimal candidates obtained during the search. BossNAS Li et al. (2021a) searches for CNN-transformer hybrid models with block-wise self-supervised neural architecture search. CvT Wu et al. (2021) proposes a new architecture family and searches for strides and kernel sizes. Autoformer Chen et al. (2021d) entangles the weights of different vision transformer blocks in the same layer during supernet training and trains three supernets with different scales to reduce training time. NASViT Anonymous (2022) extends the search space to obtain efficient models leveraging a hybrid architecture of convolutions and transformers. The current designs and methods, however, have their own limitations, including the incorporation of operations that are not mobile-friendly, an over-reliance on parameters and FLOPs as primary evaluation metrics, restrictive search spaces, and prolonged training or search durations. Our advantages over existing works can be summarized as follows: ① In contrast to hand-crafted methods, leveraging network search allows us to pinpoint the optimal structure tailored to specific hardware constraints. Hand-crafted designs tend to yield a set of models that primarily differ in their dimensions. While they may serve as general-purpose models suitable for a broad range of devices, they may not be as finely optimized for a specific hardware profile as network-searched models. We incorporate a hybrid search space with an inductive bias, which not only broadens the search space but also accelerates convergence and mitigates local optima. 
② From a hardware-oriented perspective, it is more efficient to define a narrow search space tailored to the specific constraints of target devices. As a result, there is no need for training over the entire search space. By segmenting the solution into multiple supernets, we not only sidestep conflicts but also dramatically cut down on training overhead. ③ Instead of the traditional approach of relying on hardware deployment for every candidate or using a latency predictor, our method stands out in its accuracy and efficiency. Our latency prediction model is a training-free theoretical model suitable for general-purpose hardware such as GPUs. It considers the properties of the target hardware, the model type, the model size, and the data granularity. It then quantitatively captures both the computation latency and the data movement latency, enabling it to precisely predict the actual throughput for each layer.

![3_image_0.png](3_image_0.png)

Figure 2: An illustration of our ViT structure. Conv and MBConv refer to standard convolution and inverted residual blocks, respectively. All CNN and transformer blocks contain a stack of dynamic layers with searchable architecture configurations. We also search for the input resolutions.

## 3 Methodology

## 3.1 Background

Our goal is to search for lightweight and high-performance models for deployment on edge devices. There are two problems with current ViT models: (i) Most of them sacrifice efficiency to improve accuracy. The large computation cost and number of model parameters make it difficult to deploy these models to devices such as cell phones or FPGAs. For example, the Swin transformer Liu et al. (2021) achieves SOTA accuracy on multiple computer vision tasks. In the field of object detection, it is only applied to frameworks such as RetinaNet Lin et al. (2017) or Mask R-CNN He et al. (2017), but has not yet been applied to frameworks such as the YOLO series Redmon et al. (2016); Redmon & Farhadi (2017; 2018); Bochkovskiy et al. 
(2020) that are known for their efficiency. (ii) Most current works simply scale down from the original dimension to obtain models of different sizes Dosovitskiy et al. (2021); Touvron et al. (2021). This coarse-grained model selection significantly sacrifices the accuracy of small models and offers limited flexibility in adjusting the size. Despite superior performance in the high computational budget regime, ViTs still do not perform as well as their CNN counterparts on small or medium-sized architectures, especially when compared to CNN architectures that are highly optimized by neural architecture search. CNN networks such as MobileNets Howard et al. (2017), ShuffleNetv2 Ma et al. (2018), ESPNetv2 Mehta et al. (2019), and MNASNet Tan et al. (2019) can easily replace the heavyweight backbones in existing task-specific models to reduce the model size and improve latency. One major drawback of these approaches is that they are spatially localized. On the other hand, the transformer is able to learn global representations, but it ignores the spatial inductive bias inherent in CNNs and thus requires more parameters to learn visual representations. Based on these considerations, we focus on combining the advantages of CNNs (e.g., spatial inductive bias) and ViTs (e.g., input-adaptive weighting and global processing) to find a hybrid architecture of convolution and transformer that considers both performance and efficiency. Past work Mehta & Rastegari (2021) has shown that incorporating the downsampling mechanism of convolution into the transformer architecture can effectively reduce the model size while improving its ability to process high-resolution images, which greatly benefits both the learning and deployment of the transformer. Due to the efficiency of local computation, convolution is introduced to process high-resolution inputs, while the transformer is used to process low-resolution features to extract global information. 
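The rationale for processing high-resolution features with convolutions and low-resolution features with transformers can be seen from a quick token-count calculation. The stage strides below are illustrative, not the searched architecture; the point is only that self-attention cost grows quadratically with the number of tokens.

```python
# Token counts per stage for a 256x256 input under illustrative stage strides
# (assumed values, not the searched network): self-attention cost grows
# quadratically in the number of tokens, so transformers are cheapest on the
# late, low-resolution stages, while convolutions handle early high-res stages.
def num_tokens(side, patch=2):
    """Number of patch tokens for a square feature map of the given side."""
    return (side // patch) ** 2

stage_sides = [256 // stride for stride in (4, 8, 16, 32)]   # [64, 32, 16, 8]
stage_tokens = [num_tokens(s) for s in stage_sides]           # [1024, 256, 64, 16]
attn_cost = [n * n for n in stage_tokens]   # quadratic attention cost (arb. units)
ratio = attn_cost[0] // attn_cost[-1]       # first stage vs. last stage
```

Under these assumptions, attention at the first stage would be thousands of times more expensive than at the last, which motivates the conv-early/transformer-late split.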
## 3.2 Search Space

As shown in Figure 2, each of our hybrid blocks consists of standard convolution layers and one transformer block. Transformer stands for the transformer blocks, and MBConv refers to MobileNetv2 Sandler et al. (2018) blocks. Standard ViT reshapes an input tensor X of size H × W × C into N × d, where C, H, and W represent the channels, height, and width, N is the number of patches, and d is the embedding dimension. However, reshaping into a 2D feature ignores the spatial inductive bias that is inherent in CNNs. Following Mehta & Rastegari (2021), we apply an n × n standard convolution layer followed by a point-wise convolution layer to produce X. The convolution layer encodes local spatial information, while the point-wise convolution projects the tensor to a high-dimensional space by learning linear combinations of the input channels. To learn global representations with spatial inductive bias, we reshape (unfold) X into N non-overlapping flattened patches X′ of size hw × N × d, where hw is the number of pixels of one patch, denoted P. The folding-unfolding process replaces local processing in convolutions with global processing using transformers. This allows the transformer block to have both CNN- and ViT-like properties, which helps it learn better representations with fewer parameters and simple training recipes.

![4_image_0.png](4_image_0.png)

Figure 3: **Framework Overview.** We train a weight-shared supernet by iteratively optimizing randomly sampled subnets. *Left:* We search for the number of heads and expansion ratio of a transformer block. *Middle:* We search for the width and depth of MBConv and transformer blocks. Each layer and block are dynamic. Solid lines and dark blocks represent selected components, in contrast to the dashed lines and lighter blocks. *Right:* Perform a coarse-to-fine search with hardware latency constraints to find the model with the highest validation accuracy. 
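The unfold/fold reshaping just described can be sketched in pure Python for a single channel (a minimal illustration, not the paper's implementation): unfolding groups values by pixel position within a patch so that a transformer can mix information across patches, and folding restores the original spatial layout.

```python
# Minimal single-channel sketch of the unfold/fold around the transformer
# block (pure Python, not the paper's implementation). Shapes follow the text:
# P pixel positions per patch, N patches.
def unfold(x, h, w):
    """H x W grid -> list indexed [pixel_within_patch][patch]."""
    H, W = len(x), len(x[0])
    N = (H // h) * (W // w)
    out = [[None] * N for _ in range(h * w)]
    for i in range(H):
        for j in range(W):
            patch = (i // h) * (W // w) + (j // w)   # which patch the pixel is in
            pixel = (i % h) * w + (j % w)            # position inside the patch
            out[pixel][patch] = x[i][j]
    return out

def fold(xp, H, W, h, w):
    """Inverse of unfold: restores the original H x W spatial layout."""
    x = [[None] * W for _ in range(H)]
    for pixel, row in enumerate(xp):
        for patch, v in enumerate(row):
            i = (patch // (W // w)) * h + pixel // w
            j = (patch % (W // w)) * w + pixel % w
            x[i][j] = v
    return x

grid = [[r * 4 + c for c in range(4)] for r in range(4)]
# Round trip preserves both patch order and pixel order within each patch:
assert fold(unfold(grid, 2, 2), 4, 4, 2, 2) == grid
```

Applying a transformer row-wise on the unfolded representation (one row per pixel position) is what mixes information across patches while leaving the within-patch pixel order intact.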
For each patch, inter-patch relationships are encoded by applying a transformer at each pixel position to obtain X′out(p) = Transformer(X′(p)), where 1 ≤ p ≤ P. This type of transformer block prevents ViT from losing either the patch order or the spatial order of pixels within each patch. After that, we reshape (fold) X′out back to Xout of size H × W × C, so the spatial order of pixels within each patch is retained throughout the process. We summarize the detailed dimensions of our search space in Table 1, where width represents the channel size for CNN layers and the hidden dimension for transformer layers, respectively. The depth denotes the number of repeated CNN and transformer layers for each block. The expansion ratio refers to the expansion ratio of the depth-wise convolution layer for CNN layers and the MLP expansion ratio for transformer layers. For each CNN block, we search for the optimal channel widths, block depths, expansion ratios, and kernel sizes. For each transformer block, we search for the best number of heads, hidden feature dimensions, depths, and MLP expansion ratios. Following one-shot NAS methods, we encode the search space into a supernet. All subnets share the weights of their common parts, and the supernet is the largest model in the space. In particular, the supernet stacks the maximum number of transformer blocks with the largest embedding dimension, Q-K-V dimension, and MLP ratio defined in the space. During training, all possible subnets are uniformly sampled, and the corresponding weights are updated. Our transformer block structure, along with the search space, allows us to search for lightweight models. This is mainly due to learning global representations with transformers. For a given patch, prior work converts spatial information by learning linear combinations of pixels; the global information is then encoded by learning inter-patch information using transformers. 
As a result, these models lose the image-specific inductive bias that is inherent to CNNs and require more capacity to learn visual representations; hence, they are deep and wide. Unlike these models, ours uses convolution and transformers in such a way that the resulting transformer blocks have convolution-like properties while allowing for global processing. This modeling capability allows us to design shallow and narrow models that are lightweight. According to the constraints on model parameters, we partition the large-scale search space into three sub-spaces based on parameter count and encode them into three independent supernets. Such a partition allows the search algorithm to concentrate on finding models within a specific parameter range, which can be specialized by users according to their available resources and application requirements. It also reduces gradient conflicts between large and small sub-networks trained via weight sharing due to gaps in model sizes. We apply one-shot NAS, which includes two phases: (i) Train a supernet containing all the candidate sub-networks. During training, the parameters of all candidate networks in the search space are optimized simultaneously by weight sharing. (ii) Search for the best sub-network in the well-trained supernet under various resource constraints Cai et al. (2019). Typical search techniques include evolutionary algorithms Guo et al. (2020). Figure 3 shows our overall framework. We use the search space introduced in Table 1 and train a weight-shared supernet by iteratively optimizing randomly sampled subnets from the space. We search for the width, depth, and expansion ratio of both CNN layers and transformer layers, as well as the number of self-attention heads. The layers and depth in each block are dynamic. After training the supernet, we perform a coarse-to-fine search with hardware latency constraints to find the model with the highest validation accuracy. 
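The per-iteration sampling step of supernet training described above can be sketched as follows. The search space here is a small hypothetical subset in the style of Table 1, and the weight slice only illustrates the weight-sharing idea (a real implementation would view and update the shared tensor in place rather than copy a slice).

```python
import random

# Sketch of one supernet training step's sampling: draw a subnet uniformly
# from a Table-1-style space and slice its weights out of the largest shared
# block. The space below is a small illustrative subset, not the full Table 1.
SPACE = {
    "transformer-1": {"width": [48, 64], "depth": [2, 3, 4, 5],
                      "heads": [2, 3, 4], "mlp_ratio": [1.5, 2]},
    "transformer-2": {"width": [64, 80], "depth": [2, 3, 4, 5],
                      "heads": [2, 3, 4], "mlp_ratio": [1.5, 2]},
}

def sample_subnet(rng):
    """Uniformly pick one option per searchable dimension of every block."""
    return {blk: {dim: rng.choice(opts) for dim, opts in dims.items()}
            for blk, dims in SPACE.items()}

# Weight sharing: only the largest block's weight matrix is stored; a sampled
# subnet reads the top-left slice of it (copied here for simplicity).
max_w = max(SPACE["transformer-1"]["width"])
super_weight = [[0.0] * max_w for _ in range(max_w)]

rng = random.Random(0)
cfg = sample_subnet(rng)
w = cfg["transformer-1"]["width"]
sub_weight = [row[:w] for row in super_weight[:w]]
```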
## 3.3 Neural Architecture Search Pipeline

We propose a latency prediction model, which can efficiently estimate the latency of network candidates by considering network properties, the hardware memory access pattern, and the degree of parallelism.

## 3.3.1 Supernet Training

In the transformer search space, the classical supernet training strategy encounters the following challenges. (i) Slow convergence. This can be attributed to the independent training of transformer blocks, which results in the weights being updated a limited number of times Chen et al. (2021d). (ii) Low performance. The performance of subnets that inherit weights from a one-shot supernet trained under the classical weight-sharing strategy is far below their performance when trained from scratch, which limits the ranking capacity of supernets Anonymous (2022). To mitigate this, existing works perform additional retraining of the searched architectures since their weights are not fully optimized.

Table 1: An illustration of our search space. The search space is divided into three individual supernets within different parameter ranges to satisfy different resource constraints. Values in braces denote the candidate choices for each search dimension.

| Search Dimension | Width | Depth | Number of Heads | Expansion Ratio |
|------------------|-------|-------|-----------------|-----------------|
| **Supernet XXS** | | | | |
| Conv | {16, 24} | - | - | - |
| MBConv-1 | {16, 24} | {1, 2} | - | 1 |
| MBConv-2 | {24, 32} | {2, 3} | - | 1 |
| MBConv-3 | {32, 48} | {2, 3} | - | 1 |
| Transformer-1 | {48, 64} | {2, 3, 4, 5} | {2, 3, 4} | {1.5, 2} |
| Transformer-2 | {64, 80} | {2, 3, 4, 5} | {2, 3, 4} | {1.5, 2} |
| Transformer-3 | {80, 96} | {2, 3, 4, 5} | {2, 3, 4} | {1.5, 2} |
| Transformer-4 | {96, 112} | {2, 3, 4, 5} | {2, 3, 4} | {1.5, 2} |
| MBPool | {1000} | - | - | 6 |
| Number of Stages | {3, 4} | | | |
| Params Range | 1 ∼ 1.8M | | | |
| **Supernet XS** | | | | |
| Conv | {16, 24} | - | - | - |
| MBConv-1 | {24, 32} | {1, 2} | - | 1 |
| MBConv-2 | {32, 48} | {2, 3} | - | 1 |
| MBConv-3 | {48, 64} | {2, 3} | - | 1 |
| Transformer-1 | {64, 80} | {2, 3, 4, 5} | {4, 5, 6} | {1.5, 2} |
| Transformer-2 | {80, 96} | {2, 3, 4, 5} | {4, 5, 6} | {1.5, 2} |
| Transformer-3 | {96, 112} | {2, 3, 4, 5} | {4, 5, 6} | {1.5, 2} |
| Transformer-4 | {112, 128} | {2, 3, 4, 5} | {4, 5, 6} | {1.5, 2} |
| MBPool | {1000} | - | - | 6 |
| Number of Stages | {3, 4} | | | |
| Params Range | 1.6 ∼ 2.6M | | | |
| **Supernet S** | | | | |
| Conv | {16, 24} | - | - | - |
| MBConv-1 | {24, 32} | {1, 2} | - | 1 |
| MBConv-2 | {32, 48} | {2, 3} | - | 1 |
| MBConv-3 | {48, 64} | {2, 3} | - | 1 |
| Transformer-1 | {64, 96} | {2, 3, 4, 5} | {7, 8, 9} | {1.5, 2} |
| Transformer-2 | {96, 128} | {2, 3, 4, 5} | {7, 8, 9} | {1.5, 2} |
| Transformer-3 | {128, 160} | {2, 3, 4, 5} | {7, 8, 9} | {1.5, 2} |
| Transformer-4 | {160, 192} | {2, 3, 4, 5} | {7, 8, 9} | {1.5, 2} |
| MBPool | {1000} | - | - | 6 |
| Number of Stages | {3, 4} | | | |
| Params Range | 4.0 ∼ 7.2M | | | |

Weight Entanglement. Inspired by Autoformer Chen et al. (2021d), which operates on a transformer-only search space, we apply the weight entanglement training strategy to our vision transformer search space. The central idea is to enable different transformer blocks to share weights for their common parts in each layer. The weight entanglement strategy works specifically for homogeneous building blocks, because homogeneous blocks are structurally compatible, so weights can be shared between them. In the implementation, for each layer, we only need to store the weights of the largest of the n homogeneous candidate blocks; the remaining smaller building blocks can extract their weights directly from the largest one. Compared with classical weight-sharing methods, the weight entanglement strategy has three advantages. (i) Faster convergence. 
Weight entanglement allows each block to be updated more times than under the previous independent training strategy. (ii) Low memory cost. We only need to store the parameters of the largest building blocks for each layer, instead of all the candidates in the space. (iii) Better performance of the subnets. We randomly select a transformer architecture in each iteration, obtain its weights from the weights of the supernet, and compute the losses of the subnets. Finally, we update the corresponding weights while keeping the remaining weights frozen. The architecture search space P is encoded in a supernet, denoted as S(P, W), where W is the weight of the supernet that is shared across all the candidate architectures s. Algorithm 1 illustrates the training procedure of our supernet. During training, the weight W is optimized by:

$$W=\arg\min_{W}{\mathcal{L}}({\mathcal{S}}(P,W)),\qquad(1)$$

where L represents the loss function on the training dataset. To reduce memory usage, one-shot methods usually sample subnets from S for optimization.

Algorithm 1 Supernet Training.
Input: training epochs N, search space P, supernet S, loss function L, training dataset Dtrain, initial supernet weights WA, candidate weights w
for i in N epochs do
  for data, labels in Dtrain do
    Randomly sample one transformer architecture from search space P
    Obtain the corresponding weights w from WA
    Compute the gradients based on L
    Update the corresponding part of w in WA while freezing the rest of the supernet S
  end for
end for
Output: trained supernet S

The second stage is to search architectures by ranking the performance of subnets based on the learned weights in W. The subnet search objective is:

$$s^{*}=\arg\max_{s\in{\mathcal{S}}}\mathrm{Acc}({\mathcal{S}}(s,w)),\qquad(2)$$

where Acc indicates the top-1 accuracy of the architecture on the validation dataset and s is a sampled subnet that inherits its weights w from W.

Training Efficiency. We improve training efficiency in two ways. 
(1) *Search space reduction*: We introduce an inductive bias whereby the model dimension (width) increases gradually for each transformer stage; candidates that do not follow this pattern are discarded. This reduces the search space from 10^16 to 10^10 subnets and improves training efficiency. For reference, AutoFormer Chen et al. (2021d) has 10^16 subnets and BigNAS Yu et al. (2020) has more than 10^12. A similar architectural inductive bias is applied in the stage-based PVT Wang et al. (2021b) and the Swin Transformer Liu et al. (2021). (2) *Weight entanglement*: Different transformer blocks share the weights of their common parts in each layer, with the significant benefits of faster convergence, a reduced memory footprint, and improved subnet performance.

## 3.3.2 Latency-Aware Search Scheme

Latency Profiling. Upon acquisition of the trained supernet, we carry out a search algorithm to derive the ideal subnets. Predominant strategies optimize the inference speed of transformers via computational complexity (MACs) or throughput (images/sec) derived from server GPUs. Such metrics, however, fail to accurately reflect real on-device latency. Traditional hardware-aware network search methods usually depend on the hardware deployment of each candidate within the search space to ascertain latency, a process that is both time-consuming and inefficient: a single candidate demands hundreds of inferences to generate an accurate latency estimate, prolonging the search process. Existing works, such as HAT, employ a latency predictor, pre-trained with thousands of real latency data points, as an offline method to forecast candidate latency, as opposed to obtaining real latency by inference during the search. This method, however, is only applicable to a relatively small search space. For larger search spaces, an increased volume of measured latency data is required as a training set for the predictor, substantially raising the time cost. 
If the training set is inadequate, the predictor fails to estimate latency accurately. To overcome this challenge, we construct a latency lookup table by collecting the on-device latency of MBConv and transformer blocks of varying dimensions. Specifically, the Conv width includes {16, 24}, the MBConv width includes {16, 24, 32, 48}, the transformer block width includes {48, 64, 80, 96, 112}, the number of heads includes {2, 3, 4, 5}, and the expansion ratio includes {1.5, 2}, making a total of 46 modules. It is noteworthy that the execution speed of an individual module may differ from when modules are combined, due to inter-module influences. Since we do not calculate the latency of each module in isolation, we measure the latency of each module based on the change in end-to-end latency when it is removed from the entire model. This methodology provides a more comprehensive and realistic understanding of the latency impacts.

Search Pipeline. We propose a coarse-to-fine strategy that integrates latency feedback directly into the design loop when searching for models. This eliminates the need for FLOPs as a latency proxy and reduces the search time, enabling us to efficiently design specialized models for various hardware platforms. Figure 3 (right) illustrates our coarse-to-fine strategy. It contains two steps: initially, we identify a rough skeleton of promising network candidates; thereafter, we sample multiple fine-grained variations around each skeleton architecture of interest. More specifically, during the coarse-grained phase, we execute a network search using parameters such as the memory threshold and perform latency evaluations using the lookup table. Candidates that meet our latency budget proceed to the fine-grained search phase. Here, we conduct an evolutionary search to procure the optimal subnets (as shown in Figure 5). Our objective is to maximize classification accuracy while minimizing the model size. 
At the onset of the evolutionary search, we select N random architectures as seeds. The top-k architectures are chosen as parents to generate the next generation via crossover and mutation. For crossover, two randomly selected candidates are crossed to produce a new one during each generation. For mutation, a candidate first modifies its depth with probability Pd and then mutates each block with probability Pm to generate a new architecture. Algorithm 2 details the procedure of our search method.

Algorithm 2 Latency-aware Coarse-to-fine Search.
Coarse-grained latency-guided search:
Input: latency space L, search space P, supernet S, latency budget Tl, parameter budget Tp
for i in N epochs do
  Randomly sample one subnet architecture αi from S
  Obtain its size and latency through L
  if the budget of αi satisfies Tl and Tp then
    Save the subnet to the list A ← {αi}
  end if
end for
A := the top-K candidates
Output: promising subnet candidates A

Fine-grained accuracy-guided search:
Input: number of generation iterations T, validation dataset Dval, mutation probability of depth Pd, mutation probability of each layer Pm
while search step t ∈ (0, T) do
  for αi ∈ A do
    Obtain the accuracy of the subnet N(αi, Wαi) on Dval
  end for
  Topk := the top-K candidates
  PopC = Crossover(Topk, S, C)
  PopM = Mutation(Topk, Pd, Pm, S, C)
  Pop(t + 1) = PopC ∪ PopM
end while
Output: the optimal subnet α∗

Different devices interpret and execute specific operations distinctively due to variations in their underlying architecture, design, computational capabilities, memory management, IO, and bandwidth. Therefore, an operator's latency profile on one hardware platform (e.g., a mobile CPU/GPU) may differ considerably from its profile on another, such as a distinct GPU. This variance stems from the unique characteristics of each hardware platform. Consequently, it becomes imperative to conduct device-specific optimizations and tests. 
One cannot simply port results from one hardware setting to another and expect them to remain valid. Nevertheless, crafting multiple lookup tables, although demanding, is substantially more efficient than undertaking multiple comprehensive testing cycles. In our approach, we require data from under 100 instances, while traditional methods might need data from more than 1000 instances. Additionally, our method is training-free, whereas conventional techniques demand training, further increasing their resource consumption.

## 4 Experiments

## 4.1 Datasets and Implementation Details

Our experiments are conducted on ImageNet Deng et al. (2009), which contains approximately 1.2 million images. We report the accuracy on the 50k images of the test set. The image resolution is 256 × 256. We train the supernets using a recipe similar to DeiT: data augmentation techniques, including RandAugment Cubuk et al. (2020), Cutmix Yun et al. (2019), Mixup Zhang et al. (2017), and random erasing Zhong et al. (2020), are adopted with the same hyperparameters as in DeiT, except for repeated augmentation. Images are split into patches of size 16 × 16. All models are implemented in PyTorch 1.7 and trained on Nvidia Tesla V100 GPUs. In the evolutionary search process, we configure a population size of 50 and proceed through 20 generations. Within each generation, we select the top-performing 10 architectures to function as parents, which then produce offspring networks via mutation and crossover. We assign the mutation probabilities Pd and Pm as 0.2 and 0.4, respectively.

![8_image_0.png](8_image_0.png)

Figure 4: **Latency Breakdown.** Results are obtained on an iPhone 13 with CoreML. The on-device speed for various operators is reported. The latency of models and operations is denoted with different colors. 
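A toy, self-contained version of the coarse-to-fine evolutionary search of Algorithm 2 with the configuration above might look as follows; the per-block latency table and the accuracy proxy are invented for illustration and stand in for the real lookup table and supernet validation accuracy.

```python
import random

# Toy version of the coarse-to-fine search in Algorithm 2: a latency lookup
# table scores candidates, a coarse pass filters by a latency budget, and a
# small evolutionary loop mutates/crosses the survivors. Latency numbers and
# the accuracy proxy are invented for illustration.
LAT_MS = {48: 1.0, 64: 1.4, 80: 1.9, 96: 2.5}  # block width -> latency (assumed)
WIDTHS = sorted(LAT_MS)

def latency(arch):            # arch: list of per-block widths
    return sum(LAT_MS[w] for w in arch)

def acc_proxy(arch):          # stand-in for supernet validation accuracy
    return sum(arch)          # bigger blocks score higher in this toy setup

def mutate(arch, rng, p_m=0.4):
    return [rng.choice(WIDTHS) if rng.random() < p_m else w for w in arch]

def crossover(a, b, rng):
    return [rng.choice(pair) for pair in zip(a, b)]

rng = random.Random(0)
budget = 8.0
# Seed the population (one guaranteed-feasible arch plus random ones):
pool = [[48] * 4] + [[rng.choice(WIDTHS) for _ in range(4)] for _ in range(49)]
pool = [a for a in pool if latency(a) <= budget]              # coarse pass
for _ in range(20):                                           # fine pass
    parents = sorted(pool, key=acc_proxy, reverse=True)[:10]
    kids = [mutate(rng.choice(parents), rng) for _ in range(20)]
    kids += [crossover(rng.choice(parents), rng.choice(parents), rng)
             for _ in range(20)]
    pool = parents + [k for k in kids if latency(k) <= budget]
best = max(pool, key=acc_proxy)
```

Because parents are carried over each generation, the best score is non-decreasing, and every surviving candidate respects the latency budget by construction.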
| Model | Params (M) | FLOPs (G) | Top-1 Accuracy (%) | Latency (ms) |
|---|---|---|---|---|
| MobileNetv1 | 2.6 | 0.3 | 68.4 | 14.0 |
| MobileNetv2 | 2.6 | 0.2 | 69.8 | 10.7 |
| MobileNetv3 | 2.5 | 0.1 | 67.4 | 8.9 |
| ShuffleNetv2 | 2.3 | 0.1 | 69.4 | - |
| ESPNetv2 | 2.3 | 0.2 | 69.2 | - |
| MobileViT_XXS | 1.3 | 0.2 | 69.0 | 14.7 |
| AutoViT_XXS | 1.8 | 0.3 | 71.3 | 10.2 |
| DeiT-T | 5.7 | 1.3 | 72.2 | 16.7 |
| T2T-T | 4.3 | 1.1 | 71.7 | - |
| PiT-T | 10.6 | 0.7 | 72.4 | 15.4 |
| CrossViT-T | 6.9 | 1.6 | 73.4 | 19.5 |
| EdgeViT-XXS | 4.1 | 0.6 | 74.4 | 28.9 |
| MobileNetV3-L | 5.4 | 0.2 | 75.2 | 26.5 |
| tiny-MOAT-0 | 3.4 | 0.8 | 75.5 | - |
| LeViT-128S | 7.8 | 0.3 | 76.6 | 28.2 |
| MobileViT_XS | 2.3 | 0.6 | 74.8 | 26.5 |
| AutoViT_XS | 2.5 | 0.8 | 75.5 | 19.3 |
| DenseNet-169 | 14.0 | 6.7 | 76.2 | - |
| EfficientNet-B0 | 5.3 | 0.4 | 76.3 | 14.5 |
| AutoFormer-tiny | 5.7 | 1.3 | 74.7 | 25.1 |
| LocalViT-T | 5.9 | 1.3 | 74.8 | - |
| CeiT-T | 6.4 | 1.2 | 76.4 | - |
| PoolFormer-s12 | 12.0 | 2.0 | 77.2 | 59.0 |
| GLIT | 7.2 | 1.4 | 76.3 | - |
| ConvMixer-1024/12 | 14.6 | - | 77.8 | 35.7 |
| NASViT-A0 | 0.3 | 0.2 | 78.2 | - |
| tiny-MOAT-1 | 5.1 | 1.2 | 78.3 | - |
| ResNet50 | 25.5 | 4.1 | 78.5 | 29.4 |
| PVT | 13.1 | 2.1 | 78.7 | 54.5 |
| LeViT-128 | 9.2 | 0.4 | 78.6 | 36.8 |
| MobileViT_S | 5.6 | 1.1 | 78.4 | 35.7 |
| EfficientFormer-L1 | 12.3 | 1.3 | 79.2 | 28.0 |
| DeiT-S | 22.5 | 4.5 | 79.8 | 41.0 |
| AutoViT_S | 6.3 | 1.3 | 79.2 | 27.9 |

Table 2: Accuracy comparison on ImageNet with state-of-the-art CNN and transformer-based models under similar parameters and computational cost (FLOPs).

![8_image_1.png](8_image_1.png)

Figure 5: Coarse-to-fine Architecture Search. *Upper figure:* Pre-define some promising candidate network skeletons by the number of parameters and latency budget. 
*Bottom figure:* Generate more architectures by randomly mutating each skeleton architecture, varying the depth slightly, and using the weights from the single-stage model for the induced child models to evaluate all the candidates.

Table 3: Results comparison on COCO2017 object detection and instance segmentation.

| Model | APb | APb50 | APb75 | APm | APm50 | APm75 |
|---|---|---|---|---|---|---|
| ResNet18 | 34 | 54 | 36.7 | 31.2 | 51 | 32.7 |
| PoolFormer-S12 | 37.3 | 59.0 | 40.1 | 34.6 | 55.8 | 36.9 |
| EfficientFormer-L1 | 37.9 | 60.3 | 41.0 | 35.4 | 57.3 | 37.3 |
| PVT-Tiny | 36.7 | 59.2 | 39.3 | 35.1 | 56.7 | 37.3 |
| AutoViT_S | 37.6 | 60.2 | 41.1 | 35.3 | 57.5 | 37.6 |

Table 4: Comparison with existing ViT neural architecture search baselines.

| Model | CIFAR10 | CIFAR100 | Flowers | Cars |
|---|---|---|---|---|
| EfficientNet-B0 | 98.1 | 88.1 | 96.9 | 90.8 |
| DeiT-S | 98.0 | 87.1 | 97.8 | - |
| CeiT-T | 98.5 | 88.4 | 96.9 | 90.5 |
| CeiT-T↑384 | 98.5 | 88.0 | 97.8 | 93.0 |
| AutoViT_S | 98.8 | 89.6 | 98.1 | 92.4 |

## 4.2 Experimental Results

We conduct an architecture search on ImageNet and obtain several hybrid transformer models with different parameter sizes. All these models directly inherit their weights from the supernet without additional retraining or other post-processing. We present a summary of performance in Table 2. We compare our hybrid models with various CNN-based and transformer-based models, namely: MobileNets Howard et al. (2017), ShuffleNetv2 Ma et al. (2018), ESPNetv2 Mehta et al. (2019), MobileViT Mehta & Rastegari (2021), DeiT-T Touvron et al. (2021), T2T-T Yuan et al. (2021b), PiT-T Heo et al. (2021), CrossViT-T Chen et al. (2021b), DenseNet-169 Huang et al. (2017), EfficientNet-B0 Tan & Le (2019), LocalViT-T Li et al. 
(2021b), CeiT-T Yuan et al. (2021a), PVT Wang et al. (2021b), LeViT Graham et al. (2021), NASViT Anonymous (2022), GLiT Chen et al. (2021a), PoolFormer Yu et al. (2022), and MOAT Yang et al. (2022). As shown in Figure 1, our searched hybrid models obtain a better trade-off between model size and Top-1 accuracy on ImageNet compared to other models. The configurations of our final searched hybrid models are shown in Table 5. Our AutoViT_XXS, with only 1.8M parameters and 0.3 GFLOPs, achieves a Top-1 accuracy of 71.3%, which is higher than all the other CNNs and MobileViT_XXS. Moreover, its latency of 10.2 ms is the lowest among comparable models, demonstrating a significant efficiency improvement. Our AutoViT_XS (19.3 ms, 75.5% Top-1 accuracy, 0.8 GFLOPs) and AutoViT_S (27.9 ms, 79.2% Top-1 accuracy, 1.3 GFLOPs) also outperform existing ViT variants in both speed and accuracy. Because network-searched models can be optimized for a specific hardware profile rather than only scaled in their dimensions, they are more parameter-efficient than hand-crafted designs. For example, our AutoViT_XS model has 36% fewer parameters (2.5M vs. 3.4M) compared to tiny-MOAT-0 while achieving the same accuracy of 75.5%. This parameter efficiency becomes even more critical when considering the memory constraints typical of edge devices. We measure the latency of various models on a mobile platform, displayed in Figure 4. The models include MobileNetV1, PiT-T, MobileViT_XS, DeiT-T, and our own AutoViT_XXS, with different colored bars representing different modules. Notably, our searched model showcases a significant efficiency improvement, particularly in the attention and MLP (multilayer perceptron) components. To showcase the generalizability of our method, we evaluate our model on object detection and instance segmentation benchmarks, as presented in Table 3. 
Experiments are conducted on COCO 2017 Lin et al. (2014). We use the Mask R-CNN He et al. (2017) framework and replace different backbones. FLOPs are measured at 1333 × 800 resolution. We also conduct experiments on downstream benchmarks: CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009), Flowers Nilsback & Zisserman (2008), and Cars Krause et al. (2013). Results are presented in Table 4. AutoViT_S demonstrates competitive or superior performance across different tasks and datasets. We achieve an accuracy of 89.6% on CIFAR-100, which is 1.2% higher than CeiT-T. We also achieve competitive accuracy compared to EfficientFormer-L1 with fewer parameters (6.3M vs. 12.3M).

## 5 Ablation Study

## 5.1 Knowledge Distillation

In this experiment, we use knowledge distillation to improve the accuracy of the supernet. We use the pre-trained EfficientNet-B5 with 83.3% top-1 accuracy and RegNetY-32G with 83.6% top-1 accuracy as different teacher models. We also apply soft distillation and hard distillation for comparison.

Figure 6: We compare the loss and accuracy convergence curves of the three training methods for Supernet_XXS. From left to right: no distillation, distillation with EfficientNet-B5 as a teacher, and distillation with RegNetY-32G as a teacher. Models trained with RegNetY-32G as the teacher outperform models trained with EfficientNet-B5 and those without distillation.

Soft distillation Hinton et al. (2015) minimizes the Kullback-Leibler divergence between the softmax of the teacher and the softmax of the student model. The distillation
loss is:

$$L_{soft}=(1-\alpha)L_{CE}(\psi(Z_{s}),y)+\alpha\tau^{2}KL\left(\psi\left(\frac{Z_{s}}{\tau}\right),\psi\left(\frac{Z_{t}}{\tau}\right)\right),\qquad(3)$$

where $Z_t$ and $Z_s$ are the logits of the teacher and student model, respectively, $\psi$ is the softmax function, $\tau$ is the temperature for the distillation, and $\alpha$ is the coefficient balancing the Kullback–Leibler divergence loss (KL) and the cross-entropy ($L_{CE}$) on the ground-truth labels $y$. For hard-label distillation Touvron et al. (2021), we take the hard decision of the teacher as a true label. The distillation objective is:

$$L_{hard}=(1-\alpha)L_{CE}(\psi(Z_{s}),y)+\alpha L_{CE}(\psi(Z_{s}),y_{t}),\qquad(4)$$

where $y_t = \arg\max_{c} Z_t(c)$ is the hard label from the teacher logits $Z_t$.

| Models | Width | Depth | Number of heads | Expansion ratio |
|-------------|-----------------------|---------|-------------------|-------------------|
| AutoViT_XXS | [48, 64, 80, 96, 112] | [5] | [3, 3, 3, 4, 2] | [2, 1.5, 2, 2, 2] |
| AutoViT_XS | [96, 120, 144, 168] | [4] | [4, 6, 4, 4] | [1.5, 2, 2, 2] |
| AutoViT_S | [128, 176, 224, 272] | [4] | [9, 8, 8, 9] | [1.5, 2, 2, 1.5] |

Table 5: Our searched hybrid architectures.

The results are shown in Table 6. When using soft distillation, both EfficientNet-B5 and RegNetY-32G as teachers outperform the model trained without distillation (68.6% vs. 63.5% and 71.3% vs. 64.6%). Moreover, we observe that although the accuracy of the RegNetY-32G model is only 0.3% higher than that of the EfficientNet-B5 model, the accuracy of the supernet with RegNetY-32G as teacher outperforms the one with EfficientNet-B5 as teacher by 2.7%. We also compare hard and soft distillation with RegNetY-32G as a teacher, where hard distillation outperforms the soft one by 0.4%. Figure 6 shows the loss and accuracy convergence curves of the three training methods.
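Equations (3) and (4) are straightforward to write out. Below is a minimal pure-Python sketch of both losses; the defaults α = 0.5 and τ = 2.0 are illustrative choices of our own, and the KL term follows the common KL(teacher ∥ student) convention rather than anything stated in the text:

```python
import math

def softmax(z, tau=1.0):
    """Temperature-scaled softmax psi(z / tau)."""
    zs = [v / tau for v in z]
    m = max(zs)                          # subtract max for numerical stability
    exps = [math.exp(v - m) for v in zs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, y):
    return -math.log(probs[y])

def soft_distill_loss(z_s, z_t, y, alpha=0.5, tau=2.0):
    # Eq. (3): CE on ground truth plus tau^2-scaled KL between
    # temperature-softened teacher and student distributions.
    p_s, p_t = softmax(z_s, tau), softmax(z_t, tau)
    kl = sum(pt * (math.log(pt) - math.log(ps)) for pt, ps in zip(p_t, p_s))
    return (1 - alpha) * cross_entropy(softmax(z_s), y) + alpha * tau ** 2 * kl

def hard_distill_loss(z_s, z_t, y, alpha=0.5):
    # Eq. (4): CE on ground truth plus CE on the teacher's argmax label y_t.
    y_t = max(range(len(z_t)), key=lambda c: z_t[c])
    p_s = softmax(z_s)
    return (1 - alpha) * cross_entropy(p_s, y) + alpha * cross_entropy(p_s, y_t)
```

When student and teacher logits coincide, the KL term in (3) vanishes and the hard loss in (4) collapses to the plain cross-entropy, which is a useful sanity check.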
## 5.2 Varying Number Of Supernets

As discussed in Section 3.2, we partition the large search space into three independent sub-spaces based on parameter count and train one supernet for each sub-space. This reduces gradient conflicts between large and small sub-networks trained via weight-sharing due to model size gaps. Further, search space partitioning reduces computational and memory costs. To demonstrate the benefit of our partitioning strategy for supernet training, we perform an experiment with a single supernet covering our entire hybrid search space. Compared to our XXS model with similar parameters (1.8M), the model searched from this single supernet achieves a Top-1 accuracy of 65.3%, which is 6.0% lower than the accuracy obtained with the XXS supernet (71.3%).

## 5.3 Hybrid Vs. Transformer-Only Structure

Table 7 provides a comparison of latency and Top-1 accuracy between a Transformer-only model and a hybrid model. Under similar Top-1 accuracy (71.2% for the Transformer-only and 71.3% for the hybrid model), the hybrid model's latency is significantly lower, approximately 37% less than that of the Transformer-only model (10.2 ms vs. 16.1 ms). This suggests that the hybrid model can achieve virtually the same performance as the Transformer-only model while being significantly more computationally efficient for small, lightweight ViT models.

| Model | Params (M) | Top-1 Acc. (%) |
|----------------|-----|------|
| w/o KD | 1.8 | 64.6 |
| Soft KD EffNet | 1.8 | 68.6 |
| Soft KD RegNet | 1.8 | 70.5 |
| Hard KD RegNet | 1.8 | 71.3 |

Table 6: Distillation results on ImageNet with Supernet_XXS. We compare the results of our supernet trained with no distillation versus EfficientNet-B5 and RegNetY-32G as teachers for soft and hard distillation.

| Model | Latency (ms) | Top-1 Acc. (%) |
|------------------|------|------|
| Transformer-Only | 16.1 | 71.2 |
| Hybrid | 10.2 | 71.3 |

Table 7: Latency comparison between hybrid and Transformer-only models.

| Model | Training Time (h) | Top-1 Acc. (%) |
|----------------|----|------|
| Random | 0 | 66.8 |
| Evolution | 12 | 70.7 |
| Coarse-to-fine | 5 | 71.3 |

Table 8: Training time (GPU hours) comparison of different search methods on Supernet_XXS.

| Model | Params (M) | FLOPs (G) | Top-1 Acc. (%) |
|------------|------|-----|------|
| ASViT | 29 | 5.3 | 81.2 |
| GLiT | 7.2 | 1.4 | 76.3 |
| Autoformer | 5.7 | 1.3 | 74.7 |
| AutoViT_S | 6.3 | 1.4 | 79.2 |

Table 9: Comparison with existing ViT neural architecture search baselines.

## 5.4 Comparison With Different Search And NAS Methods

Different Search Methods. We compare three types of search strategies: random search, evolution search Guo et al. (2020), and our coarse-to-fine search. Evolution search operates in two phases: (i) Crossover: two randomly selected candidates are crossed to produce a new one. (ii) Mutation: a candidate is chosen randomly and mutated with a probability of 0.1 for each choice to produce a new candidate. Although no training is involved, the process is still time-consuming because the two phases are repeated to generate enough new candidates to obtain those that meet the given architecture constraints. As shown in Table 8, our coarse-to-fine search method achieves better results (71.3 vs.
70.7 Top-1 accuracy) compared to the previous evolution search, with much less training time.

Different NAS Methods. Table 9 shows a comparison of our method to several ViT-based NAS methods, namely, ASViT Chen (2022), GLiT Chen et al. (2021a), and Autoformer Chen et al. (2021d). Our model offers an excellent balance of efficiency and performance. With only 6.3M parameters and 1.4G FLOPs, AutoViT_S achieves a Top-1 accuracy of 79.2%. Although slightly lower than ASViT's accuracy (81.2%), AutoViT_S accomplishes this with much less computational demand, emphasizing the model's efficiency. Under similar FLOPs, the Top-1 accuracy of our AutoViT_S is noticeably higher than that of GLiT (76.3%) and Autoformer (74.7%).

## 6 Conclusion

In this work, we apply neural architecture search to automatically search for optimal lightweight vision transformer (ViT) models under different resource constraints. We design a hybrid search space including CNN and transformer components. We train multiple supernets of different parameter ranges to handle the huge search space in ViT and mitigate conflicts in the weight-sharing of sub-networks. Further, we introduce an efficient latency-aware coarse-to-fine search algorithm to obtain optimal networks that significantly outperform prior-art lightweight vision transformer models. Our current experiments are limited to classification tasks. Future work includes extending our framework to other vision tasks such as object detection. This work is scientific in nature; we do not believe it has potential negative societal impacts.

## References

Arash Amini, Arul Selvam Periyasamy, and Sven Behnke. T6d-direct: Transformers for multi-object 6d pose direct regression. *arXiv preprint arXiv:2109.10948*, 2021. Anonymous. NASVit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In *Submitted to The Tenth International Conference on Learning Representations*, 2022.
URL https://openreview.net/forum?id=Qaw16njk6L. under review. Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. Yolov4: Optimal speed and accuracy of object detection. *arXiv preprint arXiv:2004.10934*, 2020. Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. *arXiv preprint arXiv:1908.09791*, 2019. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In *European Conference on Computer Vision (ECCV)*, pp. 213–229. Springer, 2020. Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang. Glit: Neural architecture search for global and local image transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12–21, 2021a. Chun-Fu Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. *arXiv preprint arXiv:2103.14899*, 2021b. Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition (CVPR), pp. 12299–12310, 2021c. Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. Autoformer: Searching transformers for visual recognition. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 12270–12280, 2021d. Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, and Haibin Ling. Searching the search space of vision transformer. *Advances in Neural Information Processing Systems*, 34, 2021e. Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, and Zhangyang Wang. Chasing sparsity in vision transformers: An end-to-end exploration. 
In *Advances in Neural Information Processing Systems*, 2021f. Wuyang Chen et al. Auto-scaling vision transformers without training. In *ICLR*, 2022. Bowen Cheng, Alexander G Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. *arXiv preprint arXiv:2107.06278*, 2021. Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision transformers. *arXiv preprint arXiv:2102.10882*, 2021. Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702–703, 2020. Linhui Dai, Hong Liu, Hao Tang, Zhiwei Wu, and Pinhao Song. Ao2-detr: Arbitrary-oriented object detection transformer. *IEEE Transactions on Circuits and Systems for Video Technology*, 2022. Zhigang Dai, Bolun Cai, Yugeng Lin, and Junying Chen. Up-detr: Unsupervised pre-training for object detection with transformers. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 1601–1610, 2021a. Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. *arXiv preprint arXiv:2106.04803*, 2021b. Stéphane d'Ascoli, Hugo Touvron, Matthew Leavitt, Ari Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. *arXiv preprint arXiv:2103.10697*, 2021. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 248–255. IEEE, 2009. Lei Ding, Dong Lin, Shaofu Lin, Jing Zhang, Xiaojie Cui, Yuebin Wang, Hao Tang, and Lorenzo Bruzzone.
Looking outside the window: Wide-context transformer for the semantic segmentation of high-resolution remote sensing images. *arXiv preprint arXiv:2106.15754*, 2021. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning* Representations (ICLR), 2021. URL https://openreview.net/forum?id=YicbFdNTTy. Alaaeldin El-Nouby, Natalia Neverova, Ivan Laptev, and Hervé Jégou. Training vision transformers for image retrieval. arXiv preprint arXiv:2102.05644, 2021. Benjamin Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Herve Jegou, and Matthijs Douze. Levit: A vision transformer in convnet's clothing for faster inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12259–12269, October 2021. Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In *European Conference on Computer Vision*, pp. 544–560. Springer, 2020. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961–2969, 2017. Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, and Seong Joon Oh. Rethinking spatial dimensions of vision transformers. In *International Conference on Computer Vision (ICCV)*, 2021. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 
Distilling the knowledge in a neural network. *arXiv preprint* arXiv:1503.02531, 2015. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4700–4708, 2017. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on computer vision workshops, pp. 554–561, 2013. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, and Xiaojun Chang. BossNAS: Exploring hybrid CNN-transformers with block-wisely self-supervised neural architecture search. In *ICCV*, 2021a. Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, and Luc Van Gool. Localvit: Bringing locality to vision transformers, 2021b. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In *European conference on computer vision*, pp. 740–755. Springer, 2014. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988, 2017. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In *Proceedings of the IEEE/CVF International Conference* on Computer Vision (ICCV), 2021. Zhisheng Lu, Hong Liu, Juncheng Li, and Linlin Zhang. 
Efficient transformer for single image super-resolution. *arXiv* preprint arXiv:2108.11084, 2021. Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In *Proceedings of the European Conference on Computer Vision (ECCV)*, September 2018. Sachin Mehta and Mohammad Rastegari. Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer. *arXiv preprint arXiv:2110.02178*, 2021. Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi. Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9190–9200, 2019. Ishan Misra, Rohit Girdhar, and Armand Joulin. An End-to-End Transformer Model for 3D Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *2008* Sixth Indian conference on computer vision, graphics & image processing, pp. 722–729. IEEE, 2008. Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7263–7271, 2017. Joseph Redmon and Ali Farhadi. Yolov3: An incremental improvement. *arXiv preprint arXiv:1804.02767*, 2018. Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 779–788, 2016. Manas Sahni, Shreya Varshini, Alind Khare, and Alexey Tumanov. Compofa: Compound once-for-all networks for faster multi-platform deployment. *arXiv preprint arXiv:2104.12642*, 2021. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. 
In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 4510–4520, 2018. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In *International* Conference on Machine Learning, pp. 6105–6114. PMLR, 2019. Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 2820–2828, 2019. Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, et al. Mlp-mixer: An all-mlp architecture for vision. arXiv preprint arXiv:2105.01601, 2021. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In *ICML*, 2021. Dilin Wang, Meng Li, Chengyue Gong, and Vikas Chandra. Attentivenas: Improving neural architecture search via attentive sampling. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 6418–6427, 2021a. Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, and Song Han. Hat: Hardware-aware transformers for efficient natural language processing. *arXiv preprint arXiv:2005.14187*, 2020. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021b. Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. *arXiv preprint arXiv:2103.15808*, 2021.
Chenglin Yang, Siyuan Qiao, Qihang Yu, Xiaoding Yuan, Yukun Zhu, Alan Yuille, Hartwig Adam, and LiangChieh Chen. Moat: Alternating mobile convolution and attention brings strong vision models. arXiv preprint arXiv:2210.01820, 2022. Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, and Baining Guo. Learning texture transformer network for image super-resolution. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 5791–5800, 2020. Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender, Pieter-Jan Kindermans, Mingxing Tan, Thomas Huang, Xiaodan Song, Ruoming Pang, and Quoc Le. Bignas: Scaling up neural architecture search with big single-stage models. In *European Conference on Computer Vision*, pp. 702–717. Springer, 2020. Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, and Shuicheng Yan. Metaformer is actually what you need for vision. In *Proceedings of the IEEE/CVF conference on computer vision and pattern* recognition, pp. 10819–10829, 2022. Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, and Wei Wu. Incorporating convolution designs into visual transformers. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 579–588, October 2021a. Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zi-Hang Jiang, Francis E.H. Tay, Jiashi Feng, and Shuicheng Yan. Tokens-to-token vit: Training vision transformers from scratch on imagenet. In *Proceedings of the IEEE/CVF* International Conference on Computer Vision (ICCV), pp. 558–567, October 2021b. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In *Proceedings of the IEEE/CVF International* Conference on Computer Vision, pp. 6023–6032, 2019. Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. 
arXiv preprint arXiv:1710.09412, 2017. Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 6881–6890, 2021. Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 34, pp. 13001–13008, 2020. Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, and Bryan Catanzaro. Long-short transformer: Efficient transformers for language and vision. *Advances in neural information processing* systems, 34:17723–17736, 2021.
Review 1: Summary: This work targets latency-aware vision transformer (ViT) neural architecture search (NAS). Specifically, the authors propose a hybrid search space with both Conv and Transformer components and build a supernet training scheme on top of it. After supernet training, a latency look-up table is used to find the models with the best accuracy vs. efficiency trade-offs. Strengths and Weaknesses: ## Strengths 1. Targets an important question: Although a lot of prior work compresses ViT models' FLOPs or #params, latency is a more important metric for real applications. Thus, this work explores a unique direction to search ViTs with better accuracy vs. latency trade-offs. 2. Informative figures and tables: The paper is easy to follow thanks to well-designed figures and tables. 3. Good performance: the improvement of the accuracy vs. latency trade-offs on ImageNet is obvious. ## Weaknesses 1. Limited novelty: For the claimed contribution of the Hybrid Search Space, conv-and-transformer hybrid structures (in a parallel or sequential manner) have been explored in many prior works [1,2,3]. The authors did not clearly explain how the newly proposed one differs from prior works. For the claimed contribution of the Multi-Size Supernet Training Scheme, it seems to be similar to Autoformer [4]'s weight entanglement; a more detailed comparison between the two would clarify this point. 2. Scalability to different devices: The authors claimed that "to prevent the need for extensive testing during the search process, we use a lookup table ...". However, if we need to apply the framework to different devices, the "extensive testing" for each device will still be a problem. 3. Limited dataset: The benchmark is only conducted on ImageNet; thus, the generalizability to other tasks is unknown.
[1] https://openreview.net/forum?id=Qaw16njk6L [2] https://arxiv.org/abs/2104.01136 [3] https://arxiv.org/abs/2107.02192 [4] https://arxiv.org/abs/2107.00651 Requested Changes: I would suggest the authors revise the manuscript by adding clarifications on the "Limited novelty", "Scalability to different devices", and "Limited dataset" issues, as mentioned in the Weaknesses above. Broader Impact Concerns: No Broader Impact Concerns ================================================== Review 2: Summary: This paper searches for lightweight hybrid CNN- and Transformer-based neural architectures. The process contains two stages: the coarse search stage is latency-aware, and the fine-grained search stage is mainly driven by validation accuracy. An evolutionary search approach is applied to procure the optimal subnets. In order to mitigate the optimization interference problem, the authors propose a multi-size supernet training scheme. Strengths and Weaknesses: Strengths: 1. This paper constructs a complete neural architecture search framework. The authors first train a supernet and then search the Pareto front with an evolutionary approach. 2. The results of the experiments show that the searched network is superior to some existing methods. Weaknesses: 1. This paper tends to be an engineering work and the novelty is weak. The supernet pretraining method has been widely used in previous works. The search space and the use of a lookup table for latency are not novel. 2. The specific method lacks a detailed description. In Sec 3.2, the alphabetic notation is perplexing and the symbols are not defined, such as $X$, $X_{L}$. The transformer used differs from the familiar transformer, which has two dimensions (token number and feature dimension); what is the concrete structure of this transformer? 3. The figures do not express the contents precisely. In Figure 2, what is the meaning of unfold and fold? In Figure 3, what is Attention-FNN? 4.
Because the novelty is weak, excellent results are expected. However, in the experiments, the proposed method is compared with only a subset of lightweight methods. There are many lightweight networks and NAS-based architectures; it would be better to compare with some SOTA methods. Besides, as a hybrid method, there have been many CNN+Transformer architectures in the past two years; the method should be compared with them. Requested Changes: Please see the weaknesses. In addition, the readability of this paper can be greatly improved. The writing should be simple and clear, making the article easy to understand. The training time of the supernet is not presented or compared. Broader Impact Concerns: No. ================================================== Review 3: Summary: This paper introduces a NAS approach for hybrid deep neural networks (CNN and Transformer). The approach includes three features: 1. This approach can search transformers and CNNs at the same time 2. This approach proposes a coarse-to-fine paradigm to save computation 3. This approach profiles efficiency using real latency instead of MACs or FLOPs Strengths and Weaknesses: Strengths 1. The algorithm targets real latency, which makes it more useful and practical for industrial use. 2. The final models achieve very high accuracy and low latency compared with previous state-of-the-art architectures. Weaknesses: 1. The search space is still limited to certain building blocks. It is unclear whether the gains come from the combination of building blocks (MBConv, transformers, standard convs) or from the search algorithm. As a reference, MOAT [1] is also a hybrid architecture combining MBConv and transformers. What is the gap between NAS design and hand-crafted design? 2. Lack of other benchmarks. It would be better to see the detection/segmentation results of the final network for validating the generalization of the networks. [1] Yang, Chenglin, et al. "Moat: Alternating mobile convolution and attention brings strong vision models." arXiv preprint arXiv:2210.01820 (2022). Requested Changes: 1. There is a typo on page 5, the 3rd line of Section 3.3. It should be "(ii)" instead of "(ii))". 2. It would be better to provide more context on training supernets (one-shot NAS) in the related work. 3. [optional] More benchmarks, e.g. segmentation, detection 4. [optional] Comparison between hand-crafted baselines and NAS hybrid networks (e.g. MOAT) Broader Impact Concerns: None ================================================== Metareview: Recommendation: Reject Comment: While the proposed method is reasonable and the reviewers have listed some strengths of this work, the paper in its current form is below the TMLR standard due to several major weaknesses. 1. Insufficient literature review and discussion. The literature survey of this paper is weak and seems to lack awareness of the latest developments in the area. For example, there is no mention of several important efficient ViT backbones, such as FastViT [1], FasterViT [2], and EfficientViT [3]. These methods should be carefully discussed and compared in the experimental section. 2. Novelty. This paper is written as if using a lookup table were a novel contribution. As reviewer 8Q1B mentioned, the use of a lookup table is not entirely novel, and many related works on hardware-latency-aware efficient design can be found with a simple search. It is highly recommended that this part be thoroughly revised and discussed to give enough credit to the community. 3. Experiment setting. The paper's experiment design is weak. Besides the missing comparison to the SOTA efficient backbones mentioned above, it is recommended that the authors exactly follow the experimental settings of these papers for apples-to-apples comparisons. In particular, both [2] and [3] have considered TensorRT latency on GPUs.
Since this paper's position emphasizes hardware-friendly design, it would be good to follow the TensorRT latency measure as well. [1] FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization [2] FasterViT: Fast Vision Transformers with Hierarchical Attention [3] EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction ==================================================
# A Self-Supervised Framework For Function Learning And Extrapolation Simon N. Segert ssegert@princeton.edu Princeton Neuroscience Institute Princeton University Jonathan D. Cohen jdc@princeton.edu Princeton Neuroscience Institute Princeton University Reviewed on OpenReview: **https://openreview.net/forum?id=ILPFasEaHA** ## Abstract Understanding how agents learn to generalize - and, in particular, to extrapolate - in high-dimensional, naturalistic environments remains a challenge for both machine learning and the study of biological agents. One approach to this has been the use of function learning paradigms, which allow agents' empirical patterns of generalization for smooth scalar functions to be described precisely. However, to date, such work has not succeeded in identifying mechanisms that acquire the kinds of general-purpose representations over which function learning can operate to exhibit the patterns of generalization observed in human empirical studies. Here, we present a framework for how a learner may acquire such representations, which then support generalization - and extrapolation in particular - in a few-shot fashion in the domain of scalar function learning. Taking inspiration from a classic theory of visual processing, we construct a self-supervised encoder that implements the basic inductive bias of invariance under topological distortions. We show the resulting representations outperform those from other models for unsupervised time series learning in several downstream function learning tasks, including extrapolation. ## 1 Introduction A key feature of an intelligent agent is the ability to recognize and extrapolate a variety of abstract patterns that commonly occur in the world. Here, we focus on a tractable but still highly general special case of such patterns, which take the form of one-dimensional smooth functions.
From a formal perspective, the space of all such functions is vast (Reed and Simon, 1980), necessitating the use of inductive biases for making useful inferences. Thus, while this setting does not encompass all possible kinds of structures that can be generalized or extrapolated, it is a sufficiently rich space that insights gained here are likely to shed light on the ability of biological and artificial agents to generalize more broadly. At the same time, the abstract structure of this space is relatively simple and well-understood, thus making it amenable to precise analysis and interpretable experimental manipulations. A further virtue of the space of functions is the existence of detailed experimental data from humans in this domain, thus facilitating a direct comparison of models with natural agents. Indeed, over the past few decades, empirical studies of function learning in humans have catalogued the forms of several such commonly applied biases, including associative similarity, rule based categorization (McDaniel and Busemeyer, 2005), bias towards positive linear forms (Kwantes and Neal, 2006) and compositional construction from a small number of basis elements (Schulz et al., 2017). Taken together, such results describe a class of "intuitive functions," which are functions that people appear readily able to recognize and use. While it is generally accepted that efficient generalization implies the existence of some previous expectations about the structure of the space of functions, what is not obvious is how such expectations are acquired; that is, what mechanisms are capable of learning and encoding abstract structure through unsupervised or self-supervised experience, in such a way that features relevant to any particular task may be easily "read out" as required. Here, we propose to address these challenges by adapting and extending the general framework of the field of self-supervised learning (Chen et al., 2020; He et al., 2020).
The framework consists of two components: a "slow" encoder that learns general purpose representations of one-dimensional functions using a standard self-supervised learning algorithm, and a collection of "fast" heads, which can rapidly adapt to different function learning paradigms, based on a small amount of task-specific annotated data, using a simple form of supervised learning (linear or logistic regression). The heads are trained on top of the representations learned previously by the encoder, allowing the model to make use of its general knowledge to rapidly adapt to the particular task demands. While prior efforts have taken this general approach (Chen et al., 2020; He et al., 2020), none to our knowledge have specifically considered the domain of function learning with intuitive functions - that is, ones that people have been empirically observed to use (DeLosh et al., 1997; McDaniel and Busemeyer, 2005; Schulz et al., 2017). Our approach is further distinguished in the design of the encoder used for self-supervised learning. For this, we treat a scalar function as a (typically very short) time series. The crucial feature of our encoder is a novel family of augmentations of time series, derived from the theory and phenomenology of topological visual processing (Zeeman, 1965; Chen, 2005). This theory holds that the visual system is invariant to certain kinds of local topological distortions of stimuli, distortions that we design our augmentations to mimic. We hypothesize that such distortions reflect commonly occurring structure in the world, that may in turn have been discovered by the brain, either through evolution or early development, and used as a basis for generalization. Following this idea, we train on a self-supervised objective that tries to enforce invariance across these augmentations, adapting the framework of Chen et al. (2020).
We demonstrate that our choice of encoder and training procedure learns representations that perform better on a collection of downstream function learning and generalization tasks than do comparison models for learning and/or representing time series. This should be of particular interest to the field of semi-supervised learning, since works in that field have not yet systematically analyzed time series that correspond to intuitive functions. Moreover, we directly compare the generalization patterns of the model with those of humans asked to perform a multiple-choice extrapolation paradigm modeled after an empirical study by Schulz et al. (2017). We find that the model exhibits a qualitatively similar bias as people in this setting, namely, a greater accuracy on functions that are compositionally structured. This should also be of interest to psychologists, since it suggests that behavioral biases in function learning may arise as consequences of a more general representation-learning procedure.

## 2 Background

## 2.1 Contrastive Learning

Here we provide a brief summary of the elements of contrastive learning that are necessary to define our encoder. This is adapted from Chen et al. (2020) and van den Oord et al. (2018). The basic assumption is that we are provided with a set of positive pairs $(v_i, v_i')$, $i = 1, \dots, N$, which are taken as inputs that we wish to consider similar to each other. All other pairs of inputs are considered as negative pairs, which the objective will attempt to push apart in the latent space. For convenience, we will treat the input as a single flattened dataset of size $2N$, in which the positive pairs are those of the form $(v_i, v_{i+N})$, $i \le N$. Let $f_\theta : \mathbb{R}^{n_1} \to \mathbb{R}^{n_2}$ and $g_\phi : \mathbb{R}^{n_2} \to S^{n_3-1}$ be two parametric families of functions (e.g. neural networks). Here $n_1$ is the dimensionality of the inputs $v_i$, while $n_2$ and $n_3$ are arbitrary, and $S^{n_3-1}$ denotes the hypersphere consisting of all vectors in $\mathbb{R}^{n_3}$ of unit norm.
The objective is

$$\operatorname*{max}_{\theta,\phi}\sum_{i=1}^{2N}<(g_{\phi}\circ f_{\theta})(v_{i}),(g_{\phi}\circ f_{\theta})(v_{i+N})>-L S E_{j\neq i}^{\tau}(<(g_{\phi}\circ f_{\theta})(v_{i}),(g_{\phi}\circ f_{\theta})(v_{j})>)\tag{1}$$

Here, $\tau > 0$ is a hyperparameter, and $LSE$ denotes the logsumexp function $LSE^{\tau}_{i}(z_i) := \tau \log \sum_i e^{z_i/\tau}$. The brackets $<\cdot,\cdot>$ are the Euclidean dot product. After optimizing this objective, we discard the function $g$ and take the encoder to be the function $f$. Informally, the first term of the objective function acts to push positive pairs together in the latent space, since it is maximized when both elements in the pair have equal representations. Conversely, the second term acts to push apart all other pairs of inputs. This is because the logsumexp is a monotonically increasing function of each of its inputs; therefore it will be minimized when all of the pairwise similarities are as small as possible. A more precise analysis of properties of this objective may be found in Wang and Isola (2020).

## 2.2 A Generative Model Of Intuitive Functions

To define a generative model for reference curves that plausibly resemble the distribution encountered by people, we adapt the generative process proposed in Schulz et al. (2017). This generative model uses the formalism of Gaussian processes (Rasmussen and Williams, 2006); we provide further general background in the Appendix. Schulz et al. (2017) define the Compositional Grammar by starting from three basic Gaussian Process kernels:

$$\begin{array}{r c l}{{K_{l i n e a r}(x,y)}}&{{=}}&{{(x-\theta_{1})(y-\theta_{1})}}\\ {{K_{r b f}(x,y)}}&{{=}}&{{\theta_{3}e^{-(x-y)^{2}/\theta_{2}^{2}}}}\\ {{K_{p e r i o d i c}(x,y)}}&{{=}}&{{\theta_{4}e^{-\sin^{2}(2\pi|x-y|\theta_{5})/\theta_{6}^{2}}}}\end{array}$$

where the $\theta_i$ are hyperparameters. In addition, the authors include in the CG ten kernels which are defined using pointwise sums and products of these above three.
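As a concrete illustration (our own sketch, not the paper's released code), the basic kernels above induce a simple sampling procedure: evaluate the kernel on a grid of points to form a covariance matrix and draw from the resulting multivariate Gaussian. The hyperparameter values below are arbitrary placeholders for the $\theta_i$:

```python
import numpy as np

def k_rbf(x, y, theta2=0.2, theta3=1.0):
    # K_rbf(x, y) = theta3 * exp(-(x - y)^2 / theta2^2), with placeholder thetas
    return theta3 * np.exp(-((x - y) ** 2) / theta2 ** 2)

def sample_curve(kernel, T=100, jitter=1e-6, rng=None):
    """Draw one function from a zero-mean GP with the given kernel on T
    evenly spaced points, then rescale it into [0, 1]."""
    rng = np.random.default_rng(rng)
    x = np.linspace(0.0, 1.0, T)
    K = kernel(x[:, None], x[None, :]) + jitter * np.eye(T)  # covariance matrix
    y = rng.multivariate_normal(np.zeros(T), K)
    return (y - y.min()) / (y.max() - y.min())
```

Sum and product kernels of the CG would simply combine such kernel functions pointwise, e.g. `lambda x, y: k_rbf(x, y) + k_linear(x, y)`.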
We refer to the Appendix for a more detailed description. A natural point of comparison is the Spectral Mixture (SM) kernel (Wilson and Adams, 2013), which is a flexible non-parametric kernel family defined by the formula

$$K_{m i x}(x,y)=\sum_{i=1}^{m}w_{i}e^{-2\pi^{2}(x-y)^{2}\sigma_{i}}\cos(2\pi(x-y)\mu_{i})$$

Schulz et al. (2017) demonstrated that people learn curves generated from the CG more easily than ones generated from the SM. Therefore the family of kernels in the CG are good candidates for generating curves that are both naturalistic and easily recognized by people. Lastly, it is important to note that, due to the nature of continuous space, in practice it is necessary to represent functions by their values on a finite set $x_1 < \dots < x_N$ of ordered sample points. In our case, we take the points to be evenly-spaced, and use the same set of points for every function. Thus any function $\{(x_i, y_i)\}$ may be treated as a time series and vice versa¹. In what follows we will use the terms "curve," "function" and "time series" interchangeably, with the understanding that the points $x_i$ remain fixed across all functions. Also, since the positions of the $x_i$'s are the same for all functions, we omit them from explicit notation, and use $y$ to denote the vector with components $\{y_i\}_i$ that defines a function.

## 3 A Contrastive Encoder For Intuitive Functions

To define the encoder, following Section 2.1, we need to specify the architecture and the family of positive pairs. For the encoder architecture, we simply take $f$ to be a feedforward network of several 1D convolutions, and $g$ to be an MLP with a single hidden layer. We set $n_2 = n_3 = 128$ and $\tau = 0.5$.
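The encoder is trained on the objective of Eq. (1). For illustration (ours, not the authors' code), the objective is straightforward to evaluate once the $2N$ projected representations are in hand; below, rows $i$ and $i+N$ of the matrix `z` form the positive pairs:

```python
import numpy as np
from scipy.special import logsumexp

def contrastive_objective(z, tau=0.5):
    """Eq. (1): sum over i of <z_i, z_{i+N}> minus the temperature-scaled
    log-sum-exp of <z_i, z_j> over all j != i.  Rows of z are first
    normalized onto the unit hypersphere, as the projection g produces."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n2 = z.shape[0]                                   # n2 = 2N
    sim = z @ z.T                                     # all pairwise dot products
    partner = np.roll(np.arange(n2), -n2 // 2)        # index i+N (mod 2N)
    scaled = sim / tau
    np.fill_diagonal(scaled, -np.inf)                 # exclude j = i
    lse = tau * logsumexp(scaled, axis=1)             # LSE^tau over j != i
    return float(np.sum(sim[np.arange(n2), partner] - lse))
```

In practice this quantity would be maximized with respect to the encoder parameters by stochastic gradient ascent; the sketch only evaluates the objective for fixed representations.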
For the class of augmentations, we take inspiration from the field of topological visual perception (Chen, 2005; Zeeman, 1965), which posits that the visual system maintains an invariance to local topological distortions (or "tolerances") of stimuli in order to facilitate global processing. In our case, we consider 1-dimensional functions rather than 2-dimensional images, but similar principles apply. We propose a family of transformations that implements localized topological distortions to the function, together with several basic global distortions: (1) random vertical reflection, (2) random jittered upsampling and (3) random rescaling. We denote these respectively by stochastic transformations $T_1, T_2, T_3$, which we describe in more detail below. The first transformation, that accommodates vertical reflection, is defined by $T_1(y) = -y$ with 50 percent probability and $T_1(y) = y$ otherwise. The second, that accommodates horizontal bending, is the most elaborate. To evaluate $T_2(y)$, we first select a random interval $[a, b] \supset [x_1, x_T]$. We then randomly select points $x'_1 < \dots < x'_T$ in $[a, b]$. These points are not required to be uniformly spaced. We generate them by sampling uniformly and independently at random from $[a, b]$ and then sorting the samples, and then define $T_2(y)_i = \sum_j C_i e^{-(x'_j - x_i)^2/2\sigma^2} y_j$, where $1/C_i = \sum_j e^{-(x'_j - x_i)^2/2\sigma^2}$. In other words, the values of $T_2(y)$ are given by a Gaussian Kernel Density Estimator (KDE) at the points $x_i$. The effect is three-fold: since the points $x_i$ lie in a proper sub-interval of $[a, b]$, this crops a portion of the function and up-samples to the original resolution. Secondly, because the points $x'_i$ are not evenly spaced, some inhomogeneous horizontal stretching or contraction is introduced.

¹In function learning, the x-axis does not necessarily correspond to time. The same is true of a "time series," despite the name: it is merely an ordered list of numbers.
Thirdly, the nature of the Gaussian KDE means that the augmented functions are smoothed with respect to the originals. Finally, we apply $T_3$, which accommodates vertical rescaling. For this, we choose a random interval $[a, b] \subset [0, 1]$ and then apply an affine transformation such that the maximum value of the new function is $b$ and the minimum is $a$. More explicitly, $T_3(y)_i = (b - a)\frac{y_i - \min_j y_j}{\max_j y_j - \min_j y_j} + a$. Since this is applied last, the resulting functions always take values in the interval $[0, 1]$. Therefore the positive pairs take the form $(T_3 T_2 T_1(y), T_3 T_2 T_1(y))$, where $y$ is a function. We reiterate that the $T_i$ are stochastic transformations, so despite the notational appearance, the two functions comprising a given positive pair will not be equal, since they are generated using two separate evaluations of a stochastic transformation. In our experiments the function $y$ itself is generated from the Compositional Grammar over intuitive functions described in Section 2.2. An illustration of the augmentations is provided in Figure 1. Furthermore, we provide ablation studies on the effect of each augmentation individually in the Appendix.

Figure 1: Illustrations of the augmentations. Each plot consists of two functions comprising a positive pair. In the four plots on the left, only the horizontal stretch transformation T2 is applied. In the four plots on the right, all three transformations are applied.

## 4 Data Description And Encoder Training

To evaluate the ability of the encoder to learn a representation of intuitive functions, we generated and trained it on two types of functions: one generated from the family of 13 kernels defined by the CG (see Section 2.2); and the other (used as a control) from a non-compositional SM kernel, for a total of 14 kernels. As noted above, Schulz et al. (2017) showed that human completions are closer to those generated by the CG than by the SM. We included the SM in our generative distribution to allow a similar comparison.
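To make the three transformations concrete, here is a minimal NumPy sketch of one stochastic pass through $T_1$, $T_2$, $T_3$ (our own illustration; the width of the sampling interval $[a, b]$ and the KDE bandwidth $\sigma$ are not specified above, so the values here are assumptions):

```python
import numpy as np

def augment(y, sigma=0.02, rng=None):
    """One stochastic application of T3 . T2 . T1 to a function y
    sampled on evenly spaced points in [0, 1]."""
    rng = np.random.default_rng(rng)
    T = len(y)
    x = np.linspace(0.0, 1.0, T)
    # T1: vertical reflection with probability 1/2
    if rng.random() < 0.5:
        y = -y
    # T2: pick [a, b] containing [0, 1], scatter the T values at sorted
    # random positions x' in it, and re-read them at x with a Gaussian KDE
    a, b = -0.2 * rng.random(), 1.0 + 0.2 * rng.random()
    xp = np.sort(rng.uniform(a, b, size=T))
    w = np.exp(-((xp[None, :] - x[:, None]) ** 2) / (2.0 * sigma ** 2))
    y = (w @ y) / w.sum(axis=1)
    # T3: affine rescale into a random sub-interval [lo, hi] of [0, 1]
    lo, hi = np.sort(rng.uniform(0.0, 1.0, size=2))
    return (hi - lo) * (y - y.min()) / (y.max() - y.min()) + lo
```

A positive pair is then simply `(augment(y), augment(y))` for the same underlying curve `y`, with each call drawing fresh randomness.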
Each function was generated by first sampling one of these 14 kernels, then sampling any hyperparameters of that kernel, and finally sampling from the resulting covariance matrix evaluated on T = 100 evenly horizontally spaced points. We sampled from the SM kernel 50 percent of the time, and each of the remaining 13 CG kernels 3.85 (= 50/13) percent of the time. Therefore, any differences in the representations between the SM and the CG cannot be ascribed to data availability. We normalized all functions to lie in the interval [0, 1]. All encoders were trained using a batch size of 512, with an Adam optimizer with learning rate of 0.001 and weight decay of $10^{-6}$. All encoders were exposed to 500,000 curves during training. We trained three copies of each encoder using random initializations and averaged results over these copies. As described above, our model consists of a combination of a contrastive loss function, and a convolutional encoder architecture. In addition, we considered eight comparison models, six of which are encoding models trained using different objectives than the contrastive loss, and two of which are architectural ablations trained using the same contrastive loss, but with non-convolutional encoder architectures. Of the first six, four were unsupervised time series models: Triplet Loss (tloss) (Jean-Yves et al., 2019), Temporal Neighborhood Coding (tnc) (Tonekaboni et al., 2021), Contrastive Predictive Coding (cpc) (van den Oord et al., 2018), and Conditional Neural Processes (cnp) (Garnelo et al., 2018a). We also tested a Variational Autoencoder (vae) (Kingma and Welling, 2014) as an example of an unsupervised algorithm that has been successful in other domains, but does not exploit any structure particular to time series. Finally, we included a baseline encoder ("raw") that simply copies the raw input. To control for the latent space capacity, all encoders except for the baseline had representations of equal dimensionality (128).
The first of the two architectural ablations consisted of replacing the convolutional encoder with a multi-layer perceptron. We denote this by "contrastive-mlp." In the second, we replaced the convolutional encoder with the same permutation-invariant encoder as was used in the CNP model, which we denote by "contrastive-perm-inv". In this encoder, the representation of the function $\{(x_i, y_i)\}_{i=1}^n$ takes the form $\frac{1}{n}\sum_{i=1}^n \mathrm{MLP}(x_i, y_i)$ (Garnelo et al., 2018a;b). This encoder is thus permutation invariant in the sense that the representation of the function does not depend on the ordering of its constituent x-y observations. For both of these ablations, the architectures were constrained to have approximately the same number of parameters as the convolutional encoder. Further implementational details of all comparison models are provided in the Appendix.

## 5 Results On Downstream Classification And Extrapolation Tasks

To evaluate the quality of the learned representations, we adapted three function learning paradigms that are either directly translated from or inspired by paradigms from studies of human performance: (1) kernel classification, (2) multiple choice extrapolation, and (3) freeform extrapolation. The first one corresponds directly to the standard paradigm for unsupervised learning evaluation in computer vision (Chen et al., 2020), in which an unsupervised algorithm is trained on a dataset for which ground truth annotations are available, and then a supervised classifier such as a logistic regression is fit on top of the frozen representations. Although to our knowledge this has not been used directly in the analysis of empirical results concerning human function learning, it may be regarded as an abstraction of the experiments from Leon-Villagra and Lucas (2019), which showed that people's completions depend on their judgements about the category to which a function belongs, suggesting that people make use of categories when judging functions.
The two extrapolation tasks are drawn directly from Schulz et al. (2017). A version of the third task also appears in Wilson et al. (2015), however using a different generative process for the probe curves. For each task, we define a head that transforms the encoder representations to a task-specific output, and train the head on a small amount of labeled data. In all cases when training the heads, the weights of the encoder are frozen.

| | 3 | 10 | 30 | 100 | 300 |
|----------------------|-------------|-------------|-------------|-------------|-------------|
| contrastive | 40.95± 1.83 | 55.27± 1.41 | 64.81± 1.80 | 72.00± 1.52 | 76.23± 1.31 |
| cnp | 15.07± 1.28 | 19.25± 1.35 | 22.69± 0.96 | 26.19± 0.95 | 28.10± 0.73 |
| cpc | 25.06± 1.29 | 35.45± 1.57 | 46.38± 1.14 | 54.94± 1.18 | 58.48± 1.04 |
| raw | 11.86± 1.47 | 15.54± 2.02 | 14.27± 1.96 | 15.73± 1.51 | 14.25± 1.64 |
| t-loss | 30.41± 1.73 | 41.31± 1.89 | 52.16± 1.20 | 59.78± 1.24 | 63.35± 1.19 |
| tnc | 23.15± 1.14 | 31.22± 1.09 | 38.55± 1.23 | 44.85± 1.08 | 49.16± 1.20 |
| vae | 9.27± 1.13 | 12.77± 1.65 | 21.51± 1.89 | 29.17± 1.56 | 33.74± 1.50 |
| contrastive-perm-inv | 16.24± 1.48 | 22.89± 1.43 | 27.09± 0.88 | 29.37± 0.70 | 31.75± 0.87 |
| contrastive-mlp | 33.58± 2.05 | 46.08± 1.37 | 53.96± 0.90 | 57.79± 0.67 | 60.30± 0.65 |

Table 1: Accuracy on the categorization task, as a function of the number of training examples per category. Chance performance is 7.14 percent.

In addition, we note that several of the tasks required that we compute the posterior mean with respect to a Gaussian kernel in order to construct the training data, which required that we initially fit the kernel hyperparameters. For example, in the multiple choice task, to construct two candidate completions we took the posterior mean of the prompt curve with respect to both the SM and the CG kernels, which required that we first fit the hyperparameters for those kernels.
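The posterior-mean completions follow from the standard GP conditioning formula, $\mu_* = K_{*}K^{-1}y$; a small sketch (ours, with an assumed jitter term for numerical stability):

```python
import numpy as np

def posterior_mean(kernel, x_obs, y_obs, x_new, jitter=1e-6):
    """Posterior mean of a zero-mean GP with covariance `kernel`,
    conditioned on the prompt observations (x_obs, y_obs)."""
    K = kernel(x_obs[:, None], x_obs[None, :]) + jitter * np.eye(len(x_obs))
    K_star = kernel(x_new[:, None], x_obs[None, :])  # cross-covariance
    return K_star @ np.linalg.solve(K, y_obs)
```

For the multiple choice task, one candidate completion would come from the SM kernel and one from the best-fitting CG kernel, each evaluated on the held-out portion of the grid.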
The fitting of GP kernel hyperparameters is known to suffer from under-fitting and instability problems (see Wilson et al. (2015), including the supplementary material). To address this, we fixed the hyperparameter values of each kernel class, and evaluated the downstream performance of all encoders on curves generated with this fixed set of hyperparameters. We repeated this 10 times using different random choices of hyperparameters each time, and averaged the results. This ensured that the hyperparameter values were correctly specified within each task, and that our results were not influenced by the imperfections of any particular hyperparameter optimization procedure. We trained three copies of each of the six encoders (contrastive encoder and five comparisons) using different initializations, and all reported results were averaged over the 3 copies and 10 hyperparameter choices, for a total of 30 measurements. The error bars are 95 percent confidence intervals of the standard error of the mean over those measurements.

## 5.1 Kernel Classification

Here, the task was to predict which of the 14 kernels was used to generate a given function. The head was simply a linear layer + softmax, the outputs of which were interpreted as the probabilities of each class. Thus it is equivalent to a 14-way logistic regression on the encoder representations. In all cases, we fit the head using the SGDClassifier class from scikit-learn. Additionally, we separately chose an L2 penalty for each head using cross validation. We report the accuracy of each such classifier on a collection of 2800 held-out curves (200 per class). As shown in Table 1, the contrastive encoder is able to attain approximately 55 percent accuracy using only 10 labeled examples per class, which improves to approximately 75 percent when using 300 examples per class, improving upon the second-best model (t-loss) by around 10 percentage points.
## 5.2 Multiple Choice Extrapolation

In the multiple choice completion paradigm, the models were presented with a prompt curve $y \in \mathbb{R}^{80}$, as well as several candidate completion curves $y^i \in \mathbb{R}^{100}$ with the property that $y^i_j = y_j$ for $j \le 80$, and were required to select the correct completion. Following Schulz et al., we constructed the candidate completion curves by computing the posterior mean with respect either to the SM kernel, or to the best-fitting CG kernel,² with the correct answer corresponding to which of these two kernels was used to generate the prompt curve.

Figure 2: An example multiple choice completion problem. The prompt curve is on the left. The compositional completion is in the middle and the mixture completion is on the right. In this case, the correct answer is the compositional completion. The coloring of the candidate curves is for visual aid only.

Table 2: Performance on the Multiple Choice completion task, as a function of the number of training examples per category. Chance performance is 50 percent.
| | 3 | 10 | 30 | 100 | 300 |
|----------------------|-------------|-------------|-------------|-------------|-------------|
| contrastive | 67.74± 2.48 | 73.68± 1.69 | 78.09± 1.47 | 79.07± 1.75 | 80.90± 1.93 |
| cnp | 59.30± 2.35 | 59.80± 2.35 | 59.72± 2.10 | 60.04± 2.14 | 61.56± 2.30 |
| cpc | 62.80± 2.60 | 67.55± 2.26 | 70.55± 1.93 | 71.58± 1.97 | 74.48± 1.81 |
| raw | 51.64± 2.83 | 54.80± 2.13 | 54.28± 2.20 | 54.68± 1.85 | 56.96± 2.22 |
| t-loss | 60.18± 1.44 | 62.74± 1.62 | 65.12± 1.57 | 65.31± 1.55 | 67.76± 1.73 |
| tnc | 58.38± 2.46 | 63.65± 1.89 | 68.91± 1.64 | 70.05± 1.68 | 72.25± 1.99 |
| vae | 52.48± 0.74 | 53.03± 0.83 | 53.32± 0.85 | 53.57± 0.84 | 55.00± 1.29 |
| contrastive-perm-inv | 60.38± 2.70 | 60.63± 2.67 | 60.68± 2.68 | 62.30± 2.58 | 63.32± 2.54 |
| contrastive-mlp | 67.05± 3.00 | 67.49± 2.56 | 70.00± 2.85 | 71.71± 2.50 | 72.79± 2.24 |

The training data for the head consisted of 50 percent prompt curves sampled from the SM and 50 percent curves sampled from the CG. Since this task required comparing the prompt curve to each of the candidate curves, we used a quadratic decision rule for the head. Let $h_0$ denote the encoder representation of the prompt (upsampled to 100 points prior to being fed into the encoder), and $h_i$, $i > 0$, denote the representations of the choices. The head linearly projected these vectors into a lower-dimensional space, and chose between the alternatives using a dot product in this space. That is, we fit a model of the form $p_i \propto e^{<w h_i, w h_0>}$, where $w$ is a linear projection from the encoder space to $\mathbb{R}^{32}$ and $p_i$, $i = 1, 2$, are the choice probabilities. All heads were trained on a cross-entropy loss using the Adam optimizer with a learning rate of 0.01. We report the accuracy on a collection of 400 held-out curves (200 per class). In this case, we see from Table 2 an improvement in accuracy of approximately 5 to 10 percent compared with the second best model.
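The decision rule of the head, $p_i \propto e^{<wh_i, wh_0>}$, amounts to a softmax over projected dot products. A forward-pass sketch (ours; in the paper $w$ is learned with Adam on a cross-entropy loss, whereas here it is simply passed in):

```python
import numpy as np

def choice_probs(h_prompt, h_choices, W):
    """Project the prompt and candidate representations with W
    (shape 32 x d), then softmax over dot products in that space."""
    zp = W @ h_prompt                  # prompt in the 32-d comparison space
    zc = h_choices @ W.T               # one row per candidate completion
    logits = zc @ zp
    logits = logits - logits.max()     # for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```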
Interestingly, the rank ordering of the models also differs compared to the categorization task: here cpc and tnc both outperform t-loss, and the vae generally matches performance of the raw encoder baseline.

## 5.3 Freeform Extrapolation

In this task, for a given function $y \in \mathbb{R}^{100}$, the model was presented with an initial portion $y_{1:80}$ and required to make a prediction $\hat{y} \in \mathbb{R}^{20}$ that extended it for a fixed sized window (length 20). Performance was measured by $\mathrm{Sim}(\hat{y}, y_{80:100})$, for some choice Sim of similarity function. It has been argued that, due to its high-dimensional and underconstrained nature, this task provides a more rigorous test of extrapolation than do discrete categorization tasks and that, in an empirical setting, it may provide finer-grained insights into people's inductive biases (DeLosh et al., 1997). However, it may be unreasonable to expect that an algorithm trained without any predictive experience can exhibit a reasonable ability to perform free-form extrapolation. Here, we tested the hypothesis that this capacity can arise from modest amounts of supervised (predictive) training based on categorization judgements among function representations acquired in a self-supervised manner from the contrastive learning mechanism described above. To test this, we implemented a simple form of curriculum learning. In the first phase of the curriculum, we trained a logistic regressor on the encoder features learned during unsupervised training, to predict the generative kernel of an input function, exactly as in the categorization task from Section 5.1. Here we presented the regressors with 300 functions and corresponding category annotations from each kernel.

²More explicitly, to generate the CG completion, we computed the likelihood of the prompt with respect to each of the CG kernels, and then took the posterior mean of the kernel that attained the highest likelihood, mimicking the procedure of Schulz et al.
In the second phase of the curriculum, we present a small number of functions $y^k$ with no label annotations. We then fit a simple class-dependent forecasting model of the form:

$$y_{i}^{k}\sim w_{0}^{\hat{c}_{k}}+w_{0}+\sum_{j=1}^{L}(w_{j}^{\hat{c}_{k}}+w_{j})y_{i-j}^{k}\tag{2}$$

where $\hat{c}_k \in \{1, 2, \dots, 14\}$ denotes the kernel class of the function $y^k$ predicted by the logistic regressor. Here $L$ is a hyperparameter that controls the autoregressive time lag. We set $L = 20$ in all cases. The parameters $\{w_j^m\}_{0 \le j \le L,\, 1 \le m \le 14}$ are weights that are fit using least-squares. When given a function $y$ to extrapolate at test, we first estimated the class $\hat{c} \in \{1, 2, \dots, 14\}$ of $y$ using the logistic regressor and encoder features. We then forecast it using the autoregression weights $\{w_j + w_j^{\hat{c}}\}_{0 \le j \le L}$. We compared the results of this procedure with two controls. The first was an Ideal Observer model that was given access to the true underlying Gaussian kernel used to generate each function, information to which the other models were not privy. This model forecast a given function by computing the posterior mean with respect to the kernel on which it was trained. Since this model used Bayesian inference on the exact underlying distribution over functions, it represented the best performance that any model could attain. We refer to this as the "GPIO" (Gaussian Process Ideal Observer) model. The second control was a simple autoregression model, that removed the categorization step in order to evaluate its contribution to the forecast quality. It used an unconditional forecasting model of the form $y_i^k \sim w_0 + \sum_{j=1}^{L} w_j y_{i-j}^k$ that ignored any category structure. The autoregression model was trained on exactly the same number of functions as the other forecasting models (with the number of curves used to train the logistic regressor included in this count).
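Eq. (2) can be fit with a single least-squares solve by concatenating the shared lag features with their class-specific copies; the rollout then re-uses the fitted weights autoregressively. A compact sketch (our illustration of the model form, not the authors' code):

```python
import numpy as np

def fit_class_ar(curves, classes, n_classes, L=20):
    """Least-squares fit of Eq. (2): shared weights w_j plus
    class-specific corrections w_j^c, with autoregressive lag L."""
    rows, targets = [], []
    for y, c in zip(curves, classes):
        for i in range(L, len(y)):
            lags = np.asarray(y[i - L:i])[::-1]        # y_{i-1}, ..., y_{i-L}
            shared = np.concatenate([[1.0], lags])     # bias + lag features
            per_class = np.zeros((n_classes, L + 1))
            per_class[c] = shared                      # active only for class c
            rows.append(np.concatenate([shared, per_class.ravel()]))
            targets.append(y[i])
    W, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return W

def forecast(prefix, c, W, n_classes, steps=20, L=20):
    """Roll the class-c model forward for `steps` points past the prefix."""
    y = list(prefix)
    for _ in range(steps):
        lags = np.array(y[-L:])[::-1]
        shared = np.concatenate([[1.0], lags])
        per_class = np.zeros((n_classes, L + 1))
        per_class[c] = shared
        y.append(float(np.concatenate([shared, per_class.ravel()]) @ W))
    return np.array(y[len(prefix):])
```

Dropping the per-class block of the design matrix recovers the unconditional autoregression control.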
We evaluated the extrapolation performance of each model using the Pearson correlation coefficient and L2 distance (see the Appendix for results of L2 distance) between the actual and predicted curves. We report the average values for 4200 held-out curves (300 per class). The results, shown in Table 3, are similar to those for the categorization task. All models substantially outperformed the autoregression baseline, indicating that even imperfect category information is helpful for extrapolation. The contrastive model performed better than any other model except the GPIO model. Several example extrapolations from the contrastive model are shown in Figure 3.

## 6 Comparison With Human Data

The multiple choice completion task from Section 5.2 was modeled after Experiment 1 in Schulz et al. (2017). An intriguing result of that experiment was that people were more likely to select the CG completion than the SM completion. We asked whether any of the models shares this property. To do this, we measured the difference in accuracy when the prompt curve was sampled from the CG compared to when it was sampled from the SM. More precisely, let $\{y_0^i\}_i$ denote a collection of prompt curves, $\{y_{CG}^i\}_i$ the corresponding completions generated by the CG, and $\{y_{SM}^i\}_i$ the completions generated by the SM. Furthermore, define $z_i$ to be a binary variable that indicates whether $y_0^i$ was sampled from the CG or from the SM. In our design, half of the prompt curves were sampled from the CG, meaning that $z_i$ assumes each of the two values with 50 percent probability. For a given model, the choice probabilities $\{(p_{CG}^i, p_{SM}^i)\}_i$ are given as in Section 5.2. Then the accuracy difference is defined by

$$\Delta_{acc}:=\mathbb{E}_{i}(p_{CG}^{i}|z_{i}=CG)-\mathbb{E}_{i}(p_{SM}^{i}|z_{i}=SM)\tag{3}$$

Figure 3: Freeform completions generated by the contrastive model, using the maximal amount of training data.
The GPIO completions are also shown for comparison.

Table 3: Results on the freeform task, as a function of the number of samples per category used to train the regression (not counting the functions used to train the kernel classifier). Values are the Pearson correlation of the predicted to the true completion. The value for the GPIO model is 83.70± 1.78.

| | 1 | 3 | 10 | 30 | 100 |
|----------------------|-------------|-------------|-------------|-------------|-------------|
| autoregression | 18.48± 4.46 | 18.48± 4.48 | 18.47± 4.52 | 18.50± 4.47 | 18.66± 4.47 |
| contrastive | 30.15± 2.36 | 49.05± 2.00 | 60.78± 1.59 | 63.34± 1.39 | 63.91± 1.43 |
| cnp | 24.29± 2.07 | 33.12± 1.98 | 40.91± 1.44 | 43.60± 1.33 | 45.37± 1.29 |
| cpc | 25.13± 2.83 | 42.32± 2.61 | 51.04± 2.39 | 53.06± 1.92 | 54.79± 1.92 |
| raw | 19.89± 4.20 | 23.13± 4.81 | 27.02± 5.82 | 28.18± 5.81 | 29.14± 5.89 |
| t-loss | 29.45± 3.28 | 44.48± 2.74 | 54.62± 2.00 | 56.87± 1.93 | 58.10± 1.95 |
| tnc | 26.72± 3.22 | 39.11± 2.84 | 48.30± 1.81 | 51.52± 1.60 | 52.78± 1.74 |
| vae | 27.36± 3.09 | 33.51± 2.53 | 37.18± 2.67 | 39.27± 2.51 | 40.89± 2.39 |
| contrastive-perm-inv | 22.51± 2.65 | 32.33± 2.10 | 39.53± 1.52 | 42.57± 1.49 | 44.04± 1.33 |
| contrastive-mlp | 26.72± 2.21 | 45.45± 2.41 | 56.19± 1.55 | 58.47± 1.65 | 59.64± 1.35 |

A short calculation shows that $\Delta_{acc}$ is directly related to the model's propensity to favor the CG completion over the SM completion:

$$\begin{aligned}\mathbb{E}_{i}(p^{i}_{CG})&=\frac{1}{2}\mathbb{E}_{i}(p^{i}_{CG}|z_{i}=CG)+\frac{1}{2}\mathbb{E}_{i}(p^{i}_{CG}|z_{i}=SM)\\&=\frac{1}{2}\mathbb{E}_{i}(p^{i}_{CG}|z_{i}=CG)+\frac{1}{2}(1-\mathbb{E}_{i}(p^{i}_{SM}|z_{i}=SM))\\&=\frac{1}{2}+\frac{1}{2}\Delta_{acc}\end{aligned}$$

In other words, $\Delta_{acc}$ is positive exactly when the CG completion is chosen more often than the SM completion.
Note that an unbiased model would have $\mathbb{E}_i(p_{CG}^i) = P_i(z_i = CG) = \frac{1}{2}$ and thus $\Delta_{acc} = 0$. Note also that, as described in Section 4, all models were trained on an equal proportion of curves from the SM and CG, so any resulting bias cannot be due to differential data availability between the two classes of curves.

| | 3 | 10 | 30 | 100 | 300 |
|----------------------|-------------|-------------|-------------|-------------|-------------|
| contrastive | 19.82± 5.88 | 21.49± 5.26 | 20.53± 4.87 | 18.57± 4.84 | 19.39± 4.81 |
| cnp | 0.73± 5.39 | 5.25± 3.01 | 6.61± 4.35 | 8.17± 3.00 | 9.61± 3.32 |
| cpc | -3.92± 7.59 | 1.50± 5.50 | 4.73± 5.60 | 6.38± 4.43 | 10.33± 4.02 |
| raw | 5.19± 5.36 | 2.25± 3.79 | 3.87± 2.45 | 5.09± 1.81 | 7.92± 2.61 |
| t-loss | 0.57± 5.04 | 5.46± 4.37 | 8.24± 3.95 | 10.30± 3.44 | 12.72± 3.75 |
| tnc | -3.23± 6.13 | 4.70± 4.07 | 6.42± 2.81 | 8.18± 3.60 | 9.55± 3.97 |
| vae | 2.01± 1.34 | 2.62± 1.25 | 3.12± 1.42 | 3.30± 1.41 | 4.19± 1.79 |
| contrastive-perm-inv | 1.95± 5.01 | 4.60± 2.88 | 5.98± 4.05 | 8.17± 3.20 | 9.74± 3.55 |
| contrastive-mlp | 16.06± 3.74 | 16.56± 3.35 | 16.14± 4.01 | 16.48± 4.30 | 18.32± 3.84 |

Table 4: Value of $\Delta_{acc}$ on the multiple choice task, as a function of number of labeled training samples per category. The corresponding value for people is approximately 39. We do not bold the highest number because, unlike in the other tables, the values here do not correspond to a normative performance metric.

We see from Table 4 that all models had at least a weak form of the CG bias, in that they attained higher accuracy on the multiple choice task when the prompt was sampled from the CG. The contrastive model had the highest value of this bias, albeit the error bars overlap with the values for t-loss and contrastive-mlp.
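Given per-item CG choice probabilities and the provenance of each prompt, the accuracy difference of Eq. (3) is a two-line computation; a small sketch (ours):

```python
import numpy as np

def delta_acc(p_cg, from_cg):
    """Eq. (3): mean CG choice probability on CG prompts minus mean SM
    choice probability (1 - p_cg) on SM prompts."""
    p_cg = np.asarray(p_cg, dtype=float)
    from_cg = np.asarray(from_cg, dtype=bool)
    return p_cg[from_cg].mean() - (1.0 - p_cg[~from_cg]).mean()
```

A model that assigns `p_cg = 0.5` everywhere yields a value of 0, matching the unbiased case noted above.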
Crucially, however, even the baseline "raw" model showed a significant positive bias, indicating that the observed biases may be due to statistical properties of the curves themselves, independent of the properties of the learned representations of the models. The contrastive and contrastive-mlp models were the only ones that attained a $\Delta_{acc}$ value significantly higher than that of raw, thus indicating that the observed bias for these models is partially due to the properties of their representations. However, all of these biases are quantitatively smaller than those reported in Schulz et al. (2017). There it was found that people attain an accuracy of 32 percent when the prompt curve is from the SM (that is, they choose the CG completion 68 percent of the time), while they attain an accuracy of at least 71 percent when the prompt curve is from the CG. We say "at least" because, in that experiment, there were actually three choices presented to the participants in the CG case: the CG completion, the SM completion, and an additional distractor completion. Thus we can estimate that, for people, the accuracy difference is given by

$$\Delta_{acc}^{people}\geq 39$$

## 7 Related Work

## 7.1 Contrastive Learning And Time Series Representation Learning

The idea of learning representations by maximizing an information-theoretic criterion can be traced at least back to Linsker's InfoMax principle (Linsker, 1988), in which it was shown that certain properties of neurons in visual cortex could be replicated by training the encoder to maximize mutual information between the input and the encoder representation. This principle was subsequently extended to the problem of unsupervised deconvolution of time series (Bell and Sejnowski, 1995) by extraction of independent components.
A network that learns by instead trying to maximize representational similarity between two different parts of the same input, presaging the modern approach to contrastive learning, was introduced by Becker and Hinton (1992). In a similar spirit, the BCM learning rule (Bienenstock et al., 1982), introduced as a model of synaptic plasticity in the visual cortex, can be shown to be equivalent to projecting the data onto subspaces that are "maximally discriminative" (Intrator and Cooper, 1992), and thus most likely to be useful for downstream classification tasks. Rather than trying to optimize the mutual information directly, however, most modern implementations of this idea use a form of the InfoNCE loss, introduced by van den Oord et al. (2018). There it was shown that this objective is a tractable lower bound to the mutual information criterion, which can be difficult to estimate directly. This loss is also strongly reminiscent of the older technique of Contrastive Hebbian learning (Hinton, 1989), insofar as both involve computing average network activations over a set of "positive pairs" of inputs as well as over a set of "negative pairs", and try to maximize the difference between the two averages. The authors incorporated this objective in their Contrastive Predictive Coding model (CPC), in which a recurrent encoder is trained to predict its own future outputs. This basic loss function has been adapted and modified in several ways. In Chen et al. (2020) it is used in tandem with a siamese network architecture, as we described in Section 2.1, while Aberdam et al. (2020) extends this setup to a seq-to-seq objective and Li et al. (2020) integrates the contrastive objective with a reconstructive one. A similar contrastive objective is used in He et al. (2020), except with a memory bank used to sample negative examples, with this approach extended to videos in Pan et al. (2021).
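For concreteness, the InfoNCE objective over a batch of paired embeddings can be sketched in a few lines of NumPy. This is an illustrative implementation, not the one used by any of the models above; the batch size, dimensionality, and temperature are arbitrary choices:

```python
import numpy as np

def info_nce(z_anchor, z_positive, temperature=0.1):
    """Minimal InfoNCE loss: rows z_anchor[i] and z_positive[i] form the
    positive pair; the remaining rows of z_positive act as negatives."""
    # L2-normalize so the dot product is cosine similarity.
    za = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    zp = z_positive / np.linalg.norm(z_positive, axis=1, keepdims=True)
    logits = za @ zp.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax; the diagonal holds the positive-pair logits.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z_pos = z + 0.01 * rng.normal(size=z.shape)   # a slightly perturbed "view"
loss_aligned = info_nce(z, z_pos)             # positives match anchors
loss_random = info_nce(z, rng.normal(size=z.shape))  # positives are random
assert loss_aligned < loss_random
```

When the two views of each input agree, the diagonal dominates the similarity matrix and the loss is small; with unrelated pairs the loss approaches log N.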
The approach of Hyvarinen and Morioka (2016) is also very similar in spirit, in which the idea is to learn temporal features of time series that differ across different time windows. Although some of the works above deal with sequential data, they tend to involve data that is high-dimensional (videos (Pan et al., 2021), image patches (Aberdam et al., 2020)) or otherwise not directly interpretable by humans (audio (van den Oord et al., 2018), radio frequency signals (Li et al., 2020)), and do not consistently yield the lower-dimensional, readily interpretable, and easily composable functions of the form studied here. Fewer works have considered representation learning of simple functions, such as the 1-dimensional time series used in function learning experiments and studied here. This omission is notable since, despite their simplicity, such functions occur in a wide range of naturalistic settings (Duvenaud et al., 2013). Two particularly notable models of unsupervised time series learning are Triplet loss (Franceschi et al., 2019) and Temporal Neighborhood Coding (Tonekaboni et al., 2021). The first is inspired by word2vec (Mikolov et al., 2013), and relies on predicting the representation of a "word" (here a short window of the time series) from the representation of its "context" (here a longer window containing the "word"). In TNC, the time series is divided into disjoint segments. The encoder is jointly learned alongside a discriminator, in such a way that the discriminator is able to tell the difference between distant and proximal observations. In both TNC and Triplet loss, a recurrent encoder is used. An alternative approach to unsupervised learning of 1D time series is through autoencoders. A popular choice here is a seq2seq architecture with a reconstruction loss (Amiriparian et al., 2017; Lyu et al., 2018; Malhotra et al., 2017). Ma et al.
(2019) augmented this setup with a k-means objective to encourage clustering in the latent space. Compared to our approach, these involve considerably more complexity, through the use of an additional decoding step, as well as more intricate seq2seq architectures.

## 7.2 Function Learning And Gaussian Processes

The dominant framework for modeling of human function learning uses Gaussian processes, a statistical model that specifies a probability distribution over the infinite-dimensional space of functions and allows for tractable inference procedures (Rasmussen and Williams, 2006). Lucas et al. (2015) used Gaussian processes to capture a wide range of empirical function learning phenomena, while Wilson et al. (2015) and Schulz et al. (2017) proposed specific families of kernels to model human extrapolation judgements. A limitation of the basic GP framework is its dependence on a choice of specific kernels or kernel families. Our approach sought to address this dependence through the use of unsupervised learning. Other approaches have taken a similar tack. The Spectral Mixture Kernel (Wilson and Adams, 2013) and Variational GP (Tran et al., 2016) do so by introducing nonparametric families of kernels that can approximate arbitrary kernels to an arbitrary level of precision. Duvenaud et al. (2013) implement a similar idea, building up a family of kernels by applying operations to a small number of atomic kernels and performing a search over the resulting combinatorial space. Sun et al. (2018) perform a similar search using a continuous relaxation and a neural network. In Hinton and Salakhutdinov (2007), an appropriate kernel is found by fitting a Boltzmann machine. Neural Processes (Garnelo et al., 2018b) go further and replace the Gaussian kernel with a more flexible parametric family of distributions that can be learned using a neural network.
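As a concrete illustration of the basic GP machinery, the posterior mean under a fixed RBF kernel (the kind of quantity used to construct candidate completions for a prompt curve) can be computed in a few lines. The kernel choice, hyperparameters, and sine-wave prompt below are purely illustrative:

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=0.5):
    """Squared-exponential (RBF) kernel matrix between two sets of 1D inputs."""
    diff = xa[:, None] - xb[None, :]
    return np.exp(-0.5 * (diff / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4, lengthscale=0.5):
    """Zero-mean GP posterior mean (Rasmussen and Williams, 2006):
    m(x*) = K(x*, X) [K(X, X) + noise * I]^{-1} y."""
    K = rbf_kernel(x_train, x_train, lengthscale) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_test, x_train, lengthscale)
    return k_star @ np.linalg.solve(K, y_train)

# Hypothetical prompt curve: a sine wave observed on a dense grid.
x = np.linspace(0.0, 3.0, 30)
y = np.sin(x)

# Extrapolate slightly beyond the observed range.
x_new = np.array([3.1, 3.2])
mean = gp_posterior_mean(x, y, x_new)

# Close to the data, the posterior mean tracks the underlying function.
assert np.all(np.abs(mean - np.sin(x_new)) < 0.2)
```

Far from the data, the posterior mean reverts to the (zero) prior mean, which is exactly the kernel-dependence that the unsupervised approach in this paper seeks to avoid.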
The Neural Process approach naturally extends to modeling of conditional distributions (Garnelo et al., 2018a; Kim et al., 2019; Gondal et al., 2021). It also has the advantage that the encoder can accommodate both a variable number of observations and variable x-locations. Such approaches share our broad goal of trying to learn the structure of a space of curves without assuming any particular functional forms ahead of time. However, both differ from ours in that they build in more statistical machinery, by positing an explicit generative probabilistic model of the input distribution of curves.

## 8 Limitations

There are several limitations to our encoder. First, it differs from other models in that it was not designed to scale to very long time series. In particular, we use a feedforward convolutional encoder that processes the entire time series at once, while other techniques use some combination of recurrence and/or local windowing of the time series. Our time series have only 100 points, which is extremely short from the viewpoint of typical time series in learning models. However, in the context of function learning, such short time series have face validity, because during function learning experiments people can only make use of limited information at a time (Villagra et al., 2018). Thus, while it is not clear how well our encoder architecture would scale to very long time series, it is also not clear how well humans would do so either; and it remains to be determined how useful doing so would be for generalization in natural environments. These remain subjects for future research. Second, the feedforward nature of our encoder also restricts it to processing time series of a fixed length and sampling frequency, as opposed to the recurrent encoders of the other models, which can handle time series of variable lengths, or the CNP-style permutation-invariant encoder, which can handle both variable lengths and variable sampling frequencies.
In principle this could be overcome by upsampling or downsampling as necessary (and this was the approach we took in the multiple choice completion task). While this kind of resampling may be benign or helpful in certain circumstances (e.g., as a form of context normalization (Webb et al., 2020)), there are also many applications in which it would instead be preferable to preserve the original resolution and accommodate variable lengths. It is also possible that an appropriate modification of the CNP-style encoder could be used to overcome this difficulty; but due to the relatively poor results of the contrastive-perm-inv model, further work is necessary to fully flesh out this idea.

## 9 Discussion

The contrastive encoder we presented exhibited superior performance to comparison models in tests of generalization involving categorization as well as free-form extrapolation. This was the case despite its greater simplicity than those models. Moreover, we found that the performance was significantly degraded if an MLP was used in place of the convolutional layers in the encoder, and was even further degraded when using a Neural Process-style permutation-invariant architecture. On the other hand, the usage of a convolutional encoder is not sufficient on its own to achieve this level of performance, as shown by the poor results of the VAE, which employed such an encoder. This suggests that the combination of 1D convolutions, together with the contrastive loss and specific family of augmentations, may be key to learning good representations of intuitive functions. In the multiple choice task, all models found it easier to correctly extrapolate prompt curves generated from the CG than curves generated from the SM, with this effect being most pronounced in the contrastive model. This is qualitatively similar to the corresponding empirical result from Schulz et al. (2017) regarding people's judgements in an analogous task.
Thus our analysis suggests that such a bias may simply "fall out" as a consequence of a more general representation-learning procedure. More generally, we regard this as a proof of concept that the properties of representation learning algorithms can serve as an explanatory tool in the study of high-level human cognition such as function learning. Indeed, several influential accounts of human intelligence posit the existence of elements of "core knowledge," such as an abstract number sense and fundamental notions of Euclidean geometry (Spelke and Kinzler, 2007; Chollet, 2019). Sometimes referred to as "atoms" of knowledge, these primitives are assumed to be low-dimensional forms of representation and/or simple constructs and functions (e.g., continuity of processing, simple forms of causality), on which more complex cognitive abilities responsible for human intelligence are built. It has been proposed that the availability and use of such primitives is a critical factor distinguishing human generalization capabilities from those of existing artificial systems (Lake et al., 2016), which rely on statistical estimation, and recent empirical evidence has been proposed in support of this claim (Kumar et al., 2020). Major efforts in cognitive science have assumed that such primitives are either genetically pre-specified, or arise sufficiently early and predictably in development that they can be treated as predetermined. Based on this assumption, such efforts have focused research on the kinds of inference and learning mechanisms that, operating on such primitives, can compose them into more complex forms of processing (Lake et al., 2015; Ellis et al., 2020). Similarly, it has been proposed that such primitives should be considered as inductive biases when designing and comparing candidate computational architectures that seek to emulate human generalization capabilities.
In contrast to this approach, some have argued that it is neither necessary nor accurate to assume that such primitives are pre-specified, but rather that they arise from and are shaped by general-purpose learning mechanisms interacting with and encoding statistical structure present in the environment (Rumelhart and McClelland, 1986). While examples have been provided of how human-like concept formation and generalization can arise in this way (McClelland and Rogers, 2003), these have generally relied on externally supervised forms of learning that are explicitly trained on tasks that elicit such structure. To date, it has been difficult to design artificial systems that can discover low-dimensional, simple forms of structure that can be exploited for generalization, using unsupervised or self-supervised forms of learning. Here, we have proposed one such mechanism, through a combination of contrastive learning and topological augmentations, and have demonstrated its ability to learn basic classes of functions, and simple compositions thereof. For testing free-form extrapolation, we used a curriculum learning strategy that involved first learning categories and then learning category-specific forecasting rules. While this procedure was more complex than for the other heads, there is reason to believe that it in fact resembles the process by which people may learn to make inferences in sparse and underdetermined settings. The most direct evidence of this in the realm of function learning comes from Leon-Villagra and Lucas (2019), which showed that people's extrapolations of curves were dependent on whether they judged the curves to lie in a previously encountered category, suggesting that people use category-dependent forecasting rules.
More generally, our approach may be viewed as implementing a form of Hierarchical Bayesian model; such models have been shown to capture the structure of people's intuitive theories about abstract structures in the world (Gershman and Niv, 2010; Tenenbaum et al., 2011; Kemp and Tenenbaum, 2008). Our approach also fits with the idea that learning in natural agents involves adjudicating a tension between maintaining as much flexibility as possible (by optimizing a Maximum Entropy objective) while at the same time maximizing efficiency of computation (e.g., by optimizing a Minimum Energy objective). From this perspective, the contrastive encoder can be viewed as maximizing entropy, as implemented by the InfoNCE objective that we used, since it may be shown (Wang and Isola, 2020) that the second term in that objective is an estimator of the entropy of the distribution of codes in the latent space. Complementing this, our curriculum learning can be viewed as minimizing the energy of representations generated by a given category of function when presented with an instance of that function. This may strike a balance between flexibility (of generalization) and efficiency (of inference) that begins to approximate the balance observed in natural agents, and humans in particular (Frankland et al., 2021).

## 10 Broader Impact

As this work is concerned with foundational properties of learning algorithms in an abstract setting, we do not foresee any negative societal consequences arising directly from this work.

## References

A. Aberdam, R. Litman, S. Tsiper, O. Anschel, R. Slossberg, S. Mazor, R. Manmatha, and P. Perona. Sequence-to-sequence contrastive learning for text recognition. *arXiv:2012.10873v1*, 2020.

S. Amiriparian, M. Freitag, N. Cummins, and B. Schuller. Sequence to sequence autoencoders for unsupervised representation learning from audio. In *DCASE Workshop*, 2017.

S. Becker and G. Hinton. Self-organizing neural network that discovers surfaces in random dot stereograms.
*Nature*, 1992.

A. J. Bell and T. I. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. *Neural Computation*, 1995.

E. Bienenstock, L. Cooper, and P. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. *Journal of Neuroscience*, 1982.

L. Chen. The topological approach to perceptual organization. *Visual Cognition*, 2005.

T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, 2020.

F. Chollet. On the measure of intelligence. *arXiv:1911.01547*, 2019.

E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel. Extrapolation: The sine qua non for abstraction in function learning. *Journal of Experimental Psychology: Learning, Memory and Cognition*, 1997.

D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. In *International Conference on Machine Learning*, 2013.

K. Ellis, C. Wong, M. Nye, M. Sable-Meyer, L. Cary, L. Morales, L. Hewitt, A. Solar-Lezama, and J. B. Tenenbaum. Dreamcoder: Growing generalizable, interpretable knowledge with wake-sleep bayesian program learning. *arXiv:2006.08381*, 2020.

S. Frankland, T. Webb, and J. Cohen. No coincidence, George: Capacity-limits as the curse of compositionality. PsyArxiv:https://doi.org/10.31234/osf.io/cjuxb, 2021.

M. Garnelo, D. Rosenbaum, C. J. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y. W. Teh, D. J. Rezende, and S. M. A. Eslami. Conditional neural processes. *arXiv:1807.01613*, 2018a.

M. Garnelo, J. Schwarz, D. Rosenbaum, F. Viola, D. J. Rezende, S. A. Eslami, and Y. W. Teh. Neural processes. *arXiv:1807.01622*, 2018b.

S. J. Gershman and Y. Niv. Learning latent structure: carving nature at its joints. *Current Opinion in Neurobiology*, 2010.

M. W. Gondal, S. Joshi, N. Rahaman, S. Bauer, M. Wuthrich, and B.
Scholkopf. Function contrastive learning of transferable meta-representations. In *International Conference on Machine Learning*, 2021.

K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In *Computer Vision and Pattern Recognition*, 2020.

G. E. Hinton. Deterministic boltzmann learning performs steepest descent in weight-space. *Neural Computation*, 1989.

G. E. Hinton and R. R. Salakhutdinov. Using deep belief nets to learn covariance kernels for gaussian processes. In *Conference on Neural Information Processing Systems*, 2007.

A. Hyvarinen and H. Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. In *Advances in Neural Information Processing Systems*, 2016.

N. Intrator and L. Cooper. Objective function formulation of the bcm theory of visual cortical plasticity: Statistical connections, stability conditions. *Neural Networks*, 1992.

J.-Y. Franceschi, A. Dieuleveut, and M. Jaggi. Unsupervised scalable representation learning for multivariate time series. In *Conference on Neural Information Processing Systems*, 2019.

C. Kemp and J. B. Tenenbaum. The discovery of structural form. *Proceedings of the National Academy of Sciences*, 2008.

H. Kim, A. Mnih, J. Schwarz, M. Garnelo, A. Eslami, D. Rosenbaum, O. Vinyals, and Y. W. Teh. Attentive neural processes. *arXiv:1901.05761*, 2019.

D. P. Kingma and M. Welling. Auto-encoding variational bayes. *arXiv:1312.6114*, 2014.

S. Kumar, I. Dasgupta, J. Cohen, N. Daw, and T. Griffiths. Meta-learning of structured task distributions in humans and machines. 2020.

P. Kwantes and A. Neal. Why people underestimate y when extrapolating in linear functions. *Journal of Experimental Psychology: Learning, Memory, and Cognition*, 2006.

B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. *Science*, 2015.

B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman.
Building machines that learn and think like people. *Behavioral and Brain Sciences*, 2016.

P. Leon-Villagra and C. G. Lucas. Generalizing functions in sparse domains. In *Proceedings of the Cognitive Science Society*, 2019.

T. Li, L. Fan, Y. Yuan, H. He, Y. Tian, and D. Katabi. Information-preserving contrastive learning for self-supervised representations. *arXiv:2012.09962v1*, 2020.

R. Linsker. Self-organization in a perceptual network. *IEEE Computer*, 1988.

C. G. Lucas, T. L. Griffiths, J. J. Williams, and M. L. Kalish. A rational model of function learning. *Psychonomic Bulletin and Review*, 2015.

X. Lyu, M. Hueser, S. L. Hyland, G. Zerveas, and G. Rätsch. Improving clinical predictions through unsupervised time series representation learning. *arXiv:1812.00490*, 2018.

Q. Ma, J. Zheng, S. Li, and G. W. Cottrell. Learning representations for time series clustering. In *Advances in Neural Information Processing Systems*, 2019.

P. Malhotra, V. TV, L. Vig, P. Agarwal, and G. Shroff. TimeNet: pre-trained deep recurrent neural network for time series classification. *arXiv:1706.08838*, 2017.

J. McClelland and T. Rogers. The parallel distributed processing approach to semantic cognition. *Nature Reviews Neuroscience*, 2003.

M. A. McDaniel and J. R. Busemeyer. The conceptual basis of function learning and extrapolation: Comparison of rule-based and associative-based models. *Psychonomic Bulletin and Review*, 2005.

T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In *Advances in Neural Information Processing Systems*, 2013.

M. Neumann, S. Huang, D. E. Marthaler, and K. Kersting. pyGPs - a Python library for Gaussian process regression and classification. *Journal of Machine Learning Research*, 2015.

T. Pan, Y. Song, T. Yang, W. Jiang, and W. Liu. VideoMoCo: Contrastive video representation learning with temporally adversarial examples. In *Computer Vision and Pattern Recognition*, 2021.

C.
E. Rasmussen and C. K. I. Williams. *Gaussian processes for machine learning*. MIT Press, 2006.

M. Reed and B. Simon. *Functional analysis*. Academic Press, 1980.

D. E. Rumelhart and J. L. McClelland. *Parallel distributed processing: explorations in the microstructure of cognition. Volume 1. Foundations*. MIT Press, 1986.

E. Schulz, J. B. Tenenbaum, D. Duvenaud, M. Speekenbrink, and S. J. Gershman. Compositional inductive biases in function learning. *Cognitive Psychology*, 2017.

E. S. Spelke and K. K. D. Kinzler. Core knowledge. *Developmental Science*, 2007.

S. Sun, G. Zhang, C. Wang, W. Zeng, J. Li, and R. Grosse. Differentiable compositional kernel learning for gaussian processes. In *International Conference on Machine Learning*, 2018.

J. B. Tenenbaum, C. Kemp, T. L. Griffiths, and N. D. Goodman. How to grow a mind: Statistics, structure, and abstraction. *Science*, 2011.

S. Tonekaboni, D. Eytan, and A. Goldenberg. Unsupervised representation learning for time series with temporal neighborhood coding. In *International Conference on Learning Representations*, 2021.

D. Tran, R. Ranganath, and D. M. Blei. The variational gaussian process. In *International Conference on Learning Representations*, 2016.

A. van den Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. *arXiv:1807.03748*, 2018.

P. L. Villagra, I. Preda, and C. G. Lucas. Data availability and function extrapolation. In *Proceedings of the Cognitive Science Society*, 2018.

T. Wang and P. Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In *International Conference on Machine Learning*, 2020.

T. W. Webb, Z. Dulberg, S. M. Frankland, A. A. Petrov, R. C. O'Reilly, and J. D. Cohen. Learning representations that support extrapolation. In *International Conference on Machine Learning*, 2020.

A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation.
In *International Conference on Machine Learning*, 2013.

A. G. Wilson, C. Dann, C. G. Lucas, and E. P. Xing. The human kernel. In *Conference on Neural Information Processing Systems*, 2015.

E. Zeeman. Topology of the brain. In *Mathematics and computer science in biology and medicine: Proceedings of the conference held by the Medical Research Council in Association with the Health Department*, 1965.
Review 1:

Summary: The authors study representation learning in the space of 1D functions generated by various types of GP kernels. They train 6 different types of encoders using either varying architectures or contrastive training losses. Subsequently, they compare the learned representations in three different supervised tasks by learning simple prediction heads on top of the learned representation, using only small amounts of data. The three tasks are:

1. Classifying which type of kernel (out of 14 possible choices) a given function is most likely from
2. Selecting one of two possible choices for continuation of a given function
3. "Freeform" predicting the continuation of a given function by predicting auto-regression coefficients.

The 14 available kernels are selected from two different families of kernels: 13 generated from a compositional grammar ("CG", out of linear, rbf, periodic) and one spectral mixture kernel. The authors compare their results with biases found in humans, which exhibit a tendency to select CG kernels over SM kernels when choosing, and find a similar bias in their results.

Strengths and Weaknesses:

# Strengths

- Focus on a "simpler" (compared to large-scale, e.g. visual, tasks), yet sufficiently complex and well-defined domain can generate interesting results.
- The authors compare against a broad set of baselines.
- The authors performed a sizeable number of experiments for various baseline/hyperparameter/task variations.
- Interesting results in section 6

# Weaknesses

I have grouped my feedback below into three categories: "Clarity", "Lack of clearly formulated learnings", and "Methodological questions". I've also added a "Nitpick" section containing more subjective opinions/minor issues that I would like to point out, but do not expect the authors to explicitly address in the rebuttal (but I think would strengthen the paper).
## Lack of clearly formulated learnings/contributions

The most critical weakness of this paper for me is that the contribution is not clear. The authors state that they "propose a framework for addressing these challenges", meaning how to learn a good representation in a self-supervised fashion. However, this is a well-researched problem and the proposed framework is not novel. Neither the idea of unsupervised pre-training and learning small task-specific 'heads', nor the loss function used, is new. The authors further claim novelty on the investigated domain and the newly proposed augmentations. While the domain itself is not new, I am indeed not aware of deep-learning research on this domain for the specific question of self-supervised contrastive learning, although I am not an expert in this area. However, I believe a stronger justification for why this area is of interest would strengthen the paper. The authors make references to biological generalization, but these references are vague and possible insights gained are not clearly specified. This connects to the experimental section, in which the results are interpreted as "method X performs better than method Y", but no deeper insights are formulated or discussed, for example about biological insights or reasons why some representation learning methods might perform better than others.

## Clarity

At a few places, especially in the experimental section, I was not able to exactly understand the description:

- Last paragraph before section 5.1: To me this paragraph reads as if each value in table 1 is the average/std over 30 values (3 copies * 10 hyperparameter choices). But does that mean each copy-hyperparameter pair was only evaluated on a *single* test-datapoint? Surely a larger test-set was used?
- Section 5.1: How is the L2 penalty applied to the encoder? I thought the encoder remains fixed for this task?
- Section 5.2: "we constructed the candidate completion curves by computing the posterior mean with respect either to the SM kernel, or to the CG kernel family". How was the kernel from the CG family chosen? For a CG prompt, was the "correct" kernel chosen or a random one? For a SM prompt, was a random or the best fitting CG kernel chosen?

## Methodological questions

- I am unsure about the choice to fix the hyperparameters for each kernel instead of fitting them to each given function. Because then the multiple choice task is not just between different kernel *types*, of which there are the described 14 (e.g. linear, rbf, SM), but actually between different *kernels* with different hyperparameters, which should make the choice much easier. (This also relates to my point in the 'clarity' section about how the corresponding CG kernels were chosen)
- The results from section 6 are very interesting, but I believe they require either some discussion/explanation or, even better, additional experiments explaining how this bias is introduced. It would also be good to confirm that the bias does not exist on the training dataset and that the observed bias is induced by the generalization of the learned representation.

## Smaller points / Nitpicks

- "...that insights gained here are likely to shed light on the ability of biological and artificial agents to generalize more broad". This is a strong claim that would benefit from support from either citations or a more detailed argument.
- I am not sure about the intuitive explanation of the InfoNCE loss: If one approximates LSE as max(z_i), then this z_i won't be the most similar negative example, but it will be the positive example because the sum in the denominator is over *all* pairs, not just negative ones.
- I think section 2.2 would greatly benefit from some visual examples for some of the kernels used.
- I find it unsurprising that a CNN compares favourably to an MLP with the same number of parameters (CNNs have surprisingly few). A more interesting (IMO) comparison would have been to a larger MLP.
- “Thus our analysis suggests that such a bias may simply “fall out” as a consequence of a more general representation-learning procedure. More generally, we regard this as a proof of concept that the properties of representation learning algorithms can serve as an explanatory tool in the study of high-level human cognition such as function learning, thereby opening up many exciting directions for future work.” I believe this claim requires some more concrete examples as support.

Requested Changes: Please refer for details to the review sections above. As most critical recommended change, I would suggest giving the paper a clearer "take away" message. The authors hint at potential relevance to social/biological sciences, but these are kept at a too high level. Furthermore, the discussion of experimental results should also better highlight potential conclusions. For example (but not necessarily these), things like: Why is InfoNCE better than the baselines? What mechanism leads to the bias get introduced in section 6? What does this tell us about human cognition? etc..

On the methodological side, two points should be addressed before I can recommend acceptance:

* As fixing hyperparameters for kernels instead of learning them is an unusual choice, which I believe could have quite strong influences on the qualitative results, it would be good to have some additional justification/discussion and, ideally, ablation studies (in the appendix is fine if space is scarce) showing that they don't change the qualitative results.
* The results from section 6 are very interesting, but require more discussion, or even better additional experiments, further explaining how the observed bias is introduced.
It should also be confirmed that the bias is not yet present on the training data (which, I believe, would point to a bug). Lastly, please have a look at the "Nitpicks" in the review section above. However, these aren't critical for my recommendation. Furthermore, please improve the clarity of the (experimental) descriptions. These should be small fixes:

* Last paragraph of section 5.1 (the test setup)
* How was the L2 penalty used?
* Section 5.2: How was the kernel from the CG family chosen?

Broader Impact Concerns: No broader impact concerns.

==================================================

Review 2:

Summary: This paper proposes a self-supervised learning method for learning a representation of 1-dimensional intuitive functions, which allows for better extrapolation. Specifically, the paper proposes 1) a 1D-convolutional encoder, 2) the use of a contrastive loss, and 3) data augmentation via several topological distortions inspired by the cognitive science literature. The experimental results show that the proposed method outperforms several baselines (e.g., CPC, triplet loss, VAE) on kernel classification, multiple choice extrapolation, and free-form extrapolation. In addition, the empirical results also show that the proposed method has a bias towards Compositional Grammar (CG) like humans, which arguably makes it suitable for extrapolation in natural environments.

Strengths and Weaknesses:

* Strength

1) The problem of learning a representation of 1D intuitive functions and extrapolating them is not only new but also very well-motivated by the cognitive science literature, as discussed in the paper. While the majority of prior work has focused on learning representations of images and texts, this paper introduces a relatively under-explored problem and makes a nice attempt to mimic how humans learn and extrapolate functions.
2) The proposed way of applying topological distortions sounds plausible and convincing.
3) The empirical result looks convincing.
Although there is little work on learning representations for 1D functions, the proposed method is still compared against reasonable and popular representation learning algorithms including CPC, Triplet loss, and VAE.

4) The paper is very well-written. The problem is very well-motivated, and the description of the method and experiments is very clear.

* Weakness

1) There is no ablation study on the use and the specific choices of topological distortions. The proposed algorithm is a combination of 1) a convolutional architecture, 2) a contrastive loss, and 3) topological distortions. While there are ablation studies showing the effectiveness of the first two, I could not find experiments showing the effectiveness of topological distortions, unless I missed something. Showing this would make the result much more comprehensive.

Requested Changes:

1. Show how much each topological distortion improves extrapolation.
2. (Minor) Giving some examples of CG and SM kernels in the main text would be helpful for readers who are less familiar with them.
3. (Minor) Fix the wrong reference (Figure 3 -> Figure 1) on page 4.
4. (Minor) Fix squeezed tick labels (texts) in Figures 1, 2, 3.

Broader Impact Concerns: I do not have any ethical or broader impact concerns about this paper.

==================================================

Review 3:

Summary: This paper builds a framework for studying general representations learned to estimate "intuitive" scalar functions. These intuitive functions are constructed from a set of functions that have previously been shown to be more intuitive or easy for humans to learn, providing a reasonable proxy for naturalistic function learning. The paper uses a self-supervised encoder (e.g. an autoencoder) to learn representations which are invariant to topological distortions of these intuitive functions.
The paper studies a few-shot learning paradigm where representations are pretrained on a large corpus of data, then a specialized final layer is learned few-shot, mapping from the representation to the desired predictor.

Strengths and Weaknesses:

## Novelty, significance, relevance

The results and methodology in this paper appear novel. The work is certainly relevant to the representation learning community, though its connection to the psychology community is not immediately clear; however, the psychology connection seems more of an aside in the paper anyways. The methodology represents a significant change to classical representation learning methodologies, where the experiment design is targeted towards emulating human behavior on a set of function learning tasks. The resulting experiments provide some interesting insights---though it's possible some of these insights may be misattributed (I will discuss more below).

## Correctness

Broadly, this paper did an exceptional job at matching empirical evidence to claims. Experiments were well designed with 30 samples used to estimate sample means of performance, uncertainty measures included in all tables, reasonable baselines provided, and special care given to training data to rule out potential sources of bias (i.e. data imbalance).

### Confidence intervals

95% confidence intervals are reported for every result in every table, however these confidence intervals are based on standard errors. It's not clear that proper assumptions are met for these 95% confidence intervals to be accurate, namely the normality assumptions on the underlying data as well as independence assumptions. There is underlying structure to the data (3 trials for 10 different hyperparameters = 30 total data points), which suggests that independence may be violated and also hints to me that normality plausibly would not hold. As a result, all of these uncertainty measures could be providing optimistically tight bounds on the sample mean estimate.
It should be noted, however, that in most cases the effect size appears to be quite large---there is a large difference in estimated mean performance between the proposed algorithm and baselines---so even if confidence intervals are optimistically tight, likely statistical significance still holds. These optimistic intervals may be adversely impacting conclusions in Table 3, where differences between the proposed method and the architecture ablation (contrastive-mlp) are often much smaller. In the other tables (Tables 1, 2, 4) these optimistic intervals likely do not impact conclusions severely.

### Table 4 -- bias towards CG

I believe the paper is actually underclaiming its results in this table. The bolding scheme and discussion in the paragraph at the bottom of page 9 suggest that the paper is seeking statistically significant differences between models. That is, the table is implicitly testing the claim:

> Does our proposed model (contrastive) induce greater bias towards CG functions than other models?

The resulting conclusion is that there are no statistically significant differences among the results. However, I believe the goal of this section should _actually_ be measuring difference from 0 bias, rather than differences between algorithms. That is, the claim should be:

> Do any of the models induce _any_ bias towards CG functions?

Answering this question would allow drawing conclusions that this empirical methodology yields similar biases as the human-centered methodology found in prior works. Interpreting the results in Table 4 in this way, it appears that the proposed method is statistically significant with a very meaningful difference (~20 points away from 0). Regardless of which claim is measured, however, I wonder if there are confounding effects that are harming the conclusions. Specifically, the paper suggests that these biases towards better accuracy on CG functions vs SM functions are due to the modeling decisions themselves.
To some extent, this is clearly true: there are notable differences in bias between models. However, some degree of this bias very likely is coming from the data. Specifically, look at the "raw" baseline. The bias is approximately ~5 points in favor of CG, despite the model having no predictive power (it is effectively the identity function). The differences in the underlying data could be due to the effective function spaces induced by CG vs. SM. Each function construction methodology yields a space of possibly observed functions. Imagine the extreme case that CG yields only one possible function, while the space for SM is extremely large. Then despite the data having 50% of examples drawn from each set, the singular CG function has 250k samples that the estimator can learn from, while each SM function has only 1 relevant sample. In this setting, then, it would be unsurprising that the estimator has some bias towards CG. Clearly the function space of CG is much larger than 1 function as in my example, but how large is the effective function space compared to SM? Considering this alongside the result from the "raw" baseline, does this suggest some of the differences found in the study are due to the underlying data instead of the learning mechanism? As an aside, taking into account the possible underlying data bias and my suggestion to change the statistical test in Table 4, the proposed method actually looks _even better_. The proposed method (and the single ablation contrastive-mlp) would plausibly be the only methods that have a bias toward CG functions, suggesting that the learning mechanisms resemble human biases more strongly on this dataset than other learning mechanisms. (In order to draw this conclusion, I subtracted 5.19 [the raw bias] from each model's bias to approximate offsetting by underlying data structure. This isn't exactly correct, but might give a ballpark estimate of possible conclusions.)
## Clarity

I had a couple of minor clarity concerns, primarily with the objective function stated in Section 2.1. First, Section 2.1 notes InfoNCE in its title; however, this is not actually discussed or even mentioned in the section body. A citation to a relevant paper is provided without context. If InfoNCE is important enough to be in the section title, likely it should be discussed in the section body. The second clarity concern in Sec 2.1 is with the definition of the objective. The objective assumes that we are provided with pairs: $(v_i, v'_i)$ with $N$ such pairs. Later the objective is defined with modular arithmetic and a summation that spans from $i \in [1, 2N]$. It was not until reading the van den Oord et al. paper that I understood that the objective does not take advantage of the pairs structure (i.e. $v'$ is never used in the objective definition), but rather that these pairs are flattened into a single dataset of size $2N$. I don't believe the Spectral Mixture functions are defined in this work---and even Compositional Grammar functions are only intuitively defined. I recognize both come from prior works, however they are critical to this paper and clarity would be greatly enhanced if a little more time was spent detailing these classes of functions.

## Pedantics

A few pedantic minor points that shouldn't influence the decision:

* There are several typos that shouldn't be possible with latex---notably several whitespace characters are dropped between a period and the start of the next sentence.
* On page 6, there is a paragraph segment (paragraph starts on page 5) which is nestled awkwardly between two figures. It looks like a caption and made reading awkward.
* Similar paragraph issue on page 8.
* I had difficulty writing a summary of contributions based on the abstract and/or introduction. That is, neither the abstract nor introduction gave a clear overview of what is contributed by the paper.
Notably, I found the abstract rather misleading in thinking this paper would somehow be related to reinforcement learning (terms like "agents" and "environments" and my own biases mixed poorly). This paper is highly specific to CG and SM functions and a particular methodology for constructing function learning. I believe the abstract should be more upfront about this, instead of being high-level and, well..., abstract.

Requested Changes:

* The changes marked "pedantic" above would be nice-to-have in order to improve readability.
* A deeper definition of Spectral Mixture functions and Compositional Grammar functions would significantly strengthen the readability.
* I think the work would benefit greatly from revisiting Table 4 and ensuring that it provides direct support for the claims the paper wishes to make. In order to better align with the abstract, I believe that a reinterpretation of Table 4 is necessary. However, the paper is strong even without this change.

Broader Impact Concerns: The included broader impact concerns section seems sufficient for this work.

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper studies representation learning in the space of 1D functions. It generates functions from a set of kernels: 13 kernels are drawn from a family of compositional kernels (CG) and one spectral mixture kernel (SM). The paper optimizes a contrastive learning objective with data augmentation and explores several architectures for the encoder. It tests the learned representations on several downstream tasks, such as extrapolation and kernel classification. One of the key results is that the contrastive learner with a 1D convolutional backbone exhibits a similar bias towards CG extrapolation compared to SM as humans do, albeit not quite as strong. The reviewers all appreciated the focus on simpler functions, the quality of the writing, and the breadth of the experiments.
The comparison with human data (section 6) is especially interesting. The main concerns were:

- A lack of clarity in terms of the specific contributions of the paper.
- Fixing the hyperparameters of each kernel rather than fitting them to data.
- More detailed discussion of the results in section 6.
- Ablations on the effects of topological distortions in relation to extrapolation.
- The validity of the reported confidence intervals.
- Various clarifications and definitions (e.g., mathematical definitions of CG and SM kernels).

The authors addressed all of these points during the rebuttal period, and the reviewers unanimously recommend acceptance. In my opinion, this paper outlines a very nice fundamental study, clearly and convincingly supports its claims, and would make a solid contribution to the TMLR community.

==================================================
# Symbolic Regression Is NP-Hard

Marco Virgolin *marco.virgolin@cwi.nl*
Centrum Wiskunde & Informatica, Amsterdam, the Netherlands

Solon P. Pissis *solon.pissis@cwi.nl*
Centrum Wiskunde & Informatica, Amsterdam, the Netherlands
Vrije Universiteit, Amsterdam, the Netherlands

Reviewed on OpenReview: *https://openreview.net/forum?id=LTiaPxqe2e*

## Abstract

Symbolic regression (SR) is the task of learning a model of data in the form of a mathematical expression. By their nature, SR models have the potential to be accurate and human-interpretable at the same time. Unfortunately, finding such models, i.e., performing SR, appears to be a computationally intensive task. Historically, SR has been tackled with heuristics such as greedy or genetic algorithms and, while some works have hinted at the possible hardness of SR, no proof has yet been given that SR is, in fact, NP-hard. This begs the question: Is there an exact polynomial-time algorithm to compute SR models? We provide evidence suggesting that the answer is probably negative by showing that SR is NP-hard.

## 1 Introduction

Symbolic regression (SR) is a sub-field of machine learning concerned with discovering a model of the given data in the form of a mathematical expression (or equation) (Koza, 1994; Schmidt & Lipson, 2009). For example, consider having measurements of planet masses m1 and m2, the distance r between them, and the respective gravitational force F. Then, an SR algorithm would ideally re-discover the well-known expression (or an equivalent formulation thereof) F = G × m1m2/r^2, with G = 6.6743 × 10^−11, by opportunely combining the mathematical operations (here, of multiplication and division) with the variables and constant at play. The appeal of learning models as mathematical expressions goes beyond obtaining predictive power alone, as is commonplace in machine learning.
In fact, SR models are particularly well suited for human interpretability and in-depth analysis (Otte, 2013; Virgolin et al., 2021b; La Cava et al., 2021). This aspect enables a safe and responsible use of machine learning models for high-stakes societal applications, as requested in the AI acts by the European Union and the United States (European Commission, 2021; 117th US Congress, 2022; Jobin et al., 2019). Moreover, it enables scientists to gain deeper knowledge about the phenomena that underlie the data. Consequently, SR enjoys wide applicability: SR has successfully been applied to astrophysics (Lemos et al., 2022), chemistry (Hernandez et al., 2019), control (Derner et al., 2020), economics (Verstyuk & Douglas, 2022), mechanical engineering (Kronberger et al., 2018), medicine (Virgolin et al., 2020b), space exploration (Märtens & Izzo, 2022), and more (Matsubara et al., 2022). As we will describe in Sec. 2, many different algorithms have been proposed to address SR, ranging from genetic algorithms to deep learning ones. Existing algorithms either lack optimality guarantees or heavily restrict the space of SR models to consider. In fact, there is a wide belief in the community that SR is an NP-hard problem¹ (Lu et al., 2016; Petersen et al., 2019; Udrescu & Tegmark, 2020; Li et al., 2022). However, to the best of our knowledge, this belief had yet to be solidified in the form of a proof prior to the advent of this paper. Indeed, we prove that there exist instances of the SR problem for which one cannot discover the best-possible mathematical expression in polynomial time unless P=NP. Id est, SR is an NP-hard problem.

¹Lu et al. (2016) state that SR is NP-hard but provide no reference nor proof.

## 2 Background

We begin with a historical overview of how SR has been attempted from an algorithmic perspective (Sec. 2.1), and then follow with related work concerning hardness (Sec. 2.2).
## 2.1 SR Algorithms

The introduction of SR is generally attributed to John R. Koza (e.g., Zelinka et al. (2005) make this claim); however, the problem of finding a mathematical expression or equation that explains empirical measurements was already considered in earlier works (Gerwin, 1974; Langley, 1981; Falkenhainer & Michalski, 1986). Such works build mathematical expressions by iterative application of multiple heuristic tests on the data. Koza is best known for his pioneering work on genetic programming (GP), i.e., the form of evolutionary computation where candidate solutions are variable-sized and represent programs (Koza et al., 1989; Koza, 1990; 1994). Early forms of GP were proposed by Cramer (1985); Hicklin (1986). Koza showed that GP can be used to discover SR models by encoding mathematical expressions as computational trees (see Fig. 1). In such trees, internal nodes represent functions (e.g., +, −, ×, etc.) that are drawn from a pre-decided set of possibilities, and leaf nodes represent variables or constants (e.g., x1, x2, . . . , −1, π, etc.). GP evolves a population of trees by initially sampling random trees, and then conducts the following steps: (1) stochastic replacement and recombination of their sub-trees; (2) evaluation of the fitness by executing the trees and assessing their output; and (3) stochastic survival of the fittest.

![1_image_0.png](1_image_0.png)

Figure 1: Example of a tree that encodes f(x) = (sin(x1) + x2) × x3/x1.

Recently, La Cava et al. (2021) proposed *SRBench*, a benchmarking platform for SR that includes more than 20 algorithms and more than 250 data sets. SRBench shows that several state-of-the-art algorithms for SR are GP-based. Among these, at the time of writing, *Operon* by Burlacu et al. (2020) was found to perform best in terms of discovering accurate SR models; and *GPGOMEA* by Virgolin et al.
(2021a) was found to perform best in terms of discovering decently-accurate and relatively-simple SR models (i.e., shorter mathematical expressions). Other forms of GP, such as *strongly-typed* GP (Montana, 1995), *grammar-guided* GP (McKay et al., 2010), and *grammatical evolution* (O'Neill & Ryan, 2001), are often used to tackle *dimensionally-aware* SR, i.e., the search of mathematical expressions with constraints to achieve meaningful combinations of units of measurement. SR has been addressed with other types of algorithms than genetic ones, including, e.g., Monte-Carlo tree search (Cazenave, 2013; Sun et al., 2022). Moreover, several authors proposed deterministic algorithms. For example, Worm & Chiu (2013) and Kammerer et al. (2020) proposed enumeration algorithms which make SR tractable by restricting the space of possible models to consider and including dynamic programming and pruning strategies. Cozad (2014); Cozad & Sahinidis (2018) showed how SR can be addressed with mixed integer nonlinear programming. McConaghy (2011) proposed FFX, which generates a linear combination of many functions that are linearly-independent from each other, and then fits its coefficients with the *elastic* net (Zou & Hastie, 2005) to promote sparsity. Olivetti de França (2018) and Rivero et al. (2022) propose greedy algorithms that start from small mathematical expressions and iteratively expand them, by replacing existing components with larger ones from a set of possibilities. Lastly, recent years have seen the proposal of deep learning-based algorithms for SR. Petersen et al. (2020) cast the SR problem as a reinforcement learning one and train a recurrent neural network to generate accurate SR models. Udrescu & Tegmark (2020) leverage neural networks in order to test for symmetries and invariances in the data that are then used to prune the space of possible SR models. An end-to-end approach is taken by Kamienny et al. (2022) and Vastl et al. 
(2022), who train deep neural transformers to produce SR models directly from the data. Li et al. (2022) seek SR models by proposing a convexified formulation of deep reinforcement learning. In summary, existing SR algorithms are either heuristics, which do not guarantee optimality (e.g., genetic, greedy, or deep learning-based algorithms), or they are exact algorithms that achieve optimality but only over a small subset of all possible SR models, to bound the runtime (e.g., dynamic programming and mixed-integer nonlinear programming algorithms). This strongly hints at the fact that SR is NP-hard. As mentioned earlier, however, no proof has yet been given.

## 2.2 Related Hardness Results

SR is typically posed as an empirical risk minimization (ERM) problem or, when regularization is considered, a structural risk minimization one (Vapnik, 1999). There exists a multitude of theoretical results in machine learning posing the problem as an ERM one. For example, Blum & Rivest (1992) famously proved the NP-completeness of training a three-node neural network to label a given data set correctly. The loss employed in such situations is commonly the 0-1 loss, i.e., the loss that returns 0 if the output of the model (or *prediction*) equals the output that is expected from the data (or *label*) and 1 otherwise. More recently, it has been shown that, under the 0-1 loss, it is NP-hard to even train a linear classifier to be ϵ-better than random (Feldman et al., 2012). In the context of coding theory, ERM with 0-1 loss has been used to prove the NP-hardness of finding a univariate polynomial of maximum degree k over a finite field (a *code*) that maximizes code matching (Guruswami & Vardy, 2005). Under different types of loss, polynomial-time solutions to ERM exist. A famous example of this is linear regression under the squared error loss, which can be solved in polynomial time via ordinary least squares. Recently, Backurs et al.
(2017) presented fine-grained complexity results for ERM with kernel-based and neural network-based approaches. SR can be set to search in the space of polynomials and one can choose to use the 0-1 loss. Moreover, one can in principle set SR to work in finite fields rather than on real numbers to operate with polynomials for discrete codes. For example, Koza (1990) shows how to set GP to learn Boolean circuits by composition of logic gates. This means that the hardness of SR can follow from linking back to results such as the one by Feldman et al. (2012) (we sketch how this can be achieved at the end of Sec. 4). The modern connotation of SR is focused on regression (i.e., we seek a model f : R^d → R with d the number of features) and commonly-used losses have co-domain in R^+_0, such as the absolute error loss or the squared error loss, rather than the 0-1 loss. We will provide a proof of NP-hardness that is general to this type of losses (Eq. (2)). In essence, we will show that when certain *basic* arithmetic operations are chosen for combining (e.g., addition), SR becomes NP-hard. This choice of basic operations allows us to reduce from a rather classical variant of the knapsack problem, namely, the *unbounded subset sum* problem (Kellerer et al., 2004).

## 3 Preliminaries

We will hereon refer to SR models as *functions* when appropriate, as this is their fundamental nature. Functions take variables as arguments. One can use the *identity function*, i.e., the function that returns the value of the variable taken as its argument. For simplicity of exposition, we will generically refer to functions and not make a distinction between (non-identity) functions and variables. Similarly, we will refer to functions also for *constant functions*, i.e., functions that can only return a single numerical value, irrespective of their arguments. This said, let us recall the concept of function composition, which is central to SR.

Definition 1.
Function composition. Given two functions f : A → B and g : B → C, function composition, which we denote by g ◦ f, is the operation that produces a third function h : A → C, such that h(x) = g(f(x)).

Thanks to function composition, we can now define the concept of *search space* of an SR problem.

Definition 2. Search space of SR. Let P be a set of functions. The search space of SR is the function space F that contains all functions that can be formed by composition of the elements of P.

To better understand what Def. 2 states, consider that P can be set to contain a mix of functions that perform basic algebraic operations such as addition, subtraction, multiplication, and division; transcendental functions such as sin, cos, log, exp; constant functions (or simply constants), such as c42(x) = 42 and cπ(x) = π for any x; and identity functions that represent variables of interest for the problem at hand, such as x1, x2, x3. P is typically referred to as the *primitive set*, and its elements as *primitives* (Poli et al., 2008). Once P has been decided, F is determined. For example, choosing P = {+(·, ·), −(·, ·), ×(·, ·), x1, x2, −1, +1} means that F will contain a subset of all possible polynomials of arbitrary degree in x1 and x2. In particular, F is a subset because only some coefficients can be expressed, by composing constants with addition, subtraction, and multiplication. Let us clarify a point regarding constants in particular. Normally, one would include constants which are relevant to the instance of SR at hand. For example, if the unknown phenomenon for which an SR model is sought is suspected to have sinusoidal components, it may be advisable to include multiples of π in P. Moreover, P can be set to contain special elements that represent probability distributions from which constants can be sampled (see the concept of *ephemeral random constant* described by Koza (1994); Poli et al. (2008)).
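To make Defs. 1 and 2 concrete, the sketch below (ours, not from the paper; the tuple encoding and all names are illustrative) builds part of the search space F for P = {+, −, ×, x1, x2, −1, +1} by repeated composition:

```python
import itertools

# Primitive set P = {+, -, *, x1, x2, -1, +1}: binary operations plus leaves.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
LEAVES = ["x1", "x2", -1, 1]

def evaluate(expr, x1, x2):
    """Evaluate an expression encoded as a leaf or a nested tuple (op, left, right)."""
    if expr == "x1":
        return x1
    if expr == "x2":
        return x2
    if isinstance(expr, (int, float)):
        return expr  # constant function
    op, left, right = expr
    return OPS[op](evaluate(left, x1, x2), evaluate(right, x1, x2))

def grow(exprs):
    """One round of function composition (Def. 1) applied to every pair of expressions."""
    return exprs + [(op, l, r) for op in OPS for l, r in itertools.product(exprs, repeat=2)]

# Two rounds of composition already contain, e.g., f(x) = x1*x2 + 1; iterating
# further yields (a subset of) the polynomials of arbitrary degree in x1 and x2.
F2 = grow(grow(list(LEAVES)))
f = ("+", ("*", "x1", "x2"), 1)
assert f in F2 and evaluate(f, x1=2.0, x2=3.0) == 7.0
```

Note that F grows combinatorially with the composition depth, which is precisely why exhaustive enumeration is only practical for very small expressions.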
We denote one such element by R and, e.g., R can be chosen to represent the uniform distribution between two numbers, or the normal distribution with a certain mean and variance. When an SR algorithm picks R from P to compose an SR model, a constant is sampled from the distribution identified by R. Here (more specifically, in Corollary 1) we will generously assume that any constant can be sampled directly from R, and therefore that there is no need for a real-valued optimizer to be part of the SR algorithm. For example, having P = {+(·, ·), −(·, ·), ×(·, ·), x1, x2, R} will mean that F contains all polynomials of arbitrary degree in x1 and x2. We can now proceed by providing a definition of the SR problem. While this definition can be extended to other domains, we focus on handling real-valued numbers, as the majority of the works takes place in this domain and subsets thereof.

Definition 3. Symbolic Regression (SR) problem. Given a set P of functions, a distance L : R × R → R^+_0, vectors xi = (x1,i, . . . , xd,i) ∈ R^d and scalars yi ∈ R, for i = 1, . . . , n, the SR problem asks for finding a function f⋆ such that:

$$f^{\star}\in\operatorname*{arg\,min}_{f\in{\mathcal{F}}}{\frac{1}{n}}\sum_{i=1}^{n}{\mathcal{L}}\left(y_{i},f(\mathbf{x}_{i})\right)\qquad(1)$$

where F is the search space that is defined by P.

We provide some remarks concerning the proposed definition of the SR problem. Firstly, let us map the objects provided in the definition to terms familiar to a machine learning audience. The pair (xi, yi) is normally what is referred to as observation, data point, *example*, or *sample*, where xj,i is the value of the jth feature or *variable* for the ith observation, and yi is the value of the label or *target variable* for the same observation. The set that contains the observations upon which L is computed, i.e., D = {(xi, yi)}_{i=1}^{n}, is called the *training set*.
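A minimal sketch of the objective in Def. 3 (Eq. (1)), with hypothetical toy data and hand-written candidate functions standing in for members of F (all names are ours, for illustration only):

```python
def empirical_risk(f, D, loss):
    """Mean loss of a candidate function f over the training set D (the objective in Eq. (1))."""
    return sum(loss(y, f(x)) for x, y in D) / len(D)

# Toy training set generated from y = 2*x1 + x2 (illustrative data only).
D = [((1.0, 1.0), 3.0), ((2.0, 0.0), 4.0), ((0.0, 5.0), 5.0)]
squared_error = lambda y, fx: (y - fx) ** 2

# SR asks for the minimizer over the whole search space F; here we only compare
# two fixed candidates, which any SR algorithm would rank the same way.
f1 = lambda x: x[0] + x[1]       # f1(x) = x1 + x2
f2 = lambda x: 2 * x[0] + x[1]   # f2(x) = 2*x1 + x2
assert empirical_risk(f2, D, squared_error) < empirical_risk(f1, D, squared_error)
```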
It is commonly assumed that the observations in D were drawn independently and identically distributed (i.i.d.) from an unknown probability distribution. Moreover, the distance L is called the loss. Losses need not be distances, but in SR they normally are. Popular choices in the literature are the absolute error loss and the squared error loss. Here, we consider losses of the form:

$${\mathcal{L}}(y_{i},f(\mathbf{x}_{i}))=|y_{i}-f(\mathbf{x}_{i})|^{r},\qquad(2)$$

with r ∈ N0: for r = 0 one gets the 0-1 loss; for r = 1 one gets the absolute error loss; and for r = 2 one gets the squared error loss. The minimization of the loss function across the observations in D makes SR an ERM problem. As is generally the case for learning, one actually desires f to generalize to new (also called *unseen*) observations, i.e., observations that come from the same underlying probability distribution but were not available in D. In other words, it is not sufficient that f⋆ is a best-possible function with respect to the training set, as the loss should remain minimal also for new observations that are not available to us. To this end, a common practice is to heuristically use a separate set of data (the *validation set*) to estimate the generalization to observations outside the training set. Another approach, which is often used together with and not as an alternative to adopting a validation set, is to perform *structural risk minimization*, i.e., to account for regularization terms such as λ × C(f), where λ ∈ R^+_0 controls the regularization strength and C : F → R is a function of the complexity of f. Typical goals of such regularization terms are improving generalization (by limiting effects akin to Runge's phenomenon (Fornberg & Zuev, 2007)) or, particularly for SR, improving the interpretability of f.
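The loss family of Eq. (2) can be written down directly; this small sketch (ours) makes the r = 0 corner case explicit, adopting the convention that an exact match costs 0:

```python
def loss(y, fx, r):
    """The loss family of Eq. (2): |y - f(x)|**r for r in {0, 1, 2, ...}.
    The r = 0 case is spelled out so an exact match costs 0 (Python evaluates 0.0**0 as 1.0)."""
    if r == 0:
        return 0.0 if y == fx else 1.0  # 0-1 loss
    return abs(y - fx) ** r

assert loss(3.0, 3.0, r=0) == 0.0  # exact prediction under the 0-1 loss
assert loss(3.0, 1.0, r=1) == 2.0  # absolute error loss
assert loss(3.0, 1.0, r=2) == 4.0  # squared error loss
```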
Implementations of C range from weighted counting of the number of primitives that constitute f (Ekart & Nemeth, 2001; Hein et al., 2018) to machine learning models trained from human feedback to predict f's interpretability (Virgolin et al., 2020a; 2021b). Here, for simplicity, we do not consider regularization and focus on ERM alone. Equivalently put, we consider λ = 0: note that this choice is not limiting, because this step can also be taken when constructing the proof of Theorem 1. We will briefly get back to how λ > 0 and C may be used to build interesting search spaces for the hardness of SR at the end of Sec. 4. Still, considering a "pure optimization" formulation (or ERM), as given in Eq. (1), can be considered a pre-requisite for being able to machine-learn accurate models from the data; in fact, it is commonplace for literature that concerns the hardness of learning to provide results with respect to the training set (see, e.g., (Feldman et al., 2012; Hu et al., 2019)). In a similar fashion, here we will consider the case of minimizing the empirical risk with respect to the training set D and show that this alone already poses a challenge for any SR algorithm. Here, we assume that computing f(x) and L(y, f(x)) (see Def. 3) can be done in polynomial time. Regarding L, our assumption is met for commonly-used losses such as the absolute and squared error ones. In fact, computing losses of such form takes O(n) operations, i.e., the runtime is linear in the number of observations. Regarding f, our assumption is met, e.g., for all functions that can be discovered by the SR algorithms currently in SRBench (La Cava et al., 2021); a notable exception of practical interest is recursive functions taking exponential time to compute (see, e.g., d'Ascoli et al. (2022)). For non-recursive functions, f can be implemented as a directed acyclic graph, where nodes represent the functions from P, and edges represent compositions.
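The graph evaluation just described can be sketched as follows (the encoding is ours, for illustration): a post-order traversal visits each of the ℓ nodes once per observation, giving O(ℓ × n) operations over n observations:

```python
import operator

def eval_dag(node, x, visits):
    """Post-order evaluation of f encoded as a tree of primitives; each of the ell
    nodes is visited exactly once per observation."""
    visits[0] += 1
    kind, payload, children = node
    if kind == "var":
        return x[payload]  # identity function for the (payload+1)-th variable
    if kind == "const":
        return payload     # constant function
    return payload(*(eval_dag(c, x, visits) for c in children))

# f(x) = (x1 + x2) * x1, encoded with ell = 5 primitives (two operations, three leaves).
f = ("op", operator.mul,
     [("op", operator.add, [("var", 0, None), ("var", 1, None)]),
      ("var", 0, None)])

observations = [(1.0, 2.0), (3.0, 4.0)]
visits = [0]
outputs = [eval_dag(f, x, visits) for x in observations]
assert outputs == [3.0, 21.0]
assert visits[0] == 5 * len(observations)  # O(ell * n) node visits in total
```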
To compute f(x), it suffices to visit each node of the graph for each observation, thus requiring O(ℓ × n) operations, where ℓ is the number of primitives in f. Fig. 1 shows an example of such a graph, specifically in the form of a *tree*, which is perhaps the most common way of encoding mathematical expressions in SR (see, e.g., the SR algorithms benchmarked by La Cava et al. (2021)). We conclude this section with the following important definition.

Definition 4. Decision version of the SR problem (SR-Dec) Given an SR instance and an $\epsilon \in \mathbb{R}^+_0$, SR-Dec outputs YES *if and only if:*

$$\exists f\in{\mathcal{F}}:{\frac{1}{n}}\sum_{i=1}^{n}{\mathcal{L}}\left(y_{i},f(\mathbf{x}_{i})\right)<\epsilon.\tag{3}$$

Essentially, Def. 4 is the problem of deciding whether there exists a function f in the search space such that its empirical risk is smaller than a chosen threshold ϵ.

## 4 The Result

We proceed directly by providing the main result of this paper.

Theorem 1. *The SR problem is NP-hard.*

Proof. Let us begin by stating that SR-Dec is in NP. Recall that the computations of f(x) and L(yi, f(xi)) take polynomial time (see Sec. 3). Of course, the check < ϵ takes O(1) time. Thus, if f is guessed by an oracle, then we can provide an answer to SR-Dec in polynomial time. We proceed by considering the unbounded subset sum problem (USSP). USSP is similar to the unbounded knapsack problem, where the same item can be put in the knapsack an arbitrary number of times, and the weight of an item corresponds exactly to the profit gained by including that item in the knapsack. The decision version of USSP, USSP-Dec, is defined as follows. Given j = 1, . . . , k (k items), $w_j \in \mathbb{N}$ (weight of that item), and $t \in \mathbb{N}$ (the target), USSP-Dec asks:

$$\exists\mathbf{m}:\sum_{j=1}^{k}w_{j}m_{j}=t?\tag{4}$$

where $m_j \in \mathbb{N}_0$ (multiplicity with which an item is picked). USSP-Dec is known to be NP-complete (Kellerer et al., 2004).
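For intuition, USSP-Dec itself can be decided by a standard pseudo-polynomial dynamic program over reachable sums (a sketch of ours, not from the paper). Since its O(t · k) runtime is polynomial in the *value* of t but exponential in its bit length, this does not contradict NP-completeness:

```python
def ussp_dec(weights, t):
    """Decide whether some multiset of `weights` (with repetition) sums to t."""
    reachable = [False] * (t + 1)
    reachable[0] = True  # the empty selection sums to 0
    for s in range(1, t + 1):
        reachable[s] = any(w <= s and reachable[s - w] for w in weights)
    return reachable[t]

print(ussp_dec([4, 6], 14))  # True: 4 + 4 + 6 = 14
print(ussp_dec([4, 6], 9))   # False: every sum of 4s and 6s is even
```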
To prove that SR-Dec is NP-complete, we show that any instance of USSP-Dec can be reduced to some instance of SR-Dec in polynomial time. To this end, we will restrict SR-Dec as follows: (1) We pick the set of primitives P to be $\mathcal{P} = \{+, x_1, \ldots, x_d\}$, with d = k; (2) We set ϵ = 1. Next, we craft D to have a single observation (n = 1) and d = k features. For the only observation in D (dropping the index for the observation number, since there is only one), we set $x_1 = w_1, x_2 = w_2, \ldots, x_k = w_k$, and y = t. In other words, we have set the search space F to contain only linear sums of the features in the data set D, i.e., functions of the form $f(\mathbf{x}) = \sum_{j=1}^{d} x_j m_j$. Importantly, $m_j \in \mathbb{N}_0$ and $x_j \in \mathbb{N}$, meaning that the co-domain of any f is $\mathbb{N}_0$. Consequently, the smallest non-zero loss that can be achieved is 1. The only functions f that can achieve an error smaller than ϵ = 1 are those that interpolate the observation exactly, i.e., f(x) = y. Then, the following holds:

$$\begin{aligned}
&(\textit{Eq. (3) with } \epsilon = 1) && \exists f\in\mathcal{F}:\mathcal{L}\left(y,f(\mathbf{x})\right)<1? &&(5)\\
&(\mathcal{L}(y,f(\mathbf{x}))<1 \iff f(\mathbf{x})=y) && \exists f\in\mathcal{F}:f(\mathbf{x})=y? &&(6)\\
&(\textit{Equivalence } y=t \textit{ due to } \mathcal{D}) && \exists f\in\mathcal{F}:f(\mathbf{x})=t? &&(7)\\
&(\textit{Expanding } \mathcal{F} \textit{ based on choice of } \mathcal{P}) && \exists f\in\Big\{\textstyle\sum_{j=1}^d x_j m_j:m_j\in\mathbb{N}_0\Big\}:f(\mathbf{x})=t? &&(8)\\
&(\textit{Equivalence } x_j=w_j,\, d=k \textit{ due to } \mathcal{D}) && \exists f\in\Big\{\textstyle\sum_{j=1}^k w_j m_j:m_j\in\mathbb{N}_0\Big\}:f(\mathbf{x})=t? &&(9)\\
&(\textit{Re-formulating in terms of } \mathbf{m}) && \exists\mathbf{m}:\textstyle\sum_{j=1}^k w_j m_j=t? &&(10)
\end{aligned}$$
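The reduction can be checked mechanically on a toy instance: build the restricted SR-Dec data set from a USSP-Dec instance and brute-force small multiplicities (an illustrative check only; the function names below are ours, and the brute force is of course not polynomial):

```python
from itertools import product

def reduce_ussp_to_sr(weights, t):
    """Build the restricted SR-Dec instance of the proof:
    one observation with features x_j = w_j, label y = t, and epsilon = 1."""
    return list(weights), t, 1

def sr_dec_bruteforce(x, y, eps, max_mult):
    """Search F = {sum_j x_j * m_j : m_j in N_0} for f with |y - f(x)| < eps,
    trying each multiplicity m_j up to max_mult."""
    for m in product(range(max_mult + 1), repeat=len(x)):
        f_x = sum(xj * mj for xj, mj in zip(x, m))
        if abs(y - f_x) < eps:
            return True
    return False

x, y, eps = reduce_ussp_to_sr([4, 6], 14)
print(sr_dec_bruteforce(x, y, eps, max_mult=4))  # True, matching USSP-Dec (4+4+6)
x, y, eps = reduce_ussp_to_sr([4, 6], 9)
print(sr_dec_bruteforce(x, y, eps, max_mult=4))  # False: 9 is unreachable
```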
In other words, there exist some instances of SR-Dec that can be re-formulated as USSP-Dec (cf. Eqs. (4) and (10)). Now, since assembling P as stated above takes linear time in k, picking ϵ = 1 takes O(1) time, and constructing D as stated above takes linear time in k, then any instance of USSP-Dec can be reduced to some instance of SR-Dec in polynomial time: SR-Dec is NP-complete. We conclude the proof with a *reductio ad absurdum*. Let us assume that there exists an algorithm to compute an optimal f⋆ for the SR problem (Def. 3) in polynomial time. An optimal f⋆ is the one for which the loss is minimal, which means that using f⋆ in Eq. (3) allows us to immediately answer SR-Dec. Since verifying that L(y, f⋆(x)) < ϵ takes polynomial time, we conclude that if the SR problem can be solved in polynomial time, then we can also solve SR-Dec in polynomial time. Therefore, the SR problem is NP-hard.

We remark that, in the proof of Theorem 1, we construct P so as not to contain R (nor any constant). Some readers might disagree with this quite broad definition of SR. In fact, some SR algorithms heavily rely on the presence of constants as well as on their optimization (e.g., FFX by McConaghy (2011) and *FEAT* by La Cava et al. (2018)). Not allowing for arbitrary constants to be present in the functions of the search space might be seen as a violation of the very definition of SR. In other words, some might think that P must contain R. We next show that SR remains NP-hard in this special case.

Corollary 1. *The SR problem is NP-hard even when* P *must include* R.

Proof. We follow a construction similar to that of the proof of Theorem 1. This time, we set P to additionally contain R, i.e., $\mathcal{P} = \{+, x_1, x_2, \ldots, x_d, R\}$, with d = k. This means that the function space F now contains functions of the form $f(\mathbf{x}) = c + \sum_{j=1}^{d} x_j m_j$ with $m_j \in \mathbb{N}_0$ and $c \in \mathbb{R}$ (sampled from R). As to D, we will now include two observations instead of a single one.
The first observation is set as before, i.e., $x_{1,1} = w_1, x_{2,1} = w_2, \ldots, x_{k,1} = w_k$ (d = k) and $y_1 = t$. For the second observation, we set $x_{1,2} = 0, x_{2,2} = 0, \ldots, x_{k,2} = 0$ and $y_2 = 0$, i.e., the value of all features and of the label are set to zero. It now remains to determine how we should set ϵ. Because of our construction of D, an f for which SR-Dec answered YES in the situation considered in Theorem 1, i.e., with c = 0, would still make SR-Dec answer YES for a c with sufficiently small magnitude. Note that those functions interpolate D exactly if c = 0. Thus, the magnitude of c must be such that $|c|^r < \epsilon$, with r the degree of the loss (Eq. (2)), because the corresponding empirical risk for those functions is $\frac{1}{2}|c|^r + \frac{1}{2}|c|^r = |c|^r$, where the two summands on the left side of the equation are respective to the two observations in D. Now the question becomes whether using c ≠ 0 allows SR-Dec to answer YES for more functions than those that would interpolate D when c = 0. If that were true, then we could no longer apply the strategy used in the proof of Theorem 1 to reduce from USSP-Dec. To be able to still use that strategy, we will now show that there exist instances of SR-Dec, in particular obtained by picking a different ϵ, such that even if P *must* include R, only functions with c = 0 are candidates for a YES answer. This allows us to reduce once again from USSP-Dec (by appropriately picking ϵ), because $f(\mathbf{x}) = 0 + \sum_{j=1}^{d} x_j m_j = \sum_{j=1}^{d} x_j m_j$, as in Eq. (8). For c ≠ 0 to allow SR-Dec to answer YES for more functions than those for when c = 0, c must contribute to lowering the empirical risk. For the second observation, c can only increase the risk, because the loss is:

$${\cal L}(y_{2},f({\bf x}_{2}))=|y_{2}-f({\bf x}_{2})|^{r}=\left|0-\left(c+\sum_{j=1}^{d}0\times m_{j}\right)\right|^{r}=|c|^{r}.\tag{11}$$

For the first observation, however, using c ≠ 0 can lower the respective loss and thus contribute to lowering the empirical risk.
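The trade-off between the two losses can be checked numerically: over the two-observation data set, the empirical risk of a borderline candidate is $\frac{1}{2}(|1-c|^r + |c|^r)$, and for the squared error loss (r = 2) its minimum over c is $2^{-r} = 0.25$, attained at $c = \frac{1}{2}$ (a quick grid check of ours; the helper name is illustrative):

```python
import numpy as np

def risk(c, r):
    """Empirical risk of a borderline candidate over the two observations:
    loss |1 - c|^r on the first one, |0 - c|^r on the second one."""
    return 0.5 * (np.abs(1 - c) ** r + np.abs(c) ** r)

r = 2
cs = np.linspace(-1.0, 2.0, 30001)  # grid with step 1e-4, containing c = 0.5
risks = risk(cs, r)
best = cs[np.argmin(risks)]

print(round(float(best), 3))         # 0.5: the minimizing constant
print(round(float(risks.min()), 6))  # 0.25 = 2**-r: no c drives the risk lower
```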
In particular, consider that the first functions that are candidates to receive a YES answer thanks to c ≠ 0 are those that had a loss of 1 when c = 0 (see the proof of Theorem 1, where we set ϵ = 1). If we can have SR-Dec answer NO to these functions, then it will necessarily answer NO also to all other functions that have a loss larger than 1 on the first observation when c = 0. We thus proceed by considering that the smallest non-zero loss that can be obtained on the first observation for c ≠ 0 is $|1-c|^r$. This leads to the following empirical risk over our D:

$$\frac{1}{2}\sum_{i=1}^{2}{\cal L}(y_{i},f({\bf x}_{i}))=\frac{1}{2}\left(|1-c|^{r}+|0-c|^{r}\right).\tag{12}$$

For any integer r ≥ 0, the minimum is $\frac{1}{2} \times \left(\frac{1}{2}\right)^{r-1} = 2^{-r}$: this is easy to verify for r = 0 and r = 1, while for r ≥ 2 Eq. (12) describes a U-shaped curve that is symmetric around, and has its minimum at, $c = \frac{1}{2}$. Therefore, it suffices to pick $\epsilon = 2^{-r}$: even if c is optimal, one cannot lower the empirical risk below $2^{-r}$ for any function whose loss (on the first observation) is not zero. In other words, imposing $\epsilon = 2^{-r}$ ensures that SR-Dec will answer YES iff SR-Dec answers YES also for c = 0. This means that one can set c = 0 and proceed with a reduction from USSP-Dec as in the proof of Theorem 1.

Finally, we provide the following remark.

Remark. *One can consider the structural risk minimization setting whereby the following minimization is sought:*

$${\frac{1}{n}}\sum_{i=1}^{n}{\mathcal{L}}(y_{i},f(\mathbf{x}_{i}))+\lambda C(f),\ \ \mathrm{with}\ \lambda>0.\tag{13}$$

*Then, SR-Dec can be restricted to automatically answer NO for any* f *that does not satisfy certain conditions, such as linearity. For example, one can pick* $\mathcal{P} = \{+, \times, x_1, \ldots, x_d, R\}$ *to search in the space of arbitrary polynomials, and pick* C *such that* C(f) = ∞ if deg f ≥ k else 0, *for an arbitrary integer* k ≥ 0.
Combining this with the use of the 0-1 loss function makes it possible to reduce SR-Dec from existing results in the literature that consider linear classifiers (Feldman et al., 2012) or coding polynomials in finite fields (Guruswami & Vardy, 2005).

## 5 Conclusion

Our main contribution here was to prove that symbolic regression (SR), i.e., the problem of discovering an accurate model of data in the form of a mathematical expression, is in fact NP-hard. In particular, we have provided formal definitions of what SR entails, and showed how the decision version of the unbounded subset sum problem can be reduced to a decision version of the SR problem. Beyond the general definition of SR we considered, we have additionally shown that SR remains NP-hard even when the set of primitives must contain distributions from which constants can be sampled, and provided a sketch of how an alternative proof can be constructed by using the 0-1 loss and previous results from the literature. Having settled the matter on the hardness of SR, we hope that this note inspires further works on lower and upper bounds of different SR variants. In fact, while we have shown that hardness holds in principle (by picking a search space suitable for reduction from USSP-Dec), there might exist specific variants of SR (e.g., different search spaces, specific regularization terms, or specific types of data) that are more commonly encountered in practical applications. For such more specific variants, proving hardness or designing polynomial-time algorithms would complement the SR status quo, which mostly focuses on heuristic algorithmic design. Ultimately, theoretical advances may greatly help advance our knowledge of what is possible with SR.

## References

117th US Congress. Algorithmic accountability act, 2022. URL https://www.congress.gov/bill/117th-congress/house-bill/6580/.

Arturs Backurs, Piotr Indyk, and Ludwig Schmidt.
On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks. *Advances in Neural Information Processing Systems*, 30, 2017.

Avrim L Blum and Ronald L Rivest. Training a 3-node neural network is NP-complete. *Neural Networks*, 5(1):117–127, 1992.

Bogdan Burlacu, Gabriel Kronberger, and Michael Kommenda. Operon C++: an efficient genetic programming framework for symbolic regression. In *Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion*, pp. 1562–1570, 2020.

Tristan Cazenave. Monte-Carlo expression discovery. *International Journal on Artificial Intelligence Tools*, 22(1):1250035, 2013.

A Cozad. *Data- and theory-driven techniques for surrogate-based optimization*. PhD thesis, Department of Chemical Engineering, Carnegie Mellon University, 2014.

Alison Cozad and Nikolaos V Sahinidis. A global MINLP approach to symbolic regression. *Mathematical Programming*, 170(1):97–119, 2018.

Nichael Lynn Cramer. A representation for the adaptive generation of simple sequential programs. In *Proceedings of the First International Conference on Genetic Algorithms*, pp. 183–187, 1985.

Stéphane d'Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton. Deep symbolic regression for recurrent sequences. *arXiv preprint arXiv:2201.04600*, 2022.

Erik Derner, Jiří Kubalík, Nicola Ancona, and Robert Babuška. Constructing parsimonious analytic models for dynamic systems via symbolic regression. *Applied Soft Computing*, 94:106432, 2020.

Aniko Ekart and Sandor Z. Nemeth. Selection based on the Pareto nondomination criterion for controlling code growth in genetic programming. *Genetic Programming and Evolvable Machines*, 2(1):61–73, 2001.

European Commission. Artificial intelligence act, 2021. URL https://artificialintelligenceact.eu/.

Brian C Falkenhainer and Ryszard S Michalski. Integrating quantitative and qualitative discovery: The ABACUS system. *Machine Learning*, 1(4):367–401, 1986.
Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, and Yi Wu. Agnostic learning of monomials by halfspaces is hard. *SIAM Journal on Computing*, 41(6):1558–1590, 2012.

Bengt Fornberg and Julia Zuev. The Runge phenomenon and spatially variable shape parameters in RBF interpolation. *Computers & Mathematics with Applications*, 54(3):379–398, 2007.

Donald Gerwin. Information processing, data inferences, and scientific generalization. *Behavioral Science*, 19(5):314–325, 1974.

Venkatesan Guruswami and Alexander Vardy. Maximum-likelihood decoding of Reed-Solomon codes is NP-hard. *IEEE Transactions on Information Theory*, 51(7):2249–2256, 2005.

Daniel Hein, Steffen Udluft, and Thomas A Runkler. Interpretable policies for reinforcement learning by genetic programming. *Engineering Applications of Artificial Intelligence*, 76:158–169, 2018.

Alberto Hernandez, Adarsh Balasubramanian, Fenglin Yuan, Simon AM Mason, and Tim Mueller. Fast, accurate, and transferable many-body interatomic potentials by symbolic regression. *npj Computational Materials*, 5(1):1–11, 2019.

Joseph F Hicklin. *Application of the genetic algorithm to automatic program generation*. PhD thesis, University of Idaho, 1986.

Xiyang Hu, Cynthia Rudin, and Margo Seltzer. Optimal sparse decision trees. *Advances in Neural Information Processing Systems*, 32, 2019.

Anna Jobin, Marcello Ienca, and Effy Vayena. The global landscape of AI ethics guidelines. *Nature Machine Intelligence*, 1(9):389–399, 2019.

Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, and François Charton. End-to-end symbolic regression with transformers. *arXiv preprint arXiv:2204.10532*, 2022.

Lukas Kammerer, Gabriel Kronberger, Bogdan Burlacu, Stephan M. Winkler, Michael Kommenda, and Michael Affenzeller. Symbolic regression by exhaustive search: Reducing the search space using syntactical constraints and efficient semantic structure deduplication, pp. 79–99. Springer International Publishing, 2020.
Hans Kellerer, Ulrich Pferschy, and David Pisinger. Introduction to NP-Completeness of knapsack problems. In *Knapsack Problems*, pp. 483–493. Springer, 2004. John R Koza. *Genetic programming: A paradigm for genetically breeding populations of computer programs* to solve problems, volume 34. Stanford University, 1990. John R Koza. Genetic programming as a means for programming computers by natural selection. Statistics and Computing, 4(2):87–112, 1994. John R Koza et al. Hierarchical genetic algorithms operating on populations of computer programs. In International Joint Conference on Artificial Intelligence, volume 89, pp. 768–774, 1989. Gabriel Kronberger, Michael Kommenda, Andreas Promberger, and Falk Nickel. Predicting friction system performance with symbolic regression and genetic programming with factor variables. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1278–1285, 2018. William La Cava, Tilak Raj Singh, James Taggart, Srinivas Suri, and Jason H Moore. Learning concise representations for regression by evolving networks of trees. In *International Conference on Learning* Representations, 2018. William La Cava, Patryk Orzechowski, Bogdan Burlacu, Fabricio Olivetti de Franca, Marco Virgolin, Ying Jin, Michael Kommenda, and Jason H Moore. Contemporary symbolic regression methods and their relative performance. In *Advances in Neural Information Processing Systems - Datasets and Benchmarks Track*, 2021. Pat Langley. Data-driven discovery of physical laws. *Cognitive Science*, 5(1):31–54, 1981. Pablo Lemos, Niall Jeffrey, Miles Cranmer, Shirley Ho, and Peter Battaglia. Rediscovering orbital mechanics with machine learning. *arXiv preprint arXiv:2202.02306*, 2022. Haoran Li, Yang Weng, and Hanghang Tong. CoNSoLe: Convex neural symbolic learning. *arXiv preprint* arXiv:2206.00257, 2022. Qiang Lu, Jun Ren, and Zhiguang Wang. Using genetic programming with prior formula knowledge to solve symbolic regression problem. 
*Computational Intelligence and Neuroscience*, 2016.

Marcus Märtens and Dario Izzo. Symbolic regression for space applications: Differentiable cartesian genetic programming powered by multi-objective memetic algorithms. *arXiv preprint arXiv:2206.06213*, 2022.

Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Tatsunori Taniai, and Yoshitaka Ushiku. Rethinking symbolic regression datasets and benchmarks for scientific discovery. *arXiv preprint arXiv:2206.10540*, 2022.

Trent McConaghy. FFX: Fast, scalable, deterministic symbolic regression technology. In *Genetic Programming Theory and Practice IX*, pp. 235–260. Springer, 2011.

Robert I McKay, Nguyen Xuan Hoai, Peter Alexander Whigham, Yin Shan, and Michael O'Neill. Grammar-based genetic programming: a survey. *Genetic Programming and Evolvable Machines*, 11(3):365–396, 2010.

David J Montana. Strongly typed genetic programming. *Evolutionary Computation*, 3(2):199–230, 1995.

Fabrício Olivetti de França. A greedy search tree heuristic for symbolic regression. *Information Sciences*, 442-443:18–32, 2018. ISSN 0020-0255.

Michael O'Neill and Conor Ryan. Grammatical evolution. *IEEE Transactions on Evolutionary Computation*, 5(4):349–358, 2001.

Clemens Otte. Safe and interpretable machine learning: A methodological review. *Computational Intelligence in Intelligent Data Analysis*, pp. 111–122, 2013.

Brenden K Petersen, Mikel Landajuela Larma, T Nathan Mundhenk, Claudio P Santiago, Soo K Kim, and Joanne T Kim. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. *arXiv preprint arXiv:1912.04871*, 2019.

Brenden K Petersen, Mikel Landajuela Larma, Terrell N Mundhenk, Claudio Prata Santiago, Soo Kyung Kim, and Joanne Taery Kim. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. In *International Conference on Learning Representations*, 2020.

Ricardo Poli, William B Langdon, and Nicholas F McPhee.
*A Field Guide to Genetic Programming*. Lulu Press, 2008. Daniel Rivero, Enrique Fernandez-Blanco, and Alejandro Pazos. DoME: A deterministic technique for equation development and symbolic regression. *Expert Systems with Applications*, 198:116712, 2022. Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. *Science*, 324 (5923):81–85, 2009. Fangzheng Sun, Yang Liu, Jian-Xun Wang, and Hao Sun. Symbolic physics learner: Discovering governing equations via monte carlo tree search. *arXiv preprint arXiv:2205.13134*, 2022. Silviu-Marian Udrescu and Max Tegmark. AI Feynman: A physics-inspired method for symbolic regression. Science Advances, 6(16):eaay2631, 2020. Vladimir Vapnik. *The nature of statistical learning theory*. Springer, 1999. Martin Vastl, Jonáš Kulhánek, Jirí Kubalík, Erik Derner, and Robert Babuška. SymFormer: End-to-end symbolic regression using transformer-based architecture. *arXiv preprint arXiv:2205.15764*, 2022. Sergiy Verstyuk and Michael R. Douglas. Machine learning the gravity equation for international trade. Available at SSRN 4053795, 2022. Marco Virgolin, Andrea De Lorenzo, Eric Medvet, and Francesca Randone. Learning a formula of interpretability to learn interpretable formulas. In *International Conference on Parallel Problem Solving from* Nature, pp. 79–93. Springer, 2020a. Marco Virgolin, Ziyuan Wang, Tanja Alderliesten, and Peter A N Bosman. Machine learning for the prediction of pseudorealistic pediatric abdominal phantoms for radiation dose reconstruction. *Journal of Medical* Imaging, 7(4):046501, 2020b. Marco Virgolin, Tanja Alderliesten, Cees Witteveen, and Peter A N Bosman. Improving model-based genetic programming for symbolic regression of small expressions. *Evolutionary Computation*, 29(2):211–237, 2021a. Marco Virgolin, Andrea De Lorenzo, Francesca Randone, Eric Medvet, and Mattias Wahde. Model learning with personalized interpretability estimation (ML-PIE). 
In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1355–1364, 2021b. Tony Worm and Kenneth Chiu. Prioritized grammar enumeration: Symbolic regression by dynamic programming. In *Proceedings of the Genetic and Evolutionary Computation Conference*, pp. 1021–1028, 2013. Ivan Zelinka, Zuzana Oplatkova, and Lars Nolle. Analytic programming–symbolic regression by means of arbitrary evolutionary algorithms. *International Journal of Simulation: Systems, Science and Technology*, 6(9):44–56, 2005. Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
Review 1: Summary: The paper studies the complexity of the problem of symbolic regression from data to learn a mathematical expression and shows that this problem is NP hard. While it has been widely believed that this problem is NP-hard (Lu et al 2016, Peterson et al 2019), the paper claims to be the first to provide a formal proof for the NP-hardness. Strengths and Weaknesses: Strengths 1. The paper is well-written. The discussion of related work is comprehensive. A strong case is built for the need to learn interpretable symbolic models that can be analyzed for safety. The paper also identifies papers where the NP-hardness of symbolic regression has been hinted or mentioned before. A detailed discussion on symbolic regression, SRbench and deep learning based SR is presented. Weaknesses 1. The reviewer was a bit surprised that no prior work has proven symbolic regression to be NP-hard. It appears at the end of Page 3 and start of Page 4 that the SR problem definition (which would ideally differentiate between training on a set of data and generalizing to a validation set) is replaced with just the minimization of the loss function. At this point, prior work such as https://arxiv.org/abs/cs/0405005 become an instance of (1) and the result of say, "Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard" ("computing the degree k polynomial that disagrees with the minimum number of input point") would imply the main result in the paper by suitably defining the L function. So, the reviewer is not convinced the main result in the paper is new. 2. The reduction from unbounded subset sum problem to SR is interesting. But whether epsilon is part of the problem definition of SR or part of the solution is debatable. One can argue that the goal is to find f that minimizes epsilon, but the problem definition itself does not set the value of epsilon (something that is true in practice). 
Under this argument, it would not be okay to pick epsilon to be 0 when translating USSP-Dec to SR. 3. See some specific requests on presentation in "Requested Changes". Requested Changes: Definition 2 can be better worded: "formed by compositions of elements of P and their compositions" - composition is naturally treated as being recursive and unless, one wants to restrict composition to just two (or fixed number of) levels - there is no need to mention compositions twice. Page 4 anyway talks about restrictions on the space of f (in particular, rules out recursion) later. In Definition 3, is P including the ephemeral random constant / distribution representing element R or is it just the set of functions and variables. If it is just the later (as the wording of Definition 3 suggests), the discussion of R before Definition 3 appears to be a distraction that can be eliminated from discussion. Broader Impact Concerns: There are no broader impact concern for the paper. The authors are advised to add a broader impact statement building on their argument about the benefits of symbolic model in the introduction. But even without this statement, the broader impact of studying the complexity of symbolic regression is obvious. ================================================== Review 2: Summary: The authors prove that symbolic regression (SR) is NP-hard by reduction to the unbounded subset sum problem. The reduction holds also for cases in which constants (and distributions) appear in the search space. Strengths and Weaknesses: + Well written, accessible even to non-expert readers. The proofs are easy to follow. + Good coverage of the SR literature. + The mathematical setup and the proofs themselves seem sound to me. + The contribution seems to be novel. - Missing discussion of significance and expected impact of this result. The authors mention that the paper settles a well-known conjecture, which is true. 
In a sense, the paper serves to retroactively justify the usage of genetic algorithm and other heuristic search procedures for this problem. However, the significance of doing so is a bit unclear. A more detailed discussion of the limits of this result and of its dependency on the choice of search space would have helped to give this result more depth. - Lacks connection to classical hardness results in statistical learning. Specifically, considering that SR is cast essentially as an empirical risk minimization problem, I was expecting the authors to explore the connection between their results and the well-known hardness of ERM. Could ERM be reduced to SR? Requested Changes: - Highlight the significance of your result by briefly discussing what consequences you expect it to have / what benefits you expect it to bring to research on SR. - Briefly discuss how the two theorems depend on the choice of search space. The reduction seems to show that allowing for just sum, product, and equality is enough to trigger NP-hardness, but it doesn't say much for SR cases in which these operations are not allowed. Please briefly clarify this point. - Briefly discuss links to classical hardness results in ML. - p 2: "Early forms of GP where proposed" -> were. - p 3: Search space of SR: The definition depends on the notion of composition between functions, which is defined, and on that of composition between functions and variables, which is not. Please clarify. - p 4: "i.e., distance" -> a distance Broader Impact Concerns: This paper poses no ethical concerns. ================================================== Review 3: Summary: Symbolic regression is one of the central techniques for interpretable and verifiable ML in many applications. So far, using evolutionary algorithms (EA) is one of the most successful approaches. However, with the black-box view of EAs, there comes the question of whether a polynomial algorithm could exist to find an optimal solution. 
The authors of the paper at hand show that by reduction of the unbounded subset sum problem, the symbolic regression problem is NP-hard and thus, it is unlikely that an optimal solution can be found efficiently. Strengths and Weaknesses: ### Strengths * It is an important result in showing that SR is NP-hard * The paper is very well written and easy to follow * The reduction proof seems to be correct ### Weaknesses * Two simplifications were assumed (i.e., no regularization and no recursive functions). However, I agree with the authors that this does not diminish the importance of their result. Requested Changes: A bit nitpicking on the wording: "this is already problematic for any SR algorithms". I wouldn't say that this is problematic because SR algorithms are nevertheless fairly successful. Maybe let's say that it already poses a challenge to do this on the training set efficiently. Is it guaranteed that there is a unique solution to Equation 1? If not, I wonder whether it should be f^* \in \argmin since the task would not to find all solutions, but only one of them. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept as is Comment: The technical tools used in the proof do not seem especially complex or novel, and the conclusion is also not surprising. That said, the reviewers and I feel that a formal proof of a key fact about symbolic regression is of inherent value. The paper is also well-written, and the discussion of related work is quite substantive. The authors also did a good job of revising the paper based on the reviewers' comments. Given all this, I am recommending acceptance as is. ==================================================
# TSCMamba: Mamba Meets Multi-View Learning for Time Series Classification

Anonymous authors Paper under double-blind review

## Abstract

Multivariate time series classification (TSC) is critical for various applications in fields such as healthcare and finance. While various approaches for TSC have been explored, important properties of time series, such as shift equivariance and inversion invariance, are largely underexplored by existing works. To fill this gap, we propose a novel multi-view approach to capture patterns with properties like shift equivariance. Our method integrates diverse features, including spectral, temporal, local, and global features, to obtain rich, complementary contexts for TSC. We use continuous wavelet transform to capture time-frequency features that remain consistent even when the input is shifted in time. These features are fused with temporal convolutional or multilayer perceptron features to provide complex local and global contextual information. We utilize the Mamba state space model for efficient and scalable sequence modeling and capturing long-range dependencies in time series. Moreover, we introduce a new scanning scheme for Mamba, called tango scanning, to effectively model sequence relationships and leverage inversion invariance, thereby enhancing our model's generalization and robustness. Experiments on two sets of benchmark datasets (10 datasets each) demonstrate our approach's effectiveness, achieving average accuracy improvements of 4.01-6.45% and 8.77% respectively over leading TSC models such as TimesNet and TSLANet.

## 1 Introduction

Time series classification (TSC) is a fundamental task in diverse fields abundant with time series data such as healthcare, weather forecasting, and finance. With the advancement of sensing technologies, multivariate time series (MTS) data have become ubiquitous, and thus TSC over MTS has attracted increasing research attention.
Various approaches for TSC have been explored, including statistical, signal processing, and machine or deep learning approaches in both time- and frequency domains. Despite intensive studies and rapid progress, important properties of many time series, such as shift equivariance and inversion invariance, still remain underexplored in TSC. Equivariance refers to a property where transformations applied to the input lead to predictable and corresponding transformations in the output. In the case of shift equivariance for time series data, this means that when we apply a shift transformation to the input time series, the output of our model (such as detected features or patterns) undergoes a corresponding shift. Shift equivariance property is crucial for TSC as it allows pattern recognition regardless of exact temporal positions, offering benefits such as: - Resilience to temporal misalignments between different instances of the same class, which is common in real-world data. - Creation of more consistent features across samples when dealing with MTS of varying lengths, such as when training and test examples have different durations. - Improved generalization while retaining information about temporal relationships, as the classifier can learn patterns that are consistent in their relative positions across different time shifts. These benefits collectively enhance the robustness and flexibility of TSC models, making them more adaptable to the diverse and often noisy nature of real-world time series data. Existing machine learning approaches for TSC often rely on convolution-based methods or neural networks (CNNs) to endow shift equivariance for the extracted features (LeCun et al., 1989). Many models have been designed to exploit this property for visual learning tasks with 2D or 3D data (Thomas et al., 2018; Fuchs et al., 2020). 
For TSC, several convolution-based methods (Dempster et al., 2020; Franceschi et al., 2019; Ismail Fawaz et al., 2019) have implicitly utilized this property. However, they only employ temporal features by applying convolutions in the time-domain. In addition to time-based features, frequency-based features can help identify important patterns in the data's spectral composition. Disentangling time and frequency components has been shown to be critical for time series learning (Zhang et al., 2022b; Eldele et al., 2024). In particular, spectral features have been extracted using discrete Fourier transform (DFT) (Eldele et al., 2024) and discrete wavelet transform (DWT) (Chaovalit et al., 2011; Ouyang et al., 2021; Zhao & Zhang, 2005). However, neither DFT nor DWT is shift equivariant - DFT lacks this property (Oppenheim, 1999), while DWT's discrete nature and downsampling (decimation) step in its computation also prevent shift equivariance (Mallat, 2008; Percival & Walden, 2000). Consequently, a small shift in the input signal can lead to significantly different coefficients or features for DFT and DWT. Continuous wavelet transforms (CWT) with real-valued mother wavelets possess shift equivariance (Mallat, 2008). CWT coefficients offer a localized time-frequency representation of the signal, with scale-dependent locality (Mallat, 2008). This means they capture more localized features at higher frequencies and broader features at lower frequencies, while still maintaining overall locality compared to global transforms like DFT. While CWT has been used for TSC (Wang et al., 2020), global patterns of MTS data have been insufficiently considered. MTS data often contain salient global patterns, such as trends, seasonality, periodicity, cycles, and long-term dependencies. Lacking the ability to capture shift-equivariant global features can lead to a loss of discriminative information for TSC.
These considerations highlight that many existing approaches cannot fully exploit shift equivariance of features or patterns for TSC. While some CNN or CWT-based methods can capture shift-equivariant features, they often lack the ability to effectively utilize spectral or global features. A clear need arises for an effective approach that can leverage shift-equivariant local and global features in both time and frequency domains to enhance TSC performance. In this paper, we propose a novel approach for TSC that effectively leverages time shift-equivariant features and patterns from both time and frequency domains. The shift-equivariant spectral features are derived from the time-frequency representations of CWT with a real-valued mother wavelet. To remedy CWT's limitation regarding the locality of resulting features, we leverage convolutional kernels to extract shift-equivariant temporal features. While convolutional kernels excel at capturing temporal dependencies between input features or interdependencies between channels while preserving shift equivariance, they typically have limited lengths, resulting in a limited receptive field that extracts temporally localized contents (Bengio et al., 2013). To enrich the expressiveness of CNN-based features, we adopt the kernel-based feature transformation ROCKET (Dempster et al., 2020). ROCKET uses random kernels with random lengths, potentially extracting a wide spectrum of temporal features, ranging from highly local to global ones. However, many TSC tasks involve MTS data with characteristics or patterns spanning the entire length or a significant portion of the time series, suggesting that globally discriminative features or global interactions of features can be beneficial. While ROCKET may capture global features to a certain extent due to its use of kernels with random lengths, it tends to focus more on local features with varying degrees of locality.
To enhance the extraction of global features, we incorporate fully connected MLPs. These can capture temporally global patterns with their wide receptive field, although they may not be well-suited for identifying temporally local patterns or temporal dependencies due to their treatment of each input feature independently. Thus, we use MLPs to complement ROCKET's approach, strengthening the extraction of global features. To avoid significantly increasing the size of extracted features, we propose a switch mechanism that selects between CNN-based (primarily local) features and MLP-based global patterns. This mechanism determines which type of temporal features - local or global - is more discriminative for a given input and integrates it with the CWT-domain features. Our experiments demonstrate that the switch mechanism effectively captures the most salient characteristics of the input data, whether they are local or global in nature. The time-frequency representations from CWT and temporal features from ROCKET transformation, or temporal features from global MLP, will be leveraged jointly to exploit the MTS characteristics, particularly shift equivariance, across domains. These different types of features provide various perspectives on the MTS data. We combine these perspectives with an approach known as multi-view learning, which has been shown to improve model performance and reliability, e.g., in image classification and clustering (Peng et al., 2024). This provides comprehensive, enriched multi-view contexts to the subsequent inference, thus enhancing TSC performance. In addition, we introduce inversion invariance, a new concept in TSC where a time series' features or patterns are equally useful for classification when read in both forward and backward directions. This property is particularly relevant for data where time direction is not inherently meaningful, such as ECG patterns, climate data, or rotational data with arbitrary start points. 
We posit that inversion invariance can generally enhance TSC performance for the following reasons:

- Using inversion invariance for TSC can effectively double the amount of input data, as both forward and backward readings will be used to train the same model. This increase in training examples is a new form of data augmentation, which leads to better generalization, reducing potential overfitting for TSC.
- Capturing inversion invariant patterns may help enhance the model's robustness to noise or disturbances. MTS data is often noisy (Kang et al., 2014), which can mask underlying signals and affect algorithm robustness. By identifying patterns meaningful in both directions, the TSC model may focus more on intrinsic patterns while being less affected by (potentially direction-specific) noise.

The inversion invariance property may improve generalization and robustness, especially in cases where similarly important patterns can manifest in both time directions. To incorporate this property into our model, we will leverage a new scanning scheme called tango scanning, which we will introduce below. Our empirical results based on extensive experiments on various MTS datasets testify to the usefulness of this property and thus support this postulate, though a theoretical certificate for it remains a future line of research. To facilitate final classification, we employ Mamba (Gu & Dao, 2023), a state-of-the-art (SOTA) model based on state-space models (SSMs). Like recurrent neural networks, SSMs use state variables to represent the system's internal condition and its evolution over time (Gu et al., 2021). Mamba introduces selective state spaces, a mechanism that updates only a subset of state dimensions based on each input. This allows Mamba to focus on the most relevant information, efficiently process long sequences, and capture long-range dependencies.
Mamba-based models have demonstrated competitive performance on various tasks, including language modeling (Gu & Dao, 2023; Dao & Gu, 2024), time series forecasting (Ahamed & Cheng, 2024b), DNA sequence modeling (Gu & Dao, 2023), tabular data learning (Ahamed & Cheng, 2024a), and audio generation (Shams et al., 2024). Unlike popular Transformers, which have quadratic time complexity in sequence length, Mamba achieves linear time complexity. This makes it more suitable for processing long sequences and scaling to larger datasets. By using Mamba, our TSC model is efficient in training and inference, with reduced computational costs and memory requirements compared to existing SOTA models. We introduce a novel sequence scanning scheme, tango scanning, for inputting the original sequence and the reversed sequence into the same Mamba block and fusing the output. This scheme uses essentially the same memory footprint but demonstrates higher accuracy than vanilla Mamba scanning. Our tango scanning scheme differs from existing bi-directional Mamba implementations in the following ways: 1) Compared to (Wang et al., 2024; Behrouz & Hashemi, 2024), our approach uses one vanilla Mamba block for both directions, while theirs use two separate blocks. 2) Compared to BiMamba (Schiff et al., 2024), we use a single Mamba block without weight ties, and only one reversal operation, whereas BiMamba uses two blocks with partial weight ties and two reversal operations. 3) Compared to MambaDNA (Schiff et al., 2024), we use a single reversal operation and one Mamba block, while MambaDNA uses two "reverse complement" operations and two Mamba blocks with weight ties where the "reverse complement" operation is specific to deoxyribonucleic acid (DNA) sequences to reflect the complementary nucleotides in the DNA base pairs. 
Through extensive experiments, we demonstrate that our multi-view Mamba-based approach outperforms or matches the performance of existing SOTA models, typically with a small fraction of computational requirements and reduced memory usage. In summary, the contributions of this paper include:

- Building a novel multi-view approach for TSC: Our approach seamlessly integrates frequency- and time-domain features to exploit shift-equivariant patterns and provide complementary, discriminative contexts for TSC. It also employs a gating scheme to fuse spectral features with local or global time-domain features, effectively leveraging patterns characterizing the MTS classes.
- Adapting the Mamba state-space model for sequence modeling and TSC: Our model can capture long-term dependencies within the MTS with linear efficiency and scalability.
- Introduction of inversion invariance for TSC: This includes the new concept of inversion invariance for MTS classification. It also includes building an innovative Mamba-based "tango scanning" scheme to identify inversion invariant features or patterns. Our proposed tango scanning demonstrates improved effectiveness in modeling inter-token relationships in the sequence compared to the vanilla Mamba block or bi-directional Mamba for TSC.
- Extensive experimental validation of the proposed approach: Our model shows superior performance over various existing SOTA models on two sets of standard benchmarking datasets (10 datasets each), achieving average accuracy improvements of 4.01-6.45% and 8.77% respectively over leading TSC models among 20 SOTA baselines.

These contributions are expected to advance real-world TSC applications in research and everyday life. The following sections will briefly review related works, present our approach in detail, and demonstrate extensive experimental results and ablation studies to conclude the paper.
## 2 Related Works

In this section, we provide a brief review of relevant methods for TSC in the literature, focusing on works using machine learning or deep learning. We group existing methods into four categories: traditional methods like DTW, deep learning approaches using CNNs or RNNs, Transformer architectures, and methods based on state-space models. Traditional TSC methods include techniques like Dynamic Time Warping (DTW) (Berndt & Clifford, 1994), which measures the similarity between time series by aligning them non-linearly via dynamic programming. Tree-based methods like XGBoost (Chen & Guestrin, 2016) have also been applied to the TSC task. In recent years, deep learning approaches have become increasingly popular for TSC. Various MLP-based methods have been proposed, including DLinear by Zeng et al. (2023) and LightTS by Zhang et al. (2022a). DLinear constructs a simple model based on MLP, while LightTS uses light sampling-oriented MLP. These models are generally efficient in computations. Convolutional neural networks (CNNs) have been adapted for TSC, such as ROCKET (Dempster et al., 2020) which uses random convolutional kernels for fast and accurate classification. CNNs have also been used by Franceschi et al. (2019) for learning representations of multivariate time series in an unsupervised way, which are then further leveraged for TSC. Besides CNNs, recurrent neural networks (RNNs) such as long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) and its variant, the gated recurrent unit (GRU), have also been adopted for TSC. Moreover, CNNs and RNNs have been combined to handle TSC effectively (Lai et al., 2018).

![4_image_0.png](4_image_0.png)

Figure 1: Schematic illustration of the proposed model TSCMamba. This diagram illustrates the architecture of our approach, featuring tango scanning. Here, DP refers to depth-wise pooling and LP refers to linear projection.
A switch gate selectively activates the utilization of either ROCKET or MLP-derived features. The MLP module, depicted in the bottom right, comprises two layers with an optional dropout mechanism interspersed for regularization.

Transformers by Vaswani et al. (2017), originally used for natural language processing, have been adapted for time series modeling. Reformer by Kitaev et al. (2020) introduces efficiency improvements to handle longer sequences. Numerous Transformer variants have been proposed to better model the unique characteristics of time series, such as handling non-stationarity with Non-stationary Transformers by Liu et al. (2022), combining exponential smoothing with ETSformer in Woo et al. (2022), and using decomposition and auto-correlation with Autoformer in Wu et al. (2021). Other variants include Pyraformer (Liu et al., 2021), which uses pyramidal attention to reduce complexity, Flowformer (Wu et al., 2022), which linearizes Transformers using conservation flows, and FEDformer (Zhou et al., 2022), which focuses on efficient long-sequence forecasting by exploiting frequency-enhanced decomposition. Notably, TimesNet (Wu et al., 2023) models the temporal 2D variations in time series data using a hierarchical structure of temporal blocks. By combining 2D convolutions, multi-head self-attention, and a novel positional encoding scheme, it can capture both local patterns and long-range dependencies in time series and obtain state-of-the-art performance. Despite the impressive performance of Transformer-based models, Zeng et al. (2023) have shown that MLP-based models can be more effective in many scenarios. Recently, a method called LSSL by Gu et al. (2022) has been proposed for TSC with a structured SSM called S4. It employs a special parameterization called the Diagonal Plus Low-Rank form to represent the state transition matrix, enabling efficient computation over long sequences.
## 3 Methodology

This section describes our proposed method, TSCMamba, whose overall architecture is schematically illustrated in Fig. 1. The architecture and data flow comprise the following key steps: First, we generate global temporal features, localized temporal features, and joint temporal-spectral features from the MTS data. Second, these diverse views are fused to provide rich contexts for subsequent sequence modeling. A switch gate determines whether to fuse the transform-domain features with local features or global patterns in the temporal domain. Third, both the pre-fusion and post-fusion features are fed into an inference engine consisting of two Mamba blocks. This captures long-term dependencies between these features. Fourth, we introduce a tango scanning scheme for each Mamba block to exploit inversion-invariant features, followed by depth-wise pooling. Fifth, final class decisions are made using an MLP to generate class logits. The following subsections detail each component and procedure of TSCMamba.

## 3.1 Spectral Representation

We have chosen the Continuous Wavelet Transform (CWT) to represent raw signals in the spectral domain. CWT potentially surpasses the space-time Fourier transform, fast Fourier transform (FFT), and discrete wavelet transform (DWT) by providing superior time-frequency localization and multi-resolution analysis. CWT's adaptable wavelets enhance feature extraction and noise reduction while handling edge effects better. This makes CWT particularly suitable for analyzing non-stationary signals. CWT's continuous and detailed representation may offer a significant advantage over the discrete nature of DWT. This renders CWT highly effective for precise time-frequency analysis.
Among a variety of wavelets, the Morlet wavelet in Equation 1 is employed in this paper due to its capturing both amplitude and phase effectively:

$$\psi(t)=\pi^{-1/4}\left(1-\frac{t^{2}}{\sigma^{2}}\right)\exp\left(-\frac{t^{2}}{2\sigma^{2}}\right)\cos(2\pi ft),\tag{1}$$

where σ is the scale parameter controlling the width of the wavelet, and f is the frequency parameter that controls the frequency of the cosine function. In this paper, we adopt σ² = 1 and f = 5/(2π) to balance computational cost and the expressiveness of the obtained wavelet features. However, keeping these parameters learnable may potentially benefit the classification accuracy. The smooth and symmetric shape of the Morlet wavelet minimizes distortions and edge effects, resulting in a clear and interpretable time-frequency representation. Using the wavelet function, we obtain a 2-D representation of size L1 × L1 for each channel of an original MTS input sample of size L. In this paper, we adopt L1 = 64 for computational efficiency and expressiveness of the obtained wavelet features. We summarize this CWT feature extraction process in S-Algorithm 1. Since the conversion from time signals to the CWT representation is not learnable, we move it to the data pre-processing stage, while regarding only the patch embedding module as learnable. This helps our model achieve lower FLOPs and faster training. With the resultant CWT representation of size D × L1 × L1, we further perform patch embedding using a Conv2D layer (kernel size = stride = p, padding = 0), where p = 8 is the patch size. Then, with the flattened patches, we utilize a feed-forward network (FFN) to obtain patches of size D × X for each MTS sample. The FFN consists of one fully connected layer with an input dimension (L1/p)² and an output dimension X, as shown for the projected space in Figure 1. It is used to extract features within the CWT representation.
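A minimal numpy sketch of this CWT feature extraction, assuming the wavelet of Eq. (1) and L1 = 64 as stated above; the scale grid, translation positions, and normalization are illustrative assumptions rather than the paper's exact implementation (which is summarized in S-Algorithm 1):

```python
import numpy as np

# Sketch of a CWT scalogram of size L1 x L1 for one channel, using the
# mother wavelet of Eq. (1) with the stated sigma^2 = 1, f = 5/(2*pi).
L1, sigma2, f = 64, 1.0, 5.0 / (2 * np.pi)

def mother_wavelet(t):
    return (np.pi ** -0.25) * (1 - t**2 / sigma2) \
        * np.exp(-t**2 / (2 * sigma2)) * np.cos(2 * np.pi * f * t)

def cwt_scalogram(x, L1=L1):
    """Return an L1 x L1 time-frequency map for a univariate series x."""
    L = len(x)
    times = np.linspace(0, L - 1, L1)       # L1 translation positions
    scales = np.geomspace(1.0, L / 4, L1)   # L1 scales (assumed grid)
    grid = np.arange(L)
    out = np.empty((L1, L1))
    for i, s in enumerate(scales):
        for j, b in enumerate(times):
            # scaled, normalized wavelet evaluated on the sample grid
            psi = mother_wavelet((grid - b) / s) / np.sqrt(s)
            out[i, j] = x @ psi
    return out

x = np.sin(2 * np.pi * np.arange(256) / 16.0)   # toy input channel
W = cwt_scalogram(x)
assert W.shape == (L1, L1)
```

In the model, such a 64 × 64 map per channel is then patch-embedded with a Conv2D layer and projected by the FFN.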
For each batch of size B, the resultant tensor representing the CWT features is denoted by W ∈ RB×D×X.

## 3.2 Temporal Feature Extraction

To complement the frequency-domain features, we extract time-domain features. As previously discussed, different MTS datasets may have global or local features or patterns that discriminate between different classes. Capturing these features is essential for accurate classification. We leverage two different approaches to capture such features.

Extracting Local Features with Convolutional Kernels in Unsupervised Fashion. Convolutional kernels usually have limited receptive fields, thus focusing on the extraction of local features. Since an MTS dataset may have local features at multiple temporal scales, it is sensible to capture local features within various widths of receptive fields. To this end, we employ the ROCKET approach (Dempster et al., 2020) to extract local features within various local neighborhoods for each channel in an unsupervised fashion. Note that we only utilize the time domain to extract the kernel-based features and do not use the class labels in this step. Therefore, our performance improvement does not rely solely on the ROCKET method; rather, ROCKET acts as a performance booster on certain datasets. ROCKET is a randomized algorithm that uses a set of randomized convolutional kernels to extract features from time series data. The method is suitable for capturing local features at various scales due to its randomized nature and the use of kernels with different sizes and strides. The procedure first randomly generates a set of convolutional kernels, each with a specific size and stride. Next, it convolves each kernel with the time series data to generate a feature map. The procedure is summarized in S-Algorithm 2. ROCKET generates random convolutional kernels, including random length and dilation. It transforms the time series with two features per kernel.
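A simplified, hypothetical numpy version of such random-kernel feature extraction (not sktime's ROCKET implementation; the kernel lengths, dilation sampling, and bias range are illustrative assumptions in the spirit of Dempster et al. (2020)):

```python
import numpy as np

# ROCKET-style sketch: random kernels with random lengths and dilations;
# each kernel yields two features, the global max and the proportion of
# positive values (PPV).
rng = np.random.default_rng(0)

def rocket_like_features(x, num_kernels=50):
    feats = []
    for _ in range(num_kernels):
        length = int(rng.choice([7, 9, 11]))
        max_exp = np.log2((len(x) - 1) / (length - 1))
        dilation = int(2 ** rng.uniform(0, max_exp))
        w = rng.standard_normal(length)
        w -= w.mean()                       # zero-mean weights
        b = rng.uniform(-1, 1)              # random bias
        span = (length - 1) * dilation
        idx = np.arange(length) * dilation
        conv = np.array([x[i + idx] @ w + b
                         for i in range(len(x) - span)])
        feats += [conv.max(), (conv > 0).mean()]   # max and PPV
    return np.array(feats)

x = np.sin(np.linspace(0, 8 * np.pi, 200))   # toy channel of length L
v = rocket_like_features(x)
assert v.shape == (100,)                     # 2 features per kernel
assert np.all((v[1::2] >= 0) & (v[1::2] <= 1))   # PPV is a proportion
```

In the model itself the sktime implementation is used, and the per-channel features are assembled into the tensor VL described below.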
These two features are the global max pooling and the proportion of positive values (PPV); one set of feature parameters is fitted per series. We apply the ROCKET feature extraction method to each channel of length L to form a feature vector of length X. As a result, with the input tensor of size B × D × L, we obtain a tensor VL ∈ RB×D×X that represents the local features. We utilized the sktime implementation of ROCKET to achieve this (Király et al., 2024; Löning et al., 2019).

Global Feature Extraction with MLP. An MLP has a receptive field covering the entire input, allowing the resulting feature vectors to capture the global characteristics of the MTS data. Independently for each channel of the input MTS of size D × L, we utilize a one-layer MLP with linear activation to obtain a feature vector of size D × X. Therefore, with the input tensor of size B × D × L, we obtain a tensor VG ∈ RB×D×X representing the global features.

## 3.3 Fusing Multi-View Representations

After obtaining the spectral features using CWT and the temporal features at both local and global levels, we fuse these features to effectively exploit the complementary information in these multi-view representations. Through our empirical study, we observe that for many MTS data, either local features or global features in the temporal domain play a dominant role in discriminating between classes for TSC. This observation motivates us to fuse the spectral features with either global or local features in the temporal domain. Denote the temporal features by V, which is either VG or VL. Then, the fused feature map VW will be calculated as follows:

$$V_{W}=W\otimes V,\tag{2}$$

where ⊗ represents an element-wise operation.
In this paper, this is either a multiplicative or additive operation such that

$$\{V_{W}\}_{ijk}=\lambda V_{ijk}\cdot(2-\lambda)W_{ijk},\quad\text{or}\quad\{V_{W}\}_{ijk}=\lambda V_{ijk}+(2-\lambda)W_{ijk},\tag{3}$$

where λ ≥ 0 is a learnable parameter that determines the balance between the spectral features and the temporal features, 1 ≤ i ≤ B, 1 ≤ j ≤ D, 1 ≤ k ≤ X. We set λ as a learnable parameter with initial value λ = 1.0, so its optimal value is determined during the training process. The initial value λ = 1.0 ensures an initially balanced focus between temporal and spectral domain features. After obtaining the fused temporal-spectral features, we concatenate them with the tensors containing the multi-view features into a new tensor U = W ∥ VW ∥ V ∈ RB×D×3X, where ∥ is a concatenation operation. We use a switching mechanism to make the choice between V = VG and V = VL. This mechanism is implemented as a learnable binary mask that selects either the global or local temporal features during the training process. The final state of this switch will be determined by the optimization in the training process and tuned based on datasets for optimal performance.

## 3.4 Inferring With Time-Channel Tango Scanning

With the integrated temporal-spectral contextual representations contained in tensor U, which is processed by a layer normalization for training stability, we can now learn salient representations to capture important relationships between features, particularly long-term dependencies. To achieve this, we construct tokens by treating each feature vector in U along the time and channel dimensions as a separate token. Subsequently, we leverage Mamba, a type of SSM, for modeling the token sequences. Mamba is designed for capturing discriminative contents by selectively scanning the token sequences. This selective scan ability allows the model to focus on the most informative parts of the sequence while ignoring less relevant information.
By doing so, Mamba can effectively capture long-term dependencies and identify salient features that are most useful for classification. Compared to other SSMs, Mamba has the advantage of being computationally efficient and able to handle long sequences. It achieves this by using a selective scan mechanism that reduces the complexity of token-to-token interactions. This makes Mamba particularly well-suited for processing time series data, where the sequences can be lengthy and contain complex temporal dependencies.

Vanilla Mamba Block: Inside a Mamba block, two fully-connected layers in two branches calculate linear projections. The output of the linear mapping in the first branch passes through a 1D causal convolution and SiLU activation S(·) (Elfwing et al., 2018), then a structured SSM. The continuous-time SSM maps an input function or sequence u(t) to an output z(t) through a latent state h(t):

$$dh(t)/dt=A\,h(t)+B\,u(t),\quad z(t)=C\,h(t),\tag{4}$$

where h(t) is N-dimensional, with N also known as a *state expansion factor*, u(t) is D-dimensional, with D being the *dimension factor* for an input token, z(t) is an output of dimension D, and A, B, and C are coefficient matrices of proper sizes. This dynamic system induces a discrete SSM governing state evolution and outputs given the input token sequence through time sampling at {k∆} with a ∆ time interval. This discrete SSM is

$$h_{k}=\bar{A}\,h_{k-1}+\bar{B}\,u_{k},\quad z_{k}=C\,h_{k},\tag{5}$$

where $h_k$, $u_k$, and $z_k$ are respectively samples of $h(t)$, $u(t)$, and $z(t)$ at time $k\Delta$, and

$$\bar{A}=\exp(\Delta A),\quad\bar{B}=(\Delta A)^{-1}(\exp(\Delta A)-I)\,\Delta B.\tag{6}$$

For SSMs, a diagonal A is often used. Mamba makes B, C, and ∆ linear time-varying functions dependent on the input.
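The discretization in Eqs. (5)-(6) can be checked numerically in the scalar case. The values of ∆, A, B, C below are illustrative, and the coefficients are kept fixed, ignoring Mamba's input dependence for simplicity:

```python
import numpy as np

# Scalar-case sketch of the discrete SSM in Eqs. (5)-(6); all values
# are illustrative and fixed (the input-dependent B, C, Delta of Mamba
# are not modeled here).
delta, A, B, C = 0.1, -1.0, 1.0, 1.0

A_bar = np.exp(delta * A)                                        # Eq. (6)
B_bar = (1.0 / (delta * A)) * (np.exp(delta * A) - 1.0) * delta * B

u = [0.5, -1.0, 2.0, 0.0]            # a short input token sequence
h, z = 0.0, []
for u_k in u:                        # Eq. (5): state recurrence
    h = A_bar * h + B_bar * u_k
    z.append(C * h)

assert 0.0 < A_bar < 1.0             # A < 0 gives a contracting state
assert len(z) == len(u)
```

With A < 0, the discrete transition Ā lies in (0, 1), so past inputs decay geometrically, which is the mechanism the selective parameterization then modulates per token.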
In particular, for a token u, B, C ← Linear_N(u), and ∆ ← softplus(parameter + Linear_D(Linear_1(u))), where Linear_p(u) is a linear projection to a p-dimensional space, and softplus is an activation function. Furthermore, Mamba also has an option to expand the model dimension factor D by a controllable dimension expansion factor E. Such coefficient matrices enable context and input selectivity properties (Gu & Dao, 2023) to selectively propagate or forget information along the input token sequence based on the current token. Denote the discretization operation by ∆ = τ∆(parameter + s∆), where τ∆ and s∆ are both functions of the input. For the special case of univariate sequences, the selectivity property has been mathematically proved (Gu & Dao, 2023), as shown in the following:

Theorem 1. *(Gu and Dao 2023) When* N = 1, A = −1, B = 1, s∆ = Linear(x), and τ∆ = softplus*, then the selective SSM recurrence takes the form of*

$$h_{k}=(1-g_{k})\ h_{k-1}+g_{k}\ u_{k},\quad and\quad g_{k}=\sigma(Linear(u_{k})),$$

where gk *is the gate.*

This theorem states that the hidden state is a convex combination of the current input token and the previous hidden state, with the combination coefficient controlled by the current input token. Moreover, the parameter gk is responsible for selecting the input contents uk from the sequence and plays a role similar to a gating mechanism in the RNN model, thus connecting the selective SSM to the traditional RNN. After obtaining the SSM output, it is multiplicatively modulated with the output from the second branch before another fully connected projection. The second branch in the Mamba block simply consists of a linear mapping followed by a SiLU.

Tango Scanning: The selectivity ability of Mamba depends on the ordering of the tokens in the sequence because the hidden state at time n is constructed causally from the history tokens as determined by their ordering.
If the history tokens do not contain informational contexts, Mamba may provide less effective predicted output. To alleviate this potential limitation of causal scanning, we construct a dedicated module to extend a vanilla Mamba block, as shown in Figure 1. Each module comprises one vanilla Mamba block. On the input side, the module accepts a sequence in a forward fashion as input and then inverts the sequence to accept it as input again. At the output side, the output of the vanilla Mamba block with the forward sequence and that with the inverted sequence are added element-wise. The operations are represented as follows. Denote an input token sequence by v = [v1, · · · , vM], where vi ∈ RD, and v ∈ RD×M is the matrix representation of the token sequence with M being the sequence length. We will first get a reverse-flipped sequence v(r) by inverting the ordering of the elements in v. Tango scanning performs the following operations to obtain the output sequence s(o):

$$v^{(r)}=Reverse(v)=[v_{M},v_{M-1},\cdots,v_{1}],\tag{7}$$

$$a=Mamba(v),\tag{8}$$

$$a^{(r)}=Mamba(v^{(r)}),\tag{9}$$

$$s^{(o)}=v\oplus a\oplus v^{(r)}\oplus a^{(r)},\tag{10}$$

where Reverse(·) denotes the flipping operation of a sequence, Mamba(·) denotes a vanilla Mamba block, and ⊕ denotes element-wise addition. The last equation (10) represents the element-wise addition for information fusion. Notably, the same Mamba block is used for the forward sequence v and the reverse-flipped sequence v(r). By doing so, the SSM in this block will be trained to update the hidden state variable more effectively than using simply the forward scanning of the vanilla Mamba. Because of the sharing of one Mamba block (and thus one SSM) with two sequences that are flips of each other, we regard it as a dancer's one body with two concerted legs and hence call it tango scanning.
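Tango scanning can be sketched at shape level as follows. The causal recurrence below is a stand-in for a real Mamba block (its form and parameters are illustrative assumptions), but, as in Eqs. (7)-(10), one set of weights is shared across both scanning directions:

```python
import numpy as np

# Shape-level sketch of tango scanning: ONE shared stand-in "Mamba"
# block processes the forward and reversed sequence; the four terms
# are fused element-wise.
rng = np.random.default_rng(0)
M, Dim = 6, 4
v = rng.standard_normal((M, Dim))            # token sequence

W_shared = rng.standard_normal((Dim, Dim))   # shared block parameters

def mamba_stub(seq):
    """Causal stand-in recurrence: h_k = 0.5 h_{k-1} + tanh(W v_k)."""
    h = np.zeros(Dim)
    out = []
    for tok in seq:
        h = 0.5 * h + np.tanh(W_shared @ tok)
        out.append(h.copy())
    return np.array(out)

v_r = v[::-1]                    # Eq. (7): the single reversal
a = mamba_stub(v)                # Eq. (8): forward pass
a_r = mamba_stub(v_r)            # Eq. (9): same block, reversed input
s_o = v + a + v_r + a_r          # Eq. (10): element-wise fusion
assert s_o.shape == (M, Dim)

# Inversion invariance of the fused output: feeding the reversed
# sequence through the same module yields exactly the same s_o.
s_o_inv = v_r + mamba_stub(v_r) + v + mamba_stub(v)
assert np.allclose(s_o, s_o_inv)
```

The last assertion makes the design intent concrete: the fused output is unchanged when the input token sequence is inverted, which is the inversion-invariance property motivated earlier.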
Unlike the bi-directional Mamba blocks in Behrouz & Hashemi (2024); Schiff et al. (2024) that use two separate SSMs, one for the forward direction and the other for the backward direction, our tango scanning block uses only a single Mamba block. Importantly, for the inverted sequence in our tango scanning module, the output from the Mamba block is not re-inverted back to the original order before performing element-wise addition. In other words, a tango scanning module only involves sequence inversion once. On the contrary, the bi-directional Mamba block (Schiff et al., 2024) needs to re-invert the output from the vanilla Mamba block. Empirically, we will demonstrate that our tango scanning can effectively update the hidden state variable while maintaining essentially the same memory footprint as the vanilla Mamba block.

Performing Tango Scanning in Time and Channel Dimensions: The MTS data have significant patterns, correlation structures, and temporal long-term dependencies. To model the relationship in the temporal dimension, we perform tango scanning temporally for every channel. The processed embedded representation with tensor size B × 3X × D is transformed using tango scanning. Specifically, with each D-dimensional feature point across all channels regarded as a token, we have a token sequence with dimension factor D and length 3X as input to the Mamba block in the tango scanning module. This yields an output tensor of size B × 3X × D. That is, by denoting $u^{(t)}=[u^{(t)}_{1},\cdots,u^{(t)}_{3X}]^{T}\in\mathbb{R}^{3X\times D}$ as the token sequence formed along the time direction for a time series (in the batch), we have

$$s^{(t)}=Tango\_Scanning(u^{(t)}),\tag{11}$$

where $s^{(t)}=[s^{(t)}_{1},\cdots,s^{(t)}_{3X}]^{T}\in\mathbb{R}^{3X\times D}$. By leveraging Mamba, we will extract salient features and context cues from the input token sequence. In particular, the output sequence s(t) captures the between-time-point interactions along the temporal direction.
Because the MTS data often have significant correlations along the channel dimension, we also model relationships across channels. To this end, we first form our tensor to have size B × D × 3X and then transform it using our tango scanning. Specifically, the whole univariate sequence of each channel is used as a token with a dimension factor 3X for the Mamba block in the tango scanning module. Thus, we form a token sequence of length D, with each token having dimension 3X. This token sequence is input to our tango scanning module, yielding an output tensor of size B × D × 3X. That is, by denoting u^{(c)} = [u^{(c)}_1, · · · , u^{(c)}_D]^T ∈ R^{D×3X} as the token sequence formed along the channel dimension, we have

$$s^{(c)} = Tango\_Scanning(u^{(c)}), \quad (12)$$

where s^{(c)} = [s^{(c)}_1, · · · , s^{(c)}_D]^T ∈ R^{D×3X}. Note that the tango scanning module used in Eq. (12) is different from the one used in Eq. (11) and utilizes a separate Mamba module. The output sequence s^{(c)} captures the between-channel interactions. It is critical to account for the inter-relationships across channels when the MTS data have many channels. After obtaining the output s^{(t)} of the time-wise scanning and the output s^{(c)} of the channel-wise scanning, we perform another fusion at the Mamba-transformed sequence level:

$$z = (s^{(t)})^T \oplus s^{(c)}, \quad (13)$$

where ⊕ denotes element-wise addition of matrices, and (s^{(t)})^T is the transpose of s^{(t)}. The resultant fused sequence is of size D × 3X.

## 3.5 Output Class Representation

The fused tensor of size B × D × 3X is used to distill class information (class logits). First, we perform depth-wise pooling (DP) (Figure 1) to aggregate information across channels. Specifically, given a fused sequence z ∈ R^{D×3X}, we have

$$\bar{z} = DP(z), \quad (14)$$

where DP(·) denotes the depth-wise pooling and the output \bar{z} ∈ R^{3X}.
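The tensor bookkeeping in Eqs. (11)-(14) is easy to get wrong, so here is a minimal NumPy sketch of the shapes involved for a single series. The two tango-scanning outputs are replaced by random placeholders, and D and X are arbitrary illustrative values, not our tuned settings.

```python
import numpy as np

rng = np.random.default_rng(0)
D, X = 6, 8                  # channel count and projected-space size (illustrative)
L = 3 * X                    # fused sequence length 3X

# placeholders for the two tango-scanning outputs of Eqs. (11) and (12)
s_t = rng.standard_normal((L, D))   # time-wise:    3X tokens of dimension D
s_c = rng.standard_normal((D, L))   # channel-wise: D tokens of dimension 3X

z = s_t.T + s_c              # Eq. (13): transpose s^(t), add element-wise -> R^{D x 3X}
z_bar = z.mean(axis=0)       # Eq. (14): depth-wise pooling across channels -> R^{3X}
# the max-pooling variant of DP would be z.max(axis=0)
```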
DP can be either average pooling or max pooling. We regard these two pooling operations as two possible values of DP. Given a dataset, the specific pooling is determined in the training stage. Subsequently, we employ an FFN of two layers with an optional dropout mechanism interspersed for regularization:

$$\bar{z}^{(1)} = MLP(\bar{z}), \quad \bar{z}^{(2)} = MLP(\bar{z}^{(1)}), \quad (15)$$

where \bar{z} is projected into vectors \bar{z}^{(1)} ∈ R^{3X/2} and \bar{z}^{(2)} ∈ R^C. The class labels are determined based on \bar{z}^{(2)}. To train the proposed network, which we call TSCMamba, we employ a cross-entropy (CE) loss on the output of the second layer of the FFN, \bar{z}^{(2)}.

## 4 Experiments And Result Analysis

In this section, we present the results of our experiments on benchmark datasets for time series classification tasks. We evaluate the performance of our proposed method, TSCMamba, and compare it with several state-of-the-art baseline models. The results demonstrate the effectiveness of TSCMamba in handling complex time series classification tasks.

## 4.1 Datasets

We evaluated the performance of our proposed method, TSCMamba, on 10 benchmark datasets for time series classification tasks following TimesNet (Wu et al., 2023) and on 10 additional datasets following TSLANet (Eldele et al., 2024). These datasets are commonly used in the literature and are representative of various domains, including image, audio, and sensor data. We present dataset statistics in S-Table 1. These datasets are sourced from a diverse set of domains and contain a diverse range of classes, channels, and sequence lengths, leading to a robust benchmark for evaluating classification tasks. Moreover, some datasets contain more data in the Test set than in the Train set (EC, HW, HB, JV, SCP1, UG), making time-series classification a harder task. More domain-related information can be found in Bagnall et al. (2018).
## 4.2 Experimental Environment

All experiments were conducted using the PyTorch framework (Paszke et al., 2019) on four NVIDIA V100 GPUs (32 GB each). The model was optimized using the ADAM algorithm (Kingma & Ba, 2014) with the cross-entropy loss, following TimesNet (Wu et al., 2023). Moreover, the baseline results are taken from the TimesNet (Wu et al., 2023) paper for a fair comparison (same train-test split across methods). The batch size, number of epochs, and initial learning rate varied across datasets for optimal performance. The hyperparameters were selected based on the Train and Test sets provided by the dataset archive, following TimesNet (Wu et al., 2023). Moreover, the optimization was performed using a cosine-annealing learning rate scheduler. We measure the prediction performance of our method using the accuracy metric, where larger values indicate better prediction accuracy.

Table 1: Classification Accuracy (%) for Various Models. The . symbol in Transformer models denotes the specific type of ∗former used. The best average result and rank are in **bold** and the second best is underlined. The ranks are calculated using the Wilcoxon signed-rank test (lower is better).

| Datasets | DTW (1994) | XGBoost (2016) | Rocket (2020) | LSTM (1997) | LSTNet (2018) | LSSL (2022) | TCN (2019) | Trans. (2017) | Re. (2020) | In. (2021) | Pyra. (2021) | Auto. (2021) | Station. (2022) | FED. (2022) | ETS. (2022) | Flow. (2022) | DLinear (2023) | LightTS. (2022a) | TimesNet (2023) | TSLANet (2024) | TSCMamba (Ours) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EC | 32.3 | 43.7 | 45.2 | 32.3 | 39.9 | 31.1 | 28.9 | 32.7 | 31.9 | 31.6 | 30.8 | 31.6 | 32.7 | 31.2 | 28.1 | 33.8 | 32.6 | 29.7 | 35.7 | 30.4 | 62.0 |
| FD | 52.9 | 63.3 | 64.7 | 57.7 | 65.7 | 66.7 | 52.8 | 67.3 | 68.6 | 67.0 | 65.7 | 68.4 | 68.0 | 66.0 | 66.3 | 67.6 | 68.0 | 67.5 | 68.6 | 66.8 | 69.4 |
| HW | 28.6 | 15.8 | 58.8 | 15.2 | 25.8 | 24.6 | 53.3 | 32.0 | 27.4 | 32.8 | 29.4 | 36.7 | 31.6 | 28.0 | 32.5 | 33.8 | 27.0 | 26.1 | 32.1 | 57.9 | 53.3 |
| HB | 71.7 | 73.2 | 75.6 | 72.2 | 77.1 | 72.7 | 75.6 | 76.1 | 77.1 | 80.5 | 75.6 | 74.6 | 73.7 | 73.7 | 71.2 | 77.6 | 75.1 | 75.1 | 78.0 | 77.6 | 76.6 |
| JV | 94.9 | 86.5 | 96.2 | 79.7 | 98.1 | 98.4 | 98.9 | 98.7 | 97.8 | 98.9 | 98.4 | 96.2 | 99.2 | 98.4 | 95.9 | 98.9 | 96.2 | 96.2 | 98.4 | 99.2 | 97.0 |
| PS | 71.1 | 98.3 | 75.1 | 39.9 | 86.7 | 86.1 | 68.8 | 82.1 | 82.7 | 81.5 | 83.2 | 82.7 | 87.3 | 80.9 | 86.0 | 83.8 | 75.1 | 88.4 | 89.6 | 83.8 | 90.2 |
| SCP1 | 77.7 | 84.6 | 90.8 | 68.9 | 84.0 | 90.8 | 84.6 | 92.2 | 90.4 | 90.1 | 88.1 | 84.0 | 89.4 | 88.7 | 89.6 | 92.5 | 87.3 | 89.8 | 91.8 | 91.8 | 92.5 |
| SCP2 | 53.9 | 48.9 | 53.3 | 46.6 | 52.8 | 52.2 | 55.6 | 53.9 | 56.7 | 53.3 | 53.3 | 50.6 | 57.2 | 54.4 | 55.0 | 56.1 | 50.5 | 51.1 | 57.2 | 61.7 | 66.7 |
| SA | 96.3 | 69.6 | 71.2 | 31.9 | 100.0 | 100.0 | 95.6 | 98.4 | 97.0 | 100.0 | 99.6 | 100.0 | 100.0 | 100.0 | 100.0 | 98.8 | 81.4 | 100.0 | 99.0 | 99.9 | 99.0 |
| UG | 90.3 | 75.9 | 94.4 | 41.2 | 87.8 | 85.9 | 88.4 | 85.6 | 85.6 | 85.6 | 83.4 | 85.9 | 87.5 | 85.3 | 85.0 | 86.6 | 82.1 | 80.3 | 85.3 | 91.3 | 93.8 |
| Avg. | 67.0 | 66.0 | 72.5 | 48.6 | 71.8 | 70.9 | 70.3 | 71.9 | 71.5 | 72.1 | 70.8 | 71.1 | 72.7 | 70.7 | 71.0 | 73.0 | 67.5 | 70.4 | 73.6 | 76.04 | **80.05** |
| Rank | 15.20 | 15.55 | 10.25 | 19.55 | 10.40 | 11.70 | 12.40 | 9.40 | 9.95 | 8.90 | 12.80 | 11.50 | 7.30 | 12.40 | 12.85 | 6.45 | 14.60 | 12.65 | 6.40 | 6.40 | **4.35** |

Baseline Models: In this study, we evaluate the performance of our proposed method, TSCMamba, against 20 state-of-the-art baselines in Table 1, encompassing Transformer-based (Eldele et al., 2024; Wu et al., 2023; Vaswani et al., 2017; Kitaev et al., 2020; Zhou et al., 2021; Liu et al., 2021; Wu et al., 2021; Liu et al., 2022; Zhou et al., 2022; Woo et al., 2022; Wu et al., 2022), CNN-based (Franceschi et al., 2019), RNN-based (Hochreiter & Schmidhuber, 1997; Lai et al., 2018; Gu et al., 2022), MLP-based (Zeng et al., 2023; Zhang et al., 2022a), and classical machine learning-based methods (Berndt & Clifford, 1994; Chen & Guestrin, 2016; Dempster et al., 2020). Therefore, the comparison among these methods following TimesNet (Wu et al., 2023) provides strong recent baselines from various areas of machine learning.

## 4.3 Predictive Performance Comparison

The comprehensive results are presented in Table 1. Notably, our approach achieves a substantial improvement of 4.01% over the existing best baseline, TSLANet (Eldele et al., 2024).
Additionally, it improves upon the second-best baseline, TimesNet (Wu et al., 2023), by 6.45%, which is a significant margin compared to TimesNet's improvement of 0.6% over the previous best baseline, Flowformer (Wu et al., 2022). This notable performance gain establishes TSCMamba as a strong contender for the TSC task. We have uploaded our code in the supplementary materials and plan to release it, along with the checkpoints for Table 1, publicly. While Table 1 presents our best-achieved results, we also demonstrate TSCMamba's reproducibility and stability across 5 runs with means and error bars (standard deviations) in S-Figure 7. In addition to the main baselines, we also compare TSCMamba against regular Mamba (S-Figure 4). From Table 1, it is evident that some methods may perform well on certain datasets (e.g., TimesNet (Wu et al., 2023) on JV and SA) but may lag behind by a large margin on others (e.g., TimesNet (Wu et al., 2023) on EC and HW). In contrast, our method maintains a balance across the datasets while showing a significant improvement in average performance. In addition to the benchmark datasets used by TimesNet (Wu et al., 2023), we also evaluate our model on 10 additional randomly selected datasets from TSLANet (Eldele et al., 2024) and present the results in S-Table 2. Following TimesNet (Wu et al., 2023) and other recent baselines such as TSLANet (Eldele et al., 2024), we present the results as average performance across datasets. However, recognizing that averaging over various datasets might not be the optimal way to assess different models, we also present ranks of the different models, as suggested by Demšar (2006). These ranks are calculated using the Wilcoxon signed-rank test, following the methodology of Ismail Fawaz et al. (2019). For both sets of benchmark datasets, Table 1 and S-Table 2 demonstrate that TSCMamba achieves the best performance in terms of both averaged accuracy and rank.
## 4.4 Computational Complexity

In this study, we compare the floating-point operations (FLOPs) of the top-performing methods presented in Table 1. To calculate FLOPs, we set a batch size of 16 across all baselines. For our method, we employed the best-performing hyperparameters, whereas for the other baselines, we utilized the recommended parameters specified in the official TimesNet (Wu et al., 2023) and Flowformer (Wu et al., 2022) code. We leveraged the source code from Ye (2023) to calculate FLOPs. The overall FLOPs, including both forward and backward passes, are presented in Table 2. Notably, our method achieves substantial improvements in terms of FLOPs across all datasets, with the exception of PEMS-SF (PS). This anomaly can be attributed to the projected space (X) used to achieve the best result for this dataset, which was set to 1024, thereby increasing the total FLOPs for this dataset only.

Table 2: FLOPs comparison among the top-performing methods. The values are given in GigaFLOPs (G) or TeraFLOPs (T), where 1 TFLOPs = 1000 GFLOPs and a lower value indicates better computational efficiency.

| Methods | EC | FD | HW | HB | JV | PS | SCP1 | SCP2 | SA | UG |
|---|---|---|---|---|---|---|---|---|---|---|
| Flow. (Wu et al., 2022) | 1.06T | 37.97G | 92.21G | 246.37G | 15.76G | 94.02G | 542.64G | 697.74G | 50.33G | 190.82G |
| TimesNet (Wu et al., 2023) | 1.11T | 161.93G | 115.88G | 182.69G | 48.15G | 74.18G | 503.62G | 2.33T | 26.00G | 247.73G |
| TSCMamba (Ours) | 1.69G | 11.53G | 27.24G | 8.39G | 12.33G | 2.84T | 3.42G | 11.11G | 0.78G | 13.86G |

## 5 Ablation

## 5.1 Component-Wise Ablation

In this section, we conduct an ablation study to investigate the contribution of individual components in our proposed method. The results are presented in Table 3.

Table 3: Ablation experiments on particular components in our method.

| Mamba | Avg.Pool | ROCKET | AF | EC | FD | HW | HB | JV | PS | SCP1 | SCP2 | SA | UG | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ✓ | ✓ | ✓ | ✓ | 62.0 | 57.0 | 53.3 | 74.1 | 93.0 | 90.2 | 92.5 | 66.7 | 94.1 | 93.8 | 77.67 |
| ✗ | ✓ | ✓ | ✓ | 33.1 | 63.2 | 34.1 | 73.2 | 85.4 | 81.5 | 86.7 | 57.2 | 74.0 | 89.1 | 67.75 |
| ✓ | ✗ | ✓ | ✓ | 31.6 | 64.2 | 52.0 | 74.1 | 94.1 | 63.0 | 86.7 | 60.6 | 96.7 | 92.8 | 71.58 |
| ✓ | ✓ | ✗ | ✓ | 31.6 | 69.4 | 24.8 | 76.6 | 97.0 | 87.3 | 91.8 | 58.3 | 97.6 | 86.2 | 72.06 |
| ✓ | ✓ | ✓ | ✗ | 30.0 | 51.5 | 49.3 | 72.7 | 91.4 | 84.4 | 88.7 | 58.9 | 90.0 | 90.3 | 70.72 |

A notable observation is that the performance degrades by a large margin across all datasets when the Mamba modules are not utilized, highlighting the importance of incorporating Mamba in our approach. Specifically, when Mamba is not employed (2nd row), the intermediate values bypass the scanning operations and are directly fed into the DP and MLPs for class-logit prediction. In the 3rd row, we use depth-wise max-pooling instead of average pooling, resulting in an input shape of (B, 3X). Furthermore, when ROCKET-extracted features are not utilized (4th row), we resort to MLP-extracted features, where the former are non-learnable and the latter are learnable. Additionally, in the 5th row, we explore the effect of replacing additive fusion (AF) with multiplicative fusion (MF) (of ROCKET features and spectral features), as detailed in Section 3.3. Notably, while Table 3 largely mirrors the best performance reported in Table 1 across most datasets, the SpokenArabicDigits (SA) dataset exhibits optimal performance when employing depth-wise max-pooling and MLP-based features.
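For reference, the ROCKET features ablated above can be approximated with a short NumPy sketch: each random kernel contributes the proportion of positive values (PPV) and the maximum of its convolution response. This is a simplified stand-in, not our exact feature extractor; dilation and padding of the full ROCKET method (Dempster et al., 2020) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def rocket_features(x, n_kernels=50):
    """ROCKET-style features: convolve x with random kernels and keep, per
    kernel, the proportion of positive values (PPV) and the max response,
    giving 2 * n_kernels non-learnable features."""
    feats = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])      # random kernel length
        w = rng.standard_normal(length)
        w -= w.mean()                        # zero-mean weights, as in ROCKET
        b = rng.standard_normal()            # random bias
        resp = np.convolve(x, w, mode="valid") + b
        feats.append((resp > 0).mean())      # h_ppv, in [0, 1]
        feats.append(resp.max())             # h_max
    return np.asarray(feats)

f = rocket_features(rng.standard_normal(200))
```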
## 5.2 Hyperparameter Sensitivity

In this section, we discuss the key settings of the Mamba model, as detailed in Gu & Dao (2023). The model operates with four main settings: model dimension (d_model), SSM state expansion factor (d_state), local convolution width (d_conv), and block expansion factor (expand). In our experiments, we automatically set the model dimension based on the input data, while we adjust the other three settings. The importance of fine-tuning these parameters is evident from our tests, shown in Figure 2, which clearly demonstrate their impact on our model's performance. In addition to Mamba's hyperparameters, we also tuned the dimension of the projected space (X).

## 5.3 Effectiveness Of Tango Scanning

Although our way of scanning (tango scanning) may at first glance seem counterintuitive compared to forward-only scanning or scanning with an additional reverse flip (Schiff et al., 2024), it provides substantial improvements in accuracy. As Figure 3 demonstrates, this approach outperforms traditional forward scanning and other reverse-flip-based scanning for TSC tasks, making it a valuable strategy for complex scanning scenarios. Our tango scanning is explained in more detail in Section 3.4.

![12_image_0.png](12_image_0.png)

Figure 2: Sensitivity analysis of TSCMamba's hyper-parameters on Time Series Classification (TSC) performance. The plot shows the impact of varying (from left to right, top to bottom) the block expansion factor, SSM state expansion factor, local convolution width, and dimension of the projected space (X) on model performance, highlighting the relative importance of each component in achieving optimal TSC results.

![12_image_1.png](12_image_1.png)

Figure 3: Effectiveness of our tango scanning compared against a forward-only scanning protocol and an additional flip-based reverse scanning protocol.
## 6 Conclusion And Future Work We present TSCMamba, an innovative approach for Time Series Classification (TSC) designed to enhance performance while maintaining lower Floating Point Operations (FLOPs). TSCMamba leverages multi-view learning to analyze different views of time-series data, including local and/or global features extracted from time and frequency domains, thereby capturing the essential, discriminative patterns of real-world timeseries data. Moreover, the proposed tango scanning mechanism demonstrates TSCMamba's superiority over conventional scanning methods through extensive experimental validation. Our comprehensive experiments highlight TSCMamba's exceptional performance in terms of both accuracy and computational efficiency, consistently outperforming current state-of-the-art methods. These results suggest that TSCMamba can serve as a robust and efficient solution for a wide range of TSC applications. Looking ahead, our future work will focus on further enhancing TSCMamba by incorporating self-supervised learning techniques and extending its capabilities to multiple-task learning, in addition to the classification task. We also plan to explore the adaptability of TSCMamba across more diverse and complex time-series datasets, aiming to establish its versatility and robustness in various real-world scenarios. The promising results suggest TSCMamba's high potential for time series applications. ## References Md Atik Ahamed and Qiang Cheng. MambaTab: A simple yet effective approach for handling tabular data. arXiv preprint arXiv:2401.08867, 2024a. Md Atik Ahamed and Qiang Cheng. TimeMachine: A time series is worth 4 mambas for long-term forecasting. *arXiv preprint arXiv:2403.09898*, 2024b. Anthony Bagnall, Hoang Anh Dau, Jason Lines, Michael Flynn, James Large, Aaron Bostrom, Paul Southam, and Eamonn Keogh. The UEA multivariate time series classification archive, 2018. *arXiv preprint* arXiv:1811.00075, 2018. Ali Behrouz and Farnoosh Hashemi. 
Graph Mamba: Towards learning on graphs with state space models. arXiv preprint arXiv:2402.08678, 2024. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 35(8):1798–1828, 2013. Donald J. Berndt and James Clifford. Using dynamic time warping to find patterns in time series. In KDD Workshop, 1994. Pimwadee Chaovalit, Aryya Gangopadhyay, George Karabatis, and Zhiyuan Chen. Discrete wavelet transform-based time series analysis and mining. *ACM Computing Surveys (CSUR)*, 43(2):1–37, 2011. Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. KDD, 2016. Tri Dao and Albert Gu. Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. In *International Conference on Machine Learning*, 2024. Angus Dempster, Franccois Petitjean, and Geoffrey I. Webb. Rocket: exceptionally fast and accurate time series classification using random convolutional kernels. *Data Min. Knowl. Discov.*, 2020. Janez Demšar. Statistical comparisons of classifiers over multiple data sets. *Journal of Machine Learning* Research, 7(1):1–30, 2006. URL http://jmlr.org/papers/v7/demsar06a.html. Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, Xiaoli Li, and Cuntai Guan. Time-series representation learning via temporal and contextual contrasting. In *IJCAI*, pp. 2352– 2359, 2021. Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, and Xiaoli Li. Tslanet: Rethinking transformers for time series representation learning. In *International Conference on Machine Learning*, 2024. Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. *Neural Networks*, 107:3–11, 2018. Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. 
In *Advances in Neural Information Processing Systems*, 2019. Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d roto-translation equivariant attention networks. *Advances in Neural Information Processing Systems*, 33:1970–1981, 2020. Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with linear state-space layers. Advances in Neural Information Processing Systems, 2021. Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. In *International Conference on Learning Representations*, 2022. S. Hochreiter and J. Schmidhuber. Long short-term memory. *Neural Comput.*, 1997. Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. Deep learning for time series classification: a review. *Data Mining and Knowledge Discovery*, 33(4): 917–963, 2019. Yanfei Kang, Danijel Belušić, and Kate Smith-Miles. Detecting and classifying events in noisy time series. Journal of the Atmospheric Sciences, 71(3):1090–1104, 2014. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Franz Király, Markus Löning, Tony Bagnall, Matthew Middlehurst, Sajaysurya Ganesh, Martin Walter, George Oastler, Anirban Ray, Jason Lines, ViktorKaz, Lukasz Mentel, Benedikt Heidrich, Sagar Mishra, chrisholder, Daniel Bartling, Leonidas Tsaprounis, RNKuhns, Ciaran Gilbert, Mirae Baichoo, Hazrul Akmal, Patrick Rockenschaub, Taiwo Owoseni, Guzal, eenticott shell, Sami Alavi, jesellier, Armaghan, Kishan Manani, and Patrick Schäfer. sktime/sktime: v0.30.0, June 2024. URL https://doi.org/10. 5281/zenodo.11460888. Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 
Reformer: The efficient transformer. In International Conference on Learning Representations, 2020. Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long-and short-term temporal patterns with deep neural networks. In *SIGIR*, 2018. Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. *Neural Computation*, 1(4):541–551, 1989. Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In *International* Conference on Learning Representations, 2021. Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: Rethinking the stationarity in time series forecasting. In *Advances in Neural Information Processing Systems*, 2022. Markus Löning, Anthony Bagnall, Sajaysurya Ganesh, Viktor Kazakov, Jason Lines, and Franz J Király. sktime: A unified interface for machine learning with time series. *arXiv preprint arXiv:1909.07872*, 2019. Stephane Mallat. *A Wavelet Tour of Signal Processing: The Sparse Way*. Academic Press, 2008. Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In *International Conference on Learning Representations*, 2023. Alan V Oppenheim. *Discrete-Time Signal Processing*. Pearson Education India, 1999. Kewei Ouyang, Yi Hou, Shilin Zhou, and Ye Zhang. Adaptive multi-scale wavelet neural network for time series classification. *Information*, 12(6):252, 2021. Adam Paszke, S. Gross, Francisco Massa, A. Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Z. Lin, N. Gimelshein, L. Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 
Pytorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information* Processing Systems, 2019. Chong Peng, Kehan Kang, Yongyong Chen, Zhao Kang, Chenglizhao Chen, and Qiang Cheng. Fine-grained essential tensor learning for robust multi-view spectral clustering. *IEEE Transactions on Image Processing*, 33:3145–3160, 2024. Donald B Percival and Andrew T Walden. *Wavelet Methods for Time Series Analysis*, volume 4. Cambridge university press, 2000. Yair Schiff, Chia-Hsiang Kao, Aaron Gokaslan, Tri Dao, Albert Gu, and Volodymyr Kuleshov. Caduceus: Bi-directional equivariant long-range dna sequence modeling. *arXiv preprint arXiv:2403.03234*, 2024. Siavash Shams, Sukru Samet Dindar, Xilin Jiang, and Nima Mesgarani. Ssamba: Self-supervised audio representation learning with mamba state space model. *arXiv preprint arXiv:2405.11831*, 2024. Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, 2017. Zhenya Wang, Ligang Yao, and Yongwu Cai. Rolling bearing fault diagnosis using generalized refined composite multiscale sample entropy and optimized support vector machine. *Measurement*, 156:107574, 2020. Zihan Wang, Fanheng Kong, Shi Feng, Ming Wang, Han Zhao, Daling Wang, and Yifei Zhang. Is mamba effective for time series forecasting? *arXiv preprint arXiv:2403.11144*, 2024. Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, and Steven C. H. Hoi. ETSformer: Exponential smoothing transformers for time-series forecasting. *arXiv preprint arXiv:2202.01381*, 2022. Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. 
Autoformer: Decomposition transformers with Auto-Correlation for long-term series forecasting. In *Advances in Neural Information Processing Systems*, 2021. Haixu Wu, Jialong Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Flowformer: Linearizing transformers with conservation flows. In *International Conference on Machine Learning*, 2022. Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. Timesnet: Temporal 2dvariation modeling for general time series analysis. In *The Eleventh International Conference on Learning* Representations, 2023. URL https://openreview.net/forum?id=ju_Uqw384Oq. xiaoju Ye. calflops: a flops and params calculate tool for neural networks in pytorch framework, 2023. URL https://github.com/MrYxJ/calculate-flops.pytorch. Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, and Bixiong Xu. Ts2vec: Towards universal representation of time series. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8980–8987, Jun. 2022. doi: 10.1609/aaai.v36i8.20881. URL https://ojs.aaai.org/ index.php/AAAI/article/view/20881. Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 11121–11128, 2023. T. Zhang, Yizhuo Zhang, Wei Cao, J. Bian, Xiaohan Yi, Shun Zheng, and Jian Li. Less is more: Fast multivariate time series forecasting with light sampling-oriented mlp structures. arXiv preprint arXiv:2207.01186, 2022a. Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, and Marinka Zitnik. Self-supervised contrastive pretraining for time series via time-frequency consistency. *Advances in Neural Information Processing Systems*, 35:3988–4003, 2022b. Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In *International Conference on Learning Representations*, 2023. 
URL https://openreview.net/forum?id=vSVLM2j9eie. Qibin Zhao and Liqing Zhang. Ecg feature extraction and classification using wavelet transform and support vector machines. In *2005 International Conference on Neural Networks and Brain*, volume 2, pp. 1089– 1092, 2005. doi: 10.1109/ICNNB.2005.1614807. Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In *Proceedings of the AAAI* Conference on Artificial Intelligence, 2021. Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In *International Conference on Machine* Learning, 2022. Tian Zhou, Peisong Niu, xue wang, Liang Sun, and Rong Jin. One fits all: Power general time series analysis by pretrained lm. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), *Advances in Neural Information Processing Systems*, volume 36, pp. 43322–43355. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/ 86c17de05579cde52025f9984e6e2ebb-Paper-Conference.pdf. ## 7 Appendix In the appendix section, we present additional supplementary materials including algorithms, dataset statistics, etc. To distinguish from original materials, we add the prefix S- to the supplementary materials. ## 7.1 Algorithms In this section, we present the two schematic algorithms for conversion of raw signals to CWT and ROCKET feature extraction in S-Algorithm 1 and S-Algorithm 2 respectively. S-Algorithm 1 Convert Raw Signals to CWT representation Input: Raw signals of shape (*N, D, L*) Output: Tensor of shape (N, D, L1, L1) \# We set L1 = 64 in this paper. 
1: for each signal i in N do
2:   for each dimension d in D do
3:     signal ← Raw[i, d, :]
4:     coeff, freq ← CWT(signal)
5:     cwt_resized ← resize(coeff, (L1, L1), mode="constant")
6:     Tensor[i, d, :, :] ← cwt_resized
7:   end for
8: end for

S-Algorithm 2 Feature Extraction with ROCKET (Random Convolutional Kernel Transform)
Input: Time series data of length L, number of kernels n = X/2
Output: Feature vector of shape (2 × n) = X
1: kernels ← list of n random kernels of random length l, weight w, bias b, dilation d, padding p
2: feature_maps ← empty list
3: for each kernel k in kernels do
4:   h_ppv, h_max ← convolve(k, x)
5:   feature_maps.append(h_ppv, h_max)
6: end for
7: return feature_maps

## 7.2 Dataset Statistics

We utilized a total of 20 datasets following TimesNet (Wu et al., 2023) and TSLANet (Eldele et al., 2024). Datasets with their corresponding number of channels (D), sequence length, Train samples, Test samples, number of classes, and domain information are presented in S-Table 1.
| Datasets | Channels | Length | Train | Test | Classes | Domain |
|---|---|---|---|---|---|---|
| *Benchmark datasets* | | | | | | |
| EthanolConcentration (EC) | 3 | 1751 | 261 | 263 | 4 | Alcohol Industry |
| FaceDetection (FD) | 144 | 62 | 5890 | 3524 | 2 | Face (250Hz) |
| Handwriting (HW) | 3 | 152 | 150 | 850 | 26 | Smart Watch |
| Heartbeat (HB) | 61 | 405 | 204 | 205 | 2 | Clinical |
| JapaneseVowels (JV) | 12 | 29 | 270 | 370 | 9 | Audio |
| PEMS-SF (PS) | 963 | 144 | 267 | 173 | 7 | Transportation |
| SelfRegulationSCP1 (SCP1) | 6 | 896 | 268 | 293 | 2 | Health (256Hz) |
| SelfRegulationSCP2 (SCP2) | 7 | 1152 | 200 | 180 | 2 | Health (256Hz) |
| SpokenArabicDigits (SA) | 13 | 93 | 6599 | 2199 | 10 | Voice (11025Hz) |
| UWaveGestureLibrary (UG) | 3 | 315 | 120 | 320 | 8 | Gesture |
| *Additional datasets* | | | | | | |
| AtrialFibrillation (AF) | 2 | 640 | 15 | 15 | 3 | ECG |
| BasicMotions (BM) | 6 | 100 | 40 | 40 | 4 | Human Activity Recognition |
| Cricket (CR) | 6 | 1197 | 108 | 72 | 12 | Human Activity Recognition |
| FingerMovements (FM) | 28 | 50 | 316 | 100 | 2 | EEG |
| HandMovementDirection (HMD) | 10 | 400 | 160 | 74 | 4 | EEG |
| MotorImagery (MI) | 64 | 3000 | 278 | 100 | 2 | EEG |
| PenDigits (PD) | 2 | 8 | 7494 | 3498 | 10 | Motion |
| PhonemeSpectra (PHS) | 11 | 217 | 3315 | 3353 | 39 | Audio |
| RacketSports (RS) | 6 | 30 | 151 | 152 | 4 | Human Activity Recognition |
| StandWalkJump (SWJ) | 4 | 2500 | 12 | 15 | 3 | ECG |

S-Table 1: Publicly available datasets with their statistics utilized in this paper.

## 7.3 **Comparison With Mamba**

In addition to the robust baselines presented in this paper, we also evaluate the performance of Mamba (Gu & Dao, 2023). To conduct this experiment on the 10 benchmark datasets listed in Table 1, we process the MTS data directly using Mamba modules with a regular scanning protocol.
The processed signals are then fed into two-stage MLPs, following the strategy outlined in Figure 1, to obtain class logits. The experimental results, shown in S-Figure 4, clearly demonstrate the effectiveness of our method (TSCMamba), as it outperforms all the benchmarks across the 10 datasets, underscoring the necessity of our approach. These experiments were conducted using the same hyperparameters as those used to achieve the results in Table 1.

![18_image_0.png](18_image_0.png)

S-Figure 4: Performance comparison of TSCMamba against a regular Mamba module applied directly to the time series data.

## 7.4 **Comparison With BiMamba**

In this section, we present the comparative results of TSCMamba's channel and token (or time) tango scanning (Eqs. 12 and 11) against BiMamba (Schiff et al., 2024). To ensure a fair comparison, we utilized the same hyperparameters that were employed to achieve the best results shown in Table 1. For BiMamba, we used the official code provided by Schiff et al. (2024), with tied weights for the in and out projections. As illustrated in S-Figure 5, TSCMamba consistently outperforms BiMamba across all benchmark datasets, underscoring the effectiveness of tango scanning.

![18_image_1.png](18_image_1.png)

S-Figure 5: Performance comparison in accuracy of TSCMamba with tango scanning versus BiMamba (Schiff et al., 2024) across the benchmark datasets.

## 7.5 **Ablation On Tango Scanning**

This section presents the results of our experiments with both channel and token tango scanning. We report the outcomes of token tango scanning with the channel tango scanning module turned off and, conversely, results for channel tango scanning with the token tango scanning module disabled. As demonstrated in S-Figure 6, both modules are crucial for achieving optimal accuracy, highlighting their individual importance in the overall performance of our model.
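As a toy illustration of combining a forward scan with a scan over the reversed sequence without re-reversing the second output (one reading of how tango scanning differs from output-reversing bidirectional SSMs such as BiMamba), consider the minimal NumPy sketch below. It is an assumption-laden simplification: a fixed diagonal linear SSM stands in for Mamba's input-dependent selective scan, and the two directional outputs are simply summed; the actual Eqs. 11 and 12 are not reproduced here.

```python
import numpy as np

def ssm_scan(x, a=0.9, b=1.0, c=1.0):
    # Toy diagonal linear SSM: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.
    # This stands in for a Mamba block; the real selective scan is input-dependent.
    h = np.zeros_like(x[0])
    ys = []
    for x_t in x:
        h = a * h + b * x_t
        ys.append(c * h)
    return np.stack(ys)

def tango_like_scan(x):
    # Forward pass plus a pass over the reversed sequence; the second output
    # is not re-reversed, so the combined output is unchanged if the input
    # sequence is reversed.
    return ssm_scan(x) + ssm_scan(x[::-1])

x = np.random.default_rng(0).normal(size=(10, 4))  # (time, channels)
out_fwd = tango_like_scan(x)
out_rev = tango_like_scan(x[::-1])
```

Note that summing the un-reversed backward output makes the result invariant to reversing the input, whereas re-reversing it (as in BiMamba-style schemes) yields equivariance instead.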
![19_image_0.png](19_image_0.png)

S-Figure 6: Ablation on the tango scanning module, compared against only token tango scanning and only channel tango scanning.

## 7.6 **Accuracy On Multiple Runs**

![19_image_1.png](19_image_1.png)

S-Figure 7: Performance of TSCMamba over 5 random runs. The mean performance is shown as green bars, with the standard deviation represented by red error bars that are very small. (Best viewed when zoomed in.)

## 7.7 **Additional Datasets Results From UEA**

S-Table 2: Additional classification results on the UEA datasets in terms of accuracy (as %). The ranks are calculated using the Wilcoxon signed-rank test (lower is better).

| Dataset | TSCMamba (Ours) | TSLANet (2024) | GPT4TS (2023) | TimesNet (2023) | ROCKET (2020) | CrossF. (2023) | PatchTST (2023) | MLP (2023) | TS-TCC (2021) | TS2VEC (2022) |
|---|---|---|---|---|---|---|---|---|---|---|
| AtrialFibrillation | 67.00 | 40.00 | 33.33 | 33.33 | 20.00 | 46.66 | 53.33 | 46.66 | 33.33 | 53.33 |
| BasicMotions | 100.00 | 100.00 | 92.50 | 100.00 | 100.00 | 90.00 | 92.50 | 85.00 | 100.00 | 92.50 |
| Cricket | 98.61 | 98.61 | 8.33 | 87.50 | 98.61 | 84.72 | 84.72 | 91.67 | 93.06 | 65.28 |
| FingerMovements | 69.00 | 61.00 | 57.00 | 59.38 | 61.00 | 64.00 | 62.00 | 64.00 | 44.00 | 51.00 |
| HandMovementDirection | 71.62 | 52.70 | 18.92 | 50.00 | 50.00 | 58.11 | 58.11 | 58.11 | 64.86 | 32.43 |
| MotorImagery | 62.00 | 62.00 | 50.00 | 51.04 | 53.00 | 61.00 | 61.00 | 61.00 | 47.00 | 47.00 |
| PenDigits | 98.54 | 98.94 | 97.74 | 98.19 | 97.34 | 93.65 | 99.23 | 92.94 | 98.51 | 97.40 |
| PhonemeSpectra | 24.66 | 17.75 | 3.01 | 18.24 | 17.60 | 7.55 | 11.69 | 7.10 | 25.92 | 8.23 |
| RacketSports | 91.45 | 90.79 | 76.97 | 82.64 | 86.18 | 81.58 | 84.21 | 78.95 | 84.87 | 74.34 |
| StandWalkJump | 73.33 | 46.67 | 33.33 | 53.33 | 46.67 | 53.33 | 60.00 | 60.00 | 40.00 | 46.67 |
| Average | 75.62 | 66.85 | 47.11 | 63.36 | 63.04 | 64.06 | 66.68 | 64.54 | 63.15 | 56.82 |
| Rank | 1.65 | 3.90 | 8.60 | 5.70 | 5.70 | 6.00 | 4.35 | 5.95 | 5.45 | 7.70 |
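As a sanity check, the Average row of S-Table 2 follows directly from the per-dataset accuracies; the short script below recomputes it for two columns (the Rank row uses a Wilcoxon-based procedure and is not recomputed here).

```python
# Accuracies (%) from S-Table 2 for two of the methods on the ten
# additional UEA datasets, in row order (AF, BM, CR, FM, HMD, MI, PD, PHS, RS, SWJ).
acc = {
    "TSCMamba": [67.00, 100.00, 98.61, 69.00, 71.62, 62.00, 98.54, 24.66, 91.45, 73.33],
    "TSLANet":  [40.00, 100.00, 98.61, 61.00, 52.70, 62.00, 98.94, 17.75, 90.79, 46.67],
}
# Average accuracy per method, rounded to two decimals as in the table.
avg = {m: round(sum(v) / len(v), 2) for m, v in acc.items()}
```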
Review 1:

Summary: The paper considers the time series classification problem. The standard approach using deep learning is to apply CNNs for local pattern-based features. On the other hand, this approach fails to capture global patterns. The paper proposes to use features based on wavelets of the time series, which can capture global patterns such as cyclic behaviours. More precisely, the proposed method also employs a state-space model to switch between local feature-based CNNs and wavelet feature-based MLPs. The experimental results using benchmark datasets show that the proposed method outperforms other baselines.

Strengths and Weaknesses:

Strengths:
- The paper presents a well-motivated approach to capturing the global features of time series.
- The experimental results show that the proposed method performs better than other alternatives.

Weaknesses:
- The effects or advantages of the newly proposed components are not fully clear. In particular, how the state-space model and/or the wavelet-based global features contribute should be clarified.

Requested Changes: As raised above, it is not clear how much the newly proposed components, the state-space model and the global features, affect the performance. For example, to clarify the effect of the state-space model, the paper could compare the proposed method with a simple baseline combining local and global features. If such additional experiments are performed, the technical contribution would be clearer.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The authors introduce a multi-view time-frequency neural network architecture designed for multivariate time series classification. This framework leverages the wavelet transform to break down the signal into its spectral and temporal components. It then uses a combination of a CNN (inspired by the ROCKET method) and an MLP to extract local and global features, respectively.
To identify discriminative patterns in the temporal domain, the authors propose a switch mechanism that selects between local and global temporal patterns, integrating them with CWT features. For capturing long-term dependencies efficiently, the state-of-the-art state-space method Mamba, introduced in 2023, is utilized due to its linear time complexity.

Strengths and Weaknesses:

**Strengths:**
- **[S1]:** The authors integrate multiple advanced modules, such as time-frequency transforms, local and global neural network feature extraction, and the recent state-of-the-art Mamba architecture, to achieve high performance in multivariate time series classification. They also introduce a scanning scheme tailored to the Mamba architecture to identify salient features within complex contexts.
- **[S2]:** There is a notable performance improvement of approximately 6% in classification accuracy across 10 subsets of the UEA repository.
- **[S3]:** The proposed method demonstrates greater computational efficiency compared to baseline models.
- **[S4]:** The paper includes a thorough ablation study of the model components and various scanning techniques.

**Weaknesses:**
- **[W1]:** Traditional time series classification (TSC) methods that disentangle time and frequency components, including both older [1] and more recent works [2], have dominated the field. The authors should better clarify their choice of the wavelet transform over other spectral transformation methods and include relevant works in the related work section. Additionally, several unsupported statements, particularly in paragraphs 2 and 4 on page 1, should be either referenced or removed.
- **[W2]:** The introduction section is very extensive and the paper's motivation is not clearly highlighted. There is a deviation from the current trend of general-purpose backbones and unsupervised pretraining tasks, as well as from testing on multiple time series tasks and datasets.
- **[W3]:** The experiments section is somewhat limited, as only 10 subsets from the UEA datasets (out of 26 in total) are selected. Other papers, including cited works, use additional datasets for classification and clustering. The authors should consider whether their architecture can be extended to other types of tasks and datasets (as in [2]). - **[W4]:** The authors should compare their method directly to Mamba as a baseline since it is the component showing the most significant effect on the classification performance. It has also been recently incorporated in the Time Series Library [3,4]. - **[W5]:** In the experiments section, results appear identical to those in the original TimesNet paper. Random seeds and multiple runs to capture model variance are not taken into account. - **[W6]:** The hyperparameter selection process for the proposed method and the baselines should be extensively detailed. The UEA datasets do not provide intermediate validation sets for parameter tuning, and some researchers validate and test on the test set, which may highlight models that overfit. If a validation set is used, this should be clearly stated in the experimental setup. - **[W7]:** In the ablation experiments (Table 3), the average performance per combination should be mentioned to facilitate comparison with the baselines in Table 1. [1] Zhang, X., Zhao, Z., Tsiligkaridis, T., & Zitnik, M. (2022). Self-supervised contrastive pre-training for time series via time-frequency consistency. Advances in Neural Information Processing Systems, 35, 3988-4003. [2] Eldele, Emadeldeen, et al. "Tslanet: Rethinking transformers for time series representation learning." arXiv preprint arXiv:2404.08472 (2024). [3] Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., & Long, M. (2022). Timesnet: Temporal 2d-variation modeling for general time series analysis. arXiv preprint arXiv:2210.02186. 
[4] https://github.com/thuml/Time-Series-Library

Requested Changes: Based on the weaknesses above, the following changes can significantly improve the paper's main content and contribution.

- **Supporting Motivation and Highlighting Contribution:** The introduction should be significantly improved to better convey the main message of the study and position the contribution of the proposed architecture, including specific references that support the comments mentioned above. (Refer to **W1,W2**)
- **Experimental Evaluation:** The experimental evaluation is poor and could be enhanced by more experiments as described above. Details on the reproducibility of the results and model variances are significant and should be provided at least in the supplementary material. (Refer to **W3,W4,W5,W6**)
- **Comparative Analysis:** The experimental evaluation could benefit from comparisons with methods that have similar components, in addition to the extensive comparisons with transformer-based architectures that are not specifically tailored to TSC but rather to forecasting. (Refer to **W3,W4,W7**)

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: The paper aggregates techniques for feature extraction from time-series data and combines them with the Mamba SSM to extract quantities of interest. The authors describe a bidirectional SSM that leads to a reversal invariance. The authors validate their findings on common time-series classification datasets.

Strengths and Weaknesses:

Strengths
- The proposed method combines time-series techniques into a single model.
- The reported experimental performance is good.

Weaknesses
- The paper is not reproducible.
- No code is published.
- Arguments are frequently presented without backing (experiment / proof / reference).
- It is unclear why the proposed "Tango Scanning" should perform well.
- Some related work that applies similar design principles is not referenced.
Requested Changes:

### Contribution Claims are too Strong

- "However, effective multi-view strategies have not been well investigated for enhancing deep learning-based TSC in the literature." There are models like [3] and the references therein that split the time series into "global" and "local" features.
- "Moreover, we propose an innovative Mamba-based scanning scheme, called tango scanning", cf. bidirectional SSMs [1], [2], [4].
- "This scanning is demonstrated to be more effective in modeling the relationships in the sequence than that of vanilla Mamba block for TSC" - I did not find a result in the paper that backs this claim.
- "We first use the CWT for multi-scale representation of MTS in the general task of TSC.", cf. [5], which uses the wavelet transform for TSC.

### Back arguments by reference or experiment.

- Wavelets are local in the frequency and time domain.
- The paper presents engineering choices and challenges without proper reasoning. For example:
  - p.7: "If the history tokens do not contain informational contexts, Mamba may provide less effective predicted output. To alleviate this potential limitation of causal scanning, we construct a dedicated module to extend a vanilla Mamba block." - It is unclear how reversing the sequence can add information.
  - p.8: "Performing Tango Scanning in Time and Channel Dimension" - How you arrived at this design choice is unclear. There are neither references nor experimental ablations.

### Release Code.

While the authors write in section 4.3 that they plan to release code, the results are not reproducible without implementation details. The paper's main contribution is in aggregating time-series tools into a feature extraction framework for classification that performs well on datasets. Without the actual implementation, I find it hard to justify that this is interesting to the TMLR community.

### Bidirectional SSM

- The proposed "Tango Scanning" is not novel in the sense the authors describe it.
- "Bidirectional" SSMs are well known, cf. [1], [2], [4]. The difference to, e.g., [Schiff et al. (2024)][1] is that the output is not reversed; [4], e.g., also uses non-reversed outputs but without weight tying.
- The proposed method is enforcing a reversibility invariance, i.e. $f(x, t)=f(x, \text{reverse}[t])$, whereas typically a reversibility equivariance is enforced, $\text{reverse}[f(x, t)]=f(x, \text{reverse}[t])$, cf. [1]. I am unaware of prior work enforcing this constraint, possibly because it might be restrictive in a forecasting setting. [4] claims that an advantage of bidirectional SSMs is their resistance to noise.
- Bidirectional scanning is not ablated. It is not immediate to me why bidirectional scanning would have an edge over forward scanning only.

### Further Comments

- Table 3: no std. deviations
- p.8: The SSMs in [Schiff et al. (2024)][1] are weight-tied, meaning that they are the same SSM.

[1]: https://arxiv.org/abs/2403.03234
[2]: https://arxiv.org/abs/2403.11144v3
[3]: https://proceedings.neurips.cc/paper_files/paper/2023/hash/28b3dc0970fa4624a63278a4268de997-Abstract-Conference.html
[4]: https://arxiv.org/abs/2402.08678
[5]: https://ieeexplore.ieee.org/abstract/document/1614807

Broader Impact Concerns: I have no concerns.

==================================================
# Active Acquisition For Multimodal Temporal Data: A Challenging Decision-Making Task

Jannik Kossen1∗† Cătălina Cangea2† Eszter Vértes2 Andrew Jaegle2 Viorica Patraucean2 Ira Ktena2 Nenad Tomasev2 Danielle Belgrave2

1*OATML, Department of Computer Science, University of Oxford* 2*Google DeepMind*

Reviewed on OpenReview: *https://openreview.net/forum?id=Gbu1bHQhEL*

## Abstract

We introduce a challenging decision-making task that we call *active acquisition for multimodal temporal data* (A2MT). In many real-world scenarios, input features are not readily available at test time and must instead be acquired at significant cost. With A2MT, we aim to learn agents that actively select which modalities of an input to acquire, trading off acquisition cost and predictive performance. A2MT extends a previous task called active feature acquisition to temporal decision making about high-dimensional inputs. We propose a method based on the Perceiver IO architecture to address A2MT in practice. Our agents are able to solve a novel synthetic scenario requiring practically relevant cross-modal reasoning skills. On two large-scale, real-world datasets, Kinetics-700 and AudioSet, our agents successfully learn cost-reactive acquisition behavior. However, an ablation reveals they are unable to learn adaptive acquisition strategies, emphasizing the difficulty of the task even for state-of-the-art models. Applications of A2MT may be impactful in domains like medicine, robotics, or finance, where modalities differ in acquisition cost and informativeness.

## 1 Introduction

In making a clinical diagnosis, the medical professional must carefully choose a specific set of tests to diagnose the patient quickly and correctly. It is of crucial importance to choose the right test at the right time, and tests should only be performed when useful, as they may otherwise cause unnecessary patient discomfort or financial expense.
Recently, large-scale datasets of medical treatment records have become available (Hyland et al., 2020; Johnson et al., 2020). They may potentially facilitate improvements in medical domain knowledge and patient care, for example by allowing us to learn which tests to perform when. While prior work has demonstrated that machine learning can be used to inform complex diagnoses from simple measurements, see, for example, Liotta et al. (2003); Diaz-Pinto et al. (2022), treatment records present a significant modelling challenge as they contain temporally sparse observations from high-dimensional modalities, e.g. X-Rays, MRIs, blood tests, or genetic data.

Prior work in *active feature acquisition* (AFA) (Greiner et al., 2002; Melville et al., 2004; Ling et al., 2004; Zubek et al., 2004; Sheng & Ling, 2006; Saar-Tsechansky et al., 2009) has similarly considered the cost of feature acquisition at test time on a per-datum basis: given a test input for which all features are missing initially, which features should one acquire to best trade off predictive performance and the cost of feature acquisition? This is different to active learning (Settles, 2010), which minimizes the number of label acquisitions needed for model *training*. Concurrent methods in AFA often use a Bayesian experimental design approach (Lindley, 1956; Chaloner & Verdinelli, 1995; Sebastiani & Wynn, 2000), acquiring features that maximise the expected information gain with respect to the prediction (Ma et al., 2018; Li & Oliva, 2021; Lewis et al., 2021). Alternatively, AFA can be phrased as a reinforcement learning task where agents optimize the trade-off objective directly (Shim et al., 2018; Kachuee et al., 2019; Zannone et al., 2019; Janisch et al., 2019; 2020; Yin et al., 2020).

∗Work done while interning at Google DeepMind. †Correspondence to jannik.kossen@cs.ox.ac.uk and ccangea@deepmind.com.

![1_image_0.png](1_image_0.png)

Figure 1: In many practical applications, features are not available a priori at test time and have to be acquired at a real-world cost to allow for the prediction of an associated label. In *Active Acquisition for Multimodal Temporal Data*, we aim to learn agents that efficiently acquire for multimodal temporal inputs: (a) at each timestep, the agent decides which modalities of the input it acquires, paying a per-modality acquisition cost; (b) then, a separate model predicts given the sparse sequence of observations; (c) lastly, the agent gets rewarded for low prediction loss and small acquisition cost.

Notably, prior work in AFA assumes static data: although acquisitions are sequential, feature values do not evolve along a temporal dimension. Furthermore, the features themselves usually correspond to low-dimensional observations, i.e. single values in a tabular dataset.

In this work, we propose *active acquisition for multimodal temporal data* (A2MT). Taking the above medical setup as motivation, we extend the familiar setting of AFA in two key ways: (1) We assume that inputs are sequences that evolve temporally. Our agents will need to learn not only which features to acquire, but also when to acquire them. (2) We no longer assume that inputs are unimodal and low-dimensional. Instead, we assume that each input comprises a collection of high-dimensional modalities, and that acquisitions are made for entire modalities at each timestep. With these extensions, A2MT generalizes AFA and reduces the gap to practical applications that are often both temporal and multimodal. A2MT can also find application outside the medical domain (cf. §6).

To study A2MT in a controlled environment, we propose a set of synthetic scenarios of increasing difficulty that are temporal and multimodal, both key requirements for A2MT.
Further, we propose to study A2MT on audio-visual datasets, concretely AudioSet (Gemmeke et al., 2017) and Kinetics-700-2020 (Smaira et al., 2020). These provide a challenging testbed for A2MT and avoid some of the complications of working with medical data. We propose a method based on Perceiver IO (Jaegle et al., 2021)—a modality-agnostic architecture that can be applied directly to a large variety of real-world inputs—and we explore different reinforcement learning techniques to train the agent. Our method is able to solve a subset of the synthetic tasks we propose and provides reasonable performance on the real-world datasets. However, further investigation reveals that our method can ultimately be outperformed by a non-adaptive ablation, highlighting the difficulty of the A2MT task and, consequently, opportunities for future work.

In summary, our contributions are:

- We introduce the *active acquisition for multimodal temporal data* (A2MT) scenario (§2).
- We suggest both synthetic and real-world datasets that motivate A2MT and allow for convenient benchmarking of methods (§3).
- We propose a novel method based on Perceiver IO to tackle A2MT (§4), provide a thorough empirical study (§5), and discuss key areas of improvement for future work (§7).

## 2 Active Acquisition For Multimodal Temporal Data

With A2MT (fig. 1), we study strategies for cost-sensitive acquisition when data is both multimodal and temporal. We train an agent policy π that makes a binary acquisition decision per modality and timestep. After a fixed number of timesteps, a model f makes a sequence-level prediction given the sparsely acquired observations. The agent then observes a reward that consists of two terms: (1) The agent gets rewarded in proportion to the negative predictive loss given the observations. (2) Acquisitions come at a (modality-specific) cost and the agent is penalized for each acquisition made.
Term (1) compels the agent to acquire often, as additional observations should improve the quality of the predictions of f. However, this increases the penalty incurred from term (2). The agent therefore needs to trade off acquisition cost and prediction reward, learning which modalities in the input are worth acquiring and when it is worth acquiring them. To make the most of a limited acquisition budget, we aim to learn agents with individualized acquisition strategies that adapt to the sequences at hand. For example, agents may learn to make use of interactions between modalities: a past observation in one modality (e.g. a suspicious value in a blood test) may lead to the acquisition of another modality (e.g. a more specialised test), achieving both accuracy and cost-efficiency.

A2MT requires models that can successfully accommodate sparse, multimodal, and temporal data. This is challenging for state-of-the-art models, even without the additional complexities of active selection required for successful A2MT.

In A2MT (Alg. 1), we assume access to a fully observed training set of input-output pairs ((x1, y1), . . . , (xN , yN )). For classification tasks, for example, the yi are labels, yi ∈ {1, . . . , C}. Each input xi is a sequence of observations xi = (xi,1, . . . , xi,T ). At each timestep t, the observation xi,t decomposes into M modalities xi,t = (xi,t,1, . . . , xi,t,M ); each modality may be high-dimensional, xi,t,m ∈ R^dm. For example, xi,t,m could be a single frame in a video where we collapse the image height H, width W, and color channels C into a single axis with dimensionality dm = H · W · C. We focus on a single sample and drop the leading axis, x = xi, to avoid notational clutter. We use colons to indicate 'slices' of inputs along a particular axis, e.g. x1:t−1 = (x1, . . . , xt−1).

At each timestep t ∈ {1, . . . , T}, we obtain a set of acquisition decisions across modalities, or actions, at = (at,1, . . . , at,M ), by sampling from the agent policy, at ∼ π(·|x˜1:t−1, a1:t−1; θ). Here, at,m ∈ {0, 1} is a binary indicator of whether modality m was acquired at time t. We write x˜ instead of x to highlight the fact that the inputs may contain missing entries, and θ are the trainable agent parameters. Here, π gives the joint probability over all possible acquisition decisions at that timestep, including extremes such as acquiring all or none of the modalities, i.e. π(at|x˜1:t−1, a1:t−1; θ) ∈ [0, 1]^M. At each timestep t and for each modality m, we acquire xt,m only if at,m = 1. We summarize the set of all actions across timesteps as a = (a1, . . . , aT ).

Lastly, we assume access to a model f which makes a prediction yˆ given an input x˜. After completing the acquisition process for a given test sample, we use this model to predict f(x˜1:T ) = ˆy. Consequently, we require that f can predict for multimodal inputs with features missing arbitrarily across time and modalities.

Algorithm 1 A2MT
Inputs: Test input x, agent π, model f.
1: for t = 1 to T do
2:   Sample at ∼ π(·|x˜1:t−1, a1:t−1; θ).
3:   for m = 1 to M do
4:     if at,m = 1 then
5:       Acquire: x˜t,m ← xt,m.
6:     else
7:       Do not acquire: x˜t,m ← ∅.
8:     end if
9:   end for
10: end for
11: Return prediction f(x˜1:T ).

We train agents to maximize the following reward:

$$R=\mathbb{E}\left[-C(\mathbf{a})-\mathcal{L}(f(\tilde{\mathbf{x}}_{1:T}),y)\right],\quad\text{where}\quad C(\mathbf{a})=\sum\nolimits_{t=1}^{T}\sum\nolimits_{m=1}^{M}c_{m}a_{t,m}.\qquad(1)$$

Here, the expectation is over the data (x, y) and actions a from the policy; C(a) gives the total cost of acquisition along the sequence; cm is a modality-specific cost, and L(f(x˜1:T ), y) is the log-likelihood loss. Equation (1) summarizes the problem of A2MT: trading off acquisition costs against predictive performance for multimodal and temporal inputs. In §4, we detail how we optimize this objective using reinforcement learning.
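The acquisition loop of Algorithm 1, together with a Monte Carlo sample of the reward in eq. (1), can be sketched as follows; `policy`, `predict`, and `loss` are hypothetical stand-ins for π, f, and L:

```python
def a2mt_episode(x, policy, predict, loss, costs):
    """One A2MT episode (cf. Algorithm 1) plus a Monte Carlo sample of
    the reward in Eq. (1). `policy`, `predict`, and `loss` are stand-ins."""
    T, M = len(x), len(x[0])
    x_obs = [[None] * M for _ in range(T)]  # None marks a missing entry
    actions = []
    for t in range(T):
        a_t = policy(x_obs[:t], actions)    # binary decision per modality
        for m in range(M):
            if a_t[m] == 1:
                x_obs[t][m] = x[t][m]       # acquire x_{t,m}
        actions.append(a_t)
    y_hat = predict(x_obs)                  # predict from the sparse sequence
    acq_cost = sum(costs[m] * a_t[m] for a_t in actions for m in range(M))
    reward = -acq_cost - loss(y_hat)        # Monte Carlo sample of Eq. (1)
    return y_hat, reward

# Toy illustration: always acquire modality 0, never modality 1.
x = [[1, 2], [3, 4], [5, 6]]                # T=3 timesteps, M=2 modalities
policy = lambda obs, acts: [1, 0]
predict = lambda obs: sum(v for row in obs for v in row if v is not None)
y_hat, reward = a2mt_episode(x, policy, predict, loss=lambda y: 0.0,
                             costs=[0.1, 1.0])
```

With this fixed policy, only modality 0 is observed, so the toy `predict` sums 1 + 3 + 5, and the reward is the negative acquisition cost of three observations at cost 0.1 each.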
We refer to §D for a discussion of A2MT in terms of Markov Decision Processes.

## 3 Datasets For A2MT

While the medical domain acts as important practical *motivation* for A2MT, medical datasets, such as Hyland et al. (2020); Johnson et al. (2020), usually require significant domain expertise. And while the eventual application of A2MT methods in this domain is crucial, we believe that a mix of synthetic and real *non-medical* data can serve as a widely accessible testbed for the machine learning community to develop robust methodology. We therefore introduce datasets suitable for developing A2MT methods.

## 3.1 Synthetic Datasets

We introduce three scenarios with varying levels of difficulty and a clear optimal acquisition strategy, designed to test the cross-modal reasoning capabilities of the agents.

![3_image_0.png](3_image_0.png)

Figure 2: The synthetic scenarios allow for sparse acquisition while keeping perfect accuracy. This requires agents capable of cross-modal reasoning. (Label is 9 in the above.)

Concretely, we create a dataset (cf. algorithm B.1) of fixed-length sequences with two modalities: counter and digit. The digit modality is a sequence of random numbers of length T, e.g. 0131043443. For the counter modality, we draw random 'starting values' uniformly from the set of valid numbers. For each starting value, we create a sequence counting down to 0 from that starting value, and then concatenate all countdown sequences. I.e. when drawing the starting values 2, 3, and 2, we generate the sequences 210, 3210, and 210, and the final counter modality is their concatenation, 2103210210. We cut sequences to length T. The label attached to each sequence is the sum of the digit modality at all timesteps where the counter modality is zero, e.g. 3+3+3=9 in this case.
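The generation procedure just described can be sketched in a few lines; this is a simplified stand-in for the paper's algorithm B.1 (which is not reproduced here), with `n_symbols` an assumed parameter for the set of valid numbers:

```python
import random

def make_example(T, n_symbols, rng):
    """One synthetic example: a random digit sequence, a concatenated-
    countdown counter sequence cut to length T, and a label summing the
    digits at timesteps where the counter is zero."""
    digit = [rng.randrange(n_symbols) for _ in range(T)]
    counter = []
    while len(counter) < T:
        start = rng.randrange(n_symbols)
        counter.extend(range(start, -1, -1))  # e.g. start=3 -> 3, 2, 1, 0
    counter = counter[:T]                     # cut to length T
    label = sum(d for d, c in zip(digit, counter) if c == 0)
    return digit, counter, label

# The worked example from the text: digit 0131043443, counter 2103210210.
digit = [0, 1, 3, 1, 0, 4, 3, 4, 4, 3]
counter = [2, 1, 0, 3, 2, 1, 0, 2, 1, 0]
label = sum(d for d, c in zip(digit, counter) if c == 0)  # 3 + 3 + 3
```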
To increase the complexity of this task, we can replace the sequence of raw numbers with a sequence of images, for example replacing raw numbers with matching MNIST digits (LeCun, 1998) for both modalities. We also consider replacing the numbers of the counter modality with *audio* sequences from the spoken digit dataset (Jackson, 2017). Figure 2 shows these synthetic dataset variants. Further, one can increase task complexity by adjusting the sequence length and the number of unique symbols for the modalities.

To solve the synthetic task, agents need to reason about interactions between modalities; a key feature of our practical motivation for A2MT. Each modality in the synthetic scenario offers an ideal strategy to save acquisition cost without sacrificing predictive accuracy. (1) The agent can learn to acquire the digit modality only if the counter modality is zero, because only then is a digit relevant for the label. (2) The agent can further learn to *skip* observations in the counter modality, because of the regular pattern they follow.

## 3.2 Audio-Visual Datasets

Additionally, we propose to apply large-scale audio-visual classification datasets, such as AudioSet (Gemmeke et al., 2017) or Kinetics-700-2020 (Smaira et al., 2020), to the A2MT setting. These datasets are multimodal and temporal, each input being a sequence of sound and images. We divide each input video into a set of temporally aligned images and audio segments. Compared to the synthetic scenarios, these datasets offer the complexity and noise of real-world data, ideally allowing A2MT agents to optimize the trade-off objective in interesting ways.

In fig. 3, we illustrate the variety in inputs for the AudioSet dataset: (a) shows an example where both modalities are informative towards the label, in (b) the change in the image modality could be a signal for the agent to revisit the audio modality, (c) shows an example where the image modality is seemingly unrelated to the sound-based label, and in (d) all image frames are identical.

![4_image_0.png](4_image_0.png)

Figure 3: Four example sequences from the AudioSet training set. The labels associated with these inputs are (a) Music, (b) Music, (c) Speech, and (d) Electronic music. The audio signal is often more informative of the label than the images for AudioSet. Inputs are downsampled in the above visualization.

We observe empirically that model predictions degrade consistently if we mask out parts of the input at evaluation time (cf. §5). Further, we find that models trained on inputs which span longer durations (increasing the temporal stride to keep the input size constant) perform better. These results suggest there is temporal variety in these datasets and that additional observations improve model predictions. This supports the idea that A2MT-style selection of the *right* inputs is possible.

## 4 Perceiver IO For A2MT

We use Perceiver IO (Jaegle et al., 2021) for both the predictive model f and the agent π. Perceiver IO is a suitable architecture for A2MT as it is modality-agnostic and scales well to large input sizes. Further, Perceiver IO gracefully handles missing data: mask tokens are used to signal where features are missing in the input, cf. Jaegle et al. (2021). For the model f, we make no changes to the standard Perceiver IO architecture. For the agent π, we condition on previous actions by appending them to the *decoder* input of the Perceiver.

We propose two variants of our approach: Inputs for the synthetic datasets tend to be somewhat smaller, and we can afford a computationally more involved routine.
In contrast, for the real-world datasets, we use a computationally leaner setup that allows for sequential application of the Perceiver and training of the agent at scale. We give full details in §C.

## 4.1 Variant 1: Small Data Regime

For the synthetic scenarios, the agent π and the model f are two fully separate Perceiver IO models that do not share any parameters. They are trained jointly: the agent acquires the input data features for the classifier. For policy training, we can make use of the straight-through Gumbel-Softmax estimator (Jang et al., 2017) due to the simple unmasking effect of actions in A2MT. The Gumbel trick (Jang et al., 2017; Maddison et al., 2017) allows for backpropagation through the discrete action variables and is generally considered a low-variance estimator in comparison to alternatives such as REINFORCE (Williams, 1992). Concretely, joint training directly follows algorithm 1 for each input in a batch of training samples: we iteratively apply the agent π, unmasking modalities as given by the sampled actions a. We then predict with the model f given the partially observed input sequence x˜1:T. We can compute a Monte Carlo sample of the reward, eq. (1), given the observed prediction loss L(f(x˜1:T ), y) and cost of acquisition C(a), where L is the log-likelihood loss. Due to the Gumbel parameterization of the actions, we can directly apply gradient-based optimization to the agent parameters θ. We simultaneously train the model f by minimizing the loss L in the objective with respect to the parameters of f.

## 4.2 Variant 2: Large Data Regime

We apply the Perceiver-based agent *repeatedly* during A2MT, i.e. up to T = 25 times in the scenarios that we study. This increases computational cost, particularly for gradient computation during training.
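Concretely, the per-input acquisition rollout that both variants share (cf. algorithm 1) can be sketched as follows; this is a minimal sketch with stubbed agent and model, and all names (`rollout`, the scalar per-acquisition `cost`) are our own simplifications:

```python
import numpy as np

def rollout(agent, model, x, y, loss_fn, cost, T):
    """One A2MT episode: sequentially unmask inputs, then predict and score.

    x has shape (num_modalities, T, feature_dim); the observed input x_obs
    starts fully masked (zeros stand in for mask tokens here).
    """
    num_modalities = x.shape[0]
    a = np.zeros((num_modalities, T))   # a[m, t] = 1: acquire modality m at t
    x_obs = np.zeros_like(x)            # partially observed sequence so far
    for t in range(T):
        a[:, t] = agent(x_obs, t)       # acquire/skip decision per modality
        x_obs[:, t] = x[:, t] * a[:, t][:, None]
    pred = model(x_obs)                 # predict from the partial sequence
    # Monte Carlo sample of the reward: negative loss minus acquisition cost.
    reward = -loss_fn(pred, y) - cost * a.sum()
    return reward, a

# Stubs standing in for the Perceiver IO agent and classifier:
always_acquire = lambda x_obs, t: np.ones(x_obs.shape[0])
mean_model = lambda x_obs: x_obs.mean()
sq_loss = lambda pred, y: (pred - y) ** 2

reward, actions = rollout(always_acquire, mean_model,
                          x=np.ones((2, 10, 3)), y=1.0,
                          loss_fn=sq_loss, cost=0.01, T=10)
```

Variant 1 differentiates through such a rollout via the Gumbel relaxation of the discrete actions, while Variant 2 instead estimates gradients with a policy gradient method.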
To save compute for the large-scale inputs of real-world datasets, we share Perceiver IO encoders between the predictive model f and agent π; unlike in the small data regime, f and π are no longer fully separate models. We further pre-train the encoder and the decoder of f. In other words, in the large data regime, the classifier f is no longer trained on inputs acquired by the policy π. Lastly, when training the agent decoder parameters, we fix both the (shared) encoder parameters as well as the decoder parameters of f. To ensure that the classifier—and, in particular, the shared encoder—is suitable for inputs encountered later during agent training, we propose to use a particular masking setup for the inputs during pre-training of the classifier. Concretely, for each input, **(M1)** we drop modalities at timestep $t$ by sampling masks with fixed per-modality probability, $a_{m,t} \sim \text{Bernoulli}(p_m)$; **(M2)** we randomly draw $t_{\max} \sim \text{Unif}(0, T)$ and mask out all inputs at $t > t_{\max}$, i.e. $a_{m,t} = 0 \;\, \forall m, \forall t > t_{\max}$; **(M3)** we randomly drop entire modalities from the inputs with fixed per-modality rates, $a_{m,t} = d_m \;\, \forall t$, with $d_m \sim \text{Bernoulli}(p_m^{(d)})$. The masking mechanisms **(M1–M3)** are applied only during pre-training, and we use the same fixed values for $p_m$ and $p_m^{(d)}$ in all experiments; see §C.4 for further details. This masking procedure exposes the encoder to sparse input distributions during pre-training that are equivalent to those created by a randomly acting agent. This leads to versatile encoder representations that are useful during agent training. In addition to helping agent training, we observe that the masking procedures affect the test set performance of model predictions positively, and we observe the highest performance when all methods **(M1–M3)** are applied simultaneously (cf. §5).
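A minimal sketch of the masking mechanisms (M1)-(M3), with illustrative rates in place of the fixed values from §C.4, and assuming the convention that a[m, t] = 1 means the feature is kept:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretraining_mask(num_modalities, T, p_keep, p_drop_modality):
    """Sample a binary mask a[m, t] combining (M1)-(M3); 1 means the feature is kept.

    p_keep and p_drop_modality are illustrative per-modality rates; the paper
    fixes its own values (see its §C.4).
    """
    p_keep = np.asarray(p_keep)
    # (M1) Keep each (modality, timestep) entry with a fixed per-modality probability.
    a = (rng.random((num_modalities, T)) < p_keep[:, None]).astype(int)
    # (M2) Draw a maximum timestep uniformly and mask out all inputs after it.
    t_max = rng.integers(0, T + 1)
    a[:, t_max:] = 0
    # (M3) Drop entire modalities with fixed per-modality rates.
    dropped = rng.random(num_modalities) < np.asarray(p_drop_modality)
    a[dropped, :] = 0
    return a

mask = pretraining_mask(2, 25, p_keep=[0.5, 0.5], p_drop_modality=[0.2, 0.2])
```

Applied per input during classifier pre-training, this yields the sparse input distributions a randomly acting agent would produce.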
This fits with similar results on how Transformer-based architectures benefit from masking at training time (Devlin et al., 2018; Dosovitskiy et al., 2021; Feichtenhofer et al., 2022). Following related work in active feature acquisition, such as Li & Oliva (2021), we additionally condition the agent directly on model predictions $f(\tilde{x}_{1:t})$ at each timestep $t$. Concretely, we concatenate both predicted class probabilities, $p_f(y \mid \tilde{x}_{1:t})$, as well as associated predictive entropies, $\mathbb{E}_{p_f(y \mid \tilde{x}_{1:t})}[-\log p_f(y \mid \tilde{x}_{1:t})]$, to the latent representation of the Perceiver IO agent. While this incurs additional computational cost from predicting with the model at each timestep, we believe the additional information will be useful in informing agent behavior. For example, this allows the agent to be aware of the uncertainty attached to model predictions at each timestep. This way, the agent could, for example, learn to stop acquiring when model predictions are sufficiently confident. Inspired by reward shaping (Ng et al., 1999; Li & Oliva, 2021), we optionally add an intermediate reward term to the objective eq. (1), $I = -\alpha \sum_{t=1}^{T} \left( L(f(\tilde{x}_{1:t}), y) - \gamma L(f(\tilde{x}_{1:t-1}), y) \right)$, where $\alpha$ is a hyperparameter and $\gamma$ is the discount factor. This term directly incentivizes the agent to decrease the predictive loss of f, which helps with credit assignment, as it is otherwise hard for the agent to learn which particular acquisition actually led to loss reduction. The Gumbel-Softmax estimator becomes prohibitively expensive in the large-scale scenario, as it requires full backpropagation through repeated application of both f and π. We therefore rely on the advantage actor critic (A2C) (Sutton & Barto, 2018) policy gradient method, which approximates gradients of eq. (1) with respect to the agent parameters via Monte Carlo samples of the score function estimator.
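The intermediate reward term defined above can be computed from a trace of per-timestep losses; a minimal sketch (treating the first entry of the trace as the loss before any acquisition is our assumption):

```python
import numpy as np

def intermediate_reward(losses, alpha, gamma):
    """I = -alpha * sum_{t=1}^{T} (L_t - gamma * L_{t-1}).

    `losses` holds the model's per-timestep prediction losses [L_0, L_1, ..., L_T];
    interpreting losses[0] as the loss before any acquisition is our assumption.
    """
    losses = np.asarray(losses, dtype=float)
    diffs = losses[1:] - gamma * losses[:-1]   # L_t - gamma * L_{t-1}, t = 1..T
    return -alpha * diffs.sum()
```

With `gamma = 1` the term telescopes, so a steadily decreasing loss trace yields a positive intermediate reward; per the text, this Monte Carlo term is optionally added to the objective eq. (1).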
A2C uses an additional baseline model that reduces variance during optimization, and we implement this baseline as a separate Perceiver IO decoder head.

## 5 Experiments

We next give experimental results on synthetic and real data and refer to §C for further details.

## 5.1 Synthetic Scenario

We begin by exploring the performance of our small-scale variant on the synthetic datasets. We use sequences of length T = 10 and three distinct values (0, 1, 2) per modality, such that the input shape is $(T, d_m) = (10, 3)$ per modality when using a one-hot encoding for the values.

|                 | Input Sequence 1      | Input Sequence 2      | Input Sequence 3      |
|-----------------|-----------------------|-----------------------|-----------------------|
| Digit           | [2 0 0 2 0 0 2 2 0 1] | [2 1 2 2 0 1 0 1 1 1] | [1 2 1 1 2 1 0 1 0 1] |
| Actions Digit   | [0 1 0 0 1 0 0 1 0 0] | [0 1 1 0 0 1 0 0 1 0] | [0 1 0 0 1 0 0 1 0 0] |
| Counter         | [1 0 2 1 0 2 1 0 2 1] | [2 1 0 2 1 0 2 1 0 2] | [1 0 2 1 0 2 1 0 2 1] |
| Actions Counter | [1 1 1 1 0 1 0 1 1 0] | [0 1 1 1 0 0 1 0 1 0] | [1 1 1 1 0 1 0 1 1 0] |
| Label           | True: 2 / Pred.: 2    | True: 4 / Pred.: 4    | True: 5 / Pred.: 5    |

Figure 4: Acquisition behavior of the Perceiver IO agent on a simple synthetic scenario. 'Digit' and 'Counter' give ground truth values for the Digit and Counter input modalities. 'Actions Digit' and 'Actions Counter' mark when the agent did (1) or did not acquire (0) for each of the modalities.
The agent successfully learns a sparse acquisition strategy: it (almost always) acquires the Digit modality only if the Counter modality is 0, and further learns to skip some acquisitions in the Counter modality.

Table 1: Results on the synthetic task.

| Metric                    | Agent | Oracle |
|---------------------------|-------|--------|
| Label Prediction Accuracy | 92.4% | 100.0% |
| Digit Acquisition Rate    | 46.2% | 37.2%  |
| Counter Acquisition Rate  | 68.8% | 39.3%  |

For raw digits as inputs, our agent learns to acquire an average of 46.2% of the digit modality and 68.8% of the counter modality, and achieves an accuracy of 92.4% on the test set. Clearly, the agent learns a selective acquisition procedure for both modalities without sacrificing predictive accuracy. We further compute the optimal acquisition rates for the synthetic scenario as 37.2% for the digit and 39.3% for the counter modality. These highlight that, in particular for the counter modality, there is a gap relative to optimal behavior for our agent, cf. table 1. In §A, we further compare the agent's acquisition strategy to optimal behavior. We display examples of learned agent behavior on individual test set inputs in fig. 4. For the digit modality, the agent mostly follows the ideal strategy and acquires only whenever the counter is zero. For the counter modality, the agent learns to skip acquisitions at some of the non-informative timesteps. In fig. A.1, we display training dynamics: the agent initially shows high acquisition rates for both modalities, which quickly leads to increases in model predictive accuracy. Then, the agent learns to discard irrelevant parts of the input; this happens more quickly for the digit than the counter modality. Figure A.2 shows training curves for early results on the image and image/audio versions of the synthetic scenario. The agents overfit to the training set and their behavior does not generalize. We suspect that our method struggles to learn acquisition strategies and representations simultaneously.
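For reference, the raw-number variant of the synthetic task can be generated in a few lines; the cyclic-countdown counter and the label rule (sum of digit values at timesteps where the counter is zero) are our reconstruction from the sequences shown in fig. 4:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_synthetic(T=10, num_values=3, period=3):
    """Sample one (digits, counter, label) example of the raw-number task.

    Reconstruction from fig. 4: the counter follows a cyclic countdown with a
    random offset, digits are uniform, and the label is the sum of digit values
    at timesteps where the counter is zero.
    """
    digits = rng.integers(0, num_values, size=T)
    offset = rng.integers(0, period)
    counter = (offset - np.arange(T)) % period   # e.g. [1 0 2 1 0 2 ...]
    label = int(digits[counter == 0].sum())
    return digits, counter, label

digits, counter, label = sample_synthetic()
```

For example, the first sequence in fig. 4 (digits [2 0 0 2 0 0 2 2 0 1], counter [1 0 2 1 0 2 1 0 2 1]) has zeros at timesteps 1, 4, and 7, so the label is 0 + 0 + 2 = 2, matching the figure.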
## 5.2 Audio-Visual Datasets

Next, we investigate the performance of the large-scale variant of our approach on the AudioSet and Kinetics datasets. We split the training set of each dataset into a subset used for model pre-training and a subset used exclusively for agent training, taking up 80% and 20% of the original training set, respectively. Note again that model and agent share the encoder, which is learned during model pre-training and then fixed during agent training. For both datasets, each input sample consists of audio-visual input with 250 frames of images and a raw audio signal spanning 10 seconds. For images, we take every 10th frame as input, obtaining a total of 25 input frames. We pre-process the audio signal to mel spectrograms and then divide the signal into 25 segments. This leads to an input shape of (25, 40 · 128) for the audio modality and (25, 200 · 200 · 3) for the image modality, where we collapse additional dimensions beyond time into the last axis as described in §2. We first discuss insights from pre-training before moving on to results for A2MT agents.

## 5.2.1 Model Pretraining

Table 2: Masking at training time as proposed by **(M1–M3)** improves performance on the (unmasked, fully observed) AudioSet test set.

| Variant                  | mAP   |
|--------------------------|-------|
| No Masking               | 0.178 |
| (M1) Random Masking      | 0.230 |
| + (M2) Max. Timestep     | 0.262 |
| + (M3) Modality Dropping | 0.344 |
| + Conv. Downsampling     | 0.370 |

Masking Variants. Table 2 gives results of different pre-training variants for Perceiver IO on AudioSet. We observe that masking significantly improves mean average precision (mAP; higher is better) on the AudioSet validation set, going from 0.178 mAP without any masking to 0.344 mAP for the complete masking setup (M1–M3)

Table 3: Impact of dropping entire modalities at evaluation time on the AudioSet and Kinetics test sets. For both datasets, performance is best when no modalities are dropped (fully observed).
For AudioSet, audio is more informative than images, as 'audio only' performance trumps 'image only'; for Kinetics, images are more informative.

Table 4: Gradually masking out the more informative modality (audio for AudioSet, images for Kinetics) at evaluation time. Performance degrades as the 'mask rate', the fraction of randomly selected timesteps t at which we mask inputs, increases. The weak modality (images for AudioSet, audio for Kinetics) is not masked here.

| Variant        | AudioSet (mAP) | Kinetics (top-1) |
|----------------|----------------|------------------|
| Fully Observed | 0.370          | 0.396            |
| Audio Only     | 0.302          | 0.087            |
| Image Only     | 0.137          | 0.305            |

| Mask Rate | AudioSet (mAP) | Kinetics (top-1) |
|-----------|----------------|------------------|
| 0.0       | 0.370          | 0.396            |
| 0.3       | 0.369          | 0.397            |
| 0.8       | 0.331          | 0.365            |
| 0.9       | 0.292          | 0.316            |
| 1.0       | 0.137          | 0.087            |

(cf. §4.2). We observe that each of the masking procedures **(M1–M3)** progressively improves evaluation metrics. See §C.4 for details on masking rates. We additionally add a single strided convolutional pre-processing layer per modality, which further improves performance. Note that, here, we evaluate the model on *fully* observed data. For A2MT, we are particularly interested in the predictive performance of f for sparse data.

Sparsity at Evaluation Time. In order for A2MT agent training to be successful, the model f needs to be capable of extracting information from *sparse* inputs. Table 3 reports model performance when dropping entire modalities during evaluation. We observe that the model relies on both modalities for prediction, as the performance on fully observed data is best. As expected, the audio modality is more informative for AudioSet and images are stronger for Kinetics. Table 4 shows how predictions degrade when increasing the masking rate across time for the stronger modality.
Predictions are best on fully observed data and deteriorate significantly at 90% missing inputs. Evidently, the model learns to make use of additional data in the input at low masking rates while also predicting reasonably for sparse inputs. In summary, the proposed masking routines **(M1–M3)** help learn models which use all modalities in the input and degrade gracefully as inputs become increasingly sparse. These results suggest that our pre-trained Perceiver IO models are good candidates for use in A2MT, which we investigate next.

## 5.2.2 Agent Training

Agents React to Cost. We follow the setup described in §4.2 and train agents using A2C without intermediate rewards. We set a nonzero acquisition cost only for the more informative modality of each dataset. Table 5 shows agent behavior for acquisition costs on the audio modality of the AudioSet dataset, and table 6 gives results for image cost on Kinetics. We observe that our agents generally react to increased acquisition costs by decreasing the number of acquisitions, learning how many acquisitions are 'worth it' for a given cost. Further, we find that agents hold on to a single acquisition (1/25 input frames, i.e. acquisition rate 0.04) even at relatively high costs. This makes sense, as performance deteriorates drastically when all of the informative modality is dropped. Conversely, agents readily learn not to acquire large portions of the informative input modality, as we have observed this has only a small effect on predictive scores. For the Kinetics dataset, the agents discover that an acquisition rate of about 0.2 optimizes the objective across a variety of low to medium costs. In §A, we discuss results when imposing costs on the weak modality.

Random Ablations. We are ultimately interested in learning agents which display *adaptive* behavior that meaningfully adjusts to the information in each input.
Therefore, we compare the performance of our agents against two ablations that we call 'random-rate' and 'random-1hot'. For both, we first compute the average acquisition rate of the agent per modality, i.e. the average fraction of timesteps at which it acquires. For the 'random-rate' ablation, we acquire modalities with a fixed Bernoulli probability per timestep equal to the average acquisition rate of the agent. Additionally, we construct the 'random-1hot' ablation by acquiring at a fixed number of timesteps per modality that are equidistantly spread across the sequence. The number of acquisitions is chosen such that we match the average number of agent acquisitions per modality. (Usually, the number of agent acquisitions is not an integer, and so we remove some acquisition probability for the last of the fixed timesteps.) See fig. 5 for an illustration of the ablations. These are ablations rather than baselines, as they use the per-modality acquisition rates found by the agent and thus incur the same cost as the agent. However, potentially unlike the agent, the ablations do not act adaptively: they acquire with the same fixed probabilities for each sequence. If we find that our agent can consistently outperform both ablations, this is supporting evidence for adaptive behavior in the agent, adjusting its acquisitions to the information in each input.

Table 5: The agent acquisition rate appropriately reduces for the audio modality on the **AudioSet** test set as acquisition costs increase; the agent acquisition rate is the average fraction of timesteps at which the agent acquires. We impose costs only on the audio modality and compare against two ablations: (a) Random-Rate, which acquires at each timestep with a fixed probability matching the agent acquisition rate, and (b) Random-1Hot, which acquires at a fixed set of equidistant timesteps matching the acquisition rate of the agent. The agent outperforms the rate-matched ablation at high costs, but does not improve over the discrete 1-hot ablation. See the main text for further discussion. Standard deviations are 5-fold repetitions on the test set.

| Cost per Audio Acquisition | 1 × 10−6        | 1 × 10−5        | 1 × 10−4        | 2.5 × 10−4      | 1 × 10−3        |
|----------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| Agent Acquisition Rate     | 0.98            | 0.83            | 0.23            | 0.11            | 0.04            |
| Agent (mAP)                | 0.3724 ± 0.0010 | 0.3697 ± 0.0025 | 0.3359 ± 0.0117 | 0.3116 ± 0.0108 | 0.2580 ± 0.0010 |
| Random-Rate (mAP)          | 0.3728 ± 0.0005 | 0.3719 ± 0.0007 | 0.3365 ± 0.0016 | 0.2946 ± 0.0014 | 0.2295 ± 0.0009 |
| Random-1Hot (mAP)          | 0.3732 ± 0.0001 | 0.3731 ± 0.0002 | 0.3472 ± 0.0003 | 0.3165 ± 0.0006 | 0.2660 ± 0.0004 |

Table 6: The agent acquisition rates generally reduce as costs increase for the image modality of the **Kinetics** test set; the agent acquisition rate is the average fraction of timesteps at which the agent acquires. We do not impose any costs for acquisitions of the less-informative audio modality. We report standard deviations over 5-fold repeated application on the test set. See main text or table 5 for details.

| Cost per Image Acquisition | 1 × 10−3        | 5 × 10−3        | 1 × 10−2        | 1 × 10−1        | 5 × 10−1        |
|----------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| Agent Acquisition Rate     | 0.19            | 0.22            | 0.21            | 0.09            | 0.04            |
| Agent (top1)               | 0.3370 ± 0.0009 | 0.3336 ± 0.0034 | 0.3326 ± 0.0016 | 0.3309 ± 0.0011 | 0.2722 ± 0.0005 |
| Random-Rate (top1)         | 0.2804 ± 0.0019 | 0.2856 ± 0.0019 | 0.2830 ± 0.0006 | 0.2378 ± 0.0012 | 0.1721 ± 0.0014 |
| Random-1Hot (top1)         | 0.3038 ± 0.0003 | 0.3067 ± 0.0003 | 0.3063 ± 0.0002 | 0.2957 ± 0.0007 | 0.2089 ± 0.0002 |

![8_image_0.png](8_image_0.png)

Figure 5: Comparing learned acquisition patterns of an agent on AudioSet to the patterns of the random ablations. Our agent learns a set of fixed timesteps for which it always acquires, similar to the random-1hot baseline. In (a) and (c), acquisition rates are close to zero and too small to be visible for some timesteps.

For AudioSet, the agent does not consistently outperform either ablation at low acquisition costs. However, the agent does tend to outperform the random-rate, but not the random-1hot, ablation at medium-to-high acquisition costs. Figure 5 displays the average *temporal acquisition pattern* of the agent and ablations for the audio modality of AudioSet at high acquisition costs. The agent learns a *discrete* pattern of acquisitions across timesteps, similar to the random-1hot baseline. This is advantageous in comparison to the fixed low rates of the random-rate baseline: due to the small value of the acquisition probabilities, the resulting Binomial distribution over acquisitions has significant mass at 0 acquisitions. Therefore, the random-rate baseline sometimes does not acquire anything at all, which has a large negative effect on its average performance. It is encouraging that the agents learn discrete acquisition behavior, avoiding the drawbacks of fixed small acquisition rates. However, this also shows that the agents do not learn *individualized* predictions, and instead acquire at fixed timesteps, ignoring individual inputs for decision making. There is almost no variance in acquisition behavior for different test samples.
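The two ablations can be constructed in a few lines; this sketch simplifies the handling of fractional acquisition counts to plain rounding (the paper instead removes some acquisition probability from the last fixed timestep):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rate_mask(agent_rate, T):
    """Random-rate: acquire each timestep independently with the agent's average rate."""
    return (rng.random(T) < agent_rate).astype(int)

def random_1hot_mask(agent_rate, T):
    """Random-1hot: acquire at a fixed set of equidistant timesteps.

    The number of acquisitions matches the agent's average number of
    acquisitions; fractional counts are simply rounded here.
    """
    k = int(round(agent_rate * T))
    idx = np.linspace(0, T - 1, num=k).round().astype(int)
    mask = np.zeros(T, dtype=int)
    mask[idx] = 1
    return mask
```

Because both ablations match the agent's acquisition rate, they incur the same expected cost; only the random-rate variant has nonzero variance in the number of acquisitions per sequence.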
For scenarios with higher learned acquisition rates—where the variance of fixed-rate acquisition is less disadvantageous—we find some agents do not learn 1-hot style acquisition patterns and instead behave more similarly to random-rate. For Kinetics, we observe that our agents are able to outperform the random-rate and random-1hot ablations. However, instead of individualized behavior, figs. 6 and A.3 show that the agents learn a static pattern where they never acquire both modalities simultaneously. While this is interesting behavior that optimizes the objective better than the random ablations (presumably because at least one modality is always acquired), it falls short of our goal of *adaptive* acquisition.

![9_image_0.png](9_image_0.png)

Figure 6: Acquisition patterns on Kinetics at 0.001 cost per image.

Intermediate Rewards. We investigate the effect of intermediate rewards on agent behavior. Figure 7 (a) displays 100 random samples of agent behavior from the AudioSet test set when using intermediate rewards. Intermediate rewards lead to agents that show *variable* acquisition behavior between samples. It also seems that the intermediate reward—which encourages the agent to reduce predictive loss with each action—leads to greedy behavior in the agents, where acquisitions at earlier timesteps are preferred. Further, fig. 7 (b) shows the number of acquisitions per sample is now correlated to the entropy, which reflects the certainty of the model's predictions. However, fig. 7 (c) shows that model entropy and loss are practically uncorrelated here. (We expand on this in our discussion in §7.) Comparing to the random ablations, we find that, while our AudioSet agents do learn adaptive behavior, their performance can still be matched by the non-adaptive ablations, cf. table A.5. For Kinetics, we observe similar behavior (except that we outperform the random ablations with the same caveat as above), cf. table A.6.

## 6 Related Work

Additional Applications of A2MT.
A2MT-like problems can be found in a variety of application domains. For example, in wearable devices, the energy cost of sensor activation is significant (Possas et al., 2018), and a policy that reduces sensor usage without sacrificing predictive accuracy is desirable. Literature in AFA further mentions computer security or fraud detection as areas of application. Given that both these areas may naturally have temporal or multimodal components, these applications transfer to A2MT as well. Lastly, the acquisition procedure in A2MT can be a way to reduce input size and thus the computational cost of predictions.

Active Perception. Active perception (Bajcsy, 1988; Aloimonos et al., 1988; Bajcsy et al., 2018) models the cost associated with visual attention in the context of embodied agents. In the language of A2MT: if we cannot observe all visual input features at once, we need to decide where to look actively and iteratively.

![9_image_1.png](9_image_1.png)

Figure 7: Behavior for agents trained with intermediate reward on 100 random samples from the AudioSet test set. (a) The acquisition behavior displays significant variance, and agents prefer acquisition at earlier timesteps. (Samples in grey, mean µ and standard deviation σ in blue.) (b) The per-sample acquisition rates are correlated to the entropy of model predictions. (c) Correlation between entropy and predictive loss is poor.

For example, Mnih et al. (2014); Haque et al. (2016); Jayaraman & Grauman (2018) solve visual classification or reconstruction problems by learning agents that iteratively attend to parts of the input. Relatedly, active sensing, e.g. Satsangi et al. (2018); Hero & Cochran (2011), studies how to actively select a subset of sensors to observe to reduce uncertainty over latent variables. Satsangi et al. (2020) connect model uncertainty and prediction reward in active perception.

Efficient Video Classification.
Related work has sought to reduce the computational cost of video classification by selecting a small set of salient frames for each input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Wu et al., 2019a; Gao et al., 2020; Zheng et al., 2020; Wang et al., 2021; Ghodrati et al., 2021; Panda et al., 2021; Gowda et al., 2021; Yang et al., 2022). For computationally cheap policies, these methods can achieve computational cost-savings compared to methods that rely on all timesteps as input. This is related to so-called anytime prediction approaches (Grubb & Bagnell, 2012; Zilberstein, 1996; Horvitz, 1987), which explicitly trade off computation cost and predictive accuracy. In contrast to all of the above, A2MT assumes that there is a real-world cost associated with the *acquisition* of the input modalities, e.g. the cost of performing an MRI scan, and we can completely neglect any computational costs. Also, A2MT requires temporally causal acquisition, which most efficient video classification approaches do not respect. These differences in motivation lead to methods which (in all cases we know of) are not applicable to the A2MT scenario.

## 7 Discussion

While our agents did react sensibly to acquisition cost on the complex audio-visual datasets in §5.2.2, they learned static behavior and did not adjust acquisitions sensibly to individual sequences. In this section, we offer a discussion of possible reasons for this negative result: Figure 7 (c) suggests that our models may not be suitable for learning adaptive behavior. We would expect the uncertainty (entropy) of the predictions to correlate with the predictive loss. Then, entropy could be a useful signal to guide agent behavior: for samples where the model is certain about the prediction (low entropy), the agent can stop acquisitions early; when the model is uncertain, the agent may acquire more to reduce uncertainty and therefore loss.
We do not see this correlation in fig. 7 (c), and so it may be hard for the agent to learn a policy that optimizes the reward, which depends on the pre-trained predictive model. One possible explanation for the lack of correlation is that our Perceiver IO models currently underfit the AudioSet and Kinetics datasets. However, as we are not aware of prior work studying the entropies of Perceiver IO predictions, we cannot exclude the possibility that the entropies of Perceiver IO generally do not conform to expectations. Therefore, both training the initial model longer and additional model architectures may be worth exploring. Further, future work should not exclude the possibility that a subtle shift in masking distributions between pre-training and agent training further inhibits policy learning. Alternatively, AudioSet and Kinetics may be ill-suited for application to A2MT: while we have found that there is some signal diversity in AudioSet and Kinetics, different sections of a given clip often look similar, presumably making it difficult for the agents to learn adaptive behavior. Subsequent work in A2MT could consider datasets with longer clip durations and more content diversity per sequence, e.g. ActivityNet (Fabian Caba Heilbron & Niebles, 2015). Lastly, there are interesting variations of the A2MT setup that future work could explore, e.g. other reward formulations such as a fixed budget per sample or a global budget across samples, acquisition costs that change with time, or actions that affect state evolution.

## 8 Conclusion

We have introduced the task of active acquisition for multimodal temporal data (A2MT), extending prior work in active feature acquisition to multimodal and temporal inputs. We have further proposed a Perceiver IO–based reinforcement learning approach to tackle A2MT problems.
On novel synthetic scenarios, our agents successfully learn to use cross-modal reasoning to optimize the trade-off between feature acquisition cost and predictive accuracy. We further adapt Kinetics and AudioSet, two large-scale video classification datasets, for application to A2MT: here, the agents appropriately react to modality-specific acquisition costs. However, ablations reveal they do not adapt acquisitions to individual inputs. We believe A2MT is a challenging and practically relevant task, and we would be excited for the community to join our efforts.

## Broader Impact Statement

Our paper proposes a sequential decision making task, as well as a method to approach this task. We believe that this method can have useful societal and economic impact in domains such as robotics, finance, and healthcare. However, one of the main limitations of this paper is that we have focused on synthetic data scenarios. Although this synthetic data is motivated by real-world scenarios, we do not recommend direct deployment of our method to practical scenarios. Work on automated decision making should always be carried out in close collaboration with domain experts, while proactively taking into account safety and ethical considerations.

## Acknowledgments

We thank Yujia Li, Timothy Lillicrap, Andrew Brock, Adrià Recasens, João Carreira, Lucas Smaira, Isabela Albuquerque, Joost van Amersfoort, Ali Eslami, our action editor, Martha White, and the anonymous reviewers for helpful feedback and interesting discussions that have led to numerous improvements of the paper.

## References

John Aloimonos, Isaac Weiss, and Amit Bandyopadhyay. Active vision. *International Journal of Computer Vision*, 1(4):333–356, 1988.

Ruzena Bajcsy. Active perception. *Proceedings of the IEEE*, 76(8):966–1005, 1988.

Ruzena Bajcsy, Yiannis Aloimonos, and John K Tsotsos. Revisiting active perception. *Autonomous Robots*, 2018.

Kathryn Chaloner and Isabella Verdinelli.
Bayesian experimental design: A review. *Statistical Science*, pp. 273–304, 1995.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. *arXiv:1810.04805*, 2018.

Andres Diaz-Pinto, Nishant Ravikumar, Rahman Attar, Avan Suinesiaputra, Yitian Zhao, Eylem Levelt, Erica Dall'Armellina, Marco Lorenzi, Qingyu Chen, Tiarnan DL Keenan, et al. Predicting myocardial infarction through retinal scans and minimal personal information. *Nature Machine Intelligence*, 4(1):55–61, 2022.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2021.

Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In *Conference on Computer Vision and Pattern Recognition*, pp. 961–970, 2015.

Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. *arXiv:2205.09113*, 2022.

Ruohan Gao, Tae-Hyun Oh, Kristen Grauman, and Lorenzo Torresani. Listen to look: Action recognition by previewing audio. In *Conference on Computer Vision and Pattern Recognition*, pp. 10457–10467, 2020.

Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. Audio Set: An ontology and human-labeled dataset for audio events. In *2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 776–780. IEEE, 2017.

Amir Ghodrati, Babak Ehteshami Bejnordi, and Amirhossein Habibian. FrameExit: Conditional early exiting for efficient video recognition. In *Conference on Computer Vision and Pattern Recognition*, pp.
15608–15618, 2021. Shreyank N Gowda, Marcus Rohrbach, and Laura Sevilla-Lara. Smart frame selection for action recognition. In *AAAI Conference on Artificial Intelligence*, volume 35, pp. 1451–1459, 2021. Russell Greiner, Adam J Grove, and Dan Roth. Learning cost-sensitive active classifiers. *Artificial Intelligence*, 139(2):137–174, 2002. Alex Grubb and Drew Bagnell. Speedboost: Anytime prediction with uniform near-optimality. In *Artificial Intelligence and Statistics*, 2012. Albert Haque, Alexandre Alahi, and Li Fei-Fei. Recurrent attention models for depth-based person identification. In *Conference on Computer Vision and Pattern Recognition*, 2016. Alfred O Hero and Douglas Cochran. Sensor management: Past, present, and future. *Sensors Journal*, 2011. Eric J. Horvitz. Reasoning about beliefs and actions under computational resource constraints. In *Conference on Uncertainty in Artificial Intelligence*, 1987. Stephanie L Hyland, Martin Faltys, Matthias Hüser, Xinrui Lyu, Thomas Gumbsch, Cristóbal Esteban, Christian Bock, Max Horn, Michael Moor, Bastian Rieck, et al. Early prediction of circulatory failure in the intensive care unit using machine learning. *Nature medicine*, 26(3):364–373, 2020. Z. Jackson. Free spoken digit dataset (fsdd). https://github.com/Jakobovski/free-spoken-digit-dataset, 2017. Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. Perceiver io: A general architecture for structured inputs & outputs. In *International conference on machine learning*, 2021. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In *International Conference on Learning Representations*, 2017. Jaromír Janisch, Tomáš Pevný, and Viliam Lisý. Classification with costly features using deep reinforcement learning. In *AAAI Conference on Artificial Intelligence*, volume 33, pp. 3959–3966, 2019. 
Jaromír Janisch, Tomáš Pevný, and Viliam Lisý. Classification with costly features as a sequential decision-making problem. *Machine Learning*, 109(8):1587–1615, 2020. Dinesh Jayaraman and Kristen Grauman. Learning to look around: Intelligently exploring unseen environments for unknown tasks. In *Conference on Computer Vision and Pattern Recognition*, pp. 1238–1247, 2018. Alistair Johnson, Lucas Bulgarelli, Tom Pollard, Steven Horng, Leo Anthony Celi, and Roger Mark. Mimic-iv (version 2.0). *PhysioNet*, 2020. Mohammad Kachuee, Orpaz Goldstein, Kimmo Karkkainen, Sajad Darabi, and Majid Sarrafzadeh. Opportunistic learning: Budgeted cost-sensitive learning from data streams. In *International Conference on Learning Representations*, 2019. Bruno Korbar, Du Tran, and Lorenzo Torresani. Scsampler: Sampling salient clips from video for efficient action recognition. In *International Conference on Computer Vision*, pp. 6232–6242, 2019. Yann LeCun. The mnist database of handwritten digits. *http://yann.lecun.com/exdb/mnist/*, 1998. Sarah Lewis, Tatiana Matejovicova, Yingzhen Li, Angus Lamb, Yordan Zaykov, Miltiadis Allamanis, and Cheng Zhang. Accurate imputation and efficient data acquisition with transformer-based vaes. In *NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications*, 2021. Yang Li and Junier Oliva. Active feature acquisition with generative surrogate models. In *International Conference on Machine Learning*, pp. 6450–6459. PMLR, 2021. Dennis V Lindley. On a measure of the information provided by an experiment. *The Annals of Mathematical Statistics*, 27(4):986–1005, 1956. Charles X Ling, Qiang Yang, Jianning Wang, and Shichao Zhang. Decision trees with minimal costs. In *International conference on Machine learning*, pp. 69, 2004. Lance A Liotta, Mauro Ferrari, and Emanuel Petricoin. Clinical proteomics: written in blood. *Nature*, 425(6961):905–905, 2003. 
Chao Ma, Sebastian Tschiatschek, Konstantina Palla, José Miguel Hernández-Lobato, Sebastian Nowozin, and Cheng Zhang. Eddi: Efficient dynamic discovery of high-value information with partial vae. *arXiv:1809.11142*, 2018. Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. In *International Conference on Learning Representations*, 2017. Prem Melville, Maytal Saar-Tsechansky, Foster Provost, and Raymond Mooney. Active feature-value acquisition for classifier induction. In *Fourth IEEE International Conference on Data Mining (ICDM'04)*, pp. 483–486. IEEE, 2004. Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. *Advances in neural information processing systems*, 27, 2014. Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In *ICML*, volume 99, pp. 278–287, 1999. Rameswar Panda, Chun-Fu Richard Chen, Quanfu Fan, Ximeng Sun, Kate Saenko, Aude Oliva, and Rogerio Feris. Adamml: Adaptive multi-modal learning for efficient video recognition. In *International Conference on Computer Vision*, pp. 7576–7585, 2021. Rafael Possas, Sheila Pinto Caceres, and Fabio Ramos. Egocentric activity recognition on a budget. In *Conference on Computer Vision and Pattern Recognition*, pp. 5967–5976, 2018. Maytal Saar-Tsechansky, Prem Melville, and Foster Provost. Active feature-value acquisition. *Management Science*, 55(4):664–684, 2009. Yash Satsangi, Shimon Whiteson, Frans A Oliehoek, and Matthijs TJ Spaan. Exploiting submodular value functions for scaling up active perception. *Autonomous Robots*, 2018. Yash Satsangi, Sungsu Lim, Shimon Whiteson, Frans Oliehoek, and Martha White. Maximizing information gain in partially observable environments via prediction reward. In *International Conference on Autonomous Agents and Multiagent Systems*, 2020. Paola Sebastiani and Henry P Wynn. 
Maximum entropy sampling and optimal bayesian experimental design. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 62(1):145–157, 2000. Burr Settles. Active Learning Literature Survey. *Machine Learning*, 2010. Victor S Sheng and Charles X Ling. Feature value acquisition in testing: a sequential batch test algorithm. In *International conference on Machine learning*, pp. 809–816, 2006. Hajin Shim, Sung Ju Hwang, and Eunho Yang. Joint active feature acquisition and classification with variable-size set encoding. *Advances in neural information processing systems*, 31, 2018. Lucas Smaira, João Carreira, Eric Noland, Ellen Clancy, Amy Wu, and Andrew Zisserman. A short note on the kinetics-700-2020 human action dataset. *arXiv:2010.10864*, 2020. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. Yulin Wang, Zhaoxi Chen, Haojun Jiang, Shiji Song, Yizeng Han, and Gao Huang. Adaptive focus for efficient video recognition. In *International Conference on Computer Vision*, pp. 16249–16258, 2021. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. *Machine learning*, 8(3):229–256, 1992. Wenhao Wu, Dongliang He, Xiao Tan, Shifeng Chen, and Shilei Wen. Multi-agent reinforcement learning based frame sampling for effective untrimmed video recognition. In *International Conference on Computer Vision*, pp. 6222–6231, 2019a. Zuxuan Wu, Caiming Xiong, Chih-Yao Ma, Richard Socher, and Larry S Davis. Adaframe: Adaptive frame selection for fast video recognition. In *Conference on Computer Vision and Pattern Recognition*, pp. 1278–1287, 2019b. Mingyu Yang, Yu Chen, and Hun-Seok Kim. Efficient deep visual and inertial odometry with adaptive visual modality selection. *arXiv:2205.06187*, 2022. Serena Yeung, Olga Russakovsky, Greg Mori, and Li Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. 
In *Conference on Computer Vision and Pattern Recognition*, pp. 2678–2687, 2016. Haiyan Yin, Yingzhen Li, Sinno Jialin Pan, Cheng Zhang, and Sebastian Tschiatschek. Reinforcement learning with efficient active feature acquisition. *arXiv:2011.00825*, 2020. Sara Zannone, José Miguel Hernández-Lobato, Cheng Zhang, and Konstantina Palla. Odin: Optimal discovery of high-value information using model-based deep reinforcement learning. In *ICML Real-world Sequential Decision Making Workshop*, 2019. Yin-Dong Zheng, Zhaoyang Liu, Tong Lu, and Limin Wang. Dynamic sampling networks for efficient action recognition in videos. *Transactions on Image Processing*, 29:7970–7983, 2020. Shlomo Zilberstein. Using anytime algorithms in intelligent systems. *AI magazine*, 17(3):73–73, 1996. Valentina Bayer Zubek, Thomas Glen Dietterich, et al. Pruning improves heuristic search for cost-sensitive learning. 2004. ## A Additional Results In Fig. A.1, we display the training dynamics for the experiments on the synthetic scenario with raw inputs; Fig. A.2 gives performance for the MNIST and the MNIST + Spoken Digit versions of the synthetic experiment. In Tables A.1 and A.2, we report confusion matrices comparing our agent's acquisition strategy to oracle behavior on the raw version of the synthetic data experiment. Our test set has 10⁴ sequences of length T = 10, so there are a total of 10⁵ timesteps per modality for which we evaluate agent acquisitions. For the digit modality, optimal behavior is to acquire only when the counter modality is zero. Table A.1 shows the agent largely follows this optimal behavior, acquiring in 98.6% of cases where the counter is zero (true positives), and when the counter is not zero, the agent does not acquire in 84.8% of cases (true negatives). For the counter modality, oracle behavior is to acquire the first element of each countdown, as all values following as well as the start of the next countdown can be inferred. 
Because countdowns have a minimum length of 2, the counter does not need to be acquired when a new countdown starts at the last timestep. Table A.2 shows that agent behavior is less ideal for the counter modality, acquiring 71.6% of starting values (true positives) but not acquiring for only 33.0% of negatives (true negatives). In other words, at timesteps where the agent need not acquire (negatives), the agent unnecessarily acquires 67.0% of the time (false positives). These extra acquisitions may help the agent better discover zeros in the counter modality when it misses a countdown starting value. Tables A.3 and A.4 give results of our agents on AudioSet and Kinetics, respectively, when costs are imposed on the weaker input modality. For AudioSet, the agents learn to acquire ≈ 1/25 image per sequence across a magnitude of acquisition costs. Surprisingly, the performance at 1 image frame is only about 1 p.p. mAP lower than the performance on fully observed data, which explains why it makes sense for the agent to quickly drop to such a low acquisition rate on the image modality. At high costs of 0.01 per acquisition, the agent no longer acquires any images, which finally does hurt mAP by about 4 p.p. For Kinetics, we observe that the number of acquisitions for the weaker audio modality drops steadily as cost is increased, and, correspondingly, so does the top1-accuracy. This indicates that, for Kinetics, the model makes better use of the weaker audio modality. Figure A.3 shows the agent never acquires both modalities on the Kinetics dataset across a variety of costs. Tables A.5 and A.6 give agent behavior with intermediate rewards enabled for AudioSet and Kinetics. They do not lead to significantly improved agent performance relative to the non-adaptive ablations. ## B Synthetic Sequence Generation Algorithm B.1 gives Python code for generating the sequences for the synthetic dataset (cf. §3.1). 
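The original listing is not reproduced here; the following is an illustrative Python sketch consistent with the description in §3.1 and the oracle analysis in Appendix A. The starting-value range, the digit distribution, and the label definition (a function of the digits at counter-zero timesteps) are our assumptions, not the paper's exact algorithm.

```python
import random

def generate_sequence(T=10, max_start=2, num_values=3, seed=None):
    """Generate one (counter, digits, label) triple for the synthetic task.

    Assumptions: the counter repeatedly counts down from a start value drawn
    uniformly from {1, ..., max_start} (so every countdown has length >= 2),
    the digit modality is uniform over {0, ..., num_values - 1}, and the label
    is a hypothetical function of the digits at counter-zero timesteps.
    """
    rng = random.Random(seed)
    counter = []
    while len(counter) < T:
        start = rng.randint(1, max_start)
        counter.extend(range(start, -1, -1))  # e.g. start=2 -> 2, 1, 0
    counter = counter[:T]
    digits = [rng.randrange(num_values) for _ in range(T)]
    # Hypothetical label: depends only on digits observed where counter == 0,
    # consistent with the oracle behavior discussed in Appendix A.
    label = sum(d for c, d in zip(counter, digits) if c == 0) % num_values
    return counter, digits, label
```

With `max_start=2` the counter takes the three values (2, 1, 0) of the 'raw' version; an optimal agent can infer every counter value except the first element of each countdown, matching the oracle described above.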
Table A.1: Confusion matrix for agent acquisitions of the digit modality compared to oracle behavior on the test set of the raw synthetic scenario (rows: oracle behavior; columns: agent behavior).

|               | Agent: False | Agent: True |
|---------------|--------------|-------------|
| Oracle: False | 53247        | 9575        |
| Oracle: True  | 528          | 36650       |

Table A.2: Confusion matrix for agent acquisitions of the counter modality compared to oracle behavior on the test set of the raw synthetic scenario (rows: oracle behavior; columns: agent behavior).

|               | Agent: False | Agent: True |
|---------------|--------------|-------------|
| Oracle: False | 19987        | 40665       |
| Oracle: True  | 11171        | 28177       |

![16_image_0.png](16_image_0.png) Figure A.1: Performance of the Perceiver IO–based model and agent when applied to the raw-digit version of the synthetic dataset. The model learns to quickly solve the task, and the agent slowly reduces the number of acquired datapoints. Note that the 'Acquisitions' plots give acquisition rates averaged across time, s.t. a '1' corresponds to the entire modality being acquired. ## C Experiment Details ## C.1 Synthetic Experiments: Raw Inputs ## C.1.1 Dataset We generate a synthetic dataset with a training set of size 50 × 10³ and a test set of size 10 × 10³. For the 'raw' version, the input modalities have shape (10, 3), i.e. we use sequences of length 10 and there are three distinct values (2, 1, 0) per modality. We set the acquisition cost to 0.0005 per modality and timestep. ## C.1.2 Architecture We train using a batch size of 256. We use the ADAM optimizer with an initial learning rate of 3 × 10⁻⁴, weight decay of 1 × 10⁻⁶, and a cosine annealing schedule. For the Perceiver IO encoder we use a single cross-attend block with 4 self-attention operations per Perceiver IO block; we use 128 queries, and the hidden dimension is 128. For the Perceiver IO decoder, we use a single head with 128 queries and a hidden dimension of 128. We train for a total of 2 × 10⁵ steps. We set the discount factor to γ = 1. 
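For concreteness, a cosine annealing schedule as used above can be written in a few lines. This is a generic sketch: the floor learning rate `lr_min` and the absence of warmup are our assumptions, not details taken from the paper.

```python
import math

def cosine_annealing_lr(step, total_steps, lr_init=3e-4, lr_min=0.0):
    """Cosine-annealed learning rate: starts at lr_init and decays to lr_min
    over total_steps (2 * 10**5 steps for the raw synthetic experiments)."""
    progress = min(step / total_steps, 1.0)
    return lr_min + 0.5 * (lr_init - lr_min) * (1.0 + math.cos(math.pi * progress))
```

The rate equals `lr_init` at step 0, half-way between `lr_init` and `lr_min` at the schedule midpoint, and `lr_min` at the end.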
## C.1.3 Choosing Acquisition Costs In this section, we detail how we arrived at our selection of acquisition costs for the results in §5.2.2. For both AudioSet and Kinetics, we found valid cost ranges through experimentation. As a rule of thumb, we aimed for acquisition costs such that T times the cost is smaller than the observed predictive loss on the fully unmasked data. If this were not the case, it would be too attractive for the agent not to acquire any input samples. Note that predictive losses between AudioSet and Kinetics are significantly different; AudioSet is a multi-class (per instance) problem and Kinetics is not. As costs are directly traded off with predictive loss in our objective, cf. eq. (1), this explains the different cost magnitudes between the datasets. ![17_image_0.png](17_image_0.png) Figure A.2: Performance of the Perceiver IO–based model when applied to the (a) MNIST and (b) MNIST/Spoken Digit version of the synthetic dataset. The agents are unable to learn prediction strategies that generalize to the test set for this scenario. The different plots show a sweep over dropout probabilities in [0, 0.3, 0.5, 0.8] for the input tokens: while we do observe a regularizing effect for dropout here, it does not lead to improved generalization performance. Acquisition cost is set to 0 for these runs. Note that the 'Acquisitions' plots give acquisition rates s.t. a '1' corresponds to the entire modality being acquired. We apply running mean smoothing with kernel size 10 for the plots of the first two columns. Table A.3: Acquisition behavior for the agent and ablations as costs increase for the image modality of the AudioSet dataset. Standard deviations over 5 repetitions over the test set. 
| Cost Image | 0.00001 | 0.00010 | 0.00100 |
|------------------------|-----------------|-----------------|-----------------|
| Agent Acquisition Rate | 0.07 | 0.05 | 0.01 |
| Agent (mAP) | 0.3683 ± 0.0060 | 0.3627 ± 0.0017 | 0.3289 ± 0.0072 |
| Random-Rate (mAP) | 0.3505 ± 0.0005 | 0.3470 ± 0.0010 | 0.3238 ± 0.0008 |
| Random-1Hot (mAP) | 0.3681 ± 0.0003 | 0.3650 ± 0.0004 | 0.3275 ± 0.0004 |

| Cost Audio | 0.001 | 0.010 | 0.100 |
|------------------------|-----------------|-----------------|-----------------|
| Agent Acquisition Rate | 0.79 | 0.43 | 0.05 |
| Agent (top1) | 0.3313 ± 0.0041 | 0.3253 ± 0.0036 | 0.2672 ± 0.0015 |
| Random-Rate (top1) | 0.2825 ± 0.0012 | 0.2840 ± 0.0020 | 0.2524 ± 0.0013 |
| Random-1Hot (top1) | 0.3063 ± 0.0003 | 0.2930 ± 0.0003 | 0.2638 ± 0.0004 |

Table A.4: Acquisition behavior for the agent and ablations as costs increase for the audio modality of the Kinetics dataset. Standard deviations over 5 repetitions over the test set. ![18_image_0.png](18_image_0.png) Figure A.3: Learned patterns on Kinetics at a variety of image costs. The y-axis gives the average acquisition rate of the agent for a modality and timestep. As the image cost increases, the number of acquisitions decreases. Further, the agents learn a static pattern where at least one modality is always acquired. Table A.5: Agent behavior with intermediate rewards on AudioSet with different audio acquisition costs and trade-off parameters α. See §4.2 for an explanation of intermediate rewards and α. Standard deviations over 5 repetitions over the test set. 
| Cost Audio | 0.0005 | 0.0005 | 0.00010 | 0.00010 | |------------------------|-----------------|-----------------|-----------------|-----------------| | Trade-off α | 0.1 | 0.5 | 0.1 | 0.5 | | Agent Acquisition Rate | 0.05 | 0.13 | 0.04 | 0.06 | | Agent (mAP) | 0.2648 ± 0.0063 | 0.3104 ± 0.0120 | 0.2671 ± 0.0009 | 0.2668 ± 0.0075 | | Random-Rate (mAP) | 0.2439 ± 0.0012 | 0.3043 ± 0.0013 | 0.2323 ± 0.0014 | 0.2570 ± 0.0014 | | Random-1Hot (mAP) | 0.2750 ± 0.0006 | 0.3238 ± 0.0004 | 0.2664 ± 0.0003 | 0.2838 ± 0.0008 | | Cost Image | 0.01 | 0.01 | 0.10 | 0.10 | |------------------------|-----------------|-----------------|-----------------|-----------------| | Trade-off α | 0.1 | 0.5 | 0.1 | 0.5 | | Agent Acquisition Rate | 0.19 | 0.27 | 0.15 | 0.18 | | Agent (top1) | 0.3260 ± 0.0018 | 0.3234 ± 0.0012 | 0.3225 ± 0.0027 | 0.3162 ± 0.0008 | | Random-Rate (top1) | 0.2895 ± 0.0004 | 0.2673 ± 0.0011 | 0.2768 ± 0.0011 | 0.2807 ± 0.0009 | | Random-1Hot (top1) | 0.3015 ± 0.0003 | 0.2798 ± 0.0004 | 0.3042 ± 0.0004 | 0.3037 ± 0.0003 | Table A.6: Agent behavior with intermediate rewards on Kinetics with different image acquisition costs and trade-off parameters α. See §4.2 for an explanation of intermediate rewards and α. Standard deviations over 5 repetitions over the test set. Once we found a valid cost value, we increased and decreased costs to find a range of costs that leads to varied agent behavior. First, we increased costs until the agent (almost) did not make any acquisitions anymore. Tables 5 and 6 show this requires adjusting the costs across multiple magnitudes. Note that acquisition rates of about 0.04 correspond to acquiring a single segment for our total of T = 25 segments. As the agent already acquires only a single segment, additional cost increases are therefore not that interesting from the perspective of A2MT. We also decreased costs until agent behavior became stagnant. 
For AudioSet at the lowest cost, the agent acquires (almost) all segments of the audio signal, so further decreasing the cost would not change behavior. For Kinetics, agent acquisition rates are already constant for the three lowest acquisition costs, and it is unlikely that further cost decreases would change this. It seems that the agent refuses to learn to acquire more than 20% of the image modality, regardless of how low the cost is. In Table 4, we observed that predictive performance on Kinetics is almost unchanged until mask rates are increased above 80%, which points towards the repetitive nature of the image modality in the Kinetics dataset. It is therefore likely that the agent behavior is sensible here, as there is simply no benefit to increasing acquisitions above 20% for the image modality of Kinetics for the Perceiver IO predictive model. ## C.2 Synthetic Experiments: MNIST And Spoken Digit Versions For all hyperparameters not mentioned below, we use the same settings as for the 'raw' version of the synthetic dataset. We use a batch size of 128 and train for more than 1 × 10⁶ steps. The input shape of the MNIST images is (10, 28, 28), and the shape of the audio snippets is (10, 39, 80) after mel spectrogram pre-processing. ## C.3 Audio-Visual Datasets ## C.3.1 Dataset The AudioSet dataset has a training set of size 1 771 873, an evaluation set of size 17 748, and 632 classes. We use the unbalanced version of the dataset. The Kinetics dataset has a training set of size 545 793, a test set of size 67 858, and 700 classes. For both datasets, each input sample consists of audio-visual input with 250 frames of images and a raw audio signal spanning 10 seconds. For images, we take every 10th frame as input, obtaining a total of 25 input frames. We pre-process the audio signal to mel spectrograms, and then divide the signal into 25 segments. The audio modality has input shape (25, 40, 128), and the image modality has input shape (25, 200, 200, 3). 
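The frame subsampling and audio segmentation described above amount to simple slicing; a sketch with hypothetical function names (the exact segment boundaries are our assumption):

```python
def subsample_frames(frames, stride=10):
    """250 video frames -> 25 input frames by keeping every stride-th frame."""
    return frames[::stride]

def segment_audio(spectrogram, num_segments=25):
    """Split a (time, mel) spectrogram into num_segments equal time chunks,
    e.g. a (1000, 128) spectrogram into 25 segments of shape (40, 128)."""
    step = len(spectrogram) // num_segments
    return [spectrogram[i * step:(i + 1) * step] for i in range(num_segments)]
```

This yields the per-timestep inputs the agent decides over: 25 aligned (image, audio) pairs per 10-second clip.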
## C.4 Architecture Perceiver IO. For the Perceiver IO encoder we use a single cross-attend block with 8 self-attention operations per Perceiver IO block; we use 1024 queries, and the hidden dimension is 1024. For the Perceiver IO decoder, i.e. the policy head, we use a single cross-attend head with 1024 queries and a hidden dimension of 1024. The A2C baseline network is also a Perceiver IO decoder module with the same configuration as the policy. Pre-Training. We use the ADAM optimizer with an initial learning rate of 3 × 10⁻⁴, weight decay of 1 × 10⁻⁶, and a cosine annealing schedule. We train for a total of 100 epochs using a batch size of 512. We use the masking settings detailed below. For the masking variants **M1-M3**, we report results with the following settings. For M1, we keep inputs at a given modality and timestep with pm = 0.2. For M2, we set pm = 0.4, which, accounting for the additional 'max-timestep' masking mechanism (which masks out half the input on average), yields the same expected unmasking rate of 0.2. For M3, we keep pm = 0.4 and additionally drop modalities with dm = 0.5. In all of the above, probabilities are the same across modalities m. These settings performed best in preliminary experiments. Agent Training. We train agents for a total of 20 × 10³ steps. We re-use the optimizer configuration from pre-training for training of the agent and the A2C baseline network. We set the discount factor to γ = 1. ## D A2MT As A Markov Decision Process Here, we attempt to formally connect A2MT to Markov Decision Processes (MDPs), which are the main paradigm for framing problems in the reinforcement learning literature (Sutton & Barto, 2018). Concretely, we believe that a Partially Observable MDP (POMDP) is best suited to the A2MT scenario. We refer to Yin et al. (2020); Janisch et al. (2019; 2020) for a similar treatment of the (PO)MDP action space for active feature acquisition problems. POMDPs are defined by the 6-tuple (S, A, P, R, Ω, O). 
At each timestep t, the environment is in some state st ∈ S and the agent selects an action at ∈ A. This leads to a new state st+1 ∼ P(·|st, at) and reward rt ∼ R(st, at). In a POMDP, the agent never observes states directly, and instead has access only to observations ot ∈ Ω depending on the unobserved state, ot ∼ O(·|st). To select actions, the agent uses a policy depending only on the current observation at ∼ π(·|ot). Using the notation introduced above and in §2, we can align the A2MT framework with a POMDP by making the following identifications: (1) Actions for t ∈ (1, . . . , T) are binary acquisition decisions per modality, at ∈ AB, t ∈ (1, . . . , T). After the sequence is consumed, at t = T + 1, we insert an additional timestep at which no action is taken, AP = ∅. Formally, the action space A is the union of both spaces, A = AB ∪ AP. (2) For t ∈ (1, . . . , T), the state st contains the unmasked sequence of the modalities until time t, i.e. the *sequence* of observations x1:t, as well as a vector of all previous agent actions, a1:t. Here, the modalities evolve according to the data generating distribution p(xt+1|xt), e.g. the algorithm for the synthetic dataset creation in §3.1 or the generative model underlying AudioSet/Kinetics video data. At time T + 1, the state additionally contains the label associated with the multimodal sequence, i.e. a sample from the conditional distribution p(y|x1:T ) of the generating process. (3) The observation kernel O(ot|st) emits a sequence of (partially) masked modalities according to the acquisition pattern of the agent up to time t, ot = x˜1:t. Note that because of our construction in (2), the state contains all necessary information to assemble this sequence at each timestep. No observation is emitted at oT +1. (4) For t ∈ (1, . . . , T), the reward function is defined as Rt(st, at) = Rt(at) = −Σm cm at,m, i.e. 
the reward is the negative modality-specific acquisition cost at that timestep, which does not depend on the state, cf. eq. (1). For t = T + 1, the reward is given by the negative loss achieved by the classifier, R(sT +1, aT +1) = R(sT +1) = −L(f(x˜1:T ), y). Note that, instead of modelling the prediction f(x˜1:T ) as an action, we include f as part of the global environment state as it is not trained by reinforcement learning. When f is fixed, e.g. in our large scale experiments, the reward from the classification at T + 1 depends only on the acquisitions of the agent; f and its parameters are fixed across sequences. When f is trained jointly with the policy, the parameters of f are updated between—but not within—episodes, making the reward and thus the POMDP non-stationary.
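Putting identifications (1)-(4) together, the undiscounted return of one episode is the negative total acquisition cost minus the terminal classification loss. A minimal sketch, where `terminal_loss` stands in for L(f(x̃1:T), y):

```python
def episode_return(actions, costs, terminal_loss):
    """Return of one A2MT episode with gamma = 1.

    actions: (T, M) binary acquisition decisions a_{t,m}
    costs:   length-M per-modality acquisition costs c_m
    terminal_loss: classification loss received at the extra timestep T + 1
    """
    acquisition_cost = sum(c * a for row in actions for a, c in zip(row, costs))
    return -acquisition_cost - terminal_loss
```

For example, acquiring modality 1 at t = 1, modality 2 at t = 2, and both at t = 3 with costs (0.01, 0.1) and a terminal loss of 0.5 yields a return of −(0.01 + 0.1 + 0.11) − 0.5 = −0.72.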
Review 1: Summary: The paper has three contributions. First, it introduces a so-called A2MT task, which abstractly allows the agent to query temporal and multimodal data, e.g., video or audio streams. At each timestep, the agent can select some modalities (possibly none or all) and view their entry corresponding to the given timestep, e.g., a given frame in the video stream. The goal is to acquire enough data to predict some labels attached to the temporal and multimodal data (possibly some function of the data). Each query has its cost, possibly dependent on the modality, which creates tug-of-war dynamics, where the agent wants to increase the chances of predicting the label, and at the same time, minimize the number of queries. The paper's second contribution is the actual instance of A2MT consisting of synthetic data (a sort of number-guessing game) and the datasets based on AudioSet and Kinetics, introduced in other papers. Finally, the paper proposes a deep-learning system composed of a prediction model (here based on the Perceiver IO architecture) and a policy, which makes a decision on which modality to query at each step. The paper includes experimental analysis, including positive and negative results. Strengths and Weaknesses: **Strengths:** * The problem in the paper is ambitious, and the model chosen is a relatively new attention-based architecture that aims to decouple the input length from the model's residual stream dimension. * The problem is well-defined, and its relevance is described compellingly. * The difficulty of datasets scales well from simpler to more complex. * The paper provides some interesting observations, including a change from overfitting to generalization for the simple synthetic dataset, the comparison against random agents, the 'negative' result concerning adaptivity (see also Weakness paragraph), and the relationship between loss and entropy. 
**Weaknesses:** * The datasets should be clearly described in the main body of the paper. In several places, it is pretty hard to follow what the shapes of the data are or what part of the data the evaluations are performed on (e.g., Tables 1-3). * The paper's clarity would improve if the tables' and figures' captions were self-explanatory. One must find information scattered across the paper to understand better what is happening. For instance, in Tables 4-5, it is not entirely clear what 'Agent Acquisition Rate' is (it is hard to find a definition of that quantity, with the caption of Figure A.1 being some proxy of that), what the Random baselines are (page 8), why there is only one cost per acquisition (page 7), and what dataset is used for the calculations for Kinetics. * The paper structures A2MT as a reinforcement learning problem by adding a temporal aspect, actions, sparse rewards, and using an RL algorithm for training (A2C). However, the exact setup is not clearly defined (typically, one describes the underlying MDP or a similar object). Some deviations from the standard setup include the reward being defined by the performance of another learned model or the policy depending on the whole trajectory (and sometimes more). * The description of the synthetic dataset could be improved. For instance, "the counter modality repeatedly counts down from a randomly drawn starting value" is somewhat confusing. The lack of a clear description here makes the analysis of Figure 4 hard to follow. Describing Gumbel softmax as an RL algorithm is perhaps an abuse of notation. * Random agents (baselines) are unclear: how is the acquisition rate computed for the 'Random-Rate' agent, and how is the "fixed set of equidistantly spaced timesteps" chosen? * The paper provides a 'negative' result by showing that the learned agent does not learn a policy that adapts to inputs. 
This is interesting (as mentioned in the Strengths paragraph); however, the Authors' justifications, "there might not be enough signal from the model predictions for the agent to anticipate how acquisitions affect the reward" or "method struggles to learn acquisition strategies and representations simultaneously", remain speculative. This leaves the reader slightly unsatisfied and unsure whether the Authors made enough effort to clarify the issue. Some of the reasons could include (1) Some specific features of the data (e.g., periodicity); (2) Too short learning; (3) Difficulty in learning a policy to match a pre-trained predictive model; (4) Distribution shift in masking from pre-training to policy training; (5) Not enough regularization (the Authors mention the model's overfitting), etc. Requested Changes: I like the paper: it sets an interesting problem, proposes a learning system that aims to solve the tasks, and discusses some interesting experimental phenomena. However, I would like the Authors to improve the clarity, the technical details, and discuss some of the alternative reasons why the policy does not learn adaptive behavior (for details, see the Weakness section). If the Authors decide to follow up with this, I am willing to set "Claims And Evidence" to "Yes". Broader Impact Concerns: No concerns. ================================================== Review 2: Summary: This paper presents a framework for classification in which costs are assigned to observing different multimodal inputs (e.g. images or audio), and implements an approach to perform this classification with a Perceiver-IO type of architecture that takes actions that determine whether it attends to each of the different inputs. Experiments on a controlled synthetic dataset and several larger datasets evaluate the method's performance relative to its ablations. 
The paper finds that in some settings on some datasets, the method outperforms ablations in which the attention to the inputs is set at a fixed rate and at a fixed frequency. Strengths and Weaknesses: ## Strengths - The paper proposes an interesting task and datasets on which to evaluate it. The task generalizes some existing tasks that either assume that the acquirable features do not vary in time or that the acquirable features are low-dimensional. - The paper evaluates relative to some very important ablations and presents findings despite their "nonsignificance" in many settings, which informs readers that there is significant room for improvement on the methods side. - The paper evaluates on a tightly controlled synthetic regime, which is also itself a good regime for further experimentation on varying some of the fundamental characteristics of the data and evaluation. ## Weaknesses ### 1: Presentation weaknesses - [W1.1] The paper would be significantly improved with an architecture diagram of the policy and the classifier. - [W1.2] S3.1: It is unclear if the randomly drawn starting value for the counter is drawn once or multiple times. The example sequence given: "2103210210" appears to be only realizable if it is drawn multiple times, assuming values: [2, 3, 2]. Please clarify this in the paper. - [W1.3] The experiment presented in Table 3 is unclear. A mask rate of 0 results in AudioSet mAP of 0.370, which appears to correspond to the 0.370 values in Table 2 and Table 1. However, the 0.370 value in Table 1 includes masking (M1,M2,M3). It is unclear what a "Mask Rate" of 0.0 (Table 3 Row 1) and masking of M1, M2, and M3 combined are doing, because masking with 0.0 probability seems like a contradiction with applying the masking of M1, M2, and M3. Is the mask rate the input to M1, M2, and M3? This would make sense if it were used as the parameter of the Bernoulli distribution, so it would work for M1 and M3, but not M2. 
My main guess here is that the Mask Rate in Table 3 is the rate of masking the test data, not the rate of masking the training data. Please clarify the paper so that others are less likely to have to guess at this. If my guess is correct, I suggest using terminology that clarifies the difference between masking when it is used as model optimization hyperparameter, vs. when it is used as a (test/eval) dataset parameter. - [W1.4] S5.1 would be improved by presenting the metrics in a table, rather than inline, in order to increase their visibility. - [W1.5] The related work section could be improved by including discussion of the “anytime prediction” framework and approaches (e.g., but not limited to, [A,B,C,D] in the References section below), in which a learner’s output can be computed at “any time” and improves with the addition of more inputs. The proposed A2MT framework, in which a single prediction is made after a fixed number of timesteps with costs on using certain inputs, could be considered a degenerate version of anytime prediction, which also imposes costs on using certain inputs, but requires approaches to be capable of producing an output at every timestep. ### 2: Experimental weaknesses - [W2.1] The experiments in S5.1 and Fig A.1 could be improved with more metrics of counter acquisition that are normalized relative to the optimal number of acquisitions and the precise timesteps an optimal agent would attend to the counter. The optimal number of acquisitions is determined by the counter variable(s) — the digits need only be attended to by an optimal agent where the counter is 0, so observing the counter values after each 0 is sufficient. The paper states that 68.6% of the counter modality is attended to, but it is unclear how much an optimal agent should be attending to the test data. 
Furthermore, matching the optimal attention rate to counter variables is insufficient for attending to the correct counter variables — the analysis could be further improved with a metric of accuracy of attention to the "necessary" counter variables — scoring true positives for attention to the counter values that succeed 0, false negatives for failing to attend to those values, and false positives for counter modality attention elsewhere. These same metrics could be applied to the digit modality. These types of accuracy metrics are particularly valuable for this synthetic dataset because they're easily computable relative to exactly computing or estimating them for the Audio-Visual datasets. - [W2.2] The experiments in Table 4 and Table 5 would be significantly improved if they included prior work on these same classification tasks, both from approaches that are cost-insensitive (e.g. any prior classification approaches, which I'm fairly confident exist) and those that are cost-sensitive, if they exist on these datasets (I don't know whether such approaches exist on these datasets). The former type of approaches would ground the proposed method's potential improvement over cost-insensitive approaches, which is perhaps the main value proposition of any A2MT-based method. - [W2.3] It is unclear how the ranges of cost values investigated in Table 4 and Table 5 were chosen. The paper would be improved with additional experiments here. The cost ranges appear to differ significantly: in [1e-6, 1e-3] for audio, vs. [1e-3, 5e-1] for images. It is possible that the effect of outperforming the random ablations is only present in the lower range [1e-3, 5e-1] for both modalities, and that the effect of underperforming the ablations is only present in the higher range [1e-6, 1e-3]. If the experiments presented results that used the same dense ranges for both modalities, we could draw stronger conclusions. ### References - [A] Zilberstein, Shlomo. "Using anytime algorithms in intelligent systems."
AI Magazine 17.3 (1996): 73-73. - [B] Horvitz, Eric J. "Reasoning about beliefs and actions under computational resource constraints." arXiv preprint arXiv:1304.2759 (2013). - [C] Shani, Guy, Joelle Pineau, and Robert Kaplow. "A survey of point-based POMDP solvers." Autonomous Agents and Multi-Agent Systems 27 (2013): 1-51. - [D] Grubb, Alex, and Drew Bagnell. "Speedboost: Anytime prediction with uniform near-optimality." Artificial Intelligence and Statistics. PMLR, 2012. Requested Changes: Critical weaknesses to address: W1.1, W2.2 (cost-insensitive prior work), W2.3. The remaining weaknesses are important (including W2.2, cost-sensitive prior work) but noncritical. Broader Impact Concerns: None ================================================== Review 3: Summary: - The paper introduces a novel task formulation, i.e., the Active Acquisition for Multimodal Temporal Data (A2MT) problem. - The authors have designed synthetic scenarios and validated on real-world datasets, AudioSet and Kinetics. - The paper introduces a reinforcement learning approach based on Perceiver IO to tackle the proposed problem with a thorough empirical study. The agents' appropriate response to modality-specific acquisition costs demonstrates the potential of the proposed method for real-world applications. Strengths and Weaknesses: Strengths: - The motivation of the task formulation is clear and evident, and one can clearly see it could have potential applications in various fields like finance and healthcare. - The formulated problem is novel and timely, an up-to-date formulation of the classic Active Feature Acquisition (AFA) problem. - The empirical findings, especially those in Section 5.2.2, are of interest to the TMLR community. - Overall, the delivery and presentation of the work are mostly clear and easy to follow, with some minor formatting issues that need to be addressed.
Weaknesses: - The discussion of related work could be improved; in particular, the authors should consider situating or differentiating their work against the line of work in multimodal active learning. - Based on my understanding, the synthetic scenario used in the paper is not a natural abstraction of the temporal decision-making problem. Especially, I am not sure if the feature values evolve. > "Notably, prior work in AFA assumes static data: although acquisitions are sequential, feature values do not evolve along a temporal dimension." > More specifically, the synthetic scenario involves a series of random counting sequences, in which the starting number does not depend on the previous sequences. > "The counter modality repeatedly counts down from a randomly drawn starting value, e.g. 2103210210. The label attached to each sequence is the sum of the digit modality at all timesteps where the counter modality is zero, e.g. 3+3+3=9 in this case." > The presented problem is more of a reasoning problem over sequences than a temporal decision-making problem, to my understanding. For a more natural abstraction, a Markovian (or more complicated temporally dependent) sequence would make more sense. I would love to hear more about the design choice of this experiment from the authors. Requested Changes: Major: - (See weakness for details) I would really like to see how the presented method works on a more continuous time-dependent synthetic scenario, e.g., a simple Markovian counter sequence. Minor/Formatting: - In Section 2, the notation $c$ has been used to denote both color channels and cost. Please consider differentiating them. - Figures 5 and 7 are too small to read. Please consider enlarging them. Broader Impact Concerns: Not applicable. ================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper is well-written and introduces an interesting problem.
You have already addressed many of the reviewers' concerns, so I won't reiterate those here. The decision is Accept with Minor Revisions, where I (the action editor) can check that a few small things are implemented. The reviewers largely seem happy with the changes. A few requests based on the reviewers' reviews: 1. One reviewer asked for an architecture diagram of the policy and classifier. I suspect the reason for this request was that it is not clear how the classifier and the policy were jointly trained. Presumably, they were simply trained in parallel, with the policy producing the inputs and then that being handed to the classifier for training. I think this concern can be addressed by more clearly highlighting this training procedure, and stating, as you said, that you have two independent Perceiver IO models. It would also be useful to explain why the Perceiver IO model easily handles missing data, since when searching in the original paper for the word "missing", nothing seems to come up. A diagram of this might be a useful way to explain this, but is not strictly necessary. 2. There was a nice request to formalize the MDP underlying this RL problem, and you did so in the appendix. However, this could use a bit of improvement. It's mostly OK except for the treatment of f (which makes this a nonstationary POMDP). Since f is not trained using RL, I wonder if it would be better to consider f as part of the MDP. In the simplest setting, f is fixed (somehow trained ahead of time) and the RL agent's job is just to figure out what inputs to acquire to minimize cost while still getting good predictions. If f is changing, then this f could be part of the POMDP state (not visible to the agent). 3. One reviewer asked to include anytime prediction. I see how this is related, but quite different; it's appreciated that you discussed it. However, the related work section is too sparse. Active perception and attention are large areas.
For example, see a reasonably large survey in this work (I do apologize that it is my own, but of course, it's the easiest one for me to think of; you can find other, perhaps more pertinent, citations in it): "Maximizing Information Gain in Partially Observable Environments via Prediction Rewards" Satsangi et al., AAMAS 2020 These should be very simple writing changes, thus the Accept with Minor Revisions. ==================================================
# Enhancing Diffusion-Based Image Synthesis With Robust Classifier Guidance Bahjat Kawar bahjat.kawar@cs.technion.ac.il Computer Science Department Technion, Israel Roy Ganz ganz@campus.technion.ac.il Electrical Engineering Department Technion, Israel Michael Elad *elad@cs.technion.ac.il* Computer Science Department Technion, Israel Reviewed on OpenReview: *https://openreview.net/forum?id=tEVpz2xJWX* ## Abstract Denoising diffusion probabilistic models (DDPMs) are a recent family of generative models that achieve state-of-the-art results. In order to obtain class-conditional generation, it was suggested to guide the diffusion process by gradients from a time-dependent classifier. While the idea is theoretically sound, deep learning-based classifiers are infamously susceptible to gradient-based adversarial attacks. Therefore, while traditional classifiers may achieve good accuracy scores, their gradients are possibly unreliable and might hinder the improvement of the generation results. Recent work discovered that adversarially robust classifiers exhibit gradients that are aligned with human perception, and these could better guide a generative process towards semantically meaningful images. We utilize this observation by defining and training a time-dependent adversarially robust classifier and use it as guidance for a generative diffusion model. In experiments on the highly challenging and diverse ImageNet dataset, our scheme introduces significantly more intelligible intermediate gradients, better alignment with theoretical findings, as well as improved generation results under several evaluation metrics. Furthermore, we conduct an opinion survey whose findings indicate that human raters prefer our method's results. ## 1 Introduction Image synthesis is one of the most fascinating capabilities that have been unveiled by deep learning.
The ability to automatically generate new natural-looking images without any input was first enabled by revolutionary research on VAEs - variational auto-encoders (Kingma & Welling, 2014) and GANs - generative adversarial networks (Goodfellow et al., 2014). Both these techniques, as well as their many subsequent works (Radford et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017; Karras et al., 2020; Van Den Oord et al., 2017; Vahdat & Kautz, 2020), involved training neural networks on a large dataset of natural images, aiming to convert simple and easily accessed random vectors into images drawn from a distribution close to the training set. Despite their impressive capabilities, the image distributions learned by these models were initially restricted to a specific class of images, ranging from low-resolution handwritten digits (Deng, 2012) to higher-resolution human faces (Karras et al., 2019). As more research and resources were invested in this field, several works (Pu et al., 2017; Brock et al., 2018; Esser et al., 2021) were able to make a leap forward, and devise more complicated models capable of synthesizing a diverse range of natural images. A commonly agreed upon challenge in this context is ImageNet (Deng et al., 2009), a dataset containing millions of natural ![1_image_0.png](1_image_0.png) Figure 1: Images generated with our proposed method. images, all labeled with one of 1000 image classes. For a given class label (*e.g.* "hen" or "baseball"), these class-conditional generative models can nowadays synthesize a realistic image of that class. Recently, a different family of generative models has emerged to the forefront of image synthesis research. 
Denoising diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020), also known as score-based generative models (Song & Ermon, 2019), have achieved new state-of-the-art image generation performance (Dhariwal & Nichol, 2021; Song et al., 2021; Vahdat et al., 2021), showcasing better image fidelity and mode coverage than VAEs and GANs. These models have also excelled at several downstream tasks (Amit et al., 2021; Kawar et al., 2021b; Theis et al., 2022; Nie et al., 2022), and they also act as the powerhouse behind the unprecedented capabilities of text-to-image models (Ramesh et al., 2022; Saharia et al., 2022). Essentially, these models utilize a Gaussian denoising neural network in an iterative scheme - starting from a pure Gaussian noise image, it is continually and gradually denoised in a controlled fashion, while also being perturbed randomly, until it finally turns into a synthesized natural-looking image. To achieve class-conditional generation, the denoising neural network can accept the class label as input (Ho et al., 2022a). Additionally, the diffusion process can be guided by gradients from a classifier (Dhariwal & Nichol, 2021). This brings us naturally to the next topic: image classifiers and their role in image synthesis. In parallel to the progress in image synthesis research, substantial efforts were also made in the realm of image classification. Given an input image, neural networks trained for classification are able to assign it a class label from a predefined set of such labels, often achieving superhuman performance (He et al., 2016; Dosovitskiy et al., 2020). Despite their incredible effectiveness, such classifiers were found to be susceptible to small malicious perturbations known as adversarial attacks (Szegedy et al., 2014). These attacks apply a small change to an input image, almost imperceptible to the human eye, causing the network to incorrectly classify the image.
Subsequently, several techniques were developed for defending against such attacks (Madry et al., 2018; Andriushchenko & Flammarion, 2020; Zhang et al., 2019; Wang et al., 2020), obtaining classifiers that are *adversarially robust*. In addition to their resistance to attacks, robust classifiers were also found to possess unexpected advantages. The gradients of a robust classifier model were found to be *perceptually aligned*, exhibiting salient features of a class interpretable by humans (Tsipras et al., 2019). This phenomenon was harnessed by a few subsequent works, enabling robust classifiers to aid in basic image generation, inpainting, and boosting existing generative models (Santurkar et al., 2019; Ganz & Elad, 2021). In this work, we draw inspiration from these recent discoveries in image classification and incorporate their advantages into the world of diffusion-based image synthesis, which has largely been oblivious to the capabilities of robust classifiers. Dhariwal & Nichol (2021) suggested to use gradients from a (non-robust) classifier for guiding a diffusion synthesis process. We improve upon this technique by examining the validity of these gradients and suggesting a way to obtain more informative ones. We pinpoint several potential issues in the training scheme of classifiers used as guidance and observe their manifestation empirically (see Figures 3 and 4). We then propose the training of an adversarially robust, time-dependent classifier (*i.e.*, a classifier that accepts the current diffusion timestep as input) suited for guiding a generation process. Our proposed training method resolves the theoretical issues raised concerning previous classifier guidance techniques.

![2_image_0.png](2_image_0.png) Figure 2: Images generated by guided diffusion using the same random seed and class label, with a vanilla (top) and a robust (bottom) classifier. Our robust model provides more informative gradients, leading to better synthesis quality.
Empirically, our method attains significantly enhanced generative performance on the highly challenging ImageNet dataset (Deng et al., 2009). We evaluate the synthesis results using standard metrics, where our method outperforms the previous state-of-the-art classifier guidance technique. Furthermore, we conduct an opinion survey, where we ask human evaluators to choose their preferred result out of a pair of generated images. Each pair consists of two images generated using the same class label and the same random seed, once using the baseline classifier guidance method (Dhariwal & Nichol, 2021), and once using our proposed robust classifier guidance. Our findings show that human raters exhibit a pronounced preference towards our method's synthesis results. To summarize, we incorporate a recently discovered capability of robust classifiers, perceptually aligned gradients, into the classifier-guided diffusion-based image synthesis scheme (Dhariwal & Nichol, 2021). We highlight several benefits of the adversarial training scheme and show how they can aid in classifier guidance for diffusion models. To that end, we train an adversarially robust time-dependent classifier on the diverse ImageNet dataset (Deng et al., 2009). We use this classifier in conjunction with a conditional diffusion model to obtain high quality image generation. The resulting technique outperforms the previous vanilla classifier guidance method on several key evaluation metrics such as FID (Heusel et al., 2017). Furthermore, we present the results of an opinion survey we conducted, which found that human evaluators show a clear preference towards our method.

## 2 Background

## 2.1 Robust Classifiers

Deep learning-based classifiers parameterized by $\phi$ aim to model the log-likelihood of a class label $y \in \{1, \ldots, C\}$ given a data instance $\mathbf{x} \in \mathbb{R}^d$, namely $\log p_\phi(y|\mathbf{x})$.
Such architectures are trained to minimize the empirical risk over a given labeled training set $\{\mathbf{x}_i, y_i\}_{i=1}^{N}$, *e.g.*,

$$\min_{\phi}\frac{1}{N}\sum_{i=1}^{N}{\cal L}_{CE}({\bf h}_{\phi}({\bf x}_{i}),y_{i}),\tag{1}$$

where $N$ is the number of training examples, $\mathbf{h}_\phi(\mathbf{x}_i) = \{\log p_\phi(j|\mathbf{x}_i)\}_{j=1}^{C}$ is the set of these log-likelihood scores predicted by the classifier for the input $\mathbf{x}_i$, and $\mathcal{L}_{CE}$ is the well-known cross-entropy loss, defined as

$${\mathcal{L}}_{CE}(\mathbf{z},y)=-\log{\frac{\exp(\mathbf{z}_{y})}{\sum_{j=1}^{C}\exp(\mathbf{z}_{j})}}.\tag{2}$$

![3_image_0.png](3_image_0.png) Figure 3: Gradients of images on their respective true class labels, using a vanilla classifier and our robust one at different timesteps. Gradients are min-max normalized.

Classifiers of this form have had astounding success and have led to state-of-the-art (SOTA) performance in a wide range of domains (He et al., 2016; Simonyan & Zisserman, 2014). Nevertheless, these networks are known to be highly sensitive to minor corruptions (Hosseini et al., 2017; Dodge & Karam, 2017; Geirhos et al., 2017; Temel et al., 2017; 2018; Temel & AlRegib, 2018) and small malicious perturbations, known as adversarial attacks (Szegedy et al., 2014; Athalye et al., 2017; Biggio et al., 2013; Carlini & Wagner, 2017; Goodfellow et al., 2015; Kurakin et al., 2017; Nguyen et al., 2014). With the introduction of such models to real-world applications, these safety issues have raised concerns and drawn substantial research attention. As a consequence, in recent years there has been an ongoing development of better attacks, followed by the development of better defenses, and so on. While there are abundant attack and defense strategies, in this paper we focus on the Projected Gradient Descent (PGD) attack and Adversarial Training (AT) robustification method (Madry et al., 2018) that builds on it.
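To make Equations (1)-(2) and the just-mentioned PGD attack concrete, here is a minimal numpy sketch. It is an illustration only: the function names are ours, and the ℓ∞ radius and step size below are common illustrative values, not parameters from this paper.

```python
import numpy as np

def cross_entropy(z, y):
    """Eq. (2): negative log-softmax of the true-class logit, computed stably."""
    z = z - np.max(z)
    return -(z[y] - np.log(np.sum(np.exp(z))))

def empirical_risk(logits, labels):
    """Eq. (1): mean cross-entropy over a labeled batch."""
    return float(np.mean([cross_entropy(z, y) for z, y in zip(logits, labels)]))

def pgd_linf(loss_grad, x, y, eps=8 / 255, step=2 / 255, n_iter=10):
    """L-infinity PGD: ascend the loss along the gradient sign, projecting
    back into the eps-ball around x after every step."""
    x_adv = x.copy()
    for _ in range(n_iter):
        g = loss_grad(x_adv, y)                   # d loss / d input
        x_adv = x_adv + step * np.sign(g)         # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in a valid image range
    return x_adv
```

For a uniform logit vector over C classes the loss equals log C; adversarial training (AT) then simply minimizes `empirical_risk` on the `pgd_linf` outputs instead of the clean inputs.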
PGD is an iterative process for obtaining adversarial examples - the attacker updates the input instance using the direction of the model's gradient w.r.t. the input, so as to maximize the classification loss. AT is an algorithm for robustifying a classifier, training it to classify maliciously perturbed images correctly. Despite its simplicity, this method was proven to be highly effective, yielding very robust models, and most modern approaches rely on it (Andriushchenko & Flammarion, 2020; Huang et al., 2020; Pang et al., 2020; Qin et al., 2019; Xie et al., 2019; Zhang et al., 2019; Wang et al., 2020). In addition to the clear robustness advantage of adversarial defense methods, Tsipras et al. (2019) have discovered that the features captured by such robust models are more aligned with human perception. This property implies that modifying an image to maximize the probability of being assigned to a target class when estimated by a robust classifier yields semantically meaningful features aligned with the target class. In contrast, performing the same process using non-robust classifiers leads to imperceptible and meaningless modifications. This phenomenon is termed *Perceptually Aligned Gradients* (PAG). Since its discovery, a few works (Santurkar et al., 2019; Aggarwal et al., 2020) have harnessed it for various generative tasks, including synthesis refinement as a post-processing step (Ganz & Elad, 2021).

## 2.2 Diffusion Models

Denoising diffusion probabilistic models (DDPMs), also known simply as *diffusion models*, are a family of generative models that has recently been increasing in popularity (Song & Ermon, 2019; Ho et al., 2020). These methods have demonstrated unprecedented realism and mode coverage in synthesized images, achieving state-of-the-art results (Dhariwal & Nichol, 2021; Song et al., 2021; Vahdat et al., 2021) in well-known metrics such as Fréchet Inception Distance - FID (Heusel et al., 2017).
In addition to image generation, these techniques have also been successful in a multitude of downstream applications such as image restoration (Kawar et al., 2021a; 2022), unpaired image-to-image translation (Sasaki et al., 2021), image segmentation (Amit et al., 2021), image editing (Liu et al., 2021; Avrahami et al., 2022), text-to-image generation (Ramesh et al., 2022; Saharia et al., 2022), and more applications in image processing (Theis et al., 2022; Gao et al., 2022; Nie et al., 2022; Blau et al., 2022; Han et al., 2022) and beyond (Jeong et al., 2021; Chen et al., 2022; Ho et al., 2022b; Zhou et al., 2021). The core idea of diffusion-based generative models is to start from a pure Gaussian noise image, and gradually modify it using a denoising network and a controlled random perturbation until it is finally crystallized into a realistic high-quality image. While different realizations of this idea exist, we follow the notation established in (Dhariwal & Nichol, 2021). Specifically, diffusion models aim to sample from a probability distribution $p_\theta(\mathbf{x})$ that approximates a data probability $q(\mathbf{x})$ representing a given dataset. Sampling starts from a pure Gaussian noise vector $\mathbf{x}_T$, and gradually updates it into samples $\mathbf{x}_{T-1}, \mathbf{x}_{T-2}, \ldots, \mathbf{x}_2, \mathbf{x}_1$ until the final output image $\mathbf{x}_0$. Each timestep $t$ represents a fixed noise level in the corresponding image $\mathbf{x}_t$, which is a mixture of $\mathbf{x}_0$ and a white Gaussian noise vector $\epsilon_t$, specifically

$$\mathbf{x}_{t}={\sqrt{\alpha_{t}}}\mathbf{x}_{0}+{\sqrt{1-\alpha_{t}}}\epsilon_{t},\tag{3}$$

with predefined signal and noise levels $\alpha_t$ and $1-\alpha_t$, respectively ($0 = \alpha_T < \alpha_{T-1} < \cdots < \alpha_1 < \alpha_0 = 1$).
A denoising model $\epsilon_\theta(\mathbf{x}_t, t)$ is trained to approximate $\epsilon_t$, and is subsequently used at sampling time to model the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t) = \mathcal{N}(\mu_t, \sigma_t^2 \mathbf{I})$, with

$$\mu_{t}=\sqrt{\frac{\alpha_{t-1}}{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{1-\frac{\alpha_{t}}{\alpha_{t-1}}}{\sqrt{1-\alpha_{t}}}\,\epsilon_{\theta}(\mathbf{x}_{t},t)\right),\tag{4}$$

and $\sigma_t^2$ is either set to a constant value representing a bound on the possible variance of the underlying distribution (Ho et al., 2020), or learned by a neural network (Nichol & Dhariwal, 2021). This distribution enables the iterative sampling, starting from pure noise $\mathbf{x}_T$ and ending with a final image $\mathbf{x}_0$.

## 3 Motivation

## 3.1 Class-Conditional Diffusion Synthesis

We are interested in generating an image from a certain user-requested class of images, labeled $y$. Previous work in this area suggested conditioning a denoising diffusion model on an input class label, thereby obtaining $\epsilon_\theta(\mathbf{x}_t, t, y)$, and this way conditioning the sampling sequence on the desired class label (Ho et al., 2022a). In addition, building on ideas from (Sohl-Dickstein et al., 2015; Song et al., 2021), it was suggested by (Dhariwal & Nichol, 2021) to guide the diffusion process using gradients from a classifier. Assuming access to a time-dependent (actually, noise-level-dependent) classification model that outputs $\log p_\phi(y|\mathbf{x}_t, t)$, this *classifier guidance* technique suggests incorporating the model's gradient $\nabla_{\mathbf{x}_t} \log p_\phi(y|\mathbf{x}_t, t)$ into the diffusion process. This encourages the sampling output $\mathbf{x}_0$ to be recognized as the target class $y$ by the classifier model utilized. These gradients can be further scaled by a factor $s$, corresponding to a modified distribution proportional to $p_\phi(y|\mathbf{x}_t, t)^s$. Increasing $s$ results in a sharper distribution, thereby trading off diversity for fidelity.
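A minimal numpy sketch of the forward mixing of Eq. (3), the reverse-step mean of Eq. (4), and the gradient-based guidance described above. The function names are ours; `guided_mean` follows the mean-shift realization of classifier guidance in Dhariwal & Nichol (2021):

```python
import numpy as np

def diffuse(x0, eps, alpha_t):
    """Forward process, Eq. (3): mix the clean image with Gaussian noise."""
    return np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * eps

def posterior_mean(x_t, eps_pred, alpha_t, alpha_prev):
    """Mean mu_t of p(x_{t-1} | x_t) from Eq. (4), given the predicted noise."""
    coef = (1.0 - alpha_t / alpha_prev) / np.sqrt(1.0 - alpha_t)
    return np.sqrt(alpha_prev / alpha_t) * (x_t - coef * eps_pred)

def guided_mean(mu_t, sigma2_t, grad_log_p, s=1.0):
    """Classifier guidance: shift the mean by the scaled classifier score,
    s * sigma_t^2 * grad_x log p(y | x_t, t)."""
    return mu_t + s * sigma2_t * grad_log_p
```

A quick sanity check on Eq. (4): at the final step ($\alpha_{t-1} = \alpha_0 = 1$), a perfect noise prediction recovers the clean image exactly.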
A time-dependent classifier $\log p_\phi(y|\mathbf{x}_t, t)$ is trained for this purpose using the cross-entropy loss on noisy intermediate images $\mathbf{x}_t$, obtained by sampling images $\mathbf{x}$ from a dataset and randomly setting $t$, controlling the noise level. The classifier is then used in conjunction with a conditional diffusion model for sampling.

## 3.2 Vanilla Classifier Guidance Shortcomings

The use of gradients of the assumed underlying data distribution $\nabla_{\mathbf{x}_t} \log q(y|\mathbf{x}_t, t)$ in the diffusion process is well-motivated by Dhariwal & Nichol (2021). However, it is unclear whether the aforementioned "vanilla" training method of a classifier encourages its gradients' proximity to those of the data distribution. In fact, it was proven by (Srinivas & Fleuret, 2020) that these model gradients can be arbitrarily manipulated without affecting either the classifier's cross-entropy loss or its accuracy. Crucially, this means that training is oblivious to arbitrary changes in model gradients. We provide this proof in Appendix B for completeness, and naturally extend it to cover time-dependent classifiers trained on noisy images. It was also previously suggested that the iterative use of such gradients is akin to a black-box adversarial attack on the Inception classifier used for assessing generation quality (Ho & Salimans, 2021). Essentially, this may result in better nominal generation performance in metrics such as FID, while not necessarily improving the visual quality of output images. Moreover, (Chao et al., 2021) prove that changing the scaling factor of classifier guidance to $s \neq 1$ does not generally correspond to a valid probability density function.

![5_image_0.png](5_image_0.png) Figure 4: Maximizing the probability of target classes with given images using classifier gradients (at t = 0). Our robust classifier leads to images with less adversarial noise, and more aligned with the target class.
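The core of the Srinivas & Fleuret (2020) argument is that the softmax, and hence the cross-entropy loss and the predicted labels, are invariant to adding an arbitrary input-dependent scalar $g(\mathbf{x})$ to all logits, while the input-gradient of the shifted logits picks up the extra term $\nabla g(\mathbf{x})$. A tiny numpy check of the invariance (the particular `g` below is an arbitrary illustrative choice, not taken from the proof):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)  # softmax is shift-invariant; this also stabilizes exp
    e = np.exp(z)
    return e / e.sum()

def g(x):
    """Arbitrary scalar function of the input. Adding g(x) to every logit
    leaves the softmax unchanged, yet its input-gradient (here 100*cos(x))
    is added to the input-gradient of every logit."""
    return 100.0 * np.sum(np.sin(x))

x = np.array([0.2, -1.3, 0.7])      # a stand-in input
logits = np.array([1.0, 2.0, 0.5])  # a stand-in for h_phi(x)
manipulated = logits + g(x)         # logits of the "manipulated" classifier
assert np.allclose(softmax(logits), softmax(manipulated))
assert np.argmax(logits) == np.argmax(manipulated)
```

So the manipulated classifier has identical loss, probabilities, and accuracy, yet arbitrarily different input-gradients, which is exactly why cross-entropy training alone cannot certify gradient quality.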
To conclude, there is substantial evidence in recent literature towards the shortcomings of "vanilla" trained classifiers for obtaining accurate gradients. This, in turn, motivates the pursuit of alternative approximations of $\nabla_{\mathbf{x}_t} \log q(y|\mathbf{x}_t, t)$ for use as classifier guidance in diffusion-based synthesis.

## 4 Obtaining Better Gradients

## 4.1 Robust Classifier Benefits

In traditional Bayesian modeling literature, it is assumed that given a data point $\mathbf{x}$, there exists a probability of it belonging to a certain class $y$, for each possible choice of $y$. In diffusion models, the same assumption is made over noisy data points $\mathbf{x}_t$, with the probability distribution $q(y|\mathbf{x}_t, t)$. However, in practice, no concrete realization of these probabilities exists. Instead, we have access to a labeled image dataset where for each image $\mathbf{x}$, there is a "ground-truth" label $y \in \{1, \ldots, C\}$. Using this data, a classifier model is encouraged to output $p_\phi(y|\mathbf{x}_t, t) = 1$ and $p_\phi(y'|\mathbf{x}_t, t) = 0$ for $y' \neq y$ through the cross-entropy loss function. While this method achieves impressive classification accuracy scores, there is no indication that its outputs would approximately match the assumed underlying distribution, nor that its input-gradients would be reliable. Instead of relying on "vanilla" cross-entropy training of classification models, we suggest leveraging a few recently discovered advantages of robust adversarially-trained classifiers, which have been largely unexplored in the context of diffusion models hitherto. Tsipras et al. (2019) show that traditionally trained classifiers can very easily mismatch an underlying synthetic data distribution by relying on non-robust weakly correlated features. In contrast, an adversarially-trained robust classifier would be vastly more likely to rely on more robust and highly informative features.
Interestingly, by migrating to a robust classifier, we can leverage its recently discovered phenomenon of perceptually aligned gradients (Tsipras et al., 2019). These gradients have allowed robust classifiers to be used in tasks such as inpainting, basic image generation (Santurkar et al., 2019), and boosting existing generative models (Ganz & Elad, 2021). Notably, such tasks imply the existence of decent generative capabilities implicit in robust classifier gradients, but not "vanilla" ones. Therefore, we propose replacing the classifier used in (Dhariwal & Nichol, 2021) with a robust one.

Table 1: Quality metrics for image synthesis using a class-conditional diffusion model on ImageNet (128 × 128). Left to right: no guidance, vanilla classifier guidance, robust classifier guidance (ours).

| Metric | Unguided | Vanilla | Robust |
|---------------|------------|-----------|----------|
| Precision (↑) | 0.70 | 0.78 | 0.82 |
| Recall (↑) | 0.65 | 0.59 | 0.56 |
| FID (↓) | 5.91 | 2.97 | 2.85 |

## 4.2 Proposed Method

Note that an off-the-shelf adversarially trained robust classifier would not fit our purpose in this context. This is due to the fact that in the diffusion process, the classifier operates on intermediate images $\mathbf{x}_t$, which are a linear mixture of an ideal image and Gaussian noise. Furthermore, this mixture is also a function of $t$, which requires the classifier model to be time-dependent. Consequently, we propose the training of a novel robust time-dependent classifier model $\mathbf{h}_\phi(\mathbf{x}_t, t) = \{\log p_\phi(j|\mathbf{x}_t, t)\}_{j=1}^{C}$. For each sample $\mathbf{x}$ from a training set $\mathcal{D} = \{\mathbf{x}_i, y_i\}_{i=1}^{N}$ and timestep $t$, we first transform $\mathbf{x}$ into its noisy counterpart $\mathbf{x}_t$, and then apply a gradient-based adversarial attack on it. Since the training images are perturbed with both Gaussian and adversarial noises, we apply early stopping - the attack stops as soon as the model is fooled.
Finally, the model is shown the attacked image $\tilde{x}_t = A(x_t, \phi)$, and is trained using the cross-entropy loss with the ground-truth label y. The resulting loss function is formulated as

$$\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D},\,t\sim\mathrm{Uni}[0,T],\,\mathbf{x}_{t}\sim q_{t}\left(\mathbf{x}_{t}|\mathbf{x}\right)}\left[\mathcal{L}_{CE}\left(\mathbf{h}_{\phi}\left(\tilde{\mathbf{x}}_{t},t\right),y\right)\right].\tag{5}$$

Early stopping is crucial to this training scheme, as heavily noisy images (especially for large t) subjected to a full-fledged attack can easily overwhelm the model in the early stages of training. Conversely, early stopping allows the model to train on non-attacked samples initially, then proceed to gradually more challenging cases as training progresses. This scheme resolves several previously mentioned issues with vanilla classifier guidance. First, the gradients of an adversarially trained classifier cannot be arbitrarily manipulated like those of a vanilla model. Unlike the vanilla setting, the loss for an adversarially trained classifier depends directly on the adversarial attack employed, which in turn depends on the model gradients. Therefore, any change to the model's gradients, such as the one suggested by Srinivas & Fleuret (2020), would necessarily affect the model's predictions and loss during training. Second, gradients from a robust classifier are shown to be aligned with human perception. Namely, they exhibit salient features that humans naturally associate with the target class; this should be contrasted with adversarial noise unintelligible to humans. Therefore, they cannot be thought of as "adversarial" to performance metrics, as are their vanilla classifier counterparts (Ho & Salimans, 2021).
Third, while vanilla cross-entropy-trained classifier gradients are not known to contain any intelligible features, robust classifier gradients may be interpretable. Elliott et al. (2021) have leveraged robust classifier gradients in order to highlight salient features and explain neural network decisions. These findings hint towards the superiority of these gradients. Note that because of the need to work on intermediate images, the classifiers employed with diffusion models train on data mixed with Gaussian noise. It was discovered that this simple data augmentation can lead to gradients that are more interpretable by humans (Kaur et al., 2019), albeit to a lesser extent than observed in adversarially trained models. Therefore, we hypothesize that utilizing a model with better "perceptually aligned gradients" will yield enhanced image synthesis results.

![7_image_0.png](7_image_0.png)

Figure 5: Approximations of the final image at uniformly spaced intermediate steps of the guided diffusion process, for the same class and the same random seed. Our robust classifier provides better guidance.

## 5 Experiments

## 5.1 Robust Time-Dependent Classifier Training

In our experiments, we focus on the highly challenging ImageNet (Deng et al., 2009) dataset for its diversity and fidelity. We consider the 128 × 128 pixel resolution, as it provides a sufficiently high level of detail while still being computationally efficient. In order to test our hypothesis, we require the training of a robust time-dependent classifier. We adopt the same classifier architecture as (Dhariwal & Nichol, 2021) and train it from scratch using the proposed loss in Equation (5) on the ImageNet training set. We use the gradient-based PGD attack to perturb the noisy images $x_t$. The attack is restricted to the threat model $\{x_t + \delta \mid \|\delta\|_2 \leq 0.5\}$, and performed using a step size of 0.083 and a maximum of 7 iterations.
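The L2-ball PGD attack with early stopping described above can be sketched as follows. The radius (0.5), step size (0.083), and iteration cap (7) mirror the values in the text, but the classifier here is a toy linear model standing in for the time-dependent network $h_\phi$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_l2_early_stop(logits_fn, grad_fn, x, y, eps=0.5, step=0.083, iters=7):
    """L2-ball PGD on the cross-entropy loss, stopping once the model is fooled."""
    x_adv = x.copy()
    for _ in range(iters):
        if logits_fn(x_adv).argmax() != y:          # early stopping: attack succeeded
            break
        g = grad_fn(x_adv, y)                       # gradient of CE w.r.t. the input
        x_adv = x_adv + step * g / (np.linalg.norm(g) + 1e-12)
        delta = x_adv - x
        norm = np.linalg.norm(delta)
        if norm > eps:                              # project back onto the L2 ball
            x_adv = x + delta * (eps / norm)
    return x_adv

# Toy linear classifier h(x) = W x standing in for the time-dependent model.
W = np.eye(2)
logits_fn = lambda z: W @ z
grad_fn = lambda z, y: W.T @ (softmax(W @ z) - np.eye(2)[y])   # d CE / d x

x = np.array([0.2, 0.0])                            # weakly classified as class 0
x_adv = pgd_l2_early_stop(logits_fn, grad_fn, x, y=0)
```

On this toy input the attack flips the prediction after two steps and then stops, so the perturbation stays well inside the 0.5-radius budget.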
We stop the PGD attack on samples as soon as it succeeds in achieving misclassification. This early stopping technique allows the model to train on unattacked data points at the beginning, and progressively increase its robustness during training. We train the classifier for 240k iterations, using a batch size of 128, a weight decay of 0.05, and a linearly annealed learning rate starting at $3 \times 10^{-4}$ and ending at $6 \times 10^{-5}$. Training is performed on two NVIDIA A40 GPUs. In addition, we conduct an ablation study on CIFAR-10 (Krizhevsky et al., 2009) and report the results and implementation details in Appendix E. To qualitatively verify that the resulting model indeed produces perceptually aligned gradients, we examine the gradients at different timesteps for a handful of natural images from the ImageNet validation set. For comparison, we also show the same gradients as produced by the vanilla classifier trained by Dhariwal & Nichol (2021). As can be seen in Figure 3, the gradients from our robust model are more successful than their vanilla counterparts at highlighting salient features aligned with the image class, and with significantly less adversarial noise. To further demonstrate the information implicit in these gradients, we perform a targeted PGD process with 7 steps and a threat model $\{x + \delta \mid \|\delta\|_2 \leq 100\}$, maximizing a certain target class for an initial image. Figure 4 shows that our model yields images that align better with the target class.

## 5.2 Robust Classifier Guided Image Synthesis

The main goal of our work is improving class-conditional image synthesis. Following (Dhariwal & Nichol, 2021), we utilize 250 diffusion steps out of the trained 1000 (by uniformly skipping steps at sampling time) of their pre-trained conditional diffusion model for this task, while guiding it using our robust classifier.
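For reference, classifier guidance in the style of (Dhariwal & Nichol, 2021) shifts each reverse-step Gaussian mean by the scaled classifier gradient. A minimal sketch with stand-in numbers (in practice the gradient would come from the trained classifier and the mean from the denoiser):

```python
import numpy as np

def guided_mean(mu, sigma2, class_grad, s):
    """Reverse-step mean shifted by the scaled classifier gradient:
    mu' = mu + s * Sigma * grad_x log p(y | x_t), with diagonal Sigma = sigma2 * I."""
    return mu + s * sigma2 * class_grad

# Stand-in values: the denoiser proposes mu; the classifier gradient points
# toward regions the classifier deems more likely to carry the label y.
mu = np.array([0.1, -0.2])
sigma2 = 0.04
grad_log_p = np.array([1.5, 0.5])
mu_guided = guided_mean(mu, sigma2, grad_log_p, s=1.0)   # s = 1 for our classifier
```

Larger guidance scales s push samples further toward the class mode, trading diversity for fidelity, which is why the sweep over s below matters.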
For the classifier guidance scale, we sweep across values s ∈ {0.25, 0.5, 1, 2} and find that s = 1 produces the best results, aligning well with theoretical findings set forth by Chao et al. (2021). Dhariwal & Nichol (2021) perform a similar sweep for their model, and find that s = 0.5 provides the best results. In all comparisons, we use s = 1 for our robust classifier and s = 0.5 for the vanilla one. We conduct an ablation study regarding the important hyperparameters and design choices in Appendix E.

Table 2: Percentage of image pairs where human evaluators prefer our robust classifier's output, the vanilla one, or have no preference. An output is considered preferred if the percentage of users who selected it passes a certain threshold.

| Threshold | Robust | Vanilla | No Preference |
|-------------|----------|-----------|-----------------|
| 50% | 61.5% | 31.5% | 7.0% |
| 60% | 51.0% | 28.5% | 28.5% |
| 70% | 35.0% | 13.5% | 51.5% |
| 80% | 21.5% | 8.5% | 70.0% |

Qualitatively, the resulting synthesized images using our robust classifier look visually pleasing, as evident in Figures 1, 2, and 5. However, our method underperforms in a handful of cases, as we show in Figure 6. In order to quantify the quality of our results, we adopt the standard practice in class-conditional ImageNet image synthesis: we randomly generate 50000 images, 50 from each of the 1000 classes, and evaluate them using several well-known metrics: FID (Heusel et al., 2017), Precision, and Recall (Kynkäänniemi et al., 2019). Precision quantifies sample fidelity as the fraction of generated images that reside within the data manifold, whereas Recall quantifies the diversity of samples as the fraction of real images residing within the generated image manifold. FID provides a comprehensive metric for both fidelity and diversity, measuring the distance between two image distributions (real and generated) in the latent space of the Inception V3 (Szegedy et al., 2016) network.
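The FID values in Table 1 follow the standard Fréchet computation: fit Gaussians to real and generated Inception features and measure $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(C_1 + C_2 - 2(C_1 C_2)^{1/2})$. A self-contained numpy sketch on stand-in feature matrices (the Inception V3 feature extraction itself is omitted):

```python
import numpy as np

def _sqrtm(m):
    # Matrix square root via eigendecomposition; adequate here because the
    # product of two covariance (PSD) matrices has real, nonnegative eigenvalues.
    w, v = np.linalg.eig(m)
    w = np.sqrt(np.clip(w.real, 0.0, None))
    return (v @ np.diag(w) @ np.linalg.inv(v)).real

def fid(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two (n, d) feature sets."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    c_a = np.cov(feats_a, rowvar=False)
    c_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(c_a + c_b - 2.0 * _sqrtm(c_a @ c_b)))

rng = np.random.default_rng(0)
real = rng.standard_normal((500, 4))
fake = real + np.array([1.0, 0.0, 0.0, 0.0])   # same covariance, shifted mean
```

Shifting every sample by a constant unit vector leaves the covariances identical, so the trace term vanishes and the FID reduces to the squared mean distance, here 1.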
In Table 1, we compare three guidance methods for class-conditional generation: (i) using the class-conditional diffusion model without guidance; (ii) guidance using the pre-trained vanilla classifier; and (iii) guidance by our robust classifier. Our proposed model achieves better FID and Precision than both competing techniques, but it is outperformed in Recall. Seeking more conclusive evidence that our method leads to better visual quality, we conduct an opinion survey with human evaluators. We randomly sample 200 class labels, and then generate the 200 corresponding images twice with the conditional diffusion model: once guided by the pre-trained vanilla classifier, and once by our robust one. Both generation processes are performed using the same random seed. Human evaluators were shown a randomly ordered pair of images (one from each classifier), the requested textual class label, and the question: "Which image is more realistic, and more aligned to the description?". Evaluators were asked to choose an option from "Left", "Right", and "Same". The image pairs were shown in a random order for each evaluator. The main findings of the survey are summarized in Table 2. In each pair, we consider an image to be preferred over its counterpart if it is selected by more than a certain percentage (threshold) of the users who selected a side. We then calculate the percentage of pairs where evaluators prefer our robust classifier's output, the vanilla one, or have no preference. We vary the threshold from 50% up to 80%, and observe that humans prefer our classifier's outputs over the vanilla ones at all threshold levels. During the survey, each pair was rated by 19 to 36 evaluators, with an average of 25.09 evaluators per pair, totaling 5018 individual answers. Out of these, 40.4% were in favor of our robust classifier and 32.4% were in favor of the vanilla one.
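The thresholded preference statistics of Table 2 can be reproduced from per-pair vote counts; a small sketch with hypothetical counts (the real survey data is not reproduced here):

```python
def tally(pairs, threshold):
    """pairs: list of (votes_robust, votes_vanilla) per image pair.
    A side is 'preferred' if it gets more than `threshold` of the votes cast
    for either side; otherwise the pair counts as 'no preference'."""
    robust = vanilla = none = 0
    for r, v in pairs:
        total = r + v
        if total and r / total > threshold:
            robust += 1
        elif total and v / total > threshold:
            vanilla += 1
        else:
            none += 1
    return robust, vanilla, none

# Hypothetical vote counts for three pairs.
pairs = [(18, 7), (10, 15), (13, 12)]
print(tally(pairs, 0.5))   # -> (2, 1, 0)
print(tally(pairs, 0.7))   # -> (1, 0, 2)
```

Raising the threshold moves narrowly decided pairs into the "no preference" bucket, which is the trend visible down the rows of Table 2.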
These results indicate a considerable preference of human evaluators for our method, at a significance level above 95%.

![9_image_0.png](9_image_0.png)

Figure 6: In a handful of cases, vanilla classifier guidance produces better outputs than robust classifier guidance. These results mostly consist of better lighting, saturation, and image focus.

## 6 Related Work

In this work we propose to harness the perceptually aligned gradients phenomenon by utilizing robust classifiers to guide a diffusion process. Since this phenomenon's discovery, several works have explored the generative capabilities of such classifiers. Santurkar et al. (2019) demonstrated that adversarially robust models can be used for solving various generative tasks, including basic synthesis, inpainting, and super-resolution. Zhu et al. (2021) drew the connection between adversarial training and energy-based models and proposed a joint energy-adversarial training method for improving the generative capabilities of robust classifiers. Furthermore, Ganz & Elad (2021) proposed using a robust classifier for sample refinement as a post-processing step. In contrast to these works, this paper is the first to integrate a robust classifier into the inner workings of a generative model's synthesis process, leading to a marked improvement. Also worth mentioning are a few other improved guidance techniques that have been proposed for class-conditional diffusion. Chao et al. (2021) developed a new objective for training a classifier to capture the likelihood score. More specifically, they introduced a two-stage training scheme: first train a diffusion model, and then train the classifier to better supplement the estimations of the frozen diffusion model. Despite the intriguing idea, they demonstrate it only on low-resolution datasets (32 × 32), and the two training phases are sequential, as the classifier's training requires a pre-trained diffusion model. Hence, these phases are not parallelizable.
In contrast, we propose an independent classifier training scheme, which scales well to more diverse datasets with higher resolutions. Another fascinating work is (Ho & Salimans, 2021), which proposed class-conditional synthesis without using a classifier. Instead, they linearly combine the predictions of conditional and unconditional diffusion models. Interestingly, a single neural network was used for both models. However, while it shows impressive performance, the proposed combination is heuristic and requires careful hyperparameter tuning. Their work (named classifier-free guidance) follows a different research direction than ours, as we focus on enhancing classifier guidance, enabling information from outside the trained diffusion model to be incorporated into the generation process. This approach improves the generative process' modularity and flexibility, as it allows new classes to be defined over time, without requiring further training of the base generative diffusion model. Moreover, in the case where classes are not mutually exclusive, classifier guidance allows for the generation of multiple classes in the same image at inference time (by taking the gradient for all requested classes). This is possible in classifier-free guidance only by defining a combinatorial number of class embeddings (to account for all possible class intersections).

## 7 Conclusion

In this paper we present the fusion of diffusion-based image synthesis and adversarially robust classification. Despite the success of classifier guidance for diffusion models (Dhariwal & Nichol, 2021), we highlight several key weaknesses of this approach. Specifically, our analysis of the vanilla classifier gradients used for guidance exposes their limited ability to contribute to the synthesis process. As an alternative, we train a novel adversarially robust time-dependent classifier.
We show that this scheme resolves the issues identified in vanilla classifier gradients, and use the resulting robust classifier as guidance for a generative diffusion process. This is shown to enhance image synthesis performance on the highly challenging ImageNet dataset (Deng et al., 2009), as we verify using standard evaluation metrics such as FID (Heusel et al., 2017), as well as a generative performance evaluation survey, where human raters show a clear preference towards images generated by the robust classifier guidance technique that we propose. Our future work may focus on several promising directions: (i) generalizing this technique for obtaining better gradients from multi-modal networks such as CLIP (Radford et al., 2021), which help guide text-to-image diffusion models (Ramesh et al., 2022); (ii) implementing robust classifier guidance beyond diffusion models, e.g. for use in classifier-guided GAN training (Sauer et al., 2022); (iii) extending our proposed technique to unlabeled datasets; and (iv) seeking better sources of perceptually aligned gradients (Ganz et al., 2022), so as to better guide the generative diffusion process.

## References

Gunjan Aggarwal, Abhishek Sinha, Nupur Kumari, and Mayank Kumar Singh. On the benefits of models with perceptually-aligned gradients. *ArXiv*, abs/2005.01499, 2020.

Tomer Amit, Eliya Nachmani, Tal Shaharbany, and Lior Wolf. Segdiff: Image segmentation with diffusion probabilistic models. *arXiv preprint arXiv:2112.00390*, 2021.

Maksym Andriushchenko and Nicolas Flammarion. Understanding and improving fast adversarial training. In *Neural Information Processing Systems*, 2020.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In *Proceedings of the 34th International Conference on Machine Learning*, volume 70 of *Proceedings of Machine Learning Research*, pp. 214–223. PMLR, 06–11 Aug 2017.

Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok.
Synthesizing robust adversarial examples, 2017. Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18208–18218, 2022. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. *Lecture Notes in Computer Science*, pp. 387–402, 2013. ISSN 1611-3349. Tsachi Blau, Roy Ganz, Bahjat Kawar, Alex Bronstein, and Michael Elad. Threat model-agnostic adversarial defense using diffusion models. *arXiv preprint arXiv:2207.08089*, 2022. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. *arXiv preprint arXiv:1809.11096*, 2018. Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods, 2017. Chen-Hao Chao, Wei-Fang Sun, Bo-Wun Cheng, Yi-Chen Lo, Chia-Che Chang, Yu-Lun Liu, Yu-Lin Chang, Chia-Ping Chen, and Chun-Yi Lee. Denoising likelihood score matching for conditional score-based data generation. In *International Conference on Learning Representations*, 2021. Zehua Chen, Xu Tan, Ke Wang, Shifeng Pan, Danilo Mandic, Lei He, and Sheng Zhao. Infergrad: Improving diffusion models for vocoder by considering inference in training. In *ICASSP 2022-2022 IEEE International* Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8432–8436. IEEE, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 248–255, 2009. Li Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012. Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. 
In *ThirtyFifth Conference on Neural Information Processing Systems*, 2021. Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions, 2017. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2020. Andrew Elliott, Stephen Law, and Chris Russell. Explaining classifiers using adversarial perturbations on the perceptual ball. In *Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021. Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 12873–12883, 2021. Roy Ganz and Michael Elad. Bigroc: Boosting image generation via a robust classifier. *CoRR*, abs/2108.03702, 2021. Roy Ganz, Bahjat Kawar, and Michael Elad. Do perceptually aligned gradients imply adversarial robustness? arXiv preprint arXiv:2207.11378, 2022. Jin Gao, Jialing Zhang, Xihui Liu, Trevor Darrell, Evan Shelhamer, and Dequan Wang. Back to the source: Diffusion-driven test-time adaptation. *arXiv preprint arXiv:2207.03442*, 2022. Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, and Felix A. Wichmann. Comparing deep neural networks against humans: object recognition when the signal gets weaker, 2017. I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and Harnessing Adversarial Examples. In *International* Conference on Learning Representations, ICLR, 2015. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014. 
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. *Advances in neural information processing systems*, 30, 2017. Xizewen Han, Huangjie Zheng, and Mingyuan Zhou. CARD: Classification and regression diffusion models. arXiv preprint arXiv:2206.07275, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, volume 30, 2017. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In *NeurIPS 2021 Workshop on Deep* Generative Models and Downstream Applications, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pp. 6840–6851. Curran Associates, Inc., 2020. Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. *Journal of Machine Learning Research*, 23 (47):1–33, 2022a. Jonathan Ho, Tim Salimans, Alexey A Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. In *ICLR Workshop on Deep Generative Models for Highly Structured Data*, 2022b. Hossein Hosseini, Baicen Xiao, and Radha Poovendran. Google's cloud vision api is not robust to noise, 2017. Lang Huang, Chao Zhang, and Hongyang Zhang. Self-adaptive training: beyond empirical risk minimization. 
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim. Diff-tts: A denoising diffusion model for text-to-speech. *arXiv preprint arXiv:2104.01409*, 2021. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 4401–4410, 2019. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110–8119, 2020. Simran Kaur, Jeremy Cohen, and Zachary C Lipton. Are perceptually-aligned gradients a general property of robust classifiers? *arXiv preprint arXiv:1910.08640*, 2019. Bahjat Kawar, Gregory Vaksman, and Michael Elad. SNIPS: Solving noisy inverse problems stochastically. Advances in Neural Information Processing Systems, 34, 2021a. Bahjat Kawar, Gregory Vaksman, and Michael Elad. Stochastic image denoising by sampling from the posterior distribution. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pp. 1866–1875, 2021b. Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In ICLR Workshop on Deep Generative Models for Highly Structured Data, 2022. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. A. 
Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world, 2017. Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. *Advances in Neural Information Processing Systems*, 32, 2019. Xihui Liu, Dong Huk Park, Samaneh Azadi, Gong Zhang, Arman Chopikyan, Yuxiao Hu, Humphrey Shi, Anna Rohrbach, and Trevor Darrell. More control for free! image synthesis with semantic diffusion guidance. *arXiv preprint arXiv:2112.05744*, 2021. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In *6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings*. OpenReview.net, 2018. Anh Mai Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. *CoRR*, abs/1412.1897, 2014. Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162–8171. PMLR, 2021. Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. Diffusion models for adversarial purification. In *International Conference on Machine Learning (ICML)*, 2022. Tianyu Pang, Xiao Yang, Yinpeng Dong, Taufik Xu, Jun Zhu, and Hang Su. Boosting adversarial training with hypersphere embedding. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual* Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. Yuchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, and Lawrence Carin. Adversarial symmetric variational autoencoder. 
*Advances in neural information processing systems*, 30, 2017. Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, and Pushmeet Kohli. Adversarial robustness through local linearization. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), *Advances in Neural Information Processing Systems 32: Annual Conference on Neural* Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 13824–13833, 2019. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In *4th International Conference on Learning Representations*, 2016. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. *arXiv preprint arXiv:2204.06125*, 2022. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-toimage diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*, 2022. Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Image Synthesis with a Single (Robust) Classifier. Curran Associates Inc., Red Hook, NY, USA, 2019. Hiroshi Sasaki, Chris G Willcocks, and Toby P Breckon. UNIT-DDPM: Unpaired image translation with denoising diffusion probabilistic models. *arXiv preprint arXiv:2104.05358*, 2021. 
Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In *Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings*, pp. 1–10, 2022. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2014. URL https://arxiv.org/abs/1409.1556. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, pp. 11918–11930, 2019. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In *International Conference on* Learning Representations, 2021. Suraj Srinivas and Francois Fleuret. Rethinking the role of gradient-based attribution methods for model interpretability. In *International Conference on Learning Representations*, 2020. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In *International Conference on Learning Representations, ICLR*, 2014. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 2818–2826, 2016. Dogancan Temel and Ghassan AlRegib. Traffic signs in the wild: Highlights from the IEEE video and image processing cup 2017 student competition [SP competitions]. *IEEE Signal Processing Magazine*, 35(2): 154–161, mar 2018. Dogancan Temel, Gukyeong Kwon, Mohit Prabhushankar, and Ghassan AlRegib. 
Cure-tsr: Challenging unreal and real environments for traffic sign recognition. 2017. Dogancan Temel, Jinsol Lee, and Ghassan AlRegib. Cure-or: Challenging unreal and real environments for object recognition. 2018. Lucas Theis, Tim Salimans, Matthew D Hoffman, and Fabian Mentzer. Lossy compression with gaussian diffusion. *arXiv preprint arXiv:2206.08889*, 2022. D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry. Robustness May Be at Odds with Accuracy. In *International Conference on Learning Representations, ICLR*, 2019. Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. *Advances in Neural* Information Processing Systems, 33:19667–19679, 2020. Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. In Neural Information Processing Systems (NeurIPS), 2021. Aaron Van Den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017. Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L. Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 501–509. Computer Vision Foundation / IEEE, 2019. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. 
In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning, ICML 2019*, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 7472–7482. PMLR, 2019.

Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient GAN training. *Advances in Neural Information Processing Systems*, 33:7559–7570, 2020.

Linqi Zhou, Yilun Du, and Jiajun Wu. 3D shape generation and completion through point-voxel diffusion. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 5826–5835, 2021.

Yao Zhu, Jiacheng Ma, Jiacheng Sun, Zewei Chen, Rongxin Jiang, Yaowu Chen, and Zhenguo Li. Towards understanding the generative capability of adversarially robust classifiers. In *2021 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 7708–7717, 2021. doi: 10.1109/ICCV48922.2021.00763.

## A Additional Samples

![15_image_0.png](15_image_0.png)

Figure 7: Uncurated samples generated by a conditional diffusion model, guided by our robust time-dependent classifier.

## B Proof For Arbitrarily Manipulated Gradients

Below we show a theoretical observation, adapted from the work in which it was originally presented (Srinivas & Fleuret, 2020).

Observation. *Assume a neural network classifier* $h(\mathbf{x}) = \{h_i(\mathbf{x})\}_{i=1}^{C}$, *where* $h_i: \mathbb{R}^d \to \mathbb{R}$, *and an arbitrary function* $g : \mathbb{R}^d \to \mathbb{R}$. *Consider another neural network classifier* $\tilde{h}(\mathbf{x}) = \{\tilde{h}_i(\mathbf{x})\}_{i=1}^{C}$, *where* $\tilde{h}_i(\mathbf{x}) = h_i(\mathbf{x}) + g(\mathbf{x})$, *for which we obtain* $\nabla_{\mathbf{x}}\tilde{h}_i(\mathbf{x}) = \nabla_{\mathbf{x}}h_i(\mathbf{x}) + \nabla_{\mathbf{x}}g(\mathbf{x})$. *For this, the corresponding cross-entropy loss values and accuracy scores remain unchanged.*

Proof. For any data point $\mathbf{x} \in \mathbb{R}^d$ and its corresponding true label $y \in \{1, \dots, C\}$, the cross-entropy loss for the neural network $h(\mathbf{x})$ is given as

$$\mathcal{L}_{CE}(h(\mathbf{x}),y)=-\log\frac{\exp(h_{y}(\mathbf{x}))}{\sum_{j=1}^{C}\exp(h_{j}(\mathbf{x}))}=-h_{y}(\mathbf{x})+\log\left(\sum_{j=1}^{C}\exp\left(h_{j}(\mathbf{x})\right)\right).\tag{6}$$

For the neural network $\tilde{h}(\mathbf{x})$, we obtain the cross-entropy loss by substituting $\tilde{h}_j(\mathbf{x}) = h_j(\mathbf{x}) + g(\mathbf{x})$ in Equation (6),

$$\begin{aligned}
\mathcal{L}_{CE}(\tilde{h}(\mathbf{x}),y)&=-\tilde{h}_{y}(\mathbf{x})+\log\left(\sum_{j=1}^{C}\exp\left(\tilde{h}_{j}(\mathbf{x})\right)\right)\\
&=-h_{y}(\mathbf{x})-g(\mathbf{x})+\log\left(\sum_{j=1}^{C}\exp\left(h_{j}(\mathbf{x})+g(\mathbf{x})\right)\right)\\
&=-h_{y}(\mathbf{x})-g(\mathbf{x})+\log\left(\sum_{j=1}^{C}\exp\left(h_{j}(\mathbf{x})\right)\right)+\log\left(\exp\left(g(\mathbf{x})\right)\right)\\
&=-h_{y}(\mathbf{x})+\log\left(\sum_{j=1}^{C}\exp\left(h_{j}(\mathbf{x})\right)\right)=\mathcal{L}_{CE}(h(\mathbf{x}),y).
\end{aligned}\tag{7}$$

The last equality holds due to Equation (6). It also holds that

$$\operatorname*{arg\,max}_{i\in\{1,\ldots,C\}}\tilde{h}_{i}(\mathbf{x})=\operatorname*{arg\,max}_{i\in\{1,\ldots,C\}}\left(h_{i}(\mathbf{x})+g(\mathbf{x})\right)=\operatorname*{arg\,max}_{i\in\{1,\ldots,C\}}h_{i}(\mathbf{x}),\tag{8}$$

implying identical predictions for both networks on any input, and therefore identical accuracy scores, completing the proof.

This observation shows that two neural networks with an identical loss over all inputs can have arbitrarily different gradients, as the proof does not assume any limitations on $g(\mathbf{x})$ nor on $\mathbf{x}$. This also implies that the proof remains valid for noisy training data, and as a result, it remains valid for time-dependent classifiers which follow a Gaussian noise schedule, such as the one trained by Dhariwal & Nichol (2021). However, when transitioning into an adversarial training scheme, the perturbed training inputs become dependent on the model's gradients, and consequently, the adversarial loss presented in Equation (5) changes.
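The invariance established in the proof can be checked numerically for a single input: adding an arbitrary scalar g(x) to every logit leaves both the cross-entropy loss and the predicted class unchanged. A minimal NumPy sketch (toy logits, not from our experiments):

```python
import numpy as np

def cross_entropy(logits, y):
    # Numerically stable form of Eq. (6): -h_y(x) + log sum_j exp(h_j(x)).
    m = logits.max()
    return -logits[y] + m + np.log(np.sum(np.exp(logits - m)))

rng = np.random.default_rng(0)
h = rng.normal(size=10)   # logits h_i(x) for C = 10 classes
g = 3.7                   # arbitrary scalar g(x) added to every logit
y = 4                     # true label

loss_h = cross_entropy(h, y)
loss_h_tilde = cross_entropy(h + g, y)

assert np.isclose(loss_h, loss_h_tilde)   # identical loss, Eq. (7)
assert np.argmax(h) == np.argmax(h + g)   # identical prediction, Eq. (8)
```

Since neither the loss nor the prediction constrains g, the gradients of the two networks can differ arbitrarily while training behaves identically.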
This motivates the use of an adversarially-trained classifier over a vanilla one when the goal is to obtain better gradients.

## C Opinion Survey Details

![17_image_0.png](17_image_0.png)

Figure 8: A screenshot of the survey that was shown to human evaluators.

As previously mentioned, we conducted an opinion survey, asking human evaluators to choose their preferred image out of a pair of generated images or to choose that they have no preference. Evaluators were shown 200 pairs, randomly shuffled, where each pair consists of two randomly ordered images generated with the same class label and seed, once guided by our robust classifier, and once guided using the pre-trained vanilla classifier. An example of the screen shown to evaluators is displayed in Figure 8. Some evaluators did not fill out the entire survey. Hence, different pairs have different numbers of answers. Nevertheless, sufficient data was collected for all pairs, as the number of answers per pair ranged from 19 to 36, averaging 25.09 answers.

## D Implementation Details

We base our implementation on the publicly available code provided by Dhariwal & Nichol (2021). For implementing the adversarial training scheme, we adapt code from Madry et al. (2018) to work with time-dependent models and enable early stopping, and use it to train our robust time-dependent classifier. The sampling routine is identical to that of Dhariwal & Nichol (2021), up to changing the model and sampling hyperparameters. Our code and trained robust time-dependent classifier models are available at https://github.com/bahjat-kawar/enhancing-diffusion-robust.

## E Ablation Study

To perform a comprehensive ablation study on the different hyperparameters of our method, we consider the smaller CIFAR-10 (Krizhevsky et al., 2009) dataset, containing 32 × 32-pixel images. We train a class-conditional diffusion model for CIFAR-10, with an architecture adapted from Nichol & Dhariwal (2021) (changing the dropout from 0.3 to 0.1, making the model class-conditional, and using a linear 1000-step noise schedule akin to Dhariwal & Nichol (2021)). We measure FID with 10000 images against the validation set, and show the diffusion model's results in Table 3. The trained diffusion model achieves its best FID at 200k training iterations. As a general point of reference, StyleGAN2 (Karras et al., 2020) achieves an FID of 11.07 (Zhao et al., 2020).

Table 3: FID at different training iterations of an unguided class-conditional diffusion model for CIFAR-10.

| Iterations | 50k   | 100k | 150k | 200k | 250k | 300k |
|------------|-------|------|------|------|------|------|
| FID        | 11.65 | 8.41 | 7.81 | 7.67 | 7.68 | 7.91 |

As a baseline, we train a vanilla time-dependent classifier with an architecture similar to Dhariwal & Nichol (2021) (changing: the number of output channels to 10 and image size to 32 to match CIFAR-10, the attention resolutions from [32, 16, 8] to [16, 8], and the classifier width from 128 to 32). At 200k iterations and s = 0.25, it achieves an FID of 7.65 with the aforementioned diffusion model.

Then, we adversarially train a robust classifier with the same architecture. We perform an ablation study on the different adversarial training hyperparameters and summarize our results in Table 4. The attack step size is set to 2.5 times the upper bound in the attack threat model, divided by the number of attacker steps. We choose the best performing robust classifier, which was trained on the threat model {x_t + δ : ∥δ∥₂ ≤ 0.5} with 7 attacker steps and 200k training iterations.

Table 4: FID on CIFAR-10 for different threat models and numbers of attacker steps, after training a robust classifier for 200k steps and using a guidance scale of s = 0.25. Best result is highlighted in **bold**.

| Threat Model   | 5 steps | 7 steps  | 9 steps |
|----------------|---------|----------|---------|
| ∥δ∥₂ ≤ 0.25    | 7.62    | 7.62     | 7.64    |
| ∥δ∥₂ ≤ 0.5     | 7.62    | **7.60** | 7.63    |
| ∥δ∥₂ ≤ 1.0     | 7.64    | 7.63     | 7.65    |
| ∥δ∥∞ ≤ 4/255   | 7.61    | 7.61     | 7.62    |
| ∥δ∥∞ ≤ 8/255   | 7.65    | 7.64     | 7.67    |

Moreover, we also conduct an ablation study for the classifier guidance scale hyperparameter s, for both the vanilla and robust classifiers, and present the results in Table 5. Our robust classifier outperforms its vanilla counterpart at every classifier scale, attaining an FID of 7.58 at s = 0.125. These results show that a simple traversal of the classifier scales cannot improve the vanilla classifier's performance to the level attained by the robust one.

Table 5: FID for classifier guidance scales (s) for both the vanilla and robust classifiers on CIFAR-10.

| Classifier | s = 0 | s = 0.0625 | s = 0.125 | s = 0.25 | s = 0.5 |
|------------|-------|------------|-----------|----------|---------|
| Vanilla    | 7.67  | 7.65       | 7.62      | 7.65     | 7.87    |
| Robust     | 7.67  | 7.62       | 7.58      | 7.60     | 7.79    |

## E.1 Effect Of Classifier Guidance Scale On Precision And Recall

We measure the effect of the classifier guidance scale hyperparameter s on the Precision and Recall metrics for our main ImageNet experiments for both the vanilla and robust classifiers. The results are presented in Table 6.

Table 6: Precision and Recall for classifier guidance scales (s) for both the vanilla and robust classifiers on ImageNet.

| Classifier | Metric    | s = 0 | s = 0.25 | s = 0.5 | s = 1.0 | s = 2.0 | s = 4.0 |
|------------|-----------|-------|----------|---------|---------|---------|---------|
| Vanilla    | Precision | 0.70  | 0.73     | 0.77    | 0.80    | 0.84    | 0.86    |
|            | Recall    | 0.65  | 0.62     | 0.59    | 0.53    | 0.46    | 0.37    |
| Robust     | Precision | 0.70  | 0.74     | 0.77    | 0.82    | 0.86    | 0.89    |
|            | Recall    | 0.65  | 0.62     | 0.60    | 0.56    | 0.47    | 0.39    |

Diffusion models are known for their great mode coverage, which gives them an edge in the Recall metric.
In fact, the unguided diffusion model achieves better Recall rates than any guided version. As s increases (in both vanilla and robust settings), we trade off Recall for better Precision. Our robust model at s = 0.5 achieves Precision and Recall that are similar to Dhariwal & Nichol (2021). However, in FID, which computes a distance between distributions (covering concepts from both Precision and Recall), our robust classifier (at s = 1.0) improves upon the vanilla one (which achieves its best FID result at s = 0.5), meaning that the improvement in precision outweighs the loss in recall.
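For intuition on this trade-off, recall the guidance rule of Dhariwal & Nichol (2021): the reverse-diffusion step mean is shifted by the classifier gradient scaled by s, so a larger s pulls samples toward inputs the classifier is confident about (higher Precision) at the cost of coverage (lower Recall). A minimal sketch with toy values; `grad_log_p_y` stands in for the gradient of the time-dependent classifier's log-probability and is an assumption of this illustration:

```python
import numpy as np

def guided_mean(mu, sigma2, grad_log_p_y, s):
    """Shift the reverse-step mean by the scaled classifier gradient.

    mu, sigma2   : mean and (diagonal) variance of p(x_{t-1} | x_t)
    grad_log_p_y : gradient of log p(y | x_t, t) w.r.t. x_t (hypothetical here)
    s            : guidance scale -- larger s means stronger class guidance
    """
    return mu + s * sigma2 * grad_log_p_y

# Toy example: a larger scale moves the mean further along the gradient.
mu = np.zeros(4)
sigma2 = np.full(4, 0.1)
grad = np.array([1.0, -1.0, 0.5, 0.0])

shift_small = guided_mean(mu, sigma2, grad, s=0.25) - mu
shift_large = guided_mean(mu, sigma2, grad, s=4.0) - mu
assert np.linalg.norm(shift_large) > np.linalg.norm(shift_small)
```

Repeated over all denoising steps, this stronger pull concentrates samples near classifier-confident modes, matching the Precision/Recall pattern in Table 6.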
Review 1:

Summary: The paper proposes to replace the classifier for classifier-guided diffusion models with an adversarially trained classifier, arguing that a classifier trained with adversarial noise provides better gradients and information to a class-conditional diffusion net at sample time. For this, the paper compares results of a class-conditional diffusion net on ImageNet guided by either a vanilla or an adversarially trained classifier. Using an adversarially robust classifier for guidance improves FID and precision, and a human user study indicates that humans prefer images generated with the guidance of the adversarially robust classifier.

Strengths and Weaknesses: The motivation of the approach is well written and makes it clear why an adversarially robust classifier may be preferred over a vanilla classifier for guidance of class-conditional diffusion models. The evaluation is done on ImageNet, arguably one of the most challenging class-conditional datasets for image generation, and based on FID and a human user study the proposed methodology seems to improve upon the baseline.

Requested Changes: The approach is only evaluated on a single dataset. While ImageNet is a good benchmark, the approach should be evaluated on other datasets, too. The approach is somewhat constrained in that it only works on class-conditional datasets; nevertheless, there are other datasets on which this approach can be evaluated, such as CUB-200, Oxford 102-Flower, possibly CelebA, or even CIFAR. Overall, however, I believe the biggest drawback is the limitation of classifier-guided models to class-conditional image datasets in the first place. Arguably, most of the popularity of diffusion models nowadays stems from the fact that they are able to model large, diverse datasets. In these cases, classifier-guidance is not realistic since there are no available class labels for most images.
Approaches such as CLIP-guidance may alleviate this to a degree but are not evaluated in this paper. The default approach is classifier-free guidance for almost all of these cases, which has the advantage of not needing an additional classifier network and is not limited to class-conditional guidance. As a summary, I would expect the paper to have:
- comparisons on more datasets
- extension of the approach to CLIP-guidance or similar guidance mechanisms that are not restricted to class-conditional models
- a comparison to classifier-free guidance: if the approach is not better than classifier-free guidance despite being more complex and expensive, I don't see the benefit

Broader Impact Concerns: n/a

==================================================

Review 2:

Summary: This paper proposes a methodology to improve image quality in class-conditional DDPMs. In particular, they rely on the observation from prior work that adversarially robust classifiers tend to have more perceptually-aligned gradients. They leverage this property by using gradients from a robust classifier (rather than a standard ERM-trained classifier) to guide the diffusion process (based on the setup of Dhariwal and Nichol). They validate their approach on ImageNet generation in terms of automated metrics as well as a human study.

Strengths and Weaknesses:

Strengths:
- The paper is well-written and easy to follow.
- It is interesting to see ideas from the adversarial robustness literature being employed to improve diffusion models.

Weaknesses:
- A somewhat salient point that is not clear is whether they train the diffusion model from scratch or fine-tune the pre-trained model from Dhariwal and Nichol on ImageNet. If it is the latter, (i) the authors should explain why they made this choice and (ii) it should be clearly stated throughout the paper. It would also be valuable to see how the approaches compare on training the model from scratch.
- It is not clear how significant the improvement in generation actually is. In particular, one concern I have is that the proposed method is trading off quality (improved precision and FID) for diversity (recall)---perhaps similar to the effect of the scaling factor s. Moreover, the human experiment is also set up to measure quality and not diversity. I wonder whether simply tuning "s" for the vanilla method would yield similar results.
- Finally, given that robust training is a key component of the proposed approach, the authors could do a better job of examining how choices of hyperparameters therein affect generation quality. Currently, parameters such as eps, step size, and number of steps are merely stated without explanation. The paper could be strengthened with an analysis of how image quality changes with eps and step size. Also, it seems like the step size being used for the attack might be too small---in general, the rule of thumb is 2.5*eps/steps.

Some additional questions/comments:
- In Table 1, does no guidance just mean using the pre-trained model from Dhariwal and Nichol (trained with vanilla classifier guidance)?
- It seems from Table 1 that a big part of the improvement stems from training the model longer with any guidance.
- Even though the metrics used in the paper (FID, precision and recall) are derived from prior work, it would be nice if the authors could briefly describe them at least in the Appendix.

Requested Changes: My main concerns are described in the three weaknesses above. I would be happy to recommend acceptance if the authors addressed those concerns.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: This work addresses a shortcoming of diffusion-based image synthesis which relies on an image classification model for (class) guided image generation.
The authors argue that the well-known adversarial vulnerability of such image classification models also impairs the performance of image synthesis approaches based on a vulnerable model. The authors propose to utilize adversarially hardened (here in the form of adversarial training) classification models instead. The authors provide some quantitative as well as qualitative evidence to show improvements of their method for image synthesis.

Strengths and Weaknesses:

Strengths:
• The paper addresses an important shortcoming of guided image-synthesis approaches and provides a solution to address these shortcomings as well as a valid line of argument why / how their solution addresses the shortcoming. To the best of my knowledge, the proposed solution is novel w.r.t. state-of-the-art approaches.
• The paper provides some strong qualitative evidence that their proposed approach improves guided image synthesis.
• Implementation of the approach and reproduction of the findings seem straightforward; however, no code is provided.
• The paper is well written and structured. The main contributions of the paper are introduced and presented in an easy-to-access manner.

Weaknesses:
• My major concern for this work is the lack of sufficient quantitative results / evaluations. Only Table 1 provides some rather limited quantitative evaluation. Specifically, additional ablation studies / argumentation would be desirable addressing the low recall rate for the proposed approach, i.e., why/how and in which cases this doesn't work.

Minor Weakness:
• I find the term "time-dependent" rather confusing. I understand that this refers to the iterations in the diffusion model, but it suggests a relation to video-generation, which is not the meaning here. If "time-dependent" is a commonly used term in the context of diffusion models, that is fine.
Requested Changes: I consider it critical that some further quantitative evidence / ablations are provided to better understand the achieved improvements for the new approach. If addressed, I would raise my "Claims And Evidence" rating from "No" to "Yes".

Broader Impact Concerns: Similar to other image generation methods, the proposed method can be used to also improve the quality of faked images / photos and can hence be misused.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: Reasoning behind my recommendation can be found under "Claims and Evidence"

==================================================
# Unsupervised Domain Adaptation By Learning Using Privileged Information

Anonymous authors

Paper under double-blind review

## Abstract

Successful unsupervised domain adaptation is guaranteed only under strong assumptions such as covariate shift and overlap between input domains. The latter is often violated in high-dimensional applications like image classification which, despite this limitation, continues to serve as inspiration and benchmark for algorithm development. In this work, we show that training-time access to side information in the form of auxiliary variables can help relax restrictions on input variables and increase the sample efficiency of learning at the cost of collecting a richer variable set. As this information is assumed available only during training, not in deployment, we call this problem unsupervised domain adaptation by learning using privileged information (DALUPI). To solve this problem, we propose a simple two-stage learning algorithm, inspired by our analysis of the expected error in the target domain, and a practical end-to-end variant for image classification. We propose three evaluation tasks based on classification of entities in photos and anomalies in medical images with different types of available privileged information (binary attributes and single or multiple regions of interest). We demonstrate across these tasks that using privileged information in learning can reduce errors in domain transfer compared to baselines, be robust to spurious correlations in the source domain, and increase sample efficiency.

## 1 Introduction

Deployment of machine learning (ML) systems relies on generalization from training samples to new instances in a target domain. When these new instances differ in distribution from the source of training data, performance tends to degrade and guarantees are often weak.
For example, a supervised ML model trained to identify medical conditions in X-ray images from one hospital may work poorly in another hospital if the two sites have different equipment or examination protocols (Zech et al., 2018). In the unsupervised domain adaptation (UDA) problem (Ben-David et al., 2006), no labeled examples are available from the target domain and strong assumptions are needed for success. In this work, we ask: How can access to auxiliary variables during training help solve the UDA problem and weaken the assumptions necessary to guarantee domain transfer? In standard UDA, a common assumption is that the object of the learning task is identical in source and target domains but that input distributions differ (Shimodaira, 2000). This "covariate shift" assumption is plausible in our X-ray example above: Doctors are likely to give the same diagnosis based on X-rays of the same patient from similar but different equipment. However, guarantees for consistent domain adaptation also require either distributional overlap between inputs from source and target domains or known parametric forms of the labeling function (Ben-David & Urner, 2012; Wu et al., 2019; Johansson et al., 2019). Without these, adaptation cannot be verified or guaranteed by statistical means. Input domain overlap is implausible for the high-dimensional tasks that have become standard benchmarks in the UDA community, including image classification (Long et al., 2013; Ganin et al., 2016) and sentence labeling (Orihashi et al., 2020). If hospitals have different X-ray equipment, the probability of observing (near-)identical images from source and target domains is zero (Zech et al., 2018). Even when covariate shift and overlap are satisfied, large domain differences can have a dramatic effect on sample complexity (Breitholtz & Johansson, 2022). Despite promising developments (Shen et al., 2022), realistic guarantees for practical domain transfer remain elusive. 
In supervised ML without domain shift, incorporating auxiliary variables in the training of models has been proposed to improve out-of-sample generalization. For example, learning using *privileged information* (Vapnik & Vashist, 2009; Lopez-Paz et al., 2016), variables available during training but unavailable in deployment, has been proven to require fewer examples compared to learning without these variables (Karlsson et al., 2021). In X-ray classification, privileged information (PI) can come from graphical annotations or clinical notes made by radiologists that are unavailable when the system is used. While PI has begun to see use in domain adaptation, see e.g., Sarafianos et al. (2017) or Vu et al. (2019), and a theoretical analysis exists for linear classifiers (Xie et al., 2020), the literature has yet to fully characterize the benefits of this practice. We introduce *unsupervised domain adaptation by learning using privileged information* (DALUPI), in which auxiliary variables, related to the outcome of interest, are leveraged during training to improve test-time adaptation when the variables are unavailable. We summarize our contributions below:

- We formalize the DALUPI problem and give conditions under which it is possible to solve it consistently, i.e., to learn a model using privileged information that predicts optimally in the target domain. Importantly, these conditions do not rely on distributional overlap between input variables in source and target domains (Section 2.1), which makes consistent learning without privileged information (PI) generally infeasible.
- We propose practical learning algorithms for image classification in the DALUPI setting (Section 3), designed to handle problems with three different types of PI; see Figure 1 for examples.
As common UDA benchmarks lack auxiliary variables related to the learning problem, we propose three new evaluation tasks spanning the three types of PI using data sets with real-world images and auxiliary variables.
- On these tasks, we compare our methods to supervised learning baselines and well-known methods for unsupervised domain adaptation (Section 4). We find that our proposed models perform favorably to the alternatives for all types of PI, particularly when input overlap is violated and when training sets are small.

## 2 Privileged Information In Domain Adaptation

In unsupervised domain adaptation (UDA), the goal is to learn a hypothesis h to predict outcomes (or labels) Y ∈ Y for problem instances represented by input covariates X ∈ X, drawn from a target domain with density T(X, Y). During training, we have access to labeled samples (x, y) only from a source domain S(X, Y) and unlabeled samples x̃ from T(X). As a running example, we think of S and T as radiology departments at two different hospitals, of X as the X-ray image of a patient, and of Y as the diagnosis made by a radiologist after analyzing the image. We aim to learn a hypothesis h ∈ H from a hypothesis set H that minimizes the expected target-domain prediction error (risk) RT, with respect to a loss function L : Y × Y → R₊, i.e., to solve

$$\min_{h\in\mathcal{H}}R_{\mathcal{T}}(h),\qquad R_{\mathcal{T}}(h):=\mathbb{E}_{\mathcal{T}(X,Y)}[L(h(X),Y)]\,,\tag{1}$$

where we use the subscript convention $\mathbb{E}_{p(X)}[f(X)]=\int_{x\in\mathcal{X}}p(x)f(x)\,\mathrm{d}x$ to denote an expectation of some function f over a density p on the domain X. A consistent solution to the UDA problem returns a minimizer of Equation 1 without ever observing labeled samples from T. However, if S and T are allowed to differ arbitrarily, finding such a solution cannot be guaranteed (Ben-David & Urner, 2012).
To make the problem feasible, we assume that *covariate shift* (Shimodaira, 2000) holds—that the labeling function is the same in both domains, but the covariate distributions differ.

Assumption 1 (Covariate shift). *For domains* S, T *on* X × Y, *covariate shift holds w.r.t.* X *if*

$$\exists x\in\mathcal{X}:\mathcal{T}(X=x)\neq\mathcal{S}(X=x)\quad\text{and}\quad\forall x\in\mathcal{X}:\mathcal{T}(Y\mid x)=\mathcal{S}(Y\mid x)\,.$$

![2_image_0.png](2_image_0.png)

Figure 1: Examples of domain adaptation tasks with different types of privileged information (PI). During training, input samples X and PI W are drawn from both source and target domains. Labels Y are only available from the source domain. At test time, a target sample X is observed. We consider three types of PI: binary attribute vectors, a single region of interest, and multiple regions of interest.

In our example, covariate shift means that radiologists at either hospital would diagnose two patients with the same X-ray in the same way, but that the radiologists may encounter different distributions of patients and images. To guarantee consistent learning without further assumptions, these distributions cannot be too different—the source domain input S(x) must sufficiently *overlap* the target input domain T(x).

Assumption 2 (Domain overlap). *A domain* S *overlaps another domain* T *w.r.t.* X *on* X *if*

$$\forall x\in\mathcal{X}:\mathcal{T}(X=x)>0\implies\mathcal{S}(X=x)>0\,.$$

Covariate shift and domain overlap w.r.t. X guarantee that the target risk RT can be identified by the sampling distribution described above, and thus, that a solution to Equation 1 may be found. Hence, they have become standard assumptions, used by most informative guarantees (Zhao et al., 2019). Overlap is often violated in high-dimensional problems such as image classification, partly due to irrelevant information that has a spurious association with the label Y (Beery et al., 2018; D'Amour et al., 2021).
In X-ray classification, it may be possible to perfectly distinguish hospitals (domains) based on protocol or equipment differences manifesting in the images (Zech et al., 2018). There are no guarantees for optimal UDA in this case. Some guarantees based on distributional distances do not rely on overlap (Ben-David et al., 2006; Long et al., 2013), but do not guarantee optimal learning either (Johansson et al., 2019). Still, an image X may *contain* information W which is both *sufficient for prediction* and supported in both domains. For X-rays, this could be a region of pixels indicating a medical condition, ignoring parts that merely indicate differences in protocol (Zech et al., 2018). The learner does not know how to find this information a priori, but it can be supplied during training as added supervision.

Table 1: A summary of the different settings we consider in this work, what data is assumed to be available during training, and whether guarantees for identification are known for the setting under the assumptions of Proposition 1. The first three data columns (x, w, y) refer to samples observed from S and the last three (x̃, w̃, ỹ) to samples from T. The parentheses around source samples for DALUPI indicate that we need not necessarily observe these for the setting. Note that at test time only x from T is observed. ∗Under the more generous assumption of overlapping support in the input space X, guarantees exist for all these settings.

| Setting | x   | w  | y  | x̃  | w̃  | ỹ  | Guarantee for RT |
|---------|-----|----|----|----|----|----|------------------|
| SL-T    |     |    |    | ✓  |    | ✓  | ✓ |
| SL-S    | ✓   |    | ✓  |    |    |    | ∗ |
| UDA     | ✓   |    | ✓  | ✓  |    |    | ∗ |
| LUPI    | ✓   | ✓  | ✓  |    |    |    | ∗ |
| DALUPI  | (✓) | ✓  | ✓  | ✓  | ✓  |    | ✓ |

A radiologist could indicate regions of interest W using bounding boxes during training (Irvin et al., 2019), but would not be available to annotate images at test time. As such, W is *privileged information* (Vapnik & Vashist, 2009).
## 2.1 Unsupervised Domain Adaptation With Privileged Information

Learning using privileged information, variables that are available only during training but not at test time, has been shown to improve sample efficiency in diverse settings (Vapnik & Izmailov, 2015; Pechyony & Vapnik, 2010; Jung & Johansson, 2022). Next, we show that privileged information can also improve UDA by providing *identifiability* of the target risk—allowing it to be computed from the sampling distribution—even when overlap is not satisfied in X. We define domain adaptation by learning using privileged information (DALUPI) as follows. During training, learners observe samples of covariates X, labels Y and privileged information W ∈ W from S in a dataset $D_{\mathcal{S}} = \{(x_i, w_i, y_i)\}_{i=1}^{m}$, as well as samples of covariates and privileged information from T, $D_{\mathcal{T}} = \{(\tilde{x}_i, \tilde{w}_i)\}_{i=1}^{n}$. At test time, trained models only observe covariates x̃ ∼ T(X) and our learning goal remains to minimize the target risk, see Equation 1. We justify access to privileged information from T, but not labels, by pointing out that it is often easier to annotate observations with privileged information W than with labels Y. For example, a non-expert may be able to reliably recognize the outline of an animal in an image, indicating the pixels W corresponding to it, but not identify its species (Y); see Figure 2, where it would likely be easier to identify the location of the cat in the image than to identify its breed. To identify RT (Equation 1) without overlap in X, we make the assumption that W is sufficient to predict Y in the following sense.

Assumption 3 (Sufficiency of privileged information). *Privileged information* W *is sufficient for the outcome* Y *given covariates* X *if* Y ⊥ X | W *in both* S *and* T.

Assumption 3 is satisfied when X provides no more information about Y in the presence of W.
If we consider W to be a subset of image pixels corresponding to an area of interest, the other pixels in X may be unnecessary to predict Y. This is illustrated in Figure 2, where the privileged information w_i is the region of interest indicated by the bounding box t_i.

![4_image_0.png](4_image_0.png)

Figure 2: An illustration of domain overlap being more plausible when we consider appropriate forms of privileged information W, such as a region of interest of an image. Source and target domains S, T are here indoor and outdoor images X and the task is to identify the animal Y in the image.

Here, overlap is more probable in W than in X, as the extracted pixels mostly show cats. Moreover, when W retains more information, sufficiency becomes more plausible but domain overlap in W is reduced. The sufficiency assumption is used to replace T(y | x) with T(y | w) in Proposition 1. If sufficiency is violated but it is plausible that the degree of insufficiency is comparable across domains, we can still obtain a bound on the target risk which may be estimated from observed quantities. We give such a result in Appendix F. We would expect there to exist some PI that can be chosen such that it is sufficient for a given task. However, if sufficiency cannot be ensured, then we would expect the overall performance to decrease somewhat, provided that covariate shift w.r.t. W is not violated; we would still expect the generalization error to be of comparable magnitude. If covariate shift is also violated in W, we expect further declines in performance, as the problem is more difficult and we are not guaranteed to identify the optimal hypothesis (Johansson et al., 2019).

Assumptions 1–2 holding with respect to privileged information W instead of X, along with Assumption 3, allow us to identify the target risk even for models h ∈ H that do not use W as input:

Proposition 1. *Let Assumptions 1 and 2 be satisfied w.r.t.* W *(not necessarily w.r.t.* X*) and let Assumption 3 hold as stated.*
*Then, the target risk* RT *is identified for hypotheses* h : X → Y,

$$R_{\mathcal{T}}(h)=\mathbb{E}_{\mathcal{T}(X)}\left[\mathbb{E}_{\mathcal{T}(W\mid X)}\left[\mathbb{E}_{\mathcal{S}(Y\mid W)}[L(h(X),Y)\mid X,W]\mid X\right]\right]=\int_{x}\mathcal{T}(x)\int_{w}\mathcal{T}(w\mid x)\int_{y}\mathcal{S}(y\mid w)L(h(x),y)\,\mathrm{d}y\,\mathrm{d}w\,\mathrm{d}x\,,$$

*and for* L *the squared loss, a minimizer of* RT *is the function*

$$h_{\mathcal{T}}^{*}(x)=\mathbb{E}_{\mathcal{T}(W\mid x)}[\mathbb{E}_{\mathcal{S}(Y\mid W)}[Y\mid W]\mid x]=\int_{w}\mathcal{T}(w\mid x)\int_{y}\mathcal{S}(y\mid w)\,y\;\mathrm{d}y\,\mathrm{d}w\;.$$

Proof sketch. $R_{\mathcal{T}}(h)=\int_{x,y}\mathcal{T}(x,y)L(h(x),y)\,\mathrm{d}x\,\mathrm{d}y$. We can then marginalize over W to get $\mathcal{T}(x,y)=\mathcal{T}(x)\,\mathbb{E}_{\mathcal{T}(W\mid x)}[\mathcal{T}(y\mid W)\mid x]=\mathcal{T}(x)\int_{w:\mathcal{S}(w)>0}\mathcal{T}(w\mid x)\mathcal{S}(y\mid w)\,\mathrm{d}w$, where the first equality follows by sufficiency and the second by covariate shift and overlap in W. T(x), T(w | x) and S(y | w) are observable through training samples. That h∗T is a minimizer follows from the first-order condition. See Appendix C.

Proposition 1 shows that there are conditions where privileged information allows for identification of target-optimal hypotheses where identification is not possible without it, i.e., when overlap is violated in X. W guides the learner toward the information in X that is relevant for the label Y. When W is deterministic in X, overlap in X implies overlap in W, but not vice versa. In the same case, under Assumption 3, if covariate shift holds for X, it holds also for W. Hence, if sufficiency can be justified, the requirements on X are weaker than in standard UDA, at the cost of collecting W. Surprisingly, Proposition 1 does not require that X is observed in the source domain as the result does not depend on S(x). Figure 1 gives examples of problems with the DALUPI structure which we consider in this work. For comparison, we list related learning paradigms in Table 1.
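As a numerical sanity check of Proposition 1, consider a toy discrete problem (constructed for illustration, not from the paper): source and target have disjoint support in X, yet the target risk estimated from target samples of (X, W) and source estimates of S(y | w) matches the true target risk.

```python
import numpy as np

rng = np.random.default_rng(0)
# Shared mechanisms: covariate shift and sufficiency hold w.r.t. W.
p_w_given_x = np.array([[0.9, 0.1], [0.3, 0.7], [0.2, 0.8]])  # rows: P(W=0|x), P(W=1|x)
p_y_given_w = np.array([[0.8, 0.2], [0.1, 0.9]])              # rows: P(Y=0|w), P(Y=1|w)
S_x = np.array([0.5, 0.5, 0.0])   # source never sees x = 2
T_x = np.array([0.0, 0.5, 0.5])   # target never sees x = 0 -> no overlap in X

def sample(px, n):
    x = rng.choice(3, size=n, p=px)
    w = (rng.random(n) < p_w_given_x[x, 1]).astype(int)
    y = (rng.random(n) < p_y_given_w[w, 1]).astype(int)
    return x, w, y

xs, ws, ys = sample(S_x, 200_000)   # labeled source data (x, w, y)
xt, wt, _ = sample(T_x, 200_000)    # target data (x, w); labels never used

h = np.array([0, 1, 1])             # a fixed hypothesis h : X -> Y

# Estimate S(y | w) from the source; T(x) and T(w | x) from the target.
S_y1_w = np.array([ys[ws == w].mean() for w in range(2)])
T_hat_x = np.bincount(xt, minlength=3) / len(xt)
T_w1_x = np.array([wt[xt == x].mean() if (xt == x).any() else 0.0
                   for x in range(3)])

# Plug into Proposition 1 with the 0-1 loss.
risk_hat = 0.0
for x in range(3):
    for w in range(2):
        t_wx = T_w1_x[x] if w == 1 else 1.0 - T_w1_x[x]
        for y in range(2):
            s_yw = S_y1_w[w] if y == 1 else 1.0 - S_y1_w[w]
            risk_hat += T_hat_x[x] * t_wx * s_yw * float(h[x] != y)

# Ground-truth target risk (uses T(y | w), which the learner never observes).
risk_true = sum(T_x[x] * p_w_given_x[x, w] * p_y_given_w[w, y] * float(h[x] != y)
                for x in range(3) for w in range(2) for y in range(2))

assert abs(risk_hat - risk_true) < 0.01
```

The estimate uses only quantities that are observable in the DALUPI setting; no target label ever enters the computation.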
Supervised learning (SL-S) refers to learning from labeled samples from S without privileged information. SL-T refers to supervised learning with (infeasible) access to labeled samples from T. UDA refers to the setting at the start of Section 2 and LUPI to learning using privileged information without data from T (Vapnik & Vashist, 2009). We compare DALUPI to these alternative settings in our experiments in Section 4.

## 2.2 A Two-Stage Algorithm And Its Risk

In light of Proposition 1, a natural learning strategy is to model privileged information as a function of the input, T(W | x), and the outcome as a function of privileged information, $\hat{g}(w) \approx \mathbb{E}_{\mathcal{S}}[Y \mid w]$, and to combine these. In the case where W is a deterministic function of X, T(W | x) is a map $f : \mathcal{X} \to \mathcal{W}$, which may be estimated as a regression $\hat{f}$ and combined with the outcome regression to form $\hat{h} = \hat{g}(\hat{f}(X))$. We may find such functions $\hat{f}, \hat{g}$ by separately minimizing the empirical risks

$$\hat{R}_{\mathcal{T}}^{W}(f)=\frac{1}{n}\sum_{i=1}^{n}L_{\mathcal{W}}(f(\tilde{x}_{i}),\tilde{w}_{i})\quad\text{and}\quad\hat{R}_{\mathcal{S}}^{Y}(g)=\frac{1}{m}\sum_{i=1}^{m}L_{\mathcal{Y}}(g(w_{i}),y_{i})\;.\tag{2}$$

Hypothesis classes F, G may be chosen so that $\mathcal{H} = \{h = g \circ f;\ (f, g) \in \mathcal{F} \times \mathcal{G}\}$ has a desired form. Note that $L_{\mathcal{W}}$ and $L_{\mathcal{Y}}$ may in general be different loss functions. We can bound the generalization error of estimators $\hat{h} = \hat{g} \circ \hat{f}$ when $\mathcal{W} \subseteq \mathbb{R}^{d_W}$ and the loss is the squared loss. We do this by placing an assumption of Lipschitz smoothness on the space of prediction functions: $\forall g \in \mathcal{G},\ w, w' \in \mathcal{W} : \|g(w) - g(w')\|_2 \leq M\|w - w'\|_2$. To arrive at a bound, we first define the ρ-weighted empirical risk of the outcome model g in the source domain, $\hat{R}^{Y,\rho}_{\mathcal{S}}(g) = \frac{1}{m}\sum_{i=1}^{m} \rho(w_i) L_{\mathcal{Y}}(g(w_i), y_i)$, where ρ is the density ratio of T and S, $\rho(w) = \frac{\mathcal{T}(w)}{\mathcal{S}(w)}$. When the density ratio ρ is unknown, we may use density estimation (Sugiyama et al., 2012) or probabilistic classifiers to estimate it.
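When a probabilistic classifier is used to estimate the density ratio, its odds recover ρ. The sketch below assumes, purely for illustration, two known 1-D Gaussian densities for S(w) and T(w) with equal domain priors, so the Bayes-optimal classifier is available in closed form and stands in for a trained model; in practice the classifier would be fit on pooled source and target samples of w.

```python
import math

def gaussian_pdf(w, mu, sigma):
    """Density of N(mu, sigma^2) at w."""
    return math.exp(-0.5 * ((w - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Assumed for illustration: S(w) = N(0, 1), T(w) = N(1, 1); overlap in W
# holds since both densities are positive everywhere.
def c(w, prior_t=0.5):
    """Bayes-optimal P(domain = T | w) for the two assumed Gaussians."""
    pt = prior_t * gaussian_pdf(w, 1.0, 1.0)
    ps = (1 - prior_t) * gaussian_pdf(w, 0.0, 1.0)
    return pt / (pt + ps)

def rho(w):
    """Density ratio T(w)/S(w) recovered from the classifier's odds
    (with equal priors; unequal sample sizes would add a factor m/n)."""
    return c(w) / (1.0 - c(w))

# The classifier-based ratio matches the true ratio of densities:
w = 0.7
true_ratio = gaussian_pdf(w, 1.0, 1.0) / gaussian_pdf(w, 0.0, 1.0)
print(abs(rho(w) - true_ratio) < 1e-9)  # True
```

The same odds construction applies with any calibrated domain classifier in place of the closed-form one used here.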
We arrive at the following result, proven for univariate Y but generalizable to multivariate outcomes.

Proposition 2. Suppose that W and Y are deterministic in X and W, respectively, and that Assumptions 1–3 hold w.r.t. W. Let G comprise M-Lipschitz mappings $g : \mathcal{W} \to \mathcal{Y}$ with $\mathcal{W} \subseteq \mathbb{R}^{d_W}$, and let the loss be the squared Euclidean distance, assumed to be uniformly bounded over W. Let $\rho(w) = \mathcal{T}(w)/\mathcal{S}(w)$ and d and d′ be the pseudo-dimensions of G and F, respectively. Assume that there are m labeled samples from S and n unlabeled samples from T. Then, for any $h = g \circ f$ with $(f, g) \in \mathcal{F} \times \mathcal{G}$*, with probability at least* 1 − δ,

$$\frac{R_{\mathcal{T}}(h)}{2}\leq\hat{R}_{\mathcal{S}}^{Y,\rho}(g)+M^{2}\hat{R}_{\mathcal{T}}^{W}(f)+\mathcal{O}\left(\left(\frac{d\log\frac{m}{d}+\log\frac{4}{\delta}}{m}\right)^{3/8}+\sqrt{\frac{d^{\prime}\log\frac{n}{d^{\prime}}+\log\frac{d_{W}}{\delta}}{n}}\right).$$

Proof sketch. Decomposing the risk of $h = g \circ f$, we get

$$R_{\mathcal{T}}(h)=\mathbb{E}_{\mathcal{T}}[(g(f(X))-Y)^{2}]$$
$$\leq2\,\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}+(g(f(X))-g(W))^{2}]$$
$$\leq2\,\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}+M^{2}\|f(X)-W\|^{2}]$$
$$=2\,\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}]+2M^{2}\,\mathbb{E}_{\mathcal{T}}[\|f(X)-W\|^{2}]$$
$$=2R_{\mathcal{T}}^{Y}(g)+2M^{2}R_{\mathcal{T}}^{W}(f)=2R_{\mathcal{S}}^{Y,\rho}(g)+2M^{2}R_{\mathcal{T}}^{W}(f)\;.$$

The first inequality follows from the relaxed triangle inequality, the second from the Lipschitz property, and the final equality from overlap and covariate shift. Treating each component of $\hat{w}$ as independent, using standard PAC learning results, and applying Theorem 3 from Cortes et al. (2010) allows us to reweight the risk with the density ratio ρ, at the cost of an additional term containing the Rényi divergence. A union bound argument then gives the stated result. See Appendix D for a more detailed proof.

When F and G contain the ground-truth mappings between X and W and between W and Y, in the infinite-sample limit, minimizers of Equation 2 minimize $R_{\mathcal{T}}$ as well.
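The two-stage procedure of Equation 2 can be sketched end to end. The following is a minimal, dependency-free illustration in which 1-nearest-neighbour regressors stand in for the networks $\hat{f}$ and $\hat{g}$; the toy data and the 1-NN choice are assumptions for illustration, not the paper's actual models.

```python
# Minimal sketch of the two-stage DALUPI estimator (Section 2.2),
# assuming W is a deterministic, real-valued function of X.

def fit_1nn(inputs, targets):
    """Return a 1-nearest-neighbour regressor trained on (inputs, targets)."""
    def predict(q):
        i = min(range(len(inputs)), key=lambda j: abs(inputs[j] - q))
        return targets[i]
    return predict

# Stage 1: estimate f : X -> W from target-domain pairs (x~, w~).
target_x = [0.0, 1.0, 2.0, 3.0]
target_w = [0.0, 2.0, 4.0, 6.0]          # here w = 2x
f_hat = fit_1nn(target_x, target_w)

# Stage 2: estimate g : W -> Y from source-domain pairs (w, y).
source_w = [0.0, 2.0, 4.0, 6.0]
source_y = [1.0, 5.0, 9.0, 13.0]         # here y = 2w + 1
g_hat = fit_1nn(source_w, source_y)

# Composition h_hat = g_hat(f_hat(x)) predicts target labels without
# ever seeing a labelled target example.
h_hat = lambda x: g_hat(f_hat(x))
print(h_hat(2.0))  # -> 9.0
```

Note that no labelled pair (x, y) is used in either stage: stage 1 uses only target-domain PI and stage 2 only source-domain labels, mirroring the data availability of the DALUPI setting.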
Our approach is not limited to classical PAC analysis but could, under suitable assumptions, be carried out in another framework, e.g., using PAC-Bayes analysis to obtain a bound with different sample complexity terms. However, such a bound would then hold in expectation over a posterior distribution on H instead of uniformly over H. We sketch a proof of such a bound in Appendix E. Furthermore, if sufficiency is violated but it is plausible that the degree of insufficiency is comparable across domains, we can still obtain a bound on the target risk which may be estimated from observed quantities. We give such a result in Appendix F.

![6_image_0.png](6_image_0.png)

Figure 3: A schematic representation of the train and test flow for DALUPI using (a) the two-stage estimator presented in Section 2.2 and (b) an end-to-end architecture based on Faster R-CNN (Ren et al., 2016). In the two-stage procedure, the networks $\hat{f}$ and $\hat{g}$ are learned through empirical risk minimization of $L_{\mathcal{W}}$ and $L_{\mathcal{Y}}$, respectively. At test time, $\hat{f}$ and $\hat{g}$ are combined into $\hat{h} = \hat{g}(\hat{f}(X))$. The end-to-end estimator uses a region proposal network (RPN) to produce regions of interest in the input image X. The RPN, which serves as the network $\hat{f}$, is followed by a detection network $\hat{g}$ that predicts the class of any object within a region proposal. Training is guided by regression losses $L^{\text{RPN}}_{\text{reg}}(\hat{T}, T)$ and $L^{\text{det}}_{\text{reg}}(\hat{T}_U, T)$, as well as by classification losses $L^{\text{RPN}}_{\text{cls}}(\hat{P}, P)$ and $L^{\text{det}}_{\text{cls}}(\hat{P}_U)$. Here, T and $\hat{T}$ denote ground-truth and predicted bounding box coordinates, respectively, and $\hat{T}_U$ are the predicted coordinates for a region proposal with ground-truth label U. Further, $\hat{P}$ is the RPN's predicted probability that a region proposal contains an object, P is a binary label assigned to the proposal based on its overlap with ground-truth bounding boxes, and $\hat{P}_U$ is the probability of the ground-truth class U within the proposal, as predicted by the detection network.
## 3 Image Classification With Privileged Information

We use image classification, where X is an image and Y is a discrete label, as a proof of concept for DALUPI. To show the versatility of our approach, we consider three different instantiations of privileged information W: a binary attribute vector, a single region of interest, or multiple regions of interest. The two-stage estimator, see Figure 3a, is used in the first two cases. With multiple regions of interest as privileged information, we use an end-to-end model based on Faster R-CNN (Ren et al., 2016), see Figure 3b. We detail each setting below and illustrate them in Figure 1.

## 3.1 Binary Attributes As Pi

First, we consider the case where each image $x_i$ is accompanied by privileged information in the form of a binary vector $w_i \in \{0, 1\}^d$ indicating the presence of d attributes in the image. In this setting, we can directly apply our two-stage estimator (Equation 2). For the first estimator $\hat{f}$, we use a convolutional neural network (CNN) trained on observations from T (and possibly S) to output a binary vector of attributes $\hat{w}_i$ from the input $x_i$. For the second estimator $\hat{g}$, we use a multi-layer perceptron classifier, trained on source domain observations, that predicts the image label $\hat{y}_i$ given the vector of attributes $w_i$. We use the binary cross-entropy loss to train $\hat{f}$ and the categorical cross-entropy loss to train $\hat{g}$. The resulting classifier, $\hat{h}(x) = \hat{g}(\hat{f}(x))$, is subsequently evaluated on target domain images.

## 3.2 Single Region Of Interest As Pi

Next, we consider privileged information as a subset of pixels $w_i$, taken from the image $x_i$ and associated with an object or feature that determines the label $y_i \in \{1, \dots, K\}$. In our experiments, this PI is provided as a *single* bounding box with coordinates $t_i \in \mathbb{R}^4$ enclosing the region of interest $w_i$. Here, we use two CNNs, $\hat{d}$ and $\hat{g}$, and a deterministic function $\phi$ to approximate the two-stage estimator (Equation 2).
The network $\hat{d}$ is trained to output bounding box coordinates $\hat{t}_i$ as a function of the input $x_i$, and the pixels $\hat{w}_i$ within the bounding box are extracted from $x_i$ and resized to pre-specified dimensions through $\phi$. The composition of these two functions, $\hat{f}(x_i) = \phi(x_i, \hat{d}(x_i))$, returns $\hat{w}_i$. The second network $\hat{g}$ is trained to predict $y_i$ given the pixels $w_i$ contained in a bounding box $t_i$, based on observations from S. We use the mean squared error loss for $\hat{d}$ and the categorical cross-entropy loss for $\hat{g}$. Finally, $\hat{h}(x) = \hat{g}(\hat{f}(x))$ is evaluated on target domain images, where the output of $\hat{f}$ is used for prediction with $\hat{g}$. For further details see Appendix A.1.

## 3.3 Multiple Regions Of Interest As Pi

Finally, we consider a setting where privileged information indicates *multiple* regions of interest in an image. We use this PI in multi-label classification problems where the image $x_i$ is associated with one or more categories k from a set $\{1, \dots, K\}$, encoded in a multi-category label $y_i \in \{0, 1\}^K$ (e.g., indicating findings of one or more diseases). The partial label $y_i(k) = 1$ indicates the presence of features or objects in the image from category k. In our entity classification experiment, an object j of class $k \in [K]$ in the image, say "Bird", will be annotated by a bounding box $t_{ij} \in \mathbb{R}^4$ surrounding the pixels of the bird, and an object label $u_{ij} = k$. In X-ray classification, $t_{ij}$ can indicate an abnormality j in the X-ray image, and $u_{ij} \in \{1, \dots, K\}$ the label of the finding (e.g., "Pneumonia"). To make full use of privileged information, we train a deep neural network $\hat{h}(x) = \hat{g}(\hat{f}(x))$, where $\hat{f}$ produces a set of bounding box coordinates $\hat{t}_{ij}$ and extracts the pixels $\hat{w}_{ij}$ associated with each $\hat{t}_{ij}$, and where $\hat{g}$ predicts a label $\hat{u}_{ij}$ for each $\hat{w}_{ij}$.
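The deterministic crop-and-resize operation $\phi$ used in Section 3.2 (and, conceptually, in the region extraction above) can be sketched as follows. This is a minimal illustration over nested Python lists standing in for image tensors; the box format (x0, y0, x1, y1) and nearest-neighbour resizing are assumptions for illustration.

```python
# Sketch of phi: extract the pixels inside a bounding box and resize
# them to fixed dimensions via nearest-neighbour sampling.

def phi(image, box, out_h=2, out_w=2):
    """Return the pixels of `image` inside `box`, resized to (out_h, out_w)."""
    x0, y0, x1, y1 = box
    crop = [row[x0:x1] for row in image[y0:y1]]
    h, w = len(crop), len(crop[0])
    # Nearest-neighbour resize of the crop to the target resolution.
    return [[crop[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]

image = [[0, 0, 0, 0],
         [0, 1, 2, 0],
         [0, 3, 4, 0],
         [0, 0, 0, 0]]
# Crop the 2x2 region of interest; resizing to 2x2 is the identity here.
print(phi(image, (1, 1, 3, 3)))  # [[1, 2], [3, 4]]
```

Because $\phi$ is deterministic and differentiable operations are only needed for $\hat{d}$ and $\hat{g}$, it can be applied outside the computation graph at training and test time.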
To this end, we adapt the Faster R-CNN architecture (Ren et al., 2016), which uses a region proposal network (RPN) to generate regions that are fed to a detection network for classification and refined bounding box regression. A CNN backbone in combination with the RPN region of interest pooling serves as the subnetwork $\hat{f}$, producing estimates $\hat{w}_i$ of the privileged information for an image $x_i$. For the detection network, which corresponds to the subnetwork $\hat{g}$, we use Fast R-CNN (Girshick, 2015). Privileged information adds supervision through regression losses $L^{\text{RPN}}_{\text{reg}}(\hat{t}, t)$ and $L^{\text{det}}_{\text{reg}}(\hat{t}_u, t)$ for region proposals $\hat{t}$ and class-specific bounding box coordinates $\hat{t}_u$. We use the smooth L1 loss defined by Girshick (2015) for both $L^{\text{RPN}}_{\text{reg}}$ and $L^{\text{det}}_{\text{reg}}$. Training is further guided by classification losses $L^{\text{RPN}}_{\text{cls}}(\hat{p}, p) = -(p \log \hat{p} + (1 - p) \log(1 - \hat{p}))$ and $L^{\text{det}}_{\text{cls}}(\hat{p}_u) = -\log \hat{p}_u$, where $\hat{p}$ is the RPN's predicted probability that a region proposal contains an object, p is a binary label assigned to the proposal based on its overlap with ground-truth bounding boxes, and $\hat{p}_u$ is the probability of the ground-truth class u within the proposal, as predicted by the detection network. In Appendix A.2, we provide details of the learning objective and architecture and describe small modifications to the training procedure of Faster R-CNN to accommodate the DALUPI setting. Unlike the two-stage estimator, we train Faster R-CNN (both $\hat{f}$ and $\hat{g}$) end-to-end, minimizing both losses at once. In entity classification experiments (see Table 3 and Figure 5), we also train this model in a LUPI setting, where no information from the target domain is used, but privileged information from the source domain is used.

## 4 Experiments

We evaluate the empirical benefits of learning using privileged information, compared to the other data availability settings in Table 1, across four UDA image classification tasks where PI is available in the forms described in Section 3.
Widely used datasets for UDA evaluation like OfficeHome (Venkateswara et al., 2017) and large-scale benchmark suites like DomainBed (Gulrajani & Lopez-Paz, 2021), VisDA (Peng et al., 2017), and WILDS (Koh et al., 2021) *do not* include privileged information and cannot be used for evaluation here. Thus, we first compare our method to baselines on the recent CelebA task (Xie et al., 2020), which includes PI in the form of binary attributes (Section 4.1). Additionally, we propose three new tasks based on well-known image classification datasets with regions of interest as PI (Sections 4.2–4.4). In Sections 4.1 and 4.2, we use the two-stage estimator with the subnetwork $\hat{f}$ based on the ResNet-18 architecture (He et al., 2016a). In Sections 4.3 and 4.4, we use our variant of Faster R-CNN with a ResNet-50 backbone.

Our goal is to collect evidence that DALUPI reduces adaptation bias and improves sample efficiency compared to methods that do not make use of PI. We choose baselines to illustrate these two disparate settings. First, we compare DALUPI to supervised learning baselines, SL-S and SL-T, trained on labeled examples from the source and target domain, respectively. SL-S is a simple but strong baseline: on benchmark suites like DomainBed and WILDS, there is still no UDA method that *consistently* outperforms SL-S (ERM) without transfer learning (Gulrajani & Lopez-Paz, 2021; Koh et al., 2021). SL-T serves as an oracle comparison since it uses labels from the target domain, which are normally unavailable in UDA. Second, we compare DALUPI to two UDA methods—domain adversarial neural networks (DANN) (Ganin et al., 2016) and the margin disparity discrepancy (MDD) (Zhang et al., 2019)—which have theoretical guarantees but do not make use of PI. These baselines are all based on the ResNet architecture. In Section 4.1, we compare DALUPI also to In-N-Out (Xie et al., 2020), which was designed to make use of auxiliary (privileged) attributes for training domain adaptation models.
We do not include this model in other experiments as it was not designed to use regions of interest as privileged information. The exact architectures of all models and baselines are described in Appendix A, along with details on experimental setup and hyperparameters. For each task and task-specific setting (label skew, amount of privileged information, etc.), we train 10 models from each relevant class using hyperparameters randomly selected from given ranges (see Appendix A). For DANN and MDD, the trade-off parameter, which regularizes domain discrepancy in representation space, increases from 0 to 0.1 during training; for MDD, the margin parameter is set to 3. All models are evaluated on a held-out validation set from the source domain and the best-performing model in each class is then evaluated on held-out test sets from both domains. For SL-T, we use a held-out validation set from the target domain. We repeat this procedure over 5 or 10 seeds, controlling the data splits and the random number generation. We report accuracy and area under the ROC curve (AUC) with 95 % confidence intervals computed by bootstrapping over the seeds. ## 4.1 Celebrity Photo Classification With Binary Attributes As Pi In the case where privileged information is available as binary attributes, we follow Xie et al. (2020) who introduced a binary classification task based on the CelebA dataset (Liu et al., 2015), where the goal is to predict whether the person in an image has been identified as male or female (Y ) in one of the binary attributes that accompanies the data set's photos of celebrities (X). Like Xie et al. (2020), we use 7 of the 40 other attributes (Bald, Bangs, Mustache, Smiling, 5_o_Clock Shadow, Oval_Face, and Heavy_Makeup) as a vector of privileged information W ∈ {0, 1} 7. The target and source domains are defined by people wearing (T ) and not wearing (S) a hat. More details can be found in (Xie et al., 2020) and in Appendix A.3. 
Table 2 shows the target accuracy for each model. We see that DALUPI performs comparably to the In-N-Out models proposed by Xie et al. (2020), while outperforming other baselines. Confidence intervals overlap for MDD, In-N-Out, and DALUPI. The only variants of In-N-Out that achieve a higher average accuracy than DALUPI require four or more rounds of training to achieve their results (baseline, auxiliary input, auxiliary output pre-training, tuning, and self-training) (Xie et al., 2020). Both DALUPI and In-N-Out benefit from access to privileged information from both the source and target domain (pre/self-training for In-N-Out). Finally, it is worth noting that neither covariate shift nor sufficiency is likely to hold w.r.t. W in this task. Specifically, photos with none of the 7 attributes active, w = 0, have different label rates and majority label in S and T (the label rates are $\bar{Y}_{\mathcal{S}} = 0.64$ and $\bar{Y}_{\mathcal{T}} = 0.46$, respectively), and therefore P(Y | W) is not the same in both domains, i.e., covariate shift is violated.

Table 2: Celebrity photo classification. DALUPI performs comparably to the In-N-Out models in Xie et al. (2020). Note: In-N-Out results are reported as the average of 5 trials with 90 % confidence intervals.

| | Target accuracy |
|-------------------------------|-------------------|
| SL-S | 74.4 (73.1, 76.4) |
| DANN | 73.9 (72.2, 76.5) |
| MDD | 77.3 (75.2, 79.0) |
| In-N-Out (w/o pretraining) | 78.5 (77.2, 79.9) |
| In-N-Out (w. pretraining) | 79.4 (78.7, 80.1) |
| In-N-Out (rep. self-training) | 80.4 (79.7, 81.1) |
| DALUPI (W from T ) | 78.3 (76.3, 80.3) |
| DALUPI (W from S, T ) | 79.3 (76.9, 81.7) |

![9_image_0.png](9_image_0.png)

Figure 4: Digit classification. Target domain accuracy as a function of association ϵ between background and label in S. As the skew increases, the target-domain performance of the non-privileged models deteriorates.

In addition, the best model we have found trained on W
alone achieves only 65 % accuracy, compared to the results in Table 2—sufficiency is unlikely to hold. Thus, DALUPI is robust to violations of these assumptions.

## 4.2 Digit Classification With Single Region Of Interest As Pi

We construct a synthetic image dataset, based on the assumptions of Proposition 1, to verify that there are problems where DALUPI is guaranteed successful transfer but standard UDA is not. Starting from CIFAR-10 (Krizhevsky, 2009) images upscaled to 128 × 128, we insert a random 28 × 28 digit image from the MNIST dataset (Lecun, 1998), with a label in the range 0–4, into a random location of each CIFAR-10 image, forming the input image X (see Figure 1 (top) for examples). The label Y ∈ {0, . . . , 4} is determined by the MNIST digit. We store the bounding box around the inserted digit image and use the pixels contained within it as privileged information W during training. The domains are constructed using CIFAR-10's first five and last five classes as source and target backgrounds, respectively. Both source and target datasets contain 15,298 images each. To increase the difficulty of the task, we give the digit the mean color of the dataset and make the digit background transparent so that the border of the digit image is less distinct. This may slightly violate Assumption 2 with respect to the region of interest W since the backgrounds differ between domains.

To understand how successful transfer depends on domain overlap and access to sufficient privileged information, we include a *skew parameter* $\epsilon \in [\frac{1}{c}, 1]$, where c = 5 is the number of digit classes, which determines the correlation between digits and backgrounds. For a source image i with digit label $Y_i \in \{0, \dots, 4\}$, we select a random CIFAR-10 image with class $B_i \in \{0, \dots, 4\}$ with probability $P(B_i = b \mid Y_i = y) = \epsilon$ if b = y, and $(1 - \epsilon)/(c - 1)$ otherwise. For target images, digits and backgrounds are matched uniformly at random.
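The background-selection rule above can be sketched directly from its definition. The helper names and the use of `random.choices` are assumptions for illustration; only the probability rule $P(B_i = b \mid Y_i = y)$ is taken from the text.

```python
import random

def background_probs(y, eps, c=5):
    """P(B = b | Y = y) for b = 0, ..., c-1 under skew parameter eps."""
    return [eps if b == y else (1 - eps) / (c - 1) for b in range(c)]

def sample_background(y, eps, c=5, rng=random):
    """Draw a background class for a source image with digit label y."""
    probs = background_probs(y, eps, c)
    return rng.choices(range(c), weights=probs, k=1)[0]

# eps = 1/c gives a uniform distribution over backgrounds;
# eps = 1 couples the background deterministically to the digit.
print(background_probs(2, 1 / 5))  # [0.2, 0.2, 0.2, 0.2, 0.2]
print(sample_background(3, 1.0))   # 3
```

Note the probabilities sum to one for any eps, since the off-diagonal mass $(1 - \epsilon)$ is split evenly over the remaining $c - 1$ classes.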
The choice $\epsilon = \frac{1}{c}$ yields a uniform distribution, and ϵ = 1 is equivalent to the background carrying as much signal as the privileged information. We hypothesize that ϵ = 1 is the worst possible case, where confusion of the model is likely, which would lead to poor adaptation under domain shift.

In Figure 4, we observe the conjectured behavior. As the skew ϵ and the association between background and label increase, the performance of SL-S decreases rapidly on the target domain. At ϵ = 1, it performs no better than random guessing, likely because the model has learned to associate spurious features in the background with the label of the digit. We also observe that DANN and MDD deteriorate in performance with increased correlation between the label and the background. In contrast, DALUPI is unaffected by the skew, as the subset of pixels extracted by $\hat{f}$ only carries some of the background with it while containing sufficient information to make good predictions. Interestingly, DALUPI also seems to be as good or slightly better than the oracle SL-T in this setting. This may be due to improved sample efficiency from using PI.

Table 3: Entity classification. UDA models have access to all unlabeled target samples, LUPI to all PI (source), and DALUPI to all PI (source and target).

| | Source AUC | Target AUC |
|--------|-------------------|-------------------|
| SL-T | 60.1 (58.7, 61.5) | 69.0 (68.1, 69.9) |
| SL-S | 69.5 (68.6, 70.4) | 63.0 (61.6, 64.2) |
| DANN | 68.1 (67.5, 68.7) | 62.5 (61.9, 63.1) |
| MDD | 62.4 (61.1, 63.9) | 57.7 (56.3, 59.2) |
| LUPI | 69.3 (68.5, 70.1) | 65.9 (65.0, 66.8) |
| DALUPI | 71.4 (70.3, 72.4) | 68.2 (66.3, 70.1) |

![10_image_0.png](10_image_0.png)

Figure 5: Entity classification. Target domain AUC. The performance of SL-S and SL-T is extended across the x-axes for visual purposes. DANN and MDD use an increasing fraction of target samples $\tilde{x}$ but no PI.
## 4.3 Entity Classification With Multiple Regions Of Interest As Pi

Next, we consider multi-label classification of the presence of four types of entities (persons, cats, dogs, and birds), indicated by a binary vector $Y \in \{0, 1\}^4$ for images X from the MS-COCO dataset (Lin et al., 2014). PI is used to localize regions of interest W related to the entities, provided as bounding box annotations. We define source and target domains S and T as indoor and outdoor images, respectively. Indoor images are extracted by filtering out images from the MS-COCO super categories "indoor" and "appliance" that also contain at least one of the four main label classes. Outdoor images are extracted using the super categories "vehicle" and "outdoor". In total, there are 5,231 images in the source and 5,719 images in the target domain; the distribution of labels is provided in Appendix A.5. Sufficiency is likely to hold in this task because the pixels contained in a bounding box should be sufficient for an annotator to classify the entity according to the four categories above, irrespective of the pixels outside of the box. Similarly, covariate shift is likely to hold since the label attributed to the pixels in a bounding box should be the same whether the entity is indoors or outdoors.

We study the effect of adding privileged information by first training the end-to-end model in a LUPI setting, using all (x, y) samples from the source domain and increasing the fraction of inputs for which PI is available, $n_{\text{PI}}(\mathcal{S})$, from 0 to 1. We then train the model in a DALUPI setting, increasing the fraction of $(\tilde{x}, \tilde{w})$ samples from the target domain, $n_{\text{PI}}(\mathcal{T})$, from 0 to 1, while keeping $n_{\text{PI}}(\mathcal{S}) = 1$. We train SL-S and SL-T using all available data and increase the fraction of unlabeled target samples used by DANN and MDD from 0.2 to 1 while using all data from the source domain.
Table 3 shows the models' source and target domain AUC, averaged over the four entity classes, when the UDA models have access to all unlabeled target samples, LUPI to all PI from the source domain, and DALUPI to all PI from both domains. Clearly, DALUPI yields a substantial gain in adaptation. As we see in Figure 5, the performance of LUPI increases as $n_{\text{PI}}(\mathcal{S})$ increases. When additional $(\tilde{x}, \tilde{w})$ samples from the target domain are added, DALUPI outperforms SL-S and approaches the performance of SL-T. We note that DANN and MDD do not benefit as much from added unlabeled target samples as DALUPI does. Their weak performance could be explained by difficulties in adversarial training. The gap between LUPI and SL-S for $n_{\text{PI}}(\mathcal{S}) = 0$ is anticipated; we do not expect the detection network to work well without bounding box supervision.

Table 4: X-ray task. Test AUC for the three pathologies in the target domain for all considered models. Boldface indicates the best-performing feasible model; SL-T uses target labels.

| | ATL | CM | PE |
|--------|-------------|-------------|-------------|
| SL-T | 57 (56, 58) | 59 (55, 63) | 79 (78, 80) |
| SL-S | 55 (55, 56) | 61 (58, 64) | 73 (70, 75) |
| DANN | 53 (51, 55) | 55 (53, 58) | 55 (51, 61) |
| MDD | 49 (48, 50) | 51 (51, 52) | 51 (48, 54) |
| DALUPI | 55 (55, 56) | 72 (71, 73) | 74 (72, 76) |

![11_image_0.png](11_image_0.png)

Figure 6: Left: Example from the X-ray target test set with label CM. The red rectangle indicates the bounding box predicted by DALUPI. Right: saliency map for CM for SL-S.

## 4.4 X-Ray Classification With Multiple Regions Of Interest As Pi

As a real-world application, we study detection of pathologies in chest X-ray images. We use the ChestX-ray8 dataset (Wang et al., 2017) as source domain and the CheXpert dataset (Irvin et al., 2019) as target domain.1 As PI, we use the regions of pixels associated with each found pathology, as annotated by domain experts using bounding boxes.
For the CheXpert dataset, only pixel-level segmentations are available, and we create bounding boxes that tightly enclose the segmentations. It is not obvious that the pixels within such a bounding box are sufficient for classifying the pathology. For this reason, we suspect that some of the assumptions of Proposition 1 may be violated. However, as we find below, DALUPI improves empirical performance compared to baselines for small training sets, thereby demonstrating increased sample efficiency. We consider the three pathologies that exist in both datasets and for which there are annotated findings: atelectasis (ATL: collapsed lung), cardiomegaly (CM: enlarged heart), and pleural effusion (PE: water around the lung). There are 457 and 118 annotated images in the source and target domain, respectively. We train DALUPI, DANN and MDD using all these images. SL-S is trained with the 457 source images and SL-T with the 118 target images as well as 339 labeled but non-annotated target images. Neither SL-S, SL-T, DANN, nor MDD support using privileged information. The distributions of labels and bounding box annotations are given in Appendix A.6. In Table 4, we present the per-class AUCs in the target domain. DALUPI outperforms all baseline models, including the target oracle, in detecting CM. For ATL and PE, it performs similarly to or better than the other feasible models. That SL-T is better at predicting PE is not surprising because this pathology is most prevalent in the target domain. In Figure 6, we show a single-finding image from the target test set with ground-truth label CM. The predicted bounding box of DALUPI with the highest probability is added to the image. DALUPI identifies the region of interest (the heart) and makes a correct classification. The rightmost panel shows the saliency map for the ground truth class for SL-S. We see that the gradients are mostly constant, indicating that the model is uncertain. 
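The conversion from a pixel-level segmentation to a tight bounding box, used above to construct PI for the CheXpert target domain, amounts to taking the extreme row and column indices of the positive pixels. A minimal sketch, where the nested 0/1-list mask format is an assumption for illustration:

```python
# Sketch: derive the tight bounding box (x0, y0, x1, y1) enclosing all
# positive pixels of a binary segmentation mask.

def tight_bbox(mask):
    """Smallest box enclosing all 1-pixels of `mask`, or None if empty."""
    ys = [i for i, row in enumerate(mask) if any(row)]
    xs = [j for row in mask for j, v in enumerate(row) if v]
    if not ys:
        return None  # no annotated pixels in this mask
    # Half-open convention: x1 and y1 are one past the last positive pixel.
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
print(tight_bbox(mask))  # (1, 1, 3, 3)
```

As noted in the text, such a box may include pixels outside the segmented finding, which is one reason sufficiency may be harder to justify in this task.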
In Appendix B, we show AUC for CM for the models trained with additional examples *without* bounding box annotations. We find that SL-S reaches the performance of DALUPI when a large amount of labeled examples is provided. This indicates that identifiability is not the main obstacle for adaptation and that PI improves sample efficiency.

1This study was granted IRB approval.

## 5 Related Work

Learning using privileged information was first introduced by Vapnik & Vashist (2009) for support vector machines (SVMs) and was later extended to empirical risk minimization (Pechyony & Vapnik, 2010). Methods using PI, which is sometimes called hidden information or side information, have since been applied in many diverse settings such as healthcare (Shaikh et al., 2020), finance (Silva et al., 2010), clustering (Feyereisl & Aickelin, 2012), and image recognition (Vu et al., 2019; Hoffman et al., 2016). Related concepts include knowledge distillation (Hinton et al., 2015; Lopez-Paz et al., 2016), where a teacher model trained on additional variables adds supervision to a student model, and weak supervision (Robinson et al., 2020), where so-called weak labels are used to learn embeddings, subsequently used for the task of interest. Furthermore, in the realm of NLP, there is the related concept of learning using feature feedback, where additional annotations that are related to the associated task label are provided (Katakkar et al., 2022; Kaushik et al., 2021). These works are mostly of an empirical nature, and theoretical work on the subject either considers linear models/SVMs (Poulis & Dasgupta, 2017) or a teacher/student-type setup where additional supervision is given when the model predicts incorrectly (Dasgupta et al., 2018). The use of PI for deep image classification has been investigated by Chen et al. (2017) and Han et al. (2023), but these works only cover regular supervised learning where source and target domains coincide. Further, Sharmanska et al.
(2014) used regions of interest in images as privileged information to improve the accuracy of image classifiers, but did not consider domain shift either. Domain adaptation using PI has been considered before with SVMs (Li et al., 2022; Sarafianos et al., 2017), but not with more complex classifiers such as neural networks. Vu et al. (2019) used scene depth as PI in semantic segmentation using deep neural networks. However, they only used PI from the source domain and they did not provide any theoretical analysis. Xie et al. (2020) provide some theoretical results for a similar setup to ours. However, these are specifically for linear classifiers while our approach holds for any type of classifier. Motiian (2019) investigated PI and domain adaptation using the information bottleneck method for visual recognition. However, their setting differs from ours in that each observation comprises source-domain and target-domain features, a label and PI. Another related approach is that of subsidiary tasks (Kundu et al., 2022; Ye et al., 2022). However, in these settings the additional tasks performed are used to build a representation that helps with the main task through domain alignment. Our approach instead seeks to use information which directly relates to the main task. ## 6 Discussion We have presented DALUPI: unsupervised domain adaptation by learning using privileged information (PI). The framework provides provable guarantees for adaptation under relaxed assumptions on the input features, at the cost of collecting a larger variable set, such as attribute or bounding box annotations, during training. Our analysis inspired practical algorithms for image classification which we evaluated using three kinds of privileged information. In our experiments we demonstrated tasks where our approach is successful while existing adaptation methods fail. 
We also observed empirically that methods using privileged information are more sample-efficient than comparable non-privileged learners, in line with the literature. In fact, DALUPI models occasionally even outperform oracle models trained using target labels, due to their sample efficiency. Thus, we recommend considering these methods in small-sample settings. The main contribution of the paper is the proposed learning paradigm for domain adaptation with privileged information. Since common benchmark datasets in UDA lack privileged information related to the learning problem, we created three new tasks for evaluating our framework, see Sections 4.2–4.4, which is itself a notable contribution. We hope that this work inspires the community to develop additional datasets for UDA using privileged information.

To avoid assuming that domain overlap is satisfied with respect to input covariates, we require that the label is conditionally independent of the input features given the PI—that the PI is "sufficient". This is a limitation whenever sufficiency is difficult to verify. However, in our motivating example of image classification, a domain expert could *choose* PI so that sufficiency is reasonably justified. Moreover, in experiments on CelebA, we see empirical gains from our approach even when sufficiency is known to be violated. Another limitation is that we still rely on overlap in the privileged information, W, which may also be violated in some circumstances. It is more likely that overlap holds for W when, for example, it is a subset of X, as argued in Figure 2. Designing experiments to test how sensitive DALUPI is to violations of these assumptions is an interesting direction for future work.

The use of regions of interest as privileged information brings up an interesting point concerning the relationship between the label and the privileged information. In object detection tasks, it is natural to treat the bounding box coordinates as label information.
In this work, however, the learning tasks were multi-class and multi-label image classification, not object detection. Producing a perfect box W was not the goal of the learning task, and the bounding boxes were therefore neither critical for the task nor the labels. Instead, the bounding boxes were privileged information and our experiments in Section 4.2–4.4 sought to quantify the value of this added information, compared to not having it. Therefore, we compared our method to image classification baselines. It is not obvious a priori that learning from object locations improves the adaptation of image classifiers. If there is a lack of PI available to the models one might mitigate this by either 1) using the limited amount of PI that is available to learn gˆ and assume that it is good enough to achieve reasonable overall performance; or 2) using the learned ˆf to create "weak" PI labels for the inputs that are missing PI, similar to the work of e.g. Robinson et al. (2020). However, one should note that the latter approach might bias the model in unintended ways and, as such, should be undertaken with some caution. In future work, our framework could be applied to a more diverse set of tasks, with different modalities of inputs and privileged information to investigate if the findings here can be replicated and extended. Moreover, such work could consider different types and degrees of shifts to further corroborate the stability and resistance to noise which we observe here. More broadly, using PI may be viewed as "building in" domain knowledge in the structure of the adaptation problem and we see this as a promising direction for further research. ## References Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European conference on computer vision (ECCV), pp. 456–473, 2018. Shai Ben-David and Ruth Urner. On the hardness of domain adaptation and the utility of unlabeled target samples. 
In *International Conference on Algorithmic Learning Theory*, pp. 139–153. Springer, 2012. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. *Advances in neural information processing systems*, 19, 2006. Adam Breitholtz and Fredrik Daniel Johansson. Practicality of generalization guarantees for unsupervised domain adaptation with neural networks. *Transactions on Machine Learning Research*, 2022. Yunpeng Chen, Xiaojie Jin, Jiashi Feng, and Shuicheng Yan. Training group orthogonal neural networks with privileged information. In *Proceedings of the Twenty-Sixth International Joint Conference on Artificial* Intelligence, IJCAI-17, pp. 1532–1538, 2017. Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta (eds.), *Advances in Neural Information* Processing Systems, volume 23. Curran Associates, Inc., 2010. Sanjoy Dasgupta, Akansha Dey, Nicholas Roberts, and Sivan Sabato. Learning from discriminative feature feedback. In *Advances in Neural Information Processing Systems 31 (NeurIPS 2018)*, 2018. Antoine de Mathelin, François Deheeger, Guillaume Richard, Mathilde Mougeot, and Nicolas Vayatis. ADAPT: Awesome Domain Adaptation Python Toolbox. *arXiv preprint arXiv:2107.03049*, 2021. Alexander D'Amour, Peng Ding, Avi Feller, Lihua Lei, and Jasjeet Sekhon. Overlap in observational studies with high-dimensional covariates. *Journal of Econometrics*, 221(2):644–654, 2021. Jan Feyereisl and Uwe Aickelin. Privileged information for data clustering. *Information Sciences*, 194:4–23, 2012. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-Adversarial Training of Neural Networks. arXiv:1505.07818 [cs, stat], May 2016. arXiv: 1505.07818. Pascal Germain, Amaury Habrard, François Laviolette, and Emilie Morvant. 
PAC-Bayes and Domain Adaptation. *Neurocomputing*, 379:379–397, February 2020. arXiv: 1707.05712. Ross Girshick. Fast R-CNN. In *Proceedings of the IEEE international conference on computer vision*, pp. 1440–1448, 2015. Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In *International Conference* on Learning Representations, 2021. URL https://openreview.net/forum?id=lQdXeXDoWtI. Dongyoon Han, Junsuk Choe, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, and Seong Joon Oh. Neglected free lunch - learning image classifiers using annotation byproducts. In *Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 20200–20212, October 2023. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016b. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Judy Hoffman, Saurabh Gupta, and Trevor Darrell. Learning with Side Information through Modality Hallucination. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 826–834. IEEE, 2016. Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In *Proceedings of the AAAI conference on artificial intelligence*, volume 33, pp. 590–597, 2019. Fredrik D Johansson, David Sontag, and Rajesh Ranganath. Support and invertibility in domain-invariant representations. 
In *The 22nd International Conference on Artificial Intelligence and Statistics*, pp. 527– 536. PMLR, 2019. Bastian Jung and Fredrik Daniel Johansson. Efficient learning of nonlinear prediction models with time-series privileged information. In *Advances in Neural Information Processing Systems*, 2022. Rickard Karlsson, Martin Willbo, Zeshan Hussain, Rahul G. Krishnan, David A. Sontag, and Fredrik D. Johansson. Using time-series privileged information for provably efficient learning of prediction models. In *Proceedings of The 25th International Conference on Artificial Intelligence and Statistics 2022*, 2021. Anurag Katakkar, Clay H. Yoo, Weiqin Wang, Zachary Lipton, and Divyansh Kaushik. Practical benefits of feature feedback under distribution shift. In Jasmijn Bastings, Yonatan Belinkov, Yanai Elazar, Dieuwke Hupkes, Naomi Saphra, and Sarah Wiegreffe (eds.), *Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP*, pp. 346–355, Abu Dhabi, United Arab Emirates (Hybrid), December 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022. blackboxnlp-1.29. URL https://aclanthology.org/2022.blackboxnlp-1.29. D. Kaushik, A. Setlur, E. H. Hovy, and Z. C. Lipton. Explaining the efficacy of counterfactually augmented data. In *International Conference on Learning Representations*, 2021. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of inthe-wild distribution shifts. In *International Conference on Machine Learning*, pp. 5637–5664. PMLR, 2021. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. Jogendra Nath Kundu, Suvaansh Bhambri, Akshay Kulkarni, Hiran Sarkar, Varun Jampani, and R. Venkatesh Babu. Concurrent subsidiary supervision for unsupervised source-free domain adaptation. 
In *Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXX*, pp. 177–194, Berlin, Heidelberg, 2022. Springer-Verlag. ISBN 978-3-031-20055-7. doi: 10.1007/978-3-031-20056-4_11. URL https://doi.org/10.1007/978-3-031-20056-4_11.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, 1998.

Yanmeng Li, Huaijiang Sun, and Wenzhu Yan. Domain adaptive twin support vector machine learning using privileged information. *Neurocomputing*, 469:13–27, 2022.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and Larry Zitnick. Microsoft COCO: Common Objects in Context. In *ECCV*. European Conference on Computer Vision, September 2014.

Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2117–2125, 2017.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In *2015 IEEE International Conference on Computer Vision (ICCV)*, pp. 3730–3738, 2015. doi: 10.1109/ICCV.2015.425.

Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S. Yu. Transfer feature learning with joint distribution adaptation. In *Proceedings of the 2013 IEEE International Conference on Computer Vision*, ICCV '13, pp. 2200–2207, USA, 2013. IEEE Computer Society.

David Lopez-Paz, Leon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. In *International Conference on Learning Representations (ICLR 2016)*, 2016.

Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. *Foundations of Machine Learning*. MIT Press, second edition, 2018.

Saeid Motiian. *Domain Adaptation and Privileged Information for Visual Recognition*. PhD thesis, West Virginia University, 2019.
Graduate Theses, Dissertations, and Problem Reports. 6271.

Shota Orihashi, Mana Ihori, Tomohiro Tanaka, and Ryo Masumura. Unsupervised Domain Adaptation for Dialogue Sequence Labeling Based on Hierarchical Adversarial Training. In *Interspeech 2020*, pp. 1575–1579. ISCA, October 2020.

Dmitry Pechyony and Vladimir Vapnik. On the theory of learning with privileged information. In *Advances in Neural Information Processing Systems*, volume 23. Curran Associates, Inc., 2010.

Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. *arXiv preprint arXiv:1710.06924*, 2017.

Stefanos Poulis and Sanjoy Dasgupta. Learning with feature feedback: from theory to practice. In *International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2017.

S Ren, K He, R Girshick, and J Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 39(6):1137–1149, 2016.

Joshua Robinson, Stefanie Jegelka, and Suvrit Sra. Strength from Weakness: Fast Learning Using Weak Supervision. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 8127–8136. PMLR, November 2020.

Nikolaos Sarafianos, Michalis Vrigkas, and Ioannis A. Kakadiaris. Adaptive SVM+: Learning with privileged information for domain adaptation. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops*, Oct 2017.

Tawseef Ayoub Shaikh, Rashid Ali, and M. M. Sufyan Beg. Transfer learning privileged information fuels CAD diagnosis of breast cancer. *Machine Vision and Applications*, 31(1):9, February 2020.

Viktoriia Sharmanska, Novi Quadrianto, and Christoph H. Lampert. Learning to Transfer Privileged Information. *arXiv:1410.0389 [cs, stat]*, October 2014. arXiv: 1410.0389.

Kendrick Shen, Robbie M Jones, Ananya Kumar, Sang Michael Xie, Jeff Z. Haochen, Tengyu Ma, and Percy Liang.
Connect, not collapse: Explaining contrastive learning for unsupervised domain adaptation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of *Proceedings of Machine Learning Research*, pp. 19847–19878. PMLR, 17–23 Jul 2022.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. *Journal of statistical planning and inference*, 90(2):227–244, 2000.

Catarina Silva, Armando Vieira, Antonio Gaspar-Cunha, and Joao Carvalho das Neves. Financial distress model prediction using SVM+. In *Proceedings of the International Joint Conference on Neural Networks*, pp. 1–7, July 2010.

M. Sugiyama, T. Suzuki, and T. Kanamori. *Density Ratio Estimation in Machine Learning*. Cambridge books online. Cambridge University Press, 2012.

Marian Tietz, Thomas J. Fan, Daniel Nouri, Benjamin Bossan, and skorch Developers. *skorch: A scikit-learn compatible neural network library that wraps PyTorch*, July 2017.

Vladimir Vapnik and Rauf Izmailov. Learning Using Privileged Information: Similarity Control and Knowledge Transfer. *Journal of Machine Learning Research*, 16:2023–2049, 2015.

Vladimir Vapnik and Akshay Vashist. A new learning paradigm: Learning using privileged information. *Neural Networks*, 22(5):544–557, July 2009.

Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5018–5027, 2017.

Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, and Patrick Pérez. DADA: Depth-aware domain adaptation in semantic segmentation. In *2019 IEEE/CVF International Conference on Computer Vision (ICCV)*, pp. 7363–7372, 2019.

Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers.
ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 2097–2106, 2017.

Yifan Wu, Ezra Winston, Divyansh Kaushik, and Zachary Lipton. Domain adaptation with asymmetrically-relaxed distribution alignment. In *International conference on machine learning*, pp. 6872–6881. PMLR, 2019.

Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, and Percy Liang. In-n-out: Pre-training and self-training using auxiliary information for out-of-distribution robustness. *arXiv preprint arXiv:2012.04550*, 2020.

Yalan Ye, Ziqi Liu, Yangwuyong Zhang, Jingjing Li, and Hengtao Shen. Alleviating style sensitivity then adapting: Source-free domain adaptation for medical image segmentation. In *Proceedings of the 30th ACM International Conference on Multimedia*, MM '22, pp. 1935–1944, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450392037. doi: 10.1145/3503161.3548426. URL https://doi.org/10.1145/3503161.3548426.

John R Zech, Marcus A Badgeley, Manway Liu, Anthony B Costa, Joseph J Titano, and Eric Karl Oermann. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. *PLoS medicine*, 15(11):e1002683, 2018.

Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In *International conference on machine learning*, pp. 7404–7413. PMLR, 2019.

Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In *International Conference on Machine Learning*, pp. 7523–7532. PMLR, 2019.

## A Experimental Details

In this section, we give further details of the experiments.
All code is written in Python and we mainly use PyTorch in combination with skorch (Tietz et al., 2017) for our implementations of the networks. For Faster R-CNN, we adapt the implementation provided by torchvision through the function fasterrcnn_resnet50_fpn. For DANN and MDD, we use the ADAPT TensorFlow implementation (de Mathelin et al., 2021) with a ResNet-50-based encoder. We initially set the trade-off parameter λ, which controls the amount of domain adaptation regularization, to 0 and then increase it to 0.1 in 10,000 gradient steps according to the formula λ = β(2/(1 + e^{−p}) − 1)/C, where p increases linearly from 0 to 1, β is a parameter specified for each experiment, and C = 2/(1 + e^{−1}) − 1. For MDD, we fix the margin parameter γ to 3. The source and target baselines are based on the ResNet-50 architecture when PI is provided as multiple regions of interest; otherwise, the ResNet-18 architecture is used. The architecture of DALUPI in each experiment is specified in the respective subsection below. We use the Adam optimizer in all experiments. Learning rate decay is treated as a hyperparameter. For ADAPT models (DANN and MDD), the learning rate is either constant or decayed according to µ0/(1 + αp)^{3/4}, where µ0 is the initial learning rate, p increases linearly from 0 to 1, and α is a parameter specified in each experiment (see below). For non-ADAPT models, the learning rate is either constant or decayed by a factor 0.1 every nth epoch, where n is another hyperparameter. For all models except LUPI and DALUPI, the classifier network following the encoder is a simple MLP with two possible settings: either it is a single linear layer from inputs to outputs or a three-layer network with ReLU activations between the layers. This choice is treated as a hyperparameter in our experiments.
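For concreteness, the λ ramp-up and the ADAPT learning-rate decay described above can be sketched as plain functions (a minimal sketch; the function names are ours, and in practice these quantities are updated inside the training loop):

```python
import math

def adaptation_weight(p, beta):
    """Ramp the trade-off parameter lambda from 0 toward beta:
    lambda = beta * (2 / (1 + exp(-p)) - 1) / C, with p in [0, 1]
    the training progress and C = 2 / (1 + exp(-1)) - 1 chosen so
    that lambda equals beta exactly at p = 1."""
    C = 2.0 / (1.0 + math.exp(-1.0)) - 1.0
    return beta * (2.0 / (1.0 + math.exp(-p)) - 1.0) / C

def decayed_lr(mu0, p, alpha):
    """ADAPT-style decay: mu0 / (1 + alpha * p)^(3/4)."""
    return mu0 / (1.0 + alpha * p) ** 0.75
```

With β = 0.1 this reproduces the schedule in the text: the weight starts at 0 and reaches 0.1 once p = 1 (i.e., after 10,000 gradient steps).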
The nonlinear case has the following structure, where n is the number of input features:

- fully connected layer with n neurons
- ReLU activation layer
- fully connected layer with n//2 neurons
- ReLU activation layer
- fully connected layer with n//4 neurons.

All models were trained using NVIDIA Tesla A40 GPUs and the development and evaluation of this study required approximately 30,000 hours of GPU training. The code will be made available.

## A.1 DALUPI With Two-Stage Classifier

Here, we describe in more detail how we construct our two-stage classifier for image classification when privileged information is provided as a single region of interest, as in the digit classification task (Section 4.2). When privileged information is provided as binary attributes, we can directly learn the two-stage estimator according to Equation 2. In the digit classification task, each image xi has a single label yi ∈ {0, . . . , 4} determined by the MNIST digit. Privileged information is given by a single bounding box with coordinates ti ∈ R^4 enclosing a subset of pixels wi corresponding to the digit.

Figure 7: Faster R-CNN (Ren et al., 2016) architecture. The RoI pooling layer and the classification and regression layers are part of the Fast R-CNN detection network (Girshick, 2015).

The training procedure is summarized in Algorithm 1 and further described below. We first learn dˆ, which is a function that takes target image data, x˜i, and bounding box coordinates, ti, and learns to output bounding box coordinates, tˆi, which should contain the privileged information wi. Note that we do not exactly follow the setup in Equation 2 since we do not need to actually predict the pixel values within the bounding box. If we find a good enough estimator of ti, we should minimize the loss of f in Equation 2. To obtain the privileged information we apply a deterministic function ϕ which crops and scales an image using the associated bounding box, ti.
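A minimal sketch of such a crop-and-scale function ϕ and the resulting two-stage prediction (the names `phi` and `two_stage_predict` are ours, and nearest-neighbour sampling stands in for the actual image resizing used in the implementation):

```python
import numpy as np

def phi(x, box, out_size=28):
    """Deterministic phi(x, t): crop image x (H, W) to the box
    (x1, y1, x2, y2) and resize to out_size x out_size by
    nearest-neighbour sampling."""
    x1, y1, x2, y2 = [int(round(c)) for c in box]
    crop = x[y1:y2, x1:x2]
    rows = np.linspace(0, crop.shape[0] - 1, out_size).round().astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, out_size).round().astype(int)
    return crop[np.ix_(rows, cols)]

def two_stage_predict(x, d_hat, g_hat):
    """h_hat(x) = g_hat(phi(x, d_hat(x))): localize, crop, classify."""
    return g_hat(phi(x, d_hat(x)))
```

Since ϕ is hard-coded, only dˆ (box regression) and gˆ (classification from the crop) are learned.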
We can now write the composition of these two functions as fˆ(xi) = ϕ(xi, dˆ(xi)), which outputs the privileged information. The function ϕ is hard-coded and therefore not learned. In the second step, we learn gˆ to predict the label from the privileged information wi, which is a cropped version of xi where the cropping is defined by the bounding box ti around the digit. This cropping and resizing is performed by ϕ. When we evaluate the performance of this classifier, we combine the two models into one, hˆ(x) = gˆ(ϕ(x, dˆ(x))). We use the mean squared error loss for learning dˆ and the categorical cross-entropy (CCE) loss for gˆ.

Algorithm 1 Training of the two-stage model.
1: **procedure** Two_stage(x˜i, wi, ti, yi)
2: Empirically minimize $\frac{1}{m}\sum_{i=1}^{m}\lVert d(\tilde{x}_i) - t_i\rVert^2$ and obtain dˆ.
3: Empirically minimize $\frac{1}{n}\sum_{i=1}^{n}\mathrm{CCE}(g(w_i), y_i)$ and obtain gˆ.
4: Compose dˆ, gˆ, and ϕ into hˆ(x) = gˆ(ϕ(x, dˆ(x))).
5: **end procedure**

## A.2 DALUPI With Faster R-CNN

For multi-label classification, we adapt Faster R-CNN (Ren et al., 2016), outlined in Figure 7 and described below. Faster R-CNN uses a region proposal network (RPN) to generate region proposals which are fed to a detection network for classification and bounding box regression. This way of solving the task in subsequent steps has similarities with our two-stage algorithm, although Faster R-CNN can be trained end-to-end. We make small modifications to the training procedure of the original model, described at the end of this section. The RPN generates region proposals relative to a fixed number of reference boxes—anchors—centered at the locations of a sliding window moving over convolutional feature maps. Each anchor is assigned a binary label p ∈ {0, 1} based on its overlap with ground-truth bounding boxes; positive anchors are also associated with a ground-truth box with location t.
The RPN loss for a single anchor is

$$L^{\mathrm{RPN}}(\hat{p},p,\hat{t},t):=L_{\mathrm{cls}}^{\mathrm{RPN}}(\hat{p},p)+p\,L_{\mathrm{reg}}^{\mathrm{RPN}}(\hat{t},t),\qquad(3)$$

where tˆ represents the refined location of the anchor and pˆ is the estimated probability that the anchor contains an object. The binary cross-entropy loss and a smooth L1 loss are used for the classification loss $L_{\mathrm{cls}}^{\mathrm{RPN}}$ and the regression loss $L_{\mathrm{reg}}^{\mathrm{RPN}}$, respectively. For a mini-batch of images, the total RPN loss is computed based on a subset of all anchors, sampled to have a ratio of up to 1:1 between positive and negative anchors. A filtered set of region proposals is projected onto the convolutional feature maps. For each proposal, the detection network—Fast R-CNN (Girshick, 2015)—outputs a probability pˆ(k) and a predicted bounding box location tˆ(k) for each class k. Let pˆ = (pˆ(0), . . . , pˆ(K)), where $\sum_k \hat{p}(k) = 1$, K is the number of classes, and 0 represents a catch-all background class. For a single proposal with ground-truth coordinates t and multi-class label u ∈ {0, . . . , K}, the detection loss is

$$L^{\mathrm{det}}(\hat{p},u,\hat{t}_{u},t)=L_{\mathrm{cls}}^{\mathrm{det}}(\hat{p},u)+\mathbf{I}_{u\geq1}\,L_{\mathrm{reg}}^{\mathrm{det}}(\hat{t}_{u},t),\qquad(4)$$

where $L_{\mathrm{cls}}^{\mathrm{det}}(\hat{p},u) = -\log\hat{p}(u)$ and $L_{\mathrm{reg}}^{\mathrm{det}}$ is a smooth L1 loss. To obtain a probability vector for the entire image, we maximize, for each class k, over the probabilities of all proposals. During training, Faster R-CNN requires that all input images x come with at least one ground-truth annotation (bounding box) w and its corresponding label u. To increase sample efficiency, we enable training the model using non-annotated but labeled samples (x, y) from the source domain and annotated but unlabeled samples (x˜, w˜) from the target domain.
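As an illustration, the per-proposal detection loss in Equation 4 and the per-image aggregation can be sketched in plain Python (the helper names are ours; the actual implementation is part of the modified Faster R-CNN and operates on tensors, and `smooth_l1` here uses the common β = 1 form):

```python
import math

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 regression loss, summed over the four box coordinates."""
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d / beta if d < beta else d - 0.5 * beta
    return total

def detection_loss(p_hat, u, t_hat_u, t):
    """Equation 4: -log p_hat[u] plus, for non-background classes
    (u >= 1), a smooth L1 loss between predicted and true boxes."""
    cls_loss = -math.log(p_hat[u])
    reg_loss = smooth_l1(t_hat_u, t) if u >= 1 else 0.0
    return cls_loss + reg_loss

def image_probabilities(proposal_probs, num_classes):
    """Per-image probability vector: for each class k, the maximum
    of p_hat(k) over all proposals of the image."""
    return [max(p[k] for p in proposal_probs) for k in range(num_classes)]
```

A background proposal (u = 0) contributes only the classification term, while a foreground proposal is also penalized for its box regression error.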
In the RPN, no labels are needed, and we simply ignore anchors from non-annotated images when sampling anchors for the loss computation. For the computation of Equation 4, we handle the two cases separately. We assign the label u = −1 to all ground-truth annotations from the target domain and multiply $L_{\mathrm{cls}}^{\mathrm{det}}$ by the indicator $\mathbf{I}_{u\geq0}$. For non-annotated samples (x, y) from the source domain, there are no box-specific coordinates t or labels u but only the labels y for the entire image. In this case, Equation 4 is undefined and we instead compute the binary cross-entropy loss between the per-image label and the probability vector for the entire image. We train the RPN and the detection network jointly as described in Ren et al. (2016). To extract feature maps, we use a Feature Pyramid Network (Lin et al., 2017) on top of a ResNet-50 architecture (He et al., 2016b). We use the modified model in the experiments in Sections 4.3 and 4.4. In Section 4.3, we also train this model in a LUPI setting, where no information from the target domain is used.

## A.3 Celebrity Photo Classification With Binary Attribute Vector

In our experiment based on CelebA (Liu et al., 2015), the input x is an RGB image which has been resized to 64×64 pixels, the target y is a binary label for the gender of the subject of the image, and the privileged information w is a vector of 7 binary-valued attributes. The attributes used in this experiment are: Bald, Bangs, Mustache, Smiling, 5_o_Clock_Shadow, Oval_Face and Heavy_Makeup. We use a subset of the CelebA dataset with 2,000 labeled source examples and 3,000 unlabeled target examples. We use 1,000 samples each for the validation set, source test set, and target test set. When we use privileged information from the source domain, in addition to the target, we use 30,000 extra samples (x, w) with PI. For DALUPI, we use the two-stage estimator with the network fˆ based on ResNet-18 followed by a nonlinear MLP.
The network gˆ is an MLP with two hidden layers of 256 neurons each. To conform with the experiment in Xie et al. (2020), we only train the models for 25 epochs. If the validation accuracy (or validation AUC for fˆ) does not improve for 10 subsequent epochs, we stop the training earlier. For DALUPI, the early stopping patience is 15 for each network. We treat the problem as multi-class classification with two classes and use the categorical cross-entropy loss for SL-S, DANN, and MDD. We randomly choose hyperparameters from the following predefined sets of values:

- SL-S and SL-T:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - step size n for learning rate decay: (15, 30, 100)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - nonlinear classifier: (True, False).

- DALUPI:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−5, 1.0 × 10−4, 1.0 × 10−3)
  - step size n for learning rate decay: (15, 30, 100)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3).

- DANN:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - parameter α for learning rate decay: (0, 1.0)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - width of discriminator network: (64, 128, 256)
  - depth of discriminator network: (2, 3)
  - nonlinear classifier: (True, False)
  - parameter β for adaptation regularization decay: (0.1, 1.0, 10.0).

- MDD:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - parameter α for learning rate decay: (0, 1.0)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - nonlinear classifier: (True, False)
  - maximum norm value for classifier weights: (0.5, 1.0, 2.0)
  - parameter β for adaptation regularization decay: (0.1, 1.0, 10.0).
## A.4 Digit Classification With Single Bounding Box As PI

In the digit classification task, we separate 20 % of the available source and target data into a test set. We likewise use 20 % of the training data for validation purposes. For DALUPI, we use ResNet-18 for the function fˆ. We replace the default fully connected layer with a fully connected layer with 4 neurons to predict the coordinates of the bounding box. The predicted bounding box is resized to a 28 × 28 square no matter the initial size. We use a simple convolutional neural network for the function gˆ with the following structure:

- convolutional layer with 16 output channels, kernel size of 5, stride of 1, and padding of 2
- max pooling layer with kernel size 2, followed by a ReLU activation
- convolutional layer with 32 output channels, kernel size of 5, stride of 1, and padding of 2
- max pooling layer with kernel size 2, followed by a ReLU activation
- dropout layer with p = 0.4
- fully connected layer with 50 out features, followed by ReLU activation
- dropout layer with p = 0.2
- fully connected layer with 5 out features.

The model training is stopped when the best validation accuracy (or validation loss for fˆ) does not improve over 10 epochs or when the model has been trained for 100 epochs, whichever occurs first. All models are trained from scratch, without pretrained weights. We use the categorical cross-entropy loss for SL-S, SL-T, DANN, and MDD. We randomly choose hyperparameters from the following predefined sets of values:

- SL-S and SL-T:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - step size n for learning rate decay: (15, 30, 100)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - nonlinear classifier: (True, False).

- DALUPI:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - step size n for learning rate decay: (15, 30, 100)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3).
- DANN:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - parameter α for learning rate decay: (0, 1.0)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - width of discriminator network: (64, 128, 256)
  - depth of discriminator network: (2, 3)
  - nonlinear classifier: (True, False)
  - parameter β for adaptation regularization decay: (0.1, 1.0, 10.0).

- MDD:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - parameter α for learning rate decay: (0, 1.0)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - nonlinear classifier: (True, False)
  - maximum norm value for classifier weights: (0.5, 1.0, 2.0)
  - parameter β for adaptation regularization decay: (0.1, 1.0, 10.0).

Table 5: Marginal label distribution in source and target domains for the entity classification task based on the MS-COCO dataset. The background class contains images where none of the four entities are present.

| Domain | Person | Dog | Cat | Bird | Background |
|--------|--------|-------|-------|------|------------|
| Source | 2,963 | 569 | 1,008 | 213 | 1,000 |
| Target | 3,631 | 1,121 | 423 | 712 | 1,000 |

## A.5 Entity Classification With Multiple Regions Of Interest As PI

In the entity classification experiment, we train all models for at most 50 epochs. If the validation AUC does not improve for 10 subsequent epochs, we stop the training earlier. No pretrained weights are used in this experiment since we find that the task is too easy to solve with pretrained weights. For DALUPI and LUPI, we use the end-to-end solution based on Faster R-CNN (see Section A.2). We use the default anchor sizes for each of the feature maps (32, 64, 128, 256, 512), and for each anchor size we use the default aspect ratios (0.5, 1.0, 2.0). We use the binary cross-entropy loss for SL-S, SL-T, DANN, and MDD. We use the 2017 version of the MS-COCO dataset (Lin et al., 2014).
As described in Section 4.3, we extract indoor images by sorting out images from the super categories "indoor" and "appliance" that also contain at least one of the entity classes. Outdoor images are extracted in the same way using the super categories "vehicle" and "outdoor". Images that match both domains (for example, an indoor image with a toy car) are removed, as are any gray-scale images. We also include 1,000 negative examples, i.e., images with none of the entities present, in both domains. In total, there are 5,231 images in the source domain and 5,719 images in the target domain. From these pools, we randomly sample 3,000, 1,000, and 1,000 images for training, validation, and testing, respectively. In Table 5 we describe the label distribution in both domains. All images are resized to 320 × 320. We randomly choose hyperparameters from the following predefined sets of values. For information about the specific parameters in LUPI and DALUPI, we refer to the paper by Ren et al. (2016). Here, RoI and NMS refer to region of interest and non-maximum suppression, respectively.

- SL-S and SL-T:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - step size n for learning rate decay: (15, 30, 100)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - nonlinear classifier: (True, False).

- DANN:
  - batch size: (16, 32, 64)
  - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
  - parameter α for learning rate decay: (0, 1.0)
  - weight decay: (1.0 × 10−4, 1.0 × 10−3)
  - dropout (encoder): (0, 0.1, 0.2, 0.5)
  - width of discriminator network: (64, 128, 256)
  - depth of discriminator network: (2, 3)
  - nonlinear classifier: (True, False)
  - parameter β for adaptation regularization decay: (0.1, 1.0, 10.0).

Table 6: Marginal distribution of labels of images and bounding boxes in the source and target domain, respectively, for the chest X-ray classification experiment. ATL=Atelectasis; CM=Cardiomegaly; PE=Effusion; NF=No Finding.
| Data | ATL | CM | PE | NF | |--------|--------|--------|--------|--------| | x ∼ S | 11,559 | 2,776 | 13,317 | 60,361 | | w ∼ S | 180 | 146 | 153 | - | | x˜ ∼ T | 14,278 | 20,466 | 74,195 | 16,996 | | w˜ ∼ T | 75 | 66 | 64 | - | - batch size: (16, 32, 64) - initial learning rate: (1.0 × 10−4, 1.0 × 10−3) - parameter α for learning rate decay: (0, 1.0) - weight decay: (1.0 × 10−4, 1.0 × 10−3) - dropout (encoder): (0, 0.1, 0.2, 0.5) - nonlinear classifier: (True, False) - maximum norm value for classifier weights: (0.5, 1.0, 2.0) - parameter β for adaption regularization decay: (0.1, 1.0, 10.0). ## - Lupi And Dalupi: - batch size: (16, 32, 64) - learning rate: (1.0 × 10−4, 1.0 × 10−3) - step size n for learning rate decay: (15, 30, 100) - weight decay: (1.0 × 10−4, 1.0 × 10−3) - IoU foreground threshold (RPN): (0.6, 0.7, 0.8, 0.9) - IoU background threshold (RPN): (0.2, 0.3, 0.4) - batchsize per image (RPN): (32, 64, 128, 256) - fraction of positive samples (RPN): (0.4, 0.5, 0.6, 0.7) - NMS threshold (RPN): (0.6, 0.7, 0.8) - RoI pooling output size (Fast R-CNN): (5, 7, 9) - IoU foreground threshold (Fast R-CNN): (0.5, 0.6) - IoU background threshold (Fast R-CNN): (0.4, 0.5) - batchsize per image (Fast R-CNN): (16, 32, 64, 128) - fraction of positive samples (Fast R-CNN): (0.2, 0.25, 0.3) - NMS threshold (Fast R-CNN): (0.4, 0.5, 0.6) - detections per image (Fast R-CNN): (25, 50, 75, 100). ## A.6 X-Ray Classification With Multiple Regions Of Interest As Pi In the X-ray classification experiment, we train all models for at most 50 epochs, using pre-trained weights in the ResNet architecture of each model. If the validation AUC does not improve for 10 subsequent epochs, we stop the training earlier. We then fine-tune all models, except DANN and MDD, for up to 20 additional epochs. The number of encoder layers that are fine-tuned is a hyperparameter for which we consider different values. We start the training with weights pretrained on ImageNet. 
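The early-stopping criterion described above (stop once the validation AUC has not improved for 10 consecutive epochs, with at most 50 epochs in total) can be sketched as below. This is a hypothetical helper illustrating the rule, not the authors' actual training code.

```python
class EarlyStopping:
    """Stop training once the monitored metric (here validation AUC)
    has not improved for `patience` consecutive epochs."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, val_auc):
        """Record one epoch's validation AUC; return True to stop."""
        if val_auc > self.best:
            self.best = val_auc
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Illustration with a synthetic AUC curve that plateaus after epoch 2.
aucs = [0.60, 0.65, 0.66] + [0.66] * 47
stopper = EarlyStopping(patience=10)
stopped_at = None
for epoch in range(50):  # "at most 50 epochs"
    if stopper.step(aucs[epoch]):
        stopped_at = epoch
        break
# Training stops 10 epochs after the last improvement (epoch 2).
```

Fine-tuning for up to 20 additional epochs would reuse the same loop with a fresh `EarlyStopping` instance.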
For DALUPI, we use the end-to-end solution based on Faster R-CNN (see Section A.2). We use the default anchor sizes for each of the feature maps (32, 64, 128, 256, 512), and for each anchor size we use the default aspect ratios (0.5, 1.0, 2.0). We use the binary cross entropy loss for SL-S, SL-T, DANN, and MDD. In total, there are 83,519 (457) and 120,435 (118) images (annotated images) in the source and target domain, respectively. The distributions of labels and bounding box annotations are provided in Table 6. Here, "NF" refers to images with no confirmed findings. In the annotated images, there are 180/146/153 and 75/66/64 examples of ATL/CM/PE in each domain, respectively. Validation and test sets are sampled from non-annotated images and contain 10,000 samples each. All annotated images are reserved for training. We merge the default training and validation datasets before splitting the data and resize all images to 320 × 320. For the source dataset (ChestX-ray8), the bounding boxes can be found together with the dataset. The target segmentations can be found here: https://stanfordaimi.azurewebsites.net/datasets/23c56a0d-15de-405b-87c8-99c30138950c. We choose hyperparameters randomly from the following predefined sets of values. For information about the specific parameters in DALUPI, we refer to the paper by Ren et al. (2016). RoI and NMS refer to region of interest and non-maximum suppression, respectively.

- SL-S and SL-T:
    - batch size: (16, 32, 64)
    - learning rate: (1.0 × 10−4, 1.0 × 10−3)
    - weight decay: (1.0 × 10−4, 1.0 × 10−3)
    - dropout (encoder): (0, 0.1, 0.2, 0.5)
    - nonlinear classifier: (True, False)
    - number of layers to fine-tune: (3, 4, 5)
    - learning rate (fine-tuning): (1.0 × 10−5, 1.0 × 10−4).
- DANN:
    - batch size: (16, 32, 64)
    - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
    - parameter α for learning rate decay: (0, 1.0)
    - weight decay: (1.0 × 10−4, 1.0 × 10−3)
    - number of trainable layers (encoder): (1, 2, 3, 4, 5)
    - dropout (encoder): (0, 0.1, 0.2, 0.5)
    - width of discriminator network: (64, 128, 256)
    - depth of discriminator network: (2, 3)
    - nonlinear classifier: (True, False)
    - parameter β for adaption regularization decay: (0.1, 1.0, 10.0).
- MDD:
    - batch size: (16, 32, 64)
    - initial learning rate: (1.0 × 10−4, 1.0 × 10−3)
    - parameter α for learning rate decay: (0, 1.0)
    - weight decay: (1.0 × 10−4, 1.0 × 10−3)
    - number of trainable layers (encoder): (1, 2, 3, 4, 5)
    - dropout (encoder): (0, 0.1, 0.2, 0.5)
    - nonlinear classifier: (True, False)
    - maximum norm value for classifier weights: (0.5, 1.0, 2.0)
    - parameter β for adaption regularization decay: (0.1, 1.0, 10.0).
- DALUPI:
    - batch size: (16, 32, 64)
    - learning rate: (1.0 × 10−4)
    - weight decay: (1.0 × 10−4, 1.0 × 10−3)
    - IoU foreground threshold (RPN): (0.6, 0.7, 0.8, 0.9)
    - IoU background threshold (RPN): (0.2, 0.3, 0.4)
    - batch size per image (RPN): (32, 64, 128, 256)
    - fraction of positive samples (RPN): (0.4, 0.5, 0.6, 0.7)
    - NMS threshold (RPN): (0.6, 0.7, 0.8)
    - RoI pooling output size (Fast R-CNN): (5, 7, 9)
    - IoU foreground threshold (Fast R-CNN): (0.5, 0.6)
    - IoU background threshold (Fast R-CNN): (0.4, 0.5)
    - batch size per image (Fast R-CNN): (16, 32, 64, 128)
    - fraction of positive samples (Fast R-CNN): (0.2, 0.25, 0.3)
    - NMS threshold (Fast R-CNN): (0.4, 0.5, 0.6)
    - detections per image (Fast R-CNN): (25, 50, 75, 100)
    - learning rate (fine-tuning): (1.0 × 10−5, 1.0 × 10−4)
    - number of layers to fine-tune: (3, 4, 5).

![25_image_0.png](25_image_0.png)
(a) SL-S, ϵ = 0.2 (b) SL-S, ϵ = 1.0
![25_image_1.png](25_image_1.png)

Figure 8: Example images (top) and saliency maps (bottom) from SL-S when trained with source skew ϵ = 0.2 (a) and ϵ = 1.0 (b).
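The random search used throughout this appendix — drawing each hyperparameter uniformly at random from its predefined set — can be sketched as follows. The search space shown is a small excerpt of the lists above, and the helper names are hypothetical, not taken from the authors' code.

```python
import random

# Excerpt of a search space in the style of the lists above: each
# hyperparameter has a small predefined set of candidate values.
SEARCH_SPACE = {
    "batch_size": (16, 32, 64),
    "initial_learning_rate": (1e-4, 1e-3),
    "lr_decay_step_size_n": (15, 30, 100),
    "weight_decay": (1e-4, 1e-3),
    "dropout_encoder": (0.0, 0.1, 0.2, 0.5),
    "nonlinear_classifier": (True, False),
}


def sample_config(space, rng):
    """Draw one configuration by picking a value uniformly at random
    from each hyperparameter's candidate set."""
    return {name: rng.choice(values) for name, values in space.items()}


rng = random.Random(0)
configs = [sample_config(SEARCH_SPACE, rng) for _ in range(20)]
# Every sampled value comes from its predefined set.
assert all(c[k] in SEARCH_SPACE[k] for c in configs for k in SEARCH_SPACE)
```

Each sampled configuration would then be trained and scored on the validation set, keeping the best-performing one.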
## B Additional Results

In Figure 8a and 8b, we show some example images from the digit classification task with associated saliency maps from the source-only model for different values of the skew parameter ϵ. We can see that for a lower value of ϵ the SL-S model activations seem concentrated on the area with the digit, whereas when the correlation with the background is large the model activations are more spread out. In Figure 9, we show the *average* AUC when additional training data of up to 30,000 samples are added in the chest X-ray experiment. We see that, once given access to a much larger amount of labeled samples, SL-S and DALUPI perform comparably in the target domain. In Figure 10, we show AUC for the pathology CM when additional training data *without* bounding box annotations are added. We see that SL-S catches up to the performance of DALUPI when a large amount of labeled examples are provided. These results indicate that identifiability is not the primary obstacle for adaptation, and that PI improves sample efficiency.

## C Proof Of Proposition 1

Proposition. Let Assumptions 1 and 2 be satisfied w.r.t. W (not necessarily w.r.t. X) and let Assumption 3 hold as stated. Then, the target risk RT is identified for hypotheses h : X → Y,

$$R_{\mathcal{T}}(h)=\int_{x}\mathcal{T}(x)\int_{w}\mathcal{T}(w\mid x)\int_{y}\mathcal{S}(y\mid w)L(h(x),y)\mathrm{d}y\mathrm{d}w\mathrm{d}x\ .$$

![26_image_0.png](26_image_0.png)

Figure 9: Classification of chest X-ray images. Model performance on source (a) and target (b) domains. The AUC is averaged over the three pathologies: ATL, CM and PE. The 95% confidence intervals are computed by bootstrapping the results over five seeds.

![26_image_1.png](26_image_1.png)

Figure 10: Test AUC for CM in T. DALUPI outperforms the other models when no extra (*x, y*) samples are provided.
Adding examples without bounding box annotations improves the performance of SL-S and SL-T, eventually causing the latter to surpass DALUPI.

and, for $L$ the squared loss, a minimizer of $R_{\mathcal{T}}$ is $h_{\mathcal{T}}^{*}(x)=\int_{w}\mathcal{T}(w\mid x)\,\mathbb{E}_{\mathcal{S}}[Y\mid w]\,\mathrm{d}w$.

Proof. By definition, $R_{\mathcal{T}}(h)=\int_{x,y}\mathcal{T}(x,y)L(h(x),y)\,\mathrm{d}y\,\mathrm{d}x$. We marginalize over W to get

$$\mathcal{T}(x,y)=\mathcal{T}(x)\mathbb{E}_{\mathcal{T}(W|x)}\left[\mathcal{T}(y\mid W,x)\mid x\right]$$
$$=\mathcal{T}(x)\mathbb{E}_{\mathcal{T}(W|x)}[\mathcal{T}(y\mid W)\mid x]$$
$$=\mathcal{T}(x)\int_{w:\mathcal{T}(w)>0}\mathcal{T}(w\mid x)\mathcal{S}(y\mid w)\mathrm{d}w$$
$$=\mathcal{T}(x)\int_{w:\mathcal{S}(w)>0}\mathcal{T}(w\mid x)\mathcal{S}(y\mid w)\mathrm{d}w\ ,$$

where the second equality follows by sufficiency and the third by covariate shift and overlap in W. $\mathcal{T}(x)$, $\mathcal{T}(w\mid x)$, and $\mathcal{S}(y\mid w)$ are observable through training samples. That $h_{\mathcal{T}}^{*}$ is a minimizer follows from the first-order condition of setting the derivative of the risk with respect to h to 0. This strategy yields the well-known result that

$$h_{\mathcal{T}}^{*}=\operatorname*{arg\,min}_{h}\mathbb{E}_{\mathcal{T}}[(h(X)-Y)^{2}]=\mathbb{E}_{\mathcal{T}}[Y\mid X]\ .$$

By definition and the previous result, we have that

$$\mathbb{E}_{\mathcal{T}}[Y\mid X=x]=\int_{y}y\frac{\mathcal{T}(x,y)}{\mathcal{T}(x)}\mathrm{d}y$$
$$=\int_{y}\int_{w:\mathcal{S}(w)>0}\mathcal{T}(w\mid x)\mathcal{S}(y\mid w)y\mathrm{d}w\mathrm{d}y$$
$$=\int_{w}\mathcal{T}(w\mid x)\,\mathbb{E}_{\mathcal{S}}[Y\mid w]\mathrm{d}w$$

and we have the result.

## D Proof Of Proposition 2

Proposition 2. Assume that G comprises M-Lipschitz mappings from the privileged information space $\mathcal{W}\subseteq\mathbb{R}^{d_W}$ to Y. Further, assume that both the ground truth privileged information W and label Y are deterministic in X and W respectively. Let ρ be the domain density ratio of W and let Assumptions 1–3 (Covariate shift, Overlap and Sufficiency) hold w.r.t. W.
Further, let the loss L be uniformly bounded by some constant B and let d and d′ be the pseudo-dimensions of G and F respectively. Assume that there are n observations from the source (labeled) domain and m from the target (unlabeled) domain. Then, with L the squared Euclidean distance, for any $h=g\circ f\in\mathcal{G}\times\mathcal{F}$, w.p. at least 1 − δ,

$$\begin{aligned}\frac{R_{\mathcal{T}}(h)}{2}\leq\;&\hat{R}_{\mathcal{S}}^{Y,\rho}(g)+M^{2}\hat{R}_{\mathcal{T}}^{W}(f)\\&+2^{5/4}\sqrt{d_{2}(\mathcal{T}\|\mathcal{S})}^{\,3/8}\sqrt{\frac{d\log\frac{2me}{d}+\log\frac{4}{\delta}}{m}}\\&+d_{W}BM^{2}\left(\sqrt{\frac{2d'\log\frac{en}{d'}}{n}}+\sqrt{\frac{\log\frac{d_{W}}{\delta}}{2n}}\right).\end{aligned}$$

Proof. Decomposing the risk of $h=g\circ f$, we get

$$\begin{aligned}R_{\mathcal{T}}(h)&=\mathbb{E}_{\mathcal{T}}[(g(f(X))-Y)^{2}]\\&\leq2\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}+(g(f(X))-g(W))^{2}]\\&\leq2\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}+M^{2}\|f(X)-W\|^{2}]\\&\leq2\mathbb{E}_{\mathcal{T}}[(g(W)-Y)^{2}]+2M^{2}\mathbb{E}_{\mathcal{T}}[\|f(X)-W\|^{2}]\\&=2R_{\mathcal{T}}^{Y}(g)+2M^{2}R_{\mathcal{T}}^{W}(f)=2\underbrace{R_{\mathcal{S}}^{Y,\rho}(g)}_{(I)}+2M^{2}\underbrace{R_{\mathcal{T}}^{W}(f)}_{(II)}.\end{aligned}$$

The first inequality follows from the relaxed triangle inequality, the second inequality from the Lipschitz property and the third equality from Overlap and Covariate shift. We will bound these quantities separately starting with (I). We assume that the pseudo-dimension d of G is bounded. Further, we assume that the second moment of the density ratios, equal to the Rényi divergence $d_{2}(\mathcal{T}\|\mathcal{S})=\sum_{w}\mathcal{T}(w)\frac{\mathcal{T}(w)}{\mathcal{S}(w)}$, is bounded and that the density ratios are non-zero for all $w\in\mathcal{W}$. Let $D_{1}=\{w_{i},y_{i}\}_{i=0}^{m}$ be a dataset drawn i.i.d. from the source domain. Then by application of Theorem 3 from Cortes et al.
(2010) we obtain with probability 1 − δ over the choice of D1,

$$(I)=R_{\mathcal{S}}^{Y,\rho}(g)\leq\hat{R}_{\mathcal{S}}^{Y,\rho}(g)+2^{5/4}\sqrt{d_{2}(\mathcal{T}\|\mathcal{S})}^{\,3/8}\sqrt{\frac{d\log\frac{2me}{d}+\log\frac{4}{\delta}}{m}}$$

Now for (II) we treat each component of $w\in\mathcal{W}$ as a regression problem independent from all the others. We can therefore write the risk as the sum of the individual component risks

$$R_{\mathcal{T}}^{W}(f)=\sum_{i=1}^{d_{W}}R_{\mathcal{T},i}^{W}(f)$$

Let the pseudo-dimension of F be denoted d′ and let $D_{2}=\{x_{i},w_{i}\}_{i=0}^{n}$ be a dataset drawn i.i.d. from the target domain. Then, using Theorem 11.8 from Mohri et al. (2018), we have that for any δ > 0, with probability at least 1 − δ over the choice of D2, the following inequality holds for all hypotheses f ∈ F for each component risk

$$R_{\mathcal{T},i}^{W}(f)\leq\hat{R}_{\mathcal{T},i}^{W}(f)+B\left(\sqrt{\frac{2d'\log\frac{en}{d'}}{n}}+\sqrt{\frac{\log\frac{1}{\delta}}{2n}}\right)$$

We then make all the bounds hold simultaneously by applying the union bound, requiring each bound to hold with probability $1-\frac{\delta}{d_{W}}$, which results in

$$R_{\mathcal{T}}^{W}(f)=\sum_{i=1}^{d_{W}}R_{\mathcal{T},i}^{W}(f)\leq\sum_{i=1}^{d_{W}}\hat{R}_{\mathcal{T},i}^{W}(f)+\sum_{i=1}^{d_{W}}B\left(\sqrt{\frac{2d'\log\frac{en}{d'}}{n}}+\sqrt{\frac{\log\frac{d_{W}}{\delta}}{2n}}\right)=\hat{R}_{\mathcal{T}}^{W}(f)+d_{W}B\left(\sqrt{\frac{2d'\log\frac{en}{d'}}{n}}+\sqrt{\frac{\log\frac{d_{W}}{\delta}}{2n}}\right)$$

Combining these two results then yields the proposition statement. Consistency follows as Y is a deterministic function of W and W is a deterministic function of X, and both $\mathcal{G}$ and $\mathcal{F}$ are well-specified. Thus both empirical risks and sample complexity terms will converge to 0 in the limit of infinite samples. The parts of the bound shown above fall into three main categories: empirical risk(s), domain shift, and sample complexity components.
A central term that figures in both the weighted empirical risk and the Rényi divergence is the density ratio $\frac{\mathcal{T}(w)}{\mathcal{S}(w)}$. Therefore, the size of the bound is governed at least in part by how close the source and target domains are in $\mathcal{W}$-space. This is similar to other importance weighting bounds; however, since the experiment designer may choose the form of PI, this ratio can be more well-behaved than the density ratio in the input space.

## E Proof Sketch For PAC-Bayes Bound

We will here detail a proof sketch for a PAC-Bayes version of the bound we propose in the main text. For the purposes of this bound we consider the quantity $\mathbb{E}_{h\sim\psi}R_{\mathcal{T}}(h)$, where ψ is a posterior distribution over classifiers $h\sim\psi$. As we base the bound on the two-step methodology where we train two different classifiers on separate datasets, we assume that we can obtain the posteriors over the component functions separately and independently, i.e., $h=g\circ f\sim\psi=\psi_{f}\times\psi_{g}$, where $f\sim\psi_{f}$ and $g\sim\psi_{g}$. Let the assumptions from Proposition 2 hold here. Similar to the previous section, we decompose the risk into two parts:

$$\mathbb{E}_{h\sim\psi}R_{\mathcal{T}}(h)=\mathbb{E}_{h\sim\psi}\mathbb{E}_{\mathcal{T}}\big[(g(f(X))-Y)^{2}\big]\leq\mathbb{E}_{h\sim\psi}\big[2R_{\mathcal{T}}^{Y}(g)+2M^{2}R_{\mathcal{T}}^{W}(f)\big]=2\,\mathbb{E}_{h\sim\psi}\underbrace{R_{\mathcal{T}}^{Y}(g)}_{(I)}+2M^{2}\,\mathbb{E}_{h\sim\psi}\underbrace{R_{\mathcal{T}}^{W}(f)}_{(II)}.$$

We note that since we now have expectations over the composite function h of expressions which depend on only one of the components, we can, for example, write the following:

$$\mathbb{E}_{h\sim\psi}R_{\mathcal{T}}^{Y}(g)=\mathbb{E}_{g\sim\psi_{g}}R_{\mathcal{T}}^{Y}(g)$$

This holds as we assume that f and g are not dependent on each other. Therefore, we can simply marginalize out the part which is not in use.
From this point we can use some of the available bounds from the literature to estimate the resulting parts, e.g., Corollary 1 from Breitholtz & Johansson (2022). Application of this result yields the following bound on the first term

$$\operatorname*{\mathbb{E}}_{g\sim\psi_{g}}R_{\mathcal{T}}^{Y}(g)\leq\frac{1}{\gamma}\operatorname*{\mathbb{E}}_{g\sim\psi_{g}}\hat{R}_{\mathcal{S}}^{Y,\rho}(g)+\beta_{\infty}\frac{\operatorname{KL}(\psi_{g}\|\pi_{g})+\ln(\frac{1}{\delta})}{2\gamma(1-\gamma)m}\ .$$

Thereafter we can use another bound from the literature to estimate the second term, e.g., Theorem 6 from Germain et al. (2020). Using this we obtain the following:

$$\operatorname*{\mathbb{E}}_{f\sim\psi_{f}}R_{\mathcal{T}}^{W}(f)\leq\frac{\alpha}{1-e^{-\alpha}}\left(\operatorname*{\mathbb{E}}_{f\sim\psi_{f}}\hat{R}_{\mathcal{T}}^{W}(f)+\frac{\mathrm{KL}(\psi_{f}\|\pi_{f})+\ln(\frac{1}{\delta})}{n\alpha}\right)\ .$$

Then a bound can be constructed by combining these two results using a union bound argument.

$$\mathbb{E}_{h\sim\psi}R_{\mathcal{T}}(h)\leq\frac{2}{\gamma}\,\mathbb{E}_{g\sim\psi_{g}}\,\hat{R}_{\mathcal{S}}^{Y,\rho}(g)+\beta_{\infty}\frac{\mathrm{KL}(\psi_{g}\|\pi_{g})+\ln(\frac{2}{\delta})}{2\gamma(1-\gamma)m}+\frac{2M^{2}\alpha}{1-e^{-\alpha}}\left(\mathbb{E}_{f\sim\psi_{f}}\,\hat{R}_{\mathcal{T}}^{W}(f)+\frac{\mathrm{KL}(\psi_{f}\|\pi_{f})+\ln(\frac{2}{\delta})}{n\alpha}\right).$$

## F A Bound On The Target Risk Without Sufficiency

The sufficiency assumption is used to replace T (y | x) with T (y | w) in the proof of Proposition 1. If sufficiency is violated but it is plausible that the degree of insufficiency is comparable across domains, we can still obtain a bound on the target risk which may be estimated from observed quantities.
One way to formalize such an assumption is that there is some γ ≥ 1, for which

$$\sup_{x:\mathcal{T}(x\mid w)>0}\mathcal{T}(y\mid w,x)/\mathcal{T}(y\mid w)\leq\gamma\sup_{x:\mathcal{S}(x\mid w)>0}\mathcal{S}(y\mid w,x)/\mathcal{S}(y\mid w)\tag{5}$$

This may be viewed as a relaxation of sufficiency. If Assumption 3 holds, both suprema in the inequality are equal to 1. Under Equation 5, with $\Delta_{\gamma}(w,y)$ equal to the right-hand side of the inequality,

$$R_{\mathcal{T}}(h)\leq\int_{x}\mathcal{T}(x)\int_{w}\mathcal{T}(w\mid x)\int_{y}\Delta_{\gamma}(w,y)\mathcal{S}(y\mid w)L(h(x),y)\mathrm{d}y\mathrm{d}w\mathrm{d}x\ .$$

However, the added assumption is not verifiable statistically.
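In contrast, the identification formula of Proposition 1 (which does assume sufficiency) can be checked numerically on a small discrete example. The sketch below uses hypothetical source and target distributions constructed to satisfy covariate shift, overlap, and sufficiency w.r.t. W, and verifies that the target risk computed directly from target labels matches the risk assembled from T(x), T(w | x), and S(y | w) alone; all distributions and the hypothesis h are illustrative choices, not taken from the paper.

```python
import itertools

# Toy discrete spaces for inputs X, privileged information W, and labels Y.
X_VALS, W_VALS, Y_VALS = [0, 1], [0, 1], [0, 1]

# Shared label mechanism p(y | w): sufficiency (Y independent of X given W)
# and covariate shift w.r.t. W hold by construction.
p_y_w = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

# Source and target differ in the joint over (X, W) but overlap in W.
S_xw = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
T_xw = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.3}


def joint(d_xw):
    """Build the full joint over (x, w, y) from a joint over (x, w)."""
    return {(x, w, y): d_xw[(x, w)] * p_y_w[w][y]
            for x, w, y in itertools.product(X_VALS, W_VALS, Y_VALS)}


S, T = joint(S_xw), joint(T_xw)

h = lambda x: 0.3 + 0.4 * x            # an arbitrary hypothesis h(x)
L = lambda y_hat, y: (y_hat - y) ** 2  # squared loss

# Target risk computed directly (uses target labels, unobservable in practice).
R_direct = sum(T[(x, w, y)] * L(h(x), y) for (x, w, y) in T)

# Identified risk: T(x), T(w | x) from target inputs/PI, S(y | w) from source.
T_x = {x: sum(T_xw[(x, w)] for w in W_VALS) for x in X_VALS}
T_w_x = {(w, x): T_xw[(x, w)] / T_x[x] for x in X_VALS for w in W_VALS}
S_w = {w: sum(S_xw[(x, w)] for x in X_VALS) for w in W_VALS}
S_y_w = {(y, w): sum(S[(x, w, y)] for x in X_VALS) / S_w[w]
         for w in W_VALS for y in Y_VALS}

R_identified = sum(T_x[x] * T_w_x[(w, x)] * S_y_w[(y, w)] * L(h(x), y)
                   for x, w, y in itertools.product(X_VALS, W_VALS, Y_VALS))

assert abs(R_direct - R_identified) < 1e-12
```

If `p_y_w` were made to depend on x differently in the two domains (violating sufficiency), the two risks would no longer agree, which is exactly the case Section F's relaxed bound is meant to cover.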
Review 1: Summary: This paper introduces Unsupervised Domain Adaptation by Learning Using Privileged Information (DALUPI), a novel approach to tackle the challenge of transferring knowledge from a labeled source domain to an unlabeled target domain. Traditional unsupervised domain adaptation (UDA) methods often rely on strong assumptions about domain overlap that are frequently violated in high-dimensional problems like image classification. DALUPI addresses this limitation by leveraging privileged information (PI)---additional data available only during training---to relax restrictions on input variables and improve sample efficiency. The authors provide theoretical guarantees for consistent learning without requiring distributional overlap between input domains, which is typically needed for traditional UDA methods. To demonstrate the practical utility of DALUPI, the paper proposes learning algorithms for image classification using three types of privileged information: binary attribute vectors, single regions of interest, and multiple regions of interest. The authors evaluate their approach on four tasks, including celebrity photo classification, digit recognition, entity detection in natural images, and X-ray classification. Across these experiments, DALUPI demonstrates improved performance compared to baseline methods, especially when input domain overlap is violated and training sets are small. The results show that DALUPI can successfully adapt in scenarios where traditional UDA methods fail and exhibits increased sample efficiency compared to non-privileged learners. Strengths and Weaknesses: ## Strengths: - The paper provides a theoretical framework for unsupervised domain adaptation by learning using privileged information (DALUPI). This framework relaxes the strong assumptions typically required in UDA, particularly the need for input domain overlap. 
This is a significant advancement in the field, as it allows for adaptation in scenarios where traditional UDA methods might fail. - The authors conduct several experiments for image classification using three different types of privileged information, including on a real-world medical imaging application. - The experimental results show that DALUPI models often outperform baseline methods, especially in scenarios with limited training data. In some cases, DALUPI even outperforms oracle models trained on target labels, highlighting the method's strong sample efficiency. This is a crucial advantage in many real-world scenarios where labeled data is scarce or expensive to obtain. ## Weaknesses: - The theoretical guarantees of DALUPI rely on the assumption that the privileged information is ``sufficient'' for predicting labels. While this assumption is justified for some of the experimental tasks, it may not hold in many real-world scenarios. The paper could benefit from a more thorough discussion of when this assumption is likely to hold and how violations of it might impact performance. - While the paper compares DALUPI to some baseline UDA methods (DANN and MDD), it doesn't include comparisons to more recent state-of-the-art UDA techniques. This makes it harder to fully assess the relative advantages of DALUPI in the broader context of current UDA research. Requested Changes: - The authors could design experiments to test how sensitive DALUPI is to violations of the sufficiency assumption. This could involve artificially degrading the quality of the privileged information to see how it affects performance. Such experiments would help understand the robustness of DALUPI in scenarios where the privileged information is not perfectly sufficient for predicting labels. 
- Another thing to do would be to test DALUPI on more challenging cross-domain adaptation tasks, such as adapting between drastically different image styles (e.g., photos to sketches, or Yelp reviews to tweets). This would showcase the method's ability to handle more extreme domain shifts. - There is a whole line of work in NLP that calls this learning with feature feedback that you haven't discussed in related work. Several recent papers have shown how learning with feature feedback can help models have better performance vis-a-vis out of domain. Please include those in related work. It might also be beneficial to show your evaluation on those NLP tasks. [1, 2] [1] Kaushik, D., Setlur, A., Hovy, E. H., & Lipton, Z. C. Explaining the Efficacy of Counterfactually Augmented Data. In International Conference on Learning Representations. 2021. [2] Katakkar, A., Yoo, C. H., Wang, W., Lipton, Z. C., & Kaushik, D. Practical Benefits of Feature Feedback Under Distribution Shift. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 346-355). 2022. Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper presents the notion of unsupervised domain adaptation with privileged information (PI). This means to achieve adaptation of machine learning models trained for a given task over similar but not identical domains. We can think of this as having measurements taken by some medical imaging devices, which are analyzed by different healthcare experts for a given task of identifying a structure in the image, the possible structures being the same over all cases. The task stays the same, but the measurements and expert interpretation may vary. Over this domain adaptation, the paper is proposing an approach to make use of privileged information, that is, side information elements of the data that can be used to improve training of the models.
The paper is developing the idea, with some theoretical developments, a technical proposal, and experiments demonstrating the proper working of the approach. The main contribution of the paper is to develop the use of PI from the target domain, not just the source domain, as it has been proposed previously (i.e., LUPI). The developments and experiments support the proposal that using PI for the target domain is helpful in the context of domain adaptation. Strengths and Weaknesses: Strengths: - Solid theoretical arguments supporting the proposed approach - Broad set of experiments supporting the proposal. - Well-organized and well-written paper. Weaknesses: - The PI needs to be sufficient for achieving the end task, which is quite a strong assumption. The proposed two-stage algorithm is formulated that way, the second stage receiving only the W (privileged information), not the data X itself. Maybe the X itself can be included in the W, but that's not discussed. - The proposal is quite limited and narrow in terms of applicability: it makes sense for providing bounding boxes, but there are not many other obvious settings allowing it. The binary attributes for the celebrity photos seem more like a toy problem, and it is not clear at all that the binary attributes are sufficient for classification. The bounding box PI is relevant, but appears quite niche. - Moreover, the fact that the PI should be available from the target domain puts an extra burden for achieving some results, and obviously provides an advantage over the task achieved. For instance, providing bounding boxes over the objects of interest requires a level of intervention that is close to completing the classification task itself (even though it is argued otherwise in the text). Requested Changes: The limitations on the type and availability of PI for target domains should be further discussed.
In particular, when the PI is not sufficient for the task, but can be helpful to improve performance as additional information to the input data (X), to what extent can this be helpful? Also, a point that is not clear to me is how to deal with missing PI from a target domain. Maybe I missed it from the paper, but it seems that in the case of having only part of the target domain data with PI, the rest not having PI, we can still proceed and use the first stage of the model to estimate the pseudo-PI of the sample. Is this correct? I think that this can be better developed, as this is an important aspect. For instance, in Fig. 4, when we show results for n_PI(T) varying between 0 and 1, I guess that the results are reported over all the dataset T from the target domain, estimating PI from samples missing it. In general, there are few details on the specific algorithms used in the main part of the paper; some more details are provided in the appendix, but even there, there are still some dots to connect by the reader. I would suggest that the authors provide complete, more explicit explanations of how they implemented their model in practice for the experiments conducted, both in the main paper and the appendix (for the extra details). Broader Impact Concerns: The contribution is rather technical and doesn't raise any specific ethical issues. ================================================== Review 3: Summary: The paper considers the unsupervised domain adaptation (DA) problem with privileged information. This means that the learner has access to data triplets $(x,w,y)\sim S$ from the source data distribution and has to build a predictor $h(x)$ that will be applied to some different but related target distribution $x\sim T$ (actually, $(x,w,y)\sim T$ but $w,y$ are not observed during testing). The predictor is expected to perform well on $T$.
The authors of the paper claim that standard theoretically-justified approaches to DA which use only $(x,y)$ usually require non-realistic assumptions, e.g., on $S, T$, which do not generally hold in practice. The main message of the paper is that the usage of privileged information $w$ (which is, of course, not always available) can help to alleviate this issue. The authors provide a two-stage algorithm (train a predictor for privileged information $w=f(x)$ and a predictor for the label $y=g(w)$ on the training data from $S$; use $h(x)=g(f(x))$ to predict on $T$) to perform DA using the privileged information and analyze their approach from the empirical risk perspective. From the practical point of view, the authors do several computational experiments (outperforming some DA baselines in several tasks: celeba, digits, entity and X-ray classification) and also demonstrate a particular (though synthetic) experiment which evidently demonstrates the case when using the privileged information yields superior performance compared to the standard approach which does not use it. Since I am not a deep expert in the field of unsupervised domain adaptation, it is possible that in my review I missed some important aspect as I am not very familiar with deep aspects of existing research/baselines in the subfield. Strengths and Weaknesses: **Strength** - The overall proposed idea of exploiting additional privileged information for DA seems rather natural, reasonable and inspiring. - The overall clarity is mostly ok (especially the main message of the paper and its explanations). - The paper provides theoretical guarantees of the provided approach's performance. This is a positive contrast aspect distinguishing the work from many modern domain adaptation papers which are purely heuristical. - Illustrative evaluation on 3 problems with different types of privileged information is conducted (binary attributes, single region, multiple regions).
The method, in general, outperforms the baselines or performs comparably. **Weaknesses** - While the theoretical result (risk bound, Proposition 2) looks interesting, it is not very well explained. The authors should provide a deeper discussion of what each component means and how it affects the bound. I especially mean the density ratio $p(w)=T(w)/S(w)$, which seems to be the most unclear component here. At the same time, it is one of the most important, as it explains how far $T$ is from $S$. The phrase "we may use density estimation" is confusing. - Some assumptions of the paper which are needed to derive the theoretical results look too unrealistic. For example, I think that "Suppose that $W$ and $Y$ are deterministic in $X$ and $W$, respectively" is a very unrealistic assumption. Assumption 3 is also not very practical. Given that, the derived theoretical results are not truly inspiring, as they are mostly straightforward combinations of existing statistical learning theory results and decompositions. - I would say that the paper is not sufficiently mathematically rigorous. For example, in Proposition 1 there are sums over x, w, y while it should contain integrals over the respective data distributions (or expectations). This is somewhat strange (taking into account that sometimes the authors use the notation with expectations themselves). In equation (2) there are the same loss functions $L$ for learning privileged information and predicting labels, which seems to be not correct (in general, they are expected to be different, right?). - The synthetic experiment in Section 4.1 is a little bit hard to parse. It would be nice to have a table or something similar briefly summarizing how and where the covariate shift happens in the considered case (it is explained in the text but I do not completely understand it).
Requested Changes: Since I am not a deep expert in the field of domain adaptation, I cannot say for sure that the baselines considered in the paper are state-of-the-art and that the setups are truly ok. Therefore, my review is mostly about the logical and presentational aspects of the paper. I would like the authors to improve the presentation/explanation of their theoretical results as well as the mathematical rigor of their claims and proofs; see my comments in the "weaknesses" section above. Broader Impact Concerns: No concerns ================================================== Metareview: Recommendation: Accept as is Comment: The paper meets the bar both in terms of Claims and Evidence and in terms of Audience. I would encourage the authors to address Reviewer hqvf's comments on notation and mathematical rigor to the extent that they can for the camera-ready version. ==================================================
# GPS++: Reviving The Art Of Message Passing For Molecular Property Prediction

Dominic Masters1*, Josef Dean1*, Kerstin Klaser1*, Zhiyi Li1, Sam Maddrell-Mander1, Adam Sanders1, Hatem Helal1, Deniz Beker1, Andrew Fitzgibbon1, Shenyang Huang2, 3, 4, Ladislav Rampášek3, 5, Dominique Beaini2, 3, 5

1Graphcore 2Valence 3Mila - Québec AI Institute 4McGill University 5Université de Montréal

Reviewed on OpenReview: **https://openreview.net/forum?id=moVEUgJaHO**

## Abstract

We present GPS++, a hybrid Message Passing Neural Network / Graph Transformer model for molecular property prediction. Our model integrates a well-tuned local message passing component and biased global attention with other key ideas from prior literature to achieve state-of-the-art results on large-scale molecular dataset PCQM4Mv2. Through a thorough ablation study we highlight the impact of individual components and find that nearly all of the model's performance can be maintained without any use of global self-attention, showing that message passing is still a competitive approach for 3D molecular property prediction despite the recent dominance of graph transformers. We also find that our approach is significantly more accurate than prior art when 3D positional information is not available.

Table 1: Comparison of model size and accuracy on large-scale molecular property prediction dataset PCQM4Mv2.
| Model | # Params | Model Type | Validation MAE (meV) ↓ |
|---|---|---|---|
| GIN-virtual (Hu et al., 2021) | 6.7M | MPNN | 108.3 |
| GPS (Rampášek et al., 2022) | 19.4M | Hybrid | 85.8 |
| GEM-2 (Liu et al., 2022a) | 32.1M | Transformer | 79.3 |
| Global-ViSNet (Wang et al., 2022b) | 78.5M | Transformer | 78.4 |
| Transformer-M (Luo et al., 2022) | 69.0M | Transformer | 77.2 |
| GPS++ [MPNN only] | 40.0M | MPNN | 77.2 |
| GPS++ | 44.3M | Hybrid | 76.6 |

## 1 Introduction

Among many scientific areas, deep learning is having a transformative impact on molecular property prediction tasks for biological and chemical applications (Keith et al., 2021; Reiser et al., 2022). In particular, the aim is to replace or augment expensive and/or time-consuming experiments and first-principles methods with more efficient machine learning models. While there is a long history of machine learning methods in this field, two particular approaches have been dominant as of late for processing graph-structured molecular data: message passing neural networks (MPNNs) iteratively build graph representations of molecules by sending information explicitly along edges defined by bonds (Gilmer et al., 2017; Battaglia et al., 2018), while graph transformers treat the nodes (atoms) as all-to-all connected and employ global self-attention approaches (Vaswani et al., 2017), optionally building in local inductive biases through augmented attention (Ying et al., 2021a; Luo et al., 2022). While transformers have been extremely successful in other domains, their quadratic complexity with respect to the number of nodes is a significant obstacle for larger molecules. This makes MPNNs an attractive approach due to their linear scaling with graph size; however, issues like oversmoothing (Li et al., 2018), oversquashing, and underreaching (Alon & Yahav, 2021) have been found to limit their effectiveness.

![1_image_0.png](1_image_0.png)

Figure 1: Augmented General Powerful Scalable (GPS++) Graph Transformer overview. (left) GPS++ takes as input a featurised molecular graph. Chemical, positional, and structural node (atom) and edge (bond) features are detailed in section 4.2. Geometric 3D information is provided only optionally during training; e.g., inference on the PCQM4Mv2 test set needs to be done without their explicit knowledge. (middle) A stack of GPS++ modules that combine a custom local MPNN and a global biased self-attention mechanism (akin to Transformer-M) to learn expressive representations, see section 4.1. (right) Global graph pooling with final prediction head, and auxiliary denoising tasks, see section 5.2.

In this work we focus on the task of predicting the HOMO-LUMO energy gap, an important quantum chemistry property: the minimum energy needed to excite an electron in the molecular structure. This property is typically calculated using Density Functional Theory (DFT) (Kohn & Sham, 1965), the *de facto* method used for accurately predicting quantum phenomena across a range of molecular systems. Unfortunately, traditional DFT can be extremely computationally expensive, prohibiting the efficient exploration of chemical space (Dobson, 2004), with some approaches taking more than 8 hours per molecule per CPU (Axelrod & Gomez-Bombarelli, 2022). Within this context the motivation for replacing it with fast and accurate machine learning models is clear. While this task does aim to accelerate the development of alternatives to DFT, it also serves as a proxy for other molecular property prediction tasks. Therefore, it can potentially benefit a range of scientific applications in fields like computational chemistry, material sciences and drug discovery.
The PubChemQC project (Nakata & Shimazaki, 2017) is one of the largest widely available DFT databases, and from it is derived the PCQM4Mv2 dataset, released as a part of the Open Graph Benchmark Large Scale Challenge (OGB-LSC) (Hu et al., 2021), which has served as a popular testbed for development and benchmarking of novel graph neural networks (GNNs). The original OGB-LSC 2021 competition motivated a wide range of solutions using both MPNNs and transformers. However, following the success of the winning method Graphormer (Ying et al., 2021a), subsequent work has resulted in a large number of graph transformer methods for this task with comparatively little attention given to the message passing methods that have been successful in other graph-structured tasks. In this work we build on the work of Rampášek et al. (2022) that advocates for a hybrid approach, including both message passing and transformer components in their General, Powerful, Scalable (GPS) framework. Specifically, we build GPS++, which combines a large and expressive message-passing module with a biased self-attention layer to maximise the benefit of local inductive biases while still allowing for effective global communication. Furthermore, by integrating a grouped input masking method (Luo et al., 2022) to exploit available 3D positional information and carefully crafting a range of diverse input features we achieve the best-reported result on the PCQM4Mv2 validation data split of 76.6 meV mean absolute error (MAE). Next, we perform an extensive ablation study to understand the impact of different model components on the performance. Surprisingly, we find that even without a global self-attention mechanism (seen in graph transformer architectures), extremely competitive performance can be achieved. Therefore, we argue that MPNNs remain viable in the molecular property prediction task and hope the results presented in this work can spark a renewed interest in this area. 
We also observe that when solely focusing on 2D molecular data (without 3D conformer coordinates), our proposed GPS++ model significantly outperforms other such works. Our contributions can be summarised as follows:

- We show that our hybrid MPNN/Transformer model, GPS++, is a parameter-efficient and effective approach to molecular property prediction, achieving state-of-the-art MAE scores for PCQM4Mv2 even when compared to parametrically larger models.
- We find that even without self-attention, our model matches the performance of prior state-of-the-art transformers, highlighting that well-optimised MPNNs are still highly effective in this domain.
- We investigate how different model components affect the model performance, in particular highlighting the impact of the improved chemical feature choice, 3D positional features, structural encodings and architectural components.

Reproducibility: Source code to reproduce our results can be found at: https://github.com/graphcore/ogb-lsc-pcqm4mv2.

## 2 Related Work

Before the advent of deep learning for graph representation learning (Bronstein et al., 2021), molecular representation in chemoinformatics rested on feature engineering of descriptors or fingerprints (Rogers & Hahn, 2010; Keith et al., 2021). After learnable fingerprints (Duvenaud et al., 2015), the general framework of GNNs, often based on MPNNs, has been rapidly gaining adoption (Reiser et al., 2022). Not all approaches follow the MPNN model of restricting compute to the sparse connectivity graph of the molecule; for example, the continuous-filter convolutions in SchNet (Schütt et al., 2018) and the transformer-based Grover (Rong et al., 2020) employ dense all-to-all compute patterns.
GNN method development has also been facilitated by the availability of well motivated (suites of) benchmarking datasets such as QM9 (Ramakrishnan et al., 2014), MoleculeNet (Wu et al., 2018), Open Graph Benchmark (OGB) (Hu et al., 2020), OGB-LSC PCQM4Mv2 (Hu et al., 2021), Therapeutics Data Commons (Huang et al., 2022) for molecular property prediction, or MD17 (Chmiela et al., 2017), ANI-1 (Smith et al., 2017), Open Catalyst (Tran et al., 2022) for structural conformers or molecular force field predictions. Depending on the nature of the task, the geometric 3D structure of input molecules needs to be taken into account. This imposes an important constraint on the models, which need to be roto-translation invariant in case of global molecular property prediction and roto-translation equivariant in case of conformer or force field prediction, in addition to input permutation invariance and equivariance, respectively. For the latter case, specialised geometric GNN models have been proposed, such as SchNet (Schütt et al., 2018), DimeNet(++) (Gasteiger et al., 2020b;a), PaiNN (Schütt et al., 2021), NequIP (Batzner et al., 2022), or the transformer-based TorchMD-Net (Thölke & De Fabritiis, 2022b). In this work we are primarily motivated by the PCQM4Mv2 dataset that is a part of the large-scale graph ML challenge (Hu et al., 2021), which contains uniquely large numbers of graphs, and has the particular characteristic that the molecular 3D structure is only provided for the training portion of the dataset, but not at test time. This has motivated methods specifically equipped to handle such a scenario: Noisy Nodes denoising autoencoding (Godwin et al., 2022) and pretraining (Zaidi et al., 2022), GEM-2 (Liu et al., 2022a), ViSNet (Wang et al., 2022b).
Methods based on global self-attention (Vaswani et al., 2017) became particularly dominant after Graphormer (Ying et al., 2021a) won OGB-LSC 2021, which spurred development of transformer-based methods: SAN (Kreuzer et al., 2021), EGT (Hussain et al., 2022), GPS (Rampášek et al., 2022), TokenGT (Kim et al., 2022), or Transformer-M (Luo et al., 2022).

## 3 Preliminaries

Throughout the paper we use the following notation. Bold lowercase letters $\mathbf{v}$ are (row) vectors, bold uppercase letters $\mathbf{M}$ are matrices, with individual elements denoted by non-bold letters, i.e. $v_k$ or $M_{pq}$. Blackboard bold lowercase letters $\mathbb{v}$ are categorical (integer-valued) vectors. In general, we denote by $[\mathbf{v}_k]_{k \in \mathcal{K}}$ the vertical concatenation (stacking) of vectors $\mathbf{v}_k$. Vertical concatenation is also denoted by a semicolon, i.e. $[\mathbf{v}_1;\,\ldots;\,\mathbf{v}_J] = [\mathbf{v}_j]_{j=1}^{J} = [\mathbf{v}_j]$ for $j \in \{1, \ldots, J\}$. Horizontal concatenation, which typically means concatenation along the feature dimension, is denoted by a vertical bar, i.e. $[\mathbf{v}_1 \mid \mathbf{v}_2]$.

A molecule is represented as a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ for nodes $\mathcal{V}$ and edges $\mathcal{E}$. In this representation, each node $i \in \mathcal{V}$ is an atom in the molecule and each edge $(u, v) \in \mathcal{E}$ is a chemical bond between two atoms. The number of atoms in the molecule is denoted by $N = |\mathcal{V}|$ and the number of edges is $M = |\mathcal{E}|$. Each node and edge is associated with a list of categorical features $\mathbb{x}_i \in \mathbb{Z}^{D_{\text{atom}}}$ and $\mathbb{e}_{uv} \in \mathbb{Z}^{D_{\text{bond}}}$, respectively, for $D_{\text{atom}}$ atom features and $D_{\text{bond}}$ bond features. A further set of 3D atom positions $\mathbf{R} = [\mathbf{r}_1;\,\ldots;\,\mathbf{r}_N] \in \mathbb{R}^{N \times 3}$, extracted from original DFT calculations, is provided for training data, but crucially not for validation and test data.

Our algorithm operates on edge, node, and global *features*. Node features in layer $\ell$ are denoted by $\mathbf{x}_i^{\ell} \in \mathbb{R}^{d_{\text{node}}}$, and are concatenated into the $N \times d_{\text{node}}$ matrix $\mathbf{X}^{\ell} = [\mathbf{x}_1^{\ell};\,\ldots;\,\mathbf{x}_N^{\ell}]$. Edge features $\mathbf{e}_{uv}^{\ell} \in \mathbb{R}^{d_{\text{edge}}}$ are concatenated into the edge feature matrix $\mathbf{E}^{\ell} = [\mathbf{e}_{uv}^{\ell}]$ for $(u, v) \in \mathcal{E}$. Global features are defined per layer as $\mathbf{g}^{\ell} \in \mathbb{R}^{d_{\text{global}}}$.
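As a concrete illustration of this notation, a minimal numpy sketch (toy molecule with made-up sizes and random values; not the paper's code) builds the stacked node matrix, one feature row per directed edge, and a horizontal concatenation along the feature dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy molecule: N = 3 atoms, each chemical bond stored as two directed edges.
N, d_node, d_edge = 3, 4, 2
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]

X = np.stack([rng.normal(size=d_node) for _ in range(N)])  # X = [x_1; ...; x_N]
E = np.stack([rng.normal(size=d_edge) for _ in edges])     # one row per directed edge
R = rng.normal(size=(N, 3))                                # 3D positions (training only)

# Horizontal concatenation [v_1 | v_2] along the feature dimension:
uv = np.concatenate([X[0], X[1]])
assert X.shape == (N, d_node) and uv.shape == (2 * d_node,)
```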
We also define an *attention bias* matrix $\mathbf{B} \in \mathbb{R}^{N \times N}$, computed from the input graph topology and 3D atom positions, described later in section 4.2.

## 4 GPS++

Our GPS++ model closely follows the GPS framework set out by Rampášek et al. (2022). This work presents a flexible model structure for building hybrid MPNN/Transformer models for graph-structured input data. We build a specific implementation of GPS that focuses on maximising the benefit of the inductive biases of the graph structure and 3D positional information. We do this by building a large and expressive MPNN component and biasing our attention component with structural and positional information. We also allow global information to be propagated through two mechanisms, namely the global attention and by using a global feature in the MPNN. As displayed in Figure 1, the main **GPS++** block (section 4.1) combines the benefits of both message passing and attention layers by running them in parallel before combining them with a simple summation and MLP; this layer is repeated 16 times. This main trunk of processing is preceded by an **Encoder** function responsible for encoding the input information into the latent space (section 4.2), and is followed by a simple **Decoder** function (section 5.2). Feature engineering is also used to improve the representation of the atoms/bonds, to provide rich positional and structural features that increase expressivity, and to bias the attention weights with a distance embedding.
## 4.1 GPS++ Block

The **GPS++** block is defined as follows for layers $\ell > 0$ (see section 4.2 for the definitions of $\mathbf{X}^0$, $\mathbf{E}^0$, $\mathbf{g}^0$, $\mathbf{B}$):

$$\mathbf{X}^{\ell+1},\,\mathbf{E}^{\ell+1},\,\mathbf{g}^{\ell+1} = \mathsf{GPS\text{++}}\left(\mathbf{X}^{\ell}, \mathbf{E}^{\ell}, \mathbf{g}^{\ell}, \mathbf{B}\right) \tag{1}$$

computed as

$$\mathbf{Y}^{\ell},\ \mathbf{E}^{\ell+1},\ \mathbf{g}^{\ell+1} = \mathsf{MPNN}\left(\mathbf{X}^{\ell}, \mathbf{E}^{\ell}, \mathbf{g}^{\ell}\right) \tag{2}$$
$$\mathbf{Z}^{\ell} = \mathsf{BiasedAttn}\left(\mathbf{X}^{\ell}, \mathbf{B}\right) \tag{3}$$
$$\forall i: \quad \mathbf{x}_i^{\ell+1} = \mathsf{FFN}\left(\mathbf{y}_i^{\ell} + \mathbf{z}_i^{\ell}\right) \tag{4}$$

![4_image_0.png](4_image_0.png)

Figure 2: The main **GPS++** processing block (left) is composed of a local message passing **MPNN** module and a biased global attention **BiasedAttn** module. (right) A diagram of the used **MPNN** block. The **gather**, **scatter** and **sum** operations highlight changes in tensor shapes and are defined in equations 6 to 11.

Our **MPNN** module is a variation on the neural message passing module with edge and global features (Gilmer et al., 2017; Battaglia et al., 2018; Bronstein et al., 2021). We choose this form to maximise the expressivity of the model (Veličković, 2023) with the expectation that overfitting will be less of an issue with PCQM4Mv2, compared to other molecular datasets, due to its size.
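For Equation (3), a minimal single-head sketch of biased self-attention in numpy (random weights, no multi-head, dropout or layer norm; the paper's exact formulation is in Appendix A.1): the bias $\mathbf{B}$ is simply added to the attention logits before the softmax.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 16
X = rng.normal(size=(N, d))
B = rng.normal(size=(N, N))  # structural/3D attention bias (see section 4.2)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# Single-head biased self-attention: Z = softmax(QK^T / sqrt(d) + B) V.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax(Q @ K.T / np.sqrt(d) + B)  # bias shifts attention towards close/bonded atoms
Z = A @ V
assert Z.shape == (N, d)
```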
The essential components (excluding dropout and layer norm) of the **MPNN** module are defined as follows (see Figure 2 for a graphical representation, and Appendix A.1 for the exact formulation):

$$\mathbf{Y}^{\ell},\ \mathbf{E}^{\ell+1},\ \mathbf{g}^{\ell+1} = \mathsf{MPNN}\left(\mathbf{X}^{\ell}, \mathbf{E}^{\ell}, \mathbf{g}^{\ell}\right) \tag{5}$$

computed as

$$\forall (u,v): \quad \bar{\mathbf{e}}_{uv}^{\ell} = \mathrm{MLP}_{\text{edge}}\left(\left[\mathbf{x}_u^{\ell} \mid \mathbf{x}_v^{\ell} \mid \mathbf{e}_{uv}^{\ell} \mid \mathbf{g}^{\ell}\right]\right) \tag{6}$$

$$\forall i: \quad \bar{\mathbf{x}}_i^{\ell} = \mathrm{MLP}_{\text{node}}\Bigg(\Bigg[\mathbf{x}_i^{\ell} \;\Bigg|\; \underbrace{\sum_{(u,i)\in\mathcal{E}} \bar{\mathbf{e}}_{ui}^{\ell}}_{\text{sender messages}} \;\Bigg|\; \underbrace{\sum_{(i,v)\in\mathcal{E}} \bar{\mathbf{e}}_{iv}^{\ell}}_{\text{receiver messages}} \;\Bigg|\; \underbrace{\sum_{(u,i)\in\mathcal{E}} \mathbf{x}_u^{\ell}}_{\text{adjacent nodes}} \;\Bigg|\; \mathbf{g}^{\ell}\Bigg]\Bigg) \tag{7}$$

$$\bar{\mathbf{g}}^{\ell} = \mathrm{MLP}_{\text{global}}\left(\left[\mathbf{g}^{\ell} \;\Bigg|\; \sum_{j\in\mathcal{V}} \bar{\mathbf{x}}_j^{\ell} \;\Bigg|\; \sum_{(u,v)\in\mathcal{E}} \bar{\mathbf{e}}_{uv}^{\ell}\right]\right) \tag{8}$$

$$\forall i: \quad \mathbf{y}_i^{\ell} = \bar{\mathbf{x}}_i^{\ell} + \mathbf{x}_i^{\ell} \tag{9}$$
$$\forall (u,v): \quad \mathbf{e}_{uv}^{\ell+1} = \bar{\mathbf{e}}_{uv}^{\ell} + \mathbf{e}_{uv}^{\ell} \tag{10}$$
$$\mathbf{g}^{\ell+1} = \bar{\mathbf{g}}^{\ell} + \mathbf{g}^{\ell} \tag{11}$$

The three networks $\mathrm{MLP}_{\eta}$ for $\eta \in \{\text{node}, \text{edge}, \text{global}\}$ each have two layers with **GELU** activation functions and an expanded intermediate hidden dimension of $4d_{\eta}$. This message passing block is principally the most similar to Battaglia et al. (2018). However, we draw the reader's attention to a few areas that differ from common approaches. Firstly, we choose to decouple the three latent sizes, setting $d_{\text{node}} = 256$, $d_{\text{edge}} = 128$ and $d_{\text{global}} = 64$, because we found that increasing $d_{\text{edge}}$ and $d_{\text{global}}$ does not improve task performance. Secondly (and relatedly), we aggregate over the adjacent node representations as well as the more complex edge feature messages in Equation 7; this is similar to running a simple Graph Convolutional Network (GCN) in parallel to the more complex message calculation. We do this to allow the higher-dimensional node features $\mathbf{x}_u^{\ell}$ to bypass the lower-dimensional messages $\bar{\mathbf{e}}_{uv}^{\ell}$, preventing potential compression.
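To make the dataflow of Equations (6)-(11) concrete, here is a minimal numpy sketch on a toy graph (random MLP weights, toy dimensions; the paper's module additionally uses dropout and layer norm, see Appendix A.1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_node, d_edge, d_global = 4, 8, 4, 2
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
X = rng.normal(size=(N, d_node))
E = rng.normal(size=(len(edges), d_edge))
g = rng.normal(size=d_global)

def mlp(d_in, d_out):
    # Two-layer MLP with GELU and a 4x expanded hidden dim (random illustrative weights).
    W1 = rng.normal(size=(d_in, 4 * d_out)) * 0.1
    W2 = rng.normal(size=(4 * d_out, d_out)) * 0.1
    gelu = lambda h: 0.5 * h * (1 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return lambda h: gelu(h @ W1) @ W2

mlp_edge = mlp(2 * d_node + d_edge + d_global, d_edge)
mlp_node = mlp(2 * d_node + 2 * d_edge + d_global, d_node)
mlp_global = mlp(d_global + d_node + d_edge, d_global)

# Eq. (6): per-edge messages from sender, receiver, edge, and global features.
E_bar = np.stack([mlp_edge(np.concatenate([X[u], X[v], E[k], g]))
                  for k, (u, v) in enumerate(edges)])

# Eq. (7): per-node update from sender/receiver message sums and the adjacent-node sum.
X_bar = np.zeros_like(X)
for i in range(N):
    snd = sum((E_bar[k] for k, (u, v) in enumerate(edges) if v == i), np.zeros(d_edge))
    rcv = sum((E_bar[k] for k, (u, v) in enumerate(edges) if u == i), np.zeros(d_edge))
    adj = sum((X[u] for (u, v) in edges if v == i), np.zeros(d_node))
    X_bar[i] = mlp_node(np.concatenate([X[i], snd, rcv, adj, g]))

# Eq. (8): global update; Eqs. (9)-(11): residual connections.
g_bar = mlp_global(np.concatenate([g, X_bar.sum(axis=0), E_bar.sum(axis=0)]))
Y, E_next, g_next = X_bar + X, E_bar + E, g_bar + g
```

Note how the sender and receiver message sums are kept as separate concatenated terms rather than summed, mirroring the directionality discussion in the text.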
Finally, we also aggregate not only the *sender messages* from edges directed towards the node, but also the *receiver messages* computed on the edges directed away from the node (labelled in Equation 7). By concatenating these two terms rather than summing them we maintain directionality data in each edge, encouraging message information to flow bidirectionally but not requiring the $\mathrm{MLP}_{\text{edge}}$ to learn directional invariance.

Our **BiasedAttn** module follows the form of the self-attention layer of Luo et al. (2022), where a standard self-attention block (Vaswani et al., 2017) is biased by a structural prior derived from the input data. In our work, the bias $\mathbf{B}$ is made up of two components: a shortest path distance (SPD) embedding and a 3D distance bias derived from the molecular conformations, as described in section 4.2. The **FFN** module takes a similar form to $\mathrm{MLP}_{\text{node}}$, though with additional dropout terms (see Appendix A.1 for full details).

## 4.2 Input Feature Engineering

As described in section 3, the dataset samples include the graph structure $\mathcal{G}$, a set of categorical features for the atoms and bonds $\mathbb{x}_i$, $\mathbb{e}_{uv}$, and the 3D node positions $\mathbf{r}_i$. It has been shown that there are many benefits to augmenting the input data with additional structural, positional, and chemical information (Rampášek et al., 2022; Wang et al., 2022a; Dwivedi et al., 2022). Therefore, we combine several feature sources when computing the input to the first GPS++ layer. There are four feature tensors to initialise: the node state, edge state, whole-graph state, and attention bias.
$$\mathbf{X}^{all} = \left[\mathbf{X}^{\text{atom}} \mid \mathbf{X}^{\text{LapVec}} \mid \mathbf{X}^{\text{LapVal}} \mid \mathbf{X}^{\text{RW}} \mid \mathbf{X}^{\text{Cent}} \mid \mathbf{X}^{\text{3D}}\right] \tag{12}$$
$$\mathbf{X}^{0} = \mathsf{Dense}(\mathbf{X}^{all}) \in \mathbb{R}^{N \times d_{\text{node}}} \tag{13}$$
$$\mathbf{E}^{0} = \mathsf{Dense}\left(\left[\mathbf{E}^{\text{bond}} \mid \mathbf{E}^{\text{3D}}\right]\right) \in \mathbb{R}^{M \times d_{\text{edge}}} \tag{14}$$
$$\mathbf{g}^{0} = \mathsf{Embed}_{d_{\text{global}}}(0) \in \mathbb{R}^{d_{\text{global}}} \tag{15}$$
$$\mathbf{B} = \mathbf{B}^{\text{SPD}} + \mathbf{B}^{\text{3D}} \in \mathbb{R}^{N \times N} \tag{16}$$

Here the node features $\mathbf{X}^0$ are built from categorical atom features, graph Laplacian positional encodings (Kreuzer et al., 2021; Dwivedi & Bresson, 2020), random walk structural encodings (Dwivedi et al., 2022), local graph centrality encodings (Ying et al., 2021a; Shi et al., 2022) and a 3D centrality encoding (Luo et al., 2022). The edge features $\mathbf{E}^0$ are derived from categorical bond features and bond lengths, and the attention bias uses SPD and 3D distances. The global features are initialised with a learned constant embedding in latent space. These input features are further described in Appendix A.2.

Chemical Features The categorical features $\mathbb{x}$, $\mathbb{e}$ encapsulate the known chemical properties of the atoms and bonds, for example, the atomic number, the bond type or the number of attached hydrogens (which are not explicitly modelled as nodes), as well as graphical properties like node degree or whether an atom or bond is within a ring. The set that is used is not determined by the dataset, and augmenting or modifying the chemical input features is a common strategy for improving results. By default, the PCQM4Mv2 dataset uses a set of 9 atom and 3 bond features (described in Table A.1).
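A schematic numpy sketch of the input encoding in Equations (12)-(16); the individual per-node feature widths and the dense weights here are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 8
d_node, d_edge, d_global = 256, 128, 64

# Hypothetical per-node feature blocks: atom, LapVec, LapVal, RW, Cent, 3D
# (widths chosen for illustration only).
feats = [rng.normal(size=(N, d)) for d in (32, 8, 8, 16, 4, 16)]
X_all = np.concatenate(feats, axis=1)                          # Eq. (12): horizontal concat

W_x = rng.normal(size=(X_all.shape[1], d_node))
X0 = X_all @ W_x                                               # Eq. (13): Dense projection
E0 = rng.normal(size=(M, 32)) @ rng.normal(size=(32, d_edge))  # Eq. (14): bond + 3D edge feats
g0 = rng.normal(size=d_global)                                 # Eq. (15): learned embedding
B = rng.normal(size=(N, N)) + rng.normal(size=(N, N))          # Eq. (16): B_SPD + B_3D

assert X0.shape == (N, d_node) and E0.shape == (M, d_edge) and B.shape == (N, N)
```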
There is, however, a wide range of chemical features that can be extracted from the periodic table or using tools like RDKit. Ying et al. (2021b) have shown that extracting additional atom level properties can be beneficial when trying to predict the HOMO-LUMO energy gap, defining a total of 28 atom and 5 bond features. We explore the impact of a number of additional node and edge features and sweep a wide range of possible combinations. In particular, we expand on the set defined by Ying et al. (2021b) with three additional atom features derived from the periodic table: the atom group (column), period (row) and element type (often shown by colour). These are intended to generalise to unseen atom types better than a single integer atomic number feature, and we found them to be particularly beneficial (shown later in ablation Table 5). We also found that in many cases *removing* features was beneficial; for example, we found that generally our models performed better when excluding information about the chiral tag and replacing it with chiral centers. We further observe that our best feature combinations all consist of only 8 node features, where the majority of the input features stay consistent between the sets. We show the three best feature sets found in Table A.1 and use *Set 1* for all experiments unless otherwise stated (i.e., during ablations).

Comparison to GPS The GPS++ model is a specific instantiation of the GPS framework, and therefore has several similarities with the reference GPS model implemented by Rampášek et al. (2022). The high level architecture of each layer remains the same, using an MPNN module and self-attention module in parallel before combining their node updates with an MLP. However, the design of these two modules is quite different in our implementation. GPS++ uses a bespoke MPNN instead of the GatedGCN (Bresson & Laurent, 2018) employed by GPS. Our MPNN is most similar to Battaglia et al.
(2018) but with some key differences described in section 4.1 such as GCN-like adjacent node aggregation and reverse message passing. For the self-attention module, where GPS uses unbiased attention like Vaswani et al. (2017), GPS++ implements learned bias terms BSPD and B3D to incorporate the graph structure. Considering the input features of each model, GPS++ and GPS share the use of the Graph Laplacian and Random walk positional/structural encodings XLapVec, XLapVal and XRW, whilst GPS++ additionally uses node centrality encoding XCent and the spatial node and edge features X3D and E3D. Moreover, the atomic features Xatom in GPS++ are a refinement of the defaults used by GPS, including the addition of the novel atomic Group, Period and Element Type features. Finally, while GPS trains by minimising the error on only the HOMO-LUMO prediction task, GPS++ utilises the following extra regularisation techniques in order to effectively scale parameters on finite data: a node/edge feature denoising task (Godwin et al., 2022), a 3D spatial denoising task (Luo et al., 2022), stochastic depth (Huang et al., 2016) and pervasive element-wise dropout. The additional training tasks are described further in section 5. ## 5 Experimental Setup 5.1 Dataset The PCQM4Mv2 dataset (Hu et al., 2021) consists of 3.7M molecules defined by their SMILES strings. Each molecule has on average 14 atoms and 15 chemical bonds. However, as the bonds are undirected in nature and graph neural networks act on directed edges, two bidirectional edges are used to represent each chemical bond. The 3.7M molecules are separated into standardised sets, namely into **training** (90%), **validation** (2%), test-dev (4%) and **test-challenge** (4%) sets using a scaffold split where the HOMO-LUMO gap targets are only publicly available for the **training** and **validation** splits. 
PCQM4Mv2 also provides a conformation for each molecule in the training split, i.e., a position in 3D space for each atom such that each molecular graph is in a relaxed low-energy state. Crucially, the validation and test sets have no such 3D information provided. ## 5.2 Model Training Training Configuration Our model training setup uses the Adam optimiser (Kingma & Ba, 2015) with a gradient clipping value of 5, a peak learning rate of 4e-4 and the model is trained for a total of 450 epochs. We used a learning rate warmup period of 10 epochs followed by a linear decay schedule. Decoder and Loss The final model prediction is formed by global sum-pooling of all node representations and then passing it through a 2-layer MLP. The regression loss is the mean absolute error (L1 loss) between a scalar prediction and the ground truth HOMO-LUMO gap value. Noisy Nodes/Edges Noisy nodes (Godwin et al., 2022; Zaidi et al., 2022) has previously been shown to be beneficial for molecular GNNs including on the PCQM4M dataset. The method adds noise to the input data then tries to reconstruct the uncorrupted data in an auxiliary task. Its benefits are expected to be two-fold: it adds regularisation by inducing some noise on the input, but also combats oversmoothing by forcing the node-level information to remain discriminative throughout the model. This has been shown to be particularly beneficial when training deep GNNs (Godwin et al., 2022). We follow the method of Godwin et al. (2022) that applies noise to the categorical node features by randomly choosing a different category with probability pcorrupt, and we further extend this to the categorical edge features. A simple categorical cross-entropy loss is then used to reconstruct the uncorrupted features at the output. We set pcorrupt = 0.01 and weight the cross-entropy losses such that they have a ratio 1:1.2:1.2 for losses HOMO-LUMO:NoisyNodes:**NoisyEdges**. 
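A minimal numpy sketch of the categorical corruption step (assuming a uniform replacement distribution over the other classes; the reconstruction itself is a standard cross-entropy head on the uncorrupted targets):

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(cats, num_classes, p_corrupt=0.01, rng=rng):
    """Flip each categorical feature to a uniformly drawn *different* class with prob p_corrupt."""
    cats = np.asarray(cats)
    mask = rng.random(cats.shape) < p_corrupt
    # Offset in [1, num_classes - 1] guarantees the new class differs from the old one.
    offset = rng.integers(1, num_classes, size=cats.shape)
    return np.where(mask, (cats + offset) % num_classes, cats), mask

x = rng.integers(0, 10, size=(1000, 9))      # e.g. 9 categorical atom features per node
x_noisy, mask = corrupt(x, num_classes=10, p_corrupt=0.01)
assert (x_noisy[~mask] == x[~mask]).all()    # untouched entries unchanged
assert (x_noisy[mask] != x[mask]).all()      # corrupted entries always differ
```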
The impact of Noisy Nodes/Edges can be seen in Table 4 of the ablation study.

Grouped Input Masking The 3D positional features $\mathbf{R}$ are only defined for the training data. We must therefore make use of them in training without requiring them for validation/test. We found that the method proposed by Luo et al. (2022) achieved the most favourable results, so we adopt a variation hereon referred to as *grouped input masking*. Atom distances calculated from 3D positions are embedded into vector space $\mathbb{R}^K$ via $K = 128$ Gaussian kernel functions. We then process these distance embeddings in three ways to produce attention biases ($\mathbf{B}^{\text{3D}}$), node features ($\mathbf{X}^{\text{3D}}$) and edge features ($\mathbf{E}^{\text{3D}}$) (exact formulations can be found in Appendix A.2). This method stochastically masks out any features derived from the 3D positional features $\mathbf{R}$ to build robustness to their absence. Specifically, this is done by defining two input sets to be masked, $\mathcal{X}^{\text{Spatial}} = \{\mathbf{X}^{\text{3D}}, \mathbf{E}^{\text{3D}}, \mathbf{B}^{\text{3D}}\}$ and $\mathcal{X}^{\text{Topological}} = \{\mathbf{B}^{\text{SPD}}\}$, and three potential masking groups: 1. mask $\mathcal{X}^{\text{Spatial}}$, 2. mask $\mathcal{X}^{\text{Topological}}$, and 3. no masking. These masking groups are then sampled randomly throughout training with ratio 1:3:1. If 3D positions are not defined, e.g. in validation/test, masking group 1 is always used.

3D Denoising Alongside *grouped input masking*, Luo et al. (2022) also add noise to the 3D atom positions $\mathbf{R}$ during training before computing any derivative features (e.g., 3D Bias, 3D Centrality), and predict the atom-wise noise as an auxiliary self-supervised training task. We closely follow their implementation for GPS++, adding an SE(3)-equivariant self-attention layer as a noise prediction head, which encourages the model to preserve 3D information until the final layer as well as further develop spatial reasoning. See Appendix A.2 for full details.
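A minimal numpy sketch of the distance embedding and masking-group sampling (the kernel centres, width, and distance range here are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel_embed(dist, K=128, d_max=10.0, sigma=0.5):
    """Embed a scalar distance into R^K via K Gaussian kernels (centres/width assumed)."""
    centres = np.linspace(0.0, d_max, K)
    return np.exp(-((dist - centres) ** 2) / (2 * sigma**2))

# Pairwise atom distances from 3D positions R, then a kernel embedding per pair.
R = rng.normal(size=(6, 3))
D = np.linalg.norm(R[:, None] - R[None, :], axis=-1)
emb = gaussian_kernel_embed(D[..., None])  # shape (N, N, K)
assert emb.shape == (6, 6, 128)

# Masking groups sampled with ratio 1:3:1 during training; group 1 whenever 3D is absent.
groups = ["mask_spatial", "mask_topological", "no_mask"]
choice = rng.choice(groups, p=[1 / 5, 3 / 5, 1 / 5])
assert choice in groups
```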
## 6 Results

Model Accuracy In Table 2, we compare the single model performance of GPS++ with results from the literature. In particular, we pay close attention to the Transformer-M (Luo et al., 2022) model as it has the best results for a transformer, but also because we use their input masking and denoising method for incorporating 3D features; this allows us to gain some tangible insights into the value of hybrid/MPNN approaches vs. transformers. Comparing the best GPS++ result with prior work, we set a new state-of-the-art MAE score on the PCQM4Mv2 validation set, outperforming not only the parametrically comparable Transformer-M Medium but also the much larger Transformer-M Large. While it would be tempting to assume that the Multi-Headed Self Attention (MHSA) component of our model is the most significant due to the prevalence of transformers among the other top results, we find that we can maintain competitive accuracy with no attention at all, falling back to a pure MPNN, passing messages only between bonded atoms (nodes). Furthermore, we show that this MPNN model is more accurate in the absence of 3D features, even beating the hybrid GPS++ in this setting. We believe this shows a strong reliance of the transformer's global attention mechanism on 3D positional information to learn atom relationships and, in contrast, the power of encoding spatial priors into the model using message passing along molecular bonds in the MPNN. Further, by halving the number of GPS++ layers from 16 to 8 to reach parametric parity, Table 2 shows that the MPNN-only GPS++ model compares favourably to the original GPS implementation. We investigate the impact of other components of the model more in section 7.

Table 2: Single model performance of GPS++ compared with results from the literature on the PCQM4Mv2 validation set. GEM-2 and Global-ViSNet are concurrent work.

| Model | # Param. | Model Type | Valid MAE (meV) ↓ |
|---|---|---|---|
| *without 3D Positional Information* | | | |
| GCN-virtual (Hu et al., 2021) | 4.9M | MPNN | 115.3 |
| GIN-virtual (Hu et al., 2021) | 6.7M | MPNN | 108.3 |
| GRPE (Park et al., 2022) | 46.2M | Transformer | 89.0 |
| Transformer-M [Medium, No 3D Positions] (Luo et al., 2022) | 47.1M | Transformer | 87.8 |
| EGT (Hussain et al., 2022) | 89.3M | Transformer | 86.9 |
| Graphormer (Shi et al., 2022) | 48.3M | Transformer | 86.4 |
| GPS (Rampášek et al., 2022) | 19.4M | Hybrid | 85.8 |
| GPS++ [8 Layers, No 3D Positions] | 22.4M | Hybrid | 83.6 |
| GPS++ [No 3D Positions] | 44.2M | Hybrid | 82.6 |
| GPS++ [8 Layers, MPNN only, No 3D Positions] | 20.3M | MPNN | 82.2 |
| GPS++ [MPNN only, No 3D Positions] | 40.0M | MPNN | 81.8 |
| *with 3D Positional Information* | | | |
| GEM-2 (Liu et al., 2022a) | 32.1M | Transformer | 79.3 |
| GPS++ [8 Layers, MPNN only] | 22.5M | MPNN | 79.3 |
| Transformer-M [Medium] (Luo et al., 2022) | 47.1M | Transformer | 78.7 |
| GPS++ [8 Layers] | 22.7M | Hybrid | 78.6 |
| Global-ViSNet (Wang et al., 2022b) | 78.5M | Transformer | 78.4 |
| Transformer-M [Large] (Luo et al., 2022) | 69.0M | Transformer | 77.2 |
| GPS++ [MPNN only] | 40.3M | MPNN | 77.2 |
| GPS++ | 44.5M | Hybrid | 76.6 |

Model Throughput Molecules in PCQM4Mv2 have an average of 14.3 nodes (atoms) per graph, and our full GPS++ model trains at 17,500 graphs per second on 16 IPUs. This means each epoch completes in 3 minutes and a full 450 epoch training run takes less than 24 hours. The MPNN-only version of GPS++ (i.e. disabling the MHSA module) has almost double the throughput at 33,000 graphs per second, partially due to the reduced compute costs but also as a result of increasing the micro batch size thanks to reduced memory pressure. Figure 3 compares the models for a range of training epochs, and shows the MPNN-only version can outperform prior results from GEM-2 and Global-ViSNet with only 5-6 hours of training, and match the prior SOTA Transformer-M after 13 hours.

![8_image_0.png](8_image_0.png)
High throughput allows for rapid iteration and extensive ablation studies (see section 7). Full details of the hardware used and acceleration methods employed can be found in Appendix B.

Figure 3: Training time versus validation error for the full hybrid GPS++ model versus the MPNN-only model. Comparable training times for GEM-2, Global-ViSNet and Transformer-M were unavailable at the time of writing.

Fine-tuning Accuracy In order to test GPS++ in the presence of 3D positional data during test time and to investigate the generalisability of the model, we fine-tune GPS++ on 8 different tasks from the quantum chemistry benchmark QM9 (Ruddigkeit et al., 2012; Ramakrishnan et al., 2014). The QM9 dataset comprises 134k small organic molecules in equilibrium state which contain up to 9 heavy atoms (C, O, N, F) and, unlike in the PCQ dataset, 3D positional information is available at test time. Though the benchmark provides 12 labels for each molecule, we found that 4 very closely related energy labels $U_0$, $U$, $H$ and $G$ were particularly sensitive to hyperparameters, random seeds and label normalisation choices, so we have omitted these labels to reduce the time and compute costs of the generalisation study. QM9 does not provide a standardised dataset split; we therefore follow several previous works (Luo et al., 2022; Thölke & De Fabritiis, 2022a) and randomly select 10,000 molecules for validation and 10,831 for testing; all remaining molecules are used during training. Due to the small size of this dataset and the high variance in test accuracy observed for different train/test splits, we fine-tune 3 baseline GPS++ checkpoints on 5 random split seeds for each task and report both the mean and standard deviation across all 15 runs. We disable grouped input masking during fine-tuning, to maximise the benefit of QM9's always-present 3D features.
For each label we performed a hyperparameter sweep, settling on learning rates in the range [0.0004, 0.001], epochs in the range [1000, 2000], 3D denoising loss weighting in the range [0.00001, 0.75], and in some cases removing some dropout operations. In particular, we found that the HOMO, LUMO and Gap labels benefited from a much higher 3D denoising loss weight than pretraining (0.75 vs 0.1). By contrast, for the label R² we found the best results by setting the 3D denoising weight to 0.00001 and disabling dropout in the node MLP as well as stochastic depth, suggesting the signal-to-noise ratio in this label is much lower. The results are shown in Table 3. Despite differences in the types of molecules, the DFT configurations used to generate the equilibrium states and the availability of 3D positions at test time, GPS++ generalises very well to this new dataset for tasks like HOMO, LUMO and Gap, which are closely related to the PCQM4Mv2 pretraining task. The performance of GPS++ on less closely related labels generally falls behind the state of the art, the weakest being **ZPVE** and R², though in each case the results are well within the distribution of reasonable results from prior work. This bias of the model towards a particular kind of task appears to be the norm throughout the QM9 results; each model that sets the state of the art in one label also exhibits particularly weak performance in one or more other labels. Beyond its current strengths, it may be possible to improve GPS++'s weaker scores by implementing task-specific prediction heads. For example, Thölke & De Fabritiis (2022a) add a rotationally-equivariant output layer just when predicting the Electronic Spatial Extent (R²), as this is a highly spatial task.

Table 3: QM9 fine-tuning results. Test MAE and STD reported over 5 test split seeds and 3 pre-trained GPS++ checkpoints (15 runs total). The **first** and **second** best results for each label are highlighted.
| Model | Pretrained | HOMO (meV) | LUMO (meV) | Gap (meV) | Cv (cal/mol K) | µ (D) | ZPVE (meV) | R² (a₀²) | α (a₀³) |
|---|---|---|---|---|---|---|---|---|---|
| Schnet (Schütt et al., 2018) | · | 41 | 34 | 63 | 0.033 | 0.033 | 1.70 | 0.073 | 0.24 |
| MGCN (Lu et al., 2019) | · | 42 | 57 | 64 | 0.038 | 0.056 | 1.12 | 0.11 | 0.030 |
| LieConv (Finzi et al., 2020) | · | 30 | 25 | 49 | 0.038 | 0.032 | 2.28 | 0.80 | 0.084 |
| SE(3)-Transformer (Fuchs et al., 2020) | · | 35 | 33 | 53 | 0.054 | 0.051 | - | - | 0.14 |
| DimeNet++ (Gasteiger et al., 2020a) | · | 24.6 | 19.5 | 32.6 | 0.023 | 0.030 | 1.21 | 0.33 | 0.044 |
| GEM (Fang et al., 2021) | ✓ | 33.8 | 27.7 | 52.1 | 0.035 | 0.034 | 1.73 | 0.089 | 0.081 |
| PaiNN (Schütt et al., 2021) | · | 27.6 | 20.4 | 45.7 | 0.024 | 0.012 | 1.28 | 0.066 | 0.045 |
| LieTF (Hutchinson et al., 2021) | · | 33 | 27 | 51 | 0.035 | 0.041 | 2.10 | 0.45 | 0.082 |
| TorchMD-Net (Thölke & De Fabritiis, 2022b) | · | 20.3 | 17.5 | 36.1 | 0.026 | 0.011 | 1.84 | 0.033 | 0.059 |
| EGNN (Satorras et al., 2021) | · | 29 | 25 | 48 | 0.031 | 0.029 | 1.55 | 0.11 | 0.071 |
| SphereNet (Liu et al., 2022b) | · | 22.8 | 18.9 | 31.1 | 0.022 | 0.026 | 1.12 | 0.29 | 0.046 |
| SEGNN (Brandstetter et al., 2022) | · | 24 | 21 | 42 | 0.031 | 0.023 | 1.62 | 0.66 | 0.060 |
| EQGAT (Le et al., 2022) | · | 20 | 16 | 32 | 0.024 | 0.011 | 2.00 | 0.38 | 0.053 |
| 3D Infomax (Stärk et al., 2022) | ✓ | 29.8 | 25.7 | 48.8 | 0.033 | 0.034 | 1.67 | 0.12 | 0.075 |
| 3D-MGP (Jiao et al., 2022) | ✓ | 21.3 | 18.2 | 37.1 | 0.026 | 0.020 | 1.38 | 0.092 | 0.057 |
| NoisyNode (Godwin et al., 2022) | · | 20.4 | 18.6 | 28.6 | 0.025 | 0.025 | 1.16 | 0.70 | 0.052 |
| Transformer-M (Luo et al., 2022) | ✓ | 17.5 | 16.2 | 27.4 | 0.022 | 0.037 | 1.18 | 0.075 | 0.041 |
| GNS-TAT+NN (Zaidi et al., 2022) | ✓ | 14.9 | 14.7 | 22.0 | 0.020 | 0.016 | 1.02 | 0.44 | 0.040 |
| GPS++ | ✓ | 16.1 ± 0.3 | 14.6 ± 0.4 | 26.2 ± 0.3 | 0.028 ± 0.001 | 0.024 ± 0.001 | 1.85 ± 0.28 | 0.40 ± 0.03 | 0.062 ± 0.005 |

## 7 Ablation Study

The top-performing GPS++ model was attained empirically, resulting in a complex combination of input features, architectural choices and loss functions. In this section we assess the contribution of each feature to final task performance on PCQM4Mv2, and find that much of the performance can be retained by a simplified design. All results listed in the ablation study are obtained by training the final GPS++ model from scratch for 200 epochs with one or more features disabled, averaging MAE over 5 runs. Average batch size is kept constant (926 nodes per batch) for all runs to keep them directly comparable.

Node and Edge Input Features The PCQM4Mv2 dataset provides 9 chemical node and 3 edge features for each sample, and following prior work GPS++ incorporates additional features and preprocessing strategies. Table 4 isolates the contribution of each of these additions (excluding 3D features).

Table 4: Ablation of node and edge features.

| Removed Feature | Valid ∆ MAE (meV) | Train ∆ MAE (meV) |
|---|---|---|
| None (Baseline) | 77.2 | 53.6 |
| Random Walk Structural Encoding | +1.2 | +1.5 |
| Shortest Path Distance Bias | +0.8 | +2.6 |
| Laplacian Positional Encoding | +0.3 | +0.1 |
| Local Centrality Encoding | +0.1 | 0.0 |
| Bond Lengths | +0.1 | −0.1 |
| Noisy 3D Positions | +1.6 | +3.6 |
| Noisy Nodes | +0.6 | +0.2 |
| Noisy Edges | +0.1 | −0.1 |

Table 5: Choices of chemical features.
| Feature Set | Atom Group, Period & Type | Set Size | Valid ∆ MAE (meV) | Train ∆ MAE (meV) |
|---|---|---|---|---|
| Set 1 (Baseline) | ✓ | 14 | 77.2 | 53.6 |
| Set 2 | ✓ | 14 | +0.2 | 0.0 |
| Set 3 | ✓ | 14 | +0.2 | +0.3 |
| Original | ✓ | 15 | +0.2 | +0.3 |
| Original | · | 12 | +0.5 | +0.7 |
| Set 1 | · | 11 | +0.6 | +0.4 |
| Ying21 | ✓ | 36 | +4.1 | −3.1 |
| Ying21 | · | 33 | +4.4 | −3.0 |

In line with previous work (Rampášek et al., 2022), the Random Walk Structural Encoding is the most impactful individual input feature, degrading validation performance by 1.2 meV upon removal, whilst the Graph Laplacian Positional Encodings (both eigenvectors and eigenvalues) have minimal benefit. The Shortest Path Distance Bias in the self-attention module makes a meaningful contribution, though the memory cost of storing all-to-all node distances is non-trivial. Other features, despite being beneficial when they were added at intermediate steps in the development of the model, are shown to have become redundant in the final GPS++ composition: the Graphormer-style Local Centrality Encoding (i.e. embedding the degree of the node) can be derived from the Random Walk features, and the use of Bond Lengths as an edge feature in the MPNN appears to be unnecessary in the presence of more comprehensive 3D features elsewhere in the network. Table 4 also shows that whilst the regularisation effect of the noisy nodes loss is beneficial, the contribution of noisy edges is negligible. This may be due to the simplicity (and hence ease of denoising) of the limited edge features in PCQM4Mv2. Table 5 compares the possible choices of chemical input features described in section 4.2: *Original* is the set of features included in PCQM4Mv2; *Ying21* refers to all 28 node features and 5 edge features in the PCQ superset defined by Ying et al.
(2021b); *Set 1-3* are performant subsets of 11 node features and 3 edge features selected from all of the above, defined fully in Table A.1 and obtained via a combinatorial search; *Atom Group, Period & Type* refers to our contribution of 3 additional atom features relating to an atom's position in the periodic table, intended to be more generalised than the atomic number. The results clearly show that training on all 36 available input features causes significant overfitting and degrades task performance by introducing noise. Moreover, the new *Atom Group, Period & Type* features consistently provide meaningfully richer atom representations than the atomic number feature which is present in every feature set. We believe future work on similar datasets should consider incorporating this simple addition.

3D Input Features and Self-Attention The 3D node position information given for the training set in PCQM4Mv2 is provided as input to the final model in three forms: the Bond Length edge features in the MPNN, the all-to-all 3D Bias map added to the self-attention module, and the node-wise 3D Centrality Encoding added in the node encoder. The Bond Lengths are shown to have minimal impact in Table 4, so Table 6 explores the 3D Bias and Centrality as well as how they interact with the MHSA module. In rows 1-4 we see that including at least one of the pair is critical to the final model's performance, but neither makes the other redundant. The 3D Bias is higher fidelity than the 3D Centrality (i.e., an all-to-all distance map versus a node-wise distance sum) and is the stronger of the pair, but the effectiveness of the simpler 3D Centrality should be noted since it does not require the use of custom biased self-attention layers. Additionally, we found that training stability of the MHSA module suffered in the absence of either 3D feature, requiring us to halve the learning rate to 2e-4 for convergence unless the MHSA module was disabled.
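The fidelity difference between the two 3D features can be made concrete: the 3D Bias starts from an all-to-all distance map, while the 3D Centrality reduces it to a node-wise distance sum. A minimal NumPy sketch of these raw quantities follows; in the full model both pass through learned encoders before entering the network, so this is an illustration, not the released implementation:

```python
import numpy as np

def distance_features(r):
    """Raw quantities behind the two 3D features:
    an all-to-all distance map (basis of the 3D Bias) and
    its per-node row sum (basis of the 3D Centrality)."""
    diff = r[:, None, :] - r[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))  # (N, N) distance map
    centrality = dist.sum(axis=1)             # (N,) distance sums
    return dist, centrality

# Three atoms on a line, 1 Angstrom apart.
r = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [2.0, 0.0, 0.0]])
dist, centrality = distance_features(r)
```

The all-to-all map retains which pairs of atoms are close (useful when a folded molecule puts graph-distant atoms near each other), whereas the centrality sum discards that pairwise information but needs no biased self-attention layer.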
Table 6: Ablation of self-attention and 3D features.

| # | MHSA | 3D Bias | 3D Centrality | Valid ∆ MAE (meV) | Train ∆ MAE (meV) | |
|---|---|---|---|---|---|---|
| 1 | ✓ | ✓ | ✓ | 77.2 | 53.6 | |
| 2 | ✓ | ✓ | · | +0.6 | −2.3 | ∗ |
| 3 | ✓ | · | ✓ | +1.1 | −3.6 | ∗ |
| 4 | ✓ | · | · | +5.1 | −10.0 | ∗ |
| 5 | · | · | · | +4.3 | −16.4 | |
| 6 | · | · | ✓ | +0.9 | −1.5 | |

∗ trained with half learning rate for stability

Interestingly, when the 3D Bias term is not used, Table 6 rows 4-5 show that it is preferable to *remove* the MHSA. This may imply that the Shortest Path Distance Bias term (the other bias in the MHSA, which attempts to provide a non-3D global graph positional encoding) is insufficient to modulate attention between atoms, and that spatial relationships are instead key for this quantum chemistry task: for example, when a molecule is folded over, two atoms may be very close in 3D space but far apart in the graph topology.

Model Architecture GPS++ uses a large MPNN module comprising 70% of the model parameters, with extensively tuned hyper-parameters. Table 7 ablates the components of the model network architecture, and shows that the MPNN module is unsurprisingly the single most important component for task performance. Outside of the MPNN, the MHSA (with accompanying attention biases) and FFN contribute approximately equally to the final performance, though they have opposite impacts on training MAE. Within the MPNN, the largest task impact arises from removing the edge features and edge MLP from all layers (e_uv and MLP_edge in Equation 6), falling back on simple GCN-style message passing that concatenates neighbouring node features when computing messages.
Table 7 also shows that the use of Adjacent Node Feature Aggregation (i.e., the sum of adjacent node features x^ℓ_u in Equation 7) allows large node features of size 256 to bypass compression into edge messages of size 128, affording a small but meaningful MAE improvement. It is likely due to this feature that we have not found a benefit to increasing the edge latent size to match the node size. Both the use of Global Features and Sender Message Aggregation within the MPNN are shown to be of comparable importance to the MHSA and FFN modules. Global Features, denoted as g^ℓ in Equations 6 and 7, allow the MPNN some degree of global reasoning, whilst Sender Message Aggregation (the sum of ē^ℓ_iv in Equation 7) uses outgoing messages in addition to the usual incoming messages to update a node's features. There is some overlap in purpose of the Global Features and the MHSA module, so the table also shows the impact of removing both at once. The resulting performance degradation is significantly greater than the sum of their individual impacts, which may imply each feature is able to partially compensate for the loss of the other in the individual ablations.

Table 7: Ablation of network architecture features.

| Removed Feature | Valid ∆ MAE (meV) | Train ∆ MAE (meV) | Params (∆) |
|---|---|---|---|
| None (Baseline) | 77.2 | 53.6 | 44M (0%) |
| MPNN | +7.7 | +10.9 | 14M (−70%) |
| Edge Features | +4.2 | +3.7 | 33M (−26%) |
| Global Features | +1.2 | 0.0 | 40M (−8%) |
| Sender Message Aggregation | +0.7 | +0.4 | 38M (−14%) |
| Adjacent Node Aggregation | +0.4 | +1.3 | 36M (−19%) |
| MHSA | +0.9 | −1.5 | 40M (−10%) |
| FFN | +1.0 | +1.3 | 36M (−19%) |
| MHSA and Global Features | +3.4 | −0.2 | 37M (−18%) |
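As a toy illustration of the aggregation terms ablated in Table 7, the following NumPy sketch gathers, for each node, the inputs that feed the node update of Equation 7: received messages, sender (outgoing) messages, adjacent node features, and the global feature. All names and sizes here are illustrative only, not the released implementation:

```python
import numpy as np

def node_update_inputs(X, E_bar, g, edges):
    """For each node i, gather the aggregations that feed MLP_node:
    received messages, sender (outgoing) messages, adjacent node
    features, and the global feature g (cf. Equation 7)."""
    N, d = X.shape
    recv = np.zeros((N, d)); sent = np.zeros((N, d)); adj = np.zeros((N, d))
    for e, (u, v) in enumerate(edges):
        recv[v] += E_bar[e]   # incoming message e_bar_uv, aggregated at v
        sent[u] += E_bar[e]   # Sender Message Aggregation: outgoing at u
        adj[v] += X[u]        # Adjacent Node Feature Aggregation
    return [np.concatenate([X[i], recv[i], sent[i], adj[i], g])
            for i in range(N)]

X = np.eye(3)                 # 3 nodes with one-hot toy features
E_bar = np.ones((2, 3))       # messages for edges (0, 1) and (1, 2)
g = np.zeros(3)               # toy global feature
inputs = node_update_inputs(X, E_bar, g, [(0, 1), (1, 2)])
# Node 1 receives a message from node 0 and sends one to node 2.
```

Removing Sender Message Aggregation corresponds to dropping the `sent` term, and removing Adjacent Node Aggregation to dropping `adj`; the ablations in Table 7 measure exactly these omissions at full scale.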
## 8 Discussion

In this work, we define GPS++, a hybrid MPNN/Transformer optimised for the PCQM4Mv2 molecular property prediction task (Hu et al., 2021). Our model builds on previous work (Rampášek et al., 2022; Luo et al., 2022; Godwin et al., 2022) with a particular focus on building a powerful and expressive message passing component comprising 70% of model parameters. We showed that our GPS++ model has state-of-the-art performance on this uniquely large-scale dataset and that, despite recent trends towards using only graph transformers in molecular property prediction, GPS++ retains almost all of its performance as a pure MPNN with the global attention module ablated. Finally, we consider the case where 3D positions are not available and show that our GPS++ models are significantly more robust than transformers, which show a marked drop-off across all prior work under these conditions. We also found that our MPNN-only model performs better than our hybrid in this setting, indicating a strong dependence of effective attention on the availability of 3D positional information. Whilst these results are interesting for problems with small molecules, testing on much larger molecules, for example peptides, proteins, or RNAs, which can have hundreds or thousands of atoms, is a tougher challenge. Under these conditions, the linear complexity of MPNNs with graph size suggests some computational benefits over global attention; however, it is still to be determined whether the downsides of issues like underreaching outweigh these benefits. Nevertheless, we believe that our results highlight the strength of MPNNs in this domain and hope that they inspire a revival of message passing and hybrid models for molecular property prediction when dealing with large datasets.

## References

Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In *International Conference on Learning Representations*, 2021.
Simon Axelrod and Rafael Gomez-Bombarelli. Geom, energy-annotated molecular conformations for property prediction and molecular generation. *Scientific Data*, 9(1):1–14, 2022. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv:1607.06450*, 2016. Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. *arXiv:1806.01261*, 2018. Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. *Nature communications*, 13(1):1–11, 2022. Jenna A. Bilbrey, Kristina M. Herman, Henry Sprueill, Soritis S. Xantheas, Payel Das, Manuel Lopez Roldan, Mike Kraus, Hatem Helal, and Sutanay Choudhury. Reducing down(stream)time: Pretraining molecular gnns using heterogeneous ai accelerators. *arXiv preprint arXiv:2211.04598*, 2022. Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geometric and physical quantities improve e(3) equivariant message passing. In *International Conference on Learning* Representations, 2022. URL **https://openreview.net/forum?id=_xwr8gOBeV1**. Xavier Bresson and Thomas Laurent. Residual gated graph convnets, 2018. Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. *arXiv:2104.13478*, 2021. Stefan Chmiela, Alexandre Tkatchenko, Huziel E Sauceda, Igor Poltavsky, Kristof T Schütt, and Klaus-Robert Müller. Machine learning of accurate energy-conserving molecular force fields. *Science advances*, 3(5): e1603015, 2017. Christopher M. Dobson. Chemical space and biology. *Nature*, 432(7019):824–828, December 2004. ISSN 1476-4687. 
doi:10.1038/nature03192. David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. *Advances in neural information processing systems*, 28, 2015. Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. *arXiv:2012.09699*, 2020. Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. In *International Conference on Learning Representations*, 2022. Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, Fan Wang, Hua Wu, and Haifeng Wang. Chemrl-gem: Geometry enhanced molecular representation learning for property prediction. *arXiv preprint arXiv:2106.06130*, 2021. Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with PyTorch Geometric. In *ICLR Workshop on Representation Learning on Graphs and Manifolds*, 2019. Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. In *International Conference on Machine Learning*, pp. 3165–3176. PMLR, 2020. Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. *Advances in Neural Information Processing Systems*, 33:1970–1981, 2020. Johannes Gasteiger, Shankari Giri, Johannes T. Margraf, and Stephan Günnemann. Fast and uncertainty-aware directional message passing for non-equilibrium molecules. In *Machine Learning for Molecules Workshop, NeurIPS*, 2020a. Johannes Gasteiger, Janek Groß, and Stephan Günnemann. Directional message passing for molecular graphs. In *International Conference on Learning Representations (ICLR)*, 2020b. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl.
Neural message passing for quantum chemistry. In *International conference on machine learning*, pp. 1263–1272. PMLR, 2017. Jonathan Godwin, Michael Schaarschmidt, Alexander L Gaunt, Alvaro Sanchez-Gonzalez, Yulia Rubanova, Petar Veličković, James Kirkpatrick, and Peter Battaglia. Simple GNN regularisation for 3D molecular property prediction and beyond. In *International Conference on Learning Representations*, 2022. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). *arXiv:1606.08415*, 2016. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. 34th Conference on Neural Information Processing Systems, 2020. Weihua Hu, Matthias Fey, Hongyu Ren, Maho Nakata, Yuxiao Dong, and Jure Leskovec. OGB-LSC: A large-scale challenge for machine learning on graphs. In 35th Conference on Neural Information Processing Systems: Datasets and Benchmarks Track, 2021. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), *Computer Vision - ECCV 2016*, pp. 646–661, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46493-0. Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. Artificial intelligence foundation for therapeutic science. Nature Chemical Biology, 18(10):1033–1036, 2022. Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. *arXiv preprint arXiv:1811.06965*, 2018. Md Shamim Hussain, Mohammed J Zaki, and Dharmashankar Subramanian. Global self-attention as a replacement for graph convolution. 
In *Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*, pp. 655–665, 2022. Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, and Hyunjik Kim. Lietransformer: Equivariant self-attention for lie groups. In *International Conference on Machine Learning*, pp. 4533–4543. PMLR, 2021. Rui Jiao, Jiaqi Han, Wenbing Huang, Yu Rong, and Yang Liu. 3d equivariant molecular graph pretraining. *arXiv preprint arXiv:2207.08824*, 2022. John A. Keith, Valentin Vassilev-Galindo, Bingqing Cheng, Stefan Chmiela, Michael Gastegger, Klaus-Robert Müller, and Alexandre Tkatchenko. Combining machine learning and computational chemistry for predictive insights into chemical systems. *Chemical Reviews*, 121(16):9816–9872, 2021. doi:10.1021/acs.chemrev.1c00107. PMID: 34232033. Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. Pure transformers are powerful graph learners. In *NeurIPS*, 2022. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015. URL **http://arxiv.org/abs/1412.6980**. Walter Kohn and Lu Jeu Sham. Self-consistent equations including exchange and correlation effects. *Phys. Rev.*, 140:A1133–A1138, Nov 1965. Mario Michael Krell, Matej Kosec, Sergio P. Perez, and Andrew Fitzgibbon. Efficient sequence packing without cross-contamination: Accelerating large language models without impacting performance. *arXiv preprint arXiv:2107.02027*, 2021. Devin Kreuzer, Dominique Beaini, William L. Hamilton, Vincent Létourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. In *Advances in Neural Information Processing Systems*, 2021. Tuan Le, Frank Noé, and Djork-Arné Clevert. Equivariant graph attention networks for molecular property prediction. *arXiv preprint arXiv:2202.09891*, 2022. Qimai Li, Zhichao Han, and Xiao-Ming Wu.
Deeper insights into graph convolutional networks for semisupervised learning. In *AAAI*, pp. 3538–3545, 2018. Lihang Liu, Donglong He, Xiaomin Fang, Shanzhuo Zhang, Fan Wang, Jingzhou He, and Hua Wu. GEM-2: Next generation molecular property prediction network with many-body and full-range interaction modeling. *arXiv preprint arXiv:2208.05863*, 2022a. Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3d molecular graphs. In *International Conference on Learning Representations (ICLR)*, 2022b. Chengqiang Lu, Qi Liu, Chao Wang, Zhenya Huang, Peize Lin, and Lixin He. Molecular property prediction: A multilevel quantum interactions modeling perspective. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 1052–1060, 2019. Shengjie Luo, Tianlang Chen, Yixian Xu, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, and Di He. One transformer can understand both 2D & 3D molecular data. *arXiv:2210.01765*, 2022. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training. *arXiv preprint arXiv:1710.03740*, 2017. Maho Nakata and Tomomi Shimazaki. Pubchemqc project: a large-scale first-principles electronic structure database for data-driven chemistry. *Journal of chemical information and modeling*, 57(6):1300–1308, 2017. Wonpyo Park, Woonggi Chang, Donggeon Lee, Juntae Kim, and Seung won Hwang. GRPE: Relative positional encoding for graph transformer. *arXiv:2201.12787*, 2022. Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. *Scientific data*, 1(1):1–7, 2014. Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a General, Powerful, Scalable Graph Transformer. *arXiv:2205.12454*, 2022. RDKit.
RDKit: Open-source cheminformatics. **http://www.rdkit.org**. [Online]. Patrick Reiser, Marlen Neubert, André Eberhard, Luca Torresi, Chen Zhou, Chen Shao, Houssam Metni, Clint van Hoesel, Henrik Schopmans, Timo Sommer, et al. Graph neural networks for materials science and chemistry. *Communications Materials*, 3(1):93, 2022. David Rogers and Mathew Hahn. Extended-connectivity fingerprints. *Journal of chemical information and modeling*, 50(5):742–754, 2010. Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Grover: Self-supervised message passing transformer on large-scale molecular data. *ArXiv*, abs/2007.02835, 2020. Lars Ruddigkeit, Ruud Van Deursen, Lorenz C Blum, and Jean-Louis Reymond. Enumeration of 166 billion organic small molecules in the chemical universe database gdb-17. *Journal of chemical information and modeling*, 52(11):2864–2875, 2012. Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In *International conference on machine learning*, pp. 9323–9332. PMLR, 2021. Kristof Schütt, Oliver Unke, and Michael Gastegger. Equivariant message passing for the prediction of tensorial properties and molecular spectra. In *International Conference on Machine Learning*, pp. 9377–9388. PMLR, 2021. Kristof T Schütt, Huziel E Sauceda, P-J Kindermans, Alexandre Tkatchenko, and K-R Müller. Schnet–a deep learning architecture for molecules and materials. *The Journal of Chemical Physics*, 148(24):241722, 2018. Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, and Tie-Yan Liu. Benchmarking graphormer on large-scale molecular modeling datasets. *arXiv:2203.04810*, 2022. Justin S Smith, Olexandr Isayev, and Adrian E Roitberg. Ani-1, a data set of 20 million calculated off-equilibrium conformations for organic molecules. *Scientific data*, 4(1):1–8, 2017.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1):1929–1958, 2014. Hannes Stärk, Dominique Beaini, Gabriele Corso, Prudencio Tossou, Christian Dallago, Stephan Günnemann, and Pietro Liò. 3d infomax improves gnns for molecular property prediction. In *International Conference on Machine Learning*, pp. 20479–20502. PMLR, 2022. Philipp Thölke and Gianni De Fabritiis. Equivariant transformers for neural network based molecular potentials. In *International Conference on Learning Representations*, 2022a. Philipp Thölke and Gianni De Fabritiis. Torchmd-net: Equivariant transformers for neural network based molecular potentials. *arXiv preprint arXiv:2202.02541*, 2022b. Richard Tran, Janice Lan, Muhammed Shuaibi, Siddharth Goyal, Brandon M Wood, Abhishek Das, Javier Heras-Domingo, Adeesh Kolluru, Ammar Rizvi, Nima Shoghi, et al. The open catalyst 2022 (oc22) dataset and challenges for oxide electrocatalysis. *arXiv:2206.08917*, 2022. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. *Advances in Neural Information Processing Systems*, 30, 2017. Petar Veličković. Everything is connected: Graph neural networks. *Curr Opin Struct Biol*, 79, 2023. Haorui Wang, Haoteng Yin, Muhan Zhang, and Pan Li. Equivariant and stable positional encoding for more powerful graph neural networks. In *International Conference on Learning Representations*, 2022a. Yusong Wang, Shaoning Li, Tong Wang, Zun Wang, Xinheng He, Bin Shao, and Tie-Yan Liu. How to better introduce geometric information in equivariant message passing? https://github.com/ogb-visnet/Global-ViSNet/blob/master/ViSNet_Tech_Report.pdf, 2022b. Accessed: 2022-11-16.
Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. *Chemical science*, 9(2):513–530, 2018. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? In *Advances in Neural Information Processing Systems*, 2021a. Chengxuan Ying, Mingqi Yang, Shuxin Zheng, Guolin Ke, Shengjie Luo, Tianle Cai, Chenglin Wu, Yuxin Wang, Yanming Shen, and Di He. First place solution of KDD Cup 2021 & OGB large-scale challenge graph prediction track. *arXiv:2106.08279*, 2021b. Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter Battaglia, Razvan Pascanu, and Jonathan Godwin. Pre-training via denoising for molecular property prediction. *arXiv preprint arXiv:2206.00133*, 2022.

## A Detailed Model Description

## A.1 GPS++ Block Complete

The **GPS++** block is defined as follows for layers ℓ > 0 (see section A.2 for the definitions of X^0, E^0, g^0):

$$\mathbf{X}^{\ell+1},\ \mathbf{E}^{\ell+1},\ \mathbf{g}^{\ell+1}=\mathsf{GPS{+}{+}}\left(\mathbf{X}^{\ell},\mathbf{E}^{\ell},\mathbf{g}^{\ell},\mathbf{B}\right)\tag{17}$$

computed as

$$\mathbf{Y}^{\ell},\ \mathbf{E}^{\ell+1},\ \mathbf{g}^{\ell+1}=\mathsf{MPNN}\left(\mathbf{X}^{\ell},\mathbf{E}^{\ell},\mathbf{g}^{\ell}\right)\tag{18}$$

$$\mathbf{Z}^{\ell}=\mathsf{BiasedAttn}\left(\mathbf{X}^{\ell},\mathbf{B}\right)\tag{19}$$

$$\forall i:\quad\mathbf{x}_{i}^{\ell+1}=\mathsf{FFN}\left(\mathbf{y}_{i}^{\ell}+\mathbf{z}_{i}^{\ell}\right)\tag{20}$$

The MPNN **module** A simplified version of the **MPNN** module is presented in section 4.1.
The full description is as follows:

$$\mathbf{Y}^{\ell},\ \mathbf{E}^{\ell+1},\ \mathbf{g}^{\ell+1}=\mathsf{MPNN}\left(\mathbf{X}^{\ell},\mathbf{E}^{\ell},\mathbf{g}^{\ell}\right)\tag{21}$$

computed as

$$\forall (u,v):\quad\mathbf{c}_{uv}^{\ell}=\left[\mathbf{x}_{u}^{\ell}\,|\,\mathbf{x}_{v}^{\ell}\,|\,\mathbf{e}_{uv}^{\ell}\,|\,\mathbf{g}^{\ell}\right]\tag{22}$$

$$\forall (u,v):\quad\bar{\mathbf{e}}_{uv}^{\ell}=\mathsf{Dropout}_{0.0035}\left(\mathsf{MLP}_{\text{edge}}\left(\mathbf{c}_{uv}^{\ell}\right)\right)\tag{23}$$

$$\forall i:\quad\bar{\mathbf{x}}_{i}^{\ell}=\mathsf{MLP}_{\text{node}}\Bigg(\bigg[\mathbf{x}_{i}^{\ell}\,\Big|\,\underbrace{\sum_{(u,i)\in\mathcal{E}}\bar{\mathbf{e}}_{ui}^{\ell}}_{\text{receiver messages}}\,\Big|\,\underbrace{\sum_{(i,v)\in\mathcal{E}}\bar{\mathbf{e}}_{iv}^{\ell}}_{\text{sender messages}}\,\Big|\,\underbrace{\sum_{(u,i)\in\mathcal{E}}\mathbf{x}_{u}^{\ell}}_{\text{adjacent nodes}}\,\Big|\,\mathbf{g}^{\ell}\bigg]\Bigg)\tag{24}$$

$$\bar{\mathbf{g}}^{\ell}=\mathsf{MLP}_{\text{global}}\left(\bigg[\mathbf{g}^{\ell}\,\Big|\,\sum_{j\in\mathcal{V}}\bar{\mathbf{x}}_{j}^{\ell}\,\Big|\,\sum_{(u,v)\in\mathcal{E}}\bar{\mathbf{e}}_{uv}^{\ell}\bigg]\right)\tag{25}$$

$$\forall i:\quad\mathbf{y}_{i}^{\ell}=\mathsf{LayerNorm}\left(\mathsf{Dropout}_{0.3}\left(\bar{\mathbf{x}}_{i}^{\ell}\right)+\mathbf{x}_{i}^{\ell}\right)\tag{26}$$

$$\forall (u,v):\quad\mathbf{e}_{uv}^{\ell+1}=\bar{\mathbf{e}}_{uv}^{\ell}+\mathbf{e}_{uv}^{\ell}\tag{27}$$

$$\mathbf{g}^{\ell+1}=\mathsf{Dropout}_{0.35}\left(\bar{\mathbf{g}}^{\ell}\right)+\mathbf{g}^{\ell}\tag{28}$$

where **Dropout**_p (Srivastava et al., 2014) masks by zero each element with probability p and **LayerNorm** follows the normalisation procedure of Ba et al. (2016). The three networks MLP_η for η ∈ {node, edge, global} each have two layers and are defined by

$$\mathbf{y}=\mathsf{MLP}_{\eta}(\mathbf{x})\tag{29}$$

computed as

$$\bar{\mathbf{x}}=\mathsf{GELU}\left(\mathsf{Dense}(\mathbf{x})\right)\in\mathbb{R}^{4d_{\eta}}\tag{30}$$

$$\mathbf{y}=\mathsf{Dense}\left(\mathsf{LayerNorm}(\bar{\mathbf{x}})\right)\in\mathbb{R}^{d_{\eta}}\tag{31}$$

where **GELU** is from Hendrycks & Gimpel (2016).

The BiasedAttn **module** Our **BiasedAttn** module follows the form of the self-attention layer in Luo et al. (2022) where a standard self-attention block (Vaswani et al., 2017) is biased by a structural prior derived from the input graph. In our work the bias B is made up of two components, a Shortest Path Distance embedding and a 3D Distance Bias derived from the molecular conformations as described in section A.2.
Single-head biased attention is defined by:

$$\mathbf{Z}=\mathsf{BiasedAttn}(\mathbf{X},\mathbf{B})\tag{32}$$

computed as

$$\mathbf{A}=\frac{\left(\mathbf{X}\mathbf{W}_{Q}\right)\left(\mathbf{X}\mathbf{W}_{K}\right)^{\top}}{\sqrt{d_{\text{node}}}}+\mathbf{B}\ \in\mathbb{R}^{N\times N}\tag{33}$$

$$\tilde{\mathbf{A}}=\mathsf{Dropout}_{0.3}\left(\mathsf{Softmax}\left(\mathbf{A}\right)\right)\left(\mathbf{X}\mathbf{W}_{V}\right)\ \in\mathbb{R}^{N\times d_{\text{node}}}\tag{34}$$

$$\mathbf{Z}=\mathsf{GraphDropout}_{0.3}\left(\tilde{\mathbf{A}}\right)+\mathbf{X}\ \in\mathbb{R}^{N\times d_{\text{node}}}\tag{35}$$

for learnable weight matrices $\mathbf{W}_{Q},\mathbf{W}_{K},\mathbf{W}_{V}\in\mathbb{R}^{d_{\text{node}}\times d_{\text{node}}}$ and output in $\mathbb{R}^{N\times d_{\text{node}}}$, though in practice we use 32 attention heads which are mixed before **GraphDropout** using an affine projection $\mathbf{W}_{P}\in\mathbb{R}^{d_{\text{node}}\times d_{\text{node}}}$.

The FFN **module**. Finally, the feed-forward network module takes the form:

$$\mathbf{y}=\mathsf{FFN}(\mathbf{x})\tag{36}$$

computed as

$$\bar{\mathbf{x}}=\mathsf{Dropout}_{p}(\mathsf{GELU}(\mathsf{Dense}(\mathbf{x})))\ \in\mathbb{R}^{4d_{\text{node}}}\tag{37}$$

$$\mathbf{y}=\mathsf{GraphDropout}_{0.3}(\mathsf{Dense}(\bar{\mathbf{x}}))+\mathbf{x}\ \in\mathbb{R}^{d_{\text{node}}}\tag{38}$$

Unless otherwise stated, the dropout probability p = 0. **GraphDropout**, also known as Stochastic Depth or LayerDrop (Huang et al., 2016), masks whole graphs together rather than individual nodes or features, relying on skip connections to propagate activations.

## A.2 Input Feature Engineering

As described in section 3, the dataset samples include the graph structure G, a set of categorical features for the atoms and bonds xi, euv, and the 3D node positions ri. It has been shown that there are many benefits to augmenting the input data with additional structural, positional, and chemical information (Rampášek et al., 2022; Wang et al., 2022a; Dwivedi et al., 2022).
Therefore, we combine several feature sources when computing the input to the first GPS++ layer. There are four feature tensors to initialise: the node state, the edge state, the whole-graph state, and the attention biases.

$$\begin{aligned}\mathbf{X}^{all}&=[\mathbf{X}^{\text{atom}}\mid\mathbf{X}^{\text{LapVec}}\mid\mathbf{X}^{\text{LapVal}}\mid\mathbf{X}^{\text{RW}}\mid\mathbf{X}^{\text{Cent}}\mid\mathbf{X}^{\text{3D}}]\\ \mathbf{X}^{0}&=\mathsf{Dense}(\mathbf{X}^{all})\ \in\mathbb{R}^{N\times d_{\text{node}}}\end{aligned}\tag{39}$$

$$\mathbf{E}^{0}=\mathsf{Dense}([\mathbf{E}^{\text{bond}}\mid\mathbf{E}^{\text{3D}}])\ \in\mathbb{R}^{M\times d_{\text{edge}}}\tag{40}$$

$$\mathbf{g}^{0}=\mathsf{Embed}_{d_{\text{global}}}(0)\ \in\mathbb{R}^{d_{\text{global}}}\tag{41}$$

$$\mathbf{B}=\mathbf{B}^{\text{SPD}}+\mathbf{B}^{\text{3D}}\ \in\mathbb{R}^{N\times N}\tag{42}$$

The various components of each of these equations are defined over the remainder of this section. The encoding of these features also makes recurring use of the following two generic functions. Firstly, a two-layer MLPencoder that projects features to a fixed-size latent space:

$$\mathbf{y}=\mathsf{MLP}_{\text{encoder}}(\mathbf{x}),\quad\text{where }\mathbf{x}\in\mathbb{R}^{h}\tag{43}$$

computed as

$$\bar{\mathbf{x}}=\mathsf{ReLU}(\mathsf{Dense}(\mathsf{LayerNorm}(\mathbf{x})))\ \in\mathbb{R}^{2h}\tag{44}$$

$$\mathbf{y}=\mathsf{Dropout}_{0.18}(\mathsf{Dense}(\mathsf{LayerNorm}(\bar{\mathbf{x}})))\ \in\mathbb{R}^{32}\tag{45}$$

Secondly, a function $\mathsf{Embed}_{d}(j)\in\mathbb{R}^{d}$ which selects the j-th row from an implicit learnable weight matrix.

Chemical Features. The chemical features used in this work are shown in Table A.1. To embed the categorical chemical features from the dataset xi, euv into a continuous vector space, we learn a simple embedding vector for each category, sum the embeddings for all categories, and then process the result with an MLP to produce Xatom and Ebond, i.e.
$$\forall i:\quad\bar{\mathbf{x}}_{i}^{\text{atom}}=\mathsf{MLP}_{\text{node}}\Big(\sum_{j\in x_{i}}\mathsf{Embed}_{64}(j)\Big)\tag{46}$$

$$\forall i:\quad\mathbf{x}_{i}^{\text{atom}}=\mathsf{Dropout}_{0.18}(\bar{\mathbf{x}}_{i}^{\text{atom}})\ \in\mathbb{R}^{d_{\text{node}}}\tag{47}$$

$$\forall(u,v):\quad\bar{\mathbf{e}}_{uv}^{\text{bond}}=\mathsf{MLP}_{\text{edge}}\Big(\sum_{j\in e_{uv}}\mathsf{Embed}_{64}(j)\Big)\tag{48}$$

$$\forall(u,v):\quad\mathbf{e}_{uv}^{\text{bond}}=\mathsf{Dropout}_{0.18}(\bar{\mathbf{e}}_{uv}^{\text{bond}})\ \in\mathbb{R}^{d_{\text{edge}}}\tag{49}$$

Here MLPnode and MLPedge refer to the functions by the same names used in Eq. 23 and 24 in the **MPNN** module, yet parameterised independently.

Table A.1: Chemical input feature selection for PCQM4Mv2.

| Node features | Original | Set 1 | Set 2 | Set 3 | Ying21 |
|---|---|---|---|---|---|
| Atomic number | ✓ | ✓ | ✓ | ✓ | ✓ |
| Group | · | ✓ | ✓ | ✓ | · |
| Period | · | ✓ | ✓ | ✓ | · |
| Element type | · | ✓ | ✓ | ✓ | · |
| Chiral tag | ✓ | · | · | · | ✓ |
| Degree | ✓ | ✓ | ✓ | ✓ | ✓ |
| Formal charge | ✓ | ✓ | · | ✓ | ✓ |
| # Hydrogens | ✓ | ✓ | ✓ | ✓ | ✓ |
| # Radical electrons | ✓ | ✓ | ✓ | ✓ | ✓ |
| Hybridisation | ✓ | · | ✓ | ✓ | ✓ |
| Is aromatic | ✓ | ✓ | ✓ | · | ✓ |
| Is in ring | ✓ | ✓ | ✓ | ✓ | ✓ |
| Is chiral center | · | ✓ | ✓ | ✓ | ✓ |
| Explicit valence | · | · | · | · | ✓ |
| Implicit valence | · | · | · | · | ✓ |
| Total valence | · | · | · | · | ✓ |
| Total degree | · | · | · | · | ✓ |
| Default valence | · | · | · | · | ✓ |
| # Outer electrons | · | · | · | · | ✓ |
| Van der Waals radius | · | · | · | · | ✓ |
| Covalent radius | · | · | · | · | ✓ |
| # Bonds in radius N = 2 : 8 | · | · | · | · | ✓ |
| Gasteiger charge | · | · | · | · | ✓ |
| Is donor | · | · | · | · | ✓ |
| Is acceptor | · | · | · | · | ✓ |
| **Edge features** | | | | | |
| Bond type | ✓ | ✓ | · | · | ✓ |
| Bond stereo | ✓ | ✓ | ✓ | ✓ | ✓ |
| Is conjugated | ✓ | · | ✓ | ✓ | ✓ |
| Is in ring | · | ✓ | ✓ | ✓ | ✓ |
| Bond direction | · | · | · | · | ✓ |

Graph Laplacian Positional Encodings (Kreuzer et al., 2021; Dwivedi & Bresson, 2020). Given a graph with adjacency matrix A and degree matrix D, the eigendecomposition of the graph Laplacian L is formulated into a global positional encoding as follows:

$$\forall i:\quad\mathbf{x}_i^{\text{LapVec}}=\mathsf{MLP}_{\text{encoder}}(\mathbf{U}\left[i,\,2\dots k^{\text{Lap}}\right])\ \in\mathbb{R}^{32},\quad\text{where }\mathbf{L}=\mathbf{D}-\mathbf{A}=\mathbf{U}^\top\boldsymbol{\Lambda}\mathbf{U}\tag{50}$$

$$\forall i:\quad\mathbf{x}_i^{\text{LapVal}}=\mathsf{MLP}_{\text{encoder}}\left(\frac{\boldsymbol{\Lambda}'}{||\boldsymbol{\Lambda}'||}\right)\ \in\mathbb{R}^{32},\quad\text{where }\boldsymbol{\Lambda}'=\text{diag}(\boldsymbol{\Lambda})\left[2\dots k^{\text{Lap}}\right]\tag{51}$$

To produce fixed-shape inputs despite variable numbers of eigenvalues / eigenvectors per graph, we truncate / pad to the lowest 7 eigenvalues, excluding the first trivial eigenvalue Λ11 = 0. We also randomise the eigenvector sign every epoch, which is otherwise arbitrarily defined.

Random Walk Structural Encoding (Dwivedi et al., 2022). This feature captures the probability that a random graph walk starting at node i will finish back at node i, and is computed using powers of the transition matrix P. This feature captures information about the local structures in the neighbourhood around each node, with the degree of locality controlled by the number of steps. For this submission, random walks from 1 up to k^RW = 16 steps were computed to form the feature vector.
$$\forall i:\quad\bar{\mathbf{x}}^{\text{RW}}_{i}=\left[(\mathbf{P}^{1})_{ii},\,(\mathbf{P}^{2})_{ii},\,\cdots,\,(\mathbf{P}^{k^{\text{RW}}})_{ii}\right]\quad\text{where }\mathbf{P}=\mathbf{D}^{-1}\mathbf{A}\tag{52}$$

$$\forall i:\quad\mathbf{x}^{\text{RW}}_{i}=\mathsf{MLP}_{\text{encoder}}\left(\bar{\mathbf{x}}^{\text{RW}}_{i}\right)\ \in\mathbb{R}^{32}\tag{53}$$

Local Graph Centrality Encoding (Ying et al., 2021a; Shi et al., 2022). The graph centrality encoding is intended to allow the network to gauge the importance of a node based on its connectivity, by embedding the degree (number of incident edges) of each node into a learnable feature vector.

$$\forall i:\quad\mathbf{x}_{i}^{\text{Cent}}=\mathsf{Embed}_{64}\left(D_{ii}\right)\ \in\mathbb{R}^{64}\tag{54}$$

Shortest Path Distance Attention Bias. Graphormer (Ying et al., 2021a; Shi et al., 2022) showed that graph topology information can be incorporated into a node transformer by adding learnable biases to the self-attention matrix depending on the distance between node pairs. During data preprocessing the SPD map $\boldsymbol{\Delta}\in\mathbb{N}^{N\times N}$ is computed, where $\Delta_{ij}$ is the number of edges in the shortest continuous path from node i to node j. During training each integer distance is embedded as a scalar attention bias term to create the SPD attention bias map $\mathbf{B}^{\text{SPD}}\in\mathbb{R}^{N\times N}$.

$$\forall i,j:\quad B_{ij}^{\text{SPD}}=\mathsf{Embed}_{1}\left(\Delta_{ij}\right)\ \in\mathbb{R}\tag{55}$$

Single-headed attention is assumed throughout this report for simplified notation; however, upon extension to multi-headed attention, one bias is learned per distance per head.

Embedding 3D Distances. Using the 3D positional information provided by the dataset comes with a number of inherent difficulties. Firstly, the task is invariant to molecular rotations and translations, however, the 3D positions themselves are not. Secondly, the 3D conformer positions are only provided for the training data, not the validation or test data. To deal with these two issues and take advantage of the 3D positions provided we follow the approach of Luo et al. (2022).
To ensure rotational and translational invariance we use only the distances between atoms, not the positions directly. To embed the scalar distances into vector space $\mathbb{R}^{K}$ we first apply K = 128 Gaussian kernel functions, where the k-th function is defined as

$$\forall i,j:\quad\bar{\psi}^{k}_{ij}=\frac{||\mathbf{r}_{i}-\mathbf{r}_{j}||-\mu^{k}}{|\sigma^{k}|}\tag{56}$$

$$\forall i,j:\quad\psi^{k}_{ij}=-\frac{1}{\sqrt{2\pi}\,|\sigma^{k}|}\exp\left(-\frac{1}{2}\left(\bar{\psi}^{k}_{ij}\right)^{2}\right)\ \in\mathbb{R}\tag{57}$$

with learnable parameters $\mu^{k}$ and $\sigma^{k}$. The K elements are concatenated into vector $\boldsymbol{\psi}_{ij}$. We then process these distance embeddings in three ways to produce attention biases, node features and edge features.

3D Distance Attention Bias. The 3D attention bias map $\mathbf{B}^{\text{3D}}\in\mathbb{R}^{N\times N}$ allows the model to modulate the information flowing between two node representations during self-attention based on the spatial distance between them, and is calculated as per Luo et al. (2022)

$$\forall i,j:\quad\mathbf{B}_{ij}^{\text{3D}}=\mathsf{MLP}_{\text{bias3D}}(\boldsymbol{\psi}_{ij})\ \in\mathbb{R}\tag{58}$$

Upon extension to multi-headed attention with 32 heads, MLPbias3D instead projects to $\mathbb{R}^{32}$.

Bond Length Encoding. Whilst $\mathbf{B}^{\text{3D}}$ makes inter-node distance information available to the self-attention module in a dense all-to-all manner as a matrix of simple scalar biases, we also make this information available to the **MPNN** module in a sparse but high-dimensional manner as edge features $\mathbf{E}^{\text{3D}}=\left[\mathbf{e}^{\text{3D}}_{uv}\right]$ for $(u,v)\in\mathcal{E}$ calculated as

$$\forall(u,v):\quad\mathbf{e}_{uv}^{\text{3D}}=\mathsf{MLP}_{\text{encoder}}(\boldsymbol{\psi}_{uv})\ \in\mathbb{R}^{32}\tag{59}$$

Global 3D Centrality Encoding. The 3D node centrality features $\mathbf{X}^{\text{3D}}=\left[\mathbf{x}^{\text{3D}}_{1};\,\dots;\,\mathbf{x}^{\text{3D}}_{N}\right]$ are computed by summing the embedded 3D distances from node i to all other nodes.
Since the sum commutes, this feature cannot be used to determine the distance to a specific node, so it serves as a centrality encoding rather than a positional encoding.

$$\forall i:\quad\mathbf{x}_{i}^{\text{3D}}=\mathbf{W}^{\text{3D}}\sum_{j\in\mathcal{V}}\boldsymbol{\psi}_{ij}\ \in\mathbb{R}^{32}\tag{60}$$

Here $\mathbf{W}^{\text{3D}}\in\mathbb{R}^{K\times 32}$ is a linear projection to the same latent size as the other encoded features.

3D Denoising. We also closely follow the approach of Luo et al. (2022) to implement an auxiliary self-supervised 3D denoising task during training. Before the 3D distance inputs are embedded (Equation 56) the atom positions R are substituted by R + σϵ, where σ ∈ ℝ is a scaling factor we set to 0.2, and ϵ ∈ ℝ^{N×3} are Gaussian noise vectors. GPS++ computes $\hat{\epsilon}_{ik}$, the predicted value of $\epsilon_{ik}$, as:

$$\hat{\epsilon}_{ik}=\left(\sum_{j\in\mathcal{V}}\mathbf{A}_{ij}\Delta_{ij}^{k}\,\mathbf{X}_{j}^{L}\mathbf{W}_{V_{1}}^{\text{3D}}\right)\mathbf{W}_{V_{2}}^{\text{3D}}\ \in\mathbb{R}\tag{61}$$

where

$$\bar{\mathbf{A}}=\frac{\left(\mathbf{X}^{L}\mathbf{W}_{Q}^{\text{3D}}\right)\left(\mathbf{X}^{L}\mathbf{W}_{K}^{\text{3D}}\right)^{\top}}{\sqrt{d_{\text{node}}}}+\mathbf{B}\ \in\mathbb{R}^{N\times N}\tag{62}$$

$$\mathbf{A}=\mathsf{Dropout}_{0.3}\left(\mathsf{Softmax}\left(\bar{\mathbf{A}}\right)\right)\ \in\mathbb{R}^{N\times N}\tag{63}$$

Here $\mathbf{X}^{L}$ is the node output of the final GPS++ layer, $\Delta^{k}_{ij}$ is the k-th element of the directional vector $\frac{\mathbf{r}_{i}-\mathbf{r}_{j}}{||\mathbf{r}_{i}-\mathbf{r}_{j}||}$ from atom j to atom i, and $\mathbf{W}_{Q}^{\text{3D}},\mathbf{W}_{K}^{\text{3D}},\mathbf{W}_{V_{1}}^{\text{3D}}\in\mathbb{R}^{d_{\text{node}}\times d_{\text{node}}},\ \mathbf{W}_{V_{2}}^{\text{3D}}\in\mathbb{R}^{d_{\text{node}}\times 1}$ are learnable weight matrices. A cosine similarity loss is computed between $\hat{\boldsymbol{\epsilon}}$ and $\boldsymbol{\epsilon}$, and we set the loss weight ratio of HOMO-LUMO MAE vs. 3D denoising to 10 : 1.
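To make the distance featurisation above concrete, the following is a minimal NumPy sketch of the Gaussian kernel embedding of Eqs. (56)–(57). In the model, $\mu^k$ and $\sigma^k$ are learnable and K = 128; here they are fixed toy values, and the function name is ours, not from the paper's codebase:

```python
import numpy as np

def gaussian_distance_embedding(r, mu, sigma):
    """Embed pairwise atom distances with K Gaussian kernels (cf. Eqs. 56-57).

    r:     (N, 3) atom positions.
    mu:    (K,) kernel centres (learnable in the real model).
    sigma: (K,) kernel widths  (learnable in the real model).
    Returns psi with shape (N, N, K).
    """
    # Pairwise Euclidean distances ||r_i - r_j||, shape (N, N).
    diff = r[:, None, :] - r[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Normalised distances per kernel, shape (N, N, K).
    psi_bar = (dist[..., None] - mu) / np.abs(sigma)
    # Gaussian kernel values (keeping the leading minus sign of Eq. 57).
    return -np.exp(-0.5 * psi_bar**2) / (np.sqrt(2 * np.pi) * np.abs(sigma))
```

Because only pairwise distances enter, the output is unchanged under any rotation or translation of the positions, which is exactly the invariance argued for above.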
## B Hardware And Acceleration

## B.1 Hardware

We train our models using a BOW-POD16, which contains 16 IPU processors, delivering a total of 5.6 petaFLOPS of float16 compute and 14.4 GB of in-processor SRAM which is accessible at an aggregate bandwidth of over a petabyte per second. This compute and memory are then distributed evenly over 1472 tiles per processor. This architecture has two key attributes that enable high performance on GNN and other AI workloads (Bilbrey et al., 2022): memory is kept as close to the compute as possible (i.e., using on-chip SRAM rather than off-chip DRAM), which maximises bandwidth for a nominal power budget; and compute is split up into many small independent arithmetic units, meaning that any available parallelism can be extremely well utilised. In particular, this enables very high performance for sparse communication ops, like gather and scatter, and achieves high FLOP utilisation even with complex configurations of smaller matrix multiplications. Both of these cases are particularly prevalent in MPNN structures like those found in GPS++. To exploit the architectural benefits of the IPU and maximise utilisation, understanding the program structure ahead of time is key. This means all programs must be compiled end-to-end, opening up a range of opportunities for optimisation but also adding the constraint that tensor shapes must be known and fixed at compile time.

## B.2 Batching And Packing

To enable fixed tensor sizes with variable-sized graphs, it is common to pad the graphs to the maximum node and edge counts in the dataset. This, however, can lead to lots of compute being wasted on padding operations, particularly in cases where there are large variations in the graph sizes.
To combat this, it is common to *pack* a number of graphs into a fixed-size shape to minimise the amount of padding required. This abstraction is common in graph software frameworks like PyTorch Geometric (Fey & Lenssen, 2019) and has been shown to achieve as much as a 2x throughput improvement for variable-length sequence models (Krell et al., 2021). Packing graphs into one single large pack, however, has a couple of significant downsides: the memory and compute complexity of all-to-all attention layers is O(n^2) in the pack size, not the individual graph sizes, and allowing arbitrary communication between all nodes in the pack forces the compiler to choose sub-optimal parallelisation schemes for the gather/scatter operations. To strike a balance between these two extremes, we employ a two-tiered hierarchical batching scheme that packs graphs into a fixed size but then batches multiple packs to form the micro-batch. We define the maximum pack size to be 60 nodes, 120 edges and 8 graphs, then use a simple streaming packing method where graphs are added to the pack until either the total nodes, edges or graphs would exceed the maximum size. This achieves 87% packing efficiency of the nodes and edges with on average 3.6 graphs per pack, though we believe that this could be increased by employing a more complex packing strategy (Krell et al., 2021). We then form micro-batches of 8 packs which are pipelined (Huang et al., 2018) over 4 IPUs, accumulating over 8 micro-batches and replicated 4 times to form a global batch size of 921 graphs distributed over 16 IPUs. For the MPNN-only model, the micro-batch size was increased to 15, forming global batches of 1737 graphs.

## B.3 Numerical Precision

To maximise compute throughput and memory efficiency, it is now common practice to use lower-precision numerical formats in deep learning (Micikevicius et al., 2017).
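The streaming packing heuristic described in section B.2 can be sketched as follows. This is illustrative only: graphs are represented by their (node, edge) counts rather than real tensors, the function name is ours, and the production implementation may handle ties and overflow differently:

```python
def pack_graphs(graphs, max_nodes=60, max_edges=120, max_graphs=8):
    """Greedy streaming packing: append graphs to the current pack until the
    node, edge, or graph budget would be exceeded, then start a new pack.

    graphs: iterable of (num_nodes, num_edges) tuples.
    Returns a list of packs, each a list of (num_nodes, num_edges) tuples.
    """
    packs, current = [], []
    n_nodes = n_edges = 0
    for n, e in graphs:
        # Flush the current pack if this graph would overflow any budget.
        if current and (n_nodes + n > max_nodes
                        or n_edges + e > max_edges
                        or len(current) + 1 > max_graphs):
            packs.append(current)
            current, n_nodes, n_edges = [], 0, 0
        current.append((n, e))
        n_nodes += n
        n_edges += e
    if current:
        packs.append(current)
    return packs
```

A greedy single-pass scheme like this trades some packing efficiency for O(1) state per pack; as noted above, a more complex strategy (Krell et al., 2021) could raise the reported 87% efficiency further.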
On IPUs, using float16 increases the peak FLOP rate by 4x compared to float32 and also makes more effective usage of the high-bandwidth on-chip SRAM. For this reason we use float16 for nearly all¹ compute and also for the majority² of the weights; this is made possible, without loss of accuracy, by enabling the hardware-level stochastic rounding of values. While we do use a loss scaling (Micikevicius et al., 2017) value of 1024, we find that our results are robust to a wide range of choices. We also use float16 for the first-order moment in Adam but keep the second-order moment in float32 due to the large dynamic-range requirements of the sum-of-squares.

¹A few operations, like the sum-of-squares in the variance calculations, are up-cast to float32 by the compiler.

²A small number of weights are kept in float32 for simplicity of the code rather than numerical stability.
Review 1: Summary: In this paper, the authors propose a novel graph-learning architecture designed specifically for molecular property prediction tasks. The proposed model, namely GPS++, consists of a combination of architectures/modules that exist in prior works. The paper primarily focuses on HOMO-LUMO energy gap prediction for molecules and shows the proposed method achieves state-of-the-art results on the OGB-LSC dataset PCQM4Mv2. In addition to the main results, the paper also includes extensive ablation studies to showcase the contribution of each neural module and input feature to the final results. Strengths and Weaknesses: Strengths: Although it studies only a very specific application in graph learning, the paper clearly defines its scope and provides enough context for the problem setup and discussion of prior works. The proposed architecture combines graph transformers and message-passing networks and achieves state-of-the-art results in the OGB-LSC task. The paper provides extensive empirical ablation studies on the properties of the node/edge/3D features as well as the effectiveness of the message-passing neural module. The architecture of the proposed method is well presented and the structure of the paper is easy to follow. Weaknesses: I have two main concerns for this work: novelty and generalizability. The proposed architecture largely resembles that of GPS, with limited modifications. Also, the motivations behind the design of the new GPS++ block are not clear. The paper only uses a single dataset in all experiments. It is not clear whether the proposed method only achieves better performance in this specific case or whether it is general enough to be used in other molecular graph learning tasks. Requested Changes: * Include empirical studies on more datasets/tasks to examine the general applicability of the GPS++ module. * Add more discussion of the motivations and reasoning behind the design of the new neural architecture.
Broader Impact Concerns: I don't have ethical concerns about this work. ================================================== Review 2: Summary: This work proposes a deep model for graph learning, particularly predicting the HOMO-LUMO gap of molecules. It closely follows the design of a previous work, GPS, which combines the operations of message passing and transformers. The main difference of this GPS++ work, compared to GPS, lies in a well-tuned MPNN model, feature engineering, and a dedicatedly designed training objective. Strengths and Weaknesses: #### Strengths #### (1) This work provides exhaustive experiments to demonstrate a large number of different model architecture designs and feature engineering choices. The extensive experimental results could be useful for the community for future research. (2) The writing and the organization of the paper are super clear and easy to follow. (3) The good empirical performance of the dedicatedly designed model, including feature engineering. #### Weaknesses #### (1) One of the biggest weaknesses is the novelty of the method. Since GPS++ closely follows the general design of previous models, the architecture itself is not new. Since TMLR does not require a lot of novelty, I will not criticize this aspect in terms of making an acceptance decision. (2) The experiments are only on a single dataset and task, although the dataset is super large. Readers may wonder if the exhaustive model designs and featurization work for other molecular property prediction tasks. So I highly recommend adding at least one more experiment to make the paper stronger. Requested Changes: (1) It would be better to evaluate the GPS++ method on more tasks. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The authors present a hybrid graph neural network model called GPS++, utilizing elements of both MPNNs and transformer architectures, focused on the HOMO-LUMO gap prediction task on the PCQM4Mv2 dataset.
The paper is a mainly empirical contribution, including and evaluating various features and network blocks within the GPS framework, resulting in a very good performance on the selected regression task, outperforming other state-of-the-art GNN models. The paper also includes extensive ablation studies, clearly showcasing the contribution of the individual features/network components to the overall performance, leading to the interesting finding that most of the performance can be attributed to the MPNN component as opposed to the attention component of the architecture, with the MPNN alone showing better performance on the 2D structure task. Strengths and Weaknesses: Strengths: The main strength of this paper is the detailed empirical evaluation and the extensive ablation studies of the different design choices of the network architecture. The authors utilize the generality of the GPS framework to combine various established feature engineering methods and network components, and in doing so achieve a state-of-the-art performance on the selected task. Furthermore, the detailed ablation studies can potentially be used to guide the design decisions of researchers or engineers when confronted with a similar task in the future. Another strength of the work is its clarity and readability, with the authors clearly stating the reason and justification for each design decision, and having a well-defined notation, making the equations easy to follow. Weaknesses: The two main weaknesses of this work are its narrow focus and the lack of novelty. Most of the other state-of-the-art GNN models included in the paper as a point of comparison have been evaluated on a number of graph-based tasks and datasets, showing their generality and potential for wide-range application. On the other hand, GPS++ was only evaluated on the HOMO-LUMO gap prediction task of the PCQM4Mv2 dataset.
Due to the large number and specificity of the components used in GPS++, it is unclear whether its good performance generalizes to other graph-based tasks or whether its architecture is fine-tuned to provide exceptional results only for the selected task. In addition to the narrow focus, the work also fails to present any novel theoretical contributions to the field, although to the credit of the authors they also do not claim any such contribution. The model is a specific instantiation of the GPS framework, which is clearly signaled by its name GPS++, and largely uses already established features and network components that are introduced in many of the competing models. As mentioned previously, the contribution mainly comes in the way that these components are combined together and the extensive ablation studies. Requested Changes: Overall, I think the paper is good as it is: the authors set a specific objective and achieve it, giving the paper a narrow but clear structure. While it would certainly be beneficial to include results for other similar datasets such as QM9, PDBBind, etc., which would significantly strengthen the work, I think the work as it is just about satisfies the acceptance criteria of TMLR and I would recommend it for acceptance without requiring any major changes in the content. However, for the purposes of reproducibility, I believe it is also important for the authors to provide access to the code used to produce the results, and unless I missed it, I did not see the code included as part of the submission. Not having the code available is potentially a reason to not accept the paper, since reproducibility is even more important given the empirical nature of this work and lack of theoretical novelty. In terms of minor changes, some more details on the training procedure would be appreciated, specifically about how the auxiliary task of 3D position denoising is incorporated into the training process.
The authors give ratios of the losses for the noisy nodes/edges task during training, but no such details are given for the 3D denoising task. Broader Impact Concerns: I have no concerns about the ethical implications of this work. ================================================== Review 4: Summary: This paper extends GPS. The proposed method GPS++ is a hybrid model that combines message-passing neural networks and graph transformer models. The authors tackled molecular property prediction. The proposed method achieved state-of-the-art performance. Lastly, the authors provide extensive ablation studies to investigate the effects of various features, structural encodings, and architectural components. Strengths and Weaknesses: This paper studied how to optimize the input features to improve the performance of GPS. The proposed method achieved state-of-the-art performance with an interesting hybrid architecture that combines transformers and message-passing neural networks, a type of graph neural network. The paper reads well, and the discussion in the paper is straightforward. The authors introduced the augmented general powerful scalable graph transformer in Fig. 1, the new GPS++ layer, and feature sets. An extensive ablation study supports the value of each component in the proposed method. However, it is not clear which component is newly proposed in this paper. The authors did not explicitly compare the proposed method with the previous method/baseline GPS regarding neural network architectures. In addition, in Table 2, the main experimental results raise a couple of concerns. First, in the experimental setting without 3D positional information, GPS has half as many parameters as GPS++. This is a huge difference. Also, the proposed hybrid method GPS++ [No 3D Position] with more parameters underperforms GPS++ [MPNN-only, No 3D Position].
Although the proposed method achieved a considerable performance gain in the setting with limited information, with 3D positional information the performance gain is marginal, and GPS is not even compared. In sum, the proposed extension has limited novelty, and the performance gain is questionable. **Strengths**: 1. The proposed method achieved strong performance and good efficiency by extensive optimization of architectural components, features, and structural encodings. 2. An extensive ablation study is provided and each component in the final model is well justified by empirical performance gain. **Weaknesses**: 1. Although this paper admitted that the proposed method is a specific implementation of GPS, there is no clear comparison between the GPS and GPS++ layers. So, it is hard to evaluate the contributions of this paper unless the differences are explicitly discussed. 2. This paper is incremental and the method section mainly introduces the implementation details including feature engineering. It may be crucial for improving performance, but beyond the specific benchmarks the feature engineering has limited academic merit. 3. ~~In the setting with 3D positional information, the performance gain is marginal. Also, in Table 2, compared to GPS, GPS++ seems to improve the performance but the model size is doubled. So, it may not be a fair comparison. The authors need to provide a well-tuned GPS with a comparable number of parameters~~. [addressed by authors' feedback: GPS++ (8 layers)] 4. In Table 2, GPS is missing in the setting with 3D positional information. Requested Changes: 1. ~~Overall the paper reads well. However, as mentioned above, the main contribution/difference between GPS and the proposed model needs to be discussed more clearly.~~ [addressed by author feedback, Section 4. in the revised version] 2. ~~Given the graph Laplacian, eigenvectors and eigenvalues can be used for positional embeddings or features.
However, it is not clear how to compute them for heterogeneous graphs with various edge/node types. Did you convert the heterogeneous graphs into a homogeneous graph and compute $X^\text{LapVec}$ and $X^\text{LapVal}$?~~ [addressed by author feedback] treated as a homogeneous graph 3. ~~In Eq (7), why is $\sum_{(i,v) \in E}x_v^l$ included? Often, message passing along reversed edges improves the performance.~~ [addressed by author feedback] 4. ~~The feature set Ying21 is missing in Table A.1~~. [addressed by author feedback] Broader Impact Concerns: I have no ethical concerns. ================================================== Metareview: Recommendation: Accept as is Comment: Both reviewers and authors engaged in a fruitful discussion that resulted in substantial improvements of the paper. All reviewers like the paper, especially after the revisions, but are also concerned about the lack of changes over the original GPS method. In the end the reviewers were convinced that the changes merit publication. ==================================================
# TransFool: An Adversarial Attack Against Neural Machine Translation Models

Sahar Sadrizadeh *sahar.sadrizadeh@epfl.ch* EPFL, Lausanne, Switzerland

Ljiljana Dolamic *ljiljana.dolamic@ar.admin.ch* Armasuisse S+T, Thun, Switzerland

Pascal Frossard *pascal.frossard@epfl.ch* EPFL, Lausanne, Switzerland

Reviewed on OpenReview: *https://openreview.net/forum?id=sFk3aBNb81*

## Abstract

Deep neural networks have been shown to be vulnerable to small perturbations of their inputs, known as adversarial attacks. In this paper, we investigate the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks and propose a new attack algorithm called *TransFool*. To fool NMT models, TransFool builds on a multi-term optimization problem and a gradient projection step. By integrating the embedding representation of a language model, we generate fluent adversarial examples in the source language that maintain a high level of semantic similarity with the clean samples. Experimental results demonstrate that, for different translation tasks and NMT architectures, our white-box attack can severely degrade the translation quality while the semantic similarity between the original and the adversarial sentences stays high. Moreover, we show that TransFool is transferable to unknown target models. Finally, based on *automatic* and *human* evaluations, TransFool leads to improvement in terms of success rate, semantic similarity, and fluency compared to the existing attacks both in white-box and black-box settings. Thus, TransFool permits us to better characterize the vulnerability of NMT models and outlines the necessity to design strong defense mechanisms and more robust NMT systems for real-life applications.

## 1 Introduction

The impressive performance of Deep Neural Networks (DNNs) in different areas such as computer vision (He et al., 2016) and Natural Language Processing (NLP) (Vaswani et al., 2017) has led to their widespread usage in various applications.
With such an extensive usage of these models, it is important to analyze their robustness and potential vulnerabilities. In particular, it has been shown that the outputs of these models are susceptible to imperceptible changes in the input, known as adversarial attacks (Szegedy et al., 2014). Adversarial examples, which differ from the original inputs in an imperceptible manner, cause the target model to generate incorrect outputs. If these models are not robust enough to these attacks, they cannot be reliably used in applications with security requirements. To address this issue, many recent studies have been devoted to the effective generation of adversarial examples, the defense against attacks, and the analysis of the vulnerabilities of DNN models (Moosavi-Dezfooli et al., 2016; Madry et al., 2018; Ortiz-Jiménez et al., 2021). The dominant methods to craft imperceptible attacks for continuous data, e.g., audio and image data, are based on gradient computation and various optimization strategies. However, these methods cannot be directly extended to NLP models due to the discrete nature of the tokens in the corresponding representations (i.e., words, subwords, and characters). Another challenge in dealing with textual data is the characterization of the imperceptibility of the adversarial perturbation. The ℓp-norm is widely used to measure imperceptibility for image data, but it does not apply to textual data, where manipulating only one token in a sentence may significantly change the semantics. Moreover, in gradient-based methods, it is challenging to incorporate linguistic constraints in a differentiable manner. Hence, optimization-based methods are more difficult and less investigated for adversarial attacks against NLP models.
Currently, most attacks in textual data are gradient-free and simply based on heuristic word replacement, which may result in *sub-optimal* performance (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020; Li et al., 2020; Morris et al., 2020; Zang et al., 2020; Guo et al., 2021; Sadrizadeh et al., 2022; Wang et al., 2022). An important task in NLP is Neural Machine Translation (NMT), which involves automatically converting a sequence of words in a source language to a sequence of words in a target language (Bahdanau et al., 2015). By using DNN models, NMT systems have achieved exceptional performance and are increasingly used in safety and security sensitive applications like medical and legal applications (Vieira et al., 2021). In this application also, adversarial attacks provide insights into the vulnerabilities of these systems, particularly when exposed to samples outside the training distribution. In particular, we study untargeted attacks, in which the adversary aims to generate adversarial examples that are semantically similar to the input sentences while their corresponding translations differ significantly (Michel et al., 2019; Cheng et al., 2019; 2020a; Zhang et al., 2021). It is expected that a good NMT model generates similar translations for similar sentences. However, even small perturbations such as changing a name, tense, gender, or numbers can degrade the translation quality. This unexpected behavior of NMT models has serious implications for real-world applications. For instance, if an NMT model performs well on a medical text, we expect that changing the name of the patient, diagnosis, or document number would still result in a good translation by the NMT model, which is not always the case. Therefore, these adversarial attacks, which are similar to the inputs but lead to a significant change in the model's output, outline weaknesses in the NMT model. 
Our goal is to evaluate the robustness of NMT models to such adversarial sentences to eventually improve the *security* of applications and the *robustness* of such models. In this paper, we propose *TransFool* to build *semantically similar* and *fluent* adversarial attacks against NMT models. We build a new solution to the challenges associated with gradient-based adversarial attacks against textual data. To find an adversarial sentence that is fluent and semantically similar to the input sentence but highly degrades the translation quality of the target model, we propose a multi-term optimization problem over the tokens of the adversarial example. We consider the white-box attack setting, where the adversary has access to the target model and its parameters. White-box attacks are widely studied since they reveal the vulnerabilities of the systems and are used in benchmarks. To ensure that the generated adversarial examples are similar to the original sentences, we incorporate a Language Model (LM) in our method in two ways. First, we consider the loss of a Causal Language Model (CLM) in our optimization problem in order to impose the syntactic correctness of the adversarial example. Second, by working with the embedding representation of an LM, instead of the NMT model, we ensure that similar tokens are close to each other in the embedding space (Tenney et al., 2019). This enables the definition of a similarity term between the respective tokens of the clean and adversarial sequences. Hence, we include a similarity constraint in the proposed optimization problem, which uses the LM embeddings. Finally, our optimization contains an adversarial term to maximize the loss of the target NMT model. The generated adversarial example, i.e., the minimizer of the proposed optimization problem, should consist of meaningful tokens, and hence, the proposed optimization problem should be solved in a discrete space. 
By using a gradient projection technique, we first perform a gradient descent step in the continuous embedding space and then project the resulting embedding vectors onto the most similar valid tokens. In the projection step, we again use the LM embeddings and project the output of the gradient descent step into the nearest meaningful token in the embedding space (with maximum cosine similarity). We test our method against different NMT models with transformer structures, which are now widely used for their exceptional performance. For different NMT architectures and translation tasks, experiments show that our white-box attack can reduce the translation quality while it maintains a high level of semantic similarity with the clean samples. Also, we extend TransFool to black-box settings and show that it can fool unknown target models. Overall, automatic and human evaluations show that in both white-box and black-box settings, TransFool outperforms the existing attacks in terms of success rate, semantic similarity, and fluency. In summary, our contributions are as follows:

- We define a new optimization problem to compute semantic-preserving and fluent attacks against NMT models. The objective function contains several terms: an adversarial loss to maximize the loss of the target NMT model; a similarity term to ensure that the adversarial example is *similar* to the original sentence; and the loss of a CLM to generate *fluent* and *natural* adversarial examples.
- We propose a new strategy to incorporate linguistic constraints in our attack in a differentiable manner. Since LM embeddings provide a meaningful representation of the tokens, we use them instead of the NMT embeddings to compute the similarity between two tokens.
- We design a white-box attack algorithm, *TransFool*, against NMT models by solving the proposed optimization problem with gradient projection.
Our attack, which operates at the token level, is effective against state-of-the-art NMT models and *outperforms* prior works.
- By using the transferability of adversarial attacks to other models, we extend the proposed white-box attack to the black-box setting. Our attack is highly effective even when the *target languages* of the target NMT model and the reference model are *different*. To our knowledge, such *cross-lingual* attacks have not been investigated before.

The rest of the paper is organized as follows. We review the related works in Section 2. In Section 3, we formulate the problem of adversarial attacks against NMT models, and propose an optimization problem to generate adversarial examples. We describe our attack algorithm in Section 4. In Section 5, we discuss the experimental results and evaluate TransFool against different transformer models and translation tasks. Moreover, we evaluate our attack in black-box settings and show that TransFool has very good transfer properties. Finally, the paper is concluded in Section 6.

## 2 Related Work

Adversarial attacks against NMT systems have been studied in recent years. First, Belinkov & Bisk (2018) show that character-level NMT models are highly vulnerable to character manipulations such as typos in a black-box setting. Similarly, Ebrahimi et al. (2018a) investigate the robustness of character-level NMT models. They propose a white-box adversarial attack based on HotFlip (Ebrahimi et al., 2018b) and greedily change the important characters to decrease the translation quality (untargeted attack) or mute/push a word in the translation (targeted attack). On the other hand, many of the adversarial attacks against NMT models are rather based on word replacement. Cheng et al. (2019) propose a white-box attack where they first select random words of the input sentence and replace them with a similar word.
In particular, in order to limit the search space, they find some candidates with the help of a language model and choose the token that aligns best with the gradient of the adversarial loss to cause more damage to the translation. Michel et al. (2019) and Zhang et al. (2021) similarly find important words in the sentence but replace them with a neighbor word in the embedding space to create adversarial examples. However, these methods use *heuristic* strategies which are likely to have sub-optimal performance. Cheng et al. (2020a) consider another strategy and propose Seq2Sick, a targeted white-box attack against NMT models. They introduce an optimization problem in the embedding space of the NMT model and solve it by gradient projection. Although they have a projection step to the nearest embedding vector, they use the NMT embedding space, which is not effective in capturing the similarity between tokens. Other types of attacks against NMT models with different threat models and purposes have also been investigated in the literature. Wallace et al. (2020) propose several new types of attack: universal adversarial attacks, which consist of a single snippet of text that can be added to any input sentence to mislead the NMT model, and malicious nonsense, which fools the NMT model into generating malicious translations from nonsense inputs. Some papers focus on making NMT models robust to perturbations of the inputs (Cheng et al., 2018; 2020b; Tan et al., 2021). Some other papers use adversarial attacks to enhance the NMT models in some aspects, such as word sense disambiguation (Emelin et al., 2020), robustness to subword segmentation (Park et al., 2020), and robustness of unsupervised NMT (Yu et al., 2021). Data poisoning attacks against NMT models are studied in (Xu et al., 2021; Wang et al., 2021).
Another type of attack, whose purpose is to change multiple words while ensuring that the output of the NMT model remains unchanged, is explored in (Chaturvedi et al., 2019; 2021). Another attack is presented in (Cai et al., 2021), where the adversary uses the hardware faults of systems to fool NMT models. All these attacks show the vulnerability of NMT systems to adversarial attacks in various scenarios different from ours. In this paper, we focus on the untargeted attack against NMT models, which aims to deceive the NMT model into generating substantially different translations for similar sentences. Such attacks are particularly important since they expose an unexpected behaviour of the NMT systems.

In summary, existing adversarial attacks often use heuristic strategies based on *word-replacement* and are likely to have sub-optimal performance, or they use the *NMT embedding* space to find similar tokens, which is not effective in capturing the similarity between tokens. Finally, none of these attacks study the *transferability* to black-box settings. To address these issues, we introduce *TransFool*, a method for crafting effective and fluent adversarial sentences that are similar to the original sentences.

## 3 Optimization Problem

In this section, we first present our new formulation for generating adversarial examples against NMT models, along with the different terms that form our optimization problem.

Adversarial Attack. Consider $\mathcal{X}$ to be the source language space and $\mathcal{Y}$ to be the target language space. The NMT model $f: \mathcal{X} \to \mathcal{Y}$ generally has an encoder-decoder structure (Bahdanau et al., 2015; Vaswani et al., 2017) and aims to maximize the translation probability $p(\mathbf{y}_{\text{ref}}|\mathbf{x})$, where $\mathbf{x} \in \mathcal{X}$ is the input sentence in the source language and $\mathbf{y}_{\text{ref}} \in \mathcal{Y}$ is the ground-truth translation in the target language. To process textual data, each sentence is decomposed into a sequence of tokens.
Therefore, the input sentence $\mathbf{x} = x_1 x_2 \ldots x_k$ is split into a sequence of $k$ tokens, where $x_i$ is a token from the vocabulary set $\mathcal{V}_{\mathcal{X}}$ of the NMT model, which contains all the tokens from the source language. For each token in the translated sentence $\mathbf{y}_{\text{ref}} = y_{\text{ref},1}, \ldots, y_{\text{ref},l}$, the NMT model generates a probability vector over the target language vocabulary set $\mathcal{V}_{\mathcal{Y}}$ by applying a softmax function to the decoder output. The adversary is looking for an adversarial sentence $\mathbf{x}'$, which is tokenized into a sequence of $k$ tokens $\mathbf{x}' = x'_1 x'_2 \ldots x'_k$, in the source language that fools the target NMT model, i.e., the translation of the adversarial example $f(\mathbf{x}')$ is far from the true translation. However, the adversarial example $\mathbf{x}'$ and the original sentence $\mathbf{x}$ should be similar so that the true translation of the adversarial example stays similar to $\mathbf{y}_{\text{ref}}$. As is common in NMT models (Vaswani et al., 2017; Tang et al., 2020), to feed the discrete sequence of tokens into the NMT model, each token is converted to a continuous vector, known as an embedding vector, using a lookup table. In particular, let $emb(\cdot)$ be the embedding function that maps the input token $x_i$ to the continuous embedding vector $emb(x_i) = \mathbf{e}_i \in \mathbb{R}^m$, where $m$ is the embedding dimension of the target NMT model. Therefore, the input of the NMT model is a sequence of embedding vectors representing the tokens of the input sentence, i.e., $\mathbf{e}_{\mathbf{x}} = [\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_k] \in \mathbb{R}^{(k \times m)}$. In the same manner, for the adversarial example, we can define $\mathbf{e}_{\mathbf{x}'} = [\mathbf{e}'_1, \mathbf{e}'_2, \ldots, \mathbf{e}'_k] \in \mathbb{R}^{(k \times m)}$. To generate an adversarial example for a given input sentence, we introduce an optimization problem with respect to the embedding vectors of the adversarial sentence $\mathbf{e}_{\mathbf{x}'}$. Our optimization problem is composed of multiple terms: an adversarial loss, a similarity constraint, and the loss of a language model. The adversarial loss causes the target NMT model to generate a faulty translation.
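The token-to-embedding lookup described above can be sketched as follows; the three-token vocabulary and the lookup table are purely hypothetical stand-ins for the NMT model's own tokenizer and embedding matrix:

```python
import numpy as np

# Minimal sketch of the lookup table described above: each token index
# selects a row of the NMT embedding table, turning the discrete sentence
# into the matrix e_x in R^(k x m). Vocabulary and table are toy stand-ins.
vocab = {"the": 0, "cat": 1, "sat": 2}           # hypothetical V_X
emb_table = np.arange(9.0).reshape(3, 3)         # toy lookup table, m = 3

tokens = ["the", "cat", "sat"]                   # tokenized sentence x
e_x = emb_table[[vocab[t] for t in tokens]]      # e_x = [e_1, ..., e_k]
print(e_x.shape)  # (3, 3): k tokens, each an m-dimensional embedding
```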
Moreover, with a language model loss and a similarity constraint, we constrain the generated adversarial example to be a fluent sentence and semantically similar to the original sentence, respectively. The proposed optimization problem, which finds the adversarial example $\mathbf{x}'$ from its embedding representation $\mathbf{e}_{\mathbf{x}'}$ by using a lookup table, is defined as follows:

$$\mathbf{x}'\leftarrow\operatorname{argmin}_{\mathbf{e}_{i}'\in\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}}\;[\mathcal{L}_{Adv}+\alpha\mathcal{L}_{Sim}+\beta\mathcal{L}_{LM}],\tag{1}$$

where $\alpha$ and $\beta$ are the hyperparameters that control the relative importance of each term. Moreover, we call the continuous space of the embedding representations the embedding space and denote it by $\mathcal{E}$, and we denote the discrete subspace of the embedding space $\mathcal{E}$ containing the embedding representation of every token in the source language vocabulary set by $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$. We now discuss the different terms of the optimization function in detail.

Adversarial Loss. In order to create an adversarial example whose translation is far away from the reference translation $\mathbf{y}_{\text{ref}}$, we try to maximize the training loss of the target NMT model. Since the NMT models are trained to generate the next token of the translation given the translation up until that token, we are looking for the adversarial example that maximizes the probability of wrong translation (by minimizing the probability of correct translation) for the $i$-th token, given that the NMT model has produced the correct translation up to step $(i-1)$:

$$\mathcal{L}_{Adv}=\frac{1}{l}\sum_{i=1}^{l}\log(p_{f}(y_{\text{ref},i}|\mathbf{e}_{\mathbf{x}'},\{y_{\text{ref},1},...,y_{\text{ref},(i-1)}\})),\tag{2}$$

where $p_{f}(y_{\text{ref},i}|\mathbf{e}_{\mathbf{x}'},\{y_{\text{ref},1},...,y_{\text{ref},(i-1)}\})$ is the cross entropy between the predicted token distribution by the NMT model and the delta distribution on the token $y_{\text{ref},i}$, which is one for the correct translated token, $y_{\text{ref},i}$, and zero otherwise.
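A numerical sketch of the adversarial loss in Eq. (2), assuming we already have the NMT model's per-step output distributions under teacher forcing (the toy vocabulary and probabilities below are hypothetical, not the output of a real NMT model):

```python
import numpy as np

# probs[i] stands in for the softmax over the target vocabulary at step i,
# computed with the reference prefix y_ref,1..y_ref,(i-1) fed to the decoder.
def adversarial_loss(probs, y_ref):
    """Average log-probability of the correct translation tokens, Eq. (2).

    Minimizing this value pushes the model's predicted distribution away
    from the delta distribution on the reference tokens.
    """
    l = len(y_ref)
    return sum(np.log(probs[i][y_ref[i]]) for i in range(l)) / l

# Hypothetical 4-token target vocabulary and a 2-token reference translation.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.6, 0.1, 0.1]])
y_ref = [0, 1]
print(adversarial_loss(probs, y_ref))  # average of log(0.7) and log(0.6)
```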
By minimizing $\log(p_f(\cdot))$, normalized by the sentence length $l$, we force the output probability vector of the NMT model to differ from the delta distribution on the token $y_{\text{ref},i}$, which may cause the predicted translation to be wrong.

Similarity Constraint. To ensure that the generated adversarial example is similar to the original sentence, we need to add a similarity constraint to our optimization problem. It has been shown that the embedding representation of a language model captures the semantics of the tokens (Tenney et al., 2019; Shavarani & Sarkar, 2021). Suppose that the embedding representation of the original sentence by a language model (which may differ from the NMT embedding representation $\mathbf{e}_{\mathbf{x}}$) is $\mathbf{v}_{\mathbf{x}} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k] \in \mathbb{R}^{(k \times n)}$, where $n$ is the embedding dimension of the LM. Likewise, let $\mathbf{v}_{\mathbf{x}'}$ denote the sequence of LM embedding vectors regarding the tokens of the adversarial example. We can define the distance between the $i$-th tokens of the original and the adversarial sentences by computing the cosine distance between their corresponding LM embedding vectors:

$$\forall i\in\{1,...,k\}:\quad r_{i}=1-\frac{\mathbf{v}_{i}^{\top}\mathbf{v}_{i}^{\prime}}{\|\mathbf{v}_{i}\|_{2}\cdot\|\mathbf{v}_{i}^{\prime}\|_{2}}.\tag{3}$$

The cosine distance is zero if the two tokens are the same, and it has larger values for two unrelated tokens. We want the adversarial sentence to differ from the original sentence in only a few tokens. Therefore, the cosine distance between most of the tokens in the original and adversarial sentence should be zero, which causes the cosine distance vector $[r_1, r_2, \ldots, r_k]$ to be sparse.
To ensure the sparsity of the cosine distance vector, instead of the $\ell_0$ norm, which is not differentiable, we can define the similarity constraint as the $\ell_1$ norm relaxation of the cosine distance vector, normalized to the length of the sentence:

$$\mathcal{L}_{Sim}=\frac{1}{k}\sum_{i=1}^{k}1-\frac{\mathbf{v}_{i}^{\top}\mathbf{v}_{i}^{\prime}}{\|\mathbf{v}_{i}\|_{2}\cdot\|\mathbf{v}_{i}^{\prime}\|_{2}}.\tag{4}$$

Language Model Loss. Causal language models are trained to maximize the probability of a token given the previous tokens. Hence, we can use the loss of a CLM, i.e., the negative log-probability, as a rough and differentiable measure for the fluency of the generated adversarial sentence. The loss of a CLM, which is normalized to the sentence length, is as follows:

$$\mathcal{L}_{LM}=-\frac{1}{k}\sum_{i=1}^{k}\log(p_{g}(\mathbf{v}_{i}^{\prime}|\mathbf{v}_{1}^{\prime},...,\mathbf{v}_{(i-1)}^{\prime})),\tag{5}$$

where $g$ is a CLM, and $p_{g}(\mathbf{v}_{i}^{\prime}|\mathbf{v}_{1}^{\prime},...,\mathbf{v}_{(i-1)}^{\prime})$ is the cross entropy between the predicted token distribution by the language model and the delta distribution on the token $\mathbf{v}_{i}^{\prime}$, which is one for the corresponding token in the adversarial example, $\mathbf{v}_{i}^{\prime}$, and zero otherwise.

To generate adversarial examples against a target NMT model, we propose to solve the optimization problem (1), which contains an adversarial loss term, a similarity constraint, and a CLM loss.

## 4 TransFool Attack Algorithm

We now introduce our algorithm for generating adversarial examples against NMT models. The block diagram of our proposed attack is presented in Figure 1. We are looking for an adversarial example with tokens in the vocabulary set $\mathcal{V}_{\mathcal{X}}$ and the corresponding embedding vectors in the subspace $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$. Hence, the optimization problem (1) is discrete. The high-level idea of our algorithm is to use gradient projection to solve (1) in the discrete subspace $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$.
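This alternation of continuous gradient steps and discrete projection can be illustrated with a small runnable toy example. The quadratic stand-in objective, the step size, and the four-entry one-dimensional "vocabulary" below are purely illustrative; in the actual attack, the gradient comes from back-propagating the loss terms of Section 3 through the NMT model and the language model:

```python
import numpy as np

vocab_emb = np.array([[0.0], [1.0], [2.0], [3.0]])  # toy discrete subspace E_V_X
target = np.array([2.6])                            # minimizer of the toy loss

def grad(e):                  # gradient of the stand-in objective ||e - target||^2
    return 2.0 * (e - target)

def project(e):               # nearest vocabulary embedding (toy projection step)
    return vocab_emb[np.argmin(np.linalg.norm(vocab_emb - e, axis=1))]

e = np.array([0.0])           # start from the original token's embedding
seen = set()
for _ in range(50):           # K: maximum number of iterations
    e = e - 0.1 * grad(e)     # Step 1: gradient descent in the continuous space
    e_p = project(e)          # Step 2: projection to the discrete subspace
    if tuple(e_p) not in seen:  # only accept previously unseen projections
        seen.add(tuple(e_p))
        e = e_p               # restart the descent from the projected point
print(project(e))             # → [3.], the vocabulary entry nearest the toy optimum
```

Note how the loop only commits to a projected point when it has not been visited before, mirroring the loop-avoidance rule described in Section 4.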
The objective function of (1) is a function of the NMT and LM embedding representations of the adversarial example, $\mathbf{e}_{\mathbf{x}'}$ and $\mathbf{v}_{\mathbf{x}'}$, respectively. Since we aim to minimize the optimization problem with respect to $\mathbf{e}_{\mathbf{x}'}$, we need to find a transformation between the embedding space of the LM and the target NMT model. To this aim, as depicted in Figure 1, we propose to replace the embedding layer of a pre-trained language model with a Fully Connected (FC) layer, which gets the embedding vectors of the NMT model as its input. Then, we train the language model and the FC layer simultaneously with the causal language modeling objective. Therefore, we can compute the LM embedding vectors as a function of the NMT embedding vectors: $\mathbf{v}_i = FC(\mathbf{e}_i)$, where $FC \in \mathbb{R}^{m \times n}$ is the trained FC layer.

The pseudo-code of our attack can be found in Algorithm 1.

    Algorithm 1: TransFool Adversarial Attack
    Input:  f(.): target NMT model, V_X: vocabulary set, FC: fully connected layer,
            x: input sentence, y_ref: ground-truth translation of x,
            λ: BLEU score ratio, α, β: hyperparameters,
            K: maximum number of iterations, γ: step size
    Output: x': generated adversarial example
    Initialization: ∀i ∈ {1, ..., k}: e_g,i, e_p,i ← e_i;  s ← empty set;
            itr ← 0;  thr ← BLEU(f(e_x), y_ref) × λ
    while itr < K do
        itr ← itr + 1
        Step 1: gradient descent in the continuous embedding space:
            e_g ← e_g − γ · ∇_{e_x'}(L_Adv + α L_Sim + β L_LM)
            v_g ← FC(e_g)
        Step 2: projection to the discrete subspace E_{V_X} and update if the sentence is new:
            for i ∈ {1, ..., k} do
                e_p,i ← argmax_{e ∈ E_{V_X}} (FC(e)^⊤ v_g,i) / (∥FC(e)∥_2 · ∥v_g,i∥_2)
            end for
            if e_p not in set s then
                add e_p to set s
                e_g ← e_p
                if BLEU(f(e_p), y_ref) ≤ thr then
                    break  (adversarial example is found)
                end if
            end if
    end while
    return e_x' ← e_p

In more detail, we first convert the discrete tokens of the sentence to continuous embedding vectors of the target NMT model, then we use the FC layer to compute the embedding representations of the tokens by the language model.
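The NMT-to-LM embedding bridge can be sketched as a single linear map. The weights below are random stand-ins; in TransFool the layer is trained jointly with the LM on a causal language modeling objective:

```python
import numpy as np

# Toy sketch of the FC layer described above: it maps each NMT embedding
# vector (dimension m) to an LM embedding vector (dimension n), so a whole
# sentence e_x in R^(k x m) maps to v_x in R^(k x n).
rng = np.random.default_rng(1)
m, n, k = 4, 3, 5
W = rng.normal(size=(n, m))        # stand-in for the trained FC weights
e_x = rng.normal(size=(k, m))      # toy NMT embeddings of a k-token sentence

v_x = e_x @ W.T                    # LM embeddings: v_i = FC(e_i)
print(v_x.shape)                   # (5, 3)
```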
Afterwards, we consider the continuous relaxation of the optimization problem, which means that we assume that the embedding vectors are in the continuous embedding space $\mathcal{E}$ instead of $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$. In each iteration of the algorithm, we first update the sequence of embedding vectors $\mathbf{e}_{\mathbf{x}'}$ in the opposite direction of the gradient (gradient descent). Let us denote the output of the gradient descent step for the $i$-th token by $\mathbf{e}_{g,i}$. Then we project the resultant embedding vectors, which are not necessarily in $\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}$, to the nearest token in the vocabulary set $\mathcal{V}_{\mathcal{X}}$. Since the distance in the embedding space of the language model approximates the similarity between the tokens, we use the LM embedding representations with the cosine similarity metric in the projection step to find the most similar token in the vocabulary. We can apply the trained fully connected layer $FC$ to find the LM embedding representations: $\mathbf{v}_g = FC(\mathbf{e}_g)$. Hence, the projected NMT embedding vector, $\mathbf{e}_{p,i}$, for the $i$-th token is:

$$\mathbf{e}_{p,i}=\operatorname{argmax}_{\mathbf{e}\in\mathcal{E}_{\mathcal{V}_{\mathcal{X}}}}\frac{FC(\mathbf{e})^{\top}\mathbf{v}_{g,i}}{\|FC(\mathbf{e})\|_{2}\cdot\|\mathbf{v}_{g,i}\|_{2}}.\tag{6}$$

Figure 1: Block diagram of *TransFool*.

However, due to the discrete nature of data, by applying the projection step in every iteration of the algorithm, we may face an undesirable situation where the algorithm gets stuck in a loop of previously computed steps. In order to circumvent this issue, we will only update the embedding vectors by the output of the projection step if the projected sentence has not been generated before. We perform the gradient descent and projection steps iteratively until a maximum number of iterations is reached, or the translation quality of the adversarial example relative to the original translation quality is less than a threshold.
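A toy sketch of the projection step in Eq. (6); the 4-token vocabulary and the FC weights below are handcrafted stand-ins, chosen so the cosine similarities are easy to verify by hand:

```python
import numpy as np

# Map the descent output into the LM space through the FC layer and pick
# the vocabulary token whose LM embedding has maximum cosine similarity.
FC = np.array([[1.0, 0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0, 1.0],
               [0.0, 0.0, 1.0, 1.0]])      # toy trained layer: v = FC @ e
nmt_emb = np.eye(4)                        # rows of E_V_X (toy NMT embeddings)

def project_token(e_g):
    v_g = FC @ e_g                         # LM-space image of the descent output
    lm_emb = nmt_emb @ FC.T                # LM embeddings of the whole vocabulary
    sims = (lm_emb @ v_g) / (np.linalg.norm(lm_emb, axis=1) * np.linalg.norm(v_g))
    return int(np.argmax(sims))            # index of the most similar valid token

# A perturbed embedding close to token 0 snaps back to token 0.
print(project_token(np.array([0.9, 0.0, 0.2, 0.0])))  # → 0
```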
To evaluate the translation quality, we use the BLEU score, which is a widely used metric in the literature:

$$\frac{\mathrm{BLEU}(f(\mathbf{e}_{\mathbf{x}'}),\mathbf{y}_{\text{ref}})}{\mathrm{BLEU}(f(\mathbf{e}_{\mathbf{x}}),\mathbf{y}_{\text{ref}})}\leq\lambda.\tag{7}$$

## 5 Experiments

In this section, we first discuss our experimental setup, and then we evaluate TransFool against different models and translation tasks, both in white-box and black-box settings.1

## 5.1 Experimental Setup

We conduct experiments on the English-French (En-Fr), English-German (En-De), and English-Chinese (En-Zh) translation tasks. We use the test set of WMT14 (Bojar et al., 2014) for the En-Fr and En-De tasks, and the test set of OPUS-100 (Zhang et al., 2020) for the En-Zh task. Some statistics of these datasets are presented in Appendix A. We evaluate TransFool against transformer-based NMT models. To verify that our attack is effective against various architectures, we attack the HuggingFace implementation of Marian NMT models (Junczys-Dowmunt et al., 2018) and the mBART50 multilingual NMT model (Tang et al., 2020). As explained in Section 4, the similarity constraint and the LM loss of the proposed optimization problem require an FC layer and a CLM. To this aim, for each NMT model, we train an FC layer and a CLM (with GPT-2 structure (Radford et al., 2019)) on the WikiText-103 dataset. We note that the input of the FC layer is the target NMT embedding representation of the input sentence. To find the minimizer of our optimization problem (1), we use the Adam optimizer (Kingma & Ba, 2014) with step size γ = 0.016. Moreover, we set the maximum number of iterations to 500. Our algorithm has three parameters: coefficients α and β in the optimization function (1), and the relative BLEU score ratio λ in the stopping criterion (7). We set λ = 0.4, β = 1.8, and α = 20.
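With λ = 0.4, the stopping criterion of Eq. (7) reduces to a simple ratio test on the two BLEU scores; a minimal sketch (the scores passed in are placeholders):

```python
# The attack stops once the BLEU score of the adversarial translation drops
# to at most a fraction lambda of the original translation's BLEU score.
def stop_criterion_met(bleu_adv, bleu_org, lam=0.4):
    return bleu_adv <= lam * bleu_org

print(stop_criterion_met(10.0, 30.0))  # True: 10 <= 0.4 * 30
print(stop_criterion_met(20.0, 30.0))  # False
```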
We chose these parameters experimentally, according to the ablation study available in Appendix B.1, to optimize the performance in terms of success rate, semantic similarity, and fluency. We compare our attack with the attack of Michel et al. (2019), which is a white-box untargeted attack against NMT models.2 We only consider one of their attacks, called kNN, which substitutes some words with their neighbors in the embedding space; their other attack considers swapping characters, which is too easy to detect. We also adapted *Seq2Sick* (Cheng et al., 2020a), a targeted attack against NMT models, which is based on an optimization problem in the NMT embedding space, to our untargeted setting.

For evaluation, we report different performance metrics: **(1) Attack Success Rate (ASR)**, which measures the rate of successful adversarial examples. Similar to (Ebrahimi et al., 2018a), we define an adversarial example as successful if the BLEU score of its translation is *less than half* of the BLEU score of the original translation. **(2) Relative decrease of translation quality**, measured in terms of *BLEU score*3 and *chrF* (Popović, 2015). We denote these two metrics by **RDBLEU** and **RDchrF**, respectively. We choose to compute the *relative decrease* in translation quality so that scores are comparable across different models and datasets (Michel et al., 2019). **(3) Semantic Similarity (Sim.)**, which is computed between the original and adversarial sentences and commonly approximated by the *Universal Sentence Encoder* (USE) (Yang et al., 2020). The USE model is trained on various NLP tasks to embed sentences

1Our source code is available at https://github.com/sssadrizadeh/TransFool. Appendix G also contains the license information and details of the assets (datasets, codes, and models).
2Source codes of (Cheng et al., 2019; 2020b), other untargeted white-box attacks against NMTs, are not publicly available.
into high-dimensional vectors. The cosine similarity between the USE embeddings of two sentences approximates their semantic similarity. **(4) Perplexity score (Perp.)**, which is a measure of the fluency of the adversarial example, computed by the perplexity score of *GPT-2 (large)*. **(5) Token Error Rate (TER)**, which measures the percentage of tokens that are modified by an adversarial attack.

3We use case-sensitive SacreBLEU on detokenized sentences.

**Marian NMT**

| Task | Method | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ |
|------|--------|------|---------|---------|-------|--------|------|
| En-Fr | TransFool | **69.38** | **0.57** | **0.23** | **0.85** | 182.45 | **13.91** |
| En-Fr | kNN | 36.53 | 0.36 | 0.16 | 0.82 | 389.78 | 19.15 |
| En-Fr | Seq2Sick | 27.01 | 0.21 | 0.16 | 0.75 | **175.31** | 13.97 |
| En-De | TransFool | **69.49** | **0.65** | **0.23** | **0.84** | **165.53** | **13.57** |
| En-De | kNN | 39.22 | 0.40 | 0.17 | 0.82 | 441.62 | 19.42 |
| En-De | Seq2Sick | 35.60 | 0.31 | 0.21 | 0.67 | 290.32 | 18.13 |
| En-Zh | TransFool | **73.82** | **0.74** | **0.31** | **0.88** | **102.49** | **11.82** |
| En-Zh | kNN | 31.12 | 0.33 | 0.18 | 0.86 | 180.27 | 15.95 |
| En-Zh | Seq2Sick | 28.76 | 0.26 | 0.25 | 0.73 | 161.84 | 17.48 |

**mBART50**

| Task | Method | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ |
|------|--------|------|---------|---------|-------|--------|------|
| En-Fr | TransFool | **60.68** | **0.53** | **0.22** | 0.84 | **121.12** | **10.58** |
| En-Fr | kNN | 30.84 | 0.29 | 0.11 | **0.85** | 336.47 | 21.03 |
| En-Fr | Seq2Sick | 25.53 | 0.19 | 0.13 | 0.75 | 151.92 | 13.55 |
| En-De | TransFool | **62.87** | **0.61** | **0.22** | 0.83 | **134.90** | **11.07** |
| En-De | kNN | 35.99 | 0.39 | 0.12 | **0.86** | 375.32 | 21.22 |
| En-De | Seq2Sick | 35.59 | 0.31 | 0.20 | 0.66 | 265.62 | 18.18 |
| En-Zh | TransFool | **57.50** | **0.67** | **0.26** | **0.90** | **74.75** | **7.77** |
| En-Zh | kNN | 27.25 | 0.32 | 0.14 | **0.90** | 160.27 | 16.58 |
| En-Zh | Seq2Sick | 24.25 | 0.31 | 0.18 | 0.78 | 105.42 | 13.58 |

Table 1: Performance of the white-box attack against different NMT models.

## 5.2 Performance In White-Box Settings

Now we evaluate TransFool in comparison to kNN and Seq2Sick against different NMT models. Table 1 shows the results in terms of different evaluation metrics.4 Overall, our attack is able to decrease the BLEU score of the target model to less than half of the BLEU score of the original translation for more than 60% of the sentences for all tasks and models (except for the En-Zh mBART50 model, where the ASR is 57.50%). Also, in all cases, the semantic similarity is more than 0.83, which shows that our attack can maintain a high level of semantic similarity with the clean sentences.
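The success-rate and relative-decrease metrics above can be sketched as follows; the per-sentence BLEU scores are toy values, purely illustrative:

```python
import numpy as np

def attack_success_rate(bleu_org, bleu_adv):
    """Fraction of sentences whose adversarial BLEU falls below half the original."""
    org, adv = np.asarray(bleu_org), np.asarray(bleu_adv)
    return float(np.mean(adv < 0.5 * org))

def relative_decrease(score_org, score_adv):
    """RDBLEU / RDchrF: relative drop of a corpus-level quality score."""
    return (score_org - score_adv) / score_org

# Toy per-sentence BLEU scores for three sentences.
bleu_org = [30.0, 25.0, 40.0]
bleu_adv = [10.0, 20.0, 15.0]
print(attack_success_rate(bleu_org, bleu_adv))  # 2 of 3 sentences succeed
print(relative_decrease(30.0, 15.0))            # 0.5
```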
In comparison to the baselines, TransFool obtains a higher success rate against different model structures and translation tasks, and it is able to reduce the translation quality more severely. Since the algorithm uses the gradients of the proposed optimization problem and is not based on token replacement, TransFool can highly degrade the translation quality. Furthermore, the perplexity score of the adversarial examples generated by TransFool is much lower than those of both baselines (except for the En-Fr Marian model, where it is slightly higher than that of Seq2Sick), which is due to the integration of the LM embeddings and the LM loss term in the optimization problem. Moreover, the token error rate of our attack is lower than that of both baselines, and the semantic similarity is preserved better by TransFool in almost all cases since we use the LM embeddings instead of the NMT ones for the similarity constraint. While kNN can also maintain similarity, Seq2Sick does not perform well in this criterion. In Appendix D.1, we also compute the similarity with BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020), which highly correlate with human judgments; the results show that TransFool is better than both baselines at preserving semantics. We also show the generalizability of TransFool to other translation tasks in Appendix D.2. Finally, as presented in Appendix D.3, the successful attacks by the baselines, as opposed to TransFool, are not semantic-preserving or fluent. We also compare the runtimes of TransFool and both baselines. In each iteration of our proposed attack, we need to perform a back-propagation through the target model and the language model to compute the gradients. Also, in some iterations (27 iterations per sentence on average), a forward pass is required to compute the output of the target model to check the stopping criterion.
For the Marian NMT (En-Fr) model, on a system equipped with an NVIDIA A100 GPU, TransFool takes 26.45 seconds to generate an adversarial example. On the same system, kNN needs 1.45 seconds and Seq2Sick needs 38.85 seconds, while producing less effective adversarial attacks. Table 2 shows an adversarial example against Marian NMT (En-Fr). In comparison to the baselines, TransFool makes smaller changes to the sentence, and the adversarial example is a correct English sentence similar to the original one. In contrast, kNN and Seq2Sick generate adversarial sentences that are not necessarily natural or similar to the original ones. More examples by TransFool, kNN, and Seq2Sick can be found in Appendix D.5, where we also provide some adversarial sentences generated without the LM embeddings in our algorithm to show the importance of this component. Indeed, TransFool outperforms both baselines in terms of success rate, and it is able to generate more natural adversarial examples with a lower number of perturbations (TER) and higher semantic similarity to the clean samples in almost all cases. A complete ablation study of the effect of the hyperparameters, the language model used in TransFool, and the beam-size parameter of the target NMT model is presented in Appendix B.1, B.2, and B.3, respectively.

4Since our attack is token-level, there is a small chance that, when the adversarial example is converted to text, re-tokenization does not produce the same set of tokens. Thus, all results are computed after re-tokenization of the adversarial examples.

Table 2: Adversarial examples against Marian NMT (En-Fr) by various methods (white-box).∗

| Sentence | Text |
|----------|------|
| Org. | The most eager is Oregon, which is enlisting 5,000 drivers in the country's biggest experiment. |
| Ref. Trans. | Le plus déterminé est l'Oregon, qui a mobilisé 5 000 conducteurs pour mener l'expérience la plus importante du pays. |
| Org. Trans. | Le plus avide est l'Oregon, qui recrute 5 000 pilotes dans la plus grande expérience du pays. |
| Adv. (TransFool) | The most eager is Quebec, which is enlisting 5,000 drivers in the country's biggest experiment. |
| Trans. | Le Québec, qui fait partie de la plus grande expérience du pays, compte 5 000 pilotes. (some parts are not translated.) |
| Adv. (kNN) | Theve eager is Oregon, C aren enlisting 5,000 drivers in theau's biggest experiment. |
| Trans. | Theve avide est Oregon, C sont enrôlés 5 000 pilotes dans la plus grande expérience de Theau. |
| Adv. (Seq2Sick) | The most buzz is FREE, which is chooseing Games comments in the country's great developer. |
| Trans. | Le plus buzz est GRATUIT, qui est de choisir Jeux commentaires dans le grand développeur du pays. |

∗Perturbed tokens are in red, and in the original sentence, the perturbations by TransFool are in blue. The changes in the translation that are the direct result of the perturbations are in brown, while the changes that are due to the failure of the target model are in orange.

## 5.3 Performance In Black-Box Settings

In practice, the adversary's access to the learning system may be limited. Hence, we propose to analyze the performance of TransFool in a black-box scenario. It has been shown that adversarial attacks often transfer to another model that has a different architecture and is even trained with different datasets (Szegedy et al., 2014). By utilizing this property of adversarial attacks, we extend TransFool to the black-box scenario. We consider that we have complete access to one NMT model (the reference model), including its gradients.
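This transfer setting can be sketched as follows. Here `whitebox_step` (one gradient-and-projection iteration on the reference model) and `query_target_bleu` (a translation query to the black-box target) are assumed callables, and reusing the relative BLEU ratio of the white-box stopping criterion as the query-based threshold is our simplification.

```python
def blackbox_attack(sentence, bleu_clean, whitebox_step, query_target_bleu,
                    ratio=0.4, max_iters=500):
    # Gradients come only from the white-box reference model; the black-box
    # target is queried solely inside the stopping criterion, so the number
    # of target-model queries stays small.
    adv, queries = sentence, 0
    for _ in range(max_iters):
        adv = whitebox_step(adv)  # gradient step + projection on the reference model
        queries += 1
        if query_target_bleu(adv) < ratio * bleu_clean:  # target sufficiently degraded
            break
    return adv, queries
```

With this split, the expensive gradient computations never touch the target model, which is consistent with an attack that needs only tens of target queries per sentence.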
We implement the proposed gradient-based attack of Algorithm 1 with this model. However, for the stopping criterion of the algorithm, we query the black-box target NMT model to compute the BLEU score. We can also implement the black-box transfer attack in the case where the source languages of the reference model and the target model are the same, but their target languages are different. Since Marian NMT is faster and lighter than mBART50, we use it as the reference model and evaluate the performance of the black-box attack against mBART50. We compare the performance of TransFool with WSLS (Zhang et al., 2021), a black-box untargeted attack against NMT models based on word replacement (the choice of the back-translation model used in WSLS is investigated in Appendix F). We also evaluate the performance of kNN and Seq2Sick in the black-box setting by attacking mBART50 with the adversarial examples generated against Marian NMT (in the white-box setting). The results are reported in Table 3. We also report the performance when attacking Google Translate, some generated adversarial samples, and the similarity performance computed by BERTScore and BLEURT-20 in Appendix E.

Table 3: Performance of black-box attack against mBART50.
| Task | Method | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ | #Queries↓ |
|------|--------|------|---------|---------|-------|--------|------|-----------|
| En-Fr | TransFool | **70.19** | **0.58** | 0.22 | **0.85** | 175.39 | **17.08** | 27 |
| | kNN | 33.74 | 0.33 | 0.15 | 0.82 | 383.71 | 22.57 | - |
| | Seq2Sick | 25.97 | 0.21 | 0.14 | 0.75 | **173.63** | 21.13 | - |
| | WSLS | 56.21 | **0.58** | **0.27** | 0.84 | 214.23 | 31.30 | 1423 |
| En-De | TransFool | **66.76** | **0.65** | **0.22** | 0.84 | **167.54** | **16.73** | 23 |
| | kNN | 36.70 | 0.39 | 0.16 | 0.82 | 435.02 | 22.34 | - |
| | Seq2Sick | 32.17 | 0.29 | 0.20 | 0.67 | 286.67 | 26.59 | - |
| | WSLS | 44.33 | 0.50 | 0.19 | **0.86** | 219.32 | 29.12 | 1262 |
| En-Zh | TransFool | **63.27** | 0.71 | 0.27 | **0.88** | **100.14** | **14.76** | 36 |
| | kNN | 26.89 | 0.31 | 0.17 | 0.86 | 176.34 | 17.07 | - |
| | Seq2Sick | 23.65 | 0.30 | 0.23 | 0.73 | 162.67 | 25.17 | - |
| | WSLS | 40.00 | **0.72** | **0.52** | 0.83 | 186.44 | 32.35 | 1782 |

Table 3 shows that in all tasks, with a few queries to the target model, our black-box attack achieves better performance than the white-box attack against the target model (mBART50), but slightly worse performance than the white-box attack against the reference model (Marian NMT). In all cases, the success rate, token error rate, and perplexity of TransFool are better than those of all baselines (except for the En-Fr task, where the perplexity is slightly higher than that of Seq2Sick). The ability of TransFool and WSLS to maintain semantic similarity is comparable and better than that of both other baselines. However, WSLS has the highest token error rate, which makes the attack detectable.

Table 4: Performance of the black-box attack when the target language is different.

| Task (reference → target) | Target model | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | #Queries↓ |
|---------------------------|--------------|------|---------|---------|-------|--------|-----------|
| En-De → En-Fr | Marian NMT | 60.53 | 0.55 | 0.22 | 0.84 | 169.49 | 24 |
| En-De → En-Fr | mBART50 | 61.68 | 0.56 | 0.22 | 0.84 | 169.51 | 23 |
| En-Fr → En-De | Marian NMT | 66.22 | 0.63 | 0.22 | 0.84 | 198.04 | 23 |
| En-Fr → En-De | mBART50 | 63.86 | 0.63 | 0.21 | 0.84 | 195.50 | 24 |
The effect of TransFool on the BLEU score is larger than that of the other methods, and its effect on chrF comes after WSLS (except for the En-De task, where TransFool is the best). Regarding the complexity, TransFool requires only a few queries to the target model for translation, while WSLS queries the model more than a thousand times, which is costly and may not be feasible in practice. For the En-Fr task, on a system equipped with an NVIDIA A100 GPU, it takes 43.36 and 1904.98 seconds to generate adversarial examples by TransFool and WSLS, respectively, which shows that WSLS is very time-consuming. We also analyze the *cross-lingual* transferability of the generated adversarial examples to a black-box NMT model with the same source language but a different target language. Since we need a dataset with the same set of sentences for different language pairs, we use the validation set of WMT14 for the En-Fr and En-De tasks. Table 4 shows the results for two cases: Marian NMT or mBART50 as the target model. We use Marian NMT as the reference model with a different target language than that of the target model. In all settings, the generated adversarial examples are highly transferable to another NMT model with a different target language (i.e., they have a high attack success rate and high semantic similarity). To the best of our knowledge, this type of transferability has not been studied before. Moreover, the high transferability of TransFool, even to other languages, shows that it is able to capture the common failure modes of different NMT models, which can be dangerous in real-world applications.

## 5.4 Discussion

## 5.4.1 Comparison To Related Works

Unlike several word-replacement-based methods (Michel et al., 2019; Li et al., 2020; Jin et al., 2020), TransFool does not explicitly select a few important tokens at the beginning of the attack algorithm. Instead, all tokens are modified during the gradient step of the TransFool algorithm.
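Concretely, one iteration of this scheme, a gradient update on the continuous token embeddings followed by a projection back onto the discrete vocabulary, can be sketched as follows. This is illustrative only: `grad_fn` and the toy two-dimensional embedding table stand in for the actual loss gradients and LM embedding space.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def gradient_then_project(token_vecs, grad_fn, vocab_emb, gamma=0.016):
    # (i) Move every token's continuous embedding along the negative gradient
    # of the total loss, then (ii) project each updated vector back onto the
    # vocabulary via the nearest (cosine) neighbor. Tokens whose updated
    # vector stays closest to their original embedding are projected back to
    # the original token; only a few tokens actually change.
    updated = [[vi - gamma * gi for vi, gi in zip(vec, grad)]
               for vec, grad in zip(token_vecs, grad_fn(token_vecs))]
    return [max(vocab_emb, key=lambda tok: cosine(vec, vocab_emb[tok]))
            for vec in updated]
```

With a small step size, the projection leaves most tokens unchanged; a larger cumulative update eventually flips a token to a neighboring vocabulary entry.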
However, in the projection step, most tokens are projected back to the original ones, while a few are replaced with the closest tokens in the embedding space. Not limiting the search space from the beginning and using a gradient-based approach enable TransFool to achieve a high success rate. On another note, it is challenging to incorporate linguistic constraints in a differentiable manner with our optimization-based method. TransFool solves this challenge by finding a transformation between the embedding representation of the NMT model and that of the language model. This is particularly important, as we see in Table 5, which shows the results of TransFool and kNN when we use LM embeddings or NMT embeddings for measuring the similarity between two tokens.5 The LM embeddings result in lower perplexity and higher semantic similarity for both methods, which demonstrates the importance of this component of the TransFool algorithm. As a matter of fact, the main difference between TransFool and Seq2Sick is the use of the LM embeddings instead of the NMT ones, which results in lower perplexity and higher similarity for TransFool compared to Seq2Sick. We should note that more ablation studies on the LM part of our attack are available in Appendix B.2.

Table 5: Performance of white-box attack against Marian NMT (En-Fr) with/without language model embeddings.

| Method | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ |
|--------|------|---------|---------|-------|--------|
| TransFool w/ LM Emb. | 69.48 | 0.56 | 0.23 | 0.85 | 177.20 |
| TransFool w/ NMT Emb. | 68.27 | 0.57 | 0.26 | 0.78 | 193.32 |
| kNN w/ LM Emb. | 32.13 | 0.32 | 0.15 | 0.85 | 246.52 |
| kNN w/ NMT Emb. | 36.65 | 0.35 | 0.16 | 0.82 | 375.84 |

5In order to have a fair comparison, we fine-tuned the hyperparameters of TransFool, in the case when we do not use LM embeddings, to achieve a similar attack success rate.

## 5.4.2 Translation Quality Metric

In TransFool, we have used the BLEU score to evaluate the translation quality for the success criterion.
We chose the BLEU score since it has been used in previous works (Cheng et al., 2019; 2020b; Wallace et al., 2020; Zhang et al., 2021), is still common in benchmarks, and is fast. However, any other metric can be considered for evaluating the translation quality in the attack algorithm. We report the performance of TransFool against Marian NMT using BLEURT-20 (Sellam et al., 2020), a recent metric for translation quality, in Table 6. We should note that the success rate is measured based on BLEURT-20, and the relative decrease in translation quality in terms of this metric is denoted by RDBLEURT. The results indicate that other metrics, such as BLEURT-20, can yield similar results to those of the BLEU score. We can also use the same metric to evaluate the semantic similarity in the source language and the translation quality in the target language to make interpreting the evaluation results easier. Therefore, we report the semantic similarity in terms of both USE and BLEURT-20 in this table. The translation quality, in terms of BLEURT-20, drops from 0.73 to 0.32, while the similarity in the source language remains 0.64.

Table 6: Performance of white-box attack against Marian NMT (En-Fr) with BLEURT-20 as translation metric.

| Metric | ASR↑ | RDBLEURT↑ | Sim. USE↑ | Sim. BLEURT↑ | Perp.↓ | TER↓ |
|--------|------|-----------|-----------|--------------|--------|------|
| BLEURT-20 | 72.60 | 0.56 | 0.85 | 0.64 | 186.55 | 13.17 |

## 5.4.3 Effect Of TransFool On NMT Performance Through Back-Translation

Given the nature of the translation task, we can observe that some adversarial perturbations appear directly in the translation of the adversarial examples, while other changes to the translation are caused by the degradation in the performance of the NMT model resulting from the attack. For example, in Table 2, changing Oregon to Quebec results in two types of changes in the translation: type 1) *"l'Oregon"* is replaced with *"le Québec"*, and type 2) some part of the input sentence, *"The most eager is"*, is not translated. In order to evaluate whether the model is truly fooled by TransFool (i.e., exhibits type 2 changes), we consider Marian NMT (En-Fr) as the target model and use a back-translation model to perform a round-trip translation (En-Fr and Fr-En). If the target NMT model (En-Fr) is performing well, we expect the English sentence and its back-translated counterpart to have a high similarity. We measure this similarity, in terms of BLEURT-20, for the original and adversarial sentences. A lower similarity score for the adversarial sentences indicates that the model's performance is degraded by the attack. Moreover, we can measure the similarity between the original and adversarial sentences in the source language (En) and the similarity between their translations in the target language (Fr). The results, averaged across the dataset, are depicted in Figure 2. There are two key observations:

![Figure 2](10_image_0.png)

Figure 2: Similarity by BLEURT-20 between different pairs in the case of attacking Marian NMT (En-Fr).

1. The round-trip translation quality for the original sentences is higher than that of the adversarial sentences (0.81 vs. 0.73). This finding suggests that the attack successfully degraded the model's performance and that not all of the changes to the translation are of type one, i.e., the direct results of the perturbations.

2. The similarity between the original and adversarial sentences in the source domain (En) is 0.66, while the similarity between their translations in the target domain (Fr) is 0.49. This discrepancy again suggests that the attack successfully degraded the model's performance.

Therefore, these two observations confirm that TransFool is successful in fooling the NMT model.
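The round-trip check itself can be sketched as follows, where `translate`, `back_translate`, and `similarity` (e.g., BLEURT-20) are assumed callables standing in for the actual models and metric.

```python
def round_trip_gap(clean_sentences, adv_sentences, translate, back_translate, similarity):
    # Compare average round-trip (En -> Fr -> En) similarity for clean vs.
    # adversarial inputs; a larger gap indicates that the attack genuinely
    # degraded the NMT model rather than merely echoing the perturbations.
    def round_trip_score(batch):
        scores = [similarity(s, back_translate(translate(s))) for s in batch]
        return sum(scores) / len(scores)
    return round_trip_score(clean_sentences) - round_trip_score(adv_sentences)
```

In the paper's setting, the observed gap is 0.81 - 0.73 = 0.08 in terms of BLEURT-20.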
## 5.4.4 Human Evaluation

We conduct a human evaluation campaign to further evaluate the adversarial attacks against Marian NMT (En-Fr) in the white-box setting. Specifically, we assess the TransFool, kNN, and Seq2Sick attacks based on three criteria: fluency, semantic preservation, and translation quality. Appendix C provides more details about our setup and reports the results of each survey along with their 95% confidence intervals. Our findings demonstrate that the adversarial examples generated by TransFool are more semantic-preserving and fluent than those of both baselines. We also conduct a separate survey only for the adversarial examples generated by TransFool, in which the annotators are asked to rate the accuracy of the reference translations and the NMT translations. The results show that the NMT translations have a lower score than the reference translations. We also report the Inter-Annotator Agreement (IAA) between the human judgments in Appendix C. Overall, our human evaluation results demonstrate the superiority of TransFool over the baselines.

## 6 Conclusion

In this paper, we proposed *TransFool*, a white-box adversarial attack against NMT models, built on a new optimization problem solved by an iterative method based on gradient projection. We utilized the embedding representation of a language model to impose a similarity constraint on the adversarial examples. Moreover, by considering the loss of an LM in our optimization problem, the generated adversarial examples are more fluent. Extensive automatic and human evaluations show that TransFool is highly effective in different translation tasks and against different NMT models. Our attack is also transferable to black-box settings with different structures and even different target languages. In both white-box and black-box scenarios, TransFool obtains improvements over the baselines in terms of success rate, semantic similarity, and fluency.
TransFool demonstrates that the translations of two similar sentences (original and adversarial) by an NMT model can differ significantly, and that even small perturbations, such as a name change, can degrade the translation quality. Such adversarial attacks against NMT models are critical for analyzing the vulnerabilities of NMT models, measuring their robustness, and eventually building more robust NMT models.

## Acknowledgments

This work has been partially supported by armasuisse Science and Technology project MULAN. We would also like to thank the anonymous reviewers whose valuable comments improved the paper.

## References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. Generating natural language adversarial examples. In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pp. 2890–2896, 2018.

Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In *3rd International Conference on Learning Representations, ICLR 2015*, 2015.

Yonatan Belinkov and Yonatan Bisk. Synthetic and natural noise both break neural machine translation. In *International Conference on Learning Representations*, 2018.

Ondřej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. Findings of the 2014 workshop on statistical machine translation. In *Proceedings of the ninth workshop on statistical machine translation*, pp. 12–58, 2014.

Kunbei Cai, Md Hafizul Islam Chowdhuryy, Zhenkai Zhang, and Fan Yao. Seeds of seed: Nmt-stroke: Diverting neural machine translation through hardware-based faults. In *2021 International Symposium on Secure and Private Execution Environment Design (SEED)*, pp. 76–82. IEEE, 2021.

Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia.
Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In *Proceedings of the 11th International* Workshop on Semantic Evaluation (SemEval-2017), pp. 1–14, 2017. Akshay Chaturvedi, Abijith KP, and Utpal Garain. Exploring the robustness of nmt systems to nonsensical inputs. *arXiv preprint arXiv:1908.01165*, 2019. Akshay Chaturvedi, Abhisek Chakrabarty, Masao Utiyama, Eiichiro Sumita, and Utpal Garain. Ignorance is bliss: Exploring defenses against invariance-based attacks on neural machine translation systems. IEEE Transactions on Artificial Intelligence, 2021. Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, and Cho-Jui Hsieh. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 34, pp. 3601–3608, 2020a. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1756–1766, 2018. Yong Cheng, Lu Jiang, and Wolfgang Macherey. Robust neural machine translation with doubly adversarial inputs. In *Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics*, pp. 4324–4333, 2019. Yong Cheng, Lu Jiang, Wolfgang Macherey, and Jacob Eisenstein. Advaug: Robust adversarial augmentation for neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5961–5970, 2020b. Javid Ebrahimi, Daniel Lowd, and Dejing Dou. On adversarial examples for character-level neural machine translation. In *Proceedings of the 27th International Conference on Computational Linguistics*, pp. 653–663, 2018a. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. Hotflip: White-box adversarial examples for text classification. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 31–36, 2018b. Denis Emelin, Ivan Titov, and Rico Sennrich. Detecting word sense disambiguation biases in machine translation for model-agnostic adversarial attacks. In *The 2020 Conference on Empirical Methods in* Natural Language Processing, pp. 7635–7653. Association for Computational Linguistics, 2020. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondřej Bojar. Results of the wmt21 metrics shared task: Evaluating metrics with expert-based human evaluations on ted and news domain. In *Proceedings of the Sixth Conference on Machine Translation*, pp. 733–774, 2021. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. Continuous measurement scales in human evaluation of machine translation. In *Proceedings of the 7th Linguistic Annotation Workshop and* Interoperability with Discourse, pp. 33–41, 2013. Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. Can machine translation systems be evaluated by the crowd alone. *Natural Language Engineering*, 23(1):3–30, 2017. Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, and Douwe Kiela. Gradient-based adversarial attacks against text transformers. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language* Processing, pp. 5747–5757, 2021. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. Is bert really robust? a strong baseline for natural language attack on text classification and entailment. In *Proceedings of the AAAI conference on artificial* intelligence, volume 34, pp. 8018–8025, 2020. 
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pp. 116–121, Melbourne, Australia, July 2018. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 175–184, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. URL https://aclanthology.org/2021.emnlp-demo.21. Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. Bert-attack: Adversarial attack against bert using bert. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pp. 6193–6202, 2020. Qingsong Ma, Ondřej Bojar, and Yvette Graham. Results of the wmt18 metrics shared task: Both characters and embeddings achieve good performance. In *Proceedings of the third conference on machine translation:* shared task papers, pp. 671–688, 2018. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. 
In *International Conference on Learning Representations,* ICLR 2018, 2018. Paul Michel, Xian Li, Graham Neubig, and Juan Pino. On evaluation of adversarial perturbations for sequence-to-sequence models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 3103–3114, 2019. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 2574–2582, 2016. John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. Reevaluating adversarial examples in natural language. In *Findings of the Association for Computational Linguistics: EMNLP 2020*, pp. 3829–3839, 2020. Daniel Naber et al. A rule-based style and grammar checker. 2003. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. Facebook fair's wmt19 news translation task submission. In *Proceedings of the Fourth Conference on Machine Translation (Volume* 2: Shared Task Papers, Day 1), pp. 314–319, 2019. Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness. Proceedings of the IEEE, 109(5):635–659, 2021. Jungsoo Park, Mujeen Sung, Jinhyuk Lee, and Jaewoo Kang. Adversarial subword regularization for robust neural machine translation. In Findings of the Association for Computational Linguistics, ACL 2020: EMNLP 2020, pp. 1945–1953. Association for Computational Linguistics (ACL), 2020. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. 
*Advances in neural information processing systems*, 32, 2019. Maja Popović. chrf: character n-gram f-score for automatic mt evaluation. In *Proceedings of the Tenth* Workshop on Statistical Machine Translation, pp. 392–395, 2015. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019. Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. Generating natural language adversarial examples through probability weighted word saliency. In *Proceedings of the 57th annual meeting of the association* for computational linguistics, pp. 1085–1097, 2019. Sahar Sadrizadeh, Ljiljana Dolamic, and Pascal Frossard. Block-sparse adversarial attack to fool transformerbased text classifiers. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7837–7841. IEEE, 2022. Thibault Sellam, Amy Pu, Hyung Won Chung, Sebastian Gehrmann, Qijun Tan, Markus Freitag, Dipanjan Das, and Ankur Parikh. Learning to evaluate translation beyond english: Bleurt submissions to the wmt metrics 2020 shared task. In *Proceedings of the Fifth Conference on Machine Translation*, pp. 921–927, 2020. Hassan S Shavarani and Anoop Sarkar. Better neural machine translation by extracting linguistic information from bert. In *Proceedings of the 16th Conference of the European Chapter of the Association for* Computational Linguistics: Main Volume, pp. 2772–2783, 2021. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, 2014. Weiting Tan, Shuoyang Ding, Huda Khayrallah, and Philipp Koehn. Doubly-trained adversarial data augmentation for neural machine translation. *arXiv e-prints*, pp. arXiv–2110, 2021. 
Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. Multilingual translation with extensible multilingual pretraining and finetuning. *arXiv preprint* arXiv:2008.00401, 2020. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. What do you learn from context? probing for sentence structure in contextualized word representations. In *7th International Conference on Learning* Representations, ICLR 2019, 2019. Jörg Tiedemann. Parallel data, tools and interfaces in opus. In *Eight International Conference on Language* Resources and Evaluation, MAY 21-27, 2012, Istanbul, Turkey, pp. 2214–2218, 2012. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems*, pp. 5998–6008, 2017. Lucas Nunes Vieira, Minako O'Hagan, and Carol O'Sullivan. Understanding the societal impacts of machine translation: a critical review of the literature on medical and legal use cases. Information, Communication & Society, 24(11):1515–1532, 2021. Eric Wallace, Mitchell Stern, and Dawn Song. Imitation attacks and defenses for black-box machine translation systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 5531–5546, 2020. Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, and Bo Li. Semattack: Natural textual attacks via different semantic spaces. In *Findings of the Association for Computational Linguistics: NAACL 2022*, pp. 176–205, 2022. Jun Wang, Chang Xu, Francisco Guzmán, Ahmed El-Kishky, Yuqing Tang, Benjamin Rubinstein, and Trevor Cohn. Putting words into the system's mouth: A targeted attack on neural machine translation using monolingual data poisoning. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pp. 
1463–1473, 2021. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 38–45, Online, October 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6. Chang Xu, Jun Wang, Yuqing Tang, Francisco Guzmán, Benjamin IP Rubinstein, and Trevor Cohn. A targeted attack on black-box neural machine translation with parallel data poisoning. In *Proceedings of the* Web Conference 2021, pp. 3638–3650, 2021. Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 87–94, 2020. Heng Yu, Haoran Luo, Yuqi Yi, and Fan Cheng. A2r2: Robust unsupervised neural machine translation with adversarial attack and regularization on representations. *IEEE Access*, 9:19990–19998, 2021. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. Word-level textual adversarial attacking as combinatorial optimization. In *Proceedings of the 58th Annual Meeting of* the Association for Computational Linguistics, pp. 6066–6080, 2020. Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. Improving massively multilingual neural machine translation and zero-shot translation. In *Proceedings of the 58th Annual Meeting of the Association for* Computational Linguistics, pp. 1628–1639, 2020. 
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. BERTScore: Evaluating text generation with BERT. In *International Conference on Learning Representations*, 2019.

Xinze Zhang, Junzhe Zhang, Zhenhua Chen, and Kun He. Crafting adversarial examples for neural machine translation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 1967–1977, 2021.

# Supplementary Material

TransFool: An Adversarial Attack against Neural Machine Translation Models

## Abstract

In this supplementary material, we first provide some statistics of the evaluation datasets in Section A. The ablation study is presented in Section B: the effect of the hyperparameters (Section B.1), different aspects of the language model (Section B.2), and the effect of the NMT model beam size (Section B.3). We conducted a human evaluation, whose results and details are presented in Section C. More results of the white-box attack are reported in Section D: results with other similarity metrics (Section D.1), the performance of TransFool on other translation tasks (Section D.2), the performance over successful attacks (Section D.3), the trade-off between success rate and similarity/fluency (Section D.4), and some generated adversarial examples (Section D.5). Section E provides more experiments on the black-box attack: the performance of attacking *Google Translate* (Section E.1), results with other similarity metrics (Section E.2), and some generated adversarial examples (Section E.3). We discuss the effect of the back-translation model choice on WSLS in Section F. Finally, the license information and more details of the assets (datasets, code, and models) are provided in Section G.
## A Some Statistics Of The Datasets

Some statistics of the evaluation datasets, i.e., OPUS-100 (En-Zh) and WMT14 (En-Fr and En-De), including the number of test samples, the average sentence length, and the translation quality of Marian NMT and mBART50, are reported in Table 7.

Table 7: Some statistics of the evaluation datasets.

| Dataset | Avg. Length | #Test Samples | Marian NMT BLEU | Marian NMT chrF | mBART50 BLEU | mBART50 chrF |
|---|---|---|---|---|---|---|
| En-Fr (WMT14) | 27 | 3003 | 39.88 | 64.94 | 36.17 | 62.66 |
| En-De (WMT14) | 26 | 3003 | 27.72 | 58.50 | 25.66 | 57.02 |
| En-Zh (OPUS-100) | 18 | 2000 | 33.11 | 50.98 | 29.27 | 41.92 |

## B Ablation Study

## B.1 Effect Of Hyperparameters

In this section, we analyze the effect of different hyperparameters on the white-box attack performance in terms of success rate, semantic similarity, and perplexity score: the coefficients α and β in our optimization problem (1), the step size γ of the gradient descent, and the relative BLEU score ratio λ in the stopping criterion of Eq. (7). In all experiments, we consider the English-to-French Marian NMT model and evaluate over the first 1000 sentences of the test set of WMT14. The default values of the hyperparameters are α = 20, β = 1.8, γ = 0.016, and λ = 0.4; in each experiment, only the hyperparameter under study is varied.

**Effect of the similarity coefficient α.** This hyperparameter determines the strength of the similarity term in the optimization problem (1). Figure 3a shows the effect of α on the performance of our attack. By increasing the similarity coefficient, we force our algorithm to find adversarial sentences that are more similar to the original sentence. Therefore, as shown in Figure 3a, larger values of α result in higher semantic similarity.
However, in this case, it is harder to fool the NMT model, i.e., the attack success rate, RDBLEU, and RDchrF are lower. Moreover, it seems that, since the generated adversarial examples are more similar to the original sentence, they are more natural, and their perplexity score is lower.

Figure 3: Effect of different hyperparameters on the performance of TransFool.

**Effect of the language model loss coefficient β.** We analyze the impact of the hyperparameter β, which controls the importance of the language model loss term in the proposed optimization problem, in Figure 3b. By increasing this coefficient, we weaken the effect of the similarity term, i.e., the generated adversarial examples are less similar to the original sentence. As a result, the success rate and the effect on translation quality, i.e., RDBLEU and RDchrF, increase.

**Effect of the step size γ.** The step size of the gradient descent step can impact the performance of our attack, as investigated in Figure 3c. Increasing the step size results in larger movements in the embedding space in each iteration of the algorithm. Hence, the generated adversarial examples are more aggressive, which results in lower semantic similarity and higher perplexity scores. However, we can find adversarial examples more easily and achieve a higher attack success rate, RDBLEU, and RDchrF.

**Effect of the BLEU score ratio λ.** This hyperparameter determines the stopping criterion of our iterative algorithm. Figure 3d studies its effect on the performance of our attack. A higher BLEU score ratio causes the algorithm to stop in earlier iterations. Therefore, the changes applied to the sentence are less aggressive, and hence, we achieve higher semantic similarity and a lower perplexity score. However, the attack success rate, RDBLEU, and RDchrF decrease since we make fewer changes to the sentences.
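To make the roles of these hyperparameters concrete, the following minimal sketch combines the three loss terms with the default weights and implements the λ-based stopping rule. The function names and toy inputs are our own illustration, not the paper's implementation; in the actual attack, the loss values come from the NMT model and the language model.

```python
# Illustrative sketch (hypothetical helper names): the attack minimizes a
# weighted sum of an adversarial loss, a similarity loss (weight alpha), and
# a language-model loss (weight beta), and stops once the adversarial
# translation's BLEU drops below lambda times the original BLEU (Eq. (7)).

ALPHA, BETA, LAM = 20.0, 1.8, 0.4  # default hyperparameters from this section

def total_loss(adv_loss: float, sim_loss: float, lm_loss: float,
               alpha: float = ALPHA, beta: float = BETA) -> float:
    """Weighted objective: larger alpha favors similarity, larger beta fluency."""
    return adv_loss + alpha * sim_loss + beta * lm_loss

def should_stop(bleu_adv: float, bleu_orig: float, lam: float = LAM) -> bool:
    """Stopping criterion: the relative BLEU ratio fell below lambda."""
    return bleu_adv < lam * bleu_orig
```

For instance, with λ = 0.4 and an original BLEU of 40, the search stops as soon as the adversarial translation's BLEU falls below 16; a larger λ stops earlier, which matches the higher similarity and lower success rate observed in Figure 3d.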
## B.2 Ablation Study On The Language Model

One of the main components of our attack is the language model, which is used to measure the similarity between tokens. We showed in Section 5.4.1 that using the LM embeddings instead of the NMT ones strongly affects our attack performance. In this section, we study the impact of different properties of the LM used in TransFool on the attack performance.

**Language Model Quality.** First, we investigate the effect of the language model quality on the performance. For Marian NMT (En-Fr), we save the checkpoints of the LM and the FC layer after each epoch. Then, we use each of these models to attack 1000 sentences of the dataset. The results are presented in Figure 4. Until a certain epoch, similarity and perplexity improve; afterwards, they gradually deteriorate. This may be because the LM becomes less generalizable to the dataset we attack after training for too many epochs. We note that, in all our other experiments, we used the LM from the last epoch.

Figure 4: Effect of LM quality on performance.

**Training Dataset of the Language Model.** To further study the effect of the language model, we can also fine-tune the language model and the fully-connected layer on the translation dataset (the WMT14 En-Fr training set). Table 8 shows the results when we use this language model in our attack. These results demonstrate that with a language model trained on sentences similar to those we are trying to attack, we can improve the perplexity score and similarity of the adversarial examples.

Table 8: Performance of TransFool white-box attack against Marian NMT (En-Fr) with/without fine-tuned LM.
| Method | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ |
|---|---|---|---|---|---|---|
| with original LM | 69.48 | 0.56 | 0.23 | 0.85 | 177.20 | 13.91 |
| with fine-tuned LM | 61.14 | 0.52 | 0.21 | 0.86 | 151.92 | 12.35 |

## B.3 Effect Of NMT Model Beam Size

Translation models use beam search to generate high-quality outputs. Marian NMT uses a beam size of 4, and mBART50 uses a beam size of 5. To see the effect of this parameter on TransFool, we attack Marian NMT (En-Fr) with different beam sizes; the results are presented in Table 9. The attack performance is not significantly impacted by the beam size, possibly because we employ the training loss as the adversarial loss, and the beam size is only used for inference. This analysis shows the consistency of TransFool's performance when the NMT model uses different settings.

Table 9: Performance of TransFool white-box attack against Marian NMT (En-Fr) with different beam sizes (∗ marks the model's default).

| Beam Size | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ |
|---|---|---|---|---|---|---|
| 1 | 69.28 | 0.58 | 0.24 | 0.85 | 172.18 | 13.70 |
| 2 | 70.38 | 0.57 | 0.23 | 0.85 | 178.07 | 13.81 |
| 3 | 68.67 | 0.56 | 0.23 | 0.85 | 176.39 | 13.92 |
| 4∗ | 69.48 | 0.56 | 0.23 | 0.85 | 177.20 | 13.91 |
| 6 | 68.07 | 0.56 | 0.23 | 0.85 | 174.77 | 13.80 |

## C Human Evaluation

We conduct a human evaluation campaign of the TransFool, kNN, and Seq2Sick attacks on Marian NMT (En-Fr) in the white-box setting. We randomly choose 90 sentences from the test set of the WMT14 (En-Fr) dataset, together with the adversarial samples and their translations by the NMT model. We split the 90 sentences into three different surveys to obtain a manageable size for each annotator. We recruited two volunteer annotators for each survey. For the English surveys, we ensure that the annotators are highly proficient English speakers; similarly, for the French survey, we ensure that the annotators are highly proficient in French.
Before starting the rating task, we provided annotators with detailed guidelines similar to (Cer et al., 2017; Michel et al., 2019). The task is to rate the sentences for each criterion on a continuous scale (0–100), inspired by WMT18 practice (Ma et al., 2018) and Direct Assessment (Graham et al., 2013; 2017). For each sentence, we evaluate three aspects in three different surveys:

- *Fluency*: We show the three adversarial sentences and the original sentence on the same page (in random order). We request the annotators to rate their level of agreement with the statement *"The sentence is fluent."* for each sentence.
- *Semantic preservation*: We show the original sentence on top and, below it, the three adversarial sentences (in random order). We request the annotators to rate their level of agreement with the statement *"The sentence is similar to the reference text."* for each sentence.
- *Translation quality*: Inspired by monolingual direct assessment (Ma et al., 2018; Graham et al., 2013; 2017), we evaluate the translation quality by displaying the reference translation at the top and, below it, the translations of the three adversarial sentences (in random order). We request the annotators to rate their level of agreement with the statement *"The sentence is similar to the reference text."* for each translation.

Figure 5: Human evaluation results for TransFool, kNN, and Seq2Sick attacks against Marian NMT (En-Fr).

Only for the TransFool adversarial sentences do we conduct a fourth survey. We show the English adversarial sentence on top and, below it, either the reference translation or the translation generated by the NMT model. We request the annotators to rate their level of agreement with the statement *"The French sentence is a good translation for the English sentence."*
We conduct this survey solely for TransFool since the adversarial sentences generated by the other attacks are not fluent, and the task could be excessively challenging for the annotators. To ensure that the two annotators agree, we only consider sentences for which the two corresponding scores differ by less than 30 points. We average both scores to compute the final score for each sentence. We calculate 95% confidence intervals using 15K bootstrap replications. The results are depicted in Figure 5. These results show that the adversarial examples generated by TransFool are more fluent and better preserve semantics than those of both baselines. According to the guide provided to the annotators for semantic similarity, the score of 67.8 indicates that the two sentences are roughly equivalent, but some details may differ. Moreover, a fluency score of 66.4 demonstrates that although the adversarial examples generated by TransFool are more fluent than those of the baselines, there is still room for improvement in this regard.

We follow the direct assessment strategy to measure the effectiveness of the adversarial attacks on translation quality. According to (Ma et al., 2018), since a sufficient level of agreement on translation quality is difficult to achieve with human evaluation, direct assessment simplifies the task to a monolingual assessment instead of a bilingual one. The similarity of the translations of the adversarial sentences to the reference translation is shown in Figure 5c. The similarity of Seq2Sick is worse than that of the other attacks; however, its similarity in the source language is also worse. Therefore, we compute the decrease in similarity (between the original and adversarial sentences) from the source language to the target language. The results in Figure 5d show that all attacks affect the translation quality, and the effect of TransFool is more pronounced than that of both baselines.
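The score aggregation described in this section can be sketched as follows. Note the assumption (ours) that "agree" means the two annotator scores differ by less than 30 points; the function names and the small percentile-bootstrap implementation are illustrative, not the authors' code.

```python
import random

def aggregate_scores(pairs, max_diff=30):
    """Keep sentences whose two annotator scores differ by less than
    max_diff points, then average the two scores per sentence."""
    return [(a + b) / 2 for a, b in pairs if abs(a - b) < max_diff]

def bootstrap_ci(scores, n_boot=15000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean score
    (15K replications as in the text give a 95% CI for alpha=0.05)."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(sum(rng.choices(scores, k=n)) / n for _ in range(n_boot))
    return (means[int(n_boot * alpha / 2)],
            means[int(n_boot * (1 - alpha / 2)) - 1])
```

A sentence rated (80, 70) by the two annotators is kept with score 75, while a (10, 90) disagreement is discarded; the CI is then read off the sorted bootstrap means.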
Regarding the fourth survey on TransFool adversarial sentences, the scores for the successful adversarial examples, with their corresponding confidence intervals, are presented in Figure 6 for the translations generated by the NMT model and for the reference translations. The reference translation received a higher score than the NMT translations, which suggests that the translation quality of the target model is indeed degraded.

Figure 6: Human evaluation for TransFool translations.

Finally, for all surveys, we calculate the inter-annotator agreement (IAA) between the two human judgments for each sentence. We compute IAA in terms of the Pearson correlation coefficient instead of the commonly used Cohen's Kappa, since the scores are on a continuous scale. The results are presented in Table 10. Overall, we achieve a good inter-annotator agreement for all sentence types and evaluation metrics. Moreover, for the fourth survey, the IAA is 0.40 and 0.65 for the reference translations and the NMT translations, respectively. The lower IAA for this survey compared to the other ones is due to the challenging task of assessing translation quality across two languages (Ma et al., 2018).

Table 10: IAA for different surveys in human evaluation.

| Sentence Type | Fluency | Similarity in En | Similarity in Fr |
|---|---|---|---|
| Original | 0.68 | - | - |
| TransFool | 0.85 | 0.82 | 0.79 |
| kNN | 0.91 | 0.82 | 0.86 |
| Seq2Sick | 0.89 | 0.88 | 0.83 |

## D More Results On The White-Box Attack

## D.1 Semantic Similarity Computed By Other Metrics

To better assess the ability of adversarial attacks to maintain semantic similarity, we also compute the similarity between the original and adversarial sentences using other metrics, such as BERTScore (Zhang et al., 2019) and BLEURT-20 (Sellam et al., 2020). It is shown in (Zhang et al., 2019) that BERTScore correlates well with human judgments, and BLEURT-20 is also shown to correlate better with human judgment than traditional measures (Freitag et al., 2021).
The results are reported in Table 11. These results indicate that TransFool is indeed more capable of preserving the semantics of the input sentence. In the two cases where kNN achieves better similarity according to the Universal Sentence Encoder (USE) (Yang et al., 2020), TransFool still performs better in terms of BERTScore and BLEURT-20.

Table 11: Similarity performance of white-box attacks.

| Task | Method | Model | USE↑ | BERTScore↑ | BLEURT-20↑ |
|---|---|---|---|---|---|
| En-Fr | TransFool | Marian NMT | 0.85 | 0.95 | 0.65 |
| En-Fr | TransFool | mBART50 | 0.84 | 0.96 | 0.70 |
| En-Fr | kNN | Marian NMT | 0.82 | 0.94 | 0.61 |
| En-Fr | kNN | mBART50 | 0.85 | 0.93 | 0.67 |
| En-Fr | Seq2Sick | Marian NMT | 0.75 | 0.94 | 0.60 |
| En-Fr | Seq2Sick | mBART50 | 0.75 | 0.94 | 0.66 |
| En-De | TransFool | Marian NMT | 0.84 | 0.96 | 0.67 |
| En-De | TransFool | mBART50 | 0.83 | 0.95 | 0.69 |
| En-De | kNN | Marian NMT | 0.82 | 0.94 | 0.61 |
| En-De | kNN | mBART50 | 0.86 | 0.93 | 0.67 |
| En-De | Seq2Sick | Marian NMT | 0.67 | 0.93 | 0.52 |
| En-De | Seq2Sick | mBART50 | 0.66 | 0.92 | 0.58 |
| En-Zh | TransFool | Marian NMT | 0.88 | 0.96 | 0.67 |
| En-Zh | TransFool | mBART50 | 0.90 | 0.97 | 0.76 |
| En-Zh | kNN | Marian NMT | 0.86 | 0.95 | 0.66 |
| En-Zh | kNN | mBART50 | 0.90 | 0.95 | 0.72 |
| En-Zh | Seq2Sick | Marian NMT | 0.73 | 0.94 | 0.54 |
| En-Zh | Seq2Sick | mBART50 | 0.78 | 0.95 | 0.67 |

## D.2 Performance Of TransFool Against Other Translation Tasks

We also test the generalizability of TransFool by attacking English-to-Russian (En-Ru) and English-to-Czech (En-Cs) translation tasks. The results are reported in Table 12. The performance of TransFool on these translation tasks is consistent with the tasks studied previously.
Table 12: Performance of the TransFool white-box attack against other translation tasks.

| Task | Model | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ |
|---|---|---|---|---|---|---|---|
| En-Ru | Marian NMT | 75.78 | 0.65 | 0.27 | 0.84 | 209.82 | 14.60 |
| En-Ru | mBART50 | 61.99 | 0.57 | 0.24 | 0.86 | 111.01 | 9.66 |
| En-Cs | Marian NMT | 68.81 | 0.65 | 0.24 | 0.86 | 191.36 | 12.68 |
| En-Cs | mBART50 | 59.84 | 0.61 | 0.23 | 0.84 | 116.89 | 10.26 |

## D.3 Performance Over Successful Attacks

The evaluation metrics of the successful adversarial examples that strongly affect the translation quality are also important, as they show the capability of the adversarial attack. Hence, we evaluate TransFool, kNN, and Seq2Sick only over the successful adversarial examples.6 The results for the white-box setting are presented in Table 13. Comparing this table with Table 1, which shows the results on the whole dataset, we see that TransFool's performance is *consistent* between successful and unsuccessful attacks. Moreover, successful adversarial examples generated by TransFool are still semantically similar to the original sentences, and their perplexity score is low. However, the successful adversarial examples generated by Seq2Sick and kNN do not preserve the semantic similarity and are not fluent sentences; hence, they are *not valid* adversarial sentences.

6 As defined in Section 5, an adversarial example is successful if the BLEU score of its translation is less than half of the BLEU score of the original translation.
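Under the success definition in footnote 6, the attack success rate (ASR) reported in these tables can be computed as in the following sketch; the function names are ours, and the BLEU scores are assumed to be precomputed for each sentence pair.

```python
def is_successful(bleu_adv: float, bleu_orig: float) -> bool:
    """Success: the adversarial translation's BLEU is less than half
    of the original translation's BLEU (footnote 6)."""
    return bleu_adv < 0.5 * bleu_orig

def attack_success_rate(bleu_pairs) -> float:
    """bleu_pairs: iterable of (bleu_adv, bleu_orig) per sentence.
    Returns the ASR as a percentage."""
    pairs = list(bleu_pairs)
    hits = sum(is_successful(a, o) for a, o in pairs)
    return 100.0 * hits / len(pairs)
```

For example, if two out of four attacked sentences see their translation BLEU drop below half the original value, the ASR is 50.0.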
Table 13: Performance of white-box attacks over successful adversarial examples.

| Task | Method | Model | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | TER↓ |
|---|---|---|---|---|---|---|---|---|
| En-Fr | TransFool | Marian NMT | 69.38 | 0.66 | 0.26 | 0.83 | 229.75 | 15.33 |
| En-Fr | TransFool | mBART50 | 60.68 | 0.66 | 0.27 | 0.82 | 164.52 | 12.56 |
| En-Fr | kNN | Marian NMT | 36.53 | 0.70 | 0.30 | 0.76 | 746.89 | 24.52 |
| En-Fr | kNN | mBART50 | 30.84 | 0.72 | 0.28 | 0.77 | 691.64 | 28.05 |
| En-Fr | Seq2Sick | Marian NMT | 27.01 | 0.72 | 0.40 | 0.56 | 648.92 | 25.28 |
| En-Fr | Seq2Sick | mBART50 | 25.53 | 0.74 | 0.41 | 0.53 | 556.61 | 25.16 |
| En-De | TransFool | Marian NMT | 69.49 | 0.72 | 0.25 | 0.83 | 191.51 | 14.54 |
| En-De | TransFool | mBART50 | 62.87 | 0.73 | 0.26 | 0.81 | 169.76 | 12.66 |
| En-De | kNN | Marian NMT | 39.22 | 0.75 | 0.29 | 0.77 | 675.01 | 23.07 |
| En-De | kNN | mBART50 | 35.99 | 0.75 | 0.23 | 0.81 | 574.68 | 25.75 |
| En-De | Seq2Sick | Marian NMT | 35.60 | 0.78 | 0.40 | 0.53 | 659.90 | 25.67 |
| En-De | Seq2Sick | mBART50 | 35.59 | 0.78 | 0.40 | 0.52 | 612.22 | 26.67 |
| En-Zh | TransFool | Marian NMT | 73.82 | 0.76 | 0.34 | 0.87 | 112.28 | 12.83 |
| En-Zh | TransFool | mBART50 | 57.50 | 0.73 | 0.31 | 0.88 | 99.08 | 9.86 |
| En-Zh | kNN | Marian NMT | 31.12 | 0.72 | 0.29 | 0.80 | 355.25 | 22.55 |
| En-Zh | kNN | mBART50 | 27.25 | 0.76 | 0.27 | 0.85 | 295.53 | 23.58 |
| En-Zh | Seq2Sick | Marian NMT | 28.76 | 0.72 | 0.46 | 0.58 | 437.49 | 26.84 |
| En-Zh | Seq2Sick | mBART50 | 24.25 | 0.79 | 0.44 | 0.60 | 292.55 | 25.59 |

## D.4 Trade-Off Between Success Rate And Similarity/Fluency

The results of our ablation study (Section B) show that there is a trade-off between the quality of the adversarial examples, in terms of semantic preservation and fluency, and the attack success rate. As studied in (Morris et al., 2020), we can filter out low-quality adversarial examples based on hard constraints on the semantic similarity and on the number of grammatical errors added by the adversarial perturbations.
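The hard-constraint filtering just described can be sketched as follows. The threshold values and field names are illustrative (in practice, similarity would come from a sentence encoder and error counts from LanguageTool), and counting the success rate over all attacked sentences is our reading of the curves in Figure 7, not a detail stated in the text.

```python
def filter_valid(examples, min_sim=0.8, max_added_errors=2):
    """Keep adversarial examples that satisfy both hard constraints:
    encoder similarity >= min_sim and at most max_added_errors grammatical
    errors added relative to the original sentence."""
    return [
        ex for ex in examples
        if ex["sim"] >= min_sim
        and ex["adv_errors"] - ex["orig_errors"] <= max_added_errors
    ]

def success_rate_after_filtering(examples, **thresholds):
    """Success rate over all attacked sentences, counting only successful
    adversarial examples that also pass the validity constraints."""
    kept = filter_valid(examples, **thresholds)
    return 100.0 * sum(ex["success"] for ex in kept) / len(examples)
```

Tightening `min_sim` or `max_added_errors` discards more adversarial examples, so the reported success rate decreases while the retained examples are of higher quality, which is the trade-off shown in Figure 7.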
We can analyze the trade-off between success rate and similarity/fluency by setting different thresholds for filtering adversarial examples. If we evaluate the similarity with the sentence encoder suggested in (Morris et al., 2020), the success rate for different similarity thresholds in the case of Marian (En-Fr) is depicted in Figure 7b. When considering only the adversarial examples with a similarity higher than a threshold, the success rate decreases as the threshold increases, while the quality of the retained adversarial examples increases. We can perform a similar analysis for fluency. As suggested in (Morris et al., 2020), we count grammatical errors with LanguageTool (Naber et al., 2003) for the original sentences and the adversarial examples. Figure 7a depicts the success rate for different thresholds on the number of grammatical errors added by the adversarial perturbations. These analyses show that with tighter constraints, we can generate better adversarial examples, while the success rate decreases. All in all, according to these results, TransFool outperforms the baselines for all thresholds on similarity and grammatical errors.

Figure 7: Trade-off between success rate and similarity/fluency. The left plot shows the effect of the acceptable number of grammar errors added by the adversarial perturbation; the right plot shows the effect of the similarity threshold.

## D.5 More Adversarial Examples

In this section, we present more adversarial examples generated by TransFool, kNN, and Seq2Sick. To show the effect of using LM embeddings on the performance of TransFool, we also include the adversarial examples generated against the English-to-French Marian NMT model when we do not use LM embeddings.
In all these tables, the tokens modified by TransFool are written in **blue** in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown.

As can be seen in the examples presented in Table 14, TransFool makes smaller changes to the sentence. The generated adversarial example is a correct English sentence and is similar to the original sentence. However, kNN, Seq2Sick, and our method with the NMT embeddings make changes that are perceptible, and the resulting adversarial sentences are not necessarily similar to the original sentence. The higher semantic similarity of the adversarial sentences generated by TransFool is due to the integration of the LM embeddings and the LM loss in the proposed optimization problem. We highlight that TransFool is able to cause changes in the translation of the adversarial sentence that are not directly related to the modifications of the original sentence but are the result of the NMT model's failure.
Table 14: Adversarial examples against Marian NMT (En-Fr) by various methods (white-box).

| Sentence | Text |
|---|---|
| Org. | The most eager is Oregon, which is enlisting 5,000 drivers in the country's biggest experiment. |
| Ref. Trans. | Le plus déterminé est l'Oregon, qui a mobilisé 5 000 conducteurs pour mener l'expérience la plus importante du pays. |
| Org. Trans. | Le plus avide est l'Oregon, qui recrute 5 000 pilotes dans la plus grande expérience du pays. |
| Adv. TransFool | The most eager is Quebec, which is enlisting 5,000 drivers in the country's biggest experiment. |
| Trans. | Le Québec, qui fait partie de la plus grande expérience du pays, compte 5 000 pilotes. (some parts are not translated at all.) |
| Adv. w/ NMT Emb. | The most eager is Custom, which is enlisting Disk drivers in the country's editions Licensee. |
| Trans. | Le plus avide est Custom, qui recrute des pilotes de disque dans les éditions du pays Licencié. |
| Adv. kNN | Theve eager is Oregon, C aren enlisting 5,000 drivers in theau's biggest experiment. |
| Trans. | Theve avide est Oregon, C sont enrôlés 5 000 pilotes dans la plus grande expérience de Theau. |
| Adv. Seq2Sick | The most buzz is FREE, which is chooseing Games comments in the country's great developer. |
| Trans. | Le plus buzz est GRATUIT, qui est de choisir Jeux commentaires dans le grand développeur du pays. |

Other examples against different tasks and models are presented in Tables 15 to 19.

Table 15: Adversarial examples against Marian NMT (En-De) by various methods (white-box).

| Sentence | Text |
|---|---|
| Org. | The devices, which track every mile a motorist drives and transmit that information to bureaucrats, are at the center of a controversial attempt in Washington and state planning offices to overhaul the outdated system for funding America's major roads. |
| Ref. Trans. | Die Geräte, die jeden gefahrenen Kilometer aufzeichnen und die Informationen an die Behörden melden, sind Kernpunkt eines kontroversen Versuchs von Washington und den Planungsbüros der Bundesstaaten, das veraltete System zur Finanzierung US-amerikanischer Straßen zu überarbeiten. |
| Org. Trans. | Die Geräte, die jede Meile ein Autofahrer fährt und diese Informationen an Bürokraten weiterleitet, stehen im Zentrum eines umstrittenen Versuchs in Washington und in den staatlichen Planungsbüros, das veraltete System zur Finanzierung der großen Straßen Amerikas zu überarbeiten. |
| Adv. TransFool | The vehicles, which track every mile a motorist drives and transmit that information to bureaucrats, are at the center of a unjustified attempt in Washington and city planning offices to overhaul the clearer system for funding America's major roads. |
| Trans. | Die Fahrzeuge, die jede Meile ein Autofahrer fährt und diese Informationen an Bürokraten weiterleitet, stehen im Zentrum eines ungerechtfertigten Versuchs in Washington und in den Stadtplanungsbüros, das klarere System zur Finanzierung der amerikanischen Hauptstraßen zu überarbeiten. |
| Adv. kNN | The devices in which track every mile a motorist drives and transmit that M to bureaucrats, are 07:0 the center of a controversial attempt in Washington and state planning offices to overhaul the outdated Estate for funding America's major roads. |
| Trans. | Die Vorrichtungen, in denen jede Meile ein Autofahrer fährt und diese M an Bürokraten überträgt, sind 07:0 das Zentrum eines umstrittenen Versuchs in Washington und staatlichen Planungsbüros, das veraltete Estate für die Finanzierung der amerikanischen Hauptstraßen zu überarbeiten. |
| Adv. Seq2Sick | The devices, which road everyably a motorist drives and transmit that information to walnut socialisms, are at the center of a Senate attempt in Washington and state planning offices toestablishment the outdated system for funding America's major paths. |
| Trans. | Die Geräte, die allgegenwärtig ein Autofahrer antreibt und diese Informationen an Walnusssozialismen überträgt, stehen im Zentrum eines Senatsversuchs in Washington und in den staatlichen Planungsbüros, das veraltete System zur Finanzierung der wichtigsten Wege Amerikas einzurichten. |
| Sentence | Text |
|----------|------|
| Org. | And what your husband said... if Columbus had done it, we'd all be Indians. |
| Ref. Trans. | 你丈夫说的... 要是哥伦布没发现美洲,我们现在就都是印第安人了 |
| Org. Trans. | 你丈夫说的话... 如果哥伦布做到了我们都会是印第安人 |
| Adv. TransFool | And with your husband said... if Columbus had done it, we'd all be Indians. |
| Trans. | 你丈夫说如果哥伦布做到了我们都会是印第安人 ("..." is not in the translation.) |
| Adv. kNN | And what your husband said... if Columbus had60, we' Nineteen all it Indians. |
| Trans. | 你丈夫说的话... 如果哥伦布有60" 我们19个印度人 |
| Adv. Seq2Sick | And completing your penalties said... if timely had done it, we'd all be briefed. |
| Trans. | 完成你的处罚说... 如果及时完成,我们都会得到简报 |

Table 16: Adversarial examples against Marian NMT (En-Zh) by various methods (white-box).

| Sentence | Text |
|----------|------|
| Org. | Wearing a wingsuit, he flew past over the famous Monserrate Sanctuary at 160km/h. The sanctuary is located at an altitude of over 3000 meters and numerous spectators had gathered there to watch his exploit. |
| Ref. Trans. | Equipé d'un wingsuit, il est passé à 160 km/h au-dessus du célèbre sanctuaire Monserrate, situé à plus de 3 000 mètres d'altitude, où de nombreux badauds s'étaient rassemblés pour observer son exploit. |
| Org. Trans. | Il a survolé à 160 km/h le célèbre sanctuaire de Monserrate, situé à une altitude de plus de 3000 mètres, où de nombreux spectateurs se sont réunis pour assister à son exploit. |
| Adv. TransFool | Wearing a wingsuit, he flew past over the famous Interesserrage Sanctuary at 160km/h. The sanctuary is located at an altitude of over 3000 meters and numerous spectators had gathered there to watch his exploit. |
| Trans. | Le sanctuaire est situé à une altitude de plus de 3000 mètres et de nombreux spectateurs se sont réunis pour assister à son exploit. (first part of the sentence is not translated at all.) |
| Adv. kNN | Wearing a wingsuit. he flew past over the famous Monserrate Sanctuary at 160km/h. The sanctuary is located at anzu opinionstitude of over 8000 meters and numerous spectators had gathered there the watch his exploit. |
| Trans. | Il a survolé le célèbre sanctuaire de Monserrate à 160 km/h. Le sanctuaire est situé à une opiniontitude de plus de 8000 mètres et de nombreux spectateurs se sont rassemblés là pour observer son exploit. |
| Adv. Seq2Sick | Wearing a wingsuit, he flew past over the famous Monserrate Sanctuary at 160km/h. The sanctuary is located at an altitude of over74 meters and numerous spectators had gathered there to watch his exploit. |
| Trans. | Il a survolé à 160 km/h le célèbre sanctuaire de Monserrate, situé à plus de 74 mètres d'altitude, où de nombreux spectateurs se sont réunis pour assister à son exploit. |

Table 17: Adversarial examples against mBART50 (En-Fr) crafted by various methods (white-box).

| Sentence | Text |
|----------|------|
| Org. | In Oregon, planners are experimenting with giving drivers different choices. |
| Ref. Trans. | In Oregon experimentieren die Planer damit, Autofahrern eine Reihe von Auswahlmöglichkeiten zu geben. |
| Org. Trans. | In Oregon experimentieren Planer damit, Fahrern verschiedene Wahlen zu geben. |
| Adv. TransFool | In Oregon, planners were experimenting with giving drivers different choices. |
| Trans. | In Oregon experimentierten Planer mit der Bereitstellung unterschiedlicher Wahlmöglichkeiten für Fahrer. |
| Adv. kNN | in Oregon, planners nemmeno experimenting withkjer driver. different choices, |
| Trans. | in Oregon, Planer nemmeno experimentieren mitkjer Fahrer. verschiedene Wahlen, |
| Adv. Seq2Sick | acontece, planners are studying with Kivakapis against decisions, |
| Trans. | In acontece studieren Planer mit Kivakapis gegen Entscheidungen, |

Table 18: Adversarial examples against mBART50 (En-De) crafted by various methods (white-box).

| Sentence | Text |
|----------|------|
| Org. | Delegations are requested to submit the names of their representatives to the Secretary of the Preparatory Committee, Ms. Vivian Pliner-Josephs (room S-2950E; fax: (212) 963-5935). |
| Ref. Trans. | 请各代表团将其代表姓名送交给筹备委员会秘书VivianPliner-Josephs女士(S-2950E室;电传:(212)963-5935)。 |
| Org. Trans. | 请各代表团向筹备委员会秘书VivianPliner-Josephs(S-2950E室;传真:(212)963-5935)提出代表的姓名。 |
| Adv. TransFool | Delegations are requested to submit the names of their representatives to the Secretary of the Preparatory Committee, Mr. Vivian Pliner-Josephs (room C-2930E; fax: (211) 96 25-30935). |
| Trans. | 请各代表团将其代表的姓名提交筹备委员会秘书维维安·普林纳-约瑟夫斯先生(房间C-2930E;传真:(211)9625- 30935)。 |
| Adv. kNN | Delegations are requested to submit the names of their representatives that the Secretary of the Preparatory Committee, Ms. VivianPliner-Joseph, (room S-2950 •, fax: (212) 963-5935). |
| Trans. | 请各代表团向筹备委员会秘书VivianPliner-Joseph(S-2950室;传真:(212)963-5935)递交代表的姓名。 |
| Adv. Seq2Sick | Delegations are requested to submit the names of their representatives to the Secretary of the Preparatory Committee, Ms.jadan Pliner-Josephs (room S-2950E; 599: 212 96 2010,935. |
| Trans. | 请各代表团将其代表的姓名提交筹备委员会秘书贾丹·普林纳-约塞夫斯女士(S-2950E室;599:212962010,935)。 |

Table 19: Adversarial examples against mBART50 (En-Zh) crafted by various methods (white-box).

## E More Results On The Black-Box Attack

## E.1 Attacking Google Translate

To evaluate the effect of different attacks in practice, we attack Google Translate7 by TransFool, kNN, and Seq2Sick. Since querying Google Translate is limited per day, we were not able to attack with WSLS, which requires a high number of queries. Table 20 presents the performance on the English-to-French translation task. The results demonstrate that adversarial sentences crafted by TransFool can degrade the translation quality more while preserving the semantics better.
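Footnote 7 notes that word error rate (WER) is reported instead of token error rate because no tokenizer for Google Translate is available. As an illustration only (the helpers below are not from the paper's code; we assume the standard definitions of WER as word-level edit distance normalized by reference length, and of relative degradation as the normalized score drop), the reported quantities can be sketched as:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


def relative_drop(score_before: float, score_after: float) -> float:
    """Relative degradation of a translation-quality score (e.g., BLEU or chrF)."""
    return (score_before - score_after) / score_before


# One deletion plus one substitution over a 6-word reference: WER = 2/6.
print(wer("the cat sat on the mat", "the cat on a mat"))
```

Under these assumed definitions, a BLEU drop from 30.0 to 13.5 on the adversarial input would correspond to a relative degradation of 0.55.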
The perplexity score and word error rate of TransFool compete with those of Seq2Sick, but Seq2Sick adversarial sentences are not as similar and are less effective. We also performed the cross-lingual black-box attack. We consider Marian NMT (En-Fr) as the reference model and attack En-De Google Translate. The results for TransFool are reported in Table 21.

Table 20: Performance of black-box attacks against Google Translate (En-Fr).

| Method | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | WER↓ |
|-----------|-------|------|------|------|--------|-------|
| TransFool | 67.83 | 0.55 | 0.23 | 0.85 | 184.35 | 20.85 |
| kNN | 37.22 | 0.35 | 0.17 | 0.82 | 389.45 | 30.24 |
| Seq2Sick | 23.49 | 0.20 | 0.15 | 0.75 | 174.88 | 20.34 |

Table 21: Performance of the TransFool black-box attack against Google Translate (En-De), when the target language is different.

| Task | ASR↑ | RDBLEU↑ | RDchrF↑ | Sim.↑ | Perp.↓ | WER↓ |
|---------------|-------|------|------|------|--------|-------|
| En-Fr → En-De | 67.42 | 0.65 | 0.26 | 0.85 | 198.56 | 20.78 |

## E.2 Semantic Similarity Computed By Other Metrics

Similar to the white-box attack, we compute the similarity between the adversarial and original sentences by BERTScore and BLEURT-20, since they correlate well with human judgments. The similarity performance of TransFool and WSLS8 in the black-box setting is demonstrated in Table 22. According to Table 22, TransFool is better at maintaining semantic similarity. This may be because we used LM embeddings instead of the NMT ones in the similarity constraint.

Table 22: Similarity performance of black-box attacks.

| Task | Method | USE↑ | BERTScore↑ | BLEURT-20↑ |
|-------|-----------|------|------|------|
| En-Fr | TransFool | 0.85 | 0.95 | 0.66 |
| En-Fr | WSLS | 0.84 | 0.93 | 0.58 |
| En-De | TransFool | 0.84 | 0.96 | 0.67 |
| En-De | WSLS | 0.86 | 0.94 | 0.61 |
| En-Zh | TransFool | 0.88 | 0.96 | 0.68 |
| En-Zh | WSLS | 0.83 | 0.93 | 0.56 |

## E.3 Some Adversarial Examples

We also present some adversarial examples generated by TransFool and WSLS in the black-box setting in Table 23. In this table, the tokens modified by TransFool are written in **blue** in the original sentence, and the tokens modified by the different adversarial attacks are written in red in their corresponding adversarial sentences. Moreover, the changes made by the adversarial attack to the translation that are not directly related to the modified tokens are written in orange, while the changes that are the direct result of the modified tokens are written in brown. These examples show that the modifications made by TransFool are less detectable, i.e., the generated adversarial examples are more natural and similar to the original sentence. Moreover, TransFool makes changes to the translation that are not the direct result of the modified tokens of the adversarial sentence.

| Sentence | Text |
|----------|------|
| Org. | (c) To provide care and support by strengthening programming for orphans and vulnerable children infected/affected by AIDS and by expanding life skills training for young people. |
| Ref. Trans. | (c)以加强协助艾滋病孤儿和被艾滋病感染/影响脆弱儿童的方案,以及扩大助益年轻人的生活技能培训方式,提供照顾和支助。 |
| Org. Trans. | (c)通过加强对艾滋病感染/受害的孤儿和脆弱儿童的方案和扩大对年轻人的生活技能培训,提供照顾和支助。 |
| Adv. TransFool | [c) To provide care and support by strengthening programming for orphans and vulnerable children Disabled / afflicted by AIDS and by expanding life skill training for young people. |
| Trans. | [c)通过加强为孤儿和受艾滋病影响的弱势儿童提供照顾和支助,并扩大对年轻人的生活技能培训。 |
| Adv. WSLS | (c) To provide nursing and unstinted_support by strengthening i_Lifetv for orphans and susceptable children infected/affected by CPR_mannequins and by broadening life skills training for young people. |
| Trans. | (c)通过加强孤儿和受CPR_迷彩感染/影响的易受感染儿童的i_Lifetv,并为年轻人提供更广泛的生活技能培训,提供护理和无毒的支助。 |
| Adv. kNN | ( so) address provide care and support by strengthening prioritization for orphans and vulnerable children infected/affected by AIDS and by expanding life skills issue for young people. |
| Trans. | 因此,通过加强对艾滋病感染/受害的孤儿和脆弱儿童的优先事项和扩大对年轻人的生活技能的问题,解决提供照顾和支助。 |
| Adv. Seq2Sick | (c) To provide care and support by strengthening digital for dress and harmful children Journal/ Letter by Region and by disappear Violence skills training for young people. |
| Trans. | (c)通过加强服装和有害儿童的数字,按区域分发新闻/信,并为年轻人提供暴力技能培训,提供照顾和支持。 |

Table 23: Adversarial examples against mBART50 (En-Zh) crafted by various methods (black-box).

7We should note that as we do not have a tokenizer, we compute Word Error Rate (WER) instead of Token Error Rate (TER).
8The results of kNN and Seq2Sick are not reported as they are transfer attacks, and their performance is reported in Table 11.

## F Effect Of Back-Translation Model Choice On WSLS Performance

WSLS uses a back-translation model for crafting an adversarial example. In (Zhang et al., 2021), the authors investigate the En-De task and use the winner model of the WMT19 De-En sub-track (Ng et al., 2019) as the back-translation model. However, they do not evaluate their method on the En-Fr and En-Zh tasks. To evaluate the performance of WSLS in Table 3, we have used pre-trained Marian NMT models for all three back-translation models. In order to show the effect of our choice of back-translation model, we compare the performance of WSLS for the En-De task when we use Marian NMT or (Ng et al., 2019) as the back-translation model in Table 24.
As this table shows, WSLS with Marian NMT as the back-translation model results in even higher semantic similarity and a lower perplexity score. On the other hand, WSLS with (Ng et al., 2019) as the back-translation model has a slightly higher success rate. These results show that our choice of back-translation model does not strongly affect the performance of WSLS.

Table 24: Performance of WSLS (En-De) with two back-translation models.

| Back-Translation | ASR | RDBLEU | RDchrF | Sim. | Perp. | #Queries |
|-------------------|-------|------|------|------|--------|------|
| Marian NMT | 44.33 | 0.50 | 0.19 | 0.86 | 219.32 | 1262 |
| (Ng et al., 2019) | 51.68 | 0.58 | 0.21 | 0.81 | 241.96 | 1307 |

## G License Information And Details

In this section, we provide some details about the datasets, code, and models used in this paper. We should note that we used the models and datasets that are available in the HuggingFace transformers (Wolf et al., 2020) and datasets (Lhoest et al., 2021) libraries.9 They are licensed under the Apache License 2.0. Moreover, we used PyTorch for all experiments (Paszke et al., 2019), which is released under the BSD license10.

## G.1 Datasets

**WMT14** In the Ninth Workshop on Statistical Machine Translation, WMT14 was introduced for four tasks. We used the En-De and En-Fr news translation tasks. There is no license available for this dataset.

**OPUS-100** OPUS-100 is a multilingual translation corpus for 100 languages, which is randomly sampled from the OPUS collection (Tiedemann, 2012). There is no license available for this dataset.

9These two libraries are available at this GitHub repository: https://github.com/huggingface.
10https://github.com/pytorch/pytorch/blob/master/LICENSE

## G.2 Models

**Marian NMT** Marian is a Neural Machine Translation framework, which is mainly developed by the Microsoft Translator team, and it is released under the MIT License11. This model uses a beam size of 4.

**mBART50** mBART50 is a multilingual machine translation model covering 50 languages, which has been introduced by Facebook.
This model is published in the Fairseq library, which is released under the MIT License12. This model uses a beam size of 5.

## G.3 Codes

**kNN** In order to compare our method with kNN (Michel et al., 2019), we used the code provided by the authors, which is released under the BSD 3-Clause "New" or "Revised" License.13

**Seq2Sick** To compare our method with Seq2Sick (Cheng et al., 2020a), we used the code published by the authors.14 There is no license available for their code.

**WSLS** We implemented and evaluated WSLS (Zhang et al., 2021) using the source code published by the authors.15

11https://github.com/marian-nmt/marian/blob/master/LICENSE.md
12https://github.com/facebookresearch/fairseq/blob/main/LICENSE
13The source code is available at https://github.com/pmichel31415/translate/tree/paul/pytorch_translate/research/adversarial/experiments and the license is available at https://github.com/pmichel31415/translate/blob/paul/LICENSE
14The source code is available at https://github.com/cmhcbb/Seq2Sick.
15https://github.com/JHL-HUST/AdvNMT-WSLS
Review 1:

Summary: This paper focuses on adversarial example generation for attacks on neural machine translation (NMT). Adversarial examples are slightly perturbed model inputs (i.e., the source sentence in NMT) which degrade translation substantially. To produce meaning-preserving and fluent adversarial examples, the authors propose TransFool with three distinct losses: an adversarial loss that encourages the model to produce degenerated outputs, a language model loss that ensures fluency, and a similarity constraint that avoids departing from the original source sentence. TransFool back-propagates gradients through a target NMT model and an LM model to optimize the token embeddings and adopts an iterative attack algorithm to update the embeddings and adversarial sources. Results on several translation tasks and NMT models show that TransFool achieves a higher attack success rate with improved semantic preservation and fluency.

Strengths and Weaknesses:

Strengths:
- The proposed model is relatively easy to understand and obtains good performance.
- The authors show that TransFool could transfer to black-box and cross-lingual setups.

Weaknesses:
- Some statements in this paper are confusing or misleading.
- Important ablation studies are missing.
- Evaluation could be improved: human evaluation is strongly suggested in some cases.
- The proposal requires training/finetuning a separate LM model, which is time-consuming.

Requested Changes:

1. Some statements are hard to understand:
- On page 3, "most of the existing adversarial attacks against NMT models are not undetectable since … they use the NMT embedding space to find similar tokens." Why does using NMT embeddings to find similar tokens make the attack detectable?
- In Section 5.4, "TransFool does not explicitly choose a few tokens and replace them with other similar tokens." If I understand correctly, TransFool still relies on (similar) token replacement to achieve adversarial generation, although many tokens are retained based on the algorithm.
- On page 6, "the distance in the embedding space of the LM model represents the relationship between the tokens." In Section 5.4, "the embedding space of the NMT model does not necessarily capture the relationship between tokens." Could you explain what "relationship" you refer to here? Word embeddings in neural models often capture at least basic syntactic and co-occurrence information. Could you show some direct evidence to verify the statement here?

2. More ablation studies should be given, particularly on the LM embedding part.
- TransFool relies heavily on the LM embedding to generate adversarial examples, but the LM embedding is essentially a simple linear mapping of the NMT embedding, as shown in Figure 1. It's surprising that such a simple mapping could make such a big difference. What if Eq. (4) and Eq. (6) were performed on the NMT embedding directly? Also, did you share source and target NMT embeddings for the NMT model? Does sharing or tying the embedding parameters affect the performance?
- The similarity loss is supposed to retain semantic similarity of the original source input, but the loss actually just uses word embedding representations without considering any contextual information. To what extent does it really deal with sentence-level semantics? Eq. (4) looks more like a locality constraint avoiding moving too far from the original inputs.
- Besides, how does the quality of the LM affect the attack?

3. Although TransFool obtains better results based on semantic similarity metrics, the results in Table 2 show a slightly different story.
Replacing “Oregon” with “Quebec” doesn’t sound meaning-preserving, but it could still obtain a high metric score, possibly because neural metrics are not that sensitive to a one- (or few-)word change. It would be great to show some human evaluations to avoid misleading conclusions.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This paper describes the TransFool algorithm for creating adversarial examples for machine translation systems. Adversarial translation examples, as defined by this paper, are examples where you make small (they frequently use the term imperceptible) changes to the source sentence, which result in a large degradation to the quality of the resulting translation as measured against the original reference. Given this task, and given a modified definition of “small change” to be small in terms of semantic similarity to the original sentence as measured by a neural model, the authors propose a gradient-based attack that changes the embeddings of the original source words in order to (1) reduce the probability of the reference translation, (2) be close in the vector space defined by the embedding table of a neural LM, and (3) receive high probability from the same neural LM. The choice of neural LM is constrained in that it needs to use the same word piece model as the translation model (TM), and it needs to have its bottom embedding table be the TM's embedding table transformed by a single fully connected layer. Once a new set of vectors is found, they are projected onto token space, and the process repeats until some desired BLEU degradation is reached after running the modified source through the model under attack.
Interestingly, the authors evaluate their approach not only as a white-box model with full access to the gradients under attack, but also as a black-box model, where they use gradients on a system they do have access to in order to attack another system they do not have access to. They even attempt cross-lingual attacks where the two systems have the same source language but different target languages. They compare their system to recent baselines, and show that it makes larger BLEU degradations with fewer token changes, better perplexity (measured by GPT-2), and higher semantic similarity to the original (measured by the universal sentence encoder).

Strengths and Weaknesses:

Strengths: If one accepts this task as established by the works that come before it, and in particular, as defined by the Seq2Sick paper, then this work reflects a clear incremental improvement over what has come before. The attacks improve all metrics: BLEU degradation, semantic similarity to the original source, and perplexity (standing in for grammaticality) of the modified source. Plus simple token counting metrics show that fewer tokens are changing to produce more harmful attacks. This paper is well-written, and easy to understand. It compares against relevant baselines and surpasses them. The comparisons are well done. I especially like that they used an independent semantic similarity system (universal sentence encoder) and an independent LM (GPT-2) to produce metrics. The black-box attack and cross-lingual black-box attack are interesting and clever extensions, giving the work novelty beyond the incremental improvements discussed above.

Weaknesses: This task does not make much sense. Why is it important to be able to make a change to a sentence that is small according to a (very permissive) semantic similarity metric, but which results in a big change according to BLEU?
The provided example changes “Oregon” to “Quebec” – a classic example of how neural metrics of semantic similarity can be a bad fit for measuring human-perceived semantic distance, and as I looked at the other examples, the story didn’t look any better. The image and audio attacks that inspired this work are compelling because they actually are imperceptible to the human eye or ear, plus they lead to clear harms such as the addition of a sticker to a stop sign making it invisible to self-driving cars. How does it defeat an MT system when we change Oregon to Quebec and the translation changes in a mostly-reasonable way in response? Were I a reviewer on the Seq2Sick paper, I would have made the same argument there, and I would have argued strongly against its acceptance. Beyond vague statements about robustness, this paper has not made a strong argument for why incremental improvements on this adversarial task will eventually lead to something useful (or dangerous). The authors seem to be aware of this issue, as they discuss a big part of it in their Discussion in 5.4: they have yet to make any attempt to attribute how much of the reduction in BLEU comes from actually changing the content of the sentence versus how much of it comes from degrading MT performance. In the examples where they have clearly degraded the MT performance, such as the example in Table 12, which causes half of the sentence to be dropped, this only happens when they have made a major change to the source content, changing the name of the “Monserrate Sanctuary” (500k hits on Google and has its own Wikipedia page) to the “Interesserrage Sanctuary” - a made up place beginning with a made up word that has only one hit on Google – the arXiv link for this paper – which I didn’t look at beyond the title in order to preserve the authors’ anonymity. Why am I so worked up about this? 
Because I can see a chain of reasonable incremental improvements to this problem extending from this paper, with each one improving semantic similarity and perplexity of the altered source, and I have no idea why we are doing this. Other papers on adversarial attacks on MT have a clear story of why someone would want to carry out such attacks. Several such papers are dismissed here as too obvious or too easily defended. For example, Ebrahimi et al., 2018 have the constraint that only a few characters can be changed in the source: the assumption here is that though a changed word may no longer be a dictionary word, it is likely to still be readable by a speaker of the source language. The implied attack is that the attacker can obfuscate their message from anyone who does not speak the source language and who may be monitoring through machine translation. Likewise, Wallace et al., 2020 look for things like a universal suffix dropper (a phrase that, when appended to any input, commonly causes itself and any subsequent text to be dropped from the translation) or nonsense strings that lead to offensive translations. Here again, the attack purposes are clear: hiding messages for the former, and creating examples that harm a system's reputation for the latter. These may be easily overcome (that doesn’t seem obvious to me), but at least they present attacks that may be harmful to the MT system or its users.

Smaller problems: The related work section, though comprehensive, doesn’t do a great job of positioning this work with respect to the work that has come before. In particular, the author’s description of Seq2Sick as being “based on an optimization problem in the NMT embedding space” makes it sound highly similar to the approach described here. I would spend more time discussing the differences and similarities of the two methods.
One of the major differences with respect to Seq2Sick seems to be the use of language model embeddings (which are simple neural transformations of NMT embeddings) for semantic similarity. I would have probably put more effort into establishing what was gained by shifting from NMT to LM embeddings, and what was lost by forcing your LM to use transformed NMT embeddings instead of being free to use whatever embeddings work best. This could be done by measuring LM perplexity with an unconstrained embedding table to measure the perplexity hit from constraining the LM, and by measuring TransFool’s drop in performance if it uses the NMT embedding table instead of the LM table to measure semantic similarity.

Requested Changes: If one accepts this task as reasonable, I would strongly suggest that you shift to evaluation of MT using BLEURT or COMET instead of BLEU. This would bring the metrics up to date, and it would help address a fairly serious concern in that you’re using a paraphrase-aware metric to measure similarity to the original source, but using metrics that use only the surface forms of words to measure the degradation of translation quality. But I’m not sure it would actually give us a clearer picture of what is going on. I would actually be a little curious to see whether BLEURT would consider changing Oregon to Quebec to be a large or small translation error – it might not be built to handle these kinds of mistakes. Likewise, the second and third weaknesses mentioned above can be addressed relatively easily.

The most important change one can make to this paper is to paint a clear motivating picture for why this task is important. I can see three roads to doing so. (1) would be to defend the task and evaluation as it stands - defend the robustness argument and the utility of neural semantic similarity metrics in measuring whether changes are imperceptible or semantics-preserving.
I’m a little skeptical of this one, but it might be possible - you have thought more about this problem than I have, and I’m not an unreasonable person. (2) would be to shift the evaluation – bring humans into the loop and ask the big questions. Does the new sentence make sense? If it does, then is the new translation a good translation for the new sentence or have we found a robustness hole in this walk through embedding space? How often does this approach lead to changes that are truly imperceptible (i.e.: small spelling variations, deletion of unimportant punctuation or function words) or truly meaning preserving (i.e.: a human would assert that the new sentence means exactly the same thing as the old). This would greatly strengthen the paper because I think it would give us a clear picture of how much progress we’re actually making. It would also be expensive. (3) Craft a compelling story of how attacks with good perplexity and good semantic similarity are useful either by harming MT systems and their users or by showing us how we can improve our MT systems. I can’t think of one myself, that doesn’t mean that one doesn’t exist.

Broader Impact Concerns: Ironically, if I could see this paper causing harm, I’d be far more likely to accept it. Irony aside, I see no ethical issues with these sorts of adversarial papers. They play an important role in the research ecosystem by exposing weaknesses of systems that can later be mitigated or otherwise addressed.

==================================================

Review 3:

Summary: The paper investigates the vulnerability of Neural Machine Translation (NMT) models to adversarial attacks.
The main contributions of this paper include: 1) proposing a new attack algorithm called TransFool, which builds on a multi-term optimization problem and a gradient projection step to generate adversarial examples in the source language that maintain high semantic similarity with clean samples; 2) demonstrating empirically that TransFool can severely degrade translation quality while maintaining high semantic similarity; and 3) showing that TransFool can not only be transferred to unknown target models, but also outperforms existing attacks in terms of success rate, semantic similarity, and fluency.

Strengths and Weaknesses:

Strengths:
1. The paper is well-written - it provides a clear explanation of the TransFool algorithm and its effectiveness in generating adversarial examples for NMT models.
2. The experimental results support the claims well - they demonstrate the severity of the attack on translation quality while maintaining high semantic similarity.
3. The stronger attack (TransFool) presented in this paper can promote stronger defense research for NMT models.

Weaknesses:
1. The motivation is a little bit lacking: what would be the potential real-world implications of NMT models being vulnerable to adversarial attacks?
2. Using the universal sentence encoder (Yang et al., 2019) as an approximate semantic similarity is fine, but in my opinion, for the paper to be more self-contained, it would be good to include the key information that the reader needs from Yang's paper in this paper.
3. The experiments present three language pairs; while they are nice, I would be interested to learn how well TransFool would do with more language pairs, and it could also be a way to show how generalizable TransFool is.

Requested Changes: Based on the weaknesses section above, I think it would be good if the authors could:
1. Discuss a bit more why we care about attacks on NMT models.
2. Provide some introduction of the universal sentence encoder and how exactly it is used in this work.
3.
More experiments comparing various language pairs.

Broader Impact Concerns: No, I don't have any concerns on the ethical implications of the work that would require adding a Broader Impact Statement.

==================================================

Review 4:

Summary: This paper proposes an adversarial attack method - TransFool - for NMT models. It performs optimization with iterative gradient projection and utilizes the embedding representation of a language model to impose a similarity constraint. The method can be model-specific and extended to a white-box attack. Extensive experiments show that TransFool is highly effective in different translation tasks and against different NMT models while maintaining fluency and semantic similarity.

Strengths and Weaknesses:

Strengths:
1. The paper has done extensive experiments on several datasets.
2. The proposed attack is transferable from the white-box to the black-box setting.

Weaknesses:
1. The idea of using gradient projection and imposing a similarity constraint for text attacks is not novel.
2. Only two baselines, Seq2Sick and kNN, are included. More related/strong baselines should be included.
3. The paper lacks some deep analysis. For example, why is the proposed method able to transfer from the white box to the black box?

Requested Changes:
1. More analysis of the transferability and more ablation studies on different variants of translation models.
2. More baselines should be included, such as: a. SemAttack: Natural Textual Attacks via Different Semantic Spaces; b. Textual adversarial attack as combinatorial optimization.

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: Overall, all reviewers and myself agree that this paper is solid and should be accepted. There have been extensive discussions between reviewers and authors after the initial reviews. I think one of the most crucial weaknesses of the paper was on the evaluation side.
It was unclear what caused the degradation in performance for the adversarial sentences. The authors made a thorough revision of their evaluation method by adding a human evaluation of the fluency of the adversarial examples and the quality of the translations for the adversarial examples. Additionally, they proposed a back-translation metric hinting that the NMT method is suffering from performance degradation on these adversarial examples. These results address the reviewers' concerns. The paper is overall thorough and therefore ready for acceptance.

==================================================
# On The Reproducibility Of: Improvement-Focused Causal Recourse

Anonymous authors

Paper under double-blind review

## Abstract

This work aims to reproduce the main findings of "Improvement-Focused Causal Recourse (ICR)" (König et al., 2023) within the field of algorithmic recourse recommendations. The authors demonstrate that acceptance-focused recourse recommendation methods, like counterfactual explanations (CE), may suggest actions that revert the model's verdict by gaming the predictor whenever lucrative. To tackle this, the authors introduce ICR, which focuses on improvement by optimizing for a new target variable in their causal model. It is also demonstrated that improvement guarantees consequently translate into acceptance guarantees. We can confirm the findings of the original paper. The contribution of the current study is a more extensive assessment of the robustness and generalizability of ICR. Various techniques were employed to test the algorithm's performance under different architectural choices, such as different classifiers or optimization methods, data and model shifts, and a new dataset. Our findings suggest that ICR is more robust than CE and causal recourse (CR).

## 1 Introduction

As the deployment of predictive systems becomes more and more prevalent in critical areas of decision-making such as employee hiring (Raghavan et al., 2020), organ transplant priority determination (Obermeyer & Mullainathan, 2019), or judiciary decisions (Zeng et al., 2017), more emphasis should be placed on algorithmic explainability methods that offer the explainee an intuitive understanding of the system, and the possibility to apply recourse, i.e., actions that revert an unfavorable decision. In the domain of algorithmic decision-making, recourse methods play a crucial role in informing stakeholders on actions to reverse unfavorable model predictions.
Counterfactual explanations (CE) are concerned with changing the inputs to the model such that the model prediction changes in the desired way (Wachter et al., 2017). Karimi et al. (2021; 2022) recognize recourse to be a causal problem instead of a counterfactual one and propose recourse through minimal interventions, emphasizing the actions that need to be taken to achieve a favorable decision. This approach utilizes causal knowledge to derive recommendable recourse actions. Traditional approaches, such as causal recourse (CR), primarily focus on achieving acceptance (reverting the model's decision) without necessarily ensuring improvement in the underlying real-world state. This emphasis on acceptance can lead to actions that deceive the predictor without effectively addressing the actual improvement needed. To address this limitation, König et al. (2023) introduce Improvement-Focused Causal Recourse (ICR). In ICR, recommendations are explicitly geared towards achieving improvement and are not tailored for acceptance by a specific predictor. Causal knowledge is leveraged through structural causal models (SCMs) (Pearl et al., 2000) or causal graphs to make improvement-focused recommendations, and when the SCM is known, it can be utilised to design decision systems that predict accurately both pre- and post-recourse. In this work, we aim to reproduce the authors' findings, verify their claims, and perform additional experiments to assess the robustness and generalizability of the proposed method, providing further evidence to strengthen their claims. The code of our project is available here.

## 2 Scope Of Reproducibility

Improvement-Focused Causal Recourse belongs to the family of local post-hoc explainability methods.
They are among the more "human-friendly" approaches (Molnar, 2019) towards the goal of algorithmic transparency, since they are contrastive with respect to the current instance of a specific individual and selective in that they focus on a small number of feature changes. ICR improves upon this by considering the causal dependencies with the real-world state. The innovation of ICR is that, while CE and CR aim to revert the prediction, ICR seeks to revert the target, i.e., the underlying ground truth. The latter makes it a more holistic method for understanding the dynamics between input features and outcomes in the context of algorithmic decision-making. In the current reproducibility study, our main goal is to verify the following claims of the original paper:

- **Claim 1 - Attaining Improvement:** ICR reliably guides individuals towards actions that lead to improvement in scenarios where gaming is possible and lucrative.
- **Claim 2 - Attaining Acceptance:** CE, CR, and ICR all lead to acceptance, but CE and CR show higher observed acceptance rates than ICR.
- **Claim 3 - Attaining Acceptance Robustly:** ICR actions are more likely to be accepted by other model fits with similar performance on the same data.
- **Claim 4 - Recommendation Cost:** ICR actions are more costly than CR but lead to improvement, acceptance, and greater robustness to model refits.

In addition to reproducing the results presented in the paper, we perform additional experiments that test the robustness and, to some extent, the generalizability of the approach.

## 3 Background

## 3.1 Structural Causal Model

We define a structural causal model (SCM) $M \in \Pi$ (Pearl, 2009) with $M = (X, U, \mathbb{F})$. The SCM consists of endogenous (observed) variables $X \in \mathcal{X}$, exogenous variables $U \in \mathcal{U}$, and structural equations $\mathbb{F}$. The structural equations $\mathbb{F} : \mathcal{U} \to \mathcal{X}$ define how to obtain the endogenous variables from the exogenous variables. M is illustrated in a directed graphical model G (see, e.g., Appendix Fig.
3, where the exogenous variables are omitted for simplicity). Pearl's ladder of causation (Pearl, 2009) further divides SCMs into three rungs: rung 1 (observation): SCMs can describe (conditional) distributions; rung 2 (intervention): SCMs can predict the effect of actions do(x); rung 3 (counterfactuals): SCMs can imagine different outcomes if a different action had been taken. Actions are modeled via structural interventions a : Π → Π, which are transformations on the SCM. The interventions are defined as $a = do(\{X_i := a_i\}_{i \in I})$, where I contains the indices of the intervened-upon variables. The do-operator substitutes the equation for $X_i$ in $\mathbb{F}$ with $X_i := a_i$. Furthermore, all edges in the graph G leading to $X_i$ are removed. This creates the intervened-upon structural model $M_a$ with the equations $\mathbb{F}_a = \{F_i\}_{i \notin I} \cup \{X_i := a_i\}_{i \in I}$. To calculate counterfactuals, the following three-step procedure (Pearl, 2009) can be used under the assumption that M is an additive noise model. Abduction: the distribution of the exogenous variables U is inferred from the observation $x^{\mathrm{pre}}$; Action: the do(a) interventions are performed as described above; Prediction: we sample from the counterfactual distribution using the intervened-upon equations $\mathbb{F}_a$ and the inferred noise.

## 3.2 Counterfactual Explanations

To calculate counterfactual explanations (CE), the following formulas (Ustun et al., 2019) are used:

$$\begin{array}{rl}\delta^{*}\in\operatorname{argmin}_{\delta}&\operatorname{cost}(\delta,x^{\mathrm{pre}})\quad\text{s.t.}\quad h(x^{\mathrm{CFE}})\neq h(x^{\mathrm{pre}})\\ &x^{\mathrm{CFE}}=x^{\mathrm{pre}}+\delta\\ &x^{\mathrm{CFE}}\in\mathcal{P},\ \delta\in\mathcal{F}\end{array}\tag{1}$$

where $x^{\mathrm{pre}}$ is the factual input, cost is a user-defined function measuring the cost of changing $x^{\mathrm{pre}}$ by δ, $\mathcal{P}$ is a set of plausibility constraints, and $\mathcal{F}$ is a set of feasibility constraints as defined by Karimi et al. (2021).
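To make Equation 1 concrete, the following minimal sketch solves it by brute force on a toy two-feature problem with an l1 cost and a hand-written linear predictor. The classifier, the grid search, and all names are our own illustration; the plausibility and feasibility sets are ignored.

```python
import numpy as np

def toy_classifier(x):
    """A stand-in for the fixed predictor h: accept (1) iff x1 + x2 > 1."""
    return int(x[0] + x[1] > 1.0)

def counterfactual_explanation(x_pre, step=0.25, max_delta=3.0):
    """Brute-force search for the cheapest delta (l1 cost) that flips h(x_pre),
    a minimal illustration of Eq. (1) without plausibility/feasibility sets."""
    grid = np.arange(-max_delta, max_delta + step, step)
    best_delta, best_cost = None, np.inf
    for d1 in grid:
        for d2 in grid:
            delta = np.array([d1, d2])
            if toy_classifier(x_pre + delta) != toy_classifier(x_pre):
                cost = np.abs(delta).sum()  # user-defined cost(delta, x_pre)
                if cost < best_cost:
                    best_delta, best_cost = delta, cost
    return best_delta, best_cost

x_pre = np.array([0.0, 0.0])                 # rejected: h(x_pre) = 0
delta, cost = counterfactual_explanation(x_pre)
print(delta, cost)                           # the cheapest grid move pushing x1 + x2 above 1 costs 1.25
```

Because CE treats the features as independent, δ ignores any downstream causal effects between them.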
This builds upon the assumption that $\delta^{*}$ contains the minimal actions needed to reverse the decision of an algorithm. This assumption does not always hold: changing some variables might have downstream effects, so a set of actions might still reverse the decision without being minimal.

## 3.3 Causal Recourse

To tackle this issue, causal recourse (CR) (Karimi et al., 2021) was developed. CR assumes that an SCM is given; it might come from a field expert, but in many cases an SCM does not exist and may be impossible to construct. To calculate CR, the following formulas are used:

$$a^{*}\in\operatorname{argmin}_{a}\quad\operatorname{cost}(a,x^{\mathrm{pre}})\quad\text{s.t.}\quad h(x^{\mathrm{SCF}})\neq h(x^{\mathrm{pre}})\tag{2}$$
$$x^{\mathrm{SCF}}=\mathbb{F}_{a}(\mathbb{F}^{-1}(x^{\mathrm{pre}}))$$
$$x^{\mathrm{SCF}}\in\mathcal{P},\ a\in\mathcal{F}$$

where $x^{\mathrm{SCF}}$ is the resulting structural counterfactual. This formulation assumes that there are no hidden confounding variables and that a fully specified, invertible $\mathbb{F}$ exists such that $\mathbb{F}(\mathbb{F}^{-1}(x)) = x$. CR differs from CE in that it considers the downstream effects of intervening upon a certain variable.

## 4 Improvement-Focused Causal Recourse

The ICR mechanism proposed by König et al. (2023) is one of the first recourse methods that ensures reversion of the underlying real-world state (improvement) while also leading to acceptance (reverting an unfavorable decision). ICR utilizes the causal knowledge of an SCM, inspired by Karimi et al. (2021), to steer individuals who need recourse towards improvement. In many applications, an SCM might not exist due to the difficulty of inferring the relationships between different variables. However, knowledge of a causal graph might exist, and in those cases ICR uses a subpopulation approach, similar to how Karimi et al. (2020) solved the issue of not having an SCM at hand.
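Both CR (Equation 2) and ICR rely on the abduction-action-prediction procedure from Section 3.1. The following sketch runs it on a toy two-variable additive-noise SCM; the SCM, function names, and numbers are our own illustration, not part of the original code.

```python
import numpy as np

# Toy additive-noise SCM (ours, for illustration): X1 := U1, X2 := 2*X1 + U2.
def abduction(x_pre):
    """Infer the exogenous noise U from the observation x_pre (the F^{-1} step)."""
    u1 = x_pre[0]
    u2 = x_pre[1] - 2.0 * x_pre[0]
    return u1, u2

def predict(u, a=None):
    """Re-run the structural equations, optionally under the intervention do(X1 := a)."""
    u1, u2 = u
    x1 = u1 if a is None else a   # the do-operator replaces the equation for X1
    x2 = 2.0 * x1 + u2            # the downstream effect propagates to X2
    return np.array([x1, x2])

x_pre = np.array([1.0, 3.0])      # abduction yields u1 = 1, u2 = 1
u = abduction(x_pre)
x_scf = predict(u, a=2.0)         # structural counterfactual under do(X1 := 2)
print(x_scf)                      # [2. 5.]: X2 moves with X1, unlike a plain CE delta
```

Without an intervention, `predict(u)` reproduces `x_pre`, which is exactly the invertibility requirement F(F^{-1}(x)) = x stated above.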
## 4.1 Individualized Improvement Confidence

The authors define the individualized improvement confidence $\gamma^{\mathrm{ind}}$ as follows:

$$\gamma^{\mathrm{ind}}(a)=\gamma(a,x^{\mathrm{pre}}):=P(Y^{\mathrm{post}}=1\,|\,do(a),x^{\mathrm{pre}})\tag{3}$$

for an action a and the data point $x^{\mathrm{pre}}$. Y is an introduced target variable that captures improvement and is part of the SCM; $Y^{\mathrm{post}}$ is the post-recourse target.

## 4.2 Subpopulation Improvement Confidence

In the case where the SCM is not known, we assume that no unobserved variables influence both the dependent and the independent variables, and we have to fall back to the effect of interventions (rung 2 in Pearl's ladder of causation (Pearl, 2009)). Since the interventional distribution captures broader characteristics of the whole population, it cannot accurately capture an action's effects on specific individuals. In this scenario, a subpopulation-based improvement confidence expresses the probability of the desired outcome Y in a subgroup of individuals with similar characteristics. This subgroup consists of individuals with similar values in the variables $G_a$ that are unaffected by the action a. The authors' definition of the subpopulation improvement confidence is as follows:

$$\gamma^{\mathrm{sub}}(a)=\gamma(a,x_{G_{a}}^{\mathrm{pre}}):=P(Y^{\mathrm{post}}=1\,|\,do(a),x_{G_{a}}^{\mathrm{pre}})\tag{4}$$

## 4.3 Optimization Problem

To generate ICR actions, the authors define Equation 5, which minimizes the cost of the actions. The objective is to discover actions that incur minimal cost while being constrained by a user-specified improvement target confidence $\bar{\gamma}$. This confidence can be intuitively interpreted as the probability of improvement, given that the individual follows the recommended recourse actions. Choosing a higher confidence increases the probability of improvement; however, this might come with a higher cost for the user.
At the same time, if we set a low target confidence (below 0.75), it is not worthwhile for the user to act on the recourse recommendation, since the probability of improvement is barely higher than chance. The cost function $\operatorname{cost}(a, x^{\mathrm{pre}})$ reflects the effort an individual needs to complete an action a. The optimization objective for ICR can be interpreted as two smaller intervention objectives (Karimi et al., 2020): first, optimization is applied to the intervention targets $I_a$, followed by optimizing the intervention values $\theta_a$. Since our objective is to achieve improvement, $I_a$ is limited to the parents of Y. The authors motivate their decision to use the genetic algorithm NSGA-II (Deb et al., 2002) for optimizing the constrained objective below, following previous work (Dandl et al., 2020).

$$a^{*}\in\operatorname{argmin}_{a=do(X_{I}=\theta)}\ \operatorname{cost}(a,x^{\mathrm{pre}})\quad\text{s.t.}\quad\gamma(a)\geq\bar{\gamma}\tag{5}$$

## 4.4 Improvement Leads To Acceptance

Equation 5 optimizes actions for improvement, but this does not automatically lead to acceptance. Suppose an individualized pre-recourse predictor is used for post-recourse prediction. In that case, there is an imbalance in predictive power, since ICR uses $x^{\mathrm{pre}}$ and the SCM, whereas the post-recourse prediction only uses the knowledge of $x^{\mathrm{post}}$. The authors solve this by also utilizing the SCM for the post-recourse prediction, defining the following individualized post-recourse predictor:

$$h^{*,\mathrm{ind}}(x^{\mathrm{post}})=P(Y^{\mathrm{post}}=1\,|\,x^{\mathrm{post}},x^{\mathrm{pre}},do(a))\tag{6}$$

For the subpopulation approach, the pre-recourse predictor remains accurate and does not suffer from an imbalance in predictive power. This leads the authors to the interesting conclusion that CR can lead to improvement if it only acts upon causes of the underlying world state Y. The defined post-recourse predictors show that acting upon improvement also leads to acceptance.
Therefore, when the recourse recommendation follows the true underlying world state, the predictor will also accept the recourse with a certain confidence.

## 5 Methodology

This section describes our approach to confirming the original study's findings and further investigates the capabilities of ICR under different scenarios. Specifically, we test the versatility of ICR when considering different classifiers, alternating the algorithm used for optimization, and introducing shifts during data generation. All hyperparameter design choices we had to make are also delineated in this part.

## 5.1 Reproducing Original Claims

Adopting the setup of the original study along with the authors' released code (König et al., 2022), we compare the performance of ICR against CE and CR on the synthetic and semi-synthetic datasets used by König et al. (2023). In alignment with the experimental description of the original study, we evaluate CE, individualized and subpopulation-based CR, and ICR for ten iterations, with each iteration consisting of five model refits and four user-specified confidence levels for two hundred (200) individuals on each dataset. Table 6 in Appendix B contains the exact numerical values used for the reproduction; we refer to these values as full-scale hyperparameters. König et al. (2023) used the outputs from all the experiments to answer four questions, relying on a different metric for each question: the observed improvement rate γ obs (Claim 1), the observed acceptance rate η obs (Claim 2), the observed acceptance rate for other fits with comparable test set performance η obs,refit (Claim 3), and the average recourse cost for individuals who were rejected and consequently provided with a recourse recommendation (Claim 4). Additionally, an invalidity metric is used for the robustness experiments, which expresses the percentage of post-recourse classifications that become invalid after the data has shifted (Rawal et al., 2020).
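These metrics can be sketched as simple aggregates over the recourse-seeking individuals; the following is our own minimal implementation for illustration, not the authors' evaluation code, and the invalidity definition reflects our reading of Rawal et al. (2020).

```python
import numpy as np

def observed_improvement_rate(y_post):
    """gamma^obs: share of individuals whose true state improved (Y^post = 1)."""
    return np.mean(y_post)

def observed_acceptance_rate(h_post):
    """eta^obs: share of post-recourse points accepted by the predictor."""
    return np.mean(h_post)

def invalidity(h_old, h_new):
    """Share of recommendations accepted by the original model but rejected by
    a refit model (our reading of the invalidity metric, Rawal et al., 2020)."""
    old = np.asarray(h_old, dtype=bool)
    new = np.asarray(h_new, dtype=bool)
    return np.mean(old & ~new) / max(np.mean(old), 1e-12)

y_post = np.array([1, 1, 0, 1])   # 3 of 4 individuals actually improved
h_old = np.array([1, 1, 1, 1])    # all were accepted before the refit
h_new = np.array([1, 0, 1, 1])    # one acceptance is invalidated by the refit
print(observed_improvement_rate(y_post))   # 0.75
print(invalidity(h_old, h_new))            # 0.25
```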
More details on the metrics can be found in Appendix C.

## 5.2 Dataset Description

The authors have experimented with semi-synthetic and synthetic datasets in their study. The datasets comprise the SCMs and the corresponding directed acyclic graphs G. In addition to the four original datasets, we create an additional 3var-causal-nonlinear synthetic dataset. It is similar to the 3var-causal dataset used in the original experiments, but we introduce non-linearity by defining one of the features as a binomial distribution and another one as a quadratic relation. The purpose of this new dataset is to compare the performance of CE, CR, and ICR on a small, non-linear dataset that is lower in complexity than the semi-synthetic 5var-skill and 7var-covid datasets that the authors use. Table 1 showcases essential information about the datasets, while Appendix A provides more detailed information as well as a visual depiction of the causal graphs and their structural equations.

| Name                  | Non-Linear | Features Affecting Y | Potential Gaming Variables (Features Affected by Y) | Source         |
|-----------------------|------------|----------------------|-----------------------------------------------------|----------------|
| 3var-causal           | No         | 3                    | 0                                                   | Synthetic      |
| 3var-noncausal        | No         | 2                    | 1                                                   | Synthetic      |
| 5var-skill            | Yes        | 2                    | 3                                                   | Semi-Synthetic |
| 7var-covid            | Yes        | 4                    | 3                                                   | Semi-Synthetic |
| 3var-causal-nonlinear | Yes        | 3                    | 0                                                   | Synthetic      |

Table 1: Information about the datasets

## 5.3 Robustness Assessment Beyond Original Paper

While the original paper compares the robustness of CE, CR, and ICR on refits of the same data, we extend this robustness comparison to model and data shifts. This has been previously done for CE and CR (Upadhyay et al., 2021; Rawal et al., 2020) but, to the best of our knowledge, not for ICR. For this robustness comparison, we test different classifiers, shift the data, and use a different genetic algorithm. To stay within our allocated resources, we had to down-scale the set of hyperparameters by a factor of 2 for the 3var datasets. For the same reason, these experiments use only the datasets 3var-noncausal and 3var-causal and omit the confidence values of 0.85 and 0.90. Table 2 summarizes the hyperparameters used in these experiments.

| Dataset               | Number of observations | Number of individuals having recourse calculation | Confidence | Generations | Population size | n digits | Number of iterations |
|-----------------------|------------------------|---------------------------------------------------|------------|-------------|-----------------|----------|----------------------|
| 3var-noncausal        | 1000                   | 100                                               | 0.75, 0.95 | 300         | 150             | 1        | 3                    |
| 3var-causal           | 1000                   | 100                                               | 0.75, 0.95 | 300         | 150             | 1        | 3                    |
| 3var-causal-nonlinear | 1000                   | 100                                               | 0.75, 0.95 | 300         | 150             | 1        | 3                    |

Table 2: Hyperparameters for the robustness experiments beyond the original paper.

## 5.3.1 Classifiers

The authors use random forest for classification, except on the *3var* datasets, where logistic regression models are used; the former is utilized for non-linear datasets and the latter for linear ones. In this study, we compare the capabilities of ICR with different classifiers. The alternative classifiers tested are the AdaBoost classifier (Schapire, 2013), a support vector machine (SVM) for classification, and a simple multi-layer perceptron (MLP). AdaBoost and SVM are implemented using the scikit-learn package with default hyperparameters (Pedregosa et al., 2011). The simple MLP consists of three hidden layers of 10, 10, and 5 nodes, respectively, uses Adam for optimization (Kingma & Ba, 2014), and applies the ReLU activation function to all layers.

## 5.3.2 Data Shift

To create the data shift, we use the synthetic datasets 3var-causal and 3var-noncausal, where the features follow a standard normal distribution. We apply the same methodology as Upadhyay et al. (2021) and shift each dataset one feature at a time.
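The feature-wise shifting can be sketched as follows; the function, the magnitudes, and the dataset stand-in are our own illustrative choices, not the exact values from our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))   # stand-in for a 3var dataset with N(0, 1) features

def shift_feature(X, j, mean_shift=0.0, scale=1.0):
    """Shift a single feature j in the style of Upadhyay et al. (2021):
    scale changes the variance, mean_shift translates the feature."""
    X_shifted = X.copy()
    X_shifted[:, j] = scale * X_shifted[:, j] + mean_shift
    return X_shifted

X_mean = shift_feature(X, j=0, mean_shift=0.5)               # mean shift only
X_var = shift_feature(X, j=0, scale=1.5)                     # variance shift only
X_both = shift_feature(X, j=0, mean_shift=0.5, scale=1.5)    # both simultaneously
print(X_mean[:, 0].mean() - X[:, 0].mean())                  # close to 0.5
```

A model refit on the shifted data is then compared against the recommendations computed on the original data to obtain the invalidity.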
Three settings can be distinguished: shifting the mean, shifting the variance, and shifting both simultaneously. The metric used is invalidity (Rawal et al., 2020), which measures the fraction of recourse recommendations that are no longer valid for a model retrained on the shifted data. Details on this procedure can be found in Appendix D.

## 5.3.3 Genetic Algorithm

The authors employed a modified NSGA-II instance, altering the crowding distance computation, tailored for multi-objective counterfactual explanations as introduced by Dandl et al. (2020), to minimize the cost of the optimization objective. In our study, we assess the capabilities of ICR when utilizing the newer Non-Dominated Sorting Genetic Algorithm III (NSGA-III) (Deb & Jain, 2013). Building upon its predecessor NSGA-II, NSGA-III improves diversity preservation and efficiency, promoting diversity among solutions. The optimization objective (described in Section 4.3) is a two-step problem modeled as a single-objective problem in the original ICR codebase. The capabilities of the two genetic algorithm variants in single-objective scenarios have not been widely studied, as they are mainly used for multi-objective optimization. Moreover, the authors' implementation of NSGA-II is based on DEAP (Fortin et al., 2012), an evolutionary computation framework for rapid prototyping, which also supports NSGA-III natively. Thus, we firmly believe that comparing the two genetic algorithms is valuable for evaluating ICR on the given constrained optimization problem.

## 5.4 Computational Requirements

The initial reproduction of the complete experiments proved quite computationally intensive and time-consuming. After profiling the authors' code, we found that using the original unoptimized code would not be viable for carrying out the additional experiments for the robustness assessment of ICR.
To make the experiments more cost-effective with respect to computational resources, the *multiprocess* Python package is used to run the experiments for CE, CR, ICR, and the individualized/subpopulation settings simultaneously. Details on the speedup can be found in Appendix H. The experiments were carried out on a server with an AMD Rome CPU with 128 threads, with a computational cost of 5-24 hours per single experiment setting with parallelization and 10-55 hours without. We used our personal computers (AMD Ryzen 5 5500U) for the down-scaled experiments. Appendix H provides a visualization of the parallelization speedup, the overall computational hours, and a table documenting the CPU hours needed per dataset. The reproduction of the original results took an estimated 34 CPU hours with parallelization; without it, the runtime would be around 100 CPU hours. The additional robustness and generalization experiments took an estimated 160 CPU hours in the parallelized setting while using the scaled-down version of the hyperparameters.

## 6 Results

In the upcoming subsections, we first compare our results with the authors' and validate which claims hold. We then further assess the robustness of ICR and compare its performance with CE and CR.

## 6.1 Results Reproducing Original Paper

The results of our reproduction can be seen in Fig. 1. Fig. 1a shows the observed improvement rates γ obs for the different confidence levels of CE, CR, and ICR; CE does not use confidence levels, so only one number is reported. Fig. 1b shows the observed acceptance rate η obs. The robustness to refits from the same distribution can be seen in Fig. 1c, which shows the average acceptance rate over 5 refits. The average recourse cost can be seen in Fig. 1d.
Appendix F is dedicated to a more detailed side-by-side comparison of the authors' outputs and ours.

Claim 1: Fig. 1a shows that only ICR achieves high improvement rates, while CE and CR have very low improvement rates, which confirms the claim made by the authors. While the general trend holds, the numbers retrieved during our experiments are very close to, but not identical with, the ones provided by the authors. Furthermore, it can be confirmed that CE and CR game on the 5var-skill dataset by only applying recourse to the number of GitHub commits. In contrast, ICR suggests modifying non-gaming variables such as years of experience and getting a degree. As a side effect of the causal model suggesting recourse actions on the years of experience and education, the number of commits also increases. Claim 1, which states that ICR leads to improvement in situations where gaming is beneficial, can therefore be confirmed.

Claim 2: CE, CR, and ICR all lead to high acceptance rates in Fig. 1b. Furthermore, it can be observed that ICR has lower acceptance levels than CE and CR. As such, we can confirm Claim 2. While we could not reproduce the exact numbers the authors provide, the general trends are the same. The subpopulation method performs worse than the individualized method.

Claim 3: The performance on the refitted models, which were created by sampling a new dataset from the SCM, varies per method, as seen in Fig. 1c. CE and CR perform much worse on the refitted models, except on the 7var-covid dataset. This makes CE and CR inapplicable to situations where the model will be refitted, since the previous recourse recommendations could be invalidated and the individual would have to implement even more actions to achieve recourse. The acceptance rates of ICR are barely affected by refitting the models. This suggests that ICR is able to give recourse recommendations that do not change if a model is refitted with other data from the same distribution.
This confirms Claim 3.

Claim 4: The recommendation costs, shown in Fig. 1d, are on average higher for ICR. This is because ICR does not game by repeatedly applying only the cheapest action, as the other methods do. On the 5var-skill dataset, ICR suggests getting a degree and gaining years of experience instead of creating a lot of commits, which makes it more costly than CE and CR. There are exceptions where ICR is cheaper than CE or CR, but on average, Claim 4 holds.

## 6.2 Results Beyond Original Paper

Classifiers: We evaluate how robust a recourse recommendation is with respect to different classifiers. For the improvement rate, Table 3 indicates that the classifier does have an impact on the performance of CE and CR. The improvement rate of ICR does not depend on the classifier, so we can argue that ICR is indeed more robust towards different decision algorithms. Note that the reported γ obs values in Table 3 refer to the average improvement rate calculated across the two synthetic datasets/SCMs (3var-causal and 3var-noncausal, as in Table 1), using the reduced-case hyperparameters from Table 2. The acceptance performance is very similar regardless of the classifier being used. Using the refits for the acceptance, we provide some evidence that the classifier does not have a big impact on the performance of CE, CR, and ICR; however, ICR shows a slightly smaller difference between the classifiers. The detailed results analysis carried out in Appendix E supports the latter observations on acceptance and acceptance under refits.
(c) Observed acceptance rates for other fits with comparable test set performance η obs,refit; (d) the recourse cost from reproducing the paper

Figure 1: Experimental results for CE, CR and ICR for the reproducibility task

These results are important since, in real-life scenarios, different types of models might be used for a given task. It can be seen that CE and CR show different performances depending on the model, while ICR performs well independently of the model; ICR is therefore better applicable to real-life scenarios.

| recourse | MLP | SVM | adaboost | logreg | random forest |
|----------|-------------|-------------|-------------|-------------|-----------------|
| CE | 0.29 ± 0.11 | 0.26 ± 0.10 | 0.19 ± 0.04 | 0.28 ± 0.15 | 0.26 ± 0.02 |
| ind. CR | 0.33 ± 0.08 | 0.32 ± 0.10 | 0.20 ± 0.01 | 0.31 ± 0.16 | 0.27 ± 0.03 |
| ind. ICR | 0.95 ± 0.00 | 0.95 ± 0.00 | 0.95 ± 0.01 | 0.96 ± 0.00 | 0.95 ± 0.01 |
| sub. CR | 0.31 ± 0.13 | 0.31 ± 0.13 | 0.18 ± 0.02 | 0.31 ± 0.16 | 0.24 ± 0.04 |
| sub. ICR | 0.95 ± 0.00 | 0.95 ± 0.00 | 0.95 ± 0.01 | 0.95 ± 0.01 | 0.96 ± 0.00 |

Table 3: γ obs for different classifiers with a specified confidence of 0.95, in the reduced-case scenario

Data shift: To further assess the robustness of the different recourse methods, the data is shifted, and we check whether a refit of the model invalidates previous recommendations. Since CE and CR already struggle with refits from the same distribution, it is unsurprising that they perform even worse when the distribution changes slightly. Variance shifts appear to be slightly worse for model invalidity than mean shifts. ICR, on the other hand, does not show any of these issues. As can be seen from Table 4, the highest invalidity appears in the subpopulation approach with 6%.
This means that 6% of the previous recourse recommendations are no longer valid after the model was refitted on the shifted data. It suggests that ICR is not only robust to refits from the same distribution but also, to a certain extent, to refits after distribution shifts. A method with low invalidity under data shifts is better applicable in real-life scenarios, as the data distribution can change over time, and low invalidity means that the recommendations remain valid on the old as well as the shifted data. A more detailed comparison of distributional changes and models would be desirable here; however, due to the computational resources necessary to calculate recourse, only a small sample of data shifts and models is compared. All values in Table 4 show the average invalidity calculated across the 3var-causal and 3var-noncausal datasets/SCMs.

| recourse | classifier | both shift | variance shift | mean shift |
|----------|------------|-------------|----------------|-------------|
| CE | MLP | 0.34 ± 0.34 | 0.86 ± 0.23 | 0.82 ± 0.34 |
| CE | SVM | 0.35 ± 0.36 | 0.89 ± 0.22 | 0.85 ± 0.32 |
| CE | adaboost | 0.77 ± 0.11 | 0.90 ± 0.06 | 0.89 ± 0.06 |
| CE | logreg | 0.46 ± 0.34 | 0.92 ± 0.16 | 0.84 ± 0.33 |
| CE | rf | 0.78 ± 0.08 | 0.90 ± 0.07 | 0.91 ± 0.05 |
| ind. CR | MLP | 0.30 ± 0.30 | 0.84 ± 0.21 | 0.80 ± 0.31 |
| ind. CR | SVM | 0.32 ± 0.31 | 0.88 ± 0.20 | 0.84 ± 0.28 |
| ind. CR | adaboost | 0.73 ± 0.11 | 0.88 ± 0.08 | 0.87 ± 0.06 |
| ind. CR | logreg | 0.40 ± 0.27 | 0.90 ± 0.15 | 0.83 ± 0.30 |
| ind. CR | rf | 0.75 ± 0.07 | 0.88 ± 0.08 | 0.88 ± 0.07 |
| ind. ICR | MLP | 0.05 ± 0.13 | 0.02 ± 0.03 | 0.01 ± 0.02 |
| ind. ICR | SVM | 0.04 ± 0.10 | 0.00 ± 0.01 | 0.00 ± 0.01 |
| ind. ICR | adaboost | 0.05 ± 0.09 | 0.04 ± 0.04 | 0.04 ± 0.04 |
| ind. ICR | logreg | 0.04 ± 0.10 | 0.01 ± 0.02 | 0.00 ± 0.01 |
| ind. ICR | rf | 0.04 ± 0.07 | 0.05 ± 0.09 | 0.05 ± 0.08 |
| sub. CR | MLP | 0.33 ± 0.32 | 0.84 ± 0.22 | 0.81 ± 0.34 |
| sub. CR | SVM | 0.34 ± 0.32 | 0.87 ± 0.22 | 0.84 ± 0.31 |
| sub. CR | adaboost | 0.77 ± 0.11 | 0.90 ± 0.05 | 0.90 ± 0.07 |
| sub. CR | logreg | 0.41 ± 0.31 | 0.90 ± 0.15 | 0.82 ± 0.32 |
| sub. CR | rf | 0.79 ± 0.07 | 0.92 ± 0.06 | 0.90 ± 0.06 |
| sub. ICR | MLP | 0.02 ± 0.06 | 0.03 ± 0.04 | 0.02 ± 0.02 |
| sub. ICR | SVM | 0.02 ± 0.06 | 0.01 ± 0.02 | 0.01 ± 0.02 |
| sub. ICR | adaboost | 0.05 ± 0.05 | 0.05 ± 0.06 | 0.06 ± 0.05 |
| sub. ICR | logreg | 0.02 ± 0.06 | 0.02 ± 0.03 | 0.01 ± 0.01 |
| sub. ICR | rf | 0.03 ± 0.05 | 0.05 ± 0.08 | 0.06 ± 0.09 |

Table 4: Invalidity for the shifted features, averaged across 3var-causal and 3var-noncausal. The values shown are the average and standard deviation over shifts of all features and three iterations, with a user-specified confidence level of 0.95.

Genetic algorithms: Table 5 compares the improvement rates γ obs of the modified NSGA-II variant utilized by König et al. (2023) and of NSGA-III (Deb & Jain, 2013) for minimizing the optimization objective defined in Subsection 4.3. The figures presented in Table 5 refer to the average improvement rate calculated across the two synthetic datasets/SCMs (3var-causal and 3var-noncausal, as in Table 1), using the reduced-case hyperparameters. Both genetic algorithms achieve similar performance when targeting improvement. Additional experiments for the acceptance rates η obs and η obs,refit further compare the two algorithms and are provided in Appendix G, Tables 9 and 10, respectively. Interestingly, when considering acceptance under refits, NSGA-III in most cases yields similar, if not higher, rates.

New Dataset: To further assess the generalizability of ICR, we test its performance on a synthetic dataset we created and refer to as 3var-causal-nonlinear1. The results for the observed improvement rates γ obs and acceptance rates η obs are shown in Fig. 2a and Fig. 2b, respectively.
Additionally, Fig. 2c depicts the robustness to refits from the same distribution. CR attains a perfect acceptance rate, while individualized ICR and CE follow closely behind. As expected, CE and CR obtain low improvement rates, whereas ICR consistently leads to improvement for both user-specified confidence levels. Ultimately, ICR seems to be the prevailing method when testing for robustness to refits on the same data, which supports the empirical claims made by the authors.

1Avid readers can find the SCM along with the structural equations in Appendix A.

| recourse | NSGA-II | NSGA-III |
|----------|-------------|-------------|
| CE | 0.28 ± 0.13 | 0.31 ± 0.13 |
| ind. CR | 0.32 ± 0.14 | 0.33 ± 0.12 |
| ind. ICR | 0.96 ± 0.01 | 0.98 ± 0.02 |
| sub. CR | 0.31 ± 0.15 | 0.32 ± 0.14 |
| sub. ICR | 0.95 ± 0.03 | 0.95 ± 0.02 |

Table 5: γ obs (observed rate ± standard deviation) of each genetic algorithm, achieved with a user-specified confidence of 0.95 in the reduced-hyperparameter scenario. All rates in the table are rounded to the second decimal place.

(c) Observed acceptance rates for other fits with comparable test set performance η obs,refit

Figure 2: Experimental results for CE, CR and ICR on the 3var-causal-nonlinear dataset

## 7 Challenges During Reproducibility

Overall, the public repository containing the original code was well-structured and documented. The provided scripts to produce and visualize the results were very helpful, and thus, analyzing and comparing our results was reasonably straightforward. Furthermore, the original paper provided detailed information on implementation details and theoretical background in its appendix. However, inconsistencies exist between the hyperparameter value choices specified in the experiment details of the paper and those in the repository instructions.
Specifically, for the 7var-covid dataset, lower values are selected for the number of generations and the population size in the repository than described in the paper. Additionally, the number of runs performed per dataset hyperparameter differs between the plots presented in the original paper and the authors' repository description. Our hyperparameter selection follows the paper specifications, as delineated in Section 5. We discovered that the original authors' implementation produced different numerical results when repeating the same experiment, originating from a seeding issue. Indeed, in the code provided by König et al. (2022), the values for each SCM of the datasets are sampled randomly from the corresponding distributions each time, making this generation process non-deterministic. We adjusted our implementation accordingly to handle this issue, and all reported results are obtained by setting the seed to 1. Finally, the requirements list provided by the authors for installing the packages needed for the project did not work out of the box, and we had some package dependency issues. These dependencies were resolved by reverting some of the packages to the relevant versions2 from when the original paper was published.

Communication with the authors We contacted the paper's first author to ask for clarification of the theoretical aspects and some technical parts of the code. Moreover, we asked for feedback on the proposed extensions. After two weeks, the author responded to all of our questions, found our extensions interesting, and provided constructive feedback on them. He also proposed a new research direction that was only hinted at, but not implemented, in the original paper. Regarding the discrepancies discussed here, the author acknowledged what we had pointed out and subsequently updated the repository description accordingly.

## 8 Discussion

König et al.
(2023) distinguish two purposes for contrastive explanations: contestability of algorithmic decisions and actionable recourse recommendations. ICR targets improvement, which is a necessity for actual recourse. Thus, recourse is achieved by improving the underlying condition rather than merely the features that can game the predictor model. The most significant limitation of ICR is that a causal graph or an SCM is needed; these are not always available, which limits the applicability of ICR. Our contributions were twofold. First, we reproduced the experiments of König et al. (2023) and provided evidence for the validity of their claims. While it was impossible to replicate the authors' exact numbers due to how the seeds were set, we could replicate the trends of all claims. Second, we assessed the robustness of the ICR claims against different model fits and data fits, as well as the generalizability to a new dataset. Our additional experiments tested different classifiers, an alternative genetic algorithm for minimizing the optimization objective, and the robustness to mean and variance shifts in the dataset. To test the influence of the genetic algorithm on the minimization objective, we adapted the NSGA-III algorithm to use the same crowding distance and principles as the NSGA-II variant used by the authors, inspired by Dandl et al. (2020). We discovered that it performs equally well and even attains better outcomes on some dataset/SCM runs. Since these very similar genetic algorithms achieve similar performance, it would be interesting to try other evolutionary and/or genetic algorithms that specifically target single-objective functions, aiming to derive better recourse recommendations and to reduce the computation time spent during the optimization phase. One research direction that has yet to be explored is effectively using the multiobjective capabilities of the two NSGA variants.
As for future research, we aim to optimize for improvement and acceptance rate jointly, which was suggested during our correspondence with the author. Specifically, one could jointly target the improvement rate and the cost and let the user choose from the Pareto front. Concerning the robustness assessment, we can verify to a greater degree that ICR is more robust than CE and CR, specifically towards mean and variance shifts in the data. However, given more computational resources, we would like to conduct a more extensive assessment by testing different magnitudes of shifts. As for the generalizability experiment with the additional dataset 3var-causal-nonlinear, the evidence partially points towards the generalizability strength of ICR, since the trends are similar to the performance on the larger non-linear datasets. Nevertheless, they also closely follow the performance trends of the other 3var datasets. A possible limitation of our experiments on robustness is that we ran them on a down-scaled set of hyperparameters. Even though running on the complete set of hyperparameters would improve the reliability of our conclusions, we must recognize the significant computational resources that the original experiments require. The environmental impact of computationally expensive methods is a solid motivation for further research into making ICR more efficient and effective.

2In our repository, we provide the updated .yml files for anyone interested in building on our work.

## References

Susanne Dandl, Christoph Molnar, Martin Binder, and Bernd Bischl. Multi-objective counterfactual explanations. In *International Conference on Parallel Problem Solving from Nature*, pp. 448–469. Springer, 2020. Kalyanmoy Deb and Himanshu Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. *IEEE Transactions on Evolutionary Computation*, 18(4):577–601, 2013.
Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. A fast and elitist multiobjective genetic algorithm: Nsga-ii. *IEEE transactions on evolutionary computation*, 6(2):182–197, 2002. Félix-Antoine Fortin, François-Michel De Rainville, Marc-André Gardner, Marc Parizeau, and Christian Gagné. DEAP: Evolutionary algorithms made easy. *Journal of Machine Learning Research*, 13:2171– 2175, jul 2012. Lara Jehi, Xinge Ji, Alex Milinovich, Serpil Erzurum, Brian P Rubin, Steve Gordon, James B Young, and Michael W Kattan. Individualizing risk prediction for positive coronavirus disease 2019 testing: results from 11,672 patients. *Chest*, 158(4):1364–1375, 2020. Amir-Hossein Karimi, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. *Advances in neural information processing* systems, 33:265–277, 2020. Amir-Hossein Karimi, Bernhard Schölkopf, and Isabel Valera. Algorithmic recourse: from counterfactual explanations to interventions. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 353–362, 2021. Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, and Isabel Valera. Towards causal algorithmic recourse. In International Workshop on Extending Explainable AI Beyond Deep Models and Classifiers, pp. 139–166. Springer, 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Gunnar König, Timo Freiesleben, and Moritz Grosse-Wentrup. Improvement-focused causal recourse (icr) github, 2022. URL https://github.com/gcskoenig/icr. Gunnar König, Timo Freiesleben, and Moritz Grosse-Wentrup. Improvement-focused causal recourse (icr). In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 11847–11855, 2023. Christoph Molnar. *Interpretable Machine Learning*. 2019. João Eduardo Montandon, Marco Tulio Valente, and Luciana L. Silva. 
Mining the technical roles of github users. *Information and Software Technology*, 131:106485, 2021. ISSN 0950-5849. doi: https://doi.org/10.1016/j.infsof.2020.106485. URL https://www.sciencedirect.com/science/article/pii/S0950584920302275. Ziad Obermeyer and Sendhil Mullainathan. Dissecting racial bias in an algorithm that guides health decisions for 70 million people. In *Proceedings of the conference on fairness, accountability, and transparency*, pp. 89–89, 2019. Judea Pearl. *Causality*. Cambridge University Press, 2009. Judea Pearl et al. Models, reasoning and inference. Cambridge, UK: Cambridge University Press, 19(2):3, 2000. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. Mitigating bias in algorithmic hiring: Evaluating claims and practices. In *Proceedings of the 2020 conference on fairness, accountability, and transparency*, pp. 469–481, 2020. Kaivalya Rawal, Ece Kamar, and Himabindu Lakkaraju. Algorithmic recourse in the wild: Understanding the impact of data and model shifts. *arXiv preprint arXiv:2012.11788*, 2020. Robert E Schapire. Explaining AdaBoost. In *Empirical Inference*, pp. 37–52. Springer, 2013. Sohini Upadhyay, Shalmali Joshi, and Himabindu Lakkaraju. Towards robust and reliable algorithmic recourse. *Advances in Neural Information Processing Systems*, 34:16926–16937, 2021. Berk Ustun, Alexander Spangher, and Yang Liu. Actionable recourse in linear classification. In *Proceedings of the conference on fairness, accountability, and transparency*, 2019. Sandra Wachter, Brent Mittelstadt, and Chris Russell. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. *Harv.
JL & Tech.*, 31:841, 2017. Jiaming Zeng, Berk Ustun, and Cynthia Rudin. Interpretable classification models for recidivism prediction. *Journal of the Royal Statistical Society Series A: Statistics in Society*, 180(3):689–722, 2017.

## A Dataset Information

This section provides more information about the datasets we used. Moreover, we show each dataset's causal graph and structural equations in Figures 3–7.

3var-causal: A linear Gaussian SCM with a binary target Y, with all other features influencing it.

3var-noncausal: Similar to 3var-causal, but one feature is affected by Y.

5var-skill: A categorical semi-synthetic SCM where the target is the programming skill level, based on causes like university degree and on non-causal factors obtained from GitHub, such as commit count. This dataset was inspired by Montandon et al. (2021).

7var-covid: A semi-synthetic dataset replicating a real-world COVID screening model provided by Jehi et al. (2020). The model has causes like COVID-19 vaccination and population density, including symptoms like fever and fatigue. The dataset illustrates a mix of categorical and continuous data with various noise distributions, and the relationships include nonlinear structural equations.

3var-causal-nonlinear: A fully synthetic non-linear SCM with a binary target Y, with all other features influencing it.

In the following cost equations, we define δ as the vector of absolute changes to the intervened-upon variables.
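For concreteness, these SCMs can be sampled ancestrally; a minimal sketch for the 3var-noncausal SCM, in which the binary target Y causally influences X3 (the helper names are ours, and the standard-normal noises follow the structural equations shown in Figure 4 below):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_3var_noncausal(n, seed=1):
    """Ancestral sampling of the 3var-noncausal SCM, where the
    binary target Y is a parent of the feature X3."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=n)                                     # X1 := U1
    x2 = x1 + rng.normal(size=n)                                # X2 := X1 + U2
    y = (sigmoid(x1 + x2) <= rng.uniform(size=n)).astype(int)   # Y := [sigma(X1+X2) <= U_Y]
    x3 = x1 + x2 + y + rng.normal(size=n)                       # X3 := X1 + X2 + Y + U3
    return np.column_stack([x1, x2, x3]), y
```

Because Y is sampled before X3, a dataset generated this way reproduces the anti-causal dependence that distinguishes this SCM from 3var-causal.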
![13_image_0.png](13_image_0.png)

$$\begin{aligned}
X_1 &:= U_1, & U_1 &\sim \mathcal{N}(0,1)\\
X_2 &:= X_1 + U_2, & U_2 &\sim \mathcal{N}(0,1)\\
X_3 &:= X_1 + X_2 + U_3, & U_3 &\sim \mathcal{N}(0,1)\\
Y &:= \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + U_Y
\end{aligned}$$

Figure 3: SCM for 3var-causal with cost(a) = δ1 + δ2 + δ3

![13_image_2.png](13_image_2.png)

$$\begin{aligned}
X_1 &:= U_1, & U_1 &\sim \mathcal{N}(0,1)\\
X_2 &:= X_1 + U_2, & U_2 &\sim \mathcal{N}(0,1)\\
Y &:= [\sigma(X_1 + X_2) \leq U_Y], & U_Y &\sim \mathrm{Unif}(0,1)\\
X_3 &:= X_1 + X_2 + Y + U_3, & U_3 &\sim \mathcal{N}(0,1)
\end{aligned}$$

Figure 4: SCM for 3var-noncausal with cost(a) = δ1 + δ2 + δ3

![14_image_0.png](14_image_0.png)

| E := UE, | UE ∼ GaP(8, 8/3) |
|-------------------------------|------------------------------|
| D := UD, | UD ∼ Cat(0.4, 0.2, 0.3, 0.1) |
| S := [σ(−10 + 3E + 4D) ≤ US], | US ∼ Unif(0, 1) |
| GC := 10E(11 + 100D) + UGC, | UGC ∼ GaP(40, 40/4) |
| GL := σ(10S) + UGL, | UGL ∼ GaP(2, 2/4) |
| GS := 10S + UGS, | UGS ∼ GaP(5, 5/4) |

Figure 5: SCM for 5var-skill with cost(a) = 5δE + 5δD + 0.0001δGC + 0.01δGL + 0.1δGS

![14_image_1.png](14_image_1.png)

D := UD, UD ∼ Γ(4, 4/3)
VI := UVI, UVI ∼ Bern(0.39)
VC := UVC, UVC ∼ Cat(0.24, 0.02, 0.15, 0.59)
B := UB, UB ∼ N(0, 1)
C := [σ(−D − 3 − VI − 2.5VC + 0.2B^2) ≤ UC], UC ∼ Unif(0, 1)
SA := [σ(−2C) ≤ USA], USA ∼ Unif(0, 1)
SFe := [σ(5 − 9C) ≤ USFe], USFe ∼ Unif(0, 1)
SFa := [σ(−1 + B^2 − 2C) ≤ USFa], USFa ∼ Unif(0, 1)

Figure 6: SCM for 7var-covid with cost(a) = δD + δVI + δVC + δB + δSA + δSFe + δSFa

![14_image_2.png](14_image_2.png)

X1 := U1, U1 ∼ N(0, 1)
X2 := X1 + U2, U2 ∼ Bernoulli(0.60)
X3 := −0.05X1 + 0.25X2^2 + U3, U3 ∼ N(0, 0.1)
Y := β1X1 + β2X2 + β3X3 + UY, UY ∼ Uniform(0, 1)
Figure 7: SCM for 3var-causal-nonlinear with cost(a) = δ1 + δ2 + δ3

## B Hyperparameters For Reproducibility Study

Table 6 presents the hyperparameters used to reproduce the authors' results. The number of observations column refers to the dataset, whereas the population size and the number of generations columns are relevant to the genetic algorithm optimization procedure. The confidence hyperparameter has a different interpretation depending on which method is evaluated. Since the prediction function for Counterfactual Explanations (CE) is deterministic (the inputs to the model are changed such that the prediction changes in the desired way), the confidence hyperparameter does not apply to the CE method. In contrast, for CR and ICR this hyperparameter defines the targeted acceptance probability and the targeted improvement probability, respectively.

| Data set | Number of observations | Individuals with recourse calculated | Confidence | Number of generations | POP SIZE | n digits | nr refits |
|---|---|---|---|---|---|---|---|
| 3var-noncausal | 4000 | 200 | 0.75, 0.85, 0.9, 0.95 | 600 | 300 | 1 | 5 |
| 3var-causal | 4000 | 200 | 0.75, 0.85, 0.9, 0.95 | 600 | 300 | 1 | 5 |
| 5var-skill | 4000 | 200 | 0.75, 0.85, 0.9, 0.95 | 1000 | 500 | 1 | 5 |
| 7var-covid | 20000 | 200 | 0.75, 0.85, 0.9, 0.95 | 1000 | 500 | 1 | 5 |

Table 6: Hyperparameters based on the original paper.

## C Experiment Metrics

Experiment 1: Do CE, CR, and ICR lead to improvement? The observed improvement rate γ obs was the metric used to assess this. In the setting where the structural equations are assumed, it is possible to acquire the individualized improvement confidence. The subpopulation-based improvement confidence is derived in a setting where only the causal graph is assumed.
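As a small illustration of the improvement metric, γ obs can be computed as below (`observed_improvement_rate` is a hypothetical helper we name for illustration, not a function from the repository):

```python
import numpy as np

def observed_improvement_rate(h_pre, y_post):
    """gamma^obs: among individuals rejected by the pre-recourse
    predictor (h_pre == 0) who then applied the recommended
    intervention, the fraction whose ground-truth target y_post
    ends up in the favorable class."""
    h_pre = np.asarray(h_pre)
    y_post = np.asarray(y_post, dtype=float)
    rejected = h_pre == 0
    return float(y_post[rejected].mean())
```

The acceptance rates η obs and η obs,refit follow the same pattern, with the post-recourse prediction of the (refitted) classifier in place of the ground-truth target.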
Experiment 2: Do CE, CR, and ICR lead to acceptance (by pre- and post-recourse predictor)? Recourse recommendations should lead to improvement and change the classifier's original decision. Whether acceptance naturally ensues from improvement depends on the ability of the predictor to recognize improvements. Thus, the metric calculated here is the observed acceptance rate η obs w.r.t. the optimal pre-recourse observational predictor h∗, and in the case of individualized ICR additionally w.r.t. the individualized post-recourse predictor h∗ind, in order to account for an imbalance between ICR and the predictor.

Experiment 3: Do CE, CR, and ICR lead to acceptance by other predictors with comparable test error? The metric deployed for this experiment is the observed acceptance rate for other fits with comparable test set performance, η obs,refit.

Experiment 4: How costly are CE, CR, and ICR recommendations? For the last experiment, the authors used the average recourse cost for rejected individuals who were consequently provided with a recourse recommendation. The cost is defined differently for each dataset and can be found in Appendix 3 of the original paper.

## D Robustness On Shifted Data

The 3var datasets consist of standard normal distributions (mean 0 and variance 1) that are causally related. For each feature, we create a new dataset by shifting once the mean (from 0 to 0.5), once the variance (from 1.0 to 0.5), and once both (mean from 0 to 0.5 and variance from 1.0 to 0.5). Due to the causal relationships, a shift for x1 also affects all children of x1. A similar procedure for shifting the data was used by Upadhyay et al. (2021). We have our unshifted data D1 and model M1, which is trained on D1. Now, we shift one feature by a specific mean or variance or both and thereby create a new dataset D2. On this data, we train the model M2. We used 50% of the data for model training and 50% for validation, as König et al. (2023) did.
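The shift construction described above can be sketched as follows, sampling ancestrally so that a shifted noise distribution for x1 propagates to its causal children automatically (function and parameter names are illustrative, not the repository's):

```python
import numpy as np

def sample_3var_features(n, mean=0.0, var=1.0, seed=1):
    """Sample x1, x2, x3 from the 3var-causal structure, with the
    noise of the root x1 drawn from N(mean, var); the children
    x2 and x3 inherit the shift through the structural equations."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mean, np.sqrt(var), n)       # possibly shifted root
    x2 = x1 + rng.normal(0.0, 1.0, n)            # child of x1
    x3 = x1 + x2 + rng.normal(0.0, 1.0, n)       # child of x1 and x2
    return np.column_stack([x1, x2, x3])

d1 = sample_3var_features(10_000)                        # unshifted D1
d2 = sample_3var_features(10_000, mean=0.5, var=0.5)     # shifted D2
```

M1 would then be trained on d1 and M2 on d2, and invalidity is evaluated by feeding the recourse recommendations computed against M1 into M2.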
Recourse is applied to data D1 to revert the decision of M1. Invalidity measures how many individuals' recourse recommendations are invalid after the data shift. To implement this, the recourse recommendation is used as an input for M2, and a recourse is marked as invalid if M2 predicts the decision as 0, meaning that the recourse recommendation did not change the algorithm's decision. The procedure for calculating the invalidity follows the implementation of Rawal et al. (2020).

## E Robustness Of The Classifier

Tables 7 and 8 depict our experimental results for the robustness of ICR when considering different classifiers.

| recourse | MLP | SVM | adaboost | logreg | random forest |
|------------|-------------|-------------|-------------|-------------|-----------------|
| CE | 0.98 ± 0.00 | 0.98 ± 0.00 | 0.93 ± 0.03 | 0.96 ± 0.05 | 0.88 ± 0.04 |
| ind. CR | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 | 1.00 ± 0.00 |
| ind. ICR | 1.00 ± 0.01 | 1.00 ± 0.00 | 0.98 ± 0.01 | 1.00 ± 0.00 | 0.99 ± 0.00 |
| sub. CR | 0.99 ± 0.00 | 0.98 ± 0.02 | 1.00 ± 0.00 | 0.99 ± 0.00 | 1.00 ± 0.01 |
| sub. ICR | 1.00 ± 0.01 | 1.00 ± 0.00 | 0.98 ± 0.02 | 1.00 ± 0.00 | 0.99 ± 0.01 |

| recourse | MLP | SVM | adaboost | logreg | random forest |
|------------|-------------|-------------|-------------|-------------|-----------------|
| CE | 0.42 ± 0.15 | 0.41 ± 0.12 | 0.39 ± 0.01 | 0.54 ± 0.1 | 0.42 ± 0.01 |
| ind. CR | 0.48 ± 0.13 | 0.46 ± 0.11 | 0.41 ± 0.01 | 0.59 ± 0.08 | 0.45 ± 0.01 |
| ind. ICR | 1.00 ± 0.01 | 1.00 ± 0.00 | 0.98 ± 0.02 | 1.00 ± 0.0 | 0.98 ± 0.02 |
| sub. CR | 0.44 ± 0.17 | 0.43 ± 0.13 | 0.41 ± 0.0 | 0.57 ± 0.09 | 0.43 ± 0.01 |
| sub.
ICR | 0.99 ± 0.01 | 1.00 ± 0.00 | 0.98 ± 0.02 | 1.00 ± 0.00 | 0.97 ± 0.02 |

Table 7: η obs of different classifiers with confidence 0.95, on the reduced datasets for the classifiers.

Table 8: η obs,refit of different classifiers with confidence 0.95, on the reduced datasets for the classifiers.

## F Trend Comparison

In this section, we provide all the results of König et al. (2023) next to our findings for the reproducibility part.

![17_image_0.png](17_image_0.png)

Figure 8: Trend comparison. The panels juxtapose the authors' results with ours: the observed improvement rates γ obs, the observed acceptance rates η obs w.r.t. h∗ (for ind. ICR additionally w.r.t. h∗,ind), the observed acceptance rates for other fits with comparable test set performance η obs,refit, and the cost tables.

Cost table (authors):

| method | cost |
|----------|-------------|
| CE | 1.82 ± 1.09 |
| ind. CR | 1.34 ± 1.14 |
| sub. CR | 1.65 ± 1.02 |
| ind. ICR | 4.26 ± 3.34 |
| sub. ICR | 4.20 ± 3.33 |

Cost table (ours):

| method | cost |
|----------|-------------|
| CE | 2.23 ± 0.18 |
| ind. CR | 1.33 ± 0.13 |
| sub. CR | 2.12 ± 0.17 |
| ind. ICR | 3.80 ± 0.15 |
| sub. ICR | 3.79 ± 0.16 |

## G Robustness Of The Genetic Algorithm

The following Tables 9 and 10 depict the experimental results for the acceptance rates η obs and η obs,refit, respectively, to compare the robustness of ICR when using two different algorithms for minimizing the specified optimization problem.

| recourse | NSGA-II | NSGA-III |
|------------|-------------|-------------|
| CE | 0.96 ± 0.07 | 0.96 ± 0.08 |
| ind. CR | 1.00 ± 0.00 | 1.00 ± 0.00 |
| ind. ICR | 1.00 ± 0.00 | 0.99 ± 0.00 |
| sub. CR | 0.99 ± 0.01 | 0.99 ± 0.01 |
| sub.
ICR | 0.99 ± 0.00 | 0.99 ± 0.00 |

Table 9: η obs (observed rate ± standard deviation) of each genetic algorithm, achieved for a user-specified confidence of 0.95, using the reduced case scenario hyper-parameters, as in Table 2. All rates in the table have been rounded to the third decimal place.

| recourse | NSGA-II | NSGA-III |
|------------|-------------|-------------|
| CE | 0.54 ± 0.16 | 0.54 ± 0.12 |
| ind. CR | 0.59 ± 0.13 | 0.59 ± 0.09 |
| ind. ICR | 1.00 ± 0.00 | 0.99 ± 0.00 |
| sub. CR | 0.57 ± 0.15 | 0.57 ± 0.12 |
| sub. ICR | 0.99 ± 0.00 | 0.99 ± 0.00 |

Table 10: η obs,refit (observed rate ± standard deviation) of each genetic algorithm, achieved for a user-specified confidence of 0.95, using the reduced case scenario hyper-parameters, as in Table 2. All rates in the table have been rounded to the third decimal place.

## H Computational Resources

Figure 9 illustrates the effect of multiprocessing on the reproduction speed of the different methods.

![19_image_0.png](19_image_0.png)

Figure 9: Time usage comparison for the original experiments with the original (linear) method and the improved one (parallel).
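The parallelization behind Figure 9 follows a standard map-over-individuals pattern; a minimal sketch of that pattern, where `find_recourse` is a hypothetical stand-in for the per-individual genetic search (a thread pool is used here for portability; a process pool would be the natural choice for this CPU-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def find_recourse(individual):
    # Stand-in for the expensive per-individual optimization;
    # returns a dummy "recommendation" for illustration.
    return [x + 1 for x in individual]

def recourse_for_all(individuals, max_workers=4):
    """Compute recourse for every rejected individual in parallel,
    preserving the input order of the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(find_recourse, individuals))
```

Since each individual's recourse search is independent, this map parallelizes without any coordination beyond collecting the results.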
Review 1:
Summary: This paper reproduced the results of König et al. (2023), who proposed improvement-focused causal recourse (ICR), confirming all the proposed claims. In addition, the authors assessed ICR's generalization and robustness against different models and datasets. They also used several genetic algorithms as an optimization strategy to repeat the previous experiments, achieving comparable performance.
Strengths and Weaknesses:
Strengths:
- The reproduced experiments and the newly added ones in this paper are solid and clearly exhibited. They faithfully reproduce the conclusions claimed in König et al. (2023) and add a broader empirical analysis.
- The relationship with the previous work is clearly stated.
Weaknesses:
- The reasonability of the two settings is not clear. For Section 3.1.1 (Individualized Improvement Confidence), how can one obtain ground-truth/reliable SCMs in practice? In addition, is an SCM shared by a population or personalized? Please provide examples to demonstrate the reasonability of this setting. Maybe I have not totally understood this problem; it seems that the setting of Section 3.1.2 is more reasonable.
- There is a lack of experiments on real datasets, such as bank credit or public policy.
- I agree that the authors have done sufficient experimental analysis. However, there is a lack of corresponding understanding or explanation of the results on generalization and robustness.
- How should the $\bar{\gamma}$ in Eq. (1) be chosen? Does there exist a considerable tradeoff between the improvement confidence and the difficulty of optimization?
Requested Changes:
- Make a more detailed comparison among CE, CR, and ICR with intuitive examples and mathematical definitions.
- Clarify the settings' reasonability.
- Add an analysis about the choice of $\bar{\gamma}$.
- Provide some insightful understanding of the current experimental results.
Broader Impact Concerns: N/A ================================================== Review 2: Summary: The paper aims to reproduce the main findings of the paper "Improvement-Focused Causal Recourse (ICR)" by König et al. The findings of the original paper are largely confirmed. The paper also tests the robustness and generalizability of ICR in new domains not considered in the original paper. The results are overall favorable for ICR. Strengths and Weaknesses: I applaud the authors for conducting a very detailed reproducibility study. This is something we need more of in the field. However, the current manuscript does not feel like a standalone research paper but more like a technical supplement to the original paper. I would perhaps understand the purpose of this study if the paper of König et al. were extremely impactful in the field and we really needed to make sure their results are reproducible, but the paper is only very recent and its contributions are not primarily experimental. The robustness tests are not very comprehensive, but the study mainly focused on reproducing the original findings. To make matters worse, the original paper is not explained well in the current manuscript but this very important piece of background is assumed, so the manuscript is not even easily understandable. The fact that knowledge of the ICR work is assumed is one of the factors that make this manuscript feel like a technical supplement. To be clear, it is still valuable to make this study publicly available as an accompanying piece of evidence for the original paper, but right now it feels like it would be of very limited interest. Requested Changes: Please make the paper self-contained. One should not have to read König et al. to understand the background and main findings.
Broader Impact Concerns: N/A ================================================== Review 3: Summary: The authors reproduce the main empirical findings from the paper "Improvement-Focused Causal Recourse (ICR)" and extend the experiments to additionally assess the robustness of ICR and competitors with respect to distribution shifts. Moreover, they extend the experiments to account for more model classes, a newer version of the NSGA optimizer, and one additional dataset. They find that the claims from the original paper can be reproduced; moreover, the superior robustness of ICR is demonstrated to extend to distribution shifts in an illustrative example (in contrast to just model refits). Strengths and Weaknesses: Strengths: - The authors carefully reproduced the results of a recent and relevant paper. Knowing that the comparison between the three methods regarding claims 1-4 can be reproduced will interest authors in the field. - The extension of the results to more types of classifiers -- especially when it comes to the robustness of recourse to refits -- is interesting, and supports the theoretical claim of the original work that the lack of robustness to refits is a general problem of acceptance-focused recourse more than a problem of a specific learner. - The extension of the experiments to assess the robustness of the methods to shifts in the data generating process -- albeit at this point rather illustrative -- hints towards an interesting direction for future research. - Overall, the claims that the authors make are supported with evidence. - Minor: It is nice to see that the authors also examined the optimization procedure to determine whether it is relevant to the paper's findings. Weaknesses: - The new dataset only marginally adds value. The original dataset already includes linear and nonlinear settings; The new dataset adds a low-dimensional example with nonlinearities. 
It would have been interesting to see an application on a higher-dimensional dataset or to craft one that resembles a real-world dataset. - The assessment of the robustness to shifts is, at this point, rather illustrative. I would expect that, if the shift in the data is strong enough, ICR recommendations would break down too. Yet, in the experiments, the ICR invalidity rate stays around 0. It would be interesting to have an evaluation with stronger shifts as well (e.g., a figure showing the strength of the perturbation vs the invalidity of recourse). Also, it would be interesting to see what kinds of shifts could break ICR. I get that evaluating this would take up additional computational resources. - In light of the above points, the wording when describing the conclusions is, in my taste, in a few cases too strong. See, e.g., p8: "ICR is robust not only to refits but also to distribution shifts." - When describing your methodology, the story is not fully streamlined yet. For example, you describe the hyperparameter settings for the additional robustness experiments in section 3.3 before introducing the robustness assessment in section 3.5. Overall, the writing could be improved. See the detailed comments below for suggestions for improvement. Overall, the paper is on the borderline to qualify for a reproducibility certification. Requested Changes: - p1, "may suggest actions that revert the model's verdict whenever possible": More correct would be "whenever lucrative". - p1, "shifts away from the counterfactual explanations paradigm": A bit unclear what is meant with this paradigm. Maybe write something like "Karimi et al recognize CR to be a causal problem" instead? - p1, "to design decision systems that accurately predict both pre- and post-recourse." Not quite correct; Changing the decision system is only a small part of the ICR framework and only applies in the context of individualized recourse (where the SCM is known). 
The causal knowledge is mostly leveraged to make improvement-focused recommendations (and especially causal graphs are used for nothing else). The follow-up sentence makes no sense; do you really mean the model adaptation when saying "this is done by defining the improvement confidence"? - p2, "are one of the human-friendly approaches", can you support this with a citation? - p2, "improves on this by considering the causal dependencies between covariates": That was the contribution of CR; the contribution of ICR is to consider the causal dependencies with Y. - p2, "there is a minor inconsistency": There would be an inconsistency if different values were reported in the paper (which is not the case). Maybe rephrase along the lines of: "For the number of observations, we used the specification from the repo since it is not specified in the paper." - p2, "SCM recourse-based" -> "SCM-based recourse" - p3, "knowledge of the SCM is unknown" -> "where the SCM is not known" - p3, "no observed variables influence both the dependent and independent variables" -> this is not generally true. It is still an assumption. - p4, "in order to conduct additional" -> I was confused for a second about where the changes to the hyperparameters apply. Maybe start the paragraph with a sentence along the lines of "We additionally performed experiments regarding the robustness of the methods. For the robustness experiments, we down-scaled ..." - p4, Overall, the text in Section 3 mixes describing background, describing what you did, describing what is novel/different, and discussing challenges when reproducing the results. Especially for the described problems in reproduction, it is difficult to get an overview, and difficult to understand how/whether the inconsistencies could be resolved in conversation with the authors.
I would suggest restructuring Section 3, for example, as follows: - move the background about ICR to a background section, and also explain CE, and CR - In Section 3 focus on clearly delivering which experiments you reproduced and which additional experiments you conducted. - Add a separate Section to Section 3 describing and discussing in one place what challenges you faced when reproducing the results (e.g., hyperparameter only mentioned in the repo), how you resolved those challenges, what the authors had to say about it, etc. - p6, "games by only applying recourse to the number of commits" -> This only applies to the skill dataset, which is unclear from your text. - p8, "This lead to the conclusion that ICR is robust not only to refits from the same distribution but even to refits from distributions shifts" -> The conclusion is a bit strong, given that only a few small synthetic examples were studied. Maybe reword to something like "suggests that ICR is robust ...". - p10, "and the [robustness to ] mean and variance shifts in the dataset." - p10 "Although a bit late" - What does that mean? Two weeks or two months? Broader Impact Concerns: There is no broader impact statement. A discussion of the results' implications, especially regarding the robustness, would improve the paper. However, I don't think it is absolutely necessary to add one. ================================================== Metareview: Recommendation: Reject Comment: The extensions made to the original experiments appear to be marginal, and the reported results primarily confirm the findings of the original paper without providing significant new insights. Overall, merely downloading simulation code and rerunning small-scale experiments with synthetic data for a theoretical paper does not constitute a substantial contribution. ==================================================
# Multimodal Chain-Of-Thought Reasoning In Language Models

Zhuosheng Zhang∗ *zhangzs@sjtu.edu.cn*
School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University

Reviewed on OpenReview: *https: // openreview. net/ forum? id= y1pPWFVfvR*

## Abstract

Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have primarily focused on the language modality. We propose Multimodal-CoT, which incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. Experimental results on the ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed approach. With Multimodal-CoT, our model under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates that Multimodal-CoT offers the advantages of mitigating hallucination and enhancing convergence speed. Code is publicly available at https://github.com/amazon-science/mm-cot.

## 1 Introduction

Imagine reading a textbook with no figures or tables. Our ability to acquire knowledge is greatly strengthened by jointly modeling diverse data modalities, such as vision, language, and audio. Recently, large language models (LLMs) (Brown et al., 2020; Thoppilan et al., 2022; Rae et al., 2021; Chowdhery et al., 2022) have shown impressive performance in complex reasoning by generating intermediate reasoning steps before inferring the answer. This intriguing technique is called chain-of-thought (CoT) reasoning (Wei et al., 2022b; Kojima et al., 2022; Zhang et al., 2023d).

∗Work done at Amazon Web Services. Correspondence to: Zhuosheng Zhang and Aston Zhang.
Aston Zhang∗ *az@astonzhang.com*
GenAI, Meta

Mu Li *muli@cs.cmu.edu*
Amazon Web Services

Hai Zhao *zhaohai@cs.sjtu.edu.cn*
Department of Computer Science and Engineering, Shanghai Jiao Tong University

George Karypis *gkarypis@amazon.com*
Amazon Web Services

Alex Smola *alex@smola.org*
Amazon Web Services

However, existing studies related to CoT reasoning are largely isolated in the language modality (Wang et al., 2022c; Zhou et al., 2022; Lu et al., 2022b; Fu et al., 2022), with little consideration of multimodal scenarios. To elicit CoT reasoning in multimodality, we advocate a Multimodal-CoT paradigm. Given the inputs in different modalities, Multimodal-CoT decomposes multi-step problems into intermediate reasoning steps (rationale) and then infers the answer. Since vision and language are the most popular modalities, we focus on those two modalities in this work. An example is shown in Figure 1.

![1_image_0.png](1_image_0.png)

Figure 1: Example of the multimodal CoT task.

In general, Multimodal-CoT reasoning can be elicited through two primary paradigms: (i) prompting LLMs and (ii) fine-tuning smaller models.1 We will delve into these paradigms and delineate their associated challenges as follows. The most immediate way to perform Multimodal-CoT is to transform the input of different modalities into a unified modality and prompt LLMs to perform CoT (Zhang et al., 2023a; Lu et al., 2023; Liu et al., 2023; Alayrac et al., 2022; Hao et al., 2022; Yasunaga et al., 2022). For example, it is possible to generate a caption for an image with a captioning model and then concatenate the caption with the original language input to be fed into LLMs (Lu et al., 2022a). The development of large multimodal models such as GPT-4V (OpenAI, 2023) and Gemini (Reid et al., 2024) has notably enhanced the quality of generated captions, resulting in finer-grained and more detailed descriptions.
However, the captioning process still incurs significant information loss when transforming vision signals into textual descriptions. Consequently, using image captions rather than vision features may suffer from a lack of mutual synergy in the representation space of different modalities. In addition, LLMs are either gated behind paywalls or resource-consuming to deploy locally. To facilitate the interaction between modalities, another potential solution is to fine-tune smaller language models (LMs) by fusing multimodal features (Zhang et al., 2023c; Zhao et al., 2023). As this approach allows the flexibility of adjusting model architectures to incorporate multimodal features, we study fine-tuning models in this work instead of prompting LLMs. The key challenge is that language models under 100 billion parameters tend to generate hallucinated rationales that mislead the answer inference (Ho et al., 2022; Magister et al., 2022; Ji et al., 2022; Zhang et al., 2023b). To mitigate the challenge of hallucination, we propose Multimodal-CoT, which incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference.2 In this way, answer inference can leverage better generated rationales that are based on multimodal information. Our experiments were conducted on the ScienceQA (Lu et al., 2022a) and A-OKVQA (Schwenk et al., 2022) datasets, which are the latest multimodal reasoning benchmarks with annotated reasoning chains. Our method achieved state-of-the-art performance on the ScienceQA benchmark upon release. We find that Multimodal-CoT is beneficial in mitigating hallucination and boosting convergence. Our contributions are summarized as follows: (i) To the best of our knowledge, this work is the first to study CoT reasoning in different modalities in scientific peer-reviewed literature.
(ii) We propose a two-stage framework by fine-tuning language models to fuse vision and language representations to perform Multimodal-CoT. The model is able to generate informative rationales to facilitate inferring final answers. (iii) We analyze why the naive way of employing CoT fails in this context and how incorporating vision features alleviates the problem. The approach has been shown to be generally effective across tasks and backbone models.

1We refer to small models as models with less than 1 billion parameters (hereinafter dubbed as 1B-models).
2This work focuses on the language and vision modalities.

Table 1: Comparison of typical CoT techniques (ICL: in-context learning; FT: fine-tuning; KD: knowledge distillation). In contrast to prior work, Multimodal-CoT is multimodal and fine-tunes 1B-models, without relying on the outputs of LLMs.

| Models | Multimodal | Model / Engine | Training | CoT Role | CoT Source |
|---|---|---|---|---|---|
| Zero-Shot-CoT (Kojima et al., 2022) | ✗ | GPT-3.5 (175B) | ICL | Reasoning | Template |
| Few-Shot-CoT (Wei et al., 2022b) | ✗ | PaLM (540B) | ICL | Reasoning | Hand-crafted |
| Self-Consistency-CoT (Wang et al., 2022b) | ✗ | Codex (175B) | ICL | Reasoning | Hand-crafted |
| Least-to-Most Prompting (Zhou et al., 2022) | ✗ | Codex (175B) | ICL | Reasoning | Hand-crafted |
| Retrieval-CoT (Zhang et al., 2023d) | ✗ | GPT-3.5 (175B) | ICL | Reasoning | Auto-generated |
| PromptPG-CoT (Lu et al., 2022b) | ✗ | GPT-3.5 (175B) | ICL | Reasoning | Hand-crafted |
| Auto-CoT (Zhang et al., 2023d) | ✗ | Codex (175B) | ICL | Reasoning | Auto-generated |
| Complexity-CoT (Fu et al., 2022) | ✗ | GPT-3.5 (175B) | ICL | Reasoning | Hand-crafted |
| Few-Shot-PoT (Chen et al., 2022) | ✗ | GPT-3.5 (175B) | ICL | Reasoning | Hand-crafted |
| UnifiedQA (Lu et al., 2022a) | ✗ | T5 (770M) | FT | Explanation | Crawled |
| Fine-Tuned T5 XXL (Magister et al., 2022) | ✗ | T5 (11B) | KD | Reasoning | LLM-generated |
| Fine-Tune-CoT (Ho et al., 2022) | ✗ | GPT-3 (6.7B) | KD | Reasoning | LLM-generated |
| Multimodal-CoT (our work) | ✓ | T5 (770M) | FT | Reasoning | Crawled |

## 2 Background

This section reviews studies eliciting CoT reasoning by prompting and fine-tuning language models.

## 2.1 CoT Reasoning With LLMs

Recently, CoT has been widely used to elicit the multi-step reasoning abilities of LLMs (Wei et al., 2022b). Concretely, CoT techniques encourage the LLM to generate intermediate reasoning chains for solving a problem. Studies have shown that LLMs can perform CoT reasoning with two major paradigms of techniques: Zero-Shot-CoT (Kojima et al., 2022) and Few-Shot-CoT (Wei et al., 2022b; Zhang et al., 2023d). For Zero-Shot-CoT, Kojima et al. (2022) showed that LLMs are decent zero-shot reasoners by adding a prompt like "Let's think step by step" after the test question to invoke CoT reasoning. For Few-Shot-CoT, a few step-by-step reasoning demonstrations are used as conditions for inference. Each demonstration has a question and a reasoning chain that leads to the final answer. The demonstrations are commonly obtained by hand-crafting or automatic generation. These two techniques, hand-crafting and automatic generation, are thus referred to as Manual-CoT (Wei et al., 2022b) and Auto-CoT (Zhang et al., 2023d). With effective demonstrations, Few-Shot-CoT often achieves stronger performance than Zero-Shot-CoT and has attracted more research interest. Therefore, most recent studies have focused on how to improve Few-Shot-CoT. Those studies are categorized into two major research lines: (i) optimizing the demonstrations; (ii) optimizing the reasoning chains. Table 1 compares typical CoT techniques.

Optimizing Demonstrations The performance of Few-Shot-CoT relies on the quality of demonstrations. As reported in Wei et al. (2022b), using demonstrations written by different annotators results in dramatic accuracy disparity in reasoning tasks.
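To make the two prompting paradigms above concrete, here is a minimal sketch of how Zero-Shot-CoT and Few-Shot-CoT prompts can be assembled. The helper names and prompt wording beyond the "Let's think step by step" trigger are illustrative, not taken from the original papers:

```python
def zero_shot_cot_prompt(question: str) -> str:
    # Zero-Shot-CoT: append a trigger phrase after the test question
    # to invoke step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot_prompt(demos, question: str) -> str:
    # Few-Shot-CoT: prepend demonstrations, each pairing a question with a
    # reasoning chain that leads to the final answer, then ask the test question.
    blocks = [f"Q: {q}\nA: {chain} The answer is {a}." for q, chain, a in demos]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)
```

Whether the demonstrations in `demos` are hand-crafted (Manual-CoT) or generated automatically (Auto-CoT) is exactly the distinction drawn in the text above.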
Beyond hand-crafting the demonstrations, recent studies have investigated ways to optimize the demonstration selection process. Notably, Rubin et al. (2022) retrieved demonstrations semantically similar to the test instance. However, this approach shows degraded performance when there are mistakes in the reasoning chains (Zhang et al., 2023d). To address this limitation, Zhang et al. (2023d) found that the key is the diversity of demonstration questions and proposed Auto-CoT: (i) partition questions of a given dataset into a few clusters; (ii) sample a representative question from each cluster and generate its reasoning chain using Zero-Shot-CoT with simple heuristics. In addition, reinforcement learning (RL) and complexity-based selection strategies were proposed to obtain effective demonstrations. Fu et al. (2022) chose examples with complex reasoning chains (i.e., with more reasoning steps) as the demonstrations. Lu et al. (2022b) trained an agent to find optimal in-context examples from a candidate pool and maximize the prediction rewards on given training examples when interacting with GPT-3.5.

Optimizing Reasoning Chains A notable way to optimize reasoning chains is problem decomposition. Zhou et al. (2022) proposed least-to-most prompting to decompose complex problems into sub-problems and then solve these sub-problems sequentially. As a result, solving a given sub-problem is facilitated by the answers to previously solved sub-problems. Similarly, Khot et al. (2022) used diverse decomposition structures and designed different prompts to answer each sub-question. In addition to prompting the reasoning chains as natural language texts, Chen et al. (2022) proposed program-of-thoughts (PoT), which modeled the reasoning process as a program and prompted LLMs to derive the answer by executing the generated programs. Another trend is to vote over multiple reasoning paths for a test question. Wang et al.
(2022b) introduced a self-consistency decoding strategy to sample multiple outputs of LLMs and then took a majority vote over the final answers. Wang et al. (2022c) and Li et al. (2022c) introduced randomness in the input space to produce more diverse outputs for voting.

## 2.2 Eliciting CoT Reasoning By Fine-Tuning Models

There has been recent interest in eliciting CoT reasoning by fine-tuning language models. Lu et al. (2022a) fine-tuned the encoder-decoder T5 model on a large-scale dataset with CoT annotations. However, a dramatic performance decline is observed when using CoT to infer the answer, i.e., generating the reasoning chain before the answer (reasoning). Instead, CoT is only used as an explanation after the answer. Magister et al. (2022) and Ho et al. (2022) employed knowledge distillation by fine-tuning a student model on the chain-of-thought outputs generated by a larger teacher model. Wang et al. (2022a) proposed an iterative context-aware prompting approach to dynamically synthesize prompts conditioned on the current step's contexts. There is a key challenge in training 1B-models to be CoT reasoners. As observed by Wei et al. (2022b), models under 100 billion parameters tend to produce illogical CoT that leads to wrong answers. In other words, it might be harder for 1B-models to generate effective CoT than to directly generate the answer. It becomes even more challenging in a multimodal setting where answering the question also requires understanding the multimodal inputs. In the following part, we will explore the challenge of Multimodal-CoT and investigate how to perform effective multi-step reasoning.

## 3 Challenge Of Multimodal-CoT

Existing studies have suggested that the CoT reasoning ability may emerge in language models at a certain scale, e.g., over 100 billion parameters (Wei et al., 2022a). However, it remains an unresolved challenge to elicit such reasoning abilities in 1B-models, let alone in the multimodal scenario.
This work focuses on 1B-models as they can be fine-tuned and deployed with consumer-grade GPUs (e.g., 32G memory). In this section, we will investigate why 1B-models fail at CoT reasoning and study how to design an effective approach to overcome the challenge.

## 3.1 Towards The Role Of CoT

To begin with, we fine-tune a text-only baseline for CoT reasoning on the ScienceQA benchmark (Lu et al., 2022a). We adopt FLAN-Alpaca (Base) as the backbone language model.3 Our task is modeled as a text generation problem, where the model takes the textual information as the input and generates the output sequence that consists of the rationale and the answer. As an example shown in Figure 1, the model takes the concatenation of tokens of the question text (Q), the context text (C), and multiple options (M) as the input. To study the effect of CoT, we compare the performance with three variants: (i) No-CoT, which predicts the answer directly (QCM→A); (ii) Reasoning, where answer inference is conditioned on the rationale (QCM→RA); (iii) Explanation, where the rationale is used for explaining the answer inference (QCM→AR).

3https://github.com/declare-lab/flan-alpaca. It is a 200M T5 model (Raffel et al., 2020) fine-tuned on Stanford Alpaca data (Taori et al., 2023). Implementation details are presented in Section 5.2.

Table 2: Effects of CoT in the one-stage setting.

| Method | Format | Accuracy |
|---|---|---|
| No-CoT | QCM→A | 81.63 |
| Reasoning | QCM→RA | 69.32 |
| Explanation | QCM→AR | 69.68 |

![4_image_0.png](4_image_0.png)

**Question:** Will these magnets attract or repel each other?

**Gold Rationale:** Will these magnets attract or repel? To find out, look at which poles are closest to each other. The north pole of one magnet is closest to the south pole of the other magnet. Poles that are different attract. So, these magnets will attract each other. **Answer:** The answer is (A).

**Baseline** — Generated Rationale: Will these magnets attract or repel? To find out, look at which poles are closest to each other. The south pole of one magnet is closest to the south pole of the other magnet. Poles that are the same repel. So, these magnets will repel each other. Answer: The answer is (B).

**+ Vision Features** — Generated Rationale: Will these magnets attract or repel? To find out, look at which poles are closest to each other. The north pole of one magnet is closest to the south pole of the other magnet. Poles that are different attract. So, these magnets will attract each other. Answer: The answer is (A).

Figure 2: Example of the two-stage framework without vision features (baseline) and with vision features (ours) for generating rationales and predicting answers. The upper part presents the problem details with a gold rationale, and the lower part shows the outputs of the baseline and our method incorporated with vision features. We observe that the baseline fails to predict the right answer due to being misled by hallucinated rationales. More examples are shown in Appendix A.1.
Surprisingly, as shown in Table 2, we observe a ↓12.31% accuracy decrease (81.63%→69.32%) if the model predicts rationales before answers (QCM→RA). The results imply that the rationales might not necessarily contribute to predicting the right answer. According to Lu et al. (2022a), the plausible reason might be that the model exceeds the maximum token limit before obtaining the required answer or stops generating the prediction early. However, we find that the maximum length of the generated outputs (RA) is always less than 400 tokens, which is below the length limit of language models (i.e., 512 in T5 models). Therefore, it deserves a more in-depth investigation into why the rationales harm answer inference.

## 3.2 Misleading By Hallucinated Rationales

To dive into how the rationales affect the answer prediction, we separate the CoT problem into two stages, *rationale generation* and *answer inference*.4 We report the RougeL score and accuracy for the rationale generation and answer inference, respectively. Table 3 shows the results based on the two-stage framework.

Table 3: Two-stage setting of (i) rationale generation (RougeL) and (ii) answer inference (Accuracy).

| Method | (i) QCM→R | (ii) QCMR→A |
|---|---|---|
| Two-Stage Framework | 90.73 | 78.57 |
| w/ Captions | 90.88 | 79.37 |
| w/ Vision Features | 93.46 | 85.31 |

Although the two-stage baseline model achieves a 90.73 RougeL score for rationale generation, the answer inference accuracy is only 78.57%. Compared with the QCM→A variant (81.63%) in Table 2, the result shows that the generated rationale in the two-stage framework does not improve answer accuracy. Then, we randomly sample 50 error cases and find that the model tends to generate hallucinated rationales that mislead the answer inference. As an example shown in Figure 2, the model (left part) hallucinates that "*The south pole of one magnet is closest to the south pole of the other magnet*" due to the lack of reference to the vision content.
We find that such mistakes occur at a ratio of 56% among the error cases (Figure 3(a)).

## 3.3 Multimodality Contributes To Effective Rationales

We speculate that this phenomenon of hallucination is due to a lack of the necessary vision contexts for performing effective Multimodal-CoT. To inject vision information, a simple way is to transform the image into a caption (Lu et al., 2022a) and then append the caption to the input of both stages.

4The details will be presented in Section 4.

However, as shown in Table 3, using captions only yields marginal performance gains (↑0.80%). Then, we explore an advanced technique by incorporating vision features into the language model. Concretely, we feed the image to the ViT model (Dosovitskiy et al., 2021b) to extract vision features. Then we fuse the vision features with the encoded language representations before feeding them to the decoder (more details will be presented in Section 4). Interestingly, with vision features, the RougeL score of the rationale generation is boosted to 93.46% (QCM→R), which correspondingly contributes to a better answer accuracy of 85.31% (QCMR→A).

Figure 3: The ratio of (a) hallucination mistakes and (b) correction rate w/ vision features.

With those effective rationales, the phenomenon of hallucination is mitigated: 60.7% of the hallucination mistakes in Section 3.2 have been corrected (Figure 3(b)), as shown in the example in Figure 2 (right part).5 The analysis so far compellingly shows that vision features are indeed beneficial for generating effective rationales and contributing to accurate answer inference. As the two-stage method achieves better performance than one-stage methods, we choose the two-stage method in our Multimodal-CoT framework.

## 4 Multimodal-CoT

In light of the discussions in Section 3, we propose Multimodal-CoT.
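At a high level, the two-stage procedure (detailed in Section 4.1) can be sketched as follows. The `generate` callable is a hypothetical stand-in for a fine-tuned sequence-to-sequence model; in the actual framework, two models with the same architecture are trained independently for the two stages:

```python
def two_stage_multimodal_cot(generate, language_input, vision_input):
    """Sketch of the two-stage framework: Stage 1 produces a rationale R from
    (text, image); Stage 2 appends R to the original text and infers the answer."""
    # Stage 1: rationale generation from the multimodal input.
    rationale = generate(text=language_input, vision=vision_input,
                         target="rationale")
    # Stage 2: answer inference on the language input extended with the rationale.
    answer = generate(text=language_input + " " + rationale,
                      vision=vision_input, target="answer")
    return rationale, answer
```

For illustration, `generate` could be any function mapping (text, vision, target) to a string; plugging in a stub suffices to see the data flow between the stages.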
The key motivation is the anticipation that the answer inference can leverage better generated rationales that are based on multimodal information. In this section, we will overview the procedure of the framework and elaborate on the technical design of the model architecture.

![5_image_1.png](5_image_1.png)

Figure 4: Overview of our Multimodal-CoT framework. Multimodal-CoT consists of two stages: (i) rationale generation and (ii) answer inference. Both stages share the same model structure but differ in the input and output. In the first stage, we feed the model with language and vision inputs to generate rationales. In the second stage, we append the original language input with the rationale generated from the first stage. Then, we feed the updated language input with the original vision input to the model to infer the answer.

## 4.1 Framework Overview

Multimodal-CoT consists of two operation stages: (i) rationale generation and (ii) answer inference. Both stages share the same model structure but differ in the input X and output Y. The overall procedure is illustrated in Figure 4. We will take vision-language as an example to show how Multimodal-CoT works. In the rationale generation stage, we feed the model with X = {X^1_language, X_vision}, where X^1_language represents the language input in the first stage and X_vision represents the vision input, i.e., the image. For example, X can be instantiated as a concatenation of the question, context, and options of a multiple-choice reasoning problem (Lu et al., 2022a), as shown in Figure 4. The goal is to learn a rationale generation model R = F(X), where R is the rationale. In the answer inference stage, the rationale R is appended to the original language input X^1_language to construct the language input in the second stage, X^2_language = X^1_language ◦ R, where ◦ denotes concatenation.

5The remaining mistakes are mainly about map understanding, requiring extra commonsense signals (Section 6.7).
Then, we feed the updated input X′ = {X^2_language, X_vision} to the answer inference model to infer the final answer A = F(X′). In both stages, we train two models with the same architecture independently. They take the annotated elements (e.g., X → R, XR → A, respectively) from the training set for supervised learning. During inference, given X, the rationales for the test sets are generated using the model trained in the first stage; they are used in the second stage for answer inference.

## 4.2 Model Architecture

Given language input X_language ∈ {X^1_language, X^2_language} and vision input X_vision, we compute the probability of generating target text Y (either the rationale or the answer in Figure 4) of length N by

$$p(Y|X_{\text{language}},X_{\text{vision}})=\prod_{i=1}^{N}p_{\theta}\left(Y_{i}\mid X_{\text{language}},X_{\text{vision}},Y_{<i}\right),\tag{1}$$

where p_θ(Y_i | X_language, X_vision, Y_{<i}) is implemented with a Transformer-based network (Vaswani et al., 2017). The network has three major procedures: encoding, interaction, and decoding. Specifically, we feed the language text into a Transformer encoder to obtain a textual representation, which is interacted and fused with the vision representation before being fed into the Transformer decoder.

Encoding The model F(X) takes both the language and vision inputs and obtains the text representation H_language and the image feature H_vision by the following functions:

$$H_{\mathrm{language}}=\mathrm{LanguageEncoder}(X_{\mathrm{language}}),\tag{2}$$
$$H_{\mathrm{vision}}=W_{h}\cdot\mathrm{VisionExtractor}(X_{\mathrm{vision}}),\tag{3}$$

where LanguageEncoder(·) is implemented as a Transformer model. We use the hidden states of the last layer in the Transformer encoder as the language representation H_language ∈ ℝ^{n×d}, where n denotes the length of the language input and d is the hidden dimension.
Meanwhile, VisionExtractor(·) is used to vectorize the input image into vision features. Inspired by the recent success of Vision Transformers (Dosovitskiy et al., 2021a), we fetch the patch-level features by frozen vision extraction models, such as ViT (Dosovitskiy et al., 2021b). After obtaining the patch-level vision features, we apply a learnable projection matrix W_h to convert the shape of VisionExtractor(X_vision) into that of H_language; thus we have H_vision ∈ ℝ^{m×d}, where m is the number of patches. Note that our approach is general to both scenarios with or without image context. For the questions without associated images, we use all-zero vectors as the "blank features" with the same shape as the normal image features to tell the model to ignore them.

Interaction After obtaining language and vision representations, we use a single-head attention network to correlate text tokens with image patches, where the query (Q), key (K) and value (V) are H_language, H_vision and H_vision, respectively. The attention output H^attn_vision ∈ ℝ^{n×d} is defined as:

$$H_{\mathrm{vision}}^{\mathrm{attn}}=\mathrm{Softmax}\left(\frac{QK^{\top}}{\sqrt{d_{k}}}\right)V,$$

where d_k is the same as the dimension of H_language because a single head is used. Then, we apply the gated fusion mechanism (Zhang et al., 2020; Wu et al., 2021; Li et al., 2022a) to fuse H_language and H^attn_vision. The fused output H_fuse ∈ ℝ^{n×d} is obtained by:

$$\lambda=\mathrm{Sigmoid}(W_{l}H_{\mathrm{language}}+W_{v}H_{\mathrm{vision}}^{\mathrm{attn}}),$$
$$H_{\mathrm{fuse}}=(1-\lambda)\cdot H_{\mathrm{language}}+\lambda\cdot H_{\mathrm{vision}}^{\mathrm{attn}},$$

where W_l and W_v are learnable parameters.

Decoding Finally, the fused output H_fuse is fed into the Transformer decoder to predict the target Y.

## 5 Experiments

This section will present the benchmark dataset, the implementation of our technique, and the baselines for comparisons. Then, we will report our main results and findings.
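As an illustration of the interaction and fusion steps in Section 4.2, the following NumPy sketch mirrors the single-head cross-attention and gated fusion equations. The weight matrices and dimensions here are hypothetical stand-ins for the learned parameters W_l and W_v, not the model's actual weights:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_vision_language(H_lang, H_vis, W_l, W_v):
    """Single-head cross-attention (text tokens as queries, image patches as
    keys/values) followed by gated fusion of the two representations."""
    d = H_lang.shape[-1]
    # H_vision^attn = Softmax(Q K^T / sqrt(d_k)) V, with Q = H_lang, K = V = H_vis
    attn = softmax(H_lang @ H_vis.T / np.sqrt(d)) @ H_vis            # (n, d)
    # lambda = Sigmoid(W_l H_language + W_v H_vision^attn), elementwise gate
    lam = 1.0 / (1.0 + np.exp(-(H_lang @ W_l + attn @ W_v)))         # (n, d)
    # H_fuse = (1 - lambda) * H_language + lambda * H_vision^attn
    return (1.0 - lam) * H_lang + lam * attn
```

Each fused token is thus an elementwise convex combination of its language representation and its vision-attended representation, with the gate deciding per dimension how much vision information to admit.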
## 5.1 Dataset

Our method is evaluated on the ScienceQA (Lu et al., 2022a) and A-OKVQA (Schwenk et al., 2022) benchmark datasets. We choose those datasets because they are the latest multimodal reasoning benchmarks with annotated reasoning chains. ScienceQA is a large-scale multimodal science question dataset with annotated lectures and explanations. It contains 21k multimodal multiple-choice questions with rich domain diversity across 3 subjects, 26 topics, 127 categories, and 379 skills. There are 12k, 4k, and 4k questions in the training, validation, and test splits, respectively. A-OKVQA is a knowledge-based visual question answering benchmark, which has 25k questions requiring a broad base of commonsense and world knowledge to answer. It has 17k/1k/6k questions for train/val/test. As A-OKVQA provides multiple-choice and direct-answer evaluation settings, we use the multiple-choice setting to keep consistency with ScienceQA.

## 5.2 Implementation

The following part presents the experimental settings of Multimodal-CoT and the baseline methods.

Experimental Settings We adopt the T5 encoder-decoder architecture (Raffel et al., 2020) under Base (200M) and Large (700M) settings in our framework. We apply FLAN-Alpaca to initialize our model weights.6 We will show that Multimodal-CoT is generally effective with other backbone LMs, such as UnifiedQA (Khashabi et al., 2020) and FLAN-T5 (Chung et al., 2022) (Section 6.3). The vision features are obtained by the frozen ViT-large encoder (Dosovitskiy et al., 2021b). We fine-tune the models for up to 20 epochs, with a learning rate of 5e-5. The maximum input sequence length is 512. The batch size is 8. Our experiments are run on 8 NVIDIA Tesla V100 32G GPUs. More details are presented in Appendix B.
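The experimental settings above can be collected into a single configuration sketch. The dictionary keys are illustrative, not the authors' actual configuration schema; the values are those reported in Section 5.2:

```python
# Hyperparameters reported in Section 5.2, gathered in one place.
# Key names are illustrative, not the authors' actual config keys.
TRAINING_CONFIG = {
    "backbone": "FLAN-Alpaca (T5 encoder-decoder)",  # Base 200M / Large 700M
    "vision_encoder": "ViT-large",                   # frozen; patch-level features
    "epochs": 20,
    "learning_rate": 5e-5,
    "max_input_length": 512,
    "batch_size": 8,
}
```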
## Baseline Models

We utilized three categories of methods as our baselines: (i) Visual question answering (VQA) models, including MCAN (Yu et al., 2019), Top-Down (Anderson et al., 2018), BAN (Kim et al., 2018), DFAF (Gao et al., 2019), ViLT (Kim et al., 2021), Patch-TRM (Lu et al., 2021), and VisualBERT (Li et al., 2019). These VQA baselines take the question, context, and choices as textual input, while utilizing the image as visual input. They employ a linear classifier to predict the score distribution over the choice candidates. (ii) LMs, including the text-to-text UnifiedQA model (Khashabi et al., 2020) and few-shot learning LLMs (GPT-3.5, ChatGPT, GPT-4, and Chameleon (Lu et al., 2023)). UnifiedQA (Khashabi et al., 2020) is adopted as it is the best fine-tuning model in Lu et al. (2022a). UnifiedQA takes the textual information as the input and outputs the answer choice. The image is converted into a caption extracted by an image captioning model following Lu et al. (2022a). UnifiedQA treats our task as a text generation problem. In Lu et al. (2022a), it is trained to generate a target answer text, i.e., one of the candidate options. Then, the most similar option is selected as the final prediction to evaluate the question answering accuracy. For GPT-3.5 models (Chen et al., 2020), we use the text-davinci-002 and text-davinci-003 engines due to their strong performance. In addition, we also include the comparison with ChatGPT and GPT-4. The inference is based on few-shot prompting, where two in-context examples from the training set are concatenated before the test instance. The few-shot demonstrations are the same as those in Lu et al. (2022a). (iii) Fine-tuned large vision-language models. We select the recently released LLaMA-Adapter (Zhang et al., 2023a), LLaVA (Liu et al., 2023), and InstructBLIP (Dai et al., 2023) as the competitive large vision-language

6https://github.com/declare-lab/flan-alpaca.

Table 4: Main results (%).
Size = backbone model size from the ScienceQA leaderboard ("-" means unavailable or unknown). Question classes: NAT = natural science, SOC = social science, LAN = language science, TXT = text context, IMG = image context, NO = no context, G1-6 = grades 1-6, G7-12 = grades 7-12. Segment 1: Human performance; Segment 2: VQA baselines; Segment 3: LM baselines, i.e., UnifiedQA and few-shot learning LLMs; Segment 4: Fine-tuned large vision-language models; Segment 5: Our Multimodal-CoT results. Prior published best results are marked with an underline. Our best average result is in **bold** face. † denotes concurrent studies, either through citation or comparison with Multimodal-CoT.

| Model | Size | NAT | SOC | LAN | TXT | IMG | NO | G1-6 | G7-12 | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| Human | - | 90.23 | 84.97 | 87.48 | 89.60 | 87.50 | 88.10 | 91.59 | 82.42 | 88.40 |
| MCAN (Yu et al., 2019) | 95M | 56.08 | 46.23 | 58.09 | 59.43 | 51.17 | 55.40 | 51.65 | 59.72 | 54.54 |
| Top-Down (Anderson et al., 2018) | 70M | 59.50 | 54.33 | 61.82 | 62.90 | 54.88 | 59.79 | 57.27 | 62.16 | 59.02 |
| BAN (Kim et al., 2018) | 112M | 60.88 | 46.57 | 66.64 | 62.61 | 52.60 | 65.51 | 56.83 | 63.94 | 59.37 |
| DFAF (Gao et al., 2019) | 74M | 64.03 | 48.82 | 63.55 | 65.88 | 54.49 | 64.11 | 57.12 | 67.17 | 60.72 |
| ViLT (Kim et al., 2021) | 113M | 60.48 | 63.89 | 60.27 | 63.20 | 61.38 | 57.00 | 60.72 | 61.90 | 61.14 |
| Patch-TRM (Lu et al., 2021) | 90M | 65.19 | 46.79 | 65.55 | 66.96 | 55.28 | 64.95 | 58.04 | 67.50 | 61.42 |
| VisualBERT (Li et al., 2019) | 111M | 59.33 | 69.18 | 61.18 | 62.71 | 62.17 | 58.54 | 62.96 | 59.92 | 61.87 |
| UnifiedQA (Lu et al., 2022a) | 223M | 71.00 | 76.04 | 78.91 | 66.42 | 66.53 | 81.81 | 77.06 | 68.82 | 74.11 |
| GPT-3.5 (text-davinci-002) (Lu et al., 2022a) | 173B | 75.44 | 70.87 | 78.09 | 74.68 | 67.43 | 79.93 | 78.23 | 69.68 | 75.17 |
| GPT-3.5 (text-davinci-003) | 173B | 77.71 | 68.73 | 80.18 | 75.12 | 67.92 | 81.81 | 80.58 | 69.08 | 76.47 |
| ChatGPT (Lu et al., 2023) | - | 78.82 | 70.98 | 83.18 | 77.37 | 67.92 | 86.13 | 80.72 | 74.03 | 78.31 |
| GPT-4 (Lu et al., 2023) | - | 85.48 | 72.44 | 90.27 | 82.65 | 71.49 | 92.89 | 86.66 | 79.04 | 83.99 |
| Chameleon (ChatGPT) (Lu et al., 2023)† | - | 81.62 | 70.64 | 84.00 | 79.77 | 70.80 | 86.62 | 81.86 | 76.53 | 79.93 |
| Chameleon (GPT-4) (Lu et al., 2023)† | - | 89.83 | 74.13 | 89.82 | 88.27 | 77.64 | 92.13 | 88.03 | 83.72 | 86.54 |
| LLaMA-Adapter (Zhang et al., 2023a)† | 6B | 84.37 | 88.30 | 84.36 | 83.72 | 80.32 | 86.90 | 85.83 | 84.05 | 85.19 |
| LLaVA (Liu et al., 2023)† | 13B | 90.36 | 95.95 | 88.00 | 89.49 | 88.00 | 90.66 | 90.93 | 90.90 | 90.92 |
| InstructBLIP (Dai et al., 2023)† | 11B | - | - | - | - | 90.70 | - | - | - | - |
| Multimodal-CoT (Base) | 223M | 84.06 | 92.35 | 82.18 | 82.75 | 82.75 | 84.74 | 85.79 | 84.44 | 85.31 |
| Multimodal-CoT (Large) | 738M | 91.03 | 93.70 | 86.64 | 90.13 | 88.25 | 89.48 | 91.12 | 89.26 | 90.45 |

baselines. For LLaMA-Adapter, the backbone model is the 7B LLaMA model fine-tuned with 52k self-instruct demonstrations. To adapt to our tasks, the model is further fine-tuned on the ScienceQA dataset.

## 5.3 Main Results

Table 4 shows the main results on the ScienceQA benchmark. We observe that Multimodal-CoT (Large) achieves substantial performance gains over the prior best published model (86.54%→90.45%). The efficacy of Multimodal-CoT is further supported by the results obtained on the A-OKVQA benchmark in Table 5. It is worth noting that Chameleon, LLaMA-Adapter, LLaVA, and InstructBLIP are concurrent works released several months after our work. In Section 6.2, we will show that our method is orthogonal to those multimodal models (e.g., InstructBLIP) and can potentially be used together with them to further improve generality, i.e., scaled to scenarios where human-annotated rationales are unavailable, thereby establishing effectiveness across diverse tasks.

Table 5: Results on the A-OKVQA benchmark. Baseline results are from (Chen et al., 2023) and Schwenk et al. (2022).

| Model | Accuracy |
|---|---|
| BERT | 32.93 |
| GPT-3 (Curie) | 35.07 |
| IPVR (OPT-66B) | 48.6 |
| ViLBERT | 49.1 |
| Language-only Baseline | 47.86 |
| Multimodal-CoT (Base) | 50.57 |

Ablation study results in Table 6 show that both the integration of vision features and the two-stage framework design contribute to the overall performance.
Furthermore, we find that Multimodal-CoT demonstrates the ability to mitigate hallucination (Section 3.3) and improve convergence (Section 6.1).

Table 6: Ablation results of Multimodal-CoT.

| Model | Base | Large |
|---|---|---|
| Multimodal-CoT | 85.31 | 90.45 |
| w/o Two-Stage Framework | 82.62 | 84.56 |
| w/o Vision Features | 78.57 | 83.97 |

## 6 Analysis

The following analysis first shows that Multimodal-CoT helps enhance convergence speed and can be adapted to scenarios without human-annotated rationales. Then, we investigate the general effectiveness of Multimodal-CoT with different backbone models and vision features. We also conduct an error analysis to explore the limitations and inspire future studies. We use models of the base size for analysis unless otherwise stated.

## 6.1 Multimodality Boosts Convergence

Figure 5 shows the validation accuracy curve of the baseline and Multimodal-CoT across different training epochs. "One-stage" is based on the QCM→A input-output format, as it achieves the best performance in Table 2, and "Two-stage" is our two-stage framework. We find that the two-stage methods achieve relatively higher accuracy at the beginning than the one-stage baselines that generate the answer directly without CoT. However, without the vision features, the two-stage baseline does not yield better results as training goes on, due to low-quality rationales (as observed in Section 3). In contrast, using vision features helps generate more effective rationales that contribute to better answer accuracy in our two-stage multimodal variant.

![9_image_0.png](9_image_0.png)

Figure 5: Accuracy curve of the No-CoT baseline and Multimodal-CoT variants.
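The one-stage QCM→A format and the two-stage QCM→R / QCMR→A framework compared above differ only in how the textual input and training target are assembled. A minimal sketch of that assembly (the helper name and field labels are illustrative, not taken from the released code):

```python
def build_input(question, context, options, rationale=None):
    """Build the textual model input for one stage of a two-stage CoT setup.

    Without a rationale this is the stage-1 (QCM -> R) input, whose training
    target is the rationale; with a rationale appended it becomes the stage-2
    (QCMR -> A) input, whose target is the answer. Labels are illustrative.
    """
    letters = "ABCDE"
    choices = " ".join(f"({letters[i]}) {o}" for i, o in enumerate(options))
    text = f"Question: {question}\nContext: {context}\nOptions: {choices}"
    if rationale is not None:
        text += f"\nSolution: {rationale}"
    return text


# Stage-1 input (target: rationale) vs. stage-2 input (target: answer)
qcm = build_input("Which solution is more concentrated?", "N/A",
                  ["Solution A", "Solution B"])
qcmr = build_input("Which solution is more concentrated?", "N/A",
                   ["Solution A", "Solution B"],
                   rationale="Solution B has more particles per milliliter.")
```

The one-stage QCM→A baseline corresponds to feeding `qcm` directly to an answer-generating model, skipping the rationale round-trip.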
## 6.2 When Multimodal-CoT Meets Large Models

A recent trend is to leverage large language models or large vision-language models to generate reasoning chains for multimodal question answering problems (Zhang et al., 2023a; Lu et al., 2023; Liu et al., 2023; Alayrac et al., 2022; Hao et al., 2022; Yasunaga et al., 2022). We are interested in whether we can use large models to generate the rationales for Multimodal-CoT, thus removing the need for datasets with human-annotated rationales. During the first-stage training of Multimodal-CoT, our target rationales are based on human annotations in the benchmark datasets. Now, we replace the target rationales with generated ones. As ScienceQA contains questions both with and without images, we leverage InstructBLIP to generate rationales for questions with paired images and ChatGPT for questions without paired images.7 Then, we combine both kinds of generated pseudo-rationales as the target rationales for training (Multimodal-CoT w/ Generation) instead of relying on the human annotation of reasoning chains (Multimodal-CoT w/ Annotation).

7 Examples are provided in Appendix C.1.

Table 7 shows the comparison results. We see that using the generated rationales achieves comparable performance to using human-annotated rationales for training. In addition, the performance is also much better than directly prompting those baseline models to obtain the answer (in the QCM→A inference format).

Table 7: Result comparison with large models. We also present the results of the InstructBLIP and ChatGPT baselines for reference. The inference format for the two baselines is QCM→A.

| Model | IMG | TXT | AVG |
|---|---|---|---|
| InstructBLIP | 60.50 | - | - |
| ChatGPT | 56.52 | 67.16 | 65.95 |
| Multimodal-CoT w/ Annotation | 88.25 | 90.13 | 90.45 |
| Multimodal-CoT w/ Generation | 83.54 | 85.73 | 87.76 |

We see that Multimodal-CoT can work effectively with large models.
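Concretely, the rationale-replacement setup described above amounts to a simple routing rule over the training set. A sketch of that routing (dictionary keys and the helper name are placeholders, not the released pipeline):

```python
def route_rationale_generators(examples):
    """Assign each training question a pseudo-rationale generator: a
    vision-language model (InstructBLIP) when the question has a paired
    image, a text-only LLM (ChatGPT) otherwise. Field names are
    illustrative; the actual generation calls happen downstream."""
    for ex in examples:
        ex["generator"] = "InstructBLIP" if ex.get("image") else "ChatGPT"
    return examples


dataset = [
    {"question": "q1", "image": "diagram.png"},  # has a paired image
    {"question": "q2", "image": None},           # text-only question
]
routed = route_rationale_generators(dataset)
```

The routed pseudo-rationales then replace the human-written ones as stage-1 training targets, leaving the rest of the two-stage framework unchanged.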
The findings above show the feasibility of adaptation to scenarios without human-annotated rationales, establishing the effectiveness of our approach across diverse tasks.

## 6.3 Effectiveness Across Backbones

To test whether the benefits of our approach generalize to other backbone models, we alter the underlying LMs to variants of different types. As shown in Table 8, our approach is generally effective for the widely used backbone models.

Table 8: Using different backbone LMs.

| Method | Accuracy |
|---|---|
| Prior Best (Lu et al., 2022a) | 75.17 |
| MM-CoT on UnifiedQA | 82.55 |
| MM-CoT on FLAN-T5 | 83.19 |
| MM-CoT on FLAN-Alpaca | 85.31 |

Table 9: Using different vision features.

| Feature | Feature Shape | Accuracy |
|---|---|---|
| ViT | (145, 1024) | 85.31 |
| CLIP | (49, 2048) | 84.27 |
| DETR | (100, 256) | 83.16 |
| ResNet | (512, 2048) | 82.86 |

## 6.4 Using Different Vision Features

Different vision features may affect model performance. We compare four widely used types of vision features: ViT (Dosovitskiy et al., 2021b), CLIP (Radford et al., 2021), DETR (Carion et al., 2020), and ResNet (He et al., 2016). ViT, CLIP, and DETR are patch-like features. For the ResNet features, we repeat the pooled features of ResNet-50 to the same length as the text sequence to imitate patch-like features, where each patch is identical to the pooled image features. More details of the vision features are presented in Appendix B.1. Table 9 shows the comparative results. We observe that ViT achieves relatively better performance. Therefore, we use ViT by default in Multimodal-CoT.
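The ResNet trick above can be sketched in a few lines: the pooled global feature is tiled along a new patch axis so it matches the (num_patches, dim) layout of the patch-like features in Table 9. This is a sketch under our reading of the description (the helper name is ours; shapes follow the table):

```python
import numpy as np


def imitate_patch_features(pooled: np.ndarray, num_patches: int) -> np.ndarray:
    """Tile a pooled CNN feature vector of shape (dim,) into a
    (num_patches, dim) array, so every "patch" equals the global
    pooled image feature and downstream fusion code can treat it
    like ViT/CLIP/DETR patch features."""
    return np.tile(pooled[np.newaxis, :], (num_patches, 1))


pooled = np.ones(2048, dtype=np.float32)       # e.g., a ResNet-50 pooled output
patches = imitate_patch_features(pooled, 512)  # (512, 2048), as in Table 9
```

Because every row is identical, this representation carries no spatial information, which is consistent with ResNet being the weakest feature in Table 9.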
## 6.5 Alignment Strategies for Multimodal Interaction

We are interested in whether different alignment strategies for multimodal interaction lead to different behaviors of Multimodal-CoT. To this end, we tried another alignment strategy, the image-grounded text encoder from BLIP (Li et al., 2022b). This approach injects visual information by inserting one additional cross-attention layer between the self-attention layer and the feed-forward network in each transformer block of the text encoder. Our current strategy in the paper is similar to the unimodal encoder in BLIP, which is used for comparison. In Table 10, we see that other alignment strategies also contribute to better performance than direct answering.

Table 10: Result comparison with different alignment strategies for multimodal interaction.

| Model | Accuracy |
|---|---|
| Direct Answering | 82.62 |
| Unimodal encoder | 85.31 |
| Image-grounded text encoder | 84.60 |

## 6.6 Generalization to Other Multimodal Reasoning Benchmarks

We are interested in evaluating the generalization capability of Multimodal-CoT to datasets outside its training domain. For this purpose, we utilize the widely recognized multimodal reasoning benchmark MMMU (Yue et al., 2024) and evaluate Multimodal-CoT on MMMU without further training.

Table 11: Generalization performance on MMMU.

| Model | Size | Accuracy |
|---|---|---|
| Kosmos-2 (Peng et al., 2024) | 1.6B | 24.4 |
| Fuyu (Bavishi et al., 2024) | 8B | 27.9 |
| OpenFlamingo-2 (Awadalla et al., 2023) | 9B | 28.7 |
| MiniGPT4-Vicuna (Zhu et al., 2023) | 13B | 26.8 |
| Multimodal-CoT | 738M | 28.7 |
| GPT-4V(ision) (OpenAI, 2023) | - | 56.8 |
| Gemini Ultra (Reid et al., 2024) | - | 59.4 |
As shown in Table 11, Multimodal-CoT generalizes effectively to MMMU, achieving better performance than various larger models of around 8B parameters.

## 6.7 Error Analysis

To gain deeper insights into the behavior of Multimodal-CoT and facilitate future research, we manually analyzed randomly selected examples generated by our approach. The categorization results are illustrated in Figure 6. We examined 50 samples that yielded incorrect answers and categorized them accordingly. Examples from each category can be found in Appendix D.

The most prevalent error type is commonsense mistakes, accounting for 80% of the errors. These mistakes occur when the model faces questions that require commonsense knowledge, such as interpreting maps, counting objects in images, or utilizing the alphabet. The second error type is logical mistakes, constituting 14% of the errors, which involve contradictions in the reasoning process. Additionally, we observed cases where incorrect answers are provided despite the CoT being either empty or correct, amounting to 6% of the errors; the CoT in these cases does not necessarily influence the final answer.

![11_image_0.png](11_image_0.png)

Figure 6: Categorization analysis.

The analysis reveals potential avenues for future research. Enhancements can be made to Multimodal-CoT by: (i) integrating more informative visual features and strengthening the interaction between language and vision to enable comprehension of maps and numerical counting; (ii) incorporating commonsense knowledge; and (iii) implementing a filtering mechanism, such as using only relevant CoTs to infer answers and disregarding irrelevant ones.

## 7 Conclusion

This paper formally studies the problem of multimodal CoT.
We propose Multimodal-CoT that incorporates language and vision modalities into a two-stage framework that separates rationale generation and answer inference, so answer inference can leverage better generated rationales from multimodal information. With Multimodal-CoT, our model under 1 billion parameters achieves state-of-the-art performance on the ScienceQA benchmark. Analysis shows that Multimodal-CoT has the merits of mitigating hallucination and enhancing convergence speed. Our error analysis identifies the potential to leverage more effective vision features, inject commonsense knowledge, and apply filtering mechanisms to improve CoT reasoning in future studies. ## References Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 6077–6086. IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00636. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. Openflamingo: An open-source framework for training large autoregressive vision-language models. *arXiv preprint arXiv:2308.01390*, 2023. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sağnak Taşırlar. Fuyu-8b: A multimodal architecture for ai agents, 2024. Tom B. 
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on* Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I, pp. 213–229, 2020. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E. Hinton. Big self-supervised models are strong semi-supervised learners. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33:* Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. *ArXiv preprint*, abs/2211.12588, 2022. Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, and Chuang Gan. See, think, confirm: Interactive prompting between vision and language models for knowledge-based visual reasoning. *ArXiv* preprint, abs/2301.05226, 2023. 
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways. *ArXiv preprint*, abs/2204.02311, 2022. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. ArXiv preprint, abs/2210.11416, 2022. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning, 2023. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In *9th International Conference* on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021a. 
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021b. Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting for multi-step reasoning. *ArXiv preprint*, abs/2210.00720, 2022. Peng Gao, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven C. H. Hoi, Xiaogang Wang, and Hongsheng Li. Dynamic fusion with intra- and inter-modality attention flow for visual question answering. In *IEEE* Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 6639–6648. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00680. Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, and Furu Wei. Language models are general-purpose interfaces. *ArXiv preprint*, abs/2206.06336, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770–778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. Namgyu Ho, Laura Schmid, and Se-Young Yun. Large language models are reasoning teachers. *ArXiv* preprint, abs/2212.10071, 2022. Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: A survey. *ArXiv* preprint, abs/2212.10403, 2022. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. *ACM Computing Surveys*, 2022. 
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1896–1907, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.171. Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. *ArXiv preprint*, abs/2210.02406, 2022. Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 1571–1581, 2018. Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International* Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 5583–5594. PMLR, 2021. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. *ArXiv preprint*, abs/2205.11916, 2022. Bei Li, Chuanhao Lv, Zefan Zhou, Tao Zhou, Tong Xiao, Anxiang Ma, and JingBo Zhu. On vision features in multimodal machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 6327–6337, Dublin, Ireland, 2022a. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.438. Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. 
BLIP: bootstrapping language-image pretraining for unified vision-language understanding and generation. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of *Proceedings of Machine* Learning Research, pp. 12888–12900. PMLR, 2022b. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. *ArXiv preprint*, abs/1908.03557, 2019. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the advance of making language models better reasoners. *ArXiv preprint*, abs/2206.02336, 2022c. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. *ArXiv preprint*, abs/2304.08485, 2023. Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. Iconqa: A new benchmark for abstract diagram understanding and visual language reasoning. In The 35th Conference on Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. *Advances in Neural Information Processing Systems*, 35:2507–2521, 2022a. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. ArXiv preprint, abs/2209.14610, 2022b. Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. A survey of deep learning for mathematical reasoning. *ArXiv preprint*, abs/2212.10535, 2022c. Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 
Chameleon: Plug-and-play compositional reasoning with large language models. In The Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023), 2023. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. Teaching small language models to reason. *ArXiv preprint*, abs/2212.08410, 2022. OpenAI. Gpt-4v(ision) system card, 2023. Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Qixiang Ye, and Furu Wei. Grounding multimodal large language models to the world. In *The Twelfth International Conference on* Learning Representations, 2024. URL https://openreview.net/forum?id=lLmqxkfSIw. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang (eds.), *Proceedings* of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of *Proceedings of Machine Learning Research*, pp. 8748–8763. PMLR, 2021. Jack W. 
Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John Mellor, Irina Higgins, Antonia Creswell, Nat McAleese, Amy Wu, Erich Elsen, Siddhant Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, Laurent Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, Nikolai Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Toby Pohlen, Zhitao Gong, Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew Johnson, Blake Hechtman, Laura Weidinger, Iason Gabriel, William Isaac, Ed Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem Ayoub, Jeff Stanway, Lorrayne Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. Scaling language models: Methods, analysis & insights from training gopher. *ArXiv preprint*, abs/2112.11446, 2021. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020. Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. *arXiv preprint arXiv:2403.05530*, 2024. 
Ohad Rubin, Jonathan Herzig, and Jonathan Berant. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 2655–2671, Seattle, United States, 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.naacl-main.191. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-okvqa: A benchmark for visual question answering using world knowledge. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VIII, pp. 146–162. Springer, 2022. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpaca: A strong, replicable instruction-following model. *Stanford Center for* Research on Foundation Models. https://crfm. stanford. edu/2023/03/13/alpaca. html, 2023. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee, Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun, Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts, Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch, Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen MeierHellstern, Meredith Ringel Morris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen, Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina, Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm, Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. Lamda: Language models for dialog applications. *ArXiv preprint*, abs/2201.08239, 2022. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998–6008, 2017. Boshi Wang, Xiang Deng, and Huan Sun. Iteratively prompt pre-trained language models for chain of thought. In *Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing*, pp. 2714–2730, Abu Dhabi, United Arab Emirates, 2022a. Association for Computational Linguistics. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. *ArXiv preprint*, abs/2203.11171, 2022b. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Rationale-augmented ensembles in language models. *ArXiv preprint*, abs/2207.00747, 2022c. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022a. Survey Certification. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. *ArXiv preprint*, abs/2201.11903, 2022b. Zhiyong Wu, Lingpeng Kong, Wei Bi, Xiang Li, and Ben Kao. Good for misconceived reasons: An empirical revisiting on the need for visual context in multimodal machine translation. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 6153–6166, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.480. Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. Retrieval-augmented multimodal language modeling. Proceedings of the 40th International Conference on Machine Learning, PMLR, pp. 39755–39769, 2022. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In *IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long* Beach, CA, USA, June 16-20, 2019, pp. 6281–6290. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00644. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of CVPR, 2024. Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. *ArXiv preprint*, abs/2303.16199, 2023a. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. Siren's song in the ai ocean: A survey on hallucination in large language models. *arXiv* preprint arXiv:2309.01219, 2023b. Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao. Neural machine translation with universal visual representation. 
In *8th International Conference on Learning* Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, and Hai Zhao. Universal multimodal representation for language understanding. *IEEE Transactions on Pattern Analysis* and Machine Intelligence, pp. 1–18, 2023c. doi: 10.1109/TPAMI.2023.3234170. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. In *The Eleventh International Conference on Learning Representations*, 2023d. Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, and Baobao Chang. Mmicl: Empowering vision-language model with multi-modal in-context learning. *arXiv preprint arXiv:2309.07915*, 2023. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex reasoning in large language models. *ArXiv preprint*, abs/2205.10625, 2022. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing visionlanguage understanding with advanced large language models. In *The Twelfth International Conference on* Learning Representations, 2023. ## A Extended Analysis For The Challenge Of Multimodal-Cot A.1 Additional Examples Of Misleading Through Hallucinated Rationales Based on our case studies (Section 3.2), we have observed a tendency for the baseline model to generate hallucinated rationales. Here, we present additional examples to illustrate this phenomenon, as depicted in Figure 7. ![18_image_0.png](18_image_0.png) Question: Which solution has a higher concentration of yellow particles? Context: The diagram below is a model of two solutions. Each yellow ball represents one particle of solute. 
Options: (A) Solution B (B) Solution A

Gold **Rationale:** In Solution A and Solution B, the yellow particles represent the solute. To figure out which solution has a higher concentration of yellow particles, look at both the number of yellow particles and the volume of the solvent in each container. Use the concentration formula to find the number of yellow particles per milliliter. Solution B has more yellow particles per milliliter. So, Solution B has a higher concentration of yellow particles. Answer: The answer is (A).

Baseline Rationale: In Solution A and Solution B, the yellow particles represent the solute. To figure out which solution has a higher concentration of yellow particles, look at both the number of yellow particles and the volume of the solvent in each container. Use the concentration formula to find the number of yellow particles per milliliter. Solution A has more yellow particles per milliliter. So, Solution A has a higher concentration of yellow particles. Answer: The answer is (B).

+ Vision Features Rationale: In Solution A and Solution B, the yellow particles represent the solute. To figure out which solution has a higher concentration of yellow particles, look at both the number of yellow particles and the volume of the solvent in each container. Use the concentration formula to find the number of yellow particles per milliliter. Solution B has more yellow particles per milliliter. So, Solution B has a higher concentration of yellow particles. Answer: The answer is (A).

Question: Think about the magnetic force between the magnets in each pair.
Which of the following statements is true?

Context: The images below show two pairs of magnets. The magnets in different pairs do not affect each other. All the magnets shown are made of the same material.

Options: (A) The magnitude of the magnetic force is smaller in Pair 2. (B) The magnitude of the magnetic force is smaller in Pair 1. (C) The magnitude of the magnetic force is the same in both pairs.

Gold **Rationale:** The magnets in Pair 1 attract. The magnets in Pair 2 repel. But whether the magnets attract or repel affects only the direction of the magnetic force. It does not affect the magnitude of the magnetic force. Both magnet sizes and distance affect the magnitude of the magnetic force. The sizes of the magnets in Pair 1 are the same as in Pair 2. The distance between the magnets is also the same. So, the magnitude of the magnetic force is the same in both pairs. Answer: The answer is (C).

Baseline Rationale: Distance affects the magnitude of the magnetic force. When there is a greater distance between magnets, the magnitude of the magnetic force between them is smaller. There is a greater distance between the magnets in Pair 2 than in Pair 1. So, the magnitude of the magnetic force is smaller in Pair 2 than in Pair 1. Answer: The answer is (B).

+ Vision Features Rationale: The magnets in Pair 1 attract. The magnets in Pair 2 repel. But whether the magnets attract or repel affects only the direction of the magnetic force. It does not affect the magnitude of the magnetic force. Both magnet sizes and distance affect the magnitude of the magnetic force. The sizes of the magnets in Pair 1 are the same as in Pair 2. The distance between the magnets is also the same. So, the magnitude of the magnetic force is the same in both pairs. Answer: The answer is (C).
Figure 7: Examples of the two-stage framework without vision features (baseline) and with vision features (ours) for generating rationales and predicting answers. The upper part presents the problem details, and the lower part shows the outputs of the baseline and our method.

## A.2 Two-Stage Training Performance With Different Sizes Of LMs

In Section 3, we observed that the inclusion of vision features has a positive impact on the generation of more effective rationales, consequently resulting in improved answer accuracy. In addition to incorporating vision features, another approach to addressing the issue of incorrect rationales is to scale the size of the language model (LM). Figure 8 showcases the answer accuracy achieved by our two-stage training framework, both with and without the integration of vision features. Notably, when employing a larger LM, the baseline accuracy (without vision features) experiences a significant enhancement. This finding suggests that scaling the LM size could potentially alleviate the problem of incorrect rationales. However, it is crucial to acknowledge that the performance still falls considerably short of utilizing vision features. This outcome further validates the effectiveness of our Multimodal-CoT methodology across varying LM sizes.

![19_image_0.png](19_image_0.png)

Figure 8: Answer accuracy with different sizes of LMs.

## A.3 Discussion Of The Possible Paradigms To Achieve Multimodal-CoT

As discussed in Section 1, there are two primary approaches to facilitating Multimodal-CoT reasoning: (i) prompting LLMs and (ii) fine-tuning small models. A common way to implement the first approach is to unify the input from different modalities and prompt LLMs to perform reasoning (Zhang et al., 2023a; Lu et al., 2023; Liu et al., 2023; Alayrac et al., 2022; Hao et al., 2022; Yasunaga et al., 2022).
For instance, one way to achieve this is by extracting the caption of an image using a captioning model and then concatenating the caption with the original language input to feed LLMs. By doing so, visual information is conveyed to LLMs as text, effectively bridging the gap between modalities. This approach can be represented as the input-output format <image → caption, question + caption → answer>. We refer to this approach as **Caption-based Reasoning** (Figure 9a). It is worth noting that the effectiveness of this approach depends on the quality of the image caption, which may be susceptible to errors introduced during the transfer from image captioning to answer inference.

In contrast, an intriguing aspect of CoT is the ability to decompose complex problems into a series of simpler problems and solve them step by step. This transformation leads to a modification of the standard format <question → answer> into <question → rationale → answer>. Rationales, being more likely to reflect the reasoning processes leading to the answer, play a crucial role in this paradigm. Consequently, we refer to approaches following this paradigm as **CoT-based Reasoning**. This nomenclature has been widely adopted in the literature (Huang & Chang, 2022; Zhang et al., 2023d; Lu et al., 2022c).

Our work aligns with the paradigm of **CoT-based Reasoning** in the context of multimodal scenarios, specifically employing the <question + image → rationale → answer> framework (Figure 9b). This approach confers advantages on two fronts. Firstly, the Multimodal-CoT framework leverages feature-level interactions between vision and language inputs, enabling the model to gain a deeper understanding of the input information and facilitating more effective inference of answers by incorporating well-founded rationales.
Our analysis has demonstrated that Multimodal-CoT offers notable benefits by mitigating hallucination and enhancing convergence, resulting in superior performance on our benchmark datasets. Secondly, the lightweight nature of Multimodal-CoT renders it compatible with resource constraints and circumvents any potential paywalls.

![20_image_0.png](20_image_0.png)

Figure 9: Paradigms to achieve Multimodal-CoT.

## B Experimental Details

## B.1 Details Of Vision Features

In Section 6.2, we compared four types of vision features: ViT (Dosovitskiy et al., 2021b), CLIP (Radford et al., 2021), DETR (Carion et al., 2020), and ResNet (He et al., 2016). The specific models are: (i) ViT: *vit_large_patch32_384*;8 (ii) CLIP: *RN101*;9 (iii) DETR: *detr_resnet101_dc5*;10 (iv) ResNet: we use the averaged pooled features of a pre-trained ResNet-50 CNN. Table 12 presents the dimension of the vision features (after the function VisionExtractor(·) in Eq. 3). For ResNet-50, we repeat the pooled features to the same length as the text sequence to imitate patch-like features, where each patch is the same as the pooled image features.

Table 12: Feature shape of vision features

| Method | Feature Shape |
|--------|---------------|
| ViT    | (145, 1024)   |
| CLIP   | (49, 2048)    |
| DETR   | (100, 256)    |
| ResNet | (512, 2048)   |

## B.2 Datasets

Our method is evaluated on the ScienceQA (Lu et al., 2022a) and A-OKVQA (Schwenk et al., 2022) benchmark datasets.

- ScienceQA is a large-scale multimodal science question dataset with annotated lectures and explanations. It contains 21k multimodal multiple choice questions with rich domain diversity across 3 subjects, 26 topics, 127 categories, and 379 skills. The dataset is split into training, validation, and test splits with 12k, 4k, and 4k questions, respectively.
- A-OKVQA is a knowledge-based visual question answering benchmark, which has 25k questions requiring a broad base of commonsense and world knowledge to answer. Each question is annotated with rationales that explain why a particular answer was correct according to necessary facts or knowledge. It has 17k/1k/6k questions for train/val/test.

8https://github.com/rwightman/pytorch-image-models. 9https://github.com/jianjieluo/OpenAI-CLIP-Feature. 10https://github.com/facebookresearch/detr.

For ScienceQA, our model is evaluated on the test set. For A-OKVQA, our model is evaluated on the validation set as the test set is hidden.

## B.3 Implementation Details Of Multimodal-CoT

As the Multimodal-CoT task requires generating the reasoning chains and leveraging the vision features, we adopt the T5 encoder-decoder architecture (Raffel et al., 2020) under Base (200M) and Large (700M) settings in our framework. We apply FLAN-Alpaca to initialize our model weights.11 We will show that Multimodal-CoT is generally effective with other backbone LMs, such as UnifiedQA (Khashabi et al., 2020) and FLAN-T5 (Chung et al., 2022) (Section 6.1). The vision features are obtained by the frozen ViT-large encoder (Dosovitskiy et al., 2021b). Since using image captions can slightly improve model performance, as shown in Section 3.3, we append the image captions to the context following Lu et al. (2022a). The captions are generated by InstructBLIP (Dai et al., 2023). We fine-tune the models for up to 20 epochs, with a learning rate selected in {5e-5, 8e-5}. The maximum input sequence lengths for rationale generation and answer inference are 512 and 64, respectively. The batch size is 8. Our experiments are run on 8 NVIDIA Tesla V100 32G GPUs.
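The two-stage setup described in B.3 reduces to a simple data flow: a first model maps <question + image → rationale>, and a second model maps <question + rationale + image → answer>. The sketch below mirrors only this control flow; `generate_rationale` and `infer_answer` are hypothetical placeholders standing in for the two fine-tuned T5 models with fused ViT features, not the actual implementation.

```python
# Minimal sketch of the two-stage Multimodal-CoT data flow.
# Both model calls are mocked placeholders (hypothetical names), standing in
# for the two fine-tuned T5 models that attend over text and ViT patch features.

def generate_rationale(question: str, image_features) -> str:
    # Stage 1: <question + image -> rationale>.
    return f"Rationale for: {question}"

def infer_answer(question: str, rationale: str, image_features) -> str:
    # Stage 2: <question + rationale + image -> answer>.
    # The generated rationale is appended to the original language input.
    prompt = question + " " + rationale
    return "(A)" if "higher" in prompt else "(B)"

def multimodal_cot(question: str, image_features):
    rationale = generate_rationale(question, image_features)
    answer = infer_answer(question, rationale, image_features)
    return rationale, answer

rationale, answer = multimodal_cot(
    "Which solution has a higher concentration of yellow particles?",
    image_features=None,
)
print(answer)  # -> (A)
```

Note that the two stages share the input format but are trained as separate models; only the second stage sees the rationale, which is why a hallucinated stage-1 rationale can mislead the final answer.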
## C Further Analysis

## C.1 Examples Of Rationale Generation With Large Models

A recent trend is to leverage large language models or large vision-language models to generate reasoning chains for multimodal question answering problems (Zhang et al., 2023a; Lu et al., 2023; Liu et al., 2023; Alayrac et al., 2022; Hao et al., 2022; Yasunaga et al., 2022). We are interested in whether we can use large models to generate the rationales for Multimodal-CoT, thus removing the need for datasets with human-annotated rationales. During the first-stage training of Multimodal-CoT, our target rationales are based on human annotation in the benchmark datasets. Now, we replace the target rationales with those generated by an LLM or a vision-language model. Concretely, we feed the questions with images (IMG) and the questions without images (TXT) to InstructBLIP (Dai et al., 2023) (Figure 10a) and ChatGPT (Figure 10b) for zero-shot inference, respectively. Then, we use the generated pseudo-rationales as the target rationales for training instead of relying on the human annotation of reasoning chains.

![21_image_0.png](21_image_0.png)

(a) Rationale generated by InstructBLIP

![21_image_1.png](21_image_1.png)

(b) Rationale generated by ChatGPT

Figure 10: Rationale generation examples.

11https://github.com/declare-lab/flan-alpaca.

## C.2 Detailed Results Of Multimodal-CoT On Different Backbone Models

To test the generality of the benefits of our approach to other backbone models, we alter the underlying LMs to other variants of different types. As the detailed results in Table 13 show, our approach is generally effective for the widely used backbone models.

Table 13: Detailed results of Multimodal-CoT on different backbone models.
| Model                 | NAT   | SOC   | LAN   | TXT   | IMG   | NO    | G1-6  | G7-12 | Avg   |
|-----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| MM-CoT on UnifiedQA   | 80.60 | 89.43 | 81.00 | 80.50 | 80.61 | 81.74 | 82.38 | 82.86 | 82.55 |
| MM-CoT on FLAN-T5     | 81.39 | 90.89 | 80.64 | 80.79 | 80.47 | 82.58 | 83.48 | 82.66 | 83.19 |
| MM-CoT on FLAN-Alpaca | 84.06 | 92.35 | 82.18 | 82.75 | 82.75 | 84.74 | 85.79 | 84.44 | 85.31 |

## D Examples Of Case Studies

To gain deeper insights into the behavior of Multimodal-CoT and facilitate future research, we manually analyzed randomly selected examples generated by our approach. The categorization results are illustrated in Figure 11. We examined 50 samples that yielded incorrect answers and categorized them accordingly.

![22_image_0.png](22_image_0.png)

Figure 11: Categorization analysis.

The most prevalent error type is commonsense mistakes, accounting for 80% of the errors. These mistakes occur when the model is faced with questions that require commonsense knowledge, such as interpreting maps (Figure 12a), counting objects in images (Figure 12b), or utilizing the alphabet (Figure 12c). The second error type is logical mistakes, constituting 14% of the errors, which involve comparison mistakes (Figure 13a) and contradictions in the reasoning process (Figure 13b). Additionally, we have observed cases where incorrect answers are provided despite the CoT being either empty or correct, amounting to 6% of the errors. The CoT in these cases may not necessarily influence the final answer (Figure 14). The analysis reveals potential avenues for future research.
Enhancements can be made to Multimodal-CoT by: (i) integrating more informative visual features and strengthening the interaction between language and vision to enable comprehension of maps and numerical counting; (ii) incorporating commonsense knowledge; and (iii) implementing a filtering mechanism, such as using only relevant CoTs to infer answers and disregarding irrelevant ones.

Problem

Question: Which of these cities is marked on the map? Context: N/A Options: (A) Las Vegas (B) San Francisco (C) Phoenix (D) Salt Lake City

Rationale: The city is Las Vegas, Nevada. Salt Lake City, Phoenix, and San Francisco are marked with gray circles on the map below. Answer: The answer is (A).

![23_image_0.png](23_image_0.png)

Prediction

Rationale: The city is San Francisco, California. Salt Lake City, Las Vegas, and Phoenix are marked with gray circles on the map below. Answer: The answer is (B).

(a) Mistakes about maps.

Problem

Question: Complete the text to describe the diagram. Solute particles moved in both directions across the permeable membrane. But more solute particles moved across the membrane (). When there was an equal concentration on both sides, the particles reached equilibrium. Context: The diagram below shows a solution with one solute. Each solute particle is represented by a green ball. The solution fills a closed container that is divided in half by a membrane. The membrane, represented by a dotted line, is permeable to the solute particles. The diagram shows how the solution can change over time during the process of diffusion. Options: (A) to the left than to the right (B) to the right than to the left

Rationale: (*extract*) It shows you how the solution changed during the process of diffusion. Before the solute particles reached equilibrium, there were 3 solute particles on the left side of the membrane and 5 solute particles on the right side of the membrane.
When the solute particles reached equilibrium, there were 4 solute particles on each side of the membrane. There was 1 more solute particle on the left side of the membrane than before. So, for the solute particles to reach equilibrium, more solute particles must have moved across the membrane to the left than to the right. Answer: The answer is (A).

![23_image_2.png](23_image_2.png)

Prediction

Rationale: (*extract*) It shows you how the solution changed during the process of diffusion. Before the solute particles reached equilibrium, there were 5 solute particles on the left side of the membrane and 4 solute particles on the right side of the membrane. When the solute particles reached equilibrium, there were 5 solute particles on each side of the membrane. There was 1 more solute particle on the right side of the membrane than before. So, for the solute particles to reach equilibrium, more solute particles must have moved across the membrane to the right than to the right. Answer: The answer is (B).

(b) Mistakes about counting numbers in the image.

Problem

Question: Which word would you find on a dictionary page with the following guide words? helping - hunter Context: N/A Vision: N/A Options: (A) hospital (B) hardship

Rationale: Put the words in alphabetical order. Since hospital is between the guide words helping - hunter, it would be found on that page. Answer: The answer is (A).

Prediction

Rationale: Put the words in alphabetical order. Since hardship is between the guide words helping - hunter, it would be found on that page. Answer: The answer is (B).

(c) Mistakes about utilizing the alphabet.
Figure 12: Examples of commonsense mistakes.

Problem

Question: Compare the motion of two cars. Which car was moving at a lower speed? Context: N/A Vision: N/A Options: (A) a car that moved 445 miles in 10 hours (B) a car that moved 170 miles in 10 hours

Rationale: Look at the distance each car moved and the time it took to move that distance. One car moved 170 miles in 10 hours. The other car moved 445 miles in 10 hours. Notice that each car spent the same amount of time moving. The car that moved 170 miles moved a shorter distance in that time. So, that car must have moved at a lower speed. Answer: The answer is (B).

Prediction

Rationale: Look at the distance each car moved and the time it took to move that distance. One car moved 445 miles in 10 hours. The other car moved 170 miles in 10 hours. Notice that each car spent the same amount of time moving. The car that moved 445 miles moved a shorter distance in that time. So, that car must have moved at a lower speed. Answer: The answer is (A).

(a) Logical mistake where the model fails at comparisons.

Problem

Question: Based on this information, what is Sage's phenotype for the coat color trait? Context: In a group of horses, some individuals have a black coat and others have a reddish-brown coat. In this group, the gene for the coat color trait has two alleles. The allele for a reddish-brown coat (l) is recessive to the allele for a black coat (L). Sage is a horse from this group. Sage has the homozygous genotype ll for the coat color gene. Vision: N/A Options: (A) a black coat (B) a reddish-brown coat

Rationale: Sage's genotype for the coat color gene is ll. Sage's genotype of ll has only l alleles. The l allele is for a reddish-brown coat. So, Sage's phenotype for the coat color trait must be a reddish-brown coat. To check this answer, consider whether Sage's alleles are dominant or recessive. The allele for a reddish-brown coat (l) is recessive to the allele for a black coat (L).
This means L is a dominant allele, and l is a recessive allele. Sage's genotype of ll has only recessive alleles. An organism with only recessive alleles for a gene will have the recessive allele's version of the trait. So, Sage's phenotype for the coat color trait must be a reddish-brown coat. Answer: The answer is (B).

Prediction

Rationale: Sage's genotype for the coat color gene is ll. Sage's genotype of ll has only l alleles. The l allele is for a reddish-brown coat. So, Sage's phenotype for the coat color trait must be a reddish-brown coat. To check this answer, consider whether Sage's alleles are dominant or recessive. The allele for a reddish-brown coat (l) is recessive to the allele for a black coat (L). This means L is a dominant allele, and l is a recessive allele. Sage's genotype of ll has only recessive alleles. An organism with only recessive alleles for a gene will have the recessive allele's version of the trait. So, Sage's phenotype for the coat color trait must be a black coat. Answer: The answer is (A).

(b) Logical mistake where there is a contradiction in the rationale.

Figure 13: Examples of logical mistakes.

Problem

Question: Which trait did Palaeopython have? Select the trait you can observe on the fossil. Context: This picture shows a fossil of an ancient animal called Palaeopython. Palaeopython lived in trees and could grow more than six feet long.

![24_image_0.png](24_image_0.png)

Options: (A) black stripes on its skin (B) large fins on its body (C) a long, thin body

Rationale: The way an organism looks or acts is called a trait. Scientists use fossils to learn more about the traits of ancient organisms. Fossils can preserve the remains of body parts and activities. A fossil of a body part, such as a tail or a wing, can tell you what an organism looked like. A fossil of an organism's activities, such as a burrow or a footprint, can tell you about the organism's behavior.
Here are three examples of fossils and the traits that you can observe from them: This is a fossil of an animal. This fossil tells you that the animal had a spiral-shaped shell. This is a fossil of a plant. This fossil tells you that the plant had small leaves arranged in a branched pattern. This is a fossil of an animal's footprint. This fossil tells you that the animal could walk on land. An organism's fossil may not show all of the organism's traits. This is because most body parts are destroyed during fossil formation. When an organism's body turns into a fossil, only a few body parts are usually preserved. Answer: The answer is (C).

Prediction

Rationale: The way an organism looks or acts is called a trait. Scientists use fossils to learn more about the traits of ancient organisms. Fossils can preserve the remains of body parts and activities. A fossil of a body part, such as a tail or a wing, can tell you what an organism looked like. A fossil of an organism's activities, such as a burrow or a footprint, can tell you about the organism's behavior. Here are three examples of fossils and the traits that you can observe from them: This is a fossil of an animal. This fossil tells you that the animal had a spiral-shaped shell. This is a fossil of a plant. This fossil tells you that the plant had small leaves arranged in a branched pattern. This is a fossil of an animal's footprint. This fossil tells you that the animal could walk on land. An organism's fossil may not show all of the organism's traits. This is because most body parts are destroyed during fossil formation. When an organism's body turns into a fossil, only a few body parts are usually preserved. Answer: The answer is (B).

Figure 14: Examples where the answer is incorrect while the CoT is correct.
Review 1:

Summary: This paper presents Multimodal-CoT, which includes training a small-size vision-language model, and a two-stage prompting pipeline that separates rationale and answer generation. They first motivate their two-stage prompting design with a text-only model, and show that without vision features, CoT performs worse than direct prompting. Then they show that two-stage prompting improves the performance with vision features. Next, they demonstrate their fine-tuned vision-language models based on FLAN-Alpaca, and the evaluation on 2 benchmarks shows that their models with <1B parameters perform on par with LLaVA, which is a much larger model.

Strengths and Weaknesses:

Strengths: The evaluation results are decent considering the small model size.

Weaknesses:

1. The fundamental limitation of this work is its novelty and significance. Despite the approach being called multimodal CoT, it is not a new prompting technique. Instead, the major part of this work is on how to fuse vision features and small-scale pretrained language models and then fine-tune the resulting vision-language model to perform multimodal reasoning. However, this submission misses the recent rapid progress of large multimodal models. For example, Gemini and GPT-4V have demonstrated impressive performance across various multimodal reasoning benchmarks, while there is neither discussion nor empirical comparison to such SOTA models. Also, the authors need to evaluate those multimodal benchmarks that are used in the Gemini technical report, instead of only evaluating on 2 benchmarks.

2. This work designs the two-stage prompting and demonstrates that it improves the performance. However, note that the two-stage prompting trains 2 separate models for rationale and answer generation, which increases the total model size.
From this perspective, the effectiveness of two-stage prompting mainly comes from the insufficient model size, and it is unclear whether it is necessary with more powerful and larger multimodal models.

3. Table 7 is confusing. Did the authors use the text-only ChatGPT? If so, how did they feed images into the model? Also, I don't think the results with generated rationales and human annotations are "comparable" as described in the paper; the performance with generated rationales is clearly worse.

Requested Changes:

1. Justify the novelty and significance of the approach, especially two-stage prompting.
2. Add missing related work discussion and empirical comparison to recent large multimodal models.
3. Add evaluation on more recent multimodal reasoning benchmarks, such as those evaluated in the Gemini technical report.
4. Explain how to compute the results with image input for ChatGPT in Table 7.

Broader Impact Concerns: No concern.

==================================================

Review 2:

Summary: This paper proposes a two-stage pipeline for multimodal question answering. The pipeline contains two stages. In the first stage it generates a rationale, and then it generates the final answer using a separate network, conditioning on the original questions as well as the generated rationales. In both stages it trains a multimodal network that takes both text and images as input. The authors analyze how hallucinated rationales mislead the performance of the network. In experiments, they demonstrate that their method can outperform several large vision-language models with relatively small networks of limited size.

Strengths and Weaknesses:

Strengths:

- The paper provides a nice analysis of the motivation for introducing multimodal inputs for question answering and demonstrates the bad effects of hallucinated rationales. The explanation is clear and reasonable.
- The proposed two-stage pipeline sounds reasonable to me.
It achieves good performance on various datasets.

Weaknesses: The proposed method seems straightforward and not novel. However, as I am not an expert in the field, I can only make an educated guess.

Requested Changes: No requested changes.

Broader Impact Concerns: No broader concerns.

==================================================

Review 3:

Summary: The paper proposes a method of leveraging Chain-of-Thought (CoT) prompting in a multimodal setting to improve performance on vision-language tasks. They analyze both prompt-based methods and fine-tuned models (of around 1B parameters). The paper also investigates why models of 1B-parameter scale fail at CoT reasoning, which they find is partly due to hallucinated rationales. The proposed method has two stages: (1) rationale generation and (2) answer inference. They show that the proposed method outperforms much larger models on the ScienceQA and A-OKVQA datasets.

Strengths and Weaknesses:

## Strengths:

- The proposed method is quite general, and strong for its size (< 1B). It outperforms most other larger models (in the 6-13B scale) on ScienceQA. The method is also complementary with stronger VLMs, and the authors show (Sec. 6.2) that these models can be used to further improve results by training on their generated outputs rather than on human-annotated reasoning chains.
- The analysis of the failure modes of CoT and of introducing multimodal inputs in Sec. 3 is informative.

## Weaknesses:

- As all the models are benchmarked on fine-tuned performance, it's unclear how well the proposed method generalizes outside of the fine-tuning domain. It would be helpful to see such experiments, e.g., testing the ScienceQA fine-tuned model on MMMU [1] (both on the science and non-science domains).
- Similar to the above, does the fine-tuned model still do well on other vision-language tasks such as image captioning? Or is it highly specialized for VQA-type problems?
- Ablations on the importance of the visual backbone and language backbone would be appreciated (e.g., model size, image resolution, etc.). It would be useful to have information on how using larger visual encoders and language models affects performance.

**References**

[1] Yue, Xiang, et al. "MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for expert AGI." arXiv preprint arXiv:2311.16502 (2023).

Requested Changes: Would strengthen the work:

- Results on generalization or transfer to datasets that the model was not trained on.
- Analysis of different language model and visual backbones.

Broader Impact Concerns: There are no major issues with the ethical implications or broader impact of the work.

==================================================

Metareview:

Recommendation: Accept as is

Comment: Reviewers concur that this paper presents a valuable analysis for the application of CoT reasoning in multimodal models. One of the striking weaknesses of this paper is the novelty of the architecture, given that it shares a lot of similarity with the LLaVA approach. Even if novelty is not a determining factor for acceptance at TMLR, the novelty of the proposal might affect the interestingness of the work for TMLR, which is a determining factor. Nevertheless, it caught my attention that the authors state "It is worth noting that Chameleon, LLaMA-Adapter, LLaVA, and InstructBLIP are concurrent works released several months after our work"; this prompted me to verify that the first version of this paper appeared on arXiv a while ago, in 2023. In this light, I will consider this work indeed concurrent to LLaVA and the preceding works. I appreciated the effort that the authors made to rerun their experiments (Table 2) and include more recent baselines (i.e., LLaVA in Table 5), along with the experiments on MMMU, which were in response to a reviewer concern.

==================================================
# Invertible Hierarchical Generative Model For Images

Heikki Timonen heikki.timonen@aalto.fi
Department of Computer Science, Aalto University

Miika Aittala
NVIDIA

Jaakko Lehtinen
Department of Computer Science, Aalto University
NVIDIA

Reviewed on OpenReview: *https://openreview.net/forum?id=4rkKN4tM63*

## Abstract

Normalizing flows (NFs) as generative models enjoy desirable properties such as exact invertibility and exact likelihood evaluation, while being efficient to sample from. These properties, however, come at the cost of heavy restrictions on the architecture. Due to these limitations, modeling multi-modal probability distributions can yield poor results even with low-dimensional data. Additionally, typical flow architectures employed on real image datasets produce samples with visible aliasing artifacts and limited variation. The latent decomposition of flow models also falls short of that of competing methods, with uneven contributions to a decoded image. In this work we build an invertible generative model using conditional normalizing flows in a hierarchical fashion to circumvent the aforementioned limitations. We show that we can achieve superior sample quality among flow-based models with fewer parameters compared to the state of the art. We demonstrate the ability to control individual levels of detail via the latent decomposition of our model. Project source code is available at https://github.com/timoneh/hflow.

## 1 Introduction

Generative models for image data have taken large leaps of progress in terms of sample quality, interpretability, and other performance metrics. Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) were ahead of other types of models in terms of sample quality until only very recently. They, however, optimize a different loss from other types of generative models that operate on maximizing the likelihood (or a bound thereof) of the training data.
They also require a separate inference network or optimization procedure for inferring latent variables for real data (Creswell & Bharath, 2018). Recently, Denoising Diffusion Probabilistic Models (DDPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020) have caught up with GANs in terms of sample quality (Dhariwal & Nichol, 2021). In their standard form, however, they suffer from a lack of semantic structure in the latent space (Preechakul et al., 2022), and have a trade-off between sample quality and sampling speed due to the iterative nature of the denoising process. Variational Autoencoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) have a robust inference procedure with a smooth and semantically meaningful latent space and allow one to retrieve an encoded image with a reasonable reconstruction error as well as compute a lower bound for the data likelihood. VAEs can, however, be notoriously difficult to train with the goal of optimizing sample quality and easily give overly smooth and blurry samples or alternatively exhibit poor latent structure (Higgins et al., 2016). Recent work on VAEs has moved the focus to very deep VAEs (Vahdat & Kautz, 2020; Child, 2020) in the search for data-likelihoods that exceed those of autoregressive models (Van den Oord et al., 2016), whose depth scales linearly with the data-dimensionality. Normalizing flows (Tabak & Turner, 2013) offer exact and fast inference as well as exact likelihood instead of a bound offered by VAEs and DDPMs. Flow-models map data into a latent space deterministically and invertibly,

![1_image_0.png](1_image_0.png)

Figure 1: Curated 256 × 256 samples from our model (left) trained with CelebA-HQ. Samples from Glow (Kingma & Dhariwal, 2018) (right) are generated using the official implementation and pre-trained weights. Our model produces samples of significantly higher quality in terms of sample variance and spatial consistency, but at much lower model capacity.
The reader is encouraged to zoom in for a detailed view.

which sets heavy restrictions on their architectures. Enforced invertibility is problematic also for modeling multi-modal distributions (Cornish et al., 2020) due to the topology-preserving nature of NFs. Lastly, flow-models tend to focus on local pixel correlations while optimizing the data likelihood and disregard the semantic content of an image (Kirichenko et al., 2020). In this work, we study the performance problems of normalizing flows purely from the perspective of sample quality with image datasets. Past work has mostly focused on finding more expressive flow-architectures to improve a test-set likelihood score (Dinh et al., 2016; Ho et al., 2019; Hoogeboom et al., 2019; Behrmann et al., 2019; Chen et al., 2019). Despite yielding competitive likelihood scores, pure flow-models yield relatively poor samples compared with other likelihood-based generative models. Even very deep flows fail to produce spatially consistent images and lack variation due to heavy truncation, as seen in Figure 1 featuring samples from Glow (Kingma & Dhariwal, 2018). Here we investigate whether we can construct an invertible generative model yet avoid the issues arising from enforced one-to-one invertibility. We show that we can achieve good sample quality with high-resolution natural images with a model that replaces depth with width. Multi-resolution coarse-to-fine processing is a key principle behind many generative models. While pure flow models such as Glow do employ a resolution hierarchy, we find that latents from different resolutions fail to impact the detail of the associated scale in the expected way. Furthermore, we hypothesize that the invertible squeeze-and-split image resizing operations employed by Glow are not ideally suited for image generation.
Indeed, from an image processing perspective they can be likened to highly aliased filters, known to bias the generation towards regular grid and checkerboard artifacts (Karras et al., 2021) such as those apparent in Figure 1 (right) when viewed at high zoom. Conversely, such grid artifacts may be difficult to smooth out using the invertible scale-and-shift convolution layers. These observations and the general difficulties of flow models motivate us to introduce a flexible multi-scale model that employs individual shallow Glow models where needed. We lose the ability to evaluate the exact likelihood, but retain the exact invertibility through a pair of encoder and decoder pipelines. Inspired by the success of the image super-resolution problem with *conditional* normalizing flows (Lugmayr et al., 2020; Liang et al., 2021), we construct a hierarchical stack of shallow Glow-like flows that models different levels of detail via a conditioning mechanism. An image is encoded into abstract decreasing-resolution features by a sequence of general-purpose (non-invertible) networks, discarding detail at each step. Conversely, an image is generated by rebuilding this detail using a sequence of separate Glow-like models, interleaved with general-purpose CNNs that process and upsample the conditioning signal. The system is jointly trained on the weighted sum of the conditional flow-losses, inducing the encoder and decoder to find the appropriate decomposition. We observe a significant increase in sample quality when compared with deep normalizing flow models, decreasing the Fréchet Inception Distance (FID) metric (Heusel et al., 2017) from Glow's 51.5 to 27.3 on the CelebA-HQ dataset (Karras et al., 2017) at 256 × 256 resolution, using a significantly smaller model with a much shorter training time.
While there are flow-models with better likelihood scores, we choose Glow as a reference since few other flow-models have been shown to generate significantly better samples at 256 × 256 resolution. For instance, while the Residual Flows of Chen et al. (2019) achieve a better likelihood than Glow, the samples are of very similar quality in terms of subjective visual quality and the FID. We discover a hierarchical latent structure where we can control elements of various levels of detail separately. We show that images generated by interpolations in the latent space are smooth and remain close to the manifold of real images. Finally, we notice that individual parts of our model do not need to be very deep and instead we can trade model depth for width.

Our contribution In summary, in this work we show that we can construct an invertible model that is trained only with the normalizing flow maximum-likelihood loss and has superior sample quality on 256 × 256-resolution CelebA-HQ compared against Glow while being much faster to train. Our model has a smooth latent space and allows both fast sampling and inference of latent variables.

## 2 Method

We first introduce the general idea of a hierarchical conditional normalizing flow (Section 2.1) and associated practical design choices (Section 2.2). We then introduce our architecture by generalizing the idea to a multi-scale hierarchy, and discuss connections to related methods (Section 2.3).

## 2.1 Hierarchical Conditional Normalizing Flows

A normalizing flow f yields an exact value for the model probability density function via the change of variables formula

$$\log p(\mathbf{x})=\log p_{\mathrm{base}}(f(\mathbf{x}))+\log|\mathrm{det}\,\mathrm{d}f(\mathbf{x})/\mathrm{d}\mathbf{x}|\,,\tag{1}$$

where df(x)/dx is the Jacobian of f with respect to x.
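To make Equation 1 concrete, here is a minimal numpy sketch (all names are illustrative, not from the paper's codebase) for a one-dimensional affine flow f(x) = ax + b with a unit Gaussian base distribution; the log-absolute-determinant term reduces to log|a|.

```python
import numpy as np

def log_base(z):
    # log density of the standard normal base distribution
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def flow_log_prob(x, a, b):
    # Change of variables (Eq. 1) for f(x) = a*x + b:
    # log p(x) = log p_base(f(x)) + log |det df/dx|, with |det df/dx| = |a|
    z = a * x + b
    return log_base(z) + np.log(abs(a))

# Sanity check: the induced density integrates to one for any invertible a
xs = np.linspace(-20.0, 20.0, 200001)
dx = xs[1] - xs[0]
total = np.exp(flow_log_prob(xs, a=2.0, b=0.5)).sum() * dx
print(total)  # ≈ 1.0
```

The same decomposition, base log-density plus log-determinant, carries over term by term to the deep multivariate flows used in the paper.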
The function f is constructed to be efficiently invertible and to have a Jacobian determinant with a computational time-complexity preferably linear or better with respect to the data dimensionality. pbase is usually set to a multivariate unit Gaussian distribution. In this work, we follow the architectural choices of NFs presented in Dinh et al. (2016); Kingma & Dhariwal (2018), consisting of a sequence of affine coupling layers, invertible 1×1 convolutions and spatially broadcast learnable scales and biases with data-dependent initialization ("actnorms"). We introduce conditioning to a flow by giving an additional input c to the affine coupling layers h, which invertibly transform a variable x via

$$h(\mathbf{x};\mathbf{c})=[\mathbf{x}_{a},\mathbf{x}_{b}\odot\mathrm{NN}_{s}(\mathbf{x}_{a};\mathbf{c})+\mathrm{NN}_{b}(\mathbf{x}_{a};\mathbf{c})],\tag{2}$$

where xa, xb is some split of the input tensor x and NNs and NNb are neural networks. Due to the innate limitations of normalizing flows we want to offload as much work as possible from the invertible neural networks. Inspired by solving the image super-resolution task with flow-models, we train a conditional flow-model in such a way that the conditioning input c is itself a function of the data, and jointly learned in the process of maximizing the likelihood. Essentially this forms an autoencoder whose "reconstruction loss" is the flow loss. The training loss for such a network is given by:

$$L_{\mathrm{cond}}(\mathbf{x})=-\log p_{\theta}\left(\mathbf{x}|\mathbf{y}\right),\tag{3}$$

where pθ is the distribution induced by a conditional normalizing flow

$$\mathbf{x}=f_{\mathrm{cond}}(\mathbf{z}_{\mathrm{cond}};\mathbf{y},\theta),\tag{4}$$

with a unit Gaussian prior. More precisely, pθ is the density induced by the push-forward PX = fcond\#PZcond of the Gaussian prior PZcond .
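A conditional affine coupling layer (Eq. 2) stays invertible no matter how complex NNs and NNb are, because only xa and c enter them. A small numpy sketch with stand-in "networks" (purely illustrative; the real model uses CNNs and U-Nets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in networks: any functions of (x_a, c) preserve invertibility,
# since the transform of x_b in Eq. 2 is affine given x_a and c.
def nn_s(x_a, c):
    return np.exp(0.1 * np.tanh(x_a + c))   # positive scale

def nn_b(x_a, c):
    return 0.5 * np.tanh(x_a - c)           # bias

def coupling_forward(x, c):
    x_a, x_b = np.split(x, 2)
    return np.concatenate([x_a, x_b * nn_s(x_a, c) + nn_b(x_a, c)])

def coupling_inverse(y, c):
    x_a, y_b = np.split(y, 2)
    return np.concatenate([x_a, (y_b - nn_b(x_a, c)) / nn_s(x_a, c)])

x = rng.standard_normal(8)
c = rng.standard_normal(4)   # conditioning signal seen by both networks
y = coupling_forward(x, c)
print(np.allclose(coupling_inverse(y, c), x))  # True
```

Note that the xa half passes through untouched; stacking such layers with alternating splits (and 1×1 convolutions, in the full model) lets every dimension be transformed.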
y ∼ qϕ(y|x) is a stochastic encoder with learned components (convolutions) with parameters ϕ. The encoder is trained jointly with the normalizing flow. This learning objective leads to finding the latent-variable representation Y which has maximal mutual information I(X, Y) with the original data. If the complex conditional normalizing flow parametrizing pθ were reduced to a Gaussian likelihood conditioned on y via the mean and covariance, we would recover the standard VAE reconstruction loss. Like in a VAE, for generation we need to additionally model the "prior" distribution of Y. However, unlike in a standard VAE, we do not directly enforce a Gaussian prior for each q(y|x). Instead, we approximate the aggregated posterior in a separate step with a distribution induced by another flow-model

$$\mathbf{y}=f_{\mathrm{prior}}(\mathbf{z}_{\mathrm{prior}};\varphi),\tag{5}$$

of density pφ of the respective push-forward between Zprior and Y, using the flow-loss of Equation 1

$$L_{\mathrm{prior}}(\mathbf{y})=-\log p_{\varphi}\left(\mathbf{y}\right),\tag{6}$$

where y is sampled via x ∼ pdata(x), y ∼ q(y|x). Note that the "prior" label here denotes that the flow fprior operates on y and is different from the unit Gaussian base distribution of a flow introduced in Equation 1. In practice, we minimize the following expression

$$\min_{\theta,\phi,\varphi}\mathbb{E}_{\mathbf{x}\sim p_{\mathrm{data}}(\mathbf{x}),\mathbf{y}\sim q_{\phi}(\mathbf{y}|\mathbf{x})}\left[L_{\mathrm{cond}}(\mathbf{x})+L_{\mathrm{prior}}(\mathrm{SG}[\mathbf{y}])\right]\tag{7}$$

where SG[*. . .* ] is the stop-gradient operator with SG(x) = x but which has vanishing partial derivatives; we discuss the reason for stopping the gradients below. Alternatively, this can be thought of as a two-step training procedure, with the stop-gradient operator decoupling the prior-flow from the rest of the model.
Figure 2 illustrates the model setup, as well as its two-step nature. From now on, we omit the flow and encoder parameters *θ, ϕ, φ* from the notation for clarity.

Stop-gradient operations The loss resembles that used in VAEs, only lacking the negative entropy term of the encoder in the KL-divergence between the approximate posterior and the prior. This introduces a loophole where, in the case of a deterministic encoder, the prior flow loss can be arbitrarily improved by concentrating the distribution of y's, leading to a degenerate solution. A related form of this degenerate solution with vanishing variance was described by Hoffman et al. (2017) in the context of the β-VAE (Higgins et al., 2016) and an implicit use of a data-dependent prior. Xiao et al. (2019) also described a similar degeneracy in their closely related work. For stochastic encoders with Gaussian noise (as used by us; described later), a related loophole exists through intentionally decreasing the signal-to-noise ratio of y, making it uninformative, akin to posterior collapse in VAEs. We employ the stop-gradient operations to prevent the prior flow fprior from directly impacting the distribution of y during training. Adding the entropy terms to the loss (bringing it even closer to the VAE loss) can make the tendency to reach a degenerate state even stronger, as it directly encourages q(y|x) to have an entropy matching that of the prior.

Figure 2: Hierarchical conditional normalizing flow structure. The input x is encoded into the latent [zcond, zprior] by general-purpose CNNs (blue blocks) and NFs (green blocks) via the intermediate y. The stop-gradient operation is denoted by the red dashed line.
Separation of modeling tasks The most important design point of the described model is the choice of the form of the stochastic encoder q(y|x) subject to the limitations of the normalizing flows. A deterministic q(y|x) = δx(y) (the identity) only pushes the modeling work to the flow modeling the prior, fprior, while too strong a bottleneck (either by strong noise or aggressive blurring) does not provide enough conditioning information for the conditional flow. We deliberately avoid using squeeze-and-split operations in both the conditional flow and the prior-flow. That is, none of the operations within a flow change the spatial resolution from that of the respective flow's input. Instead, we design the stochastic encoder q to greatly reduce the spatial resolution with non-aliasing filters, but also allow for learned components (convolutions). Ideally, all perceptually meaningful, spatially long-range correlations that the conditional flow might not be able to consistently capture can be routed via the encoder q(y|x). Finally, the resulting distribution of y should be such that it can be approximated - without squeeze-and-split operations - with another flow.

Algorithm 1 Inference and sampling for the model in Figure 2

Require: Data point x, noise scale α
procedure Inference
    y ← Encoder(x)
    σ ← σX ▷ Defined in Eq. 10
    ε ∼ N(0, α²σ²I)
    y ← y + ε
    zprior ← f⁻¹prior(y)
    zcond ← f⁻¹cond(x; Decoder(y))
    z ← [zprior, zcond]
    return z
end procedure

Require: Sampling standard deviation σsampling
procedure Sampling
    zprior ∼ N(0, σ²sampling I)
    zcond ∼ N(0, σ²sampling I)
    y ← fprior(zprior)
    x ← fcond(zcond; Decoder(y))
    return x
end procedure

In the limit of the encoder being only fixed deterministic filtering and downsampling, we recover the image super-resolution task with a learned prior on the low-resolution images.
The purpose of the *learned* encoder components is to allow for finding a more compact representation for y than a merely downsampled image. Conversely, we want to avoid near-perfect auto-encoding via the y-variable. We require the conditional normalizing flow to be able to generate an appropriate level of spatially coherent detail, such that the capacity of the conditioning y-variable is used to only model global, high-level features. Otherwise, the capacity of the conditional flow to produce variation will essentially be wasted if each image can be generated using only a higher-level representation.

Invertibility and sampling Our construction remains invertible as each datapoint can be encoded into a latent with

$$\mathbf{y}\sim q(\mathbf{y}|\mathbf{x}),\tag{8}$$

$$\mathbf{z}=\left[\mathbf{z}_{\text{prior}},\mathbf{z}_{\text{cond}}\right]=\left[f_{\text{prior}}^{-1}\left(\mathbf{y}\right),f_{\text{cond}}^{-1}\left(\mathbf{x};\mathbf{y}\right)\right],\tag{9}$$

that is, *the datapoint is lifted into a space that has more dimensions than the original datapoint*. Any encoded datapoint can deterministically be recovered with zero reconstruction error via Equations 4 and 5, since fprior and fcond are by construction bijective. This holds for any type of encoder q(y|x). The inference and sampling procedures for the model in Figure 2 are summarized in Algorithm 1.

## 2.2 Individual Design Choices

Stochastic encoders While the encoder q could be deterministic, we empirically find it useful to inject Gaussian noise into the outputs of the encoders. That is, we have q(y|x) = N(y; µ(x), α²σ²X I), where

$$\sigma_{\mathbf{X}}=\frac{1}{\dim\mathbf{y}}\left(\sqrt{\operatorname{Var}(\mu(\mathbf{X}))}^{\mathsf{T}}\mathbf{1}\right),\tag{10}$$

where µ is parametrized as a neural network (CNN with downsampling in Figure 3, the encoder in Figure 2).
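Equations 8–9 can be exercised end-to-end on toy stand-ins. As long as fprior and fcond are bijections (here simple affine maps, purely illustrative, with a deterministic encoder, i.e. α = 0), any datapoint round-trips through the overcomplete latent [zprior, zcond] with zero reconstruction error, even though the encoder itself is lossy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: the argument only needs f_prior and f_cond to be bijections.
def encoder(x):                           # lossy, non-invertible: 2x mean-pooling
    return x.reshape(-1, 2).mean(axis=1)

def decoder(y):                           # upsamples the conditioning signal
    return np.repeat(y, 2)

def f_prior(z):     return 2.0 * z + 1.0          # bijection on y-space (Eq. 5)
def f_prior_inv(y): return (y - 1.0) / 2.0

def f_cond(z, y_dec):     return 0.5 * z + y_dec  # bijection in z for fixed conditioning (Eq. 4)
def f_cond_inv(x, y_dec): return (x - y_dec) / 0.5

x = rng.standard_normal(8)

# Inference (Eqs. 8-9): the latent has more dimensions than x (8 + 4 = 12)
y = encoder(x)
z_prior = f_prior_inv(y)
z_cond = f_cond_inv(x, decoder(y))

# Exact reconstruction via Eqs. 4-5
y_rec = f_prior(z_prior)
x_rec = f_cond(z_cond, decoder(y_rec))
print(np.allclose(x_rec, x))  # True
```

The conditional flow absorbs whatever the encoder discards, which is why invertibility holds for any encoder.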
We choose the σ of the noise to be proportional to the variance (with parameter α) to prevent the model from scaling the signal up to improve the signal-to-noise ratio and effectively ignoring the noise. The mean is taken in order to have isotropic noise, so that the model cannot corrupt only a part of the elements of x and squeeze an almost clean signal into the remaining elements. Assumptions of a similar spirit, of data corrupted with isotropic (instead of anisotropic) noise, have been found to yield higher-quality samples within the VAE literature (Rybkin et al., 2021). In practice, the variance is computed as a Monte Carlo estimate over the current minibatch by computing µ, taking the empirical element-wise standard deviation over the minibatch and normalizing with the dimensionality of y. The noise can be thought of as a regularizer or as augmentation like in the work of Ghosh et al. (2019). An alternative view is to consider the noise as mollification for the prior distribution, rendering it more reasonable to model with the prior-flow. The purpose of the noise is hence not the same as in VAEs (we could choose to have a deterministic encoder), where it is strictly a construction for allowing optimization of the Evidence Lower Bound (ELBO). Here, σ of the Gaussian q(y|x) does not appear in the loss and hence our optimization target is not a bound for the data-likelihood, missing the entropy-term of the KL-divergence of the VAE-loss. We can, however, compute a Monte Carlo estimate of the ELBO by adding the missing entropy-term, which can readily be computed for the Gaussian-conditional stochastic encoder.

Multi-scale architecture We introduce a multi-scale representation for y to enforce more hierarchy into the generation process. That is, instead of encoding x into a single y (Figure 2), we create a multi-scale representation y = [yR1, yR2, . . . , y1, yprior], where Ri are the resolutions of the y-variables up to y1 and x = yR1.
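The per-minibatch Monte Carlo estimate of Eq. 10 described above can be sketched as follows (function names are illustrative): take the element-wise standard deviation of µ(x) across the batch, average it over the dim(y) elements, and inject a single isotropic noise scale so that no subset of elements can carry a cleaner signal than the rest.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma_x(mu_batch):
    # Monte Carlo estimate of Eq. 10 over a minibatch (rows = samples):
    # element-wise std across the batch, averaged over the dim(y) elements
    return np.std(mu_batch, axis=0).mean()

def add_encoder_noise(mu_batch, alpha):
    # q(y|x) = N(y; mu(x), alpha^2 * sigma_X^2 * I) -- one isotropic scale
    s = alpha * sigma_x(mu_batch)
    return mu_batch + s * rng.standard_normal(mu_batch.shape)

mu = rng.standard_normal((64, 16)) * 3.0   # hypothetical encoder outputs mu(x)
y = add_encoder_noise(mu, alpha=0.1)
print(y.shape)  # (64, 16)
```

Because the scale is tied to the empirical spread of µ(X), uniformly amplifying the encoder output amplifies the noise by the same factor, closing the signal-to-noise loophole described above.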
The hierarchical encoder–decoder structure is illustrated in Figure 3. The conditional part of the training loss now becomes

$$L_{\mathrm{cond}}(\mathbf{x})=-\sum_{R_{i}\in\mathcal{R}}\log p(\mathbf{y}_{R_{i}}|\mathbf{y}_{<R_{i}})-\log p(\mathbf{y}_{1}|\mathbf{y}_{\mathrm{prior}}),\tag{11}$$

where each p is induced by a separate conditional normalizing flow. In training, we weight each term in Equation 11 by a separate weight wi. Sampling follows the same procedure as in the two-level model of Figure 2, with independent Gaussian samples drawn from the base distribution of each of the flows followed by their conditional inversion using the results from the higher levels of the hierarchy. The multi-level hierarchical framework is in fact quite general and does not enforce any particular form for the normalizing flows used to model the conditional distributions of Equation 11. One could also use probabilistic models of other types instead of normalizing flows. However, with other types of model, one may lose strict invertibility or suffer in terms of sampling efficiency. For example, Preechakul et al. (2022) introduce a two-stage model which can be thought of as a similar hierarchical construction but with normalizing flows replaced with denoising diffusion models. We choose to remain close to the Glow architecture with the choice of model for the conditional distribution to highlight the benefits of replacing the limited resolution hierarchy of the baseline Glow model with our proposed hierarchical construction. Modifications are justified with ablation studies in Section 3.2.

Trading depth for width We drastically reduce the number of latent variables when compared with deep VAEs, by only using ∼ 5 levels for y. Deep VAEs have an order of magnitude more depth (Child, 2020; Vahdat & Kautz, 2020).
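The weighted multi-level loss can be sketched as follows. As a stand-in for each conditional flow term of Eq. 11, the sketch uses a unit-variance Gaussian negative log-likelihood around a prediction decoded from coarser levels; all names and the choice of stand-in density are illustrative, not the paper's model:

```python
import numpy as np

def gauss_nll(y, mean):
    # Stand-in for -log p(y_Ri | y_<Ri): unit-variance Gaussian around a
    # prediction from coarser levels (the real model uses a conditional flow).
    return 0.5 * np.sum((y - mean) ** 2 + np.log(2 * np.pi))

def l_cond(y_levels, predictions, weights):
    # Weighted sum over resolution levels, as in the weighted form of Eq. 11
    return sum(w * gauss_nll(y, m)
               for w, y, m in zip(weights, y_levels, predictions))

rng = np.random.default_rng(3)
y_levels    = [rng.standard_normal((r, r)) for r in (8, 4, 1)]  # toy y_Ri
predictions = [np.zeros((r, r)) for r in (8, 4, 1)]             # toy decoder outputs
loss = l_cond(y_levels, predictions, weights=[1.0, 1.0, 1.0])
print(loss > 0)  # True
```

The per-level weights wi are what let training trade off fidelity between coarse and fine scales.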
Instead, we move capacity to the normalizing flows (which themselves are shallow compared to Glow) and ensure that each flow-prior models meaningful variation in the data, instead of merely adding Gaussian noise to the output of the previous decoder. There has been evidence in previous work that replacing depth in VAEs with the ability to handle long-distance interdependencies with attention within the latent code yields competitive likelihood scores (Apostolopoulou et al., 2021). However, it is not clear if this also translates into better image quality in high-resolution images.

Figure 3: Multi-scale architecture. The encoder (left) and decoder (right) pipelines consist of repeating per-resolution blocks (light blue) with each intermediate representation yRi further encoded into Gaussianized latents zRi via normalizing flows (green). Refer to Appendix F for a detailed breakdown of the CNN and flow layers.

![5_image_0.png](5_image_0.png)

U-Nets in Normalizing Flows We find it necessary to use U-Nets (Ronneberger et al., 2015) in the coupling layers of the normalizing flows despite the desire for routing semantic information via a lower-resolution latent. Low-level detail that is not encoded into the latent might still have spatially long-range dependencies (long strands of hair, color of the eyes) and hence equipping the flows with tools to model these dependencies is required. If the conditional normalizing flows operate only on very local detail, the model has to route these low-level features via the high-level latent, potentially stealing capacity from other useful high-level features and violating our design principles. Bottle-necking the encoder–decoder pipeline with e.g. blurring might also render this impossible and the aforementioned details might simply be lost. Within flows using U-Nets, we split the tensor spatially in 8 × 8, 4 × 4 and 2 × 2 checkerboard-patterns to differentiate from the Glow-like split in the channel-direction.
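The spatial checkerboard split mentioned above can be sketched in a few lines of numpy (illustrative helper names, shown for a single-channel tensor): cells are assigned to the two coupling halves by the parity of their block coordinates, and the split is trivially invertible by re-scattering through the same mask.

```python
import numpy as np

def checkerboard_split(x, block):
    # Split an HxW tensor into two halves with a block-checkerboard pattern:
    # cells whose (row_block + col_block) parity is even go to a, the rest to b.
    h, w = x.shape
    rows, cols = np.indices((h, w))
    mask = ((rows // block) + (cols // block)) % 2 == 0
    return x[mask], x[~mask], mask

def checkerboard_merge(a, b, mask):
    # Inverse of the split: scatter both halves back through the mask
    out = np.empty(mask.shape, dtype=a.dtype)
    out[mask], out[~mask] = a, b
    return out

x = np.arange(16 * 16, dtype=float).reshape(16, 16)
a, b, mask = checkerboard_split(x, block=4)   # a 4x4 checkerboard pattern
assert np.array_equal(checkerboard_merge(a, b, mask), x)
print(a.size, b.size)  # 128 128
```

Unlike squeeze-and-split, this partition never resamples the image grid, which is consistent with the paper's aim of avoiding aliased resizing operations inside the flows.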
While an equivalent split can be achieved with the spatial-to-channels squeeze operations and pure channel-direction splits, we do not observe Glow-like grid artifacts in any of our samples.

## 2.3 Related Work

VAEs Our model can be seen as a special case of a VAE, where instead of a Gaussian likelihood we have a more complex normalizing flow. Our approach also models the aggregated posterior Ex∼pdata(x)[q(yprior|x)] and disregards the negative entropy-term of the approximate posterior–prior type KL-divergence terms in the optimization process. There is no pressure from the perspective of the optimization loss to have each q(y|x) be a zero-mean unit Gaussian, preventing the phenomenon known as "posterior collapse", "information preference" or "optimization challenges" of VAEs (Bowman et al., 2015; Chen et al., 2016). We see the addition of noise more as a regularization technique rather than a necessity for a variational bound. Even more generally, the difference between VAEs and NFs is not always very clear. A flow-model trained on noise-augmented data can be seen as a VAE and vice versa (Huang et al., 2020). Some VAEs also use normalizing flows as components for a more expressive posterior approximation or likelihood-model (Rezende & Mohamed, 2015; Kingma et al., 2016; Agrawal & Dukkipati, 2016). Finally, while normalizing flows are usually described as being able to yield exact likelihoods, they often employ "dequantization" to render discrete data continuous. However, dequantization, i.e., the addition of uniform (Theis et al., 2015) or more complex noise (Ho et al., 2019) to lift the data distribution to a continuous space, means that a model gives only bounds on likelihoods rather than an exact likelihood.

Flows with noise-augmented data Huang et al. (2020); Chen et al. (2020); Grcić et al.
(2021) all share the same idea of lifting the data distribution into a higher-dimensional space via padding with noise to fix issues with multi-modality and invertibility in normalizing flows. Their work focuses mostly on improvements in the likelihood score and on images with resolutions less than 256 × 256. Our construction of the padding is more delicate. Rather than padding the original data with noise, we pad the data with a slightly noisy (or even noiseless), maximally informative compression of the data which is specifically designed to combat the issues of flow-models on images.

Wavelet Flow Yu et al. (2020) build a similar hierarchy of conditional flows, but with direct one-to-one invertibility and exact likelihood evaluation, using Haar-wavelet transforms as a fully invertible encoder–decoder structure. The conditional flows generate the detail coefficients conditioned on the mean. Compared with their work, we see benefit in using an overcomplete representation to allow a more informative high-level latent (that is, one that is not necessarily a low-resolution image). We also avoid aliasing due to not using box-filtering. We do, however, lose the ability to measure the exact likelihood.

Flows on manifolds Instead of defining the data-likelihood for the space of all RGB-images of a fixed resolution, one can choose to parametrize a manifold using points in a lower-dimensional space and model density only on this manifold. As an image is unlikely to lie exactly on the manifold, one needs to work with the projections of images onto the manifold. The training process hence comprises two steps: finding the manifold that is on average closest to the training data, and modeling the density of the projected data on the manifold. Several pieces of prior work explore ways of achieving this (Kothari et al., 2021; Brehmer & Cranmer, 2020).
Our work is in spirit very similar, yet we do not limit the density modeling to a manifold but work in the full space of RGB-images. In particular, we do not simply train a deterministic autoencoder

![7_image_0.png](7_image_0.png)

Figure 4: Curated samples from our model (Config A) trained with CelebA-HQ 256 × 256 using truncated σsampling = 0.7 (reduced-temperature sampling as in Kingma & Dhariwal (2018)) for latent resolutions greater than 1.

to approximate the manifold of real images and then fit a flow-model into the latent space of the autoencoder. Our encoders and decoders are trained *jointly* with the conditional flows, using their likelihood-loss as the training signal.

## 3 Results

Our model greatly improves the FID for flow-based models using the CelebA-HQ (Karras et al., 2017) dataset at 256×256: our model reaches a score of 27.3 against Glow's 51.5 with only about 36% of Glow's parameters 1. Furthermore, we measure a throughput of around 50 samples/second on an NVIDIA RTX 3090 GPU, which is around 4 times the throughput of Glow on the same hardware. Next, we show qualitatively that our model constructs a latent decomposition which allows controlling individual levels of detail independently and in a more uniform fashion than Glow. We also present ablation studies, showing how different architectural choices within our model change its behavior and performance in terms of FID. Finally, we train baseline Glow-like flow-models with similar capacity to ours using the church and bedroom classes of the LSUN dataset (Yu et al., 2015). We present a simple toy case on a 2D mixture of Gaussians, showing the ability of our model to capture multimodal distributions, in Appendix E. Likelihood scores of the models are tabulated in Appendix G. Details on the training parameters are aggregated in Appendix H.
## 3.1 Qualitative Model Behavior

Samples from a model trained with CelebA-HQ 256 × 256 exhibit good spatial coherency and variation both in high and low-level detail (Figure 4). Samples from our model also lack the checkerboard-like aliasing artifacts that can be seen in the Glow counterparts. Though the effect of z256 is subtle, each part of the latent has an observable effect on the decoded image, which is not true for Glow. Figure 5 shows pixel-wise standard deviations for an encoded image when sampling a part of the full latent from the prior but fixing the others. We perform the same operation for Glow. With our model, the latent codes of increasing resolution change increasingly high-frequency details in the image. For example, z1 changes the identity of the person and the background, while z128 mostly affects the fine structure of the hair. Compared with Glow, our latent structure has a more uniform effect on a decoded image, with the high-resolution latent codes also yielding visible changes in the image. We encourage the reader to also look at Video 3 from the supplementary material for another visualization of the variance. Figure 6 presents another view of the latent decomposition by showing how various levels of detail are encoded into the latent space. We encode a real image, and cumulatively set zRi to zero starting from high resolutions. This process gradually removes detail from the image, with the zprior and z1-variables only containing very high-level information like the orientation of the head and the hair color. Interestingly, the eye color is stored in z32, showing that being able to model long-range correlations within the normalizing flows is still required despite the encoder–decoder mechanism within our model. Otherwise, there would be inconsistencies in the samples of our model, like mismatching eyes.

1We computed the parameter count of Glow using the official pre-trained model.
![8_image_0.png](8_image_0.png)

Figure 5: Pixel-wise standard deviations while re-sampling latents from the prior one resolution at a time (columns) with 32 samples and no truncation. The latent code of each resolution is responsible for changing detail of the corresponding level of detail. Compared with Glow, our method yields a more uniform effect of latents of different resolutions. *Note that the spatial dimensions of the latents differ between the corresponding columns.*

![8_image_1.png](8_image_1.png)

Figure 6: To complement Figure 5, we visualize the contributions of individual latents by inverting a real image (left) and cumulatively zeroing them starting from the finest resolution. This causes a progressive loss of detail at larger and larger scales. The rightmost image is the result of setting all latents to zero.

Interpolations in the latent space of our model result in smooth changes in the decoded images, but also yield sharp images. In contrast to Glow, our interpolations also lack strong aliasing artifacts. Videos on interpolations in the latent space for random samples and real images can be found in the supplementary material.

## 3.2 Ablations

We train our model using several variations of the configuration. We identify important design elements that affect sample quality, compared by FID values in Figure 7a. We refer the reader to Appendix A for a supporting visualization for the ablations, similar to Figure 6.

Noise in Encoders From Figure 7a we notice that adding a non-zero amount of noise to the y-variables is beneficial in terms of sample quality. If no noise is added (Config B) - rendering the encoder deterministic - we notice that the model tries to encode an increasing amount of low-level detail into the high-level, low-resolution latents. Conversely, for a high amount of noise (Config C), images after the aforementioned procedure become increasingly blurry. Hence, we need to specifically tune the noise level for optimal sample quality.
We hypothesize that the added noise is more destructive to high-frequency details of a y-variable and thus renders it more favorable to encode global features into higher-level y-variables. The decoder can tolerate noisy y inputs up to a limit, but too heavy noise likely starts to degrade the results, causing blurring. In the limit of very strong noise, the signal of yRi is lost and the task of the respective normalizing flow becomes trivial (a "posterior collapse"), due to yRi already being almost Gaussian.

U-Nets in Normalizing Flows Employing U-Nets in the normalizing flows is beneficial in terms of FID. A model with no U-Nets in the normalizing flows (Config D) has a stronger preference to encode high-frequency content into the high-level latent code to improve the flow losses. We attribute this failure to the inability of the model to generate this content using the normalizing flows that model the high-resolution y-variables. With only a limited receptive field, the model has to use the capacity of the high-level latent for these features. We also see traces of very low-frequency noise, which may be an attempt of the model to represent the low frequencies of the slight Gaussian noise used in data augmentation via the low-level latents.

Flows at additional resolutions In the best-performing Config A, we do not take constant-size steps down in resolution within the decoder but decompose y = [y256, y128, y32, y8, y1, yprior]. Surprisingly, adding additional resolution levels to the encoder–decoder (Config E) to model the missing resolutions y64 and y16 (while reducing capacity at other flows, encoders, and decoders to keep an approximately similar number of parameters) renders the results considerably worse. We again observe that high-frequency details are encoded more aggressively into the low-resolution latents. While the model might have lower-capacity encoders and decoders, it is clearly misusing the given resources.
Deeper Normalizing Flows Making the normalizing flows longer (Config F, two times the affine coupling blocks, with fewer feature maps to keep the parameter count constant) has a small negative impact on the FID. The model hence does not seem to be limited by the length of the normalizing flows (width of the model), but rather by the inductive bias of how the image is encoded into the hierarchical y.

## 3.3 Other Datasets

We also train our model using the LSUN churches and bedrooms datasets at 128 × 128 resolution. For comparison, we also train a Glow-like model from scratch using similar capacity and the same computational resources as were used to train our own model. From Figure 7b we see that our model consistently yields much better FID than Glow. We did not experiment with tuning the parameters of our model on the lower-resolution data. Because of the limitation in the parameter budget, the Glow-like model is not as deep as in Kingma & Dhariwal (2018). Uncurated samples from the models described above are found in Appendix C.

![10_image_0.png](10_image_0.png)

Figure 7: (a) Ablations with CelebA-HQ 256 × 256. Effect of model structure and parameters on the achieved FID measured during training. The FID is computed using 25k samples with all the training data augmentations enabled. Each model has the same learning rate and approximately the same number of parameters. A proper level of added noise to y-variables improves the FID drastically. Addition of U-Nets into the flows also has a large effect. Interestingly, the FID is also very sensitive to the number of resolutions modeled with flows in the hierarchy. (b) FIDs during the training of our model on LSUN church and LSUN bedroom at 128 × 128 resolution, compared against a Glow-like model with similar capacity. The FID is computed using 25000 random samples from the entire trainset (which is > 3 million images for LSUN bedrooms), causing the noise in the measurements.
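For reference, the FID reported throughout (Heusel et al., 2017) is the Fréchet distance between Gaussian fits to Inception features of real and generated samples:

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right),$$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of the feature embeddings of the real and generated sets, respectively. Lower values are better; note that the measurement is noisy for small sample counts, as seen in Figure 7b.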
## 4 Discussion

Flow-based models have many useful properties, but have suffered from poor sample quality relative to many other families of generative models. While not on par with the current best GAN and diffusion models, the exactly invertible generative model presented in this work yields much higher-quality samples than the previous state-of-the-art invertible models. Moreover, we have shown that by constructing a hierarchical stack of conditional normalizing flows we can separate high- and low-level features and model them conditionally on each other, resolving some of the issues concerning standard normalizing flow models. We find that data augmentation or regularization with noise is essential for our model to perform well. Studying different regularization methods and their effect on the latent decomposition, as well as the applicability of the latent space of our model to downstream tasks, would be interesting avenues for future research.

Limitations. Our model has a relatively large number of hyperparameters, such as the noise levels α, that drastically affect its performance. While we have presented empirical evidence for why certain choices might be better than others, we have no principled method of optimizing those values. Adding them as optimization parameters and employing the VAE loss is hardly an option, since the VAE loss also requires complex parameter tuning for good sample quality. While our model yields better samples than other flow models, they still lag behind GANs and DDPMs. Like other flow models, ours also has difficulties modeling high-resolution, highly variable data, as seen in the LSUN results. Finally, our model does not directly extend to conditional tasks like inpainting or denoising in a principled Bayesian way, because the model does not yield an exact likelihood but only a bound.

## Acknowledgments

We thank Pauli Kemppinen and Erik Härkönen for help with the code release.
This work was partially supported by the European Research Council (ERC Consolidator Grant 866435), and made use of computational resources provided by the Aalto Science-IT project. ## References Siddharth Agrawal and Ambedkar Dukkipati. Deep variational inference without pixel-wise reconstruction. arXiv preprint arXiv:1611.05209, 2016. Ifigeneia Apostolopoulou, Ian Char, Elan Rosenfeld, and Artur Dubrawski. Deep attentive variational inference. In *International Conference on Learning Representations*, 2021. Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. In *International Conference on Machine Learning*, pp. 573–582. PMLR, 2019. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. *arXiv preprint arXiv:1511.06349*, 2015. Johann Brehmer and Kyle Cranmer. Flows for simultaneous manifold learning and density estimation. Advances in Neural Information Processing Systems, 33:442–453, 2020. Jianfei Chen, Cheng Lu, Biqi Chenli, Jun Zhu, and Tian Tian. Vflow: More expressive generative flows with variational data augmentation. In *International Conference on Machine Learning*, pp. 1660–1669. PMLR, 2020. Ricky TQ Chen, Jens Behrmann, David K Duvenaud, and Jörn-Henrik Jacobsen. Residual flows for invertible generative modeling. *Advances in Neural Information Processing Systems*, 32, 2019. Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. *arXiv preprint arXiv:1611.02731*, 2016. Rewon Child. Very deep vaes generalize autoregressive models and can outperform them on images. arXiv preprint arXiv:2011.10650, 2020. Rob Cornish, Anthony Caterini, George Deligiannidis, and Arnaud Doucet. Relaxing bijectivity constraints with continuously indexed normalising flows. In *International conference on machine learning*, pp. 
2133– 2143. PMLR, 2020. Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. IEEE transactions on neural networks and learning systems, 30(7):1967–1974, 2018. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. *Advances in Neural* Information Processing Systems, 34:8780–8794, 2021. Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. *Advances in* neural information processing systems, 32, 2019. Partha Ghosh, Mehdi SM Sajjadi, Antonio Vergari, Michael Black, and Bernhard Schölkopf. From variational to deterministic autoencoders. *arXiv preprint arXiv:1903.12436*, 2019. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in neural information processing* systems, pp. 2672–2680, 2014. Matej Grcić, Ivan Grubišić, and Siniša Šegvić. Densely connected normalizing flows. *Advances in Neural* Information Processing Systems, 34:23968–23982, 2021. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. *Advances in neural information processing systems*, 30, 2017. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. 2016. Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In International Conference on Machine Learning, pp. 2722–2730. PMLR, 2019. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 
Denoising diffusion probabilistic models. *Advances in Neural* Information Processing Systems, 33:6840–6851, 2020. Matthew D Hoffman, Carlos Riquelme, and Matthew J Johnson. The β-vae's implicit prior. In *Workshop* on Bayesian Deep Learning, NIPS, pp. 1–5, 2017. Emiel Hoogeboom, Rianne Van Den Berg, and Max Welling. Emerging convolutions for generative normalizing flows. In *International Conference on Machine Learning*, pp. 2771–2780. PMLR, 2019. Chin-Wei Huang, Laurent Dinh, and Aaron Courville. Augmented normalizing flows: Bridging the gap between generative flows and latent variable models. *arXiv preprint arXiv:2002.07101*, 2020. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. *arXiv preprint arXiv:1710.10196*, 2017. Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. *Advances in Neural Information Processing Systems*, 34: 852–863, 2021. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. Advances in neural information processing systems, 31, 2018. Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. *Advances in neural information processing systems*, 29, 2016. Polina Kirichenko, Pavel Izmailov, and Andrew G Wilson. Why normalizing flows fail to detect out-ofdistribution data. *Advances in neural information processing systems*, 33:20578–20589, 2020. Konik Kothari, AmirEhsan Khorashadizadeh, Maarten de Hoop, and Ivan Dokmanić. Trumpets: Injective flows for inference and inverse problems. 
In *Uncertainty in Artificial Intelligence*, pp. 1269–1278. PMLR, 2021. Jingyun Liang, Andreas Lugmayr, Kai Zhang, Martin Danelljan, Luc Van Gool, and Radu Timofte. Hierarchical conditional flow: A unified framework for image super-resolution and image rescaling. In *Proceedings* of the IEEE/CVF International Conference on Computer Vision, pp. 4076–4085, 2021. Andreas Lugmayr, Martin Danelljan, Luc Van Gool, and Radu Timofte. Srflow: Learning the superresolution space with normalizing flow. In *European conference on computer vision*, pp. 715–732. Springer, 2020. Konpat Preechakul, Nattanat Chatthee, Suttisak Wizadwongsa, and Supasorn Suwajanakorn. Diffusion autoencoders: Toward a meaningful and decodable representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10619–10629, 2022. Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In *International conference on machine learning*, pp. 1530–1538. PMLR, 2015. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International conference on machine learning*, pp. 1278–1286. PMLR, 2014. Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In *International Conference on Medical image computing and computer-assisted intervention*, pp. 234–241. Springer, 2015. Oleh Rybkin, Kostas Daniilidis, and Sergey Levine. Simple and effective vae training with calibrated decoders. In *International Conference on Machine Learning*, pp. 9179–9189. PMLR, 2021. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In *International Conference on Machine Learning*, pp. 2256–2265. PMLR, 2015. Esteban G Tabak and Cristina V Turner. A family of nonparametric density estimation algorithms. 
*Communications on Pure and Applied Mathematics*, 66(2):145–164, 2013. Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015. Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. Advances in Neural Information Processing Systems, 33:19667–19679, 2020. Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. *Advances in neural information processing systems*, 29, 2016. Zhisheng Xiao, Qing Yan, and Yali Amit. Generative latent flow. *arXiv preprint arXiv:1905.10485*, 2019. Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. *arXiv preprint arXiv:1506.03365*, 2015. Jason J Yu, Konstantinos G Derpanis, and Marcus A Brubaker. Wavelet flow: Fast training of high resolution normalizing flows. *Advances in Neural Information Processing Systems*, 33:6184–6196, 2020.

![14_image_0.png](14_image_0.png)

Figure 8: Cumulatively setting the latent code of a real image to zero starting from the high-resolution latents, highlighting the failure cases of the ablations. **Config B / no noise**: Part of the low-resolution latent code is used to attempt to model hair texture, while the color of the hair is lost after setting z8 to zero. **Config D / no U-Nets**: Same problem as with Config B, but with a coarser hair texture encoded into the 1 × 1-resolution latent. In fact, the hair texture might be from a learned constant, since it is visible in the mean image as well. The 1 × 1 latent merely modulates this texture. Low-frequency noise artifacts can also be clearly seen after z32 is set to zero. **Config E / different y-repr.**: Note that since Config E also has z64 and z16, those are also cumulatively set to zero, between the z128 / z32 and z32 / z8 columns, respectively.
The intermediate results are not visualized here. Very fine hair texture is carried down to the 32 × 32 resolution. Some high-level features, such as the coloring, are completely lost by the 1 × 1 latent.

![15_image_0.png](15_image_0.png)

Figure 9: 5 L2-closest training images (columns on the right) for generated images with Config A (rows) with the CelebA-HQ dataset. We see that the sampled images do not appear in the dataset.

![16_image_0.png](16_image_0.png)

same training time. All models use truncated sampling with σ = 0.875.

## D Uncurated FFHQ 256 × 256 samples with model Config A

![17_image_0.png](17_image_0.png)

truncation σ = 0.7 for latent resolutions larger than 1.

## E 2D Toymodel

![18_image_0.png](18_image_0.png)

Figure 12: Real NVP-based normalizing flow (Dinh et al., 2016) and our model trained on a 2-dimensional mixture-of-Gaussians target distribution (left). The models have similar capacity. The standard normalizing flow (middle panel) suffers from the well-documented problem of failing to separate the modes due to the invertibility constraint of the architecture. While our model (right panel) does not capture the relative weights of the modes correctly, it captures the multi-modality better than the reference. The dimensionality of the latent y in our model is dim(y) = 1, modeled by a neural spline flow (Durkan et al., 2019) as the prior fprior. In this experiment fcond is a conditional real NVP, and the encoder and decoder are small fully-connected neural networks.

## F Detailed Network Architecture

![19_image_0.png](19_image_0.png)

Figure 13: Detailed architecture. IN denotes instance normalization, which also appears in the U-Nets and among the affine coupling convolutions. The non-linearities are leaky ReLUs, apart from the conditioning of checkerboard-masked U-Net-type affine coupling blocks and the first of the N convolutions of convolutional-type affine coupling blocks.
Multiple units of CNN up/downsample blocks are concatenated if there is a change of resolution differing from 2 (e.g., from y32 to y8). In case of this stacking, the U-Net at the beginning of the CNN upscaler is omitted from all blocks other than the first. The last and first affine coupling layers of a normalizing flow block use U-Nets (apart from y8 and y256) and completely mask out the flow input. That is, the scales and biases are computed only using the conditioning signal.

## G Additional Metrics

Table 1: Negative log-likelihood (NLL, lower values are better) for our model and Glow. The likelihood bound for our model is computed as the VAE ELBO as discussed in Section 2.2, which is not directly the optimization target of our model, partially explaining the performance difference with Glow, which directly optimizes for likelihood. All values are measured by us using our own implementations, apart from Glow / CelebA-HQ 256, which is taken from Kingma & Dhariwal (2018). As our focus is on improving FID, which is not necessarily computed against a specific test set, we do not have a separate test set and all our values are computed against the train set.
| Model / dataset | NLL / bits-per-dimension |
|---------------------------------|--------------------------|
| Ours / CelebA-HQ 256 (5-bit) | ≤ 1.3 |
| Glow / CelebA-HQ 256 (5-bit) | = 1.03 |
| Ours / LSUN-Church 128 (8-bit) | ≤ 4.0 |
| Glow / LSUN-Church 128 (8-bit) | = 3.6 |
| Ours / LSUN-Bedroom 128 (8-bit) | ≤ 3.8 |
| Glow / LSUN-Bedroom 128 (8-bit) | = 3.4 |

## H Hyperparameters And Training Details

| Name | Values |
|------------------------------|------------------------------------------------------|
| Batch size | 16 |
| Batch size Var(X) | 4 |
| Optimizer | Adam (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999 |
| LR (encoders/decoders/flows) | 5 × 10−4, 2 × 10−3, 5 × 10−3 |
| LR decay | Multiplicative (encoders / decoders+flows) 0.92/0.95 |
| Encoder parameter freeze at | 60 epochs |
| Gradient L2 clipping | 50.0 |
| # GPUs | 2× V100 16 GB |
| Train time | 96 h |
| Total parameter count | 80.15 M |

Table 2: Training details for Config A with CelebA-HQ/FFHQ 256 × 256

| Name | Values |
|------------------------------|------------------------------------------------------|
| Batch size | 16 |
| Batch size Var(X) | 8 |
| Optimizer | Adam with β1 = 0.9, β2 = 0.999 |
| LR (encoders/decoders/flows) | 3 × 10−4, 1 × 10−3, 3 × 10−3 |
| LR decay | Multiplicative (encoders / decoders+flows) 0.92/0.95 |
| Encoder parameter freeze at | 60 epochs |
| Gradient L2 clipping | 50.0 |
| # GPUs | 1× V100 16 GB |
| Train time | 96 h |
| Total parameter count | 74.58 M |

Table 3: Training details for Config A (LSUN) with LSUN church / bedroom 128 × 128

Data Preprocessing We augment each dataset by adding uniform 1/255 noise to 8-bit images normalized to [0, 1], on top of which we also add slight zero-mean Gaussian noise with standard deviation 5 × 10−3. During training, we apply random horizontal flips with probability p = 0.5.

FID Measurement When comparing to Glow, we compute the FID using 30000 samples (the full CelebA-HQ dataset).
We use 5-bit dequantization (and the tiny Gaussian noise augmentation mentioned in the previous paragraph) when computing the value for Glow, with the result being a few points weaker (56.8) for 8-bit data. We use truncated sampling with σ = 0.8 when generating images with Glow. Our model uses 8-bit images.

Table 4: Model hyperparameters for Config A. The LSUN-128 models are trained with the same configuration, but with the flows-at-resolutions parameter (Ri) set to [128, 64, 32, 8, 1, prior] instead. The affine-coupling split type uses the format M × split type, where C denotes splits along the channel dimension and SK a spatial checkerboard split with K-pixel alternation. Coupling types are listed starting from the side of the input (e.g., the channel splits are in general closer to the input than the latent). Affine coupling blocks with spatial splits (denoted with SK) use U-Nets as their forward neural networks. There is no additional source of noise for the flow at the highest resolution x = y256, and hence α is not defined there. LeakyReLU non-linearities use slope 0.1.
| Name | Values |
|---------------------------------------|------------------------------------------------------------------------|
| Flows at resolutions (Ri) | [256, 128, 32, 8, 1, prior] |
| Number of channels at flows | [3, 4, 8, 8, 408, 4] |
| Noise scale to flow (α) | [n/a, 0.4, 0.05, 0.05, 0.05, 0.05] |
| Noise scale to decoder (α) | [n/a, 0.4, 0.05, 0.05, 0.075, 0.075] |
| Flow-loss weight (wi) | [1/(256×256×3), 10/(128×128×4), 10/(32×32×8), 1/2^9, 1/2^9, 1/2^9] |
| Flow lengths (K) | [4, 8, 8, 8, 8, 8] |
| Affine-coupling split type | [2C, 2S2], [2C, 2S2, 2S4, 2S8], [2C, 2S2, 2S4, 2S8], [8C], [8C], [8C] |
| Affine-coupling length (N) | [4, 4, 4, 4, 4, 4] |
| 1 × 1 invertible convolution kernel | [free-form, free-form, free-form, free-form, unitary, unitary] |
| Total latent space dimensionality |y| | 271360 |
| Affine-coupling conditioning channels | [16, 32, 64, 128] |
| Affine-coupling hidden layer channels | [32, 64, 128, 128] |
| Encoder hidden layer channels | [64, 256, 256, 512] |
| Decoder hidden layer channels | [64, 256, 512, 512] |

| Name | Values |
|---------------------------------|----------|
| Levels (L) | 5 |
| Depth per level (K) | 24 |
| Coupling type | Additive |
| Hidden channels coupling layers | 256 |

Table 5: Model hyperparameters for our reference Glow implementation, using the notation of Kingma & Dhariwal (2018). We use the Adamax variant of Adam for optimization, with learning rate 5 × 10−3 and batch size 16. The data is dequantized to 8 bits. Gradient magnitude is clipped at 50.0.
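The data preprocessing described in Appendix H (uniform 1/255 dequantization noise on images normalized to [0, 1], slight zero-mean Gaussian noise with standard deviation 5 × 10−3, and random horizontal flips with p = 0.5) can be sketched as follows. The (N, H, W, C) tensor layout and the function name are assumptions of this sketch, not the paper's code.

```python
import numpy as np

def augment(batch_uint8, rng=None):
    """Dequantize 8-bit images and apply the augmentations from Appendix H:
    uniform 1/255 noise, zero-mean Gaussian noise (std 5e-3), and random
    horizontal flips with probability 0.5. Assumed layout: (N, H, W, C)."""
    rng = np.random.default_rng(rng)
    x = batch_uint8.astype(np.float64) / 255.0           # normalize to [0, 1]
    x += rng.uniform(0.0, 1.0 / 255.0, size=x.shape)     # uniform dequantization noise
    x += rng.normal(0.0, 5e-3, size=x.shape)             # slight Gaussian noise
    flip = rng.random(len(x)) < 0.5                      # per-image flip mask
    x[flip] = x[flip, :, ::-1]                           # flip along the width axis
    return x

batch = np.arange(2 * 4 * 4 * 3, dtype=np.uint8).reshape(2, 4, 4, 3)
out = augment(batch, rng=0)
```

The dequantization noise makes the discrete pixel data continuous, which is required for density modeling with flows; the Gaussian component is the source of the low-frequency noise traces discussed in the ablations.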
Review 1: Summary: The authors propose a generative model that is based on conditional normalizing flows. The conditioning factor enables a multi-scale architecture where each layer in the hierarchy generates the conditioning factor of the next layer with latent space at each layer having a different spatial resolution. The model resembles hierarchical VAEs with normalizing flows for the encoder and prior distributions of the latent space and a normalizing flow in the decoder but with an alternative regularization scheme. The authors test the suggested architecture on CelebA-HQ and LSUN churches and bedrooms in terms of FID compared to glow. More importantly, they qualitatively assess the behavior of the latent space at different resolutions showing its capability to capture finer detail at higher resolutions (a weakness of existing flow-based models). Strengths and Weaknesses: strengths: 1) The authors improve the sampling quality of flow-based generative models. 2) They demonstrate qualitatively that i) lower resolution layers are capable of capturing global features of the image. ii) higher resolution stochastic layers do still yield variations in the image at finer details. this is in contrast to glow models where these layers do not affect the sampled images. weaknesses: 1) albeit the proposed model outperforms the glow baseline, it might be good if the authors give a picture of where their model stands in terms of other generative models like vaes. 2) they should also compare with glow in terms of sampling time and model size (trainable parameters) Requested Changes: ****** regarding the prior regularization scheme in equation 4. ***** 1) Do the authors think that such a loss can have a probabilistic/ information theoretic interpretation? 2) Could a regularization coefficient achieve a better trade-off between reconstruction loss and regularization? 
3) The prior portion of the model resembles prior networks [1] albeit in [1] the prior network is not trainable (hence implicitly avoiding degenerate solutions). I think the authors should connect these two concepts 4) Slightly improve writing for better comprehensibility: lacking the entropy term originating from the KL--> lacking the negative entropy term of the encoder. 5) The risk of degenerate encoders is also discussed in [2] (prior to cited Xiao's work) and solved by beta-VAEs (beta<1). For completeness, I also think the authors should cite and comment on [2] ****** regarding the architectural depth ***** 1) I think the authors at this point should also report number of layers (flows) in the architecture and number of transformations per flow for a fair comparison with layers in the VAEs. report of trainable architectures for both model categories might their argument stronger. 2) there are actually more recent deep vaes with shallower architectures [3] which are also not commented [3]. ***** notational issues ***** 1) there seems to be a contradiction in the notation used: right above equation 4: $z_{prior}=f_{prior}(y;\phi)$ in equation 7, they define $z_{prior}$ as the inverse of f_{prior} 2) minor: in section 2.2. The noise can be though**t** References [1] Osband, I., Aslanides, J., and Cassirer, A. (2018). Randomized prior functions for deep reinforcement learning. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31, pages 8617–8629. CurranAssociates, Inc.] [2]Hoffman MD, Riquelme C, Johnson MJ. The β-vae’s implicit prior. InWorkshop on Bayesian Deep Learning, NIPS 2017 (pp. 1-5). [3] Apostolopoulou I, Char I, Rosenfeld E, Dubrawski A. Deep Attentive Variational Inference. InInternational Conference on Learning Representations 2021 Oct 6. 
Broader Impact Concerns: no concerns ================================================== Review 2: Summary: This paper proposes a new invertible generative model using hierarchical normalizing flows. The U-Net-based encoder-decoder model creates an intermediate representation $\boldsymbol{y}$, which has fine-to-coarse spatial hierarchical structures. Then, the model transforms an image and intermediate representations into latent representations $\boldsymbol{z}$ using invertible flow models at each level. This paper applies the proposed method to the CelebA-HQ 256x256 and LSUN datasets and quantitatively evaluates its image generation performance using FID. For CelebA-HQ 256x256, the proposed method is quantitatively evaluated by visualizing generated images and latent representations. Strengths and Weaknesses: **Claim and Evidence** If I understand correctly, this paper points out four problems of Glow and proposes a method to remedy them: 1. difficult to model multi-modal distribution 2. aliasing artifacts in generated images 3. limited variations of generated images 4. uneven contribution of decomposed latent representations - For the first problem, if I get all the information, this paper does not verify it, at least directly. However, I think the fact that FID scores are improved in image generation tasks on two image datasets (CelebA-HQ 256x256 and LSUN) can be considered as evidence to support this claim. - For the second problem, Figure 1 qualitatively evaluates the improvement. - For the third and fourth problems, Figures 5 and 6 evaluate the effect of $\boldsymbol{z}$'s changes on generated images. Although both are qualitative evaluations of a single image, they are consistent with their claims, at least for the example presented. **Audience** The Flow-based model is one of the most promising image generation models, along with the score-based, GAN-based, and Transformer-based models. 
Its improvement is an important issue since the aliasing problem of Glow is directly related to the impression of the generated images. Therefore, this paper should be of interest to TMLR readers. **Weaknesses** The proposed method certainly improves over Glow by the comparisons from various aspects. However, as pointed out in this paper, this paper does not compare the proposed models with other generative models, such as GAN-based and score-based models. In fact, according to [1], the SOTA of CelebA-HQ 256x256 FID is 3.25 of the GAN-based model, and the FID of SOTA of the score-based model is also significantly different from the proposed method. [1] https://paperswithcode.com/sota/image-generation-on-celeba-hq-256x256 Requested Changes: - P.3, Eq. (3): *... where $p\_\theta$ is the distribution induced by a conditional normalizing flow $\boldsymbol{z}\_{\text{cond}} = f\_{\text{cond}}(\boldsymbol{x}; \boldsymbol{y}, \theta)$ with a unit Gaussian prior.*: It may not be easy to understand this part mathematically. Is $p\_\theta$ a probability distribution constructed by push-forwarding $\boldsymbol{z}\_{\text{cond}}\sim \mathcal{N}(0, I)$ using $f^{-1}\_{\text{cond}}(\cdot; \boldsymbol{y}, \theta)$ - P.3, Eq. (4): Similarly, is $p_\varphi$ a probability distribution constructed by push-forwarding $\boldsymbol{z}\_{\text{prior}}\sim \mathcal{N}(0, I)$ using $f^{-1}\_{\text{prior}}$? - P.4: [...] we optimize minimize [...] -> [...] we optimize to minimize [...] - P.5: The noise can be though as [...] -> The noise can be thought of as [...] - P.8, Figure 4: What exactly does the term "truncated sampling" used in image generation (e.g., Figure 4) refer to? Also, how does the parameter $\sigma$ work there? Broader Impact Concerns: This paper does not explicitly mention Broader Impact. However, similarly to other image generation technology, we may carefully consider the social negative impacts of its misuse, such as fake images. 
================================================== Review 3: Summary: This paper proposes a novel flow-based hierarchical generative model. It works by organizing multiple shallow conditional normalizing flows in a hierarchy. A deterministic encoder (with added noise) computes the conditioning information for each level of the hierarchy. The decoder leverages conditioning normalizing flows at each “layer” to generate the image. A learned prior (also a flow-based model) models the aggregate posterior for sampling the initial latent. The proposed model mainly presents improvements compared to GLOW (Kingma & Dhariwal, 2018) in terms of FID. It is also shown how the hierarchical structure of the model is better able to capture variations to the generated images at multiple levels of abstraction. The proposed model is quite complex, and a hyper-parameter analysis reveals that it is also quite sensitive to some of the hyperparameter choices, especially the amount of encoder noise added. Though once properly tuned the model performs is able to improve substantially compared to GLOW. Strengths and Weaknesses: + This is a well written paper that is relatively easy to follow. The clarity can be improved further by adding an algorithm box for sampling and inference in the model, though this is minor. + The experimental evaluation and hyper-parameter analysis is sufficiently thorough, and it is clear that this approach works and can improve over GLOW. + I appreciate that the authors are clear about the current limitations of this work in Section 4, which is important. - The main improvement of the proposed is to GLOW, which is 5 years old and hasn’t been SOTA for image generation purposes in a long time. However, for the purposes of closing the gap of flow-based to diffusion/gan-based models, this work is certainly valuable. 
- The model is quite complex and sensitive to hyper-parameter values (for which no good heuristics are available), though the authors clearly acknowledge this. - No literature from 2023 is covered and only a single paper from 2022. While I am not very familiar with the latest state of flow-based generative models, it would make sense to me to try to include a discussion of more recent advances in generative (flow-based) models more generally. - The main selling point of using a flow-based model seems to be the ability to perform exact inference and compute exact likelihoods. However, the paper is mainly concerned with improvements in terms of FID. While this is understandable, as the comparison is to another flow-based model, it would have also made sense to include some results that better demonstrate the benefit of flow-based models compared to diffusion/gan-based models, even though those are not directly compared to. Requested Changes: I think the paper is sufficiently polished and ready for publication, though I have listed some suggestions for improving this work in the comments above. Broader Impact Concerns: N/A. ================================================== Metareview: Recommendation: Accept as is Comment: This paper proposes an interesting new invertible generative model using hierarchical normalizing flows and demonstrates performance improvements of the suggested approach over Glow. All reviewers were positive about the paper and recommended acceptance. As there were not any notable issues outstanding after the rebuttal, I am happy to recommend the acceptance of the paper "as is". I would still encourage the authors to add the sampling times requested by Reviewer xzyC in the final version. ==================================================
# Algorithm-Agnostic Explainability For Unsupervised Clustering

Anonymous authors

Paper under double-blind review

## Abstract

Supervised machine learning explainability has developed rapidly in recent years. However, clustering explainability has lagged behind. Here, we demonstrate the first adaptation of model-agnostic explainability methods to explain unsupervised clustering. We present two novel "algorithm-agnostic" explainability methods - global permutation percent change (G2PC) and local perturbation percent change (L2PC) - that identify feature importance globally to a clustering algorithm and locally to the clustering of individual samples. The methods are (1) easy to implement and (2) broadly applicable across clustering algorithms, which could make them highly impactful. We demonstrate the utility of the methods for explaining five popular clustering methods on low-dimensional synthetic datasets and on high-dimensional functional network connectivity data extracted from a resting-state functional magnetic resonance imaging dataset of 151 individuals with schizophrenia and 160 controls. Our results are consistent with existing literature while also shedding new light on how changes in brain connectivity may lead to schizophrenia symptoms. We further compare the explanations from our methods to an interpretable classifier and find them to be highly similar. Our proposed methods robustly explain multiple clustering algorithms and could facilitate new insights into many applications. We hope this study will greatly accelerate the development of the field of clustering explainability.

## 1 Introduction

In recent years, research into explainability for supervised learning methods has greatly accelerated. However, relatively little research into methods for explaining unsupervised clustering has occurred, and more approaches need to be developed. 
Adapting and expanding model-agnostic explainability methods from the domain of supervised learning to explain clustering algorithms is a previously unexplored avenue for accelerating the development of the field of clustering explainability. In this study, we demonstrate, for the first time, the viability of this avenue, and introduce two novel clustering explainability methods that are expansions of existing methods for supervised learning. Importantly, these methods are easy to implement and applicable to many clustering algorithms, making them ideal for widespread use. Explainability methods for supervised learning fall into two categories: model-specific and model-agnostic. It should be noted that these explainability methods are distinct from inherently interpretable machine learning methods like logistic regression, which was introduced by Cox (1958), or decision trees. Model-specific methods are only applicable to a specific class of models. For example, layer-wise relevance propagation (LRP) is specific to differentiable classifiers, as explained in Bach et al. (2015), and impurity-based feature importance is specific to decision tree-based classifiers (i.e., Louppe (2014)). In contrast, model-agnostic methods are applicable to a variety of supervised models. Examples of model-agnostic methods include LIME by Ribeiro et al. (2016), SHAP by Lundberg & Lee (2017), permutation feature importance by Fisher et al. (2018), PD plots by Friedman (2001), and ICE plots by Goldstein et al. (2015). Permutation feature importance is unique among these methods for two reasons. (1) It is easy to implement, and (2) it can be used with classifiers that only provide hard classifications. Most other model-agnostic methods are somewhat difficult to implement and require that a classifier output a probability of belonging to a class. We explain the relevance of these two key differences later in this paper. 
It is also worth noting that permutation feature importance is a global method and not a local method. Global methods provide insight into the features that are generally prioritized by a classifier (e.g., impurity, PD plots), and local methods provide insight into the features that are important for the classification of an individual sample (e.g., LRP, LIME, SHAP, and ICE plots). Perturbation, another explainability method, is easy to implement like permutation feature importance and offers local insight. However, like most model-agnostic explainability approaches, it has previously only been used with classifiers that provide soft labeling. Examples of studies using perturbation include works by Ellis et al. (2021) and Fong & Vedaldi (2017). Methods analogous to model-specific explainability and interpretable machine learning have been developed for clustering. While many interpretable clustering methods use decision trees, such as Basak & Krishnapuram (2005); Bertsimas et al. (2018; 2020); Fraiman et al. (2013); Loyola-González et al. (2020), other more distinct approaches have also been developed by Bhatia et al. (2019) and Plant & Böhm (2011). Some methods have properties of both model-specific explainability and interpretable machine learning, like an explainable K-means clustering approach by Frost et al. (2020), and some methods, like the unsupervised K-means feature selection approach by Boutsidis et al. (2009), are feature selection methods that are analogous to model-specific explainability. However, to the best knowledge of the authors, no existing explainability approaches have been developed that are broadly applicable to many different clustering algorithms, and most existing clustering methods still remain unexplainable. This is problematic because many traditional clustering methods like density-based clustering in Sander (2010) have unique characteristics. 
These unique characteristics make them ideal for specific use-cases, for which existing interpretable cluster methods are suboptimal. Model-agnostic explainability methods offer a solution to this problem, and their potential still remains untapped within the space of explainable clustering. If slightly adapted, they could be directly transferred from the domain of supervised learning to unsupervised clustering and could greatly accelerate the development of the field of explainable clustering. We call this new class of unsupervised explainability methods "algorithm-agnostic". There are multiple possible taxonomies of clustering methods, and two common taxonomies are described in Fahad et al. (2014); Rai (2010). For the purposes of this study, however, we consider how model-agnostic explainability methods can be applied to five types of clustering methods: (1) partition-based, (2) density-based, (3) model-based, (4) hierarchical, and (5) fuzzy methods. These methods are respectively described in Jin & Jan (2010); Sander (2010); Banerjee & Shan (2010); Johnson (1967), and Ruspini et al. (2019). Although clustering has been applied across many domains in works like Behera & Panigrahi (2015); Mustaniroh et al. (2018), and Thomas et al. (2018), in this work, we validate our approaches within the field of computational neuroscience for resting-state functional magnetic resonance imaging (rs-fMRI) functional network connectivity (FNC) analysis. Clustering approaches have been applied to FNC data to gain insight into a variety of brain disorders and mechanisms in works by Sendi et al. (2020; 2021a;b) and Zendehrouh et al. (2020). However, the high dimensionality of the data makes understanding clusters extremely challenging, and as a result, most studies only examine a small number of domains and train a supervised machine learning classifier after clustering to gain insight into the identified clusters. 
Here, we demonstrate how model-agnostic explainability methods can be generalized to the domain of explainable clustering by adapting permutation feature importance, a method described in Fisher et al. (2018). Importantly, we adapt permutation feature importance for the two distinguishing characteristics that we previously described. An algorithm-agnostic version of permutation feature importance could easily be implemented by researchers with varying levels of data science expertise and thus be widely adopted across many domains. Moreover, it could provide explanations for both hard and soft clustering methods, unlike most model-agnostic methods that can only explain clustering methods with soft labeling. As such, permutation feature importance should be more widely generalizable to clustering methods than other model-agnostic explainability approaches. We also adapt perturbation, a local model-agnostic explainability method, to provide explanations for clustering algorithms. This adaptation is particularly innovative because perturbation can typically only be applied to methods with soft labeling, and we describe a novel approach that enables it to be applied to methods that use hard labeling. Based upon these adaptations, we present two novel methods for algorithm-agnostic clustering explainability: Global Permutation Percent Change (G2PC) feature importance and Local Perturbation Percent Change (L2PC) feature importance. G2PC provides "global" insight into the features that distinguish clusters, and L2PC provides "local" insight into what makes individual samples belong to a particular cluster. 
We demonstrate the utility of these approaches for the previously mentioned five categories of clustering methods on low-dimensional synthetic data and further demonstrate how they can provide insight into high-dimensional rs-fMRI FNC data from the Functional Imaging Biomedical Informatics Research Network (FBIRN) dataset, which includes 151 subjects with schizophrenia and 160 controls. Further details on the dataset are described in van Erp et al. (2015). We compare their explanations to those of an interpretable machine learning classifier to better understand the reliability of each method.

## 2 Methods

Here we describe (1) our proposed clustering explainability methods and (2) the experiments with which we examine their utility.

## 2.1 Explainability Methods For Clustering

We first discuss permutation feature importance, a model-agnostic explainability method, from which G2PC was derived. We then discuss G2PC and L2PC.

## 2.1.1 Permutation Feature Importance

Permutation feature importance, a form of feature perturbation, is a well-known model-agnostic explainability method that is typically applied within the context of supervised machine learning algorithms. It was originally developed for random forests by Breiman (2001) and was later expanded to be model-agnostic by Fisher et al. (2018). It involves a straightforward procedure for estimating feature importance that is visualized in Algorithm 1.

Algorithm 1 Permutation Feature Importance

1: **function** permute_feature(X2, j, features)
2:  p ← permute(0 to N − 1) ▹ p - permuted indices, N - number of samples
3:  X2[:, features = j] ← X2[p, features = j]
4:  **return** X2
5: **end function**
6:
1: **procedure** Permutation Feature Importance(model, X, Y) ▹ X - data, Y - labels
2:  Y1 ← predict(X)
3:  performance1 ← performance(Y, Y1)
4:  **for** j in J features **do** ▹ J - number of features
5:   **for** k in K repeats **do** ▹ K - number of repeats
6:    X2 ← copy(X)
7:    X2 ← permute_feature(X2, j, features)
8:    Y2 ← predict(X2, model)
9:    performance2 ← performance(Y, Y2)
10:   Importance[j, k] ← (performance2 − performance1) / performance1
11:   **end for**
12:  **end for**
13:  **return** Importance ▹ Results
14: **end procedure**

## 2.1.2 Global Permutation Percent Change (G2PC) Feature Importance

The permutation feature importance approach can be generalized to provide an estimate of feature importance for clustering algorithms. G2PC feature importance is highly similar to the standard permutation feature importance applied to supervised machine learning models. However, there are several key distinctions. Rather than calculating the ratio of the change in performance before and after permutation, G2PC calculates the percentage of all N samples that change from their original pre-permutation clusters to a different post-permutation cluster. The percentage of samples that switch clusters following the permutation of a particular feature reflects the importance of that feature to the clustering, and this metric is one of the key novelties of our adaptation. 
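As an illustration of this percent-change metric, the following minimal Python sketch applies it to a k-means clustering with scikit-learn. The helper `g2pc` and the toy data are our own illustrative assumptions, not the authors' released implementation; the reassignment of permuted samples uses `KMeans.predict`:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def g2pc(model, X, C, n_repeats=20):
    """Fraction of samples changing cluster after permuting each feature."""
    N, J = X.shape
    pct_chg = np.zeros((J, n_repeats))
    for j in range(J):
        for k in range(n_repeats):
            X2 = X.copy()
            X2[:, j] = X2[rng.permutation(N), j]  # permute feature j across samples
            C2 = model.predict(X2)                # reassign to existing clusters
            pct_chg[j, k] = np.mean(C != C2)      # fraction that switched clusters
    return pct_chg

# Toy data: two well-separated clusters that differ only in feature 0.
X = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
               rng.normal([5, 0], 0.1, (50, 2))])
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
C = model.predict(X)
importance = g2pc(model, X, C)
print(importance.mean(axis=1))  # feature 0 important, feature 1 unimportant
```

On this toy problem, permuting the separating feature sends roughly half of the samples to the other cluster, while permuting the uninformative feature changes almost nothing.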
We also added a grouping component to G2PC that allows related features to be permuted simultaneously. Figure 1 and Algorithm 2 portray the method in greater detail. The *groups* parameter is an array that contains an identifying group number for each feature. The *model* parameter is an object that contains the information necessary to assign a new sample to the preexisting clusters. Parameters X and C are the original data and cluster assignments, respectively. Pct_Chg stores the calculated percent change importance values.

Algorithm 2 G2PC: Global Permutation Percent Change Feature Importance

1: **function** permute_group(X2, j, groups)
2:  p ← permute(0 to N − 1)
3:  X2[:, groups = j] ← X2[p, groups = j]
4:  **return** X2
5: **end function**
6:
1: **procedure** G2PC(model, X, C, groups) ▹ X - data, C - cluster assignments, groups - group each feature belongs to
2:  **for** j in J groups **do**
3:   **for** k in K repeats **do**
4:    X2 ← copy(X) ▹ X2 - copy of samples to permute
5:    X2 ← permute_group(X2, j, groups)
6:    C2 ← recluster(X2, model) ▹ See Section 2.1.4
7:    Pct_Chg[j, k] ← sum(C ≠ C2) / N ▹ C2 - perturbed sample cluster assignments
8:   **end for**
9:  **end for**
10: **return** Pct_Chg ▹ Results
11: **end procedure**

Figure 1: Diagram of G2PC. For G2PC, the data is first input into a clustering algorithm. After clustering, each of J feature groups is permuted across samples, and the resulting perturbed samples are reassigned to one of the previously identified clusters. Then the percent of samples that change cluster assignment is calculated to give insight into the relative impact of permuting that group of features, and the process is repeated K times. Rectangles with rounded corners reflect operations, and rectangles without rounded corners reflect inputs or outputs. 
Gold rectangles refer to operations or inputs that are identical across both G2PC and L2PC, and blue boxes refer to operations or outputs that are distinct across methods.

## 2.1.3 Local Perturbation Percent Change (L2PC) Feature Importance

L2PC feature importance extends perturbation to provide explainability for clustering algorithms and is closely related to G2PC. Rather than permuting across all samples and measuring the overall clustering percent change, L2PC swaps (i.e., perturbs) each of J features of a sample with values randomly selected from the same feature of other samples in the dataset M times and calculates the percentage of times that the sample changes clusters following the perturbations. It performs this operation for a pre-defined number of repeats (K) per sample. Importantly, using the percent of perturbations that cause a change in cluster assignment, rather than measuring the change in classification probability associated with a perturbation, enables us to extend perturbation to explain methods that employ hard labeling or cluster assignments. The perturbation percent change values obtained from each repeat could be used to obtain the statistical significance of each feature by comparing the values to a null hypothesis of zero perturbation percent change. As the percent change increases, the importance of a feature for the clustering of the individual sample increases. While L2PC is principally a method for obtaining a measure of feature importance for each sample individually, L2PC can be applied to each sample in a data set to obtain a global measure of feature importance like other local methods. The mean of the resulting distribution of perturbation percent change values across samples can provide a global measure of feature importance. The application of L2PC as a global measure of feature importance is much more computationally intensive than G2PC. 
However, the computational complexity is not problematic for data sets with a small number of samples. Using L2PC as a global metric in high dimensions could be made more tenable by using a smaller number of repeats or perturbations per repeat, but that would reduce the reliability of the resulting importance estimates. Additionally, the approach can easily be parallelized to greatly decrease its execution time. We also added a grouping component to L2PC that allows related features to be perturbed simultaneously. Figure 2 and Algorithm 3 portray the method in greater detail. The variables used in Algorithm 3 are the same as those used in Algorithm 2.

Algorithm 3 L2PC: Local Perturbation Percent Change Feature Importance

1: **function** permute_group(X2, X, M, j, groups)
2:  p ← permute(0 to N − 1) ▹ generate permuted indices
3:  Xperturb[:, groups = j] ← X[p < M − 1, groups = j] ▹ get samples at permuted indices below M
4:  **return** Xperturb
5: **end function**
6:
1: **procedure** L2PC(model, X, C, groups)
2:  **for** j in J groups **do**
3:   **for** n in N samples **do**
4:    **for** k in K repeats **do**
5:     Xperturb ← copy(X[n, :]) ▹ M x F
6:     Xperturb ← permute_group(Xperturb, X, M, j, groups)
7:     C2 ← recluster(Xperturb, model) ▹ See Section 2.1.4
8:     Pct_Chg[n, j, k] ← sum(C[n] ≠ C2) / M
9:     **end for**
10:   **end for**
11:  **end for**
12:  **return** Pct_Chg ▹ Results
13: **end procedure**

## 2.1.4 Additional Notes On G2PC And L2PC

It should be noted that one component of both G2PC and L2PC is the assignment of new samples to existing clusters. Some might argue that the assignment of new samples to existing clusters violates the intended purpose of clustering methods. Regardless, it is still possible to effectively assign new samples to existing clusters without the use of a supervised classifier. For k-means, samples can be assigned to the cluster with the nearest center. 
For Gaussian mixture models (GMMs), the locations of a new sample within the existing cluster probability density functions can be obtained, and the sample can be assigned to the cluster with the highest probability. For fuzzy c-means, new samples can be assigned to existing clusters by retraining a new clustering model with fixed cluster centers and parameters identical to those of the original clustering. K-means, Gaussian mixture models, and fuzzy c-means clustering have functions for predicting the assignment of new samples in scikit-learn by Pedregosa et al. (2011) and scikit-fuzzy1. For DBScan, new samples can be assigned to the cluster of a core sample that is within a pre-defined ε distance, and new samples can be assigned to clusters derived from agglomerative clustering by placing them in the cluster of the nearest sample.

Figure 2: Diagram of L2PC. The input data is assigned to clusters. Then each sample is duplicated M times, and the feature group j of each duplicate sample is perturbed by values randomly selected from the same feature group of other samples in the dataset. The perturbed duplicate samples are then reassigned to the previously identified clusters, and the percent of times that the perturbed sample switches cluster assignments is calculated. The process is repeated for each of J feature groups and K times. Rectangles with rounded corners reflect operations, and rectangles without rounded corners reflect inputs or outputs. Gold rectangles refer to operations or inputs that are identical across both G2PC and L2PC, and blue boxes refer to operations or outputs that are distinct across methods.

## 2.2 Description Of Experiments

We evaluate the performance of the two explainability methods through a series of 3 experiments. 
The first two experiments involve the use of low-dimensional, synthetic, ground-truth data, and the last experiment involves the application of the methods to functional network connectivity values extracted from the FBIRN rs-fMRI data set that is described in van Erp et al. (2015). We apply five popular clustering methods to each dataset and apply both novel explainability methods to evaluate the relative importance of each feature to the clustering.

## 2.2.1 Synthetic Datasets

We generated two synthetic datasets with different numbers of clusters and random distributions.

Table 1: Synthetic dataset distributions

|         | Synthetic dataset 1 |         | Synthetic dataset 2 |          |          |          |
|---------|---------------------|---------|---------------------|----------|----------|----------|
| Feature | Class 1             | Class 2 | Class 1             | Class 2  | Class 3  | Class 4  |
| 1       | 11 ± 1              | 3 ± 1   | 3 ± 0.5             | 11 ± 0.5 | 19 ± 0.5 | 27 ± 0.5 |
| 2       | 9 ± 1               | 3 ± 1   | 3 ± 0.5             | 9 ± 0.5  | 15 ± 0.5 | 21 ± 0.5 |
| 3       | 7 ± 1               | 3 ± 1   | 3 ± 0.5             | 7 ± 0.5  | 11 ± 0.5 | 15 ± 0.5 |
| 4       | 5 ± 1               | 3 ± 1   | 3 ± 2               | 5 ± 2    | 7 ± 2    | 9 ± 2    |
| 5       | 3 ± 1               | 3 ± 1   | 3 ± 2               | 4 ± 2    | 5 ± 2    | 6 ± 2    |

1 https://github.com/scikit-fuzzy/scikit-fuzzy

**Synthetic dataset 1** Synthetic dataset 1 consists of 5 features with two 50-sample clusters. The features, numbered 1 through 5, each consist of random variables generated from two separate normal distributions that formed two clusters. As the feature number increases (i.e., from feature 1 to feature 5), the difference between the means (µ) of the two random variable normal distributions within each feature decreases, meaning that the overall expected importance of the features decreases. The standard deviation (σ) of the random variables is consistent across both clusters and all 5 features. Table 1 shows the µ and σ of the random variables. To test G2PC, we generated 100 sets of simulated data, and to test L2PC, we generated 1 set of simulated data. 
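For concreteness, sampling synthetic dataset 1 from the distributions in Table 1 can be sketched in a few lines of NumPy (the function name and random seed are our own choices, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-feature class means for synthetic dataset 1 (Table 1); sigma = 1 throughout.
means_c1 = [11, 9, 7, 5, 3]
means_c2 = [3, 3, 3, 3, 3]

def make_dataset1(n_per_cluster=50):
    """Draw two 50-sample clusters over 5 normally distributed features."""
    c1 = np.column_stack([rng.normal(m, 1.0, n_per_cluster) for m in means_c1])
    c2 = np.column_stack([rng.normal(m, 1.0, n_per_cluster) for m in means_c2])
    X = np.vstack([c1, c2])
    y = np.array([0] * n_per_cluster + [1] * n_per_cluster)  # ground-truth clusters
    return X, y

X, y = make_dataset1()
print(X.shape)  # (100, 5)
```

Because the class means coincide for feature 5 and converge toward it, a sensible importance estimator should rank feature 1 highest and feature 5 lowest on this data.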
Figure 3 depicts synthetic dataset 1 following dimensionality reduction with t-SNE.

**Synthetic dataset 2** Synthetic dataset 2 consists of five features with four 50-sample clusters. It provides an opportunity to examine the behavior of the explainability methods in a context involving more than two clusters. The features, numbered 1 through 5, each consist of random variables generated from four separate normal distributions that formed four clusters. As the feature number increases, the difference between the µ of each of the random variables within the features decreases. Additionally, the variance of the first three features is below unit variance, and the variance of the last two features is above unit variance. As such, we expect the feature importance methods to assign decreasing importance from Feature 1 to Feature 5 and to assign significantly less importance to Features 4 and 5 than Features 1 through 3. Table 1 shows the mean and SD of the random variable distributions of each feature. To test G2PC, we generated 100 sets of simulated data, and to test L2PC, we generated 1 set of simulated data. Figure 3 depicts synthetic dataset 2 following dimensionality reduction with t-SNE.

Figure 3: Display of Synthetic Data with t-SNE Dimensionality Reduction. Panel A shows synthetic dataset 1 following dimensionality reduction, and Panel B shows synthetic dataset 2 following dimensionality reduction. Dimensionality reduction is performed using t-SNE, which was introduced by van der Maaten & Hinton (2008), with 3,000 iterations and a perplexity of 35. Each cluster is denoted with a different color.

## 2.2.2 FBIRN rs-fMRI Dataset

We use rs-fMRI and clinical data from the FBIRN dataset (van Erp et al., 2015). The dataset can be made available upon a reasonable request emailed to the corresponding author and contingent upon IRB approval. The dataset contains 151 schizophrenia (SZ) subjects and 160 healthy controls (HC). 
Details on the data collection and preprocessing are found in Appendix A. After performing initial preprocessing, we perform group independent component analysis (ICA) and extract 53 independent components (ICs) with Neuromark by Du et al. (2020). We assign the ICs to seven domains, including subcortical network (SCN), auditory network (ADN), sensorimotor network (SMN), visual network (VSN), cognitive control network (CCN), default-mode network (DMN), and cerebellar network (CBN). A full list of all components can be found in Du et al. (2020) and Sendi et al. (2021b). After extracting the ICs, we use Pearson correlation between each IC time-series pair to estimate each participant's static functional network connectivity (FNC). This results in 1378 whole-brain connectivity values (i.e., features) for each individual.

## 2.2.3 Clustering Methods

We utilize algorithms from 5 different categories of clustering methods: (1) k-means clustering, a partitioning method, (2) DBScan, a density-based method, (3) GMM, a model-based method, (4) agglomerative clustering (AGC), a hierarchical clustering method, and (5) fuzzy c-means clustering, a fuzzy clustering method. The GMM, AGC, k-means, and DBScan are implemented in scikit-learn, which was developed by Pedregosa et al. (2011), and fuzzy c-means is implemented in scikit-fuzzy2.

## 2.2.4 Experiment Parameters For Clustering And Explainability Methods

For the synthetic datasets, we provide the ground-truth number of clusters to the k-means, GMM, AGC, and fuzzy c-means algorithms. For the rs-fMRI FNC analysis, we optimize the number of clusters for each algorithm via the silhouette method, which was introduced in Kaufman & Rousseeuw (1990). For each DBScan analysis, the ε distance parameter is optimized using the silhouette method, and the minimum number of points parameter equals 4. For fuzzy c-means in all of the analyses, m = 2, error = 0.005, and maximum number of iterations = 1000. 
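Silhouette-based selection of the number of clusters can be sketched as follows (shown for k-means with scikit-learn; the helper `best_k_silhouette` and the toy blobs are illustrative assumptions, not the paper's code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

def best_k_silhouette(X, k_range=range(2, 7)):
    """Return the cluster count maximizing the mean silhouette score."""
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get), scores

# Three well-separated blobs: the silhouette criterion should pick k = 3.
X = np.vstack([rng.normal(c, 0.2, (40, 2)) for c in ([0, 0], [5, 0], [0, 5])])
k, scores = best_k_silhouette(X)
print(k)
```

For DBScan, the same criterion can be evaluated over a grid of ε values rather than over cluster counts.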
When calculating the percent change in clustering, the cluster of each sample is assigned to the class with the highest predicted likelihood. Both synthetic datasets are z-scored on a feature-wise basis. The number of repeats (K) parameter is set to 100 for the G2PC and L2PC analyses on all datasets, and the number of perturbations per repeat (M) is set to 30 in L2PC for all datasets. We perform G2PC and L2PC on the FBIRN data, both with grouping based upon the established FNC domains and without grouping. To compare our G2PC and L2PC results for the FBIRN dataset to an existing interpretable machine learning method, we train a logistic regression classifier with elastic net regularization (LR-ENR) with the k-means and fuzzy c-means cluster assignments as labels. We train the LR-ENR models with 10 outer and 10 inner folds. In each fold, 64%, 16%, and 20% of the samples are randomly assigned to training, validation, and test sets. After training and testing the classifiers, we output their coefficient values and multiply them by each test sample in their respective fold. We then calculate the absolute value of the resulting values. This generates a measure of the effect that each feature had upon the model. We then compute the mean effect within each fold and the mean of the mean effect across all features associated with each FNC domain. This provides an estimate of the effect of each domain on the classifier. Python code for all experiments is on GitHub3. Note that Figures 5 and 6 are generated separately in MATLAB R2020b (MathWorks, Inc) with the results from the Python scripts.

## 3 Results

Here we detail the results for both the synthetic data and rs-fMRI FNC experiments.

## 3.1 Synthetic Datasets

As shown in Figure 4, all clustering methods successfully identify the underlying clusters. Importance generally decreases from features 1 to 5 for synthetic dataset 1 in alignment with expectations for all clustering and both explainability methods. 
It should be noted, however, that the sensitivity of each of the clustering methods to perturbation varies on a feature-to-feature basis and on an overall percent change basis. For G2PC, features 4 and 5, which are supposed to be unimportant, are generally categorized as having little to no effect upon the clustering. Additionally, importance generally decreases from feature 1 to feature 2 and feature 3. However, GMMs with G2PC seem to mainly emphasize one feature (i.e., feature 1) in their clustering, contrary to the other methods that place more importance upon features 2 and 3. Also, some methods seem much more sensitive to perturbation than others. The mean percent change for feature 1 of both the GMM and DBScan is around 30%, while for feature 1 of k-means, AGC, and fuzzy c-means, the median importance is around 2% to 5%. Similar findings occur for L2PC with Synthetic Dataset 1. As can be determined via the black line on Figure 4, the mean feature importance across all samples and repeats is very similar to the median values of the G2PC results. For k-means, AGC, and fuzzy c-means, only a few samples are sensitive to perturbation. However, in the GMM and DBScan, a majority of the samples are sensitive to perturbation. It can be seen for some of the clustering methods like fuzzy c-means, AGC, k-means, and the GMM that there are differences in the sensitivity of samples to perturbation on a per-cluster basis.

2 https://github.com/scikit-fuzzy/scikit-fuzzy
3 https://github.com/cae67/G2PC_L2PC

Figure 4: G2PC and L2PC Results for Synthetic Data. The top two rows show the G2PC and L2PC results for Synthetic Dataset 1, and the bottom two rows show the G2PC and L2PC results for Synthetic Dataset 2. Each column shows the results for a different clustering algorithm. Note that as expected, the permutation and perturbation percent change values, in general, decrease from left to right, indicating a relative decrease in feature importance. For the L2PC results, the samples initially belonging to each cluster have different colors. Each line connecting different points reflects an individual sample. Correctly and incorrectly clustered samples are represented by "o" and "x", respectively, and the accuracy of the clustering is indicated by the percentage in the title associated with each subplot. For G2PC, the percentage in the title indicates the mean accuracy across the 100 iterations of datasets.

For Synthetic Dataset 2, most clustering methods have a high level of accuracy when identifying the underlying clusters, as shown in Figure 4. K-means, fuzzy c-means, and AGC do not have 100% accuracy for L2PC. Additionally, DBScan has negligible mean accuracy for G2PC, as across 100 iterations of datasets, it frequently identified fewer than 4 clusters. The G2PC and mean L2PC results look as expected. The first three features are generally ranked as much more important than the last two features, which accounts for the difference in variance of features 1 to 3 and features 4 to 5. Importance also generally decreases from features 1 to 3 and 4 to 5, which accounts for the differences in the means of the random variables of each cluster. It is interesting that the GMM and AGC have a sharp increase in mean L2PC importance from feature 4 to 5, while the other three cluster methods have the expected decrease. For AGC, this might be attributed to its inaccurate clustering of some samples. However, for the GMM, it is unclear what may be the cause. It may be attributable to the random initialization of the data, as the GMM L2PC values differ markedly from the G2PC values. The samples in some clusters of the GMM and AGC L2PC results are highly distinct. For the GMM, the red and grey clusters have higher importance for features 1 to 3 and lower importance for features 4 and 5 than the red and blue clusters. 
For features 1 to 3 of AGC, the red and grey clusters have high importance, the blue cluster has moderate importance, and the green cluster has low importance. It is interesting that the L2PC results show increased variance for features 4 and 5 for most of the clustering methods and that the values do not seem to be cluster dependent. Given that many clustering methods can identify clusters regardless of whether distinct clusters actually exist and that the samples in features 4 and 5 are very dense, it is possible that smaller perturbations of those features may affect some samples more than others.

## 3.2 FBIRN rs-fMRI Data

All clustering methods, except for DBScan, which only finds noise points, identify 2 clusters as optimal. The identification of two clusters is consistent with the underlying SZ and HC groups. K-means has an accuracy, sensitivity, and specificity of 63.99%, 78.81%, and 52.98%, respectively. Fuzzy c-means has better clusters with an accuracy, sensitivity, and specificity of 68.81%, 74.17%, and 67.55%, respectively. Permutation and perturbation do not have a widespread effect upon the clustering when each of the 1378 whole-brain correlation values is perturbed individually. However, when we permute or perturb the whole-brain correlation values within each set of inter-domain or intra-domain features simultaneously, we obtain feature importance results for k-means and fuzzy c-means that can be seen in Figure 5. Panels A and B are box plots of the G2PC importance values for each of the 100 repeats, and, for the sake of easy visualization, panels C and D reflect the mean perturbation percent change across all repeats and perturbation instances for each sample, where the green samples and the blue samples belong to the SZ dominant cluster and the HC dominant cluster, respectively. Panels E and F show the mean effect of each domain upon the LR-ENR classifiers.
Figure 6 shows the values of the mean FNC for the SZ dominant cluster minus the mean FNC for the HC dominant cluster. The domains surrounded by white boxes were those identified by G2PC and L2PC as most important. The figure uses the magma colormap, as described by Biguri (2021). The feature importance results are highly consistent across both clustering methods and all explainability methods. LR-ENR classifies the samples as their assigned clusters with a µ area under the receiver operating characteristic curve (AUC) of 99.86 and σ of 0.14 for k-means and a µ AUC of 99.48 and σ of 0.63 for fuzzy c-means, indicating that LR-ENR can likely identify the patterns learned by the clustering algorithms. The similarity between the resulting importance estimates for LR-ENR and those for G2PC and L2PC supports the validity of our novel methods. Top domains identified by k-means with G2PC and L2PC include the cross-correlation between the ADN and CCN (i.e., ADN/CCN), the SCN/VSN, the CCN/DMN, and the SMN/DMN. While LR-ENR generally agrees on these top domains, it does not find SMN/DMN to be very important. Top domains identified by fuzzy c-means with G2PC, L2PC, and LR-ENR include the ADN/CCN, the CCN/DMN, the SCN/VSN, ADN, and SMN/VSN. For k-means, relative to the HC dominant group, the SZ dominant group has higher ADN/CCN FNC for around half of the domain, much higher levels of SCN/VSN FNC, a mixture of moderately higher to lower levels of CCN/DMN FNC, and a mixture of higher to lower SMN/DMN connectivity. For fuzzy c-means, relative to the HC dominant group, the SZ dominant group has ADN/CCN FNC values that are higher for around half of the domain, a mixture of higher to lower CCN/DMN FNC values, much higher SCN/VSN FNC values, slightly lower ADN FNC values, and much smaller SMN/VSN FNC values. Both G2PC and L2PC indicate that a minority of samples are sensitive to perturbation, and L2PC demonstrates that many of the samples that are sensitive are very sensitive. 
This may indicate that those samples are closer to the boundaries of their clusters or inherently noisier for some reason. For fuzzy c-means, more samples belonging to the SZ dominant cluster rather than the HC dominant cluster seem sensitive to perturbation. Most of the HC samples that are sensitive to perturbation seem to be correctly assigned to the HC dominant cluster. In contrast, most of the subjects in the SZ dominant cluster that are sensitive to perturbation seem to actually be HC subjects. For k-means, some samples in the HC dominant cluster that are actually SZ subjects seem to be extremely sensitive to perturbation, while other samples belonging to the SZ dominant cluster that are actually HC subjects are sensitive to perturbation at smaller levels.

![10_image_0.png](10_image_0.png)

Figure 5: FBIRN G2PC and L2PC Results. Panels A and B show the G2PC results for k-means and fuzzy c-means clustering, respectively, and Panels C and D show the L2PC results for k-means and fuzzy c-means clustering, respectively. Panels E and F show the mean effect results for logistic regression across 10 folds. The x-axis of each panel shows the domains in order of most important to least important based upon their mean value, and the y-axis shows the perturbation or permutation percent change or mean effect. In panels C and D, the samples belonging to the SZ and HC dominant clusters are green and blue, respectively. The values marked with an "o" or an "x" indicate the samples that were correctly or incorrectly clustered, respectively. The black line on Panels C and D reflects the mean L2PC values across all subjects and provides a measure of global feature importance. The lines connecting different points reflect individual samples.

## 4 Discussion

In this study, we propose that model-agnostic methods for supervised machine learning explainability can be adapted for use in unsupervised clustering explainability.
We further demonstrate two novel approaches for clustering explainability: G2PC feature importance and L2PC feature importance. G2PC provides a global measure of the relative importance of features to the clustering, and L2PC provides a local measure of which features are important to the clustering of an individual sample. Our adaptation of permutation feature importance and perturbation provides our methods with two unique capabilities: (1) they are easy to implement, and (2) they are widely applicable across clustering methods. These capabilities could enable our approaches to be highly impactful across the variety of fields in which clustering algorithms are used and for individuals with a wide range of data science expertise. We apply both G2PC and L2PC to clusters generated by five popular clustering algorithms from low-dimensional synthetic data and demonstrate that they can work with each clustering algorithm to identify the expected importance of each feature. In some instances with GMMs, several of the features that are expected to be important to the clustering of the synthetic data seem to be unimportant after analysis with G2PC and L2PC. G2PC and L2PC enable the comparison of feature importance across different clustering algorithms via a single percent change metric. As such, they could eventually be used to provide greater insight into the similarities and differences of clustering algorithms.

![11_image_0.png](11_image_0.png)

Figure 6: FNC Clustering Results. Panel A shows the result of the mean FNC of the SZ dominant k-means cluster minus the HC dominant cluster, and Panel B shows the result of the mean FNC of the SZ dominant fuzzy c-means cluster minus the HC dominant cluster. The colormaps to the right of each matrix show the magnitude of its values. The black grid denotes the boundaries of FNC domains, and the white rectangles indicate the domains that were identified as most important for the clustering.
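To make the two measures described above concrete, here is a minimal, self-contained sketch of G2PC and a simplified L2PC. This is our own illustrative code, not the authors' released implementation (their repository is linked in footnote 3); it assumes a fitted clustering model exposing a hard-assignment `predict`, which sidesteps label switching because perturbed samples are classified into the existing clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def g2pc(predict, X, n_repeats=30, rng=None):
    """G2PC: permute one feature at a time across all samples and report
    the mean percentage of samples whose cluster assignment changes."""
    rng = np.random.default_rng(rng)
    base = predict(X)
    n, d = X.shape
    importance = np.zeros(d)
    for j in range(d):
        changed = 0.0
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = Xp[rng.permutation(n), j]  # shuffle feature j only
            changed += np.mean(predict(Xp) != base)
        importance[j] = 100.0 * changed / n_repeats
    return importance

def l2pc(predict, X, i, n_draws=30, rng=None):
    """Simplified L2PC: perturb one feature of sample i by replacing it
    with values drawn from other samples, and report how often the
    sample's cluster assignment changes."""
    rng = np.random.default_rng(rng)
    base = predict(X[i:i + 1])[0]
    n, d = X.shape
    importance = np.zeros(d)
    for j in range(d):
        changed = 0
        for _ in range(n_draws):
            x = X[i].copy()
            x[j] = X[rng.integers(n), j]  # borrow feature value from another sample
            changed += int(predict(x[None, :])[0] != base)
        importance[j] = 100.0 * changed / n_draws
    return importance

# Toy usage: two clusters separated along feature 0 only, so feature 0
# should dominate both the global and the local importance scores.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 0.1, (50, 2)),
               rng.normal([5.0, 0.0], 0.1, (50, 2))])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
global_imp = g2pc(km.predict, X, rng=0)
local_imp = l2pc(km.predict, X, i=0, rng=0)
```

Here L2PC draws replacement values from other samples; the paper's exact perturbation scheme may differ, but the percent change bookkeeping is the same.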
This percent change metric is a vital and novel component of our adaptation. While G2PC and L2PC work well for low-dimensional data across all of the clustering methods with which we pair them, they were not as generalizable to high-dimensional data as we first hoped. To examine the utility of G2PC and L2PC for high-dimensional data and for neuroimaging analysis, we apply all 5 clustering methods with G2PC and L2PC to FNC data extracted from the FBIRN dataset and identify key features differentiating the SZ dominant cluster and the HC dominant cluster. However, very few of the individual features seem to have an effect upon the clustering. This is not entirely surprising given the high dimensionality of the data and that the Euclidean distance is used to cluster the data. This could indicate a limitation of G2PC and L2PC for high-dimensional data. To control for this potential limitation, we implement a grouping component that allows related features to be perturbed or permuted simultaneously, and we obtain feature importance results for both k-means and fuzzy c-means clustering that are highly consistent. These consistent results support the reliability of the explainability methods for those clustering algorithms in high dimensions. The results obtained for G2PC and L2PC are also highly comparable to those for LR-ENR. The grouping component of G2PC and L2PC is ideal for high-dimensional datasets with features that are divisible into groups based upon domain knowledge. A known weakness of perturbation and permutation methods is that they can generate samples outside of the typical data distribution and produce unreliable results when a high degree of correlation exists between some features. The grouping parameter can help address this problem by grouping highly correlated features during analysis.
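A minimal sketch of this grouped strategy, under our own simplifying assumptions (all function names are illustrative, not from the paper's code): features in a group are permuted with the same row permutation, so within-group correlations are preserved, and a greedy correlation threshold stands in for domain-knowledge groups when none are available.

```python
import numpy as np

def grouped_permutation_change(predict, X, groups, n_repeats=30, rng=None):
    """Permute all features in a group with the SAME row permutation and
    report the mean percentage of samples whose assignment changes."""
    rng = np.random.default_rng(rng)
    base = predict(X)
    n = X.shape[0]
    scores = {}
    for name, idx in groups.items():
        changed = 0.0
        for _ in range(n_repeats):
            Xp = X.copy()
            perm = rng.permutation(n)
            Xp[:, idx] = Xp[perm][:, idx]  # joint shuffle keeps within-group structure
            changed += np.mean(predict(Xp) != base)
        scores[name] = 100.0 * changed / n_repeats
    return scores

def correlation_groups(X, threshold=0.8):
    """Greedy grouping of highly correlated features: an illustrative
    stand-in when domain-knowledge groups are unavailable."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    unassigned = set(range(X.shape[1]))
    groups, g = {}, 0
    while unassigned:
        seed = min(unassigned)
        members = [j for j in sorted(unassigned) if corr[seed, j] >= threshold]
        for j in members:
            unassigned.remove(j)
        groups[f"group{g}"] = members
        g += 1
    return groups
```

With this design, a group of nearly duplicated features is shuffled as one block, so the permuted data stays closer to the original distribution than independent per-feature shuffles would.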
When groups cannot be easily identified based on domain knowledge, an alternative approach would be to examine the correlation between each feature to find groups of features that are highly correlated or to randomly generate groups of features of size n.

Our results for the FBIRN FNC analysis agree with and extend existing literature. Four cross-network domain sets (ADN/CCN, SCN/VSN, CCN/DMN, SMN/DMN) and one network (ADN) are most important. This is consistent with Liang et al. (2006), who showed that schizophrenia is linked to widespread dysconnectivity across the brain. Additionally, our results show a disrupted (i.e., both increased and decreased) pattern in different brain networks. For example, we find that VSN/SMN FNC is lower in SZ subjects than in HC subjects. In fact, Chen et al. (2014) report a decrease in SZ subjects' SMN/VSN FNC relative to HC subjects. This could potentially explain the impairment of sensory information processing in SZ subjects. Multisensory information processing is a prerequisite for self-awareness. It has been proven that matching visual perception and proprioceptive signals from the SMN is necessary for the preservation of self-consciousness, as laid out in Ehrsson (2007). However, Medalia & Lim (2004) find that these signals are impaired in SZ subjects. Therefore, the disconnection among SZ sensory networks could potentially explain the underlying mechanisms of the self-awareness deficit in SZ subjects. We also find a decrease in ADN connectivity in SZ subjects, which aligns with findings in Li et al. (2019). This might explain the link that was found by Linszen et al. (2016) between hearing loss at an early age and the later development of schizophrenia. Interestingly, we find an increase in the ADN/CCN FNC in SZ subjects. Increased ADN/CCN FNC could serve as a compensatory mechanism and warrants investigation in a prospective study. We also find an increase in the FNC between the VSN and SCN. This is supported by Yamamoto et al.
(2018), who show an increased FNC between the thalamus (i.e., part of the SCN) and occipital cortices/postcentral gyri (i.e., part of the VSN) in SZ subjects compared with that of HC subjects. Throughout each of our experiments, we utilize similar G2PC and L2PC settings. Different parameter settings would likely produce slightly different results, and it might be helpful for future studies to examine the effects of the parameter settings. We examined the effects of different values of K upon G2PC and L2PC in Appendix B. For L2PC, a decreased M would likely result in higher variance across repeats. However, the need for more reliable L2PC results must be balanced with its computational intensity. The ideal M is also likely affected by the characteristics of the dataset and clustering algorithm. Additionally, in our clustering analyses, we use Euclidean distance, which can be suboptimal for high-dimensional data. This might explain why G2PC and L2PC did not have optimal performance in high-dimensional data without grouping. Future studies could investigate the effects of other distance metrics and how they might enable G2PC and L2PC to be applied in higher dimensions. It is likely that other model-agnostic explainability methods from the domain of supervised machine learning explainability could also be generalized to enable explainability for clustering methods. Permutation feature importance differentiates itself from many other model-agnostic explainability methods in that it can be applied to classifiers that output a hard classification. Additionally, we demonstrate a novel adaptation of perturbation that enables it to be extended to explain hard classification methods. In contrast, many model-agnostic explainability methods require that a classifier predict a class probability, rather than a binary class label.
Given this requirement, soft-clustering methods like GMMs and fuzzy c-means could be ideal for compatibility with the majority of model-agnostic explainability methods. It is also feasible that G2PC could be used as a feature selection method to obtain optimal clustering, similar to how Gómez-Ramírez et al. (2020) have used permutation for feature selection with supervised classifiers. Additionally, L2PC has the potential to be applied to explain supervised machine learning models like support vector machines (SVMs) (Boser et al., 1992).

## 5 Conclusion

In this study, we proposed, for the first time, the adaptation of model-agnostic explainability methods from the domain of supervised machine learning explainability to provide algorithm-agnostic clustering explainability. We introduced two new explainability methods that are both easily implemented and widely applicable across clustering algorithms. These capabilities could enable them to be impactful across many application areas and contexts. We demonstrated on low-dimensional, ground-truth synthetic data that they can be paired with multiple clustering algorithms to identify the features most important to differentiating clusters. We further demonstrated the utility of the methods for high-dimensional datasets by analyzing rs-fMRI FNC data and identifying cross-domain connectivity associated with schizophrenia. It is our hope that this paper will (1) stimulate rapid growth in the domain of clustering explainability via an infusion of algorithm-agnostic methods from the domain of supervised machine learning explainability and (2) enable data scientists across a variety of fields to gain more insight into the variety of clustering algorithms that they employ.

## Acknowledgments

We thank those who collected the fBIRN dataset.

## References

Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek.
On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PLoS ONE*, 10(7), 2015. doi: 10.1371/journal.pone.0130140.

Arindam Banerjee and Hanhuai Shan. Model-based Clustering, 2010.

Jayanta Basak and Raghu Krishnapuram. Interpretable Hierarchical Clustering by Constructing an Unsupervised Decision Tree. *IEEE Transactions on Knowledge and Data Engineering*, volume 17, 2005. doi: 10.1109/TKDE.2005.11.

Tanmay Kumar Behera and Suvasini Panigrahi. Credit Card Fraud Detection: A Hybrid Approach Using Fuzzy Clustering & Neural Network. In *Second International Conference on Advances in Computing and Communication Engineering*. IEEE, 2015. doi: 10.1109/ICACCE.2015.33.

Dimitris Bertsimas, Agni Orfanoudaki, and Holly Wiberg. Interpretable Clustering via Optimal Trees. *arXiv preprint arXiv:1812.00539*, 2018. URL http://arxiv.org/abs/1812.00539.

Dimitris Bertsimas, Agni Orfanoudaki, and Holly Wiberg. *Interpretable clustering: an optimization approach*. Springer US, 2020. doi: 10.1007/s10994-020-05896-2. URL https://doi.org/10.1007/s10994-020-05896-2.

Aviruch Bhatia, Vishal Garg, Philip Haves, and Vikram Pudi. Explainable Clustering Using Hyper-Rectangles for Building Energy Simulation Data. In *IOP Conf. Series: Earth and Environmental Science*, 2019. doi: 10.1088/1755-1315/238/1/012068.

Ander Biguri. Perceptually uniform colormaps, 2021. URL https://www.mathworks.com/matlabcentral/fileexchange/51986-perceptually-uniform-colormaps.

Bernhard E. Boser, Isabelle M. Guyon, and Vladimir N. Vapnik. A Training Algorithm for Optimal Margin Classifiers. In *Proceedings of the Fifth Annual Workshop on Computational Learning Theory*, pp. 144–152, 1992.

Christos Boutsidis, Michael W. Mahoney, and Petros Drineas. Unsupervised Feature Selection for the k-means Clustering Problem. In *Advances in Neural Information Processing Systems 22*, 2009.

Leo Breiman. Random Forests.
*Machine Learning*, 45:5–32, 2001. Xi Chen, Mingjun Duan, Qiankun Xie, Yongxiu Lai, Li Dong, Weifang Cao, Dezhong Yao, and Cheng Luo. Functional disconnection between the visual cortex and the sensorimotor cortex suggests a potential mechanism for self-disorder in schizophrenia. *Schizophrenia Research*, 166(1-3):151–157, 2014. ISSN 15732509. doi: 10.1016/j.schres.2015.06.014. URL http://dx.doi.org/10.1016/j.schres.2015.06. 014. D. R. Cox. The Regression Analysis of Binary Sequences. Journal of the Royal Statistical Society: Series B (Methodological), 20(2):215–232, 1958. doi: 10.1111/j.2517-6161.1958.tb00292.x. Yuhui Du, Zening Fu, Jing Sui, Shuang Gao, Ying Xing, Dongdong Lin, Mustafa Salman, Anees Abrol, Md Abdur Rahaman, Jiayu Chen, L. Elliot Hong, Peter Kochunov, Elizabeth A. Osuch, and Vince D. Calhoun. NeuroMark: An automated and adaptive ICA based pipeline to identify reproducible fMRI markers of brain disorders. *NeuroImage: Clinical*, 28(August):102375, 2020. ISSN 22131582. doi: 10. 1016/j.nicl.2020.102375. H. Henrik Ehrsson. The experimental induction of out-of-body experiences. *Science*, 317(5841):1048, 2007. ISSN 00368075. doi: 10.1126/science.1142175. Charles A Ellis, Robyn L Miller, and Vince D Calhoun. A Novel Local Explainability Approach for Spectral Insight into Raw EEG-Based Deep Learning Classifiers. In *bioRxiv*, pp. 0–5, 2021. Adil Fahad, Najlaa Alshatri, Zahir Tari, Abdullah Alamri, Ibrahim Khalil, Albert Y Zomaya, Sebti Foufou, and Abdelaziz Bouras. Survey of Clustering Algorithms for Big Data: Taxonomy and Empirical Analysis. In *IEEE Transactions on Emerging Topics in Computing*, volume 2, pp. 267–279. IEEE, 2014. doi: 10.1109/TETC.2014.2330519. Aaron Fisher, Cynthia Rudin, and Francesca Dominici. Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the "Rashomon" Perspective. *arXiv preprint arXiv:* 1801.01489v1, 2018. Ruth C. Fong and Andrea Vedaldi. 
Interpretable Explanations of Black Boxes by Meaningful Perturbation. In *Proceedings of the IEEE International Conference on Computer Vision*, pp. 3449–3457, 2017. doi: 10.1109/ICCV.2017.371.

Ricardo Fraiman, Badih Ghattas, and Marcela Svarc. Interpretable clustering using unsupervised binary trees. *Advances in Data Analysis and Classification*, 7:125–145, 2013. doi: 10.1007/s11634-013-0129-3.

Jerome H. Friedman. Greedy Function Approximation: A Gradient Boosting Machine. *The Annals of Statistics*, 29(5):1189–1232, 2001.

Nave Frost, Michal Moshkovitz, and Cyrus Rashtchian. ExKMC: Expanding Explainable k-Means Clustering. *arXiv preprint arXiv:2006.02399v2*, pp. 1–27, 2020.

Alex Goldstein, Adam Kapelner, Justin Bleich, and Emil Pitkin. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation. *Journal of Computational and Graphical Statistics*, 24(1):44–65, 2015. doi: 10.1080/10618600.2014.907095.

Jaime Gómez-Ramírez, Marina Ávila-Villanueva, and Miguel Ángel Fernández-Blázquez. Selecting the most important self-assessed features for predicting conversion to mild cognitive impairment with random forest and permutation-based methods. *Scientific Reports*, 10(1):1–15, 2020. doi: 10.1038/s41598-020-77296-4. URL https://doi.org/10.1038/s41598-020-77296-4.

Xin Jin and Jiawei Han. Partitional Clustering, 2010.

Stephen C. Johnson. Hierarchical Clustering Schemes. *Psychometrika*, 32(3), 1967.

Leonard Kaufman and Peter Rousseeuw. *Finding Groups in Data: An Introduction to Cluster Analysis*. John Wiley and Sons Inc., Hoboken, New Jersey, 1990.

Siyi Li, Na Hu, Wenjing Zhang, Bo Tao, Jing Dai, Yao Gong, Youguo Tan, Duanfang Cai, and Su Lui. Dysconnectivity of multiple brain networks in schizophrenia: A meta-analysis of resting-state functional connectivity. *Frontiers in Psychiatry*, 10:1–11, 2019. doi: 10.3389/fpsyt.2019.00482.
Meng Liang, Yuan Zhou, Tianzi Jiang, Zhening Liu, Lixia Tian, Haihong Liu, and Yihui Hao. Widespread functional disconnectivity in schizophrenia with resting-state functional magnetic resonance imaging. *NeuroReport*, 17(2):209–213, 2006. doi: 10.1097/01.wnr.0000198434.06518.b8.

Mascha M.J. Linszen, Rachel M. Brouwer, Sophie M. Heringa, and Iris E. Sommer. Increased risk of psychosis in patients with hearing impairment: Review and meta-analyses. *Neuroscience and Biobehavioral Reviews*, 62:1–20, 2016. doi: 10.1016/j.neubiorev.2015.12.012. URL http://dx.doi.org/10.1016/j.neubiorev.2015.12.012.

Gilles Louppe. *Understanding Random Forests: From Theory to Practice*. PhD thesis, University of Liège, 2014.

Octavio Loyola-González, Andres Eduardo Gutierrez-Rodríguez, Miguel Angel Medina-Pérez, Raúl Monroy, José Francisco Martínez-Trinidad, Jesús Ariel Carrasco-Ochoa, and Milton García-Borroto. An Explainable Artificial Intelligence Model for Clustering Numerical Databases. *IEEE Access*, 8, 2020. doi: 10.1109/ACCESS.2020.2980581.

Scott M. Lundberg and Su In Lee. A unified approach to interpreting model predictions. In *Advances in Neural Information Processing Systems*, 2017.

Alice Medalia and Rosa W. Lim. Self-awareness of cognitive functioning in schizophrenia. *Schizophrenia Research*, 71(2-3):331–338, 2004. doi: 10.1016/j.schres.2004.03.003.

S. A. Mustaniroh, U. Effendi, and R. L. R. Silalahi. Integration K-Means Clustering Method and Elbow Method For Identification of The Best Customer Profile Cluster. *IOP Conference Series: Materials Science and Engineering*, 2018. doi: 10.1088/1757-899X/336/1/012017.

Fabian Pedregosa, Ron Weiss, and Matthieu Brucher. Scikit-learn: Machine Learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011.

Claudia Plant and Christian Böhm.
INCONCO: Interpretable Clustering of Numerical and Categorical Objects. In *17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1127–1135, 2011. doi: 10.1145/2020408.2020584.

Pradeep Rai. A Survey of Clustering Techniques. *International Journal of Computer Applications*, 7(12):1–5, 2010.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In *Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1135–1144, 2016. doi: 10.1145/2939672.2939778.

Enrique H. Ruspini, James C. Bezdek, and James M. Keller. Fuzzy Clustering: A Historical Perspective. *IEEE Computational Intelligence Magazine*, pp. 45–55, 2019.

Joerg Sander. Density-based Clustering, 2010.

Mohammad S. E. Sendi, Vasiliki Kanta, Cory S. Inman, Joseph R. Manns, Stephan Hamann, Robert E. Gross, Jon T. Willie, and Babak Mahmoudi. Amygdala Stimulation Leads to Functional Network Connectivity State Transitions in the Hippocampus. In *42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)*, pp. 3625–3628, 2020.

Mohammad S. E. Sendi, Godfrey D. Pearlson, Daniel H. Mathalon, Judith M. Ford, Adrian Preda, Theo G. M. Van Erp, and Vince D. Calhoun. Multiple overlapping dynamic patterns of the visual sensory network in schizophrenia. *Schizophrenia Research*, 228:103–111, 2021a. doi: 10.1016/j.schres.2020.11.055. URL https://doi.org/10.1016/j.schres.2020.11.055.

Mohammad S. E. Sendi, Elaheh Zendehrouh, Robyn L. Miller, Zening Fu, Yuhui Du, Jingyu Liu, Elizabeth C. Mormino, David H. Salat, and Vince D. Calhoun. Alzheimer's Disease Projection From Normal to Mild Dementia Reflected in Functional Network Connectivity: A Longitudinal Study.
*Frontiers in Neural Circuits*, 14, 2021b. doi: 10.3389/fncir.2020.593263.

Michael C. Thomas, Wenbo Zhu, and Jose A. Romagnoli. Data mining and clustering in chemical process databases for monitoring and knowledge discovery. *Journal of Process Control*, 67:160–175, 2018. doi: 10.1016/j.jprocont.2017.02.006. URL http://dx.doi.org/10.1016/j.jprocont.2017.02.006.

Laurens van der Maaten and Geoffrey Hinton. Visualizing Data using t-SNE. *Journal of Machine Learning Research*, 9:2579–2605, 2008.

Theo G.M. van Erp, Adrian Preda, Jessica A. Turner, Shawn Callahan, Vince D. Calhoun, Juan R. Bustillo, Kelvin O. Lim, Bryon Mueller, Gregory G. Brown, Jatin G. Vaidya, Sarah McEwen, Aysenil Belger, James Voyvodic, Daniel H. Mathalon, Dana Nguyen, Judith M. Ford, Steven G. Potkin, and FBIRN. Neuropsychological profile in adult schizophrenia measured with the CMINDS. *Psychiatry Research*, 230(3):826–834, 2015. doi: 10.1016/j.psychres.2015.10.028.

Maeri Yamamoto, Itaru Kushima, Ryohei Suzuki, Branko Aleksic, Naoko Kawano, Toshiya Inada, Tetsuya Iidaka, and Norio Ozaki. Aberrant functional connectivity between the thalamus and visual cortex is related to attentional impairment in schizophrenia. *Psychiatry Research - Neuroimaging*, 278:35–41, 2018. doi: 10.1016/j.pscychresns.2018.06.007. URL https://doi.org/10.1016/j.pscychresns.2018.06.007.

Elaheh Zendehrouh, Mohammad S. E. Sendi, Jing Sui, Zening Fu, Dongmei Zhi, Luxian Lv, Xiaohong Ma, Qing Ke, Xianbin Li, Chuanyue Wang, Christopher C. Abbott, Jessica A. Turner, Robyn L. Miller, and Vince D. Calhoun. Aberrant Functional Network Connectivity Transition Probability in Major Depressive Disorder. In *42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)*, pp. 1493–1496, Montreal, QC, Canada, 2020. IEEE.
## A Description Of The FBIRN Dataset

The fBIRN dataset contains 151 schizophrenia (SZ) subjects and 160 healthy controls (HC) whose neuroimaging data were collected at seven sites: the University of California, Irvine; the University of California, Los Angeles; the University of California, San Francisco; Duke University/the University of North Carolina at Chapel Hill; the University of New Mexico; the University of Iowa; and the University of Minnesota. Six 3T Siemens scanners and one 3T General Electric scanner with the same protocol were used to collect the imaging data. T2*-weighted functional images were collected using an AC-PC aligned echo-planar imaging sequence with TE = 30 ms, TR = 2 s, flip angle = 77°, slice gap = 1 mm, voxel size = 3.4 × 3.4 × 4 mm³, 162 frames, and a duration of 5:24 min. All participants were instructed to close their eyes during the rs-fMRI data collection. Neuroimaging data were preprocessed using statistical parametric mapping (SPM12) in the MATLAB 2019 environment. We used rigid body motion correction to account for subject head movement. Next, the imaging data underwent spatial normalization to an echo-planar imaging (EPI) template in standard Montreal Neurological Institute (MNI) space and were resampled to 3 × 3 × 3 mm³. Finally, we used a Gaussian kernel with a full width at half maximum (FWHM) of 6 mm to smooth the fMRI images. The Neuromark automatic independent component analysis pipeline within the group ICA of fMRI toolbox (GIFT) was used to extract 53 independent components (ICs), as described in Du et al. (2020).

## B Analysis Of Stability Of G2PC And L2PC Importance Across Repeats

To gain insight into how the number of repeats affected G2PC and L2PC importance, we reapplied G2PC and L2PC to the previously generated synthetic datasets. Note that we only generated 1 set of data for both G2PC and L2PC, rather than the 100 sets of data that we generated for our first G2PC synthetic data experiments.
The sets of synthetic data were identical to those used in our earlier L2PC synthetic data experiments. We ran G2PC and L2PC for 1000 repeats. For G2PC, we calculated the mean importance of each feature for each iteration and all previous iterations. For L2PC, we calculated the mean importance of each feature for each iteration and all previous iterations. The results are shown in Figure 7. In general, the importance values stabilized fairly quickly, though the specific clustering algorithm and dataset tended to affect the number of repeats necessary to obtain stable importance values. Depending upon the clustering algorithm, G2PC results stabilized within 100 to 400 repeats, and L2PC results generally stabilized within 200 repeats.

4 https://www.fil.ion.ucl.ac.uk/spm/
5 http://trendscenter.org/software/gift

![17_image_0.png](17_image_0.png)

Figure 7: G2PC and L2PC Importance Across Repeats for Synthetic Data. The top two rows show the G2PC and L2PC results for Synthetic Dataset 1, and the bottom two rows show the G2PC and L2PC results for Synthetic Dataset 2. Each column shows the results for a different clustering algorithm. Because the L2PC importance values stabilized somewhat faster, each L2PC panel has a smaller inset panel that shows the mean importance for the first 40 repeats to provide further insight into the variation across the first repeats.
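The stability curves described in this appendix amount to a cumulative mean of per-repeat importance values; the following is a minimal sketch (the function name is our own, not from the paper's code):

```python
import numpy as np

def running_mean_importance(per_repeat):
    """Cumulative mean of per-repeat importance values.

    per_repeat: array of shape (n_repeats, n_features) holding the percent
    change measured in each repeat. Row r of the output is the mean over
    repeats 0..r, so plotting the rows shows when the estimate stabilizes.
    """
    per_repeat = np.asarray(per_repeat, dtype=float)
    counts = np.arange(1, per_repeat.shape[0] + 1)[:, None]
    return np.cumsum(per_repeat, axis=0) / counts

# e.g. three repeats of a single feature's percent change
curve = running_mean_importance([[2.0], [4.0], [6.0]])  # → [[2.], [3.], [4.]]
```

Plotting each row of the returned array against the repeat index yields convergence curves like those in Figure 7.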
Review 1: Summary: The authors propose a straightforward permutation-based approach(es) to assessing feature importance for clustering. The approach is (almost, see below) model agnostic. They apply their approach to 5 clustering algorithms and simulated and fMRI data. Strengths and Weaknesses: The paper is well written although substantially longer than it needs to be, in particular the intro and discussion. The introduction is good but sometimes a little repetitive so could be edited to be more succinct. I also find the citation style a little strange, e.g. "logistic regression, which was introduced by Cox (1958)" rather than "logistic regression (Cox 1958)". Not a big deal but does make it wordier than it need be. I don't really like discussions that just repeat the paper. It should be a discussion of additional thoughts/ideas (as the last paragraph does). The technical contribution is not highly significant: permuting features is about the simplest approach you could think of for assessing importance and there is no theoretical justification given (as opposed to SHAP for example for supervised learning). However, it is a very reasonable approach and worth exploring in practice as the authors have done. Requested Changes: The main concerns I would like addressing are: 1. Cluster labels could change so shouldn't the clusterings be compared based on pairwise membership? The authors recognize this in section 2.1, but their solutions for aligning cluster labels across repeats means that the approach is no longer truly model agnostic, which is given as a key selling point of the approach. In particular it is unclear how you do this for graph-based clustering methods or something like affinity propagation. I think a better metric would be something like mean((c1_i==c_1j)==(c2_i==c2_j) for all i,j) where i,j index all pairs of samples and c1 and c2 are cluster assignments for the original data and permuted data respectively. 2.
Why not permute each feature in a group independently? What is the trade-off here? 3. Would be good to include a graph-based clustering method since these are popular in single cell genomics. 4. "null hypothesis of zero perturbation percent change" what does this mean? How would you generate this null? Cut this if you don't know. 5. In the fMRI data the authors note that permuting individual features did not affect clustering much, and so resorted to permuting (known) groups of features. What happens if individual features don't affect the clustering but you don't have known groups to permute? (I think sampling random groupings and averaging over these could help). Not a concern/requirement for publication but I do think it would be interesting to compute and compare to saliency maps for the assignments. At least for soft k-means/GMM I think it would be reasonably straightforward (and computationally cheap) to get the gradient of the soft assignment w.r.t. the features. Minor comments: Algorithm 1. I'm not sure it's worth separating out the PERMUTE_FEATURE function, it could just be one line as X2[:,j]=X2[random_permutation(1:N),j]. Why do you need "features" rather than just indexing X directly? It's unclear if X,Y are for the whole dataset or one data point (possibly either?) This becomes clear later but should be clear on first reading. change "randomly distributed data" to "simulated datasets" "This indicates that those samples may be closer to the boundaries of their clusters"... they may also just be noisier samples for some reason. Broader Impact Concerns: None noted. ================================================== Review 2: Summary: This paper proposes two methods to explain unsupervised clustering. These methods compute feature importance based on permuting the features either globally with global permutation percent change feature importance (G2PC) or locally with local perturbation percent change feature importance (L2PC).
While many methods for interpreting clustering results use decision trees to classify the cluster IDs, these methods require training an additional model, where there is often a trade-off between accuracy and explainability. G2PC and L2PC skip this additional step while still being, to some extent, “algorithm agnostic”. These methods rank feature importance based on how often permuting a feature affects the cluster a point is assigned to. A key element of these two methods is that it must be possible to classify new points to one of the existing clusters. This work proposes relatively natural hard classifiers for 5 popular algorithms: k-means, DBSCAN, a Gaussian mixture model, agglomerative clustering, and fuzzy c-means. G2PC and L2PC are then evaluated on two synthetic datasets and an FMRI dataset. On the synthetic datasets, G2PC and L2PC are shown to recover the ground truth feature importance order across clustering algorithms for the most part. On the FMRI dataset, the feature importance values derived from G2PC and L2PC are shown to be overall consistent with those based on mean effect using linear regression. Strengths and Weaknesses: Strengths: * Clustering explainability is a relatively unexplored area with many of the current methods focusing on training interpretable decision trees as a supervised step after clustering. The proposed method skips this step by using some relatively natural models for classifying new points into the existing clusters which are applicable to many clustering algorithms. * The approach is relatively simple conceptually, computationally inexpensive, and easy to implement making it an attractive method as compared to more complicated cluster-then-classify type approaches. * The group permutation generalization seems promising in extending permutation-based approaches to higher dimensional settings where permutation-based feature importance methods are generally difficult to apply.
* The writing is for the most part clear with the exception of the algorithm specifications. Weaknesses: * One of the central claims of this paper is that G2PC and L2PC are “algorithm-agnostic” unsupervised clustering explainability methods. I do not believe this is true as both of these methods require the assignment of new samples to existing clusters as stated in section 2.1.4. In that section, it is shown how to build a model-specific cluster classification model for 5 clustering algorithms, which breaks the algorithm-agnostic claim. While K-means, GMMs, and fuzzy c-means clustering all are defined by a small number of centers and can easily classify new points between these centers using a 1-nearest-neighbor classifier, for most other clustering models the choice of classifier is not so obvious. For example, for the family of graph-based clustering such as spectral clustering, Louvain clustering, Leiden clustering, it is unclear how to define a classifier for new points. For other models it is unclear if there exists a single preferred classifier, or if there are many reasonable ones. For instance, why should agglomerative clustering use a 1-nearest neighbor classifier over all points whereas K-means uses a 1-nn classifier to the means. Could we use 1-nn to cluster centers as a classifier for agglomerative clustering? Or why is this the “wrong” choice. Surely these would give different feature importance results in some cases. If so, we are baking in additional assumptions based on the extension of a clustering algorithm to new points, and these choices should be carefully considered. * One of the claimed “key novelties” of G2PC (par 1, Page 4) is using the percentage of samples that switch clusters following the permutation of a particular feature instead of the ratio of the change in performance before and after permutation. I don’t see how this can be claimed as particularly novel.
This is just using clustering accuracy as the performance function in algorithm 1. * The experiments are limited such that it is difficult to draw generalizable insights about the proposed method. Only datasets with a small number of clusters (two, four, and two, respectively) are tested, leaving the application to higher numbers of clusters in question. The grouping is tested only on the FMRI data, and seems to be a way of lowering the effective dimension of features explored, but is not explored more generally and seems to be specifically applicable to FMRI data with accepted ground truth mapping and feature clusters. The method is compared to linear regression “mean effect” trained on the clustering labels. This is an extremely useful comparison, but deserves more exploration. In my mind this is a valid alternative method of creating a classifier from a clustering, and the benefits / drawbacks of G2PC and L2PC vs. this method would be interesting for someone looking to explain a clustering. * Code is not available for review; although there appears to be a link to a GitHub repo in the paper, it is not a valid link. Other comments / Questions: Figures: the L2PC results in Figures 4 and 5 exhibit remarkably different structure. I was initially confused why line plots were used as it seems to suggest some sort of continuous relationship between the features, but when I realized that each line shows the L2PC of a datapoint, I was intrigued that, in the synthetic data, there are some points that are close to the boundary such that permuting any feature will flip their class and some points that almost no feature will flip the class, whereas in the FMRI data there seem to be no points that are universally close to the boundary in all dimensions; qualitatively the lines in 5C, 5D are “jagged”. This seems to imply that there is something interesting going on here.
Are there any generalizable insights that can be drawn from the data or model from these differences in the L2PC plots? (Note it would be great to add a note in the figure caption that each line represents a datapoint.) Do all of the clustering methods achieve perfect accuracy to the ground truth clusters in the synthetic datasets? Why are only k-means and c-means clustering shown on the FMRI data? I assume the logistic regression “Mean effect” is the mean absolute value of the coefficients for plots Figure 5 E, F. In any case this should be clarified. Minor points: Algorithm formatting can be improved, some notes below: * Algorithm 1 contains several inconsistencies and undefined quantities * Add back the vowels in mdl for clarity if you're going to keep "performance" in algorithm 1 * Y_0 is not defined, and performance(Y_0, Y_1) is not either; I assume Y_0 is the ground truth so should be Y * J,K are inputs to the algorithm? * Algorithm 3 is titled "Local Permutation ..." should be local perturbation? * Algorithm 3 permute group is named the same as in algorithm 2 but is different. Perhaps local permute group? * X_2 means different things in Algorithm 2 vs. 3. * The notation on Algorithm 3 line 3 is unclear to me, p < M – 1 means what exactly? * The comment on line 5 of procedure L2PC says M x F, isn’t this 1 x F? * It would be helpful to include what all these inputs are; while the “j in J groups” etc. is helpful, could a comment be added with a short description of all parameters? E.g. Algorithm 2 gives a sense (without looking at the text) of J, N, and K, but not M, C, mdl, or groups. Inconsistent spacing in the appendix. Inconsistent usage of hyphens instead of dashes throughout. The FMRI dataset is not public and requires IRB approval, thus increasing the bar for reproducibility. An experiment on a publicly accessible dataset would help the reproducibility of this study.
Requested Changes: According to the TMLR evaluation criteria, acceptance decisions should be based on the two questions: Are the claims made in the submission supported by accurate, convincing and clear evidence? Would at least some individuals in TMLR’s audience be interested in knowing the findings of this paper? In order to satisfy the first criterion, I believe further justification of the claim that this is the first adaptation of model-agnostic explainability methods to clustering explainability is needed in two areas. First, that model-agnostic explainability such as training a linear regression or tree-based classifier on top of the clusters of interest and then performing standard supervised explainability procedures does not count as an adaptation to clustering. Second, that the proposed method is actually model agnostic and can be applied to all clustering methods including graph-based methods, or that the proposed classifiers are superior in some way, such as providing better feature importance values or being more natural or accurate in some way. In order to satisfy the second criterion, further experimentation on the benefits of permutation-based feature importance for clustering explainability is needed: showing the behavior of G2PC and L2PC vs. linear and tree-based classifiers on a wider variety of datasets, including generalizable analysis of the differences. It would also be interesting, although in my view not necessary for acceptance, to explain the benefits of G2PC and L2PC over something that compares the means of clusters directly for explainability. Something like ranking feature importance based on the variance of cluster means seems like a straightforward method which has some drawbacks, but might be a simple alternative. Broader Impact Concerns: None ================================================== Review 3: Summary: The paper adapts the model-agnostic explainability methods to provide clustering explainability.
For adaptation, the paper introduced two new explainability methods and presented the results on low-dimensional synthetic datasets and high-dimensional real-world datasets. In the former, the presented explainability methods identified the most important features for differentiating clusters, while in the latter, they also showed inter-feature connectivity. Strengths and Weaknesses: Strengths - new application of explainability for clustering - interesting application Weakness: - limited novelty of the algorithm (only a few changes, and those are not evaluated properly). Authors need to explicitly point out their contributions. - limited clarity in the presented work (figures, description, etc.) - lack of rigor in the arguments (e.g., "the grouping parameter can help address this problem by grouping highly correlated features", etc) Requested Changes: The ideas presented in this work are interesting and, as the authors have pointed out, could be impactful as well. However, I feel the algorithms and the presented arguments are not verified rigorously. Here are a few questions/comments/suggestions: 1. Authors point out in multiple instances that their method is "easy to implement". Is this referring to small changes made to existing methods to propose a new method, or are authors referring to the whole method as "easy to implement"? In any case, this warrants more explanation as this is presented as an important property of the proposed method. 2. The proposed method would require more verification. For instance, the future work directions suggested in the paper could actually be done in this paper itself to demonstrate that the method is robust to these parameters, e.g., varying K, M and also use of different distance metrics, etc. 3. Can authors show some empirical analysis on the result presented in the last paragraph of section 3.2? i.e., samples lying close to the boundary, sample frequency vs. perturbation sensitivity, etc. 4.
Besides identifying the patterns learned by the clustering algorithms, can authors provide experiments on the ability of important features to perform clustering again? 5. What's the accuracy of the clustering algorithms? Instead of speculating (3.1) if GMM and AGC have identified spurious clusters, can authors do some rigorous analysis to find out what's happening? 6. How would the grouping parameter help to address unreliable results caused by perturbation- and permutation-based methods generating out-of-distribution samples? 7. Missing comparison: as authors acknowledge, it is likely other methods could be generalized to enable explainability for clustering methods. Without any comparison, it is difficult to justify if the results from the presented method are superior. 8. The arguments made in the paper seem hand-wavy and would require more rigorous analysis. Please carefully proofread your paper and try to avoid claims which can't be backed or which require more experiments to support them, which could be too much work for one journal paper. E.g.: a. the grouping parameter can help address this problem by grouping highly correlated features, b. can likely identify the patterns learned by the clustering algorithms, c. may identify spurious clusters that are unrelated to the underlying groups, etc. 9. Can authors do experiments on other datasets as well? The result on the real-world dataset is interesting, but for an ML journal, instead of focusing too much on the domain, it would be more interesting to actually understand the behavior of the proposed methods. 10. Presentation: The overall presentation of the paper is only satisfactory and can be significantly improved. This applies to both text and presentation of results in tables and figures. E.g., a. What does color signify in Fig 4? b. Can authors put the cluster results (3.2) in a separate table? Minor comments: - In 2.1.3, L2PC "perturbs" should be L2PC "swaps"?
Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: The paper was reviewed by three reviewers, and the reviews were both comprehensive and well-aligned with the TMLR evaluation criteria. The authors engaged with the reviews and provided responses to all of the reviewers' comments. The reviews pointed out several strengths that highlighted the importance of tackling the problem of feature relevance in unsupervised learning problems. At the same time, there were several concerns that the reviewers identified that, if addressed, would substantially improve the paper. In particular, one reviewer suggested "their solutions for aligning cluster labels across repeats means that the approach is no longer truly model agnostic". The claims of a model-agnostic approach might be revisited to align with the methodology and evidence. Another review raised concerns about the alignment between the claims and the evidence supporting those claims. Finally, statements such as "the grouping parameter can help address this problem by grouping highly correlated features" should be clarified and made more specific if not mathematically rigorous. The suggestions provided by the reviews would greatly improve the manuscript and I would be glad to consider a substantially revised version of the manuscript. ==================================================
# NysReg-Gradient: Regularized Nyström-Gradient For Large-Scale Unconstrained Optimization And Its Application

Anonymous authors Paper under double-blind review

## Abstract

We develop a regularized Nyström method for solving unconstrained optimization problems with high-dimensional feature spaces. While conventional second-order approximation methods, such as quasi-Newton methods, rely on first-order derivatives, our method leverages actual Hessian information. Newton-sketch-based methods employ a sketch matrix to approximate the Hessian, which requires a thick embedding matrix with a large sketch size. The randomized subspace Newton method, on the other hand, projects the Hessian onto a lower-dimensional subspace and thus utilizes only limited Hessian information. In contrast, we propose a balanced approach by introducing the regularized Nyström approximation, which leverages partial Hessian information in the form of a thin column matrix to approximate the Hessian. We integrate the approximated Hessian with gradient descent and stochastic gradient descent. To further reduce the computational complexity per iteration, we compute the approximated Hessian-inverse–gradient product directly, without forming the inverse of the approximated Hessian. We provide a convergence analysis and discuss several theoretical aspects. We report numerical experiments on strongly convex functions and deep learning. The experiments on strongly convex functions demonstrate that our method notably outperforms the randomized subspace Newton method and the approximation of Newton-sketch, which shows considerable promise for optimization with high-dimensional feature spaces. Moreover, we report numerical results on the application of brain tumor detection, which show that the proposed method is competitive with existing quasi-Newton methods, demonstrating its practical value in a critical application domain.
## 1 Introduction

The optimization of various functions is a crucial and highly relevant topic in machine learning, particularly due to the exponential growth in data volume. As a result, finding solutions to large-scale optimization problems has become a pressing concern. In this paper, we propose a method to address this challenge by approximating the Hessian matrix of the objective function using the Nyström approximation. Our approach aims to solve a large-scale unconstrained optimization problem of the form:

$$\operatorname*{min}_{w\in\mathbb{R}^{d}}f(w)\tag{1}$$

where f is convex and twice continuously differentiable. The traditional second-order optimizers to solve (1), such as Newton's method, provide quadratic convergence. However, these methods face limitations when dealing with high-dimensional optimization problems due to their high per-iteration cost and memory requirements. To address this challenge, we provide a low-rank Hessian approximation method that iteratively uses the Nyström method or, more generally, a column subset selection method to approximate the Hessian. By employing this approach, we aim to provide a computationally efficient alternative that overcomes the limitations of traditional second-order optimizers for high-dimensional problems.

## 1.1 Background And Contributions

To optimize (1), first-order optimization methods such as stochastic gradient descent (SGD) (Robbins & Monro, 1951), AdaGrad, stochastic variance-reduced gradient (SVRG) (Johnson & Zhang, 2013), Adam (Kingma & Ba, 2015), and the stochastic recursive gradient algorithm (SARAH), possibly augmented with momentum, are preferred for large-scale optimization problems owing to their more affordable computational costs, which are linear in the dimension per epoch, $O(nd)$. However, the convergence of first-order methods is notably slow, and they are sensitive to hyperparameter choices and ineffective for ill-conditioned problems.
In contrast, Newton's method does not depend on the parameters of specific problems and requires only minimal hyperparameter tuning for self-concordant functions, such as $\ell_2$-regularized logistic regression. However, Newton's method involves a computational complexity of $\Omega(nd^2 + d^{2.37})$ (Agarwal et al., 2017) per iteration and thus is not suitable for large-scale settings. To reduce this computational complexity, the subsampled Newton's method and random projection (or sketching) are commonly used to reduce the dimensionality of the problem and solve it in a lower-dimensional subspace. The subsampled (a.k.a. mini-batch) Newton method performs well for large-scale but relatively low-dimensional problems by computing the Hessian matrix on a relatively small sample. However, it is time-consuming for high-dimensional problems. Randomized algorithms (Lacotte et al., 2021; Pilanci & Wainwright, 2017) estimate the Hessian in Newton's method using a random embedding matrix $S \in \mathbb{R}^{m\times n}$, $H_S(w) := (\nabla^2 f(w)^{\frac{1}{2}})^\top S^\top S\,(\nabla^2 f(w)^{\frac{1}{2}})$. Specifically, their approximation uses the square root of the generalized Gauss-Newton (GGN) matrix as a low-rank approximation instead of deriving it from actual curvature information, where S is a random projection matrix of size $m \times n$. Moreover, the Newton sketch (Pilanci & Wainwright, 2017) requires a substantially large sketch size, which can be as big as the dimension d; this is not ideal and defeats the objective of a low-rank Hessian approximation. Recently, Derezinski et al. (2021) proposed the Newton-LESS method, which is based on leverage-score specified embeddings. It sparsified the Gaussian sketching and reduced the computational cost with convergence properties similar to dense Gaussian sketching. Gower et al. (2019) proposed the randomized subspace Newton (RSN) method, which computes a sketch of the Hessian by sampling the embedding matrix S and approximates the Hessian as $S(S^\top H S)^\dagger S^\top$.
Talwalkar (2010) proposed the Nyström logistic regression algorithm, where the Nyström method is used to approximate the Hessian of the regularized logistic regression; thus, it can be regarded as a variant of Nyström-SGD. However, Talwalkar (2010) only considered the regularized logistic regression, in which the Hessian can be obtained explicitly, with deterministic optimization. In contrast, we propose the regularized Nyström method for both deterministic and stochastic optimization, such that the value of the regularizer depends on the norm of the gradient or stochastic gradient, respectively. We also present its theoretical aspects in terms of the rank and the number of randomly picked columns. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Liu & Nocedal, 1989) is a widely used quasi-Newton method. More specifically, it estimates the Hessian inverse using past differences of gradients and updates. The online BFGS (oBFGS) (Schraudolph et al., 2007) method is a stochastic version of regularized BFGS and L-BFGS with gradient descent. Kolte et al. (2015) proposed two variants of a stochastic quasi-Newton method incorporating a variance-reduced gradient. The first variant used a sub-sampled Hessian with singular value thresholding. The second variant used the L-BFGS method to approximate the Hessian inverse. The stochastic quasi-Newton method (SQN) (Byrd et al., 2016) used a Hessian-vector product computed on a subset of each mini-batch instead of approximating the Hessian inverse from the difference between the current and previous gradients, as in L-BFGS. SVRG-SQN (Moritz et al., 2016) also incorporated variance-reduced gradients.

Contributions: The contributions of this study are summarized as follows.

- We propose the (deterministic and stochastic) regularized Nyström approximated Hessian method to solve the unconstrained optimization problem.
- We propose to use a regularizer obtained from gradient information to regularize the Nyström approximation.
- We provide a detailed proof of convergence. Moreover, we present various theoretical aspects of the proposed method.
- We empirically evaluate the proposed methods in numerical experiments and compare them with existing methods on benchmark datasets.
- In addition, we consider a classification problem of tumor detection in brain MRI as an application and show the performance of the proposed method by comparing it with similar existing methods.

## 2 Nyström Approximation And Its Properties

When dealing with large datasets, the computational complexity of second-order optimization methods poses a significant challenge. As a result, there is a need to explore computationally feasible Hessian approximation techniques that offer theoretical guarantees. Over the past few decades, researchers have investigated various matrix approximation methods. In recent years, a common approach involves obtaining a low-rank approximation of a matrix by utilizing specific parts of the original matrix through various techniques. One popular method in this context is the Nyström approximation (Drineas & Mahoney, 2005), initially introduced for kernel approximation. The Nyström approximation is a low-rank approximation of a positive semidefinite matrix that leverages partial information from the original matrix to construct an approximate matrix of lower rank. The Nyström method can be categorized as a variant of the column subset selection problem. Talwalkar & Rostamizadeh (2014) proposed minimizing the error of the Nyström method using low-coherence bounds. Derezinski et al. (2020) proposed improvements in the approximation guarantees of column subset selection and the Nyström method using spectral properties.

Definition 1 (Nyström approximation). Let $H \in \mathbb{R}^{d\times d}$ be a symmetric positive semi-definite matrix. Then, choose m columns of H randomly to form a $d \times m$ matrix C.
Let M be the $m \times m$ matrix formed by the intersection of those m columns and the corresponding m rows of H, and let $M_k$ be the best rank-k approximation of M. A rank-k Nyström approximation $N_k$ of H can be defined as

$$N_{k}=CM_{k}^{\dagger}C^{\top},\tag{2}$$

where $M_k^\dagger$ is the pseudo-inverse of $M_k$. Letting $H = \nabla^2 f(w)$ be the Hessian matrix of the objective function (1), the following theorem bounds the distance between the Hessian H and the Nyström approximation of H.

Theorem 1. (Drineas & Mahoney, 2005, Algorithm 2) Let H be a $d \times d$ *matrix and let* $N_k = CM_k^\dagger C^\top$ be a rank-k ($k \le m$) Nyström approximation obtained by sampling m columns of H *with probabilities* $\{p_i\}_{i=1}^d$ *such that*

$$p_{i}=\frac{H_{ii}^{2}}{\sum_{j=1}^{d}H_{jj}^{2}}.\tag{3}$$

Let $k = \operatorname{rank}(M)$ and let $H_k$ be the best rank-k approximation of H. In addition, let $\varepsilon > 0$ and $\vartheta = 1 + \sqrt{8\log(1/\varrho)}$. If (a) $m \ge 64k\vartheta^2/\varepsilon^4$, or (b) $m \ge 4\vartheta^2/\varepsilon^4$*, then with probability at least* $1-\varrho$,

$$\|H-N_{k}\|_{\nu}\leq\|H-H_{k}\|_{\nu}+\varepsilon\sum_{i=1}^{d}H_{ii}^{2},\tag{4}$$

for (a) $\nu = F$ *(Frobenius) and (b)* $\nu = 2$ (spectral). We denote the above upper bound as $U_{\mathrm{Nys}} = \|H-H_k\|_\nu + \varepsilon\sum_{i=1}^d H_{ii}^2$ for the rest of the paper.

An alternative way to define a rank-k Nyström approximation is via a zero-one sampling matrix. Let $H = \nabla^2 f(w)$ be the Hessian of f(w), which has the form $H = X^\top X$, where X is an $n \times d$ matrix. It is always possible to assume that $H = X^\top X$ because H is symmetric positive semi-definite (SPSD). The zero-one matrix $W \in \mathbb{R}^{d\times m}$ can be constructed as follows:

$$W(i,j)=\begin{cases}1&\text{if the $i$-th column is chosen in the $j$-th random trial,}\\ 0&\text{otherwise.}\end{cases}\tag{5}$$

We can write the Nyström approximation using the zero-one matrix as follows:

$$C M_{k}^{\dagger}C^{\top}=(HW)(W^{\top}HW)_{k}^{\dagger}(HW)^{\top}.\tag{6}$$
Drineas & Mahoney (2005) show that the uniform-sampling case of the scaled Nyström method yields the same expression as (6). It can be written as follows:

$$C M_{k}^{\dagger}C^{\top}=(HWD)\big((WD)^{\top}HWD\big)_{k}^{\dagger}(HWD)^{\top},$$

where $D \in \mathbb{R}^{m\times m}$ is a scaling matrix with diagonal entries $1/\sqrt{mp_{i_l}}$, $p_{i_l}$ is the probability $P(i_l = i) = p_i$ given in (3) of Theorem 1, and $i_l$ is the column chosen in the l-th independent trial. Moreover, $C := HW$ is the sampled column matrix of the true Hessian, and $M := W^\top HW$ is the intersection matrix in (2). However, if we let $m = k$, then in the case of uniform sampling the probability is $p_i = 1/d$ and the scaling matrix has diagonal entries $D_{ii} = \sqrt{d/m}$, which yields the approximation (2) that is exactly the same as (6).

Remark 1. Consider an instance of a function $f(w) = \ell(Aw)$, where $A \in \mathbb{R}^{n\times d}$ and $\ell : \mathbb{R}^n \to \mathbb{R}$ has the separable form $\ell(Aw) = \sum_{i=1}^n \ell_i(\langle a_i, w\rangle)$. The square root of the Hessian can then be computed as $X = \operatorname{diag}\{\sqrt{\ell_i''}\}_{i=1}^n A$. Let $S = WD$; one can compute the Nyström approximation using S. However, the generalized Nyström method analyzed in Frangella et al. (2021); Gittens (2011); Tropp et al. (2017) considers the theory with Gaussian and various other interesting random matrices S. Therefore, we also consider a Gaussian random matrix.

Lemma 1. (Fuji et al., 2022) Let S be a $d\times m$ random matrix such that the entries $s_{ij}$ *are independently sampled from* the normal distribution $N(0, 1/m)$. Then there exists $\mathcal{C} > 0$ *such that*

$$\|S^{\top}S\|\leq\mathcal{C}\frac{d}{m}$$

with probability at least $1 - 2\exp(-m)$, where $\mathcal{C}$ *is an absolute constant.*

One can prove the above lemma from (Vershynin, 2018, Theorem 4.6.1). For the rest of the theoretical analysis, we consider the matrix S to be a generalized random matrix as given in Lemma 1.
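To make these constructions concrete, the following NumPy sketch (our own illustration; the test matrix and the dimensions are arbitrary) builds the rank-k approximation $N_k = CM_k^\dagger C^\top$ using the diagonal-based sampling probabilities of Theorem 1, and checks that the zero-one matrix W of (5) reproduces direct column slicing of H, as in (6). For simplicity we sample columns without replacement, whereas Theorem 1 uses independent trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# SPSD test matrix H = X^T X (a stand-in for a Hessian)
d, m, k = 50, 20, 12
X = rng.standard_normal((200, d))
H = X.T @ X

# column-sampling probabilities p_i = H_ii^2 / sum_j H_jj^2 (Theorem 1)
p = np.diag(H) ** 2
p = p / p.sum()
# simplification: sample without replacement instead of independent trials
idx = rng.choice(d, size=m, replace=False, p=p)

# zero-one sampling matrix W of Eq. (5): W[i, j] = 1 iff column i is
# chosen in the j-th trial
W = np.zeros((d, m))
W[idx, np.arange(m)] = 1.0
C, M = H @ W, W.T @ H @ W        # C = HW (d x m), M = W^T H W (m x m)

# best rank-k pseudo-inverse of the SPSD matrix M via eigendecomposition
vals, vecs = np.linalg.eigh(M)
top = np.argsort(vals)[::-1][:k]
Uk, sk = vecs[:, top], vals[top]
Nk = C @ (Uk @ np.diag(1.0 / sk) @ Uk.T) @ C.T   # N_k = C M_k^+ C^T

# the zero-one construction matches direct column slicing of H
Nk_slice = H[:, idx] @ (Uk @ np.diag(1.0 / sk) @ Uk.T) @ H[:, idx].T
assert np.allclose(Nk, Nk_slice)

rel_err = np.linalg.norm(H - Nk, "fro") / np.linalg.norm(H, "fro")
```

Since H is SPSD, the residual $H - N_k$ stays positive semi-definite, so the relative Frobenius error is at most one; how small it gets depends on the eigenvalue decay of H and on m and k.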
## 3 Algorithmic Framework

In this section, we first define a formulation of the Nyström approximation for the objective function (1) and propose the regularized Nyström algorithm for the unconstrained optimization problem. Let $H = \nabla^2 f(w)$ be the Hessian of the objective function. We pick an index set $\Omega \subseteq \{1, 2, \ldots, d\}$ uniformly at random such that $m = |\Omega|$ and compute the Nyström approximation as

$$N_{k}=CM_{k}^{\dagger}C^{\top}=ZZ^{\top},\tag{7}$$

where $Z = CU_k\Sigma_k^{-1/2} \in \mathbb{R}^{d\times k}$, $C \in \mathbb{R}^{d\times m}$ is a matrix consisting of m columns ($m \ll d$) of H, M is the $m\times m$ intersection matrix, and the rank of M is $k \le m$. We obtain the best rank-k approximation using the singular value decomposition (SVD) of M as $M_k = U_k\Sigma_k U_k^\top$, where $U_k \in \mathbb{R}^{m\times k}$ contains the singular vectors and $\Sigma_k \in \mathbb{R}^{k\times k}$ consists of the k singular values. The pseudo-inverse can be computed as $M_k^\dagger = U_k\Sigma_k^{-1}U_k^\top$. Note that the number of columns m is a hyperparameter.

## 3.1 Relation Between $\ell_2$ Regularization And Fixed-Rank Nyström Approximation

Consider the $\ell_2$-regularized objective function

$$\min_{\mathbf{w}\in\mathbb{R}^{d}}\left\{f(\mathbf{w}):=\sum_{i=1}^{n}f_{i}(\mathbf{w})+\frac{\lambda}{2}\|\mathbf{w}\|^{2}\right\},\tag{8}$$

where each $f_i$ is convex and twice continuously differentiable and $\lambda \ge 0$; hence f is a strongly convex function. The Hessian of the $\ell_2$-regularized function is then given by $H = \sum_{i=1}^n \nabla^2 f_i(w) + \lambda I$ and $\lambda_{\min}(\nabla^2 f(w)) \ge \lambda$. The column matrix takes the form $C = \left(\sum_{i=1}^n \nabla^2 f_i(w)\right)S + \lambda S$, and the intersection matrix is $M = S^\top\left(\sum_{i=1}^n \nabla^2 f_i(w)\right)S + \lambda S^\top S \in \mathbb{R}^{m\times m}$. Since λ is used in the approximation, the matrix M becomes positive definite, and hence we obtain a fixed-rank Nyström approximation; this also helps the convergence analysis by bounding the minimum eigenvalue of $M^{-1}$. Hence, we can write

$$N=CM^{-1}C^{\top}$$

for the fixed-rank Nyström approximation.
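The factorization in (7) can be sketched in a few lines of NumPy (our own illustration with arbitrary dimensions): the truncated SVD of the intersection matrix M gives $Z = CU_k\Sigma_k^{-1/2}$, and $ZZ^\top$ reproduces $CM_k^\dagger C^\top$ while only the thin $d \times k$ factor Z ever needs to be stored.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 40, 12, 8
X = rng.standard_normal((80, d))
H = X.T @ X                       # SPSD Hessian stand-in

idx = rng.choice(d, size=m, replace=False)
C = H[:, idx]                     # d x m sampled columns
M = H[np.ix_(idx, idx)]           # m x m intersection matrix

# best rank-k SVD of the SPSD matrix M: M_k = U_k Sigma_k U_k^T
U, s, _ = np.linalg.svd(M)
Uk, sk = U[:, :k], s[:k]

Z = C @ Uk @ np.diag(sk ** -0.5)  # d x k factor of Eq. (7)
Nk_factored = Z @ Z.T
Nk_direct = C @ (Uk @ np.diag(1.0 / sk) @ Uk.T) @ C.T
assert np.allclose(Nk_factored, Nk_direct)
```

Keeping only Z reduces the storage from $O(d^2)$ for the full approximation to $O(dk)$, which is what makes the Woodbury-style inverse used later cheap.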
## 4 NysReg-Gradient: Regularized Nyström-Gradient Method

Second-order optimization methods often utilize a regularized approximated Hessian. The regularization parameter can be obtained through approaches such as the trust-region method, or determined adaptively from gradient information. Such approaches have been explored in previous works (Li et al., 2004; Ueda & Yamashita, 2010; Tankaria et al., 2022), which propose iterative formulations similar to

$$w_{t+1} = w_t - \eta_t (A_t + \rho_t I)^{-1} \nabla f(w_t), \tag{9}$$

where $A_t$ represents a Hessian approximation and $\rho_t > 0$ is a regularization parameter. Now, let $A_t$ in (9) be the Nyström approximation $N_t$. To ensure non-singularity and obtain a descent direction, we compute a regularized Nyström approximation. An iterate of the regularized Nyström approximation can then be written as

$$w_{t+1} = w_t - \eta_t (N_t + \rho_t I)^{-1} \nabla f(w_t). \tag{10}$$

Since we approximate the Hessian using the Nyström method augmented with a regularizer, in a quasi-Newton-like framework that uses a multiple of the gradient in the search direction, we call our novel method "NysReg-gradient: regularized Nyström gradient method (NGD)". The regularization parameter $\rho_t > 0$ is determined from gradient information. Specifically, we set $\rho_t = c_1\|\nabla f(w_t)\|^{\gamma}$, similar to Ueda & Yamashita (2010), where $c_1 > 0$. We consider $\rho_t$ to be either $c_1\sqrt{\|\nabla f(w_t)\|}$ for $\gamma = 1/2$, $c_1\|\nabla f(w_t)\|$ for $\gamma = 1$, or $c_1\|\nabla f(w_t)\|^2$ for $\gamma = 2$, as shown in Table 1. We denote $\nabla f(w_t) = g_t$ for the rest of the paper.
| Proposed method | Value of γ | Regularizer $\rho_t$ | Regularized Nyström |
|---|---|---|---|
| NGD | $\gamma = 1/2$ | $\rho_t = c_1\|g_t\|^{1/2}$ | $N_t + c_1\|g_t\|^{1/2} I$ |
| NGD1 | $\gamma = 1$ | $\rho_t = c_1\|g_t\|$ | $N_t + c_1\|g_t\| I$ |
| NGD2 | $\gamma = 2$ | $\rho_t = c_1\|g_t\|^{2}$ | $N_t + c_1\|g_t\|^{2} I$ |

Table 1: Relation between the proposed methods and the value of γ

To efficiently compute the inverse of $(N_t + \rho_t I)$ in (10), we use the Sherman–Morrison–Woodbury identity:

$$p_t = (N_t + \rho_t I)^{-1} g_t = \frac{1}{\rho_t} g_t - Q_t Z_t^{\top} g_t, \tag{11}$$

where $p_t$ is the search direction at the $t$-th iteration, $N_t$ is the Nyström approximation computed at $w_t$, $g_t$ is the gradient computed at $w_t$, and $Q_t = \frac{1}{\rho_t^2} Z_t \big(I_k + \frac{1}{\rho_t} Z_t^{\top} Z_t\big)^{-1}$. Here, $\big(I_k + \frac{1}{\rho_t} Z_t^{\top} Z_t\big) \in \mathbb{R}^{k\times k}$, and its inverse can be computed much more quickly than inverting $(N_t + \rho_t I)$ directly. We use backtracking line search with Armijo's rule, which tries step sizes $\eta_t = \alpha^{(\ell)} = \tau\alpha^{(\ell-1)}$, starting from $\ell = 0$ with initial step size $\eta_0 = \alpha^{(0)}$, and increases $\ell$ by one until the least integer $\ell \ge 0$ is found for which

$$f(w_t + \alpha^{(\ell)} p_t) \le f(w_t) + \alpha^{(\ell)}\beta\, g_t^{\top} p_t \tag{12}$$

holds, where $\alpha, \beta \in (0, 1)$. Next, we introduce the main algorithm.

Algorithm 1 NysReg-gradient: Regularized Nyström-Gradient Algorithm
1: **Initialize** initial parameters $w_0$, desired rank $|\Omega| = m$, $\alpha, \beta \in (0, 1)$, and maximum iterations $t_{\max}$
2: $t \leftarrow 0$
3: **repeat**
4: $g_t = \nabla f(w_t)$
5: randomly pick an index set $\Omega \subseteq \{1, 2, \ldots, d\}$ such that $m = |\Omega|$
6: compute $C_t$ (the $\Omega$ columns of the Hessian)
7: compute $Z_t$ using (7) and compute $\rho_t$
8: $Q_t = \frac{1}{\rho_t^2} Z_t \big(I_k + \frac{1}{\rho_t} Z_t^{\top} Z_t\big)^{-1}$
9: compute $p_t = (N_t + \rho_t I)^{-1} g_t$ using (11)
10: use backtracking line search with Armijo's rule to find $\eta_t$ using (12)
11: $w_{t+1} = w_t - \eta_t p_t$
12: $t = t + 1$
13: **until** $t = t_{\max}$ or some termination criterion is satisfied
14: **return** $w_t$

The efficiency of the method depends on both the rank of the Hessian and the choice of the sketching matrix $S$; for example, if the sketch size goes to one, the method reduces to scaled gradient descent. Next, we discuss the computational complexity of the proposed algorithm.

## 4.1 Computational Complexity

Here, we analyze the per-iteration computational complexity of the proposed method. The matrix-vector products in $(N_t + \rho_t I)^{-1} g_t$, i.e., (11), cost $O(dk)$ at each iteration. The cost of computing $Q_t$ is $O(dk^2)$ at each iteration, the cost of computing $Z_t$ is $O(dmk)$, and the computational cost of constructing the matrix $C$ is $O(dm)$. Thus, over the course of all iterations, the construction of the matrix $C$ is associated with the highest computational cost; therefore, the overall time and space complexity are $O(dm)$.

## 4.2 Regularized Nyström As Newton Sketch

In this section, we introduce an alternative definition of the Nyström approximation, obtained by sampling an embedding (random sketch) matrix, and show that the resulting formulation can be interpreted as a Newton-sketch-based method (Pilanci & Wainwright, 2017; Lacotte et al., 2021). Consider the Nyström approximation, let $H = X^{\top}X$ with $X \in \mathbb{R}^{n\times d}$, and let $W$ be the zero-one $d \times m$ matrix in (5), with $C_X = XW$. Let the SVD of $XW$ be $XW = \widehat{U}\widehat{\Sigma}\widehat{V}^{\top}$, so that $M = C_X^{\top}C_X = \widehat{V}\widehat{\Sigma}^2\widehat{V}^{\top}$. Then, similar to (Drineas & Mahoney, 2005, Lemma 4), we obtain

$$C(M_k)^{\dagger}C^{\top} = (HW)(W^{\top}HW)_k^{\dagger}(HW)^{\top} = (X^{\top}C_X)(C_X^{\top}C_X)_k^{\dagger}(X^{\top}C_X)^{\top} = X^{\top}(\widehat{U}\widehat{\Sigma}_k\widehat{V}^{\top})(\widehat{V}\widehat{\Sigma}_k^{-2}\widehat{V}^{\top})(\widehat{V}\widehat{\Sigma}_k\widehat{U}^{\top})X = X^{\top}\widehat{U}_k\widehat{U}_k^{\top}X, \tag{13}$$
where $\widehat{U}_k$ contains the top $k$ left singular vectors. The right-hand side of (13) is similar to the Newton sketch (Pilanci & Wainwright, 2017), with two differences: 1) the embedding matrix $P$ of the Newton sketch depends on the size of $n$ and not $d$, whereas the zero-one matrix $W \in \mathbb{R}^{d\times m}$ depends on $d$; and 2) the randomized embedding matrix $P^{\top} \in \mathbb{R}^{n\times m}$ is orthogonal only in expectation, i.e., $\mathbb{E}[P^{\top}P] = I$, whereas the proposed method produces the natural orthogonal matrix $\widehat{U}_k$, i.e., $\widehat{U}^{\top}\widehat{U} = I$. Consequently, the Newton sketch needs a large and thick column matrix $P$ (assuming most data have $n > d$) to approximate the Hessian. If we let $X = \nabla^2 f(w)^{1/2}$, then our approximation takes the form

$$H_W = X^{\top}\widehat{U}\widehat{U}^{\top}X + \rho I = (\nabla^2 f(w)^{1/2})^{\top}\widehat{U}\widehat{U}^{\top}(\nabla^2 f(w)^{1/2}) + \rho I. \tag{14}$$

More generally, the approximation above can be written in terms of an embedding matrix as follows. Let $Y = \rho I_d$, and let $Y^{1/2} = \sqrt{\rho}\, I_d$ be a $d \times d$ matrix. Then, defining the embedding matrix $\bar{S} = \begin{bmatrix} \widehat{U}_{n\times n}^{\top} & 0_{n\times d} \\ 0_{d\times n} & I_d \end{bmatrix}$ and the partial Hessian $\bar{H} = \begin{bmatrix} \nabla^2 f(w)^{1/2} \\ Y^{1/2} \end{bmatrix}$, we get

$$H_S = \bar{H}^{\top}\bar{S}^{\top}\bar{S}\bar{H}, \tag{15}$$

which is identical to (14); hence $H_S$ is non-singular. Note that $X = \nabla^2 f(w)^{1/2}$ can be computed as shown in Remark 1.
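Returning to the computation in Section 4, the Woodbury-based direction (11) can be verified numerically (a minimal sketch, assuming NumPy is available; dimensions are illustrative):

```python
import numpy as np

# Check of the search direction (11): applying
# (N_t + rho_t I)^{-1} = (Z Z^T + rho I)^{-1} to g needs only a k x k
# inverse instead of a d x d one.
rng = np.random.default_rng(1)
d, k, rho = 200, 10, 0.5
Z = rng.normal(size=(d, k))          # Nystrom factor, N = Z Z^T
g = rng.normal(size=d)               # gradient g_t
# Q = (1/rho^2) Z (I_k + (1/rho) Z^T Z)^{-1}, as below (11)
Q = Z @ np.linalg.inv(np.eye(k) + (Z.T @ Z) / rho) / rho**2
p = g / rho - Q @ (Z.T @ g)          # p = (N + rho I)^{-1} g via (11)
# Direct d x d solve, for comparison only
p_direct = np.linalg.solve(Z @ Z.T + rho * np.eye(d), g)
assert np.allclose(p, p_direct)
```

The Woodbury route costs $O(dk^2)$ for $Q$ plus $O(dk)$ per product, versus $O(d^3)$ for the direct solve, which is the source of the per-iteration savings quoted in Section 4.1.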
## 5 Convergence Analysis

In this section, we provide an analysis based on the number of columns $m$ selected in the Nyström approximation. We investigate the distance between Newton's direction and NGD's search direction, which depends on the rank of the matrix $M$, and we further prove linear convergence of the proposed algorithm. Moreover, in the last subsection, we discuss the closeness of the inverse of the regularized Nyström approximation to the inverse of the Hessian. This analysis offers insights into the overall convergence behavior of the algorithm. It is important to note that our convergence analysis is based on the objective function defined in equation (8). For local convergence, see Section A in the Appendix. We first need the following assumptions.

Assumption 1. i) The objective function (8) is twice continuously differentiable and $f$ is $L_g$-smooth, i.e.,

$$\|\nabla^2 f(w_t)\| \le L_g \quad \forall\, w_t \in \mathbb{R}^d.$$

ii) The objective function (8) is strongly convex.

Assumption 2. $S_t$ is a random matrix whose entries are independently sampled from the normal distribution with mean $0$ and variance $1/m$, satisfying

$$\|S^{\top}S\| \le \mathcal{C}\frac{d}{m}$$

for some $\mathcal{C} > 0$.

Assumption 3. For dimension $d$, we constrain the value of $m$ such that $m = o(d)$.

Note that Assumption 3 is important: in the case $m = d$, the Nyström approximation recovers the Hessian, i.e., $HH^{\dagger}H = H$, and the method turns into Newton's method. In the next lemma, we obtain a lower bound on the *minimum* eigenvalue and an upper bound on the *maximum* eigenvalue of $(N_t + \rho_t I)^{-1}$.

Lemma 2. Suppose that Assumptions 1 and 2 hold.
Let $w_t$ be an iterate obtained by Algorithm 1. Then, for some $m$, the maximum and minimum eigenvalues of $(N_t + \rho_t I)^{-1}$ satisfy

$$\lambda_{\min}[(N_t+\rho_t I)^{-1}] \ge \frac{1}{\frac{\mathcal{C}L_g^2 d}{m\lambda} + c_1\|g_t\|^{\gamma}} \quad\text{and}\quad \lambda_{\max}[(N_t+\rho_t I)^{-1}] = \frac{1}{c_1\|g_t\|^{\gamma}}. \tag{16}$$

Proof. First, we bound the *minimum* eigenvalue of $(N_t + \rho_t I)^{-1}$:

$$\begin{aligned}
\lambda_{\min}[(N_t+\rho_t I)^{-1}] &= \frac{1}{\lambda_{\max}(N_t+\rho_t I)} \\
&\ge \frac{1}{\lambda_{\max}\big(H_tS_t(S_t^{\top}H_tS_t)^{\dagger}S_t^{\top}H_t\big) + \rho_t} \\
&\ge \frac{1}{\|H_t\|^2\,\|S_t^{\top}S_t\|\,\|(S_t^{\top}H_tS_t)^{-1}\| + \rho_t} \\
&\ge \frac{1}{L_g^2\,\mathcal{C}\frac{d}{m}\,\frac{1}{\lambda} + \rho_t} \qquad\qquad(17) \\
&= \frac{1}{\frac{\mathcal{C}L_g^2 d}{m\lambda} + c_1\|g_t\|^{\gamma}},
\end{aligned}$$

where the third inequality follows from $\|H\| \le L_g$, Lemma 1, and the fact that, since $f$ is strongly convex and $m \ll d$, $S^{\top}HS \succeq \lambda I$. Next, we bound the *maximum* eigenvalue of $(N_t + \rho_t I)^{-1}$:

$$\lambda_{\max}[(N_t+\rho_t I)^{-1}] = \frac{1}{\lambda_{\min}(N_t+\rho_t I)} = \frac{1}{\rho_t} = \frac{1}{c_1\|g_t\|^{\gamma}}.$$

Since $N_t$ is positive semi-definite, $\rho_t$ is the *minimum* eigenvalue of $(N_t + \rho_t I)$. This completes the proof.

## 5.1 Exactness Of The Nyström Approximation

Here, we present a result bounding the distance between the Hessian and the Nyström approximation based on the number of columns $m$, or the rank of $M$. Kumar et al. (2009) showed the following stronger result: if the rank of $M$ equals the rank of $H$, then the Nyström approximation is exact.

Theorem 2 ((Kumar et al., 2009, Theorem 3)). Suppose $H \in \mathbb{R}^{d\times d}$ is a positive semi-definite matrix with $\operatorname{rank}(H) = r \le d$. Consider the Nyström approximation $N = CM^{\dagger}C^{\top}$ with $\operatorname{rank}(M) = r \le m \le d$, where $m$ is the number of columns picked randomly. Then the Nyström approximation is exact, i.e.,

$$\|H - N\|_F = 0,$$

where $\|\cdot\|_F$ is the Frobenius norm.

Note that $\|A\|_2 \le \|A\|_F$ for any matrix $A$.
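Theorem 2 is easy to observe numerically (a small illustrative sketch, assuming NumPy is available):

```python
import numpy as np

# Illustration of Theorem 2 (exactness): when rank(M) = rank(H) = r,
# the Nystrom approximation N = C M^dagger C^T reproduces H exactly.
rng = np.random.default_rng(2)
d, r, m = 30, 5, 8                      # sample m >= r columns
X = rng.normal(size=(r, d))
H = X.T @ X                             # PSD with rank(H) = r
idx = rng.choice(d, size=m, replace=False)
C = H[:, idx]                           # d x m sampled columns
M = H[np.ix_(idx, idx)]                 # m x m intersection matrix
assert np.linalg.matrix_rank(M) == r    # generic columns give rank(M) = r
N = C @ np.linalg.pinv(M, rcond=1e-10) @ C.T
assert np.linalg.norm(H - N, ord="fro") < 1e-6  # exact up to round-off
```

With generic data, any $m \ge r$ randomly chosen columns almost surely give $\operatorname{rank}(M) = r$, which is why the assertion on the rank of $M$ passes here; the adversarial case discussed in Remark 2 below is a measure-zero event for such matrices.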
From the above theorem, it is easy to see that the Nyström approximation reproduces exactly the same singular values when $\operatorname{rank}(M) = r$. Hence, we can expect to achieve the same convergence as Newton's method, or at least superlinear convergence, when the number of chosen columns satisfies $m \ge r$ and $\operatorname{rank}(H) = \operatorname{rank}(M) = r$. Moreover, it tells us that when $\operatorname{rank}(M) < r$, we cannot achieve quadratic convergence, since the distance between the Hessian and the Nyström approximation is only bounded from above and is not exactly zero.

Remark 2. To have the least possible value of $m$ (i.e., $m = r$) that satisfies the above theorem, we would need to choose exactly those $r$ independent columns of $H$, which is difficult due to the randomness involved in choosing the $m$ columns. In short, when $\operatorname{rank}(H) = r$, choosing the $r$ independent columns that form the Nyström approximation becomes a feature-selection problem. The usual convergence guarantee is therefore probabilistic, due to the randomness involved in selecting $m$ columns, and the convergence rate depends on the number of randomly chosen columns $m = |\Omega|$.

## 5.2 Bound On The Difference Between NysReg-Gradient's And Newton's Direction

Assumption 4. For all $x, y$, the gradient is Lipschitz continuous, i.e., $\|\nabla f(x) - \nabla f(y)\| \le L_g\|x - y\|$.

Lemma 3. Suppose that Assumption 1 holds. Let $\{w_t\}$ be the sequence generated by Algorithm 1. If $m > 64k\vartheta/\varepsilon^4$, then

$$\|p_t - p_t^N\| \le \frac{1}{\lambda}\,(U_{Nys} + c_1\|g_t\|^{\gamma})\,\|p_t\|,$$

with probability at least $1 - \varrho$, where $U_{Nys}$ is the upper bound of $\|H_t - N_t\|$ given in Theorem 1. Moreover, if $\operatorname{rank}(M) = \operatorname{rank}(H)$, then

$$\frac{\|p_t - p_t^N\|}{\|p_t\|} \le \frac{c_1}{\lambda}\|g_t\|^{\gamma},$$

with probability at least $1 - \varrho$ given in Theorem 1.

Proof. Let Newton's direction be $p_t^N = -\nabla^2 f(w_t)^{-1}g_t$ and the regularized Nyström direction be $p_t = -(N_t + \rho_t I)^{-1}g_t$. Since $f$ is strongly convex, $\lambda_{\min}(\nabla^2 f(w)) \ge \lambda$; let $\nabla^2 f(w) = H$. Then we have $\|H_t^{-1}\| \le \frac{1}{\lambda}$ for $t > 0$.
Next, the distance between the directions can be bounded as

$$\begin{aligned}
\|p_t - p_t^N\| &= \|H_t^{-1}(H_t p_t + g_t)\| \\
&= \|H_t^{-1}(H_t - (N_t + \rho_t I))p_t\| \\
&\le \|H_t^{-1}\|\,\|(H_t - (N_t + \rho_t I))p_t\| \\
&= \|H_t^{-1}\|\,\|(H_t - N_t)p_t - \rho_t p_t\| \\
&\le \|H_t^{-1}\|\,\|(H_t - N_t)p_t\| + \|H_t^{-1}\|\,\|\rho_t p_t\| \\
&\le \|H_t^{-1}\|\,\|H_t - N_t\|\,\|p_t\| + c_1\|H_t^{-1}\|\,\|g_t\|^{\gamma}\|p_t\|. \qquad(18)
\end{aligned}$$

- **Case a)** Here we bound $\|p_t - p_t^N\|$ when $m > 64k\vartheta/\varepsilon^4$ (Theorem 1), or $\operatorname{rank}(M_t) < \operatorname{rank}(H_t)$. Using Theorem 1 in (18), we get

$$\|p_t - p_t^N\| \le \|H_t^{-1}\|\|H_t - N_t\|\|p_t\| + c_1\|H_t^{-1}\|\|g_t\|^{\gamma}\|p_t\| \le \frac{1}{\lambda}(U_{Nys} + c_1\|g_t\|^{\gamma})\|p_t\|,$$

where $\|H_t^{-1}\| \le \frac{1}{\lambda}$ and $\|H_t - N_t\| \le U_{Nys}$.

- **Case b)** Here we obtain a result for $\operatorname{rank}(M) = \operatorname{rank}(H)$. Using Theorem 2 in (18), we get

$$\|p_t - p_t^N\| \le \|H_t^{-1}\|\|H_t - N_t\|\|p_t\| + c_1\|H_t^{-1}\|\|g_t\|^{\gamma}\|p_t\| = c_1\|H_t^{-1}\|\|g_t\|^{\gamma}\|p_t\|.$$

Hence, we get

$$\frac{\|p_t - p_t^N\|}{\|p_t\|} \le c_1\|H_t^{-1}\|\|g_t\|^{\gamma} \le \frac{c_1}{\lambda}\|g_t\|^{\gamma},$$

where $\|H_t^{-1}\| \le \frac{1}{\lambda}$. This completes the proof.

Remark 3. $H$ may not be a full-rank matrix if $f$ is not strongly convex. Then, disregarding Assumption 1, case (b) in the above lemma holds for $m = d$ if $f$ is strongly convex, and possibly for $m < d$ if $f$ is not strongly convex.

## 5.3 Linear Convergence

Next, we state a lemma on the search direction that is used to obtain linear convergence.

Lemma 4. Let $p_t$ be the descent direction of Algorithm 1 at iteration $t$. Then

$$g_t^{\top}p_t \le -\rho_t\|p_t\|^2.$$

Proof. Let $p_t = -(N_t + \rho_t I)^{-1}g_t$ be the search direction.
Next, consider

$$-g_t^{\top}p_t = g_t^{\top}(N_t+\rho_t I)^{-1}g_t = \big((N_t+\rho_t I)^{-1}g_t\big)^{\top}(N_t+\rho_t I)(N_t+\rho_t I)^{-1}g_t = p_t^{\top}(N_t+\rho_t I)p_t \ge \rho_t\|p_t\|^2,$$

where the last inequality follows from the fact that $N_t$ is positive semidefinite. This completes the proof.

Finally, in the next theorem, we prove linear convergence.

Theorem 3. Suppose that Assumptions 1–4 hold. Let $\{w_t\}$ be the sequence generated by Algorithm 1 and $w_*$ be the optimal point. Then there exists $0 < \xi < 1$ such that, with probability at least $1 - 2\exp(-m)$, we have

$$f(w_{t+1}) - f(w_*) \le \xi\,(f(w_t) - f(w_*)),$$

where

$$\xi = \left(1 - 4\beta(1-\beta)\frac{m\lambda^2\rho_t}{L_g(\mathcal{C}dL_g^2 + m\lambda\rho_t)}\right).$$

Proof.
Since $\nabla f$ is Lipschitz continuous, we have

$$f(w_{t+1}) \le f(w_t) + g_t^{\top}(w_{t+1}-w_t) + \frac{L_g}{2}\|w_{t+1}-w_t\|^2 = f(w_t) + \eta_t g_t^{\top}p_t + \frac{\eta_t^2 L_g}{2}\|p_t\|^2.$$

Let $u^2 = -g_t^{\top}p_t$. Then, using Lemma 4, we have $\|p_t\|^2 \le \frac{-g_t^{\top}p_t}{\rho_t} = \frac{u^2}{\rho_t}$, and we get

$$f(w_{t+1}) \le f(w_t) + \eta_t g_t^{\top}p_t + \frac{\eta_t^2 L_g}{2}\|p_t\|^2 \le f(w_t) - \eta_t u^2 + \frac{\eta_t^2 L_g}{2\rho_t}u^2 = f(w_t) - \eta_t\left(1 - \frac{\eta_t L_g}{2\rho_t}\right)u^2.$$

Hence, the exit condition of the backtracking line search, $f(w_t + \eta_t p_t) \le f(w_t) + \beta\eta_t g_t^{\top}p_t$, is satisfied if we take

$$\left(1 - \frac{\eta_t L_g}{2\rho_t}\right) = \beta,$$

i.e., step size $\eta_t = 2(1-\beta)\rho_t/L_g$. Therefore, the line search stops with $\eta_t \ge 2(1-\beta)\rho_t/L_g$, and we have

$$f(w_{t+1}) \le f(w_t) - 2\beta(1-\beta)\frac{\rho_t}{L_g}u^2. \tag{19}$$

Since $u^2 = -g_t^{\top}p_t$, from Lemma 2,

$$u^2 = -g_t^{\top}p_t = g_t^{\top}(N_t+\rho_t I)^{-1}g_t \ge \frac{m\lambda}{\mathcal{C}dL_g^2 + m\lambda\rho_t}\|g_t\|^2.$$

Hence, by (19), we get

$$f(w_{t+1}) \le f(w_t) - 2\beta(1-\beta)\frac{m\lambda\rho_t}{L_g(\mathcal{C}dL_g^2 + m\lambda\rho_t)}\|g_t\|^2. \tag{20}$$

Subtracting $f(w_*)$ from both sides of (20) and using the strong convexity of $f$, which gives $\|g_t\|^2 \ge 2\lambda(f(w_t) - f(w_*))$, we obtain

$$f(w_{t+1}) - f(w_*) \le f(w_t) - f(w_*) - 4\beta(1-\beta)\frac{m\lambda^2\rho_t}{L_g(\mathcal{C}dL_g^2 + m\lambda\rho_t)}(f(w_t) - f(w_*))$$
$$= \left(1 - 4\beta(1-\beta)\frac{m\lambda^2\rho_t}{L_g(\mathcal{C}dL_g^2 + m\lambda\rho_t)}\right)(f(w_t) - f(w_*)).$$

This completes the proof. $\square$

## 5.4 Closeness To The Hessian Inverse

In this subsection, we discuss the closeness of the inverse of the regularized Nyström approximation to the Hessian inverse. Let $H$ be the Hessian of the objective function (1), and consider a regularized Newton's method with any $\rho > 0$. The inverse of the regularized Hessian at $w$ is then given by $(H + \rho I)_w^{-1} = (\nabla^2 f(w) + \rho I)^{-1}$. Let the regularized Nyström approximation at $w$ be $(Z_wZ_w^{\top} + \rho I)^{-1}$. The distance between the regularized inverses is then bounded as

$$\|(Z_wZ_w^{\top} + \rho I)^{-1} - (H + \rho I)_w^{-1}\| \le \frac{\|J_w\|}{\rho(\|J_w\| + \rho)}, \tag{21}$$

where $0 < \|J_w\| = \|H - Z_wZ_w^{\top}\| \le \|H - H_k\| + \varepsilon\sum_{i=1}^{d}(H_{ii})^2$, which follows from (4), whereas (21) follows from (Frangella et al., 2021, Proposition 3.1). Note that the rank of the Hessian can possibly be $r$ when the objective function is not ℓ2-regularized.

## 6 Stochastic Variant Of The Regularized Nyström Gradient Method

In this section, we discuss the stochastic variant of the Nyström gradient method. In machine learning, it is usual to work with a large number of samples, which makes it computationally challenging to compute the full gradient at every iteration. To address this challenge, we employ a stochastic gradient together with the Nyström approximation. In this stochastic variant of NGD, we compute a mini-batch stochastic gradient at every iteration and compute the regularized Nyström approximation $N_\tau + \rho_\tau I$ once per epoch, with $\rho_\tau = c_1\|g_{t-1,\tau}\|^{\gamma}$.⁹ We call this variant NSGD. Furthermore, we use a diminishing step size $\eta_t$ for the stochastic variant NSGD.

⁹Note that the regularizer $\rho_\tau$ uses a stochastic gradient, not the full gradient. We update $\rho_\tau$ at the beginning of epoch $\tau$, using $\nabla f_B(w_t)$ from the $(\tau-1)$-th epoch.
| Proposed method | Regularizer $\rho$ | Value of γ | Search direction |
|---|---|---|---|
| NSGD | $\rho_\tau = c_1\|g_{t-1,\tau}\|^{\gamma}$ | $\gamma = 1/2$ | $p_{t-1} = (N_\tau + \rho_\tau I)^{-1} g_{t-1,\tau}$ |

Table 2: Search direction and γ for NSGD

Algorithm 2 NysReg-Stochastic gradient: NSGD Algorithm
Parameters: update frequency $\ell$ and initial step size $\eta_0$
1: **Initialize** $w_0$, $\tau = 1$
2: **for** $t = 1, 2, \ldots$ **do**
3: randomly pick a batch $B \sim \{1, \ldots, n\}$
4: $g_{t-1,\tau} = \nabla f_B(w_{t-1})$
5: **if** $(t-1) \bmod \ell = 0$ **then**
6: randomly pick an index set $\Omega \subseteq \{1, 2, \ldots, d\}$ such that $m = |\Omega|$
7: compute $C_\tau$ (the $\Omega$ columns of the Hessian) at $w_{t-1}$
8: compute $Z_\tau$ using (7) and compute $\rho_\tau$
9: $Q_\tau = \frac{1}{\rho_\tau^2} Z_\tau \big(I + \frac{1}{\rho_\tau} Z_\tau^{\top} Z_\tau\big)^{-1}$
10: $\tau = \tau + 1$
11: **end if**
12: compute $p_{t-1}$ using (11)
13: $w_t = w_{t-1} - \eta_t p_{t-1}$
14: **end for**

## 7 Numerical Experiments

In this section, we present the numerical results for the proposed algorithms. First, we describe the experimental setup. The experiments were performed in MATLAB R2018a on an Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 96 cores and in MATLAB R2019a on an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz with 32 cores. We implemented the existing and proposed methods in MATLAB using the SGDLibrary (Kasai, 2017). We solve a standard learning problem, the ℓ2-regularized logistic regression:

$$\min_{w} F(w) = \frac{1}{n}\sum_{i=1}^{n}\log\left[1 + \exp(-b_i a_i^{T} w)\right] + \frac{\lambda}{2}\|w\|^2,$$

where $a_i \in \mathbb{R}^d$ is the feature vector and $b_i \in \{\pm 1\}$ is the target label of the $i$-th sample, and $\lambda$ is the ℓ2 regularization parameter. We evaluate the methods on the benchmark datasets given in Table 3. All datasets are binary classification problems and are available from LIBSVM (Chang & Lin, 2011). We demonstrate the performance of the proposed and existing methods on the ℓ2-regularized logistic regression problem.
We optimize the constant $c_1$ in the regularizer $\rho_t = c_1\|g_t\|^{\gamma}$ using a grid search over $c_1 \in \{10^{0}, 10^{-1}, 10^{-2}, 10^{-3}\}$. For each method, the best-performing model was selected based on the minimum cost on the training set. For the numerical experiments conducted on the ℓ2-regularized squared SVM, see Appendix Section B.

| Dataset | Dim | Train | Test | Density |
|---|---|---|---|---|
| adult¹ | 123 + 1 | 32,561 | 16,281 | 0.1128 |
| gisette¹ | 5,000 + 1 | 6,000 | 1,000 | 0.9910 |
| epsilon¹ | 2,000 + 1 | 50,000 | 50,000 | 1 |
| real-sim¹ | 20,958 + 1 | 57,909 | 14,400 | 0.0024 |
| w8a¹ | 300 + 1 | 49,749 | 14,951 | 0.0388 |

Table 3: Details of the datasets used in the experiments. ¹Available from LIBSVM (Chang & Lin, 2011), https://www.csie.ntu.edu.tw/cjlin/libsvm/

First, we study the performance of NGD, NGD1, and NGD2 to see the behavior of the different regularizers $\rho_t = c_1\|g_t\|^{\gamma}$, where $\gamma = 1/2$ for NGD, $\gamma = 1$ for NGD1, and $\gamma = 2$ for NGD2. We run ℓ2-regularized logistic regression with $\lambda = 10^{-5}$ on the *ijcnn1* and *adult* datasets.

![12_image_0.png](12_image_0.png)

Figure 1: The first two figures (from left) show the experiments on *adult* with m = 25; the last two show the experiments on *ijcnn1* with m = 5. (a) and (c) show the cost with respect to CPU time; (b) and (d) show the test error with respect to CPU time.

Figure 1 shows the training cost and test error with respect to CPU time for *adult* and *ijcnn1*. NGD1 outperforms NGD and NGD2 on the *adult* dataset, while NGD outperforms NGD1 and NGD2 on the *ijcnn1* dataset. Therefore, in the next subsection, we consider NGD and NGD1 to compare their behavior for varying numbers of selected columns.

## 7.1 Comparison Of Strength For Varying Numbers Of Selected Columns

In this subsection, we demonstrate the comparison of various sketch sizes (no.
of selected columns) for the high-density dataset *gisette* and the sparse dataset *w8a* on logistic regression with $\lambda = 10^{-5}$, to observe the robustness of the proposed methods. We keep the same $c_1$ in $\rho_t$ for each dataset when comparing different numbers of selected columns. Figure 2 shows the numerical performance of NGD1 on the *gisette* dataset for m = 50 (1%), 250 (5%), 500 (10%), and 1000 (20%). As shown in Figure 2, due to the high density, only m = 250 (5%) columns are sufficient to reach the minimum value of the objective function within comparable CPU time; similar behavior can be observed for the test error. When m = 1000, the decrease in the gradient norm surpasses all cases with m < 1000, while m = 250 and m = 500 show a similar reduction in the gradient norm.

![12_image_1.png](12_image_1.png)

Figure 2: Column comparison on the *gisette* dataset

![13_image_0.png](13_image_0.png)

Figure 3 shows the numerical performance of NGD on the *w8a* dataset for m = 30 (10%), 60 (20%), 100 (33%), and 150 (50%). Due to the sparsity of the data, more columns need to be picked to reach the minimum value of the objective function within comparable CPU time. All cases of m exhibit almost the same test error. For m = 150 and m = 100, the decrease in the gradient norm is comparable, whereas for m = 30 and m = 60 it does not decrease significantly.

## 7.2 Comparison With Randomized Subspace Newton

In this subsection, we compare the NGDs with the randomized subspace Newton (RSN) method (Gower et al., 2019). RSN computes the iterate $w_{t+1} = w_t - (1/L)\,S_t(S_t^{\top}H_tS_t)^{\dagger}S_t^{\top}g_t$ with a sketch matrix $S_t \in \mathbb{R}^{d\times m}$. For a fair comparison, we run RSN with Armijo backtracking line search (instead of the fixed step size $1/L$) and compute the RSN update exactly as given in (Gower et al., 2019, Definition 4) for generalized linear models.
Also, we keep the same value of $m$ for both the NGDs and RSN, and run logistic regression with $\lambda = 10^{-5}$. We compare the NGDs and RSN in Figure 4 for the *realsim* dataset with m = 2000, in Figure 5 for the *gisette* dataset with m = 250, and in Figure 6 for the *w8a* dataset with m = 30. As shown in Figures 4 to 6, RSN is unable to outperform the proposed methods. For the *realsim* dataset (Figure 4), NGD1 outperforms all methods in terms of achieving the minimum cost, and NGD2 outperforms all methods in terms of test error. For the *gisette* dataset (Figure 5), NGD1 outperforms all methods in terms of both minimum cost and test error. Finally, for the *w8a* dataset (Figure 6), NGD outperforms all methods in terms of achieving the minimum cost, and NGD1 takes over at the later stage in terms of test error. In conclusion, the Nyström approximation is better than the approximation used by RSN: RSN captures only a limited set of $m^2$ elements of the Hessian, whereas Nyström captures a substantially larger set of $dm$ elements, which makes the Nyström approximation a more comprehensive and accurate representation of the Hessian matrix.

![13_image_1.png](13_image_1.png)

Figure 4: Comparison with RSN for the *realsim* dataset with m = 2000

## 7.3 Comparison Of Newton Sketch And Nyström Approximation

In this subsection, we compare the NGDs with the Newton sketch (NS) (Pilanci & Wainwright, 2017). As explained in Section 4.2, the proposed method can be represented as an NS method with certain structural modifications. Hence, we compare the raw Nyström approximation with the Hessian approximation of NS in terms of closeness to the Hessian. NS computes the Hessian approximation as $(\nabla^2 f(w)^{1/2})^{\top}P^{\top}P(\nabla^2 f(w)^{1/2})$; it is important to note that $P \in \mathbb{R}^{m\times n}$, where $n$ is the number of samples and $m$ is the sketch size.

![14_image_0.png](14_image_0.png)

Figure 5: Comparison with RSN for the *gisette* dataset with m = 250

![14_image_1.png](14_image_1.png)

Figure 6: Comparison with RSN for the *w8a* dataset with m = 30
In this comparison, we keep the same value of $m$ for both Nyström and NS. In Figure 7, we conduct numerical experiments on the *w8a*, *realsim*, and *gisette* datasets, comparing the norm of the difference from the Hessian and the CPU time as $m$ increases. These experiments use logistic regression; for *realsim* and *gisette* we use logistic regression without ℓ2 regularization, whereas for the *w8a* dataset we use ℓ2-regularized logistic regression with $\lambda = 10^{-5}$, so the Hessian for *w8a* has full rank. In Figure 7, (a) and (d) show the performance on the *w8a* dataset, (b) and (e) on the *realsim* dataset, and (c) and (f) on the *gisette* dataset. The top row shows the CPU time of computing the Nyström approximation and the Newton sketch, and the bottom row shows the distance from the Hessian $H$ as $m$ increases. As shown in Figure 7 (a) and (d), the Nyström approximation outperforms the Newton sketch as $m$ increases, with less CPU time on *w8a*. Similarly, in Figure 7 (b) and (e), the Nyström approximation approaches the Hessian as $m$ increases, specifically after m = 8000; since Nyström involves the inverse of an m × m matrix, it takes more CPU time than NS after m = 5000. In Figure 7 (c) and (f), the norm of the distance between the Hessian and the Nyström approximation decreases significantly around m ≈ 1000, while Nyström takes more CPU time than NS after m ≈ 1000. However, we do not need to compute the Nyström approximation for large $m$: as seen in Figures 2 and 3, about 5% to 15% of $d$ already gives a sufficient decrease in the objective function. From the performance illustrated in Figure 7, two significant observations can be made. Firstly, one can observe in Figure 7 (d) that the exactness bound of Theorem 2 holds in practice: the distance becomes almost zero as the number of columns $m$ covers all of the columns (i.e., the rank of $H$).
Secondly, it is worth noting that the random matrix $P \in \mathbb{R}^{m\times n}$ in the Newton sketch depends on $n$, which is usually larger than the dimension $d$; the Newton sketch (Pilanci & Wainwright, 2017) therefore requires a thick random matrix compared with the thin random matrix $S$ of the Nyström approximation.

![15_image_0.png](15_image_0.png)

Figure 7: Comparison between the Nyström approximation and the Newton sketch

## 7.4 Comparison With Existing Deterministic Methods

We compared the proposed methods NGD, NGD1, and NGD2 with classical first-order gradient descent (GD) and the state-of-the-art second-order Hessian approximation method L-BFGS (Liu & Nocedal, 1989). The memory size used in the L-BFGS method was set to 20. We report the training cost and the test error with respect to both iterations and CPU time, and we also show the norm of the gradient with respect to iterations. Figure 8 shows experiments on logistic regression with $\lambda = 10^{-5}$ on the *gisette* dataset with m = 500. As shown in Figure 8, NGD1 outperforms all other methods, in terms of both CPU time and iterations, for the training cost and the norm of the gradient. L-BFGS takes more CPU time than all NGD variants down to a cost of $10^{-5}$. GD shows improvements after the 20th iteration and performs well in terms of test error, and NGD2 shows some increase in test accuracy after the 30th iteration. In Figure 9, we conduct experiments on logistic regression with $\lambda = 10^{-5}$ on the *epsilon* dataset with m = 200. Figure 9 shows that the NGDs perform almost identically and outperform L-BFGS and GD in terms of training cost, test error, and test accuracy. Also, NGD and NGD1 outperform all other methods in terms of the norm of the gradient.

![15_image_1.png](15_image_1.png)

Figure 8: Experiments on the *gisette* dataset with m = 500.
![16_image_0.png](16_image_0.png)

## 7.5 Numerical Experiments For The Stochastic Regularized Nyström Gradient

We compare the proposed stochastic variant NSGD with the stochastic gradient descent method and with stochastic second-order approximation methods, namely SVRG-LBFGS (Kolte et al., 2015), SVRG-SQN (Moritz et al., 2016), and SQN (Byrd et al., 2016). The memory size used in the L-BFGS updates was set to 20, which is a commonly used value (Kolte et al., 2015; Byrd et al., 2016).

![16_image_1.png](16_image_1.png)

Figure 10: The first two figures (from left) show the experiments on the *a8a* dataset and the last two show the experiments on the *epsilon* dataset

Figure 10 shows that NSGD outperforms the existing methods in terms of training cost on the *a8a* dataset. However, it could not achieve a better test error than SVRG-SQN and SVRG-LBFGS. Moreover, SVRG-SQN outperforms NSGD and the other existing methods in terms of both training cost and test error on the *epsilon* dataset.

## 7.6 Numerical Experiments For Deep Learning

We also evaluated the performance of the Nyström SGD on well-known deep models on the ImageNet dataset.

Experimental setup: We compared our method with the first-order method SGD and the well-known approximate second-order method KFAC (Martens & Grosse, 2015) on the ResNet152 (He et al., 2016) and EfficientNet (Tan & Le, 2019) models. For Nyström SGD, we used ρ = 0.1 and fixed m = log₂|w|, where |w| is the number of parameters in the respective model. We used a batch size of 128 and a random sample of size min{6400, n × 0.01} to compute the partial Hessian C for Nyström SGD. The update frequency used to re-estimate the preconditioner in KFAC and its variants was set to 200, as used in their experiments. The ImageNet results were computed on a Quadro RTX 8000 GPU.

![16_image_2.png](16_image_2.png)

Results: Figure 12 presents the results for ResNet152 and EfficientNet on the ImageNet dataset.
The proposed method outperformed both SGD and KFAC on both models in terms of training loss as well as test accuracy, showing the better optimization and generalization ability of the trained models. Table 4 compares the computational time of the methods. The per-update computational time of Nyström SGD on ResNet152 is 1.703 seconds, which is slightly slower than SGD and KFAC; further speeding up Nyström SGD is an interesting direction for future work. Figure 11 shows the effect of the ρ parameter for ResNet18: the value of ρ affects the model performance, and we found that setting ρ = 0.1 performs well in practice.

Figure 11: Effect of ρ on the test accuracy for ResNet18 on the ImageNet dataset.

Table 4: Per-iteration computational time (seconds). [*KFAC on EfficientNet with batch size 128 could not fit into memory]

| Model | Method | Batch | Update Time | Hessian |
|--------------|--------|-------|-------------|---------|
| ResNet152 | SGD | 128 | 1.006 | - |
| ResNet152 | KFAC | 128 | 1.064 | - |
| ResNet152 | NSGD | 128 | 1.060 | 0.643 |
| EfficientNet | SGD | 128 | 0.341 | - |
| EfficientNet | KFAC | 64* | 1.173 | - |
| EfficientNet | NSGD | 128 | 0.347 | 2.620 |

Figure 12: Results on ImageNet using ResNet152 and EfficientNet, respectively.

## 8 Application: Tumor Detection

Brain MRI is the most standard test for the diagnosis of various brain diseases, including tumor detection. Given the complexity of the diagnosis process, researchers are shifting towards deep neural networks. First-order optimizers are the most common choice in deep learning. However, with limited sample sizes, it is difficult to train a stable and well-generalizing model with a large number of parameters using first-order optimizers. We study *brain MRI images for brain tumor detection*. This dataset contains 253 MRI images, of which 155 cases have tumors and 98 cases are of healthy brains. We use a transfer learning approach to detect the tumor.
Transfer learning is widely used in brain MRI and biological problems where the number of samples is limited. In deep models, the bottom layers perform generic tasks such as edge detection, whereas the top layers are task-specific. Hence, the common practice is to fine-tune the top layers only. The goal is to minimize the objective function

$$\operatorname*{min}_{w}f(w),\quad{\mathrm{where}}\quad f(w)=\sum_{i=1}^{n}f_{i}(w),$$

where w ∈ R d, f : R d → R, and fi is the *logistic regression* loss corresponding to the i-th sample of the brain tumor *classification* problem; i.e., the data has dimension d and n samples. We propose an NGD algorithm for fine-tuning the top layers of pre-trained deep networks. Specifically, we compute a partial column Hessian of size d × m with m ≪ d uniformly randomly selected variables (d is the number of parameters), then use the *Nyström method* to approximate the full Hessian matrix.

Figure 13: Sample images from the MRI dataset dat (2020). Top row: tumor; bottom row: healthy.

Figure 14: Comparison of NGDs with existing methods on the MRI dataset

Figure 14 shows that NGD1 outperforms the other methods in terms of training cost in the least CPU time. Newton's method performs best in terms of decreasing the norm of the gradient, and all NGDs behave competitively with each other on this measure. GD and L-BFGS do not give competitive results in terms of test accuracy and test error, whereas all NGDs and Newton's method achieve better test accuracy and test error.

## 9 Summary

In this paper, we introduce the regularized Nyström method to approximate the Hessian and propose both deterministic and stochastic optimization methods to minimize the objective function. We present a comprehensive convergence analysis and results relying on the distance between the Hessian and its Nyström approximation.
Furthermore, we conducted extensive numerical experiments to evaluate the performance of the proposed methods against RSN (Gower et al., 2019), NS (Pilanci & Wainwright, 2017), and other existing first-order and quasi-Newton methods. The numerical results show that the proposed methods are robust and approximate the Hessian efficiently by selecting approximately 5% (in high-density scenarios) or 15–20% (in high-sparsity scenarios) of the dimension. Moreover, we extend the experiments to the domain of deep learning and employ our proposed method in an application involving brain tumor detection. The results of this application highlight the promising impact of our proposed methods in real-world scenarios.

## References

Dataset: brain MRI images for brain tumor detection. https://www.kaggle.com/datasets/navoneel/brain-mri-images-for-brain-tumor-detection, 2020.

Naman Agarwal, Brian Bullins, and Elad Hazan. Second-order stochastic optimization for machine learning in linear time. *Journal of Machine Learning Research, JMLR*, 18:116:1–116:40, 2017.

Richard H Byrd, Samantha L Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. *SIAM Journal on Optimization*, 26(2):1008–1031, 2016.

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. *ACM Transactions on Intelligent Systems and Technology*, 2:27:1–27:27, 2011.

Michal Derezinski, Rajiv Khanna, and Michael W Mahoney. Improved guarantees and a multiple-descent curve for column subset selection and the Nyström method. *Advances in Neural Information Processing Systems*, 33:4953–4964, 2020.

Michal Derezinski, Jonathan Lacotte, Mert Pilanci, and Michael W Mahoney. Newton-LESS: Sparsification without trade-offs for the sketched Newton update. *Advances in Neural Information Processing Systems*, 34:2835–2847, 2021.

Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning.
*Journal of Machine Learning Research, JMLR*, 6:2153–2175, 2005.

Zachary Frangella, Joel A Tropp, and Madeleine Udell. Randomized Nyström preconditioning. *arXiv preprint arXiv:2110.02820*, 2021.

Terunari Fuji, Pierre-Louis Poirion, and Akiko Takeda. Randomized subspace regularized Newton method for unconstrained non-convex optimization. *arXiv preprint arXiv:2209.04170*, 2022.

Alex Gittens. The spectral norm error of the naive Nyström extension. *arXiv preprint arXiv:1110.5305*, 2011.

Robert Gower, Dmitry Kovalev, Felix Lieder, and Peter Richtárik. RSN: Randomized subspace Newton. *Advances in Neural Information Processing Systems*, 32, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 770–778, 2016.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. *Advances in Neural Information Processing Systems*, 26:315–323, 2013.

Hiroyuki Kasai. SGDLibrary: A MATLAB library for stochastic optimization algorithms. *Journal of Machine Learning Research, JMLR*, 18:215:1–215:5, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), *Proceedings of the International Conference on Learning Representations, ICLR*, 2015.

Ritesh Kolte, Murat Erdogdu, and Ayfer Ozgur. Accelerating SVRG via second-order information. In *NIPS Workshop on Optimization for Machine Learning*, 2015.

Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. On sampling-based approximate spectral decomposition. In *International Conference on Machine Learning (ICML)*, 2009. URL http://www.sanjivk.com/nys_col_ICML.pdf.

Jonathan Lacotte, Yifei Wang, and Mert Pilanci. Adaptive Newton sketch: Linear-time optimization with quadratic convergence and effective Hessian dimensionality.
In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning, ICML*, volume 139, pp. 5926–5936, 2021.

Dong-Hui Li, Masao Fukushima, Liqun Qi, and Nobuo Yamashita. Regularized Newton methods for convex minimization problems with singular solutions. *Computational Optimization and Applications*, 28(2):131–147, 2004.

Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. *Mathematical Programming*, 45(1):503–528, 1989.

James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In *International Conference on Machine Learning*, pp. 2408–2417. PMLR, 2015.

Philipp Moritz, Robert Nishihara, and Michael Jordan. A linearly-convergent stochastic L-BFGS algorithm. In *Proceedings of the International Conference on Artificial Intelligence and Statistics, AISTATS*, pp. 249–258, 2016.

Mert Pilanci and Martin J Wainwright. Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence. *SIAM Journal on Optimization*, 27(1):205–245, 2017.

Herbert Robbins and Sutton Monro. A stochastic approximation method. *The Annals of Mathematical Statistics*, 22(3):400–407, 1951.

Nicol N Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-Newton method for online convex optimization. In *Artificial Intelligence and Statistics*, pp. 436–443. PMLR, 2007.

Ameet Talwalkar. *Matrix approximation for large-scale learning*. PhD thesis, New York University, 2010.

Ameet Talwalkar and Afshin Rostamizadeh. Matrix coherence and the Nyström method. *arXiv preprint arXiv:1408.2044*, 2014.

Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In *International Conference on Machine Learning (ICML)*, pp. 6105–6114, 2019.

Hardik Tankaria, Shinji Sugimoto, and Nobuo Yamashita. A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations.
*Computational Optimization and Applications*, 82(1):61–88, 2022.

Joel A Tropp, Alp Yurtsever, Madeleine Udell, and Volkan Cevher. Fixed-rank approximation of a positive-semidefinite matrix from streaming data. *Advances in Neural Information Processing Systems*, 30, 2017.

Kenji Ueda and Nobuo Yamashita. Convergence properties of the regularized Newton method for the unconstrained nonconvex optimization. *Applied Mathematics and Optimization*, 62:27–46, 2010.

Roman Vershynin. *High-dimensional probability: An introduction with applications in data science*, volume 47. Cambridge University Press, 2018.
Review 1:

Summary: The authors propose a new quasi-Newton-type optimization method with a Nyström approximation of the Hessian, regularized by the gradient norm. Convergence proofs for strongly convex and smooth optimization are presented, and the closeness/inexactness of the Nyström approximation is studied. Experimental results are presented for logistic regression on real-life datasets of reasonable size. Non-convex experiments for a deep learning setup are also presented.

Strengths and Weaknesses: The paper presents an extensive, high-level study of the Nyström approximation of the Hessian. This topic is novel and highly promising: it makes it possible to reduce the computational cost of second-order methods while accelerating the convergence of first-order methods with additional second-order information. The paper is well-written and the proofs seem to be correct. The experiments are also performed on various problems, including the non-convex ResNet152 and EfficientNet.
As for the weaknesses, the theoretical convergence rates are not as fast as they could be, but I think this is a byproduct of the choice of method rather than a problem of the Nyström approximation itself. Does the proposed method have a connection with the SR-1 method, as a sampled version of it?

Requested Changes: I would also recommend improving the Related Work section:
The recommended citations on Quasi-Newton methods: BFGS 
1. Charles G Broyden. Quasi-Newton methods and their application to function minimisation. Mathematics of Computation, 21:368–381, 1967.
2. Roger Fletcher. A new approach to variable metric algorithms. The Computer Journal, 13:317–322, 1970.
3. Donald Goldfarb. A family of variable-metric methods derived by variational means. Mathematics of Computation, 24:23–26, 1970.
4. David F Shanno. Conditioning of quasi-Newton methods for function minimization. Mathematics of Computation, 24:647–656, 1970.

SR-1/L-SR-1:

5. Andrew R Conn, Nicholas IM Gould, and Philippe L Toint. Convergence of quasi-Newton matrices generated by the symmetric rank one update. Mathematical Programming, 50:177–195, 1991.
6. H Fayez Khalfan, Richard H Byrd, and Robert B Schnabel. A theoretical and experimental study of the symmetric rank-one update. SIAM Journal on Optimization, 3:1–24, 1993.
7. Albert S Berahas, Majid Jahani, Peter Richtárik, and Martin Takáč. Quasi-Newton methods for machine learning: forget the past, just sample. Optimization Methods and Software, pages 1–37, 2021.

Second-order and regularised Newton methods:

8. Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108:177–205, 2006.
9. Nikita Doikov and Yurii Nesterov. Gradient regularization of Newton method with Bregman distances. Mathematical Programming, 2023.
10. Konstantin Mishchenko. Regularized Newton method with global O(1/k^2) convergence. SIAM Journal on Optimization, 33(3):1440–1462, 2023.
11. Boris Teodorovich Polyak. Newton's method and its use in optimization. European Journal of Operational Research, 181:1086–1096, 2007.
12. Roman A Polyak. Regularized Newton method for unconstrained convex optimization. Mathematical Programming, 120:125–145, 2009.
Broader Impact Concerns: n/a

==================================================

Review 2:

Summary: This paper proposes a new quasi-Newton-step algorithm for minimizing a strongly convex and smooth function. The main idea is to perform a Newton step but use the Nyström method to approximate the Hessian. The authors show that the proposed algorithm enjoys linear convergence, and the experiments show the effectiveness of the proposed algorithms.

Strengths and Weaknesses:

Strengths:

1. The authors give a rigorous proof and show that their proposed method enjoys a linear convergence rate, although GD can achieve the same rate in this case (strongly convex and smooth).
2. The authors conduct extensive experiments, and the results show the effectiveness of the proposed algorithms.

Weaknesses:

1. I have some questions about the proof. Specifically, I am not sure whether the convergence analysis (Theorem 3, which is the main theorem) is meaningful enough. After a close look, I find that the only Nyström-approximation property used in the proof is Lemma 2, namely, that the regularized matrix has bounded maximum and minimum eigenvalues. But if one replaces N_t with c*I_d (that is, uses GD, not the Nyström approximation), it seems that the analysis remains the same. That is to say: 1) the authors give several bounds on the gap between the Nyström approximation and the real Hessian, but these are not used at all in the convergence analysis; 2) the proof is essentially the proof of GD for strongly convex and smooth functions (which indeed yields linear convergence) and is not very related to Newton's method (since N_t + I has both upper and lower bounds, the proof reduces to the proof of GD). How Nys-GD improves on GD is not clear (in terms of convergence rate); on the other hand, GD is more efficient.
2. In terms of computational complexity (Page 6), I am not sure if the claim is true. Clearly, dkm > dk, so the computational complexity should be at least dkm.
Moreover, I am not sure how good or bad it is compared to other approximation methods. It would be great if the authors could add more discussion.

3. There are no theoretical guarantees for the stochastic version of the algorithm.
4. The writing of the paper can be improved. There are many typos and grammatical errors, which make the paper difficult to read. For example: Definition 1: "Let m*m be a matrix M"; Theorem 1: "Let N_k be a k-rank is a Nystrom approximation"; Remark 3: "Then disregarding Assumption 1 for case (b) in above lemma holds for d = m if f strongly convex function and may be m < d if f not strongly convex function." I am not sure about the meaning of that last sentence.

Requested Changes: They are listed in the section above.

Broader Impact Concerns: N/A

==================================================

Review 3:

Summary: This paper studies a regularized quasi-Newton method where the Hessian is approximated by its rank-k Nyström approximation. The authors propose to use gradient information to obtain an adaptive regularization value. They show that, with high probability, the suboptimality decreases after one optimization step of their algorithm. I have some concerns regarding the theory (see Strengths and Weaknesses). They also try their algorithm in practice.

Strengths and Weaknesses:

# Strength:

The authors implemented their method in many experimental settings.

# Weakness:

## Significance of the Theory

From a high-level perspective, I am quite concerned about the significance of the theoretical results presented in this paper. In particular, the main result (Theorem 3) does not leverage the fact that the sketching matrix S is random in order to provide an estimation of the Hessian. The proof of Theorem 3 is valid for any fixed sketching matrix as long as $\|SS^\top\| \leq Cd/m$. It lacks significance since:

- Theorem 3 is not a global convergence result.
- The condition number appearing is worse than the one appearing for gradient descent.
This is unlike other analyses of quasi-Newton methods, such as Gower et al. 2019 or Karimireddy et al. 2018, which involve the **relative** smoothness and strong convexity (relative to the metric induced by the Hessian).

### About Lemma 2

I have a concern regarding the proof of Lemma 2. In the paper it is mentioned that "since f is strongly convex and $m << d$ we have $S^\top HS \succeq \lambda I_d$". I believe this statement is wrong in general (i.e., we need more assumptions on S). For instance, if we were to take S = 0, it is clear that the statement is not true (any S close to 0 would also make the statement wrong). I believe that this could be fixed, as one could instead upper bound the eigenvalues of $S(S^\top HS)^\dagger S$, whose eigenvalues are upper bounded by $1/\lambda$.

### About Section 6

Section 6 is quite short: the authors extend their method to the stochastic case without really discussing the impact of this stochasticity on the optimization. I am actually very surprised that, in Section 7.5, NSGD converges faster than SVRG, since the latter has a linear convergence rate (and I do not see how the former would have a linear convergence rate without variance reduction). Moreover, the comparison does not use time as the x-axis, which is not fair to the baselines, as their cost per iteration is significantly cheaper.

## Significance of the experiments

Most of the experiments are performed on L2 logistic regression without comparing the method with a proper baseline (in terms of convergence speed). I would suggest that the authors consider using the BenchOpt library [Moreau et al. 2022] to properly benchmark their method against the baselines for logistic regression.

## Writing:

The writing can be significantly improved; I found some sections quite confusing.

### About the random sketching matrix $S$:

The Nyström approximation corresponds to equation (2) or (6). A connection is made with the sketching matrix S = WD.
However, this matrix S has a specific form: it is the product of the zero-one matrix W defined in (5) with a diagonal matrix with positive coefficients. But then the authors consider random matrices with Gaussian entries. It seems to me that as long as S is of rank m we have $(HS)(S^\top HS)^\dagger_k(HS)^\top = CM_k^\dagger C^\top$, so this would be the formulation to consider.

### Section 4

I find Algorithm 1 not very clear:

- In order to compute C_t you need to sample S. Do you sample S with Gaussian entries, or do you consider the matrix W defined in (5)? If you do consider the matrix W defined in (5), what do you mean in the statement P4 "for the rest of theoretical analysis, we consider the matrix S to a a generalized random matrix given in Lemma 1"? Moreover, in Assumption 2 you consider S_t to have i.i.d. Gaussian entries.
- p_t is not defined in the algorithm!
- Line 7: there is an extra space before (7).

I find some statements in Section 4.1 somewhat confusing. It is claimed that "the overall …complexity are O(dm)", but the SVD necessary to compute Z_t has a complexity of O(dmk).

### Typos and inaccurate notations:

There are numerous typos and inaccurate notations. Here is a non-exhaustive list:

- P3: "the following theorem shows"
- P3: "let […] be a k-rank ($k \leq m$) is"
- P3: let $k = rank(M_k)$ (k is not the rank of M)
- P3: "If is always possible"
- P4: "random trail" and "independent trail" are not defined. Did you mean random trial?
- P4: "exactly same as the…"
- P6-7: some matrices are not bolded (S, X in (14)) or are missing a subscript t (C in 4.1, S in Assumption 2)
- P8: "the the"

Requested Changes: I would like the authors to:

- comment on my concerns regarding the theory (i.e., can you show a convergence rate with the relative condition number, similar to Gower et al. 2019?)
- comment on my concern about Lemma 2
- provide a comparison with the baseline in terms of convergence speed (e.g.,
SOTA newton-cg or variance reduction method for logistic regression used in sklearn) - clarify the writing (See my comments above) Broader Impact Concerns: no concerns ================================================== Metareview: Recommendation: Reject Comment: In summary, while the paper offers some relevant contributions, they are insufficient and lack the necessary evidence to support the claims made. As a result, it does not fulfill the TMLR submission criteria, leading to the decision to reject the paper. ==================================================
# Diversity-Preserving K–Armed Bandits, Revisited

Hédi Hadiji hedi.hadiji@centralesupelec.fr
L2S - CNRS - CentraleSupélec - Université Paris-Saclay, Gif-sur-Yvette

Sébastien Gerchinovitz sebastien.gerchinovitz@irt-saintexupery.com
Institut de recherche technologique Saint Exupéry, Toulouse
Institut de mathématiques de Toulouse, Université Paul Sabatier, Toulouse

Jean-Michel Loubes jean-michel.loubes@math.univ-toulouse.fr
Institut de mathématiques de Toulouse, Université Paul Sabatier, Toulouse

Gilles Stoltz gilles.stoltz@universite-paris-saclay.fr
Université Paris-Saclay, CNRS, Laboratoire de mathématiques d'Orsay, Orsay, France
HEC Paris, Jouy-en-Josas, France

Reviewed on OpenReview: https://openreview.net/forum?id=Viz7KBqO4A

## Abstract

We consider the bandit-based framework for diversity-preserving recommendations introduced by Celis et al. (2019), who approached it in the case of a polytope mainly by a reduction to the setting of linear bandits. We design a UCB algorithm using the specific structure of the setting and show that it enjoys a bounded distribution-dependent regret in the natural cases when the optimal mixed actions put some probability mass on all actions (i.e., when diversity is desirable). The regret lower bounds provided show that otherwise, at least when the model is mean-unbounded, a ln T regret is suffered. We also discuss an example beyond the special case of polytopes.

## 1 Introduction

Ensuring fairness in recommendation systems has been a major concern in the machine learning literature over the last few years, due to the growing influence of recommender systems in all parts of our societies. While many definitions of fairness have been provided, here we focus on models that ensure minimum allocation guarantees.
Hence, fairness is to be understood here in terms of diversity-preserving constraints, as stated in the recent EU Artificial Intelligence Act1, which imposes that AI recommender systems should preserve the fundamental right of equal access to opportunities. We refer to Silva et al. (2022), Li et al. (2021), or Huang et al. (2021), and references therein, for some recent reviews on this topic. We consider a bandit-based framework to model such recommendation algorithms, as introduced by the seminal work of Celis et al. (2019).

The fairness framework presented in this study addresses a variety of scenarios involving the distribution of resources. In the context of search-engine advertisements, it mandates that every advertiser is allocated a specified share of advertisement impressions, thereby preventing any single entity from dominating the available ad space. Similarly, in task-planning platforms, it ensures that every participant receives a proportionate number of tasks, which is crucial to avoid disparate treatment. Furthermore, within wireless communication systems, this model obligates the receiver to maintain a baseline level of service quality for every transmitter, guaranteeing that all senders receive equitable treatment. This approach not only promotes fairness but also enhances the overall system's effectiveness by fostering diversity and participation across different settings. We refer to Patil et al. (2021) or Molina et al. (2023) for different methods promoting fairness in this context.

1https://artificialintelligenceact.eu/

The setting by Celis et al. (2019). We consider stochastic K–armed bandits. All arms correspond to desirable actions or options, though some lead to higher payoffs. Effective (regret-minimizing) algorithms are bound to play the optimal arm(s) an overwhelming fraction of the time. Celis et al. (2019) refer to this effect as polarization (see details in Example 1) and introduce the learning protocol of Section 1.1 to avoid it.
In a nutshell, this protocol consists of picking arms At in a two-stage randomization, by choosing first a probability distribution pt over the arms within a set P of admissible distributions, and then by drawing the arm At to be actually played according to this distribution pt . The simplest example of admissible distributions is composed (see Example 1) of distributions all putting at least some minimal probability mass ℓ > 0 on each arm. All actions or options corresponding to the arms then get some fair chance to be selected, even if their expected payoffs are far from the expected payoff of the optimal arms. We therefore suggest the alternative terminology of preserving diversity. The two-stage randomization considered makes sense in scenarios where there is a strong internal commitment to respect diversity but randomization is needed to respect privacy, or where some central authority eventually picks the arms based on the distributions output. Additional justifications of the setting are provided by Celis et al. (2019). Overview of our results. Our aim in this article is to deepen the theoretical results obtained by Celis et al. (2019), see more details in Section 1.2. In a nutshell, Celis et al. (2019) approached the problem described above for polytopes P and by reducing it to the case of linear bandits: by ignoring that actions At are eventually played and by only considering the distributions pt chosen. Doing so, they obtain suboptimal rates for regret bounds. In particular, we show that bounded regret may be achieved in the favorable cases where diversity is actually desired: for sub-Gaussian models and when the optimal admissible distributions put some positive probability mass on all arms (which happens, in particular, when the set P is bounded away from the boundary of the simplex of probability distributions). 
Actually, at least in mean-unbounded sub-Gaussian models, we even characterize bounded regret as a feature of optimal distributions putting some probability mass on all arms. We also consider a case where P is not a polytope but a set with a continuous and curved set of extremal points. In that case, the reduction to linear bandits would provide √T regret bounds, while we show that, using the information given by At, the regret may grow only at a squared-logarithmic rate, again a stark improvement. A case of special interest (see Example 2) is a version of bandits with knapsacks where constraints are to be satisfied in expectation at all rounds, and thus, by martingale convergence, also almost surely in the limit. The associated theory is, of course, of a fundamentally simpler nature than the hard constraints typically considered in the literature on bandits with knapsacks (Badanidiyuru et al., 2013; Badanidiyuru et al., 2018).

## 1.1 The Diversity-Preserving Setting Introduced By Celis Et Al. (2019)

Definition 1. A model D is a collection of distributions ν *with finite first moments denoted by* µ = E(ν).

A K–armed bandit problem ν = (ν1, . . . , νK), unknown to the learner, is fixed. At each round, the learner picks an arm At ∈ [K], where [K] = {1, . . . , K}, and obtains a payoff Yt drawn at random according to νAt conditionally to the choice of At. This is the only observation made. So far, the setting described is exactly that of vanilla K–armed bandits; the distinguishing feature of the bandit model by Celis et al. (2019) is that the choice of At is actually made in two steps, as follows. First, a distribution pt over [K] is picked in some known closed set P of the set M1,+ [K] of all probability distributions over [K]. Distributions in the set P satisfy some diversity-preserving constraints (specific examples are given below). Then, the arm At is drawn at random according to pt.
Following game-theoretic terminology, we will call a ∈ [K] pure actions or arms, and p ∈ P mixed actions or distributions. We measure performance in terms of expected payoffs. More precisely, denoting by µj = E(νj) the expectation of νj and by µ = (µ1, . . . , µK) their vector, we first note that at round t ⩾ 1, by repeated applications of the tower rule,

$$\mathbb{E}\big[Y_{t}\mid A_{t},\,\underline{p}_{t}\big]=\mu_{A_{t}}\,,\quad\text{thus}\quad\mathbb{E}\big[Y_{t}\mid\underline{p}_{t}\big]=\sum_{k\in[K]}p_{t,k}\,\mu_{k}\stackrel{{\text{def}}}{{=}}\big\langle\underline{p}_{t},\,\underline{\mu}\big\rangle\,,\quad\text{thus}\quad\mathbb{E}[Y_{t}]=\mathbb{E}\big[\big\langle\underline{p}_{t},\,\underline{\mu}\big\rangle\big]\,.\tag{1}$$

Box A: Protocol of diversity-preserving stochastic bandits (Celis et al., 2019)

Known parameters
- Arms 1, . . . , K and model D of distributions for the arms
- Closed set P ⊆ M1,+ [K] of diverse enough probability distributions over the arms

Unknown parameters
- Probability distributions ν = (ν1, . . . , νK) in D, with expectations µ = (µ1, . . . , µK)

For t = 1, 2, . . .,
1. Pick a distribution pt = (pt,1, . . . , pt,K) ∈ P over the arms
2. Draw at random an arm At ∼ pt
3. Get and observe a payoff Yt ∼ νAt drawn at random according to νAt given At

Aim
- Minimize the diversity-preserving expected regret

$$R_{T}=T\max_{\underline{p}\in\mathcal{P}}\langle\underline{p},\,\underline{\mu}\rangle-\mathbb{E}\left[\sum_{t=1}^{T}\langle\underline{p}_{t},\,\underline{\mu}\rangle\right]$$

We consider expected regret: its distinguishing feature compared to vanilla K–armed bandits is that the performance of the learner is compared to distributions p ∈ P only, not to all distributions in M1,+ [K].
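A quick Monte-Carlo sanity check of identity (1) under the two-stage randomization of Box A (our illustration; the Gaussian payoff distributions, the arm means, and the fixed mixed action are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
mu = np.array([0.2, 0.5, 0.7, 0.9])   # arm means mu_1, ..., mu_K (illustrative)
p = np.array([0.1, 0.1, 0.3, 0.5])    # a fixed mixed action p in P

T = 200_000
arms = rng.choice(K, size=T, p=p)      # step 2 of Box A: A_t drawn from p
payoffs = rng.normal(loc=mu[arms])     # step 3: Y_t ~ nu_{A_t} (Gaussian arms here)

empirical_mean = payoffs.mean()
expected = p @ mu                      # <p, mu> = 0.73 for these numbers
```

The empirical average of the payoffs concentrates around the inner product ⟨p, µ⟩, as predicted by the tower rule.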
More precisely, we define the P–diversity-preserving expected regret as

$$R_{T}=T\operatorname*{max}_{\underline{{{p}}}\in{\mathcal{P}}}\langle\underline{{{p}}},\,\underline{{{\mu}}}\rangle-\mathbb{E}\left[\sum_{t=1}^{T}\langle\underline{{{p}}}_{t},\,\underline{{{\mu}}}\rangle\right].$$

The diversity-preserving expected regret is smaller than the vanilla expected regret for K–armed bandits, which corresponds to comparing to the maximum over p ∈ M1,+ [K]. In some cases (see Theorem 1 and Corollary 1), the diversity-preserving expected regret may even be bounded. The learning protocol and the regret goal are summarized in Box A.

Example 1 (Avoiding polarization). The main example by Celis et al. (2019) consists of imposing that each arm should be played at each round with some minimal probability ℓ > 0, which corresponds to the diversity-preserving set P = {p : ∀a ∈ [K], pa ⩾ ℓ}. This constraint makes sense in online advertisement: all offers need to be displayed a significant fraction of the time and get a significant chance to be selected. As argued by Celis et al. (2019), this diversity would not arise with classical bandit algorithms, which would display almost only the most profitable ad (a phenomenon called polarization). More generally, for each arm a, depending on the contract passed, there could be a known range [ℓa, ua] of individual probabilities of display, so that

$${\mathcal{P}}=\left\{{\underline{{p}}}:\ \ \forall a\in[K],\ \ p_{a}\in[\ell_{a},\,u_{a}]\right\}.$$

Example 2 (One-shot version of bandits with knapsacks in the mechanism design). This second example is our own. Suppose that every pure action a is associated with N costs c_a^{(1)}, . . . , c_a^{(N)} in R, accounting for limited resources or environmental costs like the amount of carbon emissions generated by taking the action; negative costs (e.g., negative carbon emissions) are allowed. When a player picks a pure action At according to the mixed action p = (p1, . . .
, pK), the N expected costs associated with her choice are
$$\sum_{a\in[K]}p_{a}c_{a}^{(1)},\,\ldots,\,\sum_{a\in[K]}p_{a}c_{a}^{(N)}\,.$$
In this case, a reasonable objective for the player is to maximize her payoff under the constraints that, for all n ∈ [N], the n–th expected cost of her actions be kept under a certain pre-specified level un ∈ R. This amounts to playing with the probability set
$${\mathcal{P}}=\left\{{\underline{{p}}}:\ \ \forall n\in[N],\ \ \sum_{a=1}^{K}p_{a}c_{a}^{(n)}\leqslant u_{n}\right\}.$$
Note that the name "diversity-preserving" was inspired by the example of the previous paragraph and is perhaps less relevant in the present example. The present example is a one-shot version of bandits with knapsacks (Badanidiyuru et al., 2013; 2018), where the budget constraints must be satisfied at all rounds, but only in the mechanism design (i.e., in expectation), and not on the actual costs summed over all rounds (though the latter will be approximately achieved by martingale convergence).

In both examples, the diversity-preserving sets P considered are polytopes in the following sense.

Definition 2. A polytope P is a convex set generated by finitely many extremal points, whose set is denoted by Ext(P).

When P is a polytope, we may introduce suboptimality gaps of distributions in Ext(P) and decompose the diversity-preserving regret as stated in (2).
More precisely, we denote by Opt(ν,P) the subset of optimal distributions, which achieve an expected payoff denoted by M(µ,P), i.e.,
$$M(\underline{{{\mu}}},{\mathcal{P}})=\operatorname*{max}_{\underline{{{p}}}\in{\mathcal{P}}}\langle\underline{{{p}}},\,\underline{{{\mu}}}\rangle\qquad\mathrm{and}\qquad\mathrm{Opt}(\underline{{{\nu}}},{\mathcal{P}})=\operatorname*{argmax}_{\underline{{{p}}}\in{\mathcal{P}}}\langle\underline{{{p}}},\,\underline{{{\mu}}}\rangle\,,$$
and define in turn the suboptimality gap ∆p of a given distribution p ∈ Ext(P), and the global suboptimality gap ∆ of the set Ext(P), as
$$\Delta_{\underline{p}}=M(\underline{\mu},\mathcal{P})-\langle\underline{p},\,\underline{\mu}\rangle\qquad\text{and}\qquad\Delta=\min\big\{\,\Delta_{\underline{p}}:\,\underline{p}\in\operatorname{Ext}(\mathcal{P}),\,\Delta_{\underline{p}}>0\big\}\,.$$
(We assume that at least one p ∈ Ext(P) is such that ∆p > 0; otherwise, all strategies have a null regret.) Denoting by Np(T) the number of times a distribution p ∈ Ext(P) is played, the diversity-preserving regret of a (possibly randomized) strategy only picking distributions pt ∈ Ext(P) can then be rewritten as
$$R_{T}=T\max_{\underline{p}\in\mathcal{P}}\langle\underline{p},\,\underline{\mu}\rangle-\mathbb{E}\left[\sum_{t=1}^{T}\langle\underline{p}_{t},\,\underline{\mu}\rangle\right]=\sum_{\underline{p}\in\operatorname{Ext}(\mathcal{P})\setminus\operatorname{Opt}(\underline{\nu},\mathcal{P})}\Delta_{\underline{p}}\,\mathbb{E}\big[N_{\underline{p}}(T)\big],\quad\text{where}\quad N_{\underline{p}}(T)=\sum_{t=1}^{T}\mathds{1}_{\{\underline{p}_{t}=\underline{p}\}}.\tag{2}$$
Actually, Celis et al. (2019) consider the possibility of contexts in their description of the setting, but then introduce and analyze policies that "function independently for each context".
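As an aside, the rewriting (2) is in fact an exact path-wise identity, before expectations are even taken. The following minimal numeric check (our own illustration, assuming numpy; the values of µ and of the extremal points are arbitrary toy choices) makes this concrete:

```python
import numpy as np

# Illustrative toy instance: K = 3 arms, a triangle P with three extremal points.
mu = np.array([0.4, 1/3, 0.6])
ext = np.array([[0.0, 0.2, 0.8],
                [0.6, 0.2, 0.2],
                [0.0, 0.8, 0.2]])

M = float((ext @ mu).max())        # optimal value M(mu, P), attained on Ext(P)
gaps = M - ext @ mu                # suboptimality gaps Delta_p, one per vertex

rng = np.random.default_rng(0)
T = 1_000
picks = rng.integers(0, len(ext), size=T)        # indices of the p_t in Ext(P)
pseudo_regret = T * M - float(sum(ext[i] @ mu for i in picks))
counts = np.bincount(picks, minlength=len(ext))  # the counts N_p(T)
# Identity (2), realization by realization (no expectation needed):
assert abs(pseudo_regret - float(gaps @ counts)) < 1e-8
```

The expectation in (2) then simply comes from taking expectations on both sides of this identity.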
The setting and aim described above and summarized in Box A correspond exactly to the framework of Theorems 1 and 2 of Celis et al. (2019).

## 1.2 Overview Of Our Results And Comparison To Celis Et Al. (2019)

Celis et al. (2019) provide some extensive numerical studies, but we only discuss below their theoretical contributions; the latter only dealt with diversity-preserving sets P given by polytopes.

Regret upper bounds by Celis et al. (2019): for polytopes. They first suggest using linear-bandit algorithms in the diversity-preserving setting, since (1) indicates that the expected reward is a linear function of the played distribution pt. Of course, with such a reduction, the learner discards some useful information: the value At of the arm actually played. This is why Celis et al. (2019) obtained suboptimal regret bounds. More precisely, they use the LinUCB (linear upper confidence bound) strategies introduced by Li et al. (2010) and Chu et al. (2011) and further studied by Abbasi-Yadkori et al. (2011). With this strategy, they obtain regret bounds of order K(ln T)²/∆. A second suggestion by Celis et al. (2019) is a constrained version of the ε–greedy algorithm by Auer et al. (2002), which does use the knowledge of the arms At actually played and obtains a regret bound of order (ln T)/∆². However, this constrained ε–greedy algorithm requires the value of ∆ to tune its exploration rate εt over time; this limitation makes it impractical compared to algorithms that do not require such knowledge. For both strategies of Celis et al. (2019), the case of a bounded regret is not covered, while it constitutes our main contribution.

Regret upper bounds for polytopes in this article: Section 2 and Appendix A. More precisely, we introduce in Section 2 a diversity-preserving UCB strategy, close to the original UCB strategy by Auer et al. (2002): it maintains indexes on arms and outputs the best distribution in P given these indexes.
Additional minor modifications are needed, as the strategy cannot ensure, unlike in the classic case, that all arms are first pulled once. For sub-Gaussian models and for diversity-preserving sets P given by polytopes, this diversity-preserving UCB strategy enjoys a bounded regret as soon as all optimal distributions for a problem put some positive probability mass on each arm; this condition somehow indicates that each arm is desirable and flags cases where diversity is welcome. Even when the condition is not met, at most a ln T regret is suffered. Both regret bounds are stated in Theorem 1, whose proof may be found in Appendix A. The proof of the ln T bound relies mostly on typical optimistic techniques for K–armed bandits, with occasional twists, to provide controls on the numbers of times arms a ∈ [K] are played through information on the numbers of times distributions p ∈ Ext(P) were picked. The proof of the bounded regret follows a completely different logic: we first show that optimal distributions are typically played at least half of the time, which entails, because p⋆min(ν) > 0, that each pure action a ∈ [K] is played linearly many times. Therefore, all estimates are sharp, and little regret is suffered. In the diversity-preserving setting, all arms may be, and even should be, played linearly many times, unlike in the vanilla K–armed bandit setting, where suboptimal arms are only played about ln T times. Regarding computational complexity, the algorithm we propose only requires solving one linear program over the constraint set at every round, which is often less expensive than the double maximization over an ellipsoid performed by LinUCB (see Li et al., 2010, Chu et al., 2011, and Abbasi-Yadkori et al., 2011).

Regret lower bounds in this article: Section 3 and Appendices B–C. Celis et al. (2019) do not provide lower bounds. We provide optimality results in Section 3 (with proofs located in Appendices B and C).
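For intuition on this per-round linear program, note that for Example 1's constraint set P = {p : pa ⩾ ℓ} it even admits a closed-form solution: put the minimal mass ℓ on every arm and the remaining 1 − Kℓ on an arm with the largest index. The sketch below (our own illustration, not the authors' code; the Bernoulli arm means are hypothetical, and numpy is assumed) implements the resulting strategy:

```python
import numpy as np

def best_vertex(U, ell):
    """Solve  max <p, U>  over P = {p : p_a >= ell}: the optimum puts the
    minimal mass ell on every arm and the rest on an arm of largest index."""
    p = np.full(len(U), ell)
    p[int(np.argmax(U))] += 1.0 - len(U) * ell
    return p

rng = np.random.default_rng(1)
K, ell, sigma2, u0 = 3, 0.1, 0.25, 1.0
mu = np.array([0.2, 0.5, 0.8])          # hypothetical Bernoulli means
counts, sums = np.zeros(K), np.zeros(K)
for t in range(1, 2001):
    # UCB indexes: empirical means (u0 if never played) plus confidence bonus.
    means = np.where(counts > 0, sums / np.maximum(counts, 1), u0)
    U = means + np.sqrt(8 * sigma2 * np.log(t) / np.maximum(counts, 1))
    p = best_vertex(U, ell)             # Step 1: best extremal point of P
    a = rng.choice(K, p=p)              # Step 2: draw the pure action A_t ~ p_t
    counts[a] += 1                      # Step 3: observe and record the payoff
    sums[a] += rng.binomial(1, mu[a])
assert np.all(counts > 0)               # the floor ell keeps every arm sampled
```

The diversity floor ℓ forces every arm to keep being sampled, which is precisely the "exploitation involves exploration by design" mechanism behind the bounded-regret phenomenon discussed above.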
They are stated under the restriction that the bandit problem ν considered has a unique optimal distribution p⋆(ν). Theorem 2 states that in models with no upper bound on the expectations (and satisfying two other mild technical conditions), all reasonable strategies (i.e., achieving a small diversity-preserving regret for all bandit problems) must suffer a ln T regret when p⋆(ν) puts no probability mass on at least one arm. Together with the upper-bound results discussed so far, we therefore obtain in Corollary 1 a characterization of the bounded versus ln T regrets for these models with no upper bound on the expectations. The case of models with bounded expectations is more delicate, and we illustrate in Proposition 1, with Bernoulli bandit problems, why stating such a characterization is challenging. The proof techniques rely to a large extent on classic proof schemes for bandit lower bounds (Lai & Robbins, 1985, Graves & Lai, 1997, Garivier et al., 2019). The main difference raised by our setting is described in Remark 1: eventually, arms At are played, so that the Kullback-Leibler information gain should be quantified in terms of the At and is larger than if it were quantified in terms of the distributions pt used; yet, the small-regret constraints on the strategies are in terms of the distributions pt used. This leads to some specific constrained minimum on lim inf RT / ln T, à la Graves & Lai (1997), which is to be studied.

Regret upper bounds for P given, e.g., by a ball: Section 4 and Appendix D. We study a case where the diversity-preserving set P is not given by a polytope but whose set of extremal points is continuous and curved. For simplicity, we consider Euclidean balls in the probability simplex. We obtain a regret bound of order ln²T when such a ball P lies in the relative interior of the simplex.
The striking feature of this bound is that a linear-bandit algorithm discarding the arms At actually played would pay a regret of order at least √T, showing again a large gap between knowledge-informed algorithms and generic linear-bandit bounds when all actions are desirable.

## 1.3 Literature Review

The many existing notions of algorithmic fairness have led to different formulations of fair bandit problems; see, e.g., the contributions by Wang et al. (2021), Liu et al. (2017), and Barman et al. (2023), which are unrelated to the setting we study. We focus our literature review below on comparable work. We mention in passing the first version of the present article: Hadiji et al. (2020).

Diversity. Chen et al. (2020) study a particular case of the diversity-preserving setting, which essentially corresponds to Example 2. While their framework is the same as ours, they only study distribution-free bounds, at best of order √KT. Their algorithm applies to rewards generated in an adversarial manner. In the domain of stochastic bandits, Patil et al. (2021) and Claure et al. (2020) design bandit algorithms ensuring that the proportion of times each action is selected is lower bounded, i.e., with our notation, that Na(T)/T ⩾ ℓa at every round. This is similar in spirit to the special case of our setting presented in Example 1. However, the hard constraint on the number of pulls poses algorithmic design issues different from ours, which they solve only in the special case of lower bounds on every arm pull. Our setting is more flexible yet enforces similar guarantees while bypassing these issues. After a preprint version (Hadiji et al., 2020) of the present article was published online, Liu et al. (2022), building on Li et al. (2020), considered a combinatorial bandit problem with constraints on the distribution over actions selected by the player.
While they formally allow the player to play outside the constraint set, they measure regret with respect to the best constrained distribution, and their algorithm never plays outside the constraint set, making their setting essentially identical to ours. They analyze a natural extension of diversity-preserving UCB to the combinatorial setting, providing logarithmic regret bounds similar to our Theorem 1, and discuss specific cases in which constant regret can be achieved, e.g., when all distributions put some given positive probability on all actions. Compared to their results, we provide a *characterization* of when constant regret happens, which goes well beyond the case of a positive lower bound on the components of the distributions. Anecdotally, our upper bound on the constant regret, when specified to Example 1, is tighter than theirs. Indeed, the main term in our bound is of order K/(∆ p⋆min(ν)) up to log terms, whereas they get a bound of order 1/(∆ (p⋆min(ν))²), which is always larger since p⋆min(ν) ⩽ 1/K. Note that all these articles refer to "fairness", while we use the term "diversity-preserving" in this case.

Bounded regret in structured bandits. A recent line of work (Hao et al., 2020, Tirinzoni et al., 2020, Jun & Zhang, 2020), pioneered by Bubeck et al. (2013) and Lattimore & Munos (2014), studies the possibility of achieving bounded regret in structured bandits. There, the learner knows a priori a structural property of the relationship between the payoffs of the different actions available. Certain types of structures give rise to the "automatic exploration" phenomenon: despite the bandit feedback, playing the optimal arm (and knowing the structure) yields enough information to differentiate it from the suboptimal arms without playing them; this opens up the possibility of obtaining bounded regret.
These results are deeply connected to the lower bound of Graves & Lai (1997), which often provides the tight constant in front of the logarithmic asymptotic rate of regret in structured bandits, matched in wide generality by Combes et al. (2017); see also the equivalent formulation for linear bandits with Gaussian noise in Lattimore & Szepesvári (2017). Constant regret can only occur in problems for which that lower bound is 0. As in most bandit settings, these works all assume that the learner has deterministic control over the arms At eventually picked. The diversity-preserving framework contrasts with this aspect, by preventing the learner from picking At and introducing an additional random draw of At based on the distribution pt picked. The relationship between the policy and the information acquired is thus altered in a central way. On a technical level, standard bandit algorithms may decrease mechanically the radius of the confidence bounds, whereas the extra randomness between pt and At only allows for some probabilistic control in the diversity-preserving setting. In particular, the analysis techniques of Lattimore & Munos (2014) and Tirinzoni et al. (2020), based on confidence bounds, need to be sharply refined to be used in the diversity-preserving setting. Also, to the best of our knowledge, the results of Section 4 provide the first example of a structured bandit problem with an action set (the diversity-preserving set P) with a continuous set of extremal points and for which automatic exploration occurs and brings the rate of regret from √T down to ln²T. Note that automatic exploration also happens organically in Degenne et al. (2018), where the authors assume directly that the learner may observe the payoff of an extra arm.

Mediator feedback. The diversity-preserving setting of this article has notable connections with bandits with mediator feedback, on the one hand, and with regret minimization with expert advice, on the other hand.
In all three settings, arms are not directly pulled but a distribution (possibly non-stationary) is placed in the middle. Inspired by policy optimization in reinforcement learning, Metelli et al. (2021) introduce the problem of bandits with mediator feedback, which is a more general version of the diversity-preserving problem we consider. They define bandits with mediator feedback as any bandit problem in which the player observes an intermediate (possibly random) feedback ot that determines the reward. (Precisely, the reward Yt is independent of the action At conditionally on ot.) The authors analyze, in particular, the case of policy optimization within a class of policies parameterized by some θ ∈ Θ; the goal of the learner is to select policies (parameters) that yield high returns. In this example, the player observes the entire sequence of rewards and actions throughout the episode, which is more informative than the sole cumulative gain of the policy. Our setting is more specific, but our analysis is also more precise. The results by Metelli et al. (2021) can indeed be applied to our setting, by considering a Markov decision process with a single state and by identifying the policies directly with distributions over the action set. Based on this reduction, their analysis does not make full use of the simple information structure available, which results both in suboptimal lower and upper bounds. More precisely, the regret bound of their algorithm becomes infinite as soon as P is not fully contained in the relative interior of the simplex, while our regret bounds are at worst of order ln T. (In their notation, p corresponds to θ, and v(θ) = ∞.) On the lower bound side, they only provide a worst-case result, by exhibiting hard instances on which algorithms incur a minimal regret, whereas we characterize the optimal rates on every instance. Another article worth mentioning is by Eldowa et al.
(2023), in the context of regret minimization with expert advice: indeed, the finite policy set Θ that they consider in K–armed bandits could correspond to the set Ext(P), which makes the setting studied here a special case of theirs with i.i.d. stochastic losses. However, to the best of our reading, we found no trace of constant regret and only bounds scaling with √T. This is due to the adversarial approach followed; their angle is to improve the regret bounds in terms of their dependencies on quantities other than T (e.g., numbers of policies, arms, etc.), which they do through the information-theoretic quantities introduced.

Extensions to Markov decision processes. Occurrences of constant regret have also recently appeared in the reinforcement learning literature, in episodic settings more or less related to ours. For linear Markov decision processes, Zhang et al. (2024) derive algorithms that achieve constant regret with high probability, but with potentially large expected regret, and Papini et al. (2021) extend the work of Hao et al. (2020), discussed above, providing a necessary and sufficient condition for constant regret. Vera et al. (2021) introduce an online version of knapsack problems that accounts for the computational tractability of the benchmark strategies, and design constant-regret algorithms against these benchmarks. Wagenmaker & Foster (2023) tackle the general and difficult question of characterizing the non-asymptotic instance-dependent optimal regret in reinforcement learning (contrasting with the asymptotic nature of the lower bounds following approaches à la Graves & Lai, 1997). Therein, constant-regret instances play a special role, since the constants, which are typically second-order terms, form the dominant term in the regret. Note the difference with the setting of Metelli et al.
(2021) discussed earlier: in their formulation of policy optimization, the action set corresponds to the set of parameters, and not to the action set of the MDP as in the references right above.

## 2 A Simple Diversity-Preserving UCB Strategy

Definition 3. A distribution ν over R with expectation µ = E(ν) is σ²–sub-Gaussian if
$$\forall\lambda\in\mathbb{R},\qquad\int\mathrm{e}^{\lambda(x-\mu)}\,\mathrm{d}\nu(x)\leqslant\mathrm{e}^{\lambda^{2}\sigma^{2}/2}\,.$$

We assume that the model D is composed of σ²–sub-Gaussian distributions, where the parameter σ² is known. A lemma by Hoeffding (1963) indicates that the model D[a,b] of all probability measures supported on a compact interval [a, b] satisfies this assumption, with σ² = (b − a)²/4. Of course, the model D of all Gaussian distributions with variance smaller than some prescribed σ² is also suitable.

## Box B: Diversity-Preserving UCB For Polytopes

Inputs: sub-Gaussian parameter σ² for distributions in D; polytope P

Initialization: pick some u0 possibly taken by some distribution in D, and let U(0) = (u0, . . . , u0)

For rounds t = 1, 2, . . .,

1. Select (ties broken arbitrarily) a distribution
$$\underline{p}_{t}\in\operatorname*{argmax}_{\underline{p}\in\operatorname{Ext}(\mathcal{P})}\left\langle\underline{p},\,\underline{U}(t-1)\right\rangle$$
2. Play the pure action At ∼ pt
3. Get and observe the reward Yt ∼ νAt
4. Compute the empirical averages
$$\widehat{\mu}_{a}(t)=\begin{cases}u_{0}&\text{if}\ N_{a}(t)=0,\\[2mm]\dfrac{1}{N_{a}(t)}\displaystyle\sum_{s=1}^{t}Y_{s}\mathds{1}_{\{A_{s}=a\}}&\text{if}\ N_{a}(t)\geqslant1,\end{cases}\qquad\text{where}\qquad N_{a}(t)=\sum_{s=1}^{t}\mathds{1}_{\{A_{s}=a\}}$$
5. Compute the upper confidence bound vector U(t) = (U1(t), . . . , UK(t)) according to
$$\forall a\in[K],\qquad U_{a}(t)=\widehat{\mu}_{a}(t)+\sqrt{\frac{8\sigma^{2}\ln t}{\operatorname*{max}\left\{N_{a}(t),1\right\}}}$$
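As a quick numeric sanity check of the Hoeffding-based claim above (a sketch of our own, assuming numpy): for Bernoulli distributions, supported on [0, 1], the centered moment generating function indeed stays below the sub-Gaussian bound with σ² = (1 − 0)²/4 = 1/4.

```python
import numpy as np

# Spot-check Hoeffding's lemma for Bernoulli(q), supported on [a, b] = [0, 1]:
# E[exp(lambda (X - q))] <= exp(lambda^2 sigma^2 / 2) with sigma^2 = 1/4.
sigma2 = 0.25
for q in (0.05, 0.5, 0.95):
    for lam in np.linspace(-10.0, 10.0, 401):
        mgf = (1 - q) * np.exp(-lam * q) + q * np.exp(lam * (1 - q))
        assert mgf <= np.exp(lam**2 * sigma2 / 2) * (1 + 1e-12)
```

(The small multiplicative tolerance only guards against floating-point rounding; the inequality itself is exact.)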
We also impose for now that P is a polytope; see Section 4 for a case where P is not a polytope. We then introduce a UCB strategy in Box B, which maintains indexes Ua(t) directly and separately on the arms a ∈ [K], just as the classic UCB strategy does. The strategy neither uses indexes for distributions in Ext(P), nor resorts to some global linear-bandit estimation as in Abbasi-Yadkori et al. (2011). The indexes are initialized at some value u0 possibly taken by some distribution in D, and are of the form, for t ⩾ 1 and a ∈ [K],
$$U_{a}(t)=\widehat{\mu}_{a}(t)+\sqrt{\frac{8\sigma^{2}\ln t}{\operatorname*{max}\left\{N_{a}(t),1\right\}}}\,,$$
where Na(t) counts the number of times arm a was pulled till round t included, and where µ̂a(t) is the empirical payoff achieved over those rounds when a was played. Of course, it may be that Na(t) = 0, as the learner has no direct control on which arm is played: the learner cannot ensure that arm a be picked even once. This is why we use max{Na(t), 1} in the confidence bonus, and also set µ̂a(t) = u0 in the case Na(t) = 0. We prove (in Appendix A) the regret bounds stated below in Theorem 1.

Theorem 1. Let P be a polytope. Consider a sub-Gaussian model D with parameter σ², known and used by the diversity-preserving UCB strategy of Box B. The regret of the latter satisfies
$$\forall\,\underline{\nu}\ \text{in}\ \mathcal{D},\ \ \forall\,T\geqslant1,\qquad R_{T}\leqslant C_{\underline{\nu}}\ln T+c_{\underline{\nu}}\,,$$
where Cν and cν are quantities depending on ν and whose general closed-form expressions may be read in (12). In addition, for problems ν in D such that
$$p_{\operatorname*{min}}^{\star}(\underline{\nu})\ \stackrel{\mathrm{def}}{=}\ \operatorname*{min}_{\underline{p}\in\operatorname{Opt}(\underline{\nu},\mathcal{P})}\ \operatorname*{min}_{a\in[K]}p_{a}>0\,,$$
the regret is even bounded:
$$\limsup_{T\to\infty}R_{T}<+\infty\,;$$
a closed-form finite upper bound may similarly be read in (19).

Some comments are in order, first on the cases of interest (and those that are mere sanity checks), and second, on the proof techniques.

Cases of interest in Theorem 1.
The ln T bound in Theorem 1 could also be achieved (with the same computational complexity) by a UCB strategy taking the distributions in Ext(P) as arms. The main achievement in Theorem 1 is therefore the case of bounded regret. The condition p⋆min(ν) > 0 corresponds to cases where diversity is indeed desirable, and this is when bounded regret is achieved. The diversity-preserving UCB strategy of Box B coincides with the classic UCB strategy by Auer et al. (2002) in the case where P is the entire simplex of probability distributions. However, the general ln T bound achieved in Theorem 1, with closed-form expression (12), does not coincide with the classic UCB bound of Auer et al. (2002) in this case. This is due to the general but somewhat loose analysis followed in the proof of Theorem 1. We recall, once again, that the ln T part of Theorem 1 only forms a sanity check, the main achievement being the bounded-regret case.

Intuition of the proof and proof techniques. The proof of Theorem 1 is provided in Appendix A; actually, two separate proofs with two different logics are provided: one for the general ln T rate in Appendix A.1, and one for the case of bounded regret in Appendix A.2. The proof of the ln T bound was obtained by an adaptation of the classic UCB proofs, for vanilla K–armed bandits (see Auer et al., 2002) or for linear bandits (see, e.g., Abbasi-Yadkori et al., 2011 and Lattimore & Szepesvári, 2017). In particular, the proof shows, in view of (2), that suboptimal distributions p ∈ Ext(P) are unlikely to be played more than ln T times. We highlight in Appendix A the (three specific) technical modifications with respect to these classic proofs: they all basically amount to controlling, with high probability, the numbers of times Na(t) arms are played based on the numbers of times Np(t) distributions p are picked. The proof for the case of bounded regret follows a completely different logic (the challenge to be overcome here was to devise and write down this alternative logic).
We first show that optimal distributions are typically played at least half of the time. This entails, because p⋆min(ν) > 0, that each pure action a ∈ [K] is played linearly many times. Therefore, all estimates are sharp, and little regret is suffered. This proof thus provides the rationale for the bounded regret: there is less competition than in the classic case between exploration and exploitation, since exploitation involves exploration "by design" in the case where p⋆min(ν) > 0. This rationale is different from the one at hand in structured bandits (see Section 1.3 for the references): therein, arms played also provide some information on arms not played, thanks to the underlying structure.

## 3 Lower Bounds / Optimality In The Case Of Polytopes

In this section, we discuss the situations where the ln T rate stated in Theorem 1 is unavoidable: does this happen whenever optimal distributions put no probability mass on at least one arm? To answer the question, we first provide in Section 3.1 (more precisely, in Theorem 2) a ln T lower bound on the regret for all bandit problems with a unique optimal distribution putting no probability mass on some arm, provided that the model is mean-unbounded in the sense of Definition 4 below. In Section 3.2, we then combine the lower bound of Theorem 2 and the upper bound of Theorem 1 to show that for sub-Gaussian mean-unbounded models, either the regret may be bounded or it must grow at an optimal ln T rate. The case of mean-bounded models is more delicate, and Proposition 1 illustrates the intrinsic difficulties in characterizing the rates for these models. Before we proceed, we state our key distinction between mean-bounded and mean-unbounded models.

Definition 4. A model D is said to be mean-bounded if there exists M ∈ R such that the expectation µ = E(ν) of any ν ∈ D satisfies µ ⩽ M. Otherwise, the model D is said to be mean-unbounded: for all M ∈ R, there exists ν ∈ D such that µ = E(ν) > M.

Example 3.
The rewards in online advertisement (see Example 1) are naturally bounded. Admittedly, most real applications are associated with mean-bounded models; mean-unbounded models (that are also sub-Gaussian) are less common. One could think of a model with distributions of the form: some mean plus some fixed, possibly wide-spread, centered sub-Gaussian distribution. It would model, e.g., in finance applications, returns of unknown levels but with a known common shape. The diversity-preserving constraint could come from the necessity of putting all investments at a given round in a single asset, while considering safe allocation policies that put, in expectation, a fraction of the capital in all assets.

## 3.1 General Regret Lower Bound

We restrict our attention to models satisfying the following technical assumption, which, as we discuss below, is mild and is satisfied, in particular, by convex models D.

Assumption 1. For all distributions ν ∈ D, for all real numbers x, if there exists ζ ∈ D with expectation E(ζ) > x, then there exists ζ′ ∈ D such that E(ζ′) > x and KL(ν, ζ′) < +∞.

This assumption of course holds for models where all pairs of distributions exhibit finite Kullback-Leibler divergences (for instance, models given by exponential families). It also holds for convex models; indeed, for λ ∈ (0, 1) small enough, ζ′ = (1 − λ)ζ + λν belongs to D and still has an expectation larger than x, while the density of ν with respect to ζ′ is upper bounded by 1/λ, thus KL(ν, ζ′) ⩽ ln(1/λ) < +∞. Note that convex combinations of sub-Gaussian distributions with parameter σ² are also σ²–sub-Gaussian, by convexity of the exponential function.
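For completeness, the chain of inequalities behind this convexity argument can be spelled out (a short derivation sketch; dν/dζ′ denotes the density of ν with respect to ζ′):

```latex
% For \zeta' = (1-\lambda)\zeta + \lambda\nu, the density of \nu with respect
% to \zeta' is at most 1/\lambda, so the Kullback--Leibler divergence is finite:
\[
  \frac{\mathrm{d}\nu}{\mathrm{d}\zeta'}
  = \frac{\mathrm{d}\nu}{(1-\lambda)\,\mathrm{d}\zeta + \lambda\,\mathrm{d}\nu}
  \leqslant \frac{1}{\lambda}
  \quad\Longrightarrow\quad
  \mathrm{KL}(\nu,\zeta')
  = \int \ln\frac{\mathrm{d}\nu}{\mathrm{d}\zeta'}\,\mathrm{d}\nu
  \leqslant \ln\frac{1}{\lambda} < +\infty\,.
\]
```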
A somewhat typical restriction (also considered, e.g., by Lattimore & Szepesvári, 2017, and Combes et al., 2017) will be imposed on the bandit problems ν possibly considered in the lower bounds: they should have a single optimal distribution in P, which we denote by p⋆(ν), i.e.,
$$\mathrm{Opt}(\underline{\nu},\mathcal{P})=\big\{\underline{p}^{\star}(\underline{\nu})\big\}\,.$$
Of course, when P is a polytope, that single optimal distribution p⋆(ν) belongs to Ext(P). The next, extremely mild, assumption is on arms: we require that there are no unnecessary arms given P, i.e., that each arm in [K] may be pulled for at least one distribution in P, or equivalently, if P is a polytope, in Ext(P).

Assumption 2 (no unnecessary arms given P). For all a ∈ [K], there exists p ∈ P such that pa > 0.

Finally, given the regret upper bounds exhibited in Theorem 1, it is natural to restrict our attention to so-called uniformly fast convergent [UFC] strategies.

Definition 5 (UFC strategies). A strategy is uniformly fast convergent [UFC] over the model D given the diversity-preserving set P if for all bandit problems ν in D, its diversity-preserving regret satisfies RT = o(T^α) for all α > 0.

The main result of this section is the following theorem, proved in Appendix B. We actually provide therein a general lower bound holding for large classes of models: mean-bounded and mean-unbounded ones.

Theorem 2. Consider a mean-unbounded model D abiding by Assumption 1, and a diversity-preserving polytope P such that there are no unnecessary arms given P (see Assumption 2).
For all bandit problems ν in D with a single optimal distribution in P denoted by p⋆(ν), if
$$\exists\,a\in[K]:\ \ p_{a}^{\star}(\underline{\nu})=0\,,$$
then there exists Lν > 0 such that for all strategies UFC over D given P,
$$\liminf_{T\to\infty}\frac{R_{T}}{\ln T}\geqslant L_{\underline{\nu}}\,.$$
A closed-form expression of a suitable Lν is given by Lemma 4 in Appendix B.1.

## 3.2 Characterization, Or Lack Of Characterization, Of The Optimal Regret Rates

Theorems 1 and 2 entail the following characterization of regret rates in the case of sub-Gaussian mean-unbounded models.

Corollary 1. For a diversity-preserving polytope P such that there are no unnecessary arms given P (see Assumption 2), for a mean-unbounded model Dσ² composed of σ²-sub-Gaussian distributions and abiding by Assumption 1, for all bandit problems ν in Dσ² with a single optimal distribution p⋆(ν) in P,

- if ∀ a ∈ [K], p⋆a(ν) > 0, the diversity-preserving UCB strategy ensures that the regret RT is bounded;
- if ∃ a ∈ [K] : p⋆a(ν) = 0, then all strategies UFC over D given P suffer a logarithmic regret.

On the contrary, the case of mean-bounded models is delicate. The (counter-)example below deals with the model of Bernoulli distributions. The characterization of rates obtained looks idiosyncratic: it seems intrinsically difficult to provide a general characterization of bounded regret by the expected payoffs in the case of mean-bounded models.

Proposition 1. For the model B of all Bernoulli distributions, K = 3 arms, and some δ ∈ (0, 1), consider the diversity-preserving segment Pδ generated by p(1) = (0, 1/2, 1/2) and p(2)δ = (δ, 0, 1 − δ). Let the bandit problem be ν = (Ber(0), Ber(1/2), Ber(0)). The setting introduced satisfies all assumptions of Theorem 2, except the mean-unboundedness of the model, and p(1) is the unique optimal distribution for ν.
Yet,

- if δ > 1/4, there exists Lν > 0 such that for all strategies UFC over B given Pδ, $\liminf_{T\to\infty} R_{T}/\ln T \geqslant L_{\underline{\nu}}$;
- if δ < 1/4, a variant of the diversity-preserving UCB obtains $\limsup_{T\to\infty} R_{T} < +\infty$.

## 4 Example Of A Diversity-Preserving Set P Given By A Ball

In this section, we discuss a specific example of a diversity-preserving set P that is curved, with a smooth boundary ∂P. Our aim here is to illustrate that in the non-polytope case, there can be a wider variety of rates for the growth of the regret, beyond the ln T rate and the bounded-regret case discussed earlier for polytopes. Our conjecture (given the proof technique of Theorem 3 below) is that the ln²T rate for the regret exhibited for the specific example also holds more generally for convex sets with local quadratic curvatures. We leave the general study of the non-polytope case to future research.

Specific example studied. Our running example will be the intersection of the probability simplex with a Euclidean ball centered at the uniform distribution,
$${\mathcal{P}}_{r}=\left\{{\underline{{p}}}:\quad\sum_{a\in[K]}(p_{a}-1/K)^{2}\leqslant r^{2}\right\},\tag{3}$$
for r small enough. This example differs from the case of polytopes in that the set of potential best actions becomes infinite (continuous): it is still given by extremal points, which are now given by the boundary ∂Pr. In particular, what corresponds to the minimal gap ∆ is now equal to 0. In standard linear bandits with action sets given by such convex sets Pr, the logarithmic regret bounds, which depend on the inverse gap, become vacuous, and the best problem-dependent rates degrade to √T, cf. Abbasi-Yadkori et al. (2011) and Lattimore & Szepesvári (2020). Banerjee et al.
(2023, Theorem 3.3) prove that reaching sub-√T regret on ellipsoids is essentially impossible, by showing that any algorithm with worst-case regret smaller than √T must explore all directions a proportion √T of the time. In the diversity-preserving setting, we state below that a version of the diversity-preserving UCB strategy enjoys a regret bound of order ln² T instead of the typical √T bound that a linear bandit algorithm not taking into account the arms At played would incur.

Statement of the strategy. The formulation of this diversity-preserving UCB strategy is almost identical to the Box B strategy, except that the set of extremal points is now given by the boundary ∂Pr of Pr and is infinite; we therefore specifically pick, in Step 1 of Box B:

$$\underline{p}_{t}\in\operatorname*{argmax}_{\underline{p}\in\partial\mathcal{P}_{r}}\left\langle\underline{p},\,\underline{U}(t-1)\right\rangle.\tag{4}$$

Note in passing that computing pt comes at no computational cost: a simple closed-form expression follows from Lemma 6 of Appendix D. If the radius r is small enough, then Pr is contained in the relative interior of the probability simplex, guaranteeing that every pure action a ∈ [K] gets some minimal positive chance to be played at any round. More precisely, the condition reads

$$r<r_{\mathrm{lim}}:=\frac{1}{K}\sqrt{1+\frac{1}{K-1}}\,,\qquad\text{for which}\qquad\min_{\underline{p}\in\mathcal{P}_{r}}\,\min_{a\in[K]}p_{a}\geqslant r_{\mathrm{lim}}-r\,.$$

The value rlim is the distance from the uniform distribution vector to the uniform distribution on all but one pure action. The following asymptotic result is proved in Appendix D.

Theorem 3. Assume that r < rlim. Consider a sub-Gaussian model D with parameter σ², known and used by the diversity-preserving UCB strategy (4) on Pr. For any bandit problem ν in D, the regret of this strategy is at most squared-logarithmic:
$$\limsup_{T\to\infty}\frac{R_{T}}{\ln^{2}T}<+\infty\,.\tag{5}$$

While this result should generalize to other strongly convex diversity-preserving sets, we stick to the representative example of Pr, which simplifies the exposition of the proof. In particular, we use the simple shape of Pr to provide explicit formulas for the optimal distribution(s) and optimal payoff (see Lemmas 6 and 7 in Appendix D). Similarly, we focused on the asymptotic ln² T rate with no attempt to provide a closed-form regret bound with explicit constants. These constants would depend on µ and on the curvature of P in a complex way, and we chose to ignore this aspect here.

## 5 Some Experiments On Synthetic Data

In this final section, we perform some (simple and preliminary) experiments that merely illustrate the dual behavior of the regret: either bounded or growing at a ln T rate. We believe that a more extensive empirical comparison would be interesting but would be out of the scope we targeted for this article.

Experimental setting. We consider K = 3 arms and the model D of Bernoulli distributions. The diversity-preserving set P is the triangle generated by

$$\underline{p}^{(1)}=(0,\,0.2,\,0.8)\,,\qquad\underline{p}^{(2)}=(0.6,\,0.2,\,0.2)\,,\qquad\text{and}\qquad\underline{p}^{(3)}=(0,\,0.8,\,0.2)\,.$$

We consider the bandit problems να with expectations

$$\underline{\mu}_{\alpha}=(1/2-\alpha,\,1/3,\,1/2+\alpha)\,,\qquad\text{where}\qquad\alpha\in\{-0.1,\,0.1\}\,.$$

Lemma 1. For α = −0.1, the unique optimal distribution is p⁽²⁾, which puts a positive probability mass on all actions. For α = 0.1, the unique optimal distribution is p⁽¹⁾, which does not put any probability mass on action 1.

Proof.
First, the distribution p⁽³⁾ is always dominated either by p⁽¹⁾ or by p⁽²⁾, and is therefore never an optimal distribution: for all α ∈ {−0.1, 0.1},

$$\begin{aligned}\left\langle\underline{p}^{(3)}-\underline{p}^{(1)},\,\underline{\mu}_{\alpha}\right\rangle&=0.6\left(1/3-(1/2+\alpha)\right)=-(0.1+0.6\alpha)<0\,;\\ \left\langle\underline{p}^{(3)}-\underline{p}^{(2)},\,\underline{\mu}_{\alpha}\right\rangle&=0.6\left(-(1/2-\alpha)+1/3\right)=-0.1+0.6\alpha<0\,.\end{aligned}$$

We now compare the mixed actions p⁽¹⁾ and p⁽²⁾:

$$\left\langle\underline{p}^{(1)}-\underline{p}^{(2)},\,\underline{\mu}_{\alpha}\right\rangle=0.6\left(-(1/2-\alpha)+(1/2+\alpha)\right)=1.2\alpha\,.$$

The sign of α thus determines which of p⁽¹⁾ or p⁽²⁾ is the optimal distribution.

![12_image_0.png](12_image_0.png)

Figure 1: Estimated expected cumulative regret over time, in the case α = −0.1 [top figure, bounded regret] and α = 0.1 [bottom figure, ln T growth], for the two algorithms considered. Solid lines report empirical means while shaded areas correspond to ±2 standard errors of the series defining the empirical means.

Numerical experiments. We ran the diversity-preserving UCB algorithm (see Box B, abbreviated DivP-UCB below), as well as Algorithm 2 (Constrained-L1-OFUL, abbreviated L1-OFUL below) of Celis et al. (2019). We did so on each of the two problems να, over T = 100,000 time steps, for N = 100 runs.
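The inner products computed in the proof of Lemma 1 can be verified numerically. The sketch below (plain Python, our own helper names) evaluates the expected payoff of each generating distribution of the triangle P and recovers the optimal one for both values of α:

```python
# Numerical check of Lemma 1: which generating distribution of P
# maximizes the expected payoff <p, mu_alpha>?
P = {
    "p1": (0.0, 0.2, 0.8),
    "p2": (0.6, 0.2, 0.2),
    "p3": (0.0, 0.8, 0.2),
}

def payoff(p, mu):
    """Expected payoff <p, mu> of a mixed action p."""
    return sum(pa * ma for pa, ma in zip(p, mu))

def optimal(alpha):
    """Name of the payoff-maximizing generating distribution for nu_alpha."""
    mu = (0.5 - alpha, 1/3, 0.5 + alpha)
    return max(P, key=lambda name: payoff(P[name], mu))

print(optimal(-0.1))  # p2: puts positive mass on all actions
print(optimal(0.1))   # p1: puts no mass on action 1
```

As in the proof, p⁽³⁾ is never returned, and the sign of α decides between p⁽¹⁾ and p⁽²⁾.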
The expected regret suffered by each algorithm is estimated by the empirical averages of pseudo-regrets observed on the N runs:

$$\widehat{R}_{T}(\underline{\nu}_{\alpha})=\frac{1}{N}\sum_{i=1}^{N}\widehat{R}_{T}(\underline{\nu}_{\alpha},i)\,,\quad\text{where}\quad\widehat{R}_{T}(\underline{\nu}_{\alpha},i)=\sum_{t=1}^{T}\left\langle\underline{p}^{\star}(\underline{\nu}_{\alpha})-\underline{p}_{t}(\alpha,i),\,\underline{\mu}_{\alpha}\right\rangle,$$

and where we denoted by p_t(α, i) the mixed action chosen at round t, during the i-th run, for problem να. Figure 1 reports the estimates R̂_T(να) obtained (solid lines); shaded areas correspond to ±2 standard errors of the series R̂_T(να, i) used in the definition of the R̂_T(να) as empirical averages. As expected by combining Theorem 1 and Lemma 1, we observe bounded regret when α = −0.1 and a logarithmic regret growth when α = 0.1 for DivP-UCB, whereas L1-OFUL incurs logarithmic regret in both cases. It also turns out that on this specific example, DivP-UCB performs better than L1-OFUL. (Note also that L1-OFUL is computationally more costly than DivP-UCB, since it needs to solve 2K linear programs at every time step, instead of a single one for DivP-UCB.)

## 6 Conclusion And Limitations

We revisited the diversity-preserving variant of K-armed bandits introduced by Celis et al. (2019): we considered the same framework as introduced therein (while extending it to sub-Gaussian models), with diversity-preserving sets P given by polytopes; but we stated and discussed a more satisfactory UCB-based strategy, controlling the regret by a ln T rate in general and even achieving a bounded regret in the case where all arms get a positive probability mass by all optimal distributions, i.e., when diversity is desirable.
We showed that this dual behavior (and the associated condition) were optimal for mean-unbounded models, while pointing out, through a Bernoulli counter-example, the intrinsic difficulties in characterizing the rates in the case of mean-bounded models. The latter issue is the first question left for future research. We then also discussed a specific example of a diversity-preserving set P not given by a polytope, where the regret grows at worst at a ln2T rate. Here as well, building a general theory for curved diversity-preserving sets P is left for future research. Finally, we proposed some simple and preliminary numerical simulations, that could be extended. ## Acknowledgments The work of Sébastien Gerchinovitz and Jean-Michel Loubes has benefitted from the AI Interdisciplinary Institute ANITI, which is funded by the French "Investing for the Future - PIA3" program under the Grant agreement ANR-19-PI3A-0004. Sébastien Gerchinovitz gratefully acknowledges the support of the DEEL project (https://www.deel.ai/). ## References Yasin Abbasi-Yadkori, David Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In *Advances in Neural Information Processing Systems (NeurIPS '11)*, volume 24, pp. 2312–2320, 2011. Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002. Ashwinkumar Badanidiyuru, Robert Kleinberg, and Aleksandrs Slivkins. Bandits with knapsacks. In IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS'13), pp. 207–216, 2013. Latest extended version available at arXiv:1305.2545, dated September 2017. Ashwinkumar Badanidiyuru, Robert Kleinberg, and Aleksandrs Slivkins. Bandits with global convex constraints and objective. *Journal of the ACM*, 65(3):1–55, 2018. Debangshu Banerjee, Avishek Ghosh, Sayak Ray Chowdhury, and Aditya Gopalan. Exploration in linear bandits with rich action sets and its implications for inference. 
In International Conference on Artificial Intelligence and Statistics (AIStats'2023), volume 206 of *PMLR*, pp. 8233–8262, 2023. Siddharth Barman, Arindam Khan, Arnab Maiti, and Ayush Sawarni. Fairness and welfare quantification for regret in multi-armed bandits. In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI'2023), volume 37(6), pp. 6762–6769, 2023. Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013. Sébastien Bubeck, Vianney Perchet, and Philippe Rigollet. Bounded regret in stochastic multi-armed bandits. In *Proceedings of the 26th Annual Conference on Learning Theory (COLT'2013)*, volume 30 of *PMLR*, pp. 122–134, 2013. L. Elisa Celis, Sayash Kapoor, Farnood Salehi, and Nisheeth Vishnoi. Controlling polarization in personalization: An algorithmic framework. In *Proceedings of the 2nd Annual Conference on Fairness, Accountability,* and Transparency (FAccT'19), pp. 160–169. Association for Computing Machinery, 2019. Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, and Stefanos Nikolaidis. Fair contextual multi-armed bandits: Theory and experiments. In *Proceedings of the 36th Conference on* Uncertainty in Artificial Intelligence (UAI'2020), volume 124 of *PMLR*, pp. 181–190, 2020. Yuan Shih Chow and Henry Teicher. *Probability Theory*. Springer, 1988. Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AIStats'2011), volume 15 of *PMLR*, pp. 208–214, 2011. Houston Claure, Yifang Chen, Jignesh Modi, Malte Jung, and Stefanos Nikolaidis. Multi-armed bandits with fairness constraints for distributing resources to human teammates, 2020. Preprint, arXiv:1907.00313. Richard Combes, Stefan Magureanu, and Alexandre Proutiere. Minimal exploration in structured stochastic bandits. 
In *Advances in Neural Information Processing Systems (NeurIPS'17)*, volume 30, pp. 1763–1771, 2017. Rémy Degenne, Evrard Garcelon, and Vianney Perchet. Bandits with side observations: Bounded vs. logarithmic regret. In *Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI'2018)*, pp. 467–476, 2018. Joseph L. Doob. *Stochastic Processes*. John Wiley & Sons, 1953. Khaled Eldowa, Nicolò Cesa-Bianchi, Alberto Maria Metelli, and Marcello Restelli. Information-theoretic regret bounds for bandits with fixed expert advice. In Proceedings of the 2023 IEEE Information Theory Workshop (ITW'2023), pp. 30–35, 2023. Aurélien Garivier, Pierre Ménard, and Gilles Stoltz. Explore first, exploit next: The true shape of regret in bandit problems. *Mathematics of Operations Research*, 44(2):377–399, 2019. Todd L. Graves and Tze Leung Lai. Asymptotically efficient adaptive choice of control laws in controlled Markov chains. *SIAM Journal on Control and Optimization*, 35(3):715–743, 1997. Hédi Hadiji, Sébastien Gerchinovitz, Jean-Michel Loubes, and Gilles Stoltz. Diversity-preserving k–armed bandits, revisited, 2020. Preprint, arXiv:2010.01874v1. Botao Hao, Tor Lattimore, and Csaba Szepesvári. Adaptive exploration in linear contextual bandit. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AIStats'2020), volume 108 of *PMLR*, pp. 3536–3545, 2020. Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963. Wen Huang, Kevin Labille, Xintao Wu, Dongwon Lee, and Neil Heffernan. Fairness-aware bandit-based recommendation. In *Proceedings of the 2021 IEEE International Conference on Big Data (IEEE Big* Data'2021), pp. 1273–1278. IEEE, 2021. Kwang-Sung Jun and Chicheng Zhang. Crush optimism with pessimism: Structured bandits beyond asymptotic optimality. 
In *Advances in Neural Information Processing Systems (NeurIPS'2020)*, volume 33, pp. 6366–6376, 2020. Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. *Advances in Applied* Mathematics, 6(1):4–22, 1985. Tor Lattimore and Rémi Munos. Bounded regret for finite-armed structured bandits. In *Advances in Neural* Information Processing Systems (NeurIPS'14), volume 27, pp. 550–558, 2014. Tor Lattimore and Csaba Szepesvári. The end of optimism? An asymptotic analysis of finite-armed linear bandits. In *Proceedings of the 20th International Conference on Artificial Intelligence and Statistics* (AIStats'2017), volume 54 of *PMLR*, pp. 728–737, 2017. Tor Lattimore and Csaba Szepesvári. *Bandit Algorithms*. Cambridge University Press, 2020. Fengjiao Li, Jia Liu, and Bo Ji. Combinatorial sleeping bandits with fairness constraints. *IEEE Transactions* on Network Science and Engineering, 7(3):1799–1813, 2020. Lihong Li, Wei Chu, John Langford, and Robert Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW'10), pp. 661–670, 2010. Yunqi Li, Yingqiang Ge, and Yongfeng Zhang. Tutorial on fairness of machine learning in recommender systems. In *Proceedings of the 44th International ACM SIGIR Conference on Research and Development* in Information Retrieval (SIGIR'2021), pp. 2654–2657, 2021. Qingsong Liu, Weihang Xu, Siwei Wang, and Zhixuan Fang. Combinatorial bandits with linear constraints: Beyond knapsacks and fairness. In *Advances in Neural Information Processing Systems (NeurIPS'2022)*, volume 35, pp. 2997–3010, 2022. Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, and David C. Parkes. Calibrated fairness in bandits. In Proceedings of the 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning (Fat/ML 2017), 2017. Alberto Maria Metelli, Matteo Papini, Pierluca D'Oro, and Marcello Restelli. 
Policy optimization as online learning with mediator feedback. In *Proceedings of the 35th AAAI Conference on Artificial Intelligence* (AAAI'2021), volume 35(10), pp. 6762–6769, 2021. Mathieu Molina, Nicolas Gast, Patrick Loiseau, and Vianney Perchet. Trading-off price for data quality to achieve fair online allocation. In *Advances in Neural Information Processing Systems (NeurIPS'2023)*, volume 36, pp. 40096–40139, 2023. Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, and Matteo Pirotta. Reinforcement learning in linear MDPs: Constant regret and representation selection. In *Advances in* Neural Information Processing Systems (NeurIPS'2021), volume 34, pp. 16371–16383, 2021. Vishakha Patil, Ganesh Ghalme, Vineet Nair, and Y. Narahari. Achieving fairness in the stochastic multiarmed bandit problem. *Journal of Machine Learning Research*, 22(174):1–31, 2021. Nícollas Silva, Heitor Werneck, Thiago Silva, Adriano CM Pereira, and Leonardo Rocha. Multi-armed bandits in recommendation systems: A survey of the state-of-the-art and future directions. Expert Systems with Applications, 197:116669, 2022. Andrea Tirinzoni, Alessandro Lazaric, and Marcello Restelli. A novel confidence-based algorithm for structured bandits. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AIStats'2020), volume 108 of *PMLR*, pp. 3175–3185, 2020. Alberto Vera, Siddhartha Banerjee, and Itai Gurvich. Online allocation and pricing: Constant regret via Bellman inequalities. *Operations Research*, 69(3):821–840, 2021. Andrew J. Wagenmaker and Dylan J. Foster. Instance-optimality in interactive decision making: Toward a non-asymptotic theory. In *Proceedings of the 36th Annual Conference on Learning Theory (COLT'2023)*, volume 195 of *PMLR*, pp. 1322–1472, 2023. Lequn Wang, Yiwei Bai, Wen Sun, and Thorsten Joachims. Fairness of exposure in stochastic bandits. 
In Proceedings of the 38th International Conference on Machine Learning (ICML'2021), volume 139, pp. 10686–10696. PMLR, 2021. Weitong Zhang, Zhiyuan Fan, Jiafan He, and Quanquan Gu. Settling constant regrets in linear Markov decision processes, 2024. Preprint, arXiv:2404.10745.

## A Proof Of Theorem 1: Analysis Of The Diversity-Preserving UCB Policy

The ln T bound of Theorem 1 is proved in Appendix A.1 by showing, in view of (2), that suboptimal distributions p ∈ Ext(P) are unlikely to be played more than ln T times. The analysis mimics and adapts the proof scheme corresponding to UCB run on Ext(P), with three new ingredients specifically underlined. The proof of the constant regret bound of Theorem 1 may be found in Appendix A.2 and follows a completely different logic. We first show that optimal distributions are typically played at least half of the time. This entails, because p⋆min(ν) > 0, that each pure action a ∈ [K] is played linearly many times. Therefore, all estimates are sharp, and little regret is suffered.

## A.1 Proof Of The ln T Bound In Theorem 1

We want to control the E[N_p(T)] by ln T; however, the favorable events at round t ⩾ 1 hold rather for quantities based on how often the pure actions were pulled:

$$\mathcal{E}(t)=\left\{\forall a\in[K],\quad\left|\mu_{a}-\widehat{\mu}_{a}(t)\right|\leqslant\sqrt{\frac{8\sigma^{2}\ln t}{\max\{N_{a}(t),1\}}}\right\}\quad\text{and}\quad\mathcal{E}^{\prime}(t)=\left\{\forall a\in[K],\quad\sqrt{\frac{8\sigma^{2}\ln t}{\max\{N_{a}(t),1\}}}<\frac{\Delta}{2}\right\}.$$
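These events are built from the same confidence widths that define the UCB indices U_a(t) of the diversity-preserving UCB strategy (Box B). As a concrete point of reference, here is a minimal Python sketch of one round of that strategy; the variable names (`mu_hat`, `N`, `Ext`) are ours, the list of vertices `Ext` stands in for Ext(P), and a small guard is added so the log term is positive in the very first rounds:

```python
import math
import random

def ucb_round(mu_hat, N, t, Ext, sigma2=1.0, rng=random):
    """One round of the diversity-preserving UCB strategy (sketch of Box B).

    mu_hat: empirical means per pure action (u0 convention when unpulled)
    N:      pull counts per pure action
    t:      current round (enters the log term)
    Ext:    list of extremal points of the polytope P (probability vectors)
    Returns the chosen mixed action p_t and the sampled pure action A_t.
    """
    # UCB indices U_a = mu_hat_a + sqrt(8 sigma^2 ln t / max(N_a, 1))
    U = [m + math.sqrt(8 * sigma2 * math.log(max(t, 2)) / max(n, 1))
         for m, n in zip(mu_hat, N)]
    # Step 1: pick an extremal point maximizing <p, U>
    p_t = max(Ext, key=lambda p: sum(pa * ua for pa, ua in zip(p, U)))
    # Step 2: draw the pure action A_t at random according to p_t
    A_t = rng.choices(range(len(U)), weights=p_t)[0]
    return p_t, A_t
```

With equal pull counts, the widths are identical across actions, so the argmax reduces to the empirically best vertex, as in Lemma 1.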
We also introduce the following events, for any p ∈ P, though we will use them only for p ∈ Ext(P) \ Opt(ν, P) in the sequel:

$$\mathcal{E}^{\prime\prime}(\underline{p},t)=\left\{\sum_{a=1}^{K}p_{a}\sqrt{\frac{8\sigma^{2}\ln t}{\max\{N_{a}(t),1\}}}<\frac{\Delta}{2}\right\}.$$

A first new ingredient consists of the following inequalities, obtained by distinguishing whether pa = 0 or pa > 0 and by Jensen's inequality for the square root: for all p ∈ P, all t ⩽ T, and all n ⩾ (65Kσ²/∆²) ln T,

$$2\sqrt{8\sigma^{2}\ln t}\sum_{a\in[K]}p_{a}\frac{1}{\sqrt{\max\{np_{a}/2,\,1\}}}\leqslant\frac{8\sqrt{\sigma^{2}\ln t}}{\sqrt{n}}\sum_{a\in[K]}\sqrt{p_{a}}\leqslant\frac{8\sqrt{\sigma^{2}\ln t}}{\sqrt{n}}\sqrt{K}<\Delta\,,$$

thus the following inclusion:

$$\bigcap_{a\in[K]}\left\{N_{a}(t)\geqslant np_{a}/2\right\}=\bigcap_{a\,:\,p_{a}>0}\left\{N_{a}(t)\geqslant np_{a}/2\right\}\subseteq\mathcal{E}^{\prime\prime}(\underline{p},t)\,.\tag{6}$$

Note also the inclusion E′(t) ⊆ E′′(p, t), valid for all p ∈ P. Now, the second new ingredient, consisting of the lemma below, is the key to relate the number of times N_p(t) a suboptimal distribution p ∈ Ext(P) is picked to the numbers of draws N_a(t) of pure actions a ∈ [K].

Lemma 2. Fix p ∈ Ext(P), and denote by p_min>0 = min{p_a : a ∈ [K] s.t. p_a > 0} > 0 its minimal positive component. Then, for all t ⩾ 1, all n ⩾ (10/p_min>0) ln T, and all a ∈ [K],

$$\mathbb{P}\Big(\big\{N_{\underline{p}}(t)\geqslant n\big\}\cap\big\{N_{a}(t)<np_{a}/2\big\}\Big)\leqslant\frac{1}{T}\,.$$

Proof. We only need to show the inequality for a ∈ [K] such that pa > 0. We note that

$$N_{a}(t)\geqslant\sum_{s=1}^{t}\mathds{1}_{\{\underline{p}_{s}=\underline{p}\}}\,\mathds{1}_{\{A_{s}=a\}}\,;\tag{7}$$

thus, by optional skipping² (see Theorem 5.2 of Doob, 1953, Chapter III, p.
145; see also Chow & Teicher, 1988, Section 5.3), the distribution of Na(t) on the event {N_p(t) ⩾ n} is larger than the distribution of a random variable B_{n,a} with binomial distribution of parameters n and pa. In particular,

$$\mathbb{P}\Big(\big\{N_{\underline{p}}(t)\geqslant n\big\}\cap\big\{N_{a}(t)<np_{a}/2\big\}\Big)\leqslant\mathbb{P}\big(B_{n,a}<np_{a}/2\big)=\mathbb{P}\big(B_{n,a}-np_{a}<-np_{a}/2\big)\leqslant\exp\!\left(-\frac{\varepsilon^{2}}{2(v+b\varepsilon/3)}\right)\leqslant\exp\!\left(-\frac{np_{a}}{8(1+1/6)}\right),$$

where, for the final inequality, we applied Bernstein's inequality (see, e.g., Boucheron et al., 2013, end of Section 2.7, Equation 2.10) with variance v = n pa(1 − pa), upper bound b = 1 on the range, and deviation ε = npa/2. Substituting the bound on n concludes the proof.

²Sometimes called optional sampling.

The rest of the analysis is essentially standard. The aim is to control each of the following expectations, for p ∈ Ext(P) \ Opt(ν, P) and where n_p ⩾ 1 is defined later:

$$\mathbb{E}\big[N_{\underline{p}}(T)\big]\leqslant n_{\underline{p}}+\sum_{t=n_{\underline{p}}}^{T-1}\mathbb{P}\big\{\underline{p}_{t+1}=\underline{p}\ \text{ and }\ N_{\underline{p}}(t)\geqslant n_{\underline{p}}\big\}\,.\tag{8}$$

We first note that for t ⩾ 1, for all p ∈ Ext(P) \ Opt(ν, P),

$$\left\{\underline{p}_{t+1}=\underline{p}\right\}\subseteq\overline{\mathcal{E}(t)}\cup\overline{\mathcal{E}^{\prime\prime}(\underline{p},t)}\subseteq\overline{\mathcal{E}(t)}\cup\overline{\mathcal{E}^{\prime}(t)}\,;\tag{9}$$

indeed, on E(t) ∩ E′′(p, t), for p⋆ ∈ Opt(ν, P), by definitions of these sets and of U(t),

$$\langle\underline{p},\,\underline{U}(t)\rangle=\langle\underline{p},\,\underline{\widehat{\mu}}(t)\rangle+\sum_{a=1}^{K}p_{a}\sqrt{\frac{8\sigma^{2}\ln t}{\max\{N_{a}(t),1\}}}\leqslant\langle\underline{p},\,\underline{\mu}\rangle+2\sum_{a=1}^{K}p_{a}\sqrt{\frac{8\sigma^{2}\ln t}{\max\{N_{a}(t),1\}}}$$
$$<\langle\underline{p},\,\underline{\mu}\rangle+\Delta\leqslant\langle\underline{p},\,\underline{\mu}\rangle+\Delta_{\underline{p}}=\langle\underline{p}^{\star},\,\underline{\mu}\rangle\leqslant\langle\underline{p}^{\star},\,\underline{\widehat{\mu}}(t)\rangle+\sum_{a=1}^{K}p_{a}^{\star}\sqrt{\frac{8\sigma^{2}\ln t}{\max\{N_{a}(t),1\}}}=\langle\underline{p}^{\star},\,\underline{U}(t)\rangle\,,$$

while p_{t+1} = p requires ⟨p, U(t)⟩ ⩾ ⟨p⋆, U(t)⟩. Let

$$n_{\underline{p}}=\max\left\{\frac{65K}{\Delta^{2}}\ln T,\ \frac{10}{p_{\min>0}}\ln T,\ 1+\frac{1}{8\sigma^{2}}\max_{a\in[K]}(\mu_{a}-u_{0})^{2}\right\};\tag{10}$$

the third element in the maximum will turn useful in the application of Lemma 3 below. For each distribution p ∈ Ext(P) \ Opt(ν, P), the inclusions (9) and then (6) entail

$$\{\underline{p}_{t+1}=\underline{p}\}\cap\{N_{\underline{p}}(t)\geqslant n_{\underline{p}}\}\subseteq\overline{\mathcal{E}(t)}\cup\left(\overline{\mathcal{E}^{\prime\prime}(\underline{p},t)}\cap\{N_{\underline{p}}(t)\geqslant n_{\underline{p}}\}\right)\subseteq\overline{\mathcal{E}(t)}\cup\bigcup_{a\in[K]}\{N_{\underline{p}}(t)\geqslant n_{\underline{p}}\}\cap\{N_{a}(t)<n_{\underline{p}}p_{a}/2\}\,.$$

Substituting this bound into (8), resorting to union bounds and to Lemma 2, yields

$$\mathbb{E}\big[N_{\underline{p}}(T)\big]\leqslant n_{\underline{p}}+K+\sum_{t=n_{\underline{p}}}^{T-1}\mathbb{P}\Big(\overline{\mathcal{E}(t)}\Big)\leqslant n_{\underline{p}}+K+K\sum_{t=n_{\underline{p}}}^{T-1}(2t\,t^{-4})\leqslant n_{\underline{p}}+2K\,,\tag{11}$$

where we applied Lemma 3 below for each a ∈ [K] and with δ = t⁻⁴, which satisfies the condition required therein given that t ⩾ n_p.
The proof is concluded by resorting to the decomposition (2), to obtain

$$R_{T}\leqslant\sum_{\underline{p}\in\operatorname{Ext}(\mathcal{P})\setminus\operatorname{Opt}(\underline{\nu},\mathcal{P})}\Delta_{\underline{p}}\big(n_{\underline{p}}+2K\big)\,,\tag{12}$$

which is of the claimed form Cν ln T + cν. In the derivation of this regret bound, we targeted simplicity and did not try to improve the constants Cν and cν.

Lemma 3 is an essentially standard concentration result for stochastic bandits; the only adaptation therein (the third new ingredient) is handling the case where Na(t) = 0.

Lemma 3. Consider a model Dσ² with σ²-sub-Gaussian distributions, and fix a bandit problem ν in Dσ². For t ⩾ 1, if the actions A1, . . . , At and rewards Y1, . . . , Yt were generated according to the protocol of Box A, then, for all a ∈ [K], for all δ > 0 with 2 ln(1/δ) > (µa − u0)²/σ²,

$$\mathbb{P}\!\left\{\left|\mu_{a}-\widehat{\mu}_{a}(t)\right|\geqslant\sqrt{\frac{2\sigma^{2}\ln(1/\delta)}{\max\{N_{a}(t),1\}}}\right\}\leqslant2t\delta\,.$$

Proof. Again by optional skipping (see the proof of Lemma 2), by denoting by µ̂_{a,n} an empirical average of n ⩾ 1 i.i.d. random variables with distribution νa, and by using the convention µ̂_{a,0} = u0, we have

$$\mathbb{P}\left\{\left|\mu_{a}-\widehat{\mu}_{a}(t)\right|\geqslant\sqrt{\frac{2\sigma^{2}\ln(1/\delta)}{\max\{N_{a}(t),1\}}}\right\}\leqslant\mathbb{P}\left\{\exists\,n\in\{0,1,\ldots,t\}\,:\,\left|\mu_{a}-\widehat{\mu}_{a,n}\right|\geqslant\sqrt{\frac{2\sigma^{2}\ln(1/\delta)}{\max\{n,1\}}}\right\}\leqslant0+\sum_{n=1}^{t}\mathbb{P}\left\{\left|\mu_{a}-\widehat{\mu}_{a,n}\right|\geqslant\sqrt{\frac{2\sigma^{2}\ln(1/\delta)}{n}}\right\}\leqslant\sum_{n=1}^{t}2\delta=2t\delta\,,$$
where the case n = 0 was dropped in the union bound because |µa − u0| < √(2σ² ln(1/δ)) by assumption, and where the final inequalities follow from the Cramér–Chernoff inequality (see, e.g., Lattimore & Szepesvári, 2020, Corollary 5.1).

## A.2 Proof Of The Constant Regret Bound In Theorem 1

As indicated at the beginning of Appendix A, the proof of the constant regret bound of Theorem 1 follows a completely different logic. For instance, in Appendix A.1, the sets E′(t) were instrumental in the proof but we had not controlled their probabilities, which constitutes the core of the analysis here. To do so, we show that optimal distributions are typically played at least half of the time; this is the main contribution of this proof. Then, because p⋆min(ν) > 0, we know that each pure action a ∈ [K] is played linearly many times, which cannot happen on the events $\overline{\mathcal{E}^{\prime}(t)}$, where at least one action is only played logarithmically many times. This proof strategy for bounded regret was used in Lattimore & Munos (2014). We face the additional technical challenge here that we do not control with certainty the number of pulls of every pure arm because of the randomness in generating the At from the pt; we handle this by carefully applying Bernstein's inequality.
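The kind of martingale deviation controlled in this appendix can be illustrated by simulation. The sketch below (our own, with illustrative parameters and a fixed mixed action, a simplification of the general setting) checks that the centered pull counts M_{a,t} = N_a(t) − Σ_s p_{s,a} essentially never fall below −√(t ln t), in line with the Hoeffding–Azuma bound (18) stated later:

```python
import math
import random

def max_negative_deviation(t, p_a, runs, seed=0):
    """Simulate M_{a,t} = N_a(t) - sum_s p_{s,a} when every round puts the
    same mass p_a on arm a, and return the most negative value of M_{a,t}
    observed over independent runs (seeded for reproducibility)."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(runs):
        m = 0.0
        for _ in range(t):
            m += (1.0 if rng.random() < p_a else 0.0) - p_a
        worst = min(worst, m)
    return worst

t = 1000
# Hoeffding-Azuma gives P(M_{a,t} <= -sqrt(t ln t)) <= 1/t^2, so over a few
# hundred runs the threshold should essentially never be crossed.
threshold = -math.sqrt(t * math.log(t))
print(max_negative_deviation(t, 0.3, runs=200), ">", threshold)
```

The simulated worst case stays far above the threshold, reflecting that the bound 1/t² makes such deviations negligible once summed over t.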
**Step 1: Preparation.** We fix a threshold $t_{0}\geqslant8+\max_{a\in[K]}(\mu_{a}-u_{0})^{2}/(8\sigma^{2})$ such that

$$\forall t\geqslant t_{0},\qquad\frac{t}{2}\,p^{\star}_{\min}(\underline{\nu})-\frac{32\sigma^{2}\ln t}{\Delta^{2}}\geqslant\sqrt{t\ln t}\qquad\text{and}\qquad\frac{\Delta t}{4}-\sqrt{8\sigma^{2}\ln t}\big(1+2\sqrt{t-1}\big)>\sqrt{8\sigma^{2}t\ln^{2}t}\,.\tag{13}$$

For example, with the convention that ln ln x = −∞ if x ⩽ 1, the constraints above are satisfied with the threshold t0 such that

$$\ln t_{0}=\max\left\{2+\frac{1}{8\sigma^{2}},\ \ln\frac{\sigma^{2}}{\Delta^{2}p_{\min}^{\star}(\underline{\nu})^{2}}+3\ln\ln\frac{18\,432\,\sigma^{2}}{\Delta^{2}p_{\min}^{\star}(\underline{\nu})^{2}}+10\right\}.\tag{14}$$

(*Note to reviewers: for the sake of concision, we decided to omit the half-page of calculations that lead to this bound. We could of course add it if deemed necessary.*)

By (9), we first note that

$$R_{T}\leqslant R_{t_{0}}+\max_{a\in[K]}\mu_{a}\sum_{t=t_{0}}^{T-1}\left(\mathbb{P}\Big(\overline{\mathcal{E}(t)}\Big)+\mathbb{P}\Big(\overline{\mathcal{E}^{\prime}(t)}\Big)\right)\leqslant R_{t_{0}}+\max_{a\in[K]}\mu_{a}\left(K+\sum_{t=t_{0}}^{T-1}\mathbb{P}\Big(\overline{\mathcal{E}^{\prime}(t)}\Big)\right),$$

where the final inequality follows from a bound proved in (11), given the first condition on t0. The key step is the decomposition

$$\overline{\mathcal{E}^{\prime}(t)}\subseteq\left\{N_{\star}(t)<t/2\right\}\cup\left(\overline{\mathcal{E}^{\prime}(t)}\cap\left\{N_{\star}(t)\geqslant t/2\right\}\right),\qquad\text{where}\qquad N_{\star}(t)=\sum_{s=1}^{t}\sum_{\underline{p}\in\operatorname{Opt}(\underline{\nu},\mathcal{P})}\mathds{1}_{\{\underline{p}_{s}=\underline{p}\}}$$

denotes the number of times optimal distributions are played.
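The two inequalities in (13) can be checked numerically for concrete parameter values. The sketch below uses illustrative values σ² = 1, Δ = 0.1, p⋆min(ν) = 0.2 (our choice, not taken from the text) and evaluates both conditions at a given round t:

```python
import math

def conditions_13(t, sigma2, delta, p_min):
    """Check the two threshold conditions (13) at a given round t,
    for illustrative problem parameters sigma2, delta, p_min."""
    lt = math.log(t)
    cond1 = t / 2 * p_min - 32 * sigma2 * lt / delta**2 >= math.sqrt(t * lt)
    cond2 = (delta * t / 4 - math.sqrt(8 * sigma2 * lt) * (1 + 2 * math.sqrt(t - 1))
             > math.sqrt(8 * sigma2 * t * lt**2))
    return cond1 and cond2

# With these illustrative values, t = 10**7 satisfies both conditions,
# while t = 10**6 still violates the second one.
print(conditions_13(10**7, 1.0, 0.1, 0.2))  # True
print(conditions_13(10**6, 1.0, 0.1, 0.2))  # False
```

This illustrates why t0 must be large when Δ or p⋆min(ν) is small, consistently with the closed form (14).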
Step 2 (core step): Optimal distributions are typically played at least half of the time. We first deal with the events {N⋆(t) < t/2}, where t ⩾ t0. That is, contrary to the proof of Appendix A.1, we do not only control N⋆(t) in expectation, but in high probability. To that end, we consider first

$$\left\{N_{\star}(t)<t/2\right\}\cap\mathcal{E}\big(\lfloor t/4\rfloor:\infty\big)\,,\qquad\text{where}\quad\mathcal{E}\big(\lfloor t/4\rfloor:\infty\big)=\bigcap_{s=\lfloor t/4\rfloor}^{\infty}\mathcal{E}(s)\,.$$

By a classic UCB argument, for all s ⩾ 1, by definition of E(s), on E(s),

$$0\leqslant U_{a}(s)-\mu_{a}=\widehat{\mu}_{a}(s)-\mu_{a}+\sqrt{\frac{8\sigma^{2}\ln s}{\max\{N_{a}(s),1\}}}\leqslant\mathrm{UCB}_{s,a}\stackrel{\text{def}}{=}2\sqrt{\frac{8\sigma^{2}\ln s}{\max\{N_{a}(s),1\}}}\,,$$

thus, for any optimal p⋆ ∈ Opt(ν, P), using also the definition of p_{s+1} as some empirical best distribution, we have, on E(s),

$$\langle\underline{p}^{\star}-\underline{p}_{s+1},\,\underline{\mu}\rangle=\underbrace{\langle\underline{p}^{\star},\,\underline{\mu}-\underline{U}(s)\rangle}_{\leqslant0}+\underbrace{\langle\underline{p}^{\star}-\underline{p}_{s+1},\,\underline{U}(s)\rangle}_{\leqslant0}+\underbrace{\langle\underline{p}_{s+1},\,\underline{U}(s)-\underline{\mu}\rangle}_{\leqslant\sum_{a}p_{s+1,a}\,\mathrm{UCB}_{s,a}}\leqslant2\sum_{a\in[K]}p_{s+1,a}\sqrt{\frac{8\sigma^{2}\ln s}{\max\{N_{a}(s),1\}}}\,.$$

This inequality and the definition of ∆ as the smallest gap over Ext(P) yield, on E(⌊t/4⌋ : ∞),

$$\sum_{s=\lfloor t/4\rfloor}^{t}\sum_{a\in[K]}p_{s,a}\sqrt{\frac{8\sigma^{2}\ln(s-1)}{\max\{N_{a}(s-1),1\}}}\geqslant\sum_{s=\lfloor t/4\rfloor}^{t}\left\langle\underline{p}^{\star}-\underline{p}_{s},\,\underline{\mu}\right\rangle\geqslant\Delta\big(t+1-\lfloor t/4\rfloor-N_{\star}(t)\big)\,.$$

As t ⩾ t0 ⩾ 8 and as the sums are over non-negative terms, we proved so far, on E(⌊t/4⌋ : ∞):
$$\sum_{s=2}^{t}\sum_{a\in[K]}p_{s,a}\sqrt{\frac{8\sigma^{2}\ln(s-1)}{\max\{N_{a}(s-1),1\}}}\geqslant\Delta\big(t+1-\lfloor t/4\rfloor-N_{\star}(t)\big)\,.\tag{15}$$

We introduce the martingale

$$M_{\Sigma,t}\stackrel{\text{def}}{=}\sum_{s=2}^{t}\sum_{a\in[K]}\left(p_{s,a}-\mathds{1}_{\{A_{s}=a\}}\right)\sqrt{\frac{8\sigma^{2}\ln(s-1)}{\max\{N_{a}(s-1),1\}}}\,.$$

It turns out that for each a ∈ [K], as Na(s − 1) increases by 1 if and only if As = a, and otherwise remains unchanged, we have the crude deterministic bound:

$$\sum_{s=2}^{t}\mathds{1}_{\{A_{s}=a\}}\sqrt{\frac{8\sigma^{2}\ln(s-1)}{\max\{N_{a}(s-1),1\}}}\leqslant\sqrt{8\sigma^{2}\ln t}\sum_{s=2}^{t}\frac{\mathds{1}_{\{A_{s}=a\}}}{\sqrt{\max\{N_{a}(s-1),1\}}}\leqslant\sqrt{8\sigma^{2}\ln t}\sum_{n=0}^{t-1}\frac{1}{\sqrt{\max\{n,1\}}}\leqslant\sqrt{8\sigma^{2}\ln t}\left(1+2\sqrt{t-1}\right),$$

where we used that Na(t − 1) ⩽ t − 1 to determine the range of values for n in the sum. The inequality above, together with the relationship (15) and the definition of M_{Σ,t}, entails that

$$\left\{N_{\star}(t)<t/2\right\}\cap\mathcal{E}\big(\lfloor t/4\rfloor:\infty\big)\subseteq\left\{M_{\Sigma,t}+\sqrt{8\sigma^{2}\ln t}\big(1+2\sqrt{t-1}\big)>\Delta\big(t+1-\lfloor t/4\rfloor-t/2\big)\right\}\subseteq\left\{M_{\Sigma,t}>\sqrt{8\sigma^{2}t\ln^{2}t}\right\},\tag{16}$$

where the second inclusion comes from the final condition (13) on t0. As indicated later, the probability of the right-hand side is smaller than 1/t².

Step 3: The remaining events. We now turn to the events $\overline{\mathcal{E}^{\prime}(t)}\cap\{N_{\star}(t)\geqslant t/2\}$, and show that they are unlikely; the intuition is that when N⋆(t) is linearly large, because of the condition p⋆min(ν) > 0, the Na(t) should also be linearly large.
More precisely, by the respective definitions of E′(t) and of p⋆min(ν) > 0, $$\overline{\mathcal{E}'(t)}=\bigcup_{a\in[K]}\left\{N_{a}(t)\leqslant\frac{32\sigma^{2}\ln t}{\Delta^{2}}\right\}\qquad\text{and}\qquad\forall\,a\in[K],\quad\sum_{s=1}^{t}p_{s,a}\geqslant\sum_{s=1}^{t}\mathds{1}_{\{\underline{p}_{s}\in\mathrm{Opt}(\underline{\nu},\mathcal{P})\}}\,p^{\star}_{\min}(\underline{\nu})=N_{\star}(t)\,p^{\star}_{\min}(\underline{\nu})\,.$$ We introduce the martingales $M_{a,t}=N_{a}(t)-\sum_{s=1}^{t}p_{s,a}$ and get, for t ⩾ t0, $$\overline{\mathcal{E}'(t)}\cap\left\{N_{\star}(t)\geqslant t/2\right\}\subseteq\bigcup_{a\in[K]}\left\{M_{a,t}\leqslant\frac{32\sigma^{2}\ln t}{\Delta^{2}}-\frac{t}{2}\,p^{\star}_{\min}(\underline{\nu})\right\}\subseteq\bigcup_{a\in[K]}\left\{M_{a,t}\leqslant-\sqrt{t\ln t}\right\},\tag{17}$$ where the final inclusion is by the conditions (13) on t0. We bound below the probability of the right-hand side. Step 4: Taking probabilities, via the Hoeffding–Azuma inequality. Collecting the inclusions (16) and (17) together, and applying union bounds, we proved so far $$\sum_{t=t_{0}}^{T-1}\mathbb{P}\Big(\overline{\mathcal{E}'(t)}\Big)\leqslant\sum_{t=t_{0}}^{T-1}\mathbb{P}\Big(M_{\Sigma,t}>\sqrt{8\sigma^{2}t\ln^{2}t}\Big)+\sum_{t=t_{0}}^{T-1}\mathbb{P}\Big(\overline{\mathcal{E}\big(\lfloor t/4\rfloor:\infty\big)}\Big)+\sum_{a\in[K]}\sum_{t=t_{0}}^{T-1}\mathbb{P}\big(M_{a,t}\leqslant-\sqrt{t\ln t}\,\big)\,,$$ and now show that each of these sums is bounded.
We note that $\mathbb{P}\Big(\overline{\mathcal{E}(t)}\Big)\leqslant2Kt^{-3}$ for t ⩾ t0, as already shown in (11), thus, by union bounds, $$\sum_{t=t_{0}}^{T-1}\mathbb{P}\Big(\overline{\mathcal{E}}\big(\lfloor t/4\rfloor:\infty\big)\Big)\leqslant\sum_{t=t_{0}}^{T-1}\sum_{s\geqslant\lfloor t/4\rfloor}2Ks^{-3}\leqslant\sum_{t=t_{0}}^{T-1}\frac{K}{(t/4-1)^{2}}\leqslant\sum_{t=8}^{\infty}\frac{4K}{(t-4)^{2}}\leqslant2K\,.$$ For each a ∈ [K], since the $\mathds{1}_{\{A_{s}=a\}}-p_{s,a}$ form martingale increments with values in a predictable range of total width 1, we have, by the Hoeffding–Azuma inequality, for all t ⩾ 1 and all ε > 0, $$\mathbb{P}\big\{M_{a,t}\leqslant-\varepsilon\big\}=\mathbb{P}\bigg\{N_{a}(t)-\sum_{s=1}^{t}p_{s,a}\leqslant-\varepsilon\bigg\}\leqslant\exp\bigl(-2\varepsilon^{2}/t\bigr)\qquad\text{thus}\qquad\mathbb{P}\big\{M_{a,t}\leqslant-\sqrt{t\ln t}\big\}\leqslant\frac{1}{t^{2}}\,.\tag{18}$$ Similarly, the MΣ,t are sums of t − 1 martingale increments with values in predictable ranges of total widths each smaller than $\sqrt{8\sigma^{2}\ln t}$, so that, again by the Hoeffding–Azuma inequality, $$\mathbb{P}\Big\{M_{\Sigma,t}\geqslant\varepsilon\sqrt{8\sigma^{2}\ln t}\Big\}\leqslant\exp\bigl(-2\varepsilon^{2}/(t-1)\bigr)\qquad\text{thus}\qquad\mathbb{P}\Big\{M_{\Sigma,t}>\sqrt{8\sigma^{2}\ln t}\sqrt{t\ln t}\Big\}\leqslant\frac{1}{t^{2}}\,.$$
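As an aside (ours, not part of the original proof), the choice ε = √(t ln t) in the Hoeffding–Azuma tails above can be checked by direct computation: it turns the tail exp(−2ε²/t) into exactly t⁻², and the variant with denominator t − 1 into something even smaller. The helper name below is hypothetical.

```python
import math

def hoeffding_azuma_tail(eps: float, t: int) -> float:
    """Hoeffding-Azuma tail bound exp(-2*eps^2/t) for a sum of t
    martingale increments, each lying in a predictable range of width 1."""
    return math.exp(-2.0 * eps ** 2 / t)

for t in [10, 100, 10_000]:
    eps = math.sqrt(t * math.log(t))  # threshold used in (18)
    tail = hoeffding_azuma_tail(eps, t)
    # exp(-2 * t*ln(t) / t) = exp(-2 ln t) = t^-2
    assert abs(tail - t ** -2) < 1e-12 * t ** -2
    # same eps with denominator t-1 (the case of M_Sigma,t) is even smaller
    assert math.exp(-2.0 * eps ** 2 / (t - 1)) <= t ** -2
```

This is exactly why both deviation probabilities in this step are summable: each is at most 1/t².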
Collecting all bounds and recalling that t0 ⩾ 8, we proved the following closed-form finite regret bound, where we recall that t0 was defined in (13) and where Rt0 can be bounded by the general ln t0 closed-form regret bound (12) proved above: $$\sum_{t=t_{0}}^{T-1}\mathbb{P}\Big(\overline{\mathcal{E}'(t)}\Big)\leqslant3K+1\,,\qquad\text{thus}\qquad R_{T}\leqslant R_{t_{0}}+\max_{a\in[K]}\mu_{a}\,(3K+1)\,.\tag{19}$$ Combine the logarithmic regret bounds (10) and (12) with the upper bound (14) on ln t0 to get a closed-form expression of the final regret.

## B Proofs Of Theorem 2: Lower Bound On The Diversity-Preserving Regret

The main difference with respect to the classic proof schemes for lower bounds (Lai & Robbins, 1985; Graves & Lai, 1997; Garivier et al., 2019) is described in Remark 1: eventually, pure arms At are played, so that the information gain should be quantified in terms of the At, and it is larger than if it were quantified in terms of the distributions pt used; yet, the UFC constraints on the strategies are in terms of the distributions pt used. This leads to the specific constrained minimum on lim inf RT / ln T stated in Lemma 4.
## B.1 General Lower Bound Given By A Constrained Infimum

For a model D and a bandit problem ν in D with a single optimal distribution p⋆(ν) in P, we introduce the following set of confusing alternative bandit problems: $$\mathrm{Alt}(\underline{\nu})=\Big\{\underline{\nu}'\ \text{in}\ \mathcal{D}\ \Big|\ \underline{p}^{\star}(\underline{\nu})\notin\mathrm{Opt}(\underline{\nu}')\ \ \text{and}\ \ \forall\,1\leqslant a\leqslant K,\quad\nu_{a}=\nu_{a}'\ \ \text{or}\ \ \big[p_{a}^{\star}(\underline{\nu})=0\ \ \text{and}\ \ \mathrm{KL}(\nu_{a},\nu_{a}')<+\infty\big]\Big\}\,.$$ The bandit problems in Alt(ν) are such that p⋆(ν) is suboptimal for them, yet, the player cannot discriminate them from ν by only playing p⋆(ν), as, for each arm a, either p⋆a(ν) = 0 and selecting the optimal probability p⋆(ν) never results in picking arm a, or νa = ν′a and observing a reward associated with a does not provide discriminative information. The proof of the following lemma relies on standard techniques introduced by Graves & Lai (1997). This is why we postpone its proof to Appendix B.3. The linear program defining $c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)$ is over vectors (np), where p ̸= p⋆(ν), i.e., no component n_{p⋆(ν)} is considered.

Lemma 4. For all possibly randomized UFC strategies over D given P, only picking distributions in Ext(P), for all bandit problems ν in D with a single optimal distribution in P denoted by p⋆(ν), $$\liminf_{T\to\infty}\frac{R_{T}}{\ln T}\geqslant c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)\stackrel{\mathrm{def}}{=}\inf\Bigg\{\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}\Delta_{\underline{p}}\,n_{\underline{p}}\ :\ n_{\underline{p}}\geqslant0\ \ \text{and}\ \ \forall\,\underline{\nu}'\in\mathrm{Alt}(\underline{\nu}),\ \sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}n_{\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}p_{a}\,\mathrm{KL}(\nu_{a},\nu_{a}')\geqslant1\Bigg\}\,.$$ We may also prove the following equivalence, which will be the key for both the proof of Theorem 2 and the one of Proposition 1.

Lemma 5.
Consider a (mean-bounded or mean-unbounded) model D, a diversity-preserving polytope P and a bandit problem ν in D with a single optimal distribution p⋆(ν) in P. We have the equivalence: $$\mathrm{Alt}(\underline{\nu})\neq\varnothing\quad\iff\quad c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)>0\,.$$ Proof. If Alt(ν) is empty, then the linear program defining $c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)$ is unconstrained, so that $c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)=0$. If Alt(ν) contains a problem ν′, then since p⋆(ν) is not optimal for ν′, there exists at least one a ∈ [K] such that p⋆a(ν) = 0 and ν′a ̸= νa with KL(νa, ν′a) < +∞, and we also have $$K_{\max}(\underline{\nu},\underline{\nu}')\stackrel{\mathrm{def}}{=}\max_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}\mathrm{KL}(\nu_{a},\nu_{a}')<+\infty\,.$$ Thus, for any feasible (np), by the constraint associated with ν′ for the first inequality, and by substituting 1 ⩽ ∆p/∆ for p ̸= p⋆(ν) for the second inequality, we get $$1\leqslant\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}n_{\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}\underbrace{p_{a}}_{\leqslant1}\,\underbrace{\mathrm{KL}(\nu_{a},\nu_{a}')}_{\leqslant K_{\max}(\underline{\nu},\underline{\nu}')}\leqslant\frac{K_{\max}(\underline{\nu},\underline{\nu}')}{\Delta}\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}\Delta_{\underline{p}}\,n_{\underline{p}}\,.$$ This imposes the lower bound $\Delta/K_{\max}(\underline{\nu},\underline{\nu}')>0$ on $c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)$.

## B.2 Proof Of Theorem 2

Step 1: Reduction argument.
We first note that any (possibly randomized) strategy picking distributions in the polytope P can be converted into a (randomized) strategy picking distributions only in Ext(P), as required by Lemma 4. This is indeed possible as only the final pure actions drawn matter. More precisely, given a probability distribution ρt over P, we introduce $$\int_{\mathcal{P}}\underline{p}\,\mathrm{d}\rho_{t}(\underline{p})=\sum_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}\Phi_{\underline{p}}(\rho_{t})\,\underline{p}\,,\qquad\text{and let}\quad\Phi(\rho_{t})=\big(\Phi_{\underline{p}}(\rho_{t})\big)_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}\,,$$ where the convex weights Φp(ρt) exist as P is the convex hull of Ext(P). We interpret the vector Φ(ρt) as a probability distribution over Ext(P). Now, the random variables At ∈ [K] and A′t ∈ [K] drawn as follows, in two-stage randomizations, have the same distributions: $$\underline{p}_{t}\sim\rho_{t}\ \text{then}\ A_{t}\sim\underline{p}_{t}\qquad\text{and}\qquad\underline{p}'_{t}\sim\Phi(\rho_{t})\ \text{then}\ A'_{t}\sim\underline{p}'_{t}\,.$$ Step 2: Application of the results of Section B.1. Thus, by Lemmas 4 and 5, it suffices to show that Alt(ν) is not empty. Let a ∈ [K] be such that p⋆a(ν) = 0. By assumption, P, and thus Ext(P), puts some probability mass on this arm a: there exists p ∈ Ext(P) with pa > 0. Since p⋆(ν) is the unique optimal distribution of ν, we have ∆p > 0. Now, by the assumption of unbounded means in D, there exists a distribution ζa ∈ D with expectation larger than µa + ∆p/pa. By Assumption 1, there actually exists ν′a ∈ D with expectation µ′a > µa + ∆p/pa and such that KL(νa, ν′a) < +∞. We denote by ν′ the bandit problem such that ν′k = νk for all k ̸= a, and whose a-th distribution is ν′a, and claim that ν′ ∈ Alt(ν).
To see this, it only remains to show that p⋆(ν) is suboptimal for ν′; indeed, since p⋆a(ν) = 0 while ν and ν′ only differ at a for the first equality, by definition of ∆p for the second equality, and by construction of ν′a for the final inequality, $$\langle\underline{p}^{\star}(\underline{\nu}),\,\underline{\mu}'\rangle=\langle\underline{p}^{\star}(\underline{\nu}),\,\underline{\mu}\rangle=\langle\underline{p},\,\underline{\mu}\rangle+p_{a}\,\frac{\Delta_{\underline{p}}}{p_{a}}<\langle\underline{p},\,\underline{\mu}'\rangle\,.$$

## B.3 Proof Of Lemma 4

We believe that we offer a neater proof than what is usually proposed in the literature. We fix a possibly randomized UFC strategy over D given P, only picking distributions in Ext(P), and a bandit problem ν in D with a single optimal distribution p⋆(ν). We know that the correct scaling of the number of suboptimal pulls is at most logarithmic and therefore define the normalized allocations, for all p ∈ Ext(P), $$n_{T,\underline{p}}=\frac{\mathbb{E}_{\underline{\nu}}\big[N_{\underline{p}}(T)\big]}{\ln T}\,,\qquad\text{so that}\qquad\frac{R_{T}}{\ln T}=\sum_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}\Delta_{\underline{p}}\,\frac{\mathbb{E}_{\underline{\nu}}\big[N_{\underline{p}}(T)\big]}{\ln T}=\sum_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}\Delta_{\underline{p}}\,n_{T,\underline{p}}\,.\tag{20}$$ A UFC algorithm facing the problem ν will eventually focus on the unique optimal distribution p⋆(ν). Thus, most of its observations will correspond to pure actions a ∈ [K] such that p⋆a(ν) > 0, which provide no useful information to distinguish ν from a given confusing alternative problem ν′ ∈ Alt(ν).
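The last observation—that arms shared by ν and ν′ carry no discriminative information—can be made concrete for Bernoulli distributions. The helper below is an illustrative aside (ours, not part of the proof); it also checks on a grid the standard inequality kl(p, q) ⩾ p ln(1/q) − ln 2 that is invoked in this proof of Lemma 4.

```python
import math

def kl_bernoulli(p: float, q: float) -> float:
    """kl(p, q) between Bernoulli(p) and Bernoulli(q), with the 0 ln 0 = 0 convention."""
    out = 0.0
    if p > 0:
        out += p * math.log(p / q)
    if p < 1:
        out += (1 - p) * math.log((1 - p) / (1 - q))
    return out

# an arm with nu_a = nu'_a contributes nothing to the information I_T ...
assert kl_bernoulli(0.5, 0.5) == 0.0
# ... while a modified arm contributes a positive amount
assert kl_bernoulli(0.5, 0.9) > 0.0

# the standard lower bound kl(p, q) >= p ln(1/q) - ln 2
for p in (0.1, 0.5, 0.9):
    for q in (0.1, 0.5, 0.9):
        assert kl_bernoulli(p, q) >= p * math.log(1 / q) - math.log(2)
```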
A measure of the useful information gained is the Kullback-Leibler divergence $$\mathcal{I}_{T}\stackrel{\mathrm{def}}{=}\mathrm{KL}\big(\mathbb{P}_{\underline{\nu},T},\mathbb{P}_{\underline{\nu}',T}\big)\,,$$ where Pν,T and Pν′,T denote the distributions of the rewards Y1, . . . , YT obtained in the first T rounds (and of the auxiliary randomizations used) when the underlying problems are ν and ν′, respectively. Step 1: Rewriting IT . By a chain rule—see Equation (8) in Garivier et al. (2019)—for the first equality below, by an application of the tower rule for the second equality, and by grouping the distributions output by the strategy by their values in Ext(P), $$\mathcal{I}_{T}=\sum_{t=1}^{T}\mathbb{E}_{\underline{\nu}}\big[\mathrm{KL}(\nu_{A_{t}},\nu'_{A_{t}})\big]=\sum_{t=1}^{T}\mathbb{E}_{\underline{\nu}}\bigg[\sum_{a=1}^{K}p_{t,a}\,\mathrm{KL}(\nu_{a},\nu'_{a})\bigg]=\sum_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}\mathbb{E}_{\underline{\nu}}\big[N_{\underline{p}}(T)\big]\sum_{a\in[K]}p_{a}\,\mathrm{KL}(\nu_{a},\nu'_{a})\,.\tag{21}$$ Since alternative problems ν′ ∈ Alt(ν) are such that ν′a = νa when p⋆a(ν) > 0, we finally obtain $$\frac{\mathcal{I}_{T}}{\ln T}=\sum_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}n_{T,\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}p_{a}\,\mathrm{KL}(\nu_{a},\nu'_{a})=\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}n_{T,\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}p_{a}\,\mathrm{KL}(\nu_{a},\nu'_{a})\,.\tag{22}$$ Step 2: UFC entails that IT is larger than ln T in the limit. This step is a mere adaptation of the standard proof technique by Lai & Robbins (1985) and Garivier et al. (2019), with distributions in Ext(P) playing the role of arms in the mentioned references.
More formally, the crucial observation is that the regret is expressed in (2) in terms of the numbers of plays of suboptimal distributions in Ext(P). Thus, that the strategy is UFC and that p⋆(ν) is, by definition of Alt(ν), suboptimal for ν′ entail that, for all α > 0, $$\forall\,\underline{p}\neq\underline{p}^{\star}(\underline{\nu}),\quad\mathbb{E}_{\underline{\nu}}\big[N_{\underline{p}}(T)\big]=o(T^{\alpha})\,,\qquad\text{as well as}\qquad\mathbb{E}_{\underline{\nu}'}\Big[N_{\underline{p}^{\star}(\underline{\nu})}(T)\Big]=o(T^{\alpha})\,.\tag{23}$$ In particular, for all ε > 0, there exists T large enough so that $\mathbb{E}_{\underline{\nu}'}\big[N_{\underline{p}^{\star}(\underline{\nu})}(T)\big]\leqslant T^{\varepsilon}$. We denote by $\mathrm{kl}(p,q)=p\ln(p/q)+(1-p)\ln\big((1-p)/(1-q)\big)$ the Kullback-Leibler divergence between two Bernoulli distributions with parameters p and q. By the data-processing inequality for [0, 1]–valued random variables (see Section 2.1 in Garivier et al., 2019) and by the standard inequality kl(p, q) ⩾ p ln(1/q) − ln 2, for all ε > 0, for all T sufficiently large, $$\mathcal{I}_{T}=\mathrm{KL}\big(\mathbb{P}_{\underline{\nu},T},\mathbb{P}_{\underline{\nu}',T}\big)\geqslant\mathrm{kl}\Bigg(\mathbb{E}_{\underline{\nu}}\Bigg[\frac{N_{\underline{p}^{\star}(\underline{\nu})}(T)}{T}\Bigg],\,\mathbb{E}_{\underline{\nu}'}\Bigg[\frac{N_{\underline{p}^{\star}(\underline{\nu})}(T)}{T}\Bigg]\Bigg)\geqslant\underbrace{\mathbb{E}_{\underline{\nu}}\Bigg[\frac{N_{\underline{p}^{\star}(\underline{\nu})}(T)}{T}\Bigg]}_{\to1}\,\ln\Bigg(\underbrace{\frac{T}{\mathbb{E}_{\underline{\nu}'}\Big[N_{\underline{p}^{\star}(\underline{\nu})}(T)\Big]}}_{\geqslant T^{1-\varepsilon}\to\infty}\Bigg)-\ln2\,,$$ where we substituted the consequence of (23). Letting T → ∞ then ε → 0, we conclude that $$\liminf_{T\to\infty}\frac{\mathcal{I}_{T}}{\ln T}\geqslant\sup_{\varepsilon>0}\,\liminf_{T\to\infty}\frac{\ln T^{1-\varepsilon}}{\ln T}=1\,.\tag{24}$$ Remark 1.
The specificity of the setting considered appears in these first two steps: we use the UFC property on Ext(P) but pure actions At are eventually played, so that we could lower bound IT / ln T in (22) by $$\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})}}n_{T,\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}p_{a}\,\mathrm{KL}(\nu_{a},\nu'_{a})\geqslant\sum_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}n_{T,\underline{p}}\,\mathrm{KL}\left(\sum_{a\in[K]}p_{a}\,\nu_{a},\,\sum_{a\in[K]}p_{a}\,\nu'_{a}\right),$$ where the inequality holds by convexity of KL and the fact that ν′a = νa when p⋆a(ν) > 0. The right-hand side above corresponds to the measure of information gained when playing distributions pt without observing the pure actions At.

Step 3: Considering cluster points. We combine (20), (22), and (24) as follows. Let c be a cluster point of the sequence RT / ln T. If c = +∞ is the only cluster point, then RT / ln T → +∞ and the result is proved; otherwise, take a finite c. We denote by (Tm)m⩾1 an increasing sequence such that RTm/ ln Tm → c. In view of the decomposition (20) and since nTm,p ⩾ 0 and ∆p > 0 for all p ∈ Ext(P) with p ̸= p⋆(ν), all these sequences nTm,p are bounded. Hence, we may extract a subsequence (Tmk)k⩾1 from (Tm) such that all sequences nTmk,p converge as k → ∞, to limits denoted by np. This only holds for p ∈ Ext(P) with p ̸= p⋆(ν), but $c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)$ is only defined based on such vectors.
These convergences yield first, by (20), that $$\liminf_{T\to\infty}\frac{R_{T}}{\ln T}\geqslant\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}\Delta_{\underline{p}}\,n_{\underline{p}}\,.$$ These convergences also entail, together with (22) and (24), and the definition of lim inf, that, for any ν′ ∈ Alt(ν), $$\sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}n_{\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}p_{a}\,\mathrm{KL}(\nu_{a},\nu'_{a})\geqslant\liminf_{T\to+\infty}\ \sum_{\substack{\underline{p}\in\mathrm{Ext}(\mathcal{P})\\ \underline{p}\neq\underline{p}^{\star}(\underline{\nu})}}n_{T,\underline{p}}\sum_{\substack{a\in[K]\\ p_{a}^{\star}(\underline{\nu})=0}}p_{a}\,\mathrm{KL}(\nu_{a},\nu'_{a})\geqslant1\,.$$ Put differently, the limits np defined above, where p ̸= p⋆(ν), satisfy the constraints in the defining infimum of $c\big(\mathrm{Ext}(\mathcal{P}),\underline{\nu}\big)$. This concludes the proof.

## C Proof Of Proposition 1

The proof builds on the proofs for upper bounds (Appendix A) and lower bounds (Appendix B). We provide here some general considerations. The Bernoulli distributions are sub-Gaussian with parameter σ² = 1/4. The problem ν = (Ber(0), Ber(1/2), Ber(0)) considered has expectations µ = (0, 1/2, 0). The distributions p(1) = (0, 1/2, 1/2) and p(2)_δ = (δ, 0, 1 − δ) obtain the respective expected rewards $$\big\langle\underline{p}^{(1)},\,\underline{\mu}\big\rangle=\frac{1}{4}>0=\big\langle\underline{p}^{(2)}_{\delta},\,\underline{\mu}\big\rangle\,.$$ In particular, given that the polytope Pδ considered is the segment between p(1) and p(2)_δ, the distribution p(1) is the unique optimal distribution in Pδ.

## C.1 Case δ > 1/4: A ln T Regret Is Suffered

The setting of Proposition 1 satisfies the assumptions of Lemmas 4 and 5, up to considering the reduction performed in Appendix B.2. It therefore suffices to show that Alt(ν) is not empty. Given that p(1) puts a positive probability mass on arms 2 and 3, alternative problems in Alt(ν) may differ from ν only at arm 1.
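The expected rewards displayed above, and the optimality of p(1) over the whole segment Pδ, are immediate to verify numerically (an illustrative throwaway check, ours; the value δ = 0.2 is arbitrary):

```python
def dot(p, mu):
    """Expected reward <p, mu> of playing distribution p on problem mu."""
    return sum(pa * ma for pa, ma in zip(p, mu))

delta = 0.2
mu = (0.0, 0.5, 0.0)                 # expectations of nu = (Ber(0), Ber(1/2), Ber(0))
p1 = (0.0, 0.5, 0.5)                 # p^(1)
p2 = (delta, 0.0, 1.0 - delta)       # p^(2)_delta
assert dot(p1, mu) == 0.25 and dot(p2, mu) == 0.0

# P_delta is the segment between p1 and p2: p1 is optimal on all of it
for i in range(101):
    lam = i / 100
    p = tuple((1 - lam) * a + lam * b for a, b in zip(p1, p2))
    assert dot(p, mu) <= dot(p1, mu)
```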
Given that KL(Ber(0), Ber(µ′1)) < +∞ if and only if µ′1 ∈ [0, 1), we have $$\mathrm{Alt}(\underline{\nu})=\Big\{\underline{\nu}'=\big(\mathrm{Ber}(\mu'_{1}),\,\mathrm{Ber}(1/2),\,\mathrm{Ber}(0)\big):\ \ \mu'_{1}\in[0,1)\ \text{ and }\ \underline{p}^{(1)}\notin\mathrm{Opt}(\underline{\nu}')\Big\}\,.$$ Now, denoting by µ′ the expectations of ν′, and given that Pδ is a segment, we have that the condition p(1) ∈/ Opt(ν′) is equivalent to $$\big\langle\underline{p}^{(1)},\,\underline{\mu}'\big\rangle=1/4<\delta\,\mu'_{1}=\big\langle\underline{p}^{(2)}_{\delta},\,\underline{\mu}'\big\rangle\,,$$ which is equivalent to µ′1 > 1/(4δ). Therefore, $$\mathrm{Alt}(\underline{\nu})=\Big\{\underline{\nu}'=\big(\mathrm{Ber}(\mu'_{1}),\,\mathrm{Ber}(1/2),\,\mathrm{Ber}(0)\big):\ \ 1/(4\delta)<\mu'_{1}<1\Big\}\,.$$ Alt(ν) is not empty if and only if δ > 1/4, which concludes the proof.

## C.2 Case δ < 1/4: A Bounded Regret May Be Suffered

We use the Box-B algorithm with u0 = 1/2 but make sure that the upper-confidence estimates of the expectations are always smaller than the known bound 1 on the rewards, i.e., we replace Step 1 of Box B by $$V_{a}(t-1)=\min\big\{U_{a}(t-1),\,1\big\}\,,\qquad\underline{V}(t-1)=\big(V_{1}(t-1),\ldots,V_{K}(t-1)\big)\,,\qquad\underline{p}_{t}\in\operatorname*{argmax}_{\underline{p}\in\mathrm{Ext}(\mathcal{P})}\big\langle\underline{p},\,\underline{V}(t-1)\big\rangle\,;$$ this is the crucial modification for this proof, see (26) for an explanation of why it is handy. All other steps of the Box-B algorithm remain unchanged.

Step 1: Preparation and adaptation of existing proofs. We borrow several elements from the proof conducted in Appendix A.
We consider the same favorable event E(t) as therein, but only one event in terms of numbers of pulls—namely, we are only interested in arm 3: $$\mathcal{E}(t)=\left\{\text{For all }a\in[K],\quad\left|\mu_{a}-\widehat{\mu}_{a}(t)\right|\leqslant\sqrt{\frac{2\ln t}{\max\left\{N_{a}(t),1\right\}}}\right\}\qquad\text{and}\qquad\mathcal{G}(t)=\left\{N_{3}(t)\geqslant t/3\right\}.$$ Lemma 3 shows that $\mathbb{P}\big(\overline{\mathcal{E}(t)}\big)\leqslant2Kt^{-3}$ for all t ⩾ 2 > e^{1/8}. Given that $p_{t,3}\geqslant\min\big\{p^{(1)}_{3},\,p^{(2)}_{\delta,3}\big\}=1/2$ by definition of Pδ as a segment and the fact that δ < 1/4, we apply the Hoeffding–Azuma bound (18) as follows: for all t ⩾ 1, $$\mathbb{P}\big(\overline{\mathcal{G}(t)}\big)\leqslant\mathbb{P}\bigg\{N_{3}(t)-\frac{t}{2}\leqslant-\frac{t}{6}\bigg\}\leqslant\mathbb{P}\bigg\{N_{3}(t)-\sum_{s=1}^{t}p_{s,3}\leqslant-\frac{t}{6}\bigg\}\leqslant\mathrm{e}^{-t/18}\,.$$ Since δ < 1/4, there exists a threshold t0 ⩾ 2 such that $$\forall\,t\geqslant t_{0},\qquad\delta+2\sqrt{\frac{6\ln t}{t}}<\frac{1}{4}\,.$$ We show below that $$\forall\,t\geqslant t_{0},\quad\text{on }\mathcal{E}(t)\cap\mathcal{G}(t)\,,\qquad\left\langle\underline{p}^{(1)},\underline{V}(t)\right\rangle>\left\langle\underline{p}^{(2)}_{\delta},\underline{V}(t)\right\rangle,\qquad\text{thus},\qquad\underline{p}_{t+1}=\underline{p}^{(1)}\,.\tag{25}$$ We recall that p(1) is the unique optimal distribution for ν and that the gap of p(2)_δ equals 1/4. The property (25) therefore leads to the following bounded regret bound: for all T ⩾ t0, $$R_{T}=\frac{1}{4}\sum_{t=1}^{T}\mathbb{P}\Big[\underline{p}_{t}=\underline{p}^{(2)}_{\delta}\Big]\leqslant\frac{t_{0}}{4}+\frac{1}{4}\sum_{t=t_{0}}^{+\infty}\mathbb{P}\Big(\overline{\mathcal{E}(t)}\cup\overline{\mathcal{G}(t)}\Big)\leqslant\frac{t_{0}}{4}+\frac{1}{4}\sum_{t=t_{0}}^{+\infty}\big(2Kt^{-3}+\mathrm{e}^{-t/18}\big)\leqslant\frac{t_{0}}{4}+\frac{K}{2}+5\,.$$ Step 2: Proving (25).
By definition, we have $\big\langle\underline{p}^{(1)},\underline{V}(t)\big\rangle\geqslant\big\langle\underline{p}^{(1)},\underline{\mu}\big\rangle=1/4$ on E(t), and still on E(t), $$\left\langle\underline{p}^{(2)}_{\delta},\,\underline{V}(t)\right\rangle=\delta\,\underbrace{V_{1}(t)}_{\leqslant1}+\underbrace{(1-\delta)}_{\leqslant1}\,V_{3}(t)\leqslant\delta+2\sqrt{\frac{2\ln t}{\max\{N_{3}(t),1\}}}\,.\tag{26}$$ We crucially used here the boundedness of V1(t), which we upper bounded by 1. This is fortunate, as arm 1 is only played when the suboptimal distribution p(2)_δ is picked, so that N1(t) could be small and E(t) would not provide an efficient bound. Therefore, on E(t) ∩ G(t), $$\left\langle\underline{p}^{(1)},\,\underline{V}(t)\right\rangle\geqslant\frac{1}{4}\qquad\text{and}\qquad\left\langle\underline{p}^{(2)}_{\delta},\,\underline{V}(t)\right\rangle\leqslant\delta+2\sqrt{\frac{6\ln t}{t}}\,,$$ so that (25) follows by the definition of t0.

## D Proof Of Theorem 3

The proof strategy is similar to that of Theorem 1, with some differences we will highlight: we still work under the favorable event on which all the confidence bounds are correct and all arms have been picked a linear number of times. Both these events hold with high probability since the probability of picking any arm is lower bounded by some positive constant at all rounds. In that case, the upper confidence vectors U(t) are good estimators of the true mean payoff vector µ, and the distributions picked by diversity-preserving UCB get close to the optimal distribution(s). Crucially, we show in Lemma 7 that the differences between the expected payoffs of the distributions selected by UCB and the optimal payoff depend *quadratically* on the differences between U(t) and µ. This is a consequence of the curvature of the diversity-preserving set Pr. Since the error in estimating µ by U(t) is of order $\sqrt{\ln t/t}$, we obtain per-round regrets of order ln t/t, which sum up to ln² T.
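The final claim—per-round regrets of order ln t/t summing up to order ln² T—can be illustrated numerically: the partial sums of ln t/t track (ln T)²/2, which is the integral of ln x/x (an illustrative check, ours, not part of the proof):

```python
import math

def harmonic_log_sum(T: int) -> float:
    """sum_{t=2}^{T} ln(t)/t, the accumulated per-round regret rate."""
    return sum(math.log(t) / t for t in range(2, T + 1))

for T in [10_000, 1_000_000]:
    s = harmonic_log_sum(T)
    approx = math.log(T) ** 2 / 2    # the antiderivative of ln(x)/x is (ln x)^2 / 2
    assert abs(s - approx) / approx < 0.02
```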
We start with two lemmas, providing first a closed-form expression of the optimal distribution (Lemma 6) and second, deducing a quadratic upper bound on the instantaneous regret (Lemma 7). Denote by $V=\big\{\underline{\mu}\in\mathbb{R}^{K}\,\big|\,\mu_{1}=\cdots=\mu_{K}\big\}$ the subset of expected-payoff vectors with identical components. All strategies get a null regret in bandit problems ν with payoffs in V . We may therefore exclude this case in the rest of the proof. We denote by 1 the vector in R^K with components all equal to 1, and refer to the model of all distributions over R with a finite first moment as Dall.

Lemma 6. Consider Pr with r < rlim. For any bandit problem ν in Dall with means µ ∈ R^K \ V , the optimal distribution in Pr is unique, only depends on µ, and equals $$\underline{p}_{\star}(\underline{\mu})=\frac{1}{K}\mathbf{1}+\frac{r}{\|\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\|}\left(\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\right),\qquad\text{where}\qquad\mu_{\mathrm{avg}}=\frac{1}{K}\sum_{a\in[K]}\mu_{a}\,.$$ Proof. We already denote by p⋆(µ) the vector defined above and prove that it is the only optimal vector; we do so via elementary arguments.
For any p in Pr, the difference between the expected payoffs of p⋆(µ) and p equals $$\big\langle\underline{\mu},\,\underline{p}_{\star}(\underline{\mu})-\underline{p}\big\rangle=\Big\langle\underline{\mu},\,\frac{1}{K}\mathbf{1}-\underline{p}\Big\rangle+\frac{r}{\|\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\|}\,\big\langle\underline{\mu},\,\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\big\rangle=\Big\langle\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1},\,\frac{1}{K}\mathbf{1}-\underline{p}\Big\rangle+\frac{r}{\|\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\|}\,\big\langle\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1},\,\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\big\rangle$$ $$\geqslant\|\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\|\left(r-\left\|\frac{1}{K}\mathbf{1}-\underline{p}\right\|\right)\geqslant0\,,\tag{27}$$ where we used, for the second equality, that both vectors (1/K)1 − p and µ − µavg1 are orthogonal to 1 (as their components sum up to 0), and where, for the final inequality, we applied the Cauchy-Schwarz inequality: $$\Big\langle\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1},\,\frac{1}{K}\mathbf{1}-\underline{p}\Big\rangle\geqslant-\|\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\|\cdot\left\|\frac{1}{K}\mathbf{1}-\underline{p}\right\|.$$ Now, the inequality (27) is an equality if and only if µ − µavg1 and (1/K)1 − p are colinear, that is, if there exists α ∈ R such that $$\underline{p}=\frac{1}{K}\mathbf{1}+\frac{\alpha}{\|\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\|}\big(\underline{\mu}-\mu_{\mathrm{avg}}\mathbf{1}\big)\,.$$ By definition of Pr, such a p is in Pr if and only if |α| ⩽ r. Therefore, a final equality to 0 is achieved in (27) if and only if α = r, i.e., if p = p⋆(µ).

The curvature of the diversity-preserving set Pr guarantees that the best distribution varies smoothly with the expected payoffs. This in turn implies that the per-round regret of the diversity-preserving UCB strategy depends quadratically on the difference between the upper confidence bound vector U and µ—at least when U is in a neighborhood of µ.

Lemma 7. Consider Pr with r < rlim and use the notation of Lemma 6.
For any expected-payoff vector µ ∈ R^K \ V , define the function $$\phi_{\underline{\mu}}:\underline{v}\in\mathbb{R}^{K}\setminus V\longmapsto\big\langle\underline{\mu},\,\underline{p}_{\star}(\underline{v})\big\rangle\,.$$ There exists a constant Bµ such that $$\forall\,\underline{v}\in\mathbb{R}^{K}\setminus V,\qquad\phi_{\underline{\mu}}(\underline{\mu})-\phi_{\underline{\mu}}(\underline{v})\leqslant B_{\underline{\mu}}\,\|\underline{\mu}-\underline{v}\|^{2}\,.$$ With this notation, if pt is the distribution output by the diversity-preserving UCB strategy at round t, then the expected per-round regret rt suffered at time t can be expressed as $$r_{t}=\mathbb{E}\Big[\big\langle\underline{\mu},\,\underline{p}_{\star}(\underline{\mu})-\underline{p}_{t}\big\rangle\Big]=\phi_{\underline{\mu}}(\underline{\mu})-\mathbb{E}\Big[\phi_{\underline{\mu}}\big(\underline{U}(t-1)\big)\Big]\,.$$ Proof. By Lemma 6, ϕµ admits the closed-form expression $$\phi_{\underline{\mu}}:\underline{v}\in\mathbb{R}^{K}\setminus V\longmapsto\frac{1}{K}\langle\underline{\mu},\mathbf{1}\rangle+\frac{r}{\|\underline{v}-v_{\mathrm{avg}}\mathbf{1}\|}\,\langle\underline{\mu},\,\underline{v}-v_{\mathrm{avg}}\mathbf{1}\rangle\,,$$ showing that ϕµ is C³ at all points v such that v − vavg1 ̸= 0, i.e., over R^K \ V .
Moreover, by definition of p⋆(µ), the function ϕµ reaches its maximum at µ, therefore its differential at µ is null and the Taylor expansion of ϕµ around µ reads $$\phi_{\underline{\mu}}(\underline{v})=\phi_{\underline{\mu}}(\underline{\mu})+\mathcal{O}\big(\|\underline{v}-\underline{\mu}\|^{2}\big)\,.$$ Put differently, the function $$\psi_{\underline{\mu}}:\underline{v}\in\mathbb{R}^{K}\setminus V\longmapsto\frac{\phi_{\underline{\mu}}(\underline{v})-\phi_{\underline{\mu}}(\underline{\mu})}{\|\underline{v}-\underline{\mu}\|^{2}}$$ is bounded in some ε–neighborhood of µ. Outside of this neighborhood, the denominator ∥v − µ∥ is larger than ε while the numerator is bounded by $2\max_{a\in[K]}\mu_{a}$. Therefore, ψµ is bounded over the entire R^K \ V . We are now equipped to prove Theorem 3. The fact that the model D is composed of σ²–sub-Gaussian distributions is used now. We adapt several arguments already reviewed in Appendix A and therefore provide a concise proof. Proof.
Consider the favorable events $$\mathcal{E}(t)=\left\{\forall\,a\in[K],\quad|\mu_{a}-\widehat{\mu}_{a}(t)|\leqslant\sqrt{\frac{8\sigma^{2}\ln t}{\max\left\{N_{a}(t),1\right\}}}\right\}\quad\text{and}\quad\mathcal{G}'(t)=\left\{\forall\,a\in[K],\quad N_{a}(t)\geqslant\frac{t\,(r_{\mathrm{lim}}-r)}{2}\right\}\,.$$ For any t, under E(t) ∩ G′(t), by Lemma 7, $$\big\langle\underline{\mu},\,\underline{p}_{\star}(\underline{\mu})-\underline{p}_{t}\big\rangle=\phi_{\underline{\mu}}(\underline{\mu})-\phi_{\underline{\mu}}\big(\underline{U}(t-1)\big)\leqslant B_{\underline{\mu}}\,\|\underline{\mu}-\underline{U}(t-1)\|^{2}\leqslant B_{\underline{\mu}}\sum_{a\in[K]}\frac{16\sigma^{2}\ln t}{t\,(r_{\mathrm{lim}}-r)}=\frac{16KB_{\underline{\mu}}\sigma^{2}}{r_{\mathrm{lim}}-r}\,\frac{\ln t}{t}\,.$$ Therefore, taking expectations above, $$r_{t}=\mathbb{E}\big[\big\langle\underline{\mu},\,\underline{p}_{\star}(\underline{\mu})-\underline{p}_{t}\big\rangle\big]\leqslant\frac{16KB_{\underline{\mu}}\sigma^{2}}{r_{\mathrm{lim}}-r}\,\frac{\ln t}{t}+\max_{a\in[K]}\mu_{a}\bigg(\mathbb{P}\Big(\overline{\mathcal{E}(t)}\Big)+\mathbb{P}\Big(\overline{\mathcal{G}'(t)}\Big)\bigg)\,.\tag{28}$$ We already showed in (11), in the proof of Theorem 1, that $\sum_{t=1}^{+\infty}\mathbb{P}\Big(\overline{\mathcal{E}(t)}\Big)<+\infty$. Similarly, as in the argument following (17), introduce the martingales Ma,t, and note that by (5), we have $$\sum_{t=1}^{T}\mathbb{P}\Big[\overline{\mathcal{G}'(t)}\Big]\leqslant\sum_{t=1}^{T}\sum_{a\in[K]}\mathbb{P}\bigg\{M_{a,t}\leqslant-\frac{t\,(r_{\mathrm{lim}}-r)}{2}\bigg\}\,.$$ By the Hoeffding–Azuma inequality, the sum above is bounded, cf. (18). Sum (28) over t to conclude.
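As a closing sanity check on the closed form p⋆(µ) of Lemma 6, which drives the display (28) above, one can compare it against random points of Pr, reading Pr as the Euclidean ball of radius r around the uniform distribution within the simplex hyperplane (our throwaway verification; the values of µ, r, and K are arbitrary):

```python
import math
import random

def p_star(mu, r):
    """Closed-form optimal distribution of Lemma 6."""
    K = len(mu)
    avg = sum(mu) / K
    centered = [m - avg for m in mu]
    norm = math.sqrt(sum(c * c for c in centered))
    return [1.0 / K + r * c / norm for c in centered]

rng = random.Random(1)
K, r = 4, 0.05
mu = [0.1, 0.6, 0.3, 0.2]
ps = p_star(mu, r)
assert abs(sum(ps) - 1.0) < 1e-12                    # still a distribution
best = sum(m * p for m, p in zip(mu, ps))

for _ in range(1000):
    # random point of P_r: direction orthogonal to 1, scaled by at most r
    d = [rng.gauss(0, 1) for _ in range(K)]
    d_avg = sum(d) / K
    d = [x - d_avg for x in d]
    scale = rng.random() * r / math.sqrt(sum(x * x for x in d))
    p = [1.0 / K + scale * x for x in d]
    assert sum(m * q for m, q in zip(mu, p)) <= best + 1e-12
```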
Review 1:
Summary: This paper studies diversity-preserving multi-armed bandits. The authors design a UCB algorithm which enjoys bounded regret when diversity is desirable. The paper also presents regret lower bounds for the mean-unbounded cases.
Strengths and Weaknesses:
Strengths: Fairness is an important and hot topic in the machine learning area. The studied problem is interesting. The proposed algorithm achieves bounded regret when diversity is desirable.
Weaknesses:
1. The regret lower bound only works for the mean-unbounded cases, which are less common than mean-bounded cases in real applications.
2. The paper does not provide any empirical results for the proposed methods.
3. The main paper is not self-contained. I think the closed-form regret bounds should be presented in the main paper. Also, I think the authors may provide some discussion on the closed-form bound. For example, does the regret bound in Thm 1 match that of vanilla UCB when the polytope is $[0,1]^K$? (In this case, the proposed algorithm seems exactly the same as the vanilla UCB.)
Requested Changes:
1. The writing can be improved, see weakness 3.
2. In Section 3, the authors may present an example application which is mean-unbounded.
3. The paper may provide some empirical results of the proposed methods. I would like to see empirical comparisons among the proposed method, Celis et al. (2019), and Liu et al. (2022).
Broader Impact Concerns: none
==================================================
Review 2:
Summary: The paper studies the diversity-preserving K-armed bandits, a setting derived from the standard K-armed bandits in which the agent is allowed to play a distribution over the arms chosen in a properly provided set. The authors propose an optimistic regret minimization strategy that, under the assumption that the distribution set is a polytope, plays the extreme-point distributions and uses the collected rewards to update the arm statistics.
Under the subgaussian distributional assumption, the authors show that the algorithm enjoys logarithmic regret that reduces to constant under the assumption that the optimal distribution plays with non-zero probability each of the arms. A lower bound is provided matching the logarithmic regime. Furthermore, an example of the case in which the set of distributions is not a polytope is provided with a proposal on how to address it, leading to polylog regret. Strengths and Weaknesses: **Strengths** - The paper provides interesting results that are sound and quite expected, although I have not checked the proofs in detail. - The setting considered, although introduced in previous works, seems to have nice applications in the real world. **Weaknesses** - [Related Works] The presented setting has notable connections with regret minimization with expert advice and bandits with mediator feedback. These settings share with the presented one the fact that arms are not directly pulled but a distribution (possibly non-stationary) is placed in the middle. In particular, these works are related since in some cases they are able to show that constant regret is possible, as in the present paper. The authors should include them in the related works section and provide a comparative discussion. [1] Auer, Peter, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. "The nonstochastic multiarmed bandit problem." SIAM journal on computing 32, no. 1 (2002): 48-77. [2] Metelli, Alberto Maria, Matteo Papini, Pierluca D'Oro, and Marcello Restelli. "Policy optimization as online learning with mediator feedback." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 10, pp. 8958-8966. 2021. [3] Eldowa, Khaled, Nicolò Cesa-Bianchi, Alberto Maria Metelli, and Marcello Restelli. "Information-theoretic regret bounds for bandits with fixed expert advice." In 2023 IEEE Information Theory Workshop (ITW), pp. 30-35. IEEE, 2023. 
- [Technical Novelty] The obtained results are, in my opinion, quite expected and this is not necessarily an issue. For the $\log T$ regret bound, as the authors acknowledge, this can be obtained by running standard UCB on $Ext(\mathcal{P})$. For the constant regret, since the optimal distribution is played for an expected number of times that is linear in $T$, and the optimal distribution plays all the arms with a non-zero probability, we are like in an "expert setting" since playing the optimal distribution provides information for all the arms. Is this consideration correct? Anyway, I encourage the authors to revise the manuscript to highlight in the main paper the challenges of their theoretical analysis and the elements of technical novelty over the previous works, especially [2], with which I noted several elements of similarity in the analysis. - [Computing $Ext(\mathcal{P})$] Is computing $Ext(\mathcal{P})$ computationally efficient given a description of $\mathcal{P}$ based on constraints? Can the authors elaborate? - [Tightness] As far as I understand, the lower bound of Theorem 2 provides a lower bound that matches the $\log T$ dependence of the upper bound of Theorem 1. Is there an analogous result showing the tightness of the scenario in which the proposed algorithm suffers constant regret? More in general, apart from the dependence on $T$, all lower and upper bounds do not provide the explicit dependence on (i) suboptimality gaps; (ii) number of arms/number of points in $Ext(\mathcal{P})$; (iii) constants. This does not allow one to fully judge their tightness. - [Presentation] Section 3 seems to include several heterogeneous contributions. It starts with the lower bound but then the role and significance of Corollary 1 and Proposition 1 are not fully clear. Can the authors clarify? - [Non-polytope $\mathcal{P}$] The case in which $\mathcal{P}$ is a general convex set (not a polytope) is just addressed by considering a specific example of $\mathcal{P}$.
It is not clear how much this example is general or how these considerations/methods can be generalized beyond the example. **Minor Issues** - The paper lacks a Conclusion section - Assumption 1: I think the second $E(\zeta)$ should be replaced with $E(\zeta')$ Requested Changes: Please refer to Weaknesses. Broader Impact Concerns: None. ================================================== Review 3: Summary: This paper gives upper and lower bounds for classes of multi-armed bandit problems with constraints on the action distributions. They discuss how such constraints can be used to describe various conditions that could be desirable in applications. Strengths and Weaknesses: # Strengths * The problems posed are interesting and well-motivated. * The writing is clear and well-structured. * The results (if they can be shown to be true) are quite interesting, especially the bounded regret. # Weaknesses ## Technical Correctness The biggest weakness is that there is a mistake very early in the proof of Theorem 1. Namely, the inclusion of events described in (6) is incorrect. While many of the other steps in the analysis are correct, most results depend on (6) at some point, and so it is unclear which results in the paper are true. We can see the incorrectness of (6) by a simple example. Say $K=2$ and $\mathcal{P}$ is just the standard probability simplex. (Any polytope which contains $\begin{bmatrix}0 \\\\1 \end{bmatrix}$ will suffice.) For any $\Delta >0$, $\sigma>0$, we can find $T$, $t$, and $n$ such that * $1\le t \le T$ * $n\ge \frac{65 K\sigma^2}{\Delta}\ln T$ * $t\ge n/2$ * $\sqrt{8\sigma^2 \ln t}>\frac{\Delta}{2}$ For example, if $\Delta=\sigma=1$, $T=1000$, $n=899$, and $t=450$ satisfy all the constraints. Say that $\underline{p}=\begin{bmatrix}0 \\\\ 1\end{bmatrix}$, $N_1(t)=0$ and $N_2(t)=t$. 
Then the event on the left of (6) holds: $$ \bigcap_{a \in [K]}\{N_a(t)\ge np_a/2\} $$ The event on the right of (6) does not, since the right is $$ \mathcal{E}'(t)=\left\{\forall a\in [K],\ \sqrt{\frac{8\sigma^2 \ln t}{\max\{N_a(t),1\}}} < \frac{\Delta}{2}\right\}, $$ and the inequality fails for $a=1$. ## Technical Precision Beyond the error above, there are other instances of technical imprecision. For example, as written, **Assumption 1** holds trivially, since we could always just take $\zeta=\zeta'$. Presumably something else was meant. ## Potentially Outdated Literature Review While the problem is set up nicely, and context is given, the references are generally fairly old. From what I can gather from the text, this manuscript is based on a preprint written no later than 2022 (since it states that Liu et al. (2022) came after the preprint), and the bulk of the references come from prior to this time. My guess would be that many relevant follow-up works have already appeared. While the cited references may be the most relevant, giving an up-to-date discussion would be helpful. I purposely did not look to see if the discussed works had more recent follow-up papers in order to preserve anonymity. Requested Changes: The main thing to do is fix the technical errors. A secondary, but very important change, is to do a more thorough literature review to see what has been done more recently related to these problems. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper revisits the framework for diversity-preserving $K$-armed bandits. It presents a UCB algorithm tailored for this setting, offering theoretical guarantees on its performance. The authors provide regret upper and lower bounds and discuss the application of their approach beyond simple polytopes.
The paper presents valuable theoretical contributions to the field of diversity-preserving $K$-armed bandits. While there are some weaknesses, particularly in "empirical validation and completeness of the main paper", the authors' responses indicate a willingness to address these issues. With the suggested revisions, the paper has the potential to make a significant contribution to the field. ==================================================
# ECG Representation Learning with Multi-Modal EHR Data

Sravan Kumar LalamB,1 Hari Krishna Kunderu1 Shayan Ghosh1 Harish Kumar A1 Ashim Prasad1 Francisco Lopez-Jimenez3 Samir Awasthi1,2 Zachi I Attia3 Samuel J. Asirvatham3 Paul A. Friedman3 Rakesh Barve1,2 Melwin BabuB,1

1 Nference Inc. 2 Anumana Inc. 3 *Mayo Clinic, USA*

BCorresponding authors: {sravankumar.l@nference.net, melwin@nference.net}

Reviewed on OpenReview: *https://openreview.net/forum?id=UxmvCwuTMG*

## Abstract

Electronic Health Records (EHRs) provide a rich source of medical information across different modalities such as electrocardiograms (ECG), structured EHRs (sEHR), and unstructured EHRs (text). Inspired by the fact that many cardiac and non-cardiac diseases influence the behavior of the ECG, we leverage structured EHRs and unstructured EHRs from multiple sources by pairing them with ECGs, and propose a set of three new multi-modal contrastive learning models that combine ECG, sEHR, and text modalities. The performance of these models is compared against different baseline models such as supervised learning models trained from scratch with random weights initialization, and self-supervised learning models trained only on ECGs. We pre-train the models on a large proprietary dataset of about 9 *million* ECGs from around 2.4 *million* patients and evaluate the pre-trained models on various downstream tasks such as classification, zero-shot retrieval, and out-of-distribution detection involving the prediction of various heart conditions using ECG waveforms as input, and demonstrate that the models presented in this work show significant improvements compared to all baseline models.

## 1 Introduction

Electronic Health Records (EHRs) are generated for every patient encounter or event and have become increasingly available in recent years. These are multi-modal in nature and capture rich phenotypic information of the patients over time in the form of structured EHRs and unstructured EHRs.
Structured EHRs (denoted sEHR) contain information about diagnoses, procedures, medication prescriptions, lab tests, vitals, and more, while unstructured EHRs encompass clinical notes, radiology images, pathology images, echocardiogram videos, time series ECG signals, etc. Recently, multi-modal contrastive learning methods applied to radiology and pathology images by pairing with the corresponding medical reports to learn medical image representations (Zhang et al., 2022; Huang et al., 2021; Boecking et al., 2022; Bannur et al., 2023; Lu et al., 2023) have shown promising results on downstream tasks such as classification, image-text retrieval, etc. These methods generally have two stages: (i) In stage I, the model is pre-trained on large unlabelled data to learn generic representations by maximizing the alignment between embeddings of different modalities in latent space. (ii) In stage II, the model is fine-tuned on a task-specific labeled dataset by transferring the knowledge from the pre-trained model. However, ECG representation learning by pairing with EHRs via multi-modal contrastive learning is less explored. Uni-modal contrastive learning similar to Chen et al. (2020a) has been applied to the ECG domain to learn ECG representations (Kiyasseh et al., 2021; Diamant et al., 2022; Gopal et al., 2021; Mehari & Strodthoff, 2022; Oh et al., 2022), but these methods lack the ability to compare different modalities in latent space using similarity metrics like cosine similarity for use in zero-shot transfer learning. Also, contrastive learning using multi-modal data produces high-quality representations, as it exploits information from multiple sources and extracts semantics by aligning the various modalities. ECG is a simple, non-invasive test that records the electrical activity of the heart and is helpful in diagnosing heart conditions and patient monitoring.
In recent years, deep learning techniques have been employed on ECG data to predict various heart conditions, even in cases where diagnostic criteria using ECGs have not been firmly established in clinical practice (Tison et al., 2019; Hannun et al., 2019; Galloway et al., 2019; Attia et al., 2019a;b;c; Ko et al., 2020; Adedinsewo et al., 2020; Christopoulos et al., 2020; Yao et al., 2021; Siontis et al., 2021; Cohen-Shelly et al., 2021; Bos et al., 2021; Grogan et al., 2021; Ahn et al., 2022). Gopal et al. (2021) discusses that supervised learning models of this nature demand extensive, high-quality datasets with precise annotations to achieve robust generalization on real-world data. Unfortunately, within the healthcare domain, acquiring such labeled datasets is challenging, as they are scarce, expensive, and time-consuming to obtain due to the necessity of trained physicians for the annotations. Motivated by the following facts: (i) multi-modal contrastive learning applied to the general domain images (Radford et al., 2021; Jia et al., 2021; Goel et al., 2022) as well as the medical domain images (Zhang et al., 2022; Huang et al., 2021; Boecking et al., 2022; Bannur et al., 2023; Lu et al., 2023) by pairing images with text data has demonstrated promising results; (ii) ECG signals contain information related to both cardiovascular and non-cardiovascular diseases (Venn et al., 2022); (iii) EHR data capture rich phenotypic information of the patients over time, we address the challenges described previously by leveraging EHRs. We pair structured EHR and unstructured EHR data with ECGs to learn ECG representations via multi-modal contrastive learning. 
In particular, we utilize diagnosis codes, procedure codes, and medication prescriptions from the structured EHR category and text data from various sources such as ECG reports, ECHO reports, radiology reports, pathology reports, microbiology reports, clinical notes, and surgical notes from the unstructured EHR category. Our contributions are summarised as follows: 1. We propose **sEHR-BERT**, a BERT model pre-trained to encode sEHR modality for use in multimodal contrastive learning models. 2. We propose a set of three multi-modal contrastive learning models that combine sEHR, ECG, and text modalities to learn ECG representations: - **ECG-sEHR**: A model that combines ECG and sEHR modalities, - **ECG-Text**: A model that combines ECG and text modalities, - **sEHR-ECG-Text**: A model that combines sEHR, ECG, and text modalities. 3. We then compare the effectiveness of the pre-trained models on downstream tasks such as linear classification, fine-tuning, zero-shot retrieval, and out-of-distribution detection with different baseline models including supervised learning models trained from scratch with random initialization and current state-of-the-art (SOTA) ECG-only self-supervised learning models. ## 2 Related Work 2.1 Contrastive Learning For General Domain Images Self-supervised learning (SSL) using contrastive learning methods has emerged as a powerful pre-training technique to learn generic representations of the data. These methods learn representations either (i) by pulling the embeddings of similar pairs (positive pairs) together and pushing the embeddings of dissimilar pairs (negative pairs) apart in the latent embedding space or (ii) by contrasting cluster assignments. 
Some of the notable works in computer vision include InfoNCE (Oord et al., 2018), SimCLR (Chen et al., 2020a), SimCLRv2 (Chen et al., 2020b), MoCo (He et al., 2020), SupCon (Khosla et al., 2020), SEER (Goyal et al., 2021), PIRL (Misra & Maaten, 2020), SwAV (Caron et al., 2020), and PCL (Li et al., 2021). These methods come under the category of uni-modal contrastive learning as they utilize only one type of data modality, i.e., images. Multi-modal contrastive learning by pairing general domain images with the corresponding image captions to learn image-text embeddings jointly in the shared space (Radford et al., 2021; Jia et al., 2021; Goel et al., 2022) has shown impressive results on downstream tasks such as zero-/few-shot learning. ## 2.2 Contrastive Learning For Medical Domain Images Uni-modal contrastive learning based on SimCLR has been applied to medical domain images (Azizi et al., 2021; 2022; Ciga et al., 2022; Wang et al., 2022; Srinidhi & Martel, 2021; Sowrirajan et al., 2021) to learn medical image representations. Motivated by some of the initial works in uni-modal contrastive learning, ConVIRT (Zhang et al., 2022) proposed a multi-modal contrastive learning method by pairing chest radiology images with the corresponding radiology reports. Huang et al. (2021) extended ConVIRT for learning local and global representations by contrasting image sub-regions with words in the medical report. Boecking et al. (2022); Bannur et al. (2023) made improvements in the radiology domain by leveraging longitudinal medical images, building a domain-specific language model for radiology reports, and adding Masked Language Modeling (MLM) loss to contrastive loss during joint vision-language pre-training. Lu et al. (2023) applied multi-modal contrastive learning by pairing histopathology whole slide images with pathology reports. Our multi-modal contrastive learning models are largely inspired by ConVIRT (Zhang et al., 2022). 
## 2.3 Contrastive Learning for Time Series ECG Signals

SimCLR and the other aforementioned uni-modal contrastive learning models were developed for use in computer vision. However, they have been adopted for use with time series ECG signals in subsequent works (Kiyasseh et al., 2021; Diamant et al., 2022; Gopal et al., 2021; Mehari & Strodthoff, 2022; Oh et al., 2022). The principal difference between the CLOCS (Kiyasseh et al., 2021), PCLR (Diamant et al., 2022), and 3KG (Gopal et al., 2021) models lies in the way the positive pairs are created. CLOCS treats consecutive non-overlapping segments and/or leads of the same ECG as positive pairs. PCLR treats two ECGs of the same patient as positive pairs. 3KG constructs positive pairs by applying spatial augmentations such as rotation and scaling in vectorcardiogram (VCG) space after converting ECG to VCG, followed by temporal augmentations such as time masking in ECG space after converting VCG back to ECG. Cheng et al. (2020) introduced adversarial training to address inter-subject variability while learning ECG representations using contrastive learning. Mehari & Strodthoff (2022) adapted SimCLR (Chen et al., 2020a), CPC (Oord et al., 2018), and SwAV (Caron et al., 2020) to the ECG domain to learn ECG representations. Recently, Oh et al. (2022) proposed a pre-training method that combines CMSC from Kiyasseh et al. (2021) and wav2vec 2.0 (Baevski et al., 2020) from the speech domain to learn local and global contextual ECG representations. To the best of our knowledge, there is only one work that combines ECGs with other modalities. Raghu et al. (2022) developed a SimCLR-like contrastive learning model that was pre-trained using multi-modal clinical time series data such as ECG signals and structured time series data (labs and vitals). The model utilizes 18-dimensional structured time series data from metabolic panel, blood pressures, heart rate, and SpO2.
The model is shown to have achieved improved or comparable performance over training from scratch on two downstream tasks: (i) elevated mPAP and (ii) 24-hour mortality rate. To the best of our knowledge, we are the first to fully utilize the large landscape of electronic health records to learn ECG representations.

## 3 Methods

## 3.1 sEHR-BERT: Structured EHR Model Pre-Training

Several methods have been proposed to model structured EHRs based on BERT (Devlin et al., 2018): BEHRT (Li et al., 2020), Med-BERT (Rasmy et al., 2021), CEHR-BERT (Pang et al., 2021), and CEHR-GAN-BERT (Poulain et al., 2022). However, none of these pre-trained models are publicly available to use in our work. Moreover, the vocabulary in our dataset may not be aligned with the vocabulary of the mentioned models. So we developed sEHR-BERT, a model pre-trained to encode sEHR data and produce sEHR representations based on the BERT architecture (Devlin et al., 2018). We used a vocabulary of size 28593, constructed from ICD diagnosis codes, ICD procedure codes, and medication prescriptions. These are collectively referred to as "medical codes" in this work. The input to the model is a sequence of medical codes sorted in ascending order based on the medical codes' timestamps. Each code is processed by adding its corresponding medical code embedding, time embedding, and medical code type embedding and sent to the transformer encoder. Time embeddings are constructed so that codes falling within non-overlapping 7-day windows share a common embedding. Medical code type embeddings are divided into different categories (i.e., diseases, symptoms, procedures, special tokens, etc.). We used a custom BERT model with the number of layers, hidden size, and number of self-attention heads set to 5, 320, and 5, respectively. This model has 15M parameters.
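The 7-day time-bucketing behind the time embeddings described above can be sketched in a few lines. The helper below is our own illustration, not the paper's implementation; codes whose timestamps fall in the same non-overlapping 7-day window receive the same embedding index:

```python
from datetime import date

def time_bucket(code_date: date, first_date: date, window_days: int = 7) -> int:
    """Index of the non-overlapping window (relative to the patient's first
    medical code) that this code falls into. Codes sharing an index would
    share one time embedding. Illustrative helper, not the paper's code."""
    return (code_date - first_date).days // window_days

# Days 0-6 after the first code share bucket 0; day 7 starts bucket 1.
assert time_bucket(date(2020, 1, 7), date(2020, 1, 1)) == 0
assert time_bucket(date(2020, 1, 8), date(2020, 1, 1)) == 1
```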
We initialize the model weights randomly and follow the BERT (Devlin et al., 2018) pre-training strategy, i.e., Masked Language Modeling (MLM), to learn the representations of the structured EHR sequences. We minimize the MLM loss given by $\mathcal{L} = -\frac{1}{K}\sum_{i=1}^{K} \log p\left(D_{m_i} \mid D_{\tilde{M}}; \Theta\right)$, where $\Theta$ are the parameters of the model, $D = \{D_0, D_1, ..., D_N\}$ is the sequence of medical codes of length $N$, $M = \{m_0, m_1, ..., m_K\}$ are the indices of the masked medical codes, and $D_{\tilde{M}}$ denotes the set of unmasked medical codes. During training, the medical codes are masked with a probability of 15%, and the model is trained with the AdamW (Loshchilov & Hutter, 2019) optimizer and a batch size of 512 for 100 epochs. We set an initial learning rate of 5e-4, and the learning rate is reduced by a factor of 2 if the validation loss stops decreasing continuously for 2 epochs.

## 3.2 sEHR-ECG-Text: Joint sEHR, ECG, and Text Pre-Training

In this section, we describe the pre-training of the sEHR-ECG-Text model, where we pair the ECG modality with sEHR and text modalities to jointly learn multi-modal representations.

## 3.2.1 Preliminaries

MultiModal Versatile Networks (MMV) (Alayrac et al., 2020) apply contrastive learning to multi-modal data including video, audio, and text under the assumption that the video and audio modalities are more granular than the text modality. MMV discusses that applying contrastive loss in *shared space* (where all modalities are embedded into a single shared vector space) may not maintain specificities, as it implicitly assumes that all modalities have equal granularity. To address this, MMV proposes to learn two separate embedding spaces: a fine-grained space where video and audio are matched and a coarse-grained space where text is matched with the video and audio domains. This method is referred to as *fine and coarse spaces (FAC)*. We hypothesize that sEHR, ECG, and text modalities do not exhibit equal granularity.
Moreover, the ECGs are paired with sEHR and text data within a specific time window surrounding the ECG acquisition timestamp, and tokens are trimmed based on the input length accepted by the corresponding encoders, as we describe in Sections 4.2.2 and 4.2.3. This implies that the same level of information might not be maintained between sEHR and text, so we adopt the FAC framework from MMV in our sEHR-ECG-Text model. We describe the methodology in detail in the following sections.

## 3.2.2 Notation

Let $x \in \mathcal{X}$ be an instance defined by an instantiation of different modalities $\mathcal{M}$: $x = \{x_m\}, m \in \mathcal{M}$. In this study, we employ three modalities: ECG $x_e \in \mathcal{X}_e$, sEHR $x_s \in \mathcal{X}_s$, and text $x_t \in \mathcal{X}_t$. Specifically, $x_s$, $x_e$, and $x_t$ represent the sequence of sEHR codes, ECG waveform samples, and sequence of text tokens, respectively. Let $E_m : \mathcal{X}_m \to \mathbb{R}^{d_m}$ be a parameterized modality-specific encoder that takes as input an instance $x_m$ from modality $m$ and produces a modality-specific representation of dimension $d_m$. These modality-specific representations are embedded into a shared space $\Omega_z \subset \mathbb{R}^{d_z}$, where $z$ represents the list of modalities that we embed into this space. For instance, $z = es$ denotes the joint ECG-sEHR space $\Omega_{es}$, $z = et$ the joint ECG-Text space $\Omega_{et}$, and $z = set$ the joint sEHR-ECG-Text space $\Omega_{set}$. In this shared space, we maximize or minimize the alignment between different modalities using the contrastive loss objective. The projection head $P_{m \to z} : \mathbb{R}^{d_m} \to \mathbb{R}^{d_z}$ is used to embed modality-specific representations $v_m = E_m(x_m)$ into the shared space $\Omega_z$, and we denote the resulting vector as $v_{m,z} = P_{m \to z}(E_m(x_m))$, which signifies the representation of the input modality $x_m$ in the shared space $\Omega_z$.
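To make the notation concrete, here is a toy sketch of the composition $v_{m,z} = P_{m \to z}(E_m(x_m))$ for the ECG modality, with fixed random linear maps standing in for the real encoder (a 1-D CNN in the paper) and projection head (an MLP in the paper); all dimensions are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the modality-specific encoder E_e and the projection head
# P_{e->es}: here both are fixed random linear maps, purely illustrative.
d_e, d_z = 256, 128                    # encoder / shared-space dims (assumed)
W_enc = rng.normal(size=(1000, d_e))   # "encoder" for a length-1000 ECG input
W_proj = rng.normal(size=(d_e, d_z))   # "projection head" into Omega_es

x_e = rng.normal(size=(1000,))         # a raw ECG waveform (assumed length)
v_e = x_e @ W_enc                      # modality-specific representation v_e
v_e_es = v_e @ W_proj                  # v_{e,es}: embedding in shared space

assert v_e.shape == (d_e,)
assert v_e_es.shape == (d_z,)
```

The same pattern applies to sEHR and text, with their own encoders and one projection head per target shared space.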
## 3.2.3 Data Encoding

To obtain the modality-specific representations, we use a convolutional neural network (CNN) customized to one dimension ($E_e$) for the ECG modality, the pre-trained sEHR-BERT ($E_s$) as described in Section 3.1 for the sEHR modality, and the pre-trained GatorTron (Yang et al., 2022) ($E_t$) for the text modality. Global average pooling is applied at the final layer for all three encoders to obtain the representations. We use multi-layer perceptrons (MLP) for the projection heads.

Figure 1: Overview of the sEHR-ECG-Text model pre-training. The model takes as input three modalities $\mathcal{M}$: $x = \{x_m\}, m \in \mathcal{M}$, i.e., ECG ($x_e$), sEHR ($x_s$), and text ($x_t$). $E_m(\cdot)$ and $v_m$ denote the modality-specific encoder and modality-specific representation, respectively. $\Omega_z$ denotes the shared embedding space, where $z$ represents the list of modalities that we embed into this space. For instance, $z = es$ denotes the joint ECG-sEHR space $\Omega_{es}$. $P_{m \to z}(\cdot)$ denotes the projection head used to embed the modality-specific representation $v_m$ into the shared space $\Omega_z$. $v_{m,z}$ denotes the representation of the input modality $x_m$ in the shared space $\Omega_z$. The model is trained by applying contrastive loss between ECG and sEHR in the fine-grained ECG-sEHR space ($\Omega_{es}$) and between ECG and text in the coarse-grained sEHR-ECG-Text space ($\Omega_{set}$).

## 3.2.4 Multi-Modal Contrastive Objective

As mentioned before, we use the FAC framework inspired by MMV (Alayrac et al., 2020), where ECG and sEHR are compared in the fine-grained joint ECG-sEHR space $\Omega_{es}$, while ECG is compared with text in the coarse-grained joint sEHR-ECG-Text space $\Omega_{set}$. Given a minibatch containing $N$ instances $\{x^i\}_{i=1}^{N}$, we denote $v^i_{m,z} = P_{m \to z}(E_m(x^i_m))$ as the representation of the modality $m$ in the shared space $\Omega_z$ for the $i$-th instance. Following Zhang et al. (2022), we define the contrastive objective bidirectionally.
For example, in the case of the contrastive loss between ECG and sEHR domains, the loss is directed from ECG to sEHR and vice versa. In the context of the contrastive objective, we consider the $N$ pairs of ECG-sEHR $(x_e, x_s)$ as positive, while the remaining $N^2 - N$ pairs are treated as negative. The same approach is applied to the contrastive loss between ECG and text domains. Let $sim(x, y) = x^{T}y/\|x\|\|y\|$ denote the cosine similarity between two vectors $x, y \in \mathbb{R}^{d_z}$, $\mathcal{L}_{es}$ be the contrastive loss between ECG and sEHR, $\mathcal{L}_{et}$ be the contrastive loss between ECG and text, $\lambda_{es}$ and $\lambda_{et}$ be scalar weights $\in [0, 1]$, and $\tau \in \mathbb{R}^{+}$ be the temperature parameter. The combination of $\mathcal{L}_{es}$ and $\mathcal{L}_{et}$ gives the overall loss, denoted by $\mathcal{L}$ (Equation 3), which we aim to minimize.
$$\mathcal{L}_{es}=-\frac{1}{N}\sum_{i=1}^{N}\left(\lambda_{es}\log\frac{\exp\left(sim(v_{e,es}^{i},v_{s,es}^{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(sim(v_{e,es}^{i},v_{s,es}^{k})/\tau\right)}+(1-\lambda_{es})\log\frac{\exp\left(sim(v_{s,es}^{i},v_{e,es}^{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(sim(v_{s,es}^{i},v_{e,es}^{k})/\tau\right)}\right)\tag{1}$$
$$\mathcal{L}_{et}=-\frac{1}{N}\sum_{i=1}^{N}\left(\lambda_{et}\log\frac{\exp\left(sim(v_{e,set}^{i},v_{t,set}^{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(sim(v_{e,set}^{i},v_{t,set}^{k})/\tau\right)}+(1-\lambda_{et})\log\frac{\exp\left(sim(v_{t,set}^{i},v_{e,set}^{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(sim(v_{t,set}^{i},v_{e,set}^{k})/\tau\right)}\right)\tag{2}$$
$$\mathcal{L}=\mathcal{L}_{es}+\mathcal{L}_{et}\tag{3}$$
Figure 1 illustrates the pre-training of the sEHR-ECG-Text model. See Appendix B.1 for more details about implementation and training. We also present a *shared space* versus *FAC spaces* ablation in Appendix E. We provide the architecture of the shared space sEHR-ECG-Text model in Appendix B.2.
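A minimal NumPy sketch of the bidirectional contrastive objective of Equations 1-2: row $i$ of each batch forms the positive pair, all other batch pairings are negatives, and both modalities are assumed to be already projected into one common shared space. This is illustrative only, not the paper's training code:

```python
import numpy as np

def bidirectional_contrastive_loss(v_a, v_b, lam=0.5, tau=0.1):
    """Bidirectional InfoNCE-style objective in the spirit of Eqs. 1-2.

    v_a, v_b: (N, d) arrays of two modalities in a common shared space;
    row i of v_a and row i of v_b are the positive pair. Illustrative sketch.
    """
    a = v_a / np.linalg.norm(v_a, axis=1, keepdims=True)
    b = v_b / np.linalg.norm(v_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                                 # cosine sim / tau
    log_p_ab = np.diag(logits) - np.log(np.exp(logits).sum(axis=1))  # a -> b
    log_p_ba = np.diag(logits) - np.log(np.exp(logits).sum(axis=0))  # b -> a
    return -np.mean(lam * log_p_ab + (1 - lam) * log_p_ba)

rng = np.random.default_rng(0)
v = rng.normal(size=(8, 16))
# Perfectly aligned positive pairs yield a much lower loss than random pairs.
assert bidirectional_contrastive_loss(v, v) < bidirectional_contrastive_loss(v, rng.normal(size=(8, 16)))
```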
## 3.3 ECG-sEHR: Joint ECG and sEHR Pre-Training

In the ECG-sEHR model, we pair ECG signals with structured EHRs, as we describe in more detail in Sections 4.2.1 and 4.2.2. We apply the contrastive objective between ECG and sEHR modalities in the joint ECG-sEHR embedding space ($\Omega_{es}$), where we minimize the contrastive loss given in Equation 1. We provide the ECG-sEHR model architecture in Appendix B.2.

## 3.4 ECG-Text: Joint ECG and Text Pre-Training

In the ECG-Text model, we pair ECG signals with clinical text data from unstructured EHRs, as we describe in more detail in Section 4.2.3. ECG and text embeddings are jointly learned by applying the contrastive objective between ECG and text modalities in the joint ECG-Text embedding space ($\Omega_{et}$). We provide the ECG-Text model architecture in Appendix B.2. Following the notation introduced in Section 3.2, we minimize the contrastive loss given by
$$\mathcal{L}_{et}=-\frac{1}{N}\sum_{i=1}^{N}\left(\lambda_{et}\log\frac{\exp\left(sim(v_{e,et}^{i},v_{t,et}^{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(sim(v_{e,et}^{i},v_{t,et}^{k})/\tau\right)}+(1-\lambda_{et})\log\frac{\exp\left(sim(v_{t,et}^{i},v_{e,et}^{i})/\tau\right)}{\sum_{k=1}^{N}\exp\left(sim(v_{t,et}^{i},v_{e,et}^{k})/\tau\right)}\right)\tag{4}$$
In this model, note that ECG and text are compared in the joint ECG-Text space ($\Omega_{et}$), which differs from the sEHR-ECG-Text model, where ECG and text are compared in the sEHR-ECG-Text space ($\Omega_{set}$).

## 4 Experiments and Results

## 4.1 Dataset Splits Setup

We used EHR data of around 2.4 *million* patients from Mayo Clinic, USA, consisting of around 9 *million* ECGs to create datasets for pre-training and downstream tasks. These EHRs have undergone a rigorous de-identification process, guaranteeing the utmost privacy and data security. These records have received approval from the Institutional Review Board (IRB) of Mayo Clinic, USA, ensuring compliance with ethical guidelines and regulations.
Consequently, there are no privacy or data security issues associated with the use of these de-identified EHRs. We initially split all the patients into global train, validation, and test sets in a 60%, 5%, and 35% ratio, which are then used to create pre-training datasets and disease cohorts for downstream classification tasks. In particular, the train, validation, and test sets for pre-training and classification tasks are created by drawing the EHRs from the global train, validation, and test patients, respectively. This approach ensures that we can effectively evaluate the quality of representations on downstream tasks, as the data of validation and test patients is not seen during the pre-training phase. Consequently, all datasets across tasks have train, validation, and test split percentages roughly close to 60%, 5%, and 35%, respectively.

## 4.2 Pre-Training Datasets

In this section, we describe the creation of the following datasets for pre-training the proposed models: (i) sEHR sequences for pre-training the sEHR-BERT model, (ii) ECG-sEHR $(x_e, x_s)$ pairs for pre-training the ECG-sEHR model, and (iii) ECG-Text $(x_e, x_t)$ pairs for pre-training the ECG-Text model. For sEHR-ECG-Text model pre-training, we construct triplets $(x_s, x_e, x_t)$ by considering $(x_e, x_s)$ and $(x_e, x_t)$ pairs that have the ECG paired with both sEHR and text data. See Appendix A.1 for details on the number of instances and patients used in the different pre-training models.

## 4.2.1 Dataset for sEHR-BERT Pre-Training

We use ICD-9 (International Classification of Diseases, Ninth Revision), ICD-10 (International Classification of Diseases, Tenth Revision), CPT (Current Procedural Terminology), HCPS (Healthcare Common Procedures Coding System) codes, and medication prescriptions to create the dataset for sEHR-BERT pre-training. Since ICD-9 codes differ from ICD-10 codes, but their corresponding text descriptions are similar, we map ICD-9 to ICD-10 to maintain consistent phenotypic information.
ICD-10 diagnosis codes are shortened to their first three characters, as keeping four or more characters provides little to no extra information for large-scale pre-training. For example, the corresponding text descriptions of the ICD-10 diagnosis codes I26.0 and I26.9 are *pulmonary embolism with acute cor pulmonale* and *pulmonary embolism without acute cor pulmonale* respectively, but both come under a common disease category, i.e., *pulmonary embolism* (I26). Shortened ICD-10 diagnosis codes, ICD-10 procedure codes, CPT codes, HCPCS codes, and medication prescriptions associated with at least 50 patients are included in the vocabulary, resulting in a vocabulary size of 28,593. We present a short ICD-10 vs. full ICD-10 diagnosis codes ablation, keeping the codes from other sources consistent, in Appendix E. To create the sEHR sequence for sEHR-BERT model pre-training, we randomly select one sequence of up to 512 consecutive medical codes from a given patient's timeline. On average, the sequence length of the resulting dataset is 168.

## 4.2.2 ECG-sEHR Pairs Generation

To create the ECG-sEHR (xe, xs) pairs, we first select an ECG of a given patient, xe, and consider all the shortened ICD-10 diagnosis codes, ICD-10 procedure codes, CPT codes, HCPCS codes, and medication prescriptions associated with that patient within one year before and one year after the acquisition timestamp of that ECG. The medical codes restricted to this time range are arranged sequentially to form the sEHR input sequence xs. The average length of the constructed sequences is 121.

## 4.2.3 ECG-Text Pairs Generation

ECGs are paired with unstructured EHR text data derived from various sources, including ECG reports, ECHO reports, pathology reports, radiology reports, microbiology reports, clinical notes, and surgical notes, collectively referred to as "patient notes" in this work.
Despite the GatorTron-base (Yang et al., 2022) model's capacity to handle sequences of up to 512 tokens, we were limited to working with a maximum of 400 tokens due to computing resource constraints. Given the abundance of text data within patient notes, we implemented a filtering process to extract only the most relevant information. Specifically, using an in-house NLP model, we selected patient notes that contained entities from a predefined list of biomedical entity types, such as diseases, symptoms, procedures, medications, biomarkers, and gene mutations. The next step involves pairing ECGs with the selected patient notes. We employed two distinct methods for this purpose: report concatenation and entity concatenation.

![6_image_0.png](6_image_0.png)

Figure 2: Linkage of the sEHR and text modalities with the ECG modality within specific time windows: one year for the sEHR modality, one month for report concatenation, and one year for entity concatenation in the case of the text modality.

Report concatenation: For each patient's ECG, we concatenate all selected patient notes that are available within one month around the ECG acquisition timestamp. This approach resulted in an average sequence length of 354 tokens after tokenization.

Entity concatenation: Aligning ECGs with patient information spanning a more extended timeframe is advantageous for understanding ECG patterns more effectively. To capture this broader spectrum of medical insights while adhering to the token constraint, we concatenate only the identified entities from the patient notes within a one-year timeframe around the ECG acquisition timestamp. This resulted in an average sequence length of 266 tokens after tokenization. Figure 2 illustrates how the sEHR and text modalities are linked with the ECG modality within specific time windows around the ECG timestamp.
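The two pairing strategies above can be sketched as follows. This is a minimal illustration, not the production pipeline: the `time`, `text`, and `entities` fields stand in for note timestamps, raw note text, and the entities extracted by the in-house NLP model.

```python
from datetime import datetime, timedelta

def pair_ecg_with_notes(ecg_time, notes, window_days, use_entities):
    """Build the text side of an ECG-Text pair from notes falling within
    +/- window_days of the ECG acquisition timestamp."""
    lo = ecg_time - timedelta(days=window_days)
    hi = ecg_time + timedelta(days=window_days)
    selected = [n for n in notes if lo <= n["time"] <= hi]
    if use_entities:
        # entity concatenation: keep only the extracted entities (one-year window)
        return " ".join(e for n in selected for e in n["entities"])
    # report concatenation: concatenate the full note text (one-month window)
    return " ".join(n["text"] for n in selected)
```

Report concatenation corresponds to a one-month window with `use_entities=False`; entity concatenation corresponds to a one-year window with `use_entities=True`.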
We use entity concatenation for the main results as it yielded better results than report concatenation. We also present the report concatenation vs. entity concatenation ablation in Appendix E.

## 4.3 Classification Datasets

In this section, we provide the details of the classification datasets that are used in the linear classification and fine-tuning tasks. We evaluate the pre-trained models on both internal and external datasets.

## 4.3.1 Internal Datasets

We target six cardiac diseases whose diagnostic criteria using ECGs haven't been established in clinical practice, i.e., either the patterns to identify these diseases from the ECG are not known or the ECG is not the gold standard for definitive diagnosis. These include coronary atherosclerosis, myocarditis, cardiac amyloidosis, pulmonary hypertension (PH), low left ventricular ejection fraction (low LVEF, i.e., LVEF ≤ 40%), and atrial fibrillation in normal sinus rhythm (AFib in NSR). These diseases are diagnosed by other means, and the diagnostic information is available in EHRs. For example, to diagnose PH, an invasive right heart catheterization (RHC) procedure is performed, and to identify low LVEF, an echocardiogram test is performed. We utilize EHRs to associate ECGs with these diseases and generate labels. See Appendix A.2 for a summary of the disease labeling process, the number of ECGs, and the number of patients used.

## 4.3.2 External Datasets

We also evaluate all pre-trained models on two publicly available datasets: (i) **PhysioNet2020** (Alday et al., 2020), which consists of a collection of six 12-lead ECG datasets with varying signal lengths and sampling rates, and (ii) **Chapman** (Zheng et al., 2020), which contains 10-second long 12-lead ECGs (see Appendix A.3 for more details). The diseases in these datasets are commonly diagnosed by physicians directly using ECGs, unlike the diseases in our proprietary internal classification datasets.
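Because the external recordings vary in length and sampling rate, evaluation requires bringing each record to a common sampling rate and taking fixed-length crops. A minimal sketch of such preprocessing follows; the function name and the use of SciPy's FFT-based resampling are our assumptions, not necessarily the original implementation:

```python
import numpy as np
from scipy.signal import resample

def resample_and_crop(ecg, fs, target_fs=500, crop_sec=5):
    """Resample a (leads, samples) ECG from fs to target_fs, then take
    non-overlapping crop_sec-second crops; each crop inherits the label
    of the original record."""
    n_target = int(round(ecg.shape[1] * target_fs / fs))
    ecg = resample(ecg, n_target, axis=1)
    crop_len = target_fs * crop_sec
    n_crops = ecg.shape[1] // crop_len  # the trailing remainder is dropped
    return [ecg[:, i * crop_len:(i + 1) * crop_len] for i in range(n_crops)]
```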
To replicate the state-of-the-art results on the PhysioNet2020 dataset presented by 3KG (Gopal et al., 2021), we followed their detailed procedure: (i) merge some of the conditions from the list of 27 conditions due to their similarity, i.e., complete right bundle branch block and right bundle branch block, premature atrial contraction and supraventricular premature beats, premature ventricular contractions and ventricular premature beats, and 1st-degree atrioventricular block and prolonged PR interval, and evaluate on 23 distinct classes; (ii) resample the ECG signals to 500Hz; (iii) take non-overlapping 5-second long crops from each ECG record exhaustively and associate those with the label of the original record; (iv) split the dataset into 80%, 10%, and 10% for training, validation, and testing respectively; (v) train a 23-class multi-label classification model. For the Chapman dataset, in line with Kiyasseh et al. (2021), we merge the 11 cardiac arrhythmia conditions of the dataset into 4 major classes and split the dataset into 60%, 20%, and 20% for training, validation, and testing respectively.

## 4.4 Baseline Models

Random initialization models. We train binary classification models on all individual diseases from scratch, using the same ECG encoder that was used during pre-training with randomly initialized weights.

ECG-only contrastive learning models. We compare our models with the current state-of-the-art ECG-only self-supervised learning models. In particular, we compare against the 3KG (Gopal et al., 2021), CLOCS(CMSC) (Kiyasseh et al., 2021), and PCLR (Diamant et al., 2022) models. For a fair comparison, we pre-train all three models using the same global splits that we used for the multi-modal contrastive pre-training, but with only the ECG signal as input, and we use the same ECG encoder that we used while pre-training the multi-modal contrastive learning models.
## 4.5 Downstream Tasks and Results

In this section, we evaluate the pre-trained models' transfer learning capabilities and compare them with different baseline methods on various downstream tasks such as classification, zero-shot retrieval, and out-of-distribution (OOD) detection.

## 4.5.1 Classification Tasks

We evaluate the pre-trained models on linear classification and fine-tuning tasks. For linear classification, we assess representation quality by extracting embeddings from the pre-trained ECG encoder, with the ECG waveform as input, and training logistic regression models on various cardiovascular diseases using both internally created and publicly available datasets, as outlined in Section 4.3. In the fine-tuning tasks, we add a classification head (MLP) on top of the pre-trained ECG encoder and fine-tune the entire network. We compare our pre-trained models with the various baseline models described in Section 4.4. One of the most useful properties of pre-trained models is the data efficiency they provide to downstream tasks, i.e., consistent performance with reduced training data. This is very valuable when a large amount of labeled data is not available due to the low prevalence of a disease or is too expensive to procure. To demonstrate this, we create different fractions (1%, 10%, 100%) of the training set while maintaining the original disease prevalence. For low-data diseases such as coronary atherosclerosis, myocarditis, and cardiac amyloidosis, we drop the 1% split due to the small dataset sizes. We use AUROC and AUPRC as our evaluation metrics. We conduct each experiment using five random seeds and report the mean and standard deviation. For the 1% and 10% splits, we conducted experiments with five distinct fractional splits derived from the original training split (100%).

Results and Discussion. Table 1 shows the linear classification results on external datasets.
Table 2 shows classification results on the coronary atherosclerosis, myocarditis, and cardiac amyloidosis diseases (low-data), and Table 3 shows classification results on the pulmonary hypertension, low LVEF, and AFib in NSR diseases (high-data). We observed the following findings: (i) Linear classifiers trained using representations obtained from our pre-trained models consistently outperform all baseline models by large margins across different fractions for all diseases. (ii) Fine-tuned classification models initialized with our pre-trained models' weights outperform all baseline models by a large margin in low-data environments and a small margin in high-data environments. (iii) Classification models trained with only 10% of the training data using our pre-trained models produce results that are as good as or better than those obtained by training on the entire dataset with baseline models. This demonstrates the effectiveness of our pre-trained models in achieving data efficiency. (iv) When utilizing the complete training dataset on the linear classification task, our ECG-Text model achieves an AUROC score of 0.915 on PhysioNet2020, surpassing the SOTA performance reported in Gopal et al. (2021) (3KG) by 2.5% (0.915 vs. 0.890). Additionally, on the Chapman dataset, it achieves an AUROC score of 0.990, surpassing the SOTA performance reported in Kiyasseh et al. (2021) (CLOCS) by 3.1% (0.990 vs. 0.959). It's worth noting that, under the identical pre-training settings of Section 4.4, we surpass 3KG by 4.3% (0.915 vs. 0.872) and CLOCS by 4.4% (0.990 vs. 0.946). (v) We hypothesize that the main reason for the poor performance of the ECG-only contrastive learning models on internal datasets is their exclusive dependence on ECGs for learning representations, i.e., comparing different instances of ECG data. This may not be sufficient to learn the complex patterns of various medical conditions.
In contrast, our methods align ECGs with EHR data, providing rich contextual information about a patient's health history, including diagnoses, procedures, medications, and more. This approach is beneficial for learning ECG patterns more effectively.

Comparison between ECG-sEHR and ECG-Text models. The ECG-sEHR model demonstrates superior performance on internal datasets, whereas the ECG-Text model excels on external datasets. We hypothesize that this difference arises from the fact that external datasets, such as PhysioNet2020 and Chapman, primarily comprise diseases that are commonly diagnosed from ECGs (i.e., arrhythmias, conduction blocks, etc.). These diagnoses are well-documented in the textual modality, particularly within ECG reports. As a result, the ECG-Text model performs best on these external datasets. In contrast, our internal datasets contain diseases for which labels are derived from EHR data. These labels are better captured by the sEHR modality. Therefore, the ECG-sEHR model outperforms the ECG-Text model when applied to internal datasets.

Table 1: Linear classification results (AUROC, mean and standard deviation over 5 runs with different random seeds) for the PhysioNet2020 (23 classes) and Chapman (4 classes) datasets. Results within 95% confidence intervals of the best result are shown in **bold**.

| Method | PhysioNet2020 1% | 10% | 100% | Chapman 1% | 10% | 100% |
|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | |
| Random Init. | 0.684 ± 0.024 | 0.842 ± 0.006 | 0.913 ± 0.006 | 0.552 ± 0.031 | 0.964 ± 0.003 | 0.987 ± 0.002 |
| *ECG-only SSL* | | | | | | |
| 3KG | 0.774 ± 0.001 | 0.837 ± 0.003 | 0.872 ± 0.000 | 0.934 ± 0.008 | 0.968 ± 0.001 | 0.984 ± 0.000 |
| CLOCS(CMSC) | 0.770 ± 0.005 | 0.836 ± 0.002 | 0.864 ± 0.000 | 0.876 ± 0.019 | 0.936 ± 0.001 | 0.946 ± 0.000 |
| PCLR | 0.721 ± 0.003 | 0.798 ± 0.003 | 0.834 ± 0.001 | 0.722 ± 0.007 | 0.821 ± 0.004 | 0.872 ± 0.000 |
| *Our models* | | | | | | |
| sEHR-ECG-Text | 0.813 ± 0.005 | 0.882 ± 0.003 | 0.911 ± 0.000 | 0.957 ± 0.004 | 0.979 ± 0.001 | 0.985 ± 0.000 |
| ECG-sEHR | 0.788 ± 0.005 | 0.866 ± 0.002 | 0.896 ± 0.000 | 0.937 ± 0.010 | 0.969 ± 0.000 | 0.976 ± 0.000 |
| ECG-Text | 0.820 ± 0.003 | 0.887 ± 0.001 | 0.915 ± 0.000 | 0.974 ± 0.002 | 0.987 ± 0.001 | 0.990 ± 0.000 |

Table 2: Results (AUROC, mean and standard deviation over 5 runs with different random seeds) for coronary atherosclerosis, myocarditis, and cardiac amyloidosis classification tasks: (a) linear classification, (b) fine-tuned classification. Results within 95% confidence intervals of the best result are shown in **bold**. See Appendix D for AUPRC metrics.

(a) Linear classification

| Method | Coronary atherosclerosis 10% | 100% | Myocarditis 10% | 100% | Cardiac amyloidosis 10% | 100% |
|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | |
| Random Init. | 0.772 ± 0.012 | 0.831 ± 0.004 | 0.775 ± 0.019 | 0.868 ± 0.003 | 0.912 ± 0.004 | 0.945 ± 0.001 |
| *ECG-only SSL* | | | | | | |
| 3KG | 0.722 ± 0.008 | 0.786 ± 0.000 | 0.699 ± 0.022 | 0.808 ± 0.000 | 0.869 ± 0.009 | 0.906 ± 0.000 |
| CLOCS(CMSC) | 0.743 ± 0.004 | 0.801 ± 0.000 | 0.662 ± 0.021 | 0.783 ± 0.000 | 0.876 ± 0.006 | 0.918 ± 0.000 |
| PCLR | 0.761 ± 0.004 | 0.825 ± 0.000 | 0.718 ± 0.022 | 0.818 ± 0.000 | 0.898 ± 0.006 | 0.928 ± 0.000 |
| *Our models* | | | | | | |
| sEHR-ECG-Text | 0.840 ± 0.003 | 0.890 ± 0.000 | 0.860 ± 0.019 | 0.901 ± 0.001 | 0.934 ± 0.007 | 0.960 ± 0.000 |
| ECG-sEHR | 0.836 ± 0.011 | 0.891 ± 0.000 | 0.859 ± 0.011 | 0.896 ± 0.000 | 0.932 ± 0.006 | 0.959 ± 0.000 |
| ECG-Text | 0.821 ± 0.006 | 0.875 ± 0.000 | 0.785 ± 0.010 | 0.881 ± 0.000 | 0.922 ± 0.006 | 0.949 ± 0.000 |

(b) Fine-tuned classification

| Method | Coronary atherosclerosis 10% | 100% | Myocarditis 10% | 100% | Cardiac amyloidosis 10% | 100% |
|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | |
| Random Init. | 0.772 ± 0.012 | 0.831 ± 0.004 | 0.775 ± 0.019 | 0.868 ± 0.003 | 0.912 ± 0.004 | 0.945 ± 0.001 |
| *ECG-only SSL* | | | | | | |
| 3KG | 0.751 ± 0.007 | 0.823 ± 0.007 | 0.725 ± 0.008 | 0.853 ± 0.015 | 0.900 ± 0.007 | 0.945 ± 0.002 |
| CLOCS(CMSC) | 0.782 ± 0.005 | 0.815 ± 0.009 | 0.716 ± 0.024 | 0.857 ± 0.013 | 0.912 ± 0.006 | 0.948 ± 0.001 |
| PCLR | 0.812 ± 0.004 | 0.828 ± 0.005 | 0.722 ± 0.032 | 0.853 ± 0.018 | 0.924 ± 0.003 | 0.951 ± 0.001 |
| *Our models* | | | | | | |
| sEHR-ECG-Text | 0.880 ± 0.003 | 0.888 ± 0.005 | 0.862 ± 0.005 | 0.905 ± 0.003 | 0.948 ± 0.003 | 0.961 ± 0.002 |
| ECG-sEHR | 0.881 ± 0.003 | 0.892 ± 0.001 | 0.866 ± 0.008 | 0.902 ± 0.003 | 0.949 ± 0.001 | 0.961 ± 0.002 |
| ECG-Text | 0.871 ± 0.002 | 0.880 ± 0.001 | 0.857 ± 0.008 | 0.892 ± 0.004 | 0.940 ± 0.002 | 0.959 ± 0.001 |

Table 3: Results (AUROC, mean and standard deviation over 5 runs with different random seeds) for pulmonary hypertension, low LVEF, and AFib in NSR classification tasks: (a) linear classification, (b) fine-tuned classification. Results within 95% confidence intervals of the best result are shown in **bold**. See Appendix D for AUPRC metrics.

(a) Linear classification

| Method | PH 1% | 10% | 100% | Low LVEF 1% | 10% | 100% | AFib in NSR 1% | 10% | 100% |
|---|---|---|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | | | | |
| Random Init. | 0.805 ± 0.008 | 0.887 ± 0.005 | 0.927 ± 0.002 | 0.857 ± 0.006 | 0.919 ± 0.002 | 0.944 ± 0.000 | 0.834 ± 0.013 | 0.896 ± 0.002 | 0.922 ± 0.001 |
| *ECG-only SSL* | | | | | | | | | |
| 3KG | 0.779 ± 0.018 | 0.862 ± 0.002 | 0.874 ± 0.000 | 0.846 ± 0.016 | 0.902 ± 0.002 | 0.914 ± 0.000 | 0.824 ± 0.009 | 0.866 ± 0.000 | 0.870 ± 0.000 |
| CLOCS(CMSC) | 0.782 ± 0.021 | 0.857 ± 0.002 | 0.872 ± 0.000 | 0.864 ± 0.011 | 0.905 ± 0.002 | 0.916 ± 0.000 | 0.818 ± 0.009 | 0.862 ± 0.000 | 0.868 ± 0.000 |
| PCLR | 0.845 ± 0.016 | 0.894 ± 0.001 | 0.907 ± 0.000 | 0.894 ± 0.006 | 0.927 ± 0.000 | 0.936 ± 0.000 | 0.865 ± 0.007 | 0.898 ± 0.000 | 0.904 ± 0.000 |
| *Our models* | | | | | | | | | |
| sEHR-ECG-Text | 0.900 ± 0.005 | 0.932 ± 0.001 | 0.939 ± 0.000 | 0.911 ± 0.013 | 0.941 ± 0.001 | 0.951 ± 0.000 | 0.902 ± 0.008 | 0.928 ± 0.001 | 0.931 ± 0.000 |
| ECG-sEHR | 0.899 ± 0.007 | 0.933 ± 0.001 | 0.940 ± 0.000 | 0.910 ± 0.014 | 0.942 ± 0.001 | 0.951 ± 0.000 | 0.902 ± 0.007 | 0.930 ± 0.001 | 0.933 ± 0.000 |
| ECG-Text | 0.889 ± 0.006 | 0.924 ± 0.001 | 0.931 ± 0.000 | 0.901 ± 0.014 | 0.938 ± 0.001 | 0.947 ± 0.000 | 0.891 ± 0.008 | 0.923 ± 0.000 | 0.925 ± 0.000 |

(b) Fine-tuned classification

| Method | PH 1% | 10% | 100% | Low LVEF 1% | 10% | 100% | AFib in NSR 1% | 10% | 100% |
|---|---|---|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | | | | |
| Random Init. | 0.805 ± 0.008 | 0.887 ± 0.005 | 0.927 ± 0.002 | 0.857 ± 0.006 | 0.919 ± 0.002 | 0.944 ± 0.000 | 0.834 ± 0.013 | 0.896 ± 0.002 | 0.922 ± 0.001 |
| *ECG-only SSL* | | | | | | | | | |
| 3KG | 0.800 ± 0.010 | 0.891 ± 0.002 | 0.925 ± 0.001 | 0.868 ± 0.007 | 0.922 ± 0.002 | 0.945 ± 0.000 | 0.842 ± 0.009 | 0.900 ± 0.001 | 0.922 ± 0.001 |
| CLOCS(CMSC) | 0.833 ± 0.002 | 0.892 ± 0.003 | 0.927 ± 0.001 | 0.893 ± 0.002 | 0.926 ± 0.002 | 0.945 ± 0.001 | 0.849 ± 0.005 | 0.897 ± 0.002 | 0.920 ± 0.001 |
| PCLR | 0.865 ± 0.010 | 0.911 ± 0.002 | 0.933 ± 0.001 | 0.905 ± 0.002 | 0.937 ± 0.002 | 0.949 ± 0.000 | 0.874 ± 0.006 | 0.906 ± 0.002 | 0.922 ± 0.001 |
| *Our models* | | | | | | | | | |
| sEHR-ECG-Text | 0.916 ± 0.004 | 0.934 ± 0.001 | 0.938 ± 0.003 | 0.933 ± 0.002 | 0.939 ± 0.003 | 0.949 ± 0.001 | 0.909 ± 0.008 | 0.919 ± 0.002 | 0.926 ± 0.001 |
| ECG-sEHR | 0.917 ± 0.005 | 0.935 ± 0.001 | 0.940 ± 0.001 | 0.932 ± 0.001 | 0.939 ± 0.002 | 0.949 ± 0.000 | 0.914 ± 0.005 | 0.920 ± 0.003 | 0.928 ± 0.001 |
| ECG-Text | 0.907 ± 0.003 | 0.926 ± 0.001 | 0.934 ± 0.001 | 0.927 ± 0.004 | 0.938 ± 0.001 | 0.948 ± 0.001 | 0.901 ± 0.008 | 0.916 ± 0.003 | 0.926 ± 0.002 |
It's worth noting that the sEHR-ECG-Text model demonstrates strong generalization across all internal datasets and achieves performance comparable to the ECG-Text model on the external datasets, i.e., PhysioNet2020 and Chapman, as it is trained with both the sEHR and text modalities. Furthermore, it offers the advantage of comparing different modalities for retrieval tasks.

## 4.5.2 Retrieval Tasks

Following Zhang et al. (2022), we also evaluate the pre-trained models on two zero-shot retrieval tasks: (i) zero-shot ECG-ECG retrieval and (ii) zero-shot Text-ECG retrieval. We used data only from the global test split to create the queries and candidates for the retrieval tasks. For a given query, we rank the candidates by computing the cosine similarity between the representations of the query and the candidates obtained from the pre-trained encoders. For the Text-ECG retrieval task, we obtain ECG and text embeddings from the shared ECG-Text embedding space Ωet. We report the precision@k metric for k = 100, 500, and 1000, which represents the percentage of the top k ranked candidates that are relevant to the query.

Zero-shot ECG-ECG Retrieval. We take 1000 different ECGs as search queries for each of the 41 cardiovascular conditions that are based on ECG reports. For every condition, we select 100,000 candidate ECGs, of which 10,000 are classified as positive for the condition and 90,000 as negative. The query ECGs and the positive candidate ECGs are disjoint.

Zero-shot Text-ECG Retrieval. We take 1000 distinct ECG reports as search queries for each of the 41 cardiovascular conditions. For each condition, we take 100,000 candidate ECGs, of which 10,000 show the condition and 90,000 have no connection to it. We also make sure that no ECG corresponding to the 1000 distinct ECG report queries is chosen as a candidate for the positive set.
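The ranking and metric just described can be sketched as follows (the function name is ours; embeddings are assumed to come from the pre-trained encoders):

```python
import numpy as np

def precision_at_k(query_emb, cand_embs, cand_is_positive, ks=(100, 500, 1000)):
    """Rank candidates by cosine similarity to the query embedding and
    return the fraction of relevant candidates among the top k."""
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    order = np.argsort(-(c @ q))  # candidate indices, descending similarity
    return {k: float(np.mean(cand_is_positive[order[:k]])) for k in ks}
```

In the setup above, a random ranking over 10,000 positives among 100,000 candidates yields a precision of 10% at every k.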
Table 4 shows the zero-shot ECG-ECG retrieval and Text-ECG retrieval results. Our ECG-Text model outperforms the random-guess baseline and the ECG-only contrastive learning models by a large margin on both tasks.

Table 4: Zero-shot ECG-ECG retrieval and Text-ECG retrieval results. The *random* category results are from random guesses. P@k denotes precision@k.

| Method | ECG-ECG P@100 | P@500 | P@1000 | Text-ECG P@100 | P@500 | P@1000 |
|---|---|---|---|---|---|---|
| Random | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 | 10.00 |
| PCLR | 38.41 | 35.34 | 33.74 | - | - | - |
| 3KG | 40.35 | 37.34 | 35.74 | - | - | - |
| CLOCS | 41.13 | 37.63 | 35.82 | - | - | - |
| ECG-Text | 55.13 | 49.47 | 46.33 | 73.02 | 66.92 | 63.58 |

## 4.5.3 Out-of-Distribution Detection

It is observed that representations learned via self-supervised learning techniques help to better distinguish between *in-distribution* (IND) and *out-of-distribution* (OOD) datasets. We demonstrate this using representations obtained from our ECG-sEHR model to differentiate between two disparate ECG datasets. We take the proprietary ECG pulmonary hypertension (PH) cohort as the IND dataset and Holter ECGs (ECGs recorded continuously over 24 hours or longer) from the open-source St Petersburg INCART 12-lead Arrhythmia Database (Tihonenko et al., 2008) as the OOD dataset. Non-overlapping 10-second long segments are taken from the Holter ECGs and resampled to 500Hz to be consistent with the ECGs from the IND PH dataset. We use the *relative Mahalanobis distance* (RMD) (Ren et al., 2021) method, which is based on the Mahalanobis distance of embeddings from the distribution of the nearest predicted class, to determine whether the data is in-distribution or out-of-distribution.
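A minimal sketch of an RMD score under these assumptions (class-conditional Gaussians with a shared covariance, a single background Gaussian fit to all training embeddings, and a higher score indicating more OOD; function names are ours, and this is an illustration of the idea rather than the exact implementation):

```python
import numpy as np

def _mahalanobis(x, mean, precision):
    # squared Mahalanobis distance of each row of x from the Gaussian (mean, precision)
    d = x - mean
    return np.einsum("ij,jk,ik->i", d, precision, d)

def rmd_scores(train_emb, train_labels, test_emb):
    """Relative Mahalanobis distance (after Ren et al., 2021): distance to
    the nearest class-conditional Gaussian minus the distance under a
    background Gaussian; higher scores suggest out-of-distribution inputs."""
    # background Gaussian over all training embeddings
    prec0 = np.linalg.pinv(np.cov(train_emb, rowvar=False))
    d0 = _mahalanobis(test_emb, train_emb.mean(axis=0), prec0)
    # class-conditional Gaussians with a shared covariance
    classes = np.unique(train_labels)
    means = {c: train_emb[train_labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([train_emb[train_labels == c] - means[c] for c in classes])
    prec = np.linalg.pinv(np.cov(centered, rowvar=False))
    dk = np.stack([_mahalanobis(test_emb, means[c], prec) for c in classes])
    return dk.min(axis=0) - d0
```

Thresholding these scores at a chosen significance level then yields the rejection rates reported below.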
We compare the rejection rates obtained using representations extracted from the ECG encoder of the pre-trained ECG-sEHR model with those obtained from the penultimate layer of a PH binary classifier trained from scratch on the PH cohort. The rejection rate at different significance levels is much higher for the ECG-sEHR model. The results are given in Table 5, which clearly shows that generic ECG representations are better at detecting out-of-distribution data than disease-specific representations.

Table 5: Out-of-distribution detection results using ECG representations obtained from the PH disease model and generic ECG representations obtained from the ECG-sEHR model. *sig.* denotes significance level.

| Metric | ECG-sEHR | PH |
|---|---|---|
| Rejection at 1% sig. (%) | 13.94 | 10.35 |
| Rejection at 5% sig. (%) | 49.67 | 29.87 |
| IND vs. OOD AUROC | 0.757 | 0.620 |

## 5 Conclusion

Our work introduces a series of three multi-modal contrastive learning models. These models leverage both structured and unstructured EHRs to produce high-quality ECG representations. We have demonstrated that our pre-trained models outperform randomly initialized models and other ECG-only contrastive learning models by a wide margin on classification and retrieval tasks. Specifically, we perform the classification tasks using ECGs on cardiovascular diseases whose definitive diagnoses are obtained from more expensive and/or invasive tests in clinical settings. This is a significant breakthrough, as ECG tests are widely available, noninvasive, and less expensive. Furthermore, our ECG representations have been shown to excel at detecting out-of-distribution data when compared to disease-specific representations.

## 6 Future Work

In this work, we make use of both structured EHRs and unstructured EHR text data to learn ECG representations.
However, there are additional modalities present in unstructured EHRs such as images (MRI scans, CT scans, X-rays, and histopathology images related to cardiology), videos (echocardiograms/heart ultrasounds), and time-series signals (heart and lung sounds), which can provide even more meaningful information through multi-modal contrastive learning. Another interesting future work would be the integration of federated learning frameworks to leverage multi-institutional medical data. This approach aims to capture a more diverse range of patient information, leading to enhanced ECG representation learning. While the disease models presented in this study undergo training and testing using real-world datasets, it is of utmost importance to conduct clinical validation across a diverse set of health systems before deploying them. This ensures that the models are equitable, unbiased, and trustworthy. We hope our work will serve as an inspiration for future endeavors in harnessing multi-modal EHR data to learn robust ECG representations.

## 7 Acknowledgements

We would like to thank Dr. Rickey E. Carter for insightful discussion and valuable feedback on this study.

## References

Demilade Adedinsewo, Rickey E Carter, Zachi Attia, Patrick Johnson, Anthony H Kashou, Jennifer L Dugan, Michael Albus, Johnathan M Sheele, Fernanda Bellolio, Paul A Friedman, et al. Artificial intelligence-enabled ecg algorithm to identify patients with left ventricular systolic dysfunction presenting to the emergency department with dyspnea. *Circulation: Arrhythmia and Electrophysiology*, 13(8):e008437, 2020.

Joseph C Ahn, Zachi I Attia, Puru Rattan, Aidan F Mullan, Seth Buryska, Alina M Allen, Patrick S Kamath, Paul A Friedman, Vijay H Shah, Peter A Noseworthy, et al. Development of the ai-cirrhosis-ecg score: an electrocardiogram-based deep learning model in cirrhosis. *The American Journal of Gastroenterology*, 117(3):424–432, 2022.
Jean-Baptiste Alayrac, Adrià Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, and Andrew Zisserman. Self-Supervised MultiModal Versatile Networks. In *NeurIPS*, 2020. Erick A Perez Alday, Annie Gu, Amit J Shah, Chad Robichaux, An-Kwok Ian Wong, Chengyu Liu, Feifei Liu, Ali Bahrami Rad, Andoni Elola, Salman Seyedi, et al. Classification of 12-lead ecgs: the physionet/computing in cardiology challenge 2020. *Physiological measurement*, 41(12):124003, 2020. Zachi I Attia, Suraj Kapa, Francisco Lopez-Jimenez, Paul M McKie, Dorothy J Ladewig, Gaurav Satam, Patricia A Pellikka, Maurice Enriquez-Sarano, Peter A Noseworthy, Thomas M Munger, et al. Screening for cardiac contractile dysfunction using an artificial intelligence–enabled electrocardiogram. *Nature medicine*, 25(1):70–74, 2019a. Zachi I Attia, Suraj Kapa, Xiaoxi Yao, Francisco Lopez-Jimenez, Tarun L Mohan, Patricia A Pellikka, Rickey E Carter, Nilay D Shah, Paul A Friedman, and Peter A Noseworthy. Prospective validation of a deep learning electrocardiogram algorithm for the detection of left ventricular systolic dysfunction. Journal of cardiovascular electrophysiology, 30(5):668–674, 2019b. Zachi I Attia, Peter A Noseworthy, Francisco Lopez-Jimenez, Samuel J Asirvatham, Abhishek J Deshmukh, Bernard J Gersh, Rickey E Carter, Xiaoxi Yao, Alejandro A Rabinstein, Brad J Erickson, et al. An artificial intelligence-enabled ecg algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. *The Lancet*, 394(10201):861–867, 2019c. Shekoofeh Azizi, Basil Mustafa, Fiona Ryan, Zachary Beaver, Jan Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, et al. Big self-supervised models advance medical image classification. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 3478–3488, 2021. 
Shekoofeh Azizi, Laura Culp, Jan Freyberg, Basil Mustafa, Sebastien Baur, Simon Kornblith, Ting Chen, Patricia MacWilliams, S Sara Mahdavi, Ellery Wulczyn, et al. Robust and efficient medical imaging with self-supervision. *arXiv preprint arXiv:2205.09723*, 2022.

Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. *Advances in neural information processing systems*, 33:12449–12460, 2020.

Shruthi Bannur, Stephanie Hyland, Flora Liu, Fernando Pérez-García, Maximilian Ilse, Daniel Coelho de Castro, Benedikt Boecking, Harshita Sharma, Kenza Bouzid, Anja Thieme, Anton Schwaighofer, Maria Teodora Wetscherek, Matthew Lungren, Aditya Nori, Javier Alvarez-Valle, and Ozan Oktay. Learning to exploit temporal structure for biomedical vision-language processing. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2023.

Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel Coelho de Castro, Anton Schwaighofer, Stephanie Hyland, Maria Teodora Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez-Valle, Hoifung Poon, and Ozan Oktay. Making the most of text semantics to improve biomedical vision-language processing. In *The European Conference on Computer Vision (ECCV)*, October 2022.

J Martijn Bos, Zachi I Attia, David E Albert, Peter A Noseworthy, Paul A Friedman, and Michael J Ackerman. Use of artificial intelligence and deep neural networks in evaluation of patients with electrocardiographically concealed long qt syndrome from the surface 12-lead electrocardiogram. *JAMA cardiology*, 6(5):532–538, 2021.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. *Advances in neural information processing systems*, 33:9912–9924, 2020.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton.
A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020a.

Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. *Advances in neural information processing systems*, 33:22243–22255, 2020b.

Joseph Y Cheng, Hanlin Goh, Kaan Dogrusoz, Oncel Tuzel, and Erdrin Azemi. Subject-aware contrastive learning for biosignals. *arXiv preprint arXiv:2007.04871*, 2020.

Georgios Christopoulos, Jonathan Graff-Radford, Camden L Lopez, Xiaoxi Yao, Zachi I Attia, Alejandro A Rabinstein, Ronald C Petersen, David S Knopman, Michelle M Mielke, Walter Kremers, et al. Artificial intelligence–electrocardiography to predict incident atrial fibrillation: A population-based study. *Circulation: Arrhythmia and Electrophysiology*, 13(12):e009355, 2020.

Ozan Ciga, Tony Xu, and Anne Louise Martel. Self supervised contrastive learning for digital histopathology. Machine Learning with Applications, 7:100198, 2022.

Michal Cohen-Shelly, Zachi I Attia, Paul A Friedman, Saki Ito, Benjamin A Essayagh, Wei-Yin Ko, Dennis H Murphree, Hector I Michelena, Maurice Enriquez-Sarano, Rickey E Carter, et al. Electrocardiogram screening for aortic valve stenosis using artificial intelligence. *European heart journal*, 42(30):2885–2896, 2021.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.

Nathaniel Diamant, Erik Reinertsen, Steven Song, Aaron D Aguirre, Collin M Stultz, and Puneet Batra. Patient contrastive learning: A performant, expressive, and practical approach to electrocardiogram modeling. *PLoS Computational Biology*, 18(2):e1009862, 2022.
Conner D Galloway, Alexander V Valys, Jacqueline B Shreibati, Daniel L Treiman, Frank L Petterson, Vivek P Gundotra, David E Albert, Zachi I Attia, Rickey E Carter, Samuel J Asirvatham, et al. Development and validation of a deep-learning model to screen for hyperkalemia from the electrocardiogram. *JAMA cardiology*, 4(5):428–436, 2019. Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A. Rossi, Vishwa Vinay, and Aditya Grover. CyCLIP: Cyclic contrastive language-image pretraining. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. Bryan Gopal, Ryan Han, Gautham Raghupathi, Andrew Ng, Geoff Tison, and Pranav Rajpurkar. 3KG: Contrastive learning of 12-lead electrocardiograms using physiologically-inspired augmentations. In *Machine Learning for Health*, pp. 156–167. PMLR, 2021. Priya Goyal, Mathilde Caron, Benjamin Lefaudeux, Min Xu, Pengchao Wang, Vivek Pai, Mannat Singh, Vitaliy Liptchinsky, Ishan Misra, Armand Joulin, et al. Self-supervised pretraining of visual features in the wild. *arXiv preprint arXiv:2103.01988*, 2021. Martha Grogan, Francisco Lopez-Jimenez, Michal Cohen-Shelly, Angela Dispenzieri, Zachi I Attia, Omar F Abou Ezzedine, Grace Lin, Suraj Kapa, Daniel D Borgeson, Paul A Friedman, et al. Artificial intelligence–enhanced electrocardiogram for the early detection of cardiac amyloidosis. In *Mayo Clinic Proceedings*, volume 96, pp. 2768–2778. Elsevier, 2021. Awni Y Hannun, Pranav Rajpurkar, Masoumeh Haghpanahi, Geoffrey H Tison, Codie Bourn, Mintu P Turakhia, and Andrew Y Ng. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. *Nature medicine*, 25(1):65–69, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 9729–9738, 2020. Shih-Cheng Huang, Liyue Shen, Matthew P Lungren, and Serena Yeung. GLoRIA: A multimodal global-local representation learning framework for label-efficient medical image recognition. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 3942–3951, 2021. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In *International Conference on Machine Learning*, pp. 4904–4916. PMLR, 2021. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. *Advances in neural information processing systems*, 33:18661–18673, 2020. Dani Kiyasseh, Tingting Zhu, and David A Clifton. CLOCS: Contrastive learning of cardiac signals across space, time, and patients. In *International Conference on Machine Learning*, pp. 5606–5615. PMLR, 2021. Wei-Yin Ko, Konstantinos C Siontis, Zachi I Attia, Rickey E Carter, Suraj Kapa, Steve R Ommen, Steven J Demuth, Michael J Ackerman, Bernard J Gersh, Adelaide M Arruda-Olson, et al. Detection of hypertrophic cardiomyopathy using a convolutional neural network-enabled electrocardiogram. *Journal of the American College of Cardiology*, 75(7):722–733, 2020. Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. Prototypical contrastive learning of unsupervised representations. In *International Conference on Learning Representations*, 2021. Yikuan Li, Shishir Rao, José Roberto Ayala Solares, Abdelaali Hassaine, Rema Ramakrishnan, Dexter Canoy, Yajie Zhu, Kazem Rahimi, and Gholamreza Salimi-Khorshidi.
BEHRT: Transformer for electronic health records. *Scientific reports*, 10(1):7155, 2020. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *International Conference on Learning Representations*, 2019. Ming Y. Lu, Bowen Chen, Andrew Zhang, Drew F. K. Williamson, Richard J. Chen, Tong Ding, Long Phi Le, Yung-Sung Chuang, and Faisal Mahmood. Visual language pretrained multiple instance zero-shot transfer for histopathology images. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)*, pp. 19764–19775, June 2023. Temesgen Mehari and Nils Strodthoff. Self-supervised representation learning from 12-lead ECG data. *Computers in biology and medicine*, 141:105114, 2022. Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 6707–6717, 2020. Jungwoo Oh, Hyunseung Chung, Joon-myoung Kwon, Dong-gyun Hong, and Edward Choi. Lead-agnostic self-supervised learning for local and global representations of electrocardiogram. In *Conference on Health, Inference, and Learning*, pp. 338–353. PMLR, 2022. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. *arXiv preprint arXiv:1807.03748*, 2018. Chao Pang, Xinzhuo Jiang, Krishna S Kalluri, Matthew Spotnitz, RuiJun Chen, Adler Perotte, and Karthik Natarajan. CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks. In *Machine Learning for Health*, pp. 239–260. PMLR, 2021. Raphael Poulain, Mehak Gupta, and Rahmatollah Beheshti. Few-shot learning with semi-supervised transformers for electronic health records. In Zachary Lipton, Rajesh Ranganath, Mark Sendak, Michael Sjoding, and Serena Yeung (eds.), *Proceedings of the 7th Machine Learning for Healthcare Conference*, volume 182 of *Proceedings of Machine Learning Research*, pp. 853–873.
PMLR, 05–06 Aug 2022. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021. Aniruddh Raghu, Payal Chandak, Ridwan Alam, John Guttag, and Collin Stultz. Contrastive pre-training for multimodal medical time series. In *NeurIPS 2022 Workshop on Learning from Time Series for Health*, 2022. Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. Med-BERT: Pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. *NPJ digital medicine*, 4(1):86, 2021. Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. A simple fix to Mahalanobis distance for improving near-OOD detection. *arXiv preprint arXiv:2106.09022*, 2021. Konstantinos C Siontis, Peter A Noseworthy, Zachi I Attia, and Paul A Friedman. Artificial intelligence–enhanced electrocardiography in cardiovascular disease management. *Nature Reviews Cardiology*, 18(7):465–478, 2021. Hari Sowrirajan, Jingbo Yang, Andrew Y Ng, and Pranav Rajpurkar. MoCo pretraining improves representation and transferability of chest X-ray models. In *Medical Imaging with Deep Learning*, pp. 728–744. PMLR, 2021. Chetan L Srinidhi and Anne L Martel. Improving self-supervised learning with hardness-aware dynamic curriculum learning: an application to digital pathology. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 562–571, 2021. Viktor Tihonenko, A Khaustov, S Ivanov, A Rivin, and E Yakushenko. St Petersburg INCART 12-lead arrhythmia database. *PhysioBank, PhysioToolkit, and PhysioNet*, 2008. Geoffrey H Tison, Jeffrey Zhang, Francesca N Delling, and Rahul C Deo. Automated and interpretable patient ECG profiles for disease detection, tracking, and discovery.
*Circulation: Cardiovascular Quality and Outcomes*, 12(9):e005289, 2019. Rachael A. Venn, Xin Wang, Sam Freesun Friedman, Nate Diamant, Shaan Khurshid, Paolo Di Achille, Lu-Chen Weng, Seung Hoan Choi, Christopher Reeder, James P. Pirruccello, Pulkit Singh, Emily S. Lau, Anthony Philippakis, Christopher D. Anderson, Patrick T. Ellinor, Jennifer E. Ho, Puneet Batra, and Steven A. Lubitz. Deep learning of electrocardiograms enables scalable human disease profiling. *medRxiv*, 2022. doi: 10.1101/2022.12.21.22283757. Xiyue Wang, Sen Yang, Jun Zhang, Minghui Wang, Jing Zhang, Wei Yang, Junzhou Huang, and Xiao Han. Transformer-based unsupervised contrastive learning for histopathological image classification. *Medical Image Analysis*, 2022. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations*, pp. 38–45, Online, October 2020. Association for Computational Linguistics. Xi Yang, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith, Christopher Parisien, Colin Compas, Cheryl Martin, Mona G Flores, Ying Zhang, Tanja Magoc, et al. GatorTron: A large clinical language model to unlock patient information from unstructured electronic health records. *arXiv preprint arXiv:2203.03540*, 2022. Xiaoxi Yao, David R Rushlow, Jonathan W Inselman, Rozalina G McCoy, Thomas D Thacher, Emma M Behnken, Matthew E Bernard, Steven L Rosas, Abdulla Akfaly, Artika Misra, et al. Artificial intelligence–enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial. *Nature Medicine*, 27(5):815–819, 2021.
Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D Manning, and Curtis P Langlotz. Contrastive learning of medical visual representations from paired images and text. In *Machine Learning for Healthcare Conference*, pp. 2–25. PMLR, 2022. Jianwei Zheng, Jianming Zhang, Sidy Danioko, Hai Yao, Hangyuan Guo, and Cyril Rakovski. A 12-lead electrocardiogram database for arrhythmia research covering more than 10,000 patients. *Scientific data*, 7(1):48, 2020.

## Appendix

## A Dataset Details

## A.1 Pre-Training Dataset Details

Table 6: The number of ECGs and the number of patients used during pre-training of the proposed models, i.e., sEHR-BERT, sEHR-ECG-Text, ECG-sEHR, and ECG-Text.

| Model | Train #ECGs | Train #Patients | Val. #ECGs | Val. #Patients | Test #ECGs | Test #Patients | Total #ECGs | Total #Patients |
|---------------|-----------|-----------|---------|-----------|-----------|-----------|-----------|-----------|
| Global Splits | 5,479,435 | 1,463,009 | 450,775 | 121,932 | 3,210,110 | 853,477 | 9,140,320 | 2,438,418 |
| sEHR-BERT | - | 1,167,991 | - | 97,333 | - | - | - | - |
| sEHR-ECG-Text | 4,526,686 | 1,177,903 | 371,367 | 98,013 | - | - | - | - |
| ECG-sEHR | 4,553,278 | 1,196,478 | 373,649 | 99,608 | - | - | - | - |
| ECG-Text | 5,416,467 | 1,423,999 | 445,342 | 118,628 | - | - | - | - |

![17_image_0.png](17_image_0.png)

![17_image_1.png](17_image_1.png)

Figure 3: Distribution of the time difference between ECG and other modalities: (a) between ECG and sEHR modalities, (b) between ECG and text modalities.

Table 6 outlines the number of ECGs and the number of patients used during pre-training of the proposed models. Figure 3a shows the time-difference distribution between the ECG and sEHR modalities, while Figure 3b displays the time-difference distribution between the ECG and text modalities.
The Y-axis in Figure 3a represents the number of ECGs associated with at least one medical code for the sEHR modality in one-week intervals, and the Y-axis in Figure 3b represents the number of ECGs associated with at least one biomedical entity (mentioned in Section 4.2.3) for the text modality in one-week intervals. The distributions are plotted using a one-year time window (i.e., approximately 52 weeks) around the ECG acquisition timestamp.

## A.2 Internal Classification Dataset Details

Table 7: Summary of the disease labeling process for coronary atherosclerosis, myocarditis, cardiac amyloidosis, pulmonary hypertension, and low LVEF. The *Case* column details the identification of patients with the specific disease, the *Control* column explains how control patients are produced for the corresponding disease, and the *Time window* column indicates the time frame for associating ECGs with diagnoses. (d1, d2) denotes the time frame extending d1 days before and d2 days after the first diagnosis timestamp. All ECGs of the control patients are taken into the control cohort. The abbreviations CAC, TTE, mPAP, TRV, and LVEF represent coronary artery calcium, transthoracic echocardiogram, mean pulmonary arterial pressure, tricuspid regurgitation velocity, and left ventricular ejection fraction, respectively.
| Disease | Case | Control | Time window |
|---|---|---|---|
| Coronary atherosclerosis | CAC score ≥ 300 | CAC score = 0 | (365, 365) |
| Myocarditis | Manual curation by expert physicians using EHR data | No history of myocarditis | (7, 7) |
| Cardiac amyloidosis | Manual curation by expert physicians using EHR data | Patients who have undergone TTE and no history of amyloidosis | (180, 180) |
| Pulmonary hypertension | mPAP ≥ 25 mmHg or TRV ≥ 3.4 m/s | mPAP ≤ 20 mmHg or TRV ≤ 2.8 m/s | (30, 30) |
| Low LVEF | LVEF ≤ 40 | LVEF > 40 | (14, 14) |

| Disease | Train #ECGs | Train #Patients | Train Prev. (%) | Val. #ECGs | Val. #Patients | Val. Prev. (%) | Test #ECGs | Test #Patients | Test Prev. (%) |
|---|---|---|---|---|---|---|---|---|---|
| Coronary atherosclerosis | 19,281 | 10,589 | 38.78 | 1,604 | 870 | 39.66 | 11,290 | 6,088 | 39.13 |
| Myocarditis | 53,432 | 52,299 | 0.97 | 4,366 | 4,260 | 1.17 | 31,715 | 30,984 | 1.02 |
| Cardiac amyloidosis | 34,465 | 20,011 | 8.54 | 2,795 | 2,462 | 8.04 | 20,071 | 17,508 | 8.51 |
| Pulmonary hypertension | 200,777 | 73,908 | 14.71 | 16,132 | 6,091 | 14.10 | 115,602 | 42,893 | 14.34 |
| Low LVEF | 166,702 | 166,702 | 7.58 | 13,814 | 13,814 | 7.69 | 97,109 | 97,109 | 7.48 |
| AFib in NSR | 1,455,626 | 514,871 | 7.28 | 42,627 | 42,627 | 7.45 | 301,022 | 301,022 | 7.32 |

Table 8: Number of ECGs, number of patients, and disease prevalence in the coronary atherosclerosis, myocarditis, cardiac amyloidosis, pulmonary hypertension, low LVEF, and AFib in NSR classification datasets.
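The (d1, d2) windows in Table 7 amount to a simple membership test: an ECG is associated with a diagnosis if it was acquired at most d1 days before or d2 days after the first diagnosis timestamp. A minimal sketch of this association rule (the helper name `case_ecgs` is ours, not the authors' code):

```python
def case_ecgs(ecg_days, first_dx_day, d1, d2):
    """Return the acquisition days of ECGs that fall inside the (d1, d2)
    window around the first diagnosis timestamp.

    All arguments are expressed in days relative to an arbitrary epoch."""
    return [t for t in ecg_days if first_dx_day - d1 <= t <= first_dx_day + d2]

# Low LVEF uses a (14, 14) window: with a first diagnosis on day 20,
# only ECGs acquired in [6, 34] are kept.
print(case_ecgs([0, 30, 100], 20, 14, 14))  # -> [30]
```

Note that AFib in NSR uses a different rule (all NSR ECGs from 30 days before the first AFib occurrence to 30 days after the last one), so it is not covered by this per-window helper.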
In this section, we summarize the disease labeling process and outline the number of ECGs, the number of patients, and the disease prevalence used during training, validation, and testing of the internal disease classification models. Table 7 summarizes the disease labeling process for coronary atherosclerosis, myocarditis, cardiac amyloidosis, pulmonary hypertension, and low LVEF. In the case of AFib in NSR, the case cohort encompasses all NSR ECGs: those from 30 days before the first occurrence of AFib, during the occurrences, and extending 30 days after the last occurrence of AFib. The control cohort comprises all NSR ECGs obtained from patients with no evidence of AFib. Table 8 shows the number of ECGs, the number of patients, and the prevalence for each disease.

## A.3 External Classification Dataset Details

In this section, we provide an overview of the number of ECGs and the number of patients in the PhysioNet2020 and Chapman datasets (see Table 9). The PhysioNet2020 dataset consists of ECG signals from multiple sources with varying signal lengths and sampling rates.

Table 9: Details of the PhysioNet2020 and Chapman datasets, including the number of ECGs, number of patients, signal lengths, and sampling rates.

| Dataset | Source | #ECGs | #Patients | Signal length | Sampling rate |
|---|---|---|---|---|---|
| PhysioNet2020 | CPSC2018 | 6,877 | 6,877 | 6-60 secs | 500 Hz |
| | CPSC extra | 3,453 | 3,453 | 6-60 secs | 500 Hz |
| | St Petersburg INCART | 74 | 32 | 30 mins | 257 Hz |
| | PTB | 516 | 516 | - | 1000 Hz |
| | PTB-XL | 21,837 | 21,837 | 10 secs | 500 Hz |
| | Georgia | 10,344 | 10,344 | 10 secs | 500 Hz |
| Chapman | Chapman | 10,646 | 10,646 | 10 secs | 500 Hz |

## B Model Architectures And Implementation Details

## B.1 Training Details

We execute all pre-training and classification tasks using 2 Nvidia V100 (16G) GPUs.
However, for the pre-training tasks involving the text domain, we utilize 2 Nvidia A100 (40G) GPUs. All original ECGs consist of 12 leads and are 10 seconds long with a sampling rate of 500 Hz. During training, we use a random crop of 5 seconds in length, i.e., 2500 samples. We optimize all pre-training and classification models with the AdamW optimizer (Loshchilov & Hutter, 2019) with (β1, β2) set to (0.9, 0.999).

**Pre-training Details.** We initially pre-train sEHR-BERT as described in Section 3.1. For multi-modal contrastive pre-training, we initialize the sEHR encoder with sEHR-BERT model weights and the text encoder with GatorTron-base (Yang et al., 2022) model weights. GatorTron-base (Yang et al., 2022) is a 345M-parameter language model pre-trained on large amounts of de-identified clinical notes (80B words) from the University of Florida Health System, with a vocabulary of size 50176. We use the BERT and MegatronBERT implementations offered by the Huggingface transformers library (Wolf et al., 2020) for the sEHR and text encoders, respectively. For the ECG encoder, we use a ResNet-like architecture (He et al., 2016) customized to one dimension, which consists of around 1M parameters. The ECG encoder is initialized randomly. In joint pre-training, we freeze the first 3 and 18 layers of the sEHR and text encoders, respectively, and fine-tune the remaining layers. Following Zhang et al. (2022), we set τ to 0.1. We assign equal weight to both directions of contrastive learning, i.e., from ECG to sEHR and sEHR to ECG, and similarly for ECG and text, i.e., (λes, λet) is set to (0.5, 0.5). We use a batch size of 256 and an initial learning rate of 1e-4 for our models. For ECG-only contrastive learning models, we use a batch size of 512 and an initial learning rate of 1e-3. The learning rate is reduced by a factor of 2 if the validation loss stops decreasing for 2 consecutive epochs, and we early-stop training based on the validation loss with an early-stopping patience of 10 epochs.
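The symmetric contrastive objective sketched above (temperature τ = 0.1, equal direction weights of 0.5) can be written in plain Python; `symmetric_contrastive` is an illustrative name, the N x N similarity matrix is assumed to come from unit-normalized embeddings, and this is a sketch rather than the authors' implementation:

```python
import math

def cross_entropy(logits, target):
    # numerically stable -log softmax(logits)[target]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target]

def symmetric_contrastive(sim, tau=0.1, w_fwd=0.5, w_bwd=0.5):
    """Symmetric InfoNCE over an N x N cosine-similarity matrix.

    Row i is one modality (e.g., ECG_i), column i the paired sample of the
    other modality (e.g., sEHR_i); diagonal entries are the positives."""
    n = len(sim)
    fwd = sum(cross_entropy([s / tau for s in sim[i]], i) for i in range(n)) / n
    bwd = sum(cross_entropy([sim[j][i] / tau for j in range(n)], i)
              for i in range(n)) / n
    return w_fwd * fwd + w_bwd * bwd

aligned = [[1.0, 0.0], [0.0, 1.0]]    # positives most similar: low loss
shuffled = [[0.0, 1.0], [1.0, 0.0]]   # positives least similar: high loss
assert symmetric_contrastive(aligned) < symmetric_contrastive(shuffled)
```

In the tri-modal setting, the total objective would combine one such term per pair, e.g., λes times the ECG-sEHR term plus λet times the ECG-text term, with (λes, λet) = (0.5, 0.5) as stated above.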
**Classification Details.** For classification tasks, we add a two-layer MLP head on top of the ECG encoder. We also add dropout layers after each hidden layer with a dropout probability of 0.2 for regularization. A batch size of 128 is used for all classification models. We use an initial learning rate of 1e-3 when training from random initialization for all diseases. For fine-tuning, we use an initial learning rate of 1e-3 for the coronary atherosclerosis and myocarditis tasks, and 1e-4 for the cardiac amyloidosis, pulmonary hypertension, low LVEF, and AFib in NSR tasks. The learning rate is reduced by a factor of 2 if the validation score stops increasing for 2 consecutive epochs, and we early-stop training based on the validation loss with an early-stopping patience of 10 epochs. During fine-tuning, we initialize the ECG encoder with the pre-trained ECG encoder weights, warm up the classification head (MLP) for 1024 steps by freezing the backbone network weights, and then fine-tune the entire network. During prediction, we take 6 consecutive 5-second-long crops with a stride of 1 second from the original 10-second-long ECG. The median of the predictions of these 6 crops is taken as the final prediction for computing the AUROC score.

## B.2 Multi-Modal Architectures

In this section, we present the precise architectures of the various multi-modal contrastive learning models. Figure 4a and Figure 4b show the architectures of the ECG-sEHR model, described in Section 3.3, and the ECG-Text model, described in Section 3.4, respectively. Figure 4c shows the architecture of the sEHR-ECG-Text shared space model mentioned in Section 3.2.

![20_image_0.png](20_image_0.png)

Figure 4: The figures (a) and (b) show the architectures of the bi-modal contrastive learning methods: (a) illustrates the architecture of the ECG-sEHR model, while (b) presents the architecture of the ECG-Text model.
Figure (c) shows the shared-space architecture of the sEHR-ECG-Text model, where the contrastive loss between ECG and sEHR, and between ECG and text, is applied in the sEHR-ECG-Text shared space (Ωset).

## B.3 ECG Encoder Architecture

We use a ResNet-like architecture (He et al., 2016) customized to one dimension for time-series ECG signals. It consists of eight 1D convolution layers based on the basic block of ResNet. Details of each layer are given in Table 10. All convolutional layers employ batch normalization, the ReLU activation function, and a stride of 2. We add two fully connected layers with hidden sizes 128 and 64 on top of the backbone CNN architecture for classification tasks. We use the same architecture for all pre-training methods.

Table 10: ECG encoder architecture used for all experiments. IC, OC, and K represent the number of input channels, the number of output channels, and the kernel size, respectively.

| Layer | Layer type | IC | OC | K |
|---------|--------------|------|------|-----|
| 1 | Conv | 12 | 32 | 5 |
| 2 | Conv | 32 | 32 | 5 |
| 3 | Conv | 32 | 64 | 5 |
| 4 | Conv | 64 | 64 | 3 |
| 5 | Conv | 64 | 128 | 3 |
| 6 | Conv | 128 | 128 | 3 |
| 7 | Conv | 128 | 256 | 3 |
| 8 | Conv | 256 | 256 | 3 |

## C Statistical Testing

We assess the statistical significance of our generalized model, sEHR-ECG-Text, by comparing it to each baseline using a two-sided t-test. This evaluation is conducted on linear classification tasks with 10% training splits. Each experiment is repeated 10 times, with 10 different fractional splits derived from the original training data. Our model significantly outperforms all baselines, with p-values less than 1e-5. We report the average AUROC score over the 10 runs along with 95% confidence intervals in Table 11.
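The test-time evaluation described in Appendix B.1 (six 5-second crops with a 1-second stride, median of the per-crop predictions) can be sketched as follows; `predict_fn` is a stand-in for the trained classifier and the function names are ours, not from the paper:

```python
def sliding_crops(signal, crop_len, stride):
    # consecutive fixed-length crops of a 1-D signal
    return [signal[s:s + crop_len]
            for s in range(0, len(signal) - crop_len + 1, stride)]

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])

def predict_ecg(signal, predict_fn, fs=500, crop_sec=5, stride_sec=1):
    # a 10 s ECG at 500 Hz yields six 5 s crops with a 1 s stride;
    # the median of the per-crop scores is the final prediction
    crops = sliding_crops(signal, crop_sec * fs, stride_sec * fs)
    return median([predict_fn(c) for c in crops])

ecg = list(range(5000))  # dummy 10 s signal sampled at 500 Hz
assert len(sliding_crops(ecg, 2500, 500)) == 6
```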
| Method | Coronary atherosclerosis | Myocarditis | Cardiac amyloidosis | Pulmonary hypertension | Low LVEF | AFib in NSR |
|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | |
| Random Init. | 0.770 (0.760, 0.779) | 0.778 (0.767, 0.789) | 0.910 (0.907, 0.913) | 0.886 (0.882, 0.890) | 0.917 (0.915, 0.920) | 0.897 (0.895, 0.898) |
| *ECG-only SSL* | | | | | | |
| 3KG | 0.721 (0.715, 0.727) | 0.686 (0.669, 0.704) | 0.869 (0.862, 0.875) | 0.862 (0.860, 0.863) | 0.902 (0.900, 0.903) | 0.866 (0.865, 0.866) |
| CLOCS(CMSC) | 0.736 (0.729, 0.742) | 0.660 (0.649, 0.671) | 0.879 (0.875, 0.883) | 0.857 (0.856, 0.859) | 0.905 (0.904, 0.906) | 0.861 (0.861, 0.862) |
| PCLR | 0.759 (0.757, 0.762) | 0.716 (0.704, 0.728) | 0.898 (0.894, 0.903) | 0.895 (0.894, 0.895) | 0.927 (0.927, 0.928) | 0.898 (0.897, 0.899) |
| *Our model* | | | | | | |
| sEHR-ECG-Text | 0.840 (0.838, 0.843) | 0.859 (0.846, 0.872) | 0.934 (0.930, 0.937) | 0.932 (0.931, 0.933) | 0.942 (0.941, 0.943) | 0.928 (0.928, 0.929) |

Table 11: The table shows the average AUROC, with 95% confidence intervals in parentheses, on linear classification tasks, i.e., coronary atherosclerosis, myocarditis, cardiac amyloidosis, pulmonary hypertension, low LVEF, and AFib in NSR, using 10% training splits. The results were calculated by conducting each experiment 10 times, using 10 different fractional splits obtained from the original training split. The sEHR-ECG-Text model significantly outperforms all baselines, with p-values less than 1e-5.

## D Additional Evaluation Metrics

In this section, we provide the AUPRC scores for the classification tasks performed on the internal datasets.
Table 12 shows AUPRC scores for the coronary atherosclerosis, myocarditis, and cardiac amyloidosis classification tasks (an extension of Table 2), and Table 13 shows AUPRC scores for the pulmonary hypertension, low LVEF, and AFib in NSR classification tasks (an extension of Table 3). We observe a similar trend in the AUPRC metric when comparing our models to baseline models, as we did with AUROC.

Table 12: An extension of Table 2 to include AUPRC scores (mean and standard deviation over 5 runs with different random seeds) for coronary atherosclerosis, myocarditis, and cardiac amyloidosis classification tasks: (a) linear classification, (b) fine-tuned classification. Results within 95% confidence intervals of the best result are shown in **bold**.

(a) Linear classification

| Method | Coronary atherosclerosis (10%) | Coronary atherosclerosis (100%) | Myocarditis (10%) | Myocarditis (100%) | Cardiac amyloidosis (10%) | Cardiac amyloidosis (100%) |
|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | |
| Random Init. | 0.686 ± 0.013 | 0.768 ± 0.006 | 0.075 ± 0.019 | 0.239 ± 0.013 | 0.687 ± 0.009 | 0.795 ± 0.004 |
| *ECG-only SSL* | | | | | | |
| 3KG | 0.643 ± 0.014 | 0.722 ± 0.000 | 0.047 ± 0.013 | 0.078 ± 0.000 | 0.522 ± 0.020 | 0.628 ± 0.000 |
| CLOCS(CMSC) | 0.661 ± 0.011 | 0.734 ± 0.000 | 0.033 ± 0.005 | 0.069 ± 0.000 | 0.534 ± 0.014 | 0.670 ± 0.000 |
| PCLR | 0.668 ± 0.008 | 0.761 ± 0.000 | 0.038 ± 0.010 | 0.082 ± 0.001 | 0.618 ± 0.028 | 0.721 ± 0.001 |
| *Our models* | | | | | | |
| sEHR-ECG-Text | 0.780 ± 0.008 | 0.841 ± 0.000 | 0.200 ± 0.031 | 0.304 ± 0.001 | 0.775 ± 0.017 | 0.842 ± 0.000 |
| ECG-sEHR | 0.773 ± 0.018 | 0.843 ± 0.000 | 0.178 ± 0.028 | 0.291 ± 0.001 | 0.777 ± 0.019 | 0.845 ± 0.000 |
| ECG-Text | 0.754 ± 0.011 | 0.819 ± 0.000 | 0.122 ± 0.029 | 0.210 ± 0.001 | 0.692 ± 0.021 | 0.778 ± 0.001 |

(b) Fine-tuned classification

| Method | Coronary atherosclerosis (10%) | Coronary atherosclerosis (100%) | Myocarditis (10%) | Myocarditis (100%) | Cardiac amyloidosis (10%) | Cardiac amyloidosis (100%) |
|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | |
| Random Init. | 0.686 ± 0.013 | 0.768 ± 0.006 | 0.075 ± 0.019 | 0.239 ± 0.013 | 0.687 ± 0.009 | 0.795 ± 0.004 |
| *ECG-only SSL* | | | | | | |
| 3KG | 0.674 ± 0.012 | 0.756 ± 0.009 | 0.041 ± 0.006 | 0.227 ± 0.031 | 0.667 ± 0.020 | 0.798 ± 0.004 |
| CLOCS(CMSC) | 0.707 ± 0.008 | 0.747 ± 0.011 | 0.053 ± 0.018 | 0.255 ± 0.015 | 0.693 ± 0.033 | 0.808 ± 0.002 |
| PCLR | 0.735 ± 0.010 | 0.763 ± 0.005 | 0.043 ± 0.009 | 0.257 ± 0.026 | 0.750 ± 0.010 | 0.821 ± 0.003 |
| *Our models* | | | | | | |
| sEHR-ECG-Text | 0.822 ± 0.006 | 0.838 ± 0.007 | 0.248 ± 0.012 | 0.343 ± 0.019 | 0.805 ± 0.007 | 0.845 ± 0.005 |
| ECG-sEHR | 0.827 ± 0.004 | 0.845 ± 0.004 | 0.270 ± 0.012 | 0.349 ± 0.013 | 0.812 ± 0.011 | 0.848 ± 0.006 |
| ECG-Text | 0.813 ± 0.006 | 0.828 ± 0.002 | 0.196 ± 0.017 | 0.302 ± 0.008 | 0.779 ± 0.006 | 0.831 ± 0.003 |

## E Ablation Study

We perform three ablations: (i) short ICD-10 vs. full ICD-10 codes in the sEHR modality, as described in Section 4.2.2, using the ECG-sEHR model, (ii) report concatenation vs.
entity concatenation in the text modality, as described in Section 4.2.3, using the ECG-Text model, and (iii) shared space vs. fine and coarse spaces using the sEHR-ECG-Text model, as described in Section 3.2. A vocabulary of size 28593 is used for short ICD-10 codes and of size 42355 for full ICD-10 codes. We show the ablation study results on the linear classification task in Table 14. The difference in performance between short ICD-10 and full ICD-10 codes is minimal, which suggests that full ICD codes provide little to no extra information. In the report concatenation vs. entity concatenation ablation, entity concatenation yielded better results for the majority of the diseases. We speculate that this is because entity concatenation captures long-term dependencies better than report concatenation, as we incorporate information from a one-year time window around the ECGs. Note that the sEHR-ECG-Text model with the FAC spaces architecture yielded slightly better results than the shared space architecture.

Table 13: An extension of Table 3 to include AUPRC scores (mean and standard deviation over 5 runs with different random seeds) for pulmonary hypertension, low LVEF, and AFib in NSR classification tasks: (a) linear classification, (b) fine-tuned classification. Results within 95% confidence intervals of the best result are shown in **bold**.

(a) Linear classification

| Method | Pulmonary hypertension 1% | 10% | 100% | Low LVEF 1% | 10% | 100% | AFib in NSR 1% | 10% | 100% |
|---|---|---|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | | | | |
| Random Init. | 0.420 ± 0.011 | 0.585 ± 0.011 | 0.710 ± 0.005 | 0.354 ± 0.013 | 0.527 ± 0.012 | 0.659 ± 0.003 | 0.356 ± 0.030 | 0.545 ± 0.011 | 0.642 ± 0.004 |
| *ECG-only SSL* | | | | | | | | | |
| 3KG | 0.385 ± 0.022 | 0.512 ± 0.005 | 0.537 ± 0.000 | 0.322 ± 0.032 | 0.480 ± 0.006 | 0.525 ± 0.000 | 0.375 ± 0.016 | 0.454 ± 0.003 | 0.466 ± 0.000 |
| CLOCS(CMSC) | 0.394 ± 0.029 | 0.521 ± 0.003 | 0.548 ± 0.000 | 0.369 ± 0.030 | 0.503 ± 0.008 | 0.542 ± 0.000 | 0.373 ± 0.013 | 0.457 ± 0.002 | 0.472 ± 0.000 |
| PCLR | 0.487 ± 0.043 | 0.605 ± 0.007 | 0.633 ± 0.000 | 0.467 ± 0.036 | 0.600 ± 0.003 | 0.630 ± 0.000 | 0.443 ± 0.022 | 0.539 ± 0.003 | 0.556 ± 0.000 |
| *Our models* | | | | | | | | | |
| sEHR-ECG-Text | 0.618 ± 0.012 | 0.722 ± 0.003 | 0.743 ± 0.000 | 0.529 ± 0.046 | 0.664 ± 0.003 | 0.695 ± 0.000 | 0.598 ± 0.009 | 0.662 ± 0.001 | 0.670 ± 0.000 |
| ECG-sEHR | 0.627 ± 0.020 | 0.727 ± 0.002 | 0.747 ± 0.000 | 0.542 ± 0.052 | 0.663 ± 0.004 | 0.693 ± 0.000 | 0.606 ± 0.014 | 0.672 ± 0.002 | 0.679 ± 0.000 |
| ECG-Text | 0.595 ± 0.013 | 0.689 ± 0.002 | 0.713 ± 0.000 | 0.493 ± 0.030 | 0.634 ± 0.003 | 0.668 ± 0.000 | 0.561 ± 0.015 | 0.637 ± 0.002 | 0.646 ± 0.000 |

(b) Fine-tuned classification

| Method | Pulmonary hypertension 1% | 10% | 100% | Low LVEF 1% | 10% | 100% | AFib in NSR 1% | 10% | 100% |
|---|---|---|---|---|---|---|---|---|---|
| *Supervised baseline* | | | | | | | | | |
| Random Init. | 0.420 ± 0.011 | 0.585 ± 0.011 | 0.710 ± 0.005 | 0.354 ± 0.013 | 0.527 ± 0.012 | 0.659 ± 0.003 | 0.356 ± 0.030 | 0.545 ± 0.011 | 0.642 ± 0.004 |
| *ECG-only SSL* | | | | | | | | | |
| 3KG | 0.429 ± 0.015 | 0.600 ± 0.008 | 0.703 ± 0.004 | 0.422 ± 0.013 | 0.582 ± 0.007 | 0.673 ± 0.003 | 0.430 ± 0.018 | 0.569 ± 0.004 | 0.640 ± 0.004 |
| CLOCS(CMSC) | 0.497 ± 0.004 | 0.605 ± 0.014 | 0.714 ± 0.004 | 0.483 ± 0.011 | 0.602 ± 0.005 | 0.671 ± 0.006 | 0.449 ± 0.008 | 0.563 ± 0.004 | 0.638 ± 0.003 |
| PCLR | 0.545 ± 0.021 | 0.660 ± 0.010 | 0.726 ± 0.004 | 0.546 ± 0.004 | 0.649 ± 0.002 | 0.692 ± 0.002 | 0.509 ± 0.013 | 0.585 ± 0.006 | 0.643 ± 0.002 |
| *Our models* | | | | | | | | | |
| sEHR-ECG-Text | 0.668 ± 0.012 | 0.723 ± 0.006 | 0.746 ± 0.009 | 0.651 ± 0.004 | 0.660 ± 0.008 | 0.692 ± 0.008 | 0.626 ± 0.021 | 0.631 ± 0.006 | 0.658 ± 0.010 |
| ECG-sEHR | 0.678 ± 0.014 | 0.729 ± 0.004 | 0.752 ± 0.004 | 0.643 ± 0.003 | 0.664 ± 0.007 | 0.695 ± 0.005 | 0.640 ± 0.014 | 0.634 ± 0.009 | 0.661 ± 0.003 |
| ECG-Text | 0.639 ± 0.009 | 0.694 ± 0.010 | 0.731 ± 0.004 | 0.618 ± 0.008 | 0.638 ± 0.006 | 0.678 ± 0.004 | 0.595 ± 0.016 | 0.616 ± 0.008 | 0.655 ± 0.008 |

| Method | Coronary atherosclerosis | Myocarditis | Cardiac amyloidosis | Pulmonary hypertension | Low LVEF | AFib in NSR |
|---|---|---|---|---|---|---|
| *ECG-sEHR* | | | | | | |
| Short ICD-10 codes | 0.891 ± 0.000 | 0.896 ± 0.000 | 0.959 ± 0.000 | 0.940 ± 0.000 | 0.951 ± 0.000 | 0.933 ± 0.000 |
| Full ICD-10 codes | 0.888 ± 0.000 | 0.894 ± 0.000 | 0.958 ± 0.000 | 0.940 ± 0.000 | 0.951 ± 0.000 | 0.934 ± 0.000 |
| *ECG-Text* | | | | | | |
| Entity concatenation | 0.875 ± 0.000 | 0.881 ± 0.000 | 0.949 ± 0.000 | 0.931 ± 0.000 | 0.947 ± 0.000 | 0.925 ± 0.000 |
| Report concatenation | 0.879 ± 0.000 | 0.895 ± 0.001 | 0.939 ± 0.000 | 0.921 ± 0.000 | 0.938 ± 0.000 | 0.925 ± 0.000 |
| *sEHR-ECG-Text* | | | | | | |
| Fine and coarse spaces | 0.890 ± 0.000 | 0.901 ± 0.001 | 0.960 ± 0.000 | 0.939 ± 0.000 | 0.951 ± 0.000 | 0.931 ± 0.000 |
| Shared space | 0.883 ± 0.000 | 0.884 ± 0.001 | 0.955 ± 0.000 | 0.936 ± 0.000 | 0.949 ± 0.000 | 0.928 ± 0.000 |

Table 14: Results of the ablation study. Comparison of linear classification performance (AUROC, mean and standard deviation over 5 runs with different random seeds) between short ICD codes and full ICD codes (ECG-sEHR), report concatenation and entity concatenation (ECG-Text), and shared space versus fine and coarse spaces (sEHR-ECG-Text). The best results of each ablation are shown in **bold**.
Review 1: Summary: In this paper, the authors propose a contrastive learning framework for downstream classification of ECG data. The framework includes two main components that learn similarities between ECG and structured EHR, and then ECG and textual data. **Dataset:** The structured EHR data includes ICD codes (diagnosis/procedure codes) and medication prescriptions. The unstructured data contains ECG reports, ECHO reports, pathology reports, radiology reports, microbiology reports, clinical notes and surgical notes. The authors used an internal dataset and an external dataset. **Evaluation:** The pre-training strategy is evaluated on various downstream tasks, such as classification, zero-shot retrieval, and out-of-distribution detection involving ECG as input only. They mainly use AUROC as a performance metric. Strengths and Weaknesses: **Strengths:** - The proprietary dataset is very large in size. - The model utilizes novel modalities for ECG representation learning via contrastive learning. This is quite creative, as those modalities are not commonly used together. - The work adopts the MultiModal Versatile Networks framework from Alayrac et al. 2020. Although it lacks methodological novelty, it can be considered a novel application of existing work. **Weaknesses:** - The proprietary dataset is private. - The code is not shared; I would highly recommend that they share their repository, considering that work in this area is generally based on private models. - The exact contributions of the work are unclear as currently listed in the abstract. There are many results in the main manuscript; however, it is also unclear which framework leads to the best improvements (ECG-sEHR vs ECG-Text vs ECG-sEHR-Text vs ECG-MTL). It currently seems that all models perform better in different scenarios, which does not really clarify the strengths of the proposed approach. - The difference between ECG-sEHR-Text and ECG-MTL is also unclear to me.
Does it just take the ECG encoder from ECG-sEHR-Text and apply the MTL head? Do the other models perform single task classification compared to ECG-MTL? - It is unclear why they conduct those specific ablation studies. It would be interesting to explore the impact of using different modalities, rather than long/short ICD and report/entity concatenation, as those are not the main contributions of the work. - Have the authors considered how this pre-training framework could generalize to the other input modalities (not just ECG)? - There is no evidence of any hyperparameter tuning. This can help clarify the robustness of the proposed approaches with respect to one another. - There is no statistical significance testing or confidence intervals provided. They also only report the average AUROC without any standard deviation. - Have you considered reporting other relevant metrics, such as AUPRC? - The notation in the methodology section is confusing. It needs to be simplified and match the notation used in the main figure. They introduce general modality-agnostic notation, and then customize it to each modality. I would suggest sticking to one. - The caption of the main figure should also explain all relevant notation without having to refer back to the main text. - The figure caption also mentions that in both ECG-sEHR and ECG-Text pre-training, the projection mapping P(es->set) is not used. Isn't it used in the latter? Where is the ECG-Text embedding space in the figure? - What's the motivation behind using the weighting scalars in the loss in equations 1 and 2? - How is the loss in equation 3 used? From my understanding, you perform ECG-sEHR pre-training (section 3.3), then ECG-Text pre-training (section 3.4), then ECG-MTL? Please clarify. - What's the difference between equation 4 and equation 2? - Can you provide a figure that clarifies the final downstream architectures for all models, including ECG-MTL?
- Can you also provide a figure that clarifies how the modalities were linked with one another, considering the varying timeframes / time differences between them? - Can you provide a figure that clarifies the difference between report and entity concatenation? It is unclear and the section is poorly written. - There are many decisions that seem to be made based on heuristics, such as keeping ICD codes that are associated with at least 50 patients. - Section 4.3.1 describes how the samples were labeled; can you create a table that summarizes the labeling procedure of each disease label? The text is difficult to follow. - The baselines are also not strong. Are there other multi-modal baselines that you can compare to? Did you perform any hyperparameter tuning for the baselines? What were the learning rates, etc.? **Presentation of the paper also needs to be significantly improved:** - There are typos and grammar mistakes. - The paper is quite dense and extensive, and hence the authors need to clarify certain elements of the text as described below. It mostly reads as a technical report, such that the methodological contributions / design elements are integrated with implementation details (hyperparameters, etc.). - It's uncommon to refer to EHR as EHRs. - The first paragraph in the introduction is too long and it's unclear what the main message is. This also applies to other sections in the text. - "weekly" manner should be rephrased. - Consistency of capitalization in introducing abbreviations. - The tables are referenced in random order. They also reference tables in the appendix in the main text without mentioning that they are supplementary. - All table and figure captions must be elaborated extensively. - Why is the AUROC reported as a percentage and to two decimal places? It is typically reported as a three-decimal figure. Requested Changes: Most of my comments are summarized above. One major request would be to make the code publicly available.
Even if it's a proprietary dataset, the authors can provide the code for how they implement the losses and the training procedure, which would make it easier to follow the exact training steps described in the paper. As it stands now, the paper is not ready for publication. Broader Impact Concerns: The authors already discuss that the data is private and de-identified according to IRB standards. ================================================== Review 2: Summary: The core value of this work lies in: 1) proposing the first instantiation of multi-modal contrastive learning framework for the purpose of learning representations of the ECG signal, 2) empirical evaluation of the proposed methods and ablating some of its domain-specific design choices. Strengths and Weaknesses: * I find the paper quite well written, sufficiently clearly describing the methodology followed, and the motivation/reasoning behind it. * I found the description of operationalization of the idea of multi-modal contrastive learning in this healthcare setting interesting, and I think it can benefit readers from this community. * In my understanding, the paper seems technically correct. I did not find any errors in the operation of the method or the empirical evaluation protocol. That said, I am not very familiar with all of the types of data the authors work with, so it is possible I am overlooking something. The conclusions are supported by the empirical evidence presented. * However, I think that the empirical evaluation of the methods should be slightly improved before acceptance (see below). * The authors seem to characterize the novelty of the work appropriately, although I cannot warrant it's correct, because I am not sufficiently familiar with the literature on the use of contemporary representation learning methods in the medical/ECG-modality domain. 
Requested Changes: * In my opinion, must haves for acceptance: Given that both the training and the evaluation of the model is done using a closed-access, internal dataset, I think it particularly important for the paper to include as detailed empirical evaluation of the proposed methods as possible. Hence I would really like to see the following in the evaluation section: A) standard deviations of the results over multiple training trials (at least, over multiple fine-tuning trials with different random initializations of the linear head and random order of the data presented to the network; at best, multiple trials of SSL-based pre-training + finetuning) and bold the results for all the methods for which the standard deviations overlap considerably. B) Report the AUPRC metric as well, which seems important to me when reporting the classification performance on severely class-imbalanced data distribution. * Extra: My understanding is that the multimodal-SSL-initialized models (for which the results are reported in the tables) are finetuned individually on each one of the classification tasks in Table 4. I would find it interesting to add the Multitask-Learning Component on top of the SSL-pretraining, as this would allow to assess the potential of combining the multi-modal SSL representation learning with multitask learning. Broader Impact Concerns: No special concerns beyond the standard caution about using ML in the health domain. The authors acknowledge those appropriately. ================================================== Review 3: Summary: The paper is about representation learning for ECGs, and the authors consider how to perform multi-modal contrastive learning using a private dataset combining ECGs, structured EHRs (information about diagnoses, prescriptions, tests, etc) and unstructured EHRs (free-form text). 
It differs from previous work on self-supervised and contrastive learning as follows: - SSL and contrastive learning are performed mainly in a uni-modal way for general domain images, or for multi-modal datasets combining text and images (e.g., captioned images scraped from the internet). Most applications of SSL/CL to medical data are also either uni-modal with images, or with text-image pairs. This work considers three modalities: ECG waveforms (time series), sEHRs (not quite text), and free-form text from unstructured EHRs. - SSL and contrastive learning have been previously applied to ECGs but typically in a uni-modal fashion. One work combined ECGs with a different modality (labs and vitals time series), but the use of EHR data here seems novel. The recipe for contrastive learning is usually straightforward, but the authors confront several difficulties here. One is determining the best encoders for each modality: they use a CNN for ECGs (a standard choice), a pre-trained LM for the unstructured EHR data (GatorTron), and a custom BERT-like model for the sEHR data. They must also consider how to compare between three modalities: as shown in Figure 1, there are separate embedding spaces in which two contrastive losses are calculated (ECG-sEHR, and ECG-sEHR-Text). This seems reasonable, and the loss functions (eqs. 1 and 2) are the standard losses for each space. Methodologically, this work is not too novel, but the choices are sensible and not straightforward to execute on (particularly training the custom BERT-like model for sEHR data). The authors in fact train three separate multi-modal contrastive learning models, which utilize either type of EHR or both types. In addition, they train ECG-MTL, a multi-task model trained in a supervised manner for a large number of classification and regression tasks. (Actually, I could not understand whether this was a separate model, or fine-tuned from one of the multi-modal models - I request clarification on this below.) 
The evaluations show that the new models are generally more effective than existing contrastive learning methods for a variety of downstream tasks, when tested with linear probing or full fine-tuning, and when tested on either the internal test set or external open-source datasets. Due to the strong existing backbones, the new models are also more sample-efficient. The authors also show results for zero-shot retrieval tasks, and out-of-distribution detection, but while interesting, these seem less clinically relevant. Strengths and Weaknesses: The main strengths of this work are formulating an approach to combine three separate data modalities, and showing convincing evidence that it leads to better representations than existing methods. I suspect this could encourage more work on combining diverse data modalities, which are likely available in other private datasets at medical institutions around the world. I generally appreciated the evaluations, particularly the careful splitting of data and the use of external datasets. I don't see any important technical weaknesses. I ask a couple questions below, which may or may not merit some further experiments. As mentioned above, I don't find the approach very methodologically innovative, but it's reasonable and enables an interesting and challenging application of contrastive learning. One concern I would definitely like discussed is a clarification around ECG-MTL and why it's included in this paper if it's a purely supervised model. Requested Changes: A couple thoughts and questions: - The results in Tables 2-4 are generally quite negative regarding the existing contrastive learning methods. They're rarely competitive with the new models (which is perhaps a good thing), but they're often worse than a simple supervised baseline. This is perhaps not a direct contradiction of previous work, as they did not use the same datasets, but it does suggest that those are ineffective representation learning methods. 
This is odd, because they use much more data and rely on sound contrastive learning principles. Can the authors comment on this, and perhaps make room in the paper to discuss why those methods don't work? - The ImageNet comparison in contribution #3 was confusing to me. ImageNet is an open-source dataset, this is a model trained on a closed-source dataset. Can you rephrase this contribution so the comparison is more clear, or perhaps omit the comparison? - As mentioned above, I could not tell whether ECG-MTL was trained in a purely supervised fashion or fine-tuned based on a backbone from the previous multi-modal models. If it's the former, I'm confused why it belongs in this paper. If it's the latter, I would have expected more discussion of this point in Section 3.5, and perhaps a comparison between backbones from which it's fine-tuned. - In Section 3.2, the claim about the modalities having different levels of granularity is interesting, but this isn't accompanied by any empirical evidence or ablations. If the granularities were more similar, how would the learning approach differ? And could it be added as an ablation study? It's not that the results are strong as is, but from a scientific perspective, omitting any empirical evidence on this point is unsatisfying for interested readers. - "GatorTron" is misspelled in Figure 1. - Just a nit, but assuming L2 normalization when computing cosine similarity is unnecessary. The standard definition of cosine similarity performs the normalization internally ([PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html), [Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity)). - I couldn't tell why eq. 2 is repeated on the very next page in eq. 4. - For context, can the authors provide some information about SOTA results on the external datasets? That information is missing, and it would be interesting to know whether this method enables better performance. 
- The explanations about IRB approval don't seem to belong in the conclusion. Perhaps they should be moved earlier, for example to a section where the datasets are introduced? Broader Impact Concerns: I have no broader impact concerns. ================================================== Metareview: Recommendation: Accept as is Comment: The authors use existing techniques on a new dataset, and analyze performance results on both internal and external benchmarks. Their analysis yields strong evidence that their multimodal approach can improve upon existing SSL-for-ECG techniques for producing predictors in low-sample regimes and match (or surpass) supervised baselines in larger data regimes. While it is disappointing that the dataset that leads to these performance improvements is closed, it is understandable given the nature of the dataset. I believe these results will find an audience, perhaps, e.g., other researchers at health centers with access to their own ECGs and EHRs. I definitely encourage following up with code to aid in the reproduction of these results by such researchers. Reviewer concerns were, for the most part, addressed. ==================================================
# IMProv: Inpainting-Based Multimodal Prompting For Computer Vision Tasks

Anonymous authors

Paper under double-blind review

## Abstract

In-context learning allows adapting a model to new tasks given a task description at test time. In this paper, we present IMProv - a generative model that is able to in-context learn visual tasks from multimodal prompts. Given a textual description of a visual task (e.g. "Left: input image, Right: foreground segmentation"), a few input-output visual examples, or both, the model in-context learns to solve it for a new test input. We train a masked generative transformer on a new dataset of figures from computer vision papers and their associated captions, together with a captioned large-scale image-text dataset. During inference time, we prompt the model with text and/or image task example(s) and have the model inpaint the corresponding output. We show that training our model with text conditioning and scaling the dataset size improves in-context learning for computer vision tasks by over +10% AP for Foreground Segmentation, over +5% gains in AP for Single Object Detection, and almost 20% lower LPIPS in Colorization. Our empirical results suggest that vision and language prompts are complementary and it is advantageous to use both to achieve better in-context learning performance.

## 1 Introduction

In-context learning (ICL) (Brown et al., 2020; Chan et al., 2022; Xie et al., 2021), also known as few-shot prompting, is an exciting new paradigm in machine learning that allows a model to adapt to novel downstream tasks without fine-tuning or changing the model's weights. In natural language processing (NLP), ICL is considered an emergent property of large language models (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2022), and it was first introduced in the seminal paper of GPT-3 (Brown et al., 2020). A few-shot prompt typically includes examples of (input, output) pair(s).
The few-shot performance of large language models has been shown to achieve competitive results on NLP tasks, sometimes surpassing prior state-of-the-art fine-tuning approaches (Brown et al., 2020). In computer vision, the full potential of in-context learning (ICL) is still far from being realized. To enable a model to perform in-context learning during test time, there are two key challenges that need to be addressed. First, the model's architecture should be designed in such a way that it can effectively process prompts from various vision tasks. This means it should be capable of receiving task instructions and/or input-output examples as inputs to the model. Second, a different approach to training these models is required. While in natural language processing (NLP) the emergence of ICL has been facilitated by utilizing large-scale, non-curated data, in computer vision, even generative models trained on billions of non-curated text-image pairs have failed to achieve similar results. A possible approach to enable test-time few-shot prompting for computer vision tasks is to train a multi-task inpainting model (Wang et al., 2022; 2023b; Bar et al., 2022). For example, previous approaches (Wang et al., 2022; 2023b) adopted a fully supervised paradigm to train a model over a predetermined set of vision tasks. However, this line of study requires a handful of manual annotations and thus struggles to scale and generalize well to unseen vision tasks. Instead of explicitly designing the tasks, Bar et al. (2022) took a different unsupervised approach by proposing to learn from unstructured Computer Vision Figures data, where images have implicit task supervision and grid-like structure.

![1_image_0.png](1_image_0.png)

Figure 1: Inpainting-based Multimodal Prompting for vision (IMProv). *Top:* Our model in-context learns to solve computer vision tasks by inpainting the masked area with the task solution (shown in the red square) using visual input-output examples (a), a textual task description (b), or both (c). *Bottom:* IMProv prediction examples.

However, vision-only prompting (Bar et al., 2022) suffers from ambiguities and is limited in its ability to describe a specific visual task. To alleviate these difficulties, we propose multimodal ICL: prompting with input that consists of both pixels and text. Intuitively, these two modalities can work in synergy to enhance the understanding of the world and its complexities. For example, during a conversation, people use language to communicate ideas and vision to perceive facial expressions and conversational gestures (Cassell et al., 1999; McNeill, 2019). For prompting, conditioning vision models on text can enable describing instructions in an efficient manner and reduce ambiguities without the necessity for multiple high-quality visual examples. Equipped with this intuition, we train a model, dubbed IMProv, to inpaint randomly masked regions given the rest of the image and a caption as context. *Our training does not require explicit definitions of tasks and annotations for each task.* To demonstrate that multimodal learning can be boosted by a larger dataset, we collected a new dataset of image-text pairs from Semantic Scholar, which is three times larger than the largest existing computer vision figures dataset. We train a new model by performing inpainting on randomly masked images from a combination of the newly constructed data and LAION-400M (Schuhmann et al., 2021). At test time, our model exhibits emerging capabilities such as zero-shot prompting for vision tasks, e.g., performing foreground segmentation with only a textual description of the task without any image examples. We explore the outcomes of interchanging and combining image and textual prompts in our model.
We find that when using both modalities, our model achieves improved ICL performance compared to past vision-only approaches (Figure 1), improving average precision by over 10% in Foreground Segmentation and by over 4% for Single Object Detection, and closing over 40% of the gap between current ICL approaches and state-of-the-art 1-shot training approaches that utilize supervised base-class training data. Beyond visual recognition, IMProv can be applied to general vision tasks including edge estimation, depth estimation, and conditional image synthesis, as shown in Figure 1.

## 2 Prompting Inpainting Models Via Images And Text

We start by presenting our IMProv model and how to train it in Section 2.1; subsequently, in Section 2.2, we explain the approach for prompting the model using visual and textual prompts. Finally, in Section 2.3, we describe the new dataset of images and associated captions we collected.

![2_image_0.png](2_image_0.png)

Figure 2: **IMProv Architecture**. During training, the input image is patchified, masked, and fed to the model together with the associated caption's CLIP (Radford et al., 2021) embeddings. For each masked token, the decoder outputs a distribution over a frozen pretrained VQGAN (Esser et al., 2021) codebook. The model is trained with a cross-entropy loss.

## 2.1 IMProv - Text-Conditional Inpainting Model

We introduce a model with Inpainting-based Multimodal Prompting capabilities for vision tasks (IMProv). It receives both text and a masked input image as context and outputs a reconstructed image.
Given an input image x ∈ R^{H×W×3}, a binary mask m ∈ {0, 1}^{H×W}, and a sentence t ∈ V^K, where V is the vocabulary and K is the sentence length, the goal of our inpainting model f is to generate a new image y ∈ R^{H×W×3}, with the masked regions filled according to the input image context and the sentence:

$$y=f(x,m,t)\tag{1}$$

Our model f has an encoder-decoder structure like MAE-VQGAN (Bar et al., 2022), where the encoder and decoder are Vision Transformers (Dosovitskiy et al., 2020). In contrast to Bar et al. (2022) and He et al. (2021), after every self-attention layer, we add a cross-attention layer between image tokens and textual tokens, thereby effectively allowing each image token to attend to the text tokens:

$$Z_{i}=\sum_{j=1}^{n}a_{ij}V_{j}\qquad a_{ij}=\frac{\exp(K_{j}^{T}Q_{i})}{\sum_{m=1}^{n}\exp(K_{m}^{T}Q_{i})}\tag{2}$$

where V is the set of textual token values, K is the set of text token keys, and Q is the set of image token queries. The resulting output sequence Z represents the attended image features that are most relevant to the text tokens.

Training. To train the model, the input image x is split into patches and randomly masked by dropping a fixed percentage of the patches (75% in our experiments). Similarly, the input textual sentence is tokenized, and every token is mapped to its corresponding CLIP (Radford et al., 2021) embedding. Given the subset of non-masked patches and the textual tokens, the model is then trained to predict the visual tokens corresponding to the masked patches. The model is trained with a cross-entropy loss applied between the model predictions and the corresponding visual tokens for each masked patch. The ground truth visual tokens are obtained by mapping the input image to visual token indices using a pre-trained VQGAN encoder (Esser et al., 2021).
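The attention computation in Eq. (2) scores each image-token query against the text-token keys and takes a softmax-weighted average of the text values. A minimal numpy sketch (unscaled dot-product attention, matching the equation; all shapes are illustrative, not the paper's configuration):

```python
import numpy as np

def cross_attention(Q, K, V):
    """Eq. (2): image-token queries Q (n_img, d) attend to text-token
    keys K and values V (n_txt, d); returns Z of shape (n_img, d)."""
    logits = Q @ K.T                              # (n_img, n_txt)
    logits -= logits.max(axis=1, keepdims=True)   # for numerical stability
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)             # softmax over text tokens
    return a @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(196, 64))  # e.g. 14x14 visible image patches
K = rng.normal(size=(77, 64))   # e.g. a CLIP-style text sequence
V = rng.normal(size=(77, 64))
Z = cross_attention(Q, K, V)
```

In the full model this layer sits after each self-attention layer, and the decoder's final logits over the VQGAN codebook are trained with the cross-entropy loss on the masked patches.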
Formally, our text-conditioned MAE-VQGAN models the distribution p(z_i | x, m, t), where z_i is a visual token from the VQGAN vocabulary that corresponds to the i-th image patch.

## 2.2 Multimodal Prompt

At inference time, prompting the trained model can be done via text, via a visual prompt, or by combining both. To prompt the model via a visual prompt, we apply the same task formulation as Bar et al. (2022) - we form a grid-like image composed of task input-output example(s) (e.g. input images and their segmentation masks) and a new query image, and apply the model to inpaint the corresponding result for the query. To prompt the model via text, we provide to f a description of the task (e.g. "Left: input images, Right: foreground/background segmentation results").

| Prompt | Full Text Examples |
|---|---|
| No Text | ϕ |
| Task | Image Segmentation |
| + Location | Left - input image, right: Black and white foreground background segmentation |
| + Class Name | Left - input image, right: Black and white foreground background segmentation of a horse |

Table 1: Textual prompts used during inference with IMProv.

Formally, let S = {(x_i, y_i)}_{i=1}^{n} be the set of input-output examples, where x_i is an image and y_i is the corresponding vision task output, let t be a textual task description, and let x_q be a query image.
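Concretely, these ingredients can be arranged into a single inpainting query: the example pair fills the top row, the query image the bottom-left, and the bottom-right quadrant is left blank and masked. A minimal numpy sketch, assuming all images are pre-resized to the same shape (the helper name is ours, not the authors'):

```python
import numpy as np

def arrange_grid(example, query):
    """Place an (input, output) example pair in the top row and the query
    bottom-left; the bottom-right quadrant is blank and masked for inpainting.
    All images are assumed pre-resized to the same (h, w, 3) shape."""
    x1, y1 = example
    blank = np.zeros_like(query)
    top = np.concatenate([x1, y1], axis=1)
    bottom = np.concatenate([query, blank], axis=1)
    grid = np.concatenate([top, bottom], axis=0)
    mask = np.zeros(grid.shape[:2], dtype=np.uint8)
    mask[query.shape[0]:, query.shape[1]:] = 1   # inpaint bottom-right quadrant
    return grid, mask

x = np.ones((112, 112, 3)); y = np.zeros((112, 112, 3))
q = np.full((112, 112, 3), 0.5)
grid, mask = arrange_grid((x, y), q)
# The textual task description is passed alongside the grid, e.g.:
text_prompt = "Left - input image, right: Black and white foreground background segmentation"
```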
We introduce an arrangement function g1 that arranges S and x_q into a grid (visual prompt), denoted by x_vp, and provides a mask m for the inpainting function:

$$x_{vp},m=g_{1}(S,x_{q})\tag{3}$$

Similarly, we have a corresponding arrangement function g2 that generates a textual prompt, instructing the model how to inpaint the image given attributes like the task name, location details, and the image class name:

$$t=g_{2}(task,loc,class)\tag{4}$$

For example, for the task of image segmentation of an airplane, the output can be "Left: input image of an airplane, Right: corresponding image segmentation". For more examples, see Table 1. The model f is then applied to reconstruct the masked area m given the visual prompt and the task description:

$$y=f(x_{vp},m,t)\tag{5}$$

When the task description is an empty string, no textual prompt is given and the model has to infer the task solely from the examples in S. When S is an empty set and a textual prompt is given, the model performs zero-shot completion by relying only on the textual instructions.

## 2.3 Image-Text Dataset For Computer Vision

| Dataset | images | papers | with text | source |
|---|---|---|---|---|
| CVF | 78,227 | 20,764 | No | arXiv |
| S2CV | 268,118 | 261,225 | Yes | Semantic Scholar |

Table 2: Dataset comparison.

The grid-like visual prompt images inpainted by IMProv have a different distribution from natural images. Vision task descriptions (e.g. "left: input, right: segmentation mask"), paired with images, do not appear often in widely used language-and-vision datasets. Thus, a model trained on these datasets will have trouble completing the inpainting task successfully due to the distribution shift.
To mitigate this domain gap, we collect a new dataset of figures, paired with their associated captions, extracted from computer vision papers. Our dataset, the Semantic Scholar Computer Vision dataset (S2CV), is collected from computer vision papers that appear on the "Semantic Scholar" website. This website contains papers from 40 conferences and journals from the years 2010 to 2022. We extracted pairs of figures and their captions from each paper on the website, resulting in 1,417,398 pairs. We then filtered out figures that do not include images (e.g. plots of loss curves). Finally, the filtered S2CV dataset includes 268,118 captioned figures and is 3 times larger than the largest existing figures dataset, the Computer Vision Figures dataset (CVF; Bar et al. (2022)). See the comparison in Table 2, full details about S2CV in the dataset datasheet (Gebru et al., 2021), and provided examples in Figure 10. We also extend the existing CVF by repeating its data collection process and extracting the captions of the figures in the dataset. This results in 78,227 image-text pairs. This dataset, CCVF (Captioned-CVF), serves as a baseline in our experiments.

| Model | Foreground Segmentation ↑ | | | | Single Object Detection ↑ | | | | Colorization ↓ | |
|---|---|---|---|---|---|---|---|---|---|---|
| | Split 0 | Split 1 | Split 2 | Split 3 | Split 1 | Split 2 | Split 3 | Split 4 | MSE | LPIPS |
| BEiT (CVF) | 5.38 | 3.94 | 3.20 | 3.29 | 0.17 | 0.02 | 0.14 | 0.16 | 0.60 | 0.70 |
| VQGAN (CVF) | 12.56 | 17.51 | 14.27 | 15.06 | 2.27 | 2.37 | 2.48 | 1.99 | 1.50 | 0.56 |
| MAE (CVF) | 17.42 | 25.70 | 18.64 | 16.53 | 5.49 | 4.98 | 5.24 | 5.84 | 0.43 | 0.55 |
| MAE-VQGAN (CVF) | 27.83 | 30.44 | 26.15 | 24.25 | 24.19 | 25.20 | 25.36 | 25.23 | 0.67 | 0.40 |
| IMProv (S2CV + LAION) | 42.58 | 44.81 | 40.73 | 33.72 | 30.03 | 30.73 | 29.8 | 31.32 | 0.57 | 0.34 |

Table 3: **Comparison between previous visual prompting results and multimodal prompting on computer vision tasks.** For Foreground Segmentation and Single Object Detection, we report the *mIOU* score. For Colorization, we report the MSE and *LPIPS*. The training dataset appears in parentheses.

## 3 Experiments And Results

We train IMProv with a ViT-L backbone on a combination of our CCVF and S2CV datasets and LAION-400M (Schuhmann et al., 2021). During the training process, we create mini-batches by randomly selecting half of the data from the LAION-400M dataset and the other half from CCVF and S2CV, ensuring that the model learns from a diverse set of figure-like images. We evaluate IMProv on a variety of computer vision tasks. By default, our visual prompt consists of a 2 × 2 grid where the bottom-right quarter is masked, the top row contains the input-output example, and the bottom-left image represents the query. The visual example and the textual prompt are defined according to the task (see Section 3.2).

## 3.1 Implementation Details

During training, we utilize images and their associated captions. We follow the resized cropping and flipping augmentations of He et al. (2021) and train on 224 × 224 crops. We use the AdamW (Loshchilov & Hutter, 2017) optimizer with a learning rate of 2e−4 and a weight decay of 0.05. We train our models on one machine with 8 A100 GPUs with a batch size of 2048 for 150k iterations. Our learning-rate schedule consists of 2k linear warm-up steps followed by a cosine learning-rate decay. We use a pre-trained frozen CLIP ViT-L/14 model as our text encoder and a pre-trained VQGAN codebook with a vocabulary size of 1024, provided by Esser et al. (2021), with a spatial dimension reduction of ×16. During training, we drop the text conditioning with a probability of 0.1.
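The optimization schedule above (2k linear warm-up steps, then cosine decay over the 150k iterations) and the 10% text-conditioning dropout can be sketched as follows; the function names are illustrative, not the authors' code:

```python
import math
import random

def lr_at(step, total_steps=150_000, warmup=2_000, base_lr=2e-4):
    """Linear warm-up followed by cosine learning-rate decay (Section 3.1)."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def maybe_drop_text(caption, p=0.1, rng=random):
    """With probability p, replace the caption with an empty string so the
    model also learns to inpaint without text conditioning."""
    return "" if rng.random() < p else caption
```

One plausible motivation for the dropout is that the model must also handle prompts without any text at inference time (the vision-only prompting case).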
## 3.2 Downstream Computer Vision Tasks

Next, we include the evaluation results of IMProv on a wide range of computer vision tasks. When trained on CCVF/S2CV and LAION-400M (Schuhmann et al., 2021), IMProv significantly improves ICL performance over a wide range of downstream computer vision tasks compared to vision-only ICL approaches.

Foreground Segmentation. In this task, the goal is to segment the query image into two classes - foreground and background. The input-output example is a random image with its corresponding binary segmentation mask (e.g., black for the background and white for the foreground). We define the textual prompt to be: "Left - input image, right - Black and white foreground-background segmentation of {class}", where {class} is the class of the foreground object, annotated in Pascal-5i. We follow the evaluation protocol of Bar et al. (2022) and test IMProv on the four splits of the Pascal-5i dataset (Shaban et al., 2017). Results are reported in Table 3.

![5_image_0.png](5_image_0.png)

Figure 3: **Multimodal prompting prediction examples.** The text prompt provided to IMProv together with the presented visual prompt appears below each example. For each prompt, the result is marked in red. Please see the supplementary material for more results.

Object Detection. Similar to Foreground Segmentation, the objective here is to perform binary segmentation of the object present in the query image. However, this task is more challenging, as the input-output examples contain a rectangle-shaped mask derived from a bounding box, which is less accurate than a fine, detailed segmentation mask. We define the textual prompt to be: "Left - input image, right - Black and white foreground background segmentation of {class} of rectangle shape", where {class} is the class of the foreground object. We use the Pascal VOC 2012 dataset (Everingham et al., 2015), which consists of images along with their associated detection boxes.
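Both tasks above are scored as binary segmentation with the mIoU metric. A minimal sketch of that scoring, assuming model outputs have already been thresholded to binary foreground masks (the function names are ours, not the paper's):

```python
import numpy as np

def binary_iou(pred, gt):
    """Intersection over Union between two binary masks (True = foreground)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # If both masks are empty, the prediction is trivially correct.
    return inter / union if union > 0 else 1.0

def mean_iou(preds, gts):
    """mIoU over a set of (prediction, ground-truth) mask pairs."""
    return float(np.mean([binary_iou(p, g) for p, g in zip(preds, gts)]))
```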
Our results are reported in Table 3 in terms of the mean Intersection over Union (mIOU) metric.

![6_image_0.png](6_image_0.png)

Figure 4: **Detailed textual prompts IMProv performance**. We experiment with textual prompts with varied amounts of detail, e.g., from no text to instructions that include the task, specific locations, and object class names. See examples of full-text prompts in Table 1. Please see the supplementary material for more results.

Colorization. The goal is to map a gray-scale image to a color image. The example pair is a gray-scale image and the corresponding color image. We define the textual prompt to be: "Colorization results: Left - input image, Right - Colorized image of {class}", where {class} is the class of the object present in the image. We randomly sampled 1000 example pairs and image queries from the ImageNet (Russakovsky et al., 2015) validation set and converted them to gray-scale to obtain a gray-scale and a color version of each image. MSE and LPIPS (Zhang et al., 2018) results are reported in Table 3.

Other Tasks. We evaluate our models on the dataset created by Wang et al. (2023c), which includes around 310k image-caption pairs that were automatically annotated using state-of-the-art pre-trained models for a wide range of vision tasks. Specifically, each image is annotated with depth and normal maps obtained from MiDaS (Ranftl et al., 2022), segmentation maps obtained from UniFormer (Li et al., 2022), and object boundary maps detected by HED (Xie & Tu, 2015). For each vision task X, we evaluate our model on two tasks - X-to-image and image-to-X. As each task has a different evaluation metric, and as our model produces image outputs, we simplify the evaluation by comparing the generated image to the rendered annotation of the task by calculating LPIPS (Zhang et al., 2018). We report the results in Table 5 and plot qualitative results in Figure 3.

## 4 Analysis

Dataset Ablation.
We report our results and compare them with prior works in Table 3. IMProv trained on a combination of LAION-400M and our S2CV dataset outperforms the prior work (Bar et al., 2022), trained solely on the CVF dataset, by more than 12 points in mIOU. This demonstrates that IMProv benefits from training on additional amounts of unlabeled images.

Table 4: **Text helps.** Adding textual prompts to "*Random Class*" visual prompts improves Foreground Segmentation.

| Model | Avg. |
|----------------------|--------|
| MAE-VQGAN (CVF) | 23.52 |
| IMProv (CCVF) | 26.13 |
| IMProv (CCVF + LAION) | 36.29 |

Textual Prompts Ablation. We experiment with textual and visual prompts that have different relevance to the query image and task. For the visual prompt, we choose the input-output examples using three different retrieval strategies: (1) Random Class, a random example pair in which the class of the foreground object is chosen randomly; (2) Same Class, where a random example pair of the same class is chosen; and (3) Nearest Neighbor, where the example is chosen via nearest-neighbor retrieval from all the images with the same foreground object class, using the model from Zhang et al. (2023). We evaluate our IMProv model on Pascal-5i with and without a textual prompt that contains Task, Location, and Class Name. First, we compare against Bar et al. (2022) under the (1) Random Class visual prompt setting and report results in Table 4. In this setting, the visual prompts describe the task (e.g., segmentation) but are not curated from the same class (the setting in Table 3), or chosen via nearest neighbors as in Zhang et al. (2023).

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 6: **Textual prompts effect on text inpainting.** When there is an inconsistency between the textual and visual prompt, the model may follow the textual prompt. IMProv can better follow both visual and text instructions compared to Stable Diffusion (SD) on letter generation.
Using non-curated visual prompts is the most realistic setting, as finding a perfectly aligned visual example might be as hard as solving the original task. The results show that conditioning on text improves the average mIoU by 3 points when using reasonable, non-curated visual prompts. Moreover, IMProv trained on a combination of LAION and our CCVF dataset further boosts the mIoU by 10 points. In Figure 5 we plot the results under different textual prompts and find that the textual prompts play a big role in the performance of the model. To dive deeper into the effect of the textual prompt, we plot the relation between textual and visual prompts in Figure 4. It shows that adding text prompts improves the results for any type of visual prompt, from the least related Random Class examples to the most relevant Nearest Neighbors examples. In addition, we find that by using text, it is possible to achieve similar performance with lower-quality visual examples (using the Same Class example rather than the Nearest Neighbor (Zhang et al., 2023)). Similarly, higher-quality visual examples improve the results for all the tested textual prompts. Interestingly, the results suggest a trade-off between the two modalities: high-quality textual prompts can alleviate the need for carefully chosen visual prompts, and vice versa.

![7_image_2.png](7_image_2.png)

Figure 5: Using textual prompts improves performance and reduces the need for careful selection of visual prompts.

Table 5: Performance on 8 held-out computer vision tasks (LPIPS; lower is better).

| Model | Depth→Image | Image→Depth | HED→Image | Image→HED | Seg→Image | Image→Seg | Normals→Image | Image→Normals |
|---|---|---|---|---|---|---|---|---|
| Supervised ICL (InstructPix2Pix) | 0.65 | 0.60 | 0.59 | 0.62 | 0.64 | 0.61 | 0.62 | 0.55 |
| IMProv (S2CV+LAION) | 0.61 | 0.52 | 0.51 | 0.46 | 0.59 | 0.50 | 0.56 | 0.48 |
| IMProv (S2CV+LAION+InstructPix2Pix) | 0.55 | 0.43 | 0.47 | 0.37 | 0.54 | 0.46 | 0.51 | 0.44 |

Table 6: **Comparison to fine tuning and classic 1-shot segmentation baselines.** MAE-VQGAN image query and output resolution is 111×111. CyCTR and FWB resolutions are 473×473 and 512×512; both approaches utilize Pascal-5i labeled base-classes data.

| Pretraining | # Labeled Images | # Shots | Model | Split 0 | Split 1 | Split 2 | Split 3 |
|---|---|---|---|---|---|---|---|
| Unlabeled ImageNet | 1 | 1 | Finetuned MAE (He et al., 2021) | 11.1 | 13.4 | 13.0 | 12.3 |
| Unlabeled ImageNet | 4 | 4 | Finetuned MAE (He et al., 2021) | 12.9 | 15.8 | 14.3 | 15.0 |
| Unlabeled ImageNet | 16 | 16 | Finetuned MAE (He et al., 2021) | 13.7 | 16.1 | 16.8 | 17.1 |
| CVF + IN | 1 | 1 | MAE-VQGAN (Bar et al., 2022) | 32.5 | 33.8 | 32.7 | 27.2 |
| CCVF + LAION | 1 | 1 | IMProv | 45.6 | 46.6 | 45.3 | 39.1 |
| S2CV + LAION | 1 | 1 | IMProv | 49.1 | 49.7 | 45.5 | 42.1 |
| Labeled Pascal-5i (Segmentation masks) | 2086−5883 | 1 | FWB (Nguyen & Todorovic, 2019) | 51.3 | 64.5 | 56.7 | 52.2 |
| Labeled Pascal-5i (Segmentation masks) | 2086−5883 | 1 | CyCTR (Zhang et al., 2021) | 67.2 | 71.1 | 57.6 | 59.0 |
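The textual prompt variants compared in this ablation follow fixed templates (the exact strings are listed in Appendix B). A small sketch of how the three variants for Foreground Segmentation can be generated; the function name and interface are ours:

```python
def make_text_prompt(task_class=None, include_text=True):
    """Build the textual prompt used in the Foreground Segmentation
    ablation: no text, task+location only, or task+location+class.
    `task_class` is the foreground object class name, e.g. "dog"."""
    if not include_text:
        return None  # vision-only prompting
    base = ("Left - input image, right - "
            "Black and white foreground/background segmentation")
    if task_class is None:
        return base                      # task + location, no class
    return f"{base} of {task_class}"     # task + location + class
```

Richer variants (e.g., with the "rectangle shape" suffix for detection) follow the same pattern of appending task-specific detail to the base template.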
Moreover, as shown in Figure 6, when the textual prompt is inconsistent with the visual example, the model may follow the more certain textual instruction. We include additional results for different combinations of visual and textual prompts in the supplementary material.

Does Structured Data Improve In-Context Learning? The key insight of our approach is to train IMProv on *unstructured* data in a fully unsupervised manner, without parsing the image or the text. Here, we experiment with training IMProv on additional *structured* data in a fully supervised manner. We use the dataset of Brooks et al. (2022), which consists of 310k input-output image editing pairs and their corresponding descriptions. For training, we use random pairs as our input-output examples and embed them into a grid structure, similar to the structure we use at test time. The grid images that we construct for training consist of 1 × 2 and 2 × 2 grids, created by randomly selecting 1 or 2 input-output examples for each caption in the original dataset. We test our models on a held-out set of vision tasks. As shown in Table 5, we find that training on structured supervised data alone leads to poor generalization and ICL performance on the test tasks. Training on both unstructured S2CV and LAION-400M, together with the structured data, improves ICL results on the test tasks compared to our base model.

Comparison to Finetuning and Few-Shot Baselines. We compare IMProv to classic 1-shot baselines, which we view as an upper bound for our approach. Approaches like FWB (Nguyen & Todorovic, 2019) and CyCTR (Zhang et al., 2021) utilize a fully labeled base-classes train set (2086 to 5883 images on the different Pascal-5i splits) with architectures that are optimized for foreground segmentation (e.g., by utilizing higher resolutions). We also compare to MAE-VQGAN (Bar et al., 2022), which performs visual prompting without text, and to finetuning baselines with K = {1, 4, 16} training examples for each target class.
The results in Table 6 indicate that IMProv closes over 40% of the accuracy gap between MAE-VQGAN and supervised one-shot approaches. This demonstrates the potential of our approach to scale with more data.

Comparison to existing text-to-image works. We also compare IMProv against state-of-the-art text-to-image generative models, i.e., Stable Diffusion (Rombach et al., 2021). We input the same text prompt and visual prompt to the Stable Diffusion version 1 (SD1) and 2 (SD2) inpainting models, with 50 inference steps and a classifier-free guidance scale of 7.5. The quantitative results are shown in Figure 7. Compared to IMProv, SD fails to generate black/white segmentation masks and cannot leverage the visual prompt to find the corresponding objects. We also compare them on the letter generation task in Figure 6. Although SD sometimes generates the correct letter, it still fails to follow the visual prompt to generate a white background and red font color.

![9_image_0.png](9_image_0.png)

![10_image_0.png](10_image_0.png)

Figure 8: **Comparison with other visual prompt methods**. We compare IMProv with Painter (Wang et al., 2023a) on the task of colorization. Neither model is trained explicitly with colorization supervision. Painter fails on some images, generating segmentation masks instead, while ours generates reasonable colorized images.

![10_image_1.png](10_image_1.png)

Figure 9: **Failure case analysis**. Compared to Bar et al. (2022), IMProv succeeds in some cases to some extent, e.g., "Task ambiguity", but still fails on one of the "Non-aligned input-output" cases.

Comparison to a supervised approach. Compared to a supervised prompting approach like Painter (Wang et al., 2023a), IMProv can generalize to a larger variety of tasks.
We demonstrate this in Figure 8, showing that while Painter fails to adapt to the colorization task, IMProv performs reasonably well.

Failure case analysis. We compare with the failure cases of Bar et al. (2022) in Figure 9. IMProv successfully addresses some of them with text prompts, e.g., the cat colorization and bottle segmentation in "Task ambiguity". Our model also successfully generates the image that moves the orange to the center, though it fails on the other "Non-aligned input-output" example.

## 5 Related Work

Prompting in NLP. The ability to prompt a language model to solve a specific task, also known as ICL, is a recently discovered property of generative language models that were trained on a large corpus of text (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al., 2022; Bubeck et al., 2023). Brown et al. (2020) have shown that existing NLP tasks can be described in unstructured text and then fed into a large language model to complete the missing part without any finetuning (Radford et al., 2019; Brown et al., 2020). More recently, different approaches to prompting have emerged, including Prompt Engineering (Brown et al., 2020; Lu et al., 2021), Prompt Ensembling (Jiang et al., 2020), Prompt Prefix Tuning (Li & Liang, 2021; Lester et al., 2021), and Chain of Thought Prompting (Wei et al., 2022). The Flamingo model (Alayrac et al., 2022) extends language-only models by conditioning on both images and text. Our approach is different in that our model outputs pixels and not text. Therefore, it is suited to solve a variety of computer vision tasks that can be represented in pixel space, like semantic segmentation or image colorization.

Visual Prompting. Recently, multiple papers have proposed methods to visually prompt computer vision models (Bahng et al., 2022; Jia et al., 2022; Bar et al., 2022). Bahng et al. (2022) propose to add a noise tensor to the input image to adapt the model to different tasks, while Jia et al.
(2022) propose to append learned tokens to Vision Transformers (Dosovitskiy et al., 2020), drawing motivation from prefix tuning in NLP (Li & Liang, 2021). These two approaches are trained on supervised data and thus struggle to scale and generalize to new tasks. Bar et al. (2022) take a different approach and train on unstructured crops from computer vision paper figures. In this approach, visual prompting is viewed as an image inpainting task: an image grid is created containing input-output examples and a new input image, and the goal of the inpainting model is to complete the output in a way that is consistent with the input. We follow a similar definition of a visual prompt as Bar et al. (2022); however, we propose to additionally condition the model on textual input, which can help resolve ambiguities in the task description and more efficiently guide the visual model toward performing the desired task.

Image Inpainting and Image Synthesis. Early image inpainting methods relied on the input image itself for inpainting (Efros & Leung, 1999; Bertalmio et al., 2000; Criminisi et al., 2004; Barnes et al., 2009), whereas more recent works leveraged image datasets for this purpose (Hays & Efros, 2007; Pathak et al., 2016; Yang et al., 2017; Liu et al., 2018b;a). Lately, diffusion models have demonstrated large success in image inpainting and image synthesis (Ramesh et al., 2022; Rombach et al., 2021), as have other popular transformer-based methods (Chen et al., 2020; Yu et al., 2021b; Esser et al., 2021; Yu et al., 2021a; Chang et al., 2022). Several of these approaches rely on discrete latent codebooks, which induce a distribution over possible completions (Van Den Oord et al., 2017; Ramesh et al., 2021; Esser et al., 2021; Yu et al., 2021a; Chang et al., 2022). For instance, Esser et al. (2021) and Yu et al. (2021a) proposed to synthesize images using an autoregressive model on a codebook representation, while Chang et al.
(2022) applied iterative parallel decoding of the tokens. Some approaches also support image synthesis with text conditioning - MUSE (Chang et al., 2023), for example, is a transformer-based model that applies cross-attention from image embeddings (VQGAN (Esser et al., 2021)) to the text embeddings extracted from a pre-trained model (e.g., T5 (Raffel et al., 2020)) to condition on text. Our model is conceptually similar to MUSE (Chang et al., 2023); however, we focus on inpainting grid-like visual prompts that require reasoning across multiple sub-images and the input text.

Few-Shot Learning. In this setting, the algorithm is trained on a labeled dataset of base classes, from which it should transfer to a set of novel classes given only a few training examples (e.g., 10 or 30) (Nguyen & Todorovic, 2019; Kang et al., 2019; Liu et al., 2020; Wang et al., 2020; Yang et al., 2020; Tian et al., 2020; Zhang et al., 2021; Bar et al., 2021). Unlike few-shot approaches, we do not assume access to a large training set of base classes, and our architecture is not task-specific. Our approach is few-shot only in the sense that the visual part of our prompt usually contains one or two task examples.

## 6 Discussion

We presented an approach for multimodal prompting of inpainting models. To unlock in-context learning capabilities in such models, we had to collect a specific dataset of figures with associated captions. To further scale this approach, we believe that other sources of unstructured data - visual, textual, and multimodal - should be incorporated during the training phase. To understand the feasibility of such a data collection effort, one must be able to predict and quantify the effect of different dataset sizes and types on the downstream in-context learning capabilities of models trained on them. We plan to investigate this in future work.
## References Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. *Advances in Neural Information Processing Systems*, 35:23716–23736, 2022. 12 Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, and Phillip Isola. Exploring visual prompts for adapting large-scale models. *arXiv preprint arXiv:2203.17274*, 1(3):4, 2022. 12 Amir Bar, Xin Wang, Vadim Kantorov, Colorado J Reed, Roei Herzig, Gal Chechik, Anna Rohrbach, Trevor Darrell, and Amir Globerson. Detreg: Unsupervised pretraining with region priors for object detection. arXiv preprint arXiv:2106.04550, 2021. 12 Amir Bar, Yossi Gandelsman, Trevor Darrell, Amir Globerson, and Alexei Efros. Visual prompting via image inpainting. *Advances in Neural Information Processing Systems*, 35:25005–25017, 2022. 1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 19 Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. *ACM Trans. Graph.*, 28(3):24, 2009. 12 Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 417–424, 2000. 12 Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. *arXiv preprint arXiv:2211.09800*, 2022. 9 Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. *CoRR*, abs/2005.14165, 2020. URL https://arxiv.org/abs/2005.14165. 1, 11, 12 Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. *arXiv preprint arXiv:2303.12712*, 2023. 11 Justine Cassell, David McNeill, and Karl-Erik McCullough. Speech-gesture mismatches: Evidence for one underlying representation of linguistic and nonlinguistic information. *Pragmatics & cognition*, 7(1):1–34, 1999. 2 Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre Richemond, James McClelland, and Felix Hill. Data distributional properties drive emergent in-context learning in transformers. Advances in Neural Information Processing Systems, 35:18878–18891, 2022. 1 Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. *arXiv preprint arXiv:2202.04200*, 2022. 12 Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, and Dilip Krishnan. Muse: Text-to-image generation via masked generative transformers, 2023. 12 Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *International Conference on Machine Learning*, pp. 1691–1703. PMLR, 2020. 12 Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. *arXiv preprint arXiv:2204.02311*, 2022. 
1, 11 Antonio Criminisi, Patrick Pérez, and Kentaro Toyama. Region filling and object removal by exemplar-based image inpainting. *IEEE Transactions on image processing*, 13(9):1200–1212, 2004. 12 Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. 3, 12 Alexei A. Efros and Thomas K. Leung. Texture synthesis by non-parametric sampling. In *IEEE International* Conference on Computer Vision, pp. 1033–1038, Corfu, Greece, September 1999. 12 Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 12873–12883, 2021. 3, 5, 12 M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. *International Journal of Computer Vision*, 111(1):98–136, January 2015. 6 Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé Iii, and Kate Crawford. Datasheets for datasets. *Communications of the ACM*, 64(12):86–92, 2021. 5 James Hays and Alexei A Efros. Scene completion using millions of photographs. *ACM Transactions on* Graphics (SIGGRAPH 2007), 26(3), 2007. 12 Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. *CoRR*, abs/2111.06377, 2021. URL https://arxiv.org/abs/2111.06377. 3, 5, 9 Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. Visual prompt tuning. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII, pp. 709–727. Springer, 2022. 
12 Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. How can we know what language models know? *Transactions of the Association for Computational Linguistics*, 8:423–438, 2020. 12 Bingyi Kang, Zhuang Liu, Xin Wang, Fisher Yu, Jiashi Feng, and Trevor Darrell. Few-shot object detection via feature reweighting. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8420–8429, 2019. 12 Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. CoRR, abs/2104.08691, 2021. URL https://arxiv.org/abs/2104.08691. 12 Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, and Yu Qiao. Uniformer: Unified transformer for efficient spatiotemporal representation learning. *CoRR*, abs/2201.04676, 2022. URL https://arxiv.org/abs/2201.04676. 7 Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. *arXiv preprint* arXiv:2101.00190, 2021. 12 Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In *Proceedings of the European Conference on* Computer Vision (ECCV), September 2018a. 12 Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In *Proceedings of the European conference on* computer vision (ECCV), pp. 85–100, 2018b. 12 Yongfei Liu, Xiangyi Zhang, Songyang Zhang, and Xuming He. Part-aware prototype network for few-shot semantic segmentation. In *European Conference on Computer Vision*, pp. 142–158. Springer, 2020. 12 Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017. 5 Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. *arXiv preprint arXiv:2104.08786*, 2021. 
12 David McNeill. *Gesture and thought*. University of Chicago press, 2019. 2 Khoi Nguyen and Sinisa Todorovic. Feature weighting and boosting for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 622–631, 2019. 9, 12 Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In *Proceedings of the IEEE conference on computer vision and pattern* recognition, pp. 2536–2544, 2016. 12 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019. 12 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 8748–8763. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr. press/v139/radford21a.html. 3, 17 Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2020. 12 Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In *International Conference on Machine Learning*, pp. 8821–8831. PMLR, 2021. 12 Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents, 2022. URL https://arxiv.org/abs/2204.06125. 12 René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. 
Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3), 2022. 7 Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021. 9, 12 Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. *International Journal of Computer Vision (IJCV)*, 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. 7 Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. *arXiv preprint arXiv:2111.02114*, 2021. 2, 5 Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. Laion-5b: An open large-scale dataset for training next generation image-text models, 2022. 17 Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, and Byron Boots. One-shot learning for semantic segmentation. *CoRR*, abs/1709.03410, 2017. URL http://arxiv.org/abs/1709.03410. 5 Zhuotao Tian, Hengshuang Zhao, Michelle Shu, Zhicheng Yang, Ruiyu Li, and Jiaya Jia. Prior guided feature enrichment network for few-shot segmentation. IEEE transactions on pattern analysis and machine intelligence, 2020. 12 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. *arXiv preprint arXiv:2302.13971*, 2023. 
1, 11 Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. 12 Xin Wang, Thomas E Huang, Trevor Darrell, Joseph E Gonzalez, and Fisher Yu. Frustratingly simple few-shot object detection. *arXiv preprint arXiv:2003.06957*, 2020. 12 Xinlong Wang, Wen Wang, Yue Cao, Chunhua Shen, and Tiejun Huang. Images speak in images: A generalist painter for in-context visual learning. *arXiv preprint arXiv:2212.02499*, 2022. 1 Xinlong Wang, Wen Wang, Yue Cao, Chunhua Shen, and Tiejun Huang. Images speak in images: A generalist painter for in-context visual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6830–6839, 2023a. 11, 23 Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, and Tiejun Huang. Seggpt: Segmenting everything in context. *arXiv preprint arXiv:2304.03284*, 2023b. 1 Zhendong Wang, Yifan Jiang, Yadong Lu, Yelong Shen, Pengcheng He, Weizhu Chen, Zhangyang Wang, and Mingyuan Zhou. In-context learning unlocked for diffusion models, 2023c. 7 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. *arXiv preprint arXiv:2201.11903*, 2022. 12 Saining Xie and Zhuowen Tu. Holistically-nested edge detection. *CoRR*, abs/1504.06375, 2015. URL http://arxiv.org/abs/1504.06375. 7 Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-context learning as implicit bayesian inference. *arXiv preprint arXiv:2111.02080*, 2021. 1 Boyu Yang, Chang Liu, Bohao Li, Jianbin Jiao, and Qixiang Ye. Prototype mixture models for few-shot semantic segmentation. In *European Conference on Computer Vision*, pp. 763–778. Springer, 2020. 12 Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multi-scale neural patch synthesis. 
In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 6721–6729, 2017.

Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. *arXiv preprint arXiv:2110.04627*, 2021a.

Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jianxiong Pan, Kaiwen Cui, Shijian Lu, Feiying Ma, Xuansong Xie, and Chunyan Miao. Diverse image inpainting with bidirectional and autoregressive transformers. In *Proceedings of the 29th ACM International Conference on Multimedia*, pp. 69–78, 2021b.

Gengwei Zhang, Guoliang Kang, Yi Yang, and Yunchao Wei. Few-shot segmentation via cycle-consistent transformer. *Advances in Neural Information Processing Systems*, 34, 2021.

Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 586–595, 2018.

Yuanhan Zhang, Kaiyang Zhou, and Ziwei Liu. What makes good examples for visual in-context learning?, 2023.

## Appendix

We include more information about the experimental study, as well as the Captioned Computer Vision Figures (CCVF) dataset datasheet.

## A Broader Impact Statement

Using in-context learning to solve various vision tasks with one model has the potential to reduce the cost of training and inference of future models, facilitate access to this technology, and democratize its exploration. Nevertheless, while scaling the training dataset to a large corpus of non-curated text-image pairs improves the in-context learning results, it might also introduce potential biases that must be mitigated prior to any commercial deployment of the model. A comprehensive discussion about ethical considerations can be found in Schuhmann et al. (2022).
## B Experiments And Results

We train IMProv on CCVF/S2CV and LAION-400M and evaluate in-context learning. We first visualize the S2CV dataset in Figure 10. We then explain the details of our experiments as follows.

Multimodal Prompting. Inspired by Zhang et al. (2023), we experiment with prompts that have different degrees of relevance to the input query image; we include visualization examples in Figure 11. For the visual prompt, we use five different retrieval strategies for choosing input-output example pairs from Pascal-5i:

- No input-output visual example ("*No Example*")
- A random input-output pair in which the class of the foreground object differs from that in the query image ("*Different Class Random Sample*")
- A random input-output pair with the same foreground object class as in the query image ("*Same Class Random Sample*")
- The nearest-neighbor input-output pair in CLIP embedding space (Radford et al., 2021), retrieved from all the images with the same foreground object class ("*Same Class CLIP NN*")
- The nearest neighbor retrieved from all the images with the same foreground object class according to the model provided by Zhang et al. (2023) ("*Same Class Zhang et al. (2023) NN*"). This strategy was trained to optimize in-context learning results.

For the text prompt, we experimented with three prompting variations:

- No text prompt
- Text prompt with location and task information but without class - "Left - input image, right - Black and white foreground/background segmentation"
- Text prompt with location, task and class information - "Left - input image, right - Black and white foreground/background segmentation of {class}"

Table 9 and Figure 4 present quantitative and qualitative results for different combinations of visual and textual prompts. We find that more informative textual prompts improve the results for all visual prompts. Similarly, higher-quality visual examples improve the results for all the tested textual prompts.
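The two nearest-neighbor strategies above amount to a similarity lookup in an embedding space, restricted to candidates of the query's class. A minimal numpy sketch (the embeddings, class labels, and function name are illustrative stand-ins for precomputed CLIP features, not the actual Pascal-5i retrieval code):

```python
import numpy as np

def retrieve_same_class_nn(query_emb, support_embs, support_classes, query_class):
    """Return the index of the support pair whose embedding (e.g. a CLIP feature)
    has the highest cosine similarity to the query, restricted to the query's class."""
    sims = support_embs @ query_emb / (
        np.linalg.norm(support_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8
    )
    sims[support_classes != query_class] = -np.inf  # mask out other classes
    return int(np.argmax(sims))

# Toy example: three support pairs, two classes, 2-d embeddings.
support_embs = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
support_classes = np.array([0, 1, 1])
query_emb = np.array([0.1, 1.0])
best = retrieve_same_class_nn(query_emb, support_embs, support_classes, query_class=1)
```

A "Same Class Random Sample" baseline would simply draw uniformly from the class-masked candidates instead of taking the argmax.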
Interestingly, the results suggest a trade-off between the two modalities: high-quality textual prompts can alleviate the need for carefully chosen visual prompts, and vice versa. We argue that using non-curated visual prompts is most realistic, as finding a perfectly aligned visual example might be as hard as solving the original input.

![17_image_0.png](17_image_0.png)

In Table 7 we therefore report the mIoU on Pascal VOC segmentation, where we compare against Bar et al. (2022) under the "*Different Class Random Sample*" visual prompt setting. In this setting, the visual prompts describe the task (e.g., segmentation) but are not curated from the same class (as in Table 2) or chosen via nearest neighbors. The result shows that conditioning on text significantly improves the prompting performance when using reasonable non-curated visual prompts.

Table 7: **Improvement of textual prompt.** Comparison under the "*Different Class Random Sample*" visual prompting setting; IMProv outperforms MAE-VQGAN with a textual prompt.

| Model | Split 0 | Split 1 | Split 2 | Split 3 | avg |
|----------------------|-----------|-----------|-----------|-----------|-------|
| MAE-VQGAN (CVF) | 24.66 | 25.15 | 24.36 | 19.91 | 23.52 |
| IMProv (CCVF) | 26.15 | 27.38 | 29.37 | 21.62 | 26.13 |
| IMProv (CCVF + LAION) | 37.09 | 40.68 | 36.91 | 30.49 | 36.29 |

Table 8: **Different grid size.**

| Number of visual prompts | 1 | 2 | 3 | 5 | 7 |
|---------------------------|-------|-------|-------|-------|-------|
| Grid 2x2 | 42.68 | - | - | - | - |
| Grid 3x3 | 37.21 | 39.78 | - | - | - |
| Grid 4x4 | 16.42 | 18.34 | 30.78 | 32.56 | 33.11 |

Table 9: **Visual and Textual Prompts Combination.** We evaluate our model on different combinations of visual prompts and textual prompts, with varying relations to the query images.

| Visual Prompt | Text Prompt | w/ class name | No Example | Different Class Random Sample | Same Class Random Sample | Same Class CLIP NN | Same Class Zhang et al. (2023) NN |
|---|---|---|---|---|---|---|---|
| | ✓ | | 18.39 | - | - | - | - |
| | ✓ | ✓ | 18.75 | - | - | - | - |
| ✓ | | | - | 26.73 | 31.25 | 38.14 | 39.07 |
| ✓ | ✓ | | - | 34.50 | 36.62 | 41.30 | 42.17 |
| ✓ | ✓ | ✓ | - | 36.29 | 39.33 | 41.99 | 42.68 |

Different grid size. With a 2x2 grid, only one visual prompt is applicable. We therefore also report different grid sizes with different numbers of visual prompts. We report the results on Pascal VOC segmentation in Table 8 and Figure 16. As we increase the number of visual prompt examples, the mIoU increases. Due to the 224x224 resolution limit, each image becomes smaller when the grid size is larger, so one-shot accuracy drops.

Trade-off between the number of visual prompts and textual prompts. We plot the mIoU w.r.t. the number of support examples in Figure 12. We run the experiments under the grid 4x4 setting of Table 8, with "Random Class" support images. Similar to Bar et al. (2022), as we increase the number of support visual examples, the mIoU goes up. It is worth noting that when there are only 1 or 2 examples, the text prompt does not help, because the model cannot generate meaningful segmentations with such a small number of visual supports due to the resolution (see Figure 16 for details). When the number of visual supports is greater than 3, the text prompt consistently improves the results. This implies that our text prompt is orthogonal to the number of visual support pairs.

![18_image_0.png](18_image_0.png)

Figure 12: **Trade-off between the number of visual prompts and textual prompts.**

Other Vision Tasks. We provide more qualitative results for both Image-to-X and X-to-Image tasks in Figure 13 and Figure 14 respectively, where "X" can be any of the following: segmentation, edges, depth, and normals.
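The grid-based prompting used throughout these experiments can be sketched as pasting the example input-output pair and the query into one canvas, leaving the last cell masked for the model to inpaint. A toy numpy sketch with single-channel "images" (the helper and cell layout are illustrative assumptions; the real model operates on 224x224 RGB crops, and larger grids shrink each cell accordingly):

```python
import numpy as np

def make_prompt_grid(example, query, cell=4, mask_value=0.5):
    """Build a 2x2 prompt canvas: example input (top-left), example output
    (top-right), query input (bottom-left); the bottom-right cell stays masked
    and is the region the model inpaints."""
    canvas = np.full((2 * cell, 2 * cell), mask_value)
    ex_in, ex_out = example
    canvas[0:cell, 0:cell] = ex_in
    canvas[0:cell, cell:2 * cell] = ex_out
    canvas[cell:2 * cell, 0:cell] = query
    # bottom-right quadrant is left at mask_value on purpose
    return canvas

example = (np.ones((4, 4)), np.zeros((4, 4)))
query = np.full((4, 4), 0.25)
canvas = make_prompt_grid(example, query)
```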
Generalizability to new vision tasks. Our model was never trained on specific vision tasks; instead, it was trained on unstructured and non-annotated figures from computer vision papers. The data we train on is not constructed explicitly for specific tasks, and even if the model is trained on some task-specific figures, they are usually not presented the way we test our model, as we randomly crop the figures during training. Therefore, we believe the test tasks are generalized from combinations of different tasks instead of being replicated from training tasks. Similarly to (Bar et al., 2022), our model can generalize to diverse unseen tasks, as shown in Figure 15.

![19_image_0.png](19_image_0.png)

Figure caption (fragment): ...relevance. The result is marked in red.

![20_image_0.png](20_image_0.png)

![21_image_0.png](21_image_0.png)

![22_image_0.png](22_image_0.png)

Figure 15: **More vision tasks**. We show IMProv is capable of other vision tasks, including edge detection, deblurring, image enhancing, style transfer, and image-to-sketch.

![22_image_1.png](22_image_1.png)

Figure 16: **Different grid size**. IMProv also supports grid sizes other than the 2x2 grid. We show segmentation results with 3x3 and 4x4 grids with different numbers of visual prompts. Note that the white regions are discarded and not input to the model.

Table 10: **Dataset ablation.**

| Model | Avg. |
|------------------------|-------|
| IMProv (CCVF + LAION) | 36.29 |
| IMProv (S2CV + LAION) | 38.07 |

Dataset Ablation. We additionally report the results of IMProv (S2CV + LAION) in Table 10, under the same "Random Class" setting. We report the average mIoU over 4 splits. IMProv (S2CV + LAION) outperforms IMProv (CCVF + LAION) by 1.7 points, which justifies the effectiveness of our proposed S2CV dataset.

Compare with Painter. We report the comparison of Painter (Wang et al., 2023a) and our method in Table 11. We report the segmentation results on the FSS-1000 dataset. Our model outperforms Painter by a large margin.
It is also worth noting that changing the training dataset from CCVF to S2CV improves the mIoU significantly, which further indicates the effectiveness of our proposed S2CV dataset.

Table 11: **Compare with Painter.**

| Model | FSS-1000 mIoU |
|---------------|-----------------|
| VisualPrompt | 58.3 |
| Painter | 62.3 |
| IMProv (CCVF) | 62.8 |
| IMProv (S2CV) | 68.9 |
Review 1:

Summary: This paper presents an approach to solve unseen computer vision tasks at test time. The contributions are twofold: the paper presents a new dataset of image-text pairs from Semantic Scholar, and it presents a method -- IMProv -- to inpaint masked regions given the rest of the image and a caption. The image here contains an in-context example in a 2x2 grid. The method is evaluated on various tasks such as foreground/background segmentation, edge and depth estimation.

Strengths and Weaknesses:

Strengths:
* The proposed dataset is interesting.
* The proposed approach shows promising results against prior work such as MAE-VQGAN, Stable Diffusion, and classic 1-shot segmentation baselines. It also shows promising performance against prior visual prompting methods such as Painter.
* The paper is well written and easy to understand.
* The paper includes a variety of ablations.

Weaknesses:
* Novelty: The key differences of the proposed method IMProv to Painter (Wang et al. 2023) are not clear. Painter also uses inpainting on a 2x2 grid. Both models are trained using masked image modelling.
* Do the results in Table 3 use the nearest-neighbor in-context examples? What happens when an appropriate nearest-neighbor in-context example cannot be found? It would be better to report the results both with random in-context and nearest-neighbor in-context examples.
* Comparison to Painter: In Table 3 it would be helpful to compare to Painter (Wang et al. 2023), as the proposed method and Painter are very similar.
* Comparison to supervised SOTA: Table 3 should also include the best supervised SOTA approaches, as results in Fig. 3 confirm that the proposed approach displays very weak performance on a variety of tasks. This will confirm whether the proposed approach is usable for real-world tasks.
* Image-to-depth color bleeding: From Fig. 3 it appears that, e.g., depth maps are incorrectly predicted -- there are color pixels instead of grayscale values.
How are such invalid cases handled?
* Can the proposed approach be extended to (multi-class) semantic segmentation, or is it constrained to binary background/foreground segmentation?

Requested Changes:
* The paper should discuss its novel aspects in more detail. In general, the introduction of the paper should be clearer, including the "teaser" figure in Figure 1.
* The paper should expand Table 3 to include the evaluations discussed above.

Broader Impact Concerns:
* The paper should include a Broader Impact Statement clarifying any ethical concerns regarding scraping data from Semantic Scholar.

==================================================

Review 2:

Summary: This paper is a multimodal version of Visual Prompting via Image Inpainting, which introduces a new multimodal-prompted, inpainting-based, self-supervised in-context learning framework for computer vision. The prompt comes from two modalities: 1) the visual prompt comes from unmasked patches of the image; 2) the text prompt comes from annotations (task, location, class names). The authors collect a new dataset from Semantic Scholar and train a MAE-VQGAN model from scratch for this purpose. After training on collected CV paper images and LAION images, the model exhibits capabilities for various CV tasks.

Strengths and Weaknesses:

Strengths:
1) It is good to see that using only in-the-wild LAION data and CV paper data, and training from scratch instead of initializing from pre-trained generative models, the model can achieve good results.
2) Using VQ-GAN for generation offers better alignment and controllability than a diffusion model, as shown in Figures 6 and 7.
3) The text prompt helps to solve some hard cases of MAE-VQGAN, as shown in Figure 9.

Weaknesses:
1) Like previous work, this type of method is trained one-shot (2x2 grids), which is hard to extend naturally to long-context in-context learning, and the number of shots is also limited by the resolution.
Summary: The self-supervised model's performance is still far from the supervised ICL baseline. I acknowledge that the contribution of this paper is good, but there is still a long way to go to achieve language-model-level in-context learning capability using in-the-wild data. I support this paper to be accepted; the contribution is good and the exploration is necessary. I appreciate the solid experiments in this paper. I hope the authors make some changes and provide more experimental results, since this will provide beneficial insights into future research on this unsupervised vision in-context learning area.

Requested Changes:
1) Better discuss the difference and relation with Visual Prompting via Image Inpainting.
2) Try more annotated supervised data (SAM / Depth Anything / Grounded DINO / HED, etc.) for supervised ICL; I don't think InstructPix2Pix is suitable for depth/seg/hed/norm.
3) Is it possible to also fine-tune an SD inpainting model with your data recipe? It would show the benefit of using MAE-VQGAN.
4) Can the authors provide some results like the finetuning baseline in Table 6 with K=1,4,16 for IMProv, to see if it can scale to more shots?

Broader Impact Concerns: No ethical concerns.

==================================================

Review 3:

Summary: This paper proposes a generative model that is able to in-context learn visual tasks from multimodal prompts. The inputs are a textual description of a visual task, a few input-output visual examples, or both; the model in-context learns to solve the task for a new test input. Experiments show that training this proposed model with text conditioning and scaling the dataset size improves in-context learning for computer vision tasks by a large margin. Extensive experiments demonstrate that vision and language prompts are complementary.

Strengths and Weaknesses:

**Strengths**
1. This paper is clearly written and easy to understand.
2. There are extensive ablation studies to compare different training factors.

**Weakness**
1.
The main concern is that the comparisons are not fair. In Table 3, all the baselines are trained with CVF, while the proposed model is trained with a much larger training set (i.e., S2CV+LAION). The same is true for Table 6.
2. While the authors show that providing examples is beneficial to model inference, the vision encoder takes a fixed image size. This may lead to degraded performance when increasing the number of visual prompts. Such an issue may limit the practical usage of the proposed model.
3. It is not clear how to select in-context learning examples. Are they randomly selected or based on certain criteria?
4. Compared to the most relevant baseline MAE-VQGAN, the main contribution seems to be that adding additional captions can provide guidance during model inference (as shown in Table 4). This observation is straightforward, which may make the argument for in-context learning weak.
5. While the authors provide analysis about S2CV and CCVF, it is not clear how these types of datasets can help in-context learning.

Requested Changes: In general, the authors provide a simple yet effective approach to address how to apply in-context learning to vision tasks. The collected dataset can contribute to the multimodal understanding community if it can be open-sourced. Based on the above weaknesses, the concerns are about the unfair comparisons, the limitations of this approach, and the main contribution.

Broader Impact Concerns: The reviewer did not see any obvious ethical concerns.

==================================================
# Diff-Instruct++: Training One-Step Text-To-Image Generator Model To Align With Human Preferences

Anonymous authors

Paper under double-blind review

## Abstract

One-step text-to-image generator models offer advantages such as swift inference efficiency, flexible architectures, and state-of-the-art generation performance. In this paper, we study the problem of aligning one-step generator models with human preferences for the first time. Inspired by the success of reinforcement learning using human feedback (RLHF), we formulate the alignment problem as maximizing expected human reward functions while regularizing it with an Integral Kullback-Leibler divergence to a reference diffusion model. By overcoming technical challenges, we obtain practical loss functions to solve the reward maximization problem and formally introduce Diff-Instruct++ (DI++), a fast-converging and image data-free human preference alignment method for one-step text-to-image generators. We also establish theoretical connections between DI++, diffusion distillation, and classifier-free guidance (CFG). We prove that using CFG for diffusion distillation is secretly doing RLHF with DI++. In the experiment section, we first pre-train a one-step text-to-image generator model using Diff-Instruct. We then apply DI++ to align the generator using an off-the-shelf human preference reward model. The resulting generator model shows significantly improved aesthetic quality, richer generation details, and higher human preference scores. Our best models attain an aesthetic score (AS) of 6.42 and an Image Reward score (IR) of 1.27 on the MSCOCO2017 validation prompt dataset, outperforming the reference 30-step diffusion models. It also outperforms other few-step generators, including SDXL-TURBO with an AS of 5.33 and an IR of 0.78, SDXL-LIGHTNING with an AS of 5.34 and an IR of 0.54, and HYPER-SDXL with an AS of 5.85 and an IR of 1.19, by significant margins.
While DI++ cannot perfectly prevent the model from making simple mistakes, our findings indicate a promising direction for aligning one-step generators with human preferences.

## 1 Introduction

In recent years, deep generative models have achieved remarkable successes across various data generation and manipulation applications (Karras et al., 2020; 2022; Nichol & Dhariwal, 2021; Oord et al., 2016; Ho et al., 2022; Poole et al., 2022; Hoogeboom et al., 2022; Kim et al., 2022; Tashiro et al., 2021; Deng et al.; Kingma & Dhariwal, 2018; Chen et al., 2019; Meng et al., 2021; Couairon et al., 2022). These models have notably excelled in producing high-resolution, text-conditional content such as images (Rombach et al., 2022; Saharia et al., 2022; Ramesh et al., 2022; 2021) and other modalities (Brooks et al., 2024; Kondratyuk et al., 2023; Evans et al., 2024), pushing the boundaries of Artificial Intelligence Generated Content. Among the spectrum of deep generative models, one-step generators have emerged as a particularly efficient and possibly best-performing (Zheng & Yang, 2024; Kim et al., 2024; Kang et al., 2023a; Sauer et al., 2023a) class of generative model. Briefly speaking, a one-step generator uses a neural network to directly transport some latent variable to generate an output sample. Recently, there have been many fruitful successes in training one-step generator models by distilling from pre-trained diffusion models (aka, Diffusion Distillation (Luo, 2023)) in domains such as image generation (Salimans & Ho, 2022; Luo et al., 2024; Geng et al., 2023; Song et al., 2023; Kim et al., 2023; Song & Dhariwal, 2023; Zhou et al., 2024b), text-to-image synthesis (Meng et al., 2022; Gu et al., 2023; Nguyen & Tran, 2023; Luo et al., 2023; Song et al., 2024; Liu et al., 2024b; Yin et al., 2023), data manipulation (Parmar et al., 2024), etc.
![1_image_0.png](1_image_0.png)

Figure 1: Images generated by a one-step text-to-image generator that has been aligned with human preferences using Diff-Instruct++. We list the prompts in Appendix D.1.

However, current one-step text-to-image generators face several limitations, including insufficient adherence to user prompts, suboptimal aesthetic quality, and even the generation of toxic content. These issues arise because the generator models are not aligned with human preferences. In this paper, we study the problem of aligning one-step generator models with human preferences for the first time. We achieve substantial progress by training generator models to maximize human preference rewards. Inspired by the success of reinforcement learning using human feedback (RLHF) (Ouyang et al., 2022) in aligning large language models, we formulate the alignment problem as a maximization of the expected human reward function with an additional regularization term to some reference diffusion model. By addressing technical challenges, we obtain effective loss functions and formally introduce Diff-Instruct++, an effective and image data-free method to train one-step text-to-image generators to follow human preferences. In the experiment section, we first pre-train a one-step generator model using Diff-Instruct (Luo et al., 2024), initialized with the PixelArt-α diffusion model (Chen et al., 2023). We name this model an unaligned base generator model (base model for short). Next, we align the base model using DI++ with an off-the-shelf human preference reward, the Image Reward (Xu et al., 2023), resulting in five aligned models with different alignment scales. The Image Reward is an open-source image-prompt reward function that evaluates the quality of images and corresponding prompts in terms of aesthetic appearance, image-prompt alignment, and other human preference factors.
With DI++ alignment, we can effectively align the output distribution of the text-to-image generator model with human preferences in an image data-free manner. This marks a superior advantage of DI++ over baseline methods such as fine-tuning the generator on carefully curated training datasets, which are expensive to collect and may be potentially biased. The alignment process with DI++ significantly improves the generation quality of the one-step generator model with minimal computational cost. To evaluate our models from different perspectives, we conduct both qualitative and quantitative evaluations of aligned models with different alignment settings. In the quantitative evaluation, we evaluate the model with several commonly used quality metrics such as the Aesthetic score (Schuhmann, 2022), the Image Reward (Xu et al., 2023), and the PickScore (Kirstain et al., 2023). Our main findings are: 1) The aligned models outperform the unaligned ones with significant margins in terms of human preference metrics; 2) The alignment cost is cheap, and the convergence is fast; 3) The aligned model still makes simple mistakes. We will discuss these findings in Section 6 in detail.

![2_image_0.png](2_image_0.png)

Figure 2: A demonstration of three stages for training a one-step text-to-image generator model that is aligned with human preference. The pre-training stage (**the leftmost column**) pre-trains the reference diffusion model as well as the one-step generator. The reward modeling stage (**the middle column**) trains the reward model using human preference data. The alignment stage (**the rightmost column**) uses a pre-trained reference diffusion model, the reward model, and a TA diffusion model to align the one-step generator with human preference.

We also establish the theoretical connections of DI++ with diffusion distillation methods, as well as the classifier-free guidance (Ho & Salimans, 2022). In Theorem
3.3 in Section 3.1, we show that the well-known classifier-free guidance is secretly doing RLHF according to an implicit reward function; therefore, diffusion distillation with CFG can be unified within DI++. These theoretical findings not only help understand the behavior of classifier-free guidance but also bring new tools for human preference alignment for text-to-image models. In Section 6.3, we discuss the potential shortcomings of DI++, showing that even when aligned with a human reward, the generator model still makes simple mistakes sometimes. Imperfect human reward models and insufficient hyperparameter tuning possibly cause this issue. Even with such small flaws, our findings indicate a promising direction for aligning one-step generator models with human preferences.

## 2 Preliminary

Diffusion Models. In this section, we introduce preliminary knowledge and notations about diffusion models. Assume we observe data from the underlying distribution qd(x). The goal of generative modeling is to train models to generate new samples x ∼ qd(x). Under mild conditions, the forward diffusion process of a diffusion model can transform any initial distribution q0 = qd towards some simple noise distribution,

$$\mathrm{d}\mathbf{x}_{t}=\mathbf{F}(\mathbf{x}_{t},t)\mathrm{d}t+G(t)\mathrm{d}\mathbf{w}_{t},\tag{2.1}$$

where F is a pre-defined vector-valued drift function, G(t) is a pre-defined scalar-valued diffusion coefficient, and wt denotes an independent Wiener process. A continuous-indexed score network sφ(x, t) is employed to approximate the marginal score functions of the forward diffusion process (2.1).
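As a concrete instance of (2.1), the common variance-preserving choice F(x_t, t) = -β(t)x_t/2 and G(t) = √β(t) transports any initial sample toward a standard Gaussian. A numpy Euler-Maruyama sketch (the linear β schedule and step count are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def diffuse_forward(x0, T=1.0, steps=1000, beta_min=0.1, beta_max=20.0, seed=0):
    """Simulate dx_t = -0.5*beta(t)*x_t dt + sqrt(beta(t)) dw_t (a VP-SDE)
    with Euler-Maruyama; returns the samples at time T."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = x0.astype(float).copy()
    for i in range(steps):
        t = (i + 0.5) * dt
        beta = beta_min + (beta_max - beta_min) * t / T  # linear schedule
        x = x - 0.5 * beta * x * dt + np.sqrt(beta * dt) * rng.standard_normal(x.shape)
    return x

# Start far from Gaussian: all mass at x = 5; after diffusion, x_T ~ N(0, 1).
xT = diffuse_forward(np.full(20000, 5.0))
```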
The learning of score networks is achieved by minimizing a weighted denoising score matching objective (Vincent, 2011; Song et al., 2020),

$$\mathcal{L}_{DSM}(\varphi)=\int_{t=0}^{T}\lambda(t)\mathbb{E}_{\mathbf{x}_{0}\sim q_{0},\,\mathbf{x}_{t}|\mathbf{x}_{0}\sim q_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})}\big\|\mathbf{s}_{\varphi}(\mathbf{x}_{t},t)-\nabla_{\mathbf{x}_{t}}\log q_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})\big\|_{2}^{2}\,\mathrm{d}t.\tag{2.2}$$

Here the weighting function λ(t) controls the importance of the learning at different time levels, and qt(xt|x0) denotes the conditional transition of the forward diffusion (2.1). After training, the score network sφ(xt, t) ≈ ∇xt log qt(xt) is a good approximation of the marginal score function of the diffused data distribution.

Reinforcement Learning using Human Feedback. Reinforcement learning using human feedback (RLHF) (Christiano et al., 2017; Ouyang et al., 2022) was originally proposed to incorporate human feedback knowledge to improve large language models (LLMs). Let pθ(x|c) be a large language model's output distribution, where c is an input prompt that is randomly sampled from a prompt dataset C, and x is the generated response. Let r(x, c) be a scalar reward model that has typically been trained with human feedback data and thus can measure the human preference on a response-prompt pair (x, c). Let pref (x|c) be some reference LLM.
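For affine drifts, the transition q_t(x_t|x_0) is Gaussian, say x_t = α_t x_0 + σ_t ε with ε ∼ N(0, I), so the conditional score target in (2.2) is -(x_t - α_t x_0)/σ_t² = -ε/σ_t. A Monte-Carlo numpy sketch of the loss at a single noise level (passing α_t, σ_t as scalars is a simplification for illustration):

```python
import numpy as np

def dsm_loss(score_fn, x0, alpha_t, sigma_t, rng):
    """Monte-Carlo denoising score matching loss at one noise level:
    E || s(x_t) - grad_{x_t} log q_t(x_t | x_0) ||^2."""
    eps = rng.standard_normal(x0.shape)
    xt = alpha_t * x0 + sigma_t * eps
    target = -eps / sigma_t  # = -(x_t - alpha_t * x_0) / sigma_t^2
    return float(np.mean((score_fn(xt) - target) ** 2))

sigma = 0.7
# Sanity check: for data concentrated at x0 = 0, the marginal is N(0, sigma^2),
# whose true score s(x) = -x / sigma^2 coincides with every conditional target,
# so the loss vanishes; the zero function incurs a strictly positive loss.
loss = dsm_loss(lambda x: -x / sigma**2, np.zeros(4096), 1.0, sigma,
                np.random.default_rng(0))
bad_loss = dsm_loss(lambda x: 0.0 * x, np.zeros(4096), 1.0, sigma,
                    np.random.default_rng(1))
```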
The RLHF method trains the LLM to maximize the human reward with a Kullback-Leibler divergence regularization, which is equivalent to minimizing:

$$\mathcal{L}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\\ \mathbf{x}\sim p_{\theta}(\mathbf{x}|\mathbf{c})}}\big[-r(\mathbf{x},\mathbf{c})\big]+\beta\mathcal{D}_{KL}(p_{\theta}(\mathbf{x}|\mathbf{c}),p_{ref}(\mathbf{x}|\mathbf{c}))\tag{2.3}$$

The KL divergence regularization term keeps the model close to the reference model, thus preventing it from diverging, while the reward term encourages the model to generate outputs with high rewards. After RLHF, the model will be aligned with human preference.

One-step Text-to-image Generator Model. A one-step text-to-image generator model is a neural network gθ(·|·) that can turn an input latent variable z ∼ pz(z) and an input prompt c into a generated image x (or some latent vector before decoding, as in the case of latent diffusion models (Rombach et al., 2022)) with a single neural network forward inference: x = gθ(z|c). Compared with diffusion models, the one-step generator model has advantages such as fast inference speed and flexible neural architectures.

How to Train a One-step Generator Model. There are several approaches to train one-step generators. An early method is generative adversarial training (Goodfellow et al., 2014). The generators trained with generative adversarial training are often referred to as generative adversarial networks, aka GANs. The adversarial training uses an additional neural network classifier to approximate the probability ratio of the generator output distribution and the data distribution. The learned likelihood ratio can then be used to construct some surrogate probability distance between the generator and data distributions, and the generator is updated to minimize this surrogate distance.
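Returning to objective (2.3): on a finite sample space it can be evaluated in closed form, and its minimizer is the well-known reward-tilted distribution p*(x|c) ∝ p_ref(x|c) exp(r(x,c)/β). A numpy check on a toy categorical example (the distributions and rewards below are made up purely for illustration):

```python
import numpy as np

def rlhf_loss(p, p_ref, r, beta):
    """L(p) = E_{x~p}[-r(x)] + beta * KL(p || p_ref), exact on a categorical."""
    return float(np.sum(p * (-r)) + beta * np.sum(p * np.log(p / p_ref)))

p_ref = np.array([0.5, 0.3, 0.2])
r = np.array([0.0, 1.0, 3.0])       # the last outcome is most preferred
beta = 1.0

p_star = p_ref * np.exp(r / beta)   # tilt the reference by the reward ...
p_star /= p_star.sum()              # ... and renormalize

greedy = np.array([1e-9, 1e-9, 1.0 - 2e-9])  # (near) reward-argmax policy
```

Intuitively, p_star trades reward against the KL penalty, so it beats both the untilted reference (too little reward) and the reward-greedy policy (too much KL).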
GANs have obtained notable successes in past years (Zheng & Yang, 2024; Kang et al., 2023b; Sauer et al., 2023a; 2022; 2021; Karras et al., 2019; 2020; 2018; 2021; Brock et al., 2018; Wang et al.). Another way to obtain one-step generators is by diffusion distillation (Luo, 2023). In recent years, many diffusion distillation methods have been proposed that can distill pre-trained diffusion models into one-step generators. Among them, Luo et al. (2024) have shown that by minimizing the Integral Kullback-Leibler divergence between the generator output distribution and a pre-trained reference diffusion model, the one-step generator can be trained to generate high-quality data. Many other works studied different divergences (Zhou et al., 2024b), or introduced techniques and scaled the divergence minimization approach for text-to-image generation (Yin et al., 2023; Song et al., 2024; Zhou et al., 2024a; Yin et al., 2024; Geng et al., 2023; Kim et al., 2023; Song et al., 2023; Song & Dhariwal; Nguyen & Tran, 2023; Heek et al., 2024; Xie et al., 2024; Salimans et al., 2024).

## 3 Aligning One-Step Generator Models With Human Preference

In this section, we introduce how to align one-step text-to-image generator models with human preferences. We formally introduce Diff-Instruct++ (DI++), a fast-converging and data-free approach for aligning one-step text-to-image generator models with human preference by maximizing human reward functions. To our knowledge, DI++ is the first approach to align one-step generator models with human preferences. In Section 3.1, we introduce the formulation of the alignment problem and then identify the alignment objective. After that, we address several technical challenges and propose Theorem 3.1 and Theorem 3.2, which set the theoretical foundation of DI++. Besides, we show that diffusion distillation using classifier-free guidance is secretly doing RLHF with DI++.
Based on the theoretical arguments in Section 3.1, we formally introduce the practical algorithm of DI++ in Section 3.2 and give an intuitive understanding of the algorithm as an education process that involves a teacher diffusion model, a teaching assistant diffusion model, and a student one-step generator. In Section 3.3, we discuss the advantages and the theoretical connections of DI++. We show that DI++ is image-data-free and fast-converging.

## 3.1 The Alignment Objective

The Problem Formulation of Aligning a Generator with Human Preference. We consider text-to-image generation as an example of alignment; other conditional generation applications share a similar spirit. Assume x is an image and c is a text prompt sampled from some prompt dataset C. The basic setting is that we have a one-step generator gθ(·|·), which can transport a prior latent vector z ∼ pz to generate an image based on an input text prompt c: x = gθ(z|c). We use the notation pθ(x|c) to denote the distribution induced by the generator. We also have a reward model r(x, c) which represents the human preference for a given image-prompt pair (x, c).
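The setup can be made concrete with a deliberately tiny, hypothetical instance (nothing below mirrors a real text-to-image model; the integer prompt ids, affine maps, and quadratic reward are all invented for illustration): the generator is a conditional pushforward of Gaussian noise, pθ(x|c) is whatever distribution that pushforward induces, and expectations under it are estimated by sampling z.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: prompts are integer ids, "images" are 2-D vectors,
# and the one-step generator is a per-prompt affine pushforward of noise z ~ p_z.
prompts = [0, 1]
W = {0: np.eye(2), 1: np.diag([2.0, 0.5])}
b = {0: np.zeros(2), 1: np.array([3.0, 0.0])}

def generator(z, c):
    """x = g_theta(z | c): a single forward pass, no iterative sampling."""
    return W[c] @ z + b[c]

def reward(x, c):
    """Stand-in preference model r(x, c): prefers images near a per-prompt target."""
    target = np.zeros(2) if c == 0 else np.array([3.0, 0.0])
    return -np.sum((x - target) ** 2)

# Monte-Carlo estimate of the expected reward E_{c, z}[r(g_theta(z|c), c)].
samples = [(c, rng.standard_normal(2)) for c in prompts for _ in range(5000)]
avg_reward = np.mean([reward(generator(z, c), c) for c, z in samples])
```

In this toy the expectation is also available analytically (−2 for prompt 0 and −4.25 for prompt 1, so −3.125 on average), which is exactly the kind of quantity the alignment objective below trades off against the regularizer.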
Inspired by the success of reinforcement learning from human feedback (Ouyang et al., 2022) in fine-tuning large language models such as ChatGPT (Achiam et al., 2023), we first set our alignment objective to maximize the expected human reward function with an additional Kullback-Leibler divergence regularization w.r.t. some reference distribution pref(·|c), which is equivalent to minimizing the following objective:

$$\mathcal{L}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\\ \mathbf{x}\sim p_{\theta}(\mathbf{x}|\mathbf{c})}}\left[-r(\mathbf{x},\mathbf{c})\right]+\beta\mathcal{D}_{KL}(p_{\theta}(\mathbf{x}|\mathbf{c}),p_{ref}(\mathbf{x}|\mathbf{c}))\tag{3.1}$$

The KL divergence regularization to the reference distribution pref(·) keeps the generator distribution pθ(·) similar to pref(·) in order to prevent it from diverging. If we want to minimize the objective (3.1) using stochastic gradient-descent-based optimization algorithms, we need the gradient with respect to the parameters θ. We show in Theorem 3.1 that the gradient of objective (3.1) has the form of formula (3.2).

**Theorem 3.1**. The θ gradient of the objective (3.1) is

$$\mathrm{Grad}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\\ \mathbf{x}\sim p_{\theta}(\mathbf{x}|\mathbf{c})}}\left\{-\nabla_{\mathbf{x}}r(\mathbf{x},\mathbf{c})+\beta\big[\nabla_{\mathbf{x}}\log p_{\theta}(\mathbf{x}|\mathbf{c})-\nabla_{\mathbf{x}}\log p_{ref}(\mathbf{x}|\mathbf{c})\big]\right\}\frac{\partial\mathbf{x}}{\partial\theta}\tag{3.2}$$

We give the proof in Appendix B.1. From the gradient formula (3.2), we can see that the x gradient of r(x, c) is easy to obtain.
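Theorem 3.1 can be sanity-checked numerically. The sketch below (a one-dimensional toy of our own, not from the paper) uses a Gaussian "generator" x = θ + z with z ∼ N(0, 1), reference pref = N(0, 1), and r(x) = −x², so that every term of (3.2) has a closed form; the Monte-Carlo estimate of (3.2) should match the exact derivative (2 + β)θ of the objective (3.1), which here equals θ² + 1 + β·θ²/2.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, beta = 0.7, 2.0
z = rng.standard_normal(1_000_000)
x = theta + z  # one-step "generator": pushes N(0,1) noise to p_theta = N(theta, 1)

# Terms of the gradient formula (3.2), all in closed form for Gaussians:
grad_reward = 2.0 * x          # -grad_x r(x) with r(x) = -x^2
score_gen = -(x - theta)       # grad_x log p_theta(x)
score_ref = -x                 # grad_x log p_ref(x), p_ref = N(0, 1)
grad_mc = np.mean(grad_reward + beta * (score_gen - score_ref))  # dx/dtheta = 1

# Exact derivative of L(theta) = E[x^2] + beta*KL(N(theta,1)||N(0,1)):
grad_exact = (2.0 + beta) * theta
```

With 10⁶ samples the Monte-Carlo estimate agrees with the exact value to a few thousandths, which is what the reparameterization through ∂x/∂θ in (3.2) promises.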
If we can approximate the score functions of both the generator and reference distributions, i.e., ∇x log pθ(x|c) and ∇x log pref(x|c), we can directly compute the θ gradient and use gradient descent algorithms to update the parameters θ. However, the generator distribution is defined directly in image space, where distributions are assumed to lie on some low-dimensional manifold (Song & Ermon, 2019). Therefore, directly approximating the score function is difficult in practice, and minimizing objective (3.1) **is not practical for one-step generators**.

Diffusion Models are Reference Distributions. Instead of minimizing the intractable objective (3.1), we change the alignment objective from (3.1) to (3.6) by generalizing the KL divergence regularization to the Integral Kullback-Leibler (IKL) divergence proposed in Luo et al. (2024) w.r.t. some reference diffusion process pref(xt|t, c). This novel change of regularization divergence distinguishes our approach from previous RLHF methods for large language model alignment. Besides, the change from KL divergence to IKL divergence makes it possible to **use pre-trained diffusion models as reference distributions**, as we show in the following paragraphs. Let xt be noisy data diffused by the forward diffusion (2.1) starting from x0. We use pref(xt|t, c) and sref(xt|t, c) to denote the densities and score functions of the reference diffusion process (the score functions can be replaced with pre-trained off-the-shelf diffusion models). Let pθ(xt|t, c) and sθ(xt|t, c) be the marginal distributions and score functions of the generator output after the forward diffusion process (2.1).
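Note that fitting a score network to the diffused generator distribution is tractable precisely because the conditional score of the forward process is known in closed form. As an illustrative sketch, assume a simple Gaussian perturbation kernel p_t(x_t|x_0) = N(x_t; x_0, σ_t²) (the paper's forward process (2.1), defined earlier, may use a different mean/variance schedule); then ∇_{x_t} log p_t(x_t|x_0) = −(x_t − x_0)/σ_t², which a finite-difference check confirms:

```python
import numpy as np

# Illustrative Gaussian perturbation kernel p_t(x_t | x_0) = N(x_t; x_0, sigma_t^2);
# a stand-in for the forward process, whose exact schedule may differ.
sigma_t = 0.5
x0, xt = 1.0, 1.3

def log_kernel(y):
    """log p_t(y | x_0) for the Gaussian kernel above."""
    return -0.5 * ((y - x0) / sigma_t) ** 2 - np.log(sigma_t * np.sqrt(2 * np.pi))

analytic_score = -(xt - x0) / sigma_t**2  # closed-form conditional score
eps = 1e-5
fd_score = (log_kernel(xt + eps) - log_kernel(xt - eps)) / (2 * eps)
```

This closed-form conditional score is exactly the regression target used to fit score networks (as in the TA loss of Algorithm 1 below), which is why no density evaluation of the diffused distribution is ever needed.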
We propose to minimize the negative reward function with an Integral KL divergence regularization:

$$\mathcal{L}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,\mathbf{z}\sim p_z,\\ \mathbf{x}_0=g_\theta(\mathbf{z}|\mathbf{c})}}\left[-r(\mathbf{x}_0,\mathbf{c})\right]+\beta\int_{t=0}^{T}w(t)\,\mathcal{D}_{KL}(p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c}),p_{ref}(\mathbf{x}_{t}|t,\mathbf{c}))\,\mathrm{d}t\tag{3.6}$$

Different from the vanilla RLHF objective (3.1), our RLHF objective with IKL regularization (3.6) assigns a regularization between the generator's noisy distributions and a reference diffusion process pref(·|t, c). Following arguments similar to those of Theorem 3.1, we obtain a corresponding gradient formula with IKL regularization in Theorem 3.2.

Algorithm 1: Diff-Instruct++ for aligning a generator model with a human feedback reward.

Input: prompt dataset C, generator gθ(x0|z, c), prior distribution pz, reward model r(x, c), reward scale αrew, CFG scale αcfg, reference diffusion model sref(xt|t, c), TA diffusion sψ(xt|t, c), forward diffusion p(xt|x0) (2.1), TA diffusion update rounds KTA, time distribution π(t), diffusion model weighting λ(t), generator IKL loss weighting w(t).

while *not converged* do

fix θ, update ψ for KTA rounds by minimizing

$$\mathcal{L}(\psi)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,\mathbf{z}\sim p_z,\,t\sim\pi(t),\\ \mathbf{x}_0=g_\theta(\mathbf{z}|\mathbf{c}),\,\mathbf{x}_t|\mathbf{x}_0\sim p_t(\mathbf{x}_t|\mathbf{x}_0)}}\lambda(t)\|\mathbf{s}_{\psi}(\mathbf{x}_{t}|t,\mathbf{c})-\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})\|_{2}^{2}.$$

update θ using stochastic gradient descent with the gradient

$$\mathrm{Grad}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,\mathbf{z}\sim p_z,\\ \mathbf{x}_0=g_\theta(\mathbf{z},\mathbf{c})}}\left[-\alpha_{rew}\nabla_{\mathbf{x}_0}r(\mathbf{x}_0,\mathbf{c})\right]\tag{3.3}$$

$$+\int_{t=0}^{T}w(t)\,\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,\mathbf{z}\sim p_z,\,t\sim\pi(t),\\ \mathbf{x}_0=g_\theta(\mathbf{z},\mathbf{c}),\,\mathbf{x}_t|\mathbf{x}_0\sim p_t(\mathbf{x}_t|\mathbf{x}_0)}}\big[\mathbf{s}_{\psi}(\mathbf{x}_{t}|t,\mathbf{c})-\widetilde{\mathbf{s}}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})\big]\frac{\partial\mathbf{x}_{t}}{\partial\theta}\,\mathrm{d}t,\tag{3.4}$$

$$\widetilde{\mathbf{s}}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})=\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\varnothing)+\alpha_{cfg}\big[\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\varnothing)\big].\tag{3.5}$$

end

return *θ, ψ*.

Theorem 3.2.
The θ gradient of the objective (3.6) is

$$\mathrm{Grad}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,t\sim\pi(t),\,\mathbf{z}\sim p_z,\\ \mathbf{x}_0=g_\theta(\mathbf{z}|\mathbf{c}),\,\mathbf{x}_t|\mathbf{x}_0\sim p_t(\mathbf{x}_t|\mathbf{x}_0)}}\left\{-\nabla_{\mathbf{x}_0}r(\mathbf{x}_{0},\mathbf{c})+\beta w(t)\big[\mathbf{s}_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})\big]\frac{\partial\mathbf{x}_{t}}{\partial\theta}\right\}.\tag{3.7}$$

We give the proof in Appendix B.2. We can clearly see that the reference score functions can be replaced with off-the-shelf text-to-image diffusion models. If we view the generator as a student, then the reference diffusion model acts as a teacher. We can use another diffusion model sψ(·) as a teaching assistant (TA), which is initialized from the teacher and fine-tuned on student-generated data to approximate the student generator's score functions, i.e., sψ(xt|t, c) ≈ sθ(xt|t, c). The reward function r(·, ·) can be viewed as the student's personal interests. With this, we can readily estimate the gradient (3.7) and update the generator model to maximize the student's interests (i.e., to maximize the reward) while still referring to the teacher's and the TA's advice. From this perspective, the Diff-Instruct++ algorithm resembles an education procedure that encourages the student to maximize personal interests while still taking advice from the teacher and the teaching assistant. Since the use of IKL divergence for regularization in RLHF is inspired by Diff-Instruct (Luo et al., 2024), we name our alignment approach Diff-Instruct++. Next, we give a theoretical result showing that classifier-free guidance is secretly doing RLHF with an implicitly defined reward function. With this, we can incorporate both the human reward and CFG for human preference alignment in the DI++ algorithm.

Classifier-free Guidance is secretly doing RLHF with an Implicit Reward Function.
In previous sections, we have shown in theory that with available reward models, we can readily do RLHF for one-step generators. In this part, we additionally find that classifier-free guidance is secretly doing RLHF; therefore, using CFG on reference diffusion models when distilling them with Diff-Instruct is secretly doing Diff-Instruct++ with an implicitly defined reward. Classifier-free guidance (Ho & Salimans, 2022) (CFG) uses a score function of the form

$$\widetilde{\mathbf{s}}_{ref}(\mathbf{x}_{t},t|\mathbf{c}):=\mathbf{s}_{ref}(\mathbf{x}_{t},t|\varnothing)+\omega\big\{\mathbf{s}_{ref}(\mathbf{x}_{t},t|\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_{t},t|\varnothing)\big\}$$

to replace the original conditional score function sref(xt, t|c). This empirically leads to better sampling quality for diffusion models. Consider the reward function

$$r(\mathbf{x}_{0},\mathbf{c})=\int_{t=0}^{T}w(t)\log\frac{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t)}\,\mathrm{d}t.\tag{3.8}$$

This reward function assigns a higher reward to samples whose conditional probability exceeds their unconditional probability; therefore, it encourages samples with high conditional probability. We show that the gradient formula (3.7) in Theorem 3.2 then has an explicit solution: Theorem 3.3.
Under mild conditions, if we set the implicit reward function as (3.8), the gradient formula (3.7) in Theorem 3.2 has the explicit expression

$$\mathrm{Grad}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,t\sim\pi(t),\,\mathbf{z}\sim p_z,\\ \mathbf{x}_0=g_\theta(\mathbf{z}|\mathbf{c}),\,\mathbf{x}_t|\mathbf{x}_0\sim p_t(\mathbf{x}_t|\mathbf{x}_0)}}\beta w(t)\bigg\{\mathbf{s}_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})-\widetilde{\mathbf{s}}_{ref}^{\,\beta}(\mathbf{x}_{t}|t,\mathbf{c})\bigg\}\frac{\partial\mathbf{x}_{t}}{\partial\theta},\tag{3.9}$$

$$\widetilde{\mathbf{s}}_{ref}^{\,\beta}(\mathbf{x}_{t}|t,\mathbf{c})=\mathbf{s}_{ref}(\mathbf{x}_{t}|t)+\Big(1+\frac{1}{\beta}\Big)\big[\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_{t}|t)\big].$$

We give the proof in Appendix B.4. The gradient formula (3.9) recovers the case of using CFG for diffusion distillation with the Diff-Instruct algorithm to train a one-step generator. The parameter (1 + 1/β) is the so-called classifier-free guidance scale. In Algorithms 1 and 2, we use the coefficient αcfg to represent the CFG scale. In the following section, we formally introduce the practical algorithm of Diff-Instruct++.

Remark 3.4. Theorem 3.3 reveals a new perspective that understands classifier-free guidance as training-free, inference-time RLHF. This helps explain why samples generated using CFG are preferred by humans. Besides, Theorem 3.3 also shows that using Diff-Instruct with CFG to distill text-to-image diffusion models is secretly doing DI++. Therefore, we can use both CFG and the human reward to strengthen one-step generator models.

## 3.2 The Practical Algorithm

Though the gradient formula (3.7) gives a clear way to compute the parameter gradient to update the generator, it is preferable to have an easy-to-implement pseudo loss function instead of gradient estimations for executing the algorithm. To address this, we present a pseudo loss function, defined as formula (3.10), which we show has the same gradient as (3.7). Theorem 3.5 (Pseudo Loss Function).
The pseudo loss function (3.10) has the same θ gradient as (3.7):

$$\mathcal{L}_p(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim\mathcal{C},\,t\sim\pi(t),\,\mathbf{z}\sim p_z,\,\mathbf{x}_0=g_\theta(\mathbf{z}|\mathbf{c}),\\ \mathbf{x}_t|\mathbf{x}_0\sim p(\mathbf{x}_t|\mathbf{x}_0)}}\Big[-\alpha_{rew}\,r(\mathbf{x}_0,\mathbf{c})+w(t)\,\mathbf{y}_t^{T}\mathbf{x}_t\Big],\tag{3.10}$$

$$\widetilde{\mathbf{s}}_{ref}(\mathbf{x}_t|t,\mathbf{c})=\mathbf{s}_{ref}(\mathbf{x}_t|t,\varnothing)+\alpha_{cfg}\big[\mathbf{s}_{ref}(\mathbf{x}_t|t,\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_t|t,\varnothing)\big],$$

$$\mathbf{y}_t:=\mathbf{s}_{\mathrm{Sta}[\theta]}(\mathrm{Sta}[\mathbf{x}_t]|t,\mathbf{c})-\widetilde{\mathbf{s}}_{ref}(\mathrm{Sta}[\mathbf{x}_t]|t,\mathbf{c}).$$

Here the operator Sta[·] means cutting off all θ dependence on the variable (i.e., a stop-gradient). We put the proof in Appendix B.3. With the pseudo loss function (3.10), we formally introduce the DI++ algorithm, presented in Algorithm 1 (and a more executable version in Algorithm 2). As Algorithm 1 shows, the overall algorithm consists of two alternating updating steps. The first step updates ψ, the parameters of the TA diffusion model, by fine-tuning it with student-generated data so that the TA diffusion sψ(xt|t, c) approximates the score function of the student generator distribution. This step means that the TA communicates with the student to know the student's status. The second step estimates the parameter gradient (3.3)-(3.4) and uses it in gradient-descent optimization algorithms such as Adam (Kingma & Ba, 2014) to update the generator parameter θ. This step means that the teacher and the TA discuss and incorporate the student's interests to instruct the student generator. Due to page limitations, we put a discussion of the meanings of the hyper-parameters in Appendix A.1.

## 3.3 More Discussions On Diff-Instruct++

Diff-Instruct++ is Image-data Free. One appealing advantage of Diff-Instruct++ is its image-data-free property: DI++ requires neither image datasets nor synthetic images generated by reference diffusion models. Instead, DI++ only needs a prompt dataset without any images. Such a prompt dataset can be obtained either from standard image-caption datasets or collected from users' behaviors, which represents the users' preferences over prompts.
This distinguishes DI++ from previous fine-tuning methods such as generative adversarial training (Goodfellow et al., 2014), which require training additional neural classifiers over image data, as well as from fine-tuning methods over large-scale synthetic or curated datasets.

The Choice of Generator is Flexible across Broader Applications. Another interesting property of DI++ for alignment is its wide flexibility in the choice of generator models. The theory of DI++ only requires the generator to produce output images (or data of other modalities) that are differentiable with respect to the generator's parameters. This makes DI++ a universal alignment method for two reasons. 1) The choice of generator architecture is flexible. Network architectures for diffusion models require the input and output to have the same dimensions, but DI++ places no such restriction on the generator network. Therefore, pre-trained GAN generators such as StyleGAN-T (Sauer et al., 2023a) and GigaGAN (Kang et al., 2023b) are also compatible with DI++. 2) Student networks in broader applications may also satisfy the requirements of DI++. For instance, the neural radiance field (Mildenhall et al., 2021) model used in text-to-3D generation with only text-to-2D diffusion models can also be viewed as a generator, so DI++ can be used in such scenarios to incorporate human preference into the training process. Readers can consult Poole et al. (2022), Wang et al. (2023), and Luo et al. (2024) for more background knowledge.

Diff-Instruct++ has Fast Convergence. Another appealing property is fast convergence: we find that DI++ converges quickly in practice. This helps researchers align large generator models with limited computing resources. Besides, the fast convergence makes it possible to continually update the one-step generator model using daily collected user prompts.
This brings flexibility and quick responses for industry-level models to track human preferences at a high frequency.

Diff-Instruct is a Special Case of Diff-Instruct++. It is not surprising that many diffusion distillation methods (Luo, 2023) that minimize certain distribution divergences can be viewed as special cases of DI++ without human preference rewards. One instance is Diff-Instruct (Luo et al., 2024), which trains a one-step generator by minimizing the IKL divergence between the generator distribution and a pre-trained diffusion model. This forces the generator distribution to match the reference distribution without a human reward. Other distillation methods, such as Score identity Distillation (SiD) (Zhou et al., 2024b), can be viewed as generalizing the IKL regularization to an integral of Fisher divergences.

## 4 Related Works

RLHF for Large Language Models. Reinforcement learning from human feedback (RLHF) has achieved great success in aligning large language models (LLMs). Ouyang et al. (2022) formulate the LLM alignment problem as maximization of a human reward with a KL regularization to some reference LLM, resulting in the InstructGPT model. Diff-Instruct++ draws inspiration from Ouyang et al. (2022). However, DI++ differs from RLHF for LLMs in several aspects: the introduction of IKL regularization, the novel gradient formula, and the overall algorithm. Many variants of RLHF for LLMs have also been intensively studied in Tian et al. (2023); Christiano et al. (2017); Rafailov et al. (2024); Ethayarajh et al. (2024), etc.

Diffusion Distillation Through Divergence Minimization. Diffusion distillation (Luo, 2023) is a research area that aims to reduce generation costs using teacher diffusion models. Among existing methods, one important line distills a one-step generator model by minimizing certain divergences between the one-step generator model and a pre-trained diffusion model.
Luo et al. (2024) first studied diffusion distillation by minimizing the Integral KL divergence. Yin et al. (2023) generalize this concept and add a data regression loss to distill pre-trained Stable Diffusion models. Many other works have introduced additional techniques and improved distillation performance (Geng et al., 2023; Kim et al., 2023; Song et al., 2023; Song & Dhariwal; Nguyen & Tran, 2023; Song et al., 2024; Yin et al., 2024; Zhou et al., 2024a; Heek et al., 2024; Xie et al., 2024; Salimans et al., 2024). Different from the IKL divergence, Zhou et al. (2024b) study distillation through a variant of the Fisher divergence. Other methods, such as Xiao et al. (2021); Xu et al. (2024), have used generative adversarial training (Goodfellow et al., 2014) techniques to minimize certain divergences. Diff-Instruct++ is motivated by Diff-Instruct; however, to the best of our knowledge, we are the first to study the problem of aligning one-step generator models with human preferences.

Preference Alignment for Diffusion Models. In recent years, many works have emerged that try to align diffusion models with human preferences. There are three main lines of alignment methods for diffusion models. 1) The first kind of method fine-tunes the diffusion model over a specifically curated image-prompt dataset (Dai et al., 2023; Podell et al., 2023). 2) The second line of methods tries to maximize some reward function, either by backpropagating through the multi-step diffusion generation output (Prabhudesai et al., 2023; Clark et al., 2023; Lee et al., 2023) or through policy-gradient-based RL approaches (Fan et al., 2024; Black et al., 2023). Though this approach shares goals similar to those of DI++ and RLHF for LLMs, the problems and challenges are essentially different. Our Diff-Instruct++ is the first work to study the alignment of one-step generator models instead of diffusion models.
Besides, backpropagating through the multi-step diffusion generation output is expensive and hard to scale. 3) The third line, such as Diffusion-DPO (Wallace et al., 2024) and Diffusion-KTO (Yang et al., 2024), tries to directly improve the diffusion model's human preference properties with raw collected data instead of reward functions.

## 5 Models And Training Details

In previous sections, we established the theoretical foundations of DI++. In this section, we turn to practical content. Our goal is to train a one-step text-to-image generator model that can generate aesthetic, useful, and harmless images for human users. In the following sections, we introduce the overall workflow, the model, the dataset, the reward, and the training details.

## 5.1 The Overall Workflow Of Obtaining One-Step Text-To-Image Generators

In this section, we introduce the overall workflow for obtaining a one-step text-to-image generator model that is aligned with human feedback. As shown in Figure 2, the workflow consists of three modeling stages: the pre-training stage, the reward modeling stage, and the alignment stage.

The Pre-training Stage. As the leftmost column of Figure 2 shows, the pre-training stage pre-trains the reference diffusion model and a base one-step generator that is not necessarily aligned with human preference. The researcher can either train the diffusion model in-house or use publicly available off-the-shelf diffusion models. With the pre-trained teacher diffusion model, we can pre-train our one-step generator model using diffusion distillation methods (Luo, 2023), generative adversarial training (Sauer et al., 2023a; Zheng & Yang, 2024), or a combination of them (Kim et al., 2023; Yin et al., 2024; Xu et al., 2024).
Notice that in the pre-training stage, we do not necessarily use a human reward to instruct the generator; therefore, the generator can only learn to match the reference distribution, leading to decent generation results without strictly following human preference.

The Reward Modeling Stage. As the middle column of Figure 2 shows, in the second stage, the researcher collects human feedback data and trains a reward model that reflects human preferences for images and corresponding captions. Notice that the image and caption data for this stage can either be real image data or images and prompts generated by users with the one-step generators from the first stage. For instance, if researchers want to enhance the generation quality for their users' commonly used prompts, they can collect the most-used prompts from their users' activity and generate images using the one-step generator pre-trained in the first stage. They can then send the image-prompt pairs to users for feedback on their preferences. The reward modeling method is quite flexible: researchers can either train the reward model in-house using different neural networks and data (Ouyang et al., 2022), or use off-the-shelf reward models such as Image Reward (Xu et al., 2023) or human preference scores (Wu et al., 2023).

The Alignment Stage. As the rightmost column of Figure 2 shows, the final stage is the alignment stage, which is the major stage that DI++ addresses. In this stage, researchers can use the reference diffusion model from the first stage as a teacher, and the reward model from the second stage, to align the one-step generator with the DI++ algorithm (Algorithm 1 or 2). After the alignment stage, the one-step generator can generate images that are not only realistic but also match human preferences.

## 5.2 The Models And The Dataset

The Reference Diffusion Model and the One-step Generator.
In this paper, we use the 0.6B PixelArt-α model (Chen et al., 2023) at 512×512 resolution as our reference diffusion model. PixelArt-α is a high-quality open-sourced diffusion model. It uses a diffusion transformer (Peebles & Xie, 2022) to learn marginal score functions in a latent space encoded by a down-sampling variational auto-encoder (VAE) (Rombach et al., 2022). For the text conditioning mechanism, the PixelArt-α model uses a T5-XXL text encoder (Raffel et al., 2020; Tay et al., 2021), which enables the model to understand long prompts without an obvious length restriction. We put more experiment details in Appendix A.2.

The Reward Model. In this paper, we use the off-the-shelf Image Reward1 (Xu et al., 2023) as our human reward model. Image Reward is a neural reward model trained with 137 thousand human feedback annotations on how images are preferred by labelers; therefore, it gives a high reward value to image-prompt pairs that humans prefer.

The Prompt Dataset. We use the prompts from the SAM-LLaVA-Caption-10M dataset as our prompt dataset. The SAM-LLaVA-Caption-10M dataset contains the images collected by Kirillov et al. (2023), together with text descriptions captioned by the LLaVA model (Liu et al., 2024a), and was used for training the PixelArt-α model. Since the PixelArt-α diffusion model uses a T5-XXL text encoder, which is memory- and computationally expensive, we pre-encode the text prompts using the T5-XXL text encoder and save the encoded embedding vectors to speed up the alignment training.

The Training Process. We train the generator in two stages: the pre-training stage and the alignment stage. Due to page limitations, we put all details in Appendix A.2.

## 6 Model Analysis

## 6.1 Qualitative Evaluations

In this section, we evaluate all one-step text-to-image generator models, finding that the DI++-aligned model shows improved human preference performance.
Before the quantitative evaluations, we first give a qualitative comparison of the models with and without alignment.

Human Reward Improves the One-step Model. Figure 3 shows a qualitative comparison of five one-step generator models with different alignment configurations. The bottom row of Figure 3 is the weakest setting with no human preference alignment; upper rows show models aligned progressively more strongly. From Figure 3 we draw several findings: (1) The model with no human preference alignment (the bottom row) shows weak performance: though it can generate acceptable images of scenes, it can barely generate high-quality humans or animals. (2) Without CFG, the model aligned with only the human reward model is already quite good, with a significant improvement over the model with no human preference alignment. (3) The model aligned only with CFG is a decent solution. (4) The model aligned with both CFG and a weak human reward shows better image composition, richer details, and better aesthetic quality. (5) The model aligned with CFG and a strong human reward tends to generate painting-like images that are more colorful with richer details. Due to page limitations, we put a discussion of these findings in Appendix A.3.

1https://github.com/THUDM/ImageReward

## 6.2 Quantitative Evaluations With Standard Scores

Evaluation Metrics. To quantitatively evaluate the performance of the generator models trained with different alignment settings, we compare them with three quantitative metrics: the aesthetic score, the Image Reward, and the PickScore. We also compare the aligned one-step generators with other leading few-step models, such as the Latent Consistency Model (LCM) (Luo et al., 2023), Trajectory Consistency Distillation (TCD) (Zheng et al., 2024), PeRFlow (Yan et al., 2024), SDXL-Turbo (Sauer et al., 2023b), SDXL-Lightning (Lin et al., 2024), and Hyper-SD (Ren et al., 2024), on the widely used MSCOCO2017 validation prompt dataset.
We follow Hyper-SD's evaluation protocol to compute the evaluation metrics. Besides the comparison on the MSCOCO2017 validation prompt dataset, on which our one-step model has not been trained, we also compare our one-step models with the reference PixelArt-α diffusion model on the SAM-LLaVA-Caption-10M dataset. On both datasets, the aligned one-step generator models clearly outperform the reference PixelArt-α model by significant margins, setting a record-breaking Image Reward of 1.27 on the COCO prompt dataset and of 1.44 on the SAM-recaptioned prompt dataset. Table 1 summarizes the metrics.

Models Aligned with CFG and Strong Human Reward Show the Best Performances. As Table 1 shows, the model aligned with the human reward shows the best evaluation metrics. On the SAM-LLaVA-Caption-10M dataset, the model aligned with a 4.5 CFG scale and a 10.0 reward scale shows the best aesthetic score of 6.24, even outperforming the teacher PixelArt-α diffusion model with 30 generation steps. It shows a record-breaking Image Reward of 1.44, almost double the score of the second-best model. For the remaining models, we find that models aligned with stronger CFG and human reward perform better than models aligned with weaker ones. This shows the effectiveness of Diff-Instruct++ in improving one-step generator models by incorporating human preferences.

The Zero-shot Generalization Ability of Aligned Models. Another interesting finding about the DI++ algorithm, and about human preference alignment of one-step generator models in general, is its zero-shot generalization ability. This generalization ability has two aspects: first, as Table 1 shows, our models aligned with Image Reward not only show a dominating Image Reward metric but also strong aesthetic scores, demonstrating that aligning with one human reward model can also improve other human preference metrics. Second, a model aligned on one prompt dataset also performs well on other datasets.
For instance, as Table 1 shows, our models are aligned on the SAM-LLaVA-Caption10M dataset, which consists of 10M long captions and has no overlap with the MSCOCO2017 validation prompt dataset. We find that the aligned model also shows leading aesthetic scores and Image Reward scores on the MSCOCO2017 validation prompt dataset. This zero-shot generalization ability indicates that if one aligns a model on some user prompt dataset, the model can also show decent performance on other non-overlapping prompt datasets.

Low Alignment Costs. Besides the top performance, the training cost with DI++ is surprisingly low. Our best model is pre-trained on 4 A100-80G GPUs for 2 days and aligned using the same computational budget, while other industry models in Table 1 require hundreds of A100 GPU days. We summarize the distillation costs in Table 1, showing that DI++ is an efficient yet powerful alignment method with strong scaling ability. We believe this efficiency comes from the image-data-free property of DI++: DI++ does not require image data during alignment, which distinguishes it from other methods that fine-tune models on highly curated image datasets, a potentially inefficient process.

Comparing with Other Few-step Generator Models. Table 1 shows the advantage of our aligned one-step model over other few-step models. Figure 4 additionally gives a qualitative comparison of our models with the other few-step models in Table 1. Compared with other few-step generative models, the aligned model shows better aesthetic quality.

## 6.3 Limitations Of Diff-Instruct++

Despite the alignment training stage, the generator model still occasionally makes simple mistakes. Figure 5 shows some bad generation cases that we picked over multiple generation trials. (1) The Aligned Model Still Misunderstands the Input Prompt.
As the leftmost three images of Figure 5 show, the generated images ignore the concepts of *pearl earrings*, *battling in a coffee cup*, and the *car playing football*. However, we find that such mistakes happen only occasionally. (2) The Generator Sometimes Generates Bad Human Faces and Hands. See the fourth and fifth images of Figure 5: the face of the generated lady in the fourth image is unsatisfying, with blurred eyes and mouth, and in the fifth image, the generated Iron Man character has multiple hands. (3) Sometimes the Aligned Model Still Cannot Count Correctly. For instance, in the rightmost image, the prompt asks the model to generate *a birthday cake*, but the model generates two cakes, one near and another lying farther away.

| Model | Steps | Type | Params | Aes Score | Image Reward | Pick Score | Training Cost |
|---|---|---|---|---|---|---|---|
| SD15-Base (Rombach et al., 2022) | 25 | UNet | 860 M | 5.26 | 0.18 | 0.217 | |
| SD15-LCM (Luo et al., 2023) | 4 | UNet† | 860 M | 5.66 | -0.37 | 0.212 | 8 A100 × 4 Days |
| SD15-TCD (Zheng et al., 2024) | 4 | UNet† | 860 M | 5.45 | -0.15 | 0.214 | 8 A800 × 5.8 Days |
| PeRFlow (Yan et al., 2024) | 4 | UNet | 860 M | 5.64 | -0.35 | 0.208 | M GPU × N Days |
| Hyper-SD15 (Ren et al., 2024) | 1 | UNet† | 860 M | 5.79 | 0.29 | 0.215 | 32 A100 × N Days |
| SDXL-Base (Podell et al., 2023) | 25 | UNet | 2.6 B | 5.54 | 0.87 | 0.229 | |
| SDXL-LCM (Luo et al., 2023) | 4 | UNet† | 2.6 B | 5.42 | 0.48 | 0.224 | 8 A100 × 4 Days |
| SDXL-TCD (Zheng et al., 2024) | 4 | UNet† | 2.6 B | 5.42 | 0.67 | 0.226 | 8 A800 × 5.8 Days |
| SDXL-Lightning (Lin et al., 2024) | 4 | UNet† | 2.6 B | 5.63 | 0.72 | 0.229 | 64 A100 × N Days |
| Hyper-SDXL (Ren et al., 2024) | 4 | UNet† | 2.6 B | 5.74 | 0.93 | 0.232 | 32 A100 × N Days |
| SDXL-Turbo (Sauer et al., 2023b) | 1 | UNet | 2.6 B | 5.33 | 0.78 | 0.228 | M GPU × N Days |
| SDXL-Lightning (Lin et al., 2024) | 1 | UNet | 2.6 B | 5.34 | 0.54 | 0.223 | 64 A100 × N Days |
| Hyper-SDXL (Ren et al., 2024) | 1 | UNet | 2.6 B | 5.85 | 1.19 | 0.231 | 32 A100 × N Days |
| PixelArt-α (Chen et al., 2023) | 30 | DiT | 610 M | 5.97 | 0.82 | 0.226 | |
| DI++ (1.0 cfg + 0.0 reward) | 1 | DiT | 610 M | 6.02 | -0.73 | 0.20 | 4 A100 × 4 Days |
| DI++ (1.0 cfg + 1.0 reward) | 1 | DiT | 610 M | 6.21 | 0.23 | 0.213 | 4 A100 × 4 Days |
| DI++ (4.5 cfg + 0.0 reward) | 1 | DiT | 610 M | 5.98 | 0.71 | 0.223 | 4 A100 × 4 Days |
| DI++ (4.5 cfg + 1.0 reward) | 1 | DiT | 610 M | 6.04 | 0.93 | 0.223 | 4 A100 × 4 Days |
| DI++ (4.5 cfg + 10.0 reward) | 1 | DiT | 610 M | 6.31 | 1.27 | 0.223 | 4 A100 × 4 Days |
| PixelArt-α∗ (Chen et al., 2023) | 30 | DiT | 610 M | 5.93 | 0.53 | 0.223 | |
| DI++ (1.0 cfg + 0.0 reward)∗ | 1 | DiT | 610 M | 5.84 | -0.66 | 0.211 | 4 A100 × 4 Days |
| DI++ (1.0 cfg + 1.0 reward)∗ | 1 | DiT | 610 M | 6.10 | 0.28 | 0.221 | 4 A100 × 4 Days |
| DI++ (4.5 cfg + 0.0 reward)∗ | 1 | DiT | 610 M | 5.91 | 0.44 | 0.223 | 4 A100 × 4 Days |
| DI++ (4.5 cfg + 1.0 reward)∗ | 1 | DiT | 610 M | 5.99 | 0.85 | 0.224 | 4 A100 × 4 Days |
| DI++ (4.5 cfg + 10.0 reward)∗ | 1 | DiT | 610 M | 6.24 | 1.44 | 0.229 | 4 A100 × 4 Days |

Table 1: Quantitative comparison of text-to-image models on the MSCOCO-2017 validation prompt dataset and the SAM-LLaVA-Caption10M dataset (marked with ∗). DI++ is short for Diff-Instruct++. † means using the backbone network together with a Low-Rank Adaptation layer (LoRA (Hu et al., 2021)). A distillation cost of M GPU × N Days means the model did not report the cost.

## 7 Conclusion And Future Works

In this paper, we have presented the Diff-Instruct++ method, the first attempt to align one-step text-to-image generator models with human preferences. By formulating the problem as maximization of expected human reward functions with an IKL divergence regularization, we have developed practical loss functions and a fast-converging yet image-data-free alignment algorithm.
We also establish theoretical connections between Diff-Instruct++ and previous methods, pointing out that the commonly used classifier-free guidance is secretly doing Diff-Instruct++. Besides, we introduce a three-stage workflow for developing one-step text-to-image generator models: the pre-training, reward modeling, and alignment stages. We train one-step generator models with different alignment configurations and demonstrate that Diff-Instruct++ with a human reward model improves both sample quality and prompt alignment. While Diff-Instruct++ does not completely eliminate the occurrence of simple mistakes in image generation, our findings strongly suggest that this approach represents a promising direction for aligning one-step generators with human preferences. We think our work can shed light on future research on improving the responsiveness and accuracy of text-to-image generation models, bringing us closer to AGI systems that can more faithfully interpret and execute human intentions in visual content creation.

## References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. *arXiv preprint arXiv:2303.08774*, 2023.

Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. *arXiv preprint arXiv:2305.13301*, 2023.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. *arXiv preprint arXiv:1809.11096*, 2018.

Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024. URL https://openai.com/research/video-generation-models-as-world-simulators.
Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James T. Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. *ArXiv*, abs/2310.00426, 2023. URL https://api.semanticscholar.org/ CorpusID:263334265. Ricky TQ Chen, Jens Behrmann, David K Duvenaud, and Jörn-Henrik Jacobsen. Residual flows for invertible generative modeling. In *Advances in Neural Information Processing Systems*, pp. 9916–9926, 2019. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. *Advances in neural information processing systems*, 30, 2017. Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. *arXiv preprint arXiv:2309.17400*, 2023. Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. Diffedit: Diffusion-based semantic image editing with mask guidance. *ArXiv*, abs/2210.11427, 2022. Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, et al. Emu: Enhancing image generation models using photogenic needles in a haystack. *arXiv preprint arXiv:2309.15807*, 2023. Wei Deng, Weijian Luo, Yixin Tan, Marin Biloš, Yu Chen, Yuriy Nevmyvaka, and Ricky TQ Chen. Variational schrödinger diffusion models. In *Forty-first International Conference on Machine Learning*. Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto: Model alignment as prospect theoretic optimization. *arXiv preprint arXiv:2402.01306*, 2024. Zach Evans, Julian D Parker, CJ Carr, Zack Zukowski, Josiah Taylor, and Jordi Pons. Stable audio open. arXiv preprint arXiv:2407.14358, 2024. Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. 
Reinforcement learning for fine-tuning text-to-image diffusion models. *Advances in Neural Information Processing Systems*, 36, 2024. Zhengyang Geng, Ashwini Pokle, and J Zico Kolter. One-step diffusion distillation via deep equilibrium models. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023. URL https: //openreview.net/forum?id=b6XvK2de99. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014. Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Lingjie Liu, and Josh Susskind. Boot: Data-free distillation of denoising diffusion models with bootstrapping. *arXiv preprint arXiv:2306.05544*, 2023. Jonathan Heek, Emiel Hoogeboom, and Tim Salimans. Multistep consistency models. arXiv preprint arXiv:2403.06807, 2024. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. *arXiv preprint arXiv:2207.12598*, 2022. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. *arXiv preprint arXiv:2204.03458*, 2022. Emiel Hoogeboom, Vıctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. *arXiv preprint arXiv:2106.09685*, 2021. Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up gans for text-to-image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023a. Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up gans for text-to-image synthesis. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10124–10134, 2023b. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. In *International Conference on Learning Representations*, 2018. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 4401–4410, 2019. Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In *Proceedings of the IEEE/CVF conference on computer vision* and pattern recognition, pp. 8110–8119, 2020. Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. *Advances in Neural Information Processing Systems*, 34: 852–863, 2021. Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In *Proc. NeurIPS*, 2022. Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. *arXiv preprint arXiv:2310.02279*, 2023. Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, and Stefano Ermon. Pagoda: Progressive growing of a one-step generator from a low-resolution diffusion teacher. *arXiv preprint arXiv:2405.14822*, 2024. Heeseung Kim, Sungwon Kim, and Sungroh Yoon. Guided-tts: A diffusion model for text-to-speech via classifier guidance. In *International Conference on Machine Learning*, pp. 11119–11133. PMLR, 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint* arXiv:1412.6980, 2014. 
Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural* Information Processing Systems 31, pp. 10215–10224. 2018. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4015–4026, 2023. Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick-a-pic: An open dataset of user preferences for text-to-image generation. *arXiv preprint arXiv:2305.01569*, 2023. Dan Kondratyuk, Lijun Yu, Xiuye Gu, José Lezama, Jonathan Huang, Rachel Hornung, Hartwig Adam, Hassan Akbari, Yair Alon, Vighnesh Birodkar, et al. Videopoet: A large language model for zero-shot video generation. *arXiv preprint arXiv:2312.14125*, 2023. Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. *arXiv* preprint arXiv:2302.12192, 2023. Shanchuan Lin, Anran Wang, and Xiao Yang. Sdxl-lightning: Progressive adversarial diffusion distillation. arXiv preprint arXiv:2402.13929, 2024. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. *Advances in neural* information processing systems, 36, 2024a. Hongjian Liu, Qingsong Xie, Zhijie Deng, Chen Chen, Shixiang Tang, Fueyang Fu, Zheng-jun Zha, and Haonan Lu. Scott: Accelerating diffusion models with stochastic consistency distillation. arXiv preprint arXiv:2403.01505, 2024b. Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. *arXiv preprint arXiv:2310.04378*, 2023. Weijian Luo. 
A comprehensive survey on knowledge distillation of diffusion models. *arXiv preprint* arXiv:2304.04262, 2023. Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, and Zhihua Zhang. Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models. Advances in Neural Information Processing Systems, 36, 2024. Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Image synthesis and editing with stochastic differential equations. *arXiv preprint arXiv:2108.01073*, 2021. Chenlin Meng, Ruiqi Gao, Diederik P Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. *arXiv preprint arXiv:2210.03142*, 2022. Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. *Communications of the ACM*, 65(1):99–106, 2021. Thuan Hoang Nguyen and Anh Tran. Swiftbrush: One-step text-to-image diffusion model with variational score distillation. *arXiv preprint arXiv:2312.05239*, 2023. Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. arXiv preprint arXiv:2102.09672, 2021. Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. *Advances in neural information processing systems*, 35:27730–27744, 2022. Gaurav Parmar, Taesung Park, Srinivasa Narasimhan, and Jun-Yan Zhu. One-step image translation with text-to-image models. *arXiv preprint arXiv:2403.12036*, 2024. William Peebles and Saining Xie. 
Scalable diffusion models with transformers. *arXiv preprint* arXiv:2212.09748, 2022. Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. *arXiv* preprint arXiv:2307.01952, 2023. Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022. Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text-to-image diffusion models with reward backpropagation. *arXiv preprint arXiv:2310.03739*, 2023. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. *Advances in Neural* Information Processing Systems, 36, 2024. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140):1–67, 2020. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In *International Conference on Machine Learning*, pp. 8821–8831. PMLR, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 2022. Yuxi Ren, Xin Xia, Yanzuo Lu, Jiacheng Zhang, Jie Wu, Pan Xie, Xing Wang, and Xuefeng Xiao. Hyper-sd: Trajectory segmented consistency model for efficient image synthesis. *arXiv preprint arXiv:2404.13686*, 2024. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. 
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10684–10695, 2022.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*, 2022.

Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id=TIdIXIpzhoI.

Tim Salimans, Thomas Mensink, Jonathan Heek, and Emiel Hoogeboom. Multistep distillation of diffusion models via moment matching. *arXiv preprint arXiv:2406.04103*, 2024.

Axel Sauer, Kashyap Chitta, Jens Müller, and Andreas Geiger. Projected gans converge faster. *Advances in Neural Information Processing Systems*, 34:17480–17492, 2021.

Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In *ACM SIGGRAPH 2022 conference proceedings*, pp. 1–10, 2022.

Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, and Timo Aila. Stylegan-t: Unlocking the power of gans for fast large-scale text-to-image synthesis. In *International conference on machine learning*, pp. 30105–30118. PMLR, 2023a.

Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion distillation. *arXiv preprint arXiv:2311.17042*, 2023b.

Christoph Schuhmann. Laion-aesthetics. https://laion.ai/blog/laion-aesthetics/, 2022. Accessed: 2023-11-10.

Yang Song and Prafulla Dhariwal. Improved techniques for training consistency models. *arXiv preprint arXiv:2310.14189*, 2023.

Yang Song and Stefano Ermon. Generative Modeling by Estimating Gradients of the Data Distribution.
In Advances in Neural Information Processing Systems, pp. 11918–11930, 2019. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In *International Conference on* Learning Representations, 2020. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv preprint arXiv:2303.01469, 2023. Yuda Song, Zehao Sun, and Xuanwu Yin. Sdxs: Real-time one-step latent diffusion models with image conditions. *arXiv preprint arXiv:2403.16627*, 2024. Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. Csdi: Conditional score-based diffusion models for probabilistic time series imputation. In Advances in Neural Information Processing Systems (NeurIPS), 2021. Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. *arXiv preprint arXiv:2109.10686*, 2021. Katherine Tian, Eric Mitchell, Huaxiu Yao, Christopher D Manning, and Chelsea Finn. Fine-tuning language models for factuality. *arXiv preprint arXiv:2311.08401*, 2023. Pascal Vincent. A Connection Between Score Matching and Denoising Autoencoders. *Neural Computation*, 23(7):1661–1674, 2011. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8228–8238, 2024. Zhendong Wang, Huangjie Zheng, Pengcheng He, Weizhu Chen, and Mingyuan Zhou. Diffusion-gan: Training gans with diffusion. In *The Eleventh International Conference on Learning Representations*. Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. 
Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. arXiv preprint arXiv:2305.16213, 2023. Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to-image synthesis. arXiv preprint arXiv:2306.09341, 2023. Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the generative learning trilemma with denoising diffusion gans. In *International Conference on Learning Representations*, 2021. Sirui Xie, Zhisheng Xiao, Diederik P Kingma, Tingbo Hou, Ying Nian Wu, Kevin Patrick Murphy, Tim Salimans, Ben Poole, and Ruiqi Gao. Em distillation for one-step diffusion models. arXiv preprint arXiv:2405.16852, 2024. Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation, 2023. Yanwu Xu, Yang Zhao, Zhisheng Xiao, and Tingbo Hou. Ufogen: You forward once large scale text-toimage generation via diffusion gans. In *Proceedings of the IEEE/CVF Conference on Computer Vision* and Pattern Recognition, pp. 8196–8206, 2024. Hanshu Yan, Xingchao Liu, Jiachun Pan, Jun Hao Liew, Qiang Liu, and Jiashi Feng. Perflow: Piecewise rectified flow as universal plug-and-play accelerator. *arXiv preprint arXiv:2405.07510*, 2024. Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8941–8951, 2024. Tianwei Yin, Michaël Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. *arXiv preprint arXiv:2311.18828*, 2023. 
Tianwei Yin, Michaël Gharbi, Taesung Park, Richard Zhang, Eli Shechtman, Fredo Durand, and William T Freeman. Improved distribution matching distillation for fast image synthesis. *arXiv preprint arXiv:2405.14867*, 2024.

Bowen Zheng and Tianming Yang. Diffusion models are innate one-step generators. *arXiv preprint arXiv:2405.20750*, 2024.

Jianbin Zheng, Minghui Hu, Zhongyi Fan, Chaoyue Wang, Changxing Ding, Dacheng Tao, and Tat-Jen Cham. Trajectory consistency distillation. *arXiv preprint arXiv:2402.19159*, 2024.

Mingyuan Zhou, Zhendong Wang, Huangjie Zheng, and Hai Huang. Long and short guidance in score identity distillation for one-step text-to-image generation. *arXiv preprint arXiv:2406.01561*, 2024a.

Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, and Hai Huang. Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation. *arXiv preprint arXiv:2404.04057*, 2024b.

## Broader Impact Statement

This work is motivated by our aim to increase the positive impact of one-step text-to-image generative models by training them to follow human preferences. By default, one-step generators are either trained on large-scale image-caption pair datasets or distilled from pre-trained diffusion models, which convey only the knowledge of their training data, without human instructions. Our results indicate that the proposed approach is promising for making one-step generative models more aesthetically pleasing and more preferred by human users. In the longer term, alignment failures could lead to severe consequences, particularly if these models are deployed in safety-critical situations. For instance, if alignment fails, a one-step text-to-image model may generate toxic images with misleading information, or disturbing images that may frighten users.
We strongly recommend using our human preference alignment techniques together with AI safety checkers for text-to-image generation to prevent undesirable negative impacts.

## A Important Materials For Main Content

## A.1 Meanings Of Hyper-Parameters

As shown in Algorithm 1 (as well as Algorithm 2), each hyper-parameter has an intuitive meaning. The reward scale parameter α_rew controls the strength of human preference alignment: the larger α_rew is, the more strongly the generator is aligned with human preferences. The drawback of a too-large α_rew can be a loss of diversity and realism; at the same time, we empirically find that a larger α_rew leads to richer generation details and better generation layouts. The CFG scale controls the strength of classifier-free guidance when computing score functions of the reference diffusion model; as shown in Theorem 3.3, α_cfg represents the strength of the implicit reward function (3.8). We empirically find that the best CFG scale for Diff-Instruct++ coincides with the best CFG scale for sampling from the reference diffusion model. The diffusion model weighting λ(t) and the generator loss weighting w(t) control how strongly each time level contributes to the updates of the TA diffusion and the student generator, respectively. In practice, it works well to set λ(t) to the default training weighting function of the reference diffusion and to set w(t) = 1 for all time levels. In the following section, we give more discussions on Diff-Instruct++.

## A.2 Experiment Details For Pre-Training And Alignment

We follow the setting of Diff-Instruct (Luo et al., 2024) and use the same neural network architecture as the reference diffusion model for the one-step generator.
The PixelArt-α model is trained using so-called VP diffusion (Song et al., 2020), which first scales the data in the latent space and then adds noise to the scaled latent data. We reformulate VP diffusion in the *data-prediction* form proposed in the EDM paper (Karras et al., 2022) by re-scaling the noisy data with the inverse of the scale that VP diffusion applies to the data. Under the data-prediction formulation, we select a fixed noise level σ_init = 2.5, following Diff-Instruct and SiD (Zhou et al., 2024b). For generation, we first sample a Gaussian vector z ∼ p_z = N(0, σ_init² I) and input z into the generator to produce the latent. The latent vector can then be decoded by the VAE decoder into an image if needed.

The Training Setup and Costs. We train the model with the PyTorch framework. In the pre-training stage, we use the official checkpoint of the off-the-shelf PixelArt-α-512×512 model² as the weights of the reference diffusion. We initialize the TA diffusion model with the same weights as the reference diffusion model and use Diff-Instruct to pre-train the generator. We use the Adam optimizer for both the TA diffusion and the generator at all stages. For the reference diffusion model, we use a fixed classifier-free guidance scale of 4.5, while for the TA diffusion, we do not use classifier-free guidance (i.e., the CFG scale is set to 1.0). We set the Adam optimizer's beta parameters to β1 = 0.0 and β2 = 0.999 for both the pre-training and alignment stages. We use a learning rate of 5e-6 for both the TA diffusion and the student one-step generator. For the one-step generator model, we use the adaptive exponential moving average technique, following the implementation of EDM (Karras et al., 2022). We pre-train the one-step model on 4 Nvidia A100 GPUs for two days (4 × 48 = 192 GPU hours) with a batch size of 1024.

²https://huggingface.co/PixelArt-alpha/PixelArt-XL-2-512x512
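The one-step generation procedure described above (sample z from an inflated Gaussian, pass it through the generator) can be sketched in a few lines. The `generator` below is a hypothetical stand-in for the trained network g_θ, and the VAE decoding step is omitted:

```python
import numpy as np

SIGMA_INIT = 2.5  # fixed noise level sigma_init from the paper (following Diff-Instruct and SiD)

def one_step_generate(generator, latent_dim, rng, batch=4):
    """One-step sampling sketch: z ~ N(0, sigma_init^2 I), latent = generator(z).

    `generator` is any callable standing in for g_theta; in the paper the
    resulting latent is additionally decoded by a VAE decoder if an image
    is needed."""
    z = rng.normal(loc=0.0, scale=SIGMA_INIT, size=(batch, latent_dim))
    return generator(z)

rng = np.random.default_rng(0)
toy_generator = np.tanh  # placeholder network, NOT the actual PixelArt-α generator
latents = one_step_generate(toy_generator, latent_dim=8, rng=rng)
```

A single forward pass replaces the multi-step denoising loop of the reference diffusion model, which is what makes sampling one-step.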
We find that the Diff-Instruct algorithm converges fast, and after the pre-training stage, the generator can generate images of decent quality. In the alignment stage, we aim to inspect the one-step generator's behavior under different alignment configurations. Notice that, as we have shown in Theorem 3.3, using classifier-free guidance is secretly doing RLHF with Diff-Instruct++; therefore, we apply both CFG and the human reward with different scales to thoroughly study human preference alignment. More specifically, we align the generator model with five configurations of CFG scale and reward scale:

1. no CFG and no reward: 1.0 CFG scale and 0.0 reward scale; this is the weakest setting, which we regard as a baseline with no human preference alignment;
2. no CFG and weak reward: 1.0 CFG scale and 1.0 reward scale;
3. strong CFG and no reward: 4.5 CFG scale and 0.0 reward scale;
4. strong CFG and weak reward: 4.5 CFG scale and 1.0 reward scale;
5. strong CFG and strong reward: 4.5 CFG scale and 10.0 reward scale.

For all alignment training, we initialize the generator with the same weights obtained in the pre-training stage. We put the details of how to construct the one-step generator in Appendix C.1. We also initialize the TA diffusion model with the same weights as the reference diffusion. We use the Image Reward as the human preference reward and use the Diff-Instruct++ Algorithm 1 (or, equivalently, Algorithm 2) to fine-tune the generator. We use the Adam optimizer with parameters (β1, β2) = (0.0, 0.999) for both the generator and the TA diffusion, with a batch size of 256. For the alignment stage, we use a fixed exponential moving average (EMA) decay rate of 0.95 for all training trials. After alignment, the generator aligned with both strong CFG and a strong reward model shows significantly improved aesthetic appearance, better generation layout, and richer image details.
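The five alignment configurations above can be written down as plain data; the configuration names below are our own shorthand, not identifiers from the released code:

```python
# The five (CFG scale, reward scale) alignment configurations from Appendix A.2.
ALIGN_CONFIGS = {
    "no_cfg_no_reward":         (1.0, 0.0),   # baseline: no human preference alignment
    "no_cfg_weak_reward":       (1.0, 1.0),
    "strong_cfg_no_reward":     (4.5, 0.0),
    "strong_cfg_weak_reward":   (4.5, 1.0),
    "strong_cfg_strong_reward": (4.5, 10.0),
}

def describe(name):
    """Return a human-readable summary of one configuration."""
    cfg_scale, reward_scale = ALIGN_CONFIGS[name]
    return f"{name}: CFG scale {cfg_scale}, reward scale {reward_scale}"
```

Sweeping these pairs is what produces the DI++ rows of Table 1, where CFG and the human reward are varied independently.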
Figure 1 shows a demonstration of images generated by our aligned one-step generator with a CFG scale α_cfg of 4.5 and a reward scale α_rew of 1.0. We analyze these models in detail in Section 6.

## A.3 More Discussions On Findings Of Qualitative Evaluations

There are some other interesting findings when qualitatively evaluating different models. First, we find that images generated by the aligned model show better composition when organizing the contents of the image. For instance, the main objects of the generated image are smaller and show a more natural layout than in other models, with the objects and the background interacting aesthetically. This in turn reveals a human preference: humans prefer that the main object of an image does not take up all of its space. Second, we find that the aligned model produces richer details than the unaligned model; the stronger we align the model, the richer the details it generates. Sometimes these rich details hint at the input prompts, and sometimes they simply improve the aesthetic quality. We think this phenomenon may be caused by the fact that humans prefer images with rich details. Another finding is that, as the reward scale for alignment becomes stronger, the generated images become more colorful and more similar to paintings, which sometimes leads to a loss of realism to some degree. Therefore, users should choose among differently aligned one-step models with a trade-off between aesthetic quality and image realism according to the use case.

![20_image_0.png](20_image_0.png)

Figure: Generated images under different alignment configurations. The bottom row is the weakest setting with no human preference alignment; the upper rows are models aligned progressively more strongly. We give the prompts used to generate the images in Appendix D.2. The generated images become more and more aesthetic with stronger human preference alignment.
![21_image_0.png](21_image_0.png)

Figure 4: Qualitative comparison of our Diff-Instruct++ aligned models against other few-step text-to-image models in Table 1. The left three columns are randomly ordered, with one column generated by the PixelArt-α model with 30 steps, one by a one-step model aligned with Diff-Instruct++ with a CFG scale of 4.5 and a reward scale of 1.0, and another by a one-step model aligned with 4.5 CFG and 10.0 reward. Please zoom in to check details, lighting, and aesthetic quality. Could you tell which one you like best? We give the answer for each image and the prompts for the three rows in Appendix D.4.

![21_image_1.png](21_image_1.png)

Figure 5: Bad generation cases by the aligned one-step generator model (4.5 CFG + 1.0 reward).

Algorithm 2: Diff-Instruct++ Pseudo Code.

Input: prompt dataset C; generator g_θ(x_0|z, c); prior distribution p_z; reward model r(x, c); reward scale α_rew; CFG scale α_cfg; reference diffusion model s_ref(x_t|t, c); TA diffusion s_ψ(x_t|t, c); forward diffusion p(x_t|x_0) (2.1); TA diffusion update rounds K_TA; time distribution π(t); diffusion model weighting λ(t); generator IKL loss weighting w(t). Here sg[·] denotes the stop-gradient operator.

while not converged do
- fix θ, update ψ for K_TA rounds by:
  1. sample prompt c ∼ C; sample time t ∼ π(t); sample z ∼ p_z(z);
  2. generate fake data: x_0 = sg[g_θ(z, c)]; sample noisy data: x_t ∼ p_t(x_t|x_0);
  3. update ψ by minimizing the loss L(ψ) = λ(t) ∥s_ψ(x_t|t, c) − ∇_{x_t} log p_t(x_t|x_0)∥²_2;
- fix ψ, update θ using SGD:
  1. sample prompt c ∼ C; sample time t ∼ π(t); sample z ∼ p_z(z);
  2. generate fake data: x_0 = g_θ(z, c); sample noisy data: x_t ∼ p_t(x_t|x_0);
  3. compute the CFG score: s̃_ref(x_t|t, c) = s_ref(x_t|t, ∅) + α_cfg [s_ref(x_t|t, c) − s_ref(x_t|t, ∅)];
  4. compute the score difference: y_t := s_ψ(sg[x_t]|t, c) − s̃_ref(sg[x_t]|t, c);
  5. update θ by minimizing the loss L(θ) = −α_rew r(x_0, c) + w(t) y_t^T x_t;

end

return θ, ψ.

## B Theory

## B.1 Proof Of Theorem 3.1

Proof.
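The generator update of Algorithm 2 (steps 3–5 of the θ-update: CFG-guided reference score, score difference, and per-sample loss) can be sketched in plain numpy. The score functions, reward, and weighting below are toy stand-ins rather than the trained networks, and in this plain-numpy setting the stop-gradient sg[·] is simply the value itself:

```python
import numpy as np

def cfg_score(s_ref, x_t, t, c, alpha_cfg):
    """Step 3: classifier-free-guided reference score
    s~_ref = s_ref(x_t|t, empty) + alpha_cfg * (s_ref(x_t|t, c) - s_ref(x_t|t, empty))."""
    s_uncond = s_ref(x_t, t, None)  # None plays the role of the empty prompt
    s_cond = s_ref(x_t, t, c)
    return s_uncond + alpha_cfg * (s_cond - s_uncond)

def generator_loss(x0, x_t, t, c, reward, s_ta, s_ref, alpha_rew, alpha_cfg, w):
    """Steps 4-5: y_t = s_ta(x_t|t,c) - s~_ref(x_t|t,c), then
    L(theta) = -alpha_rew * r(x0, c) + w(t) * y_t^T x_t."""
    y_t = s_ta(x_t, t, c) - cfg_score(s_ref, x_t, t, c, alpha_cfg)
    return -alpha_rew * reward(x0, c) + w(t) * float(y_t @ x_t)

# Toy stand-ins: reference score of N(0, I), TA score of N(0.5, I), both ignoring (t, c).
s_ref = lambda x, t, c: -x
s_ta = lambda x, t, c: -(x - 0.5)
reward = lambda x0, c: -float(x0 @ x0)  # toy reward preferring small-norm samples

x = np.ones(4)
loss = generator_loss(x, x, t=0.5, c="a cat", reward=reward, s_ta=s_ta, s_ref=s_ref,
                      alpha_rew=1.0, alpha_cfg=4.5, w=lambda t: 1.0)  # -> 6.0
```

Note that with identical conditional and unconditional toy scores the CFG term reduces to the unconditional score, so here the loss simplifies to −r(x0) + y_tᵀx_t = 4 + 2 = 6.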
Recall that pθ(·) is induced by the generator gθ(·): a sample is obtained by x = gθ(z|c), z ∼ pz, so x depends on the parameter θ through gθ. To make this parameter dependence explicit, we use the notation pθ(·); pref(·) is the reference distribution. The alignment objective writes

$$\mathcal{L}(\theta)=\mathbb{E}_{\mathbf{c},\mathbf{x}\sim p_{\theta}(\mathbf{x}|\mathbf{c})}\Big\{r(\mathbf{x},\mathbf{c})+\beta\big[\log p_{\theta}(\mathbf{x}|\mathbf{c})-\log p_{ref}(\mathbf{x}|\mathbf{c})\big]\Big\}\tag{B.1}$$

$$=\mathbb{E}_{\mathbf{c},\mathbf{z}\sim p_{z}}\Big\{r(g_{\theta}(\mathbf{z}|\mathbf{c}),\mathbf{c})+\beta\big[\log p_{\theta}(g_{\theta}(\mathbf{z}|\mathbf{c})|\mathbf{c})-\log p_{ref}(g_{\theta}(\mathbf{z}|\mathbf{c})|\mathbf{c})\big]\Big\}\tag{B.2}$$

Therefore, the θ gradient of L(θ) writes:

$$\frac{\partial}{\partial\theta}\mathcal{L}(\theta)=\mathbb{E}_{\substack{\mathbf{c}\sim p_{c},\mathbf{z}\sim p_{z}\\ \mathbf{x}=g_{\theta}(\mathbf{z}|\mathbf{c})}}\Big\{\nabla_{\mathbf{x}}\Big(r(\mathbf{x},\mathbf{c})+\beta\big[\log p_{\theta}(\mathbf{x}|\mathbf{c})-\log p_{ref}(\mathbf{x}|\mathbf{c})\big]\Big)\frac{\partial\mathbf{x}}{\partial\theta}\Big\}+\mathbb{E}_{\substack{\mathbf{c}\sim p_{c}\\ \mathbf{x}\sim p_{\theta}(\cdot|\mathbf{c})}}\Big\{\beta\frac{\partial}{\partial\theta}\log p_{\theta}(\mathbf{x}|\mathbf{c})\Big\}\tag{B.3}$$

The first term of equation (B.3) is the result of Theorem 3.1. Next, we show that the second term of (B.3) vanishes:

$$\mathbb{E}_{\substack{\mathbf{c}\sim p_{c}\\ \mathbf{x}\sim p_{\theta}(\cdot|\mathbf{c})}}\Big[\frac{\partial}{\partial\theta}\log p_{\theta}(\mathbf{x}|\mathbf{c})\Big]=\mathbb{E}_{\mathbf{c}\sim p_{c}}\int\frac{1}{p_{\theta}(\mathbf{x}|\mathbf{c})}\Big(\frac{\partial}{\partial\theta}p_{\theta}(\mathbf{x}|\mathbf{c})\Big)p_{\theta}(\mathbf{x}|\mathbf{c})\,\mathrm{d}\mathbf{x}=\mathbb{E}_{\mathbf{c}\sim p_{c}}\int\frac{\partial}{\partial\theta}p_{\theta}(\mathbf{x}|\mathbf{c})\,\mathrm{d}\mathbf{x}\tag{B.4}$$

$$=\mathbb{E}_{\mathbf{c}\sim p_{c}}\,\frac{\partial}{\partial\theta}\int p_{\theta}(\mathbf{x}|\mathbf{c})\,\mathrm{d}\mathbf{x}\tag{B.5}$$

$$=\mathbb{E}_{\mathbf{c}\sim p_{c}}\,\frac{\partial}{\partial\theta}1=0\tag{B.6}$$

The exchange of derivative and integral from (B.4) to (B.5) holds if the function pθ(x|c) satisfies the conditions: (1) pθ(x|c) is Lebesgue integrable in x for each θ; (2) for almost all x ∈ R^D, the partial derivative ∂pθ(x|c)/∂θ exists for all θ ∈ Θ; (3) there exists an integrable function h(·) : R^D → R such that pθ(x|c) ≤ h(x) for all x in its domain. Then the derivative w.r.t. θ can be exchanged with the integral over x, i.e. 
$$\int\frac{\partial}{\partial\theta}p_{\theta}(\mathbf{x}|\mathbf{c})\,\mathrm{d}\mathbf{x}=\frac{\partial}{\partial\theta}\int p_{\theta}(\mathbf{x}|\mathbf{c})\,\mathrm{d}\mathbf{x}.$$

$$\square$$

## B.2 Proof Of Theorem 3.2

Proof. The proof of Theorem 3.2 is a direct generalization of the proof of Theorem 3.1 in Appendix B.1. The gradient in question is

$$\mathrm{Grad}(\theta)=\mathbb{E}_{\substack{\mathbf{c},t,\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}|\mathbf{x}_{0}\sim p_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})}}\Big\{-\nabla_{\mathbf{x}_{0}}r(\mathbf{x}_{0},\mathbf{c})\frac{\partial\mathbf{x}_{0}}{\partial\theta}+\beta w(t)\big[\mathbf{s}_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})\big]\frac{\partial\mathbf{x}_{t}}{\partial\theta}\Big\}.$$

Recall the definition of pθ(·|t, c): a sample is obtained by x0 = gθ(z|c), z ∼ pz, and xt|x0 ∼ pt(xt|x0) according to the forward SDE (2.1). Since the solution of the forward SDE is uniquely determined by the initial point x0 and a trajectory of the Wiener process wt∈[0,T], we slightly abuse notation and let xt = F(gθ(z|c), w, t) denote the solution xt generated by x0 and w. We write w[0,T] ∼ Pw for a trajectory of the Wiener process, where Pw is the path measure of the Wiener process on t ∈ [0, T]. There are two terms that contain the generator's parameter θ. The term xt contains the parameter through x0 = gθ(z|c), z ∼ pz. The marginal density pθ(·|t, c) also contains the parameter θ implicitly, since pθ(·|t, c) is initialized with the generator output distribution pθ(·|t = 0, c). The distribution pref(·|t, c) is defined through the pre-trained diffusion model with score functions sref(·|t, c). 
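The vanishing of the second term in both proofs is the standard log-derivative trick: the expected θ-gradient of log pθ under pθ itself is zero. This can be checked by Monte Carlo on a 1D Gaussian family; the Gaussian choice and names below are our illustrative assumptions, not part of the proof.

```python
import random

# model family: p_theta = N(theta, 1), so d/dtheta log p_theta(x) = x - theta
def grad_log_p(x, theta):
    return x - theta

theta = 1.7
rng = random.Random(0)
samples = [rng.gauss(theta, 1.0) for _ in range(200_000)]
mc = sum(grad_log_p(x, theta) for x in samples) / len(samples)
# E_{x ~ p_theta}[d/dtheta log p_theta(x)] = 0; the Monte Carlo average is near zero
assert abs(mc) < 0.02
```

The same cancellation is what lets both proofs drop the implicit dependence of the density on θ and keep only the pathwise gradient through the sample.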
The alignment objective between pθ(·|t, c) and pref(·|t, c) is defined as

$$\mathcal{L}(\theta)=\mathbb{E}_{\substack{\mathbf{c},t,\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}|\mathbf{x}_{0}\sim p_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})}}\Big\{-r(\mathbf{x}_{0},\mathbf{c})+\beta w(t)\big[\log p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})-\log p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})\big]\Big\}\tag{B.7}$$

Therefore, the θ gradient of L(θ) writes:

$$\frac{\partial}{\partial\theta}\mathcal{L}(\theta)=\frac{\partial}{\partial\theta}\mathbb{E}_{\substack{\mathbf{c},\mathbf{z}\sim p_{z}\\ \mathbf{w}\sim P_{w}}}\Big\{-r(g_{\theta}(\mathbf{z}|\mathbf{c}),\mathbf{c})+\beta w(t)\big[\log p_{\theta}(F(g_{\theta}(\mathbf{z}|\mathbf{c}),\mathbf{w},t)|t,\mathbf{c})-\log p_{ref}(F(g_{\theta}(\mathbf{z}|\mathbf{c}),\mathbf{w},t)|t,\mathbf{c})\big]\Big\}$$

$$=\mathbb{E}_{\substack{\mathbf{c}\sim p_{c},\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}=F(\mathbf{x}_{0},\mathbf{w},t)}}\Big\{-\nabla_{\mathbf{x}_{0}}r(\mathbf{x}_{0},\mathbf{c})\frac{\partial\mathbf{x}_{0}}{\partial\theta}+\beta w(t)\big[\nabla_{\mathbf{x}_{t}}\log p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})-\nabla_{\mathbf{x}_{t}}\log p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})\big]\frac{\partial\mathbf{x}_{t}}{\partial\theta}\Big\}\tag{B.8}$$

$$\quad+\mathbb{E}_{\substack{\mathbf{c}\sim p_{c},\mathbf{z}\sim p_{z}\\ \mathbf{x}_{t}\sim p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}}\Big\{\beta w(t)\frac{\partial}{\partial\theta}\log p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\Big\}\tag{B.9}$$

The first term (B.8) is what we want in Theorem 3.2. We now show that the second term (B.9) vanishes under mild conditions. The term (B.9) writes

$$\mathbb{E}_{\substack{\mathbf{c}\sim p_{c}\\ \mathbf{x}_{t}\sim p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}}\Big[\frac{\partial}{\partial\theta}\log p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\Big]=\mathbb{E}_{\mathbf{c}\sim p_{c}}\int\frac{1}{p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}\Big(\frac{\partial}{\partial\theta}p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\Big)p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\,\mathrm{d}\mathbf{x}_{t}=\mathbb{E}_{\mathbf{c}\sim p_{c}}\int\frac{\partial}{\partial\theta}p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\,\mathrm{d}\mathbf{x}_{t}\tag{B.10}$$

$$=\mathbb{E}_{\mathbf{c}\sim p_{c}}\,\frac{\partial}{\partial\theta}\int p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\,\mathrm{d}\mathbf{x}_{t}\tag{B.11}$$

$$=\mathbb{E}_{\mathbf{c}\sim p_{c}}\,\frac{\partial}{\partial\theta}1=0\tag{B.12}$$

The exchange of derivative and integral from (B.10) to (B.11) holds if the function pθ(xt|t, c) satisfies the conditions: (1) pθ(xt|t, c) is Lebesgue integrable in xt for each θ; (2) for almost all xt ∈ R^D, the partial derivative ∂pθ(xt|t, c)/∂θ exists for all θ ∈ Θ; (3) there exists an integrable function h(·) : R^D → R such that pθ(xt|t, c) ≤ h(xt) for all xt in its domain. Then the derivative w.r.t. θ can be exchanged with the integral over xt, i.e.

$$\int\frac{\partial}{\partial\theta}p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\,\mathrm{d}\mathbf{x}_{t}=\frac{\partial}{\partial\theta}\int p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})\,\mathrm{d}\mathbf{x}_{t}.$$

$\square$

Remark B.1. 
In practice, most commonly used forward diffusion processes can be expressed as a form of scaling plus noise addition:

$$\mathbf{x}_{t}=\alpha(t)\mathbf{x}_{0}+\beta(t)\epsilon,\quad\epsilon\sim\mathcal{N}(\epsilon;\mathbf{0},\mathbf{I}).$$

So the terms z ∼ pz, w ∼ Pw, xt = F(x0, w, t) in equation (??) can be instantiated as z ∼ pz, ϵ ∼ N(ϵ; 0, I), xt = α(t)x0 + β(t)ϵ.

## B.3 Proof Of Theorem 3.5

Recall the pseudo loss (3.10):

$$\mathcal{L}_{p}(\theta)=\mathbb{E}_{\substack{\mathbf{c},t,\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}|\mathbf{x}_{0}\sim p(\mathbf{x}_{t}|\mathbf{x}_{0})}}\Big\{-r(\mathbf{x}_{0},\mathbf{c})+\beta w(t)\mathbf{y}_{t}^{T}\mathbf{x}_{t}\Big\},\tag{B.13}$$

$$\mathbf{y}_{t}:=\mathbf{s}_{\mathrm{Sta}[\theta]}(\mathrm{Sta}[\mathbf{x}_{t}]|t,\mathbf{c})-\mathbf{s}_{ref}(\mathrm{Sta}[\mathbf{x}_{t}]|t,\mathbf{c})$$

Since all θ dependence of yt is cut out, yt can be regarded as a constant tensor. Taking the θ gradient of (3.10) leads to

$$\frac{\partial}{\partial\theta}\mathcal{L}_{p}(\theta)=\mathbb{E}_{\substack{\mathbf{c},t,\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}|\mathbf{x}_{0}\sim p(\mathbf{x}_{t}|\mathbf{x}_{0})}}\Big\{-\nabla_{\mathbf{x}_{0}}r(\mathbf{x}_{0},\mathbf{c})\frac{\partial\mathbf{x}_{0}}{\partial\theta}+\beta w(t)\mathbf{y}_{t}^{T}\frac{\partial\mathbf{x}_{t}}{\partial\theta}\Big\}$$

$$=\mathbb{E}_{\substack{\mathbf{c},t,\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}|\mathbf{x}_{0}\sim p(\mathbf{x}_{t}|\mathbf{x}_{0})}}\Big\{-\nabla_{\mathbf{x}_{0}}r(\mathbf{x}_{0},\mathbf{c})\frac{\partial\mathbf{x}_{0}}{\partial\theta}+\beta w(t)\big[\mathbf{s}_{\mathrm{Sta}[\theta]}(\mathrm{Sta}[\mathbf{x}_{t}]|t,\mathbf{c})-\mathbf{s}_{ref}(\mathrm{Sta}[\mathbf{x}_{t}]|t,\mathbf{c})\big]^{T}\frac{\partial\mathbf{x}_{t}}{\partial\theta}\Big\}$$

$$=\mathbb{E}_{\substack{\mathbf{c},t,\mathbf{z}\sim p_{z},\mathbf{x}_{0}=g_{\theta}(\mathbf{z}|\mathbf{c})\\ \mathbf{x}_{t}|\mathbf{x}_{0}\sim p(\mathbf{x}_{t}|\mathbf{x}_{0})}}\Big\{-\nabla_{\mathbf{x}_{0}}r(\mathbf{x}_{0},\mathbf{c})\frac{\partial\mathbf{x}_{0}}{\partial\theta}+\beta w(t)\big[\mathbf{s}_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})-\mathbf{s}_{ref}(\mathbf{x}_{t}|t,\mathbf{c})\big]^{T}\frac{\partial\mathbf{x}_{t}}{\partial\theta}\Big\}$$

This is exactly the gradient term of Diff-Instruct++.

## B.4 Proof Of Theorem 3.3

Proof. Recall the definition of the reward behind classifier-free guidance (3.8). The reward writes

$$r(\mathbf{x}_{0},\mathbf{c})=\int_{t=0}^{T}w(t)\log\frac{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t)}\,\mathrm{d}t$$

This reward assigns higher values to samples whose class-conditional probability exceeds their unconditional probability, therefore encouraging class-conditional sampling. 
To make the derivation clearer, we consider a single time level t and the corresponding reward

$$r(\mathbf{x}_{t},t,\mathbf{c}):=w(t)\log\frac{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t)}.\tag{B.14}$$

The final result is then an integral over all single time levels t. With a single time level, we consider the alignment problem of minimizing

$$\mathcal{L}(\theta)=\mathbb{E}_{\mathbf{c},\mathbf{x}_{t}\sim p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}\big[-r(\mathbf{x}_{t},t,\mathbf{c})\big]+\beta w(t)\mathcal{D}_{KL}(p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c}),p_{ref}(\mathbf{x}_{t}|t,\mathbf{c}))$$

$$=\mathbb{E}_{\mathbf{c},\mathbf{x}_{t}\sim p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}\big[-r(\mathbf{x}_{t},t,\mathbf{c})\big]+\beta w(t)\mathbb{E}_{p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}\log\frac{p_{\theta}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}\tag{B.15}$$

The optimal distribution pθ∗(xt|t, c) that minimizes the objective (B.15) satisfies equation (B.16):

$$r(\mathbf{x}_{t},t,\mathbf{c})=\beta w(t)\log\frac{p_{\theta^{*}}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}+C(\mathbf{c})\tag{B.16}$$

This means

$$\log\frac{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t)}=\beta\log\frac{p_{\theta^{*}}(\mathbf{x}_{t}|t,\mathbf{c})}{p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})}+C(\mathbf{c})\tag{B.17}$$

The C(c) in equation (B.17) is an unknown constant that is independent of xt and θ. 
Then we obtain the formula for the optimal distribution

$$\log p_{\theta^{*}}(\mathbf{x}_{t}|t,\mathbf{c})=\log p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})+\frac{1}{\beta}\Big\{\log p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})-\log p_{ref}(\mathbf{x}_{t}|t)\Big\}-\frac{1}{\beta}C(\mathbf{c})$$

$$=\log p_{ref}(\mathbf{x}_{t}|t)+\Big(1+\frac{1}{\beta}\Big)\Big\{\log p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})-\log p_{ref}(\mathbf{x}_{t}|t)\Big\}-\frac{1}{\beta}C(\mathbf{c})\tag{B.18}$$

Besides, we can see that the score function of the optimal distribution writes

$$\nabla_{\mathbf{x}_{t}}\log p_{\theta^{*}}(\mathbf{x}_{t}|t,\mathbf{c})=\nabla_{\mathbf{x}_{t}}\log p_{ref}(\mathbf{x}_{t}|t)+\Big(1+\frac{1}{\beta}\Big)\Big\{\nabla_{\mathbf{x}_{t}}\log p_{ref}(\mathbf{x}_{t}|t,\mathbf{c})-\nabla_{\mathbf{x}_{t}}\log p_{ref}(\mathbf{x}_{t}|t)\Big\}\tag{B.19}$$

Equation (B.19) recovers the classifier-free guided score function with guidance scale 1 + 1/β. The final result is just an integral of (B.19). Our results show that performing diffusion distillation with the classifier-free guided score in Diff-Instruct (i.e. equation (3.9)) is secretly doing RLHF (i.e. Diff-Instruct++) with the reward (B.14). Besides, our results also bring a new perspective: when sampling from a diffusion model using CFG, the user is secretly doing inference-time RLHF, and the so-called CFG scale sets the RLHF strength.

## C Experiments And Results

## C.1 More Experiment Details On Text-To-Image Distillation

We follow the experiment setting of Diff-Instruct (Luo et al., 2024), generalizing its CIFAR10 experiment to text-to-image generation. Note that Diff-Instruct uses the EDM model (Karras et al., 2022) to formulate the diffusion model, as well as the one-step generator. We start with a brief introduction to the EDM model. 
The EDM model depends on the diffusion process

$$\mathrm{d}\mathbf{x}_{t}=t\,\mathrm{d}\mathbf{w}_{t},\quad t\in[0,T].\tag{C.1}$$

Samples from the forward process (C.1) can be generated by adding random noise to the output of the generator function, i.e., xt = x0 + tϵ, where ϵ ∼ N(0, I) is a Gaussian vector. The EDM model also reformulates the diffusion model's score matching objective as a denoising regression objective, which writes

$$\mathcal{L}(\psi)=\int_{t=0}^{T}\lambda(t)\,\mathbb{E}_{\mathbf{x}_{0}\sim p_{0},\,\mathbf{x}_{t}|\mathbf{x}_{0}\sim p_{t}(\mathbf{x}_{t}|\mathbf{x}_{0})}\|\mathbf{d}_{\psi}(\mathbf{x}_{t},t)-\mathbf{x}_{0}\|_{2}^{2}\,\mathrm{d}t.\tag{C.2}$$

Here dψ(·) is a denoiser network that tries to predict the clean sample by taking noisy samples as inputs. Minimizing the loss (C.2) leads to a trained denoiser, which has a simple relation to the marginal score functions:

$$\mathbf{s}_{\psi}(\mathbf{x}_{t},t)=\frac{\mathbf{d}_{\psi}(\mathbf{x}_{t},t)-\mathbf{x}_{t}}{t^{2}}\tag{C.3}$$

Under such a formulation, we in fact have pre-trained denoiser models available for experiments. Therefore, we use the EDM notation in later parts.

Construction of the one-step generator. Let dθ(·) be a pre-trained EDM denoiser model. Owing to the denoiser formulation of the EDM model, we construct the generator to have the same architecture as the pre-trained EDM denoiser with a pre-selected index t∗, which writes

$$\mathbf{x}_{0}=g_{\theta}(\mathbf{z}):=\mathbf{d}_{\theta}(\mathbf{z},t^{*}),\quad\mathbf{z}\sim\mathcal{N}(\mathbf{0},(t^{*})^{2}\mathbf{I}).\tag{C.4}$$

We initialize the generator with the same parameters as the teacher EDM denoiser model.

Time index distribution. When training both the EDM diffusion model and the generator, we need to randomly select a time t in order to approximate the integral of the loss function (C.2). 
The EDM model's default choice of t distribution when training the diffusion (denoiser) model is log-normal, i.e.

$$t\sim p_{EDM}(t):\quad t=\exp(s),\tag{C.5}$$

$$s\sim\mathcal{N}(P_{mean},P_{std}^{2}),\quad P_{mean}=-2.0,\ P_{std}=2.0,\tag{C.6}$$

with a weighting function

$$\lambda_{EDM}(t)=\frac{(t^{2}+\sigma_{data}^{2})}{(t\times\sigma_{data})^{2}}.\tag{C.7}$$

In our algorithm, we follow the same setting as the EDM model when updating the online diffusion (denoiser) model.

Weighting function. For the TA diffusion updates in both pre-training and alignment, we use the same λEDM(t) (C.7) weighting function as EDM when updating the denoiser model. When updating the generator, we use a specially designed weighting function, which writes:

$$w_{Gen}(t)=\frac{1}{\|\mathbf{d}_{\psi}(\mathrm{Sta}[\mathbf{x}_{t}],t)-\mathbf{d}_{ref}(\mathrm{Sta}[\mathbf{x}_{t}],t)\|_{2}},\tag{C.8}$$

$$\mathbf{x}_{t}=\mathbf{x}_{0}+t\epsilon,\quad\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\tag{C.9}$$

The notation Sta[·] denotes the stop-gradient of a parameter. Such a weighting function helps to stabilize training.

In the text-to-image distillation part, in order to align our experiments with those on CIFAR10, we rewrite the PixelArt-α model in the EDM formulation:

$$D_{\theta}(\mathbf{x};\sigma)=\mathbf{x}-\sigma F_{\theta}\tag{C.10}$$

Here, following the iDDPM+DDIM preconditioning in EDM, PixelArt-α is denoted by Fθ, and x is the image data plus noise with a standard deviation of σ; the remaining parameters, such as C1 and C2, are kept unchanged to match those defined in EDM. Unlike the original model, we only retain the image channels in the output of this model. 
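The denoiser-to-score relation (C.3) can be sanity-checked on a 1D Gaussian toy case where the optimal denoiser is known in closed form; the data distribution N(0, σ_d²) and all names below are our illustrative assumptions, not part of the pipeline.

```python
def optimal_denoiser(xt, t, sigma_d):
    # posterior mean E[x0 | xt] for x0 ~ N(0, sigma_d^2) and xt = x0 + t * eps
    return sigma_d ** 2 / (sigma_d ** 2 + t ** 2) * xt

def marginal_score(xt, t, sigma_d):
    # score of the marginal p_t = N(0, sigma_d^2 + t^2)
    return -xt / (sigma_d ** 2 + t ** 2)

sigma_d, t, xt = 0.5, 2.0, 1.3
# equation (C.3): score = (denoiser output - input) / t^2
lhs = (optimal_denoiser(xt, t, sigma_d) - xt) / t ** 2
assert abs(lhs - marginal_score(xt, t, sigma_d)) < 1e-12
```

For this Gaussian case the identity holds exactly, since (σ_d²/(σ_d²+t²) − 1)xt/t² simplifies to −xt/(σ_d²+t²).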
Since we employ the iDDPM+DDIM preconditioning in EDM, each σ value is rounded to the nearest of 1000 bins after being passed into the model. For the actual values used in PixelArt-α, beta_start is set to 0.0001 and beta_end is set to 0.02. Therefore, according to the EDM formulation, the range of our noise distribution is [0.01, 156.6155], which is used to truncate the sampled σ. Our one-step generator is formulated as:

$$g_{\theta}(\mathbf{x};\sigma_{\mathrm{init}})=\mathbf{x}-\sigma_{\mathrm{init}}F_{\theta}\tag{C.11}$$

Following Diff-Instruct, we use σinit = 2.5 and x ∼ N(0, σinit I). We observed in practice that larger values of σinit lead to faster convergence of the model, but the difference in convergence speed is negligible over a complete training run and has minimal impact on the final results. We utilize the SAM-LLaVA-Caption10M dataset, which comprises prompts generated by the LLaVA model on the SAM dataset. These prompts provide detailed descriptions of the images, thereby offering a challenging set of samples for our distillation experiments. All experiments in this section were conducted with bfloat16 precision, using the PixelArt-XL-2-512x512 model version and the same hyperparameters. For both optimizers, we utilized Adam with a learning rate of 5e-6 and betas = [0, 0.999]. Finally, regarding the training noise distribution, instead of adhering to the original iDDPM schedule, we sample σ from a log-normal distribution with a mean of -2.0 and a standard deviation of 2.0; we use the same noise distribution for both optimization steps and set both loss weightings to the constant 1. Our best model was trained on the SAM Caption dataset for approximately 16k iterations, which is equivalent to less than 2 epochs. This training process took about 2 days on 4 A100-40G GPUs. 
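The noise sampling just described (log-normal σ with mean −2.0 and standard deviation 2.0, truncated to [0.01, 156.6155]) can be sketched as below; the helper names and the σ_data = 0.5 default in the EDM weighting are our assumptions for illustration, not values stated for PixelArt-α.

```python
import math
import random

SIGMA_MIN, SIGMA_MAX = 0.01, 156.6155  # noise range in the EDM formulation

def sample_sigma(rng, mean=-2.0, std=2.0):
    # sigma = exp(s), s ~ N(mean, std^2), truncated to the valid noise range
    s = rng.gauss(mean, std)
    return min(max(math.exp(s), SIGMA_MIN), SIGMA_MAX)

def lambda_edm(t, sigma_data=0.5):
    # EDM loss weighting (C.7): (t^2 + sigma_data^2) / (t * sigma_data)^2
    return (t ** 2 + sigma_data ** 2) / (t * sigma_data) ** 2

rng = random.Random(0)
sigmas = [sample_sigma(rng) for _ in range(10_000)]
assert all(SIGMA_MIN <= s <= SIGMA_MAX for s in sigmas)
assert lambda_edm(1.0) == 5.0  # (1 + 0.25) / 0.25
```

With mean −2.0 most sampled σ values are small, so the truncation at either end of the range fires only for rare tail draws.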
With the optimal settings and the EDM formulation, we can rewrite our algorithm in EDM style in Algorithm 3.

| Hyperparameter | Pre-Training (Diff-Instruct) DM sψ | Pre-Training (Diff-Instruct) Generator gθ | Alignment (Diff-Instruct++) DM sψ | Alignment (Diff-Instruct++) Generator gθ |
|----------------|------------------------------------|-------------------------------------------|-----------------------------------|------------------------------------------|
| Learning rate | 5e-6 | 5e-6 | 5e-6 | 5e-6 |
| Batch size | 1024 | 1024 | 256 | 256 |
| σ(t∗) | 2.5 | 2.5 | 2.5 | 2.5 |
| Adam β0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Adam β1 | 0.999 | 0.999 | 0.999 | 0.999 |
| Time distribution | pEDM(t) (C.5) | pEDM(t) (C.5) | pEDM(t) (C.5) | pEDM(t) (C.5) |
| Weighting | λEDM(t) (C.7) | 1 | λEDM(t) (C.7) | 1 |
| Number of GPUs | 4×A100-40G | 4×A100-40G | 4×H800-80G | 4×H800-80G |

Table 2: Hyperparameters used for Diff-Instruct++ on aligning one-step generator models.

Algorithm 3: Diff-Instruct++ Pseudo Code under EDM formulation.
Input: prompt dataset C, generator gθ(x0|z, c), prior distribution pz, reward model r(x, c), reward scale αrew, CFG scale αcfg, reference EDM denoiser model dref(xt|t, c), TA EDM denoiser dψ(xt|t, c), forward diffusion p(xt|x0) (2.1), TA EDM denoiser update rounds KTA, time distribution π(t), diffusion model weighting λ(t), generator IKL loss weighting w(t).
while *not converged* do
fix θ, update ψ for KTA rounds by:
1. sample prompt c ∼ C; sample time t ∼ π(t); sample z ∼ pz(z);
2. generate fake data: x0 = Sta[gθ(z, c)]; sample noisy data: xt ∼ pt(xt|x0);
3. update ψ by minimizing the loss: L(ψ) = λ(t)∥dψ(xt|t, c) − x0∥²₂;
fix ψ, update θ using SGD:
1. sample prompt c ∼ C; sample time t ∼ π(t); sample z ∼ pz(z);
2. generate fake data: x0 = gθ(z, c); sample noisy data: xt ∼ pt(xt|x0);
3. compute the CFG denoiser: d̃ref(xt|t, c) = dref(xt|t, ∅) + αcfg[dref(xt|t, c) − dref(xt|t, ∅)];
4. compute the denoiser difference: yt := dψ(Sta[xt]|t, c) − d̃ref(Sta[xt]|t, c);
5. update θ by minimizing the loss: L(θ) = −αrew r(x0, c) + w(t) yᵀt xt;
end
return *θ, ψ*.

## C.2 Pytorch Style Pseudo-Code Of Score Implicit Matching

In this section, we give PyTorch-style pseudo-code for Algorithm 3.

```python
import copy

import torch
import torch.nn as nn
import torch.optim as optim

# Initialize generator G
G = Generator()

## load teacher DM
Drf = DiffusionModel().load('/path_to_ckpt').eval().requires_grad_(False)
Dta = copy.deepcopy(Drf)  ## initialize online DM with teacher DM
r = RewardModel() if alignment_stage else None

# Define optimizers
optimizer_G = optim.Adam(G.parameters(), lr=0.001, betas=(0.0, 0.999))
optimizer_Dta = optim.Adam(Dta.parameters(), lr=0.001, betas=(0.0, 0.999))

# Training loop
while True:
    ## update the TA diffusion Dta
    Dta.train().requires_grad_(True)
    G.eval().requires_grad_(False)

    prompt = batch['prompt']
    z = torch.randn((1024, 4, 64, 64), device=G.device)
    with torch.no_grad():
        fake_x0 = G(z, prompt)

    sigma = torch.exp(2.0 * torch.randn([1, 1, 1, 1], device=fake_x0.device) - 2.0)
    noise = torch.randn_like(fake_x0)
    fake_xt = fake_x0 + sigma * noise
    pred_x0 = Dta(fake_xt, sigma, prompt)

    weight = compute_diffusion_weight(sigma)

    batch_loss = weight * (pred_x0 - fake_x0) ** 2
    batch_loss = batch_loss.sum([1, 2, 3]).mean()

    optimizer_Dta.zero_grad()
    batch_loss.backward()
    optimizer_Dta.step()

    ## update the generator G
    Dta.eval().requires_grad_(False)
    G.train().requires_grad_(True)

    prompt = batch['prompt']
    z = torch.randn((1024, 4, 64, 64), device=G.device)
    fake_x0 = G(z, prompt)

    sigma = torch.exp(2.0 * torch.randn([1, 1, 1, 1], device=fake_x0.device) - 2.0)
    noise = torch.randn_like(fake_x0)
    fake_xt = fake_x0 + sigma * noise

    with torch.no_grad():
        if use_cfg:
            pred_x0_rf = Drf(fake_xt, sigma, None) \
                + cfg_scale * (Drf(fake_xt, sigma, prompt) - Drf(fake_xt, sigma, None))
        else:
            pred_x0_rf = Drf(fake_xt, sigma, prompt)

        pred_x0_ta = Dta(fake_xt, sigma, prompt)

        denoise_diff = pred_x0_ta - pred_x0_rf
        weight = compute_G_weight(sigma, denoise_diff)

    batch_loss = weight * denoise_diff * fake_xt

    ## compute reward loss if needed
    if alignment_stage:
        reward_loss = -reward_scale * r(fake_x0, prompt)
        batch_loss += reward_loss

    batch_loss = batch_loss.sum([1, 2, 3]).mean()

    optimizer_G.zero_grad()
    batch_loss.backward()
    optimizer_G.step()
```

Listing 1: PyTorch-style pseudo-code of Diff-Instruct++.

## D Prompts

## D.1 Prompts For Figure 1

The prompts are listed from the first row to the second row; from left to right.

- *A small cactus with a happy face in the Sahara desert.*
- *A dog that has been meditating all the time.*
- *A alpaca made of colorful building blocks, cyberpunk.*
- *A dog is reading a thick book.*
- *A delicate apple(universe of stars inside the apple) made of opal hung on a branch in the early morning light, adorned with glistening dewdrops. in the background beautiful valleys, divine iridescent glowing, opalescent textures, volumetric light, ethereal, sparkling, light inside body, bioluminescence, studio photo, highly detailed, sharp focus, photorealism, photorealism, 8k, best quality, ultra detail, hyper detail, hdr, hyper detail.*
- *Drone view of waves crashing against the rugged cliffs along Big Sur's Garay Point beach. The crashing blue waters create white-tipped waves, while the golden light of the setting sun illuminates the rocky shore. A small island with a lighthouse sits in the distance, and green shrubbery covers the cliff's edge. The steep drop from the road down to the beach is a dramatic feat, with the cliff's edges jutting out over the sea. 
This is a view that captures the raw beauty of the coast and the rugged landscape of the Pacific Coast Highway. - *Image of a jade green and gold coloured Fabergé egg, 16k resolution, highly detailed, product photography, trending on artstation, sharp focus, studio photo, intricate details, fairly dark background,* perfect lighting, perfect composition, sharp features, Miki Asai Macro photography, close-up, hyper detailed, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski. - *Astronaut in a jungle, cold color palette, muted colors, detailed, 8k.* ## D.2 Prompts Of Figure 3 Prompts of Figure 3, from left to right: - *A small cactus with a happy face in the Sahara desert*; - A delicate apple(universe of stars inside the apple) made of opal hung on a branch in the early morning light, adorned with glistening dewdrops. in the background beautiful valleys, divine iridescent glowing, opalescent textures, volumetric light, ethereal, sparkling, light inside body, bioluminescence, studio photo, highly detailed, sharp focus, photorealism, photorealism, 8k, best quality, ultra detail, hyper detail, hdr, hyper detail; - *Drone view of waves crashing against the rugged cliffs along Big Sur's Garay Point beach. The* crashing blue waters create white-tipped waves, while the golden light of the setting sun illuminates the rocky shore. A small island with a lighthouse sits in the distance, and green shrubbery covers the cliff's edge. The steep drop from the road down to the beach is a dramatic feat, with the cliff's edges jutting out over the sea. 
This is a view that captures the raw beauty of the coast and the rugged landscape of the Pacific Coast Highway;

- *Astronaut in a jungle, cold color palette, muted colors, detailed, 8k*;
- *A parrot with a pearl earring, Vermeer style*;

## D.3 Prompts For Figure 3

- prompt for first row of Figure 3: *A small cactus with a happy face in the Sahara desert.*
- prompt for second row of Figure 3: *An image of a jade green and gold coloured Fabergé egg, 16k resolution, highly detailed, product photography, trending on artstation, sharp focus, studio photo, intricate details, fairly dark background, perfect lighting, perfect composition, sharp features, Miki Asai Macro photography, close-up, hyper detailed, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski.*
- prompt for third row of Figure 3: *Baby playing with toys in the snow.*

## D.4 Prompts For Figure 4

The answer for the left three columns:

- first row, from left to right: the one-step model (4.5 CFG and 1.0 reward); the PixelArt-α diffusion with 30 generation steps; the one-step model (4.5 CFG and 10.0 reward);
- second row, from left to right: the PixelArt-α diffusion with 30 generation steps; the one-step model (4.5 CFG and 10.0 reward); the one-step model (4.5 CFG and 1.0 reward);
- third row, from left to right: the PixelArt-α diffusion with 30 generation steps; the one-step model (4.5 CFG and 1.0 reward); the one-step model (4.5 CFG and 10.0 reward);

The prompts from top to bottom are:

- *A dog that has been meditating all the time*;
- Drone view of waves crashing against the rugged cliffs along Big Sur's Garay Point beach. The crashing blue waters create white-tipped waves, while the golden light of the setting sun illuminates the rocky shore. A small island with a lighthouse sits in the distance, and green shrubbery covers the cliff's edge. 
The steep drop from the road down to the beach is a dramatic feat, with the cliff's edges jutting out over the sea. This is a view that captures the raw beauty of the coast and the rugged landscape of the Pacific Coast Highway; - A delicate apple(universe of stars inside the apple) made of opal hung on a branch in the early morning light, adorned with glistening dewdrops. in the background beautiful valleys, divine iridescent glowing, opalescent textures, volumetric light, ethereal, sparkling, light inside the body, bioluminescence, studio photo, highly detailed, sharp focus, photorealism, photorealism, 8k, best quality, ultra detail, hyper detail, hdr, hyper detail.
# The Cross-Entropy Of Piecewise Linear Probability Density Functions

Tom S. F. Haines tsfh20@bath.ac.uk
Department of Computer Science
University of Bath

Reviewed on OpenReview: *https://openreview.net/forum?id=AoOi9Zgdsv*

## Abstract

The cross-entropy and its related terms from information theory (e.g. entropy, Kullback–Leibler divergence) are used throughout artificial intelligence and machine learning. This includes many of the major successes, both current and historic, where they commonly appear as the natural objective of an optimisation procedure for learning model parameters, or their distributions. This paper presents a novel derivation of the differential cross-entropy between two 1D probability density functions represented as piecewise linear functions. Implementation challenges are resolved and experimental validation is presented, including a rigorous analysis of accuracy and a demonstration of using the presented result as the objective of a neural network. Previously, cross-entropy would need to be approximated via numerical integration, or equivalent, for which calculating gradients is impractical. Machine learning models with high parameter counts are optimised primarily with gradients, so if piecewise linear density representations are to be used then the presented analytic solution is essential. This paper contributes the necessary theory for the practical optimisation of information theoretic objectives when dealing with piecewise linear distributions directly. Removing this limitation expands the design space for future algorithms.

## 1 Introduction

Information theory (Shannon, 1948) provides a mathematical toolbox that, while originally motivated by the problem of communication over noisy channels, has proved essential to artificial intelligence. The information theoretic terms that can be obtained from cross-entropy often appear within training objectives for machine learning (ML), e.g. 
information gain for random forests (Sethi & Sarvarayudu, 1982). They are integral to the evidence lower bound (ELBO) in variational inference (Jordan et al., 1999), as used by techniques such as variational autoencoders (Kingma & Welling, 2014) and diffusion models (Sohl-Dickstein et al., 2015). Cross-entropy loss (Hinton et al., 1995) is often preferred over the classical mean squared error (Prince, 2023) for neural networks, e.g. in transformers (Devlin et al., 2019). It also plays a part in theory, e.g. the maximum entropy principle for selecting distributions of Jaynes (1957). In practice the uses of information theory are relatively simple. Most algorithms use Monte Carlo integration (Metropolis et al., 1953) to calculate the mismatch between data and model, with the training set acting as a fixed sample from the target distribution. It is regularly used as a loss function for discrete classification, but in the context of continuous regression alternate objectives are common, as the differential cross-entropy of the error distribution is hard to calculate in general. Variational methods often limit themselves to the exponential family (Wainwright & Jordan, 2008), for which analytic expressions are known. This motivates this paper: adding to the list of probabilistic objects with a known cross-entropy increases the design space available for practical algorithms. For the purpose of open access the author has applied a Creative Commons Attribution (CC-BY) licence. A complete implementation, including code to generate the included figures, is in the supplementary material and also available from https://github.com/thaines/orogram. There is no data. This research made use of Hex, the GPU Cloud in the Department of Computer Science at the University of Bath. 
This paper's main novel contribution is an analytic expression for the cross-entropy between two 1D piecewise linear probability density functions (PDFs), $$H(P,Q)=-\sum_{i}\delta_{i}\left[\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}-\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\left(\log(q_{i+1})-\log(q_{i})\right)\right.\tag{1}$$ $$\left.-\frac{(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}}{4(q_{i+1}-q_{i})}\right].$$ It sums over every linear segment, indexed by i, with pi the PDF P evaluated at the start of the segment and pi+1 evaluated at the end, these being *change points*. The same relationship holds for q and Q while δi is the width of segment i. Entropy (= H(*P, P*)) and the Kullback–Leibler divergence (Kullback & Leibler, 1951) (= H(P, Q) − H(*P, P*)) follow immediately and are also novel contributions. This result supports the future development of ML algorithms that use piecewise linear PDFs. Potential advantages include support for step changes and multi-modality, which are poorly supported by the exponential family. This could be particularly valuable within the context of variational inference (Jordan et al., 1999), where cross-entropy appears as the objective. Optimal transport (Bonneel et al., 2016) is also trivial with this representation. Within the context of neural networks, and gradient-based optimisation in general, piecewise linear PDFs can be included only if taking their derivative is practical1. Doing so with the presented result is computationally efficient, while using an alternative built around a technique such as numerical integration is inefficient to the point of being implausible. Whatever their use, piecewise linear PDFs can have an arbitrary and tunable parameter count, allowing greater expressiveness relative to the typical text book distributions, and are simpler than many nonparametric approaches. 
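Equation (1) can be implemented directly. The sketch below (function and variable names are ours) assumes strictly positive densities with q_i ≠ q_{i+1} on every segment; the equal-endpoint case needs its limit handled separately, as part of the implementation challenges the paper resolves.

```python
import math

def piecewise_cross_entropy(xs, ps, qs):
    """Differential cross-entropy H(P, Q) via equation (1), for two piecewise
    linear PDFs sharing the change points xs. Assumes q > 0 everywhere and
    q_i != q_{i+1} on every segment (the limit case is excluded here)."""
    h = 0.0
    for i in range(len(xs) - 1):
        delta = xs[i + 1] - xs[i]  # width of segment i
        p0, p1, q0, q1 = ps[i], ps[i + 1], qs[i], qs[i + 1]
        term1 = (p0 * math.log(q0) + p1 * math.log(q1)) / 2
        term2 = ((p1 * q0 ** 2 - p0 * q1 ** 2) / (2 * (q1 - q0) ** 2)
                 * (math.log(q1) - math.log(q0)))
        term3 = ((3 * p0 + p1) * q1 - (p0 + 3 * p1) * q0) / (4 * (q1 - q0))
        h -= delta * (term1 - term2 - term3)
    return h

# symmetric tent-like density on [0, 2] with a positive floor; total area = 1
xs, ps = [0.0, 1.0, 2.0], [0.25, 0.75, 0.25]
H = piecewise_cross_entropy(xs, ps, ps)  # entropy = H(P, P)
```

For this example H(P, P) ≈ 0.6504 nats, which matches integrating −p(x) log p(x) over each linear segment by hand, and H(P, Q) against any other piecewise linear Q on the same change points exceeds it, as Gibbs' inequality requires.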
It remains the case that using numerical integration is simpler and will work for many applied problems, but in an AI/ML context, where cross-entropy is often an optimisation objective, there is a need for an analytical (and differentiable) result. Numerical integration is also typically slow, so the analytic result may offer a speed advantage even for straightforward problems. Related work follows, with the derivation of the result in Section 3. This is followed by numerical validation in Section 4, then a demonstration and, finally, a conclusion.

## 2 Related Work

A history of piecewise linear PDFs is presented, building up to various uses within AI/ML. Quadrature based approaches are discussed briefly.

The triangular distribution (Simpson, 1755; Johnson, 1997), arguably the simplest, is the only piecewise linear density function for which an information theoretic calculation appears to be available: differential entropy (Lazo & Rathie, 1978). This result adapts to the trapezoidal distribution (René van Dorp & Kotz, 2003). A histogram (Pearson, 1895) can be a piecewise linear PDF if it represents binned continuous data and area has been normalised. Its cross-entropy is trivial. While suitable for many problems it has no gradient, e.g. you can't immediately train a neural network to output data with a specific distribution if that distribution is represented by a histogram, because infinitesimal changes to data position leave the cross-entropy unchanged.

The frequency polygon (Scott, 1985b) connects the bin centres of a regular histogram, and is probably the earliest example of constructing a general piecewise linear PDF2. This is only consistent for a regular histogram however, because otherwise the maximum likelihood solutions differ between the two representations. History appears to have omitted or forgotten the obvious correction for this.
A variant is the edge frequency polygon (Jones et al., 1998), which connects the midpoints between bins (half way between bin heights at the edges between them) with linear segments. It confers no advantage over the frequency polygon, having the same convergence in a mean squared error sense. This convergence is matched by the kernel density estimate (Rosenblatt, 1956; Parzen, 1962), which generates piecewise linear PDFs if a piecewise linear kernel is used, such as a triangle. Alternatively, averaging many histograms with different origins (Scott, 1985a), such that the bins are misaligned, achieves a comparable effect with computational advantage. It is piecewise linear though has an excessive number of change points. Another approach is to take a kernel density estimate with any kernel and then fit a linear representation to it (Lin et al., 2006); this is an approximation and requires a finite or truncated kernel.

Some approaches go directly to a piecewise linear representation. Beirlant et al. (1999) fit a linear segment to each bin of a histogram; this does result in discontinuities. Alternatively, Karlis & Xekalaki (2008) fit a mixture of triangular distributions, which generates an arbitrary polygon without discontinuities; they refer to it as the "polygonal distribution". A least squares approach with the segments fixed but their heights allowed to vary has been proposed (Wielen & Wielen, 2015), as has a maximum likelihood estimator (Nguyen & McLachlan, 2016). The segments remain connected for both. Nguyen & McLachlan (2018) have shown that the maximum likelihood estimator is consistent.

1Of relevance here, the gradient which connects data point position to the cross-entropy of their distribution.

2Earlier mentions exist but lack detail, e.g. Pearson in 1925 (Tarter & Kronmal, 1976).
Perron & Mengersen (2001) take a non-parametric Bayesian approach, estimating a non-decreasing function as the cumulative distribution function of a mixture of triangular distributions. This approach models an explicit Poisson draw of the mixture count followed by a Dirichlet over membership, using reversible jump MCMC for inference. Alternatively, Ho et al. (2017) use a Dirichlet process prior (Ferguson, 1973) for doing a Bayesian density estimate with a mixture of triangular distributions. They are motivated by the problem of making a Bayesian estimate of an unknown distribution's mode and utilise a Gibbs sampler with the stick breaking construction (Sethuraman, 1994).

Beyond explicit models built around piecewise linear PDFs there are also incidental uses within AI. Numerous models output histograms as discrete representations of continuous values. As an example, stereo algorithms for rectified images output disparity; examples utilising dynamic programming (Cox et al., 1996), belief propagation (Felzenszwalb & Huttenlocher, 2006), graph cuts (Boykov et al., 2001) and convolutional neural networks (Žbontar & LeCun, 2015) exist. Finally, as commonly taught when introducing neural networks (Prince, 2023), a network that only uses rectified linear units (ReLU) (Fukushima, 1975) generates a multivariate piecewise linear function.

For the purpose of verification numerical integration (Gibb, 1916) and Monte Carlo integration (Metropolis et al., 1953) are used in Section 4. These are the main alternatives to the presented analytic approach if only calculation is required. Other choices exist, primarily those based on quadrature (Gonnet, 2012). This requires a family of functions for which the relevant information theoretic terms can be calculated. For PDFs the Gram-Charlier/Edgeworth series (Wallace, 1958) are commonly chosen, but you can also evaluate the integrand directly with a more general technique (Place & Stach, 1999).
Hyvärinen (1997) has proposed a specific quadrature scheme designed for calculating differential entropy. While not the focus of this paper, the equation contributed does enable 1D quadrature with piecewise linear approximations for cross-entropy. Examples of these approximations being used within machine learning include independent component analysis (Jutten & Herault, 1991) and projection pursuit (Huber, 1985).

## 3 Derivation

The cross-entropy will now be derived. Note that, much like entropy has to define 0 log(0) = 0, similar issues occur. In the interest of clarity these are considered after the initial derivation. Differential cross-entropy is defined as (Jaynes, 1963)

$$H(P,Q)=-\int P(x)\log Q(x)dx,\tag{2}$$

where P and Q are two PDFs and x is integrated over their domain. Taking P and Q to be piecewise linear and 1D we can consider the integral to be the sum of many linear sections,

$$-\sum_{i}\delta_{i}\int_{0}^{1}((1-t)p_{i}+t p_{i+1})\log((1-t)q_{i}+t q_{i+1})d t,\tag{3}$$

where $p_i$ is P evaluated at the start of section $i$ and $p_{i+1}$ is P evaluated at the end; likewise for $q$ and Q. $\delta_i$ is the section width, $x_{i+1} - x_i$. If the linear segments of P and Q are not in alignment then extra change points can be added.
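The alignment step, adding change points so that every segment of one PDF is also linear in the other, is straightforward with numpy; `align` is a hypothetical helper name and a shared domain is assumed:

```python
import numpy as np

def align(xp, p, xq, q):
    """Resample two piecewise linear PDFs onto the union of their change
    points, so every linear segment of one is also linear in the other (the
    alignment assumed by Equation 3). Assumes both share the same domain."""
    x = np.union1d(xp, xq)                        # merged, sorted change points
    return x, np.interp(x, xp, p), np.interp(x, xq, q)
```

Since both PDFs are piecewise linear, evaluating them at the merged knots with linear interpolation preserves them exactly.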
Consider a single section, $i$,

$$=-\delta_{i}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\log((1-t)q_{i}+tq_{i+1})dt.\tag{4}$$

Define

$$\Delta p_{i}=p_{i+1}-p_{i},\quad\Delta q_{i}=q_{i+1}-q_{i},\tag{5}$$

and introduce

$$\hat{q}=(1-t)q_{i}+tq_{i+1}=q_{i}+\Delta q_{i}t,\tag{6}$$

then simplify Equation 4 with a change of variables,

$$=-\frac{\delta_{i}}{\Delta q_{i}}\int_{q_{i}}^{q_{i+1}}\left(p_{i}+\frac{\Delta p_{i}}{\Delta q_{i}}(\hat{q}-q_{i})\right)\log(\hat{q})d\hat{q}.\tag{7}$$

Separate the two integral forms,

$$=-\frac{\delta_{i}}{\Delta q_{i}}\left\{\left(p_{i}-\frac{\Delta p_{i}}{\Delta q_{i}}q_{i}\right)\int_{q_{i}}^{q_{i+1}}\log(\hat{q})d\hat{q}+\frac{\Delta p_{i}}{\Delta q_{i}}\int_{q_{i}}^{q_{i+1}}\hat{q}\log(\hat{q})d\hat{q}\right\},\tag{8}$$

and slot in solutions to both,

$$=-\frac{\delta_{i}}{\Delta q_{i}}\left\{\left(p_{i}-\frac{\Delta p_{i}}{\Delta q_{i}}q_{i}\right)\left(q_{i+1}\left\{\log(q_{i+1})-1\right\}-q_{i}\left\{\log(q_{i})-1\right\}\right)\right.$$
$$\left.+\frac{\Delta p_{i}}{\Delta q_{i}}\left(q_{i+1}^{2}\left\{\frac{\log(q_{i+1})}{2}-\frac{1}{4}\right\}-q_{i}^{2}\left\{\frac{\log(q_{i})}{2}-\frac{1}{4}\right\}\right)\right\},\tag{9}$$

then rearrange to obtain

$$=-\frac{\delta_{i}}{(\Delta q_{i})^{2}}\left\{\underbrace{(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})\left(q_{i+1}\log(q_{i+1})-q_{i}\log(q_{i})\right)}_{①}+\underbrace{\frac{\Delta p_{i}}{2}\left(q_{i+1}^{2}\log(q_{i+1})-q_{i}^{2}\log(q_{i})\right)}_{②}\right.$$
$$\left.+\underbrace{(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})(q_{i}-q_{i+1})}_{③}+\underbrace{\frac{\Delta p_{i}}{4}\left(q_{i}^{2}-q_{i+1}^{2}\right)}_{④}\right\}.\tag{10}$$

Separate out the first two terms within the curly brackets, ① and ②,

$$\underbrace{(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})\left(q_{i+1}\log(q_{i+1})-q_{i}\log(q_{i})\right)}_{①}=p_{i+1}q_{i}^{2}\log(q_{i})+p_{i}q_{i+1}^{2}\log(q_{i+1})-q_{i}q_{i+1}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})),\tag{11}$$

$$\underbrace{\frac{\Delta p_{i}}{2}\left(q_{i+1}^{2}\log(q_{i+1})-q_{i}^{2}\log(q_{i})\right)}_{②}=-\frac{p_{i+1}q_{i}^{2}\log(q_{i})}{2}-\frac{p_{i}q_{i+1}^{2}\log(q_{i+1})}{2}+\frac{p_{i+1}q_{i+1}^{2}\log(q_{i+1})}{2}+\frac{p_{i}q_{i}^{2}\log(q_{i})}{2}.\tag{12}$$

Introduce

$$\frac{1}{2}(q_{i+1}-q_{i})^{2}=\frac{q_{i+1}^{2}}{2}+\frac{q_{i}^{2}}{2}-q_{i}q_{i+1},\tag{13}$$

and use it to merge ① and ②, to obtain

$$\frac{p_{i+1}q_{i}^{2}\log(q_{i})}{2}+\frac{p_{i}q_{i+1}^{2}\log(q_{i+1})}{2}+\frac{1}{2}(q_{i+1}-q_{i})^{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1}))-\frac{p_{i}q_{i+1}^{2}\log(q_{i})}{2}-\frac{p_{i+1}q_{i}^{2}\log(q_{i+1})}{2},\tag{14}$$

where the last two terms correct for the mismatch introduced by substituting Equation 13. Simplify to get

$$\frac{p_{i}q_{i+1}^{2}-p_{i+1}q_{i}^{2}}{2}(\log(q_{i+1})-\log(q_{i}))+\frac{1}{2}(\Delta q_{i})^{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})).\tag{15}$$

Now rearrange ③ and ④ from Equation 10,

$$\underbrace{(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})(q_{i}-q_{i+1})}_{③}+\underbrace{\frac{\Delta p_{i}}{4}\left(q_{i}^{2}-q_{i+1}^{2}\right)}_{④}=-\frac{\Delta q_{i}}{4}\left[(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}\right],\tag{16}$$
and bring all of the terms from Equation 10 back together to restate the equation for a single segment,

$$=-\frac{\delta_{i}}{(\Delta q_{i})^{2}}\left\{\frac{p_{i}q_{i+1}^{2}-p_{i+1}q_{i}^{2}}{2}(\log(q_{i+1})-\log(q_{i}))+\frac{1}{2}(\Delta q_{i})^{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1}))\right.$$
$$\left.-\frac{\Delta q_{i}}{4}\left[(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}\right]\right\},\tag{17}$$

which rearranges to

$$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\left(\log(q_{i+1})-\log(q_{i})\right)+\delta_{i}\frac{(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}}{4(q_{i+1}-q_{i})},\tag{18}$$

completing the derivation needed for Equation 1.

## 3.1 Singularity

As expected, 0 log(0) = 0 has to be specified for Equation 1 to work, i.e. this is Lebesgue integration. However, there is also a singularity when $q_i = q_{i+1}$. To ignore the second and third terms of Equation 18 when $q_{i+1} = q_i$ it must be the case that

$$\lim_{q_{i+1}\to q_{i}}\left[\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\left(\log(q_{i+1})-\log(q_{i})\right)+\delta_{i}\frac{(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}}{4(q_{i+1}-q_{i})}\right]=0.\tag{19}$$

The two terms have to be considered simultaneously for this to be the case. Use the series (Olver et al., 2010, Equation 4.6.4),

$$\log(a)=2\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\left(\frac{a-1}{a+1}\right)^{n},\tag{20}$$

to convert the first term of the limit into an infinite sequence,

$$\delta_{i}(p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2})\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\frac{(q_{i+1}-q_{i})^{n-2}}{(q_{i+1}+q_{i})^{n}},\tag{21}$$

and note that the limit is trivially true for n = 3 onwards.
Put the n = 1 term only back into the limit (Equation 19),

$$\lim_{q_{i+1}\to q_{i}}\left[\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{(q_{i+1}-q_{i})(q_{i+1}+q_{i})}+\delta_{i}\frac{(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}}{4(q_{i+1}-q_{i})}\right],\tag{22}$$

and rearrange to obtain

$$\lim_{q_{i+1}\to q_{i}}\left[\delta_{i}\frac{(p_{i+1}-p_{i})(q_{i+1}-q_{i})}{4(q_{i+1}+q_{i})}\right],\tag{23}$$

which is simply zero, satisfying the requirement.

![5_image_0.png](5_image_0.png)

Figure 1: Zoomed in plots of the cross-entropy of a linear segment, showing the error as the singularity is approached. The number top left is the scale of the y axis, bottom right the scale of the x axis. Calculated with double precision. For both plots $p_0 = p_1 = 0.1$ and $\delta_0 = 1$; for the left plot $\frac{q_0+q_1}{2} = 0.1$ while for the right $\frac{q_0+q_1}{2} = 0.4$. The x axis shows the difference between the two q values, i.e. they cross over such that the singularity is in the middle and the mean remains constant. This demonstrates how switching from Equation 18 to 24 only needs to occur when within $10^{-5}$ of the singularity, with this range increasing with q. To obtain this precision with numerical integration required $2^{24}$ samples with averages organised over three levels to avoid underflow; it is over 90000× slower than the stable equation.

## 3.2 Implementation

Computing cross-entropy with Equation 18 is numerically stable when a safe distance from the singularity, but it becomes unstable due to the limits of floating point operations (IEEE, 2019) when too close. A stable everywhere alternative is

$$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{(p_{i+1}-p_{i})(q_{i+1}-q_{i})}{4(q_{i+1}+q_{i})}+\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{(q_{i+1}+q_{i})^{2}}\sum_{n\in\{1,3,\ldots\}}\frac{1}{n+2}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{n},\tag{24}$$

which is obtained from the limit calculation above.
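Per segment, the switch between the direct form (Equation 18) and the series form (Equation 24) can be sketched as follows; `segment_ce` is a hypothetical name and the thresholds are illustrative, not the supplementary implementation:

```python
import numpy as np

def segment_ce(d, p0, p1, q0, q1, eps=1e-5, tol=1e-64):
    """Cross-entropy contribution of one linear segment of width d, using the
    direct form (Equation 18) away from the singularity and the series form
    (Equation 24) within eps of q0 == q1. A sketch only."""
    base = -d * (p0 * np.log(q0) + p1 * np.log(q1)) / 2
    if abs(q1 - q0) > eps:
        # direct form, Equation 18
        return base \
            + d * (p1*q0**2 - p0*q1**2) / (2*(q1 - q0)**2) * (np.log(q1) - np.log(q0)) \
            + d * ((3*p0 + p1)*q1 - (p0 + 3*p1)*q0) / (4*(q1 - q0))
    # series form, Equation 24: add odd-power terms until they drop below tol
    r = (q1 - q0) / (q1 + q0)
    s, n, rn = 0.0, 1, r
    while True:
        term = rn / (n + 2)
        s += term
        if abs(term) < tol:
            break
        n += 2
        rn *= r * r
    return base + d * (p1 - p0) * (q1 - q0) / (4 * (q1 + q0)) \
        + d * (p1*q0**2 - p0*q1**2) / (q1 + q0)**2 * s
```

Forcing the series branch at a moderate separation (where the direct form is accurate) confirms the two expressions agree, and exactly at the singularity the function reduces to the first term of Equation 24.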
The infinite series converges quite slowly; use Equation 20 and assume, without loss of generality, that $q_{i+1} \geq q_i$, hence

$$0\leq\sum_{n\in\{1,3,5,\ldots\}}{\frac{1}{n+r}}\left({\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}}\right)^{n}<{\frac{1}{2}}\log\left({\frac{q_{i+1}}{q_{i}}}\right),\tag{25}$$

where r > 0, and r = 2 gets a bound on Equation 24. If ϵ is the error after running to n = N − 2, inclusive, then

$$\epsilon=\sum_{n\in\{N,N+2,N+4,\ldots\}}\frac{1}{n+2}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{n}=\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{N-1}\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n+N+1}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{n},\tag{26}$$

such that

$$0\leq\epsilon<\frac{1}{2}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{N-1}\log\left(\frac{q_{i+1}}{q_{i}}\right).\tag{27}$$

In practice $\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}$ is almost always small and the bound is quite loose3, so you obtain multiple bits of precision with each iteration. Using a basic Python/numpy implementation Equation 24 is 40× slower than Equation 18; the preference is to use the fast equation where possible. This comparison is obtained using an implementation that keeps calculating terms within the summand in Equation 24 until one evaluates as less than $10^{-64}$. Figure 1 explores where the transition between approaches should occur: within $|q_0 - q_1| \leq 10^{-5}$ is the rule selected for the implementation in the supplementary material. A more complex rule may make sense depending on the hardware/likelihood of being close to the singularity.

3It can be made tighter with polylogarithms.

![6_image_0.png](6_image_0.png)

![6_image_1.png](6_image_1.png)

Figure 2: The standard Gaussian (µ = 0, σ² = 1), represented as a piecewise linear PDF and hence truncated. Which, due to how curve rendering works in the digital realm, is identical to rendering a standard Gaussian without explicitly constructing a piecewise linear PDF.
The piecewise linear approximation was made by matching the area under the linear segments with the area under the Gaussian (computed via its CDF), with a regular spacing.

Figure 3: The Kullback–Leibler divergence, calculated via cross-entropy as H(P, Q) − H(P, P), of two standard width Gaussian distributions as their means are varied. The x axis shows the difference between the means. Four approaches have been used to calculate this result: the known solution ("*True*"), and then three approaches based on a piecewise linear approximation, with "*Analytic*" the presented approach. The lines all overlap.

## 3.3 GPU Implementation

Computing the infinite series of Equation 24 on a GPU is not reasonable: the loop would have to be run until all of the parallelised values have converged, wasting computation, plus reverse mode automatic differentiation will struggle. Fortunately, this stable version only has to be used when $|q_{i+1} - q_i|$ is sufficiently small that the infinite series obtains float (32 bit) precision with the first two terms only. This means in practice that

$$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{(p_{i+1}-p_{i})(q_{i+1}-q_{i})}{4(q_{i+1}+q_{i})}\\+\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{(q_{i+1}+q_{i})^{2}}\left[\frac{1}{3}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)+\frac{1}{5}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{3}\right]\tag{28}$$

is used for computation on a GPU when in the $q_{i+1} \approx q_i$ situation. Approximation will still occur where an infinity or large value is expected, but as these break optimisation this proves convenient.
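The two-term truncation can be checked against the direct form (Equation 18) at a moderate separation, where both are accurate; `ce_segment_near_singularity` is a hypothetical name for this sketch:

```python
import numpy as np

def ce_segment_near_singularity(d, p0, p1, q0, q1):
    """Two-term truncated series (Equation 28) for a segment of width d with
    q0 close to q1; accurate to float precision when |q1 - q0| is small."""
    r = (q1 - q0) / (q1 + q0)
    return -d * (p0*np.log(q0) + p1*np.log(q1)) / 2 \
        + d * (p1 - p0) * (q1 - q0) / (4 * (q1 + q0)) \
        + d * (p1*q0**2 - p0*q1**2) / (q1 + q0)**2 * (r/3 + r**3/5)
```

At $|q_1 - q_0| = 10^{-3}$ this already agrees with the direct form to well beyond float precision, while the direct form is itself beginning to lose accuracy from the cancellation visible in Figure 1.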
Finally, due to the inefficiency of branching on a GPU a fourth variant of the result proves useful when $|q_{i+1}-q_{i}|$ is large,

$$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{(p_{i+1}-p_{i})(q_{i+1}-q_{i})}{4(q_{i+1}+q_{i})}+\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\left[\log(q_{i+1})-\log(q_{i})\right]\\-\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{(q_{i+1}+q_{i})(q_{i+1}-q_{i})}.\tag{29}$$

This can be understood as the main result, Equation 18, rewritten to share as many terms as possible with Equation 28, to minimise the code that appears within branches. Both of these versions are compatible with standard automatic differentiation libraries; for completeness an example implementation is included in Appendix B, as used for the demonstrations in Section 5.

![7_image_0.png](7_image_0.png)

Figure 4: The top left graph shows a mixture distribution: a uniform distribution on the left and a triangular distribution on the right, with equal probability, plus a (low probability) wide Gaussian to avoid infinities in the following, converted into a piecewise linear PDF. As shown by the bottom row, the top left graph is showing t = 0: at t = 1 the cube and triangle have switched places, with them moving linearly for the transition, e.g. at t = 0.5 the cube is wearing a hat. The top right graph plots the cross-entropy, H(P, Q), where P is always at t = 0 while Q varies from t = 0 to t = 1. At the right we have the entropy of the top graph, in the centre the shapes are poorly aligned, giving the highest cross-entropy, then, when they have swapped places, it's a better match, if imperfect. The analytic solution is shown alongside numerical and Monte Carlo integration, for verification.

## 4 Numerical Validation

Three demonstrations have been selected to provide numerical validation.
In all cases numerical integration (Gibb, 1916) and Monte Carlo integration (Metropolis et al., 1953) have been used to verify the result. Operations have been done with float (32 bit) precision for the presented approach, consistent with the precision commonly used on GPUs. For the first two demonstrations there is also a direct solution ("*True*"), because they use Gaussian distributions, but note that the Gaussians have to be converted to piecewise linear PDFs, introducing some error that is then reflected by the proposed approach ("*Analytic*") as well as both of the sampling integrators. The conclusion throughout is that the presented approach is accurate.

First, Figure 2 shows a truncated standard Gaussian distribution (Gauss, 1823). It has been represented as a piecewise linear PDF: the area under each linear section was matched to the Gaussian and then it was renormalised, to account for the truncation. The entropy of the standard Gaussian is known to be ∼1.4189385. The presented approach gives 1.4189711, noting that some variation is expected due to the truncation and linear approximation. Numerical integration gives 1.4189712, identical at float precision. Monte Carlo integration gives 1.4191646, but it uses the same number of samples as numerical integration ($2^{24}$), which is not enough to match the precision; it's a poor choice of integrator for a 1D function.

Figure 3 shows a sweep of the Kullback–Leibler divergence between two standard width Gaussians as the delta between their means varies. As before there is a known equation for this ("*True*"), which is again expected to not match exactly due to the distributions being represented with linear sections. The maximum difference between numerical integration with $2^{24}$ samples and the analytic solution is 0.000594; increasing the numerical integration sample count made no further difference. Based on the results elsewhere it is reasonable to believe that numerical integration is converging poorly.
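A rough rerun of the first validation can be sketched by sampling the Gaussian's density at regular change points (simpler, and slightly cruder, than the paper's area matching), renormalising the truncated polygon, and evaluating its entropy $H(P, P)$ with Equation 1:

```python
import numpy as np

# Truncated piecewise linear approximation of the standard Gaussian, built by
# evaluating the density at regular change points (not the paper's area
# matching, so less accurate). The slightly asymmetric domain avoids any
# segment with exactly equal densities at both ends.
x = np.linspace(-5.05, 5.0, 202)
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
p /= np.sum((p[:-1] + p[1:]) / 2 * np.diff(x))   # renormalise to unit area

# Entropy H(P, P) via Equation 1 with q = p; compare with ~1.4189385.
d = np.diff(x)
p0, p1 = p[:-1], p[1:]
entropy = -np.sum(d * ((p0*np.log(p0) + p1*np.log(p1)) / 2
                       - (p1*p0**2 - p0*p1**2) / (2*(p1 - p0)**2)
                         * (np.log(p1) - np.log(p0))
                       - ((3*p0 + p1)*p1 - (p0 + 3*p1)*p0) / (4*(p1 - p0))))
```

With this coarse 201-segment polygon the result lands close to the known Gaussian entropy; the residual is down to truncation and the linear approximation, as in the paper's experiment.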
Finally, Figure 4 shows the cross-entropy directly, calculated for a sweep as two parts of a mixture distribution swap position. This reflects the more arbitrary distributions that justify the use of a piecewise linear distribution. There is no known equation for this, but the maximum difference observed between numerical integration with $2^{24}$ samples and the presented analytic approach is $5.8 \times 10^{-6}$.

![8_image_0.png](8_image_0.png)

Figure 6: On the left is the initial distribution of a draw of 1024 points from a standard Gaussian. A subset of the points have been shown with their gradients as arrows; their vertical positioning has been randomised to reduce visual overlap. The two bumps, from the central region of a rectified sine curve (normalised), constitute the target of the optimisation. On the right is the result, where the points have been moved to broadly match the goal distribution using Nesterov's accelerated gradient descent.

## 5 Demonstration

One use case of cross-entropy is as an objective for continuous optimisation. This is demonstrated in Figure 5, where a 1D set of points have been sampled from P(x) and are being moved towards matching the distribution Q(x), both represented as piecewise linear probability density functions. The gradients of the points, as represented by arrows, are calculated in terms of the Kullback–Leibler (KL) divergence between the two distributions, as defined in terms of the cross-entropy given in Equation 1 with $D_{KL}(P \parallel Q) = H(P,Q) - H(P,P)$. The piecewise linear distribution of the points is constructed similarly to a histogram, except point mass is linearly interpolated between bin centres; this generates a piecewise linear distribution (equivalent to a mixture of triangular distributions) where point positions have a gradient relative to the bin heights. Gradients have been calculated using reverse mode automatic differentiation (Linnainmaa, 1976).
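The linear-interpolation binning just described can be sketched with numpy; `linear_binning` is a hypothetical name, and regularly spaced centres covering the data are assumed:

```python
import numpy as np

def linear_binning(points, centres):
    """Density estimate where each point's unit mass is split linearly
    between the two nearest bin centres (a mixture of triangles), then the
    heights are normalised so the polygon has unit area. Assumes regularly
    spaced centres that cover the data."""
    step = centres[1] - centres[0]
    idx = np.clip(((points - centres[0]) / step).astype(int), 0, len(centres) - 2)
    frac = (points - centres[idx]) / step          # position within the bin
    heights = np.zeros(len(centres))
    np.add.at(heights, idx, 1.0 - frac)            # mass to the left centre
    np.add.at(heights, idx + 1, frac)              # mass to the right centre
    area = np.sum((heights[:-1] + heights[1:]) / 2 * step)
    return heights / area
```

In an autodiff framework the same construction, written with differentiable scatter operations, gives point positions a gradient relative to the bin heights, which is what the demonstration relies on.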
The central region of Figure 5 has no gradient because both distributions are uniform, while at the edges the points are being moved inwards, to where Q(x) has mass. On the right hand side a set of points with no gradient can be observed; this is because their segment and the adjacent segment both lack probability mass, i.e. Q(x) = 0. Note however that a gradient still exists for Q(x) = 0 segments when they are adjacent to segments where Q(x) ≠ 0. It is therefore necessary to merge adjacent zero probability segments to ensure a gradient exists everywhere.

The ability to calculate gradients relative to an objective enables continuous optimisation. This is demonstrated in Figure 6, where a draw from one distribution is optimised (moved) to match with another. Nesterov's accelerated gradient descent (Nesterov, 1983) is used, with 2048 iterations reducing the KL-divergence from 0.740 to 0.007. In the centre, where the probability is zero, erratic behaviour can be observed. This is because the absolute gradients can get excessively large due to the zero, causing instability in this region. To avoid this the probability should be adapted to not get too low, either using a wide "*slab*" distribution or by simply hacking the values. The preceding demonstration is achievable with the inverse CDF transform, avoiding substantial complexity.

![9_image_0.png](9_image_0.png)

Figure 7: The input is the standard Gaussian distribution while the output is a mixture of assorted distributions, sampled and represented with a piecewise linear probability density function. A residual neural network is trained to convert the simple input distribution into the more complex output distribution. The "*transformed*" graph is the result of this conversion; it is generated as a density estimate of 32768 points from the input distribution after they have been passed through the trained neural network. KL-divergence has again been minimised.
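The inverse CDF transform mentioned above is simple for a piecewise linear PDF: the CDF is piecewise quadratic, so each draw solves a quadratic within its segment. A numpy sketch, with `inverse_cdf_sample` a hypothetical name and a normalised, non-negative density assumed:

```python
import numpy as np

def inverse_cdf_sample(x, p, u):
    """Map uniform(0, 1) samples u through the inverse CDF of a piecewise
    linear PDF with change points x and densities p. Within segment i the
    CDF mass at offset t is a*t^2 + b*t, so each draw solves a quadratic.
    Sketch: assumes p is normalised and positive where sampled."""
    d = np.diff(x)
    seg_mass = (p[:-1] + p[1:]) / 2 * d            # mass of each segment
    cdf = np.concatenate([[0.0], np.cumsum(seg_mass)])
    i = np.clip(np.searchsorted(cdf, u, side='right') - 1, 0, len(d) - 1)
    rem = u - cdf[i]                               # mass still needed in segment i
    a = (p[i + 1] - p[i]) / (2 * d[i])             # quadratic coefficient
    b = p[i]                                       # linear coefficient
    with np.errstate(divide='ignore', invalid='ignore'):
        t = np.where(np.abs(a) > 1e-12,
                     (-b + np.sqrt(b**2 + 4*a*rem)) / (2*a),  # sloped segment
                     rem / b)                                 # flat segment
    return x[i] + t
```

For the triangular PDF on [0, 1] with density 2x this reduces to the familiar $\sqrt{u}$ transform, and for a uniform segment it degenerates to linear rescaling.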
A more realistic use case is as the training objective of a neural network. In Figure 7 a network has been trained to distort draws from the standard Gaussian to match an arbitrary mixture, as represented with a piecewise linear distribution. In addition to the three mixture components (triangle, uniform and Gaussian) the output distribution includes a low probability uniform slab distribution to ensure stability. The network has two hidden layers of width 32, with Gaussian activations on all layers except the last, which remains linear. It is used as an offset (residual) for point positions, such that the final layer can be initialised with small values so it starts close to an identity transform. ADAM (Kingma & Ba, 2015) with 8192 iterations reduces the KL-divergence from 1.207 to 0.009. Stochastic gradient descent is used, i.e. each iteration a new sample of 256 points is drawn and pushed through the network for calculating the gradient.

## 6 Conclusion

This paper has introduced an important operation for a flexible distribution representation, including a rigorous exploration of how to implement it within practical algorithms. Consequently, piecewise linear PDF representations can be used when constructing AI/ML algorithms where, previously, it was either computationally impractical or impossible. Potential applications can be found throughout the introduction and its use as an objective for a neural network has been demonstrated. A substantially extended (more rigorous) version of the derivation is presented in Appendix A, including an alternative path for completing the derivation. The supplementary material contains a generic library for calculating the differential cross-entropy, KL-divergence and entropy of piecewise linear PDFs, alongside standard operations such that it constitutes a complete library for working with such distributions4. Within this library is code to generate all of the figures in this paper.
4This library is named *orogram*; same construction as *histogram* except the wooden posts ("*histo*") have been switched for mountains ("oro") to reflect the triangular nature of a piecewise linear PDF. ## References Jan Beirlant, Alain Berlinet, and László Györfi. On piecewise linear density estimators. *Statistica Neerlandica*, 53(3):287–308, 1999. Nicolas Bonneel, Gabriel Peyré, and Marco Cuturi. Wasserstein barycentric coordinates: Histogram regression using optimal transport. *ACM Transactions on Graphics*, 35(4), 2016. Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. *Pattern* Analysis and Machine Intelligence, 23(11):1222–1239, 2001. Ingemar J. Cox, Sunita L. Hingorani, Satish B. Rao, and Bruce M. Maggs. A maximum likelihood stereo algorithm. *Computer Vision and Image Understanding*, 63(3):542–567, 1996. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. North American Chapter of the Association for Computational Linguistics, 2019. Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient belief propagation for early vision. International Journal of Computer Vision, 70:41–54, 2006. Thomas S. Ferguson. A Bayesian analysis of some nonparametric problems. *The Annals of Statistics*, pp. 209–230, 1973. Kunihiko Fukushima. Cognitron: A self-organizing multilayered neural network. *Biological Cybernetics*, 20 (3–4):121–136, 1975. Carl F. Gauss. *Theoria combinationis observationum erroribus minimis obnoxiae*. Henricus Dieterich, 1823. David Gibb. *A Course in Interpolation and Numerical Integration for the Mathematical Laboratory*. G. Bell & Sons, Limited, 1916. Pedro Gonnet. A review of error estimation in adaptive quadrature. *Computing Surveys*, 44(4), 2012. Geoffrey E. Hinton, Peter Dayan, Brendan J. Frey, and Radford M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. 
*Science*, 268(5214):1158–1161, 1995. Chi-san Ho, Paul Damien, and Stephen Walker. Bayesian mode regression using mixtures of triangular densities. *Journal of Econometrics*, 197(2):273–283, 2017. Peter J. Huber. Projection pursuit. *The Annals of Statistics*, 13(2):435–475, 1985. Aapo Hyvärinen. New approximations of differential entropy for independent component analysis and projection pursuit. *Advances in Neural Information Processing Systems*, 10:273–279, 1997. IEEE. 754-2019 - IEEE standard for floating-point arithmetic, 2019. Edwin T. Jaynes. Information theory and statistical mechanics. *Physical review*, 106(4):620–630, 1957. Edwin T. Jaynes. Information theory and statistical mechanics (lecture). *Brandeis University Summer* Institute Lectures in Theoretical Physics, pp. 181–218, 1963. David Johnson. The triangular distribution as a proxy for the beta distribution in risk analysis. Journal of the Royal Statistical Society: Series D (The Statistician), 46(3):387–398, 1997. M. C. Jones, M. Samiuddin, A. H. Al-Harbey, and T. A. H. Maatouk. The edge frequency polygon. Biometrika, 85(1):235–239, 1998. Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. *Machine Learning*, 37:183–233, 1999. Christian Jutten and Jeanny Herault. Blind separation of sources, part i: An adaptive algorithm based on neuromimetic architecture. *Signal Processing*, 24(1):1–10, 1991. Dimitris Karlis and Evdokia Xekalaki. The polygonal distribution. *Advances in Mathematical and Statistical* Modeling, pp. 21–33, 2008. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2015. Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. *International Conference on Learning Representations*, 2014. Solomon Kullback and Richard A. Leibler. On information and sufficiency. 
*Annals of Mathematical Statistics*, 22(1):79–86, 1951. Aida C. G. Verdugo Lazo and Pushpa N. Rathie. On the entropy of continuous probability distributions. Transactions of Information Theory, 24(1):120–122, 1978. Chien-Tai Lin, Jyh-Shyang Wu, and Chia-Hung Yen. A note on kernel polygons. *Biometrika*, 93(1):228–234, 2006. Seppo Linnainmaa. Taylor expansion of the accumulated rounding error. *BIT Numerical Mathematics*, 16: 146–160, 1976. Nicholas C. Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of state calculations by fast computing machines. *Journal of Chemical Physics*, 21(6): 1087–1092, 1953. Yurii Nesterov. A method of solving a convex programming problem with convergence rate o(1/k2). Soviet Mathematics Doklady, 27:372–376, 1983. Hien D. Nguyen and Geoff J. McLachlan. Maximum likelihood estimation of triangular and polygonal distributions. *Communications in Statistics - Theory and Methods*, 102:23–36, 2016. Hien D. Nguyen and Geoff J. McLachlan. Some theoretical results regarding the polygonal distribution. Computational Statistics & Data Analysis, 47(20):5083–5095, 2018. Frank W. J. Olver, Daniel W. Lozier, Ronald F. Boisvert, and Charles W. Clark. *The NIST Handbook of* Mathematical Functions. Cambridge University Press, 2010. Emanuel Parzen. On estimation of a probability density function and mode. *Annals of Mathematical* Statistics, 33(3):1065–1076, 1962. Karl Pearson. X. Contributions to the mathematical theory of evolution. - ii. Skew variation in homogeneous material. *Philosophical Transactions of the Royal Society A*, 186:343–414, 1895. Francois Perron and Kerrie Mengersen. Bayesian nonparametric modeling using mixtures of triangular distributions. *Biometrics*, 57(2):518–528, 2001. Jerry Place and Jerry Stach. Efficient numerical integration using Gaussian quadrature. *Simulation*, 73(4): 232–238, 1999. Simon J. D. Prince. *Understanding Deep Learning*. MIT Press, 2023. 
Johan René van Dorp and Samuel Kotz. Generalized trapezoidal distributions. *Metrika*, 58:85–97, 2003.

Murray Rosenblatt. Remarks on some nonparametric estimates of a density function. *Annals of Mathematical Statistics*, 27:832–837, 1956.

David W. Scott. Averaged shifted histograms: effective nonparametric density estimators in several dimensions. *The Annals of Statistics*, 13:1024–1040, 1985a.

David W. Scott. Frequency polygons: theory and application. *Journal of the American Statistical Association*, 80(390):348–354, 1985b.

Ishwar K. Sethi and G. P. R. Sarvarayudu. Hierarchical classifier design using mutual information. *Pattern Analysis and Machine Intelligence*, 4:441–445, 1982.

Jayaram Sethuraman. A constructive definition of Dirichlet priors. *Statistica Sinica*, 4(2):639–650, 1994.

Claude E. Shannon. A mathematical theory of communication. *The Bell System Technical Journal*, 27(3):379–423, 1948.

Thomas Simpson. On the advantage of taking the mean of a number of observations, in practical astronomy. *Philosophical Transactions of the Royal Society*, 49:82–93, 1755.

Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. *International Conference on Machine Learning*, 32:2256–2265, 2015.

Michael E. Tarter and Richard A. Kronmal. An introduction to the implementation and theory of nonparametric density estimation. *The American Statistician*, 30(3):105–112, 1976.

Jure Žbontar and Yann LeCun. Computing the stereo matching cost with a convolutional neural network. *Computer Vision and Pattern Recognition*, pp. 1592–1599, 2015.

Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. *Foundations and Trends in Machine Learning*, 1(1–2):1–305, 2008.

David L. Wallace. Asymptotic approximations to distributions. *Annals of Mathematical Statistics*, 29(3):635–654, 1958.

Michael Vander Wielen and Ryan Vander Wielen.
The general segmented distribution. *Communications in Statistics - Theory and Methods*, 44(10):1994–2009, 2015.

## A Full Derivation

The process of doing the derivation was digital. Specifically, Atom⁵ was used to edit Markdown with inline LaTeX equations rendered using MathJax. Consequently, it is excessively detailed: each step was done by copying and pasting the previous line, then editing it. Below are the raw proofs; they probably double as a statement about mathematical paranoia. While inclusion is necessary to ensure complete rigour, they are probably more valuable as an example of the difference between what goes into the body of a paper and what is actually done, to inform and/or terrify future researchers. It does still serve the purpose of clarifying the steps of the derivation. Editing of the surrounding text has been done for "professionalism" and clarity, mostly adding details that were originally omitted and adapting to formatting changes, but the mathematical steps have not been edited at all.

There are two proofs; for completeness, both are included. While the *shorter derivation*, as presented in the main paper, was considered at the start of the *original proof*, it was not explored, because the division by zero appeared problematic. It was only after finishing the original derivation, and resolving the limit, that approaching the problem with a change of variables was reconsidered. This was at the urging of an anonymous reviewer; special thanks are extended to whomever they may be. The limit calculation within the main paper utilises the techniques/form of the original derivation.

⁵A now defunct text editor; some later steps were done with inferior tools.

## A.1 Shorter Derivation

Differential cross-entropy is defined as

$$H(P,Q)=-\int P(x)\log Q(x)dx,\tag{30}$$

where the integral is over the (shared) domain of the two distributions, P and Q.
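As a concrete reference for what follows, this definition can be evaluated numerically for piecewise-linear densities. A minimal sketch; the function name and the midpoint-rule discretisation are choices made here, not part of the derivation:

```python
import math

def cross_entropy_numeric(xs, ps, qs, steps=20000):
    """Midpoint-rule approximation of H(P, Q) = -integral of P(x) log Q(x) dx
    for densities P and Q that are piecewise linear on the shared knots xs."""
    total = 0.0
    for i in range(len(xs) - 1):
        delta = xs[i + 1] - xs[i]  # width of segment i
        for k in range(steps):
            t = (k + 0.5) / steps  # midpoint of sub-interval in [0, 1]
            p = (1 - t) * ps[i] + t * ps[i + 1]
            q = (1 - t) * qs[i] + t * qs[i + 1]
            total -= delta * p * math.log(q) / steps
    return total

# Uniform P and Q on [0, 1] gives H(P, Q) = 0.
print(cross_entropy_numeric([0.0, 1.0], [1.0, 1.0], [1.0, 1.0]))
```

A brute-force evaluator like this is what the closed forms derived below were verified against.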
In the linear case this becomes

$$H(P,Q)=-\sum_{i}\delta_{i}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\log((1-t)q_{i}+tq_{i+1})dt,\tag{31}$$

where the sum is over each piecewise linear section, with $\delta_i$ that section's width ($i$ to $i+1$). Lowercase letters indicate an evaluation of the respective uppercase PDF, at the edge of a linear segment: subscript $i$ at the start of segment $i$, subscript $i+1$ at the end. Hence, we need to be able to evaluate

$$-\delta_{i}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\log((1-t)q_{i}+tq_{i+1})dt.\tag{32}$$

Define

$$\Delta p_{i}=p_{i+1}-p_{i},\quad\Delta q_{i}=q_{i+1}-q_{i}\tag{33}$$

and consider

$$\hat{q}=(1-t)q_{i}+tq_{i+1}=q_{i}+\Delta q_{i}t,\tag{34}$$

which can be used for a change of variables,

$$=-\frac{\delta_{i}}{\Delta q_{i}}\int_{q_{i}}^{q_{i+1}}\left(p_{i}+\frac{\Delta p_{i}}{\Delta q_{i}}(\hat{q}-q_{i})\right)\log(\hat{q})d\hat{q}.\tag{35}$$

Separate the two different integral forms,

$$=-\frac{\delta_{i}}{\Delta q_{i}}\left\{\left(p_{i}-\frac{\Delta p_{i}}{\Delta q_{i}}q_{i}\right)\int_{q_{i}}^{q_{i+1}}\log(\hat{q})d\hat{q}+\frac{\Delta p_{i}}{\Delta q_{i}}\int_{q_{i}}^{q_{i+1}}\hat{q}\log(\hat{q})d\hat{q}\right\},\tag{36}$$

and slot in solutions to both (omitting the unknown offsets, as they cancel),
$$=-\frac{\delta_{i}}{\Delta q_{i}}\left\{\left(p_{i}-\frac{\Delta p_{i}}{\Delta q_{i}}q_{i}\right)\Big[\hat{q}\left\{\log(\hat{q})-1\right\}\Big]_{q_{i}}^{q_{i+1}}+\frac{\Delta p_{i}}{\Delta q_{i}}\left[\hat{q}^{2}\left\{\frac{\log(\hat{q})}{2}-\frac{1}{4}\right\}\right]_{q_{i}}^{q_{i+1}}\right\},\tag{37}$$

$$=-\frac{\delta_{i}}{\Delta q_{i}}\left\{\left(p_{i}-\frac{\Delta p_{i}}{\Delta q_{i}}q_{i}\right)\left(q_{i+1}\left\{\log(q_{i+1})-1\right\}-q_{i}\left\{\log(q_{i})-1\right\}\right)+\frac{\Delta p_{i}}{\Delta q_{i}}\left(q_{i+1}^{2}\left\{\frac{\log(q_{i+1})}{2}-\frac{1}{4}\right\}-q_{i}^{2}\left\{\frac{\log(q_{i})}{2}-\frac{1}{4}\right\}\right)\right\},\tag{38}$$

then rearrange,

$$=-\frac{\delta_{i}}{\Delta q_{i}}\left\{\left(p_{i}-\frac{\Delta p_{i}}{\Delta q_{i}}q_{i}\right)\left(q_{i+1}\log(q_{i+1})-q_{i+1}-q_{i}\log(q_{i})+q_{i}\right)+\frac{\Delta p_{i}}{2\Delta q_{i}}\left(q_{i+1}^{2}\log(q_{i+1})-\frac{q_{i+1}^{2}}{2}-q_{i}^{2}\log(q_{i})+\frac{q_{i}^{2}}{2}\right)\right\},\tag{39}$$

$$=-\frac{\delta_{i}}{(\Delta q_{i})^{2}}\left\{(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})\left(q_{i+1}\log(q_{i+1})-q_{i}\log(q_{i})+q_{i}-q_{i+1}\right)+\frac{\Delta p_{i}}{2}\left(q_{i+1}^{2}\log(q_{i+1})-q_{i}^{2}\log(q_{i})+\frac{q_{i}^{2}}{2}-\frac{q_{i+1}^{2}}{2}\right)\right\},\tag{40}$$

$$=-\frac{\delta_{i}}{(\Delta q_{i})^{2}}\left\{(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})(q_{i+1}\log(q_{i+1})-q_{i}\log(q_{i}))+\frac{\Delta p_{i}}{2}\left(q_{i+1}^{2}\log(q_{i+1})-q_{i}^{2}\log(q_{i})\right)+(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})(q_{i}-q_{i+1})+\frac{\Delta p_{i}}{4}\left(q_{i}^{2}-q_{i+1}^{2}\right)\right\}.\tag{41}$$

Explode the first term (within the curly brackets) and clean up,

$$(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})(q_{i+1}\log(q_{i+1})-q_{i}\log(q_{i})),\tag{42}$$
$$\Delta q_{i}p_{i}q_{i+1}\log(q_{i+1})-\Delta p_{i}q_{i}q_{i+1}\log(q_{i+1})-\Delta q_{i}p_{i}q_{i}\log(q_{i})+\Delta p_{i}q_{i}^{2}\log(q_{i}),\tag{43}$$

$$(q_{i+1}-q_{i})p_{i}q_{i+1}\log(q_{i+1})-(p_{i+1}-p_{i})q_{i}q_{i+1}\log(q_{i+1})-(q_{i+1}-q_{i})p_{i}q_{i}\log(q_{i})+(p_{i+1}-p_{i})q_{i}^{2}\log(q_{i}),\tag{44}$$

$$p_{i}q_{i+1}^{2}\log(q_{i+1})-p_{i}q_{i}q_{i+1}\log(q_{i+1})+p_{i}q_{i}q_{i+1}\log(q_{i+1})-p_{i+1}q_{i}q_{i+1}\log(q_{i+1})+p_{i}q_{i}^{2}\log(q_{i})-p_{i}q_{i}q_{i+1}\log(q_{i})+p_{i+1}q_{i}^{2}\log(q_{i})-p_{i}q_{i}^{2}\log(q_{i}),\tag{45}$$

$$p_{i}q_{i+1}^{2}\log(q_{i+1})-p_{i+1}q_{i}q_{i+1}\log(q_{i+1})-p_{i}q_{i}q_{i+1}\log(q_{i})+p_{i+1}q_{i}^{2}\log(q_{i}),\tag{46}$$

$$p_{i+1}q_{i}^{2}\log(q_{i})+p_{i}q_{i+1}^{2}\log(q_{i+1})-q_{i}q_{i+1}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})).\tag{47}$$

Same again for the second term,

$$\frac{\Delta p_{i}}{2}\left(q_{i+1}^{2}\log(q_{i+1})-q_{i}^{2}\log(q_{i})\right),\tag{48}$$

$$\frac{p_{i+1}-p_{i}}{2}q_{i+1}^{2}\log(q_{i+1})-\frac{p_{i+1}-p_{i}}{2}q_{i}^{2}\log(q_{i}),\tag{49}$$

$$\frac{p_{i+1}q_{i+1}^{2}\log(q_{i+1})}{2}-\frac{p_{i}q_{i+1}^{2}\log(q_{i+1})}{2}-\frac{p_{i+1}q_{i}^{2}\log(q_{i})}{2}+\frac{p_{i}q_{i}^{2}\log(q_{i})}{2},\tag{50}$$

$$-\frac{p_{i+1}q_{i}^{2}\log(q_{i})}{2}-\frac{p_{i}q_{i+1}^{2}\log(q_{i+1})}{2}+\frac{p_{i+1}q_{i+1}^{2}\log(q_{i+1})}{2}+\frac{p_{i}q_{i}^{2}\log(q_{i})}{2}.\tag{51}$$

Note that, with reference to the last part of both of the above exploded terms,

$$\frac{1}{2}(q_{i+1}-q_{i})^{2}=\frac{q_{i+1}^{2}}{2}+\frac{q_{i}^{2}}{2}-q_{i}q_{i+1}.\tag{52}$$

Using this, both terms can be merged to get

$$\begin{aligned}&\frac{p_{i+1}q_{i}^{2}\log(q_{i})}{2}+\frac{p_{i}q_{i+1}^{2}\log(q_{i+1})}{2}\\&+\frac{1}{2}(q_{i+1}-q_{i})^{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1}))\\&-\frac{p_{i}q_{i+1}^{2}\log(q_{i})}{2}-\frac{p_{i+1}q_{i}^{2}\log(q_{i+1})}{2},\end{aligned}\tag{53}$$

where the third line corrects for the mismatch of the second line.
Simplify to get

$$\frac{p_{i}q_{i+1}^{2}-p_{i+1}q_{i}^{2}}{2}(\log(q_{i+1})-\log(q_{i}))+\frac{1}{2}(\Delta q_{i})^{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})).\tag{54}$$

Now rearrange the last two terms from within the curly brackets,

$$(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})(q_{i}-q_{i+1})+\frac{\Delta p_{i}}{4}\left(q_{i}^{2}-q_{i+1}^{2}\right),\tag{55}$$

$$-(\Delta q_{i}p_{i}-\Delta p_{i}q_{i})\Delta q_{i}-\frac{\Delta p_{i}}{4}(q_{i}+q_{i+1})\Delta q_{i},\tag{56}$$

$$-\frac{\Delta q_{i}}{4}\left[4\Delta q_{i}p_{i}-4\Delta p_{i}q_{i}+\Delta p_{i}q_{i}+\Delta p_{i}q_{i+1}\right],\tag{57}$$

$$-\frac{\Delta q_{i}}{4}\left[4p_{i}q_{i+1}-4p_{i}q_{i}-4p_{i+1}q_{i}+4p_{i}q_{i}+p_{i+1}q_{i}-p_{i}q_{i}+p_{i+1}q_{i+1}-p_{i}q_{i+1}\right],\tag{58}$$

$$-\frac{\Delta q_{i}}{4}\left[3p_{i}q_{i+1}-3p_{i+1}q_{i}-p_{i}q_{i}+p_{i+1}q_{i+1}\right],\tag{59}$$

$$-\frac{\Delta q_{i}}{4}\left[(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}\right].\tag{60}$$

Bring all of the simplified terms back into the main equation,

$$=-\frac{\delta_{i}}{(\Delta q_{i})^{2}}\left\{\frac{p_{i}q_{i+1}^{2}-p_{i+1}q_{i}^{2}}{2}(\log(q_{i+1})-\log(q_{i}))+\frac{1}{2}(\Delta q_{i})^{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1}))-\frac{\Delta q_{i}}{4}\left[(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}\right]\right\},\tag{61}$$

and rearrange,

$$=-\delta_{i}\left\{\frac{p_{i}q_{i+1}^{2}-p_{i+1}q_{i}^{2}}{2(\Delta q_{i})^{2}}(\log(q_{i+1})-\log(q_{i}))+\frac{1}{2}(p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1}))-\frac{1}{4\Delta q_{i}}\left[(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}\right]\right\},\tag{62}$$

to obtain the final form,

$$=-\delta_{i}\left\{\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}-\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}(\log(q_{i+1})-\log(q_{i}))-\frac{(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}}{4(q_{i+1}-q_{i})}\right\}.\tag{63}$$
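The final form lends itself to a direct numerical check. A sketch under the assumption that $q_i \neq q_{i+1}$ and both are positive; the function names are invented here:

```python
import math

def segment_ce_closed(delta, p0, p1, q0, q1):
    """Per-segment contribution to H(P, Q) from the closed form above;
    requires q0, q1 > 0 and q0 != q1."""
    dq = q1 - q0
    return -delta * (
        (p0 * math.log(q0) + p1 * math.log(q1)) / 2
        - (p1 * q0**2 - p0 * q1**2) / (2 * dq**2)
        * (math.log(q1) - math.log(q0))
        - ((3 * p0 + p1) * q1 - (p0 + 3 * p1) * q0) / (4 * dq)
    )

def segment_ce_numeric(delta, p0, p1, q0, q1, steps=100000):
    """Midpoint-rule reference for -delta * int_0^1 P(t) log Q(t) dt."""
    acc = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        acc -= ((1 - t) * p0 + t * p1) * math.log((1 - t) * q0 + t * q1)
    return delta * acc / steps

# P uniform, Q linear from 1 to 2 on one unit-width segment:
# the exact value is 1 - 2 log 2.
print(segment_ce_closed(1.0, 1.0, 1.0, 1.0, 2.0))
```

The equal-endpoint case $q_i = q_{i+1}$ is the division-by-zero limit discussed in the main paper and is not handled by this sketch.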
## A.2 Original Derivation

Start from the need to evaluate

$$-\delta_{i}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\log((1-t)q_{i}+tq_{i+1})dt.\tag{64}$$

This time, consider the series⁶

$$\log(a)=2\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\left(\frac{a-1}{a+1}\right)^{n},\tag{65}$$

defined for a > 0. Use it to rewrite the objective as

$$-\delta_{i}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\,2\!\!\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\left(\frac{(1-t)q_{i}+tq_{i+1}-1}{(1-t)q_{i}+tq_{i+1}+1}\right)^{n}dt,\tag{66}$$

$$-2\delta_{i}\!\!\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\left(\frac{(1-t)q_{i}+tq_{i+1}-1}{(1-t)q_{i}+tq_{i+1}+1}\right)^{n}dt,\tag{67}$$

$$-2\delta_{i}\!\!\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\int_{0}^{1}(p_{i}+(p_{i+1}-p_{i})t)\left(\frac{q_{i}-1+(q_{i+1}-q_{i})t}{q_{i}+1+(q_{i+1}-q_{i})t}\right)^{n}dt.\tag{68}$$

Use

$$\frac{d}{dt}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)=p_{i}+(p_{i+1}-p_{i})t,\tag{69}$$

plus

$$\frac{d}{dt}\,\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{m}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}},\tag{70}$$

$$\frac{d}{dt}\left[(q_{i}-1+(q_{i+1}-q_{i})t)^{m}\right]\frac{1}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}+(q_{i}-1+(q_{i+1}-q_{i})t)^{m}\,\frac{d}{dt}\left[\frac{1}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}\right],\tag{71}$$

$$\frac{m(q_{i}-1+(q_{i+1}-q_{i})t)^{m-1}(q_{i+1}-q_{i})}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}-(q_{i}-1+(q_{i+1}-q_{i})t)^{m}\,\frac{n(q_{i}+1+(q_{i+1}-q_{i})t)^{n-1}(q_{i+1}-q_{i})}{(q_{i}+1+(q_{i+1}-q_{i})t)^{2n}},\tag{72}$$

$$(q_{i+1}-q_{i})\left[\frac{m(q_{i}-1+(q_{i+1}-q_{i})t)^{m-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}-\frac{n(q_{i}-1+(q_{i+1}-q_{i})t)^{m}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}\right],\tag{73}$$

where some generalising has been done pre-emptively by introducing m ≤ n in anticipation of later steps, though for this first use n = m.

⁶Equation 4.6.4, p. 108 of the NIST Handbook of Mathematical Functions, 2010.
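The odd-power logarithm series introduced above converges for any a > 0, since |(a − 1)/(a + 1)| < 1. A quick truncation check; the function name is invented here:

```python
import math

def log_series(a, terms=200):
    """Truncated odd-power series: log(a) = 2 * sum over odd n of
    (1/n) * ((a - 1)/(a + 1))**n, valid for a > 0."""
    r = (a - 1) / (a + 1)
    return 2 * sum(r**n / n for n in range(1, 2 * terms, 2))

for a in (0.25, 1.0, 2.0, 10.0):
    print(a, log_series(a), math.log(a))
```

Convergence slows as a moves away from 1, which is part of why the later resummation into polylogarithms matters numerically.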
Apply integration by parts to get

$$-2\delta_{i}\!\!\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\left\{\left[\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\left(\frac{q_{i}-1+(q_{i+1}-q_{i})t}{q_{i}+1+(q_{i+1}-q_{i})t}\right)^{n}\right]_{0}^{1}-\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)n(q_{i+1}-q_{i})\left[\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}-\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}\right]dt\right\},\tag{74}$$

$$-2\delta_{i}\left\{\frac{p_{i}+p_{i+1}}{2}\!\!\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n}\left(\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{n}-(q_{i+1}-q_{i})\!\!\sum_{n\in\{1,3,5,\ldots\}}\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\left[\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}-\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}\right]dt\right\},\tag{75}$$

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+2\delta_{i}(q_{i+1}-q_{i})\!\!\sum_{n\in\{1,3,5,\ldots\}}\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}dt-2\delta_{i}(q_{i+1}-q_{i})\!\!\sum_{n\in\{1,3,5,\ldots\}}\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt,\tag{76}$$

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})-2\delta_{i}(q_{i+1}-q_{i})\sum_{n=1}^{\infty}(-1)^{n}\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}dt.\tag{77}$$

Consider the remaining integration,

$$\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}dt,\tag{78}$$

and prepare for integration by parts again. Need

$$\frac{d}{dt}\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)=p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\tag{79}$$

in addition to the above, where this time the generalisation is used. Stepping through,

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})-2\delta_{i}(q_{i+1}-q_{i})\sum_{n=1}^{\infty}(-1)^{n}\int_{0}^{1}\left(p_{i}t+\frac{(p_{i+1}-p_{i})t^{2}}{2}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}dt,\tag{80}$$
$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})-2\delta_{i}(q_{i+1}-q_{i})\sum_{n=1}^{\infty}(-1)^{n}\left\{\left[\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}\right]_{0}^{1}-(q_{i+1}-q_{i})\int_{0}^{1}\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)\left[\frac{(n-1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-2}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}-\frac{n(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}\right]dt\right\},\tag{81}$$

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})-2\delta_{i}(q_{i+1}-q_{i})\sum_{n=1}^{\infty}(-1)^{n}\left(\frac{p_{i}}{2}+\frac{p_{i+1}-p_{i}}{6}\right)\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n}}+2\delta_{i}(q_{i+1}-q_{i})^{2}\left[\sum_{n=1}^{\infty}(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)\frac{(n-1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-2}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n}}dt-\sum_{n=1}^{\infty}(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)\frac{n(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt\right].\tag{82}$$

Note that the first entry of the sequence on the second line is zero, and offset to align powers with the sequence on the third line,

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})-\delta_{i}(q_{i+1}-q_{i})\left(p_{i}+\frac{p_{i+1}-p_{i}}{3}\right)\sum_{n=1}^{\infty}(-1)^{n}\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n}}-2\delta_{i}(q_{i+1}-q_{i})^{2}\left[\sum_{n=1}^{\infty}(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)\frac{n(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt+\sum_{n=1}^{\infty}(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{2}}{2}+\frac{(p_{i+1}-p_{i})t^{3}}{6}\right)\frac{n(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt\right],\tag{83}$$

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})-\delta_{i}\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\left(\frac{2p_{i}+p_{i+1}}{3}\right)\sum_{n=1}^{\infty}(-1)^{n}\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n-1}}-2\delta_{i}(q_{i+1}-q_{i})^{2}\sum_{n=1}^{\infty}n(-1)^{n}\int_{0}^{1}\left(p_{i}t^{2}+\frac{(p_{i+1}-p_{i})t^{3}}{3}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt.\tag{84}$$
Focus on the second term of the first line, which contains an infinite geometric sequence; use $\sum_{k=0}^{\infty}r^{k}=\frac{1}{1-r}$, as it is the case that |r| < 1 by construction:

$$-\delta_{i}\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\left(\frac{2p_{i}+p_{i+1}}{3}\right)\sum_{n=1}^{\infty}(-1)^{n}\left\{\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n-1}}\right\},\tag{85}$$

$$+\delta_{i}\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\left(\frac{2p_{i}+p_{i+1}}{3}\right)\sum_{n=0}^{\infty}(-1)^{n}\left\{\frac{(q_{i+1}-1)^{n}}{(q_{i+1}+1)^{n}}\right\},\tag{86}$$

$$+\delta_{i}\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\left(\frac{2p_{i}+p_{i+1}}{3}\right)\sum_{n=0}^{\infty}\left(-\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{n},\tag{87}$$

$$+\delta_{i}\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\left(\frac{2p_{i}+p_{i+1}}{3}\right)\frac{1}{1+\frac{q_{i+1}-1}{q_{i+1}+1}},\tag{88}$$

$$+\delta_{i}\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\left(\frac{2p_{i}+p_{i+1}}{3}\right)\frac{q_{i+1}+1}{q_{i+1}+1+q_{i+1}-1},\tag{89}$$

$$+\delta_{i}\frac{(2p_{i}+p_{i+1})(q_{i+1}-q_{i})}{6q_{i+1}}.\tag{90}$$

Now put it back into the original equation,

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\frac{(2p_{i}+p_{i+1})(q_{i+1}-q_{i})}{6q_{i+1}}-2\delta_{i}(q_{i+1}-q_{i})^{2}\sum_{n=1}^{\infty}n(-1)^{n}\int_{0}^{1}\left(p_{i}t^{2}+\frac{(p_{i+1}-p_{i})t^{3}}{3}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt.\tag{91}$$

Time for yet another integration by parts, which needs

$$\frac{d}{dt}\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)=p_{i}t^{2}+\frac{(p_{i+1}-p_{i})t^{3}}{3},\tag{92}$$

plus the generalised derivative from above; consider just the final term.
$$-2\delta_{i}(q_{i+1}-q_{i})^{2}\sum_{n=1}^{\infty}n(-1)^{n}\int_{0}^{1}\left(p_{i}t^{2}+\frac{(p_{i+1}-p_{i})t^{3}}{3}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt,\tag{93}$$

$$-2\delta_{i}(q_{i+1}-q_{i})^{2}\sum_{n=1}^{\infty}n(-1)^{n}\left\{\left[\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}\right]_{0}^{1}-(q_{i+1}-q_{i})\int_{0}^{1}\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)\left[\frac{(n-1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-2}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}-\frac{(n+1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}\right]dt\right\},\tag{94}$$

$$-2\delta_{i}(q_{i+1}-q_{i})^{2}\sum_{n=1}^{\infty}n(-1)^{n}\left(\frac{p_{i}}{3}+\frac{p_{i+1}-p_{i}}{12}\right)\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n+1}}+2\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)\frac{(n-1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-2}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+1}}dt-2\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)\frac{(n+1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt.\tag{95}$$

The second line is zero for the first entry in the sequence, so shift along to align with the last line,

$$-2\delta_{i}(q_{i+1}-q_{i})^{2}\sum_{n=1}^{\infty}n(-1)^{n}\left(\frac{3p_{i}+p_{i+1}}{12}\right)\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n+1}}-2\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt-2\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{3}}{3}+\frac{(p_{i+1}-p_{i})t^{4}}{12}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt,\tag{96}$$

$$+2\delta_{i}(q_{i+1}-q_{i})^{2}\left(\frac{3p_{i}+p_{i+1}}{12}\right)\sum_{n=0}^{\infty}(n+1)(-1)^{n}\frac{(q_{i+1}-1)^{n}}{(q_{i+1}+1)^{n+2}}-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt,\tag{97}$$

$$+\frac{2}{3}\delta_{i}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\right)^{2}\left(\frac{3p_{i}+p_{i+1}}{4}\right)\sum_{n=0}^{\infty}(n+1)\left(-\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{n}-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt.\tag{98}$$
The infinite sequence in the first line is an example of a polylogarithm; specifically, it is equivalent to two polylogarithm evaluations, such that

$$+\frac{2}{3}\delta_{i}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\right)^{2}\left(\frac{3p_{i}+p_{i+1}}{4}\right)\left[\frac{1}{1+\frac{q_{i+1}-1}{q_{i+1}+1}}-\frac{\frac{q_{i+1}-1}{q_{i+1}+1}}{\left(1+\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{2}}\right]-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt,\tag{99}$$

$$+\frac{2}{3}\delta_{i}\left(\frac{3p_{i}+p_{i+1}}{4}\right)\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+1}\right)^{2}\left[\frac{q_{i+1}+1}{2q_{i+1}}-\frac{(q_{i+1}+1)(q_{i+1}-1)}{4q_{i+1}^{2}}\right]-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt,\tag{100}$$

$$+\delta_{i}\frac{(3p_{i}+p_{i+1})(q_{i+1}-q_{i})^{2}}{24q_{i+1}^{2}}-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt.\tag{101}$$

Now bring back the rest of the equation,

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\frac{(2p_{i}+p_{i+1})(q_{i+1}-q_{i})}{6q_{i+1}}+\delta_{i}\frac{(3p_{i}+p_{i+1})(q_{i+1}-q_{i})^{2}}{24q_{i+1}^{2}}-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt.\tag{102}$$

This is suggestive, but not enough terms to be sure, hence repeat again.
Going to need

$$\frac{d}{dt}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)=p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4},\tag{103}$$

and the generalised derivative above. As before, consider only the final line,

$$-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{3}+\frac{(p_{i+1}-p_{i})t^{4}}{4}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt,\tag{104}$$

$$-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\left\{\left[\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}\right]_{0}^{1}-(q_{i+1}-q_{i})\int_{0}^{1}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\left[\frac{(n-1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-2}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}-\frac{(n+2)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}\right]dt\right\},\tag{105}$$

$$-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\left(\frac{p_{i}}{4}+\frac{p_{i+1}-p_{i}}{20}\right)\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n+2}}+\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\frac{(n-1)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-2}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+2}}dt-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\frac{(n+2)(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt.\tag{106}$$
Note that the first entry of the second line's summation is zero, so shift to align with the third line,

$$-\frac{4}{12}\delta_{i}\left(p_{i}+\frac{p_{i+1}-p_{i}}{5}\right)(q_{i+1}-q_{i})^{3}\sum_{n=1}^{\infty}n(n+1)(-1)^{n}\frac{(q_{i+1}-1)^{n-1}}{(q_{i+1}+1)^{n+2}}-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt-\frac{4}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt,\tag{107}$$

$$+\frac{4}{12}\delta_{i}\left(\frac{4p_{i}+p_{i+1}}{5}\right)(q_{i+1}-q_{i})^{3}\sum_{n=0}^{\infty}(n+1)(n+2)(-1)^{n}\frac{(q_{i+1}-1)^{n}}{(q_{i+1}+1)^{n+3}}-\frac{8}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(\frac{p_{i}t^{4}}{4}+\frac{(p_{i+1}-p_{i})t^{5}}{20}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt,\tag{108}$$

$$+\frac{4}{12}\delta_{i}\left(\frac{4p_{i}+p_{i+1}}{5}\right)\frac{(q_{i+1}-q_{i})^{3}}{(q_{i+1}+1)^{3}}\sum_{n=0}^{\infty}(2+3n+n^{2})\left(-\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{n}-\frac{2}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{4}+\frac{(p_{i+1}-p_{i})t^{5}}{5}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt.\tag{109}$$
The infinite series of the first line is equivalent to three polylogarithms of different order,

$$+\frac{1}{3}\delta_{i}\left(\frac{4p_{i}+p_{i+1}}{5}\right)\frac{(q_{i+1}-q_{i})^{3}}{(q_{i+1}+1)^{3}}\left[\frac{2}{1+\frac{q_{i+1}-1}{q_{i+1}+1}}-\frac{3\frac{q_{i+1}-1}{q_{i+1}+1}}{\left(1+\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{2}}-\frac{\frac{q_{i+1}-1}{q_{i+1}+1}\left(1-\frac{q_{i+1}-1}{q_{i+1}+1}\right)}{\left(1+\frac{q_{i+1}-1}{q_{i+1}+1}\right)^{3}}\right]-\frac{2}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{4}+\frac{(p_{i+1}-p_{i})t^{5}}{5}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt.\tag{110}$$

Carrying the integral term along unchanged, the bracket simplifies stepwise:

$$\left[\frac{q_{i+1}+1}{q_{i+1}}-\frac{3(q_{i+1}-1)(q_{i+1}+1)^{2}}{4(q_{i+1}+1)q_{i+1}^{2}}-\frac{2(q_{i+1}-1)(q_{i+1}+1)^{3}}{8(q_{i+1}+1)^{2}q_{i+1}^{3}}\right],\tag{111}$$

$$\left[\frac{4q_{i+1}(q_{i+1}+1)^{2}-3(q_{i+1}-1)(q_{i+1}+1)^{2}}{4(q_{i+1}+1)q_{i+1}^{2}}-\frac{2(q_{i+1}-1)(q_{i+1}+1)^{3}}{8(q_{i+1}+1)^{2}q_{i+1}^{3}}\right],\tag{112}$$

$$\left[\frac{(q_{i+1}+3)(q_{i+1}+1)^{2}}{4(q_{i+1}+1)q_{i+1}^{2}}-\frac{2(q_{i+1}-1)(q_{i+1}+1)^{3}}{8(q_{i+1}+1)^{2}q_{i+1}^{3}}\right],\tag{113}$$
Continuing to simplify the bracket,

$$\left[\frac{2q_{i+1}(q_{i+1}+3)(q_{i+1}+1)^{3}-2(q_{i+1}-1)(q_{i+1}+1)^{3}}{8(q_{i+1}+1)^{2}q_{i+1}^{3}}\right],\tag{114}$$

$$\left[\frac{(q_{i+1}+1)^{5}}{4(q_{i+1}+1)^{2}q_{i+1}^{3}}\right],\tag{115}$$

so that the whole term collapses to

$$+\delta_{i}\frac{(4p_{i}+p_{i+1})(q_{i+1}-q_{i})^{3}}{60q_{i+1}^{3}}-\frac{2}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{4}+\frac{(p_{i+1}-p_{i})t^{5}}{5}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt.\tag{116}$$

Now bring back the rest of the terms to get

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\frac{(2p_{i}+p_{i+1})(q_{i+1}-q_{i})}{6q_{i+1}}+\delta_{i}\frac{(3p_{i}+p_{i+1})(q_{i+1}-q_{i})^{2}}{24q_{i+1}^{2}}+\delta_{i}\frac{(4p_{i}+p_{i+1})(q_{i+1}-q_{i})^{3}}{60q_{i+1}^{3}}-\frac{2}{3}\delta_{i}(q_{i+1}-q_{i})^{4}\sum_{n=1}^{\infty}n(n+1)(n+2)(-1)^{n}\int_{0}^{1}\left(p_{i}t^{4}+\frac{(p_{i+1}-p_{i})t^{5}}{5}\right)\frac{(q_{i}-1+(q_{i+1}-q_{i})t)^{n-1}}{(q_{i}+1+(q_{i+1}-q_{i})t)^{n+3}}dt.\tag{117}$$
From this we can see the sequence is⁷

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\sum_{n=1}^{\infty}\frac{(n+1)p_{i}+p_{i+1}}{n(n+1)(n+2)}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}.\tag{118}$$

This is a pretty good solution, at least computationally speaking, and it has been verified by comparison to numerical integration. It is unstable, however, and that last infinite sequence can be removed through the use of polylogarithms.

⁷The divisor is https://oeis.org/A007531.

Start with

$$\frac{1}{n(n+1)(n+2)}=\frac{(n-1)^{2}(n+2)}{4n}-\frac{n^{2}(n+3)+(n-1)n(n+2)}{4(n+1)}+\frac{n(n+1)(n+3)}{4(n+2)},\tag{119}$$

as derived in Subsection A.2.2. This allows the goal to be rewritten as

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\sum_{n=1}^{\infty}\frac{(n-1)^{2}(n+2)((n+1)p_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}-\delta_{i}\sum_{n=1}^{\infty}\frac{n^{2}(n+3)((n+1)p_{i}+p_{i+1})+(n-1)n(n+2)((n+1)p_{i}+p_{i+1})}{4(n+1)}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}+\delta_{i}\sum_{n=1}^{\infty}\frac{n(n+1)(n+3)((n+1)p_{i}+p_{i+1})}{4(n+2)}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n},\tag{120}$$

and then offset the sequences so the denominator is just 4n,

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\sum_{n=1}^{\infty}\frac{(n-1)^{2}(n+2)((n+1)p_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}-\delta_{i}\sum_{n=2}^{\infty}\frac{(n-1)^{2}(n+2)(np_{i}+p_{i+1})+(n-2)(n-1)(n+1)(np_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n-1}+\delta_{i}\sum_{n=3}^{\infty}\frac{(n-2)(n-1)(n+1)((n-1)p_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n-2}.\tag{121}$$

Now correct the sequences to start from one by noting that the early terms are zero, and correct for the exponents by pre-multiplication.
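As an aside, the computational verification mentioned above is a few lines of code. A sketch with an invented function name, valid when |q_{i+1} − q_i| < q_{i+1}:

```python
import math

def segment_ce_series(delta, p0, p1, q0, q1, terms=200):
    """Truncated series form of the per-segment cross-entropy:
    -delta*(p0+p1)/2 * log(q1)
    + delta * sum_{n>=1} ((n+1)*p0 + p1) / (n*(n+1)*(n+2)) * r**n,
    with r = (q1 - q0)/q1; converges for |q1 - q0| < q1."""
    r = (q1 - q0) / q1
    s = sum(((n + 1) * p0 + p1) / (n * (n + 1) * (n + 2)) * r**n
            for n in range(1, terms + 1))
    return -delta * (p0 + p1) / 2 * math.log(q1) + delta * s

# P uniform, Q linear from 1 to 2 on one unit-width segment:
# the exact value is 1 - 2 log 2.
print(segment_ce_series(1.0, 1.0, 1.0, 1.0, 2.0))
```

When r is close to ±1 the truncation converges slowly, which is the instability that motivates the polylogarithm resummation below.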
It should be noted that there is a dependency on 0 log(0) = 0 being the case here (Lebesgue integration), as expected for cross-entropy.

$$-\delta_{i}\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})+\delta_{i}\sum_{n=1}^{\infty}\frac{(n-1)^{2}(n+2)((n+1)p_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}-\delta_{i}\frac{q_{i+1}}{q_{i+1}-q_{i}}\sum_{n=1}^{\infty}\frac{(n-1)^{2}(n+2)(np_{i}+p_{i+1})+(n-2)(n-1)(n+1)(np_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}+\delta_{i}\left(\frac{q_{i+1}}{q_{i+1}-q_{i}}\right)^{2}\sum_{n=1}^{\infty}\frac{(n-2)(n-1)(n+1)((n-1)p_{i}+p_{i+1})}{4n}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n}.\tag{122}$$

Now explode each summed fraction in turn. First:

$$\frac{(n-1)^{2}(n+2)((n+1)p_{i}+p_{i+1})}{4n}=\frac{(n-1)^{2}(n+1)(n+2)p_{i}+(n-1)^{2}(n+2)p_{i+1}}{4n},\tag{123}$$

$$\frac{(n^{2}-2n+1)(n^{2}+3n+2)p_{i}+(n^{2}-2n+1)(n+2)p_{i+1}}{4n},\tag{124}$$

$$\frac{(n^{4}+n^{3}-3n^{2}-n+2)p_{i}+(n^{3}-3n+2)p_{i+1}}{4n},\tag{125}$$

$$\frac{n^{3}p_{i}}{4}+\frac{n^{2}p_{i}}{4}-\frac{3np_{i}}{4}-\frac{p_{i}}{4}+\frac{p_{i}}{2n}+\frac{n^{2}p_{i+1}}{4}-\frac{3p_{i+1}}{4}+\frac{p_{i+1}}{2n},\tag{126}$$

$$\frac{n^{3}p_{i}}{4}+\frac{n^{2}(p_{i}+p_{i+1})}{4}-\frac{3np_{i}}{4}-\frac{p_{i}+3p_{i+1}}{4}+\frac{p_{i}+p_{i+1}}{2n}.\tag{127}$$

Second:

$$\frac{(n-1)^{2}(n+2)(np_{i}+p_{i+1})+(n-2)(n-1)(n+1)(np_{i}+p_{i+1})}{4n},\tag{128}$$

$$\frac{(n^{2}-2n+1)n(n+2)p_{i}+(n^{2}-2n+1)(n+2)p_{i+1}+(n^{2}-3n+2)n(n+1)p_{i}+(n^{2}-3n+2)(n+1)p_{i+1}}{4n},\tag{129}$$

$$\frac{(n^{4}-3n^{2}+2n)p_{i}+(n^{3}-3n+2)p_{i+1}+(n^{4}-2n^{3}-n^{2}+2n)p_{i}+(n^{3}-2n^{2}-n+2)p_{i+1}}{4n},\tag{130}$$

$$\frac{(2n^{4}-2n^{3}-4n^{2}+4n)p_{i}+(2n^{3}-2n^{2}-4n+4)p_{i+1}}{4n},\tag{131}$$

$$\frac{n^{3}p_{i}}{2}+\frac{n^{2}(p_{i+1}-p_{i})}{2}-\frac{n(2p_{i}+p_{i+1})}{2}+(p_{i}-p_{i+1})+\frac{p_{i+1}}{n}.\tag{132}$$

Third:

$$\frac{(n-2)(n-1)(n+1)((n-1)p_{i}+p_{i+1})}{4n}=\frac{(n^{2}-3n+2)(n+1)(n-1)p_{i}+(n^{2}-3n+2)(n+1)p_{i+1}}{4n},\tag{133}$$

$$\frac{(n^{4}-3n^{3}+n^{2}+3n-2)p_{i}+(n^{3}-2n^{2}-n+2)p_{i+1}}{4n},\tag{134}$$

$$\frac{n^{3}p_{i}}{4}+\frac{n^{2}(p_{i+1}-3p_{i})}{4}+\frac{n(p_{i}-2p_{i+1})}{4}+\frac{3p_{i}-p_{i+1}}{4}+\frac{p_{i+1}-p_{i}}{2n}.\tag{135}$$

For notational convenience, define a pre-filled-in polylogarithm of
order s as

$$\operatorname{Li}_{s}(\cdot)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{n},\tag{136}$$

then put it all together and rewrite the equation as

$$\begin{aligned}-\delta_{i}&\frac{p_{i}+p_{i+1}}{2}\log(q_{i+1})\\&+\delta_{i}\left(\frac{p_{i}}{4}-\frac{p_{i}q_{i+1}}{2(q_{i+1}-q_{i})}+\frac{p_{i}q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\operatorname{Li}_{-3}(\cdot)\\&+\delta_{i}\left(\frac{p_{i}+p_{i+1}}{4}-\frac{(p_{i+1}-p_{i})q_{i+1}}{2(q_{i+1}-q_{i})}+\frac{(p_{i+1}-3p_{i})q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\operatorname{Li}_{-2}(\cdot)\\&+\delta_{i}\left(\frac{-3p_{i}}{4}-\frac{(-2p_{i}-p_{i+1})q_{i+1}}{2(q_{i+1}-q_{i})}+\frac{(p_{i}-2p_{i+1})q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\operatorname{Li}_{-1}(\cdot)\\&+\delta_{i}\left(\frac{-p_{i}-3p_{i+1}}{4}-\frac{(p_{i}-p_{i+1})q_{i+1}}{q_{i+1}-q_{i}}+\frac{(3p_{i}-p_{i+1})q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\operatorname{Li}_{0}(\cdot)\\&+\delta_{i}\left(\frac{p_{i}+p_{i+1}}{2}-\frac{p_{i+1}q_{i+1}}{q_{i+1}-q_{i}}+\frac{(p_{i+1}-p_{i})q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\right)\operatorname{Li}_{1}(\cdot).\end{aligned}\tag{137}$$

This has been coded and verified, and unlike the earlier equation it is numerically stable. But it is a mess, so simplify as much as possible. Consider each term in turn, dropping in closed-form definitions for the polylogarithms, starting with order s = −3:

$$\delta_{i}\left(\frac{p_{i}}{4}-\frac{p_{i}q_{i+1}}{2(q_{i+1}-q_{i})}+\frac{p_{i}q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\operatorname{Li}_{-3}(\cdot),\tag{138}$$

$$\delta_{i}\,\frac{p_{i}(q_{i+1}-q_{i})^{2}-2p_{i}q_{i+1}(q_{i+1}-q_{i})+p_{i}q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\cdot\frac{\frac{q_{i+1}-q_{i}}{q_{i+1}}\left(1+4\frac{q_{i+1}-q_{i}}{q_{i+1}}+\frac{(q_{i+1}-q_{i})^{2}}{q_{i+1}^{2}}\right)}{\left(1-\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{4}},\tag{139}$$

$$\delta_{i}\,\frac{p_{i}q_{i}^{2}}{4(q_{i+1}-q_{i})^{2}}\cdot\frac{(q_{i+1}-q_{i})\left(q_{i+1}^{2}+4q_{i+1}(q_{i+1}-q_{i})+(q_{i+1}-q_{i})^{2}\right)q_{i+1}}{q_{i}^{4}},\tag{140}$$

$$\delta_{i}\,\frac{p_{i}q_{i}^{2}}{4(q_{i+1}-q_{i})}\cdot\frac{q_{i+1}^{3}+4q_{i+1}^{2}(q_{i+1}-q_{i})+q_{i+1}(q_{i+1}-q_{i})^{2}}{q_{i}^{4}},\tag{141}$$

$$\delta_{i}\,\frac{p_{i}q_{i}^{2}}{4(q_{i+1}-q_{i})}\cdot\frac{q_{i+1}^{3}+4q_{i+1}^{3}-4q_{i}q_{i+1}^{2}+q_{i+1}^{3}-2q_{i}q_{i+1}^{2}+q_{i}^{2}q_{i+1}}{q_{i}^{4}},\tag{142}$$

$$\delta_{i}\,\frac{p_{i}q_{i}^{2}}{4(q_{i+1}-q_{i})}\cdot\frac{6q_{i+1}^{3}-6q_{i}q_{i+1}^{2}+q_{i}^{2}q_{i+1}}{q_{i}^{4}},\tag{143}$$

$$\delta_{i}\,\frac{6p_{i}q_{i}^{2}q_{i+1}^{2}(q_{i+1}-q_{i})}{4q_{i}^{4}(q_{i+1}-q_{i})}+\delta_{i}\,\frac{p_{i}q_{i}^{4}q_{i+1}}{4q_{i}^{4}(q_{i+1}-q_{i})},\tag{144}$$

$$\delta_{i}\,\frac{3p_{i}q_{i+1}^{2}}{2q_{i}^{2}}+\delta_{i}\,\frac{p_{i}q_{i+1}}{4(q_{i+1}-q_{i})}.\tag{145}$$
(145) Now order s = −2: δi pi + pi+1 4− (pi+1 − pi)qi+1 2(qi+1 − qi)+ (pi+1 − 3pi)q 2 i+1 4(qi+1 − qi) 2 Li−2(·), (146) δi (pi + pi+1)(qi+1 − qi) 2 − 2(pi+1 − pi)qi+1(qi+1 − qi) + (pi+1 − 3pi)q 2 i+1 4(qi+1 − qi) 2 qi+1−qi qi+1 1 + qi+1−qi qi+1 1 − qi+1−qi qi+1 3 , (147) δi piq 2 i+1 − 2piqiqi+1 + piq 2 i + pi+1q 2 i+1 − 2pi+1qiqi+1 + pi+1q 2 i − 2pi+1q 2 i+1 + 2piq 2 i+1 + 2pi+1qiqi+1 − 2piqiqi+1 + pi+1q 2 i+1 − 3piq 2 i+1 4(qi+1 − qi) 2 (qi+1 − qi) 1 + qi+1−qi qi+1 , qi+1 q 3 i q 3 i+1 (148) δi piq 2 i + pi+1q 2 i − 4piqiqi+1 4(qi+1 − qi) q 2 i+1 + qi+1(qi+1 − qi) q 3 i , (149) δi (piq 2 i qi+1 + pi+1q 2 i qi+1 − 4piqiq 2 i+1)(qi+1 − qi) 4q 3 i (qi+1 − qi)+ δi piq 2 i q 2 i+1 + pi+1q 2 i q 2 i+1 − 4piqiq 3 i+1 4q 3 i (qi+1 − qi), (150) δi piqiqi+1 + pi+1qiqi+1 − 4piq 2 i+1 4q 2 i + δi piqiq 2 i+1 + pi+1qiq 2 i+1 − 4piq 3 i+1 4q 2 i (qi+1 − qi), (151) δi −4piq 2 i+1 4q 2 i + δi piqiqi+1(qi+1 − qi) + pi+1qiqi+1(qi+1 − qi) + piqiq 2 i+1 + pi+1qiq 2 i+1 − 4piq 3 i+1 4q 2 i (qi+1 − qi), (152) δi −piq 2 i+1 q 2 i + δi 2piqiq 2 i+1 − piq 2 i qi+1 + 2pi+1qiq 2 i+1 − pi+1q 2 i qi+1 − 4piq 3 i+1 4q 2 i (qi+1 − qi), (153) δi −piq 2 i+1 q 2 i + δi −piq 3 i+1 q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi pi+1q 2 i+1 2qi(qi+1 − qi) + δi−piqi+1 4(qi+1 − qi) + δi −pi+1qi+1 4(qi+1 − qi) . 
(154) Order s = −1: $$\delta_{i}\left(\frac{-3p_{i}}{4}-\frac{(-2p_{i}-p_{i+1})q_{i+1}}{2(q_{i+1}-q_{i})}+\frac{(p_{i}-2p_{i+1})q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\mathrm{Li}_{-1}(\cdot),$$ $$(155)$$ $$\delta_{i}\left(\frac{-3p_{i}(q_{i+1}-q_{i})^{2}+2(2p_{i}+p_{i+1})q_{i+1}(q_{i+1}-q_{i})+(p_{i}-2p_{i+1})q_{i+1}^{2}}{4(q_{i+1}-q_{i})^{2}}\right)\frac{\frac{q_{i+1}-q_{i}}{q_{i+1}}}{\left(1-\frac{q_{i+1}-q_{i}}{q_{i+1}}\right)^{2}},$$ , (156) $$\delta_{i}\left(\frac{-3p_{i}q_{i+1}^{2}+6p_{i}q_{i}q_{i+1}-3p_{i}q_{i}^{2}+4p_{i}q_{i+1}^{2}+2p_{i+1}q_{i+1}^{2}}{-4p_{i}q_{i}q_{i+1}-2p_{i+1}q_{i}q_{i+1}+p_{i}q_{i+1}^{2}-2p_{i+1}q_{i+1}^{2}}\right)\frac{(q_{i+1}-q_{i})}{q_{i+1}^{2}\frac{q_{i}^{2}}{q_{i+1}^{2}}},$$ $$(156)$$ $$(157)$$ $$(158)$$ (159) $\binom{159}{100}$ (160) . , (157) δi 2piq 2 i+1 + 2piqiqi+1 − 2pi+1qiqi+1 − 3piq 2 i 4(qi+1 − qi) qi+1 q 2 i , (158) δi 2piq 3 i+1 + 2piqiq 2 i+1 − 2pi+1qiq 2 i+1 − 3piq 2 i qi+1 4q 2 i (qi+1 − qi), (159) δi piq 3 i+1 2q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi −pi+1q 2 i+1 2qi(qi+1 − qi) + δi−3piqi+1 4(qi+1 − qi) . (160) δi −pi − 3pi+1 4− (pi − pi+1)qi+1 (qi+1 − qi)+ (3pi − pi+1)q 2 i+1 4(qi+1 − qi) 2 Li0(·), (161) δi −(pi + 3pi+1)(qi+1 − qi) 2 − 4(pi − pi+1)qi+1(qi+1 − qi) + (3pi − pi+1)q 2 i+1 4(qi+1 − qi) 2 qi+1−qi qi+1 1 − qi+1−qi qi+1 δi −piq 2 i+1 + 2piqiqi+1 − piq 2 i − 3pi+1q 2 i+1 + 6pi+1qiqi+1 − 3pi+1q 2 i − 4piq 2 i+1 + 4piqiqi+1 + 4pi+1q 2 i+1 − 4pi+1qiqi+1 + 3piq 2 i+1 − pi+1q 2 i+1 4(qi+1 − qi) 2 (qi+1 − qi) qi+1 qi qi+1 , (162) $$(161)$$ $$\left(162\right)$$ $$(163)$$ $$(164)$$ $$(165)$$ , (163) $$\delta_{i}\frac{-2p_{i}q_{i+1}^{2}-p_{i}q_{i}^{2}-3p_{i+1}q_{i}^{2}+6p_{i}q_{i}q_{i+1}+2p_{i+1}q_{i}q_{i+1}}{4q_{i}(q_{i+1}-q_{i})},$$ $$\delta_{i}\frac{-p_{i}q_{i+1}^{2}}{2q_{i}(q_{i+1}-q_{i})}+\delta_{i}\frac{-p_{i}q_{i}}{4(q_{i+1}-q_{i})}+\delta_{i}\frac{-3p_{i+1}q_{i}}{4(q_{i+1}-q_{i})}+\delta_{i}\frac{3p_{i}q_{i+1}}{2(q_{i+1}-q_{i})}+\delta_{i}\frac{p_{i+1}q_{i+1}}{2(q_{i+1}-q_{i})}.$$ . 
(165) Finally, order s = 1: δi pi + pi+1 2− pi+1qi+1 (qi+1 − qi) + (pi+1 − pi)q 2 i+1 2(qi+1 − qi) 2 Li1(·), (166) −δi (pi + pi+1)(qi+1 − qi) 2 − 2pi+1qi+1(qi+1 − qi) + (pi+1 − pi)q 2 i+1 2(qi+1 − qi) 2 log 1 − qi+1 − qi qi+1 , (167) −δi piq 2 i+1 − 2piqiqi+1 + piq 2 i + pi+1q 2 i+1 − 2pi+1qiqi+1 + pi+1q 2 i − 2pi+1q 2 i+1 + 2pi+1qiqi+1 + pi+1q 2 i+1 − piq 2 i+1 2(qi+1 − qi) 2 log qi qi+1 , (168) $$(166)$$ $$(167)$$ $$(168)$$ $$(169)$$ $$\delta_{i}\left(\frac{2p_{i}q_{i}q_{i+1}-p_{i}q_{i}^{2}-p_{i+1}q_{i}^{2}}{2(q_{i+1}-q_{i})^{2}}\right)\left(\log(q_{i})-\log(q_{i+1})\right).$$ And then order s = 0: Now stick all of the above terms together, − δi pi + pi+1 2log(qi+1) + δi 3piq 2 i+1 2q 2 i + δipiqi+1 4(qi+1 − qi) + δi −piq 2 i+1 q 2 i + δi −piq 3 i+1 q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi pi+1q 2 i+1 2qi(qi+1 − qi) + δi−piqi+1 4(qi+1 − qi) + δi −pi+1qi+1 4(qi+1 − qi) + δi piq 3 i+1 2q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi −pi+1q 2 i+1 2qi(qi+1 − qi) + δi−3piqi+1 4(qi+1 − qi) + δi −piq 2 i+1 2qi(qi+1 − qi) + δi−piqi 4(qi+1 − qi) + δi−3pi+1qi 4(qi+1 − qi) + δi3piqi+1 2(qi+1 − qi) + δipi+1qi+1 2(qi+1 − qi) + δi 2piqiqi+1 − piq 2 i − pi+1q 2 i 2(qi+1 − qi) 2 (log(qi) − log(qi+1)). 
(170) Start by considering only the log terms, −δi pi + pi+1 2log(qi+1) + δi 2piqiqi+1 − piq 2 i − pi+1q 2 i 2(qi+1 − qi) 2 (log(qi) − log(qi+1)), (171) δi 2piqiqi+1 log(qi) − piq 2 i log(qi) − pi+1q 2 i log(qi) − 2piqiqi+1 log(qi+1) + piq 2 i log(qi+1) + pi+1q 2 i log(qi+1) − (pi + pi+1)(qi+1 − qi) 2 log(qi+1) 2(qi+1 − qi) 2 , (172) δi 2(qi+1 − qi) 2 2piqiqi+1 log(qi) − piq 2 i log(qi) − pi+1q 2 i log(qi) − 2piqiqi+1 log(qi+1) + piq 2 i log(qi+1) +pi+1q 2 i log(qi+1) − piq 2 i+1 log(qi+1) + 2piqiqi+1 log(qi+1) − piq 2 i log(qi+1) − pi+1q 2 i+1 log(qi+1) +2pi+1qiqi+1 log(qi+1) − pi+1q 2 i log(qi+1), (173) $$\frac{\delta_{1}}{2(q_{t+1}-q_{t})^{2}}\left(2p_{t}q_{t}q_{t+1}\log(q_{t})-p_{t}q_{t}^{2}\log(q_{t})-p_{t+1}q_{t}^{2}\log(q_{t})+2p_{t+1}q_{t}q_{t+1}\log(q_{t+1})\right.\\ \left.-p_{t}q_{t+1}^{2}\log(q_{t+1})-p_{t+1}q_{t+1}^{2}\log(q_{t+1})\right).\tag{174}$$ $\left(176\right)$ $\left(177\right)$ $\left(178\right)$ ... Note that $$-(p_{t}\log(q_{t})+p_{t+1}\log(q_{t+1}))(q_{t+1}-q_{t})^{2}=$$ $$-p_{t}q_{t}^{2}\log(q_{t})+2p_{t}q_{t}q_{t+1}\log(q_{t})-p_{t}q_{t+1}^{2}\log(q_{t})-p_{t+1}q_{t}^{2}\log(q_{t+1})+2p_{t+1}q_{t}q_{t+1}+\log(q_{t+1})-p_{t+1}q_{t+1}^{2}\log(q_{t+1})\tag{17}$$ (175) is almost, but not quite, a match for the numerator - two of the logs are swapped (underlined). This can be corrected by adding $$\begin{aligned} p_i q_{i+1}^2 \log(q_i) + p_{i+1} q_i^2 \log(q_{i+1}) - p_i q_{i+1}^2 \log(q_{i+1}) - p_{i+1} q_i^2 \log(q_i), \nonumber \\ (p_i q_{i+1}^2 - p_{i+1} q_i^2) \log(q_i) + (p_{i+1} q_i^2 - p_i q_{i+1}^2) \log(q_{i+1}), \nonumber \\ (p_{i+1} q_i^2 - p_i q_{i+1}^2) \left(\log(q_{i+1}) - \log(q_i)\right), \end{aligned}$$ terms are ... 
and rewrite the log terms as $$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\left(\log(q_{i+1})-\log(q_{i})\right),\tag{1}$$ $$(179)$$ which is quite elegant considering the beginning (there is the question of if it has a geometric meaning, which could lead to a much more elegant derivation), with numerical stability. Now work through all of the non-log terms: δi 3piq 2 i+1 2q 2 i + δipiqi+1 4(qi+1 − qi) + δi −piq 2 i+1 q 2 i + δi −piq 3 i+1 q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi pi+1q 2 i+1 2qi(qi+1 − qi) + δi−piqi+1 4(qi+1 − qi) + δi −pi+1qi+1 4(qi+1 − qi) + δi piq 3 i+1 2q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi −pi+1q 2 i+1 2qi(qi+1 − qi) + δi−3piqi+1 4(qi+1 − qi) + δi −piq 2 i+1 2qi(qi+1 − qi) + δi−piqi 4(qi+1 − qi) + δi−3pi+1qi 4(qi+1 − qi) + δi3piqi+1 2(qi+1 − qi) + δipi+1qi+1 2(qi+1 − qi) , (180) δi 3piq 2 i+1 2q 2 i + δi −2piq 2 i+1 2q 2 i + δi −2piq 3 i+1 2q 2 i (qi+1 − qi) + δi piq 3 i+1 2q 2 i (qi+1 − qi) + δi pi+1q 2 i+1 2qi(qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi −pi+1q 2 i+1 2qi(qi+1 − qi) + δi −piq 2 i+1 2qi(qi+1 − qi) + δi−piqi+1 4(qi+1 − qi) + δi −pi+1qi+1 4(qi+1 − qi) + δipiqi+1 4(qi+1 − qi) + δi−piqi 4(qi+1 − qi) + δi−3piqi+1 4(qi+1 − qi) + δi−3pi+1qi 4(qi+1 − qi) + δi6piqi+1 4(qi+1 − qi) + δi 2pi+1qi+1 4(qi+1 − qi) , (181) δi piq 2 i+1 2q 2 i + δi −piq 3 i+1 2q 2 i (qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi (3pi + pi+1)qi+1 − (pi + 3pi+1)qi 4(qi+1 − qi), (182) δi piq 3 i+1 − piqiq 2 i+1 − piq 3 i+1 2q 2 i (qi+1 − qi)+ δi piq 2 i+1 2qi(qi+1 − qi) + δi (3pi + pi+1)qi+1 − (pi + 3pi+1)qi 4(qi+1 − qi), (183) δi −piq 2 i+1 2qi(qi+1 − qi) + δi piq 2 i+1 2qi(qi+1 − qi) + δi (3pi + pi+1)qi+1 − (pi + 3pi+1)qi 4(qi+1 − qi), (184) δi (3pi + pi+1)qi+1 − (pi + 3pi+1)qi 4(qi+1 − qi). 
(185) Finally, putting it all together gets $$-\delta_{i}\int_{0}^{1}((1-t)p_{i}+tp_{i+1})\log((1-t)q_{i}+tq_{i+1})dt=$$ $$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{2(q_{i+1}-q_{i})^{2}}\left(\log(q_{i+1})-\log(q_{i})\right)+\delta_{i}\frac{(3p_{i}+p_{i+1})q_{i+1}-(p_{i}+3p_{i+1})q_{i}}{4(q_{i+1}-q_{i})},\tag{186}$$ where only the first term is needed if qi+1 = qi. This works, as verified by comparison to numerical integration. Need to switch evaluation strategy when too close to singularity. ## A.2.1 Behaviour Around Singularity First demonstrate that it's mathematically, if not computationally, stable. To ignore the second and third term when qi+1 = qiit needs to be shown that $$\lim_{q_{t+1}\rightarrow+\infty}\left[\delta_{t}\frac{p_{t+1}q_{t}^{2}-p_{t}q_{t+1}^{2}}{2(q_{t+1}-q_{t})^{2}}\left(\log(q_{t+1})-\log(q_{t})\right)+\delta_{t}\frac{(3p_{t}+p_{t+1})q_{t+1}-(p_{t}+3p_{t+1})q_{t}}{4(q_{t+1}-q_{t})}\right]=0.\tag{187}$$ This is not the case if the two terms are considered independently - it is only the case if they are merged. To show this limit start by converting the log(·) of the second term back into an infinite sequence, 2δi pi+1q 2 i − piq 2 i+1 2(qi+1 − qi) 2X n∈{1,3,5,...} 1 n qi+1 qi− 1 qi+1 qi+ 1!n, (188) δi pi+1q 2 i − piq 2 i+1 (qi+1 − qi) 2X n∈{1,3,5,...} 1 n (qi+1 − qi) n (qi+1 + qi) n , (189) δi(pi+1q 2 i − piq 2 i+1)X n∈{1,3,5,...} 1 n (qi+1 − qi) n−2 (qi+1 + qi) n. 
(190) $$(188)$$ $$(189)$$ $$(190)$$ Note that the limit is trivially true for n = 3 onwards, so you only need to consider the n = 1 case, which can be dropped into the rest of the limit statement δi(pi+1q 2 i − piq 2 i+1) (qi+1 − qi) −1 (qi+1 + qi)+ δi (3pi + pi+1)qi+1 − (pi + 3pi+1)qi 4(qi+1 − qi), (191) δi 4pi+1q 2 i − 4piq 2 i+1 + (3pi + pi+1)qi+1(qi+1 + qi) − (pi + 3pi+1)qi(qi+1 + qi) 4(qi+1 − qi)(qi+1 + qi), (192) δi 4pi+1q 2 i − 4piq 2 i+1 + (3pi + pi+1)q 2 i+1 + (2pi − 2pi+1)qiqi+1 − (pi + 3pi+1)q 2 i 4(qi+1 − qi)(qi+1 + qi), (193) δi (pi+1 − pi)q 2 i+1 − 2(pi+1 − pi)qiqi+1 + (pi+1 − pi)q 2 i 4(qi+1 − qi)(qi+1 + qi), (194) $$(191)$$ δi (pi+1 − pi)(qi+1 − qi) 2 4(qi+1 − qi)(qi+1 + qi) , (195) δi (pi+1 − pi)(qi+1 − qi) 4(qi+1 + qi), (196) for which the required limit is simply true. The above is suggestive of an alternate way to write the equation out; consider the unused terms from the second equation, $$\delta_{i}(p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2})\sum_{n\in\{3,5,7,\ldots\}}\frac{1}{n}\frac{(q_{i+1}-q_{i})^{n-2}}{(q_{i+1}+q_{i})^{n}},$$ $$(195)$$ $$(196)$$ $$\delta_{i}(p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2})\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n+2}\frac{(q_{i+1}-q_{i})^{n}}{(q_{i+1}+q_{i})^{n+2}},$$ $$\delta_{i}\frac{p_{i+1}q_{i}^{2}-p_{i}q_{i+1}^{2}}{(q_{i+1}+q_{i})^{2}}\sum_{n\in\{1,3,5,\ldots\}}\frac{1}{n+2}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{n},$$ which appears to be the end of the line. But writing it all together gets $$-\delta_{i}\int_{0}^{1}\left((1-t)p_{i}+tp_{i+1}\right)\log[(1-t)q_{i}+tq_{i+1}]dt=$$ $$-\delta_{i}\frac{p_{i}\log(q_{i})+p_{i+1}\log(q_{i+1})}{2}+\delta_{i}\frac{(p_{i+1}-p_{i})(q_{i+1}-q_{i})}{4(q_{i+1}+q_{i})}+\delta_{i}\frac{p_{i+1}q_{i}^{2}-pq_{i+1}^{2}}{(q_{i+1}+q_{i})^{2}}\sum_{n\in\{1,3,5,\ldots,n\}}\frac{1}{n+2}\left(\frac{q_{i+1}-q_{i}}{q_{i+1}+q_{i}}\right)^{n},\tag{200}$$ which is a result of all the above is the same as it is a result of the $i$-th order parameter. 
$$(197)$$ $$(198)$$ $$(199)$$ which is a good fallback when in the qi ≊ qi+1 condition, as the terms of the infinite series move towards zero quickly in typical usage (see main text for quantification). ## A.2.2 Minor Details For completeness, results from *The On-Line Encyclopedia of Integer Sequences* are now proven and then combined to get the term needed by Equation 120. Firstly8, $${\frac{1}{n(n+1)(n+2)}}={\frac{n(n+3)}{4(n+1)(n+2)}}-{\frac{(n-1)(n+2)}{4n(n+1)}}$$ $${4}={n}^{{{2}}}{\left({n}+{3}\right)}-{\left({n}-{1}\right)}{\left({n}+{2}\right)}^{{{2}}}$$ $${4}={n}^{{{3}}}+{3}{n}^{{{2}}}-{\left({n}-{1}\right)}{\left({n}^{{{2}}}+{4}{n}+{4}\right)}$$ $${4}={n}^{{{3}}}+{3}{n}^{{{2}}}-{n}^{{{3}}}-{4}{n}^{{{2}}}-{4}{n}+{n}^{{{2}}}+{4}{n}+{4}$$ $${4}={4}$$ $$(201)$$ $$(202)$$ $$(203)$$ $$(204)$$ $$(206)$$ Combine the above to get $${\frac{1}{n(n+1)(n+2)}}={\frac{n(n+1)(n+3)}{4(n+2)}}-{\frac{n^{2}(n+3)}{4(n+1)}}-{\frac{(n-1)n(n+2)}{4(n+1)}}+{\frac{(n-1)^{2}(n+2)}{4n}},$$ 4n, (210) 4 = (n − 1)2(n + 1)(n + 2)2 − n 3(n + 2)(n + 3) − (n − 1)n 2(n + 2)2 + n 2(n + 1)2(n + 3), (211) 4 = (n 2−2n+ 1)(n+ 1)(n 2+ 4n+ 4)−n 3(n 2+ 5n+ 6)−(n−1)n 2(n 2+ 4n+ 4)+n 2(n 2+ 2n+ 1)(n+ 3), (212) 4 = (n 2−2n+1)(n 3+5n 2+8n+4)−(n 5+5n 4+6n 3)−(n−1)(n 4+4n 3+4n 2)+n 2(n 3+5n 2+7n+3), (213) 4 = (n 5 + 3n 4 − n 3 − 7n 2 + 4) − (n 5 + 5n 4 + 6n 3) − (n 5 + 3n 4 − 4n 2) + (n 5 + 5n 4 + 7n 3 + 3n 2), (214) 4 = n 5 + 3n 4 − n 3 − 7n 2 + 4 − n 5 − 5n 4 − 6n 3 − n 5 − 3n 4 + 4n 2 + n 5 + 5n 4 + 7n 3 + 3n 2, (215) 4 = (1 − 1 − 1 + 1)n 5 + (3 − 5 − 3 + 5)n 4 + (−1 − 6 + 7)n 3 + (−7 + 4 + 3)n 2 + 4, (216) 4 = 4, (217) which can also be written as $$\frac{1}{n(n+1)(n+2)}=\frac{(n-1)^{2}(n+2)}{4n}-\frac{n^{2}(n+3)+(n-1)n(n+2)}{4(n+1)}+\frac{n(n+1)(n+3)}{4(n+2)}.\tag{218}$$ Then91 $$\frac{1}{n(n+1)}=\frac{n}{n+1}-\frac{n-1}{n}$$ $$1=n^{2}-(n-1)(n+1)$$ $$1=n^{2}-n^{2}+1$$ $$1=1$$ $$(207)$$ $$(208)$$ $$1=1$$ $$(210)$$ 1 = 1 (209) ## B Gpu Code The below Python10 code is for Jax11 and has been developed 
with version 0.4.25. Validation, including of gradients, has been performed and may be found in the supplementary material alongside code for the demonstrations within the main text.

```python
import jax
import jax.numpy as jnp


@jax.jit
def crossentropy(p, q, delta):
    """Returns the cross entropy between two regular orograms with aligned
    and evenly spaced bin centers, given as p and q. delta is the spacing
    between bins. Will be approximate in some situations, as it dodges
    around infinities and singularities to remain stable whatever you
    give it."""
    # First term requires what is effectively the relative area...
    halved_ends = jnp.ones(p.shape[0])
    halved_ends = halved_ends.at[0].set(0.5)
    halved_ends = halved_ends.at[-1].set(0.5)

    # Assorted basic terms...
    log_q = jnp.log(jnp.maximum(q, 1e-32))
    pdelta = p[1:] - p[:-1]
    qdelta = q[1:] - q[:-1]
    qsum = q[:-1] + q[1:]
    qsqr = jnp.square(q)
    top = p[1:]*qsqr[:-1] - p[:-1]*qsqr[1:]

    # Inner term of infinite loop (used elsewhere), done in a stable way,
    # plus variant with extra qsum...
    notzero = qsum > 1e-5
    qsum_safe = jnp.maximum(qsum, 1e-5)
    inner = qdelta / qsum_safe
    inner_ds2 = qdelta / (jnp.square(qsum_safe)*qsum_safe)

    # Do the stable parts...
    ret = -(halved_ends * p * log_q).sum()
    ret += 0.25 * (pdelta * inner).sum()

    # Do the two branches, with stability hacks for the unstable one...
    ## Unstable but accurate when qdelta is high...
    abs_qdelta = jnp.fabs(qdelta)
    sign_qdelta = -2 * (jnp.signbit(qdelta) - 0.5)

    qdelta_sqr_safe = jnp.maximum(jnp.square(qdelta), 1e-10)
    qdelta_qsum_safe = sign_qdelta * jnp.maximum(abs_qdelta * qsum, 1e-10)
    ret_unstable = top * (0.5 * (log_q[1:] - log_q[:-1]) / qdelta_sqr_safe
                          - 1 / qdelta_qsum_safe)

    ## Stable but only accurate when qdelta is low...
    ret_approx = top * (1/3 + jnp.square(inner) / 5) * inner_ds2

    ## Pick the right branch for each and sum in...
    ret += jax.lax.select(abs_qdelta > 1e-5, ret_unstable, ret_approx).sum()

    return delta * ret


grad_crossentropy = jax.jit(jax.grad(crossentropy, (0, 1)))
```

Python: https://www.python.org/
Jax: https://github.com/google/jax
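As a cross-check that does not require Jax, the per-segment term can also be evaluated in plain Python directly from Equation 186, with Equation 200 as the fallback near the $q_i \approx q_{i+1}$ singularity. This is a minimal illustrative sketch only: the function name, the switching tolerance, and the four-term series truncation are choices made here, not part of the implementation above.

```python
import math


def segment_neg_plog_q(p0, p1, q0, q1, delta, tol=1e-6):
    """-delta * integral over [0, 1] of ((1-t)*p0 + t*p1) * log((1-t)*q0 + t*q1) dt,
    i.e. one segment's contribution to the cross entropy (Equation 186),
    switching to the series form (Equation 200) when q0 and q1 are close."""
    dq = q1 - q0
    qs = q1 + q0
    # First (always stable) term, shared by both forms.
    first = -(p0 * math.log(q0) + p1 * math.log(q1)) / 2.0
    if abs(dq) > tol * qs:
        # Closed form (Equation 186); only unstable when q0 is close to q1.
        return delta * (first
                        + (p1 * q0 * q0 - p0 * q1 * q1) / (2.0 * dq * dq)
                        * (math.log(q1) - math.log(q0))
                        + ((3.0 * p0 + p1) * q1 - (p0 + 3.0 * p1) * q0) / (4.0 * dq))
    # Series fallback (Equation 200); the odd terms shrink quickly when q0 is close to q1.
    r = dq / qs
    series = sum((r ** n) / (n + 2) for n in (1, 3, 5, 7))
    return delta * (first
                    + (p1 - p0) * dq / (4.0 * qs)
                    + (p1 * q0 * q0 - p0 * q1 * q1) / (qs * qs) * series)
```

For instance, with $p_i = p_{i+1} = 1$, $q_i = 1$ and $q_{i+1} = e$ the integral has the exact value $-1/(e-1)$, which the closed form reproduces; at $q_i = q_{i+1}$ only the first term survives, matching the remark after Equation 186.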
Review 1:

Summary:
The paper's main contribution is an analytical expression for the cross-entropy of two unidimensional probability density functions which are piecewise-linear. This may be relevant for machine learning tasks where differentiating such an objective is needed.

Strengths and Weaknesses:

**Strengths:** The problem addressed is of rather general interest.

**Weaknesses:**
- The core of the contribution is the first part of section 3, which describes a derivation. The clarity of this section may be improved to better follow the derivation. I wrote some suggestions in the Requested Changes section. Note: The Appendix presents the full derivation, which, in the authors' own words, "probably doubles as a statement about mathematical paranoia" and "it’s probably more valuable as an example of the difference between what goes into the body of a paper and what’s actually done, to inform and/or terrify future researchers".
- The method requires that the two piecewise linear densities have equal segments, i.e., equal number and location of change points. At one point in the paper, the authors add: "though if the linear segments are not aligned, extra splits can be added": do you mean that the union of the change points of the two piecewise linear densities should be considered?
- Based on the related work section, I did not get a good understanding of what are the advantages/disadvantages of the present contribution with respect to pre-existing methods for tackling this problem. Could you please elaborate more on this?

Requested Changes:
Given that this is a very maths-heavy paper, it is important that equations are readable and the passage-by-passage derivation is easy to follow. Readability of the maths could be improved, for example by highlighting relevant terms in the equations, e.g., with the `underbrace` command, `\underbrace{<term in the equation>}_{\text{(i)}}`, to then refer to individual terms in the text ("term (i) in equation ...").
Minor: Some of the text could be made a bit smoother, e.g., before eq. (5), "combine (3) and (4) to obtain"; "Apply integration by parts and clean up" could be rephrased as "through integration by parts and rearrangement of the remaining terms, we get...". I found this specific passage of (3)-(4) to (5) hard to follow.

Typo: "Computing with Equation 15 is stable when a safe distance from the singularity" missing word?

Broader Impact Concerns: No broader impact concern.

==================================================

Review 2:

Summary:
The paper presents a detailed derivation of two analytic expressions (one closed form but unstable at singularities and one requiring an infinite series that can be truncated as an approximation) for cross-entropy between two piecewise linear probability distribution functions. The choice of expression can be made per interval if the difference in the probability density function is too small, causing a singularity. Some numerical experiments demonstrate the expressions' behavior. Computationally, to get the same level of precision the truncated expression is much faster than Monte-Carlo integration.

Strengths and Weaknesses:

# Strengths
The detailed derivation appears to be a non-trivial result. The two expressions together constitute a way to perform the computation efficiently across many break points. The paper provides a detailed look at numerical precision and improvements in precision (caption of Figure 1).

# Weaknesses
Clear motivation is lacking in the introduction. The claim that this is useful to ML is not supported without concrete examples connecting the formulation with ones based on empirical samples encountered in ML problems. The related works describe the triangle kernel for kernel density; if this is a motivating example then how the piecewise linear function(s) would be defined in this case should be given.
Relatedly, in the numerical evaluation, there is a lack of connection between actual ML applications and the proposed approach. Overall, the paper's organization should be changed, with a clarified motivation up front, then the related work that connects this motivation to the proposal, and some examples on ML tasks. Otherwise, it seems more like a mathematics paper.

Requested Changes:

**Major:**
1. The paper should be clear up front about the conditions under which piecewise linear PDFs would exist in a machine learning context. A clear statement of assumptions around how this would be useful should be in the introduction. Later, the paper should include specific examples of how a piecewise linear PDF is generated from data and when the target distribution can be approximated as piecewise linear. For example, the related works describe the triangle kernel for kernel density; if this is a motivating example then how the piecewise linear function(s) would be defined in this case should be given.
2. Relatedly, the paper should include some comparisons with the cited examples of entropy estimation that have already been published. I'd also like to see examples of comparing mixtures of Gaussians (with linear approximations), as this is a task that motivates the use of other divergences.
3. It is not clear to me what value of $N$ is used in practice to ensure precision at a specified tolerance. Is the bound used in the code to give a specified tolerance?

**Minor:**
1. In the equations, it is typical convention to have operands at the beginning of a line rather than the end. Also, full stops (e.g., equation 10) or commas are missing in some cases, and there is a spurious equal sign (in the Appendix).

*Introduction*
2. I'd prefer to stick with machine learning in the first sentence rather than calling everything AI.
3. In the second sentence, "It" is ambiguous from context.
4.
Clearly cross-entropy is between distributions, so it needs to be clarified to the reader when it is applicable to the range considered in continuous regression. I guess this is under a parametric assumption of the distribution around observed values, but it must be clarified.
5. Before equation 1, the description of the PDFs with breakpoints should be given.
6. The mix of English and symbols like "Entropy ($=H(P,P)$)" is hard to parse; this should be revised.

*Related work*
7. Does "regular histogram" have uniformly sized bins?
8. Regarding the use of "excessive number of change points": isn't the number of breakpoints relative to the number of data points in the kernel density estimate? So averaging across histograms won't necessarily create more breakpoints.
9. "course" -> "coarse"
10. "unknown distributions" -> "unknown distribution's"
11. Regarding a neural network with ReLU functions: even if the input is 1D, how would this property be useful for input and output pairs used in machine learning training?

*Derivation*
12. "cross entropy" not hyphenated.
13. The steps leading to Equation 7 should be clarified with insights from Equation 38 from the appendix. Otherwise the indexing going from odd positive integers to all positive integers isn't clear.
14. The "argument has been prefilled" is not clear. Is the argument $i$?
15. "floating point maths" -> "floating point operations"

*Numerical validation*
16. The scale of Figure 3 is such that the precision is not demonstrated. Perhaps the error can be plotted separately.

*Appendix*
17. Some of the text in the supplement is unprofessional: "terrify", "horrifying", "poking up like an enormous boil", and "so it’s time to scream into the void again". The paper should avoid the attempts at humor—even the "(yay!)" is unnecessary. The appendix section title "Hulk smash fraction of minor irritation" is indecipherable cultural jargon.
Broader Impact Concerns: no concerns

==================================================

Review 3:

Summary:
This paper derives a formula for the cross-entropy of two 1-dimensional, piecewise linear probability distribution functions p(t) and q(t); ultimately, this simply amounts to computing the integral
$$ \int_0^1 (p_0(1-t) + p_1 t) \log (q_0(1-t) + q_1 t) dt, $$
for coefficients $p_0,p_1,q_0,q_1>0$. The authors present a detailed derivation of a closed-form formula. They further discuss potential numerical issues with this formula, which arise because the formula contains near-vanishing denominators in the limit $q_0\to q_1$ which formally cancel, but do not cancel numerically due to round-off errors. They also discuss an implementation which avoids numerical instability, and present examples which numerically verify their derivation by comparison to quadrature/MC integration.

Strengths and Weaknesses:

Strengths:
* The paper is clearly written, the objective is clear.
* I verified that the main result is correct.

Weaknesses:
* The authors argue that a closed-form formula for this cross-entropy is needed specifically to employ cross-entropy as a loss function. From reading the article, it is not clear to me which application would need to compute the cross-entropy between piecewise linear PDFs, as opposed to more standard piecewise constant PDFs (i.e. histograms) for which evaluation of the integral would be straightforward.
* The derivation of the formula for the integral in this paper is overly complicated.
* I'm not entirely convinced that the topic of this paper is of particular interest to the TMLR readership; this is in part related to the first point above, but it is also the case that the main result of this paper can also be obtained in a few lines of Mathematica code (or derived by hand if one is willing to go through a little bit of algebra).
But I'm willing to accept that having a searchable resource available which immediately gives the result of the paper could be of value in some cases.

Requested Changes:
I would like to ask the authors to
* Expand a bit on why and where this formula finds applications.
* Provide a shorter derivation of the formula. The infinite series expansion in equation (4) is unnecessary. Instead, one can use a simple change of variables $q = q_0 + (q_1-q_0)t = q_0 + \Delta q t$ to write the integral in the form
$$ \int_0^1 (p_0(1-t) + p_1 t) \log (q_0(1-t) + q_1 t) dt = \int_{q_0}^{q_1} \left(p_0 + \frac{\Delta p}{\Delta q} (q-q_0)\right) \log(q) \frac{dq}{\Delta q} $$
and observe that this is simply a sum of two terms: the first is a constant multiple of $\int_{q_0}^{q_1} \log(q) \, dq$, the second is a multiple of $\int_{q_0}^{q_1} q\log(q) \, dq$. Both of these integrals have closed-form formulae.

Broader Impact Concerns: n/a

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: Two of the reviewers advocated for accepting the paper, with one leaning reject. To quote the more negative reviewer: "The paper would be much more acceptable with a concrete example of using the cross-entropy with piecewise linear assumptions for optimizing a data representation or processing function. For instance, simply moving data points to match a target distribution using gradient flows, which would illustrate the stability of the gradients. Another reviewer's suggestions, along with my limited analysis, point to some of the stability being resolved by careful pairings of terms different than the paper uses. Thus, the highlighted numerical stability may be of less concern than another one that may have actually better tolerance. This casts doubt on the result, which requires series calculation if the bins are too small such that the heights are constant."
This echoes to some degree the other reviewers as well, who all felt that the connection to the ML community could be made stronger. This does not entail any major change to the paper, but what would help is to improve the motivation and to use a concrete example, as suggested above. The other requested change, which was also noted by multiple reviewers, is to ensure that the language is cleaned up to be more "professional" and less distracting. There was some seemingly unresolved discussion on stability, which the authors should examine more closely. It's unclear to me if this requires any additional change to the paper, but please have another look.

==================================================
# On The Stochastic (Variance-Reduced) Proximal Gradient Method For Regularized Expected Reward Optimization

Ling Liang liang.ling@u.nus.edu
Department of Mathematics, University of Maryland at College Park

Haizhao Yang hzyang@umd.edu
Department of Mathematics and Department of Computer Science, University of Maryland at College Park

Reviewed on OpenReview: *https://openreview.net/forum?id=Ve4Puj2LVT*

## Abstract

We consider a regularized expected reward optimization problem in the non-oblivious setting that covers many existing problems in reinforcement learning (RL). In order to solve such an optimization problem, we apply and analyze the classical stochastic proximal gradient method. In particular, the method is shown to admit an O(ϵ⁻⁴) sample complexity for reaching an ϵ-stationary point, under standard conditions. Since the variance of the classical stochastic gradient estimator is typically large, which slows down the convergence, we also apply an efficient stochastic variance-reduced proximal gradient method with an importance-sampling-based ProbAbilistic Gradient Estimator (PAGE). Our analysis shows that the sample complexity can be improved from O(ϵ⁻⁴) to O(ϵ⁻³) under additional conditions. Our results on the stochastic (variance-reduced) proximal gradient method match the sample complexity of their most competitive counterparts for discounted Markov decision processes under similar settings. To the best of our knowledge, the proposed methods represent a novel approach to addressing the general regularized reward optimization problem.

## 1 Introduction

Reinforcement learning (RL) (Sutton & Barto, 2018) has recently become a highly active research area of machine learning that learns to make sequential decisions via interacting with the environment.
In recent years, RL has achieved tremendous success in many applications such as control, job scheduling, online advertising, and game-playing (Zhang & Dietterich, 1995; Pednault et al., 2002; Mnih et al., 2013), to mention a few. One of the central tasks of RL is to solve a certain (expected) reward optimization problem for decision-making. Following this research theme, we consider the following problem of maximizing the regularized expected reward:
$$\max_{\theta\in\mathbb{R}^n}\ F(\theta) := \mathbb{E}_{x\sim\pi_\theta}\left[R_\theta(x)\right] - G(\theta),\tag{1}$$
where G : R^n → R ∪ {+∞} is a closed proper convex (possibly nonsmooth) function, x ∈ R^d, Rθ : R^d → R is the reward function depending on the parameter θ, and πθ denotes the probability distribution over a given subset S ⊆ R^d parameterized by θ ∈ R^n. By adapting the convention in RL, we call πθ a policy parameterized by θ. Moreover, for the rest of this paper, we denote J(θ) := E_{x∼πθ}[Rθ(x)] as the expected reward function in the *non-oblivious* setting. The learning objective is to learn a decision rule via finding the policy parameter θ that maximizes the regularized expected reward. To the best of our knowledge, the study of the general model (1) has been limited in the literature. Hence, developing and analyzing algorithmic frameworks for solving the problem is of great interest. There is a large body of work in supervised learning focusing on the *oblivious* setting (Zhang, 2004; Hastie et al., 2009; Shapiro et al., 2021), i.e., J(θ) := E_{x∼π}[Rθ(x)], where x is sampled from an invariant distribution π. Clearly, problem (1) can be viewed as a generalization of those machine learning problems with oblivious objective functions. In the literature, an RL problem is often formulated as a discrete-time discounted Markov decision process (MDP) (Sutton & Barto, 2018), which aims to learn an optimal policy via optimizing the (discounted) cumulative sum of rewards.
We can also see that the learning objective of an MDP can be covered by problem (1) with the property that the function R(x) does not depend on θ (see Example 3.3). Recently, the application of RL for solving combinatorial optimization (CO) problems, which are typically NP-hard, has attracted much attention. These CO problems may include the traveling salesman problem and related problems (Bello et al., 2016; Mazyavkina et al., 2021), the reward optimization problem arising from the finite expression method (Liang & Yang, 2022; Song et al., 2023), and the general binary optimization problem (Chen et al., 2023), to name just a few. The common key component of the aforementioned applications is reward optimization, which could also be formulated as problem (1). There also exist problems with general reward functions that are outside the scope of the cumulative sums of rewards over trajectories used in MDPs. An interesting example is the MDP with general utilities; see, e.g., (Zhang et al., 2020a; Kumar et al., 2022; Barakat et al., 2023) and references therein. Adding a regularizer to the objective function is a commonly used technique to impose desirable structures on the solution and/or to greatly enhance the expression power and applicability of RL (Lan, 2023; Zhan et al., 2023). When one considers the direct/simplex parameterization (Agarwal et al., 2021) of πθ, a regularization function using the indicator function for the standard probability simplex is needed. Moreover, by using other indicator functions for general convex sets, one is able to impose some additional constraints on the parameter θ. For the softmax parameterization, one may also enforce a boundedness constraint on θ to prevent it from taking values that are too large. This can avoid potential numerical issues, including overflow errors on a floating-point system.
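The indicator-function regularizers just described make G nonsmooth, but their proximal operators are simply Euclidean projections onto the corresponding sets, which is what makes proximal-gradient-style updates convenient here. The following minimal sketch (function names and the update shown are illustrative choices, not the algorithm analyzed in this paper) implements the two projections mentioned above, i.e., clipping for a box constraint on θ and the standard sort-and-threshold projection onto the probability simplex, together with a generic projected stochastic gradient (ascent) step:

```python
def prox_box(theta, bound):
    # prox of the indicator of the box [-bound, bound]^n: coordinate-wise clipping
    return [min(max(v, -bound), bound) for v in theta]


def prox_simplex(theta):
    # Euclidean projection onto the probability simplex {x >= 0, sum(x) = 1},
    # via the standard sort-and-threshold algorithm
    u = sorted(theta, reverse=True)
    css, tau = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (1.0 - css) / j
        if uj + t > 0.0:
            tau = t
    return [max(v + tau, 0.0) for v in theta]


def prox_grad_step(theta, grad_estimate, step, prox):
    # one stochastic proximal gradient (ascent) step for an indicator regularizer:
    #   theta <- prox(theta + step * stochastic gradient estimate of J)
    return prox([v + step * g for v, g in zip(theta, grad_estimate)])
```

For example, `prox_simplex([2.0, 0.0])` returns `[1.0, 0.0]`, and for an indicator G the step size only enters through the gradient update, since the projection is independent of the proximal parameter.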
On the other hand, there are incomplete parametric policy classes, such as the log-linear and neural policy classes, that are often formulated as {π_θ | θ ∈ Θ}, where Θ is a closed convex set (Agarwal et al., 2021). In this case, the indicator function is still necessary and useful. Some recent works (see, e.g., (Ahmed et al., 2019; Agarwal et al., 2020; Mei et al., 2020; Cen et al., 2022)) have investigated the impact of entropy regularization for MDPs. Systematic studies on general convex regularization for MDPs were limited until the recent works (Pham et al., 2020; Lan, 2023; Zhan et al., 2023). Finally, problem (1) takes the same form as the stochastic optimization problem with decision-dependent distributions (see, e.g., (Drusvyatskiy & Xiao, 2023) and references therein), which leads to numerous real-world applications such as performative prediction (Mendler-Dünner et al., 2020; Perdomo et al., 2020), concept drift (Gama et al., 2014), strategic classification (Tsirtsis et al., 2024; Milli et al., 2019), and causal inference (Yao et al., 2021). Consequently, we can see that problem (1) is in fact quite general and has promising modeling power, as it covers many existing problems in the literature. The purpose of this paper is to leverage existing tools and results in MDPs and nonconvex optimization for solving the general regularized expected reward optimization problem (1) with general policy parameterization, which, to the best of our knowledge, has not been formally considered in the RL literature. It is well known that the policy gradient method (Williams, 1992; Sutton et al., 1999; Baxter & Bartlett, 2001), which lies at the heart of RL, is one of the most competitive and efficient algorithms due to its simplicity and versatility. Moreover, the policy gradient method is easy to implement and can be paired with other effective techniques.
In this paper, we observe that the stochastic proximal gradient method, which shares the same spirit as the policy gradient method, can be applied directly to the targeted problem (1) with convergence guarantees to a stationary point. Since the classical stochastic gradient estimator typically introduces a large variance, there is also a need to design advanced stochastic gradient estimators with smaller variances. To this end, we shall also look into a certain stochastic variance-reduced proximal gradient method and analyze its convergence properties. In particular, the contributions of this paper are summarized as follows.

- We consider a novel and general regularized reward optimization model (1) that covers many existing important models in the machine learning and optimization literature. Thus, problem (1) admits promising modeling power, which encourages potential applications.

- In order to solve our targeted problem, we consider applying the classical stochastic proximal gradient method and analyze its convergence properties. We first demonstrate that the gradient of J(·) is Lipschitz continuous under standard conditions on the reward function R_θ(·) and the parameterized policy π_θ(·). Using the L-smoothness of J(·), we then show that the classical stochastic proximal gradient method with a constant step size (depending only on the Lipschitz constant of ∇_θJ(·)) for solving problem (1) outputs an ϵ-stationary point (see Definition 3.4) within T := O(ϵ^{-2}) iterations, with a sample size of O(ϵ^{-2}) per iteration, where ϵ > 0 is a given tolerance. Thus, the total sample complexity becomes O(ϵ^{-4}), which matches the current state-of-the-art sample complexity of the classical stochastic policy gradient method for MDPs; see, e.g., (Williams, 1992; Baxter & Bartlett, 2001; Zhang et al., 2020b; Xiong et al., 2021; Yuan et al., 2022).
- Moreover, in order to further reduce the variance of the stochastic gradient estimator, we utilize an importance-sampling-based probabilistic gradient estimator, which leads to an efficient single-looped variance-reduced method. The application of this probabilistic gradient estimator is motivated by recent progress in developing efficient stochastic variance-reduced gradient methods for solving stochastic optimization problems (Li et al., 2021b) and (unregularized) MDPs (Gargiani et al., 2022). We show that, under additional technical conditions, the total sample complexity is improved from O(ϵ^{-4}) to O(ϵ^{-3}). This result again matches the results of some existing competitive variance-reduced methods for MDPs (Papini et al., 2018; Xu et al., 2019; Pham et al., 2020; Huang et al., 2021; Yang et al., 2022; Gargiani et al., 2022). Moreover, to the best of our knowledge, the application of the above probabilistic gradient estimator is new for solving the regularized expected reward optimization problem (1).

The rest of this paper is organized as follows. We first summarize some related works in Section 2. Next, in Section 3, we present some background information that is needed for the exposition of this paper. Then, in Section 4, we describe the classical stochastic proximal gradient method for solving (1) and present its convergence properties under standard technical conditions. Section 5 is dedicated to describing and analyzing the stochastic variance-reduced proximal gradient method with an importance-sampling-based probabilistic gradient estimator. Finally, we make some concluding remarks and list certain limitations and future research directions in Section 6.

## 2 Related Work

The policy gradient method. One of the most influential algorithms for solving RL problems is the policy gradient method, built upon the foundations established in (Williams, 1992; Sutton et al., 1999; Baxter & Bartlett, 2001).
Motivated by the empirical success of the policy gradient method and its variants, analyzing the convergence properties of these methods has long been one of the most active research topics in RL. Since the objective function J(θ) is generally nonconcave, early works (Sutton et al., 1999; Pirotta et al., 2015) focused on asymptotic convergence to a stationary point. By utilizing the special structure in (entropy-regularized) MDPs, recent works (Liu et al., 2019; Mei et al., 2020; Agarwal et al., 2021; Li et al., 2021a; Xiao, 2022; Cen et al., 2022; Lan, 2023; Fatkhullin et al., 2023) provided some exciting results on global convergence. Meanwhile, since the exact gradient of the objective function can rarely be computed, sampling-based approximate/stochastic gradients have gained much attention. Therefore, many works investigated the convergence properties, including the iteration and sample complexities, of these algorithms with inexact gradients; see, e.g., (Zhang et al., 2020b; Liu et al., 2020; Zhang et al., 2021b; Xiong et al., 2021; Yuan et al., 2022; Lan, 2023) and references therein.

Variance reduction. While the classical stochastic gradient estimator is straightforward and simple to implement, one of its most critical issues is that the variance of the inexact gradient estimator can be large, which generally slows down the convergence of the algorithm. To alleviate this issue, an attractive approach is to pair sample-based policy gradient methods with variance-reduction techniques. Variance-reduced methods were originally developed for solving (oblivious) stochastic optimization problems (Johnson & Zhang, 2013; Nguyen et al., 2017; Fang et al., 2018; Li et al., 2021b) typically arising from supervised learning tasks.
Motivated by the superior theoretical properties and practical performance of stochastic variance-reduced gradient methods, similar algorithmic frameworks have recently been applied to solving MDPs (Papini et al., 2018; Xu et al., 2019; Yuan et al., 2020; Pham et al., 2020; Huang et al., 2021; Yang et al., 2022; Gargiani et al., 2022).

Stochastic optimization with decision-dependent distributions. Stochastic optimization is at the core of modern machine learning applications, whose main objective is to learn a decision rule from a limited data sample that is assumed to generalize well to the entire population (Drusvyatskiy & Xiao, 2023). In the classical supervised learning framework (Zhang, 2004; Hastie et al., 2009; Shapiro et al., 2021), the underlying data distribution is assumed to be static, which turns out to be a crucial assumption when analyzing the convergence properties of common stochastic optimization algorithms. On the other hand, there are problems where the distribution changes over the course of the iterations of a specific algorithm; these are closely related to the concept of performative prediction (Perdomo et al., 2020). In this case, understanding the convergence properties of the algorithm becomes more challenging. Toward this, some recent progress has been made on (strongly) convex stochastic optimization with decision-dependent distributions (Mendler-Dünner et al., 2020; Perdomo et al., 2020; Drusvyatskiy & Xiao, 2023). Moreover, other works have also considered nonconvex problems and obtained some promising results; see (Dong et al., 2023; Jagadeesan et al., 2022) and references therein. Developing the theoretical foundations for these problems has become a very active field.

RL with general utilities. It is known that the goal of an agent associated with an MDP is to seek an optimal policy by maximizing the cumulative discounted reward (Sutton & Barto, 2018).
However, there are decision problems of interest having more general forms. Beyond the scope of the expected cumulative reward in MDPs, some recent works have also looked into RL problems with general utilities; see, e.g., (Zhang et al., 2020a; Kumar et al., 2022; Barakat et al., 2023) as mentioned previously. Global convergence results can also be derived by investigating the hidden convex structure (Zhang et al., 2020a) inherited from the MDP.

## 3 Preliminary

In this paper, we assume that the optimal objective value for problem (1), denoted by F*, is finite and attained, and that the reward function R_θ(·) satisfies the following assumption.

Assumption 3.1. The following conditions with respect to the function R_θ(·) hold:

1. There exists a constant U > 0 *such that*

$$\sup_{\theta\in\mathbb{R}^{n},\,x\in\mathbb{R}^{d}}\;|\mathcal{R}_{\theta}(x)|\leq U.$$

2. R_θ(·) is twice continuously differentiable with respect to θ, and there exist positive constants $\widetilde{C}_g$ and $\widetilde{C}_h$ *such that*

$$\sup_{\theta\in\mathbb{R}^{n},\,x\in\mathbb{R}^{d}}\left\|\nabla_{\theta}\mathcal{R}_{\theta}(x)\right\|\leq\widetilde{C}_{g},\quad\sup_{\theta\in\mathbb{R}^{n},\,x\in\mathbb{R}^{d}}\left\|\nabla_{\theta}^{2}\mathcal{R}_{\theta}(x)\right\|_{2}\leq\widetilde{C}_{h}.$$

The first condition, the boundedness of the function R_θ(·), is commonly assumed in the literature (Sutton & Barto, 2018) and ensures that J(θ) is well-defined. The second condition will be used to guarantee the well-definedness and Lipschitz continuity of the gradient ∇_θJ(θ). We remark here that when the reward function R_θ(x) does not depend on θ (see, e.g., Example 3.3), the second condition holds automatically. To determine the (theoretical) learning rate in our algorithmic frameworks, we also need some standard assumptions to establish the L-smoothness of J(·).

Assumption 3.2 (Lipschitz and smooth policy assumption).
*The function* log π_θ(x) *is twice differentiable with respect to* θ ∈ R^n *and there exist positive constants* C_g *and* C_h *such that*

$$\sup_{x\in\mathbb{R}^{d},\,\theta\in\mathbb{R}^{n}}\left\|\nabla_{\theta}\log\pi_{\theta}(x)\right\|\leq C_{g},\quad\sup_{x\in\mathbb{R}^{d},\,\theta\in\mathbb{R}^{n}}\left\|\nabla_{\theta}^{2}\log\pi_{\theta}(x)\right\|_{2}\leq C_{h}.$$

This assumption is standard and commonly employed in the literature when studying the convergence properties of the policy gradient method for MDPs; see, e.g., (Pirotta et al., 2015; Papini et al., 2018; Xu et al., 2020; Pham et al., 2020; Zhang et al., 2021a; Yang et al., 2022) and references therein. Under Assumption 3.1 and Assumption 3.2, it is easy to verify that the gradient of the expected reward function J(θ) can be written as:

$$\nabla_{\theta}\mathcal{J}(\theta)=\nabla_{\theta}\left(\int\mathcal{R}_{\theta}(x)\pi_{\theta}(x)dx\right)=\int\left(\nabla_{\theta}\mathcal{R}_{\theta}(x)+\mathcal{R}_{\theta}(x)\frac{\nabla_{\theta}\pi_{\theta}(x)}{\pi_{\theta}(x)}\right)\pi_{\theta}(x)dx=\mathbb{E}_{x\sim\pi_{\theta}}\left[\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x)\right].$$

We next present an example of the discrete-time discounted MDP, which is covered by the general model (1).

Example 3.3 (MDP). We denote a discrete-time discounted MDP as M := {S, A, P, R, γ, ρ_0}, where S and A denote the state space and the action space, respectively, P(s′|s, a) is the state transition probability from s to s′ after selecting the action a, R : S × A → [0, U] is the reward function, assumed to be uniformly bounded by a constant U > 0, γ ∈ [0, 1) is the discount factor, and ρ_0 is the initial state distribution. The agent selects actions according to a stationary random policy π̃_θ(·|·) : A × S → [0, 1] parameterized by θ ∈ R^n.
Given an initial state s_0 ∈ S, a trajectory τ := {s_t, a_t, r_{t+1}}_{t=0}^{H−1} can then be generated, where s_0 ∼ ρ_0, a_t ∼ π̃_θ(·|s_t), r_{t+1} = R(s_t, a_t), s_{t+1} ∼ P(·|s_t, a_t), and H > 0 is a finite horizon. The accumulated discounted reward of the trajectory τ is defined as $\mathcal{R}(\tau):=\sum_{t=0}^{H-1}\gamma^{t}r_{t+1}$. Then, the learning objective is to compute an optimal parameter θ* that maximizes the expected reward function J(θ)¹, i.e.,

$$\theta^{*}\in\operatorname*{argmax}_{\theta}\ \mathcal{J}(\theta):=\mathbb{E}_{\tau\sim\rho_{\theta}}\left[\mathcal{R}(\tau)\right],\tag{2}$$

where $\rho_{\theta}(\tau):=\rho_{0}(s_{0})\prod_{t=0}^{H-1}P(s_{t+1}|s_{t},a_{t})\tilde{\pi}_{\theta}(a_{t}|s_{t})$ denotes the probability distribution, parameterized by θ, of a trajectory τ being sampled.

¹Here, the trajectory τ and the distribution ρ_θ correspond to x and π_θ in (1), respectively.

In the special case when S = {s} (i.e., |S| = 1) and γ = 0, the MDP reduces to a multi-armed bandit problem (Robbins, 1952) with a reward function simplified to R : A → R. In particular, a trajectory τ = {s, a} with horizon H_τ = 0 is generated, where a ∼ ρ_θ(·) := π̃_θ(·|s), and the accumulated discounted reward reduces to R(x) = R(a). As a consequence, problem (2) simplifies to

$$\max_{\theta\in\mathbb{R}^{n}}\ \mathcal{J}(\theta)=\mathbb{E}_{a\sim\rho_{\theta}}\left[R(a)\right].$$

By adding a convex regularizer G(θ) to problem (2), we obtain the following regularized MDP:

$$\max_{\theta\in\mathbb{R}^{n}}\ \mathbb{E}_{\tau\sim\rho_{\theta}}\left[\mathcal{R}(\tau)\right]-\mathcal{G}(\theta),$$

which was considered in (Pham et al., 2020). However, it is clear that R(τ) does not depend on θ. Hence, the above regularized MDP is a special case of the proposed regularized reward optimization problem (1). One can check that the gradient ∇_θJ(θ) has the following form (Yuan et al., 2022):

$$\nabla_{\theta}\mathcal{J}(\theta)=\mathbb{E}_{\tau\sim\rho_{\theta}}\left[\sum_{t=0}^{H-1}\gamma^{t}R(s_{t},a_{t})\sum_{t^{\prime}=0}^{t}\nabla_{\theta}\log\tilde{\pi}_{\theta}(a_{t^{\prime}}|s_{t^{\prime}})\right].$$

Being a composite optimization problem, problem (1) admits the following first-order stationarity condition:

$$0\in-\nabla_{\theta}\mathcal{J}(\theta)+\partial\mathcal{G}(\theta).\tag{3}$$

Here, ∂G(·) denotes the subdifferential of the proper closed convex function G(·), which is defined as

$$\partial\mathcal{G}(\theta):=\left\{g\in\mathbb{R}^{n}:\ \mathcal{G}(\theta^{\prime})\geq\mathcal{G}(\theta)+\langle g,\theta^{\prime}-\theta\rangle,\ \forall\theta^{\prime}\right\}.$$

It is well known that ∂G(θ) is a nonempty closed convex subset of R^n for any θ ∈ R^n such that G(θ) < ∞ (see, e.g., (Rockafellar, 1997)). Note that any optimal solution of problem (1) satisfies condition (3), while the reverse statement is generally not valid for nonconcave problems, including problem (1). Condition (3) leads to the following concept of stationary points for problem (1).

Definition 3.4. *A point* θ ∈ R^n *is called a stationary point for problem* (1) *if it satisfies condition* (3). *Given a tolerance* ϵ > 0, *a stochastic optimization method attains an (expected)* ϵ*-stationary point, denoted as* θ ∈ R^n, *if*

$$\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\theta)+\partial\mathcal{G}(\theta)\right)^{2}\right]\leq\epsilon^{2},$$

*where the expectation is taken with respect to all the randomness caused by the algorithm after running it for* T *iterations, and* dist(x, C) *denotes the distance between a point* x *and a closed convex set* C.

Remark 3.5 (Gradient mapping). *Note that the optimality condition* (3) *can be rewritten as*

$$0=G_{\eta}(\theta):=\frac{1}{\eta}\left[\operatorname{Prox}_{\eta\mathcal{G}}\left(\theta+\eta\nabla_{\theta}\mathcal{J}(\theta)\right)-\theta\right]$$

*for some* η > 0, *where*

$$\operatorname{Prox}_{\eta{\mathcal{G}}}(\theta):=\operatorname*{argmin}_{\theta^{\prime}}\left\{{\mathcal{G}}(\theta^{\prime})+{\frac{1}{2\eta}}\left\|\theta^{\prime}-\theta\right\|^{2}\right\}$$

denotes the proximal mapping of the function G(·).
The mapping G_η(·) is called the gradient mapping in the field of optimization (Beck, 2017). It is easy to verify that if, for some θ ∈ R^n, it holds that

$$\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\theta)+\partial\mathcal{G}(\theta)\right)\leq\epsilon,$$

then there exists a vector d satisfying ∥d∥ ≤ ϵ such that

$$d+\nabla_{\theta}\mathcal{J}(\theta)\in\partial\mathcal{G}(\theta),$$

which is equivalent to saying that θ = Prox_{ηG}(ηd + θ + η∇_θJ(θ)). Moreover, we can verify (by using the firm nonexpansiveness of Prox_{ηG}(·); see, e.g., (Beck, 2017)) that

$$\|G_{\eta}(\theta)\|=\frac{1}{\eta}\left\|\operatorname{Prox}_{\eta\mathcal{G}}\left(\theta+\eta\nabla_{\theta}\mathcal{J}(\theta)\right)-\theta\right\|\leq\|d\|\leq\epsilon.$$

Therefore, we can also characterize an (expected) ϵ-stationary point by the following condition:

$$\mathbb{E}_{T}\left[\left\|G_{\eta}(\theta)\right\|^{2}\right]\leq\epsilon^{2}.$$

The main objective of this paper is to study the convergence properties, including the iteration and sample complexities, of the stochastic (variance-reduced) proximal gradient method to an ϵ-stationary point for a pre-specified ϵ > 0. All proofs of our results are presented in the appendix. Moreover, we acknowledge that our analysis draws upon classical results in the literature.

## 4 The Stochastic Proximal Gradient Method

In this section, we present and analyze the stochastic proximal gradient method for solving problem (1). The fundamental idea of the algorithm is to replace the true gradient ∇_θJ(θ), which is unavailable most of the time, with a stochastic gradient estimator in the classical proximal gradient method (Beck, 2017). The method can be viewed as an extension of the projected policy gradient method with direct parameterization (Agarwal et al., 2021) and of the stochastic policy gradient method for unregularized MDPs (Williams, 1992).
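As a concrete illustration of the update just described, the following minimal sketch performs one stochastic proximal gradient (ascent) step, assuming for illustration that G(θ) = λ∥θ∥₁, whose proximal mapping is the soft-thresholding operator (all numbers below are arbitrary placeholders, not values from the paper):

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: prox of t * ||.||_1 evaluated at v."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spg_step(theta, g_hat, eta, lam):
    """One update theta^{t+1} = Prox_{eta*G}(theta^t + eta * g^t), with
    G(theta) = lam * ||theta||_1. Note the *plus* sign: problem (1) is a
    maximization, so this is a proximal gradient ascent step."""
    return prox_l1(theta + eta * g_hat, eta * lam)

theta = np.array([0.5, -0.2, 0.0])
g_hat = np.array([1.0, 0.1, -0.05])  # a stochastic estimate of grad J(theta)
print(spg_step(theta, g_hat, eta=0.1, lam=0.5))  # third entry is zeroed
```

The gradient mapping G_η(θ) from Remark 3.5 is then (spg_step(theta, grad_J, eta, lam) − theta) / eta, and it vanishes exactly at stationary points.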
The detailed description of the algorithm is presented in Algorithm 1. For notational simplicity, we denote

$$g(x,\theta):=\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x).$$

Algorithm 1 The stochastic proximal gradient method
1: **Input:** initial point θ^0, sample size N, and learning rate η > 0.
2: **for** t = 0, ..., T − 1 **do**
3: Compute the stochastic gradient estimator

$$g^{t}:=\frac{1}{N}\sum_{j=1}^{N}g(x^{t,j},\theta^{t}),$$

where {x^{t,1}, ..., x^{t,N}} are sampled independently according to π_{θ^t}.
4: Update

$$\theta^{t+1}=\operatorname{Prox}_{\eta\mathcal{G}}\left(\theta^{t}+\eta g^{t}\right).$$

5: **end for**
6: **Output:** θ̂^T selected randomly from the generated sequence {θ^t}_{t=1}^T.

From Algorithm 1, we see that at each iteration, N data points, namely {x^{t,1}, ..., x^{t,N}}, are sampled according to the current probability distribution π_{θ^t}. Using these data points, we construct a REINFORCE-type stochastic gradient estimator g^t. Then, the algorithm simply performs a proximal gradient ascent update. Let T > 0 be the maximal number of iterations; a sequence {θ^t}_{t=1}^T is then generated, and the output solution is selected randomly from this sequence. Next, we proceed to answer, theoretically, how to choose the learning rate η > 0, how large the sample size N should be, and how many iterations the algorithm needs to output an ϵ-stationary point for a given ϵ > 0. The next lemma establishes the L-smoothness of J(·); its proof is given in Appendix A.1.

Lemma 4.1. *Under Assumptions 3.1 and 3.2, the gradient of* J *is Lipschitz continuous, i.e.,*

$$\|\nabla_{\theta}\mathcal{J}(\theta)-\nabla_{\theta}\mathcal{J}(\theta^{\prime})\|\leq L\,\|\theta-\theta^{\prime}\|\,,\quad\forall\,\theta,\theta^{\prime}\in\mathbb{R}^{n},$$

*with* $L:=U(C_{g}^{2}+C_{h})+\widetilde{C}_{h}+2C_{g}\widetilde{C}_{g}>0$.

Remark 4.2 (L-smoothness in MDPs).
For an MDP with finite action and state spaces as in Example 3.3, the Lipschitz constant of ∇_θJ(·) can be expressed in terms of |A|, |S|, and γ. We refer the reader to (Agarwal et al., 2021; Xiao, 2022) for more details.

As a consequence of the L-smoothness of the function J(·), we next show that the learning rate can be chosen as a positive constant upper bounded by a quantity that depends only on the Lipschitz constant of ∇_θJ(·). For notational simplicity, we denote ∆ := F* − F(θ^0) > 0 for the rest of this paper.

Theorem 4.3. *Under Assumptions 3.1 and 3.2, if we set* η ∈ (0, 1/(2L)), *then Algorithm 1 outputs a point* θ̂^T *satisfying*

$$\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\hat{\theta}^{T})+\partial\mathcal{G}(\hat{\theta}^{T})\right)^{2}\right]\leq\left(2+\frac{2}{\eta L(1-2\eta L)}\right)\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]+\frac{\Delta}{T}\left(\frac{2}{\eta}+\frac{4}{\eta(1-2\eta L)}\right),$$

*where* E_T *is defined in Definition 3.4.*

The proof of the above theorem is provided in Appendix A.2. From this theorem, if one sets g^t = ∇_θJ(θ^t), i.e., ∥g^t − ∇_θJ(θ^t)∥² = 0, then there is no randomness along the iterations, and the convergence property reduces to

$$\min_{1\leq t\leq T}\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\hat{\theta}^{t})+\partial\mathcal{G}(\hat{\theta}^{t})\right)=O\left(\frac{1}{\sqrt{T}}\right),$$

which is implied by classical results on the proximal gradient method (see, e.g., (Beck, 2017)). However, since the exact full gradient ∇_θJ(θ) is rarely computable, it is common to require the variance (i.e., the trace of the covariance matrix) of the stochastic estimator to be bounded.
The latter condition plays an essential role in analyzing stochastic first-order methods for solving nonconvex optimization problems, including RL applications; see, e.g., (Beck, 2017; Papini et al., 2018; Shen et al., 2019; Lan, 2020; Yang et al., 2022).

Lemma 4.4. *Under Assumptions 3.1 and 3.2, there exists a constant* σ > 0 *such that for any* θ,

$$\mathbb{E}_{x\sim\pi_{\theta}}\left[\left\|g(x,\theta)-\nabla_{\theta}\mathcal{J}(\theta)\right\|^{2}\right]\leq\sigma^{2}.$$

The proof of Lemma 4.4 is given in Appendix A.3. By choosing a suitable sample size N, we can rely on Lemma 4.4 to make the term $\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]$ in Theorem 4.3 small for every t ≤ T. Then, Theorem 4.3 implies that Algorithm 1 admits an expected O(T^{-1}) convergence rate to a stationary point. These results are summarized in the following theorem; see Appendix A.4 for a proof.

Theorem 4.5. *Suppose that Assumptions 3.1 and 3.2 hold, and let* ϵ > 0 *be a given accuracy. Running Algorithm 1 for*

$$T:=\left\lceil\frac{\Delta}{\epsilon^{2}}\left(\frac{4}{\eta}+\frac{8}{\eta(1-2\eta L)}\right)\right\rceil=O(\epsilon^{-2})$$

*iterations with the learning rate* η < 1/(2L) *and the sample size*

$$N:=\left\lceil\frac{\sigma^{2}}{\epsilon^{2}}\left(4+\frac{4}{\eta L(1-2\eta L)}\right)\right\rceil=O(\epsilon^{-2})$$

*outputs a point* θ̂^T *satisfying*

$$\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\hat{\theta}^{T})+\partial\mathcal{G}(\hat{\theta}^{T})\right)^{2}\right]\leq\epsilon^{2}.$$

*Moreover, the total sample complexity is* O(ϵ^{-4}).

As already mentioned in the introduction, the total sample complexity of Algorithm 1 to reach an ϵ-stationary point is thus O(ϵ^{-4}), which matches the most competitive sample complexity of the classical stochastic policy gradient method for MDPs (Williams, 1992; Baxter & Bartlett, 2001; Zhang et al., 2020b; Xiong et al., 2021; Yuan et al., 2022).

Remark 4.6 (Sample size).
*Note that the current state-of-the-art iteration complexity for the (small-batch) stochastic gradient descent method is* T := O(ϵ^{-2}) *with* η_t := min{O(L^{-1}), O(T^{-1/2})}; *see, e.g., (Ghadimi & Lan, 2013). The reason for requiring a larger batch size in Theorem 4.5 is to allow a constant learning rate. To the best of our knowledge, the large batch size is required to obtain the convergence properties of Theorem 4.5 under the same conditions for problem* (1).

Remark 4.7 (Global convergence). As mentioned in the introduction, some recent progress has been made in analyzing the global convergence properties of policy gradient methods for MDPs, relying greatly on the concept of gradient domination and its extensions (Agarwal et al., 2021; Mei et al., 2020; Xiao, 2022; Yuan et al., 2022; Gargiani et al., 2022). This concept is also closely related to the classical PŁ-condition (Polyak, 1963) and KŁ-condition (Bolte et al., 2007) in the field of optimization. One of the key ideas is to assume or verify that the difference between the optimal objective value F* and F(θ) can be bounded by a quantity depending on the norm of the gradient mapping at an arbitrary point. In particular, suppose that there exists a positive constant ω *such that*

$$\|G_{\eta}(\theta)\|\geq2\sqrt{\omega}\left(\mathcal{F}^{*}-\mathcal{F}(\theta)\right),\quad\forall\;\theta\in\mathbb{R}^{n},$$

where G_η *is defined in Remark 3.5 (see, e.g., (Xiao, 2022)). Then, after running Algorithm 1 for* T = O(ϵ^{-2}) *iterations, one can easily check that*

$$\mathbb{E}_{T}\left[\mathcal{F}^{*}-\mathcal{F}(\hat{\theta}^{T})\right]\leq\frac{1}{2\sqrt{\omega}}\epsilon.$$

As a conclusion, by assuming or verifying such stronger conditions, one can typically show that any stationary point of problem (1) *is also a globally optimal solution. This shares the same spirit as (Zhang et al., 2020a) for MDPs with general utilities.*
We leave the analysis of the global convergence of problem (1) as future research.

## 5 Variance Reduction Via PAGE

Recall from Theorem 4.3 that there is a trade-off between the sample complexity and the iteration complexity of Algorithm 1. In particular, while there is little room to improve the term $\frac{\Delta}{T}\left(\frac{2}{\eta}+\frac{4}{\eta(1-2\eta L)}\right)$, which corresponds to the iteration complexity, it is possible to construct g^t in a more advanced manner to improve the sample complexity. Therefore, our main goal in this section is to reduce the expected sample complexity while keeping the term

$$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}\right]$$

small. We achieve this goal by considering stochastic variance-reduced gradient methods, which have recently attracted much attention. Among these variance-reduced methods, as argued in (Gargiani et al., 2022), the ProbAbilistic Gradient Estimator (PAGE) proposed in (Li et al., 2021b) has a simple structure and can lead to optimal convergence properties. These appealing features make it attractive in machine learning applications. Therefore, in this section, we consider the stochastic variance-reduced proximal gradient method with PAGE for solving problem (1). PAGE was originally designed for stochastic nonconvex minimization in the oblivious setting:

$$\min_{\theta\in\mathbb{R}^{n}}\ f(\theta):=\mathbb{E}_{x\sim\pi}[F(x,\theta)],$$

where π is a fixed probability distribution and F : R^d × R^n → R is a certain differentiable (and possibly nonconvex) loss function. For stochastic gradient-type methods, a stochastic gradient estimator of ∇f is required for performing the optimization.
At the t-th iteration, given a probability p_t ∈ [0, 1] and the current gradient estimator g^t, PAGE replaces the vanilla mini-batch gradient estimator with the following unbiased stochastic estimator:

$$\nabla f(\theta^{t+1})\approx g^{t+1}:=\begin{cases}\dfrac{1}{N_{1}}\sum_{j=1}^{N_{1}}\nabla_{\theta}F(x^{j},\theta^{t+1}),&\text{with probability}\ p_{t},\\ g^{t}+\dfrac{1}{N_{2}}\left(\sum_{j=1}^{N_{2}}\nabla_{\theta}F(x^{j},\theta^{t+1})-\sum_{j=1}^{N_{2}}\nabla_{\theta}F(x^{j},\theta^{t})\right),&\text{with probability}\ 1-p_{t},\end{cases}$$

where the {x^j} are sampled from π and N_1, N_2 denote the sample sizes. Some key advantages of PAGE are summarized as follows. First, the algorithm is single-looped, which admits a simpler implementation compared with existing double-looped variance-reduced methods. Second, the probability p_t can be adjusted dynamically, leading to more flexibility. Third, one can choose N_2 to be much smaller than N_1 while guaranteeing the same iteration complexity as vanilla SGD; thus, the overall sample complexity can be significantly reduced. However, the application of PAGE to our setting requires significant modifications and extensions, which we demonstrate below. To the best of our knowledge, the application of PAGE to the general regularized reward optimization problem in the non-oblivious setting considered in this paper is new. For notational simplicity, for the rest of this section, we denote

$$g_{w}(x,\theta,\theta^{\prime}):=\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}g(x,\theta)$$

for θ, θ′ ∈ R^n and x ∈ R^d, where π_θ(x)/π_{θ′}(x) denotes the importance weight between π_θ and π_{θ′}. Note also that

$$\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}\right]=1.$$

The description of the proposed PAGE variance-reduced stochastic proximal gradient method is given in Algorithm 2.
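The probabilistic estimator above (in the oblivious setting) can be sketched as follows; `grad_F` and `sample` are illustrative placeholder callables, not objects defined in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def page_estimator(g_prev, theta_new, theta_old, grad_F, sample, p, N1, N2):
    """One PAGE update: with probability p, draw a fresh large-batch gradient;
    otherwise correct g_prev with a small-batch gradient difference evaluated
    on the *same* samples at theta_new and theta_old."""
    if rng.random() < p:
        xs = sample(N1)
        return np.mean([grad_F(x, theta_new) for x in xs], axis=0)
    xs = sample(N2)
    return g_prev + np.mean([grad_F(x, theta_new) - grad_F(x, theta_old)
                             for x in xs], axis=0)

# Toy oblivious instance: F(x, theta) = 0.5 * (theta - x)^2 with x ~ N(0, 1),
# so grad_F(x, theta) = theta - x and the true gradient is E[grad_F] = theta.
grad_F = lambda x, th: th - x
sample = lambda n: rng.normal(size=n)
g = page_estimator(g_prev=0.8, theta_new=1.0, theta_old=0.8,
                   grad_F=grad_F, sample=sample, p=0.5, N1=10_000, N2=100)
print(g)  # close to the true gradient 1.0 in either branch
```

The small-batch branch is cheap because the gradient difference between nearby iterates has much lower variance than a fresh gradient, which is exactly the source of the improved sample complexity.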
It is clear that the only difference between Algorithm 1 and Algorithm 2 is the choice of the gradient estimator. At each iteration of the latter algorithm, we have two choices: with probability p, one uses the same estimator as in Algorithm 1 with sample size N_1, and with probability 1 − p, one constructs the estimator in a clever way that combines the information of the current iterate and the previous one.

Algorithm 2 The variance-reduced stochastic proximal gradient method with PAGE
1: **Input:** initial point θ^0, sample sizes N_1 and N_2, a probability p ∈ (0, 1], and learning rate η > 0.
2: Compute

$$g^{0}:=\frac{1}{N_{1}}\sum_{j=1}^{N_{1}}g(x^{0,j},\theta^{0}),$$

where {x^{0,j}}_j are sampled independently according to π_{θ^0}.
3: **for** t = 0, ..., T − 1 **do**
4: Update

$$\theta^{t+1}=\operatorname{Prox}_{\eta\mathcal{G}}\left(\theta^{t}+\eta g^{t}\right).$$

5: Compute

$$g^{t+1}:=\begin{cases}\dfrac{1}{N_{1}}\sum_{j=1}^{N_{1}}g(x^{t+1,j},\theta^{t+1}),&\text{with probability}\ p,\\ g^{t}+\dfrac{1}{N_{2}}\sum_{j=1}^{N_{2}}\left(g(x^{t+1,j},\theta^{t+1})-g_{w}(x^{t+1,j},\theta^{t},\theta^{t+1})\right),&\text{with probability}\ 1-p,\end{cases}$$

where {x^{t+1,j}}_j are sampled independently according to π_{θ^{t+1}}.
6: **end for**
7: **Output:** θ̂^T selected randomly from the generated sequence {θ^t}_{t=1}^T.

Since the data set {x^{t+1,1}, ..., x^{t+1,N_2}} is sampled according to the current probability distribution π_{θ^{t+1}}, we need to rely on the importance weight between θ^t and θ^{t+1} and construct the estimator

$$\frac{1}{N_{2}}\sum_{j=1}^{N_{2}}g_{w}(x^{t+1,j},\theta^{t},\theta^{t+1}),$$

which is an unbiased estimator of ∇_θJ(θ^t), so that g^{t+1} becomes an unbiased estimator of ∇_θJ(θ^{t+1}). Indeed, one can easily verify that for any θ, θ′ ∈ R^n, it holds that

$$\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[g_{w}(x,\theta,\theta^{\prime})\right]=\nabla_{\theta}\mathcal{J}(\theta),\tag{4}$$

i.e., g_w(x, θ, θ′) is an unbiased estimator of ∇_θJ(θ) provided that x ∼ π_{θ′}. Next, we shall analyze the convergence properties of Algorithm 2.
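The unbiasedness property (4) can be checked numerically on a 1-D toy instance. The sketch below uses purely illustrative assumptions (not from the paper): π_θ = N(θ, 1) and R(x) = x independent of θ, so that g(x, θ) = x(x − θ) and ∇_θJ(θ) = 1 for every θ.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(x, mu):
    """Density of N(mu, 1)."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def g(x, theta):
    # Score-function estimator R(x) * d/dtheta log pi_theta(x); the grad-R
    # term vanishes here because R(x) = x does not depend on theta.
    return x * (x - theta)

def g_w(x, theta, theta_prime):
    # Importance-weighted estimator g_w(x, theta, theta') from the text.
    return gauss_pdf(x, theta) / gauss_pdf(x, theta_prime) * g(x, theta)

theta, theta_prime = 0.3, 0.5
x = rng.normal(loc=theta_prime, size=400_000)   # x ~ pi_{theta'}
est = g_w(x, theta, theta_prime).mean()
print(est)  # close to grad J(theta) = 1, despite sampling from pi_{theta'}
```

The importance weight exactly compensates for sampling from π_{θ′} instead of π_θ; how much variance it adds is governed precisely by Assumption 5.1 below.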
Our analysis relies on the following assumption on the importance weight, which essentially controls the change of the distributions.

Assumption 5.1. Let θ, θ′ ∈ R^n. The importance weight between π_θ and π_{θ′} is well-defined and there exists a constant C_w > 0 such that $$\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\left({\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}}-1\right)^{2}\right]\leq C_{w}^{2}.$$

Clearly, the magnitude of the constant C_w (if it exists) may depend sensitively on θ and θ′. To see this, let us assume that for any θ ∈ R^n, π_θ is a discrete distribution over a set of finite points {x_k}_{k=1}^n for which π_θ(x_k) = θ_k > 0 for all k = 1, . . . , n. Now, suppose that θ = θ′ + ∆θ with |∆θ_k| ≤ 1. Then, a simple calculation (using the fact that Σ_{k=1}^n ∆θ_k = 0) shows that $$\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\left({\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}}-1\right)^{2}\right]=\sum_{k=1}^{n}\left({\frac{\theta_{k}}{\theta_{k}^{\prime}}}-1\right)^{2}\theta_{k}^{\prime}=\sum_{k=1}^{n}{\frac{\theta_{k}\Delta\theta_{k}}{\theta_{k}^{\prime}}}\leq\sum_{k=1}^{n}{\frac{\theta_{k}}{\theta_{k}^{\prime}}}.$$ However, it is possible that a certain θ′_k is zero or tiny, in which case C_w can be huge or even infinite. Fortunately, the regularization term G(θ) can help to avoid such undesired situations by imposing the lower-bound constraints θ_k ≥ δ > 0 for all k. In this case, we see that Σ_{k=1}^n θ_k/θ′_k ≤ Σ_{k=1}^n θ_k/δ = 1/δ.

Remark 5.2. Note that Assumption 5.1 is also employed in many existing works (Papini et al., 2018; Xu et al., 2019; Pham et al., 2020; Yuan et al., 2020; Gargiani et al., 2022). However, this assumption could be too strong, and it is not checkable in general. Addressing the relaxation of this assumption through the development of a more sophisticated algorithmic framework is beyond the scope of this paper. Here, we would like to mention some recent progress on relaxing this stringent condition for MDPs.
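The discrete example above can be sketched numerically: for distributions whose entries are bounded below by δ, the chi-square-type quantity in Assumption 5.1 stays below the 1/δ-type bound. This is an illustrative check under assumed names (`chi2`, `draw`), not code from the paper.

```python
import numpy as np

def chi2(theta, theta_prime):
    # E_{x ~ pi_theta'}[(pi_theta/pi_theta' - 1)^2] for discrete distributions
    return float(np.sum((theta / theta_prime - 1.0) ** 2 * theta_prime))

n, delta = 5, 0.05
rng = np.random.default_rng(2)

def draw():
    # random point of the simplex with every entry >= delta (the kind of
    # constraint the regularizer G could enforce); sums to 1 by construction
    q = rng.random(n)
    return (1.0 - n * delta) * (q / q.sum()) + delta

theta, theta_prime = draw(), draw()
cw_sq = chi2(theta, theta_prime)
bound = float(np.sum(theta / theta_prime))  # intermediate bound from the text, <= 1/delta
```

Shrinking `delta` toward 0 lets `bound` (and hence the admissible C_w²) blow up, matching the discussion of tiny θ′_k above.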
By constructing additional stochastic estimators for the Hessian matrix of the objective function, Shen et al. (2019) proposed a Hessian-aided policy-gradient-type method that improves the sample complexity from O(ϵ^{-4}) to O(ϵ^{-3}) without requiring Assumption 5.1. Later, by explicitly controlling changes in the parameter θ, Zhang et al. (2021a) developed a truncated stochastic incremental variance-reduced policy gradient method that prevents the variance of the importance weights from becoming excessively large, which also leads to an O(ϵ^{-3}) sample complexity. By utilizing general Bregman divergences, Yuan et al. (2022) proposed a double-looped variance-reduced mirror policy optimization approach and established an O(ϵ^{-3}) sample complexity without requiring Hessian information or Assumption 5.1. Recently, following the same research theme as Shen et al. (2019), Salehkaleybar et al. (2022) also incorporated second-order information into the stochastic gradient estimator. By using momentum, the variance-reduced algorithm proposed in (Salehkaleybar et al., 2022) has some appealing features, including small batch sizes and a parameter-free implementation. Finally, by imposing additional conditions, including the Lipschitz continuity of the Hessian of the score function ∇_θ log π_θ and the Fisher-non-degeneracy condition of the policy, Fatkhullin et al. (2023) derived improved (global) convergence guarantees for solving MDPs. We think that the above ideas can also be explored for solving the general model (1).

The bounded variance of the importance weight implies that the (expected) distance between g(x, θ′) and g_w(x, θ, θ′) is controlled by the distance between θ and θ′, for any given θ, θ′ ∈ R^n. In particular, we have the following lemma, whose proof is provided in Appendix A.5.

Lemma 5.3.
Under Assumption 3.1, Assumption 3.2, and Assumption 5.1, it holds that $$\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\left\|g(x,\theta^{\prime})-g_{w}(x,\theta,\theta^{\prime})\right\|^{2}\right]\leq C\left\|\theta-\theta^{\prime}\right\|^{2},$$ where C > 0 is a constant defined as $$C:=6U^{2}C_{h}^{2}+6C_{g}^{2}\tilde{C}_{g}^{2}+6\tilde{C}_{h}^{2}+\left(4U^{2}C_{g}^{2}+4\tilde{C}_{g}^{2}\right)(2C_{g}^{2}+C_{h})(C_{w}^{2}+1).$$

Under the considered assumptions, we are able to provide an estimate for the term $$\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right],$$ which plays an essential role in deriving an improved sample complexity for Algorithm 2. The results are summarized in the following Lemma 5.4; see Appendix A.6 for a proof, which shares the same spirit as (Li et al., 2021b, Lemma 3 & 4).

Lemma 5.4. Suppose that Assumption 3.1, Assumption 3.2, and Assumption 5.1 hold. Let {g^t} and {θ^t} be the sequences generated by Algorithm 2. Then it holds that $$\left(1-\frac{(1-p)C\eta}{p N_{2}L(1-2\eta L)}\right)\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\leq\frac{p\sigma^{2}T+\sigma^{2}}{p N_{1}}+\frac{2\eta(1-p)C\Delta}{p N_{2}(1-2\eta L)}.$$

We are now ready to present the main result on the convergence of Algorithm 2 by showing how to select the sample sizes N1 and N2, the probability p, and the learning rate η. Intuitively, N1 is typically a large number and one does not want to perform samplings with N1 samples frequently; thus, the probability p and the sample size N2 should both be small. Given N1, N2, and p, we can then determine the value of η such that η < 1/(2L). Consequently, the key estimate in Theorem 4.3 can be applied directly. Our results are summarized in the following theorem; the reader is referred to Appendix A.7 for the proof.

Theorem 5.5. Suppose that Assumption 3.1, Assumption 3.2, and Assumption 5.1 hold.
For a given ϵ ∈ (0, 1), we set p := N2/(N1 + N2) with N1 := O(ϵ^{-2}) and N2 := √N1 = O(ϵ^{-1}). Choose a learning rate η satisfying η ∈ (0, L/(2C + 2L^2)]. Then, running Algorithm 2 for T := O(ϵ^{-2}) iterations outputs a point θ̂^T satisfying $$\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}{\mathcal{J}}({\hat{\theta}}^{T})+\partial{\mathcal{G}}({\hat{\theta}}^{T})\right)^{2}\right]\leq\epsilon^{2}.$$ Moreover, the total expected sample complexity is O(ϵ^{-3}).

By using the stochastic variance-reduced gradient estimator of PAGE together with the importance sampling technique, we have improved the total sample complexity from O(ϵ^{-4}) to O(ϵ^{-3}) under the considered conditions. This result matches the current competitive results established in (Xu et al., 2019; Yuan et al., 2020; Pham et al., 2020; Gargiani et al., 2022) for solving MDPs and is applicable to the general model (1). Finally, as mentioned in Remark 4.7, by assuming or verifying stronger conditions, such as gradient domination and its extensions, it is also possible to derive some global convergence results. Again, such a possibility is left as a future research direction.

## 6 Conclusions

We have studied the stochastic (variance-reduced) proximal gradient method for a general regularized expected reward optimization problem, which covers many existing important problems in reinforcement learning. We have established the O(ϵ^{-4}) sample complexity of the classical stochastic proximal gradient method and the O(ϵ^{-3}) sample complexity of the stochastic variance-reduced proximal gradient method with an importance-sampling-based probabilistic gradient estimator. Our results match the sample complexity of their most competitive counterparts under similar settings for Markov decision processes. Meanwhile, we acknowledge some limitations of the current paper.
First, due to the nonconcavity of the objective function, we found it challenging to derive global convergence properties of the stochastic proximal gradient method and its variants without imposing additional conditions. On the other hand, analyzing the sample complexity for achieving convergence to second-order stationary points, thereby avoiding saddle points, may be more realistic and feasible (Arjevani et al., 2020). Second, the bounded variance condition for the importance weight turns out to be quite strong and cannot be verified in general. How to relax this condition for our general model deserves further investigation. Last but not least, since this paper focuses on theoretical analysis and due to space constraints, we did not conduct numerical simulations to examine the practical efficiency of the proposed methods. We shall delve into these challenges to gain a better understanding of the proposed problem and algorithms in future research. Finally, this paper has demonstrated the possibility of pairing the stochastic proximal gradient method with efficient variance reduction techniques (Li et al., 2021b) for solving the reward optimization problem (1). Beyond variance-reduced methods, there are other possibilities for deriving more sophisticated algorithms. For instance, one can also pair the stochastic proximal gradient method with the ideas of the actor-critic method (Konda & Tsitsiklis, 1999), the natural policy gradient method (Kakade, 2001), policy mirror descent methods (Tomar et al., 2020; Lan, 2023), trust-region methods (Schulman et al., 2015; Shani et al., 2020), and the variational policy gradient method (Zhang et al., 2020a). We think that these possible generalizations can lead to more exciting results and make further contributions to the literature.

## Acknowledgments

We thank the action editor and reviewers for their valuable comments and suggestions that helped to improve the quality of the paper.
The authors were partially supported by the US National Science Foundation under awards DMS-2244988, DMS-2206333, and the Office of Naval Research Award N00014-23-1-2007. ## References Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. Optimality and approximation with policy gradient methods in markov decision processes. In *Conference on Learning Theory*, pp. 64–66. PMLR, 2020. Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. *The Journal of Machine Learning Research*, 22(1):4431–4506, 2021. Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans. Understanding the impact of entropy on policy optimization. In *International conference on machine learning*, pp. 151–160. PMLR, 2019. Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Ayush Sekhari, and Karthik Sridharan. Secondorder information in non-convex stochastic optimization: Power and limitations. In *Conference on Learning* Theory, pp. 242–299. PMLR, 2020. Anas Barakat, Ilyas Fatkhullin, and Niao He. Reinforcement learning with general utilities: Simpler variance reduction and large state-action space. *arXiv preprint arXiv:2306.01854*, 2023. Jonathan Baxter and Peter L Bartlett. Infinite-horizon policy-gradient estimation. journal of artificial intelligence research, 15:319–350, 2001. Amir Beck. *First-order methods in optimization*. SIAM, 2017. Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio. Neural combinatorial optimization with reinforcement learning. *arXiv preprint arXiv:1611.09940*, 2016. Jérôme Bolte, Aris Daniilidis, and Adrian Lewis. The łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. *SIAM Journal on Optimization*, 17(4): 1205–1223, 2007. Shicong Cen, Chen Cheng, Yuxin Chen, Yuting Wei, and Yuejie Chi. 
Fast global convergence of natural policy gradient methods with entropy regularization. *Operations Research*, 70(4):2563–2578, 2022. Cheng Chen, Ruitao Chen, Tianyou Li, Ruichen Ao, and Zaiwen Wen. Monte carlo policy gradient method for binary optimization. *arXiv preprint arXiv:2307.00783*, 2023. Roy Dong, Heling Zhang, and Lillian Ratliff. Approximate regions of attraction in learning with decisiondependent distributions. In *International Conference on Artificial Intelligence and Statistics*, pp. 11172– 11184. PMLR, 2023. Dmitriy Drusvyatskiy and Lin Xiao. Stochastic optimization with decision-dependent distributions. *Mathematics of Operations Research*, 48(2):954–998, 2023. Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. *Advances in neural information processing systems*, 31, 2018. Ilyas Fatkhullin, Anas Barakat, Anastasia Kireeva, and Niao He. Stochastic policy gradient methods: Improved sample complexity for fisher-non-degenerate policies. In *International Conference on Machine* Learning, pp. 9827–9869. PMLR, 2023. João Gama, Indr˙e Žliobait˙e, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. A survey on concept drift adaptation. *ACM computing surveys (CSUR)*, 46(4):1–37, 2014. Matilde Gargiani, Andrea Zanelli, Andrea Martinelli, Tyler Summers, and John Lygeros. Page-pg: A simple and loopless variance-reduced policy gradient method with probabilistic gradient estimation. In International Conference on Machine Learning, pp. 7223–7240. PMLR, 2022. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. *SIAM journal on optimization*, 23(4):2341–2368, 2013. Trevor Hastie, Robert Tibshirani, Jerome H Friedman, and Jerome H Friedman. The elements of statistical learning: data mining, inference, and prediction, volume 2. Springer, 2009. Feihu Huang, Shangqian Gao, and Heng Huang. 
Bregman gradient policy optimization. arXiv preprint arXiv:2106.12112, 2021. Meena Jagadeesan, Tijana Zrnic, and Celestine Mendler-Dünner. Regret minimization with performative feedback. In *International Conference on Machine Learning*, pp. 9760–9785. PMLR, 2022. Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in neural information processing systems, 26, 2013. Sham M Kakade. A natural policy gradient. *Advances in neural information processing systems*, 14, 2001. Vijay Konda and John Tsitsiklis. Actor-critic algorithms. *Advances in neural information processing systems*, 12, 1999. Navdeep Kumar, Kaixin Wang, Kfir Levy, and Shie Mannor. Policy gradient for reinforcement learning with general utilities. *arXiv preprint arXiv:2210.00991*, 2022. Guanghui Lan. *First-order and stochastic optimization methods for machine learning*, volume 1. Springer, 2020. Guanghui Lan. Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes. *Mathematical programming*, 198(1):1059–1106, 2023. Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, and Yuxin Chen. Softmax policy gradient methods can take exponential time to converge. In *Conference on Learning Theory*, pp. 3107–3110. PMLR, 2021a. Zhize Li, Hongyan Bao, Xiangliang Zhang, and Peter Richtárik. Page: A simple and optimal probabilistic gradient estimator for nonconvex optimization. In *International conference on machine learning*, pp. 6286–6295. PMLR, 2021b. Senwei Liang and Haizhao Yang. Finite expression method for solving high-dimensional partial differential equations. *arXiv preprint arXiv:2206.10121*, 2022. Boyi Liu, Qi Cai, Zhuoran Yang, and Zhaoran Wang. Neural proximal/trust region policy optimization attains globally optimal policy. *arXiv preprint arXiv:1906.10306*, 2019. Yanli Liu, Kaiqing Zhang, Tamer Basar, and Wotao Yin. 
An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. *Advances in Neural Information Processing Systems*, 33: 7624–7636, 2020. Nina Mazyavkina, Sergey Sviridov, Sergei Ivanov, and Evgeny Burnaev. Reinforcement learning for combinatorial optimization: A survey. *Computers & Operations Research*, 134:105400, 2021. Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans. On the global convergence rates of softmax policy gradient methods. In *International Conference on Machine Learning*, pp. 6820–6829. PMLR, 2020. Celestine Mendler-Dünner, Juan Perdomo, Tijana Zrnic, and Moritz Hardt. Stochastic optimization for performative prediction. *Advances in Neural Information Processing Systems*, 33:4929–4939, 2020. Smitha Milli, John Miller, Anca D Dragan, and Moritz Hardt. The social cost of strategic classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 230–239, 2019. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. *arXiv preprint arXiv:1312.5602*, 2013. Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. Sarah: A novel method for machine learning problems using stochastic recursive gradient. In *International conference on machine learning*, pp. 2613– 2621. PMLR, 2017. Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, and Marcello Restelli. Stochastic variance-reduced policy gradient. In *International conference on machine learning*, pp. 4026–4035. PMLR, 2018. Edwin Pednault, Naoki Abe, and Bianca Zadrozny. Sequential cost-sensitive decision making with reinforcement learning. In *Proceedings of the eighth ACM SIGKDD international conference on Knowledge* discovery and data mining, pp. 259–268, 2002. Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, and Moritz Hardt. Performative prediction. 
In International Conference on Machine Learning, pp. 7599–7609. PMLR, 2020. Nhan Pham, Lam Nguyen, Dzung Phan, Phuong Ha Nguyen, Marten Dijk, and Quoc Tran-Dinh. A hybrid stochastic policy gradient algorithm for reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pp. 374–385. PMLR, 2020. Matteo Pirotta, Marcello Restelli, and Luca Bascetta. Policy gradient in lipschitz markov decision processes. Machine Learning, 100:255–283, 2015. Boris T Polyak. Gradient methods for the minimisation of functionals. *USSR Computational Mathematics* and Mathematical Physics, 3(4):864–878, 1963. Herbert Robbins. Some aspects of the sequential design of experiments. 1952. R Tyrrell Rockafellar. *Convex analysis*, volume 11. Princeton university press, 1997. Saber Salehkaleybar, Sadegh Khorasani, Negar Kiyavash, Niao He, and Patrick Thiran. Momentum-based policy gradient with second-order information. *arXiv preprint arXiv:2205.08253*, 2022. John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In *International conference on machine learning*, pp. 1889–1897. PMLR, 2015. Lior Shani, Yonathan Efroni, and Shie Mannor. Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 34, pp. 5668–5675, 2020. Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski. *Lectures on stochastic programming:* modeling and theory. SIAM, 2021. Zebang Shen, Alejandro Ribeiro, Hamed Hassani, Hui Qian, and Chao Mi. Hessian aided policy gradient. In *International conference on machine learning*, pp. 5729–5738. PMLR, 2019. Zezheng Song, Maria K Cameron, and Haizhao Yang. A finite expression method for solving high-dimensional committor problems. *arXiv preprint arXiv:2306.12268*, 2023. Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018. 
Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. *Advances in neural information processing systems*, 12, 1999. Manan Tomar, Lior Shani, Yonathan Efroni, and Mohammad Ghavamzadeh. Mirror descent policy optimization. *arXiv preprint arXiv:2005.09814*, 2020. Stratis Tsirtsis, Behzad Tabibian, Moein Khajehnejad, Adish Singla, Bernhard Schölkopf, and Manuel Gomez-Rodriguez. Optimal decision making under strategic behavior. *Management Science*, 2024. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8:229–256, 1992. Lin Xiao. On the convergence rates of policy gradient methods. *The Journal of Machine Learning Research*, 23(1):12887–12922, 2022. Huaqing Xiong, Tengyu Xu, Yingbin Liang, and Wei Zhang. Non-asymptotic convergence of adam-type reinforcement learning algorithms under markovian sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 10460–10468, 2021. Pan Xu, Felicia Gao, and Quanquan Gu. Sample efficient policy gradient methods with recursive variance reduction. *arXiv preprint arXiv:1909.08610*, 2019. Pan Xu, Felicia Gao, and Quanquan Gu. An improved convergence analysis of stochastic variance-reduced policy gradient. In *Uncertainty in Artificial Intelligence*, pp. 541–551. PMLR, 2020. Long Yang, Yu Zhang, Gang Zheng, Qian Zheng, Pengfei Li, Jianhang Huang, and Gang Pan. Policy optimization with stochastic mirror descent. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 36, pp. 8823–8831, 2022. Liuyi Yao, Zhixuan Chu, Sheng Li, Yaliang Li, Jing Gao, and Aidong Zhang. A survey on causal inference. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5):1–46, 2021. Huizhuo Yuan, Xiangru Lian, Ji Liu, and Yuren Zhou. Stochastic recursive momentum for policy gradient methods. 
*arXiv preprint arXiv:2003.04302*, 2020. Rui Yuan, Robert M Gower, and Alessandro Lazaric. A general sample complexity analysis of vanilla policy gradient. In *International Conference on Artificial Intelligence and Statistics*, pp. 3332–3380. PMLR, 2022. Wenhao Zhan, Shicong Cen, Baihe Huang, Yuxin Chen, Jason D Lee, and Yuejie Chi. Policy mirror descent for regularized reinforcement learning: A generalized framework with linear convergence. SIAM Journal on Optimization, 33(2):1061–1091, 2023. Junyu Zhang, Alec Koppel, Amrit Singh Bedi, Csaba Szepesvari, and Mengdi Wang. Variational policy gradient method for reinforcement learning with general utilities. *Advances in Neural Information Processing* Systems, 33:4572–4583, 2020a. Junyu Zhang, Chengzhuo Ni, Csaba Szepesvari, Mengdi Wang, et al. On the convergence and sample efficiency of variance-reduced policy gradient method. *Advances in Neural Information Processing Systems*, 34:2228–2240, 2021a. Junzi Zhang, Jongho Kim, Brendan O'Donoghue, and Stephen Boyd. Sample efficient reinforcement learning with reinforce. In *Proceedings of the AAAI conference on artificial intelligence*, volume 35, pp. 10887– 10895, 2021b. Kaiqing Zhang, Alec Koppel, Hao Zhu, and Tamer Basar. Global convergence of policy gradient methods to (almost) locally optimal policies. *SIAM Journal on Control and Optimization*, 58(6):3586–3612, 2020b. Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the twenty-first international conference on Machine learning, pp. 116, 2004. Wei Zhang and Thomas G Dietterich. A reinforcement learning approach to job-shop scheduling. In *IJCAI*, volume 95, pp. 1114–1120. Citeseer, 1995. ## A Proofs A.1 Proof Of Lemma 4.1 Proof of Lemma 4.1. One could establish the L-smoothness of J (·) via bounding the spectral norm of the Hessian ∇2 θJ (·). 
To this end, we first calculate the Hessian of J as follows:
$$\begin{aligned}\nabla_{\theta}^{2}\mathcal{J}(\theta)&=\nabla_{\theta}\mathbb{E}_{x\sim\pi_{\theta}}\left[\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x)\right]\\&=\nabla_{\theta}\int\left(\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x)\pi_{\theta}(x)\right)\mathrm{d}x\\&=\int\mathcal{R}_{\theta}(x)\pi_{\theta}(x)\left(\nabla_{\theta}^{2}\log\pi_{\theta}(x)+\nabla_{\theta}\log\pi_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}\right)\mathrm{d}x\\&\quad+\int\left(\nabla_{\theta}^{2}\mathcal{R}_{\theta}(x)\pi_{\theta}(x)+2\nabla_{\theta}\mathcal{R}_{\theta}(x)\nabla_{\theta}\pi_{\theta}(x)^{\top}\right)\mathrm{d}x\\&=\mathbb{E}_{x\sim\pi_{\theta}}\left[\mathcal{R}_{\theta}(x)\nabla_{\theta}^{2}\log\pi_{\theta}(x)\right]+\mathbb{E}_{x\sim\pi_{\theta}}\left[\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}\right]\\&\quad+\mathbb{E}_{x\sim\pi_{\theta}}\left[\nabla_{\theta}^{2}\mathcal{R}_{\theta}(x)\right]+2\,\mathbb{E}_{x\sim\pi_{\theta}}\left[\nabla_{\theta}\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}\right].\end{aligned}$$
Then, by the triangle inequality, it holds that
$$\begin{aligned}\left\|\nabla_{\theta}^{2}\mathcal{J}(\theta)\right\|_{2}&\leq\sup_{x\in\mathbb{R}^{d},\,\theta\in\mathbb{R}^{n}}\left\|\mathcal{R}_{\theta}(x)\nabla_{\theta}^{2}\log\pi_{\theta}(x)\right\|_{2}+\sup_{x\in\mathbb{R}^{d},\,\theta\in\mathbb{R}^{n}}\left\|\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}\right\|_{2}\\&\quad+\sup_{x\in\mathbb{R}^{d},\,\theta\in\mathbb{R}^{n}}\left\|\nabla_{\theta}^{2}\mathcal{R}_{\theta}(x)\right\|_{2}+2\sup_{x\in\mathbb{R}^{d},\,\theta\in\mathbb{R}^{n}}\left\|\nabla_{\theta}\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}\right\|_{2}\\&\leq U(C_{g}^{2}+C_{h})+\tilde{C}_{h}+2C_{g}\tilde{C}_{g}.\end{aligned}$$
Thus, $\mathcal{J}$ is $L$-smooth with $L:=U(C_g^2+C_h)+\tilde{C}_h+2C_g\tilde{C}_g$, and the proof is completed. □

## A.2 Proof Of Theorem 4.3

Proof of Theorem 4.3. From Lemma 4.1, we see that
$${\mathcal{J}}(\theta^{t+1})\geq{\mathcal{J}}(\theta^{t})+\left\langle\nabla_{\theta}{\mathcal{J}}(\theta^{t}),\theta^{t+1}-\theta^{t}\right\rangle-{\frac{L}{2}}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}.$$
By the updating rule for θ^{t+1}, we see that
$$-\left\langle g^{t},\theta^{t+1}-\theta^{t}\right\rangle+\frac{1}{2\eta}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}+\mathcal{G}(\theta^{t+1})\leq\mathcal{G}(\theta^{t}),\tag{5}$$
$$g^{t}-\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\in\partial\mathcal{G}(\theta^{t+1}).\tag{6}$$
In particular, (6) implies that
$$\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\theta^{t+1})+\partial\mathcal{G}(\theta^{t+1})\right)\leq\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}+\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\|.\tag{7}$$
Combining (5) with the smoothness inequality above, we see that
$$\mathcal{J}(\theta^{t+1})+\left\langle g^{t},\theta^{t+1}-\theta^{t}\right\rangle-\frac{1}{2\eta}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}-\mathcal{G}(\theta^{t+1})\geq\mathcal{J}(\theta^{t})+\left\langle\nabla_{\theta}\mathcal{J}(\theta^{t}),\theta^{t+1}-\theta^{t}\right\rangle-\frac{L}{2}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}-\mathcal{G}(\theta^{t}).$$
Rearranging terms, we can rewrite the above inequality as
$$\frac{1-\eta L}{2\eta}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\leq\mathcal{F}(\theta^{t+1})-\mathcal{F}(\theta^{t})+\left\langle g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t}),\theta^{t+1}-\theta^{t}\right\rangle.\tag{8}$$
By the Cauchy-Schwarz inequality, we see that
$$\left\langle g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t}),\theta^{t+1}-\theta^{t}\right\rangle\leq\frac{1}{2L}\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}+\frac{L}{2}\left\|\theta^{t+1}-\theta^{t}\right\|^{2},$$
which together with (8) implies that
$$\frac{1-2\eta L}{2\eta}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\leq{\mathcal{F}}(\theta^{t+1})-{\mathcal{F}}(\theta^{t})+\frac{1}{2L}\left\|g^{t}-\nabla_{\theta}{\mathcal{J}}(\theta^{t})\right\|^{2}.$$
Summing the above inequality across t = 0, . . . , T − 1, we get
$$\frac{1-2\eta L}{2\eta}\sum_{t=0}^{T-1}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\leq{\mathcal{F}}(\theta^{T})-{\mathcal{F}}(\theta^{0})+{\frac{1}{2L}}\sum_{t=0}^{T-1}\left\|g^{t}-\nabla_{\theta}{\mathcal{J}}(\theta^{t})\right\|^{2}\leq\Delta+\frac{1}{2L}\sum_{t=0}^{T-1}\left\|g^{t}-\nabla_{\theta}{\cal J}(\theta^{t})\right\|^{2}.\tag{9}$$
Here, we recall that ∆ := F^* − F(θ^0) > 0.
On the other hand, (8) also implies that
$$2\left\langle\nabla_{\theta}{\cal J}(\theta^{t+1})-g^{t},\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\rangle+\frac{1-\eta L}{\eta^{2}}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\leq\frac{2}{\eta}\left({\cal F}(\theta^{t+1})-{\cal F}(\theta^{t})\right)+\frac{2}{\eta}\left\langle\nabla_{\theta}{\cal J}(\theta^{t+1})-\nabla_{\theta}{\cal J}(\theta^{t}),\theta^{t+1}-\theta^{t}\right\rangle.\tag{10}$$
Notice that
$$2\left\langle\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t},\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\rangle=\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}+\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\|^{2}-\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}\right\|^{2}-\frac{1}{\eta^{2}}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}.$$
Then, by substituting the above equality into (10) and rearranging terms, we see that
$$\begin{aligned}&\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}+\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\|^{2}\\&\leq\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}\right\|^{2}+\frac{1}{\eta^{2}}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}-\frac{1-\eta L}{\eta^{2}}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\\&\quad+\frac{2}{\eta}\left(\mathcal{F}(\theta^{t+1})-\mathcal{F}(\theta^{t})\right)+\frac{2}{\eta}\left\langle\nabla_{\theta}\mathcal{J}(\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t}),\theta^{t+1}-\theta^{t}\right\rangle\\&\leq2\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}+2\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}+\frac{L}{\eta}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\\&\quad+\frac{2}{\eta}\left(\mathcal{F}(\theta^{t+1})-\mathcal{F}(\theta^{t})\right)+\frac{2}{\eta}\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|\left\|\theta^{t+1}-\theta^{t}\right\|\\&\leq2\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}+\left(2L^{2}+\frac{3L}{\eta}\right)\left\|\theta^{t+1}-\theta^{t}\right\|^{2}+\frac{2}{\eta}\left(\mathcal{F}(\theta^{t+1})-\mathcal{F}(\theta^{t})\right),\end{aligned}$$
where the second inequality is due to the Cauchy-Schwarz inequality and the fact that
$$\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}\right\|^{2}\leq2\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}+2\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2},$$
and the third inequality is implied by Lemma 4.1.
Summing the above inequality across t = 0, 1, . . . , T − 1, we get
$$\begin{aligned}\sum_{t=0}^{T-1}\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}+\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\|^{2}&\leq2\sum_{t=0}^{T-1}\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}+\left(2L^{2}+\frac{3L}{\eta}\right)\sum_{t=0}^{T-1}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}+\frac{2}{\eta}\left(\mathcal{F}(\theta^{T})-\mathcal{F}(\theta^{0})\right)\\&\leq2\sum_{t=0}^{T-1}\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}+\frac{2}{\eta^{2}}\sum_{t=0}^{T-1}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}+\frac{2\Delta}{\eta},\end{aligned}\tag{11}$$
where the last inequality is obtained from the fact that L < 1/(2η), as a consequence of the choice of the learning rate. Consequently, we have that
$$\begin{aligned}\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}{\mathcal{J}}({\hat{\theta}}^{T})+\partial{\mathcal{G}}({\hat{\theta}}^{T})\right)^{2}\right]&=\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\theta^{t+1})+\partial\mathcal{G}(\theta^{t+1})\right)^{2}\right]\\&\leq\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\nabla_{\theta}\mathcal{J}(\theta^{t+1})-g^{t}+\frac{1}{\eta}\left(\theta^{t+1}-\theta^{t}\right)\right\|^{2}\right]\\&\leq\frac{2}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}\right]+\frac{2}{\eta^{2}T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\right]+\frac{2\Delta}{\eta T}\\&\leq\frac{4}{\eta T(1-2\eta L)}\left(\Delta+\frac{1}{2L}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\right)+\frac{2}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}\right]+\frac{2\Delta}{\eta T}\\&=\left(2+\frac{2}{\eta L(1-2\eta L)}\right)\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\nabla_{\theta}\mathcal{J}(\theta^{t})-g^{t}\right\|^{2}\right]+\frac{\Delta}{T}\left(\frac{2}{\eta}+\frac{4}{\eta(1-2\eta L)}\right),\end{aligned}$$
where the first inequality is because of (7), the second inequality is due to (11), and the third inequality is derived from (9). Thus, the proof is completed. □

## A.3 Proof Of Lemma 4.4

Proof of Lemma 4.4.
We first estimate E_{x∼π_θ}[∥R_θ(x)∇_θ log π_θ(x) + ∇_θR_θ(x)∥²] as follows:
$$\mathbb{E}_{x\sim\pi_{\theta}}\left[\|\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x)\|^{2}\right]\leq2\,\mathbb{E}_{x\sim\pi_{\theta}}\left[\|\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)\|^{2}\right]+2\,\mathbb{E}_{x\sim\pi_{\theta}}\left[\|\nabla_{\theta}\mathcal{R}_{\theta}(x)\|^{2}\right]\leq2U^{2}C_{g}^{2}+2\tilde{C}_{g}^{2}.$$
Then, by the fact that E[(X − E[X])²] ≤ E[X²] for any random variable X, we have
$$\mathbb{E}_{x\sim\pi_{\theta}}\left[\left\|\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x)-\nabla_{\theta}\mathcal{J}(\theta)\right\|^{2}\right]\leq\mathbb{E}_{x\sim\pi_{\theta}}\left[\left\|\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta}(x)\right\|^{2}\right]\leq2U^{2}C_{g}^{2}+2\tilde{C}_{g}^{2},$$
which completes the proof. □

## A.4 Proof Of Theorem 4.5

Proof of Theorem 4.5. From Theorem 4.3, in order to ensure that θ̂^T is an ϵ-stationary point, we can require
$$\left(2+\frac{2}{\eta L(1-2\eta L)}\right)\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\leq\frac{1}{2}\epsilon^{2},\quad\forall\;t=0,\ldots,T-1,\tag{12}$$
$$\frac{\Delta}{T}\left(\frac{2}{\eta}+\frac{4}{\eta(1-2\eta L)}\right)\leq\frac{1}{2}\epsilon^{2}.\tag{13}$$
It is easy to verify that g^t is an unbiased estimator of ∇_θJ(θ^t). Then, Lemma 4.4 implies that
$$\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\leq\frac{\sigma^{2}}{N}.$$
As a consequence, if one chooses
$$N=\left\lceil\frac{\sigma^{2}}{\epsilon^{2}}\left(4+\frac{4}{\eta L(1-2\eta L)}\right)\right\rceil,$$
then (12) holds. On the other hand, (13) holds if one sets
$$T=\left\lceil\frac{\Delta}{\epsilon^{2}}\left(\frac{4}{\eta}+\frac{8}{\eta(1-2\eta L)}\right)\right\rceil.$$
Moreover, we see that the sample complexity can be computed as TN = O(ϵ^{-4}). Therefore, the proof is completed. □

## A.5 Proof Of Lemma 5.3

Proof of Lemma 5.3.
First, recall that
$$\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[{\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}}\right]=1.$$
Then, by the definitions of g and g_w, we can verify that
$$\begin{aligned}&\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\|g(x,\theta^{\prime})-g_{w}(x,\theta,\theta^{\prime})\|^{2}\right]\\&\leq2\,\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\|g(x,\theta^{\prime})-g(x,\theta)\|^{2}\right]+2\,\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\|g(x,\theta)-g_{w}(x,\theta,\theta^{\prime})\|^{2}\right]\\&=2\int\left\|\mathcal{R}_{\theta^{\prime}}(x)\nabla_{\theta}\log\pi_{\theta^{\prime}}(x)-\mathcal{R}_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)+\nabla_{\theta}\mathcal{R}_{\theta^{\prime}}(x)-\nabla_{\theta}\mathcal{R}_{\theta}(x)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x\\&\quad+2\int\left\|\mathcal{R}_{\theta}(x)\left(1-\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}\right)\nabla_{\theta}\log\pi_{\theta}(x)+\left(1-\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}\right)\nabla_{\theta}\mathcal{R}_{\theta}(x)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x\\&\leq6\int\left\|\mathcal{R}_{\theta^{\prime}}(x)\left(\nabla_{\theta}\log\pi_{\theta^{\prime}}(x)-\nabla_{\theta}\log\pi_{\theta}(x)\right)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x+6\int\left\|\left(\mathcal{R}_{\theta^{\prime}}(x)-\mathcal{R}_{\theta}(x)\right)\nabla_{\theta}\log\pi_{\theta}(x)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x\\&\quad+6\int\left\|\nabla_{\theta}\mathcal{R}_{\theta^{\prime}}(x)-\nabla_{\theta}\mathcal{R}_{\theta}(x)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x+4\int\left\|\mathcal{R}_{\theta}(x)\left(1-\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}\right)\nabla_{\theta}\log\pi_{\theta}(x)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x\\&\quad+4\int\left\|\nabla_{\theta}\mathcal{R}_{\theta}(x)\left(1-\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}\right)\right\|^{2}\pi_{\theta^{\prime}}(x)\,\mathrm{d}x\\&\leq\left(6U^{2}C_{h}^{2}+6C_{g}^{2}\tilde{C}_{g}^{2}+6\tilde{C}_{h}^{2}\right)\|\theta-\theta^{\prime}\|^{2}+\left(4U^{2}C_{g}^{2}+4\tilde{C}_{g}^{2}\right)\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\left(\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}-1\right)^{2}\right]\\&=\left(6U^{2}C_{h}^{2}+6C_{g}^{2}\tilde{C}_{g}^{2}+6\tilde{C}_{h}^{2}\right)\|\theta-\theta^{\prime}\|^{2}+\left(4U^{2}C_{g}^{2}+4\tilde{C}_{g}^{2}\right)\left(\int\frac{(\pi_{\theta}(x))^{2}}{\pi_{\theta^{\prime}}(x)}\,\mathrm{d}x-1\right).\end{aligned}$$
We next consider the function f(θ) := ∫ (π_θ(x))²/π_{θ′}(x) dx. Taking the derivative of f with respect to θ, we get
$$\nabla_{\theta}f(\theta)=\int\frac{2\pi_{\theta}(x)\nabla_{\theta}\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}\,\mathrm{d}x.$$
Moreover, since
$$\begin{split}\nabla_{\theta}^{2}\log\pi_{\theta}(x)&=\frac{1}{\left(\pi_{\theta}(x)\right)^{2}}\left(\pi_{\theta}(x)\nabla_{\theta}^{2}\pi_{\theta}(x)-\nabla_{\theta}\pi_{\theta}(x)\nabla_{\theta}\pi_{\theta}(x)^{\top}\right)\\ &=\frac{1}{\pi_{\theta}(x)}\nabla_{\theta}^{2}\pi_{\theta}(x)-\nabla_{\theta}\log\pi_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top},\end{split}$$
we see that the Hessian of $f$ with respect to $\theta$ can be computed as
$$\nabla_{\theta}^{2}f(\theta)=\int\frac{2}{\pi_{\theta^{\prime}}(x)}\left(\nabla_{\theta}\pi_{\theta}(x)\nabla_{\theta}\pi_{\theta}(x)^{\top}+\pi_{\theta}(x)\nabla_{\theta}^{2}\pi_{\theta}(x)\right)\mathrm{d}x=\int\frac{2(\pi_{\theta}(x))^{2}}{\pi_{\theta^{\prime}}(x)}\left(2\nabla_{\theta}\log\pi_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}+\nabla_{\theta}^{2}\log\pi_{\theta}(x)\right)\mathrm{d}x.$$
Notice that $f(\theta^{\prime})=1$ and $\nabla_{\theta}f(\theta^{\prime})=0$. Therefore, by the Mean Value Theorem, we get
$$f(\theta)=1+\frac{1}{2}\left\langle\nabla_{\theta}^{2}f(\tilde{\theta})(\theta-\theta^{\prime}),\theta-\theta^{\prime}\right\rangle,$$
where $\tilde{\theta}$ is a point between $\theta$ and $\theta^{\prime}$.
Now, from the expression of the Hessian matrix, we see that for any $\theta\in\mathbb{R}^{n}$,
$$\begin{aligned}\left\|\nabla_{\theta}^{2}f(\theta)\right\|_{2}&\leq\int\frac{2(\pi_{\theta}(x))^{2}}{\pi_{\theta^{\prime}}(x)}\left\|2\nabla_{\theta}\log\pi_{\theta}(x)\nabla_{\theta}\log\pi_{\theta}(x)^{\top}+\nabla_{\theta}^{2}\log\pi_{\theta}(x)\right\|_{2}\mathrm{d}x\\&\leq2(2C_{g}^{2}+C_{h})\int\frac{(\pi_{\theta}(x))^{2}}{\pi_{\theta^{\prime}}(x)}\,\mathrm{d}x\\&=2(2C_{g}^{2}+C_{h})\left(1+\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\left(\frac{\pi_{\theta}(x)}{\pi_{\theta^{\prime}}(x)}-1\right)^{2}\right]\right)\\&\leq2(2C_{g}^{2}+C_{h})(C_{w}^{2}+1).\end{aligned}$$
As a consequence, we have
$$\begin{aligned}\mathbb{E}_{x\sim\pi_{\theta^{\prime}}}\left[\|g(x,\theta^{\prime})-g_{w}(x,\theta,\theta^{\prime})\|^{2}\right]&\leq\left(6U^{2}C_{h}^{2}+6C_{g}^{2}\widetilde{C}_{g}^{2}+6\widetilde{C}_{h}^{2}\right)\|\theta-\theta^{\prime}\|^{2}+\left(4U^{2}C_{g}^{2}+4\widetilde{C}_{g}^{2}\right)\left(\int\frac{(\pi_{\theta}(x))^{2}}{\pi_{\theta^{\prime}}(x)}\,\mathrm{d}x-1\right)\\&\leq\left(6U^{2}C_{h}^{2}+6C_{g}^{2}\widetilde{C}_{g}^{2}+6\widetilde{C}_{h}^{2}+\left(4U^{2}C_{g}^{2}+4\widetilde{C}_{g}^{2}\right)(2C_{g}^{2}+C_{h})(C_{w}^{2}+1)\right)\|\theta-\theta^{\prime}\|^{2},\end{aligned}$$
which completes the proof. $\square$

## A.6 Proof Of Lemma 5.4

Proof of Lemma 5.4. By the definition of the stochastic gradient estimator given in Algorithm 2, we can see that for $t\geq0$,
$$\begin{aligned}\mathbb{E}_{t+1}&\left[\left\|g^{t+1}-\nabla_{\theta}\mathcal{J}(\theta^{t+1})\right\|^{2}\right]\\&=p\,\mathbb{E}_{t+1}\left[\left\|\frac{1}{N_{1}}\sum_{j=1}^{N_{1}}g(x^{t+1,j},\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t+1})\right\|^{2}\right]\\&\quad+(1-p)\,\mathbb{E}_{t+1}\left[\left\|\frac{1}{N_{2}}\sum_{j=1}^{N_{2}}\left(g(x^{t+1,j},\theta^{t+1})-g_{w}(x^{t+1,j},\theta^{t},\theta^{t+1})\right)+g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t+1})\right\|^{2}\right]\\&=p\,\mathbb{E}_{t+1}\left[\left\|\frac{1}{N_{1}}\sum_{j=1}^{N_{1}}g(x^{t+1,j},\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t+1})\right\|^{2}\right]\\&\quad+(1-p)\,\mathbb{E}_{t+1}\left[\left\|\frac{1}{N_{2}}\sum_{j=1}^{N_{2}}\left(g(x^{t+1,j},\theta^{t+1})-g_{w}(x^{t+1,j},\theta^{t},\theta^{t+1})\right)+\nabla_{\theta}\mathcal{J}(\theta^{t})-\nabla_{\theta}\mathcal{J}(\theta^{t+1})+g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\\&\leq p\,\mathbb{E}_{t+1}\left[\left\|\frac{1}{N_{1}}\sum_{j=1}^{N_{1}}g(x^{t+1,j},\theta^{t+1})-\nabla_{\theta}\mathcal{J}(\theta^{t+1})\right\|^{2}\right]\\&\quad+(1-p)\,\mathbb{E}_{t+1}\left[\left\|\frac{1}{N_{2}}\sum_{j=1}^{N_{2}}\left(g(x^{t+1,j},\theta^{t+1})-g_{w}(x^{t+1,j},\theta^{t},\theta^{t+1})\right)+g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\\&\leq\frac{p\sigma^{2}}{N_{1}}+(1-p)\,\mathbb{E}_{t+1}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]+(1-p)\frac{1}{N_{2}^{2}}\sum_{j=1}^{N_{2}}\mathbb{E}_{t+1}\left[\left\|g(x^{t+1,j},\theta^{t+1})-g_{w}(x^{t+1,j},\theta^{t},\theta^{t+1})\right\|^{2}\right]\\&\leq\frac{p\sigma^{2}}{N_{1}}+(1-p)\,\mathbb{E}_{t+1}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]+\frac{(1-p)C}{N_{2}}\left\|\theta^{t+1}-\theta^{t}\right\|^{2},\end{aligned}$$
where in the first inequality we use the facts that $\mathbb{E}\left[(X-\mathbb{E}[X])^{2}\right]\leq\mathbb{E}\left[X^{2}\right]$ for any random variable $X$ and that $g^{t}$ is an unbiased estimator of $\nabla_{\theta}\mathcal{J}(\theta^{t})$ for all $t\geq0$; in the second inequality, we rely on the fact that $\{x^{t+1,j}\}$ are independent; and
the last inequality is due to Lemma 5.3. By summing the above relation across $t=0,\ldots,T-2$, we see that
$$\sum_{t=1}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\leq\frac{p\sigma^{2}(T-1)}{N_{1}}+(1-p)\sum_{t=0}^{T-2}\mathbb{E}_{t+1}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]+\frac{(1-p)C}{N_{2}}\sum_{t=0}^{T-2}\mathbb{E}_{T}\left[\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\right],$$
which implies that
$$\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\leq\frac{p\sigma^{2}T+\sigma^{2}}{pN_{1}}+\frac{(1-p)C}{pN_{2}}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\right].\tag{14}$$
Recall from (9) that
$$\sum_{t=0}^{T-1}\left\|\theta^{t+1}-\theta^{t}\right\|^{2}\leq\frac{2\eta\Delta}{1-2\eta L}+\frac{\eta}{L(1-2\eta L)}\sum_{t=0}^{T-1}\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2},$$
which together with (14) implies that
$$\left(1-\frac{(1-p)C\eta}{pN_{2}L(1-2\eta L)}\right)\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]\leq\frac{p\sigma^{2}T+\sigma^{2}}{pN_{1}}+\frac{2\eta(1-p)C\Delta}{pN_{2}(1-2\eta L)}.$$
Thus, the proof is completed. $\square$

## A.7 Proof Of Theorem 5.5

Proof of Theorem 5.5.
Since $p=\frac{N_{2}}{N_{1}+N_{2}}\in(0,1)$ and
$$\eta\leq\frac{pN_{2}L}{2(1-p)C+2pN_{2}L^{2}}=\frac{N_{2}^{2}L}{2N_{1}C+2N_{2}^{2}L^{2}},$$
we can readily check that
$$\eta\in\left(0,\frac{1}{2L}\right),\quad1-\frac{(1-p)C\eta}{pN_{2}L(1-2\eta L)}\geq\frac{1}{2}.\tag{15}$$
Then, we can see that
$$\begin{aligned}\mathbb{E}_{T}&\left[\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\hat{\theta}^{T})+\partial\mathcal{G}(\hat{\theta}^{T})\right)^{2}\right]\\&\leq\left(2+\frac{2}{\eta L(1-2\eta L)}\right)\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{T}\left[\left\|g^{t}-\nabla_{\theta}\mathcal{J}(\theta^{t})\right\|^{2}\right]+\frac{1}{T}\left(\frac{2\Delta}{\eta}+\frac{4\Delta}{\eta(1-2\eta L)}\right)\\&\leq\frac{1}{T}\left(2+\frac{2}{\eta L(1-2\eta L)}\right)\left(1-\frac{(1-p)C\eta}{pN_{2}L(1-2\eta L)}\right)^{-1}\left(\frac{p\sigma^{2}T+\sigma^{2}}{pN_{1}}+\frac{2\eta(1-p)C\Delta}{pN_{2}(1-2\eta L)}\right)+\frac{1}{T}\left(\frac{2\Delta}{\eta}+\frac{4\Delta}{\eta(1-2\eta L)}\right)\\&\leq\frac{4}{T}\left(1+\frac{1}{\eta L(1-2\eta L)}\right)\left(\frac{T\sigma^{2}}{N_{1}}+\frac{(N_{1}+N_{2})\sigma^{2}}{N_{1}N_{2}}+\frac{2\eta N_{1}C\Delta}{N_{2}^{2}(1-2\eta L)}\right)+\frac{2\Delta}{T}\left(\frac{1}{\eta}+\frac{2}{\eta(1-2\eta L)}\right),\end{aligned}$$
where $\Delta:=F^{*}-F(\theta^{0})>0$ is a constant, the first inequality is due to Theorem 4.3, the second inequality is derived from Lemma 5.4, and the third inequality is implied by (15). Then, in order to have $\mathbb{E}_{T}\left[\operatorname{dist}\left(0,-\nabla_{\theta}\mathcal{J}(\hat{\theta}^{T})+\partial\mathcal{G}(\hat{\theta}^{T})\right)^{2}\right]\leq\epsilon^{2}$ for a given tolerance $\epsilon>0$, we can simply set $N_{2}=\sqrt{N_{1}}$,
$$\eta\leq\frac{N_{2}^{2}L}{2N_{1}C+2N_{2}^{2}L^{2}}=\frac{L}{2C+2L^{2}},$$
and require that
$$4\left(1+\frac{1}{\eta L(1-2\eta L)}\right)\frac{\sigma^{2}}{N_{1}}\leq\frac{\epsilon^{2}}{3},$$
$$\frac{4}{T}\left(1+\frac{1}{\eta L(1-2\eta L)}\right)\frac{(N_{1}+N_{2})\sigma^{2}}{N_{1}N_{2}}\leq\frac{\epsilon^{2}}{3},$$
$$\frac{2\Delta}{T}\left[\left(1+\frac{1}{\eta L(1-2\eta L)}\right)\frac{4\eta N_{1}C}{N_{2}^{2}(1-2\eta L)}+\frac{1}{\eta}+\frac{2}{\eta(1-2\eta L)}\right]\leq\frac{\epsilon^{2}}{3}.$$
Therefore, it suffices to set $N_{1}=O(\epsilon^{-2})$, $N_{2}=\sqrt{N_{1}}=O(\epsilon^{-1})$ and $T=O(\epsilon^{-2})$. (We do not derive the concrete expressions of $T$, $N_{1}$ and $N_{2}$ in terms of $\epsilon$ and other constants, but only give the big-O notation here for simplicity.)
Finally, we can verify that the sample complexity can be bounded as $$N_{1}+T\left(p N_{1}+(1-p)N_{2}\right)=N_{1}+T\frac{2N_{1}N_{2}}{N_{1}+N_{2}}\leq N_{1}+2T N_{2}=O(\epsilon^{-3}).$$ Therefore, the proof is completed.
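To make the two rates concrete, the following sketch (our own illustration, not part of the paper; constant factors and problem-dependent constants are dropped) plugs in the prescribed choices $N_{1}=\lceil\epsilon^{-2}\rceil$, $N_{2}=\lceil\epsilon^{-1}\rceil$, $T=\lceil\epsilon^{-2}\rceil$ and checks that the total sample count $N_{1}+2TN_{2}$ of the variance-reduced method scales like $\epsilon^{-3}$, while the plain stochastic proximal gradient method from Theorem 4.5 scales like $\epsilon^{-4}$:

```python
import math

def page_samples(eps: float) -> int:
    """Variance-reduced method: N1 + 2*T*N2 samples, with
    N1 = ceil(eps^-2), N2 = ceil(eps^-1), T = ceil(eps^-2)."""
    n1 = math.ceil(eps ** -2)
    n2 = math.ceil(eps ** -1)
    t = math.ceil(eps ** -2)
    return n1 + 2 * t * n2

def vanilla_samples(eps: float) -> int:
    """Plain stochastic proximal gradient: T*N samples, with
    N = ceil(eps^-2) and T = ceil(eps^-2)."""
    return math.ceil(eps ** -2) * math.ceil(eps ** -2)

# Halving eps multiplies the variance-reduced cost by ~2^3 = 8,
# while the vanilla cost grows by ~2^4 = 16 (exact binary fractions
# are used to avoid floating-point ceiling artifacts).
assert 7 < page_samples(0.125) / page_samples(0.25) < 9
assert 15 < vanilla_samples(0.125) / vanilla_samples(0.25) < 17
```

The asserted ratios are the empirical counterpart of the $O(\epsilon^{-3})$ versus $O(\epsilon^{-4})$ comparison.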
Review 1:

Summary: In this paper, the authors formally introduce a general problem setup for reward optimization. The authors introduce the problem setup, establish the smoothness of part of the objective under some assumptions, propose to solve the problem using existing proximal gradient descent methods, and provide a convergence guarantee for the algorithm, with a sample complexity of $O(\epsilon^{-4})$ to attain an $\epsilon$-stationary point. The paper also provides a stochastic variance-reduced version by applying the existing variance reduction method PAGE, which has an improved sample complexity of $O(\epsilon^{-3})$.

Strengths and Weaknesses:

Strengths
* The problem setup introduced in this paper seems to be novel and not discussed in prior literature.
* The presentation of the problem, the assumptions on the problem setup, the method to solve the problem, and the related theoretical results are presented clearly, and the paper is fairly easy to follow.

Weaknesses
* Since the problem setup is novel (which seems to be one of the main contributions of the paper) and not discussed in prior literature, the problem setup should be well motivated with several application setups and/or examples, which seems to be lacking.
* Related to the previous point, it is unclear when one would need to optimize a policy and a reward function that share the same parameter. In general, it seems like this kind of setup might lead to learning a meaningless reward function which the policy learns to maximize.
* The assumption of the boundedness of the norm of the gradient and the Hessian of the reward function seems to need more discussion and justification, since this kind of assumption is new and seems to be related to the novel problem setup introduced in this paper. The example provided in the paper as an illustration of the setup is a special case of the proposed problem setup, which is not illustrative of the novel setup and is widely discussed in the reinforcement learning (RL) literature.
* There are some strong assumptions used for analyzing the variance-reduced method, and the paper lacks a practical application of the proposed method, which might bolster the validity of the novel setup introduced in the paper. The authors acknowledge these limitations in the paper.

Minor Weaknesses:
* $\theta^* =$ might be incorrect in Equation (2), since the solution to the problem in Equation (2) might not be unique.
* The subsequent simplifications of the original problem used in Example 3.3 use the $\min$ operator, which is inconsistent with the original problem formulation.
* The definition of subgradient given after Equation (3) seems to be the definition of a subdifferential.
* The function $\text{dist}(\cdot, \cdot)$ used in Definition 3.4 is not properly introduced/defined.

Requested Changes: As mentioned in the Weaknesses section, it seems important to better motivate the general problem setup introduced in the paper, with motivating practical applications and/or some illustrative example that demonstrates the general problem setup introduced in this paper, which seems to be a critical requirement for the paper. Please refer to Minor Weaknesses for minor changes needed in the paper.

Broader Impact Concerns: None

==================================================

Review 2:

Summary: This paper considers the regularized expected reward optimization problem that subsumes many RL settings (MDPs, bandits, etc.). They analyze the standard stochastic proximal gradient approach to get an $\epsilon^{-4}$ sample complexity bound. They make further assumptions to show an improved sample complexity bound of $\epsilon^{-3}$ for the stochastic variance-reduced proximal gradient approach.

Strengths and Weaknesses:

S:
- The paper is well-written and includes relevant related works after all assumptions and results.

W: As this paper mentions, stochastic variance-reduced gradient approaches exist in optimization by Li et al. (2021b) and for MDPs by Gargiani et al. (2022).
The results in Section 5 mostly follow the analyses by Gargiani et al. (2022). The additional challenges in this work are the analyses adapted to the parameterized reward functions ($R_\theta$) and the regularization function $\mathcal{G}_\theta$. These challenges are mostly overcome by the uniform boundedness assumptions on rewards (and their twice-differentiable functions) and log-policies. So, I'd be more curious about the practical implications of this result, that is, showcasing the failure of PAGE-PG (Gargiani et al. 2022) in certain toy problems where adapting to parameterized reward functions and the regularized optimization becomes a necessity.

Requested Changes: na

Broader Impact Concerns: na

==================================================

Review 3:

Summary: This paper studies the sample complexity of stochastic gradient methods. In particular, it considers the regularized expected reward optimization problem with general policy parameterization. The paper first analyzes the stochastic proximal gradient method and provides a sample complexity of $O(\epsilon^{-4})$. Then it proposes an efficient stochastic variance-reduced proximal gradient method utilizing an importance sampling-based Probabilistic Gradient Estimator (PAGE), which improves the sample complexity to $O(\epsilon^{-3})$.

Strengths and Weaknesses:

Strengths
- The setting it considers is more general compared to some other related work, as it considers general convex regularizers, general policy parameterizations, and the non-oblivious setting.
- The sample complexity of $O(\epsilon^{-3})$ analyzed in this paper matches the sample complexity in existing competitive works.
- The proof is well-written and easy to follow.

Major Weaknesses
- The results derived via Algorithm 2 rely on a strong assumption on the importance weight (Assumption 5.1). However, there are papers that achieve the same sample complexity without this assumption (e.g., Fatkhullin et al. 2023, Shen et al. 2019, Yuan et al. 2022).
Moreover, although the setting considered in this paper is general, it would be better to compare and discuss the techniques, assumptions, and results in this paper under specific instantiated settings with other works.
- The novelty of the proof techniques is limited, as they are mostly adapted from classical results. For example, the proofs of Lemma 4.1 and Lemma 4.4 are based on the boundedness assumptions; the proof of Theorem 4.4 is based on a standard analysis of the stochastic proximal gradient method; and the proof of Lemma 5.3 is based on the proof of Lemma 4 in Li et al. 2021b. However, I am aware that the TMLR community does not consider novelty a necessary criterion for acceptance.

Minor Weaknesses
- In addition to analyzing global convergence with additional mild assumptions, as mentioned in the Conclusions, I think it would also be beneficial to analyze the sample complexity for convergence to second-order stationary points, which avoids getting stuck at saddle points.
- The choices of the notations $\pi_{\theta}$ (probability density of a single trajectory) and $x$ (a trajectory) are a bit misleading.
- It would be better to present a derivation of $\nabla_{\theta} \mathcal{J}(\theta)$ at the bottom of page 4.
- It would be better if numerical experiments were provided.
- There are assumptions on the Lipschitzness and smoothness of both the reward function (Assumption 3.1) and the score function (Assumption 3.2), whereas most other works with general policy parameterization have at most one of these assumptions for one theorem.

Requested Changes: Please address the issues I discussed above.

Broader Impact Concerns: There are no broader impact concerns for this theoretical work.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: We have one Accept and two Leaning Accepts among the reviewers.
I also read the paper before closely reading their reviews, and I agree with them that this is a reasonably good paper: it has some non-trivial contributions, it is generally well-written, and, according to the reviewers, sufficient evidence is provided for its claims (I have not verified the proofs myself). A major novelty of this work is that it considers a new class of problems in which both the policy and the reward function are parameterized. To solve this problem, a policy gradient-based approach, specifically the stochastic proximal gradient algorithm, is proposed to solve the regularized objective. The convergence rate to a stationary point is established. Furthermore, the paper suggests the use of a variance-reduction algorithm, PAGE. This leads to an improved convergence rate, under extra assumptions. It is notable that these types of convergence analysis for policy gradient-based methods have become relatively common in the past few years, but since this is a new problem setup, I would consider it a sufficiently novel and interesting contribution, even if some of the proof techniques have been used in the past.

The authors have revised the paper to address several of the reviewers' comments. I believe they have done a reasonable job. I recommend **acceptance with minor revisions**. Before accepting this paper, I would like to ask the authors to do a few other minor revisions in order to improve the readability of this work:
- Please clarify the use of $x$ a bit further. Specifically, is $x$ the same as $\tau$ in Example 3.3?
- Assumption 5.1 is strong, as Remark 5.2 already explains. What is not clear is how strong it is. It would be very helpful if the authors provided an analysis of the size of the constant $C_w$ in that assumption. This can be done for a simple concrete example, if a general characterization is difficult. For MDPs, this constant seems to depend on the ratio of the probabilities of actions according to each policy along the whole trajectory.
I can see that if there is no restriction over $\theta$ and $\theta'$, that is, they can be far from each other, this ratio can be very large; but if they happen to be in the same neighbourhood, it may be bounded, perhaps under some smoothness of the policy as a function of the parameter $\theta$ (this might already be implied by Assumption 3.2). A discussion and a worked-out example should not be very difficult to provide and would go a long way.
- Even though PAGE is not introduced in this work, its use is one of the contributions of this paper. Currently, there is not much intuition about why PAGE helps. This makes the paper a bit mysterious to a reader who has not read the original PAGE paper. I'd ask the authors to provide some intuition behind that algorithm, so that this paper would be more self-contained.

==================================================
# Improving Subgraph-GNNs Via Edge-Level Ego-Network Encodings

Nurudin Alvarez-Gonzalez *nuralgon@gmail.com* Universitat Pompeu Fabra

Andreas Kaltenbrunner kaltenbrunner@gmail.com Universitat Oberta de Catalunya, ISI Foundation Turin

Vicenç Gómez vicen.gomez@upf.edu Universitat Pompeu Fabra

Reviewed on OpenReview: *https://openreview.net/forum?id=N0Sc0KY0AH*

## Abstract

We present a novel edge-level ego-network encoding for learning on graphs that can boost Message Passing Graph Neural Networks (MP-GNNs) by providing additional node and edge features or extending message-passing formats. The proposed encoding is sufficient to distinguish Strongly Regular Graphs, a family of challenging 3-WL equivalent graphs. We show theoretically that such encoding is more expressive than node-based sub-graph MP-GNNs. In an empirical evaluation on four benchmarks with 10 graph datasets, our results match or improve previous baselines on expressivity, graph classification, graph regression, and proximity tasks—while reducing memory usage by 18.1x in certain real-world settings.

## 1 Introduction

Neural graph architectures are the current standard for learning from graph data. These methods automatically learn representations of nodes, edges, or graphs in a data-driven and end-to-end way. Message Passing Graph Neural Networks (MP-GNNs) are the most common model for learning on graphs. MP-GNNs process the input graph (data) with a computational graph (model) to learn useful representations through message passing between direct neighbors in the input graph. This framework facilitates the theoretical analysis of MP-GNNs (e.g., in terms of their expressive power (Xu et al., 2019), or characterizing issues such as *over-squashing* (Alon & Yahav, 2021) or *over-smoothing* (Oono & Suzuki, 2022)).
The idea of decoupling the input graph from the computational graph is the basis of leading-edge learning approaches, such as Sub-graph GNNs (Zhao et al., 2022; Abboud et al., 2022; Frasca et al., 2022; Mitton & Murray-Smith, 2023), perturbation methods (Papp et al., 2021; Dwivedi et al., 2022), or Graph Transformers (Yun et al., 2019; Ying et al., 2021; Rampášek et al., 2022; Kim et al., 2022). These methods extend the message-passing mechanism to more general structures induced by the graph, beyond direct neighbors—for example, between the nearest neighbors of a node at a given depth (ego-networks). This flexibility extends the expressive power of MP-GNNs, at the cost of an increased computational footprint and a departure from the inductive bias contained in the input graph, which must be learned again.

In this work, we present an alternative to previous *pure learning* approaches. Rather than learning on subgraphs, we introduce a systematic procedure to generate a pool of structural features (an encoding) which can subsequently be integrated into MP-GNNs. Similar approaches have been proposed recently (Bouritsas et al., 2023; Alvarez-Gonzalez et al., 2022). Crucially, our proposed features capture information at the edge level, including signals contained in the two ego-networks of adjacent nodes in the input graph. We call this encoding Elene, for Edge-Level Ego-Network Encodings. The benefits of such a representation are diverse: the encodings are interpretable and amenable to theoretical analysis, they are efficiently computable as a pre-processing step, and finally, they reach comparable performance with state-of-the-art learning methods. As an illustrative example, consider Strongly Regular Graphs (SRGs).
They are known to be *indistinguishable* by node-based sub-graph GNNs (Balcilar et al., 2021; Morris et al., 2023; Papp & Wattenhofer, 2022; Zhao et al., 2022; Frasca et al., 2022), as exemplified by the non-isomorphic 4 × 4 Rook and Shrikhande graphs in Fig. 1. We theoretically show that Elene is as expressive as node-only sub-graph GNNs, and expressive enough to differentiate certain classes of SRGs like those in Fig. 1.

Figure 1: Expressive power is typically analyzed in terms of the families of non-isomorphic graphs that models fail to distinguish: the 4 × 4 Rook (a) and Shrikhande (b) graphs are indistinguishable by node-only sub-graph GNNs (Frasca et al., 2022).

Another example of a challenging benchmark is the h-Proximity task (shown in Fig. 2), which requires the ability to capture graph properties that depend both on the graph structure (shortest path distances) and node attributes (colors) (Abboud et al., 2022). In this case, an enriched (learnable) MP-GNN with Elene features—called Elene-L—outperforms current baselines. In real-world benchmarks, Elene-L matches the performance of state-of-the-art learning methods at significantly lower memory costs, as we show experimentally in §7.

The paper is organized as follows. §3 defines and motivates Elene. §4 introduces Elene-L. §5 describes related work and §6 analyzes expressivity. Finally, §7 evaluates our methods in four benchmarks and §8 summarizes our results.

Figure 2: h-Proximity binary classification task—a pair of positive (a) and negative (b) 1-Proximity graph examples. An h-Proximity graph is positive if all red nodes have at most 2 blue neighbors up to distance h, and negative otherwise.

## 2 Notation And Definitions

In this work, $G = (V, E)$ denotes a graph with $n = |V|$ and $m = |E|$.
$l_G(u, v)$ is the shortest path length between $u, v \in V$ in $G$. $d_G(v)$ is the degree of $v$ in $G$, and we use $d_{max}$ for the maximum degree over all nodes in $G$. Double brackets $\{\{\cdot\}\}$ denote multi-sets, while $\bigcup$ and $\bigcap$, respectively, indicate set and multi-set union and intersection. We use the short-hand notation $x^r$ to signify that $x$ is contained $r$ times, where $y = \{\{x^r\}\}$ reads as "$x$ appears $r$ times in $y$". We use $\mathcal{S}^k_v = (\mathcal{V}^k_v, \mathcal{E}^k_v) \subseteq G$ for the $k$-depth induced ego-network sub-graph of $G$ centered on $v$ (abbreviated $\mathcal{S}$ in equations). We denote the maximum degree over all nodes in $\mathcal{S}^k_v$ by $d^k_{max}$. Likewise, we use $\mathcal{S}^k_{\langle u,v\rangle} = (\mathcal{V}^k_u \cap \mathcal{V}^k_v, \mathcal{E}^k_u \cap \mathcal{E}^k_v)$ to denote the intersection of ego-networks across edge $\langle u, v\rangle$. Feature vectors are shown in **bold**, as $\mathbf{x}_v$ for node $v$ and $\mathbf{x}_{\langle u,v\rangle}$ for edge $\langle u, v\rangle$; we denote vector concatenation by $\|$, and the Hadamard product by $\odot$. Finally, we represent a learnable embedding of a discrete input, e.g., degree or distance signals, as $\mathsf{Emb}(\cdot)$, and a learnable weight matrix as $\mathbf{W}$.

## 3 Defining Elene

In this section, we first present the proposed edge-level encodings and then illustrate their expressive power.

## 3.1 Constructing An Edge-Level Ego-Network Encoding

The main idea behind Elene encodings is to capture higher-order interactions that go beyond the node-centric perspective used by MP-GNNs. We look at the structure resulting not only from the ego-network of every node, but also from the combination of two ego-networks of adjacent nodes in the input graph, and design a pool of features based on that structure. Consider the $k$-depth ($k > 1$) ego-network $\mathcal{S}^k_v$ surrounding node $v$. We may ask: how many edges of a neighbor $u$ of $v$ reach nodes that are *1-hop closer to* $v$, *at the same distance as* $u$, or *1-hop farther from* $v$? The proposed Elene encodings elaborate on this idea to capture interactions between nodes and edges in ego-network sub-graphs.
More formally, consider a node $u$ contained in $\mathcal{S}^k_v$ and let $d^{(p)}_{\mathcal{S}}(u|v)$ count the edges from $u$ to nodes at a distance $l_{\mathcal{S}}(u, v) + p$ of $v$, with $p \in \{-1, 0, +1\}$:
$$d_{\mathcal{S}}^{(p)}(u|v)=\left|\left\{(u,w)\in{\mathcal{E}}_{v}^{k},\ \forall w\in{\mathcal{V}}_{v}^{k}:l_{\mathcal{S}}(v,w)=l_{\mathcal{S}}(u,v)+p\right\}\right|.$$
The degree of node $u$ decomposes as the sum of these *relative* degrees corresponding to these three different subsets of neighbors of $u$:
$$d_{\mathcal{S}}(u)=d_{\mathcal{S}}^{(-1)}(u|v)+d_{\mathcal{S}}^{(0)}(u|v)+d_{\mathcal{S}}^{(+1)}(u|v).$$
Fig. 3 (left) shows an example graph, with all nodes labeled with their degree and colored according to the distance to the root node of the ego-network, in this case, the node in green. The plot on the right shows a degree triplet for each node, which counts the *relative* degrees, or edges closer and farther to the root (1st and 3rd components), together with the individual degree (2nd component). Leveraging relative degrees yields Elene, an ego-network encoding as a multi-set of quadruplets counting all instances of distance and degree triplets in sub-graph $\mathcal{S}$:
$$e^{k}_{v}=\left\{\!\!\left\{\left(l_{\mathcal{S}}(u,v),\ d^{(-1)}_{\mathcal{S}}(u|v),\ d_{\mathcal{S}}(u),\ d^{(+1)}_{\mathcal{S}}(u|v)\right)\right\}\!\!\right\}_{\forall u\in\mathcal{V}^{k}_{v}}.\tag{1}$$
We can construct an Edge (ED) Centric encoding analogous to the Node (ND) Centric encoding of Eq. 1 by also encoding edge-wise sub-graph intersections for edge $\langle u, v\rangle$ as $e^{k}_{\langle u,v\rangle}$ and counting quadruplets across $\mathcal{S}^{k}_{\langle u,v\rangle}$ with distances to both $u$ and $v$. Using ED or ND encodings leads to different expressive power, as we show formally in §6. In both cases, App. C shows that Elene encodings are permutation invariant at the node level and equivariant at the graph level.

Figure 3: Example graph (left) and corresponding degree triplets (right) for nodes in the 2-hop ego-network rooted on the green node. The dashed blue node has one edge to the 0-hop root ($d^{(-1)}_{\mathcal{S}} = 1$), a degree of 4, and two edges 2-hops from the root ($d^{(+1)}_{\mathcal{S}} = 2$, red), so its degree triplet is (1, 4, 2).
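The construction of Eq. 1 can be sketched in a few lines of Python (our own illustration on an adjacency-dict graph, not the authors' reference implementation): a depth-bounded BFS gives the distances within $\mathcal{S}^k_v$, the relative degrees are read off per node, and a `Counter` over quadruplets is exactly the sparse frequency representation of the multi-set.

```python
from collections import Counter, deque

def elene_encoding(adj, root, k):
    """Multi-set of (distance, d^(-1), degree, d^(+1)) quadruplets for S^k_root."""
    # BFS distances from root, truncated at depth k.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if dist[node] == k:
            continue
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    # Relative degrees inside the induced ego-network.
    quads = Counter()
    for u, l_u in dist.items():
        closer = same = farther = 0
        for w in adj[u]:
            if w not in dist:      # neighbor outside the k-depth ego-network
                continue
            if dist[w] == l_u - 1:
                closer += 1
            elif dist[w] == l_u:
                same += 1
            else:
                farther += 1
        degree = closer + same + farther
        quads[(l_u, closer, degree, farther)] += 1
    return quads

# Triangle with a pendant node: edges 0-1, 1-2, 2-0, 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
enc = elene_encoding(adj, root=0, k=2)
assert enc[(0, 0, 2, 2)] == 1   # the root has two edges to 1-hop nodes
```

Note that degrees are computed inside the induced ego-network, so a depth-$k$ node never contributes a $d^{(+1)}$ count.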
## 3.2 Illustrating Elene

To illustrate Elene, we focus on Strongly Regular graphs. An $n$-vertex graph is $d$-regular if all $n$ nodes have degree $d$, i.e., $\forall v \in V,\ d_G(v) = d$. An $n$-vertex $d$-regular graph is said to be Strongly Regular if there exist $\lambda, \mu \in \mathbb{N}$ such that every two adjacent nodes have $\lambda$ neighbors in common, and every two non-adjacent nodes have $\mu$ neighbors in common. We denote Strongly Regular Graphs as SRG($n, d, \lambda, \mu$). Strongly Regular graphs with equal parameters are *indistinguishable* by the 1-WL (Weisfeiler & Leman, 1968) test—a classic graph algorithm known to distinguish graphs that are not isomorphic with high probability (Babai & Kucera, 1979)—and its more powerful $k = 3$-WL variant (Arvind et al., 2020; Balcilar et al., 2021)—whose ability to distinguish graphs has been shown to be the expressivity upper-bound for node-only Sub-graph GNNs (Bevilacqua et al., 2022; Frasca et al., 2022; Zhao et al., 2022). A natural question follows: *what structural information is sufficient to distinguish SRGs?*

Figure 4: The 4 × 4 Rook (a) and Shrikhande (b) graphs are indistinguishable by 3-WL as SRGs with parameters SRG(16, 6, 2, 2) (Arvind et al., 2020; Frasca et al., 2022). Elene (ND, top sub-graphs) is also unable to distinguish the graphs, while Elene (ED, bottom sub-graphs) counts different numbers of edges.

In Fig. 4, we show the 1-depth ego-networks $\mathcal{S}^{k=1}_{v_1}$ for the purple vertices labeled with '1', with their 1-hop neighbors colored in red (top smaller sub-graphs)¹. We represent both graphs in terms of $n = 16$ equal sub-graphs (one per node), analyzing whether the sub-graph pairs can be distinguished. Both sub-graphs have the same number of nodes (7), edges (12), and matching degree multisets $\{\{3^6, 6^1\}\}$. Furthermore, by coloring the edges as connected to the ego-network root (in green) or connecting adjacent neighbors of the root (in orange), the count of edge colors also matches.
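These matching sub-graph statistics can be checked mechanically. The sketch below is our own construction (not from the paper's code): the Rook graph connects grid cells sharing a row or column, and the Shrikhande graph is built as the standard Cayley graph on $\mathbb{Z}_4 \times \mathbb{Z}_4$ with connection set $\{\pm(1,0), \pm(0,1), \pm(1,1)\}$. The script verifies that both are SRG(16, 6, 2, 2) and that every 1-depth ego-network has 7 nodes and 12 edges in both graphs.

```python
from itertools import combinations, product

def rook_4x4():
    nodes = list(product(range(4), range(4)))
    return {v: {u for u in nodes if u != v and (u[0] == v[0] or u[1] == v[1])}
            for v in nodes}

def shrikhande():
    gens = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}  # +-(1,0),(0,1),(1,1) mod 4
    nodes = list(product(range(4), range(4)))
    return {v: {((v[0] + g[0]) % 4, (v[1] + g[1]) % 4) for g in gens} for v in nodes}

def srg_parameters(adj):
    """Return (n, d, lambda, mu) if adj is strongly regular, else None."""
    degs = {len(nbrs) for nbrs in adj.values()}
    if len(degs) != 1:
        return None
    lam = {len(adj[u] & adj[v]) for u, v in combinations(adj, 2) if v in adj[u]}
    mu = {len(adj[u] & adj[v]) for u, v in combinations(adj, 2) if v not in adj[u]}
    if len(lam) != 1 or len(mu) != 1:
        return None
    return (len(adj), degs.pop(), lam.pop(), mu.pop())

def ego_net_stats(adj, v):
    """(node count, edge count) of the 1-depth ego-network of v."""
    nodes = adj[v] | {v}
    edges = sum(1 for a, b in combinations(nodes, 2) if b in adj[a])
    return len(nodes), edges

rook, shri = rook_4x4(), shrikhande()
assert srg_parameters(rook) == (16, 6, 2, 2)
assert srg_parameters(shri) == (16, 6, 2, 2)
assert all(ego_net_stats(rook, v) == (7, 12) for v in rook)
assert all(ego_net_stats(shri, v) == (7, 12) for v in shri)
```

The interesting step, covered in the surrounding discussion, is that the induced sub-graphs on the pairwise ego-network *intersections* of adjacent nodes do differ between the two graphs.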
The Node-Centric (ND) Elene encoding, as shown in Eq. 1, corresponds to such a coloring and is thus unable to distinguish the pair of graphs. In §6, we formally prove this upper bound for Elene (ND), which coincides with the expressive power of node-based Sub-graph GNNs. In contrast, if we consider the 1-hop common neighbors (in blue) of adjacent nodes labeled '1' and '2', the intersecting sub-graphs are distinguishable (bottom smaller sub-graphs). Indeed, the number of edges differs between the 4 × 4 Rook graph (6 edges) and the Shrikhande graph (5 edges). This corresponds to Edge-Centric Elene (ED), and illustrates that it has more expressive power than the Node-Centric (ND) one.

## 3.3 Computational Complexity

The Elene encoding for a single node $v$ requires traversing all the edges in the ego-network $\mathcal{E}^k_v$. This can be computed via Breadth-First Search (BFS) bounded by depth $k$, which has worst-case complexity $O(d^k_{max})$. If $k$ is greater than the diameter of the graph, Elene must traverse all $m$ edges for the $n$ ego-networks with each node as the root. Encoding the entire graph thus has time complexity $O(n \cdot \min\{m, d^k_{max}\})$. Note that the more expressive edge-centric implementation requires executing the BFS from both nodes alongside an edge, with asymptotically no additional cost.

Elene is best suited for sparse graphs, where $d_{max} \ll n$. For fully connected graphs, $m = |\mathcal{E}^k_v| = n(n-1)/2$, which results in time complexity $O(n^3)$, matching the computational worst-case complexity of GNN-AK (Zhao et al., 2022), NGNNs (Zhang & Li, 2021), SUN (Frasca et al., 2022), ESAN (Bevilacqua et al., 2022), or SPEN (Mitton & Murray-Smith, 2023).

In terms of memory, Elene encodings require a sparse $3 \cdot (k + 1) \cdot (d_{max} + 1)$-component vector for each node $v \in V$ to represent the multi-set of quadruplets in Eq. 1. Accordingly, each entry holds the count of observed relative degrees at each distance from $v$. In App.
A, we describe a BFS implementation that produces a mapping of each Elene encoding quadruplet to its frequency, and can be parallelized over $p$ processors, yielding $O(n \cdot \min\{m, d^k_{max}\}/p)$ time complexity.

## 4 Learning With Elene: Elene-L

We now introduce two approaches for leveraging Elene encodings in practical learning settings—a simple concatenation of Elene over network attributes, and a fully learnable variant called Elene-L that updates node and edge representations during the learning process. The first approach represents Elene multi-sets as sparse vectors containing the frequency of each quadruplet $q$—which can be concatenated as attributes to $\mathbf{x}_v$ or $\mathbf{x}_{\langle u,v\rangle}$ when processing $e^k_v$ or $e^k_{\langle u,v\rangle}$:
$$\text{Elene}^{k}_{\text{vec}}(v)_{i}=\left|\left\{\!\!\left\{q\in e_{v}^{k}\ \middle|\ f(q)=i\right\}\!\!\right\}\right|,\tag{2}$$
where $f(q)$ is an indexing function mapping each unique quadruplet to an index in the sparse vector. The concatenation approach is the most memory efficient, using only as much memory as the encodings themselves, and can be computed once and reused during training or inference. Furthermore, this approach is directly applicable to any downstream learning model, e.g., an MP-GNN, without changing its architecture.

Certain tasks, however, require structural information *within the sub-graph* to be combined with node or edge *attributes* during learning. One example is the family of h-Proximity tasks, which require joint representations that integrate the Elene encodings with attributes and structure.

¹Note that for these two graphs, any node will have matching ego-networks regardless of their label—see proof for Theo. 2.

Elene-L addresses the limitations of concatenating Elene with node and edge attributes by learning over both *structures* and *attributes* at once. Elene-L learns non-linear functions (Φ, e.g.
a Dense Neural Network—DNN) to represent nodes, edges, and Node- or Edge-Centric sub-graphs. The representation of u in S^k_v is given by a learnable function Φnd:²

$$\Phi_{\mathrm{nd}}^{t}(u|v)=\Phi_{\mathrm{nd}}\Big(\mathbf{x}_{v}^{t}\,\Big\|\,\mathbf{x}_{u}^{t}\,\Big\|\,\mathsf{Emb}(u|v)\Big),\tag{3}$$

where x^t_v and x^t_u are the features of v and u at time-step t (i.e., after t layers, such that t = 0 denotes 'input' features), and Emb(u|v) is a learnable embedding of Elene encodings which we describe in §4.1. As with Elene, we produce a learnable representation of an edge ⟨u, w⟩ in S^k_v via a learnable Φed:

$$\Phi_{\mathrm{ed}}^{t}(u,w|v)=\Phi_{\mathrm{ed}}\Big(\mathbf{x}_{v}^{t}\,\Big\|\,\mathbf{x}_{\langle u,w\rangle}^{t}\,\Big\|\,\mathbf{x}_{u}^{t}\odot\mathbf{x}_{w}^{t}\,\Big\|\,\mathsf{Emb}(u,w|v)\Big).$$

The representation of the Node-Centric ego-network root at time t is a learnable ΦND applied over the aggregation of every node and edge in the sub-graph given a pooling function (P):

$$\Phi_{\mathrm{ND}}^{t}(v)=\Phi_{\mathrm{ND}}\Bigg(\mathbf{x}_{v}^{t}\,\Bigg\|\sum_{u}^{\mathcal{V}_{v}^{k}}\Phi_{\mathrm{nd}}^{t}(u|v)\,\Bigg\|\sum_{\langle u,w\rangle}^{\mathcal{E}_{v}^{k}}\Phi_{\mathrm{ed}}^{t}(u,w|v)\Bigg).\tag{4}$$

Similarly, the Edge-Centric representation of edge ⟨u, v⟩ at time t is a learnable ΦED consuming the aggregation over the ego-networks containing the edge, as shown in Fig. 5:

$$\Phi_{\mathrm{ED}}^{t}(u,v)=\Phi_{\mathrm{ED}}\Bigg(\sum_{w}^{\mathcal{V}_{\langle u,v\rangle}^{k}}\Phi_{\mathrm{ed}}^{t}(u,v|w)\Bigg).\tag{5}$$

Node and edge representations at t + 1 are updated via a learnable parameter γ gating the flow of Elene updates:

$$\mathbf{x}_{v}^{t+1}=\mathbf{x}_{v}^{t}+\gamma_{\mathrm{ND}}\cdot\Phi_{\mathrm{ND}}^{t}(v).$$
(6)

We follow the same update rule at the edge level:

$$\mathbf{x}_{\langle u,w\rangle}^{t+1}=\mathbf{x}_{\langle u,w\rangle}^{t}+\gamma_{\mathrm{ED}}\cdot\Phi_{\mathrm{ED}}^{t}(u,w).\tag{7}$$

We may use x^{t+1}_v and x^{t+1}_⟨u,w⟩ directly in the downstream task, or as inputs into an MP-GNN layer during learning—boosting its expressivity. We follow the latter approach in this work.

![5_image_0.png](5_image_0.png)

Figure 5: k-depth ego-network intersection following Eq. 5 for the green edge. The ego-networks of u and v (yellow (left) and purple (center), respectively) intersect on five nodes around ⟨u, v⟩ (dotted, right). We show V^{k=2}_⟨u,v⟩ = {u, v, w1, w2, w3, w4, w5} (right), indicating nodes reachable in 0 or 1 hops, exactly 1 hop, or 1 or 2 hops from u and v.

²We compress notation by using Φ^t for the output of Φ at step t.

## 4.1 Defining Elene-L Embeddings

The representations in Eq. 6 and Eq. 7 leverage attributes and Elene encodings through Emb(u|v) and Emb(u, w|v). To define Elene-L embeddings, three hyper-parameters determine the shapes of the embedding matrices: ω, the length of the embedding vectors; ρ, the maximum degree to be encoded (by default, ρ = d_max); and k, the maximum distance to be encoded (i.e., the ego-network depth). For the quadruplet of u, we abbreviate q_u = (l_u, d^1_u, d^2_u, d^3_u) = (l_S(u, v), d^{(−1)}_S(u|v), d_S(u), d^{(+1)}_S(u|v)) as defined in Eq. 1, and *jointly* embed distance and relative degrees. The embedding Emb(u|v) of u in S^k_v is given by:

$$\mathsf{Emb}\big(l_{u},d_{u}^{1},d_{u}^{2},d_{u}^{3}\big)=\Big(\mathbf{W}_{(l_{u},d_{u}^{1})}^{1,\mathsf{nd}}\,\Big\|\,\mathbf{W}_{(l_{u},d_{u}^{2})}^{2,\mathsf{nd}}\,\Big\|\,\mathbf{W}_{(l_{u},d_{u}^{3})}^{3,\mathsf{nd}}\Big),\tag{8}$$

where W^{1,nd}, W^{2,nd}, and W^{3,nd} ∈ R^{S×ω} are three node embedding matrices with S = (ρ + 1) · (k + 1) entries—one for each distance and relative degree pair. A visual representation of the attributes and Elene encodings of u is shown in Fig. 6.
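To make the joint (distance, relative degree) indexing concrete, the lookup behind Emb(u|v) above can be sketched in plain Python. This is a minimal illustration rather than the released implementation: dictionaries of randomly initialized rows stand in for the learnable matrices W^{1,nd}, W^{2,nd}, and W^{3,nd}, and list concatenation stands in for vector concatenation.

```python
import random

def make_matrix(k, rho, omega, seed):
    # One learnable row per (distance, relative degree) pair: S = (rho + 1) * (k + 1).
    rng = random.Random(seed)
    return {(l, d): [rng.gauss(0.0, 1.0) for _ in range(omega)]
            for l in range(k + 1) for d in range(rho + 1)}

def emb_node(quad, w1, w2, w3):
    """Emb(u|v): index each relative-degree slot jointly with the distance, then concatenate."""
    l_u, d1, d2, d3 = quad  # (distance, closer-layer degree, degree, farther-layer degree)
    return w1[(l_u, d1)] + w2[(l_u, d2)] + w3[(l_u, d3)]  # a 3 * omega component vector

k, rho, omega = 2, 4, 8
w1, w2, w3 = (make_matrix(k, rho, omega, seed) for seed in (0, 1, 2))
vec = emb_node((1, 1, 3, 2), w1, w2, w3)
assert len(vec) == 3 * omega
assert len(w1) == (rho + 1) * (k + 1)  # S entries per matrix
```

For the edge-level embedding of Eq. 9, the same idea applies with matrices keyed by (δ_uw, distance, relative degree) and a bidirectional sum over both endpoint orderings, making the result invariant to swapping u and w.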
To embed edge ⟨u, w⟩ in S^k_v, we use the quadruplets of u and w, q_u and q_w, and increase the granularity of distances to capture the *relative* direction of the edge, following Fig. 4:

$$\delta_{uw}=l_{u}-l_{w}+1\in\{0,1,2\}.$$

We embed ⟨u, w⟩ in a permutation-invariant manner, summing embeddings bidirectionally so that Emb(u, w|v) = Emb(w, u|v):

$$\mathsf{Emb}(u,w|v)=\Big(\mathbf{W}_{(\delta_{uw},l_{u},d_{u}^{1})}^{1,\mathsf{ed}}\,\Big\|\,\mathbf{W}_{(\delta_{uw},l_{u},d_{u}^{2})}^{2,\mathsf{ed}}\,\Big\|\,\mathbf{W}_{(\delta_{uw},l_{u},d_{u}^{3})}^{3,\mathsf{ed}}\Big)+\Big(\mathbf{W}_{(\delta_{wu},l_{w},d_{w}^{1})}^{1,\mathsf{ed}}\,\Big\|\,\mathbf{W}_{(\delta_{wu},l_{w},d_{w}^{2})}^{2,\mathsf{ed}}\,\Big\|\,\mathbf{W}_{(\delta_{wu},l_{w},d_{w}^{3})}^{3,\mathsf{ed}}\Big).\tag{9}$$

W^{1,ed}, W^{2,ed}, and W^{3,ed} ∈ R^{3×S×ω} are edge-level embedding matrices with 3× more entries to represent the three possible values of δ_uw.

![6_image_0.png](6_image_0.png)

Figure 6: Elene-L encoding of u in the k = 2 ego-network of v. The representation contains the feature vectors of both nodes (x_v and x_u, left), the distance information of u to v (2, center) and the relative degree information ([1, 2, 0], right).

## 5 Related Work

In this section, we connect related work with Elene encodings and the practical applications of Elene and Elene-L in §4. Per §1, the expressivity of MP-GNNs is often studied through the 1-WL test and its more powerful k-WL variants. Despite great successes in many domains (Duvenaud et al., 2015; Battaglia et al., 2016; Gilmer et al., 2017; Ying et al., 2018), the GIN architecture (Xu et al., 2019) showed that one-hop MP-GNNs are at most as expressive as 1-WL.
This result increased interest in expressive power within the community—both in the formal study of MP-GNNs (Papp & Wattenhofer, 2022), and in efforts to boost message passing with spectral (Balcilar et al., 2021), positional (You et al., 2019; Li et al., 2020; Abboud et al., 2022), path-level (Eliasof et al., 2022; Michel et al., 2023), sub-graph (Nikolentzos et al., 2020; Zhang & Li, 2021; Bevilacqua et al., 2022; Frasca et al., 2022; Mitton & Murray-Smith, 2023), or structural signals (Morris et al., 2019; Bodnar et al., 2021), or their combination (Ying et al., 2021; Zhao et al., 2022; Dwivedi et al., 2022; Rampášek et al., 2022). We now discuss the theoretical ability of models to express certain computations—expressivity in the abstract—and the empirical performance and architectures of graph learning methods.

- Expressivity. The most common framework to study expressivity is the family of k-WL tests and their variants (Morris et al., 2019). Recent research has also focused on other perspectives, such as matrix languages (Balcilar et al., 2021), or the GD-WL test (Zhang et al., 2023)—which reframes expressivity in terms of graph biconnectivity, capturing the ability to identify cut nodes and edges. Shortest Path Neural Networks (SPNNs) (Abboud et al., 2022) introduced a model aggregating across shortest-path distances, but not edges or messages across neighbors, whose expressivity differs from 1-WL and addresses the over-squashing problem (Alon & Yahav, 2021). Finally, another approach has been through 2-variable counting logics (Barceló et al., 2020; Grohe, 2021)—studying which Boolean statements MP-GNNs can express. Elene builds on previous expressivity analyses by presenting features that can distinguish challenging 3-WL equivalent graphs—SRGs. In §6, we show that Elene can fully distinguish between SRGs with different parameters, and prove that an Elene-L model can emulate SPNNs.
Furthermore, in §7 we empirically evaluate our models on the h-Proximity tasks and explore whether structural (k-WL) expressivity is all we need. We find that Elene-L outperforms previous strong baselines of SPNNs and Graphormers (Ying et al., 2021; Abboud et al., 2022), while simply concatenating Elene encodings underperforms—showing that *expressivity* without *attributes* is insufficient for certain tasks.

- Boosting Graph Neural Models. Besides studying flavors of expressivity, researchers have also focused on improving performance for MP-GNNs and Graph Transformers. We summarize the most relevant families of novel network architectures in connection with Elene and Elene-L:

- Sub-graph MP-GNNs. Elene is most related to equivariant, sub-graph methods—including k-hop GNNs (Nikolentzos et al., 2020), Structural MP-GNNs (SMP) (Vignac et al., 2020), Nested GNNs (**NGNNs**) (Zhang & Li, 2021), Identity-GNNs (**ID-GNN**) (You et al., 2021), Equivariant Subgraph Aggregation Networks (**ESAN**) (Bevilacqua et al., 2022), Ordered Subgraph Aggregation Networks (**OSAN**) (Qian et al., 2022), GNN-As-Kernel (**GNN-AK**) (Zhao et al., 2022), Shortest Path Neural Networks (**SPNN**) (Abboud et al., 2022), Subgraph Union Networks (SUN) (Frasca et al., 2022), and Subgraph Permutation Equivariant Networks (**SPEN**) (Mitton & Murray-Smith, 2023). By encoding structural attributes of the ego-network sub-graph, Elene captures similar signals as GNN-AK's centroid encodings. However, Elene-L extends node and edge representations within sub-graphs *first*, and then feeds sub-graph-aware data into a GNN—rather than applying a GNN on the sub-graph and aggregating its outputs, as in NGNNs and GNN-AK. During learning, Elene-L resembles SPEN and ESAN with the EGO+ policy with node marking, as the root of the ego-network is implicitly marked by the relative degree and distance pairs.
These sub-graph GNNs involve processing the sub-graphs during training and inference, which is avoided by approaches like Igel (Alvarez-Gonzalez et al., 2022), GSNs (Bouritsas et al., 2023), or ESC-GNN (Yan et al., 2023), and also by Elene—as they add sub-structure information without executing a GNN in the sub-graph. Theo. 1 shows that Elene encodings are a superset of sparse Igel vectors. Elene requires no choice of sub-structure to count. In contrast, GSNs require counting k-node structures, which has an exponential cost in k. Finally, ESC-GNNs also use structural degree and distance signals directly as inputs. However, Elene-L learns additional embeddings from the structural encodings rather than using them as static features. Other approaches instead tackle the representation task by learning to select sub-graphs, such as MAG-GNN (Kong et al., 2023) or Policy-Learn (Bevilacqua et al., 2023)—for which Elene signals could act as additional features. Finally, Elene-L can be understood as a graph rewiring approach, as recently exemplified by Dynamic Graph Rewiring (**DRew**) (Gutteridge et al., 2023), since each Elene-L layer can be independently parameterized to connect nodes via ego-networks and edge-level sub-graphs—adding virtual edges between vertices that are k hops away, and also passing signals across adjacent nodes (i.e., edges) whose k-depth ego-networks intersect. In this work, we only explore the impact of *static* edge-level rewiring through relative degrees and Node- or Edge-Centric sub-graphs. The key difference from the aforementioned methods is that Elene-L captures edge-level information both in the encoding and during learning, as per Fig. 4. §6.2 shows that edge-level signals boost expressivity, corroborating results from SUN that node-only sub-graph models are upper-bounded by the 3-WL test (Frasca et al., 2022).
In §7.2, we experimentally validate that Elene-L (ED), but not (ND), reaches 100% accuracy on SR25, a challenging SRG dataset previously solved without graph perturbations only by the O(n²) PPGN-AK (Maron et al., 2019; Zhao et al., 2022), and partially by **SPEN** (Mitton & Murray-Smith, 2023), which distinguished 97% of non-isomorphic pairs.

- Perturbation methods. Beyond sub-graph methods, random perturbations of the graph structure like DropGNN (Papp et al., 2021), Random Node Initializations (Abboud et al., 2021), or paths from random walks (Eliasof et al., 2022) have also shown surprising performance in expressivity tasks. Furthermore, random-walk-based methods have been shown to be effective at capturing structural information, including positional information (**RWPE**) (Dwivedi et al., 2022). Although Elene in its current definition does not consider graph perturbations or stochastic features, the underlying quadruplets can be easily adapted to ignore dropped-out nodes or edges, and can be seamlessly combined with random node initializations or global positional encodings.

- Graph Transformers. Similarly, the extension of Transformer models to graph tasks has led to increased research interest, notably with the introduction of Graphormer (Ying et al., 2021)—which included positional and degree encodings similar to Elene, but using only *absolute* in/out degrees. More recently, Pure Graph Transformers (Kim et al., 2022) removed graph-specific architecture choices, directly encoding nodes and edges as tokens processed through self-attention and a global read-out. Finally, a series of works have yielded high-performance recipes for graph transformers, such as GPS (Rampášek et al., 2022)—combining strong inductive biases from MP-GNNs with global and local encodings to build high-performance Graph Transformers.
Transformers on graphs can be understood as *fully-connected* graph processors, and it has been shown that Graphormers can be emulated through an SPNN (Abboud et al., 2022). In §6.3, we show that Elene-L can, in turn, emulate an SPNN—and transitively a Graphormer. We consider the analysis of edge-level Elene signals in Graph Transformers as future work, focusing our study on MP-GNN architectures.

## 6 Expressive Power

We now analyze the expressive power of Elene—formally answering our question on *which information* is sufficient to distinguish SRGs. We extend recent results on Igel, a sparse vector encoding similar to Elene (Alvarez-Gonzalez et al., 2022), and show that Edge-Centric and Node-Centric Elene are strictly more expressive than previous methods relying on degrees and distances by comparing their expressivity on SRGs. We then show that Elene-L is at least as expressive as Elene, and prove that Elene-L (ED) is more expressive than Elene-L (ND) and Elene (ND). Finally, we connect our framework with SPNNs (Abboud et al., 2022), showing that the latter can be expressed by an instance of Node-Centric Elene-L without edge-degree information—motivating our analysis on *attributed* tasks in §7.

## 6.1 Expressive Power Of Elene

Previous work has shown that encoding-based and sub-graph MP-GNN methods are limited in their ability to distinguish 3-WL equivalent SRGs (Arvind et al., 2020; Balcilar et al., 2021; Alvarez-Gonzalez et al., 2022; Frasca et al., 2022). Recently, Alvarez-Gonzalez et al. (2022) presented Igel—a simple, sparse node feature vector containing counts of distance and degree tuples in an ego-network—showing it is strictly more expressive than the 1-WL test. Following Eq. 2, Elene multi-sets may also be represented as sparse vectors—which can then be used as feature vectors, but also to distinguish ego-network sub-graphs. We build on the results of Alvarez-Gonzalez et al. (2022) and show that Elene is at least as expressive as Igel.
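The Elene/Igel relationship can be previewed numerically. The sketch below, a simplified stand-in for the BFS implementation described in the paper, computes 2-hop Elene (ND) quadruplets for the Petersen graph, an SRG(10, 3, 0, 1), assuming the quadruplet convention (distance, edges to the closer layer, degree, edges to the farther layer). Every node yields the same multi-set with layer multiplicities 1, d, and n − d − 1, and projecting away the two relative-degree slots recovers the corresponding Igel (distance, degree) pairs.

```python
from collections import Counter, deque

def petersen_adj():
    # Outer 5-cycle (0-4), spokes, inner pentagram (5-9): the SRG(10, 3, 0, 1).
    adj = {v: set() for v in range(10)}
    for i in range(5):
        for u, w in ((i, (i + 1) % 5), (i, i + 5), (i + 5, (i + 2) % 5 + 5)):
            adj[u].add(w)
            adj[w].add(u)
    return adj

def elene_nd(adj, root, k):
    # BFS truncated at depth k, then one quadruplet per reached node.
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] < k:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
    quads = Counter()
    for u, l in dist.items():
        closer = sum(1 for w in adj[u] if dist.get(w) == l - 1)
        farther = sum(1 for w in adj[u] if dist.get(w) == l + 1)
        # Full degree equals the ego-network degree here, since k = diameter.
        quads[(l, closer, len(adj[u]), farther)] += 1
    return quads

def to_igel(quads):
    # Igel keeps only (distance, degree): a projection of the Elene quadruplets.
    igel = Counter()
    for (l, _closer, d, _farther), count in quads.items():
        igel[(l, d)] += count
    return igel

adj = petersen_adj()
encodings = [elene_nd(adj, v, 2) for v in range(10)]
assert all(enc == encodings[0] for enc in encodings)  # identical for every root
assert sorted(encodings[0].values()) == [1, 3, 6]     # layer sizes 1, d, n - d - 1
assert to_igel(encodings[0]) == Counter({(0, 3): 1, (1, 3): 3, (2, 3): 6})
```

The projection in `to_igel` is the concrete form of the containment argument: an Igel vector can always be assembled from an Elene multi-set, but not vice versa.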
We then find an upper bound on the expressivity of Igel, which is at most able to distinguish between the n, d, or λ parameters of SRGs, but not µ, and show that Node-Centric and Edge-Centric Elene are strictly more expressive than Igel on SRGs, as they can explicitly encode all SRG parameters by counting edges:

Theorem 1. Node-Centric Elene is at least as expressive as Igel *(Alvarez-Gonzalez et al., 2022), and* transitively more expressive than 1-WL.

Proof. The Igel encoding in Alvarez-Gonzalez et al. (2022) is a simpler version of Eq. 2 that only considers distance (l_u) and *absolute* degree (d^2_u):

$$\operatorname{IGEL}_{\mathrm{vec}}^{k}(v)_{i}=\left|\left\{\big(l_{u},d_{u}^{1},d_{u}^{2},d_{u}^{3}\big)\in e_{v}^{k}\;\middle|\;f^{\prime}(l_{u},d_{u}^{2})=i\right\}\right|,$$

where f′(l_u, d^2_u) is a bijective function that does not consider *relative* degrees, in contrast with Elene's f. Thus, for any ego-network, Elene includes all information required to construct Igel vectors, so it is at least as expressive as Igel.

Theorem 2. Elene (ND) encodes and distinguishes SRGs with different parameters of n, d, λ and µ.

Proof. Consider SRG(n, d, λ, µ) = (V, E). The maximum diameter of an SRG is 2 (Brouwer & Van Maldeghem, 2022), so we focus on the case where k = 2. The Elene (ND) encoding of v ∈ V according to Eq. 1 is:

$$e_{v}^{2}=\left\{\!\!\left\{(0,0,d,d)^{1},\;(1,1,d,d{-}\lambda{-}1)^{d},\;(2,d{-}\mu,d,0)^{n-d-1}\right\}\!\!\right\}$$

By definition, any Node-Centric ego-network in an SRG has a single root with d neighbors; each of the d neighbors has one edge with the root and d−λ−1 edges to the next layer; and the remaining n−d−1 nodes non-adjacent to the root each have d−µ edges with the d neighbors of the root. Consider SRG′(n′, d′, λ′, µ′) = (V′, E′). If any of the parameters between SRG and SRG′ differ, so will e²_v from e²_{v′}.
This is not the case for Igel, which can at most capture n, d, and λ:

$$\mathrm{IGEL}_{v}^{1}=\left\{\!\!\left\{(0,d)^{1},\;(1,1{+}\lambda)^{d}\right\}\!\!\right\}$$

$$\mathrm{IGEL}_{v}^{2}=\left\{\!\!\left\{(0,d)^{1},\;(1,d)^{d},\;(2,d)^{n-d-1}\right\}\!\!\right\}$$

Thus, Elene (ND) can encode and distinguish all parameters of SRGs—outperforming Igel. However, Elene (ND) *cannot distinguish* non-isomorphic SRGs when n = n′, d = d′, λ = λ′, and µ = µ′.

Corollary 1. Elene (ND) is more expressive than Igel *and 1-WL, per Theo. 1 & Theo. 2.*

Corollary 2. Elene (ND*) signals at the node-level are not capable of distinguishing between non-isomorphic* SRG*s with equal parameters—e.g. the graphs in Fig. 4.*

Proposition 1. Elene (ED*, leveraging both* e^k_v ∀v ∈ V and e^k_⟨u,v⟩ ∀⟨u, v⟩ ∈ E) is strictly more expressive than Elene (ND*), as it can distinguish the pair of graphs in Fig. 4.*

## 6.2 Expressive Power Of Elene-L

Theorem 3. Elene-L with the sum as the pooling operator is at least as expressive as Elene.

Proof. We first show that Elene-L (ND) is at least as expressive as Elene (ND). We then show that the ED variants are at least as expressive as the ND variants (Prop. 2), and show through Fig. 4 that Elene-L (ED) is more powerful than Elene-L (ND).

On Elene-L (ND). ∀v ∈ V, the Elene-L representation of v is given by Φ^t_ND(v) as per Eq. 4. Φ^t_ND(v) is the result of applying ΦND to the concatenation of x^t_v and the combined representations of every u ∈ V^k_v and ⟨u, w⟩ ∈ E^k_v. Let ΦND and Φnd be identity functions; we exclude edge-level information by discarding the output of Φed.
We now expand Φ̂^t_ND(v), which is Φ^t_ND(v) with the changes to the learnable Φ:

$$\hat{\Phi}_{\mathrm{ND}}^{t}(v)=\Bigg(\mathbf{x}_{v}^{t}\,\Bigg\|\sum_{u}^{\mathcal{V}_{v}^{k}}\Big(\mathbf{x}_{v}^{t}\,\Big\|\,\mathbf{x}_{u}^{t}\,\Big\|\,\mathsf{Emb}(u|v)\Big)\Bigg).$$

We discard repeated x^t_v terms and rewrite the representation of v, distributing the sum over the concatenated vector:

$$\hat{\Phi}_{\mathrm{ND}}^{t}(v)=\Bigg(\mathbf{x}_{v}^{t}\,\Bigg\|\sum_{u}^{\mathcal{V}_{v}^{k}}\mathbf{x}_{u}^{t}\,\Bigg\|\sum_{u}^{\mathcal{V}_{v}^{k}}\mathsf{Emb}(u|v)\Bigg).$$

Let W^{1,nd}, W^{2,nd}, W^{3,nd} ∈ R^{S×S} used by Emb be identity matrices, so that every relative degree and distance pair out of S = (d_max + 1) · (k + 1) has a single position in W. By using the sum as the pooling function, we obtain the frequency of each relative degree and distance pair, matching Elene in Eq. 2. Thus, Φ̂^t_ND(v) contains the information contained in the Elene multi-set, reaching at least the same expressivity.

Proposition 2. Elene-L (ED*) variants with the sum as the pooling operator are at least as expressive as* Elene.

On Elene-L (ED). We had discarded Φed; reinstating it shows that the ED variants are at least as expressive as the ND variants, since the concatenation of edge-level information can only match or boost expressivity. Thus, the (ND) and (ED) variants of Elene-L are at least as expressive as Elene.

Theorem 4. Elene-L (ED) is more expressive than Elene-L (ND) and *Elene* (ND).

Proof. There is at least one pair of non-isomorphic SRGs that Elene-L (ED) can distinguish. In §3.2, we show that the Shrikhande and 4 × 4 Rook graphs (Arvind et al., 2020; Balcilar et al., 2021), both parametrized as SRG(16, 6, 2, 2), can be distinguished by the Edge-Centric counts that Elene-L (ED) captures (by implementing Eq. 1 at the *edge* level), despite being indistinguishable by 3-WL. Following from Theo. 1 and Theo. 2, both graphs are indistinguishable by Elene (ND) or Elene-L (ND), as well as by sub-graph GNNs like GNN-AK or SUN.
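The separation in Theo. 4 can be checked directly. The sketch below assumes both graphs are built on Z4 × Z4 (the 4 × 4 Rook graph via shared row or column; the Shrikhande graph as a Cayley graph with connection set {±(1, 0), ±(0, 1), ±(1, 1)}) and counts the edges induced on an adjacent pair together with its common neighbors: 6 versus 5, matching the edge counts discussed in §3.2.

```python
from itertools import combinations

NODES = [(i, j) for i in range(4) for j in range(4)]

def rook_adjacent(a, b):
    # 4 x 4 Rook graph: distinct cells are adjacent iff they share a row or a column.
    return a != b and (a[0] == b[0] or a[1] == b[1])

SHRIKHANDE_SET = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}

def shrikhande_adjacent(a, b):
    # Shrikhande graph as a Cayley graph over Z4 x Z4.
    return ((a[0] - b[0]) % 4, (a[1] - b[1]) % 4) in SHRIKHANDE_SET

def intersection_edges(adjacent, u, v):
    """Edges among {u, v} and the common neighbors of the adjacent pair (u, v)."""
    common = [w for w in NODES
              if w not in (u, v) and adjacent(w, u) and adjacent(w, v)]
    cell = [u, v] + common
    return sum(1 for a, b in combinations(cell, 2) if adjacent(a, b))

# Both graphs are SRG(16, 6, 2, 2) and 3-WL equivalent, yet edge-level counts differ:
assert intersection_edges(rook_adjacent, (0, 0), (0, 1)) == 6
assert intersection_edges(shrikhande_adjacent, (0, 0), (1, 0)) == 5
```

Because the node-centric multi-sets of the two graphs coincide, this 6-versus-5 edge count is exactly the kind of Edge-Centric signal that Elene-L (ED) exposes and the node-level variants cannot.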
Intuitively, SRGs are indistinguishable with Node-Centric k-depth ego-network sub-graph encodings when k ∈ {1, 2}, since all nodes produce identical representations, as shown in Theo. 2. However, the graphs can be distinguished by edge-level information as per Eq. 7, as the intersections of the k-depth ego-networks of ⟨v1, v2⟩ differ in edge counts between both SRGs—as observed in §3. The 4 × 4 Rook graph has |E^k_⟨v1,v2⟩| = 6 edges while the Shrikhande graph has |E^k_⟨v1,v2⟩| = 5, hence the graphs are distinguishable by Elene-L (ED), but not by Elene-L (ND) or Elene (ND).

## 6.3 Linking Elene And Shortest Path Neural Networks

Remark 1. A Graphormer with max. shortest path length M and global readout is an instance of Shortest Path Neural Networks (SPNNs) with k = M − 1 *depth (Abboud et al., 2022).*

Theorem 5. Elene-L (ND) is at least as expressive as Shortest Path Neural Networks (SPNNs), and transitively, Graphormers.

Proof. Let L^k_G(v) = {u | u ∈ V ∧ l_G(u, v) = k} be the nodes in G at exactly distance k from v. In Abboud et al. (2022), a k-depth SPNN updates the hidden state of node v via an aggregation over the 1, ..., k exact-distance neighbourhoods:

$$\mathbf{x}_{v}^{t+1}=\Phi_{\mathrm{sp}}\Big((1+\epsilon)\cdot\mathbf{x}_{v}^{t}+\sum_{i=1}^{k}\alpha_{i}\sum_{u\in\mathcal{L}_{G}^{i}(v)}\mathbf{x}_{u}^{t}\Big).\tag{10}$$

We show that Elene-L (ND) can implement SPNNs. First, let γND = 1 in Eq. 6, such that:

$$\mathbf{x}_{v}^{t+1}=\Phi_{\mathrm{ND}}^{t}(v)=\Phi_{\mathrm{ND}}\Bigg(\mathbf{x}_{v}^{t}\,\Bigg\|\sum_{u}^{\mathcal{V}_{v}^{k}}\Phi_{\mathrm{nd}}^{t}(u|v)\,\Bigg\|\sum_{\langle u,w\rangle}^{\mathcal{E}_{v}^{k}}\Phi_{\mathrm{ed}}^{t}(u,w|v)\Bigg).$$

We drop Φ^t_ed(u, w|v) as SPNNs ignore edge-level signals.³
Let ΦND(·) be composed of two functions, Φsp(g(·)),⁴ where:

$$g\Big(\mathbf{x}_{v}^{t}\,\Big\|\sum_{u}^{\mathcal{V}_{v}^{k}}\Phi_{\mathrm{nd}}^{t}(u|v)\Big)=(1+\epsilon)\cdot\mathbf{x}_{v}^{t}+\sum_{u}^{\mathcal{V}_{v}^{k}}\Phi_{\mathrm{nd}}^{t}(u|v).$$

We replace ΦND by Φsp and g(·), and expand Φ^t_nd(u|v):

$$\mathbf{x}_{v}^{t+1}=\Phi_{\mathrm{sp}}\Big((1+\epsilon)\cdot\mathbf{x}_{v}^{t}+\sum_{u}^{\mathcal{V}_{v}^{k}}\Phi_{\mathrm{nd}}\big(\mathbf{x}_{v}^{t}\,\big\|\,\mathbf{x}_{u}^{t}\,\big\|\,\mathsf{Emb}(u|v)\big)\Big).$$

We then instantiate Φnd(·) as:

$$\Phi_{\mathrm{nd}}(\cdot)=\sum_{i}^{k}\alpha_{i}\cdot\mathbf{if}[i=l_{S}(u,v)]\cdot\mathbf{x}_{u}^{t},$$

where if[·] can be implemented through the distance and degree signals in Emb, such that we can check whether the node distance matches a specific value.⁵ Substituting in x^{t+1}_v above yields:

$$\mathbf{x}_{v}^{t+1}=\Phi_{\mathrm{sp}}\Big((1+\epsilon)\cdot\mathbf{x}_{v}^{t}+\sum_{u}^{\mathcal{V}_{v}^{k}}\sum_{i}^{k}\alpha_{i}\cdot\mathbf{if}[i=l_{S}(u,v)]\cdot\mathbf{x}_{u}^{t}\Big),$$

which is equivalent to Eq. 10 and shows that Elene-L (ND) can learn like SPNNs—and, transitively through Rem. 1, that Graphormers can be emulated by Elene-L (ND).

## 7 Experimental Results

We now study the effect of introducing Elene and Elene-L in a variety of graph-level settings, evaluating where *purely structural* Elene encodings underperform Elene-L, and the practical impact of Elene variants in terms of model performance, training time, and memory costs. We describe our experimental protocol in §7.1 and provide reproducible code, hyper-parameters, and analysis scripts through GitHub⁶ for four experimental benchmarks:

A) Expressivity. Evaluates whether models distinguish non-isomorphic graphs (on the 1-WL equivalent EXP (Abboud et al., 2021) and 3-WL equivalent SR25 (Balcilar et al., 2021) datasets), count sub-graphs (in RandomGraph (Chen et al., 2020)), and evaluate graph-level properties (Corso et al., 2020).

B) Proximity.
Measures whether models learn long-distance *attributed* node relationships in the h-Proximity datasets (Abboud et al., 2022).

C) Real World Graphs. Evaluates performance on five large-scale graph classification/regression datasets from Benchmarking GNNs (ZINC, CIFAR10, PATTERN) (Dwivedi et al., 2020) and the Open Graph Benchmark (MolHIV, MolPCBA) (Hu et al., 2020a).

D) Memory Scalability. Evaluates the memory consumption of Elene-L on d-regular graphs, varying n and d_max to validate the algorithmic complexity analysis in §3.3, comparing with the memory consumption of GIN-AK, GIN-AK+, and SPEN (Mitton & Murray-Smith, 2023).

³Including edge-level signals may bring Elene-L (ED) to parity with Pure Graph Transformers (Kim et al., 2022). We do not explore this connection.
⁴g(·) is a linear combination over concatenated input vectors, learnable by a first layer of ΦND without activations.
⁵This is not necessary during learning: a one-hot 'decoder' can be implemented using a two-layer perceptron with ReLU activations.
⁶https://github.com/nur-ag/ELENE

## 7.1 Experimental Protocol

Reporting. When reported in the original studies, we show standard deviations for experiments with more than two runs, following Zhao et al. (2022), and highlight the best-performing models per task in **underlined bold**. Elene denotes Eq. 2 used as additional features, while Elene-L denotes the representations of Eq. 6 and Eq. 7. (ED) denotes Elene-L with Edge-Centric signals, while **(ND)** denotes a Node-Centric variant that ignores edge information for ablation studies. '†' indicates results from the literature.

Environment. Experiments ran on a shared server with a 48GB Quadro RTX 8000 GPU, 40 CPU cores, and 502GB RAM. Each individual job has a limit of 96GB RAM and 8 CPU cores. To measure memory and time costs without sharing resources, we also reproduced our experiments on real-world graphs on a SLURM cluster with nodes equipped with 22GB Quadro GPUs.
Finally, scalability experiments ran on Tesla T4 GPUs with 15.11GB of VRAM to validate our approach on consumer hardware.

Experimental Setup. We explore subsets of Elene hyper-parameters via grid search with k ∈ {0, 1, 2, 3, 5} for Elene and Elene-L, and test the ED/ND variants of Elene-L with embedding parameters ω ∈ {16, 32, 64} and ρ = d_max, using masked-mean pooling for stability. For h-Proximity (Abboud et al., 2022), we compare against SPNNs (Abboud et al., 2021) and Graphormer (Ying et al., 2021) as originally reported. For Expressivity and Real World Graphs, we reuse the hyper-parameters and splits of GIN-AK+ from Zhao et al. (2022) without architecture search, comparing against strong MP-GNN baselines from the literature where GNN-AK+ underperforms: CIN (Bodnar et al., 2021) for ZINC and SUN (Frasca et al., 2022) for sub-graph counting. We choose GINE (Hu et al., 2020b), an edge-aware variant of GIN (Xu et al., 2019), as our base MP-GNN, given that GIN-AK+ outperforms its uplifted counterparts GCN-AK+ and PNA-AK+ (Kipf & Welling, 2017; Corso et al., 2020; Zhao et al., 2022), without running into the out-of-memory issues of PPGN (Maron et al., 2019) in the PPGN-AK instantiation. Finally, for scalability we compare with GNN-AK on benchmark datasets (Zhao et al., 2022) and with SPEN (Mitton & Murray-Smith, 2023).

Experimental Objectives. We connect *expressivity* and its relation to graph *attributes*, comparing against methods that do *not* perturb graph structure, e.g., DropGNN (Papp et al., 2021); leverage random walks, e.g., RWPE (Dwivedi et al., 2022); or require costly pre-processing, e.g., O(n³) spectral eigendecompositions, such as GNNML3, LWPE, or GraphGPS (Balcilar et al., 2021; Dwivedi et al., 2022; Rampášek et al., 2022). Per §6.3, Elene relates to Graphormers via SPNNs, so we focus on sub-graph GNNs and SPNNs.

## 7.2 Expressivity

We test Elene on four MP-GNN expressivity datasets, with results captured in Tab. 1.
Introducing Elene signals improves the performance of GINs, and our single-run results on EXP and SR25 are consistent with our formal analysis in §6—namely, Elene and Elene-L (ND) and (ED) all reach 100% accuracy on the 1-WL equivalent EXP task, as expected from Theo. 1. Furthermore, Elene-L (ED) can distinguish all 3-WL equivalent SRGs in the challenging SR25 dataset—providing empirical evidence for Theo. 4.

On Graph Properties and Counting Substructures (2 runs averaged, as in Zhao et al. (2022)), a GIN + Elene-L (ND) model consistently outperforms GIN-AK *without context encoding*. In Counting, both Elene variants and GIN-AK+ are outperformed by SUN, but GIN+Elene matches or outperforms GIN on every task, showing that Elene features are informative and can boost performance by themselves. On both tasks, we find that GIN+Elene-L (ED) performs poorly—outperforming GIN+Elene but not our baselines. This might be caused by model over-parametrization, as six node- and edge-level embedding matrices are learned for 3 and 6 layers on Counting Substructures and Graph Properties, respectively.⁷ Finally, on the Graph Properties tasks of IsConnected and Diameter, a GIN-AK+ with Elene-L outperforms state-of-the-art results—and, interestingly, a GIN with Elene-L (ND) outperforms all existing baselines on the IsConnected task. This can be further improved by using a GIN-AK+ with Elene-L (ND).

⁷Weight sharing may help over-parametrization by learning a single structural representation, trading off expressivity.

Table 1: Expressivity benchmark results. In EXP and SR25, introducing Elene-L yields the best performance per task, shown in **underlined bold**. We highlight the best-performing configurations from *Elene* variants on GIN in italics, which we consistently observe in the Node-Centric (ND) configuration except for isomorphism tasks. Counting Substructures columns report MAE; Graph Properties columns report log10(MAE).

| Model | EXP (Acc.) | SR25 (Acc.) | Tri. | Tail Tri. | Star | 4-Cycle | IsCon. | Diam. | Radius |
|---|---|---|---|---|---|---|---|---|---|
| GIN | 50% | 6.67% | 0.357 | 0.253 | 0.023 | 0.231 | -1.914 | -3.356 | -4.823 |
| SUN† (Frasca et al., 2022) | - | - | 0.008 | 0.008 | 0.006 | 0.011 | -2.065 | -3.674 | -5.636 |
| GIN-AK† (Zhao et al., 2022) | 100% | 6.67% | 0.093 | 0.075 | 0.017 | 0.073 | -1.993 | -3.757 | -5.010 |
| GIN-AK+ | 100% | 6.67% | 0.011 | 0.010 | 0.016 | 0.011 | -2.512 | -3.917 | -5.260 |
| GIN+ELENE | 100% | 6.67% | 0.024 | 0.023 | 0.020 | 0.041 | -2.218 | -3.656 | -5.024 |
| GIN+ELENE-L (ND) | 100% | 6.67% | 0.012 | 0.015 | 0.014 | 0.016 | -2.620 | -3.815 | -5.117 |
| GIN+ELENE-L (ED) | 100% | 100% | 0.023 | 0.023 | 0.017 | 0.023 | -2.497 | -3.541 | -4.755 |
| Best (GIN / GIN-AK) + (ELENE / ELENE-L) | 100% | 100% | 0.010 | 0.010 | 0.014 | 0.011 | -2.715 | -4.072 | -5.267 |

## 7.3 h-Proximity

We evaluate Elene-L on h-Proximity (Abboud et al., 2022) tasks (10-fold averaged)—where nodes are assigned colors including red and blue, and models classify whether all red nodes have at most two blue nodes within h hops (positive) or otherwise (negative), as in Fig. 2. Models must learn which colors are relevant for the target and capture long-ranging dependencies during learning. Edge information is irrelevant, and pre-computed encodings like Elene cannot capture interactions of distances and node attributes.

In Abboud et al. (2022), the authors reported that MP-GNNs perform well on h = 1-Proximity, so we focus on the h ∈ {3, 5, 8, 10} variants. Tab. 2 shows our results, where Elene-L (ND) outperforms strong baselines from SPNNs and Graphormer (Abboud et al., 2021). As expected, a GIN + Elene did not meaningfully improve over GIN. Our numerical results provide empirical validation for Theo. 5.

Table 2: h-Proximity binary classification results (accuracy). Elene-L **(ND)** without degree information outperforms the strong SPNN and Graphormer baselines from Abboud et al. (2021) (†).

| Model | 3-Prox. | 5-Prox. | 8-Prox. | 10-Prox. |
|---|---|---|---|---|
| GCN† | 50.0 ± 0.0 | 50.0 ± 0.0 | 50.1 ± 0.0 | 49.9 ± 0.0 |
| GAT† | 50.4 ± 1.0 | 49.9 ± 0.0 | 50.0 ± 0.0 | 50.0 ± 0.0 |
| SPNN (k = 1)† | 50.5 ± 0.7 | 50.2 ± 1.0 | 50.0 ± 0.9 | 49.8 ± 0.8 |
| SPNN (k = 5)† | 95.5 ± 1.6 | 96.8 ± 0.7 | 96.8 ± 0.6 | 96.8 ± 0.6 |
| Graphormer† | 94.7 ± 2.7 | 95.1 ± 1.8 | 97.3 ± 1.4 | 96.8 ± 2.1 |
| GIN+ELENE | 52.0 ± 2.0 | 51.8 ± 1.2 | 52.4 ± 2.6 | 51.4 ± 1.1 |
| GIN+ELENE-L (ND) | 98.3 ± 0.5 | 98.6 ± 0.5 | 99.0 ± 0.5 | 99.2 ± 0.3 |

## 7.4 Real World Graphs

We also evaluate Elene and Elene-L on five real-world, large-scale graph classification and regression tasks. We test Elene and Elene-L on ZINC, MolHIV, PATTERN, CIFAR10, and MolPCBA, and report our results in Tab. 3. Given the increased memory and computation costs and the weaker performance of Elene-L (ED) in §7.2, we only evaluate Elene-L (ND).

On ZINC, GIN + Elene-L (3 averaged runs) achieves comparable results to existing baselines, including SUN (Frasca et al., 2022). Furthermore, by introducing Elene-L on GIN-AK+, the model matches the previous strong baseline achieved by CIN (Bodnar et al., 2021). On PATTERN (3 averaged runs), GIN + Elene-L achieves comparable results to GIN-AK+, but does not meet the best reported performance of GCN-AK+ by a 0.07% delta. We were not able to independently reproduce the GIN-AK+ results (Zhao et al., 2022) on MolHIV (5 averaged runs)—finding that GIN with Elene or Elene-L does not have statistically significant (p < 0.01) differences from GIN, while the performance of GIN-AK+ is statistically inferior.

Table 3: Results on real world benchmark datasets. We compare with published results and reproduce the experiments of Zhao et al. (2022). Adding Elene variants to GIN and GIN-AK+ yields state-of-the-art results on ZINC and MolPCBA, and matches the performance of existing methods on PATTERN and MolHIV.

| Model | ZINC (MAE) | PATTERN (Acc.) | MolHIV (ROC) | CIFAR10 (Acc.) | MolPCBA (AP) |
|---|---|---|---|---|---|
| GSN† | 0.115 ± 0.012 | - | 77.99 ± 1.00 | - | - |
| NGNN† | - | - | 78.34 ± 1.86 | - | 28.32 ± 0.41 |
| CIN† | 0.079 ± 0.006 | - | 80.94 ± 0.57 | - | - |
| SUN† | 0.083 ± 0.003 | - | 80.55 ± 0.55 | - | - |
| GCN-AK+† | 0.127 ± 0.004 | 86.887 ± 0.009 | 79.28 ± 1.01 | 72.70 ± 0.29 | 0.285 ± 0.000 |
| GIN-AK† | 0.094 ± 0.005 | 86.803 ± 0.044 | 78.29 ± 1.21 | 67.51 ± 0.21 | 0.274 ± 0.000 |
| GIN-AK+ | 0.082 ± 0.003 | 86.868 ± 0.028 | 77.37 ± 0.31 | 72.39 ± 0.38 | 0.293 ± 0.004 |
| (Lit. results†) | †0.080 ± 0.001 | †86.850 ± 0.057 | †79.61 ± 1.19 | †72.19 ± 0.13 | †0.293 ± 0.004 |
| GIN | 0.155 ± 0.005 | 85.692 ± 0.042 | 78.72 ± 0.54 | 59.55 ± 0.54 | 0.271 ± 0.003 |
| (Lit. results†) | †0.163 ± 0.004 | †85.732 ± 0.023 | †78.81 ± 1.01 | †59.82 ± 0.33 | †0.268 ± 0.001 |
| GIN+IGEL | 0.103 ± 0.004 | 86.762 ± 0.029 | 78.92 ± 0.92 | - | - |
| GIN+ELENE | 0.092 ± 0.001 | 86.783 ± 0.044 | 78.92 ± 0.35 | 56.34 ± 0.06 | 0.277 ± 0.002 |
| GIN+ELENE-L (ND) | 0.083 ± 0.004 | 86.828 ± 0.002 | 78.26 ± 0.93 | 68.95 ± 0.25 | 0.294 ± 0.001 |
| Best Result (ELENE / ELENE-L) | 0.079 ± 0.003 | 86.828 ± 0.002 | 79.15 ± 1.45 | 68.95 ± 0.25 | 0.294 ± 0.001 |

Tab.
4 shows time and memory costs of Elene compared to state-of-the-art methods. Despite not tuning hyperparameters, a GIN+Elene-L model outperforms GIN-AK in CIFAR and a strong GIN-AK+ baseline in MolPCBA. Furthermore, our setup of GIN layers combined with Elene always outperforms GNN-AK+ in terms of memory consumption. In ZINC, GIN+Elene-L (ND) requires 0.99GB compared to the 1.68GB of GIN-AK+ while reaching comparable performance (0.083±0.004 vs 0.082±0.003, respectively). In MolHIV, GIN+Elene model requires only 70MB during training while outperforming the ROC of our reproduced run of GIN-AK+, which required 790MB—an 11.3-fold reduction in memory usage. On PATTERN, we find that GIN+Elene-L (ND) achieves 99.95% of the performance of GIN-AK+ while consuming only 7.8GB of memory during training, compared to the 26.52GB reported by Zhao et al. (2022)— a 3.4-fold reduction. In summary, Elene and Elene-L (ND) achieve comparable results to the baselines with favorable time / memory efficiency. Elene encodings used as node-features (GIN+Elene) add minor overhead over GIN and match or outperform GIN+Igel in all tested settings, and GIN-AK in four over five. Elene-L (ND) also shows favorable memory performance versus GIN-AK and GIN-AK+ in all setups. Finally, we observe additional memory costs for Elene-L (ED) due to using node and edge embeddings. 
| Model | ZINC | PATTERN | MolHIV | CIFAR10 | MolPCBA |
|---|---|---|---|---|---|
| GIN | 6.02s / 0.12GB | 118.62s / 1.42GB | 14.88s / 0.07GB | 98.37s / 0.90GB | 223.13s / 0.44GB |
| GIN-AK | 9.76s / 1.11GB | - | 19.30s / 0.64GB | 283.93s / 18.80GB | 534.78s / 3.80GB |
| GIN-AK+ | 13.63s / 1.68GB | - | 25.47s / 0.79GB | - | 607.89s / 3.83GB |
| GIN+ELENE | 6.14s / 0.13GB | 90.15s / 1.47GB | 14.94s / 0.07GB | 120.21s / 0.91GB | 278.29s / 0.46GB |
| +ELENE-L (ND) | 10.23s / 0.99GB | 146.15s / 7.80GB | 42.53s / 0.54GB | 224.43s / 10.72GB | 451.12s / 2.39GB |
| +ELENE-L (ED) | 22.61s / 2.85GB | - | 32.28s / 1.40GB | - | 1025.58s / 7.10GB |

Table 4: Memory and time performance on benchmark datasets, controlling for shared resource use as per §7.1. Each cell reports the average epoch duration in seconds (s) and the maximum memory consumption in gigabytes (GB). Dashed entries indicate executions that terminated due to running out of memory.

## 7.5 Memory Scalability

Finally, we evaluate the scalability of Elene-L in a learning setting. We analyze memory consumption as a function of the graph size and compare it with other methods. For that, we follow a setting similar to that of Mitton & Murray-Smith (2023) for SPEN: we design and implement a learning task on a large d-regular graph and use it to explore memory consumption under different values of the degree d and number of nodes n. With this setup, we train the model for 25 epochs to predict a constant variable so that both input tensors and gradient computations are kept in memory. We evaluate both Elene-L (ND) and (ED), together with a GIN model without any Elene-L features, and GIN-AK and GIN-AK+ as baselines. For all Elene-L variants, we execute the benchmark with different values of k ∈ {1, 2, 3}.

![15_image_0.png](15_image_0.png)

Figure 7: Memory scalability analysis of Elene when dmax = 12.
We include GIN (dotted line) and maximum GPU memory (dash-and-dotted line) as indicative lower and upper memory bounds. Elene-L (ND, full lines) outperforms GIN-AK, GIN-AK+ (dotted lines), and Elene-L (ED, dashed lines). Additionally, Elene-L (ND) can encode all d-regular graphs in the benchmark when k = 1. As expected, memory consumption increases linearly with the number of nodes as dmax is kept fixed.

Fig. 7 shows our results when dmax = 12. As expected, the memory cost grows linearly as a function of n when dmax is fixed. We observe that Elene-L (ND) with k = 2 can scale up to graphs with 10,000 nodes with ego-network sub-graphs. Since all nodes have the same degree dmax = 12, at 2 hops we are guaranteed to find the root node, its 12 neighbors, and at least one additional neighbor at 2 hops, or 14 total nodes. In practice, as the graphs are randomly generated, we find that each of the 2-depth subgraphs contains an average of 144.13 nodes, with the expected maximum at 145. Additionally, our experiments show that Elene-L (ND) with k = 1 can scale up to graphs with 10^4.5 ≈ 31,623 nodes with up to dmax = 18. Furthermore, despite requiring additional memory, the more expressive Elene-L (ED) can nevertheless be used with k = 1 for graphs with up to 10^4.5 nodes as well. Scaling to graphs with more than 10^5 nodes is possible by increasing the CPU memory (our limit is 96GB) or by changing the implementation, for example by computing the encodings through parallel BFS, which would result in a slower algorithm (but of the same order of complexity). We provide additional results for dmax = 6 and dmax = 18, as well as different graph density patterns, in §B.2.

Recent methods like SPEN outperform global permutation-equivariant methods like PPGN.
However, SPEN still struggles to process significantly smaller graphs, with n ≈ 1,000 nodes and k = 1 depth ego-networks that contain 9 nodes, even with higher GPU memory, as reported by Mitton & Murray-Smith (2023). Compared with the complexity of SPEN, which is $O(n \cdot |\mathcal{V}^k_v|^2)$, Elene-L can encode ego-networks with 16 times more nodes at comparable memory usage, while outperforming sub-graph GNN baselines like GIN-AK and GIN-AK+.

## 7.6 Experiments Summary

Elene and Elene-L consistently boost GNN performance on the three experimental benchmarks and are shown to be scalable in Tab. 4 and Fig. 7. On Expressivity, §7.2 gives empirical support for Theo. 4, i.e., that Elene-L (ED) can distinguish SRGs, achieving 100% accuracy on the challenging SR25 (Balcilar et al., 2021) dataset. Although SUN (Frasca et al., 2022) outperforms other models on Counting Substructures, Elene and Elene-L still improve baseline performance and match previous GIN-AK and GIN-AK+ baselines, respectively. On Graph Properties, GIN+Elene-L matches existing baselines, and a GIN-AK+ model with Elene-L *outperforms* previous state-of-the-art results on the IsCon. and Diam. tasks with -2.715 and -4.072 log10(MSE), respectively. On h-Proximity, §7.3 validates Theo. 5, i.e., that Elene-L (ND) is at least as expressive as SPNNs (Abboud et al., 2022), as Elene-L (ND) *outperforms SPNNs and Graphormers* at capturing *attributed structures* that sparse Elene vectors alone *cannot capture*. On Real World Graphs from §7.4, Elene and Elene-L reach state-of-the-art results. On ZINC, GIN-AK+ with Elene-L achieves 0.079 ± 0.003 MAE, matching CIN (Bodnar et al., 2021). GIN+Elene-L matches 99.95% of the performance of baselines on PATTERN while consuming 3.4× *less memory*, and GIN+Elene reaches 0.1% less accuracy than GIN-AK+ but does so using 1.47GB, compared to the 26.54GB reported for GIN-AK+, an 18.1× *memory reduction*.
Finally, a GIN+Elene-L matches state-of-the-art results on MolPCBA (0.294 ± 0.001 vs. 0.293 ± 0.003 for GIN-AK+ (Zhao et al., 2022)) without hyper-parameter tuning and while *consuming 37.60% less memory (2.39GB vs. 3.83GB)*. On Memory Scalability, §7.5 shows that Elene-L can be used on d-regular graphs with more than 10^4 nodes for dmax ∈ {6, 12, 18}, validating the expected memory costs from §3.3 and outperforming the memory consumption of the strong GIN-AK and GIN-AK+ baselines and recent methods like SPEN (Mitton & Murray-Smith, 2023).

## 8 Conclusions

We presented Elene, a principled edge-level ego-network encoding capturing the structural signals sufficient to distinguish 3-WL equivalent SRGs. We proposed two variants, Elene and Elene-L, and showed that Node-Centric and Edge-Centric representations exhibit different expressive power. To position our findings, we formally drew connections between Elene and recent Sub-Graph GNNs, Graph Transformers, and Shortest Path Neural Networks. Empirically, we evaluated our methods on 10 different tasks, where the sparse Elene vectors improve performance on structural expressivity tasks. Our learnable Edge-Centric Elene-L variant boosts MP-GNN expressivity to reach 100% accuracy on the challenging SR25 dataset, while its Node-Centric counterpart improves over a strong baseline on the h-Proximity task and matches state-of-the-art results on several real-world graphs. Finally, we found that our methods provide a trade-off between memory usage and structural expressivity, reducing memory costs by up to 18.1× compared to sub-graph GNN baselines.

## Broader Impact Statement

Our main contributions are (a) a novel family of edge-aware features that can be used alone or during learning in MP-GNNs, with (b) a formal analysis of their expressivity showing that they can distinguish challenging SRGs, and (c) experimental results matching state-of-the-art learning models with favorable memory costs.
We do not foresee ethical implications of our theoretical findings. Our experimental results are competitive with state-of-the-art methods at a lower memory footprint, which may help solve tasks with limited memory budgets. ## Acknowledgements This work is part of the action CNS2022-136178 financed by MCIN/AEI/10.13039/501100011033 and by the EU Next Generation EU/PRTR. This work has been co-funded by MCIN/AEI/10.13039/501100011033 under the Maria de Maeztu Units of Excellence Programme (CEX2021-001195-M). ## References Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In *Proceedings of the Thirtieth International Joint* Conference on Artificial Intelligence, IJCAI-21, pp. 2112–2118, 8 2021. Ralph Abboud, Radoslav Dimitrov, and İsmail İlkan Ceylan. Shortest path networks for graph property prediction. In *Proceedings of the First Learning on Graphs Conference (LoG)*, 2022. Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021. Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, and Vicenç Gómez. Beyond 1-WL with local egonetwork encodings. In *The First Learning on Graphs Conference*, 2022. V. Arvind, Frank Fuhlbrück, Johannes Köbler, and Oleg Verbitsky. On weisfeiler-leman invariance: Subgraph counts and related graph properties. *Journal of Computer and System Sciences*, 113:42–59, 2020. Laszlo Babai and Ludik Kucera. Canonical labelling of graphs in linear average time. In 20th Annual Symposium on Foundations of Computer Science (sfcs 1979), pp. 39–46, 1979. doi: 10.1109/SFCS.1979.8. Muhammet Balcilar, Pierre Héroux, Benoit Gaüzère, Pascal Vasseur, Sébastien Adam, and Paul Honeine. Breaking the limits of message passing graph neural networks. In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021. 
Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. *Science*, 286(5439): 509–512, 1999. doi: 10.1126/science.286.5439.509. Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In *International Conference on Learning Representations*, 2020. Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and koray kavukcuoglu. Interaction networks for learning about objects, relations and physics. In *Advances in Neural Information Processing* Systems, volume 29, 2016. Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. In International Conference on Learning Representations, 2022. Beatrice Bevilacqua, Moshe Eliasof, Eli Meirom, Bruno Ribeiro, and Haggai Maron. Efficient subgraph gnns by learning effective selection policies. *arXiv preprint arXiv:2310.20082*, 2023. Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Liò, Guido F Montufar, and Michael Bronstein. Weisfeiler and Lehman go cellular: CW networks. In *Advances in Neural Information Processing* Systems, volume 34, 2021. Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. *IEEE Transactions on Pattern Analysis and* Machine Intelligence, 45(1):657–668, 2023. Andries E. Brouwer and Hendrik Van Maldeghem. *Strongly regular graphs*, volume 182. Cambridge University Press, 2022. ISBN 9781316512036. Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? In *Advances in Neural Information Processing Systems*, volume 33, pp. 10383–10395, 2020. Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Velickovic. Principal neighbourhood aggregation for graph nets. 
In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. ISBN 9781713829546. David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan AspuruGuzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, volume 28, 2015. Vijay Prakash Dwivedi, Chaitanya K Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. *arXiv preprint arXiv:2003.00982*, 2020. Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. In International Conference on Learning Representations, 2022. Moshe Eliasof, Eldad Haber, and Eran Treister. pathGCN: Learning general graph spatial operators from paths. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), *Proceedings of the 39th International Conference on Machine Learning*, volume 162 of Proceedings of Machine Learning Research, pp. 5878–5891. PMLR, 17–23 Jul 2022. Fabrizio Frasca, Beatrice Bevilacqua, Michael M Bronstein, and Haggai Maron. Understanding and extending subgraph gnns by rethinking their symmetries. In *Advances in Neural Information Processing Systems*, 2022. Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70, 2017. Martin Grohe. The logic of graph neural networks. In *Proceedings of the 36th Annual ACM/IEEE Symposium* on Logic in Computer Science, LICS '21, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781665448956. doi: 10.1109/LICS52264.2021.9470677. Benjamin Gutteridge, Xiaowen Dong, Michael M Bronstein, and Francesco Di Giovanni. 
DRew: Dynamically rewired message passing with delay. In *International Conference on Machine Learning*, pp. 12252–12267. PMLR, 2023. Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. *arXiv preprint* arXiv:2005.00687, 2020a. Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In *International Conference on Learning Representations*, 2020b. Jinwoo Kim, Dat Tien Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. Pure transformers are powerful graph learners. In *Advances in Neural Information Processing* Systems, 2022. Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR, 2017. Lecheng Kong, Jiarui Feng, Hao Liu, Dacheng Tao, Yixin Chen, and Muhan Zhang. MAG-GNN: Reinforcement learning boosted graph neural network. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. In Advances in Neural Information Processing Systems, volume 33, pp. 4465–4478, 2020. Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In *Advances in Neural Information Processing Systems*, volume 32, 2019. Gaspard Michel, Giannis Nikolentzos, Johannes Lutzeyer, and Michalis Vazirgiannis. Path neural networks: Expressive and accurate graph neural networks. In Proceedings of the 40th International Conference on Machine Learning (ICML), 2023. Joshua Mitton and Roderick Murray-Smith. Subgraph permutation equivariant networks. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. 
Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. *Proceedings of* the AAAI Conference on Artificial Intelligence, 33(01):4602–4609, Jul. 2019. Christopher Morris, Yaron Lipman, Haggai Maron, Bastian Rieck, Nils M. Kriege, Martin Grohe, Matthias Fey, and Karsten Borgwardt. Weisfeiler and leman go machine learning: the story so far. J. Mach. Learn. Res., 24(1), mar 2023. ISSN 1532-4435. Giannis Nikolentzos, George Dasoulas, and Michalis Vazirgiannis. k-hop graph neural networks. *Neural* Networks, 130:195–205, 2020. Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In *International Conference on Learning Representations*, 2022. Pál András Papp and Roger Wattenhofer. A theoretical comparison of graph neural network extensions. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of *Proceedings of* Machine Learning Research, pp. 17323–17345. PMLR, 17–23 Jul 2022. Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. DropGNN: random dropouts increase the expressiveness of graph neural networks. In *35th Conference on Neural Information Processing* Systems, 2021. Chendi Qian, Gaurav Rattan, Floris Geerts, Mathias Niepert, and Christopher Morris. Ordered subgraph aggregation networks. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a General, Powerful, Scalable Graph Transformer. *Advances in Neural Information* Processing Systems, 35, 2022. Clément Vignac, Andreas Loukas, and Pascal Frossard. 
Building powerful and equivariant graph neural networks with structural message-passing. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, 2020. B Weisfeiler and A Leman. The reduction of a graph to canonical form and the algebra which appears therein. *Nauchno-Technicheskaya Informatsia*, 2(9):12–16, 1968. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. Zuoyu Yan, Junru Zhou, Liangcai Gao, Zhi Tang, and Muhan Zhang. Efficiently counting substructures by subgraph gnns without running gnn on subgraphs. *arXiv preprint arXiv:2303.10576*, 2023. Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, and Tie-Yan Liu. Do transformers really perform badly for graph representation? In *35th Conference on Neural* Information Processing Systems, 2021. Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In *Proceedings of the 24th ACM International Conference on Knowledge Discovery & Data Mining*, pp. 974–983, 2018. Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In *Proceedings of the 36th* International Conference on Machine Learning, volume 97 of *Proceedings of Machine Learning Research*, pp. 7134–7143, Long Beach, California, USA, 09–15 Jun 2019. PMLR. Jiaxuan You, Jonathan M Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In *35th AAAI Conference on Artificial Intelligence*, volume 35, pp. 10737–10745, 2021. Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. Graph transformer networks. In *Advances in Neural Information Processing Systems*, volume 32, 2019. Bohang Zhang, Shengjie Luo, Liwei Wang, and Di He. 
Rethinking the expressive power of GNNs via graph biconnectivity. In *International Conference on Learning Representations*, 2023.

Muhan Zhang and Pan Li. Nested graph neural networks. *Advances in Neural Information Processing* Systems, 34, 2021.

Lingxiao Zhao, Wei Jin, Leman Akoglu, and Neil Shah. From stars to subgraphs: Uplifting any GNN with local structure awareness. In *International Conference on Learning Representations*, 2022.

## A Elene Through BFS

This appendix showcases a BFS implementation of the Elene encoding that spans edges until reaching the maximum encoding depth k, given a node v in V. As noted in §3.3, this implementation can be trivially parallelized over p processors, as the encoding of each v ∈ V is independent of the other nodes.

## Algorithm 1 Elene Node Encoding Using BFS.

Input: G = (V, E), v ∈ V, k ∈ N
1: distances := {v : 0} ▷ Mapping of nodes to their distance to v, i.e., $l_G(u, v)$.
2: r_degrees := {v : (0, 0, 0)} ▷ Mapping of nodes to their relative degrees, i.e., $d^{(p)}_S(u|v)$.
3: for (src, dst) in G.bfs_edges(v, max_depth = k) do
4: if src ∉ distances then ▷ Invariant: only one node can be unknown / not in distances.
5: dst, src := src, dst
6: end if
7: if dst ∉ distances then ▷ dst is unknown, so its distance is one hop after src's.
8: distances[dst] := distances[src] + 1
9: end if
10: dist_delta := distances[dst] − distances[src] ▷ Compute the distance delta in {−1, 0, 1}.
11:
12: ▷ Access the relative degree counts of each node.
13: src_deg := r_degrees.get(src, [0, 0, 0])
14: dst_deg := r_degrees.get(dst, [0, 0, 0])
15:
16: ▷ Increment degree counts for each node in their respective 'direction'.
17: src_deg[dist_delta + 1]++ ▷ The indexing maps {−1, 0, 1} deltas into {0, 1, 2} vector indexes.
18: dst_deg[1 − dist_delta]++
19:
20: ▷ Update the relative degrees of src and dst.
21: r_degrees[src] := src_deg
22: r_degrees[dst] := dst_deg
23: end for
24:
25: ▷ For each u ∈ $\mathcal{V}^k_v$, compute Eq. 1 quadruplets, count their frequencies, and return the mapping.
26: mapping := {}
27: for u ∈ $\mathcal{V}^k_v$ do
28: quadruplet := (distances[u], r_degrees[u][0], r_degrees[u].sum(), r_degrees[u][2])
29: mapping[quadruplet]++
30: end for
Output: mapping

## B Benchmark Details

In this appendix, we provide an overview of the benchmark we execute when evaluating Elene, including the model variants we test and descriptions of our code and compute environment. We also summarize the datasets we use in §B.1.

Benchmark Configuration. We build on top of the implementation from Zhao et al. (2022), introducing explicit ego-network attributes into their evaluation framework for consistency. All Elene results are reported by extending the node and edge attributes given as input to a GIN (Xu et al., 2019) extended to support edge-level features when available (Hu et al., 2020b). In all experiments, we evaluate Elene-L on top of GINs with edge extensions (Hu et al., 2020b). For all explicit ego-network attribute methods, we summarize the available hyper-parameters in Tab. 5. For the implementation of Elene-L, we observed unstable training when using the sum pooling function during early stages of development. We found that training was stable using masked mean pooling, where the n node messages (or m edge messages) in the ego-network sub-graph are averaged using a binary mask over neighbors of the root node at distance k or less. All our results are reported using mean pooling, including our results on SR25, suggesting that this decision does not adversely impact the model expressivity expected from §6. The resulting implementation of Eq. 4 is:

$$\Phi_{\mathrm{BD}}^{t}(v)=\Phi_{\mathrm{out}}\left({\bf x}_{v}^{t}\Big\|\sum_{u}^{\mathcal{V}_{v}^{k}}\frac{\Phi_{\mathrm{nd}}^{t}(u|v)}{\mathrm{size}(\mathcal{V}_{v}^{k})}\Big\|\sum_{\{u,w\}}^{\mathcal{E}_{v}^{k}}\frac{\Phi_{\mathrm{ed}}^{t}(u,w|v)}{\mathrm{size}(\mathcal{E}_{v}^{k})}\right).$$

We use an analogous implementation for Eq. 7.
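As a concrete illustration, the masked mean pooling described above can be written in a few lines. The following is a minimal NumPy sketch under our own naming (`masked_mean_pool`, array shapes), not the paper's actual implementation: out-of-mask messages contribute nothing to the sum, and the normalization factor is the mask size rather than n.

```python
import numpy as np

def masked_mean_pool(messages, mask):
    """Average the (n, d) node messages of an ego-network sub-graph,
    keeping only nodes whose binary mask entry is 1 (i.e., nodes within
    distance k of the root); masked-out messages do not affect the mean."""
    mask = mask.astype(messages.dtype)[:, None]  # (n, 1) broadcastable mask
    return (messages * mask).sum(axis=0) / mask.sum()

# Two in-mask nodes are averaged; the third node is ignored entirely.
messages = np.array([[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]])
mask = np.array([1, 1, 0])
print(masked_mean_pool(messages, mask))  # [2. 3.]
```

Unlike sum pooling, the output magnitude is independent of the sub-graph size, which matches the training-stability observation above.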
Additionally, in our experimental benchmark we choose to implement Elene-L (ND) without the $\Phi^t_{\mathrm{ed}}$ term so that it more closely follows the node-centric Elene encodings of Eq. 1, reducing memory costs. We describe the hyper-parameters that control our models in Tab. 5.

| Parameter | Elene | Elene-L |
|---|---|---|
| Depth of Ego-Net (k) | {0, 1, 2} | {0, 1, 2, 3} |
| Embedding Type | Sparse | Dense, learned |
| Representation | Node-only | Node-centric (ND), Edge-centric (ED) |
| Max. Encoded Degree | Set to dmax from the training dataset. | Set to dmax from the training dataset, or 0 (ignore degree info). |
| Max. Encoded Distance | Equal to k | Set to k. Can be modified to control the sub-graph mean norm. factor. |

Table 5: Hyper-parameters controlling the behaviour of explicit ego-network attribute encodings. Elene only relies on k, while Elene-L has 5 additional configurable settings.

Tested Models. On the Expressivity tasks, ZINC, and MolHIV, we evaluate all learnable variants (ND and ED), while on the remaining classification/regression benchmarks we only consider **(ND)** models due to reduced memory costs and limited computational bandwidth. Furthermore, in all Elene-L setups we only test a reduced number of hyper-parameters due to computational constraints; unless specified otherwise, we only evaluate different values of the maximum sub-graph distance to embed. We describe the hyper-parameters and modeling choices in detail in §B.2.

## B.1 Dataset Details

We summarize the key aspects of the datasets we use to evaluate our proposed methods in §7. Tab. 6 contains an overview of each benchmark and dataset, the objective being addressed, and high-level dataset statistics, namely the number of graphs and the average number of nodes (n) and edges (m) per graph.
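To make the procedure of Algorithm 1 (§A) concrete, the following is a minimal, self-contained Python sketch of the node encoding. The adjacency-dict input, the function name, and the explicit edge loop (replacing the paper's `bfs_edges` iterator) are our own illustration, not the reference implementation.

```python
from collections import Counter, deque

def elene_encoding(adj, v, k):
    """Frequency map of (distance, deg_closer, deg_total, deg_farther)
    quadruplets over the depth-k ego-network of node v (cf. Algorithm 1)."""
    # BFS distances from the root v, truncated at depth k.
    distances = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if distances[u] == k:
            continue
        for w in adj[u]:
            if w not in distances:
                distances[w] = distances[u] + 1
                queue.append(w)
    # Relative degrees: edges towards / within / away from the root's layers.
    r_degrees = {u: [0, 0, 0] for u in distances}
    seen = set()
    for u in distances:
        for w in adj[u]:
            if w not in distances or (w, u) in seen:
                continue  # skip nodes beyond depth k and already-counted edges
            seen.add((u, w))
            delta = distances[w] - distances[u]  # in {-1, 0, 1}
            r_degrees[u][delta + 1] += 1
            r_degrees[w][1 - delta] += 1
    # Count quadruplet frequencies (the Eq. 1 summary for node v).
    return Counter(
        (distances[u], r_degrees[u][0], sum(r_degrees[u]), r_degrees[u][2])
        for u in distances
    )

# Path graph 0-1-2, rooted at 0 with k = 2: one quadruplet per node.
print(elene_encoding({0: [1], 1: [0, 2], 2: [1]}, 0, 2))
```

As in §A, each node's encoding is independent, so the function can be mapped over all v ∈ V in parallel.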
## B.2 Detailed Experimental Summary

In this section, we summarize our experimental setup and training procedure, describing the hyper-parameters we consider in each setting. For all the experiments described, we evaluate the Elene encodings by concatenating them with the node feature vectors, and as part of the edge features when available, using the element-wise product, following the same approach as Igel.

Expressivity. See expressivityDatasets.sh for details.

| Benchmark | Dataset | Objective | Tasks | Nr. of Graphs (Train / Valid / Test) | Avg. n | Avg. m |
|---|---|---|---|---|---|---|
| Expressivity | EXP | Distinguish 1-WL Equiv. graphs | 2 | 1200 | 44.4 | 110.2 |
| | SR25 | Distinguish 3-WL Equiv. graphs | 15 | 15 | 25 | 300 |
| | CountingSub. | Count graph substructures | 4 | 1500 / 1000 / 2500 | 18.8 | 62.6 |
| | GraphProp. | Regress graph properties | 3 | 5120 / 640 / 1280 | 19.5 | 101.1 |
| Real World Graphs | ZINC-12K | Molecular prop. regression | 1 | 10000 / 1000 / 1000 | 23.1 | 49.8 |
| | CIFAR10 | Multi-class class. | 10 | 45000 / 5000 / 10000 | 117.6 | 1129.8 |
| | PATTERN | Recognize subgraphs | 2 | 10000 / 2000 / 2000 | 118.9 | 6079.8 |
| | MolHIV | Binary class. | 1 | 32901 / 4113 / 4113 | 25.5 | 54.1 |
| | MolPCBA | Multi-label binary class. | 128 | 350343 / 43793 / 43793 | 25.6 | 55.4 |
| Proximity | h-Proximity | Binary classification | 4 | 9000 | 117.14 | 1484.82 |

Table 6: Dataset statistics.

—EXP and SR25. We evaluate Elene on GIN and GIN-AK+ for both datasets with k ∈ {0, 1, 2}. For Elene-L, we evaluate all model variants for k ∈ {0, 1, 2}, with 8-dim embeddings for EXP and 32-dim embeddings for SR25. All models use L = 4 for EXP and L = 2 for SR25.

—Counting Sub. and Graph Prop. We evaluate Elene on GIN and GIN-AK+ for both datasets with k ∈ {0, 1, 2}.
For Elene-L, we evaluate all model variants for k ∈ {0, 1, 2} with 16-dim embeddings. On the GraphProp dataset, we additionally try k = 3 after observing positive results during early evaluation, as larger values of k enable the model to capture long-range dependencies. All models use L = 3 for Counting Sub. and L = 6 for Graph Prop.

Real World Graphs. See benchmarkDatasets.sh for details.

—ZINC and MolHIV. We evaluate Elene on GIN and GIN-AK+ for both datasets with k ∈ {0, 1, 2}. For Elene-L, we evaluate all model variants for k ∈ {0, 1, 2, 3} with 32-dim embeddings. All models use L = 6 for ZINC and L = 2 for MolHIV.

—PATTERN. We evaluate Elene on GIN with k ∈ {0, 1, 2} and on GIN-AK+ with k ∈ {0, 1}. For Elene-L, we evaluate all model variants for k ∈ {0, 1, 2, 3} with 64-dim embeddings. Suspecting that degree information may not play a salient role in the sub-graph patterns, we also evaluated the setting without degree information, but found that this slightly degrades performance compared to models that encode degree attributes. All models use L = 6.

—CIFAR10 and MolPCBA. We evaluate node-centric Elene-L (ND) with k ∈ {1, 2, 3}. Due to computational constraints, we prioritize training with k = 3 given promising results on other tasks. On CIFAR10, we discard uninformative degree information, as the graphs are k = 8 nearest-neighbor graphs containing super-pixel information. We do not modify the architecture or hyper-parameters of the best-performing GNN-AK+ model reported in Zhao et al. (2022). Our results report averages and standard deviations of the evaluation metric (Accuracy for CIFAR10, Average Precision (AP) for MolPCBA) collected from 3 independent runs.

h-Proximity. See proximityResults.sh for details.

We evaluate node-centric Elene-L without degree information, which matches the configuration of SPNNs. We do not tune any hyper-parameters, evaluating Elene-L with k ∈ {3, 5}, fixing L = 3, and using 32-dim embeddings.
The first layer in the network embeds the color information, for which the model needs to appropriately learn to ignore irrelevant colors. Due to constrained computational resources, we only evaluate two maximum distances for Elene-L, 3 and 5, sharing embedding weights and introducing ego-network signals before each of the 3 GIN layers. We share Elene-L embedding matrices across all layers and set the maximum encoded degree d = 0 to only encode distance information. We report the mean and standard deviation of the binary classification accuracy computed across 10 folds of the dataset, following Abboud et al. (2022).

## Memory Scalability.

We provide additional results from the memory scalability experiments in §7.5, reporting memory consumption when dmax = 6 and dmax = 18 in Fig. 8 and Fig. 9.

Figure 8: Memory scalability analysis of Elene with dmax = 6, produced analogously to Fig. 7. We include GIN (dotted line) and maximum GPU memory (dash-and-dotted line) as indicative lower and upper memory bounds. Elene-L (ND, full lines) outperforms both GIN-AK, GIN-AK+ (dotted lines) and Elene-L (ED, dashed lines). Additionally, Elene-L (ND) can encode all d-regular graphs in the benchmark when k = 1. As expected, memory consumption increases linearly with the number of nodes as dmax is kept fixed.

Figure 9: Memory scalability analysis of Elene with dmax = 18. See caption of Figure 8 for details.

## Graph Density And Scalability.

We also provide extended memory scalability results by studying the impact of the density of the graph. We ran additional experiments on graphs with N = 1000 and evaluated how memory consumption grows with density, as a function of the degree of the nodes in the graph. We perform the same experiment in two settings: one where the degree distribution is regular (i.e., the graphs are d-regular, studying different values of d), and one where the distribution of degrees is irregular.
In the irregular case, we study the case in which all nodes have *at least* degree d, but may have higher connectivity following the Barabási-Albert preferential attachment model (Barabási & Albert, 1999).

- Memory Consumption on Regular Density Graphs. In Fig. 10, we compare GIN, GIN-AK, GIN-AK+ and Elene variants at depths k ∈ {1, 2, 3}. Note that we could not include SPEN, as described in §7.5, due to it reaching the maximum memory thresholds at dmax = 8.

Figure 10: Memory scalability analysis of Elene with N = 1000 as a function of increasing values of dmax. See caption of Figure 8 for details.

- Memory Consumption on Irregular Density Graphs. In Fig. 11, we repeat the analysis from Fig. 10 on graphs generated following the preferential attachment model, where each node has at least m edges. We find that Elene-L (ND, full lines) outperforms both GIN-AK, GIN-AK+ (dotted lines) and Elene-L (ED, dashed lines), matching §7.5 and the results on regular connectivity patterns shown in Fig. 10.

Figure 11: Memory scalability analysis of Elene with N = 1000 as a function of increasing values of dmin on random Barabási-Albert graphs. See caption of Figure 8 for details.

## B.3 Best Hyper-Parameters

In this section, we provide an overview of the best hyperparameters we find for Elene and Elene-L. For simplicity, we only report the best-performing model, i.e., not distinguishing between enhancing a GIN or a GIN-AK+ model. We group together hyper-parameters set at the dataset level (e.g., for the Counting Substructures or h-Proximity datasets), and report the hyper-parameters corresponding to the best models reported in §7. In our summary, we include the best-performing ego-network feature with (a) the ego-network depth k, (b) the number of layers L, and (c) the embedding layer size for Elene-L.
Table 7: Best hyper-parameters for the top-performing models after introducing explicit ego-network attributes, as shown in §7. We report the hyper-parameters corresponding to the best-performing model by looking at the objective performance metric on each dataset, and resolve ties by selecting the model with the lowest memory footprint.

| Benchmark | Dataset / Task | Ego-Net Feature | k-hops | L-Layers | Emb. Size (Elene-L) |
|---|---|---|---|---|---|
| Expr. | EXP | All ego-net features reach 100% accuracy | 1 | 4 | 32 |
| | SR25 | GIN + ELENE-L (ED) | 1 | 2 | 32 |
| Counting Sub. | Triangle | GIN-AK+ + ELENE | 2 | 3 | 16 |
| | Tailed Tri. | GIN-AK+ + ELENE | 2 | 3 | 16 |
| | Star | GIN + ELENE-L (ND) | 2 | 3 | 16 |
| | 4-Cycle | GIN-AK+ + ELENE | 2 | 3 | 16 |
| Graph Prop. | IsConn. | GIN-AK+ + ELENE-L (ND) | 3 | 6 | 16 |
| | Diameter | GIN-AK+ + ELENE-L (ND) | 3 | 6 | 16 |
| | Radius | GIN-AK+ + ELENE-L (ND) | 3 | 6 | 16 |
| Real World Graphs | ZINC-12K | GIN + ELENE-L (ND) | 3 | 6 | 32 |
| | CIFAR10 | GIN + ELENE-L (ND) | 2 | 4 | 64 |
| | PATTERN | GIN + ELENE-L (ND) | 2 | 6 | 64 |
| | MolHIV | GIN + ELENE | 2 | 2 | N/A |
| | MolPCBA | GIN + ELENE-L (ND) | 3 | 5 | 64 |
| Proximity | h-Proximity (h = 3) | GIN + ELENE-L (ND) | 3 | 3 | 32 |
| | h-Proximity (h = 5) | GIN + ELENE-L (ND) | 5 | 3 | 32 |
| | h-Proximity (h = 8) | GIN + ELENE-L (ND) | 5 | 3 | 32 |
| | h-Proximity (h = 10) | GIN + ELENE-L (ND) | 5 | 3 | 32 |

We summarise our findings in Tab. 7. For datasets and tasks where multiple models achieve comparable performance (i.e., the same performance metric at the reported significant digits), we break ties by reporting the model with the lowest memory footprint across the tie.

## C Elene Is Permutation Equivariant And Invariant

We show that Elene is permutation equivariant at the graph level, and permutation invariant at the node level.
As all operations that Elene requires are permutation equivariant at the graph level and permutation invariant at the node level, the same holds for Elene representations. Lemma 1. *Given any* v ∈ V for G = (V, E) *and given a permuted graph* G′ = (V′, E′) of G *produced by a permutation of node labels* π : V → V′ *such that* v ∈ V ⇔ π(v) ∈ V′ and (u, v) ∈ E ⇔ (π(u), π(v)) ∈ E′, *all* Elene *representations are permutation equivariant at the graph level:*

$$\pi(\{\!\!\{e_{v_{1}}^{k},\ldots,e_{v_{n}}^{k}\}\!\!\})=\{\!\!\{e_{\pi(v_{1})}^{k},\ldots,e_{\pi(v_{n})}^{k}\}\!\!\}.$$

Furthermore, Elene *representations are permutation invariant at the node level:*

$$e_{v}^{k}=e_{\pi(v)}^{k},\forall v\in V,\pi(v)\in V^{\prime}.$$

Proof. Note that $e_v^k$ in Eq. 1 can be expressed in terms of $d_G^{(p)}(u|v)$ and $l_G(u, v)$. Both $l_G(\cdot, \cdot)$ and $d_G^{(p)}(\cdot|\cdot)$ are permutation invariant functions at the node level and equivariant at the graph level, as they rely on the distance between nodes, which does not change when the permutation π(·) is applied. Thus, Elene representations are permutation equivariant at the graph level, and permutation invariant at the node level.
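The invariance argument above can be checked numerically on a small example. The sketch below is our own illustration, not the authors' code: `ego_encoding` is a hypothetical helper that computes only a simplified distance-histogram summary of the k-hop ego network (not the full Elene representation), but it shares the property the proof relies on, namely that it is built from node distances alone.

```python
from collections import deque

def bfs_distances(adj, src, k):
    """Hop distances from src, truncated at k hops (plain BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ego_encoding(adj, v, k):
    """Distance-only ego-network summary: number of nodes at each hop <= k."""
    hist = [0] * (k + 1)
    for d in bfs_distances(adj, v, k).values():
        hist[d] += 1
    return tuple(hist)

# A small undirected graph and a relabeled copy of it.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3), (3, 4)]
n = 5
adj = {u: set() for u in range(n)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

perm = [2, 4, 0, 1, 3]  # pi(v) = perm[v]
padj = {u: set() for u in range(n)}
for u, v in edges:
    padj[perm[u]].add(perm[v]); padj[perm[v]].add(perm[u])

# Node-level invariance: e_v == e_pi(v) for every node v.
for v in range(n):
    assert ego_encoding(adj, v, 2) == ego_encoding(padj, perm[v], 2)
# Graph-level equivariance: the multisets of encodings coincide.
assert sorted(ego_encoding(adj, v, 2) for v in range(n)) == \
       sorted(ego_encoding(padj, v, 2) for v in range(n))
```

The assertions mirror the two claims of Lemma 1: per-node encodings are unchanged under π, and the graph-level multiset of encodings is merely permuted.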
Review 1: Summary: The paper proposes a novel edge-level ego-network encoding for learning on graphs, called ELENE, which can enhance the expressivity and performance of message-passing graph neural networks (MP-GNNs). ELENE captures the structural information of the two ego networks of adjacent nodes in the input graph and can distinguish strongly regular graphs, a family of hard-to-differentiate graphs. The paper also introduces two variants of ELENE, one that is sparse and one that is learnable. It shows that they have different expressive power depending on whether they are node-centric or edge-centric. The paper evaluates ELENE and its variants on various graph learning tasks, such as graph classification, graph regression, and proximity tasks. The results show that they match or improve on the state-of-the-art methods while significantly reducing memory usage. Strengths and Weaknesses: Strengths: * The paper addresses an important problem of enhancing the expressivity and performance of graph learning, which is a challenging and practical task in machine learning. The paper is genuinely well-written with helpful illustration figures. * A novel and effective encoding of ELENE, which leverages the edge-level ego-network information to capture the structural signals of the input graph, is proposed to distinguish strongly regular graphs, a family of 3-WL equivalent graphs with theoretical groundings. * Extensive experiments and analysis demonstrate the advantages of ELENE and ELENE-L over existing methods on various datasets and tasks. The paper also shows that ELENE and ELENE-L improve memory usage by up to 18.3x compared to sub-graph GNN baselines. Weaknesses: * As pointed out by the authors, approaches [1-2] similar to ELENE have been proposed recently, but the difference between ELENE and [1-2] is not explicitly discussed. This is essential especially since the theoretical findings are built upon the results from [1].
Therefore, a detailed discussion between ELENE and [1-2] as well as an experimental comparison between them is needed to further clarify the motivation. * While Section 7.5 indicates the memory scalability of ELENE against GIN and GIN+AK, it is unclear whether ELENE could tackle graphs with massive numbers of nodes (more than 10^5). This concern arises since ELENE-L (ED) can hardly be employed on the reported real-world benchmark datasets (as indicated by the authors in Section 7.4). This might be achievable by introducing some large-graph sampling techniques. * The performance of ELENE achieves incremental improvement over a small set of baselines. For example, in Table 3, ELENE could only achieve on-par performance on two datasets and weaker performance on the other three in comparison with the other baselines. Therefore, the evaluation could be more convincing by introducing more related baselines (such as [1-2]) or other real-world benchmarks that could better demonstrate the expressivity, like the ones in Table 1. In other words, how the enhancement of expressivity could be reflected in real-world settings needs further demonstration. * In the discussion of memory usage comparison, it seems unfair to emphasize the comparison between GIN+ELENE and GIN-AK since the parameter complexity is different. From Table 4, it can be noticed that GIN+ELENE-L (ND) has comparable memory consumption against GIN-AK+ while GIN+ELENE-L (ED) uses more memory than GIN-AK+. Therefore, it would be better to conduct a parameter complexity analysis to better reflect the memory usage analysis. * Minor issues: [2] is not correctly cited in its published journal. What is the difference between GIN-AK and GIN-AK+? [1] Beyond 1-WL with Local Ego-Network Encodings, LoG 22 [2] Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting, TPAMI 22 Requested Changes: Please kindly refer to the Weaknesses.
Broader Impact Concerns: The broader impact is fully discussed in the Broader Impact Statement section from my perspective. ================================================== Review 2: Summary: The authors introduce a novel edge-level ego-network encoding (ELENE) to enhance the learning capabilities of Message Passing Graph Neural Networks (MP-GNNs). ELENE improves expressivity by providing additional node and edge features and extends standard message-passing schemes. The encoding shows superiority in distinguishing a challenging class of 3-WL equivalent graphs, namely the Strongly Regular Graphs. Their theorems seem to be correct, and provide expressivity results over ELENE, and ELENE-L configurations. The paper develops a thorough experimental framework for evaluating ELENE's contribution to graph learning tasks. Its empirical evaluation across various benchmarks shows that ELENE matches or exceeds existing baselines in tasks like graph classification and regression, with a significant reduction in memory usage. Strengths and Weaknesses: **Strengths** - Novel, and rather interesting approach to graph learning methods, incorporating edge-level ego-network embeddings. - Theoretical results show improvement in terms of expressivity, matching, and surpassing expressivity results from IGEL. - *Technical validity*: The theorems seem correct, supporting the paper's arguments. - One of the most interesting and selling points of the method is their efficient memory usage. In particular, they seem to be superior with respect to another competitive baseline GIN-AK$^+$. **Weaknesses** - [Density analysis] Section 7.5 could benefit from a more detailed investigation of denser graphs. A plot similar to 7, but with increasing density would be very insightful to capture the impact of the parameter $d$ of the SRGs. 
- [Baseline Comparison] Since scalability is one of the main contributing axes of the method, it would be very helpful to report, also, the behavior of competing baselines with respect to increasing graph sizes (in Section 7.5). - In Table 3, baselines (CIN and GCN-AK$^+$) seem to outperform ELENE on 3 out of 5 tasks. However, I do not see that as a strong weakness, since the required memory consumption is higher for the baselines. Requested Changes: Following my thoughts in the *Weaknesses* section, I suggest that the following changes would further improve the paper: 1. Inclusion of a plot with increasing density for SRGs, showcasing the correlation with respect to the density. Thus, the authors can further validate the computational complexity claims. 2. Inclusion of competing baselines in the scalability evaluation. Figures 7, 8, and 9 show only the comparison with GIN. 3. Another interesting addition would be experimentation with different graph types (with more diverse density patterns) to further enhance the scalability claims. Broader Impact Concerns: The ethical implications are not discussed. I do not see, however, any particular ethical implications that would require the addition of a Broader Impact Statement.
- The method seems novel to me, and seems to be effective in practice. - The authors provide many details about the implementation and experimental settings. Weaknesses: - In page 1 the authors claim "This flexibility extends the expressive power of MP-GNNs and their robustness,". Can you please refer to the robustness aspect? - In page 5 the authors mention several subgraph GNN methods ("GNN-AK, NGNNs, SUN, ESAN, or SPEN."), but their citation only comes several pages after. I would expect the citation to at least be in order of appearance in text. Thus I would recommend to include the references there as well. - In section 4, the authors first discuss a simple ELENE approach, but to my understanding there is no equation that defines it. Can you please add an equation so it is easier to compare the two proposed methods? - In equations 5 and 6, is gamma learned? or is it a hyperparameter? - The authors mention the over-squashing problem several times in the paper. Is there anything theoretical or practical that can be said about ELENE with respect to this problem? - In terms of "subgraph expressiveness without running subgraph GNNs", the authors should cite and discuss "Efficiently Counting Substructures by Subgraph GNNs without Running GNN on Subgraphs" - In terms of lightweight / efficient subgraph GNNs, the authors should cite and discuss "Ordered Subgraph Aggregation Networks", "MAG-GNN: Reinforcement Learning Boosted Graph Neural Network", and "Efficient Subgraph GNNs by Learning Effective Selection Policies". - In terms of paths in GNNs, the authors discuss the shortest path MPNN, but I would appreciate it if the authors can also discuss the differences between works like "pathGCN: Learning General Graph Spatial Operators from Paths" and "Path neural networks: Expressive and accurate graph neural networks". Requested Changes: Overall, I think that the paper is in great condition and is highly relevant.
It should be published after minor changes, as described in my review. Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Accept as is Comment: The paper was reviewed by three expert reviewers. All reviewers recommended acceptance of the paper and I agree with them. The reviewers initially raised concerns about the scalability of the proposed approach, the lack of baselines, the limited number of datasets and types of graphs. The reviewers also complained about missing related work. Those concerns were addressed by the authors in the revision and the quality of the paper improved significantly. I thus think that the paper is now ready for publication. ==================================================
# From Stability To Chaos: Analyzing Gradient Descent Dynamics In Quadratic Regression

Xuxing Chen *xuxchen@ucdavis.edu*
Department of Mathematics, University of California, Davis.

Krishnakumar Balasubramanian *kbala@ucdavis.edu*
Department of Statistics, University of California, Davis.

Promit Ghosal *promit@brandeis.edu*
Department of Mathematics, Brandeis University.

Bhavya Agrawalla *bhavya@mit.edu*
Department of Mathematics, Massachusetts Institute of Technology.

Reviewed on OpenReview: *https://openreview.net/forum?id=Wiklo5VpG7*

## Abstract

We conduct a comprehensive investigation into the dynamics of gradient descent using large-order constant step-sizes in the context of quadratic regression models. Within this framework, we reveal that the dynamics can be encapsulated by a specific cubic map, naturally parameterized by the step-size. Through a fine-grained bifurcation analysis concerning the step-size parameter, we delineate five distinct training phases: (1) monotonic, (2) generalized catapult, (3) periodic, (4) chaotic, and (5) divergent, precisely demarcating the boundaries of each phase. As illustrations, we provide examples involving phase retrieval and two-layer neural networks employing quadratic activation functions and constant outer layers, utilizing orthogonal training data. Our simulations indicate that these five phases also manifest with generic non-orthogonal data. We also empirically investigate the generalization performance when training in the various non-monotonic (and non-divergent) phases. In particular, we observe that performing ergodic trajectory averaging stabilizes the test error in non-monotonic (and non-divergent) phases.

## 1 Introduction

Iterative algorithms like gradient descent and its stochastic variants are widely used to train deep neural networks.
For a given step-size (or learning rate) parameter η > 0, the gradient descent algorithm is of the form $w^{(t+1)} = w^{(t)} - \eta\nabla\ell(w^{(t)})$, where ℓ is the training objective function being minimized, which depends on the loss function, the neural network architecture, and the dataset. Classical optimization theory operates under small-order step-sizes. In this regime, one can think of the gradient descent algorithm as a discretization of the so-called gradient flow equation given by $\dot{w}^{(t)} = -\nabla\ell(w^{(t)})$, which could be obtained from the gradient descent algorithm by letting η → 0. Additionally, assuming that the objective function ℓ has gradients that are L-Lipschitz, selecting a step-size η < 1/L guarantees convergence to stationarity. In stark contrast to traditional optimization, recent empirical studies in deep learning have revealed that training deep neural networks with large-order step-sizes yields superior generalization performance. Unlike the scenario with small step-sizes, where gradient descent dynamics follow a monotonic pattern, larger step-sizes introduce more intricate behavior. Various patterns like catapult (also related to *edge of stability*), periodicity, and chaotic dynamics in neural network training with large step-sizes have been observed empirically, for example, by Lewkowycz et al. (2020), Jastrzebski et al. (2020), Cohen et al. (2021), Lobacheva et al. (2021), Gilmer et al. (2022), Zhang et al. (2022), Kodryan et al. (2022), Herrmann et al. (2022). A recent work by Sohl-Dickstein (2024) also empirically observes that the boundary between stable and divergent training behaviour, in terms of hyperparameters (including the step-size parameter), exhibits a fractal structure. Furthermore, the necessity for step-size schedules to include large-order step-sizes to expedite convergence, and the ensuing chaotic behavior, has also been observed empirically outside the deep learning community by Van Den Doel & Ascher (2012), much earlier.
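The classical step-size threshold above can be made concrete on a one-dimensional quadratic. The following sketch is our own toy illustration (not an experiment from the paper): for $\ell(w) = (L/2)w^2$, the gradient $Lw$ is exactly L-Lipschitz, the update multiplies the iterate by $(1 - \eta L)$, and the stable/oscillatory/divergent regimes can be read off in closed form.

```python
def gd(eta, L=4.0, w0=1.0, steps=50):
    """Gradient descent on the 1-D quadratic loss l(w) = (L/2) * w**2."""
    w = w0
    for _ in range(steps):
        w -= eta * L * w  # update: w <- (1 - eta * L) * w
    return w

# eta < 2/L contracts toward the minimizer 0; eta > 2/L diverges geometrically;
# eta = 2/L oscillates between +w0 and -w0 forever.
assert abs(gd(eta=0.2)) < 1e-6   # 0.2 < 1/L = 0.25: converges
assert gd(eta=0.5) == 1.0        # eta = 2/L: period-2 oscillation (even # of steps)
assert abs(gd(eta=0.6)) > 1e3    # 0.6 > 2/L: |w| grows like 1.4**t
```

The η < 1/L condition quoted in the text is a sufficient condition for general smooth losses; on this quadratic the actual stability boundary sits at η = 2/L, which is exactly the 2/(step-size) curvature threshold recurring throughout the paper.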
Faster convergence of gradient descent with iteration-dependent step-size schedules that have specific patterns (including cyclic and fractal patterns) has been examined empirically by Lebedev & Finogenov (1971), Smith (2017), Oymak (2021), Agarwal et al. (2021), Goujaud et al. (2022), and Grimmer (2023), with Altschuler & Parrilo (2023) and Grimmer et al. (2023) proving remarkable state-of-the-art results; see also Altschuler & Parrilo (2023, Section 1.2) for a historical overview. Notably, the stated faster convergence behavior of gradient descent requires large-order step-sizes, very much violating the classical case. More importantly, the corresponding optimization trajectory, while being non-monotonic, exhibits intriguing patterns (Van Den Doel & Ascher, 2012). Considering the aforementioned factors, gaining insight into the dynamics of gradient descent with large-order step-sizes emerges as a pivotal endeavor. A precise theoretical characterization of the gradient descent dynamics in the large step-size regime for deep neural networks, and other such non-convex models, is a formidably challenging problem. Existing findings (as detailed in Section 1.1) often rely on strong assumptions, even when attempting to delineate a subset of the aforementioned patterns, and do not provide a comprehensive account of the entire narrative underlying the training dynamics. Recent research, such as Agarwala et al. (2023), Zhu et al. (2024), and Zhu et al. (2023b), has pivoted towards comprehending the dynamics of quadratic regression models based on a *local* analysis. These models offer a valuable testing ground due to their ability to provide tractable approximations for various machine learning models, including phase retrieval, matrix factorization, and two-layer neural networks, all of which exhibit unstable training dynamics. Despite their seeming simplicity, a fine-grained understanding of their training dynamics is far from trivial.
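To see the kind of instability at play, consider the scalar loss $\ell(u) = (u^2 - 1)^2$, which can be read as one-dimensional phase retrieval with a single measurement. The sketch below is our own toy illustration (not the paper's code): a small step-size gives monotone loss decay, while a larger one produces non-monotone "catapult"-like spikes in the loss without diverging.

```python
def loss_trajectory(eta, u0=0.3, steps=200):
    """Gradient descent on l(u) = (u**2 - 1)**2, with l'(u) = 4*u*(u**2 - 1)."""
    u, traj = u0, []
    for _ in range(steps):
        u -= eta * 4.0 * u * (u * u - 1.0)
        traj.append((u * u - 1.0) ** 2)  # record the loss after each update
    return traj

stable = loss_trajectory(eta=0.10)  # small step-size: monotone decay to 0
wild = loss_trajectory(eta=0.40)    # large step-size: non-monotone but bounded

assert all(b <= a + 1e-12 for a, b in zip(stable, stable[1:]))  # monotone phase
assert any(b > a for a, b in zip(wild, wild[1:]))               # loss spikes
assert max(wild) < 10.0                                         # yet no divergence
```

Already in this one-dimensional quadratic model, the loss dynamics under the large step-size is far from a monotone descent curve, which is exactly the regime the paper's cubic-map analysis formalizes.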
Building on this direction, the primary aim of our work is to attain a precise characterization of the training dynamics of gradient descent in quadratic models, thereby fostering a deeper comprehension of the diverse phases involved in the training process. Contribution 1. We perform a *fine-grained, global theoretical analysis* of a cubic-map-based dynamical system (see Equation 2.1), and identify the precise boundaries of the following five phases: (i) monotonic, (ii) generalized catapult, (iii) periodic, (iv) Li-Yorke chaotic, and (v) divergent. See Figure 1 for an illustration, and Definition 2 and Theorem 2.1 for formal results. We show in Theorems 3.2 and 3.3 that the dynamics of gradient descent for two non-convex statistical problems, namely phase retrieval and two-layer neural networks with constant outer layers and quadratic activation functions, with orthogonal training data, is captured by the cubic-map-based dynamical system. We provide empirical evidence of the presence of similar phases in training with non-orthogonal data. We also empirically examine the effect of training models in the above-mentioned phases, in particular the non-monotonic ones, on the generalization error. Indeed, provable model-specific statistical benefits of training in the catapult phase are studied in Lyu et al. (2022) and Ahn et al. (2022). Lim et al. (2022) proposed to induce controlled chaos in the training trajectory to obtain better generalization. Approaches to explain generalization with chaotic behavior are examined in Chandramoorthy et al. (2022) based on a relaxed notion of statistical algorithmic stability. Although our focus is on gradient descent, related notions of generalization of stochastic gradient algorithms, based on characterizing the fractal-like properties of the invariant measure they converge to (with larger-order constant step-size choices), have been explored, for example, in Birdal et al. (2021), Camuto et al. (2021), Dupuis et al.
(2023), and Hodgkinson et al. (2022). Hence, we also conduct empirical investigations into the generalization performance when training within the different non-monotonic (and non-divergent) phases and make the following contribution.

Figure 1: Phases of the cubic-map-based dynamical system in (2.1) parameterized by a. Sub-figure 1(a) corresponds to the monotonic phase, where the dynamics monotonically decays to zero. Sub-figure 1(b) corresponds to the generalized catapult phase, where the dynamics decays to zero but is non-monotonic in a specific manner. Sub-figure 1(c) corresponds to the periodic phase, where the dynamics decays and settles in a period-2 orbit (i.e., shuttles between two points) but never decays to zero. Sub-figures 1(d) and 1(e) correspond to the chaotic phase (see Definition 1) and the divergent phase, respectively. Note that the scales of the x-axis and y-axis in Sub-figures 1(d) and 1(e) differ from the rest.

Contribution 2. We propose a natural ergodic trajectory averaging based prediction mechanism (see Section 4.2) to stabilize the predictions when operating in any non-monotonic (and non-divergent) phase.

## 1.1 Related Works

General results. Lewkowycz et al. (2020) empirically examine the catapult phase, particularly in neural networks with one hidden layer and linear activations; this is the phase in which the linear approximation of the model becomes less informative. In this case, they observe that the loss does not decrease monotonically but eventually converges when the curvature (the maximum eigenvalue of the Neural Tangent Kernel (Jacot et al., 2018)) stabilizes at a value less than 2/(step-size). Similar oscillations with convergence behavior have also been observed in Cohen et al.
(2021), which empirically demonstrate that the sharpness (the largest eigenvalue of the Hessian matrix of the loss) in gradient descent training of neural networks hovers just above the value 2/(step-size), indicating that gradient descent usually operates in the regime they call the Edge of Stability (EoS). This is also formally studied in Ahn et al. (2022). Damian et al. (2023) propose self-stabilization as a phenomenological reason for the occurrence of catapults and EoS in gradient descent dynamics. Kreisler et al. (2023) investigate how gradient descent monotonically decreases the sharpness of Gradient Flow solutions, specifically in one-dimensional deep neural networks. Although they do not formally prove the existence of chaos in the dynamics, they conjecture its possibility. Arora et al. (2022) and Lyu et al. (2022) explore sharpness-reduction flows, related to the above findings. Andriushchenko et al. (2023) prove that large step-sizes in gradient descent can lead to the learning of sparse features. Wu et al. (2023) investigate the EoS phenomenon for logistic regression. Kong & Tao (2020) theoretically explore the chaotic dynamics (and related stochasticity) in gradient descent for minimizing multi-scale functions under additional assumptions. While being extremely insightful, their results are fairly qualitative and are not directly applicable to the cubic maps analyzed in our work. As we focus on specific models, our results are more precise and quantitative. Specific Models. Zhu et al. (2023b) and Chen & Bruna (2023) studied gradient descent dynamics for minimizing the functions $\ell(u, v) = (u^2v^2 - 1)^2$ and $\ell(u) = (u^2 - 1)^2$, respectively. Both works primarily focused on characterizing period-2 orbits and hinted at the possibility of chaos without rigorous theoretical justification. Furthermore, their proofs are relatively ad-hoc and significantly different from ours.
Song & Yun (2024) provided empirical evidence of periodicity and chaos for training a fully-connected neural network using gradient descent. However, their theoretical results are not applicable to quadratic regression models. Ahn et al. (2024) examined the Edge of Stability (EoS) between the monotonic and catapult phases for minimizing $\ell(u, v) = l(uv)$, where $l$ is convex, even, and Lipschitz. Their analysis is not directly extendable to the quadratic regression models we consider in this work. See also the discussion below Theorem 2.1 for important technical comparisons. Wang et al. (2022) analyzed additional benefits (e.g., taming homogeneity) of gradient descent with large step-sizes for matrix factorization. Ziyin et al. (2022) also studied *stochastic* gradient descent with large step-sizes for the case when the loss function is $\ell(u) = au^2$ for $a \in \mathbb{R}$. Note in this case that the point 0 is the minimum when $a > 0$. However, when $a < 0$, the point 0 is a maximum. In this setup, Ziyin et al. (2022) precisely characterize the behaviour of SGD in terms of converging to a minimum or a maximum, as a function of the step-size parameter, the initialization, and the noise distribution of the stochastic gradient. Agarwala et al. (2023) explored gradient descent dynamics for a class of quadratic regression models and identified the EoS. Zhu et al. (2023a;b) also studied the catapult phase and EoS for a class of quadratic regression models. Agarwala & Dauphin (2023) examined the EoS in the context of Sharpness-Aware Minimization for quadratic regression models. The above works are related to ours in terms of the models they study. However, none of them characterize the five distinct phases, with precise boundaries, as we do. Furthermore, our analysis is distinct (and is also global1) from the above works and is firmly grounded in the rich literature on dynamical systems. Dynamical systems.
Our results draw upon the rich literature available in the field of dynamical systems. We refer the interested reader to Alligood et al. (1997), Lasota & Mackey (1998), Devaney (1989), Ott (2002), and De Melo & Van Strien (2012) for a book-level introduction. Bifurcation analysis of some classes of cubic maps has been studied, for example, by Skjolding et al. (1983), Rogers & Whitley (1983), Branner & Hubbard (1988), and Milnor (1992). Some of the above works are rather empirical, and the exact maps considered in them differ significantly from our case.

## 2 Analyzing a Discrete Dynamical System with a Cubic Map

Notations and definitions. We say a sequence $\{x_k\}_{k=0}^{\infty}$ is increasing (decreasing) if $x_{t+1} \geq x_t$ ($x_{t+1} \leq x_t$) for any $t$. Moreover, it is strictly increasing (decreasing) if the equalities never hold. For a real-valued function $f$ and a set $S$, define $f(S) = \{f(x) : x \in S\}$, and $f^{(k)}(x) := f(f^{(k-1)}(x))$ for any $k \in \mathbb{N}_+$ with $f^{(0)}(x) = x$. The preimage of $x$ under $f$ on $S$ is the set $f^{-1}(x) := \{y \in S : f(y) = x\}$. We say a property P holds for almost every $x \in S$, or almost surely in $S$, if the subset $\{x \in S : \text{property P does not hold for } x\}$ has Lebesgue measure zero. A critical point of $f$ is a point $x$ satisfying $f'(x) = 0$. We call $x_0$ a period-$k$ point of $f$ when $f^{(k)}(x_0) = x_0$ and $f^{(i)}(x_0) \neq x_0$ for any $0 \leq i \leq k - 1$. The orbit of a point $x_0$ denotes the sequence $\{f^{(t)}(x_0)\}_{t=0}^{\infty}$. A point $x_0$ is called asymptotically periodic if there exists a periodic point $y_0$ such that $\lim_{t\to\infty} |f^{(t)}(x_0) - f^{(t)}(y_0)| = 0$. The stable set of a period-$k$ point $x_0$ is defined as $W_s(x_0) := \{x : \lim_{n\to\infty} f^{(kn)}(x) = x_0\}$. The stable set of the orbit of a periodic point $x_0$ is the union of the stable sets of all points in the orbit of $x_0$. A point $x_0$ is an aperiodic point if it is not an asymptotically periodic point and the orbit of $x_0$ is bounded.
We say a fixed point $x_0$ of $f$ is stable if, for any $\epsilon > 0$, there is a $\delta > 0$ such that for any $x$ satisfying $|x - x_0| < \delta$, we have $|f^{(n)}(x) - x_0| < \epsilon$ for all $n \geq 0$. The fixed point $x_0$ is said to be unstable if it is not stable. The fixed point $x_0$ is asymptotically stable if it is stable and there is a $\delta > 0$ such that $\lim_{n\to\infty} f^{(n)}(x) = x_0$ for all $x$ satisfying $|x - x_0| < \delta$. A period-$p$ point $x_0$ and its associated periodic orbit are asymptotically stable if $x_0$ is an asymptotically stable fixed point of $f^{(p)}$. A point $x_0 \in \mathbb{R} \cup \{+\infty, -\infty\} \setminus S$ is called an absorbing boundary point of $S$ for $f$ with period $p$, for some $p \in \{1, 2\}$, if there exists an open set $U \subseteq S$ such that $\lim_{k\to\infty} f^{(pk)}(y) \to x_0$ for all $y \in U$.

$^1$ Analysis in Wang et al. (2022) and Chen & Bruna (2023) is also global, but not applicable to our model.

We now introduce two quantities that are common in dynamical systems theory for studying stability properties. The Schwarzian derivative of a three-times continuously differentiable function $f$ is defined (at non-critical points) as

$${\mathsf{S}}f(x):=\left(f'''(x)/f'(x)\right)-1.5\,\left(f''(x)/f'(x)\right)^{2},\ \text{where }f'(x)\neq0.$$

It is widely used for its sign-preservation property under compositions; see, for example, De Melo & Van Strien (2012). Specifically, the stability of a fixed point is related to the sign of the Schwarzian derivative at that point: positive values may indicate instability, while negative values suggest stability. The Lyapunov exponent of a given orbit with initialization $x_0$ is defined as

$$\mathbb{L}f(x_{0})=\operatorname*{lim}_{n\to\infty}{\frac{1}{n}}\sum_{i=1}^{n-1}\log|f'(x_{i})|.$$

It is another quantity associated with the stability properties of dynamical systems and is used to measure the sensitive dependence on initial conditions (Strogatz, 2018).
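As an illustration (ours, not part of the paper's experiments), the limit above can be truncated at a finite horizon to estimate the exponent numerically. The logistic map below is used purely as a sanity check, since its chaotic parameter $r = 4$ is known to have Lyapunov exponent $\log 2$ for almost every initialization:

```python
import numpy as np

def lyapunov(f, df, x0, n=50_000, burn=100):
    # Estimate L f(x0) = lim (1/n) * sum_i log|f'(x_i)| along the orbit of x0.
    x = x0
    for _ in range(burn):  # discard a short transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += np.log(max(abs(df(x)), 1e-300))  # guard against log(0)
        x = f(x)
    return total / n

# Logistic map x -> r x (1 - x): stable fixed point at r = 2.5, chaotic at r = 4.
logistic = lambda r: (lambda x: r * x * (1 - x))
dlogistic = lambda r: (lambda x: r * (1 - 2 * x))

est_stable = lyapunov(logistic(2.5), dlogistic(2.5), x0=0.2)
est_chaotic = lyapunov(logistic(4.0), dlogistic(4.0), x0=0.2)
print(est_stable, est_chaotic)  # negative vs. positive (roughly log 2 ~ 0.693)
```

For $r = 2.5$ the orbit settles at the fixed point $x^* = 0.6$ with $|f'(x^*)| = 0.5$, so the estimate approaches $\log 0.5 \approx -0.69$; for $r = 4$ it is positive.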
Chaotic systems typically exhibit positive Lyapunov exponents, reflecting their sensitive dependence on initial conditions. Similarly, a negative Lyapunov exponent is characteristic of stable systems. Finally, we define the sharpness of a loss function as the maximum eigenvalue of the Hessian matrix of the loss.

Bifurcation analysis. Our main goal in this section is to undertake a bifurcation analysis of the following discrete dynamical system defined by a cubic map. For $a > 0$, first define the functions $g$ and $f$, parameterized by $a$, as

$$g_{a}(z)=z^{2}+(a-2)z+1-2a=(z+a)(z-2)+1\quad{\mathrm{and}}\quad f_{a}(z)=z g_{a}(z).\tag{2.1}$$

Next, consider the discrete dynamical system given by

$$z_{t+1}=f_{a}(z_{t})=z_{t}g_{a}(z_{t}).\tag{2.2}$$

Note that for any $a, \epsilon > 0$ and $z_0 \geq 2 + \epsilon$ or $z_0 \leq -a - \epsilon$, we will have $\lim_{t\to\infty} |z_t| = +\infty$. Hence, we only study the case when $z_0 \in [-a, 2]$. We will show in Section 3 that the dynamics of the training loss for several quadratic regression models can be captured by (2.2). The parameter $a$ in (2.1) will naturally correspond to the step-size of the gradient descent algorithm for these models. We next introduce the precise definitions of the five phases that arise in the bifurcation analysis of (2.1). To do so, we need the following definition of chaos in the Li-Yorke sense (Li & Yorke, 1975). Li-Yorke chaos is widely used in the study of dynamical systems and is also directly related to important measures of the complexity of dynamical systems, such as the topological entropy (Adler et al., 1965; Franzová & Smítal, 1991). We also refer to Aulbach & Kieninger (2001) and Kolyada (2004) for its relationship to other notions of chaos and related history.

Definition 1 (Li-Yorke Chaos (Li & Yorke, 1975)). Suppose we are given a function $f(x)$.
If there exists a compact interval $I$ such that $f : I \to I$, then $f$ is called Li-Yorke chaotic (Li & Yorke, 1975; Aulbach & Kieninger, 2001) if it satisfies:

- For every $k = 1, 2, \ldots$ there is a periodic point in $I$ having period $k$.
- There is an uncountable set $S \subseteq I$ (containing no periodic points) which satisfies, for any $p, q \in S$ with $p \neq q$, $\limsup_{t\to\infty} |f^{(t)}(p) - f^{(t)}(q)| > 0$ and $\liminf_{t\to\infty} |f^{(t)}(p) - f^{(t)}(q)| = 0$, and, for any $p \in S$ and periodic point $q \in I$, $\limsup_{t\to\infty} |f^{(t)}(p) - f^{(t)}(q)| > 0$.

To define the five phases, we consider the orbit $\{f^{(k)}(x)\}_{k=0}^{+\infty}$ generated by a given function $f$ defined over a set $I$ to which the initial point $x$ belongs.

![5_image_0.png](5_image_0.png)

Figure 2: Bifurcation diagram and Lyapunov exponent. Initialization $z_0 = 0.1$.

Definition 2. Given a function $f(x)$ defined on a set $I$, we say the discrete dynamics is in the

- **Monotonic phase**, when $\{|f^{(k)}(x)|\}_{k=0}^{\infty}$ is decreasing and $\lim_{n\to\infty} |f^{(n)}(x)| = 0$ for almost every $x \in I$.
- Generalized$^2$ **catapult phase**, when $\{|f^{(k)}(x)|\}_{k=m}^{\infty}$ is not decreasing for any $m$ and $\lim_{n\to\infty} |f^{(n)}(x)| = 0$ for almost every $x \in I$. We say such sequences have catapults.
- **Periodic phase**, when $f$ is not Li-Yorke chaotic, $\{|f^{(k)}(x)|\}_{k=0}^{\infty}$ is bounded and does not have a limit for almost every $x \in I$, and there exist period-2 points in $I$.
- **Chaotic phase**, when the function $f$ is Li-Yorke chaotic and $\{|f^{(k)}(x)|\}_{k=0}^{\infty}$ is bounded for almost every $x \in I$.
- **Divergent phase**$^3$, when $\lim_{n\to\infty} |f^{(n)}(x)| = +\infty$ for almost every $x \in I$.

We emphasize here that our use of the word "phase" refers to the whole sequence $\{|f^{(k)}(x)|\}_{k=0}^{\infty}$, and the categorization is with respect to the different step-sizes. As an illustration, in Figure 1, we plot the five phases for the parameterized function and its discrete dynamical system defined in (2.1) with initialization 1.9, i.e., $x_k = f_a^{(k)}(x_0)$, $x_0 = 1.9$. We have the following main result for the different phases of dynamics.

Theorem 2.1.
Suppose fa(z) *is defined in* (2.1). Define zt+1 = fa(zt) with z0 *sampled uniformly at random* in (−a, 2). Then there exists a∗ ∈ (1, 2) *such that the following holds.* - If a ∈ (0, 2 √2 − 2]*, then almost surely* limt→∞ |zt| = 0 and |zt| is decreasing, and hence the dynamics is in the monotonic phase. - If a ∈ (2√2 − 2, 1]*, then almost surely* limt→∞ |zt| = 0 and |zt| *have catapults, and hence the dynamics is* in the generalized catapult phase. - If a ∈ (1, a∗), then there exists a period-2 point in (0, 1). zt ∈ (−a, 2) for all t*. If there exists an* asymptotically stable periodic orbit, then the orbit of z0 *is asymptotically periodic almost surely, and hence* the dynamics is in the periodic phase. - If a ∈ (a∗, 2], fa is Li-Yorke chaotic. zt ∈ (−a, 2) for all t*. If there exists an asymptotically stable periodic* orbit, then the orbit of z0 *is asymptotically periodic almost surely, and hence the dynamics is in the chaotic* phase. - If a ∈ (2, +∞)*, then* limt→∞ |zt| = +∞ *almost surely, and hence the dynamics is in the divergent phase.* From a pure optimization perspective, Phase 1 and 2 are the most relevant, as training loss actually minimized. However, from a generalization perspective, similar to other works (Lyu et al., 2022; Lim et al., 2022; 2Here, we use the term *generalized* to distinguish from Lewkowycz et al. (2020) who consider the case of a single spike in the training loss. 3We do not further sub-characterize the divergent phase as it is uninteresting. Chandramoorthy et al., 2022) we empirically observe that often times phases 2, 3 and 4 lead to comparatively improved generalization for various models. Connections with sharpness and EoS. As we will see in Section 3, the training loss and sharpness of a special class of quadratic regression models can be written as functions of zt, and hence their dynamics can be explicitly given by Theorem 2.1. 
As a byproduct of our theory, we reveal that the EoS phenomenon happens in the catapult phase and quantify the limit to which the sharpness eventually converges, which matches the empirical observations in Cohen et al. (2021) and Ahn et al. (2022). As a direct application of Theorem 2.1, we have the following result characterizing the dynamics generated by $n$ different functions.

Corollary 1. Suppose $f_a(z)$ *is defined in* (2.1), and we are given $2n$ positive scalars $a_i, \rho_i$ for $1 \leq i \leq n$. Define $z_i^{(t+1)} = f_{a_i}(z_i^{(t)})$ and $L(z^{(t)}, \rho) = \sum_{i=1}^{n} \rho_i (z_i^{(t)})^2$. *Then for almost all* $z^{(0)} \in \{z : -a_i \leq z_i \leq 2\}$ *we have:*

- If $0 < \max_{1\leq i\leq n} a_i \leq 1$, *then* $\lim_{t\to\infty} L(z^{(t)}, \rho) = 0$. *Moreover, if* $0 < \max_{1\leq i\leq n} a_i \leq 2\sqrt{2} - 2$, the sequence $\{L(z^{(t)}, \rho)\}_{t=0}^{\infty}$ *is decreasing.*
- If $1 < \max_{1\leq i\leq n} a_i \leq 2$, *then* $\{L(z^{(t)}, \rho)\}_{t=0}^{\infty}$ *is bounded and does not converge to* 0.
- If $\max_{1\leq i\leq n} a_i > 2$, *then* $\lim_{t\to\infty} L(z^{(t)}, \rho) = +\infty$.

We highlight here that even if we know from Theorem 2.1 the dynamics of each individual $z_i^{(t)}$, explicitly characterizing the phase of $L(z^{(t)}, \rho)$ is not trivial. To see this, we provide one simple example as follows.

$$S_{1}:=\{S_{1}^{(n)}\}=\left\{1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\right\},\ S_{2}:=\{S_{2}^{(n)}\}=\left\{1,\frac{1}{2^{2}},\frac{1}{3^{2}},\frac{1}{4^{2}},\ldots\right\},\ S_{3}:=\{S_{3}^{(n)}\}=\left\{\frac{1}{2},1,\frac{1}{4},\frac{1}{3},\ldots\right\},$$

where $S_3$ is obtained by switching the $(2i-1)$-th and $2i$-th terms of $S_1$. Sequences $S_1$ and $S_2$ decrease to 0, and $S_3$ is in the catapult phase. We can verify that both $\{S_1^{(n)} + S_3^{(n)}\}$ and $\{S_2^{(n)} + S_3^{(n)}\}$ converge to 0, but the former is decreasing while the latter is in the generalized catapult phase. This implies that the sum of a decreasing sequence and a catapult sequence can be either decreasing or catapult, which makes analyzing the dynamics of the weighted summation $L(z^{(t)}, \rho)$ non-obvious.
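The claims about $S_1 + S_3$ and $S_2 + S_3$ can be checked directly; a minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

# S1 = 1/n, S2 = 1/n^2; S3 swaps each consecutive pair (2i-1, 2i) of S1.
N = 40
S1 = [Fraction(1, n) for n in range(1, N + 1)]
S2 = [Fraction(1, n * n) for n in range(1, N + 1)]
S3 = []
for i in range(0, N, 2):
    S3 += [S1[i + 1], S1[i]]  # swap the (2i-1)-th and 2i-th terms

A = [x + y for x, y in zip(S1, S3)]  # S1 + S3
B = [x + y for x, y in zip(S2, S3)]  # S2 + S3

def decreasing(seq):
    return all(b <= a for a, b in zip(seq, seq[1:]))

print(decreasing(A))  # True: the sum is (non-strictly) decreasing
print(decreasing(B))  # False: upward jumps persist, although B still tends to 0
```

The terms of $A$ come in equal pairs $\frac{1}{2i-1} + \frac{1}{2i}$, hence $A$ is non-strictly decreasing, while $B$ jumps upward within every pair from the second one on.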
As we will see in Section 3.1, the above result gives the training dynamics of generalized phase retrieval and of a two-layer neural network with quadratic activation functions on $n$ orthogonal data points. In Figures 2(a) and 2(b) we numerically plot a bifurcation diagram for $a \in (0, 2)$ and a Lyapunov exponent scatter plot with initialization $z_0 = 0.1$. The main ingredients in proving Theorem 2.1 are the following Lemmas 1, 2, and 3. Note that by straightforward computations, we have

$$f_{a}'(0)=1-2a\in(-1,1)\Leftrightarrow a\in(0,1).$$

This implies that 0 is an asymptotically stable fixed point when $a \in (0, 1)$. This type of local stability analysis is standard in the dynamical systems literature (Hale & Koçak, 2012; Strogatz, 2018) and has recently been used in analyzing the training dynamics of gradient descent (Zhu et al., 2024; Song & Yun, 2024). However, such results are limited to local regions only. In contrast, the following results provide a global convergence analysis.

Lemma 1. Suppose $0 < a \leq 1$ and $-a \leq z_0 \leq 2$. *Then we have:*

- (i) $-a \leq z_t \leq 2$ for any $t$, and $f_a$ does not have a period-2 *point on* $[-a, 2]$.
- (ii) If $z_0$ is chosen from $[-a, 2]$ *uniformly at random, then* $\lim_{t\to\infty} z_t = 0$ *almost surely. Moreover, if* $0 < a \leq 2\sqrt{2} - 2$, then almost surely $|z_{t+1}| \leq |z_t|$ for all $t$. *If* $2\sqrt{2} - 2 < a \leq 1$, *then almost surely* $\{|z_t|\}_{t=0}^{\infty}$ has catapults.

Lemma 2. Suppose $1 < a \leq 2$ and $-a \leq z_0 \leq 2$. *Then we have:*

- (i) $-a \leq z_t \leq 2$ for any $t$, and $f_a(z)$ has a period-2 *point on* $[0, 1]$.
- (ii) There exists $a^* \in (1, 2)$ such that for any $a \in (a^*, 2)$, $f_a$ is Li-Yorke chaotic, and for any $a \in (1, a^*)$, $f_a$ *is not Li-Yorke chaotic.*
- (iii) If there exists an asymptotically stable orbit and $z_0$ is chosen from $[-a, 2]$ *uniformly at random, then* the orbit of $z_0$ *is asymptotically periodic almost surely.*

Lemma 3. Suppose $a > 2$ and $z_0$ is chosen from $[-a, 2]$ *uniformly at random. Then* $\lim_{t\to\infty} |z_t| = +\infty$ *almost surely.*
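The period-2 claim of Lemma 2(i) can also be verified numerically for a particular value of $a$; the sketch below (with $a = 1.5$, an illustrative choice of ours) finds period-2 points by dividing the fixed-point factor $f_a(z) - z = z(z + a)(z - 2)$ out of $f_a(f_a(z)) - z$. For $a = 1.5$, one can check by hand that $\{0.5, -1\}$ is an exact 2-cycle: $f_{1.5}(0.5) = -1$ and $f_{1.5}(-1) = 0.5$.

```python
import numpy as np

a = 1.5  # illustrative parameter in (1, 2]
# f_a(z) = z^3 + (a - 2) z^2 + (1 - 2a) z, highest-degree coefficient first.
f = np.poly1d([1.0, a - 2.0, 1.0 - 2.0 * a, 0.0])

# Period-2 points solve f(f(z)) = z with f(z) != z.  np.polyval with a poly1d
# argument returns the composition, so comp below is f(f(z)) - z (degree 9).
z_id = np.poly1d([1.0, 0.0])
comp = np.polyval(f, f) - z_id
quot, rem = np.polydiv(comp.coeffs, (f - z_id).coeffs)  # remove fixed points

roots = np.roots(quot)
real = roots[np.abs(roots.imag) < 1e-9].real
p = [r for r in real if 0 <= r <= 1]  # Lemma 2(i): a period-2 point in [0, 1]
print(sorted(np.round(real, 6)))
```

The quotient has degree 6, so up to three 2-cycles; for $a = 1.5$ both $0.5$ and $-1$ show up among its real roots.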
In Lemma 2, part (iii), we assume the existence of an asymptotically stable periodic point. Note that such a point must have a negative Lyapunov exponent (Strogatz, 2018). It is possible to obtain particular values of $a$ under which $f_a(z)$ has an asymptotically stable orbit. For example, $a$ can be chosen such that $|f_a'(p) f_a'(q)| < 1$, where $p \in (0, 1)$ is a period-2 point with $f_a(p) = q$. In Figure 2(b) we plot the Lyapunov exponent of $f_a$ along the orbit starting from $z_0 = 0.1$. It would be interesting to explicitly characterize the set of values $a \in (1, 2)$ for which $f_a(z)$ has an asymptotically stable periodic orbit. Furthermore, we conjecture that $a^*$ defined in Lemma 2 is the smallest number $a \in (1, 2)$ such that $(1 - 2a)/3$ is a period-3 point. These two problems are challenging and left as future work.

## 3 Applications To Quadratic Regression Models

We now provide illustrative examples based on quadratic (or second-order) regression models, motivated by the works of Zhu et al. (2024) and Agarwala et al. (2023). Specifically, we consider a generalized phase retrieval model and training the hidden layer of a two-layer neural network with quadratic activation function as examples.

## 3.1 Example 1: Generalized Phase Retrieval

Single Data Point. Following Zhu et al. (2024), it is instructive to first study the dynamics with a single training sample. Consider the following optimization problem on a single data point $(X, y)$:

$$\min_{w}\left\{\ell(w)=\frac{1}{2}(g(w;X)-y)^{2}\right\},\ \ \mbox{where }g(w;X)=\frac{\gamma(X^{\top}w)^{2}}{2}+cX^{\top}w,\tag{3.1}$$

where $\gamma, c$ are arbitrary constants. The above model with $\gamma = 2$ and $c = 0$ corresponds to the classical phase retrieval model (also called a single-index model with quadratic link function). We refer to Jaganathan et al. (2016) and Fannjiang & Strohmer (2020) for an overview of the importance and applications of the phase retrieval model.
We would like to point out that the analysis of seemingly simple models is already nontrivial and has been carried out in various ways, for example, in the single-data-point setting (Zhu et al., 2024; Song & Yun, 2024) and the simple-model setting (Lobacheva et al., 2021; Ahn et al., 2024; Kodryan et al., 2022; Zhu et al., 2023b; Chen & Bruna, 2023; Zhu et al., 2023a). Different from existing works that mostly focus on asymptotic or local analyses that only hold when certain quantities are sufficiently large or small (small step-sizes (Lobacheva et al., 2021; Ahn et al., 2024; Zhu et al., 2023b), large network size (Zhu et al., 2024; 2023a)), in the following result we provide a refined global analysis for solving (3.1) that does not contain any big-O notation.

Theorem 3.1. *Suppose we run gradient descent on* (3.1) *with step-size* η. *Define*

$$e^{(t)}:=g(w^{(t)};X)-y,\quad z_{t}:=\eta\gamma\left\|X\right\|^{2}e^{(t)},\quad a=\left(\gamma y+\frac{c^{2}}{2}\right)\eta\left\|X\right\|^{2}.\tag{3.2}$$

*Then we have: (i)* $z_{t+1} = f_a(z_t)$, *and thus Theorem 2.1 holds for* $f_a$ and $z_t$; *(ii) the sharpness is given by* $\lambda_{\max}(\nabla^2\ell(w^{(t)})) = \frac{3z_t + 2a}{\eta}$.

Comparison with existing results. An interesting conclusion from the above theorem is that, in certain cases, the step-size η should depend on the model initialization. For example, when $e^{(0)} > 0$, we should have $\eta\gamma\|X\|^2 e^{(0)} = z_0 < 2$, since for $z_0 > 2$ we have $\lim_{t\to\infty} |z_t| = \infty$ (see, e.g., the discussion below (2.2)). Note that Zhu et al. (2024) studied a related neural quadratic model (see their Eq. (3)). Here, we highlight that their results do not cover our case. Indeed, defining $\eta_{\mathrm{crit}} = 2/\lambda_{\max}(\nabla^2\ell(w^{(0)}))$, according to their claim, catapults happen when $\eta_{\mathrm{crit}} < \eta < 2\eta_{\mathrm{crit}}$. In our notation, this condition is equivalent to $2 < 3z_0 + 2a < 4$. However, this cannot happen: if the initialization $z_0$ is sufficiently small, say
$z_0 = O(\epsilon)$, then the previous condition becomes $1 - O(\epsilon) < a < 2 - O(\epsilon)$. However, according to Lemmas 1 and 2, for $1 < a < 2$ the training dynamics is in the periodic or the chaotic phase, and $z_t$ (and thus the loss function) will not converge to 0. Our theory (Lemma 1) suggests that catapults for the quadratic regression model happen for almost every $z_0 \in (-a, 2)$, provided that $2\sqrt{2} - 2 < a \leq 1$. This intricate observation reveals that extending the current results on the catapult phenomenon from the model in Zhu et al. (2024) to our setting is not immediate and is actually highly non-trivial.

Relationship with Sharpness and EoS. We also notice that, interestingly, in the monotonic and catapult phases (i.e., $0 < a \leq 1$), the limiting sharpness satisfies $\lim_{t\to\infty} \lambda_{\max}(\nabla^2\ell(w^{(t)})) = 2a/\eta = (2\gamma y + c^2)\|X\|^2$. In particular, for the catapult phase ($2\sqrt{2} - 2 < a \leq 1$) the sharpness converges to $\frac{2a}{\eta} \in \left(\frac{4\sqrt{2}-4}{\eta}, \frac{2}{\eta}\right]$, which theoretically and quantitatively explains the empirical observations of EoS in Cohen et al. (2021). More importantly, the notion of EoS only provides a coarse characterization of the oscillations of the limiting sharpness at the interface of the monotonic and catapult phases. For the quadratic models that we study, the limiting sharpness exhibits a more nuanced behaviour, as identified in Theorem 3.1, while also recovering and extending existing results on EoS.

Multiple Orthogonal Data Points. We now consider gradient descent for quadratic regression on multiple data points that are mutually orthogonal. Suppose we are given a dataset $\{(X_i, y_i)\}_{i=1}^{n}$ with $X = (X_1, \ldots, X_n)^{\top}$ satisfying $XX^{\top} = \mathrm{diag}(\|X_1\|^2, \ldots, \|X_n\|^2)$. Similar orthogonality conditions are widely used in the literature on sparse linear regression to understand optimization and statistical properties (Tibshirani, 1996; Yuan & Lin, 2006).
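Returning briefly to the single-sample case, the reduction in Theorem 3.1 can be checked numerically; the sketch below (all parameter values are illustrative choices of ours, not from the paper's experiments) runs gradient descent on (3.1) and verifies that $z_t$ follows the cubic map (2.1):

```python
import numpy as np

# Illustrative instance of (3.1); gamma, c, X, y, and eta are our choices.
gamma, c = 1.0, 0.5
X = np.array([1.0, 2.0, 2.0])  # ||X||^2 = 9
y = 1.0
a = 0.5                        # target map parameter (monotonic phase)
eta = a / ((gamma * y + c**2 / 2) * (X @ X))  # invert a = (gamma*y + c^2/2) eta ||X||^2

def g(w):
    s = X @ w
    return gamma * s**2 / 2 + c * s

f_a = lambda z: z * ((z + a) * (z - 2) + 1)  # cubic map (2.1)

w = np.array([0.1, 0.0, 0.0])
zs, losses = [], []
for _ in range(100):
    e = g(w) - y
    zs.append(eta * gamma * (X @ X) * e)  # z_t as in (3.2)
    losses.append(0.5 * e**2)
    w = w - eta * e * (gamma * (X @ w) + c) * X  # gradient step on (3.1)

# The recursion z_{t+1} = f_a(z_t) holds along the whole trajectory, and the
# loss decays to 0 since a is in the monotonic phase.
gap = max(abs(zs[t + 1] - f_a(zs[t])) for t in range(len(zs) - 1))
print(gap, losses[-1])
```
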
Consider the optimization problem

$$\min_{w}\ell(w):=\frac{1}{n}\sum_{i=1}^{n}\ell_{i}(w)=\frac{1}{2n}\sum_{i=1}^{n}\left(g(w;X_{i})-y_{i}\right)^{2},\tag{3.3}$$

where $\ell_i(w)$ and $g(w; X_i)$ are as defined in (3.1).

Theorem 3.2. *Define the following:*

$$\begin{array}{l}{{\alpha^{(t)}(X_{i}):=c(X_{i})+\gamma X_{i}^{\top}w^{(t)},\ \beta(X_{i}):=y_{i}+\frac{(c(X_{i}))^{2}}{2\gamma},\ \kappa_{n}(X_{i}):=\frac{\eta\gamma\left\|X_{i}\right\|^{2}}{n},}}\\ {{e^{(t)}(X_{i}):=g(w^{(t)};X_{i})-y_{i},\ z_{i}^{(t)}=\kappa_{n}(X_{i})e^{(t)}(X_{i}),\ a_{i}=\beta(X_{i})\kappa_{n}(X_{i}).}}\end{array}$$

*If we run gradient descent on* (3.3) *with step-size* η, *then we have: (i)* $z_i^{(t+1)} = f_{a_i}(z_i^{(t)})$, *and thus Theorem 2.1 holds for* $f_{a_i}$ and $z_i^{(t)}$; *(ii) the sharpness is* $\lambda_{\max}(\nabla^2\ell(w^{(t)})) = \max_{1\leq i\leq n}\frac{3z_i^{(t)}+2a_i}{\eta}$.

For this setup, the above theorem shows that the loss function decomposes into a sum of the losses on the individual data points. Recall that the training loss takes the form

$$\ell(w^{(t)})=\frac{1}{2n}\sum_{i=1}^{n}\left(g(w^{(t)};X_{i})-y_{i}\right)^{2}=\frac{1}{2n}\sum_{i=1}^{n}\frac{(z_{i}^{(t)})^{2}}{\kappa_{n}^{2}(X_{i})}=\sum_{i=1}^{n}\frac{n(z_{i}^{(t)})^{2}}{2\eta^{2}\gamma^{2}\left\|X_{i}\right\|^{4}}.$$

Setting $\rho_i = \frac{n}{2\eta^2\gamma^2\|X_i\|^4}$, we can deduce that the dynamics of $\ell(w^{(t)})$ is given by Corollary 1. This leads to the following corollary.

Corollary 2. *Under the setup of Theorem 3.2, for almost all* $z^{(0)} \in \{z : -a_i \leq z_i \leq 2\}$ *we have:*

- If $0 < \max_{1\leq i\leq n} a_i \leq 1$, *then* $\lim_{t\to\infty} \ell(w^{(t)}) = 0$. *Moreover, if* $0 < \max_{1\leq i\leq n} a_i \leq 2\sqrt{2} - 2$, *the sequence* $\{\ell(w^{(t)})\}_{t=0}^{\infty}$ *is decreasing.*
- If $1 < \max_{1\leq i\leq n} a_i \leq 2$, *then* $\{\ell(w^{(t)})\}_{t=0}^{\infty}$ *is bounded and does not converge to* 0.
- If $\max_{1\leq i\leq n} a_i > 2$, *then* $\lim_{t\to\infty} \ell(w^{(t)}) = +\infty$.

Under the orthogonality assumption, the loss functions defined on the individual data points exhibit a non-interacting behavior. Removing this orthogonality condition entirely is highly non-trivial.
It would be interesting to extend our setting to the nearly-orthogonal one in Frei et al. (2022) and Kou et al. (2023).

## 3.2 Example 2: Neural Network With Quadratic Activation

In this section, we consider the following two-layer neural network, with its loss function on a data point $(X_i, y_i)$ defined as

$$g(u,v;X_{i})=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}v_{j}\sigma\Big(\frac{1}{\sqrt{d}}u_{j}^{\top}X_{i}\Big),\ \ell_{i}=\frac{1}{2}\left(g(u,v;X_{i})-y_{i}\right)^{2},$$

where the hidden-layer weights $u_j \in \mathbb{R}^d$ are to be trained and the outer-layer weights $v_j \in \mathbb{R}$ are held constant, which corresponds to the feature-learning setting for neural networks. Here, $m$ is the width of the hidden layer and $\sigma$ is the activation function. Define $\mathbf{U} := (u_1, \ldots, u_m)$. When the activation function is quadratic and $v_j = 1$ for all $j$, the loss function becomes

$$\min_{\mathbf{U}}\ell(\mathbf{U}):=\frac{1}{n}\sum_{j=1}^{n}\ell_{j}(\mathbf{U})=\frac{1}{2n}\sum_{j=1}^{n}\Big{(}\frac{1}{\sqrt{m}d}\sum_{i=1}^{m}(X_{j}^{\top}u_{i})^{2}-y_{j}\Big{)}^{2}.\tag{3.4}$$

As in the previous example, we assume $XX^{\top} = \mathrm{diag}(\|X_1\|^2, \ldots, \|X_n\|^2)$. We then have the following result on the gradient descent dynamics for the above problem.

Theorem 3.3. *Define the following:*

$$e_{i}^{(t)}=\frac{1}{\sqrt{m}d}\sum_{j=1}^{m}(X_{i}^{\top}u_{j}^{(t)})^{2}-y_{i},\ z_{i}^{(t)}=\frac{2\eta\left\|X_{i}\right\|^{2}e_{i}^{(t)}}{\sqrt{m}d n},\ a_{i}=\frac{2\eta\left\|X_{i}\right\|^{2}y_{i}}{\sqrt{m}d n}.$$

*If we run gradient descent on problem* (3.4) *with step-size* η, *we have* $z_i^{(t+1)} = f_{a_i}(z_i^{(t)})$, *and thus Theorem 2.1 and Corollary 2 hold for* $\ell(\mathbf{U}^{(t)})$.

The orthogonality assumption $XX^{\top} = \mathrm{diag}(\|X_1\|^2, \ldots, \|X_n\|^2)$ decouples the loss function across the samples and makes the evolution of the overall loss non-interacting (across the training samples).
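The decoupling in Theorem 3.3 can be checked on a toy instance; the sketch below (dimensions, data, and step-size are illustrative choices of ours) trains the hidden layer of (3.4) on two orthogonal samples and verifies the per-sample cubic-map recursion:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 4, 2, 3
X = np.zeros((n, d)); X[0, 0] = 2.0; X[1, 1] = 1.5  # X X^T diagonal (orthogonal rows)
y = np.array([1.0, 0.5])
eta = 0.8
U = 0.1 * rng.standard_normal((m, d))  # rows are the hidden weights u_j

norm = np.sqrt(m) * d
xsq = (X**2).sum(axis=1)               # ||X_i||^2
a = 2 * eta * xsq * y / (norm * n)     # a_i from Theorem 3.3 (here both in the monotonic phase)
f = lambda z: z * ((z + a) * (z - 2) + 1)  # f_{a_i}, applied coordinate-wise

zs, losses = [], []
for _ in range(60):
    S = X @ U.T                        # S[i, k] = X_i^T u_k
    e = (S**2).sum(axis=1) / norm - y  # e_i^{(t)}
    zs.append(2 * eta * xsq * e / (norm * n))  # z_i^{(t)}
    losses.append((e**2).mean() / 2)
    U = U - eta * (2 / (n * norm)) * (S * e[:, None]).T @ X  # gradient step on (3.4)

zs = np.array(zs)
gap = np.max(np.abs(zs[1:] - f(zs[:-1])))  # residual of z_i^{(t+1)} = f_{a_i}(z_i^{(t)})
print(gap, losses[0], losses[-1])
```
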
Relaxing this assumption requires a bifurcation analysis of interacting dynamical systems, which is extremely challenging and not well explored (Xu et al., 2021). In Section C.2, we present empirical results showing that similar phases exist in the general non-orthogonal setting as well. Theoretically characterizing this is left as an open problem.

## 4 Experimental Investigations

Before we proceed, we remark that the original PDF files for all the figures are provided as part of the supplementary material for easier visualization. The naming convention is as follows: (i) each sub-folder corresponds to the respective figure number, and (ii) each file within a sub-folder is named according to matrix conventions. For example, file 1x3.pdf in sub-folder Figure 1 corresponds to Figure 1(c), and file 1x1.pdf in sub-folder Figure 3 corresponds to Figure 3.

## 4.1 Gradient Descent Dynamics With Orthogonal Data For Model (3.4)

Experimental setup. We now conduct experiments to evaluate the developed theory. We consider gradient descent for training the hidden layer of a two-layer neural network with orthogonal training data, as described in Section 3.2. Recall that $d$, $m$, and $n$ represent the dimension, hidden-layer width, and number of data points, respectively. We set $d = 100$, $m \in \{5, 10, 25\}$, and $n = 80$. We generate the ground-truth matrix $\mathbf{U}^* \in \mathbb{R}^{d\times m}$, where each entry is sampled from the standard normal distribution. The training data points, collected in the data matrix $X \in \mathbb{R}^{n\times d}$, are the first $n$ rows of a randomly generated orthogonal matrix. The labels are generated via the model in Section 3.2, i.e., $y_i = \frac{1}{\sqrt{m}d}\sum_{j=1}^{m}(X_i^{\top}u_j^*)^2 + \varepsilon_i$, where $\varepsilon_i$ is scalar noise sampled from a zero-mean normal distribution, with variance equal to 0, 0.25, or 1 in different experiments. We set the step-size η such that $\max_{1\leq i\leq n} a_i$ defined in Theorem 3.2 belongs to the intervals of the first four phases.
In particular, we choose 0.3, 0.9, 1, 1.2, 1.8 for $m = 5, 10$ and 0.3, 0.9, 1, 1.2, 1.6 for $m = 25$ (for each $m$, 0.9 and 1 are both in the catapult phase, and we pick 1 since it is the largest step-size choice allowed in the catapult phase). The numbers 0, 1, 2, 3, 4 in the plot labels correspond to these step-size choices, respectively.

![10_image_0.png](10_image_0.png)

Figure 3: Test loss with and without averaging: The chaotic versions of the purple and red lines correspond to the test error without averaging. The corresponding smooth versions refer to the test error with averaging. The plot demonstrates the benefit of predictions based on ergodic trajectory averaging (according to Definition 3), as the averaged predictions become more stable across the iterations. Numbers 3, 4 denote different step-size choices (see Section 4.2 for details).

In Figure 4 we present the training loss curves in log scale and the sharpness curves for $m = 25$. The horizontal axes denote the number of gradient descent steps. In Section C.1, we also provide additional simulation results for different hidden-layer widths. From the training loss curves (left column) and the sharpness curves (middle column) we can clearly observe the four phases$^4$, thereby confirming our theoretical results.

## 4.2 Prediction Based On Ergodic Trajectory Averaging

A main take-away from our analysis and experiments so far is that gradient descent with a large step-size effectively resembles a randomized gradient descent procedure with a special type of noise; the randomness here is with respect to the orbit it converges to (in the non-monotonic phases).$^5$ Recall that this viewpoint is also put forward in several works, in particular Kong & Tao (2020). Hence, a natural approach is to perform ergodic trajectory averaging to reduce the fluctuations (see the right column in Figure 4).

Definition 3.
For any given point $X \in \mathbb{R}^d$ and any training iteration count $t$, the *ergodic trajectory averaging* based prediction $\hat{y}$ for the point $X$ is given by $\hat{y} := \frac{1}{t}\sum_{i=1}^{t} g(w^{(i)}; X)$, where $w^{(i)}$ corresponds to the training trajectory of the gradient descent algorithm trained with step-size η.

Another way to think about the above prediction strategy is that the ergodic average approximates, in the limit, the expectation with respect to the invariant distribution (supported on the orbit to which the trajectory converges). In particular, in the right column of Figure 4, for the orthogonal setup, we see that as the noise increases, training in the chaotic regime and performing ergodic trajectory averaging provides a fast decay of the training loss. A disadvantage of the prediction strategy described above is that the test-time computational cost increases by O(t) per test point. Figure 3 plots the testing loss for the model in (3.4) when trained with two large step-sizes (η = 48, 60). We observe that the ergodic trajectory averaging prediction smooths out the more chaotic testing loss. However, we also remark that, from the plots in Figure 10$^6$, operating with a slightly smaller step-size choice (η = 36) achieves the best testing error curves. See Section C.2 for additional observations.

$^4$ Here, we do not plot the divergent phase, for simplicity.
$^5$ One way to show this formally is by connecting large step-size GD with slow-fast deterministic systems; see, for example, Chevyrev et al. (2020) and Lim et al. (2022).
$^6$ Figure 10 provides a detailed comparison across various step-sizes, for different noise variances.

![11_image_0.png](11_image_0.png)

Figure 4: Hidden layer width = 25, with orthogonal data points. Rows from top to bottom represent different levels of noise: mean-zero normal distribution with variance 0, 0.25, 1, respectively. The vertical axes are in log scale for the training loss curves.
The second column shows the sharpness of the training loss. Numbers 0, 1, 2, 3, 4 denote different step-size choices (see Section 4.1 for details).

In the literature, ways of artificially inducing *controlled chaos* in the gradient descent trajectory have been proposed to obtain improved testing accuracy; see, for example, Lim et al. (2022). We believe the prediction methodology based on ergodic trajectory averaging discussed above may also prove fruitful for stabilizing the testing loss in such cases. A detailed investigation of the provable benefits of the ergodic trajectory averaging predictor is beyond the scope of the current work, and we leave it as intriguing future work.

Additional Experiments. We also provide the following additional simulation results in the appendix: (i) Section C.2 corresponds to non-orthogonal training data, where we also include testing loss plots, and (ii) Section C.3 corresponds to training the hidden-layer weights of a two-layer neural network with ReLU activation functions and non-orthogonal inputs.

Take-away points from experiments. The main take-away points from the above experiments are the following: (i) in the case of orthogonal data, the experiments confirm the theoretical results in Section 3; (ii) in the case of non-orthogonal data, the experiments show that similar phases (including the chaotic phase) exist in the training dynamics; and (iii) prediction based on ergodic averaging stabilizes the test error along the GD trajectory.

## 5 Conclusion

Unstable and chaotic behavior is frequently observed when training deep neural networks with large-order step-sizes. Motivated by this, we presented a fine-grained theoretical analysis of a cubic-map-based dynamical system. We showed that the gradient descent dynamics is fully captured by this dynamical system when training the hidden layers of two-layer neural networks with quadratic activation functions on orthogonal training data.
Our analysis shows that, for this class of models, as the step-size of gradient descent increases, the gradient descent trajectory passes through five distinct phases (from monotonic to chaotic and eventually divergent). We also provide empirical evidence that similar behavior occurs for generic non-orthogonal data. Our results further indicate a subtle interplay between the step-size and the initialization provided to the gradient descent algorithm in determining which phase the training trajectory will operate in. Finally, we empirically examined the impact of training in the different phases on the generalization error.

Immediate future directions include: (i) developing a theoretical characterization of the training dynamics with generic non-orthogonal training data, which involves undertaking a non-trivial bifurcation analysis of interacting dynamical systems, (ii) moving beyond quadratic activation functions and two-layer neural networks, and (iii) developing tight generalization bounds when training with large-order step-sizes. Overall, our contributions make concrete steps towards a fine-grained understanding of the dynamics of iterative first-order optimization algorithms with large step-sizes when training neural networks.

## References

Roy L Adler, Alan G Konheim, and M Harry McAndrew. Topological entropy. *Transactions of the American Mathematical Society*, 114(2):309–319, 1965.

Naman Agarwal, Surbhi Goel, and Cyril Zhang. Acceleration via fractal learning rate schedules. In *Proceedings of the 38th International Conference on Machine Learning*, pp. 87–99, 2021.

Atish Agarwala and Yann Dauphin. SAM operates far from home: Eigenvalue regularization as a dynamical phenomenon. In *Proceedings of the 40th International Conference on Machine Learning*, pp. 152–168. PMLR, 2023.

Atish Agarwala, Fabian Pedregosa, and Jeffrey Pennington.
Second-order regression models exhibit progressive sharpening to the edge of stability. In *Proceedings of the 40th International Conference on Machine Learning*, 2023.

Kwangjun Ahn, Jingzhao Zhang, and Suvrit Sra. Understanding the unstable convergence of gradient descent. In *Proceedings of the 39th International Conference on Machine Learning*, pp. 247–257. PMLR, 2022.

Kwangjun Ahn, Sébastien Bubeck, Sinho Chewi, Yin Tat Lee, Felipe Suarez, and Yi Zhang. Learning threshold neurons via edge of stability. *Advances in Neural Information Processing Systems*, 36, 2024.

Kathleen T Alligood, Tim D Sauer, and James A Yorke. *Chaos: An Introduction to Dynamical Systems*, 1997.

Jason M Altschuler and Pablo A Parrilo. Acceleration by Stepsize Hedging I: Multi-Step Descent and the Silver Stepsize Schedule. *preprint arXiv:2309.07879*, 2023.

Maksym Andriushchenko, Aditya Vardhan Varre, Loucas Pillaud-Vivien, and Nicolas Flammarion. SGD with large step sizes learns sparse features. In *Proceedings of the 40th International Conference on Machine Learning*, pp. 903–925. PMLR, 2023.

Sanjeev Arora, Zhiyuan Li, and Abhishek Panigrahi. Understanding gradient descent on the edge of stability in deep learning. In *Proceedings of the 39th International Conference on Machine Learning*, pp. 948–1024. PMLR, 2022.

Bernd Aulbach and Bernd Kieninger. On three definitions of chaos. *Nonlinear Dyn. Syst. Theory*, 1(1):23–37, 2001.

Tolga Birdal, Aaron Lou, Leonidas Guibas, and Umut Simsekli. Intrinsic dimension, persistent homology and generalization in neural networks. In *Advances in Neural Information Processing Systems*, 2021.

B Branner and J H Hubbard. The iteration of cubic polynomials. Part I: The global topology of parameter space. *Acta Mathematica*, 160(3-4):143–206, 1988.

Alexander Camuto, George Deligiannidis, Murat A Erdogdu, Mert Gurbuzbalaban, Umut Simsekli, and Lingjiong Zhu. Fractal structure and generalization properties of stochastic optimization algorithms.
In *Advances in Neural Information Processing Systems*, 2021. Nisha Chandramoorthy, Andreas Loukas, Khashayar Gatmiry, and Stefanie Jegelka. On the generalization of learning algorithms that do not converge. *Advances in Neural Information Processing Systems*, 35:34241–34257, 2022. Lei Chen and Joan Bruna. Beyond the edge of stability via two-step gradient updates. In Proceedings of the 40th *International Conference on Machine Learning*, 2023. Ilya Chevyrev, Peter K Friz, Alexey Korepanov, and Ian Melbourne. Superdiffusive limits for deterministic fast–slow dynamical systems. *Probability theory and related fields*, 178(3-4):735–770, 2020. Jeremy Cohen, Simran Kaur, Yuanzhi Li, J Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. In The 9th *International Conference on Learning Representations*, 2021. Alex Damian, Eshaan Nichani, and Jason D. Lee. Self-stabilization: The implicit bias of gradient descent at the edge of stability. In The 11th *International Conference on Learning Representations*, 2023. Welington De Melo and Sebastian Van Strien. *One-dimensional dynamics*, volume 25. Springer Science & Business Media, 2012. R Devaney. *An Introduction to Chaotic Dynamical Systems*, 1989. Benjamin Dupuis, George Deligiannidis, and Umut Simsekli. Generalization bounds using data-dependent fractal dimensions. In Proceedings of the 40th *International Conference on Machine Learning*, pp. 8922–8968, 2023. Albert Fannjiang and Thomas Strohmer. The numerics of phase retrieval. *Acta Numerica*, 29:125–228, 2020. N Franzová and J Smítal. Positive sequence topological entropy characterizes chaotic maps. *Proceedings of the American Mathematical Society*, pp. 1083–1086, 1991. Spencer Frei, Gal Vardi, Peter L Bartlett, Nathan Srebro, and Wei Hu. Implicit bias in leaky relu networks trained on high-dimensional data. *arXiv preprint arXiv:2210.07082*, 2022.
Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Sneha Kudugunta, Behnam Neyshabur, David Cardoze, George Edward Dahl, Zachary Nado, and Orhan Firat. A loss curvature perspective on training instabilities of deep learning models. In The 12th *International Conference on Learning Representations*, 2022. Baptiste Goujaud, Damien Scieur, Aymeric Dieuleveut, Adrien B Taylor, and Fabian Pedregosa. Super-acceleration with cyclical step-sizes. In *International Conference on Artificial Intelligence and Statistics*, pp. 3028–3065. PMLR, 2022. Benjamin Grimmer. Provably faster gradient descent via long steps. *preprint arXiv:2307.06324*, 2023. Benjamin Grimmer, Kevin Shu, and Alex L Wang. Accelerated Gradient Descent via Long Steps. *preprint arXiv:2309.09961*, 2023. Jack K Hale and Hüseyin Koçak. *Dynamics and bifurcations*, volume 3. Springer Science & Business Media, 2012. Luis Herrmann, Maximilian Granz, and Tim Landgraf. Chaotic dynamics are intrinsic to neural network training with SGD. In *Advances in Neural Information Processing Systems*, 2022. Liam Hodgkinson, Umut Simsekli, Rajiv Khanna, and Michael Mahoney. Generalization bounds using lower tail exponents in stochastic optimizers. In Proceedings of the 39th *International Conference on Machine Learning*, pp. 8774–8795, 2022. Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. *Advances in Neural Information Processing Systems*, 31, 2018. Kishore Jaganathan, Yonina C Eldar, and Babak Hassibi. Phase retrieval: An overview of recent developments. *Optical Compressive Imaging*, pp. 279–312, 2016. Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, and Krzysztof Geras. The break-even point on optimization trajectories of deep neural networks. In The 8th *International Conference on Learning Representations*, 2020. Maxim Kodryan, Ekaterina Lobacheva, Maksim Nakhodnov, and Dmitry P Vetrov.
Training scale-invariant neural networks on the sphere can happen in three regimes. *Advances in Neural Information Processing Systems*, 35:14058–14070, 2022. Sergiĭ Kolyada. Li-Yorke sensitivity and other concepts of chaos. *Ukrainian Mathematical Journal*, 56(8), 2004. Lingkai Kong and Molei Tao. Stochasticity of deterministic gradient descent: Large learning rate for multiscale objective function. *Advances in Neural Information Processing Systems*, 33:2625–2638, 2020. Yiwen Kou, Zixiang Chen, and Quanquan Gu. Implicit bias of gradient descent for two-layer relu and leaky relu networks on nearly-orthogonal data. *arXiv preprint arXiv:2310.18935*, 2023. Itai Kreisler, Mor Shpigel Nacson, Daniel Soudry, and Yair Carmon. Gradient descent monotonically decreases the sharpness of gradient flow solutions in scalar networks and beyond. In Proceedings of the 40th *International Conference on Machine Learning*, 2023. Andrzej Lasota and Michael C Mackey. *Chaos, fractals, and noise: stochastic aspects of dynamics*, volume 97. Springer Science & Business Media, 1998. VI Lebedev and SA Finogenov. Ordering of the iterative parameters in the cyclical Chebyshev iterative method. *USSR Computational Mathematics and Mathematical Physics*, 11(2):155–170, 1971. Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, and Guy Gur-Ari. The large learning rate phase of deep learning: The catapult mechanism. *preprint arXiv:2003.02218*, 2020. Tien-Yien Li and James A Yorke. Period three implies chaos. *The American Mathematical Monthly*, 82(10):985–992, 1975. Soon Hoe Lim, Yijun Wan, and Umut Simsekli. Chaotic regularization and heavy-tailed limits for deterministic gradient descent. *Advances in Neural Information Processing Systems*, 35:26590–26602, 2022. Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, and Dmitry P. Vetrov. On the periodic behavior of neural network training with batch normalization and weight decay.
In *Advances in Neural Information Processing Systems*, 2021. Kaifeng Lyu, Zhiyuan Li, and Sanjeev Arora. Understanding the generalization benefit of normalization layers: Sharpness reduction. *Advances in Neural Information Processing Systems*, 35:34689–34708, 2022. John Milnor. Remarks on iterated cubic maps. *Experimental Mathematics*, 1(1):5–24, 1992. Helena Engelina Nusse. Asymptotically periodic behaviour in the dynamics of chaotic mappings. *SIAM Journal on Applied Mathematics*, 47(3):498–515, 1987. Edward Ott. *Chaos in dynamical systems*. Cambridge University Press, 2002. Samet Oymak. Provable super-convergence with a large cyclical learning rate. *IEEE Signal Processing Letters*, 28:1645–1649, 2021. Thomas D Rogers and David C Whitley. Chaos in the cubic mapping. *Mathematical Modelling*, 4(1):9–25, 1983. David Singer. Stable orbits and bifurcation of maps of the interval. *SIAM Journal on Applied Mathematics*, 35(2):260–267, 1978. Henrik Skjolding, Bodil Branner-Jørgensen, Peter L Christiansen, and Helge E Jensen. Bifurcations in discrete dynamical systems with cubic maps. *SIAM Journal on Applied Mathematics*, 43(3):520–534, 1983. Leslie N Smith. Cyclical learning rates for training neural networks. In *2017 IEEE Winter Conference on Applications of Computer Vision (WACV)*, pp. 464–472. IEEE, 2017. Jascha Sohl-Dickstein. The boundary of neural network trainability is fractal. *arXiv preprint arXiv:2402.06184*, 2024. Minhak Song and Chulhee Yun. Trajectory alignment: Understanding the edge of stability phenomenon via bifurcation theory. *Advances in Neural Information Processing Systems*, 36, 2024. Steven H Strogatz. *Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering*. CRC Press, 2018. Robert Tibshirani. Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 58(1):267–288, 1996. Kees Van Den Doel and Uri Ascher.
The chaotic nature of faster gradient descent methods. *Journal of Scientific Computing*, 51:560–581, 2012. Yuqing Wang, Minshuo Chen, Tuo Zhao, and Molei Tao. Large learning rate tames homogeneity: Convergence and balancing effect. In The 10th *International Conference on Learning Representations*, 2022. Jingfeng Wu, Vladimir Braverman, and Jason D Lee. Implicit bias of gradient descent for logistic regression at the edge of stability. *preprint arXiv:2305.11788*, 2023. Can Xu, Xuan Wang, Zhigang Zheng, and Zongkai Cai. Stability and bifurcation of collective dynamics in phase oscillator populations with general coupling. *Physical Review E*, 103(3):032307, 2021. Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 68(1):49–67, 2006. Jingzhao Zhang, Haochuan Li, Suvrit Sra, and Ali Jadbabaie. Neural network weights do not converge to stationary points: An invariant measure perspective. In Proceedings of the 39th *International Conference on Machine Learning*, pp. 26330–26346, 2022. Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, and Mikhail Belkin. Catapults in SGD: Spikes in the training loss and their impact on generalization through feature learning. *preprint arXiv:2306.04815*, 2023a. Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, and Mikhail Belkin. Quadratic models for understanding neural network dynamics. In *The Twelfth International Conference on Learning Representations*, 2024. URL https://openreview.net/forum?id=PvJnX3dwsD. Xingyu Zhu, Zixuan Wang, Xiang Wang, Mo Zhou, and Rong Ge. Understanding edge-of-stability training dynamics with a minimalist example. In The 11th *International Conference on Learning Representations*, 2023b. Liu Ziyin, Botao Li, James B Simon, and Masahito Ueda. SGD with a Constant Large Learning Rate Can Converge to Local Maxima. In *International Conference on Learning Representations*, 2022.
URL https://openreview.net/forum?id=9XhPLAjjRB.

## Supplementary Material For "From Stability To Chaos: Analyzing Gradient Descent Dynamics In Quadratic Regression"

## A Proofs Of Main Results

## A.1 Proofs Of Results In Section 2

We first present several technical results required to prove our main results.

Lemma 4. *Let* f(x) *be a polynomial. If all the roots of* f′(x) *are real and distinct, then we have*

$${\mathsf{S}}f(x)={\frac{f^{\prime\prime\prime}(x)}{f^{\prime}(x)}}-{\frac{3}{2}}\left({\frac{f^{\prime\prime}(x)}{f^{\prime}(x)}}\right)^{2}<0\ \ {\mathrm{~for~all~}}x\ {\mathrm{~with~}}f^{\prime}(x)\neq0.$$

Proof. See, e.g., the proof of Proposition 11.2 in Devaney (1989). □

Lemma 5. *Suppose we are given a real-valued continuous function* $f(x):\mathbb{R}\to\mathbb{R}$ *and a bounded closed interval* $I\subseteq\mathbb{R}$ *with* $x_{0}\in I$*. Define* $x_{k}:=f^{(k)}(x_{0})$*. If the sequence* $\{x_{k}\}_{k=0}^{\infty}$ *is monotonic, then one of the following holds.*

- (i) $\{x_{k}\}_{k=0}^{\infty}\not\subseteq I$, i.e., there exists $x_{t}\notin I$ for some $t$.

- (ii) $\{x_{k}\}_{k=0}^{\infty}\subseteq I$, and $\lim_{t\to\infty}f^{(t)}(x_{0})$ exists and is a fixed point of $f(x)$ in $I$.

Proof. If (i) holds, then the conclusion is true. When (i) does not hold, then $\{x_{k}\}_{k=0}^{\infty}\subseteq I$. Since this sequence is monotonic and included in a bounded closed interval, we know its limit exists and is in I. Moreover, we have

$$\operatorname*{lim}_{t\to\infty}x_{t}=\operatorname*{lim}_{t\to\infty}x_{t+1}=\operatorname*{lim}_{t\to\infty}f(x_{t})=f\left(\operatorname*{lim}_{t\to\infty}x_{t}\right),$$

where the last equality holds since f is continuous. Clearly $\lim_{t\to\infty}x_{t}$ is a fixed point of f. □

The following lemma characterizes the basic properties of the cubic function fa defined in (2.1).

Lemma 6. Suppose a > 0.
Then fa(z) *has the following properties.*

- (i) The local minimum and local maximum of $f_{a}(z)$ are at $z=1$ and $z=\frac{1-2a}{3}$ respectively, and
$$f_{a}(1)=-a,\ f_{a}\left(\frac{1-2a}{3}\right)=\frac{(2a-1)(2a^{2}+7a-4)}{27}=\frac{4a^{3}+12a^{2}-15a+4}{27}.$$
- (ii) $f_{a}(z)$ is monotonically increasing on $[-a,\frac{1-2a}{3}]$, monotonically decreasing on $[\frac{1-2a}{3},1]$, and monotonically increasing on $[1,2]$.
- (iii) For any $-a\leq z\leq2$, we have $-a\leq f_{a}(z)\leq\max\left\{f_{a}\left(\frac{1-2a}{3}\right),2\right\}$. Moreover, $f_{a}\left(\frac{1-2a}{3}\right)\leq2$ if and only if $a\leq2$.

Proof. Note that we have
$$f^{\prime}_{a}(z)=3z^{2}+2(a-2)z+(1-2a)=(z-1)(3z+2a-1),\tag{A.1}$$
which implies $1$ and $\frac{1-2a}{3}$ are critical points of $f_{a}(z)$. Moreover, by $f^{\prime\prime}_{a}(z)=6z+2a-4$ we know $f^{\prime\prime}_{a}(1)>0$ and $f^{\prime\prime}_{a}\left(\frac{1-2a}{3}\right)<0$. Hence, they are a local minimum and a local maximum respectively. The rest of (i) is true by calculation. (ii) is true by noticing the expression of $f^{\prime}_{a}(z)$ in (A.1). (iii) is a direct conclusion of (i) and (ii), since for $-a\leq z\leq2$ we have
$$-a=\operatorname*{min}\left\{f_{a}(1),f_{a}(-a)\right\}\leq f_{a}(z)\leq\operatorname*{max}\left\{f_{a}\left({\frac{1-2a}{3}}\right),f_{a}(2)\right\}.$$
By (i) and some calculation we know
$$f_{a}\left(\frac{1-2a}{3}\right)-2=\frac{4a^{3}+12a^{2}-15a-50}{27}=\frac{(2a+5)^{2}(a-2)}{27}.$$
This proves the rest of (iii). □

Lemma 7. *Suppose* $2\sqrt{2}-2<a\leq1$. *Define five subintervals of* [−a, 2] *as follows.*
$$I_{1}=\left[-a,\frac{2-a-\sqrt{a^{2}+4a}}{2}\right],\ I_{2}=\left[\frac{2-a-\sqrt{a^{2}+4a}}{2},0\right],$$
$$I_{3}=\left[0,0.25\right],\ I_{4}=\left[0.25,\frac{2-a+\sqrt{a^{2}+4a}}{2}\right],\ I_{5}=\left[\frac{2-a+\sqrt{a^{2}+4a}}{2},2\right].$$
Then we have
- (i) $f_{a}(I_{1})=I_{1}\cup I_{2},\ f_{a}(I_{4})=I_{1}\cup I_{2},\ f_{a}(I_{5})=I_{3}\cup I_{4}\cup I_{5}.$
- (ii) $f_{a}(I_{2})\subseteq I_{3},\ f_{a}(I_{3})\subseteq I_{2}.$

Proof. We first prove (i).
By Lemma 6, $f_{a}(z)$ is increasing on $I_{1}$, attains its local minimum at $z=1\in I_{4}$, and is increasing on $I_{5}$. Hence
$$f_{a}(I_{1})=\left[f_{a}(-a),f_{a}\left(\frac{2-a-\sqrt{a^{2}+4a}}{2}\right)\right]=[-a,0]=I_{1}\cup I_{2},$$
$$f_{a}(I_{4})=\left[f_{a}(1),\max\left\{f_{a}(0.25),f_{a}\left(\frac{2-a+\sqrt{a^{2}+4a}}{2}\right)\right\}\right]=[-a,0]=I_{1}\cup I_{2},$$
$$f_{a}(I_{5})=\left[f_{a}\left(\frac{2-a+\sqrt{a^{2}+4a}}{2}\right),f_{a}(2)\right]=[0,2]=I_{3}\cup I_{4}\cup I_{5}.$$
This completes the proof of (i). To prove (ii), observe that when $a\in(2\sqrt{2}-2,1]$ we have $\frac{2-a-\sqrt{a^{2}+4a}}{2}<\frac{1-2a}{3}<0$. By Lemma 6, the local maximum of $f_{a}$ over $I_{2}=\left[\frac{2-a-\sqrt{a^{2}+4a}}{2},0\right]$ is achieved at $\frac{1-2a}{3}$; this, together with the fact that $f_{a}(0)=f_{a}\left(\frac{2-a-\sqrt{a^{2}+4a}}{2}\right)=0$, implies
$$f_{a}\left(I_{2}\right)=\left[f_{a}(0),f_{a}\left({\frac{1-2a}{3}}\right)\right]=\left[0,{\frac{4a^{3}+12a^{2}-15a+4}{27}}\right]\subseteq[0,0.25],$$
where the last inclusion is true since $(4a^{3}+12a^{2}-15a+4)^{\prime}=12a^{2}+24a-15>0$ for all $a\in(2\sqrt{2}-2,1]$, which implies that for $a\in(2\sqrt{2}-2,1]$,
$${\frac{4a^{3}+12a^{2}-15a+4}{27}}\leq{\frac{(4a^{3}+12a^{2}-15a+4)|_{a=1}}{27}}={\frac{5}{27}}<0.25.$$
On the other hand, we know from Lemma 6 that $f_{a}$ is decreasing on $I_{3}=[0,0.25]\subseteq\left[\frac{1-2a}{3},1\right]$. Hence,
$$f_{a}(I_{3})=[f_{a}(0.25),f_{a}(0)]=\left[-{\frac{7}{16}}a+{\frac{9}{64}},0\right]\subseteq\left[{\frac{2-a-{\sqrt{a^{2}+4a}}}{2}},0\right]=I_{2},$$
where the last inclusion is true since
$$f_{a}(0.25)=-\frac{7}{16}a+\frac{9}{64}>\frac{2-a-\sqrt{a^{2}+4a}}{2},\ \forall a\in(2\sqrt{2}-2,1].$$
This completes the proof of (ii). □

See Figure 5(a) for a visualization of the subintervals $I_{1},...,I_{5}$ for a = 1 and an example of the orbit on it.
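The interval inclusions of Lemma 7 (ii) are also easy to sanity-check numerically. The following sketch (the helper names `f` and `check_lemma7` are ours, and a finite grid is of course only a check, not a proof) samples $I_2$ and $I_3$ and verifies $f_a(I_2)\subseteq I_3$ and $f_a(I_3)\subseteq I_2$ for a few values of $a\in(2\sqrt{2}-2,1]$:

```python
import math

def f(a, z):
    # Cubic map f_a(z) = z(z + a)(z - 2) + z = z * g_a(z) from (2.1)
    return z * (z + a) * (z - 2) + z

def check_lemma7(a, samples=20_000, eps=1e-9):
    """Grid check of f_a(I2) subset of I3 and f_a(I3) subset of I2."""
    l = (2 - a - math.sqrt(a * a + 4 * a)) / 2       # left endpoint of I2
    I2 = (l * (1 - k / samples) for k in range(samples + 1))  # grid on [l, 0]
    I3 = (0.25 * k / samples for k in range(samples + 1))     # grid on [0, 0.25]
    in_I3 = all(-eps <= f(a, z) <= 0.25 + eps for z in I2)
    in_I2 = all(l - eps <= f(a, z) <= eps for z in I3)
    return in_I3 and in_I2

# 2*sqrt(2) - 2 is roughly 0.8284, so these all lie in the range of Lemma 7
assert all(check_lemma7(a) for a in (0.83, 0.9, 1.0))
```

The small tolerance `eps` absorbs floating-point noise at the interval endpoints, where $f_a$ vanishes exactly.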
![19_image_0.png](19_image_0.png)

Figure 5: From left to right: cubic function f1(z) with different regions divided by subintervals and a trajectory of $\{z_{i}\}_{i=0}^{5}$, cubic function f1.2(z) with two period-2 points, cubic function f1.6(z) with a period-3 point, and cubic function f2.1(z) with a diverging orbit. We show the cubic curve and the identity mapping line as the solid curves. We use four colored dashed lines in Figure 5(a) to represent the boundaries that are orthogonal to the endpoints of I2 and I4 defined in Lemma 7 respectively. The triangle markers represent some terms of a certain orbit, in which horizontal and vertical dotted lines visualize the transitioning trajectory between consecutive terms in an orbit.

Lemma 8. *Suppose* 0 < a ≤ 1 *and* −a ≤ z0 ≤ 2*. Then we have*

- (i) $-a\leq z_{t}\leq2$ for any $t$, and $f_{a}$ does not have a period-2 *point on* [−a, 2].

- (ii) If z0 is chosen from [−a, 2] *uniformly at random, then* $\lim_{t\to\infty}z_{t}=0$ almost surely. Moreover, if $0<a\leq2\sqrt{2}-2$, then almost surely $|z_{t+1}|\leq|z_{t}|$ for all $t$. If $2\sqrt{2}-2<a\leq1$, then almost surely $\{|z_{t}|\}_{t=0}^{\infty}$ has catapults.

Proof. The boundedness of each iterate (i.e., $z_{t}\in[-a,2]$) can be proved by simple induction using Lemma 6, $0<a\leq1$, and $-a\leq z_{0}\leq2$. To prove the rest of (i), by (2.1) we know a period-2 point is a solution of
$$f_{a}^{(2)}(z)=z,\ f_{a}(z)\neq z,$$
which is equivalent to
$$g_{a}(z)g_{a}(z g_{a}(z))=1,\ z\notin\{-a,0,2\}.\tag{A.2}$$
Hence it suffices to prove that (A.2) does not have a solution.
Define
$$h_{a}(z)=g_{a}(z)-1=(z+a)(z-2)<0,\ \forall z\in(-a,2).$$
We have
$$\begin{array}{l}{{g_{a}(z)g_{a}(z g_{a}(z))-1}}\\ {{=h_{a}(z)+h_{a}(z)h_{a}(z g_{a}(z))+h_{a}(z g_{a}(z))}}\\ {{=h_{a}(z)(1+h_{a}(z g_{a}(z)))+(z+a+z h_{a}(z))(z-2+z h_{a}(z))}}\\ {{=h_{a}(z)(1+h_{a}(z g_{a}(z)))+h_{a}(z)+(z(z-2)+z(z+a))h_{a}(z)+z^{2}h_{a}^{2}(z)}}\\ {{=h_{a}(z)(h_{a}(z g_{a}(z))+z^{2}h_{a}(z)+2z^{2}+(a-2)z+2).}}\end{array}\tag{A.3}$$
We have
$$\begin{array}{l}{h_{a}(zg_{a}(z))+z^{2}h_{a}(z)+2z^{2}+(a-2)z+2}\\ {=(zg_{a}(z)+a)(zg_{a}(z)-2)+z^{2}(z+a)(z-2)+2z^{2}+(a-2)z+2}\\ {=z^{2}(z^{2}+(a-2)z+1-2a)^{2}+(a-2)z(z^{2}+(a-2)z+1-2a)-2a}\\ {\quad+z^{2}(z+a)(z-2)+2z^{2}+(a-2)z+2}\\ {=z^{6}+(2a-4)z^{5}+(a^{2}-8a+7)z^{4}-(4a^{2}-12a+8)z^{3}+(5a^{2}-10a+7)z^{2}}\\ {\quad-(2a^{2}-6a+4)z+2-2a}\\ {=(z^{2}+(a-1)z+1-a)(z^{4}+(a-3)z^{3}+(3-3a)z^{2}+(2a-2)z+2).}\end{array}$$
Observe that
$$z^{2}+(a-1)z+(1-a)\geq(1-a)-{\frac{(a-1)^{2}}{4}}={\frac{(3+a)(1-a)}{4}}\geq0,\ \forall a\in(0,1].\tag{A.4}$$
The equalities hold if and only if z = 0 and a = 1. We also have
$$z^{4}+(a-3)z^{3}+(3-3a)z^{2}+(2a-2)z+2>0,\ \forall z\in\{0,1,2\},\tag{A.5}$$
and
$$z^{4}+(a-3)z^{3}+(3-3a)z^{2}+(2a-2)z+2=z(z-1)(z-2)\left(a+z+\frac{1}{z}+\frac{1}{z^{2}-3z+2}\right),\ \forall z\notin\{0,1,2\}.$$
For different z we can verify the following inequalities via basic algebra or Young's inequality:
$$z(z-1)(z-2)<0,\ \left(a+z+\frac{1}{z}+\frac{1}{z^{2}-3z+2}\right)<1+2+\frac{1}{2}+\frac{1}{-0.25}<0,\ \forall z\in(1,2),$$
$$z(z-1)(z-2)>0,\ \left(a+z+\frac{1}{z}+\frac{1}{z^{2}-3z+2}\right)>0+1+1+0>0,\ \forall z\in(0,1),$$
$$z(z-1)(z-2)<0,\ \left(a+z+\frac{1}{z}+\frac{1}{z^{2}-3z+2}\right)<1-1-1+\frac{1}{2}<0,\ \forall z\in(-a,0).$$
Thus we may conclude that
$$z^{4}+(a-3)z^{3}+(3-3a)z^{2}+(2a-2)z+2>0,\ \forall z\in(-a,2).\tag{A.6}$$
By (A.3), (A.4), (A.5), and (A.6), we know $g_{a}(z)g_{a}(zg_{a}(z))-1\neq0$ if $z\notin\{-a,0,2\}$.
Hence fa does not have a period-2 point on [−a, 2]. To prove the first part of (ii) (the limit converges to 0 almost surely), we will prove
$$(1)\ \operatorname*{lim}_{t\to\infty}z_{t}\in\{-a,0,2\},\quad(2)\ \text{the set}\ S\ \text{of initial points}\ z_{0}\ \text{whose orbit does not converge to}\ 0\ \text{has measure}\ 0.\tag{A.7}$$
We now consider two cases: $a\in(0,2\sqrt{2}-2]$ and $a\in(2\sqrt{2}-2,1]$.

Case 1: $a\in(0,2\sqrt{2}-2]$. Note that we have
$$|g_{a}(z_{t})|=|z_{t}^{2}+(a-2)z_{t}+1-2a|\leq\operatorname*{max}\left(|g_{a}(-a)|,|g_{a}(2)|,\left|g_{a}\left(1-{\frac{a}{2}}\right)\right|\right)=1,$$
where the last equality holds since $g_{a}(-a)=g_{a}(2)=1$ and $\left|g_{a}\left(1-\frac{a}{2}\right)\right|=\frac{a^{2}+4a}{4}\leq1$ for any $a\in(0,2\sqrt{2}-2]$. Hence, we know
$$|z_{t+1}|=|f_{a}(z_{t})|=|z_{t}g_{a}(z_{t})|\leq|z_{t}|,\ \forall z_{t}\in[-a,2],\tag{A.8}$$
so $\lim_{t\to\infty}|z_{t}|$ exists. Moreover,
$$\operatorname*{lim}_{t\to\infty}|z_{t}|=\operatorname*{lim}_{t\to\infty}|z_{t+1}|=\operatorname*{lim}_{t\to\infty}|z_{t}||g_{a}(z_{t})|.$$
Hence, we know that either
$$\operatorname*{lim}_{t\to\infty}|z_{t}|=0,\ \text{or}\ \operatorname*{lim}_{t\to\infty}|z_{t}|\neq0\ \text{and}\ \operatorname*{lim}_{t\to\infty}|g_{a}(z_{t})|=1.$$
If $\lim_{t\to\infty}|z_{t}|\neq0$, then we have two sub-cases.

- Sub-case 1: $\lim_{t\to\infty}z_{t}$ exists. We can verify that
$$\operatorname*{lim}_{t\to\infty}z_{t}=\operatorname*{lim}_{t\to\infty}z_{t+1}=f_{a}(\operatorname*{lim}_{t\to\infty}z_{t}),$$
and thus $\lim_{t\to\infty}z_{t}$ is one of the fixed points of $f_{a}(z)$, i.e., it belongs to $\{-a,0,2\}$.

- Sub-case 2: $\lim_{t\to\infty}z_{t}$ does not exist. Since $\lim_{t\to\infty}|z_{t}|$ exists, there is an infinite subsequence (denoted $A_{1}$) of $\{z_{t}\}_{t=0}^{\infty}$ with some limit $c>0$, whose complement forms another infinite subsequence (denoted $A_{2}$) with limit $-c$. Hence, we can pick a sequence of subscripts $k_{1}<k_{2}<...<k_{n}<...$ such that $z_{k_{1}},...,z_{k_{n}},...$ belong to $A_{1}$ and $z_{k_{1}+1},...,z_{k_{n}+1},...$ belong to $A_{2}$.
Moreover, we have
$$c=\operatorname*{lim}_{i\to\infty}z_{k_{i}}=-\operatorname*{lim}_{i\to\infty}z_{k_{i}+1}=-\operatorname*{lim}_{i\to\infty}z_{k_{i}}g_{a}(z_{k_{i}})=-c\,g_{a}(c).$$
This implies that $g_{a}(c)=-1$, i.e.,
$$c^{2}+(a-2)c+2-2a=0.$$
From its discriminant $(a-2)^{2}-4(2-2a)=a^{2}+4a-4\leq0$ for $a\in(0,2\sqrt{2}-2]$, where equality holds only at $2\sqrt{2}-2$, we know $a=2\sqrt{2}-2$ and thus $c=2-\sqrt{2}$. However, we can apply the same trick and pick another sequence $\tilde{k}_{1}<\tilde{k}_{2}<...<\tilde{k}_{n}<...$ such that $z_{\tilde{k}_{1}},...,z_{\tilde{k}_{n}},...$ belong to $A_{2}$ and $z_{\tilde{k}_{1}+1},...,z_{\tilde{k}_{n}+1},...$ belong to $A_{1}$. This implies
$$-c=\operatorname*{lim}_{i\to\infty}z_{\tilde{k}_{i}}=-\operatorname*{lim}_{i\to\infty}z_{\tilde{k}_{i}+1}=-\operatorname*{lim}_{i\to\infty}z_{\tilde{k}_{i}}g_{a}(z_{\tilde{k}_{i}})=-(-c)g_{a}(-c),$$
which gives $g_{a}(-c)=-1$, i.e.,
$$c^{2}-(a-2)c+2-2a=0.$$
This contradicts $a=2\sqrt{2}-2$ and $c=2-\sqrt{2}$, so Sub-case 2 does not occur. Hence, we know $|z_{t}|$ is decreasing (not necessarily strictly) and $\lim_{t\to\infty}z_{t}\in\{-a,0,2\}$.

Case 2: $a\in(2\sqrt{2}-2,1]$. We divide the interval [−a, 2] into the following five parts:
$$\begin{array}{l}{{I_{1}=\left[-a,\frac{2-a-\sqrt{a^{2}+4a}}{2}\right],\ I_{2}=\left[\frac{2-a-\sqrt{a^{2}+4a}}{2},0\right],}}\\ {{I_{3}=\left[0,0.25\right],\ I_{4}=\left[0.25,\frac{2-a+\sqrt{a^{2}+4a}}{2}\right],\ I_{5}=\left[\frac{2-a+\sqrt{a^{2}+4a}}{2},2\right].}}\end{array}$$
Recall that by Lemma 7 we have
$$f_{a}(I_{1})=I_{1}\cup I_{2},\ f_{a}(I_{2})\subseteq I_{3},\ f_{a}(I_{3})\subseteq I_{2},\ f_{a}(I_{4})=I_{1}\cup I_{2},\ f_{a}(I_{5})=I_{3}\cup I_{4}\cup I_{5}.$$
Observe that fa is continuous, and
$$z_{t+1}-z_{t}=f_{a}(z_{t})-z_{t}=z_{t}(z_{t}+a)(z_{t}-2)\geq0,\ \forall z_{t}\in I_{1}=\left[-a,\frac{2-a-\sqrt{a^{2}+4a}}{2}\right],$$
$$z_{t+1}-z_{t}=f_{a}(z_{t})-z_{t}=z_{t}(z_{t}+a)(z_{t}-2)\leq0,\ \forall z_{t}\in I_{5}=\left[\frac{2-a+\sqrt{a^{2}+4a}}{2},2\right].$$
If the sequence $\{z_{t}\}_{t=0}^{\infty}$ visits $I_{5}$, by Lemma 5 either $\lim_{t\to\infty}z_{t}=2$ or there exists $M>0$ such that $z_{t}\notin I_{5}$ for any $t\geq M$. If the sequence then visits $I_{1}$, by Lemma 5 either $\lim_{t\to\infty}z_{t}=-a$ or there exists $\tilde{M}>M>0$ such that $z_{t}\in I_{2}\cup I_{3}$ for any $t\geq\tilde{M}$, since $f_{a}(I_{1})\subseteq I_{1}\cup I_{2}$ and $f_{a}(I_{2}\cup I_{3})\subseteq I_{2}\cup I_{3}$. Hence, the proof is reduced to the case $z_{0}\in I_{2}\cup I_{3}=\left[\frac{2-a-\sqrt{a^{2}+4a}}{2},0.25\right]$. The key observation is to show that in this interval
$$|z_{t+2}|\leq|z_{t}|.\tag{A.9}$$
Recall that by Lemma 7 (ii) we have
$$f_{a}\left(I_{2}\right)\subseteq I_{3},\ f_{a}(I_{3})\subseteq I_{2}.\tag{A.10}$$
To prove (A.9), we know it holds when $z_{t}=0$. When $z_{t}\neq0$, by (A.10) we know $f_{a}^{(2)}(z_{t})$ and $z_{t}$ have the same sign provided $z_{t}\in I_{2}\cup I_{3}$. This together with
$$f_{a}^{(2)}(z)=f_{a}(z)g_{a}(f_{a}(z))=z g_{a}(z)g_{a}(z g_{a}(z))$$
implies that $g_{a}(z)g_{a}(zg_{a}(z))\geq0$ when $z\in\left[\frac{2-a-\sqrt{a^{2}+4a}}{2},0\right)\cup(0,0.25]$. Thus we know
$$|z_{t+2}|=|z_{t}g_{a}(z_{t})g_{a}(z_{t}g_{a}(z_{t}))|=|z_{t}|g_{a}(z_{t})g_{a}(z_{t}g_{a}(z_{t})).$$
Thus to prove (A.9) it suffices to show $g_{a}(z)g_{a}(zg_{a}(z))-1\leq0$, which is true by combining (A.3), (A.4), (A.5), and (A.6). This completes the proof of (1) in (A.7).

To prove (2) in (A.7), we first notice that $f_{a}(z)-z=z(z+a)(z-2)>0$ for any $z\in(-a,0)$, and thus $z_{t+1}>z_{t}$ for any $z_{t}$ near $-a$. Hence, $\lim_{t\to\infty}z_{t}=-a$ if and only if there exists $t$ such that $z_{t}=-a$. This implies that $f_{a}^{(t)}(z_{0})=-a$ for some $t$.
Similarly, $f_{a}(z)-z<0$ for any $z\in(0,2)$, which implies $z_{t+1}<z_{t}$ for any $z_{t}$ near $2$. Hence, $\lim_{t\to\infty}z_{t}=2$ if and only if $z_{0}=2$. Define
$$S=\bigcup_{n=0}^{\infty}f_{a}^{(-n)}(-a)\cup\{2\},$$
where $f_{a}^{(-n)}(-a)$ denotes the preimage of $-a$ under $f_{a}^{(n)}$. Clearly, each preimage is a finite set, and thus $S$ is countable. Hence, we know that as long as $z_{0}\in[-a,2]\setminus S$, we have $\lim_{t\to\infty}z_{t}=0$. Since $S$ is a countable set and $z_{0}$ is chosen uniformly at random, we know $\lim_{t\to\infty}z_{t}=0$ almost surely.

For the rest of (ii), we have already proved in (A.8) that $\{|z_{t}|\}_{t=0}^{\infty}$ is decreasing when $0<a\leq2\sqrt{2}-2$. To see that $\{|z_{t}|\}_{t=0}^{\infty}$ has catapults when $2\sqrt{2}-2<a\leq1$, we consider the following intervals
$$J_{1}=[-a,0]=I_{1}\cup I_{2},\ J_{2}=\left[0,\operatorname*{min}\left\{\frac{2-a+\sqrt{a^{2}+4a-4}}{2},0.25\right\}\right]\subseteq I_{3},$$
where we have $a^{2}+4a-4>0$ for $a>2\sqrt{2}-2$, so $J_{2}$ is well-defined. Notice that
$$0<z<\frac{2-a+\sqrt{a^{2}+4a-4}}{2}\Leftrightarrow g_{a}(z)<-1,\ z>0.$$
Hence we know that for any $z_{t}\in J_{2}$, we will have
$$|z_{t+1}|=|z_{t}g_{a}(z_{t})|>|z_{t}|.\tag{A.11}$$
On the other hand, notice that $0$ is in the orbit if and only if $z_{0}\in S_{0}$, where $S_{0}$ is defined as
$$S_{0}=\bigcup_{n=0}^{\infty}f_{a}^{(-n)}(0),$$
where $f_{a}^{(-n)}(0)$ denotes the preimage of $0$ under $f_{a}^{(n)}$. Note that each preimage is finite and thus $S_{0}$ is countable. Hence, almost surely the orbit will not contain $0$. Recalling Lemma 7 (ii) and $\lim_{t\to\infty}z_{t}=0$, we know there are infinitely many $t$ such that $z_{t}\in J_{2}$, and thus (A.11) holds for infinitely many $t$ almost surely. By Definition 2, we know $\{|z_{t}|\}$ has catapults almost surely. □

The following lemma indicates that $f_{a}$ is chaotic provided that $a>a_{*}$, where $a_{*}\in(1,2)$.

Lemma 9. *Suppose* 1 < a ≤ 2 *and* −a ≤ z0 ≤ 2*. Then we have*

- (i) $-a\leq z_{t}\leq2$ for any $t$, and $f_{a}(z)$ has a period-2 *point on* [0, 1].
- (ii) There exists $a_{*}\in(1,2)$ such that for any $a\in(a_{*},2)$, fa *is Li-Yorke chaotic, and for any* $a\in(1,a_{*})$, fa *is not Li-Yorke chaotic.*

- (iii) If there exists an asymptotically stable orbit and z0 is chosen from [−a, 2] *uniformly at random, then the orbit of* z0 *is asymptotically periodic almost surely.*

Proof. The boundedness of $z_{t}$ is a direct result of Lemma 6 (iii). To prove the rest of (i), we notice that for $a\in(1,2]$
$$g_{a}(0)g_{a}(0\cdot g_{a}(0))=(1-2a)^{2}>1,\ g_{a}(1)g_{a}(1\cdot g_{a}(1))=-a<-1.$$
By continuity of $g_{a}(z)g_{a}(zg_{a}(z))$ we know there exists a point $z_{0}\in(0,1)$ such that $g_{a}(z_{0})g_{a}(z_{0}g_{a}(z_{0}))=1$. This indicates that $f_{a}^{(2)}(z_{0})=z_{0}g_{a}(z_{0})g_{a}(z_{0}g_{a}(z_{0}))=z_{0}$, but clearly $f_{a}(z_{0})\neq z_{0}$ since $(0,1)$ does not contain any fixed point of $f_{a}$.

To prove (ii), notice that
$$f_{1}\left({\frac{1-2\times1}{3}}\right)={\frac{5}{27}}<1<2=f_{2}\left({\frac{1-2\times2}{3}}\right).$$
By continuity of $f_{a}\left(\frac{1-2a}{3}\right)$ (with respect to $a$) there exists $c\in(1,2)$ such that
$$f_{c}\left(\frac{1-2c}{3}\right)=\frac{(2c-1)(2c^{2}+7c-4)}{27}=1.\tag{A.12}$$
Moreover, we have
$$f_{c}(-c)=-c<{\frac{1-2c}{3}},\ f_{c}\left({\frac{1-2c}{3}}\right)=1>{\frac{1-2c}{3}}.$$
Hence by continuity of $f_{c}(z)$ we can pick $z_{0}\in\left(-c,\frac{1-2c}{3}\right)$ such that $f_{c}(z_{0})=\frac{1-2c}{3}$. We have
$$-c<z_{0}<\frac{1-2c}{3}=f_{c}(z_{0}).\tag{A.13}$$
By (A.12), (A.13), and Lemma 6 (i), we have
$$f_{c}^{(3)}(z_{0})=f_{c}^{(2)}\left(\frac{1-2c}{3}\right)=f_{c}(1)=-c\leq z_{0},\tag{A.14}$$
$$f_{c}(z_{0})=\frac{1-2c}{3}<1=f_{c}^{(2)}\left(z_{0}\right).\tag{A.15}$$
Combining (A.13), (A.14), and (A.15) we can easily verify that
$$f_{c}^{(3)}(z_{0})\leq z_{0}<f_{c}(z_{0})<f_{c}^{(2)}(z_{0}).$$
By Theorem B.1 (i.e., Theorem 1 in Li & Yorke (1975)), we know $f_{c}$ is Li-Yorke chaotic.
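As an aside, the intermediate-value argument used for (i) is easy to check numerically. The sketch below (helper names ours) bisects $h(z)=g_{a}(z)g_{a}(zg_{a}(z))-1$ on $(0,1)$, where $h(0)=(1-2a)^{2}-1>0$ and $h(1)=-a-1<0$ for $a>1$, to locate a period-2 point of $f_{a}$ for $a=1.2$:

```python
def f(a, z):
    return z * (z + a) * (z - 2) + z        # f_a(z) = z * g_a(z)

def g(a, z):
    return z * z + (a - 2) * z + 1 - 2 * a  # g_a(z)

def period2_point(a, lo=0.0, hi=1.0, iters=100):
    # h changes sign on (0, 1) for a in (1, 2], so bisection finds a root,
    # i.e., a point with f_a^{(2)}(z) = z that is not a fixed point.
    h = lambda z: g(a, z) * g(a, z * g(a, z)) - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

z = period2_point(1.2)
assert abs(f(1.2, f(1.2, z)) - z) < 1e-8  # period-2: f_a(f_a(z)) = z
assert abs(f(1.2, z) - z) > 0.1           # but f_a(z) != z
```

Any root of $h$ in $(0,1)$ is a genuine period-2 point, since $(0,1)$ contains no fixed point of $f_{a}$.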
Moreover, for any $a\in(c,2]$, we know
$$f_{a}\left({\frac{1-2a}{3}}\right)={\frac{(2a-1)(2a^{2}+7a-4)}{27}}>{\frac{(2c-1)(2c^{2}+7c-4)}{27}}=f_{c}\left({\frac{1-2c}{3}}\right)=1,$$
which together with $f_{a}(0)=0<1$ implies we can pick $y_{0}$ such that
$${\frac{1-2a}{3}}<y_{0}<0,\ f_{a}(y_{0})=1.$$
Similarly, we have
$$f_{a}(-a)=-a<\frac{1-2a}{3}<y_{0},\ f_{a}\left(\frac{1-2a}{3}\right)>1>y_{0},$$
which implies we can pick $x_{0}$ such that
$$-a<x_{0}<{\frac{1-2a}{3}},\ f_{a}(x_{0})=y_{0}.$$
Now we know
$$f_{a}^{(3)}(x_{0})<x_{0}<f_{a}(x_{0})<f_{a}^{(2)}(x_{0}).$$
By Theorem B.1 (i.e., Theorem 1 in Li & Yorke (1975)), we know $f_{a}$ is Li-Yorke chaotic. Hence, $c$ defined in (A.12) satisfies that for any $a\in(c,2]$, $f_{a}$ is Li-Yorke chaotic. Hence, we know
$$a_{*}=\operatorname*{inf}\{a\in(1,2):f_{b}\ \text{is Li-Yorke chaotic for any}\ b\in[a,2]\},$$
where the set is not empty, since we have proven that $c$ belongs to it. This completes the proof of (ii).

To prove (iii), we notice that if $f_{a}(z)$ has an asymptotically stable periodic orbit, then by Theorem B.2 (i.e., Theorem 2.7 in Singer (1978)) and the fact that $f_{a}$ has negative Schwarzian derivative at non-critical points (Lemma 4), there exists a critical point $c$ of $f_{a}(z)$ such that the orbit of $c$ converges to this asymptotically stable orbit. Notice that by Lemma 6 we know $c=1$ or $c=\frac{1-2a}{3}$. The case $c=1$ can be excluded since $f_{a}(1)=-a$, and $-a$ is an unstable period-1 point. Hence, we know $c=\frac{1-2a}{3}$ is asymptotically periodic. By Theorems B.3 and B.4 (i.e., Theorem B and Corollary in Nusse (1987)), we know that the orbit of $z_{0}$ is almost surely asymptotically periodic if $z_{0}$ is chosen from [−a, 2] uniformly at random. This completes the proof. □

## Remarks:

- See Figure 5(b) for a pair of period-2 points when a = 1.2, and Figure 5(c) for a period-3 orbit when a = 1.6. The triangle markers denote the periodic points.
- By Theorem B.2 (i.e., Theorem 2.7 in Singer (1978)) and the fact that $-a$ is an unstable period-1 point, we know $f_{a}(z)$ has at most one asymptotically stable periodic orbit.

Lemma 10. *Suppose* a > 2 *and* z0 *is chosen from* [−a, 2] *uniformly at random. Then* $\lim_{t\to\infty}|z_{t}|=+\infty$ almost surely.

Proof. Notice that by Lemma 6 we know
$$f_{a}\left({\frac{1-2a}{3}}\right)={\frac{4a^{3}+12a^{2}-15a+4}{27}}>{\frac{(4a^{3}+12a^{2}-15a+4)|_{a=2}}{27}}=2,\ \forall a>2,$$
where the inequality holds since $4a^{3}+12a^{2}-15a+4$ is increasing on $(2,\infty)$. Moreover, we have
$$f_{a}(z)-z=z(z+a)(z-2)>0,\ \forall z\in(2,\infty).$$
Hence we know that for the initialization at the critical point $z_{0}=\frac{1-2a}{3}$, we have $z_{1}>2$, and the whole sequence is increasing. Since all fixed points of $f_{a}(z)$ are no greater than $2$, $z_{t}$ will diverge to $+\infty$. For the other critical point $z_{0}=1$, its orbit converges to the periodic orbit of $z_{0}=-a$, which is an unstable period-1 point. Hence, we know from Theorem B.2 (i.e., Theorem 2.7 in Singer (1978)) that there does not exist an asymptotically stable periodic orbit, since otherwise the orbit of one critical point would have to converge to it. Hence, by Theorems B.3 and B.4 (i.e., Theorem B and Corollary in Nusse (1987)), we know $\lim_{t\to\infty}|z_{t}|=+\infty$ almost surely provided $z_{0}$ is uniformly chosen from $(-a,2)$, i.e., almost all points in [−a, 2] converge to the absorbing boundary point $+\infty$. □

## A.2 Proofs Of Results In Section 3

Proof of Theorem 3.1. Define
$$\alpha^{(t)}:=c+\gamma X^{\top}w^{(t)},\ \beta:=y+\frac{c^{2}}{2\gamma},\ \kappa:=\eta\gamma\left\|X\right\|^{2}.$$
To prove (i), we observe that
$$\nabla_{w}g(w;X)=(c+\gamma(X^{\top}w))X.$$
Let the weights at time $t$ be $w^{(t)}$. Thus, gradient descent takes the form
$$w^{(t+1)}=w^{(t)}-\eta(g(w^{(t)};X)-y)(c+\gamma X^{\top}w^{(t)})X=w^{(t)}-\eta e^{(t)}\alpha^{(t)}X.$$
Simple calculation gives
$$\alpha^{(t+1)}=(1-\eta\gamma\,\|X\|^{2}\,e^{(t)})\alpha^{(t)}=(1-\kappa e^{(t)})\alpha^{(t)}$$
and
$$e^{(t)}={\frac{(\alpha^{(t)})^{2}}{2\gamma}}-\beta.\tag{A.16}$$
Hence
$$e^{(t+1)}-e^{(t)}=\frac{1}{2\gamma}\left((\alpha^{(t+1)})^{2}-(\alpha^{(t)})^{2}\right)=\left((1-\kappa e^{(t)})^{2}-1\right)\frac{(\alpha^{(t)})^{2}}{2\gamma},$$
which together with (A.16) implies
$$\kappa e^{(t+1)}=\kappa e^{(t)}(\kappa e^{(t)}+\beta\kappa)\left(\kappa e^{(t)}-2\right)+\kappa e^{(t)}.$$
By the definition of $a$ and $z_{t}$ in (3.2) we know $a=\beta\kappa$ and $z_{t}=\kappa e^{(t)}$. Hence (i) holds.

To compute the largest eigenvalue of the Hessian matrix of the loss (i.e., the sharpness defined in the EoS literature) in (ii), we notice that the gradient of the loss function takes the form
$$\nabla\ell(w)=(g(w;X)-y)\nabla_{w}g(w;X).$$
Hence
$$\nabla^{2}\ell(w)=\nabla_{w}g(w;X)\nabla_{w}g(w;X)^{\top}+(g(w;X)-y)\nabla_{w}^{2}g(w;X)=(\alpha^{2}+\gamma e)X X^{\top},$$
where we overload the notation and define
$$\alpha=c+\gamma X^{\top}w,\ e=g(w;X)-y.$$
The sharpness is given by
$$\lambda_{\operatorname*{max}}(\nabla^{2}\ell(w^{(t)}))=((\alpha^{(t)})^{2}+\gamma e^{(t)})\left\|X\right\|^{2}=(3\gamma e^{(t)}+2\gamma y+c^{2})\left\|X\right\|^{2}={\frac{3z_{t}+2a}{\eta}}.$$
□

Proof of Theorem 3.2.
The gradient descent takes the form

$$w^{(t+1)}=w^{(t)}-\frac{\eta}{2n}\sum_{i=1}^{n}\nabla\ell_{i}(w^{(t)})=w^{(t)}-\frac{\eta}{n}\sum_{i=1}^{n}e^{(t)}(X_{i})\alpha^{(t)}(X_{i})X_{i}.$$

Similarly to (A.16), for each error term e^{(t)}(X_i) we have

$$e^{(t)}(X_{i})=\frac{(\alpha^{(t)}(X_{i}))^{2}}{2\gamma}-\beta(X_{i}),\tag{A.17}$$

and

$$\begin{aligned}\alpha^{(t+1)}(X_{i})&=\gamma X_{i}^{\top}w^{(t+1)}+c(X_{i})\\&=\gamma\left(X_{i}^{\top}w^{(t)}-\frac{\eta}{n}\sum_{j=1}^{n}e^{(t)}(X_{j})\alpha^{(t)}(X_{j})X_{i}^{\top}X_{j}\right)+c(X_{i})\\&=\alpha^{(t)}(X_{i})-\frac{\gamma\eta}{n}\sum_{j=1}^{n}e^{(t)}(X_{j})\alpha^{(t)}(X_{j})X_{i}^{\top}X_{j}\\&=\alpha^{(t)}(X_{i})-\frac{\gamma\eta}{n}\sum_{j=1}^{n}\left(\frac{\alpha^{(t)}(X_{j})^{3}}{2\gamma}-\beta(X_{j})\alpha^{(t)}(X_{j})\right)X_{i}^{\top}X_{j}.\end{aligned}$$

We overload the notation and set

$$\mathbf{X}=(X_{1},...,X_{n})^{\top},\ \#(\mathbf{X})=\left(\#(X_{1}),...,\#(X_{n})\right)^{\top},\ \forall\#\in\{\alpha^{(t)},e^{(t)},a,\beta\}.$$

We can obtain

$$\alpha^{(t+1)}(\mathbf{X})=\alpha^{(t)}(\mathbf{X})-\frac{\eta}{n}\mathbf{X}\mathbf{X}^{\top}\left(\frac{\alpha^{(t)}(\mathbf{X})^{3}}{2}-\gamma\beta(\mathbf{X})\odot\alpha^{(t)}(\mathbf{X})\right),\tag{A.18}$$

where ⊙ denotes the Hadamard product and the cube is taken elementwise.
As XX⊤ = diag(∥X_1∥^2, ..., ∥X_n∥^2), we can rewrite (A.18) as the following non-interacting version for each data point:

$$\alpha^{(t+1)}(X_{i})=\alpha^{(t)}(X_{i})-\frac{\eta\left\|X_{i}\right\|^{2}}{2n}\left(\alpha^{(t)}(X_{i})^{3}-2\gamma\beta(X_{i})\alpha^{(t)}(X_{i})\right)=\left(1-\frac{\gamma\eta\left\|X_{i}\right\|^{2}}{n}e^{(t)}(X_{i})\right)\alpha^{(t)}(X_{i}).$$

This together with (A.17) implies

$$\begin{aligned}e^{(t+1)}(X_{i})-e^{(t)}(X_{i})&=\frac{1}{2\gamma}\left((\alpha^{(t+1)}(X_{i}))^{2}-(\alpha^{(t)}(X_{i}))^{2}\right)\\&=\left(-\frac{2\gamma\eta\left\|X_{i}\right\|^{2}}{n}e^{(t)}(X_{i})+\frac{\gamma^{2}\eta^{2}\left\|X_{i}\right\|^{4}}{n^{2}}(e^{(t)}(X_{i}))^{2}\right)\left(e^{(t)}(X_{i})+\beta(X_{i})\right)\\&=\kappa_{n}(X_{i})e^{(t)}(X_{i})\left(\kappa_{n}(X_{i})e^{(t)}(X_{i})-2\right)\left(e^{(t)}(X_{i})+\beta(X_{i})\right).\end{aligned}$$

By the definition of z_i^{(t)} and a_i we know

$$z_{i}^{(t+1)}=z_{i}^{(t)}(z_{i}^{(t)}+a_{i})(z_{i}^{(t)}-2)+z_{i}^{(t)}=f_{a_{i}}(z_{i}^{(t)}).$$

The sharpness is given by

$$\begin{aligned}\nabla^{2}\ell(w^{(t)})&=\frac{1}{n}\sum_{i=1}^{n}\left(\nabla_{w}g(w^{(t)};X_{i})\nabla_{w}g(w^{(t)};X_{i})^{\top}+(g(w^{(t)};X_{i})-y_{i})\nabla_{w}^{2}g(w^{(t)};X_{i})\right)\\&=\frac{1}{n}\sum_{i=1}^{n}\left((\alpha^{(t)}(X_{i}))^{2}+\gamma e^{(t)}(X_{i})\right)X_{i}X_{i}^{\top}\\&=\frac{1}{n}\sum_{i=1}^{n}(3\gamma e^{(t)}(X_{i})+2\gamma y_{i}+c^{2}(X_{i}))X_{i}X_{i}^{\top}.\end{aligned}$$

Therefore we know

$$\nabla^{2}\ell(w^{(t)})X_{i}={\frac{1}{n}}(3\gamma e^{(t)}(X_{i})+2\gamma y_{i}+c^{2}(X_{i}))\left\|X_{i}\right\|^{2}X_{i}={\frac{3z_{i}^{(t)}+2a_{i}}{\eta}}X_{i},\ \text{for all}\ 1\leq i\leq n,$$

which means we find n eigenvalue-eigenvector pairs $\left(\frac{3z_{1}^{(t)}+2a_{1}}{\eta}, X_{1}\right), ..., \left(\frac{3z_{n}^{(t)}+2a_{n}}{\eta}, X_{n}\right)$. Note that ∇^2ℓ(w^{(t)}) is a sum of n rank-1 matrices, and we have found n orthogonal eigenvectors. Hence λ_max(∇^2ℓ(w^{(t)})) = max_{1≤i≤n} (3z_i^{(t)} + 2a_i)/η. This completes the proof. $\square$

Proof of Theorem 3.3.
Define

$$\mathbf{A}^{(t)}={\frac{2\eta}{\sqrt{m}dn}}\sum_{j=1}^{n}e_{j}^{(t)}X_{j}X_{j}^{\top}.$$

Note that we have

$$\nabla\ell_{j}^{(t)}(\mathbf{U}^{(t)})=\left(\frac{1}{\sqrt{m}d}\sum_{i=1}^{m}(X_{j}^{\top}u_{i}^{(t)})^{2}-y_{j}\right)\left(\frac{2}{\sqrt{m}d}X_{j}X_{j}^{\top}\mathbf{U}^{(t)}\right)=\frac{2}{\sqrt{m}d}e_{j}^{(t)}X_{j}X_{j}^{\top}\mathbf{U}^{(t)}.$$

This implies that the gradient descent update takes the form

$$\mathbf{U}^{(t+1)}=\mathbf{U}^{(t)}-\frac{\eta}{n}\sum_{j=1}^{n}\nabla\ell_{j}^{(t)}(\mathbf{U}^{(t)})=\mathbf{U}^{(t)}-\frac{2\eta}{\sqrt{m}dn}\sum_{j=1}^{n}e_{j}^{(t)}X_{j}X_{j}^{\top}\mathbf{U}^{(t)}=\left(I-\mathbf{A}^{(t)}\right)\mathbf{U}^{(t)}.$$

Also we have

$$e_{j}^{(t+1)}-e_{j}^{(t)}=\frac{1}{\sqrt{m}d}\sum_{i=1}^{m}\left((X_{j}^{\top}u_{i}^{(t+1)})^{2}-(X_{j}^{\top}u_{i}^{(t)})^{2}\right)=\frac{1}{\sqrt{m}d}X_{j}^{\top}\left(\mathbf{U}^{(t+1)}(\mathbf{U}^{(t+1)})^{\top}-\mathbf{U}^{(t)}(\mathbf{U}^{(t)})^{\top}\right)X_{j}$$

and

$$\mathbf{U}^{(t+1)}(\mathbf{U}^{(t+1)})^{\top}=\left(I-\mathbf{A}^{(t)}\right)\mathbf{U}^{(t)}(\mathbf{U}^{(t)})^{\top}\left(I-\mathbf{A}^{(t)}\right).$$

Hence we know

$$\begin{aligned}e_{j}^{(t+1)}-e_{j}^{(t)}&=\frac{1}{\sqrt{m}d}\left(X_{j}^{\top}\mathbf{A}^{(t)}\mathbf{U}^{(t)}(\mathbf{U}^{(t)})^{\top}\mathbf{A}^{(t)}X_{j}-2X_{j}^{\top}\mathbf{A}^{(t)}\mathbf{U}^{(t)}(\mathbf{U}^{(t)})^{\top}X_{j}\right)\\&=\frac{1}{\sqrt{m}d}\left(\frac{4\eta^{2}}{md^{2}n^{2}}(e_{j}^{(t)})^{2}\left\|X_{j}\right\|^{4}X_{j}^{\top}\mathbf{U}^{(t)}(\mathbf{U}^{(t)})^{\top}X_{j}-\frac{4\eta}{\sqrt{m}dn}e_{j}^{(t)}\left\|X_{j}\right\|^{2}X_{j}^{\top}\mathbf{U}^{(t)}(\mathbf{U}^{(t)})^{\top}X_{j}\right)\\&=\left(\frac{4\eta^{2}\left\|X_{j}\right\|^{4}}{md^{2}n^{2}}(e_{j}^{(t)})^{2}-\frac{4\eta\left\|X_{j}\right\|^{2}}{\sqrt{m}dn}e_{j}^{(t)}\right)\left(e_{j}^{(t)}+y_{j}\right),\end{aligned}$$

where the second equality uses XX⊤ = diag(∥X_1∥^2, ..., ∥X_n∥^2). By the definition of z_i^{(t)} and a_i we know

$$z_{i}^{(t+1)}=f_{a_{i}}(z_{i}^{(t)}).$$

Hence we know the training dynamics of this model can be captured by the cubic map as well. $\square$

## B Auxiliary Results

Theorem B.1 (Theorem 1 in Li & Yorke (1975)). Let I be a compact interval and let f : I → I be continuous.
Assume there is a point a ∈ I for which the points b = f(a), c = f^{(2)}(a) and d = f^{(3)}(a) satisfy

$$d\leq a<b<c\ \ (\text{or}\ d\geq a>b>c).$$

Then f is Li-Yorke chaotic.

Theorem B.2 (Theorem 2.7 in Singer (1978)). Let I be a compact interval and let f : I → I be a three times continuously differentiable function. If the Schwarzian derivative of f satisfies

$${\mathsf{S}}f(x)={\frac{f^{\prime\prime\prime}(x)}{f^{\prime}(x)}}-{\frac{3}{2}}\left({\frac{f^{\prime\prime}(x)}{f^{\prime}(x)}}\right)^{2}<0\ \ \text{for all}\ x\in I\ \text{with}\ f^{\prime}(x)\neq0,$$

then the stable set of every asymptotically stable orbit of f contains a critical point of f.

Theorem B.3 (Theorem B in Nusse (1987)). Let I be an interval and let f : I → I be a three times continuously differentiable function having at least one aperiodic point on I and satisfying:

- (i) f has a nonpositive Schwarzian derivative, i.e.,

$${\mathsf{S}}f(x)={\frac{f^{\prime\prime\prime}(x)}{f^{\prime}(x)}}-{\frac{3}{2}}\left({\frac{f^{\prime\prime}(x)}{f^{\prime}(x)}}\right)^{2}\leq0\ \ \text{for all}\ x\in I\ \text{with}\ f^{\prime}(x)\neq0.$$

- (ii) The set of points whose orbits do not converge to an (or the) absorbing boundary point(s) of I for f is a nonempty compact set.

- (iii) The orbit of each critical point for f converges to an asymptotically stable periodic orbit of f or to an (or the) absorbing boundary point(s) of I for f.

- (iv) The fixed points of f^{(2)} are isolated.

Then we have

- (1) The set of points whose orbits do not converge to an asymptotically stable periodic orbit of f or to an (or the) absorbing boundary point(s) of I for f has Lebesgue measure 0;

- (2) There exists a positive integer p such that almost every point x in I is asymptotically periodic with f^{(p)}(x) = x, provided that f(I) is bounded.

Theorem B.4 (Corollary in Nusse (1987)).
Assume that f : R → R is a polynomial function having at least one aperiodic point and satisfying the following conditions:

- (i) The orbit of each critical point of f converges to an asymptotically stable periodic orbit of f or to an (or the) absorbing boundary point(s) for f;

- (ii) Each critical point of f is real.

Then f satisfies the assumptions (i)-(iv) of Theorem B.3.

## C Experimental Investigations

## C.1 Additional Experiments For The Orthogonal Case

For this section, we follow the same experimental setup as described in Section 4.1; only the hidden-layer width is changed. Specifically, in Figures 6 and 7 we plot the training loss, the sharpness of the training loss, and the trajectory-averaged training loss in the various phases.

![29_image_0.png](29_image_0.png)

Figure 6: Hidden-layer width = 5, with orthogonal data points. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for the training loss curves. The second column shows the sharpness of the training loss. Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section 4.1 for details).

![30_image_0.png](30_image_0.png)

Figure 7: Hidden-layer width = 10, with orthogonal data points. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for the training loss curves. The second column shows the sharpness of the training loss. Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section 4.1 for details).

## C.2 Non-Orthogonal Data

We next investigate the case when the orthogonality condition does not hold. The setup is the same as described in Section 4.1 except that n = 5000 and each entry of the data matrix X ∈ R^{n×d} is now sampled from a standard normal distribution. We also generate 500 data points from the same distribution for testing.
Note that our theory in this work is only applicable to orthogonal data. Hence, for these experiments with non-orthogonal data, we first tune the step-size to be as large as possible, say η_max, such that training does not diverge, and then run the experiments with step-size (i+1)/5 · η_max for i = 0, ..., 4. Hence, the step-sizes for loss and sharpness curves 0, 1, 2, 3, 4 are chosen to be 10, 20, 30, 40, 50 for m = 5, 10 and 12, 24, 36, 48, 60 for m = 25. In Figures 8, 9 and 10 we plot the training loss and the testing loss (with and without ergodic trajectory averaging) in log scale. Notably, the different phases (including the periodic and catapult phases) characterized theoretically for the case of orthogonal data also appear to be present in the non-orthogonal case. We also make the following intriguing observations:

- As a general trend, training roughly in the generalized catapult phase and predicting without ergodic trajectory averaging appears to yield the best test error performance.

- In some cases (especially the one with high noise variance), when testing after training in the periodic phase, the test error goes down rapidly in the initial few iterations. Correspondingly, ergodic trajectory averaging after training in the periodic phase helps to obtain better test error decay compared to ergodic trajectory averaging after training in the catapult phase. However, as mentioned in the previous point, training roughly in the generalized catapult phase and predicting without ergodic trajectory averaging performs the best.

- As discussed in Lim et al. (2022), in various cases, artificially infusing controlled chaos helps to obtain better test accuracy. Given our empirical observations and the results in Lim et al. (2022), it is interesting to design controlled chaos infusion in gradient descent and perform ergodic trajectory averaging to obtain stable and improved test error performance.
Obtaining theoretical results corroborating the above-mentioned observations is challenging future work.

![31_image_0.png](31_image_0.png)

Figure 8: Hidden-layer width = 5, with non-orthogonal data points. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for loss curves. Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section C.2 for details).

![32_image_0.png](32_image_0.png)

Figure 9: Hidden-layer width = 10, with non-orthogonal data points. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for loss curves. Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section C.2 for details).

![32_image_1.png](32_image_1.png)

Figure 10: Hidden-layer width = 25, with non-orthogonal data points. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for loss curves. Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section C.2 for details).

## C.3 Two-Layer Neural Network With ReLU

While our main focus in this work is on quadratic activation functions, it is also instructive to examine the dynamics with other activation functions, in particular the ReLU activation. Hence, we follow the experimental setup from Section C.2, except that the activation function is now ReLU, and repeat our experiments. For this case, the step-sizes are manually chosen to be 60, 120, 180, 240, 300 for loss/sharpness curves 0, 1, 2, 3, 4, respectively.

![33_image_0.png](33_image_0.png)

Figure 11: Hidden-layer width = 5 with ReLU activation. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for loss curves. The last column shows the sharpness of the training loss.
Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section C.3 for details).

![33_image_1.png](33_image_1.png)

Figure 12: Hidden-layer width = 10 with ReLU activation. Rows from top to bottom represent different levels of noise - mean-zero normal distribution with variance 0, 0.25, 1. The vertical axes are in log scale for loss curves. The last column shows the sharpness of the training loss. Numbers 0, 1, 2, 3, 4 denote different stepsize choices (see Section C.3 for details).

From Figures 11 and 12 (in particular the sharpness plots), we observe various non-monotonic patterns, roughly including periodic and chaotic patterns. Obtaining a precise theoretical characterization of the training dynamics for this setting is extremely interesting future work.
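As a closing sanity check on the reduction in the proof of Theorem 3.1, gradient descent on the one-sample quadratic model can be simulated directly and the rescaled error κe^{(t)} compared against the cubic map. A minimal NumPy sketch (all constants below are arbitrary illustrative choices, not the paper's experimental settings):

```python
import numpy as np

def f(z, a):
    # the cubic map: f_a(z) = z(z + a)(z - 2) + z
    return z * (z + a) * (z - 2) + z

# one-sample quadratic model g(w; X) = c * X^T w + (gamma/2) * (X^T w)^2,
# trained with squared loss (1/2) * (g - y)^2
c, gamma, eta = 0.5, 1.0, 0.05
X = np.array([1.0, 2.0])
y = 1.0
w = np.array([0.3, -0.2])

kappa = eta * gamma * X @ X      # kappa = eta * gamma * ||X||^2
beta = y + c**2 / (2 * gamma)    # beta = y + c^2 / (2 gamma)
a = beta * kappa                 # a = beta * kappa, as in Theorem 3.1

zs = []
for _ in range(20):
    s = X @ w
    e = c * s + gamma / 2 * s**2 - y        # e^{(t)} = g(w; X) - y
    zs.append(kappa * e)                    # z_t = kappa * e^{(t)}
    w = w - eta * e * (c + gamma * s) * X   # gradient descent step

# the rescaled errors follow the cubic map: z_{t+1} = f_a(z_t)
for t in range(len(zs) - 1):
    assert abs(zs[t + 1] - f(zs[t], a)) < 1e-8
```

With this small step-size the orbit sits in the monotonic phase; increasing η (hence a and the initial z_0) moves the same recursion through the catapult, periodic, chaotic and divergent phases.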
Review 1: Summary: The authors analyze the non-linear dynamics of full batch gradient descent using a quadratic regression model as well as a quadratic activation, one-hidden layer neural network, with orthogonal data. They show that these systems can be described by a universal cubic discrete dynamical system. They show that the universal system has 5 phases: monotonic, catapult, periodic, chaotic, divergent. They then show that a particular quadratic regression model inspired by the phase retrieval problem is described by the universal map, as is a quadratic activation one-hidden layer network with orthogonal data. The different phases can then be linked to the combination of initialization and learning rate. Finally the authors show that in these models, averaging of trajectories in the chaotic phase can lead to predictors with lower loss. Strengths and Weaknesses: The main strength of the paper is the clarity of the analysis of the cubic map. All the concepts are generally well introduced, and I believe that the proofs are correct as well. The general idea of using dynamical systems tools to analyze ML systems is a good one that is gaining popularity in the field. The main weakness is the question of relevance to the audience of TMLR. One major issue is that the analysis is effectively one-dimensional. Even in the multi-datapoint setting, the model-input statistics are chosen such that the different datapoints have decoupled dynamics. This is very different from the case in typical machine learning settings. It is not clear if the 5 distinct phases are maintained in the more general setting. Generally, practitioners find convergent, edge of stability, and divergent phases; the intricacies of the periodic and chaotic phases seem more like artifacts of the effectively low-dimensional nature of the system.
The order of presentation makes the paper hard to parse, since it starts with the analysis of the universal form of the dynamical system and then afterwards introduces the relationship to the ML models. Personally I would prefer some introduction of a machine learning model first, so that as a reader I can understand what $z$ and $a$ correspond to in the real system from the start. I am curious what other reviewers think of this point. The claim in section 4.2 that SGD behaves like large stepsize GD is false; there is a large literature on how SGD with small batch size behaves qualitatively differently from full batch GD. The utility of the ergodic trajectory averaging is not clear to me. In most modern ML systems, the loss trajectories become (relatively) smooth by the end of training; I'm not sure what one would hope to gain by averaging within a single model. Edit: changed relevance to "yes" after reading other reviews, responses and requested changes. Requested Changes: It would be helpful to briefly describe the utility of the Lyapunov exponent and Schwarzian derivative when they are defined. TMLR readers may not be familiar with these concepts. The name "catapult phase" feels misleading compared to how the concept is used in the literature. If I understand correctly, a damped oscillation of z about 0 would be considered to be in the catapult phase in this definition. However, in the original work of Lewkowycz et al., it is used to describe an increase and then decrease in magnitude of the sharpness, with a single peak. Maybe "non-monotonic convergent" would be a better fit here? Figure 1 is too small - I'd recommend a 3 x 2 panel rather than a 5 x 1. This is one of the key figures in the paper so it should be easy to parse. Is there any way to analyze a coupled version of the system? Even 2 dimensions would be interesting to consider.
Broader Impact Concerns: None ================================================== Review 2: Summary: This work studies the different optimization phases due to different learning rates in a minimal model. In particular, this work shows that there are 5 different phases: the monotonic phase, catapult phase, periodic phase, chaotic phase, and divergent phase. Strengths and Weaknesses: The strength of the work is the solid analysis of finite-step-size gradient descent in a minimal solvable model. The identification of precise phase boundaries lends it great interpretability. The main weakness is that the theory is not related so much to actual deep learning practice -- at least such relevance is insufficiently discussed. It would be great if the authors discuss in much more depth the machine learning relevance / implication of the theory, especially the five phases. Also, there are a few more specific problems that I would like to see addressed. See the requested changes section. Requested Changes: 1. "Notable," -> "Notably" 2. Figure 1 is not quite visible. Please enlarge the figure and font sizes. 3. The citation format is not correct English. For example, on page 2, the authors say "Lyu (2022); Ahn (2022b)", where the semicolon is not standard English. The authors should write "Lyu (2022) and Ahn (2022b)" instead. 4. Page 10: "A main take-away from our analysis and experiments so far is that gradient descent with large step-size behaves like stochastic gradient descent, except the randomness here is with respect to the orbit it converges to." I think the authors need to revise this sentence, which can be very misleading and confusing. In deep learning, stochastic gradient descent is more commonly used to refer to a special algorithm, where the randomness comes from the sampling of data points. Let me refer to this as the "SGD."
The authors did not show that GD is like SGD, but only that deterministic GD effectively resembles a gradient descent method with some special type of noise in it. This algorithm is definitely not SGD. Therefore, this sentence needs to be rewritten. 5. I think it greatly improves the manuscript if the authors discuss the related results in: https://arxiv.org/abs/2107.11774, where the authors solved for the phases and Lyapunov exponents of finite-step-size SGD in similar loss functions. How does the result here compare with the reference? Adding this discussion would be very nice. Broader Impact Concerns: NA ================================================== Review 3: Summary: The paper investigates the dynamics of gradient descent (GD) in nonlinear problems, focusing on quadratic regression and quadratic activation neural networks. It proposes a simplified 1-dimensional model to analyze GD behavior with various step sizes. Results reveal five phases of GD dynamics: monotonic decrease, catapult, periodic oscillation, chaotic behavior, and divergence. The study integrates this analysis into specific regression models, including generalized phase retrieval and two-layer neural networks. Experimental evidence, both under orthogonal data assumptions and without, confirms the identified phases and the efficacy of ergodic trajectory averaging in stabilizing model performance. Overall, the paper contributes insights into GD behavior in nonlinear optimization problems and suggests directions for future research. Strengths and Weaknesses: Strengths: 1. The introduction and section on related work are well-written and seem sufficiently complete. 2. The theoretical results of Sections 2 and 3, and the underlying approach, are novel to the best of my knowledge and quite interesting, as they show that a wide array of behaviors can happen even for a simple model problem. Weaknesses: 1. The experiments of Section 4, including all its figures, are often hard to follow and interpret.
The captions of figures are sometimes very uninformative (e.g. Figure 3) and it is hard to keep track of what the plot labels 0, 1, 2, 3, 4 refer to, and which phases they belong to. In general, it is rather unclear what the main point is that the authors try to make here. 2. From the introduction it seemed that there would be a big focus on the generalization performance when training in non-monotonic phases, but this seems almost completely absent in Section 4 (apart from a rather uninformative Figure 3). 3. In general, I doubt that this paper in its current form will attract many readers from the broad ML community as the paper is not too accessible. It would benefit from some clear-cut recommendations and conclusions that can be read without going into all the notation and definitions introduced in Section 2. 4. In Section 4.2 it is unclear whether ergodic trajectory averaging is an existing technique or a proposal. In addition, its definition is somewhat hidden in the middle of the text even though it is one of the two main contributions according to the introduction. 5. Similarly, the connection with the EoS phenomenon is rather prominently announced in Section 2, yet rather hidden and succinct in Section 3. A clearer explanation of how sharpness and EoS are related to the experiments would be appreciated. Requested Changes: I recommend the authors to address the weaknesses above. The most important point being the presentation and results of the experiments in Section 4. Other than that, I invite the authors to proofread for typos such as 'catpults' (in multiple locations). Broader Impact Concerns: / ================================================== Metareview: Recommendation: Accept as is Comment: This paper analyzes the dynamics of gradient descent on a quadratic regression model, revealing a discrete dynamical system whose behavior exhibits five distinct phases and whose properties are investigated in detail. 
The reviewers are in agreement that the theoretical findings are technically sound. While there was some discussion about the significance of the findings and whether the narrow setting of analysis would be of interest to the community, ultimately the authors' response and the discussions led to the consensus that indeed the paper will be of significant interest. As such, I recommend acceptance. ==================================================
# Hybrid Active Learning With Uncertainty-Weighted Embeddings

Yinan He∗† yinan.he.688@gmail.com
Nanyang Technological University, Singapore

Lile Cai∗ caill@i2r.a-star.edu.sg
Institute for Infocomm Research (I2R), A*STAR, Singapore

Jingyi Liao liao_jingyi@i2r.a-star.edu.sg
Institute for Infocomm Research (I2R), A*STAR, Singapore

Chuan-Sheng Foo foo_chuan_sheng@i2r.a-star.edu.sg
Institute for Infocomm Research (I2R), A*STAR, Singapore

Reviewed on OpenReview: https://openreview.net/forum?id=jD761b5OaE

## Abstract

We introduce a hybrid active learning method that simultaneously considers uncertainty and diversity for sample selection. Our method consists of two key steps: computing a novel uncertainty-weighted embedding, then applying distance-based sampling for sample selection. Our proposed uncertainty-weighted embedding is computed by weighting a sample's feature representation by an uncertainty measure. We show how this embedding generalizes the gradient embedding of BADGE so it can be used with arbitrary loss functions and be computed more efficiently, especially for dense prediction tasks and network architectures with large numbers of parameters in the final layer. We extensively evaluate the proposed hybrid active learning method on image classification, semantic segmentation and object detection tasks, and demonstrate that it achieves state-of-the-art performance.

## 1 Introduction

There has been a recent resurgence of interest in using active learning (AL) to address the annotation burden for training deep neural networks. By selecting only an informative subset of samples to label for model training, AL methods can significantly reduce the cost of data annotation while maintaining satisfactory performance (Ren et al., 2021). Depending on the criterion used to select samples, AL methods can be broadly categorized into uncertainty-based, diversity-based and hybrid methods.
Uncertainty-based methods (Wang et al., 2016; Gal et al., 2017; Beluch et al., 2018; Yoo & Kweon, 2019; Liu et al., 2021) select samples that the model is most uncertain about, while diversity-based methods (Sener & Savarese, 2017; Sinha et al., 2019; Caramalau et al., 2021) select a subset of samples that are representative of the training data distribution. Employing uncertainty-based sampling alone can result in redundancy in selected samples, as similar samples are likely to have similarly high uncertainty values. On the other hand, diversity-based sampling may waste the annotation budget on distinctive yet easy samples. Hybrid approaches aim to overcome these drawbacks by considering both uncertainty and diversity for selecting samples.

Various methods have been proposed for hybrid AL. The first approach to combine uncertainty and diversity sampling is to apply them sequentially, e.g., apply uncertainty sampling to select the top-k most uncertain samples (where k is larger than the budget size) and then enforce diversity sampling to select a subset that satisfies the budget (Yang et al., 2017). A second approach is to sum up uncertainty and diversity terms in the acquisition function (Yang et al., 2015; Liu & Ferrari, 2017; Paul et al., 2017; Cai et al., 2021b). A third approach is to perform diversity-based sampling on uncertainty-aware features (Ash et al., 2019; Kim et al., 2021). In particular, the BADGE (Ash et al., 2019) method has been shown to achieve state-of-the-art performance on image classification tasks (Ji et al., 2023). BADGE applies diversity sampling on gradient embeddings - the first-order derivative of the loss function with respect to the parameters of the last layer. Gradient embeddings of uncertain samples have a large magnitude; this correlation between uncertainty and embedding magnitude allows BADGE to select a batch of both uncertain and diverse samples.

∗Joint first authors
†Work done during internship with I2R
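For cross-entropy loss with the predicted class used as a pseudo-label, the BADGE gradient embedding of a sample is the outer product of the softmax error vector and the penultimate-layer feature. A minimal sketch (the toy feature and probability values are illustrative):

```python
import numpy as np

def badge_embedding(h, p):
    """BADGE-style gradient embedding for cross-entropy: the derivative of
    the loss w.r.t. the last-layer weights is (p - onehot(argmax p)) outer h."""
    g = p.copy()
    g[np.argmax(p)] -= 1.0          # softmax probabilities minus pseudo-label
    return np.outer(g, h).ravel()   # length: num_classes * feat_dim

h = np.array([0.2, -1.0, 0.5])      # penultimate-layer feature
p_confident = np.array([0.90, 0.05, 0.05])
p_uncertain = np.array([0.40, 0.35, 0.25])

# the embedding norm factorizes as ||p - onehot|| * ||h||, so uncertain
# samples (probabilities far from one-hot) get larger-magnitude embeddings
n_conf = np.linalg.norm(badge_embedding(h, p_confident))
n_unc = np.linalg.norm(badge_embedding(h, p_uncertain))
assert n_unc > n_conf
```

The embedding length (num_classes × feat_dim) is what makes this expensive for dense prediction tasks and wide final layers, which motivates the alternative below.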
While BADGE performs well on image classification, three challenges arise when applying BADGE to other types of tasks. First, the computation of gradient embedding can be expensive for dense prediction tasks like segmentation and for network architectures that have a large number of parameters in the last layer. Second, the gradient embedding is only derived for cross-entropy loss in the BADGE paper (Ash et al., 2019); the form of gradient embedding with other loss functions can be complicated and hard to interpret. Finally, computing gradient embeddings on unlabelled data requires pseudo-labels as proxies for true labels. While predicted labels have been shown to be effective proxies for classification tasks, this is not the case for regression tasks - directly using predicted values as proxies for the ground truth results in embeddings with zero magnitudes (details in Section 3.1). In this work, we propose a simple yet effective embedding method to address these challenges. Our method is motivated by the observation that the gradient embedding of a neural network model can be decomposed into two terms. The first term can be interpreted as an uncertainty measure of the current model on the sample, while the second term is the feature representation extracted from the penultimate layer of the network. We propose to keep this general two-term structure while relaxing the constraint of computing the embedding as the derivative of the loss. We achieve this by instead using an embedding that weights the feature representation by an arbitrary uncertainty term. The proposed uncertainty-weighted embedding enables diversity sampling in an uncertainty-aware feature space like gradient embedding, without requiring a specific form of loss function. As a result, our proposed embedding can be used with any learning task while also being much faster to compute than BADGE embeddings. 
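The proposed embedding keeps this two-term structure. A minimal sketch of uncertainty-weighted embeddings combined with a distance-based selection step; here the least-confidence measure and k-center greedy sampler are illustrative stand-ins, not necessarily the paper's exact configuration:

```python
import numpy as np

def uncertainty_weighted_embedding(feats, probs):
    # weight each penultimate-layer feature by a scalar uncertainty;
    # least-confidence (1 - max prob) is one possible choice of measure
    u = 1.0 - probs.max(axis=1)
    return u[:, None] * feats

def k_center_greedy(embs, k):
    # distance-based sampling: each new pick is the pool point farthest
    # from the points selected so far
    selected = [int(np.argmax(np.linalg.norm(embs, axis=1)))]
    d = np.linalg.norm(embs - embs[selected[0]], axis=1)
    while len(selected) < k:
        i = int(np.argmax(d))
        selected.append(i)
        d = np.minimum(d, np.linalg.norm(embs - embs[i], axis=1))
    return selected

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))   # penultimate-layer features of the pool
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

embs = uncertainty_weighted_embedding(feats, probs)
batch = k_center_greedy(embs, k=8)   # indices of samples to send for labelling
assert len(set(batch)) == 8
```

Because uncertain samples have larger-magnitude embeddings, distance-based selection in this space favors points that are simultaneously uncertain and mutually dissimilar, and the embedding dimension stays at feat_dim regardless of the number of classes.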
We integrate our proposed uncertainty-weighted embedding within a distance-based sampling framework to perform hybrid AL, and demonstrate the effectiveness of the proposed AL method on a wide range of tasks and datasets. Our contributions can be summarized below:

- We propose a novel uncertainty-weighted embedding for hybrid AL. We show that the proposed uncertainty-weighted embedding can be viewed as a generalized version of gradient embedding that does not rely on the derivative of the loss function.
- We integrate the uncertainty-weighted embedding within a distance-based sampling framework to perform hybrid AL. We extensively evaluate this hybrid AL method on three tasks: image classification, semantic segmentation and object detection, and demonstrate that it obtains state-of-the-art results on all three tasks.

## 2 Related Work

## 2.1 Deep Active Learning

As a promising technique to alleviate the data annotation burden in deep learning, AL has recently attracted significant attention from the research community. Depending on the criterion used to select unlabelled data, AL methods can be categorized into uncertainty-based, diversity-based and hybrid methods.

Uncertainty-based methods Methods falling into this group select samples based on an uncertainty measure. Some commonly used uncertainty measures include entropy (Shannon, 1948), BvSB (Best versus Second Best) (Joshi et al., 2009) and LC (Least Confidence) (Wang et al., 2016). BALD (Gal et al., 2017) selects samples that maximize the mutual information (i.e., reduction in uncertainty) between predictions and model posterior. Instead of pre-defining the uncertainty measure, Yoo & Kweon (2019) propose to employ a loss prediction module to learn the uncertainty. Beluch et al. (2018) show that deep ensembles lead to more calibrated uncertainties and better AL performance.
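For a softmax output p, the pre-defined measures mentioned above can be sketched as follows (the toy probability vectors are illustrative):

```python
import numpy as np

def entropy(p):
    # Shannon entropy of the softmax output; higher = more uncertain
    return -np.sum(p * np.log(p + 1e-12))

def least_confidence(p):
    # LC: one minus the top probability; higher = more uncertain
    return 1.0 - np.max(p)

def bvsb(p):
    # Best-versus-Second-Best margin; a *smaller* margin = more uncertain
    top2 = np.sort(p)[-2:]
    return top2[1] - top2[0]

p_confident = np.array([0.90, 0.05, 0.05])
p_uncertain = np.array([0.40, 0.35, 0.25])

assert entropy(p_uncertain) > entropy(p_confident)
assert least_confidence(p_uncertain) > least_confidence(p_confident)
assert bvsb(p_uncertain) < bvsb(p_confident)
```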
ISAL (Liu et al., 2021) estimates a sample's influence on model parameters via the influence function and selects samples that can provide the most positive influence.

Diversity-based methods Such methods aim to select a subset of samples that are representative of the training data distribution. CoreSet (Sener & Savarese, 2017) employs the K-Center-Greedy algorithm to select a core set such that the largest distance between a data point and its nearest neighbor in the core set is minimized. VAAL (Sinha et al., 2019) learns a latent space in an adversarial manner and selects samples that are most different from labelled ones. Caramalau et al. (2021) build a graph convolutional network on the features extracted from the target model to learn a better feature space for active selection.

Hybrid methods The various methods of combining uncertainty and diversity sampling in AL can be grouped into three categories. The first is to apply the two sampling strategies sequentially. Yang et al. (2017) first select the top-k samples with the largest uncertainty scores and then perform diversity sampling on the k samples to remove redundancy. Random sampling can be applied before uncertainty sampling to improve the diversity of the selected samples (Beluch et al., 2018; Yoo & Kweon, 2019). The second approach is to combine uncertainty and diversity terms by summation in the acquisition function. USDM (Yang et al., 2015) formulates batch-mode AL as a quadratic programming problem, with the unary term optimized for uncertainty and the pairwise term accounting for diversity. Paul et al. (2017) build a graph to represent the unlabelled pool and employ a submodular objective function to select samples that minimize the joint entropy of the nodes of the graph. The third approach is to perform diversity-based sampling on uncertainty-aware features.
BADGE (Ash et al., 2019) represents each sample by its gradient embedding and applies the K-Means++ seeding algorithm to select samples of large magnitude. TA-VAAL (Kim et al., 2021) conditions the discriminator of VAAL on the learned loss to obtain an uncertainty-aware diversity score for sample selection. Our method falls into the third category of hybrid AL. Compared to BADGE, our method does not rely on a specific form of loss function and has significantly reduced computation costs.

## 2.2 Active Learning For Visual Tasks

**AL for image classification** AL for image classification has been extensively studied in the literature, and most methods can be classified into the aforementioned three categories, namely uncertainty-based methods (Gal et al., 2017; Beluch et al., 2018; Yoo & Kweon, 2019; Liu et al., 2021), diversity-based methods (Sener & Savarese, 2017; Sinha et al., 2019; Caramalau et al., 2021), and hybrid methods (Ash et al., 2019). Ji et al. (2023) conduct a comprehensive benchmarking of 7 state-of-the-art AL methods for image classification and show that BADGE wins the overall ranking. We compare with BADGE in our experiments.

**AL for semantic segmentation** Semantic segmentation aims to assign a class label to each pixel in an image. Based on the granularity of data selection for annotation, previous methods can be classified into image-based methods (Yang et al., 2017; Sinha et al., 2019) and region-based methods (Mackowiak et al., 2018; Casanova et al., 2020; Cai et al., 2021b). Yang et al. (2017) aggregate the pixel-level uncertainties and features to obtain an image-level score and feature vector and greedily select images to label. VAAL (Sinha et al., 2019) learns a model to score each image based on its difference from labelled data. Region-based methods divide an image into regularly shaped (Mackowiak et al., 2018; Casanova et al., 2020; Cai et al., 2021b) or irregularly shaped (Cai et al., 2021a) regions for selection.
We demonstrate our method with both image-level and region-level selection.

**AL for object detection** An object detection network consists of a backbone feature extractor, a class prediction branch, and a location prediction branch. Depending on whether a method relies on a specific architecture design, methods can be divided into architecture-specific (Roy et al., 2018; Yuan et al., 2021) and architecture-agnostic (Yoo & Kweon, 2019; Agarwal et al., 2020; Wu et al., 2022) approaches. Roy et al. (2018) measure the uncertainty of a prediction by the disagreement of convolution layers that predict for the same object in a one-stage object detector. MI-AOD (Yuan et al., 2021) constructs a detector with two class prediction heads and one multiple instance learning head, and measures uncertainty by the discrepancy of the two classifiers. The learning-loss-based approach LLAL (Yoo & Kweon, 2019) has been employed for object detection by learning the total loss of the detection model. CDAL (Agarwal et al., 2020) exploits the contextual information captured in the predicted probability distribution to enforce contextual diversity in the selected samples. EnmsDivproto (Wu et al., 2022) considers bounding-box-level similarity to generate image-level uncertainty scores and to reject redundant samples. CALD (Yu et al., 2022) measures the uncertainty of a sample by its prediction consistency between original and augmented views. Our method is architecture-agnostic and can be readily applied to both one-stage and two-stage object detectors.

## 2.3 Uncertainty Weighting For Active Learning

The idea of using uncertainty to weight feature representations has been explored for active domain adaptation (Prabhu et al., 2021) and semi-supervised active learning (Buchert et al., 2022). Prabhu et al. (2021) proposed to perform K-Means clustering in the feature space, with the center of each cluster being updated by uncertainty-weighted averaging. Buchert et al.
(2022) proposed a consistency-based embedding space for active learning where the feature of each image is weighted by the prediction consistency of the image under various augmentations. Unlike these works, which are only applicable to image classification tasks, our proposed hybrid AL framework is general and can be readily applied to a wide range of tasks.

## 3 Method

We consider pool-based batch-mode active learning, where a batch of samples is selected from a pool of unlabelled samples in each cycle. Our method, described in Algorithm 1, starts with a randomly selected batch and proceeds iteratively until the annotation budget is met. Each iteration involves two main components: embedding extraction and sample selection. We first provide a recap of BADGE with discussions on its strengths and weaknesses. We then describe each component of our method in detail, followed by illustrations of how it can be applied to three different visual tasks, namely image classification, semantic segmentation, and object detection.

**Algorithm 1** Active Learning with Uncertainty-Weighted Embeddings

Require: Budgets {b_1, b_2, ..., b_T}; Dataset D
1: Randomly select b_1 samples to form the initial labelled pool L and train the initial model h_1
2: U ← D \ L
3: **for** t = 2, ..., T **do**
4: &nbsp;&nbsp;&nbsp;&nbsp;B_t = ∅
5: &nbsp;&nbsp;&nbsp;&nbsp;For all samples, compute uncertainty-weighted embeddings based on Eq. (4) using model h_{t−1}
6: &nbsp;&nbsp;&nbsp;&nbsp;For each sample i ∈ U, compute its minimum distance to L: D(i) = min_{s ∈ L} ||uwe(i) − uwe(s)||
7: &nbsp;&nbsp;&nbsp;&nbsp;**while** |B_t| < b_t **do**
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;s = arg max_{i ∈ U} D(i)
9: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;B_t ← s
10: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;For each sample i ∈ U, update its minimum distance to L ∪ B_t as: D(i) = min(D(i), ||uwe(i) − uwe(s)||)
11: &nbsp;&nbsp;&nbsp;**end while**
12: &nbsp;&nbsp;&nbsp;Annotate the samples in B_t
13: &nbsp;&nbsp;&nbsp;L ← L ∪ B_t
14: &nbsp;&nbsp;&nbsp;U ← U \ B_t
15: &nbsp;&nbsp;&nbsp;Train model h_t on L
16: **end for**
17: **return** model h_T

## 3.1 Recap Of BADGE

BADGE selects a batch of diverse and uncertain samples by applying the K-Means++ seeding algorithm to gradient embeddings.
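A minimal NumPy sketch of this K-Means++ seeding (D² sampling) step might look as follows; `emb` is a hypothetical (n, d) array of per-sample embeddings (gradient embeddings in BADGE's case):

```python
import numpy as np

def kmeanspp_seeding(emb, k, seed=0):
    """D^2 sampling as in K-Means++ seeding: pick the first index uniformly,
    then pick each further index with probability proportional to its squared
    distance to the nearest already-selected point."""
    rng = np.random.default_rng(seed)
    n = emb.shape[0]
    selected = [int(rng.integers(n))]
    # Squared distance of every sample to its nearest selected sample.
    d2 = np.sum((emb - emb[selected[0]]) ** 2, axis=1)
    while len(selected) < k:
        s = int(rng.choice(n, p=d2 / d2.sum()))
        selected.append(s)
        d2 = np.minimum(d2, np.sum((emb - emb[s]) ** 2, axis=1))
    return selected

emb = np.random.default_rng(42).normal(size=(200, 8))
batch = kmeanspp_seeding(emb, 10)
```

Already-selected points have zero distance to themselves and are therefore never drawn again, so the sketch returns k distinct indices for distinct inputs.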
The gradient embedding of each sample is computed as the first-order derivative of the loss function with respect to the parameters in the final layer. As the ground truth label is unknown for unlabelled data, BADGE uses the model's current prediction as a proxy of the true label. Let x denote a sample and h denote an L-layer neural network parameterized by (θ_1, ..., θ_L). With softmax activation and cross-entropy loss, the gradient embedding is computed as:

$$ge(x)=\left((p_{i}-\mathbb{1}(\hat{y}=i))\cdot f(x;\theta_{1:L-1})\right)_{i=1}^{K},\tag{1}$$

where $p_i$ is the predicted probability for class i, $\hat{y}=\arg\max_{i\in[K]}p_i$ is the pseudo label for x, $f(x;\theta_{1:L-1})$ is the output of the network's penultimate layer, and K is the number of classes. The gradient embedding essentially scales the feature representation $f(x;\theta_{1:L-1})$ by the predicted probabilities. When the model is certain about a sample, $p_{\hat{y}}$ is relatively large and ||ge(x)|| is small. The correlation between uncertainty and gradient embedding magnitude allows BADGE to select a batch of both uncertain and diverse samples via the K-Means++ seeding algorithm. BADGE has been shown to perform well on the image classification task. However, when applying BADGE to other types of tasks, there are three challenges. First, the computation of the gradient embedding can be expensive for dense prediction tasks like segmentation, as the derivative needs to be evaluated at every pixel; it is also expensive for network architectures that have a large number of parameters in the last layer, e.g., RetinaNet (more details in Section 5.1). Second, it is common to employ a loss function that is customized for the data distribution of a task, e.g., focal loss for object detection and dice loss for medical image segmentation. The form of the gradient embedding under such loss functions is complicated and hard to interpret.
Third, for regression tasks the pseudo label cannot be obtained by simply discretizing the prediction as in classification. Let $\hat{y}$ denote the predicted value and y the ground truth. It can be derived that the gradient embedding of the L2 loss takes the form $2(\hat{y}-y)\cdot\partial\hat{y}/\partial\theta_{L}$. Directly using $\hat{y}$ as a proxy for y results in embeddings of zero magnitude for all samples.

## 3.2 Uncertainty-Weighted Embedding (UWE)

We note that the gradient embedding of a neural network can be decomposed into two terms using the chain rule:

$$ge(x)=\frac{\partial\ell}{\partial z}\frac{\partial z}{\partial\theta_{L}},\tag{2}$$

where $z=\theta_{L}\cdot f(x;\theta_{1:L-1})$ is the last-layer output before applying the activation function. The second term $\frac{\partial z}{\partial\theta_{L}}$ is essentially the feature representation extracted from the penultimate layer and is independent of the loss function:

$$\frac{\partial z}{\partial\theta_{L}}=\frac{\partial\,\theta_{L}\cdot f(x;\theta_{1:L-1})}{\partial\theta_{L}}=f(x;\theta_{1:L-1}).\tag{3}$$

The first term $\frac{\partial\ell}{\partial z}$ depends on the form of the loss function and activation function. For the two cases discussed above, $\frac{\partial\ell}{\partial z}$ can be derived as below:

1. Cross-entropy loss with softmax activation: $\frac{\partial\ell}{\partial z}=(p_{i}-\mathbb{1}(y=i))_{i=1}^{K}$;
2. L2 loss with identity activation: $\frac{\partial\ell}{\partial z}=2(\hat{y}-y)$,

where y and $\hat{y}$ are the ground truth and prediction, respectively. We observe that in both cases the magnitude of $\frac{\partial\ell}{\partial z}$ can be interpreted as an uncertainty measure of the current model on the sample.
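As a quick sanity check of the cross-entropy case, one can verify numerically that the derivative of the softmax cross-entropy loss with respect to the logits equals $p-\mathbb{1}(y=i)$; the toy logits below are our own illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(z, y):
    # Cross-entropy of softmax(z) against ground-truth class index y.
    return -np.log(softmax(z)[y])

rng = np.random.default_rng(0)
z, y, eps = rng.normal(size=5), 2, 1e-6  # toy logits and label

# Central finite differences of the loss with respect to each logit.
numeric = np.array([
    (ce_loss(z + eps * np.eye(5)[i], y) - ce_loss(z - eps * np.eye(5)[i], y)) / (2 * eps)
    for i in range(5)
])
analytic = softmax(z) - np.eye(5)[y]  # p - one_hot(y)
```

The finite-difference gradient matches the closed form $p-\mathbb{1}(y=i)$ up to numerical precision.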
In the first case, the magnitude of $(p_{i}-\mathbb{1}(y=i))_{i=1}^{K}$ is large if the model is uncertain about the prediction (as $p_{\hat{y}}$ is small); in the second case, if we approximate the true value y by the average of an ensemble of models, the magnitude of $(\hat{y}-y)$ is a measure of dispersion and is large when uncertainty is high. This motivates us to relax the constraint of computing the embedding as the derivative of the loss function, and to generate an embedding by directly weighting the feature by uncertainty. Specifically, we define a function u(x) that measures the uncertainty of the current model on sample x, and a feature extractor f(x) that takes the intermediate output of the network. With the uncertainty measure and feature representation, the proposed uncertainty-weighted embedding is defined as:

$$uwe(x)=u(x)\cdot f(x).\tag{4}$$

The benefits of the proposed embedding are two-fold. First, it shares the property of the gradient embedding that its magnitude is positively correlated with the sample's uncertainty. As illustrated in Fig. 1, weighting the feature by uncertainty imposes a positive correlation between the resulting UWE's magnitude and uncertainty. This property facilitates diversity sampling in an uncertainty-aware feature space by distance-based selection (detailed in Section 3.3). Second, it does not rely on a specific form of loss function. Both the uncertainty measure u(x) and the feature extractor f(x) can be any off-the-shelf method instead of being tied to a specific form or layer. This makes our method flexible and versatile for different types of tasks.

Figure 1: Weighting the feature by uncertainty imposes a positive correlation between the magnitude of UWE and uncertainty. (a) Correlation between the magnitude of UWE and uncertainty (Pearson correlation coefficient ρ = 0.98). (b) Correlation between the magnitude of the gradient embedding and uncertainty (ρ = 0.90).
Density plots are drawn on CIFAR-10 using the model trained in the first batch for uncertainty (measured by entropy) and feature extraction.

## 3.3 Distance-Based Sampling

With uncertainty-weighted embeddings, selecting samples of large magnitude is more likely to obtain samples of large uncertainty. However, directly selecting the top-k largest ones can result in redundancy, as similar samples will have similarly high values. We thus propose to employ distance-based sampling to perform the selection. Distance-based sampling selects samples based on their distance to previously selected ones. As illustrated in Figs. 2a and 2b, samples of larger magnitude also have a larger Euclidean distance between them and are thus more likely to be selected, while similar samples or samples of small magnitude have a small Euclidean distance between them and are unlikely to be repeatedly selected. Therefore, by applying distance-based sampling in the uncertainty-aware feature space, we are able to obtain a set of both uncertain and diverse samples. There are two popular distance-based sampling methods: the K-Center-Greedy algorithm (KCG) used in CoreSet (Sener & Savarese, 2017), and the K-Means++ seeding algorithm (KM++) (Arthur & Vassilvitskii, 2006) used in BADGE (Ash et al., 2019). Both methods decide the next sample to select based on its distance to its nearest neighbor among the previously selected ones. The only difference is that KCG deterministically selects the sample with the largest distance in each iteration, while KM++ selects samples probabilistically, with probability proportional to the squared distance to the selected samples. The probabilistic nature of KM++ makes it prone to selecting samples from high-density regions, which correspond to regions of small magnitude in the UWE space (as illustrated in Fig. 1, samples of low uncertainty have small magnitude and thus concentrate around the origin (high density), while samples of high uncertainty have large magnitude and spread out in space far from the origin (low density)). Figure 2c visualizes the samples selected by KCG and KM++. KCG is able to select samples of larger uncertainty and diversity. We thus adopt KCG as the selection method (see Sec. A.1 for more discussion on selection methods).

Figure 2: Distance-based sampling on uncertainty-weighted embeddings. (a) Illustration of the selection order for 4 samples by KCG. Samples of small magnitude (in orange) have smaller distances between them and will not be repeatedly selected (for budget ≤ 3). (b) Correlation between ||uwe(x)|| and d(x, x_nn) (ρ = 0.88). d(x, x_nn) represents the distance of sample x to its nearest neighbor x_nn in the dataset. (c) Visualization of the samples selected by KCG and KM++ on the density plot of the UWE space. Plots are drawn on CIFAR-10 using the model trained in the first batch for uncertainty (measured by entropy) and feature extraction, and the visualized samples in (c) are those selected for the second batch.

## 3.4 Application To Visual Tasks

**Image classification** Deep models for multi-class image classification employ the softmax activation function on logits and output a probability distribution vector. Sample uncertainty can be measured by entropy. Let x denote a sample, p(y = i|x) the predicted probability for class i, and K the number of classes. Entropy is defined as:

$$u_{ent}(x)=-\sum_{i\in[K]}p(y=i|x)\cdot\log p(y=i|x).\tag{5}$$

For the feature representation, we take the output of the penultimate layer following CoreSet (Sener & Savarese, 2017) and BADGE (Ash et al., 2019).

**Semantic segmentation** Semantic segmentation is essentially classification at the pixel level.
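Before moving to pixel-level aggregation, the image-level recipe of Eqs. (4) and (5) can be sketched as follows; `features` and `probs` are hypothetical stand-ins for a batch's penultimate-layer outputs and softmax probabilities:

```python
import numpy as np

def entropy(probs):
    # Eq. (5): entropy of each row of an (n, K) matrix of softmax outputs.
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def uwe(features, probs):
    # Eq. (4): scale each penultimate-layer feature by its sample's uncertainty.
    return entropy(probs)[:, None] * features

# Two toy samples with identical features but different confidence.
features = np.ones((2, 4))
probs = np.array([[0.97, 0.01, 0.01, 0.01],   # confident prediction
                  [0.25, 0.25, 0.25, 0.25]])  # maximally uncertain prediction
emb = uwe(features, probs)
```

The uncertain sample receives an embedding of larger magnitude, which is the property distance-based sampling exploits.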
Deep segmentation models output a probability distribution over the label space for each pixel of the input image. To apply our method to semantic segmentation, we first obtain the pixel-level uncertainty and feature vector in a similar way as for image classification, and then use mean aggregation to obtain the image-level uncertainty score and feature vector:

$$u(x;\theta_{1:L})=\mathrm{mean}(u(p;\theta_{1:L})),\;p\in x,\tag{6}$$

$$f(x;\theta_{1:L-1})=\mathrm{mean}(f(p;\theta_{1:L-1})),\;p\in x,\tag{7}$$

where x denotes a sample (an image or a patch) and p a pixel within x. The aggregated uncertainty and feature are then multiplied to form the embedding using Eq. (4).

**Object detection** Given an input image, modern object detectors output a set of detections, each associated with a confidence score, a class label, and four bounding box coordinates (Ren et al., 2015; Lin et al., 2017b). Previous work (Brust et al., 2018) suggested that the BvSB (Best versus Second Best) metric serves as a better uncertainty measure for the object detection task. As our method is not tied to a specific type of loss function or uncertainty measure, this flexibility allows us to select a suitable uncertainty measure for a given task based on prior knowledge. We follow Brust et al. (2018) and measure the uncertainty of each box by BvSB, which is computed as:

$$u_{bvsb}(x)=\frac{p(y=c^{sb}|x)}{p(y=c^{b}|x)},\tag{8}$$

where $c^{sb}$ and $c^{b}$ are the class labels of the second most confident and the most confident prediction, respectively. The box-level UWE can be obtained in a similar way as for image classification. To obtain image-level UWEs, we adopt class-wise aggregation, which is more effective than global aggregation due to the instance-level processing nature of object detection (Agarwal et al., 2020; Wu et al., 2022). Specifically, each class is represented by the UWE of the box with the largest uncertainty within that class:

$$\mu_{i}(x)=u(\hat{d}_{i})\cdot f(\hat{d}_{i}),\;i=1,\ldots,K,\tag{9}$$

where $\hat{d}_{i}=\arg\max_{d\in D_{i}(x)}u(d)$, $D_{i}(x)$ is the set of detections with predicted class label i for image x, and K is the number of classes. The embeddings of all classes are then concatenated to form the final representation for the image.

## 4 Experiments

In this section, we report benchmarking results on three different tasks: image classification, semantic segmentation, and object detection. All benchmarked methods start from the same initial randomly selected batch (batch 1). We report the mean and standard deviation of 3 independent runs with different random seeds. To more clearly compare the overall performance under different settings, we summarize the benchmarking results using the pair-wise penalty matrix (PPM) proposed by Ji et al. (2023). More specifically, two AL methods i and j are compared after each batch. If method i outperforms method j, a penalty score of 1/n (where n is the number of batches) is added to the cell of the PPM at (i, j); likewise, if method j outperforms method i, the penalty score is added to the cell at (j, i). The higher the value at (i, j), the more strongly method i dominates method j. The column-wise average of the PPM (denoted Φ) indicates the overall performance of each method; the method with the lowest value performs best. The PPMs over various settings (e.g., different datasets or network architectures) can be combined into a single one by element-wise averaging.

## 4.1 Image Classification

**Datasets** We evaluate our method on the CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009) datasets, each consisting of 50,000 training images and 10,000 testing images of resolution 32 × 32 × 3. We use the training set as the initial unlabelled pool and report the model performance on the testing set.

**Implementation details** We conduct our experiments on ResNet-18 (He et al., 2016).
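As an aside, the PPM summary introduced at the start of this section can be sketched as follows; the method names and per-batch accuracies below are hypothetical:

```python
import numpy as np

def pairwise_penalty_matrix(scores):
    """Pair-wise penalty matrix (Ji et al., 2023). `scores[m]` lists the
    per-batch accuracies of method m. PPM[i, j] accumulates 1/n whenever
    method i outperforms method j on one of the n batches; the column-wise
    average Phi is lowest for the best-performing method."""
    methods = list(scores)
    n = len(scores[methods[0]])
    ppm = np.zeros((len(methods), len(methods)))
    for a, i in enumerate(methods):
        for b, j in enumerate(methods):
            if a != b:
                ppm[a, b] = sum(x > y for x, y in zip(scores[i], scores[j])) / n
    return ppm, ppm.mean(axis=0)

# Hypothetical per-batch accuracies for three methods.
ppm, phi = pairwise_penalty_matrix({"A": [50, 60, 70],
                                    "B": [52, 61, 69],
                                    "C": [48, 58, 66]})
```

Here method "B" wins the most pairwise comparisons and so obtains the lowest column-wise average Φ.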
The hyperparameters for training on both CIFAR-10 and CIFAR-100 are set as follows: batch size = 256, total epochs = 100, initial learning rate = 2e-2, decayed by 0.5 after epochs 60 and 80. The fully supervised baseline is 94.13% accuracy for CIFAR-10 and 75.36% accuracy for CIFAR-100.

**Benchmarking results** We benchmark against Uncertainty (measured by entropy), CoreSet (Sener & Savarese, 2017), BADGE (Ash et al., 2019), BALD (Gal et al., 2017), LLAL (Yoo & Kweon, 2019), VAAL (Sinha et al., 2019), ISAL (Liu et al., 2021), and TA-VAAL (Kim et al., 2021). The results are presented in Fig. 3. We observe that UWE and BADGE perform comparably and outperform the rest. However, the computation of the uncertainty-weighted embedding in our method is much more efficient than BADGE's gradient embedding (see Section 5.1 for a computational complexity analysis). Single-criterion methods, e.g., Entropy, obtain strong performance on CIFAR-10 and CIFAR-100. This is probably due to the fact that CIFAR-10 and CIFAR-100 are hand-crafted datasets and contain diverse samples by construction. However, single-criterion methods cannot perform well when the dataset characteristics change, as we shall see in the following experiments on semantic segmentation.

## 4.2 Semantic Segmentation

**Datasets** We evaluate our method on the Cityscapes dataset (Cordts et al., 2016). The dataset contains 19 object classes, and the image resolution is 1024 × 2048. We use the training set (2,975 images) as the initial unlabelled pool and report the performance on the validation set (500 images). We experiment with both image-level and region-level selection. For region-level AL, each image is divided into patches of size 512 × 512, and each patch is considered as a sample for selection and annotation.

**Implementation details** We conduct our experiments based on the open-source MMSegmentation (Contributors, 2020) framework. The model architecture is DeepLabV3+ with a ResNet-18 backbone.
The training hyperparameters are set as follows: batch size = 8, iterations = 80,000, initial learning rate = 1e-2, decayed using the "poly" policy: $lr = (InitialLR - MinLR) \times (1 - \frac{CurIter}{MaxIter})^{power} + MinLR$, where MinLR = 0.0001 and power = 0.9. The fully supervised baseline is 76.30% mIoU.

Figure 3: Active learning results of image classification. (a) CIFAR-10; (b) CIFAR-100; (c) PPM over (a) and (b).

**Benchmarking results** We benchmark against Uncertainty (measured by entropy), CoreSet (Sener & Savarese, 2017), BADGE (Ash et al., 2019), and CDAL (Agarwal et al., 2020). The results are presented in Fig. 4. We observe that Uncertainty outperforms CoreSet and CDAL (both diversity-based sampling methods) in image-level AL, but the order is reversed in region-level AL. Region-level AL needs to handle a candidate pool with more redundant samples (as neighboring regions may contain similar content) than image-level AL. A single selection criterion, e.g., uncertainty alone, is unable to accommodate datasets with different degrees of sample redundancy. Our method outperforms the competing methods in both cases, demonstrating the advantage of the proposed hybrid sampling method in handling datasets with different degrees of sample redundancy.

Figure 4: Active learning results of semantic segmentation. (a) Image-level selection; (b) Region-level selection; (c) PPM over (a) and (b).

## 4.3 Object Detection

**Datasets** We evaluate our method on the PASCAL VOC0712 dataset (Everingham et al., 2010). There are 20 object classes, and the median image shape is 500 × 375. Following the setting of previous works (Yoo & Kweon, 2019; Wu et al., 2022), we combine trainval'07 (5,011 images) and trainval'12 (11,540 images) to form a super-set trainval'0712 (16,551 images) and use it as the initial unlabelled pool. The performance is reported on test'07 (4,952 images).
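For reference, the "poly" learning-rate decay used in the segmentation experiments above can be written as a small function; the argument names are ours, a sketch rather than the MMSegmentation implementation:

```python
def poly_lr(cur_iter, max_iter, initial_lr, min_lr=0.0001, power=0.9):
    """'Poly' learning-rate decay: interpolates from initial_lr down to
    min_lr following (1 - cur_iter / max_iter) ** power."""
    return (initial_lr - min_lr) * (1 - cur_iter / max_iter) ** power + min_lr
```

With the paper's settings (initial_lr = 1e-2, 80,000 iterations), the schedule starts at 1e-2, decreases monotonically, and ends at MinLR = 1e-4.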
**Implementation details** Our experiments are based on the open-source MMDetection (Chen et al., 2019) framework. We experiment with the one-stage detector RetinaNet (Lin et al., 2017b) as well as the two-stage detector Faster R-CNN (Ren et al., 2015). Both detectors use feature pyramid networks (Lin et al., 2017a) on top of ResNet-50 (He et al., 2016) as the feature extractor. The hyperparameters are set as follows: batch size = 2, total epochs = 26, initial learning rate = 1e-3 for RetinaNet and 5e-3 for Faster R-CNN, decayed by 0.1 after 20 epochs. When training with the full trainval'0712, we obtained a mAP of 79.87% for RetinaNet and 78.73% for Faster R-CNN.

**Benchmarking results** We benchmark against Uncertainty (measured by BvSB), CoreSet (Sener & Savarese, 2017), BADGE (Ash et al., 2019), CDAL (Agarwal et al., 2020), LLAL (Yoo & Kweon, 2019), EnmsDivproto (Wu et al., 2022), and CALD (Yu et al., 2022). The results are presented in Fig. 5. We observe that not every method performs well on both Faster R-CNN and RetinaNet; e.g., CALD and BADGE perform well with the former, but their performance suffers with the latter. UWE consistently performs best on both detectors. EnmsDivproto also achieves strong performance on both detectors, but the method is more complicated, involving additional steps for entropy-based non-maximum suppression and pseudo-label-based minority class sampling on top of diversity-based sampling on class prototypes, while our method is simple and efficient (up to 4× faster than EnmsDivproto in selection), as it only involves diversity-based sampling on class-wise aggregated features.

Figure 5: Active learning results of object detection. (a) Faster R-CNN; (b) RetinaNet; (c) PPM over (a) and (b).

## 4.4 Ablation Studies

We report ablation studies on the design components of our method in Table 1.
Here, U represents uncertainty sampling that selects the top-k most uncertain samples, F represents diversity sampling using KCG without uncertainty weighting, i.e., setting u(x) in Eq. (4) to 1, and U+F refers to the proposed method. The consistent performance gain of U+F over F demonstrates the advantage of performing diversity sampling on the proposed uncertainty-weighted features.

Table 1: Ablation studies on the design choices of our method. U represents uncertainty sampling that selects the top-k most uncertain samples, F represents diversity sampling using KCG without uncertainty weighting, and U+F is the proposed method.

| U | F | Batch 2 | Batch 3 | Batch 4 |
|---|---|---------|---------|---------|
| *Image classification on CIFAR-10 with ResNet-18* | | | | |
| ✓ | | 46.96 (0.29) | 60.08 (0.08) | 76.02 (0.20) |
| | ✓ | 48.61 (0.71) | 60.46 (0.14) | 74.35 (0.41) |
| ✓ | ✓ | 49.45 (0.41) | 61.84 (0.37) | 76.37 (0.99) |
| *Semantic segmentation on Cityscapes with DeepLabV3+* | | | | |
| ✓ | | 61.63 (0.85) | 66.88 (0.07) | 71.77 (0.59) |
| | ✓ | 61.82 (1.04) | 68.19 (1.25) | 71.75 (0.57) |
| ✓ | ✓ | 62.78 (0.98) | 68.42 (0.13) | 72.87 (0.71) |
| *Object detection on VOC0712 with Faster R-CNN* | | | | |
| ✓ | | 67.53 (0.40) | 71.61 (0.23) | 75.51 (0.23) |
| | ✓ | 67.20 (0.29) | 71.75 (0.38) | 75.83 (0.51) |
| ✓ | ✓ | 68.09 (0.10) | 72.51 (0.37) | 76.38 (0.32) |

## 5 Further Analysis

## 5.1 Computational Complexity Analysis

We analyze the computational complexity of our method to demonstrate its efficiency over BADGE.
Let $f_{dim}$ denote the dimension of the feature representation and $n_c$ the number of classes. For image classification and segmentation, the dimension of UWE is $f_{dim}$, while the dimension of the gradient embedding used in BADGE is $n_c \cdot f_{dim}$. The time complexity of KCG and KM++ is $O(n_b \cdot n_u \cdot n_{dim})$, where $n_b$ is the number of selected samples, $n_u$ is the total number of unlabelled samples, and $n_{dim}$ is the input sample dimension. As such, our method is more efficient than BADGE by a factor of $n_c$. For object detection, the dimension of UWE is $n_c \cdot f_{dim}$ due to class-wise aggregation. The dimension of the gradient embedding varies with the detector architecture and is typically much higher, as it depends not only on the number of classes but also on other network parameters. For example, the dimension of the gradient embedding for RetinaNet is $k^2 \cdot n_a \cdot n_c \cdot f_{dim}$, where $n_a$ is the number of anchors and k is the filter size of the prediction head. Table 2 compares the embedding dimension, embedding extraction time, and sample selection time of BADGE vs. UWE. Our method reduces the storage and time cost significantly compared to BADGE, especially when the number of classes is large (i.e., on CIFAR-100) and for dense prediction tasks (i.e., on Cityscapes).
Table 2: The dimension of the embedding and the running time (in seconds) for embedding extraction and sample selection. The time cost is measured for selecting the second batch.

| | Dimension | Extraction | Selection |
|---|-----------|------------|-----------|
| *Image classification on CIFAR-10 with ResNet-18* | | | |
| BADGE | 5120 | 18.84 (1.30) | 129.95 (1.23) |
| UWE | 512 | 4.78 (0.05) | 22.55 (0.28) |
| *Image classification on CIFAR-100 with ResNet-18* | | | |
| BADGE | 51200 | 126.87 (12.20) | 2223.94 (0.18) |
| UWE | 512 | 4.98 (0.03) | 42.16 (0.08) |
| *Semantic segmentation on Cityscapes with DeepLabV3+* | | | |
| BADGE | 2432 | 6593.17 (618.16) | 34.18 (5.27) |
| UWE | 128 | 997.26 (12.13) | 23.22 (0.34) |
| *Object detection on VOC0712 with RetinaNet* | | | |
| BADGE | 414720 | 1651.34 (21.22) | 200.66 (30.87) |
| UWE | 5120 | 1181.62 (166.13) | 82.33 (2.62) |

## 5.2 Adjusting Trade-Off Between Uncertainty And Diversity Sampling

In Section 4.2, we observed that uncertainty sampling outperforms diversity sampling in image-level AL, but that the order is reversed in region-level AL. Intuitively, when the region size is large, candidate samples are already diverse, and uncertainty sampling, which focuses on selecting difficult samples, outperforms. On the other hand, when the region size is small, there are more redundant samples, and thus a selection strategy that encourages diversity outperforms. This suggests that the trade-off between uncertainty and diversity sampling needs to be adjusted for different region sizes. This can be achieved by applying a power τ to the uncertainty term in Eq. (4), i.e., computing the embedding as $uwe(x)=u(x)^{\tau}\cdot f(x)$. Increasing τ up-weights the contribution of uncertainty and selects more uncertain samples, while decreasing τ down-weights the role of uncertainty and selects more diverse samples. In Section 4.2, we showed that τ = 1 works well for region sizes 1024 × 2048 and 512 × 512. If the region size is further reduced, a smaller τ should be more suitable. To verify this, we experiment with smaller region sizes using a reduced τ. Results are reported in Table 3. When the region size is 256 × 256, we observe a marginal advantage of UWE (τ = 0.1) over UWE (τ = 1). The advantage is more significant when the region size is further reduced to 128 × 128. We observe that CoreSet significantly outperforms Entropy at this region size, confirming the advantage of diversity sampling for datasets with more redundant samples. UWE with τ = 0.1 further improves upon CoreSet, demonstrating that a hybrid approach that considers uncertainty in diversity sampling is still beneficial.

Table 3: Effect of applying a power τ to the uncertainty term in computing the embedding.
| | Batch 2 | Batch 3 | Batch 4 |
|---|---------|---------|---------|
| *Region size = 256 × 256* | | | |
| Random | 60.35 (0.17) | 64.75 (0.38) | 68.70 (0.59) |
| Entropy | 67.18 (0.52) | 71.41 (0.13) | 72.67 (0.68) |
| CoreSet | 66.38 (0.08) | 71.19 (0.36) | 72.86 (0.31) |
| BADGE | 65.10 (0.21) | 69.87 (0.14) | 70.03 (1.15) |
| UWE (τ = 1) | 67.21 (0.46) | 71.21 (0.80) | 73.72 (0.24) |
| UWE (τ = 0.1) | 67.23 (0.21) | 71.65 (0.67) | 73.71 (0.18) |
| *Region size = 128 × 128* | | | |
| Random | 62.56 (0.09) | 67.38 (0.29) | 69.05 (1.70) |
| Entropy | 70.09 (0.33) | 69.99 (0.43) | 71.38 (0.09) |
| CoreSet | 70.41 (0.34) | 72.49 (0.54) | 74.28 (0.47) |
| BADGE | 68.50 (0.00) | 71.91 (0.00) | 71.78 (0.00) |
| UWE (τ = 1) | 69.80 (0.26) | 72.10 (0.59) | 73.40 (0.13) |
| UWE (τ = 0.1) | 70.75 (0.49) | 73.15 (0.24) | 74.40 (0.37) |

## 5.3 Effect Of Uncertainty Measures

The u(x) term in Eq. (4) can be an arbitrary measure of uncertainty. We experiment with three commonly used measures: Entropy, BvSB, and LC. Figure 6 shows the results. While the performance of the different uncertainty measures varies across datasets and tasks, our method consistently improves upon the performance of uncertainty sampling. This demonstrates the effectiveness of UWE as a general method for hybrid AL.

## 5.4 Correlation Between Gradient Embedding And UWE

We inspect the correlation between the magnitude of the gradient embedding and UWE to shed more insight into the proposed two-term decomposition of the gradient embedding. Mathematically, the magnitude of the gradient embedding under cross-entropy loss and softmax activation can be computed as:

$$||ge(x,\hat{y})||=\Big(\sum_{i=1}^{K}p_{i}^{2}+1-2p_{\hat{y}}\Big)^{1/2}\cdot||f(x)||,\tag{10}$$

where $p_{\hat{y}}$ is the probability of the pseudo label. The magnitude of UWE can be computed as:

$$||uwe(x)||=u(x)\cdot||f(x)||.\tag{11}$$

Figure 6: Performance of UWE with different uncertainty measures. (a) Image classification on CIFAR-10 with ResNet-18; (b) Object detection on VOC0712 with Faster R-CNN.

It is clear that ||ge(x, ŷ)|| and ||uwe(x)|| share the same second term ||f(x)|| (the feature representation), while the correlation between the first terms depends on the choice of uncertainty measure and the joint distribution p(x, y). We compute the Pearson correlation coefficient between ||ge(x, ŷ)|| and ||uwe(x)||. The Pearson correlation coefficient measures the linear correlation between two sets of data, with 1/−1 indicating perfect positive/negative correlation and 0 indicating no linear correlation. Results are reported in Table 4. We observe that the magnitudes of BADGE's gradient embedding and UWE are highly correlated, with correlation coefficients above 0.9. We further inspect how well BADGE and UWE approximate the real gradient, i.e., the gradient embedding computed with the ground truth label. The results in Table 5 show that UWE obtains similar, or sometimes higher, correlation coefficients than BADGE, suggesting that UWE performs comparably to BADGE with regard to approximating the magnitude of the real gradient.

Table 4: Correlation between the magnitude of BADGE's gradient embedding and UWE. We report the mean and standard deviation of 3 runs.
|           | Batch 1     | Batch 2     | Batch 3     | Batch 4     | Batch 5     | Batch 6     |
|-----------|-------------|-------------|-------------|-------------|-------------|-------------|
| CIFAR-10  | 0.94 (0.00) | 0.94 (0.00) | 0.95 (0.00) | 0.95 (0.00) | 0.95 (0.00) | 0.95 (0.00) |
| CIFAR-100 | 0.93 (0.00) | 0.93 (0.00) | 0.94 (0.01) | 0.93 (0.00) | 0.93 (0.00) | 0.94 (0.00) |

|                                                  | Batch 1     | Batch 2     | Batch 3     | Batch 4     | Batch 5     | Batch 6     |
|--------------------------------------------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Image classification on CIFAR-10 with ResNet-18  |             |             |             |             |             |             |
| BADGE                                            | 0.11 (0.00) | 0.19 (0.02) | 0.28 (0.00) | 0.37 (0.00) | 0.44 (0.01) | 0.51 (0.00) |
| UWE                                              | 0.12 (0.00) | 0.22 (0.02) | 0.32 (0.01) | 0.42 (0.00) | 0.48 (0.01) | 0.55 (0.00) |
| Image classification on CIFAR-100 with ResNet-18 |             |             |             |             |             |             |
| BADGE                                            | 0.46 (0.00) | 0.47 (0.03) | 0.55 (0.00) | 0.59 (0.04) | 0.61 (0.01) | 0.67 (0.02) |
| UWE                                              | 0.47 (0.00) | 0.48 (0.03) | 0.56 (0.00) | 0.59 (0.04) | 0.61 (0.01) | 0.65 (0.03) |

Table 5: Correlation between the magnitude of the real gradient (using the ground-truth label) and the magnitude of BADGE's gradient embedding (using the pseudo label) and UWE, respectively. We report the mean and standard deviation of 3 runs.

## 6 Conclusions

In this work, we proposed a novel uncertainty-weighted embedding for hybrid AL. Our proposed uncertainty-weighted embedding generalizes the gradient embedding form of BADGE by allowing arbitrary uncertainty measures to be used, thereby greatly reducing the computational complexity of computing the embedding, and allowing it to be used with arbitrary loss functions. Our hybrid AL method then utilizes this embedding as a basis for distance-based sampling to select examples for labelling.
We evaluate our method under a wide range of tasks and settings, e.g., image classification, image-level and region-level AL for semantic segmentation, one-stage and two-stage detectors for object detection. Experimental results show that our method achieves state-of-the-art performance, and demonstrate the generality of our approach. ## Acknowledgements The authors would like to thank the anonymous TMLR reviewers for their constructive feedback during the review process. This work is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funds (Grant No. A20H6b0151) and Career Development Fund (Grant No. C210812052). The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg). ## References Sharat Agarwal, Himanshu Arora, Saket Anand, and Chetan Arora. Contextual diversity for active learning. In *European Conference on Computer Vision*, pp. 137–153. Springer, 2020. David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. Technical report, Stanford, 2006. Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. *arXiv preprint arXiv:1906.03671*, 2019. William H Beluch, Tim Genewein, Andreas Nürnberger, and Jan M Köhler. The power of ensembles for active learning in image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9368–9377, 2018. Clemens-Alexander Brust, Christoph Käding, and Joachim Denzler. Active learning for deep object detection. arXiv preprint arXiv:1809.09875, 2018. Felix Buchert, Nassir Navab, and Seong Tae Kim. Exploiting diversity of unlabeled data for label-efficient semi-supervised active learning. In *2022 26th International Conference on Pattern Recognition (ICPR)*, pp. 2063–2069. IEEE, 2022. 
Lile Cai, Xun Xu, Jun Hao Liew, and Chuan Sheng Foo. Revisiting superpixels for active learning in semantic segmentation with realistic annotation costs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10988–10997, 2021a. Lile Cai, Xun Xu, Lining Zhang, and Chuan-Sheng Foo. Exploring spatial diversity for region-based active learning. *IEEE Transactions on Image Processing*, 30:8702–8712, 2021b. Razvan Caramalau, Binod Bhattarai, and Tae-Kyun Kim. Sequential graph convolutional network for active learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 9583–9592, 2021. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF* international conference on computer vision, pp. 9650–9660, 2021. Arantxa Casanova, Pedro O Pinheiro, Negar Rostamzadeh, and Christopher J Pal. Reinforced active learning for image segmentation. *arXiv preprint arXiv:2002.06583*, 2020. Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. MMDetection: Open mmlab detection toolbox and benchmark. *arXiv preprint* arXiv:1906.07155, 2019. MMSegmentation Contributors. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020. Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3213–3223, 2016. 
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020. Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. *International journal of computer vision*, 88(2):303–338, 2010. Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pp. 1183–1192. PMLR, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Yilin Ji, Daniel Kaestner, Oliver Wirth, and Christian Wressnegger. Randomness is the root of all evil: More reliable evaluation of deep active learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3943–3952, 2023. Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classification. In *2009 ieee conference on computer vision and pattern recognition*, pp. 2372–2379. IEEE, 2009. Kwanyoung Kim, Dongwon Park, Kwang In Kim, and Se Young Chun. Task-aware variational adversarial active learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 8166–8175, 2021. Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In *Proceedings of the IEEE conference on computer vision and* pattern recognition, pp. 2117–2125, 2017a. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 
Focal loss for dense object detection. In *Proceedings of the IEEE international conference on computer vision*, pp. 2980–2988, 2017b. Buyu Liu and Vittorio Ferrari. Active learning for human pose estimation. In *Proceedings of the IEEE* International Conference on Computer Vision, pp. 4363–4372, 2017. Zhuoming Liu, Hao Ding, Huaping Zhong, Weijia Li, Jifeng Dai, and Conghui He. Influence selection for active learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9274–9283, 2021. Radek Mackowiak, Philip Lenz, Omair Ghori, Ferran Diego, Oliver Lange, and Carsten Rother. Cerealscost-effective region-based active learning for semantic segmentation. *arXiv preprint arXiv:1810.09726*, 2018. Sujoy Paul, Jawadul H Bappy, and Amit K Roy-Chowdhury. Non-uniform subset selection for active learning in structured data. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 6846–6855, 2017. Viraj Prabhu, Arjun Chandrasekaran, Kate Saenko, and Judy Hoffman. Active domain adaptation via clustering uncertainty-weighted embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8505–8514, 2021. Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B Gupta, Xiaojiang Chen, and Xin Wang. A survey of deep active learning. *ACM computing surveys (CSUR)*, 54(9):1–40, 2021. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. *Advances in neural information processing systems*, 28, 2015. Soumya Roy, Asim Unmesh, and Vinay P Namboodiri. Deep active learning for object detection. In *BMVC*, pp. 91, 2018. Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017. Claude Elwood Shannon. A mathematical theory of communication. *The Bell system technical journal*, 27 (3):379–423, 1948. 
Samarth Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational adversarial active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5972–5981, 2019. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. *Advances in neural information processing systems*, 29, 2016. Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-effective active learning for deep image classification. *IEEE Transactions on Circuits and Systems for Video Technology*, 27(12):2591–2600, 2016. Jiaxi Wu, Jiaxin Chen, and Di Huang. Entropy-based active learning for object detection with progressive diversity constraint. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 9397–9406, 2022. Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, and Danny Z Chen. Suggestive annotation: A deep active learning framework for biomedical image segmentation. In International conference on medical image computing and computer-assisted intervention, pp. 399–407. Springer, 2017. Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. *International Journal of Computer Vision*, 113:113– 127, 2015. Donggeun Yoo and In So Kweon. Learning loss for active learning. In *Proceedings of the IEEE/CVF* conference on computer vision and pattern recognition, pp. 93–102, 2019. Weiping Yu, Sijie Zhu, Taojiannan Yang, and Chen Chen. Consistency-based active learning for object detection. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 3951–3960, 2022. Tianning Yuan, Fang Wan, Mengying Fu, Jianzhuang Liu, Songcen Xu, Xiangyang Ji, and Qixiang Ye. Multiple instance active learning for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5330–5339, 2021. 
## A Appendix

## A.1 Effect Of Selection Methods

We compare the uncertainty and diversity of samples selected by KCG and KM++ on both UWE and gradient embedding. Diversity is measured by the average pairwise distance of samples within a batch. Results are presented in Fig. 7. KCG selects samples of larger uncertainty, embedding magnitude, and diversity on both embeddings. Following BADGE, we also measure diversity by the log determinant of the selected samples' Gram matrix in Table 6. We observe similar results to those reported in Fig. 7, i.e., KCG is able to select a more diverse batch than KM++. We further experiment with replacing the KM++ used in BADGE with KCG. The results presented in Fig. 8 demonstrate that KCG performs better than KM++ on gradient embeddings as well. One of the weaknesses of KCG is that it may select outliers that are far away from each other (Sener & Savarese, 2017). However, public benchmarking datasets are usually carefully crafted and may not suffer from the outlier problem. As a result, the deterministic nature of KCG allows it to select more diverse and uncertain samples and to perform better than KM++.

![16_image_0.png](16_image_0.png)

Figure 7: Comparing uncertainty and diversity of samples selected by KCG and KM++ on CIFAR-10. Top row: results on uncertainty-weighted embeddings; bottom row: results on gradient embeddings. (a)(d): Comparing selected samples' uncertainty; (b)(e): Comparing selected samples' embedding magnitude; (c)(f): Comparing selected samples' diversity (measured by average pairwise feature distance within a batch).
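For concreteness, the deterministic k-center greedy (KCG) selection compared here can be sketched as a farthest-first traversal over the embeddings. This is a minimal NumPy illustration; the function name and seeding convention are ours, not the paper's code:

```python
import numpy as np

def k_center_greedy(embeddings, budget, seed_idx=0):
    """Greedily pick the point with the largest distance to the
    already-selected set (farthest-first traversal over embeddings)."""
    selected = [seed_idx]
    # distance of every point to its nearest selected center
    min_dist = np.linalg.norm(embeddings - embeddings[seed_idx], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(min_dist))  # farthest point from current centers
        selected.append(nxt)
        dist_to_new = np.linalg.norm(embeddings - embeddings[nxt], axis=1)
        min_dist = np.minimum(min_dist, dist_to_new)
    return selected
```

Unlike k-means++ seeding, which samples the next center randomly with probability proportional to the squared distance, this traversal is deterministic, which matches the behavior of KCG discussed above.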
| Batch (Budget)                                                   | 2 (500)        | 3 (1000)          | 4 (2000)          |
|------------------------------------------------------------------|----------------|-------------------|-------------------|
| Log determinant of Gram matrix of UWEs in batch                  |                |                   |                   |
| KM++                                                             | 813.86 (5.04)  | -11789.39 (70.67) | -37898.37 (7.50)  |
| KCG                                                              | 1353.61 (4.02) | -10780.49 (18.76) | -36313.16 (70.82) |
| Log determinant of Gram matrix of (unweighted) features in batch |                |                   |                   |
| KM++                                                             | 1034.34 (0.38) | -1704.00 (3.20)   | -7499.03 (0.48)   |
| KCG                                                              | 1088.14 (0.31) | -1643.87 (3.35)   | -7470.63 (25.94)  |

Table 6: The log determinant of the Gram matrix of samples selected by KCG vs. KM++ on CIFAR-10. We compute the Gram matrix on both the selected samples' UWEs and their original (unweighted) features. KCG obtains larger determinant values in both cases, indicating the selected samples are more diverse.

## A.2 Effect Of Aggregation For Object Detection

We use Max aggregation to generate the class-wise UWE (Eq. (9)) for object detection. Alternatively, the class-wise UWE can be obtained by averaging the box-level UWEs within each class, i.e.,

$$uwe_{i}(x)=\frac{\sum_{d\in D_{i}(x)}u(d)\cdot f(d)}{|D_{i}(x)|},$$

where $D_{i}(x)$ is the set of detections with predicted class label i for image x.

![17_image_0.png](17_image_0.png)

(a) (b)

Figure 8: Replacing KM++ in BADGE with KCG. KCG performs better than KM++ on both gradient embeddings and uncertainty-weighted embeddings. (a) Semantic segmentation on Cityscapes; (b) Object detection on VOC0712 with Faster R-CNN.

Table 7 compares the performance of *Max* vs. *Mean* aggregation. We observe that *Max* performs better than or comparably to *Mean*. The reason may be that *Max* generates a more distinctive embedding, as each class is represented by its most uncertain object instead of its "average" object, and more distinctive embeddings facilitate the selection of more informative samples by distance-based sampling.

Table 7: Comparing *Max* vs. *Mean* aggregation for generating class-wise UWE for object detection.

|                                               | Batch 2      | Batch 3      | Batch 4      |
|-----------------------------------------------|--------------|--------------|--------------|
| Object detection on VOC0712 with Faster R-CNN |              |              |              |
| Mean                                          | 67.18 (0.66) | 72.03 (0.39) | 75.69 (0.19) |
| Max                                           | 68.09 (0.09) | 72.50 (0.37) | 76.37 (0.31) |
| Object detection on VOC0712 with RetinaNet    |              |              |              |
| Mean                                          | 64.08 (0.38) | 72.79 (0.09) | 77.62 (0.20) |
| Max                                           | 64.61 (1.10) | 72.58 (0.12) | 77.59 (0.10) |

## A.3 Badge For Object Detection

To evaluate BADGE's effectiveness for object detection, we experiment with variants of BADGE. Modern object detectors like RetinaNet (Lin et al., 2017b) and Faster R-CNN (Ren et al., 2015) typically consist of two prediction branches, i.e., classification (CLS) and regression (REG). A two-stage detector like Faster R-CNN has another prediction branch for region proposal (RPN, which itself consists of a classification and a regression head). We experiment with three variants of BADGE, i.e., BADGE (CLS), BADGE (CLS+REG) and BADGE (CLS+REG+RPN), where the last-layer parameters of the corresponding branches are taken into consideration when computing the gradient embedding. We observe that BADGE (CLS) performs the best, while using the gradient embedding from the regression branch or the region proposal network harms the performance. These results suggest that BADGE is not guaranteed to work well for complicated tasks like object detection.

![18_image_0.png](18_image_0.png)

(a) (b)

Figure 9: BADGE for object detection with variants of gradient embeddings. (a) Results on VOC0712 with Faster R-CNN; (b) Results on VOC0712 with RetinaNet.
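The two aggregation schemes compared in Table 7 can be sketched as follows. This is a minimal NumPy illustration, assuming per-image detections are given as (predicted class, uncertainty, feature) triples; the function and argument names are ours, not the authors' implementation:

```python
import numpy as np

def classwise_uwe(detections, num_classes, feat_dim, mode="max"):
    """Aggregate box-level uncertainty-weighted embeddings u(d) * f(d)
    into one embedding per class; classes without detections stay zero."""
    uwe = np.zeros((num_classes, feat_dim))
    for c in range(num_classes):
        boxes = [(u, f) for cls, u, f in detections if cls == c]
        if not boxes:
            continue
        if mode == "max":
            # represent the class by its single most uncertain detection
            u, f = max(boxes, key=lambda b: b[0])
            uwe[c] = u * f
        else:
            # average the box-level UWEs over all detections of this class
            uwe[c] = np.mean([u * f for u, f in boxes], axis=0)
    return uwe.ravel()  # concatenated image-level embedding
```

With "max", each class embedding is dominated by the most uncertain box, which is what makes the resulting image-level embedding more distinctive than the per-class average.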
## A.4 Uncertainty And Diversity Of Selected Samples

To validate that our method is indeed able to select a set of both uncertain and diverse samples, we compare the mean uncertainty and diversity (measured by the average pairwise feature distance within a batch) of samples selected by different methods in Fig. 10. We observe that UWE obtains the second-best result in terms of uncertainty, just behind Uncertainty sampling, which selects the samples with the top-k uncertainty. However, Uncertainty sampling does not consider diversity and thus ranks last in terms of diversity for image classification. Our method, being a hybrid method, is still able to achieve good diversity for image classification and ranks first for object detection.

## A.5 Results On Mini-Imagenet

We present the results on mini-ImageNet (Vinyals et al., 2016) in Fig. 11. Mini-ImageNet contains 48,000 images for training and 12,000 images for testing, evenly distributed across 100 classes. The image resolution is 84 × 84 × 3. We use the training set as the initial unlabelled pool and the testing set to evaluate model performance. The network architecture is a Vision Transformer (ViT-Base) (Dosovitskiy et al., 2020), initialized with ImageNet pre-trained weights (self-supervised pre-training with DINO (Caron et al., 2021)). At each AL cycle, the model is trained for 20 epochs with the ADAM optimizer, a learning rate of 5e-5, and a batch size of 64. The fully-supervised baseline achieves 80.60% testing accuracy. We use BvSB as the uncertainty measure for UWE. We observe that UWE performs slightly better than BADGE and outperforms the other methods significantly.

## A.6 Qualitative Results

We provide detailed detection results of models trained with samples selected by different active learning methods in Fig. 12. Our method is able to obtain more accurate detection results, with fewer false positives (e.g., dog/cat in Fig. 12(b)) and false negatives (e.g., potted plant in Fig. 12(d)).
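The batch statistics reported in Fig. 10, mean uncertainty and diversity as the average pairwise feature distance within a batch, can be computed as in the following minimal sketch (the helper name is ours, not from the paper's code):

```python
import numpy as np

def batch_statistics(uncertainties, features):
    """Mean uncertainty and average pairwise Euclidean feature
    distance of a selected batch (the diversity proxy of Fig. 10)."""
    mean_unc = float(np.mean(uncertainties))
    # all pairwise differences via broadcasting: shape (n, n, d)
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(features)
    # average over the n*(n-1) ordered pairs, excluding self-distances
    diversity = float(dists.sum() / (n * (n - 1)))
    return mean_unc, diversity
```

Comparing these two numbers across selection methods gives exactly the uncertainty-versus-diversity trade-off discussed in A.4.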
## A.7 Numerical Results

We provide the raw data for Figs. 3, 4 and 5 in Tables 8, 9 and 10, respectively. Our method (UWE) achieves the best or second-best performance in most cases.

| Batch (Budget)         | 1 (500)      | 2 (500)      | 3 (1000)     | 4 (2000)     | 5 (4000)     | 6 (8000)     |
|------------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| ResNet-18 on CIFAR-10  |              |              |              |              |              |              |
| Random                 | 41.3 (0.49)  | 49.97 (1.16) | 60.31 (0.11) | 72.78 (0.17) | 82.42 (0.66) | 88.21 (0.04) |
| Uncertainty            | 41.3 (0.49)  | 46.96 (0.29) | 60.08 (0.09) | 76.02 (0.20) | 85.96 (0.19) | 91.60 (0.07) |
| CoreSet                | 41.3 (0.49)  | 48.61 (0.71) | 60.46 (0.14) | 74.35 (0.41) | 84.11 (0.28) | 90.11 (0.05) |
| BADGE                  | 41.3 (0.49)  | 48.35 (0.51) | 61.56 (1.02) | 75.75 (0.52) | 85.64 (0.42) | 91.40 (0.22) |
| BALD                   | 41.3 (0.49)  | 50.09 (0.65) | 59.68 (0.27) | 73.63 (0.31) | 82.99 (0.58) | 88.58 (0.25) |
| LLAL                   | 41.3 (0.49)  | 48.73 (0.68) | 59.89 (0.23) | 74.37 (0.23) | 84.77 (0.43) | 90.50 (0.12) |
| VAAL                   | 41.3 (0.49)  | 48.26 (0.72) | 59.09 (0.13) | 72.62 (0.58) | 82.99 (0.32) | 88.39 (0.10) |
| TA-VAAL                | 41.3 (0.49)  | 48.53 (1.23) | 59.45 (1.21) | 72.90 (1.27) | 82.63 (0.96) | 88.71 (0.37) |
| ISAL                   | 41.3 (0.49)  | 49.31 (0.90) | 60.59 (1.12) | 75.15 (1.25) | 85.26 (0.17) | 90.55 (0.25) |
| UWE (Ours)             | 41.3 (0.49)  | 49.46 (0.41) | 61.84 (0.37) | 76.38 (1.00) | 85.85 (0.51) | 91.46 (0.08) |

| Batch (Budget)         | 1 (5000)     | 2 (1000)     | 3 (2000)     | 4 (4000)     | 5 (8000)     | 6 (8000)     |
|------------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| ResNet-18 on CIFAR-100 |              |              |              |              |              |              |
| Random                 | 33.83 (0.08) | 37.87 (0.28) | 44.79 (0.71) | 53.77 (0.30) | 63.28 (0.05) | 68.68 (0.46) |
| Uncertainty            | 33.83 (0.08) | 36.98 (0.39) | 45.99 (0.14) | 55.66 (0.58) | 66.23 (0.03) | 71.41 (0.11) |
| CoreSet                | 33.83 (0.08) | 38.54 (0.59) | 45.45 (1.57) | 55.52 (0.35) | 65.63 (0.10) | 70.62 (0.20) |
| BADGE                  | 33.83 (0.08) | 38.20 (0.25) | 45.98 (0.38) | 55.69 (0.07) | 66.51 (0.03) | 70.99 (0.03) |
| BALD                   | 33.83 (0.08) | 38.10 (0.06) | 44.93 (0.16) | 54.02 (0.27) | 63.39 (0.13) | 67.53 (0.09) |
| LLAL                   | 33.83 (0.08) | 37.54 (0.28) | 43.09 (1.05) | 52.79 (0.03) | 65.09 (0.41) | 69.97 (0.16) |
| VAAL                   | 33.83 (0.08) | 38.73 (0.30) | 44.84 (0.36) | 53.45 (0.14) | 63.35 (0.47) | 68.74 (0.15) |
| TA-VAAL                | 33.83 (0.08) | 38.07 (0.39) | 44.37 (0.46) | 53.50 (0.58) | 63.96 (0.17) | 68.77 (0.24) |
| ISAL                   | 33.83 (0.08) | 39.17 (0.36) | 45.92 (0.98) | 55.57 (0.55) | 65.86 (0.28) | 70.46 (0.36) |
| UWE (Ours)             | 33.83 (0.08) | 38.49 (0.55) | 45.28 (0.08) | 55.59 (0.80) | 67.11 (0.48) | 72.09 (0.12) |

Table 8: Benchmarking results for image classification. Best results are bolded, and the second best results are underlined.

| Batch (Budget)                             | 1 (100)      | 2 (100)      | 3 (200)      | 4 (400)      |
|--------------------------------------------|--------------|--------------|--------------|--------------|
| DeepLabv3+ on Cityscapes (image-level AL)  |              |              |              |              |
| Random                                     | 52.54 (1.05) | 59.24 (0.36) | 65.44 (0.22) | 70.83 (0.02) |
| Uncertainty                                | 52.54 (1.05) | 63.55 (0.48) | 70.09 (0.35) | 73.80 (0.03) |
| CoreSet                                    | 52.54 (1.05) | 62.06 (0.14) | 69.24 (0.97) | 73.89 (0.09) |
| CDAL                                       | 52.54 (1.05) | 58.00 (0.94) | 65.11 (0.54) | 71.81 (0.52) |
| BADGE                                      | 52.54 (1.05) | 61.28 (1.43) | 67.87 (0.71) | 72.77 (0.26) |
| UWE (Ours)                                 | 52.54 (1.05) | 63.54 (1.06) | 69.83 (0.49) | 74.09 (0.03) |

| Batch (Budget)                                                         | 1 (500)      | 2 (500)      | 3 (1000)     | 4 (2000)     |
|------------------------------------------------------------------------|--------------|--------------|--------------|--------------|
| DeepLabv3+ on Cityscapes (region-level AL with region size 512 × 512)  |              |              |              |              |
| Random                                                                 | 50.71 (1.19) | 56.94 (0.67) | 63.73 (1.23) | 69.00 (0.38) |
| Uncertainty                                                            | 50.71 (1.19) | 61.63 (0.85) | 66.88 (0.07) | 71.77 (0.59) |
| CoreSet                                                                | 50.71 (1.19) | 61.82 (1.04) | 68.19 (1.25) | 71.75 (0.57) |
| CDAL                                                                   | 50.71 (1.19) | 61.01 (0.11) | 68.03 (0.77) | 72.75 (1.09) |
| BADGE                                                                  | 50.71 (1.19) | 61.60 (0.83) | 67.07 (0.16) | 71.24 (0.04) |
| UWE (Ours)                                                             | 50.71 (1.19) | 62.78 (0.98) | 68.42 (0.13) | 72.87 (0.71) |

Table 9: Benchmarking results for semantic segmentation. Best results are bolded, and the second best results are underlined.

![20_image_0.png](20_image_0.png)

Figure 10: Comparing uncertainty and diversity of samples selected by different AL methods. (a)(b): Image classification on CIFAR-10 with ResNet-18; (c)(d): Object detection on VOC0712 with RetinaNet.

![21_image_0.png](21_image_0.png)

Figure 11: Active learning results on mini-Imagenet.

| Batch (Budget)           | 1 (1000)     | 2 (1000)     | 3 (2000)     | 4 (4000)     |
|--------------------------|--------------|--------------|--------------|--------------|
| Faster R-CNN on VOC0712  |              |              |              |              |
| Random                   | 59.79 (0.23) | 66.22 (0.32) | 70.78 (0.30) | 75.14 (0.40) |
| Uncertainty              | 59.79 (0.23) | 67.53 (0.40) | 71.61 (0.23) | 75.51 (0.23) |
| CoreSet                  | 59.79 (0.23) | 64.64 (0.62) | 69.11 (0.32) | 73.39 (0.44) |
| CDAL                     | 59.79 (0.23) | 66.95 (0.36) | 71.35 (0.38) | 75.31 (0.07) |
| LLAL                     | 59.79 (0.23) | 66.23 (0.21) | 70.42 (0.23) | 74.83 (0.31) |
| BADGE                    | 59.79 (0.23) | 66.98 (0.09) | 71.63 (0.11) | 75.70 (0.25) |
| EnmsDivproto             | 59.79 (0.23) | 67.44 (0.25) | 71.94 (0.22) | 75.64 (0.69) |
| CALD                     | 59.79 (0.23) | 67.75 (0.43) | 72.81 (0.40) | 76.15 (0.33) |
| UWE (Ours)               | 59.79 (0.23) | 68.10 (0.10) | 72.51 (0.37) | 76.38 (0.32) |
| RetinaNet on VOC0712     |              |              |              |              |
| Random                   | 38.86 (6.32) | 61.75 (0.63) | 70.64 (0.67) | 76.14 (0.38) |
| Uncertainty              | 38.86 (6.32) | 62.61 (1.21) | 71.12 (0.96) | 76.93 (0.40) |
| CoreSet                  | 38.86 (6.32) | 59.02 (0.70) | 68.37 (0.86) | 75.21 (0.45) |
| CDAL                     | 38.86 (6.32) | 61.68 (2.63) | 71.32 (0.90) | 76.69 (0.31) |
| LLAL                     | 38.86 (6.32) | 58.63 (0.64) | 69.10 (0.46) | 74.98 (0.23) |
| BADGE                    | 38.86 (6.32) | 58.65 (4.18) | 68.07 (3.23) | 75.55 (0.41) |
| EnmsDivproto             | 38.86 (6.32) | 63.29 (0.83) | 71.87 (0.62) | 77.09 (0.11) |
| CALD                     | 38.86 (6.32) | 57.50 (0.20) | 68.64 (0.50) | 76.18 (0.45) |
| UWE (Ours)               | 38.86 (6.32) | 64.62 (1.10) | 72.59 (0.12) | 77.60 (0.10) |

Table 10: Benchmarking results for object detection. Best results are bolded, and the second best results are underlined.

![22_image_0.png](22_image_0.png)

Figure 12: Visualization of detection results of RetinaNet on VOC0712. Blue boxes denote ground truth and orange boxes denote detection results. The model is trained with 2k images selected by various active learning methods. Our method is able to obtain detection results of higher quality, with fewer false positives (e.g., dog/cat in (b)) and false negatives (e.g., potted plant in (d)).
Review 1:

Summary: The paper proposes a method for active learning based on embedding values weighted by their associated uncertainty. Using this modeling, which is related to the gradient-based approach proposed in BADGE, an iterative selection method is used to select the samples to label in the active learning process. This selection picks the sample furthest away from the other samples in the uncertainty-weighted embedding space, in order to maximize diversity. This creates a specific uncertainty-diversity tradeoff within the active learning routine, with results reported for image classification, semantic segmentation, and object detection.

Strengths and Weaknesses:

Strengths:
- Relevant topic (active learning), with the proposal of a complete method addressing it.
- Results on three distinct and relevant computer vision problems (image classification, semantic segmentation, object detection), with comparison with many techniques.

Weaknesses:
- The paper is not always easy to follow and read, the overall writing quality is fairly low, and the ideas are not presented very clearly.
- The working principles are relatively ad hoc, based on some observations on equations that are simplified with no clear justification.
- It is argued that one advantage of the approach is the computational speedup in training such a model. These gains are, however, relatively modest (linear) in my opinion and come from simplifying the gradients with heuristics that are not themselves well justified and evaluated. For classification, for example, the simplification mostly comes from the fact that BADGE multiplies the output by the number of classes in the dataset. This justification appears weak to me, and the gains are factors of the number of classes, not orders of magnitude.
- The results are not easy to interpret and the advantage of the proposed approach over the others is not very clear.
I am in fact doubtful of the gains of the proposed UWE over the BADGE approach, as UWE appears to be an approximate version of BADGE, using a custom uncertainty measure instead of real gradients, and making a sequential greedy selection of samples to label instead of using k-means++.

Requested Changes:

First, I found the overall proposal neither very clear nor well supported; the explanations of the proposal are relatively loose and I am not convinced by the reasoning. For example, Sec. 3.2 presents the proposed UWE method, showing that we can use $\frac{\partial \ell}{\partial z}$ as an uncertainty measure. Then, it states "This motivates us to relax the constraint of computing the embedding as the derivative of the loss function, and to generate an embedding by directly weighting the feature by uncertainty." And then, the uncertainty function $u(x)$ is used as a replacement for $\frac{\partial \ell}{\partial z}$. So far, that makes sense, but these appear to be simply algebraic manipulations. What I got from the later experiment presentation is that, from these manipulations, arbitrary uncertainty functions $u(x)$ are defined for each case tested, in replacement of the gradient $\frac{\partial \ell}{\partial z}$. Why do so, and what is the issue with using the gradient, which is more exact? All this appears quite ad hoc and not very well supported, especially given that the analytical version, the gradients, is already easily available.

Also, the other part of the proposal is a greedy sequential selection of the instances to label from the current batch (see Algo. 1). This is a simple heuristic that may work reasonably well, but it may still get stuck with a suboptimal set of instances given the myopic sequential selection, where interactions between the selected instances are not taken into account globally, but rather after each iteration.
Compared to this, the k-means++ algorithm used in BADGE appears to me better suited and less prone to producing a suboptimal selection.

In Sec. 5.1, arguments on the computational advantages of UWE vs. BADGE are made, which seem to be the main point supporting the approach. However, I am not sure about the claims made there, as everything seems to come from the fact that the uncertainty function approximates the gradient in some way, without further justification when the heuristics are presented (in Section 3.4). So, basically, UWE is more efficient, computationally speaking, than BADGE because it relies on heuristics that were proposed with little justification and that are cheaper than computing the full gradients. This is somehow expected, but the real question is whether the heuristics are **good** approximations of the real gradients. It might be so if we look at the results, but this is not studied in a direct and systematic way in the paper.

Comments on presentation:
- The bibliographic references are not well formatted in the document, with a lot of repetition of author names, having those written in the text and repeated from the citation. This occurs almost everywhere in the paper; I would have expected the authors to be much more careful about that, and proofreading the paper before submission would have allowed the authors to catch it. Moreover, depending on the context, references should put the names and date in parentheses (e.g., (Smith et al., 2023)) and not names inline in the text (e.g., Smith et al. (2023)).
- Captions do not provide a sufficient description of the figures and tables to allow their proper interpretation, in particular for figures 3 to 5, where panels (c) are not clear to me even after looking at the text.

Broader Impact Concerns: There are no ethical implications in this paper that would require a Broader Impact Statement.
================================================== Review 2: Summary: This paper explores the active learning problem, focusing on the iterative selection of informative unlabeled data points for labeling. The current state-of-the-art (SOTA) algorithm, BADGE, employs gradient embedding to identify uncertain samples, primarily tailored for classification tasks relying on cross-entropy (CE) loss. However, this paper extends the concept of gradient embedding to accommodate various loss functions. To achieve this, the authors decompose the gradient vector into an uncertainty measure and the feature extractor, inspired by the original BADGE embedding, resulting in what is termed uncertainty-weighted embedding (UWE). Subsequently, leveraging UWEs, the authors propose a distance-based sampling approach. Strengths and Weaknesses: Strengths: 1. The simplicity and effectiveness of uncertainty-weighted embedding (UWE) extend beyond image classification to encompass image-level and region-level active learning tasks such as semantic segmentation and object detection. 2. In comparison to BADGE, uncertainty-weighted embedding offers advantages in terms of memory and computational efficiency. Weaknesses: 1. The novelty of the paper appears limited. While the authors introduce uncertainty-weighted embedding (UWE), the overall methodology incorporates elements from existing approaches. 2. The algorithm relies heavily on heuristics, lacking thorough justification for its components. A more comprehensive discussion, including theoretical support, could bolster the paper's strength. Requested Changes: Despite achieving state-of-the-art performance across various vision-related tasks with the introduction of UWE, the novelty of this paper remains somewhat limited. The authors should provide a detailed exploration of potential areas for increased novelty or the development of novel methodologies to enhance the paper's contribution. 
Broader Impact Concerns: None ================================================== Review 3: Summary: This paper mainly focuses on Active Learning. The authors mainly address three issues in the BADGE method: 1) the high computational complexity of gradient embedding in dense prediction tasks, 2) restrictions on the loss function of prediction labels, and 3) the inability to handle regression problems well. A hybrid method is proposed in the paper, which considers both uncertainty and diversity in sample selection, known as UWE. UWE mainly consists of two steps: 1) computing a novel uncertainty-weighted embedding and 2) applying distance-based sampling for sample selection. UWE has achieved state-of-the-art results in three main visual tasks: image classification, semantic segmentation, and object detection. UWE is simple, not limited by loss functions or tasks, and has superior performance. Strengths and Weaknesses: Strengths: 1. Originality. UWE is very innovative. The authors use the chain rule to decompose gradient embedding into two terms: the first term depends on the form of the loss function and activation function; the second term is the feature representation extracted from the penultimate layer and is independent of the loss function. Therefore, the authors define a function u(x) to measure the uncertainty of the current model on sample x, and define a feature extractor f(x) by obtaining the intermediate output of the network, which is independent of loss functions. This decomposition method is novel and practical. 2. Quality. The paper conducts comprehensive experiments. First, the authors conduct a correlation experiment between uwe(x), ge(x), and uncertainty, demonstrating a strong positive correlation between uwe(x) and uncertainty. Second, the authors conduct experiments on three common visual tasks, showing the superior performance of UWE compared to other methods. 
Finally, the authors conduct ablation experiments to demonstrate the effectiveness of both parts of UWE, and quantitatively compare its computational complexity with BADGE. 3. Clarity. The writing is clear. The authors start with the decomposition of gradient embedding, splitting it into a term related to the loss function and activation function and a feature representation unrelated to the loss. Based on this decomposition, uncertainty-weighted embedding is proposed, which consists of two parts: u(x), which is positively correlated with uncertainty, and the feature extractor f(x). The motivation of the ideas is very intuitive, and the writing is clear. 4. Significance. UWE is of good significance for the Active Learning community. This paper proposes a method with fewer restrictions, more flexibility, and better performance compared to BADGE, which can be transferred to many other tasks. Weaknesses: 1. It is difficult to distinguish the methods in panels (a) and (b) of Figures 3, 4, 5, and 6. In particular, the lines corresponding to the better-performing methods almost overlap. It would be better if a higher-contrast display were available. 2. For image classification, it would be more convincing to add an experiment on ImageNet. 3. The authors simply summarize the results of different tasks, but a deeper analysis is missing and required, such as why the proposed method works and why it works better than existing methods. 4. Is it possible to give some visualizations, such as detailed detection results, to better demonstrate the effectiveness of the proposed method? Requested Changes: Please see the weaknesses above. Broader Impact Concerns: None. ================================================== Metareview: Recommendation: Accept with minor revision Comment: This paper proposes an uncertainty-based strategy for active learning. The final recommendations from the three reviewers are at odds: two weak accepts and one weak reject. 
By inspecting all reviews and the revised paper, the AE concluded that this submission can be accepted contingent on minor revisions. The merits of the method lie in its high efficiency, its extensibility to different tasks, its capability to handle object detection and semantic segmentation tasks with improved performance, and its insights into the decomposition of the gradient-based criterion. However, the paper shall undergo a minor revision, better addressing the comments from Reviewer #dMNC. In particular, the rationale behind the decomposition of the gradient-based criterion, i.e., into an uncertainty measure and a feature extractor, shall be elaborated further; some empirical evidence shall be provided that supports each part of the decomposition. ==================================================
# Differentially Private Gradient Flow Based On The Sliced Wasserstein Distance Anonymous authors Paper under double-blind review ## Abstract Safeguarding privacy in sensitive training data is paramount, particularly in the context of generative modeling. This can be achieved through either differentially private stochastic gradient descent or a differentially private metric for training models or generators. In this paper, we introduce a novel differentially private generative modeling approach based on a gradient flow in the space of probability measures. To this end, we define the gradient flow of the Gaussian-smoothed Sliced Wasserstein Distance, including the associated stochastic differential equation (SDE). By discretizing and defining a numerical scheme for solving this SDE, we demonstrate the link between smoothing and differential privacy based on a Gaussian mechanism, due to a specific form of the SDE's drift term. We then analyze the differential privacy guarantee of our gradient flow, which accounts for both the smoothing and the Wiener process introduced by the SDE itself. Experiments show that our proposed model can generate higher-fidelity data at a low privacy budget compared to a generator-based model, offering a promising alternative. ## 1 Introduction The widespread use of deep learning in critical applications has heightened concerns about privacy limitations, with various privacy attacks exposing vulnerabilities in machine learning algorithms (Shokri et al., 2017; Hu et al., 2022; Lacharité et al., 2018; Mai et al., 2018), including deep-learning-based generative models (Chen et al., 2020b; Carlini et al., 2023). Differential Privacy (DP) has emerged as a key solution to counter privacy attacks, providing a robust framework to safeguard training data privacy (Dwork et al., 2006; Dwork, 2011; Dwork & Roth, 2014). DP ensures that a single data point's inclusion or exclusion minimally affects analysis outcomes. 
In machine learning, DP is usually achieved by applying calibrated noise to gradient steps involving sensitive training data, with Differentially Private Stochastic Gradient Descent (DP-SGD) being a prominent example (Abadi et al., 2016). While extensively explored in classification, the application of DP in generative models is an emerging research area, often employing DP-SGD variants for training standard generative models (Xie et al., 2018; Chen et al., 2020a; Long et al., 2021; Dockhorn et al., 2023; Ghalebikesabi et al., 2023), either in the context of generator learning (Cao et al., 2021) or diffusion models (Dockhorn et al., 2023). An under-explored alternative to DP-SGD is to minimize a functional on the space of probability measures:

$$\min_{\mu\in\mathcal{P}(\Omega)} \mathrm{PrivateCost}(\mu, \nu) + \lambda\,\mathrm{Reg}(\mu), \tag{1}$$

where ν is the probability measure to be modeled, PrivateCost is a cost functional on the space of probability measures P(Ω) that can be computed in a private manner, and Reg is a regularization functional that prevents over-fitting to training samples. Some works solve this problem by using this cost as a loss for training a generator. They employ differentially private versions of metrics such as the Maximum Mean Discrepancy or the Sliced Wasserstein Distance (Harder et al., 2021; 2023; Rakotomamonjy & Ralaivola, 2021). Here, we propose an alternative line of work that has never been explored, which explicitly minimizes the functional in Eq. (1) using gradient flows. In a non-DP scenario, gradient flows are commonly employed to address minimization problems like the one described in Eq. (1) (Liutkus et al., 2019; Arbel et al., 2019) and are a viable alternative to other generative models (Fan et al., 2022). They possess inherent stability and convergence properties, thereby reaping significant benefits in the optimization process (Ambrosio et al., 2005a;b; Garg & Panagou, 2021). 
This is an interesting direction for DP applications, since the absence of a generator leads to superior results for a given number of epochs and privacy budget, as demonstrated in our experiments. In addition, gradient flows have not been previously explored and analyzed within the DP framework. All of the above motivates the following question: Can we develop a principled formalism for privacy-preserving generative modeling through gradient flows? In this paper we provide an affirmative response to this question by presenting the theoretical framework for a differentially private gradient flow of the sliced Wasserstein distance (SWD). Our approach involves defining the gradient flow on the smoothed SWD (Rakotomamonjy & Ralaivola, 2021), which is strongly related to DP, as will be made clear later. Despite its seeming simplicity, Gaussian smoothing, akin to related works on the smoothed Wasserstein distance (Goldfeld et al., 2020; Goldfeld & Greenewald, 2020; Ding & Niles-Weed, 2022), introduces theoretical complexities and raises questions about the technical conditions of our gradient flow, including the existence and regularity of its solution. We overcome these challenges and formally establish the continuity equation of the gradient flow of the smoothed SWD, resulting in a smoothed velocity field. This allows us to choose the discretization of the associated stochastic differential equation (SDE) so that the proposed flow ensures differential privacy. We highlight that after discretization, the smoothing in the drift term acts as a Gaussian mechanism. Furthermore, we show that the discretization further amplifies privacy via the Wiener process in the SDE. This results in a sequential algorithm for which the differential privacy budget can be carefully tracked. We experimentally confirm the viability of our proposed algorithm compared to a baseline generator-based model trained with the same private SWD. Notations. Throughout the paper we use Ω to denote the sample space. 
We assume that Ω is a compact subset of R^d. For any subset A ⊆ R^d, we use P(A) to denote the set of probability measures supported on A, equipped with the Borel σ-algebra. For µ, ν ∈ P(Ω), we use Π(µ, ν) ⊆ P(Ω²) to denote the set of joint distributions or "couplings" between µ and ν. For µ ∈ P(Ω) and a measurable map M : Ω → Ω, the push-forward operator \# defines a probability measure M\#µ ∈ P(Ω) such that M\#µ(A) = µ(M⁻¹(A)) for all measurable A ⊆ Ω. For r > 0, we use B(0, r) to denote the closed ball of center 0 and radius r.

## 2 Background And Related Work

## 2.1 Sliced Wasserstein Distance

The p-Wasserstein distance between two probability measures µ, ν ∈ P(Ω) is defined as (Peyré & Cuturi, 2019):

$$\mathcal{W}_{p}(\mu,\nu)=\left(\inf_{\pi\in\Pi(\mu,\nu)}\int_{\Omega^{2}}\|x-y\|_{2}^{p}\,\mathrm{d}\pi(x,y)\right)^{\frac{1}{p}}.\tag{2}$$

For p = 2, i.e., for the squared Euclidean cost, a celebrated result of Brenier (1991) proves that the optimal way to transport mass from µ to ν is through a measure-preserving transport map M : Ω → Ω of the form M(x) = x − ∇ψ(x), where $\psi: \Omega \to \mathbb{R}$ is a convex function termed the Kantorovich potential between µ and ν, and M pushes µ onto ν, i.e., M\#µ = ν. In the one-dimensional case, the optimal transport map has a closed-form expression given by (Peyré & Cuturi, 2019)

$$M(x)=F_{\nu}^{-1}\circ F_{\mu}(x),\tag{3}$$

where Fµ and Fν are the cumulative distribution functions (CDFs) of µ and ν, respectively. In higher dimensions no such closed form exists. The sliced Wasserstein distance (SWD) (Rabin et al., 2012) takes advantage of this simplicity of OT in one dimension by computing a distance between µ, ν ∈ P(Ω) through their projections $P^{\theta}_{\#}\mu, P^{\theta}_{\#}\nu \in \mathcal{P}(\mathbb{R})$ along directions θ on the unit sphere $\mathbb{S}^{d-1} = \{\theta \in \mathbb{R}^{d} \mid \|\theta\|_{2} = 1\}$. Here $P^{\theta}: \Omega \to \mathbb{R}$ denotes the projection operator defined as $P^{\theta}(x) = \langle x, \theta\rangle$. 
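The closed-form one-dimensional map of Eq. (3) can be estimated directly from samples by composing the empirical CDF of µ with the empirical quantile function of ν. A minimal numpy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def empirical_1d_ot_map(x_src, y_tgt):
    """Approximate M(x) = F_nu^{-1}(F_mu(x)) from 1-D samples.

    x_src: samples from the source measure mu
    y_tgt: samples from the target measure nu
    """
    x_src = np.asarray(x_src, dtype=float)
    y_sorted = np.sort(np.asarray(y_tgt, dtype=float))
    # Empirical CDF of mu evaluated at each source point.
    ranks = np.searchsorted(np.sort(x_src), x_src, side="right")
    u = ranks / len(x_src)  # F_mu(x) in (0, 1]
    # Empirical quantile function of nu evaluated at levels u.
    idx = np.clip(np.ceil(u * len(y_sorted)).astype(int) - 1,
                  0, len(y_sorted) - 1)
    return y_sorted[idx]

# Transporting N(0,1) samples onto N(5,1) samples shifts them by ~5.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)
y = rng.normal(5.0, 1.0, 2000)
x_mapped = empirical_1d_ot_map(x, y)
```

This quantile-matching routine is the building block reused later for each one-dimensional slice of the flow.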
Formally,

$$\mathcal{SW}_{2}^{2}(\mu,\nu)=\int_{\mathbb{S}^{d-1}}\mathcal{W}_{2}^{2}(P_{\#}^{\theta}\mu,P_{\#}^{\theta}\nu)\,\mathrm{d}\theta,\tag{4}$$

where dθ is the uniform probability measure on $\mathbb{S}^{d-1}$. Like W2, SW2 also defines a metric on P(Ω) (Nadjahi et al., 2022).

## 2.2 Wasserstein Gradient Flows

A Wasserstein gradient flow represents an extension of the concept of gradient descent applied to a functional within the domain of probability measures. More formally, it constitutes a continuous sequence (µt)t of probability distributions within a Wasserstein metric space (P(Ω), W2), and it follows a continuity equation (Santambrogio, 2016) with the general form:

$$\frac{\partial\rho_{t}}{\partial t}=\operatorname{div}\left(\rho_{t}\nabla_{W_{2}}\mathcal{F}_{\lambda}(\rho_{t})\right)=\operatorname{div}\left(\rho_{t}\nabla_{W_{2}}\mathcal{F}(\rho_{t})\right)+\lambda\Delta\rho_{t},\tag{5}$$

where Fλ = F + λH, F is the functional to be minimized, and λH is an entropic regularization term. H is the negative differential entropy ensuring that the model can generalize and avoid overfitting: $\mathcal{H}(\mu)=\int_{\Omega}\rho(x)\log\rho(x)\,\mathrm{d}x$. λ ≥ 0 signifies the strength of the entropic regularization, and ρt is the density of the probability flow (µt)t≥0 at time t. Depending on the form of Fλ, this continuity equation can be associated with an SDE which relates the evolution of (µt)t≥0 and its particles (Xt)t≥0 (Jordan et al., 1998):

$$\mathrm{d}X_{t}=-\nabla V(X_{t})\,\mathrm{d}t+\sqrt{2\lambda}\,\mathrm{d}W_{t},\tag{6}$$

where V is a potential function that depends on the functional F, (Wt)t is a Wiener process, and Xt ∼ µt. Wasserstein gradient flows possess a rich theoretical background (Ambrosio, 2008; Santambrogio, 2016). 
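For a concrete potential, the SDE in Eq. (6) can be simulated with a plain Euler–Maruyama loop. The quadratic potential below, whose stationary law is N(0, λI), is only an illustrative stand-in for the flow-dependent drift used later (a hedged sketch, not the paper's algorithm):

```python
import numpy as np

def euler_maruyama(x0, grad_V, lam, h, n_steps, rng):
    """Simulate dX_t = -grad V(X_t) dt + sqrt(2*lambda) dW_t."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - h * grad_V(x) \
            + np.sqrt(2.0 * lam * h) * rng.standard_normal(x.shape)
    return x

# Illustrative quadratic potential V(x) = ||x||^2 / 2: particles started
# far from the origin contract toward the stationary law N(0, lambda*I).
rng = np.random.default_rng(1)
particles = euler_maruyama(rng.normal(5.0, 1.0, (500, 2)),
                           grad_V=lambda x: x, lam=0.1, h=0.05,
                           n_steps=400, rng=rng)
```

With λ = 0.1 the particle cloud settles around the origin with standard deviation close to √λ ≈ 0.32.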
Notably, they have been used for generative modeling, where initial distributions are represented by samples that evolve according to a partial differential equation (PDE) governing the gradient flow and defined on several metrics such as the maximum mean discrepancy (Arbel et al., 2019) or Wasserstein-based metrics (Mokrov et al., 2021). Liutkus et al. (2019) also present a non-parametric generative modeling method relying on the gradient flow of the sliced Wasserstein distance, while Bonet et al. (2022) introduce a flow defined in the sliced Wasserstein space by proposing a numerically approximated Jordan–Kinderlehrer–Otto (JKO) type scheme. In this work we define the gradient flow of the smoothed sliced Wasserstein distance. Smoothing each sliced measure allows us to introduce differential privacy, at the expense of additional theoretical challenges in the definition of the flow.

## 2.3 Differential Privacy

Definition 1. *A random mechanism* M: D → R *is* (ε, δ)*-DP if for any two adjacent inputs* d1, d2 ∈ D *and any subset of outputs* S ⊆ R,

$$\mathbb{P}[\mathcal{M}(d_{1})\in\mathcal{S}]\leq\mathrm{e}^{\varepsilon}\,\mathbb{P}[\mathcal{M}(d_{2})\in\mathcal{S}]+\delta.\tag{7}$$

Adjacent inputs refer to datasets differing only by a single record. DP ensures that when a single record in a dataset is swapped, the change in the distribution of model outputs is controlled by ε and δ. ε controls the trade-off between the level of privacy and the usefulness of the output, where smaller ε values offer stronger privacy but potentially lower utility (e.g., in our specific case, low-quality generated samples). δ is a bound on the external risk (e.g., information accidentally being leaked) that is not restricted by ε; it is an extra privacy option that enables control of the extent to which privacy may be compromised. 
In practice, we are interested in values of δ that are less than the inverse of a polynomial in the size of the database (Dwork & Roth, 2014). A classical example of a DP mechanism is the Gaussian mechanism operating on a function f : D → R^d as:

$$\mathcal{M}_{f}(d)=\mathcal{N}(f(d),\sigma^{2}\mathcal{I}_{d}).\tag{8}$$

We define the ℓ2 sensitivity of f as $\Delta_2(f) := \max_{d_1, d_2 \text{ adjacent} \in \mathcal{D}} \|f(d_1) - f(d_2)\|_2$. For c² > 2 ln(1.25/δ) and σ ≥ c∆2(f)/ε, the Gaussian mechanism is (ε, δ)-DP (Dwork & Roth, 2014). Several works deal with differentially private generative modeling, with most adopting DP-SGD (Xie et al., 2018; Chen et al., 2020a; Long et al., 2021; Dockhorn et al., 2023; Ghalebikesabi et al., 2023). This approach is commonly employed in the context of generator learning (Cao et al., 2021) and diffusion models (Dockhorn et al., 2023). An alternative solution for a DP generative model is to consider a DP loss function on which the generator is trained. However, there is limited research on defining rigorous metrics that can be computed in a private manner, resulting in a gap for privacy-preserving machine learning algorithms. While Lê Tien et al. (2019) use random projections to make the W1 computation differentially private, they do not provide a theoretical analysis of the resulting divergence. Instead, Harder et al. (2021; 2023) have considered a differentially private Maximum Mean Discrepancy as a generator loss. Rakotomamonjy & Ralaivola (2021) introduce a Gaussian-smoothed version of SW2, defined as

$$\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)=\int_{\mathbb{S}^{d-1}}\mathcal{W}_{2}^{2}(P_{\#}^{\theta}\mu*\xi_{\sigma},P_{\#}^{\theta}\nu*\xi_{\sigma})\,\mathrm{d}\theta,\tag{9}$$

where ξσ ∼ N(0, σ²). 
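A Monte-Carlo estimate of the smoothed distance in Eq. (9) follows the same recipe as the plain SWD: project both sample sets onto random directions, add Gaussian noise to every projected value (the smoothing ξσ, which is also what acts as a Gaussian mechanism), and average the one-dimensional squared W2 distances computed from sorted samples. A hedged numpy sketch with illustrative names, assuming equal sample sizes:

```python
import numpy as np

def smoothed_sliced_w2_sq(X, Y, n_proj=100, sigma=0.5, rng=None):
    """Monte-Carlo estimate of the Gaussian-smoothed SW_2^2 between
    the empirical measures of X and Y (rows are samples, equal count)."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = X.shape
    # Random directions, uniform on the unit sphere S^{d-1}.
    theta = rng.standard_normal((d, n_proj))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)
    # Project and smooth: one Gaussian draw per (sample, direction).
    Xp = X @ theta + sigma * rng.standard_normal((n, n_proj))
    Yp = Y @ theta + sigma * rng.standard_normal((Y.shape[0], n_proj))
    # 1-D W_2^2 between equal-size empirical measures is the mean squared
    # difference of sorted projections; average over directions.
    return np.mean((np.sort(Xp, axis=0) - np.sort(Yp, axis=0)) ** 2)

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, (1000, 2))
d_same = smoothed_sliced_w2_sq(X, rng.normal(0.0, 1.0, (1000, 2)), rng=rng)
d_far = smoothed_sliced_w2_sq(X, rng.normal(3.0, 1.0, (1000, 2)), rng=rng)
```

As expected, the estimate is near zero for two sample sets from the same distribution and grows with the separation between the distributions.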
They demonstrate that this distance and some extensions (Rakotomamonjy et al., 2021) are inherently differentially private, as the smoothing acts as a Gaussian mechanism. This allows for the seamless integration of this differentially private loss function into machine learning problems involving distribution comparisons, such as generator-based generative modeling. In this work, we extend this trend by formalizing the gradient flow of the Gaussian-smoothed Sliced Wasserstein Distance.

## 3 Differentially Private Gradient Flow

In this section we present the theoretical building blocks of our method, which consists of building a discretized gradient flow on the smoothed sliced Wasserstein distance. This smoothing and discretization will lead to a differentially private drift (vector field) in the gradient flow. In Section 3.1 we introduce the smoothing and the gradient flow of the Gaussian-smoothed sliced Wasserstein distance of Rakotomamonjy & Ralaivola (2021) defined in Eq. (9), and prove the existence and regularity of its solution. In Section 3.2 we present a particle scheme to simulate the discretization of this flow and elaborate the link between the discretization, smoothing, and differential privacy. In Section 3.3 we present the privacy guarantee.

## 3.1 Gradient Flow Of The Smoothed Sliced Wasserstein Distance

In this subsection we study the following functional over the Wasserstein space (P(Ω), W2):

$$\mathcal{F}_{\lambda,\sigma}^{\nu}(\mu)=\frac{1}{2}\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)+\lambda\mathcal{H}(\mu),\tag{10}$$

where ν ∈ P(Ω) is the target distribution to be modeled and σ > 0 is the smoothing of the probability measures in the inner optimal transport problem in GσSW2. We will show later how λ and σ relate to the privacy parameters (ε, δ) in the discretized flow. The main result of this subsection is the following: first, we establish the existence and regularity of a Generalized Minimizing Movement Scheme (GMMS) for Eq. 
(10), and then we demonstrate that this GMMS satisfies the corresponding continuity equation.

Theorem 1. *Let* ν ∈ P(B(0, r)) *have a strictly positive smooth density. For* λ > 0 *and* r > √d, *let the starting distribution* µ0 ∈ P(B(0, r)) *have a density* ρ0 ∈ L∞(B(0, 1)). *There exists a minimizing movement scheme* (µt)t≥0 *associated with Eq.* (10). *Further,* (µt)t≥0 *admits densities* (ρt)t≥0 *following a continuity equation:*

$$\frac{\partial\rho_{t}}{\partial t}=-\mathrm{div}(v_{t}^{(\sigma)}\rho_{t})+\lambda\Delta\rho_{t},\tag{11}$$

with:

$$v_{t}^{(\sigma)}(x)=v^{(\sigma)}(x,\mu_{t})=\int_{\mathbb{S}^{d-1}}(\psi_{\mu_{t},\theta}^{(\sigma)})^{\prime}(\langle x,\theta\rangle)\,\theta\,\mathrm{d}\theta.\tag{12}$$

Here, $\psi_{\mu_{t},\theta}^{(\sigma)}$ is the Kantorovich potential (see Section 2.1) between $P_{\#}^{\theta}\mu_{t}*\xi_{\sigma}$ and $P_{\#}^{\theta}\nu*\xi_{\sigma}$, with ξσ ∼ N(0, σ²), and its derivative is given by Brenier (1991):

$$(\psi_{\mu_{t},\theta}^{(\sigma)})^{\prime}(z)=z-F_{P_{\#}^{\theta}\mu_{t}*\xi_{\sigma}}^{-1}\circ F_{P_{\#}^{\theta}\nu*\xi_{\sigma}}(z),\tag{13}$$

where Fρ denotes the CDF of ρ.

Proof sketch. (1) We begin by showing that there exists a GMMS (see Appendix A for the precise definition) for the functional in Eq. (10). For this we show that the following optimization problem admits a solution for any h > 0:

$$\mu_{k+1}^{h}\in\operatorname*{arg\,min}_{\mu\in\mathcal{P}(\Omega)}\mathcal{F}_{\lambda,\sigma}^{\nu}(\mu)+\frac{\mathcal{W}_{2}^{2}(\mu,\mu_{k}^{h})}{2h}.\tag{14}$$

Notice that the above problem is simply the implicit Euler scheme for deriving the gradient flow of $\mathcal{F}_{\lambda,\sigma}^{\nu}$ over the Wasserstein space (P(Ω), W2). Since P(B(0, 1)) is compact for weak convergence, it is enough to show that $\mathcal{F}_{\lambda,\sigma}^{\nu}$ is lower semi-continuous in W2. By Lemma 9.4.3 of Ambrosio (2008), H is lower semi-continuous. By Rakotomamonjy & Ralaivola (2021), GσSW2(µ, ν) is symmetric and satisfies the triangle inequality. 
Moreover, GσSW2(µ, ν) ≤ SW2(µ, ν) for any σ ≥ 0. Hence for any ξ, ξ′ ∈ P(B(0, 1)),

$$|\mathcal{G}_{\sigma}\mathcal{SW}_{2}(\xi,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}(\xi^{\prime},\nu)|\leq\mathcal{G}_{\sigma}\mathcal{SW}_{2}(\xi,\xi^{\prime})\leq\mathcal{SW}_{2}(\xi,\xi^{\prime})\leq c_{d}\,\mathcal{W}_{2}(\xi,\xi^{\prime}),\tag{15}$$

where cd > 0 is a constant dependent only on the dimension d, and the last inequality follows from Prop. 5.1.3 in Bonnotte (2013). Hence there exists a minimum µˆ ∈ P(B(0, r)) of G(µ), admitting a density ρˆ, because otherwise H(µˆ) = ∞. Further, we prove that the solution is "sufficiently regular" in Lemma 3 of Appendix A. (2) Next, we show that the GMMS whose existence and regularity were previously established indeed satisfies the continuity equation in Eq. (11). A crucial ingredient in this step is the analysis of the first variation of the Gaussian-smoothed SW2 distance, which is given in the following proposition (see Appendix A for the proof).

Proposition 1. *Let* µ, ν ∈ P(Ω). *For any diffeomorphism* ζ *of* Ω,

$$\lim_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\#}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}=\fint_{\mathbb{S}^{d-1}}\int_{\Omega}(\psi_{\mu,\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)\langle\theta,\zeta(x)\rangle\,\mathrm{d}\mu\,\mathrm{d}\theta.\tag{16}$$

Using the above result we then obtain the desired flow equation by closely following the proofs of Theorem S6 in Liutkus et al. (2019) and Theorem 5.6.1 in Bonnotte (2013). Theorem 1 shows that there is a continuous sequence of probability measures (µt)t≥0 that constitutes a minimizing movement scheme for the functional in Eq. (10). Moreover, it shows that the probability density ρt of the minimizing movement scheme satisfies the continuity equation given by Eq. (11). Theorem 1 is a generalization of Theorem 2 of Liutkus et al. 
(2019) and Theorem 5.6.1 of Bonnotte (2013), which we retrieve when σ → 0. However, the proof does not trivially follow as a corollary from these results, since we consider projected and *then* smoothed measures instead of directly applying previous results on smoothed measures before projection. Hence, we need two new pieces in the proof that are not present in prior works: (1) existence and regularity of solutions to the functional minimization problem, and (2) analysis of the first variation of the squared Gaussian-smoothed SW metric. One key point of our approach is that the drift term in the continuity equation relies on the Kantorovich potential $\psi_{\mu_t,\theta}^{(\sigma)}$ associated with the Gaussian-smoothed projected measures, specifically $P_{\#}^{\theta}\mu_{t}*\xi_{\sigma}$ and $P_{\#}^{\theta}\nu*\xi_{\sigma}$. When transitioning from the continuous PDE to a discrete-time SDE, this Gaussian smoothing becomes crucial, ultimately leading to the Gaussian mechanism and ensuring differential privacy. We show in the next subsection that the combination of smoothing in projected measures and the discretization jointly establishes differential privacy.

## 3.2 Discretized And Private Gradient Flow Algorithms

In this subsection, we show how the gradient flow of Theorem 1 is discretized and how the smoothing acts as a Gaussian mechanism. We then present two algorithms that implement this gradient flow. The gradient flow of GσSW2 given in Eq. (11) corresponds to a nonlinear Fokker–Planck type equation, where the drift term depends on the density of the solution. The evolution of (µt)t≥0 in Eq. (11) corresponds to a stochastic process (Xt)t≥0 that solves the SDE in Eq. (6). 
The latter can be discretized using the Euler–Maruyama scheme with the random variable initialization $\hat{X}_0 \sim \mu_0$ as follows:

$$\hat{X}_{k+1}=\hat{X}_{k}+h\,v^{(\sigma)}(\hat{X}_{k},\hat{\mu}_{kh})+\sqrt{2\lambda h}\,Z_{k+1},\tag{17}$$

where h > 0 is the step size, $\hat{\mu}_{kh}$ is the distribution of $\hat{X}_k$, and {Zk}k are i.i.d. standard normal random variables. The discrete-time SDE in Eq. (17) can then be simulated by a stochastic finite particle system $\{\hat{X}_k^i\}$, where i ∈ {1, . . . , N} is the index of the i-th particle, and the discrete time index k runs from 0 to K with Kh = T (Bossy & Talay, 1997). The particles are initialized as $\hat{X}_0^i \sim \mu_0$ i.i.d., and each particle follows the update equation:

$$\hat{X}_{k+1}^{i}=\hat{X}_{k}^{i}+h\,\hat{v}^{(\sigma)}(\hat{X}_{k}^{i},\hat{\mu}_{kh}^{(N)})+\sqrt{2\lambda h}\,Z_{k+1}^{i},\tag{18}$$

with $\hat{v}^{(\sigma)}(\hat{X}_{k}^{i},\hat{\mu}_{kh}^{(N)})$ being an estimate of $v^{(\sigma)}(\hat{X}_{k}^{i},\hat{\mu}_{kh})$; cf. Eq. (20) for the details. Naturally, approximating the continuous-time SDE in Eq. (6) by the discrete-time update in Eq. (18) leads to some error, which we bound in the following theorem. In the infinite particle regime, i.e., as N → ∞, under some assumptions of regularity and smoothness of the drift terms $v^{(\sigma)}$ and $\hat{v}^{(\sigma)}$, we can state the following.

Theorem 2. *Suppose that the SDE in Eq.* (6) *has a unique strong solution* (Xt)t∈[0,T] *for any starting point* x0 ∈ Ω, *such that* XT ∼ µT. *For* T = Kh, *let* $\hat{\mu}_{Kh}$ *be the distribution of* $\hat{X}_K$ *in the discrete-time SDE in Eq.* (17). *Under suitable assumptions of regularity and Lipschitzness on* $v^{(\sigma)}$ *and* $\hat{v}^{(\sigma)}$ *stated in the Appendices, the following bound holds for any* λ > TL²/8:

$$\|\mu_{T}-\hat{\mu}_{Kh}\|_{TV}^{2}\leq\frac{T}{\lambda-TL^{2}/8}\left[L^{2}h(c_{1}h+d\lambda)+c_{2}\delta\right],\tag{19}$$

where c1, c2, L, δ > 0 *are constants independent of time (but may depend on* σ). The above theorem follows in a straightforward manner from Theorem 3 of Liutkus et al. 
(2019) for the SW2 flow. It is worth noting that the error bound in Eq. (19) is possibly tighter than the corresponding error bound in Theorem 3 of Liutkus et al. (2019) for the special case σ → 0, because the constant L can be shown to be a non-increasing function of σ. The error bound depends linearly on the dimension d and tends to 0 for δ = 0 and small step size h. We now delve into the numerical computation of the particle flow in Eq. (18). To evaluate $\hat{v}^{(\sigma)}$, we need two approximations. The first one approximates the distribution $\hat{\mu}_{kh}$ by the empirical distribution of the particles at time k, $\hat{\mu}_{kh}\approx\hat{\mu}_{kh}^{(N)}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{\hat{X}_{k}^{i}}$. The second one replaces the integral over θ ∈ S^{d−1} in Eq. (12) with a Monte Carlo estimate over a set of projections {θj} drawn from the sphere S^{d−1}:

$$\hat{v}_{k}^{(\sigma)}(x):=-\frac{1}{N_{\theta}}\sum_{j=1}^{N_{\theta}}(\psi_{\mu_{k},\theta_{j}}^{(\sigma)})^{\prime}(\langle x,\theta_{j}\rangle)\,\theta_{j},\tag{20}$$

where $(\psi_{\mu_{k},\theta_{j}}^{(\sigma)})^{\prime}$ is the discretized derivative of the Kantorovich potential between $P_{\#}^{\theta_{j}}\hat{\mu}_{kh}^{(N)}*\xi_{\sigma}$ and $P_{\#}^{\theta_{j}}\nu*\xi_{\sigma}$, with ξσ ∼ N(0, σ²), defined as (Brenier, 1991):

$$(\psi_{\mu_{k},\theta_{j}}^{(\sigma)})^{\prime}(z)=z-F_{P_{\#}^{\theta_{j}}\hat{\mu}_{kh}^{(N)}*\xi_{\sigma}}^{-1}\circ F_{P_{\#}^{\theta_{j}}\nu*\xi_{\sigma}}(z).\tag{21}$$

This equation is key in the gradient flow since it is the main building block of the drift term in Eq. (20), and it plays an essential role in bridging the smoothing and the privacy. Indeed, since convolution of probability distributions boils down to the addition of random variables, Eq. (21) can be considered as a Gaussian mechanism (see Section 2.3) applied to the projected measures. In practice, we compute $P_{\#}^{\theta_{j}}\hat{\mu}_{kh}^{(N)}*\xi_{\sigma}$ and $P_{\#}^{\theta_{j}}\nu*\xi_{\sigma}$ in the following way. Let Θ = [θT1 , . . . 
, θTNθ] ∈ R^{d×Nθ} be the *random projection matrix* composed of the Nθ projection vectors sampled uniformly from S^{d−1}, and let X = [xT1, . . . , xTn]T, Y = [yT1, . . . , yTn]T ∈ R^{n×d} be, respectively, the *data matrices* composed of the n i.i.d. samples from the target distribution ν and from the empirical distribution $\hat{\mu}_{kh}^{(N)}$. Then, the smoothed and projected distributions $P_{\#}^{\theta_{j}}\nu*\xi_{\sigma}$ and $P_{\#}^{\theta_{j}}\hat{\mu}_{kh}^{(N)}*\xi_{\sigma}$ can respectively be written as XΘ + ZX and YΘ + ZY, with ZX, ZY ∈ R^{n×Nθ} being i.i.d. Gaussian random variables with variance σ², corresponding to the Gaussian mechanism. From an algorithmic point of view, the random projection matrix Θ ∈ R^{d×Nθ} can either be resampled at the start of every discrete time step, or sampled once at the beginning of the algorithm and reused in every step. These two strategies lead to DPSWflow-r (Algorithm 1) and DPSWflow (Algorithm 2); the latter is detailed in Appendix C.3. The main difference between the two lies in the sampling of the projections θ of the sliced Wasserstein distance. In DPSWflow-r, we resample Nθ projections at each iteration of the flow, leading to a more expensive iteration. In DPSWflow, we sample all the Nθ projections in advance (typically a larger amount) and use them in all subsequent iterations by subsampling: at each iteration, we subsample a set of projections among the pre-computed Nθ projections. This enables us to save on the privacy budget, as it implies "free iterations" in terms of privacy. The choice between resampling or pre-sampling projections was not explicit in the prior non-DP work of Liutkus et al. (2019): they adopt pre-sampling in the algorithm provided in their paper, whereas they resample projections in their code. In contrast, in our paper, we explicitly highlight the importance of this choice, for both privacy and performance. Before clarifying this difference, we provide details regarding how privacy guarantees are handled in the gradient flow. 
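Putting Eqs. (18), (20), and (21) together, one step of the particle flow can be sketched as follows. The per-slice quantile composition is the empirical one-dimensional transport map of Eq. (3), applied to the noised projections; sign conventions and all names are our illustrative choices (particles are pushed toward the target), not the authors' reference implementation:

```python
import numpy as np

def dpswflow_step(X, Y, n_proj, sigma, lam, h, rng):
    """One sketched DPSWflow-r update: noisy projections (Gaussian
    mechanism), per-slice 1-D transport drift, then diffusion.
    X: particles, Y: private target samples (equal row counts)."""
    n, d = X.shape
    theta = rng.standard_normal((d, n_proj))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)
    Xp = X @ theta + sigma * rng.standard_normal((n, n_proj))
    Yp = Y @ theta + sigma * rng.standard_normal((Y.shape[0], n_proj))
    # psi'(z) = z - (target quantile at the particle's rank), per slice,
    # estimated from sorted samples of equal size.
    Yp_sorted = np.sort(Yp, axis=0)
    ranks = np.argsort(np.argsort(Xp, axis=0), axis=0)
    psi_prime = Xp - np.take_along_axis(Yp_sorted, ranks, axis=0)
    drift = (psi_prime @ theta.T) / n_proj       # Monte Carlo over slices
    return X - h * drift + np.sqrt(2.0 * lam * h) * rng.standard_normal((n, d))

rng = np.random.default_rng(3)
Y = rng.normal(4.0, 1.0, (800, 2))   # "private" target samples
X = rng.normal(0.0, 1.0, (800, 2))   # flow particles
for _ in range(200):
    X = dpswflow_step(X, Y, n_proj=32, sigma=0.2, lam=1e-3, h=0.5, rng=rng)
```

Under these illustrative settings the particle cloud drifts from its initialization around the origin toward the target samples, while only noised projections of the private data are ever touched.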
## 3.3 Privacy Guarantee

In this subsection, we analyze the DP guarantee of the particle scheme outlined in the previous subsection. In our gradient flow algorithm, there are two independent levels of Gaussian mechanisms: (i) in the drift term, at the level of the random projection of the data $X$ through the matrix $\Theta$, and (ii) at the addition of the diffusion term $\sqrt{2\lambda h}Z$ in the particle update Eq. (18). We then comment on how privacy impacts each of our particle flows (Algorithms 1 and 2), both defined by Eq. (18).

## 3.3.1 Privacy Guarantee Arising From The Random Projection

To track the privacy guarantee arising from the random projection, we define a randomized mechanism $\mathcal{M}_{N_{\theta},\sigma}:\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times N_{\theta}}$ as:

$$\mathcal{M}_{N_{\theta},\sigma}(X)=X\Theta+Z_{\sigma},\tag{22}$$

where $\Theta$ is the random projection matrix and $Z_{\sigma}\in\mathbb{R}^{n\times N_{\theta}}$ consists of i.i.d. Gaussian random variables with variance $\sigma^{2}$. Given $X$ composed of $\{x_{i}\}_{i}\sim\nu$, the positions of these particles can be updated based on the drift of Eq. (20), which leverages $\mathcal{M}_{N_{\theta},\sigma}(X)$ for computing Eq. (21).

**Algorithm 1:** DP Sliced Wasserstein Flow with resampling of θ's: DPSWflow-r.

1: **Input:** $Y=[y_{1}^{\top},\dots,y_{n}^{\top}]^{\top}\in\mathbb{R}^{n\times d}$, i.e., $n$ i.i.d. samples from the target distribution $\nu$; number of projections $N_{\theta}$; regularization parameter $\lambda$; variance $\sigma$; step size $h$.
2: **Output:** $X=[x_{1}^{\top},\dots,x_{n}^{\top}]^{\top}\in\mathbb{R}^{n\times d}$
3: *// Initialize the particles*
4: $\{x_{i}\}_{i=1}^{n}\sim\mu_{0}$, $X=[x_{1}^{\top},\dots,x_{n}^{\top}]^{\top}\in\mathbb{R}^{n\times d}$
5: *// Iterations of the flow*
6: **for** $k=0,\dots,K-1$ **do**
7: $\{\theta_{j}\}_{j=1}^{N_{\theta}}\sim\mathrm{Unif}(S^{d-1})$, $\Theta=[\theta_{1},\dots,\theta_{N_{\theta}}]$
8: Sample $Z_{X},Z_{Y}\in\mathbb{R}^{n\times N_{\theta}}$ from i.i.d. $\mathcal{N}(0,\sigma^{2})$
9: Compute the inverse CDF of $Y\Theta+Z_{Y}$
10: Compute the CDF of $X\Theta+Z_{X}$
11: Compute $\widehat{v}_{k}^{(\sigma)}(x_{i})$ using Eq. (20)
12: $x_{i}\leftarrow x_{i}+h\,\widehat{v}_{k}^{(\sigma)}(x_{i})+\sqrt{2\lambda h}\,z$, $z\sim\mathcal{N}(0,I_{d})$
13: **end for**
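The loop of Algorithm 1 can be condensed into a short NumPy sketch. This is our own illustrative implementation with made-up default parameters; no claim is made that it matches the authors' released code:

```python
import numpy as np

rng = np.random.default_rng(0)

def dpswflow_r(Y, n_particles, n_proj=30, lam=1e-3, sigma=0.05, h=0.2, n_steps=200):
    """Sketch of Algorithm 1 (DPSWflow-r). Y: (n, d) private samples from the
    target distribution nu; returns the particle matrix X after n_steps updates."""
    n, d = Y.shape
    X = rng.standard_normal((n_particles, d))  # particles drawn from mu_0
    for _ in range(n_steps):
        # Line 7: resample projections uniformly on the sphere S^{d-1}.
        Theta = rng.standard_normal((d, n_proj))
        Theta /= np.linalg.norm(Theta, axis=0, keepdims=True)
        # Line 8: Gaussian mechanism on the projected data and particles.
        YT = Y @ Theta + sigma * rng.standard_normal((n, n_proj))
        XT = X @ Theta + sigma * rng.standard_normal((n_particles, n_proj))
        drift = np.zeros_like(X)
        for j in range(n_proj):
            z = XT[:, j]
            # Lines 9-11: psi'(z) = z - F_target^{-1}(F_particles(z)) via ranks.
            u = (np.argsort(np.argsort(z)) + 0.5) / z.size
            q = np.quantile(YT[:, j], u)
            drift -= np.outer(z - q, Theta[:, j]) / n_proj  # Eq. (20)
        # Line 12: Euler-Maruyama step of Eq. (18).
        X = X + h * drift + np.sqrt(2.0 * lam * h) * rng.standard_normal(X.shape)
    return X
```

On a simple 2-D Gaussian target, the particles drift toward the target while the diffusion term keeps them spread out.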
Hence, if $\mathcal{M}_{N_{\theta},\sigma}(X)$ is $(\varepsilon,\delta)$-DP, then by the post-processing property of DP (Dwork et al., 2006), the computation of the drift term for all $n$ particles in one time step is also $(\varepsilon,\delta)$-DP. To derive the DP guarantee for $\mathcal{M}_{N_{\theta},\sigma}(X)$, we use the following lemma.

Lemma 1 (Rakotomamonjy & Ralaivola (2021)). *For data matrices $X,X^{\prime}\in\mathbb{R}^{n\times d}$ that differ in only the $i$-th row, satisfying $\|X_{i}-X^{\prime}_{i}\|_{2}\leq1$, a random projection matrix $\Theta\in\mathbb{R}^{d\times N_{\theta}}$ whose columns are randomly sampled from $S^{d-1}$, and $N_{\theta}$ large enough ($N_{\theta}>30$), the following bound holds with probability at least $1-\delta$: $\|X\Theta-X^{\prime}\Theta\|_{F}^{2}\leq w(N_{\theta},\delta)$, with*

$$w(N_{\theta},\delta)=\frac{N_{\theta}}{d}+\frac{z_{1-\delta}}{d}\sqrt{\frac{2N_{\theta}(d-1)}{d+2}},\tag{23}$$

*where $z_{1-\delta}=\Phi^{-1}(1-\delta)$ and $\Phi$ is the CDF of a zero-mean unit-variance Gaussian distribution.*

## 3.3.2 Privacy Guarantee Arising From The Diffusion Term

To track the privacy guarantee arising from the diffusion term, we define a Markov operator $\mathcal{K}_{h,\lambda}:\mathbb{R}^{n\times d}\to\mathbb{R}^{n\times d}$ as

$$\mathcal{K}_{h,\lambda}(X)=\left[h\widehat{v}_{k}^{(\sigma)}(X^{i})+\sqrt{2\lambda h}Z\right]_{i=1}^{n},\tag{24}$$

where $\widehat{v}_{k}^{(\sigma)}(x)$ is the drift term defined in Eq. (20) and $Z\sim\mathcal{N}(0,I_{d})$. The following lemma from Balle et al. (2019) can then be used to characterize the privacy guarantee resulting from both sources of randomness.

Lemma 2 (Balle et al. (2019)). *Let $\mathcal{K}:\mathcal{X}\to\mathcal{Y}$ be a Markov operator satisfying the following condition for any $x,x^{\prime}\in\mathcal{X}$:*

$$\|\mathcal{K}(x)-\mathcal{K}(x^{\prime})\|_{\mathrm{TV}}\leq\gamma.\tag{25}$$

*Then for any $(\varepsilon,\delta)$-DP randomized mechanism $\mathcal{M}$, $\mathcal{K}\circ\mathcal{M}$ is $(\varepsilon,\gamma\delta)$-DP.*

Leveraging Lemmas 1 and 2, we get the following theorem on the privacy guarantee of Algorithms 1 and 2.

Theorem 3. *Under the setup of Lemma 1, the particle update in Eq. (18) is $\left(\frac{c\,w(N_{\theta},\delta)}{\sigma},\sqrt{\frac{h}{2\lambda}}\,\delta\right)$-DP, where $c$ is a constant satisfying $c^{2}>2\ln(1.25/\delta)$.*

Proof.
For data matrices $X,X^{\prime}\in\mathbb{R}^{n\times d}$ that differ in only the $i$-th row, satisfying $\|X_{i}-X^{\prime}_{i}\|_{2}\leq1$,

$$D_{\mathrm{KL}}\left(\mathcal{K}_{h,\lambda}(X),\mathcal{K}_{h,\lambda}(X^{\prime})\right)=D_{\mathrm{KL}}\left(\mathcal{N}(h\widehat{v}^{(\sigma)}(X_{i}),2h\lambda I_{d}),\,\mathcal{N}(h\widehat{v}^{(\sigma)}(X^{\prime}_{i}),2h\lambda I_{d})\right)=\frac{1}{2(2h\lambda)}\|h\widehat{v}^{(\sigma)}(X_{i})-h\widehat{v}^{(\sigma)}(X^{\prime}_{i})\|^{2}\leq\frac{h}{\lambda},\tag{26}$$

where the last inequality follows from the observation that $\|\widehat{v}^{(\sigma)}(X_{i})\|\leq1$, so that $\|h\widehat{v}^{(\sigma)}(X_{i})-h\widehat{v}^{(\sigma)}(X^{\prime}_{i})\|\leq2h$. Hence,

$$\|\mathcal{K}_{h,\lambda}(X)-\mathcal{K}_{h,\lambda}(X^{\prime})\|_{\mathrm{TV}}\leq\sqrt{\frac{1}{2}D_{\mathrm{KL}}\left(\mathcal{K}_{h,\lambda}(X),\mathcal{K}_{h,\lambda}(X^{\prime})\right)}\leq\sqrt{\frac{h}{2\lambda}}.\tag{27}$$

Plugging the sensitivity bound from Lemma 1 into the Gaussian mechanism presented in Section 2.3, we see that the mechanism $\mathcal{M}_{N_{\theta},\sigma}$ is $(\varepsilon,\delta)$-DP for $\sigma=\frac{c\,w(N_{\theta},\delta)}{\varepsilon}$. The desired result then follows from an application of Lemma 2 to the post-processed mechanism $\mathcal{K}_{h,\lambda}\circ\mathcal{M}_{N_{\theta},\sigma}$. □

We observe that the privacy parameter $\varepsilon$ degrades linearly with the number of projections $N_{\theta}$, while the $\delta$ parameter decreases with the step size of the discretization. The sensitivity scales as $1/\sqrt{d}$; thus, the higher the dimension $d$, the more private the mechanism.

## 3.3.3 On The Impact Of (Re-)Sampling On Privacy

Algorithms 1 and 2 differ in one element: in the former, we choose a small $N_{\theta}$ that is resampled at each step of the flow, whilst in the latter, we choose a large $N_{\theta}$ from which we subsample during the flow. The choice between resampling (Algorithm 1) and reusing (Algorithm 2) the random projections between iterations has an important effect on the resulting performance at similar privacy guarantees.
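Lemma 1 and Theorem 3 translate into a few lines of arithmetic. The sketch below is our own helper (not from the paper's code): it evaluates the sensitivity bound $w(N_\theta,\delta)$ of Eq. (23) and the per-update DP parameters of Theorem 3.

```python
import math
from statistics import NormalDist

def w_bound(n_proj, delta, d):
    """Sensitivity bound w(N_theta, delta) of Eq. (23); valid for N_theta > 30."""
    z = NormalDist().inv_cdf(1.0 - delta)  # z_{1-delta} = Phi^{-1}(1 - delta)
    return n_proj / d + (z / d) * math.sqrt(2.0 * n_proj * (d - 1) / (d + 2))

def theorem3_parameters(sigma, h, lam, n_proj, d, delta):
    """Per-update DP parameters from Theorem 3: the particle update is
    (c * w / sigma, sqrt(h / (2 * lam)) * delta)-DP, with c^2 > 2 ln(1.25/delta)."""
    c = math.sqrt(2.0 * math.log(1.25 / delta)) + 1e-9  # just above the threshold
    eps = c * w_bound(n_proj, delta, d) / sigma
    return eps, math.sqrt(h / (2.0 * lam)) * delta
```

Consistent with the discussion above, $\varepsilon$ grows with $N_\theta$ and shrinks as $\sigma$ grows, while the diffusion term multiplies $\delta$ by $\sqrt{h/(2\lambda)}$, which is below 1 whenever $h<2\lambda$.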
Our intuition is that the DP bound is tighter for DPSWflow-r because we can use the moments accountant (Abadi et al., 2016) to derive it: this composition of DP operations across iterations is cheaper in privacy than spending the whole budget at once at the beginning of the gradient flow. Moreover, resampling enables unbiased gradient estimates across iterations, which, we suppose, is important for generation performance. Experimental results are reported for both mechanisms in Section 4.5.

## 4 Experiments

In this section, we evaluate our method within a generative modeling context. The primary objective is to validate our theoretical framework and showcase the behavior of our approach, rather than to strive for state-of-the-art results in generative modeling.

## 4.1 Toy Problem

We use a toy problem to illustrate how our differentially private sliced Wasserstein gradient flow behaves compared to the vanilla non-DP baseline. Here, we set $N_{\theta}=200$, $h=1$, and $\lambda=0.001$. Examples are provided in Fig. 1. We see that the sliced Wasserstein particle flow can correctly approximate the target distribution, which is composed of 5 Gaussians, as measured by the SWD. With Gaussian smoothing at σ = 0.5, the particles are still able to match the target distribution, although the samples are more dispersed than for the noiseless SWF, leading to an SWD value of 0.81 instead of 0.08. Finally, for σ = 1, our approach struggles to match the true distribution, although many particles still lie within its level sets.

## 4.2 Comparisons And Baselines

To demonstrate our claims, we test our method on three mechanisms: DPSWflow-r (Algorithm 1), DPSWflow (Algorithm 2), and DPSWgen. DPSWgen is a generator-based model (different from the flow used for DPSWflow-r and DPSWflow) adapted from Rakotomamonjy & Ralaivola (2021).
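The SWD values quoted in the toy problem can be estimated with the usual project-and-sort Monte Carlo scheme. A minimal NumPy sketch, as our own helper (assuming equal sample sizes; not the paper's code):

```python
import numpy as np

def sliced_w2(X, Y, n_proj=200, seed=0):
    """Monte Carlo estimate of SW_2 between two empirical measures with the
    same number of samples: root-mean-square of 1-D W_2 over random projections."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Projections sampled uniformly on the sphere S^{d-1}.
    Theta = rng.standard_normal((d, n_proj))
    Theta /= np.linalg.norm(Theta, axis=0, keepdims=True)
    # Sorting each projected sample gives the 1-D optimal coupling.
    XT = np.sort(X @ Theta, axis=0)
    YT = np.sort(Y @ Theta, axis=0)
    return float(np.sqrt(np.mean((XT - YT) ** 2)))
```

For two point clouds differing only by a shift $c$, the estimate concentrates around $\|c\|/\sqrt{d}$.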
DPSWgen employs the differentially private sliced Wasserstein distance as the loss for training the generator, while our contribution is to build a gradient flow that is differentially private because its drift term originates from the DP SWD. The two approaches are thus fundamentally distinct.

![9_image_0.png](9_image_0.png)

Figure 1: Examples of particle flows for (top) the sliced Wasserstein flow, (middle) our DPSWflow with σ = 0.5, and (bottom) σ = 1. Each panel shows the level sets (in black) of the target distribution, which is composed of 5 Gaussians, as well as the particles (in green). The columns depict the particles after the (left) first, (middle) 10-th, and (right) 200-th steps of the flow.

To maintain a suitably low input-space dimension, mitigating the curse of dimensionality and reducing computational cost, Algorithms 1 and 2 are preceded by an autoencoder with latent space $Z_{\mu}$: they take $Z_{\mu}\subseteq\mathbb{R}^{d}$ as their input space and ensure differential privacy using the DP gradient flow. To ensure a fair comparison, DPSWgen is preceded by the same autoencoder and likewise takes $Z_{\mu}$ as its input space.

We evaluate the three algorithms using the Fréchet inception distance (FID, Heusel et al., 2018). In our results, we present each method at three levels of differential privacy, ε = ∞ (no privacy), ε = 10, and ε = 5, along with the corresponding FID scores. In this context, the generated images from each model are expected to yield the best achievable FID score when ε = ∞.

## 4.3 Sensitivity And Privacy Budget Tracking

For both DPSWflow-r and DPSWgen, we monitor the privacy budget using the Gaussian moments accountant method proposed by Abadi et al.
(2016), where we choose a range of σ's satisfying the constraint $\sigma\geq\frac{c\,w(N_{\theta},\delta)}{\varepsilon}$, where $w$ is the sensitivity bound from Lemma 1 and $c^{2}>2\ln(1.25/\delta)$, and we use the moments accountant to obtain the corresponding ε's. Moreover, to prevent privacy leakage, we normalize the latent space of the autoencoder (which is used as input to the flow and the generator) to norm 1, so we incur an additional factor of 2 in the sensitivity bound.

## 4.4 Settings And Datasets

For all three models, DPSWflow, DPSWflow-r, and DPSWgen, we pre-train an autoencoder and then use a DP sliced Wasserstein flow component or a DP generator, respectively. To uphold the integrity of the differential privacy framework and mitigate potential privacy breaches, we conducted separate pre-training procedures for the autoencoder and the flows/generator using distinct datasets: a publicly available dataset for the autoencoder and a confidential dataset for the flows/generator. In practice, we partitioned the training set X into two distinct segments of equal size, denoted as Xpub and Xpriv. We trained the autoencoder on Xpub, then computed the encoded representation of Xpriv in the latent space and used it as input to DPSWflow, DPSWflow-r, and DPSWgen. Furthermore, as mentioned in Section 4.3, to prevent privacy leakage we normalize the latent space before adding Gaussian noise, ensuring that the encoded representations lie on a hypersphere. We assessed each method on three datasets: MNIST (LeCun et al., 1998), FashionMNIST (F-MNIST, Xiao et al., 2017), and CelebA (Liu et al., 2015). The experiments performed on the MNIST and FashionMNIST datasets use the same autoencoder architecture as the framework proposed by Liutkus et al. (2019).
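The latent-space normalization described above amounts to projecting each code onto the unit hypersphere, so that any two codes differ by at most 2 in Euclidean norm — the extra factor of 2 in the sensitivity bound. A one-function NumPy sketch (our own illustration):

```python
import numpy as np

def normalize_latents(Z, eps=1e-12):
    """Project each latent code (row of Z) onto the unit hypersphere."""
    return Z / (np.linalg.norm(Z, axis=1, keepdims=True) + eps)
```

After normalization, every row has unit norm, which bounds the distance between any two encoded representations by 2.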
| ε          | MNIST ∞ | 10  | 5   | F-MNIST ∞ | 10  | 5   | CelebA ∞ | 10  | 5   |
|------------|---------|-----|-----|-----------|-----|-----|----------|-----|-----|
| DPSWgen    | 114     | 128 | 203 | 138       | 172 | 205 | 171      | 209 | 215 |
| DPSWflow-r | 21      | 71  | 117 | 42        | 88  | 99  | 57       | 134 | 202 |
| DPSWflow   | 73      | 118 | 171 | 96        | 98  | 129 | 134      | 262 | 292 |

![10_image_0.png](10_image_0.png)

Table 1: FID results for each baseline, dataset, and privacy setting, averaged over 5 generation runs.

Figure 2: Generated images from DPSWflow-r (upper row) and DPSWgen (lower row) for MNIST, FashionMNIST, and CelebA with no DP: ε = ∞.

The experiments conducted on the CelebA dataset use an autoencoder architecture adapted from a DCGAN (Radford et al., 2016b). More details on these architectures are given in Appendix C.2.

## 4.5 Experimental Results

This section outlines our experimental findings, including the resulting FID scores (Table 1) and the generated samples for each of our experiments (Figs. 2 to 4). We compute the FID as the average of the FID scores obtained from five generation runs. For each experiment, we use δ > 0: $\delta=10^{-5}$ for MNIST and FashionMNIST, and $\delta=10^{-6}$ for CelebA. Fig. 2 presents the results from our model and the baseline without any differential privacy applied. Figs. 3 and 4 show the results for ε = 10 and ε = 5, respectively. We observe that our methods outperform the DPSWgen baseline for all privacy budgets tested in our experiments, both in terms of FID scores and the visual quality of the generated samples. Furthermore, the variant of our approach with resampling (DPSWflow-r) consistently outperforms the variant without resampling (DPSWflow).
These results support our discussion of the efficiency of resampling the projections at the end of Section 3.2. These experiments show that our approach is practically viable and can serve as a promising basis for future work on private generative models.

## 5 Conclusion

In this paper, we have introduced a novel, theoretically grounded method for differentially private generative modeling. Our approach leverages gradient flows within the Wasserstein space, with a drift term computed using the differentially private sliced Wasserstein distance. To the best of our knowledge, we are the first to

![11_image_0.png](11_image_0.png)

Figure 3: Generated images from DPSWflow-r (upper row) and DPSWgen (lower row) for MNIST, FashionMNIST, and CelebA with DP: ε = 10.

![11_image_1.png](11_image_1.png)

Figure 4: Generated images from DPSWflow-r (upper row) and DPSWgen (lower row) for MNIST, FashionMNIST, and CelebA with DP: ε = 5.

propose such a DP gradient flow approach. Our experiments have shown that our approach is practically viable, leading to generated samples of higher quality than those from generator-based models trained with the same DP metric at the same level of privacy. With both a strong theoretical foundation and experimental viability, we believe that our method forms a promising basis for future work in the area of private generative modeling.

## Broader Impact Statement

Some aspects of our contributions in this paper are theoretical in nature, and we do not foresee any adverse societal impacts resulting from them. Our experiments are run on small datasets and have a negligible carbon footprint. Our adherence to differential privacy principles ensures that our generative model aligns with privacy-preserving practices.

## References

Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy.
In *Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security*. ACM, oct 2016. doi: 10.1145/2976749.2978318. URL https://doi.org/10.1145%2F2976749.2978318.

Luigi Ambrosio. Gradient flows in metric spaces and in the spaces of probability measures, and applications to Fokker-Planck equations with respect to log-concave measures. *Bollettino dell'Unione Matematica Italiana*, 1(1):223–240, 2 2008. URL http://eudml.org/doc/290477.

Luigi Ambrosio, Nicola Gigli, and Giuseppe Savare. *Gradient Flows*. Lectures in Mathematics ETH Zürich. Birkhäuser Basel, 2005a. URL https://www2.stat.duke.edu/~sayan/ambrosio.pdf.

Luigi Ambrosio, Stefano Lisini, and Giuseppe Savaré. Stability of flows associated to gradient vector fields and convergence of iterated transport maps. *manuscripta mathematica*, 121:1–50, 01 2005b. doi: 10.1007/s00229-006-0003-0.

Michael Arbel, Anna Korba, Adil Salim, and Arthur Gretton. Maximum mean discrepancy gradient flow, 2019. URL https://arxiv.org/abs/1906.04370.

Borja Balle, Gilles Barthe, Marco Gaboardi, and Joseph Geumlek. Privacy amplification by mixing and diffusion mechanisms, 2019. URL https://arxiv.org/abs/1905.12264.

Clément Bonet, Nicolas Courty, François Septier, and Lucas Drumetz. Efficient gradient flows in sliced-Wasserstein space. *Transactions on Machine Learning Research*, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=Au1LNKmRvh.

Nicolas Bonnotte. *Unidimensional and evolution methods for optimal transportation*. PhD thesis, Université Paris Sud-Paris XI; Scuola normale superiore (Pise, Italie), 2013. URL https://www.normalesup.org/~bonnotte/doc/phd-bonnotte.pdf.

Mireille Bossy and Denis Talay. A stochastic particle method for the McKean-Vlasov and the Burgers equation. *Mathematics of computation*, 66(217):157–192, 1997. URL https://www-sop.inria.fr/members/Denis.Talay/fichiers-pdf/bossy-talay-mathcomp.pdf.

Yann Brenier. Polar factorization and monotone rearrangement of vector-valued functions.
Communications on pure and applied mathematics, 44(4):375–417, 1991. URL https://www.ceremade.dauphine.fr/ ~carlier/Brenier91.pdf. Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, and Karsten Kreis. Don't generate me: Training differentially private generative models with sinkhorn divergence, 2021. URL https://arxiv.org/abs/ 2111.01177. Nicolas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. In 32nd USENIX Security Symposium (USENIX Security 23), pp. 5253–5270, Anaheim, CA, August 2023. USENIX Association. ISBN 978-1-939133-37-3. URL https://www.usenix.org/conference/usenixsecurity23/ presentation/carlini. Dingfan Chen, Tribhuvanesh Orekondy, and Mario Fritz. Gs-wgan: A gradient-sanitized approach for learning differentially private generators. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), *Advances in Neural Information Processing Systems*, volume 33, pp. 12673–12684. Curran Associates, Inc., 2020a. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/ 9547ad6b651e2087bac67651aa92cd0d-Paper.pdf. Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz. Gan-leaks: A taxonomy of membership inference attacks against generative models. In *Proceedings of the 2020 ACM SIGSAC Conference on Computer* and Communications Security, CCS '20. ACM, October 2020b. doi: 10.1145/3372297.3417238. URL http://dx.doi.org/10.1145/3372297.3417238. Yunzi Ding and Jonathan Niles-Weed. Asymptotics of smoothed wasserstein distances in the small noise regime, 2022. URL https://arxiv.org/abs/2206.06452. Tim Dockhorn, Tianshi Cao, Arash Vahdat, and Karsten Kreis. Differentially private diffusion models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/ forum?id=ZPpQk7FJXF. Cynthia Dwork. A firm foundation for private data analysis. *Commun. ACM*, 54:86–95, 04 2011. 
doi: 10.1145/1866739.1866758. Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3–4):211–407, aug 2014. ISSN 1551-305X. doi: 10.1561/0400000042. URL https://doi.org/10.1561/0400000042. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In *Theory of Cryptography Conference*, volume Vol. 3876, pp. 265–284, 01 2006. ISBN 978-3-540-32731-8. doi: 10.1007/11681878_14. Jiaojiao Fan, Qinsheng Zhang, Amirhossein Taghvaei, and Yongxin Chen. Variational wasserstein gradient flow, 2022. URL https://arxiv.org/abs/2112.02424. Kunal Garg and Dimitra Panagou. Fixed-time stable gradient flows: Applications to continuous-time optimization. *IEEE Transactions on Automatic Control*, 66(5):2002–2015, May 2021. ISSN 2334-3303. doi: 10.1109/tac.2020.3001436. URL http://dx.doi.org/10.1109/TAC.2020.3001436. Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Samuel L. Smith, Olivia Wiles, and Borja Balle. Differentially private diffusion models generate useful synthetic images, 2023. URL https://arxiv.org/abs/2302.13861. Ziv Goldfeld and Kristjan Greenewald. Gaussian-smoothed optimal transport: Metric structure and statistical efficiency. In Silvia Chiappa and Roberto Calandra (eds.), *Proceedings of the Twenty Third International* Conference on Artificial Intelligence and Statistics, volume 108 of *Proceedings of Machine Learning Research*, pp. 3327–3337. PMLR, 26–28 Aug 2020. URL https://proceedings.mlr.press/v108/goldfeld20a. html. Ziv Goldfeld, Kristjan Greenewald, and Kengo Kato. Asymptotic guarantees for generative modeling based on the smooth wasserstein distance, 2020. URL https://arxiv.org/abs/2002.01012. Frederik Harder, Kamil Adamczewski, and Mijung Park. DP-MERF: Differentially private mean embeddings with random features for practical privacy-preserving data generation. 
In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*, volume 130 of *Proceedings of Machine Learning Research*, pp. 1819–1827. PMLR, 13–15 Apr 2021. URL https://proceedings.mlr.press/v130/harder21a.html. Frederik Harder, Milad Jalali, Danica J. Sutherland, and Mijung Park. Pre-trained perceptual features improve differentially private image generation. *Transactions on Machine Learning Research*, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=R6W7zkMz0P. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018. URL https://arxiv.org/ abs/1706.08500. Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, and Xuyun Zhang. Membership inference attacks on machine learning: A survey, 2022. URL https://arxiv.org/abs/2103.07853. Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker-Planck equation. *SIAM Journal on Mathematical Analysis*, 29(1):1–17, 1998. Marie-Sarah Lacharité, Brice Minaud, and Kenneth G Paterson. Improved reconstruction attacks on encrypted data using range query leakage. In *2018 IEEE Symposium on Security and Privacy (SP)*, pp. 297–314. IEEE, 2018. URL https://eprint.iacr.org/2017/701.pdf. Nam Lê Tien, Amaury Habrard, and Marc Sebban. Differentially private optimal transport: Application to domain adaptation. In *IJCAI*, pp. 2852–2858, 2019. URL https://www.ijcai.org/proceedings/2019/ 0395.pdf. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. *Proceedings of the IEEE*, 86(11):2278–2324, November 1998. URL http://vision.stanford. edu/cs598_spring07/papers/Lecun98.pdf. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild, 2015. URL https://arxiv.org/abs/1411.7766. 
Antoine Liutkus, Umut Şimşekli, Szymon Majewski, Alain Durmus, and Fabian-Robert Stöter. Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions, 2019. URL https://arxiv.org/abs/1806.08141.

Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl Gunter, and Bo Li. G-PATE: Scalable differentially private data generator via private aggregation of teacher discriminators. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 2965–2977. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/171ae1bbb81475eb96287dd78565b38b-Paper.pdf.

Guangcan Mai, Kai Cao, Pong C Yuen, and Anil K Jain. On the reconstruction of face images from deep face templates. *IEEE transactions on pattern analysis and machine intelligence*, 41(5):1188–1202, 2018. URL https://arxiv.org/pdf/1703.00832.

Petr Mokrov, Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, and Evgeny Burnaev. Large-scale Wasserstein gradient flows, 2021. URL https://arxiv.org/abs/2106.00736.

Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, and Umut Şimşekli. Statistical and topological properties of sliced probability divergences, 2022. URL https://arxiv.org/abs/2003.05783.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In *Advances in Neural Information Processing Systems*, volume 32. 2019.

Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science.
Foundations and Trends® *in Machine Learning*, 11(5-6):355–607, 2019. ISSN 1935-8237. doi: 10.1561/ 2200000073. URL http://dx.doi.org/10.1561/2200000073. Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision: Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29–June 2, 2011, Revised Selected Papers 3, pp. 435–446. Springer, 2012. URL https://hal.science/hal-00476064/document. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In *International Conference on Learning Representations*, 2016a. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks, 2016b. URL https://arxiv.org/abs/1511.06434. Alain Rakotomamonjy and Liva Ralaivola. Differentially private sliced wasserstein distance. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 8810–8820. PMLR, 18–24 Jul 2021. Alain Rakotomamonjy, Mokhtar Z. Alaya, Maxime Berar, and Gilles Gasso. Statistical and topological properties of gaussian smoothed sliced probability divergences, 2021. URL https://arxiv.org/abs/2110. 10524. Filippo Santambrogio. Euclidean, Metric, and Wasserstein gradient flows: an overview, 2016. URL https://arxiv.org/abs/1609.03890. Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models, 2017. URL https://arxiv.org/abs/1610.05820. Nicki Skafte Detlefsen, Jiri Borovec, Justus Schock, Ananya Harsh, Teddy Koker, Luca Di Liello, Daniel Stancl, Changsheng Quan, Maxim Grechkin, and William Falcon. TorchMetrics - measuring reproducibility in PyTorch, February 2022. 
URL https://github.com/Lightning-AI/torchmetrics.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017. URL https://arxiv.org/abs/1708.07747.

Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, and Jiayu Zhou. Differentially private generative adversarial network, 2018. URL https://arxiv.org/abs/1802.06739.

## A Proof Of Theorem 1

We begin by presenting two propositions that generalize Proposition 5.1.6 and Proposition 5.1.7 of Bonnotte (2013), respectively. These propositions play a crucial role in the proof of Theorem 1 and constitute a key element of novelty in our proof compared to the proof of Theorem 2 in Liutkus et al. (2019). Indeed, the $\mathcal{G}_{\sigma}\mathcal{SW}_{2}$ metric is not a simple application of the sliced Wasserstein metric to Gaussian-convoluted measures: the convolution with the Gaussian measure in the $\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}$ metric occurs within the surface integral, separately for each one-dimensional projection of the original measures. This distinction becomes clear when comparing Eq. (4) with Eq. (9). Hence, establishing a DP gradient flow presented a unique challenge and makes a distinct contribution. The distinct nature of the $\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}$ metric introduces two obstacles that need to be circumvented before applying results from Bonnotte (2013) and Liutkus et al. (2019), namely the existence and regularity of minimizers of the functional in Eq. (10). Both of these steps are non-trivial.

Proposition 2. *Let $\mu,\nu\in\mathcal{P}(\Omega)$. For any $\bar{\mu}\in\mathcal{P}(\Omega)$,*

$$\lim_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}((1-\varepsilon)\mu+\varepsilon\bar{\mu},\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}=\fint_{S^{d-1}}\int_{\Omega}\psi_{\mu,\theta}^{(\sigma)}(\langle\theta,x\rangle)\,\mathrm{d}(\bar{\mu}-\mu)\,\mathrm{d}\theta,$$

*where $\psi^{(\sigma)}_{\mu,\theta}$ is a Kantorovich potential between $\theta_{\sharp}\mu*\mathcal{N}_{\sigma}$ and $\theta_{\sharp}\nu*\mathcal{N}_{\sigma}$.*

Proof.
Since $\theta_{\sharp}\nu*\mathcal{N}_{\sigma}$ is absolutely continuous with respect to the Lebesgue measure for any $\theta\in S^{d-1}$, there indeed exists a Kantorovich potential $\psi^{(\sigma)}_{\mu,\theta}$ between $\theta_{\sharp}\mu*\mathcal{N}_{\sigma}$ and $\theta_{\sharp}\nu*\mathcal{N}_{\sigma}$. Since $\psi^{(\sigma)}_{\mu,\theta}$ may not be optimal between $(1-\varepsilon)\mu+\varepsilon\bar{\mu}$ and $\nu$,

$$\liminf_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}((1-\varepsilon)\mu+\varepsilon\bar{\mu},\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}\geq\fint_{S^{d-1}}\int_{\Omega}\psi_{\mu,\theta}^{(\sigma)}(\langle\theta,x\rangle)\,\mathrm{d}(\bar{\mu}-\mu)\,\mathrm{d}\theta.$$

Conversely, let $\psi^{(\sigma,\varepsilon)}_{\mu,\theta}$ be a Kantorovich potential between $\theta_{\sharp}[(1-\varepsilon)\mu+\varepsilon\bar{\mu}]*\mathcal{N}_{\sigma}$ and $\theta_{\sharp}\nu*\mathcal{N}_{\sigma}$, normalized with respect to $\int\psi^{(\sigma,\varepsilon)}_{\mu,\theta}\,\mathrm{d}\theta_{\sharp}[(1-\varepsilon)\mu+\varepsilon\bar{\mu}]*\mathcal{N}_{\sigma}$. Then,

$$\frac{1}{2}\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}((1-\varepsilon)\mu+\varepsilon\bar{\mu},\nu)-\frac{1}{2}\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)\leq\varepsilon\fint\int\psi_{\mu,\theta}^{(\sigma,\varepsilon)}(\langle\theta,x\rangle)\,\mathrm{d}(\bar{\mu}-\mu)\,\mathrm{d}\theta.$$

As in the proof of Proposition 5.1.6 in Bonnotte (2013), $\psi^{(\sigma,\varepsilon)}_{\mu,\theta}$ uniformly converges to a Kantorovich potential for $(\theta_{\sharp}\mu*\mathcal{N}_{\sigma},\theta_{\sharp}\nu*\mathcal{N}_{\sigma})$ as $\varepsilon\to0^{+}$. Hence,

$$\limsup_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}((1-\varepsilon)\mu+\varepsilon\bar{\mu},\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}\leq\fint\int\psi_{\mu,\theta}^{(\sigma)}(\langle\theta,x\rangle)\,\mathrm{d}(\bar{\mu}-\mu)\,\mathrm{d}\theta.$$

Combining the lower and upper bounds above, we get the desired result. □

Proposition 3. *Let $\mu,\nu\in\mathcal{P}(\Omega)$.*
*For any diffeomorphism $\zeta$ of $\Omega$,*

$$\lim_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}=\fint_{S^{d-1}}\int_{\Omega}(\psi_{\mu,\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)\langle\theta,\zeta(x)\rangle\,\mathrm{d}\mu\,\mathrm{d}\theta,\tag{32}$$

*where $\psi^{(\sigma)}_{\mu,\theta}$ is a Kantorovich potential between $\theta_{\sharp}\mu*\mathcal{N}_{\sigma}$ and $\theta_{\sharp}\nu*\mathcal{N}_{\sigma}$.*

Proof. Using the fact that $\psi^{(\sigma)}_{\mu,\theta}$ is a Kantorovich potential between $\theta_{\sharp}\mu*\mathcal{N}_{\sigma}$ and $\theta_{\sharp}\nu*\mathcal{N}_{\sigma}$, we get the following:

$$\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}\geq\fint_{S^{d-1}}\int_{\Omega}\frac{\psi_{\mu,\theta}^{(\sigma)}(\langle\theta,x+\varepsilon\zeta(x)\rangle)-\psi_{\mu,\theta}^{(\sigma)}(\langle\theta,x\rangle)}{2\varepsilon}\,\mathrm{d}\mu\,\mathrm{d}\theta.\tag{33}$$

Since the Kantorovich potential is Lipschitz, it is differentiable almost everywhere, and so, using Lebesgue's differentiation theorem:

$$\liminf_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}\geq\fint_{S^{d-1}}\int_{\Omega}(\psi_{\mu,\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)\langle\theta,\zeta(x)\rangle\,\mathrm{d}\mu\,\mathrm{d}\theta.\tag{34}$$

Conversely, we now show the same upper bound on the lim sup. Let $\gamma_{\theta,\sigma}\in\Pi(\theta_{\sharp}\mu*\mathcal{N}_{\sigma},\theta_{\sharp}\nu*\mathcal{N}_{\sigma})$ be the optimal transport plan corresponding to $\psi^{(\sigma)}_{\mu,\theta}$. As is done in Proposition 5.1.7 in Bonnotte (2013), we extend $\gamma_{\theta,\sigma}$ to $\pi_{\theta,\sigma}\in\Pi(\mu,\nu)$ such that $(\theta\otimes\theta)_{\sharp}\pi_{\theta,\sigma}=\gamma_{\theta,\sigma}$. In other words, if random variables $(X,Y)$ are sampled from $\pi_{\theta,\sigma}$, then $(\langle X,\theta\rangle,\langle Y,\theta\rangle)$ follows the law of $\gamma_{\theta,\sigma}$ for every $\theta\in S^{d-1}$. Then, it follows that $[\theta\circ(\mathrm{Id}+\varepsilon\zeta)\otimes\theta]_{\sharp}\pi_{\theta,\sigma}\in\Pi(\theta_{\sharp}[\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\theta_{\sharp}\nu)$.
Hence,

$$\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)\leq\fint\int|\langle\theta,x+\varepsilon\zeta(x)-y\rangle|^{2}-|\langle\theta,x-y\rangle|^{2}\,\mathrm{d}\pi_{\theta,\sigma}(x,y)\,\mathrm{d}\theta.$$

Since $\pi_{\theta,\sigma}$ is constructed from $\gamma_{\theta,\sigma}$, which in turn is based on the Kantorovich potential $\psi^{(\sigma)}_{\mu,\theta}$, we have $\langle\theta,y\rangle=\langle\theta,x\rangle-(\psi^{(\sigma)}_{\mu,\theta})^{\prime}(\langle\theta,x\rangle)$ for $\pi_{\theta,\sigma}$-a.e. $(x,y)$, because of the optimality of $\gamma_{\theta,\sigma}$ for the one-dimensional optimal transport. Therefore,

$$\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)\leq\fint\int|(\psi_{\mu,\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)+\varepsilon\langle\theta,\zeta(x)\rangle|^{2}-|(\psi_{\mu,\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)|^{2}\,\mathrm{d}\pi_{\theta,\sigma}(x,y)\,\mathrm{d}\theta.$$

Simplifying the right-hand side of the above equation and taking the limit $\varepsilon\to0^{+}$, we get the following:

$$\limsup_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}([\mathrm{Id}+\varepsilon\zeta]_{\sharp}\mu,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu,\nu)}{2\varepsilon}\leq\fint_{S^{d-1}}\int_{\Omega}(\psi_{\mu,\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)\langle\theta,\zeta(x)\rangle\,\mathrm{d}\mu\,\mathrm{d}\theta.\tag{35}$$

Combining Eq. (34) and Eq. (35), we get the desired result. □

We reproduce the following definition from Liutkus et al. (2019).

Definition 2 (Generalized Minimizing Movement Scheme (GMMS)). *Let $r>0$ and $F:\mathbb{R}_{+}\times\mathcal{P}(B(0,r))\times\mathcal{P}(B(0,r))\to\mathbb{R}$ be a functional.*
For h > 0, let µʰ : [0, ∞) → P(B(0, r)) be a piecewise constant trajectory for F starting at µ₀ ∈ P(B(0, r)), such that: (i) µʰ(0) = µ₀, (ii) µʰ(t) = µʰ(nh) for n = ⌊t/h⌋, and (iii) µʰ((n + 1)h) minimizes the functional ζ ↦ F(h, ζ, µʰ(nh)), for all n ∈ N. We say that µ̂ is a Minimizing Movement Scheme (MMS) for F starting at µ₀ if there exists a family of piecewise constant trajectories (µʰ)_{h>0} for F such that lim_{h→0} µʰ(t) = µ̂(t) for all t ∈ R₊. We say that µ̃ is a Generalized Minimizing Movement Scheme (GMMS) for F starting at µ₀ if there exists a family of piecewise constant trajectories (µ^{hₙ})_{n∈N} for F such that lim_{n→∞} µ^{hₙ}(t) = µ̃(t) for all t ∈ R₊.

Theorem 4 (Existence of solution to the minimization functional). Let ν ∈ P(B(0, 1)) and r > √d. For any µ₀ ∈ P(B(0, r)) with a density ρ₀ ∈ L∞(B(0, r)) and h > 0, there exists a µ̂ ∈ P(B(0, r)) that minimizes the following functional:

$$\mathcal{G}(\mu)=\mathcal{F}_{\lambda,\sigma}^{\nu}(\mu)+\frac{1}{2h}\mathcal{W}_{2}^{2}(\mu,\mu_{0}),\tag{36}$$

where F^ν_{λ,σ}(µ) is given by Eq. (10). Moreover, µ̂ admits a density ρ̂ on B(0, r).

Proof. We note that P(B(0, r)) is compact for weak convergence (and equivalently for convergence in W₂). Hence, showing that G(µ) is lower semi-continuous on P(B(0, r)) would suffice to show the existence of a solution µ̂. By Lemma 9.4.3 of Ambrosio (2008), H is lower semi-continuous. By Rakotomamonjy & Ralaivola (2021), GσSW₂(µ, ν) is symmetric and satisfies the triangle inequality. Moreover, GσSW₂(µ, ν) ≤ SW₂(µ, ν) for any σ ≥ 0.
Hence, for any ξ, ξ′ ∈ P(B(0, r)),

$$|\mathcal{G}_{\sigma}\mathcal{SW}_{2}(\xi,\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}(\xi^{\prime},\nu)|\leq\mathcal{G}_{\sigma}\mathcal{SW}_{2}(\xi,\xi^{\prime})\leq\mathcal{SW}_{2}(\xi,\xi^{\prime})\leq c_{d}\,\mathcal{W}_{2}(\xi,\xi^{\prime}),$$

where c_d > 0 is a constant only dependent on the dimension d, and the last inequality follows from Proposition 5.1.3 in Bonnotte (2013). Hence, there exists a minimizer µ̂ ∈ P(B(0, r)) of G(µ). Moreover, µ̂ must admit a density ρ̂, because otherwise H(µ̂) = ∞.

Lemma 3 (Regularity of the solution to the minimizing functional). Under the assumptions of Theorem 4, any minimizer µ̂ of G(µ) in Eq. (36) must admit a strictly positive density ρ̂ > 0 a.e., and ∥ρ̂∥_{L∞} ≤ (1 + h)^{d√d}∥ρ₀∥_{L∞}.

Proof. By Theorem 4, a minimizer µ̂ of G(µ) exists and admits a density ρ̂. Let µ̄ ∈ P(B(0, r)) be an arbitrary probability measure with density ρ̄. For ε ∈ (0, 1), let ρε = (1 − ε)ρ̂ + ερ̄ and let µε ∈ P(B(0, r)) be the probability measure corresponding to the density ρε. By the optimality of ρ̂, we have that G(µ̂) ≤ G(µε). Hence,

$$0\geq\lim_{\varepsilon\to0^{+}}\frac{\mathcal{G}(\hat{\mu})-\mathcal{G}(\mu_{\varepsilon})}{\varepsilon}=\lim_{\varepsilon\to0^{+}}\frac{\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\hat{\mu},\nu)-\mathcal{G}_{\sigma}\mathcal{SW}_{2}^{2}(\mu_{\varepsilon},\nu)}{2\varepsilon}+\lambda\limsup_{\varepsilon\to0^{+}}\frac{\mathcal{H}(\hat{\mu})-\mathcal{H}(\mu_{\varepsilon})}{\varepsilon}+\lim_{\varepsilon\to0^{+}}\frac{\mathcal{W}_{2}^{2}(\hat{\mu},\mu_{0})-\mathcal{W}_{2}^{2}(\mu_{\varepsilon},\mu_{0})}{2h\varepsilon}$$

$$=-\fint_{\mathbb{S}^{d-1}}\int_{\Omega}\psi_{\hat{\mu},\theta}^{(\sigma)}(\langle\theta,x\rangle)\,\mathrm{d}(\bar{\mu}-\hat{\mu})\,\mathrm{d}\theta+\lambda\limsup_{\varepsilon\to0^{+}}\frac{\mathcal{H}(\hat{\mu})-\mathcal{H}(\mu_{\varepsilon})}{\varepsilon}-\frac{1}{h}\int_{\Omega}\phi\,\mathrm{d}(\bar{\mu}-\hat{\mu}),$$

where the last equality follows by combining Proposition 2 with Proposition 1.5.6 in (Bonnotte, 2013). Here, ψ^{(σ)}_{µ̂,θ} is a Kantorovich potential between θ♯µ̂ ∗ Nσ and θ♯ν ∗ Nσ as in Proposition 2, and ϕ is a Kantorovich potential between µ̂ and µ₀ for W₂. Rearranging, we get the following:

$$\limsup_{\varepsilon\to0^{+}}\frac{\mathcal{H}(\hat{\mu})-\mathcal{H}(\mu_{\varepsilon})}{\varepsilon}\leq\frac{1}{\lambda}\int_{\mathbb{B}(0,r)}\Psi\,\mathrm{d}(\bar{\mu}-\hat{\mu}),\tag{37}$$

where Ψ(x) := ⨍_{S^{d−1}} ψ^{(σ)}_{µ̂,θ}(⟨θ, x⟩) dθ + (1/h) ϕ(x).
From this point, for any µ₀ ∈ P(B(0, r)) with a density ρ₀ that is smooth and strictly positive, we get the desired result by following the proof strategy of Lemma 5.4.3 in (Bonnotte, 2013). For a more general µ₀ with a density ρ₀ ∈ L∞(B(0, r)), we again arrive at the desired result by following the proof strategy of Theorem S4 in (Liutkus et al., 2019), which proceeds by smoothing ρ₀ by convolution with a Gaussian.

Theorem 5 (Existence of GMMS). Under the assumptions of Theorem 4, there exists a GMMS (µt)_{t≥0} in P(B(0, r)), starting from µ₀, for the following functional:

$$\mathcal{F}_{\lambda,\sigma}^{\nu}(h,\mu_{nxt},\mu_{prev})=\mathcal{F}_{\lambda,\sigma}^{\nu}(\mu_{nxt})+\frac{1}{2h}\mathcal{W}_{2}^{2}(\mu_{nxt},\mu_{prev}).\tag{38}$$

Moreover, for any t > 0, µt has a density ρt such that ∥ρt∥_{L∞} ≤ e^{td√d}∥ρ₀∥_{L∞}.

Proof. The desired result follows straightforwardly by following the proof of Theorem S5 in Liutkus et al. (2019) or Theorem 5.5.3 in Bonnotte (2013), with the support of Lemma 3 and Theorem 4.

Theorem 6 (Continuity equation for GMMS). Under the assumptions of Theorem 4, let (µt)_{t≥0} be the GMMS given by Theorem 5. For θ ∈ S^{d−1}, let ψ^{(σ)}_{µt,θ} be the Kantorovich potential between P^θ♯µt ∗ ξσ and P^θ♯ν ∗ ξσ, with ξσ ∼ N(0, σ²). For t ≥ 0, the density ρt of µt satisfies the following continuity equation in a weak sense:

$$\frac{\partial\rho_{t}}{\partial t}=-\mathrm{div}(v_{t}^{(\sigma)}\rho_{t})+\lambda\Delta\rho_{t},\tag{39}$$

with:

$$v_{t}^{(\sigma)}(x)=v^{(\sigma)}(x,\mu_{t})=\fint_{\mathbb{S}^{d-1}}(\psi_{\mu_{t},\theta}^{(\sigma)})^{\prime}(\langle x,\theta\rangle)\,\theta\,\mathrm{d}\theta.$$

That is, for all ξ ∈ C∞_c([0, ∞) × B(0, r)),

$$\int_{0}^{\infty}\int_{\mathbb{B}(0,r)}\left[\frac{\partial\xi}{\partial t}(t,x)-\langle v_{t}^{(\sigma)},\nabla\xi(t,x)\rangle-\lambda\Delta\xi(t,x)\right]\rho_{t}(x)\,\mathrm{d}x\,\mathrm{d}t=-\int_{\mathbb{B}(0,r)}\xi(0,x)\rho_{0}(x)\,\mathrm{d}x.$$

Proof. We will closely follow the proof of Theorem S6 in Liutkus et al.
(2019) and Theorem 5.6.1 in Bonnotte (2013). Just as in the proof of Theorem S6 in Liutkus et al. (2019), we will proceed in five steps.

Step (1): By the definition of GMMS, there exists a family of piecewise constant trajectories (µ^{hₙ})_{n∈N} for F^ν_{λ,σ} such that lim_{n→∞} µ^{hₙ}_t = µt for all t ∈ R₊. Let ξ ∈ C∞_c([0, ∞) × B(0, r)) and let ξⁿ_k(x) denote ξ(khₙ, x). Using step 1 of the proof of Theorem S6 in Liutkus et al. (2019), we get:

$$\int_{\mathbb{B}(0,r)}\xi(0,x)\rho_{0}(x)\,\mathrm{d}x+\int_{0}^{\infty}\int_{\mathbb{B}(0,r)}\frac{\partial\xi}{\partial t}(t,x)\rho_{t}(x)\,\mathrm{d}x\,\mathrm{d}t=\lim_{n\to\infty}-h_{n}\sum_{k=1}^{\infty}\int_{\mathbb{B}(0,r)}\xi_{k}^{n}(x)\frac{\rho_{kh_{n}}^{h_{n}}(x)-\rho_{(k-1)h_{n}}^{h_{n}}(x)}{h_{n}}\,\mathrm{d}x.\tag{40}$$

Step (2): For any θ ∈ S^{d−1}, let ψ^{(σ),hₙ}_{µt,θ} be the Kantorovich potential between P^θ♯µ^{hₙ}_t ∗ ξσ and P^θ♯ν ∗ ξσ, with ξσ ∼ N(0, σ²). Using step 2 of the proof of Theorem S6 in Liutkus et al. (2019), we get:

$$\int_{0}^{\infty}\int_{\mathbb{B}(0,r)}\fint(\psi_{\mu_{t},\theta}^{(\sigma)})^{\prime}(\langle\theta,x\rangle)\langle\theta,\nabla\xi(x,t)\rangle\,\mathrm{d}\theta\,\mathrm{d}\mu_{t}(x)\,\mathrm{d}t=\lim_{n\to\infty}h_{n}\sum_{k=1}^{\infty}\int_{\mathbb{B}(0,r)}\fint(\psi_{\mu_{kh_{n}}^{h_{n}},\theta}^{(\sigma),h_{n}})^{\prime}(\langle\theta,x\rangle)\langle\theta,\nabla\xi_{k}^{n}\rangle\,\mathrm{d}\theta\,\mathrm{d}\mu_{kh_{n}}^{h_{n}}.\tag{41}$$

Step (3): From step 3 of the proof of Theorem S6 in Liutkus et al. (2019), we get:

$$\lim_{n\to\infty}h_{n}\sum_{k=1}^{\infty}\int_{\mathbb{B}(0,r)}\Delta\xi_{k}^{n}(x)\rho_{kh_{n}}^{h_{n}}(x)\,\mathrm{d}x=\int_{0}^{\infty}\int_{\mathbb{B}(0,r)}\Delta\xi(t,x)\rho_{t}(x)\,\mathrm{d}x\,\mathrm{d}t.\tag{42}$$

Step (4): Let ϕ^{hₙ}_k be the Kantorovich potential from µ^{hₙ}_{khₙ} to µ^{hₙ}_{(k−1)hₙ}.
From the optimality of µ^{hₙ}_{khₙ}, the first variation of the functional ζ ↦ F^ν_{λ,σ}(ζ) + (1/2h)W²₂(ζ, µ^{hₙ}_{(k−1)hₙ}) with respect to ζ, at the point µ^{hₙ}_{khₙ} in the direction of the vector field ∇ξⁿ_k, is zero. Using Proposition 3 for the first variation of the GσSW²₂ term, Proposition 5.1.7 for the first variation of the W²₂ term, and Jordan et al. (1998) for the first variation of the H term, we get the following:

$$0=\frac{1}{h_{n}}\int_{\mathbb{B}(0,r)}\langle\nabla\phi_{k}^{h_{n}}(x),\nabla\xi_{k}^{n}(x)\rangle\,\mathrm{d}\mu_{kh_{n}}^{h_{n}}(x)-\int_{\mathbb{B}(0,r)}\fint(\psi_{\mu_{kh_{n}}^{h_{n}},\theta}^{(\sigma),h_{n}})^{\prime}(\langle\theta,x\rangle)\langle\theta,\nabla\xi_{k}^{n}(x)\rangle\,\mathrm{d}\theta\,\mathrm{d}\mu_{kh_{n}}^{h_{n}}(x)-\lambda\int_{\mathbb{B}(0,r)}\Delta\xi_{k}^{n}(x)\,\mathrm{d}\mu_{kh_{n}}^{h_{n}}(x).\tag{43}$$

Proceeding as in step 4 of Liutkus et al. (2019) and using Eq. (43), we get the following:

$$\lim_{n\to\infty}-h_{n}\sum_{k=1}^{\infty}\int_{\mathbb{B}(0,r)}\xi_{k}^{n}(x)\frac{\rho_{kh_{n}}^{h_{n}}-\rho_{(k-1)h_{n}}^{h_{n}}}{h_{n}}\,\mathrm{d}x=\lim_{n\to\infty}\left(h_{n}\sum_{k=1}^{\infty}\int_{\mathbb{B}(0,r)}\fint(\psi_{\mu_{kh_{n}}^{h_{n}},\theta}^{(\sigma),h_{n}})^{\prime}(\langle\theta,x\rangle)\langle\theta,\nabla\xi_{k}^{n}\rangle\,\mathrm{d}\theta\,\mathrm{d}\mu_{kh_{n}}^{h_{n}}+\lambda h_{n}\sum_{k=1}^{\infty}\int_{\mathbb{B}(0,r)}\Delta\xi_{k}^{n}(x)\rho_{kh_{n}}^{h_{n}}(x)\,\mathrm{d}x\right).\tag{44}$$

Step (5): Combining Eq. (40), Eq. (41), Eq. (42) and Eq. (44), we get the desired result in Eq. (39).

## B Proof Of Theorem 2

We simply follow the proof strategy of Theorem 3 in Liutkus et al. (2019).
We begin by restating the following two discrete-time SDEs:

$$\hat{X}_{k+1}=\hat{X}_{k}-hv^{(\sigma)}(\hat{X}_{k},\hat{\mu}_{kh})+\sqrt{2\lambda h}\,Z_{k+1},\tag{45}$$
$$\bar{X}_{k+1}=\bar{X}_{k}-h\hat{v}^{(\sigma)}(\bar{X}_{k},\bar{\mu}_{kh})+\sqrt{2\lambda h}\,Z_{k+1},\tag{46}$$

where Eq. (45) is equivalent to Eq. (17), which in turn is the Euler–Maruyama discretization of the continuous-time SDE in Eq. (6). Equation (46) is equivalent to the particle update equation in Eq. (18). Similar to the proof of Theorem 3 in Liutkus et al. (2019), we define two continuous-time processes (Yt)_{t≥0} and (Ut)_{t≥0}, given by the following continuous-time SDEs:

$$\mathrm{d}Y_{t}=\tilde{v}_{t}^{(\sigma)}(Y)\,\mathrm{d}t+\sqrt{2\lambda}\,\mathrm{d}W_{t},\tag{47}$$
$$\mathrm{d}U_{t}=\bar{v}_{t}^{(\sigma)}(U)\,\mathrm{d}t+\sqrt{2\lambda}\,\mathrm{d}W_{t},\tag{48}$$

where

$$\tilde{v}_{t}^{(\sigma)}(Y):=-\sum_{k=0}^{\infty}v^{(\sigma)}(Y_{kh},\hat{\mu}_{kh})\mathbb{1}_{[kh,(k+1)h)}(t),\tag{49}$$
$$\bar{v}_{t}^{(\sigma)}(U):=-\sum_{k=0}^{\infty}\hat{v}^{(\sigma)}(U_{kh},\bar{\mu}_{kh})\mathbb{1}_{[kh,(k+1)h)}(t).\tag{50}$$

In Eq. (49), µ̂_{kh} follows the distribution of X̂_k in the discrete-time process defined by the update equation in Eq. (45). Therefore, (Yt)_{t≥0} is a continuous linear interpolation of the discrete-time process (X̂_k)_{k∈N₊}. In Eq. (50), µ̄_{kh} follows the distribution of X̄_k in the discrete-time process defined by the update equation in Eq. (46). Therefore, (Ut)_{t≥0} is a continuous linear interpolation of the discrete-time process (X̄_k)_{k∈N₊}. Let π^T_X, π^T_Y, and π^T_U denote the distributions of (Xt)_{t∈[0,T]}, (Yt)_{t∈[0,T]}, and (Ut)_{t∈[0,T]}, respectively, with T = Kh. We have the following lemma, bounding the total variation distance between the pairs (π^T_X, π^T_Y) and (π^T_Y, π^T_U), from Liutkus et al. (2019):

Lemma 4 (Lemmas S1 and S2 in Liutkus et al. (2019)). For all λ > 0, assume that the continuous-time SDE in Eq.
(6) has a unique strong solution (Xt)_{t≥0} for any starting point x ∈ R^d. For t ≥ 0, define Ψ^{(σ)}_{µt}(x) := ⨍_{S^{d−1}} ψ^{(σ)}_{µt,θ}(⟨θ, x⟩) dθ. Suppose there exist constants A, B, L, m, b > 0 and δ ∈ (0, 1) such that the following hold for any x, x′ ∈ R^d, µ, µ′ ∈ P(Ω), and all t ≥ 0:

$$\begin{array}{c}\|v_{t}^{(\sigma)}(x)-v_{t^{\prime}}^{(\sigma)}(x^{\prime})\|\leq L(\|x-x^{\prime}\|+|t-t^{\prime}|),\\ \|\hat{v}^{(\sigma)}(x,\mu)-\hat{v}^{(\sigma)}(x^{\prime},\mu^{\prime})\|\leq L(\|x-x^{\prime}\|+\|\mu-\mu^{\prime}\|_{\mathrm{TV}}),\\ \langle x,v_{t}^{(\sigma)}(x)\rangle\geq m\|x\|^{2}-b,\\ \mathbb{E}[\hat{v}_{t}^{(\sigma)}]=v_{t}^{(\sigma)},\\ \mathbb{E}[\|\hat{v}^{(\sigma)}(x,\mu_{t})-v^{(\sigma)}(x,\mu_{t})\|^{2}]\leq2\delta(L^{2}\|x\|^{2}+B^{2}).\end{array}$$

Define:

$$\begin{array}{l}C_{e}:=\mathcal{H}(\mu_{0}),\\ C_{0}:=C_{e}+2(1\vee\tfrac{1}{m})(b+2B^{2}+d\lambda),\\ C_{1}:=12(L^{2}C_{0}+B^{2})+1,\\ C_{2}:=2(L^{2}C_{0}+B^{2}).\end{array}$$

Then, we have:

$$\begin{array}{l}\|\pi_{X}^{T}-\pi_{Y}^{T}\|_{\mathrm{TV}}^{2}\leq\frac{L^{2}K}{4\lambda}\left(\frac{C_{1}h^{3}}{3}+3\lambda dh^{2}\right)+\frac{C_{2}\delta Kh}{8\lambda},\\ \|\pi_{Y}^{T}-\pi_{U}^{T}\|_{\mathrm{TV}}^{2}\leq\frac{L^{2}Kh}{16\lambda}\|\pi_{X}^{T}-\pi_{U}^{T}\|_{\mathrm{TV}}^{2}.\end{array}$$

Theorem 7. Under the assumptions of Lemma 4 and for λ > TL²/8:

$$\|\mu_{T}-\hat{\mu}_{Kh}\|_{\mathrm{TV}}^{2}\leq\frac{T}{\lambda-TL^{2}/8}\left[L^{2}h(c_{1}h+d\lambda)+c_{2}\delta\right],\tag{51}$$

where c₁, c₂, L, δ > 0 are constants independent of time.

Proof. We will emulate the proof of Theorem 3 in Liutkus et al. (2019).
$$\begin{array}{l}\|\pi_{X}^{T}-\pi_{U}^{T}\|_{\mathrm{TV}}^{2}\leq2\|\pi_{X}^{T}-\pi_{Y}^{T}\|_{\mathrm{TV}}^{2}+2\|\pi_{Y}^{T}-\pi_{U}^{T}\|_{\mathrm{TV}}^{2}\\ \qquad\leq\frac{L^{2}K}{2\lambda}\left(\frac{C_{1}h^{3}}{3}+3\lambda dh^{2}\right)+\frac{C_{2}\delta Kh}{4\lambda}+\frac{L^{2}Kh}{8\lambda}\|\pi_{X}^{T}-\pi_{U}^{T}\|_{\mathrm{TV}}^{2}\\ \qquad\leq\left(1-\frac{KL^{2}h}{8\lambda}\right)^{-1}\left\{\frac{L^{2}K}{2\lambda}\left(\frac{C_{1}h^{3}}{3}+3\lambda dh^{2}\right)+\frac{C_{2}\delta Kh}{4\lambda}\right\},\end{array}$$

where the second inequality follows from Lemma 4 and the last inequality holds for λ > TL²/8. The desired result then follows by plugging in T = Kh and rearranging.

## C Experimental Details

In this section, we explain all experimental details required for running the experiments (along with the code, which is provided in the supplementary material). For this project, we use one NVIDIA Tesla V100 GPU, which was necessary only for the pretraining of the autoencoder. All neural networks are implemented using PyTorch (Paszke et al., 2019).

## C.1 Datasets And Evaluation Metric

MNIST is a standard dataset introduced in LeCun et al. (1998), with no clear license to the best of our knowledge, composed of monochrome images of hand-written digits. Each MNIST image is single-channel, of size 28 × 28. We preprocess MNIST images by extending them to 32 × 32 frames (padding each image with black pixels), in order to better fit as inputs and outputs of standard convolutional networks. MNIST is comprised of a training and testing dataset, but no validation set. We split the training set into two equally-sized public and private sets. FashionMNIST is a similar dataset introduced by Xiao et al. (2017) under the MIT license, composed of fashion-related images. Each FashionMNIST image is single-channel, of size 28 × 28, and we also preprocess them by extending them to 32 × 32 frames. Like MNIST, FashionMNIST is comprised of a training and testing dataset, but no validation set.
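The preprocessing just described (zero-padding 28 × 28 images to 32 × 32 frames, rescaling pixel values to [0, 1], and splitting into equally-sized public and private halves) can be sketched as follows. This is a minimal numpy illustration rather than our actual data pipeline, and the function name is ours:

```python
import numpy as np

def preprocess_mnist(images, seed=0):
    """Pad 28x28 images to 32x32 with black pixels, scale to [0, 1],
    and split into equally-sized public and private sets."""
    # Pad each image with 2 black pixels on every side: 28 -> 32.
    padded = np.pad(images, ((0, 0), (2, 2), (2, 2)), mode="constant")
    # Scale pixel values from [0, 255] to [0, 1].
    scaled = padded.astype(np.float64) / 255.0
    # Shuffle, then split into two equally-sized halves.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(scaled))
    half = len(scaled) // 2
    return scaled[idx[:half]], scaled[idx[half:]]

# Toy batch standing in for MNIST digits.
batch = np.random.default_rng(0).integers(0, 256, size=(10, 28, 28))
public, private = preprocess_mnist(batch)
```

The same sketch applies to FashionMNIST, since it shares the image size and split convention.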
We split the training set into two equally-sized public and private sets. Notice that for MNIST and FashionMNIST, we scaled the pixel values to [0, 1]. CelebA is a dataset composed of celebrity pictures introduced by Liu et al. (2015). Its license permits use for non-commercial research purposes. Each CelebA image has three color channels and is of size 178 × 218. We preprocess these images by center-cropping each to a square image and resizing to 64 × 64. CelebA is comprised of a training, testing, and validation dataset. After conducting initial experiments and analysis with the validation set, we removed it. We then split the training set into two equally-sized public and private datasets. The Fréchet Inception Distance (FID), introduced by Heusel et al. (2018), measures the generative performance of the models we consider. In our code, we use the PyTorch implementation from TorchMetrics (Skafte Detlefsen et al., 2022).

## C.2 Structure Of The Autoencoders For Data Dimension Reduction

The experiments performed on the MNIST and FashionMNIST datasets utilize an autoencoder architecture as per the framework proposed by Liutkus et al. (2019). Furthermore, we normalized the latent space before adding Gaussian noise, ensuring that the encoded representations lie on a hyper-sphere. We use the following autoencoder structure, which is the same as that used in Liutkus et al. (2019):

- **Encoder.** Four 2D convolution layers with (num chan out, kernel size, stride, padding) set to (3, 3, 1, 1), (32, 4, 2, 0), (32, 3, 1, 1), (32, 3, 1, 1), each one followed by a ReLU activation. At the output, a linear layer is set to the desired bottleneck size, and then the outputs are normalized.

- **Decoder.** A linear layer maps the bottleneck features to a vector of dimension 8192, which is reshaped as (32, 16, 16). Then, three convolution layers are applied, all with 32 output channels and (kernel size, stride, padding) set to, respectively, (3, 1, 1), (3, 1, 1), (2, 2, 0).
A 2D convolution layer is then applied, with the number of output channels set to that of the data (1 for black-and-white, 3 for color), and (kernel size, stride, padding) set to (3, 1, 1). All layers are followed by a ReLU activation, and a sigmoid activation is applied to the final output. Conversely, experiments conducted on the CelebA dataset employ an autoencoder/generator architecture based on the DCGAN framework proposed by Radford et al. (2016a). The structure is the following:

- **Encoder.** Four 2D convolution layers are employed with the following specifications: (64, 3, 1, 1), (64×2, 4, 2, 1), (64×4, 4, 2, 1) and (64×8, 4, 2, 1). Each convolutional layer is followed by a leaky Rectified Linear Unit (LeakyReLU) activation function. Subsequently, a linear layer is applied to obtain the desired bottleneck size. The outputs are then normalized.

- **Decoder.** Four 2D convolution layers are employed with the following specifications: (64×8, 4, 1, 0), (64×4, 4, 2, 1), (64×2, 4, 2, 1) and (64, 4, 2, 1). Each convolutional layer is followed by a leaky Rectified Linear Unit (LeakyReLU) activation function, and batch normalization is added after the activation.

## C.3 Additional Algorithm

Algorithm 2 describes the DP Sliced Wasserstein Flow without resampling of the θs.

Algorithm 2: DP Sliced Wasserstein Flow without resampling of the θs: DPSWflow.

1: **Input:** Y = [y₁ᵀ, ..., y_nᵀ]ᵀ ∈ R^{n×d}, i.e., n i.i.d. samples from the target distribution ν; number of projections N_θ; regularization parameter λ; variance σ; step size h.
2: **Output:** X = [x₁ᵀ, ..., x_nᵀ]ᵀ ∈ R^{n×d}
3: *// Initialize the particles*
4: {xᵢ}ᵢ₌₁ⁿ ∼ µ₀, X̂ = [x₁ᵀ, ..., x_nᵀ]ᵀ ∈ R^{n×d}
5: {θⱼ}ⱼ₌₁^{N_θ} ∼ Unif(S^{d−1}), Θ = [θ₁ᵀ, ..., θ_{N_θ}ᵀ]
6: Sample Z_Y ∈ R^{n×N_θ} from i.i.d. N(0, σ²)
7: Compute the inverse CDF of YΘ + Z_Y.
8: *// Iterations*
9: **for** k = 0, ..., K − 1 **do**
10: Sample M_θ projections among Θ to obtain Υ
11: Sample Z_X ∈ R^{n×M_θ} from i.i.d.
N(0, σ²)
12: Compute the CDF of XΥ + Z_X
13: Compute v̂^{(σ)}(xᵢ) using Eq. (20)
14: xᵢ ← xᵢ − h v̂^{(σ)}(xᵢ) + √(2λh) z, z ∼ N(0, I_d)
15: **end for**

## C.4 Additional Comments On Hyperparameters And Algorithms

In this subsection, we give all of the hyperparameters necessary for reproducing the experiments conducted.

MNIST and FashionMNIST. All three DPSWflow, DPSWflow-r, and DPSWgen models are evaluated with a batch size of 250 and for δ = 10⁻⁵, and the latent space (of the autoencoder, used as input of the mechanisms) has size 8. DPSWflow is evaluated over 1500 epochs for all values of ε, while DPSWflow-r and DPSWgen are evaluated on 35 epochs for ε = ∞ and ε = 10, and on 20 epochs for ε = 5. DPSWflow uses N_θ = 31 and M_θ = 25, while DPSWflow-r and DPSWgen use N_θ = 70. As explained in Section 4.3, the privacy tracker is different for DPSWflow compared to DPSWflow-r and DPSWgen. For DPSWflow, we directly input the desired value for ε and use the sensitivity bound from Eq. (23), along with σ ≥ c∆₂(f)/ε (from Section 2.3), to obtain the value of σ which is used in our code. For DPSWflow-r and DPSWgen, we used the following pairs of (σ, ε): σ = 0, ε = ∞; σ = 0.68, ε = 10; and σ = 0.9, ε = 5.

CelebA. All three DPSWflow, DPSWflow-r, and DPSWgen models are evaluated with a batch size of 250, for δ = 10⁻⁶, and the latent space (of the autoencoder, used as input of the mechanisms) has size 48. DPSWflow is evaluated over 2000 epochs for every value of ε, while DPSWflow-r and DPSWgen are evaluated on 30 epochs for every ε. DPSWflow uses N_θ = 250 and M_θ = 220, while DPSWflow-r and DPSWgen use N_θ = 300. For DPSWflow-r and DPSWgen, we used the following pairs of (σ, ε): σ = 0, ε = ∞; σ = 0.59, ε = 10; and σ = 0.8, ε = 5. Also, notice that the architecture of the generator of DPSWgen is structured as follows: the input layer consists of a linear layer that transforms the input vector to a 256-dimensional vector.
The first hidden layer applies a ReLU activation function to introduce non-linearity. The second hidden layer includes another linear layer that increases the dimensionality from 256 to 512, followed by a Batch Normalization layer to stabilize and accelerate the training process, and a ReLU activation function. The third hidden layer contains a linear layer that reduces the dimensionality from 512 to 256, followed by a ReLU activation function. The output layer comprises a final linear layer that maps the 256-dimensional vector to the desired output vector. This sequential arrangement of layers allows for progressive transformation and refinement of the input data, leveraging non-linear activation functions and normalization techniques to improve the model's learning capacity and stability.
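To make the mechanics of the sliced Wasserstein flow concrete, the core (non-private) particle update behind Algorithm 2 — project, smooth with Gaussian noise, solve the one-dimensional transport by sorting, average over projections, then take a diffused Euler-style step — can be sketched in numpy as follows. This is an illustrative simplification under our own naming, not the implementation used for the reported results, and it omits the privacy accounting entirely:

```python
import numpy as np

def swf_step(X, Y, n_proj=50, sigma=0.1, h=0.05, lam=1e-4, rng=None):
    """One step of a Gaussian-smoothed sliced Wasserstein flow moving
    particles X toward target samples Y (equal sample sizes assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # Random projection directions on the unit sphere.
    thetas = rng.normal(size=(n_proj, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    drift = np.zeros_like(X)
    for theta in thetas:
        # Project and smooth with Gaussian noise (the G_sigma smoothing).
        px = X @ theta + sigma * rng.normal(size=n)
        py = Y @ theta + sigma * rng.normal(size=n)
        # 1D optimal transport by sorting: each projected particle is
        # matched to the target value of the same rank.
        order_x = np.argsort(px)
        ranks = np.empty(n, dtype=int)
        ranks[order_x] = np.arange(n)
        target = np.sort(py)[ranks]
        # (psi)'(<theta, x>) = <theta, x> - F_nu^{-1}(F_mu(<theta, x>)).
        drift += np.outer(px - target, theta)
    drift /= n_proj
    # Gradient-flow step plus the entropic diffusion term.
    return X - h * drift + np.sqrt(2.0 * lam * h) * rng.normal(size=X.shape)

rng = np.random.default_rng(0)
particles = rng.normal(loc=3.0, size=(200, 2))  # initial particles, mean 3
targets = rng.normal(loc=0.0, size=(200, 2))    # target samples, mean 0
for _ in range(150):
    particles = swf_step(particles, targets, rng=rng)
# The particle cloud drifts toward the target distribution.
```

Caching the target quantiles (line 7 of Algorithm 2) instead of re-sorting py at every step is what distinguishes the "without resampling" variant above from a generic sliced Wasserstein flow; we keep the re-sorting here for brevity.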
# A Free Lunch With Influence Functions? Improving Neural Network Estimates Of The Average Treatment Effect With Concepts From Semiparametric Statistics Anonymous authors Paper under double-blind review ## Abstract Parameter estimation in empirical fields is usually undertaken using parametric models, and such models readily facilitate statistical inference. Unfortunately, they are unlikely to be sufficiently flexible to be able to adequately model real-world phenomena, and may yield biased estimates. Conversely, non-parametric approaches are flexible but do not readily facilitate statistical inference and may still exhibit residual bias. Using causal inference (specifically, Average Treatment Effect estimation) as an application domain example, we explore the potential for Influence Functions (IFs) to (a) improve initial estimators without needing more data (b) increase model robustness and (c) facilitate statistical inference. We begin with a broad, tutorial-style introduction to IFs and causal inference, before proposing a neural network method 'MultiNet', which seeks the diversity of an ensemble using a single architecture. We also introduce variants on the IF update step which we call 'MultiStep', and provide a comprehensive evaluation of different approaches. The improvements are found to be dataset dependent, indicating an interaction between the methods used and nature of the data generating process. Our experiments highlight the need for practitioners to check the consistency of their findings, potentially by undertaking multiple analyses with different combinations of estimators. This finding is especially relevant to practitioners working in the domain of causal inference, where ground-truth with which one could assess the performance of one's estimators is not available. 
## 1 Introduction Most methods being utilized in empirical fields such as psychology or epidemiology are parametric models (van der Laan & Rose, 2011; Blanca et al., 2018), which are convenient because they facilitate closed-form statistical inference and confidence intervals (*e.g.* for the purpose of null hypothesis testing). Indeed, being able to perform statistical tests and reliably quantify uncertainty is especially important when evaluating the efficacy of treatments or interventions. One approach to perform such tests is by assuming a parametric model for the underlying generating mechanism, and *e.g.* normally distributed estimates. However, it has been argued that linear models are incapable of modeling most realistic data generating processes and that we should instead be using modern machine learning techniques (van der Laan & Rose, 2011; van der Laan & Gruber, 2012; van der Laan & Starmans, 2014; Vowels, 2021). Unfortunately, most machine learning models are non-parametric insofar as the estimates derived using such techniques are not directly parameterizable as (*e.g.*) a Gaussian with a mean and variance. As such, the estimates derived using such non-parametric techniques are not readily amenable to null-hypothesis significance testing or other common statistical inference tasks. Furthermore, even though machine learning algorithms are more flexible, they are still likely to be biased because they are not targeted to the specific parameter of interest (van der Laan & Rose, 2011). So, what can we do? By leveraging concepts from the field of semiparametric statistics we can try to address these issues. Indeed, by combining elements of semiparametric theory with machine learning methods, we can, at least theoretically, enjoy the best of both worlds: We can avoid having to make unreasonably restrictive assumptions about the underlying generative process, and can nonetheless undertake valid statistical inference. 
Furthermore, we can also leverage an estimator update process to achieve greater precision in existing estimators, without needing additional data (van der Laan & Rose, 2011; Tsiatis, 2006; Bickel et al., 1998), an advantage which we might call a 'free lunch'1, and one we wish to explore and test empirically in this paper. One example of an existing method which combines machine learning and semiparametric theory is targeted learning (van der Laan & Rose, 2011; van der Laan & Starmans, 2014).2 Unfortunately, this technique, and many related techniques involving influence functions (IFs) and semiparametric theory, have primarily been popularized outside the field of machine learning. In parallel, machine learning has focused on the development of equivalent methods using deep neural network (NN) methods for causal inference (see *e.g.*, Bica et al., 2020; Wu & Fukumizu, 2020; Shalit et al., 2017; Yoon et al., 2018; Louizos et al., 2017), which, owing to their 'untargeted' design (more on this below), may exhibit residual bias. As such, many of the principles and theory associated with semiparametrics and IFs are underused and underappreciated within the machine learning community, and it remains unknown to what extent these techniques can be applied to NN based estimators. More generally, and in spite of a large body of work describing the theoretical properties of semiparametric methods for estimation outside of machine learning, there has been little empirical comparison of techniques like targeted learning against those considered state of the art at the intersection of machine learning and causal inference. In particular, there now exist numerous NN based methods, and practitioners may find themselves choosing between the alluring 'deep learning' based methods and those which perhaps, rightly or wrongly, have less associated hype. 
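To make this 'free lunch' concrete before the formal development, the canonical example of such an update for the average treatment effect (ATE) is the one-step correction based on the efficient influence function, i.e., the AIPW estimator. The following numpy sketch is illustrative only; the nuisance estimates mu0, mu1, and e would in practice come from fitted models, and here we simply plug in a deliberately biased outcome model (with a correctly specified propensity) to show the correction, and the resulting closed-form confidence interval, at work:

```python
import numpy as np

def one_step_ate(y, t, mu0, mu1, e):
    """One-step (AIPW) update of the plug-in ATE estimate, plus a
    closed-form 95% CI from the sample variance of the influence function."""
    plug_in = np.mean(mu1 - mu0)
    # Efficient influence function of the ATE at the plug-in nuisances.
    eif = t / e * (y - mu1) - (1 - t) / (1 - e) * (y - mu0) \
        + (mu1 - mu0) - plug_in
    corrected = plug_in + eif.mean()
    se = eif.std(ddof=1) / np.sqrt(len(y))
    return corrected, (corrected - 1.96 * se, corrected + 1.96 * se)

# Simulated example: true ATE is 2; the outcome model is biased by +1
# under treatment, but the propensity model is correct (double robustness).
rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))          # true (and assumed known) propensity
t = rng.binomial(1, e)
y = 2.0 * t + x + 0.5 * rng.normal(size=n)
mu0_hat, mu1_hat = x, x + 3.0         # biased: the true mu1 is x + 2
plug_in = np.mean(mu1_hat - mu0_hat)  # = 3, off by 1
corrected, ci = one_step_ate(y, t, mu0_hat, mu1_hat, e)
```

The corrected estimate lands near the true value of 2 despite the biased outcome model, and the IF's sample variance yields the confidence interval without any parametric assumption on the data generating process.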
Such a comparison is therefore extremely important, especially given that a theoretical framework for establishing the statistical guarantees of NNs is yet elusive (Curth et al., 2021a), although one notable recent contribution is presented by Farrell et al. (2021). We explore the potential for semiparametric techniques, in particular, various applications of IFs, to (a) improve the accuracy of estimators by 'de-biasing' them, (b) yield estimators which are more robust to model misspecification (double-robustness), and (c) derive confidence intervals for valid statistical inference. Our motivating application example is the estimation of the causal effect of a treatment or intervention on an outcome from observational data, although the techniques we discuss are not limited to this setting. Experiments highlight that, even for simple datasets, some NN methods do not yield estimators close enough to be amenable to improvement via IFs (as we will discuss below, the assumption is that the bias of the initial estimator can be approximated as a linear perturbation). We propose a new NN pseudo-ensemble method 'MultiNet' with constrained weighted averaging (see Fig. 1) as a means to adapt to datasets with differing levels of complexity, in a similar way to the Super Learner ensemble approach (van der Laan et al., 2007), which is popular in epidemiology. The associated contributions of this paper are:

- A top-level introduction to the basics behind semiparametric theory, influence functions (including an expression for deriving influence functions for general estimands and the code to do so automatically)3, and causal inference.
- A new method 'MultiNet' which attempts to mimic the performance of an ensemble with a single NN

- A new update step method 'MultiStep' which attempts to improve upon existing update methods by continuously optimizing the solution according to two criteria which characterize the optimum solution (namely, finding the IF with the smallest expectation and variance)

- An extensive comparison of the estimation performance of NNs and other algorithms with and without semiparametric techniques

![2_image_0.png](2_image_0.png)

Figure 1: Block diagram for MultiNet. At each layer l = {1, ..., L} of the network, the outcome y is estimated using covariates x (which can include treatment t). The treatment is used to select between two estimation arms. Once the network has been trained, the outcomes from each layer are combined and a constrained regression is performed. The weights β in the regression are constrained to be positive and sum to 1. An equivalent single-headed network can be used for the treatment model t̂|x.

1The term 'free lunch' is a reference to the adage of unknown origin (but probably North American) 'there ain't no such thing as a free lunch'. It was famously used by Wolpert and Macready in the context of optimization (Wolpert & Macready, 1997).
2For an overview of some other related methods see (Curth et al., 2021a).
3Code for models, experiments, and automatic IF derivation is provided in supplementary material.

We evaluate causal inference task performance in terms of (a) precision in estimation (and the degree to which we can achieve debiasing), (b) double robustness, and (c) normality of the distribution of estimates (thus,
We find our MultiNet and MultiStep methods provide competitive performance across datasets, and observe that initial estimation methods can sometimes benefit from the application of the semiparametric techniques. In general, however, the improvements are both dataset and technique dependent, highlighting possible interactions between the underlying data generating process, sample sizes, and the estimators and update steps used. The conclusion is thus that practitioners should take care when interpreting their results, and attempt to validate them by undertaking multiple analyses with different estimators. This is particularly important for the task of causal inference where, in real-world applications, ground truth is unlikely to be available. The paper is structured as follows: We begin by reviewing previous work in Sec. 2 and provide background theory on the motivating case of estimating causal effects from observational data in Sec. 3. In this section, we also provide a top level introduction to IFs (Sec. 3.2) and a derivation of the IF for a general graph (Sec. 3.3). In Sec. 4 we discuss how to use IFs debias estimators and we present our own update approach MultiStep. Our NN method MultiNet is presented in Sec. 5. The evaluation methodology is described in Sec. 6 and at the beginning of this section, we summarise the open questions which inform our subsequent evaluation design. We present and discuss results in Sec. 7 and finally, we provide a summary of the experiments, conclusions, and opportunities for further work in Sec. 8. ## 2 Previous Work The possible applications of semiparametrics in machine learning are broad but under-explored, and IFs in particular have only seen sporadic application in explainable machine learning (Koh & Liang, 2017; Sani et al., 2020), natural language processing (Han et al., 2020) models, causal model selection (Alaa & van der Schaar, 2019) and uncertainty quantification for deep learning (Alaa & van der Schaar, 2020). 
Outside of machine learning, in particular in the fields of epidemiology and econometrics, semiparametric methods are becoming more popular, and include targeted learning (van der Laan & Rose, 2011) and the well-known double machine learning approach by Chernozhukov et al. (2018). In statistics, alternatives have been developed which include doubly robust conditional ATE estimation (Kennedy, 2020) and IF-learning (Curth et al., 2021a). However, within the field representing the confluence of causal inference and machine learning, the focus seems to have been on the development of NN methods (see CEVAE (Louizos et al., 2017), CFR-Net (Shalit et al., 2017), GANITE (Yoon et al., 2018), Intact-VAE (Wu & Fukumizu, 2022), etc.), without consideration of statistical inference or semiparametric theory, and this gap has been noted by Curth et al. (2021b) and Curth & van der Schaar (2021). Indeed, to the best of our knowledge, semiparametric theory has only been used to debias neural network-based estimators three times in the field representing the confluence of machine learning and causal inference: firstly, in DragonNet (Shi et al., 2019), a method designed for ATE estimation; secondly, in TVAE (Vowels et al., 2021), a variational, latent-variable method for conditional ATE and ATE estimation; and thirdly, by Farrell et al. (2021), where a restricted class of multilayer perceptrons was evaluated for its performance potential as a plug-in estimator for semiparametric estimation of causal effects and shown to yield promising performance. The first two methods incorporate targeted regularization, but do not readily yield statistical inference, because to do so requires asymptotic normality (and this is not evaluated in the studies) as well as explicit evaluation of the IF. More broadly, semiparametrics has been discussed in relation to theory in machine learning, for example Bhattacharya et al.
(2020) provides a discussion of influence functions in relation to Directed Acyclic Graphs with hidden variables, Rotnitzky & Smucler (2020) and Henckel et al. (2020) discuss the application of semiparametric techniques for identifying efficient adjustment sets for causal inference tasks, Zhong & Wang (2021) apply semiparametrics with deep neural networks to achieve statistical inference in the partially linear quantile regression setting, and Jung et al. (2020) generalize the coverage of work on semiparametric estimation to general causal estimands. However, in general, the work is quite sparse, particularly in relation to the applicability of the theory to neural networks, and the accessibility of the relevant theory to general practitioners of machine learning. As part of this work, we propose a new estimator 'MultiNet' for the ATE which aims to compete with the methods described above. It represents a combination of ideas from two well-known approaches in causal inference - the Super Learner (van der Laan et al., 2007), and CFR-Net, which we already mentioned above. The Super Learner carries some beneficial guarantees which are afforded by the fact that it is an ensemble of candidate learners. By incorporating a diverse range of candidate learners into the Super Learner ensemble, it is able to achieve efficient rates of convergence, and we aim to incorporate the ensemble idea into a new approach using CFR-Net as a backbone. It is worth noting that other ensemble neural network approaches exist, such as the BatchEnsemble approach by Wen et al. (2020) and the well-known use of dropout by Gal & Ghahramani (2016). Whilst we incorporate dropout into MultiNet's training procedure, we do not use the dropout itself as a source of ensemble amortisation, but this represents an interesting opportunity for further work.
We instead explore 'tapping' each subsequent layer in a neural network to collect intermediate outcome predictions, and combining them with constrained regression in the manner of the Super Learner. We indeed observe improvements in performance over the CFR-Net backbone, highlighting that this simple modification can provide enough diversity to yield competitive, ensemble-style performance. Finally, other comparisons of the performance of semiparametric approaches exist. For example, the robustness of targeted learning approaches to causal inference on nutrition trial data was presented by Li et al. (2021), which includes a useful summary table of previous findings as well as its own evaluations. However, it does not include comparisons with NN-based learners, and seeks the answers to different questions relevant to practitioners in the empirical fields. Another example evaluation was undertaken by Luque-Fernandez et al. (2018), but has a didactic focus. We therefore note the need for increased coverage and exposure to semiparametric theory, particularly at the intersection of causal inference and neural network estimation, as well as a need for an evaluation of the application of semiparametric theory to current methods.

## 3 Causal Inference And Influence Functions

## 3.1 Causal Inference

The concepts in this paper are applicable to estimation tasks in general, but we focus on the specific task of estimating a causal effect, which is of the utmost importance for policy making (Kreif & DiazOrdaz, 2019), the development of medical treatments (Petersen et al., 2017), the evaluation of evidence within legal frameworks (Pearl, 2009; Siegerink et al., 2016), and others.

Figure 2: Directed Acyclic Graphs (DAGs) for estimating the effect of treatment T = t on outcome Y with confounding X.

A canonical characterization of the problem of causal inference from observational data is depicted in the Directed Acyclic Graphs (DAGs) shown in Fig.
2a and 2b, and we provide an overview of causal inference in this section. We also point interested readers towards accessible overviews by Guo et al. (2020a) and Pearl et al. (2016). Regarding notation, we use upper-case letters, *e.g.*, A, B, to denote random variables, and bold, upper-case letters to denote sets of random variables, *e.g.*, **A**, **B**. Lower-case a and b indicate specific realisations of random variables A and B. Specifically, we use x_i ∼ P(X) ∈ R^m to represent the m-dimensional, pre-treatment covariates (we use bold symbols to signify multi-dimensional variables) for individual i, assigned factual treatment t_i ∼ P(T|X) ∈ {0, 1}, resulting in outcome y_i ∼ P(Y|X, T). Together, these constitute dataset D = {[y_i, t_i, x_i]}_{i=1}^n, where n is the sample size, sampled from a 'true' population distribution P. Fig. 2a is characteristic of observational data, where the outcome is related to the covariates as well as the treatment, and treatment is also related to the covariates. For example, if we consider age to be a typical covariate, young people may opt for surgery, whereas older people may opt for medication. Assuming that an age-related risk mechanism exists, then age will confound our estimation of the causal effect of treatment on outcome. One of the goals of a Randomized Controlled Trial (RCT) is to reduce this confounding by randomly assigning treatment, which makes it (asymptotically) statistically independent of the covariates. This enables us to compare the outcomes for the people who were treated and those who were not (or, equivalently, to compare multiple alternative treatments).
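To make the effect of confounding concrete, consider the following small simulation. The data-generating process (an 'age-like' covariate driving both treatment and outcome) is a hypothetical illustration of ours: the naive difference in means is biased away from the true effect of 1.0, while a crude stratification on the covariate largely recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed data-generating process: a single confounder (think standardised age)
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))   # older -> more likely treated
y = 1.0 * t + 2.0 * x + rng.normal(size=n)      # true causal effect of t is 1.0

# Naive contrast: biased, because treated units also have higher x
naive = y[t == 1].mean() - y[t == 0].mean()

# Stratify on x: estimate the contrast within covariate bins, then average
edges = np.quantile(x, np.linspace(0, 1, 21)[1:-1])
bins = np.digitize(x, edges)
adjusted = sum(
    (bins == b).mean()
    * (y[(bins == b) & (t == 1)].mean() - y[(bins == b) & (t == 0)].mean())
    for b in np.unique(bins)
)
```

The residual gap between `adjusted` and the true effect shrinks as the strata become finer, which is the intuition behind the covariate-adjustment formula derived next.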
One of the most common causal estimands is the Average Treatment Effect (ATE):

$$\tau(\mathbf{x})=\mathbb{E}_{\mathbf{x}\sim P(\mathbf{X})}[\mathbb{E}_{y\sim P(Y|do(T=1),\mathbf{X}=\mathbf{x})}[y]-\mathbb{E}_{y\sim P(Y|do(T=0),\mathbf{X}=\mathbf{x})}[y]]\tag{1}$$

Here, the use of the do operator (Pearl, 2009) in do(T = 1) and do(T = 0) simulates interventions, setting treatment to a particular value regardless of what was observed. One can also denote the outcomes corresponding with each of these possible interventions as Y(1) and Y(0), respectively, and these are known as potential outcomes (Imbens & Rubin, 2015). In practice, we only have access to one of these two quantities for any example in the dataset, whilst the other is missing, and as such the typical supervised learning paradigm does not apply. In Fig. 2b, such an intervention removes the dependence of T on X, and this graph is the same as the one for an RCT, where the treatment is unrelated to the covariates.4 Using do-calculus we can establish whether, under a number of strong assumptions5, the desired causal estimand can be expressed in terms of a function of the observed distribution, and thus whether the effect is *identifiable*. Causal identification and the associated assumptions are both extremely important topics in their own right, but fall beyond the scope of this paper (we are primarily concerned with estimation). Suffice it to say that, for the graph in Fig. 2a, the outcome under intervention can be expressed as:

$$\mathbb{E}_{y\sim P(Y|do(T=t'))}[y]=\int y\,p(y|\mathbf{X}=\mathbf{x},T=t')\,p(\mathbf{X}=\mathbf{x})\,d\mathbf{x},\tag{2}$$

which is estimable from observational data. Here, t′ is the specific intervention of interest (*e.g.*, t′ = 1).
In particular, it tells us that adjusting for the covariates X is sufficient to remove the bias induced through the 'backdoor' path T ← X → Y. This particular approach is sometimes referred to as backdoor adjustment. Once we have the expression in Eq. 2, we can shift our focus towards its estimation. Note that even once the problem has been recast as an estimation problem, it differs from the typical problem encountered in supervised learning. Indeed, instead of simply learning a function, we wish to indirectly learn the *difference* between two functions, where these functions represent 'response surfaces' - *i.e.*, the outcome/response under a particular treatment.

4Of course, one may still observe finite-sample associations between these variables, which occur as a natural consequence of random sample variation. 5These assumptions are the Stable Unit Treatment Value Assumption (SUTVA), Positivity, and Ignorability/Unconfoundedness - see Section 3.1.1 below for more information.

## 3.1.1 Causal Assumptions

The causal quantity can be estimated in terms of observational (and therefore statistical) quantities if a number of strong (but common: Yao et al., 2020; Guo et al., 2020b; Rubin, 2005; Imbens & Rubin, 2015; Vowels et al., 2021) assumptions hold: (1) Stable Unit Treatment Value Assumption (SUTVA): the potential outcomes for each individual or data unit are independent of the treatments assigned to all other individuals. (2) Positivity: the treatment assignment probabilities are non-zero and non-deterministic, P(T = t_i|X = x_i) > 0, ∀ t, x. (3) Ignorability/Unconfoundedness/Conditional Exchangeability: there are no unobserved confounders, such that the likelihoods of treatment for two individuals with the same covariates are equal, and the potential outcomes for two individuals with the same latent covariates are also equal, s.t. T ⊥⊥ (Y(1), Y(0))|X.

## 3.1.2 Estimation

One may use a regression to approximate the integral in Eq.
2, and indeed, plug-in estimators Q̂ can be used for estimating the ATE as:

$$\hat{\tau}(\hat{Q};\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}(\hat{Q}(T=1,\mathbf{X}=\mathbf{x}_{i})-\hat{Q}(T=0,\mathbf{X}=\mathbf{x}_{i})).\tag{3}$$

We use the circumflex/hat (ˆ·) notation to designate an estimated (rather than true/population) quantity. In the simplest case, we may use a linear or logistic regression for the estimator Q̂, depending on whether the outcome is continuous or binary. Unfortunately, if one imagines the true joint distribution to fall somewhere within an infinite set of possible distributions, we deliberately handicap ourselves by using a family of linear models, because such a family is unlikely to contain the truth. The consequences of such model misspecification can be severe, and result in biased estimates (Vowels, 2021; van der Laan & Rose, 2011). In other words, no matter how much data we collect, our estimate will converge to the incorrect value, and this results in a false positive rate which converges to 100%. This clearly affects the interpretability and reliability of null-hypothesis tests. Furthermore, even with correct specification of our plug-in estimators, our models are unlikely to be 'targeted' to the desired estimand, because they often estimate quantities superfluous to the estimand but necessary for the plug-in estimator (*e.g.*, other relevant factors or statistics of the joint distribution). As a result, in many cases there exist opportunities to reduce residual bias using what are known as *influence functions*.

## 3.2 Influence Functions

Semiparametric theory and, in particular, the concept of Influence Functions (IFs), are known to be challenging to assimilate (Fisher & Kennedy, 2019; Levy, 2019; Hines et al., 2021). Here we attempt to provide a brief, top-level intuition, but a detailed exposition lies beyond the scope of this paper.
Interested readers are encouraged to consider work by Kennedy (2016); Fisher & Kennedy (2019); Hampel (1974); Ichimura & Newey (2021); Hines et al. (2021); Bickel et al. (1998); Newey (1994; 1990); Chernozhukov et al. (2017); van der Laan & Rubin (2006), and Tsiatis (2006). An estimator Ψ(P̂_n) for an estimand Ψ(P) (for example, the ATE) has an IF, ϕ, if it can be expressed as follows:

$$\sqrt{n}(\Psi(\hat{\mathcal{P}}_{n})-\Psi(\mathcal{P}))=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\phi(z_{i},\mathcal{P})+o_{p}\left(1\right)\tag{4}$$

where z_i is a sample from the true distribution P, P̂_n is the empirical distribution or, alternatively, a model of some part thereof (*e.g.*, a predictive distribution parameterized by a NN, or a histogram estimate for a density function, etc.), o_p(1) is an error term that converges in probability to zero, and ϕ is a function with a mean of zero and finite variance (Tsiatis, 2006, p. 21). The √n scales the difference such that, when the difference converges in distribution, we can also say that the difference converges at a parametric root-n rate. Overall, Eq. 4 tells us that the difference between the true quantity and the estimated quantity can be represented as the sum of a bias term and some error term which converges in probability to zero. The IF itself is a function which models how much our estimate deviates from the true estimand, up to the error term. If an estimator can be written in terms of its IF, then, by the central limit theorem and Slutsky's theorem, the estimator converges in distribution to a normal distribution with mean zero and variance equal to the variance of the IF. This is a key result that enables us to derive confidence intervals and perform statistical inference (such as null hypothesis significance testing). Note that the convergence in distribution to a normal distribution does not impact our assumptions about the functional form governing the data generating process itself.
In other words, by deriving a normally distributed parameter estimate, we do not sacrifice flexibility in how we model the observed distribution, or flexibility in the functions used to derive the estimates themselves. The normality relates only to the variation in the parameter estimate itself, across sub-samples from the population.

## 3.2.1 A Simple Example

By way of example, consider the targeted estimand to be the expectation E_{y∼P(Y)}[y], where Y is a random variable constituting true distribution P. This can be expressed as:

$$\mathbb{E}_{y\sim\mathcal{P}}[y]=\Psi(\mathcal{P})=\int y\,p(y)\,dy\tag{5}$$

In the case where we have access to an empirical distribution P̂_n, where the subscript n is the sample size, the expectation example may be approximated as follows:

$$\Psi(\mathcal{P})\approx\Psi(\hat{\mathcal{P}}_n)=\frac{1}{n}\sum_{i=1}^n y_i\tag{6}$$

According to Eq. 4, the degree to which our resulting estimator is biased can therefore be expressed as:

$$\begin{split}\sqrt{n}(\Psi(\hat{\mathcal{P}}_{n})-\Psi(\mathcal{P}))&=\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n}y_{i}-\int y\,d\mathcal{P}(y)\right)\\ &=\frac{1}{\sqrt{n}}\sum_{i}^{n}(y_{i}-\mu)\stackrel{{\mathcal{D}}}{{\rightarrow}}\mathcal{N}(0,\sigma^{2})\end{split}\tag{7}$$

where µ and σ² are the mean and variance of Y, respectively, and the second line is a consequence of the central limit theorem. This shows that the empirical approximation of the estimand is a *consistent* estimator, insofar as it converges to the true value as the sample size increases.

## 3.2.2 Parametric Submodel And Pathwise Derivative

In many cases P̂_n is not equivalent to the sample distribution, perhaps because some or all of it is being modelled with estimators. As a result, the error does not converge in probability to zero and some residual error remains. This situation can be expressed using the IF, as per Eq. 4.
Here, the IF ϕ is being used to model the residual bias that stems from the fact that P̂_n is no longer equivalent to a direct sample from P. We will discuss the details relating to this function shortly. If we assume that the difference is asymptotically linear, then we can represent P̂_n as a perturbed version of P. This also results in convergence in distribution as follows:

$$\begin{split}\frac{1}{\sqrt{n}}\sum_{i}^{n}\phi(z_{i},\mathcal{P})&\stackrel{{\mathcal{D}}}{{\rightarrow}}\mathcal{N}\left(0,\mathbb{E}(\phi\phi^{T})\right),\\ \sqrt{n}(\Psi(\hat{\mathcal{P}}_{n})-\Psi(\mathcal{P}))&\stackrel{{\mathcal{D}}}{{\rightarrow}}\mathcal{N}\left(0,\mathbb{E}(\phi\phi^{T})\right).\end{split}\tag{8}$$

We can imagine the sample distribution P̂_n lies on a linear path towards the true distribution P. This linear model can be expressed using what is known as a parametric submodel, which represents a family of distributions indexed by a parameter ϵ:

$$\mathcal{P}_{\epsilon}=\epsilon\hat{\mathcal{P}}_{n}+(1-\epsilon)\mathcal{P}\tag{9}$$

It can be seen that when ϵ = 0, we arrive at the true distribution, and when ϵ = 1, we have our current empirical distribution or model. We can therefore use this submodel to represent the perturbation from where we want to be, P, in the direction of where we are with our current estimator(s), P̂_n. The direction associated with P_ϵ can then be expressed as a pathwise derivative in terms of the function representing our estimand Ψ:

$$\frac{d\Psi(\epsilon\hat{\mathcal{P}}_{n}+(1-\epsilon)\mathcal{P})}{d\epsilon}\tag{10}$$

When this derivative exists (under certain regularity conditions), it is known as the Gateaux derivative. We can evaluate this when ϵ = 0 (*i.e.*, evaluated at the true distribution according to the parametric submodel). Then, by the Riesz representation theorem (Fréchet, 1907; Riesz, 1909), we can express the linear functional in Eq.
10, evaluated at ϵ = 0, as an inner product between a functional ϕ and its argument:

$$\left.\frac{d\Psi(\epsilon\hat{\mathcal{P}}_{n}+(1-\epsilon)\mathcal{P})}{d\epsilon}\right|_{\epsilon=0}=\int\phi(y,\mathcal{P})\{d\hat{\mathcal{P}}_{n}(y)-d\mathcal{P}(y)\}\tag{11}$$

The function ϕ is the *Influence Function* (IF), evaluated at the distribution P in the direction of y. Eq. 11 can be substituted back into Eq. 4 to yield:

$$\Psi(\hat{\mathcal{P}}_{n})-\Psi(\mathcal{P})=\int\phi(y,\mathcal{P})\{d\hat{\mathcal{P}}_{n}(y)-d\mathcal{P}(y)\}+o_{p}\left(1/\sqrt{n}\right)\tag{12}$$

which equivalently allows us to express the estimate of the target quantity as:

$$\Psi(\hat{\mathcal{P}}_{n})=\Psi(\mathcal{P})+\left.\frac{d\Psi(\epsilon\hat{\mathcal{P}}_{n}+(1-\epsilon)\mathcal{P})}{d\epsilon}\right|_{\epsilon=0}+o_{p}(1/\sqrt{n})\tag{13}$$

Eq. 13 expresses the estimated quantity Ψ(P̂_n) in terms of the true quantity Ψ(P), whereas it would be more useful to do so the other way around, such that we have the true quantity in terms of things we can estimate. Hines et al. (2021) provide an exposition in terms of the Von Mises Expansion (VME), which is the functional analogue of the Taylor expansion, such that the true quantity can be expressed as:

$$\Psi(\mathcal{P})=\Psi(\hat{\mathcal{P}}_{n})+\frac{1}{n}\sum_{i}^{n}\phi(y_{i},\hat{\mathcal{P}}_{n})+o_{p}(1/\sqrt{n})\tag{14}$$

This, it can be seen, is of the same form as Eq. 13, except that ϕ is being evaluated at P̂_n, rather than P. This also accounts for the change in direction otherwise absorbed by a minus sign when expressing Ψ(P) in terms of Ψ(P̂_n). Finally, note that in Eq. 11 the pathwise derivative expresses the expectation of ϕ. However, in cases where we substitute P̂ for a Dirac function (see Sec. 3.2.3 for an example), the integral will evaluate to the value of ϕ at one specific point. Of course, if we have multiple values we wish to evaluate at (e.g.
an empirical distribution represented with Dirac delta functions at each point), then the result is the empirical approximation to the expectation, as indicated by the $\frac{1}{n}\sum_{i}^{n}$ notation in Eq. 14.

## 3.2.3 Influence Function For The Average Treatment Effect

A second example (in addition to the expectation given in Sec. 3.2.1) concerns the ATE, which we can break down in terms of an expected difference between two potential outcomes. For the DAG T → Y, T ← X → Y (also see Fig. 2a), the expectation of the potential outcome under treatment can be expressed as (Hines et al., 2021; Hahn, 1998):

$$\Psi(\mathcal{P})=\mathbb{E}_{\mathbf{x}\sim P(\mathbf{X})}[\mathbb{E}_{y\sim P(Y|T=t,\mathbf{X}=\mathbf{x})}[y]]=\int y\,f(y|T=t,\mathbf{X}=\mathbf{x})f(\mathbf{X}=\mathbf{x})\,dy\,d\mathbf{x}=\int\frac{y\,f(y,t,\mathbf{x})f(\mathbf{x})}{f(t,\mathbf{x})}\,dy\,d\mathbf{x},\tag{15}$$

where Z = (X, T, Y). Following the same steps as before, the IF can be derived by first substituting each density

$$f_{\epsilon}(y,t,\mathbf{x})=\epsilon\delta_{\tilde{y},\tilde{t},\tilde{\mathbf{x}}}(y,t,\mathbf{x})+(1-\epsilon)f(y,t,\mathbf{x}),\tag{16}$$

for f(y, t, x), and equivalently for f(x) and f(t, x):

$$\phi(\mathbf{Z},\mathcal{P})=\int y\left.\frac{d}{d\epsilon}\right|_{\epsilon=0}\frac{f_{\epsilon}(y,t,\mathbf{x})f_{\epsilon}(\mathbf{x})}{f_{\epsilon}(t,\mathbf{x})}\,dy\,d\mathbf{x}.\tag{17}$$

In a slight abuse of notation, δ_ỹ is the Dirac delta function at the point at which y = ỹ, where ỹ can be a datapoint in our empirical sample (note the shift from specific datapoint y_i to generic empirical samples ỹ).
Then, taking the derivative, and setting ϵ = 0:

$$\phi(\mathbf{Z},\mathcal{P})=\int y\,f(y|t,\mathbf{x})f(\mathbf{x})\left[\frac{\delta_{\tilde{y},\tilde{t},\tilde{\mathbf{x}}}(y,t,\mathbf{x})}{f(y,t,\mathbf{x})}+\frac{\delta_{\tilde{\mathbf{x}}}(\mathbf{x})}{f(\mathbf{x})}-\frac{\delta_{\tilde{t},\tilde{\mathbf{x}}}(t,\mathbf{x})}{f(t,\mathbf{x})}-1\right]dy\,d\mathbf{x},\tag{18}$$

$$\begin{split}\phi(\mathbf{Z},\mathcal{P})&=\int\frac{y\,f(y|t,\mathbf{x})f(\mathbf{x})\,\delta_{\tilde{y},\tilde{t},\tilde{\mathbf{x}}}(y,t,\mathbf{x})}{f(y|t,\mathbf{x})f(t|\mathbf{x})f(\mathbf{x})}\,dy\,d\mathbf{x}+\int\frac{y\,f(y|t,\mathbf{x})f(\mathbf{x})\,\delta_{\tilde{\mathbf{x}}}(\mathbf{x})}{f(\mathbf{x})}\,dy\,d\mathbf{x}\\ &\quad-\int\frac{y\,f(y|t,\mathbf{x})f(\mathbf{x})\,\delta_{\tilde{t},\tilde{\mathbf{x}}}(t,\mathbf{x})}{f(t|\mathbf{x})f(\mathbf{x})}\,dy\,d\mathbf{x}-\int y\,f(y|t,\mathbf{x})f(\mathbf{x})\,dy\,d\mathbf{x},\end{split}\tag{19}$$

$$\begin{split}\phi(\mathbf{Z},\mathcal{P})&=\frac{\delta_{\tilde{t}}(t)}{f(t|\tilde{\mathbf{x}})}\int y\,\delta_{\tilde{y}}(y)\,dy+\int y\,f(y|t,\tilde{\mathbf{x}})\,dy-\frac{\delta_{\tilde{t}}(t)}{f(t|\tilde{\mathbf{x}})}\int y\,f(y|t,\tilde{\mathbf{x}})\,dy-\Psi(\mathcal{P})\\ &=\frac{\delta_{\tilde{t}}(t)}{f(t|\tilde{\mathbf{x}})}\left(\tilde{y}-\mathbb{E}_{y\sim P(Y|T=t,\mathbf{X}=\tilde{\mathbf{x}})}[y]\right)+\mathbb{E}_{y\sim P(Y|T=t,\mathbf{X}=\tilde{\mathbf{x}})}[y]-\Psi(\mathcal{P}),\end{split}\tag{20}$$

which yields our IF:

$$\phi(\mathbf{Z},\mathcal{P})=\frac{\delta_{\tilde{t}}(t)}{f(t|\tilde{\mathbf{x}})}\left(\tilde{y}-\mathbb{E}_{y\sim P(Y|T=t,\mathbf{X}=\tilde{\mathbf{x}})}[y]\right)+\mathbb{E}_{y\sim P(Y|T=t,\mathbf{X}=\tilde{\mathbf{x}})}[y]-\Psi(\mathcal{P}).\tag{21}$$

Once again, in order to evaluate this we need to evaluate it at P̂_n, and we also need plug-in estimators Ĝ(x̃) ≈ f(t|x̃) (propensity score model) and Q̂(t, x̃) ≈ E_{y∼P(Y|T=t,X=x̃)}[y] (outcome model). The propensity score model represents a nuisance parameter and contributes to bias.
This finally results in:

$$\phi(\mathbf{Z},\hat{\mathcal{P}}_{n})=\frac{\delta_{\tilde{t}}(t)}{\hat{G}(\tilde{\mathbf{x}})}\left(\tilde{y}-\hat{Q}(t,\tilde{\mathbf{x}})\right)+\hat{Q}(t,\tilde{\mathbf{x}})-\Psi(\mathcal{P}).\tag{22}$$

Note that, for non-discrete T, it may be impossible to evaluate this precisely due to the Dirac function. However, and as Hines et al. (2021) and Ichimura & Newey (2021) note, this issue may be circumvented by using a substitute probability measure with a bandwidth parameter which approaches a point mass when the bandwidth parameter is equal to zero. Equation 22 depicts the influence function for the potential outcome mean, but if we wish to derive the influence function for the average treatment effect (*i.e.*, the difference between the outcomes from T = 1 and T = 0), one may note that the last line in Equation 15 can be duplicated and subtracted, setting the value of T to the desired contrast value. The influence functions for each potential outcome can then be derived independently, and the result is equivalent to their direct combination (van der Laan & Rose, 2011):

$$\phi_{ATE}(\mathbf{Z},\hat{\mathcal{P}}_{n})=\left(\frac{\delta_{\tilde{t}}(1)}{\hat{G}(\tilde{\mathbf{x}})}-\frac{\delta_{\tilde{t}}(0)}{1-\hat{G}(\tilde{\mathbf{x}})}\right)\left(\tilde{y}-\hat{Q}(t,\tilde{\mathbf{x}})\right)+\hat{Q}(1,\tilde{\mathbf{x}})-\hat{Q}(0,\tilde{\mathbf{x}})-\Psi_{ATE}(\mathcal{P}).\tag{23}$$

An alternative approach to the derivation of influence functions exists, and involves the use of the derivative of the log-likelihood (the score) (Levy, 2019). The approach presented here is arguably more straightforward and follows the presentation by Ichimura & Newey (2021) and Hines et al. (2021), although it depends on pathwise differentiability of the estimand.
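Numerically, evaluating Eq. 23 at estimated nuisance models is straightforward. The sketch below is illustrative only: the data-generating process is assumed, the outcome model Q̂ is a simple OLS fit, and the propensity model Ĝ is a crude binned estimate. Adding the sample mean of the IF to the plug-in estimate yields the one-step (AIPW-style) correction discussed in Sec. 4.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=n)
g = 1 / (1 + np.exp(-x))                 # assumed true propensity
t = rng.binomial(1, g)
y = 2.0 * t + x + rng.normal(size=n)     # true ATE = 2.0

# Outcome model Q-hat: ordinary least squares on (1, t, x)
X = np.column_stack([np.ones(n), t, x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
Q = lambda tt, xx: coef[0] + coef[1] * tt + coef[2] * xx

# Propensity model G-hat: empirical treatment rate within covariate bins
edges = np.quantile(x, np.linspace(0, 1, 51)[1:-1])
b = np.digitize(x, edges)
G = np.array([t[b == k].mean() for k in range(b.max() + 1)])[b]

plug_in = np.mean(Q(1, x) - Q(0, x))
# Eq. 23, evaluated at the data (indicators replace the Dirac deltas)
phi = (t / G - (1 - t) / (1 - G)) * (y - Q(t, x)) + Q(1, x) - Q(0, x) - plug_in
one_step = plug_in + phi.mean()          # AIPW-style one-step estimate
```

Since the nuisance models here are well specified, `plug_in` and `one_step` should both land close to the true ATE; the correction matters when Q̂ is biased.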
## 3.2.4 Statistical Inference With Influence Functions

As described earlier, machine learning techniques, while parameterized *per se*, are non-parametric insofar as the estimates they yield are not directly parameterizable as (*e.g.*) a Gaussian with a mean and a variance. However, such parameterization is extremely helpful in facilitating statistical inference, including the ubiquitous null hypothesis significance test, which is straightforward when the parameter being tested is normally distributed. Whilst other approaches to statistical inference with non-parametric methods exist, such as the bootstrap approach (Efron & Gong, 1983), influence functions have been recognized as a valid approach for some time (Bickel et al., 1998; Tsiatis, 2006). Following van der Laan & Rose (2011, p. 75), we can derive 95% confidence intervals and p-values from the influence function (assuming a normal distribution) as:

$$\widehat{\mathrm{Var}}(\phi)=\frac{1}{n}\sum_{i}^{n}\left[\phi(\mathbf{z}_{i})-\frac{1}{n}\sum_{j}^{n}\phi(\mathbf{z}_{j})\right]^{2},\qquad\widehat{\mathrm{se}}=\sqrt{\frac{\widehat{\mathrm{Var}}(\phi)}{n}},\tag{24}$$

$$\Psi^{*}(\hat{\mathcal{P}}_{n})\pm1.96\,\widehat{\mathrm{se}},\qquad\hat{p}_{val}=2\left[1-\Phi\left(\left|\frac{\Psi^{*}(\hat{\mathcal{P}}_{n})}{\widehat{\mathrm{se}}}\right|\right)\right],$$

where Ψ*(P̂_n) is the estimated target quantity after bias correction has been applied, Φ is the CDF of a normal distribution, ŝe is the standard error, and p̂_val is the p-value.

## 3.3 IFs For General Graphical Models

In this paper, we focus on the estimation of the average treatment effect in the setting of Fig. 2a. However, the methods discussed in this paper can be applied to more complex estimands with an arbitrary causal graph structure, as long as the estimand at hand is causally *identifiable* from the observed data. In this section, we discuss the derivation of IFs for a general form of an estimand in a general graphical model.
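Before turning to general graphs, note that the inference recipe of Eq. 24 amounts to only a few lines of code. The helper below is our own illustrative sketch (the function name is an assumption, and the standard normal CDF is computed from the error function):

```python
import numpy as np
from math import erf, sqrt

def if_inference(psi_star, phi):
    """95% CI and p-value from IF values, per Eq. 24.
    psi_star: bias-corrected point estimate; phi: array of IF values."""
    n = len(phi)
    var_phi = np.mean((phi - phi.mean()) ** 2)   # Var-hat of the IF
    se = sqrt(var_phi / n)                       # standard error
    ci = (psi_star - 1.96 * se, psi_star + 1.96 * se)
    Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    p_val = 2 * (1 - Phi(abs(psi_star / se)))
    return se, ci, p_val
```

This closed-form inference is precisely what bootstrap resampling would otherwise be needed for, at a fraction of the computational cost.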
## 3.3.1 Influence Function Of An Interventional Distribution

The causal identification of interventional distributions is well-studied in the literature. In the case of full observability, any interventional distribution is identifiable using the (extended) g-formula (Ezzati et al., 2004; Robins, 1986). If some variables of the causal system are unobserved, not all interventional distributions are necessarily identifiable. Tian & Pearl (2002) and Shpitser & Pearl (2006) provided necessary and sufficient conditions for identifiability in such models. The causal identification problem in DAGs with unobserved (latent) variables can equivalently be defined on *acyclic directed mixed graphs* (ADMGs) (Richardson & Spirtes, 2003; Richardson et al., 2017; Evans & Richardson, 2019). ADMGs are acyclic mixed graphs with directed and bidirected edges that result from a DAG through a latent projection operation onto a graph over the observable variables (Verma & Pearl, 1990). Pearl's do-calculus is shown to be complete for the identification of interventional distributions (Huang & Valtorta, 2006). Let V denote the set of all observed variables. Starting with an identifiable interventional distribution P(y|do(T = t′)), an identification functional of the following form is derived using do-calculus:

$$\mathcal{P}(\mathbf{y}|do(\mathbf{T}=\mathbf{t}^{\prime}))=\sum_{\mathbf{S}}\frac{\Pi_{i}\mathcal{P}(\mathbf{a_{i}}|\mathbf{b_{i}})}{\Pi_{j}\mathcal{P}(\mathbf{c_{j}}|\mathbf{d_{j}})},\tag{25}$$

where a_i, b_i, c_j, and d_j are realizations of A_i, B_i, C_j, and D_j, respectively, and A_i, B_i, C_j, D_j, S are subsets of variables such that, for each i and j, A_i ∩ B_i = ∅ and C_j ∩ D_j = ∅. Note that the sets B_i and D_j might be empty. The summation symbol in Eq. 25 indicates a summation over the values of the set of variables S in the discrete case, and an integration over these values in the continuous setting. To derive the influence function of Eq.
25, we begin with a conditional distribution of the form P(a|b). If b ≠ ∅, we can write:

$$\mathcal{P}_{\epsilon}(\mathbf{v})=(1-\epsilon)\mathcal{P}(\mathbf{v})+\epsilon\delta_{\tilde{\mathbf{v}}}(\cdot),\qquad\mathcal{P}_{\epsilon}(\mathbf{a}|\mathbf{b})=\frac{\mathcal{P}_{\epsilon}(\mathbf{a},\mathbf{b})}{\mathcal{P}_{\epsilon}(\mathbf{b})},$$

$$\left.\frac{d\mathcal{P}_{\epsilon}(\mathbf{a}|\mathbf{b})}{d\epsilon}\right|_{\epsilon=0}=\frac{\delta_{\tilde{\mathbf{a}},\tilde{\mathbf{b}}}(\mathbf{a},\mathbf{b})-\mathcal{P}(\mathbf{a},\mathbf{b})}{\mathcal{P}(\mathbf{b})}-\frac{\mathcal{P}(\mathbf{a},\mathbf{b})[\delta_{\tilde{\mathbf{b}}}(\mathbf{b})-\mathcal{P}(\mathbf{b})]}{\mathcal{P}^{2}(\mathbf{b})}=\mathcal{P}(\mathbf{a}|\mathbf{b})\cdot\left(\frac{\delta_{\tilde{\mathbf{a}},\tilde{\mathbf{b}}}(\mathbf{a},\mathbf{b})}{\mathcal{P}(\mathbf{a},\mathbf{b})}-\frac{\delta_{\tilde{\mathbf{b}}}(\mathbf{b})}{\mathcal{P}(\mathbf{b})}\right),\tag{26}$$

where ṽ is the point that we compute the influence function at, and ã, b̃ are the values of sets of variables A, B ⊆ V that are consistent with ṽ. For an empty b, using similar arguments, we have:

$$\left.\frac{d\mathcal{P}_{\epsilon}(\mathbf{a})}{d\epsilon}\right|_{\epsilon=0}=\mathcal{P}(\mathbf{a})\cdot\left(\frac{\delta_{\tilde{\mathbf{a}}}(\mathbf{a})}{\mathcal{P}(\mathbf{a})}-1\right).\tag{27}$$

With a slight abuse of notation, for b = ∅, we define δ_b̃(b)/P(b) = 1. Using Eq. 26 and Eq. 27, we can now derive the IF of Eq. 25:

$$\phi(\tilde{\mathbf{v}},\mathcal{P})=\left.\frac{d\Psi((1-\epsilon)\mathcal{P}+\epsilon\delta_{\tilde{\mathbf{v}}})}{d\epsilon}\right|_{\epsilon=0}=\sum_{\mathbf{S}}\frac{\Pi_{i}\mathcal{P}(\mathbf{a_{i}}|\mathbf{b_{i}})}{\Pi_{j}\mathcal{P}(\mathbf{c_{j}}|\mathbf{d_{j}})}\cdot\left[\sum_{i}\left(\frac{\delta_{\tilde{\mathbf{a}}_{i},\tilde{\mathbf{b}}_{i}}(\mathbf{a}_{i},\mathbf{b}_{i})}{\mathcal{P}(\mathbf{a}_{i},\mathbf{b}_{i})}-\frac{\delta_{\tilde{\mathbf{b}}_{i}}(\mathbf{b}_{i})}{\mathcal{P}(\mathbf{b}_{i})}\right)-\sum_{j}\left(\frac{\delta_{\tilde{\mathbf{c}}_{j},\tilde{\mathbf{d}}_{j}}(\mathbf{c}_{j},\mathbf{d}_{j})}{\mathcal{P}(\mathbf{c}_{j},\mathbf{d}_{j})}-\frac{\delta_{\tilde{\mathbf{d}}_{j}}(\mathbf{d}_{j})}{\mathcal{P}(\mathbf{d}_{j})}\right)\right].\tag{28}$$

Note that we used d/dϵ [1/P_ϵ(c|d)] = −[d/dϵ P_ϵ(c|d)]/P_ϵ²(c|d). Note also that Equation 18, which is the influence function for the potential outcome mean, is of the same form as Equation 28. Equation 28 is the foundation of the approach that shall be discussed in the following section for deriving the IF of a general class of estimands.

## 3.4 Influence Function Of A General Estimand

We have so far discussed the influence function of a causal effect of the form P(y|do(T = t′)). In this section, we show how IFs can be derived for any general estimand of the form:

$$\Psi(\mathcal{P})=\mathbb{E}_{\mathcal{P}}[\kappa(\mathcal{P})],\tag{29}$$

where κ(·) is a functional. Then we have:

$$\mathcal{P}_{\epsilon}=\epsilon\hat{\mathcal{P}}_{n}+(1-\epsilon)\mathcal{P},\qquad\Psi(\mathcal{P}_{\epsilon})=\int\kappa(\mathcal{P}_{\epsilon})\,\mathcal{P}_{\epsilon}\,d\mathbf{v},$$

$$\begin{split}\left.\frac{d\Psi(\mathcal{P}_{\epsilon})}{d\epsilon}\right|_{\epsilon=0}&=\int\left[\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\cdot\kappa(\mathcal{P}_{\epsilon})+\frac{d\kappa}{d\mathcal{P}_{\epsilon}}\cdot\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\cdot\mathcal{P}_{\epsilon}\right]_{\epsilon=0}d\mathbf{v}\\ &=\int\left(\kappa(\mathcal{P})+\frac{d\kappa}{d\mathcal{P}}\cdot\mathcal{P}\right)\cdot\left.\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\right|_{\epsilon=0}d\mathbf{v}\\ &=\int\kappa(\mathcal{P})\cdot\left.\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\right|_{\epsilon=0}d\mathbf{v}+\mathbb{E}_{\mathcal{P}}\left[\frac{d\kappa}{d\mathcal{P}}\cdot\left.\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\right|_{\epsilon=0}\right].\end{split}\tag{30}$$
Figure 3: This figure illustrates the components involved in using IFs to improve our estimates of the Average Treatment Effect (ATE), where the ATE is our target estimand Ψ. We combine the output from an outcome model Q̂ with a propensity score model Ĝ and an update step method U. This yields an estimate Ψ̂.

The value of dP_ϵ/dϵ|_{ϵ=0} can be plugged into Eq. 30 using Eq. 28 and Eq. 11, which completes the derivation of the IF for the estimand in Eq. 29. As an example, if the queried estimand is the average density of a variable Y, that is, κ is the identity functional, then:

$$\Psi(\mathcal{P})=\int\mathcal{P}^{2}(y)\,dy,$$

$$\left.\frac{d\Psi(\mathcal{P}_{\epsilon})}{d\epsilon}\right|_{\epsilon=0}=\int(\mathcal{P}+1\cdot\mathcal{P})\cdot\left.\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\right|_{\epsilon=0}dy=\int2\mathcal{P}(y)\cdot\left.\frac{d\mathcal{P}_{\epsilon}}{d\epsilon}\right|_{\epsilon=0}dy.$$

Algorithm 1 summarises the steps of our proposed automated approach to derive the influence function of an estimand of the form presented in Eq. 29, given a general graphical model. Note that if the effect is identifiable, this algorithm outputs the analytic influence function, and otherwise it throws a failure. A demonstrative example can be found in the associated code repository in the form of a notebook, and/or in the attached supplementary code.

Algorithm 1 IF of an identifiable effect.
input: An estimand Ψ(P) of the form of Eq. 29, an interventional distribution P, causal graph G
output: The analytic IF of Ψ(P) if P is identifiable, fail o.w.
1: **if** P is identifiable **then**
2: P̃ ← the identification functional of P (Eq. 25) using do-calculus
3: ϕ ← the IF of P̃ as in Eq. 28
4: dΨ(P_ϵ)/dϵ|_{ϵ=0} ← the formulation as in Eq. 30
5: Φ ← plug ϕ into dΨ(P_ϵ)/dϵ|_{ϵ=0} using Eq.
11 6: **return** Φ 7: **else** 8: **return FAIL** ## 4 Updating/Debiasing Our Estimators With Ifs If we can estimate the IF ϕ then we can update our initial estimator Ψ(Pˆn) according to Eq. 14 in order to reduce the residual bias which the IF is essentially modeling. To be clear, this means we can improve our initial NN estimators, without needing additional data. We consider four ways to leverage the IF to reduce bias which we refer to as (1) the one-step update, (2) the submodel update (sometimes referred to as a targeted update), (3) our own proposed MultiStep procedure, and (4) targeted regularization. The first three approaches can be trivially applied to estimators which have already been trained, making them attractive as post-processing methods for improving estimation across different application areas. To illustrate these approaches, we consider the ATE to be our chosen target estimand, the IF for which is defined in Equation 23. Figure 3 provides an illustration of the components involved in updating the estimator in the context of ATE estimation. ## 4.1 One-Step And Submodel Approach Using the *one-step* approach, the original estimator Ψ(Pˆn) can be improved by a straightforward application of the Von Mises Expansion (VME) of Eq. 14 - one takes the initial estimator and adds to it the estimate of the IF to yield an updated estimator which accounts for the 'plug-in bias'. In the case of the ATE, this yields the augmented inverse propensity weighted (AIPW) estimator (Hines et al., 2021; Neugebauer & van der Laan, 2005; Kurz, 2021). The second *submodel* approach updates the initial estimate by solving Pn i=1 ϕ(zi,Pˆn) = 0, which can be done by estimating the degree to which the principal estimator Q is being biased, and correcting for it. 
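As a concrete reference point, the first three of these update strategies can be sketched numerically for the ATE. This is a minimal sketch rather than the implementation: we assume the standard clever-covariate forms $H(\mathbf{z},T=1)=1/\hat{G}(\mathbf{x})$ and $H(\mathbf{z},T=0)=-1/(1-\hat{G}(\mathbf{x}))$, use a linear (rather than logistic) submodel, take the absolute value of the empirical mean in the MultiStep objective so that it is bounded below, and replace the Adam-based MultiStep optimization of Section 4.2 with a bounded scalar search. The function and variable names are our own.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def clever_covariate(t, g):
    # H(z, t) at the observed treatment: standard TMLE clever covariate.
    return t / g - (1 - t) / (1 - g)


def ate_if(y, t, q1, q0, g, gamma=0.0):
    # ATE influence function in the spirit of Eq. 33, under the submodel
    # update Q*(1, x) = Q(1, x) + gamma/g and Q*(0, x) = Q(0, x) - gamma/(1-g).
    h = clever_covariate(t, g)
    q_t = np.where(t == 1, q1, q0)
    q1s, q0s = q1 + gamma / g, q0 - gamma / (1 - g)
    psi = np.mean(q1s - q0s)
    return h * (y - q_t - gamma * h) + q1s - q0s - psi


def one_step_ate(y, t, q1, q0, g):
    # (1) One-step / AIPW: plug-in estimate plus the empirical mean of the IF.
    q_t = np.where(t == 1, q1, q0)
    return np.mean(q1 - q0) + np.mean(clever_covariate(t, g) * (y - q_t))


def submodel_ate(y, t, q1, q0, g):
    # (2) Submodel: intercept-free least-squares fit of gamma on the clever
    # covariate (a linear-link stand-in for the logistic regression used
    # when the outcome Y is binary), then the updated plug-in estimate.
    h = clever_covariate(t, g)
    q_t = np.where(t == 1, q1, q0)
    gamma = np.sum(h * (y - q_t)) / np.sum(h ** 2)
    return np.mean((q1 + gamma / g) - (q0 - gamma / (1 - g))), gamma


def multistep_ate(y, t, q1, q0, g, a1=1.0, a2=1.0):
    # (3) MultiStep: search gamma to minimize a1*|mean(IF)| + a2*var(IF),
    # an empirical version of the objective in expression 32.
    def obj(gamma):
        phi = ate_if(y, t, q1, q0, g, gamma)
        return a1 * abs(phi.mean()) + a2 * phi.var()

    gamma = minimize_scalar(obj, bounds=(-5.0, 5.0), method="bounded").x
    return np.mean((q1 + gamma / g) - (q0 - gamma / (1 - g))), gamma
```

A useful property of the (linear) submodel step is directly checkable: at the fitted $\hat{\gamma}$, the empirical mean of the influence function is zero.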
This approach works by first constructing a parametric submodel in terms of the plug-in estimator $\hat{Q}(t,\mathbf{x})$ and a function H of the propensity score $\hat{G}$ (which represents the biasing quantity), and then deriving an updated plug-in estimator $\hat{Q}^{*}(t,\mathbf{x})$. In the expressions which follow, we assume binary treatment T and have replaced the Dirac delta functions with indicator functions. First, the updated estimator $\hat{Q}^{*}(t,\mathbf{x})$ can be expressed in terms of a biasing quantity H as follows:

$$\hat{Q}^{*}(T=1,\mathbf{x}_{i})=\hat{Q}(T=1,\mathbf{x}_{i})+\hat{\gamma}H(\mathbf{z}_{i},T=1),\quad\text{where}\;\;H(\mathbf{z}_{i},T=1)=\frac{\mathbbm{1}_{t_{i}}(1)}{\hat{G}(\mathbf{x}_{i})}.$$

The equivalent for when treatment is 0 can be expressed as:

$$\hat{Q}^{*}(T=0,\mathbf{x}_{i})=\hat{Q}(T=0,\mathbf{x}_{i})+\hat{\gamma}H(\mathbf{z}_{i},T=0),\quad\text{where}\;\;H(\mathbf{z}_{i},T=0)=-\frac{1-\mathbbm{1}_{t_{i}}(0)}{1-\hat{G}(\mathbf{x}_{i})}.\tag{31}$$

$H(\mathbf{z}_{i},t_{i})$ is known as the clever covariate. The parameter $\hat{\gamma}$ is estimated as the coefficient in the associated intercept-free 'maximum-likelihood linear regression' reflected in the first line of Eq. 31 above. When the updated $\hat{Q}^{*}$ is substituted into Eq. 23, one solves in a single update for what is known as the efficient influence function, and, following the update, the mean of the IF, as well as the residual bias, will be zero. In practice, the two methods yield different results with finite samples (Porter et al., 2011; Benkeser et al., 2017). In particular, the one-step / AIPW estimator may yield estimates outside of the range of values allowed according to the parameter space, and be more sensitive to near-positivity violations (*i.e.*, when the probability of treatment is close to zero) owing to the first term on the RHS of Eq. 23 (Luque-Fernandez et al., 2018).
In contrast, the submodel approach will not yield such out-of-range estimates, because it is constrained by the intercept-free regression step - for instance, if the outcome Y is binary, the logistic link function will prevent any 'out-of-bounds' convergence behavior which might otherwise result in an unstable estimate for the coefficient $\hat{\gamma}$. Thus, both the one-step and submodel approaches achieve the same aim: solving for the efficient influence function. However, the one-step approach does so naively according to the VME, and the second does so via an additional regression stage (but nonetheless still only requires a single update to the original outcome model Q).

Model Robustness: One of the consequences of finding the efficient IF is that we also achieve improved model robustness. This is because, in cases where multiple plug-in models are used to derive an unbiased estimate, we achieve consistent estimation (*i.e.*, we converge in probability to the true parameter as the sample size increases) even if one of the models is misspecified (*e.g.*, the ATE requires both a propensity score model and an outcome model, and thus the IF facilitates *double* robustness). Furthermore, in cases where both models are well-specified, we achieve efficient estimation. It is worth noting, however, that this double-robustness property does not apply to the limiting distribution of the estimates being Gaussian when data-adaptive plug-in estimators are used (Benkeser et al., 2017; van der Laan, 2014). In other words, if one or both of the two models is/are incorrectly specified, the estimates may not be normally distributed, thus invalidating statistical inference. In our later evaluation, we thus might expect models to fail at achieving normally distributed estimates before they fail at yielding *unbiased* estimates.
It is possible to extend the framework such that the double robustness property also applies to the limiting normal distribution of the estimates (Benkeser et al., 2017; van der Laan, 2014), but we leave this to future work. For more technical details on the double robustness property see van der Laan & Rose (2011); Hines et al. (2021); Benkeser et al. (2017), and Kurz (2021).

## 4.2 Multistep Approach

In this section we present our own variant of the estimator update process, which we call MultiStep updates. In order to motivate the development of these methods, we begin by noting the limitations of the one-step and submodel update processes. In general, these updates are performed only once (Hines et al., 2021; van der Laan & Rose, 2011), and as described in Section 4.4, the efficacy of these update steps rests on the assumption that we are 'good enough' to begin with. In other words, the bias of our initial estimator must be able to be approximated by a linear submodel, such that taking a step in the direction of the gradient takes us in the right direction. We attempt to improve the empirical robustness of the one-step and submodel update steps by modifying the objective in the update step itself. Under the assumptions described above, the one-step and the submodel update approaches yield the efficient influence function. That is, $\sum_{i}^{n}\phi(\mathbf{z}_{i},\hat{\mathcal{P}}_{n})\approx0$. Furthermore, this influence function is also the one with the smallest variance (Tsiatis, 2006). Indirectly, the submodel process achieves this by finding the least-squares (or maximum-likelihood) solution to Eq. 31, updating the initial estimator $\hat{Q}(t,\mathbf{x}_{i})$ with some quantity $\hat{\gamma}$ of the clever covariate $H(\mathbf{z}_{i})$. We refer to this process as 'indirect' because the objective used to find $\hat{\gamma}$ can, alternatively, be specified explicitly. We refer to our update variant as MultiStep because whilst it still uses the linear submodel of Eq.
31, we optimize expression 32 below by searching over $\hat{\gamma}\in\Gamma$:

$$\operatorname*{min}_{\gamma\in\Gamma}\left[\alpha_{1}[\widehat{\mathbb{E}}[\phi(\mathbf{z}_{i},\widehat{\mathcal{P}})]]+\alpha_{2}[\widehat{\operatorname{Var}}[\phi(\mathbf{z}_{i},\widehat{\mathcal{P}})]]\right].\tag{32}$$

In words, rather than implicitly finding the solution to the IF via maximum-likelihood, we explicitly specify that the solution should minimize empirical approximations (circumflex/hat notation) of both the expectation and/or the variance of the influence function. The degree to which each of the constraints is enforced depends on hyperparameters $\alpha_{1}\in\mathbb{R}^{+}$ and $\alpha_{2}\in\mathbb{R}^{+}$ which weight the two constraints. In this objective, $\hat{\gamma}$ is related to the influence function by:

$$\begin{array}{l}\phi_{ATE}(\mathbf{z}_{i},\hat{\mathcal{P}}_{n})=H(\mathbf{z}_{i},t_{i})\left(y_{i}-\hat{Q}(t_{i},\mathbf{x}_{i})-\hat{\gamma}H(\mathbf{z}_{i})\right)\\ \\ +(\hat{Q}(1,\mathbf{x}_{i})+\hat{\gamma}H(\mathbf{z}_{i},1))-(\hat{Q}(0,\mathbf{x}_{i})+\hat{\gamma}H(\mathbf{z}_{i},0))-\Psi_{ATE}(\hat{\mathcal{P}}_{n}).\end{array}\tag{33}$$

In other words, $\hat{\gamma}$ is the coefficient on the clever covariate, which itself represents the quantity which is biasing our principal estimator Q. The objective in expression 32 therefore finds the $\hat{\gamma}$ which corrects for the biasing action of H by minimizing the mean and the variance of the influence function, which itself is computed according to Eq. 33. Finally, note also the link tying the influence function, $\hat{\gamma}$, and H to the target parameter:

$$\Psi_{ATE}(\hat{\mathcal{P}}_{n})=\frac{1}{n}\sum_{i}^{n}\left((\hat{Q}(1,\mathbf{x}_{i})+\hat{\gamma}H(\mathbf{z}_{i},1))-(\hat{Q}(0,\mathbf{x}_{i})+\hat{\gamma}H(\mathbf{z}_{i},0))\right).\tag{34}$$

## 4.3 Targeted Regularization

Finally, we can use targeted regularization which, to the best of our knowledge, has only been used twice in the NN literature, once in DragonNet (Shi et al., 2019), and once in TVAE (Vowels et al., 2021), both of
The idea is to solve the efficient influence curve *during* NN training, similarly to Eq. 31, on a per-batch basis. The parameter γˆ in Eq. 31 is treated as a learnable parameter, trained as part of the optimization of the NN. The submodel update in Eq. 31 is thereby recast as a regularizer which influences the weights and biases of the outcome model Qˆ(t, x). In total, then, the training objective is given by Eq. 35, where L q i is a negative log-likelihood (NLL) of the outcome model Qˆ(t, x) which has parameters θ (which comprises NN weights and biases), and L tl i is the NLL of the updated outcome model Qˆ∗(t, x˜), which is parameterized by both θ and γˆ. $${\cal L}=\min_{\theta}\left[\sum_{i}^{n}\left({\cal L}_{i}^{q}+{\cal L}_{i}^{tl}\right)\right].\tag{1}$$ $$(35)$$ As the second NLL term involves the clever covariate H, which in turn involves the plug-in estimator for the propensity score G(Z), we also need a model for the treatment which may be trained via another NLL objective, or integrated into the same NN as the one for the outcome model. Due to the paucity of theoretical analysis for NNs, it is not clear whether targeted regularization provides similar guarantees (debiasing, double-robustness, asymptotic normality) to the one-step and submodel approaches, and this is something we explore empirically. ## 4.4 Conditions For If Updates To Work The conditions necessary for the key relationships above to hold are that our estimator is regular and asymptotically linear such that the second order remainder term op(·) tends in probability to zero sufficiently quickly. These properties concern the sample size, the smoothness of the estimator, and the quality of the models we are using to approximate the relevant factors of the distribution. 
Clearly, if our initial model(s) is(are) poor/misspecified then a linear path (or equivalently, a first order VME) will not be sufficient to model the residual distance from the estimand, and the update steps may actually worsen our initial estimate. In summary, as long as our initial estimator is 'good enough' (insofar as it is regular and asymptotically linear), we can describe any residual bias using IFs. Doing so enables us to (a) reduce the residual bias by performing an update to our original estimator using the efficient IF (via the one-step, submodel, or targeted learning approaches), (b) achieve a more robust estimator, and (c) undertake statistical inference (because the updated estimate is normally distributed with a variance equal to the variance of the IF). Unfortunately, we are not currently aware of a way to assess 'good enough'-ness, particularly in the causal-inference setting, where explicit supervision is not available. There may exist a way to use the magnitude of the IF to assess the validity of the assumption of asymptotic normality, and use this as a proxy for model performance, but we leave this to future work. ## 5 Multinet One of the primary considerations when choosing estimation algorithms/models is whether the estimator can represent a family of distributions which is likely to contain the true Data Generating Process (DGP). Indeed, one of the motivations for semiparametrics is to be able to use non-parametric data-driven algorithms which have the flexibility to model complex DGPs, whilst still being able to perform statistical inference. If the estimator is functionally misspecified (*i.e.*, misspecified in terms of the functional form used to model the relationships in the DGP - linear, quadratic, spline etc.), then we are unlikely to arrive at an estimator which is asymptotically linear and therefore also amenable to the Influence Function update process. 
This behooves us to seek estimators which 'let the data speak' (van der Laan & Starmans, 2014). Consider the Super Learner (SL) (van der Laan et al., 2007), which is an ensemble method especially designed for parameter estimation in the context of causal inference. The SL process involves taking a weighted average of predictions from each candidate learner, and this quantity is taken as the output. The advantage of a SL is that the candidate library includes sufficient diversity with respect to functional form and complexity such that the true DGP is likely to fall within the family of statistical models which can be represented by the ensemble. The motivation for reducing bias resulting from *functional* misspecification is therefore similar to the motivation for influence functions. In both cases, accuracy/precision in estimation is the priority in the domain of causal inference, where the parameters concern critical decision making processes, such as the efficacy of medications. In contrast with the SL, many methods (including boosted trees, neural networks, or linear regressions) are based on single learners with one outcome prediction. As part of the development and evaluation of Influence Function updating methods for reducing bias and deriving consistent estimators, here we also present a new neural-network based estimator / learning algorithm. Early experimentation highlighted to us that even though NNs are flexible universal function approximators (Hornik, 1993; Hornik et al., 1989), they may nonetheless yield estimators which are not 'good enough' to enable us to leverage their asymptotic properties (such as bias reduction with IFs). In such cases, the IF update may actually *worsen* the initial estimate, pushing us further off course. This problem arose even for simple datasets with only quadratic features. 
Indeed, the problem with using neural networks for 'tabular' data (as opposed to, say, image data) is well known in the machine learning community, and interested readers are directed towards the survey by Kadra et al. (2021). Researchers have, in general, noted that gradient boosted trees (Freund & Schapire, 1997) consistently outperform neural network based learners (Shwartz-Ziv & Armon, 2021; Kadra et al., 2021; Borisov et al., 2022). However, Borisov et al. (2022) also found that ensembles of boosted trees and neural networks can nonetheless outperform boosted trees alone, and Kadra et al. (2021) found that sufficiently regularized neural networks could yield competitive performance, or even exceed the performance of boosted trees. Thus, in our view the avenues for research into neural network methods for tabular data are still open (and research on the subject continues regardless - see TVAE, CFR-net, and DragonNet, for example). Furthermore, if neural network based methods work well in ensemble combinations with boosted trees, we should attempt to maximise the performance of the neural network learners in order to maximise the performance of the associated ensemble. A block diagram for MultiNet is shown in Figure 1. The method comprises two main elements: a CounterFactual Regression (CFR) network backbone (Shalit et al., 2017) (without the integral probability metric penalty) with outcome 'taps' at each layer, and a constrained regression procedure designed to reflect the equivalent idea in the Super Learner. The idea therefore represents a combination of two well-known methods in causal inference - CFR-net and Super Learner. CFR is a popular NN method for causal inference tasks. It includes separate outcome arms depending on the treatment condition, and forms the backbone of MultiNet. For each layer in MultiNet, we predict $y|t,\mathbf{x}$ for $t=\{0,1\}$ and compute the corresponding layerwise cross-entropy loss (for a binary outcome).
This simulates the multiple outputs of a typical ensemble method - each layer represents a different (and increasingly complex) function of the input. The constrained regression is only applied after MultiNet has been trained. For each treatment condition, we concatenate the layerwise predictions into a matrix $\hat{\mathbf{Y}}$ which has shape (L × N), where L is the number of layers and N is the number of datapoints. We then solve $\hat{\mathbf{Y}}^{T}\boldsymbol{\beta}=\mathbf{y}$, with layerwise weights $\boldsymbol{\beta}$ which are constrained to sum to one and be non-negative. For this we use a SciPy (Jones et al., 2001) non-negative least squares solver. The weights are then used for subsequent predictions. Note that one of the strengths of this approach is that the layerwise outputs and constrained regression techniques can be flexibly applied to other neural network architectures. We may also interpret $\boldsymbol{\beta}$ to understand which layers are the most useful for solving the constrained regression, but leave this to future work. In order to explore the possibility of increased performance, we explore some additional variations on the core idea. Firstly, we allow each layerwise loss gradient to influence all prior network parameters. This is similar to the implementation of the auxiliary loss idea in the Inception network (Szegedy et al., 2015), and we refer to this variant as 'MN-Inc'. The second variant involves only updating the parameters of the corresponding layer, preventing gradients from updating earlier layers. We call this the 'cascade' approach, and refer to this variant as 'MN-Casc'. Finally, in order to increase the diversity across the layers and to approximate the diversity of an ensemble, we also explore the use of loss masking. For this, we partition the training data such that each layer has a different 'view' of the observations. The loss is masked such that each layer is trained on a different, disjoint subset of the data. We refer to variants of MultiNet with loss masking as 'MN+LM'.
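The constrained regression step can be sketched as follows. This is an assumption on our part: we use SciPy's `nnls` for non-negativity and then renormalize the weights so they sum to one, which only approximates a joint simplex constraint; the paper's exact solver setup may differ.

```python
import numpy as np
from scipy.optimize import nnls


def layerwise_weights(layer_preds: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit non-negative layerwise weights beta with layer_preds.T @ beta ~ y.

    layer_preds has shape (L, N): one row of predictions per layer.
    """
    beta, _ = nnls(layer_preds.T, y)  # non-negative least squares
    if beta.sum() > 0:
        beta = beta / beta.sum()      # renormalize so the weights sum to one
    return beta
```

The resulting `beta` can then weight the layerwise predictions of the trained network at inference time.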
The objective function of MultiNet is therefore:

$${\mathcal{L}}=\operatorname*{min}\left[{\frac{1}{n}}\sum_{i}^{n}{\frac{1}{L}}\sum_{l}^{L}m_{i}^{l}{\mathcal{L}}_{i}^{l}\right],\tag{36}$$

where $m_{i}^{l}$ is the mask for datapoint i in layer l (this is set to 1 for variants without loss masking), and $\mathcal{L}_{i}^{l}$ is the cross-entropy loss for datapoint i and layer l.

## 6 Experimental Setup

## 6.1 Open Questions

So far, we have presented the relevant background for causal inference and IFs, presented a way to derive the IF for a general graph (and, indeed, a general estimand), proposed a new MultiStep update process, and proposed a new NN based estimator called MultiNet. A top level illustration is shown in Fig. 3. The following open questions remain: (1) Can estimation methods be improved using the one-step, submodel, MultiStep (ours), or targeted regularization approaches? (2) How do various different outcome, propensity score, and update step methods compare? We aim to answer these questions through an extensive evaluation of different methods (Sec. 7). In particular, we examine the performance of the different approaches in terms of (a) precision in estimation, (b) robustness, and (c) statistical inference (normality of the distribution of estimates). We use these open questions to inform the design of our experiments, which are described below.

## 6.2 Data

Recent work has highlighted the potential for the performance of modern causal inference methods to be heavily dataset-dependent, and has recommended the use of bespoke datasets which transparently test specific attributes of the evaluated models across different dimensions (Curth et al., 2021b). We therefore undertake most of the evaluation using variants of a DGP which we refer to as the LF-dataset, and which has been used for similar evaluations in the literature (Luque-Fernandez et al., 2018). We also evaluate using the well-known IHDP dataset (Hill, 2011; Dorie, 2016).
![17_image_0.png](17_image_0.png)

Figure 4: Graph for the 'LF' dataset used by Luque-Fernandez et al. (2018).

## 6.2.1 Lf Dataset Variants

The initial and original LF-dataset variant, (v1), models 1-year mortality risk for cancer patients treated with monotherapy or dual therapy. One motivation for starting with this DGP is that its polynomial functional form is not sufficiently complex to unfavourably bias the performance of any method from the start. The dataset also exhibits near-positivity violations, and will therefore highlight problems associated with the propensity score models which are necessary for the update process. We also adjust the level of non-linearity in order to assess the robustness of each method to increased complexity. Accordingly, we introduce an exponential response into the potential outcome under monotherapy (t = 1) for the second variant (v2). Our LF-datasets comprise 100 samples from a set of generating equations. Both variants are designed to highlight problems which may arise due to near-positivity violations. The graph for the synthetic 'LF' dataset used in work by Luque-Fernandez et al. (2018) is given in Fig. 4. The DGP is based on a model for cancer patient outcomes for patients treated with monotherapy (t = 1) and dual therapy (t = 0), and the generating equations are as follows:

$$X_{1}\sim Be(0.5),\quad X_{2}\sim Be(0.65),$$
$$X_{3}\sim\mbox{int}[U(0,4)],\quad X_{4}\sim\mbox{int}[U(0,5)],$$
$$T\sim Be(pr),\quad\mbox{where}$$
$$pr=\sigma(-5+0.05X_{2}+0.25X_{3}+0.6X_{4}+0.4X_{2}X_{4}),\tag{37}$$
$$Y_{1}=\sigma(-1+1-0.1X_{1}+0.35X_{2}+0.25X_{3}+0.2X_{4}+0.15X_{2}X_{4}),$$
$$Y_{0}=\sigma(-1+0-0.1X_{1}+0.35X_{2}+0.25X_{3}+0.2X_{4}+0.15X_{2}X_{4}),\tag{38}$$

where int[.] is an operator which rounds the sample to the nearest integer, Be is a Bernoulli distribution, U is a uniform distribution, σ is the sigmoid function, and Y1 and Y0 are the counterfactual outcomes when T = 1 and T = 0, respectively.
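The generating equations above can be sampled as in the following sketch (an illustration rather than the authors' code; the excerpt does not show whether an observed binary outcome is then drawn from $Y_1$/$Y_0$, so the sketch returns the potential-outcome probabilities directly):

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def sample_lf_v1(n, seed=0):
    """Draw one simulated LF (v1) dataset following Eqs. 37-38."""
    rng = np.random.default_rng(seed)
    x1 = rng.binomial(1, 0.5, n)           # biological sex
    x2 = rng.binomial(1, 0.65, n)          # age category
    x3 = np.rint(rng.uniform(0, 4, n))     # cancer stage, int[U(0, 4)]
    x4 = np.rint(rng.uniform(0, 5, n))     # comorbidities, int[U(0, 5)]
    # Treatment assignment; the -5 offset induces near-positivity violations.
    pr = sigmoid(-5 + 0.05 * x2 + 0.25 * x3 + 0.6 * x4 + 0.4 * x2 * x4)
    t = rng.binomial(1, pr)
    base = -0.1 * x1 + 0.35 * x2 + 0.25 * x3 + 0.2 * x4 + 0.15 * x2 * x4
    y1 = sigmoid(-1 + 1 + base)            # potential outcome, monotherapy
    y0 = sigmoid(-1 + 0 + base)            # potential outcome, dual therapy
    return x1, x2, x3, x4, pr, t, y1, y0
```

Because all treatment coefficients are non-negative, $\sigma(-5)\approx0.0067$ is the floor of the propensity score, matching the near-positivity violations the dataset is designed to exhibit.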
Covariate X1 represents biological sex, X2 represents age category, X3 represents cancer stage, and X4 represents comorbidities. We create a variant (v2) of this DGP by introducing non-linearity into the outcome, and then into the treatment assignment, as follows:

$$Y_{1}=\sigma(\exp[-1+1-0.1X_{1}+0.35X_{2}+0.25X_{3}+0.2X_{4}+0.15X_{2}X_{4}]).$$

The two variants are designed to yield near-positivity violations in order to highlight weaknesses in methods which depend on a reliable propensity score model. Figs. 5 and 6 provide information on the propensity scores for the v1 and v2 variants (the second variant has the same propensity score generating model as v1). Finally, for LF (v1) and LF (v2) we create further variants with different sample sizes n = {500, 5000, 10000} in order to explore sensitivity to finite samples.

![18_image_0.png](18_image_0.png)

Figure 5: Marginal propensity scores for the LF (v1) and LF (v2) datasets. Note that the minimum probability of treatment in a random draw from the DGP is 0.007. The datasets are intentionally designed such that certain subgroups are unlikely to receive treatment, resulting in near-positivity violations.

![19_image_0.png](19_image_0.png)

Figure 6: Propensity scores by treatment assignment for a sample from the LF (v1) dataset.

## 6.2.2 Ihdp

The second dataset comprises 100 simulations from the well-known IHDP6 dataset. We use the version corresponding with the usual setting A of the NPCI data generating package (Dorie, 2016) (see Shi et al., 2019; Shalit et al., 2017, and Yao et al., 2018), which comprises 608 untreated and 139 treated samples (747 in total). This variant actually corresponds with variant B from Hill (2011). There are 25 covariates, 19 of which are discrete/binary, and the rest are continuous. The outcome generating process is designed such that under treatment, the potential outcome is exponential, whereas under no treatment the outcome is a linear function of the covariates (Curth et al., 2021b).
This dataset represents a staple benchmark for causal inference in machine learning. However, it is worth noting that recent work has shown it to preferentially bias certain estimators (Curth et al., 2021b), so we include this dataset for completeness but discount our interpretation of the results accordingly.

## 6.3 Methods, And Evaluation Criteria

We evaluate a number of different methods in terms of their ability to estimate the ATE. A summary of the complete set of methods explored as part of the evaluation is shown in Table 1. As described above, we are interested in three properties relating to performance: estimation precision, robustness, and normality. Estimation precision is evaluated using the mean squared error (MSE), calculated as $r^{-1}\sum_{i}^{r}[\hat{\tau}_{i}-\tau]^{2}$, where r = 100 is the number of simulations, and the standard error (s.e.) of the ATE estimates is computed as the standard deviation of $\hat{\tau}$. The MSE was chosen for its sensitivity to large errors (arguably important in the application domain of causal inference), and because it was found that it provided a more informative spread of results than the root-MSE. Robustness will be evaluated by comparing initial estimators that fail to exhibit the desired properties with the results once these estimators have been updated. For normality, we examine the empirical distribution of the estimates. Using these distributions, we provide p-values from Shapiro-Wilk tests for normality (Shapiro & Wilk, 1965). Doing so provides an indication of the estimator's asymptotic linearity and whether the IFs are facilitating statistical inference as intended.
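These three criteria can be computed over the r simulated estimates as in the following sketch (function names are our own, not from the paper's code; we use the sample standard deviation for the s.e., which the text does not specify):

```python
import numpy as np
from scipy.stats import shapiro


def evaluate_estimates(tau_hats, tau_true):
    """MSE, standard error, and Shapiro-Wilk normality p-value of ATE estimates."""
    tau_hats = np.asarray(tau_hats, dtype=float)
    mse = np.mean((tau_hats - tau_true) ** 2)  # precision over r simulations
    se = np.std(tau_hats, ddof=1)              # spread of the estimates
    p = shapiro(tau_hats).pvalue               # normality of their distribution
    return mse, se, p
```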
## 6.3.1 Algorithms/Estimators

For the outcome model Q we compare linear/logistic regression (LR); a Super Learner (SL) comprising a LR, a LR with extra quadratic features, a Support Vector classifier, a random forest classifier (Breiman, 2001), a nearest neighbours classifier (Altman, 1992), and an AdaBoost classifier (Freund & Schapire, 1997); an implementation of the backbone to the CounterFactual Regression network (without the integral probability metric penalty) (Shalit et al., 2017) (CFR); DragonNet (D) with and without targeted regularization (Shi et al., 2020); TVAE (Vowels et al., 2021) (which includes targeted regularization); T-learner (T) (Kunzel et al., 2019) with a gradient boosting machine (Friedman, 2001); S-learner (S) (Kunzel et al., 2019) with a gradient boosting machine (Friedman, 2001); and our MultiNet (MN) variants (MN-Inc, MN-Casc, MN-Inc+LM, *MN-Casc+LM*).

6Available from https://www.fredjo.com/

| Q Method | G Method | U Method | Datasets | Evaluation Criteria |
|---|---|---|---|---|
| Linear/Logistic Regression (Q-LR) | Linear/Logistic Regression (G-LR) | OneStep (U-ones) | LF (v1) n={500, 5000, 10000} | Mean Squared Error (MSE) |
| SuperLearner (Q-SL) | SuperLearner (G-SL) | Submodel (U-sub) | LF (v2) n={500, 5000, 10000} | Shapiro-Wilk Test (p) |
| CFR (Q-CFR) | CFR (G-CFR) | MultiStep (U-multi) | IHDP | Standard Error of Estimation (s.e.) |
| MultiNet (Q-MN) + variants | MultiNet (G-MN) + variants | Targeted Regularization (treg) | | |
| TVAE (Q-TVAE) | P-learner (G-P) | None (U-Base) | | |
| DragonNet (Q-D) | DragonNet (G-D) | | | |
| S-learner (Q-S) | | | | |
| T-learner (Q-T) | | | | |

Table 1: A summary of all variants and metrics explored as part of the evaluation. Note that additional results for other metrics (*e.g.*, mean absolute error) may be derived using the code in supplementary material.

| Parameter | Min | Max |
|---|---|---|
| Batch size | 10 | 64 |
| L2 Weight Penalty | 1e-5 | 1e-3 |
| No. of Iterations | 2000 | 10000 |
| Learning Rate | 1e-5 | 1e-2 |
| No. Layers | 2 | 14 |
| Dropout Prob. | 0.1 | 0.5 |
| No. Neurons per Layer | 5 | 200 |

Table 2: Hyperparameter search space for CFR and MN based methods.

When estimating the IF of the ATE, we also need estimators for the propensity score / treatment model, which we refer to as G. For this we use LR and SL, the ElasticNet 'P-learner' (Zou & Hastie, 2005), DragonNet, as well as CFR and MN. The latter two NN methods must be modified for this task, and for this we simply remove one of the outcome arms, such that we can estimate $t|\mathbf{x}$. This selection of algorithms was chosen to represent a suitably diverse set of common yet weak learners (*e.g.*, logistic regression), modern/well-known neural network causal inference methods (*e.g.*, CFR-net), neural network methods which already incorporate semi-parametric techniques (DragonNet and TVAE), and recent and/or popular methods proposed in the causal inference or biostatistics literature (*e.g.*, Super Learner, T- and S-learners). The LR and SL approaches are implemented using the default algorithms in the scikit-learn package (Pedregosa et al., 2011), whilst the DragonNet, S-learner, T-learner, and P-learner are implemented using the CausalML package (Chen et al., 2020). For DragonNet the number of neurons per layer was set to 200, the learning rate set to $1\times10^{-1}$, number of epochs = 30, and batch size = 64. For TVAE the dimensionality of all latent variables was set to 5, the number of layers set to 2, batch size = 200, number of epochs = 100, learning rate = $5\times10^{-4}$, and a targeted regularization weight of 0.1.
For CFR and MN, we undertake a Monte-Carlo train-test split hyperparameter search with 15 trials for every one of the 100 samples from the DGP. The best performing set of hyperparameters is then used to train CFR and MN on the full dataset. Additional, separate hyperparameter searches are undertaken for methods using targeted regularization. The hyperparameters which are included in the search space for CFR and MN are presented in Table 2. Note that the iteration count is not in terms of epochs - it represents the number of batches sampled randomly from the dataset. The number of iterations can be multiplied by the batch size and divided by the dataset size to approximately determine the equivalent number of epochs this represents. Note that, unlike in traditional supervised learning tasks, using the full data with causal inference is possible because the target estimand is not the same quantity as the quantity used to fit the algorithms (Farrell et al., 2021). Indeed, whilst cross-fitting is used for the hyperparameter search, subsequent use of the full data has been shown to be beneficial, especially in small samples (Curth & van der Schaar, 2021). It is reassuring to note that overfitting is likely to worsen our recorded estimates, rather than misleadingly improve them. Similarly, even though the SL is trained and the corresponding weights derived using a hold-out set, the final algorithm is trained on the full dataset for estimation. Logistic regression is simply trained on the full dataset without any data splitting. This is what motivated us to ask the question as to whether or not it is possible to have a 'free lunch' with IFs.
Indeed, if no additional data is required, but we can nonetheless improve our estimates and achieve valid statistical inference (for the purposes, for example, of null hypothesis significance tests), then this would represent a valuable gain. For all treatment models, we bound predictions to fall in the range [.025, .975] (Li et al., 2021).

## 6.3.2 Update Steps

We evaluate the one-step (U-ones), submodel (U-sub), MultiStep (U-multi), and targeted regularization (Treg) approaches to the update process. The MultiStep update variants are optimized using the Adam (Kingma & Ba, 2017) optimizer. For small datasets (n < 1000) we undertake full gradient descent (*i.e.*, using the full data), and for larger datasets we use stochastic mini-batch gradient descent. The batch size for datasets with a sample size n > 1000 is set to 500, we undertake 4000 steps of optimization, and the learning rate for the Adam optimizer is set to $5\times10^{-4}$. The MultiStep objective has hyperparameters $\alpha_{1}$ and $\alpha_{2}$ which weight the constraints in the objective (the expectation and variance of the influence function, respectively). We set both to one.

Table 3: Initial results over a restricted set of model variations. All update steps use the same propensity score G- algorithm as their Q-model algorithm, unless indicated by 'w/ G-SL', which indicates the use of a SuperLearner. Mean Squared Errors (MSE) and standard error (s.e.) (lower is better) and Shapiro-Wilk test p-values for normality (higher is better) for 100 simulations. Best results are those competing across all three dimensions. **Bold** indicates the best result for each algorithm, **bold and underline** indicates the best result *for each dataset variant*. Multiple methods may perform equally well.
(Each cell reports Shapiro-Wilk p / MSE / s.e.; '-' indicates not applicable.)

| Dataset | Q Model | U-Base | U-ones | U-sub | Treg | Treg+U-sub | U-ones w/ G-SL | U-sub w/ G-SL |
|---|---|---|---|---|---|---|---|---|
| LF (v1), n = 5000 | LR | .001 / .0004 / .002 | .276 / .0007 / .003 | .248 / .0008 / .003 | - | - | .378 / .0006 / .003 | .591 / .0008 / .003 |
| | SL | .001 / .0004 / .002 | .53 / .0008 / .003 | .651 / .0009 / .003 | - | - | - | - |
| | CFR | .0 / .0114 / .008 | .001 / .0042 / .004 | .01 / .01 / .003 | .07 / .0113 / .008 | .0 / .0105 / .002 | .396 / .0006 / .003 | .909 / .0015 / .003 |
| | MN-Inc | .052 / .0008 / .003 | .78 / .0007 / .003 | .394 / .001 / .003 | .729 / .0012 / .003 | .681 / .001 / .003 | .639 / .0008 / .003 | .329 / .001 / .003 |
| | MN-Inc+LM | .135 / .0009 / .003 | .141 / .0007 / .003 | .578 / .0009 / .003 | .0 / .0017 / .004 | .957 / .0011 / .003 | .969 / .0008 / .003 | .786 / .0009 / .003 |
| | MN-Casc | .0 / .0018 / .004 | .231 / .0014 / .002 | .0 / .0018 / .003 | .083 / .0086 / .007 | .702 / .0045 / .004 | .831 / .0007 / .003 | .339 / .0009 / .003 |
| | MN-Casc+LM | .053 / .0058 / .006 | .018 / .002 / .003 | .204 / .0037 / .003 | .0 / .0091 / .008 | .74 / .0036 / .003 | .747 / .0007 / .003 | .625 / .001 / .003 |
| LF (v2), n = 5000 | LR | .066 / .0024 / .002 | .752 / .0007 / .003 | .497 / .0008 / .003 | - | - | .785 / .0007 / .003 | .867 / .0009 / .003 |
| | SL | .349 / .0017 / .003 | .938 / .0008 / .003 | .92 / .0009 / .003 | - | - | - | - |
| | CFR | .0 / .0185 / .01 | .0 / .006 / .005 | .0 / .0151 / .002 | .0 / .035 / .01 | .008 / .0162 / .002 | .623 / .0007 / .003 | .065 / .0015 / .003 |
| | MN-Inc | .119 / .001 / .003 | .204 / .0006 / .003 | .211 / .0008 / .003 | .002 / .0009 / .002 | .029 / .0008 / .003 | .058 / .0007 / .003 | .049 / .0008 / .003 |
| | MN-Inc+LM | .0 / .0011 / .003 | .438 / .0009 / .003 | .813 / .0011 / .003 | .139 / .0071 / .005 | .678 / .0026 / .003 | .959 / .0005 / .002 | .949 / .0009 / .003 |
| | MN-Casc | .0 / .002 / .004 | .013 / .0033 / .002 | .892 / .0043 / .003 | .77 / .014 / .006 | .365 / .0101 / .002 | .272 / .0007 / .003 | .264 / .0011 / .003 |
| | MN-Casc+LM | .257 / .0113 / .007 | .349 / .0032 / .003 | .001 / .0083 / .002 | .066 / .0295 / .007 | .0 / .0112 / .002 | .897 / .0006 / .003 | .241 / .0013 / .003 |
| IHDP, n = 747 | LR | .022 / .1818 / .019 | .0 / .0576 / .035 | .0 / .0461 / .044 | - | - | .0 / .1322 / .019 | .0 / .0597 / .03 |
| | SL | .0 / .0466 / .032 | .0 / .0311 / .033 | .0 / .0346 / .034 | - | - | - | - |
| | CFR | .0 / .7709 / .098 | .0 / .2865 / .074 | .0 / .0439 / .052 | .0 / 25.5 / .3 | .0 / .0604 / .051 | .0 / .2626 / .063 | .0 / 1.7 / .114 |
| | MN-Inc | .0 / .0324 / .042 | .0 / .0297 / .044 | .0 / 8.7 / .299 | .0 / .0482 / .042 | .0 / 30.8 / .537 | .0 / .0243 / .044 | .0 / .0425 / .042 |
| | MN-Inc+LM | .0 / .0393 / .045 | .0 / .0259 / .043 | .0 / .9849 / .099 | .0 / .1332 / .038 | .0 / 1.9 / .138 | .0 / .0243 / .044 | .0 / .0327 / .042 |
| | MN-Casc | .0 / .1977 / .046 | .0 / .0737 / .04 | .0 / .064 / .04 | .0 / 2.9 / .115 | .0 / .102 / .042 | .0 / .0816 / .042 | .0 / .0383 / .047 |
| | MN-Casc+LM | .0 / 4.7 / .158 | .0 / 1.4 / .093 | .0 / .2118 / .049 | .0 / 23.9 / .164 | .0 / .1824 / .06 | .0 / 1.1 / .079 | .0 / 4.7 / .202 |

## 7 Experimental Results

Given the large number of combinations in a full-factorial design (approximately 5000 results), we undertake an initial set of experiments to narrow down the evaluation space to focus on
the most competitive methods.

## 7.1 Initial Evaluation

We share initial results in Table 3. These results were used to inform a subsequent set of experiments with a restricted set of variants. Specifically, we used these to select the most successful variant of MultiNet. For **LF (v1)**, we see that the base CFR performs significantly worse in all considered metrics than LR and SL. Base LR and base SL achieved the best results in terms of MSE and s.e., although note that none of the base algorithms achieve asymptotic normality. Notice that LR's base MSE performance on LF (v1) is actually better than its MSE performance using the one-step and submodel updates. Such behaviour has been noted before by Luque-Fernandez et al. (2018), and occurs when the base learner is already close to the target and/or when both the outcome and treatment models are misspecified. Unlike CFR, our *MN-Inc* and *MN-Casc* variants worked well as either outcome or treatment models, yielding the best results with the one-step update. The other two of our MN- variants also performed well with the one-step and submodel updates, but required a SL treatment model to do so. The potential improvement for LR in combination with update steps is more striking for **LF (v2)**. Here, the LR base outcome model is misspecified (LF v2 has an exponential outcome model). Combining the LR with the SL one-step and submodel update processes enabled the LR method to perform well in spite of the non-linearity of the outcome. This is a demonstration of double-robustness: even though the outcome model is misspecified, the treatment model is not (or at least, it is sufficiently correctly specified), owing to the use of a SL, and the estimates are improved.
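To make the mechanism concrete, the following is a minimal numpy sketch of the standard one-step (AIPW-style) correction for the ATE; the variable names are ours, and this illustrates the general recipe rather than our exact implementation. If either the outcome predictions or the propensity scores are sufficiently well specified, the corrected estimate remains consistent.

```python
import numpy as np


def one_step_ate(y, t, q0, q1, g, clip=0.025):
    """One-step corrected ATE: the plug-in estimate plus the sample mean of
    the efficient influence function. q0/q1 are outcome-model predictions
    under control/treatment; g are propensity scores."""
    g = np.clip(g, clip, 1 - clip)  # bound predictions as in Section 6.3.1
    plugin = np.mean(q1 - q0)
    eif = ((q1 - q0)
           + t / g * (y - q1)
           - (1 - t) / (1 - g) * (y - q0)
           - plugin)
    return plugin + np.mean(eif)  # the EIF mean corrects the plug-in bias
```

For example, with a deliberately misspecified (constant) outcome model but correct propensity scores, the correction recovers the true effect, mirroring the LR behaviour described above.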
As with the LF (v1) dataset, combining CFR with IFs resulted in a substantial improvement, especially when using an SL treatment model, yielding a competitive MSE and s.e., and normally distributed estimates (thus amenable to statistical inference). These results demonstrate the power of semiparametric methods for improving our estimation with NNs, and again illustrate the double-robustness property: the CFR outcome model was poorly specified, but was able to recover with an SL treatment model. The performance of our MN- variants on LF (v2) was similar to that observed on LF (v1). Unfortunately, no method variant yielded normally distributed estimates with the **IHDP** dataset. The worst performing estimator across any combination of semiparametric techniques was LR. This makes sense given the non-linearity in the IHDP outcome process (Curth et al., 2021b). The SL with the one-step or submodel updates performed comparably (poorly) to the best CFR and *MN-Casc* variants, although the SL provided a smaller s.e. Overall, the best methods were our *MN-Inc* and *MN-Inc+LM* variants in combination with either a one-step update or a one-step update using a SL treatment model. The MultiNet variant which performed the best and most consistently across **all datasets** was our *MN-Inc* (or equally, *MN-Inc+LM*) with the one-step update. Whereas other methods benefited from the help of a SL treatment model, *MN-Inc* worked well as both an outcome and a treatment model, making it the best all-rounder across datasets, as well as the least dependent on the SL for correction. For all NN-based approaches, targeted regularization made little difference, and sometimes resulted in instability and high MSEs. Further work is required to investigate this, although it may relate to which treatment model is used, and the associated sensitivity to positivity violations. A prior application also described the potential for the regularization to be inconsistent (Shi et al., 2019).
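For reference, the targeted regularization penalty of Shi et al. (2019) perturbs the outcome prediction in the direction of the 'clever covariate' and penalizes the resulting residual. The sketch below uses our own variable names; in practice, epsilon is learned jointly with the network.

```python
import numpy as np


def targeted_reg_penalty(y, t, q, g, epsilon, beta=1.0, clip=0.025):
    """Targeted-regularization penalty in the style of Shi et al. (2019)."""
    g = np.clip(g, clip, 1 - clip)
    # The "clever covariate" grows as g approaches 0 or 1, which is one
    # plausible source of the positivity-related instability noted above.
    clever = t / g - (1 - t) / (1 - g)
    q_pert = q + epsilon * clever
    return beta * np.mean((y - q_pert) ** 2)
```

With epsilon fixed at zero the penalty reduces to the plain outcome MSE; a well-chosen epsilon shrinks it, which is what the joint optimization exploits.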
For all base learners, we observe the potential for improvement using the semiparametric techniques, primarily in terms of the associated MSE. It is also worth noting that, in general, the base CFR method has consistently higher (*i.e.*, worse) s.e. than the MN- variants, although combining CFR with an update step (*e.g.*, one-step w/ SL) significantly tightened the s.e. In summary, we identified that CFR did not perform sufficiently well to warrant further investigation. Furthermore, the best performing MN variant was MN-Inc+LM, and we use this variant for the subsequent analyses. Finally, the results for targeted regularization were inconclusive. However, previous work has identified its potential to improve DragonNet and TVAE (Shi et al., 2020; Vowels et al., 2021), and so we restrict the application of targeted regularization to these methods only in the main evaluation presented below.

## 7.2 Main Evaluation

Owing to the large number of Q (outcome), G (propensity), and U (update step) method combinations, as well as the 7 different dataset variants and three different performance metrics (precision, normality, standard error), the number of results is large, so we have attempted to summarize them in Figs. 7-11 and include further results in the Appendix in Table 4 and Figures 18-24. We also include the results of a Shapley value analysis (Shapley, 1953; Lundberg et al., 2020) in Figures 12-14 (discussed further below). Note that the following results do not include Q-CFR, G-CFR, or targeted regularization, as these were not shown to yield competitive performance in the initial evaluation above. Whilst it is possible and potentially helpful to simply present the full set of results, it does not help us understand whether the use of particular Q-, G-, or U-methods is more or less likely to improve or worsen the performance in any particular combination. Therefore, Figs.
7 and 8 provide results for p(O|M) = p(M|O)p(O)/p(M) across the LF dataset variants for MSE and s.e., respectively. Here, M is the method, and O is the quantile (we split into 5 quantiles) of the MSE or s.e. In words, the associated plots provide an estimate of the probability of achieving a performance result in each quantile O, for a given method M, thereby providing a means to directly assess the relative performance of each Q-, G-, and U-method. For instance, we can split the MSE results into equal-probability quantiles, and count the number of times the use of each outcome, propensity score, and update method results in a performance which falls into each of these quantiles. Using Bayes' rule, we get an estimate of the probability of achieving results in a particular quantile (*e.g.*, the best performing methods fall in the zeroth quantile of the MSE results), given a particular choice of method. Using these calculated probabilities, we also select all results from the best quantile, and see how the performance shifts over different sample sizes. Note that because these results are based on a rank ordering, it is not possible to judge absolute performance, only relative performance. Indeed, the purpose of the initial results above was to use the absolute performance as a way of shortlisting the methods so that a more comparative evaluation could be undertaken using the more competitive methods.

Note: When interpreting the results shown in Figures 7-10, it may be useful to recognise the 'ideal/desired' curve as one which takes on high values on the left-hand side, representing a high probability of the method yielding results in the top quantile(s). In contrast, curves which take on high values on the right-hand side represent those with a high probability of yielding poor results in the lower quantile(s), relative to the other methods.
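The quantile-probability computation described above can be sketched with pandas; the column names here are hypothetical.

```python
import pandas as pd


def quantile_probabilities(results, metric="mse", method_col="q_method",
                           n_quantiles=5):
    """Estimate p(O | M): the probability that method M lands in quantile O
    of the given metric, across all evaluated method combinations."""
    df = results.copy()
    # Rank first so ties cannot collapse quantile bins.
    df["quantile"] = pd.qcut(df[metric].rank(method="first"),
                             q=n_quantiles, labels=False)
    # Count quantile membership per method, then normalize rows to get p(O|M).
    counts = pd.crosstab(df[method_col], df["quantile"])
    return counts.div(counts.sum(axis=1), axis=0)
```

Each row of the returned table is a conditional distribution over quantiles for one method, which is exactly what the curves in Figs. 7-9 plot.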
To evaluate the normality of the estimates, after calculating the p-value from the Shapiro-Wilk test, we calculate the proportion of each of the Q-, G-, and U-methods which yield normally distributed estimates (p > 0.01). For example, if a particular Q-method has a high 'probability of normality' according to, *e.g.*, Fig. 10, this means that a large proportion of the results yielded normally distributed estimates. In Sections 7.2.1-7.2.7 we review the performance of each method for each of the three performance metrics in turn.

## 7.2.1 Q-Methods - MSE

Beginning with Fig. 7, the results for the outcome model Q-methods on the LF dataset variants are shown in the first column. In Fig. 7a we see that our Q-MN achieves the highest probability of being in the best quantile for MSE when used as an outcome model Q for **LF (v1)** n = 500, followed closely by Q-LR and Q-SL, with Q-TVAE and Q-D in the second-best quantile. In contrast, Q-D without targeted regularization, Q-T, and Q-S all had higher probabilities of yielding results in the later quantiles (*i.e.*, their performance was worse). Increasing the sample size to n = 5000, and considering Fig. 7d, we see similar results, with Q-MN again yielding the highest probability of achieving the best results, and Q-D, Q-S, Q-T, and Q-D without targeted regularization performing the worst. Finally, for LF (v1) n = 10000, we see in Fig. 7e that Q-MN is superseded by Q-LR and Q-SL, followed by Q-TVAE. Q-T, Q-S, and Q-D perform poorly again. These results suggest that Q-LR and Q-SL perform consistently well over different sample sizes, and that Q-MN can perform well in small sample sizes, but may start to overfit as the sample size increases.
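The normality screening introduced at the start of this subsection (the proportion of method combinations whose 100 estimates pass the Shapiro-Wilk test at p > 0.01) can be sketched as follows; the dictionary layout is illustrative.

```python
import numpy as np
from scipy.stats import shapiro


def normality_rate(estimates_by_method, alpha=0.01):
    """Proportion of runs per method whose ATE estimates look normally
    distributed according to the Shapiro-Wilk test (p > alpha)."""
    rates = {}
    for method, runs in estimates_by_method.items():
        pvals = [shapiro(estimates)[1] for estimates in runs]  # one p per run
        rates[method] = float(np.mean([p > alpha for p in pvals]))
    return rates
```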
Recall that the task of causal inference is different from the typical supervised learning task, and more data does not necessarily imply that it is easier to estimate the difference between two response surfaces, particularly when this difference (which is the treatment effect) is of low complexity relative to the response surfaces themselves. Now consider Figs. 7(j, m, p) for **LF (v2)**, which introduces additional non-linearity into the outcome model. We initially observe similar results for n = 500 in Fig. 7j, with Q-MN, Q-LR, and Q-SL achieving the best results, and Q-S, Q-D without targeted regularization, and Q-T populating the later quantiles. Increasing the sample size to n = 5000, we see in Fig. 7m that Q-TVAE now becomes the most likely to yield the best results, followed by Q-SL and, interestingly, Q-D without targeted regularization. Q-D, Q-T, and Q-S, however, still perform poorly. Finally, for n = 10000, we see Q-TVAE maintain the lead, once again followed by Q-SL. The worst performers were, again, Q-D, Q-S, and Q-T. This suggests once again that Q-SL provides consistent performance across sample sizes, and that Q-MN is a good option for smaller sample sizes.

Figure 7: After recording the MSE for each Q (outcome), G (propensity score), and U (update step) method combination, we rank order them (from lowest to highest MSE), and calculate p(O|M), where O is the MSE quantile, and M is the method. For 5 quantiles, this enables us to find, *e.g.*, the probability of getting an MSE in the best quantile given a particular method, p(O = 0|M = m). If a method performs well, we expect a high probability of achieving an MSE in the top two quantiles.

For the IHDP dataset, we use a fixed sample size of n = 747, and the results are shown in Fig. 9. Here it can be seen that Q-T and Q-TVAE achieve the best results, followed by Q-S and Q-MN. The worst performer was Q-LR.
These results are consistent with previous work which highlighted state-of-the-art performance of TVAE on IHDP (Vowels et al., 2021). Similarly, the fact that LR did so poorly possibly highlights the non-linearity of the data generating process for IHDP. The fact that Q-S and Q-T did so well is surprising given their relatively poor performance on the LF datasets described above. Such dataset dependence in the performance of causal estimators has also been previously noted by Curth et al. (2021b).

## 7.2.2 G-Methods - MSE

The MSE results for the propensity score G-methods can be seen in the second column of Figs. 7 and 9. Interestingly, there is very little dependence between the performance of the different methods. Arguably, there is some evidence that G-MN performs slightly worse than other methods in Fig. 7q, and that G-D performs worse in Fig. 9b, but the differences are not convincing. This suggests that, at least in our experiments, the MSE results are relatively robust to the choice of propensity score model.

## 7.2.3 U-Methods - MSE

The MSE results for the update U-methods are shown in the third column of Fig. 7 for the LF datasets. In Fig. 7c we see that the U-Base model and the U-multi update method perform the best, with the U-ones model close behind. The submodel update is more likely to be in the lower quantiles. As the sample size increases to n = 5000 and n = 10000 in Figs. 7f and 7i, we see the performance of U-sub and, to a lesser extent, U-ones improve. This behaviour has been observed before in work by Neugebauer & van der Laan (2005), who found that the performance of U-ones increased with sample size. Indeed, their own proposition for a multistep update process also performed more consistently in small samples, as does our U-multi. Similar patterns of performance are seen in Figs. 7l, 7o, and 7r for the LF (v2) dataset.
In Figure 9 we see that U-sub and U-ones performed approximately equally well, whereas U-multi and U-Base had worse performance, relative to the other methods.

## 7.2.4 Q-Methods - S.E.

The standard error (s.e.) results are shown in Fig. 8 and the bottom row of plots in Fig. 9. Starting with Fig. 8a, we find the methods yielding the tightest distribution of estimates for the LF (v1) dataset n = 500 are Q-MN, Q-LR, and Q-TVAE, followed by Q-D, Q-SL, and Q-T. At the lower end we find Q-D without targeted regularization, and Q-S. As the sample size increases to n = 5000, Q-MN provides estimates which are even more likely to be the tightest, followed again by Q-LR, Q-TVAE, and Q-SL. Q-D is not far behind, with Q-S, Q-T, and Q-D without targeted regularization performing the worst. With n = 10000, Q-MN is overtaken by Q-LR in terms of the tightness of the estimation, which is understandable given that Q-MN has a large number of hyperparameters (Q-LR has none), which contributes to variability in performance. Q-TVAE once again follows closely behind, with the worst performers being Q-D without targeted regularization, Q-S, and Q-T. Interestingly, Q-MN exhibits a rise in the probability of being one of the worst performers, suggesting that there may exist better or worse combinations of G- and U-methods with Q-MN. Once again, it is worth consulting the full set of rank-ordered results in the Appendix. With the results for LF (v2) in Figs. 8j, 8m, and 8p we see a similar pattern of results, in spite of the introduction of additional non-linearity in this dataset variant. Finally, for the IHDP results in Fig. 9d we see Q-LR and Q-SL provide the tightest estimates, followed by Q-MN, Q-D without targeted regularization, then Q-TVAE, Q-S, and Q-T.

Figure 8: After recording the standard error (s.e.)
of the 100 ATE estimates for each LF dataset and for each Q (outcome), G (propensity), and U (update step) method combination, we rank order them (from low to high), and calculate p(O|M), where O is the quantile, and M is the method. This enables us to find the probability of getting an s.e. in the best quantile given a particular method, p(O = 0|M = m). If a method performs well, we expect a high probability of achieving an s.e. in the top two quantiles.

Figure 9: After recording the MSE and standard error (s.e.) of the 100 ATE estimates for the IHDP dataset and for each Q (outcome), G (propensity score), and U (update step) method combination, we rank order them (from lowest to highest), and calculate p(O|M), where O is the MSE (top row) or s.e. (bottom row) quantile, and M is the method. For 5 quantiles, this enables us to find the probability of getting an MSE or s.e. in the best quantile given a particular method, p(O = 0|M = m). If a method performs well, we expect a high probability of achieving an MSE and/or s.e. in the first or second quantiles, and a low probability of achieving an MSE and/or s.e. in the last quantiles. Best viewed in colour.

## 7.2.5 G-Methods - S.E.

The s.e. results for the choice of propensity score G-method can be found in the central column of Fig. 8 and in Fig. 9e. As was found for the MSE results, the choice of G-method was not decisive, besides the poor performance of G-MN for the IHDP dataset and for the n = 10000 LF datasets. It is reassuring to again find that the choice of G-method does not have a strong impact on the tightness of the estimates.

## 7.2.6 U-Methods - S.E.

The s.e. results for the choice of update U-method are presented in the right-hand column of Fig. 8 and in Fig. 9f. In contrast to the choice of G-method, the choice of U-method had a significant impact on the tightness of the associated estimates, and the pattern of performance is similar to the pattern for MSE.
For low sample sizes, it can be seen from both Figs. 8c and 8l that the tightest estimates are achieved using U-multi and U-Base, with U-sub yielding the least tight estimates. Increasing the sample size shifts the performance of U-sub and U-ones, making them competitive with the other methods. For the IHDP dataset, it can be seen in Fig. 9f that the choice of U-method had little impact on the tightness of the estimates, but the best performers were U-Base (*i.e.*, no update) and U-multi.

## 7.2.7 Q-, G-, U-Methods - Normality

The results evaluating the normality of the estimates are provided in Fig. 10 for the LF dataset variants, and in Fig. 11 for IHDP. For the LF datasets, each plot provides the proportion of results from the respective method which yielded normally distributed estimates (p > 0.01) for each of the different dataset sizes n = {500, 5000, 10000}. In Fig. 10a it can be seen that most Q-methods performed well across all sample sizes with LF (v1), with the exception of Q-D, which was less likely to yield normally distributed estimates, and we observe a drop in performance for Q-MN as sample size increases. Once again, and as indicated by Fig. 10b, the choice of G-method was not found to impact the likelihood of normally distributed estimates.

Figure 10: Probability of p > 0.01 for the Shapiro-Wilk test of normality for each Q (outcome), G (propensity score), and U (update step) method with the LF datasets n = {500, 5000, 10000}. Because we undertook all combinations of G and U, each point represents a marginalization over the other dimension(s). For instance, for the Q methods, Q-D (DragonNet) is an average probability result when combining Q-D with all possible other G and U methods. Best viewed in colour.

Fig. 10c
indicates that the likelihood of U-Base and U-multi yielding normally distributed estimates dropped slightly with sample size, with U-ones yielding consistently normally distributed estimates regardless of sample size. For LF (v2), the results in Figs. 10d-10f indicate more variability, possibly as a result of the additional non-linearity in the outcome model. When n = 500, the outcome Q-method most likely to yield normally distributed estimates was Q-T, followed by Q-D and Q-MN. However, for n = 5000 and n = 10000, the only methods not yielding consistently normally distributed results were Q-D and Q-MN. For the propensity score G-methods, the method most likely to yield normally distributed results with n = 500 was G-LR, followed by G-SL. The other methods did not perform well until the sample size was increased to n = 5000 or n = 10000, for which all methods performed equally well. For the U-methods, the best performing result across all sample sizes was U-ones, followed by U-sub, U-multi, and finally U-Base. Finally, the likelihood of achieving normally distributed estimates for the IHDP dataset is shown in Fig. 11. The sample size is fixed for this dataset, and the results for the Q-, G-, and U-methods are presented together (hence the different graph format). It can be seen that Q-D provided the highest likelihood of normally distributed estimates, with the other methods yielding comparable (and low) likelihood. Similarly, G-D yielded the highest likelihood of normally distributed estimates, with the other G-methods being relatively equal (and low). Finally, none of the U-methods provided a high likelihood of normally distributed estimates.

## 7.3 Summary Of The Main Evaluation

Note that in some Figures, certain methods may not have a monotonic probability which starts high and ends low, or vice versa. For example, in Fig.
7p, Q-LR has a u-shaped probability, suggesting that for some combinations of Q-LR with certain other G- and U-methods its performance is good, and with others it is poor. In such cases it may be more informative to consult the full results in the Appendix, to attempt to understand whether there is any particular combination dependence.

## 7.3.1 MSE Summary

Our Q-MN performed well on the LF datasets, particularly in smaller samples. We found that both Q-LR and Q-SL also performed consistently across the different sample sizes, even with the introduction of non-linearity with LF (v2). Indeed, with the introduction of this non-linearity, we found Q-TVAE to yield good performance, and this competitive edge held up with IHDP as well. We did not find that the choice of G-method had a large impact on the results, although G-MN tended to do slightly worse. With smaller sample sizes n = {500, 5000} and/or simpler datasets (LF v1), our U-multi performed the best as an update method. As the sample size increased, we found that the one-step U-ones became the best performer; similar behaviour has been found in other work (Neugebauer & van der Laan, 2005). For more complex datasets like IHDP, we found that U-ones and U-sub performed well.

## 7.3.2 Standard Error Summary

Once again, our Q-MN provided the tightest estimates, and did so consistently over all sample sizes and datasets except IHDP. The next best and most consistent estimator in terms of the tightness of its estimates (including good performance on IHDP) was Q-SL. Once again, we did not find that the choice of G-method had a large impact on the results, but G-MN tended to do slightly worse than the others. Our U-multi yielded consistently tight estimates across all datasets (including IHDP), although in general, the base models (without update steps) also performed well in this regard. As with the MSE results, U-ones and U-sub performed more competitively as the sample size increased.
## 7.3.3 Normality Summary

The choice of Q-method did not have a big impact on the likelihood of normally distributed estimates for the LF datasets, although Q-D performed poorly, and the performance of Q-MN dropped as sample size increased. Surprisingly, these results reversed for the IHDP dataset, with Q-D providing the most frequently normally distributed estimates, and the other methods yielding generally poor performance. Both G-LR and G-SL worked well as propensity score models for the LF datasets, yielding a high likelihood of normally distributed estimates. However, on IHDP, only the propensity score estimates from G-D were found to work well. U-ones and U-sub were found to yield consistently normally distributed estimates across the LF datasets, with our U-multi unfortunately yielding little advantage over the base model. In some ways, the relatively disappointing results with respect to the normality of the estimates are not surprising. Benkeser et al. (2017) and van der Laan (2014) showed that the double-robustness property relating to a normal limiting distribution, which is afforded by estimators satisfying the efficient influence function, does not apply when data-adaptive estimators (such as superlearners) are used. In order for the double-robustness property to hold (with respect to the normal limiting distribution) with data-adaptive estimators, additional conditions must be satisfied. The failure to yield normally distributed estimates for many of the evaluated methods in this work may thus well be due to some degree of misspecification in the treatment or outcome models (or, indeed, both). One would expect that using the additional update steps proposed by Benkeser et al. (2017) and van der Laan (2014) would yield improved results, and this presents a promising direction for future evaluations and development.
## 7.4 Shapley Value Analysis

In addition to the presentation of the results given in Figures 7-11, as well as those given below in Figures 18-24, we also explored whether a meta-analysis using the Shapley value approach (Shapley, 1953; Lundberg et al., 2020; 2017; Lundberg & Lee, 2017) could provide additional insights. Indeed, one of the limitations of the way the earlier results are presented is that they involve marginalization over one or more methodological components (*e.g.*, to obtain quantile probabilities for the Q-methods, we have to marginalize over all G- and U-methods). This can make it difficult to identify meaningful interactions between the choices of components. The results presented in Figures 12-14, as well as those in the Appendix in Table 5 and Figures 15-17, represent the output from the SHapley Additive exPlanations (SHAP) machine learning explainability technique (Lundberg et al., 2020).

Figure 11: Probability of p > 0.01 for the Shapiro-Wilk test of normality for each Q (outcome), G (propensity score), and U (update step) method with the IHDP dataset. Because we undertook all combinations of G and U, each point represents a marginalization over the other dimension(s). For instance, for the Q methods, Q-D (DragonNet) is an average probability result when combining Q-D with all possible other G and U methods. Best viewed in colour.

The process is as follows: we take the full set of factorial results from our experiments (all combinations from Table 1), and use the choice of dataset and the choice/combination of Q-, G-, and U-methods as predictors in a regression with each of the MSE, ATE estimate standard error, and Shapiro-Wilk test p-value results as different outcomes. We use a random forest algorithm (Breiman, 2001) as the regressor, with the default values in the scikit-learn implementation, which have been shown to yield stable and consistent performance (Pedregosa et al., 2011; Probst et al., 2018).
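A sketch of this meta-regression setup follows; the column names are hypothetical, and the fitted forest would then be handed to SHAP's TreeExplainer to obtain the interaction values shown in the heatmaps.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor


def fit_meta_regressor(results, outcome="mse"):
    """Regress a performance metric on one-hot encoded choices of dataset
    and Q-/G-/U-method using a default-configuration random forest."""
    X = pd.get_dummies(results[["dataset", "q_method", "g_method", "u_method"]])
    y = results[outcome].to_numpy()
    model = RandomForestRegressor(random_state=0).fit(X, y)
    # shap.TreeExplainer(model).shap_interaction_values(X) would then yield
    # an (n_samples, n_features, n_features) interaction array to average.
    return model, X
```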
Then, the SHAP package conceives of the regression task as a game, with each predictor representing an agent which, in collaboration with other predictors, seeks to maximise the performance of the regressor. The output of the Shapley analysis includes global predictor importances (which tell us, on average, how useful each predictor is in explaining the regressor output), the individual impact of each datapoint in the dataset on the regressor's output, and a quantification of the degree of interaction between predictors. These results therefore help us identify whether there exist strong interactions between the choice of methods and/or datasets. Figures 12, 13, and 14 show Shapley interaction heatmaps for the three outcomes MSE, ATE estimate standard error, and p-values, respectively. Brighter values indicate the presence of an interaction between predictors (where the combined information tells us something more about the likely value of the outcome than the individual predictors alone). For MSE in Figure 12, we see interactions between the dataset, Q-CFR, G-MN, and, to a lesser extent, the U-sub update approach. Otherwise, practically no other methods show strong interactions with each other or the datasets, lending weight to the marginalized results for MSE presented in previous sections. For the ATE estimate standard error results in Figure 13, we again see some strong interaction between Q-CFR and the dataset, as well as between the U-multi without the mean-zero constraint and the dataset. Otherwise, there are few notable interactions indicated by these results. Finally, for the Shapiro-Wilk test for normality p-value results in Figure 14, we predominantly see an interaction between the sample size and the dataset. This link is, perhaps, not quite as obvious as it might immediately appear given that p-values are functions of sample size.
Remember here that the p-value is an empirical test for the normality of the distribution of the estimates that we compute over the number of simulations, which is fixed across datasets/experiments. We also see some decreasing interactions between sample size and the update step, as well as Q-D, Q-MN, and Q-T.

![31_image_0.png](31_image_0.png)

Figure 12: Shapley predictor interaction heatmap for MSE outcome.

## 8 Discussion

In this paper we have introduced some key aspects of semiparametric theory and provided the expression and code for deriving influence functions for estimands from a general graph automatically. We have undertaken a comprehensive evaluation of the potential of semiparametric techniques to provide a 'free' performance improvement for existing estimators without needing more data, and without needing to retrain them. We also proposed a new pseudo-ensemble NN method 'MultiNet' for simulating an ensemble approach with a single network, and a new update step variant 'MultiStep'. Our evaluation included a discussion of the choice of outcome 'Q' method, propensity score 'G' method, and the update 'U' method. The summary of results is fairly nuanced, and even methods which yielded the best results were subject to variation across datasets and sample size (this was particularly evident when comparing the results on the LF datasets with those of the IHDP dataset, and when reviewing the results of the Shapley interaction plots). This highlights a dependence of the performance on the method-dataset combination which is difficult to alleviate. That said, some methods were more stable across datasets than others. A review of the Shapley interaction plots indicated relative stability of performance in terms of MSE, ATE estimate standard error, and p-values for all methods evaluated in the main evaluation, besides Q-CFR, G-MN, Q-D, and G-D. Such dataset dependence was highlighted by Curth et al.
(2021b), and it is something which practitioners should be aware of, especially in the causal inference setting where we do not have access to ground-truth.

![32_image_0.png](32_image_0.png)

Figure 13: Shapley predictor interaction heatmap for ATE estimate standard error outcome.

Researchers developing such methods should also, of course, be aware of this issue, because it can significantly inform the evaluation design for testing and comparing different methods. These caveats notwithstanding, we found our MultiNet method to perform well as an outcome method, yielding state-of-the-art results on a number of evaluations, and performing particularly well on datasets with smaller sample sizes. Indeed, to an extent these results conflict with those of Farrell et al. (2021), who showed that relatively basic neural networks were capable of excellent performance when combined with semiparametric techniques and evaluated on their own simulated data as well as data for a direct mail marketing campaign. Arguably, their conclusions and results are less mixed than ours, although it is worth remembering that we restricted the set of methods used for the main evaluation to those with already competitive performance, and the remaining spread of our 'mixed' results may already be reassuringly tight. Unfortunately, it is difficult to say what is an acceptable level of performance, although recent large-scale work by Gordon et al. (2022) suggests that the primary challenge will be in satisfying identifiability, that is, ensuring that our estimand can be expressed as a function of the observational data, that our model is structurally well-specified, and that no unobserved confounders exist which otherwise bias our estimates. Many of the methods failed to yield normally distributed estimates. This is somewhat expected given that the double robustness guarantees do not apply to the limiting distribution. Benkeser et al.
(2017) and van der Laan (2014) provide a means to augment the update step frameworks to include additional conditions which, when satisfied, extend the double robustness guarantees to the (normal) limiting distribution of the estimates.

![33_image_0.png](33_image_0.png)

Figure 14: Shapley predictor interaction heatmap for Shapiro-Wilk p-value outcome.

Many open questions remain: a similar set of experiments should be undertaken for other estimands (such as the conditional ATE). Also, one may derive higher order IFs (Carone et al., 2014; van der Laan et al., 2021; van der Vaart, 2014; Robins et al., 2008) which introduce new challenges and opportunities. Additionally, it may be possible to use IFs to derive a proxy representing 'good enough'-ness, *i.e.*, whether the initial estimator is close enough to the target estimand for the remaining bias to be modelled linearly. This, in turn, may also provide a way to assess the performance of causal inference methods, which would be highly advantageous given that explicit supervision will rarely be available in real-world causal inference settings. The extensions of Benkeser et al. (2017) and van der Laan (2014) also represent an interesting avenue for further development, particularly in relation to the goal of undertaking valid statistical inference with nonparametric estimators.

## 9 Broader Impact

It is always important to remember that the reliability of causal inference depends on strong, untestable assumptions (not least because there is rarely any access to ground-truth in the domain of causal inference). Given the variability of the performance of the evaluated methods across datasets, in particular with regards to the normality of the estimates (and therefore also the validity of subsequent inference), any practical application of causal inference methods must be undertaken with caution.
Indeed, we recommend researchers establish the extent to which their inference depends on the methods used, by undertaking the same analysis with multiple approaches/estimators.

## References

A.M. Alaa and M. van der Schaar. Validating causal inference models via influence functions. *ICLR*, 2019.

A.M. Alaa and M. van der Schaar. Discriminative jackknife: Quantifying uncertainty in deep learning via higher-order influence functions. *arXiv preprint*, arXiv:2007.13481v1, 2020.

N. S. Altman. An introduction to kernel and nearest-neighbor nonparametric regression. *The American Statistician*, 46(3):175–185, 1992. doi: 10.1080/00031305.1992.10475879.

D. Benkeser, M. Carone, M.J. van der Laan, et al. Doubly robust nonparametric inference on the average treatment effect. *Biometrika*, 104(4):863–880, 2017. doi: 10.1093/biomet/asx053.

R. Bhattacharya, R. Nabi, and I. Shpitser. Semiparametric inference for causal effects in graphical models with hidden variables. *arXiv preprint*, arXiv:2003.12659v1, 2020.

I. Bica, A.M. Alaa, C. Lambert, and M. van der Schaar. From real-world patient data to individualized treatment effects using machine learning: Current and future methods to address underlying challenges. *Clinical Pharmacology and Therapeutics*, 109(1):87–100, 2020. doi: 10.1002/cpt.1907.

P.J. Bickel, C.A.J. Klassen, Y. Ritov, and J.A. Wellner. *Efficient and Adaptive Estimation for Semiparametric Models*. Springer-Verlag, New York, 1998.

M.J. Blanca, R. Alarcon, and R. Bono. Current practices in data analysis procedures in psychology: what has changed? *Frontiers in Psychology*, 2018. doi: 10.3389/fpsyg.2018.02558.

V. Borisov, T. Leeman, K. Sebler, and J. Haug. Deep neural networks and tabular data: A survey. *arXiv preprint*, arXiv:2110.01889v2, 2022.

L. Breiman. Random forests. *Machine Learning*, 45(1):5–32, 2001. doi: 10.1023/A:1010933404324.

M. Carone, I. Diaz, and M.J. van der Laan. Higher-order targeted minimum loss-based estimation.
*U.C.* Berkeley Division of Biostatistics Working Paper Series, 2014.

H. Chen, T. Harinen, J-L. Lee, M. Yung, and Z. Zhao. CausalML: Python package for causal machine learning. *arXiv preprint*, arXiv:2002.11631, 2020.

V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, and W. Newey. Double/debiased/Neyman machine learning of treatment effects. *American Economic Review*, 5, 2017.

V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins. Double/debiased machine learning for treatment and structural parameters. *Econometrics Journal*, 21:C1–C68, 2018.

A. Curth and M. van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. *AISTATS*, 130, 2021.

A. Curth, A.M. Alaa, and M. van der Schaar. Estimating structural target functions using machine learning and influence functions. *arXiv preprint*, arXiv:2008.06461v3, 2021a.

A. Curth, D. Svensson, J. Weatherall, and M. van der Schaar. Really doing great at estimating CATE? a critical look at ML benchmarking practices in treatment effect estimation. *35th Conference on Neural Information Processing Systems (NeurIPS 2021)*, 2021b.

V. Dorie. Non-parametrics for causal inference. *https://github.com/vdorie/npci*, 2016.

B. Efron and G. Gong. A leisurely look at the bootstrap, the jackknife, and cross-validation. *The American Statistician*, 37(1):36–48, 1983. doi: 10.2307/2685844.

R.J. Evans and T.S. Richardson. Smooth, identifiable supermodels of discrete DAG models with latent variables. *Bernoulli*, 25(2):848–876, 2019. doi: 10.3150/17-BEJ1005.

M. Ezzati, A.D. Lopez, and C.J.L. Murray (eds.). *Comparative Quantification of Health Risks: Global and Regional Burden of Disease Attributable to Selected Major Risk Factors*, chapter Effects of multiple interventions. World Health Organization, Geneva, 2004.

M.H. Farrell, T. Liang, and S. Misra. Deep neural networks for estimation and inference. *Econometrica*, 89(1):181–213, 2021.

A.
Fisher and E.H. Kennedy. Visually communicating and teaching intuition for influence functions. *arXiv preprint*, arXiv:1810.03260v3, 2019.

M. Frèchet. Sur les ensembles de fonctions et les operations lineaires. *Les Comptes rendus de l'Académie des sciences*, 144, 1907.

Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. *Journal of Computer and System Sciences*, 55(1):119–139, 1997. doi: 10.1006/jcss.1997.1504.

J. Friedman. Greedy function approximation: A gradient boosting machine. *The Annals of Statistics*, 29(5), 2001.

Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. *ICML*, 2016.

B.R. Gordon, R. Moakler, and F. Zettelmeyer. Close enough? a large-scale exploration of non-experimental approaches to advertising measurement. *arXiv preprint*, arXiv:2201.07055v1, 2022.

C. Guo, G. Pleiss, Y. Sun, and K.Q. Weinberger. On calibration of modern neural networks. *ICLR*, 2017.

R. Guo, L. Cheng, J. Li, P.R. Hahn, and H. Liu. A survey of learning causality with data: Problems and methods. *ACM Comput. Surv.*, 1(1), 2020a.

R. Guo, J. Li, and H. Liu. Learning individual causal effects from networked observational data. Association for Computing Machinery, 2020b.

J. Hahn. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. *Econometrica*, 66:315–331, 1998.

F. R. Hampel. The influence curve and its role in robust estimation. *Journal of the American Statistical Association*, 69(346):383–393, 1974.

X. Han, B.C. Wallace, and Y. Tsvetkov. Explaining black box predictions and unveiling data artifacts through influence functions. *arXiv preprint*, arXiv:2005.06675v1, 2020.

L. Henckel, E. Perković, and M.H. Maathuis. Graphical criteria for efficient total effect estimation via adjustment in causal linear models. *arXiv preprint*, arXiv:1907.02435v2, 2020.

J. L. Hill. Bayesian nonparametric modeling for causal inference.
*Journal of Computational and Graphical Statistics*, 20(1), 2011.

O. Hines, O. Dukes, K. Diaz-Oraz, and S. Vansteelandt. Demystifying statistical learning based on efficient influence functions. *arXiv preprint*, arXiv:2107.00681, 2021.

K. Hornik. Some new results on neural network approximation. *Neural Networks*, 6:1069–1072, 1993.

K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. *Neural Networks*, 2:359–366, 1989. doi: 10.1016/0893-6080(89)90020-8.

Y. Huang and M. Valtorta. Pearl's calculus of intervention is complete. *Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence*, arXiv:1206.6831:217–224, 2006. doi: 10.5555/3020419.3020446.

H. Ichimura and W. Newey. The influence function of semiparametric estimators. *arXiv preprint*, arXiv:1508.01378v2, 2021.

G.W. Imbens and D.B. Rubin. *Causal inference for statistics, social, and biomedical sciences. An Introduction.* Cambridge University Press, New York, 2015.

E. Jones, T. Oliphant, P. Peterson, et al. SciPy: Open source scientific tools for Python. http://www.scipy.org, 2001.

Y. Jung, J. Tian, and E. Bareinboim. Estimating causal effects using weighting-based estimators. *The 34th AAAI Conference on Artificial Intelligence*, 2020.

A. Kadra, M. Lindauer, F. Hutter, and J. Grabocka. Regularization is all you need: simple neural nets can excel on tabular data. *NeurIPS*, 2021.

E.H. Kennedy. Semiparametric theory and empirical processes in causal inference. *arXiv preprint*, arXiv:1510.04740v3, 2016.

E.H. Kennedy. Optimal doubly robust estimation of heterogeneous causal effects. *arXiv preprint*, arXiv:2004.14497v2, 2020.

D. P. Kingma and J. L. Ba. Adam: a method for stochastic optimization. *arXiv preprint*, arXiv:1412.6980v9, 2017.

P.W. Koh and P. Liang. Understanding black-box predictions via influence functions. *PMLR*, 2017.

N. Kreif and K. DiazOrdaz. Machine learning in policy evaluation: new tools for causal inference. *arXiv preprint*, arXiv:1903.00402v1, 2019.

S. R.
Kunzel, J.S. Sekhon, P.J. Bickel, and B. Yu. Meta-learners for estimating heterogeneous treatment effects using machine learning. *arXiv preprint*, arXiv:1706.03461v6, 2019.

C.F. Kurz. Augmented inverse probability weighting and the double robustness property. *Medical Decision Making*, 2021. doi: 10.1177/0272989X211027181.

J. Levy. Tutorial: Deriving the efficient influence curve for large models. *arXiv preprint*, arXiv:1903.01706v3, 2019.

H. Li, S. Rosete, J. Coyle, R.V. Phillips, N.S. Hejazi, I. Malenica, B.F. Arnold, J. Benjamin-Chung, A. Mertens, J.M. Colford, M.J. van der Laan, and A.E. Hubbard. Evaluating the robustness of targeted maximum likelihood estimators via realistic simulations in nutrition intervention trials. *arXiv preprint*, arXiv:2109.14048v1, 2021.

C. Louizos, U. Shalit, J. Mooij, D. Sontag, R. Zemel, and M. Welling. Causal effect inference with deep latent-variable models. *31st Conference on Neural Information Processing Systems*, 2017.

S.M. Lundberg and S-I. Lee. A unified approach to interpreting model predictions. *31st Conference on Neural Information Processing Systems*, 2017.

S.M. Lundberg, G.G. Erion, and S-I. Lee. Consistent individualized feature attribution for tree ensembles. *Proceedings of the 34th International Conference on Machine Learning*, Sydney, Australia, 2017.

S.M. Lundberg, G. Erion, H. Chen, A. DeGrave, J.M. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, and S-I. Lee. From local explanations to global understanding with explainable AI for trees. *Nature Machine Intelligence*, 2:56–67, 2020. doi: 10.1038/s42256-019-0138-9.

M.A. Luque-Fernandez, M. Schomaker, B. Rachet, and M.E. Schnitzer. Targeted maximum likelihood estimation for a binary treatment: A tutorial. *Statistics in Medicine*, 37(16):2530–2546, 2018. doi: 10.1002/sim.7628.

R. Neugebauer and M.J. van der Laan. Why prefer double robust estimates? illustration with causal point treatment studies. *Journal of Statistical Planning and Inference*, 129(1):405–426, 2005.

W.
Newey. Semi-parametric efficiency bounds. *Journal of Applied Econometrics*, 5:99–135, 1990.

W. Newey. The asymptotic variance of semi-parametric estimators. *Econometrica*, 62:1349–82, 1994.

J. Pearl. *Causality*. Cambridge University Press, Cambridge, 2009.

J. Pearl, M. Glymour, and N.P. Jewell. *Causal inference in statistics: A primer*. Wiley, 2016.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, et al. Scikit-learn: Machine learning in Python. *JMLR*, 12:2825–2830, 2011.

M. Petersen, L. Balzer, D. Kwarsiima, N. Sang, G. Chamie, J. Ayieko, J. Kabami, A. Owaraganise, T. Liegler, F. Mwangwa, and K. Kadede. Association of implementation of a universal testing and treatment intervention with HIV diagnosis, receipt of antiretroviral therapy, and viral suppression in East Africa. *Journal of the American Medical Association*, 317(21):2196–2206, 2017. doi: 10.1001/jama.2017.5705.

K.E. Porter, S. Gruber, M.J. van der Laan, and J.S. Sekhon. The relative performance of targeted maximum likelihood estimators. *International Journal of Biostatistics*, 7:1034, 2011.

P. Probst, M.N. Wright, and A.-L. Boulesteix. Hyperparameters and tuning strategies for random forest. *Wires Data Mining and Knowledge Discovery*, 2018. doi: 10.1002/widm.1301.

T.S. Richardson and P. Spirtes. Causal inference via ancestral graph models. In P. Green, N. Hjort, and S. Richardson (eds.), *Highly Structured Stochastic Systems*. Oxford University Press, Oxford, 2003.

T.S. Richardson, R.J. Evans, J.M. Robins, and I. Shpitser. Nested Markov properties for Acyclic Directed Mixed Graphs. *arXiv preprint*, arXiv:1701.06686v2, 2017.

F. Riesz. Sur les operations fonctionnelles lineaires. *Comptes rendus de l'Académie des Sciences*, 149, 1909.

J. Robins. A new approach to causal inference in mortality studies with a sustained exposure period - application to control of the healthy worker survivor effect. *Mathematical Modelling*, 7:1393–1512, 1986. doi: 10.1016/0270-0255(86)90088-6.

J.M.
Robins, L. Li, E.J. Tchetgen, and A.W. van der Vaart. Higher order influence functions and minimax estimation of nonlinear functionals. *Probability and Statistics: Essays in Honor of David A. Freedman*, pp. 335–421, 2008.

A. Rotnitzky and E. Smucler. Efficient adjustment sets for population average treatment effect estimation in non-parametric causal graphical models. *JMLR*, 21(188), 2020.

D. B. Rubin. Causal inference using potential outcomes: Design, modeling, decisions. *Journal of the American Statistical Association*, 100(469):322–331, 2005. doi: 10.1198/016214504000001880.

N. Sani, J. Lee, R. Nabi, and I. Shpitser. A semiparametric approach to interpretable machine learning. *arXiv preprint*, arXiv:2006.04732, 2020.

U. Shalit, F. D. Johansson, and D. Sontag. Estimating individual treatment effect: generalization bounds and algorithms. *ICML*, 2017.

S.S. Shapiro and M.B. Wilk. An analysis of variance test for normality (complete samples). *Biometrika*, 52(3-4):591–611, 1965. doi: 10.1093/biomet/52.3-4.591.

L.S. Shapley. A value for n-person games. *Contributions to the Theory of Games*, 2(28):307–317, 1953.

C. Shi, D. M. Blei, and V. Veitch. Adapting neural networks for the estimation of treatment effects. *33rd Conference on Neural Information Processing Systems*, 2019.

C. Shi, T. Xu, and W. Bergsma. Double generative adversarial networks for conditional independence testing. *arXiv preprint*, arXiv:2006.02615v1, 2020.

I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian causal models. *Proceedings of the National Conference on Artificial Intelligence*, 21:1219–1226, 2006.

R. Shwartz-Ziv and A. Armon. Tabular data: Deep learning is not all you need. *Information Fusion*, 81:84–90, 2021. doi: 10.1016/j.inffus.2021.11.011.

B. Siegerink, W. den Hollander, M. Zeegers, and R. Middelburg. Causal inference in law: an epidemiological perspective.
*European Journal of Risk Regulation*, 7(1):175–186, 2016. doi: 10.1017/S1867299X0000547X.

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. *CVPR*, 2015.

J. Tian and J. Pearl. A general identification condition for causal effects. *AAAI*, 2002.

A. Tsiatis. *Semiparametric Theory and Missing Data*. Springer, New York, 2006.

M. J. van der Laan and S. Gruber. Targeted minimum loss based estimation of causal effects of multiple time point interventions. *Int. J. Biostat*, 8: Art 9(41), 2012.

M. J. van der Laan and S. Rose. *Targeted Learning - Causal Inference for Observational and Experimental Data*. Springer International, New York, 2011.

M. J. van der Laan and R. J. C. M. Starmans. Entering the era of data science: targeted learning and the integration of statistics and computational data analysis. *Advances in Statistics*, 2014.

M. J. van der Laan, Z. Wang, and L. van der Laan. Higher order targeted maximum likelihood estimation. *arXiv preprint*, arXiv:2101.06290v3, 2021.

M.J. van der Laan. Targeted estimation of nuisance parameters to obtain valid statistical inference. *International Journal on Biostatistics*, 10:29–57, 2014.

M.J. van der Laan and D.B. Rubin. Targeted maximum likelihood learning. *The International Journal of Biostatistics*, 2(1), 2006. doi: 10.2202/1557-4679.1043.

M.J. van der Laan, E.C. Polley, and A.E. Hubbard. Super Learner. *Statistical Applications of Genetics and Molecular Biology*, 6(25), 2007. doi: 10.2202/1544-6115.1309.

A.W. van der Vaart. Higher order tangent spaces and influence functions. *Statistical Science*, 29(4):679–686, 2014.

T. Verma and J. Pearl. Equivalence and synthesis of causal models. *Proc. 6th Conf. on Uncertainty in Artificial Intelligence*, 1990.

M. J. Vowels. Misspecification and unreliable interpretations in psychology and social science. *Psychological Methods*, 2021. doi: 10.1037/met0000429.

M. J. Vowels, N.C. Camgoz, and R. Bowden.
Targeted VAE: Structured inference and targeted learning for causal parameter estimation. *IEEE SMDS*, 2021.

Y. Wen, D. Tran, and J. Ba. BatchEnsemble: An alternative approach to efficient ensemble and lifelong learning. *ICLR*, 2020.

D.H. Wolpert and W.G. Macready. No free lunch theorems for optimization. *IEEE Transactions on Evolutionary Computation*, 1(67), 1997. doi: 10.1109/4235.585893.

P.A. Wu and K. Fukumizu. Causal mosaic: cause-effect inference via nonlinear ICA and ensemble method. *AISTATS*, 108, 2020.

P.A. Wu and K. Fukumizu. Intact-VAE: Estimating treatment effects under unobserved confounding. *ICLR*, 2022.

L. Yao, S. Li, Y. Li, M. Huai, J. Gao, and A. Zhang. Representation learning for treatment effect estimation from observational data. *32nd Conference on Neural Information Processing Systems (NeurIPS)*, 2018.

L. Yao, Z. Chu, S. Li, Y. Li, J. Gao, and A. Zhang. A survey on causal inference. *ACM Transactions on Knowledge Discovery from Data*, 15(5):1–46, 2020. doi: 10.1145/3444944.

J. Yoon, J. Jordan, and M. van der Schaar. GANITE: Estimation of individualized treatment effects using generative adversarial nets. *ICLR*, 2018.

Q. Zhong and J-L Wang. Neural networks for partially linear quantile regression. *arXiv preprint*, arXiv:2106.06225, 2021.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. *J. R. Statist. Soc.*, 67(2):301–320, 2005.

## A Things That Did Not Work

## A.1 Calibration

One of the initial possibilities that we considered which might explain why some methods (*e.g.*, CFR) were not performing as well as others was that the calibration of the output might be poor (Guo et al., 2017). However, we tried calibrating the trained outcome and treatment model networks using temperature scaling. We found it to be unsuccessful, and we leave an exploration of why it failed to future work.
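For context, temperature scaling fits a single scalar T on held-out logits by minimizing the negative log-likelihood of the rescaled softmax. The sketch below illustrates the general procedure following Guo et al. (2017); the synthetic logits and labels are stand-ins for the outputs of our trained networks, and the grid search is one simple way to fit T:

```python
import numpy as np

def nll(logits, labels, T):
    """Mean negative log-likelihood of softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.25, 4.0, 376)):
    """Grid-search the single scalar T minimizing held-out NLL."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Synthetic stand-in for held-out validation logits of a trained classifier.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = 5.0 * np.eye(3)[labels] + 1.5 * rng.normal(size=(500, 3))
T = fit_temperature(logits, labels)  # T > 1 would shrink over-confident logits
```

Because T rescales all logits jointly, it changes confidence but not the argmax prediction, which is why it cannot repair a degenerate plug-in estimator on its own.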
## A.2 Restricted Hyperparameter Search

Additionally, we tried only performing hyperparameter search with a held-out test set *once* at the beginning of the 100 subsequent simulations for each model and dataset variant, rather than performing it for every single simulation. This did not work: we found that if the first network 'designed' through hyperparameter search happened to be degenerate with respect to its performance as a plug-in estimator (notwithstanding its potentially adequate performance as an outcome model), then it would be degenerate for all simulations, and yield heavily biased results. However, performing hyperparameter search for every simulation more accurately represents the use of these algorithms in practice. This problem also highlights the importance of fitting multiple neural networks on the same data. As supervision is not available, the usual metrics for hyperparameter search (based on, *e.g.*, held-out data loss scores) can be a poor indicator for the efficacy of the network as a plug-in estimator. By re-performing hyperparameter search, even on the same data (but perhaps with different splits), one can effectively bootstrap to average out the variability associated with the hyperparameter search itself. Indeed, as the results show, the average estimates for the ATE using CFR net are close to the true ATE, even if the variance of the estimation is relatively high. We leave a comparison of the contribution of variance from hyperparameter search to further work.

## A.3 Multistep Update Variants

Relating to our proposed MultiStep objective, we also tried a non-linear, generalized variant with an objective which still attempts to minimize the mean and variance of the influence function, but such that the update step is parameterized as follows:

$$\hat{Q}(t, x_i) + g_\theta\left(\nu_1 \hat{Q}(t, x_i),\ \nu_2 H(z_i)\right) \tag{39}$$

It can be seen that instead of optimizing over the domain of $\hat{\gamma} \in \Gamma$ in Eq.
33, we instead optimize over $\theta \in \Theta$, where $\theta$ are the parameters of a shallow NN function $g$. Here, $\nu_1 \in \{0, 1\}$ and $\nu_2 \in \{0, 1\}$ are hyperparameters determining whether the NN function $g_\theta$ should be taken over just the clever covariate $H$, or over both the clever covariate and the outcome model $Q$. In practice, however, this approach did not yield good estimates. Furthermore, we found that the MultiStep update step given in Eq. 32 with $\alpha_1 = 0$ (*i.e.*, no mean-zero penalty) also did not work well. This result was surprising because a similar approach in Neugebauer & van der Laan (2005), which did not include a mean-zero penalty, yielded an improvement. However, it is also intuitive that if the two properties of the Efficient Influence Function are (1) mean-zero and (2) minimum variance, then an optimization objective should benefit from the inclusion of both of these conditions.

Table 4: Unmarginalized results over a restricted set of model variations. Mean Squared Errors (MSE) and standard errors (s.e.) (lower is better) and Shapiro-Wilk test p-values for normality (higher is better) are provided, as computed over 100 simulations. Best results are those competing across all three dimensions. Bold indicates the best result for each algorithm, **bold and underline** indicates the best result *for each* dataset variant. Multiple methods may perform equally well.

| Dataset | Q Model | U-Base | G-SL + U-multi | G-SL+sub | G-MN+U-multi | G-MN+U-sub | | | | | | | | | | |
|-------------|---------------------------------------|--------------------------|------------------|------------|--------------------|--------------|---------------------------------------|--------------------|--------|--------------------------|--------|--------|--------|--------|--------|-------|
| | p | MSE | s.e. | p | MSE | s.e. | p | MSE | s.e. | p | MSE | s.e. | p | MSE | s.e.
| | | Q-MN | 0.985 0.0024 0.054 0.959 0.0023 0.054 | 0.027 | 0.0104 | 0.109 | 0.945 0.0024 0.044 | 0.125 | 0.0080 | 0.093 | | | | | | | | | | LF (v1) | Q-SL | 0.030 | 0.0032 | 0.057 | 0.034 | 0.0036 | 0.060 | 0.023 | 0.0091 | 0.102 | 0.043 | 0.0035 | 0.060 | 0.093 | 0.0081 | 0.095 | | Q-TVAE | 0.617 | 0.0053 | 0.050 | 0.386 | 0.0042 | 0.052 | 0.074 | 0.0104 | 0.108 | 0.716 | 0.0046 | 0.052 | 0.049 | 0.0083 | 0.093 | | | (n = 500) | Q-T | 0.700 | 0.0059 | 0.064 | 0.772 | 0.0056 | 0.065 | 0.495 | 0.0084 | 0.099 | 0.843 | 0.0057 | 0.064 | 0.059 | 0.0073 | 0.089 | | Q-MN | 0.000 | 0.0006 | 0.025 | 0.0 | 0.0005 | 0.024 | 0.599 | 0.0009 | 0.031 | 0.0 | 0.0006 | 0.025 | 0.583 | 0.0307 | 0.030 | | | Q-SL | 0.024 | 0.0004 | 0.022 | 0.055 | 0.0005 | 0.023 | 0.536 | 0.0009 | 0.030 | 0.012 | 0.0004 | 0.023 | 0.224 | 0.0008 | 0.029 | | | LF (v1) | Q-TVAE | 0.444 | 0.0007 | 0.024 | 0.182 | 0.0006 | 0.025 | 0.655 | 0.0009 | 0.031 | 0.368 | 0.0007 | 0.024 | 0.414 | 0.0008 | 0.030 | | (n = 5000) | Q-T | 0.975 0.0010 0.032 | 0.989 | 0.0010 | 0.032 | 0.935 | 0.0010 | 0.033 | 0.823 | 0.0010 | 0.032 | 0.854 | 0.0010 | 0.033 | | | | Q-MN | 0.0 | 0.0008 | 0.027 | 0.0 | 0.0007 | 0.025 | 0.366 | 0.0004 | 0.020 | 0.0 | 0.0006 | 0.023 | 0.648 | 0.0006 | 0.022 | | | Q-SL | 1.0 | 0.0002 | 0.015 | 0.993 | 0.0002 | 0.016 | 0.625 | 0.0004 | 0.019 | 0.997 0.0002 0.015 0.996 | 0.0004 | 0.019 | | | | | | LF (v1) | Q-TVAE | 0.666 | 0.0004 | 0.019 | 0.956 | 0.0004 | 0.019 | 0.392 | 0.0004 | 0.020 | 0.782 | 0.0004 | 0.018 | 0.861 | 0.0004 | 0.020 | | (n = 10000) | Q-T | 0.341 | 0.0004 | 0.020 | 0.331 | 0.0004 | 0.021 | 0.326 | 0.0004 | 0.021 | 0.141 | 0.0004 | 0.021 | 0.122 | 0.0004 | 0.021 | | Q-MN | 0.439 | 0.0018 | 0.043 | 0.149 | 0.0018 | 0.044 | 0.014 | 0.0119 | 0.114 | 0.075 | 0.0019 | 0.044 | 0.0 | 0.0102 | 0.108 | | | Q-SL | 0.0 | 0.0027 | 0.054 | 0.001 | 0.0029 | 0.057 | 0.025 | 0.0097 | 0.103 | 0.0 | 0.0029 | 0.057 | 0.0 | 0.0095 | 0.103 | | | LF (v2) | Q-TVAE | 0.097 | 0.0075 | 0.043 | 0.031 
| 0.0059 | 0.045 | 0.046 | 0.0125 | 0.114 | 0.016 | 0.0067 | 0.045 | 0.0 | 0.0112 | 0.112 | | (n = 500) | Q-T | 0.817 0.0066 0.064 0.721 | 0.0062 | 0.065 | 0.047 | 0.0094 | 0.102 | 0.753 | 0.0064 | 0.065 | 0.003 | 0.0110 | 0.108 | | | | | Q-MN | 0.0 | 0.0011 | 0.029 | 0.0 | 0.0009 | 0.027 | 0.925 | 0.0010 | 0.031 | 0.0 | 0.0011 | 0.027 | 0.662 | 0.0010 | 0.030 | | | LF (v2) | Q-SL | 0.765 | 0.0008 | 0.028 | 0.669 | 0.0008 | 0.028 | 0.921 0.0009 0.030 | 0.661 | 0.0007 | 0.028 | 0.678 | 0.0008 | 0.030 | | | | Q-TVAE | 0.607 | 0.0007 | 0.028 | 0.795 | 0.0007 | 0.027 | 0.912 0.0009 0.031 0.799 0.0006 0.027 | 0.672 | 0.0008 | 0.029 | | | | | | | | (n = 5000) | Q-T | 0.793 | 0.0010 | 0.032 | 0.644 | 0.0010 | 0.032 | 0.724 | 0.0011 | 0.033 | 0.755 | 0.0010 | 0.032 | 0.0 | 0.0010 | 0.037 | | Q-MN | 0.0 | 0.0009 | 0.027 | 0.0 | 0.0008 | 0.025 | 0.707 | 0.0005 | 0.020 | 0.0 | 0.0011 | 0.025 | 0.783 | 0.0009 | 0.026 | | | Q-SL | 0.859 0.0004 0.019 | 0.649 | 0.0004 | 0.019 | 0.964 | 0.0005 | 0.020 | 0.594 | 0.0004 | 0.019 | 0.676 | 0.0004 | 0.021 | | | | | LF (v2) | Q-TVAE | 0.0 | 0.0177 | 0.412 | 0.448 | 0.0004 | 0.021 | 0.981 | 0.0004 | 0.020 | 0.240 | 0.0004 | 0.021 | 0.227 | 0.0004 | 0.021 | | (n = 10000) | Q-T | 0.0 | 0.0124 | 0.430 | 0.719 | 0.0005 | 0.022 | 0.566 | 0.0005 | 0.022 | 0.574 | 0.0005 | 0.022 | 0.285 | 0.0005 | 0.022 | | Q-MN | 0.0 | 0.0647 | 0.409 | 0.0 | 0.0451 | 0.405 | 0.0 | 0.0390 | 0.400 | 0.0 | 0.0515 | 0.411 | 0.0 | 0.614 | 0.844 | | | Q-SL | 0.0 | 0.0440 | 0.317 | 0.0 | 0.0353 | 0.346 | 0.0 | 0.0340 | 0.344 | 0.0 | 0.0390 | 0.346 | 0.0 | 0.625 | 0.846 | | | IHDP | Q-TVAE | 0.0 | 0.0178 | 0.412 | 0.0 | 0.0132 | 0.404 | 0.0 | 0.0134 | 0.405 | 0.0 | 0.0142 | 0.408 | 0.0 | 0.191 | 0.582 | | (n = 747) | Q-T | 0.0 | 0.0124 | 0.430 | 0.0 | 0.0123 | 0.422 | 0.0 | 0.0118 | 0.424 | 0.0 | 0.0125 | 0.426 | 0.0 | 0.0293 | 0.446 | ## B Additional Results/Analysis In Table 4 we also provide a set of results (without any marginalization over any of the G-, U-, or 
Q-dimensions) for a subset of the methods considered in the full-factorial design. Note that whilst we have tried to highlight the best results in bold and underline, many of the results are close/competitive and illustrate (again) that the performance is dataset and combination dependent, as well as that there exist multiple possible 'best' options for a given situation.

## B.1 Shapley Results

In addition to the interaction plots given in the text, here we provide results for the regressor performance and the predictor impacts.

## B.1.1 Regressor Performance

Table 5 shows the performance of the random forest regressor in predicting the three outcomes (MSE, ATE estimate standard error, and p-value). These results help us understand whether there is any information in the set of predictors which is useful in predicting the outcome (and therefore, in turn, whether there exist any potentially meaningful patterns). We provide results for the fraction of explained variance, R2, and MSE. The results are averages over a 10-fold cross-validated evaluation.

Table 5: Meta-analysis results for random forest regression performance for MSE, standard error of the estimates (ATE s.e.) and p-values as the outcomes. Results for R2, explained variance, and MSE are given as the mean ± the standard deviation across the 10-fold cross-validation procedure.

| Outcomes: | MSE | ATE s.e. | p-value |
|--------------------|-------------|-------------|-------------|
| R2 | 0.42±0.382 | 0.66±0.400 | 0.78±0.266 |
| Explained Variance | 0.43±0.380 | 0.66±0.400 | 0.78±0.266 |
| MSE | 0.201±0.580 | 0.039±0.078 | 0.024±0.058 |

It is useful to interrogate this table first to understand whether any further investigation is needed: if the predictive performance is poor, there is no point explaining the regressor's behaviour; if it is good, then it is worth investigating what the regressor is using to achieve that performance.
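As a reminder of how the three regressor-performance metrics differ, the sketch below computes R², explained variance, and MSE on a toy prediction with a constant bias (chosen purely for illustration); the bias leaves the explained variance at 1 while lowering R², since explained variance ignores any constant offset in the residuals:

```python
import numpy as np

def regressor_metrics(y_true, y_pred):
    """R^2, explained variance, and MSE, as reported in Table 5."""
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Explained variance centers the residuals, so a constant bias is ignored.
    explained_var = 1.0 - np.var(residual) / np.var(y_true)
    mse = np.mean(residual ** 2)
    return r2, explained_var, mse

y = np.array([1.0, 2.0, 3.0, 4.0])
pred = y + 0.5  # constant bias: explained variance stays at 1, R^2 drops
r2, ev, mse = regressor_metrics(y, pred)
```

This is why the table reports both quantities: when they diverge, the regressor is systematically offset rather than noisy.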
From Table 5 it can be seen that all outcomes were somewhat well predicted, with the arguable exception of the MSE, which had large standard deviations for the R2 and fraction of explained variance, suggesting that test data in some of the folds in the 10-fold cross-validation scheme were poorly predicted, whilst others were predicted well. Even though R2 and explained variance are not reliable metrics in non-linear regression tasks, it can nonetheless be seen that a relatively high R2 is achieved, indicating that information about the MSE is predictable from the set of predictors. This was especially true for the prediction of the Shapiro-Wilk test p-values and the ATE estimate standard error, which both had an average fraction of explained variance and an average R2 greater than 0.7.

Figure 15: Shapley predictor random forest regressor impact results for the MSE.

## B.1.2 Shapley Predictor Impact

Given that the regression performance results indicate some patterns exist in the data, let us now turn to the Shapley explainability results. For each of the three respective outcomes (MSE, ATE estimate standard error, and p-values), Figures 15, 16, and 17 depict the global predictor impacts on the regressor (left-hand plots) as well as the per-predictor, per-datapoint impact on the regressor (right-hand plots).

Figure 16: Shapley predictor random forest regressor impact results for the ATE estimate standard error.

The right-hand side plots in Figures 15-17 are useful in visualising the spread of impact, and in which direction this impact is (*i.e.*, does it push the prediction of the outcome up or down in value). For example, in Figure 15 one can see that the dataset is the most important predictor for predicting MSE, followed by Q-CFR, G-MN, and U-sub. We see that most of the impact (for all predictors) is clustered around 0, which indicates that they are stable in terms of their relationship to MSE.
However, the right-hand side plot shows that some outliers exist, with certain combinations yielding higher (or, to a lesser extent, lower) MSEs. In Figure 16, for the ATE estimate standard error, we again see the dataset as the most important predictor, followed by Q-CFR and sample size. There is a larger variance in the impact these predictors have on the outcome than for the MSE. Finally, looking at Figure 17 for the p-values, we see a heavy dependence on sample size. Note that the p-value outcome here is a function of a *different* sample size to the one used in the regression - it is computed based on the number of simulations, which is fixed for all experiments. Thus it is still interesting to note that sample size was important as a determinant of the Shapiro-Wilk test for normality p-value. Perhaps more interesting than these predictor impact results are the interaction plots, which are given in Section 7.4.

Figure 17: Shapley predictor random forest regressor impact results for the Shapiro-Wilk test p-values.

## B.2 Alternative Perspectives

In the main text we provided summary results by estimating the probability that a particular Q (outcome), G (propensity), or U (update step) method would result in a performance advantage. This was done because the number of results was large, making it difficult to judge the efficacy of a method in isolation. In Figures 18-24 we provide the complete results for each of the seven dataset variants: LF (v1) with n = {500, 5000, 10000}, LF (v2) with n = {500, 5000, 10000}, and the IHDP dataset with n = 747. For each Figure we provide the comparison of each Q-method with each G- and U-method, and include a red dashed line showing the base method (just the Q-method without the IF update step) for comparison.
Figure 18: LF (v1) n = 500 results. For each outcome model Q (x-axis) we plot the corresponding Mean Squared Error (y-axis) for each possible propensity model G and all possible update methods U. The best performing base method for each Q is given as the dashed red line.

Figure 19: LF (v1) n = 5000 results (layout as in Figure 18).

Figure 20: LF (v1) n = 10000 results (layout as in Figure 18).

Figure 21: LF (v2) n = 500 results (layout as in Figure 18).

Figure 22: LF (v2) n = 5000 results (layout as in Figure 18).

Figure 23: LF (v2) n = 10000 results (layout as in Figure 18).

Figure 24: IHDP n = 747 results (layout as in Figure 18).
Review 1:

Summary: The authors revisit causal inference through the lenses of deep learning and semiparametrics. Their goal is to study the relevance of deep neural nets as functional estimates within a causal DAG. They argue that these neural nets lead to biased estimates for the average treatment effect (ATE), and explore various ways of debiasing them, using ideas from semiparametrics. They conduct a thorough simulation/ablation study, where they compare different approaches to causal inference, based or not on deep nets, and potentially debiased. They also add a new method to the list, called MultiNet/MultiStep, which is based on two innovations: an ensemble-like architecture (where each element of the ensemble is the output of a layer), and a new loss function, based on the layer-wise ensembling idea.

Strengths and Weaknesses:

Strengths
- The paper is very well-written, and contains a well-narrated review of causal inference and deep learning; I believe the review itself is quite a nice contribution
- The new method called MultiNet/MultiStep is a sensible approach that seems to work well
- The simulation study seems quite valuable, albeit difficult to read; it allows both to assess the usefulness of recent deep learning-based ideas for causal inference, as well as the new contribution of the authors (MultiNet/MultiStep)

Weaknesses
- Some claims in the abstract/introduction are phrased in a misleading way. For instance, the last sentence of the abstract "We also show that it is possible to improve existing neural networks for 'free', without needing more data, and without needing to retrain them" is indeed consistent with the experiments (which show that updating sometimes works better, but sometimes does not), but suggests that updating is generally preferable, which is not that clear from the experiments. I think this sentence could be weakened, or additional details on the marginal or negative value of updating should be mentioned.
Similarly, in the introduction, the authors say "we confirm that initial estimation methods benefit from the application of the semiparametric techniques. In general, however, the improvements are dataset dependent, highlighting possible interactions between the underlying data generating process, sample sizes, and the estimators and update steps used." Again, this is technically accurate, but slightly misleading, since I believe it makes it sound like the updates always improve upon the original techniques, which is not the case.
- The MultiNet architecture seems a bit complex, and not motivated well enough. Why not use more standard ensembles of neural nets, for instance MC dropout (Gal & Ghahramani, 2016), or deep/batch ensembles (Lakshminarayanan et al., 2017, Wen et al., 2020)?
- The simulation study is a bit hard to read. The experiments are thorough indeed, which makes presenting them very difficult, and I appreciate the authors' efforts in that direction, although I had a hard time understanding it (notably the quantile curves). I have put a few suggestions in the "requested changes" section.

Minor remarks
- I do not fully agree with "most machine learning models are non-parametric and do not readily facilitate statistical inference" (page 1). Indeed, many popular ML models are parametric (albeit sometimes overparametrised), including deep nets and logistic regression
- I do not understand "notwithstanding finite sample associations" on page 5
- On the relationship between semiparametrics and deep learning, relevant papers were recently authored by Zhong et al. (2021, 2022)

Some minor references issues
- Farrell et al. was published in Econometrica in 2021
- the book by Bickel et al. on semiparametrics is from 1998, not 2007
- Shalit et al.
was published at ICML 2017

Additional references:
- Wen et al., BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning, ICLR 2020
- Gal and Ghahramani, Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ICML 2016
- Lakshminarayanan et al., Simple and scalable predictive uncertainty estimation using deep ensembles, NeurIPS 2017
- Zhong et al., Neural Networks for Partially Linear Quantile Regression, Annals of Statistics, 2022
- Zhong et al., Neural Networks for Partially Linear Quantile Regression, arXiv:2106.06225, 2021

Requested Changes: I think the following minor changes are quite important:
1) Writing the claims I mentioned in the "Weaknesses" section in a less misleading way.
2) Discussing other approaches to neural ensembling, and relating yours to the others. In particular, since you use dropout, do you use the stochasticity for ensembling as well?
3) I think the relationship between your paper and Farrell et al. (2021) should be discussed a bit more. While I am not very familiar with this paper, it makes it look like just plugging in neural nets will lead to good inferences, which somewhat contradicts your story.

Another thing that would clearly strengthen the paper, but is much less straightforward, is to make the simulation study more clear. Unfortunately, I have no good recipe for this, but here are a few suggestions:
- the quantile curves are, I think, a good idea, but are not very easy to understand; it would be helpful to write down clearly what an "ideal curve" would look like.
- adding some statistical tests would be helpful to assess how much of the results are significant. For instance, in the very large-scale study of Fernandez-Delgado et al. (2014), they used Friedman's ranks to summarise the results.
Additional reference: Fernandez-Delgado et al., Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?, JMLR 2014

Broader Impact Concerns: I have no particular concern about this work.

==================================================

Review 2:

Summary: This paper considers the application of semi-parametric methods to machine learning, in particular to the estimation of treatment effects using neural networks. The authors start from influence functions, which describe the asymptotic distributional behavior of estimators, show how they can be used to debias and obtain confidence intervals, and propose a new method for debiasing. They conduct an experimental study of the estimation accuracy and asymptotic normality of various methods, when combined with various estimators for the average treatment effect (ATE) and the propensity model (including those based on NNs). They also propose MultiNet, a version of the SuperLearner with a CFR network backbone, for ATE and propensity estimation. The empirical results are dataset dependent. For the synthetic dataset (where the ground truth is known), MultiNet works well for estimating the ATE, but is outperformed by others in terms of asymptotic normality. The same pattern holds for the new method for debiasing and no debiasing at all.

Strengths and Weaknesses:

# Strengths #
1. The authors provide a high-level but mathematically rigorous exposition of influence functions and how they can be used to debias estimators, which I think would be of interest to the machine learning community. They also give generic instructions for computing IFs for general estimators.
2. Experiments are done on a wide range of combinations of algorithms for estimating the ATE, estimating the propensity model, and debiasing. The authors consider up to second order effects (i.e. marginalizing over only one of these three variables).
3.
The authors provide conjectures on why certain algorithms work well on certain datasets. These indicate that the results in this paper would be of interest to practitioners in machine learning.

# Weaknesses #
1. The datasets used in the empirical evaluation have some limitations. LFv1 and LFv2 are synthetic. IHDP contains real world data, but the authors ask us to discount the findings.
2. The results suggest that a finer-grained analysis may be necessary. For example, Table 3 suggests that debiasing can be quite effective, while later figures (which marginalize away the effect of the propensity model) suggest that no debiasing is comparable to debiasing. In my opinion, this behavior indicates that the interaction of the ATE estimator, the propensity model, and the debiaser may be quite important.

Requested Changes: In order for me to recommend acceptance, I would like the authors to (corresponding to the two weaknesses discussed above):
1. Include analysis on other, preferably real-world, datasets, or explain why none are available. I do not work in the causal analysis field, so I am not sure which are appropriate. Perhaps Twins and Jobs mentioned in [A] are options?
2. Include an analysis of part of the factorial design mentioned at the beginning of Section 7, since analyzing the full design would be difficult. Specifically, select a subset of the ATE estimators, a subset of the propensity models, and a subset of the debiasers, and show the full factorial design on them. The subsets could be selected based on the current analysis, their representativeness, or some other criterion.

Things I would like the authors to clarify:
1. What was the reasoning behind choosing the baselines described in Section 6.3.1?
2. How was the MSE computed?
3. How is the multistep debiasing process proposed by Neugebauer & van der Laan different from the proposed method?

Minor comments on writing, to strengthen the work:
1. The notation in equation 16 is unclear.
Some subscripts seem to be missing. 2. On page 16, what is the loss used to train the constrained regression? 3. It may be better to place figure 3 earlier in the text, since it describes the data analysis procedure of interest. Currently the jump from IFs/debiasing to MultiNet is not very well connected. [A] Hatt, Tobias, and Stefan Feuerriegel. "Estimating average treatment effects via orthogonal regularization." Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2021. Broader Impact Concerns: No broader impact statement is given. However, since a large part of the paper discusses evaluating methods for ATE (commonly used for medical and public policy applications where the stakes are high), I think a discussion of the appropriateness of the metrics they use for real-world applications should be included. For example, why is the MSE an appropriate metric? Is it because larger errors are penalized more? ================================================== Review 3: Summary: In this work, the authors provide an overview of some key concepts that appear in the literature on semi-parametric statistical inference, with the primary focus being on parameter estimators whose error can be expressed using the empirical mean of an influence function. Strong emphasis is placed on learning tasks in the context of causal inference (e.g., estimating the average treatment effect), and the authors give detailed instructional examples within this context. As an overarching theme, the authors advocate for the use of semi-parametric statistical techniques and ideas in the vein of developing machine learning techniques which are flexible, yet are still conducive to precise statistical statements. 
On top of the aforementioned general exposition, the authors discuss different principled approaches to design novel estimators (their Section 4; limited to the causality context), including a new proposal ("MultiStep") which aims to minimize the mean and variance of the random IF error term. They also propose a family of learning procedures ("MultiNet") using layer-wise training on disjoint subsets of the data and aggregated predictions in an overall attempt to achieve performance similar to an aggregated ensemble of weak learners, and they report the results of a series of empirical tests using both synthetic and real-world data which evaluate their proposed procedures against a wide variety of alternatives. Strengths and Weaknesses: On the positive side, the overall notion of integrating semi-parametric statistical models, ideas, and techniques into the modern machine learning workflow is interesting and may be a promising research direction in the coming years. The paper itself is written in an inviting tone with generally accessible prose, the literature cited is relevant and quite comprehensive. Furthermore, the novel MultiStep procedure is described quite clearly against the backdrop of other tactics for merging IFs and causal inferential procedures. Also the detailed empirical tests have a setup that is described quite clearly, and a rather rich collection of test results that are organized in an accessible way. On the negative side, while most individual paragraphs of this paper are fairly well-written and are of reasonable quality, at a macroscopic level, I feel that this paper is a bit of a mess. 
The paper contains a few conceptual "modules" as it were: - Semi-parametric inference and influence functions - Causal inference, in particular estimation of average treatment effect - MultiNet, a family of learning procedures that try to mimic an ensemble effect (also combined with a causal inference sub-routine) In the current manuscript, these modules are not very cohesive, making it very hard to evaluate what this paper is about. The title and abstract say nothing about causal inference, and yet the most concrete parts of this paper are completely specialized to the causality context. I get the heuristic in the MultiStep idea, but the MultiNet proposal seems to come out of nowhere. There are also numerous small details that the reader can trip up on. Here are a few examples: - After equation (7), the authors say that an "unbiased" estimator is one whose estimation error "converges in probability to zero," but I think most readers will refer to such a property as _consistency_. In several places throughout the paper, I find the use of the term "bias" highly unnatural. - In equation (32), the authors characterize their MultiStep idea in terms of minimizing a weighted sum of the mean and variance of the random IF value; can the minimizer always be said to satisfy (33) and (34)? Why do $\alpha_{1}$ and $\alpha_{2}$ not appear in these two formulas? - The description of the "submodel" approach in 4.1 is critical to the derivation of MultiStep in 4.2, but I find it extremely hard to follow. The nature of $H$, and the relation of (31) to (22) is very unclear, and as a result, the conceptual basis upon which MultiStep is built is quite weak. Overall, I cannot really parse this paper to extract any new knowledge or insights. Is this a survey paper on IF-based techniques in ATE estimation? Is this a general introduction to semi-parametric methods and their utility in machine learning? 
Is this a paper studying neural network learning algorithms using heuristics to mimic ensemble methods? The overall story is diluted by a large number of concepts which each play only a very minor role. The whole idea of a "free lunch" is great if we know the influence function, but since we do not, and a lot of effort and uncertainty arises from trying to put together a reasonable approximation of this error term, the net gain for the learning algorithm designer/user using the approach advocated by the authors is not really evident, to me at least.

Requested Changes: I think the authors need to significantly re-consider what it is they are trying to achieve in this work, and what they want to communicate to the reader in this paper.

Broader Impact Concerns: Not applicable.

==================================================

Metareview:

Recommendation: Reject

Comment: The authors study the very general problem of the estimation of a function $\Psi(\mathcal{P})$ of a probability distribution $\mathcal{P}$ given a sample drawn from $\mathcal{P}$. Given an estimate $\hat{\mathcal{P}}$, they consider the first order approximation of $\Psi(\mathcal{P})-\Psi(\hat{\mathcal{P}})$. It can be written in terms of an influence function (IF, a central concept in robust statistics). They propose various methods to reduce this term (and in particular, to make its expectation close to $0$ so that $\Psi(\hat{\mathcal{P}})$ is an unbiased estimator). While their method is more general, it is essentially developed in the context of causal inference, more precisely, for the estimation of the ATE functional (Average Treatment Effect). A first method, called MultiStep, is described in Subsection 4.2 and is aimed to be an improvement of the existing One-Step method. A second method, MultiNet, is described by the authors as trying to "mimic the performance of an ensemble with a single neural network".
It is discussed in Section 5, but I must say it is not clear to me what this method is actually doing. The reviewers pointed out the following major problems:

((1)) the paper is poorly organized. It is very difficult to identify what the authors' contribution actually is, and a take-home message from the paper. I will elaborate on this:

1.1) the paper is far too long. It starts with "mini-tutorials" on causal inference and ATE, then on IF. The reviewers praised the quality of the writing of these mini-tutorials, but also pointed out the lack of connection between them. Quoting Reviewer eFLE: "while most individual paragraphs of this paper are fairly well-written and are of reasonable quality, at a macroscopic level, I feel that this paper is a bit of a mess". After 4 pages of introduction and discussion of previous works, these tutorials take 10 pages, which means that the contribution of the authors is not discussed before page 14. Some background on IFs and ATE is of course useful. The problem here is that it seems that the authors want to write as much as possible about IFs, at the risk of getting the reader lost or confused. It would be better to focus on the aspects of IFs that are necessary to understand your contribution. To this respect, I think that Equation (13) (first order expansion) + Equation (11) (writing the first order term in terms of an IF) is almost enough. I don't understand at all how the "simple example" of Subsection 3.2.1 helps; in this case, the expectation of the term is already equal to 0...

1.2) the scope of the paper should be more focused. From the abstract and the introduction, it is a little difficult to see what the scope of the proposed method is. The introduction sounds a little like it can universally solve the tradeoff between parametric and nonparametric approaches.
1.3) after the very detailed tutorials of Section 3, it is a little disappointing that the methods proposed by the authors are only sketched in Subsection 4.2 and Section 5. To be honest, after reading the paper several times, I'm not sure I totally understand what MultiNet is actually doing. I acknowledge that Reviewers GzoT and DiMQ were more positive on the writing of the paper. On the other hand, they point out as contributions of the paper: the tutorials on ATE and IF, and the experiments, but put little emphasis on MultiStep and MultiNet. Thus, it seems to me that the authors fail at conveying a clear message here.

((2)) Reviewers GzoT and DiMQ praised the empirical study; however, in the discussion phase, they also agreed that it is difficult to see what conclusion can be drawn from it, and that the improvement over existing methods is not clear.

Overall, my feeling after reading the paper many times is that the paper is not suitable for publication in its current state. It seems to me that ((1)) and ((2)) above justify the rejection of the paper. On the other hand, Reviewers GzoT and DiMQ are enthusiastic about the use of IFs in ATE estimation. They initiated a discussion with the authors, and I also want to acknowledge the work of the authors during the discussion phase. I will therefore recommend to reject the paper, but leave the door open for re-submission, and hope that the authors will submit a new version of the paper. The decision may seem harsh, but I don't believe that the contribution of the authors will receive the attention it deserves if some readers struggle to identify it. I *strongly* encourage the authors to rewrite the paper entirely instead of trying to fix it. I totally agree with reviewer eFLE that the problem is not the "local" quality of the writing, but the organization of the paper. What follows is probably more a personal opinion, but I would start by a clear explanation of both new methods proposed: MultiStep and MultiNet.
Once this is clearly written, I would insert before that the only facts about IFs that are absolutely necessary to understand these methods. Of course, theoretical results on the convergence of these methods would be awesome -- I understand that they might be beyond the scope of this paper. However, it is clear that all the theory presented in Section 3 requires smoothness assumptions on $\Psi$, and moment assumptions on the data (what happens to the simple example in 3.2.1 if $y$ has an expectation, but no variance?). Thus, it seems necessary that the experiments convey a clear take-home message: what are the settings where your methods will work? And the settings where they will improve on existing methods? You can re-structure the experimental study taking into account the suggestions of GzoT and DiMQ. I sincerely wish you good luck in the revision of the paper.

==================================================
# Doubly Robust Kernel Statistics For Testing Distributional Treatment Effects

Jake Fawkes *jake.fawkes@st-hughs.ox.ac.uk*
Department of Statistics, University of Oxford

Robert Hu *robyhu@amazon.co.uk*
Amazon†

Robin J. Evans *evans@stats.ox.ac.uk*
Department of Statistics, University of Oxford

Dino Sejdinovic *dino.sejdinovic@adelaide.edu.au*
School of Mathematical Sciences, University of Adelaide†

## Abstract

With the widespread application of causal inference, it is increasingly important to have tools which can test for the presence of causal effects in a diverse array of circumstances. In this vein we focus on the problem of testing for *distributional* causal effects, where the treatment affects not just the mean, but also higher order moments of the distribution, as well as multidimensional or structured outcomes. We build upon a previously introduced framework, Counterfactual Mean Embeddings, for representing causal distributions within Reproducing Kernel Hilbert Spaces (RKHS) by proposing new, improved, estimators for the distributional embeddings. These improved estimators are inspired by doubly robust estimators of the causal mean, using a similar form within the kernel space. We analyse these estimators, proving they retain the doubly robust property and have improved convergence rates compared to the original estimators. We then use the proposed estimators as test statistics in a new permutation based test for distributional causal effects. Finally, we experimentally and theoretically demonstrate the validity of these tests.

## 1 Introduction

In this work we focus on the problem of testing for distributional treatment effects (Bellot & van der Schaar, 2021; Park et al., 2021; Chikahara et al., 2022), where the aim is to test for causal effects which manifest as something other than a mean shift. This can be especially useful when the target variable is high dimensional or structured as a network, since in these cases there is no natural mean to compare.
Our contributions are as follows:

1. We introduce new estimators to be used within the Counterfactual Mean Embeddings framework, based upon the doubly robust estimator of the causal mean from semi-parametric statistics (Bang & Robins, 2005; Tsiatis, 2006). These may be applied to estimate kernelised versions of the average treatment effect and the effect of treatment on the treated. We prove these estimators inherit the double robustness properties of well-established semi-parametric estimators of causal effects and that they converge to the correct value if either of the models underlying them converges to the true model. This shows that they have theoretically improved convergence results when compared to the previous counterfactual mean embedding estimators.
2. We apply these new estimators to permutation testing for distributional causal effects. We propose a new permutation approach which allows for doubly robust trainable statistics to be used within permutation testing and prove that these tests are valid.
3. We experimentally validate the performance of our test on synthetic, semisynthetic and real data. An implementation of our approach can be found at: https://github.com/Jakefawkes/DR_distributional_test.

†Work mainly done while the authors were with the Department of Statistics, University of Oxford.

## 1.1 Related Work

Our work builds upon the counterfactual mean embeddings framework introduced in Muandet et al. (2021), which we detail further within Section 2.3. This falls under the general area of applying kernels to test for distributional causal effects, of which Bellot & van der Schaar (2021) is an early example, whose test statistic arises as a special case of our own for the distributional effect of treatment on the treated. In a concurrent work, Martinez-Taboada et al. (2023) develop a similar approach using doubly robust statistics (Robins & Rotnitzky, 1995) within kernel spaces. However, by applying the work of Shekhar et al.
(2022) they are able to take a permutation-free approach to testing distributional causal effects. Due to the concurrency and current lack of publicly available code we do not compare against this method. This work has also been built upon to estimate counterfactual densities in Martinez-Taboada & Kennedy (2023). Among more general applications of kernel spaces to testing distributional causal effects, Park et al. (2021) develop a statistic targeting the conditional average treatment effect (CATE) using conditional mean embeddings to estimate the RKHS distance between the expected potential outcomes. Chikahara et al. (2022) use tests for distributional causal effects to find which features are relevant to differences in treatment effects. Testing for causal distributional effects falls within a long tradition of using kernel spaces for hypothesis testing, as they faithfully capture all features of a distribution (Gretton et al., 2005; 2012). Kernel methods have also been widely applied within the larger causality literature, for example, to instrumental variables (Singh et al., 2019; Muandet et al., 2020), to causal learning with proxies and unmeasured confounders (Singh et al., 2020; Mastouri et al., 2021), and to the orientation of causal edges (Mitrovic et al., 2018).

## 2 Background And Notation

## 2.1 Causal Set Up

Throughout, we will let Y denote the outcome, which is affected by a binary treatment T in the presence of additional covariates X. We will use the potential outcome notation (Rubin, 1997) so that Y(t) corresponds to the outcome observed when T is given the value t. We assume that the causal relationships between these variables are given by the basic confounding DAG in Figure 1.¹

**The Propensity Score and Overlap** Throughout we will make reference to the *propensity score*, which is the probability of treatment given a set of covariates: P(T = 1 | X = x).
For a flexible notation we will use $e(x, t)$ to denote $P(T = t \mid X = x)$; since the treatment is binary, we have $e(x, t) = 1 - e(x, t')$ whenever $t + t' = 1$. We will also make use of the inverse propensity odds, which we denote by

$$w(x,t)=\frac{1-e(x,t)}{e(x,t)}.$$

Finally, $\hat{e}(x, t)$ and $\hat{w}(x, t)$ will be used to denote finite sample estimates of these quantities.

Figure 1: Assumed DAG

An important assumption for causal inference from observational data is that of **overlap**. This ensures that there are treated and untreated individuals we can compare, and it is written formally as:

Definition 1. *We say there is* **overlap** *for* $T = t$ *if we have* $0 < e(X, t)$ *with probability 1. We say the overlap is* **two-sided** *if it holds for both values of* $t$ *and* **one-sided** *otherwise. Any overlap is* **strict** *if there is some* $\delta > 0$ *such that* $\delta \leq e(X, t)$*, again with probability 1.*

Under the one-sided overlap assumption that $e(x, t) > 0$ and the causal DAG in Figure 1, the distribution of the interventional quantity $Y(t)$ is uniquely identified (Pearl, 2009; Richardson & Robins, 2013) from observational data as follows:

$$P(Y(t)=y)=\mathbb{E}_{P(X)}\left[P(Y=y\mid X,T=t)\right].$$

This allows us to estimate causal quantities of interest from observational distributions alone, such as the average treatment effect (ATE):

$$\mathbb{E}[Y(1)-Y(0)].$$

It is important to note that two-sided overlap is required for the identification of the average treatment effect, as each interventional distribution requires overlap at a different value of $t$ for identification.

¹We make use of the SWIG framework to combine causal graphical models with potential outcomes. More details can be found in Richardson & Robins (2013).
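As a concrete illustration of these quantities, the propensity, the inverse propensity odds, and a strict-overlap check can be sketched as follows. This is a minimal sketch assuming a logistic propensity model; the function and variable names are our own, not the paper's:

```python
import numpy as np

def e(x, t, a):
    """Propensity P(T = t | X = x) under an assumed logistic model with coefficients a."""
    p1 = 1.0 / (1.0 + np.exp(-(x @ a)))  # P(T = 1 | X = x)
    return p1 if t == 1 else 1.0 - p1

def w(x, t, a):
    """Inverse propensity odds w(x, t) = (1 - e(x, t)) / e(x, t)."""
    et = e(x, t, a)
    return (1.0 - et) / et

def strict_overlap(xs, t, a, delta):
    """Check strict overlap, delta <= e(X, t), on a sample of covariates."""
    return bool(np.all(e(xs, t, a) >= delta))
```

Note that $e(x, 1) + e(x, 0) = 1$ by construction, matching the binary-treatment identity above.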
In the case where we are only likely to have one-sided overlap, such as when an experimental new medical treatment is only given to the most severe cases, practitioners instead focus on the effect of treatment on the treated (ETT):

$$\mathbb{E}[Y(1)-Y(0)\mid T=1].$$

This quantity is identified from observational data whenever $e(x, 0) > 0$ for almost all $x$, i.e., all individuals have some probability of not being treated. This allows us to still obtain some estimate of treatment efficacy in cases where two-sided overlap is violated. It is also worth noting that overlap is a testable assumption, with a test developed, for example, in Lei et al. (2021).

## 2.2 Kernel Background

We use kernel embeddings of distributions to construct our test statistics, and so we now briefly introduce the relevant background material and notation. For a more thorough engagement with this material we refer the reader to Muandet et al. (2017).

**Reproducing Kernel Hilbert Spaces** Let $\mathcal{X}$ be some non-empty space. A real-valued RKHS, $(\mathcal{H}_{\mathcal{X}}, \langle\cdot,\cdot\rangle_{\mathcal{H}_{\mathcal{X}}})$, is a complete inner product space of functions $f : \mathcal{X} \to \mathbb{R}$ such that the evaluation functional is continuous for all $x \in \mathcal{X}$. Due to the Riesz representation theorem, for all $x \in \mathcal{X}$ there is a function $k_x \in \mathcal{H}_{\mathcal{X}}$ which satisfies the *reproducing property*, that $f(x) = \langle f, k_x\rangle$ for all $f \in \mathcal{H}_{\mathcal{X}}$. The function $k(x, x') = \langle k_x, k_{x'}\rangle$ is known as the reproducing kernel for the space $\mathcal{H}_{\mathcal{X}}$. Conversely, via the Moore-Aronszajn Theorem (Aronszajn, 1950), any symmetric positive definite function $k$ on $\mathcal{X}$ defines a unique RKHS. We will suppose access to RKHSs on $\mathcal{X}$ and $\mathcal{Y}$, denoted by $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$, with kernels $k$ and $\ell$ respectively. Further, for a random variable $X$ on $\mathcal{X}$ with distribution $P_X$ we can define the *mean embedding* of $X$ as:

$$\mu_{X}=\mathbb{E}_{P_{X}}[k(X,\cdot)].$$

Under the integrability condition that $\int_{\mathcal{X}} \sqrt{k(x,x)}\,\mathrm{d}P_X(x) < \infty$ we have that $\mu_X \in \mathcal{H}_{\mathcal{X}}$.
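Computationally, the mean embedding of a sample is the average of kernel features, $\hat{\mu}_X = \frac{1}{n}\sum_i k(x_i,\cdot)$, and inner products between two empirical embeddings reduce to averages of Gram-matrix entries. A minimal sketch with a Gaussian kernel (the names here are our own, illustrative choices):

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)), a characteristic kernel
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def embedding_distance(A, B, sigma=1.0):
    """RKHS distance ||mu_hat_A - mu_hat_B|| between empirical mean embeddings,
    using <mu_hat_A, mu_hat_B> = mean of the cross Gram matrix."""
    kaa = gaussian_gram(A, A, sigma).mean()
    kbb = gaussian_gram(B, B, sigma).mean()
    kab = gaussian_gram(A, B, sigma).mean()
    return float(np.sqrt(max(kaa + kbb - 2.0 * kab, 0.0)))
```

This distance between empirical embeddings is precisely the empirical form of the maximum mean discrepancy introduced next.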
If we have two random variables $X, X'$ over $\mathcal{X}$, with distributions $P_X, P_{X'}$ respectively, then with a slight abuse of notation we denote the *maximum mean discrepancy* (MMD) by:

$$\operatorname{MMD}[X,X^{\prime},\mathcal{H}_{\mathcal{X}}]:=\left\|\mu_{P_{X}}-\mu_{P_{X^{\prime}}}\right\|_{\mathcal{H}_{\mathcal{X}}}.$$

The kernel $k$ is known as *characteristic* if $\operatorname{MMD}[X, X', \mathcal{H}_{\mathcal{X}}] = 0$ if and only if $P_X \stackrel{d}{=} P_{X'}$. Many popular kernels, such as the Gaussian and Matérn, are characteristic. The MMD between two distributions can be estimated from finite samples by computing the MMD between the empirical distributions. This was shown in Gretton et al. (2012) to converge to the true MMD at the parametric convergence rate, $\mathcal{O}_P(n^{-\frac{1}{2}})$. For this reason the MMD is often used for efficiently testing equality of distributions.

**Conditional Mean Embedding** RKHSs also allow us to represent conditional distributions through the Conditional Mean Embedding (CME) (Song et al., 2009; Muandet et al., 2017), given by:

$$\mu_{Y|X=x}=\mathbb{E}[\ell(Y,\cdot)\mid X=x].$$

In order to estimate this, Grünewälder et al. (2012) propose to take a regression point of view, seeing the CME as the solution to the following regression problem:

$$\begin{cases}C^{*}=\arg\min_{C\in B_{2}(\mathcal{H}_{\mathcal{X}},\mathcal{H}_{\mathcal{Y}})}\mathbb{E}_{P_{(Y,X)}}\left\|\ell(Y,\cdot)-C k(X,\cdot)\right\|_{\mathcal{H}_{\mathcal{Y}}}^{2},\\ \mu_{Y|X=x}=C^{*}k(x,\cdot),\end{cases}$$

where $B_2(\mathcal{H}_{\mathcal{X}}, \mathcal{H}_{\mathcal{Y}})$ is the space of Hilbert-Schmidt operators $\mathcal{H}_{\mathcal{X}} \to \mathcal{H}_{\mathcal{Y}}$.
This interpretation leads to an estimate of the CME from a finite dataset, $\mathcal{D}=\{x_{i},y_{i}\}_{i=1}^{n}$, as:

$$\begin{cases}\hat{C}^{*}=\operatorname*{arg\,min}_{C\in B_{2}(\mathcal{H}_{\mathcal{X}},\mathcal{H}_{\mathcal{Y}})}\frac{1}{n}\sum_{i=1}^{n}\left\|\ell(y_{i},\cdot)-Ck(x_{i},\cdot)\right\|_{\mathcal{H}_{\mathcal{Y}}}^{2}+\lambda\|C\|_{B_{2}}^{2}=\boldsymbol{\ell}^{\top}\mathbf{W}\mathbf{k},\\ \hat{\mu}_{Y|X=x}=\boldsymbol{\ell}^{\top}\mathbf{W}\mathbf{k}(x),\end{cases}\tag{1}$$

where $\boldsymbol{\ell}=(\ell(y_{i},\cdot))_{i=1}^{n}$, $\mathbf{k}=(k(x_{i},\cdot))_{i=1}^{n}$, and $\mathbf{W}=(\mathbf{K}+\lambda I_{n})^{-1}$ for $\mathbf{K}=(k(x_{i},x_{j}))_{i,j=1}^{n}$. We will often make use of the CME conditional on a particular value of $T$, i.e., $\mu_{Y|X=x,T=t}$. In that case we let $\boldsymbol{\ell}_t$, $\mathbf{W}_t$ and $\mathbf{k}_t$ denote the respective quantities used in the estimation of $\hat{\mu}_{Y|X=x,T=t}$.

## 2.3 Counterfactual Mean Embeddings

Our work builds upon the counterfactual mean embeddings framework of Muandet et al. (2021). They demonstrate that, under the causal structure in Section 2.1 and with one-sided overlap $e(x, t) > 0$, the mean embedding of the potential outcome $Y(t)$ can be written as:

$$\mu_{Y(t)}=\mathbb{E}_{P_{(Y,X,T)}}\left[\frac{\ell(Y,\cdot)\,\mathbb{1}\{T=t\}}{e(X,t)}\right],$$

which may be seen as the kernel equivalent of the inverse probability weighting estimator for the causal mean. Given a finite sample $\{(t_i, x_i, y_i)\}_{i=1}^{n}$ and access to the true propensity score, this quantity can be estimated as:

$$\hat{\mu}_{Y(t)}=\frac{1}{n}\sum_{i=1}^{n}\frac{\ell(y_{i},\cdot)\,\mathbb{1}\{t_{i}=t\}}{e(x_{i},t)}.$$

Muandet et al. (2021) prove that this converges to the true embedding at rate $\mathcal{O}_P(n^{-\frac{1}{2}})$. If we have two-sided overlap, both $\hat{\mu}_{Y(1)}$ and $\hat{\mu}_{Y(0)}$ are therefore identified from data, and we can use $\operatorname{MMD}[Y(1), Y(0), \mathcal{H}_{\mathcal{Y}}]$ to measure any distributional effects of treatment.
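Both estimators above are finite weighted sums of the outcome features $\ell(y_i,\cdot)$, so each embedding can be represented by a weight vector over the sample. A minimal sketch of this bookkeeping (`rbf`, `cme_weights`, and `ipw_weights` are our own illustrative names):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cme_weights(X, x, lam=1e-2, sigma=1.0):
    """Eq. (1): mu_hat_{Y|X=x} = sum_i alpha_i * l(y_i, .), with alpha = W k(x)."""
    W = np.linalg.inv(rbf(X, X, sigma) + lam * np.eye(len(X)))
    return W @ rbf(X, x[None, :], sigma)[:, 0]

def ipw_weights(T, e_hat_t, t):
    """IPW embedding: mu_hat_{Y(t)} = sum_i beta_i * l(y_i, .),
    with beta_i = 1{t_i = t} / (n * e_hat(x_i, t))."""
    return (T == t).astype(float) / (len(T) * e_hat_t)
```

The squared RKHS distance between two such embeddings with weight vectors $a, b$ over the same sample is then $(a-b)^\top \mathbf{L}(a-b)$, where $\mathbf{L} = (\ell(y_i, y_j))_{i,j}$ is the outcome Gram matrix.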
Furthermore, if the kernel is characteristic, we have that $\operatorname{MMD}[Y(1), Y(0), \mathcal{H}_{\mathcal{Y}}] = 0$ if and only if $Y(1) \stackrel{d}{=} Y(0)$. Finally, this quantity can be estimated from finite samples at rate $\mathcal{O}_P(n^{-\frac{1}{2}})$ by:

$$\widehat{\operatorname{MMD}}[Y(1),Y(0),\mathcal{H}_{\mathcal{Y}}]=\left\|\hat{\mu}_{Y(1)}-\hat{\mu}_{Y(0)}\right\|_{\mathcal{H}_{\mathcal{Y}}}.$$

This leads to a permutation test for distributional causal effects, following the conditional permutation scheme for testing for causal effects (Rosenbaum, 1984) and using the estimated MMD between the potential outcomes as a test statistic. Throughout, we will refer to $\operatorname{MMD}[Y(1), Y(0), \mathcal{H}_{\mathcal{Y}}]$ as the distributional average treatment effect (DATE), due to the similarity between this quantity and the average treatment effect in Section 2.1. Analogously to the average treatment effect, the distributional average treatment effect is only identified from data under two-sided overlap. Therefore we now introduce a second target based on the effect of treatment on the treated, the *distributional effect of treatment on the treated* (DETT):

$$\operatorname{MMD}\Bigl[Y(1)_{\{T=1\}},Y(0)_{\{T=1\}},\mathcal{H}_{\mathcal{Y}}\Bigr]:=\left\|\mu_{Y(1)|T=1}-\mu_{Y(0)|T=1}\right\|_{\mathcal{H}_{\mathcal{Y}}}.$$

Estimating this quantity requires estimating the embeddings $\mu_{Y(0)|T=1}$ and $\mu_{Y(1)|T=1}$. Since $P(Y(1) \mid T=1) = P(Y \mid T=1)$, we can estimate the latter embedding using observed samples of $Y$ for individuals with $T=1$. This means the only challenge is the estimation of $\mu_{Y(0)|T=1}$, which Muandet et al. (2021) estimate using the fact that:

$$\mu_{Y(0)|T=1}=\mathbb{E}_{P(X|T=1)}\left[\mu_{Y|X,T=0}\right].$$

This means a finite sample estimate of the conditional mean embedding from Section 2.2 can be used to compute a finite sample estimate of this mean embedding.
Alternatively, Bellot & van der Schaar (2021) also target the distributional effect of treatment on the treated, using the fact that:

$$\mu_{Y(0)|T=1}=\mathbb{E}\left[\ell(Y,\cdot)\,\mathbb{1}\left\{T=0\right\}w(X,0)\right].$$

Both of these estimators lead to test statistics based on the distributional effect of treatment on the treated.

## 3 Doubly Robust Counterfactual Mean Embeddings

In this section we build on the previous methodology by considering estimators for the causal mean embeddings that are built on doubly robust (DR) estimators of causal effects (Robins & Rotnitzky, 1995). We propose new estimators for the distributional average treatment effect and the distributional effect of treatment on the treated. We prove that both of these estimators have the doubly robust property, so that they converge to the true causal mean embedding if either of the two models underlying them converges to the true model. We call this set of approaches *Doubly Robust Counterfactual Mean Embeddings*.

## 3.1 Doubly Robust Estimator Of The Distributional Average Treatment Effect (DATE)

Firstly, based on the doubly robust estimator of the average treatment effect, we note that the embedding of the potential outcome $Y(t)$ can also be written as:

$$\begin{aligned}\mu_{Y(t)}&=\mathbb{E}\left[\ell(Y(t),\cdot)-\mu_{Y|X,T=t}\right]+\mathbb{E}\left[\mu_{Y|X,T=t}\right]\\&=\mathbb{E}\left[\frac{\mathbb{1}\{T=t\}\left(\ell(Y,\cdot)-\mu_{Y|X,T=t}\right)}{e(X,t)}\right]+\mathbb{E}\left[\mu_{Y|X,T=t}\right]\\&=\mathbb{E}\left[\frac{\mathbb{1}\{T=t\}\left(\ell(Y,\cdot)-\mu_{Y|X,T=t}\right)}{e(X,t)}+\mu_{Y|X,T=t}\right].\end{aligned}$$
This is a kernelised version of the doubly robust estimator of the treatment mean (Robins & Rotnitzky, 1995); it can be estimated from finite samples by fitting models $\hat{e}(x, t)$, $\hat{\mu}_{Y|X=x,T=t}$ on a training set and then averaging them over the test set as:

$$\hat{\mu}_{Y(t)}^{\mathrm{DR}}=\frac{1}{n}\sum_{i=1}^{n}\left\{\frac{\mathbb{1}\{t_{i}=t\}\left(\ell(y_{i},\cdot)-\hat{\mu}_{Y|X=x_{i},T=t}\right)}{\hat{e}(x_{i},t)}+\hat{\mu}_{Y|X=x_{i},T=t}\right\}.\tag{2}$$

Take a constant train/test split and let $\hat{e}_n(x, t)$ and $\hat{\mu}^{(n)}_{Y|X,T=t}$ be the models that arise from a total sample of size $n$. Under the assumption that the propensity and estimated propensity are uniformly bounded, the following theorem shows that $\hat{\mu}^{\mathrm{DR}}_{Y(t)}$ has the *doubly robust* property that it converges to the true embedding as long as either the propensity model or the conditional mean embedding converges. Furthermore, it is also doubly rate robust, in the sense that it achieves a convergence rate that is the product of the convergence rates of these working models.

Theorem 1. *Assume that* $e(x, t)$ *is uniformly bounded away from zero, together with the additional overlap assumptions in Appendix A.1. If the estimators* $\hat{e}_n(X, t)$ *and* $\hat{\mu}^{(n)}_{Y|X,T=t}$ *satisfy*

$$\|\hat{e}_{n}(X,t)-e(X,t)\|_{2}=\mathcal{O}_{P}(\gamma_{e,n}),\qquad\left\|\left\|\hat{\mu}_{Y|X,T=t}^{(n)}-\mu_{Y|X,T=t}\right\|_{\mathcal{H}_{\mathcal{Y}}}\right\|_{2}=\mathcal{O}_{P}(\gamma_{r,n}),$$

*with* $\gamma_{e,n}=O(1)$ *and* $\gamma_{r,n}=O(1)$*, then:*

$$\left\|\mu_{Y(t)}-\hat{\mu}_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{\mathcal{Y}}}=\mathcal{O}_{P}\left(\operatorname*{max}\{n^{-\frac{1}{2}},\gamma_{r,n}\gamma_{e,n}\}\right).$$

Therefore, if $\gamma_{r,n}\gamma_{e,n} = O(n^{-\frac{1}{2}})$ we obtain the parametric convergence rate.
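Since $\hat{\mu}_{Y|X=x_i,T=t}$ is itself a weighted sum of outcome features, Eq. (2) again reduces to a single weight vector over the sample. A minimal sketch of this bookkeeping (our own illustrative names; `B[i]` is assumed to hold the CME weights of $\hat{\mu}_{Y|X=x_i,T=t}$ over the same $n$ outcomes):

```python
import numpy as np

def dr_embedding_weights(T, e_hat_t, B, t):
    """Eq. (2) as a weight vector: mu_hat^DR_{Y(t)} = sum_j alpha_j * l(y_j, .).
    B[i, j] is the weight of l(y_j, .) in mu_hat_{Y|X=x_i, T=t}."""
    n = len(T)
    c = (T == t).astype(float) / e_hat_t  # IPW part hits the basis vector delta_i
    return (c + (1.0 - c) @ B) / n        # (1 - c_i) multiplies the CME row B[i]

def mmd_from_weights(a, b, L):
    """||sum_j a_j l(y_j,.) - sum_j b_j l(y_j,.)|| via the outcome Gram matrix L."""
    d = a - b
    return float(np.sqrt(max(d @ L @ d, 0.0)))
```

As a sanity check on the algebra: with a perfectly interpolating CME, `B = np.eye(n)`, the weights collapse to $1/n$ each, i.e. the plain empirical embedding of all outcomes.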
These statistics can be used to form a doubly robust estimator of the distributional average treatment effect as:

$$\widehat{\operatorname{MMD}}_{\mathrm{DR}}[Y(1),Y(0),\mathcal{H}_{\mathcal{Y}}]=\left\|\hat{\mu}_{Y(1)}^{\mathrm{DR}}-\hat{\mu}_{Y(0)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{\mathcal{Y}}}.$$

We derive the closed form of the squared statistic in Appendix B.1. This will converge to the correct MMD if both $\hat{\mu}^{\mathrm{DR}}_{Y(1)}$ and $\hat{\mu}^{\mathrm{DR}}_{Y(0)}$ converge to the true embeddings, at a rate that is the maximum of their two rates.

## 3.2 Doubly Robust Estimator Of The Distributional Effect Of Treatment On The Treated (DETT)

We now turn to estimating the distributional effect of treatment on the treated²; again we form a kernel version of the standard doubly robust estimator of the effect of treatment on the treated (Moodie et al., 2018). This follows from the fact that the mean embedding of the counterfactual distribution $P(Y(t) \mid T=t')$ can be written as:

$$\begin{aligned}\mu_{Y(t)\mid T=t^{\prime}}&=\mathbb{E}\left[\ell(Y(t),\cdot)\mid T=t^{\prime}\right]\\&=\mathbb{E}\left[\ell(Y(t),\cdot)-\mu_{Y\mid X,T=t}\mid T=t^{\prime}\right]+\mathbb{E}\left[\mu_{Y\mid X,T=t}\mid T=t^{\prime}\right]\\&=\mathbb{E}\left[\frac{e(X,t^{\prime})P(T=t)}{e(X,t)P(T=t^{\prime})}\left(\ell(Y,\cdot)-\mu_{Y\mid X,T=t}\right)\Bigm|T=t\right]+\mathbb{E}\left[\mu_{Y\mid X,T=t}\mid T=t^{\prime}\right]\\&=\mathbb{E}\left[\frac{P(T=t)}{P(T=t^{\prime})}w(X,t)\left(\ell(Y,\cdot)-\mu_{Y\mid X,T=t}\right)\Bigm|T=t\right]+\mathbb{E}\left[\mu_{Y\mid X,T=t}\mid T=t^{\prime}\right],\end{aligned}$$

where $w(x,t)=\frac{1-e(x,t)}{e(x,t)}$. By fitting models $\hat{w}(x, t)$ and $\hat{\mu}_{Y|X=x,T=t}$ on training samples, we get a finite sample estimate of the embedding as:

$$\hat{\mu}_{Y(t)|T=t^{\prime}}^{\mathrm{DR}}=\frac{1}{n_{t^{\prime}}}\sum_{i=1}^{n}\left(\mathbb{1}\left\{t_{i}=t\right\}\hat{w}(x_{i},t)\left(\ell(y_{i},\cdot)-\hat{\mu}_{Y|X=x_{i},T=t}\right)+\mathbb{1}\left\{t_{i}=t^{\prime}\right\}\hat{\mu}_{Y|X=x_{i},T=t}\right),$$

where $n_{t}=\sum_{i}\mathbb{1}\{t_{i}=t\}$.
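The same weight-vector bookkeeping applies to this estimator. A sketch under the same conventions as before (illustrative names; `B[i]` is assumed to hold the CME weights of $\hat{\mu}_{Y|X=x_i,T=t}$ over the $n$ outcomes):

```python
import numpy as np

def dr_ett_embedding_weights(T, w_hat_t, B, t, t_prime):
    """mu_hat^DR_{Y(t)|T=t'} = sum_j alpha_j * l(y_j, .):
    rows with t_i = t contribute w_hat * (delta_i - B[i]);
    rows with t_i = t' contribute B[i]; all scaled by 1/n_{t'}."""
    n_tp = float(np.sum(T == t_prime))
    c = (T == t).astype(float) * w_hat_t
    m = (T == t_prime).astype(float)
    return (c + (m - c) @ B) / n_tp
```

As a purely algebraic check of the bookkeeping: with `B = np.eye(n)` the weights collapse to $\mathbb{1}\{t_i = t'\}/n_{t'}$, the empirical embedding over the $T = t'$ group.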
Again letting $\hat{e}_n(x, t)$ and³ $\hat{\mu}^{(n)}_{Y|X,T=t}$ be the respective models trained on a set of size $n$, we have that:

Theorem 2. *Under the overlap assumptions given in the appendix, if the estimators* $\hat{e}_n(X, t)$ *and* $\hat{\mu}^{(n)}_{Y|X,T=t}$ *satisfy*

$$\|\hat{e}_{n}(X,t)-e(X,t)\|_{2}=\mathcal{O}_{P}(\gamma_{e,n}),\qquad\left\|\left\|\hat{\mu}_{Y|X,T=t}^{(n)}-\mu_{Y|X,T=t}\right\|_{\mathcal{H}_{\mathcal{Y}}}\right\|_{2}=\mathcal{O}_{P}(\gamma_{r,n}),$$

*where* $\gamma_{e,n}=O(1)$ *and* $\gamma_{r,n}=O(1)$*, then:*

$$\left\|\mu_{Y(t)|T=t^{\prime}}-\hat{\mu}_{Y(t)|T=t^{\prime}}^{\mathrm{DR}}\right\|_{\mathcal{H}_{\mathcal{Y}}}=\mathcal{O}_{P}\left(\operatorname*{max}\{n^{-\frac{1}{2}},\gamma_{r,n}\gamma_{e,n}\}\right).$$

Applying this, we can now estimate the relevant MMD between treated and untreated for this population as:

$$\widehat{\operatorname{MMD}}_{\mathrm{DR}}\!\left[Y(1)_{\{T=1\}},Y(0)_{\{T=1\}},\mathcal{H}_{\mathcal{Y}}\right]=\left\|\hat{\mu}_{Y(t)|T=t^{\prime}}^{\mathrm{DR}}-\frac{1}{n_{t^{\prime}}}\sum_{i=1}^{n}\mathbb{1}\left\{t_{i}=t^{\prime}\right\}\ell(y_{i},\cdot)\right\|_{\mathcal{H}_{\mathcal{Y}}},$$

which has the same convergence rate as $\hat{\mu}^{\mathrm{DR}}_{Y(t)|T=t'}$. Consequently, we only need strict one-sided overlap for convergence, as $\hat{\mu}^{\mathrm{DR}}_{Y(t)|T=t'}$ only requires $e(x, t) > \epsilon$. Again we derive the squared statistic and give its closed form in Appendix B.2.

²We leave $t$ arbitrary, so if $t = 1$ this would form an effect of treatment on the control. For simplicity we do not distinguish the two cases.

³Any model for $w(x, t)$ leads to a model for $e(x, t)$ directly.

## 4 Permutation Testing For Distributional Treatment Effects

We now apply both test statistics to the problem of testing for distributional treatment effects, taking a permutation-based approach.
As is standard in this context, we test against Fisher's sharp null⁴ (Fisher, 1936; Rosenbaum, 2002):

$$H_{0}:Y_{i}(1)=Y_{i}(0).$$

In order to apply a permutation algorithm, we must choose permutations such that the treatment vector is exchangeable under them. That is, if we let $n$ be the size of our dataset $\mathcal{D}=(\mathbf{X},\mathbf{Y},\mathbf{T})$ and $\operatorname{Sym}_n$ be the permutation group of size $n$, we want to restrict to permutations $\sigma \in \operatorname{Sym}_n$ such that:

$$(\mathbf{X},\mathbf{Y},\mathbf{T})\stackrel{d}{=}(\mathbf{X},\mathbf{Y},\mathbf{T}_{\sigma}),$$

where $\mathbf{T}_{\sigma}=\left(T_{\sigma(i)}\right)_{i=1}^{n}$ is the permuted treatment vector. A standard way to ensure that the permutations have this property is through matching (Stuart, 2010), where we split the data into matched sets such that each set contains one treated individual. These matched sets are formed using some measure of 'similarity' of individuals, such as the Mahalanobis distance on covariates, or through the propensity score. Rosenbaum (2002) argues that if this matching is exact, so that there are no differences in propensity between matched individuals, then the treatment is exchangeable under permutations that preserve the matched sets. That is, if we let $M \subset \operatorname{Sym}_n$ be the subset of permutations that only permute within matched sets and sample some $\sigma \in M$, then $\mathbf{T}$ is exchangeable under $\sigma$. Therefore, if we fix a statistic $S$ (in our case either the DATE or DETT statistic) and randomly sample $\sigma_1, \ldots, \sigma_m$ from $M$, we have that

$$S\left(\mathbf{X},\mathbf{Y},\mathbf{T}\right),\,S\left(\mathbf{X},\mathbf{Y},\mathbf{T}_{\sigma_{1}}\right),\,\ldots,\,S\left(\mathbf{X},\mathbf{Y},\mathbf{T}_{\sigma_{m}}\right)$$

are exchangeable. This means that we can define the p-value of the test as:

$$p=\frac{1+\sum_{i=1}^{m}\mathbb{1}\left\{S\left(\mathbf{X},\mathbf{Y},\mathbf{T}_{\sigma_{i}}\right)\geq S\left(\mathbf{X},\mathbf{Y},\mathbf{T}\right)\right\}}{m+1},$$

which is a valid p-value in the sense that $P(p \leq \alpha) \leq \alpha$ under the null.
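The matched-set permutation test above can be sketched as follows (our own illustrative names; `stat` stands in for the DATE or DETT statistic, and the matched sets are assumed to partition the sample indices):

```python
import numpy as np

def within_set_permutation(match_sets, rng):
    """Sample sigma in M: permute indices only within each matched set.
    Assumes the sets partition {0, ..., n-1}."""
    perm = np.arange(sum(len(s) for s in match_sets))
    for s in match_sets:
        idx = np.asarray(s)
        perm[idx] = rng.permutation(idx)
    return perm

def permutation_p_value(stat, X, Y, T, match_sets, m=200, seed=0):
    """p = (1 + #{S(X, Y, T_sigma) >= S(X, Y, T)}) / (m + 1)."""
    rng = np.random.default_rng(seed)
    s_obs = stat(X, Y, T)
    exceed = sum(
        stat(X, Y, T[within_set_permutation(match_sets, rng)]) >= s_obs
        for _ in range(m)
    )
    return (1 + exceed) / (m + 1)
```

The `1 +` in numerator and denominator is what makes the p-value valid rather than merely approximate, since the observed statistic is counted among its own permuted copies.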
However, permuting in this way creates computational challenges when $S$ is one of the proposed statistics: calculating these statistics requires fitting a propensity model and a conditional mean embedding, and this would need to be repeated for every permutation. To resolve this, we first form the matched sets and then randomly split the data into train/test sets, $\mathcal{D}_{\mathrm{Tr}}, \mathcal{D}_{\mathrm{Te}}$, such that each matched set is fully contained in one of the two. Now let $M_{\mathcal{D}_{\mathrm{Tr}}}$ be the set of permutations on $\mathcal{D}$ which leave $\mathcal{D}_{\mathrm{Te}}$ fixed and preserve the matching on $\mathcal{D}_{\mathrm{Tr}}$, and vice versa for $M_{\mathcal{D}_{\mathrm{Te}}}$. We then sample $N$ random permutations $\{\sigma_1, \ldots, \sigma_N\}$ from $M_{\mathcal{D}_{\mathrm{Tr}}}$ and form the set $\tilde{M}_N$ of permutations as:

$$\tilde{M}_{N}=\left\{\sigma\circ\pi:\sigma\in\{\mathrm{Id}\}\cup\{\sigma_{1},\ldots,\sigma_{N}\},\,\pi\in M_{\mathcal{D}_{\mathrm{Te}}}\right\}.$$

The following, obtained by applying results from Ramdas et al. (2023), demonstrates that we can form a valid permutation test by randomly sampling permutations from $\tilde{M}_N$:

Proposition 1. *Under exact matching, if we sample* $\tau_0, \ldots, \tau_m$ *from* $\tilde{M}_N$*, the following is a valid p-value:*

$$p=\frac{1+\sum_{i=1}^{m}\mathbb{1}\left\{S\left(\mathbf{X},\mathbf{Y},\mathbf{T}_{\tau_{i}}\right)\geq S\left(\mathbf{X},\mathbf{Y},\mathbf{T}\right)\right\}}{m+1}$$

*for any statistic* $S$ *and number of sampled permutations* $N$.

⁴We note that while the test must be formulated against Fisher's sharp null, it will generally have power against alternatives which show distributional causal effects, i.e., $Y(1) \stackrel{d}{\neq} Y(0)$.

This alleviates the computational challenges associated with using the DATE or DETT statistics, as there is now a controllable parameter $N$ which determines how many models must be trained. Increasing $N$ means training more models, and so decreases the variance in $p$ at a greater computational cost. To our knowledge, this is the first time this permutation strategy has appeared in the literature.
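The construction of $\tilde{M}_N$ can be sketched as follows (our own illustrative names; each training-side $\sigma$ would trigger a model refit, whereas the test-side $\pi$ is free):

```python
import numpy as np

def sample_from_M_tilde(train_sigmas, test_sets, rng):
    """Draw sigma ∘ pi with sigma in {Id} ∪ {sigma_1, ..., sigma_N} (train side)
    and pi a within-matched-set permutation of the test side only."""
    n = len(train_sigmas[0])
    choices = [np.arange(n)] + list(train_sigmas)  # prepend the identity
    sigma = choices[rng.integers(len(choices))]
    pi = np.arange(n)
    for s in test_sets:
        idx = np.asarray(s)
        pi[idx] = rng.permutation(idx)
    return sigma[pi]  # (sigma ∘ pi)(i) = sigma(pi(i))
```

Only $N$ model fits are ever needed, one per training-side $\sigma$, however many draws from $\tilde{M}_N$ the test uses.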
Moreover, this procedure could be applied to efficiently use other trainable test statistics within permutation testing, such as standard doubly robust estimators of the causal mean, or more general permutation tests for assessing the performance of predictive models as in Ojala & Garriga (2010). The result in Proposition 1 relies on exact matching, which is a common assumption in proving the validity of conditional permutation tests (Rosenbaum, 2002). However, recent results in the area have shown that p-values arising from matched permutations are approximately valid under inexact matching (Berrett et al., 2020; Pimentel, 2022). We expect, and empirically observe, that similar results hold in our case.

## 5 Experiments

## 5.1 Fit Tests For Doubly Robust Counterfactual Mean Embeddings

To demonstrate the convergence properties implied by our theoretical results in Section 3, we fit our statistics on simulated data from the following data generating process:

$$X\sim\mathcal{N}(0,I_{9}),\qquad p=\left(\frac{1}{1+\exp\left(\mathbf{a}^{\top}X\right)}\right)^{2}-\mathbb{E}_{X}\left[\left(\frac{1}{1+\exp\left(\mathbf{a}^{\top}X\right)}\right)^{2}\right],\qquad T\sim\mathrm{Ber}(p),\qquad Y=\mathbf{b}^{\top}X+\beta T+\epsilon,$$

where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ and the values of $\mathbf{a}, \mathbf{b} \in \mathbb{R}^9$ and $\beta, \sigma \in \mathbb{R}$ are given in Appendix C. This is similar to the setting in Muandet et al. (2021). We also simulate from:

$$T\sim\mathrm{Ber},\qquad X\sim\mathcal{N}(0,(1+\alpha T)I_{10}),\qquad Y=f_{T}(X)+\epsilon^{\prime},\qquad\epsilon^{\prime}\sim\mathcal{N}(0,(\sigma^{\prime})^{2}),$$

where $f_0, f_1 : \mathbb{R}^{10} \to \mathbb{R}$ and $\alpha, \sigma'$ are given in the appendix. This setting is the same as in Bellot & van der Schaar (2021). For both settings we fit a linear logistic regression for the propensity score, so that the model is incorrectly specified.
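A sketch of simulating from the second data generating process. The Bernoulli parameter, $f_0$, $f_1$, $\alpha$, and $\sigma'$ are deferred to the appendix, so every concrete choice below (including $T \sim \mathrm{Ber}(\tfrac12)$) is a placeholder assumption of ours, chosen only to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, sigma_prime = 5000, 2.0, 0.5        # placeholder values (assumptions)

T = rng.binomial(1, 0.5, size=n)              # placeholder: Ber(1/2)
# X | T ~ N(0, (1 + alpha * T) I_10): treated covariates have larger variance
X = np.sqrt(1.0 + alpha * T)[:, None] * rng.normal(size=(n, 10))
f0 = lambda x: x.sum(axis=1)                  # placeholder f_0
f1 = lambda x: np.sin(x).sum(axis=1)          # placeholder f_1
Y = np.where(T == 1, f1(X), f0(X)) + rng.normal(scale=sigma_prime, size=n)
```

With $\alpha > 0$ the two covariate distributions have different spread, so the implied propensity $P(T = 1 \mid X = x)$ drifts towards 1 in the tails of $X$, producing the increasingly one-sided overlap discussed next.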
In the first data generating process we have two-sided overlap in the true propensity, and so plots (a) and (b) in Figure 2 demonstrate that the doubly robust embeddings converge for both interventional values. Due to the incorrectly specified propensity model, the counterfactual mean embeddings of Muandet et al. (2021) do not converge. For the second data generating process, increasing the value of $\alpha$ creates a setting where the overlap is more one-sided. As such, plots (c) and (d) in Figure 2 demonstrate that only $\hat{\mu}^{\mathrm{DR}}_{Y(1)}$ and $\hat{\mu}^{\mathrm{DR}}_{Y(1)|T=0}$ converge quickly to the true embeddings. Plots of the true propensity for both simulations can be found in Appendix C.1.

## 5.2 Testing For Distributional Effects on Simulated Data

We now apply these statistics to the testing of distributional causal effects, where the data is simulated from:

$$X\sim\mathcal{N}(0,I_{9}),\qquad p=\frac{1}{1+\exp\left(\mathbf{a}^{\top}X\right)},\qquad T\sim\mathrm{Ber}(p),\qquad Y=\mathbf{b}^{\top}X+\beta(2Z-1)T+\epsilon,\qquad\epsilon\sim\mathcal{N}(0,\sigma^{2}),$$

where $\mathbf{a}, \mathbf{b} \in \mathbb{R}^9$ and $\beta, \sigma \in \mathbb{R}$ are as in Section 5.1, and the variable $Z$ is either the constant 1, $\mathrm{Ber}(\frac{1}{2})$-distributed, or $\mathrm{Unif}([0,1])$-distributed. The latter two settings capture cases where there is a distributional causal effect which does not shift the causal means. In Figure 3 we plot the rejection rate of the tests based on four statistics: the DATE and DETT statistics estimated without conditional mean embeddings, and the DATE and DETT statistics estimated with conditional mean embeddings, denoted DR-DATE and DR-DETT respectively. The matching for all statistics is done via logistic regression, and we apply the permutation scheme from Section 4.
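The data generating process above can be sketched directly. The coefficient vectors below are placeholders of our own for the appendix values, but the structure of the shift is as described: with $Z \sim \mathrm{Ber}(\tfrac12)$ the treatment leaves the causal mean untouched while changing the spread.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma = 2000, 1.0, 1.0
a = rng.normal(size=9)                    # placeholder for the appendix value
b = rng.normal(size=9)                    # placeholder for the appendix value

X = rng.normal(size=(n, 9))
p = 1.0 / (1.0 + np.exp(X @ a))
T = rng.binomial(1, p)
Z = rng.binomial(1, 0.5, size=n)          # Z ~ Ber(1/2): distribution-only effect
eps = rng.normal(scale=sigma, size=n)
Y = X @ b + beta * (2 * Z - 1) * T + eps

# Potential outcomes, available here only because we are simulating:
Y1 = X @ b + beta * (2 * Z - 1) + eps
Y0 = X @ b + eps
```

Here $\mathbb{E}[Y(1) - Y(0)] = \beta\,\mathbb{E}[2Z - 1] = 0$, yet $Y(1) - Y(0) = \pm\beta$ pointwise, so only a distributional test can detect the effect.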
We compare our methods against Double Machine Learning (Chernozhukov et al., 2018) and Targeted Maximum Likelihood Estimation (Van Der Laan & Rubin, 2006), baselines which effectively pick up a shift in causal means. We run these experiments with 2000 data points, rejecting at the 0.05 significance level. Power plots showing the rejection rates over 100 re-runs of this experiment are displayed in Figure 3.

![8_image_0.png](8_image_0.png)

Figure 2: Assessing the fit of Doubly Robust Counterfactual Mean Embeddings on simulated data. These plots demonstrate the convergence of the doubly robust estimators when the propensity model is misspecified. The left plots show the embeddings required for DATE, whilst the right plots show those needed for DETT. We vary between the two data generating processes given in Section 5.1, aiming to show two-sided and one-sided overlap of the propensity score.

In this experiment we observe similar performance for all distributional test statistics. We believe this is because both the true and estimated propensities are linear functions of the covariates, and so all statistics correctly fit the causal mean embeddings. Plot (a) demonstrates that when the shift is in the mean, our methods, as expected, perform worse than the baselines; this is the cost of targeting distributional effects over simple mean shifts. However, plots (b) and (c) demonstrate that when the causal effect is only distributional, our statistics can correctly reject the null, unlike these traditional methods.

## 5.3 Semi-Synthetic And Real World Data

Finally, to evaluate the performance of our tests within more realistic settings, we evaluate on a selection of semi-synthetic data. We use two standard semi-synthetic tasks: the infant health and development program (IDHP) data introduced in Hill (2011), and the linked births and deaths data (LBIDD) (Shimoni et al., 2018).
As both of these datasets are semi-synthetic, we have access to the counterfactuals and so can simulate realistic data under the null hypothesis. In order to prevent problems with extreme propensity scores, we remove any data point for which $e(x, t) < 0.03$ for either value of $t$. We again use logistic regression for the matching and the weights model.

![9_image_0.png](9_image_0.png)

Figure 3: Simulated tests for distributional causal effects, where the β parameter controls the size of the effect. In plot (a), β introduces a shift in the mean only, whereas in (b) and (c) the shift only affects higher moments of the distribution. All simulations use n = 2000 samples.

| Dataset          | Hypothesis | DATE       | DR-DATE    | DETT       | DR-DETT    | DML        |
|------------------|------------|------------|------------|------------|------------|------------|
| IDHP (n = 747)   | H0         | 0.02 ±0.03 | 0.05 ±0.04 | 0.01 ±0.02 | 0.03 ±0.03 | 0.04 ±0.04 |
| IDHP (n = 747)   | H1         | 1.00 ±0    | 1.00 ±0    | 1.00 ±0    | 0.99 ±0.03 | 1.00 ±0    |
| LBIDD (n = 1000) | H0         | 0.02 ±0.03 | 0.03 ±0.03 | 0.00 ±0    | 0.00 ±0    | 0.17 ±0.10 |
| LBIDD (n = 1000) | H1         | 0.85 ±0.07 | 0.82 ±0.07 | 0.13 ±0.06 | 0.08 ±0.06 | 1.00 ±0    |
| LBIDD (n = 2500) | H0         | 0.03 ±0.03 | 0.03 ±0.03 | 0.00 ±0    | 0.00 ±0    | 0.00 ±0    |
| LBIDD (n = 2500) | H1         | 0.82 ±0.08 | 0.75 ±0.09 | 0.19 ±0.08 | 0.13 ±0.06 | 1.00 ±0    |

Table 1: Rejection rates at the 0.05 level on semi-synthetic datasets.

The results in Table 1 show that all kernel-based test statistics produce valid p-values under the null hypothesis. Further, the DATE-based test statistics demonstrate strong power against the alternative across all three settings. The DETT test statistic, on the other hand, only shows a good level of power on the IDHP dataset, with more samples being required to reject on the LBIDD data.

## 6 Conclusion

We have proposed new doubly robust estimators for the kernel mean embedding of causal distributions.
Our theoretical and experimental results show that these converge in a wider array of circumstances than the original estimators of these quantities. Further, we applied these embeddings to the problem of testing for distributional causal effects. We took a permutation-based approach to testing and proposed a permutation scheme that allows our doubly robust statistics to be used within permutation testing. We prove the validity of this scheme under exact matching and experimentally validate the approach under inexact matching. The results show that our test statistics are able to pick up causal effects that only manifest as distributional shifts, where traditional mean-shift methods fail.

## References

Nachman Aronszajn. Theory of reproducing kernels. *Transactions of the American Mathematical Society*, 68(3):337–404, 1950.

Heejung Bang and James M. Robins. Doubly robust estimation in missing data and causal inference models. *Biometrics*, 61(4):962–973, 2005.

Alexis Bellot and Mihaela van der Schaar. A kernel two-sample test with selection bias. In *Uncertainty in Artificial Intelligence*, pp. 205–214. PMLR, 2021.

Thomas B Berrett, Yi Wang, Rina Foygel Barber, and Richard J Samworth. The conditional permutation test for independence while controlling for confounders. *Journal of the Royal Statistical Society Series B: Statistical Methodology*, 82(1):175–197, 2020.

Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters. *The Econometrics Journal*, 21(1):C1–C68, 2018. doi: 10.1111/ectj.12097. URL https://doi.org/10.1111/ectj.12097.

Yoichi Chikahara, Makoto Yamada, and Hisashi Kashima. Feature selection for discovering distributional treatment effect modifiers. *arXiv preprint arXiv:2206.00516*, 2022.

Ronald Aylmer Fisher. Design of experiments. *British Medical Journal*, 1(3923):554, 1936.
Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In *International Conference on Algorithmic Learning Theory*, pp. 63–77. Springer, 2005.

Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.

Steffen Grünewälder, Guy Lever, Luca Baldassarre, Sam Patterson, Arthur Gretton, and Massimiliano Pontil. Conditional mean embeddings as regressors. In *Proceedings of the 29th International Conference on Machine Learning*, pp. 1803–1810, 2012.

Jennifer L Hill. Bayesian nonparametric modeling for causal inference. *Journal of Computational and Graphical Statistics*, 20(1):217–240, 2011.

Edward H Kennedy, Zongming Ma, Matthew D McHugh, and Dylan S Small. Non-parametric methods for doubly robust estimation of continuous treatment effects. *Journal of the Royal Statistical Society: Series B (Statistical Methodology)*, 79(4):1229–1245, 2017.

Lihua Lei, Alexander D'Amour, Peng Ding, Avi Feller, and Jasjeet Sekhon. Distribution-free assessment of population overlap in observational studies. Technical report, Stanford University, 2021.

Diego Martinez-Taboada and Edward H Kennedy. Counterfactual density estimation using kernel Stein discrepancies. *arXiv preprint arXiv:2309.16129*, 2023.

Diego Martinez-Taboada, Aaditya Ramdas, and Edward H Kennedy. An efficient doubly-robust test for the kernel treatment effect. *arXiv preprint arXiv:2304.13237*, 2023.

Afsaneh Mastouri, Yuchen Zhu, Limor Gultchin, Anna Korba, Ricardo Silva, Matt Kusner, Arthur Gretton, and Krikamol Muandet. Proximal causal learning with kernels: Two-stage estimation and moment restriction. In *International Conference on Machine Learning*, pp. 7512–7523. PMLR, 2021.

Jovana Mitrovic, Dino Sejdinovic, and Yee Whye Teh. Causal inference via kernel deviance measures.
*Advances in Neural Information Processing Systems*, 31, 2018.

Erica EM Moodie, Olli Saarela, and David A Stephens. A doubly robust weighting estimator of the average treatment effect on the treated. *Stat*, 7(1):e205, 2018.

Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. Kernel mean embedding of distributions: A review and beyond. *Foundations and Trends in Machine Learning*, 10(1-2):1–141, 2017.

Krikamol Muandet, Arash Mehrjou, Si Kai Lee, and Anant Raj. Dual instrumental variable regression. *Advances in Neural Information Processing Systems*, 33:2710–2721, 2020.

Krikamol Muandet, Motonobu Kanagawa, Sorawit Saengkyongam, and Sanparith Marukatat. Counterfactual mean embeddings. *Journal of Machine Learning Research*, 22(162):1–71, 2021.

Markus Ojala and Gemma C Garriga. Permutation tests for studying classifier performance. *Journal of Machine Learning Research*, 11(6), 2010.

Junhyung Park, Uri Shalit, Bernhard Schölkopf, and Krikamol Muandet. Conditional distributional treatment effect with kernel conditional mean embeddings and U-statistic regression. In *International Conference on Machine Learning*, pp. 8401–8412. PMLR, 2021.

Judea Pearl. Causal inference in statistics: An overview. *Statistics Surveys*, 3:96–146, 2009.

Samuel D Pimentel. Covariate-adaptive randomization inference in matched designs. *arXiv preprint arXiv:2207.05019*, 2022.

Aaditya Ramdas, Rina Foygel Barber, Emmanuel J Candès, and Ryan J Tibshirani. Permutation tests using arbitrary permutation distributions. *Sankhya A*, pp. 1–22, 2023.

Thomas S. Richardson and James M. Robins. Single world intervention graphs (SWIGs): A unification of the counterfactual and graphical approaches to causality. *Center for the Statistics and the Social Sciences, University of Washington, Working Paper 128*, 2013.

James M Robins and Andrea Rotnitzky. Semiparametric efficiency in multivariate regression models with missing data.
*Journal of the American Statistical Association*, 90(429):122–129, 1995. Paul R Rosenbaum. Conditional permutation tests and the propensity score in observational studies. Journal of the American Statistical Association, 79(387):565–574, 1984. Paul R Rosenbaum. Observational studies. In *Observational Studies*, pp. 1–17. Springer, 2002. Donald B Rubin. Estimating causal effects from large data sets using propensity scores. Annals of internal medicine, 127(8_Part_2):757–763, 1997. Shubhanshu Shekhar, Ilmun Kim, and Aaditya Ramdas. A permutation-free kernel two-sample test. Advances in Neural Information Processing Systems, 35:18168–18180, 2022. Y. Shimoni, C. Yanover, E. Karavani, and Y. Goldschmnidt. Benchmarking Framework for PerformanceEvaluation of Causal Inference Analysis. *ArXiv preprint arXiv:1802.05046*, 2018. Rahul Singh, Maneesh Sahani, and Arthur Gretton. Kernel instrumental variable regression. *Advances in* Neural Information Processing Systems, 32, 2019. Rahul Singh, Liyuan Xu, and Arthur Gretton. Generalized kernel ridge regression for nonparametric structural functions and semiparametric treatment effects. *arXiv e-prints*, pp. arXiv–2010, 2020. Le Song, Jonathan Huang, Alex Smola, and Kenji Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In *Proceedings of the 26th Annual International* Conference on Machine Learning, pp. 961–968, 2009. Elizabeth A Stuart. Matching methods for causal inference: A review and a look forward. Statistical science: a review journal of the Institute of Mathematical Statistics, 25(1):1, 2010. Anastasios A. Tsiatis. *Semiparametric Theory and Missing Data*. Springer, 2006. Mark J Van Der Laan and Daniel Rubin. Targeted maximum likelihood learning. The international journal of biostatistics, 2(1), 2006. 
## A Proofs

For ease of writing, throughout this appendix we will let:

$$r(x,t):=\mu_{Y|X=x,T=t}\tag{5}$$

$$\hat{r}_{n}(x,t):=\hat{\mu}_{Y|X=x,T=t}^{(n)}\tag{6}$$

## A.1 Proof Of Theorem 1

This proof follows under the following standard assumptions⁵:

- The propensity is uniformly bounded away from zero, so there exists an $\epsilon>0$ such that $e(x,t)>\epsilon$ for all $x$ in the support of the covariates.
- The estimated propensity score is uniformly bounded away from 0 and 1 on the support of $P(X)$, so for all $x$ in the support of $P(X)$ and for all $n$ we have $\delta<\hat{e}_{n}(x,t)$ for some $\delta>0$.

⁵These are the same as in, for example, Kennedy et al. (2017).

Now let $m$ be the size of the test set we average over, so $m=O(n)$ where $n$ is the sample size. First note that, due to the consistency property, we can write $\ell(y_{i},\cdot)\,\mathbb{1}\{t_{i}=t\}=\ell(y_{i}(t),\cdot)\,\mathbb{1}\{t_{i}=t\}$. This allows us to rewrite the DR estimator as:

$$\hat{\mu}_{Y(t)}^{\mathrm{DR}}=\frac{1}{m}\sum_{i=1}^{m}\left\{\ell(y_{i}(t),\cdot)+\frac{\left(\mathbb{1}\{t_{i}=t\}-\hat{e}_{n}(x_{i},t)\right)\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)}{\hat{e}_{n}(x_{i},t)}\right\}.$$

Now letting:

$$\epsilon_{Y(t)}^{\mathrm{DR}}=\frac{1}{m}\sum_{i=1}^{m}\frac{\left(\mathbb{1}\{t_{i}=t\}-\hat{e}_{n}(x_{i},t)\right)\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)}{\hat{e}_{n}(x_{i},t)},$$

we can write:

$$\left\|\mu_{Y(t)}-\hat{\mu}_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}\leq\left\|\mu_{Y(t)}-\frac{1}{m}\sum_{i=1}^{m}\ell(y_{i}(t),\cdot)\right\|_{\mathcal{H}_{Y}}+\left\|\epsilon_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}.$$

As $\frac{1}{m}\sum_{i=1}^{m}\ell(y_{i}(t),\cdot)$ is an unbiased estimator for $\mu_{Y(t)}$ with parametric convergence rate $O(m^{-\frac{1}{2}})=O(n^{-\frac{1}{2}})$, the convergence rate of our estimator is entirely determined by the convergence rate of $\left\|\epsilon_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}$, so consider:

$$\mathbb{E}\left[\left\|\epsilon_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}^{2}\right]=\mathbb{E}\left[\frac{1}{m^{2}}\sum_{i,j=1}^{m}\frac{\left(\mathbb{1}\{t_{i}=t\}-\hat{e}_{n}(x_{i},t)\right)\left(\mathbb{1}\{t_{j}=t\}-\hat{e}_{n}(x_{j},t)\right)\left\langle\ell(y_{j}(t),\cdot)-\hat{r}_{n}(x_{j},t),\,\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right\rangle_{\mathcal{H}_{Y}}}{\hat{e}_{n}(x_{i},t)\,\hat{e}_{n}(x_{j},t)}\right]$$

$$=\mathbb{E}\left[\frac{1}{m^{2}}\sum_{i\neq j}\frac{\left(\mathbb{1}\{t_{i}=t\}-\hat{e}_{n}(x_{i},t)\right)\left(\mathbb{1}\{t_{j}=t\}-\hat{e}_{n}(x_{j},t)\right)\left\langle\ell(y_{j}(t),\cdot)-\hat{r}_{n}(x_{j},t),\,\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right\rangle_{\mathcal{H}_{Y}}}{\hat{e}_{n}(x_{i},t)\,\hat{e}_{n}(x_{j},t)}\right.\tag{7}$$

$$\left.+\frac{1}{m^{2}}\sum_{i=j}\left(\frac{\left(\mathbb{1}\{t_{i}=t\}-\hat{e}_{n}(x_{i},t)\right)\left\|\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right\|_{\mathcal{H}_{Y}}}{\hat{e}_{n}(x_{i},t)}\right)^{2}\right]\tag{8}$$

Now let $(X,T,Y(t))$ be a random sample. We may write the term (8) as:

$$\frac{1}{m}\,\mathbb{E}\left[\left(\frac{\left(\mathbb{1}\{T=t\}-\hat{e}_{n}(X,t)\right)\left\|\ell(Y(t),\cdot)-\hat{r}_{n}(X,t)\right\|_{\mathcal{H}_{Y}}}{\hat{e}_{n}(X,t)}\right)^{2}\right].$$

By applying bounded propensity and bounded kernels we have that this expectation is bounded, which gives that the term is bounded by $C_{1}/m$. Now for the first term let $(\tilde{X},\tilde{T},\tilde{Y}(t))$ be a second independent sample. We may write (7) as:

$$\frac{m-1}{m}\,\mathbb{E}\left[\frac{\left(\mathbb{1}\{T=t\}-\hat{e}_{n}(X,t)\right)\left(\mathbb{1}\{\tilde{T}=t\}-\hat{e}_{n}(\tilde{X},t)\right)\left\langle\ell(Y(t),\cdot)-\hat{r}_{n}(X,t),\,\ell(\tilde{Y}(t),\cdot)-\hat{r}_{n}(\tilde{X},t)\right\rangle_{\mathcal{H}_{Y}}}{\hat{e}_{n}(X,t)\,\hat{e}_{n}(\tilde{X},t)}\right]$$

$$=\frac{m-1}{m}\,\mathbb{E}\left[\mathbb{E}\left[\frac{\left(\mathbb{1}\{T=t\}-\hat{e}_{n}(X,t)\right)\left(\mathbb{1}\{\tilde{T}=t\}-\hat{e}_{n}(\tilde{X},t)\right)}{\hat{e}_{n}(X,t)\,\hat{e}_{n}(\tilde{X},t)}\,\middle|\,X,\tilde{X}\right]\mathbb{E}\left[\left\langle\ell(Y(t),\cdot)-\hat{r}_{n}(X,t),\,\ell(\tilde{Y}(t),\cdot)-\hat{r}_{n}(\tilde{X},t)\right\rangle_{\mathcal{H}_{Y}}\,\middle|\,X,\tilde{X}\right]\right]$$

$$=\frac{m-1}{m}\,\mathbb{E}\left[\frac{\left(e(X,t)-\hat{e}_{n}(X,t)\right)\left(e(\tilde{X},t)-\hat{e}_{n}(\tilde{X},t)\right)}{\hat{e}_{n}(X,t)\,\hat{e}_{n}(\tilde{X},t)}\left\langle r(X,t)-\hat{r}_{n}(X,t),\,r(\tilde{X},t)-\hat{r}_{n}(\tilde{X},t)\right\rangle_{\mathcal{H}_{Y}}\right]$$

$$\leq\frac{m-1}{m}\left(\mathbb{E}\left[\left(\frac{\left(e(X,t)-\hat{e}_{n}(X,t)\right)\left(e(\tilde{X},t)-\hat{e}_{n}(\tilde{X},t)\right)}{\hat{e}_{n}(X,t)\,\hat{e}_{n}(\tilde{X},t)}\right)^{2}\right]\mathbb{E}\left[\left\langle r(X,t)-\hat{r}_{n}(X,t),\,r(\tilde{X},t)-\hat{r}_{n}(\tilde{X},t)\right\rangle_{\mathcal{H}_{Y}}^{2}\right]\right)^{\frac{1}{2}}$$

$$\leq\frac{m-1}{m}\,\mathbb{E}\left[\left(\frac{e(X,t)-\hat{e}_{n}(X,t)}{\hat{e}_{n}(X,t)}\right)^{2}\right]\mathbb{E}\left[\left\|r(X,t)-\hat{r}_{n}(X,t)\right\|_{\mathcal{H}_{Y}}^{2}\right]$$

$$\leq C_{2}\,\mathbb{E}\left[\left(e(X,t)-\hat{e}_{n}(X,t)\right)^{2}\right]\mathbb{E}\left[\left\|r(X,t)-\hat{r}_{n}(X,t)\right\|_{\mathcal{H}_{Y}}^{2}\right].$$

Therefore we have:

$$\mathbb{E}\left[\left\|\epsilon_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}^{2}\right]\leq\frac{C_{1}}{m}+C_{2}\,\mathbb{E}\left[\left(e(X,t)-\hat{e}_{n}(X,t)\right)^{2}\right]\mathbb{E}\left[\left\|r(X,t)-\hat{r}_{n}(X,t)\right\|_{\mathcal{H}_{Y}}^{2}\right].$$

This gives that the rate of convergence is controlled by $\mathbb{E}\left[\left(e(X,t)-\hat{e}_{n}(X,t)\right)^{2}\right]$ and $\mathbb{E}\left[\left\|r(X,t)-\hat{r}_{n}(X,t)\right\|_{\mathcal{H}_{Y}}^{2}\right]$.
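As an aside, the doubly robust (AIPW-style) embedding estimator analysed in this proof can be illustrated numerically. The sketch below evaluates the estimated embedding of $Y(1)$ at a single point with a Gaussian outcome kernel on synthetic randomised data; the constant propensity estimate and the covariate-ignoring regression estimate are deliberately crude stand-ins of our own, not the nuisance estimators used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ell(y, y_eval, sigma=1.0):
    """Gaussian outcome kernel l(y, y')."""
    return np.exp(-(y - y_eval) ** 2 / (2 * sigma**2))

# Synthetic randomised data: binary treatment t, scalar outcome y.
m = 500
x = rng.normal(size=m)
t = (rng.random(m) < 0.5).astype(float)
y = x + t + 0.1 * rng.normal(size=m)

e_hat = np.full(m, t.mean())  # crude (constant) propensity estimate

def r_hat(y_eval):
    # Crude regression estimate of the conditional embedding r(x, 1)
    # evaluated at y_eval: ignore x and average over treated units.
    return ell(y[t == 1], y_eval).mean()

def mu_dr(y_eval):
    """AIPW form of the DR embedding estimate, evaluated at y_eval:
    (1/m) sum_i [ r_hat + 1{t_i=1} (l(y_i, y_eval) - r_hat) / e_hat ]."""
    return (r_hat(y_eval) + t * (ell(y, y_eval) - r_hat(y_eval)) / e_hat).mean()

# With these particular (constant) nuisances the DR estimate reduces
# exactly to the treated-group plug-in embedding, a quick sanity check.
print(mu_dr(1.0), ell(y[t == 1], 1.0).mean())
```

In practice $\hat{r}$ would be a conditional mean embedding regression and $\hat{e}$ a fitted propensity model; the point of the sketch is only the structure of the bias-correction term.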
By applying Jensen's inequality we have $\mathbb{E}\left[\left\|\epsilon_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}\right]=O(\gamma_{r,n}\gamma_{e,n})$, and so by Markov's inequality $\left\|\epsilon_{Y(t)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}=O_{P}(\gamma_{r,n}\gamma_{e,n})$.

## A.2 Proof Of Theorem 2

The proof follows a similar structure to that of Theorem 1, where we now have:

$$\left\|\mu_{Y(t)|T=t'}-\hat{\mu}_{Y(t)|T=t'}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}\leq\left\|\mu_{Y(t)|T=t'}-\frac{1}{n_{t'}}\sum_{i=1}^{n}\mathbb{1}\{t_{i}=t'\}\,\ell(y_{i}(t),\cdot)\right\|_{\mathcal{H}_{Y}}+\left\|\left(\frac{1}{n_{t'}}\sum_{i=1}^{n}\mathbb{1}\{t_{i}=t'\}\,\ell(y_{i}(t),\cdot)\right)-\hat{\mu}_{Y(t)|T=t'}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}.$$

Now we have:

$$\left(\frac{1}{n_{t'}}\sum_{i=1}^{n}\mathbb{1}\{t_{i}=t'\}\,\ell(y_{i}(t),\cdot)\right)-\hat{\mu}_{Y(t)|T=t'}^{\mathrm{DR}}=\frac{1}{n_{t'}}\sum_{i=1}^{n}\Big(\mathbb{1}\{t_{i}=t'\}\,\ell(y_{i}(t),\cdot)-\mathbb{1}\{t_{i}=t\}\,\hat{w}(x_{i},t)\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)-\mathbb{1}\{t_{i}=t'\}\,\hat{r}_{n}(x_{i},t)\Big)$$

$$=-\frac{1}{n_{t'}}\sum_{i=1}^{n}\Big(\mathbb{1}\{t_{i}=t\}\,\hat{w}(x_{i},t)\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)+\mathbb{1}\{t_{i}=t'\}\left(\hat{r}_{n}(x_{i},t)-\ell(y_{i}(t),\cdot)\right)\Big)$$

$$=-\frac{1}{n_{t'}}\sum_{i=1}^{n}\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)\left(\mathbb{1}\{t_{i}=t\}\,\hat{w}(x_{i},t)-\mathbb{1}\{t_{i}=t'\}\right),$$

and substituting $\hat{w}(x,t)=\frac{1-\hat{e}_{n}(x,t)}{\hat{e}_{n}(x,t)}$ gives:

$$=-\frac{1}{n_{t'}}\sum_{i=1}^{n}\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)\frac{\mathbb{1}\{t_{i}=t\}\left(1-\hat{e}_{n}(x_{i},t)\right)-\mathbb{1}\{t_{i}=t'\}\,\hat{e}_{n}(x_{i},t)}{\hat{e}_{n}(x_{i},t)}$$

$$=-\frac{1}{n_{t'}}\sum_{i=1}^{n}\left(\ell(y_{i}(t),\cdot)-\hat{r}_{n}(x_{i},t)\right)\frac{\mathbb{1}\{t_{i}=t\}-\hat{e}_{n}(x_{i},t)}{\hat{e}_{n}(x_{i},t)}$$

$$=-\frac{n}{n_{t'}}\,\epsilon_{Y(t)}^{\mathrm{DR}},$$

where the penultimate equality uses $\mathbb{1}\{t_{i}=t\}+\mathbb{1}\{t_{i}=t'\}=1$ for a binary treatment. Now as $n_{t'}/n$ converges to a constant at rate $n^{-\frac{1}{2}}$, the convergence rate just depends on the rate at which $\epsilon_{Y(t)}^{\mathrm{DR}}$ tends to zero. Therefore the analysis in the previous proof establishes the rate.

## A.3 Proof Of Proposition 1

Ramdas et al. (2023) demonstrate that we can test the null hypothesis $H_{0}:X_{1},\ldots,X_{n}$ are exchangeable, using a fixed set of permutations $S\subset\mathrm{Sym}_{n}$ (not necessarily a group) and a statistic $T$, by randomly sampling permutations $\sigma_{0},\ldots,\sigma_{M}$ from $S$ uniformly and computing:

$$p=\frac{1+\sum_{i=1}^{M}\mathbb{1}\left\{T\left(X_{\sigma_{i}\circ\sigma_{0}^{-1}}\right)\geq T(X)\right\}}{M+1},\tag{9}$$

where $X_{\sigma}=\left(X_{\sigma(1)},\ldots,X_{\sigma(n)}\right)$ is the permuted data. Ramdas et al. show that $p$ is a valid p-value in the sense that $P(p\leq\alpha)\leq\alpha$ under the null hypothesis.

In our method the data is placed into bins, denoted by $B_{i}$, where $i$ ranges over the total number of bins $k$, and under the assumption of exact matching the null hypothesis is:

$$H_{0}:\text{the samples within each bin }B_{i}\text{ are exchangeable, for each }i.$$

Further, we have split the bins into training and test sets, $B^{\mathrm{Tr}}=\{B_{1},\ldots,B_{m}\}$ and $B^{\mathrm{Te}}=\{B_{m+1},\ldots,B_{k}\}$. For $B_{i}\in B^{\mathrm{Te}}$ we set the possible permutations to be $S_{|B_{i}|}$, i.e. the full set of permutations on $B_{i}$. For each set $B_{j}\in B^{\mathrm{Tr}}$ we sample $\sigma_{1,j},\ldots,\sigma_{N+1,j}$ from $S_{|B_{j}|}$ and set $S_{j}=\{\sigma_{1,j},\ldots,\sigma_{N+1,j}\}$. We then set the total set of permutations $S$ to be:

$$S=\left(\bigtimes_{j=1}^{m}S_{j}\right)\times\left(\bigtimes_{j=m+1}^{k}S_{|B_{j}|}\right)$$

As the samples in each bin are exchangeable, and samples in distinct bins are independent, we have that our data is exchangeable under the permutations in $S$. Therefore we can sample from $S$ as in (9) to produce a valid p-value under $H_{0}$. To go from this to the form in the proof, note that sampling the random permutations for $S_{j}$, choosing a random permutation $\sigma_{0}$ from $S_{j}$, and then taking $\sigma_{i}\circ\sigma_{0}^{-1}$ is equivalent to sampling $N$ random permutations $\sigma_{1,j},\ldots,\sigma_{N,j}$ and then sampling a random permutation from $\tilde{S}=\{\mathrm{Id},\sigma_{1,j},\ldots,\sigma_{N,j}\}$.

## B Derivation Of Test Statistics

## B.1 Derivation Of $\left(\widehat{\mathrm{MMD}}_{\mathrm{DR}}[Y(1),Y(0),\mathcal{H}_{Y}]\right)^{2}$

To derive $\left(\widehat{\mathrm{MMD}}_{\mathrm{DR}}[Y(1),Y(0),\mathcal{H}_{Y}]\right)^{2}$ we let $e(x)=e(x,1)=P(T=1\mid X=x)$.
We now write:

$$\hat{\mu}_{Y(1)}^{\mathrm{DR}}-\hat{\mu}_{Y(0)}^{\mathrm{DR}}=\frac{1}{m}\sum_{i=1}^{m}\left\{\frac{t_{i}\left(\ell(y_{i},\cdot)-\hat{r}(x_{i},1)\right)}{\hat{e}(x_{i})}+\hat{r}(x_{i},1)-\left(\frac{(1-t_{i})\left(\ell(y_{i},\cdot)-\hat{r}(x_{i},0)\right)}{1-\hat{e}(x_{i})}+\hat{r}(x_{i},0)\right)\right\}$$

$$=\frac{1}{m}\sum_{i=1}^{m}\frac{t_{i}-\hat{e}(x_{i})}{\hat{e}(x_{i})\left(1-\hat{e}(x_{i})\right)}\left(\ell(y_{i},\cdot)-\left(1-\hat{e}(x_{i})\right)\hat{r}(x_{i},1)-\hat{e}(x_{i})\,\hat{r}(x_{i},0)\right).$$

Now we have $\hat{r}(x_{i},1)=\mathbf{k}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1}$ and $\hat{r}(x_{i},0)=\mathbf{k}_{0}^{\top}(x_{i})\mathbf{W}_{0}\boldsymbol{\ell}_{0}$. Further, for ease of writing we let:

$$\tilde{\mathbf{k}}_{0}^{\top}(x_{i})=\hat{e}(x_{i})\,\mathbf{k}_{0}^{\top}(x_{i}),$$

$$\tilde{\mathbf{k}}_{1}^{\top}(x_{i})=\left(1-\hat{e}(x_{i})\right)\mathbf{k}_{1}^{\top}(x_{i}),$$

$$\alpha(x_{i})=\frac{t_{i}-\hat{e}(x_{i})}{\hat{e}(x_{i})\left(1-\hat{e}(x_{i})\right)}.$$

This allows us to write the above as:

$$=\frac{1}{m}\sum_{i=1}^{m}\alpha(x_{i})\left(\ell(y_{i},\cdot)-\tilde{\mathbf{k}}_{0}^{\top}(x_{i})\mathbf{W}_{0}\boldsymbol{\ell}_{0}-\tilde{\mathbf{k}}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1}\right),$$

and plugging this in we obtain:

$$\left(\widehat{\mathrm{MMD}}_{\mathrm{DR}}[Y(1),Y(0),\mathcal{H}_{Y}]\right)^{2}:=\left\|\hat{\mu}_{Y(1)}^{\mathrm{DR}}-\hat{\mu}_{Y(0)}^{\mathrm{DR}}\right\|_{\mathcal{H}_{Y}}^{2}=\frac{1}{m^{2}}\sum_{i,j}\alpha(x_{i})\alpha(x_{j})\left\langle\ell(y_{i},\cdot)-\tilde{\mathbf{k}}_{0}^{\top}(x_{i})\mathbf{W}_{0}\boldsymbol{\ell}_{0}-\tilde{\mathbf{k}}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1},\;\ell(y_{j},\cdot)-\tilde{\mathbf{k}}_{0}^{\top}(x_{j})\mathbf{W}_{0}\boldsymbol{\ell}_{0}-\tilde{\mathbf{k}}_{1}^{\top}(x_{j})\mathbf{W}_{1}\boldsymbol{\ell}_{1}\right\rangle$$

$$=\frac{1}{m^{2}}\sum_{i,j}\alpha(x_{i})\alpha(x_{j})\Big(\ell(y_{i},y_{j})-\tilde{\mathbf{k}}_{0}^{\top}(x_{i})\mathbf{W}_{0}\boldsymbol{\ell}_{0}(y_{j})-\tilde{\mathbf{k}}_{0}^{\top}(x_{j})\mathbf{W}_{0}\boldsymbol{\ell}_{0}(y_{i})-\tilde{\mathbf{k}}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1}(y_{j})-\tilde{\mathbf{k}}_{1}^{\top}(x_{j})\mathbf{W}_{1}\boldsymbol{\ell}_{1}(y_{i})\Big)$$

$$+\frac{1}{m^{2}}\sum_{i,j}\alpha(x_{i})\alpha(x_{j})\Big(\tilde{\mathbf{k}}_{0}^{\top}(x_{i})\mathbf{W}_{0}\mathbf{L}_{0}\mathbf{W}_{0}\tilde{\mathbf{k}}_{0}(x_{j})+\tilde{\mathbf{k}}_{1}^{\top}(x_{i})\mathbf{W}_{1}\mathbf{L}_{1}\mathbf{W}_{1}\tilde{\mathbf{k}}_{1}(x_{j})+\tilde{\mathbf{k}}_{0}^{\top}(x_{i})\mathbf{W}_{0}\mathbf{L}_{0,1}\mathbf{W}_{1}\tilde{\mathbf{k}}_{1}(x_{j})+\tilde{\mathbf{k}}_{1}^{\top}(x_{i})\mathbf{W}_{1}\mathbf{L}_{1,0}\mathbf{W}_{0}\tilde{\mathbf{k}}_{0}(x_{j})\Big).$$

Now we introduce some new notation: for a variable we will use a subscript to denote the value of $T$ we condition on and a superscript to denote whether the variable is in the training or test set. So $X_{0}^{\mathrm{Te}}$ is the test points with $T=0$. Further let $K(X,\tilde{X})=\left(k(x_{i},\tilde{x}_{j})\right)_{i=1,\ldots,n_{X};\,j=1,\ldots,n_{\tilde{X}}}$, i.e. the kernel matrix whose rows are indexed by $X$ and whose columns are indexed by $\tilde{X}$. Absorbing the factor $1/m$ into $\alpha$, this allows us to write the test statistic as:

$$\alpha^{\top}\Big(L(Y^{\mathrm{Te}},Y^{\mathrm{Te}})-2\,\mathrm{diag}(\hat{e})K(X_{0}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{0}L(Y_{0}^{\mathrm{Tr}},Y^{\mathrm{Te}})-2\,\mathrm{diag}(1-\hat{e})K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{1}L(Y_{1}^{\mathrm{Tr}},Y^{\mathrm{Te}})$$

$$+\mathrm{diag}(\hat{e})K(X_{0}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{0}L(Y_{0}^{\mathrm{Tr}},Y_{0}^{\mathrm{Tr}})\mathbf{W}_{0}K(X_{0}^{\mathrm{Tr}},X^{\mathrm{Te}})\,\mathrm{diag}(\hat{e})$$

$$+\mathrm{diag}(1-\hat{e})K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{1}L(Y_{1}^{\mathrm{Tr}},Y_{1}^{\mathrm{Tr}})\mathbf{W}_{1}K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})\,\mathrm{diag}(1-\hat{e})$$

$$+2\,\mathrm{diag}(\hat{e})K(X_{0}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{0}L(Y_{0}^{\mathrm{Tr}},Y_{1}^{\mathrm{Tr}})\mathbf{W}_{1}K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})\,\mathrm{diag}(1-\hat{e})\Big)\alpha.$$

## B.2 Derivation Of $\widehat{\mathrm{MMD}}_{\mathrm{DR}}\left[\{Y(1)\mid T=0\},\{Y(0)\mid T=0\},\mathcal{H}_{Y}\right]$

We now derive the closed form of the estimator based on the effect of treatment on the treated for the case of a binary treatment. Again we begin by deriving a simpler form of the difference between mean embeddings:

$$\hat{\mu}_{\{Y(1)|T=0\}}-\hat{\mu}_{\{Y(0)|T=0\}}=\hat{\mu}_{\{Y(1)|T=0\}}-\hat{\mu}_{\{Y|T=0\}}$$

$$=\frac{1}{n_{0}}\sum_{i=1}^{n}\Big(t_{i}\,\hat{w}(x_{i},1)\left(\ell(y_{i},\cdot)-\hat{r}(x_{i},1)\right)+(1-t_{i})\,\hat{r}(x_{i},1)\Big)-\frac{1}{n_{0}}\sum_{i=1}^{n}(1-t_{i})\,\ell(y_{i},\cdot)$$

$$=\frac{1}{n_{0}}\sum_{i=1}^{n}\left(t_{i}\,\hat{w}(x_{i},1)-(1-t_{i})\right)\left(\ell(y_{i},\cdot)-\hat{r}(x_{i},1)\right)$$

$$=\frac{1}{n_{0}}\sum_{i=1}^{n}\frac{t_{i}-\hat{e}(x_{i})}{\hat{e}(x_{i})}\left(\ell(y_{i},\cdot)-\hat{r}(x_{i},1)\right).$$

Now we have that $\hat{r}(x_{i},1)=\hat{\mu}_{Y|X=x_{i},T=1}=\mathbf{k}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1}$, and we also let:

$$\beta(x_{i})=\frac{t_{i}-\hat{e}(x_{i})}{\hat{e}(x_{i})}.$$

Then we can write the full test statistic as:

$$\left\|\hat{\mu}_{\{Y(1)|T=0\}}-\hat{\mu}_{\{Y(0)|T=0\}}\right\|^{2}=\frac{1}{n_{0}^{2}}\sum_{i,j}\beta(x_{i})\beta(x_{j})\left\langle\ell(y_{i},\cdot)-\mathbf{k}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1},\;\ell(y_{j},\cdot)-\mathbf{k}_{1}^{\top}(x_{j})\mathbf{W}_{1}\boldsymbol{\ell}_{1}\right\rangle$$

$$=\frac{1}{n_{0}^{2}}\sum_{i,j}\beta(x_{i})\beta(x_{j})\Big(\ell(y_{i},y_{j})-\mathbf{k}_{1}^{\top}(x_{i})\mathbf{W}_{1}\boldsymbol{\ell}_{1}(y_{j})-\mathbf{k}_{1}^{\top}(x_{j})\mathbf{W}_{1}\boldsymbol{\ell}_{1}(y_{i})+\mathbf{k}_{1}^{\top}(x_{i})\mathbf{W}_{1}\mathbf{L}_{1,1}\mathbf{W}_{1}\mathbf{k}_{1}(x_{j})\Big),$$

giving the full test statistic (absorbing the factor $1/n_{0}$ into $\beta$) as:

$$\beta^{\top}\Big(L(Y^{\mathrm{Te}},Y^{\mathrm{Te}})-2K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{1}L(Y_{1}^{\mathrm{Tr}},Y^{\mathrm{Te}})+K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})^{\top}\mathbf{W}_{1}L(Y_{1}^{\mathrm{Tr}},Y_{1}^{\mathrm{Tr}})\mathbf{W}_{1}K(X_{1}^{\mathrm{Tr}},X^{\mathrm{Te}})\Big)\beta.$$

## C Simulation Details

For the simulation in Section 5.1 we select:

$$a=[0.1,0.2,0.3,0.4,0.5,0.1,0.2,0.3,0.4]$$

$$b=[0.5,0.4,0.3,0.2,0.1,0.4,0.3,0.2,0.1]$$

$$\sigma=0.2$$

$$\beta=3,$$

and for the second experiment we let:

$$f_{0}(x)=x_{1}$$

$$f_{1}(x)=x_{1}^{2}$$

$$\alpha=0.3$$

$$\sigma=0.2.$$

## C.1 Fit Experiment Plots

Here we include the propensity plots for the simulated data in the fit plots:

![17_image_0.png](17_image_0.png)

Propensity Distribution on Simulated Data

We can see in the second plot with one-sided overlap that a much higher proportion of datapoints are concentrated at extreme propensities.

## C.2 Simulated Distributional Test Plots

Here we include some plots for the distribution in the experiments testing for causal effects on simulated data:

![17_image_1.png](17_image_1.png)

And also the distribution over propensity scores:

Propensity Distribution on Simulated Data

![18_image_0.png](18_image_0.png)

![18_image_1.png](18_image_1.png)
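To make the permutation scheme from Appendix A.3 concrete, the following sketch implements the p-value of Equation (9) for a simple two-sample statistic. The mean-difference statistic, the synthetic data, and the number of permutations are our own illustrative choices; note that when $\sigma_{0},\ldots,\sigma_{M}$ are drawn uniformly from the full symmetric group, $\sigma_{i}\circ\sigma_{0}^{-1}$ is again uniform, so the sketch coincides with an ordinary permutation test.

```python
import numpy as np

rng = np.random.default_rng(1)

def t_stat(values, labels):
    """Illustrative two-sample statistic: |difference of group means|."""
    return abs(values[labels == 1].mean() - values[labels == 0].mean())

def permutation_p_value(values, labels, n_perm=999):
    """p = (1 + sum_i 1{T(X_{sigma_i o sigma_0^{-1}}) >= T(X)}) / (M + 1),
    with sigma_0, ..., sigma_M drawn uniformly from the symmetric group."""
    n = len(values)
    inv_sigma0 = np.argsort(rng.permutation(n))  # sigma_0^{-1}
    t_obs = t_stat(values, labels)
    exceed = 0
    for _ in range(n_perm):
        sigma_i = rng.permutation(n)
        composed = sigma_i[inv_sigma0]           # sigma_i o sigma_0^{-1}
        exceed += t_stat(values, labels[composed]) >= t_obs
    return (1 + exceed) / (n_perm + 1)

# Two clearly separated groups: the test should reject at level 0.05.
values = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(2.0, 1.0, 50)])
labels = np.repeat([0, 1], 50)
print(permutation_p_value(values, labels))
```

Permuting the labels rather than the data gives the same statistic values and avoids copying the data array; the "+1" in numerator and denominator is what makes the p-value valid at finite $M$.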
Review 1: Summary: Post rebuttal update: The authors' explanations and/or proposed changes largely addressed my concerns. I changed the answer of "Claims And Evidence" to "Yes". The paper combines Counterfactual Mean Embeddings (CMEs) in machine learning and the classical ideas of doubly robust estimators in causal inference. It uses kernel mean embeddings to build kernelized versions of the classical doubly robust estimators. As to testing the null hypothesis of no treatment effects, the paper uses the recent result of Ramdas et al. (2023) to reduce the computational load. Strengths and Weaknesses: ## Strengths The extension of CMEs to double robustness is a natural and valuable contribution. Theoretical and experimental analysis is relatively solid. ## Weaknesses The theoretical analysis seems to be an adaptation of previous works. The permutation test does not follow naturally from the kernel statistics but is directly adapted from Ramdas et al. (2023) as a way to reduce the computation. It is more like a practical compromise than a novelty. See Requested Changes for details. Requested Changes: ## Critical The test is of limited novelty and seems to be an afterthought. Despite the statement “this procedure could be applied to efficiently use other trainable test statistics within permutation testing”, my reading (of the main text and A.3) is that, because the proposed statistics are computationally heavy to apply to naive permutation tests, the authors use Ramdas et al. (2023)’s idea which itself can be applied to general test statistics in permutation testing. If this is a misunderstanding, the authors need to discuss the original work and compare the proposed test to it. Otherwise, I request revising the claims such as “This *leads* to *new* permutation-based tests….”, “We propose a new permutation approach…” and “this is the first time this permutation strategy has appeared in the literature” to more humble ones. 
I also suggest changing the title to “Doubly Robust Kernel Statistics for Distributional Treatment Effects”, without stressing the test. ## Good to have (or some important questions) The definition of (one-sided) overlap is a bit deviated from the standard and would confuse inexperienced readers. For overlap, the def contains unspecified $t$, and, if we understand this as “for both $t=0,1$”, then there is no need to require “$<1$”. Also, there are 2*2=4 kinds of “one-sided overlap”, but what is used in the paper are “$e(x,0)>0$” and “$e(x,1)>0$”; I would call them “treatment/control group positivity” respectively (the term “overlap” loses its intuitive meaning if we consider only one of the group). In Theorem 1 and 2, why do we need “$\gamma_{e,n}=o(1)$ and $\gamma_{r,n}=o(1)$”? Doesn’t this mean “both estimators converge“? The proofs seem to just say “the rate is controlled by the product rate of the two estimators” and we can have the conclusion without saying anything about the rates of the two estimators? I am not sure why the proposed estimators “have theoretically improved convergence rates when compared to the previous counterfactual mean embedding estimators”? As indicated in the paper, previous CME estimators also converge at the parametric rate, and, according to Theorem 1 and 2, the proposed estimators achieve no better rates than this (although they have double robustness)? Approximate validity under inexact matching. As far as I understand, in Simulation 3, the matching is exact because both the model and truth are logistic. So I guess the semi-synthetic data do not use the logistic function to generate the treatment and thus the matching is inexact? Even if so, it needs to be explained clearly and the data-generating process should be given in the Appendix. ## Minor The symbol $S$ is overloaded for both a permutation group and a statistic. I suggest the symbol $T$ for the latter. 
In the caption of Fig 4, "introduces a shift in the" → "introduces a shift in the distribution".

Broader Impact Concerns: NA

==================================================

Review 2:

Summary: This paper proposes a novel framework, based on Counterfactual Mean Embeddings, for testing distributional causal effects in a diverse array of circumstances, in which a doubly robust estimator is designed from semi-parametric statistics. It is interesting to see that the estimator can represent causal distributions within Reproducing Kernel Hilbert Spaces (RKHS). In the theoretical section, they analyze the estimator in kernel space, and prove that the proposed estimator retains the doubly robust property and has improved convergence rates. The extensive experimental results verify the effectiveness of the proposed framework in estimating distributional causal effects.

Strengths and Weaknesses:

1. In order to estimate distributional causal effects, the authors introduce a novel estimator to improve the robustness of causal effect estimation. Different from previous works, the estimator is based on semi-parametric statistics.
2. The theoretical analysis proves the double robustness properties of the proposed framework. Based on that, the proposed estimator can converge to a correct value.
3. In order to verify the robustness of the proposed estimator, the authors apply it to permutation testing. The results are consistent with expectation, and agree with the analysis in the theoretical section.
4. They conduct extensive experiments on synthetic, semi-synthetic and real-world datasets. The experimental results indeed support their idea. The detailed implementation is presented on GitHub, which is helpful to the causal community.

Requested Changes:

1. The authors introduce a one-sided overlap assumption, but in Theorem 2, they did not analyse the connection with MMD. Is it helpful to obtain the convergence rate or lower the convergence rate? Because it requires e(x,t) > \epsilon, it seems like strict overlap already satisfies that?
2. The authors claim that the proposed estimator is doubly robust due to the properties of semi-parametric statistics, but it is a little difficult to see the connection between them.
3. The experiments are not solid from my perspective; especially, it lacks robustness certification experiments.

Typos: analyse -> analyze Kernelized-> kernelized Parametric-> parametric Permuations-> permutations … Please proofread the paper again and correct the typos.

Broader Impact Concerns: nan

==================================================

Review 3:

Summary: The paper studies the problem of testing for distributional causal effects. There are two main contributions of the paper:

1. The authors make use of the counterfactual mean embeddings framework and propose an estimator for the distributional embeddings. Unlike previous literature, the proposed estimator is doubly robust. The authors establish theoretical properties for the proposed estimator. This estimator can serve as a test statistic in the testing stage.
2. The authors propose a permutation test that utilizes the above estimator as the test statistic. In particular, the authors employ a smart approach that helps alleviate the computational challenges in permutation tests.

Strengths and Weaknesses:

Strengths:
1. The paper addresses an interesting and important question.
2. The paper is written in a clear and engaging manner.
3. The proposed method appears to perform well both theoretically and empirically.

Weaknesses: See next section for more details.

Requested Changes: I'm going to list my questions and comments here. I'm generally positive about the paper, and all the comments and questions below are intended to strengthen the work from my perspective.

1.
More background in Section 1: It would be helpful if the authors could provide more background before delving into the technical details in the first section. Inform readers who are not familiar with causal inference more about why this problem is significant, and why previous methods may not be adequate. 2. Page 4, the third equation from the bottom, contains a typo: it should be Y(0)|T=1 instead of Y(1)|T=0. 3. Around equation (3) on page 7: I particularly like this idea. It is very nice and effectively addresses the issue. It would be helpful if the authors could provide more intuition about what the procedure is doing. For example, explaining why choosing N = 5 in the experiment does not undermine the power of the test would be beneficial for the readers. 4. The goal of developing a doubly robust procedure: It seems that the proposed doubly robust estimator would be particularly advantageous when the propensity score is mis-specified. However, in the context of testing, the matching step would also fail, or at least not perform well, if you use propensity score matching; then the test becomes invalid. Conversely, if the propensity score is correctly specified, then the doubly robust estimator would perform similarly to the original estimator. What is the advantage of developing a doubly robust estimator in this scenario? Furthermore, achieving double robustness reduces the effective sample size due to sample splitting. It would be beneficial if the authors could clarify these points and/or provide additional simulation studies to justify their findings. 5. Figure 4 (b) and (c): The size of the test seems to exceed the 0.05 threshold, especially for the red line. Are the authors concerned about the potential for an inflated type-I error rate? 6. Extreme data points: On Page 10, the authors state, "Therefore, to prevent problems with extreme propensity scores, we remove any data for which e(x, t) < 0.03 for any value of t." 
Is this approach recommended for practitioners in general? 7. Image or graph simulation: In the first paragraph, the authors motivate the problem by stating, "This can be especially useful when the target variable is high-dimensional or structured (e.g., an image or a graph)." It is somewhat disappointing that the simulation studies later in the paper are confined to simpler scenarios. It would enhance the paper if the authors could include simulations on image or graph data, or provide stronger motivation for these simpler settings. Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Accept with minor revision Comment: After discussion, this paper is acceptable with minor revisions, outlined below. 1. Sec 2 presumes the reader is already familiar with potential outcomes and causal terminology. Please either add a short explainer with some more background on causal inference for those not familiar with the field, or add an appropriate reference for the reader to get up to speed. 2. Please correct and clarify the following: "This leads us to propose new permutation-based tests" in the Abstract reads still as if the "doubly robust property" and/or "improved convergence rates" lead to the tests, but this is not the case. 3. Please carefully check and fix all typos (all reviewers had minor suggestions in their reviews) Congratulations on a well-written paper! ==================================================
# V1T: Large-Scale Mouse V1 Response Prediction Using A Vision Transformer

Bryan M. Li¹ *bryan.li@ed.ac.uk*

Isabel M. Cornacchia¹ *isabel.cornacchia@ed.ac.uk*

Nathalie L. Rochefort²,³ *n.rochefort@ed.ac.uk*

Arno Onken¹ *aonken@ed.ac.uk*

¹*School of Informatics, University of Edinburgh*
²*Centre for Discovery Brain Sciences, University of Edinburgh*
³*Simons Initiative for the Developing Brain, University of Edinburgh*

Reviewed on OpenReview: *https://openreview.net/forum?id=qHZs2p4ZD4*

## Abstract

Accurate predictive models of the visual cortex neural response to natural visual stimuli remain a challenge in computational neuroscience. In this work, we introduce V1T, a novel Vision Transformer based architecture that learns a shared visual and behavioral representation across animals. We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance. Moreover, we show that the self-attention weights learned by the Transformer correlate with the population receptive fields. Our model thus sets a new benchmark for neural response prediction and can be used jointly with behavioral and neural recordings to reveal meaningful characteristic features of the visual cortex. Code available at github.com/bryanlimy/V1T.

## 1 Introduction

Understanding how the visual system processes information is a fundamental challenge in neuroscience. Predictive models of neural responses to naturally occurring stimuli have been shown to be a successful approach toward this goal, serving the dual purpose of generating new hypotheses about biological vision (Bashivan et al., 2019; Walker et al., 2019; Ponce et al., 2019) and bridging the gap between biological and computer vision (Li et al., 2019; Sinz et al., 2019; Safarani et al., 2021).
This approach relies on the idea that high performing predictive models, which explain a large part of the stimulus-driven variability, have to account for the nonlinear response properties of the neural activity, thus allowing for the identification of the underlying computations of the visual system (Carandini et al., 2005). An extensive amount of work on the primary visual cortex (V1) has been dedicated to building quantitative models that accurately describe neural responses to visual stimuli, starting from simple linear-nonlinear models (Heeger, 1992; Jones and Palmer, 1987), energy models (Adelson and Bergen, 1985) and multi-layer models (Lehky et al., 1992; Lau et al., 2002; Prenger et al., 2004). These models, based on neurophysiological data, provide a powerful framework to test hypotheses about neural functions and investigate the principles of visual processing. With the increased popularity of deep neural networks (DNNs) in computational neuroscience in recent years (Kietzmann et al., 2018; Richards et al., 2019; Li et al., 2020; 2021), DNNs have set new standards of prediction performance (Antolík et al., 2016; Klindt et al., 2017; Ecker et al., 2018; Zhang et al., 2019), allowing for a more extensive exploration of the underlying computations in sensory processing (Walker et al., 2019; Bashivan et al., 2019; Burg et al., 2021). DNN-based models are characterized by two main approaches. On the one hand, task-driven models rely on pre-trained networks optimized on standard vision tasks, such as object recognition, in combination with a readout mechanism to predict neural responses (Yamins et al., 2014; Cadieu et al., 2014; Cadena et al., 2019). 
With the goal of explaining the evolutionary and developmental constraints of the visual system, task-driven models have proven to be successful for predicting visual responses in primates (Yamins and DiCarlo, 2016; Cadena et al., 2019) and mice (Nayebi et al., 2022) by obtaining a shared generalized representation of the visual input across animals. On the other hand, data-driven models aim to build a predictive model on large-scale datasets without any assumption on the functional properties of the network. These models share a common representation by being trained end-to-end directly on data from thousands of neurons, and they have been shown to be successful as predictive models for the mouse visual cortex (Lurz et al., 2021; Franke et al., 2022). This approach allows us to identify core components that can be insightful when studying nontrivial computational properties of cortical neurons, especially in combination with experimental verification (Walker et al., 2019). Data-driven models for prediction of visual responses across multiple animals typically employ the core-readout framework (Klindt et al., 2017; Cadena et al., 2019; Lurz et al., 2021; Burg et al., 2021; Franke et al., 2022). Namely, a core module which learns a shared latent representation of the visual stimuli across the animals, followed by animal-specific linear readout modules to predict neural responses given the latent features. This architecture enforces the nonlinear computations to be performed by the shared core, which can in principle capture general characteristic features of the visual cortex (Lurz et al., 2021). The readout models then learn the animal-specific mapping from the shared representation of the input to the individual neural responses. With the advent of large-scale neural recordings, datasets that consist of thousands or even hundreds of thousands of neurons are becoming readily available (Stosiek et al., 2003; Steinmetz et al., 2021). 
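The core-readout framework described above can be sketched minimally as a shared feature extractor paired with per-animal linear maps. The layer sizes, animal names, and the use of a single ReLU layer as the "core" are our own simplifying assumptions; the actual models use CNN or ViT cores and more elaborate readout modules.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_features = 64, 32
w_core = rng.normal(scale=0.1, size=(n_pixels, n_features))  # shared core weights
readouts = {  # hypothetical animal-specific linear readouts
    "mouse_a": rng.normal(scale=0.1, size=(n_features, 100)),  # 100 neurons
    "mouse_b": rng.normal(scale=0.1, size=(n_features, 80)),   # 80 neurons
}

def predict(images, animal):
    """Shared nonlinear core, then an animal-specific linear readout;
    a softplus keeps predicted responses non-negative."""
    features = np.maximum(images @ w_core, 0.0)  # stand-in for a CNN/ViT core
    return np.log1p(np.exp(features @ readouts[animal]))

batch = rng.normal(size=(8, n_pixels))           # 8 flattened stimuli
print(predict(batch, "mouse_a").shape)           # (8, 100)
```

The point of the split is that all nonlinear computation lives in the shared core, so adding an animal only adds one linear map, which is what makes training across many animals with thousands of neurons tractable.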
This has led to an increase in the number of parameters needed in the readout network to account for the large number of neurons; hence, significant effort in neural predictive modeling has been dedicated to developing more efficient readout networks. On the other hand, due to their effectiveness and computational efficiency (Goodfellow et al., 2016), convolutional neural networks (CNNs) are usually chosen as the shared representation model. Recently, Vision Transformer (ViT, Dosovitskiy et al. 2021) has achieved excellent results in a broad range of computer vision tasks (Han et al., 2022) and Transformer-based (Vaswani et al., 2017) models have become increasingly popular in computational neuroscience (Tuli et al., 2021; Schneider et al., 2022; Whittington et al., 2022). For instance, Ye and Pandarinath (2021) proposed a Neural Data Transformer to model spike trains, which was extended by Le and Shlizerman (2022) using a Spatial Transformer to achieve state-of-the-art performance on 4 neural datasets. Berrios and Deza (2022) introduced a data augmentation and adversarial training procedure to train a dual-stream Transformer which showed strong performance in predicting monkey V4 responses. In modeling the mouse visual cortex, Conwell et al. (2021) experimented with a wide range of out-of-the-box DNNs, including CNNs and ViTs, to compare their representational similarity when pre-trained versus randomly initialized. Here, we explore the benefits of the ViT convolution-free approach and self-attention mechanism as the core representation learner in a data-driven neural predictive model. Note that, in this text, the term "attention" strictly refers to the self-attention layer in Transformers (Vaswani et al., 2017), which is distinct from the perceptual process of "attention" in the neuroscience literature.
Since neural variability shows a significant correlation with the internal brain state (Pakan et al., 2016; 2018; Stringer et al., 2019), information about behavior can greatly improve visual system models in the prediction of neural responses (Bashiri et al., 2021; Franke et al., 2022). To exploit this relationship, we also investigate a principled mechanism in the model architecture to integrate behavioral states with visual information. Altogether, we propose V1T, a novel ViT-based architecture that can capture visual and behavioral representations of the mouse visual cortex. This core architecture, in combination with an efficient per-animal readout (Lurz et al., 2021), outperforms the previous state-of-the-art model by 12.7% and 19.1% on two large-scale mouse V1 datasets (Willeke et al., 2022; Franke et al., 2022), which consist of neural recordings of thousands of neurons across over a dozen behaving rodents in response to thousands of natural images. Moreover, we show that the attention weights learned by the core module correlate with behavioral variables, such as pupil direction. This link between the model and the visual cortex activity is useful for pinpointing how behavioral variables affect neural activity.

## 2 Neural Data

We considered two large-scale neural datasets for this work, Dataset S by Willeke et al. (2022) and Dataset F by Franke et al. (2022). These two datasets consist of V1 recordings from behaving rodents in response to thousands of natural images, providing an excellent platform to evaluate our proposed method and compare it against previous visual predictive models. We first briefly describe the animal experiment in Dataset S. A head-fixed mouse was placed on a cylindrical treadmill with a 25 inch monitor placed 15 cm away from the animal's left eye, and more than 7,000 neurons from layer L2/3 in V1 were recorded via two-photon calcium imaging.
Note that the position of the monitor was selected such that the stimuli were shown to the center of the recorded population receptive field. Gray-scale images $x_{\text{image}} \in \mathbb{R}^{c=1 \times h \times w}$ from ImageNet (Deng et al., 2009) were presented to the animal for 500 ms with a blank screen period of 300 to 500 ms between each presentation. Neural activities were accumulated between 50 and 500 ms after each stimulus onset. In other words, for a given neuron i in trial (stimulus presentation) t, the neural response is represented by a single value $r_{i,t}$. In addition, the anatomical coordinates of each neuron as well as four behavioral variables $x_{\text{behaviors}}$ were recorded alongside the calcium responses. These variables include pupil dilation, the derivative of the pupil dilation, pupil center (2d-coordinates) and running speed of the animal. Each recording session consists of up to 6,000 image presentations (i.e. trials), where 5,000 unique images are combined with 10 repetitions of 100 additional unique images, randomly intermixed. The 1,000 trials with repeated images are used as the test set and the rest are divided into train and validation sets with a split ratio of 90% and 10% respectively. In total, data from 5 rodents (Mouse A to E) were recorded in this dataset. Dataset F follows largely the same experimental setup with the following distinctions: colored images (UV-colored and green-colored, i.e. $x_{\text{image}} \in \mathbb{R}^{c=2 \times h \times w}$) from ImageNet were presented on a screen placed 12 cm away from the animal; 4,500 unique colored and 750 monochromatic images were used as the training set and an additional 100 unique colored and 50 monochromatic images were repeated 10 times throughout the recording; in total, 10 rodents (Mouse F to O) were used in the experiment, with 1,000 V1 neurons recorded from each animal. Table A.1 summarizes the experimental information from both datasets.
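The trial split described above can be illustrated with a minimal numpy sketch. Note that in the actual recordings the repeated-image trials are randomly intermixed; here they are simply placed last for clarity, and the specific array layout is an assumption for illustration only.

```python
import numpy as np

# Sketch of the Dataset S trial split: up to 6,000 presentations, of which
# the 1,000 trials with repeated images form the test set; the remaining
# 5,000 unique-image trials are split 90%/10% into train and validation.
rng = np.random.default_rng(seed=0)

n_unique, n_repeat_images, n_repeats = 5000, 100, 10
trial_ids = np.arange(n_unique + n_repeat_images * n_repeats)

# the last 1,000 ids stand in for the repeated-image (test) trials
test_trials = trial_ids[n_unique:]
rest = rng.permutation(trial_ids[:n_unique])
n_train = int(0.9 * len(rest))
train_trials, val_trials = rest[:n_train], rest[n_train:]

print(len(train_trials), len(val_trials), len(test_trials))  # 4500 500 1000
```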
## 3 Previous Work

A substantial body of work has recently focused on predictive models of cortical activity that learn a shared representation across neurons (Klindt et al., 2017; Cadena et al., 2019; Lurz et al., 2021; Burg et al., 2021; Franke et al., 2022), which stems from the idea in systems neuroscience that cortical computations share common features across animals (Olshausen and Field, 1996). In DNN models, these generalizing features are learned in a nonlinear core module, then a subsequent neuron-specific readout module linearly combines the relevant features in this representation to predict the neural responses. Recently, Lurz et al. (2021) and Franke et al. (2022) introduced a shared CNN core and animal-specific Gaussian readout combination that achieved excellent performance in mouse V1 neural response prediction, and this is the current state-of-the-art model on large-scale benchmarks including Dataset S and Dataset F. Here, we provide a brief description of each of the modules in their proposed architecture, on which our work is built.

CNN core. Typically, the core module learns the shared visual representation via a series of convolutional blocks (Cadena et al., 2019; Lurz et al., 2021; Franke et al., 2022). In Lurz et al. (2021), given an input image $x_{\text{image}} \in \mathbb{R}^{c \times h \times w}$, the CNN core with filter size k outputs a latent representation vector $z \in \mathbb{R}^{d \times h' \times w'}$ where $h' = h - k + 1$, $w' = w - k + 1$ and d is the hidden dimension. The CNN core, after an exhaustive Bayesian hyperparameter search to optimize for the validation performance, has an output dimension of $z \in \mathbb{R}^{d \times h'=28 \times w'=56}$. Previous works have shown a correlation between behaviors and neural variability, and that behavioral variables can significantly improve neural predictivity (Niell and Stryker, 2010; Reimer et al., 2014; Stringer et al., 2019; Bashiri et al., 2021). To that end, Franke et al.
(2022) proposed to integrate the behavioral variables $x_{\text{behaviors}} \in \mathbb{R}^v$ with the visual stimulus by duplicating each variable to a $h \times w$ matrix and concatenating them with $x_{\text{image}}$ in the channel dimension, resulting in an input vector of $\mathbb{R}^{(c+v) \times h \times w}$.

Readout. To compute the neural response of neuron i from mouse m with $n_m$ neurons, the readout module $R_m : \mathbb{R}^{d \times h' \times w'} \to \mathbb{R}^{n_m}$ by Lurz et al. (2021) computes a linear regression of the core representation z with weights $w_i \in \mathbb{R}^{w' \times h' \times c}$, followed by an ELU activation with an offset of 1 (i.e. $o = \mathrm{ELU}(R_m(z)) + 1$), which keeps the response positive. The regression is performed by a Gaussian readout, which learns the parameters of a 2d Gaussian distribution whose mean $\mu_i$ represents the center of the receptive field of the neuron in the image space and whose variance quantifies the uncertainty of the receptive field position, which decreases over training. The response is thus obtained as a linear combination of the feature vector of the core at a single spatial position, which allows the model to greatly reduce the number of parameters per neuron in the readout. Notably, to learn the position $\mu_i$, the model also exploits the retinotopic organization of V1 by coupling the recorded cortical 2d coordinates of each neuron with the estimated center of the receptive field from the readout. Moreover, a shifter module is introduced to adjust (or shift) the receptive field center $\mu_i$ of neuron i to account for the trial-to-trial variability due to eye movement (Franke et al., 2022). The shifter network $\mathbb{R}^2 \to \mathbb{R}^2$ consists of 3 dense layers with a hidden size of 5 and tanh activation; it takes as input the 2d pupil center coordinates and learns the vertical and horizontal adjustments needed to shift $\mu_i$.

¹ The Sensorium Challenge was held at the NeurIPS 2022 Competition Track Program.
² 2 additional mice were used in the Sensorium challenge (Willeke et al., 2022) and their test sets are not publicly available.
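The core idea of the Gaussian readout above — reading the d-dimensional core feature vector at each neuron's learned receptive-field position $\mu_i$ and combining it linearly — can be sketched in a few lines of numpy. The nearest-position sampling, shapes, and variable names here are simplifications; the actual implementation (Lurz et al., 2021) samples positions from the learned Gaussian and interpolates bilinearly.

```python
import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(x) - 1)

def gaussian_readout(z, mu, weights, bias):
    """Minimal sketch of a Gaussian readout (nearest-position variant).

    z:       core output, shape (d, h', w')
    mu:      per-neuron receptive-field centers in [-1, 1]^2, shape (n, 2)
    weights: per-neuron feature weights, shape (n, d)
    bias:    per-neuron bias, shape (n,)
    """
    d, h, w = z.shape
    # map normalized coordinates to the nearest grid index
    cols = np.clip(np.round((mu[:, 0] + 1) / 2 * (w - 1)), 0, w - 1).astype(int)
    rows = np.clip(np.round((mu[:, 1] + 1) / 2 * (h - 1)), 0, h - 1).astype(int)
    feats = z[:, rows, cols].T                 # (n, d) feature vector per neuron
    resp = (feats * weights).sum(axis=1) + bias  # linear combination per neuron
    return elu(resp) + 1                       # ELU + 1 keeps responses positive

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 29, 57))           # d=8 channels on a 29 x 57 grid
o = gaussian_readout(z, rng.uniform(-1, 1, (5, 2)),
                     rng.standard_normal((5, 8)) * 0.1, np.zeros(5))
print(o.shape)  # (5,)
```

Because each neuron only stores a position and a d-dimensional weight vector, the parameter count per neuron stays small even for thousands of neurons.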
## 4 Methods

The aim of this work is to design a neural predictive model $F(x_{\text{image}}, x_{\text{behaviors}})$ that can effectively incorporate both visual stimuli and behavioral variables to predict responses o that are faithful to real recordings r from mouse V1. With that goal, we first detail the core architectures proposed in this work, followed by the training procedure and evaluation metrics. Code used in this work is attached as supplementary material and will be made publicly available upon publication.

## 4.1 V1T Core

Vision Transformers (Dosovitskiy et al., 2021), or ViTs, have achieved competitive performance in many computer vision tasks, including object detection and semantic segmentation, to name a few (Chen et al., 2020; Carion et al., 2020; Strudel et al., 2021). Here, we propose a data-driven ViT core capable of learning a shared representation of the visual stimuli that is relevant for the prediction of neural responses in the visual cortex. Moreover, we introduce an alternative approach in V1T to encode behavioral variables in a more principled way than previous methods, further improving the neural predictive performance of the overall model. The original ViT classifier comprises 3 main components: (1) a tokenizer first encodes the 3d image (including the channel dimension) into 2d patch embeddings, (2) the embeddings are then passed through a series of Transformer (Vaswani et al., 2017) encoder blocks, each consisting of a Multi-Head Attention (MHA) and a Multi-Layer Perceptron (MLP) module which requires 2d inputs, and finally (3) a classification layer outputs the class prediction. The following sections detail the modifications made to convert the vanilla ViT into a shared visual representation learner for the downstream readout modules. We additionally experiment with a number of recently proposed efficient ViTs that are designed for learning from small to medium-sized datasets.

![3_image_0.png](3_image_0.png)
Figure 1: Illustration of the V1T block architecture.

Tokenizer. The tokenizer, or patch encoder, extracts non-overlapping squared patches of size $p \times p$ from the 2d image and projects each patch to embeddings $z_0$ of size d, i.e. $\mathbb{R}^{c \times h \times w} \to \mathbb{R}^{l \times (cp^2)} \to \mathbb{R}^{l \times d}$, where $l = hw/p^2$ is the number of patches. Dosovitskiy et al. (2021) proposed two tokenization methods in the original ViT, where patches can be extracted either (1) via a $p \times p$ sliding window over the height and width dimensions of the image, followed by a linear layer with d hidden units, or (2) via a 2d convolutional layer with kernel size p and d filters. Transformer-based models benefit from (or even necessitate) pre-training on large datasets, in the magnitude of millions or even billions of samples, in order to obtain optimal performance (Han et al., 2022). In contrast, typical neural recordings in animal experiments are considerably smaller. To stay consistent with previous work, we instead focus on developing a core architecture that can be effectively trained from scratch on a limited amount of data. To that end, we considered two recently introduced efficient ViT methods that are highly competitive in scarce data settings. Lee et al. (2021) proposed Shifted Patch Tokenization (SPT) to combat the low inductive bias in ViTs and enable better learning from limited data. Conceptually, SPT allows additional (adjacent) pixel values to be included in each patch, thus improving the locality, or receptive field, of the model. The input image $x_{\text{image}} \in \mathbb{R}^{1 \times h \times w}$ is shifted spatially by p/2 in one of the four diagonal directions (top-left, top-right, bottom-left, bottom-right) with zero padding, and the four shifted images (i.e. each shifted in one diagonal direction) are then concatenated with the original image, resulting in a vector in $\mathbb{R}^{5 \times h \times w}$, which can be processed by the two patch extraction approaches mentioned above. With a similar goal in mind, the Compact Convolutional Transformer (CCT, Hassani et al.
2021) was proposed, which uses a convolutional tokenizer to learn the patch embeddings, taking advantage of the translation equivariance and locality inherent in CNNs. The proposed mini-CNN is fairly simple: it consists of a 2d convolution layer with a $p \times p$ kernel and filter size d, followed by a ReLU activation and a max pool layer. In this work, we experimented with and compared all four tokenization methods: sliding window, a single 2d convolutional layer, SPT and CCT. As ViTs are agnostic to the spatial structure of the data, a positional embedding is added to each patch to encode the relative position of the patches with respect to each other (Dosovitskiy et al., 2021; Han et al., 2022); this positional embedding can either be learned or sinusoidal. Finally, a learnable BERT (Devlin et al., 2019) [cls] token is typically added to the patch embeddings (i.e. $z_0 \in \mathbb{R}^{(l+1) \times d}$) to represent the class of the image.

Transformer encoder. The encoder consists of a series of ViT blocks, where each block comprises two sub-modules: Multi-Head Attention (MHA) and Multi-Layer Perceptron (MLP). In each MHA module, we applied the standard self-attention formulation (Vaswani et al., 2017): $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^\top/\sqrt{d})V$, where query Q, key K and value V are linear projections of the input $z_b$ at block b. Conceptually, the self-attention layer assigns a pairwise attention value among all the patches (or tokens). In addition to the standard formulation, we also experimented with Locality Self Attention (LSA, Lee et al. 2021), where a diagonal mask is applied to $QK^\top$ to prevent strong connections in self-tokens (i.e. diagonal values in $QK^\top$), thus improving the locality inductive bias. Each sub-module is preceded by Layer Normalization (LayerNorm, Ba et al. 2016), and followed by a residual connection to the next module.

Reshape representation.
To make the dimensions compatible with the Gaussian readout module (see Section 3 for an overview), we reshape the 2d core output $z \in \mathbb{R}^{l \times d}$ to $\mathbb{R}^{d \times h' \times w'}$, where $l = h' \times w'$ and $h' \le w'$. Note that if the number of patches l is not sufficiently large, it is possible for the same position in z to be mapped to multiple neurons, which could lead to adverse effects. For instance, in the extreme case of $l = 1$, all neurons would be mapped to a single $p \times p$ region in the visual stimulus (i.e. they would have the same visual receptive field), which is not biologically plausible given the size of the recorded cortical area (Garrett et al., 2014). We therefore set the stride size of the patch encoder as a hyperparameter and allow for overlapping patches, thus letting the hyperparameter optimization algorithm select the optimal number of patches. Given $x_{\text{image}} \in \mathbb{R}^{c \times h=36 \times w=64}$, the V1T core has an output dimension of $\mathbb{R}^{d \times h'=29 \times w'=57}$.

## 4.1.1 Incorporating Behaviors

Previous studies have shown that visual responses can be influenced by behavioral variables and brain states; for example, changes in arousal, which can be monitored by tracking pupil dilation, lead to stronger (or weaker) neural responses (Reimer et al., 2016; Larsen and Waters, 2018). As a consequence, the visual representation learned by the core module should also be adjusted according to the brain state. Here, instead of inputting a vector that is a concatenation of the visual stimulus $x_{\text{image}} \in \mathbb{R}^{c \times h \times w}$ and behavioral information $x_{\text{behaviors}} \in \mathbb{R}^v$ in the channel dimension (i.e. $\mathbb{R}^{(c+v) \times h \times w}$, see Section 3), we propose an alternative method to integrate behavioral variables with the visual stimulus using a novel ViT-based architecture, V1T, illustrated in Figure 1. We introduced a behavior MLP module (B-MLP: $\mathbb{R}^v \to \mathbb{R}^d$) at the beginning of the encoder block which learns to adjust the visual latent vector z based on the observed behavioral states $x_{\text{behaviors}}$.
Each B-MLP module comprises two fully-connected layers with d hidden units and a dropout layer in between; a tanh activation is used so that the adjustments to z can be both positive and negative. Importantly, as layers in DNNs learn different features of the input, usually increasingly abstract and complex with deeper layers (Zeiler and Fergus, 2014; Raghu et al., 2021), we hypothesize that the influence of the internal brain state should therefore change from layer to layer. To that end, we learned a separate B-MLP$_b$ at each block b in the V1T core, thus allowing level-wise adjustments to the visual latent variable. Formally, B-MLP$_b$ projects $x_{\text{behaviors}}$ to the same dimension as the embeddings $z_{b-1}$, followed by an element-wise summation between the latent behavioral and visual representations, and then the rest of the operations in the encoder block:

$$z_{b}\gets z_{b-1}+\text{B-MLP}_{b}(x_{\text{behaviors}})\tag{1}$$
$$z_{b}\gets\text{MHA}_{b}(\text{LayerNorm}(z_{b}))+z_{b}\tag{2}$$
$$z_{b}\gets\text{MLP}_{b}(\text{LayerNorm}(z_{b}))+z_{b}\tag{3}$$

where $z_0$ denotes the original patch embeddings. To compare the prediction performance difference due to our proposed behavior module, we also trained an equivalent Vision Transformer (denoted as ViT) with the same architecture as V1T except that it integrates behavioral information in the same manner as the CNN model (i.e. the ViT inputs $\mathbb{R}^{(c+v) \times h \times w}$).

## 4.2 Training And Evaluation

In order to isolate the change in prediction performance that is solely due to the proposed core architectures, we employed the same readout architectures as Lurz et al. (2021), as well as a similar data preprocessing and model training procedure. We used the same train, validation and test split provided by the two datasets (see Section 2). Natural images, recorded responses, and behavioral variables (i.e.
pupil dilation, dilation derivative, pupil center, running speed) were standardized using the mean and standard deviation measured from the training set, and the images were then resized from 144×256 to 36×64 pixels. The shared core and per-animal readout modules were trained jointly using the AdamW optimizer (Loshchilov and Hutter, 2019) to minimize the Poisson loss

$${\mathcal{L}}_{m}^{\mathrm{Poisson}}(r,o)=\sum_{t=1}^{n_{t}}\sum_{i=1}^{n_{m}}\left(o_{i,t}-r_{i,t}\log(o_{i,t})\right)\tag{4}$$

between the recorded responses r and predicted responses o, where $n_t$ is the number of trials in one batch and $n_m$ the number of neurons for mouse m. A small value $\varepsilon = 10^{-8}$ was added to both r and o prior to the loss calculation to improve numeric stability. Gradients from each mouse were accumulated before a single gradient update to all modules. We tried to separate the gradient update for each animal, i.e. one gradient update per core-readout combination, but this led to a significant drop in performance. We suspect this is because the core module failed to learn a generalized representation among all animals when each update step only accounted for gradient signals from one animal. We used a learning rate scheduler in conjunction with early stopping: if the validation loss did not improve over 10 consecutive epochs, we reduced the learning rate by a factor of 0.3; if the model still had not improved after 2 learning rate reductions, we then terminated the training process. Dropout (Srivastava et al., 2014), stochastic depth (Huang et al., 2016), and L1 weight regularization were added to prevent overfitting. The weights in dense layers were initialized by sampling from a truncated normal distribution (µ = 0.0, σ = 0.02) and their bias values were set to 0.0, whereas the weight and bias in LayerNorm were set to 1.0 and 0.0, respectively. Each model was trained on a single Nvidia RTX 2080Ti GPU and all models converged within 200 epochs.
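The encoder block update in Eqs. (1)–(3) can be sketched in numpy as a single-head block. This is a simplified illustration: the actual model uses multi-head attention, dropout, stochastic depth, and GELU activations, and the weight names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layernorm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def mlp(x, W1, W2):
    return np.maximum(x @ W1, 0) @ W2  # GELU in the real model; ReLU here

def v1t_block(z, x_behav, p):
    """One (single-head) V1T encoder block, following eqs. (1)-(3)."""
    # (1) B-MLP adjusts the visual latent; tanh keeps adjustments in [-1, 1]
    #     (activation placement is an assumption)
    z = z + np.tanh(np.tanh(x_behav @ p["B1"]) @ p["B2"])
    # (2) self-attention with residual connection
    h = layernorm(z)
    q, k, v = h @ p["Wq"], h @ p["Wk"], h @ p["Wv"]
    z = softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v + z
    # (3) MLP with residual connection
    return mlp(layernorm(z), p["W1"], p["W2"]) + z

l, d, v = 16, 8, 4  # patches, hidden dim, number of behavioral variables
p = {name: rng.standard_normal(shape) * 0.1
     for name, shape in [("B1", (v, d)), ("B2", (d, d)),
                         ("Wq", (d, d)), ("Wk", (d, d)), ("Wv", (d, d)),
                         ("W1", (d, 4 * d)), ("W2", (4 * d, d))]}
z_out = v1t_block(rng.standard_normal((l, d)), rng.standard_normal(v), p)
print(z_out.shape)  # (16, 8)
```

Because the behavioral adjustment in step (1) is broadcast over all l patches, the brain state shifts the entire visual representation entering that block.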
Finally, we employed Hyperband Bayesian optimization (Li et al., 2017) to find the hyperparameters that achieved the best performance on the validation set. This included finding the optimal tokenization method and self-attention mechanism. The initial search space and final hyperparameter settings are detailed in Table A.2. We independently performed a hyperparameter search on the CNN model, though we failed to find a configuration that achieves better performance than the settings provided by Lurz et al. (2021) and Franke et al. (2022). While learning rate warm-up and pre-training on large datasets are considered the standard approach to train Transformers (Xiong et al., 2020; Han et al., 2022), in order to stay consistent with previous work and to isolate the performance gain solely due to the architectural change, all models presented in this work are trained from scratch and follow the same procedure stated above. The prediction performance of our models was measured by the single trial correlation metric, used by Willeke et al. (2022) and Franke et al. (2022), which can also account for the trial-to-trial variability in the test set where the same visual stimuli were shown multiple times. We computed the correlation between recorded r and predicted o responses:

$$\mathrm{trial~corr.}(r,o)={\frac{\sum_{i,j}(r_{i,j}-{\bar{r}})(o_{i,j}-{\bar{o}})}{\sqrt{\sum_{i,j}(r_{i,j}-{\bar{r}})^{2}\sum_{i,j}(o_{i,j}-{\bar{o}})^{2}}}}\tag{5}$$

where $\bar{r}$ and $\bar{o}$ are the average recorded and predicted responses across all trials in the test set.

## 5 Results

Here, we first discuss the final core architecture chosen after the Bayesian hyperparameter optimization, followed by a comparison of our proposed core against baseline models on the two large-scale mouse V1 datasets. Moreover, we analyze the trained core module and present the insights that can be gained from it. We present the cross-animal and cross-dataset generalization in Appendix A.4.
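The Poisson objective and the single trial correlation metric defined above can be sketched in numpy as follows; the array shapes are assumptions for illustration, while the ε offset on both r and o matches the procedure described in Section 4.2.

```python
import numpy as np

def poisson_loss(r, o, eps=1e-8):
    """Poisson loss summed over trials and neurons."""
    r, o = r + eps, o + eps  # small offset for numeric stability
    return np.sum(o - r * np.log(o))

def trial_corr(r, o):
    """Single trial correlation, pooled over all trials and neurons."""
    rc, oc = r - r.mean(), o - o.mean()
    return np.sum(rc * oc) / np.sqrt(np.sum(rc ** 2) * np.sum(oc ** 2))

rng = np.random.default_rng(0)
r = rng.poisson(2.0, size=(1000, 50)).astype(float)     # (trials, neurons)
o = np.clip(r + rng.normal(0, 1.0, r.shape), 0, None)   # noisy "predictions"
print(poisson_loss(r, o), trial_corr(r, o))
```

The Poisson loss is minimized when the predicted rate equals the recorded response, which is why predicting each response exactly gives a lower loss than predicting the mean response everywhere.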
Table 1: The single trial correlation (corr.) between predicted and recorded responses on the Dataset S and Dataset F test sets. ∆CNN and ∆ViT show the relative differences against the CNN (Lurz et al., 2021) and ViT models with behavioral variables; we additionally fitted a CNN and ViT core with stimulus-response pairs only (behav: ✗) to evaluate the prediction performance without behavioral information. sd shows the standard deviation across animals; detailed per-animal results are available in Appendix A.3.

| | behav | Dataset S (Willeke et al.) corr. (sd) | ∆CNN | ∆ViT | Dataset F (Franke et al.) corr. (sd) | ∆CNN | ∆ViT |
|------|:-----:|--------------:|-------:|-------:|--------------:|-------:|-------:|
| LN | ✓ | 0.275 (0.019) | -27.2% | -33.7% | 0.223 (0.040) | -28.0% | -35.4% |
| CNN | ✗ | 0.300 (0.021) | -20.6% | -27.6% | | | |
| CNN | ✓ | 0.378 (0.029) | 0.0% | -8.7% | 0.309 (0.070) | 0.0% | -10.3% |
| ViT | ✗ | 0.319 (0.024) | -15.6% | -22.9% | | | |
| ViT | ✓ | 0.414 (0.032) | +9.5% | 0.0% | 0.344 (0.041) | +11.4% | 0.0% |
| V1T | ✓ | 0.426 (0.027) | +12.7% | +3.0% | 0.368 (0.032) | +19.1% | +6.9% |
| *Ensemble of 5 models* | | | | | | | |
| CNN | ✓ | 0.404 (0.025) | +6.9% | -2.3% | 0.340 (0.050) | +10.0% | -1.3% |
| ViT | ✓ | 0.424 (0.026) | +12.2% | +2.4% | 0.365 (0.037) | +18.1% | +6.0% |
| V1T | ✓ | 0.439 (0.027) | +16.1% | +6.1% | 0.378 (0.033) | +22.3% | +3.8% |

V1T benefits from smaller and overlapping patches. We first looked at how the hyperparameters of ViT and V1T affect model performance. We observed the predictive performance to be quite sensitive to the number of patches, the patch size, and the patch stride. The most performant models used a patch size of 8 and a stride size of 1, thus extracting the maximum number of patches. We note that this allows the readout to learn a mapping from the shared core representation of the stimulus to the cortical position of each neuron that spans the whole image, and not just a part of the image.
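The effect of patch size and stride on the number of patches follows the usual convolution output-size arithmetic (assuming no padding), which also reproduces the 29 × 57 core output grid reported in Section 4.1:

```python
# Patch-grid size for (possibly overlapping) patches:
# n = (dim - patch) // stride + 1 per spatial dimension
def grid(h, w, p, s):
    return (h - p) // s + 1, (w - p) // s + 1

# best model: 36 x 64 input, patch size 8, stride 1 -> 29 x 57 patch grid
print(grid(36, 64, 8, 1))  # (29, 57)
# non-overlapping patches (stride = patch size) would give only 4 x 8
print(grid(36, 64, 8, 8))  # (4, 8)
```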
Since the visual receptive fields of the neurons are distributed across a large area of the monitor, given the size of the recorded cortical area, this leads to more accurate response predictions from the model. Furthermore, we found that the two efficient tokenizers, SPT and CCT, whose aim is to reduce the number of patches, both failed to improve the model performance, reiterating that a finer tiling of the image is crucial for accurate predictions of cortical activity. Moreover, we found that the LSA attention mechanism, which encourages the model to learn from inter-tokens by masking out the diagonal self-token, led to worse performance, suggesting that information from adjacent patches is not as influential in this task as it is in image classification. Appendix A.1 details the importance of each hyperparameter and the test performance trade-off among the various tokenizers and attention mechanisms. Lastly, we found that V1T with layer-wise B-MLP modules yields the best results, indicating that the modulation introduced by behavioral information varies as the core learns different visual representations at deeper layers. Further analysis and discussion on the B-MLP module are presented in Appendix A.2.

V1T outperforms CNN. Next, we compared the tuned ViT and V1T cores against a baseline linear-nonlinear (LN) model and the previous state-of-the-art CNN model (Lurz et al., 2021) on the two large-scale mouse V1 datasets (see Section 2). We also trained a CNN and ViT core on stimulus-response pairs only on Dataset S, to evaluate the importance of behavioral information in response predictions. Table 1 summarizes the test performance on the two datasets; per-animal results and an alternative metric are available in Appendix A.3.
By simply replacing the CNN core module with the tuned ViT architecture, we observed a considerable improvement in response predictions across all animals, with an average increase of 9.5% and 11.3% in single trial correlation over the CNN model in Dataset S and Dataset F respectively. Thus far, the core module encoded the brain state of the animals by concatenating behavioral variables as additional channels of the natural image. In comparison, our proposed V1T core, which encodes the brain state via the B-MLP nonlinear transformations, further improved the average prediction performance by 2.9% and 7.0% in the two datasets, or 12.7% and 19.1% over the CNN model. As demonstrated in the Sensorium Challenge (Sensorium Workshop, 2022) and Franke et al. (2022), ensemble learning is a common approach to improve neural predictive models. Following the procedure in Franke et al. (2022), we trained 10 models with different random seeds and selected the 5 best models based on their validation performance. The average of the selected models' outputs constituted the output of the ensemble model. The CNN ensemble model achieved an average improvement of 6.9% in Dataset S compared to its non-ensemble variant. Nevertheless, the individual V1T model still outperformed the CNN ensemble by 5.4%. A V1T ensemble trained with the same procedure achieved an average single trial correlation of 0.439, which corresponds to an 8.7% improvement over the CNN ensemble model. Altogether, our proposed core architecture sets a new benchmark in both gray-scale and colored visual response prediction.

Sample efficiency. Most neural datasets are constrained by their limited size, due to technical and/or ethical limitations, while typical DNNs, especially Transformer-based models, require a large amount of data to train on (Han et al., 2022).
Here, we evaluated the sample efficiency of the CNN, ViT and V1T models by fitting them with 500 (11%), 1500 (33%), 2500 (55%), 3500 (77%) and 4500 (100%) samples per animal in Dataset S (Willeke et al., 2022). Figure 2 shows the single trial correlation on the test set for the three models trained on different sample sizes, each with 30 different random seeds. Overall, we found that V1T outperforms the CNN model even at 1500 training samples per animal. Moreover, the predictive performance of the CNN model plateaus at around 3500 training samples, while V1T keeps improving, suggesting that the ViT-based model can continue to improve with more data.

Spatial tuning difference. As expected, models trained without behavioral information led to worse results (see behav: ✗ in Table 1). Nevertheless, we observed an average 6.3% improvement in stimulus-response prediction with the tuned ViT core over the CNN model in Dataset S. To further our understanding of why the ViT might be performing better in visual response prediction, we evaluated the discrepancies in spatial tuning of the two models by comparing their artificial receptive fields (aRFs); Appendix A.5 details the procedure. Briefly, we presented the models with thousands of white noise images and then summed the images weighted by the response prediction to estimate the aRF of each artificial unit. Figure 3a shows the aRF of the same artificial unit from the CNN and ViT model. Visually, the aRFs of the ViT model appear to be narrower and qualitatively different from the aRFs of the CNN. In order to quantify the aRF sizes, we fitted a 2d Gaussian to each aRF and observed a significant difference in the standard deviation distributions, shown in Figure 3b. Overall, the aRFs of the ViT model have a much narrower spread, with a mean standard deviation of 3.0 ± 0.5 and 2.6 ± 0.4 in the horizontal and vertical directions over all artificial units, considerably lower than the 5.1 ± 1.5 and 3.1 ± 0.9 of the CNN.
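The aRF estimation above (a response-weighted sum of white-noise stimuli, followed by a 2d Gaussian fit) can be sketched as follows. The toy linear unit, the noise statistics, the thresholding, and the moment-based Gaussian fit are simplifying assumptions; the actual procedure is detailed in Appendix A.5 of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_arf(model, n_images=3000, shape=(36, 64)):
    """Response-weighted average of white-noise stimuli for one unit."""
    noise = rng.standard_normal((n_images, *shape))
    responses = np.array([model(x) for x in noise])
    # sum the noise images, each weighted by the predicted response
    return np.tensordot(responses, noise, axes=1) / n_images

def gaussian_width(arf, thresh=0.3):
    """Std. of a 2d Gaussian fitted to the aRF via image moments."""
    w = np.abs(arf)
    w = np.where(w >= thresh * w.max(), w, 0.0)  # suppress estimation noise
    w = w / w.sum()
    ys, xs = np.mgrid[:arf.shape[0], :arf.shape[1]]
    mx, my = (w * xs).sum(), (w * ys).sum()
    sx = np.sqrt((w * (xs - mx) ** 2).sum())
    sy = np.sqrt((w * (ys - my) ** 2).sum())
    return sx, sy

# toy "unit": a linear model with a known Gaussian receptive field
ys, xs = np.mgrid[:36, :64]
rf = np.exp(-((xs - 40) ** 2 + (ys - 20) ** 2) / (2 * 3.0 ** 2))
arf = estimate_arf(lambda x: (x * rf).sum())
sx, sy = gaussian_width(arf)
print(round(float(sx), 2), round(float(sy), 2))
```

For a linear unit, the response-weighted average converges to the unit's receptive field (a spike-triggered-average estimate), so the fitted widths recover the spatial extent of the toy field up to noise and truncation effects.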
![8_image_0.png](8_image_0.png)

Figure 2: Prediction performance when trained with 500, 1500, 2500, 3500 and (all) 4500 samples per animal in Dataset S. The models were each trained with 30 different random seeds. The error bar shows the standard deviations of the repeated experiments, and the statistical difference (two-sided t-test) in CNN vs V1T and ViT vs V1T in each sample group is shown above each pair of bars (****: p ≤ 0.0001).

![8_image_1.png](8_image_1.png)

Figure 3: (a) Estimated artificial receptive field (aRF) and 2d Gaussian fit (the red circle shows the 1 standard deviation ellipse) of the same artificial unit from the CNN and ViT models trained without behaviors. Visually, the ViT learns narrower aRFs; more examples in Appendix A.5. To quantify the size of the aRFs, we compared the fitted Gaussians over all units from Mouse A; (b) the distributions of the standard deviations show that the ViT learns notably narrower aRFs. V1T attention visualization on Mouse A (c) validation and (d) test samples. Each attention map was normalized to [0, 1], and the behavioral variables of the corresponding trial are shown below the image in the format of [pupil dilation, dilation derivative, pupil center (x, y), speed]. More examples in Appendix A.6.

These results show that the artificial units in the CNN and ViT learn notably different aRFs. Given that we did not constrain the aRF size, our results suggest that the narrower fields allow the ViT to learn location-dependent features that are beneficial for visual response prediction.

Self-attention visualization. In addition to the performance gain of the proposed core architecture, the self-attention mechanism inherent in Transformers can be used to visualize the areas in the input image that the model learns to focus on. In our case, it allows us to detect the regions in the visual stimulus that drive the neural responses.
To that end, we extracted the per-stimulus attention map learned by the V1T core module via Attention Rollout (Abnar and Zuidema, 2020; Dosovitskiy et al., 2021). Briefly, we aggregated the attention weights (i.e. $\mathrm{softmax}(QK^\top/\sqrt{d})$) across all heads in MHA, and then multiplied the weights over all layers (blocks), recursively. Figure 3 shows the normalized average attention weights superimposed on the input images from Mouse A, with more examples available in Appendix A.6. Given that the position of the computer monitor was chosen in order to center the population receptive field, V1 responses from the recorded region should be mostly influenced by the center of the image (Willeke et al., 2022). Here, we can see a clear trend where the core module focuses on the central regions of the images to predict the neural response, which aligns with our expectation from the experimental conditions. Interestingly, when the core module inputs the same image but with varying behaviors (i.e. Figure 3d), we noticed variations in the attention patterns. This suggests that the V1T core is able to take behavioral variables into consideration and adjust its attention solely based on the brain state. These attention maps can inform us of the areas of the image (ir)relevant for triggering visual neuronal responses, which, in turn, allows us to build more sophisticated predictive models. For instance, the core module consistently assigned higher weights to patches in the center of the image, suggesting that information at the edges of the image is less (or not at all) relevant for the recorded group of neurons. As a practical example, we eliminated irrelevant information in the stimuli by center cropping the image to $\alpha 144 \times \alpha 256$ pixels where $0 < \alpha \le 1$, prior to downsampling the input to 36 × 64 pixels. We found that a crop factor of α = 0.8 (i.e. removing 36% of the total number of pixels) further improved the single trial correlation to 0.430, or 13.8% better than the CNN.
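Attention Rollout as used above can be sketched as follows: head-averaged attention matrices are mixed with the identity (to account for the residual connections, as in Abnar and Zuidema, 2020), row-normalized, and multiplied recursively across blocks. The toy attention matrices below are random stand-ins.

```python
import numpy as np

def attention_rollout(attentions):
    """attentions: list of per-block attention weights, each (heads, l, l)."""
    l = attentions[0].shape[-1]
    rollout = np.eye(l)
    for a in attentions:
        a = a.mean(axis=0)                     # aggregate over heads
        a = 0.5 * a + 0.5 * np.eye(l)          # identity for the residual path
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalize the rows
        rollout = a @ rollout                  # multiply recursively over blocks
    return rollout

rng = np.random.default_rng(0)
def rand_attn(heads=4, l=9):
    e = np.exp(rng.standard_normal((heads, l, l)))
    return e / e.sum(axis=-1, keepdims=True)   # row-wise softmax

rollout = attention_rollout([rand_attn() for _ in range(4)])
# rows of the rollout matrix remain valid attention distributions
print(np.allclose(rollout.sum(axis=-1), 1.0))  # True
```

Each row of the result is then reshaped back to the patch grid to obtain a per-stimulus attention map over the image.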
Note that we also obtained similar improvement with the CNN model. Self-attention correlates with pupil center. To further explore the relationship between the attention weights learned by the core module and the behavioral information, we measured the absolute correlation between the center of mass of the attention maps and the pupil centers in the vertical and horizontal axes. The correlation coefficient of each animal in Dataset S is summarized in Table 2. Overall, we found a moderate correlation between the attention maps and the pupil center of the animal, with an average correlation (standard deviation) of 0.525 (0.079) and 0.409 (0.105) in the horizontal and vertical directions across animals. This relationship demonstrates that the attention maps can reveal the impact of behavioral variables on the neural responses. Therefore, this framework can be particularly useful for studies investigating the coding of visual information across visual cortical areas (V1 and higher visual areas), as the model could determine what part(s) of the visual stimulus is processed along the "hierarchy" of visual cortical areas. Since higher visual areas are known to have larger receptive fields (Wang and Burkhalter, 2007; Glickfeld et al., 2014), we would expect a larger part of the image to be relevant for the core module. Further investigation of the attention map could also be used to determine which part of a visual scene was relevant when performing more specific tasks, such as object recognition, decision-making, or spatial navigation. Table 2: Correlations between the center of mass of the attention maps and pupil centers in the (x-axis) horizontal and (y-axis) vertical direction in Dataset S test set, all with a p-value ≪ 0.0001. 
| Mouse | x-axis | y-axis |
|---------|--------------|--------------|
| A | 0.682 (****) | 0.568 (****) |
| B | 0.489 (****) | 0.493 (****) |
| C | 0.505 (****) | 0.370 (****) |
| D | 0.484 (****) | 0.310 (****) |
| E | 0.464 (****) | 0.302 (****) |

## 6 Discussion

In this work, we presented a novel core architecture, V1T, to model the visual and behavioral representations of mouse V1 activities in response to natural visual stimuli. The model outperformed the previous state-of-the-art CNN model (Lurz et al., 2021) on two large-scale mouse V1 datasets by a considerable margin (12.7% and 19.1%). In contrast to the winning submissions at the Sensorium Challenge (Sensorium Workshop, 2022), which focused on data augmentation and building large ensembles based on the CNN model, we instead introduced a new architecture as the shared core module. Our best model achieved a single trial correlation of 0.428 and 0.444 (correlation to average: 0.634 and 0.650) on the two held-out test sets, which would place us 2nd on the leaderboard and make ours the best method among all models that do not take the neuronal response trends over time into account. We also showed that V1T can be competitive in the low data regime, and that its performance continues to improve with more data to a larger extent than the CNN model. To the best of our knowledge, our approach is also the first ViT-based model to outperform CNNs in mouse V1 response prediction. With a strong neural predictive performance, this model also provides a framework to investigate *in silico* the computations in the visual system and, in particular, the modulation of neural responses by behavioral variables. In this study, we included the speed of the animal in the virtual corridor, pupil dilation, dilation derivative and pupil center as behavioral variables. For each of these variables, there is prior evidence showing that they do affect responses in V1. For instance, Pakan et al.
(2018) showed that 12% of the recorded V1 neurons decreased their activity with lower running speed, suggesting a clear benefit of considering the speed of the animal for predicting V1 responses. Pupil dilation has been shown to be related to arousal of the animal, with complex modality dependent effects of arousal on the mouse sensory cortex (Shimaoka et al., 2018). The pupil center represents the fixation point of the animal and is a proxy for what the animal is paying attention to. As a proof of principle of how a Vision Transformer can be used to gain insights into the importance of behavioral variables for V1 responses, we showed that the center of the self-attention maps learned by our model correlates with the pupil center of the animals, highlighting how features of this architecture do reflect properties of cortical neurons' receptive fields, in this case, the retinotopy. Moreover, our model is able to exploit certain anatomical information, for example the location of neurons within the primary visual cortex, from which we can roughly infer the location of their receptive field since the retinotopic map of mouse primary visual cortex is well characterized (Zhuang et al., 2017). However, while the CNN architecture was inspired by receptive fields of the visual cortex (Fukushima, 1980), the Vision Transformer architecture was not and has no direct biological counterpart. Therefore, it is challenging to map the abstract components of a Vision Transformer onto the anatomy or biophysics of the brain. Nevertheless, the V1T model has a number of limitations. Firstly, only one-dimensional behavioral information can be incorporated since the model integrates scalars into the latent embedding via the B-MLP module. Additional architecture engineering is needed if the behavioral variables have varying (and higher) dimensions, for instance, 3D poses (Mathis et al., 2018). Secondly, in the case of very limited data (e.g. 
500 samples, see Figure 2), CNN-based models are likely to outperform ViTs, which typically require a considerable amount of data to be performant (Han et al., 2022). In future work, we plan to further investigate the relationship between behavioral variables and neural responses. The attention visualization technique, for instance, enables ablation studies on the effect of each behavioral variable, such as pupil dilation or running speed, on the neural activity. Moreover, we plan to extend the method to recordings of the visual cortex in response to natural videos, to track how this relationship may evolve over time, as well as to experiments in naturalistic settings, to learn which part of a visual scene is relevant for certain behaviors.

## Acknowledgments

We sincerely thank Willeke et al. and Franke et al. for making their high-quality large-scale mouse recordings publicly available, which made this work possible. We would also like to thank Antonio Vergari, Matthias Hennig and Robyn Greene for their insightful comments and suggestions on improving the manuscript. BML was supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 866386; to N.R.). For the purpose of open access, the author has applied a creative commons attribution (CC BY) licence to any author accepted manuscript version arising.

## References

Abnar, S. and Zuidema, W. (2020). Quantifying attention flow in transformers. *Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics*.

Adelson, E. H. and Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. *JOSA A*, 2(2):284–299.

Antolík, J., Hofer, S. B., Bednar, J. A., and Mrsic-Flogel, T. D. (2016).
Model constrained by visual hierarchy improves prediction of neural responses to natural scenes. *PLoS computational biology*, 12(6):e1004927. Ba, J. L., Kiros, J. R., and Hinton, G. E. (2016). Layer normalization. *arXiv preprint arXiv:1607.06450*. Bashiri, M., Walker, E., Lurz, K.-K., Jagadish, A., Muhammad, T., Ding, Z., Ding, Z., Tolias, A., and Sinz, F. (2021). A flow-based latent state generative model of neural population responses to natural images. Advances in Neural Information Processing Systems, 34:15801–15815. Bashivan, P., Kar, K., and DiCarlo, J. J. (2019). Neural population control via deep image synthesis. *Science*, 364(6439):eaav9436. Berrios, W. and Deza, A. (2022). Joint rotational invariance and adversarial training of a dual-stream transformer yields state of the art brain-score for area v4. In *SVRHM 2022 Workshop @ NeurIPS*. Biewald, L. (2020). Experiment tracking with weights and biases. Software available from wandb.com. Burg, M. F., Cadena, S. A., Denfield, G. H., Walker, E. Y., Tolias, A. S., Bethge, M., and Ecker, A. S. (2021). Learning divisive normalization in primary visual cortex. *PLOS Computational Biology*, 17(6):e1009028. Cadena, S. A., Denfield, G. H., Walker, E. Y., Gatys, L. A., Tolias, A. S., Bethge, M., and Ecker, A. S. (2019). Deep convolutional models improve predictions of macaque v1 responses to natural images. PLoS computational biology, 15(4):e1006897. Cadieu, C. F., Hong, H., Yamins, D. L., Pinto, N., Ardila, D., Solomon, E. A., Majaj, N. J., and DiCarlo, J. J. (2014). Deep neural networks rival the representation of primate it cortex for core visual object recognition. PLoS computational biology, 10(12):e1003963. Carandini, M., Demb, J. B., Mante, V., Tolhurst, D. J., Dan, Y., Olshausen, B. A., Gallant, J. L., and Rust, N. C. (2005). Do we know what the early visual system does? *Journal of Neuroscience*, 25(46):10577–10597. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. (2020). 
End-to-end object detection with transformers. In *European conference on computer vision*, pages 213–229. Springer. Chen, M., Radford, A., Child, R., Wu, J., Jun, H., Luan, D., and Sutskever, I. (2020). Generative pretraining from pixels. In *International conference on machine learning*, pages 1691–1703. PMLR. Conwell, C., Mayo, D., Barbu, A., Buice, M., Alvarez, G., and Katz, B. (2021). Neural regression, representational similarity, model zoology & neural taskonomy at scale in rodent visual cortex. *Advances in Neural* Information Processing Systems, 34. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pages 248–255. IEEE. Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2021). An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*. Ecker, A. S., Sinz, F. H., Froudarakis, E., Fahey, P. G., Cadena, S. A., Walker, E. Y., Cobos, E., Reimer, J., Tolias, A. S., and Bethge, M. (2018). A rotation-equivariant convolutional neural network model of primary visual cortex. *arXiv preprint arXiv:1809.10504*. Franke, K., Willeke, K. F., Ponder, K., Galdamez, M., Zhou, N., Muhammad, T., Patel, S., Froudarakis, E., Reimer, J., Sinz, F. H., et al. (2022). 
State-dependent pupil dilation rapidly shifts visual feature selectivity. Nature, 610(7930):128–134. Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. *Biological cybernetics*, 36(4):193–202. Garrett, M. E., Nauhaus, I., Marshel, J. H., and Callaway, E. M. (2014). Topography and areal organization of mouse visual cortex. *Journal of Neuroscience*, 34(37):12587–12600. Glickfeld, L. L., Reid, R. C., and Andermann, M. L. (2014). A mouse model of higher visual cortical function. Current opinion in neurobiology, 24:28–33. Goodfellow, I., Bengio, Y., and Courville, A. (2016). *Deep learning*. MIT press. Han, K., Wang, Y., Chen, H., Chen, X., Guo, J., Liu, Z., Tang, Y., Xiao, A., Xu, C., Xu, Y., et al. (2022). A survey on vision transformer. *IEEE transactions on pattern analysis and machine intelligence*. Hassani, A., Walton, S., Shah, N., Abuduweili, A., Li, J., and Shi, H. (2021). Escaping the big data paradigm with compact transformers. *arXiv preprint arXiv:2104.05704*. Heeger, D. J. (1992). Half-squaring in responses of cat striate cells. *Visual neuroscience*, 9(5):427–443. Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. Q. (2016). Deep networks with stochastic depth. In *European conference on computer vision*, pages 646–661. Springer. Jones, J. P. and Palmer, L. A. (1987). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. *Journal of neurophysiology*, 58(6):1187–1211. Kietzmann, T. C., McClure, P., and Kriegeskorte, N. (2018). Deep neural networks in computational neuroscience. *BioRxiv*, page 133504. Klindt, D., Ecker, A. S., Euler, T., and Bethge, M. (2017). Neural system identification for large populations separating "what" and "where". *Advances in Neural Information Processing Systems*, 30. Larsen, R. S. and Waters, J. (2018). Neuromodulatory correlates of pupil dilation. *Frontiers in neural circuits*, 12:21. 
Lau, B., Stanley, G. B., and Dan, Y. (2002). Computational subunits of visual cortical neurons revealed by artificial neural networks. *Proceedings of the National Academy of Sciences*, 99(13):8974–8979. Le, T. and Shlizerman, E. (2022). STNDT: Modeling neural population activity with spatiotemporal transformers. In Oh, A. H., Agarwal, A., Belgrave, D., and Cho, K., editors, *Advances in Neural* Information Processing Systems. Lee, S. H., Lee, S., and Song, B. C. (2021). Vision transformer for small-size datasets. arXiv preprint arXiv:2112.13492. Lehky, S. R., Sejnowski, T. J., and Desimone, R. (1992). Predicting responses of nonlinear neurons in monkey striate cortex to complex patterns. *Journal of Neuroscience*, 12(9):3568–3581. Li, B. M., Amvrosiadis, T., Rochefort, N., and Onken, A. (2020). Calciumgan: A generative adversarial network model for synthesising realistic calcium imaging data of neuronal populations. *arXiv preprint* arXiv:2009.02707. Li, B. M., Amvrosiadis, T., Rochefort, N., and Onken, A. (2021). Neuronal learning analysis using cycleconsistent adversarial networks. *arXiv preprint arXiv:2111.13073*. Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A. (2017). Hyperband: A novel bandit-based approach to hyperparameter optimization. *The Journal of Machine Learning Research*, 18(1):6765–6816. Li, Z., Brendel, W., Walker, E., Cobos, E., Muhammad, T., Reimer, J., Bethge, M., Sinz, F., Pitkow, Z., and Tolias, A. (2019). Learning from brains how to regularize machines. *Advances in neural information* processing systems, 32. Loshchilov, I. and Hutter, F. (2019). Decoupled weight decay regularization. In International Conference on Learning Representations. Lurz, K.-K., Bashiri, M., Willeke, K., Jagadish, A., Wang, E., Walker, E. Y., Cadena, S. A., Muhammad, T., Cobos, E., Tolias, A. S., Ecker, A. S., and Sinz, F. H. (2021). Generalization in data-driven models of primary visual cortex. 
In *International Conference on Learning Representations*. Mathis, A., Mamidanna, P., Cury, K. M., Abe, T., Murthy, V. N., Mathis, M. W., and Bethge, M. (2018). Deeplabcut: markerless pose estimation of user-defined body parts with deep learning. *Nature neuroscience*, 21(9):1281–1289. Nayebi, A., Kong, N., Zhuang, C., Gardner, J. L., Norcia, A. M., and Yamins, D. (2022). Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation. *BioRxiv*. Niell, C. M. and Stryker, M. P. (2010). Modulation of visual responses by behavioral state in mouse visual cortex. *Neuron*, 65(4):472–479. Olshausen, B. A. and Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. *Nature*, 381(6583):607–609. Pakan, J. M., Francioni, V., and Rochefort, N. L. (2018). Action and learning shape the activity of neuronal circuits in the visual cortex. *Current opinion in neurobiology*, 52:88–97. Pakan, J. M., Lowe, S. C., Dylda, E., Keemink, S. W., Currie, S. P., Coutts, C. A., and Rochefort, N. L. (2016). Behavioral-state modulation of inhibition is context-dependent and cell type specific in mouse visual cortex. *Elife*, 5. Ponce, C. R., Xiao, W., Schade, P. F., Hartmann, T. S., Kreiman, G., and Livingstone, M. S. (2019). Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. *Cell*, 177(4):999–1009. Prenger, R., Wu, M. C.-K., David, S. V., and Gallant, J. L. (2004). Nonlinear v1 responses to natural scenes revealed by neural network analysis. *Neural Networks*, 17(5-6):663–679. Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., and Dosovitskiy, A. (2021). Do vision transformers see like convolutional neural networks? *Advances in Neural Information Processing Systems*, 34:12116–12128. Reimer, J., Froudarakis, E., Cadwell, C. R., Yatsenko, D., Denfield, G. H., and Tolias, A. S. (2014). 
Pupil fluctuations track fast switching of cortical states during quiet wakefulness. *neuron*, 84(2):355–362. Reimer, J., McGinley, M. J., Liu, Y., Rodenkirch, C., Wang, Q., McCormick, D. A., and Tolias, A. S. (2016). Pupil fluctuations track rapid changes in adrenergic and cholinergic activity in cortex. Nature communications, 7(1):1–7. Richards, B. A., Lillicrap, T. P., Beaudoin, P., Bengio, Y., Bogacz, R., Christensen, A., Clopath, C., Costa, R. P., de Berker, A., Ganguli, S., et al. (2019). A deep learning framework for neuroscience. Nature neuroscience, 22(11):1761–1770. Safarani, S., Nix, A., Willeke, K., Cadena, S., Restivo, K., Denfield, G., Tolias, A., and Sinz, F. (2021). Towards robust vision by multi-task learning on monkey visual cortex. Advances in Neural Information Processing Systems, 34:739–751. Schneider, S., Lee, J. H., and Mathis, M. W. (2022). Learnable latent embeddings for joint behavioral and neural analysis. *arXiv preprint arXiv:2204.00673*. Sensorium Workshop (2022). Sensorium competition presentation - NeurIPS 2022 workshop. https://youtu.be/lncQPWROFbc. Shimaoka, D., Harris, K. D., and Carandini, M. (2018). Effects of arousal on mouse sensory cortex depend on modality. *Cell Reports*, 22(12):3160–3167. Sinz, F. H., Pitkow, X., Reimer, J., Bethge, M., and Tolias, A. S. (2019). Engineering a less artificial intelligence. *Neuron*, 103(6):967–979. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1):1929–1958. Steinmetz, N. A., Aydin, C., Lebedeva, A., Okun, M., Pachitariu, M., Bauza, M., Beau, M., Bhagat, J., Böhm, C., Broux, M., et al. (2021). Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. *Science*, 372(6539). Stosiek, C., Garaschuk, O., Holthoff, K., and Konnerth, A. (2003). In vivo two-photon calcium imaging of neuronal networks. 
*Proceedings of the National Academy of Sciences*, 100(12):7319–7324. Stringer, C., Pachitariu, M., Steinmetz, N., Reddy, C. B., Carandini, M., and Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. *Science*, 364(6437):eaav7893. Strudel, R., Garcia, R., Laptev, I., and Schmid, C. (2021). Segmenter: Transformer for semantic segmentation. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pages 7262–7272. Tuli, S., Dasgupta, I., Grant, E., and Griffiths, T. L. (2021). Are convolutional neural networks or transformers more like human vision? *arXiv preprint arXiv:2105.07197*. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. *Advances in neural information processing systems*, 30. Walker, E. Y., Sinz, F. H., Cobos, E., Muhammad, T., Froudarakis, E., Fahey, P. G., Ecker, A. S., Reimer, J., Pitkow, X., and Tolias, A. S. (2019). Inception loops discover what excites neurons most using deep predictive models. *Nature neuroscience*, 22(12):2060–2065. Wang, Q. and Burkhalter, A. (2007). Area map of mouse visual cortex. *Journal of Comparative Neurology*, 502(3):339–357. Whittington, J. C. R., Warren, J., and Behrens, T. E. (2022). Relating transformers to models and neural representations of the hippocampal formation. In *International Conference on Learning Representations*. Willeke, K. F., Fahey, P. G., Bashiri, M., Pede, L., Burg, M. F., Blessing, C., Cadena, S. A., Ding, Z., Lurz, K.-K., Ponder, K., et al. (2022). The sensorium competition on predicting large-scale mouse primary visual cortex activity. *arXiv preprint arXiv:2206.08666*. Xiong, R., Yang, Y., He, D., Zheng, K., Zheng, S., Xing, C., Zhang, H., Lan, Y., Wang, L., and Liu, T. (2020). On layer normalization in the transformer architecture. In *International Conference on Machine* Learning, pages 10524–10533. PMLR. Yamins, D. L. and DiCarlo, J. J. 
(2016). Using goal-driven deep learning models to understand sensory cortex. *Nature neuroscience*, 19(3):356–365. Yamins, D. L., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., and DiCarlo, J. J. (2014). Performanceoptimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the national academy of sciences, 111(23):8619–8624. Ye, J. and Pandarinath, C. (2021). Representation learning for neural population activity with Neural Data Transformers. *Neurons, Behavior, Data analysis, and Theory*. Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In *European* conference on computer vision, pages 818–833. Springer. Zhang, Y., Lee, T. S., Li, M., Liu, F., and Tang, S. (2019). Convolutional neural network models of v1 responses to complex patterns. *Journal of computational neuroscience*, 46(1):33–54. Zhuang, J., Ng, L., Williams, D., Valley, M., Li, Y., Garrett, M., and Waters, J. (2017). An extended retinotopic map of mouse cortex. *elife*, 6:e18372. ## A Appendix Table A.1: Experimental information of Mouse A to E from Dataset S (Willeke et al., 2022) and Mouse F to O from Dataset F (Franke et al., 2022). Each mouse has a unique recording ID (column 2) although we assigned a separate mouse ID (column 1) to use throughout this paper for simplicity. | Mouse | rec. ID | num. neurons | total trials | num. 
test | |---------|-------------|----------------|----------------|-------------| | A | 21067-10-18 | 8372 | 5994 | 998 | | B | 22846-10-16 | 7344 | 5997 | 999 | | C | 23343-5-17 | 7334 | 5951 | 989 | | D | 23656-14-22 | 8107 | 5966 | 993 | | E | 23964-4-22 | 8098 | 5983 | 994 | | F | 25311-10-26 | 867 | 7358 | 1475 | | G | 25340-3-19 | 922 | 7478 | 1497 | | H | 25704-2-12 | 773 | 7500 | 1500 | | I | 25830-10-4 | 1024 | 7360 | 1473 | | J | 26085-6-3 | 910 | 7464 | 1495 | | K | 26142-2-11 | 1121 | 7500 | 1500 | | L | 26426-18-13 | 1125 | 7500 | 1500 | | M | 26470-4-5 | 1160 | 7473 | 1495 | | N | 26644-6-2 | 824 | 7500 | 1500 | | O | 26872-21-6 | 1109 | 7466 | 1495 | ## A.1 Hyperparameters | hyperparameter | search space | final value | |--------------------------------------|-----------------------------------------|----------------| | Core num. blocks | uniform, min: 1, max: 8 | 4 | | num. heads | uniform, min: 1, max: 12 | 4 | | patch size | uniform, min: 2, max: 16 | 8 | | patch stride | uniform, min: 1, max: patch size | 1 | | patch method | sliding window, 2d conv, SPT, CCT | sliding window | | patch dropout | uniform, min: 0, max: 0.5 | 0.0229 | | embedding size | uniform, min: 8, max: 1024, interval: 1 | 155 | | mha method | original, LSA | original | | mha dropout | uniform, min: 0, max: 0.5 | 0.2544 | | mlp size | uniform, min: 8, max: 1024, interval: 1 | 488 | | mlp dropout | uniform, min: 0, max: 0.5 | 0.2544 | | stochastic depth dropout | uniform, min: 0, max: 0.5 | 0.0 | | L1 weight regularization | uniform, min: 0, max: 1 | 0.5379 | | initial learning rate | uniform, min: 0.005, max: 0.0001 | 0.0016 | | Readout position network num. layers | uniform, min: 1, max: 4, interval: 1 | 1 | | position network num. 
units | uniform, min: 2, max: 128, interval: 2 | 30 |
| bias initialization | 0, mean standardized response | 0 |
| L1 weight regularization | uniform, min: 0, max: 1 | 0.0076 |

Table A.2: ViT and V1T cores - Gaussian readout hyperparameter search space and their final settings after a Hyperband Bayesian optimization (Li et al., 2017).

| hyperparameter | importance | correlation |
|--------------------------|--------------|---------------|
| embedding size | 0.393 | -0.626 |
| patch stride | 0.164 | -0.358 |
| patch size | 0.111 | -0.297 |
| initial learning rate | 0.046 | 0.279 |
| L1 weight regularization | 0.030 | -0.242 |
| num. blocks | 0.030 | 0.093 |
| num. heads | 0.028 | -0.070 |
| batch size | 0.026 | -0.093 |
| mha dropout | 0.025 | -0.034 |
| patch method | 0.024 | -0.174 |
| mlp dropout | 0.022 | 0.133 |
| mlp size | 0.019 | -0.186 |
| stochastic depth dropout | 0.019 | -0.225 |
| patch dropout | 0.017 | -0.105 |
| mha method | 0.014 | 0.001 |

Table A.3: ViT and V1T hyperparameter importance in Hyperband Bayesian optimization (Li et al., 2017) via Weights & Biases (Biewald, 2020). importance shows the degree to which the hyperparameter is useful to predict the evaluation metric (e.g. single trial correlation in the validation set) and correlation shows the linear correlation between the hyperparameter and the evaluation metric. Details on the calculation and interpretation of the hyperparameter importance and correlation are available at docs.wandb.ai/guides/app/features/panels/parameter-importance.

Table A.4: Best prediction performance in single trial correlation (standard deviation across animals) on Dataset S with respect to the choice of attention formulation and patch/tokenization method. original denotes the original self-attention formulation by Vaswani et al. 2017 and LSA denotes the Locality Self Attention mechanism proposed by Lee et al. 2021.
SPT denotes Shifted Patch Tokenization (Lee et al., 2021) and CCT denotes the tokenization method introduced in the Compact Convolution Transformer (Hassani et al., 2021). Section 4.1 details the model architectural differences and Section 5 discusses their prediction results.

| MHA \ patch method | sliding window | 2d conv | SPT | CCT |
|--------------------|----------------|---------------|---------------|---------------|
| original | 0.426 (0.027) | 0.411 (0.022) | 0.406 (0.024) | 0.392 (0.026) |
| LSA | 0.413 (0.023) | 0.415 (0.024) | 0.405 (0.024) | 0.385 (0.025) |

## A.2 B-MLP Activation

![18_image_0.png](18_image_0.png)

We investigated different variations of the B-MLP module. The motivation of the proposed behavior module is to enable the core to learn a shared representation of the visual and behavioral variables across the animals. Moreover, the level-wise connections allow the self-attention module in each V1T block to encode different behavioral features with the latent visual representation. We experimented with a per-animal B-MLP module (while the rest of the core was still shared across animals), which did not perform any better than the shared counterpart, suggesting that the behavior module can indeed learn a shared internal brain state representation. We also tested having the module in the first block only, as well as using the same module across all blocks (i.e. all B-MLPb shared the same weights). Both cases, however, led to worse results with a 2–4% reduction in predictive performance on average. To further examine the proposed formulation, we analyzed the activation patterns of the shared behavior module at each level in V1T, shown in Figure A.1. We observed a noticeable distinction in B-MLP outputs in earlier versus deeper layers, with a higher spread in deeper layers, which corroborates our hypothesis that the influence of the behavioral variables differs at each level of the visual representation process.
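A minimal NumPy sketch of one level-wise B-MLP, assuming a single tanh layer (matching the activations in Figure A.1) and an additive coupling into the token embeddings; the exact merge operation and layer sizes are assumptions for illustration:

```python
import numpy as np

def b_mlp_block(tokens, behaviors, W, b):
    """One level-wise B-MLP sketch: embed the behavioral scalars
    (pupil dilation, dilation derivative, pupil center x/y, speed) and
    merge them with the latent visual tokens of one V1T block.

    tokens:    (num_tokens, emb_dim) latent visual representation of the block
    behaviors: (num_behaviors,) behavioral variables of the trial
    W, b:      per-block weights (num_behaviors, emb_dim) and bias (emb_dim,)
    """
    behavior_embedding = np.tanh(behaviors @ W + b)  # tanh outputs, cf. Figure A.1
    return tokens + behavior_embedding[None, :]      # broadcast over all tokens
```

Each block holding its own (W, b) lets the self-attention at every level encode a different behavioral feature, while the module weights stay shared across animals.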
Figure A.1: tanh activation distributions of B-MLP at each level (block) in the V1T core. The spread of activation distributions indicates varying influence of behavioral variables at the block in the core module. ## A.3 Prediction Results Table A.5: Single trial correlation between predicted and recorded responses in Dataset S test set. All models were trained with behaviors. To demonstrate that the extracted attention maps can inform us about the (ir)relevant regions in the visual stimulus, we trained an additional V1T core with images center cropped to αh × αw pixels (See Section 5). | | | Mouse | | | | | |---------------------------|-------|---------|-------|-------|-------|---------------| | | A | B | C | D | E | avg (sd) | | LN | 0.262 | 0.306 | 0.281 | 0.263 | 0.262 | 0.275 (0.019) | | CNN | 0.350 | 0.424 | 0.385 | 0.371 | 0.360 | 0.378 (0.029) | | ViT | 0.375 | 0.455 | 0.415 | 0.433 | 0.392 | 0.414 (0.032) | | V1T | 0.401 | 0.464 | 0.430 | 0.436 | 0.401 | 0.426 (0.027) | | V1T (center crop α = 0.8) | 0.403 | 0.468 | 0.433 | 0.442 | 0.403 | 0.430 (0.028) | | Ensemble of 5 models CNN | 0.379 | 0.443 | 0.409 | 0.406 | 0.385 | 0.404 (0.025) | | ViT | 0.398 | 0.460 | 0.421 | 0.440 | 0.401 | 0.424 (0.026) | | V1T | 0.414 | 0.475 | 0.443 | 0.452 | 0.413 | 0.439 (0.027) | | | | | | Mouse | | | | | | | | |--------------------------------------------|-------|-------|-------|---------|-------|-------|-------|-------|---------------|----------|---------------| | F | G | H | I | J | K | L | M | N | O | avg (sd) | | | LN | 0.194 | 0.254 | 0.214 | 0.279 | 0.255 | 0.233 | 0.148 | 0.231 | 0.174 | 0.243 | 0.223 (0.040) | | CNN | 0.253 | 0.371 | 0.184 | 0.377 | 0.329 | 0.319 | 0.207 | 0.331 | 0.341 | 0.376 | 0.309 (0.070) | | ViT | 0.310 | 0.375 | 0.352 | 0.379 | 0.385 | 0.262 | 0.294 | 0.360 | 0.358 | 0.368 | 0.344 (0.041) | | V1T | 0.326 | 0.386 | 0.387 | 0.394 | 0.398 | 0.373 | 0.298 | 0.377 | 0.363 | 0.379 | 0.368 (0.032) | | Ensemble of 5 models CNN 0.268 0.383 0.341 | | 
0.393 | 0.347 | 0.336 | 0.242 | 0.345 | 0.355 | 0.388 | 0.340 (0.050) | | | | ViT | 0.321 | 0.384 | 0.363 | 0.404 | 0.406 | 0.374 | 0.302 | 0.385 | 0.323 | 0.387 | 0.365 (0.037) | | V1T | 0.336 | 0.397 | 0.391 | 0.406 | 0.408 | 0.383 | 0.306 | 0.388 | 0.373 | 0.392 | 0.378 (0.033) |

Table A.6: Single trial correlation between predicted and recorded responses in Dataset F test set. All models were trained with behaviors.

## A.3.1 Correlation To Average

Correlation to Average (avg. corr.) is another commonly used metric to evaluate neural predictive models (Willeke et al., 2022). It is the correlation between the recorded responses $r_{i,j}$ and the predicted responses $o_{i,j}$, averaged over repeated trials $j$ of stimulus $i$:

$$\text{avg. corr.}(r,o)=\frac{\sum_{i}(\bar{r}_{i}-\bar{r})(o_{i}-\bar{o})}{\sqrt{\sum_{i}(\bar{r}_{i}-\bar{r})^{2}\sum_{i}(o_{i}-\bar{o})^{2}}}\tag{6}$$

where $\bar{r}_{i}=\frac{1}{J}\sum_{j=1}^{J}r_{i,j}$ is the average response across $J$ repeats, and $\bar{r}$ and $\bar{o}$ are the average recorded and predicted responses across all trials.

Table A.7: The Correlation to Average (avg. corr.) between predicted and recorded responses across all animals (SD shows the standard deviation) in Dataset S and Dataset F test sets. Table 1 shows the results in single trial correlation.

| | Dataset S (Willeke et al.) | | Dataset F (Franke et al.) | | | | |----------------------------------------|------------------------------|--------|-----------------------------|-----------------|--------|--------| | behav | avg. corr.
(SD) | ∆CNN | ∆ViT | | LN | 0.387 (0.023) | -33.1% | -37.7% | 0.312 (0.076) | -39.7% | -42.5% | | CNN | 0.551 (0.024) | -4.7% | -4.6% | | | | | CNN | 0.578 (0.027) | 0.0% | -6.9% | 0.516 (0.142) | 0.0% | -4.7% | | ViT | 0.568 (0.026) | -1.7% | -8.5% | | | | | ViT | 0.621 (0.030) | +7.4% | 0.0% | 0.542 (0.054) | 4.9% | 0.0% | | V1T | 0.629 (0.029) | +8.9% | 1.4% | 0.551 (0.022) | +6.6% | +1.6% | | Ensemble of 5 models CNN 0.610 (0.027) | | +5.5% | -1.7% | 0.567 (0.050) | +9.9% | +4.8% | | ViT | 0.634 (0.027) | +9.7% | +2.1% | 0.566 (0.035) | +9.5% | +4.4% | | V1T | 0.644 (0.026) | +11.3% | +3.7% | 0.562 (0.023) | +8.9% | +3.8% | ## A.4 Cross-Animal And Cross-Dataset Generalization DNN-based neural predictive models are often neuron/animal specific and do not generalize well to unseen neurons/animals. Here, we evaluate generalization performance of CNN and V1T. We first tested the cross-animal performance of the CNN and V1T models by performing cross-validation over animals in Dataset S (Willeke et al., 2022). Specifically, we compare the model fitted on one animal (direct setting) against a model that was pre-trained on N − 1 animals and whose readout was fine-tuned (with core frozen) on the left-out animal (transfer setting). We repeated this process for all 5 animals, and their results are summarized in Table A.8. On average, the V1T model outperformed the CNN model by 3.3% and 6.7% in the direct and transfer settings, respectively. Moreover, the V1T model experienced a larger level of performance gain in the transfer learning setting, with an average prediction improvement of 5.6% over direct training, whereas the CNN had a 2.2% gain. These results suggest that the V1T core can generalize well to unseen animals, and also benefit from transfer learning to a greater extent. Next, we evaluated the cross-dataset generalization performance. To that end, we fitted the models on the gray-scaled version (average channel dimension) of Dataset F (Franke et al., 2022). 
We then froze the core module, trained the readouts on Dataset S, and compared the loss in performance in this transfer setting. The results are presented in Table A.9 for the two core architectures. We observed a larger performance drop with the frozen V1T model compared to the model trained directly, with an average deficit of −19.0%, versus the −12.9% drop in the frozen CNN model. Similar to the cross-animal generalization, the CNN model exhibits a higher level of variation in prediction performance over the 5 animals. While the relative performance drop was greater for the V1T core than for the CNN core, V1T achieved better transfer results with an average single trial correlation of 0.345, or about 4.9% better than the frozen CNN (0.329).

Table A.8: **CNN vs V1T cross-animal generalization in Dataset S**. We compare the test performance between (direct) fitting one model per animal and (transfer) pre-training a model on N − 1 animals and fine-tuning the readout for the Nth animal. We repeat the same leave-one-out process for all animals. ∆direct shows the relative prediction performance of the transfer models over the direct models.
| | Mouse A | Mouse B | Mouse C | Mouse D | Mouse E | avg (sd) | ∆direct |
|---|---|---|---|---|---|---|---|
| CNN direct | 0.332 | 0.422 | 0.389 | 0.400 | 0.335 | 0.376 (0.040) | |
| transfer | 0.357 | 0.420 | 0.386 | 0.398 | 0.359 | 0.384 (0.027) | 2.2% |
| V1T direct | 0.368 | 0.417 | 0.394 | 0.414 | 0.347 | 0.388 (0.030) | |
| transfer | 0.384 | 0.450 | 0.414 | 0.415 | 0.385 | 0.410 (0.027) | 5.6% |

| | Mouse A | Mouse B | Mouse C | Mouse D | Mouse E | avg (sd) | ∆original |
|---|---|---|---|---|---|---|---|
| CNN original | 0.350 | 0.424 | 0.385 | 0.371 | 0.360 | 0.378 (0.029) | |
| transfer | 0.314 | 0.353 | 0.337 | 0.316 | 0.327 | 0.329 (0.016) | -12.9% |
| V1T original | 0.401 | 0.464 | 0.430 | 0.436 | 0.401 | 0.426 (0.027) | |
| transfer | 0.327 | 0.382 | 0.347 | 0.343 | 0.328 | 0.345 (0.022) | -19.0% |

Table A.9: **CNN vs V1T cross-dataset generalization**. We first pre-trained the core module on a gray-scale version of Dataset F, then (transfer) froze the core and fine-tuned the readouts on Dataset S. ∆original shows the test performance drop in the cross-dataset transfer learning setting as compared to (original) a model directly trained on Dataset S.

## A.5 Artificial Receptive Fields

Here, we outline the procedure to estimate the artificial receptive fields (aRFs) of the CNN and ViT models (not V1T, since there is no behavior involved) and the process to compare their spatial positions and sizes. We first present each trained model with N = 500,000 images of white noise drawn from a uniform distribution.
The aRF of unit $i$ is then computed as the summation of all noise images, weighted by the respective output:

$$\mathrm{aRF}_{i}=\sum_{n}^{N}\mathrm{F}(x_{n})_{i}*x_{n},\quad x_{n}\sim{\mathcal{U}}^{1\times36\times64}\qquad(7)$$

where model $\mathrm{F}$ can be either the CNN or ViT, and $\mathrm{F}(x_{n})_{i}$ denotes the response of unit $i$ given white noise image $x_{n}$. Figure A.2 shows the estimated aRFs of 3 randomly selected artificial units (out of 8372 in the readout for Mouse A) from the two models. To quantify the location and size of the aRFs, we fitted a 2d Gaussian to each aRF and compared the mean and covariance of the fitted parameters. We repeated the same process for all 8372 artificial units. Concretely, we first subtracted the mean from each aRF to center the values on the baseline, then took their absolute values and fitted a 2d Gaussian using SciPy's curve_fit() function. Note that not all aRFs have a good fit. We thus dropped the bottom 5% of the fitted results. Figure A.2c shows the KDE plot of the fitted Gaussian means from the aRFs of the CNN and ViT. The vast majority of the aRFs are centered with respect to the image, aligning with our expectations from the attention rollout maps (see Section 5). Figure 3b compares selected artificial units from Mouse A.

![22_image_0.png](22_image_0.png)

The red circles (1 standard deviation ellipse) show the 2d Gaussian fit. (c) KDE of the Gaussian centers of the two models.

![23_image_0.png](23_image_0.png)

## A.6 Attention Rollout Maps

Figure A.3: V1T attention visualization on validation and test samples of Mouse A to E from Dataset S.
As the computer monitor was positioned such that the visual stimuli were presented to the center of the receptive field of the recorded neurons (see Dataset S discussion in Section 2), we expected regions in the center of the image to correlate the most with the neural responses, indicating that the core module learned to assign higher attention weights toward those regions. Note that the core module is shared among all mice. For this reason, we also expected similar patterns across animals. We observed small variations in the attention maps in the test set, where the image is the same and behavioral variables vary, suggesting the core module learned to adjust its attention based on the internal brain state. To quantify this result, we further showed that there are moderate correlations between the center of mass of the attention maps and the pupil center; see the discussion in Section 5. Each attention map was normalized to [0, 1], and the behavioral variables of the corresponding trial are shown below the image in the format [pupil dilation, dilation derivative, pupil center (x, y), speed]. The figure continues on the next page.

![24_image_0.png](24_image_0.png)

## A.7 Behaviors And Predictive Performance

![25_image_0.png](25_image_0.png)

Figure A.4: Predictive performance w.r.t. pupil dilation in Dataset S. Previous work has shown that pupil dilation is an indication of arousal, i.e. stronger (or weaker) neural responses with respect to the visual stimulus (Reimer et al., 2016; Larsen and Waters, 2018). We thus expected that a similar tendency could also be observed with our model. Here, we divided the test set into 3 subsets based on pupil dilation. We then compared the predictive performance of the model on the (large) subset containing the largest third of pupil dilations against the (small) subset containing the smallest third. We observed that trials with larger pupil sizes are better predicted, with an average difference of +17.5% across animals.
The dashed lines indicate the quartiles of the distributions and the percentage above each violin plot shows the relative prediction improvement of the larger set against the smaller set. ![26_image_0.png](26_image_0.png) ## A.8 Readout Position And Retinotopy Figure A.5: The learned readout position with respect to neuron anatomical coordinates in Mouse A. The position network in the Gaussian readout (see Section 3) learns the mapping between the latent visual representation (i.e. output of the core, bottom right panel) and the 2d anatomical location of each neuron (left panel). Lurz et al. (2021) and Willeke et al. (2022) demonstrated that a smooth mapping can be obtained when color-coding each neuron by its corresponding readout position unit. This aligned with our expectation that neurons that are close in space should have a similar receptive field (Garrett et al., 2014). Here, we showed that, despite the substantial architectural change, a similar mapping can also be obtained with the V1T core. The code to generate this plot was written by Willeke et al. (2022) and is available at github.com/sinzlab/sensorium.
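As a closing sketch for this appendix, the two computations above can be written in a few lines of NumPy. This is a minimal illustration, not the released code; the array shapes, function names, and the toy model in the usage are illustrative assumptions. It covers the correlation-to-average metric from Eq. (6) and the aRF estimate from Eq. (7):

```python
import numpy as np

def correlation_to_average(recorded, predicted):
    """Eq. (6): correlation between trial-averaged recorded responses and
    (averaged) predicted responses; inputs have shape (stimuli, repeats)."""
    r_bar = recorded.mean(axis=1)          # average over the J repeats per stimulus
    o_bar = predicted.mean(axis=1)
    r_c, o_c = r_bar - r_bar.mean(), o_bar - o_bar.mean()
    return float((r_c * o_c).sum() / np.sqrt((r_c**2).sum() * (o_c**2).sum()))

def artificial_receptive_fields(model, num_images, shape=(1, 36, 64), seed=0):
    """Eq. (7): sum of uniform white-noise images, each weighted by the
    model's response, yielding one aRF per output unit."""
    rng = np.random.default_rng(seed)
    arf = None
    for _ in range(num_images):
        x = rng.uniform(size=shape)                           # x_n ~ U^(1x36x64)
        weighted = model(x)[:, None, None, None] * x[None]    # F(x_n)_i * x_n
        arf = weighted if arf is None else arf + weighted
    return arf                                                # shape: (units,) + shape
```

A 2d Gaussian can then be fitted to each resulting aRF (after mean subtraction and taking absolute values) with SciPy's `curve_fit()`, as described in Section A.5.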
Review 1:

Summary: This paper proposes a new ViT-based architecture to predict mouse V1 neural responses. The architecture integrates visual and behavioral input across animals and surpasses previous CNN models in prediction performance. A detailed analysis of the results from the new architecture and a comparison to the previous CNN models are given.

Strengths and Weaknesses:
Strengths: The paper is clearly written and the analysis is thorough.
Weaknesses: The biological insight is limited.

Requested Changes: Although the paper is well written overall, I would like to see the following points discussed in more detail:
1. About hyperparameter tuning:
   1. What is the hyperparameter search space?
   2. Are the previous CNN models in comparison also tuned for a fair comparison?
2. What is the number of output features in the core module for each of the models? Does this number have any effect on the read-out module?
3. How does the work help us understand how the visual system processes information?

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: This work introduces V1T, a novel Vision Transformer-based architecture that predicts neural responses in mouse primary visual cortex to natural images. V1T learns a shared visual and behavioral representation across animals and outperforms previous convolution-based models. V1T also reveals meaningful insights into the visual cortex, such as the correlation between self-attention weights and population receptive fields and the modulation of attention by behavioral variables.

Strengths and Weaknesses:
+ Presents an alternative architecture (V1T) to CNNs that can achieve comparable performance, raising the question of whether the CNN architecture is a canonical model or whether other factors are at play.
+ Shows that behavior as an additional input feature greatly improves prediction performance.
+ Shows some distinct aspects of the receptive field of their proposed V1T model compared to the CNN model.

Questions: How were the behavioral variables relevant to their area of interest determined? Understanding how these choices interact with prediction performance and, more importantly, what scientific conclusions can be drawn from this interaction would be scientifically helpful. Can the authors expand on this interaction?

How are the behavior variables integrated consistently between the different architectures? Were eqs. (1), (2), (3) used in the same way to integrate behavior information into the CNN architectures? What are ways to control for the fact that it is usually easier to integrate multimodal information in architectures like transformers than in CNNs, because CNN features have an explicit spatial dimension to their representations?

"We tried to separate the gradient update for each animal, i.e. one gradient update per core-readout combination, but this led to a significant drop in performance." Can the authors clarify what the above statement in section 4.2 means? Does this impact the training of the shared core model, or is this specific to the readout layer?

Requested Changes: Could the authors elaborate on the distinction between functional and anatomical properties? The authors mention: "For instance, the center of the self-attention maps learned by our model correlates with the pupil center of the animals, highlighting how features of this architecture do reflect properties of cortical neurons' receptive fields, in this case, the retinotopy." and then the authors go on to state: "This does not imply that our model can directly map onto the anatomy or biophysics of the brain." Is retinotopy considered a functional or anatomical characteristic of the visual system?
This is important because the claims of functional similarity being independent of anatomical or biophysical similarity do not seem possible, especially when you go to lower-level areas of the ventral stream (compared to something further downstream like IT). Can the authors include some clarifying discussion on the statement quoted above?

Broader Impact Concerns: I do not believe this work requires a broader impact statement.

==================================================

Review 3:

Summary: This work aims to improve predictive models of mouse V1 cortex. The authors propose a new architecture for a predictive model that uses a shared ViT backbone and mouse-dependent readouts. The model also incorporates behavioral measures from the mice, such as pupil dilation. This predictive model is trained from scratch using two public datasets. The proposed model is shown to substantially outperform CNN-based models on V1 neuron prediction. Furthermore, the authors show that the incorporation of the behavioral data improves the predictive performance and that the self-attention weights learned by the transformer backbone correlate with the neuron receptive fields.

Strengths and Weaknesses: Overall, I think this is solid work in computational neuroscience. My biggest concern is that it is not clear that a machine learning journal is the most appropriate venue for this work, since the main contributions are in neuroscience. That said, I leave this question to the discretion of the Action Editor and would support publication if the work is judged appropriate for this venue.

Strengths: The additional analysis showing the relationship between Transformer self-attention weights and neuron receptive fields is informative and deeper than the analyses offered by many other works in this area.

Weaknesses:
1. The motivation for why looking at the problem of predicting mouse V1 is important can be strengthened. Even within the field of vision, this is considered quite low-level.
The authors can do more to motivate why this is important from a neuroscience point of view, and if they can say something about its importance to ML, that would strengthen the reasons to publish this work in an ML venue.
2. On the results side, I would like the authors to report standard deviations of the means every time they report a mean across mice. Otherwise, it's difficult to judge the difference between models.
3. On the modeling side, there are several future avenues for improvement, such as predicting responses for held-out mice, and exploring a pretraining-finetuning paradigm which may be useful to tune the model for a particular dataset. Those are not prerequisites for publication of the current work, though.

Requested Changes: I would like the authors to address weaknesses 1 and 2 above.

Broader Impact Concerns: N/A

==================================================

Review 4:

Summary: As the scale of neural data increases, there is an increasingly pressing need for data-driven modeling approaches that can reliably predict this data and enable "virtual experiments"/ablations to inform future designs. I believe that this work is an important step in this direction, and therefore should be accepted in this venue, but with some revisions that I list below (categorized as "Major" and "Minor").

Strengths and Weaknesses: See my requested major and minor changes below.

Requested Changes:
Major: To me, the longer-range contribution of the paper is not only the improvement over the baseline CNN, but also that over an unmodified ViT architecture (via the authors' innovations of finer patch tiling, Gaussian readout, and behavioral MLP) -- as this highlights the need to develop adaptations of (or even completely novel) architectures suited for the particular application of neural-response prediction. Therefore, many of my major suggestions are along this direction:
1. In Table 1, can you include 3 additional columns: Delta ViT%, p-value CNN, and p-value ViT?
(and rename Delta % to Delta CNN%). These would represent the improvement over the baseline ViT (without behavioral variables), along with p-values to indicate significance over both the CNN and ViT.
2. Would it be possible to try a baseline ViT ensemble as well, and add it as an additional row in the latter half of Table 1?
3. Figure A.1 is an important figure, and I believe it should be put in the main text, right after Table 1, as it highlights the continued improvement over the baseline CNN as a function of sample size, and greatly enhances the robustness of the main results in Table 1.
4. Would it be possible to add ViT as an additional bar to the current Figure A.1? It would be very interesting if V1T is more sample efficient than the baseline ViT.

Minor:
1. On pg. 7, the improvement of using a finer tiling is noted, along with the performance deficit due to LSA. Could these be included as supplementary figures? It would be good to be able to refer to them to see the relative performance differences each of these choices leads to. This is the sentence that I am referring to, if it is helpful: "Furthermore, we found that the two efficient tokenizers, SPT and CCT, whose aim is to reduce the number of patches, both failed to improve the model performance, reiterating that a finer tiling of the image is crucial for accurate predictions of cortical activity. Moreover, we found that the LSA attention mechanism, which encourages the model to learn from inter-tokens by masking out the diagonal self-token, led to worse performance, suggesting information from adjacent patches in this task is not as influential as it is in image classification."
2. The Introduction on pg. 1 sets up a dichotomy between task-driven and data-driven modeling of neural responses, which, given that the improvement over the CNN is not the only interesting aspect of this work, I do not think is necessary (or really warranted). To me, these are just different approaches with different goals.
Namely, data-driven models aim for maximum predictivity, with models tested on the same sort of statistics as they were trained on – whereas task-driven modelling aims for a normative explanation of the evolutionary and developmental constraints of the system (in terms of an architecture, objective function, and training dataset) across a range of conditions, even those that do not share the same statistics as the training dataset. In the revision, it would be good to simply mention that your work has a different goal than understanding the ecological constraints of mouse V1: namely, to build a predictive model on large-scale datasets and identify core components that can be insightful for designing the next generation of neural data-driven AI models.
3. Furthermore, related to this last point, there has been a lot of work since Cadena et al. 2019b on much improved, task-driven models of mouse visual cortex (specifically in reference to the sentence on pg. 1: "However, these models do not yield the same [in reference to the primate ventral stream] generalization and prediction results for the mouse visual cortex (Cadena et al. 2019b)."). In particular, the issue with prior task-driven models of mouse visual cortex was that they were "primate-adapted", both in having too deep of an architecture and in having a categorization-centric objective function. However, self-supervised, shallower, and mouse-visual-acuity-matched models attain much higher neural predictivity performance, similar to what deep categorization-optimized CNNs achieved with the primate ventral stream. Therefore, one relevant work that may be worth citing along these lines is: A. Nayebi*, N.C.L. Kong*, C. Zhuang, J.L. Gardner, A.M. Norcia, D.L.K. Yamins. "Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation". bioRxiv 2021. https://www.biorxiv.org/content/10.1101/2021.06.16.448730

Broader Impact Concerns: No broader impact concern.
================================================== Metareview: Recommendation: Accept as is Comment: This submission proposes a Vision Transformer-based model of mouse primary visual cortex. Reviewers initially asked for further confirmation that the improvements with the proposed changes were not driven by hyperparameter tuning or differences in how behavioral variables were integrated in different architectures, as well as some additional results with baseline ViTs. In their response, the authors provided additional details related to the former two points as well as the requested baselines. Reviewers were satisfied with this response, and all now support acceptance. ==================================================
# Parameter-Efficient Multi-Task And Multi-Domain Learning Using Factorized Tensor Networks

Anonymous authors Paper under double-blind review

## Abstract

Multi-task and multi-domain learning methods seek to learn multiple tasks/domains, jointly or one after another, using a single unified network. The primary challenge and opportunity lie in leveraging shared information across these tasks and domains to enhance the efficiency of the unified network. The efficiency can be in terms of accuracy, storage cost, computation, or sample complexity. In this paper, we introduce a factorized tensor network (FTN) designed to achieve accuracy comparable to that of independent single-task or single-domain networks, while introducing a minimal number of additional parameters. The FTN approach entails incorporating task- or domain-specific low-rank tensor factors into a shared frozen network derived from a source model. This strategy allows for adaptation to numerous target domains and tasks without encountering catastrophic forgetting. Furthermore, FTN requires a significantly smaller number of task-specific parameters compared to existing methods. We performed experiments on widely used multi-domain and multi-task datasets. We present experiments on convolution-based architectures with different backbones and on a transformer-based architecture. Our findings indicate that FTN attains accuracy similar to single-task or single-domain methods while using only a fraction of additional parameters per task.

## 1 Introduction

The primary objective in multi-task learning (MTL) is to train a single model that learns multiple related tasks, either jointly or sequentially. Multi-domain learning (MDL) aims to achieve the same learning objective across multiple domains. MTL and MDL techniques seek to improve overall performance by leveraging shared information across multiple tasks and domains. On the other hand, single-task or single-domain learning does not have that opportunity.
Likewise, the storage and computational cost associated with single-task/domain models quickly grows as the number of tasks/domains increases. In contrast, MTL and MDL methods can use the same network resources for multiple tasks/domains, which keeps the overall computational and storage cost small Mallya et al. (2018); Berriel et al. (2019); Wallingford et al. (2022); Rebuffi et al. (2018); Mancini et al. (2018); Sun et al. (2020); Kanakis et al. (2020); Maninis et al. (2019). In general, MTL and MDL can have different input/output configurations, but we model them as task/domain-specific network representation problems. Let us represent a network for MTL or MDL as the following general function:

$$y_t = F_t(x) \equiv F(x; W_t, h_t), \qquad (1)$$

where $F_t$ represents a function for task/domain $t$ that maps input $x$ to output $y_t$. We further assume that $F$ represents a network with a fixed architecture and $W_t$ and $h_t$ represent the parameters for task/domain-specific feature extraction and classification/inference heads, respectively. The function in equation 1 can represent the network for a specific task/domain $t$ using the respective $W_t$, $h_t$. In the case of MTL, with $T$ tasks, we can have $T$ outputs $y_1, \ldots, y_T$ for a given input $x$. In the case of MDL, we usually have a single output for a given input, conditioned on the domain $t$. Our main goal is to learn the $W_t$, $h_t$ for all $t$ that maximize the performance of MTL/MDL with minimal computation and memory overhead compared to single-task/domain learning.

![1_image_0.png](1_image_0.png)

Figure 1: Overview of different MTL/MDL approaches and our proposed method. (a) Fine-Tuning trains the entire network per task/domain. (b) Feature-Extractor trains a backbone network shared by all tasks/domains with task/domain-specific heads. (c) Our proposed method, Factorized Tensor Network (FTN), adapts to a new task/domain by adding low-rank factors to shared layers. (d) Detailed overview of FTN.
A single network adapted to three downstream vision tasks (segmentation, depth, and surface normal estimation) by adding task-specific low-rank tensors (∆Wt). Task/domain-specific blocks are shown in the same colors.

Figure 1(a),(b),(c) illustrate three typical approaches for MTL/MDL. First, we can start with a pre-trained network and fine-tune all the parameters (Wt) to learn a target task/domain, as shown in Figure 1(a). Fine-Tuning approaches can transfer some knowledge from the pretrained network to the target task/domain, but they effectively use an independent network for every task/domain Mallya et al. (2018); Wallingford et al. (2022); Tzeng et al. (2017); Venkateswara et al. (2017); Mustafa et al. (2021); Kolesnikov et al. (2020). Second, we can reduce the parameter and computation complexity by using a completely shared Feature-Extractor (i.e., Wt = Wshared for all t) and learning task/domain-specific heads as last layers, as shown in Figure 1(b). While such approaches reduce the number of parameters, they often result in poor overall performance because of limited network capacity and interference among features for different tasks/domains Mallya et al. (2018); Berriel et al. (2019); Wallingford et al. (2022); Zhang et al. (2020). Third, we can divide the network into shared and task/domain-specific parameters or pathways, as shown in Figure 1(c). Such an approach can increase the network capacity, provide interference-free paths for task/domain-specific feature extraction, and enable knowledge sharing across the tasks/domains. In recent years, a number of such methods have been proposed for MTL/MDL Wallingford et al. (2022); Kanakis et al. (2020); Misra et al. (2016); Ruder et al. (2019); Gao et al. (2019); Strezoski et al. (2019); Liang et al. (2018); Gao et al. (2020); Yu et al. (2020). While existing methods can provide performance comparable to single-task/domain learning, they require a significantly large number of additional parameters.
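To make Eq. (1) concrete, here is a toy NumPy sketch of a shared-architecture network evaluated with task-specific parameters; the two-linear-layer form, the shapes, and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def F(x, W_t, h_t):
    """Eq. (1): F(x; W_t, h_t) -- task/domain-specific feature extraction W_t
    followed by a task-specific head h_t (here, two linear maps with a ReLU)."""
    features = np.maximum(W_t @ x, 0.0)   # task-specific feature extraction
    return h_t @ features                 # task-specific classification head

rng = np.random.default_rng(0)
x = rng.normal(size=64)                   # one input, shared by all tasks

# One (W_t, h_t) pair per task; a fully shared Feature-Extractor (Figure 1(b))
# corresponds to W_t = W_shared for all t, with only h_t varying per task.
params = {t: (rng.normal(size=(32, 64)), rng.normal(size=(10, 32))) for t in range(3)}
outputs = {t: F(x, W_t, h_t) for t, (W_t, h_t) in params.items()}   # y_1, ..., y_T
```

In the MTL case this produces $T$ outputs for one input; in the MDL case only the output for the active domain $t$ would be evaluated.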
In this paper, we propose a new parameter-efficient method to divide the network into shared and task/domain-specific parameters using a factorized tensor network (FTN). In particular, our method learns task/domain-specific low-rank tensor factors and normalization layers. An illustration of our proposed method is shown in Figure 1(d), where we represent network parameters as Wt = Wshared + ∆Wt, where ∆Wt is a low-rank tensor. Furthermore, we also learn task/domain-specific normalization parameters. We demonstrate the effectiveness of our method using different MTL and MDL datasets. Our method can achieve accuracy comparable to a single-task/domain network with a small number of additional parameters.

A prior work, TAPS Wallingford et al. (2022), differentially learns which layers of a pre-trained network to adapt for a downstream task/domain by learning an indicator function. The network uses adapted weights instead of pre-trained weights if the indicator score is above a certain threshold. This typically involves adapting highly parameterized layers closer to the classifier/head, which uses significantly more parameters than our FTN method. Through experiments, we show that TAPS uses approximately 2 to 4 times more parameters than FTN across various datasets and tasks. Existing parameter-efficient MTL/MDL methods Mallya et al. (2018); Mallya & Lazebnik (2018); Li et al. (2016) introduce small task/domain-specific parameters, while others Zhang et al. (2020); Guo et al. (2019) add many parameters to boost the performance irrespective of the task complexity. In our work, we demonstrate the flexibility of FTNs by selecting the rank according to the complexity of the task. Other approaches like RCM Kanakis et al. (2020) adapt incrementally to new tasks by reparameterizing the convolutional layer into task-shared and task-specific parameters.
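Returning to the FTN parameterization Wt = Wshared + ∆Wt, a minimal sketch for a single weight matrix follows; using matrix rather than tensor factors, and all sizes and scalings, are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 256, 256, 4

W_shared = rng.normal(size=(d_out, d_in))   # frozen weights, shared by all tasks/domains
U = 0.1 * rng.normal(size=(d_out, rank))    # task-specific low-rank factors:
V = 0.1 * rng.normal(size=(rank, d_in))     # the only trainable weights for this layer

W_t = W_shared + U @ V                      # adapted layer weights for task t (rank-4 update)

extra, full = U.size + V.size, W_shared.size
print(f"additional parameters per task: {extra} ({100 * extra / full:.2f}% of the layer)")
```

Because only U and V are stored per task, the per-task overhead grows linearly in the chosen rank, which is what allows the rank to be matched to task complexity.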
However, unlike FTN, this architecture shows limitations in adapting based on the complexity of the tasks and performs subpar in terms of both performance and parameter efficiency.

Contributions. The main contributions of this paper can be summarized as follows.
- We propose a new method for MTL and MDL, called factorized tensor networks (FTN), that adds task/domain-specific low-rank tensors to shared weights.
- We demonstrate that by using a fraction of additional parameters per task/domain, FTNs can achieve similar performance as the single-task/domain methods.
- Our proposed FTNs can be viewed as a plug-in module that can be added to any pretrained network and layer. We have shown this by extending FTNs to transformer-based architectures.
- We performed empirical analysis to show that the FTNs enable flexibility by allowing us to vary the rank of the task-specific tensors according to the problem complexity.

Limitations. In our experiments, we used a fixed rank for each layer. In principle, we can adaptively select the rank for different layers to further reduce the parameters. MTL/MDL models often suffer from task interference or negative transfer learning when trained jointly with multiple conflicting tasks. Our method can have similar drawbacks, as we did not investigate which tasks/domains should be learned jointly. A shared backbone requires a single forward pass for all tasks, while our proposed FTN would require as many forward passes as the number of tasks. Branched and tree structures can enable different tasks to share several layers and reduce latency.

## 2 Related Work

Multi-task learning (MTL) methods commonly leverage shared and task-specific layers in a unified network to solve related tasks Misra et al. (2016); Gao et al. (2019); Liu et al. (2019); Bruggemann et al. (2020); Xu et al. (2018); Zhang et al. (2018); Vandenhende et al. (2020); Zhang et al. (2022c;a). These methods learn shared and task-specific representations through their respective modules.
Optimization-based methods Chen et al. (2018c); Kendall et al. (2018); Chen et al. (2020) devise a principled way to evaluate gradients and losses in multi-task settings. Branched and tree-structured MTL methods Bruggemann et al. (2020); Guo et al. (2020); Zhang et al. (2022b) enable different tasks to share branches along a tree structure for several layers. Multiple tasks can share computations and features in any layer only if they belong to the same branch in all the preceding layers. Kanakis et al. (2020); Maninis et al. (2019) proposed MTL networks that incrementally learn new tasks. ASTMT Maninis et al. (2019) proposed a network that emphasizes or suppresses features depending on the task at hand. RCM Kanakis et al. (2020) reparameterizes the convolutional layer into non-trainable and task-specific trainable modules. We compare our proposed method with these incrementally learned networks. Adashare Sun et al. (2020) is another related work in MTL that jointly learns multiple tasks. It learns task-specific policies and network pathways Jang et al. (2017).

Multi-domain learning (MDL) focuses on adapting one network to multiple unseen domains or tasks. The MDL setup trains task-specific modules built upon a frozen backbone network. This setup helps MDL networks avoid negative transfer learning or catastrophic forgetting, which is common among multi-task learning methods. The work by Rebuffi et al. (2017; 2018) introduces task-specific parameters called residual adapters. The architecture introduces these adapters as a series or parallel connection on the backbone for a downstream task. Inspired by pruning techniques, Packnet Mallya & Lazebnik (2018) learns multiple domains sequentially in a single network to decrease the storage overhead, which comes at the cost of performance. Similarly, the Piggyback Mallya et al. (2018) method uses binary masks as the module for task-specific parameters.
These masks are applied to the weights of the backbone to adapt them to new domains. Extending this work, WTPB Mancini et al. (2018) applies affine transformations of the binary masks to the backbone for greater flexibility and better learning. BA2 Berriel et al. (2019) proposed a budget-constrained MDL network that selects feature channels in the convolutional layers; it yields a parameter-efficient network by dropping feature channels according to a budget, but at the cost of performance. Zhao et al. (2021) learn the adapter modules and their plug-in locations in the architecture using NAS. Spot-Tune Guo et al. (2019) learns a policy network that decides whether to pass each image through fine-tuned or pretrained layers; it neglects parameter efficiency and emphasizes performance instead. TAPS Wallingford et al. (2022) adaptively learns to change a small number of layers in a pretrained network for the downstream task.

Domain adaptation and transfer learning. The work in this field usually focuses on adapting a network from a given source domain to a closely related target domain. The target domains in this setting typically have the same categories of classes as the source domains Tzeng et al. (2017). This makes it possible to exploit the labels of source domains to learn about multiple related target domains Venkateswara et al. (2017); Li et al. (2021). Some works consider a slight domain shift between source and target data, such as different camera views Saenko et al. (2010), while recent papers have addressed significant domain shifts, such as converting targets into sketch or art domains Venkateswara et al. (2017); Zhao et al. (2017). Transfer learning is related to MDL and domain adaptation but focuses on better generalization to target tasks Mustafa et al. (2021); Kolesnikov et al. (2020); Dosovitskiy et al. (2021).
Most work in this field uses the popular ImageNet dataset as a source to learn feature representations and transfer them to target datasets. The method proposed in Yang et al. (2022) uses a pretrained (multi-task) teacher network and decomposes it into multiple task/knowledge-specific factor networks that are disentangled from one another. This factorization leads to sub-networks that can be fine-tuned for downstream tasks, but they rely on knowledge transfer from a teacher network that is pretrained for multiple tasks. Modular deep learning methods Pfeiffer et al. (2023) focus on transfer learning by avoiding negative task interference and using parameter-efficient modules.

Factorization methods in MDL/MTL. The method in Yang & Hospedales (2015) proposed a unified framework for MTL/MDL using semantic descriptors, without focusing on parameter-efficient adaptation. Yang & Hospedales (2017) performs MTL/MDL by factorizing each layer in the network after incorporating task-specific information along a separate dimension. Both the networks in Yang & Hospedales (2015) and Yang & Hospedales (2017) require retraining from scratch for new tasks/domains. In contrast, FTN can incrementally learn low-rank factors to add new tasks/domains. Chen et al. (2018b) proposed a parameter-efficient network that replaces residual networks by incorporating factorized tensors. The results in Chen et al. (2018b) are limited to learning single-task networks, where the network is only compressed by up to ∼ 60%. In Bulat et al. (2020), the authors proposed a network for MDL using Tucker decomposition.

Transformer-based methods in MDL/MTL. LoRA Hu et al. (2021) is a low-rank adaptation method proposed for large language models, which freezes the pretrained weights of the model and learns low-rank updates for each transformer layer. It updates the weight matrices for query and value in every attention layer. Similarly, KAdaptation He et al.
(2023) proposes a parameter-efficient adaptation method for vision transformers. It represents the updates of MHSA layers using a summation of Kronecker products between shared parameters and low-rank task-specific parameters. We compare against both of these methods and show that FTN outperforms them in terms of parameter efficiency. Scaling and shifting your features (SSF) Lian et al. (2022) is another transformer method for parameter-efficient adaptation that applies element-wise multiplication and addition to the tokens after different operations. SSF is, in principle, similar to fine-tuning the Batch Normalization layers of convolutional networks, which have scaling and shifting trainable parameters. FTN trains the Batch Normalization layers and has the same effect of scaling and shifting features when adapting to new tasks. Ye & Xu (2022) proposed an inverted-pyramid multi-task transformer that performs cross-task interaction among the spatial features of different tasks in a global context. DeMT Xu et al. (2023) proposes a deformable mixer encoder and a task-aware transformer decoder: the encoder leverages channel mixing and deformable convolutions for informative features, while the transformer decoder captures task interactions. Our method, FTN, shares some high-level similarities with other parameter-efficient adaptation methods such as LoRA, as both approaches introduce low-rank factors to adapt networks to multiple tasks and domains. Our method is a natural extension to higher-order tensors, and we have demonstrated its effectiveness on both transformer and convolutional network architectures. In addition, our method improves parameter and performance efficiency compared to related methods, as demonstrated by our experiments. In summary, our proposed method (FTN) offers a parameter-efficient approach that achieves performance comparable to or better than existing adaptation methods while utilizing only a fraction of additional parameters.
Our primary design consideration was to achieve efficient adaptation, enabling incremental learning with additive factors. To achieve parameter efficiency, we introduce a small number of trainable parameters through a low-rank factorization applicable to both convolutional and transformer-based networks. We utilize frozen shared parameters and trainable task-specific parameters to support incremental learning without forgetting prior knowledge.

## 3 Technical Details

Notations. In this paper, we denote scalars, vectors, matrices, and tensors by $w$, $\mathbf{w}$, $\mathbf{W}$, and $\mathcal{W}$, respectively. The collective set of tensors (network weights) is denoted as $\mathcal{W}$.

## 3.1 FTN Applied To Convolutional Layers

In our proposed method, we use task/domain-specific low-rank tensors to adapt every convolutional layer of a pretrained backbone network to new tasks and domains. Let us assume the backbone network has $L$ convolutional layers that are shared across all tasks/domains. We represent the shared network weights as $\mathcal{W}_{\text{shared}} = \{\mathbf{W}_1, \ldots, \mathbf{W}_L\}$ and the low-rank network updates for task/domain $t$ as $\Delta\mathcal{W}_t = \{\Delta\mathbf{W}_{1,t}, \ldots, \Delta\mathbf{W}_{L,t}\}$. To compute features for task/domain $t$, we update the weights at every layer as $\mathcal{W}_{\text{shared}} + \Delta\mathcal{W}_t = \{\mathbf{W}_1 + \Delta\mathbf{W}_{1,t}, \ldots, \mathbf{W}_L + \Delta\mathbf{W}_{L,t}\}$. To keep our notation simple, let us only consider the $l$-th convolutional layer, which has $k \times k$ filters, $C_{in}$ channels for the input feature tensor, and $C_{out}$ channels for the output feature tensor. We represent the corresponding $\mathbf{W}_l$ as a tensor of size $k^2 \times C_{in} \times C_{out}$. We represent the low-rank tensor update as a summation of $R$ rank-1 tensors as

$$\Delta\mathbf{W}_{l,t}=\sum_{r=1}^{R}\mathbf{w}_{1,t}^{r}\otimes\mathbf{w}_{2,t}^{r}\otimes\mathbf{w}_{3,t}^{r},\tag{2}$$

where $\mathbf{w}_{1,t}^{r}$, $\mathbf{w}_{2,t}^{r}$, $\mathbf{w}_{3,t}^{r}$ represent vectors of length $k^2$, $C_{in}$, $C_{out}$, respectively, and $\otimes$ represents the Kronecker product. Apart from the low-rank tensor update, we also optimize over the Batch Normalization (BN) layers for each task/domain Ioffe & Szegedy (2015); Pham et al. (2022).
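The low-rank update $\Delta\mathbf{W}_{l,t}$ above can be sketched in a few lines; the following is a minimal NumPy illustration (shapes and names are ours, not the paper's implementation):

```python
import numpy as np

def low_rank_update(w1, w2, w3):
    """Sum of R rank-1 tensors: sum_r w1^r (x) w2^r (x) w3^r.

    w1: (R, k*k), w2: (R, C_in), w3: (R, C_out) -- task-specific factors.
    Returns the update tensor of shape (k*k, C_in, C_out).
    """
    return np.einsum('ra,rb,rc->abc', w1, w2, w3)

rng = np.random.default_rng(0)
k2, c_in, c_out, R = 9, 64, 128, 4              # e.g. 3x3 filters
w1 = rng.normal(size=(R, k2)) * 0.01            # near-zero factors keep the
w2 = rng.normal(size=(R, c_in)) * 0.01          # adapted weights close to the
w3 = rng.normal(size=(R, c_out)) * 0.01         # pretrained backbone initially
W_shared = rng.normal(size=(k2, c_in, c_out))   # frozen backbone weights

# Weights used in the forward pass for task t:
W_task = W_shared + low_rank_update(w1, w2, w3)

# Per-layer parameter comparison: fine-tuning copies the full tensor,
# FTN only stores R * (k^2 + C_in + C_out) task-specific scalars.
full = k2 * c_in * c_out          # 73728
ftn = R * (k2 + c_in + c_out)     # 804
```

Only the factors (and per-task BN parameters) are trained; `W_shared` stays frozen, so tasks can be added incrementally without touching previously learned ones.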
The BN layer learns two vectors, $\Gamma$ and $\beta$, each of length $C_{out}$. The BN operation along the $C_{out}$ dimension can be defined as element-wise multiplication and addition:

$${\rm BN}_{\Gamma,\beta}(u)=\Gamma\left(\frac{u-\mathbb{E}[u]}{\sqrt{{\rm Var}[u]+\epsilon}}\right)+\beta.\tag{3}$$

We represent the output of the $l$-th convolutional layer for task/domain $t$ as

$$\mathbf{Z}_{l,t}=\mathrm{BN}_{\Gamma_{t},\beta_{t}}(\mathrm{conv}(\mathbf{W}_{l}+\Delta\mathbf{W}_{l,t},\mathbf{Y}_{l-1,t})),\tag{4}$$

where $\mathbf{Y}_{l-1,t}$ represents the input tensor and $\mathbf{Z}_{l,t}$ represents the output tensor of the $l$-th layer. In our proposed FTN, we learn the task/domain-specific factors $\{\mathbf{w}_{1,t}^{r}, \mathbf{w}_{2,t}^{r}, \mathbf{w}_{3,t}^{r}\}_{r=1}^{R}$, $\Gamma_t$, and $\beta_t$ for every layer in the backbone network.

In the FTN method, the rank $R$ of $\Delta\mathbf{W}$ plays an important role in defining the expressivity of the adapted network. We can define a more complex $\Delta\mathbf{W}$ by increasing the rank $R$ of the low-rank tensor and taking a linear combination of the rank-1 terms. Our experiments show that this results in a significant performance gain.

Initialization. To establish a favorable starting point, we adopt a strategy that avoids substantial modifications to the frozen backbone network weights when initializing the task-specific parameter layers. To achieve this, we initialize each parameter layer from the Xavier uniform distribution Glorot & Bengio (2010), thereby generating $\Delta\mathbf{W}$ values close to 0 before their addition to the frozen weights. This ensures that the initial point of our proposed network closely matches the pretrained network. To acquire an effective initialization for our backbone network, we leverage pretrained ImageNet weights, which provide a robust and capable feature extractor for the task at hand.

Number of parameters.
In a Fine-Tuning setup with $T$ tasks/domains, the total number of required parameters at convolutional layer $l$ is $T \cdot (k^2 \times C_{in} \times C_{out})$. Using our proposed FTNs, the total number of frozen backbone ($\mathbf{W}_l$) and rank-$R$ tensor ($\Delta\mathbf{W}_{l,t}$) parameters is $(k^2 \times C_{in} \times C_{out}) + T \cdot R \cdot (k^2 + C_{in} + C_{out})$. In our results section, we show that the absolute number of parameters required by our method is a fraction of what the Fine-Tuning counterpart needs.

Effect of Batch Normalization. In our experiments, under the 'FC and BN only' setup, we show that having task-specific Batch Normalization layers in the backbone network significantly affects the performance of a downstream task/domain. For all experiments with our proposed approach, we keep the Batch Normalization layers task-specific, along with the low-rank tensors and the classification/decoder layer.

## 3.2 FTN Applied To Transformers

The Vision Transformer (ViT) architecture Dosovitskiy et al. (2020) consists of a series of MLP, normalization, and Multi-Head Self-Attention (MHSA) blocks. The MHSA blocks perform $n$ parallel attention mechanisms on sets of Key $K$, Query $Q$, and Value $V$ matrices. Each of these matrices has dimensions $S \times d_{model}$, where $d_{model}$ represents the embedding dimension of the transformer and $S$ is the sequence length. The $i$-th output head $H_i$ of the $n$ parallel attention blocks is computed as

$$H_{i}=\mathrm{SA}(Q\mathbf{W}_{i}^{Q},K\mathbf{W}_{i}^{K},V\mathbf{W}_{i}^{V}),\tag{5}$$

where $\mathrm{SA}(\cdot)$ represents the self-attention mechanism, $\mathbf{W}_{i}^{K},\mathbf{W}_{i}^{Q},\mathbf{W}_{i}^{V}\in\mathbb{R}^{d_{model}\times d}$ represent the projection weights for the key, query, and value matrices, respectively, and $d = d_{model}/n$. The heads $H_i$ are then combined using a projection matrix $\mathbf{W}_{o}\in\mathbb{R}^{d_{model}\times d_{model}}$ to form the output of the MHSA block as

$$\mathrm{MHSA}(H_{1},\ldots,H_{n})=\mathrm{Concat}(H_{1},\ldots,H_{n})\cdot\mathbf{W}_{o}.\tag{6}$$

Following the adaptation procedure in He et al.
(2023), we apply our proposed factorization technique to the weights in the MHSA block. We introduce two methods for applying low-rank tensors to the attention weights:

Adapting query and value weights. Our first proposed method, *FTN (Query and Value)*, adds the low-rank tensor factors to the query $\mathbf{W}^{Q}$ and value $\mathbf{W}^{V}$ weights. These weights can be represented as three-dimensional tensors of size $d_{model} \times d \times n$. Using equation 2, we can define and learn low-rank updates $\Delta\mathbf{W}_{q}$ and $\Delta\mathbf{W}_{v}$ for the query and value weights, respectively.

Adapting output weights. Our second method, *FTN (Output projection)*, adds low-rank factors, $\Delta\mathbf{W}_{o}$, to the output projection weights $\mathbf{W}_{o}\in\mathbb{R}^{d_{model}\times d\times n}$. Similar to the previous low-rank updates, the updates to the output weights are defined following equation 2.

Initialization. We initialize each low-rank factor by sampling from a Gaussian distribution with $\mu = 0$ and $\sigma = 0.05$. This ensures a near-zero initialization, closely matching the pretrained network.

Number of parameters. The total number of parameters needed for $R$ low-rank tensors and $L$ MHSA blocks in FTN (Query and Value) is $2LR(d_{model}+d+n)$. FTN (Output Projection) requires only $LR(d_{model}+d+n)$ to add a similar number of factors.

Table 1: Number of parameters and top-1% accuracy for baseline methods, comparative methods, and FTN with varying ranks on the five domains of the ImageNet-to-Sketch benchmark experiments. Additionally, the mean top-1% accuracy of each method across all domains is shown. The 'Params' column gives the number of parameters used as a multiplier of those for the Feature-Extractor method, along with the absolute number of parameters in parentheses.

| Methods | Params (Abs) | Flowers | Wikiart | Sketch | Cars | CUB | mean |
|------------------------------------|----------------------|-----------|-----------|----------|--------|-------|--------|
| Fine-Tuning | 6× (141M) | 95.69 | 78.42 | 81.02 | 91.44 | 83.37 | 85.98 |
| Feature-Extractor | 1× (23.5M) | 89.57 | 57.7 | 57.07 | 54.01 | 67.20 | 65.11 |
| FC and BN only | 1.001× (23.52M) | 94.39 | 70.62 | 79.15 | 85.20 | 78.68 | 81.60 |
| Piggyback Mallya et al. (2018) | 6× [2.25×] (141M) | 94.76 | 71.33 | 79.91 | 89.62 | 81.59 | 83.44 |
| Packnet → Mallya & Lazebnik (2018) | [1.60×] (37.6M) | 93 | 69.4 | 76.20 | 86.10 | 80.40 | 81.02 |
| Packnet ← Mallya & Lazebnik (2018) | [1.60×] (37.6M) | 90.60 | 70.3 | 78.7 | 80.0 | 71.4 | 78.2 |
| Spot-Tune Guo et al. (2019) | 7× [7×] (164.5M) | 96.34 | 75.77 | 80.2 | 92.4 | 84.03 | 85.74 |
| WTPB Mancini et al. (2018) | 6× [2.25×] (141M) | 96.50 | 74.8 | 80.2 | 91.5 | 82.6 | 85.12 |
| BA2 Berriel et al. (2019) | 3.8× [1.71×] (89.3M) | 95.74 | 72.32 | 79.28 | 92.14 | 81.19 | 84.13 |
| TAPS Wallingford et al. (2022) | 4.12× (96.82M) | 96.68 | 76.94 | 80.74 | 89.76 | 82.65 | 85.35 |
| FTN, R=1 | 1.004× (23.95M) | 94.79 | 73.03 | 78.62 | 86.85 | 80.86 | 82.83 |
| FTN, R=50 | 1.53× (36.02M) | 96.42 | 78.01 | 80.6 | 90.83 | 82.96 | 85.76 |

These additional parameters are significantly fewer than the parameters required for fully fine-tuning the four attention weights, which equals $4Ld_{model}^{2}$. When compared to other parameter-efficient adaptation methods such as LoRA Hu et al. (2021) and KAdaptation He et al. (2023), our methods show superior parameter efficiency. The primary distinction is in the method of weight factorization and decomposition. In LoRA, introducing rank-$R$ factors in the query and value weight matrices requires $4LRd_{model}$ parameters. Our approach begins with a three-dimensional representation of the attention weights, sized $d_{model} \times d \times n$.
We chose this approach because it allows us to exploit the relationship between the attention heads, further improving parameter efficiency. Moreover, we have explored different types of updates within the self-attention mechanism and proposed two variants of our FTN (*Query and Value* and *Output projection*). LoRA requires $4LRd_{model}$ parameters to apply low-rank factors to the query and value weight matrices. KAdaptation requires $2LRd_{model} + K^3$ additional parameters for each weight, where $K$ represents a design parameter. SSF Lian et al. (2022) requires $mLd_{model}$, where $m$ is the number of SSF modules in each transformer layer. In Table 3, we report the exact number of parameters and demonstrate that our proposed method, *FTN (Output Projection)*, has the best parameter efficiency.

## 4 Experiments And Results

We evaluated the performance of our proposed FTN on several MTL/MDL datasets. We performed experiments for **1. Multi-domain classification** on convolution- and transformer-based networks, and **2. Multi-task dense prediction**. For each set of benchmarks, we report the performance of FTN with different rank increments and compare the results with those from existing methods. All experiments are run on a single NVIDIA GeForce RTX 2080 Ti GPU with 12GB memory.

## 4.1 Multi-Domain Classification

## 4.1.1 Convolution-Based Networks

Datasets. We use two MTL/MDL classification benchmark datasets. First, ImageNet-to-Sketch, which contains five different domains with different classes: Flowers Nilsback & Zisserman (2008), Cars Krause et al. (2013), Sketch Eitz et al. (2012), Caltech-UCSD Birds (CUB) Wah et al. (2011), and WikiArt Saleh & Elgammal (2016). Second, DomainNet Peng et al. (2019), which contains six domains: Clipart, Sketch, Painting (Paint), Quickdraw (Quick), Infograph (Info), and Real, with each domain containing the same 345 classes. The datasets are prepared using the augmentation techniques adopted by Wallingford et al. (2022).

Training details.
For each benchmark, we report the performance of FTN for various choices of rank, along with several benchmark-specific comparative and baseline methods. The backbone weights are pretrained on ImageNet, using ResNet-50 He et al. (2016) for the ImageNet-to-Sketch benchmark and ResNet-34 for the DomainNet benchmark, keeping the same setting as Wallingford et al. (2022). On ImageNet-to-Sketch, we run FTNs for ranks $R \in \{1, 5, 10, 15, 20, 25, 50\}$, and on the DomainNet dataset for ranks $R \in \{1, 5, 10, 20, 30, 40\}$. In the supplementary material, we provide the hyperparameter details used to train FTN.

Results. We report the top-1% accuracy for each domain and the mean accuracy across all domains for each collection of benchmark experiments. We also report the number of frozen and learnable parameters in the backbone network. Table 1 compares the FTN method with the other methods in terms of accuracy and number of parameters. FTN uses only 36.02 million parameters in the backbone with rank-50 updates for all domains. Its mean accuracy is better than that of the other adaptation methods and close to the Fine-Tuning upper bound, while the closest competitor, Spot-Tune Guo et al. (2019), requires nearly 165M parameters. On the WikiArt domain, we achieve the best top-1 accuracy among all adaptation methods. The performance of the baseline methods is taken from TAPS Wallingford et al. (2022), since we run our experiments under the same settings.

Table 2: Performance of different methods with a ResNet-34 backbone on the DomainNet dataset. Top-1% accuracy is shown for each domain and method, along with the number of parameters.

| Methods | Params | Clipart | Sketch | Paint | Quick | Info | Real | mean |
|----------------------------------|--------|---------|--------|-------|-------|-------|-------|-----------|
| Fine-Tuning | 6× | 74.26 | 67.33 | 67.11 | 72.43 | 40.11 | 80.36 | 66.93 |
| Feature-Extractor | 1× | 60.94 | 50.03 | 60.22 | 54.01 | 26.19 | 76.79 | 54.69 |
| FC and BN only | 1.004× | 70.24 | 61.10 | 64.22 | 63.09 | 34.76 | 78.61 | 62.00 |
| Adashare Sun et al. (2020) | 5.73× | 74.45 | 64.15 | 65.74 | 68.15 | 34.11 | 79.39 | 64.33 |
| TAPS Wallingford et al. (2022) | 4.90× | 74.85 | 66.66 | 67.28 | 71.79 | 38.21 | 80.28 | **66.51** |
| FTN, R=1 | 1.008× | 70.73 | 62.69 | 65.08 | 64.81 | 35.78 | 79.12 | 63.03 |
| FTN, R=40 | 1.18× | 74.2 | 65.67 | 67.14 | 71.00 | 39.10 | 80.64 | 66.29 |

Table 2 shows the results on the DomainNet dataset, which we compare with TAPS Wallingford et al. (2022) and Adashare Sun et al. (2020). Again, FTN requires significantly fewer parameters than the comparison methods (rank-40 needs only 25.22 million parameters). FTN rank-40 also attains better top-1% accuracy on the Infograph and Real domains, while attaining similar performance on all other domains. On DomainNet with a ResNet-34 backbone and ImageNet-to-Sketch with a ResNet-50 backbone, the rank-1 low-rank tensors require only 16,291 and 49,204 parameters per task, respectively. We show additional experiments on this dataset under a joint optimization setup in the supplementary material.

Analysis of rank. We create the low-rank tensors ($\Delta\mathbf{W}$) as a summation of $R$ rank-1 tensors. We hypothesize that increasing $R$ increases the expressive power of the low-rank tensors. Our experiments confirm this hypothesis: increasing the rank improves performance, enabling adaptation to more challenging tasks/domains. Figure 2 shows accuracy against rank, where we observe a trend of performance improvement as we increase the rank from 1 to 50 on the ImageNet-to-Sketch dataset and from 1 to 40 on the DomainNet dataset. In addition, we observe that some domains do not require high ranks. In particular, the Flowers and Cars domains attain good accuracy at ranks 20 and 15, respectively. Unlike prior works Guo et al. (2019); Li et al. (2016), which use the same task-specific module for easy and complex tasks, we can thus provide different flexibility to each task. We can also add as many tasks as we want by adding independent low-rank factors for each task (with a sufficiently large rank).
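The per-task counts quoted above follow directly from the parameter formulas in Section 3.1; a small sketch (the layer shapes below are illustrative, not the actual ResNet configuration):

```python
# Per-task parameter count from Section 3.1: fine-tuning copies every conv
# layer, while FTN adds R*(k^2 + C_in + C_out) factor parameters plus
# 2*C_out task-specific Batch-Norm parameters (Gamma, beta) per layer.
# Layer shapes are hypothetical, chosen only to illustrate the scaling.
layers = [(9, 64, 64), (9, 64, 128), (9, 128, 256), (9, 256, 512)]  # (k^2, C_in, C_out)

def finetune_params(layers):
    """Parameters of one fully fine-tuned copy of all conv layers."""
    return sum(k2 * cin * cout for k2, cin, cout in layers)

def ftn_params(layers, rank):
    """Task-specific parameters added by FTN at the given rank."""
    factors = sum(rank * (k2 + cin + cout) for k2, cin, cout in layers)
    bn = sum(2 * cout for _, _, cout in layers)
    return factors + bn

for R in (1, 10, 40):
    print(R, ftn_params(layers, R), finetune_params(layers))
```

Even at rank 40 the task-specific budget grows only linearly in $R$ and stays orders of magnitude below a full fine-tuned copy, which matches the trend reported in Tables 1 and 2.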
In the supplementary material, we present a heatmap that shows the adaptation of the low-rank tensor at every layer as the rank increases.

## 4.1.2 Transformer-Based Networks

We compared our FTN method with several domain adaptation techniques for supervised image classification. Our task is to adapt a pretrained 12-layer ViT-B-224/32 (CLIP) model obtained from He et al. (2023) to new domains.

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 2: Accuracy vs. rank: we show the top-1% accuracy against the different ranks used in our method for each domain, on (a) the ImageNet-to-Sketch dataset and (b) the DomainNet dataset. We start with an 'only BN' setup in which, without any low-rank factors, we keep only the Batch Normalization layers task-specific. We then show the performance improvement of our approach as the rank $R$ increases.

Table 3: We compare performance across five datasets in terms of accuracy and total parameters. FTN (O) uses low-rank factors for the output projection weights, while FTN (Q&V) applies them to the query and value weights. Note that the reported parameters exclude task-specific heads, and 5× (439.5M) denotes a fivefold increase in the base network's parameters, 5 × 87.9M.

| Method | # total params | # additional params | CIFAR10 | CIFAR100 | DTD | STL10 | FER2013 | mean |
|------------------------------|------------------|-----------------------|-----------|------------|-------|---------|-----------|--------|
| Fine-tuning | 5× (439.5M) | 5× 87.9M | 97.7 | 85.4 | 79.0 | 99.7 | 69.8 | 86.3 |
| Feature extractor | 1× (87.9M) | - | 94.8 | 80.1 | 75.4 | 98.4 | 67.3 | 83.2 |
| LoRA Hu et al. (2021) | 1.008× (88.6M) | 5×147.2K | 95.1 | 78.1 | 78.1 | 99.2 | 67.7 | 83.6 |
| KAdaptation He et al. (2023) | 1.005× (88.3M) | 5×80.7K | 95.9 | 84.8 | 78.1 | 99.2 | 69.0 | 85.4 |
| FTN (Q & V) | 1.005× (88.3M) | 5× 81.0K | 95.8 | 83.4 | 77.1 | 98.7 | 68.5 | 84.7 |
| FTN (O) | 1.002× (88.1M) | 5×40.5K | 96.6 | 84.3 | 76.0 | 98.6 | 69.5 | 85.0 |

Datasets.
We conducted experiments on the CIFAR10 Krizhevsky et al. (2009), CIFAR100 Krizhevsky et al. (2009), DTD Cimpoi et al. (2014), FER2013 Goodfellow et al. (2013), and STL10 Coates et al. (2011) classification datasets, using the official dataset splits.

Training details. For all experiments, we set the rank to $R = 4$. We followed a similar hyper-parameter tuning procedure and implementation as outlined in He et al. (2023), which utilizes a grid search to obtain the optimal learning rate for each dataset. We found $5 \times 10^{-6}$ to be the optimal learning rate. Following the approach in Hu et al. (2021), we scaled the low-rank factors by $\frac{\alpha}{R}$, where $\alpha$ is a hyper-parameter and $R$ is the number of low-rank factors. We set $\alpha = 10$ and $\alpha = 100$ for FTN (Query and Value) and FTN (Output projection), respectively. We used a batch size of 64 and trained for 100 epochs.

Results. In Table 3, we present the classification accuracy and the total number of parameters for our proposed FTN methods, along with related model adaptation methods. Results for Fine-tuning, Feature extractor (linear probing), LoRA Hu et al. (2021), and KAdaptation He et al. (2023) are obtained from He et al. (2023). Our first proposed method, FTN (Query and Value), surpasses LoRA in average performance while requiring fewer additional parameters; it requires a number of parameters comparable to KAdaptation, with mean accuracy 0.8% lower. In contrast, FTN (Output projection) requires approximately half as many additional parameters as KAdaptation while achieving comparable performance.

## 4.2 Multi-Task Dense Prediction

Dataset. The widely used NYUD dataset Silberman et al. (2012), with 795 training and 654 testing images of indoor scenes, is used for the dense prediction experiments in multi-task learning. The dataset contains four tasks: edge detection (Edge), semantic segmentation (SemSeg), surface normals estimation (Normals), and depth estimation (Depth).
We follow the same data-augmentation technique as used by Kanakis et al. (2020).

Metrics. On the tasks of the NYUD dataset, we report the mean intersection over union for semantic segmentation, the mean error for surface normals estimation, the optimal dataset F-measure Martin et al. (2004) for edge detection, and the root mean squared error for depth estimation. We also report the number of backbone parameters used by each method.

Training details. ResNet-18 is used as the backbone network and DeepLabv3+ Chen et al. (2018a) as the decoder architecture. The Fine-Tuning and Feature-Extractor experiments are implemented in the same way as in the classification-based experiments above. We show experiments for FTNs with $R \in \{1, 10, 20, 30\}$. Further details are in the supplementary material.

Results. Table 4 shows the performance of FTN with various ranks, along with other baseline comparison methods, for the dense prediction tasks on the NYUD dataset. We observe performance improvements from the increased flexibility of higher ranks. FTN with rank 30 performs better than all comparison methods and utilizes the fewest parameters. On the Depth and Edge tasks, good performance can already be attained with only rank 20. We take the performance of the baseline comparison methods from the RCM paper Kanakis et al. (2020), as we run our experiments under the same setting.

Table 4: Dense prediction performance on the NYUD dataset using a ResNet-18 backbone with a DeepLabv3+ decoder. The proposed FTN approach with $R \in \{1, 10, 20, 30\}$ and other methods. The best-performing method per task is shown in bold.

| Methods | Params | Semseg↑ | Depth↓ | Normals↓ | Edge↑ |
|---------------------------------------|----------|-----------|----------|------------|----------|
| Single Task | 4× | 35.34 | 0.56 | 22.20 | 73.5 |
| Decoder only | 1× | 24.84 | 0.71 | 28.56 | 71.3 |
| Decoder + BN only | 1.002× | 29.26 | 0.61 | 24.82 | 71.3 |
| ASTMT (R-18) Maninis et al. (2019) | 1.25× | 30.69 | 0.60 | 23.94 | 68.60 |
| ASTMT (R-26+SE) Maninis et al. (2019) | 2.00× | 30.07 | 0.63 | 24.32 | 73.50 |
| Series RA Rebuffi et al. (2018) | 1.56× | 31.87 | 0.60 | 23.35 | 67.56 |
| Parallel RA Rebuffi et al. (2018) | 1.50× | 32.13 | 0.59 | 23.20 | 68.02 |
| RCM Kanakis et al. (2020) | 1.56× | 34.20 | 0.57 | 22.41 | 68.44 |
| FTN, R=1 | 1.005× | 29.83 | 0.60 | 23.56 | 72.7 |
| FTN, R=10 | 1.03× | 33.66 | 0.57 | 22.15 | 73.5 |
| FTN, R=20 | 1.06× | 34.06 | **0.55** | 21.84 | **73.9** |
| FTN, R=30 | 1.09× | **35.46** | 0.56 | **21.78** | 73.8 |

## 5 Conclusion

We have proposed a simple, parameter-efficient, architecture-agnostic, and easy-to-implement FTN method that adapts to new, unseen domains/tasks using low-rank task-specific tensors. Our work shows that FTN requires the fewest parameters among the compared methods in our MDL/MTL experiments while attaining better or comparable performance. We can adapt the backbone network flexibly by adjusting the rank according to the complexity of the domain/task. We conducted experiments with different convolutional backbones and transformer architectures on various datasets to demonstrate that FTN outperforms existing methods.

## References

Rodrigo Berriel, Stephane Lathuillere, Moin Nabi, Tassilo Klein, Thiago Oliveira-Santos, Nicu Sebe, and Elisa Ricci. Budget-aware adapters for multi-domain learning. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 382–391, 2019.

David Bruggemann, Menelaos Kanakis, Stamatios Georgoulis, and Luc Van Gool. Automated search for resource-efficient branched multi-task networks. *Proceedings BMVC*, 2020.

Adrian Bulat, Jean Kossaifi, Georgios Tzimiropoulos, and Maja Pantic. Incremental multi-domain learning with network latent tensor factorization. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 10470–10477, 2020.

Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam.
Encoder-decoder with atrous separable convolution for semantic image segmentation. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 801–818, 2018a. Yunpeng Chen, Xiaojie Jin, Bingyi Kang, Jiashi Feng, and Shuicheng Yan. Sharing residual units through collective tensor factorization to improve deep neural networks. In *IJCAI*, pp. 635–641, 2018b. Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International conference on machine learning, pp. 794–803. PMLR, 2018c. Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, and Dragomir Anguelov. Just pick a sign: Optimizing deep multitask models with gradient sign dropout. *Advances in* Neural Information Processing Systems, 33:2039–2050, 2020. Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3606–3613, 2014. Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2020. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 
International Conference on Learning Representations, 2021. Mathias Eitz, James Hays, and Marc Alexa. How do humans sketch objects? *ACM Transactions on graphics* (TOG), 31(4):1–10, 2012. Yuan Gao, Jiayi Ma, Mingbo Zhao, Wei Liu, and Alan L Yuille. Nddr-cnn: Layerwise feature fusing in multitask cnns by neural discriminative dimensionality reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3205–3214, 2019. Yuan Gao, Haoping Bai, Zequn Jie, Jiayi Ma, Kui Jia, and Wei Liu. Mtl-nas: Task-agnostic neural architecture search towards general-purpose multi-task learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 249– 256. JMLR Workshop and Conference Proceedings, 2010. Ian J Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, et al. Challenges in representation learning: A report on three machine learning contests. In *Neural Information Processing: 20th International Conference, ICONIP 2013, Daegu, Korea, November 3-7, 2013. Proceedings, Part III 20*, pp. 117–124. Springer, 2013. Pengsheng Guo, Chen-Yu Lee, and Daniel Ulbricht. Learning to branch for multi-task learning. In *International Conference on Machine Learning*, pp. 3854–3863. PMLR, 2020. Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, and Rogerio Feris. Spottune: transfer learning through adaptive fine-tuning. In *Proceedings of the IEEE/CVF conference on computer* vision and pattern recognition, pp. 4805–4814, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. Parameter-efficient model adaptation for vision transformers. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 817–825, 2023. Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In *International Conference on Learning Representations*, 2021. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International conference on machine learning*, pp. 448–456. pmlr, 2015. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In *International Conference on Learning Representations*, 2017. Menelaos Kanakis, David Bruggemann, Suman Saha, Stamatios Georgoulis, Anton Obukhov, and Luc Van Gool. Reparameterizing convolutions for incremental multi-task learning without task interference. In European Conference on Computer Vision, pp. 689–707. Springer, 2020. Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7482–7491, 2018. Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (bit): General visual representation learning. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16, pp. 491–507. Springer, 2020. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *Proceedings of the IEEE international conference on computer vision workshops*, pp. 554–561, 2013. Alex Krizhevsky, Geoffrey Hinton, et al. 
Learning multiple layers of features from tiny images. 2009. Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, and Yun Fu. Ecacl: A holistic framework for semisupervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8578–8587, 2021. Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, and Xiaodi Hou. Revisiting batch normalization for practical domain adaptation. *arXiv preprint arXiv:1603.04779*, 2016. Dongze Lian, Daquan Zhou, Jiashi Feng, and Xinchao Wang. Scaling & shifting your features: A new baseline for efficient model tuning. *Advances in Neural Information Processing Systems*, 35:109–123, 2022. Jason Liang, Elliot Meyerson, and Risto Miikkulainen. Evolutionary architecture search for deep multitask networks. In *Proceedings of the genetic and evolutionary computation conference*, pp. 466–473, 2018. Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1871–1880, 2019. Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In *Proceedings of the IEEE conference on Computer Vision and Pattern Recognition*, pp. 7765–7773, 2018. Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 67–82, 2018. Massimiliano Mancini, Elisa Ricci, Barbara Caputo, and Samuel Rota Bulo. Adding new tasks to a single network with weight transformations using binary masks. In *Proceedings of the European Conference on* Computer Vision (ECCV) Workshops, pp. 0–0, 2018. Kevis-Kokitsi Maninis, Ilija Radosavovic, and Iasonas Kokkinos. Attentive single-tasking of multiple tasks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 1851–1860, 2019. 
David R Martin, Charless C Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. *IEEE transactions on pattern analysis and machine intelligence*, 26(5):530–549, 2004. Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3994–4003, 2016. Basil Mustafa, Aaron Loh, Jan Freyberg, Patricia MacWilliams, Megan Wilson, Scott Mayer McKinney, Marcin Sieniek, Jim Winkens, Yuan Liu, Peggy Bui, et al. Supervised transfer learning at scale for medical imaging. *arXiv preprint arXiv:2101.05913*, 2021. Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In *2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing*, pp. 722–729. IEEE, 2008. Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1406–1415, 2019. Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, and Edoardo Maria Ponti. Modular deep learning. *arXiv preprint* arXiv:2302.11529, 2023. Quang Pham, Chenghao Liu, and HOI Steven. Continual normalization: Rethinking batch normalization for online continual learning. In *International Conference on Learning Representations*, 2022. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. *Advances in neural information processing systems*, 30, 2017. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Efficient parametrization of multi-domain deep neural networks. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 8119–8127, 2018. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 
Latent multi-task architecture learning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 33, pp. 4822–4829, 2019. Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In *European Conference on Computer Vision (ECCV)*, pp. 213–226. Springer, 2010. Babak Saleh and Ahmed Elgammal. Large-scale classification of fine-art paintings: Learning the right metric on the right feature. *International Journal for Digital Art History*, (2), 2016. Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. *ECCV (5)*, 7576:746–760, 2012. Gjorgji Strezoski, Nanne van Noord, and Marcel Worring. Many task learning with task routing. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1375–1384, 2019. Ximeng Sun, Rameswar Panda, Rogerio Feris, and Kate Saenko. Adashare: Learning what to share for efficient deep multi-task learning. *Advances in Neural Information Processing Systems*, 33:8728–8740, 2020. Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 7167–7176, 2017. Simon Vandenhende, Stamatios Georgoulis, and Luc Van Gool. Mti-net: Multi-scale task interaction networks for multi-task learning. In *European Conference on Computer Vision*, pp. 527–543. Springer, 2020. Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In *Proceedings of the IEEE conference on computer vision* and pattern recognition, pp. 5018–5027, 2017. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds200-2011 dataset. 2011. Matthew Wallingford, Hao Li, Alessandro Achille, Avinash Ravichandran, Charless Fowlkes, Rahul Bhotika, and Stefano Soatto. 
Task adaptive parameter sharing for multi-task learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7561–7570, 2022. Dan Xu, Wanli Ouyang, Xiaogang Wang, and Nicu Sebe. Pad-net: Multi-tasks guided prediction-anddistillation network for simultaneous depth estimation and scene parsing. In *Proceedings of the IEEE* Conference on Computer Vision and Pattern Recognition, pp. 675–684, 2018. Yangyang Xu, Yibo Yang, and Lefei Zhang. Demt: Deformable mixer transformer for multi-task learning of dense prediction. In *Proceedings of the AAAI conference on artificial intelligence*, volume 37, pp. 3072–3080, 2023. Xingyi Yang, Jingwen Ye, and Xinchao Wang. Factorizing knowledge in neural networks. In *European* Conference on Computer Vision, pp. 73–91. Springer, 2022. Yongxin Yang and Timothy Hospedales. A unified perspective on multi-domain and multi-task learning. In 3rd International Conference on Learning Representations, 2015. Yongxin Yang and Timothy Hospedales. Deep multi-task representation learning: A tensor factorisation approach. In *5th International Conference on Learning Representations*, 2017. Hanrong Ye and Dan Xu. Inverted pyramid multi-task transformer for dense scene understanding. In European Conference on Computer Vision, pp. 514–530. Springer, 2022. Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. *Advances in Neural Information Processing Systems*, 33:5824–5836, 2020. Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. Side-tuning: a baseline for network adaptation via additive side networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, pp. 698–714. Springer, 2020. Lijun Zhang, Xiao Liu, and Hui Guan. Automtl: A programming framework for automating efficient multitask learning. 
*Advances in Neural Information Processing Systems*, 35:34216–34228, 2022a. Lijun Zhang, Xiao Liu, and Hui Guan. A tree-structured multi-task model recommender. In International Conference on Automated Machine Learning, pp. 10–1. PMLR, 2022b. Lijun Zhang, Qizheng Yang, Xiao Liu, and Hui Guan. Rethinking hard-parameter sharing in multi-domain learning. In *2022 IEEE International Conference on Multimedia and Expo (ICME)*, pp. 01–06. IEEE, 2022c. Zhenyu Zhang, Zhen Cui, Chunyan Xu, Zequn Jie, Xiang Li, and Jian Yang. Joint task-recursive learning for semantic segmentation and depth estimation. In *Proceedings of the European Conference on Computer* Vision (ECCV), pp. 235–251, 2018. Hanbin Zhao, Hao Zeng, Xin Qin, Yongjian Fu, Hui Wang, Bourahla Omar, and Xi Li. What and where: Learn to plug adapters via nas for multidomain learning. *IEEE Transactions on Neural Networks and* Learning Systems, 33(11):6532–6544, 2021. Yunhan Zhao, Haider Ali, and René Vidal. Stretching domain adaptation: How far is too far? arXiv preprint arXiv:1712.02286, 2017.
Review 1: Summary: This paper introduces a novel Factorized Tensor Network (FTN) that can be integrated into CNNs and Transformers for multi-task and multi-domain learning. The FTN aims to reduce model redundancy in these scenarios. Experimental results demonstrate that using the FTN allows a unified network to achieve accuracy comparable to independent single-task or single-domain networks, while requiring only a minimal increase in parameters. Strengths and Weaknesses: ## Strengths 1. The paper proposes a novel Factorized Tensor Network (FTN) for parameter-efficient multi-task and multi-domain learning with a clear motivation. The addition of task/domain-specific low-rank tensors to shared weights is logically sound. 2. The FTN is versatile, capable of being integrated into both CNNs and Transformers, demonstrating flexibility across different network architectures. 3. Experimental results on multi-task and multi-domain datasets show that the FTN achieves comparable accuracy to baseline and previous methods, with only a minimal increase in additional parameters. 4. The paper is well-organized and presented in a manner that is easy to follow. ## Weaknesses 1. When introducing the FTN, it is crucial to clearly differentiate it from previous works such as those by Wallingford et al. and Kanakis et al. etc. This distinction helps readers understand the novelty and specific benefits of the proposed FTN, rather than just listing related works, which might confuse readers unfamiliar with the area. 2. From the result tables, it appears that the main benefit of the proposed FTN is parameter reduction. The fine-tuning method consistently yields the best results. Is there potential to further improve accuracy in multi-task and multi-domain learning compared to single-task fine-tuning? The goal of multi-task learning is to enhance the performance of each task; otherwise, it is unnecessary. 
Thus, the learning settings should be carefully adjusted, as storage cost is cheap while accuracy and inference speed are more critical in practice. 3. The experiment in Table 2 is not very convincing. Note that all six tasks are the same across different domains in the DomainNet dataset, making individual heads for each task unnecessary. Using a unified head might yield better results in multi-domain learning compared to the Fine Tuning method. Therefore, the baseline methods should be carefully designed to ensure solid empirical experiments. 4. There are typos in Figure 1(c). For example, [shared]->[1 1]->[shared]->[2 2] should likely be [shared]->[1 2]->[shared]->[1 2]? Requested Changes: 1. Conduct a comprehensive review of previous FTN works, providing more detailed information. 2. Carefully adjust the learning settings to highlight the practical advantages of the proposed approach. 3. Design the baseline experiments meticulously to ensure solid empirical validation. 4. Cite the works of the comparison methods when presenting the result tables. Broader Impact Concerns: No. ================================================== Review 2: Summary: This paper proposes a Factorized Tensor Network to boost multi-task learning and multi-domain learning. FTN attains similar accuracy as single-task or single-domain methods while using only a fraction of additional parameters per task. Strengths and Weaknesses: Strengths: + The targeted problem is important. + The paper is easy to follow. Weaknesses: - The idea is not new. The overall contribution is incremental. - In my opinion, an important application for multi-task learning is to get better generalization ability by reusing knowledge in several tasks that are similar and can share knowledge with other tasks. It is not satisfactory to only get similar accuracy with single-task methods. - The experiments are not very comprehensive. What about multi-task applications that are commonly used in the real world? 
For example, object detection? Requested Changes: - The idea is not new. The overall contribution is incremental. - In my opinion, an important application for multi-task learning is to get better generalization ability by reusing knowledge in several tasks that are similar and can share knowledge with other tasks. It is not satisfactory to only get similar accuracy with single-task methods. - The experiments are not very comprehensive. What about multi-task applications that are commonly used in the real world? For example, object detection? Broader Impact Concerns: No ================================================== Review 3: Summary: >> Context: This paper addresses an important technical problem regarding how to adapt a pre-trained model to a new task or domain while requiring as little compute and few additional parameters as possible. It is indeed a relevant question for many practical applications these days. The paper is focused on classification tasks for computer vision. >> Summary: - The paper proposes to use low-rank updates for two different architectures: convolutional networks and Transformers. In the case of CNNs, the updates are applied to convolutional filters and batch norms. In the case of Transformers, updates are applied either to query and value parameter matrices, or to the projection matrix within the attention layer. Every task (or, say, downstream dataset) requires finetuning these low-rank matrices. - The paper offers an interesting comparison for increasing values of R (the rank of the update). We see that higher rank does better (as expected) but performance plateaus fairly quickly (good news, I guess). - Experiments on ResNets (CNN) and ViT (Transformer) suggest FTN outperforms most alternatives in terms of performance while --at the same time-- requiring a significantly lower parameter count overhead. 
Strengths and Weaknesses: I'm not sure what's different between this and previous low-rank adapter approaches, especially LoRA (even though the related work section is thorough). Overall, it seems novelty is limited, while experiments are well explained and interesting. Requested Changes: >> Questions: - What's the exact difference between Transformer FTN and LoRA? I think the paper could do a bit of a better job at explaining the technical difference as, from the outside, both methods seem fairly similar. Is FTN just a different way to decompose the update matrix? - For LoRA (W_upd = BA), one of the parameter matrices (B) is initialized to be identically zero. Accordingly, the initial checkpoint produces the exact same outputs as the pretrained model. In this work, the algorithm randomly initializes all the new parameters (to small values). I wonder what's the effect of this, and whether the authors have ablated other types of initializations closer to that of LoRA. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: This paper introduces a method called FTN for multi-task and/or multi-domain learning, aiming to maintain accuracy while introducing few additional parameters; the paper uses convolutional and transformer network architectures to showcase its efficacy. Three reviewers give thorough and constructive comments. The authors responded and revised the paper accordingly. However, two of the reviewers expected that improved performance would be achieved given the use of additional parameters in FTN; however, this is not well supported by the reported experiments. They also think that the core idea of the paper is not fundamentally different from the LoRA and KAdaptation methods. While the third reviewer suggests an acceptance of the paper, he/she also agrees that the new technical contributions of the paper are limited when compared with LoRA. 
Note that novelty is not a factor for TMLR, but the core idea will gain a limited audience due to insufficient new insights compared to what is known about LoRA and KAdaptation. ==================================================
# A DNN Optimizer that Improves over AdaBelief by Suppression of the Adaptive Stepsize Range

Guoqiang Zhang *g.z.zhang@exeter.ac.uk*
Department of Computer Science, University of Exeter, UK

Kenta Niwa *kenta.niwa@ntt.com*
NTT Communication Science Laboratories, Japan

W. Bastiaan Kleijn *bastiaan.kleijn@vuw.ac.nz*
School of Engineering and Computer Science, Victoria University of Wellington, New Zealand

Reviewed on OpenReview: *https://openreview.net/forum?id=VI2JjIfU37&noteId=8STsR7LV7Y*

## Abstract

We make contributions towards improving adaptive-optimizer performance. Our improvements are based on suppression of the range of adaptive stepsizes in the AdaBelief optimizer. Firstly, we show that the particular placement of the parameter $\epsilon$ within the update expressions of AdaBelief reduces the range of the adaptive stepsizes, making AdaBelief closer to SGD with momentum. Secondly, we extend AdaBelief by further suppressing the range of the adaptive stepsizes. To achieve this goal, we perform mutual layerwise vector projections between the gradient $g_t$ and its first momentum $m_t$ before using them to estimate the second momentum. The new optimization method is referred to as *Aida*. Thirdly, extensive experimental results show that Aida outperforms nine optimizers when training transformers and LSTMs for NLP, and VGG and ResNet for image classification over CIFAR10 and CIFAR100, while matching the best performance of the nine methods when training WGAN-GP models for image generation tasks. Furthermore, Aida produces higher validation accuracies than AdaBelief for training ResNet18 over ImageNet. Our implementation is available at https://github.com/guoqiang-zhang-x/Aida-Optimizer.

## 1 Introduction

In the last decade, stochastic gradient descent (SGD) and its variants have been widely applied in deep learning LeCun et al. (2015); Vaswani et al. (2017); et al. (2016); Chen et al. (2020) due to their simplicity and effectiveness. 
In the literature, SGD with momentum Sutskever et al. (2013); Polyak (1964) dominates over other optimizers for image classification tasks He et al. (2015); Wilson et al. (2017). Suppose the objective function $f(\theta): \theta\in\mathbb{R}^d$ of a DNN model is differentiable. Its update expression for minimising $f(\theta)$ can be represented as

$$[\textbf{SGD with momentum}]\;\left\{\begin{array}{l}m_t=\beta_t m_{t-1}+g_t\\ \theta_t=\theta_{t-1}-\eta_t m_t\end{array}\right.\tag{1}$$

where $g_t=\nabla f(\theta_{t-1})$ is the gradient at $\theta_{t-1}$, and $\eta_t$ is the common stepsize for all the coordinates of $\theta$. In practice, the above method is often combined with a certain stepsize scheduling method for $\eta_t$ when training DNNs.

To bring flexibility to SGD with momentum, an active research trend is to introduce elementwise adaptive stepsizes for all the coordinates of $m_t$ in (1), referred to as *adaptive optimization* Duchi et al. (2011); Tieleman & Hinton (2012); Kingma & Ba (2017). In the literature, Adam Kingma & Ba (2017) is probably the most popular adaptive optimization method (e.g., Vaswani et al. (2017); Liu et al. (2021); Zhang et al. (2019); J. Devlin & Toutanova (2018)). Its update expression can be written as

$$[\textbf{Adam}]\;\left\{\begin{array}{l}m_t=\beta_1 m_{t-1}+(1-\beta_1)g_t\\ v_t=\beta_2 v_{t-1}+(1-\beta_2)g_t^2\\ \theta_t=\theta_{t-1}-\eta_t\frac{1}{1-\beta_1^t}\frac{m_t}{\sqrt{v_t/(1-\beta_2^t)}+\epsilon}\end{array}\right.\tag{2}$$

where $g_t=\nabla f(\theta_{t-1})$, $0<\beta_1,\beta_2<1$, and $\epsilon>0$. The two vector operations $(\cdot)^2$ and $\cdot/\cdot$ are performed in an elementwise manner. The two exponential moving averages (EMAs) $m_t$ and $v_t$ are alternatively referred to as the first and second momentum. The two quantities $1-\beta_1^t$ and $1-\beta_2^t$ are introduced to compensate for the estimation bias in $m_t$ and $v_t$, respectively. 
$\eta_t$ is the common stepsize while $1/(\sqrt{v_t/(1-\beta_2^t)}+\epsilon)\in\mathbb{R}^d$ represents the elementwise adaptive stepsizes. Due to the great success of Adam in training DNNs, various extensions of Adam have been proposed, including AdamW Loshchilov & Hutter (2019), NAdam Dozat (2016), Yogi Zaheer et al. (2018), MSVAG Balles & Hennig (2017), Fromage Bernstein et al. (2020), and AdaBelief Zhuang et al. (2020). It is worth noting that in Liu et al. (2019), the authors found that better generalization could be achieved by reducing the variance of the adaptive stepsizes of Adam. In doing so, they suggested multiplying a rectified scalar by $m_t$ when computing $\theta_t$ in (2) when the variance is large, which is referred to as RAdam. The AdaBound method of Luo et al. (2019) is designed to avoid extremely large and small adaptive stepsizes of Adam, which has a similar effect as RAdam. In practice, AdaBound works as an adaptive method at the beginning of the training process and gradually transforms to SGD with momentum, where all the adaptive stepsizes tend to converge to a single value. Conceptually speaking, both RAdam and AdaBound aim to reduce the range of the adaptive stepsizes of Adam to mimic the convergence behavior of SGD with momentum to a certain extent. Inspired by the above work Liu et al. (2019); Luo et al. (2019), we consider suppressing the range of adaptive stepsizes of AdaBelief. It is noted that AdaBelief extends Adam by tracking the EMA of the squared prediction error $(m_t-g_t)^2$. The update expressions of AdaBelief are given by Zhuang et al. 
(2020)

$$[\textbf{AdaBelief}]\;\left\{\begin{array}{l}m_t=\beta_1 m_{t-1}+(1-\beta_1)g_t\\ s_t=\beta_2 s_{t-1}+(1-\beta_2)(m_t-g_t)^2+\epsilon\\ \theta_t=\theta_{t-1}-\eta_t\frac{1}{1-\beta_1^t}\frac{m_t}{\sqrt{s_t/(1-\beta_2^t)}+\epsilon}\end{array}\right.\tag{3}$$

We emphasise that the parameter $\epsilon$ is involved in the computation of both $s_t$ and $\theta_t$ in AdaBelief, which is different from Adam. The work of Zhuang et al. (2020) does not study the impact of $\epsilon$ in the computation of $s_t$, and mainly focuses on the motivation for tracking the EMA of $(m_t-g_t)^2$ instead of the EMA of $g_t^2$ employed in Adam.

In this paper, we make three contributions. Firstly, we explain why it is important to include the parameter $\epsilon$ in the computation of $s_t$ in (3), a design that will be inherited by our new algorithm Aida as described later on. We show via a Taylor expansion that the inclusion of $\epsilon$ in the computation of $s_t$ essentially suppresses the range of the adaptive stepsizes of AdaBelief. The above property makes AdaBelief closer to SGD with momentum. Secondly, we perform layerwise vector projections to further suppress the range of adaptive stepsizes of AdaBelief. Let us denote the subvectors of $(m_t, g_t)$ for the $l$th layer of a DNN model as $(m_{l,t}, g_{l,t})$. We perform $K$ mutual vector projections to obtain $(m_{l,t}^{(K)}, g_{l,t}^{(K)})$ for the $l$th layer, starting from $(m_{l,t}^{(0)}, g_{l,t}^{(0)})=(m_{l,t}, g_{l,t})$. As an extension of AdaBelief, we then track and employ the EMA (or equivalently the second momentum) of $(m_{l,t}^{(K)}-g_{l,t}^{(K)})^2$ for the $l$th layer; the resulting method is referred to as *Aida*.³ The new method has the property that the adaptive stepsizes within each neural layer have smaller statistical variance, and the layerwise averages of the adaptive stepsizes are more compact across all the neural layers than in the reference method. A discussion on the benefit of the above property is provided in Subsection 3.1. 
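For concreteness, a single AdaBelief update following (3) can be sketched in a few lines of NumPy. This is our own minimal sketch, not the authors' reference implementation; the function name and default hyperparameters are illustrative choices.

```python
import numpy as np

def adabelief_step(theta, m, s, g, t, eta=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One elementwise AdaBelief update as in (3), for iteration t >= 1.

    Note the two placements of eps: it is added inside the EMA s_t (the
    "first" eps) and again in the denominator of the parameter update
    (the "second" eps).
    """
    m = beta1 * m + (1 - beta1) * g
    s = beta2 * s + (1 - beta2) * (m - g) ** 2 + eps               # first eps
    theta = theta - eta / (1 - beta1 ** t) * m / (np.sqrt(s / (1 - beta2 ** t)) + eps)  # second eps
    return theta, m, s
```

Running this step on the toy quadratic $f(\theta)=\frac{1}{2}\|\theta\|_2^2$, whose gradient is simply $\theta$, drives the iterate toward the origin.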
In addition, a convergence analysis for Aida is conducted in Subsection 3.3. As an example, Figs. 1 and 2 demonstrate that Aida indeed produces a more compact range of adaptive stepsizes than AdaBelief and Adam for training VGG11 over CIFAR10. Furthermore, the adaptive stepsizes of Aida become increasingly more compact as the iteration increases.

²As an example, considering Adam at iteration $t$, the layerwise average of adaptive stepsizes for the $l$th layer of VGG11 is computed as $\frac{1}{d_l}\sum_{i=1}^{d_l}1/(\sqrt{v_{l,t}[i]/(1-\beta_2^t)}+\epsilon)$, where $v_{l,t}\in\mathbb{R}^{d_l}$ is the subvector of the second momentum $v_t\in\mathbb{R}^d$ for the $l$th layer.
³It is named after an Italian opera by Verdi.

![2_image_0.png](2_image_0.png)

Figure 1: Comparison of layerwise average² of adaptive stepsizes for the 11 neural layers of VGG11 by training over CIFAR10 for 200 epochs. See Appendix C for the parameter setups of the three methods, where the optimal parameter $\epsilon$ for Adam was selected from a discrete set to give the best validation performance. The jumps in the curves at 100 and 160 epochs are due to the change in the common stepsize. Aida has a much more compact range of layerwise average stepsizes than Adam and AdaBelief, respectively.

![2_image_1.png](2_image_1.png)

Figure 2: Comparison of layerwise standard deviations (stds) of adaptive stepsizes for the 11 neural layers by training VGG11 over CIFAR10 for 200 epochs. Aida has much smaller layerwise stds than Adam and AdaBelief, respectively.

Thirdly, extensive experimental results show that Aida with K = 2 yields considerably better performance than nine optimization methods for training transformer Vaswani et al. 
(2017) and LSTM Hochreiter & Schmidhuber (1997) models in natural language processing (NLP) tasks, and VGG11 Simonyan & Zisserman (2014) and ResNet34 in image classification tasks over CIFAR10 and CIFAR100. It is also found that Aida matches the best performance of the nine methods when training WGAN-GP models in image generation tasks. Lastly, Aida outperforms AdaBelief when training ResNet18 on the large ImageNet dataset.

Notations: We use small bold letters to denote vectors. The $\ell_2$ and $\ell_\infty$ norms of a vector $y$ are denoted as $\|y\|_2$ and $\|y\|_\infty$, respectively. Given an $L$-layer DNN model $\theta$ of dimension $d$, we use $\theta_l$ of dimension $d_l$ to denote the subvector of $\theta$ for the $l$th layer. Thus, $\sum_{l=1}^{L} d_l = d$. The $i$th element of $\theta_l$ is represented by $\theta_l[i]$. The notation $[L]$ stands for the set $[L]=\{1,\ldots,L\}$. Finally, the angle between two vectors $y$ and $x$ of the same dimension is denoted by $\angle_{xy}$.

## 2 Impact of ϵ in the Computation of $s_t$ in AdaBelief

In this section, we study the impact of $\epsilon$ in computing the second momentum $s_t$ in (3), a study that is missing in Zhuang et al. (2020). By inspection of (3), one can see that the parameter $\epsilon$ appears twice in the update expressions, once for computing $s_t$ and once for computing $\theta_t$. The impact of the second $\epsilon$ can be ignored due to the fact that $\sqrt{\epsilon/(1-\beta_2^t)}\gg\epsilon$ when $\epsilon$ is sufficiently small (e.g., $\epsilon=10^{-8}$). As a result, we only need to focus on the first $\epsilon$, used when computing $s_t$. Next, we show that the first $\epsilon$ in the computation of $s_t$ helps to suppress the range of adaptive stepsizes of AdaBelief. 
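As a numerical preview of this effect (our own toy construction, not an experiment from the paper), the snippet below runs the EMA recursion of $s_t$ with and without the first $\epsilon$ on artificially small squared prediction errors $(m_t-g_t)^2$, and checks that with the first $\epsilon$ the bias-corrected adaptive stepsize cannot exceed $1/\sqrt{\epsilon/(1-\beta_2)}$, whereas without it the stepsize grows far larger.

```python
import numpy as np

beta2, eps, T = 0.999, 1e-8, 1000
rng = np.random.default_rng(0)

s_with, s_without = 0.0, 0.0
for t in range(1, T + 1):
    err2 = (1e-6 * rng.standard_normal()) ** 2         # toy squared prediction error (m_t - g_t)^2
    s_with = beta2 * s_with + (1 - beta2) * err2 + eps     # first eps included, as in (3)
    s_without = beta2 * s_without + (1 - beta2) * err2     # first eps omitted

bias = 1 - beta2 ** T
step_with = 1 / (np.sqrt(s_with / bias) + eps)
step_without = 1 / (np.sqrt(s_without / bias) + eps)
bound = 1 / np.sqrt(eps / (1 - beta2))   # upper bound induced by the accumulated eps

# With the first eps, the adaptive stepsize stays below the bound;
# without it, tiny prediction errors yield a far larger stepsize.
print(step_with <= bound, step_without > step_with)  # prints: True True
```

The accumulated $\epsilon$ contributes exactly $\epsilon(1-\beta_2^T)/(1-\beta_2)$ to `s_with`, which after bias correction floors the denominator at $\sqrt{\epsilon/(1-\beta_2)}$; this is what caps the stepsize.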
To this purpose, we reformulate the update expressions in (3) as

$$\left\{\begin{array}{l}m_t=\beta_1 m_{t-1}+(1-\beta_1)g_t\\ \hat{s}_t=\beta_2\hat{s}_{t-1}+(1-\beta_2)(m_t-g_t)^2\\ r_t=\beta_2 r_{t-1}+\epsilon\\ \theta_t=\theta_{t-1}-\eta_t\frac{1}{1-\beta_1^t}\frac{m_t}{\sqrt{(\hat{s}_t+r_t)/(1-\beta_2^t)}}\end{array}\right.\tag{4}$$
Generally speaking, small elements of sˆt lead to large adaptive stepsizes while large elements lead to small adaptive stepsizes due to the inverse operation 1/(·). It is clear from (7) that for small elements of sˆt, the second term in the denominator is relatively large, implicitly penalizing large stepsizes. Furthermore, (6) indicates that those large stepsizes are upper-bounded by the quantity 1/pϵ/(1 − β2). In contrast, for large elements of sˆt, the second term is relatively small, thus avoiding extremely small adaptive stepsizes. In short, including ϵ in the computation of st suppresses the range of adaptive stepsizes in AdaBelief by avoiding extremely small stepsizes. Fig. 3 demonstrates that when the first ϵ is removed from (3) in AdaBelief, the resulting method AdaBelief∗indeed has a broader range of adaptive stepsizes than AdaBelief. At epoch 200, the eleven layerwise average stepsizes in AdaBelief∗are distributed in [190,1000] while ten out of eleven layerwise average stepsizes in AdaBelief are close to a single value of 320. That is, the first ϵ in (3) indeed makes the adaptive sizes of AdaBelief more compact. ## 3 Algorithmic Design We showed in the previous section that the particular placement of ϵ in the update expressions of AdaBelief suppresses the range of the adaptive stepsizes. In this section, we develop a new technique to further reduce the range of adaptive stepsizes of AdaBelief, which is referred to as *layerwise vector projections*. The new method is named *Aida*. A convex convergence analysis is presented at the end of the section. $\quad\quad\text{Adam:}\:\Big\{\pmb{g}^2_{l,t}|t\geq0\Big\}$ $\quad\quad\text{AdaBelief:}\:\big\{(\pmb{m}_{l,t}-\pmb{g}_{l,t})^2|t\geq0\big\}$ $\quad\quad\text{Aida:}\:\Big\{(\pmb{m}^{(K)}_{l,t}-\pmb{g}^{(K)}_{l,t})^2|t\geq0\Big\}$ . 
![4_image_0.png](4_image_0.png)

Figure 4: Computation of {(m_{l,t}^{(k)}, g_{l,t}^{(k)}) | k ∈ [K]} by starting from the pair (ml,t, gl,t) via sequential and alternating vector projections in Aida.

## 3.1 Motivation

Our aim is to design a new adaptive optimization algorithm in which the range of the adaptive stepsizes is smaller than that of AdaBelief. To achieve this goal, we consider processing mt and gt in a layerwise manner at iteration t before using them to estimate the second momentum vt. Due to the nature of back-propagation when training a DNN model θ, it is computationally more efficient to perform layerwise processing than to operate on the entire vectors mt and gt. Intuitively speaking, when the variance of the adaptive stepsizes within a particular neural layer is encouraged to be small by layerwise manipulation, the update of the model parameters within the same neural layer becomes relatively robust to elementwise gradient outliers (exploding or vanishing gradient elements across iterations).

The recent work You et al. (2020) proposed the LARS and LAMB optimizers. They are, respectively, extensions of SGD with momentum and Adam that introduce layerwise normalization in the update of a DNN model per iteration. The purpose of the layerwise normalization in You et al. (2020) is to provide robustness to exploding layerwise gradients and plateaus in the scenario of extremely large batchsizes for training large DNN models. In contrast to You et al. (2020), our work considers the scenario of normal training batchsizes.

We now study what kind of layerwise operation is desirable for reducing the layerwise variance of the adaptive stepsizes in AdaBelief. Firstly, we note that the parameter ϵ of (3) in AdaBelief essentially defines an upper bound on the adaptive stepsizes and is independent of the neural layer and iteration indices.
By inspection of (4)-(6), the upper bound can be expressed as

$$\frac{1}{\sqrt{r_{t}/(1-\beta_{2}^{t})}+\epsilon}=\frac{1}{\sqrt{\epsilon/(1-\beta_{2})}+\epsilon}.\tag{8}$$

We use (ml,t, gl,t) to denote the subvectors of (mt, gt) for the lth neural layer. Suppose we track the EMA of (γl,t ml,t − βl,t gl,t)² for the lth layer, where 0 < γl,t, βl,t ≤ 1 are functions of (ml,t, gl,t), instead of the EMA of (ml,t − gl,t)² tracked in AdaBelief. In the extreme case that the scalars {γl,t, βl,t} are sufficiently small, all the adaptive stepsizes of the new method tend to approach the upper bound in (8). As a result, the new method has a smaller range of adaptive stepsizes than AdaBelief, both in a layerwise manner and globally.

We propose to compute the scalars (γl,t, βl,t) of ml,t and gl,t via K mutual-vector projections starting from (ml,t, gl,t) (see Fig. 4 for a demonstration). In practice, it is found that K = 2 is sufficient to produce small scalars (γl,t, βl,t), leading to a smaller range of adaptive stepsizes than those of AdaBelief and Adam. The parameter K of Aida in Fig. 1-2 was set to K ∈ {1, 2}. In the following, the update expressions of Aida are presented in detail.

## 3.2 Aida As An Extension Of Adabelief

Consider the lth layer of a DNN model at iteration t. We perform a sequence of mutual-vector projections to obtain a set of projected vectors {(m_{l,t}^{(k)}, g_{l,t}^{(k)}) | k ∈ [K]}, starting from the initial pair (m_{l,t}^{(0)}, g_{l,t}^{(0)}) = (ml,t, gl,t).
Using algebra, the two projected vectors at step k + 1 can be represented as

$$\mathbf{m}_{l,t}^{(k+1)}=\frac{\langle\mathbf{g}_{l,t}^{(k)},\mathbf{m}_{l,t}^{(k)}\rangle}{\|\mathbf{g}_{l,t}^{(k)}\|_{2}^{2}+\xi}\mathbf{g}_{l,t}^{(k)}\tag{9}$$

$$\mathbf{g}_{l,t}^{(k+1)}=\frac{\langle\mathbf{g}_{l,t}^{(k)},\mathbf{m}_{l,t}^{(k)}\rangle}{\|\mathbf{m}_{l,t}^{(k)}\|_{2}^{2}+\xi}\mathbf{m}_{l,t}^{(k)},\tag{10}$$

where ⟨·, ·⟩ denotes the inner product, and ξ > 0 is a scalar parameter that ensures the division operations are valid. The two projections (9)-(10) ensure that the resulting projected vectors share the same vector direction as either ml,t or gl,t. See Fig. 4 for a visualisation.

Remark 1. *We note that the K mutual-vector projections starting from the pair of gl,t and ml,t can, in principle, be realized by scaling the two vectors via the Kth power of the cosine of ∠gl,tml,t, which would make the additional computational overhead roughly constant as a function of K. In practice, we found that a positive value of the parameter ξ in (9) and (10) of the mutual-vector projections improves the generalization performance; for simplicity, ξ was set to 1e−20 across all the experiments in the paper. We refer the reader to Appendix E for supplementary experiments on utilizing the Kth power of the cosine of ∠gl,tml,t to perform the scaling. The empirical study in Subsection 4.3 indicates that Aida with K = 2 only introduces a small computational overhead compared to AdaBelief, which makes the formulation of the mutual-vector projections (9) and (10) practically affordable.*

Once (m_{l,t}^{(K)}, g_{l,t}^{(K)}) are obtained for the lth layer, Aida tracks the EMA of the squared difference (m_{l,t}^{(K)} − g_{l,t}^{(K)})², given by

$$\mathbf{v}_{l,t}=\beta_{2}\mathbf{v}_{l,t-1}+\left(1-\beta_{2}\right)\left(\mathbf{m}_{l,t}^{(K)}-\mathbf{g}_{l,t}^{(K)}\right)^{2}+\epsilon,\tag{11}$$

where 1 > β2 > 0, and ϵ > 0 is added as recommended by our earlier analysis.
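To make the effect of (9)-(10) concrete, the following NumPy sketch is our own illustration (not the authors' implementation); the vectors are random examples, and the value of ξ follows the one mentioned in Remark 1.

```python
import numpy as np

# Illustrative sketch (ours) of the mutual-vector projections (9)-(10):
# each round projects m onto the direction of g and g onto the direction
# of m, using the previous pair for both updates; xi guards the divisions.
def mutual_projections(m, g, K=2, xi=1e-20):
    for _ in range(K):
        coef = g @ m
        m, g = coef / (g @ g + xi) * g, coef / (m @ m + xi) * m
    return m, g

rng = np.random.default_rng(0)
m = rng.standard_normal(5)
g = rng.standard_normal(5)
mK, gK = mutual_projections(m, g, K=2)

# Projections can only shorten the vectors and shrink their difference,
# which is what compresses the resulting adaptive stepsizes.
assert np.linalg.norm(mK) <= np.linalg.norm(m)
assert np.linalg.norm(gK) <= np.linalg.norm(g)
assert np.linalg.norm(mK - gK) <= np.linalg.norm(m - g)
```

Each round scales both norms by |cos ∠gm|, so repeated projections drive the tracked difference toward zero unless m and g are collinear.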
With vl,t, the model parameters θl,t of the lth layer can be updated accordingly. See Algorithm 1 for a summary of Aida.

Next, we consider the geometric properties of the set of projected vectors. It is not difficult to show that, after projection, the resulting vectors have either shorter or equal length compared to the original vectors:

$$\|\mathbf{m}_{l,t}^{(k)}\|_{2}\leq\|\mathbf{m}_{l,t}^{(k-1)}\|_{2}\quad\text{and}\quad\|\mathbf{g}_{l,t}^{(k)}\|_{2}\leq\|\mathbf{g}_{l,t}^{(k-1)}\|_{2}.\tag{12}$$

Using the fact that mutual projections of two vectors do not change the angle between them, we then have

$$\|\mathbf{m}_{l,t}^{(k)}-\mathbf{g}_{l,t}^{(k)}\|_{2}\leq\|\mathbf{m}_{l,t}^{(k-1)}-\mathbf{g}_{l,t}^{(k-1)}\|_{2},\tag{13}$$

where the equality holds if ml,t and gl,t are on the same line and ξ can be ignored in (9)-(10). For the extreme case that each neural layer has only one parameter (i.e., gl,t ∈ R, ∀l ∈ [L]), it is easy to show that the projection operation has no effect. That is, (g_{l,t}^{(k)} − m_{l,t}^{(k)})² = (gl,t − ml,t)² for all k ∈ [K] if ξ is ignored in (9)-(10). In this case, Aida reduces to AdaBelief.

From the above analysis, we can conclude that the EMA of (m_{l,t}^{(K)} − g_{l,t}^{(K)})² for the lth layer can be viewed as the EMA of (γK,l,t ml,t − βK,l,t gl,t)², where the scalars γK,l,t, βK,l,t ∈ (0, 1]. In general, the angles {∠gl,tml,t | t ≥ 0} are non-zero due to the randomness introduced by the minibatch training strategy in a typical DNN task. As a result, increasing the number K of vector projections causes the elements of {γK,l,t, βK,l,t | t ≥ 0} to approach zero. In other words, the parameter K controls the range of the adaptive stepsizes of Aida: a larger K makes the adaptive stepsizes more compact. Figs. 1 and 2 provide empirical evidence that Aida does indeed have a smaller range of adaptive stepsizes than AdaBelief and Adam. Furthermore, as K increases from 1 to 2, the range of adaptive stepsizes of Aida becomes increasingly compact.
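Combining the pieces of this section, one Aida iteration for a single layer can be sketched as follows. This is our own illustrative implementation, not the authors' code; grad_fn, the toy quadratic objective, and the hyper-parameter values are placeholder assumptions.

```python
import numpy as np

# Single-layer sketch (ours) of one Aida iteration, combining the momentum
# update, K mutual vector projections (9)-(10), the EMA (11) with epsilon
# folded in, bias corrections, and the parameter step.
def aida_step(theta, m, v, t, grad_fn, eta=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-16, xi=1e-20, K=2):
    g = grad_fn(theta)
    m = beta1 * m + (1 - beta1) * g           # first momentum
    mk, gk = m, g
    for _ in range(K):                        # mutual vector projections
        coef = gk @ mk
        mk, gk = coef / (gk @ gk + xi) * gk, coef / (mk @ mk + xi) * mk
    v = beta2 * v + (1 - beta2) * (mk - gk) ** 2 + eps   # EMA (11)
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    return theta - eta * m_hat / np.sqrt(v_hat), m, v

# Toy convex problem f(theta) = 0.5 * ||theta||^2 with gradient theta.
theta0 = np.array([1.0, -2.0, 3.0])
theta, m, v = theta0.copy(), np.zeros(3), np.zeros(3)
for t in range(1, 501):
    theta, m, v = aida_step(theta, m, v, t, lambda x: x)
assert np.linalg.norm(theta) < np.linalg.norm(theta0)  # progress toward the minimum
```

A production implementation would loop this per-layer update over all layers of the model, as in Algorithm 1.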
Hence, Aida builds a bridge between SGD with momentum and AdaBelief. As will be demonstrated in the experiments, Aida improves the generalization of Adam and AdaBelief for several classical DNN tasks.

Algorithm 1 Aida: Suppressing the range of adaptive stepsizes of AdaBelief by layerwise vector projections

Input: β1, β2, ηt, ϵ > 0, ξ = 1e−20, K = 2
Init.: θ0 ∈ R^d, m0 = 0, v0 = ṽ0 = 0 ∈ R^d
for t = 1, 2, . . . , T do
  gt ← ∇f(θt−1)
  mt ← β1 mt−1 + (1 − β1) gt
  for l = 1, . . . , L do
    m_{l,t}^{(0)} = m_{l,t}, g_{l,t}^{(0)} = g_{l,t}
    for k = 1, . . . , K do
      m_{l,t}^{(k)} = ⟨m_{l,t}^{(k−1)}, g_{l,t}^{(k−1)}⟩/(∥g_{l,t}^{(k−1)}∥₂² + ξ) · g_{l,t}^{(k−1)}
      g_{l,t}^{(k)} = ⟨m_{l,t}^{(k−1)}, g_{l,t}^{(k−1)}⟩/(∥m_{l,t}^{(k−1)}∥₂² + ξ) · m_{l,t}^{(k−1)}
    end for
    v_{l,t} ← β2 v_{l,t−1} + (1 − β2)(m_{l,t}^{(K)} − g_{l,t}^{(K)})² + ϵ
  end for
  m̃t ← mt/(1 − β1^t), {ṽ_{l,t} ← v_{l,t}/(1 − β2^t)}_{l=1}^{L}
  θt ← θt−1 − ηt m̃t/√(ṽt)
end for
Output: θT

## 3.3 Convergence Analysis

In this paper, we study the convergence of Aida for convex optimization. Our analysis follows a strategy similar to that used to analyse AdaBelief in Zhuang et al. (2020).

Theorem 1. Suppose {θt}_{t=0}^{T} and {vt}_{t=0}^{T} are the iterative updates obtained by Aida⁴ starting with (m0, v0) = (0, 0). Let 0 ≤ β1t = β1λ^t < 1, 0 ≤ β2 < 1, and ηt = η/√t. Assume (1): f(θ) is a differentiable convex function with ∥gt∥∞ ≤ G∞/2 (hence ∥m_{l,t}^{(K)} − g_{l,t}^{(K)}∥∞ ≤ G∞) for all t ∈ [T]; (2): the updates {θt}_{t=0}^{T} and the optimal solution θ* are bounded by a hyper-sphere, i.e., ∥θt∥2 ≤ D and ∥θ*∥2 ≤ D; (3): 0 < c ≤ ṽt[i] ≤ ṽt−1[i] for all i ∈ {1, . . . , d} and t ∈ [T]. Denote θ̄T = (1/T) Σ_{t=0}^{T−1} θt and g²_{1:T}[i] = ((g1[i])², . . . , (gT[i])²) ∈ R^T.
We then have the following bound on the regret:

$$f(\bar{\boldsymbol{\theta}}_{T})-f(\boldsymbol{\theta}^{*})\leq\frac{D^{2}d(G_{\infty}+\sqrt{\epsilon})}{\eta(1-\beta_{1})(1-\beta_{2})T}+\frac{D^{2}d(G_{\infty}+\sqrt{\epsilon})}{\sqrt{T}\eta(1-\beta_{1})(1-\beta_{2})}+\frac{(1+\beta_{1})}{2(1-\beta_{1})T}\frac{\eta\sqrt{1+\log T}}{\sqrt{c}(1-\beta_{1})^{2}}\sum_{i=1}^{d}\|\mathbf{g}_{1:T}^{2}[i]\|_{2}+\frac{D^{2}\beta_{1}d(G_{\infty}+\sqrt{\epsilon})}{T(1-\beta_{1})(1-\beta_{2})\eta(1-\lambda)^{2}}.\tag{14}$$

Proof. See Appendix B for the proof. □

4β1 in Algorithm 1 is generalized to β1t, t ≥ 0, to facilitate the convergence analysis.

We note that the major difference between our analysis for Aida and that for AdaBelief in Zhuang et al. (2020) is that we assume ṽt[i] ≤ ṽt−1[i], while the analysis in Zhuang et al. (2020) essentially utilizes the condition ṽt[i] ≥ ṽt−1[i]. Our motivation for the new condition is that as the iteration index t increases, the gradient gt tends to approach zero, which makes ṽt approach zero.

## 4 Experiments

We evaluated Aida on three types of DNN tasks: (1) natural language processing (NLP) by training transformer and LSTM models; (2) image classification by training VGG and ResNet He et al. (2015) models; (3) image generation by training WGAN-GP Gulrajani et al. (2017). Two open-source repositories were used for the above DNN training tasks. The first repository5 was adopted for the task of training a transformer. The second one6 was used for all the remaining tasks and includes the original implementation of AdaBelief. The second repository was used to compare AdaBelief with SGD with momentum and seven adaptive optimization algorithms from the literature, namely Yogi Zaheer et al. (2018), RAdam Liu et al. (2019), MSVAG Balles & Hennig (2017), Fromage Bernstein et al. (2020), Adam Kingma & Ba (2017), AdaBound Luo et al. (2019), and AdamW Loshchilov & Hutter (2019).
We compared our Aida algorithm with all of the above optimization methods. We now briefly explain the selection of hyper-parameters for the tested optimizers; detailed information is provided in Appendix D. For the experiments using the second repository, the tested hyper-parameters were inherited from the AdaBelief source code. Specifically, in the AdaBelief source code, the optimal parameter ϵ of AdaBelief was in general set differently for different DNN tasks. The setting of ϵ in Aida follows the AdaBelief setup in most cases and hence was also configured differently for different tasks. To make a fair comparison with Aida and AdaBelief, the optimal parameter ϵ was searched for each of the other adaptive optimizers for each DNN task. For each optimizer, we stopped the search when its performance dropped significantly, indicating that the value of ϵ was either too large or too small. Lastly, when training LSTM models in the AdaBelief repository, the initial learning rate η0 of the tested optimizers was searched together with ϵ; consequently, we also searched the initial learning rate for those optimizers when training LSTMs.

It was found that Aida with K = 2 outperforms the nine reference methods for training transformer, LSTM, VGG11, and ResNet34 models, while it matches the best performance of the nine methods for training WGAN-GP. Furthermore, experiments on training ResNet18 on the large ImageNet dataset show that Aida outperforms AdaBelief. The time complexity of Aida was evaluated for training VGG11 and ResNet34 on a 2080 Ti GPU. In brief, Aida with K = 2 consumed 25% more time per epoch than AdaBelief.

## 4.1 On Training A Transformer

In this task, we consider the training of a transformer for WMT16 multimodal translation, using the first open-source repository indicated in the footnote.
In the training process, we retained almost all of the default hyper-parameters provided in the repository except for the batch size; due to limited GPU memory, we changed the batch size from 256 to 200. The parameters of Aida were set to (η0, β1, β2, ϵ) = (0.001, 0.9, 0.98, 1e−16). The parameter setups for the other optimizers can be found in Table 7 of Appendix D, where certain hyper-parameters for each optimizer were searched over discrete sets to optimize the validation performance. For example, the parameter ϵ of Adam was searched over the set {1e−6, 1e−7, . . . , 1e−16}, while the remaining parameters were set to (η0, β1, β2) = (0.001, 0.9, 0.98) as in Aida. Once the optimal parameter configuration for each optimizer was obtained by searching, three experimental repetitions were performed to alleviate the effect of randomness.

It is clear from Table 1 and Fig. 5 that Aida significantly outperforms all other methods. We emphasize that the maximum number of epochs was set to 400 for each optimizer, following the default setup in the repository, and no epoch cutoff was performed in favor of Aida. When K increases from 1 to 2 in Aida, the method converges considerably faster and produces better validation performance, which may be because Aida with K = 2 has a more compact range of adaptive stepsizes. On the other hand, the non-adaptive method SGD with momentum produces performance inferior to all adaptive methods except Fromage and MSVAG.

## 4.2 On Training Lstms

In this experiment, we consider training LSTMs with different numbers of layers on the Penn TreeBank dataset Marcus et al. (1993), utilizing the second open-source repository for AdaBelief. As explained earlier, when training LSTMs in the original source code, both the initial learning rate η0 and the parameter ϵ were searched for the adaptive optimizers. In this work, we followed a similar procedure.
See Table 8 in Appendix D for a summary of the fixed and free parameters for each optimizer. An example is Adam, for which η0 ∈ {0.01, 0.001} and ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} were tested to find the configuration that produces the best validation performance.

5https://github.com/jadore801120/attention-is-all-you-need-pytorch
6https://github.com/juntang-zhuang/Adabelief-Optimizer

Table 1: Performance comparison for training the transformer.

| Optimizer | Performance |
|---|---|
| SGD (non-adaptive) | 55.58±0.34 |
| AdaBound | 55.90±0.21 |
| Yogi | 60.47±0.61 |
| RAdam | 64.47±0.19 |
| MSVAG | 53.79±0.13 |
| Fromage | 35.57±0.19 |
| Adam | 64.71±0.57 |
| AdamW | 64.49±0.24 |
| AdaBelief | 66.90±0.77 |
| Aida(K=1) | 68.77±0.16 |
| Aida(K=2) | 68.96±0.06 |

![8_image_0.png](8_image_0.png)

Figure 5: Performance visualisation of Aida, Adam, and AdaBelief for the training of the transformer. See Fig. 7 in Appendix F for the plots of the associated learning curves versus runtime.

Table 2: Validation perplexity on Penn Treebank for 1-, 2-, and 3-layer LSTMs. **Lower** is better.

| | Aida(K=1) | Aida(K=2) | AdaBelief | Adam | AdamW | SGD (non-adaptive) | RAdam | Yogi | AdaBound | MSVAG | Fromage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 layer | 82.27 | 81.53 | 84.21 | 84.28 | 88.36 | 85.52 | 88.76 | 86.78 | 84.52 | 84.75 | 85.20 |
| 2 layer | 66.16 | 65.04 | 66.29 | 66.86 | 73.18 | 67.44 | 74.12 | 71.56 | 67.01 | 68.91 | 72.22 |
| 3 layer | 61.98 | 60.18 | 61.23 | 64.28 | 70.08 | 63.68 | 70.41 | 67.83 | 63.16 | 65.04 | 67.37 |

![8_image_1.png](8_image_1.png)

Figure 6: Performance visualisation of Aida, AdaBelief, and Adam in Table 2. See Fig.
8 in Appendix F for the plots of the associated learning curves versus runtime.

Table 4: Best FID obtained for each optimizer (lower is better).

| Optimizer | Aida(K=2) | Aida(K=1) | AdaBelief | Adam | RAdam | AdaBound | AdamW | MSVAG | SGD | Yogi | Fromage |
|---|---|---|---|---|---|---|---|---|---|---|---|
| best FID | 55.7 | 55.65 | 56.73 | 66.71 | 69.14 | 61.65 | 63.76 | 69.47 | 90.61 | 68.34 | 78.47 |

Table 3: Validation accuracy (val. acc, in percent) and time cost per epoch (t. c.) for training VGG11 and ResNet34 over CIFAR10 and CIFAR100.

| | CIFAR10 | | | | CIFAR100 | | | |
|---|---|---|---|---|---|---|---|---|
| optimizers | VGG11 val. acc | t. c. | ResNet34 val. acc | t. c. | VGG11 val. acc | t. c. | ResNet34 val. acc | t. c. |
| SGD (non-adaptive) | 91.36±0.07 | 5.83 | 95.48±0.11 | 30.45 | 67.02±0.25 | 5.85 | 78.10±0.18 | 30.92 |
| Yogi | 90.74±0.16 | 6.49 | 94.98±0.26 | 31.74 | 65.57±0.17 | 6.42 | 77.17±0.12 | 32.20 |
| RAdam | 89.58±0.10 | 6.28 | 94.64±0.18 | 31.21 | 63.62±0.20 | 6.29 | 74.87±0.13 | 31.58 |
| MSVAG | 90.04±0.22 | 7.08 | 94.65±0.08 | 33.78 | 62.67±0.33 | 7.19 | 75.57±0.14 | 33.80 |
| Fromage | 89.72±0.25 | 6.66 | 94.64±0.07 | 35.19 | 62.93±0.53 | 6.56 | 74.84±0.27 | 35.50 |
| Adam | 91.20±0.21 | 6.15 | 95.09±0.18 | 31.28 | 67.88±0.13 | 6.20 | 77.31±0.14 | 31.47 |
| AdamW | 89.46±0.08 | 6.25 | 94.48±0.18 | 31.71 | 62.50±0.23 | 6.31 | 74.29±0.20 | 31.80 |
| AdaBound | 90.48±0.12 | 6.71 | 94.73±0.16 | 33.75 | 64.80±0.42 | 6.73 | 76.15±0.10 | 33.78 |
| AdaBelief | 91.55±0.13 | 6.47 | 95.15±0.11 | 31.66 | 68.05±0.31 | 6.49 | 77.32±0.37 | 31.74 |
| Aida(K=1) | 91.52±0.05 | 7.27 | 95.31±0.05 | 35.25 | 68.89±0.09 | 7.32 | 77.50±0.12 | 35.46 |
| Aida(K=2) | 91.68±0.16 | 7.95 | 95.57±0.13 | 39.53 | 69.02±0.11 | 8.01 | 78.86±0.12 | 39.64 |

We emphasize that the parameters of Aida were set to (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 1e−16),
whereas η0 and ϵ remain the same for all numbers of neural layers in the LSTMs. Table 2 summarises the obtained validation perplexities of the ten methods for training 1-, 2-, and 3-layer LSTMs. It was found that for each optimizer, independent experimental repetitions led to almost the same validation perplexity value. Therefore, we report only the average of the validation perplexity values from three independent experimental repetitions for each experimental setup in the table and omit the standard deviations.

It is clear from Table 2 that Aida again outperforms all other methods in all three scenarios. Fig. 6 further visualises the validation performance of Aida compared to AdaBelief and Adam. The performance gain of Aida is considerable in all three scenarios. One observes that in the initial training stage, Aida converges more slowly than AdaBelief only for the 1-layer LSTM. This is because the optimal parameters (η0, ϵ) of AdaBelief differ across the numbers of neural layers. In particular, the optimal setups of AdaBelief are (η0, ϵ) = (0.001, 1e−16) for the 1-layer LSTM and (η0, ϵ) = (0.01, 1e−12) for the 2-layer and 3-layer LSTMs. When the parameters of AdaBelief for the 2-layer and 3-layer LSTMs are also set to (η0, ϵ) = (0.001, 1e−16), in line with the setup of Aida, Aida also converges more slowly than AdaBelief at the beginning of the training process.

## 4.3 On Training Vgg11 And Resnet34 Over Cifar10 And Cifar100

In this task, the ten optimizers were evaluated following a similar experimental setup as in Zhuang et al. (2020). In particular, the batch size and the number of epochs were set to 128 and 200, respectively. The common stepsize ηt is multiplied by 0.1 at epochs 100 and 160. The detailed parameter setups for the optimizers can be found in Table 10 in Appendix D. Three experimental repetitions were conducted for each optimizer to alleviate the effect of randomness.
Both the validation performance and the algorithmic complexity are summarised in Table 3. It is clear that Aida with K = 2 consistently outperforms the nine reference methods in terms of validation accuracy at the cost of additional computational time. This demonstrates that the compact range of adaptive stepsizes in Aida does indeed improve the generalization performance. We can also conclude from the table that SGD with momentum is the most computationally efficient method. On the other hand, due to the layerwise vector projections, Aida with K = 2 consumed 25% more time per epoch than AdaBelief.

## 4.4 On Training Wgan-Gp Over Cifar10

This task focuses on training WGAN-GP, where the setup β1 = 0.5 follows the original setting for training AdaBelief. The parameters of Aida were set to (ηt, β1, β2, ϵ) = (0.0002, 0.5, 0.999, 1e−12), in agreement with the recommended setup of AdaBelief in the original open-source repository. The other eight optimizers have both fixed and free parameters, the details of which can be found in Table 9 of Appendix D. As an example, the free parameter of Adam is ϵ ∈ {1e−4, 1e−6, . . . , 1e−14}. For each parameter configuration of an optimizer, three experimental repetitions were performed due to the relatively unstable Fréchet inception distance (FID) scores when training WGAN-GP.

Table 4 shows the best FID for each method. It can be seen from the table that Aida with K ∈ {1, 2} provides better performance than AdaBelief, while the other methods perform significantly worse. It is worth noting that the FID for AdaBelief is better than the result reported in Zhuang et al. (2020), which may be due to different versions of the Python packages.

## 4.5 On Training Resnet18 Over Imagenet

In the last experiment, we investigated the performance gain of Aida compared to AdaBelief for training ResNet18 on the large ImageNet dataset. The maximum number of epochs and the minibatch size were set to 90 and 256, respectively.
The common stepsize ηt is multiplied by 0.1 at epochs 70 and 80. The parameter setup for the two optimizers can be found in Table 6 of Appendix D. To make a fair comparison, the parameter ϵ of AdaBelief was searched over the discrete set {1e−8, 1e−9, 1e−10} instead of using the recommended setup of ϵ = 1e−8 from the original AdaBelief repository. Three experimental repetitions were conducted for each optimizer to mitigate the effect of randomness.

It is seen from Table 5 that for the large ImageNet dataset, Aida performs slightly better than AdaBelief, indicating that the performance gain of Aida is robust against different sizes of datasets. We note that the performance of AdaBelief is slightly lower than the result reported in Zhuang et al. (2020), where no standard error is given. This might be because we repeated the experiments three times per optimizer, or because different versions of the Python packages were used, which are difficult to track.

From an overall perspective, all five experiments in our paper demonstrate that Aida consistently outperforms AdaBelief. The performance gain is significant for certain tasks and small for others. Our results suggest that proper manipulation of the range of the adaptive stepsizes of an adaptive optimizer improves generalization performance. We hope our work will lead to the development of related techniques to improve the performance of other existing optimizers.

Table 5: Validation accuracies (in percentage) of AdaBelief and Aida for training ResNet18 over ImageNet.

| optimizers | AdaBelief | Aida(K = 2) |
|---|---|---|
| val. acc. | 69.65±0.06 | 69.70±0.08 |

## 5 Conclusions

In this paper, we have shown that the range of the adaptive stepsizes of DNN optimizers has a significant impact on performance. The proposed Aida optimizer suppresses the range of the adaptive stepsizes of AdaBelief, making it closer to SGD with momentum.
Our experimental results indicate that Aida can produce better performance across a wide range of DNN-based applications. In the design of the Aida optimizer, we track the EMA (or equivalently, the second momentum) of (γl,t ml,t − βl,t gl,t)² for the lth layer of a DNN model, as opposed to (ml,t − gl,t)² used in AdaBelief, where γl,t, βl,t ∈ (0, 1] are obtained by vector projections. Consequently, the adaptive stepsizes of Aida have a more compact range than those of AdaBelief. Our empirical study shows that Aida with K = 2 outperforms nine optimizers, including Adam and AdaBelief, for training transformer, LSTM, VGG11, and ResNet34 models, while at the same time it matches the best performance of the nine methods for training WGAN-GP models. In addition, experiments on training ResNet18 over the large ImageNet dataset show that Aida performs better than AdaBelief. On the other hand, it was found that the non-adaptive method SGD with momentum only produces good performance when training VGG and ResNet models. This suggests that the *adaptivity* of Aida is important, allowing the method to effectively train different types of DNN models.

## References

L. Balles and P. Hennig. Dissecting Adam: The sign, magnitude and variance of stochastic gradients. arXiv preprint arXiv:1705.07774, 2017.

J. Bernstein, A. Vahdat, Y. Yue, and M.-Y. Liu. On the distance between two neural networks and the stability of learning. arXiv preprint arXiv:2002.03432, 2020.

N. Chen, Y. Zhang, H. Zen, R. J. Weiss, M. Norouzi, and W. Chan. WaveGrad: Estimating gradients for waveform generation. arXiv:2009.00713, September 2020.

T. Dozat. Incorporating Nesterov momentum into Adam. In *International Conference on Learning Representations (ICLR)*, 2016.

J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, 12:2121–2159, 2011.

D. Silver et al.
Mastering the game of Go with deep neural networks and tree search. *Nature*, 529(7587):484–489, 2016.

I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. In *Advances in Neural Information Processing Systems*, pp. 5767–5777, 2017.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2015.

S. Hochreiter and J. Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805, 2018.

D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980v9, 2017.

Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. *Nature*, 521:436–444, 2015.

L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.

Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In *International Conference on Computer Vision (ICCV)*, 2021.

I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In *ICLR*, 2019.

L. Luo, Y. Xiong, Y. Liu, and X. Sun. Adaptive gradient methods with dynamic bound of learning rate. In *ICLR*, 2019.

M. Marcus, B. Santorini, and M. A. Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank, 1993. URL https://catalog.ldc.upenn.edu/docs/LDC95T7/cl93.html.

B. T. Polyak. Some methods of speeding up the convergence of iteration methods. *USSR Computational Mathematics and Mathematical Physics*, 4:1–17, 1964.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

I. Sutskever, J. Martens, G. Dahl, and G.
Hinton. On the importance of initialization and momentum in deep learning. In *International Conference on Machine Learning (ICML)*, 2013.

T. Tieleman and G. Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. arXiv:1706.03762 [cs.CL], 2017.

A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht. The marginal value of adaptive gradient methods in machine learning. In *31st Conference on Neural Information Processing Systems (NIPS)*, 2017.

Y. You, J. Li, S. Reddi, J. Hseu, S. Kumar, S. Bhojanapalli, X. Song, J. Demmel, K. Keutzer, and C.-J. Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. In *ICLR*, 2020.

M. Zaheer, S. Reddi, D. Sachan, S. Kale, and S. Kumar. Adaptive methods for nonconvex optimization. In *Advances in Neural Information Processing Systems (NeurIPS)*, pp. 9793–9803, 2018.

J. Zhang, S. P. Karimireddy, A. Veit, S. Kim, S. J. Reddi, S. Kumar, and S. Sra. Why ADAM beats SGD for attention models. Submitted to ICLR, 2019.

J. Zhuang, T. Tang, Y. Ding, S. Tatikonda, N. Dvornek, X. Papademetris, and J. S. Duncan. AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients. In *NeurIPS*, 2020.

## A Update Procedure Of Adabelief*

The first ϵ is removed from (3) to verify whether AdaBelief* has a broader range of adaptive stepsizes than AdaBelief.

Algorithm 2 AdaBelief*
1: Input: β1, β2, ηt, ϵ > 0
2: Init.: θ0 ∈ R^d, m0 = 0, q0 = 0 ∈ R^d
3: for t = 1, 2, . . . , T do
4:   gt ← ∇f(θt−1)
5:   mt ← β1 mt−1 + (1 − β1) gt
6:   qt ← β2 qt−1 + (1 − β2)(mt − gt)²
7:   m̃t ← mt/(1 − β1^t), q̃t ← qt/(1 − β2^t)
8:   θt ← θt−1 − ηt m̃t/(√(q̃t) + ϵ)
9: end for
10: Output: θT

## B Convex Convergence Analysis

Firstly, we note that the bias term 1 − β1^t in the update expressions of Aida in Algorithm 1 can be absorbed into the common stepsize ηt. Therefore, we ignore the bias term in the following proof. Suppose θ* is the optimal solution of the convex optimization problem, i.e., θ* = arg minθ f(θ). Using the fact that θt = θt−1 − ηt ṽt^{−1/2} mt, we have

$$\begin{array}{rl}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}&=\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\eta_{t}\tilde{\mathbf{v}}_{t}^{-1/2}\mathbf{m}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}\\ &=\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\eta_{t}^{2}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}-2\eta_{t}\langle\beta_{1t}\mathbf{m}_{t-1}+(1-\beta_{1t})\mathbf{g}_{t},\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*}\rangle\\ &=\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\eta_{t}^{2}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}-2\eta_{t}(1-\beta_{1t})\langle\mathbf{g}_{t},\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*}\rangle-2\eta_{t}\beta_{1t}\langle\mathbf{m}_{t-1},\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*}\rangle\\ &\leq\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\eta_{t}^{2}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}-2\eta_{t}(1-\beta_{1t})\langle\mathbf{g}_{t},\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*}\rangle+\eta_{t}^{2}\beta_{1t}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t-1}\|_{2}^{2}+\beta_{1t}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2},\end{array}\tag{15}$$

where the inequality uses the Cauchy-Schwartz inequality 2⟨a, b⟩ ≤ ∥a∥²₂ + ∥b∥²₂. Note that (15) corresponds to (2) in the appendix of Zhuang et al. (2020) for AdaBelief. Summing (15) from t = 1 until t = T, rearranging the quantities, and exploiting the property that gt = ∇f(θt−1) and f(·) is convex gives
$$f(\bar{\boldsymbol{\theta}}_{T})-f(\boldsymbol{\theta}^{*})=f\Big(\frac{1}{T}\sum_{t=0}^{T-1}\boldsymbol{\theta}_{t}\Big)-f(\boldsymbol{\theta}^{*})\stackrel{(a)}{\leq}\frac{1}{T}\sum_{t=1}^{T}\big(f(\boldsymbol{\theta}_{t-1})-f(\boldsymbol{\theta}^{*})\big)\stackrel{(b)}{\leq}\frac{1}{T}\sum_{t=1}^{T}\langle\mathbf{g}_{t},\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*}\rangle$$

$$\leq\frac{1}{T}\sum_{t=1}^{T}\Big[\frac{1}{2\eta_{t}(1-\beta_{1t})}\big(\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}-\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}\big)+\frac{\eta_{t}}{2(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}+\frac{\eta_{t}\beta_{1t}}{2(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t-1}\|_{2}^{2}+\frac{\beta_{1t}}{2\eta_{t}(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}\Big]$$

(using β11 = β1)

$$=\frac{1}{2\eta(1-\beta_{1})T}\|\tilde{\mathbf{v}}_{1}^{1/4}(\boldsymbol{\theta}_{0}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\frac{1}{T}\sum_{t=1}^{T-1}\Big(\frac{1}{2\eta_{t+1}(1-\beta_{1(t+1)})}\|\tilde{\mathbf{v}}_{t+1}^{1/4}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}-\frac{1}{2\eta_{t}(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}\Big)+\frac{1}{T}\sum_{t=1}^{T}\Big[\frac{\eta_{t}}{2(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}+\frac{\eta_{t}\beta_{1t}}{2(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t-1}\|_{2}^{2}+\frac{\beta_{1t}}{2\eta_{t}(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}\Big]$$

(condition: 0 ≤ ṽt[i] ≤ ṽt−1[i] for all i = 1, . . . , d, and 0 ≤ β1(t+1) ≤ β1t < 1)

$$\leq\frac{1}{2\eta(1-\beta_{1})T}\|\tilde{\mathbf{v}}_{1}^{1/4}(\boldsymbol{\theta}_{0}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\frac{1}{T}\sum_{t=1}^{T-1}\Big(\frac{1}{\eta_{t+1}}-\frac{1}{\eta_{t}}\Big)\frac{1}{2(1-\beta_{1t})}\|\tilde{\mathbf{v}}_{t+1}^{1/4}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\frac{1}{T}\sum_{t=1}^{T}\Big[\frac{\eta_{t}}{2(1-\beta_{1})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}+\frac{\eta_{t}\beta_{1}}{2(1-\beta_{1})}\|\tilde{\mathbf{v}}_{t-1}^{-1/4}\mathbf{m}_{t-1}\|_{2}^{2}+\frac{\beta_{1t}}{2\eta_{t}(1-\beta_{1})}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}\Big]$$

(condition: ηt = η/√t, 0 ≤ β1(t+1) ≤ β1t < 1, m0 = 0)

$$\leq\frac{1}{2\eta(1-\beta_{1})T}\|\tilde{\mathbf{v}}_{1}^{1/4}(\boldsymbol{\theta}_{0}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\frac{1}{2T\eta(1-\beta_{1})}\sum_{t=1}^{T-1}\big(\sqrt{t+1}-\sqrt{t}\big)\|\tilde{\mathbf{v}}_{t+1}^{1/4}(\boldsymbol{\theta}_{t}-\boldsymbol{\theta}^{*})\|_{2}^{2}+\frac{1}{T}\sum_{t=1}^{T}\frac{\eta_{t}(1+\beta_{1})}{2(1-\beta_{1})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}+\frac{1}{T(1-\beta_{1})}\sum_{t=1}^{T}\frac{\beta_{1t}}{2\eta_{t}}\|\tilde{\mathbf{v}}_{t}^{1/4}(\boldsymbol{\theta}_{t-1}-\boldsymbol{\theta}^{*})\|_{2}^{2}$$

(condition: ∥θ*∥∞ ≤ D, ∥θt∥∞ ≤ D)

$$\leq\frac{D^{2}}{\eta(1-\beta_{1})T}\sum_{i=1}^{d}(\tilde{v}_{1}[i])^{1/2}+\frac{D^{2}}{T\eta(1-\beta_{1})}\sum_{t=1}^{T-1}\big(\sqrt{t+1}-\sqrt{t}\big)\sum_{i=1}^{d}(\tilde{v}_{t+1}[i])^{1/2}+\frac{1}{T}\sum_{t=1}^{T}\frac{\eta_{t}(1+\beta_{1})}{2(1-\beta_{1})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}+\frac{D^{2}}{T(1-\beta_{1})}\sum_{t=1}^{T}\frac{\beta_{1t}}{\eta_{t}}\sum_{i=1}^{d}(\tilde{v}_{t}[i])^{1/2}$$

$$\stackrel{(c)}{\leq}\frac{D^{2}d(G_{\infty}+\sqrt{\epsilon})}{\eta(1-\beta_{1})(1-\beta_{2})T}+\frac{D^{2}d(G_{\infty}+\sqrt{\epsilon})}{\sqrt{T}\eta(1-\beta_{1})(1-\beta_{2})}+\frac{1}{T}\sum_{t=1}^{T}\frac{\eta_{t}(1+\beta_{1})}{2(1-\beta_{1})}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}+\frac{D^{2}\beta_{1}d(G_{\infty}+\sqrt{\epsilon})}{T(1-\beta_{1})(1-\beta_{2})\eta(1-\lambda)^{2}},\tag{16}$$

where both steps (a) and (b) use the convexity of f(·), and step (c) uses the following conditions:

$$\|\mathbf{g}_{t}\|_{\infty}\leq G_{\infty}/2\;\Rightarrow\;\|\mathbf{m}_{l,t}^{(K)}-\mathbf{g}_{l,t}^{(K)}\|_{\infty}\leq G_{\infty}\;\Rightarrow\;\|\tilde{\mathbf{v}}_{t}\|_{\infty}\leq(G_{\infty}+\sqrt{\epsilon})^{2}/(1-\beta_{2})\tag{17}$$

$$\sum_{t=1}^{T}\frac{\beta_{1t}}{\eta_{t}}\leq\frac{\beta_{1}}{\eta}\sum_{t=1}^{T}\lambda^{t-1}\sqrt{t}\leq\frac{\beta_{1}}{\eta}\sum_{t=1}^{T}\lambda^{t-1}t\leq\frac{\beta_{1}}{\eta(1-\lambda)^{2}},$$

where the first condition in (17) is derived using the expression

$$\tilde{\mathbf{v}}_{l,t}=\frac{\mathbf{v}_{l,t}}{1-\beta_{2}^{t}}=\frac{\mathbf{s}_{l,t}}{1-\beta_{2}^{t}}+\frac{\epsilon}{1-\beta_{2}},\quad\text{where}\;\;\mathbf{s}_{l,t}=\beta_{2}\mathbf{s}_{l,t-1}+(1-\beta_{2})\big(\mathbf{m}_{l,t}^{(K)}-\mathbf{g}_{l,t}^{(K)}\big)^{2}.$$

Next, we consider the quantity Σ_{t=1}^{T} ηt∥ṽt^{−1/4}mt∥²₂ in (16), whose upper bound is given in Zhuang et al. (2020):

Lemma 1 (Equ. (4) in the appendix of Zhuang et al. (2020)). Let g²_{1:T}[i] = ((g1[i])², . . . , (gT[i])²) ∈ R^T. Under the three assumptions given in the theorem, we have

$$\sum_{t=1}^{T}\eta_{t}\|\tilde{\mathbf{v}}_{t}^{-1/4}\mathbf{m}_{t}\|_{2}^{2}\leq\frac{\eta\sqrt{1+\log T}}{\sqrt{c}(1-\beta_{1})^{2}}\sum_{i=1}^{d}\|\mathbf{g}_{1:T}^{2}[i]\|_{2}.\tag{18}$$

Finally, plugging (18) into (16) produces the regret upper bound in the theorem.
The proof is complete.

## C Parameter Setups For Optimization Methods In Fig. 1-3

The common stepsize ηt is dropped by a factor 0.1 at 100 and 160 epochs. The optimal parameter ϵ is searched over the set {10−2, 10−3, . . . , 10−15} for Adam and AdaBelief∗.

| optimizer | fixed parameters | searched parameters |
|---|---|---|
| Adam | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {10−2, 10−3, . . . , 10−15} |
| AdaBelief∗ | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {10−2, 10−3, . . . , 10−15} |
| AdaBelief | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 10−8) | |
| Aida (K = 1) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 10−8) | |
| Aida (K = 2) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 10−8) | |

Table 6: Parameter-setups for training ResNet18 over ImageNet. The weight decay was set to 10−2.

| optimizer | fixed parameters | searched parameters |
|---|---|---|
| AdaBelief | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {10−8, 10−9, 10−10} |
| Aida (K = 2) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 10−9) | |

## D Parameter-Setups For Training Different Dnn Models

Table 7: Parameter-setups for training a Transformer. The weight decay for AdamW was set to 5e−4 while the weight decay for all other algorithms was set to 0.0. The setup β2 = 0.98 follows directly from the first open-source repository. One observes that the search grids of ϵ are different for different optimizers. This is because we stopped searching for a particular optimizer when the performance dropped significantly as ϵ is either too large or too small.

| optimizer | fixed parameters | searched parameters |
|---|---|---|
| AdaBound | (η0, β1, β2, γ) = (0.001, 0.9, 0.98, 0.001) | ϵ ∈ {1e−6, 1e−7, . . . , 1e−16}, final_stepsize ∈ {0.1, 0.01, 0.001} |
| Yogi | (η0, β1, β2) = (0.001, 0.9, 0.98) | ϵ ∈ {1e−2, 1e−3, . . . , 1e−16} |
| SGD | momentum = 0.9 | η0 ∈ {1.0, 0.1, 0.01, 0.001} |
| RAdam | (η0, β1, β2) = (0.001, 0.9, 0.98) | ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| MSVAG | (η0, β1, β2) = (0.001, 0.9, 0.98) | ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| Fromage | | η0 ∈ {0.1, 0.01, 0.001, 0.0001} |
| Adam | (η0, β1, β2) = (0.001, 0.9, 0.98) | ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| AdamW | (η0, β1, β2) = (0.001, 0.9, 0.98) | ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| AdaBelief | (η0, β1, β2) = (0.001, 0.9, 0.98) | ϵ ∈ {1e−8, 1e−9, . . . , 1e−16} |
| Aida (K = 1) (our) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.98, 1e−16) | |
| Aida (K = 2) (our) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.98, 1e−16) | |

Table 8: Parameter-setups for training LSTMs. The weight decay for every algorithm was set to 1.2e−6.

| optimizer | fixed parameters | searched parameters |
|---|---|---|
| AdaBound | (β1, β2, γ) = (0.9, 0.999, 0.001) | η0 ∈ {0.01, 0.001}, ϵ ∈ {1e−6, 1e−7, . . . , 1e−16}, final_stepsize ∈ {0.1, 3, 30} |
| Yogi | (β1, β2) = (0.9, 0.999) | η0 ∈ {0.01, 0.001}, ϵ ∈ {1e−2, 1e−3, 1e−4, 1e−16} |
| SGD | momentum = 0.9 | η0 ∈ {30, 3, 1, 0.1} |
| RAdam | (β1, β2) = (0.9, 0.999) | η0 ∈ {0.01, 0.001}, ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| MSVAG | (β1, β2) = (0.9, 0.999) | η0 ∈ {30, 1, 0.01, 0.001}, ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| Fromage | | η0 ∈ {0.1, 0.01, 0.001} |
| Adam | (β1, β2) = (0.9, 0.999) | η0 ∈ {0.01, 0.001}, ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| AdamW | (β1, β2) = (0.9, 0.999) | η0 ∈ {0.01, 0.001}, ϵ ∈ {1e−6, 1e−7, . . . , 1e−16} |
| AdaBelief | (β1, β2) = (0.9, 0.999) | η0 ∈ {0.01, 0.001}, ϵ ∈ {1e−8, 1e−9, . . . , 1e−16} |
| Aida (K = 1) (our) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 1e−16) | |
| Aida (K = 2) (our) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 1e−16) | |

Table 9: Parameter-setups for training WGAN-GP. The weight decay for AdamW was set to 5e−4 while the weight decay for all other algorithms was set to 0.0. We stopped searching the parameter ϵ for a particular adaptive optimizer when the FID score dropped significantly for either too large or too small ϵ values.

| optimizer | fixed parameters | searched parameters |
|---|---|---|
| AdaBound | (η0, β1, β2, γ) = (0.0002, 0.5, 0.999, 0.001) | ϵ ∈ {1e−2, 1e−4, . . . , 1e−14}, final_stepsize ∈ {0.1, 0.01} |
| Yogi | (η0, β1, β2) = (0.0002, 0.5, 0.999) | ϵ ∈ {1e−2, 1e−3, 1e−4, 1e−14} |
| SGD | momentum ∈ {0.3, 0.5, 0.9} | η0 ∈ {0.1, 0.02, 0.002, 0.0002} |
| RAdam | (η0, β1, β2) = (0.0002, 0.5, 0.999) | ϵ ∈ {1e−4, 1e−6, . . . , 1e−14} |
| MSVAG | (β1, β2) = (0.5, 0.999) | η0 ∈ {0.1, 0.02, 0.002, 0.0002}, ϵ ∈ {1e−4, 1e−6, . . . , 1e−14} |
| Fromage | | η0 ∈ {0.1, 0.01, 0.001} |
| Adam | (η0, β1, β2) = (0.0002, 0.5, 0.999) | ϵ ∈ {1e−4, 1e−6, . . . , 1e−14} |
| AdamW | (η0, β1, β2) = (0.0002, 0.5, 0.999) | ϵ ∈ {1e−4, 1e−6, . . . , 1e−14} |
| AdaBelief | (η0, β1, β2, ϵ) = (0.0002, 0.5, 0.999, 1e−12) | |
| Aida (K = 1) (our) | (η0, β1, β2, ϵ) = (0.0002, 0.5, 0.999, 1e−12) | |
| Aida (K = 2) (our) | (η0, β1, β2, ϵ) = (0.0002, 0.5, 0.999, 1e−12) | |

Table 10: Parameter-setups for training VGG and ResNet models over CIFAR10 and CIFAR100. The weight decay for AdamW was set to 0.01 while the weight decay for all other algorithms was set to 5e−4. As we mentioned in the main paper, we stopped searching the parameter ϵ when the validation performance dropped significantly for either too large or too small ϵ values.

| optimizer | fixed parameters | searched parameters |
|---|---|---|
| AdaBound | (η0, β1, β2, γ) = (0.001, 0.9, 0.999, 0.001) | ϵ ∈ {1e−2, 1e−3, . . . , 1e−9}, final_stepsize ∈ {0.1, 0.01} |
| Yogi | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {1e−1, 1e−2, . . . , 1e−9} |
| SGD | (momentum, η0) = (0.9, 0.1) | |
| RAdam | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {1e−2, 1e−3, . . . , 1e−9} |
| MSVAG | (η0, β1, β2) = (0.1, 0.9, 0.999) | ϵ ∈ {1e−2, 1e−3, . . . , 1e−9} |
| Fromage | | η0 ∈ {0.1, 0.01, 0.001} |
| Adam | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {1e−2, 1e−3, . . . , 1e−9} |
| AdamW | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {1e−2, 1e−3, . . . , 1e−9} |
| AdaBelief | (η0, β1, β2) = (0.001, 0.9, 0.999) | ϵ ∈ {1e−8, 1e−9, 1e−10} |
| Aida (K = 1) (our) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 1e−9) | |
| Aida (K = 2) (our) | (η0, β1, β2, ϵ) = (0.001, 0.9, 0.999, 1e−9) | |

Algorithm 3 Aida∗: Scaling (ml,t, gl,t) via power of cos ∠ml,tgl,t
Input: β1, β2, ηt, ϵ > 0, ξ = 1e−20, K = 2
Init.: θ0 ∈ R^d, m0 = 0, v0 = ṽ0 = 0 ∈ R^d
for t = 1, 2, . . . , T do
  gt ← ∇f(θt−1)
  mt ← β1mt−1 + (1 − β1)gt
  for l = 1, . . .
, L do
    γl,t = ⟨ml,t, gl,t⟩/(∥ml,t∥∥gl,t∥ + ξ)
    vl,t ← β2vl,t−1 + (1 − β2)(γl,t^K ml,t − γl,t^K gl,t)² + ϵ   [the Kth power of γl,t is employed for scaling]
  end for
  m̃t ← mt/(1 − β1^t),  {ṽl,t ← vl,t/(1 − β2^t)} for l = 1, . . . , L
  θt ← θt−1 − ηt m̃t/√ṽt
end for
Output: θT

## E Performance Investigation By Scaling (ml,t, gl,t) Via Power Of cos ∠ml,tgl,t

In this section, we consider scaling gl,t and ml,t in the computation of the second momentum in AdaBelief by utilizing the power of cos ∠gl,tml,t, which is computed as

$$\cos\angle g_{l,t}m_{l,t}=\gamma_{l,t}=\langle g_{l,t},m_{l,t}\rangle/(\|g_{l,t}\|\|m_{l,t}\|+\xi),\tag{19}$$

where the parameter ξ is introduced to avoid division by zero. The modified optimizer is referred to as Aida∗ (see Alg. 3 above for the complete update procedure). The only difference between Aida and Aida∗ is the scaling of the layerwise vectors (ml,t, gl,t). We performed additional experiments by applying Aida∗ to train ResNet34 and LSTMs. The performance results are provided in Tables 11 and 12, respectively. It is seen that the validation accuracies (the higher the better) for training ResNet34 over both CIFAR10 and CIFAR100 are degraded slightly in comparison to Aida. Also, the test perplexities (the lower the better) for training 1-layer and 3-layer LSTMs are slightly degraded. The above experiments suggest that the parameter ξ in (9) and (10) in the mutual-vector projections does indeed make a difference in performance. In our work, we recommend the implementation in Alg. 1.

Table 11: Performance comparison of Aida and Aida∗ for training ResNet34. The values in the table represent validation accuracies (the higher the better).

| optimizers | CIFAR10 | CIFAR100 |
|---|---|---|
| Aida (K = 2) | **95.57**±0.13 | **78.86**±0.12 |
| Aida∗ (K = 2) | 95.43±0.12 | 78.44±0.34 |

Table 12: Performance comparison of Aida and Aida∗ for training LSTMs. The values in the table represent test perplexity (the lower the better).

| optimizers | 1-layer | 2-layer | 3-layer |
|---|---|---|---|
| Aida (K = 2) | **81.53** | 65.04 | **60.18** |
| Aida∗ (K = 2) | 81.58 | **64.85** | 60.43 |

## F Additional Plots Of Learning Curves Versus Runtime

Figures 5 and 6 only plot the learning curves of Adam, AdaBelief, and Aida over epochs. Since Aida needs additional computational overhead compared to AdaBelief, it is interesting to also study the learning curves of Adam, AdaBelief, and Aida over runtime. Relevant results are shown in Fig. 7 and 8. It is clear that even though Aida requires a slightly longer learning time, the method produces a significant validation performance gain for training both Transformers and LSTMs.

Figure 7: Visualisation of the learning curves versus runtime for Aida, AdaBelief, and Adam when training the Transformer ((a): train perplexity vs. runtime; (b): validation accuracy vs. runtime). The maximum number of epochs is set to 400 in all three methods. The execution time for each epoch takes into account both the training and validation time.

Figure 8: Visualisation of the learning curves versus runtime for Aida, AdaBelief, and Adam when training LSTMs. The maximum number of epochs is set to 200 in all three methods. The execution time for each epoch takes into account both the training and validation time.
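To make the cosine-based scaling of Alg. 3 concrete, here is a minimal NumPy sketch of one second-moment update for a single layer treated as a flat vector. The function name, default constants, and the flat-vector treatment of a layer are our own illustrative choices, not the authors' reference implementation:

```python
import numpy as np

def aida_star_second_moment(v_prev, m, g, beta2=0.999, eps=1e-8, xi=1e-20, K=2):
    """Sketch of an Aida*-style layerwise second-moment update (cf. Alg. 3):
    scale both m and g by the K-th power of the cosine of their angle
    before taking the AdaBelief-style squared difference."""
    # gamma = cos of the angle between m and g; xi avoids division by zero
    gamma = np.dot(m, g) / (np.linalg.norm(m) * np.linalg.norm(g) + xi)
    # gamma^K * m - gamma^K * g = gamma^K * (m - g)
    diff = (gamma ** K) * (m - g)
    return beta2 * v_prev + (1.0 - beta2) * diff ** 2 + eps
```

When m and g are closely aligned (γ ≈ 1) the update reduces to the AdaBelief one, while a large angle between them shrinks the contribution of (m − g)², which is the mechanism that suppresses the spread of the resulting adaptive stepsizes.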
Review 1:
Summary: The paper proposes a novel optimizer that reduces the range of adaptive stepsizes in each layer using a layerwise vector-projection-based method. The authors evaluate the proposed method on various tasks, such as image classification, image generation, and NLP, and demonstrate that it outperforms some popular optimizers.
Strengths and Weaknesses:
Strengths:
The proposed method is very easy to follow, and I think the main difference comes from the projection of g and m. The idea of vector projections between the gradient g_t and first momentum m_t is also very interesting. If related work exists, the authors could provide some discussion in the revision. The paper also evaluates the proposed method on several different tasks, such as computer vision (image classification, image generation) and NLP.
Weaknesses:
The motivation is not very clear to me. For example, why do we need to suppress the range of adaptive stepsizes? As you mentioned, AdaBelief tried to solve this problem, so why do you also try to solve it? What is the main problem in AdaBelief that you mainly focus on and try to solve?
Some claims need more explanation. For example, "The new method has the nice property that the adaptive stepsizes within each neural layer have smaller statistical variance." My question is why we hope the adaptive stepsizes in each layer have a smaller variance, as the adaptive stepsizes for different parameters in the same layer are naturally different.
Some related layerwise optimizers have been proposed, and the authors could discuss these methods in the revision.
The improvement of Aida on ImageNet is not significant, and it is unclear whether the proposed method can work well on large-scale datasets. The authors could provide some discussion on this. Additionally, the reported accuracy of AdaBelief in this paper is lower than the result in the original paper.
Requested Changes: Typo: the definition of g after Eq. 2: g = \nabla f(\theta)
Broader Impact Concerns: NA
==================================================
Review 2:
Summary: The paper investigates the AdaBelief algorithm, specifically how a small constant added to the squared-gradient moving-average equation (middle equation in Eq. 3) in AdaBelief significantly impacts optimization and generalization. Based on the gathered insight, the authors then propose an additional change to the AdaBelief algorithm, and call the new algorithm Aida. A convergence proof is shown for the proposed algorithm. Finally, the authors show experiments using Aida and previous optimizers and claim improved optimization and generalization performance.
Strengths and Weaknesses:
Strengths:
- The proposed optimizer achieves better generalization and faster convergence in some cases.
- The analysis of AdaBelief is interesting.
Weaknesses:
- The motivation behind the specific approach of iterative projection, proposed by the authors for their method Aida, is unclear. Specifically, after making the observation that the epsilon term in AdaBelief helps keep the adaptive step size range small (which helps generalization), the authors propose to perform this iterative projection of g and m (the gradient and the moving average of the gradient, respectively) onto each other. The resulting g and m are in the same direction as the original g or m (depending on the number of iterative projection steps). Importantly, the result of this iterative projection is that the scale of m and g becomes smaller in magnitude, which the authors also discuss in Section 3.1. In fact, the iterative projections are computationally redundant, since the cosine of the angle between m and g during this iterative process remains unchanged.
Therefore, I wonder why the authors chose to propose this iterative projection approach, as opposed to (say) scaling g and m using a scalar hyperparameter multiplied with the cosine of the angle between g and m. That would have been both simpler and computationally cheaper.
- My above point is further strengthened by the fact that the authors find that K=2 already yields good results (where K is the number of iterative projection steps). I understand that one advantage of using this iterative approach instead of directly specifying a scaling hyperparameter is that K=2 is a much simpler choice than choosing a floating-point number as a hyperparameter. If the authors choose to use this justification, then I suggest the following changes: 1. replace the dot product in the iterative projection step with the equivalent vector norms times cosine, which does not require redundant computation; 2. explicitly mention the reasoning for proposing this approach as described above. Note that while a convergence analysis is provided for the proposed approach, it merely shows sufficiency, and does not imply that the proposed iterative approach is a necessary one.
- In almost all plots, Aida converges faster than other methods. However, in Figure 6a Aida has a much slower convergence initially. This was a bit concerning to me.
Requested Changes: See above. It would also be helpful to have a discussion on why reducing the adaptive step size range improves optimization and generalization.
Broader Impact Concerns: N/A
==================================================
Review 3:
Summary: This paper aims at improving adaptive-stepsize optimizers. Inspired by previous work that links a compact adaptive stepsize range to better generalization, the paper proposes a modification to AdaBelief that suppresses the range of its adaptive stepsize.
It is achieved by adding a scaling term to both the momentum and gradient terms when updating the state variable for the adaptive factor (denominator); the scaling term is computed using a mutual projection between the two vectors, which gradually reduces the length of each vector while preserving their angle. A convergence analysis is provided to support this change. Empirical evaluation is conducted on a range of tasks and models: image classification (VGG, ResNet), generation (WGAN), and machine translation (Transformer, LSTM), where the proposed optimizer Aida achieves consistent improvement over baseline optimizers, with 25% computational overhead (on some tasks).
Strengths and Weaknesses:
Strengths:
1. The proposed modification to AdaBelief is well motivated, directly following the observation from previous work that a compact adaptive stepsize range helps with improving generalization.
2. Empirical results across different tasks, models, and datasets show that the resulting method achieves consistent gains over existing optimizers, and also speeds up convergence. Yet I have some doubts regarding the hyperparameter choices of those experiments.
Weaknesses:
1. Lack of justification for the choice of hyperparameters for each optimizer in the experiments. While the hyperparameters for each optimizer are provided in the appendix, it is not always clear to me why they are set that way, how they are selected, and whether their choice ensures a fair comparison. For example, the epsilon of Aida for each task is different; sometimes the epsilon for baselines is searched while the learning rates are fixed; the first momentum in WGAN training is different from the others; the choice of which hyperparameters to fix and which hyperparameters to search lacks explanation; the search grids of hyperparameters are different across different tasks. I would appreciate more elaboration on that.
2. Non-negligible computational overhead. Since the computational overhead of Aida over AdaBelief is 25%, it might be worthwhile to also plot the learning curve against runtime to account for that.
Requested Changes: 1. Include a discussion on the criteria of hyperparameter choice for each experiment. 2. Also plot the learning curve against runtime to account for the 25% computation overhead.
Broader Impact Concerns: N/A
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment: The paper was appreciated by the reviewers, who still raised several questions. The authors replied to those questions well, and the reviewers are now leaning toward acceptance. But the authors discussed several interesting things with the reviewers that were not reported in the revision of the document, so the final decision is "accept with minor revision" so that the authors can integrate those discussions (and references) in the final version of the paper (and supplementary material). For the next revision the authors must take into account all the comments from the reviewers and integrate into the paper the very interesting discussions raised by the reviewers. More details below:
+ (CKQE.1) The motivation detailed in the reply is very clear and should appear in the introduction.
+ (CKQE.2 and cU6d.3) This discussion about the variance is also important and should be in the paper, as should the new reference used by the authors in the discussion.
+ (CKQE.3 and 17x9.1) The authors must discuss in more detail in the paper the choices of hyperparameters and why they might not reach the same performance as in the original papers.
+ (cU6d.2) The discussion about K=2 and the cosine between the vectors should be in the text, and the new results should be provided in the supplementary material.
+ (17x9.2) Add the learning vs. runtime curve as stated in the reply.
==================================================
# Chronos: Learning The Language Of Time Series

Anonymous authors
Paper under double-blind review

## Abstract

We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models. Chronos tokenizes time series values using scaling and quantization into a fixed vocabulary and trains existing transformer-based language model architectures on these tokenized time series via the cross-entropy loss. We pretrained Chronos models based on the T5 family (ranging from 20M to 710M parameters) on a large collection of publicly available datasets, complemented by a synthetic dataset that we generated via Gaussian processes to improve generalization. In a comprehensive benchmark consisting of 42 datasets, and comprising both classical local models and deep learning methods, we show that Chronos models: (a) significantly outperform other methods on datasets that were part of the training corpus; and (b) have comparable and occasionally superior *zero-shot* performance on new datasets, relative to methods that were *trained specifically on them*. Our results demonstrate that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks, positioning pretrained models as a viable tool to greatly simplify forecasting pipelines.

## 1 Introduction

Time series forecasting is an essential component of decision-making across various domains, including retail, energy, finance, healthcare, climate science, among others. Traditionally, forecasting has been dominated by statistical models such as ARIMA and ETS. These have served as reliable tools, at least until the recent shift towards deep learning techniques (Hyndman & Athanasopoulos, 2018; Benidis et al., 2022).
This shift can be attributed to the availability of large and diverse time series data sources, and the emergence of *operational* forecasting problems (Kolassa & Januschowski, 2019) that play to the strengths of deep forecasters, i.e., the ability to extract patterns out of a large collection of time series. Despite their impressive performance, deep forecasters still operate in the standard regime of training and prediction on the same dataset. While there have been works dedicated to transfer learning (Ye & Dai, 2018) and domain adaptation (Jin et al., 2022) for forecasting, the field has yet to converge on a unified, general-purpose forecasting model, a goal that remains a beacon for time series researchers. The emergence of large language models (LLMs) with zero-shot learning capabilities has ignited interest in developing "foundation models" for time series. In the context of LLMs, this interest has been pursued through two main avenues: directly prompting pretrained LLMs in natural language (Gruver et al., 2023; Xue & Salim, 2023) and fine-tuning LLMs for time series tasks (Zhou et al., 2023a; Jin et al., 2024). However, these methods face significant limitations, notably the need for prompt engineering or fine-tuning for each new task, or reliance on large-scale models (GPT-3 (Brown et al., 2020), Llama 2 (Touvron et al., 2023), etc.) that demand substantial computational resources and time for inference. Recent concurrent work (Dooley et al., 2023; Das et al., 2023; Rasul et al., 2023; Woo et al., 2024) also explores pretraining transformer-based models with sophisticated time-series-specific designs on a large corpus of real and (or) synthetic time series data. In this work, we take a step back and ask: what are the fundamental differences between a language model that predicts the next token, and a time series forecasting model that predicts the next values? 
Despite the apparent distinction - tokens from a finite dictionary versus values from an unbounded, usually continuous domain - both endeavors fundamentally aim to model the sequential structure of the data to predict future patterns. Shouldn't good language models "just work" on time series? This naive question prompts us to challenge the necessity of time-series-specific modifications, and answering it led us to develop Chronos, a language modeling framework minimally adapted for time series forecasting. Chronos tokenizes time series into discrete bins through simple scaling and quantization of real values. In this way, we can train off-the-shelf language models on this "language of time series," with no changes to the model architecture (see Figure 1 for a high-level depiction of Chronos). Remarkably, this straightforward approach proves to be effective and efficient, underscoring the potential for language model architectures to address a broad range of time series problems with minimal modifications.

![1_image_0.png](1_image_0.png)

Figure 1: High-level depiction of Chronos. (**Left**) The input time series is scaled and quantized to obtain a sequence of tokens. (**Center**) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (**Right**) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.

For the development of a useful general-purpose time series forecasting model, the scarcity of publicly available time series datasets, both in quantity and quality, is arguably more critical than the modeling framework. In addition to the comprehensive collection of public datasets we used to train Chronos, a central aspect of our approach is the integration of data augmentation strategies, including TSMixup and KernelSynth.
TSMixup randomly samples a set of base time series from different training datasets, and generates new time series based on a convex combination of them; KernelSynth uses Gaussian processes to generate synthetic time series by randomly composing kernel functions. These techniques address the inherent limitations of small training datasets in time series forecasting, enhancing model robustness and generalization.

Our comprehensive evaluation across 42 datasets establishes Chronos as a benchmark for both in-domain and zero-shot forecasting, surpassing both traditional models and task-specific deep learning approaches. Notably, Chronos achieves impressive zero-shot forecasting performance out of the box, without necessitating task-specific adjustments. Its accuracy, coupled with its relatively modest model size, positions it as a preferable alternative to larger, more computationally demanding models for zero-shot forecasting applications. By its very nature as a language model operating over a fixed vocabulary, Chronos can seamlessly integrate with future advancements in LLMs, making it an ideal candidate for further development as a generalist time series model.

The rest of the paper is organized as follows. Section 2 introduces the background on time series forecasting and language models, and discusses related work. In Section 3, we describe Chronos, our proposed language modeling framework for time series. Section 4 discusses our data augmentation technique and synthetic time series generation process. In Section 5, we present our main results and a rigorous analysis of different design choices. We discuss future directions in Section 6, and conclude the paper in Section 7. Additional material is presented in the appendices.

## 2 Background And Related Work

Time series forecasting concerns using historical data from a quantity of interest (typically real-valued) to predict their future values. Formally, given a uniformly-spaced time series x1:C = [x1, . . .
, xC], we are interested in predicting the joint distribution of the next H steps, p(xC+1:C+H | x1:C). In this work, we focus on *univariate* forecasting, where the observations are scalars, i.e., xi ∈ R for all i.

Time series forecasting can be addressed with a variety of different methods which can be broadly categorized into classical forecasting methods and deep learning methods. Classical forecasting methods such as ETS, ARIMA (Hyndman et al., 2008), and Theta (Assimakopoulos & Nikolopoulos, 2000) fit a separate model to each time series independently (hence referred to as *local* models). In contrast, deep learning forecasting models learn across time series in a given dataset (and are called *global* models). These methods leverage advances in deep learning, such as RNNs, which are used by DeepState (Rangapuram et al., 2018), DeepFactor (Wang et al., 2019), DeepAR (Salinas et al., 2020), and TimeGrad (Rasul et al., 2021), and transformers, which are used by TFT (Lim et al., 2021) and PatchTST (Nie et al., 2023). Apart from the choice of architecture, these approaches differ in the way they model the target, with some modeling the density function while others directly predict a set of quantiles (Wen et al., 2017; Gasthaus et al., 2019; Park et al., 2022). Nevertheless, not all models produce probabilistic forecasts: notably, models such as Informer (Zhou et al., 2021) and DLinear (Zeng et al., 2023) only produce point forecasts.
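The "language of time series" that Chronos trains on (Section 1) is obtained by scaling real values and quantizing them into a fixed vocabulary of bins. A minimal NumPy sketch of such a tokenizer is given below; the mean-absolute scaling, the symmetric bin range [−15, 15], and the bin count are illustrative assumptions on our part rather than the exact Chronos configuration, and the function names are ours:

```python
import numpy as np

N_BINS, LOW, HIGH = 4094, -15.0, 15.0  # illustrative vocabulary/bin choices

def tokenize(context):
    """Scale the context by its mean absolute value, then map each scaled
    value to the id of the uniform bin it falls into."""
    scale = float(np.mean(np.abs(context)))
    scale = scale if scale > 0 else 1.0
    edges = np.linspace(LOW, HIGH, N_BINS + 1)
    tokens = np.clip(np.digitize(context / scale, edges) - 1, 0, N_BINS - 1)
    return tokens, scale

def detokenize(tokens, scale):
    """Invert the mapping: token id -> bin center -> rescaled real value."""
    centers = LOW + (np.asarray(tokens) + 0.5) * (HIGH - LOW) / N_BINS
    return centers * scale
```

Sampling token trajectories from the trained language model and passing them through a `detokenize` step of this kind then yields numerical forecast samples, from which a predictive distribution can be formed.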
Most modern LLMs (Brown et al., 2020; Chung et al., 2022; Touvron et al., 2023) are based on the transformer architecture (Vaswani et al., 2017). The original transformer architecture is an encoder-decoder model designed for machine translation. The encoder maps an input sentence of some language to a continuous representation, and the decoder generates the translation token-by-token using the input representation and previously decoded tokens. Many popular language models, such as BART (Lewis et al., 2019) and T5 (Raffel et al., 2020; Chung et al., 2022), belong to this family. Another popular architecture for LLMs is decoder-only, used in GPT-3 (Brown et al., 2020) and Llama 2 (Touvron et al., 2023), where the model only attends to tokens up to the current token. LLMs are typically trained on a very large corpus of text, with their number of parameters ranging from millions (Raffel et al., 2020) to hundreds of billions (Chowdhery et al., 2023). We refer the reader to Zhao et al. (2023) for a recent survey of this area of research.

LLM-based forecasters. Inspired by the success of pretrained LLMs, recent work has shown that LLMs are general pattern recognizers (Mirchandani et al., 2023), and several methods adapting LLMs to the time series domain have been developed. One line of work treats numerical time series data as raw text and directly uses pretrained LLMs with minimal or no fine-tuning to forecast unseen time series. PromptCast (Xue & Salim, 2023) leverages pretrained LLMs for forecasting by transforming the time series data into text-based input and output pairs and reformulating the forecasting problem as a question-answering task. However, PromptCast requires dataset-specific templates for converting numerical data to text prompts. Perhaps the most straightforward LLM-based forecasting model is LLMTime (Gruver et al., 2023), which shows clear evidence for the zero-shot forecasting ability of pretrained LLMs on a variety of benchmark time series datasets.
LLMTime proposes a new tokenization scheme that encodes real-valued data as a string of digits after fixing the numerical precision and scaling the data appropriately. Once encoded as strings, forecasts are obtained in a zero-shot setting from pretrained LLMs such as GPT-3 (Brown et al., 2020) and Llama 2 (Touvron et al., 2023). Nevertheless, the use of such compute-hungry models hampers the scalability and practical utility of LLMTime. Zhou et al. (2023a) propose a unified one-fits-all model (GPT4TS) for different time series analysis tasks by using a pretrained GPT-2 model (Radford et al., 2019) as a backbone and only fine-tune the positional embeddings and the parameters of the layer normalization for each individual task. Instead of using tokenized input, they directly feed the model with patch embeddings, similar to PatchTST (Nie et al., 2023). Recent concurrent work, Time-LLM (Jin et al., 2024), repurposes LLMs for time series forecasting by aligning embeddings of time series patches with text prototypes, and prompting the (frozen) LLM with these aligned embeddings and a natural language prefix describing the task. Unlike Chronos, both GPT4TS and Time-LLM require in-domain training or fine-tuning, i.e., they are fine-tuned and tested on each dataset separately. Furthermore, the aforementioned methods are based on prompting or fine-tuning *pretrained* LLMs. In contrast, Chronos trains language models *from scratch* on a large collection of time series, tokenized via scaling and quantization.

Zero-shot forecasting. Zero-shot forecasting is the ability of models to generate forecasts for time series from unseen datasets. Some early work (Orozco & Roberts, 2020; Oreshkin et al., 2021; Jin et al., 2022) in zero-shot forecasting considers training on a single time series dataset and testing on a different dataset.
ForecastPFN (Dooley et al., 2023) tackles the problem of zero-shot forecasting by training a transformer-based model purely on synthetic data generated according to predefined trends and seasonalities (daily, monthly, yearly). The trained transformer model is then used to forecast real-world time series in a zero-shot setting. In this work, we also propose a method to generate synthetic time series data from Gaussian processes (Section 4.2); however, we use the synthetic data in combination with real data to train Chronos models, which improves the overall zero-shot performance. Furthermore, Chronos models are probabilistic, whereas ForecastPFN can only generate point forecasts. Recent concurrent works (Rasul et al., 2023; Goswami et al., 2024; Das et al., 2023; Woo et al., 2024) also develop zero-shot forecasting models by pretraining transformer-based architectures on a large corpus of time series data. These works operate on the real values of the time series and include time-series-specific designs such as time features, lags, patching, and real-valued distribution heads, among others. In contrast, Chronos follows a minimalist approach by tokenizing time series values into a fixed vocabulary and training existing language model architectures on these tokens without any time-series-specific design or features. That is, Chronos uses a categorical distribution to model the observations, performing regression via classification.

Other time series tasks. Similar to Zhou et al. (2023a), recent works have studied general-purpose models applicable across time series tasks including imputation, forecasting, classification, and anomaly detection. Wu et al. (2023) develop a task-generic backbone based on the Inception model (Szegedy et al., 2015). In order to use the CNN-based Inception model, a one-dimensional time series is transformed into a two-dimensional image-like representation by segmenting the time series based on its periodicity and stacking the segments.
SimMTM (Dong et al., 2023) is a masked pretraining framework for time series which learns general time series representations that are then used for forecasting and classification via fine-tuning. Although we focus on univariate time series forecasting in this work, based on its excellent performance on unseen time series datasets, we hypothesize that Chronos learns general representations that can potentially be deployed for tasks beyond forecasting.

## 3 Chronos: A Language Modeling Framework For Time Series

In this section, we introduce Chronos, a framework adapting existing language model architectures and training procedures to probabilistic time series forecasting. While both language and time series are sequential in nature, they differ in terms of their representation: natural language consists of words from a finite vocabulary, while time series are real-valued. This distinction necessitates specific modifications to existing language modeling frameworks, especially concerning tokenization, to make them applicable to time series data. Nevertheless, since existing transformer models have excelled on language tasks, our design philosophy involves making minimal changes to the model architectures and training procedure.

## 3.1 Time Series Tokenization

Consider a time series x1:C+H = [x1, . . . , xC+H], where the first C time steps constitute the historical context, and the remaining H represent the forecast horizon. Language models operate on tokens from a finite vocabulary, so using them for time series data requires mapping the observations xi ∈ R to a finite set of tokens. To this end, we first scale and then quantize observations into a fixed number of bins.

Scaling. The scale of time series can differ significantly even within a single dataset. This poses optimization challenges for deep learning models. Therefore, individual time series are normalized to facilitate better optimization.
In the case of Chronos, the goal of normalization is to map the time series values into a suitable range for quantization. A common normalization technique involves applying an affine transformation to the time series, i.e., x̃i = (xi − m)/s. Several popular normalization schemes, such as mean scaling, standard scaling, and min-max scaling, can be obtained by appropriately choosing m and s. We opt for mean scaling, a method that has proven effective in deep learning models commonly used for practical time series applications (Salinas et al., 2020), but other approaches are viable and only require minimal changes. Mean scaling normalizes individual entries of the time series by the mean of the absolute values in the historical context. Specifically, this involves setting m = 0 and s = (1/C) Σ_{i=1}^{C} |xi|.

Quantization. The scaled time series x̃1:C+H = [x̃1, . . . , x̃C, . . . , x̃C+H] is still real-valued and cannot be processed directly by language models. To convert these real values into discrete tokens, we employ quantization. Formally, we select B bin centers c1 < . . . < cB on the real line, and B − 1 edges bi separating them, ci < bi < ci+1, for i ∈ {1, . . . , B − 1}. The quantization function q : R → {1, 2, . . . , B} and dequantization d : {1, 2, . . . , B} → R are then defined as

$$q(x)=\begin{cases}1&\text{if }-\infty\leq x<b_{1},\\ 2&\text{if }b_{1}\leq x<b_{2},\\ \vdots&\\ B&\text{if }b_{B-1}\leq x<\infty,\end{cases}\qquad\text{and}\qquad d(j)=c_{j},\tag{1}$$

respectively. The positioning of bin centers and edges can either be data-dependent or uniform (Rabanser et al., 2020). Quantile binning, a type of data-dependent binning, exploits the cumulative distribution function (CDF) of the training datapoints to construct bins such that approximately equal numbers of datapoints are assigned to each bin. In contrast, uniform binning selects bin centers uniformly within some interval [l, r].
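To make the tokenizer concrete, the mean scaling and the uniform-binning quantizer of Eq. (1) can be sketched in NumPy as follows; the default interval [−15, 15] matches the setting used in our experiments (cf. Section 5.2), while the guard for all-zero contexts is an illustrative addition:

```python
import numpy as np

def mean_scale(context: np.ndarray) -> tuple[np.ndarray, float]:
    # Mean scaling (m = 0): divide by the mean absolute value of the context.
    s = np.abs(context).mean()
    s = s if s > 0 else 1.0  # guard for all-zero contexts (illustrative addition)
    return context / s, s

def uniform_bins(B: int, l: float = -15.0, r: float = 15.0):
    # B uniformly spaced bin centers in [l, r]; the B - 1 edges lie midway.
    centers = np.linspace(l, r, B)
    edges = (centers[:-1] + centers[1:]) / 2
    return centers, edges

def quantize(x: np.ndarray, edges: np.ndarray) -> np.ndarray:
    # q(x) from Eq. (1): map real values to token ids in {1, ..., B}.
    return np.digitize(x, edges) + 1

def dequantize(tokens: np.ndarray, centers: np.ndarray) -> np.ndarray:
    # d(j) = c_j from Eq. (1).
    return centers[tokens - 1]
```

For instance, the context [2, −4, 6] yields s = 4 and scaled values [0.5, −1, 1.5], which are then mapped to bin indices.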
Since the distribution of values for unseen downstream datasets can differ significantly from the training distribution, we opt for uniform binning in our experiments, but other quantization techniques can be used. We refer the reader to Rabanser et al. (2020) for a detailed discussion on quantization schemes for time series. A potential limitation of this approach is that the prediction range is restricted between [c1, cB], making it *theoretically* infeasible to model time series with a strong trend. We explore this further in a practical setting in Section 5.7. Apart from the time series tokens {1, 2*, . . . , B*}, we include two special tokens, commonly used in language models, into the time series vocabulary, Vts: PAD and EOS. The PAD token is used to pad time series of different lengths to a fixed length for batch construction and to replace missing values. The EOS token is appended to the quantized and padded time series to denote the end of the sequence. While the use of an EOS token is not strictly necessary in the case of time series, it makes training and inference using popular language modeling libraries convenient. The sequences of tokens from Vts can readily be processed by language models (both encoder-decoder and decoder only models), to train them as usual. A common approach in time series modeling is to incorporate time and frequency information, through features such as day-of-week, weekof-year, and so on. Perhaps counter-intuitively, in Chronos, we ignore time and frequency information, treating the "time series" simply as a sequence. We primarily focus on the variants of the encoder-decoder T5 model (Raffel et al., 2020). Additionally, we conduct an experiment with the GPT-2 (Radford et al., 2019) model to demonstrate that our approach can be straightforwardly extended to decoder-only models. 
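As an illustration of the resulting vocabulary Vts, the sketch below assembles model inputs and targets; the particular id layout (0 = PAD, 1 = EOS, bin tokens shifted to 2, . . . , B + 1) is a hypothetical convention chosen for this sketch, not necessarily the layout used in our implementation:

```python
PAD_ID, EOS_ID = 0, 1  # illustrative special-token ids

def vocab_size(B: int) -> int:
    # |Vts| = B bin tokens plus the PAD and EOS special tokens.
    return B + 2

def encode_context(bin_tokens: list[int], context_len: int) -> list[int]:
    # Shift bin tokens {1..B} -> {2..B+1} and left-pad to a fixed length.
    ids = [t + 1 for t in bin_tokens]
    return [PAD_ID] * max(0, context_len - len(ids)) + ids

def encode_target(bin_tokens: list[int]) -> list[int]:
    # Shift bin tokens and append EOS to mark the end of the sequence.
    return [t + 1 for t in bin_tokens] + [EOS_ID]
```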
No modifications are required to the language model architecture, except adjusting the vocabulary size to |Vts|, which depends on the number of bins used for quantization and may be different from the vocabulary size of the original language model. Concretely, adjusting the vocabulary size entails truncating (or extending) the input and output embedding layers of the language model.

## 3.2 Objective Function

As typical in language models, we use the categorical distribution over the elements of Vts as the output distribution, p(zC+h+1 | z1:C+h), where z1:C+h is the tokenized time series. Chronos is trained to minimize the cross entropy between the distribution of the quantized ground-truth label and the predicted distribution. Formally, the loss function for a single tokenized time series (also accounting for EOS tokens) is given by

$$\ell(\boldsymbol{\theta})=-\sum_{h=1}^{H+1}\sum_{i=1}^{|\mathcal{V}_{\mathrm{ts}}|}\mathbf{1}_{(z_{C+h+1}=i)}\log p_{\boldsymbol{\theta}}(z_{C+h+1}=i\mid\boldsymbol{z}_{1:C+h}),\tag{2}$$

where pθ(zC+h+1 = i | z1:C+h) denotes the categorical distribution predicted by the model parameterized by θ. In practice, the loss is averaged over a batch of time series during training. Note that the categorical cross entropy loss (Eq. 2) is not a distance-aware objective function, i.e., it does not explicitly recognize that bin i is closer to bin i + 1 than to bin i + 2. Instead, the model is expected to associate nearby bins together, based on the distribution of bin indices in the training dataset. In other words, Chronos performs regression via classification (Torgo & Gama, 1997). This is unlike typical probabilistic time series forecasting models, which either use parametric continuous distributions such as Gaussian and Student's-t (Salinas et al., 2020) or perform quantile regression (Wen et al., 2017; Lim et al., 2021). Opting for a categorical output distribution offers two key advantages.
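Before detailing these advantages, the objective of Eq. (2) can be sketched for a single tokenized series; here `logits` has one row per predicted position (the H forecast steps plus EOS) and `targets` holds 0-indexed token ids. This NumPy version is purely illustrative: in practice, training uses the standard cross-entropy implementation of the modeling library, averaged over a batch.

```python
import numpy as np

def cross_entropy_loss(logits: np.ndarray, targets: np.ndarray) -> float:
    # logits: (H + 1, |Vts|) unnormalized scores; targets: (H + 1,) token ids.
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Eq. (2): negative log-likelihood of the observed token at each position.
    return float(-log_probs[np.arange(len(targets)), targets].sum())
```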
Firstly, it requires no modification to the language model architecture or training objective, enabling the use of popular language modeling libraries and the utilities they provide out of the box (Wolf et al., 2020). Secondly, it imposes no restrictions on the structure of the output distribution, allowing the model to learn arbitrary distributions, including multimodal ones. This flexibility proves especially valuable for a pretrained model, as time series datasets from diverse domains may follow distinct output distribution patterns.

## 3.3 Forecasting

Chronos models are probabilistic by design, and multiple realizations of the future can be obtained by autoregressively sampling from the predicted distribution, pθ(zC+h+1 | z1:C+h), for h ∈ {1, 2, . . . , H}. These sample paths come in the form of token IDs that need to be mapped back to real values and then unscaled to obtain the actual forecast. The dequantization function d from Eq. (1) maps the predicted tokens to real values: these are then unscaled by applying the inverse scaling transformation, which in the case of mean scaling involves multiplying the values by the scale s.

## 4 Data Augmentation

The quality and quantity of public time series data pales in comparison to the natural language processing (NLP) domain, which benefits from ample high-quality text datasets such as WikiText-103 (Merity et al., 2016), C4 (Raffel et al., 2020), and The Pile (Gao et al., 2020). This poses challenges for training models intended for zero-shot forecasting, which rely on large-scale time series data with diverse patterns. To address this issue, we propose enhancing the diversity of training data by generating mixup augmentations from real datasets and supplementing training with synthetic data.

## 4.1 TSMixup: Time Series Mixup

Mixup (Zhang et al., 2017) is a data augmentation scheme proposed in the context of image classification.
It generates convex combinations of random image pairs and their labels from the training dataset, which alleviates issues such as memorization and overfitting in deep learning models. Existing works (Carmona et al., 2021; Zhou et al., 2023b) have extended Mixup to the time series domain. Building upon these works, we propose TSMixup, which generalizes the idea of Mixup to more than two datapoints. Concretely, TSMixup randomly samples k ∼ U{1, K} time series of a specific length, l ∼ U{lmin, lmax}, from the training datasets, scales them, and takes their convex combination,

$$\hat{\boldsymbol{x}}_{1:l}^{\mathrm{TSMixup}}=\sum_{i=1}^{k}\lambda_{i}\tilde{\boldsymbol{x}}_{1:l}^{(i)},\tag{3}$$

where x̃^{(i)}_{1:l} denotes the i-th scaled time series. The combination weights, [λ1, . . . , λk], are sampled from a symmetric Dirichlet distribution, Dir(α). The complete pseudocode of TSMixup can be found in Algorithm 1 in Appendix A. Intuitively, TSMixup enhances the diversity of data by combining patterns from different time series. Figure 2 shows example augmentations generated by TSMixup and illustrates how different patterns are mixed.

![6_image_0.png](6_image_0.png)

Figure 2: An illustration of TSMixup augmentation for k = {1, 2, 3}. TSMixup improves pattern diversity by taking weighted combinations of randomly-sampled time series from different datasets.

## 4.2 KernelSynth: Synthetic Data Generation Using Gaussian Processes

While TSMixup improves pattern diversity, it may still prove insufficient for training a generalist time series model, especially when real data is limited. To further supplement the training dataset, we propose KernelSynth, a method to generate synthetic time series using Gaussian processes (GPs). KernelSynth is inspired by the Automatic Statistician (Duvenaud et al., 2013), where a compositional search over a space of GP kernels is performed to explain the structure of a time series.
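Referring back to Eq. (3), a single TSMixup draw can be sketched as follows; the length bounds and the Dirichlet concentration α shown here are placeholder values (the exact settings are given in Algorithm 1 in Appendix A), and cropping or tiling series to a common length is a simplification for the sketch:

```python
import numpy as np

def tsmixup(series_pool, rng, K=3, l_min=128, l_max=512, alpha=1.5):
    # k ~ U{1, K} series of a common length l ~ U{l_min, l_max}.
    k = int(rng.integers(1, K + 1))
    l = int(rng.integers(l_min, l_max + 1))
    idx = rng.integers(0, len(series_pool), size=k)
    scaled = []
    for i in idx:
        x = np.resize(np.asarray(series_pool[i], dtype=float), l)  # crop/tile to length l
        s = np.abs(x).mean()
        scaled.append(x / (s if s > 0 else 1.0))  # mean scaling
    lam = rng.dirichlet(np.full(k, alpha))  # [lambda_1, ..., lambda_k] ~ Dir(alpha)
    return np.sum(lam[:, None] * np.stack(scaled), axis=0)  # Eq. (3)
```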
We use the inverse of this process: we randomly compose GP kernels to generate new time series. GPs are distributions over functions defined by the mean function, m(t), and the positive definite kernel, κ(t, t′), where t ∈ R is the domain. The kernel specifies a covariance function which defines the joint variability of the function values at an arbitrary pair of points, (t, t′), in the input domain. Diverse patterns can be generated by appropriately selecting the kernel. We constructed a kernel bank, K, of basis kernels defining fundamental time series patterns. These include linear kernels for trend, RBF kernels for smooth local variation, and periodic kernels for seasonalities found in typical time series frequencies. The final kernel, κ̃(t, t′), is constructed by sampling j ∼ U{1, J} kernels from K with replacement and combining these kernels via random binary operations, + or ×. A synthetic time series is generated by drawing a sample of length lsyn from the GP prior, GP(m(t) = 0, κ̃(t, t′)); see Algorithm 2 in Appendix A for details. Figure 3 depicts the generative process used in KernelSynth, illustrating how time series with intricate patterns can arise from the composition of simple basis kernels.

## 5 Experiments

In this section, we present empirical results on commonly used benchmark datasets. First, we give an overview of the datasets, training strategy, baselines, and evaluation metrics (Sections 5.1–5.4). Table 1 provides a high-level summary of the datasets and baselines used in our experiments.
We then (a) evaluate the performance of Chronos models in the in-domain and zero-shot settings against local models and task-specific deep learning models (Section 5.5); (b) analyze the effect of various design choices such as model size, initialization, synthetic data proportion, context length, and vocabulary size on the performance of Chronos models (Section 5.6); and (c) analyze the qualitative performance of Chronos models and highlight their limitations (Section 5.7). We discuss our key findings in this section and relegate specific experiment details to the appendices.

![7_image_0.png](7_image_0.png)

Figure 3: (a) An illustration of KernelSynth, a Gaussian process (GP)-based synthetic time series generation method. Kernels are sampled from a kernel bank and then randomly combined using a binary operator (× or +). The resultant kernel is used in a GP prior to generate synthetic time series. Random samples from kernels at each step are shown in red and blue colors. (b) Example synthetic time series generated by KernelSynth.

## 5.1 Datasets

To train and evaluate Chronos models, we collected a wide variety of publicly available datasets spanning various application domains including energy, transport, healthcare, retail, web, weather, and finance, with sampling frequencies ranging from 5 minutes up to yearly. The complete list of datasets, together with their respective sources and additional details, is given in Appendix B. In total, our dataset collection comprises 55 datasets from multiple sources, including the Monash Time Series Forecasting Repository (Godahewa et al., 2021), the M-competitions (Makridakis et al., 1979; Makridakis & Hibon, 2000; Makridakis et al., 2020; 2022), and public domain datasets from Kaggle.

| Data Subset      | # Datasets | # Series | Usage                                | Baselines |
|------------------|------------|----------|--------------------------------------|-----------|
| Pretraining-only | 13         | 795,936  | pretraining                          | -         |
| Benchmark I      | 15         | 97,272   | pretraining and in-domain evaluation | Naive, SeasonalNaive, AutoETS, AutoTheta, AutoARIMA, DeepAR, TFT, PatchTST, DLinear, WaveNet, N-BEATS, N-HiTS, GPT4TS, Lag-Llama, Moirai-1.0-R |
| Benchmark II     | 27         | 190,674  | zero-shot evaluation                 | All the above, LLMTime and ForecastPFN |

Table 1: A high-level summary of the datasets and baselines used in our experiments.

We categorize this collection into three subsets, based on how we use them for training and evaluating Chronos models: (a) datasets exclusively used for training (13 datasets); (b) Benchmark I datasets, employed for both training and evaluation, representing an *in-domain* evaluation (15 datasets); and (c) Benchmark II datasets, used solely for evaluation, constituting a *zero-shot* evaluation (27 datasets). In categorizing the datasets in this way, we tried to find a good balance between keeping as many datasets as possible for the zero-shot evaluation of Chronos models, among the ones most commonly used in the literature, while still having enough variety of domains and sampling frequencies in the training data.
Overall, we used 28 datasets for training Chronos models, consisting of about 890K univariate time series with approximately 84B observations (tokens) in total. For both in-domain (I) and zero-shot (II) benchmark datasets, we used the last H ∈ N+ observations of each time series as a held-out test set: all models are judged by the accuracy of their forecasts on this held-out set, which no model had access to for training purposes. The prediction length H is task-specific (see Table 2 in Appendix B), where we define a task as a dataset and prediction length pair. Tasks in both benchmarks exhibit diverse properties in terms of the dataset size, frequency, history length, and prediction length, making them rich benchmarks reflective of real-world scenarios.

## 5.2 Training Corpus And Protocols

We selected T5 (Raffel et al., 2020) as the main architecture for Chronos in our experiments, since it is available in a variety of sizes, ranging from 16M (Tiny) to 11B (XXL) parameters (Tay et al., 2021). We also conducted experiments with the decoder-only GPT-2 model to demonstrate the applicability of the Chronos framework to decoder-only models. In the following, we discuss the training configurations used for our main results (Section 5.5) and explore alternatives for some of the hyperparameters in Section 5.6. We trained T5 models of 4 sizes,1 namely, Mini (20M), Small (46M), Base (200M) and Large (710M), and the GPT-2 base model (90M), on 10M TSMixup augmentations (see Section 4.1) generated from the 28 training datasets, with K = 3 in Algorithm 1, and 1M synthetic time series generated using Gaussian processes (see Section 4.2). Note that with this setup, original time series are adequately represented, since they are included in the TSMixup augmentations with probability 1/3. We sampled time series from the augmentations and synthetic data in the ratio 9:1 during training.
Each model is trained with an effective batch size of 256 sequences, using distributed data parallelism and gradient accumulation, whenever necessary. These sequences are constructed by slicing random windows from the time series, and then scaling and quantizing them into equal-sized bins within the interval [l = −15, r = 15], as described in Section 3.1. The context length of the sequences was set to 512, the default for T5 models, and the prediction length is set to 64, a value greater than the prediction lengths of all tasks we consider in our evaluation. The models were optimized for 200K steps using the AdamW optimizer with a weight decay of 0.01. The learning rate was annealed linearly from its initial value of 0.001 to 0 over the training steps. The other model and training hyperparameters were set to their defaults used in the transformers library (Wolf et al., 2020). We used an AWS EC2 instance with 8 A100 (40GB) GPUs to train all Chronos models, and we employed faster floating point formats (TF32) and model compilation to speed up training. Table 5 in Appendix E reports the training time and the approximate cost of training Chronos models of different sizes.

## 5.3 Baselines

We assessed the performance of Chronos models against a variety of time series forecasting baselines. From the statistical forecasting literature (Hyndman & Athanasopoulos, 2018), we included Naive, Seasonal Naive, AutoETS, AutoARIMA (Hyndman et al., 2008) and AutoTheta (Assimakopoulos & Nikolopoulos, 2000). Additionally, we compared against several neural forecasting baselines, including WaveNet (Oord et al., 2016), DeepAR (Salinas et al., 2020), N-BEATS (Oreshkin et al., 2020), TFT (Lim et al., 2021), DLinear (Zeng et al., 2023), PatchTST (Nie et al., 2023), N-HiTS (Challu et al., 2023), and GPT4TS (Zhou et al., 2023a).
Furthermore, from the recently proposed pretrained time series models, we included the ones with publicly available weights: Lag-Llama (Rasul et al., 2023) and Moirai-1.0-R (Woo et al., 2024). On Benchmark II (i.e., zero-shot datasets for Chronos models), we also evaluated against two zero-shot methods: ForecastPFN (Dooley et al., 2023), which is a transformer model pretrained only on synthetic time series data, and LLMTime (Gruver et al., 2023), which uses LLMs for zero-shot forecasting. We categorize Chronos models and the baselines into three groups: *local models* that estimate parameters for each time series individually; *task-specific models* trained or fine-tuned for each task separately; and *pretrained models* which do not perform task-specific training, instead using a single model across all tasks. Further details on the implementation and training of these baselines can be found in Appendix C.

## 5.4 Evaluation Metrics

Whenever possible,2 we evaluated models both in terms of their probabilistic and point forecast performance. We used the weighted quantile loss (WQL) to assess the quality of the probabilistic forecasts: the WQL is related to the continuous ranked probability score (CRPS, Gneiting & Raftery (2007))3 and is commonly used to evaluate probabilistic forecasts (Gasthaus et al., 2019; Shchur et al., 2023). The WQL measures the compatibility between the predictive distribution and the ground-truth observation at a uniformly-spaced grid of quantile levels; we compute the WQL on 9 uniformly-spaced quantile levels {0.1, 0.2, . . . , 0.9}. Quantile forecasters such as TFT were directly trained on these quantile levels.

1Our code and model checkpoints are available at REDACTED.
2Some models (GPT4TS and ForecastPFN) only generate point forecasts and we only evaluate those.
3Many existing works (Ansari et al., 2021; Rasul et al., 2023; Kollovieh et al., 2023) use CRPS and WQL synonymously.
For methods requiring sampling, we estimated the quantiles using 20 sample forecast paths. We used the mean absolute scaled error (MASE, Hyndman & Koehler (2006)) to evaluate the point forecast performance. The MASE is defined as the absolute error of the forecast scaled by the historical seasonal error of the time series, and was selected due to its favorable properties over other point forecasting metrics (Hyndman & Koehler, 2006). We used the median forecast (0.5-quantile) for computing the MASE for the probabilistic forecasters. See Appendix D for a detailed discussion on the evaluation metrics. Since the magnitude of the evaluation metrics can vary across datasets, we adopt a different approach to aggregate scores than naive averaging. For each dataset, we compute the relative score of each model as the model's score divided by the score of a baseline model (here, Seasonal Naive). The relative scores are aggregated across all datasets using the geometric mean. The choice of the geometric mean is deliberate — Fleming & Wallace (1986) show that the arithmetic mean can yield misleading conclusions in this context, and the geometric mean is provably the only meaningful way to aggregate such relative scores. Furthermore, the geometric mean is also not sensitive to the choice of the baseline, and the model ordering stays intact if another baseline is selected instead. We used Seasonal Naive due to its simplicity and popularity as a forecasting baseline. For models that failed or could not finish evaluation within the allotted time on certain datasets, we use a relative score of 1, i.e., the baseline relative score, when aggregating the results. We assign equal weights to all tasks during aggregation, reflecting real-world scenarios where datasets may have different numbers of time series, frequencies, history and prediction lengths. 
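To make the evaluation protocol concrete, the sketch below implements the pinball loss underlying the WQL and the geometric-mean aggregation of relative scores; the factor of 2 and the normalization by the sum of absolute target values follow one common WQL convention, and the full definitions are deferred to Appendix D:

```python
import numpy as np

def quantile_loss(y, y_hat_q, q):
    # Pinball loss at quantile level q (scaled by 2, a common convention).
    return 2 * np.where(y >= y_hat_q, q * (y - y_hat_q), (1 - q) * (y_hat_q - y))

def wql(y, quantile_forecasts):
    # Weighted quantile loss over levels such as {0.1, ..., 0.9}: average
    # pinball loss normalized by the sum of absolute target values.
    total = sum(quantile_loss(y, qf, q).sum() for q, qf in quantile_forecasts.items())
    return total / (len(quantile_forecasts) * np.abs(y).sum())

def aggregate_relative_score(model_scores, baseline_scores):
    # Geometric mean over datasets of score(model) / score(Seasonal Naive).
    rel = [model_scores[d] / baseline_scores[d] for d in model_scores]
    return float(np.exp(np.mean(np.log(rel))))
```

With these helpers, a model that beats the Seasonal Naive baseline on every dataset obtains an aggregated relative score below 1.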
## 5.5 Main Results

In this section, we present our main results on 42 datasets, which comprise Benchmark I (15 datasets) and Benchmark II (27 datasets). Chronos models surpass classical statistical baselines, task-specific deep learning models, and other pretrained models on the in-domain datasets (Benchmark I; see Section 5.5.1). On the zero-shot datasets (Benchmark II; Section 5.5.2), Chronos models comfortably outperform statistical baselines and other pretrained models, while performing on par with the best deep learning models trained on these tasks. With an inexpensive fine-tuning regimen, our Chronos-T5 (Small) model achieves the top spot on Benchmark II, significantly outperforming all baselines.

## 5.5.1 Benchmark I: In-Domain Results

Benchmark I comprises 15 datasets that were also part of the training data of Chronos models, i.e., this benchmark evaluates the in-domain performance of Chronos models (see Table 2). Figure 4 summarizes the probabilistic and point forecasting performance for all models on the held-out test windows, in terms of their aggregated relative scores, computed as described in Section 5.4. The bigger Chronos-T5 models (Base and Large) significantly outperform baseline models, obtaining the best aggregated relative scores and average ranks (Figure 18 in Appendix E). These models not only perform better than local models (e.g., AutoETS and AutoARIMA), but they also perform better than task-specific deep learning models trained or fine-tuned for each dataset (e.g., PatchTST and DeepAR) and other pretrained models (e.g., Lag-Llama and Moirai-1.0-R). The smaller Chronos-T5 models (Mini and Small) and Chronos-GPT2 also perform better than the majority of baselines, with the exception of PatchTST. Between the two baseline pretrained models studied in this experiment, Moirai-1.0-R clearly outperforms Lag-Llama.
Notably, the best Moirai-1.0-R model (Large, 311M) is still outperformed by the smallest Chronos-T5 model (Mini, 20M), even though Moirai-1.0-R models were trained on a significantly larger corpus of time series data. Task-specific deep learning models, trained across multiple time series for a specific task, perform better than local statistical models that fit parameters for each time series. Interestingly, the Seasonal Naive baseline performs competitively against other local models on this benchmark, suggesting that the datasets in this benchmark exhibit strong seasonal patterns. This is unsurprising since a majority of these datasets belong to domains such as energy and transport that tend to be highly seasonal in nature. The raw WQL and MASE values for individual datasets summarized in Figure 4 can be found in Tables 6 and 7 in Appendix E.

![10_image_0.png](10_image_0.png)

Figure 4: Performance of different models on Benchmark I, comprising 15 datasets also included in the training data of Chronos models. This benchmark showcases the in-domain performance of Chronos models against local statistical models, which fit parameters individually for each time series, task-specific models that train a separate model for each task, and pretrained models trained on a large corpus of time series data. Pretrained Models (Other) indicates that the in-domain setting does not apply to these models, as some datasets in Benchmark I were not part of their training corpus and (or) they were trained on the test sets of some datasets in Benchmark I. The probabilistic (WQL) and point (MASE) forecasting metrics are normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the aggregated relative WQL and MASE, respectively. Results for Chronos and task-specific models (except GPT4TS) have been averaged over 3 random seeds. Models producing point forecasts (GPT4TS) are only compared based on MASE.
These results demonstrate the benefit of using models that are trained only once across multiple datasets, over task-specific models trained individually for each task. Such models could streamline production forecasting systems, where forecasts from different time series tasks are required, by obviating the need for training separate models for each task.

## 5.5.2 Benchmark II: Zero-Shot Results

Benchmark II consists of 27 datasets that were not used during Chronos models' training (see Table 2 in Appendix B), i.e., this benchmark evaluates the zero-shot performance of these models. These datasets belong to diverse domains and frequencies, some of which are not even part of the training data, making this a challenging benchmark for Chronos.4 Figure 5 summarizes the results on Benchmark II in terms of the aggregated relative scores. This benchmark is clearly more challenging than Benchmark I (Figure 4), as the best models tend to offer lower improvements relative to the baseline. Nevertheless, despite never having seen these datasets during training, Chronos models significantly outperform local statistical models. On probabilistic forecasting (aggregate relative WQL), Chronos models achieve the 2nd and 3rd spots, performing better than most task-specific models that have been trained on these tasks. Chronos-T5 (Large) places 3rd in terms of the point forecasting performance, narrowly losing the 2nd spot to N-HiTS. Chronos models also significantly outperform other pretrained models such as Moirai-1.0-R, Lag-Llama, LLMTime, and ForecastPFN, and even GPT4TS, which fine-tunes a pretrained GPT-2 model on each dataset.
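The probabilistic metric reported here, WQL, is a weighted quantile (pinball) loss. The following is a generic sketch of such a metric; the exact quantile levels and normalization follow the paper's evaluation setup (Appendix D) and are assumptions here:

```python
def quantile_loss(y, y_hat, q):
    """Pinball loss for quantile level q."""
    return q * max(y - y_hat, 0.0) + (1 - q) * max(y_hat - y, 0.0)

def weighted_quantile_loss(y_true, quantile_forecasts):
    """Generic weighted quantile loss: pinball loss summed over time and
    averaged over quantile levels, scaled by 2 and normalized by the sum of
    absolute target values. `quantile_forecasts` maps each quantile level q
    to a forecast sequence aligned with y_true."""
    total = sum(
        quantile_loss(y, y_hat, q)
        for q, forecast in quantile_forecasts.items()
        for y, y_hat in zip(y_true, forecast)
    )
    denom = sum(abs(y) for y in y_true) * len(quantile_forecasts)
    return 2.0 * total / denom
```

Because of the normalization by the target magnitude, WQL is scale-dependent, a property that matters for the vocabulary-size analysis in Section 5.6.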
Moirai-1.0-R obtains the best performance after Chronos, although the evaluation setup may have been advantageous for Moirai-1.0-R, as many datasets in Benchmark II were part of its pretraining corpus.

4From a rigorous standpoint, to prevent information leakage, the start time of any dataset within this category must be after the timestamp of the last observation from the pretraining dataset and Benchmark I. Nevertheless, we consider the risk to be minimal given that the datasets bear no overlap beyond high-level conceptual categorization.

![11_image_0.png](11_image_0.png)

Figure 5: Performance of different models on Benchmark II, comprising 27 datasets not seen by Chronos models during training. This benchmark provides insights into the zero-shot performance of Chronos models against local statistical models, which fit parameters individually for each time series, task-specific models *trained on each task*, and pretrained models trained on a large corpus of time series data. Pretrained Models (Other) indicates that the zero-shot setting does not apply to these models as they were trained on some datasets in Benchmark II. The probabilistic (WQL) and point (MASE) forecasting metrics were normalized using the scores of the Seasonal Naive baseline and aggregated through a geometric mean to obtain the aggregated relative WQL and MASE, respectively. Results for Chronos and task-specific models (except GPT4TS) have been averaged over 3 random seeds. Models producing point forecasts (GPT4TS and ForecastPFN) are only compared based on MASE.

The raw WQL and MASE values for individual datasets summarized in Figure 5 can be found in Tables 8 and 9 in Appendix E. The results on this benchmark highlight the promise of Chronos as a generalist time series forecaster: it performs significantly better than local models that are commonly used in a zero-shot setting, and it performs on par with the best task-specific deep learning models.

Fine-tuning.
Motivated by the remarkable zero-shot performance of Chronos models, we conducted a preliminary investigation into fine-tuning Chronos models individually on datasets from Benchmark II. We selected the Chronos-T5 (Small) model for this experiment due to its good zero-shot performance with a relatively low training cost. We fine-tuned the model in a dataset-agnostic fashion with an initial learning rate of 0.001, annealed linearly to 0 over 1000 steps. Figure 6 shows that fine-tuning significantly improves the aggregate performance of the model on Benchmark II. The fine-tuned Chronos-T5 (Small) model now takes the top spot on Benchmark II overall, overtaking both larger (zero-shot) Chronos models and the best task-specific models. Notably, Chronos-T5 (Small) is not even the most accurate variant of Chronos on Benchmark II in the zero-shot setting, suggesting that further improvements may be obtained by fine-tuning larger Chronos-T5 variants.

Figure 6: When fine-tuned on individual datasets from Benchmark II, Chronos-T5 (Small) significantly improves over the zero-shot performance and becomes the best performing model on average (see Figure 5).

![11_image_1.png](11_image_1.png)

![12_image_0.png](12_image_0.png)

Figure 7: Model size. (a) Training loss curves of Chronos models of different sizes. (b) In-domain and zero-shot performance of Chronos models varying over model size.

![12_image_1.png](12_image_1.png)

Figure 8: Initialization. Comparison of training loss of randomly-initialized Chronos models of different sizes against those initialized with language model weights.

## 5.6 Analysis Of Hyperparameters

Here, we explore the effect of different design choices on the downstream model performance, beginning with a comparison of different model sizes and initializations. We then analyze the effect of training steps, synthetic data proportion, context length, and vocabulary size on the performance of Chronos-T5 (Small).
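The linear learning-rate annealing used in the fine-tuning experiment above (initial rate 0.001, annealed to 0 over 1000 steps) corresponds to a simple step-dependent rule; a sketch, with an illustrative function name:

```python
def linearly_annealed_lr(step, initial_lr=0.001, total_steps=1000):
    """Learning rate annealed linearly from initial_lr at step 0 to 0 at
    total_steps (and held at 0 thereafter)."""
    progress = min(step, total_steps) / total_steps
    return initial_lr * (1.0 - progress)
```

In a training loop, this value would be assigned to the optimizer's learning rate before each update, e.g., via a LambdaLR-style scheduler in common deep learning frameworks.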
We only vary the parameter of interest, keeping everything else fixed to the value used in the main results.

Model size. We experimented with four model sizes ranging from 20M to 710M parameters.5 Unsurprisingly, the training loss improves with the model capacity, as shown in Figure 7a. We also observe this trend in the downstream model performance: it improves with the model size for both in-domain and zero-shot benchmarks, as shown in Figure 7b. These trends suggest that even larger models may improve performance further. However, we did not explore larger models due to slow inference times, which would render them impractical for real-world applications.

Initialization. We investigated whether initializing Chronos models to the corresponding T5 language models pretrained by Tay et al. (2021) on the C4 dataset (Raffel et al., 2020) has any impact on the training dynamics or the downstream performance. Figure 8 shows the training loss curves for models initialized randomly and those initialized with language model weights. Notably, models initialized randomly tend to converge to a lower training loss compared to their counterparts initialized with language model weights. For the larger models (Base and Large), models initialized with language model weights initially exhibit a faster decrease in training loss, but they ultimately converge to a higher final loss.

5These numbers differ from the original sizes of the T5 models in Tay et al. (2021) due to the change in the vocabulary size.

Overall, these observations suggest that language model weights are not particularly remarkable in the context of time series forecasting and offer no improvement over random initialization. These conclusions are further reinforced by Figure 9, which shows the downstream performance of models initialized with language model weights against three randomly-initialized models of each size.
Across all model sizes, the performance of models initialized with language model weights either overlaps with or slightly underperforms compared to randomly initialized models. These results suggest that LLM initialization offers relatively little advantage in the context of time series forecasting, and instead random initialization may be the preferable choice.

![13_image_0.png](13_image_0.png)

Figure 9: Comparison of the in-domain and zero-shot performance of models initialized with language model weights (marked as star) and three randomly initialized models (marked as circles) across different model sizes.

TSMixup augmentations. As described in Section 5.2, we trained Chronos models on TSMixup augmentations rather than directly on the original time series. In this experiment, we investigate whether using TSMixup augmentations is advantageous for downstream performance. Figure 10a compares the performance of Chronos-T5 (Small, 46M) models trained with and without TSMixup augmentations. The model trained on TSMixup augmentations obtains similar in-domain performance to the model trained without augmentations. However, the zero-shot performance improves when using TSMixup augmentations. This suggests that TSMixup enhances the diversity of training data, which leads to improved performance on unseen datasets. Figure 10a also shows that the zero-shot performance obtains an additional boost with the inclusion of synthetic data. We investigate this further in the next experiment.

Synthetic data proportion. We systematically explored the impact of KernelSynth on downstream model performance. We trained Chronos-T5 (Small, 46M) models with time series sampled from TSMixup augmentations and KernelSynth data in different ratios, ranging from 0% (i.e., trained solely on TSMixup augmentations) to 100% synthetic data. Figure 10b shows the performance of models trained with different proportions of synthetic data.
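A TSMixup-style augmentation takes a convex combination of several mean-scaled series. The following is a simplified sketch; the exact procedure, including how the number of series and the mixing weights are sampled, follows Section 5.2 and is not reproduced here (the Dirichlet weights are drawn via normalized Gamma variables):

```python
import random

def tsmixup(series_list, k=2, alpha=1.5, rng=None):
    """Simplified TSMixup: pick k series, scale each by its mean absolute
    value, and return their convex combination with Dirichlet(alpha) weights."""
    rng = rng or random.Random()
    chosen = rng.sample(series_list, k)
    scaled = []
    for series in chosen:
        scale = sum(abs(x) for x in series) / len(series) or 1.0
        scaled.append([x / scale for x in series])
    # Dirichlet weights via normalized Gamma samples
    weights = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(weights)
    weights = [w / total for w in weights]
    length = min(len(s) for s in scaled)
    return [sum(w * s[t] for w, s in zip(weights, scaled)) for t in range(length)]
```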
Both in-domain and zero-shot metrics improve with the incorporation of synthetic data in training. The most consistent improvement is observed around the 10% synthetic data proportion. Further increasing the proportion of synthetic data tends to worsen performance. This is unsurprising since the synthetic data generated using Gaussian processes is not representative of all real-world time series.

![13_image_1.png](13_image_1.png)

Figure 10: (a) Comparison of in-domain and zero-shot performance of Chronos-T5 (Small) models trained with and without TSMixup augmentations. (b) In-domain and zero-shot performance of Chronos-T5 (Small) models with varying proportion of KernelSynth data in the training corpus.

While the model trained only on synthetic data performs worse relative to models with real data in their training corpus, it performs reasonably well in terms of its absolute performance. Figure 20 (Appendix E) shows that it performs significantly better than ForecastPFN (Dooley et al., 2023), another model that is trained solely on synthetic data (generated differently from KernelSynth). Surprisingly, it also outperforms several other baselines in our benchmarks,6 despite never having seen real data during training. These results attest to the quality of our synthetic data, and they open up directions for future work to close the performance gap further.

![14_image_0.png](14_image_0.png)

Figure 11: In-domain and zero-shot performance of Chronos-T5 (Small) models varying over (a) the number of training steps, (b) the training context length, and (c) the vocabulary size.

Training steps. We trained a Chronos-T5 (Small, 46M) model for 1M training steps to study the effect of longer training on model performance. Figure 11a shows that the downstream model performance improves over the course of training, both on in-domain and zero-shot benchmarks. This suggests that the performance of the larger models (Base and Large) can potentially be improved by training them for longer.
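KernelSynth-style generation composes random Gaussian process kernels and samples a path from the resulting GP prior. The following toy sketch illustrates the idea; the kernel bank, parameter ranges, and composition rules here are illustrative assumptions, not the exact procedure:

```python
import numpy as np

def kernelsynth_sample(length=256, max_extra_kernels=2, rng=None):
    """Toy KernelSynth: compose randomly selected GP kernels via random
    addition/multiplication and draw one sample path from the GP prior."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, length)[:, None]
    d = t - t.T  # pairwise time differences

    def rbf():
        ell = rng.uniform(0.05, 0.5)
        return np.exp(-0.5 * (d / ell) ** 2)

    def periodic():
        p = rng.uniform(0.1, 0.5)
        return np.exp(-2.0 * np.sin(np.pi * np.abs(d) / p) ** 2)

    def linear():
        return t @ t.T

    bank = [rbf, periodic, linear]
    cov = bank[rng.integers(len(bank))]()
    for _ in range(rng.integers(0, max_extra_kernels + 1)):
        extra = bank[rng.integers(len(bank))]()
        cov = cov + extra if rng.random() < 0.5 else cov * extra
    cov += 1e-5 * np.eye(length)  # jitter for numerical stability
    return np.linalg.cholesky(cov) @ rng.standard_normal(length)
```

Sums and elementwise products of positive semidefinite kernels remain positive semidefinite, so the composed covariance (plus jitter) admits a Cholesky factor for sampling.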
Context length. We studied the effect of the context length on downstream performance by training Chronos-T5 (Small, 46M) models with four distinct context lengths. Figure 11b shows how the performance varies with increasing context length. We observe improvements on both in-domain and zero-shot metrics as the context length increases, showing that a longer context helps the models forecast better. However, this analysis may be limited due to our zero-shot evaluation setup, wherein the majority of datasets in the benchmark have low frequencies and time series shorter than 1000 steps. Hence, further evaluation is required to conclusively study the impact of longer context lengths. We posit that high-frequency datasets may benefit from a longer context, which may be necessary to correctly capture the long-term seasonal patterns.

Vocabulary size. The vocabulary size governs the precision with which the model can process the scaled time series. To explore its impact on performance, we trained Chronos-T5 (Small, 46M) models with varying vocabulary sizes. Figure 11c shows consistent improvements in the point forecasting metric (MASE) as the vocabulary size increases. In contrast, the WQL initially improves but deteriorates for larger vocabulary sizes. We hypothesize that this behavior is an artifact of the chosen metrics. The MASE, which is invariant to the scale of individual series, is closely aligned with our training loss, which is also invariant to scale. Hence, MASE exhibits an improvement with increased precision, just as one expects for the training loss. Conversely, WQL, a scale-dependent metric, does not correlate closely with the training loss and behaves less predictably as precision increases. See Appendix D for a discussion on the properties of these metrics.

## 5.7 Qualitative Analysis And Limitations

In this section, we analyze forecasts generated by Chronos models qualitatively, and we also highlight some limitations of our tokenization technique.
We primarily focus on synthetically generated time series for a controlled analysis of different types of time series patterns. For example forecasts from real datasets, see Figures 22 to 24 in Appendix E.

6All benchmarks are zero-shot for this model, since it was only trained on synthetic data.

![15_image_0.png](15_image_0.png)

Figure 12: Forecasts generated by Chronos-T5 (Base) on synthetically generated patterns. (a) **Noise**: Chronos generates reasonable forecasts for Gaussian noise with the 80% prediction interval matching the interval of the underlying distribution (shown by the horizontal dashed blue line). (b) **Trend**: Chronos forecasts a linear trend (top) correctly but struggles with an exponential trend (bottom). (c) **Seasonality**: Chronos accurately models seasonal patterns of varying degrees of complexity (single seasonality at the top and three seasonalities at the bottom). (d) **Combined Patterns**: Chronos forecasts time series generated by the additive (top) or multiplicative (bottom) combination of trend and seasonal patterns accurately.

I.I.D. Noise. We generated time series comprised purely of Gaussian observations, N (0, 1) and N (100, 10), and used Chronos-T5 (Base) to forecast these. Figure 12a shows that Chronos generates plausible forecasts for such time series and the predicted 80% interval coincides with the ground truth 80% interval shown by the dashed blue lines.

![15_image_1.png](15_image_1.png)

Trend and seasonality. We generated time series following linear and exponential trends: Chronos-T5 (Base) predicts the linear trend accurately but struggles with the exponential trend, as shown in Figure 12b. This may be due to a limited representation of exponential trends in the training data. A potential resolution for generating better forecasts for time series with exponential trends is to perform logarithmic scaling before feeding the time series into Chronos models.
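The logarithmic-scaling workaround suggested above can be wrapped around any forecaster: log-transform the (strictly positive) history, forecast, and map the predictions back with exp, so that exponential trends become linear ones. A sketch with an illustrative forecaster interface:

```python
import math

def forecast_in_log_space(history, forecast_fn):
    """Forecast a strictly positive series in log space: exponential trends
    become linear, which models typically extrapolate more reliably.
    `forecast_fn` is any function mapping a history to a list of predictions."""
    log_history = [math.log(x) for x in history]
    return [math.exp(y) for y in forecast_fn(log_history)]
```

For example, paired with a naive linear extrapolator, the exponential series 2, 4, 8, 16, 32 continues as 64, 128 when forecast in log space.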
We also observed that Chronos models tend to underestimate the trend when the context is not sufficiently long. This phenomenon is depicted in Figure 13, where the model forecasts the pattern correctly but underpredicts the trend when a short context is provided. However, with a longer context, the model picks up the correct pattern and trend.

Figure 13: When the context is not sufficiently long, Chronos-T5 (Base) tends to underestimate the trend, as shown in this example with the classic Air Passengers data (monthly) and a forecast horizon of 24. Top: with only 120 observations as context, the median prediction plateaus compared to the previous trend. Bottom: with the full context of 144 observations, the prediction picks up the trend more closely.

In our analysis, we observed that Chronos models recognize seasonal patterns in time series particularly well. We generated purely seasonal time series using sinusoids with different frequencies. As shown in Figure 12c, Chronos-T5 (Base) precisely forecasts both time series. When fundamental patterns such as trend and seasonality are combined, either additively or multiplicatively, Chronos forecasts them accurately. This is demonstrated in Figure 12d on time series generated via addition and multiplication of a linear function with a sinusoid.

![16_image_0.png](16_image_0.png)

Figure 14: Forecasts generated by Chronos-T5 (Base) for time series generated from AR(1) and AR(4) processes compared against forecasts generated by the ground truth AR model, a fitted AR model of the correct order, and an AutoARIMA model. Chronos-T5 (Base) generates plausible forecasts and prediction intervals in both cases. All AR models fit the simpler AR(1) process correctly and obtain better MSE than Chronos-T5 (Base); however, with the increased complexity in the AR(4) process, Chronos-T5 (Base) performs second best after the ground truth AR model.

Autoregressive processes.
An autoregressive (AR) process of order p is defined as

$$X_{t}=\sum_{i=1}^{p}\varphi_{i}X_{t-i}+\varepsilon_{t},$$

where εt ∼ N (0, 1) and φ1, . . . , φp are the parameters of the model. We generated time series from stationary AR processes of different orders ranging from 1 to 4, and we compared the forecasts generated by Chronos-T5 (Base) against those generated by three models: (a) the ground truth AR model that was used to generate the time series; (b) an AR model with the correct order (p) fitted to the time series; and (c) an AutoARIMA model fitted to the time series. Figure 14 shows the results for the AR(1) and AR(4) processes, and Figure 21 (Appendix E) shows the results for AR(2) and AR(3). We observe that Chronos-T5 (Base) generates plausible forecasts across all four AR processes. The simpler AR(1) and AR(2) processes are easier for the correctly-specified AR model and AutoARIMA model to fit, resulting in a better MSE than Chronos-T5 (Base). However, with increasing complexity in the AR(3) and AR(4) processes, Chronos-T5 (Base) not only outperforms the AutoARIMA model (which belongs to the same family as the ground truth model) but also performs slightly better than the fitted AR model with the correct order. These results highlight that Chronos models can recognize fundamental patterns present in time series data.

Flexible predictive distributions. Using a categorical distribution to encode predictions gives Chronos flexibility in producing predictive distributions of different shapes. This is shown in Figure 15.

![17_image_0.png](17_image_0.png)

Figure 15: Forecast distributions from a Chronos model on series from the NN5 (Daily), Traffic, and Hospital datasets, respectively. Each plot shows the predictive distribution for five prediction steps (h = 1, . . . , 5): the densities were obtained via kernel density estimation from sample forecasts.
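The AR(p) data used in the experiment above can be simulated directly from the defining equation; a minimal sketch, where the burn-in length is an illustrative choice to let the process approach stationarity:

```python
import random

def simulate_ar(phi, n, burn_in=200, seed=0):
    """Simulate X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + eps_t with
    eps_t ~ N(0, 1), discarding a burn-in prefix. Assumes the coefficients
    `phi` define a stationary process."""
    rng = random.Random(seed)
    p = len(phi)
    x = [0.0] * p
    for _ in range(burn_in + n):
        x.append(sum(phi[i] * x[-1 - i] for i in range(p)) + rng.gauss(0.0, 1.0))
    return x[-n:]
```

For an AR(1) process with φ1 = 0.5, the stationary standard deviation is 1/√(1 − 0.25) ≈ 1.15, which the simulated series approximates.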
Figure 15 shows kernel density estimate (KDE) plots of token IDs sampled from a Chronos model, for the first five time steps in the forecast horizon, across three datasets. Despite the fact that cross-entropy is not distance-aware, Chronos outputs predictive distributions over a contiguous set of tokens, and with different shapes, including multi-modal ones.

Overflow and loss of precision. One limitation of Chronos comes from the proposed tokenization approach (see Section 3.1). Specifically, the tokens we select represent values in the range [−15s, 15s], where s is the scale of the data (mean absolute value). If s is very *small* compared to the range of values in the series, then some observations will fall out of the representable range. An example of this behaviour occurs with sparse series, as shown in Figure 16a. On the other hand, very *large* values of s compared to the variance result in loss of precision: in the original space, tokens are spaced 30s/(B − 1) from each other, where B is the number of bins (we used B = 4094 in our experiments); values closer than that to each other may be mapped to the same token, with an apparent loss of precision. An example of this behaviour is given in Figure 16b.

![17_image_1.png](17_image_1.png)

Figure 16: Loss of precision due to scaling and quantization. In (a), data consists of unit spikes every n = 10, 20, 50 observations (top to bottom): the scale here is 1/n, hence the maximum representable value is 15/n. When 1 > 15/n, the model cannot possibly capture the spikes appropriately (all but the top case), since their value is not represented accurately by tokens. In (b), data is a sine wave shifted up by µ = 1, 10, 50: the scale here is µ, and as the variance of the signal becomes smaller and smaller relative to µ, the token precision decreases.
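The scaling-and-quantization scheme and its two failure modes can be made concrete in a few lines; this is a sketch, with bin placement simplified relative to the actual Chronos tokenizer (which also reserves special tokens):

```python
def tokenize(series, num_bins=4094, limit=15.0):
    """Mean-scale a series and map each value to a uniform bin covering
    [-limit, limit] in scaled space; out-of-range values are clipped,
    which is the overflow failure mode. Returns (token ids, scale)."""
    scale = sum(abs(x) for x in series) / len(series) or 1.0
    width = 2 * limit / (num_bins - 1)  # tokens are spaced 30s/(B-1) apart
    tokens = []
    for x in series:
        z = max(-limit, min(limit, x / scale))
        tokens.append(round((z + limit) / width))
    return tokens, scale

def detokenize(tokens, scale, num_bins=4094, limit=15.0):
    """Map token ids back to approximate values; quantization error is at
    most half a bin width times the scale (the loss-of-precision mode)."""
    width = 2 * limit / (num_bins - 1)
    return [(t * width - limit) * scale for t in tokens]
```

Running a spiky series through this round trip reproduces the failure in Figure 16a: with 99 near-zero values and one spike of 10, the scale is about 0.11, the spike maps to the clipped top token, and it comes back as roughly 1.6 instead of 10.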
Improving the tokenization to overcome these edge cases is a subject for future work, but the results from Section 5.5 suggest that Chronos models perform well on real-world data despite this limitation.

## 6 Discussion

Chronos represents one of the first endeavours toward practical pretrained time series forecasting models, with remarkable zero-shot performance on a comprehensive collection of test datasets. This work opens up various research avenues, some of which we discuss below.

## 6.1 Beyond Zero-Shot Univariate Forecasting

In our experiments, we evaluated Chronos in a zero-shot manner for most datasets. Such a setup highlights the competitiveness of zero-shot Chronos models against task-specific baselines. We expect that both in-domain and zero-shot results could be enhanced further through fine-tuning, an avenue we briefly explored in Section 5.5.2. This can be done using any parameter-efficient fine-tuning method, such as those based on low-rank adapters (LoRA) (Hu et al., 2021; Zhang et al., 2023). Alternatively, Chronos can be calibrated for a specific task with conformal methods (Romano et al., 2019; Stankeviciute et al., 2021; Xu & Xie, 2021). Chronos is especially attractive in the context of conformal prediction since it requires no training set, so all available data can be used for calibration. In this work, we have focused on univariate time series forecasting since it constitutes the most common real-world time series use case. Nevertheless, practical forecasting tasks often involve additional information that must be taken into account. One example involves covariates, which can be either time-independent (e.g., color of the product) or time-varying (e.g., on which days the product is on sale). Another closely related problem is multivariate forecasting, where historic values of one time series (e.g., interest rates) can influence the forecast for another time series (e.g., housing prices).
The number of covariates or multivariate dimensions can vary greatly across tasks, which makes it challenging to train a single model that can handle all possible combinations. A possible solution may involve training task-specific adaptors that inject the covariates into the pretrained forecasting model (Rahman et al., 2020). As another option, we can build stacking ensembles (Ting & Witten, 1997) of Chronos and other lightweight models that excel at handling covariates, such as LightGBM (Ke et al., 2017).

Thus far, our exploration has centered on the problem of time series forecasting. However, several other time series analysis tasks, such as classification, clustering, and anomaly detection (Dau et al., 2018; Wu & Keogh, 2021; Ismail Fawaz et al., 2019; Goswami et al., 2024), could potentially benefit from a pretrained model like Chronos. We hypothesize that the representations learned by the encoders of Chronos-T5 models are universal and can be used for these tasks. An exploration of Chronos-T5 representations for various downstream tasks would constitute interesting future work.

## 6.2 Inference

A potential limitation of the larger Chronos models is their inference speed compared to task-specific deep learning models. Figure 17 illustrates the inference time of generating forecasts for a single time series, averaged across datasets. The inference speed of the larger Chronos models is comparable to some statistical local models. Moreover, while Chronos models are slower than task-specific models, they are not prohibitively slow. Furthermore, task-specific models need to be trained for each task individually, which requires additional time and compute. In contrast, Chronos models can be deployed for datasets with diverse history lengths, frequencies, prediction horizons, and context lengths. This makes model deployment significantly easier and drastically simplifies forecasting pipelines, obviating the need for task-specific training.
![18_image_0.png](18_image_0.png)

Figure 17: Inference time of different models for forecasting a single time series, averaged across datasets. The compute requirements of individual models have been highlighted.

By leveraging a language modeling framework for time series, we make developments in the NLP community immediately transferable to Chronos models. For instance, inference speed can be improved by using CUDA kernels optimized for modern Ampere GPUs, quantization (Dettmers et al., 2022), and faster decoding techniques, including speculative (Leviathan et al., 2023) and lookahead (Fu et al., 2023) decoding. Developments in long-context language models (Sun et al., 2022; Dao, 2023) may help improve Chronos models' applicability to high-frequency datasets that require longer contexts to capture seasonal patterns. Other techniques popularly used for text language models, such as temperature tuning, beam search (Freitag & Al-Onaizan, 2017), Top-K sampling (Fan et al., 2018), and nucleus sampling (Holtzman et al., 2019), could enhance the quality of forecasts. These may be particularly helpful in improving the speed and quality of point forecasts, which currently require aggregation over multiple samples.

## 6.3 Data

Our findings underscore that training larger models on a large corpus of time series data yields excellent in-domain and zero-shot performance. Nevertheless, in contrast to NLP, high-quality public time series data remains limited. This poses a dilemma when training models on a large corpus of diverse datasets: selecting more datasets for training leaves fewer for zero-shot evaluation. The time series community would benefit greatly from the availability of larger time series datasets that could be used to develop and improve pretrained models such as Chronos.
There have been some recent efforts on building large-scale time series datasets for specific domains (Emami et al., 2023; Liu et al., 2023) and across domains (Borchert et al., 2022), although further investment is needed. Another direction to address the problem of limited data involves developing better methods for generating synthetic time series. Our work has made significant strides in this direction by clearly demonstrating the utility of synthetic data generated using Gaussian processes, improving model performance when incorporated into the training data. Even models trained solely on synthetic data exhibit reasonable forecasting performance. Future research could delve into the failure modes of these models, proposing enhancements to bridge the gap between real and synthetic data.

## 7 Conclusion

In this work, we approach the problem of developing generalist pretrained forecasting models through the lens of a minimalist. We adapt existing language model architectures and training procedures for time series forecasting, challenging the notion that time-series-specific features or architectures are necessary for forecasting. This results in Chronos, a language modeling framework for time series that is, paradoxically, agnostic to time. The defining characteristic of Chronos is its compatibility with any language model architecture, only requiring minimal modifications: tokenization through scaling and quantization. Our pretrained models significantly outperform existing local models and task-specific deep learning baselines in terms of their in-domain performance. More remarkably, Chronos models obtain excellent results on unseen datasets (zero-shot performance), performing competitively with the best deep-learning baselines trained on these datasets, while showing promising evidence of further improvements through fine-tuning. Our contributions are significant in two key aspects.
First, we show that existing language model architectures are capable of performing forecasting without time-series-specific customizations. This paves the way for accelerated progress by leveraging developments in the area of LLMs and through better data strategies. Second, on a practical level, the strong performance of Chronos models suggests that large (by forecasting standards) pretrained language models can greatly simplify forecasting pipelines without sacrificing accuracy, offering an inference-only alternative to the conventional approach involving training and tuning a model on individual tasks.

## References

Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, et al. GluonTS: Probabilistic and Neural Time Series Modeling in Python. *The Journal of Machine Learning Research*, 21(1):4629–4634, 2020.

Abdul Fatir Ansari, Konstantinos Benidis, Richard Kurle, Ali Caner Turkmen, Harold Soh, Alexander J Smola, Bernie Wang, and Tim Januschowski. Deep Explicit Duration Switching Models for Time Series. *Advances in Neural Information Processing Systems*, 34, 2021.

V. Assimakopoulos and K. Nikolopoulos. The theta model: a decomposition approach to forecasting. *International Journal of Forecasting*, 16(4):521–530, 2000.

George Athanasopoulos, Rob J. Hyndman, Haiyan Song, and Doris C. Wu. The tourism forecasting competition. *International Journal of Forecasting*, 27(3):822–844, 2011.

Konstantinos Benidis, Syama Sundar Rangapuram, Valentin Flunkert, Yuyang Wang, Danielle Maddix, Caner Turkmen, Jan Gasthaus, Michael Bohlke-Schneider, David Salinas, Lorenzo Stella, François-Xavier Aubet, Laurent Callot, and Tim Januschowski. Deep learning for time series forecasting: Tutorial and literature survey. *ACM Comput. Surv.*, 55(6), 2022.
Oliver Borchert, David Salinas, Valentin Flunkert, Tim Januschowski, and Stephan Günnemann. Multi-objective model selection for time series forecasting. *arXiv preprint arXiv:2202.08485*, 2022.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In *Advances in Neural Information Processing Systems*, 2020.

Chris U Carmona, François-Xavier Aubet, Valentin Flunkert, and Jan Gasthaus. Neural Contextual Anomaly Detection for Time Series. *arXiv:2107.07702*, 2021.

Cristian Challu, Kin G Olivares, Boris N Oreshkin, Federico Garza Ramirez, Max Mergenthaler Canseco, and Artur Dubrawski. N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, 2023.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling Language Modeling with Pathways. *Journal of Machine Learning Research*, 24(240):1–113, 2023.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling Instruction-Finetuned Language Models. *arXiv:2210.11416*, 2022.

Tri Dao. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. *arXiv:2307.08691*, 2023.

Abhimanyu Das, Weihao Kong, Rajat Sen, and Yichen Zhou. A decoder-only foundation model for time-series forecasting. *arXiv:2310.10688*, 2023.
1, 4 Hoang Anh Dau, Eamonn Keogh, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, Yanping, Bing Hu, Nurjahan Begum, Anthony Bagnall, Abdullah Mueen, Gustavo Batista, and Hexagon-ML. The UCR Time Series Classification Archive, October 2018. https: //www.cs.ucr.edu/~eamonn/time_series_data_2018/. 19 Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. *arXiv:2208.07339*, 2022. 20 Jiaxiang Dong, Haixu Wu, Haoran Zhang, Li Zhang, Jianmin Wang, and Mingsheng Long. SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling. *arXiv:2302.00861*, 2023. 4 Samuel Dooley, Gurnoor Singh Khurana, Chirag Mohapatra, Siddartha Naidu, and Colin White. ForecastPFN: Synthetically-Trained Zero-Shot Forecasting. In Advances in Neural Information Processing Systems, 2023. 1, 4, 9, 15, 31 David Duvenaud, James Lloyd, Roger Grosse, Joshua Tenenbaum, and Ghahramani Zoubin. Structure Discovery in Nonparametric Regression through Compositional Kernel Search. In *International Conference* on Machine Learning, pp. 1166–1174. PMLR, 2013. 7 Patrick Emami, Abhijeet Sahu, and Peter Graf. BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark for Short-Term Load Forecasting. *arXiv:2307.00142*, 2023. 20 Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical Neural Story Generation. *arXiv:1805.04833*, 2018. 20 Philip J Fleming and John J Wallace. How not to lie with statistics: the correct way to summarize benchmark results. *Communications of the ACM*, 29(3):218–221, 1986. 10 Markus Freitag and Yaser Al-Onaizan. Beam Search Strategies for Neural Machine Translation. arXiv:1702.01806, 2017. 20 Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. Breaking the Sequential Dependency of LLM Inference Using Lookahead Decoding, November 2023. URL https://lmsys.org/blog/ 2023-11-21-lookahead-decoding/. 
20 Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. *arXiv:2101.00027*, 2020. 6 Federico Garza, Max Mergenthaler Canseco, Cristian Challú, and Kin G. Olivares. StatsForecast: Lightning fast forecasting with statistical and econometric models. PyCon Salt Lake City, Utah, US 2022, 2022. URL https://github.com/Nixtla/statsforecast. 31 Jan Gasthaus, Konstantinos Benidis, Yuyang Wang, Syama Sundar Rangapuram, David Salinas, Valentin Flunkert, and Tim Januschowski. Probabilistic Forecasting with Spline Quantile Function RNNs. In *Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics*, volume 89 of *Proceedings of Machine Learning Research*, pp. 1901–1910. PMLR, 2019. 3, 9, 33 Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American statistical Association, 102(477):359–378, 2007. 9, 33 Rakshitha Godahewa, Christoph Bergmeir, Geoffrey I. Webb, Rob J. Hyndman, and Pablo Montero-Manso. Monash Time Series Forecasting Archive. In Neural Information Processing Systems Track on Datasets and Benchmarks, 2021. 8, 27, 29, 30 Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, and Artur Dubrawski. Moment: A family of open time-series foundation models. *arXiv preprint arXiv:2402.03885*, 2024. 4, 19 Nate Gruver, Marc Finzi, Shikai Qiu, and Andrew Gordon Wilson. Large Language Models Are Zero-Shot Time Series Forecasters. In *Advances in Neural Information Processing Systems*, 2023. 1, 3, 9, 31, 32 Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. *arXiv:1904.09751*, 2019. 20 Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. 
*arXiv:2106.09685*, 2021. 19 Rob Hyndman, Anne B Koehler, J Keith Ord, and Ralph D Snyder. Forecasting with exponential smoothing: the state space approach. Springer Science & Business Media, 2008. 3, 9 Rob J Hyndman and George Athanasopoulos. *Forecasting: principles and practice*. OTexts, 2018. 1, 9 Rob J Hyndman and Anne B Koehler. Another look at measures of forecast accuracy. International journal of forecasting, 22(4):679–688, 2006. 10, 32 Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. Deep learning for time series classification: a review. *Data mining and knowledge discovery*, 33(4):917–963, 2019. 19 Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, and Qingsong Wen. Time-LLM: Time series forecasting by reprogramming large language models. In *The Twelfth International Conference on Learning Representations*, 2024. 1, 4 Xiaoyong Jin, Youngsuk Park, Danielle Maddix, Hao Wang, and Yuyang Wang. Domain adaptation for time series forecasting via attention sharing. In *International Conference on Machine Learning*, pp. 10280– 10297. PMLR, 2022. 1, 4 Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. Advances in neural information processing systems, 30, 2017. 19 Roger Koenker and Kevin F Hallock. Quantile regression. *Journal of economic perspectives*, 15(4):143–156, 2001. 33 Stephan Kolassa and Tim Januschowski. A classification of business forecasting problems. *Foresight*, 52, 2019. 1 Marcel Kollovieh, Abdul Fatir Ansari, Michael Bohlke-Schneider, Jasper Zschiegner, Hao Wang, and Yuyang Wang. Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting. In *Advances in Neural Information Processing Systems*, volume 36, pp. 28341–28364. Curran Associates, Inc., 2023. 
9 Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In *International Conference on Machine Learning*, pp. 19274–19286. PMLR, 2023. 20 Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. *arXiv:1910.13461*, 2019. 3 Bryan Lim, Sercan Ö Arık, Nicolas Loeff, and Tomas Pfister. Temporal fusion transformers for interpretable multi-horizon time series forecasting. *International Journal of Forecasting*, 37(4):1748–1764, 2021. 3, 6, 9, 31 Xu Liu, Yutong Xia, Yuxuan Liang, Junfeng Hu, Yiwei Wang, Lei Bai, Chao Huang, Zhenguang Liu, Bryan Hooi, and Roger Zimmermann. Largest: A benchmark dataset for large-scale traffic forecasting. arXiv:2306.08259, 2023. 20 Spyros Makridakis and Michele Hibon. The M3-Competition: results, conclusions and implications. *International journal of forecasting*, 16(4):451–476, 2000. 8, 30 Spyros Makridakis, Michele Hibon, and Claus Moser. Accuracy of forecasting: An empirical investigation. Journal of the Royal Statistical Society. Series A (General), 142(2):97–145, 1979. 8, 30 Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The M4 Competition: 100,000 time series and 61 forecasting methods. *International Journal of Forecasting*, 36(1):54–74, 2020. 8, 30 Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. M5 accuracy competition: Results, findings, and conclusions. *International Journal of Forecasting*, 38(4):1346–1364, 2022. 8, 30 Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv:1609.07843, 2016. 6 Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng. Large language models as general pattern machines. 
In Proceedings of The 7th Conference on Robot Learning, volume 229 of *Proceedings of Machine Learning* Research, pp. 2498–2518. PMLR, 2023. 3 Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In *International Conference on Learning Representations*, 2023. 3, 4, 9, 31 Kin G. Olivares, Cristian Challú, Federico Garza, Max Mergenthaler Canseco, and Artur Dubrawski. NeuralForecast: User friendly state-of-the-art neural forecasting models. PyCon Salt Lake City, Utah, US 2022, 2022. URL https://github.com/Nixtla/neuralforecast. 31 Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv:1609.03499, 2016. 9, 31 Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In *International Conference on Learning Representations*, 2020. 9, 31 Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. Meta-learning framework with applications to zero-shot time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021. 4 Bernardo Pérez Orozco and Stephen J. Roberts. Zero-shot and few-shot time series forecasting with ordinal regression recurrent neural networks. In *28th European Symposium on Artificial Neural Networks,* Computational Intelligence and Machine Learning, pp. 503–508, 2020. 4 Youngsuk Park, Danielle Maddix, François-Xavier Aubet, Kelvin Kan, Jan Gasthaus, and Yuyang Wang. Learning quantile functions without quantile crossing for distribution-free time series forecasting. In International Conference on Artificial Intelligence and Statistics, pp. 8127–8150. PMLR, 2022. 3 Stephan Rabanser, Tim Januschowski, Valentin Flunkert, David Salinas, and Jan Gasthaus. 
The effectiveness of discretization in forecasting: An empirical study on neural time series models. *arXiv:2005.10111*, 2020. 5 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019. 4, 5 Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551, 2020. 3, 5, 6, 9, 13 Wasifur Rahman, Md Kamrul Hasan, Sangwu Lee, Amir Zadeh, Chengfeng Mao, Louis-Philippe Morency, and Ehsan Hoque. Integrating multimodal information in large pretrained transformers. In *Proceedings of* the conference. Association for Computational Linguistics. Meeting, volume 2020, pp. 2359. NIH Public Access, 2020. 19 Syama Sundar Rangapuram, Matthias W Seeger, Jan Gasthaus, Lorenzo Stella, Yuyang Wang, and Tim Januschowski. Deep state space models for time series forecasting. *Advances in neural information processing systems*, 31, 2018. 3 Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In International Conference on Machine Learning, pp. 8857–8868. PMLR, 2021. 3 Kashif Rasul, Arjun Ashok, Andrew Robert Williams, Arian Khorasani, George Adamopoulos, Rishika Bhagwatkar, Marin Biloš, Hena Ghonia, Nadhir Vincent Hassen, Anderson Schneider, Sahil Garg, Alexandre Drouin, Nicolas Chapados, Yuriy Nevmyvaka, and Irina Rish. Lag-llama: Towards foundation models for time series forecasting, 2023. 1, 4, 9, 31 Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. *Advances in* neural information processing systems, 32, 2019. 19 David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. 
Deepar: Probabilistic forecasting with autoregressive recurrent networks. *International Journal of Forecasting*, 36(3):1181–1191, 2020. 3, 5, 6, 9, 31 Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. *arXiv:1508.07909*, 2015. 3 Oleksandr Shchur, Ali Caner Turkmen, Nick Erickson, Huibin Shen, Alexander Shirkov, Tony Hu, and Bernie Wang. Autogluon–timeseries: Automl for probabilistic time series forecasting. In International Conference on Automated Machine Learning, pp. 9–1. PMLR, 2023. 9 Kamile Stankeviciute, Ahmed M Alaa, and Mihaela van der Schaar. Conformal time-series forecasting. Advances in neural information processing systems, 34:6216–6228, 2021. 19 Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer. *arXiv:2212.10554*, 2022. 20 Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision, 2015. 4 Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler. Scale efficiently: Insights from pre-training and fine-tuning transformers. *arXiv:2109.10686*, 2021. 9, 13 Kai Ming Ting and Ian H Witten. Stacking bagged and dagged models. In Proceedings of the Fourteenth International Conference on Machine Learning, 1997. 19 Luis Torgo and Joao Gama. Regression using Classification Algorithms. *Intelligent Data Analysis*, 1(4): 275–292, 1997. 
6 Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open Foundation and Fine-Tuned Chat Models, 2023. 1, 3 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. In *Advances in Neural Information Processing Systems*, 2017. 3 Yuyang Wang, Alex Smola, Danielle Maddix, Jan Gasthaus, Dean Foster, and Tim Januschowski. Deep factors for forecasting. In *International conference on machine learning*, pp. 6607–6617. PMLR, 2019. 3 Ruofeng Wen, Kari Torkkola, Balakrishnan Narayanaswamy, and Dhruv Madeka. A Multi-Horizon Quantile Recurrent Forecaster. *arXiv:1711.11053*, 2017. 3, 6 Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 
Transformers: State-of-the-art natural language processing. In *Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System* Demonstrations, pp. 38–45. Association for Computational Linguistics, 2020. 6, 9 Gerald Woo, Chenghao Liu, Akshat Kumar, Caiming Xiong, Silvio Savarese, and Doyen Sahoo. Unified training of universal time series forecasting transformers. *arXiv:2402.02592*, 2024. 1, 4, 9, 31 Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In International Conference on Learning Representations, 2023. 4 Renjie Wu and Eamonn Keogh. Current Time Series Anomaly Detection Benchmarks are Flawed and are Creating the Illusion of Progress. *IEEE Transactions on Knowledge and Data Engineering*, 2021. 19 Chen Xu and Yao Xie. Conformal Prediction Interval for Dynamic Time-Series. In *International Conference* on Machine Learning, pp. 11559–11569. PMLR, 2021. 19 Hao Xue and Flora D. Salim. PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting. *arXiv:2210.08964*, 2023. 1, 3 Rui Ye and Qun Dai. A novel transfer learning framework for time series forecasting. Knowledge-Based Systems, 156:74–99, 2018. 1 Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are Transformers Effective for Time Series Forecasting? In *Proceedings of the AAAI conference on artificial intelligence*, volume 37, 2023. 3, 9, 31 Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond Empirical Risk Minimization. *arXiv:1710.09412*, 2017. 6 Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. Adaptive budget allocation for parameter-efficient fine-tuning. *arXiv:2303.10512*, 2023. 19 Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. *arXiv:2303.18223*, 2023. 
Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. In *The Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Conference*, volume 35, pp. 11106–11115. AAAI Press, 2021.

Tian Zhou, Peisong Niu, Xue Wang, Liang Sun, and Rong Jin. One Fits All: Power general time series analysis by pretrained LM. In *Advances in Neural Information Processing Systems*, 2023a.

Yun Zhou, Liwen You, Wenzhen Zhu, and Panpan Xu. Improving time series forecasting with mixup data augmentation. In *ECML PKDD 2023 International Workshop on Machine Learning for Irregular Time Series*, 2023b.

## A Algorithms

Algorithm 1 and Algorithm 2 present the pseudocode for TSMixup and KernelSynth, respectively.

**Algorithm 1** TSMixup: Time Series Mixup

**Input:** Time series datasets {X_1, ..., X_{N_d}}, maximum number of time series to be mixed K = 3, Dirichlet concentration parameter α = 1.5, and (minimum, maximum) length of the augmented time series (l_min = 128, l_max = 2048).
**Output:** An augmented time series.

1: k ∼ U{1, K} ▷ number of time series to mix
2: l ∼ U{l_min, l_max} ▷ length of the augmented time series
3: **for** i ← 1, k **do**
4:   n ∼ U{1, N_d} ▷ sample a dataset index
5:   x^(i)_{1:l} ∼ X_n ▷ sample a time series of length l from dataset n
6:   x̃^(i)_{1:l} ← x^(i)_{1:l} / ((1/l) Σ_{j=1}^{l} |x^(i)_j|) ▷ apply mean scaling to the time series
7: **end for**
8: [λ_1, ..., λ_k] ∼ Dir(α) ▷ sample mixing weights
9: **return** Σ_{i=1}^{k} λ_i x̃^(i)_{1:l} ▷ take weighted combination of time series

**Algorithm 2** KernelSynth: Synthetic Data Generation using Gaussian Processes

**Input:** Kernel bank K, maximum number of kernels per time series J = 5, and length of the time series l_syn = 1024.
**Output:** A synthetic time series x_{1:l_syn}.

1: j ∼ U{1, J} ▷ sample the number of kernels
2: {κ_1(t, t′), ..., κ_j(t, t′)} i.i.d. ∼ K ▷ sample j kernels from K
3: κ*(t, t′) ← κ_1(t, t′)
4: **for** i ← 2, j **do**
5:   ⋆ ∼ {+, ×} ▷ sample a random binary operator
6:   κ*(t, t′) ← κ*(t, t′) ⋆ κ_i(t, t′) ▷ compose kernels
7: **end for**
8: x_{1:l_syn} ∼ GP(0, κ*(t, t′)) ▷ sample from the GP prior
9: **return** x_{1:l_syn}

## B Datasets

The complete list of datasets used for our empirical evaluation is provided in Table 2. The table is divided into three sections, representing how the datasets were used for Chronos models: in total, 55 datasets were used for experiments, 13 of which for pretraining only, 15 for in-domain evaluation, and 27 for zero-shot evaluation (see also Section 5). In the following, we provide a brief description of each dataset, organized by its domain.

## B.1 Energy

Australian Electricity (Godahewa et al., 2021) contains electricity demand data from 5 states in Australia.

Electricity (15 Min., Hourly, Weekly) contains electricity consumption (in kW) for 370 households. The original data has 15-minute frequency and was obtained from https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014; the hourly and weekly aggregations are from Godahewa et al. (2021).

ERCOT Load contains hourly energy load in 8 US regions between 2004 and 2021.

ETT (15 Min., Hourly) (Zhou et al., 2021) contains oil temperatures and other covariates of electrical transformers from two stations in China, measured at 15-minute granularity.

Table 2: All datasets that are used for experiments.
The datasets are partitioned according to how they are used for training and evaluation of Chronos models: *pretraining-only* data is only used for Chronos training; *in-domain* evaluation data is used for training Chronos models and other task-specific baselines, except for the H observations that are held out for in-domain testing only; *zero-shot evaluation* data is not used in training Chronos models, but only for evaluation (final H observations), as well as for training task-specific baselines (excluding the final H observations).

| Dataset | Domain | Freq. | Num. Series | Min Length | Avg Length | Max Length | Prediction Length (H) |
|---|---|---|---|---|---|---|---|
| **Pretraining-only** | | | | | | | |
| Brazilian Cities Temperature | nature | M | 12 | 492 | 757 | 1320 | - |
| Mexico City Bikes | transport | 1H | 494 | 780 | 78313 | 104449 | - |
| Solar (5 Min.) | energy | 5min | 5166 | 105120 | 105120 | 105120 | - |
| Solar (Hourly) | energy | 1H | 5166 | 8760 | 8760 | 8760 | - |
| Spanish Energy and Weather | energy | 1H | 66 | 35064 | 35064 | 35064 | - |
| Taxi (Hourly) | transport | 1H | 2428 | 734 | 739 | 744 | - |
| USHCN | nature | 1D | 6090 | 5906 | 38653 | 59283 | - |
| Weatherbench (Daily) | nature | 1D | 225280 | 14609 | 14609 | 14610 | - |
| Weatherbench (Hourly) | nature | 1H | 225280 | 350633 | 350639 | 350640 | - |
| Weatherbench (Weekly) | nature | 1W | 225280 | 2087 | 2087 | 2087 | - |
| Wiki Daily (100k) | web | 1D | 100000 | 2741 | 2741 | 2741 | - |
| Wind Farms (Daily) | energy | 1D | 337 | 71 | 354 | 366 | - |
| Wind Farms (Hourly) | energy | 1H | 337 | 1715 | 8514 | 8784 | - |
| **In-domain evaluation** | | | | | | | |
| Electricity (15 Min.) | energy | 15min | 370 | 16032 | 113341 | 140256 | 24 |
| Electricity (Hourly) | energy | 1H | 321 | 26304 | 26304 | 26304 | 24 |
| Electricity (Weekly) | energy | 1W | 321 | 156 | 156 | 156 | 8 |
| KDD Cup 2018 | nature | 1H | 270 | 9504 | 10897 | 10920 | 48 |
| London Smart Meters | energy | 30min | 5560 | 288 | 29951 | 39648 | 48 |
| M4 (Daily) | various | 1D | 4227 | 107 | 2371 | 9933 | 14 |
| M4 (Hourly) | various | 1H | 414 | 748 | 901 | 1008 | 48 |
| M4 (Monthly) | various | 1M | 48000 | 60 | 234 | 2812 | 18 |
| M4 (Weekly) | various | 1W | 359 | 93 | 1035 | 2610 | 13 |
| Pedestrian Counts | transport | 1H | 66 | 576 | 47459 | 96424 | 48 |
| Rideshare | transport | 1H | 2340 | 541 | 541 | 541 | 24 |
| Taxi (30 Min.) | transport | 30min | 2428 | 1469 | 1478 | 1488 | 48 |
| Temperature-Rain | nature | 1D | 32072 | 725 | 725 | 725 | 30 |
| Uber TLC (Daily) | transport | 1D | 262 | 181 | 181 | 181 | 7 |
| Uber TLC (Hourly) | transport | 1H | 262 | 4344 | 4344 | 4344 | 24 |
| **Zero-shot evaluation** | | | | | | | |
| Australian Electricity | energy | 30min | 5 | 230736 | 231052 | 232272 | 48 |
| CIF 2016 | banking | 1M | 72 | 28 | 98 | 120 | 12 |
| Car Parts | retail | 1M | 2674 | 51 | 51 | 51 | 12 |
| Covid Deaths | healthcare | 1D | 266 | 212 | 212 | 212 | 30 |
| Dominick | retail | 1D | 100014 | 201 | 296 | 399 | 8 |
| ERCOT Load | energy | 1H | 8 | 154854 | 154854 | 154854 | 24 |
| ETT (15 Min.) | energy | 15min | 14 | 69680 | 69680 | 69680 | 24 |
| ETT (Hourly) | energy | 1H | 14 | 17420 | 17420 | 17420 | 24 |
| Exchange Rate | finance | 1B | 8 | 7588 | 7588 | 7588 | 30 |
| FRED-MD | economics | 1M | 107 | 728 | 728 | 728 | 12 |
| Hospital | healthcare | 1M | 767 | 84 | 84 | 84 | 12 |
| M1 (Monthly) | various | 1M | 617 | 48 | 90 | 150 | 18 |
| M1 (Quarterly) | various | 3M | 203 | 18 | 48 | 114 | 8 |
| M1 (Yearly) | various | 1Y | 181 | 15 | 24 | 58 | 6 |
| M3 (Monthly) | various | 1M | 1428 | 66 | 117 | 144 | 18 |
| M3 (Quarterly) | various | 3M | 756 | 24 | 48 | 72 | 8 |
| M3 (Yearly) | various | 1Y | 645 | 20 | 28 | 47 | 6 |
| M4 (Quarterly) | various | 3M | 24000 | 24 | 100 | 874 | 8 |
| M4 (Yearly) | various | 1Y | 23000 | 19 | 37 | 841 | 6 |
| M5 | retail | 1D | 30490 | 124 | 1562 | 1969 | 28 |
| NN5 (Daily) | finance | 1D | 111 | 791 | 791 | 791 | 56 |
| NN5 (Weekly) | finance | 1W | 111 | 113 | 113 | 113 | 8 |
| Tourism (Monthly) | various | 1M | 366 | 91 | 298 | 333 | 24 |
| Tourism (Quarterly) | various | 1Q | 427 | 30 | 99 | 130 | 8 |
| Tourism (Yearly) | various | 1Y | 518 | 11 | 24 | 47 | 4 |
| Traffic | transport | 1H | 862 | 17544 | 17544 | 17544 | 24 |
| Weather | nature | 1D | 3010 | 1332 | 14296 | 65981 | 30 |

London Smart Meters contains half-hourly energy consumption of 5561 households in the UK between 2011 and 2014. Data was obtained from https://data.london.gov.uk/dataset/smartmeter-energy-use-data-in-london-households.

Solar (5 Min., Hourly) contains data about solar power generation in the US in 2006. The original data has 5-minute frequency and was obtained from https://www.nrel.gov/grid/solar-power-data.html; the hourly version was obtained via mean aggregation.

Spanish Energy and Weather contains 4 years of electricity consumption, generation, pricing, and weather data for Spain. Electricity data is for all of Spain; weather data is provided for each of 5 major Spanish cities.
The data was obtained from https://www.kaggle.com/datasets/nicholasjhana/energy-consumption-generation-prices-and-weather.

Wind Farms (Hourly, Daily) (Godahewa et al., 2021) contains energy production data from wind farms in Australia. The original data was collected at 1-minute frequency, which we aggregated to hourly and daily using the mean.

## B.2 Finance And Economics

CIF 2016 (Godahewa et al., 2021) contains banking data that was used in the CIF 2016 forecasting competition. Of all time series included, 24 are real data while the other 48 are artificially generated.

Exchange Rate contains daily exchange rates for the currencies of eight countries (Australia, Britain, Canada, Switzerland, China, Japan, New Zealand, and Singapore) between 1990 and 2016.

FRED-MD (Godahewa et al., 2021) contains monthly macro-economic indicators from the Federal Reserve Bank. Data was extracted from the FRED-MD database, and the series were differenced and log-transformed.

NN5 (Daily, Weekly) (Godahewa et al., 2021) contains cash withdrawal data from ATMs.

## B.3 Healthcare

Covid Deaths (Godahewa et al., 2021) contains daily count data of COVID-19 deaths in a set of countries and states, between January and August 2020.

Hospital (Godahewa et al., 2021) contains monthly time series that represent the patient counts related to medical products from January 2000 to December 2006.

## B.4 Nature

Brazilian Cities Temperature contains monthly time series representing the weather at 12 different cities in Brazil. Data is originally from NOAA, and we used the post-processed version from https://www.kaggle.com/datasets/volpatto/temperature-timeseries-for-some-brazilian-cities.

KDD Cup 2018 (Godahewa et al., 2021) contains various air quality indicators (including PM2.5, PM10, NO2, CO, O3, and SO2), measured at 59 stations in Beijing and London between January 1, 2017 and March 31, 2018.
Temperature-Rain (Godahewa et al., 2021) contains daily temperature observations and rain forecasts from 422 stations in Australia, between 2015 and 2017.

USHCN contains daily measurements of five climate indicators (precipitation, snow, snow depth, minimum temperature, maximum temperature) from climate stations located in 48 states in the USA. Data was obtained from https://cdiac.ess-dive.lbl.gov/ftp/ushcn_daily/.

Weather (Godahewa et al., 2021) contains daily time series of four weather variables (rain, mintemp, maxtemp, and solar radiation) measured at weather stations in Australia.

Weatherbench (Hourly, Daily, Weekly) contains WeatherBench data at the spatial resolution of 5.625° (32×64 grid points). WeatherBench is a comprehensive benchmark dataset for weather prediction research and contains hourly values of many weather-related variables over 40 years, from 1979 to 2018 (including temperature, humidity, wind, and precipitation). The original data has hourly frequency and was obtained from https://github.com/pangeo-data/WeatherBench; we aggregated it to daily and weekly using the mean, except for "total precipitation", which was aggregated by sum.

## B.5 Retail

Car Parts (Godahewa et al., 2021) contains monthly sales data for various car parts, measured between January 1998 and March 2002.

Dominick (Godahewa et al., 2021) contains weekly time series representing the profit of individual stock keeping units from a retailer. Original data is from https://www.chicagobooth.edu/research/kilts/datasets/dominicks.

## B.6 Mobility And Transport

Mexico City Bikes contains hourly usage statistics for 494 bike stations in Mexico City from 2010 to 2022. Each value in the time series corresponds to the number of bikes returned at the given station at the given hour of the day. Data was obtained from https://ecobici.cdmx.gob.mx/en/open-data. Time series that contain fewer than 50 non-zero observations were removed.
Pedestrian Counts (Godahewa et al., 2021) contains data from 66 sensors in Melbourne, counting pedestrians between 2009 and 2020.

Rideshare contains various hourly statistics of Uber and Lyft services in New York, between November 26, 2018 and December 18, 2018.

Taxi (30 Min., Hourly) contains spatio-temporal traffic time series of New York taxi rides taken at 1214 locations every 30 minutes in the months of January 2015 and January 2016. The original data has 30-minute frequency; the hourly version was obtained by aggregation with sum.

Tourism (Monthly to Yearly) (Athanasopoulos et al., 2011; Godahewa et al., 2021) contains the tourism dataset used for the Kaggle Tourism Forecasting competition.

Traffic (Godahewa et al., 2021) contains hourly road occupancy readings from sensors in the San Francisco Bay area.

Uber TLC (Hourly, Daily) contains the number of Uber pick-ups from various locations in New York, between January and June 2015. Data was obtained from https://github.com/fivethirtyeight/uber-tlc-foil-response and aggregated hourly and daily.

## B.7 Various

M1 (Monthly to Yearly) (Makridakis et al., 1979; Godahewa et al., 2021) contains the time series used in the M1 forecasting competition. Data spans micro-/macroeconomics, industry, and demographics.

M3 (Monthly to Yearly) (Makridakis & Hibon, 2000; Godahewa et al., 2021) contains the time series used in the M3 forecasting competition. Data spans micro-/macroeconomics, industry, finance, and demographics.

M4 (Hourly to Yearly) (Makridakis et al., 2020; Godahewa et al., 2021) contains data from various domains, at different sampling periods, used for the M4 forecasting competition. Domains include micro-/macroeconomics, demographics, industry, and finance.

M5 (Makridakis et al., 2022) contains product sales data, used for the M5 forecasting competition. The data includes sales up to the end of the validation set (end of the public leaderboard), but not values for the test set (private leaderboard).
## B.8 Web

Wiki Daily (100k) contains daily page views on the top-100k English Wikipedia articles between 2007 and 2022, ranked by the number of non-missing observations. Data was obtained from https://dumps.wikimedia.org/other/pageviews/.

## C Baselines

We considered a total of 17 baseline methods for benchmarking Chronos. Local statistical baselines were AutoETS, AutoARIMA, Naive, Seasonal Naive, and AutoTheta (Assimakopoulos & Nikolopoulos, 2000); for these, we relied on implementations in the StatsForecast library (Garza et al., 2022). For the task-specific deep learning architectures DeepAR (Salinas et al., 2020), PatchTST (Nie et al., 2023), TFT (Lim et al., 2021), DLinear (Zeng et al., 2023), and WaveNet (Oord et al., 2016), we based evaluations on the implementations in GluonTS (Alexandrov et al., 2020). For N-BEATS (Oreshkin et al., 2020) and N-HiTS (Challu et al., 2023), experiments were based on implementations in the NeuralForecast library (Olivares et al., 2022). Finally, we used reference implementations of ForecastPFN (Dooley et al., 2023), GPT4TS (One-Fits-All) (Zhou et al., 2023a), LLMTime (Gruver et al., 2023), Lag-Llama (Rasul et al., 2023), and Moirai-1.0-R (Woo et al., 2024).

WaveNet and GPT4TS models were trained on AWS EC2 p3.2xlarge instances, which have 1 NVIDIA V100 GPU with 16GB VRAM. All other baselines were trained on the CPU on Intel-based EC2 instances. Task-specific deep learning baselines not based on large language models (DeepAR, PatchTST, TFT, DLinear, WaveNet, N-BEATS, and N-HiTS) were trained and evaluated three times and their performance averaged, in order to account for the high variance inherent in their optimization.

For inference, we used EC2 CPU instances for local models, N-HiTS, and N-BEATS. The p3.2xlarge instance (1 × V100 16GB) was used for inference for the other task-specific deep learning models and for pretrained models such as Lag-Llama, Moirai-1.0-R, and ForecastPFN.
Since LLMTime uses a Llama-2 70B model, which has significantly larger compute requirements, LLMTime inference was performed on a p3dn.24xlarge AWS EC2 instance with 8 NVIDIA V100 32GB GPUs.

Table 3: The multiplier used to set the context length in GPT4TS for each frequency. The context length is set equal to the multiplier times the prediction length, rounded to the nearest whole number.

| Frequency | Multiplier |
|-------------|--------------|
| 15min | 20 |
| 30min | 10 |
| 1H | 10 |
| 1D or 1B | 10 |
| 1W | 10 |
| 1M | 1.5 |
| 3M or 1Q | 1.5 |
| 1Y | 1.5 |

7https://github.com/abacusai/ForecastPFN
8https://github.com/DAMO-DI-ML/NeurIPS2023-One-Fits-All
9https://github.com/ngruver/llmtime
10https://github.com/time-series-foundation-models/lag-llama
11https://github.com/SalesforceAIResearch/uni2ts

Statistical baselines (AutoETS, AutoARIMA, AutoTheta, and SeasonalNaive) were used with their default hyperparameters in StatsForecast, but with season lengths implied by their frequencies. For example, daily frequency data had season length set to 7, hourly data 24, and so on. For this heuristic, we used the helper function get_seasonality from GluonTS. Unless otherwise specified, the default hyperparameter configurations provided in the baseline implementations were kept as is, and no dataset-specific or global hyperparameter tuning was performed. GluonTS-based implementations were optimized with a batch size of 128, for a time limit of 4 hours and an early stopping patience of 200 epochs. In PatchTST and DLinear, we experimented with two loss functions: the original losses aimed at point forecasting (L1 or L2 loss) as well as the default probabilistic forecasting heads used in their GluonTS implementations, where the loss is set to the negative Student's-t log likelihood of the forecast horizon. Due to their consistently superior performance, our final results include the probabilistic versions of PatchTST and DLinear only.
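The season-length heuristic described above can be sketched as a simple frequency-to-integer mapping. This is a minimal sketch mirroring the examples given in the text (daily → 7, hourly → 24), not the exact table implemented by GluonTS's `get_seasonality`; the values for the remaining frequencies are illustrative assumptions:

```python
# Minimal sketch of a frequency -> season-length heuristic, mirroring the
# behavior described in the text. Only the "H" and "D" entries are taken
# from the text; the other values are illustrative assumptions.
SEASON_LENGTHS = {
    "H": 24,   # hourly data: one day per seasonal cycle
    "D": 7,    # daily data: one week per seasonal cycle
    "W": 1,    # weekly data: treated as non-seasonal here
    "M": 12,   # monthly data: one year per seasonal cycle
    "Q": 4,    # quarterly data: one year per seasonal cycle
    "Y": 1,    # yearly data: no seasonal cycle
}

def season_length(freq: str) -> int:
    """Return the season length implied by a frequency string."""
    # Fall back to 1 (treat the series as non-seasonal) for unknown frequencies.
    return SEASON_LENGTHS.get(freq.upper(), 1)
```

The resulting season length is then passed to the StatsForecast models (e.g., as the seasonal period of SeasonalNaive).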
For GPT4TS, we set the context length equal to a multiple of the prediction length, with the multiplier depending on the frequency of the dataset (Table 3). We used the MASE loss function for fine-tuning in GPT4TS due to its superior performance.

For LLMTime, we experimented only with Llama-2 70B due to the prohibitively high costs of running the benchmark through OpenAI APIs. We used the same hyperparameters as in the Monash experiment in the original paper (Gruver et al., 2023), with a few notable differences. We set the context length to 512, the same as for Chronos models, instead of 500. During our experiments, we observed that the default hyperparameters may lead to a significant drop in the scale of the last prediction on some datasets. To alleviate this issue, we set the STEP_MULTIPLIER to 1.4 (instead of 1.2) and increased the prediction length by 1 (this extra prediction is removed before computing the metrics). The inference time for LLMTime (Llama-2 70B) is ≈0.8 seconds per observation on p3dn.24xlarge; at this rate, generating all predictions on the Traffic dataset (862 time series, prediction length 24, 20 samples) would take about 92 hours. Due to the very high compute cost, we skip the evaluation of LLMTime on some large datasets.

A summary of the baseline models used along with details of hyperparameter values is provided in Table 4.

Table 4: Baseline models and hyperparameter choices. Hyperparameters not specified are set to defaults in their respective implementations. C stands for context length, dh for hidden layer dimension, nL for number of layers, nH for number of heads, and η for learning rate.
| Model | Model Type | Implementation | Probabilistic | Hyperparameters |
|---------------|---------------|------------------|-----------------|-----------------|
| Naive | Local | StatsForecast | Yes | N/A |
| SeasonalNaive | Local | StatsForecast | Yes | N/A |
| AutoETS | Local | StatsForecast | Yes | C = 2500 |
| AutoARIMA | Local | StatsForecast | Yes | C = 1000 |
| AutoTheta | Local | StatsForecast | Yes | C = 2500 |
| DeepAR | Task-specific | GluonTS | Yes | dh = 40, nL = 2 |
| TFT | Task-specific | GluonTS | Yes | dh = 32, nH = 4 |
| PatchTST | Task-specific | GluonTS | Yes | Patch length: 16, Stride: 8, dh = 32, nL = 2, nH = 4 |
| DLinear | Task-specific | GluonTS | Yes | Kernel size: 25, dh = 20 |
| WaveNet | Task-specific | GluonTS | Yes | Residual channels: 24, Skip channels: 3 |
| N-BEATS | Task-specific | NeuralForecast | No | Input size multiplier: 5 |
| N-HiTS | Task-specific | NeuralForecast | No | Input size multiplier: 5 |
| GPT4TS | Task-specific | Reference | No | Fine-tuning epochs: 100, cos: 1, tmax: 10, nL = 6, η = 10−3, with pretrained GPT-2 weights |
| ForecastPFN | Pretrained | Reference | No | C = 100 (as in the released pretrained model) |
| LLMTime | Pretrained | Reference | Yes | C = 512, STEP_MULTIPLIER = 1.4 (refer to the text for details) |
| Lag-Llama | Pretrained | Reference | Yes | C = 32 |
| Moirai-1.0-R | Pretrained | Reference | Yes | C = 1024, Patch length: selected by dataset-specific validation |

## D Evaluation Metrics

In what follows, we consider a dataset of N time series $\{\mathbf{x}_i = [x_{i,1}, \ldots, x_{i,C+H}]\}_{i=1}^{N}$, each spanning both the context length C and prediction horizon H. We are interested in evaluating the accuracy of predictions for $x_{i,C+1:C+H}$, for all $i \in \{1, \ldots, N\}$, which can be either point forecasts or probabilistic ones. A point forecast for $\mathbf{x}_i$ is denoted as $\hat{\mathbf{x}}_i = [\hat{x}_{i,C+1}, \ldots, \hat{x}_{i,C+H}]$.
To evaluate point forecasts, we use the mean absolute scaled error (MASE, Hyndman & Koehler (2006)). For each series, this is simply the mean absolute error (MAE) divided by the empirical error of a seasonal naïve model:

$$\operatorname{MASE}({\hat{\mathbf{x}}}_{i},\mathbf{x}_{i})={\frac{C-S}{H}}{\frac{\sum_{t=C+1}^{C+H}|{\hat{x}}_{i,t}-x_{i,t}|}{\sum_{t=1}^{C-S}|x_{i,t}-x_{i,t+S}|}},$$

where S is a seasonality parameter. Since the denominator scales proportionally to $\mathbf{x}_i$, this error metric is independent of the scale of the data. To aggregate MASE over the entire dataset, we average over all i.

Probabilistic forecasts are given in terms of predicted quantiles $\mathbf{q}_i^{(\alpha)} = [q_{i,C+1}^{(\alpha)}, \ldots, q_{i,C+H}^{(\alpha)}]$ at levels α ∈ (0, 1). To evaluate the quality of such predicted quantiles, we use the weighted quantile loss (WQL): this is an aggregation of the quantile loss (Koenker & Hallock, 2001), which is defined, for the predicted α-quantile q of a real observation x, as

$$\mathrm{QL}_{\alpha}(q,x)={\begin{cases}\alpha(x-q),&{\mathrm{if~}}x>q,\\(1-\alpha)(q-x),&{\mathrm{otherwise.}}\end{cases}}\qquad(4)$$

To aggregate Eq. (4) over multiple series and prediction instants, we consider the weighted average

$$\mathrm{WQL}_{\alpha}={\frac{2\sum_{i,t}\mathrm{QL}_{\alpha}(q_{i,t}^{(\alpha)},x_{i,t})}{\sum_{i,t}|x_{i,t}|}}.$$

We average the above over a finite set of levels $\{\alpha_1, \ldots, \alpha_K\}$ to obtain

$$\mathrm{WQL}={\frac{1}{K}}\sum_{j=1}^{K}\mathrm{WQL}_{\alpha_{j}}.$$

In all experiments, we use quantiles at levels α ∈ {0.1, 0.2, . . . , 0.9} to compute WQL, so that K = 9. Note that, being a weighted average of the quantile loss at different levels, WQL approximates (a weighted average of) the continuous ranked probability score (CRPS), a commonly used metric for evaluating probabilistic predictions (Gneiting & Raftery, 2007; Gasthaus et al., 2019).
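The two metrics follow directly from their definitions. The following is a minimal pure-Python sketch (MASE for a single series, WQL for a single set of levels), not the evaluation code used in the experiments:

```python
def mase(forecast, series, C, H, S):
    """MASE for one series: `series` holds the C context values followed by
    the H ground-truth values; `forecast` holds the H point predictions."""
    # Numerator: mean absolute error of the forecast over the horizon.
    mae = sum(abs(f - x) for f, x in zip(forecast, series[C:C + H])) / H
    # Denominator: mean absolute error of the seasonal naive model in-context.
    naive = sum(abs(series[t] - series[t + S]) for t in range(C - S)) / (C - S)
    return mae / naive

def quantile_loss(q, x, alpha):
    # Pinball loss for a predicted alpha-quantile q of observation x (Eq. 4).
    return alpha * (x - q) if x > q else (1 - alpha) * (q - x)

def wql(quantiles, actuals, levels):
    """`quantiles` maps each level alpha to predicted quantiles aligned
    with `actuals` (observations over all series and prediction instants)."""
    denom = sum(abs(x) for x in actuals)
    per_level = [
        2 * sum(quantile_loss(q, x, a) for q, x in zip(quantiles[a], actuals)) / denom
        for a in levels
    ]
    # Average WQL_alpha over the K levels.
    return sum(per_level) / len(per_level)
```

Aggregating MASE over a dataset then amounts to averaging the per-series values, as described above.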
Unlike for MASE, where errors are scaled by a term proportional to the scale of each series, WQL aggregates absolute errors: as such, its value is affected by the relative scale of all series in the dataset. ## E Additional Results This section complements Section 5.5 by providing additional details to the experimental results. Table 5 reports the training time and cost of Chronos-T5 models on a p4d.24xlarge EC2 instance. Tables 6 and 7 report the raw WQL and MASE scores together with the aggregate relative score and average rank obtained by all models on the datasets in Benchmark I. Similarly, Tables 8 and 9 report these scores on Benchmark II. Figures 18 and 19 show the average ranks obtained by different models on Benchmark I and II, respectively. Figure 20 illustrates the zero-shot performance of Chronos-T5-Synth (Small), a model trained solely on synthetic data generated using KernelSynth, against various baselines. Table 5: Training time and the cost of training Chronos models on a single p4d.24xlarge instance. On-demand EC2 pricing of $32.773/hr was used to compute the cost (rounded to the nearest dollar). | Model | Training Time (hrs) | Cost (USD) | |--------------------|-----------------------|--------------| | Chronos-T5 (Mini) | 7.68 | 252 | | Chronos-T5 (Small) | 7.73 | 253 | | Chronos-T5 (Base) | 17.96 | 588 | | Chronos-T5 (Large) | 63.05 | 2066 | Table 6: WQL scores of different models for datasets in Benchmark I, comprising 15 datasets also included in the training data of Chronos models. Models achieving the first, **second**, and third best scores have been highlighted. Scores for Chronos and task-specific models have been averaged over 3 random seeds. The aggregated relative score was computed as described in Section 5.4. 
Pretrained Models (In Domain) Pretrained Models (Other) Task Specific Models Local Models Chronos-T5 (Large)Chronos-T5 (Base)Chronos-T5 (Small)Chronos-T5 (Mini)Chronos-GPT2Lag-LlamaMoirai-1.0-R (Base)Moirai-1.0-R (Large)PatchTSTDeepARWaveNetTFTDLinearN-HiTSN-BEATSAutoETSAutoThetaAutoARIMASeasonal NaiveNaive Electricity (15 Min.) 0.076 **0.076** 0.081 0.081 0.077 0.319 0.105 0.103 0.082 0.090 0.091 0.189 0.079 0.081 0.084 - 0.229 - 0.117 0.279 Electricity (Hourly) 0.102 0.115 0.107 **0.092** 0.121 0.104 0.122 0.116 **0.089** 0.106 0.109 0.125 0.095 0.128 0.127 0.129 0.198 0.126 0.147 0.363 Electricity (Weekly) 0.064 **0.067** 0.077 0.078 0.069 0.147 0.110 0.159 0.069 0.116 0.105 0.106 0.146 0.098 0.097 0.151 0.146 0.138 0.198 0.198 KDD Cup 2018 0.273 0.272 0.294 **0.271** 0.359 0.369 0.286 0.277 **0.252** 0.330 0.280 0.571 0.312 0.302 0.315 2.266 0.521 0.528 0.556 - London Smart Meters 0.423 0.426 0.430 0.433 0.431 0.384 0.358 0.350 **0.346** 0.405 0.374 0.365 0.369 0.358 0.357 - 0.660 - 0.541 0.731 M4 (Daily) 0.021 0.021 **0.021** 0.021 **0.020** 0.043 0.023 0.023 0.023 0.023 0.023 0.023 0.024 0.022 0.022 0.027 0.024 0.023 0.028 0.028 M4 (Hourly) **0.025** 0.028 0.026 0.025 0.038 0.111 0.025 **0.022** 0.027 0.038 0.046 0.033 0.038 0.040 0.045 0.066 0.041 - 0.048 0.166 M4 (Monthly) 0.102 0.103 0.104 0.104 0.110 0.153 0.102 0.100 0.095 0.101 0.107 0.097 0.111 0.094 **0.093** 0.100 0.098 - 0.146 0.140 M4 (Weekly) **0.038** 0.039 0.041 0.042 0.040 0.078 0.049 0.046 0.039 0.046 0.045 0.051 0.044 **0.039** 0.040 0.052 0.053 0.050 0.063 0.063 Pedestrian Counts **0.198** 0.203 0.234 0.241 **0.175** 0.262 0.274 0.259 0.257 0.229 0.248 0.261 0.247 0.254 0.241 0.619 1.818 0.340 0.319 0.814 Rideshare 0.141 0.140 0.141 0.135 0.141 0.158 0.164 0.159 0.135 **0.130** 0.184 **0.134** 0.159 0.152 0.172 0.154 0.138 0.157 0.186 - Taxi (30 Min.) 
0.267 **0.273** 0.312 0.311 0.335 0.357 0.513 0.368 0.363 0.395 0.347 0.382 0.335 0.306 0.305 - 0.456 - 0.471 0.741 Temperature-Rain **0.663** 0.670 0.686 0.704 0.669 0.717 **0.655** 0.685 0.804 0.718 0.708 0.670 0.848 0.780 0.798 1.182 1.060 0.869 1.424 - Uber TLC (Daily) 0.096 **0.097** 0.099 0.106 0.099 0.176 0.116 0.108 0.100 0.110 0.126 0.111 0.106 0.116 0.108 0.167 0.190 0.151 0.231 0.231 Uber TLC (Hourly) 0.155 **0.154** 0.156 0.161 0.161 0.176 0.176 0.167 0.167 0.176 0.168 0.179 0.234 0.166 0.161 0.462 0.433 0.311 0.299 0.625 Agg. Relative Score 0.574 **0.589** 0.610 0.607 0.624 0.937 0.688 0.667 0.601 0.676 0.689 0.734 0.697 0.656 0.664 1.076 1.083 0.876 1.000 1.433 Avg. Rank 3.400 **4.867** 6.600 6.267 6.933 14.267 10.800 9.200 6.133 9.600 10.600 10.267 10.400 8.000 8.333 16.700 14.933 16.367 17.400 18.933 Pretrained Models (In Domain) Pretrained Models (Other) Task Specific Models Local Models Chronos-T5 (Large)Chronos-T5 (Base)Chronos-T5 (Small)Chronos-T5 (Mini)Chronos-GPT2Lag-LlamaMoirai-1.0-R (Base)Moirai-1.0-R (Large)PatchTSTDeepARWaveNetTFTDLinearN-HiTSN-BEATSGPT4TSAutoETSAutoThetaAutoARIMASeasonal NaiveNaive Electricity (15 Min.) 
**0.403** 0.409 0.430 0.453 **0.406** 1.169 0.706 0.626 0.450 0.515 0.637 1.108 0.452 0.579 0.567 0.508 - 0.583 - 0.498 1.270 Electricity (Hourly) 1.457 1.602 1.500 1.370 1.653 1.573 1.713 1.665 **1.349** 1.528 1.537 1.789 **1.369** 1.880 1.848 1.487 1.774 2.151 1.715 1.840 4.159 Electricity (Weekly) **1.829** 1.888 2.007 2.044 1.879 2.979 2.848 2.759 **1.631** 2.517 1.929 2.800 2.613 1.975 2.035 1.880 3.086 3.078 3.009 3.037 3.037 KDD Cup 2018 0.663 0.673 0.688 **0.656** 0.771 0.844 0.661 0.656 **0.616** 0.779 0.671 1.022 0.695 0.674 0.731 0.737 1.014 1.138 1.023 0.994 - London Smart Meters 0.843 0.852 0.859 0.868 0.856 0.792 0.770 0.755 **0.733** 0.832 0.824 0.788 0.799 0.777 0.781 0.794 - 0.966 - 0.966 1.297 M4 (Daily) **3.099** 3.109 3.112 3.108 **3.058** 8.038 3.443 3.376 3.450 3.305 3.306 3.292 3.461 3.143 3.155 5.109 3.270 3.335 3.257 3.278 3.278 M4 (Hourly) 0.927 0.894 0.881 **0.879** 0.977 3.807 1.209 0.951 0.967 1.215 1.613 1.833 1.867 3.231 3.457 1.511 1.604 2.458 - 1.193 11.608 M4 (Monthly) 0.976 0.985 1.000 1.006 1.051 2.090 1.033 1.005 **0.962** 1.040 1.101 1.009 1.022 0.994 **0.942** 0.979 0.970 0.966 - 1.260 1.205 M4 (Weekly) 2.201 2.221 2.270 2.309 2.377 5.658 2.466 2.391 **1.996** 2.346 2.523 2.745 2.429 2.094 **1.976** 3.040 2.548 2.657 2.373 2.777 2.777 Pedestrian Counts 0.289 **0.286** 0.304 0.309 **0.277** 0.342 0.355 0.330 0.339 0.311 0.334 0.364 0.327 0.324 0.315 0.393 0.487 1.275 0.383 0.369 0.842 Rideshare 0.892 0.892 0.880 **0.857** 0.897 0.891 0.911 0.899 **0.827** 0.996 0.983 1.067 1.448 0.933 0.919 1.088 0.910 0.970 1.028 1.250 - Taxi (30 Min.) 
0.828 **0.849** 0.938 0.939 1.032 1.069 1.374 1.089 1.077 1.158 1.070 1.113 1.018 0.950 0.934 1.113 - 1.193 - 1.160 1.768 Temperature-Rain **0.982** 0.991 1.013 1.033 0.984 1.031 **0.963** 0.988 1.250 1.015 1.076 0.994 1.370 1.232 1.343 1.226 1.968 1.945 1.524 2.243 - Uber TLC (Daily) **0.819** 0.836 0.871 0.906 0.846 1.289 0.935 0.878 **0.813** 0.905 0.938 0.916 0.855 0.877 0.879 0.838 1.228 1.312 1.114 1.378 1.378 Uber TLC (Hourly) 0.727 0.729 0.727 0.743 0.740 0.711 0.727 0.717 0.696 **0.703** 0.776 0.746 0.778 0.716 0.751 0.754 1.009 1.036 0.982 0.931 1.390 Agg. Relative Score 0.726 **0.736** 0.751 0.752 0.763 1.141 0.855 0.806 0.740 0.821 0.842 0.939 0.864 0.854 0.861 0.871 0.983 1.129 0.941 1.000 1.484 Avg. Rank **4.000** 5.533 6.333 7.000 7.400 13.467 11.267 8.933 **5.133** 10.400 11.933 13.533 11.467 9.000 9.333 11.600 15.833 16.533 16.967 16.133 19.200 Pretrained Models (Zero Shot) Pretrained Models (Other) Task Specific Models Local Models Chronos-T5 (Large)Chronos-T5 (Base)Chronos-T5 (Small)Chronos-T5 (Mini)Chronos-GPT2LLMTimeLag-LlamaMoirai-1.0-R (Base)Moirai-1.0-R (Large)PatchTSTDeepARWaveNetTFTDLinearN-HiTSN-BEATSAutoETSAutoThetaAutoARIMASeasonal NaiveNaive Australian Electricity 0.068 0.079 0.077 0.069 0.082 0.069 0.097 0.054 0.046 0.037 0.087 0.052 **0.036** 0.066 **0.034** 0.038 0.125 0.055 0.073 0.084 0.159 Car Parts 1.067 1.058 1.033 1.029 1.031 - 1.011 1.652 1.626 0.998 0.967 0.941 **0.871** 1.119 0.880 **0.877** 1.309 1.337 - 1.600 - CIF 2016 0.013 0.013 0.014 0.015 0.014 0.014 0.041 0.011 0.013 0.140 0.136 0.086 **0.011** 0.033 0.032 0.039 0.039 0.027 0.017 0.015 **0.009** Covid Deaths 0.045 0.046 0.060 0.078 0.089 **0.032** 0.276 0.038 0.034 0.065 0.108 0.918 0.034 0.077 0.038 0.056 0.064 0.094 **0.029** 0.133 0.133 Dominick 0.337 0.338 0.344 0.351 0.341 - 0.443 0.361 0.345 0.345 0.364 0.327 0.320 0.435 0.313 **0.312** 0.483 0.485 - 0.453 0.453 ERCOT Load 0.016 **0.015** 0.017 0.018 **0.016** 0.053 0.033 0.019 0.021 0.017 0.032 0.024 0.023 
0.023 0.020 0.020 0.122 0.041 0.052 0.037 0.181 ETT (15 Min.) 0.064 0.065 0.060 0.065 0.068 0.088 0.080 0.074 0.070 0.054 0.069 0.113 0.075 0.071 0.051 **0.053** 0.095 0.079 0.073 0.141 0.121 ETT (Hourly) **0.074** 0.075 0.077 0.083 0.076 0.122 0.106 0.094 0.083 **0.071** 0.081 0.142 0.082 0.076 0.081 0.074 0.132 0.133 0.105 0.122 0.202 Exchange Rate 0.017 0.019 0.020 0.017 0.017 0.015 0.011 0.010 0.012 0.010 **0.009** 0.016 0.011 **0.008** 0.010 0.011 0.010 0.010 0.011 0.013 0.015 FRED-MD 0.026 0.024 0.017 **0.019** 0.029 0.041 0.389 0.050 0.046 0.042 0.043 0.058 0.112 0.069 0.057 0.061 0.055 0.057 0.056 0.122 0.064 Hospital 0.057 0.057 0.058 0.059 0.059 0.066 0.093 0.059 0.057 0.070 0.056 0.064 0.053 0.089 0.052 **0.050** 0.053 0.055 0.058 0.073 0.087 M1 (Monthly) 0.129 **0.126** 0.138 0.141 0.135 0.181 0.196 0.155 0.149 0.165 0.150 0.150 0.175 0.189 0.189 0.187 0.162 0.159 0.146 0.191 0.258 M1 (Quarterly) 0.105 0.099 0.103 0.101 0.115 0.115 0.141 0.106 0.106 **0.078** 0.089 0.094 0.122 **0.079** 0.111 0.085 0.083 0.082 0.091 0.150 0.130 M1 (Yearly) 0.176 0.183 0.169 0.179 0.208 0.144 0.293 0.195 0.207 0.165 0.139 0.168 **0.124** 0.245 0.198 0.182 0.142 **0.137** 0.160 0.209 0.209 M3 (Monthly) 0.096 0.097 0.099 0.099 0.104 0.108 0.155 0.102 0.101 0.113 0.099 0.100 0.096 0.121 0.097 0.101 0.093 **0.095** 0.102 0.149 0.158 M3 (Quarterly) 0.073 0.075 0.078 0.080 0.078 0.084 0.134 0.080 0.084 0.074 0.073 0.072 0.071 0.086 0.076 0.080 0.069 **0.070** 0.079 0.101 0.103 M3 (Yearly) 0.150 0.151 0.153 0.158 0.149 0.148 0.192 0.165 0.170 0.133 **0.122** 0.130 0.130 0.143 0.182 0.181 **0.127** 0.128 0.162 0.167 0.167 M4 (Quarterly) 0.082 0.083 0.083 0.085 0.086 - 0.132 0.081 0.080 0.074 0.080 0.079 0.080 0.085 0.073 **0.073** 0.080 0.079 0.082 0.119 0.110 M4 (Yearly) 0.133 0.135 0.135 0.139 0.147 - 0.178 0.121 0.139 **0.106** 0.111 **0.109** 0.110 0.115 - - 0.118 0.115 0.130 0.161 0.161 M5 0.588 0.587 0.591 0.596 0.600 - 0.635 0.692 0.584 0.597 0.657 0.594 **0.560** 0.687 
0.563 **0.560** 0.628 0.636 0.624 1.024 1.024 NN5 (Daily) 0.155 0.160 0.170 0.172 0.165 0.242 0.261 0.181 0.162 0.149 0.155 0.154 **0.145** 0.159 0.149 **0.147** 0.264 0.294 0.312 0.425 0.425 NN5 (Weekly) 0.089 0.090 0.090 0.089 0.094 0.092 0.111 0.092 0.092 **0.081** 0.087 0.098 **0.086** 0.090 0.098 0.114 0.088 0.090 0.090 0.123 0.123 Tourism (Monthly) 0.099 0.099 0.112 0.108 0.096 0.125 0.213 0.125 0.114 0.092 0.092 0.104 0.096 0.101 0.092 0.084 **0.090** 0.091 0.093 0.104 0.297 Tourism (Quarterly) **0.062** 0.068 0.070 0.077 0.068 0.071 0.202 0.099 0.085 0.074 0.072 0.082 0.074 0.080 0.077 0.063 0.070 **0.061** 0.098 0.119 0.166 Tourism (Yearly) 0.185 0.199 0.197 0.217 0.187 0.163 0.238 0.167 0.164 0.136 **0.127** 0.179 **0.102** 0.165 0.139 0.154 0.159 0.176 0.156 0.209 0.209 Traffic 0.254 0.261 0.261 0.262 0.252 0.287 0.256 0.225 **0.232** 0.246 0.233 0.234 0.264 0.250 0.263 0.270 0.557 0.905 - 0.362 0.643 Weather 0.139 0.140 0.143 0.149 0.145 - 0.164 0.135 **0.132** 0.143 0.147 0.152 0.151 0.174 0.143 0.144 0.214 0.217 0.185 0.217 0.217 Agg. Relative Score **0.649** 0.661 0.672 0.690 0.700 0.804 1.097 0.699 0.685 0.684 0.733 0.842 **0.639** 0.757 0.672 0.681 0.838 0.793 0.761 1.000 1.152 **Avg. Rank** 7.296 8.519 9.630 10.926 10.815 14.426 17.444 11.222 10.333 **6.815** 8.074 10.704 **6.778** 12.148 8.667 8.333 10.370 10.037 12.204 17.815 18.444 Table 7: MASE scores of different models for datasets in Benchmark I, comprising 15 datasets also included in the training data of Chronos models. Models achieving the first, **second**, and third best scores have been highlighted. Scores for Chronos and task-specific models have been averaged over 3 random seeds. The aggregated relative score was computed as described in Section 5.4. Table 8: WQL scores of different models for datasets in Benchmark II, comprising 27 datasets not seen by Chronos models during training. Models achieving the first, **second**, and third best scores have been highlighted. 
Scores for Chronos and task-specific models have been averaged over 3 random seeds. The aggregated relative score was computed as described in Section 5.4. Table 9: MASE scores of different models for datasets in Benchmark II, comprising 27 datasets not seen by Chronos models during training. Models achieving the first, **second**, and third best scores have been highlighted. Scores for Chronos and task-specific models have been averaged over 3 random seeds. The aggregated relative score was computed as described in Section 5.4. Pretrained Models (Zero Shot) Pretrained Models (Other) Task Specific Models Local Models ![34_image_0.png](34_image_0.png) ![34_image_1.png](34_image_1.png) Australian Electricity 1.306 1.333 1.403 1.212 1.370 1.186 2.158 1.635 1.241 0.981 0.871 1.473 0.997 **0.810** 1.278 **0.794** 0.828 1.161 2.391 0.897 1.393 1.253 2.362 Car Parts 0.911 0.897 0.891 0.893 0.879 - 2.657 0.816 1.734 1.538 0.803 **0.798** 0.817 **0.799** 0.879 0.803 0.803 0.891 1.185 1.229 - 1.201 - CIF 2016 0.985 0.995 1.016 1.052 1.065 1.384 3.588 2.235 1.218 1.115 1.537 1.363 1.309 1.553 1.145 1.389 1.440 0.960 **0.957** 1.002 1.006 1.289 1.263 Covid Deaths 42.762 42.641 42.689 43.525 48.028 32.143 91.515 78.456 33.010 33.019 36.465 38.203 102.457 **30.635** 40.418 31.771 31.730 75.909 38.114 45.407 **31.705** 46.912 46.912 Dominick 0.837 0.838 0.838 0.853 0.841 - 3.274 1.250 0.880 0.845 0.867 0.851 0.812 0.800 0.880 0.782 **0.782** 1.813 0.885 1.016 - 0.871 0.871 ERCOT Load 0.556 **0.541** 0.577 0.579 0.568 1.319 3.975 0.834 0.596 0.653 **0.553** 1.197 0.780 0.690 0.651 0.615 0.648 0.558 2.826 1.306 1.284 0.761 4.234 ETT (15 Min.) 
0.712 0.710 0.669 0.732 0.753 1.042 1.138 0.967 0.956 0.777 0.652 0.874 1.339 0.962 0.724 0.643 0.659 **0.574** 1.183 **0.583** 0.879 1.169 1.164 ETT (Hourly) 0.738 0.749 0.769 0.794 0.750 1.232 1.833 1.002 0.887 0.833 **0.729** 0.814 1.509 0.875 **0.695** 0.811 0.782 0.768 1.139 0.900 0.977 0.932 1.651 Exchange Rate 3.231 3.460 3.357 3.223 3.206 1.743 7.583 3.087 1.565 1.937 **1.540** 1.615 3.105 2.361 **1.459** 2.041 2.149 2.709 1.643 1.648 1.882 1.740 1.874 FRED-MD 0.592 0.584 0.576 0.537 0.569 **0.513** 2.621 2.283 0.611 0.603 0.745 0.621 0.849 0.929 0.713 0.696 0.635 0.693 0.544 0.566 **0.473** 1.101 0.622 Hospital 0.815 0.820 0.819 0.821 0.833 0.861 1.775 0.939 0.821 0.823 0.859 0.804 0.857 0.799 0.940 0.781 **0.760** 0.793 **0.760** 0.761 0.820 0.921 0.968 M1 (Monthly) **1.086** 1.119 1.163 1.172 1.164 1.415 2.172 1.875 1.271 1.243 1.208 1.122 1.266 1.326 1.369 1.333 1.236 1.198 **1.072** 1.099 1.153 1.314 1.468 M1 (Quarterly) **1.699** 1.735 1.776 1.799 1.767 1.802 9.931 3.036 1.858 1.818 1.920 1.741 1.904 2.144 1.943 2.061 2.043 1.958 1.710 **1.683** 1.770 2.078 1.952 M1 (Yearly) 4.296 4.582 4.616 4.898 4.674 4.077 23.089 7.149 4.635 4.707 4.042 **3.685** 4.727 4.316 11.565 5.568 6.212 **3.675** 4.110 3.697 3.870 4.894 4.894 M3 (Monthly) **0.853** 0.861 0.883 0.898 0.925 0.996 2.240 1.846 0.948 0.924 1.225 0.943 0.950 0.916 1.161 0.899 0.883 0.950 0.869 **0.861** 0.933 1.146 1.175 M3 (Quarterly) 1.170 1.185 1.250 1.270 1.230 1.450 10.176 2.886 1.439 1.449 1.264 1.209 1.257 1.160 1.572 1.202 1.147 1.448 1.125 **1.130** 1.419 1.425 1.464 M3 (Yearly) 3.094 3.186 3.238 3.348 3.112 3.140 18.728 5.114 3.647 3.824 2.949 2.827 3.026 2.860 3.435 3.432 3.547 3.418 2.696 **2.613** 3.165 3.172 3.172 M4 (Quarterly) 1.203 1.213 1.228 1.252 1.285 - 6.927 2.663 1.284 1.259 **1.150** 1.254 1.241 1.248 1.229 1.157 **1.129** 1.215 1.188 1.193 1.276 1.602 1.477 M4 (Yearly) 3.569 3.641 3.613 3.705 3.894 - - 5.866 3.603 4.173 **3.072** 3.178 3.221 **3.119** 3.295 - - 3.374 
3.374 3.124 3.730 3.974 3.974 M5 0.946 0.942 0.942 0.946 0.972 - 1.530 0.965 1.439 0.930 0.919 0.956 0.959 **0.909** 1.027 **0.917** 0.917 0.935 1.101 1.100 1.057 1.399 1.399 NN5 (Daily) **0.570** 0.584 0.620 0.638 0.612 0.953 1.375 0.992 0.702 0.627 0.575 0.585 0.585 **0.556** 0.604 0.571 0.571 0.720 1.039 1.073 1.214 1.292 1.292 NN5 (Weekly) 0.917 0.925 0.930 0.927 0.962 0.968 1.349 1.141 1.003 0.979 **0.877** 0.920 1.034 **0.896** 0.966 0.919 1.014 1.268 0.978 0.984 0.995 1.063 1.063 Tourism (Monthly) 1.741 1.817 1.891 1.937 1.772 2.139 4.348 3.030 2.042 1.913 1.572 1.529 1.629 1.686 1.551 1.514 **1.486** 1.573 **1.497** 1.680 1.573 1.631 3.591 Tourism (Quarterly) 1.660 1.705 1.727 1.822 1.814 1.916 5.595 3.695 2.702 2.331 1.723 **1.586** 1.769 1.729 1.690 **1.585** 1.618 1.750 1.590 1.658 1.661 1.699 3.633 Tourism (Yearly) 3.729 3.858 3.879 4.049 3.839 3.309 12.093 3.755 **3.074** 3.267 3.138 3.702 4.130 **3.047** 3.406 3.448 3.564 - 3.138 3.078 4.043 3.552 3.552 Traffic 0.798 0.823 0.833 0.847 0.810 0.973 1.909 0.829 **0.725** 0.760 0.790 **0.737** 0.797 0.880 0.821 0.927 0.968 0.787 1.685 1.794 - 1.077 2.052 Weather **0.827** 0.829 0.839 0.855 0.871 - 2.003 1.001 0.831 **0.808** 0.860 0.911 0.945 0.913 0.997 0.910 0.888 0.972 1.079 0.991 0.907 1.004 1.004 **Agg. Relative Score** 0.831 0.844 0.856 0.866 0.866 0.962 2.450 1.291 0.909 0.874 **0.810** 0.843 0.951 0.847 0.894 **0.830** 0.835 0.895 0.953 0.875 0.908 1.000 1.188 Avg. Rank **7.296** 8.704 10.259 11.741 11.222 15.630 22.241 18.741 12.593 11.741 **7.630** 8.556 13.296 9.111 12.259 9.019 9.204 11.259 9.852 8.741 12.648 16.000 18.259 ![34_image_2.png](34_image_2.png) Figure 18: Average rank of different models on Benchmark I, comprising 15 datasets also included in the training data of Chronos models. ![34_image_3.png](34_image_3.png) Figure 19: Average rank of different models on Benchmark II, comprising 27 datasets not seen by Chronos models during training. 
![35_image_0.png](35_image_0.png) ![35_image_1.png](35_image_1.png)

Figure 20: Performance of Chronos-T5-Synth (Small), a Chronos model that was only trained on synthetic data, on Benchmark I and II, against local and task-specific models. Note that unlike other Chronos models also trained on real data, both these benchmarks are zero-shot for Chronos-T5-Synth (Small).

![36_image_0.png](36_image_0.png)

Figure 21: Forecasts generated by Chronos-T5 (Base) for time series generated from AR(2) and AR(3) processes compared against forecasts generated by the ground truth AR model, a fitted AR model of the correct order, and an AutoARIMA model. Chronos-T5 (Base) generates plausible forecasts and prediction intervals in both cases. All AR models fit the simpler AR(2) process well and obtain better MSE than Chronos-T5 (Base); however, with the increased complexity in the AR(3) process, Chronos-T5 (Base) performs better than other models.
Review 1: Summary: This paper introduces Chronos, a simple yet effective way of modelling time series by casting it as a language modelling problem. To that end, the paper proposes a way of discretely tokenising real-valued time-series data through scaling and quantisation. Through this step, one can then simply apply any state-of-the-art language modelling architectures to predict the next discrete tokens in the sequence, in the exact same fashion as one trains a standard text-based language model that predicts the next word, conditional on the previous ones. The forecasts of the time series predictor can then be converted back into real-valued outputs by simply reversing the tokenisation process (i.e. de-scaling & de-quantisation) in a deterministic fashion. Nevertheless, one remaining issue with time series forecasting is the relative lack of pre-training data, especially when compared to LLM training where very large amounts of unlabelled natural language text can be easily obtained by crawling the web. To that end, this paper uses two data augmentation techniques: (i) TSMixUp (combining different, randomly-selected time series data through convex combinations) and (ii) KernelSynth (generating synthetic time series data by composing different kernel functions). Experiments on a wide range of time series forecasting problems demonstrate that the proposed T5-based Chronos model outperforms various classical and deep learning time series forecasting approaches. Furthermore, the approach yields a decent performance in a **zero-shot** setup, where the pre-trained model is applied to an unseen time series task out of the box without any fine-tuning. This demonstrates the plausibility of the pre-trained model as a **generalist** foundation model, although fine-tuning yields even better performance (as expected).
The paper then conducts extensive analyses and ablation studies to disentangle the impact of various hyper-parameters, such as the model size and context length.

Strengths and Weaknesses:

# Strengths

1. Given the impressive recent success of LLMs and foundation models, which primarily operate on discrete text tokens / words, how we can extend this success to the real-valued time series forecasting task remains an important research question, with a high potential for real-world impact. This paper proposes a simple way to do so, by simply converting the time series forecasting problem into discrete tokens. This means that any future state-of-the-art language modelling improvements can also transfer easily to the time series forecasting problem.
2. The paper demonstrates strong empirical results on various time series datasets, compared to both classical and deep learning methods. The zero-shot generalisation property is particularly appealing, demonstrating the pre-trained model's ability to generalise and transfer to new tasks.
3. The experiments are rigorous and thorough. I especially appreciate the extensive and thorough ablation studies, which disentangle the impact of different factors such as model size, context length, etc.
4. The paper is generally very clear and well-written.

# Weaknesses

1. I find some aspects of the proposed approach to be rather counter-intuitive. For example:
- My primary concern is regarding the tokenisation approach. In my understanding, the approach works by discretising the real values into fixed buckets, after doing mean scaling. However, some time series might have a much higher standard deviation than others. This means that time series with low standard deviations would have their range of values "squeezed" into a small number of bins, which makes predicting accurate values much more tricky (standard scaling, which takes into account both the mean and the standard deviation, should address this concern to some extent).
Another issue, which is already pointed out in the paper, is that the model cannot predict values that are outside of the range of the bins, which may happen with a new, unseen time series. The root cause of both of these issues is that the model is using the same tokenisation scheme for all time series, which encourages generality, although it may come at the expense of the ability to adapt to unseen time series with very different scale & standard deviation properties than those seen at training time.
- Based on Eq. (2), it seems that the model is not trained on the context tokens (i.e. given C + H tokens, where C is the number of context tokens and H is the number of tokens to predict, the model is only trained on the H prediction tokens). Training the model to also predict the context tokens, as we do in language modelling (i.e. all tokens are used for training in language modelling, from the first to the last token), may alleviate the data scarcity issue further.
- It seems like there is a very clear ordering of values here, so not all mistakes are created equal (e.g. predicting a slightly wrong value should have a much less severe cost than predicting a value that is way off). The paper shows that the model can already learn this ordering to some extent, but intuitively, taking into account this "cost weighting" should improve the performance further. Even simple approaches that penalise the different mistakes differently should be able to take this into account.
2. It seems that the paper does not compare against recent state-space models, which obtained strong results on time series modelling, such as the S4 model (Efficiently Modeling Long Sequences with Structured State Spaces, Albert Gu et al., 2021).
3. There are some suggestions (particularly relating to the presentation, etc.) and questions that can be clarified in order to make the paper stronger. See the "requested changes" section below.

Requested Changes:

1.
**Critical**: More discussion and clarification around the tokenisation strategy, and how it handles time series with very different standard deviations (otherwise the time series with low standard deviations would have their values squeezed into a small number of buckets). 2. **Recommended**: Addressing the questions below, which would further improve the clarity of the paper and resolve any doubts. - For Eq. (2), would it help to train the model on all tokens, including the context tokens (in addition to the H tokens that need to be predicted)? This can at least be used for the autoregressive models, where standard LMs also use every token for training. For the T5-style encoder-decoder models, it makes sense to only train on the H tokens, and use the C context tokens with the bidirectional attention / encoder (as is done currently). - How easy would it be to incorporate a simple cost function that weighs different kinds of mistakes? This would penalise mistakes that are "further off", e.g. predicting the value "1" when the correct answer is "1,000". - Regarding KernelSynth (Section 4.2), how big is the kernel bank? - To confirm, are all deep learning approaches listed in the paper trained on the exact same pre-training dataset? - Currently, the proposed time series foundation model only operates on time series data. Would adding natural language information help? E.g. one could imagine giving information in natural language about what kind of time series it is, whether there is a seasonality aspect, where the data came from, what time period it encompasses, etc. Naturally, this would require the model to represent and model both time series data and natural language text, but given the success of LLMs, this could probably work better. - How does the method compare with state space models like S4 (Albert Gu et al., 2021)? These kinds of models have shown great promise for time series modelling and modelling long-range dependencies in linear time. 3.
**Recommended**: Incorporate the suggestions below. - For the results (e.g. Figures 4 - 6), it would be nice to include information in the caption that **lower is better**. This would help readers who are not familiar with the evaluation metric to better interpret the results. This holds for Figures 10 - 11 as well. - It seems like not all the models in Figure 4, etc. are trained on the exact same dataset (due to these models being pre-trained off-the-shelf models that are trained on a different pre-training dataset). It would be nice to mark these models as such, for example by putting an asterisk next to the model name when listed in Figure 4, etc. This would let the readers know that these models are not directly comparable. - On page 12, "Fine tuning" section, it seems like the fine-tuning is done with an initial learning rate and a linear decay. In my experience, using a "triangular" learning rate, where the learning rate is warmed up from 0 to the peak value (here 0.001), and then decayed linearly after that, would work better. Broader Impact Concerns: No broader impact concerns from my side. ================================================== Review 2: Summary: This paper presents a framework for pretrained probabilistic time series models by directly adapting language model architectures for time series forecasting. The proposed Chronos is pre-trained on top of the T5 family using public and synthetic datasets, with quantization techniques used to obtain tokens. Chronos models achieve better performance under in-domain scenarios and on-par zero-shot performance compared with baselines.
Strengths and Weaknesses: Strengths: (i) The adaptation of language models for time series modeling via scaling and quantization and using off-the-shelf language models with minimal modifications provides a practical solution to integrate with future advancements in LLMs (ii) The proposed data augmentation techniques and synthetic data generation are simple yet effective, providing solutions for addressing the limited availability of time series data (iii) The authors provide comprehensive experimental results across 42 datasets and analysis to support the design choices of Chronos Weaknesses: See more details in comments below (i) More comprehensive comparison with baselines (ii) More details on finetuning results (iii) Model scalability (iv) Data contamination problem Comment: 1. Although the authors compared Chronos with more than ten baseline methods, they did not include comparisons with strong baselines such as ensembles. These models often require much less inference time, and it remains unclear how their accuracy compares with that of Chronos. A more thorough comparison with ensembling baselines would provide a clearer understanding of Chronos' relative performance. 2. The paper lacks detailed information on fine-tuning techniques and results. The comparison is only limited to Chronos (small), and it would be beneficial to see the performance limits of the model under fine-tuning scenarios. 3. While the authors conducted experiments to assess the impact of the portion of synthetic data, the paper does not address how the model’s accuracy changes with respect to the total amount of training data. It would be valuable to include an analysis of model scalability, illustrating how Chronos performs as the training dataset size varies. 4. The authors used 55 public datasets, splitting them into different components for pretraining and evaluation. However, there is a concern regarding potential data contamination. 
It is crucial to highlight whether data from the same domain (similar time series) is used in both pretraining and zero-shot scenarios, as this could affect the validity of the zero-shot performance claims. Requested Changes: Address comments 1-4 above. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper introduces a foundation model for time series forecasting. They introduce Chronos, an architecture which leverages the existing T5 architecture from the NLP literature. To bridge the gap between time series data and language, they discretize time series data into discrete tokens. They train the model on a collection of diverse time series data, and show that it has strong capabilities for both in-domain and zero-shot forecasting. They also introduce KernelSynth, a synthetic data generation process for time series data, and show that incorporating synthetic data improves overall performance of the model. Finally they perform extensive empirical analysis and ablations to provide greater understanding of the proposed method. Strengths and Weaknesses: ### Strengths 1. The paper presents a strong foundation model, and is one of the first papers to concurrently show the possibility and benefits of training on a large corpus of time series data. 2. The paper shows that discretization is an effective approach for modeling time series data. The benefit of such an approach is that it removes the complexities of dealing with real values, and probabilistic forecasting is made simpler as the categorical distribution can easily approximate any complex distribution. 3. The paper presents extensive details of model training and datasets. It also contains extensive empirical analysis to understand the strengths and weaknesses of the proposed method. ### Weaknesses 1. The method can only handle univariate time series. 2. Prediction length is limited to a maximum of 64.
Autoregressive models suffer from error accumulation - it is unclear whether such a method extends to longer horizons. 3. The paper writing leans heavily into the idea of being motivated by LLMs. It ignores extensive related work on ordinal regression and regression via classification (RvC). Per my understanding, the main difference between this work and the aforementioned is that in this paper, inputs are also discretized, whereas in ordinal regression and RvC, it is mainly the outputs that are discretized. 4. The writing related to quantization can be significantly improved. Currently it is quite confusing to understand the exact approach. [c_1,c_B], [l=-15, r=15] and [-15s, 15s] have been used to describe the prediction range. This is quite confusing. All the details of the exact approach used in training should be described in one place, using a unified notation. The vocabulary size of the pre-trained models should also appear in Section 5.2. Requested Changes: ### Changes critical to securing recommendation for acceptance 1. TSMixup has some issues that should be clarified: a. Why is mean scaling performed before mixing? Is there any data leakage given that mean scaling is performed on the whole series? b. What does it mean for $\alpha=1.5$? Is this a vector of length $k$ with all elements as 1.5? 2. More analysis or at least elaboration about the different scaling methods should be given. Why was mean scaling chosen over min-max, standardization, etc.? It seems that standardization would solve many of the problems related to loss of precision presented in the analysis section, why was mean scaling chosen over standardization? ### Changes that would strengthen the work 1. Please add to the related work section the extensive literature on ordinal regression and regression via classification. 2. Please improve the writing regarding quantization. Broader Impact Concerns: No concerns. ==================================================
# Cox-Hawkes: Doubly Stochastic Spatiotemporal Poisson Processes

Xenia Miscouridou∗ *x.miscouridou@imperial.ac.uk* I-X and Department of Mathematics, Imperial College London

Samir Bhatt *s.bhatt@imperial.ac.uk* Department of Public Health, University of Copenhagen, School of Public Health, Imperial College London

George Mohler *gmohler@iupui.edu* Department of Computer Science, Boston College

Seth Flaxman† *seth.flaxman@cs.ox.ac.uk* Department of Computer Science, University of Oxford

Swapnil Mishra† *swapnil.mishra@nus.edu.sg* Saw Swee Hock School of Public Health and Institute of Data Science, National University of Singapore and National University Health System

∗ *Corresponding author* † *Equal Contribution*

Reviewed on OpenReview: *https://openreview.net/forum?id=xzCDD9i4IZ*

## Abstract

Hawkes processes are point process models that have been used to capture self-excitatory behaviour in social interactions, neural activity, earthquakes and viral epidemics. They can model the occurrence of the times and locations of events. We develop a new class of spatiotemporal Hawkes processes that can capture both triggering and clustering behaviour and we provide an efficient method for performing inference. We use a log-Gaussian Cox process (LGCP) as prior for the background rate of the Hawkes process, which gives arbitrary flexibility to capture a wide range of underlying background effects (for infectious diseases these are called endemic effects). The Hawkes process and LGCP are computationally expensive: the former has a likelihood with quadratic complexity in the number of observations and the latter involves inversion of the precision matrix, which is cubic in the number of observations. We propose a novel approach to perform MCMC sampling for our Hawkes process with LGCP background, using pre-trained Gaussian process generators which provide direct and cheap access to samples during inference.
We show the efficacy and flexibility of our approach in experiments on simulated data and use our methods to uncover the trends in a dataset of reported crimes in the US.

Keywords: Gaussian process, self-excitation, clustering, Bayesian inference

## 1 Introduction

Hawkes processes are a class of point processes that can model self or mutual excitation between events, in which the occurrence of one event triggers additional events, for example: a violent event in one geographical area on a given day encourages another violent event in an area nearby the next day. A unique feature of Hawkes processes is their ability to model exogenous and endogenous "causes" of events. An exogenous cause happens by the external addition of an event, while endogenous events are self-excited from previous events by a triggering kernel. An example of the difference between these two mechanisms is in disease transmission - an exogenous event could be a zoonosis event such as the transmission of Influenza from birds, while endogenous events are subsequent human-to-human transmission. Due to their flexibility and mathematical tractability, Hawkes processes have been used extensively in the literature in a series of applications. They have modelled, among others, neural activity (Linderman et al., 2014), earthquakes (Ogata, 1988), violence (Loeffler & Flaxman, 2018; Holbrook et al., 2021) and social interactions (Miscouridou et al., 2018). The majority of research on Hawkes processes focuses on purely temporal settings, where events occur and are subsequently triggered only in time. However, many practical problems require the inclusion of a spatial dimension. This inclusion is motivated by several factors: first, natural phenomena that self-excite tend to do so both spatially and temporally, e.g. infectious diseases, crime or diffusion over a network. Second, natural processes tend to cluster closely in space and time (Tobler, 1970).
Third, in parametric formulations residual variation persists and this is often structured in both space and time (Diggle & Ribeiro, 2007). A wide body of research exists in modelling spatial phenomena ranging from Kriging (Matheron, 1962) to model-based estimates (Diggle & Ribeiro, 2007) using Gaussian processes. In the more general Gaussian process, which provides a prior function class, spatial phenomena are modelled through a mean function and a covariance function that allows control over the degree of clustering as well as the smoothness of the underlying functions. Specifically, for applications to spatial point patterns, an elegant formulation using log-Gaussian Cox processes (LGCPs) (Møller et al., 1998) is commonly used (Diggle et al., 2013). LGCPs can capture complex spatial structure but at a fundamental level are unequipped with a mechanism to model self-excitation. When examining the processes' endogenous and exogenous drivers, the lack of a self-exciting mechanism can potentially lead to spurious scientific conclusions even if prediction accuracy is high. For example, appealing again to the Influenza example, only modelling the distribution of cases using an LGCP will ignore the complex interplay of zoonosis events and secondary transmission events, both of which require different policy actions. The inclusion of space has a long history via the Hawkes process triggering mechanism - first modelled using the Epidemic Type Aftershock Sequence (ETAS) kernel (Ogata, 1988) but many subsequent approaches now exist. However, to our knowledge, very few approaches consider spatial and temporal events in *both* the exogenous and endogenous Hawkes process mechanisms - that is, where events can occur in space and time, and then these events trigger new events also in space and time.
Many mechanisms have been proposed for space-time triggering kernels (Reinhart, 2018), but it is neither clear nor straightforward how to also allow for exogenous space-time events simultaneously. In the vast majority of previous applications, exogenous events occur at a constant rate in both space and time. Non-constant approaches exist but have usually been case-specific. For example, the use of a periodic function has been effective in modelling seasonal malaria (Unwin et al., 2021). Some studies do provide nonparametric approaches for the background rate: Lewis & Mohler (2011) provide an estimation procedure for the background and kernel of the Hawkes process when no parametric form is assumed for either of the two. Donnet et al. (2020) and Sulem et al. (2021) use nonparametric estimation on the Hawkes kernel whereas Miscouridou et al. (2018) use a nonparametric prior on the background based on completely random measures to construct Hawkes processes that build directed graphs. Other recent approaches use neural networks to estimate the rate, but the majority of these approaches are in purely temporal settings, such as Omi et al. (2019). There are a few that consider marked point processes, such as Du et al. (2016). However, a marked temporal point process which can use marks as locations would not capture the spatial correlations we are interested in. More recently, Zhou et al. (2022) and Chen et al. (2021) provided a way to model spatiotemporal settings with neural networks. However, both of them still lack the ability to capture a more flexible background where the background intensity is not just a constant but by itself produces clustering in time and space (we would like to emphasise here that this type of clustering is different from the one emerging from the self-exciting nature of the process). Beyond neural network approaches, there exist non-deep models that deal with non-linear intensities. An example is Zhou et al.
(2020) who proposed a sigmoid Gaussian Hawkes process with baseline intensity and triggering kernel drawn from a Gaussian process prior, passed through a sigmoid link function to guarantee non-negativity. Similarly, Apostolopoulou et al. (2019); Sulem et al. (2021); Malem-Shinitski et al. (2021) propose point process models with a non-linear component that allows both excitatory and inhibitory relationships in continuous time. Here we propose a novel space-time approach that combines Hawkes processes (Hawkes, 1971) with log-Gaussian Cox processes (Møller et al., 1998; Diggle et al., 2013). This synthesis allows us, for the first time, to combine self-excitation with an exogenous background intensity that is stochastic and able to vary in both space and time. We provide a suite of new methods for simulation and computationally tractable inference. Our methods leverage modern computational techniques that are scalable and can efficiently learn complex spatiotemporal data. We apply our approach to both simulated and real data. Our novel addition of an LGCP prior in both space and time is accompanied by new computational challenges: a Hawkes process is quadratic in complexity due to a double summation in the likelihood, and LGCPs incur cubic complexity from matrix inversions. To ensure our approach is scalable and still competitive with standard Hawkes processes we utilise a recently developed Gaussian process approximation (Mishra et al., 2022; Semenova et al., 2022) that obviates the need for repeated matrix inversions. Our work represents a step towards a more general, scalable point process framework that encodes more flexible and plausible mechanisms to represent natural and physical phenomena.

## Our Contributions

A summary of the contributions of our work is: (i) We provide a novel model formulation for a highly flexible self-exciting process that can capture endogenous and exogenous events in both space and time.
Our utilisation of LGCPs for the exogenous background rate is extremely flexible and follows from the current state-of-the-art in spatial statistics (Diggle et al., 2013). (ii) In contrast to previous work such as Loeffler & Flaxman (2018), our framework admits a generative model that can produce stochastic realisations at an arbitrary set of locations. We provide a novel algorithm to sample from this generative process. (iii) We offer an efficient Bayesian inference approach that ensures our more flexible model is still as scalable as standard Hawkes processes and straightforward to implement computationally. (iv) Our framework is directly applicable to numerous spatiotemporal problems where there are both endogenous and exogenous causes, e.g. for natural or social phenomena such as crime, diseases, environment, or human behaviour.

## 2 Related Methods

As mentioned before, modelling space through Hawkes processes was first used with the Epidemic Type Aftershock Sequence (ETAS) kernel (Ogata, 1988) and other approaches followed, some of which are reviewed in Reinhart (2018). For modelling spatial point patterns without self-excitation, log-Gaussian Cox processes (LGCPs) (Møller et al., 1998) provide an elegant approach, as explained in Diggle et al. (2013). Reinhart (2018) provides an overview of spatiotemporal Hawkes processes explaining various options for the form of the intensity, the kernels and the corresponding simulating algorithm. However, the case of an LGCP background is not discussed in the review. Our approach is the first to use an LGCP to capture the background underlying effects (these are called endemic effects in infectious disease modelling but here we will use this term broadly for other applications too) and can model the exact spatial and time locations. Loeffler & Flaxman (2018) aim to understand whether gun violence in Chicago is contagious or merely clusters in space and time.
To this end, they use a spatiotemporal Hawkes model and a space-time test to distinguish between the two. The model uses a kernel density estimator for the background (endemic) effects and a kernel for the epidemic events that is separable in space and time. Their model has a different construction as it does not admit a generative procedure, since the background rate is estimated using kernel density estimators. Similarly to Loeffler & Flaxman (2018), Holbrook et al. (2021) build a scalable inference algorithm for parametric spatiotemporal self-exciting processes. Their proposed model is the one of Loeffler & Flaxman (2018), which is based on a Gaussian kernel smoother for the background. The main contribution is to overcome the bottleneck of the quadratic computational complexity of such a point process. The authors develop a high-performance computing statistical framework to do Bayesian analysis with Metropolis-Hastings using contemporary hardware. They apply it to a gunfire dataset which is larger and more fine-grained than the one in Loeffler & Flaxman (2018). The combination of a Hawkes process with an LGCP is found in Linderman & Adams (2015), where the authors propose a purely temporal multivariate Hawkes process with an LGCP background, with the goal of inferring a latent network structure given observed sequences of events. This approach is based on Linderman & Adams (2014) but in discrete time and with an improved inference scheme based on mini-batches. However, both of these have a different scope to our work and deal only with temporal data. Finally, Mohler (2013) develops a purely temporal Hawkes process model with LGCP background for count (aggregated) events. Mohler (2013) builds a Metropolis-adjusted Langevin algorithm for estimation and uses it to disentangle the sources of clustering in crime and security data. We are instead interested in modelling and predicting exact event times and locations.
A few neural network models (Du et al., 2016; Mei & Eisner, 2017; Omi et al., 2019; Zhang et al., 2022) have been suggested, but they are only temporal in nature. A few of them provide a way to model a spatial domain, by treating space locations as discrete marks. However, using marks does not allow us to model spatial correlations, as observed in many settings. Okawa et al. (2019) extend neural networks to spatial settings but lack the ability to predict the next event in space and time. The work most similar to ours is that of Chen et al. (2021) and Zhou et al. (2022), who apply neural network inspired point process models to spatiotemporal data. However, they still lack the ability to account for anything more than a constant background intensity.

## 3 Model

## 3.1 Point Process Intensity

A Hawkes process is an inhomogeneous Poisson point process defined in terms of a counting measure and an intensity function or rate. For a generic spatiotemporal inhomogeneous Poisson point process on the domain $\mathcal{X} \times [0, T)$, for $\mathcal{X} \subset \mathbb{R}^d$, we denote the counting measure of the process by $N$ and the conditional intensity by $\lambda$. The definition of a generic inhomogeneous point process intensity in space and time is as below. For $\mathbf{s} \in \mathcal{X} \subset \mathbb{R}^d$ (generally $d$ here represents Euclidean or Cartesian coordinates) and $t \in [0, T)$

$$\lambda(t,\mathbf{s})=\operatorname*{lim}_{\Delta t,\,\Delta\mathbf{s}\to0}{\frac{\mathbb{E}[N[(t,t+\Delta t)\times B(\mathbf{s},\Delta\mathbf{s})]\,|\,\mathcal{H}_{t}]}{\Delta t\times|B(\mathbf{s},\Delta\mathbf{s})|}}\tag{1}$$

where $\mathcal{H}_t$ denotes the history of all events of the process up to time $t$, $N(A)$ is the counting measure of events over the set $A \subset \mathcal{X} \times [0, T)$ and $|B(\mathbf{s}, \Delta\mathbf{s})|$ is the Lebesgue measure of the ball $B(\mathbf{s}, \Delta\mathbf{s})$ with radius $\Delta\mathbf{s} > 0$. The intensity $\lambda$ has to be non-negative, i.e. $\lambda \geq 0$.
Note that the spatial locations can be univariate, referring for example to regions or countries, or bivariate such as geographical coordinates of longitude and latitude, or even multivariate depending on the context.

## 3.2 Hawkes Process Intensity

Hawkes processes were originally proposed by Hawkes (1971) as temporal point processes. The intensity is conditional on the history of the process such that the current rate of events depends on previous events. We focus on self-exciting Hawkes processes, in which historic events encourage the appearance of future events. We develop spatiotemporal self-exciting processes which can predict the rate of events happening at specific locations and times. The conditional intensity defined as in equation 1 admits the form

$$\lambda(t,\mathbf{s}|{\mathcal{H}}_{t})=\mu(t,\mathbf{s})+\sum_{i:t_{i}<t}g\left(t-t_{i},\mathbf{s}-\mathbf{s}_{i}\right),\tag{2}$$

where $(t_1, t_2, \ldots, t_n)$ denotes the ordered sequence of the times of the observed events and $(\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_n)$ their corresponding spatial locations. Events arise either from the background rate $\mu(t, \mathbf{s})$ (exogenous or non-excitation effects) or from the triggering function/kernel $g$ (endogenous or self-excitation effects). $\mu$ is non-negative to ensure that the initial intensity is non-negative and we take $g$ non-negative as we consider excitation effects and do not deal with inhibition. For the scope of our work, we are interested in excitation; however, for other applications such as neural connectivity patterns where inhibition is needed, one can read for example Cai et al. (2022). $g$ can be parametric or it can be estimated using full nonparametric assumptions, as done for example in Donnet et al. (2020); Sulem et al. (2021). Similarly, it can take the form of separable (additive or multiplicative) or non-separable kernels in space and time.
There exists a lot of work covering all these cases for purely temporal processes but not in spatiotemporal settings. To give some background: in purely temporal cases there exist guarantees on the estimation and, under certain conditions, there are consistency as well as identifiability results. However, once we add the spatial component, the results do not necessarily extend. Therefore, we consider here the simple case of a separable form of a product of an exponential kernel in time and a Gaussian kernel in space. For multivariate linear purely temporal Hawkes processes with constant background, it is a known result that one can recover the parameters, i.e. the process is identifiable, and one can also prove consistency results: Donnet et al. (2020) prove this and give posterior concentration rates. Some results also exist for non-linear Hawkes processes with constant backgrounds: Brémaud & Massoulié (1996) provide results on the uniqueness of the stationary solution but they do not study estimation of the parameters. Similarly, Sulem et al. (2021) study general non-linear and nonparametric Hawkes processes and provide conditions on the Bayesian methods to estimate the parameters with a (presumably optimal) concentration rate. Other approaches with time-varying backgrounds exist (e.g. Unwin et al. (2021); Zhou et al. (2020)) but there are no theoretical results that apply directly in that case (linear or non-linear). We think it is an interesting direction to study the theoretical properties of spatiotemporal Hawkes processes and we would encourage research in that direction. We would like to note though that in any Hawkes process with a temporally or spatiotemporally varying background, stationarity is no longer relevant as the background is changing in time and therefore the expectation of the intensity cannot be constant.
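As a concrete illustration of equation 2, the conditional intensity at a query point can be evaluated by summing kernel contributions from all past events. The following is a minimal numpy sketch assuming a constant background $\mu$ and a separable kernel that is exponential in time and an isotropic Gaussian density in space; all parameter values are illustrative and not the paper's.

```python
import numpy as np

def hawkes_intensity(t, s, event_times, event_locs,
                     mu=0.5, alpha=0.6, beta=1.0, sigma=0.2):
    """Conditional intensity of equation 2 at a query point (t, s):
    constant background mu plus kernel contributions g from all past
    events, with g exponential in time and Gaussian in space."""
    past = event_times < t                     # only events before t contribute
    dt = t - event_times[past]                 # elapsed times since past events
    ds = s - event_locs[past]                  # spatial offsets, shape (n, 2)
    g_time = alpha * beta * np.exp(-beta * dt)
    g_space = np.exp(-0.5 * np.sum(ds**2, axis=1) / sigma**2) / (2.0 * np.pi * sigma**2)
    return mu + float(np.sum(g_time * g_space))

times = np.array([0.2, 0.5, 0.9])
locs = np.array([[0.0, 0.0], [0.1, 0.1], [0.05, 0.0]])
lam = hawkes_intensity(t=1.0, s=np.array([0.0, 0.0]),
                       event_times=times, event_locs=locs)
# With no past events the intensity reduces to the background rate mu.
lam_early = hawkes_intensity(t=0.1, s=np.array([0.0, 0.0]),
                             event_times=times, event_locs=locs)
```

Querying near recent, nearby events yields an intensity well above the background, which is the self-excitation effect the model is designed to capture.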
The process with intensity defined in equation 2 can be treated as a Poisson cluster process, with mean number of offspring given by $b=\int_{\mathcal{X}}\int_{0}^{\infty}g(dt,d\mathbf{s})$. To ensure that the cluster sizes are almost surely finite, we require that $b \in (0, 1)$ as each generation of offspring follows a geometric progression, with expected total cluster size of $\frac{1}{1-b}$. For $b = 0$ we have a Cox process whereas for $b \geq 1$ the process explodes and we would have an infinite number of events in finite time. To see how explosion emerges, we refer the reader to (Grimmett & Stirzaker, 2001, Chapter 5), which gives the calculations on the expected number of descendants of one atom. More on the implications of the values of $b$ can be found in Asmussen & Glynn (2003). The triggering function $g$, centered at the triggering event, is the intensity function for the offspring process. Properly normalised, it induces a probability distribution for the locations and times of the offspring events. The cluster process representation of the Hawkes process (Hawkes & Oakes, 1974) will prove crucial to the efficient simulation of self-exciting processes which we give in section 4.1. We give below the triggering kernel that admits a separable form of a product of an exponential kernel in time and a Gaussian kernel in space. Both of these choices are relevant for the applications we consider in the paper as we know from previous work (Loeffler & Flaxman, 2018; Holbrook et al., 2021) that the decay and variation in crime data can be well explained by an exponential and a Gaussian kernel respectively. For $t > 0$ and $\mathbf{s} \in \mathcal{X} \subset \mathbb{R}^d$ the self-exciting part of the rate is given by

$$g(t,{\bf s})=\alpha\beta\exp\left(-\beta t\right)\frac{1}{\sqrt{2\pi|\Sigma|}}\exp\left(-{\bf s}^{T}\Sigma^{-1}{\bf s}\right),\tag{3}$$

where $\alpha > 0$, $\beta > 0$ and $\Sigma$ is a positive semi-definite matrix.
For the temporal part we use the widely used exponential kernel, originally proposed by Hawkes (1971), giving exponential decay which is suitable for the applications we are interested in. Note that an exponential kernel is not always a reasonable choice; for instance in infectious diseases one would prefer the Rayleigh kernel (e.g. see Unwin et al. (2021)). For the spatial part, we use a Gaussian kernel which is suitable for modelling spatial locations, especially for social violence settings, as first proposed and used by other authors in the literature. Specifically, Loeffler & Flaxman (2018) and Holbrook et al. (2021) analyse the public policy and crime implications of a larger gunshot dataset which includes the data used in this paper and choose these forms for the kernels. As mentioned above, we consider here a separable form of $g$. Note that non-separable kernel approaches exist in the literature, such as Jun & Cook (2022), in which temporal patterns differ according to location. However, one cannot naively use those without properly assessing identifiability concerns. The current construction can be extended to cover those, however this would be beyond the scope of the novelty of the current method and the scope of the applications we consider here. The other part of the intensity is $\mu(t, \mathbf{s})$, which is the background rate of the process. It is a nonnegative function with initial nonzero value that captures the underlying patterns in space and time that encourage the clustering of events in those time and space locations. It often takes the form of a constant for simplicity, or a parametric form such as periodic as assumed in Unwin et al. (2021), or can even have a nonparametric prior constructed on random measures as in Miscouridou et al. (2018). As explained in more detail below, we assume a log-Gaussian process prior on $\mu(t, \mathbf{s})$, which to our knowledge has not been used before in the literature of spatiotemporal Hawkes processes.
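The cluster process representation suggests a generic branching simulation scheme: draw background (exogenous) events first, then recursively draw each event's offspring. The sketch below is illustrative only; it assumes a constant background rate on the unit square and the separable kernel of equation 3 with isotropic $\Sigma = \sigma^2 I$, whereas the paper's actual sampler (section 4.1) uses the LGCP background.

```python
import numpy as np

rng = np.random.default_rng(1)
T, alpha, beta, sigma = 50.0, 0.5, 2.0, 0.05  # alpha < 1 keeps clusters a.s. finite

# Generation 0: background events from a homogeneous Poisson process
# with rate mu on [0, T) x the unit square.
mu = 0.4
n0 = rng.poisson(mu * T)
events = [(t, x, y) for t, x, y in zip(rng.uniform(0, T, n0),
                                       rng.uniform(0, 1, n0),
                                       rng.uniform(0, 1, n0))]

# Offspring generations: each event spawns Poisson(alpha) children,
# displaced by Exp(beta) in time and N(0, sigma^2 I) in space.
parents = list(events)
while parents:
    children = []
    for (t, x, y) in parents:
        for _ in range(rng.poisson(alpha)):
            tc = t + rng.exponential(1.0 / beta)
            xc = x + sigma * rng.standard_normal()
            yc = y + sigma * rng.standard_normal()
            if tc < T:                      # discard offspring beyond the horizon
                children.append((tc, xc, yc))
    events.extend(children)
    parents = children

events.sort()  # order the merged catalogue by event time
```

Because the branching ratio here is $\alpha < 1$, each cluster terminates almost surely, and the expected total number of events is roughly $\mu T / (1 - \alpha)$.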
## 3.3 Latent Log Gaussian Process For Background Rate

We use a latent Gaussian process (GP) to determine the background rate of events in time t ∈ R and space s ∈ R^d. This means that the background rate takes the form
$$\mu(t,\mathbf{s})=\exp\left(f\left(t,\mathbf{s}\right)\right)\tag{4}$$
where f(t, s) is a function realisation from a Gaussian process prior in space and time. Formally, a Gaussian process is a collection of random variables such that any finite collection of them is Gaussian distributed. GPs are a class of Bayesian nonparametric models that define a prior over functions, which in our case are functions over time and space. Similarly to a probability distribution that describes random variables which are scalars or vectors (for multivariate distributions), a Gaussian process is a distribution over functions and belongs to the family of stochastic processes. GPs are a powerful tool in machine learning for learning complex functions, with applications in regression and classification problems. We refer the reader to (Rasmussen & Williams, 2005, Chapter 2) for details on Gaussian processes and their properties. A Gaussian process on R^D, for any D > 0, is completely specified by its mean function m(·) and covariance function k(·, ·). We will denote a draw from a Gaussian process as
$$f(\cdot)\sim{\mathcal{G P}}\left(m(\cdot),k(\cdot,\cdot)\right).$$
The Gaussian process is centered around its mean function, with the correlation structure (how similar two points are) of the residuals specified via the covariance kernel. Properties of the underlying function space such as smoothness, differentiability and periodicity can be controlled by the choice of kernel.
One of the most popular choices of covariance kernel, and the one we choose to introduce the model with, is the Gaussian kernel (also commonly called the squared exponential kernel), defined for u, u′ ∈ R^D by the covariance function
$$Cov\left(f\left(\mathbf{u}\right),f\left(\mathbf{u}^{\prime}\right)\right)=k\left(\mathbf{u},\mathbf{u}^{\prime}\right)=\omega^{2}\exp\left(-{\frac{1}{2l^{2}}}|\mathbf{u}-\mathbf{u}^{\prime}|^{2}\right)\tag{5}$$
where |u| denotes the Euclidean norm, i.e. $|\mathbf{u}| = \sqrt{\sum_i u_i^2}$ if u is a vector (D > 1, e.g. the spatial locations) and the absolute value of u if u is a scalar (D = 1, e.g. timestamps). ω² > 0 defines the kernel's variance scale and l > 0 is a length scale parameter that specifies how nearsighted the correlation between pairs of events is. These hyperparameters can be varied and are hence also known as free parameters. The kernel and mean of the GP together fully specify the prior distribution over functions. We will consider an additive separable kernel with a bivariate spatial dimension s = (x, y) and univariate temporal dimension t. Note that one could naively consider a joint $f_{ts}$ with no assumption of additivity (or any other form of structure) at all. However, this would not be advisable, as it would then be impossible to guarantee that we can recover the underlying latent functions. When there is not enough structure in the form of the background, it is much more difficult to study the identifiability of the latent functions $f_t$ and $f_{\mathbf{s}}$. In non-identifiable cases the prior dominates the estimation, and the estimated $f_t$, $f_{\mathbf{s}}$ will be heavily influenced by the prior and not by the data. We consider here the additive structure as a minimal type of structure to assume on the background latent process, which is still very generic and able to capture arbitrary background trends. In order to ensure a nonnegative background we exponentiate the additive kernel.
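As a minimal illustration of how a draw from a GP prior with this kernel is realised in practice on a finite grid (grid, seed, jitter, and hyperparameter values below are illustrative):

```python
import numpy as np

def sq_exp_kernel(u, v, omega2, length):
    """Squared exponential covariance of equation 5 for 1-D inputs."""
    d2 = (u[:, None] - v[None, :]) ** 2
    return omega2 * np.exp(-0.5 * d2 / length**2)

# Draw f ~ GP(0, k) on a grid of time points via a multivariate normal;
# the small diagonal jitter keeps the covariance numerically positive definite.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 100)
K = sq_exp_kernel(t, t, omega2=1.0, length=10.0)
f = rng.multivariate_normal(np.zeros_like(t), K + 1e-8 * np.eye(len(t)))
```

A larger `length` makes nearby function values more strongly correlated, and `omega2` scales the overall variance of the draw.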
From this kernel specification the background intensity µ(t, s) follows a log-Gaussian Cox process (Møller et al., 1998; Diggle et al., 2013) over space and time
$$\begin{array}{c}{{\mu(t,{\bf s})=\exp\left(f_{\bf s}\left({\bf s}\right)+f_{t}\left(t\right)\right)}}\\ {{f_{t}\sim{\mathcal G}{\mathcal P}\left(m_{t},k_{t}\right)}}\\ {{f_{\bf s}\sim{\mathcal G}{\mathcal P}\left(m_{\bf s},k_{\bf s}\right),}}\end{array}\tag{6}$$
where $m_t$ and $m_{\bf s}$ are the GP mean functions and $k_t$, $k_{\bf s}$ are the kernels defined by the hyperparameters $\omega_t^2, \omega_{\bf s}^2, l_t, l_{\bf s}$.

## 3.4 Full Model Likelihood

To model the spatial coordinates s = (x, y) and time stamps t, we use a Hawkes kernel $g_{ts}(t, {\bf s}) = g_t(t)g_{\bf s}({\bf s})$ from equation 3 and the log-Gaussian Cox process $\mu(t, {\bf s}) = \exp\left(f_{\bf s}({\bf s}) + f_t(t)\right)$ from equation 6. Without loss of generality we will assume here that the Gaussian processes have zero mean. One could see this as absorbing an intercept coming from constant contributions by both the temporal and spatial background processes, as these are only identifiable through their sum and not separately. The joint model we consider is a Hawkes process with composite rate λ(t, x, y), which is the sum of the intensities of an LGCP and a Hawkes process
$$\begin{split}\lambda(t,x,y)&=\exp\left(f_{\mathbf{s}}\left(x,y\right)+f_{t}\left(t\right)\right)+\sum_{i:t_{i}<t}g_{t}(t-t_{i})g_{\mathbf{s}}(x-x_{i},y-y_{i})\\&=\exp\left(f_{\mathbf{s}}\left(x,y\right)+f_{t}(t)\right)+\sum_{i:t_{i}<t}\alpha\beta\exp\left(-\beta(t-t_{i})\right)\frac{1}{2\pi\sigma_{x}\sigma_{y}}\exp\left(-\frac{(x-x_{i})^{2}}{2\sigma_{x}^{2}}-\frac{(y-y_{i})^{2}}{2\sigma_{y}^{2}}\right).\end{split}\tag{7}$$
Given a set of observed ordered times $(t_1, t_2, \ldots, t_n) \in [0, T)$ and the corresponding locations $(\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_n) \in \mathcal{X}$, let D denote the full data $D = \{t_i, \mathbf{s}_i\}_{i=1}^n$ and L(D) the likelihood.
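A pointwise evaluation of the composite rate in equation 7 can be sketched as follows, with callables `f_t` and `f_s` standing in for the background GP draws (all names and values are illustrative):

```python
import numpy as np

def intensity(t, x, y, history, f_t, f_s, alpha, beta, sx, sy):
    """Composite rate lambda(t, x, y) of equation 7: the exponentiated
    background plus triggering contributions from all past events t_i < t."""
    lam = np.exp(f_s(x, y) + f_t(t))
    for ti, xi, yi in history:
        if ti < t:
            lam += (alpha * beta * np.exp(-beta * (t - ti))
                    / (2 * np.pi * sx * sy)
                    * np.exp(-(x - xi)**2 / (2 * sx**2)
                             - (y - yi)**2 / (2 * sy**2)))
    return lam

# With no history the rate reduces to the exponentiated background alone.
background_only = intensity(1.0, 0.0, 0.0, [],
                            lambda t: 0.0, lambda x, y: 0.5,
                            0.5, 0.7, 0.1, 0.1)
```

Adding a recent nearby event to `history` strictly increases the rate, which is the self-excitation mechanism.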
Following equation (7.1.2) in (Daley & Vere-Jones, 2008, Chapter 7), the likelihood is given by
$$L(D)=\left[\prod_{i=1}^{n}\lambda(t_{i},\mathbf{s}_{i})\right]\exp\left(-\int_{\mathcal{X}}\int_{0}^{T}\lambda(t,\mathbf{s})dtd\mathbf{s}\right)\tag{8}$$
$$=\left[\prod_{i=1}^{n}\lambda(t_{i},x_{i},y_{i})\right]\exp\left(-\int_{\mathcal{X}}\int_{0}^{T}\lambda(t,x,y)dtdxdy\right).$$
We give below details on how to simulate from the process with the rate defined in equation 7 and how to perform Bayesian inference using the likelihood from equation 8.

## 4 Methods

## 4.1 Simulation

By construction our model admits a generative process facilitating simulation. This is an important and nuanced advantage over previous spatiotemporal models (Loeffler & Flaxman, 2018; Holbrook et al., 2021), which were not fully generative due to a deterministic parameterisation of the exogenous component. Note that the model of Mohler (2013) does admit a generative model, but only for a purely temporal model for aggregated (count) data. In general, Hawkes processes can be simulated in two ways: through an intensity-based approach or a cluster-based approach. We give below Algorithm 1 to simulate from our model via the latter approach, i.e. through simulating the background first and then the generations of subsequent offspring. Note that for the hyperparameters $l_t, l_{\bf s}, \omega_t^2, \omega_{\bf s}^2$ one can either fix them to a known value or place (hyper)priors on them.
Algorithm 1: Cluster-based generative algorithm for Hawkes process simulation

Require: T > 0, X
1. Draw $l_t, l_{\bf s} \sim p^+(\cdot)$ and $\omega_t^2, \omega_{\bf s}^2 \sim p^+(\cdot)$
2. Draw $a_0 \sim p(\cdot)$
3. Draw $f_t \sim \mathcal{GP}(0, k_t)$ and $f_{\bf s} \sim \mathcal{GP}(0, k_{\bf s})$
4. Set $\mu(t, {\bf s}) = \exp\left(f_t(t) + f_{\bf s}({\bf s}) + a_0\right)$
5. Draw $N_0 \sim \mathrm{Pois}\left(\int_0^T \int_{\mathcal{X}} \mu(t, {\bf s}) \, dt \, d{\bf s}\right)$
6. Draw $\{(t_i^{(0)}, {\bf s}_i^{(0)})\}_{i=1}^{N_0}$ from a Poisson process with rate $\mu(t, {\bf s})$, where ${\bf s}_i^{(0)} = (x_i^{(0)}, y_i^{(0)})$
7. Set $G_0 = \{(t_i^{(0)}, {\bf s}_i^{(0)})\}_{i=1}^{N_0}$ and $\ell = 0$
8. While $G_\ell \neq \emptyset$:
   - For $i = 1$ to $N_\ell$:
     - Draw $C_i \sim \mathrm{Pois}\left(\int_0^T \int_{\mathcal{X}} g_{ts}(t, {\bf s}) \, dt \, d{\bf s}\right)$, the number of offspring of event i
     - For $j = 1$ to $C_i$: draw $t_j^{(\ell+1)} \sim \mathrm{Exp}(\beta) + t_i^{(\ell)}$ and ${\bf s}_j^{(\ell+1)} \sim \mathrm{Normal}\left((x_i^{(\ell)}, y_i^{(\ell)}), \mathrm{diag}(\sigma_x^2, \sigma_y^2)\right)$; set $O_j = (t_j^{(\ell+1)}, {\bf s}_j^{(\ell+1)})$
   - Set $\ell \leftarrow \ell + 1$ and $G_\ell = \bigcup_{i: t_i^{(\ell)} < T} \{O_1, \ldots, O_{C_i}\}$
9. Return $\bigcup_\ell G_\ell$

We use a clustering approach (Hawkes & Oakes, 1974) for simulation, which makes use of the Poisson cluster process where each event has a mean number of offspring b (see section 3.2), and relies on the following idea: for each immigrant i, the times and locations of the first-generation offspring arrivals, given knowledge of their total number, are i.i.d. distributed. In Algorithm 1 we introduce $a_0$ to denote the total mean of the background rate, as the GPs have zero mean; $p^+(\cdot)$ refers to a probability distribution on the positive real line and $p(\cdot)$ to a distribution on the real line. As a check that our Hawkes process simulations are correct, we employ an approximate Kolmogorov-Smirnov type of test, adapting Algorithm 7.4.V from Daley & Vere-Jones (2008). To simulate from our proposed model, i.e. a Cox-Hawkes process, we need to draw from a GP. Since GPs are infinite-dimensional objects, we have to resort to finite approximations in order to simulate them. The most common approach, which we take here, is to implement them through finite-dimensional multivariate Gaussian distributions.
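A minimal runnable sketch of the cluster-based simulation, with a constant background rate standing in for the exponentiated GP draw (a full implementation would replace the immigrant step with a draw from the LGCP); names, the seed, and the shared spatial scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cluster(T, mu, alpha, beta, sigma):
    """Cluster-based simulation on [0, T] x [0, 1]^2: immigrants from a
    constant-rate background, then generations of offspring until extinction.
    Each event has Pois(alpha) offspring (branching ratio b = alpha), with
    exponential waiting times and Gaussian spatial displacements; offspring
    falling beyond T are discarded (Poisson thinning)."""
    n0 = rng.poisson(mu * T)  # number of immigrants
    gen = list(zip(rng.uniform(0, T, n0),
                   rng.uniform(0, 1, n0),
                   rng.uniform(0, 1, n0)))
    events = list(gen)
    while gen:
        offspring = []
        for t, x, y in gen:
            for _ in range(rng.poisson(alpha)):
                tc = t + rng.exponential(1.0 / beta)
                if tc < T:  # keep offspring inside the observation window
                    offspring.append((tc, rng.normal(x, sigma), rng.normal(y, sigma)))
        events += offspring
        gen = offspring
    return sorted(events)

events = simulate_cluster(T=50.0, mu=1.0, alpha=0.5, beta=0.7, sigma=0.1)
```

With α < 1 each cluster is almost surely finite, so the while loop terminates.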
In order to sample points from the LGCP background of the process, we draw an (approximate) realisation from the GP prior and then use rejection sampling to sample the exact temporal and spatial locations. The algorithm can be found in Appendix A.1.

## 4.2 Inference

Given a set of n observed data $D = \{t_i, \mathbf{s}_i\}_{i=1}^n$ over a period [0, T] and a spatial area denoted by X, we are interested in a Bayesian approach to infer the parameters and hyperparameters of the model. Denote by θ and ϕ the sets of parameters of the background rate µ(t, s) and the triggering rate g(t, s) respectively, so that $\theta = (a_0, f_t(t), f_{\bf s}({\bf s}))$ and $\phi = (\alpha, \beta, \sigma_x^2, \sigma_y^2)$. The posterior is then given by
$$\begin{split}\pi(\phi,\theta|D)&\propto\pi(\theta)\times\pi(\phi)\times L(D)\\&=\pi(f_{t}(t))\pi(f_{\bf s}({\bf s}))\pi(a_{0})\times\pi(\alpha)\pi(\beta)\pi(\sigma_{x}^{2})\pi(\sigma_{y}^{2})\times L(D),\end{split}\tag{9}$$
where L(D) is the likelihood defined in equation 8 and, with some abuse of notation, we use π to denote both prior and posterior for all the parameters. For $a_0$ we use a Normal prior. The priors π(α), π(β), π(σ_x²) and π(σ_y²) are Truncated Normal (restricted to the positive real line) to ensure the positivity of these parameters. For the experiments we demonstrate below we also tried Gamma and Exponential priors for α and β; however, the MCMC chains then showed worse mixing than with the Truncated Normal priors. Note that the prior on the functions $f_t$ and $f_{\bf s}$ can be further defined by priors on the hyperparameters, $l_t \sim$ InverseGamma, $\omega_t^2 \sim$ LogNormal for the temporal process and $l_{\bf s} \sim$ InverseGamma, $\omega_{\bf s}^2 \sim$ LogNormal for the spatial one. Our objective is to approximate the full posterior distribution π(ϕ, θ|D) using MCMC sampling. A classical Hawkes process has quadratic complexity for computing the likelihood.
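The quadratic cost is visible in a direct evaluation of the log of equation 8: each event's intensity sums over all earlier events, and the integral is approximated numerically (here on a plain grid; the sketch below works for any generic intensity callable `lam` and all names are illustrative):

```python
import numpy as np

def log_likelihood(events, lam, T, x_max, y_max, n_grid=20):
    """Log of equation 8: sum of log-intensities at the observed events
    minus the integral of the intensity over [0, T] x [0, x_max] x [0, y_max],
    the integral being approximated by averaging lam over a regular grid."""
    ll = sum(np.log(lam(t, x, y)) for t, x, y in events)
    ts = np.linspace(0.0, T, n_grid)
    xs = np.linspace(0.0, x_max, n_grid)
    ys = np.linspace(0.0, y_max, n_grid)
    vals = np.array([[[lam(t, x, y) for y in ys] for x in xs] for t in ts])
    return ll - vals.mean() * T * x_max * y_max
```

For a homogeneous rate λ = 2 on the unit cube with a single event, this returns log 2 − 2, matching the closed form; with a self-exciting `lam`, each of the n event evaluations itself loops over up to n past events, giving the quadratic cost discussed above.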
Only in special cases, such as that of a purely temporal exponential kernel, is the complexity reduced from quadratic to linear, as the kernel admits a recursive construction; see Dassios & Zhao (2013) for an explanation. Note, however, that we cannot apply this in our case, as it no longer holds once we add the Gaussian spatial kernel on top of the temporal exponential kernel. Inference is in general cumbersome, and one tends either to resort to approximations or to high-performance computing techniques such as those of Holbrook et al. (2021). A naive formulation combining a log-Gaussian Cox process as the background intensity function of the spatiotemporal Hawkes process increases the computational complexity of inference: in addition to the quadratic complexity arising from the triggering kernel, the exogenous formulation naively introduces a cubic complexity for the LGCP (Diggle et al., 2013). We propose to circumvent these computational issues through a reduced-rank approximation of a Gaussian process (Semenova et al., 2022) via variational autoencoders (VAE). This approach relies on pre-training a VAE on samples from a Gaussian process to create a reduced-rank generative model. Once this VAE is trained, the decoder can be used to generate new samples for Bayesian inference. More specifically, in this framework one first trains a VAE to approximate a class of GP priors (the class of GP priors learned varies from context to context, depending on our prior belief about the problem space) and then utilises the trained decoder to produce approximate samples from the GP. This step reduces the inference time and complexity, as drawing from a standard normal distribution $\mathbf{z} \sim \mathcal{N}(0, I)$ with uncorrelated $z_i$ is much more efficient than drawing from a highly correlated multivariate normal $\mathcal{N}(0, \Sigma)$ with dense Σ. For more details see section 2.5 in Semenova et al. (2022).
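The reduced-rank idea can be illustrated without a trained VAE by truncating an eigendecomposition of the dense GP covariance: a cheap draw z ~ N(0, I_k) of k uncorrelated Gaussians is mapped linearly to an approximate GP sample. The linear `decode` map below is only a stand-in for the pre-trained VAE decoder of Semenova et al. (2022); the grid, kernel values, rank, and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Dense squared exponential covariance on a time grid (omega^2 = 1, l = 10).
t = np.linspace(0.0, 50.0, 200)
K = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 10.0**2)

# Keep the top-k eigenpairs: decode maps R^k -> R^200 and plays the role
# of the trained decoder, since decode @ decode.T approximates K.
eigvals, eigvecs = np.linalg.eigh(K)
k = 15
decode = eigvecs[:, -k:] * np.sqrt(np.maximum(eigvals[-k:], 0.0))

z = rng.standard_normal(k)  # cheap: k uncorrelated standard normals
f_approx = decode @ z       # approximate GP draw on the grid
```

Because the squared exponential kernel is very smooth, its covariance has rapidly decaying eigenvalues, so a small rank k already reproduces the covariance structure closely.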
![9_image_0.png](9_image_0.png)

Figure 1: Plot for the temporal Gaussian process f_t(t) on simulated data. The red line is the simulated draw of the Gaussian process, the green line is the mean posterior and the yellow shaded area is the 90% credible interval. The red marks on the x-axis are the exact simulated times from the background process.

We will denote this approximation to the Gaussian process prior by π̃. Hence, we obtain overall the Bayesian hierarchical model
$$\begin{split}\pi(\phi,\theta|D)&\propto\pi(\theta)\,\pi(\phi)L(D)\\&=\pi(f_{t}(t))\pi(f_{\bf s}({\bf s}))\pi(a_{0})\pi(\alpha)\pi(\beta)\pi(\sigma_{x}^{2})\pi(\sigma_{y}^{2})L(D)\\&\approx\tilde{\pi}\left(f_{t}(t)\right)\tilde{\pi}(f_{\bf s}({\bf s}))\pi(a_{0})\pi(\alpha)\pi(\beta)\pi(\sigma_{x}^{2})\pi(\sigma_{y}^{2})L(D).\end{split}\tag{10}$$
The code for simulation and inference for this class of Cox-Hawkes models, implemented in python and numpyro (Phan et al., 2019), can be found at https://github.com/misxenia/Spatiotemporal_Cox_Hawkes.

## 5 Experiments

We demonstrate the applicability of our methods on both simulated and real data. For simulations our goal is twofold: (i) to show that we can accurately estimate the parameters of both the background and self-exciting components, thereby showing that we recover the true underlying mechanisms, and (ii) to show that our method performs well under model misspecification, thereby showing that our model is sufficiently general to be used in real data situations where the true underlying data-generating mechanism is unknown. On real settings we apply our methods to the gunfire data used in Loeffler & Flaxman (2018), detected by an acoustic gunshot locator system, to uncover the underlying patterns of crime contagion in space and time. We show how our model can be used as an actionable tool by practitioners to understand and measure contagion effects in important settings. Note that throughout the following we refer to the model used for simulating data as the true model.
## 5.1 Experiment 1: Simulated Data

We simulate data from a Hawkes process with rate as given in equation 7 on the domain [0, T] = [0, 50], X = [0, 1] × [0, 1]. For the background rate, which governs the exogenous events, we simulate a realisation of the latent (separable) spatiotemporal Gaussian process with covariance kernels defined as in equation 5 using $l_t = 10, \omega_t^2 = 1, l_{\bf s} = 0.25, \omega_{\bf s}^2 = 1$. The simulated f_t(t) from the temporal Gaussian process can be seen in Figure 1 in red, and the temporal events drawn from this background are also shown in red on the x-axis. The simulated two-dimensional spatial Gaussian process can be seen in the left plot of Figure 2. Note that we also use an explicit constant intercept of $a_0 = 0.8$, giving an overall background rate of $\exp(a_0 + f_t + f_{\bf s})$, and in inference we use a Normal prior on it. For the diffusion effect we use a triggering spatiotemporal kernel of the form in equation 3 with values α = 0.5, β = 0.7 for the exponential kernel. For the Gaussian spatial kernel we assume a common parameter σ² for both σ_x² and σ_y², which we set to 0.5. This gives a set of around n = 210 spatiotemporal points $\{t_i, x_i, y_i\}_{i=1}^n$, of which the ratio of background to offspring events is roughly 1 : 1. For inference we run 3 chains with 1,500 samples each, of which 500 were discarded as burn-in, using a thinning size of 1.

![10_image_0.png](10_image_0.png) ![10_image_1.png](10_image_1.png)

Figure 2: Simulated draw and the posterior predictive distribution for the 2-dimensional spatial Gaussian process. The simulated f_s(x, y) is shown on the left on a regular grid and the mean predictive distribution is shown on the right with the simulated locations in red.

![10_image_2.png](10_image_2.png)

Figure 3: MCMC trace for $a_0, \alpha, \beta, \sigma$ on simulated data, where the red line shows the simulated value for the experiments. The samples shown are collected from 3 chains.
In Figure 3 we report the trace plots for the parameters α, β, σ, which define the triggering kernel that governs excitation. We also report $a_0$, which we use as the total mean of the latent Gaussian process $\mu(t, x, y) = \exp(f_t(t) + a_0 + f_{\bf s}(x, y))$; we place a Normal prior on $a_0$. In all cases the simulated values shown in red are within the trace coverage. In our experiments we combine the samples (after removing the warmup iterations) from all the chains. The plots overall show good convergence, with good mixing between the chains and no multimodal behaviour. Regarding the Gaussian process fitting, we show the posterior predictive plots in Figures 1 and 2. For the one-dimensional temporal Gaussian process we plot the simulated draw f_t(t) in Figure 1 in red. The green line is the mean posterior of f_t(t) and the yellow shaded area is the 90% credible interval obtained from the posterior predictive distribution. The red marks on the x-axis are the exact simulated time events drawn from the process. The 90% credible interval covers the simulated function well, and the mean posterior predictive is very close to the simulated one, showing good model fit. For the two-dimensional spatial Gaussian process we plot the simulated draw f_s(x, y) in the left plot of Figure 2. The mean posterior predictive distribution, with the true simulated locations embedded on it, is shown on the right. The colour scale shows the relative values, ranging from dark blue (smallest) to yellow (highest). The simulated draw and the mean of the posterior predictive distribution are relatively similar when compared visually, which shows a good fit for the model. To quantify convergence we require good R̂ diagnostics. For all our results the R̂ diagnostics returned by the sampler were in [1, 1.002] for all the estimated parameters.
This, combined with visual inspection of the MCMC trace, shows good evidence of convergence and good mixing behaviour. To further validate our method we perform a goodness-of-fit test. From the model parameters estimated above, we compute the estimated intensity and then apply the time-rescaling theorem (Meyer, 1969; Papangelou, 1972), which states that if the model is correct, the rescaled times are independent exponentially distributed random variables with unit rate. We can thus compare the empirical and theoretical distributions to assess how well the chosen model agrees with the empirical data. We make this comparison using a quantile-quantile (QQ) plot and a Kolmogorov-Smirnov test. We provide in Appendix A.2, in Figure 9, the QQ plot, which shows a good fit: the best-fit line has a slope of 1.1 and an intercept of −0.2, with a coefficient of determination (measuring the correlation between the two axes) of 0.98. Secondly, we use the Kolmogorov-Smirnov statistic to test the null hypothesis that the two samples were drawn from the same distribution. Our p-value is ≫ 0.05, so at a 95% confidence level we cannot reject the null hypothesis. Given the above, we conclude a good fit of our process to the data.

## 5.2 Experiment 2: Model Misspecification

Our second experiment on simulated data compares and contrasts our method (LGCP-Hawkes) to a Hawkes process with constant background and a pure log-Gaussian Cox process. The intensity for our LGCP-Hawkes model is given by equation 7, for Hawkes by equation 2 with constant µ(t, s) = µ, and for LGCP by equation 6. We simulate data from these three inhomogeneous point process models and then fit each model on every dataset, using a train set for fitting and a test set for prediction. Note that we also fit a homogeneous Poisson model, as it is the baseline giving the simplest possible spatiotemporal model.
We show that our model LGCP-Hawkes is a reasonable approach even when there is model mismatch (i.e. when the data are drawn from a pure Hawkes or a pure LGCP). It is therefore a good approach to use in real data scenarios where the underlying data-generating mechanism is unknown. It is in general challenging to evaluate the quality of fit of different point process models. We use two ways to evaluate the model fit and generalisation ability of our model. We first adopt a procedure to predict the temporal and spatial locations of future events under the inference model and then compute the combined error between those events and events generated under the true model used for simulation. This procedure mirrors the properties that practitioners would desire from their model in real-world settings. The metrics we use to test the generalisation ability of the model are (a) the combined root mean square error (RMSE) between the exact simulated and predicted events and (b) the normalised negative log-likelihood of the test events. The formula we use is
$$\mathrm{RMSE}=\mathrm{RMSE}_{\mathbf{s}}+\mathrm{RMSE}_{t},\qquad\mathrm{RMSE}_{t}=\sqrt{\frac{1}{n_{\mathrm{test}}}\sum_{j=1}^{n_{\mathrm{test}}}(t_{j}-\tilde{t}_{j})^{2}},\qquad\mathrm{RMSE}_{\mathbf{s}}=\sqrt{\frac{1}{n_{\mathrm{test}}}\sum_{j=1}^{n_{\mathrm{test}}}(x_{j}-\tilde{x}_{j})^{2}+\frac{1}{n_{\mathrm{test}}}\sum_{j=1}^{n_{\mathrm{test}}}(y_{j}-\tilde{y}_{j})^{2}},$$
with $t_j, x_j, y_j$ denoting the true events and $\tilde{t}_j, \tilde{x}_j, \tilde{y}_j$ the predicted ones. The RMSE with its standard error is reported in Figure 4 (top). Similarly, we evaluate the normalised negative log-likelihood (NNL) on the test data in each case, given the setup explained below, and report the mean and standard error in Figure 4 (bottom). The normalisation divides the original negative log-likelihood by the number of data points involved (here the number of test points). The experimental setup is as follows. We simulate 100 datasets (each of which gives on average 300 events) over a fixed time window and a fixed spatial domain and then do a train-test split.
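The combined error metric above can be computed as in the following sketch (array layout and names are illustrative):

```python
import numpy as np

def combined_rmse(true_events, pred_events):
    """Combined space-time error RMSE = RMSE_s + RMSE_t between matched
    true and predicted events, each given as an array of (t, x, y) rows."""
    true_events = np.asarray(true_events, dtype=float)
    pred_events = np.asarray(pred_events, dtype=float)
    rmse_t = np.sqrt(np.mean((true_events[:, 0] - pred_events[:, 0]) ** 2))
    rmse_s = np.sqrt(np.mean((true_events[:, 1] - pred_events[:, 1]) ** 2)
                     + np.mean((true_events[:, 2] - pred_events[:, 2]) ** 2))
    return rmse_s + rmse_t
```

Note that the spatial term pools the squared errors of both coordinates under a single square root, while the temporal term is a standard one-dimensional RMSE.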
We repeat this using each of the models as the generating model.

![12_image_0.png](12_image_0.png)

Figure 4: Plots to support the model misspecification experiment. Top: Average RMSE and its standard error (combined for space and time) reported for the model misspecification experiment. Bottom: Average negative normalised log-likelihood and its standard error on test data. The left plot corresponds to a simulated dataset from an LGCP model, the middle to an LGCP-Hawkes and the right to a Hawkes model. In all three cases we perform inference under LGCP, LGCP-Hawkes and Hawkes, as well as Poisson (baseline).

For every dataset, we then perform inference under our MCMC scheme under every model. Given the estimated parameters, we predict the next 10 future events 200 times, which we compare to those of the test set. We compute the error between the true and estimated events across the 200 predictions and the 100 simulations. We report the mean and standard error of the RMSE and normalised negative log-likelihood (NNL) graphically in Figure 4. This plot shows how good each model is at predicting the near future. As shown in Figure 4, and as expected, the RMSE is always lowest when the true model is used for inference; however, in all cases the next best model is LGCP-Hawkes, although the differences are not always statistically significant. This provides evidence for our model's ability to flexibly capture a wide range of underlying patterns. Looking at the RMSE, in all cases the worst model is the Poisson baseline, as its constant intensity in space and time cannot capture the inhomogeneities in the data. These results highlight that when the true data-generating process is unknown, which is the default scenario in real-world settings, our model is likely to be a robust choice. The NNL results suggest the same conclusion: the NNL is always lowest when the true model is used for inference, and in all cases the next best model is LGCP-Hawkes.
The difference from the RMSE results is that, under an LGCP-Hawkes model and a Hawkes model, the log-likelihood of an LGCP deviates considerably from that of the LGCP-Hawkes and Hawkes models. Furthermore, it is of interest to see how the LGCP-Hawkes model under study correctly recovers the other processes, namely Poisson, LGCP and Hawkes. Recall that the intensity consists of two parts, the background and the excitation. We report in each case two ratios, $r_B$, the ratio of the background to the total intensity, and $r_E$, the ratio of the excitation to the total intensity, to explain how our model correctly recovers the mechanism which drives the appearance of the events. We also consider the coefficient of the excitation part of the intensity, which we denote by c = α × β, and the background parameter $a_0$, and report their respective mean estimates. Applying our LGCP-Hawkes model (which assumes an intensity as in equation 7) to simulated data obtained from the LGCP with true value $a_0 = 0.5$, and using the posterior mean of the parameters, we estimate $r_B = 0.91$, $r_E = 0.09$. The mean estimates for the excitation coefficient c and $a_0$ are 0.02 and 0.58 respectively. These suggest that the model has correctly inferred that there is negligible contribution to the intensity coming from the self-excitation part, meaning that events arise because of spatiotemporal background variation and not contagion. Applying our LGCP-Hawkes model to simulated data obtained from the Poisson process with $a_0 = 0.8$, we estimate $r_B = 0.85$, $r_E = 0.15$, which suggests that most of the contribution comes from the background and only a small part from the excitation. The mean estimates for c and $a_0$ are 0.09 and 0.52 respectively. These suggest that the model has correctly inferred that there is very little contribution to the intensity coming from the self-excitation part, meaning that events arise because of spatiotemporal background variation and not contagion.
Applying our LGCP-Hawkes model to simulated data from the Hawkes process with $a_0 = 0.5$ and c = α × β = 0.4, we estimate $r_B = 0.43$, $r_E = 0.57$, which suggests that the events of this process are due to both the background and the self-exciting kernel. The mean estimates for the excitation coefficient c and $a_0$ are 0.42 and 0.52 respectively, which shows that the model has properly recovered the contribution from both background effects and self-exciting behaviour.

## 5.3 Experiment 3: Gunshot Dataset

![13_image_0.png](13_image_0.png)

Figure 5: Spatial (a) and temporal (b) distribution of the gunfire data in Washington DC over the year 2013. The spatial locations are the exact geographical coordinates and the temporal locations are shown weekly.

We use gunshot data in 2013 recorded by an acoustic gunshot locator system (AGLS) in Washington DC and follow Loeffler & Flaxman (2018) for data preprocessing. There were 1,171 gunshots recorded in total. Spatial locations were rounded to produce approximately 100m spatial resolution and 1 sec temporal resolution. Visualisations of the temporal and spatial distributions of the data are shown in Figure 5(a) and (b).

![14_image_0.png](14_image_0.png) ![14_image_1.png](14_image_1.png)

Figure 6: MCMC trace for the parameters (a) $a_0, \alpha, \beta$ and (b) $\sigma_x^2, \sigma_y^2$ when collecting the MCMC samples from all chains and discarding warmup.

We perform inference with the HMC (Neal, 2011) routines of numpyro, which use the NUTS (Hoffman & Gelman, 2014) algorithm. We used 2 chains, each with 4,000 samples, of which 2,000 are discarded as warmup. We join together the samples from the two chains and report the combined MCMC trace for each of the parameters. Note that we performed a prior sensitivity analysis to assess the robustness of our results: we used different parameters in the priors and observed that the posterior distributions of the parameters were similar, giving posterior mean estimates very close to each other.
We also tried different priors, such as Gamma and Exponential, but the convergence of the chains was better when using Truncated Normal distributions. Note that we have rescaled the temporal and spatial locations of the events accordingly, to simplify their use in inference and to match the domain of our pre-trained GP generators; rescaling is standard practice in point process modelling. In Figure 6(a) we report the MCMC trace plots for the parameters α, β and $a_0$, and in Figure 6(b) we report $\sigma_x^2, \sigma_y^2$, which define the lengthscales of the spatial Gaussian kernel (x and y distance) that governs excitation in space. Regarding the Gaussian process fitting, we show the posterior predictive plots in Figures 7 and 8. For the one-dimensional temporal Gaussian process we plot the estimated function f_t(t) in Figure 7. The green line is the mean posterior of f_t(t) and the yellow shaded area is the 90% credible interval obtained from the posterior predictive distribution. The red marks indicate the observed true events. For the spatial Gaussian process we plot the estimated function f_s(x, y) in the left plot of Figure 8. The mean predictive distribution with the true locations embedded on it is shown on the right. The plots overall show good convergence, with good mixing between the chains and no multimodal behaviour. This is also quantified by the convergence diagnostics R̂, which were equal to 1 for all the estimated parameters.

![14_image_2.png](14_image_2.png)

Figure 7: Posterior predictive for f_t(t) with the posterior mean in green and the 90% credible interval in the yellow shaded area, with the true time stamps of the events on the x-axis.

![15_image_0.png](15_image_0.png)

Figure 8: Mean of the posterior predictive for f_s(x, y) (left) and similarly with the true locations (right).
We report estimates and 90% credible intervals of $\hat{a}_0 = 0.53\ (-0.46, 1.47)$, $\hat{\alpha} = 0.73\ (0.68, 0.78)$, $\hat{\beta} = 0.18\ (0.16, 0.21)$, $1/\hat{\beta} = 5.35\ (4.64, 6.11)$, $\hat{\sigma}_x^2 = 9.26 \times 10^{-5}\ (7.90 \times 10^{-5}, 1.07 \times 10^{-4})$, $\hat{\sigma}_y^2 = 5.65 \times 10^{-5}\ (4.78 \times 10^{-5}, 6.67 \times 10^{-5})$, which can be interpreted as follows, following the interpretation of Loeffler & Flaxman (2018). The average number of shootings triggered by one shooting is around 0.73; that is, for every 100 shootings that occur, these trigger on average around another 73. Rounding to the nearest minute or meter, the temporal lengthscale of the exponential triggering kernel is estimated to be around 5 minutes, the spatial triggering lengthscale in the x distance, denoted by σ_x, is around 10m, and in the y distance around 8m. Using the upper bounds of the uncertainty intervals, the period in which diffusion takes place is within less than 6 minutes, and the area is within 10 meters in the x distance and 8 meters in the y distance. Regarding the background effects, the posterior mean of the spatiotemporal Gaussian process is estimated to be 0.53. The results differ somewhat from those reported by Loeffler & Flaxman (2018) and Holbrook et al. (2021), but the model assumed here has a different form and we have applied it to a different subset of the gunshot dataset.

Table 1: First row: Average RMSE on test data, with its standard error in brackets, computed when predicting future unseen temporal and spatial events under the five models. Second row: Average NNL, with its standard error in brackets, on the same prediction task. Third row: Average NNL, with its standard error in brackets, computed on training data under the five models.
|               | Hawkes-LGCP    | Hawkes         | LGCP           | Poisson        | DeepSTPP      |
|---------------|----------------|----------------|----------------|----------------|---------------|
| RMSE (test)   | 7.33 (0.11)    | 8.14 (0.13)    | 7.90 (0.09)    | 14.2 (0.29)    | 7.86 (0.25)   |
| NNL (test)    | −2.534 (0.09)  | −1.789 (0.08)  | −0.309 (0.039) | −0.26 (0.024)  | −2.24 (0.11)  |
| NNL (train)   | −3.42 (0.0007) | −3.15 (0.0007) | −2.47 (0.0026) | −1.99 (0.0003) | −2.76 (0.009) |

We compare our model to the LGCP model, the Hawkes model, the baseline Poisson model, and the state-of-the-art neural-network-based spatiotemporal model of Zhou et al. (2022), which we refer to as DeepSTPP. We report in Table 1 the average RMSE on test data and the normalised negative log-likelihood (NNL) on both training and test data. As seen from Table 1, LGCP-Hawkes gives the best predictive performance in terms of both RMSE and NNL. The normalisation of the log-likelihood divides by the number of training and test data points respectively. The inference times are 45 minutes for Hawkes-LGCP, 8 minutes for Hawkes, 25 seconds for LGCP, 11 seconds for Poisson and 12 minutes for DeepSTPP.

## 6 Conclusion

We presented a novel model combining Hawkes processes with Gaussian processes, and used it to identify patterns in gun violence in Washington DC. Methodologically, ours is the first model of its kind to have such flexibility in capturing underlying patterns in the rate of occurrence of events, combining a powerful nonparametric statistical model with an interpretable, mechanistic self-exciting point process model. This combination means that it can be used across a range of real-world spatiotemporal problems in which the underlying data mechanism is unknown. Applications could include social networks, biology, economics and epidemiology. Its general and practical form makes it an actionable tool for practitioners that can be used to design interventions and for policy making.
There are many directions for future research. One could study the properties of this model at a theoretical level to determine the implications of different forms of the background rate and whether they are identifiable. Additionally, the model can be further extended to include additional covariates. These take the role of marks in a Hawkes process construction and can bring more information in infectious disease applications, in which one wants to characterise disease transmission and quantify the sources that govern infection in space and time. Deviating from the univariate setting, one can consider interacting Hawkes processes to model events in different states or regions, where the intensity of events in one region depends on the intensity in another region. Such a model could be useful for crime data, and also in neuroscience, where multiple neural spike trains interact across different parts of the brain. Computationally, this may prove to be a difficult extension. In scenarios where the background trends potentially come from different sources, incorporating transformations of Gaussian processes could make the framework even more flexible and able to capture multimodal distributions. Finally, one can extend this to a generic flexible framework for Hawkes processes with non-linear intensities that can potentially capture inhibition.

## References

Ifigeneia Apostolopoulou, Scott Linderman, Kyle Miller, and Artur Dubrawski. Mutually regressive point processes. *32nd Conference on Neural Information Processing Systems*, 2019.

Søren Asmussen and Peter W. Glynn. *Applied Probability and Queues*. Stochastic Modelling and Applied Probability. Springer, second edition, 2003.

Pierre Brémaud and Laurent Massoulié. Stability of nonlinear Hawkes processes. *Ann. Probab.*, 24(3):1563–1588, 1996.

Biao Cai, Jingfei Zhang, and Yongtao Guan.
Latent network structure learning from high-dimensional multivariate point processes. *Journal of the American Statistical Association*, pp. 1–14, 2022.

Ricky T. Q. Chen, Brandon Amos, and Maximilian Nickel. Neural spatio-temporal point processes. *International Conference on Learning Representations*, 2021.

Daryl J. Daley and David Vere-Jones. *An Introduction to the Theory of Point Processes*. Springer, 2008.

Angelos Dassios and Hongbiao Zhao. Exact simulation of Hawkes processes with exponentially decaying intensity. *Electronic Communications in Probability*, 18:1–13, 2013.

Peter Diggle and Paulo J. Ribeiro. *Model-based Geostatistics*. Springer Series in Statistics, 2007.

Peter J. Diggle, Paula Moraga, Barry Rowlingson, and Benjamin M. Taylor. Spatial and spatio-temporal log-Gaussian Cox processes: Extending the geostatistical paradigm. *Statistical Science*, 28(4):542–563, 2013.

Sophie Donnet, Vincent Rivoirard, and Judith Rousseau. Nonparametric Bayesian estimation of multivariate Hawkes processes. *The Annals of Statistics*, 48(5):2698–2727, 2020.

Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: Embedding event history to vector. In *Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 1555–1564, 2016.

Geoffrey Grimmett and David Stirzaker. *Probability and Random Processes*, chapter 5. Oxford University Press, 2001.

Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. *Biometrika*, 58(1):83–90, 1971.

Alan G. Hawkes and David Oakes. A cluster process representation of a self-exciting process. *Journal of Applied Probability*, 11(3):493–503, 1974.

Matthew D. Hoffman and Andrew Gelman. The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. *Journal of Machine Learning Research*, 15:1593–1623, 2014.

Andrew J. Holbrook, Charles E. Loeffler, Seth R. Flaxman, and Marc A.
Suchard. Scalable Bayesian inference for self-excitatory stochastic processes applied to big American gunfire data. *Statistics and Computing*, 31, 2021.

Mikyoung Jun and Scott J. Cook. Flexible multivariate spatio-temporal Hawkes process models of terrorism. arXiv:2202.12346, 2022.

Erik Lewis and George O. Mohler. A nonparametric EM algorithm for multiscale Hawkes processes. *Journal of Nonparametric Statistics*, 2011.

Scott Linderman and Ryan Adams. Discovering latent network structure in point process data. *Proceedings of the 31st International Conference on Machine Learning*, 2014.

Scott Linderman and Ryan Adams. Scalable Bayesian inference for excitatory point process networks. arXiv:1507.03228, 2015.

Scott Linderman, Christopher Stock, and Ryan Adams. A framework for studying synaptic plasticity with neural spike train data. In *Advances in Neural Information Processing Systems*, pp. 2330–2338, 2014.

Charles E. Loeffler and Seth R. Flaxman. Is gun violence contagious? A spatiotemporal test. *Journal of Quantitative Criminology*, 34:999–1017, 2018.

Noa Malem-Shinitski, Cèsar Ojeda, and Manfred Opper. Flexible temporal point processes modeling with nonlinear Hawkes processes with Gaussian processes excitations and inhibitions. *Proceedings of the 38th International Conference on Machine Learning*, pp. 1–31, 2021.

George Matheron. *Traité de géostatistique appliquée*, volume 2. Technip, 1962.

Hongyuan Mei and Jason Eisner. The neural Hawkes process: A neurally self-modulating multivariate point process. In *Proceedings of the 31st Conference on Neural Information Processing Systems*, 2017.

Paul-André Meyer. Démonstration simplifiée d'un théorème de Knight. *Séminaire de Probabilités V*, pp. 191–195, 1969.

Xenia Miscouridou, François Caron, and Yee Whye Teh. Modelling sparsity, heterogeneity, reciprocity and community structure in temporal interaction data. *NeurIPS*, 2018.

Swapnil Mishra, Seth Flaxman, Tresnia Berah, Mikko Pakkanen, Harrison Zhu, and Samir Bhatt.
πVAE: Encoding stochastic process priors with variational autoencoders. *Statistics and Computing*, 32(6), 2022.

George O. Mohler. Modeling and estimation of multi-source clustering in crime and security data. *The Annals of Applied Statistics*, 7(3):1525–1539, 2013.

Jesper Møller, Anne Randi Syversveen, and Rasmus Plenge Waagepetersen. Log Gaussian Cox processes. *Scandinavian Journal of Statistics*, 25(3):451–482, 1998.

Radford M. Neal. MCMC using Hamiltonian dynamics. In *Handbook of Markov Chain Monte Carlo*. Chapman and Hall/CRC, 2011.

Yosihiko Ogata. Statistical models for earthquake occurrences and residual analysis for point processes. *Journal of the American Statistical Association*, 83(401):9–27, 1988.

Maya Okawa, Tomoharu Iwata, Takeshi Kurashima, Yusuke Tanaka, Toda Hiroyuki, and Naonori Ueda. Deep mixture point processes: Spatio-temporal event prediction with rich contextual information. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*, pp. 373–383, 2019.

Takahiro Omi, Naonori Ueda, and Kazuyuki Aihara. Fully neural network based model for general temporal point processes. *33rd Conference on Neural Information Processing Systems*, 2019.

Fredos Papangelou. Integrability of expected increments of point processes and a related random change of scale. *Trans. Amer. Math. Soc.*, 165:483–506, 1972.

Du Phan, Neeraj Pradhan, and Martin Jankowiak. Composable effects for flexible and accelerated probabilistic programming in NumPyro. *arXiv:1912.11554*, 2019.

Carl E. Rasmussen and Christopher K. I. Williams. *Gaussian Processes for Machine Learning*, chapter 2. The MIT Press, 2005.

Alex Reinhart. A review of self-exciting spatiotemporal point processes and their applications. *Statistical Science*, 33(3):299–318, 2018.

Elizaveta Semenova, Yidan Xu, Adam Howes, Theo Rashid, Samir Bhatt, B. Swapnil Mishra, and Seth R. Flaxman.
PriorVAE: encoding spatial priors with variational autoencoders for small-area estimation. *Royal Society Publishing*, pp. 73–80, 2022.

Deborah Sulem, Vincent Rivoirard, and Judith Rousseau. Bayesian estimation of nonlinear Hawkes processes. arXiv preprint arXiv:2103.17164, 2021.

Waldo R. Tobler. A computer movie simulating urban growth in the Detroit region. *Economic Geography*, 46:234–240, 1970.

H. Juliette T. Unwin, Isobel Routledge, Seth Flaxman, M.-A. Rizoiu, Shengjie Lai, Justin Cohen, Daniel J. Weiss, Swapnil Mishra, and Samir Bhatt. Using Hawkes processes to model imported and local malaria cases in near-elimination settings. *PLOS*, 2021.

Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, and Xin Zuo. Temporal attention augmented transformer Hawkes process. *Neural Computing and Applications*, 34:3795–3809, 2022.

Feng Zhou, Zhidong Li, Xuhui Fan, Yang Wang, Arcot Sowmya, and Fang Chen. Efficient inference for nonparametric Hawkes processes using auxiliary latent variables. *JMLR*, 21:1–31, 2020.

Zihao Zhou, Xingyi Yang, Ryan A. Rossi, Handong Zhao, and Rose Yu. Neural point process for learning spatiotemporal event dynamics. In *Proceedings of the 35th International Conference on Machine Learning*, volume 168 of *Proceedings of Machine Learning Research*, pp. 1–13. PMLR, 2022.

## A Appendix

### A.1 Simulation

In Algorithm 2 we show how to simulate from the LGCP background of the process with intensity µ(t, s) = exp(ft(t) + a0 + fs(s)). For simulation purposes we write explicitly a0 = a0t + a0s, but estimate only their sum (i.e. a0), as a0t and a0s are identifiable only through their sum. p⁺ denotes a probability distribution on the positive real line, whereas p denotes a probability distribution on the real line. We take the priors on ℓt and ℓs to be Inverse Gamma and those on ω²t and ω²s to be Log-Normal, but other options are possible. For the priors on a0t and a0s we use a Normal distribution.
Algorithm 2 Simulation of the LGCP events from the background

Require: T > 0, X
  Draw a0t, a0s ∼ p(·)
  Draw ℓt, ℓs ∼ p⁺(·)
  Draw ω²t, ω²s ∼ p⁺(·)
  Form kt, ks using equation 5
  Draw ft ∼ GP(0, kt), fs ∼ GP(0, ks)
  Set r(t) = exp(ft(t) + a0t), r(s) = exp(fs(s) + a0s)
  Approximate I(t, s) = ∫₀ᵀ ∫_X r(t) × r(s) ds dt
  Draw N0 ∼ Pois(I(t, s))
  Draw {t_i}_{i=1}^{N0} ⊂ [0, T] from r(t) via rejection sampling
  Draw {s_i}_{i=1}^{N0} ⊂ X from r(s) via rejection sampling
  return G0 = {(t_i, s_i)}_{i=1}^{N0}

### A.2 Simulation experiment

![19_image_0.png](19_image_0.png)

Figure 9: Quantile-quantile plot comparing the quantiles of the empirical and theoretical intensity.

To further support Experiment 1 in Section 5.1 we consider goodness-of-fit tests: a QQ plot and a Kolmogorov-Smirnov test. The QQ plot shown in Figure 9 measures the agreement between the observed and the theoretical quantiles of the intensity function. The theoretical quantiles, plotted on the x-axis, are those of the exponential distribution, whereas the empirical ones, on the y-axis, are those of the fitted intensity under our Hawkes-LGCP model. The best-fit line through our data has a slope of 1.1 and an intercept of −0.2, and the coefficient of determination is 0.98, suggesting almost perfect correlation between the two. This indicates good agreement between the point process model and the experimental data, as the points lie close to the 45-degree line. Secondly, we measure the discrepancy between the two using a Kolmogorov-Smirnov test, which gives a p-value ≫ 0.05, clearly not rejecting the null hypothesis that the two samples come from the same distribution.
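For illustration, the rejection-sampling steps of Algorithm 2 can be sketched in a few lines. This is a minimal sketch under simplifying assumptions: one spatial dimension, and the GP draws f_t + a_0t and f_s + a_0s replaced by fixed log-rate functions; all function names are illustrative.

```python
import numpy as np

def simulate_lgcp_background(log_rate_t, log_rate_s, T, X, rng, grid=1000):
    """Draw background events on [0, T] x [0, X] via rejection sampling.

    log_rate_t / log_rate_s are stand-ins for the GP draws f_t + a_0t and
    f_s + a_0s of Algorithm 2; the intensity factorises as r(t) * r(s).
    """
    r_t = lambda t: np.exp(log_rate_t(t))
    r_s = lambda s: np.exp(log_rate_s(s))
    tg = np.linspace(0.0, T, grid)
    sg = np.linspace(0.0, X, grid)
    # Grid approximation of I = (integral of r_t over [0,T]) * (integral of r_s over X).
    I = r_t(tg).mean() * T * r_s(sg).mean() * X
    n0 = rng.poisson(I)  # N0 ~ Pois(I)

    def rejection_sample(r, lo, hi, upper, n):
        # Accept uniform candidates with probability r(candidate) / upper.
        out = []
        while len(out) < n:
            cand = rng.uniform(lo, hi)
            if rng.uniform(0.0, upper) < r(cand):
                out.append(cand)
        return np.array(out)

    times = rejection_sample(r_t, 0.0, T, r_t(tg).max(), n0)
    locs = rejection_sample(r_s, 0.0, X, r_s(sg).max(), n0)
    return np.sort(times), locs

rng = np.random.default_rng(0)
# Simple sinusoidal log-rates standing in for GP samples.
t_ev, s_ev = simulate_lgcp_background(
    lambda t: 0.5 + 0.3 * np.sin(t), lambda s: 0.2 * np.cos(s),
    T=10.0, X=5.0, rng=rng)
```

In the full algorithm the log-rates are GP samples drawn with kernels kt and ks, and the spatial domain X is two-dimensional; the rejection-sampling logic is unchanged.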
Review 1:

Summary: The authors propose the Hawkes-LGCP point process model for timestamped events taking place at different locations in space. It uses a Hawkes process with triggering mechanisms for both time and space, allowing past events to encourage future events nearby in space and in the near future. It further incorporates a log-Gaussian Cox process (LGCP) prior for the Hawkes process background rate. The authors apply their model to analyze a data set of gunshots in Washington DC and find that one gunshot typically triggers 0.7 future shots within a roughly 10x8 meter area in the next 6 minutes. The main contributions I observe are as follows:
- Novel formulation for a self-exciting spatiotemporal point process by incorporating an LGCP prior for the background rate in a spatiotemporal Hawkes process.
- Efficient algorithms to sample from this Hawkes-LGCP model and to perform Bayesian inference.
- Interesting analysis on gunshot data that attempts to identify how much influence a gunshot has on future gunshots nearby.

Strengths and Weaknesses:

Strengths:
- LGCP prior for the background allows for a fully generative process that can be used to simulate from Hawkes processes with a variety of background rates, compared to kernel density estimation approaches (e.g. Loeffler & Flaxman, 2018).
- Model is very flexible and improves prediction accuracy, yet the parameters are still very interpretable, making it useful for exploratory analysis.

Weaknesses:
- Many (mostly minor) presentation issues scattered throughout the manuscript. It almost feels like it has never been proofread. I have tried to point out most of these below.
- Broader impacts of the proposed work should be discussed. This is particularly important because the authors analyze a gunshot dataset to draw potential conclusions about gunshots triggering other gunshots. Their conclusions may have implications for public policy, so limitations of their conclusions need to be made very clear.
Requested Changes:

Major issues:
- Section 3.2, 3rd last paragraph: "$N(t) - N(\mathbf{s}) = \infty$ for $t - \mathbf{s} < \infty$.": This is confusing because $\mathbf{s}$ has been used to represent space and is a vector. In this equation, I believe $s$ should be a scalar representing time, so I suggest a different variable name. Furthermore, I am not sure if this paragraph fits given the previous paragraph. The previous paragraph is expressing almost the same content but more specific to the exponential kernel and Gaussian form taken in this paper.
- Section 4.1, last paragraph: Provide more details (in an appendix) or at least a reference on how to sample from the LGCP by drawing an approximate realization then using rejection sampling.
- RMSE computation for Figure 4: clarify whether the RMSE is just over the (x,y) spatial locations or whether it includes error in timestamp as well.
- Either specify the rescaling that was performed on time and space in the gunshot data set so that the reader can better interpret the parameter estimates (e.g., how large of a range does $\sigma_x^2 = 9.26e^{-5}$ really correspond to?), or reverse scale the parameter estimates back into human-interpretable units, such as meters for distance and minutes for time.

Minor issues:
- All references that are published in conference proceedings seem to be missing the name of the proceedings, e.g. Linderman & Adams (2014), which was published in the ICML proceedings.
- Equation references seem incorrectly formatted, e.g. "Eq equation 7" and "Eq equation 8".
- Page 2, second last paragraph: "prior on t he background": t he -> the
- Section 3.1, first paragraph: "the appearance of future events. s": Remove extra "s".
- Section 3.1, first paragraph: "$|B(\mathbf s, \mathbf s)|$ is the Lebesgue measure of the ball $B(\mathbf s, \mathbf s)$ with radius $\mathbf s > 0$.": I suggest using different letters for the center of the ball and the radius, e.g.
$\mathbf s$ and $\Delta \mathbf s$, as in equation (1).
- Section 3.2, paragraph after equation (2): However, we The condition -> However, the condition
- In Section 3.4, the spatial coordinates now switch to $s$, not $\mathbf s$. Stay consistent in your notation.
- Section 4.2, second paragraph after equation (9): linearas -> linear as
- Section 4.2, second paragraph after equation (9): lgcp -> LGCP
- Figure 2 is blurry and should be created in vector format or higher-resolution raster.

Broader Impact Concerns: No broader impact statement is provided, and I think one is necessary. See my discussion in the Weaknesses section.

==================================================

Review 2:

Summary: The authors propose a variant of a spatio-temporal Hawkes process which combines a Hawkes process for inducing mutually exciting effects and a Gaussian process for capturing clustering effects in the background rate. Based on that they provide some synthetic experiments for demonstrating their model. They also test their model on the gunshot data. Their model outperforms simpler baseline point process models in terms of RMSE in the event predictions.

Strengths and Weaknesses:

**Strengths** In general, it is an easy-to-read paper. The visualizations in the paper also help the reader comprehend the functionality of the model. I also appreciate the source code the authors are providing. The provided experiments show promise for use of their model in real-world applications.

**Weaknesses** I will expound more below. In general: (1) some points in the presentation should be corrected. (2) both the introduction and the experiments should be expanded to include relevant work. (3) Some inference details (Metropolis-Hastings ratios), simulation details, and identifiability results should be explicitly stated to make the article self-contained (without having to resort to the source code for mathematical details).
Requested Changes:

**(1) Some references should be included.** It would strengthen the paper if the authors also compare with (mathematically or functionally) similar models [1] [2]. Especially [2] is a very similar model. The authors extend [2] to include the space domain while adding a Hawkes process intensity. However, the use of a GP in the intensity of a point process detracts from the mathematical novelty of the article. Moreover, one of the main motivations of the authors' work is the lack of expressive enough background rates. However, the intensity of [2] offers such an expressive background rate. Similarly, in [3] the background rate is a constant term multiplied by a probability term capable of modeling complex (inhibitory) effects. Finally, when the authors refer to the cluster-based Hawkes process simulation they should cite [5].

**(2) Some points in the presentation should be corrected.**

a. Section 3.1 should be renamed to regular point process definition. The definitions of this section refer to the broad class of point processes characterized by an intensity function and not to a Hawkes process. Moreover, it should be stated explicitly that $\lambda \ge 0$. There is also a typo: $B(s,s)$ should be $B(s, \Delta s)$ in the explanatory text below equation 1.

b. In Algorithm 1: the condition $G_l != 0$ does not appear correct, since $G_l$ in the last line of the body of the loop seems to increase (to include all the generated events so far). I understand that the authors want to convey: unless there are no more generated events in the time window [0,T]. However, they should accurately formulate this. Moreover, in the simulation steps ($C_i$, $O_i$), they should provide the exact distributions they are sampling from. Finally, the notation ${t,s} \sim \mu(t,s)$ is also not correct, since $\mu$ refers to the background rate and not a distribution. Probably the distribution is a Poisson process characterized by $\mu(s,t)$?
**(3) Some technical details are missing:** The authors should provide the exact MCMC updates similar to [6]. Regarding the identifiability results, the authors mention: "In purely temporal cases we have guarantees on the estimation and under certain conditions we have consistency results as well as identifiability results. However once we add the spatial component, the results do not necessarily extend. Therefore, we take here a separable form of a product of an exponential kernel in time and a Gaussian kernel in space." I think the authors should either cite accordingly or provide explicit derivations for supporting this statement.

**(4) In the synthetic experiments, the model should be further validated**

a. It would be helpful if the authors also provide goodness-of-fit tests (Kolmogorov-Smirnov), see [7], [3].

b. Clarification: in the synthetic examples, is the VAE-based GP approximation used? In either case, showing similar results for both cases could help justify the use of a VAE-based GP approximation in the point process context.

c. In Figure 4, it would help if the authors also provide the mean values of the inferred parameters. Ideally, a LGCP-Hawkes should correctly recover either a pure Hawkes (zero covariance in the GP?) or a LGCP (zero excitation coefficients). However, there is a discrepancy in the error in these cases.

d. A visualization of the GP background rate could provide further insights.

**(5) In the gunshot experiments, some details are missing**

a. Can the authors provide inference time?

b. I also think the authors should provide the log-likelihood (not only specific parameters) to demonstrate convergence of their inference algorithm.

c. The authors use truncated normal priors. Gamma or exponential priors are also commonly used. Demonstrating sensitivity w.r.t. the prior type would also strengthen the paper.

[1] Chen, Ricky TQ, Brandon Amos, and Maximilian Nickel. "Neural Spatio-temporal point processes."

[2] Malem-Shinitski N, Ojeda C, Opper M.
Flexible Temporal Point Processes Modeling with Nonlinear Hawkes Processes with Gaussian Processes Excitations and Inhibitions.

[3] Apostolopoulou, I., Linderman, S., Miller, K., & Dubrawski, A. (2019). Mutually regressive point processes. Advances in Neural Information Processing Systems, 32.

[4] Rubin, Izhak. Regular point processes and their detection. IEEE Transactions on Information Theory, 1972, 18.5: 547–557.

[5] A. G. Hawkes and D. Oakes, "A cluster process representation of a self-exciting process," Journal of Applied Probability, vol. 11, no. 3, pp. 493–503, 1974.

[6] Rasmussen, Jakob Gulddahl. Bayesian inference for Hawkes processes. Methodology and Computing in Applied Probability, 2013, 15.3: 623–642.

[7] Emery N Brown, Riccardo Barbieri, Valerie Ventura, Robert E Kass, and Loren M Frank. The time-rescaling theorem and its application to neural spike train data analysis. Neural computation, 14(2):325–346, 2002.

Broader Impact Concerns: no concerns

==================================================

Review 3:

Summary: In this submission, the authors proposed an improved spatiotemporal Hawkes process, in which the extrinsic triggering part is modeled by a log-Gaussian Cox process (LGCP). The authors developed an MCMC-based inference method to reduce the computational complexity of the learning process. A cluster generation-based simulation method is developed; it is a natural extension of the branching process-based simulation method for the classic Hawkes process. Experiments on synthetic and real-world spatiotemporal data show that, compared to Hawkes, Poisson, and LGCP, the proposed Hawkes-LGCP model has better performance.

Strengths and Weaknesses:

Strengths: (1) The idea and the description of the proposed method are clear. The derivation of the proposed method is easy to follow.

Weaknesses: (1) The novelty of the proposed method is limited.
The proposed Hawkes-LGCP model is a simple combination of two existing models (i.e., the LGCP model for the exogenous part and the Hawkes model for the endogenous part). The MCMC-based inference is well known as well. The simulation method is a straightforward application of the existing branching process-based simulation method. In summary, in the aspect of methodology, the contribution of this submission is incremental.

(2) The experimental part is too weak. (a) The baselines include Poisson, classic Hawkes, and LGCP. It is natural that the combination of Hawkes and LGCP (i.e., the proposed method) has better capacity and performance. However, whether the proposed method can outperform other spatiotemporal point process (STPP) models, e.g., nonlinear Hawkes, self-correcting process, and neural network-based PP models, is not verified. (b) The authors claimed that using MCMC-based inference can reduce the complexity of the learning process. However, the runtime of the proposed method is not tested. Compared to other learning methods, e.g., variational inference, the advantage of the proposed method is not convincing. (c) In the aspect of evaluation, only RMSE is applied. Other commonly-used metrics, e.g., the log-likelihood of testing data, are ignored.

(3) In the aspect of methodology, the proposed model did not consider high-dimensional features of events (which are common in practice). Additionally, for multi-variate STPP cases, especially high-dimensional ones, the efficiency of the MCMC-based learning method is questionable. The limitations of the current model and its learning algorithm are obvious and hard to solve.

Requested Changes: (1) According to the weaknesses above, more state-of-the-art methods should be considered as baselines, and more analytic experiments and more evaluation criteria should be considered. (2) The advantages of the proposed model and its MCMC-based inference should be demonstrated.
Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: I am happy to recommend acceptance of this paper, concurring with the opinion of the three reviewers. The paper presents a new approach for point process modeling based on spatiotemporal Hawkes processes with an underlying log-Gaussian Cox process. While this does combine well-studied components, the combination proves clearly useful and the proposed inference algorithm advances the literature in this area. The changes made in response to the reviewers have addressed their remaining concerns regarding run-time and comparisons, and the re-write has improved the overall presentation. The reviewers still believe it would be worthwhile comparing against Chen et al., since some of the datasets they use seem comparable in size with the gunshot data used in this paper. However, while I encourage the authors to include this, I do not think it should be a prerequisite for acceptance, since the paper does include a comparison with the more recent Zhou et al. paper (which itself shows improvement over the Chen et al. paper).

==================================================
# Bandits With Stochastic Corruption: Lower Bounds On Regret And Robust Optimistic Algorithms

Odalric-Ambrym Maillard odalric.maillard@inria.fr
Timothée Mathieu timothee.mathieu@inria.fr
Debabrota Basu debabrota.basu@inria.fr
Université de Lille, Inria, CNRS, Centrale Lille UMR 9189 - CRIStAL, F-59000 Lille, France

Reviewed on OpenReview: *https://openreview.net/forum?id=oGIR0ic3jU*

## Abstract

We study the Bandits with Stochastic Corruption problem, i.e. a stochastic multi-armed bandit problem with k unknown reward distributions, which are heavy-tailed and corrupted by a history-independent stochastic adversary or Nature. To be specific, the reward obtained by playing an arm comes from the corresponding heavy-tailed reward distribution with probability 1 − ε ∈ (0.5, 1] and from an arbitrary corruption distribution of unbounded support with probability ε ∈ [0, 0.5). First, we provide *a problem-dependent lower bound on the regret* of any corrupted bandit algorithm. The lower bounds indicate that the Bandits with Stochastic Corruption problem is harder than the classical stochastic bandit problem with sub-Gaussian or heavy-tailed rewards. Following that, we propose a novel UCB-type algorithm for Bandits with Stochastic Corruption, namely HuberUCB, that builds on Huber's estimator for robust mean estimation. Leveraging a novel concentration inequality for Huber's estimator, we prove that HuberUCB achieves a near-optimal regret upper bound. Since computing Huber's estimator has quadratic complexity, we further introduce a sequential version of Huber's estimator that exhibits linear complexity. We leverage this sequential estimator to design SeqHuberUCB, which enjoys similar regret guarantees while reducing the computational burden. Finally, we experimentally illustrate the efficiency of HuberUCB and SeqHuberUCB in solving Bandits with Stochastic Corruption for different reward distributions and different levels of corruption.
## 1 Introduction

The multi-armed bandit problem is an archetypal setting to study sequential decision-making under incomplete information (Lattimore & Szepesvári, 2020). In the classical setting of stochastic multi-armed bandits, the decision maker or agent has access to k ∈ N unknown reward distributions or arms. At every step, the agent plays an arm and obtains a reward. The goal of the agent is to maximize the expected total reward accumulated by a given horizon T ∈ N. In this paper, we are interested in a challenging extension of the classical multi-armed bandit problem, where the reward at each step may be corrupted by Nature, which is a stationary mechanism independent of the agent's decisions and observations. This setting is often referred to as *Corrupted Bandits*. Specifically, we extend the existing studies of corrupted bandits (Lykouris et al., 2018; Bogunovic et al., 2020; Kapoor et al., 2019) to the more general case, where the 'true' reward distribution might be heavy-tailed (i.e. with a finite number of finite moments) and the corruption can be unbounded.

Bandits with Stochastic Corruption. Specifically, we model a corrupted reward distribution as (1 − ε)P + εH, where P is the distribution of inliers with a finite variance, H is the distribution of outliers with possibly unbounded support, and ε ∈ [0, 1/2) is the proportion of outliers. Thus, in the corresponding stochastic bandit setting, an agent has access to k arms of corrupted reward distributions {(1 − ε)Pi + εHi}_{i=1}^{k}. Here, the Pi's are uncorrupted reward distributions with heavy tails and bounded variances, and the Hi's are corruption distributions with possibly unbounded support. The goal of the agent is to maximize the expected total reward accumulated, oblivious to the corruptions. This is equivalent to considering a setting where at every step Nature flips a coin with success probability ε: the agent obtains a corrupted reward if Nature obtains 1, and an uncorrupted reward otherwise.
We call this setting *Bandits with Stochastic Corruption* as the corruption introduced in each step does not depend on the present or previous choices of arms and observed rewards. Our setting encompasses both heavy-tailed rewards and unbounded corruptions. We formally define the setting and the corresponding notion of regret in Section 3. Though this article primarily focuses on the theoretical understanding of the interplay between corruption, heavy-tailedness and decision making, we find it relevant to point to a few applications to which this setting may apply. Note that heavy-tailed distributions are naturally motivated by applications in economics and financial markets (Agrawal et al., 2021), while corrupted distributions are naturally motivated by robustness issues in life sciences and applications in games or security, or when dealing with a misspecified model (Hotho, 2022; Golrezaei et al., 2021; Singh & Upadhyaya, 2012). Hence the combination of corrupted and heavy-tailed distributions naturally appears at the intersection of these classical application domains.

Bandits with Stochastic Corruption is different from the adversarial bandit setting (Auer et al., 2002). The adversarial bandit assumes the existence of a non-stochastic adversary that can return at each step the worst-case reward to the agent depending on its history of choices. Incorporating corruptions in this setting, Lykouris et al. (2018) and Bogunovic et al. (2020) consider settings where the rewards can be corrupted by a history-dependent adversary, but the total amount of corruption and also the corruptions at each step are bounded. In contrast to the adversarial corruption setting in the literature, we consider a non-adversarial proportion of corruptions (ε ∈ [0, 1/2)) at each step, which are stochastically generated from unbounded corruption distributions {Hi}_{i=1}^{k}. To the best of our knowledge, only Kapoor et al.
(2019) have studied a similar non-adversarial corruption setting with a history-independent proportion of corruption at each step for regret minimization. But they assume that the probable corruptions at each step are bounded and that the uncorrupted rewards are sub-Gaussian. On the other hand, Altschuler et al. (2019) study the same stochastic unbounded corruption that we consider, but they focus on best arm identification with the median of the arms as the target, which makes it a different problem. Hence, we observe that there is a gap in the literature in studying unbounded stochastic corruption for bandits with possibly heavy-tailed rewards, and this article aims to fill this gap. Specifically, we aim to deal with unbounded corruption and heavy tails simultaneously, which requires us to develop a novel sensitivity analysis of the robust estimator in lieu of a worst-case (adversarial bandits) analysis.

**Our Contributions.** Specifically, in this paper, we aim to investigate three main questions:

1. Is the setting of Bandits with Stochastic Corruption with unbounded corruptions and heavy tails fundamentally harder (in terms of the regret lower bound) than the classical sub-Gaussian and uncorrupted bandit setting?
2. Is it possible to design an *efficient and robust algorithm* that achieves an order-optimal performance (*logarithmic* regret) in the stochastic corruption setting?
3. Are robust bandit algorithms *efficient in practice*?

These questions have led us to the following contributions:

1. **Hardness of Bandits with Stochastic Corruption with unbounded corruptions and heavy tails.** In order to understand the fundamental hardness of the proposed setting, we use a suitable notion of regret (Kapoor et al., 2019), denoted by Rn (Definition 1), that extends the traditional pseudo-regret (Lattimore & Szepesvári, 2020) to the corrupted setting.
Then, in Section 4, we derive lower bounds on regret that reveal the increased difficulties of corrupted bandits with heavy tails in comparison with the classical non-corrupted and light-tailed bandits.

    (a) In the heavy-tailed regime (3), we show that even when the suboptimality gap ∆i¹ for arm i is large, the regret increases with ∆i because of the difficulty of distinguishing between two arms when the rewards are heavy-tailed.

    (b) Our lower bounds indicate that when ∆i is large, logarithmic regret is asymptotically achievable, but the hardness depends on the corruption proportion ε, the variance of Pi, denoted by σi², and the suboptimality gap ∆i. Specifically, if the ∆i/σi's are small, i.e. we are in the low distinguishability/high variance regime, the hardness is dictated by σi²/∆i,ε². Here, ∆i,ε ≜ ∆i(1 − ε) − 2εσi is the '*corrupted* suboptimality gap' that replaces the traditional suboptimality gap ∆i in the lower bound for non-corrupted and light-tailed bandits (Lai & Robbins, 1985). Since ∆i,ε ≤ ∆i, it is harder to distinguish the optimal and suboptimal arms in the corrupted settings. The two gaps coincide when the corruption proportion ε = 0. In this article, we exclude the case ∆i,ε ≤ 0, as it essentially corresponds to the case when the corruption is large enough to render the reward distributions hard to distinguish. Hence, we limit our study to the case ∆i,ε > 0.

Additionally, our analysis partially addresses an open problem in heavy-tailed bandits. Works on heavy-tailed bandits (Bubeck et al., 2013; Agrawal et al., 2021) rely on the assumption that a bound on the (1+η)-moment, i.e. E[|X|^{1+η}], is known for some η > 0. We do not assume such a restrictive bound, as knowing a bound on E[|X|^{1+η}] implies the knowledge of a bound on the differences between the means of the rewards of the different arms. Instead, we assume that a centered moment, specifically the variance, is bounded by a known constant.
Thus, we partly address the open problem of Agrawal et al. (2021) by relaxing the classical bounded (1+η)-moment assumption with the bounded centered moment one, for η ≥ 1.

2. **Robust and Efficient Algorithm Design.** In Section 5, we propose a robust algorithm, called HuberUCB, that leverages Huber's estimator for robust mean estimation using the knowledge of ε and a bound on the variances of the inliers. We derive a novel concentration inequality on the deviation of the empirical Huber's estimate that allows us to design robust and tight confidence intervals for HuberUCB. In Theorem 3, we show that HuberUCB achieves logarithmic regret, and also the optimal rate when the sub-optimality gap ∆ is not too large. We show that for HuberUCB, Rn can be decomposed according to the respective values of ∆i and σi:

$$\Re_{n}\leq\underbrace{\mathcal{O}\left(\sum_{i:\Delta_{i}>\sigma_{i}}\log(n)\sigma_{i}\right)}_{\mathrm{Error~due~to~heavy~tails}}+\underbrace{\mathcal{O}\left(\sum_{i:\Delta_{i}\leq\sigma_{i}}\log(n)\Delta_{i}\frac{\sigma_{i}^{2}}{\overline{\Delta}_{i,\varepsilon}^{2}}\right)}_{\sigma_{i}^{2}/\Delta_{i}\mathrm{~error~with~corrupted~suboptimality~gaps}}.$$

Thus, our upper bound allows us to segregate the errors due to heavy tails, corruption, and corruption-correction with heavy tails. The error incurred by HuberUCB can be directly compared to the lower bounds obtained in Section 4 and interpreted in both the high distinguishability regime and the low distinguishability regime mentioned previously.

3. **Empirically Efficient and Robust Performance.** To the best of our knowledge, we present the first robust mean estimator that can be computed in linear time in a sequential setting (Section 6). Existing robust mean estimators, such as Huber's estimator, need to be recomputed at each iteration using all the data, which implies a quadratic complexity.
Our proposal recomputes Huber's estimator only when the iteration number is a power of 2 and computes a sequential approximation on the other iterations. We use the sequential Huber's estimator to propose SeqHuberUCB. We theoretically show that SeqHuberUCB achieves a similar order of regret as HuberUCB, while being computationally efficient. In Section 7, we also experimentally illustrate that HuberUCB and SeqHuberUCB achieve the claimed performances for corrupted Gaussian and Pareto environments.

We further elaborate on the novelty of our results and position them in the existing literature in Section 2. For brevity, we defer the detailed proofs and the parameter tuning to the Appendix.

¹The suboptimality gap of an arm is the difference in mean rewards of an optimal arm and that arm. In our context, the suboptimality gap refers to the gap between the inlier distributions, which can be compared to the suboptimality gap in the heavy-tail setting, as opposed to the corrupted gap that we define later.

## 2 Related Work

Due to the generality of our setting, this work either extends or relates to the existing approaches in both the heavy-tailed and corrupted bandits literature. While designing the algorithm, we further leverage the literature on robust mean estimation. In this section, we connect to these three streams of literature. Table 1 summarizes the previous works and positions our work in lieu.

| Algorithms | Settings | Corruption | Type of outliers | Heavy-tailed | Adversarial/Stochastic |
|---|---|---|---|---|---|
| Our work | MAB | Yes | Unbounded | Yes | Stochastic |
| Bubeck et al. (2013); Agrawal et al. (2021); Lee et al. (2020) | MAB | No | x | Yes | Stochastic |
| Lykouris et al. (2018) | MAB | Yes | Bounded | No | Stochastic |
| Bogunovic et al. (2020) | GP Bandits | Yes | Bounded | No | Adversarial |
| Kapoor et al. (2019) | MAB & Linear Bandits | Yes | Bounded | No | Stochastic |
| Medina & Yang (2016); Shao et al. (2018) | Linear Bandits | No | x | Yes | Stochastic |
| Bouneffouf (2021) | Contextual Bandits | context only | Unbounded | No | Stochastic |
| Agarwal et al. (2019) | Control | Yes | Bounded | x | Adversarial |
| Hajiesmaili et al. (2020); Auer et al. (2002); Pogodin & Lattimore (2020) | MAB | Yes | Bounded | x | Adversarial |

Table 1: Comparison of existing results on Corrupted and Heavy-tailed Bandits.

**Heavy-tailed bandits.** Bubeck et al. (2013) were among the first to study robustness in multi-armed bandits by studying heavy-tailed rewards. They use robust mean estimators to propose the RobustUCB algorithms. They show that under assumptions on the raw moments of the reward distributions, a logarithmic regret is achievable. This sprouted research works leading to either tighter rates of convergence (Lee et al., 2020; Agrawal et al., 2021), or algorithms for structured environments (Medina & Yang, 2016; Shao et al., 2018). Our article uses Huber's estimator, which was already discussed in Bubeck et al. (2013). However, the parameters chosen in Bubeck et al. (2013) were suited for heavy-tailed distributions, and thus render their proposed estimator non-robust to corruption. We address this gap in this work.

**Corrupted bandits.** The existing works on corrupted bandits (Lykouris et al., 2018; Bogunovic et al., 2020; Kapoor et al., 2019) are restricted to bounded corruption. When dealing with bounded corruption, one can use techniques similar to adversarial bandits (Auer et al., 2002) to deal with an adversary that cannot corrupt an arm too much.
The algorithms and proof techniques are fundamentally different in our article, because the stochastic (or non-adversarial) corruption by Nature allows us to learn about the inlier distribution on the condition that the corresponding estimators are robust. Thus, our bounds retain the problem-dependent regret, while successfully handling possibly unbounded corruptions with robust estimators.

**Robust mean estimation.** Our algorithm design leverages the rich literature on robust mean estimation, specifically the influence function representation of Huber's estimator. The problem of robust mean estimation in a corrupted and heavy-tailed setting stems from the work of Huber (Huber, 1964; 2004). Recently, in tandem with machine learning, there have been numerous advances both in the heavy-tailed (Devroye et al., 2016; Catoni, 2012; Minsker, 2019) and in the corrupted settings (Lecué & Lerasle, 2020; Minsker & Ndaoud, 2021; Prasad et al., 2019; 2020; Depersin & Lecué, 2022; Lerasle et al., 2019). Our work, specifically the novel concentration inequality for Huber's estimator, enriches this line of work with a result of parallel interest. We also introduce a sequential version of Huber's estimator achieving linear complexity.

## 3 Bandits With Stochastic Corruption: Problem Formulation

In this section, we present the corrupted bandits setting that we study, together with the corresponding notion of regret. Similarly to the classical bandit setup, the regret decomposition lemma allows us to focus on the expected number of pulls of a suboptimal arm as the central quantity to control from an algorithmic standpoint.

**Notations.** We denote by P the set of probability distributions on the real line R, and by P[q](M) ≜ {P ∈ P : E_P[|X − E_P[X]|^q] ≤ M} the set of distributions with q-th central moment, q ≥ 1, bounded by M > 0. 1{A} is the indicator function for the event A being true. We denote the mean of a distribution Pi as µi ≜ E_{Pi}[X].
For any D ⊂ P, we denote by D(ε) ≜ {(1 − ε)P + εH : P ∈ D, H ∈ P} the set of corrupted distributions from D.

**Problem Formulation.** In the setting of *Bandits with Stochastic Corruption*, a bandit algorithm faces an environment with k ∈ N reward distributions of the form ν^ε = (ν^ε_1, . . . , ν^ε_k), where ν^ε_i = (1 − ε)Pi + εHi denotes the distribution of rewards of arm i. Here, Pi and Hi are real-valued distributions, and ε is a mixture parameter assumed to be in [0, 1/2), that is, Pi is given more weight than Hi in the mixture of arm i. For this reason, the {Pi} are called the *inlier* distributions and the {Hi} the *outlier* distributions. We assume the inlier distributions have at least 2 finite moments, that is, P1, . . . , Pk ∈ P[2](M) for some M > 0, while no restriction is put on the outlier distributions, that is, H1, . . . , Hk ∈ P. For this reason, we also refer to the outlier distributions as the *corrupted* distributions, and to the inlier distributions as the *non-corrupted* ones. ε is called the level of corruption. For convenience, we further denote by ν, in lieu of ν^0 = (P1, . . . , Pk), the reward distributions of the non-corrupted environment.

The game proceeds as follows: at each step t ∈ {0, . . . , n}, the agent policy π interacts with the corrupted environment by choosing an arm At and obtaining a stochastically corrupted reward. To generate this reward, Nature first draws a random variable Ct ∈ {0, 1} from a Bernoulli distribution with mean ε ∈ [0, 1/2). If Ct = 1, it generates a corrupted reward Zt from the distribution H_{At} corresponding to the chosen arm At ∈ {1, . . . , k}. Otherwise, it generates a non-corrupted reward X′t from the distribution P_{At}. More formally, Nature generates the reward Xt = X′t 1{Ct = 0} + Zt 1{Ct = 1}, which the learner observes. The learner leverages this observation to choose another arm at the next step in order to maximize the total cumulative reward obtained after n steps. In Algorithm 1, we outline a pseudocode of this framework.
**Algorithm 1** Bandits with Stochastic Corruption

Require: ε ∈ [0, 1/2), q ≥ 2 and M > 0
1: **Input:** uncorrupted reward distributions P1, . . . , Pk ∈ P[q](M) and corrupted reward distributions H1, . . . , Hk ∈ P
2: **for** t = 1, . . . , n **do**
3:     Player plays an arm At ∈ {1, . . . , k}
4:     Nature draws a Bernoulli Ct ∼ Ber(ε)
5:     Generate a corrupted reward Zt ∼ H_{At} and an uncorrupted reward X′t ∼ P_{At}
6:     Player observes the reward Xt = X′t 1{Ct = 0} + Zt 1{Ct = 1}
7: **end for**

**Remark 1 (Non-adversarial corruption)** *In the setting of Bandits with Stochastic Corruption, we consider that the reward received by the learner is corrupted when Ct = 1 and non-corrupted otherwise. Since the law of Ct is a Bernoulli Ber(ε), the corruption is stochastic and independent of the other variables. This is in contrast with* adversarial *setups, where the corruption is typically chosen by an opponent and possibly depends on other variables. Assuming a non-adversarial behavior of Nature seems more justified than assuming an adversarial setup in applications such as agriculture, where corruption is often due to external disturbances, such as the appearance of pests or weather hazards, whose occurrences are typically non-adversarial. When corruption happens, we do not put a restriction on the level of corruption. For example, we can imagine a pest outburst or hail that may have a huge impact on a crop but does not occur adversarially.*

**Remark 2 (Weak assumption on inliers)** *Let us highlight that we do not assume sub-Gaussian behavior for the inlier distributions Pi. Instead, we consider only a weak moment assumption, i.e. the inlier distributions Pi have a finite variance. Thus, our setting is capable of modeling both the moderately heavy-tailed and the corrupted settings. We highlight this generality in the regret lower bounds and the empirical performance analysis in Sections 4 and 7.*

**Corrupted regret.**
In this setting, we observe that a corrupted reward distribution ((1 − ε)Pi + εHi) might not have a finite mean, unlike the true Pi's. Thus, the classical notion of regret with respect to the corrupted reward distributions might fail to quantify the goodness of the policy and its immunity to corruption while learning. Instead, in this setup, the natural notion of expected regret is measured with respect to the means of the non-corrupted environment ν specified by {Pi}.

**Definition 1 (Corrupted Regret)** *In Bandits with Stochastic Corruption, we define the regret of a learning algorithm playing strategy π after n steps of interaction with the environment ν^ε against constantly playing an optimal arm ⋆ ∈ arg max_i E_{Pi}[X′] as*

$$\Re_{n}(\pi,\nu^{\varepsilon})\triangleq n\operatorname*{max}_{i}\mathbb{E}_{P_{i}}[X^{\prime}]-\mathbb{E}\left[\sum_{t=1}^{n}X_{t}^{\prime}\right].\qquad\text{(Corrupted regret)}$$

We call this quantity the pseudo-regret under corrupted observations, or, for short, the *corrupted regret*. The expectation is crucially taken on X′t ∼ P_{At} and not on Xt; the expectation on the right also incorporates possible randomization from the learner. Thus, (Corrupted regret) quantifies the loss in the rewards accumulated by policy π from the inliers, while learning only from the *corrupted rewards* and also not knowing the arm with the best *true reward* distribution. Thus, this definition of corrupted regret quantifies the rate of learning of a bandit algorithm as regret does for non-corrupted bandits. A similar notion of regret is considered in Kapoor et al. (2019), which deals with bounded stochastic corruptions. Due to the non-adversarial nature of the corruption, the regret can be decomposed, as in classical stochastic bandits, into the expected numbers of pulls of the suboptimal arms E_{ν^ε}[Ti(n)], which allows us to focus the regret analysis on bounding these terms.
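As a tiny numerical illustration of this decomposition (made-up numbers, not from the paper), the two ways of writing the corrupted regret agree for any sequence of pulls:

```python
# Two arms with inlier means mu = [1.0, 0.5]; suppose arm 2 was pulled 30 times
# out of n = 100. The decomposition says the corrupted regret equals
# sum_i Delta_i * T_i(n).
mu = [1.0, 0.5]
pulls = [70, 30]                        # T_i(n) for i = 1, 2
n = sum(pulls)
mu_star = max(mu)

# Definition 1: n * max_i E[X'] - E[sum_t X'_t], in expectation over the pulls made.
regret_def = n * mu_star - sum(t * m for t, m in zip(pulls, mu))

# Decomposition: sum_i Delta_i * T_i(n).
gaps = [mu_star - m for m in mu]
regret_decomp = sum(g * t for g, t in zip(gaps, pulls))
print(regret_def, regret_decomp)  # both print 15.0
```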
**Lemma 1 (Decomposition of corrupted regret)** *In a corrupted environment ν^ε, the regret writes*

$$\Re_{n}(\pi,\nu^{\varepsilon})=\sum_{i=1}^{k}\Delta_{i}\,\mathbb{E}_{\nu^{\varepsilon}}\left[T_{i}(n)\right],$$

*where Ti(n) ≜ Σ_{t=1}^{n} 1{At = i} denotes the number of pulls of arm i until time n, and the problem-dependent quantity ∆i ≜ max_j µj − µi is called the suboptimality gap of arm i.*

## 4 Lower Bounds For Uniformly Good Policies Under Heavy-Tails And Corruptions

In order to derive the lower bounds, it is classical to consider *uniformly good* policies on some family of environments (Lai & Robbins, 1985). We introduce below the corresponding notion for corrupted environments with the set of laws D^{⊗k} = D1 ⊗ · · · ⊗ Dk, where Di ⊂ P for each i ∈ {1, . . . , k}.

**Definition 2 (Robust uniformly good policies)** *Let D^{⊗k}(ε) = D1(ε) ⊗ · · · ⊗ Dk(ε) be a family of corrupted bandit environments on R. For a corrupted environment ν^ε ∈ D^{⊗k}(ε) with corresponding uncorrupted environment ν, let µi(ν) denote the mean reward of arm i in the uncorrupted setting and µ⋆(ν) ≜ max_a µa(ν) denote the maximum mean reward. A policy π is uniformly good on D^{⊗k}(ε) if for any α ∈ (0, 1],*

∀ν ∈ D^{⊗k}(ε), ∀i ∈ {1, . . . , k}, µi(ν) < µ⋆(ν) ⇒ E_{ν^ε}[Ti(n)] = o(n^α).

Since the corrupted setup is a special case of stochastic bandits, a lower bound can be immediately recovered with classical results, such as Lemma 2 below, which is a version of the change of measure argument (Burnetas & Katehakis, 1997) and can be found in (Maillard, 2019, Lemma 3.4).

**Lemma 2 (Lower bound for uniformly good policies)** *Let D^{⊗k} = D1 ⊗ · · · ⊗ Dk, where Di ⊂ P for each i ∈ {1, . . . , k}, and let ν ∈ D^{⊗k}.*
*Then, any uniformly good policy on D^{⊗k} must pull arms such that*

$$\forall i\in\{1,\ldots,k\},\,\mu(\nu_{i})\leq\mu_{\star}(\nu)\quad\Rightarrow\quad\liminf_{n\to\infty}\frac{\mathbb{E}_{\nu}[T_{i}(n)]}{\log(n)}\geq\frac{1}{\mathcal{K}_{i}(\nu_{i},\mu(P_{\star}))},$$

*with K_i(ν_i, µ(P⋆)) = inf{D_KL(ν_i, P) : P ∈ Di, µ(P) ≥ µ(P⋆)}, where D_KL denotes the Kullback–Leibler divergence between distributions.*

Lemma 2 is used in the traditional bandit literature to obtain a lower bound on the regret using the decomposition of regret from Lemma 1. In our setting, however, the lower bound is more complex, as it involves an optimization over P[2](M), the set of distributions with a variance bounded by M > 0, and this set is not convex. Indeed, consider for example a convex combination of two Dirac distributions: both distributions have variance 0, but depending on where the Dirac distributions are located, the variance of the convex combination is arbitrary. The lower bound also involves an optimization in both the first and the second argument of the KL divergence, because we consider the worst-case corruption in both the optimal arm distribution ν⋆ and a non-optimal arm distribution νi. In this section, we do not solve these problems; instead, we propose lower bounds derived from the study of a specific class of heavy-tailed distributions on the one hand (Lemma 3) and of a specific class of corrupted (but not heavy-tailed) distributions on the other hand (Lemma 4).
Using the fact that K_i(ν_i, µ(P⋆)) is an infimum, which is smaller than the KL divergence obtained for the particular choice P = P⋆, Lemma 2 induces the following weaker lower bound:

$$\forall i\in\{1,\ldots,k\},\,\mu(\nu_{i})\leq\mu_{\star}(\nu)\quad\Rightarrow\quad\lim_{n\to\infty}\frac{\mathbb{E}_{\nu}[T_{i}(n)]}{\log(n)}\geq\frac{1}{D_{\text{KL}}(\nu_{i},P_{\star})}.\tag{1}$$

Equation (1) shows that *it is sufficient to have an upper bound on the D_KL-divergence of the reward distributions interacting with the policy to get a lower bound on the number of pulls of a suboptimal arm*. In order to bound the D_KL-divergence, we separately focus on two families of reward distributions, namely Student's distributions without corruption (Lemma 3) and corrupted Bernoulli distributions (Lemma 4), which reflect the hardness due to heavy tails and corruptions, respectively. Applying Lemma 3 and Lemma 4 in Equation (1) yields the final regret lower bound in Theorem 1.

**Shifted Student's distributions without corruption.** To obtain a lower bound in the heavy-tailed case, we use shifted Student distributions. Student distributions are well adapted because they exhibit a finite number of finite moments, which makes them heavy-tailed, and we can easily change the mean of a Student distribution by adding a shift without changing its shape parameter d. We denote by Td the set of shifted Student distributions with d degrees of freedom,

$$\mathcal{T}_{d}=\left\{P\in\mathcal{P}:\exists\mu\in\mathbb{R},\ P\text{ has density }p:t\in\mathbb{R}\mapsto\frac{\Gamma(\frac{d+1}{2})}{\Gamma(d/2)\sqrt{d\pi}}\left(1+\frac{(t-\mu)^{2}}{d}\right)^{-\frac{d+1}{2}}\right\}.$$

**Lemma 3 (Control of KL-divergence for Heavy-tails)** *Let P1, P2 be two shifted Student distributions with d > 1 degrees of freedom with E_{P1}[X] = 0 and E_{P2}[X] = ∆ > 0.*
*Then,*

$$D_{\text{KL}}(P_{1},P_{2})\leq\begin{cases}\frac{3^{d-1}(d+1)^{2}\Delta^{2}}{5\sqrt{d}}&\text{if }\Delta\leq1\,,\\ (d+1)\log{(\Delta)}+\log{\left(3^{d}\frac{(d+1)^{2}}{5\sqrt{d}}\right)}&\text{if }\Delta>1\,.\end{cases}\tag{2}$$

**Corrupted Bernoulli distributions.** We denote by Bp(ε) = {(1 − ε)Ber(p) + εBer(p′) : p′ ∈ [0, 1]} the corrupted neighborhood of the Bernoulli distribution Ber(p). Let P0 = Ber(p0) and P1 = Ber(p1) for some p0, p1 ∈ (0, 1) be two Bernoulli distributions. We corrupt both P0 and P1 with a proportion ε > 0 to get Q0 ∈ Bp0(ε) and Q1 ∈ Bp1(ε). We obtain Lemma 4, which provides three bounds on D_KL(Q0, Q1) as functions of the sub-optimality gap ∆ ≜ E_{P0}[X] − E_{P1}[X], the variance σ² ≜ Var_{P0}(X) = Var_{P1}(X), and the corruption proportion ε.

Figure 1: Visualizing the KL divergence and the corresponding bounds in Lemma 4 for σ = 1 and ε = 0.2 (the x axis is in log scale).

**Lemma 4 (Control of KL-divergence for Corruptions)** *Let P0 = Ber(p0) and P1 = Ber(p1) be two Bernoulli probability distributions with means p0, p1 ∈ (0, 1), such that ∆ = E_{P0}[X] − E_{P1}[X] = p0 − p1, σ² = Var_{P0}(X) = Var_{P1}(X) > 0, and ∆ ≥ 2σε/(1 − ε). Then, there exist Q0 ∈ Bp0(ε) and Q1 ∈ Bp1(ε), with corrupted suboptimality gap ∆ε = E_{Q0}[X] − E_{Q1}[X] = ∆(1 − ε) − 2εσ, such that*

- **Uniform Bound.** *Without further assumptions on ∆ and σ, we have*

$$D_{\rm KL}(Q_{0},Q_{1})\leq(1-2\varepsilon)\log\left(1+\frac{1-2\varepsilon}{\varepsilon}\right).\tag{3}$$

- **High Distinguishability/Low Variance Regime.** *If $\frac{2\sigma\sqrt{\varepsilon}}{1-2\varepsilon}<\Delta<2\sigma$, we get*

$$D_{\rm KL}(Q_{0},Q_{1})\leq\frac{\overline{\Delta}_{\varepsilon}}{2\sigma}\log\left(1+\frac{\overline{\Delta}_{\varepsilon}}{2\sigma-\overline{\Delta}_{\varepsilon}}\right).\tag{4}$$

- **Low Distinguishability/High Variance Regime.** *If $\Delta\leq\frac{2\sigma\sqrt{\varepsilon}}{1-2\varepsilon}$, there exist ε′ ≤ ε and Q′0 ∈ Bp0(ε′), Q′1 ∈ Bp1(ε′) such that D_KL(Q′0, Q′1) = 0.*
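The quantities appearing in Lemma 4 are straightforward to tabulate. As an illustrative sketch (made-up parameter values, not code from the paper), with σ = 1 and ε = 0.2 as in Figure 1:

```python
import math

def corrupted_gap(delta, sigma, eps):
    """Corrupted suboptimality gap from Lemma 4: Delta_eps = Delta(1 - eps) - 2 eps sigma."""
    return delta * (1 - eps) - 2 * eps * sigma

def kl_uniform_bound(eps):
    """Uniform bound (3): a function of the corruption proportion only."""
    return (1 - 2 * eps) * math.log(1 + (1 - 2 * eps) / eps)

def kl_high_regime_bound(delta, sigma, eps):
    """High distinguishability / low variance bound (4)."""
    g = corrupted_gap(delta, sigma, eps)
    return g / (2 * sigma) * math.log(1 + g / (2 * sigma - g))

sigma, eps = 1.0, 0.2
delta = 1.6  # a gap strictly between the two regime endpoints for these values
gap = corrupted_gap(delta, sigma, eps)  # 1.6 * 0.8 - 0.4 = 0.88 < delta
print(gap, kl_high_regime_bound(delta, sigma, eps), kl_uniform_bound(eps))
```

As the lemma suggests, the corrupted gap shrinks relative to ∆ as ε grows, and the regime-specific bound (4) is tighter than the uniform bound (3).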
Note that the *corrupted* sub-optimality gap ∆ε should not be confused with the sub-optimality gap ∆. Note also that, due to the assumption on the variance in Lemma 4, we must have p0 = 1 − p1 ≥ 1/2. The specific pairs of distributions Q0, Q1 and Q′0, Q′1 can be found in the proof of the lemma, Section B.1.3.

**Consequences of Lemma 4.** We illustrate the bounds of Lemma 4 in Figure 1. The three upper bounds on the KL-divergence of corrupted Bernoullis provide some insights regarding the impact of corruption.

1. *Three Regimes of Corruption:* We observe that, depending on ∆/σ, we can categorize the corrupted environment into three regimes. For ∆/σ ∈ [2, +∞), the KL-divergence between the corrupted distributions Q0 and Q1 is upper bounded by a function of only the corruption proportion ε and is independent of the uncorrupted distributions. For ∆/σ ∈ (2√ε/(1 − 2ε), 2), the distinguishability of the corrupted distributions depends on the distinguishability of the uncorrupted distributions and also on the corruption level. We call this the High Distinguishability/Low Variance Regime. For ∆/σ ∈ [0, 2√ε/(1 − 2ε)], the KL-divergence can always go to zero. We refer to this setting as the Low Distinguishability/High Variance Regime.

2. *High Distinguishability/Low Variance Regime:* In Lemma 4, we observe that the effective gap that dictates the hardness of distinguishing the optimal arm from the closest suboptimal arm has shifted from the uncorrupted gap ∆ to a *corrupted suboptimality gap* ∆ε ≜ ∆(1 − ε) − 2εσ.

3. *Low Distinguishability/High Variance Regime:* We notice also that there is a limit on ∆ below which the corruption can make the two distributions Q0 and Q1 indistinguishable; this is a general phenomenon in the setting of testing in corruption neighborhoods (Huber, 1965).

4. *Feasibility of the Bandits with Stochastic Corruption problem and ∆ε:* In Lemma 4, we have assumed ∆ε to be positive. If ∆ε is negative or zero, i.e.
∆/(2σ) ≤ ε/(1 − ε), we cannot achieve better than linear regret in the corresponding Bandits with Stochastic Corruption problem. Lemma 4 additionally shows that we have to concede linear regret even when ∆ε is positive but ∆/(2σ) ≤ √ε/(1 − 2ε).

**From KL Upper Bounds to Regret Lower Bounds.** Substituting the results of Lemma 3 and Lemma 4 in Equation (1) yields the lower bounds on the regret of any uniformly good policy in the heavy-tailed and corrupted settings, where the reward distributions belong either to the class of (uncorrupted) Student distributions or to the class of corrupted Bernoulli distributions, respectively. We denote

$${\mathfrak{D}}_{T_{2}}^{\otimes k}\triangleq{\mathcal{T}}_{2}\otimes\cdots\otimes{\mathcal{T}}_{2},$$

where T2 is the set of Student distributions with more than 2 degrees of freedom. We also define

$${\mathfrak{D}}_{{\mathcal{B}}(\varepsilon)}^{\otimes k}\triangleq{\mathcal{B}}(\varepsilon)\otimes\cdots\otimes{\mathcal{B}}(\varepsilon),\tag{5}$$

where B(ε) = {(1 − ε)Ber(p) + εBer(p′) : p, p′ ∈ [0, 1]} is the set of corrupted Bernoulli distributions.

**Theorem 1 (Lower bound for heavy-tailed and corrupted bandits)** *Let i be a suboptimal arm such that E_{Pi}[X] ≤ max_a E_{Pa}[X], denote ∆i ≜ max_a E_{Pa}[X] − E_{Pi}[X] and ∆i,ε ≜ ∆i(1 − ε) − 2εσi, and suppose ∆i,ε > 0.*

**Student's distributions.** *Suppose that the arms are pulled according to a policy that is uniformly good on D^{⊗k}_{T2}. Then, for all ν ∈ D^{⊗k}_{T2},*

$$\liminf_{n\to\infty}{\frac{\mathbb{E}_{\nu}[T_{i}(n)]}{\log(n)}}\geq{\frac{\sigma_{i}^{2}}{51\Delta_{i}^{2}}}\vee{\frac{1}{4\log(\Delta_{i}/\sigma_{i})+22}}.$$

**Corrupted Bernoulli distributions.** *Suppose that the arms are pulled according to a policy that is uniformly good on D^{⊗k}_{B(ε)}.*
*Then, for all ν^ε ∈ D^{⊗k}_{B(ε)} such that $\frac{2\sigma_i\sqrt{\varepsilon}}{1-2\varepsilon}<\Delta_i<2\sigma_i$,*

$$\liminf_{n\to\infty}\frac{\mathbb{E}_{\nu^{\varepsilon}}[T_{i}(n)]}{\log(n)}\geq\frac{2\sigma_{i}}{\overline{\Delta}_{i,\varepsilon}\log\left(1+\frac{\overline{\Delta}_{i,\varepsilon}}{2\sigma_{i}-\overline{\Delta}_{i,\varepsilon}}\right)},\tag{6}$$

*and for ∆i > 2σi,*

$$\liminf_{n\to\infty}{\frac{\mathbb{E}_{\nu^{\varepsilon}}[T_{i}(n)]}{\log(n)}}\geq{\frac{1}{\left(1-2\varepsilon\right)\log\left({\frac{1-\varepsilon}{\varepsilon}}\right)}}.\tag{7}$$

For brevity, the detailed proof is deferred to Appendix A.1.

**Small gap versus large gap regimes.** Due to the restriction on the families of distributions considered in Theorem 1, the lower bounds are not tight and may not exhibit the correct rate of convergence for all families of distributions. However, this theorem provides some insights about the difficulties that one may encounter in corrupted and heavy-tailed bandit problems, including the logarithmic dependence on n. In Theorem 1, if ∆i is small, we see that in the heavy-tailed case (Student's distributions), we recover a term very similar to the lower bound when the arms follow Gaussian distributions. Now, in the case where ∆i is large, the number of suboptimal pulls in the heavy-tail setting is Ω(1/log(∆i/σi)). This is the price to pay for heavy tails. If we are in the high distinguishability/low variance regime, i.e. ∆i,ε/(2σi) ∈ (√ε/(1 − 2ε), 1), we recover a logarithmic lower bound which depends on the corrupted gap between means ∆i,ε = ∆i(1 − ε) − 2εσi. Since the corrupted gap is always smaller than the true gap ∆i, this indicates that a corrupted bandit (ε > 0) must incur higher regret than an uncorrupted one (ε = 0). For ε = 0, this lower bound coincides with the lower bound for Gaussians with uncorrupted gap of means ∆i and variance σi².
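The heavy-tailed part of this discussion can be checked numerically. The sketch below (our own illustration, not code from the paper) integrates the KL divergence between two shifted Student densities by simple quadrature and compares it with the bound (2) of Lemma 3; for a small shift the divergence is tiny, which is exactly why many pulls are needed:

```python
import math

def student_pdf(t, d, mu=0.0):
    """Density of a Student distribution with d degrees of freedom, shifted by mu."""
    c = math.gamma((d + 1) / 2) / (math.gamma(d / 2) * math.sqrt(d * math.pi))
    return c * (1 + (t - mu) ** 2 / d) ** (-(d + 1) / 2)

def kl_students(d, delta, lo=-500.0, hi=500.0, steps=200_000):
    """Trapezoidal estimate of KL(P1, P2) for P1 centered at 0 and P2 at delta."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        p1 = student_pdf(t, d)
        p2 = student_pdf(t, d, mu=delta)
        w = 0.5 if i in (0, steps) else 1.0   # trapezoidal endpoint weights
        total += w * p1 * math.log(p1 / p2)
    return total * h

def lemma3_bound(d, delta):
    """Upper bound (2) on the KL divergence, for delta <= 1 and delta > 1."""
    if delta <= 1:
        return 3 ** (d - 1) * (d + 1) ** 2 * delta ** 2 / (5 * math.sqrt(d))
    return (d + 1) * math.log(delta) + math.log(3 ** d * (d + 1) ** 2 / (5 * math.sqrt(d)))

kl_small = kl_students(d=3, delta=0.1)
kl_big = kl_students(d=3, delta=1.0)
print(kl_small, kl_big)  # the divergence grows with the shift delta
```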
On the other hand, if ∆i,ε/(2σi) is larger than 1, we observe that we can still achieve logarithmic regret, but the hardness depends only on the corruption level ε, specifically as 1/((1 − 2ε) log((1 − ε)/ε)).

## 5 Robust Bandit Algorithm: Huber's Estimator And Upper Bound On The Regret

In this section, we propose a UCB-type algorithm, namely HuberUCB, addressing the Bandits with Stochastic Corruption problem (Algorithm 2). This algorithm primarily uses a robust mean estimator, called Huber's estimator (Section 5.1), and a corresponding confidence bound to develop HuberUCB (Section 5.2). We further provide a theoretical analysis in Theorem 3 leading to an upper bound on the regret of HuberUCB. We observe that the proposed upper bound matches the lower bound of Theorem 1 in some settings.

## 5.1 Robust Mean Estimation And Huber's Estimator

We begin with a presentation of Huber's estimator of the mean (Huber, 1964). As we aim to design a UCB-type algorithm, the main focus is to obtain an empirical estimate of the mean rewards. Since the rewards are heavy-tailed and corrupted in this setting, we have to use a robust estimator of the mean. We choose Huber's estimator (Huber, 1964), an M-estimator that is known for its robustness properties and has been extensively studied (e.g. its concentration properties (Catoni, 2012)). Huber's estimator is an M-estimator, which means that it can be derived as a minimizer of some loss function. Given access to n i.i.d. random variables X₁ⁿ ≜ {X1, . . . , Xn}, we define Huber's estimator as

$$\mathrm{Hub}_{\beta}(X_{1}^{n})\in\arg\min_{\theta\in\mathbb{R}}\sum_{i=1}^{n}\rho_{\beta}(X_{i}-\theta),\tag{8}$$

where ρβ is Huber's loss function with parameter β > 0. ρβ is a loss function that is quadratic near 0 and linear near infinity, with β thresholding between the quadratic and linear behaviors.
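Because the derivative of ρβ clips residuals at ±β, the first-order condition of (8) sets the sum of clipped residuals to zero, and the minimizer can be found by bisection. A minimal illustrative implementation (our own sketch, not the paper's code):

```python
def psi(x, beta):
    """Huber's influence function: identity on [-beta, beta], clipped outside."""
    return max(-beta, min(beta, x))

def huber_estimator(xs, beta, tol=1e-10):
    """Solve sum_i psi_beta(x_i - theta) = 0 by bisection over theta."""
    lo, hi = min(xs), max(xs)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # The clipped-residual sum is nonincreasing in theta:
        # a positive value means theta is still too small.
        if sum(psi(x - mid, beta) for x in xs) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# With a gross outlier, the estimate barely moves: each sample can push the
# score by at most beta, so a single corrupted point has bounded influence.
data = [0.9, 1.1, 1.0, 0.8, 1.2, 1000.0]  # one corrupted observation
print(huber_estimator(data, beta=1.0))
```

With a large β the same routine recovers the empirical mean, illustrating the efficiency/robustness trade-off that β controls.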
In the rest of the paper, rather than using the aforementioned definition, we represent Huber's estimator as a root of the following equation (Mathieu, 2022):

$$\sum_{i=1}^{n}\psi_{\beta}\left(X_{i}-\mathrm{Hub}_{\beta}(X_{1}^{n})\right)=0.\tag{9}$$

Here, ψβ(x) ≜ x 1{|x| ≤ β} + β sign(x) 1{|x| > β} is called the influence function. Though the representations in Equation (8) and Equation (9) are equivalent, we prefer representation (9), as we prove the properties of Huber's estimator using those of ψβ. β plays the role of a scaling parameter. Depending on β, Huber's estimator exhibits a trade-off between the efficiency of the minimizer of the squared loss, i.e. the empirical mean, and the robustness of the minimizer of the absolute loss, i.e. the empirical median.

## 5.2 Concentration Of Huber's Estimator In Corrupted Setting

Let us denote the true Huber mean for a distribution P by Hubβ(P). This means that, for a random variable Y with law P, Hubβ(P) satisfies E[ψβ(Y − Hubβ(P))] = 0. We now state our first key result on the concentration of Huber's estimator around Hubβ(P) in a corrupted and heavy-tailed setting.

**Theorem 2 (Concentration of Empirical Huber's estimator)** *Suppose that X1, . . . , Xn are i.i.d. with law (1 − ε)P + εH for some P, H ∈ P and proportion of outliers ε ∈ (0, 1/2), and that P has a finite variance σ².*
*Then, with probability larger than $1 - 5\delta$,*

$$|\mathrm{Hub}_{\beta}(X_{1}^{n})-\mathrm{Hub}_{\beta}(P)|\leq\frac{\sigma\sqrt{\frac{2\ln(1/\delta)}{n}}+\beta\frac{\ln(1/\delta)}{3n}+2\beta\bar{\varepsilon}\sqrt{\frac{\ln(1/\delta)}{n}}+2\beta\varepsilon}{\left(p-\sqrt{\frac{\ln(1/\delta)}{2n}}-\varepsilon\right)_{+}}.$$

*Here, $p=\mathbb{P}_{P}(|Y-\mathbb{E}_{P}[Y]|\leq\beta/2)$ with $p>5\varepsilon$, $\beta>4\sigma$, $\bar{\varepsilon}=\sqrt{\frac{1-2\varepsilon}{\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}$, and $\delta\geq\exp\left(-n\,\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\bar{\varepsilon}\sqrt{2}\right)^{2}}\right)$.*

Theorem 2 gives us the concentration of $\mathrm{Hub}_\beta(X_1^n)$ around $\mathrm{Hub}_\beta(P)$, i.e. the Huber functional of the inlier distribution $P$. This theorem allows us to construct a UCB-type algorithm to solve Bandits with Stochastic Corruption. For convenience of notation, hereafter, we denote the rate of convergence of $\mathrm{Hub}_\beta(X_1^n)$ to $\mathrm{Hub}_\beta(P)$ as

$$r_{n}(\delta)\triangleq{\frac{\sigma{\sqrt{\frac{2\ln(1/\delta)}{n}}}+\beta{\frac{\ln(1/\delta)}{3n}}+2\beta\bar{\varepsilon}{\sqrt{\frac{\ln(1/\delta)}{n}}}+2\beta\varepsilon}{\left(p-{\sqrt{\frac{\ln(1/\delta)}{2n}}}-\varepsilon\right)_{+}}}.\tag{10}$$

**Discussion.** Now, we provide a brief discussion of the implications of Theorem 2.

1. *Value of $p$:* For most laws that exhibit concentration properties, the constant $p$ is close to 1, since $\beta \geq 4\sigma$. One might also use Markov's inequality to lower bound $p$, depending on the number of finite moments $P$ has. Bounding $p$ then becomes a trade-off on the value of $\beta$: large values of $\beta$ imply that $p$ is close to 1, but a larger $\beta$ also leads to a less robust estimator, since the error bound in Theorem 2 increases with $\beta$.

2. *Tightness of constants:* If there are no outliers ($\varepsilon = 0$), the optimal rate of convergence in such a setting is at least of order $\sigma\sqrt{2\ln(1/\delta)/n}$ due to the central limit theorem. Theorem 2 shows that we are very close to attaining this optimal constant in the leading $1/\sqrt{n}$ term.
This result for Huber's estimator echoes the one presented in Catoni (2012).

3. *Value of $\beta$:* $\beta$ is a parameter that achieves a trade-off between accuracy in the light-tailed uncorrupted setting and robustness. For our result, $\beta$ must be at least of order $4\sigma$. We provide a detailed discussion on the choice of $\beta$ in Section 5.4.

4. *Restriction on the values of $\delta$:* In Theorem 2, $\delta$ must be at least of order $e^{-n}$. This restriction may seem arbitrary, but it is in fact unavoidable, as shown in Theorem 4.3 of Devroye et al. (2016). This is a limitation of robust mean estimation that forces our algorithm to perform a forced exploration in the beginning.

5. *Restriction on the values of $\varepsilon$:* In Theorem 2, $\varepsilon$ can be at most $p/5$, which implies that it is smaller than $1/5$. This restriction is common in the robustness literature. In particular, in Kapoor et al. (2019), $\varepsilon$ is assumed to be smaller than $\Delta/\sigma$. In the robustness literature, Lecué & Lerasle (2020) and Dalalyan & Thompson (2019) assumed that $\varepsilon \leq 1/768$ and $\varepsilon \leq 1/400$, respectively. In contrast, our analysis can handle $\varepsilon$ up to $0.2$, which is significantly higher than the existing restrictions.

**Bias of Huber's estimate.** If $P$ is symmetric, we have $\mathrm{Hub}_\beta(P) = \mathbb{E}[X]$. When $P$ is non-symmetric, we need to control the distance of Huber's estimate from the true mean, i.e. $|\mathrm{Hub}_\beta(P) - \mathbb{E}[X]|$. We call this the bias of Huber's estimate. We need to bound this bias to obtain a concentration of the empirical Huber estimate $\mathrm{Hub}_\beta(X_1^n)$ around the true mean $\mathbb{E}[X]$. We control the bias using the following lemma, which is a direct consequence of Lemma 4 of Mathieu (2022).

**Lemma 5 (Bias of Huber's estimator)** *Let $Y$ be a random variable with $\mathbb{E}[|Y|^q] < \infty$ for $q \geq 2$ and suppose that $\beta^2 \geq 9\,\mathrm{Var}(Y)$. Then*

$$|\mathbb{E}[Y]-\mathrm{Hub}_{\beta}(P)|\leq{\frac{2\,\mathbb{E}[|Y-\mathbb{E}[Y]|^{q}]}{(q-1)\beta^{q-1}}}.$$

Using Lemma 5 and Theorem 2, we can control the deviations of $\mathrm{Hub}_\beta(X_1^n)$ from $\mathbb{E}[X]$.
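For intuition, the bound of Lemma 5 can be checked numerically. The following sketch (ours, not from the paper) uses $q = 2$ and an Exp(1) inlier law, so that $\beta^2 = 16 \geq 9\,\mathrm{Var}(Y)$, and compares the empirical bias of Huber's estimator against the bound $2\,\mathrm{Var}(Y)/\beta$:

```python
import numpy as np

def huber_estimate(x, beta, tol=1e-10, max_iter=200):
    # Fixed-point (IRLS) solver for sum_i psi_beta(x_i - theta) = 0 (Equation (9)).
    theta = float(np.median(x))
    for _ in range(max_iter):
        r = x - theta
        safe = np.where(np.abs(r) < 1e-12, 1.0, r)
        w = np.where(np.abs(r) < 1e-12, 1.0, np.clip(r, -beta, beta) / safe)
        new_theta = float(np.sum(w * x) / np.sum(w))
        if abs(new_theta - theta) < tol:
            break
        theta = new_theta
    return theta

rng = np.random.default_rng(1)
y = rng.exponential(1.0, 200_000)   # asymmetric law: E[Y] = 1, Var(Y) = 1
beta = 4.0                          # beta^2 = 16 >= 9 * Var(Y), as Lemma 5 requires
bias_bound = 2 * np.var(y) / beta   # Lemma 5 with q = 2
empirical_bias = abs(huber_estimate(y, beta) - 1.0)
```

In this example the empirical bias is far below the conservative bound, as Lemma 5 guarantees.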
This allows us to formulate an index-based (UCB-type) algorithm for corrupted bandits. We present this algorithm in Section 5.3.

## 5.3 HuberUCB: Algorithm and Regret Bound

In this section, we describe a robust, UCB-type algorithm called HuberUCB. We denote by $\mu_i$ the mean of arm $i$ and by $\sigma_i^2$ its variance. We assume that we know the variances of the reward distributions, i.e. $\{\sigma_i^2\}_{i=1}^k$, and hence, we define by construction $M \triangleq \max_i \sigma_i^2$. We refer to Section 5.4 for a discussion of the choice of the parameters when the reward distributions are unknown.

**HuberUCB: the algorithm.** In order to deploy Huber's estimator in the multi-armed bandit setting, we need to estimate the mean of the rewards of each arm separately. We do that by defining a parameter $\beta_i$ for each arm and estimating each $\mu_i$ separately using

$$\mathrm{Hub}_{i,s}=\mathrm{Hub}_{\beta_{i}}\left(X_{t},\ 1\leq t\leq s\ \text{such that}\ A_{t}=i\right).$$

Now, at each step $t$, we define a confidence bound for arm $i$ with $s$ pulls as

$$B_{i}(s,t)\triangleq{\begin{cases}r_{s}(1/t^{2})+b_{i}&{\mathrm{if~}}s\geq s_{lim}(t)\\ \infty&{\mathrm{if~}}s<s_{lim}(t)\end{cases}},\tag{11}$$

where $r_s(1/t^2)$ is defined by Equation (10), $s_{lim}(t) = \log(t)\frac{98}{128(p-5\varepsilon)^{2}}\left(1+2\sqrt{2}\left(\bar{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}$, $\bar{\varepsilon}=\sqrt{\frac{1-2\varepsilon}{\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}$, and $b_i$ is a bound on the bias $|\mathbb{E}[X]-\mathrm{Hub}_{\beta_i}(P_i)|$. Hence, $b_i$ can be set to zero if $P_i$ is known to be symmetric, and controlled by Lemma 5 otherwise. Here, we assign $b_i = 2\sigma_i^2/\beta_i$ as a conservative choice by imposing $q = 2$, i.e. a finite second moment, in Lemma 5. Now, we propose HuberUCB, which selects an arm $a_t$ at step $t$ based on the index

$$I_{i}^{\mathrm{HuberUCB}}(t)=\mathrm{Hub}_{i,T_{i}(t-1)}+B_{i}(T_{i}(t-1),t).\tag{12}$$

The index of HuberUCB together with the confidence bound defined in Equation (11) dictates that if an arm is less explored, i.e.
$T_i(t-1) < s_{lim}(t)$, we choose that arm, and if multiple arms satisfy this, we break the tie randomly. As $t$ grows and $T_i(t-1) \geq s_{lim}(t)$ is satisfied for all arms, we choose the arms according to the adaptive bonus. Thus, HuberUCB induces an initial forced exploration to obtain confident-enough robust estimates, followed by a time-adaptive selection of arms. We present a pseudocode of HuberUCB in Algorithm 2. We discuss the choices of the hyperparameters and the computational details in Section 5.4.

**Algorithm 2** HuberUCB

**Require:** Parameter $\varepsilon \in [0, 1/2)$ and $\beta_i > 4\sigma_i$ for all $i \leq k$
1: **for** $t = 1, \ldots, n$ **do**
2: Compute the index $I_i^{\mathrm{HuberUCB}}(t)$ (Equation (12)) for $i \in \{1, \ldots, k\}$ using $X_1, \ldots, X_{t-1}$.
3: Choose arm $a_t \in \arg\max_i I_i^{\mathrm{HuberUCB}}(t)$.
4: Observe a reward $X_t$.
5: **end for**

**Regret analysis.** Now, we provide a regret upper bound for HuberUCB.

**Theorem 3 (Upper bound on the number of pulls of suboptimal arms with HuberUCB)** *Let us consider a set of $k$ reward distributions $\{P_i\}_{i=1}^k$ with known and finite variances $\{\sigma_i^2\}_{i=1}^k$, i.e. for all $i \in \{1, \ldots, k\}$, $P_i \in \mathcal{P}^{[2]}(M)$ such that $M \triangleq \max_{i\in\{1,\ldots,k\}} \sigma_i^2$. Let us also consider some $\beta_i \geq 4\sigma_i$ and $p \triangleq \inf_{1\leq i\leq k} \mathbb{P}_{P_i}(|X - \mathbb{E}_{P_i}[X]| \leq \beta_i/2)$ such that $p > 5\varepsilon$ and $\varepsilon < 1/5$. We denote $\widetilde{\Delta}_{i,\varepsilon} \triangleq (\Delta_i - 2b_i)(p - \varepsilon) - 8\beta_i\varepsilon$, which we assume to be positive, and $\sqrt{\frac{1-2\varepsilon}{\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}} \leq \bar{\varepsilon}$.*

- *If $\widetilde{\Delta}_{i,\varepsilon}>12\frac{\sigma_i^2}{\beta_i}\left(\sqrt{2}+2\frac{\beta_i}{\sigma_i}\bar{\varepsilon}\right)^2$, then HuberUCB pulls arm $i$ in expectation at most*

$$\mathbb{E}[T_{i}(n)]\leq\log(n)\max\left({\frac{32\beta_{i}}{3\widetilde{\Delta}_{i,\varepsilon}}},{\frac{4}{(p-5\varepsilon)^{2}}}\left(1+2{\sqrt{2}}\left(\bar{\varepsilon}\vee{\frac{9}{14{\sqrt{2}}}}\right)\right)^{2}\right)+10(\log(n)+1).$$

- *If $\widetilde{\Delta}_{i,\varepsilon}\leq12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\bar{\varepsilon}\right)^{2}$, then HuberUCB pulls arm $i$ in expectation at most*

$$\mathbb{E}[T_{i}(n)]\leq\log(n)\max\left(\frac{50\sigma_{i}^{2}}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\bar{\varepsilon}\right)^{2},\frac{4}{(p-5\varepsilon)^{2}}\left(1+2\sqrt{2}\left(\bar{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)+10(\log(n)+1).$$

Using Theorem 3 and Lemma 1, a bound on the corrupted regret of HuberUCB follows immediately. We now state a simplified version of Theorem 3 with worse but explicit constants for easier comprehension. Let us fix $\beta_i^2 = 16\sigma_i^2$ and $\varepsilon \leq 1/10$, such that $\bar{\varepsilon} = 4/(5\sqrt{\ln(9)}) \simeq 0.54$ and $p \geq 1 - \frac{4\sigma_i^2}{\beta_i^2} \geq \frac{3}{4} \geq 5\varepsilon + \frac{1}{4}$. Now, if we further assume that $P_i$ is symmetric, leading to $b_i = 0$, we obtain the following upper bounds.

**Corollary 1 (Simplified version of Theorem 3)** *Suppose that for all $i$, $P_i$ is a symmetric distribution with finite variance $\sigma_i^2$. Let us also denote $\widetilde{\Delta}_{i,\varepsilon} \triangleq \Delta_i(p - \varepsilon) - 32\sigma_i\varepsilon$, which is assumed to be positive, and let $\varepsilon < 1/10$.*
- *If $\widetilde{\Delta}_{i,\varepsilon} > 6\sigma_i\left(1 + 4\sqrt{2}\,\bar{\varepsilon}\right)^2$, then HuberUCB pulls arm $i$ in expectation at most*

$$\mathbb{E}[T_{i}(n)]\leq43\log(n)\max\left({\frac{\sigma_{i}}{\widetilde{\Delta}_{i,\varepsilon}}},10\right)+10(\log(n)+1).$$

- *If $\widetilde{\Delta}_{i,\varepsilon} \leq 6\sigma_i\left(1 + 4\sqrt{2}\,\bar{\varepsilon}\right)^2$, then HuberUCB pulls arm $i$ in expectation at most*

$$\mathbb{E}[T_{i}(n)]\leq23\log(n)\max\left(\frac{\sigma_{i}^{2}}{\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(1+32\bar{\varepsilon}^{2}\right),18\right)+10(\log(n)+1).$$

Remark that in this corollary, we replaced some occurrences of $\bar{\varepsilon}$ by its upper bound, which is also an upper bound on $\varepsilon$. Thus, the presented result is loose up to constants, but lends itself to easier comprehension.

**Discussions on the Upper Bound.** Here, we discuss how the proposed upper bound of HuberUCB matches and mismatches the lower bounds in Theorem 1.

1. *Order-optimality of the upper bound.* HuberUCB achieves the logarithmic regret prescribed by the lower bound (Theorem 1), plus some additive error due to the fact that it is a UCB-type algorithm. Thus, HuberUCB is order-optimal with respect to $n$.

2. *Two regimes of the upper bound.* When $\Delta_i$ is small compared to $\sigma_i$, Corollary 1 yields the upper bound $\mathbb{E}[T_i(n)] \underset{n\to\infty}{=} O\left(\log(n)\,\frac{\sigma_i^2}{\widetilde{\Delta}_{i,\varepsilon}^2}\,\bar{\varepsilon}^2\right)$. $\bar{\varepsilon}^2$ is of the same order of magnitude as Equation (7) because we take $\varepsilon$ strictly smaller than $1/2$. $\bar{\varepsilon}^2$ acts as an indicator of the corruption level. The term $\frac{\sigma_i^2}{\widetilde{\Delta}_{i,\varepsilon}^2}$ indicates the hardness due to the corrupted gap $\widetilde{\Delta}_{i,\varepsilon}$ and echoes the hardness term $\frac{\sigma_i^2}{\Delta_i^2}$ that appears in the regret upper bound of UCB for uncorrupted bandits. The hardness term $\frac{\sigma_i^2}{\widetilde{\Delta}_{i,\varepsilon}^2}$ also appears in the corrupted lower bound (Equation (6)) as well as in the heavy-tailed lower bound (Equation (5)) for $\Delta_i \ll \sigma_i$.² On the other hand, if $\Delta_i$ is larger than $\sigma_i$, we get that $\mathbb{E}[T_i(n)] = O\left(\log(n)\left(\frac{\sigma_i}{\widetilde{\Delta}_{i,\varepsilon}} \vee \bar{\varepsilon}^2 \vee 1\right)\right)$. This upper bound reflects the lower bound in Equation (7), which holds for $\Delta_i > 2\sigma_i$.
This reiterates the fact that, for large enough suboptimality gaps, the regret of HuberUCB depends solely on the corruption level rather than on the suboptimality gap.

3. *Deviation from the lower bound.* The two regimes in the upper bound do not follow the exact distinctions made in the lower bounds. We observe that in the upper bound, the distinction between regimes depends on the corrupted suboptimality gap $\widetilde{\Delta}_{i,\varepsilon} \triangleq \Delta_i(p - \varepsilon) - 32\sigma_i\varepsilon$, while the lower bound depends on the corrupted suboptimality gap $\Delta_{i,\varepsilon} \triangleq \Delta_i(1 - \varepsilon) - 2\sigma_i\varepsilon$. This difference in constants prevents the hardness regimes and the corresponding constants in the upper and lower bounds from matching for all $\Delta_i$, $\sigma_i$, and $\varepsilon$. This deviation also comes from the fact that the lower bounds proposed in Theorem 1 consider the effects of heavy tails and corruption separately, while the upper bound of HuberUCB considers them in a coupled manner. Additionally, we observe that the regret of HuberUCB is suboptimal due to the constant additive error, which appears due to the initial forced exploration of HuberUCB up to $s_{lim}(t)$. Our concentration bounds and the corresponding regret analysis show that this forced exploration phase is unavoidable in order to handle the case $\Delta_i \leq \sigma_i$ with HuberUCB. Removing this discrepancy between the lower and upper bounds would constitute interesting future work.

## 5.4 Computational Details

Here, we discuss the three hyperparameters that HuberUCB depends on, as well as its computational cost.

**Choice of $\sigma$ and $\varepsilon$.** In Theorem 3, we assume that $\sigma$ and $\varepsilon$ are known. In practice, these are unknown, and we estimate $\sigma^2$ with a robust estimator of the variance, such as the median absolute deviation. In contrast, estimating $\varepsilon$ is hard.
There exist some heuristics, for example using the proportion of points larger than $1.5$ times the inter-quartile range, or using more complex algorithms such as Isolation Forest. However, these methods generally rely on the hypothesis that outliers are, in some way, points located outside "the bulk of the data", which conflicts with the fact that we do not assume anything about the outliers. Moreover, even though such heuristics exist, the problem of finding what constitutes "the bulk of the data" is closely linked to problems such as finding a robust minimum-volume ellipsoid, which is NP-hard in general (Mittal & Hanasusanto, 2022). We refer to Appendix C.1 for an ablation study on the choice of $\varepsilon$.

**Choice of $\beta_i$.** Ideally, $\beta_i$ should be larger than $4\sigma_i$. We recommend using an estimator of $\sigma_i$ to choose a good value of $\beta_i$. The choice of $\beta_i$ reflects the difference between heavy-tailed bandits and corrupted bandits. When the data are heavy-tailed but not corrupted, Catoni (2012) shows that $\beta_i \simeq \sigma_i\sqrt{n}$ is a good choice for the scaling parameter. However, this choice is not robust to outliers and yields linear regret in our setup (see Section 7); a trade-off between the heavy-tailed and corrupted settings would dictate $\beta_i \simeq \sigma\sqrt{n} \wedge \varepsilon^{-1/2}$ (see Proposition 2 in Mathieu (2022)). In Appendix C.1, we present an ablation study on the choice of $\beta$.

**Computational Cost.** Huber's estimator has linear complexity due to the involved Iteratively Reweighted Least Squares algorithm, which is not sequential. We have to run it at every iteration, which gives HuberUCB a quadratic time complexity. This is the computational cost of using a robust mean estimator such as Huber's estimator.

²We observe that the lower bound in Equation (5) depends on $\frac{\sigma_i^2}{\Delta_{i,\varepsilon}^2}$ for $\Delta_i \ll \sigma_i$, since the first-order approximation of $\log(1+x)$ is $x$ as $x \to 0$.
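To make the preceding definitions concrete, here is a compact sketch (ours, not the paper's implementation) of the rate $r_n(\delta)$ from Equation (10) and the index of Equation (12); the default `p = 0.75` and the choice `beta = 4 * sigma` are illustrative placeholders, since the true $p$ is problem-dependent:

```python
import numpy as np

def huber_rate(n, delta, sigma, beta, eps, eps_bar, p):
    # r_n(delta) of Equation (10); infinite when the denominator is non-positive.
    L = np.log(1.0 / delta)
    num = (sigma * np.sqrt(2 * L / n) + beta * L / (3 * n)
           + 2 * beta * eps_bar * np.sqrt(L / n) + 2 * beta * eps)
    den = p - np.sqrt(L / (2 * n)) - eps
    return num / den if den > 0 else float("inf")

def huber_ucb_index(hub_mean, pulls, t, sigma, eps, p=0.75, b=0.0):
    # Index of Equation (12) with delta = 1/t^2 and the placeholder beta = 4*sigma.
    beta = 4.0 * sigma
    eps_bar = np.sqrt((1 - 2 * eps) / np.log((1 - eps) / eps)) if eps > 0 else 0.0
    return hub_mean + huber_rate(pulls, 1.0 / t**2, sigma, beta, eps, eps_bar, p) + b
```

Note that the exploration bonus never fully vanishes: as the number of pulls grows, it tends to $2\beta\varepsilon/(p-\varepsilon) > 0$, an irreducible price of corruption.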
## 6 SeqHuberUCB: A Faster Robust Bandit Algorithm

In this section, we present a sequential approximation of Huber's estimator, and we leverage it to create a robust bandit algorithm with linear time complexity. Here, we describe the algorithm (SeqHuberUCB) and its theoretical properties.

**A sequential approximation of Huber's estimator.** The central idea is to compute Huber's estimator on the full historical data only at a logarithmic number of steps, rather than at every step, and, in between two of these re-computations, to update the estimator using only the samples observed since the last re-computation. This yields a sequential approximation of Huber's estimator, i.e. $\mathrm{SeqHub}_t$, with lower computational complexity. Fixing the last update step before a given step $t > 0$ as $P_2(t) = 2^{\left\lfloor \log(t)/\log(2) \right\rfloor}$, we define the estimator $\mathrm{SeqHub}_t$ by $\mathrm{SeqHub}_0 = 0$ and

$$\mathrm{SeqHub}_{t}=\begin{cases}H_{t}&\text{if}\ t=P_{2}(t),\\ H_{t}+\frac{\sum_{i=P_{2}(t)}^{t}\psi(X_{i}-H_{t})}{\sum_{i=1}^{t}\psi^{\prime}(X_{i}-H_{t})}&\text{otherwise.}\end{cases}\tag{13}$$

Here, $H_t \triangleq \mathrm{Hub}(X_1^{P_2(t)})$ and $\psi$ is the influence function defined in Equation (9). $\mathrm{SeqHub}_t$ can be conceptualized as a first-order Taylor approximation of $\mathrm{Hub}(X_1^t)$ around $\mathrm{Hub}(X_1^{P_2(t)})$. One might argue that $\mathrm{SeqHub}_t$ is not fully sequential but rather a phased estimator, as we still recompute Huber's estimator following a geometric schedule. Thus, we still need to keep all the data in memory, leading to the same linear space complexity as the non-sequential Huber's estimator. However, it has the desirable property of linear time complexity when computed with the prescribed geometric schedule. This implies that the SeqHuberUCB algorithm, which leverages the sequential Huber's estimator, achieves a linear time complexity.

**Concentration properties of SeqHub.** Now, in order to propose SeqHuberUCB, we first derive the rate of convergence of $\mathrm{SeqHub}_t$ towards the true Huber mean $\mathrm{Hub}(P)$.
**Theorem 4** *If the assumptions of Theorem 2 hold true, then with probability larger than $1 - 14\delta$, we have*

$$|\mathrm{SeqHub}_{t}-\mathrm{Hub}(P)|\leq r_{t}(\delta)+\left(\frac{1}{p-\sqrt{\frac{\log(1/\delta)}{2t}}-\varepsilon}-1\right)r_{P_{2}(t)}(\delta)\tag{14}$$

*for any $t>0$ and $\delta\geq\exp\left(-P_2(t)\,\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\bar{\varepsilon}\sqrt{2}\right)^{2}}\right)$. Here, $r_{t}(\delta)$ is defined as in Equation (10).*

We observe that the confidence bound of $\mathrm{SeqHub}_t$ includes the confidence bound of $\mathrm{Hub}_t$, i.e. $r_t(\delta)$, plus an additive term proportional to $r_{P_2(t)}(\delta)$. Since $r_{P_2(t)}(\delta) \geq r_t(\delta)$ for $t \geq P_2(t)$, we can show that $|\mathrm{SeqHub}_t - \mathrm{Hub}(P)| \leq \left(p - \sqrt{\frac{\log(1/\delta)}{2t}} - \varepsilon\right)^{-1} r_{P_2(t)}(\delta)$. Thus, we obtain larger confidence bounds for $\mathrm{SeqHub}$ than for $\mathrm{Hub}$, and they differ approximately by a multiplicative constant $(p - \varepsilon)^{-1}$ as $t \to \infty$.

**SeqHuberUCB: the algorithm.** Now, we plug in the sequential Huber's estimator $\mathrm{SeqHub}$ and the corresponding confidence bound (Equation (14)), instead of Huber's estimator and its confidence bound, into the HuberUCB algorithm. This yields the SeqHuberUCB algorithm, which we present hereafter. Specifically, we define the index of SeqHuberUCB as

$$I_{i}^{\mathrm{SeqHuberUCB}}(t)=\mathrm{SeqHub}_{i,T_{i}(t-1)}+B_{i}^{\mathrm{SeqHuberUCB}}(T_{i}(t-1),t),\tag{15}$$

where

$$\mathrm{SeqHub}_{i,s}=\mathrm{SeqHub}\left(X_{t},\ 1\leq t\leq s\ \text{such that}\ A_{t}=i\right),$$

and the confidence bound for arm $i$ with $s$ pulls is

$$B_{i}^{\mathrm{SeqHuberUCB}}(s,t)\triangleq\begin{cases}r_{s}(1/t^{2})+\left(\frac{1}{p-\sqrt{\frac{\log(t^{2})}{2s}}-\varepsilon}-1\right)r_{P_{2}(s)}(1/t^{2})+b_{i}&\text{if}\ P_{2}(s)\geq s_{lim}(t)\\ \infty&\text{if}\ P_{2}(s)<s_{lim}(t).\end{cases}$$

Here, $s_{lim}(t)$, $\bar{\varepsilon}$, and $b_i$ are the same as defined for HuberUCB.
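The phased recomputation of Equation (13) can be sketched as follows (our numpy sketch, not the paper's implementation; `huber_estimate` is a generic IRLS solver for Equation (9), and the powers-of-two schedule stands in for $P_2(t)$):

```python
import numpy as np

def huber_estimate(x, beta, iters=100):
    # IRLS solver for sum_i psi_beta(x_i - theta) = 0 (Equation (9)).
    theta = float(np.median(x))
    for _ in range(iters):
        r = x - theta
        safe = np.where(np.abs(r) < 1e-12, 1.0, r)
        w = np.where(np.abs(r) < 1e-12, 1.0, np.clip(r, -beta, beta) / safe)
        theta = float(np.sum(w * x) / np.sum(w))
    return theta

def seq_huber_stream(x, beta):
    # Equation (13): full recomputation at powers of two; in between, a
    # first-order correction using the influence function psi and its
    # derivative psi' (the indicator of |x - H| <= beta).
    psi = lambda r: np.clip(r, -beta, beta)
    out, H, cp, next_cp = [], 0.0, 1, 1
    for t in range(1, len(x) + 1):
        if t == next_cp:                  # t = P_2(t): recompute on all data so far
            cp, next_cp = t, 2 * next_cp
            H = huber_estimate(x[:t], beta)
            out.append(H)
        else:                             # Taylor update around H_t
            num = np.sum(psi(x[cp - 1:t] - H))
            den = max(np.sum(np.abs(x[:t] - H) <= beta), 1)
            out.append(H + num / den)
    return out
```

The stream tracks the batch estimator closely, while only $O(\log n)$ recomputations touch the full history.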
Similar to Corollary 1, we now present a simplified regret upper bound for SeqHuberUCB. Retaining the setting of Corollary 1, we assume that $\beta_i^2 = 16\sigma_i^2$ and $\varepsilon \leq 1/10$, implying $\bar{\varepsilon} = 4/(5\sqrt{\ln(9)}) \simeq 0.54$ and $p \geq 1 - \frac{4\sigma_i^2}{\beta_i^2} \geq \frac{3}{4} \geq 5\varepsilon + \frac{1}{4}$, and that $P_i$ is symmetric, so that $b_i = 0$. Further simplifying the constants yields the following regret upper bound for SeqHuberUCB.

**Lemma 6 (Simplified upper bound on the regret of SeqHuberUCB)** *Suppose that for all $i$, $P_i$ is a distribution with finite variance $\sigma_i^2$. Let us also denote $\widetilde{\Delta}_{i,\varepsilon} = \Delta_i(p - \varepsilon) - 32\sigma_i\varepsilon$.*

- *If $\widetilde{\Delta}_{i,\varepsilon} > 18\sigma_i\left(1 + 4\sqrt{2}\,\bar{\varepsilon}\right)^2$, then*

$$\mathbb{E}[T_{i}(n)]\leq128\log(n)\max\left({\frac{\sigma_{i}}{\widetilde{\Delta}_{i,\varepsilon}}},2\right)+28(\log(n)+1).$$

- *If $\widetilde{\Delta}_{i,\varepsilon} \leq 18\sigma_i\left(1 + 4\sqrt{2}\,\bar{\varepsilon}\right)^2$, then*

$$\mathbb{E}[T_{i}(n)]\leq80\log(n)\max\left(\frac{\sigma_{i}^{2}}{\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(1+32\bar{\varepsilon}^{2}\right),3\right)+28(\log(n)+1).$$

**Comparison between the regrets of HuberUCB and SeqHuberUCB.** Lemma 6 yields regret bounds for SeqHuberUCB similar to those obtained for HuberUCB in Corollary 1. We observe that the regrets of these two algorithms only differ in $n$-independent constants. Specifically, the regret of SeqHuberUCB can be approximately 3–4 times higher than that of HuberUCB. For simplicity of exposition, we present approximate constants in our results; a more careful analysis might yield finer constants. Theorem 4 and the experimental results (Figure 2) indicate that SeqHuberUCB and HuberUCB can achieve very close performances.

## 7 Experimental Evaluation

In this section, we assess the experimental efficiency of HuberUCB and SeqHuberUCB by plotting the empirical regret.
Contrary to the uncorrupted case, we cannot estimate the corrupted regret in Equation (Corrupted regret) using only the observed rewards. Instead, we use the true uncorrupted gaps, which we know because we are in a simulated environment, and we estimate the corrupted regret $R_n$ using $\sum_{i=1}^{k} \Delta_i \widehat{T}_i(n)$, where $\widehat{T}_i(n) = \frac{1}{M}\sum_{m=1}^{M}(T_i(n))_m$ is a Monte-Carlo estimate of $\mathbb{E}_{\nu^\varepsilon}[T_i(n)]$ over $M$ experiments. We use the rlberry library (Domingues et al., 2021) and Python 3 for the experiments. We run the experiments on an 8-core Intel(R) Core(TM) i7-8665U CPU@1.90GHz. For each algorithm, we repeat each experiment 100 times to obtain a Monte-Carlo estimate of the regret.

**Comparison with bandit algorithms for heavy-tailed and adversarial settings.** To the best of our knowledge, there is no existing bandit algorithm handling unbounded stochastic corruption prior to this work. Hence, we compare ourselves to the closest settings, i.e. heavy-tailed bandit algorithms and adversarial bandit algorithms. We empirically and competitively study four algorithms: HuberUCB, SeqHuberUCB, and two RobustUCB algorithms, with the Catoni-Huber estimator and with Median of Means (MOM) (Bubeck et al., 2013). In particular, we compare to algorithms assuming bounded *centered* moments, and not bounded raw moments such as the Truncated Mean from Bubeck et al. (2013) and KLinf-UCB from Agrawal et al. (2021). See also Appendix C for further experimental results.

![16_image_0.png](16_image_0.png)

Figure 2: Cumulative regret plot of the algorithms on corrupted Bernoulli (above), Student's (middle), and Pareto (below) reward distributions with various corruption levels $\varepsilon$. Lower corrupted regret indicates better performance for an algorithm.

HuberUCB is closely related to RobustUCB with the Catoni-Huber estimator, which also uses Huber's estimator but with another set of parameters and confidence intervals. The RobustUCB algorithms are tuned for uncorrupted heavy tails.
Hence, they incur linear regret in a corrupted setting. This is reflected in the experiments. *We also improve upon Bubeck et al. (2013), as we can handle arm-dependent variances.*

**Corrupted Bernoulli setting:** In Figure 2 (above), we study a 3-armed bandit with corrupted Bernoulli distributions with means $0.1$, $0.97$, $0.99$. The corruptions applied to this bandit problem are Bernoulli distributions with means $0.999$, $0.999$, $0.001$, respectively. For HuberUCB and SeqHuberUCB, we choose $\beta_i = 0.1\sigma_i$, which seems to work better in practice despite the theory presented before. We plot the mean plus/minus the standard error in Figure 2, for the three corruption proportions $\varepsilon$ equal to $0\%$, $3\%$, and $5\%$. We notice a short linear-regret phase at the beginning due to the forced exploration performed by the algorithms. Thereafter, HuberUCB and SeqHuberUCB incur logarithmic regret. On the other hand, the Catoni-Huber agent and the MOM agent incur logarithmic regret only in the uncorrupted setting. When the data are corrupted, i.e. $\varepsilon > 0$, their regret grows linearly.

**Corrupted Student setting:** In Figure 2 (middle), we study a 3-armed bandit with corrupted Student's distributions with 3 degrees of freedom (finite second moment) and with means $0.1$, $0.95$, $1$. The corruptions applied to this bandit problem are Gaussians with variance $1$ and means $100$, $100$, $-1000$, respectively. For HuberUCB and SeqHuberUCB, we choose $\beta_i = \sigma_i$. The results echo the observations for the Bernoulli case, except that the corruption is more drastic and affects the performance even more.

**Corrupted Pareto setting:** In Figure 2 (bottom), we illustrate the results for a 3-armed bandit with corrupted Pareto distributions having shape parameters $3$, $3$, $2.1$ (i.e. they have finite second moments) and scale parameters $0.1$, $0.2$, $0.3$, respectively. Thus, the corresponding means are $0.15$, $0.3$, and $0.57$, and the standard deviations are $0.09$, $0.17$, $1.25$, respectively.
The corruptions applied to this bandit problem are Gaussians with variance $1$, centered at $100$, $100$, $-1000$, respectively. For HuberUCB and SeqHuberUCB, we choose $\beta_i = 1.5\sigma_i$, and we also bound the bias $b_i$ by $\sigma_i^2/\beta_i$. The results echo the observations for the Student's distributions. Thus, we conclude that HuberUCB incurs the lowest regret among the competing algorithms in the Bandits with Stochastic Corruption setting, especially for higher corruption levels $\varepsilon$. Also, the performances of SeqHuberUCB and HuberUCB are very close, except for the Pareto distributions with a high corruption level.

## 8 Conclusion

In this paper, we study the setting of Bandits with Stochastic Corruption, which encompasses both heavy-tailed rewards with bounded variance and unbounded corruptions of the rewards. In this setting, we prove lower bounds on the regret showing that heavy-tailed bandits and corrupted bandits are strictly harder than the usual sub-Gaussian bandits. Specifically, in this setting, the hardness depends on the suboptimality gap/variance regime. If the suboptimality gap is small, the hardness is dictated by $\sigma_i^2/\Delta_{i,\varepsilon}^2$. Here, $\Delta_{i,\varepsilon}$ is the corrupted suboptimality gap, which is smaller than the uncorrupted gap $\Delta_i$ and thus harder to distinguish. To complement the lower bounds, we design a robust algorithm, HuberUCB, that uses Huber's estimator for robust mean estimation and a novel concentration bound on this estimator to create tight confidence intervals. HuberUCB achieves logarithmic regret that matches the lower bound in the low suboptimality gap/high variance regime. We also present a sequential Huber's estimator, which could be of independent interest, and use it to state a linear-time robust bandit algorithm, SeqHuberUCB, that exhibits the same efficiency as HuberUCB. Unlike the existing literature, we do not need any assumption of a known bound on the corruption or a known bound on the $(1 + \eta)$-th uncentered moment, which was posed as an open problem in Agrawal et al. (2021).
Since our upper and lower bounds disagree in the high gap/low variance regime, it would be interesting to investigate this regime further. From multi-armed bandits, we know that the tightest lower and upper bounds depend on the KL-divergence between the optimal and suboptimal reward distributions. Thus, it would be imperative to study the KL-divergence between corrupted distributions to better understand the Bandits with Stochastic Corruption problem. In this paper, we have focused on a problem-dependent regret analysis for a given $\varepsilon$. In the future, it would be interesting to gain insight into how to adapt to an unknown $\varepsilon$, and to perform a problem-independent, worst-case analysis. Also, following the reinforcement learning literature, it would be natural to extend HuberUCB to contextual and linear bandit settings with corruptions and heavy tails. This would facilitate its applicability to practical problems, such as choosing treatments against pests.

## References

Naman Agarwal, Brian Bullins, Elad Hazan, Sham Kakade, and Karan Singh. Online control with adversarial disturbances. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pp. 111–119. PMLR, 2019. URL https://proceedings.mlr.press/v97/agarwal19c.html.

Shubhada Agrawal, Sandeep K Juneja, and Wouter M Koolen. Regret minimization in heavy-tailed bandits. In *Conference on Learning Theory*, pp. 26–62. PMLR, 2021.

Jason Altschuler, Victor-Emmanuel Brunel, and Alan Malek. Best arm identification for contaminated bandits. *Journal of Machine Learning Research*, 20(91):1–39, 2019.

Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. *SIAM Journal on Computing*, 32(1):48–77, 2002.

Ilija Bogunovic, Andreas Krause, and Jonathan Scarlett. Corruption-tolerant Gaussian process bandit optimization.
In *International Conference on Artificial Intelligence and Statistics*, pp. 1071–1081. PMLR, 2020. Djallel Bouneffouf. Corrupted contextual bandits: Online learning with corrupted context. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3145–3149. IEEE, 2021. Hippolyte Bourel, Odalric-Ambrym Maillard, and Mohammad Sadegh Talebi. Tightening Exploration in Upper Confidence Reinforcement Learning. In *International Conference on Machine Learning*, Vienna, Austria, July 2020. URL https://hal.archives-ouvertes.fr/hal-03000664. Sébastien Bubeck, Nicolo Cesa-Bianchi, and Gábor Lugosi. Bandits with heavy tail. IEEE Transactions on Information Theory, 59(11):7711–7717, 2013. Apostolos N Burnetas and Michael N Katehakis. Optimal adaptive policies for markov decision processes. Mathematics of Operations Research, 22(1):222–255, 1997. Olivier Catoni. Challenging the empirical mean and empirical variance: a deviation study. In Annales de l'IHP Probabilités et statistiques, volume 48, pp. 1148–1185, 2012. Arnak Dalalyan and Philip Thompson. Outlier-robust estimation of a sparse linear model using l1-penalized huber's m-estimator. *Advances in neural information processing systems*, 32, 2019. Jules Depersin and Guillaume Lecué. Robust sub-gaussian estimation of a mean vector in nearly linear time. The Annals of Statistics, 50(1):511–536, 2022. Luc Devroye, Matthieu Lerasle, Gabor Lugosi, and Roberto I Oliveira. Sub-gaussian mean estimators. The Annals of Statistics, 44(6):2695–2725, 2016. Omar Darwiche Domingues, Yannis Flet-Berliac, Edouard Leurent, Pierre Ménard, Xuedong Shang, and Michal Valko. rlberry - A Reinforcement Learning Library for Research and Education, 10 2021. URL https://github.com/rlberry-py/rlberry. Negin Golrezaei, Vahideh Manshadi, Jon Schneider, and Shreyas Sekar. Learning product rankings robust to fake users. In *Proceedings of the 22nd ACM Conference on Economics and Computation*, pp. 
560–561, 2021. Mohammad Hajiesmaili, Mohammad Sadegh Talebi, John Lui, Wing Shing Wong, et al. Adversarial bandits with corruptions: Regret lower bound and no-regret algorithm. *Advances in Neural Information Processing* Systems, 33:19943–19952, 2020. Andreas Hotho. Anomaly detection in beehives: An algorithm comparison. In Sensor Networks: 9th International Conference, SENSORNETS 2020, Valletta, Malta, February 28–29, 2020, and 10th International Conference, SENSORNETS 2021, Virtual Event, February 9–10, 2021, Revised Selected Papers, pp. 1. Springer Nature, 2022. Peter J. Huber. Robust estimation of a location parameter. *Annals of Mathematical Statistics*, 35:492–518, 1964. Peter J Huber. A robust version of the probability ratio test. *The Annals of Mathematical Statistics*, pp. 1753–1758, 1965. Peter J Huber. *Robust statistics*, volume 523. John Wiley & Sons, 2004. Sayash Kapoor, Kumar Kshitij Patel, and Purushottam Kar. Corruption-tolerant bandit learning. Machine Learning, 108(4):687–715, 2019. T.L Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985. ISSN 0196-8858. doi: https://doi.org/10.1016/0196-8858(85)90002-8. URL https://www.sciencedirect.com/science/article/pii/0196885885900028. Tor Lattimore and Csaba Szepesvári. *Bandit algorithms*. Cambridge University Press, 2020. Guillaume Lecué and Matthieu Lerasle. Robust machine learning by median-of-means: theory and practice. The Annals of Statistics, 48(2):906–931, 2020. Kyungjae Lee, Hongjun Yang, Sungbin Lim, and Songhwai Oh. Optimal algorithms for stochastic multiarmed bandits with heavy tailed rewards. *Advances in Neural Information Processing Systems*, 33:8452– 8462, 2020. Matthieu Lerasle, Zoltán Szabó, Timothée Mathieu, and Guillaume Lecué. Monk outlier-robust mean embedding estimation by median-of-means. In *International Conference on Machine Learning*, pp. 3782–3793. PMLR, 2019. 
Thodoris Lykouris, Vahab Mirrokni, and Renato Paes Leme. Stochastic bandits robust to adversarial corruptions. In *Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing*, pp. 114–122, 2018. Odalric-Ambrym Maillard. *Mathematics of Statistical Sequential Decision Making*. Habilitation à diriger des recherches, Université de Lille Nord de France, February 2019. URL https://hal.archives-ouvertes.fr/tel-02077035. Timothée Mathieu. Concentration study of m-estimators using the influence function, 2022. Andres Munoz Medina and Scott Yang. No-regret algorithms for heavy-tailed linear bandits. In *International Conference on Machine Learning*, pp. 1642–1650. PMLR, 2016. Stanislav Minsker. Distributed statistical estimation and rates of convergence in normal approximation. *Electronic Journal of Statistics*, 13(2):5213–5252, 2019. Stanislav Minsker and Mohamed Ndaoud. Robust and efficient mean estimation: an approach based on the properties of self-normalized sums. *Electronic Journal of Statistics*, 15(2):6036–6070, 2021. Areesh Mittal and Grani A Hanasusanto. Finding minimum volume circumscribing ellipsoids using generalized copositive programming. *Operations Research*, 70(5):2867–2882, 2022. Roman Pogodin and Tor Lattimore. On first-order bounds, variance and gap-dependent bounds for adversarial bandits. In *Uncertainty in Artificial Intelligence*, pp. 894–904. PMLR, 2020. Adarsh Prasad, Sivaraman Balakrishnan, and Pradeep Ravikumar. A unified approach to robust mean estimation. *arXiv preprint arXiv:1907.00927*, 2019. Adarsh Prasad, Sivaraman Balakrishnan, and Pradeep Ravikumar. A robust univariate mean estimator is all you need. In *International Conference on Artificial Intelligence and Statistics*, pp. 4034–4044. PMLR, 2020. Han Shao, Xiaotian Yu, Irwin King, and Michael R Lyu. Almost optimal algorithms for linear stochastic bandits with heavy-tailed payoffs. *Advances in Neural Information Processing Systems*, 31, 2018.
Karanjit Singh and Shuchita Upadhyaya. Outlier detection: applications and techniques. *International Journal of Computer Science Issues (IJCSI)*, 9(1):307, 2012. James G. Wendel. Note on the gamma function. *American Mathematical Monthly*, 55:563, 1948.

# Appendix

## Table Of Contents

- A Proof of Theorems
  - A.1 Proof of Theorem 1: Regret Lower Bound
  - A.2 Proof of Theorem 2: Concentration of Huber's Estimator
  - A.3 Proof of Theorem 4: Concentration of Sequential Huber's Estimator
  - A.4 Proof of Theorem 3: Regret Upper Bound of HuberUCB
- B Proof of Technical Lemmas and Corollaries
  - B.1 Preliminary lemmas
  - B.2 Lemmas for Regret upper bound
  - B.3 Lemmas for concentration of robust estimators
- C Additional experimental results
  - C.1 Sensitivity to β and ε
  - C.2 Corrupted bandits with adversarial algorithms

## A Proof Of Theorems

## A.1 Proof Of Theorem 1: Regret Lower Bound

The theorem is a consequence of Lemmas 2, 3 and 4. From Lemma 2, we have

$$\liminf_{n\to\infty}\frac{\mathbb{E}_{\nu_{\varepsilon}}[T_{i}(n)]}{\log(n)}\geq\frac{1}{D_{\mathrm{KL}}(P_{0},P_{1})}\tag{16}$$

**Student distributions** Let $P_{0},P_{1}$ be Student distributions with parameter $d=3$ and gap $\Delta_{i}$ as in Lemma 3. From Lemma 3, we get

$$D_{\mathrm{KL}}(P_{0},P_{1})\leq\begin{cases}17\Delta_{i}^{2}&\text{if }\Delta_{i}\leq1\\4\log\left(\Delta_{i}\right)+\log\left(50\right)&\text{if }\Delta_{i}>1\end{cases}\tag{17}$$

Then, using that $\log(50)\leq17$, $D_{\mathrm{KL}}(P_{0},P_{1})\leq17\Delta_{i}^{2}\wedge4\log\left(\Delta_{i}\right)+17$.
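As a quick numerical sanity check of the Student-distribution KL bound used here (a sketch, not part of the proof; the grid-based integration and the choice of gap $a=1/2$ are ours):

```python
import math

D = 3  # degrees of freedom of the Student distribution
C = math.gamma((D + 1) / 2) / (math.gamma(D / 2) * math.sqrt(D * math.pi))

def student_pdf(x, loc=0.0):
    # density of a Student distribution with d = 3 degrees of freedom, shifted by loc
    return C * (1 + (x - loc) ** 2 / D) ** (-(D + 1) / 2)

def kl_student(a, lo=-100.0, hi=100.0, n=200_000):
    # midpoint-rule approximation of D_KL(f_0, f_a), f_a being f_0 shifted by a
    h = (hi - lo) / n
    kl = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        p, q = student_pdf(x), student_pdf(x, loc=a)
        kl += p * math.log(p / q) * h
    return kl

gap = 0.5  # a gap Delta_i <= 1, so the quadratic case of the bound applies
print(kl_student(gap) <= 17 * gap ** 2)
```

The numerically computed divergence is far below the stated $17\Delta_i^2$ ceiling, which is expected since the bound is only tight up to constants.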
Finally, use that the variance of a Student distribution with three degrees of freedom is $\sigma_{i}^{2}=3$ to get that

$$D_{\mathrm{KL}}(P_{0},P_{1})\leq51\frac{\Delta_{i}^{2}}{\sigma_{i}^{2}}\wedge4\log\left(\frac{\Delta_{i}}{\sigma_{i}}\right)+22.$$

**Bernoulli distributions** Let $P_{0},P_{1}$ be as in Lemma 4 with gap $\Delta_{i}$ and variance $\sigma_{i}$. If $2\sigma_{i}\sqrt{\frac{\varepsilon}{1-2\varepsilon}}<\Delta_{i}<2\sigma_{i}$, then

$$D_{\mathrm{KL}}(P_{0},P_{1})\leq\frac{\overline{\Delta}_{i,\varepsilon}}{2\sigma_{i}}\log\left(1+\frac{\overline{\Delta}_{i,\varepsilon}}{2\sigma_{i}-\overline{\Delta}_{i,\varepsilon}}\right)\wedge(1-2\varepsilon)\log\left(1+\frac{1-2\varepsilon}{\varepsilon}\right)\tag{18}$$

Use Equation (16) to conclude.

## A.2 Proof Of Theorem 2: Concentration Of Huber's Estimator

First, we control the deviations of Huber's estimator using the deviations of $\psi_{\beta}(X-\mathrm{Hub}_{\beta}(X_{1}^{n}))$. We will need the following lemma to control the variance of $\psi_{\beta}(X-\mathrm{Hub}_{\beta}(X_{1}^{n}))$, which will in turn allow us to control its deviation with Lemma 8.

**Lemma 7 (Controlling Variance of Influence of Huber's Estimator)** *Suppose that $Y_{1},\ldots,Y_{n}$ are i.i.d. with law $P$. Then*

$$\mathrm{Var}(\psi_{\beta}(Y-\mathrm{Hub}_{\beta}(P)))\leq\mathrm{Var}(Y)=\sigma^{2}$$

**Lemma 8 (Concentrating Huber's Estimator by Concentrating the Influence)** *Suppose that $X_{1},\ldots,X_{n}$ are i.i.d. with law $(1-\varepsilon)P+\varepsilon H$ for some $H\in\mathcal{P}$ and proportion of outliers $\varepsilon\in(0,1/2)$. Then, for any $\eta>0$ and $\lambda\in(0,\beta/2]$, we have*

$$\mathbb{P}(|\mathrm{Hub}_{\beta}(X_{1}^{n})-\mathrm{Hub}_{\beta}(P)|\geq\lambda)\leq\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|\geq\lambda\left(p-\eta-\varepsilon\right)_{+}\right)+2e^{-2n\eta^{2}}$$

where $p=\mathbb{P}(|Y-\mathbb{E}[X]|\leq\beta/2)$.

Then, using these lemmas, we can prove the theorem.

**Step 1.** For any $\delta\in(0,1)$, with probability larger than $1-3\delta$,

$$\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|\leq\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\beta\frac{\log(1/\delta)}{2n}+2\beta\varepsilon+2\beta\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{n\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}.\tag{19}$$

Proof: Write that $X_{i}=(1-W_{i})Y_{i}+W_{i}Z_{i}$ where $W_{1}$, . . .
, $W_{n}$ are i.i.d. $\{0,1\}$ Bernoulli random variables with mean $\varepsilon$, $Y_{1},\ldots,Y_{n}$ are i.i.d. $\sim P$ and $Z_{1},\ldots,Z_{n}$ are i.i.d. with law $H$. We have

$$\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|=\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(Y_{i}-\mathrm{Hub}_{\beta}(P))+\frac{1}{n}\sum_{i=1}^{n}\mathds{1}\{W_{i}=1\}\left(\psi_{\beta}(Z_{i}-\mathrm{Hub}_{\beta}(P))-\psi_{\beta}(Y_{i}-\mathrm{Hub}_{\beta}(P))\right)\right|\leq\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(Y_{i}-\mathrm{Hub}_{\beta}(P))\right|+2\beta\frac{1}{n}\sum_{i=1}^{n}\mathds{1}\{W_{i}=1\}$$

Remark that, by definition, $\mathrm{Hub}_{\beta}(P)$ is the root of the equation $\mathbb{E}[\psi_{\beta}(Y-\mathrm{Hub}_{\beta}(P))]=0$. From Bernstein's inequality, for any $\delta\in(0,1)$,

$$\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(Y_{i}-\mathrm{Hub}_{\beta}(P))\right|\geq\sqrt{\frac{2V_{\psi_{\beta}}\log(1/\delta)}{n}}+\beta\frac{\log(1/\delta)}{3n}\right)\leq2\delta\,,$$

where $V_{\psi_{\beta}}=\mathrm{Var}(\psi_{\beta}(Y_{i}-\mathrm{Hub}_{\beta}(P)))$. Then, using that Bernoulli random variables with mean $\varepsilon$ are sub-Gaussian with variance parameter $\frac{1-2\varepsilon}{2\log((1-\varepsilon)/\varepsilon)}$ (see Lemma 6 of Bourel et al. (2020)),

$$\mathbb{P}\left(\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=1\}\leq\varepsilon+\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{n\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}\right)\geq1-\delta.$$

Then, using Lemma 7, we get for any $\delta\in(0,1)$, with probability larger than $1-3\delta$,

$$\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|\leq\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\beta\frac{\log(1/\delta)}{2n}+2\beta\varepsilon+2\beta\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{n\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}.\tag{20}$$

**Step 2.** Using $\eta=\sqrt{\frac{\log(1/\delta)}{2n}}$, the hypotheses of Lemma 8 are verified.
Proof: To apply Lemma 8, it is sufficient that

$$\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\beta\frac{\log(1/\delta)}{3n}+2\beta\varepsilon+2\beta\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{n\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}\leq\frac{\beta}{2}\left(p-\sqrt{\frac{\log(1/\delta)}{2n}}-\varepsilon\right)\tag{21}$$

and using that $4\sigma\leq\beta$, we have that it is sufficient that

$$\sqrt{\frac{\log(1/\delta)}{2n}}+\frac{\log(1/\delta)}{3n}+2\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{n\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}\leq\frac{1}{2}\left(p-5\varepsilon\right).\tag{22}$$

This is a second-order polynomial inequality in $\sqrt{\log(1/\delta)/n}$ that we need to solve. We use the following elementary algebra lemma.

**Lemma 9 (2nd order polynomial root bound)** *Let $a,b,c$ be three positive constants and let $x_{+}$ denote the positive root of $ax^{2}+bx-c$. Suppose that $\frac{4ac}{b^{2}}\leq d$; then $x_{+}\geq\frac{2c(\sqrt{d+1}-1)}{db}$, so that any $x\in\left[0,\frac{2c(\sqrt{d+1}-1)}{db}\right]$ verifies $ax^{2}+bx-c\leq0$.*

Observe that we have

$$\frac{2\left(p-5\varepsilon\right)}{3\left(\frac{1}{\sqrt{2}}+\frac{2\sqrt{1-2\varepsilon}}{\sqrt{\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}\right)^{2}}\leq\frac{4}{3}$$

and $\left(\sqrt{4/3+1}-1\right)/(4/3)\geq8/7$, hence, from Lemma 9, we get the following sufficient condition for Equation (22) to hold:

$$\sqrt{\log(1/\delta)/n}\leq\frac{8\sqrt{2}\left(p-5\varepsilon\right)}{7\left(1+\frac{2\sqrt{2(1-2\varepsilon)}}{\sqrt{\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}\right)}.$$

Hence, taking this to the square,

$$\log(1/\delta)\leq n\frac{128\left(p-5\varepsilon\right)^{2}}{49\left(1+\frac{2\sqrt{2(1-2\varepsilon)}}{\sqrt{\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}\right)^{2}}.$$

**Step 3.** Using Lemma 8 and Step 1, we prove that the theorem is true.
Proof: The hypotheses of Lemma 8 are verified, and using its result together with Equation (19), we get with probability larger than $1-5\delta$,

$$|\mathrm{Hub}_{\beta}(X_{1}^{n})-\mathrm{Hub}_{\beta}(P)|\leq\frac{\sigma\sqrt{\frac{2\log(1/\delta)}{n}}+\beta\frac{\log(1/\delta)}{3n}+2\beta\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{n\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}+2\beta\varepsilon}{\left(p-\sqrt{\frac{\log(1/\delta)}{2n}}-\varepsilon\right)_{+}}.$$

## A.3 Proof Of Theorem 4: Concentration Of Sequential Huber's Estimator

In this proof, we denote

$$r_{t}(\delta):=\frac{\sigma\sqrt{\frac{2\log(1/\delta)}{t}}+\beta\frac{\log(1/\delta)}{3t}+2\beta\overline{\varepsilon}\sqrt{\frac{\log(1/\delta)}{t}}+2\beta\varepsilon}{\left(p-\sqrt{\frac{\log(1/\delta)}{2t}}-\varepsilon\right)_{+}}$$

this is the rate of convergence of $\mathrm{Hub}_{\beta}(X_{1}^{t})$ to $\mathrm{Hub}_{\beta}(P)$, as stated by Theorem 2. Let $P_{2}(t)<t<P_{2}(t+1)$ and define

$$f_{t}(u)=\frac{1}{t}\sum_{i=1}^{t}\psi_{\beta}(X_{i}-u).$$

$f_{t}$ is a continuous function; we take its derivative in the sense of distributions to get that

$$f_{t}(\mathrm{Hub}_{\beta}(P))=f_{t}(H_{t})+(\mathrm{Hub}_{\beta}(P)-H_{t})f_{t}^{\prime}(H_{t})+\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}f_{t}^{\prime\prime}\left(u\right)(\mathrm{Hub}_{\beta}(P)-u)\mathrm{d}u.$$

Then, by definition of $\mathrm{SeqHub}_{t}$, we also have

$$0=f_{t}(H_{t})+(\mathrm{SeqHub}_{t}-H_{t})f_{t}^{\prime}(H_{t}).$$

Hence,

$$f_{t}(\mathrm{Hub}_{\beta}(P))=(\mathrm{Hub}_{\beta}(P)-\mathrm{SeqHub}_{t})f_{t}^{\prime}(H_{t})+\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}f_{t}^{\prime\prime}\left(u\right)(\mathrm{Hub}_{\beta}(P)-u)\mathrm{d}u,\tag{23}$$

where $f_{t}^{\prime}(u)=-\frac{1}{t}\sum_{i=1}^{t}\mathds{1}\{|X_{i}-u|\leq\beta\}$ and $f_{t}^{\prime\prime}(u)=-\frac{1}{t}\sum_{i=1}^{t}(\delta_{X_{i}-u-\beta}-\delta_{X_{i}-u+\beta})$, where $\delta_{x}$ is the Dirac mass in $x$. $f_{t}^{\prime}(H_{t})$ is a sum of indicator functions and should be close to $\mathbb{P}(|X-\mathbb{E}[X]|\leq\beta)$, which is close to $1$.

**Bound on $f_{t}^{\prime}(H_{t})$.** We bound $|f_{t}^{\prime}(H_{t})|$.
We have

$$|f_{t}^{\prime}(H_{t})|=\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-H_{t}|\leq\beta\}\geq\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-\mathrm{Hub}_{\beta}(P)|\leq\beta-|H_{t}-\mathrm{Hub}_{\beta}(P)|\}.$$

Choose the limiting $\delta$, which is $\delta=\exp\left(-P_{2}(t)\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\overline{\varepsilon}\sqrt{2}\right)^{2}}\right)$; from Equation (21), we get that $r_{t}(\delta)\leq\beta/2$. Then, we have from Theorem 2, with probability larger than $1-5\exp\left(-P_{2}(t)\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\overline{\varepsilon}\sqrt{2}\right)^{2}}\right)$, that $|H_{t}-\mathrm{Hub}_{\beta}(P)|\leq\beta/2$, and then,

$$|f^{\prime}_{t}(H_{t})|\geq\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-\mathrm{Hub}_{\beta}(P)|\leq\beta/2\}.\tag{24}$$

**Bound on the integral of $f^{\prime\prime}_{t}$.** We have

$$\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}f_{t}^{\prime\prime}\left(u\right)(\mathrm{Hub}_{\beta}(P)-u)\mathrm{d}u=\frac{1}{t}\sum_{i=1}^{t}\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}\left(\delta_{X_{i}-u-\beta}-\delta_{X_{i}-u+\beta}\right)(\mathrm{Hub}_{\beta}(P)-u)\mathrm{d}u=\frac{1}{t}\sum_{i=1}^{t}(\mathrm{Hub}_{\beta}(P)-X_{i}-\beta)\mathbf{1}\{X_{i}\in I_{-}\}-(\mathrm{Hub}_{\beta}(P)-X_{i}+\beta)\mathbf{1}\{X_{i}\in I_{+}\}$$

where $I_{-}$ and $I_{+}$ are the two undirected intervals $I_{-}=[H_{t}-\beta,\mathrm{Hub}_{\beta}(P)-\beta]$ and $I_{+}=[H_{t}+\beta,\mathrm{Hub}_{\beta}(P)+\beta]$.

Figure 3: Illustration of $I_{-}$ and $I_{+}$ (the positions of $\mathrm{Hub}_{\beta}(P)\pm\beta$ and $H_{t}\pm\beta$ on the real line).

Having that $|\mathrm{Hub}_{\beta}(X_{1}^{t})-H_{t}|\leq\beta/2$, we have that $I_{-}\cap I_{+}=\emptyset$. Then, we choose either the sum $\frac{1}{t}\sum_{i=1}^{t}(\mathrm{Hub}_{\beta}(P)-X_{i}-\beta)\mathbf{1}\{X_{i}\in I_{-}\}$ or $\frac{1}{t}\sum_{i=1}^{t}(\mathrm{Hub}_{\beta}(P)-X_{i}+\beta)\mathbf{1}\{X_{i}\in I_{+}\}$ according to which one is larger.
If $X_{i}\in I_{+}$, we have $|\mathrm{Hub}_{\beta}(P)-X_{i}+\beta|\leq|\mathrm{Hub}_{\beta}(P)-H_{t}|$, and if $X_{i}\in I_{-}$, $|\mathrm{Hub}_{\beta}(P)-X_{i}-\beta|\leq|\mathrm{Hub}_{\beta}(P)-H_{t}|$, hence we have

$$\left|\frac{1}{t}\sum_{i=1}^{t}\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}\left(\delta_{X_{i}-u-\beta}-\delta_{X_{i}-u+\beta}\right)(\mathrm{Hub}_{\beta}(P)-u)\mathrm{d}u\right|\leq|\mathrm{Hub}_{\beta}(P)-H_{t}|\max\left(\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{X_{i}\in I_{-}\},\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{X_{i}\in I_{+}\}\right).$$

Now, remark that by Equation (24), we have

$$\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-H_{t}|\leq\beta\}=|f_{t}^{\prime}(H_{t})|\geq\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-\mathrm{Hub}_{\beta}(P)|\leq\beta/2\}$$

Let us denote $p_{t}(\beta)=\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-\mathrm{Hub}_{\beta}(P)|\leq\beta/2\}$. There cannot be more than a $1-p_{t}(\beta)$ fraction of the $X_{i}$'s outside $[H_{t}-\beta,H_{t}+\beta]$. Similarly, there cannot be more than a $1-p_{t}(\beta)$ fraction of the $X_{i}$'s outside $[\mathrm{Hub}_{\beta}(P)-\beta/2,\mathrm{Hub}_{\beta}(P)+\beta/2]$. Hence, if $H_{t}\leq\mathrm{Hub}_{\beta}(P)$, then $I_{-}\subset[H_{t}-\beta,H_{t}+\beta]^{c}$ and the proportion of $X_{i}$'s in $I_{-}$ cannot be larger than $1-p_{t}(\beta)$. If $\mathrm{Hub}_{\beta}(P)\leq H_{t}$, then $I_{-}\subset[\mathrm{Hub}_{\beta}(P)-\beta,\mathrm{Hub}_{\beta}(P)+\beta]^{c}$, which is itself a subset of $[\mathrm{Hub}_{\beta}(P)-\beta/2,\mathrm{Hub}_{\beta}(P)+\beta/2]^{c}$, and the proportion of $X_{i}$'s included in $[\mathrm{Hub}_{\beta}(P)-\beta/2,\mathrm{Hub}_{\beta}(P)+\beta/2]^{c}$ cannot be larger than $1-p_{t}(\beta)$. In both cases, $\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{X_{i}\in I_{-}\}\leq1-p_{t}(\beta)$.
A similar reasoning holds for $I_{+}$, hence

$$\left|\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}f_{t}^{\prime\prime}\left(u\right)(\mathrm{Hub}_{\beta}(P)-u)\mathrm{d}u\right|\leq(1-p_{t}(\beta))|\mathrm{Hub}_{\beta}(P)-H_{t}|$$

Then, using Equation (24) and Equation (23), we get with probability larger than $1-5\exp\left(-P_{2}(t)\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\overline{\varepsilon}\sqrt{2}\right)^{2}}\right)$,

$$|\mathrm{Hub}_{\beta}(P)-\mathrm{SeqHub}_{t}|\leq\frac{f_{t}(\mathrm{Hub}_{\beta}(P))+\left|\int_{H_{t}}^{\mathrm{Hub}_{\beta}(P)}f_{t}^{\prime\prime}\left(u\right)\left(\mathrm{Hub}_{\beta}(P)-u\right)\mathrm{d}u\right|}{f_{t}^{\prime}(H_{t})}\leq\frac{f_{t}(\mathrm{Hub}_{\beta}(P))+(1-p_{t}(\beta))|\mathrm{Hub}_{\beta}(P)-H_{t}|}{p_{t}(\beta)}.\tag{25}$$

Then let $\delta\leq\exp\left(-P_{2}(t)\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\overline{\varepsilon}\sqrt{2}\right)^{2}}\right)$; we use Equation (20) to say that with probability larger than $1-3\delta$, we have

$$f_{t}(\mathrm{Hub}_{\beta}(P))\leq\sigma\sqrt{\frac{2\log(1/\delta)}{t}}+\beta\frac{\log(1/\delta)}{2t}+2\beta\varepsilon+2\beta\sqrt{\frac{\log(1/\delta)(1-2\varepsilon)}{t\log\left(\frac{1-\varepsilon}{\varepsilon}\right)}}.$$

Then, using Hoeffding's inequality after taking out the outliers, we get with probability larger than $1-\delta$ that

$$p_{t}(\beta)=\frac{1}{t}\sum_{i=1}^{t}\mathbf{1}\{|X_{i}-\mathrm{Hub}_{\beta}(P)|\leq\beta/2\}\geq p-\sqrt{\frac{\log(1/\delta)}{2t}}-\varepsilon$$

to recover that the first term of the right-hand side of Equation (25) is smaller than $r_{t}(\delta)$.
Then, using Theorem 2, we get that with probability larger than $1-5\exp\left(-P_{2}(t)\frac{128(p-5\varepsilon)^{2}}{49\left(1+2\overline{\varepsilon}\sqrt{2}\right)^{2}}\right)-9\delta\geq1-14\delta$,

$$|\mathrm{Hub}_{\beta}(P)-\mathrm{SeqHub}_{t}|\leq r_{t}(\delta)+\left(\frac{1}{p-\sqrt{\frac{\log(1/\delta)}{2t}}-\varepsilon}-1\right)r_{P_{2}(t)}(\delta).$$

## A.4 Proof Of Theorem 3: Regret Upper Bound Of HuberUCB

If $A_{t}=i$ then at least one of the following four inequalities is true:

$$\widehat{\mathrm{Hub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)\leq\mu_{1}\tag{26}$$

or

$$\widehat{\mathrm{Hub}}_{i,T_{i}(t-1)}\geq\mu_{i}+B_{i}(T_{i}(t-1),t)\tag{27}$$

or

$$\Delta_{i}<2B_{i}(T_{i}(t-1),t)\tag{28}$$

or

$$T_{1}(t-1)<s_{lim}(t)=\frac{98\log(t)}{128\left(p-5\varepsilon\right)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\tag{29}$$

Indeed, if $T_{i}(t-1)<s_{lim}(t)$, then $B_{i}(T_{i}(t-1),t)=\infty$ and Inequality (28) is true. On the other hand, if $T_{i}(t-1)\geq s_{lim}(t)$, then $B_{i}(T_{i}(t-1),t)$ is finite, and if all four inequalities are false, then

$$\widehat{\mathrm{Hub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)>\mu_{1}=\mu_{i}+\Delta_{i}\geq\mu_{i}+2B_{i}(T_{i}(t-1),n)\geq\mu_{i}+2B_{i}(T_{i}(t-1),t)\geq\widehat{\mathrm{Hub}}_{i,T_{i}(t-1)}+B_{i}(T_{i}(t-1),t)$$

which implies that $A_{t}\neq i$.

**Step 1.** We have that $\mathbb{P}\left((26)\text{ is true}\right)\leq5/t$.

Proof: We have that

$$\mathbb{P}\left(\widehat{\mathrm{Hub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)\leq\mu_{1}\right)\leq\sum_{s=1}^{t}\mathbb{P}\left(\widehat{\mathrm{Hub}}_{1,s}+B_{1}(s,t)\leq\mu_{1}\right)=\sum_{s=\lceil s_{lim}(t)\rceil}^{t}\mathbb{P}\left(\widehat{\mathrm{Hub}}_{1,s}-\mu_{1}\leq-B_{1}(s,t)\right).$$

Then, using Theorem 2, we get

$$\mathbb{P}\left(\widehat{\mathrm{Hub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)\leq\mu_{1}\right)\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}5e^{-\log(t^{2})}\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}\frac{5}{t^{2}}\leq\frac{5}{t}.$$

**Step 2.**
Similarly, for arm $i$, we have

$$\mathbb{P}\left(\widehat{\mathrm{Hub}}_{i,T_{i}(t-1)}\geq\mu_{i}+B_{i}(T_{i}(t-1),t)\right)\leq\frac{5}{t}$$

Proof: We have

$$\mathbb{P}\left(\widehat{\mathrm{Hub}}_{i,T_{i}(t-1)}\geq\mu_{i}+B_{i}(T_{i}(t-1),t)\right)\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}\mathbb{P}\left(\widehat{\mathrm{Hub}}_{i,s}-\mu_{i}\geq B_{i}(s,t)\right)\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}5e^{-\log(t^{2})}\leq\frac{5}{t}.$$

**Step 3.** Let $v\in\mathbb{N}$. If one of the two following conditions is true, then for all $t$ such that $T_{i}(t-1)\geq v$, we have $\Delta_{i}\geq2B_{i}(T_{i}(t-1),t)$ (i.e. Equation (28) is false).

Condition 1: if $\widetilde{\Delta}_{i,\varepsilon}>12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$ and $v\leq\log(n)\frac{96\beta_{i}}{9\widetilde{\Delta}_{i,\varepsilon}}$.

Condition 2: if $\widetilde{\Delta}_{i,\varepsilon}\leq12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$ and $v\leq\frac{50}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2}\log(n)$.
Proof: We search for the smallest value $v\geq s_{lim}(t)$ such that $\Delta_{i}$ verifies

$$\Delta_{i}\geq2B_{i}(v,t)=2\frac{\sigma_{i}\sqrt{\frac{2\log(t^{2})}{v}}+\beta_{i}\frac{\log(t^{2})}{3v}+2\overline{\varepsilon}\beta_{i}\sqrt{\frac{\log(t^{2})}{v}}+2\beta_{i}\varepsilon}{\left(p-\sqrt{\frac{\log(t^{2})}{2v}}-\varepsilon\right)_{+}}+2b_{i}.$$

First, we simplify the expression: having that $v\geq s_{lim}(t)$, we have

$$\frac{\log(t^{2})}{2v}\leq\frac{128(p-5\varepsilon)^{2}}{98(1+9/7)^{2}}\leq\frac{(p-\varepsilon)^{2}}{4},$$

hence we simplify to

$$\Delta_{i}\geq\frac{4}{(p-\varepsilon)}\left(\sigma_{i}\sqrt{\frac{2\log(t^{2})}{v}}+\beta_{i}\frac{\log(t^{2})}{3v}+2\beta_{i}\overline{\varepsilon}\sqrt{\frac{\log(t^{2})}{v}}+2\beta_{i}\varepsilon\right)+2b_{i}.$$

Let us denote $\widetilde{\Delta}_{i,\varepsilon}=(\Delta_{i}-2b_{i})(p-\varepsilon)-8\beta_{i}\varepsilon$; we are searching for $v$ such that

$$\beta_{i}\frac{\log(t^{2})}{3v}+\sqrt{\frac{\log(t^{2})}{v}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)-\frac{\widetilde{\Delta}_{i,\varepsilon}}{4}\leq0$$

This is a second order polynomial in $\sqrt{\log(t^{2})/v}$. If $\widetilde{\Delta}_{i,\varepsilon}>0$, then the smallest such $v>0$ verifies

$$\sqrt{\frac{\log(t^{2})}{v}}=\frac{3}{2\beta_{i}}\left(-\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)+\sqrt{\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2}+\frac{\widetilde{\Delta}_{i,\varepsilon}\beta_{i}}{3}}\right)$$

**First setting:** if $\widetilde{\Delta}_{i,\varepsilon}>12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$. In that case, we have

$$\sqrt{\frac{\log(t^{2})}{v}}\geq\frac{3}{2\beta_{i}}\left(-\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)+\sqrt{\frac{\beta_{i}\widetilde{\Delta}_{i,\varepsilon}}{3}}\right)\geq\frac{3}{2\beta_{i}}\sqrt{\frac{\beta_{i}\widetilde{\Delta}_{i,\varepsilon}}{12}}=\sqrt{\frac{9\widetilde{\Delta}_{i,\varepsilon}}{48\beta_{i}}}$$

Hence, $v\leq\log(t)\frac{96\beta_{i}}{9\widetilde{\Delta}_{i,\varepsilon}}$.
**Second setting:** if $\widetilde{\Delta}_{i,\varepsilon}\leq12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$, then we use Lemma 9, using that

$$\frac{\widetilde{\Delta}_{i,\varepsilon}\beta_{i}}{3\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2}}\leq4$$

and the fact that $\frac{\sqrt{1+4}-1}{4}\geq\frac{3}{10}$, to get

$$\sqrt{\frac{\log(t^{2})}{v}}\geq\frac{3\widetilde{\Delta}_{i,\varepsilon}}{5\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)}$$

Hence,

$$v\leq\frac{50}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2}\log(t).$$

**Step 4.** Using all the previous steps, we prove the theorem.

Proof: We have

$$\mathbb{E}[T_{i}(t)]=\mathbb{E}\left[\sum_{s=1}^{t}\mathbf{1}\{A_{s}=i\}\right]\leq\lfloor\max(v,s_{lim}(t))\rfloor+\mathbb{E}\left[\sum_{s=\lfloor\max(v,s_{lim}(t))\rfloor+1}^{t}\mathbf{1}\{A_{s}=i\text{ and (28) is false}\}\right]$$
$$\leq\lfloor\max(v,s_{lim}(t))\rfloor+\mathbb{E}\left[\sum_{s=\lfloor\max(v,s_{lim}(t))\rfloor+1}^{t}\mathbf{1}\{\text{(26) or (27) or (29) is true}\}\right]=\lfloor\max(v,s_{lim}(t))\rfloor+\sum_{s=\lfloor\max(v,s_{lim}(t))\rfloor+1}^{t}\mathbb{P}\left(\text{(26) or (27) is true}\right)\leq\lfloor\max(v,s_{lim}(t))\rfloor+2\sum_{s=\lfloor\max(v,s_{lim}(t))\rfloor+1}^{t}\frac{5}{s}$$

Using the harmonic series bound by $\log(t)+1$, we have

$$\mathbb{E}[T_{i}(t)]\leq\max(v,s_{lim}(t))+10(\log(t)+1)$$

Then, we replace the value of $v$.

**First setting:** if $\widetilde{\Delta}_{i,\varepsilon}>12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$, then

$$\mathbb{E}[T_{i}(t)]\leq\log(t)\max\left(\frac{96\beta_{i}}{9\widetilde{\Delta}_{i,\varepsilon}},\frac{4}{\left(p-5\varepsilon\right)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)+10(\log(t)+1)$$

**Second setting:** if $\widetilde{\Delta}_{i,\varepsilon}\leq12\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$, then
$$\mathbb{E}[T_{i}(t)]\leq\log(t)\max\left(\frac{50}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2},\frac{4}{(p-5\varepsilon)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)+10(\log(t)+1).$$

This concludes the proof of Theorem 3.

## B Proof Of Technical Lemmas And Corollaries

## B.1 Preliminary Lemmas

## B.1.1 Proof Of Lemma 1: Regret Decomposition

From Equation (Corrupted regret), we have

$$R_{n}=\sum_{a=1}^{k}\sum_{t=1}^{n}\mathbb{E}\left[\left(\max_{a}\mathbb{E}_{P_{a}}[X^{\prime}]-X_{t}^{\prime}\right)\mathbf{1}\left\{A_{t}=a\right\}\right]$$

Then, we condition on $A_{t}$:

$$\mathbb{E}\left[\left(\max_{a}\mathbb{E}_{P_{a}}[X^{\prime}]-X^{\prime}_{t}\right)\mathbf{1}\left\{A_{t}=a\right\}\,\middle|\,A_{t}\right]=\mathbf{1}\{A_{t}=a\}\,\mathbb{E}\left[\max_{a}\mathbb{E}_{P_{a}}[X^{\prime}]-X^{\prime}_{t}\,\middle|\,A_{t}\right]=\mathbf{1}\{A_{t}=a\}\left(\max_{a}\mathbb{E}_{P_{a}}[X^{\prime}]-\mu_{A_{t}}\right)=\mathbf{1}\{A_{t}=a\}\left(\max_{a}\mathbb{E}_{P_{a}}[X^{\prime}]-\mu_{a}\right)=\mathbf{1}\{A_{t}=a\}\Delta_{a}$$

and this stays true whatever the policy, because the policy at time $t$ uses knowledge up to time $t-1$, hence its decision does not depend on $X_{t}$. Hence, we have

$$R_{n}(\pi)=\sum_{a=1}^{k}\Delta_{a}\mathbb{E}_{\pi}\left[T_{a}(n)\right]$$

where $T_{a}(n)$ is with respect to the randomness of $\pi$, which is to say that we compute $\mathbb{E}[T_{a}(n)]$ in the corrupted setting and not in the uncorrupted one:

$$R_{n}=\sum_{a=1}^{k}\Delta_{a}\mathbb{E}_{\nu_{\varepsilon}}\left[T_{a}(n)\right].$$

## B.1.2 Proof Of Lemma 3: KL For Student's Distribution

First, we compute the $\chi^{2}$ divergence between the two laws $f_{a}$ and $f_{0}$. We have, for any $a\geq0$,

$$\mathrm{d}_{\chi^{2}}(f_{a},f_{0})=\int\frac{(f_{a}(x)-f_{0}(x))^{2}}{f_{0}(x)}\mathrm{d}x=\frac{\Gamma\left(\frac{d+1}{2}\right)}{\Gamma\left(\frac{d}{2}\right)\sqrt{d\pi}}\left(\int_{\mathbb{R}}\frac{\left(1+\frac{x^{2}}{d}\right)^{\frac{d+1}{2}}}{\left(1+\frac{(x-a)^{2}}{d}\right)^{d+1}}\mathrm{d}x-2\int_{\mathbb{R}}\left(1+\frac{(x-a)^{2}}{d}\right)^{-\frac{d+1}{2}}\mathrm{d}x+\int_{\mathbb{R}}\left(1+\frac{x^{2}}{d}\right)^{-\frac{d+1}{2}}\mathrm{d}x\right)$$

The last two terms are respectively equal to $-2$ and $1$, using the fact that the Student distribution integrates to $1$. Then, we do the change of variable $y=x-a$ in the first integral to get

$$\mathrm{d}_{\chi^{2}}(f_{a},f_{0})=\frac{\Gamma\left(\frac{d+1}{2}\right)}{\Gamma\left(\frac{d}{2}\right)\sqrt{d\pi}}\int_{\mathbb{R}}\frac{\left(1+\frac{(y+a)^{2}}{d}\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y-1.$$

This is a polynomial in the variable $a$. We have the following lemma, proven in Section B.3.4.

**Lemma 10** *For $a\geq0$ and $d\geq0$, we have the following algebraic inequality:*

$$\int_{\mathbb{R}}\frac{\left(1+\frac{(y+a)^{2}}{d}\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y\leq\frac{a^{2}}{2\sqrt{d}}(d+1)^{2}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}+\int_{\mathbb{R}}\frac{(1+y^{2}/d)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y.$$

Using this lemma, and because we recognize, up to a constant, the integral of the Student distribution on $\mathbb{R}$ in the right-hand side, we have

$$\mathrm{d}_{\chi^{2}}(f_{a},f_{0})\leq\frac{\Gamma\left(\frac{d+1}{2}\right)}{\Gamma\left(\frac{d}{2}\right)\sqrt{d\pi}}\frac{a^{2}}{2\sqrt{d}}(d+1)^{2}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}$$

then, use that for any $d\geq1$, $\Gamma\left(\frac{d+1}{2}\right)\leq\Gamma\left(\frac{d}{2}\right)\sqrt{d/2}$ from Wendel (1948), hence

$$\mathrm{d}_{\chi^{2}}(f_{a},f_{0})\leq\frac{a^{2}(d+1)^{2}}{2\sqrt{2d\pi}}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}\leq\frac{a^{2}(d+1)^{2}}{5\sqrt{d}}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1},$$

using $2\sqrt{2\pi}\geq5$. Then, we use the link between the KL divergence and the $\chi^{2}$ divergence to get the result.
$$D_{\mathrm{KL}}(f_{a},f_{0})\leq\log(1+\mathrm{d}_{\chi^{2}}(f_{a},f_{0}))\leq\log\left(1+\frac{a^{2}(d+1)^{2}}{5\sqrt{d}}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}\right)\tag{30}$$

Then, we have

$$\log\left(1+\frac{a^{2}(d+1)^{2}}{5\sqrt{d}}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}\right)\leq\begin{cases}\log\left(1+3^{d-1}\frac{(d+1)^{2}}{5\sqrt{d}}a^{2}\right)&\text{if }a<1\\\log\left(1+3^{d-1}\frac{(d+1)^{2}}{5\sqrt{d}}a^{d+1}\right)&\text{if }a\geq1\end{cases}$$

hence, using that $1\leq2\cdot3^{d-1}\frac{(d+1)^{2}}{5\sqrt{d}}a^{d+1}$ when $a\geq1$,

$$\log\left(1+\frac{a^{2}(d+1)^{2}}{5\sqrt{d}}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}\right)\leq\begin{cases}3^{d-1}\frac{(d+1)^{2}}{5\sqrt{d}}a^{2}&\text{if }a<1,\\(d+1)\log\left(a\right)+\log\left(3^{d}\frac{(d+1)^{2}}{5\sqrt{d}}\right)&\text{if }a\geq1.\end{cases}$$

Inject this in Equation (30) to get the result.

## B.1.3 Proof Of Lemma 4: KL For Corrupted Bernoulli Distribution

Let $\alpha\in(0,1/2)$ and denote by $\delta_{x}$ the Dirac distribution in $x$. Define

$$P_{0}=(1-\alpha)\delta_{0}+\alpha\delta_{1},\qquad P_{1}=\alpha\delta_{0}+(1-\alpha)\delta_{1},$$
$$Q_{0}=(1-\varepsilon)(1-\alpha)\delta_{0}+(1-(1-\varepsilon)(1-\alpha))\delta_{1},\qquad Q_{1}=(1-(1-\varepsilon)(1-\alpha))\delta_{0}+(1-\varepsilon)(1-\alpha)\delta_{1}.$$

One can check that $Q_{0}=(1-\varepsilon)P_{0}+\varepsilon\delta_{1}$ and $Q_{1}=(1-\varepsilon)P_{1}+\varepsilon\delta_{0}$, and hence $Q_{0}$ and $Q_{1}$ are in the $\varepsilon$-corrupted neighborhood of respectively $P_{0}$ and $P_{1}$. We have

$$D_{\mathrm{KL}}(Q_{0},Q_{1})=\sum_{k\in\{0,1\}}\mathbb{P}_{Q_{0}}(X=k)\log\frac{\mathbb{P}_{Q_{0}}(X=k)}{\mathbb{P}_{Q_{1}}(X=k)}=(1-\varepsilon)(1-\alpha)\log\left(\frac{(1-\varepsilon)(1-\alpha)}{1-(1-\varepsilon)(1-\alpha)}\right)+(1-(1-\varepsilon)(1-\alpha))\log\left(\frac{1-(1-\varepsilon)(1-\alpha)}{(1-\varepsilon)(1-\alpha)}\right)$$
$$=\left((1-\varepsilon)(1-\alpha)-(1-(1-\varepsilon)(1-\alpha))\right)\log\left(\frac{(1-\varepsilon)(1-\alpha)}{1-(1-\varepsilon)(1-\alpha)}\right)=(1-2\varepsilon-2\alpha+2\varepsilon\alpha)\log\left(1+\frac{1-2\varepsilon-2\alpha+2\varepsilon\alpha}{\varepsilon+\alpha-\varepsilon\alpha}\right)$$

Then, note that $\Delta=\mathbb{E}_{P_{1}}[X]-\mathbb{E}_{P_{0}}[X]=1-2\alpha$ and $\sigma^{2}=\mathrm{Var}_{P_{0}}(X)=\mathrm{Var}_{P_{1}}(X)=\alpha(1-\alpha)$. Hence, with $\alpha=\frac{1}{2}(1-\Delta)$,
$$D_{\mathrm{KL}}(Q_{0},Q_{1})=(1-2\varepsilon-(1-\Delta)(1-\varepsilon))\log\left(1+\frac{1-2\varepsilon-(1-\Delta)(1-\varepsilon)}{\varepsilon+\frac{1}{2}(1-\Delta)(1-\varepsilon)}\right)\tag{31}$$
$$=(\Delta(1-\varepsilon)-\varepsilon)\log\left(1+\frac{\Delta(1-\varepsilon)-\varepsilon}{\frac{1}{2}(1+\varepsilon)-\frac{1}{2}\Delta(1-\varepsilon)}\right)\tag{32}$$

**Uniform bound:** if $\varepsilon>0$, we have

$$D_{\mathrm{KL}}(Q_{0},Q_{1})\leq(1-2\varepsilon)\log\left(1+\frac{1-2\varepsilon}{\varepsilon}\right).$$

**High distinguishability regime:** in the setting $2\sigma>\Delta$, we have the bound

$$D_{\mathrm{KL}}(Q_{0},Q_{1})\leq\left(\frac{\Delta}{2\sigma}(1-\varepsilon)-\varepsilon\right)\log\left(1+2\frac{\frac{\Delta}{2\sigma}(1-\varepsilon)-\varepsilon}{1-\left(\frac{\Delta}{2\sigma}(1-\varepsilon)-\varepsilon\right)}\right)=\left(\frac{\Delta(1-\varepsilon)-2\sigma\varepsilon}{2\sigma}\right)\log\left(1+2\frac{\Delta(1-\varepsilon)-2\sigma\varepsilon}{2\sigma-(\Delta(1-\varepsilon)-2\sigma\varepsilon)}\right)\,.$$

**Low distinguishability regime:** if $\Delta\leq2\sigma\sqrt{\frac{\varepsilon}{1-2\varepsilon}}$, then there exists $\varepsilon'\leq\varepsilon$ such that $\Delta=2\sigma\sqrt{\frac{\varepsilon'}{1-2\varepsilon'}}$ and then, from Equation (31), there exist $Q_{0}',Q_{1}'$ which are $\varepsilon'$-corrupted versions of $P_{0}$ and $P_{1}$ such that $\mathrm{KL}(Q_{0}',Q_{1}')=0$.

## B.2 Lemmas For Regret Upper Bound

## B.2.1 Proof Of Corollary 1: Simplified Upper Bound Of HuberUCB

Replacing $\beta_{i}$ by $4\sigma_{i}$, we have

- If $\widetilde{\Delta}_{i,\varepsilon}>6\sigma_{i}\left(1+4\sqrt{2}\overline{\varepsilon}\right)^{2}$, then

$$\mathbb{E}[T_{i}(n)]\leq\log(n)\max\left(\frac{128\sigma_{i}}{3\widetilde{\Delta}_{i,\varepsilon}},\frac{4}{(p-5\varepsilon)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)+10(\log(n)+1).$$

- If $\widetilde{\Delta}_{i,\varepsilon}\leq6\sigma_{i}\left(1+4\sqrt{2}\overline{\varepsilon}\right)^{2}$, then
$$\mathbb{E}[T_{i}(n)]\leq\log(n)\max\left(\frac{50\sigma_{i}^{2}}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sqrt{2}+8\overline{\varepsilon}\right)^{2},\frac{4}{\left(p-5\varepsilon\right)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)+10(\log(n)+1).$$

Then, we use that

$$\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\leq2\left(1+\left(2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)=2+8\left(\overline{\varepsilon}^{2}\vee\frac{81}{392}\right)\leq8\overline{\varepsilon}^{2}+2+\frac{648}{392}\leq8\overline{\varepsilon}^{2}+4$$

and that $p-5\varepsilon\geq1/4$, to get

- If $\widetilde{\Delta}_{i,\varepsilon}>6\sigma_{i}\left(1+4\sqrt{2}\overline{\varepsilon}\right)^{2}$, then

$$\mathbb{E}[T_{i}(n)]\leq\log(n)\max\left(\frac{128\sigma_{i}}{3\widetilde{\Delta}_{i,\varepsilon}},512\overline{\varepsilon}^{2}+256\right)+10(\log(n)+1)=\frac{128}{3}\log(n)\max\left(\frac{\sigma_{i}}{\widetilde{\Delta}_{i,\varepsilon}},12\overline{\varepsilon}^{2}+6\right)+10(\log(n)+1)\leq43\log(n)\max\left(\frac{\sigma_{i}}{\widetilde{\Delta}_{i,\varepsilon}},12\overline{\varepsilon}^{2}+6\right)+10(\log(n)+1)$$

- If $\widetilde{\Delta}_{i,\varepsilon}\leq6\sigma_{i}\left(1+4\sqrt{2}\overline{\varepsilon}\right)^{2}$, then

$$\mathbb{E}[T_{i}(n)]\leq\log(n)\max\left(\frac{50\sigma_{i}^{2}}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sqrt{2}+8\overline{\varepsilon}\right)^{2},512\overline{\varepsilon}^{2}+256\right)+10(\log(n)+1)\leq\log(n)\max\left(\frac{100\sigma_{i}^{2}}{9\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(2+64\overline{\varepsilon}^{2}\right),512\overline{\varepsilon}^{2}+256\right)+10(\log(n)+1)\leq23\log(n)\max\left(\frac{\sigma_{i}^{2}}{\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(1+32\overline{\varepsilon}^{2}\right),24\overline{\varepsilon}^{2}+12\right)+10(\log(n)+1)$$

## B.2.2 Proof Of Lemma 6: Regret Upper Bound For SeqHuberUCB

In this section, we virtually copy the proof of the regret for HuberUCB done in Section A.4, with modified constants and using the crude bound $P_{2}(s)\geq s/2$ whenever necessary.
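The crude bound $P_{2}(s)\geq s/2$ can be verified numerically; the sketch below assumes $P_{2}(s)$ denotes the largest power of two not exceeding $s$ (our reading of the notation, not stated explicitly in this excerpt):

```python
def p2(s: int) -> int:
    # largest power of two not exceeding s (our assumed reading of P2)
    assert s >= 1
    return 1 << (s.bit_length() - 1)

# crude bound used throughout this section: P2(s) >= s / 2
print(all(p2(s) >= s / 2 for s in range(1, 10_000)))
```

The bound is tight just below powers of two, e.g. $P_{2}(2^{k+1}-1)=2^{k}$, which is why only the factor $1/2$ can be guaranteed.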
If $A_t=i$, then at least one of the following four inequalities is true: $$\widehat{\mathrm{SeqHub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)\leq\mu_{1}\tag{33}$$ or $$\widehat{\mathrm{SeqHub}}_{i,T_{i}(t-1)}\geq\mu_{i}+B_{i}(T_{i}(t-1),t)\tag{34}$$ or $$\Delta_{i}<2B_{i}(T_{i}(t-1),t)\tag{35}$$ or $$P_{2}(T_{1}(t-1))<s_{lim}(t)=\frac{98\log(t)}{128\left(p-5\varepsilon\right)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\tag{36}$$ Indeed, if $P_{2}(T_{1}(t-1))<s_{lim}(t)$, then $B_{1}(T_{1}(t-1),t)=\infty$ and Inequality (36) is true. On the other hand, if $P_{2}(T_{1}(t-1))\geq s_{lim}(t)$, then $B_{1}(T_{1}(t-1),t)$ is finite and, if all four inequalities are false, $$\widehat{\mathrm{SeqHub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)>\mu_{1}=\mu_{i}+\Delta_{i}\geq\mu_{i}+2B_{i}(T_{i}(t-1),t)>\widehat{\mathrm{SeqHub}}_{i,T_{i}(t-1)}+B_{i}(T_{i}(t-1),t),$$ which implies that $A_t\neq i$. Step 1. We have that P ((33) is true) ≤ 14/t. Proof: We have that $$\mathbb{P}\left(\widehat{\text{SeqHub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)\leq\mu_{1}\right)\leq\sum_{s=1}^{t}\mathbb{P}\left(\widehat{\text{SeqHub}}_{1,s}+B_{1}(s,t)\leq\mu_{1}\right)=\sum_{s=\lceil s_{lim}(t)\rceil}^{t}\mathbb{P}\left(\widehat{\text{SeqHub}}_{1,s}-\mu_{1}\leq-B_{1}(s,t)\right),$$ where the equality holds because $B_1(s,t)=\infty$ for $s<s_{lim}(t)$. Then, using Theorem 4, we get $$\mathbb{P}\left(\widehat{\text{SeqHub}}_{1,T_{1}(t-1)}+B_{1}(T_{1}(t-1),t)\leq\mu_{1}\right)\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}14e^{-\log(t^{2})}\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}\frac{14}{t^{2}}\leq\frac{14}{t}.$$ Step 2.
Similarly, for arm $i$, we have $$\mathbb{P}\left({\widehat{\mathrm{SeqHub}}}_{i,T_{i}(t-1)}\geq\mu_{i}+B_{i}(T_{i}(t-1),t)\right)\leq{\frac{14}{t}}.$$ Proof: We have $$\mathbb{P}\left(\widehat{\text{SeqHub}}_{i,T_{i}(t-1)}\geq\mu_{i}+B_{i}(T_{i}(t-1),t)\right)\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}\mathbb{P}\left(\widehat{\text{SeqHub}}_{i,s}-\mu_{i}\geq B_{i}(s,t)\right)\leq\sum_{s=\lceil s_{lim}(t)\rceil}^{t}14e^{-\log(t^{2})}\leq\frac{14}{t}.$$ Step 3. Let $v\in\mathbb{N}$. If one of the two following conditions is true, then for all $t$ such that $P_2(T_i(t-1))\geq v$, we have $\Delta_i\geq 2B_i(T_i(t-1),t)$ (i.e. Inequality (35) is false). Condition 1: $\widetilde{\Delta}_{i,\varepsilon}>36\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$ and $v\geq\frac{32\beta_{i}}{\widetilde{\Delta}_{i,\varepsilon}}\log(t)$. Condition 2: $\widetilde{\Delta}_{i,\varepsilon}\leq36\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$ and $v\geq\frac{40}{\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2}\log(t)$. Proof: We search for the smallest value $v\geq s_{lim}(t)$ such that $\Delta_i$ verifies $$\Delta_{i}\geq2B_{i}(v,t)=2r_{v}(1/t^{2})+2\left(\frac{1}{p-\sqrt{\frac{\log(t^{2})}{2v}}-\varepsilon}-1\right)r_{P_{2}(v)}(1/t^{2})+2b_{i}.$$ First, we simplify the expression. Since $v\geq s_{lim}(t)$, we have $${\frac{\log(t^{2})}{v}}\leq{\frac{128(p-5\varepsilon)^{2}}{98(1+9/7)^{2}}}\leq{\frac{(p-\varepsilon)^{2}}{4}},$$ hence $r_{v}(1/t^{2})\leq\frac{2}{p-\varepsilon}\sigma_{i}\sqrt{\frac{2\log(t^{2})}{v}}$, and we simplify the condition to $$\Delta_{i}\geq\frac{4}{p-\varepsilon}\left(\sigma_{i}\sqrt{\frac{2\log(t^{2})}{v}}+\beta_{i}\frac{\log(t^{2})}{3v}+2\beta_{i}\overline{\varepsilon}\sqrt{\frac{\log(t^{2})}{v}}+2\beta_{i}\varepsilon\right)+\frac{4}{p-\varepsilon}\left(\sigma_{i}\sqrt{\frac{2\log(t^{2})}{P_{2}(v)}}+\beta_{i}\frac{\log(t^{2})}{3P_{2}(v)}+2\beta_{i}\overline{\varepsilon}\sqrt{\frac{\log(t^{2})}{P_{2}(v)}}+2\beta_{i}\varepsilon\right)+2b_{i},$$ for which it is sufficient that $$\Delta_{i}\geq\frac{12}{p-\varepsilon}\left(\sigma_{i}\sqrt{\frac{2\log(t^{2})}{v}}+\beta_{i}\frac{\log(t^{2})}{3v}+2\beta_{i}\overline{\varepsilon}\sqrt{\frac{\log(t^{2})}{v}}+2\beta_{i}\varepsilon\right)+2b_{i},$$ where we used that $P_{2}(v)\geq v/2$. Let us denote $\widetilde{\Delta}_{i,\varepsilon}=(\Delta_{i}-2b_{i})(p-\varepsilon)-24\beta_{i}\varepsilon$; we are searching for $v$ such that $$\beta_{i}\frac{\log(t^{2})}{3v}+\sqrt{\frac{\log(t^{2})}{v}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)-\frac{\widetilde{\Delta}_{i,\varepsilon}}{12}\leq0.$$ This is a second-order polynomial in $\sqrt{\log(t^{2})/v}$.
If $\widetilde{\Delta}_{i,\varepsilon}>0$, then the smallest such $v>0$ satisfies $${\sqrt{\frac{\log(t^{2})}{v}}}={\frac{3}{2\beta_{i}}}\left(-\left(\sigma_{i}{\sqrt{2}}+2\beta_{i}\overline{\varepsilon}\right)+{\sqrt{\left(\sigma_{i}{\sqrt{2}}+2\beta_{i}\overline{\varepsilon}\right)^{2}+{\frac{\widetilde\Delta_{i,\varepsilon}\beta_{i}}{9}}}}\right).$$ First setting: if $\widetilde{\Delta}_{i,\varepsilon}>36\frac{\sigma_{i}^{2}}{\beta_{i}}\left(\sqrt{2}+2\frac{\beta_{i}}{\sigma_{i}}\overline{\varepsilon}\right)^{2}$, we have $${\sqrt{\frac{\log(t^{2})}{v}}}\geq{\frac{3}{2\beta_{i}}}\left(-\left(\sigma_{i}{\sqrt{2}}+2\beta_{i}{\overline{{\varepsilon}}}\right)+{\sqrt{\frac{\beta_{i}{\widetilde{\Delta}}_{i,\varepsilon}}{9}}}\right)\geq{\frac{3}{2\beta_{i}}}{\sqrt{\frac{\beta_{i}{\widetilde{\Delta}}_{i,\varepsilon}}{36}}}={\sqrt{\frac{{\widetilde{\Delta}}_{i,\varepsilon}}{16\beta_{i}}}}.$$ Hence, $v\leq\frac{32\beta_{i}}{\widetilde{\Delta}_{i,\varepsilon}}\log(t)$. **Second setting:** if $\widetilde{\Delta}_{i,\varepsilon}\leq36\frac{\sigma_i^2}{\beta_i}\left(\sqrt{2}+2\frac{\beta_i}{\sigma_i}\overline{\varepsilon}\right)^2$, then we use Lemma 9, using that $$\frac{\widetilde{\Delta}_{i,\varepsilon}\beta_{i}}{9\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{{{\varepsilon}}}\right)^{2}}\leq4$$ and the fact that $\frac{\sqrt{1+4}-1}{4}\geq\frac{3}{10}$, to get $$\sqrt{\frac{\log(t^{2})}{v}}\geq\frac{\widetilde{\Delta}_{i,\varepsilon}}{20\left(\sigma_{i}\sqrt{2}+2\beta_{i}\bar{\varepsilon}\right)}.$$ Hence, $$v\leq\frac{40}{\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\bar{\varepsilon}\right)^{2}\log(t).$$ Step 4. Using all the previous steps, we prove the theorem.
Proof: We have $$\mathbb{E}[T_{i}(t)]=\mathbb{E}\left[\sum_{s=1}^{t}\mathbf{1}\{A_{s}=i\}\right]\leq\lfloor\max(v,2s_{lim}(t))\rfloor+\mathbb{E}\left[\sum_{s=\lfloor\max(v,2s_{lim}(t))\rfloor+1}^{t}\mathbf{1}\{A_{s}=i\text{ and (35) is false}\}\right]$$ $$\leq\lfloor\max(v,2s_{lim}(t))\rfloor+\sum_{s=\lfloor\max(v,2s_{lim}(t))\rfloor+1}^{t}\mathbb{P}\left((33)\text{ or }(34)\text{ or }(36)\text{ is true}\right)\leq\lfloor\max(v,2s_{lim}(t))\rfloor+2\sum_{s=\lfloor\max(v,2s_{lim}(t))\rfloor+1}^{t}\frac{14}{s}.$$ Using the harmonic-series bound $\sum_{s=1}^{t}1/s\leq\log(t)+1$, we have $$\mathbb{E}[T_{i}(t)]\leq\max(v,2s_{lim}(t))+28(\log(t)+1).$$ Then, we replace the value of $v$. **First setting:** if $\widetilde{\Delta}_{i,\varepsilon}>36\frac{\sigma_i^2}{\beta_i}\left(\sqrt{2}+2\frac{\beta_i}{\sigma_i}\overline{\varepsilon}\right)^2$, then $$\mathbb{E}[T_{i}(t)]\leq\log(t)\operatorname*{max}\left({\frac{32\beta_{i}}{\widetilde\Delta_{i,\varepsilon}}},{\frac{8}{(p-5\varepsilon)^{2}}}\left(1+2{\sqrt{2}}\left(\overline{{{\varepsilon}}}\vee{\frac{9}{14{\sqrt{2}}}}\right)\right)^{2}\right)+28(\log(t)+1).$$ **Second setting:** if $\widetilde{\Delta}_{i,\varepsilon}\leq36\frac{\sigma_i^2}{\beta_i}\left(\sqrt{2}+2\frac{\beta_i}{\sigma_i}\overline{{\varepsilon}}\right)^2$, then $$\mathbb{E}[T_{i}(t)]\leq\log(t)\max\left(\frac{40}{\widetilde{\Delta}_{i,\varepsilon}^{2}}\left(\sigma_{i}\sqrt{2}+2\beta_{i}\overline{\varepsilon}\right)^{2},\frac{8}{\left(p-5\varepsilon\right)^{2}}\left(1+2\sqrt{2}\left(\overline{\varepsilon}\vee\frac{9}{14\sqrt{2}}\right)\right)^{2}\right)+28(\log(t)+1).$$ We finish the proof of the theorem using the given values of the constants $\beta_i$, $\varepsilon$, $p$. B.3 Lemmas for concentration of robust estimators B.3.1 Proof of Lemma 7: Controlling Variance of Influence of Huber's Estimator Let $\rho_\beta$ be Huber's loss function, with $\psi_\beta=\rho_\beta'$. We have that for any $x>0$, $\psi_\beta(x)^2\leq2\rho_\beta(x)$. Hence, since $\mathbb{E}[\psi_\beta(Y-\mathrm{Hub}_\beta(P))]=0$ by the first-order optimality condition defining $\mathrm{Hub}_\beta(P)$, $$\mathrm{Var}(\psi_{\beta}(Y-\mathrm{Hub}_{\beta}(P)))=\mathbb{E}[\psi_{\beta}(Y-\mathrm{Hub}_{\beta}(P))^{2}]\leq2\mathbb{E}[\rho_{\beta}(Y-\mathrm{Hub}_{\beta}(P))].$$ Then, use that, by definition, $\mathrm{Hub}_\beta(P)$ is a minimizer of $\theta\mapsto\mathbb{E}[\rho_\beta(Y-\theta)]$; hence, $$\mathrm{Var}(\psi_{\beta}(Y-\mathrm{Hub}_{\beta}(P)))\leq2\mathbb{E}[\rho_{\beta}(Y-\mathbb{E}[Y])],$$ and finally, use that $\rho_\beta(x)\leq x^{2}/2$ to conclude.
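To make Huber's M-estimator concrete, here is a minimal numerical sketch (our own illustration, not the paper's implementation; function names are ours). `huber_estimator` solves the first-order condition $\frac{1}{n}\sum_i\psi_\beta(x_i-\theta)=0$ by fixed-point iteration, and the demo shows the robustness that Lemma 7 exploits: the influence $\psi_\beta$ is bounded by $\beta$, so a few gross outliers move the estimate far less than the empirical mean.

```python
import numpy as np

def psi(x, beta):
    # Huber influence function psi_beta = rho_beta': clipped identity
    return np.clip(x, -beta, beta)

def huber_estimator(x, beta, iters=200):
    # Fixed-point iteration on the first-order condition mean(psi(x - theta)) = 0.
    # The map theta -> theta + mean(psi(x - theta)) is non-expansive, so the
    # iteration converges quickly for reasonable beta.
    theta = np.median(x)
    for _ in range(iters):
        theta = theta + np.mean(psi(x - theta, beta))
    return theta

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=950)
outliers = np.full(50, 20.0)             # 5% gross corruption
sample = np.concatenate([inliers, outliers])

emp_mean = sample.mean()                 # dragged towards the outliers (~1.0)
hub = huber_estimator(sample, beta=2.0)  # stays close to the inlier mean 0
```

Because each outlier contributes at most $\beta/n$ to the estimating equation, the bias of `hub` is of order $\varepsilon\beta$, in line with the $\beta_i\varepsilon$ terms in the bounds above.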
B.3.2 Proof of Lemma 8: Concentrating Huber's Estimator by Concentrating the Influence For all $n\in\mathbb{N}^{*}$, $\lambda>0$, let $$f_{n}(\lambda)={\frac{\mathrm{sign}(\Delta_{n})}{n}}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P)-\lambda\,\mathrm{sign}(\Delta_{n})),$$ where $\Delta_{n}=\mathrm{Hub}_{\beta}(P)-\mathrm{Hub}_{\beta}(X_{1}^{n})$. Step 1. For any $\lambda>0$, $\mathbb{P}(|\Delta_{n}|\geq\lambda)\leq\mathbb{P}(f_{n}(\lambda)\geq0)$. Proof: For all $y\in\mathbb{R}$, let $J_{n}(y)=\frac{1}{n}\sum_{i=1}^{n}\rho_{\beta}(X_{i}-y)$; we have $$J_{n}^{\prime\prime}(y)=\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}^{\prime}\left(X_{i}-y\right).$$ In particular, since $f_{n}(\lambda)=-\,\mathrm{sign}(\Delta_{n})J_{n}^{\prime}(\mathrm{Hub}_{\beta}(P)+\lambda\,\mathrm{sign}(\Delta_{n}))$, taking the derivative of $f_n$ with respect to $\lambda$ yields $$\frac{\partial}{\partial\lambda}f_{n}(\lambda)=-\operatorname{sign}(\Delta_{n})^{2}J^{\prime\prime}_{n}(\operatorname{Hub}_{\beta}(P)+\lambda\operatorname{sign}(\Delta_{n}))\leq-\frac{1}{n}\sum_{i=1}^{n}\psi^{\prime}_{\beta}(X_{i}-\operatorname{Hub}_{\beta}(P)-\lambda\operatorname{sign}(\Delta_{n})).\tag{37}$$ Then, because $\psi_{\beta}^{\prime}$ is non-negative, the function $\lambda\mapsto f_{n}(\lambda)$ is non-increasing. Hence, for all $n\in\mathbb{N}^{*}$ and $\lambda>0$, $$|\Delta_{n}|\geq\lambda\Rightarrow f_{n}(|\Delta_{n}|)=0\leq f_{n}(\lambda),$$ hence $$\mathbb{P}(|\Delta_{n}|\geq\lambda)\leq\mathbb{P}(f_{n}(\lambda)\geq0).\tag{38}$$ Step 2. For all $\lambda>0$, $$f_{n}(\lambda)\leq f_{n}(0)-\lambda\operatorname*{inf}_{t\in[0,\lambda]}|f_{n}^{\prime}(t)|\,.$$ Proof: We apply Taylor's inequality to the function $f_{n}$. As $f_{n}$ is non-increasing (its derivative is non-positive, see Equation (37)), we get $$f_{n}(\lambda)\leq f_{n}(0)-\lambda\operatorname*{inf}_{t\in[0,\lambda]}|f_{n}^{\prime}(t)|\,.$$ Step 3. Let $$m_{n}=\mathbb{E}\left[\operatorname*{inf}_{t\in[0,\lambda]}{\frac{1}{n}}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(X_{i}^{\prime}-\operatorname{Hub}_{\beta}(P)-t)\right].$$
With probability larger than $1-2e^{-2n\eta^{2}}$, $$\operatorname*{inf}_{t\in[0,\lambda]}|f_{n}^{\prime}(t)|\geq m_{n}-2\eta-\varepsilon.$$ Proof: Write $X_{i}=(1-W_{i})Y_{i}+W_{i}Z_{i}$, where $W_{1},\dots,W_{n}$ are i.i.d. Bernoulli random variables with mean $\varepsilon$, $Y_{1},\dots,Y_{n}$ are i.i.d. $\sim P$, and $Z_{1},\dots,Z_{n}$ are i.i.d. with law $H$. From Equation (37), $$|f_{n}^{\prime}(t)|\geq\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(X_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))$$ $$=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=0\}\psi_{\beta}^{\prime}(Y_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))\tag{39}$$ $$+\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=1\}\psi_{\beta}^{\prime}(Z_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))\tag{40}$$ $$=\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(Y_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))\tag{41}$$ $$+\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=1\}\left(\psi_{\beta}^{\prime}(Z_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))-\psi_{\beta}^{\prime}(Y_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))\right)\tag{42}$$ Hence, because $\psi_{\beta}^{\prime}\in[0,1]$, we have $$|f^{\prime}_{n}(t)|\geq\frac{1}{n}\sum_{i=1}^{n}\psi^{\prime}_{\beta}(Y_{i}-\mathrm{Hub}_{\beta}(P)-t\,\mathrm{sign}(\Delta))-\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=1\}.\tag{43}$$ The right-hand side involves the infimum over $t$ of the mean of $n$ i.i.d. random variables in $[0,1]$. Hence, the function $$X_{1}^{n}\mapsto\operatorname*{sup}_{t\in[0,\lambda]}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(X_{i}^{\prime}-\mathrm{Hub}_{\beta}(P)-t)$$ satisfies, by sub-linearity of the supremum operator and the triangle inequality, the bounded-difference property, with differences bounded by 1.
Hence, by Hoeffding's inequality, we get with probability larger than $1-e^{-2n\eta^{2}}$, $$\operatorname*{inf}_{t\in[0,\lambda]}|f_{n}^{\prime}(t)|\geq\mathbb{E}\left[\operatorname*{inf}_{t\in[0,\lambda]}\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(X_{i}^{\prime}-\mathrm{Hub}_{\beta}(P)-t)\right]-\eta-\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=1\},$$ and using Hoeffding's inequality to control $\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{W_{i}=1\}$, we have with probability larger than $1-2e^{-2n\eta^{2}}$, $$\operatorname*{inf}_{t\in[0,\lambda]}|f_{n}^{\prime}(t)|\geq\mathbb{E}\left[\operatorname*{inf}_{t\in[0,\lambda]}{\frac{1}{n}}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(X_{i}^{\prime}-\mathrm{Hub}_{\beta}(P)-t)\right]-2\eta-\varepsilon.$$ Step 4. For $\lambda\in(0,\beta/2)$, $$\mathbb{P}\left(|\Delta_{n}|\geq\lambda\right)\leq\mathbb{P}\left(\left|{\frac{1}{n}}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|\geq\lambda\left(m_{n}-2\eta-\varepsilon\right)\right)+2e^{-2n\eta^{2}}.$$ Proof: For any $\lambda>0$, we have $$\mathbb{P}(|\Delta_{n}|\geq\lambda)\leq\mathbb{P}(f_{n}(\lambda)\geq0)$$ (from Step 1) $$\leq\mathbb{P}\left(f_{n}(0)-\lambda\inf_{t\in[0,\lambda]}|f^{\prime}_{n}(t)|\geq0\right)$$ (from Step 2) $$\leq\mathbb{P}\left(f_{n}(0)\geq\lambda\left(m_{n}-2\eta-\varepsilon\right)\right)+2e^{-2n\eta^{2}}$$ (from Step 3) $$\leq\mathbb{P}\left(\left|\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\text{Hub}_{\beta}(P))\right|\geq\lambda\left(m_{n}-2\eta-\varepsilon\right)\right)+2e^{-2n\eta^{2}}.$$ (44) Step 5.
We prove that $m_{n}\geq p$, and hence $$\mathbb{P}\left(|\Delta_{n}|\geq\lambda\right)\leq\mathbb{P}\left(\left|{\frac{1}{n}}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|\geq\lambda\left(p-2\eta-\varepsilon\right)\right)+2e^{-2n\eta^{2}}.$$ Proof: For all $\lambda\leq\beta/2$, $$\mathbb{E}\left[\inf_{t\in[0,\lambda]}\frac{1}{n}\sum_{i=1}^{n}\psi_{\beta}^{\prime}(X_{i}^{\prime}-\operatorname{Hub}_{\beta}(P)-t)\right]=\mathbb{E}\left[\inf_{t\in[0,\lambda]}\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{|X_{i}^{\prime}-\operatorname{Hub}_{\beta}(P)-t|\leq\beta\}\right]$$ $$\geq\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{|X_{i}^{\prime}-\operatorname{Hub}_{\beta}(P)|\leq\beta-\lambda\}\right]\geq\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{|X_{i}^{\prime}-\operatorname{Hub}_{\beta}(P)|\leq\beta/2\}\right]=p.$$ Then, we plug the bound on $m_{n}$ found above into Equation (44) and get, for any $\eta>0$ and $\lambda\in(0,\beta/2]$, $$\mathbb{P}(|\Delta_{n}|\geq\lambda)\leq\mathbb{P}\left(\left|{\frac{1}{n}}\sum_{i=1}^{n}\psi_{\beta}(X_{i}-\mathrm{Hub}_{\beta}(P))\right|\geq\lambda\left(p-2\eta-\varepsilon\right)\right)+2e^{-2n\eta^{2}}.$$ B.3.3 Proof of Lemma 9: Algebra tool for bounding polynomial roots The solutions of the second-order polynomial indicate that $x$ must verify $$x\geq{\frac{-b+{\sqrt{b^{2}+4a c}}}{2a}}={\frac{b}{2a}}\left(-1+{\sqrt{1+{\frac{4a c}{b^{2}}}}}\right).$$ Then, use that the function $x\mapsto\sqrt{x+1}$ is concave; hence its graph lies above its chords, and for any $x\in[0,d]$, $\sqrt{1+x}\geq1+x\,\frac{\sqrt{d+1}-1}{d}$.
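As a sanity check of Lemma 9 (our own sketch, with hypothetical variable names), the following code compares the exact positive root of $ax^2+bx-c=0$ with the chord lower bound $\frac{2c(\sqrt{d+1}-1)}{db}$ on random instances satisfying $4ac/b^2\leq d$:

```python
import math
import random

def positive_root(a, b, c):
    # positive root of a*x^2 + b*x - c = 0, with a, b, c > 0
    return (-b + math.sqrt(b * b + 4 * a * c)) / (2 * a)

def lemma9_bound(b, c, d):
    # chord bound from the proof, valid whenever 4ac/b^2 <= d
    return 2 * c * (math.sqrt(d + 1) - 1) / (d * b)

random.seed(0)
all_hold = True
for _ in range(10_000):
    a, b, c = (random.uniform(0.1, 5.0) for _ in range(3))
    d = (4 * a * c / b ** 2) * random.uniform(1.0, 3.0)  # ensures 4ac/b^2 <= d
    all_hold = all_hold and positive_root(a, b, c) >= lemma9_bound(b, c, d) - 1e-12
```

This is exactly the step used in the second setting of the regret proof, with $d=4$ yielding the constant $\frac{\sqrt{5}-1}{4}\geq\frac{3}{10}$.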
Hence, $$x\geq{\frac{b}{2a}}\left({\frac{4a c({\sqrt{d+1}}-1)}{d b^{2}}}\right)={\frac{2c({\sqrt{d+1}}-1)}{d b}}.$$ ## B.3.4 Proof Of Lemma 10: Algebra On Student's Distribution We have $$\int_{\mathbb{R}}\frac{\left(1+\frac{(y+a)^{2}}{d}\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y=\int_{\mathbb{R}}\sum_{l=0}^{\frac{d+1}{2}}\binom{\frac{d+1}{2}}{l}\frac{(y+a)^{2l}}{d^{l}}\frac{1}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y=\sum_{l=0}^{\frac{d+1}{2}}\sum_{j=0}^{2l}\binom{\frac{d+1}{2}}{l}\binom{2l}{j}\frac{a^{2l-j}}{d^{l}}\int_{\mathbb{R}}\frac{y^{j}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y.$$ Remark that the integral is $0$ if $j$ is odd. Hence, $$\int_{\mathbb{R}}\frac{\left(1+\frac{(y+a)^{2}}{d}\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y=\sum_{l=0}^{\frac{d+1}{2}}\sum_{j=0}^{l}\binom{\frac{d+1}{2}}{l}\binom{2l}{2j}\frac{a^{2l-2j}}{d^{l}}\int_{\mathbb{R}}\frac{y^{2j}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y.$$ Then, we compute the integrals. By the change of variable $u=y/\sqrt{d}$, we have $$\int_{\mathbb{R}}\frac{y^{2j}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y=d^{j+1/2}\int_{\mathbb{R}}\frac{u^{2j}}{\left(1+u^{2}\right)^{d+1}}\mathrm{d}u\leq2d^{j+1/2},$$ and for $l=j$, $$\sum_{l=0}^{\frac{d+1}{2}}\binom{\frac{d+1}{2}}{l}\frac{1}{d^{l}}\int_{\mathbb{R}}\frac{y^{2l}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y=\int_{\mathbb{R}}\frac{\left(1+y^{2}/d\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y.$$ Hence, $$\int_{\mathbb{R}}\frac{\left(1+\frac{(y+a)^{2}}{d}\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y\leq2\sum_{l=1}^{\frac{d+1}{2}}a^{2l}\sum_{j=0}^{l-1}\binom{\frac{d+1}{2}}{l}\binom{2l}{2j}\frac{a^{-2j}}{d^{l}}d^{j+1/2}+\int_{\mathbb{R}}\frac{\left(1+y^{2}/d\right)^{\frac{d+1}{2}}}{\left(1+\frac{y^{2}}{d}\right)^{d+1}}\mathrm{d}y.\tag{45}$$ And, $$\sum_{l=1}^{\frac{d+1}{2}}a^{2l}\sum_{j=0}^{l-1}\binom{\frac{d+1}{2}}{l}\binom{2l}{2j}\frac{a^{-2j}}{d^{l}}d^{j+1/2}=\sqrt{d}\sum_{l=1}^{\frac{d+1}{2}}\sum_{j=0}^{l-1}\binom{\frac{d+1}{2}}{l}\binom{2l}{2j}a^{2(l-j)}d^{j-l}\leq\sqrt{d}\sum_{l=1}^{\frac{d+1}{2}}\sum_{j=0}^{l-1}\binom{\frac{d+1}{2}}{l}\binom{2(l-1)}{2j}l^{2}\left(\frac{a^{2}}{d}\right)^{l-j},$$ using that $\binom{2l}{2j}=\binom{2(l-1)}{2j}\frac{2l(2l-1)}{(2l-2j)(2l-2j-1)}\leq\binom{2(l-1)}{2j}l^2$. Then, completing the binomial sum so that $$\sum_{j=0}^{l-1}\binom{2(l-1)}{2j}\left(\frac{a^{2}}{d}\right)^{-j}\leq\sum_{j=0}^{2(l-1)}\binom{2(l-1)}{j}\left(\frac{\sqrt{d}}{a}\right)^{j}=\left(1+\frac{\sqrt{d}}{a}\right)^{2(l-1)},$$ we have $$\sqrt{d}\sum_{l=1}^{\frac{d+1}{2}}\binom{\frac{d+1}{2}}{l}l^{2}\left(\frac{a^{2}}{d}\right)^{l}\left(1+\frac{\sqrt{d}}{a}\right)^{2(l-1)}=\frac{a^{2}}{2d}(d+1)\sqrt{d}\sum_{l=1}^{\frac{d+1}{2}}\binom{\frac{d-1}{2}}{l-1}l\left(\frac{a}{\sqrt{d}}+1\right)^{2(l-1)}=\frac{a^{2}}{2d}(d+1)\sqrt{d}\sum_{l=0}^{\frac{d-1}{2}}\binom{\frac{d-1}{2}}{l}(l+1)\left(\frac{a}{\sqrt{d}}+1\right)^{2l}\leq\frac{a^{2}}{4\sqrt{d}}(d+1)^{2}\left(2+\frac{a}{\sqrt{d}}\right)^{d-1},$$ using $l+1\leq\frac{d+1}{2}$ and $\sum_{l=0}^{\frac{d-1}{2}}\binom{\frac{d-1}{2}}{l}\left(\frac{a}{\sqrt{d}}+1\right)^{2l}=\left(1+\left(1+\frac{a}{\sqrt{d}}\right)^{2}\right)^{\frac{d-1}{2}}\leq\left(2+\frac{a}{\sqrt{d}}\right)^{d-1}$. Then, inject this in Equation (45) to get $$\int_{\mathbb{R}}{\frac{\left(1+{\frac{(y+a)^{2}}{d}}\right)^{\frac{d+1}{2}}}{\left(1+{\frac{y^{2}}{d}}\right)^{d+1}}}\mathrm{d}y\leq{\frac{a^{2}}{2{\sqrt{d}}}}(d+1)^{2}\left(2+{\frac{a}{{\sqrt{d}}}}\right)^{d-1}+\int_{\mathbb{R}}{\frac{\left(1+y^{2}/d\right)^{\frac{d+1}{2}}}{\left(1+{\frac{y^{2}}{d}}\right)^{d+1}}}\mathrm{d}y.$$ ## C Additional Experimental Results ## C.1 Sensitivity To β And ε In this section, we illustrate the impact of the choice of β and ε on the estimation. Choice of β **(Figure 4(b)):** The choice of $\beta$ is a trade-off between bias (the distance $|\mathrm{Hub}_\beta(P)-\mathbb{E}[X]|$, which decreases as $\beta$ goes to infinity) and robustness (as $\beta$ goes to $0$, $\mathrm{Hub}_\beta(P)$ goes to the median). To illustrate this trade-off we use the Weibull distribution, which can be very asymmetric. We use a 3-armed bandit problem with shape parameters $(2, 2, 0.75)$ and scale parameters $(0.5, 0.7, 0.8)$, which implies that the means are approximately $(0.44, 0.62, 0.95)$. These distributions are very asymmetric, hence the bias $|\mathrm{Hub}_\beta(P)-\mathbb{E}[X]|$ is high; in fact, even though arm 3 has the optimal mean, arm 2 has the optimal median: the medians are given by $(0.41, 0.58, 0.49)$.
In this experiment we do not use any corruption, so as not to complicate the interpretation. As predicted by the theory, $\beta_i$ should be neither too small nor too large; it should be around $4\sigma_i$. Choice of ε **(Figure 4(a)):** To illustrate the dependency on $\varepsilon$, we again use the Weibull distribution with the same parameters as in the previous example, except that we choose $\beta_i=5\sigma_i$, which is around the optimum found in the previous experiment, and we corrupt with 2% of outliers (this is the true $\varepsilon$, while the $\varepsilon$ used in the definition of the algorithm varies). The outliers are constructed as in Section 7. The effect of the parameter $\varepsilon$ is difficult to assess because $\varepsilon$ impacts the length of the forced exploration that we impose at the beginning of our algorithm (through $s_{lim}$). Figure 4: Cumulative regret plots for different values of the parameters ε and β on a Weibull dataset. ![42_image_0.png](42_image_0.png) ## C.2 Corrupted Bandits With Adversarial Algorithms To illustrate the performance of classical algorithms on corrupted bandit problems, we redo the experiments from Section 7 with algorithms from the adversarial literature (EXP3 and FTRL with log-barrier). We also include Thompson sampling in the case of Bernoulli inlier distributions. The results are rendered in Figure 5. These results show that adversarial algorithms like EXP3 and FTRL, as well as Thompson sampling, are very inefficient when the corruption is substantial, as in the Pareto experiments with ε = 0.03 and ε = 0.05. ![43_image_0.png](43_image_0.png) Figure 5: Cumulative regret plot of the algorithms on corrupted Bernoulli (top), Student's (middle) and Pareto (bottom) reward distributions with various corruption levels ε. Lower corrupted regret indicates better performance for an algorithm.
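The mean/median discrepancy exploited in the experiment above can be reproduced in a few lines. This is our own sketch (the corruption mechanism below, a fixed large outlier value, is a hypothetical stand-in for the construction of Section 7):

```python
import math
import numpy as np

shapes = np.array([2.0, 2.0, 0.75])
scales = np.array([0.5, 0.7, 0.8])

# Weibull(k, lam): mean = lam * Gamma(1 + 1/k), median = lam * ln(2)^(1/k)
means = scales * np.array([math.gamma(1 + 1 / k) for k in shapes])
medians = scales * np.log(2) ** (1 / shapes)
# means ≈ (0.44, 0.62, 0.95): arm 3 is optimal in mean,
# but arm 2 is optimal in median, as stated in the text.

rng = np.random.default_rng(0)

def corrupted_reward(arm, eps=0.02, outlier=50.0):
    # eps-mixture: with probability eps an outlier, else an inlier Weibull draw
    if rng.random() < eps:
        return outlier
    return scales[arm] * rng.weibull(shapes[arm])
```

With 2% corruption by large outliers, the empirical mean of any arm is shifted by roughly `eps * outlier`, which is why mean-based indices fail here while the Huber-based index does not.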
Review 1: Summary: This paper examines the bandit problem in a stochastic corruption setting, also known as the "corrupted by nature" setting. In this setting, the bandit feedback has a low probability of being drawn from a different, potentially heavy-tailed distribution. The paper first establishes a problem-specific lower bound based on the corrupted gap of the rewards. Then, it introduces the HubUCB algorithm, a UCB-type solution that uses a Huber estimator to address heavy-tailed corruption. The paper demonstrates that the algorithm attains a near-optimal regret upper bound. Finally, the paper enhances the computational efficiency of the Huber estimator by presenting a sequential version of the estimator, which exhibits linear computational complexity. Strengths and Weaknesses: Strength: - A systematic study of the "corrupted by nature" setting, including a new notion of "corrupted regret", is introduced. - Problem-dependent lower and upper bounds for the problem were proposed. - An algorithm with improved computational complexity is provided. - A numerical study demonstrates the effectiveness of the proposed algorithm. Weakness: - The major weakness appears in the presentation. In the intro, the "Treatments of Varroa Mites" does not seem to be a good example. First, it is hard to argue that the distribution is heavy-tailed. Secondly, why can corruption be unbounded? Moreover, this kind of task usually depends only on human experience. It is hard to believe that the method proposed in this paper would help solve this problem quantitatively. - No bounds for the problem-independent setting. - The upper bounds were claimed to be nearly tight, but unfortunately, the gap parameters are different in the lower bound. - Problem-dependent lower bounds do not work for all cases, but only for special cases (Student-t and Bernoulli). - Heavy-tailedness is not well-defined. What is the condition that makes a distribution heavy-tailed?
In the lower bound, only the Student distribution is used for heavy-tailedness. In the upper bound, the first time heavy-tailedness is discussed is in the lemma about the bias of the Huber estimator. If heavy-tailedness is a major theme of this paper, it should probably be clearly defined in the preliminary section of the paper, and perhaps also reflected in the title. Requested Changes: - Perhaps modify the motivating example in the intro? - Make core concepts well-defined, such as heavy-tailedness. - Def. 1: what is the range of $n$? Should it be any $n>0$? - Last paragraph of page 6, what is $\mathcal{P}_{[2]}$? - Was this paper https://proceedings.neurips.cc/paper/2021/file/843a4d7fb5b1641b0bb8e3c2b2e75231-Paper.pdf somehow related to your heavy-tail setting? Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper considers a kind of stochastic multi-armed bandit problem with heavy-tailed reward distributions (with bounded moments) and unbounded corruptions. In the problem, it is assumed that the reward comes from an unknown fixed corruption distribution with an unknown fixed probability. For this problem, problem-dependent lower bounds are shown. Further, this paper proposes a new UCB-type algorithm that achieves a nearly optimal regret bound. A variant with improved computational efficiency is also proposed. Strengths and Weaknesses: Strength: - The motivation for considering the proposed model and its relevance to existing research is carefully explained. - New techniques have been proposed, such as a concentration inequality for the empirical Huber's estimator (Theorem 2) and a faster sequential approximation algorithm for its computation (Section 6), which have potential applications to problems not limited to bandits.
- Lower and upper bounds for regret nicely depict how parameters (such as the suboptimality gap $\Delta$, variance $\sigma^2$ and corruption level $\epsilon$) affect the difficulty of the problem, which should be of interest to the learning-theory community. - Claims appear to be supported by convincing and clear evidence. As far as I could see, there were no errors in the analysis or unclear definitions. Weakness: - In the design of the algorithm, it is assumed that the variance $\sigma_i^2$ of the reward and the rate $\epsilon$ of corruption are known in advance. This assumption poses a major limitation in applications, as these parameters are often not known a priori in practice. In view of the above strengths, and in light of the TMLR evaluation criteria, I support the acceptance of this paper. Requested Changes: Regarding Huber's estimator, I think it is difficult to understand, in the current notation $Hub(\cdot)$, that its value also depends on $\beta$. In particular, the fact that $\beta_i$ is used in the definition of $Hub_{i,s}$ and $B_i$ in HuberUCB is difficult to read from the definition formula. I believe that there is room for improvement in notation or explanation for the sake of clarity. Minor comment: There were several notations that were not consistent, e.g., - subGaussian vs sub-Gaussian, - $P_{P_0}$ vs $\mathbb{P}_{P_0}$ (between Eq. (2) and Eq. (3)) - $D_{KL}$-divergence and KL-divergence The following may be typos: - i.i.d <- i.i.d. - Algorithm 2: require $\beta$ <- $\\{ \beta_i \\}$ - Theorem 3: $\beta \leq 4 \sigma_i$ <- $\beta_i \leq 4 \sigma_i$ - p.14: tge true <- the true Some of the references cite preprints even though they have already been published (e.g., [Lattimore and Szepesvári, 2018] <- [Lattimore and Szepesvári, 2020]), so it would be better to replace them with the published versions.
Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper studies multi-armed bandits with stochastic rewards and stochastic corruption, with an explicit focus on heavy-tailed distributions. They leverage the concentration of Huber's robust estimator to produce a UCB-style algorithm and prove its upper bound. The i.i.d. nature of the corruption makes this an instance of bandits with distribution shift, where the observations are sampled from a different distribution than the actual rewards/regret. Strengths and Weaknesses: The estimator and its concentration are interesting, as is the general problem setup. However, I feel that the current work does not give a clear picture of the actual difficulty of the problem. The main weakness is in the lower bound section. There is a lower bound for the uncorrupted heavy-tailed setting, and a lower bound for the corrupted Bernoulli case. The latter is in my opinion completely redundant and not of interest. Looking at the problem from the distribution-shift perspective, the instance-dependent worst-case corruption bound is obviously attained when the optimal arm is shifted towards lower rewards and all other arms towards higher rewards. If the identity of the optimal arm changes, then no-regret is impossible. Otherwise classical results directly yield asymptotically tight lower bounds. In this work, they are trying to reframe the bounds in terms of gap and variance. Theorem 1 for corrupted Bernoulli is stated too strongly because they only prove the lower bound on the KL divergence for pairs of Bernoulli random variables that are symmetric around 1/2. For the upper bound, a clear weakness is the lack of discussion about how to potentially overcome not knowing epsilon, since it seems unrealistic to know the mixing parameter in practice. Finally, the empirical section is not very convincing.
The Bernoulli case is not of interest because the best you can do is just ignore potential corruption and run e.g. Thompson sampling (since the corruption is undetectable and we do not need to hedge against it). For the adversarial baseline, it would be better to use an algorithm that can handle heavier tails, such as log-barrier-based methods. Edit after reviewing TMLR policy: Besides the issue of stating Theorem 1 too strongly, since the corresponding lemma only holds for special pairs of Bernoulli random variables, there are no other clear technical flaws in this paper. I would still suggest the changes proposed in my original review to increase the significance of this paper, but I understand that this is not a criterion for this journal. Requested Changes: Necessary changes * Rework the proof of Theorem 1, which currently relies on a lemma that only holds for centered Bernoulli random variables. * The authors should run a spellcheck, since there are avoidable spelling mistakes such as 'tge' -> 'the' in the paper, as well as several grammar mistakes. Recommended changes * Replace the Bernoulli in the corrupted lower bound example with an actual heavy-tailed example. * Extend the upper bound to unknown epsilon. * Include log-barrier-based FTRL in the experimental section. Broader Impact Concerns: No concern. ================================================== Metareview: Recommendation: Accept with minor revision Comment: There are a number of presentation issues which need to be addressed before the final version. A (partial) list is provided below; please make sure to correct all of them and go through the paper again to make similar improvements where necessary. Technical issues: The writing is not always clear, some definitions are missing, and the use of notation is not always clear. Some examples are given below: - In the problem formulation $\nu$ and $P_i$ seem to be connected, defining the environment.
Then in Lemma 2, they are selected independently, and it is not specified which of them now defines the environment. First $\nu$ is fixed, then the statement is for "any $P_i$", followed by $\nu$ being a free variable in the definition of $\mathcal{K}$. Neither $P^*$ nor $D_{KL}$ seems to be defined. In the discussion following the lemma, $P_i$ (and $P^*$) are considered as "arms", while they are reward distributions associated with these arms. - Student distributions have zero mean, as do the elements of the set $\mathcal{T}_d$, but Lemma 3 and the theorems use shifted Student distributions. - What is $\delta_c$ in the corrupted Bernoulli definition? By the way, I think it would be clearer to say that you consider the family of corrupted Bernoulli distributions as defined above Theorem 1, and then say you consider two elements from it (does this mean $c=1$ in $Q_0$?). Introducing the corrupted Bernoulli with pairs of distributions is a bit confusing. - Lemma 4 is again a bit confusing: Are $Q_0$ and $Q_1$ the same as defined above the lemma (i.e., with an atom added to $Q_1$ at 0)? If yes, it would be simpler to define them in the lemma. Is there a special meaning to "shifted suboptimality gap"? I assume not, in which case it might be clearer to just say suboptimality gap. In any case, the lemma is very strangely phrased, as $\sigma$ and $\Delta$ fully determine $P_0$ and $P_1$ (thus the "there exists" is not really appropriate), and it is also not clear from the statement that the $Q_i$ are Bernoulli. Please define everything properly (e.g., given $\sigma$ and $\Delta$, define $P_0$ and $P_1$, etc.). You may want to say that $\Delta>0$. - Theorem 1: It is never explicitly mentioned that in the Student/Bernoulli cases all reward distributions are Student/Bernoulli (in fact, it might be best to introduce these settings properly when you give the corresponding KL bounds). In the Bernoulli case it might be worth mentioning the exact form of $P_i$.
The sign of $\Delta_i$ is flipped. - Theorem 3/Corollary 1: You should mention explicitly which algorithm is run. - p. 5: "Assuming a non-adversarial behavior of the Nature seem" -> seems - Citations: Please make sure your use of citations is correct (see the "Citations" section in https://www.jmlr.org/format/format.html and https://www.jmlr.org/format/formatting-errors.html). The following suggestions should also be considered for the final version: - The name "corrupted by nature" is a bit mysterious, in the sense that the role of "nature" is not clear from the wording. Calling it something like "bandits with (stochastic) corruption" seems more descriptive. (Also, a "corrupted bandit algorithm" is a bandit algorithm which is corrupted; you may want to use something like "an algorithm for the corrupted-bandit problem" or a "corrupted-bandit algorithm.") - The reviewers were not very happy with the motivating example, so coming up with some others (where the number of decisions is actually in the ballpark of what is used in the experiments) would be of interest. - To me the name "corrupted regret" also seems a bit misleading, as in some sense the quantity is more like the non-corrupted regret (i.e., the regret the algorithm would suffer on the non-corrupted part of the data), but I understand that the motivation behind this naming is that the observations are corrupted. ==================================================
# Gaussian-Smoothed Sliced Probability Divergences

Anonymous authors Paper under double-blind review

## Abstract

The Gaussian-smoothed sliced Wasserstein distance has been recently introduced for comparing probability distributions while preserving privacy on the data. It has been shown to provide performances similar to its non-smoothed (non-private) counterpart. However, the computational and statistical properties of such a metric have not yet been well-established. This work investigates the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian-smoothed sliced divergences $\mathrm{G}_\sigma\mathrm{SD}^p$. We first show that smoothing and slicing preserve the metric property and the weak topology. To study the sample complexity of such divergences, we then introduce $\hat{\hat{\mu}}_n$, the double empirical distribution for the smoothed-projected $\mu$. The distribution $\hat{\hat{\mu}}_n$ is the result of a double sampling process: one sampling according to the original distribution $\mu$ and a second according to the convolution of the projection of $\mu$ on the unit sphere with the Gaussian smoothing. We particularly focus on the Gaussian-smoothed sliced Wasserstein distance $\mathrm{G}_\sigma\mathrm{SW}^p$ and prove that it converges with a rate $O(n^{-1/2})$. We also derive other properties, including continuity, of different divergences with respect to the smoothing parameter. We support our theoretical findings with empirical studies in the context of privacy-preserving domain adaptation.

## 1 Introduction

Divergences for comparing two distributions have been shown to be important for achieving good performance in the contexts of generative modeling (Arjovsky et al., 2017; Salimans et al., 2018), domain adaptation (Long et al., 2015; Courty et al., 2016; Lee et al., 2019), and computer vision (Bonneel et al., 2011; Solomon et al., 2015), among many more applications (Kolouri et al., 2017; Peyré & Cuturi, 2019; Nguyen et al., 2023).
Examples of divergences that have proved useful for these tasks are the Maximum Mean Discrepancy (Gretton et al., 2012; Long et al., 2015; Sutherland et al., 2017), the Wasserstein distance (Monge, 1781; Kantorovich, 1942; Villani, 2009), and its variant the sliced Wasserstein distance (SW) (Kolouri et al., 2016; Bonneel & Coeurjolly, 2019; Kolouri et al., 2019b; Nguyen et al., 2021; 2022; 2024). The SW distance has the advantage of being computationally efficient, since it uses a closed-form solution for distributions supported on $\mathbb{R}$, by computing the expectation of one-dimensional (1D) random projections of distributions in $\mathbb{R}^d$. Owing to this efficiency and the resulting scalability, this distance has been successfully applied in several applications ranging from generative models to domain adaptation (Kolouri et al., 2019a; Deshpande et al., 2019; Wu et al., 2019; Lee et al., 2019), and its statistical properties have been well-studied by Nadjahi et al. (2020). Recently, Gaussian-smoothed variants of the Wasserstein distance and the sliced Wasserstein distance have been introduced by Nietert et al. (2021) and Rakotomamonjy & Ralaivola (2021), respectively. One main motivation behind these variants is to provide a privacy guarantee for the distribution comparison task, as Gaussian smoothing is known to be a mechanism for achieving differential privacy (Dwork et al., 2014). While the properties of the Gaussian-smoothed Wasserstein distance have been extensively studied by Nietert et al. (2021), the properties of the Gaussian-smoothed sliced Wasserstein distance have not been fully investigated yet, although the latter is more computationally efficient. In this work, we fill this gap by providing a theoretical analysis of the Gaussian-smoothed sliced Wasserstein distance.
We investigate the theoretical properties of the Gaussian-smoothed sliced Wasserstein distance and those of more general Gaussian-smoothed sliced divergences induced by some base distances or divergences for distributions defined on $\mathbb{R}^d$. Specifically, as our main contributions, we establish the topological properties of these divergences. Then, we focus on the sample complexity of such divergences by introducing the double empirical distribution for the smoothed-projected origin distribution $\mu$. The new empirical distribution is the result of a double sampling process: one sampling according to the origin distribution and a second according to the convolution of the projection of $\mu$ on the unit sphere with the Gaussian smoothing. We particularly focus on the Gaussian-smoothed sliced Wasserstein distance. Under some mild assumptions, we also prove that the Gaussian-smoothed sliced divergences satisfy an order relation with respect to the noise level and are continuous with respect to this parameter. Given the importance of the noise level in the privacy/utility trade-off achieved by the divergence, this latter property has high practical impact, as it supports a computationally cheap warm-start/fine-tuning procedure when looking for a privacy/utility compromise of the divergence. Our theoretical study is backed by numerical experiments on toy problems and on domain adaptation, illustrating that, owing to the topology induced by our metric and its continuity, differential privacy comes almost for free (without loss of performance) and multiple models with different levels of privacy can be cheaply computed. Comparison with previous works. Here we highlight the position of this work compared to the most closely related previous ones, in particular Nadjahi et al. (2020) and Rakotomamonjy & Ralaivola (2021). The work of Nadjahi et al.
(2020) is focused on the sliced Wasserstein distance and its statistical properties, whereas our work addresses the properties of Gaussian smoothing combined with general divergences (e.g., Wasserstein, MMD, Sinkhorn divergence). We argue that the properties cannot be directly derived from (Nadjahi et al., 2020), especially the sample complexity result. In Rakotomamonjy & Ralaivola (2021), the authors investigated the smoothed Wasserstein distance and their theoretical findings principally concerned the metric property, whereas we further investigate sample and projection complexities and the continuity properties w.r.t. the smoothing noise level. We emphasize that the novelty of the present paper consists in the theoretical properties derived from the definition of the empirical measure $\hat{\hat{\mu}}_n$. The latter is derived from a double sampling process, which is inspired by the implementation: we sample $X_1, \ldots, X_n$ from the raw distribution $\mu$ to define $\hat{\mu}_n$, then project it on the unit sphere and smooth this projection with a Gaussian distribution. Nevertheless, this smoothing is, from a theoretical point of view, a continuous measure (see Lemma 3.5) that needs to be sampled. This motivates the introduction of a second sampling step to construct $\hat{\hat{\mu}}_n$, an empirical version of the smoothed projection of $\mu$. To the best of our knowledge, this work is the first to introduce this double randomness in the context of smoothed optimal transport discrepancies. Recent works (Goldfeld et al., 2020; Nietert et al., 2021) addressed the smoothed Wasserstein distance, and their theoretical results relied only on $\hat{\mu}_n$. Layout of the paper. The paper is organized as follows: after introducing the notation and some background in Section 2, we detail the topological properties of the Gaussian-smoothed sliced divergence in Section 3.1, while the double sampling process and its statistical properties are established in Section 3.2. The noise analyses are provided in Section 3.3.
Experimental analyses supporting the theory and showcasing the relevance of our divergences in domain adaptation are presented in Section 4. Discussions on the perspectives and limitations are in Section 5. All the proofs of the theoretical results and some additional experiments are postponed to the appendices in the supplementary.

## 2 Preliminaries

For the reader's convenience, we provide a brief summary of standard notations and definitions used throughout the paper. Notation. For $d \in \mathbb{N}^*$, let $\mathcal{P}(\mathbb{R}^d)$ be the set of Borel probability measures on $\mathbb{R}^d$ and $\mathcal{P}_p(\mathbb{R}^d) \subset \mathcal{P}(\mathbb{R}^d)$ those with finite moment of order $p$, i.e., $\mathcal{P}_p(\mathbb{R}^d) \triangleq \{\mu \in \mathcal{P} : \int \|x\|^p \mathrm{d}\mu(x) < \infty\}$, where $\|\cdot\|$ is the Euclidean norm. We denote $M_p(\mu) = \int \|x\|^p \mathrm{d}\mu(x)$. For two probability distributions $\mu$ and $\nu$, we denote their convolution as $\mu * \nu \in \mathcal{P}(\mathbb{R}^d)$, namely $(\mu * \nu)(A) = \int\!\int \mathbb{1}_A(x+y)\,\mathrm{d}\mu(x)\mathrm{d}\nu(y)$, where $\mathbb{1}_A(\cdot)$ is the indicator function of $A$. Given two independent random variables $X \sim \mu$ and $Y \sim \nu$, we recall that $X + Y \sim \mu * \nu$. The $d$-dimensional unit sphere is denoted by $\mathbb{S}^{d-1} \triangleq \{\theta \in \mathbb{R}^d : \|\theta\| = 1\}$. We denote by $u_d$ the uniform distribution on $\mathbb{S}^{d-1}$ and we use $\delta(\cdot)$ to denote the Dirac delta function. We denote by $\mathbf{E}_\mu f$ the expectation of the function $f$ with respect to $\mu$. Let $\Gamma : \mathbb{R} \to \mathbb{R}$ be the Gamma function, $\Gamma(v) = \int_0^\infty t^{v-1} e^{-t} \mathrm{d}t$ for $v > 0$. For $k \in \mathbb{N}$, $(\cdot)_k$ denotes the Pochhammer symbol, also known in the literature as the rising factorial, namely $(\alpha)_k = \frac{\Gamma(\alpha+k)}{\Gamma(\alpha)} = \alpha(\alpha+1)\cdots(\alpha+k-1)$. We denote by ${}_1F_1(\alpha, \gamma; z)$ Kummer's confluent hypergeometric function (Olver, 2010), defined by ${}_1F_1(\alpha, \gamma; z) = \sum_{k=0}^\infty \frac{(\alpha)_k}{(\gamma)_k} \frac{z^k}{k!}$. Sliced Wasserstein distance. We recall in this paragraph several measures of similarity between two distributions.
The Wasserstein distance of order $p \in [1, \infty)$ between two measures in $\mathcal{P}_p(\mathbb{R}^d)$ is given by the relaxation of the optimal transport problem, and it is defined as

$$\mathrm{W}_{p}^{p}(\mu,\nu)=\operatorname*{inf}_{\gamma\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\|x-x^{\prime}\|^{p}\,\mathrm{d}\gamma(x,x^{\prime})$$

where $\Pi(\mu, \nu) \triangleq \{\gamma \in \mathcal{P}(\mathbb{R}^d \times \mathbb{R}^d) \,|\, \pi_{1\#}\gamma = \mu,\ \pi_{2\#}\gamma = \nu\}$ and $\pi_1, \pi_2$ are the marginal projectors of $\gamma$ on each of its coordinates. When $d = 1$, the Wasserstein distance can be calculated in closed form owing to the cumulative distributions of $\mu$ and $\nu$ (Rachev & Rüschendorf, 1998). In practice, for empirical distributions, the closed-form solution requires only the sorting of the samples, which makes it very efficient. Because of this efficiency, efforts have been devoted to deriving a metric for high-dimensional distributions based on the 1D Wasserstein distance. The main idea is to project high-dimensional probability distributions onto a random one-dimensional space and then to compute the Wasserstein distance. This operation can be theoretically formalized through the use of the Radon transform, leading to the so-called sliced Wasserstein distance (Kolouri et al., 2016; Bonneel & Coeurjolly, 2019; Kolouri et al., 2019b; Nguyen et al., 2021). Definition 2.1. For any $p \in [1, \infty)$ and two measures $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$, the sliced Wasserstein distance (SW) reads as

$$\mathrm{SW}^{p}(\mu,\nu)\triangleq\int_{\mathbb{S}^{d-1}}\mathrm{W}_{p}^{p}({\mathcal{R}}_{\mathbf{u}}\mu,{\mathcal{R}}_{\mathbf{u}}\nu)u_{d}(\mathbf{u})\mathrm{d}\mathbf{u},$$

where $\mathcal{R}_{\mathbf{u}}$ is the Radon transform of a probability distribution, namely $\mathcal{R}_{\mathbf{u}}\mu(\cdot) = \int_{\mathbb{R}^d} \mu(s)\delta(\cdot - s^\top \mathbf{u})\,\mathrm{d}s$. In practice, the integral is approximated through a Monte Carlo simulation, leading to a sum of 1D Wasserstein distances over a fixed number of random directions $\mathbf{u}$. Gaussian-smoothed sliced Wasserstein distance.
Based on this definition of SW, replacing the Radon-projected measures with their Gaussian-smoothed counterparts leads to the following definition: Definition 2.2. The $\sigma$-Gaussian-smoothed $p$-sliced Wasserstein distance between probability distributions $\mu$ and $\nu$ in $\mathcal{P}_p(\mathbb{R}^d)$ writes as

$$\mathrm{G}_{\sigma}\mathrm{SW}^{p}(\mu,\nu)\triangleq\int_{\mathbb{S}^{d-1}}\mathrm{W}_{p}^{p}({\mathcal{R}}_{\mathbf{u}}\mu*{\mathcal{N}}_{\sigma},{\mathcal{R}}_{\mathbf{u}}\nu*{\mathcal{N}}_{\sigma})u_{d}(\mathbf{u})\mathrm{d}\mathbf{u},$$

where $\mathcal{N}_\sigma = \mathcal{N}(0, \sigma^2)$ is the zero-mean $\sigma^2$-variance Gaussian measure. It is important to note here that the smoothing (convolution) operation occurs after projection onto the one-dimensional space. Hence, assuming $X \sim \mu$, $Y \sim \nu$, for a given direction $\mathbf{u}$, we compute in the integral the one-dimensional Wasserstein distance between the probability laws of $\mathbf{u}^\top X + Z$ and $\mathbf{u}^\top Y + Z'$, where $Z, Z' \sim \mathcal{N}_\sigma$ are independent random variables. The metric properties of $\mathrm{G}_\sigma\mathrm{SW}^p$ for $p \geq 1$ have been discussed in a recent work (Rakotomamonjy & Ralaivola, 2021). This latter work has also shown, in the context of differential privacy, the importance of convolving the Radon-projected distribution with a Gaussian instead of computing the SW distance of the original distribution smoothed with a $d$-dimensional Gaussian, $\mu * \mathcal{N}_{\sigma I_d}$, where $I_d$ denotes the $d \times d$ identity matrix. Gaussian-smoothed sliced divergence. The idea of slicing high-dimensional distributions before feeding them to a divergence between probability distributions can be extended to distances other than the Wasserstein distance. These sliced divergences have been studied by Nadjahi et al. (2020). Similarly, we can define a Gaussian-smoothed sliced divergence, given a divergence $\mathrm{D}_{\mathbb{R}^d} : \mathcal{P}_p(\mathbb{R}^d) \times \mathcal{P}_p(\mathbb{R}^d) \to \mathbb{R}^+$ for $d \geq 1$, as: Definition 2.3.
The $\sigma$-Gaussian-smoothed $p$-sliced divergence between probability distributions $\mu$ and $\nu$ in $\mathcal{P}_p(\mathbb{R}^d)$ associated to the *base divergence* $\mathrm{D} \triangleq \mathrm{D}_{\mathbb{R}}$, $p \geq 1$, is

$$\mathrm{G}_{\sigma}\mathrm{SD}^{p}(\mu,\nu)\triangleq\int_{\mathbb{S}^{d-1}}\mathrm{D}^{p}(\mathcal{R}_{\mathbf{u}}\mu*\mathcal{N}_{\sigma},\mathcal{R}_{\mathbf{u}}\nu*\mathcal{N}_{\sigma})u_{d}(\mathbf{u})\mathrm{d}\mathbf{u},$$

where the superscript $p$ refers to a power. Typical relevant divergences are the maximum mean discrepancy (MMD) (Gretton et al., 2012) or the Sinkhorn divergence (Genevay et al., 2018; Peyré & Cuturi, 2019). In Section 4, we report empirical findings based on these divergences as well as on the Wasserstein distance.

## 3 Theoretical Properties

In this section, we analyze the properties of the Gaussian-smoothed sliced divergence, in terms of topological and statistical properties and the influence of the Gaussian smoothing parameter $\sigma$ on the distance.

## 3.1 Topology

It has already been shown in Rakotomamonjy & Ralaivola (2021) that the Gaussian-smoothed sliced Wasserstein distance is a metric on $\mathcal{P}(\mathbb{R}^d)$. In what follows, we extend these results to any divergence $\mathrm{D}(\cdot, \cdot)$ under certain assumptions. Theorem 3.1. *For any $\sigma > 0$, $p \geq 1$, the following properties hold:*

1. if $\mathrm{D}(\cdot, \cdot)$ is non-negative (or symmetric), then $\mathrm{G}_\sigma\mathrm{SD}(\cdot, \cdot)$ is non-negative (or symmetric);
2. if $\mathrm{D}(\cdot, \cdot)$ satisfies the identity of indiscernibles, i.e., for $\mu', \nu' \in \mathcal{P}(\mathbb{R})$, $\mathrm{D}(\mu', \nu') = 0$ if and only if $\mu' = \nu'$, then this identity also holds for $\mathrm{G}_\sigma\mathrm{SD}(\cdot, \cdot)$ for any $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$;
3. if $\mathrm{D}(\cdot, \cdot)$ satisfies the triangle inequality, then $\mathrm{G}_\sigma\mathrm{SD}(\cdot, \cdot)$ satisfies the triangle inequality.

The above theorem shows that under mild hypotheses over the base divergence $\mathrm{D}$, such as being a metric, the metric property of its Gaussian-smoothed sliced version naturally follows. As exposed in the appendix, the more involved property to prove is the identity of indiscernibles.
We further postpone to the appendix the proofs of two other topological properties: (i) $\mathrm{G}_\sigma\mathrm{SD}$ metrizes the weak topology on $\mathcal{P}_p(\mathbb{R}^d)$ and (ii) $\mathrm{G}_\sigma\mathrm{SD}$ is lower semi-continuous with respect to the weak topology in $\mathcal{P}_p(\mathbb{R}^d)$. Now, we establish under which conditions on the divergence $\mathrm{D}$ the convergence of a sequence in $\mathrm{G}_\sigma\mathrm{SD}$ implies weak convergence in $\mathcal{P}_p(\mathbb{R}^d)$. We say that $\{\mu_k\}_{k\in\mathbb{N}}$ *converges weakly* to $\mu$, and write $\mu_k \Rightarrow \mu$, if $\int f(x)\mathrm{d}\mu_k(x) \to \int f(x)\mathrm{d}\mu(x)$ as $k \to \infty$, for every $f$ in the space of all bounded continuous real functions. Theorem 3.2. *Let $\sigma > 0$, $p \geq 1$, $\mu \in \mathcal{P}_p(\mathbb{R}^d)$, and $\{\mu_k \in \mathcal{P}_p(\mathbb{R}^d)\}_{k\in\mathbb{N}}$ a sequence of distributions. Assume that the divergence $\mathrm{D}$ is bounded and metrizes the weak topology on $\mathcal{P}(\mathbb{R})$. Then, $\lim_{k\to\infty} \mathrm{G}_\sigma\mathrm{SD}(\mu_k, \mu) = 0$ if and only if $\mu_k \Rightarrow \mu$.* Note that Theorem 3.2 extends the results of Nadjahi et al. (2020) to Gaussian-smoothed distributions, as we retrieve them as a special case for $\sigma = 0$. In addition, based on Theorem 3.2 by Lin et al. (2021) and the above, we can also claim that the Gaussian-smoothed SWD metrizes weak convergence. Proposition 3.3. *Let $\sigma > 0$, $p \geq 1$ and assume that the base divergence $\mathrm{D}$ is lower semi-continuous w.r.t. the weak topology in $\mathcal{P}(\mathbb{R})$. Then, $\mathrm{G}_\sigma\mathrm{SD}$ is lower semi-continuous with respect to the weak topology in $\mathcal{P}_p(\mathbb{R}^d)$.* When the base divergence $\mathrm{D}$ is the Wasserstein distance $\mathrm{W}_p$, which is lower semi-continuous (Villani, 2009), Proposition 3.3 shows that the smoothed sliced Wasserstein distance is lower semi-continuous too.

## 3.2 Statistical Properties

The next theoretical question we are interested in concerns the incurred error when the true distribution $\mu$ is approximated by its empirical distribution $\hat{\mu}_n$. Such a case is common in practical applications where only (high-dimensional) empirical samples are available.
Specifically, we are interested in quantifying two key properties of the empirical Gaussian-smoothed divergence: (i) the convergence of the double empirical $\hat{\mathrm{G}}_\sigma\mathrm{SD}(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n)$ (see Definition 3.6) to $\mathrm{G}_\sigma\mathrm{SD}(\mu, \nu)$; (ii) the convergence of $\widehat{\mathrm{G}_\sigma\mathrm{SD}}(\mu, \nu)$ (see (1)) to $\mathrm{G}_\sigma\mathrm{SD}(\mu, \nu)$, when approximating the expectation over the random projections with the sample mean. Let $\hat{\mu}_n = \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$ and $\hat{\nu}_n = \frac{1}{n}\sum_{i=1}^n \delta_{Y_i}$ be the empirical probability measures of independent observations. The Gaussian-smoothed sliced divergence between $\hat{\mu}_n$ and $\hat{\nu}_n$ is given by

$$\mathrm{G}_{\sigma}\mathrm{SD}^{p}(\hat{\mu}_{n},\hat{\nu}_{n})=\int_{\mathbb{S}^{d-1}}\mathrm{D}^{p}\left(\mathcal{R}_{\mathbf{u}}\hat{\mu}_{n}*\mathcal{N}_{\sigma},\mathcal{R}_{\mathbf{u}}\hat{\nu}_{n}*\mathcal{N}_{\sigma}\right)u_{d}(\mathbf{u})\mathrm{d}\mathbf{u}.$$

Remark 3.4. Remark that for a fixed $\mathbf{u} \in \mathbb{S}^{d-1}$, the distributions $\mathcal{R}_{\mathbf{u}}\hat{\mu}_n * \mathcal{N}_\sigma$ and $\mathcal{R}_{\mathbf{u}}\hat{\nu}_n * \mathcal{N}_\sigma$ are *continuous*; in particular, they are mixtures of Gaussian distributions centered on the projected samples with variance $\sigma^2$. Lemma 3.5. *Conditionally on the samples $\{X_i\}_{i=1,\ldots,n}$ and $\{Y_i\}_{i=1,\ldots,n}$, one has: $\mathcal{R}_{\mathbf{u}}\hat{\mu}_n * \mathcal{N}_\sigma = \frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top X_i, \sigma^2)$ and $\mathcal{R}_{\mathbf{u}}\hat{\nu}_n * \mathcal{N}_\sigma = \frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top Y_i, \sigma^2)$.* Note that we further need to sample with respect to the continuous Gaussian mixture measures in Lemma 3.5 in order to get a *fully* empirical version of $\mathrm{G}_\sigma\mathrm{SD}(\mu, \nu)$. To this end, we next define the *double empirical* divergence of $\mathrm{G}_\sigma\mathrm{SD}$.

## 3.2.1 Double Empirical Divergence of GσSD

Let $T_1^x, \ldots, T_n^x$ and $T_1^y, \ldots, T_n^y$ be i.i.d. observations of $\mathcal{R}_{\mathbf{u}}\hat{\mu}_n * \mathcal{N}_\sigma$ and $\mathcal{R}_{\mathbf{u}}\hat{\nu}_n * \mathcal{N}_\sigma$, respectively. Sampling i.i.d. $\{T_i^x\}_{i=1,\ldots,n}$ is performed by the following scheme: for $i = 1, \ldots, n$, we first choose the component $\mathcal{N}(\mathbf{u}^\top X_i, \sigma^2)$ from the mixture $\frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top X_i, \sigma^2)$, then we generate $T_i^x = \mathbf{u}^\top X_i + Z_i^x$, where $Z_i^x \sim \mathcal{N}_\sigma$.
Hence, we set, for a given $\mathbf{u}$,

$${\hat{\hat{\mu}}}_{n}={\frac{1}{n}}\sum_{i=1}^{n}\delta_{T_{i}^{x}}={\frac{1}{n}}\sum_{i=1}^{n}\delta_{\mathbf{u}^{\intercal}X_{i}+Z_{i}^{x}}{\mathrm{~and~}}{\hat{\hat{\nu}}}_{n}={\frac{1}{n}}\sum_{i=1}^{n}\delta_{T_{i}^{y}}={\frac{1}{n}}\sum_{i=1}^{n}\delta_{\mathbf{u}^{\intercal}Y_{i}+Z_{i}^{y}}.$$

The measure $\hat{\hat{\mu}}_n \in \mathcal{P}(\mathbb{R})$ defines an empirical version of the continuous $\mathcal{R}_{\mathbf{u}}\hat{\mu}_n * \mathcal{N}_\sigma$, denoted as $\widehat{\mathcal{R}_{\mathbf{u}}\hat{\mu}_n * \mathcal{N}_\sigma}$ (similarly $\hat{\hat{\nu}}_n = \widehat{\mathcal{R}_{\mathbf{u}}\hat{\nu}_n * \mathcal{N}_\sigma}$). Using the aforementioned notation, we define: Definition 3.6. The double empirical Gaussian-smoothed sliced divergence reads as

$$\hat{\mathrm{G}}_{\sigma}\mathrm{SD}^{p}(\hat{\hat{\mu}}_{n},\hat{\hat{\nu}}_{n})\triangleq\int_{\mathbb{S}^{d-1}}\mathrm{D}^{p}(\hat{\hat{\mu}}_{n},\hat{\hat{\nu}}_{n})u_{d}(\mathbf{u})\mathrm{d}\mathbf{u}.$$

Remark 3.7. (i) It is worth commenting on the double randomness appearing in the definition of $\hat{\mathrm{G}}_\sigma\mathrm{SD}^p(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n)$: the first comes from sampling according to the original probability measure ($\mu$ or $\nu$), whereas the second comes from sampling according to the mixture $\frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top X_i, \sigma^2)$. (ii) The empirical measure of the convolution $\widehat{\mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma}$ could be written as $\frac{1}{n}\sum_{i=1}^n \delta_{U_i^x + Q_i^x}$, allowing to sample *in one shot* $n$ i.i.d. samples $U_i^x + Q_i^x$ such that $U_i^x \sim \mathcal{R}_{\mathbf{u}}\mu$ and $Q_i^x \sim \mathcal{N}_\sigma$. From an empirical viewpoint, sampling according to $\mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma$ is intractable. For that reason, our theoretical results and numerical experiments are based on $\hat{\hat{\mu}}_n$, $\hat{\hat{\nu}}_n$, and hence on $\hat{\mathrm{G}}_\sigma\mathrm{SD}(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n)$.

## 3.2.2 Sample Complexity of GσSWp

Herein, our goal is to quantify the error made when approximating $\mathrm{G}_\sigma\mathrm{SW}(\mu, \nu)$ with $\hat{\mathrm{G}}_\sigma\mathrm{SW}(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n)$. More precisely, we are interested in establishing the order of the convergence rate of $\hat{\mathrm{G}}_\sigma\mathrm{SD}(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n)$ towards $\mathrm{G}_\sigma\mathrm{SD}(\mu, \nu)$, according to the sample size $n$. This rate stands for the so-called *sample complexity*. The convergence results in the sequel are given in expectation.
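To make the double sampling scheme of Section 3.2.1 concrete, the following Python/NumPy sketch estimates the double empirical Gaussian-smoothed sliced Wasserstein distance of Definition 3.6, with the integral over $\mathbb{S}^{d-1}$ replaced by a Monte Carlo average and the 1D Wasserstein distance computed in closed form by sorting. This is our own minimal illustration, not code from the paper; all function names are ours, and equal sample sizes are assumed so that the sorting-based formula applies directly.

```python
import numpy as np

def w1_1d(a, b):
    """Closed-form 1D Wasserstein-1 distance between two equal-size
    empirical samples: the mean absolute difference of sorted values."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def double_empirical_gssw(X, Y, sigma, n_proj=50, p=1, seed=None):
    """Monte Carlo estimate of the double empirical G_sigma SW^p.
    For each random direction u on S^{d-1}: project the samples
    (the first sampling already happened when X, Y were drawn), add
    i.i.d. N(0, sigma^2) noise (the second sampling step,
    T_i = u^T X_i + Z_i), and average the p-th power of the
    closed-form 1D distance."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    total = 0.0
    for _ in range(n_proj):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                   # uniform direction on the sphere
        tx = X @ u + sigma * rng.normal(size=n)  # T_i^x = u^T X_i + Z_i^x
        ty = Y @ u + sigma * rng.normal(size=len(Y))
        total += w1_1d(tx, ty) ** p
    return total / n_proj
```

With `sigma = 0` and `Y = X`, the estimate is exactly zero, matching the identity of indiscernibles; for `sigma > 0`, the two independent noise draws make the estimate on identical samples positive but vanishing with $n$, in line with the sample complexity results below.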
Recall that the empirical distributions are derived from a double sampling process, which leads us to consider a double expectation: one w.r.t. the origin distribution, $\mathbf{E}_{\mu^{\otimes n}}$, and one w.r.t. the sampling from the Gaussian smoothing, $\mathbf{E}_{\mathcal{N}_\sigma^{\otimes n}}$, where $\mu^{\otimes n}$ and $\mathcal{N}_\sigma^{\otimes n}$ are the $n$-fold product extensions of $\mu$ and $\mathcal{N}_\sigma$, respectively. We first consider the conditional expectation given the samples $X_1, \ldots, X_n$, i.e. $\mathbf{E}_{\mathcal{N}_\sigma^{\otimes n}}[\,\cdot\,|X_1, \ldots, X_n]$, and then apply $\mathbf{E}_{\mu^{\otimes n}}$. We denote $\mathbf{E}_{\mu^{\otimes n}|\mathcal{N}_\sigma^{\otimes n}}[\,\cdot\,] = \mathbf{E}_{\mu^{\otimes n}}\big[\mathbf{E}_{\mathcal{N}_\sigma^{\otimes n}}[\,\cdot\,|X_1, \ldots, X_n]\big]$. Next, we focus on the sample complexity for the special case of the Gaussian-smoothed sliced Wasserstein distance. Proposition 3.8. *Fix $\sigma > 0$, $p \geq 1$ and $\vartheta > \sqrt{2}$. For $X \sim \mu$, assume that $\int_0^\infty e^{\frac{2\xi^2}{\sigma^2\vartheta^2}} \mathbf{P}\big[\|X\| > \xi\big] \mathrm{d}\xi < \infty$. Then,*

$$\mathbf{E}_{\mu^{\otimes n}|\mathcal{N}_{\sigma}^{\otimes n}}[\hat{\mathrm{G}}_{\sigma}\mathrm{SW}^{p}(\hat{\hat{\mu}}_{n},\mu)]\leq\Xi_{p,\sigma,\vartheta}\frac{1}{\sqrt{n}}+\Upsilon_{p,\sigma,\vartheta,\mu}\frac{\log n}{n},$$

*where* $\Xi_{p,\sigma,\vartheta}=\frac{2^{\frac{5p}{2}-1}}{\sqrt{\pi}}\sigma^{p-\frac{1}{2}}\vartheta^{p+1}\sqrt{\Gamma(p+\frac{1}{2})}\Big(\sqrt{\frac{4\pi\sigma^{2}\vartheta^{2}}{\vartheta^{2}-2}}+4\int_{0}^{\infty}e^{\frac{2\xi^{2}}{\sigma^{2}\vartheta^{2}}}\,\mathbf{P}\big[\|X\|>\xi\big]\,\mathrm{d}\xi\Big)^{\frac{1}{2}}$ *and* $\Upsilon_{p,\sigma,\vartheta,\mu}=\frac{2^{2p-1}C_{p}}{\sqrt{\pi}}\sigma^{2p}\Gamma(p+\tfrac{1}{2})\,{}_1F_1(-p,\tfrac{1}{2};M_{2}(\mu))$, *with $C_p$ a positive constant depending only on $p$.* It is worth noting that for $p \in \mathbb{N}^*$, e.g. $p = 2$ (a standard choice for numerical experiments), the confluent hypergeometric function ${}_1F_1(-p, \frac{1}{2}; M_2(\mu))$ becomes a polynomial since $(-p)_k = 0$ for $k \geq p + 1$.
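This truncation can be checked directly: since $(-p)_k = 0$ for $k \geq p+1$, Kummer's series reduces to a finite sum of $p+1$ terms. A minimal pure-Python sketch (ours, for illustration only; function names are not from the paper):

```python
import math

def pochhammer(a, k):
    """Rising factorial (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def hyp1f1_neg_int(p, gamma, z):
    """Kummer's 1F1(-p, gamma; z) for a nonnegative integer p.
    The series terminates because (-p)_k = 0 for k >= p + 1, so the
    function reduces to a polynomial of degree p in z."""
    return sum(pochhammer(-p, k) / pochhammer(gamma, k) * z**k / math.factorial(k)
               for k in range(p + 1))
```

For instance, with $p = 2$ and $\gamma = \tfrac{1}{2}$ the finite sum evaluates the polynomial $1 - 4z + \tfrac{4}{3}z^2$.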
Now, let us sketch the proof of Proposition 3.8: we first insert the proxy term given by the Gaussian mixture $\frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top X_i, \sigma^2)$; then, by an application of the triangle inequality for the Wasserstein distance, we have to control two terms: (i) $\mathrm{W}_p^p\big(\hat{\hat{\mu}}_n, \frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top X_i, \sigma^2)\big)$ and (ii) $\mathrm{W}_p^p\big(\frac{1}{n}\sum_{i=1}^n \mathcal{N}(\mathbf{u}^\top X_i, \sigma^2), \mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma\big)$. For (i) we get a standard order of $O(\frac{\log n}{n})$, which comes as a by-product of Fournier & Guillin (2015). For (ii), through the maximal coupling characterization of the total variation distance (Theorem 6.15 in Villani (2009)), we obtain the order $O(n^{-1/2})$. The control technique for (ii) was inspired by Goldfeld et al. (2020) and Nietert et al. (2021). Remark 3.9. The condition $\int_0^\infty e^{\frac{2\xi^2}{\sigma^2\vartheta^2}} \mathbf{P}\big[\|X\| > \xi\big] \mathrm{d}\xi < \infty$ requires $\mathbf{P}\big[\|X\| > \xi\big]$ to go to 0 faster than $e^{-\kappa\xi^2}$ for some $\kappa > 2/(\sigma^2\vartheta^2)$. This is satisfied when $\|X\|$ is $\omega$-sub-Gaussian ($\omega \geq 0$), namely $\mathbf{E}[e^{\eta^\top(X - \mathbf{E}[X])}] \leq e^{\omega\|\eta\|_2^2}$ for all $\eta \in \mathbb{R}^d$. If the parameter $\omega$ verifies $\omega < \sigma\vartheta/2$, then the latter condition holds. Remark 3.10. Note that the sample complexity depends on the amount of smoothing through the moments of the Gaussian noise: the larger the amount of smoothing (and thus the privacy), the worse the constant in the complexity bound. Hence, a trade-off between privacy and statistical estimation appears here, since a reasonable differential privacy guarantee usually requires a large Gaussian variance. Proposition 3.11. *Under the same conditions as Proposition 3.8, we have*

$$\mathbf{E}_{\mu^{\otimes n}|\mathcal{N}_{\sigma}^{\otimes n}}\mathbf{E}_{\nu^{\otimes n}|\mathcal{N}_{\sigma}^{\otimes n}}[\hat{\mathrm{G}}_{\sigma}\mathrm{SW}^{p}(\hat{\hat{\mu}}_{n},\hat{\hat{\nu}}_{n})]\leq3^{p-1}\,\mathrm{G}_{\sigma}\mathrm{SW}^{p}(\mu,\nu)+3^{p-1}\frac{2\,\Xi_{p,\sigma,\vartheta}}{\sqrt{n}}+3^{p-1}(\Upsilon_{p,\sigma,\vartheta,\mu}+\Upsilon_{p,\sigma,\vartheta,\nu})\frac{\log n}{n}$$
*and*

$$\mathrm{G}_{\sigma}\mathrm{SW}^{p}(\mu,\nu)\leq3^{p-1}\,\mathbf{E}_{\mu^{\otimes n}|\mathcal{N}_{\sigma}^{\otimes n}}\mathbf{E}_{\nu^{\otimes n}|\mathcal{N}_{\sigma}^{\otimes n}}[\hat{\mathrm{G}}_{\sigma}\mathrm{SW}^{p}(\hat{\hat{\mu}}_{n},\hat{\hat{\nu}}_{n})]+3^{p-1}\frac{2\,\Xi_{p,\sigma,\vartheta}}{\sqrt{n}}+3^{p-1}(\Upsilon_{p,\sigma,\vartheta,\mu}+\Upsilon_{p,\sigma,\vartheta,\nu})\frac{\log n}{n}.$$

The proof of Proposition 3.11 relies on a double application of the triangle inequality satisfied by the Wasserstein distance, $\mathrm{W}_p(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n) \leq \mathrm{W}_p(\hat{\hat{\mu}}_n, \mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma) + \mathrm{W}_p(\mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma, \mathcal{R}_{\mathbf{u}}\nu * \mathcal{N}_\sigma) + \mathrm{W}_p(\mathcal{R}_{\mathbf{u}}\nu * \mathcal{N}_\sigma, \hat{\hat{\nu}}_n)$, combined with Proposition 3.8. This gives a non-sharp convergence result, since we get the constant $3^{p-1}$ in front of $\mathbf{E}_{\mu^{\otimes n}|\mathcal{N}_\sigma^{\otimes n}}\mathbf{E}_{\nu^{\otimes n}|\mathcal{N}_\sigma^{\otimes n}}[\hat{\mathrm{G}}_\sigma\mathrm{SW}^p(\hat{\hat{\mu}}_n, \hat{\hat{\nu}}_n)]$ or $\mathrm{G}_\sigma\mathrm{SW}^p(\mu, \nu)$. However, when the power $p = 1$, we obtain a sharp convergence result with $O(n^{-1/2})$, namely

$$\big|\mathbf{E}_{\mu^{\otimes n}|{\mathcal{N}}_{\sigma}^{\otimes n}}\,\mathbf{E}_{\nu^{\otimes n}|{\mathcal{N}}_{\sigma}^{\otimes n}}[{\hat{\mathrm{G}}}_{\sigma}\mathrm{SW}({\hat{\hat{\mu}}}_{n},{\hat{\hat{\nu}}}_{n})]-\mathrm{G}_{\sigma}\mathrm{SW}(\mu,\nu)\big|\leq3\,\Xi_{1,\sigma,\vartheta}\frac{1}{\sqrt{n}}.$$

Although our theoretical results hold only for the Gaussian-smoothed sliced Wasserstein distance, our empirical results show that, for other base divergences $\mathrm{D}$, the sample complexity of $\mathrm{G}_\sigma\mathrm{SD}^p$ is proportional to the one-dimensional sample complexity of $\mathrm{D}^p$ ($p = 2$). Figure 1 provides an empirical illustration of this statement.

![6_image_0.png](6_image_0.png)

Figure 1: Measuring the divergence between two sets of samples in $\mathbb{R}^{50}$, of increasing size, randomly drawn from $\mathcal{N}(0, I)$. We compare three sliced divergences and their Gaussian-smoothed sliced versions with $\sigma = 3$: (top) the dimension has been set to $d = 50$; (bottom) sample complexity for different dimensions. This plot confirms that the complexity is dimension-independent.
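The experiment of Figure 1 can be reproduced in miniature: for two independent sample sets drawn from the same distribution, the true smoothed sliced distance is zero, so the estimate reflects pure sampling error and should shrink as $n$ grows. The sketch below is our own simplified estimator (here $p = 1$, one noise draw per projection), not the exact experimental code of the paper.

```python
import numpy as np

def gssw1_estimate(X, Y, sigma, n_proj, rng):
    """Rough Monte Carlo estimate of the Gaussian-smoothed sliced W_1
    between the double empirical measures of two equal-size sample sets:
    one noise draw per projection, closed-form 1D W_1 via sorting."""
    d = X.shape[1]
    U = rng.normal(size=(n_proj, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # directions on S^{d-1}
    PX = X @ U.T + sigma * rng.normal(size=(len(X), n_proj))
    PY = Y @ U.T + sigma * rng.normal(size=(len(Y), n_proj))
    # average over projections of the mean absolute difference of sorted values
    return np.mean(np.abs(np.sort(PX, axis=0) - np.sort(PY, axis=0)))

rng = np.random.default_rng(0)
d, sigma = 20, 1.0
# two independent sample sets from the same N(0, I): the estimate is
# dominated by the sampling error, which shrinks as n grows
e_small = gssw1_estimate(rng.normal(size=(10, d)),
                         rng.normal(size=(10, d)), sigma, 100, rng)
e_large = gssw1_estimate(rng.normal(size=(5000, d)),
                         rng.normal(size=(5000, d)), sigma, 100, rng)
```

The decrease from `e_small` to `e_large` is the qualitative content of Proposition 3.8; plotting such estimates against $n$ on a log-log scale recovers the $O(n^{-1/2})$ slope of Figure 1.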
## 3.2.3 Projection Complexity

To compute the Gaussian-smoothed sliced divergence, one may resort to a Monte Carlo scheme to numerically approximate the integral in $\mathrm{G}_\sigma\mathrm{SD}^p(\mu, \nu)$. To this end, let us define the following sum:

$$\widehat{\mathrm{G}_{\sigma}\mathrm{SD}^{p}}(\mu,\nu)=\frac{1}{L}\sum_{l=1}^{L}\mathrm{D}^{p}(\mathcal{R}_{\mathbf{u}_{l}}\mu*\mathcal{N}_{\sigma},\mathcal{R}_{\mathbf{u}_{l}}\nu*\mathcal{N}_{\sigma}),\tag{1}$$

where $\mathbf{u}_l$ is a random vector uniformly drawn from $\mathbb{S}^{d-1}$, for $l = 1, \ldots, L$. Proposition 3.12 shows that for a fixed dimension $d$, the root mean square error of the Monte Carlo (MC) approximation is of order $O(1/\sqrt{L})$, which corresponds to the projection complexity. We denote by $u_d^{\otimes L}$ the $L$-fold product extension of the uniform measure $u_d$ on the unit sphere. Proposition 3.12. *Let $\sigma > 0$, $p \geq 1$. Then the error related to the MC estimation of $\mathrm{G}_\sigma\mathrm{SD}^p$ is bounded as follows:*

$$\mathbf{E}_{u_{d}^{\otimes L}}[|{\widehat{\mathrm{G}_{\sigma}\mathrm{SD}^{p}}}(\mu,\nu)-\mathrm{G}_{\sigma}\mathrm{SD}^{p}(\mu,\nu)|]\leq{\frac{A(p,\sigma)}{\sqrt{L}}},$$

*where* $A^2(p, \sigma) = \int_{\mathbb{S}^{d-1}} \big(\mathrm{D}^p(\mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma, \mathcal{R}_{\mathbf{u}}\nu * \mathcal{N}_\sigma) - \bar{\vartheta}_p\big)^2 u_d(\mathbf{u})\mathrm{d}\mathbf{u}$, *with* $\bar{\vartheta}_p = \int_{\mathbb{S}^{d-1}} \mathrm{D}^p(\mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma, \mathcal{R}_{\mathbf{u}}\nu * \mathcal{N}_\sigma) u_d(\mathbf{u})\mathrm{d}\mathbf{u}$. The term $A^2(p, \sigma)$ corresponds to the variance of $\mathrm{D}^p(\mathcal{R}_{\mathbf{u}}\mu * \mathcal{N}_\sigma, \mathcal{R}_{\mathbf{u}}\nu * \mathcal{N}_\sigma)$ with respect to $\mathbf{u} \sim u_d$. It is worth noting that the precision of the Monte Carlo approximation depends on the number of projections $L$ and on the variance of the evaluations of the divergence $\mathrm{D}^p$. The estimation error decreases at the rate $L^{-1/2}$ according to the number of projections used to compute the smoothed sliced divergence. Given the above results, we provide a finer analysis of the sample complexity of $\mathrm{G}_\sigma\mathrm{SW}^p(\mu, \nu)$. Corollary 3.13. *The sample and projection complexities of $\mathrm{G}_\sigma\mathrm{SW}^p(\mu, \nu)$ read as* $\mathrm{complexity}(\mathrm{G}_\sigma\mathrm{SW}^p) = O(n^{-1/2} + L^{-1/2})$.
*If we consider the number of projections as* $L = \lfloor n^\beta \rfloor$ *for some* $\beta \in (0, 1)$*, then the overall complexity is* $\mathrm{complexity}(\mathrm{G}_\sigma\mathrm{SW}^p(\mu, \nu)) = O(n^{-\beta/2})$.

## 3.3 Noise-Level Dependencies

The parameter $\sigma$ of the Gaussian smoothing function $\mathcal{N}_\sigma$ may significantly influence the attained privacy level. Hence, we provide theoretical results analyzing the effect of the noise level $\sigma$ on the induced Gaussian-smoothed sliced divergence.

![7_image_0.png](7_image_0.png)

Figure 2: Absolute difference between the Monte Carlo approximation of each divergence and the reference value (evaluated with 10,000 projections). The two sets of 500 samples in $\mathbb{R}^{50}$ are randomly drawn from $\mathcal{N}(0, I)$. The Gaussian-smoothed sliced divergences are parameterized with $\sigma = 3$.

## 3.3.1 Order Relation

We first show that the noise level tends to reduce the difference between two distributions as measured by $\mathrm{G}_\sigma\mathrm{SD}^p(\mu, \nu)$, provided the base divergence $\mathrm{D}$ satisfies some mild assumptions. Proposition 3.14. *Let $\mu, \nu \in \mathcal{P}_p(\mathbb{R}^d)$ and consider the noise levels $\sigma_1, \sigma_2$ such that $0 \leq \sigma_1 \leq \sigma_2 < \infty$. Assume that the base divergence $\mathrm{D}$ satisfies $\mathrm{D}(\mu' * \mathcal{N}_{\sigma_2}, \nu' * \mathcal{N}_{\sigma_2}) \leq \mathrm{D}(\mu' * \mathcal{N}_{\sigma_1}, \nu' * \mathcal{N}_{\sigma_1})$, for any $\mu', \nu' \in \mathcal{P}(\mathbb{R})$. Then, $\mathrm{G}_{\sigma_2}\mathrm{SD}^p(\mu, \nu) \leq \mathrm{G}_{\sigma_1}\mathrm{SD}^p(\mu, \nu)$.* Note that the assumed inequality for the base divergence holds for the Gaussian-smoothed Wasserstein distance (Nietert et al., 2021). While we conjecture that it also holds for the smoothed Sinkhorn divergence and MMD, we leave the proofs for future work. Based on the property in Proposition 3.14, we show some specific properties of the metric with respect to the noise level $\sigma$. Proposition 3.15. $\mathrm{G}_\sigma\mathrm{SD}^p(\mu, \nu)$ *is decreasing with respect to $\sigma$ and we have* $\lim_{\sigma\to0} \mathrm{G}_\sigma\mathrm{SD}^p(\mu, \nu) = \mathrm{SD}^p(\mu, \nu)$. The proof of Proposition 3.15 follows straightforwardly from Proposition 3.14 by taking $\sigma_2 = \sigma$ and letting $\sigma_1 \to 0$. This property interestingly states that $\mathrm{G}_\sigma\mathrm{SD}^p$ recovers the sliced divergence when the noise level vanishes.
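The order relation can be illustrated in closed form on a single slice. For the base divergence $\mathrm{D} = \mathrm{W}_2$ and univariate Gaussians, convolving each measure with $\mathcal{N}(0, \sigma^2)$ simply adds $\sigma^2$ to the variances, and the standard closed form $\mathrm{W}_2^2(\mathcal{N}(m_1, s_1^2), \mathcal{N}(m_2, s_2^2)) = (m_1 - m_2)^2 + (s_1 - s_2)^2$ makes the monotone decrease in $\sigma$ explicit. A small sketch (ours, not from the paper):

```python
import math

def smoothed_w2_gaussians(m1, s1, m2, s2, sigma):
    """W_2 between N(m1, s1^2) * N_sigma and N(m2, s2^2) * N_sigma,
    using the closed form for univariate Gaussians: convolution adds
    sigma^2 to each variance, and
    W_2^2(N(m1, a^2), N(m2, b^2)) = (m1 - m2)^2 + (a - b)^2."""
    a = math.sqrt(s1**2 + sigma**2)
    b = math.sqrt(s2**2 + sigma**2)
    return math.sqrt((m1 - m2)**2 + (a - b)**2)

# the smoothed distance decreases as sigma grows and equals the
# unsmoothed W_2 at sigma = 0 (cf. Propositions 3.14 and 3.15)
vals = [smoothed_w2_gaussians(0.0, 1.0, 0.5, 2.0, s)
        for s in (0.0, 1.0, 3.0, 10.0)]
```

As $\sigma$ grows, the scale mismatch is progressively washed out and the distance decreases towards the mean gap $|m_1 - m_2|$, while at $\sigma = 0$ it coincides with the unsmoothed $\mathrm{W}_2$, matching Proposition 3.15.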
We end this section by providing a relation between Gaussian-smoothed sliced Wasserstein distances under two noise levels. Proposition 3.16. *Let $0 \leq \sigma_1 \leq \sigma_2$ be two noise levels. Then, one has* $\mathrm{G}_{\sigma_1}\mathrm{SW}^p(\mu, \nu) \leq 2^{p-1}\,\mathrm{G}_{\sigma_2}\mathrm{SW}^p(\mu, \nu) + 2^{\frac{5p}{2}}(\sigma_2^2 - \sigma_1^2)^p$.

## 3.3.2 Continuity

Now we analyze the continuity properties of $\mathrm{G}_\sigma\mathrm{SD}^p(\mu, \nu)$ w.r.t. the noise level. Proposition 3.17. *For any two distributions $\mu$ and $\nu$ for which the sliced Wasserstein distance is well-defined, the Gaussian-smoothed sliced Wasserstein distance is continuous w.r.t. $\sigma$.* Proposition 3.18. *Assume that the kernel defining the maximum mean discrepancy (MMD) divergence is bounded. Then the Gaussian-smoothed sliced $\mathrm{G}_\sigma\mathrm{MMD}$ is continuous w.r.t. $\sigma$.* The above propositions show that most distribution divergences are continuous with respect to $\sigma$ under mild conditions.

## 4 Numerical Experiments

In this section, we report on a series of experiments that support the established theoretical results. We also highlight the usefulness of the findings in the context of a privacy-preserving domain adaptation problem.

## 4.1 Supporting the Theoretical Results

Sample complexity. The first experiment (see Figure 1) analyzes the sample complexity of different base divergences. It shows that the sample complexity stays similar to that of their original and sliced counterparts up to a constant (see Proposition 3.8). For this purpose, we have considered samples in $\mathbb{R}^d$ randomly drawn from a normal distribution $\mathcal{N}(0, I)$. For the Sinkhorn divergence, the entropy regularization has been set to 0.1, and for MMD, we used a Gaussian kernel whose bandwidth has been set to the mean of all pairwise distances between samples. The number of projections has been fixed to $L = 50$ and we perform 20 runs per experiment. For the first study, the convergence rate has been evaluated by increasing the number of samples up to 25,000 with fixed dimension $d = 50$.
For the second study, we vary both the dimension and the number of samples. Figure 1 shows the sample complexity of the sliced divergences, denoted SWD, SKD, and MMD for the sliced Wasserstein distance, the Sinkhorn divergence, and the maximum mean discrepancy, and of their Gaussian-smoothed sliced versions, denoted GS SWD, GS SKD, and GS MMD. On the top plot, we can see that all Gaussian-smoothed sliced divergences preserve the complexity rate with just a slight to moderate overhead. The largest difference is for the Sinkhorn divergence, while smoothing MMD almost comes for free in terms of complexity. The bottom plot, which reports sample complexities for different dimensions d, confirms that Gaussian smoothing preserves the dimension-independence of the convergence rate of sliced divergences. Two other experiments, on the sample complexity and the identity of indiscernibles, are reported in the supplementary material.

Projection complexity. We have also investigated the impact of the number of projections when estimating the distance between two sets of 500 samples drawn from the same distribution, $\mathcal{N}(0, I)$. Figure 2 plots the approximation error between the true expectation of the sliced divergences (computed with L = 10,000 projections) and its approximated versions. We remark that, for all methods, the error stays within a 10-fold range when approximating with 50 projections and decreases with the number of projections.

Impact of the noise parameter. Since the Gaussian smoothing parameter σ is key in a privacy-preserving context, as it determines the privacy level of the Gaussian mechanism, we have analyzed its impact on the smoothed sliced divergences. We have reproduced the sample-complexity experiment with different values of σ. The number of projections has been set to 50. Figure 3 shows these sample complexities.
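The Monte Carlo estimator underlying these experiments can be sketched in a few lines. The following is an illustrative numpy version, not the authors' code; it assumes two sample sets of equal size and obtains samples from the smoothed 1-D measures by adding one Gaussian noise draw to each projected point:

```python
import numpy as np

def gs_sliced_wasserstein(x, y, sigma=3.0, n_proj=50, p=2, seed=0):
    """Monte Carlo estimate of the Gaussian-smoothed sliced p-Wasserstein
    distance. Each random direction theta yields 1-D projections of x and y;
    adding N(0, sigma^2) noise to the projections gives samples from the
    smoothed 1-D measures, whose p-Wasserstein distance has a closed form
    via sorting. The L projection terms are then averaged."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    acc = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)                        # uniform direction on the sphere
        px = x @ theta + sigma * rng.standard_normal(len(x))  # smoothed projection of x
        py = y @ theta + sigma * rng.standard_normal(len(y))  # smoothed projection of y
        acc += np.mean(np.abs(np.sort(px) - np.sort(py)) ** p)
    return (acc / n_proj) ** (1.0 / p)
```

Replacing the sorted-samples step with a Sinkhorn or MMD computation on the pair of smoothed projections gives the corresponding smoothed sliced divergence.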
The first very interesting point to note is that the smoothing parameter has almost no effect on the GS MMD sample complexity. For the GS SWD and GS SKD divergences, instead, smoothing tends to increase the divergence at a fixed number of samples. Another interpretation is that, to achieve a given divergence value, one needs far more samples when the smoothing is larger (*i.e.*, to reach a divergence value attained at σ = 5, one needs almost 10-fold more samples at σ = 15). This overhead in the number of samples needed as smoothing increases is properly described, for the Gaussian-smoothed sliced Wasserstein distance, by our Proposition 3.8, as the sample complexity depends on the moments of the Gaussian. As a conclusion from these analyses, we highlight that the Gaussian-smoothed sliced MMD presents several strong benefits: its sample complexity does not depend on the dimension and seems to be the best among the divergences we considered. More interestingly, it is not impacted by the amount of Gaussian smoothing and thus not by the desired privacy level.

## 4.2 Domain Adaptation With GσSW

As an application, we have considered the problem of unsupervised domain adaptation for a classification task. In this context, given source examples Xs with their labels ys and unlabeled target examples Xt, our goal is to design a classifier h(·), learned from the source examples, that generalizes well on the target ones. A classical approach consists in learning a representation mapping g(·) that leads to invariant latent representations, where invariance is measured as a distance between the empirical distributions of mapped source and target samples.

![9_image_0.png](9_image_0.png)

Figure 3: Measuring the divergence between two sets of samples in $\mathbb{R}^{50}$ drawn from $\mathcal{N}(0, I)$. We plot the sample complexity of the different Gaussian-smoothed sliced divergences at different noise levels.
![9_image_1.png](9_image_1.png)

Figure 4: Domain adaptation performance using different divergences between distributions, with respect to the Gaussian smoothing. (Left) USPS to MNIST. (Middle) Office-31 Webcam to DSLR. (Right) Office-31 Amazon to Webcam.

Formally, this leads to the following problem

$$\operatorname*{min}_{g,h}\left\{{\mathcal{L}}_{c}(h(g(\mathbf{X}_{s})),\mathbf{y}_{s})+{\mathcal{D}}(g(\mathbf{X}_{s}),g(\mathbf{X}_{t}))\right\}$$

where $\mathcal{L}_c$ can be the cross-entropy loss or a quadratic loss and $\mathcal{D}$ is a divergence between empirical distributions; in our case, $\mathcal{D}$ will be any Gaussian-smoothed sliced divergence. We solve this problem through stochastic gradient descent, similarly to many approaches that use the sliced Wasserstein distance as a distribution distance (Lee et al., 2019). Note that, in practice, using a smoothed divergence preserves the privacy of the target samples, as shown by Rakotomamonjy & Ralaivola (2021).

![9_image_2.png](9_image_2.png)

Figure 5: Domain adaptation performance using different divergences between distributions, with respect to the Gaussian smoothing, using one-epoch fine-tuned models. (Left) USPS to MNIST. (Middle) Office-31 Webcam to DSLR. (Right) Office-31 Amazon to Webcam.

When performing such model adaptation, there is a privacy/utility trade-off that has to be handled. In practice, one would prefer the most private model that does not hurt performance. Hence, one would seek the largest noise level σ > 0 that preserves accuracy on the target domain. It is therefore useful to evaluate how the model performs over a range of noise levels (hence, privacy levels). This can be computationally expensive, as it requires fully training several models over hundreds of epochs. Instead, we leverage the continuity of our GσSD to employ a fine-tuning strategy: we train a domain adaptation model for the largest desired value of σ (over the full number of epochs), and each time σ is decreased, we simply fine-tune the latest model by training for only one epoch.
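As a toy illustration of this objective (not the authors' setup), the following numpy sketch evaluates the combined loss for a hypothetical linear feature map g(x) = xW and softmax classifier h(z) = softmax(zV), with a Gaussian-smoothed sliced 2-Wasserstein term standing in for $\mathcal{D}$:

```python
import numpy as np

def smoothed_sliced_w2(a, b, sigma, n_proj=50, seed=0):
    # Gaussian-smoothed sliced 2-Wasserstein between equal-size sample sets.
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(a.shape[1])
        theta /= np.linalg.norm(theta)
        pa = a @ theta + sigma * rng.standard_normal(len(a))
        pb = b @ theta + sigma * rng.standard_normal(len(b))
        acc += np.mean((np.sort(pa) - np.sort(pb)) ** 2)
    return np.sqrt(acc / n_proj)

def da_objective(W, V, Xs, ys, Xt, sigma=3.0):
    """L_c(h(g(Xs)), ys) + D(g(Xs), g(Xt)), with L_c the cross-entropy,
    for the toy parameterization g(x) = xW and h(z) = softmax(zV)."""
    Zs, Zt = Xs @ W, Xt @ W
    logits = Zs @ V
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(ys)), ys].mean()
    return ce + smoothed_sliced_w2(Zs, Zt, sigma)
```

In the paper's experiments, g and h are neural networks trained jointly by stochastic gradient descent; this sketch only makes the structure of the loss explicit.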
Our experiments evaluate the studied Gaussian-smoothed sliced divergences on classical unsupervised domain adaptation tasks. We have considered two benchmarks: a handwritten digit recognition task (USPS/MNIST) and the Office-31 dataset. In our first analysis, we have compared the performance of our GσSD with non-smoothed divergences. The first one is the sliced Wasserstein distance (SWD) (Lee et al., 2019) and the second one is the Jensen-Shannon approximation based on an adversarial approach, known as DANN (Ganin & Lempitsky, 2015). For all methods and for each dataset, we used the same neural network architecture for the representation mapping and for the classifier. Approaches differ only in how the distance between distributions is computed. Here, for each noise value σ, we have trained the model from scratch for 100 epochs. Results are depicted in Figure 4. For the two problems, we can see that the performances obtained with the Gaussian-smoothed sliced Wasserstein or MMD divergences are similar to those obtained with DANN or SWD across all ranges of noise. The smoothed version of Sinkhorn is less stable and induces a slight loss of performance. Owing to the metric property and the induced weak topology, privacy preservation comes almost without loss of performance in this domain adaptation context.

In the second analysis, we have studied the privacy/utility trade-off when fine-tuning models, using only one epoch, for decreasing values of σ. Results are shown in Figure 5. They highlight that, depending on the dataset and the smoothed divergence used, performance varies between one percent for Office-31 and four percent for USPS to MNIST. Note that, except for the largest value of σ, we train each model for only one epoch instead of a hundred. A very large gain in complexity is thus achieved for sweeping the full range of noise levels.
Hence, depending on the importance of this slight drop in performance, it is worth either using a large value of σ to preserve strong privacy or going through a validation procedure over the several (cheaply obtained) models.

## 5 Conclusion

This work analyzed the properties of Gaussian-smoothed sliced divergences for comparing distributions. We derived several theoretical results related to their topological and statistical properties and showed that, under mild conditions on their base divergences, the smoothing and slicing operations preserve the metric property. From a statistical point of view, we introduced the double empirical distribution, focused on the sample complexity of the smoothed sliced Wasserstein distance, and proved that it converges at a rate $\mathcal{O}(n^{-1/2})$. We further analyzed the behavior of these divergences on domain adaptation problems and confirmed that using them yields only a slight loss of performance while preserving privacy. Note that the obtained bound uses an upper bound on the higher moments of the smoothing distribution. An important direction for future research is to consider non-Gaussian smoothing distributions enjoying this property.

## References

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In *International conference on machine learning*, pp. 214–223. PMLR, 2017.

Nicolas Bonneel and David Coeurjolly. SPOT: Sliced partial optimal transport. *ACM Trans. Graph.*, 38(4), 2019.

Nicolas Bonneel, Michiel van de Panne, Sylvain Paris, and Wolfgang Heidrich. Displacement interpolation using lagrangian mass transport. *ACM Trans. Graph.*, 30(6):158:1–158:12, 2011.

Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal transport for domain adaptation. *IEEE transactions on pattern analysis and machine intelligence*, 39(9):1853–1865, 2016.

Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, and Alexander G. Schwing.
Max-sliced Wasserstein distance and its use for gans. In *2019* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10640–10648, 2019. Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. *Foundations and* Trends® *in Theoretical Computer Science*, 9(3–4):211–407, 2014. Nicolas Fournier and Arnaud Guillin. On the rate of convergence in Wasserstein distance of the empirical measure. *Probability Theory and Related Fields*, 162(3):707–738, Aug 2015. Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pp. 1180–1189. PMLR, 2015. Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608–1617. PMLR, 2018. Ziv Goldfeld, Kristjan Greenewald, Jonathan Niles-Weed, and Yury Polyanskiy. Convergence of smoothed empirical measures with applications to entropy estimation. *IEEE Transactions on Information Theory*, 66 (7):4368–4391, 2020. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012. Leonid V. Kantorovich. On the transfer of masses (in russian). *Doklady Akademii Nauk*, 2:227–229, 1942. Soheil Kolouri, Yang Zou, and Gustavo K. Rohde. Sliced Wasserstein kernels for probability distributions. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5258–5267, 2016. Soheil Kolouri, Se Rim Park, Matthew Thorpe, Dejan Slepcev, and Gustavo K. Rohde. Optimal mass transport: Signal processing and machine-learning applications. *IEEE Signal Processing Magazine*, 34(4): 43–59, July 2017. Soheil Kolouri, Phillip E. Pope , Charles E. Martin, and Gustavo K. Rohde. Sliced Wasserstein auto-encoders. In *International Conference on Learning Representations*, 2019a. 
Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, and Gustavo Rohde. Generalized sliced Wasserstein distances. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32, pp. 261–272. Curran Associates, Inc., 2019b.

Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced wasserstein discrepancy for unsupervised domain adaptation. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 10285–10295, 2019.

Tianyi Lin, Zeyu Zheng, Elynn Chen, Marco Cuturi, and Michael Jordan. On projection robust optimal transport: Sample complexity and model misspecification. In *International Conference on Artificial Intelligence and Statistics*, pp. 262–270. PMLR, 2021.

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In *International conference on machine learning*, pp. 97–105. PMLR, 2015.

Gaspard Monge. Mémoire sur la théorie des déblais et des remblais. *Histoire de l'Académie Royale des Sciences*, pp. 666–704, 1781.

Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, and Umut Şimşekli. Statistical and topological properties of sliced probability divergences. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), *Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual*, 2020.

Khai Nguyen, Nhat Ho, Tung Pham, and Hung Bui. Distributional sliced-wasserstein and applications to generative modeling. In *International Conference on Learning Representations*, 2021.

Khai Nguyen, Tongzheng Ren, Huy Nguyen, Litu Rout, Tan Nguyen, and Nhat Ho. Hierarchical sliced wasserstein distance. *arXiv preprint arXiv:2209.13570*, 2022.

Khai Nguyen, Dang Nguyen, and Nhat Ho.
Self-attention amortized distributional projection optimization for sliced wasserstein point-cloud reconstruction. In *International Conference on Machine Learning*, pp. 26008–26030. PMLR, 2023.

Khai Nguyen, Tongzheng Ren, and Nhat Ho. Markovian sliced wasserstein distances: Beyond independent projections. *Advances in Neural Information Processing Systems*, 36, 2024.

Sloan Nietert, Ziv Goldfeld, and Kengo Kato. Smooth p-wasserstein distance: Structure, empirical approximation, and statistical applications. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 8172–8183. PMLR, 18–24 Jul 2021.

Frank W. J. Olver. *NIST handbook of mathematical functions hardback and CD-ROM*. Cambridge university press, 2010.

Gabriel Peyré and Marco Cuturi. Computational optimal transport. *Foundations and Trends® in Machine Learning*, 11(5-6):355–607, 2019.

Svetlozar T. Rachev and Ludger Rüschendorf. *Mass Transportation Problems: Volume I: Theory*. Mass Transportation Problems. Springer, 1998.

Alain Rakotomamonjy and Liva Ralaivola. Differentially private sliced wasserstein distance. In Marina Meila and Tong Zhang (eds.), *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 8810–8820. PMLR, 18–24 Jul 2021.

Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving GANs using optimal transport. In *International Conference on Learning Representations*, 2018.

Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. *ACM Trans. Graph.*, 34(4):66:1–66:11, 2015.

Dougal J Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alexander J Smola, and Arthur Gretton.
Generative models and model criticism via optimized maximum mean discrepancy. In *ICLR (Poster)*, 2017. Cédric Villani. *Optimal Transport: Old and New*, volume 338 of *Grundlehren der mathematischen Wissenschaften*. Springer Berlin Heidelberg, 2009. Jiqing Wu, Zhiwu Huang, Dinesh Acharya, Wen Li, Janine Thoma, Danda Pani Paudel, and Luc Van Gool. Sliced wasserstein generative models. In *Proceedings of the IEEE/CVF Conference on Computer Vision* and Pattern Recognition, pp. 3713–3722, 2019.
Review 1:
Summary:
This paper presents theoretical analysis and some computational experiments on Gaussian-smoothed sliced probability divergences, which include Gaussian-smoothed Wasserstein distances as a special case. The theoretical results include general properties of the divergences and some approximation complexity results.

Strengths and Weaknesses:
# Strengths
- The results are well-motivated. I can see this being of interest to many people in the field.
- The results appear to be rigorous
- The writing is fairly clear and well-organized
- The experiments support the theory

# Weaknesses
- In my view, the most interesting approximation result is the total complexity bound from Corollary 3.13. The paper would be stronger if this were stated with explicit constants and proved explicitly.
- The proofs appear to be rigorous, but are quite terse and dense. A bit more discussion might help.

Requested Changes:
Get more explicit about the total complexity bounds.

Broader Impact Concerns:
None

==================================================

Review 2:
Summary:
This work studies the theoretical properties of the Gaussian-smoothed sliced Wasserstein distance as well as generalized versions, which are known as Gaussian-smoothed sliced divergences. The work provides a theoretical analysis of general Gaussian-smoothed sliced divergences. The analysis consists of 1) establishing the topological properties of general Gaussian-smoothed sliced divergences, 2) proving sample complexity results for these divergences by introducing a double empirical distribution, and 3) proving that these divergences satisfy an order relation with respect to the noise level and are also continuous with respect to this parameter. The novelty of this work is that it studies the theoretical properties of Gaussian-smoothed sliced divergences.
Previous work has focused on the Wasserstein distance, the sliced Wasserstein distance, and the Gaussian-smoothed Wasserstein distance, but NOT the Gaussian-smoothed sliced Wasserstein distance. This work aims to fill in this gap.

Strengths and Weaknesses:
**Strengths**
1. For someone not an expert in this area, I really enjoyed reading this paper. It was very clear from the start and the authors did a great job at making it clear where the contribution of this paper lies with respect to other work.
2. Many of the theorems and propositions are clearly explained.
3. For a mainly theoretical contribution, I appreciate that the authors also did some experimental analysis.

**Weaknesses**
1. The notation can get a bit overwhelming. I assume that it is almost necessary, but if the authors could in any way make it a bit easier to understand, it would improve the clarity of the work.
2. The introduction says "In this work, we fill this gap by providing a theoretical analysis of the Gaussian smoothed sliced Wasserstein distance." However, in actuality, the authors provide a theoretical analysis of Gaussian-smoothed sliced divergences, which are more general. For that reason, I would think of removing or changing this sentence, because the authors provide results that are even more general and not specifically related to just Gaussian-smoothed sliced Wasserstein distances! It almost felt like the authors were selling themselves short because the theory was a lot more general and did not just apply to the Wasserstein distance.
3. In Section 3.2, the authors quantify (i) the convergence of the double empirical distribution to GσSD(μ, ν) and (ii) the convergence of \hat{GσSD(μ, ν)} to GσSD(μ, ν) when approximating the expectation over the random projection with a sample mean. Could you explain what the difference is between the two? It remains a little unclear which of them (if not both) is the main useful result in practice (if there is one).
Requested Changes:
1) If the authors could try to clean up notation, that would be appreciated.
2) Reword the introduction to make it clear that the results are general and not for just Gaussian-smoothed sliced Wasserstein distances. It just happens to be that, since the work applies to many Gaussian-smoothed sliced divergences, they are able to prove results for the popular Wasserstein distance.

Broader Impact Concerns:
No concerns

==================================================

Review 3:
Summary:
In this paper, the authors provide the theoretical properties of the Gaussian-smoothed sliced Wasserstein distance, including its topology and sample complexity. They also extend some of these theoretical results to other divergences. A series of experiments have been done to support these results.

Strengths and Weaknesses:
Strengths:
1. This paper is well-written and easy to follow.
2. They introduce the double empirical distribution and establish the $\mathcal{O}(n^{-1/2})$ sample complexity results of the smoothed sliced Wasserstein distance.

Questions:
1. In [1], they develop the limit distribution theory for the smooth $p$-Wasserstein distance. Intuitively, would similar results hold for the smoothed sliced $p$-Wasserstein distance?
2. In [2], they show the convergence of transport plans. Can you derive similar results?

[1] Goldfeld, Ziv, et al. "Limit distribution theory for smooth p-Wasserstein distances." The Annals of Applied Probability 34.2 (2024): 2447-2487.
[2] Nietert, Sloan, Ziv Goldfeld, and Kengo Kato. "Smooth $p$-Wasserstein distance: structure, empirical approximation, and statistical applications." International Conference on Machine Learning. PMLR, 2021.

Requested Changes:
I suggest rewriting Proposition 3.16 as a stability result of $G_{\sigma}SW^p$ as that in [2].

Broader Impact Concerns:
N/A

==================================================
# Inverse Kernel Decomposition

Chengrui Li *cnlichengrui@gatech.edu*
School of Computational Science & Engineering
Georgia Institute of Technology

Anqi Wu *anqiwu@gatech.com*
School of Computational Science & Engineering
Georgia Institute of Technology

Reviewed on OpenReview: *https://openreview.net/forum?id=H4OE7toXpa*

## Abstract

The state-of-the-art dimensionality reduction approaches largely rely on complicated optimization procedures. On the other hand, closed-form approaches requiring merely an eigen-decomposition do not have enough sophistication and nonlinearity. In this paper, we propose a novel nonlinear dimensionality reduction method—Inverse Kernel Decomposition (IKD)—based on an eigen-decomposition of the sample covariance matrix of data. The method is inspired by Gaussian process latent variable models (GPLVMs) and has comparable performance with GPLVMs. To deal with very noisy data with weak correlations, we propose two solutions—blockwise and geodesic—to make use of locally correlated data points and provide better and numerically more stable latent estimations. We use synthetic datasets and four real-world datasets to show that IKD is a better dimensionality reduction method than other eigen-decomposition-based methods, and achieves comparable performance against optimization-based methods with faster running speeds. An open-source IKD implementation in Python can be accessed at https://github.com/JerrySoybean/ikd.

## 1 Introduction

Dimensionality reduction techniques have been widely studied in the machine learning field for many years, with massive applications in latent estimation (Wu et al., 2017; 2018), noise reduction (Sheybani & Javidi, 2009), cluster analysis (Bakrania et al., 2020), data visualization (Van der Maaten & Hinton, 2008a), and so forth. The most commonly used method is principal component analysis (PCA), a linear dimensionality reduction approach. It is favored thanks to the easy use of a one-step eigen-decomposition.
Its simple linear assumption, however, restricts its exploitation, especially in highly nonlinear scenarios. On the other hand, nonlinear dimensionality reduction models, such as autoencoders (Kramer, 1991), variational autoencoders (VAEs) (Kingma & Welling, 2013), t-SNE (Van der Maaten & Hinton, 2008b), UMAP (McInnes et al., 2020), and Gaussian process latent variable models (GPLVMs) (Lawrence, 2003; 2005), can achieve state-of-the-art (SOTA) performance in terms of finding (sub)optimal low-dimensional latents and rendering satisfactory downstream analyses (e.g., visualization, prediction, classification). However, all these nonlinear models involve intricate optimization, which is time-consuming, prone to getting stuck in bad local optima, and sensitive to initialization.

In this paper, we propose a novel nonlinear eigen-decomposition-based dimensionality reduction approach that finds low-dimensional latents with a closed-form solution but intricate nonlinearity. The proposed method is called Inverse Kernel Decomposition (IKD), inspired by GPLVMs. GPLVMs are probabilistic dimensionality reduction generative models that use Gaussian processes (GPs) to find a lower-dimensional nonlinear embedding of high-dimensional data. GPLVM and its many variants have been proposed in various domains (Bui & Turner, 2015; Wang et al., 2005; 2008; Urtasun et al., 2006; Wu et al., 2017) and proven to be powerful nonlinear dimensionality reduction and latent variable models. However, GPLVMs are highly nonlinear and non-convex due to the GP component, resulting in practical difficulties during optimization. By deriving the relationship implied by the kernel function in GPLVMs, IKD solves the GPLVM problem through eigen-decomposition, which can give a more stable latent estimation in a shorter time than the traditional optimization-based GPLVM solver (Sec. 3.1.1).
In the experiment section, we compare IKD against four eigen-decomposition-based and four optimization-based dimensionality reduction methods using synthetic datasets and four real-world datasets. We can summarize four contributions of IKD:

- As an eigen-decomposition-based method, IKD achieves more reasonable latent representations than other eigen-decomposition-based methods, with better accuracy in downstream classification tasks. The running time of IKD is on par with other eigen-decomposition-based methods.
- IKD is able to provide competitive performance against some SOTA optimization-based methods, but at a much faster running speed.
- IKD promises a stable and unique optimal solution up to an affine transformation. In contrast, optimization-based methods do not guarantee unique optimal solutions and are sometimes numerically unstable due to their highly nonconvex optimization landscapes. Therefore, IKD can sometimes achieve better latent representations and classification performance than optimization-based methods such as GPLVM and VAE.
- When the observation dimensionality is large (i.e., the observed data are high-dimensional), many methods have significant drawbacks. For example, t-SNE, UMAP, and VAE suffer from the curse of dimensionality: large dimensionality not only leads to longer running times but also hurts dimensionality reduction performance. In contrast, IKD obtains improved performance with increasing observation dimensionality, and is clearly superior when the observation dimensionality is very large.

Note that we are not claiming to propose the best dimensionality reduction approach that beats all other SOTAs.
We propose an advanced eigen-decomposition-based method that (1) outperforms other eigen-decomposition-based methods in most synthetic and real-world applications at the same scale of running speeds, and (2) reaches a comparable level against other optimization-based methods but with a much faster running speed.

## 1.1 Related Works

As GPLVM's eigen-decomposition-based solver, IKD starts from the data-generating process instead of the observed data. Although IKD makes use of eigen-decomposition and kernel functions, it is different from other eigen-decomposition-based methods. For example, GPLVM uses a kernel function to form a nonlinear mapping from the embedded latent space to the data space, which is opposite to the use of the kernel in kernel PCA (Schölkopf et al., 1997). Another similar method is Isomap (Tenenbaum et al., 2000), which is a generalized version of multidimensional scaling (MDS) (Borg & Groenen, 2005). In the form of the backbone IKD algorithm derived in Sec. 2.2, IKD obtains the similarity matrix to be decomposed from the data's pairwise correlations, whereas Isomap obtains its similarity matrix from the data's pairwise generalized distances. However, the initial goals of IKD and Isomap are different: Isomap aims to place the dataset into a low-dimensional space while preserving pairwise distances as well-equally scaled as possible, whereas IKD works as an eigen-decomposition-based solver for data generated from GPLVM. In Sec. 2, we first introduce the generative model GPLVM, then convert the problem to an eigen-decomposition problem step-by-step, and finally solve it.

## 2 Methodology

## 2.1 Gaussian Process Latent Variable Model

Generative model. Let $X \in \mathbb{R}^{T \times N}$ be the observed data, where T is the number of observations and N is the observation dimensionality of each data vector. Let $Z \in \mathbb{R}^{T \times M}$ denote its associated latent variables, where M is the latent dimensionality.
Usually, we assume the latent space is lower-dimensional than the original observational space, leading to $M < N$. For each dimension of X, denoted $X_{:,n} \in \mathbb{R}^{T}$, $\forall n \in \{1, \dots, N\}$, GPLVM defines a mapping function from the latent to the observation which has a Gaussian process (GP) prior. Therefore, given the finite number of observations, we can write

$$X_{:,n} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\mathbf{0}, K), \quad \forall n \in \{1, 2, \dots, N\}, \tag{1}$$

where K is a $T \times T$ covariance matrix generated by evaluating the kernel function k of the GP at all pairs of rows in Z, i.e., $k_{i,j} = k(z_i, z_j)$, where $z_i$ and $z_j$ are the $i$th and $j$th rows of Z.

Problem setting. The goal of GPLVM is to estimate the unknown latent variables Z, which are used for constructing the covariance matrix K, from the observations X. Note that we only consider a noiseless GP for the derivation of IKD, but IKD can deal with noisy observations empirically.

## 2.2 Inverse Kernel Decomposition

In this section, we derive a novel nonlinear decomposition method, inverse kernel decomposition (IKD), inspired by GPLVM. Previous work has been solving GPLVM by maximizing the log-likelihood to obtain Z from X in a one-step fashion. Now let us break this process into two steps: (1) estimating K from the observations X, and (2) identifying the latent variables Z from the estimated covariance matrix K. The first step can be solved by estimating K with the unbiased estimator, i.e., the sample covariance $S := \frac{1}{N-1}\left(X - \bar{X}\mathbf{1}^{\mathrm{T}}\right)\left(X - \bar{X}\mathbf{1}^{\mathrm{T}}\right)^{\mathrm{T}} \approx \frac{1}{N-1}XX^{\mathrm{T}}$, where $\bar{X} = \frac{1}{N}\sum_{n=1}^{N} X_{:,n}$ ought to be $\mathbf{0}$ since the $X_{:,n}$ are i.i.d. samples from a zero-mean Gaussian (Eq. 1). Therefore, our main focus is to estimate the latent Z given S in the second step. In the following, we focus on the discussion of a commonly used stationary kernel, the squared exponential (SE) kernel. We will show in Sec. 2.3 that IKD can also work with various other stationary kernels. The SE kernel is defined as $k(z_i, z_j) = \sigma^2 \exp\left(-\frac{\|z_i - z_j\|^2}{2l^2}\right)$, where $\sigma^2$ is the marginal variance and $l$ is the length-scale.
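As a concrete illustration of Eq. (1), data from this generative model with the SE kernel can be simulated as follows (a sketch we provide for illustration; the jitter term is a standard numerical stabilizer, not part of the model):

```python
import numpy as np

def sample_gplvm(Z, N, sigma2=1.0, ell=1.0, seed=0):
    """Draw X in R^{T x N} with columns X[:, n] ~ N(0, K) i.i.d. (Eq. 1),
    where K is the SE kernel matrix built from the latent rows of Z.
    Returns both the samples and K for convenience."""
    rng = np.random.default_rng(seed)
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise ||z_i - z_j||^2
    K = sigma2 * np.exp(-sq / (2.0 * ell ** 2))
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(Z)))    # jitter for stability
    return L @ rng.standard_normal((len(Z), N)), K
```

With N large, the sample covariance $S = \frac{1}{N-1}XX^{\mathrm{T}}$ concentrates around K, which is exactly what the first step of IKD relies on.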
Note that $\sigma^2 = k(z_i, z_i) = k_{i,i}$, $\forall i \in \{1, \dots, T\}$. Let $f$ be the scalar function mapping the scaled squared distance $d_{i,j} := \frac{\|z_i - z_j\|^2}{l^2}$ between latents $(z_i, z_j)$ to the scalar covariance $k_{i,j}$, i.e., $k_{i,j} = f(d_{i,j}) = \sigma^2 \exp\left(-\frac{d_{i,j}}{2}\right)$. Let us assume we know the true K for now. Since $f(\cdot)$ is strictly monotonic, we can obtain $d_{i,j} = f^{-1}(k_{i,j}) = -2\ln\frac{k_{i,j}}{\sigma^2}$. Writing $d_{i,j}$ in the matrix form $D = (d_{i,j})_{T \times T}$, we have

$$D = \frac{1}{l^2}\begin{pmatrix} 0 & (z_1 - z_2)^{\mathrm{T}}(z_1 - z_2) & \cdots & (z_1 - z_T)^{\mathrm{T}}(z_1 - z_T) \\ (z_2 - z_1)^{\mathrm{T}}(z_2 - z_1) & 0 & \cdots & (z_2 - z_T)^{\mathrm{T}}(z_2 - z_T) \\ \vdots & \vdots & \ddots & \vdots \\ (z_T - z_1)^{\mathrm{T}}(z_T - z_1) & (z_T - z_2)^{\mathrm{T}}(z_T - z_2) & \cdots & 0 \end{pmatrix} = f^{-1}(K) = \left[f^{-1}(k_{i,j})\right]_{T \times T}, \tag{2}$$

where $f^{-1}$ maps K to D element-wise. We define $\tilde{z} = \frac{z - z_1}{l}$ with $\tilde{z}_1 = \mathbf{0}$. Now we have

$$d_{i,j} = \frac{1}{l^2}(z_i - z_j)^{\mathrm{T}}(z_i - z_j) = \frac{1}{l^2}\left[(z_i - z_1) - (z_j - z_1)\right]^{\mathrm{T}}\left[(z_i - z_1) - (z_j - z_1)\right] = (\tilde{z}_i - \tilde{z}_j)^{\mathrm{T}}(\tilde{z}_i - \tilde{z}_j) = \tilde{z}_i^{\mathrm{T}}\tilde{z}_i + \tilde{z}_j^{\mathrm{T}}\tilde{z}_j - 2\tilde{z}_i^{\mathrm{T}}\tilde{z}_j. \tag{3}$$

Since $\tilde{z}_1 = \mathbf{0}$, we have $d_{1,j} = \tilde{z}_1^{\mathrm{T}}\tilde{z}_1 + \tilde{z}_j^{\mathrm{T}}\tilde{z}_j - 2\tilde{z}_1^{\mathrm{T}}\tilde{z}_j \implies \tilde{z}_j^{\mathrm{T}}\tilde{z}_j = d_{1,j}$, $\forall j \in \{1, \dots, T\}$. Therefore, we arrive at an expression for $\tilde{z}_i^{\mathrm{T}}\tilde{z}_j$ as $\tilde{z}_i^{\mathrm{T}}\tilde{z}_j = \frac{1}{2}(d_{i,1} + d_{1,j} - d_{i,j})$. Note that $d_{1,i} = d_{i,1}$ by symmetry. Denoting $\tilde{Z} = [\tilde{z}_1, \tilde{z}_2, \cdots, \tilde{z}_T]^{\mathrm{T}} = [\mathbf{0}, \tilde{z}_2, \cdots, \tilde{z}_T]^{\mathrm{T}} \in \mathbb{R}^{T \times M}$, we can write the matrix form $\tilde{Z}\tilde{Z}^{\mathrm{T}} = \left(\tilde{z}_i^{\mathrm{T}}\tilde{z}_j\right)_{T \times T}$ as

$$\tilde{Z}\tilde{Z}^{\mathrm{T}} = \begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & d_{2,1} & \cdots & \frac{1}{2}(d_{2,1} + d_{1,T} - d_{2,T}) \\ 0 & \frac{1}{2}(d_{3,1} + d_{1,2} - d_{3,2}) & \cdots & \frac{1}{2}(d_{3,1} + d_{1,T} - d_{3,T}) \\ \vdots & \vdots & \ddots & \vdots \\ 0 & \frac{1}{2}(d_{T,1} + d_{1,2} - d_{T,2}) & \cdots & d_{T,1} \end{pmatrix} =: g(D) = g(f^{-1}(K)), \tag{4}$$

which is a rank-M symmetric positive semi-definite matrix given $M < T$. Here, $g$ is the function mapping D to $\tilde{Z}\tilde{Z}^{\mathrm{T}}$.
Then, Eq. 4 has the unique "reduced" eigen-decomposition

$$g(f^{-1}(\mathbf{K}))=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\mathrm{T}}=\left(\sqrt{\lambda_{1}}\mathbf{U}_{:,1},\ldots,\sqrt{\lambda_{M}}\mathbf{U}_{:,M}\right)\left(\sqrt{\lambda_{1}}\mathbf{U}_{:,1},\ldots,\sqrt{\lambda_{M}}\mathbf{U}_{:,M}\right)^{\mathrm{T}}=:\tilde{\mathbf{U}}\tilde{\mathbf{U}}^{\mathrm{T}},\tag{5}$$

where $\mathbf{U}_{:,m} = [0, u_{2,m}, u_{3,m}, \ldots, u_{T,m}]^{\mathrm{T}} \in \mathbb{R}^{T}$ is the m-th column of U ∈ R^{T×M} and Λ = diag(λ_1, λ_2, . . . , λ_M) with λ_1 > λ_2 > · · · > λ_M > λ_{M+1} = · · · = λ_T = 0. Note that the unique "reduced" singular value decomposition of Z̃ is

$$\tilde{\mathbf{Z}}=\mathbf{U}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{V}^{\mathrm{T}}=\tilde{\mathbf{U}}\mathbf{V}^{\mathrm{T}}\implies\mathbf{z}_{t}=l\mathbf{V}\tilde{\mathbf{U}}_{t,:}^{\mathrm{T}}+\mathbf{z}_{1},\quad\forall t\in\{1,\ldots,T\},\tag{6}$$

where z_1 represents the reference translation, the length-scale l is a scaling factor, and V is an orthogonal matrix responsible for the corresponding rotation and reflection. Substituting Eq. 6 into the kernel, we have

$$k(\mathbf{z}_{i},\mathbf{z}_{j})=\sigma^{2}\exp\left(-\frac{\|\mathbf{z}_{i}-\mathbf{z}_{j}\|^{2}}{2l^{2}}\right)=\sigma^{2}\exp\left(-\frac{\|l\mathbf{V}\tilde{\mathbf{U}}_{i,:}^{\mathrm{T}}-l\mathbf{V}\tilde{\mathbf{U}}_{j,:}^{\mathrm{T}}\|^{2}}{2l^{2}}\right)=\sigma^{2}\exp\left(-\frac{\|\tilde{\mathbf{U}}_{i,:}-\tilde{\mathbf{U}}_{j,:}\|^{2}}{2}\right)=k_{l=1}\left(\tilde{\mathbf{U}}_{i,:},\tilde{\mathbf{U}}_{j,:}\right).\tag{7}$$

Hence Ũ contains all of the low-dimensional information of Z, and we consider Ũ as an estimator of Z. Eq. 5 and Eq. 6 summarize the inverse relationship from a GP kernel covariance matrix to the latent variables. So far, we are able to find the exact estimate Ũ given the true GP covariance matrix K constructed from Z (Eq. 7).
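The exact-recovery argument above can be verified numerically: eigen-decomposing g(f^{-1}(K)) and keeping the M largest eigenpairs yields a Ũ whose unit-length-scale SE kernel reproduces K, as in Eq. 7. A small NumPy sketch with illustrative sizes (not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
T, M = 40, 3
sigma2, l = 1.0, 1.5

Z = rng.normal(size=(T, M))
sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
K = sigma2 * np.exp(-sq / (2 * l**2))

D = -2 * np.log(K / sigma2)            # D = f^{-1}(K), element-wise (Eq. 2)
G = 0.5 * (D[:, [0]] + D[[0], :] - D)  # g(D) with reference point z_1 (Eq. 4)

# Eq. 5: G is rank-M PSD, so the M largest eigenpairs give U-tilde exactly.
lam, U = np.linalg.eigh(G)
lam, U = lam[::-1][:M], U[:, ::-1][:, :M]
U_tilde = U * np.sqrt(lam)

# Eq. 7: the SE kernel with unit length-scale on U-tilde reproduces K.
sq_u = ((U_tilde[:, None, :] - U_tilde[None, :, :]) ** 2).sum(-1)
K_rec = sigma2 * np.exp(-sq_u / 2)
```

Because Ũ differs from the shifted latents only by an orthogonal V, their pairwise distances coincide and `K_rec` equals `K` up to floating-point error.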
In practice, we only have the sample covariance estimator S, and g(f^{-1}(S)) is guaranteed to be neither rank-M nor positive semi-definite. Therefore, we seek its optimal rank-M positive semi-definite approximation, i.e.,

$$\operatorname*{minimize}_{\tilde{\mathbf{U}}\in\mathbb{R}^{T\times M}}\left\|g(f^{-1}(\mathbf{S}))-\tilde{\mathbf{U}}\tilde{\mathbf{U}}^{\mathrm{T}}\right\|.\tag{8}$$

Dax et al. (2014) show that

$$\tilde{\mathbf{U}}=\left(\sqrt{\lambda_{1}}\mathbf{U}_{:,1},\ldots,\sqrt{\lambda_{M}}\mathbf{U}_{:,M}\right)\tag{9}$$

is the optimal solution for any unitarily invariant matrix norm ∥ · ∥, where λ_1, . . . , λ_M are the M largest positive eigenvalues of g(f^{-1}(S)) and U_{:,1}, . . . , U_{:,M} are the corresponding eigenvectors. The goodness of the estimated latent under the target loss function Eq. 8 can be quantified by the explained variance ratio $\sum_{t=1}^{M}\lambda_t^2 \,\big/ \sum_{t=1}^{T}\lambda_t^2$. We summarize the IKD algorithm in Alg. 1.

Algorithm 1 Inverse kernel decomposition
1: **function** ikd(X ∈ R^{T×N}, f)
2: &nbsp;&nbsp; S ← (1/(N−1)) (X − X̄1^T)(X − X̄1^T)^T ▷ S serves as an estimator of the covariance K
3: &nbsp;&nbsp; σ² ← (1/T) Σ_{i=1}^{T} s_{i,i} ▷ estimate σ² through a statistic of the diagonal of S
4: &nbsp;&nbsp; D̂ = (d̂_{i,j})_{T×T} ← f^{-1}(S) ▷ D̂ serves as an estimate of D (Eq. 2)
5: &nbsp;&nbsp; U, Λ ← eigen-decomposition of g(D̂) ▷ Eq. 5
6: &nbsp;&nbsp; Form the optimal latent solution Ũ using U and Λ ▷ Eq. 9
7: &nbsp;&nbsp; **return** Ũ
8: **end function**

## 2.3 IKD With General Stationary Kernels

Apart from the SE kernel, IKD also works for most commonly used stationary kernels, as long as the kernel function f is invertible (i.e., f is strictly monotonic over [0, ∞)) and we can find a unique non-negative solution for d = f^{-1}(k). We summarize these kernels in Tab. 1.
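Before turning to general kernels, the full pipeline of Alg. 1 can be sketched compactly in NumPy. The snippet below is a minimal illustration, not the authors' implementation: it clips the sample covariance into f's valid input range (a crude stand-in for the remedies discussed later in Sec. 2.4) and uses the reference-point choice of Sec. 2.5; the function names and test sizes are ours.

```python
import numpy as np

def se_inv(S, sigma2):
    # Inverse SE kernel map; clip S into f's valid input range (0, sigma^2].
    return -2.0 * np.log(np.clip(S / sigma2, 1e-3, 1.0))

def ikd(X, M, f_inv=se_inv):
    """A sketch of Alg. 1 for a stationary kernel with inverse map f_inv."""
    Xc = X - X.mean(axis=1, keepdims=True)
    S = Xc @ Xc.T / (X.shape[1] - 1)           # line 2: estimator of K
    sigma2 = np.mean(np.diag(S))               # line 3: estimate sigma^2
    D = f_inv(S, sigma2)                       # line 4: estimator of D (Eq. 2)
    r = int(np.argmin(D.max(axis=1)))          # reference r = argmin_i max_j d_ij
    G = 0.5 * (D[:, [r]] + D[[r], :] - D)      # g(D) with reference point r
    lam, U = np.linalg.eigh(G)                 # line 5: eigen-decomposition
    lam, U = lam[::-1][:M], U[:, ::-1][:, :M]  # keep the M largest eigenpairs
    return U * np.sqrt(np.maximum(lam, 0.0))   # line 6: rank-M solution (Eq. 9)

# Illustrative run on data drawn from the GPLVM generative model.
rng = np.random.default_rng(0)
T, N, M, l = 60, 20000, 2, 2.0
Z = rng.normal(size=(T, M))
sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * l**2))
X = np.linalg.cholesky(K + 1e-10 * np.eye(T)) @ rng.normal(size=(T, N))
U_tilde = ikd(X, M)
```

For large N, the returned `U_tilde` matches the true latent up to an affine transformation, as the derivation above predicts.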
Table 1: Stationary kernels that can be applied to IKD.

| kernel | f(d) | f^{-1}(k) |
| --- | --- | --- |
| squared exponential | $f(d) = \sigma^2\exp\left(-\frac{d}{2}\right)$ | $f^{-1}(k) = -2\ln\frac{k}{\sigma^2}$ |
| rational quadratic | $f(d) = \sigma^2\left(1+\frac{d}{2\alpha}\right)^{-\alpha}$ | $f^{-1}(k) = 2\alpha\left[\left(\frac{k}{\sigma^2}\right)^{-\frac{1}{\alpha}}-1\right]$ |
| γ-exponential | $f(d) = \sigma^2\exp\left(-d^{\frac{\gamma}{2}}\right)$ | $f^{-1}(k) = \left(-\ln\frac{k}{\sigma^2}\right)^{\frac{2}{\gamma}}$ |
| Matérn | $f(d) = \sigma^2\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\sqrt{2\nu}\sqrt{d}\right)^{\nu}K_{\nu}\left(\sqrt{2\nu}\sqrt{d}\right)$ | no closed form, but solvable with root-finding algorithms |

For the SE kernel, we can generalize to the ARD kernel $k(\mathbf{z}_i,\mathbf{z}_j) = \sigma^2\exp\left(-\frac{1}{2}\sum_{m=1}^{M}\frac{(z_{i,m}-z_{j,m})^2}{l_m^2}\right)$ and the Gaussian kernel $k(\mathbf{z}_i,\mathbf{z}_j) = \sigma^2\exp\left(-\frac{1}{2}(\mathbf{z}_i-\mathbf{z}_j)^{\mathrm{T}}\mathbf{L}^{-1}(\mathbf{z}_i-\mathbf{z}_j)\right)$, where an extra affine transformation $\mathbf{L}^{\frac{1}{2}}$ is needed rather than a constant scaling l. For the Matérn kernel parameterized by ν, K_ν(·) is the modified Bessel function of the second kind. Although it is complicated to obtain a closed form of f^{-1}(·) for the Matérn kernel, f^{-1}(·) always exists since f(·) is strictly monotonically decreasing over [0, ∞) for all ν > 0. Note that for the commonly used ν = p + 1/2, p ∈ N, it is easy to derive f′(·); e.g., when ν = 3/2, $f(d) = \sigma^2\left(1+\frac{\sqrt{3}d}{l}\right)\exp\left(-\frac{\sqrt{3}d}{l}\right)$ and $f'(d) = -\frac{3d\sigma^2}{l^2}\exp\left(-\frac{\sqrt{3}d}{l}\right)$. In such cases, higher-order root-finding algorithms (e.g., Newton's method) can be used to solve d = f^{-1}(k). The intuition behind IKD is to find a nonlinear mapping such that observations x_i, x_j that are strongly correlated in observational space are located close to each other in latent space. This is perfectly reflected by the shape of these stationary kernels: they decrease quickly near d = 0 and then flatten out (converging to 0) as d → ∞.
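In the Matérn case, a bracketed root finder is sufficient in practice since f is strictly decreasing. The following is a sketch using SciPy's `brentq` with the ν = 3/2 form given above; the function names are ours and σ² = l = 1 are illustrative defaults.

```python
import numpy as np
from scipy.optimize import brentq

def matern32(d, sigma2=1.0, l=1.0):
    """Matern kernel with nu = 3/2, written as a function of d (cf. Sec. 2.3)."""
    u = np.sqrt(3.0) * d / l
    return sigma2 * (1.0 + u) * np.exp(-u)

def matern32_inv(k, sigma2=1.0, l=1.0):
    """Solve d = f^{-1}(k) by bracketed root finding; f is strictly decreasing."""
    if k >= sigma2:
        return 0.0
    hi = 1.0
    while matern32(hi, sigma2, l) > k:  # grow the bracket until f(hi) <= k
        hi *= 2.0
    return brentq(lambda d: matern32(d, sigma2, l) - k, 0.0, hi)

d_true = 2.7
k = matern32(d_true)
d_rec = matern32_inv(k)  # recovers d_true to root-finder precision
```

`brentq` only needs a sign change on the bracket, which the doubling loop guarantees; Newton's method with the f′ given above would converge faster but needs a safeguarded starting point.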
Given that these stationary kernels have similar shapes, the latents estimated with different kernels are also similar (see the last paragraph of the real-world experiment, Sec. 3.2). Since these kernels are all invertible, we can derive the exact relationship between the latents Ũ^{(1)} and Ũ^{(2)} obtained from two kernels f_1 and f_2, respectively. Denote $\tilde{\mathbf{D}}^{(1)} = \left[(\tilde{\mathbf{U}}^{(1)}_{i,:}-\tilde{\mathbf{U}}^{(1)}_{j,:})^{\mathrm{T}}(\tilde{\mathbf{U}}^{(1)}_{i,:}-\tilde{\mathbf{U}}^{(1)}_{j,:})\right]_{T\times T}$ and similarly for D̃^{(2)}; then

$$f_{1}\left(\tilde{\mathbf{D}}^{(1)}\right)=\mathbf{S}=f_{2}\left(\tilde{\mathbf{D}}^{(2)}\right).\tag{10}$$

So the transformation from D̃^{(1)} to D̃^{(2)} is the diffeomorphism $f_2^{-1}\circ f_1$.

## 2.4 Error Analysis Of IKD

IKD performs an eigen-decomposition of g(f^{-1}(S)) (Eq. 5), which uses the sample covariance S as an empirical estimator of K. In practice, the sample covariance values s_{i,j} in S can be very noisy due to noise in the data and an insufficient observation dimensionality N. Non-positive and close-to-zero positive covariance values prevent us from calculating d̂_{i,j} = f^{-1}(s_{i,j}) accurately: non-positive s_{i,j} fall outside the input range of f^{-1}, i.e., (0, σ²], and for close-to-zero positive s_{i,j}, the error between the estimate d̂_{i,j} = f^{-1}(s_{i,j}) and the ground truth d_{i,j} = f^{-1}(k_{i,j}) can be large and sensitive to s_{i,j}. A sketch analysis of the error for the SE kernel follows from the Taylor expansion of f^{-1} at s_{i,j}:

$$d_{i,j}=f^{-1}(k_{i,j})=-2\ln\frac{s_{i,j}+(k_{i,j}-s_{i,j})}{\sigma^{2}}=-2\ln\frac{s_{i,j}}{\sigma^{2}}-2\,\frac{k_{i,j}-s_{i,j}}{s_{i,j}}+O\!\left((k_{i,j}-s_{i,j})^{2}\right)=f^{-1}(s_{i,j})+\frac{O(k_{i,j}-s_{i,j})}{s_{i,j}}=\hat{d}_{i,j}+\frac{O(k_{i,j}-s_{i,j})}{s_{i,j}}.\tag{11}$$

We define the estimation error as $|d_{i,j}-\hat{d}_{i,j}| = \frac{O(|k_{i,j}-s_{i,j}|)}{s_{i,j}}$. For large s_{i,j}, the error is small; but for small s_{i,j}, the error is very sensitive to the covariance error |k_{i,j} − s_{i,j}|. To resolve this issue, there are two solutions.

Blockwise solution.
We first discard bad s_{i,j} values by thresholding the sample covariance at a value s_0, leading to a thresholded covariance matrix S̃ = (s_{i,j} · 1[s_{i,j} > s_0])_{T×T}. Due to the zero values, S̃ no longer corresponds to a fully connected graph, so we cannot directly apply IKD to S̃ for latent estimation. Instead, we can use, for example, the Bron–Kerbosch algorithm (Bron & Kerbosch, 1973) to find maximal cliques in S̃. Each clique is a fully connected subgraph (block) of S̃, which can be decomposed using IKD. After obtaining the latent variables for each clique, we merge all of the estimated latent variables according to the shared points between every clique pair. As long as the number of shared points between two cliques is greater than M, we are able to find the unique optimal rigid transformation that aligns the two cliques correctly.

The clique-merging procedure works as follows. First, the Bron–Kerbosch algorithm provides a set of cliques {C_1, C_2, . . . , C_L} satisfying $\bigcup_{i=1}^{L}C_i = \{\mathbf{z}_t\}_{t=1}^{T}$. We start with the two cliques that share the maximum number of points, $\arg\max_{C_i,C_j}|C_i\cap C_j|$. Suppose, for example, that C_1 = {z_1, z_2, z_3, z_4} and C_2 = {z_2, z_3, z_4, z_5} (Fig. 1); their shared points are {z_2, z_3, z_4}. Since the latent space is R^M = R² and |C_1 ∩ C_2| = 3 ⩾ M + 1, we can find a unique reflection + rotation + translation via SVD that aligns {z_2, z_3, z_4} in C_2 with {z_2, z_3, z_4} in C_1. If there are fewer than M + 1 shared points between the two cliques, there is more than one way to align them; in general, in R^M, the uniqueness of this optimal alignment transformation requires at least M + 1 shared points. Once two cliques are merged, we remove C_i and C_j from the set and place the merged clique C_i ∪ C_j back. The final latent estimate is obtained by repeating this merging procedure until the largest clique contains all points.

Figure 1: A clique pair example that shares three points {z_2, z_3, z_4}.
![5_image_0.png](5_image_0.png)

Although the worst-case complexity of the Bron–Kerbosch algorithm for finding maximal cliques is O(3^{T/3}), we can terminate the algorithm as soon as the union of the cliques found so far covers the whole dataset. In other words, we need at most T maximal cliques, so the clique-finding time is bounded by O(T²). Solving the rank-M eigen-decomposition for up to T cliques then takes O(T · MT²), so the complexity of the entire procedure is bounded by O(MT³).

Geodesic solution. Since small values s_{i,j} < s_0 have a significantly bad effect on the eigen-decomposition, we can replace every s_{i,j} smaller than s_0 with the geodesic covariance $s_{i,j}\leftarrow\max_{(t_1,t_2,\ldots,t')} s_{i,t_1}\cdot s_{t_1,t_2}\cdots s_{t',j}$, where i → t_1 → · · · → t′ → j is the geodesic path from i to j found by Dijkstra's algorithm (Dijkstra et al., 1959). The complexity of this approach is bounded by that of Dijkstra's algorithm, which is O(T² log T). Since the complexity of the geodesic approach is smaller than that of the blockwise approach for larger T (greater than 1000 in the following experiments), we choose the geodesic solution over the blockwise one. A comparison of the two solutions is given in the experiment section.

## 2.5 Reference Point Selection

In Eq. 3, we chose z_1 as the reference point for calculating d_{i,j}, but the reference point can be any of the points in $\{\mathbf{z}_t\}_{t=1}^{T}$. If we choose z_r, for an arbitrary index r ∈ {1*, ..., T*}, as the reference point, then, analogously to Eq. 4, the r-th row and the r-th column of g(D) are zeros, and the remaining elements are $g(\mathbf{D})_{i,j} = \frac{1}{2}(d_{i,r}+d_{r,j}-d_{i,j}) \neq 0$, ∀i ≠ *r, j* ≠ r. Note that every g(D)_{i,j} includes an element from $\{d_{r,i}\}_{i=1}^{T}$; thus the quality of $\{d_{r,i}\}_{i=1}^{T}$ is vital for latent estimation. Based on the analysis in Eq.
11, we know that in practice we want to choose a good reference index r such that $\{\hat{d}_{r,i}\}_{i=1}^{T}$ are relatively small (i.e., $\{s_{r,i}\}_{i=1}^{T}$ are large, which means the r-th data point is highly correlated with the rest of the data points). Note that multidimensional scaling (MDS) (Kruskal & Wish, 1978) solves a similar eigen-decomposition problem, i.e., finding coordinates Z from the distance matrix D. It employs a centering idea, which is equivalent to using the average of all latent variables $\frac{1}{T}\sum_{t=1}^{T}\mathbf{z}_t$ as the reference point. We instead choose the best single reference point, since we want to reduce the estimation error in Eq. 11 as much as possible so that the objective function in Eq. 8 can be minimized as much as possible. Mathematically, we choose r such that $\|\hat{\mathbf{d}}_r\|_\infty \leqslant \|\hat{\mathbf{d}}_i\|_\infty$, ∀i ∈ {1, 2*, ..., T*}, where $\hat{\mathbf{d}}_i$ is the i-th row of $\hat{\mathbf{D}} = (\hat{d}_{i,j})_{T\times T}$, i.e., $r = \arg\min_i \|\hat{\mathbf{d}}_i\|_\infty = \arg\min_i\left\{\max_j \hat{d}_{i,j}\right\}$.

## 3 Experiments

In this section, we evaluate IKD, with the most commonly used squared exponential as the default kernel, on three synthetic datasets, where we know the true latent representations, and on four real-world datasets. Baseline methods for comparison:

- PCA: Principal component analysis, one of the most widely used linear dimensionality reduction methods.
- **KPCA**: Kernel PCA. We try different kernels (polynomial, SE, sigmoid, and cosine) and present the best one.
- LE (Belkin & Niyogi, 2003): Laplacian eigenmaps, which applies spectral decomposition to the affinity matrix's graph Laplacian. We use the nearest-neighbors algorithm to build the affinity matrix.
- **Isomap** (Tenenbaum et al., 2000): Isometric mapping, a variant of multidimensional scaling (MDS) that incorporates geodesic distances. We use sklearn's default setting—five nearest neighbors—to build the distance matrix.
- **t-SNE** (Van der Maaten & Hinton, 2008b): t-distributed stochastic neighbor embedding. We use sklearn's default hyperparameter setting to fit the model.
- **UMAP** (McInnes et al., 2020): Uniform manifold approximation and projection for dimension reduction. We use the official UMAP package (McInnes et al., 2018) and its default hyperparameter setting to fit the model.
- **GPLVM** (Lawrence, 2003; 2005): The traditional optimization-based Gaussian process latent variable model solver. We use the GPLVM module in the GPy package (GPy, since 2012) and its default hyperparameter setting to fit the model. As with IKD, we use the most commonly used squared exponential as the default kernel unless otherwise stated.
- VAE (Kingma & Welling, 2013): Variational autoencoder.

The first four are eigen-decomposition-based methods; the last four are optimization-based methods.

## 3.1 Synthetic Data

Experimental setup. We first test all methods on three synthetic datasets. All of the following experiments are based on 50 independent repeats (trials). For each trial, we generate the true latent variables from

$$Z_{m,1:T}\sim{\mathcal{N}}\left(\mathbf{0},\left(6\mathrm{e}^{-{\frac{|i-j|}{5}}}\right)_{T\times T}\right),\quad\forall m\in\{1,...,M\},\tag{12}$$

where M is the latent dimensionality, varying across the datasets. We then generate noiseless data from GP, sinusoidal, and Gaussian bump mapping functions, respectively. Afterward, i.i.d. Gaussian noise is added to form the final noisy observations X.

Evaluation. We evaluate performance using the R² metric. When computing R² values, we first align the estimated latent with the ground truth through an affine transformation (i.e., a linear decoder), then compute R² for each latent dimension, and finally take the average across all latent dimensions m ∈ {1, 2*, ..., M*}.
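The alignment-then-R² protocol just described can be sketched as a small helper: fit a least-squares affine decoder from the estimated latent to the ground truth, compute R² per dimension, and average. A NumPy sketch (the helper name and test values are ours):

```python
import numpy as np

def aligned_r2(Z_true, Z_est):
    """Mean per-dimension R^2 after a least-squares affine alignment (linear decoder)."""
    T = Z_true.shape[0]
    A = np.hstack([Z_est, np.ones((T, 1))])         # affine design matrix
    W, *_ = np.linalg.lstsq(A, Z_true, rcond=None)  # fit the decoder
    Z_hat = A @ W
    ss_res = ((Z_true - Z_hat) ** 2).sum(axis=0)
    ss_tot = ((Z_true - Z_true.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))    # average across latent dims

rng = np.random.default_rng(4)
Z = rng.normal(size=(200, 3))
# A rotated, shifted, lightly noised copy should score close to 1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Z_est = Z @ Q + 0.5 + 0.01 * rng.normal(size=Z.shape)
```

Because the decoder absorbs any rotation, reflection, scaling, and translation, the score is invariant to exactly the ambiguities left open by Eq. 6.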
The reasons for choosing the affine transformation are: (1) a rigid transformation could lead to very negative R² values for the non-IKD methods (e.g., PCA), not shown here; and (2) the affine transformation is the one commonly used for latent estimation and alignment.

Dataset 1: GP mapping function. We start our experiments with the GP mapping function. In each trial, we generate a 3D latent Z ∈ R^{1000×3} (i.e., M = 3) according to Eq. 12, and generate X ∈ R^{1000×N} according to Eq. 1 with σ² = 1 and l = 3. Then Gaussian noise is added: x_{t,n} ← x_{t,n} + ε_{t,n}, ∀(*t, n*) ∈ {1*, ...,* 1000} × {1*, ..., N*}, where ε_{t,n} ∼ N(0, 0.05²). Note that this generating process is consistent with the generating process of GPLVM; thus it is well aligned with the model assumptions of IKD and serves as a data-matching example.

![7_image_0.png](7_image_0.png)

Figure 2: R² values (a,d,g) and running time (b,e,h) with respect to N for the GP, sinusoidal, and Gaussian bump mapping functions. (c,f,i): Latent recovery visualization of an example trial with N = 100, for the first 20 points, for the GP, sinusoidal, and Gaussian bump mapping functions.

Fig. 2(a) shows that for N = 10 and N = 20, Isomap is the best; but for N > 50, IKD becomes the best and its R² converges to 1 as N increases. The latent recovery visualization of an example trial with N = 100 for the first 20 points (Fig. 2(c)) shows that Isomap and IKD match the true latent the best.

Dataset 2: Sinusoidal mapping function. In each trial, we generate a 1D latent (i.e., M = 1) according to Eq. 12, and generate the observations through a sinusoidal mapping with frequencies Ω = (ω_{n,m})_{N×M}, where ω_{n,m} ∼ U(−1, 1), phases φ = [φ_1, . . . , φ_N]^T, where φ_n ∼ U(−π, π), and noise ε_{t,n} ∼ N(0, 0.1²). The result in Fig. 2(d) indicates that even though the observed data are not generated from a GP (data-mismatching), IKD is still able to discover the latent structure consistently better than the other methods, except for N = 10 and N = 20, where Isomap is the best.
When the observation dimensionality N > 50, the R² value of IKD approaches 1, while Isomap, t-SNE, and UMAP all show decreasing performance due to the curse of dimensionality. The latent recovery visualization of an example trial with N = 100 for the first 20 points (Fig. 2(f)) shows that Isomap and IKD match the true latent the best.

Dataset 3: Gaussian bump mapping function. In each trial, we generate a 2D latent (i.e., M = 2) according to Eq. 12, and generate X ∈ R^{1000×N} as

$$x_{t,n}=20\exp\left(-\|\mathbf{z}_{t}-\mathbf{c}_{n}\|_{2}^{2}\right)+\varepsilon_{t,n},\quad\forall(t,n)\in\{1,...,1000\}\times\{1,...,N\},\tag{13}$$

where c_n ∈ R² is the center of the n-th Gaussian bump, randomly selected from 10,000 grid points uniformly distributed in [−6, 6]², with noise ε_{t,n} ∼ N(0, 0.05²). This is another data-mismatching example. Fig. 2(g) shows that in this case, IKD is the best among all methods for all observation dimensionalities N. Fig. 2(i) also shows that only IKD matches the true latent accurately.

The running time of IKD on the three synthetic datasets above is on par with the other eigen-decomposition-based methods (Fig. 2(b,e,h)), and much lower than that of the optimization-based methods. In particular, GPLVM always takes a very long time; t-SNE requires more running time on datasets whose latent dimensionality is greater than 2; and VAE is more time-consuming when the observation dimensionality is large.

![8_image_0.png](8_image_0.png)

Figure 3: Left: ground truth Z of the latent. Right: kernel covariance matrix K created by the corresponding latent for the four different kernels, with marginal variance σ = 1 and length-scale l = 0.5; α = 1 in the rational quadratic kernel, γ = 1 in the γ-exponential kernel, and ν = 1.5 in the Matérn kernel. The difficulty levels of the three datasets range from hard to easy, top to bottom. For the easy dataset (bottom), points are selected from the grids on the surface and randomly permuted.
Note that we only vary the dimensionality N, not the number of observations T. When increasing T, the running time of all methods increases polynomially. For optimization-based methods, stochastic optimization can be employed to scale to large datasets. For a fair comparison, extra scaling techniques would have to be incorporated to deal with large-scale eigen-decomposition, which falls outside the scope of this paper. In general, IKD performs the best for all three mapping functions, especially when the observation dimensionality N is large. It is also very effective at capturing details in addition to recovering the overall latent structure correctly. Like the other eigen-decomposition-based methods, IKD takes less time to solve the presented problems than the optimization-based methods.

## 3.1.1 Varying Dimensionality, Kernels, And Latent Structures

We test the effectiveness of IKD on observation data generated from the GP mapping function described in Eq. 1, for different observation dimensionalities N ∈ {100, 200, 500, 1000, 2000, 5000, 10000}, different generating kernels (Tab. 1), and three different latent structures (hard, medium, and easy, shown in Fig. 3 according to their difficulty levels). We only compare IKD with PCA and GPLVM here: PCA is the most commonly used linear method and serves as a baseline, while GPLVM is the traditional optimization-based solver and IKD is our newly proposed eigen-decomposition-based solver for data generated from the GP mapping function. From Fig. 4(a), we can tell that IKD is always the best for the most commonly used SE kernel. Compared with PCA and GPLVM, IKD is highly effective, especially when N is large, where the R² values of IKD are very close to 1. Fig. 4(b) shows the latent recovery visualization from one example trial of the medium dataset with N = 1000. The kernel of the generating model is SE.
We can tell that IKD matches the ground truth the best. For the third dimension in particular, only the latent estimated by IKD correctly reflects the linearly increasing trend of the ground truth. In terms of complexity, GPLVM is time-consuming compared with IKD (Fig. 4(c)). These results indicate that we can use IKD to recover the latent for data generated from the GP mapping function faster and more accurately.

![9_image_0.png](9_image_0.png)

Figure 4: (a): R² values, with respect to N, of PCA, GPLVM, and IKD for different datasets and kernels. (b): Latent recovery visualization of an example trial of the medium dataset with the SE kernel and N = 1000. (c): Average running time in seconds (across different kernels, different datasets, and 50 independent trials) of the three methods w.r.t. the observation dimensionality N. We take averages across different kernels and datasets because all of them share similar running time results.

## 3.1.2 Varying The Noise Level

To understand the performance of IKD solving GPLVM under different levels of noise, we use the medium dataset (Fig. 3) and the squared exponential kernel as the generative GPLVM to simulate the noiseless observations X by Eq. 1. Then, noise at different levels is added to the observations, i.e., x_{t,n} ← x_{t,n} + ε_{t,n}, where ε_{t,n} ∼ N(0, sd²), for sd ∈ {0, 0.1, . . . , 0.9, 1}. The standard deviation "sd" of the Gaussian noise represents the noise level. Fig. 5 shows that IKD achieves relatively good estimates on datasets with low-level noise. The R² of IKD gradually decreases as the noise level increases, but it remains better than that of GPLVM up to a noise level as high as sd = 1. The traditional optimization-based GPLVM solver explicitly considers the noise term in its algorithm, and its performance even improves slightly as the noise increases. Larger noise also makes the GPLVM solver run faster.
![10_image_0.png](10_image_0.png)

Figure 5: (a): R² performance of PCA, GPLVM, and IKD on datasets with different noise levels. (b): The corresponding running time.

## 3.1.3 Comparison Of The Blockwise And The Geodesic Solutions

To understand the differences between the blockwise and geodesic solutions, we apply blockwise and geodesic IKD to the medium dataset (Fig. 3) and visualize the results. There are two sets of observations: one generated with the SE kernel and the other with the rational quadratic kernel. We use only the SE kernel to solve both, constituting a model-match case and a model-mismatch case.

![10_image_1.png](10_image_1.png)

Figure 6: The blockwise (left) and geodesic (right) solutions in a model-match case (top) and a model-mismatch case (bottom). The blue curve is the true latent, and the orange curves are the estimated latents.

In the model-mismatch case, although the true data-generating model is not a GP with the SE kernel, we still use a simple decoder (i.e., an affine transformation) to align the estimated latent with the true latent when computing the R² metric. In the model-match case, the blockwise approach solves the GPLVM, but the geodesic approach does not. Therefore, in the model-match case, and when the number of data points is not large, the Bron–Kerbosch algorithm finishes quickly and the blockwise solution converges to the true latent up to an affine transformation (the model-match case in Fig. 6). In the model-mismatch case, the clique-merging procedure is vulnerable to numerical errors, and small errors can accumulate during the procedure, especially when the number of cliques is large. Moreover, if the cliques are numerous but the points are scattered, too many cliques have to be found, and the running time of the Bron–Kerbosch algorithm becomes unacceptable.
Moreover, in practice we do not know whether the true data-generating model is a GP with the SE kernel (and it is very likely that it is not a GP at all, just as in PCA we know the true data-generating model is certainly not a simple linear model). In such cases, we treat all data points as a whole and compute their empirical correlation through the geodesic shortest path; hence using the geodesic solution is more intuitive, globally stable, and efficient (the model-mismatch case in Fig. 6).

## 3.2 Real-World Data

Dataset. We compare IKD against the alternatives on four real-world datasets:

- Single-cell qPCR (PCR) (Guo et al., 2010): Normalized measurements of 48 genes of single cells at 10 different stages. There are 437 data points in total, resulting in X ∈ R^{437×48}.
- Handwritten digits (digits) (Dua & Graff, 2017): 1797 grayscale images of handwritten digits. Each is an 8 × 8 image, resulting in X ∈ R^{1797×64}.
- COIL-20 (Nene et al., 1996): 1440 grayscale photos. For each of the 20 objects, 72 photos were taken from different angles. Each is a 128 × 128 image, resulting in X ∈ R^{1440×16384}.
- Fashion MNIST (F-MNIST) (Xiao et al., 2017): 70000 grayscale images of 10 fashion items (clothing, bags, etc.). We use a subset of it, resulting in X ∈ R^{3000×784}.

Evaluation. Since there is no true latent to compare against, we first estimate the latent in a {2, 3, 5, 10}-dimensional latent space and then use the k-nearest-neighbor (k-NN) classifier to evaluate the performance of each dimensionality reduction method. Specifically, we apply 5-fold cross-validated k-NN (k ∈ {5, 10, 20}) to the estimated {2, 3, 5, 10}-dimensional latent to evaluate the performance of each method on each dataset. The k-NN classification results of the different methods for different latent dimensionalities M, different datasets, and different choices of k are shown in Fig. 7(a).
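The k-NN evaluation protocol can be sketched with scikit-learn. The snippet below is an illustration only: it runs the protocol on the same 8 × 8 handwritten-digits data (shipped with scikit-learn) and uses PCA as a stand-in for any of the latent estimators compared here.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# 1797 x 64 observations of 8x8 digit images, with class labels.
X, y = load_digits(return_X_y=True)

# Stand-in latent estimator; any method's M-dimensional latent goes here.
Z_est = PCA(n_components=10).fit_transform(X)

# 5-fold cross-validated k-NN accuracy on the estimated latent, with k = 5.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), Z_est, y, cv=5)
acc = scores.mean()
```

Sweeping `n_components` over {2, 3, 5, 10} and `n_neighbors` over {5, 10, 20} reproduces the grid of settings evaluated in Fig. 7(a).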
Performances of the different methods on the different datasets evaluated by the silhouette score are presented in Fig. 7(b).

Results. Comparing IKD with the other eigen-decomposition-based methods (PCA, KPCA, LE, Isomap), we conclude that IKD is almost always the best on all four datasets, except that for M ∈ {3, 5, 10} on the digits dataset, Isomap is better than IKD. When comparing IKD with GPLVM, we find that the performance of GPLVM on the PCR, digits, and F-MNIST datasets is slightly better than that of IKD, while GPLVM takes far more running time; conversely, IKD is significantly better than GPLVM on the COIL-20 dataset. VAE only performs well on the most complicated dataset, F-MNIST, and is much worse than IKD on the other three datasets. Although IKD is generally worse than the remaining two optimization-based methods (t-SNE and UMAP), IKD performs the best on the COIL-20 dataset. The reason is that the observation dimensionality of COIL-20 is very high (N = 16384), and IKD is very effective for high-dimensional data, as shown in the synthetic results (Fig. 2(a)). 2D visualizations of the four datasets are shown in Figs. 8, 9, 10, and 11, respectively. Qualitatively, we can see that IKD consistently finds more separated clusters than all other eigen-decomposition-based methods and two of the optimization-based methods (GPLVM and VAE) across all four datasets. Therefore, even though IKD is an eigen-decomposition-based method, its performance is significantly better than that of the other eigen-decomposition methods on all four real-world datasets, sometimes as good as the best optimization-based method. In terms of running time (Fig. 7(c)), IKD is on par with Isomap, and these eigen-decomposition-based methods are significantly faster than the four optimization-based methods. Note that if the desired latent dimensionality is M > 2, the running time of t-SNE is barely acceptable.
For the high-dimensional COIL-20 dataset, the running time values of VAE and GPLVM are extremely high, exceeding the upper limit of the corresponding axes.

![12_image_0.png](12_image_0.png)

Figure 7: (a) k-NN 5-fold cross-validation, (b) silhouette score, and (c) running time, for different methods, different latent dimensionalities M, different datasets, and different choices of k.

![12_image_1.png](12_image_1.png)

Figure 8: Visualization of the dimensionality reduction results of different methods on the PCR dataset.

![13_image_0.png](13_image_0.png)

Figure 9: Visualization of the dimensionality reduction results of different methods on the digits dataset.

![13_image_1.png](13_image_1.png)

Figure 10: Visualization of the dimensionality reduction results of different methods on the COIL-20 dataset.

![13_image_2.png](13_image_2.png)

Figure 11: Visualization of the dimensionality reduction results of different methods on the F-MNIST dataset.

Dimensionality reduction results of IKD with different kernels. As derived in Eq. 10, choosing different kernels does not affect the dimensionality reduction results much in terms of latent structure similarity, since different kernels have similar shapes. In a classification task especially, although the exact relationship between the latent data points estimated under different kernels is not the same, points belonging to the same class consistently locate together in the latent (low-dimensional) space no matter which kernel is used. Fig. 12 shows that the dimensionality reduction results under different kernels are nearly the same (up to reflection, rotation, and translation). Tab. 2 also shows similar 5-fold cross-validated k-NN classification accuracies under different kernels.

Varying the number of points T. Although a theoretical analysis of the time complexity of IKD w.r.t. the number of data points T has been provided in Sec. 2.4 and discussed in Sec.
3.1, we are still curious about a comparison of the different methods on datasets with varying numbers of data points T. Therefore, we apply the different methods to the F-MNIST dataset with T ∈ {1000, 2000, 3000, 4000} data points.

![14_image_0.png](14_image_0.png)

Figure 12: Dimensionality reduction results of IKD with different kernels on the digits dataset with latent dimensionality M = 2; α = 1 in the rational quadratic kernel, and γ = 1 in the γ-exponential kernel.

Table 2: 5-fold cross-validated k-NN classification accuracies under different kernels and latent dimensionalities M ∈ {2, 3, 5, 10} for the digits dataset.

| k | kernel | M = 2 | M = 3 | M = 5 | M = 10 |
| --- | --- | --- | --- | --- | --- |
| 5 | squared exponential | 0.875899 | 0.85085 | 0.946049 | 0.944937 |
| 5 | rational quadratic | 0.841382 | 0.821323 | 0.931574 | 0.935474 |
| 5 | γ-exponential | 0.837478 | 0.806288 | 0.930458 | 0.933807 |
| 10 | squared exponential | 0.872006 | 0.844732 | 0.936592 | 0.937696 |
| 10 | rational quadratic | 0.857527 | 0.825235 | 0.92212 | 0.943258 |
| 10 | γ-exponential | 0.854737 | 0.81242 | 0.919336 | 0.93992 |
| 20 | squared exponential | 0.871453 | 0.843067 | 0.928804 | 0.932683 |
| 20 | rational quadratic | 0.857521 | 0.822467 | 0.906541 | 0.929341 |
| 20 | γ-exponential | 0.856408 | 0.817457 | 0.908767 | 0.928231 |

The k-NN accuracies and silhouette scores in Fig. 13(a) and (b) show that IKD is consistently the best among all eigen-decomposition-based methods, except for the silhouette score at T = 4000. In terms of runtime, IKD is faster than the optimization-based methods. However, comparing the growth rate of IKD with those of UMAP and VAE, we can see from the growth rates shown in Fig.
13(c) that, due to (1) the worse time complexity of IKD and other eigen-decomposition-based methods and (2) the good scalability of optimization-based methods to large-scale datasets, the impact of the number of data points T on IKD and the other eigen-decomposition-based methods grows as T increases.

## 4 Discussions

In summary, IKD is an eigen-decomposition-based method with a short running time that obtains better dimensionality reduction results than other eigen-decomposition-based methods. When facing high-dimensional observation data, IKD can perform significantly better than all other methods in a very short time. Note that although eigen-decomposition-based methods perform somewhat worse than optimization-based methods, their fast running time makes them good initializers for sophisticated nonlinear optimization problems, mitigating the numerical instability and multi-modality issues commonly observed in methods such as GPLVM and VAE.

![15_image_0.png](15_image_0.png)

Figure 13: (a) average k-NN 5-fold cross-validation, (b) silhouette score, and (c) running time, by different methods, on the F-MNIST dataset including different numbers of data points T.

## References

Mayur R Bakrania, I Jonathan Rae, Andrew P Walsh, Daniel Verscharen, and Andy W Smith. Using dimensionality reduction and clustering techniques to classify space plasma regimes. *Frontiers in Astronomy and Space Sciences*, pp. 80, 2020.

Mikhail Belkin and Partha Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. *Neural Computation*, 15(6):1373–1396, June 2003. ISSN 0899-7667. doi: 10.1162/089976603321780317.

Ingwer Borg and Patrick JF Groenen. *Modern multidimensional scaling: Theory and applications*. Springer Science & Business Media, 2005.

Coen Bron and Joep Kerbosch. Algorithm 457: finding all cliques of an undirected graph. *Communications of the ACM*, 16(9):575–577, September 1973.
ISSN 0001-0782, 1557-7317. doi: 10.1145/362342.362367. URL https://dl.acm.org/doi/10.1145/362342.362367.

Thang D Bui and Richard E Turner. Stochastic variational inference for Gaussian process latent variable models using back constraints. In *Black Box Learning and Inference NIPS workshop*, 2015.

Achiya Dax et al. Low-rank positive approximants of symmetric matrices. *Advances in Linear Algebra & Matrix Theory*, 4(03):172, 2014.

Edsger W Dijkstra et al. A note on two problems in connexion with graphs. *Numerische Mathematik*, 1(1):269–271, 1959.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.

GPy. GPy: A Gaussian process framework in Python. http://github.com/SheffieldML/GPy, since 2012.

Guoji Guo, Mikael Huss, Guo Qing Tong, Chaoyang Wang, Li Li Sun, Neil D Clarke, and Paul Robson. Resolution of cell fate decisions revealed by single-cell gene expression analysis from zygote to blastocyst. *Developmental Cell*, 18(4):675–685, 2010.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013.

Mark A. Kramer. Nonlinear principal component analysis using autoassociative neural networks. *AIChE Journal*, 37(2):233–243, 1991. ISSN 1547-5905. doi: 10.1002/aic.690370209. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/aic.690370209.

Joseph B Kruskal and Myron Wish. *Multidimensional scaling*. Number 11. Sage, 1978.

Neil Lawrence. Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data. In *Advances in Neural Information Processing Systems*, volume 16. MIT Press, 2003. URL https://proceedings.neurips.cc/paper/2003/hash/9657c1fffd38824e5ab0472e022e577e-Abstract.html.

Neil Lawrence. Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models. *Journal of Machine Learning Research*, 6(11), 2005.
Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. Umap: Uniform manifold approximation and projection. *The Journal of Open Source Software*, 3(29):861, 2018. Leland McInnes, John Healy, and James Melville. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. *arXiv:1802.03426 [cs, stat]*, September 2020. URL http://arxiv.org/abs/ 1802.03426. arXiv: 1802.03426. Sameer A Nene, Shree K Nayar, Hiroshi Murase, et al. Columbia object image library (coil-100). 1996. Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Kernel principal component analysis. In Wulfram Gerstner, Alain Germond, Martin Hasler, and Jean-Daniel Nicoud (eds.), *Artificial Neural Networks - ICANN'97*, Lecture Notes in Computer Science, pp. 583–588, Berlin, Heidelberg, 1997. Springer. ISBN 978-3-540-69620-9. doi: 10.1007/BFb0020217. Ehsan Sheybani and Giti Javidi. Dimensionality reduction and noise removal in wireless sensor network datasets. In *2009 Second International Conference on Computer and Electrical Engineering*, volume 2, pp. 674–677. IEEE, 2009. Joshua B Tenenbaum, Vin de Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. *science*, 290(5500):2319–2323, 2000. R. Urtasun, D.J. Fleet, and P. Fua. 3D People Tracking with Gaussian Process Dynamical Models. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 1, pp. 238–245, June 2006. doi: 10.1109/CVPR.2006.15. ISSN: 1063-6919. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of machine learning* research, 9(11), 2008a. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. *Journal of machine learning* research, 9(11), 2008b. Jack Wang, Aaron Hertzmann, and David J Fleet. Gaussian Process Dynamical Models. In *Advances* in Neural Information Processing Systems, volume 18. MIT Press, 2005. URL https://proceedings. 
neurips.cc/paper/2005/hash/ccd45007df44dd0f12098f486e7e8a0f-Abstract.html. Jack M. Wang, David J. Fleet, and Aaron Hertzmann. Gaussian Process Dynamical Models for Human Motion. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 30(2):283–298, 2008. ISSN 1939-3539. doi: 10.1109/TPAMI.2007.1167. Conference Name: IEEE Transactions on Pattern Analysis and Machine Intelligence. Anqi Wu, Nicholas A. Roy, Stephen Keeley, and Jonathan W Pillow. Gaussian process based nonlinear latent structure discovery in multivariate spike train data. In *Advances in Neural Information Processing* Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/ 2017/hash/b3b4d2dbedc99fe843fd3dedb02f086f-Abstract.html. Anqi Wu, Stan Pashkovski, Sandeep R Datta, and Jonathan W Pillow. Learning a latent manifold of odor representations from neural responses in piriform cortex. *Advances in Neural Information Processing* Systems, 31, 2018. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.
Review 1: Summary: This article presents a dimension reduction method based on estimating latent variables from a kernel derived from observations. The core idea is to view the empirical covariance matrix as an estimate of a kernel matrix between latent variables (under a Gaussian assumption) and to estimate these latent variables via an eigenvalue decomposition. The original idea of considering the intermediate step of a Gaussian process for dimension reduction is interesting and seems to work reasonably well in practice.

Strengths and Weaknesses:

Strengths:
- The method is simple and easy to implement
- It has interesting connections with spectral methods
- It works reasonably well in practice

Weaknesses:
- Connections to prior standard methods
- Experiments do not support the different claims

In my opinion, there are significant connections to standard methods that are missing, especially regarding the proposed method and MDS, which do not seem to be sufficiently discussed (only in 2.5). In fact, the proposed method seems to do almost exactly what MDS would do, with a minor modification. For MDS, one observes measurements of 'distances' (in a broad sense) $\delta_{ij}$ and seeks latent positions whose squared distances (EDM) $d_{ij} = |x_i - x_j|^2$ satisfy $\delta_{ij} \approx d_{ij}$. To reconstruct the points from the set of distances, a very classical approach is based on the eigenvector decomposition of the covariance matrix (see e.g., [Section B, 1] or [3]). In fact, the approach proposed here is exactly the same but with a special choice of $\delta_{ij}$ that comes from a similarity kernel (mapped to a notion of ''distance'' with the $f^{-1}$). These important references are almost not discussed, as Part 2.2 is presented as an original contribution, whereas it is a small modification of very classical algorithms (whose original ideas trace back to Gower in 1966 [2]).
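The classical eigendecomposition-based reconstruction referred to above can be sketched as follows (a generic illustration of classical MDS with synthetic data, not the authors' IKD code):

```python
import numpy as np

def classical_mds(D2, m=2):
    """Classical MDS: recover latent coordinates from a squared-distance matrix.

    D2 : (T, T) matrix of squared pairwise distances delta_ij
    m  : target latent dimensionality
    """
    T = D2.shape[0]
    J = np.eye(T) - np.ones((T, T)) / T      # centering matrix
    B = -0.5 * J @ D2 @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:m]         # keep the top-m components
    L = np.clip(vals[idx], 0.0, None)        # guard against tiny negative eigenvalues
    return vecs[:, idx] * np.sqrt(L)         # coordinates, up to rotation/reflection

# Recover 2-D points from their pairwise distances (up to a rigid transform).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
Z = classical_mds(D2, m=2)
D2_rec = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
print(np.allclose(D2, D2_rec, atol=1e-6))  # pairwise distances are preserved
```

For exact Euclidean squared distances this recovers the configuration up to rotation, reflection, and translation; IKD, as described, would feed in distances derived from the inverted kernel instead.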
It seems to me that this method is actually doing a simple MDS but in the embedding space defined by the kernel. To make these connections clearer, I think it is important to derive the loss that is minimized by the algorithm. Another point is that there is no guarantee that the presented approach actually leads to a good estimator of the true kernel $K$ compared to classical maximum likelihood estimators (as in Gaussian processes). It would be really interesting to prove that the proposed estimator is somehow consistent and is able to estimate the true latent positions. Moreover, a recurring claim in this article is the algorithm's speed, yet there is no analysis of algorithmic complexity. It doesn't seem that a solution based on eigenvalue decomposition is especially efficient, as it is of the order of $O(T^3)$ in computation. Concerning this point, for displaying runtimes, a logarithmic scale on the y-axis would provide a better idea of the differences between methods because, in Figure 2, they all appear roughly identical (except GPLVM). Additionally, for a fair comparison and to claim that "IKD is able to provide competitive performance against some SOTA optimization-based methods but at a much faster running speed" it would be necessary to measure the speed of the methods for varying $T$, and not just for a varying dimension $N$. Finally, the experiments don't seem to support the claims that "As an eigen-decomposition-based method, IKD achieves more reasonable latent representations" and "IKD always obtains improved performance with an increasing observation dimensionality, and claims its absolute superiority when the observation dimensionality is very large." Indeed, the only metric proposed in the experiments is a classification metric with a $k$-NN on the representations. To evaluate the quality of an embedding and claim that "IKD always obtains improved performance", this doesn't seem sufficient to me.
It's also important to compare methods with other metrics such as the silhouette score or trustworthiness. Additionally, Figure 6 is quite challenging to interpret, making it difficult to draw any conclusions. Variance is not presented, the curves are hard to read, and the significance of different $k$ values is unclear (the three lines are almost identical). Here, an aggregated table seems more appropriate, with a setting such as $k=1$ for example. Finally, the experimental setup for competing methods is not entirely clear, especially since t-SNE is sensitive to the perplexity parameter, and it's unclear how this parameter is chosen.

[1] Euclidean Distance Matrices. Essential Theory, Algorithms and Applications. Ivan Dokmanic, Reza Parhizkar, Juri Ranieri and Martin Vetterli.
[2] Gower, John C. Some distance properties of latent root and vector methods used in multivariate analysis, 1966.
[3] Multidimensional Scaling, Sammon Mapping, and Isomap: Tutorial and Survey, Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley.

Requested Changes: Improve the experiments as detailed above and provide more clarity in establishing connections with previous standard spectral methods for dimension reduction.

Broader Impact Concerns: No ethical implications of the work.

==================================================

Review 2: Summary: This paper focuses on dimensionality reduction problems. The authors propose a novel nonlinear dimensionality reduction method, called Inverse Kernel Decomposition (IKD), for this purpose. Several numerical simulations have been conducted to illustrate the effectiveness and superior performance of IKD.

Strengths and Weaknesses:

Strengths: The numerical results, especially for the real-data experiments, are good, demonstrating the advantages of IKD compared with PCA, KPCA, LE, Isomap, t-SNE, UMAP, GPLVM and VAE.

Weaknesses:
1. The paper is not written well. For example, a proper introduction of the ``problem setting'' is lacking.
In particular, how the quality of a dimensionality reduction is measured is not presented.
2. The authors argue that their approach is capable of tackling noisy data. However, in their simulations, the noise is too small, i.e., $\mathcal{N}(0, 0.05^2)$.
3. The motivation and algorithm are not presented well. There are many tedious but simple mathematical computations that are easy to derive.

Requested Changes:
1. The paper should be rewritten, especially to highlight the motivation (why IKD performs well, rather than exhibiting tedious mathematical expressions), the algorithm itself (how the algorithm is implemented, why the specific kernels are adopted), and the explanations of the algorithm (the effect of algorithmic parameters, the computational burden, etc.).
2. Highlight the problem setting, especially demonstrating how to measure the quality of the mentioned approaches to show the advantage of the proposed IKD.
3. Experiments: 1) The effect of noise, and whether IKD still outperforms under it, should be added. 2) The selection of algorithm parameters (or kernels) and the role of the algorithm parameters should be given.

Broader Impact Concerns: NA

==================================================

Review 3: Summary: This paper proposes inverse kernel decomposition (IKD), a dimensionality reduction method based on the Gaussian process latent variable model. The proposed method is simple and efficient. Promising experimental results are reported on synthetic and real data.

Strengths and Weaknesses:

Strengths:
1. The proposed inverse kernel decomposition seems new to the best of my knowledge.
2. The proposed method is simple and efficient.
3. The empirical performance is promising.

Weaknesses:
1. Given the many existing eigen-decomposition-based dimensionality reduction methods, the novelty of this paper seems rather limited.
2. The motivation is not clearly stated. Why do the authors base their dimensionality reduction method on the Gaussian process latent variable model?
3.
The writing is not always satisfactory. For example, there are typos like "all pairs of rows in Z. I.e., ..." Requested Changes: It is suggested to revise the manuscript according to the above Weaknesses. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper introduces a novel nonlinear dimensionality reduction method named Inverse Kernel Decomposition (IKD), which relies on an eigen-decomposition of the sample covariance matrix of data and draws inspiration from Gaussian process latent variable models. Reviewers have acknowledged the novelty of the proposed method. **Two reviewers** have strongly recommended clear acceptance following the rebuttal with the authors. **One reviewer** commended the authors' efforts in enhancing the manuscript during the rebuttal; however, they expressed some conservatism regarding the current version. Specifically, they suggested two points for improvement: 1) Enhancing the readability of the experiments by ensuring clarity in the figures and explanations of the conclusions. 2) Providing more extensive discussions on the connections with prior research and existing spectral methods. While the authors mention Isomap, it appears that their approach may be viewed as a specific instance of kernel PCA with a particular choice of kernel. Considering these suggestions, I recommend acceptance with minor revisions. I encourage the authors to address the aforementioned comments to further strengthen the paper. ==================================================
# On The Role Of Fixed Points Of Dynamical Systems In Training Physics-Informed Neural Networks

Franz M. Rohrhofer *frohrhofer@acm.org* Know-Center GmbH - Research Center for Data-Driven Business & Big Data Analytics Sandgasse 36/4, 8010 Graz, Austria

Stefan Posch stefan.posch@lec.tugraz.at LEC GmbH - Large Engines Competence Center Inffeldgasse 19, 8010 Graz, Austria

Clemens Gößnitzer *clemens.goessnitzer@lec.tugraz.at* LEC GmbH - Large Engines Competence Center Inffeldgasse 19, 8010 Graz, Austria

Bernhard C. Geiger geiger@ieee.org Know-Center GmbH - Research Center for Data-Driven Business & Big Data Analytics Sandgasse 36/4, 8010 Graz, Austria

Reviewed on OpenReview: *https://openreview.net/forum?id=56cTmVrg5w*

## Abstract

This paper empirically studies commonly observed training difficulties of Physics-Informed Neural Networks (PINNs) on dynamical systems. Our results indicate that fixed points, which are inherent to these systems, play a key role in the optimization of the physics loss function embedded in PINNs. We observe that the loss landscape exhibits local optima that are shaped by the presence of fixed points. We find that these local optima contribute to the complexity of the physics loss optimization, which can explain common training difficulties and resulting nonphysical predictions. Under certain settings, e.g., initial conditions close to fixed points or long simulation times, we show that those optima can even become better than that of the desired solution.

## 1 Introduction

Dynamical systems are governed by differential equations and are ubiquitous in many scientific disciplines including economics, biology, physics and engineering. The upsurge in scientific machine learning has led to the development of sophisticated approaches, such as Gauss-Markov processes (Schober et al., 2014) or numerical Gaussian processes (Raissi et al., 2018), that are applicable to those systems and superior to classical methods.
In the field of deep learning, state-of-the-art methods have advanced by incorporating (at least) some part of the underlying physics, e.g., learned through data (Brunton et al., 2016) or embedded by design (Sanchez-Gonzalez et al., 2020). Among those methods are *physics-informed neural networks* (PINNs) which are the prime paradigm of physics-informed machine learning (Raissi et al., 2019; Karniadakis et al., 2021). Their seamless integration of data and physical constraints has pushed PINNs into a vast number of applications on dynamical systems, including system identification (Raissi, 2018), hidden state inference (Raissi et al., 2020) and surrogate modeling (Sun et al., 2020). Since PINNs are capable of solving differential equations in a fully mesh-free and time-continuous manner, one promising field of application is the numerical simulation of dynamical systems. In those applications, labeled training data is scarce and typically only used to specify the corresponding initial and boundary conditions (IC/BC). For a complete and unique definition of the forward problem, the IC/BC are either included in the loss function or explicitly enforced using specific network architectures. Both variants, however, rely on the optimization of the embedded physics loss function, i.e., on minimizing residuals on the governing differential equations, evaluated at collocation points which are randomly sampled inside the computational domain. Optimization success and accuracy thus particularly depend on the complexity of the studied dynamical system and the corresponding physics loss function. ## 1.1 Training Difficulties Of Physics-Informed Neural Networks In general, issues in the optimization of PINNs are manifold and often cause incorrectly predicted system dynamics. 
A complete account of all reported training difficulties of PINNs would be beyond the scope of this paper; we thus focus our discussion on issues and proposed remedies which appear in the context of dynamical systems and relate to what is discussed in our work.

## 1.1.1 Conflicting Objectives.

Several ongoing discussions address conflicts in the optimization of multiple objectives as one root cause of convergence issues in PINNs (Wang et al., 2021a). It has been shown that weighting the objectives, bound to physical, IC/BC and data constraints, differently is effective for PINNs and can lead to successful training. The weighting is conducted either by hand-tuned loss weights or with adaptive weighting schemes that adjust the weights during network training, such as in Maddu et al. (2021) or Jin et al. (2021). With a focus on coping with imbalanced gradients, those methods are generally used to improve the PINN's performance and to select an optimal point on the Pareto front (Rohrhofer et al., 2021). Furthermore, specially designed network architectures enable hard encoding of IC/BC and physical constraints (Lu et al., 2021; Raissi et al., 2019). These approaches circumvent conflicts by reducing the overall number of competing objectives.

## 1.1.2 Propagation Failure.

Most relevant to our discussion are recent works that focus on a purported failure mode of PINNs in which the learned system dynamics do not represent the solution that is specified by the IC/BC. A reason for this, it has been argued, is that the propagation of the solution from the enforced conditions to interior points is disrupted in a certain region of the computational domain, which often yields the trivial (zero) solution (Daw et al., 2022). To mitigate this issue, several remedies have been proposed. One focus lies in improving network initializations to reduce the bias towards flat output functions, e.g., by learning in sinusoidal space (Wong et al., 2021).
Other methods propose reweighting collocation points (Wang et al., 2022) or resampling them during network training (Leiteritz & Pflüger, 2021). In those methods, the importance or density of collocation points propagates from the enforced conditions to interior points during network training which, in a causality-respecting manner, promises to mitigate the propagation failure. Since it is generally argued that the physics loss optimization becomes more complex with an increasing domain size (Krishnapriyan et al., 2021), other methods focus on the extent of the computational domain. Often referred to as sequence-to-sequence learning or domain decomposition, those methods comprise approaches that divide the original spatio-temporal domain into smaller subdomains which are easier to solve (Jagtap & Karniadakis, 2021). Furthermore, approximation issues of PINNs in the presence of high-frequency or multiscale features have been explained by the spectral bias of PINNs, with proposed remedies found in Wang et al. (2021b).

## 1.2 Our Contribution

As shown in the last section, the literature abounds with techniques that try to mitigate commonly observed training difficulties of PINNs - but explanations of why training on dynamical systems often fails seem incomplete to us: We suspect that not only the trivial zero solution, but also fundamental properties of dynamical systems, e.g., their fixed points, play a key role in training (failures) of PINNs. Based on this, our contribution in this paper is as follows.

- We show on two simple dynamical systems that stable and unstable fixed points contribute to the optimization complexity of the physics loss function and influence the rate of training success. (Section 3.1.1)
- We empirically demonstrate that under certain settings, e.g., IC close to fixed points and long simulation times, nonphysical predictions become favorable, with better minima than that of the desired solution.
(Section 3.1.2)
- We further demonstrate that PINN training for complex dynamical systems is also affected by fixed points inherent to these systems. (Section 3.2)
- We visualize how the physics loss landscape is shaped by the presence of fixed points, which form local optima / saddle points that slow down the gradient-based optimization and might prevent a successful PINN training. (Section 3.3)
- We provide empirical evidence that the complexity of the physics loss landscape decreases for smaller computational domains and connect this to successful techniques used for training PINNs. (Sections 3.3 and 4)

## 2 Background

## 2.1 Dynamical Systems

In this work, we consider dynamical systems that can be described by differential equations of the form:

$$u_t = \mathcal{F}[u], \tag{1}$$

where the solution function $u = u(t, x)$ in general depends on time $t \in [0, T]$ and space $x \in \Omega \subseteq \mathbb{R}^n$, $u_t$ denotes the (partial) derivative of u w.r.t. time, and $\mathcal{F}$ is an arbitrary, potentially nonlinear differential operator dictating the system dynamics. In the numerical simulation of dynamical systems, IC are imposed to define the initial state of the system through

$$u(t_0 = 0, x) = u_0(x), \quad \forall x \in \Omega. \tag{2}$$

For the dynamical systems considered in this work, the spatial domain Ω is compact, i.e., closed and bounded, which further requires the specification of BC on the boundary ∂Ω in order to guarantee the uniqueness of the solution:

$$B[u] = 0, \quad \forall x \in \partial\Omega, \tag{3}$$

where B is a boundary operator, specifying periodic, Dirichlet and/or Neumann BC.

## 2.2 Physics-Informed Neural Networks

Fully-connected neural networks (FC-NNs) are most common among PINNs due to their good trade-off between simplicity and expressive power. Thus, we use FC-NNs to approximate the unknown solution function of (1) with $u(t, x) \approx u(t, x; \theta) =: u_\theta(t, x)$, where $\theta \in \mathbb{R}^{n_\theta}$ are the weights and bias terms of the network.
Common activation functions are the hyperbolic tangent (tanh) or the Sigmoid linear unit (SiLU, swish), which render the approximated solution function and its derivatives smooth1. PINNs use automatic differentiation (AD) (Baydin et al., 2018) to obtain (partial) derivatives of the network's output with respect to its inputs. In order to retrieve the derivatives of the network solution, AD requires passing discrete evaluation points, called collocation points, through the network in a feed-forward operation. The collocation points define the data set for penalizing residuals of the differential equation. The physics loss residual for a dynamical system (1) is given by:

$$f(t, x) := u_{\theta,t}(t, x) - \mathcal{F}[u_\theta(t, x)], \tag{4}$$

1Thus mesh-free and time-continuous in the context of differential equations.

where we use $u_{\theta,t}$ to denote the derivative of the network function $u_\theta$ w.r.t. time t. Following the standard PINN formulation, the sum of squared residuals at all collocation points yields the physics loss function:

$$L_f(\theta) = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f(t^i, x^i) \right|^2, \tag{5}$$

with the collocation points $\{t^i, x^i\}_{i=1}^{N_f}$ sampled from the entire computational domain $(t, x) \in [0, T] \times \Omega$. These points do not need any label and can be either fixed during PINN training or re-sampled before each training epoch. In the initial formulation of PINNs, additional data constraints are used to enforce the IC/BC by

$$L_u(\theta) = \frac{1}{N_u} \sum_{i=1}^{N_u} \left| u_\theta(t^i, x^i) - u(t^i, x^i) \right|^2, \tag{6}$$

where u is given by the r.h.s. of (2) and (3).
Both losses (5) and (6) are combined by scalarization of a multi-objective optimization through

$$L(\theta) = \lambda L_u(\theta) + L_f(\theta), \tag{7}$$

where λ represents a weighting factor, which here by default is set to λ = 1, unless explicitly stated otherwise. As an alternative to this (standard) formulation of PINNs, special network architectures have been proposed that ensure the IC/BC are satisfied explicitly. We refer to these PINNs as being *hard constrained*.

## 2.3 Stability, Fixed Points & Steady-State

Fixed points $u^*$ of a dynamical system are given by the roots of the nonlinear function $\mathcal{F}$ in equation (1):

$$\mathcal{F}[u^*] = 0. \tag{8}$$

In general, they can be either stable, asymptotically stable or unstable. For a stable fixed point, any trajectory close to it will stay close, whereas for an asymptotically stable fixed point close trajectories will further converge to it as t → ∞. In contrast, an unstable fixed point is repulsive, and even the smallest deviation will cause any close trajectory to move away from it as t → ∞. While it is hard to determine all fixed points and their asymptotic properties in dynamical systems, the trivial zero solution $u^* = 0$ is a fixed point of many systems, such as harmonic oscillators, pendulums or fluid flow. In the context of ordinary differential equations (ODEs), fixed points of dynamical systems are constant solutions which are often studied from the perspective of stability, to characterize the asymptotic properties of solutions and trajectories close to them. For dynamical systems governed by partial differential equations (PDEs), fixed points appear when obtaining certain solutions to a given system. Some applications aim at obtaining steady-state solutions, i.e., solutions whose state variables do not change over time and for which equation (8) thus holds true.
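For a scalar ODE, condition (8) and the stability classification can be checked numerically; a small sketch using the toy example of Section 3.1, $\mathcal{F}[y] = y(1 - y^2)$ (the linearization criterion via the sign of $\mathcal{F}'$ is standard, not specific to this paper):

```python
import numpy as np

# Fixed points are the roots of F (condition (8)); for a scalar ODE
# y' = F(y), the sign of F'(y*) classifies them: F'(y*) < 0 means
# asymptotically stable, F'(y*) > 0 means unstable.
F = lambda y: y * (1.0 - y ** 2)
dF = lambda y: 1.0 - 3.0 * y ** 2

fixed_points = np.array([-1.0, 0.0, 1.0])
print(F(fixed_points))   # all three satisfy (8), i.e., F vanishes

for y_star in fixed_points:
    kind = "asymptotically stable" if dF(y_star) < 0 else "unstable"
    print(f"y* = {y_star:+.0f}: F'(y*) = {dF(y_star):+.1f} -> {kind}")
```

Here y* = ±1 come out asymptotically stable (F' = −2 < 0) and y* = 0 unstable (F' = +1 > 0), matching the classification used in Section 3.1.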
When seeking those solutions, it is the asymptotic property of an underlying stable fixed point that determines what happens to the system in the long term after it has been initiated. Other applications, however, exploit the repulsive property of an unstable fixed point to bring the system out of steady-state and to obtain a transient solution. A particular example for the latter is simulating vortex shedding, which appears in fluid dynamics and will also be studied in this work.

## 3 On The Attractivity Of Fixed Points In Physics-Informed Neural Networks

According to (8), if the parameters θ of the PINN are such that $u_\theta = u^*$ corresponds to a fixed point, then the physics residual (4), and hence the physics loss (5), vanishes. Further, for PINN architectures with finitely many layers and neurons per layer, and for continuous activation functions, the network solution $u_\theta$ changes continuously with θ. Thus, if θ′ is a sufficiently small perturbation of θ, $u_{\theta'}$ is approximately a fixed-point solution, i.e., $\mathcal{F}[u_{\theta'}] \approx 0$. As a consequence, the physics loss for $u_{\theta'}$ will be positive, but small. Hence, fixed points correspond to global minima of the physics loss with non-trivial basins of attraction. The PINN can thus only learn "correct" behavior from the given IC/BC data. Regardless of whether this IC/BC data is incorporated using hard constraints or a loss term such as (6), the resulting loss landscape seen by the gradient-based optimizer will be affected by the fixed-point solution and its basin of attraction. In this section, we perform experiments2 to test the hypothesis that fixed points play a key role in the training of PINNs. In the first part, we perform experiments on two dynamical systems governed by ODEs: the undamped pendulum and a simple toy example. Due to their simplicity, those systems are studied in multiple settings, including variations of the IC, simulation length, and network and optimizer settings.
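The claim that constant fixed-point solutions minimize the physics loss regardless of the IC can be checked directly with automatic differentiation. A minimal sketch for the toy example (10) using PyTorch; the candidate functions are hand-picked stand-ins for a network output, not trained PINNs:

```python
import torch

# Physics loss (5) for the toy ODE u_t = F[u] with F[y] = y (1 - y^2),
# evaluated with automatic differentiation at collocation points.
def physics_loss(candidate, t):
    t = t.clone().requires_grad_(True)
    y = candidate(t)
    y_t = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y),
                              create_graph=True)[0]
    residual = y_t - y * (1.0 - y ** 2)       # f = u_t - F[u], cf. (4)
    return (residual ** 2).mean()             # mean squared residual, cf. (5)

t_col = torch.linspace(0.0, 5.0, 100).reshape(-1, 1)  # collocation points

# Constant candidate sitting on the fixed point y* = 1: zero physics loss,
# no matter which initial condition the problem actually prescribes.
loss_fp = physics_loss(lambda t: torch.ones_like(t) + 0.0 * t, t_col)

# A generic smooth candidate that does not solve the ODE: positive loss.
loss_other = physics_loss(lambda t: torch.sin(t), t_col)
print(float(loss_fp), float(loss_other))
```

The fixed-point candidate attains a physics loss of zero without ever consulting the IC, which is exactly why such solutions act as spurious global minima during training.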
In the second part, we show that our findings are also applicable to complex dynamical systems which are governed by PDEs. We perform experiments on two complex dynamical systems: the Navier-Stokes equations and the Allen-Cahn equation. Those systems are known to be challenging for PINNs, and we relate commonly observed training difficulties to fixed points inherent to these systems.

## 3.1 Fixed Points In Ordinary Differential Equations

In the following, we consider two simple ODEs: the undamped pendulum dynamics and a simple toy example. These examples are chosen because they exhibit stable (pendulum), asymptotically stable (toy example), and unstable (both) fixed points. We further try to solve these systems using either vanilla (i.e., multi-objective, for the pendulum) or hard constrained (for the toy example) PINNs. For both examples, we now stick to the convention of ODEs and denote the unknown solution function by y(t).

Undamped Pendulum Equation. The undamped pendulum dynamics are given by a second-order ODE

$$\ddot{y} = \mathcal{F}[y] := -\frac{g}{l}\sin(y), \tag{9}$$

with y denoting the angle from the vertical to the pendulum (in radians), and l and g representing the length of the rod and the magnitude of the gravitational field, respectively. Here we set l = 1 and g = 9.81 and consider for our discussion the angle y in units of degrees. The second-order ODE (9) can be converted into a coupled system of two first-order ODEs which are in the form of equation (1). Since PINNs are able to also handle higher-order differential equations, we use a single neural network $y_\theta(t)$ to approximate the solution function and solve the second-order ODE. This system exhibits two fixed points: a stable fixed point $y^* = 0°$ at the pendulum's natural rest position, and an unstable fixed point $y^* = 180°$ at the upright position.
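The conversion of (9) into a first-order system, integrated here with a fixed-step fourth-order Runge-Kutta scheme of the kind used for reference solutions, can be sketched as follows (step size, horizon, and the energy check are our illustrative choices):

```python
import numpy as np

# Pendulum dynamics (9) as a first-order system in (y, v) with v = dy/dt;
# parameter values follow the paper (l = 1, g = 9.81).
g, l = 9.81, 1.0

def rhs(state):
    y, v = state
    return np.array([v, -(g / l) * np.sin(y)])

def rk4(y0_deg, T=5.0, dt=1e-3):
    """Classical fixed-step RK4 integration from a zero-velocity start."""
    state = np.array([np.deg2rad(y0_deg), 0.0])
    traj = [state]
    for _ in range(int(T / dt)):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return np.array(traj)

traj = rk4(25.0)
# The undamped dynamics conserve E = v^2/2 - (g/l) cos(y), which makes the
# energy drift a convenient sanity check for the reference solution.
E = 0.5 * traj[:, 1] ** 2 - (g / l) * np.cos(traj[:, 0])
print(E.max() - E.min())  # should be tiny
```

The near-zero energy drift confirms the integrator is accurate enough to serve as a reference trajectory for the error metrics below.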
For comparison in our experiments, we create reference solutions using a fourth-order Runge-Kutta method.

Toy Example Equation. The toy example is defined as a one-dimensional system with a single ODE given by

$$\dot{y}=\mathcal{F}[y]:=y\left(1-y^{2}\right).\tag{10}$$

This system exhibits three fixed points, two of which, located at y∗ = ±1, are asymptotically stable, and one at y∗ = 0 which is unstable (see Figure 1). For this system, the analytical solution exists and is provided in Appendix C.1. The simplicity of this toy example further allows for hard constraining the IC by setting

$$\hat{y}(t)=y_{0}+t\cdot y_{\theta}(t),\tag{11}$$

which by definition fulfills equation (2).

## 3.1.1 Rate Of Training Success In The Presence Of Fixed Points

We now study the success rate of training PINNs on the above-mentioned systems. We declare training successful if the relative L2 error ∥yθ(t) − y(t)∥2/∥y(t)∥2 is below 15% when comparing the PINN's prediction with the reference solution after a sufficiently long training time. This threshold allows a clear separation of the training outcomes3. For the toy example, nonphysical predictions are exclusively influenced by the unstable fixed point at y∗ = 0 (see Figure 1(c)). For the undamped pendulum, however, we further classify whether an unsuccessful training,

2Code can be found on GitHub at https://github.com/frohrhofer/PINNs_fixed_points
3Results for further thresholds are provided in Appendix B.3 and suggest similar qualitative conclusions.

Table 1: **Undamped Pendulum.** Rate of training success across different system settings (T and y0) and network architectures (size and activation function). Triplets in the main table give, in percent (%) and in the respective order, the cases of successful training, training attracted by the stable fixed point, and training attracted by the unstable fixed point (see Appendix B.1 for details on the classification).
Bold triplets represent a low (< 5%) success rate.

| Network | Act. | T=2.5, y0=25◦ | T=2.5, y0=100◦ | T=2.5, y0=175◦ | T=5, y0=25◦ | T=5, y0=100◦ | T=5, y0=175◦ | T=7.5, y0=25◦ | T=7.5, y0=100◦ | T=7.5, y0=175◦ |
|---------|-------|---------|---------|---------|---------|---------|----------|----------|----------|---------|
| 4x50  | tanh  | 98/2/0  | 100/0/0 | 100/0/0 | 0/100/0 | 90/10/0 | 0/100/0  | 0/100/0  | 0/100/0  | 0/61/39 |
| 4x50  | swish | 100/0/0 | 100/0/0 | 98/0/2  | 56/44/0 | 80/20/0 | 0/100/0  | 0/100/0  | 1/99/0   | 0/91/9  |
| 4x50  | sin   | 100/0/0 | 100/0/0 | 100/0/0 | 65/35/0 | 93/7/0  | 0/93/7   | 2/98/0   | 24/75/1  | 0/94/6  |
| 8x100 | tanh  | 100/0/0 | 100/0/0 | 99/0/1  | 100/0/0 | 97/0/3  | 92/0/8   | 49/31/20 | 84/0/16  | 1/29/70 |
| 8x100 | swish | 100/0/0 | 100/0/0 | 100/0/0 | 100/0/0 | 100/0/0 | 57/42/1  | 100/0/0  | 100/0/0  | 1/92/7  |
| 8x100 | sin   | 100/0/0 | 100/0/0 | 98/0/2  | 55/0/45 | 60/0/40 | 39/15/46 | 32/0/68  | 29/0/71  | 0/26/74 |

and thus the nonphysical prediction, violates the physics by being attracted by either the stable (y∗ = 0◦) or the unstable fixed point (y∗ = 180◦). This classification is based on y and ẏ, i.e., in phase space, evaluated at t = T (see Appendix B.1 for details). For both systems, we train 100 randomly initialized PINNs for each combination of IC y0 and simulation time T. The IC for the undamped pendulum is enforced using the multi-objective loss (7) (vanilla PINN) with a zero initial angular velocity, i.e., ẏ0 = 0. Training is performed for 50k epochs using the Adam optimizer with default settings for the moment estimates. We repeat training for different setups, including network architectures and optimizer settings (see Appendix B.2 for details). Software and hardware specifications are found in Appendix A.
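For the toy example, the hard constraint (11) amounts to wrapping the network output so that ŷ(0) = y0 holds by construction and no IC loss term is needed. A minimal PyTorch sketch (our own illustration matching the 4x50 tanh baseline; class and function names are ours, and the Adam training loop is omitted):

```python
import torch
import torch.nn as nn

class HardConstrainedPINN(nn.Module):
    """y_hat(t) = y0 + t * y_theta(t), so y_hat(0) = y0 holds exactly
    (equation (11)); the IC needs no extra loss term."""
    def __init__(self, y0, width=50, depth=4):
        super().__init__()
        layers, d = [], 1
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(width, 1))
        self.net = nn.Sequential(*layers)
        self.y0 = y0

    def forward(self, t):
        return self.y0 + t * self.net(t)

def physics_loss(model, t):
    """Mean squared residual of y' = y (1 - y^2) (equation (10))."""
    t = t.clone().requires_grad_(True)
    y = model(t)
    dy = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    return ((dy - y * (1.0 - y ** 2)) ** 2).mean()

model = HardConstrainedPINN(y0=0.5)
t_col = torch.rand(1024, 1) * 10.0   # collocation points on [0, T] with T = 10
loss = physics_loss(model, t_col)    # differentiable; minimize with Adam as usual
```

Varying y0 in this setup probes ICs at different distances from the unstable fixed point y∗ = 0 without any interference from multi-objective optimization, which is the setting analyzed in Section 3.1.2.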
Table 1 shows the rate of training success across different network architectures for the experiments on the undamped pendulum with an initial learning rate of α = 0.001. The outcomes of experiments using different optimization settings, and for the toy example, can be found in Appendices B.2 and C.2. In general, we observe severe training difficulties across all experiments, with only a small number of settings reaching a success rate of 100%.

Influence of Initial Condition y0. From the results it is apparent that PINNs are sensitive to the choice of initial condition. We observe that, in general, IC close to fixed points lead to a lower rate of training success than those that start far away. For the undamped pendulum, this is evident by comparing in Table 1 the rates for y0 = 100◦ with those for y0 = 25◦ and y0 = 175◦. We made similar observations for the toy example, which also shows a lower rate of success for IC close to the unstable fixed point (see Appendix C.2).

Influence of Simulation Time T. Across different settings and for both systems, we observe that training becomes more difficult as the simulation time is increased. Conversely, reducing the simulation time leads to a higher success rate. The influence of the simulation time in terms of the physics loss complexity will be further analyzed in Section 3.3.

Influence of Network and Optimization Settings. We observe a slight improvement in the success rate as the network size is increased. Still, none of the tested network architectures could resolve the training issues at long simulation times and IC close to the (unstable) fixed point. Thus, we conclude that the observed training difficulties stem from the optimization complexity rather than from insufficient expressive power of the network.
As reported in Appendix B.2, we also perform an ablation study using different optimization settings in terms of the learning rate, number of collocation points, loss weighting, and network initialization. In comparison to the baseline model (4x50, tanh, from Table 1), no optimization setting yields notable changes to the rate of training success.

## 3.1.2 Fixed Points Becoming Economical Solutions

Next, we demonstrate that, when the IC is very close to a fixed point, training may result in PINN predictions that approach these fixed points, even if the resulting behavior is nonphysical. Furthermore, we show that those nonphysical predictions can even become better minima than the desired optimum. The latter further renders convergence to the true solution unfeasible.

![6_image_0.png](6_image_0.png)

Figure 1: **Toy Example.** Predicted trajectories and learning curves for the (a),(b) data-guided PINN and (c),(d) physics-driven PINN. (e) The minimal physics loss across all epochs vs. the absolute distance between the IC and the unstable fixed point indicates that nonphysical predictions become better minima than the true solution. Five PINN instances were trained for each initial value (sequential color code) and approach (crosses and pluses). Markers were randomly shifted horizontally to reduce overlap.

To rule out effects from multi-objective optimization, we focus in this subsection on the toy example implemented by a PINN with hard constraints. Similar observations on the undamped pendulum can be found in Appendix B.4. For this experiment, we use a 4x50 network architecture with tanh activation and choose a simulation time of T = 10. To show that our chosen PINN architecture has sufficient expressive power to learn the true solution, we implemented the following *data-guided* PINN as a control: First, 10 labeled data points are sampled from the analytical solution, equidistantly from the computational domain.
In the first 25k epochs of training, these 10 points support training via (7), while in the remaining 25k epochs, the PINN is trained using the physics loss only. The first training phase thus guides the gradient-based optimization into the basin of attraction of the analytical solution, while the second training phase ensures that the physics loss is minimized. Indeed, as shown in Figure 1(b), the data-guided strategy successfully learns the analytical solution. We compare this approach to training with the physics loss only, i.e., without the use of labeled data, which we refer to as the *physics-driven* PINN. For each approach and IC, we train five uniquely initialized PINN instances with the same optimization settings as in Section 3.1.1.

Influence of Initial Condition y0. For the toy example, Figure 1(c)-(e) show that the PINN predictions impacted by the unstable fixed point (y∗ = 0) achieve lower physics losses as the IC gets closer to it. Indeed, for small values of y0, the physics loss can become even smaller than the physics loss achieved by the data-guided PINN (see Figure 1(e)), i.e., the prediction impacted by the unstable fixed point seems to become a better minimum for the PINN optimization than the true solution. This renders finding the true solution unfeasible, even for optimization methods that are not based on gradients (e.g., PSO-PINNs as in Davi & Braga-Neto (2022)).

## 3.2 Fixed Points In Partial Differential Equations

In the following, we broaden our discussion to two complex dynamical systems governed by PDEs: the Navier-Stokes equations and the Allen-Cahn equation. We show that common training difficulties and frequently observed nonphysical predictions on these systems can be explained by the presence of fixed points inherent to the experimental setups.

Navier-Stokes Equations. The Navier-Stokes equations govern fluid flow and form a coupled system of nonlinear PDEs.
For the two-dimensional case, the unknown solution functions are u(t, ⃗x), v(t, ⃗x), and p(t, ⃗x), representing the fluid velocity in x- and y-direction and the pressure, respectively. The system equations for transient fluid flow, representing conservation of momentum in x- and y-direction, are given by

$$u_{t}=\mathcal{F}_{x}[u,v,p]:=-(uu_{x}+vu_{y})-p_{x}+\mathrm{Re}^{-1}(u_{xx}+u_{yy}),\tag{12a}$$
$$v_{t}=\mathcal{F}_{y}[u,v,p]:=-(uv_{x}+vv_{y})-p_{y}+\mathrm{Re}^{-1}(v_{xx}+v_{yy}).\tag{12b}$$

In this work we consider an incompressible fluid, for which the continuity equation

$$u_{x}+v_{y}=0\tag{13}$$

holds. In order to hard constrain this additional PDE, we introduce a stream function ψ(t, ⃗x) with u = ψy and v = −ψx, similar to what has been done in Raissi et al. (2019). Consequently, a single neural network can be used to approximate ψθ(t, ⃗x) and pθ(t, ⃗x), which by definition fulfills the continuity equation (13).

Allen-Cahn Equation. The Allen-Cahn equation describes a reaction-diffusion system which is used to simulate phase separation in multi-component alloy systems. The highly non-linear PDE is given by

$$u_{t}=\mathcal{F}[u]:=\gamma_{1}u_{xx}+\gamma_{2}(u-u^{3}).\tag{14}$$

The following proposition (the proof is given in Appendix E) characterizes a non-trivial fixed point inherent to this system:

Proposition 1. *Consider a PINN instance, where the collocation points are drawn from a continuous distribution supported on the computational domain [−1, 1] × [0, T], and suppose that the PINN is used to approximate the solution to the Allen-Cahn equation (14).*
Then, for the function

$$u(x,t)=\begin{cases}0,&x\in[-0.5,0.5]\\ -1,&x\in[-1,-0.5)\cup(0.5,1]\end{cases}\tag{15}$$

the physics loss Lf(θ) = 0 *with probability one.*

## 3.2.1 Fixed Point Leading To Steady-State

For this experiment we consider the Navier-Stokes system (12) and aim at simulating vortex shedding, which is a well-studied phenomenon and benchmark simulation in fluid dynamics. The simulation aims at capturing the characteristic properties of oscillating fluid flow, which appears in the wake of a (round) body that is passed by the fluid at a certain velocity. Of particular interest, in terms of stability, is that this transient and periodic flow is obtained through the influence of a fixed point, which becomes unstable above a critical Reynolds number Rec ≈ 48 (Tang & Aubry, 1997). The symmetry breaking, usually achieved by numerical instabilities in classical methods, pushes the symmetrical fluid flow out of the steady-state, which conforms to the unstable fixed point, and into the oscillating vortex street.

Experimental Setup. The experimental setup for simulating vortex shedding can be found in Appendix D. We choose Re = 100, i.e., a Reynolds number high enough to cause vortex shedding. A database of direct numerical simulation (DNS) data for this Reynolds number can be found in Boudina (2021). This database contains the developed vortex shedding flow field and is used as training data for our experiment. We choose a fixed 8x100 neural network architecture with tanh activation and consider the following two training strategies: One PINN instance is used as a control and trained on three consecutive periods of vortex shedding, i.e., t ∈ [0, 18], with collocation points sampled from the same time domain. We refer to this as the *data-guided* PINN, which is thus fully supported by training data and only asked to correctly learn the reference solution.
The second, *physics-driven* PINN, however, is trained on labeled data coming only from the time domain t ∈ [0, 3], which represents 50% of the first vortex shedding period. This imposes the initial sequence of vortex shedding, and by sampling collocation points in the full time domain (t ∈ [0, 18]), the physics-driven PINN is asked to continue the simulation beyond the domain of reference data by minimizing the residuals of the Navier-Stokes equations (12). Further details on the optimization and data settings can be found in Appendix D.

![8_image_0.png](8_image_0.png)

Figure 2: **Navier-Stokes Equations.** Substantial difference in the velocity magnitudes |⃗u| at t = 12 predicted by the (a) data-guided and (b) physics-driven PINN. (c)-(d) The time evolution of the nonlinear operators Fx and Fy at (x, y) = (3.0, 0.5) (red rectangle) indicates that the physics-driven PINN is influenced by the unstable fixed point, which attracts a steady-state solution, as apparent from Fx → 0 and Fy → 0 for large t. The green shaded area represents the initial training sequence for the physics-driven PINN.

Results. Figure 2 shows, for the fully trained data-guided and physics-driven PINN, the predicted velocity magnitudes |⃗u| at t = 12, and the time evolution of the nonlinear operators Fx and Fy at the depicted spatial coordinate (x, y) = (3.0, 0.5). The data-guided PINN indeed resembles the true vortex shedding dynamics (within a certain accuracy, not shown), which verifies that the network architecture is sufficiently large to learn the vortex shedding dynamics over the given time domain. For the physics-driven PINN, however, we observe that outside the domain of reference data the prediction starts to substantially deviate from that of the data-guided PINN.
The time evolution of the nonlinear operators Fx and Fy gives further insights and indicates that the physics-driven PINN approaches a steady-state solution with the fixed point properties (8), as evidenced by Fx → 0 and Fy → 0 as t becomes large. While this behavior is only shown for a particular spatial coordinate, similar observations are made on the entire spatial domain (not shown). These observations, together with the fact that the approached solution (Figure 2(b)) is not the trivial zero solution, suggest that the training is influenced by the underlying unstable fixed point, which originally accounts for the vortex shedding dynamics. We note that evaluating the physics loss for both PINN instances shows that the minimal loss achieved by the physics-driven PINN (Lf,min = 8.28 · 10−5) is lower than that of the data-guided PINN (Lf,min = 2.07 · 10−4). We further note that we have also tested different network and optimization settings, as well as (hard-constrained) BCs and different computational domains, but we were not able to achieve, for any of these settings using standard PINNs, a qualitative behavior different from the one shown here. The challenge of this optimization problem was similarly reported in Chuang & Barba (2022) and can be resolved, e.g., by using a truncated Fourier decomposition with PINNs (Raynaud et al., 2022).

## 3.2.2 Fixed Point Slowing Down Convergence

For this experiment we consider the Allen-Cahn equation (14) and define the IC and periodic BC as

$$u(0,x)=x^{2}\cos(\pi x),\quad x\in[-1,1],\quad t\in[0,1],\tag{16a}$$
$$u(t,1)=u(t,-1),\tag{16b}$$
$$u_{x}(t,1)=u_{x}(t,-1),\tag{16c}$$

where we set γ1 = 0.0001 and γ2 = 5 in equation (14). This particular example was also studied in Mattey & Ghosh (2022) and Wight & Zhao (2020) and showed severe training difficulties for PINNs. In both works, the standard PINN approach resulted in ignoring the IC and learning the trivial zero solution.
While a weighted PINN with λ = 100 in Wight & Zhao (2020) showed a slight improvement (by at least learning the IC correctly), both works eventually propose methods that mitigate the observed training difficulties and fall into the category of domain decomposition / adaptive collocation point sampling. The reference solution for this experimental setup can be found in Raissi et al. (2019).

![9_image_0.png](9_image_0.png)

Figure 3: **Allen-Cahn Equation.** Prediction of the PINN trained with (a) 50k and (c) 200k epochs using gradient-based optimization. (b), (d) While the prediction of the PINN trained with 200k epochs is in good agreement with the reference, the PINN trained with only 50k epochs is still trapped at a solution which resembles, for larger values of t, the non-trivial fixed point (15). This suggests that the training is influenced by the fixed point, which seems to slow down the gradient-based optimization (further analyzed in Section 3.3.2).

Experimental Setup. For this example, we adopt the weighted PINN approach and choose a loss weight λ = 100 to put greater weight on the IC. We consider a 6x100 network and optimize the composite loss function that enforces the (weighted) IC, the BC, and the physics loss residuals of equation (14) on the full time domain t ∈ [0, 1]. We train one PINN instance using Adam for 50k epochs and a second instance with a much longer training duration of 200k epochs, both with an initial learning rate of α = 0.001. At each training epoch, data for the IC/BC and collocation points are sampled anew with sizes NIC = 128, NBC = 128, and Ncol = 1024, respectively.

Results. Figure 3 shows the predictions of both instances on the entire spatio-temporal domain and at selected time steps. We observe that while both instances correctly learn the IC (and BC), the PINN trained for only 50k epochs approaches a solution that resembles the non-trivial fixed point given by (15).
We note that these results coincide with those shown in Wight & Zhao (2020), although we used different network and optimization settings. To our surprise, the PINN instance that continues the training for another 150k epochs successfully manages to escape from this suboptimal solution and converges to a solution that is in good agreement with the reference (with minor differences in the late time domain). This observation again suggests that the training is influenced by the non-trivial fixed point (15), which seems to slow down the gradient-based optimization. This particular example, together with the corresponding learning curves, will be further analyzed in the next section.

## 3.3 Optimization Landscape In The Presence Of Fixed Points

Finally, we investigate the effect of fixed points on the physics loss landscape. In particular, we evaluate, for the toy example and the Allen-Cahn system, the physics loss function on solutions that were found by the PINN instances and influenced by (nearby) fixed points. We also assess the effects of reducing the simulation length T by limiting the time domain up to which collocation points are sampled and used to evaluate the physics loss (5).

![10_image_0.png](10_image_0.png)

Figure 4: **Toy Example.** (a) PINN prediction and (b) corresponding learning curves for a training example that barely manages to escape from its suboptimal location. (c)-(f) Physics loss landscape evaluated on collocation points which are sampled up to T. The intermediate nonphysical prediction (diamond) clearly forms an attractive location in the loss landscape, which gradually disappears as T is reduced.

## 3.3.1 Initial Condition Far From Fixed Point

For the toy example in Figure 1(d), we observe that most PINN instances suffer from slow convergence, indicated by the presence of plateaus in the learning curves. Only a few examples show a successful escape from those undesired optima, which is evident from a distinct drop in the learning curves.
Those cases are primarily reported for IC that are comparably far from the unstable fixed point at y∗ = 0. To obtain a better understanding of this phenomenon, we plot the physics loss landscape and the PINN prediction for an instance (y0 = 0.5 and T = 8) that barely manages to escape from its suboptimal location. The two plot directions θ1 and θ2 are chosen to point from the initial network state (θ0) to an intermediate (θ25k) and the final state (θ50k) of the network training. Further visualization details are given in Appendix F.

Results. Figure 4 shows the loss landscape for the two directions θ1 and θ2, where panel (c) represents the loss landscape seen during the PINN optimization. Additionally, in panels (a) and (b), a prediction and the training sequence of the PINN are given, which show that the gradient-based optimization first gets trapped in a local optimum. After approximately 42k epochs, the optimization manages to converge to the correct solution. We clearly detect the global minimum, which corresponds to this solution, in the upper right region (plus) of the loss landscape. We further observe that for long simulation times, i.e., in panels (c)-(e), a local minimum or saddle point (diamond) forms, which seems to be attractive to the gradient-based optimization (cf. Figure 1) and apparently conforms to the nonphysical prediction approaching the unstable fixed point. With decreasing T, i.e., with a reduced time domain on which the physics loss is evaluated, this local optimum gradually vanishes and the global optimum becomes easier to reach due to the better (in terms of optimization) shape of the loss landscape. This explains why we observe in Table 4 a high rate of training success when using T = 2.5.

## 3.3.2 Initial Condition Close To Fixed Point

For our final investigation, we use the PINN instance from the Allen-Cahn system which initially was also trapped at a nonphysical prediction and finally escaped from it as the training continued (cf.
Figure 3). In this example, the IC given by (16a) and the resulting dynamics closely pass by the fixed point (15), which seemed to affect the PINN training. The PINN was optimized using the composite loss function (7) with λ = 100. However, since the IC and BC were learned correctly, we only focus on visualizing the physics loss landscape.

![11_image_0.png](11_image_0.png)

Figure 5: **Allen-Cahn Example.** (a),(b) PINN prediction and (c) corresponding learning curves for a training example that barely manages to escape from its suboptimal location. (d)-(g) Physics loss landscape evaluated on collocation points which are sampled up to T. The intermediate nonphysical prediction (diamond) clearly forms an attractive location in the loss landscape, which disappears as T is reduced significantly.

We plot the loss landscape in two particular directions θ1 and θ2 that point from the initial network state (θ0) to an intermediate (θ80k) and the final state (θ200k) of the network training. Further visualization details are given in Appendix F.

Results. Figure 5 shows the loss landscape for the two directions θ1 and θ2, where panel (d) represents the loss landscape seen during the PINN optimization. Additionally, in panels (a), (b), and (c), predictions and the training sequence of the PINN are given, which show that the gradient-based optimization first gets trapped in a local optimum (diamond). We note that the PINN instance which was only trained for 50k epochs in Section 3.2.2 was still trapped at this local optimum, which apparently conforms to the nonphysical prediction approaching the non-trivial fixed point (15). After approximately 90k epochs, however, the optimization manages to escape from this optimum and finally converges to the solution which is in good agreement with the true system dynamics. The minimum corresponding to this solution can be clearly seen in the upper right region (plus) of the loss landscape.
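Two-dimensional loss-landscape slices of this kind can be reproduced by interpolating in parameter space between saved checkpoints, in the spirit of Li et al. (2018). The following sketch is our own illustration (names are ours; the actual visualization procedure is described in Appendix F):

```python
import torch

def loss_landscape_2d(model, loss_fn, theta0, theta_mid, theta_final,
                      n=25, lo=-0.5, hi=1.5):
    """Evaluate loss_fn(model) on the 2D parameter-space slice spanned by the
    directions theta1 = theta_mid - theta0 and theta2 = theta_final - theta0.
    The theta* arguments are lists of parameter tensors (saved checkpoints)."""
    d1 = [b - a for a, b in zip(theta0, theta_mid)]
    d2 = [b - a for a, b in zip(theta0, theta_final)]
    coords = torch.linspace(lo, hi, n)
    grid = torch.zeros(n, n)
    for i, a in enumerate(coords):
        for j, b in enumerate(coords):
            with torch.no_grad():  # overwrite the model parameters in place
                for p, p0, u, v in zip(model.parameters(), theta0, d1, d2):
                    p.copy_(p0 + a * u + b * v)
            grid[i, j] = float(loss_fn(model))  # e.g., the physics loss (5)
    return grid

# Tiny demo with a stand-in model and loss (replace with the PINN and the
# physics loss on collocation points sampled up to T):
model = torch.nn.Linear(1, 1)
theta0 = [p.detach().clone() for p in model.parameters()]
theta_mid = [p + 0.1 for p in theta0]
theta_final = [p + 0.2 for p in theta0]
grid = loss_landscape_2d(model, lambda m: (m(torch.ones(8, 1)) ** 2).mean(),
                         theta0, theta_mid, theta_final, n=5)
```

Here, grid coordinates (a, b) = (0, 0) recover θ0, (1, 0) the intermediate checkpoint, and (0, 1) the final checkpoint.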
For this example, the simulation length T has to be reduced significantly, to a minimum of about T = 0.01, in order to observe the suboptimal location (diamond) completely vanishing from the loss landscape. This observation is in accordance with the domain decomposition method introduced in Mattey & Ghosh (2022), which divides the full time domain into 50 segments, thus using a step size of ∆t = 0.02.

## 4 Discussion And Limitations

Our study and results suggest that fixed points of dynamical systems, irrespective of whether they are stable or unstable, lead to the formation of attractive optima in the physics loss landscape which influence the PINN training. This was demonstrated in a series of experiments using two simple dynamical systems described by ODEs, and two complex dynamical systems described by PDEs.

On the Role of Fixed Points. As discussed in Section 2.3, fixed points of dynamical systems are given by the roots of the nonlinear function F[u] in equation (1). Physics loss residuals (4) are thus small by definition, since F[u] ≈ 0 and, thus, ut ≈ 0 in the vicinity of fixed points (see the preparatory discussion in Section 3). These properties seem to affect the training of PINNs: True fixed point solutions, i.e., solutions that are constant (ODEs) or steady-state (PDEs), trivially minimize the physics loss and, thus, yield local optima in the physics loss landscape. Those optima are attractive to the gradient descent optimization, as shown in our experiments. Additionally, the fact that residuals are small in the vicinity of fixed points may make it more economical for the PINN to violate the prescribed system dynamics locally, for the sake of settling at a "simpler" solution (Steger et al.). Effectively, fixed points, together with the L2 losses in (7), allow PINNs to trade off between severely violating the physics locally and approximating it inaccurately on the entire computational domain.

Validity domain.
The question may arise for which applications, in particular, the role of fixed points is critical in the training of PINNs. While, in general, this question is hard to answer due to the countless ways of studying dynamical systems, we can draw parallels between the systems studied in this work to provide an indication of where similar behavior is to be expected. First of all, our studied dynamical systems are time-continuous and described by, or at least can be brought into the form of, equation (1). Here we note that for all of our studied systems the differential operator F does not explicitly depend on time t. This restricts our consideration to so-called autonomous differential equations. However, fixed points predominantly appear in exactly those systems, and many of the dynamical systems studied in the PINN literature and found in engineering problems are autonomous, such as fluid flows, diffusion processes, or Hamiltonian systems. Whether or not a fixed point in a particular application will influence the PINN training is determined by several factors, as concluded from our experiments. Our results indicate that having an IC near a fixed point or a trajectory that passes by a fixed point closely, combined with extended simulation times, can increase the chances of encountering training difficulties due to the fixed point. We further believe that, in the context of Lyapunov stability, the neighborhood of a fixed point, as determined by the differential operator F, has an immediate effect on the influence of the fixed point on the PINN training.

Remedies and Proposed Solutions. In Section 1.1 we have already stated several remedies that have been proposed in order to overcome commonly observed training difficulties in PINNs. The aim of this paper was not to propose a new entry to this list of remedies, but rather to give an explanation and further insights into why some of them may work for the specific type of training difficulties encountered on dynamical systems.
Proposed solutions in this list range from techniques that reduce the computational domain, i.e., the simulation time in the context of dynamical systems, to methods that reweight or resample the dataset of collocation points. Our results showed that the simulation length T has an immediate effect on the optimization landscape, and that smaller computational domains are often characterized by smoother landscapes in which undesired minima, corresponding to fixed points, are less pronounced or disappear altogether. We believe that several of the proposed collocation point sampling schemes affect the optimization landscape in a similar manner, effectively reducing the computational domain to regions in which collocation points are sampled densely (e.g., close to the boundary of the computational domain, as in Daw et al. (2022) or Wang et al. (2022)). Finally, as we have argued in Section 3, only the provision of IC/BC data can prevent a PINN from getting stuck in an undesired fixed point solution. Even so, the fixed point may still be attractive, leading either to small violations of the physics loss (as shown for the toy example in Section 3.1.2) or to small violations of the IC. This may partly explain the success of loss weighting schemes, and the fact that sometimes greater weights on the IC/BC loss are required to arrive at the desired solution (Wang et al., 2021a; Maddu et al., 2021; Jin et al., 2021).

Limitations and Further Work. One may argue that the trade-off inherent in PINNs, or the multi-objective nature of their vanilla formulation (7), should be vacuous, as the true solution satisfies both the physics and the IC/BC. Thus, nonphysical predictions should never represent a better optimum than the desired, physical solution. This is true, however, only for PINNs with unlimited expressive power. In practice, the expressivity of a neural network is always limited by its (necessarily) finite size.
Therefore, the mentioned trade-off is effective, and we have reason to believe that there are settings where the desired solution does not correspond to a global optimum (cf. Figure 1). Future work shall investigate this aspect from a more theoretical perspective, instantiating approximation theorems for neural networks in the setting of PINNs. Further, one may argue that some of the observed nonphysical predictions simply appear because the PINN has not sufficiently converged. In other words, training was not long enough to depart from the (flat) IC, which may correspond to a trivial solution of the differential equation (Wong et al., 2021; Leiteritz & Pflüger, 2021). A large part of the literature on propagation failures (see Section 1.1.2) points in this direction. Further, such a statement is supported by the fact that the physics loss of, e.g., the solution approaching the stable fixed point in Figure 6 is high, and by the late transitions to the correct solutions in Section 3.3. Indeed, we do not claim that minima of the physics loss formed by the presence of fixed points always correspond to (good) minima of the full loss (7): these minima may disappear entirely (e.g., for small computational domains), turn into saddle points that slow down convergence, or achieve a wider basin of attraction and/or smaller loss than the minimum corresponding to the true solution, in the extreme case where ICs are very close to unstable fixed points. Finally, our investigations are based on a selection of dynamical systems. We chose these systems because they are intuitive to understand, yet still exhibit nontrivial dynamics. Moreover, the particular choice of these systems allowed us to separate the effects of different types of fixed points and to at least partly exclude other explanations for the training difficulties of PINNs.
Future work shall be devoted to studying fixed points and steady-state solutions, and to the wider spectrum of asymptotic properties of solutions to dynamical systems.

## 5 Conclusion

In this paper, we studied the physics loss optimization in PINNs when applied to dynamical systems governed by differential equations. Our results revealed that nonphysical predictions appear as attractive optima in the physics loss landscape and seem to stem from the presence of fixed points inherent to dynamical systems. These minima or saddle points potentially disrupt and trap the gradient descent optimization, leading to commonly observed convergence issues in PINNs. Reducing the computational domain yielded a greater rate of training success and, in general, reduced the complexity of the physics loss optimization. In the future, we believe that interdisciplinary research that includes advances in deep learning, stability theory, and/or a further understanding of the underlying physics may improve physics-informed machine learning or benefit from it.

## References

Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. *Journal of Machine Learning Research*, 18, 2018.

Mouad Boudina. Numerical simulation data of a two-dimensional flow around a fixed circular cylinder, June 2021. URL https://doi.org/10.5281/zenodo.5039610.

Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. *Proceedings of the National Academy of Sciences*, 113(15):3932–3937, 2016.

Pi-Yueh Chuang and Lorena A Barba. Experience report of physics-informed neural networks in fluid simulations: pitfalls and frustration. *arXiv preprint arXiv:2205.14249*, 2022.

Caio Davi and Ulisses Braga-Neto. Pso-pinn: Physics-informed neural networks trained with particle swarm optimization. *arXiv preprint arXiv:2202.01943*, 2022.
Arka Daw, Jie Bu, Sifan Wang, Paris Perdikaris, and Anuj Karpatne. Rethinking the importance of sampling in physics-informed neural networks. *arXiv preprint arXiv:2207.02338*, 2022.

Ameya D Jagtap and George E Karniadakis. Extended physics-informed neural networks (xpinns): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. In *AAAI Spring Symposium: MLPS*, 2021.

Xiaowei Jin, Shengze Cai, Hui Li, and George Em Karniadakis. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. *Journal of Computational Physics*, 426:109951, 2021.

George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. *Nature Reviews Physics*, 3(6):422–440, 2021.

Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. *Advances in Neural Information Processing Systems*, 34:26548–26560, 2021.

Raphael Leiteritz and Dirk Pflüger. How to avoid trivial solutions in physics-informed neural networks. *arXiv preprint arXiv:2112.05620*, 2021.

Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. *Advances in Neural Information Processing Systems*, 31, 2018.

Lu Lu, Raphael Pestourie, Wenjie Yao, Zhicheng Wang, Francesc Verdugo, and Steven G Johnson. Physics-informed neural networks with hard constraints for inverse design. *SIAM Journal on Scientific Computing*, 43(6):B1105–B1132, 2021.

Suryanarayana Maddu, Dominik Sturm, Christian L Müller, and Ivo F Sbalzarini. Inverse Dirichlet Weighting Enables Reliable Training of Physics Informed Neural Networks. *Machine Learning: Science and Technology*, 2021.

Revanth Mattey and Susanta Ghosh. A novel sequential method to train physics informed neural networks for allen cahn and cahn hilliard equations.
*Computer Methods in Applied Mechanics and Engineering*, 390:114474, 2022.

Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. *The Journal of Machine Learning Research*, 19(1):932–955, 2018.

Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Numerical gaussian processes for time-dependent and nonlinear partial differential equations. *SIAM Journal on Scientific Computing*, 40(1):A172–A198, 2018.

Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. *Journal of Computational Physics*, 378:686–707, 2019.

Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. *Science*, 367(6481):1026–1030, 2020.

Gaétan Raynaud, Sebastien Houde, and Frederick P Gosselin. Modalpinn: an extension of physics-informed neural networks with enforced truncated fourier decomposition for periodic flow reconstruction using a limited number of imperfect sensors. *Journal of Computational Physics*, pp. 111271, 2022.

Franz M Rohrhofer, Stefan Posch, and Bernhard C Geiger. On the Pareto Front of Physics-Informed Neural Networks. *arXiv preprint arXiv:2105.00862*, 2021.

Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter Battaglia. Learning to simulate complex physics with graph networks. In *International Conference on Machine Learning*, pp. 8459–8468. PMLR, 2020.

Michael Schober, David K Duvenaud, and Philipp Hennig. Probabilistic ode solvers with runge-kutta means. *Advances in Neural Information Processing Systems*, 27, 2014.

Sophie Steger, Franz M Rohrhofer, and Bernhard C Geiger. How pinns cheat: Predicting chaotic motion of a double pendulum. In *The Symbiosis of Deep Learning and Differential Equations II*.
Luning Sun, Han Gao, Shaowu Pan, and Jian-Xun Wang. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. *Computer Methods in Applied Mechanics and Engineering*, 361:112732, 2020.

Shaojie Tang and Nadine Aubry. On the symmetry breaking instability leading to vortex shedding. *Physics of Fluids*, 9(9):2550–2561, 1997.

Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. *SIAM Journal on Scientific Computing*, 43(5):A3055–A3081, 2021a.

Sifan Wang, Hanwen Wang, and Paris Perdikaris. On the eigenvector bias of fourier feature networks: From regression to solving multi-scale pdes with physics-informed neural networks. *Computer Methods in Applied Mechanics and Engineering*, 384:113938, 2021b.

Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality is all you need for training physics-informed neural networks. *arXiv preprint arXiv:2203.07404*, 2022.

Colby L Wight and Jia Zhao. Solving allen-cahn and cahn-hilliard equations using the adaptive physics informed neural networks. *arXiv preprint arXiv:2007.04542*, 2020.

Jian Cheng Wong, Chinchun Ooi, Abhishek Gupta, and Yew-Soon Ong. Learning in sinusoidal spaces with physics-informed neural networks. *arXiv preprint arXiv:2109.09338*, 2021.

![16_image_0.png](16_image_0.png)

Figure 6: **Undamped Pendulum.** Representative examples of training cases from Table 1. Training outcomes are classified as either (a) successful training, (b) attracted by the stable fixed point, or (c) attracted by the unstable fixed point. *Left*: predicted trajectories as a function of physical time. *Right*: predicted trajectories in phase space show whether their end positions y and ẏ lie on, inside, or outside the orbit of the true pendulum dynamics. Green and red lines/circles represent stable and unstable fixed points, respectively.
## A Software And Hardware Specifications

All code in our experiments is implemented in Python version 3.8 and TensorFlow version 2.9.1. Computations are performed on an Nvidia Tesla T4 GPU with a memory size of 16 GB.

## B Additional Content To Undamped Pendulum

This section provides further results in addition to the experiments presented in Sections 3.1.1 and 3.1.2.

## B.1 Classification Of Training Outcomes In Phase Space

In the main part of this paper, we classify unsuccessful training outcomes as being influenced by either the stable (y* = 0°) or the unstable fixed point (y* = 180°). This classification is conducted by determining the trajectories' end position in phase space, i.e., by evaluating y and ẏ (see Figure 6). Based on this, end positions that lie inside the periodic orbit of the true pendulum dynamics are classified as being attracted by the stable fixed point, whereas positions that lie outside the orbit are considered as being attracted by the unstable fixed point. Here we note that trajectories of unsuccessful training outcomes by default leave the periodic orbit, which renders this separation possible.

## B.2 Rate Of Training Success - Optimization Settings

In Table 1, all PINN instances are optimized with an initial learning rate α = 0.001, number of collocation points Nc = 64, loss weighting factor λ = 1, and the Glorot uniform initializer for the network initialization. In addition, we also test different optimization settings using the 4x50 network architecture with tanh activation as the base model.

Table 2: **Undamped Pendulum.** Rate of training success across different system and optimization settings using the 4x50 network architecture with tanh activation. Triplets in the main table represent, in percentage (%) and in the respective order, cases of successful training, attracted by the stable fixed point, and attracted by the unstable fixed point.
The tested optimization settings are the learning rate (α), the number of collocation points (Nc), the loss weighting factor (λ), and the network weight initialization (Init.), with He denoting the He uniform initialization. The baseline model (see Table 1) uses α = 0.001, Nc = 64, λ = 1, and the Glorot uniform initialization. Bold triplets represent a low (< 5%) success rate.

| | T | 2.5 | | | 5 | | | 7.5 | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | y0 | 25° | 100° | 175° | 25° | 100° | 175° | 25° | 100° | 175° |
| Baseline | | 98/2/0 | 100/0/0 | 100/0/0 | **0/100/0** | 90/10/0 | 0/100/0 | **0/100/0** | **0/100/0** | **0/61/39** |
| α | 0.01 | 37/0/63 | 36/0/64 | 60/0/40 | 16/0/84 | 19/0/81 | 7/0/93 | **0/7/93** | 13/0/87 | **0/0/100** |
| | 0.0001 | 0/100/0 | 0/100/0 | 0/100/0 | 0/100/0 | 0/100/0 | 0/100/0 | **0/100/0** | **0/100/0** | **0/100/0** |
| Nc | 16 | 96/4/0 | 100/0/0 | 100/0/0 | 0/100/0 | 4/96/0 | 0/100/0 | **0/100/0** | **0/100/0** | **0/100/0** |
| | 256 | 100/0/0 | 100/0/0 | 100/0/0 | **0/100/0** | 100/0/0 | 35/64/1 | **0/99/1** | **0/100/0** | **0/89/11** |
| λ | 0.1 | **0/100/0** | 61/39/0 | 93/7/0 | 0/100/0 | 0/99/1 | 0/85/0 | **0/100/0** | **0/100/0** | **0/100/0** |
| | 10 | 100/0/0 | 100/0/0 | 100/0/0 | 11/89/0 | 42/0/0 | 6/85/9 | **0/99/1** | **0/100/0** | **0/35/65** |
| Init. | He | 100/0/0 | 98/0/2 | 100/0/0 | **0/98/2** | 93/7/0 | 0/98/2 | **0/98/2** | **0/98/2** | **0/73/27** |

Results of this experiment can be found in Table 2, where we include the baseline model with its default optimization settings from Table 1 as a reference. In general, we observe that none of the tested optimization settings yields a substantial improvement for ICs close to fixed points and long simulation times.

## B.3 Rate Of Training Success - Further Thresholds

In the main part of the paper, we use a threshold of 15% in terms of the L2 relative error for the classification of successful training. To demonstrate that our particular choice of this threshold does not contradict the qualitative conclusions made in the main part, we further provide results using different thresholds. In particular, we now re-evaluate the training cases presented in Table 1 with thresholds of 5% and 25% in terms of the L2 relative error. The results can be found in Table 3.
For compactness, we only show the results for the 4x50 and 8x100 architectures with tanh activation. As apparent in the table, no substantial differences in the classified training outcomes can be observed when using different thresholds.

## B.4 Fixed Points Becoming Economical Solutions

In the main part of this work, Figure 1 shows for the toy example that nonphysical predictions can become better minima than that of the desired solution when the IC is close to a fixed point. To demonstrate similar qualitative observations for the undamped pendulum, we perform an experiment similar to that in Section 3.1.2. In particular, we use an 8x100 network architecture with tanh activation and choose a simulation time of T = 10. Since physics-driven PINN instances will hardly converge to the true solution for this simulation time, we implement the same data-guided strategy as presented in Section 3.1.2: We include a total number of 100 labeled training points in the first half of the training. The data is sampled from the Runge-Kutta solution, equidistantly in the computational domain. In the second half, the training continues with the physics loss optimization only. Indeed, as shown in Figure 7(b), the data-guided strategy successfully converges to the true solution, representing successful outcomes and sufficient expressive power for the chosen PINN architecture. We again compare this approach to physics-driven training, i.e., without the use of labeled training data. We repeat the experiment for each IC with 10 uniquely initialized PINN instances per training strategy. We note that none of the physics-driven instances converges to the true solution (see Figure 7(b)). The unsuccessful training outcomes are further classified according to whether the resulting nonphysical prediction is attracted by the stable or the unstable fixed point (see Figure 6).
Table 3: **Undamped Pendulum.** Rate of training success using differently set thresholds in terms of the L2 relative error for the classification of successful and unsuccessful training. Triplets in the main table represent, in percentage (%) and in the respective order, cases of successful training, attracted by the stable fixed point, and attracted by the unstable fixed point. Bold triplets represent a low (< 5%) success rate.

| | T | 2.5 | | | 5 | | | 7.5 | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | y0 | 25° | 100° | 175° | 25° | 100° | 175° | 25° | 100° | 175° |
| 4x50 | L2 < 5% | 98/2/0 | 100/0/0 | 100/0/0 | **0/100/0** | 90/10/0 | **0/100/0** | **0/100/0** | **0/100/0** | **0/61/39** |
| | L2 < 15% | 98/2/0 | 100/0/0 | 100/0/0 | **0/100/0** | 90/10/0 | **0/100/0** | **0/100/0** | **0/100/0** | **0/61/39** |
| | L2 < 25% | 99/1/0 | 100/0/0 | 100/0/0 | **0/100/0** | 90/10/0 | 34/66/0 | **0/100/0** | **0/100/0** | **0/61/39** |
| 8x100 | L2 < 5% | 43/37/20 | 84/0/16 | **0/29/71** | 100/0/0 | 97/0/3 | 43/49/8 | 43/37/20 | 84/0/16 | **0/29/71** |
| | L2 < 15% | 100/0/0 | 100/0/0 | 99/0/1 | 100/0/0 | 97/0/3 | 92/0/8 | 49/31/20 | 84/0/16 | **1/29/70** |
| | L2 < 25% | 100/0/0 | 100/0/0 | 99/0/1 | 100/0/0 | 97/0/3 | 92/0/8 | 55/25/20 | 84/0/16 | **1/29/70** |

In Figure 7(a), we report the minimal physics loss values across all epochs for each training outcome. We observe, similar to the behavior in the toy example, that the PINN predictions attracted by the unstable fixed point (y* = 180°) achieve lower physics losses as the IC gets closer to it. Furthermore, for y0 = 175°, the nonphysical solution becomes a better optimum than that of the desired solution. As a direct consequence, every physics-driven instance converges to it.

![18_image_0.png](18_image_0.png)
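The two-phase data-guided strategy used in this experiment (labeled reference data, e.g., from a Runge-Kutta solver, active only during the first half of training) can be sketched as follows. The function names, the finite-difference discretization, and the cubic toy dynamics ẏ = y − y³ are our own illustrative assumptions, not the implementation used for the experiments.

```python
import numpy as np

def physics_loss(y, dt):
    """Mean squared central-difference residual of the assumed ODE y' = y - y^3."""
    dydt = (y[2:] - y[:-2]) / (2.0 * dt)
    return float(np.mean((dydt - (y[1:-1] - y[1:-1] ** 3)) ** 2))

def data_guided_loss(y, y_ref, dt, epoch, n_epochs):
    """Two-phase schedule: the labeled-data term is only active during the
    first half of training; afterwards only the physics loss remains."""
    loss = physics_loss(y, dt)
    if epoch < n_epochs // 2:  # data-guided phase
        loss += float(np.mean((y - y_ref) ** 2))
    return loss
```

Once the data term is dropped, a prediction that was pulled toward the reference trajectory has to survive on the physics loss alone, which, per Figure 7(b), suffices to keep it out of the fixed-point minima.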
Figure 7: **Undamped Pendulum.** (a) Minimal physics loss across all epochs. (b) L2 relative error. While the data-guided PINNs converge to the true solution (blue), the physics-driven PINNs yield incorrect system dynamics (large L2 relative errors) by being attracted by either the stable (green) or the unstable (red) fixed point (see Figure 6). For y0 = 175°, the nonphysical solution becomes a better optimum than that of the desired solution. In the figure, markers were randomly shifted horizontally to reduce overlap.

| | T | 2.5 | | | 5 | | | 7.5 | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | y0 | 0.001 | 0.01 | 0.1 | 0.001 | 0.01 | 0.1 | 0.001 | 0.01 | 0.1 |
| 4x50 | tanh | 100 | 100 | 100 | **1** | **1** | 12 | **1** | **2** | 14 |
| | swish | 100 | 100 | 100 | 69 | 72 | 100 | **0** | **0** | **2** |
| | sin | 100 | 100 | 100 | 13 | 5 | 61 | **3** | **4** | 12 |
| 8x100 | tanh | 100 | 100 | 100 | **0** | **0** | **3** | **0** | **1** | 10 |
| | swish | 100 | 100 | 100 | 91 | 100 | 100 | **0** | **0** | **0** |
| | sin | 100 | 100 | 100 | **0** | **1** | 13 | **0** | **3** | 14 |

Table 4: **Toy Example.** Rate of training success across different system settings (T and y0) and network architectures (size and activation function). Numbers in the main table represent, in percentage (%), cases of successful training. Bold numbers represent a low (< 5%) success rate.

## C Additional Content To Toy Example

## C.1 Analytical Solution

The analytical solution to (10) is given by

$$y(t)=\begin{cases}\left(1+\left(\frac{1}{y_{0}^{2}}-1\right)e^{-2t}\right)^{-1/2}&\text{for }1\geq y_{0}>0,\\ 0&\text{for }y_{0}=0,\\ -\left(1+\left(\frac{1}{y_{0}^{2}}-1\right)e^{-2t}\right)^{-1/2}&\text{for }0>y_{0}\geq-1.\end{cases}$$

## C.2 Rate Of Training Success

As defined in Section 3.1.1, we declare training successful if the L2 relative error is below 15%. We show the rate of training success for the toy example using different network architectures in Table 4.
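The closed-form solution from Appendix C.1 can be sanity-checked numerically. The check below assumes that (10) is the cubic ODE ẏ = y − y³, which is consistent with the stated fixed points (unstable at y* = 0, stable at y* = ±1) and with the given solution; the helper names are ours.

```python
import math

def y_analytic(t, y0):
    """Closed-form solution from Appendix C.1 (assuming y' = y - y^3)."""
    if y0 == 0.0:
        return 0.0
    c = 1.0 / y0 ** 2 - 1.0
    val = (1.0 + c * math.exp(-2.0 * t)) ** -0.5
    return val if y0 > 0 else -val

def ode_residual(t, y0, h=1e-6):
    """|dy/dt - (y - y^3)|, with dy/dt from a central finite difference."""
    dydt = (y_analytic(t + h, y0) - y_analytic(t - h, y0)) / (2.0 * h)
    y = y_analytic(t, y0)
    return abs(dydt - (y - y ** 3))
```

For y0 > 0 the solution tends to the stable fixed point y* = 1, and for y0 close to the unstable fixed point y* = 0 it behaves like y0·e^t, i.e., it lingers near zero for roughly t < ln(1/y0); this is exactly the regime of small y0 and large T in which the success rates in Table 4 collapse.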
Similar to our observations for the undamped pendulum (see Table 1), we observe a low rate of training success for ICs close to the unstable fixed point (y* = 0) and long simulation times.

## D Vortex Shedding

In this section, we provide additional information about the experimental setup for simulating vortex shedding, including the computational domain, boundary conditions, as well as network and optimization settings. A publicly available database of direct numerical simulation data can be found in Boudina (2021).

## D.1 Computational Domain And Boundary Conditions

The computational domain for this experiment is set to the compact space (x, y) ∈ Ω := [−5, 15]×[−10, 10], where a cylindrical body is placed at (x, y) = (0, 0) with a diameter of d = 1. For our main experiment, the top/bottom boundary is considered as a moving wall with no-slip conditions, while zero-gradient conditions are applied at the outlet. For the cylinder boundary, no-penetration and no-slip conditions are chosen. These BCs are also shown in Figure 8. Here we note that we have also performed experiments with a different computational domain and different BCs (top/bottom boundary with a symmetry wall and zero gradient, and outlet with zero pressure), but none of the tested settings led to the desired vortex shedding motion; all showed qualitative behavior similar to that presented in the main part of this work.

![20_image_0.png](20_image_0.png)

Figure 8: **Fluid Dynamics.** Boundary conditions for vortex shedding.

## D.2 Optimization And Data Settings

The overall loss function in this experiment is composed of the respective loss functions for the BCs, the physics residuals, and additional data constraints:

$$L(\theta)=L_{\rm Inlet}(\theta)+L_{\rm Outlet}(\theta)+L_{\rm Wall}(\theta)+L_{\rm Cylinder}(\theta)+L_{f,x}(\theta)+L_{f,y}(\theta)+L_{\rm Data}(\theta),\tag{17}$$

where Lf,x and Lf,y are the physics loss functions for (12a) and (12b), respectively.
Here, LData imposes the (initial) sequence of the developed vortex shedding flow field, which for the physics-driven PINN is 50% of the first period, and for the data-guided PINN three consecutive periods. The data set for this initial sequence is sampled from the reference database with a batch size of NIC,batch = 1024. At each batch iteration, collocation points and training data for the BCs are sampled anew with sizes Ncol = 1024, NInlet = 128, NOutlet = 128, NWall = 256, and NCylinder = 128. As already stated in the main part of this work, we introduce a stream function ψ(t, x⃗) with u = ψy and v = −ψx to enforce continuity, i.e., conservation of mass for an incompressible fluid. A single 8x100 neural network with tanh activation functions is then used to approximate ψθ(t, x⃗) and pθ(t, x⃗). Different optimization settings have been tested throughout our work, but none of them led to the desired vortex shedding for the physics-driven PINN. For the sake of simplicity, we only state the settings used for the experiment presented in the main part of this paper: Optimization is performed using Adam with default settings for the moment estimates and a total number of 10k epochs. The initial learning rate is set to α = 0.001, and an exponential decay with rate 0.9 and step 1000 is applied. Training with this setting on the software/hardware listed in Appendix A took about 7 h.

## E Proof To Allen-Cahn Equation

Proof. For the proof, note that for (15), we have that u_t ≡ 0. It hence remains to investigate the right-hand side of (14). To this end, note that u(x, t) is piecewise constant. Specifically, we have that u_x(x′) = 0 for x′ ∈ [−1, 1] \ {−0.5, 0.5}, hence also u_xx(x′) = 0. Since further, for all x ∈ [−1, 1], we have that u³(x, t) = u(x, t), the right-hand side of (14) is zero on x′ ∈ [−1, 1] \ {−0.5, 0.5}.
Suppose now that the collocation points on which L_f(θ) is evaluated are drawn from a continuous distribution supported on [−1, 1]×[0, T], i.e., from a distribution that is absolutely continuous w.r.t. the Lebesgue measure on ℝ². Now, the subset {−0.5, 0.5}×[0, T] is a Lebesgue null set of ℝ², hence the probability that collocation points are drawn from this set is zero. Since the physics loss is evidently zero outside of this set, this completes the proof.

## F Visualizing The Physics Loss Landscape

The visualization of the loss landscape is based on the work of Li et al. (2018) and is a two-dimensional projection. We plot the loss landscape L(θ1, θ2) along two specific directions θ1 and θ2 as stated in the main part of this work. A basic Gram-Schmidt process is then used to obtain an orthonormalized set of the two directions. The loss landscape for different simulation times T is obtained by evaluating the physics loss function on a total number of 1024 collocation points, sampled from the time domain t ∈ [0, T]. For the toy example, loss values greater than L(θ1, θ2) > 0.2 were truncated to highlight the interesting domain in Figure 4(c)-(f). Similarly, loss values greater than L(θ1, θ2) > 10 were truncated for the Allen-Cahn example, where we also use a logarithmic presentation of the loss landscape to highlight the interesting domain in Figure 5(d)-(g).
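The procedure described above, two parameter-space directions orthonormalized via Gram-Schmidt followed by a grid evaluation of the loss on the resulting 2-D slice, can be sketched as follows. The function names are ours, and the sketch follows the general recipe of Li et al. (2018) rather than reproducing our exact code.

```python
import numpy as np

def orthonormalize(d1, d2):
    """Gram-Schmidt: return an orthonormal basis of the plane spanned by d1, d2."""
    u1 = d1 / np.linalg.norm(d1)
    u2 = d2 - (d2 @ u1) * u1
    return u1, u2 / np.linalg.norm(u2)

def loss_landscape(loss_fn, theta, d1, d2, span=1.0, n=51):
    """Evaluate loss_fn on the 2-D slice theta + a*u1 + b*u2 of parameter space."""
    u1, u2 = orthonormalize(d1, d2)
    alphas = np.linspace(-span, span, n)
    return np.array([[loss_fn(theta + a * u1 + b * u2) for b in alphas]
                     for a in alphas])
```

Truncating large loss values, as done for Figures 4(c)-(f) and 5(d)-(g), then amounts to `np.minimum(grid, cap)` (optionally followed by `np.log10` for the logarithmic presentation) before plotting.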
Review 1: Summary: This paper investigates the limitations of physics-informed neural networks (PINNs). They show that (stable and **unstable**) fixed points in the PDE contribute to the optimization difficulties of PINN since they are attractive (in terms of loss minimization) for the PINN, thus leading the PINN to learn non-physical solutions. Strengths and Weaknesses: ## strength: The message of this paper is quite simple: PINNs may get stuck in some local minima that correspond to fixed points of the dynamics. ## weaknesses: - The experiments are very toyish (small neural networks and easy tasks). It is unclear if larger/deeper neural networks would achieve the same conclusion. - I found the notation $u_t$ for the derivative quite confusing since it is eventually mixed with $u_\theta$ and $u_{\theta,t}$ where the subscript $\theta$ does not correspond to any derivation. - The paper does not investigate/propose solutions on how to fix that issue. However, I acknowledge that this may be beyond the scope of this paper. ## summary Overall I feel like since this paper is an experimental paper, the evidence proposed by the author is relatively weak (toy experiments and small NNs) So, I feel that this paper is currently slightly under the acceptance threshold. Stronger experiments would be required to strengthen the claims. Requested Changes: - I would like to see experiments with larger/deeper NNs to see if this problem remains there. - Less toyish tasks. Minor: I would clarify the notation $u_{\theta,t}$ Broader Impact Concerns: No concerns ================================================== Review 2: Summary: The paper suggests, through a series of experiments over two toy examples and one complex dynamical system, Navier-Stokes equation, that fixed points shape local optima of the loss landscape. This would help inform the architecture and training of the model. The authors provide empirical evidence to support this claim. 
Strengths and Weaknesses: Strengths: - the research question above sounds like a novel aspect of training physically-informed machine learning methods, and it is well grounded in the literature as described by the authors in the introduction - the results are underpinned by the experiments on different models and illustrate the given phenomenon Weaknesses: - the main concern is about the sufficiency of evidence: the generalised conclusions of impact of fixed points on the optimisation of physics-informed neural networks only rely upon a limited number of experiments; therefore the evidence for the claims should be improved, and that is the reason behind the answer on the question "Are the claims made in the submission supported by accurate, convincing and clear evidence? " is currently No (see comments in the requested changes section) Requested Changes: - Figure 1: it is important to clarify why cannot the given performance be just explained by the previously documented phenomenon when trained neural networks are giving good interpolations but bad extrapolations (see Ziyin et al, 2020; Davini et al, 2021)? - It is important to show the phenomenon for multiple complex dynamical systems. This may include varying the parameterisation of Navier-Stokes equation such as varying Reynolds number, as well as other dynamic systems (maybe Lorenz system?). The main concern, otherwise, is that while there is evidence on two particular toy problems, the evidence of generalisation to the more complex problems such as solving Navier-Stokes equation is not entirely conclusive. -In Table 1, although the general meaning is clear, the reviewer struggled to find the exact procedure for evaluation of the following values: 'attracted by stable fixed point, and unstable fixed point'. It is important that the authors clarify upon it. 
- Figure 4 gives good qualitative motivation why would the fixed point affect optimisation landscape; however, is it possible to conclusively clarify this for multiple tasks by showing landscapes for different tasks? Davini, David, Bhargav Samineni, Benjamin Thomas, Amelia Huong Tran, Cherlin Zhu, Kyung Ha, Ganesh Dasika, and Laurent White. "Using physics-informed regularization to improve extrapolation capabilities of neural networks." (2021) Ziyin, Liu, Tilman Hartwig, and Masahito Ueda. "Neural networks fail to learn periodic functions and how to fix it." Advances in Neural Information Processing Systems 33 (2020): 1583-1594. Broader Impact Concerns: No broader impact concerns registered ================================================== Review 3: Summary: This paper examines numerically the inconsistent training outcomes of physics informed neural networks (PINNs) in some simple systems after motivating the problem by discussing an unsuccessful application to the Navier-Stokes equation. The authors hypothesize that a lack of convergence arises due to fixed points in the dynamics of the target dynamical system. The effect of fixed points is studied from the perspective of the loss landscape and optimization dynamics for 1) learning the dynamics of an undamped pendulum and 2) a gradient system with two stable fixed points and an unstable fixed point. Strengths and Weaknesses: Strengths: - The paper formulates a clear hypothesis to diagnose the challenges in training PINNs. - PINNs are a timely and important topic in the machine learning literature, especially for applied mathematics and computational sciences. Weaknesses: - This study does not engage in any real hypothesis testing, rather it simply provides two minimal examples where fixed points influence the convergence of learning. Is it always the case? What about at the bottom of a quartic potential, where there's a stable fixed point with large fluctuations? 
- The evidence is purely numerical and is guided by models that are very simplistic. It's hard to connect the results obtained in the paper to the Navier-Stokes solution presented at the beginning. - There is not a compelling explanation to demonstrate that this phenomenon is generic. The argument appears to be by analogy. - The connection to the Navier-Stokes example isn't very clear. The dynamics resembles an unstable fixed point, but is it? Requested Changes: I think in order for me to recommend this study for publication, there are several additional things I would need to see: 1. There needs to be some actionable prescription. If a PINN is not converging well because of a fixed point, how should one simulate so as to improve training? 2. There must be a better plausibility argument for why training fails near fixed points. The argument that stable fixed points lead to slow training dynamics could almost certainly be made more precise and illustrated in a more general setting. 3. On a related note, some more foundational understanding of this phenomenon would be helpful to connect these observations to nontrivial systems. Broader Impact Concerns: No, none come to mind. ================================================== Review 4: Summary: The paper addresses a typical problem in PINNs: the existence of bad local minima in the loss functions, which correspond to overly simple solutions. The paper makes an empirical study using previously existing methods to collect evidence that these problems are connected to fixed points in the dynamical system. Section 3 shows that the relevant problems exist for PDEs. Section 4 includes demonstrations on some examples, where fixed points probably induce bad learning behavior. The loss landscape on a toy example demonstrates that the proposed effect increases over long times, and hence makes PINNs (as they are currently being used) unsuitable for learning of dynamical systems over long time periods.
Strengths and Weaknesses: Strengths: - The paper shows clear experimental evidence, with good experimental setups, that the proposed problem in training PINNs actually has something to do with fixed points. - The experiments are supported by physical and mathematical explanations, which are not proofs, but at least intuitively point to the empirical findings. Weaknesses - No proper solution to the problem is given, although some weakly mitigating effects are being demonstrated - Rather few test cases have been done, and mostly on ODEs Requested Changes: - Section 2: - Not every dynamical system is of the form given in (1), and not even every dynamical system can be described by differential equations. Please be honest and state that you are considering dynamical systems as in (1), which is certainly an important class. (For example, the system in (10) is not of this form, although the system (10) can of course easily be modified to satisfy (1).) - Please state necessary assumptions on $\Omega$, e.g. bounded, compact, simply connected, regularity assumptions on its boundary, ... - Is your boundary operator necessarily linear? - After Formula (5): you probably mean $\times$ instead of $\otimes$. (See also Appendix B) - Section 3: - Please clearly state the assumptions made on the solutions of the NS equations, in particular on (in)compressibility, in the view of using the stream function. - Section 4: - Plots do not really have colorblind people in mind. And they do not really work in grayscale. This is more a suggestion than a requirement: the literature on PINNs is well-cited. One might (but not must) cite alternative approaches to differential equations using ML, e.g. (amongst many) the following arxiv links: 1406.2582, 1703.00787, 1801.09197, 2103.10153, 2103.12959, 2110.11812, 2202.01287, 2205.03185, 2208.01565, 2208.12515 Broader Impact Concerns: None.
==================================================
Metareview:
Recommendation: Accept with minor revision
Comment:
This paper analyses the training dynamics of physics-informed neural networks (PINNs), especially to study their failure cases. The main hypothesis in this work is that fixed points in differential equations can negatively impact the loss landscape when training PINNs, leading to nonphysical predictions. The authors conjecture that these local minima are responsible for the convergence issues usually observed with PINNs, especially for initial conditions close to fixed points or for long forecasting horizons. The submission initially received mixed reviews, with two slightly positive and two slightly negative recommendations. The main concerns pointed out by reviewers related to the absence of theoretical analysis, the lack of solutions to overcome the problem, the experiments on simplistic tasks and contexts, and issues with the paper's presentation. During the discussion period, the authors provided a new set of experiments on the Allen-Cahn equation, clarified the focus of their work, and changed the paper's structure and presentation. The reviewers acknowledged the improvements made in the revised version. After the discussion period, there was one accept, one leaning accept, and two leaning reject recommendations. The AE carefully read the submission and discussions and considers that the claims made in the submission are overall supported by evidence. The experiments cover a fair spectrum of problems: although some of them are simple, the added experiments on the Allen-Cahn equation strengthen the empirical validation. The submission addresses issues in the training dynamics of PINNs, which is of great interest to the community, and this work can certainly inspire follow-up works on theoretical aspects or more complex tasks.
The AE thus recommends paper acceptance under the request of a minor change about the validity domain of the submission's claims. This could be included in Section 4, and echoes the remarks made by reviewers KDWh and jz2A, e.g., to discuss whether there are any classes of ODEs/PDEs where the assumptions made in the submission do not hold, making the training dynamics behave differently than expected in this work.
==================================================
# Dual PatchNorm

Manoj Kumar *mechcoder@google.com*

Mostafa Dehghani *dehghani@google.com*

Neil Houlsby *neilhoulsby@google.com*

Google Research, Brain Team

Reviewed on OpenReview: *https://openreview.net/forum?id=jgMqve6Qhw*

## Abstract

We propose Dual PatchNorm: two Layer Normalization layers (LayerNorms), before and after the patch embedding layer in Vision Transformers. We demonstrate that Dual PatchNorm outperforms the result of an exhaustive search over alternative LayerNorm placement strategies in the Transformer block itself. In our experiments on image classification, contrastive learning, semantic segmentation, and transfer to downstream classification datasets, incorporating this trivial modification often leads to improved accuracy over well-tuned vanilla Vision Transformers and never hurts.

## 1 Introduction

Layer Normalization (Ba et al., 2016) is key to the Transformer's success in achieving both stable training and high performance across a range of tasks. Such normalization is also crucial in Vision Transformers (ViT) (Dosovitskiy et al., 2020; Touvron et al., 2021), which closely follow the standard recipe of the original Transformer model. Following the "pre-LN" strategy of Baevski & Auli (2019) and Xiong et al. (2020), ViTs place LayerNorms before the self-attention layer and MLP layer in each Transformer block. We explore the following question: Can we improve ViT models with a different LayerNorm ordering? First, across five ViT architectures on ImageNet-1k (Russakovsky et al., 2015), we demonstrate that an exhaustive search of LayerNorm placements between the components of a Transformer block does not improve classification accuracy. This indicates that the pre-LN strategy in ViT is close to optimal. Our observation also applies to other alternate LayerNorm placements: NormFormer (Shleifer et al., 2021) and Sub-LN (Wang et al., 2022), which, in isolation, do not improve over strong ViT classification models.
Second, we make an intriguing observation: placing additional LayerNorms before and after the standard ViT-projection layer, which we call Dual PatchNorm (DPN), can improve significantly over well-tuned vanilla ViT baselines. Our experiments on image classification across three different datasets with varying numbers of examples, as well as on contrastive learning, demonstrate the efficacy of DPN. Interestingly, our qualitative experiments show that the LayerNorm scale parameters upweight the pixels at the center and corners of each patch.

```python
hp, wp = patch_size[0], patch_size[1]
x = einops.rearrange(
    x, "b (ht hp) (wt wp) c -> b (ht wt) (hp wp c)", hp=hp, wp=wp)
x = nn.LayerNorm(name="ln0")(x)
x = nn.Dense(output_features, name="dense")(x)
x = nn.LayerNorm(name="ln1")(x)
```

Dual PatchNorm consists of a two-line change to the standard ViT-projection layer.

## 2 Related Work

Kim et al. (2021) add a LayerNorm after the patch embedding and show that this improves the robustness of ViT against corruptions on small-scale datasets. Xiao et al. (2021) replace the standard Transformer stem with a small number of stacked stride-two 3 × 3 convolutions with batch normalizations and show that this improves the sensitivity to optimization hyperparameters and final accuracy. Xu et al. (2019) analyze LayerNorm and show that the derivatives of mean and variance have a greater contribution to final performance than forward normalization. Beyer et al. (2022a) consider Image-LN and Patch-LN as alternative strategies to efficiently train a single model for different patch sizes. Wang et al. (2022) add extra LayerNorms before the final dense projection in the self-attention block and before the non-linearity in the MLP block, with a different initialization strategy. Shleifer et al. (2021) instead propose extra LayerNorms after the final dense projection in the self-attention block, with a LayerNorm after the non-linearity in the MLP block.
Unlike previous work, we show that LayerNorms before and after the embedding layer provide consistent improvements on classification and contrastive learning tasks. An orthogonal line of work (Liu et al., 2021; d'Ascoli et al., 2021; Wang et al., 2021) involves incorporating convolutional inductive biases into Vision Transformers. Here, we exclusively and extensively study LayerNorm placements of vanilla ViT.

## 3 Background

## 3.1 Patch Embedding Layer In Vision Transformer

Vision Transformers (Dosovitskiy et al., 2020) consist of a patch embedding layer (PE) followed by a stack of Transformer blocks. The PE layer first rearranges the image $\mathbf{x} \in \mathbb{R}^{H \times W \times 3}$ into a sequence of patches $\mathbf{x}_p \in \mathbb{R}^{\frac{HW}{P^2} \times 3P^2}$, where $P$ denotes the patch size. It then projects each patch independently with a dense projection to constitute a sequence of "visual tokens" $\mathbf{x}_t \in \mathbb{R}^{\frac{HW}{P^2} \times D}$. $P$ controls the trade-off between the granularity of the visual tokens and the computational cost in the subsequent Transformer layers.

## 3.2 Layer Normalization

Given a sequence of $N$ patches $\mathbf{x} \in \mathbb{R}^{N \times D}$, LayerNorm as applied in ViTs consists of two operations:

$$\mathbf{x} = \frac{\mathbf{x} - \mu(\mathbf{x})}{\sigma(\mathbf{x})} \tag{1}$$

$$\mathbf{y} = \gamma \mathbf{x} + \beta \tag{2}$$

where $\mu(\mathbf{x}) \in \mathbb{R}^{N}$, $\sigma(\mathbf{x}) \in \mathbb{R}^{N}$, $\gamma \in \mathbb{R}^{D}$, $\beta \in \mathbb{R}^{D}$. First, Eq. 1 normalizes each patch $\mathbf{x}_i \in \mathbb{R}^{D}$ of the sequence to have zero mean and unit standard deviation. Then, Eq. 2 applies learnable shifts and scales $\beta$ and $\gamma$, which are shared across all patches.

## 4 Methods

## 4.1 Alternate LayerNorm Placements

Following Baevski & Auli (2019) and Xiong et al. (2020), ViTs incorporate LayerNorm before every self-attention and MLP layer, commonly known as the pre-LN strategy. For each of the self-attention and MLP layers, we evaluate three strategies: place LayerNorm before (pre-LN), after (post-LN), or before and after (pre+post-LN), leading to nine different combinations.
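As a concrete illustration of the per-patch LayerNorm in Eqs. (1)–(2), here is a minimal NumPy sketch (the `eps` term and shapes are illustrative; γ and β are shared across patches):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    """Per-patch LayerNorm following Eqs. (1)-(2)."""
    mu = x.mean(axis=-1, keepdims=True)    # mu(x), one value per patch
    sigma = x.std(axis=-1, keepdims=True)  # sigma(x), one value per patch
    x_hat = (x - mu) / (sigma + eps)       # Eq. (1): zero mean, unit std
    return gamma * x_hat + beta            # Eq. (2): shared scale and shift

rng = np.random.default_rng(0)
N, D = 4, 8                                # 4 patches with 8 features each
x = rng.normal(loc=3.0, scale=2.0, size=(N, D))
y = layer_norm(x, gamma=np.ones(D), beta=np.zeros(D))
assert np.allclose(y.mean(axis=-1), 0.0, atol=1e-6)
assert np.allclose(y.std(axis=-1), 1.0, atol=1e-3)
```

With unit γ and zero β, the output reduces to plain standardization; training then adapts the per-dimension scale and shift.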
## 4.2 Dual PatchNorm

Instead of adding LayerNorms to the Transformer block, we propose to apply LayerNorms in the stem alone, both before and after the patch embedding layer. In particular, we replace

$$\mathbf{x} = \text{PE}(\mathbf{x}) \tag{3}$$

with

$$\mathbf{x} = \text{LN}(\text{PE}(\text{LN}(\mathbf{x}))) \tag{4}$$

and keep the rest of the architecture fixed. We call this Dual PatchNorm (DPN).

## 5 Experiments On ImageNet Classification

## 5.1 Setup

We adopt the standard formulation of Vision Transformers (Sec. 3.1), which has shown broad applicability across a number of vision tasks. We train ViT architectures (with and without DPN) in a supervised fashion on three different datasets with varying numbers of examples: ImageNet-1k (1M), ImageNet-21k (21M) and JFT (4B) (Zhai et al., 2022a). In our experiments, we apply DPN directly on top of the baseline ViT recipes without additional hyperparameter tuning. We split the ImageNet train set into a train and validation split, and use the validation split to arrive at the final DPN recipe.

ImageNet-1k: We train five architectures: Ti/16, S/16, S/32, B/16 and B/32, using the AugReg (Steiner et al., 2022) recipe for 93000 steps with a batch size of 4096, and report the accuracy on the official ImageNet validation split, as is standard practice. The AugReg recipe provides the optimal mixup regularization (Zhang et al., 2017) and RandAugment (Cubuk et al., 2020) for each ViT backbone. Further, we evaluate an S/16 baseline (S/16+) with additional extensive hyperparameter tuning on ImageNet (Beyer et al., 2022b). Finally, we also apply DPN on top of the base and small DeiT variants (Touvron et al., 2021). Our full set of hyperparameters is available in Appendix C and Appendix D.

ImageNet-21k: We adopt a similar setup as for ImageNet-1k. We report ImageNet 25-shot accuracies in two training regimes: 93K and 930K steps.

JFT: We evaluate the ImageNet 25-shot accuracies of three variants (B/32, B/16 and L/16) in two training regimes (220K and 1.1M steps) with a batch size of 4096.
In this setup, we do not use any additional data augmentation or mixup regularization. On ImageNet-1k, we report the 95% confidence interval across at least three independent runs. On ImageNet-21k and JFT, because of expensive training runs, we train each model once and report the mean 25-shot accuracy with a 95% confidence interval across three random seeds.

## 5.2 DPN Versus Alternate LayerNorm Placements

Each Transformer block in ViT consists of a self-attention (SA) and an MLP layer. Following the pre-LN strategy (Xiong et al., 2020), LN is inserted before both the SA and MLP layers. We first show that the default pre-LN strategy in ViT models is close to optimal by evaluating alternate LN placements on ImageNet-1k. We then contrast this with the performance of NormFormer, Sub-LN and DPN. For each SA and MLP layer, we evaluate three LN placements: Pre, Post and Pre+Post, leading to nine total LN placement configurations. Additionally, we evaluate the LayerNorm placements in NormFormer (Shleifer et al., 2021) and Sub-LN (Wang et al., 2022), which add additional LayerNorms within each of the self-attention and MLP layers in the Transformer block. Figure 1 shows that none of the placements outperform the default pre-LN strategy significantly, indicating that the default pre-LN strategy is close to optimal. NormFormer provides some improvements on ViT models with a patch size of 32. DPN, on the other hand, provides consistent improvements across all five architectures.

![3_image_0.png](3_image_0.png)

Figure 1: The plot displays the accuracy gains of different LayerNorm placement strategies over the default pre-LN strategy. Each blue point (**Other LN placement**) corresponds to a different LN placement in the Transformer block. None of the placements outperform the default pre-LN strategy on ImageNet-1k (Russakovsky et al., 2015).
Applying DPN (black cross) provides consistent improvements across all 5 architectures.

| ViT AugReg (93K steps) | Base | DPN |
|---|---|---|
| S/32 | 72.1 ± 0.07 | 74.0 ± 0.09 |
| Ti/16 | 72.5 ± 0.07 | 73.9 ± 0.09 |
| B/32 | 74.8 ± 0.06 | 76.2 ± 0.07 |
| S/16 | 78.6 ± 0.32 | 79.7 ± 0.2 |
| S/16+ | 79.7 ± 0.09 | 80.2 ± 0.03 |
| B/16 | 80.4 ± 0.06 | 81.1 ± 0.09 |
| **DeiT** | | |
| S/16 | 80.1 ± 0.03 | 80.4 ± 0.06 |
| B/16 | 81.8 ± 0.03 | 82.0 ± 0.05 |
| **AugReg + 384 × 384 finetune** | | |
| B/32 | 79.0 ± 0.00 | 80.0 ± 0.03 |
| B/16 | 82.2 ± 0.03 | 82.8 ± 0.00 |

| ImageNet-21k (93K steps) | Base | DPN |
|---|---|---|
| Ti/16 | 52.2 ± 0.07 | 53.6 ± 0.07 |
| S/32 | 54.1 ± 0.03 | 56.7 ± 0.03 |
| B/32 | 60.9 ± 0.03 | 63.7 ± 0.03 |
| S/16 | 64.3 ± 0.15 | 65.0 ± 0.06 |
| B/16 | 70.8 ± 0.09 | 72.0 ± 0.03 |
| **930K steps** | | |
| Ti/16 | 61.0 ± 0.03 | 61.2 ± 0.03 |
| S/32 | 63.8 ± 0.00 | 65.1 ± 0.12 |
| B/32 | 72.8 ± 0.03 | 73.1 ± 0.07 |
| S/16 | 72.5 ± 0.1 | 72.5 ± 0.1 |
| B/16 | 78.0 ± 0.06 | 78.4 ± 0.03 |

Table 1: **Top:** ImageNet-1k validation accuracies of five ViT architectures with and without Dual PatchNorm after 93000 steps. **Bottom:** We train ViT models on ImageNet-21k in two training regimes: 93K and 930K steps with a batch size of 4096. The table shows their ImageNet 25-shot accuracies with and without Dual PatchNorm.

## 5.3 Comparison To ViT

In Table 1, DPN improves the accuracy of B/16, the best ViT model, by 0.7, while S/32 obtains the maximum accuracy gain of 1.9. The average gain across all architectures is 1.4. On top of DeiT-S and DeiT-B, DPN provides an improvement of 0.3 and 0.2, respectively.
Further, we finetune B/16 and B/32 models with and without DPN on high-resolution ImageNet (384 × 384) for 5000 steps with a batch size of 512 (see Appendix D for the full hyperparameter setting). Applying DPN improves high-resolution, finetuned B/16 and B/32 by 0.6 and 1.0, respectively. DPN improves all architectures trained on ImageNet-21k (Table 1) and JFT (Table 2) in shorter training regimes, with average gains of 1.7 and 0.8, respectively. In longer training regimes, DPN improves the accuracy of the best-performing architectures on JFT and ImageNet-21k by 0.5 and 0.4, respectively. In three cases, Ti/16 and S/32 with ImageNet-21k and B/16 with JFT, DPN matches or leads to marginally worse results than the baseline. Nevertheless, across a large fraction of ViT models, simply employing DPN out-of-the-box on top of well-tuned ViT baselines leads to significant improvements.

## 5.4 Finetuning On ImageNet With DPN

We finetune four models trained on JFT-4B at two resolutions on ImageNet-1k: (B/32, L/16) × (220K, 1.1M) steps at resolutions 224 × 224 and 384 × 384. On B/32 we observe a consistent improvement across all configurations. With L/16, DPN outperforms the baseline on 3 out of 4 configurations.
| JFT (220K steps) | Base | DPN |
|---|---|---|
| B/32 | 63.8 ± 0.03 | 65.2 ± 0.03 |
| B/16 | 72.1 ± 0.09 | 72.4 ± 0.07 |
| L/16 | 77.3 ± 0.00 | 77.9 ± 0.06 |
| **1.1M steps** | | |
| B/32 | 70.7 ± 0.1 | 71.1 ± 0.09 |
| B/16 | 76.9 ± 0.03 | 76.6 ± 0.03 |
| L/16 | 80.9 ± 0.03 | 81.4 ± 0.06 |

| Arch | Resolution | Steps | Base | DPN |
|---|---|---|---|---|
| B/32 | 224 | 220K | 77.6 ± 0.06 | 78.3 ± 0.00 |
| B/32 | 384 | 220K | 81.3 ± 0.09 | 81.6 ± 0.00 |
| B/32 | 224 | 1.1M | 80.8 ± 0.1 | 81.3 ± 0.00 |
| B/32 | 384 | 1.1M | 83.8 ± 0.03 | 84.1 ± 0.00 |
| L/16 | 224 | 220K | 84.9 ± 0.06 | 85.3 ± 0.03 |
| L/16 | 384 | 220K | 86.7 ± 0.03 | 87.0 ± 0.00 |
| L/16 | 224 | 1.1M | 86.7 ± 0.03 | 87.1 ± 0.00 |
| L/16 | 384 | 1.1M | 88.2 ± 0.00 | 88.3 ± 0.06 |

Table 2: **Top:** We train three ViT models on JFT-4B in two training regimes: 220K and 1.1M steps with a batch size of 4096. The table displays their ImageNet 25-shot accuracies with and without DPN. **Bottom:** Corresponding full finetuning results on ImageNet-1k.

## 6 Experiments On Downstream Tasks

## 6.1 Finetuning On VTAB

We finetune ImageNet-pretrained B/16 and B/32 with and without DPN on the Visual Task Adaptation Benchmark (VTAB) (Zhai et al., 2019). VTAB consists of 19 datasets: 7 Natural, 4 Specialized and 8 Structured. Natural consists of datasets with natural images captured with standard cameras, Specialized contains images captured with specialized equipment, and Structured requires scene comprehension. We use the VTAB training protocol, which defines a standard train split of 800 examples and a validation split of 200 examples per dataset. We perform a lightweight sweep across three learning rates on each dataset and use the mean validation accuracy across three seeds to pick the best model. Appendix E references the standard VTAB finetuning configuration.
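The per-dataset selection step above can be sketched as follows (the learning rates and accuracy values are placeholders, not measured results):

```python
# For each dataset: sweep 3 learning rates, average the validation
# accuracy over 3 seeds, and keep the learning rate with the highest mean.
sweep = {
    1e-3: [0.71, 0.70, 0.72],  # hypothetical per-seed validation accuracies
    3e-4: [0.74, 0.75, 0.73],
    1e-4: [0.69, 0.70, 0.70],
}

def pick_best_lr(sweep):
    mean = lambda accs: sum(accs) / len(accs)
    return max(sweep, key=lambda lr: mean(sweep[lr]))

best_lr = pick_best_lr(sweep)  # -> 3e-4 for the placeholder values above
```

The chosen model is then evaluated once on the test split, reported as a mean over seeds.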
We then report the corresponding mean test score across three seeds in Table 3. In Table 3, accuracies within the 95% confidence interval are not bolded. On Natural, which has datasets closest to the source dataset ImageNet, B/32 and B/16 with DPN significantly outperform the baseline on 7 out of 7 and 6 out of 7 datasets, respectively. Sun397 (Xiao et al., 2010) is the only dataset where applying DPN performs worse. In Appendix F, we additionally show that DPN helps when B/16 is trained from scratch on Sun397. Applying DPN on Structured improves accuracy on 4 out of 8 datasets and remains neutral on 2, for both B/16 and B/32. On Specialized, DPN improves on 1 out of 4 datasets and is neutral on 2. To conclude, DPN offers the biggest improvements when finetuned on Natural. On Structured and Specialized, DPN is a lightweight alternative that can help, or at least not hurt, on a majority of datasets.

| | Caltech101 | CIFAR-100 | DTD | Flowers102 | Pets | Sun397 | SVHN | Camelyon | EuroSAT | Resisc45 | Retinopathy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| B/32 | 87.1 | 53.7 | 56.0 | 83.9 | 87.2 | 32.0 | 76.8 | 77.9 | 94.8 | 78.2 | 71.2 |
| + DPN | 87.7 | 58.1 | 60.7 | 86.4 | 88.0 | 35.4 | 80.3 | 78.5 | 95.0 | 81.6 | 70.3 |
| B/16 | 86.1 | 35.5 | 60.1 | 90.8 | 90.9 | 33.9 | 76.7 | 81.3 | 95.9 | 81.2 | 74.7 |
| + DPN | 86.6 | 51.4 | 63.1 | 91.3 | 92.1 | 32.5 | 78.3 | 80.6 | 95.8 | 83.5 | 73.3 |

| | Clevr-Count | Clevr-Dist | DMLab | dSpr-Loc | dSpr-Ori | KITTI-Dist | sNORB-Azim | sNORB-Elev |
|---|---|---|---|---|---|---|---|---|
| B/32 | 58.3 | 52.6 | 39.2 | 71.3 | 59.8 | 73.6 | 20.7 | 47.2 |
| + DPN | 62.5 | 55.5 | 40.7 | 60.8 | 61.6 | 73.4 | 20.9 | 34.4 |
| B/16 | 65.2 | 59.8 | 39.7 | 72.1 | 61.9 | 81.3 | 18.9 | 50.4 |
| + DPN | 73.7 | 48.3 | 41.0 | 72.4 | 63.0 | 80.6 | 21.6 | 36.2 |

Table 3: We evaluate DPN
on VTAB (Zhai et al., 2019). When finetuned on Natural, B/32 and B/16 with DPN significantly outperform the baseline on 7 out of 7 and 6 out of 7 datasets, respectively. On Structured, DPN improves both B/16 and B/32 on 4 out of 8 datasets and remains neutral on 2. On Specialized, DPN improves on 1 out of 4 datasets and is neutral on 2.

## 6.2 Contrastive Learning

We apply DPN to image-text contrastive learning (Radford et al., 2021). Each minibatch consists of a set of image and text pairs. We train a text and image encoder to map an image to its correct text over all other texts in a minibatch. Specifically, we adopt LiT (Zhai et al., 2022b), where we initialize and freeze the image encoder from a pretrained checkpoint and train the text encoder from scratch. To evaluate zero-shot ImageNet accuracy, we represent each ImageNet class by its text label, which the text encoder maps into a class embedding. For a given image embedding, the prediction is the class corresponding to the nearest class embedding. We evaluate four frozen image encoders: two architectures (B/32 and L/16) trained with two schedules (220K and 1.1M steps). We reuse standard hyperparameters and train only the text encoder using a contrastive loss for 55000 steps with a batch size of 16384. Table 4 shows that on B/32, DPN improves over the baselines in both setups, while on L/16, DPN provides an improvement when the image encoder is trained with the shorter training schedule.

## 6.3 Semantic Segmentation

We finetune ImageNet-pretrained B/16 with and without DPN on the ADE-20K 512×512 (Zhou et al., 2019) semantic segmentation task. Following Strudel et al. (2021), a single dense layer maps the ViT features into per-patch output logits. A bilinear upsampling layer then transforms the output distribution into the final high-resolution 512×512 semantic segmentation output.
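The decoding step described above can be sketched in NumPy (shapes are illustrative; nearest-neighbor upsampling stands in for the bilinear layer used in the paper):

```python
import numpy as np

def segment_logits(features, w, grid, patch):
    """Map per-patch ViT features to a dense per-pixel logit map.

    features: (grid*grid, D) per-patch ViT outputs; w: (D, num_classes)
    weights of the single dense layer. The (grid, grid) logit map is
    upsampled by `patch` in each spatial dimension (nearest-neighbor
    here for brevity; the paper upsamples bilinearly).
    """
    num_classes = w.shape[1]
    logits = features @ w                             # per-patch class logits
    logits = logits.reshape(grid, grid, num_classes)  # back to the patch grid
    return logits.repeat(patch, axis=0).repeat(patch, axis=1)

rng = np.random.default_rng(0)
grid, patch, D, C = 4, 16, 8, 5  # ViT-B/16 on 512x512 would use grid=32, D=768
feats = rng.normal(size=(grid * grid, D))
w = rng.normal(size=(D, C))
out = segment_logits(feats, w, grid, patch)
assert out.shape == (grid * patch, grid * patch, C)
```

Each patch-sized block of the output shares one logit vector, which the per-pixel cross-entropy loss then supervises densely.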
We finetune the entire ViT backbone with a standard per-pixel cross-entropy loss. Appendix G specifies the full set of finetuning hyperparameters. Table 5 reports the mean IoU across ten random seeds and for different fractions of training data. The improvement in IoU is consistent across all setups.

| Arch | Steps | Base | DPN |
|---|---|---|---|
| B/32 | 220K | 61.9 ± 0.12 | 63.0 ± 0.09 |
| B/32 | 1.1M | 67.4 ± 0.07 | 68.0 ± 0.09 |
| L/16 | 220K | 75.0 ± 0.11 | 75.4 ± 0.00 |
| L/16 | 1.1M | 78.7 ± 0.05 | 78.7 ± 0.1 |

Table 4: Zero-shot ImageNet accuracy in the LiT (Zhai et al., 2022b) contrastive learning setup.

| Fraction of Train Data | 1/16 | 1/8 | 1/4 | 1/2 | 1 |
|---|---|---|---|---|---|
| B/16 | 27.3 ± 0.09 | 32.6 ± 0.09 | 36.9 ± 0.13 | 40.8 ± 0.1 | 45.6 ± 0.08 |
| + DPN | 28.0 ± 0.21 | 33.7 ± 0.11 | 38.0 ± 0.11 | 41.9 ± 0.09 | 46.1 ± 0.11 |

Table 5: We finetune ImageNet-pretrained B/16 models with and without DPN on the ADE20K semantic segmentation task, when a varying fraction of the ADE20K training data is available. The table reports the mean IoU across ten random seeds. Applying DPN improves IoU across all settings.

## 7 Ablations

| | B/16 | S/16 | B/32 | S/32 | Ti/16 |
|---|---|---|---|---|---|
| Pre | -0.1 | 0.0 | -2.6 | -0.2 | -0.3 |
| Post | 0.0 | -0.2 | -0.5 | -0.7 | -1.1 |
| Post PosEmb | 0.0 | -0.1 | -0.4 | -0.9 | -1.1 |
| Only learnable | -0.8 | -0.9 | -1.2 | -1.6 | -1.6 |
| RMSNorm | 0.0 | -0.1 | -0.4 | -0.5 | -1.7 |
| No learnable | -0.5 | 0.0 | -0.2 | -0.1 | -0.1 |

Is normalizing both the inputs and outputs of the embedding layer optimal? In Eq. 4, DPN applies LN to both the inputs and outputs of the embedding layer. We assess three alternate strategies: **Pre**, **Post** and **Post PosEmb** (Radford et al., 2021).
**Pre** applies LayerNorm only to the inputs, **Post** only to the outputs, and **Post PosEmb** to the outputs after they are summed with positional embeddings. Table 6 displays the accuracy gains of these alternate strategies: **Pre** is unstable on B/32, leading to a significant drop in accuracy. Additionally, **Pre** incurs minor drops in accuracy on S/32 and Ti/16. **Post** and **Post PosEmb** achieve worse performance on the smaller models B/32, S/32 and Ti/16. Our experiments show that applying LayerNorm to both the inputs and outputs of the embedding layer is necessary to obtain consistent improvements in accuracy across all ViT variants.

Table 6: Ablations of various components of DPN. **Pre:** LayerNorm only on the inputs of the embedding layer. **Post:** LayerNorm only on the outputs of the embedding layer. **No learnable:** Per-patch normalization without learnable LayerNorm parameters. **Only learnable:** Learnable scales and shifts without standardization.

Normalization vs. Learnable Parameters: As seen in Sec. 3.2, LayerNorm comprises a normalization operation followed by learnable scales and shifts. We also ablate the effect of each of these operations in DPN. Applying only learnable scales and shifts without normalization leads to a significant decrease in accuracy across all architectures (see **Only learnable** in Table 6). Additionally, removing the learnable parameters leads to unstable training on B/16 (**No learnable** in Table 6). Finally, removing the centering and bias parameters, as done in **RMSNorm** (Zhang & Sennrich, 2019), reduces the accuracy of B/32, S/32 and Ti/16. We conclude that while both normalization and learnable parameters contribute to the success of DPN, normalization has a higher impact.

## 8 Analysis

## 8.1 Gradient Norm Scale

![7_image_0.png](7_image_0.png)

Figure 2: Gradient norms with and without DPN in B/16. **Left:** Gradient norm vs. depth. **Right:** Gradient norm of the embedding layer vs. number of steps.
We report per-layer gradient norms with and without DPN on B/16. Fig. 2 (left) plots the mean gradient norm of the last 1000 training steps as a function of depth, with and without DPN. Interestingly, the gradient norm of the base ViT patch embedding (black) is disproportionately large compared to the other layers. Applying DPN (red), on the other hand, scales down the gradient norm of the embedding layer. Fig. 2 (right) additionally shows that the gradient norm of the embedding layer is reduced not only near convergence but throughout the course of training. This property is consistent across ViT architectures of different sizes (Appendix H).

## 8.2 Visualizing Scale Parameters

Note that the first LayerNorm in Eq. 4 is applied directly on patches, that is, to raw pixels. Thus, the learnable parameters (biases and scales) of the first LayerNorm can be visualized directly in pixel space. Fig. 3 shows the scales of our smallest and largest models: Ti/16 trained on ImageNet for 90000 steps and L/16 trained on JFT for 1.1M steps, respectively. Since the absolute magnitude of the scale parameters varies across the R, G and B channels, we visualize the scales separately for each channel. Interestingly, for both models the scale parameters increase the weight of the pixels at the center of the patch and at the corners.

## 9 Conclusion

We propose a simple modification to vanilla ViT models and show its efficacy on classification, contrastive learning, semantic segmentation, and transfer to small classification datasets.

![8_image_0.png](8_image_0.png)

Figure 3: Visualization of the scale parameters of the first LayerNorm. **Top:** Ti/16 trained on ImageNet-1k. **Bottom:** L/16 trained on JFT-4B.

## References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. *arXiv preprint arXiv:1607.06450*, 2016.

Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. *ICLR*, 2019.
Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim Alabdulmohsin, and Filip Pavetic. Flexivit: One model for all patch sizes. *arXiv preprint arXiv:2212.08013*, 2022a. Lucas Beyer, Xiaohua Zhai, and Alexander Kolesnikov. Better plain vit baselines for imagenet-1k. *arXiv* preprint arXiv:2205.01580, 2022b. Lucas Beyer, Xiaohua Zhai, and Alexander Kolesnikov. Big vision. https://github.com/ google-research/big_vision, 2022c. Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. Randaugment: Practical automated data augmentation with a reduced search space. *Advances in Neural Information Processing Systems*, 33: 18613–18624, 2020. Mostafa Dehghani, Alexey Gritsenko, Anurag Arnab, Matthias Minderer, and Yi Tay. Scenic: A jax library for computer vision research and beyond. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21393–21398, 2022. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations*, 2020. Stéphane d'Ascoli, Hugo Touvron, Matthew L Leavitt, Ari S Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. In *International Conference on* Machine Learning, pp. 2286–2296. PMLR, 2021. Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Dong Gu Lee, Wonseok Jeong, and Sang Woo Kim. Improved robustness of vision transformer via prelayernorm in patch embedding. *arXiv preprint arXiv:2111.08413*, 2021. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. 
In *Proceedings of the IEEE/CVF* international conference on computer vision, pp. 10012–10022, 2021. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In *International conference on machine learning*, pp. 8748–8763. PMLR, 2021. Alex Rogozhnikov. Einops: Clear and reliable tensor manipulations with einstein-like notation. In *International Conference on Learning Representations*, 2022. URL https://openreview.net/forum?id= oapKSVM2bcj. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015. Sam Shleifer, Jason Weston, and Myle Ott. Normformer: Improved transformer pretraining with extra normalization. *arXiv preprint arXiv:2110.09456*, 2021. Andreas Peter Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? data, augmentation, and regularization in vision transformers. *Transactions* on Machine Learning Research, 2022. URL https://openreview.net/forum?id=4nPswr1KcP. Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 7262– 7272, 2021. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In *International conference* on machine learning, pp. 10347–10357. PMLR, 2021. Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, et al. Foundation transformers. 
*arXiv preprint arXiv:2210.06423*, 2022. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 568–578, 2021. Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Largescale scene recognition from abbey to zoo. In *2010 IEEE computer society conference on computer vision* and pattern recognition, pp. 3485–3492. IEEE, 2010. Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, and Ross Girshick. Early convolutions help transformers see better. *Advances in Neural Information Processing Systems*, 34:30392–30400, 2021. Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In *International* Conference on Machine Learning, pp. 10524–10533. PMLR, 2020. Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. Understanding and improving layer normalization. *Advances in Neural Information Processing Systems*, 32, 2019. Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. A large-scale study of representation learning with the visual task adaptation benchmark. *arXiv preprint arXiv:1910.04867*, 2019. Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. In *CVPR*, 2022a. Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18123–18133, 2022b. Biao Zhang and Rico Sennrich. 
Root mean square layer normalization. *Advances in Neural Information Processing Systems*, 32, 2019.

Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *arXiv preprint arXiv:1710.09412*, 2017.

Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ADE20K dataset. *International Journal of Computer Vision*, 127:302–321, 2019.

## A Initial Project Idea

We arrived at the Dual PatchNorm solution because of another project that explored adding whitened (decorrelated) patches to ViT. Our initial prototype had a LayerNorm right after the decorrelated patches to ensure that they are of an appropriate scale. This led to improvements across multiple benchmarks, suggesting that whitened patches can improve image classification. We later found out via ablations that just the LayerNorm is sufficient at the inputs, and that adding whitened patches on their own could degrade performance. Our paper highlights the need for rigorous ablations of complicated algorithms to arrive at simpler solutions that can be equally or even more effective.

## B Code

We perform all our experiments in the big_vision (Beyer et al., 2022c) and Scenic (Dehghani et al., 2022) libraries. Since the first LayerNorm of DPN is applied directly on pixels, we replace the first convolution with a patchify operation implemented with the einops (Rogozhnikov, 2022) library and a dense projection.

## C ViT AugReg: Training Configurations

```python
import big_vision.configs.common as bvcc
from big_vision.configs.common_fewshot import get_fewshot_lsr
import ml_collections as mlc


RANDAUG_DEF = {
    'none': '',
    'light1': 'randaug(2,0)',
    'light2': 'randaug(2,10)',
    'medium1': 'randaug(2,15)',
    'medium2': 'randaug(2,15)',
    'strong1': 'randaug(2,20)',
    'strong2': 'randaug(2,20)',
}

MIXUP_DEF = {
    'none': dict(p=0.0, fold_in=None),
    'light1': dict(p=0.0, fold_in=None),
    'light2': dict(p=0.2, fold_in=None),
    'medium1': dict(p=0.2, fold_in=None),
    'medium2': dict(p=0.5, fold_in=None),
    'strong1': dict(p=0.5, fold_in=None),
    'strong2': dict(p=0.8, fold_in=None),
}


def get_config(arg=None):
  """Config for training."""
  arg = bvcc.parse_arg(arg, variant='B/32', runlocal=False, aug='')
  config = mlc.ConfigDict()

  config.pp_modules = ['ops_general', 'ops_image']
  config.init_head_bias = -6.9
  variant = 'B/16'

  aug_setting = arg.aug or {
      'Ti/16': 'light1',
      'S/32': 'medium1',
      'S/16': 'medium2',
      'B/32': 'medium2',
      'B/16': 'medium2',
      'L/16': 'medium2',
  }[variant]

  config.input = dict()
  config.input.data = dict(
      name='imagenet2012',
      split='train[:99%]',
  )
  config.input.batch_size = 4096
  config.input.cache_raw = True
  config.input.shuffle_buffer_size = 250_000

  pp_common = (
      '|value_range(-1, 1)'
      '|onehot(1000, key="{lbl}", key_result="labels")'
      '|keep("image", "labels")'
  )

  config.input.pp = (
      'decode_jpeg_and_inception_crop(224)|flip_lr|' +
      RANDAUG_DEF[aug_setting] +
      pp_common.format(lbl='label')
  )
  pp_eval = 'decode|resize_small(256)|central_crop(224)' + pp_common
  config.input.prefetch = 8

  config.num_classes = 1000
  config.loss = 'sigmoid_xent'
  config.total_epochs = 300
  config.log_training_steps = 50
  config.ckpt_steps = 1000

  # Model section
  config.model_name = 'vit'
  config.model = dict(
      variant=variant,
      rep_size=True,
      pool_type='tok',
      dropout=0.1,
      stoch_depth=0.1,
      stem_ln='dpn')

  # Optimizer section
  config.grad_clip_norm = 1.0
  config.optax_name = 'scale_by_adam'
  config.optax = dict(mu_dtype='bfloat16')

  config.lr = 0.001
  config.wd = 0.0001
  config.seed = 0
  config.schedule = dict(warmup_steps=10_000, decay_type='cosine')

  config.mixup = MIXUP_DEF[aug_setting]

  # Eval section
  def get_eval(split, dataset='imagenet2012'):
    return dict(
        type='classification',
        data=dict(name=dataset, split=split),
        pp_fn=pp_eval.format(lbl='label'),
        loss_name=config.loss,
        log_steps=2500,
        cache_final=not arg.runlocal,
    )
  config.evals = {}
  config.evals.train = get_eval('train[:2%]')
  config.evals.minival = get_eval('train[99%:]')
  config.evals.val = get_eval('validation')
  return config
```

AugReg Recipe: B/16. For smaller models (S/32, Ti/16, and S/16), as per the AugReg recipe, we switch off stochastic depth and dropout. For S/32, we also set the representation size to false.

## D ViT AugReg: High-Resolution Finetuning

```python
import ml_collections as mlc


def get_config(runlocal=False):
  """Config for adaptation on imagenet."""
  config = mlc.ConfigDict()

  config.loss = 'sigmoid_xent'
  config.num_classes = 1000
  config.total_steps = 5000
  config.pp_modules = ['ops_general', 'ops_image']

  config.seed = 0
  config.input = {}
  config.input.data = dict(
      name='imagenet2012',
      split='train[:99%]',
  )
  config.input.batch_size = 512 if not runlocal else 8
  config.input.shuffle_buffer_size = 50_000 if not runlocal else 100
  config.input.cache_raw = True
  variant = 'B/32'

  pp_common = (
      'value_range(-1, 1)|'
      'onehot(1000, key="{lbl}", key_result="labels")|'
      'keep("image", "labels")'
  )
  config.input.pp = (
      'decode_jpeg_and_inception_crop(384)|flip_lr|' +
      pp_common.format(lbl='label')
  )
  pp_eval = 'decode|resize_small(418)|central_crop(384)|' + pp_common

  config.log_training_steps = 10
  config.ckpt_steps = 1000

  config.model_name = 'vit'
  config.model_init = 'low_res/path'
  config.model = dict(variant=variant, pool_type='tok', stem_ln='dpn', rep_size=True)

  config.model_load = dict(dont_load=['head/kernel', 'head/bias'])

  # Optimizer section
  config.optax_name = 'big_vision.momentum_hp'
  config.grad_clip_norm = 1.0
  config.wd = None
  config.lr = 0.03
  config.schedule = dict(
      warmup_steps=500,
      decay_type='cosine',
  )

  # Eval section
  def get_eval(split, dataset='imagenet2012'):
    return dict(
        type='classification',
        data=dict(name=dataset, split=split),
        pp_fn=pp_eval.format(lbl='label'),
        loss_name=config.loss,
        log_steps=2500,
        cache_final=not runlocal,
    )
  config.evals = {}
  config.evals.train = get_eval('train[:2%]')
  config.evals.minival = get_eval('train[99%:]')
  config.evals.val = get_eval('validation')

  return config
```

High Resolution Finetuning

## E VTAB Finetuning

```python
from ml_collections import ConfigDict


def get_config():
  """Config for adaptation on VTAB."""
  config = ConfigDict()

  config.loss = 'sigmoid_xent'
  config.num_classes = 0
  config.total_steps = 2500
  config.pp_modules = ['ops_general', 'ops_image', 'proj.vtab.pp_ops']

  config.seed = 0
  config.input = dict()
  config.input.data = dict(
      name='',
      split='train[:800]',
  )
  config.input.batch_size = 512
  config.input.shuffle_buffer_size = 50_000
  config.input.cache_raw = False

  config.input.pp = ''
  config.log_training_steps = 10
  config.log_eval_steps = 100
  config.ckpt_steps = 1000
  config.ckpt_timeout = 1

  config.prefetch_to_device = 2
```

## F SUN397: Train From Scratch

On SUN397, applying DPN improves ViT models trained from scratch. We first search for an optimal hyperparameter setting across three learning rates (1e-3, 3e-4, 1e-4), two weight decays (0.03, 0.1), and two dropout values (0.0, 0.1). We then searched across three mixup values (0.0, 0.2, 0.5) and four RandAugment distortion magnitudes (0, 5, 10, 15). We train the final config for 600 epochs.

|                | B/32 Base | B/32 DPN | B/16 Base | B/16 DPN |
|----------------|-----------|----------|-----------|----------|
| Baseline       | 41.4      | 47.5     | 45.6      | 51.8     |
| + Augmentation | 48.3      | 50.7     | 58.7      | 63.0     |
| + Train Longer | 52.5      | 56.0     | 60.8      | 66.3     |

Table 7: SUN397 train from scratch. **Left:** B/32 and **Right:** B/16.

## G Semantic Segmentation Hyperparameters

```python
def get_config():
  """Returns the base experiment configuration for Segmentation on ADE20k."""
  config = ml_collections.ConfigDict()
  config.experiment_name = 'linear_decoder_semseg_ade20k'

  # Dataset.
  config.dataset_name = 'semseg_dataset'
  config.dataset_configs = ml_collections.ConfigDict()
  config.dataset_configs.name = 'ade20k'
  config.dataset_configs.use_coarse_training_data = False
  config.dataset_configs.train_data_pct = 100
  mean_std = '[0.485, 0.456, 0.406], [0.229, 0.224, 0.225]'
  common = (
      '|standardize(' + mean_std + ', data_key="inputs")'
      '|keep("inputs", "label")')
  config.dataset_configs.pp_train = (
      'mmseg_style_resize(img_scale=(2048, 512), ratio_range=(0.5, 2.0))'
      '|random_crop_with_mask(size=512, cat_max=0.75, ignore_label=0)'
      '|flip_with_mask'
      '|squeeze(data_key="label")'
      '|photometricdistortion(data_key="inputs")') + common
  config.dataset_configs.max_size_train = 512
  config.dataset_configs.pp_eval = (
      'squeeze(data_key="label")') + common
  config.dataset_configs.pp_test = (
      'multiscaleflipaug(data_key="inputs")'
      '|squeeze(data_key="label")') + common

  # Model.
  version, patch = VARIANT.split('/')
  config.model = ml_collections.ConfigDict()
  config.model.hidden_size = {'Ti': 192, 'S': 384, 'B': 768, 'L': 1024, 'H': 1280}[version]
  config.model.patches = ml_collections.ConfigDict()
  config.model.patches.size = [int(patch), int(patch)]
  config.model.num_heads = {'Ti': 3, 'S': 6, 'B': 12, 'L': 16, 'H': 16}[version]
  config.model.mlp_dim = {'Ti': 768, 'S': 1536, 'B': 3072, 'L': 4096, 'H': 5120}[version]
  config.model.num_layers = {'Ti': 12, 'S': 12, 'B': 12, 'L': 24, 'H': 32}[version]
  config.model.attention_dropout_rate = 0.0
  config.model.dropout_rate = 0.0
  config.model.dropout_rate_last = 0.0
  config.model.stochastic_depth = 0.1
  config.model_dtype_str = 'float32'
  config.model.pos_interpolation_method = 'bilinear'
  config.model.pooling = 'tok'
  config.model.concat_backbone_output = False
  config.pretrained_path = ''
  config.pretrained_name = 'dpn_b16'
  config.model.posembs = (32, 32)  # 512 / 16
  config.model.positional_embedding = 'learned'
  config.model.upernet = False
  config.model.fcn = True
  config.model.auxiliary_loss = -1
  config.model.out_with_norm = False
  config.model.use_batchnorm = False
  config.model.dpn = True

  # Trainer.
  config.trainer_name = 'segmentation_trainer'
  config.eval_only = False
  config.oracle_eval = False
  config.window_stride = 341

  # Optimizer.
  config.optimizer = 'adamw'
  config.weight_decay = 0.01
  config.freeze_backbone = False
  config.layerwise_decay = 0.
  config.skip_scale_and_bias_regularization = True
  config.optimizer_configs = ml_collections.ConfigDict()

  config.batch_size = 16
  config.num_training_epochs = 128
  config.max_grad_norm = None
  config.label_smoothing = None
  config.class_rebalancing_factor = 0.0
  config.rng_seed = 0

  # Learning rate.
  config.steps_per_epoch = 20210 // config.batch_size
  config.total_steps = config.num_training_epochs * config.steps_per_epoch
  config.lr_configs = ml_collections.ConfigDict()
  config.lr_configs.learning_rate_schedule = 'compound'
  config.lr_configs.factors = 'constant * polynomial * linear_warmup'
  config.lr_configs.warmup_steps = 0
  config.lr_configs.decay_steps = config.total_steps
  config.lr_configs.base_learning_rate = 0.00003
  config.lr_configs.end_factor = 0.
  config.lr_configs.power = 0.9
  return config
```

Semantic Segmentation Config

```python
  # Model.
  config.model_name = 'vit'
  stem_ln = 'dpn'
  variant = 'B/32'

  config.model_init = model_inits[variant][stem_ln]
  config.model = dict(
      variant=variant,
      rep_size=True,
      pool_type='tok',
      stem_ln=stem_ln)
  config.model_load = dict(dont_load=['head/kernel', 'head/bias'])

  # Optimizer section
  config.optax_name = 'big_vision.momentum_hp'
  config.grad_clip_norm = 1.0
  config.wd = None
  config.lr = 0.0003
  config.ckpt_timeout = 3600
  config.schedule = dict(
      warmup_steps=200,
      decay_type='cosine',
  )

  return config
```

High Resolution Finetuning

## H Gradient Norm Scale

![16_image_0.png](16_image_0.png)

Figure 4: Gradient Norm vs. Depth. **Left:** B/32. **Center:** S/32. **Right:** S/16.
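To make the stem modification concrete, below is a minimal NumPy sketch of the Dual PatchNorm stem described in Appendix B: patchify, LayerNorm, dense projection, LayerNorm. The image shapes, the random projection matrix, and the pure-NumPy patchify are illustrative stand-ins for the actual einops/big_vision implementation.

```python
import numpy as np

def layernorm(x, eps=1e-6):
    """LayerNorm over the last axis (unit gain, zero bias for simplicity)."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def dpn_stem(images, proj, p=4):
    """Dual PatchNorm stem: patchify -> LayerNorm -> dense projection -> LayerNorm.

    images: (batch, H, W, C); proj: (p*p*C, d) projection matrix.
    The reshape/transpose below mirrors the einops pattern
    rearrange(images, 'b (h p1) (w p2) c -> b (h w) (p1 p2 c)', p1=p, p2=p).
    """
    b, h, w, c = images.shape
    patches = images.reshape(b, h // p, p, w // p, p, c)
    patches = patches.transpose(0, 1, 3, 2, 4, 5).reshape(b, -1, p * p * c)
    return layernorm(layernorm(patches) @ proj)

rng = np.random.default_rng(0)
imgs = rng.normal(size=(2, 8, 8, 3))     # two tiny 8x8 RGB "images"
proj = rng.normal(size=(4 * 4 * 3, 16))  # patch dim 48 -> embedding dim 16
tokens = dpn_stem(imgs, proj)
print(tokens.shape)                      # (2, 4, 16): 4 patches per image
```

Since the first LayerNorm acts directly on raw patch pixels, the convolutional stem of a standard ViT is equivalently expressed as this patchify-then-project form, which is why the first convolution is replaced in the actual implementation.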
Review 1:

Summary: This paper proposes a Dual PatchNorm (DPN) technique to facilitate the effective training of vision Transformers. Specifically, two Layer Normalization layers (LayerNorms) are applied before and after the patch embedding layer in Vision Transformers. Results on ImageNet and JFT demonstrate the effectiveness of DPN.

Strengths and Weaknesses:

Strengths: 1) DPN is simple and effective.

Weaknesses: 1) Given the simplicity of DPN, I think the authors should conduct extensive experiments to validate its effectiveness, such that the contributions of this paper will be sufficient to be published. However, the following results are absent. Personally, I'll lean toward accepting this paper if at least two of them are provided, and the results are solid.
- Results on top of DeiT (Training data-efficient image transformers & distillation through attention, ICML'21), a widely-used, state-of-the-art pipeline for training ViTs efficiently. Providing the results with DeiT III (ECCV'22) would be more impressive.
- Results on multi-stage ViTs (e.g., Swin or PVT) or ConvNets (e.g., ConvNeXt).
- Results on downstream tasks (e.g., detection and segmentation on COCO).
2) In addition, this paper may lack some necessary insights. I'm pretty curious about why DPN yields significant improvements. It seems that the current paper does not answer this question. The authors may consider visualizing the scale of gradients in ViTs.
3) In Tables 2 and 3, the gains of DPN are not significant for relatively large models.

Requested Changes: See the weaknesses above.

Broader Impact Concerns: I have no concerns about the broader impacts and the ethical implications.

==================================================

Review 2:

Summary: The authors propose Dual PatchNorm (DPN), a lightweight extension to common transformer-based vision models that adds LayerNorm layers before and after patch encoding.
The work provides some empirical evidence that DPN improves classification performance on image classification and contrastive learning.

Strengths and Weaknesses: The paper provides a lightweight, easy-to-implement extension for transformer-based vision models showing some performance improvement, complementing the toolbox of techniques for designing and training transformer-based models. The paper is well written and structured.

My major criticism is that there is no motivation / argument for the specific DPN besides the empirical results. Maybe the authors could emphasize the cases in which they improved performance and lay out some argument / rationale for why DPN improves performance. Do the authors see any restrictions of their approach, e.g., considering other tasks (NLP)?

Requested Changes: Considering the claims of the paper, the work is already in a good state. Addressing my raised concerns would strengthen the work.

Broader Impact Concerns: no concerns

==================================================

Review 3:

Summary: The paper proposes a simple modification to the normalization strategy used for the patch embedding layer in Vision Transformers (ViTs). The proposed recipe, DualPatchNorm or DPN, is simple: Layer Normalization layers (LayerNorm) are applied before and after the patch embedding layer. The authors show that this modification helps in many scenarios for supervised classification on ImageNet and contrastive learning.

Strengths and Weaknesses:

Strengths:
* The paper is clearly written and straightforward. It shows pseudocode and reports gains in many cases
* The gains are across different architecture sizes

Weaknesses:
* It only presents results on ImageNet (and a non-public proprietary dataset) and for testing on the training classes (apart from the non-reproducible JFT finetuning experiment).
No transfer experiments are shown, e.g., to other smaller datasets with classes beyond the ones in the training set, and no experiments for ViTs beyond classification
* This is an empirical study with no effort to try to explore or understand where the gains are coming from (beyond a visualization of the scale parameter on the first layer)

Requested Changes:
* **Transfer learning experiments:** Using the learned models for transfer experiments would show if the added robustness/performance gains that DPN offers persist when generalizing to other datasets (e.g., by fine-tuning, or by freezing the model and learning classifiers)
* **Tables with hyperparameters:** To be more reproducible, tables with all model and optimization hyperparameters used for the experiments (on ImageNet-1k and ImageNet-21k) are needed (e.g., something similar to Tab. 11 in the latest arXiv version of the MAE paper [He et al 2022])
* Clarify "zero-shot ImageNet accuracy" for the contrastive learning experiments - does this mean 1-NN retrieval? Please add more info on the experimental setup
* The visualization of the scale parameter on the first layer is not really explored more deeply - why are the corner patches so important? Why is there such variance across color channels?
* Experiments on other tasks: It would be great to see if this also transfers to ViTs for other tasks, e.g., object detection or semantic segmentation, where ViTs are now used.

[He et al 2022] Masked Autoencoders Are Scalable Vision Learners

Broader Impact Concerns: N/A

==================================================

Metareview:

Recommendation: Accept as is

Comment: This paper proposes a Dual PatchNorm to train vision transformers. It is a simple but effective method, i.e., inserting layer normalization layers before and after the patch embedding layer. Extensive experiments are conducted to investigate the method's effectiveness.
In the rebuttal phase, the authors added more explanations and investigated Dual PatchNorm on more tasks. All the reviewers are satisfied with the responses and recommend accepting this manuscript. The claims in this paper are supported by convincing experiments, and many individuals will be interested in it. Overall, I recommend accepting this paper.

==================================================
# Bayesian Causal Bandits With Backdoor Adjustment Prior

Jireh Huang *jirehhuang@ucla.edu*
Department of Statistics
University of California, Los Angeles

Qing Zhou *zhou@stat.ucla.edu*
Department of Statistics
University of California, Los Angeles

Reviewed on OpenReview: *https://openreview.net/forum?id=sMsGv5Kfm3*

## Abstract

The causal bandit problem setting is a sequential decision-making framework where actions of interest correspond to interventions on variables in a system assumed to be governed by a causal model. The underlying causality may be exploited when investigating actions in the interest of optimizing the yield of the reward variable. Most existing approaches assume prior knowledge of the underlying causal graph, which is in practice restrictive and often unrealistic. In this paper, we develop a novel Bayesian framework for tackling causal bandit problems that does not rely on possession of the causal graph, but rather simultaneously learns the causal graph while exploiting causal inferences to optimize the reward. Our methods efficiently utilize joint inferences from interventional and observational data in a unified Bayesian model constructed with intervention calculus and causal graph learning. For the implementation of our proposed methodology in the discrete distributional setting, we derive an approximation of the sampling variance of the backdoor adjustment estimator. In the Gaussian setting, we characterize the interventional variance with intervention calculus and propose a simple graphical criterion to share information between arms. We validate our proposed methodology in an extensive empirical study, demonstrating compelling cumulative regret performance against state-of-the-art standard algorithms as well as optimistic implementations of their causal variants that assume strong prior knowledge of the causal structure.
## 1 Introduction

The multi-armed bandit (MAB) problem is a well-known sequential allocation framework for experimental investigations (Berry & Fristedt, 1985). Classically, the MAB problem formulation features an action set $\mathcal{A}$ consisting of $|\mathcal{A}| = K$ actions, also called arms, typically corresponding to interventions. Each arm $a \in \mathcal{A}$ defines a real-valued distribution for the reward signal, with expected reward $\mu_a$. The objective of an allocation policy is to sequentially pick arms in a manner that maintains a balance between exploration and exploitation in the interest of identifying and obtaining the greatest reward. Maximally and effectively utilizing all available information is imperative, especially when investigating interventions that are resource-demanding, time-consuming, or both.

Lattimore et al. (2016) proposed the causal bandit (CB) problem setting, wherein a non-trivial probabilistic causal model is assumed to govern the distribution of the reward variable and its covariates (Pearl, 2000). The addition of causal assumptions introduces avenues by which interventional distributions may be inferred from observational distributions and information may be shared between arms. Most works addressing the CB problem exploit strong assumptions as to prior knowledge of the underlying causal model to achieve improvements over standard MAB algorithms. In this work, we develop a Bayesian CB framework that does not require prior knowledge of the underlying causal structure, but instead efficiently utilizes previously available observational data and acquired interventional data to inform exploitation and guide exploration. For illustrative purposes, we borrow and adapt the farming example described in Lattimore et al. (2016) as an illuminating motivating example of the problem setting of interest and the surrounding challenges.
Suppose a farmer wishes to optimize the yield of a certain crop, which she knows is only dependent on temperature, a particular soil nutrient, and moisture level. While she understands that crop yield is somehow affected by these factors, the underlying causality governing this system of four variables (including crop yield) is unknown to her. The farmer's resource limitations restrict her to intervening on at most one factor in each crop season by adjusting the temperature, controlling the soil nutrient content, or regulating moisture level. These experimental interventions are costly to perform, and each realization of the interventional data can only be observed once a season. Hence, it is in the farmer's best interest to leverage her historical logs containing observational data accrued from previous seasons where no interventions were performed, but rather the variables were passively observed as they naturally varied from season to season. In this paper, we propose a framework by which the farmer may aggregate and synthesize the available evidence to optimize resource allocation to attain the highest yield.

## 1.1 Related Work

In its original formulation by Lattimore et al. (2016), the CB problem presupposes knowledge of the underlying causal graph. Accordingly, most proposed CB algorithms require knowledge of the causal graph structure (Lattimore et al., 2016; Lee & Bareinboim, 2018; Maiti et al., 2021; Yabe et al., 2018), and some additionally assume certain model parameters are given (Lu et al., 2020; Nair et al., 2021). Furthermore, many approaches are dependent on some restrictive form or class of graphs. These assumptions are restrictive and often unrealistic in practice. More recently, Lu et al. (2021) proposed a central node approach based on the work of Greenewald et al. (2019) that does not assume prior knowledge of the causal graph, but rather asymptotic knowledge of the observational distribution.
Their approach is restrictive in terms of structural and distributional assumptions, and while it is generally reasonable to assume that observational data is much more accessible than interventional data (Greenewald et al., 2019), the large-sample observational setting is not often realistic. de Kroon et al. (2022) proposed an estimator using separating sets to share information between arms without assuming prior knowledge of nor requiring discovery of the causal graph. Their methodology makes no attempt to learn the causal graph, and makes use of observational data only to strengthen conditional independence testing to identify separating sets. Relevant to our work is intervention calculus, a set of inference rules proposed by Pearl (2000) that defines avenues by which interventional probabilities may be estimated from observational data. In the CB setting, Lattimore et al. (2016) and Nair et al. (2021) consider graph structures with no confounding such that the interventional distributions are equivalent to conditional distributions, and Maiti et al. (2021) proposed a consistent estimator for the expected reward for discrete variables using both interventional and observational data in the presence of confounding. Our work extends the Bayesian model averaging approach proposed by Pensar et al. (2020) wherein the possible causal effect estimates are averaged across an observational posterior distribution of graphs.

Bayesian approaches to the MAB problem have received significant attention in the past decade. Russo & Van Roy (2014) translated existing regret bounds of algorithms based on optimism in the face of uncertainty to Bayesian regret bounds for posterior sampling by establishing a deep connection between the two in their study of the Bayesian regret.
In their information-theoretic analysis of Thompson sampling, Lu & Van Roy (2019) demonstrated improved regret performance by leveraging prior information regarding the bandit environment, which was further tightened in the form of a prior-dependent bound by Kveton et al. (2021). Numerous developments to learn the prior have been proposed in the bandit meta-learning space (Basu et al., 2021; Kveton et al., 2021; Wan et al., 2021; Hong et al., 2022), with many works studying the effects of prior misspecification (Bastani et al., 2022; Peleg et al., 2022; Simchowitz et al., 2021). While these contributions operate in the multi-task setting, seeking to learn the prior by solving many similar tasks typically corresponding to bandit instances, our work more resembles the approach of Kaufmann et al. (2012) in applying Bayesian techniques to tackle a single bandit instance. In such a way, rather than an estimate of the task prior, our proposed prior is best interpreted as encoding the prior information on the underlying parameters of a given bandit instance based on available observational data generated within the task.

## 1.2 Our Contributions

We approach the CB problem from a Bayesian perspective, assuming simply that finite samples of observational data are available. Importantly, we do not assume the causal graph is known, nor are we restrictive as to the class of graph structures. We design a novel Bayesian CB framework called Bayesian Backdoor Bandit (BBB) that efficiently utilizes the entirety of evidence from an ensemble of observational and interventional data in a unified Bayesian model. Our proposed BBB methodology quantifies the uncertainty contributed to the expected reward estimates by both the reward signal and the causal model in order to identify potentially profitable exploration, simultaneously learning the causal graph both in its own right and for the purpose of improving the estimates it exploits.
Through extensive numerical experiments, we validate our methodology by demonstrating compelling empirical performance against both non-causal and causal algorithms. In particular, we show that our BBB approach is able to leverage modest samples of observational data, seamlessly integrating causal effect estimation and structure modeling, to achieve substantially superior cumulative regret performance compared to standard algorithms that make no use of observational information. We similarly demonstrate competitive performance against a generously optimistic version of the causal central node approach proposed by Lu et al. (2021) that assumes large-sample observational data. A preliminary analysis shows that, under some assumptions on the posterior distributions in our Bayesian model, the dependence of the cumulative regret of a BBB algorithm on the size of the action space can be greatly relaxed given a sufficient amount of observational data. Additionally, in detailing the application of our methods to the discrete and Gaussian distributional settings, we propose various developments that are of independent interest. In the discrete setting, we derive an approximation for the sampling variance of the backdoor adjustment probability estimate. In the Gaussian setting, we characterize the interventional variance of a target variable using intervention calculus and correspondingly propose an estimator, and we propose a simple graphical criterion for sharing causal information between arms to perform intervention calculus with jointly observational and interventional data.

The remainder of the paper is arranged as follows. We first review relevant background and notation in Section 2. Then, we develop the formulation of our proposed Bayesian backdoor adjustment prior and its posterior update in Section 3, discussing the design of informative conditional priors given a graph and Bayesian model averaging across graph structures.
In Section 4, we develop our proposed algorithms by applying established MAB algorithms under the BBB framework, and we discuss details regarding the implementation of BBB in the discrete and Gaussian settings in Section 5. Finally, we provide extensive empirical results in Section 6 and conclude with a discussion in Section 7. Appendices A through D contain proofs, additional details and numerical results, and technical derivations.

## 2 Preliminaries

We consider the setting where the generative model governing a joint probability distribution $P$ of a set of $p$ variables $X = \{X_1, \ldots, X_p\}$ is a causal Bayesian network (CBN). A CBN model is defined by $B = (G, \Theta_G)$, consisting of its structure $G$, which takes the form of a directed acyclic graph (DAG), and its parameters $\Theta_G$. Its DAG $G = (V, E)$, often referred to as the underlying causal graph, is composed of a set of nodes $V = \{1, \ldots, p\}$ in one-to-one correspondence with the variables, and a set of directed edges $E$ oriented such that there are no directed cycles. As is standard in the causal literature, we may refer to a node $i \in V$ and its corresponding variable $X_i \in X$ interchangeably. In our work, we assume that $X$ is a causally sufficient system with no unobserved confounders. The causal implications imposed by the underlying CBN of $X$ are expressed in the form of a structural equation model (SEM), $X_i = f(\mathrm{Pa}^G_i, \varepsilon_i)$ for all $i \in V$, where $\mathrm{Pa}^G_i = \{X_j : j \to i \in E\}$ is the set of parents of $X_i$ in $G$ and $\varepsilon_i$ is an exogenous noise term. Otherwise stated, each variable $X_i$ is a function of its direct causes in $G$ and an independent noise variable, which defines its conditional distribution $P(X_i \mid \mathrm{Pa}^G_i, \theta^G_i)$ with local parameters $\theta^G_i \in \Theta_G$. The joint distribution $P$ imposed by the CBN factorizes according to structure $G$: $P(X) = \prod_{i=1}^{p} P(X_i \mid \mathrm{Pa}^G_i, \theta^G_i)$.
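As a toy illustration of a CBN expressed as a structural equation model (the three-variable graph and all coefficients below are hypothetical, not from the paper): each variable is generated from its parents plus independent noise, and intervening on a variable amounts to replacing its structural equation with a constant, which makes the interventional mean differ from the observational conditional mean when a parent acts as a confounder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CBN over {X1, X2, Y} with edges X1 -> X2, X1 -> Y, X2 -> Y.
# Each structural equation uses the variable's parents plus Gaussian noise.
def sample(n, do_x2=None):
    x1 = rng.normal(size=n)
    # Intervening on X2 deletes the edge X1 -> X2 and fixes X2 to a constant.
    x2 = 0.8 * x1 + rng.normal(size=n) if do_x2 is None else np.full(n, float(do_x2))
    y = 1.5 * x2 + 1.0 * x1 + rng.normal(size=n)
    return x1, x2, y

n = 200_000
# Interventional mean under do(X2 = 1): 1.5 * 1 + 1.0 * E[X1] = 1.5.
_, _, y_int = sample(n, do_x2=1.0)
# Observational conditional mean given X2 near 1 is inflated by the
# confounder X1, so it differs from the interventional mean.
x1, x2, y_obs = sample(n)
mask = np.abs(x2 - 1.0) < 0.05
print(round(y_int.mean(), 2))        # close to 1.5
print(round(y_obs[mask].mean(), 2))  # noticeably larger than 1.5
```

This numerical gap is exactly the non-equivalence of the interventional and observational conditional distributions that motivates intervention calculus.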
Realizations from $P(\mathbf{X})$ without any experimental intervention are referred to as observational data, whereas interventional data are realizations of the SEM when the value of one or more variables is controlled by intervention. In our work, we consider deterministic atomic interventions denoted $do(X_j = x_j)$ (Pearl, 1995), where a single variable is forcibly controlled to a fixed value $x_j \in \mathrm{Dom}(X_j)$. This has the effect of mutilating the causal graph by deleting the direct effects of $\mathrm{Pa}_j^{\mathcal{G}}$ on $j$, correspondingly modifying the SEM to $X_j = x_j$ and $X_i = f(\mathrm{Pa}_i^{\mathcal{G}}, \varepsilon_i)$ for all $i \in V \setminus \{j\}$. The interventional distribution $P(\mathbf{X} \setminus X_j \mid do(X_j = x_j))$ is not in general equivalent to the observational conditional distribution $P(\mathbf{X} \setminus X_j \mid X_j = x_j)$, motivating the calculation of the interventional distribution from the observational distribution, such as (1) below, which we refer to as intervention calculus. The action set $\mathcal{A}$ consists of $|\mathcal{A}| = K$ arms that correspond to interventions on variables in $\mathbf{X} \setminus Y$, where $Y = X_p$ is the reward variable (Lattimore et al., 2016). In particular, let arm $a \in \mathcal{A}$ correspond to the intervention $do(X_{\langle a\rangle} = x_a)$, fixing $X_{\langle a\rangle}$ to some value $x_a \in \mathrm{Dom}(X_{\langle a\rangle})$, where $\langle a\rangle \in V$ is the node corresponding to the intervened variable. The expected reward of each arm $a \in \mathcal{A}$ is given by $\mu_a := \mathrm{E}_P[Y \mid do(X_{\langle a\rangle} = x_a)]$, where $\mathrm{E}_P[\cdot]$ is the expectation in $P$ defined by CBN $\mathcal{B}$, and there is some optimal arm $a^* := \operatorname{argmax}_{a \in \mathcal{A}} \mu_a$ corresponding to the optimal reward $\mu^* := \mu_{a^*}$. Given a horizon of $T$ time steps, let $a_t \in \mathcal{A}$ be the arm pulled by an algorithm at time step $t \in \{1, \ldots, T\}$. The objective of the algorithm is to pull arms over $T$ time steps with a balance between exploring different arms and exploiting the reward signal to minimize the expected cumulative regret $\mathrm{E}\left[\sum_{t=1}^T (\mu^* - \mu_{a_t})\right]$. We now describe a Bayesian approach to the general MAB problem, with some notation adapted from Kaufmann et al. (2012).
The parameters $\Theta_{\mathcal{A}} = (\theta_a)_{a \in \mathcal{A}}$, assumed to mutually independently define the corresponding marginal reward distributions $p_{\theta_a}(y) := P[Y = y \mid do(X_{\langle a\rangle} = x_a)]$, jointly follow a modular prior distribution $\Pi^0(\Theta_{\mathcal{A}}) = \prod_{a \in \mathcal{A}} \pi_a^0(\theta_a)$. Typically, $(\pi_a^0)_{a \in \mathcal{A}}$ are chosen to be all equal and uninformative. When arm $a_t \in \mathcal{A}$ is pulled at time step $t$ and a realization $y_t \leftarrow Y \mid do(X_{\langle a_t\rangle} = x_{a_t})$ is observed, the posterior $\Pi^t$ is computed by updating according to $\pi_{a_t}^t(\theta_{a_t}) \propto p_{\theta_{a_t}}(y_t)\, \pi_{a_t}^{t-1}(\theta_{a_t})$, while $\pi_a^t = \pi_a^{t-1}$ for $a \neq a_t$. For each arm $a \in \mathcal{A}$, the posterior $\pi_a^t$ induces a posterior distribution for the expected reward $\mu_a$, which is simply a marginal or transformation of $\pi_a^t$ since, in general, $\mu_a$ is a function of $\theta_a$. These posteriors are utilized by Bayesian MAB algorithms, which we discuss and apply under our proposed framework in Section 4. In our problem formulation, we assume possession of $n_0$ samples of observational data $\mathcal{D}_0$ prior to investigating arms. We denote by $\mathcal{D}(t)$ the interventional data acquired by pulling arm $a_t$ at time $t$, and by $\mathcal{D}_a[t] = \bigcup_{l \le t,\, a_l = a} \mathcal{D}(l)$ the accumulated interventional data from arm $a$ through time $t$. The combined observational and interventional data accrued through time $t$ is $\mathcal{D}[t] = \mathcal{D}_0 \cup \bigcup_{a \in \mathcal{A}} \mathcal{D}_a[t]$, which we refer to as ensemble data.

## 3 Designing Informative Priors With Intervention Calculus

In this section, we detail the design of the cornerstone of BBB, what we refer to as the backdoor adjustment prior: an informative prior $\Pi^0$ that distinguishes BBB from standard non-causal bandit algorithms by encoding inferences from observational data and seamlessly integrating interventional data to update to the posterior $\Pi^t$. We begin by introducing the construction of conditional priors given backdoor adjustment sets before continuing to obtain the backdoor adjustment prior by Bayesian model averaging over parent set probabilities.
We conclude by discussing the formulation and considerations of the posterior distribution over graph structures that gives rise to the parent set probabilities.

**Conditional Priors.** For each arm $a \in \mathcal{A}$, we construct conditional priors $\pi_{a|\mathbf{Z}}^0(\theta_a)$ using the backdoor adjustment given sets $\mathbf{Z} \subseteq \mathbf{X} \setminus X_{\langle a\rangle}$ as follows. If $\mathbf{Z}$ satisfies the backdoor criterion relative to $X_{\langle a\rangle}$ and $Y$ (Pearl, 2000, Definition 3.3.1), then the interventional distribution $Y \mid do(X_{\langle a\rangle} = x_a)$ may be expressed in terms of the joint observational distribution of $\{X_{\langle a\rangle}, Y\} \cup \mathbf{Z}$ via the backdoor adjustment (Pearl, 2000, Theorem 3.3.2):

$$P[Y=y\mid do(X_{\langle a\rangle}=x_{a})]=\sum_{\mathbf{z}\in\operatorname{Dom}(\mathbf{Z})}P(Y=y\mid X_{\langle a\rangle}=x_{a},\mathbf{Z}=\mathbf{z})\,P(\mathbf{Z}=\mathbf{z}).\tag{1}$$

Eq. (1) provides an avenue through which an estimator for $\mu_a$ using observational data may be derived, which we denote $\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})$ and with which we design an informative prior $\pi_{a|\mathbf{Z}}^0$ such that the induced prior distribution of the expected reward $\mu_a$ satisfies

$$\mathrm{E}_{\pi^{0}_{a|\mathbf{Z}}}[\mu_{a}]=\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z}),\quad\operatorname{Var}_{\pi^{0}_{a|\mathbf{Z}}}[\mu_{a}]=\widehat{\mathrm{SE}}^{2}\left[\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})\right].\tag{2}$$

Here, $\mathrm{E}_{\pi_{a|\mathbf{Z}}^0}[\cdot]$ and $\operatorname{Var}_{\pi_{a|\mathbf{Z}}^0}[\cdot]$ are respectively the expectation and variance in $\pi_{a|\mathbf{Z}}^0$, and $\widehat{\mathrm{SE}}^2[\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})]$ is the estimated sampling variability of the expected reward estimate $\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})$ in $P$. The matching of the prior variance with the sampling variance of the backdoor adjustment estimator endeavors to assign the appropriate prior effective sample size. When arm $a_t = a$ is pulled at time step $t \in \{1, \ldots, T\}$ and a realization of the reward $y_t \leftarrow Y \mid do(X_{\langle a\rangle} = x_a)$ is observed in $\mathcal{D}(t)$, the posterior $\pi_{a|\mathbf{Z}}^t$ is computed by updating according to $\pi_{a|\mathbf{Z}}^t(\theta_a) \propto p_{\theta_a}(y_t)\, \pi_{a|\mathbf{Z}}^{t-1}(\theta_a)$.

**Parent Set Averaging.**
Thus far we have taken for granted the possession of adjustment set $\mathbf{Z}$, the validity of which depends on the underlying causal structure $\mathcal{G}$, which we assume to be unknown. If $Y \notin \mathrm{Pa}_{\langle a\rangle}^{\mathcal{G}}$, then $\mathbf{Z} = \mathrm{Pa}_{\langle a\rangle}^{\mathcal{G}}$ satisfies the backdoor criterion relative to $X_{\langle a\rangle}$ and $Y$, and its uncertainty is quantified by the posterior probability $P(\mathbf{Pa}_{\langle a\rangle} = \mathbf{Z} \mid \mathcal{D}[t])$ given the ensemble data at time $t$. Accordingly, the posterior of $\theta_a$ is determined by averaging over all possible parent sets for $X_{\langle a\rangle}$:

$$\pi_{a}^{t}(\theta_{a})=\sum_{\mathbf{Z}\subseteq\mathbf{X}\setminus X_{\langle a\rangle}}\pi_{a|\mathbf{Z}}^{t}(\theta_{a})\,P(\mathbf{Pa}_{\langle a\rangle}=\mathbf{Z}\mid\mathcal{D}[t]),\tag{3}$$

which is the key posterior distribution to be updated at each time step $t$ in the Bayesian CB problem. Note that if $Y \in \mathrm{Pa}_{\langle a\rangle}^{\mathcal{G}}$, then $P[Y = y \mid do(X_{\langle a\rangle} = x_a)] = P(Y = y)$ holds straightforwardly for $y \in \mathrm{Dom}(Y)$. Accordingly, if $Y \in \mathbf{Z}$, we compute $\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})$ with the marginal distribution of $Y$ for the design of $\pi_{a|\mathbf{Z}}^0$.

**Structure Posterior.** The parent set distribution in (3) is obtained according to a posterior distribution of DAG structures informed by jointly observational and interventional data $\mathcal{D}[t]$:

$$P(\mathbf{Pa}_{i}=\mathbf{Z}\mid\mathcal{D}[t])=\sum_{\mathcal{G}^{\prime}:\,\mathbf{Pa}_{i}^{\mathcal{G}^{\prime}}=\mathbf{Z}}P(\mathcal{G}^{\prime}\mid\mathcal{D}[t]).\tag{4}$$

The structure posterior is given by $P(\mathcal{G} \mid \mathcal{D}[t]) \propto P(\mathcal{D}[t] \mid \mathcal{G})\,P(\mathcal{G})$, where $P(\mathcal{G})$ is the structure prior, and the marginal likelihood $P(\mathcal{D}[t] \mid \mathcal{G}) = \int P(\mathcal{D}[t] \mid \mathcal{G}, \Theta_{\mathcal{G}})\,P(\Theta_{\mathcal{G}} \mid \mathcal{G})\,d\Theta_{\mathcal{G}}$ is obtained by integrating the likelihood function over the support of a conjugate prior of the parameters as follows. Let $m \in \mathcal{I} := \{1, \ldots, M\}$ index the $M = n_0 + t$ samples of data in $\mathcal{D}[t]$, and let $\mathcal{O}_i \subseteq \mathcal{I}$ represent the data points for which $X_i$ is not fixed by intervention.
We make standard assumptions for Bayesian network structure learning, namely that the priors for the parameters of the conditional probability distributions satisfy global parameter independence, with $\Pi_{\mathcal{G}}^0(\Theta_{\mathcal{A}}) = \prod_{a \in \mathcal{A}} \pi_{a|\mathcal{G}}^0(\theta_a)$, as well as parameter modularity, with $\pi_{a|\mathcal{G}}^0(\theta_a) = \pi_{a|\mathcal{G}'}^0(\theta_a) = \pi_{a|\mathbf{Z}}^0(\theta_a)$ for graphs $\mathcal{G}$ and $\mathcal{G}'$ where $\mathrm{Pa}_{\langle a\rangle}^{\mathcal{G}} = \mathrm{Pa}_{\langle a\rangle}^{\mathcal{G}'} = \mathbf{Z}$ (see Heckerman et al. (1995) and Friedman & Koller (2003) for details). These allow us to express the marginal likelihood as $P(\mathcal{D}[t] \mid \mathcal{G}) = \prod_{i=1}^p P(x_i[\mathcal{O}_i] \mid \mathbf{pa}_i^{\mathcal{G}}[\mathcal{O}_i])$, where $x_i[\cdot]$ and $\mathbf{pa}_i^{\mathcal{G}}[\cdot]$ represent indexed samples of $X_i$ and $\mathrm{Pa}_i^{\mathcal{G}}$ in $\mathcal{D}[t]$, respectively. Assuming a conjugate prior, each conditional likelihood $P(x_i[\mathcal{O}_i] \mid \mathbf{pa}_i^{\mathcal{G}}[\mathcal{O}_i])$ can be calculated in closed form by integrating over the parameters:

$$P(x_{i}[\mathcal{O}_{i}]\mid\mathbf{pa}_{i}^{\mathcal{G}}[\mathcal{O}_{i}])=\int\left[\prod_{m\in\mathcal{O}_{i}}P\left(x_{i}[m]\mid\mathbf{pa}_{i}^{\mathcal{G}}[m],\theta_{i}^{\mathcal{G}}\right)\right]P(\theta_{i}^{\mathcal{G}})\,d\theta_{i}^{\mathcal{G}},\tag{5}$$

where $\theta_i^{\mathcal{G}} = \theta_{X_i|\mathrm{Pa}_i^{\mathcal{G}}}$ are the parameters specifying the conditional distribution of $X_i$ given its parents (Eaton & Murphy, 2007). Assuming the distribution $P$ is faithful to $\mathcal{G}$ (that is, all and only the conditional independence relationships in $P$ are entailed by $\mathcal{G}$), the posterior probability $P(\mathcal{G} \mid \mathcal{D}[t])$ will concentrate around the Markov equivalence class with increasing samples of observational data $n_0$. The equivalence class consists of the identification of all direct edge connections (that is, the skeleton of $\mathcal{G}$) and some edge orientations called compelled edges, but even with infinite observational data, in general, not all edge orientations are identifiable without interventional data. The effect on $P(\mathcal{G} \mid \mathcal{D}[t])$ of pulling arm $a \in \mathcal{A}$ and observing interventional data according to the intervention $do(X_{\langle a\rangle} = x_a)$ is primarily, though not limited to, that of clarifying the orientation of the edges incident to $X_{\langle a\rangle}$ in $\mathcal{G}$.
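As an illustrative sketch of the structure posterior on the smallest possible example (not our implementation), consider two binary variables and the three candidate structures $X \to Y$, $Y \to X$, and the empty graph. With Beta(1,1) priors per parent configuration, each local marginal likelihood is the closed-form Beta-binomial term $n_0!\,n_1!/(n_0+n_1+1)!$ (a K2-style score, which unlike BDeu is not score-equivalent in general); the data below are made-up.

```python
from math import lgamma, exp

# Beta(1,1) log marginal likelihood for binary counts: log[n0! n1! / (n0+n1+1)!].
def bb_loglik(n0, n1):
    return lgamma(n0 + 1) + lgamma(n1 + 1) - lgamma(n0 + n1 + 2)

def node_loglik(child, parent, data):
    # Local score of one node given its (empty or singleton) parent set.
    if parent is None:
        n1 = sum(row[child] for row in data)
        return bb_loglik(len(data) - n1, n1)
    total = 0.0
    for pv in (0, 1):  # one Beta-binomial term per parent configuration
        rows = [row for row in data if row[parent] == pv]
        n1 = sum(row[child] for row in rows)
        total += bb_loglik(len(rows) - n1, n1)
    return total

data = [(0, 0)] * 5 + [(1, 1)] * 5 + [(0, 1), (1, 0)]  # (X, Y) samples
loglik = {'X->Y': node_loglik(0, None, data) + node_loglik(1, 0, data),
          'Y->X': node_loglik(1, None, data) + node_loglik(0, 1, data),
          'empty': node_loglik(0, None, data) + node_loglik(1, None, data)}

# Structure posterior under a uniform prior: P(G | D) ∝ P(D | G).
m = max(loglik.values())
w = {g: exp(v - m) for g, v in loglik.items()}
post = {g: v / sum(w.values()) for g, v in w.items()}
```

With the dependent data above, the empty graph receives the smallest posterior mass, while the two Markov-equivalent orientations tie (here exactly, because the data happen to be symmetric in $X$ and $Y$), illustrating that observational data alone cannot orient the edge.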
Considering that the DAG space grows super-exponentially with the number of variables (Robinson, 1977), computation of the parent set probabilities $P(\mathbf{Pa}_i \mid \mathcal{D}[t])$ is admittedly challenging, even when the maximum number of parents is restricted. Due to the computational complexity, it is standard to assume a structure prior satisfying modularity, that is, $P(\mathcal{G}) = \prod_{i=1}^p P(\mathrm{Pa}_i^{\mathcal{G}})$, so that the posterior distribution is proportional to decomposable weights consisting of the product of local scores depending only on a node and its parents (Friedman & Koller, 2003). This property of score decomposability is crucial for the efficient implementation of Markov chain Monte Carlo (MCMC) methods in which the probability distribution of features in $\mathcal{G}$ may be estimated by sampling DAGs from a Markov chain with stationary distribution $P(\mathcal{G} \mid \mathcal{D}[t])$ (Madigan et al., 1995; Friedman & Koller, 2003; Kuipers & Moffa, 2017; Kuipers et al., 2022). Particularly useful for our purposes is an algorithm developed by Pensar et al. (2020) to compute the exact parent set probabilities for a graph in time $O(3^p p)$ that also takes advantage of score decomposability. In our empirical evaluation of BBB, we apply BBB using both exact computation of parent set posteriors as well as approximation with MCMC sampling.

## 4 Bayesian Backdoor Bandit Algorithms

In this section, we apply our proposed BBB framework to several state-of-the-art MAB algorithms, namely upper confidence bound (UCB), Thompson sampling (TS), and Bayesian UCB (Bayes-UCB). Each method is concerned with designing and computing some criterion $U_a(t)$ to maintain a balance between exploration and exploitation when selecting arms according to $a_t \in \operatorname{argmax}_{a \in \mathcal{A}} U_a(t)$. In what follows, we briefly introduce these methods and discuss their application under the BBB framework. We then provide a preliminary theoretical analysis of the cumulative regret for BBB-UCB and BBB-TS.
## 4.1 Description Of Algorithms

The general UCB family of algorithms operates under the principle of optimism in the face of uncertainty (Lai & Robbins, 1985). Arms that have not been investigated as many times as others have more uncertain reward estimates and thus optimistically have potential for greater reward, motivating the design of a padding function $F_a(t)$ for computing the selection criterion $U_a(t) = \hat{\mu}_a(t) + F_a(t)$. Intuitively, the combination of the expected reward estimate $\hat{\mu}_a(t)$ and the uncertainty $F_a(t)$ maintains a balance between high-confidence exploitation and potentially profitable exploration. Perhaps the most well-known and typically the default instantiation of UCB algorithms is UCB1 (Agrawal, 1995; Auer et al., 2002), which computes the following criterion:

$$U_{a}(t)=\hat{\mu}_{a}(t-1)+c\sqrt{\log(t-1)/n_{a}(t-1)},\tag{6}$$

where $n_a(t) = \sum_{l=1}^t \mathbb{1}\{a_l = a\}$ denotes the number of times arm $a$ has been pulled in $t$ time steps. The confidence tuning parameter $c > 0$, discussed in Sutton & Barto (2018), controls the desired degree of exploration, with $c = \sqrt{2}$ in Auer et al. (2002). Hereafter, when discussing UCB, we refer to the policy expressed by the criterion in (6). In what we refer to as BBB-UCB, we estimate the expected reward with the posterior mean $\mathrm{E}_{\pi_a^{t-1}}[\mu_a]$ with respect to (3), and we replace $1/n_a(t-1)$ with the posterior variance $\operatorname{Var}_{\pi_a^{t-1}}[\mu_a] \sim 1/(n_0 + n_a(t-1))$.
In particular, for each arm $a \in \mathcal{A}$, we compute

$$U_{a}(t)=\mathrm{E}_{\pi_{a}^{t-1}}[\mu_{a}]+c\sqrt{\operatorname{Var}_{\pi_{a}^{t-1}}[\mu_{a}]\,\log(t)},\tag{7}$$

where, by the laws of total expectation and total variance with respect to the parent set distribution $\mathbb{P}_{t-1}(\mathbf{Z}) := P(\mathbf{Pa}_{\langle a\rangle} = \mathbf{Z} \mid \mathcal{D}[t-1])$ in (4),

$$\mathrm{E}_{\pi_{a}^{t-1}}[\mu_{a}]=\mathrm{E}_{\mathbb{P}_{t-1}}\left[\mathrm{E}_{\pi_{a|\mathbf{Z}}^{t-1}}[\mu_{a}]\right],\quad\operatorname{Var}_{\pi_{a}^{t-1}}[\mu_{a}]=\mathrm{E}_{\mathbb{P}_{t-1}}\left[\operatorname{Var}_{\pi_{a|\mathbf{Z}}^{t-1}}[\mu_{a}]\right]+\operatorname{Var}_{\mathbb{P}_{t-1}}\left[\mathrm{E}_{\pi_{a|\mathbf{Z}}^{t-1}}[\mu_{a}]\right].$$

The Bayesian procedures of Bayes-UCB and TS are especially amenable to straightforward application under the BBB framework. These methods follow the Bayesian MAB formulation introduced in Section 2, typically taking as input uninformative priors in $\Pi^0$ that are equivalent for each arm. At each time step $t$, TS samples the expectations from the posterior, $U_a(t) \leftarrow \pi_a^{t-1}(\mu_a)$, effectively selecting arm $a \in \mathcal{A}$ with probability equal to the posterior probability that $\mu_a$ is the highest expectation (Thompson, 1933). Bayes-UCB instead computes for each arm at time $t$ an upper quantile of $\mu_a$ based on its posterior distribution induced by $\pi_a^{t-1}$:

$$U_{a}(t)=Q\left(1-\frac{1}{t(\log T)^{c}},\,\pi_{a}^{t-1}\right),\tag{8}$$

where $Q(r, \rho)$ is the quantile function defined by $P_\rho(X \le Q(r, \rho)) = r$ for probability distribution $\rho$ and random variable $X \sim \rho$, and $c$ is a constant for computing the quantile used in the theoretical analysis of Bayes-UCB, with $c = 0$ empirically preferred (Kaufmann et al., 2012). For the BBB variants of Bayes-UCB and TS, we need simply to supply our designed backdoor adjustment prior $\Pi^0$ and make the appropriate Bayesian updates to obtain the posterior $\Pi^t$. We present our proposed BBB methodology applied to Bayes-UCB, TS, and UCB in Algorithm 1.
Algorithm 1 BBB-Alg($T$, $\mathcal{A}$, $\mathcal{D}_0$, $c$)

Require: Horizon $T$, action set $\mathcal{A}$, observational data $\mathcal{D}_0$, confidence level $c$
1: Compute the observational parent set posteriors (4)
2: **for all** $a \in \mathcal{A}$ and $\mathbf{Z} \subseteq \mathbf{X} \setminus X_{\langle a\rangle}$ **do**
3: &nbsp;&nbsp; Compute $\pi_{a|\mathbf{Z}}^0$ according to (2)
4: **end for**
5: **for** $t = 1, \ldots, T$ **do**
6: &nbsp;&nbsp; **for all** $a \in \mathcal{A}$ **do**
7: &nbsp;&nbsp;&nbsp;&nbsp; Compute criterion $U_a(t)$ according to Alg:
   - Bayes-UCB: $U_a(t) = Q\left(1 - 1/(t(\log T)^c), \pi_a^{t-1}\right)$ as in (8)
   - TS: Sample $U_a(t) \leftarrow \pi_a^{t-1}(\mu_a)$
   - UCB: $U_a(t) = \mathrm{E}_{\pi_a^{t-1}}[\mu_a] + c\sqrt{\operatorname{Var}_{\pi_a^{t-1}}[\mu_a]\log(t)}$ as in (7)
8: &nbsp;&nbsp; **end for**
9: &nbsp;&nbsp; Pull arm $a_t \in \operatorname{argmax}_{a \in \mathcal{A}} U_a(t)$ and observe $\mathcal{D}(t)$
10: &nbsp;&nbsp; **for all** $\mathbf{Z} \subseteq \mathbf{X} \setminus X_{\langle a\rangle}$ where $a = a_t$ **do**
11: &nbsp;&nbsp;&nbsp;&nbsp; Update $\pi_{a|\mathbf{Z}}^t$ according to $\pi_{a|\mathbf{Z}}^t(\theta_a) \propto p_{\theta_a}(y_t)\,\pi_{a|\mathbf{Z}}^{t-1}(\theta_a)$
12: &nbsp;&nbsp; **end for**
13: &nbsp;&nbsp; Compute or update the parent set posteriors (4)
14: **end for**

## 4.2 Preliminary Regret Analysis

In this section, we provide some preliminary analysis of the Bayesian cumulative regret for BBB-UCB and BBB-TS. We begin by defining the Bayesian cumulative regret. Given reward parameters $\Theta_{\mathcal{A}}$, the (expected) cumulative regret of a policy is defined as

$$R_{T}(\Theta_{\mathcal{A}})=\mathrm{E}\left[\sum_{t=1}^{T}(\mu^{*}-\mu_{a_{t}})\;\middle|\;\Theta_{\mathcal{A}}\right],$$

where the parameters $\Theta_{\mathcal{A}}$ are fixed. Under our Bayesian setting, we specify a prior $P(\mathcal{G})$ over the DAG $\mathcal{G}$ and conditional priors $\pi_{a|\mathbf{Z}}^0$ for the parameters of the reward distribution under interventions $a \in \mathcal{A}$, which defines a prior $\Pi^0$ over $\Theta_{\mathcal{A}}$. The Bayesian regret averages the regret $R_T(\Theta_{\mathcal{A}})$ over the prior distribution $\Pi^0$:

$$R_{T}^{B}=\mathrm{E}_{\Pi^{0}}\left[R_{T}(\Theta_{\mathcal{A}})\right]=\sum_{t=1}^{T}\mathrm{E}\left[\mu^{*}-\mu_{a_{t}}\right],$$

where the second expectation is with respect to the joint distribution over the parameters and the data $[\mathcal{G}, \Theta_{\mathcal{A}}, \mathcal{D}[T]]$. Note that $\mu_a = \mu_a(\Theta_{\mathcal{A}})$ is a random variable under the Bayesian setting. If $R_T^B = O(g(T))$, then $R_T(\Theta_{\mathcal{A}}) = O_p(g(T))$ with respect to the prior distribution of $(\mathcal{G}, \Theta_{\mathcal{A}})$.
Without loss of generality, assume $\mu_a \in [-C, C]$ for all $a \in \mathcal{A}$. We first analyze the BBB-UCB algorithm (Algorithm 1). By Eq. (2) in Russo & Van Roy (2014), for the UCB sequence $\{a_t\}$,

$$R_{T}^{B}\leq\sum_{t=1}^{T}\mathrm{E}[U_{a_{t}}(t)-\mu_{a_{t}}]+2C\sum_{t=1}^{T}P[\mu^{*}>U_{a^{*}}(t)].\tag{9}$$

For the first term, $\mathrm{E}[U_{a_t}(t) - \mu_{a_t}] = \mathrm{E}\left[\mathrm{E}\{U_{a_t}(t) - \mu_{a_t} \mid \mathcal{D}[t-1]\}\right]$. Since $U_a(t)$ in (7) is a deterministic function of $\mathcal{D}[t-1]$, we have $\mathrm{E}\{U_{a_t}(t) \mid \mathcal{D}[t-1]\} = U_{a_t}(t)$ and consequently,

$$\begin{aligned}\mathrm{E}[U_{a_{t}}(t)-\mu_{a_{t}}]&=\mathrm{E}\left[U_{a_{t}}(t)-\mathrm{E}(\mu_{a_{t}}\mid\mathcal{D}[t-1])\right]\\&=c\sqrt{\log t}\;\mathrm{E}\left[\sqrt{\operatorname{Var}(\mu_{a_{t}}\mid\mathcal{D}[t-1])}\right]\\&\leq c\sqrt{\log t}\;\sqrt{\mathrm{E}\left[\operatorname{Var}(\mu_{a_{t}}\mid\mathcal{D}[t-1])\right]},\end{aligned}\tag{10}$$

where the last inequality follows from Jensen's inequality. Note that $\operatorname{Var}(\mu_a \mid \mathcal{D}[t]) = \operatorname{Var}_{\pi^t}(\mu_a)$ and $\mathrm{E}(\mu_a \mid \mathcal{D}[t]) = \mathrm{E}_{\pi^t}(\mu_a)$ as in (7). We make the following assumptions on the posterior distribution $p(\mu_a \mid \mathcal{D}[t])$. See Appendix A for a detailed discussion on how to verify these assumptions.

Assumption 1. *Let* $d_a$ *be the number of candidate parent sets for* $X_{\langle a\rangle}$. *For all* $t \geq 1$ *and* $a \in \mathcal{A}$,

$$\mathrm{E}\left[\operatorname{Var}(\mu_{a}\mid\mathcal{D}[t])\right]\leq\frac{c_{1}^{2}}{n_{0}+n_{a}(t)}+c_{2}^{2}\,d_{a}\exp(-\delta_{a}n_{a}(t)),$$

*where* $c_1$, $c_2$, *and* $\delta_a$ *are positive constants.*

The Markov equivalence class of the true DAG, represented by a CPDAG, can be accurately estimated with a large amount of observational data (large $n_0$). In such cases, $d_a \leq 2^{m_a}$ is usually quite small, where $m_a$ is the number of undirected edges connected to $X_{\langle a\rangle}$ in the CPDAG. The second assumption is on the concentration of $\mu^* \equiv \mu_{a^*}$ around its posterior mean $\mathrm{E}(\mu_{a^*} \mid \mathcal{D}[t])$:

Assumption 2.
*For all* $t \geq 1$,

$$P\left\{\frac{\mu^{*}-\mathrm{E}(\mu^{*}\mid\mathcal{D}[t])}{\sqrt{\operatorname{Var}(\mu^{*}\mid\mathcal{D}[t])}}>c\sqrt{\log t}\;\middle|\;\mathcal{D}[t]\right\}\leq c_{3}t^{-b},$$

*where* $c_3 > 0$ *and* $b > 1$ *are constants.*

Proposition 1. *Under Assumptions 1 and 2, the Bayes regret of the BBB-UCB algorithm satisfies*

$$R_{T}^{B}\leq\left[c_{4}\left(\sqrt{KT+K^{2}(n_{0}-1)}-\sqrt{K^{2}(n_{0}-1)}\right)+c_{5}\sum_{a\in\mathcal{A}}\sqrt{d_{a}}\right]\sqrt{\log T}+c_{6},\tag{11}$$

*where* $K = |\mathcal{A}|$ *and* $c_4, c_5, c_6$ *are positive constants.*

Remark 1. *By Proposition 1 of Russo & Van Roy (2014), the same upper bound* (11) *also applies to the Bayes regret of the BBB-TS algorithm.* For any $n_0 \geq 1$, by concavity of $\sqrt{x}$,

$$\sqrt{K^{2}(n_{0}-1)+KT}-\sqrt{K^{2}(n_{0}-1)}\leq\sqrt{KT}.$$

Therefore, when $T$ is large such that $\sum_a \sqrt{d_a} = O(\sqrt{T})$, the regret $R_T^B = O(\sqrt{KT\log T})$, which is identical to the order of the regret of a standard MAB, e.g., Proposition 2 of Russo & Van Roy (2014). The benefit of our backdoor adjustment prior is seen when $n_0$ is large relative to $T$. If $T/(n_0 - 1) < M$, where $M$ is a constant, then

$$\sqrt{K^{2}(n_{0}-1)+KT}-\sqrt{K^{2}(n_{0}-1)}=\sqrt{K^{2}(n_{0}-1)}\left[\left\{1+\frac{T}{K(n_{0}-1)}\right\}^{1/2}-1\right]\leq\frac{T}{2\sqrt{n_{0}-1}}\leq\frac{\sqrt{MT}}{2},$$

where the first inequality is due to $(1+x)^{1/2} \leq 1 + x/2$ for $x \geq 0$. In this case, we obtain a regret bound $R_T^B = O(\sqrt{T\log T})$ independent of $K$. This confirms the advantage of using observational data to *simultaneously* estimate all rewards $\mu_a$, $a \in \mathcal{A}$, through backdoor adjustment, which largely relaxes the dependence of the regret on the number of actions.
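As an illustrative sketch (not our implementation), the three arm-selection criteria of Section 4.1 can be computed side by side for Beta posteriors; the posterior hyperparameters and constants below are made-up, and the Bayes-UCB quantile is approximated by Monte Carlo rather than an exact inverse CDF.

```python
import math
import random

rng = random.Random(7)
posteriors = {'a1': (9, 3), 'a2': (3, 9)}  # illustrative Beta(alpha, beta) per arm
t, c = 10, 1.0

def beta_moments(al, be):
    # Closed-form mean and variance of a Beta(al, be) distribution.
    mean = al / (al + be)
    var = al * be / ((al + be) ** 2 * (al + be + 1))
    return mean, var

def beta_quantile(al, be, q, n=20000):
    # Monte Carlo posterior quantile; an exact inverse CDF would also work.
    draws = sorted(rng.betavariate(al, be) for _ in range(n))
    return draws[min(int(q * n), n - 1)]

scores = {}
for a, (al, be) in posteriors.items():
    mean, var = beta_moments(al, be)
    scores[a] = {
        'UCB': mean + c * math.sqrt(var * math.log(t)),  # posterior-moment criterion (7)
        'TS': rng.betavariate(al, be),                   # Thompson sampling: posterior draw
        'BayesUCB': beta_quantile(al, be, 1 - 1 / t),    # quantile criterion (8) with c = 0
    }

picked_ucb = max(scores, key=lambda a: scores[a]['UCB'])
```

All three criteria favor the arm with the higher posterior mean here; they differ in how aggressively the posterior uncertainty inflates the score.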
## 5 Implementation Details

## 5.1 Nonparametric Discrete Setting

We now detail the application of our proposed construction of $\pi_{a|\mathbf{Z}}^0$ to the setting where the conditional probability distributions are multinomials, with each variable $X_i \in \mathbf{X}$ probabilistically attaining its states depending on the attained state configuration of its parents $\mathrm{Pa}_i^{\mathcal{G}}$. The reward variable $Y = X_p$ is a binary variable with $\mathrm{Dom}(Y) = \{0, 1\}$. If $Y \notin \mathbf{Z}$, $\mu_a$ may be estimated with observational data through straightforward empirical estimation of (1):

$$\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})=\hat{P}[Y=1\mid do(X_{\langle a\rangle}=x_{a})]=\frac{1}{n_{0}}\sum_{\mathbf{z}}\frac{n_{0}[1,x_{a},\mathbf{z}]\,n_{0}[\mathbf{z}]}{n_{0}[x_{a},\mathbf{z}]},\tag{12}$$

where $n_0[1, x_a, \mathbf{z}]$ represents the number of the $n_0$ samples of $\mathcal{D}_0$ in which $Y = 1$, $X_{\langle a\rangle} = x_a$, and $\mathbf{Z} = \mathbf{z}$, with corresponding definitions for $n_0[x_a, \mathbf{z}]$ and $n_0[\mathbf{z}]$. Analysis of the sampling distribution of (12) is admittedly challenging. To design an appropriately weighted informative prior as proposed in (2), we require some characterization of the sampling variability of $\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})$. Hence, we derive an approximation of the variance of (12), $\widehat{\mathrm{SE}}^2[\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})]$. We accomplish this by first re-expressing the joint counts $n_0[\cdot]$ as sums of elements of a multinomial random vector. The term within the sum may then be expressed as a product and ratio of intersecting random quantities, which we approximate through a first-order Taylor series expansion. The details of the derivation are delegated to Appendix D. It is appropriate to acknowledge that Maiti et al. (2021) proposed a provably unbiased strategy for empirical estimation of (1) through splitting the sample into independent partitions. However, this approach suffers from a severe loss of precision through what some may consider underutilization of the observed data. In our experiments detailed in Appendix C.1, we find the empirical performance of (12) in our applications to be acceptable.
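As a concrete sketch (not our implementation), the count-based estimator in (12) can be computed directly from joint counts; the tiny dataset and the variable names `X`, `Z`, `Y` below are illustrative.

```python
from collections import Counter

# Empirical backdoor adjustment estimate (12) for binary data:
# mu_hat = (1/n0) * sum_z n0[Y=1, x_a, z] * n0[z] / n0[x_a, z].
def backdoor_estimate(samples, x_a, z_vars):
    # samples: list of dicts with keys 'Y', 'X', and the adjustment variables.
    n0 = len(samples)
    n_z = Counter(tuple(s[v] for v in z_vars) for s in samples)
    n_xz = Counter(tuple(s[v] for v in z_vars) for s in samples if s['X'] == x_a)
    n_1xz = Counter(tuple(s[v] for v in z_vars) for s in samples
                    if s['X'] == x_a and s['Y'] == 1)
    # Sum only over z-configurations actually observed with X = x_a.
    return sum(n_1xz[z] * n_z[z] / n_xz[z] for z in n_xz) / n0

data = [{'X': 1, 'Z': 1, 'Y': 1}, {'X': 1, 'Z': 0, 'Y': 0},
        {'X': 0, 'Z': 1, 'Y': 0}, {'X': 1, 'Z': 1, 'Y': 1},
        {'X': 0, 'Z': 0, 'Y': 0}, {'X': 1, 'Z': 0, 'Y': 1}]
mu_hat = backdoor_estimate(data, x_a=1, z_vars=('Z',))
```

On this toy sample the estimate is $(2 \cdot 3/2 + 1 \cdot 3/2)/6 = 0.75$; with realistic sample sizes, the same computation feeds the prior design in (2).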
We additionally provide extensive empirical validation of our derived approximation, demonstrating coverage probabilities comparable to empirical estimates of the sampling variability for modest sample sizes. Since the reward variable under arm $a$ is a Bernoulli random variable with probability parameter $\mu_a = P[Y = 1 \mid do(X_{\langle a\rangle} = x_a)]$, we assume a conjugate prior $\pi_{a|\mathbf{Z}}^0 = \mathrm{Beta}(\alpha_0, \beta_0)$ for $\theta_a = \mu_a$ designed according to (2), resulting in prior hyperparameters

$$\alpha_{0}=\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})\left(\frac{\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})[1-\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})]}{\widehat{\mathrm{SE}}^{2}\left[\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})\right]}-1\right),\qquad\beta_{0}=\alpha_{0}\left(\frac{1-\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})}{\hat{\mu}_{a,\mathrm{bda}}(\mathbf{Z})}\right).$$

## 5.2 Gaussian Unit Deviation Setting

In this section, we consider the setting where the causal model may be expressed as a set of Gaussian structural equations:

$$X_{j}=\sum_{i=1}^{p}\beta_{ij}X_{i}+\varepsilon_{j},\quad\varepsilon_{j}\sim\mathrm{N}(0,\sigma_{j}^{2}),\quad j=1,\ldots,p.\tag{13}$$

There is no intercept term, which is analogous to having prior knowledge of the observational means, and we consider interventions $x_a \in \{-1, 1\}$, which may be interpreted as investigating unit deviations from the observational means. In this setting, the causal effect of $X_{\langle a\rangle}$ on $Y$ is given by

$$\psi_{\langle a\rangle}:=\mathrm{E}_{P}[Y\mid do(X_{\langle a\rangle}=x^{\prime}+1)]-\mathrm{E}_{P}[Y\mid do(X_{\langle a\rangle}=x^{\prime})]$$

for any $x' \in \mathbb{R}$, derived via a special case of (1). Note that in our problem formulation, $Y \mid do(X_{\langle a\rangle} = 1)$ and $-Y \mid do(X_{\langle a\rangle} = -1)$ are identically distributed, so all data generated from interventions on $X_{\langle a\rangle}$ may be combined to estimate $\psi_{\langle a\rangle}$. Since $\mu_a = x_a\psi_{\langle a\rangle}$, we focus our efforts on estimating and modeling $\psi_{\langle a\rangle}$.
Accordingly, in constructing our priors using intervention calculus, we design priors $\pi_{\langle a\rangle|\mathbf{Z}}^0$ for $\theta_{\langle a\rangle}$ corresponding to estimating $\psi_{\langle a\rangle}$, and allow $\pi_{a|\mathbf{Z}}^0$ to be the induced priors for $\theta_a$ corresponding to $\mu_a = \psi_{\langle a\rangle}x_a$, detailed as follows. If $Y \notin \mathbf{Z}$, then a consistent estimator of $\psi_{\langle a\rangle}$, denoted $\hat{\psi}_{\langle a\rangle,\mathrm{bda}}$, may be obtained with observational data by the least squares regression

$$Y=\psi_{\langle a\rangle}X_{\langle a\rangle}+\boldsymbol{\gamma}^{\top}\mathbf{Z}+e,\quad e\sim\mathrm{N}(0,\eta^{2}),\tag{14}$$

where $\boldsymbol{\gamma} \in \mathbb{R}^{|\mathbf{Z}|}$ are the coefficients of the parents $\mathbf{Z}$ (Maathuis et al., 2009; Pensar et al., 2020), and some dependence on $a$ is omitted for simplicity. Correspondingly, we express the desired interventional distribution as $Y \mid do(X_{\langle a\rangle} = x_a) \sim \mathrm{N}(\psi_{\langle a\rangle}x_a, \omega^2)$. Claiming no prior knowledge of the interventional variance, we assume a Normal-inverse-gamma (N-$\Gamma^{-1}$) conjugate prior $\pi_{\langle a\rangle|\mathbf{Z}}^0$ for $\theta_{\langle a\rangle} = (\psi_{\langle a\rangle}, \omega^2)$:

$$\psi_{\langle a\rangle}\mid\omega^{2}\sim\mathrm{N}\left(m_{0},\omega^{2}\nu_{0}^{-1}\right),\quad\omega^{2}\sim\Gamma^{-1}(u_{0},v_{0}).$$

Since, in general, the residual variance $\eta^2$ in (14) is not equivalent to $\omega^2$, we propose the following to estimate $\omega^2$ from observational data.

Proposition 2. *Suppose that* $\mathbf{X}$ *follows the causal structural equation model (SEM) in* (13) *with CBN* $\mathcal{B}$. *Let* $Y, X \in \mathbf{X}$, *and denote by* $\psi$ *the causal effect of* $X$ *on* $Y$. *Then for any* $x \in \mathrm{Dom}(X)$,

$$\operatorname{Var}_{P}[Y\mid do(X=x)]=\operatorname{Var}_{P}[Y-\psi X].\tag{15}$$

Note that $\operatorname{Var}_P[\cdot]$ is the variance in $P$ defined by CBN $\mathcal{B}$, and the variance on the right side is with respect to the observational distribution of $X$. Intuitively, subtracting $\psi X$ negates the noise variances $\sigma^2$ in (13) propagated through and from $X$ to $Y$. We include a detailed proof for Proposition 2 in Appendix A.
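As an illustrative sketch (not our implementation), both the regression estimator for $\psi$ in (14) and the Proposition 2 identity can be checked on simulated observational data from a small linear SEM; the structure $Z \to X$, $Z \to Y$, $X \to Y$ and all coefficients below are made-up, giving true $\psi = 1.5$ and $\operatorname{Var}_P[Y \mid do(X=x)] = 0.6^2 + 1 = 1.36$.

```python
import numpy as np

# Simulate observational data from a toy linear SEM with confounder Z.
rng = np.random.default_rng(0)
n0 = 5000
Z = rng.normal(size=n0)                        # Z ~ N(0, 1)
X = 0.8 * Z + rng.normal(size=n0)              # Z -> X
Y = 1.5 * X + 0.6 * Z + rng.normal(size=n0)    # X -> Y, Z -> Y; true psi = 1.5

# Least squares regression of Y on X and the adjustment set Z, no intercept
# (consistent with the zero-mean SEM), as in (14).
D = np.column_stack([X, Z])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)
psi_hat = coef[0]

# Proposition 2: Var[Y | do(X=x)] = Var[Y - psi X], estimated from residuals
# of Y - psi_hat * X with n0 - |Z| - 2 degrees of freedom (|Z| = 1 here).
y_tilde = Y - psi_hat * X
omega2_hat = y_tilde.var(ddof=3)
```

With this sample size, `psi_hat` lands close to 1.5 and `omega2_hat` close to 1.36, whereas a naive regression of $Y$ on $X$ alone would be biased by the confounding through $Z$.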
Thus, to estimate $\omega^2$ from $\mathcal{D}_0$, we propose the estimator $\hat{\omega}^2 = \sum_i (\tilde{y}_i - \bar{\tilde{y}})^2/(n_0 - |\mathbf{Z}| - 2)$, where $\tilde{y}_i$ are realizations of $\tilde{Y} := Y - \hat{\psi}_{\langle a\rangle,\mathrm{bda}}(\mathbf{Z})X_{\langle a\rangle}$ in $\mathcal{D}_0$, and $n_0 - |\mathbf{Z}| - 2$ is the degrees of freedom resulting from estimating $\bar{\tilde{y}}$ in addition to the $|\mathbf{Z}| + 1$ coefficients in (14). Accordingly, we design the prior $\omega^2 \sim \Gamma^{-1}(u_0, v_0)$ to have prior mean $\mathrm{E}[\omega^2] = v_0/(u_0 - 1) = \hat{\omega}^2$, resulting in hyperparameters $u_0 = (n_0 - |\mathbf{Z}|)/2$ and $v_0 = \sum_i (\tilde{y}_i - \bar{\tilde{y}})^2/2$. After marginalizing out $\omega^2$, $\psi_{\langle a\rangle} \sim t_{2u_0}\left(m_0, v_0(u_0\nu_0)^{-1}\right)$, so we set $\mathrm{E}_{\pi_{\langle a\rangle|\mathbf{Z}}^0}[\psi_{\langle a\rangle}] = m_0 = \hat{\psi}_{\langle a\rangle,\mathrm{bda}}(\mathbf{Z})$ and solve to obtain $\nu_0 = v_0/\left(u_0\widehat{\mathrm{SE}}^2[\hat{\psi}_{\langle a\rangle,\mathrm{bda}}(\mathbf{Z})]\right)$. To maximally utilize the ensemble data, we further generalize the estimation of $\psi_{\langle a\rangle}$ via the regression in (14) to include eligible samples of interventional data. This is achieved through the following proposition, which we prove in Appendix A. This result does not rely on any parametric assumptions for the underlying causal model, assuming simply that $\mathbf{X}$ follows a general linear SEM with DAG $\mathcal{G}$ (Pearl, 2000).

Proposition 3. *Suppose that* $\mathbf{X}$ *follows a linear SEM with CBN* $\mathcal{B} = (\mathcal{G}, \Theta_{\mathcal{G}})$, *and* $X, Y \in \mathbf{X}$. *Suppose that* $W \in \mathbf{X} \setminus \{X, Y\}$ *does not block any directed path from* $X$ *to* $Y$ *in* $\mathcal{G}$. *Then for any* $w \in \mathbb{R}$,

$$\frac{\partial}{\partial x}\mathrm{E}_{P}[Y\mid do(X=x)]=\frac{\partial}{\partial x}\mathrm{E}_{P}[Y\mid do(X=x),do(W=w)].\tag{16}$$

Proposition 3 asserts a simple graphical criterion which, if satisfied, defines an avenue by which information can be shared between arms. In our work, we check the graphical criterion for estimating the causal effect of $X_{\langle a\rangle}$ on $Y$ with interventional data generated from intervening on $X_j$ as follows. Using another algorithm proposed by Pensar et al.
(2020) for computing exact ancestor posterior probabilities, we consider the criterion satisfied at time step $t$ if the event that $X_j$ blocks a directed path from $X_{\langle a\rangle}$ to $Y$ has low posterior probability:

$$P(X_{\langle a\rangle}\leadsto X_{j}\leadsto Y\mid\mathcal{D}[t])\leq\min\{P(X_{\langle a\rangle}\leadsto X_{j}\mid\mathcal{D}[t]),\,P(X_{j}\leadsto Y\mid\mathcal{D}[t])\}\leq\tau,$$

where $X_j \leadsto Y$ denotes that $X_j$ is an ancestor of $Y$, and the threshold is set to $\tau = 0.1$ in our application. If (16) holds at time step $t$, we combine the observational data and the data from interventions on $X_j$ when conducting the regression (14). While independent samples of observational and interventional data are not guaranteed to have identically distributed errors in the regression (14), we provide extensive empirical validation of our proposed regression with ensemble data in Appendix C.2, confirming indistinguishable performance for the purposes of estimating $\psi_{\langle a\rangle}$ and its sampling variability compared to that of purely observational data.

## 6 Numerical Experiments

We conducted extensive numerical experiments to empirically validate our proposed methodology. For our main experiments, we generated random CBN models in an effort to demonstrate the merits of our proposed methodology by evaluating BBB across a broad range of scenarios. To address the computational challenges of BBB, we then evaluated an implementation using MCMC to estimate parent set probabilities and applied it to a larger realistic reference network. Comprehensive experimental details sufficient for reproducing our experiments, such as CBN preparation and algorithm parameters, are provided in Appendix B. Additional supplemental experiments independently evaluating (12) and Proposition 3 may be found in Appendix C.
The complete code and instructions for reproducing our results are available at the following link: https://github.com/jirehhuang/bcb

## 6.1 Random Networks

For our main experiments, we generated CBN models with $p = 10$ variables in the interest of representing a diverse range of scenarios in our empirical comparisons. The structures were randomly generated, with the reward variable designated to have $|\mathrm{Pa}_p^{\mathcal{G}}| = 3$ parents, according to a process adapted from de Kroon et al. (2022). The conditional probability distributions of each CBN were likewise generated randomly. Atomic interventions as described in Section 5 were allowed on all variables excluding the reward variable, with the discrete variables assumed to be binary, resulting in $|\mathcal{A}| = 2(p-1) = 18$ actions. We evaluated our BBB methodology against algorithms designed to optimize cumulative regret, including the popular standard MAB algorithms Bayes-UCB, TS, and UCB that do not utilize causal assumptions (see Section 4). Additionally, we compared against what can be interpreted as a highly optimistic version of the central node approach by Lu et al. (2021), introduced in Section 1, by presupposing knowledge of the direct causes of the reward variable. In particular, for Bayes-UCB∗, TS∗, and UCB∗, we executed the respective algorithms over the reduced action set $\mathcal{A}' = \{a \in \mathcal{A} : \langle a\rangle \in \mathrm{Pa}_p^{\mathcal{G}}\}$. Accordingly, for the cases where $\langle a^*\rangle \notin \mathrm{Pa}_p^{\mathcal{G}}$, we redefined the optimal intervention to $a^* = \operatorname{argmax}_{a \in \mathcal{A}'} \mu_a$ when evaluating the regret of TS∗ and (Bayes-)UCB∗. To circumvent confounding arising from the differences in algorithm designs and any relevant parameter tuning, we focus our comparisons within algorithm types, e.g., amongst TS, TS∗, and BBB-TS. Using the process described above, we generated 100 CBN models for each distributional setting. For each CBN, we executed the competing methods 10 times, but our BBB methods only 5 times due to their greater computational expense, with $T = 5000$ time steps.
The results presented are averaged across all simulations for each time step, with the cumulative regret normalized by the optimal reward $\mu^*$ to ensure that each CBN model contributes comparably. Where relevant, we tuned the competing methods' parameters in their favor and applied the same parameters to our BBB implementations. We computed exact parent set probabilities in BBB using an algorithm proposed by Pensar et al. (2020) in an effort to assess our BBB methodology in the most precise implementation of its formulation. Additional experimental details may be found in Appendix B, and additional supporting results are provided in Appendix C. Note that while the simulations were executed across a cluster of heterogeneous nodes, it was not difficult to observe that the computational requirements of BBB vastly exceeded those of the considered competing methods. The average execution time of each iteration of our discrete and Gaussian implementations of BBB on the random networks was respectively around 4.3 and 8.2 seconds, compared to less than 0.15 seconds for the competing methods. This includes the exact computation of parent set posteriors, computing or updating the corresponding backdoor adjustment estimates, and in the Gaussian case, ancestor posterior probabilities. While BBB requires significantly greater computational expense, the cost must be weighed against other considerations such as the time and resources required for additional exploration of interventions.

## 6.1.1 Cumulative Regret Comparisons

The empirical cumulative regret results in Figure 1 demonstrate that, in both the discrete and Gaussian settings and for all algorithms, our BBB methodology is able to reliably outperform the non-causal variants with finite samples of observational data. The improvement increases monotonically with increasing sample sizes of observational data ($n_0$).
While the corresponding variants of Bayes-UCB and TS perform comparably, UCB achieves substantially lower regret because the parameter c in (6) was tuned to maintain the empirically most preferred balance between exploration and exploitation. In particular, UCB is able to avoid excessive exploration by scaling its padding term with a relatively small constant, whereas Bayes-UCB maintains a relatively high minimum exploration rate according to its formulation in (8), as does TS. In comparison to the optimistic central node versions of the algorithms, BBB generally achieves lower cumulative regret with n0 ≥ 800 in the discrete setting and n0 ≥ 40 in the Gaussian setting. Recall that, in practical applications, the central node approach relies on the availability of large-sample observational data as well as a sequence of interventions to recover the reward-generating variables Pa_p^G. Based on our simulation settings, this reduces the action set from |A| = 18 arms to only |A′| = 2|Pa_p^G| = 6 arms, and we additionally artificially restrict a∗ ∈ A′ to evaluate the regret. Furthermore, the regret results reported for these methods do not include the interventions required to identify Pa_p^G, thus representing a kind of best-case scenario for the central node approach. In contrast, our methodology derives substantial benefit from modest amounts of observational data samples n0. To compensate for the fact that BBB utilizes n0 samples of observational data prior to investigating arms, we also present results where the competing algorithms TS(∗) and (Bayes-)UCB(∗) are given a head start of n0 ∈ {100 · 2^k : k = 0, 1, . . . , 5} time steps to explore arms before incurring regret. Indeed, we find that our BBB methods are able to perform competitively against the competing methods even with this head start. The results for the discrete setting are shown in Figure 2.
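To illustrate the role of the exploration constant discussed above, here is a generic sketch of the two index types. The paper's exact definitions in (6) and (8) are not reproduced in this excerpt, so a standard UCB1-style padding term and a Gaussian-posterior quantile at level 1 − 1/t are assumed.

```python
import math
from statistics import NormalDist

def ucb_index(mean, n_pulls, t, c=0.1):
    """Frequentist UCB: empirical mean plus a padding term scaled by a
    tunable constant c; a small c curbs exploration."""
    return mean + c * math.sqrt(math.log(t) / n_pulls)

def bayes_ucb_index(post_mean, post_sd, t):
    """Bayes-UCB: an upper quantile of the posterior; the quantile level
    1 - 1/t imposes a comparatively high minimum exploration rate
    regardless of any constant tuning."""
    return NormalDist(post_mean, post_sd).inv_cdf(1 - 1 / t)
```

With a small c, the UCB padding shrinks quickly in the pull count, whereas the Bayes-UCB quantile stays well above the posterior mean until the posterior itself concentrates.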
The Gaussian results are omitted because n0 ≤ 320 is relatively small, so the head start does not offer substantial benefit to the competing methods. In all cases, BBB still significantly outperforms the standard algorithms TS and (Bayes-)UCB given the head start. Given sufficient samples of observational data, BBB still performs comparably to, if not better than, the optimistic central node variants in terms of cumulative regret, for which the head start is only an additional unwarranted advantage given that they already require significantly more observational data as well as additional interventions.

![12_image_0.png](12_image_0.png)

Figure 1: Average cumulative regret against T = 5000 time steps comparing Alg, Alg∗, and BBB-Alg for Alg ∈ {Bayes-UCB, TS, UCB}. BBB methods were executed with n0 = 100 · 2^k in the discrete setting and n0 = 10 · 2^k in the Gaussian setting.

## 6.1.2 Structure Identification

In addition to the cumulative regret performance, it is of interest to consider the structure identification behavior of the BBB approach in our experiments. We measure the concentration of the posterior probability across DAGs G with respect to the underlying causal graph G∗ using the edge support sum of absolute errors (ESSAE), which is given at time t by

$$\sum_{i=1}^{p}\sum_{j\neq i}\left|P(j\in\mathbf{Pa}_{i}^{\mathcal{G}}\mid\mathcal{D}[t])-\mathbf{1}\left\{j\in\mathbf{Pa}_{i}^{\mathcal{G}^{*}}\right\}\right|.$$

This quantity may be understood as a probabilistic version of the structural Hamming distance, a common metric in the Bayesian network structure learning literature. Lower ESSAE corresponds to greater concentration of the posterior probability around the causal graph G∗. The results are provided in Figure 3. In the discrete results for BBB-Bayes-UCB and BBB-TS, the initial ESSAE is unsurprisingly lower for the larger sample sizes, but the trend quickly reverses as the time steps progress.
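The ESSAE defined above reduces to a sum of absolute differences between posterior edge probabilities and the true adjacency; a minimal sketch follows, where the matrix convention (entry [j, i] giving P(j ∈ Pa_i | D[t]), matching the true adjacency's entry [j, i] = 1 for an edge j → i) is an assumption.

```python
import numpy as np

def essae(edge_prob, true_adj):
    """Edge support sum of absolute errors: sum over ordered pairs (j, i),
    j != i, of |P(j in Pa_i | D[t]) - 1{j in Pa_i of the true graph}|."""
    edge_prob = np.asarray(edge_prob, dtype=float)
    true_adj = np.asarray(true_adj, dtype=float)
    diff = np.abs(edge_prob - true_adj)
    np.fill_diagonal(diff, 0.0)  # the diagonal (j = i) is excluded
    return diff.sum()

# Toy example: two nodes, true edge 0 -> 1, posterior edge probability 0.9
# for the true edge and 0.1 for the reversed edge.
val = essae([[0.0, 0.9], [0.1, 0.0]], [[0, 1], [0, 0]])
# val = |0.9 - 1| + |0.1 - 0| = 0.2
```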
This effect is also observed in the Gaussian results, but at an accelerated pace. This behavior is perhaps best understood in complement to the cumulative regret results in Figure 1. If P is faithful to G, then if n0 is large, the structure prior P(G | D0) is expected to concentrate around the Markov equivalence class, which entails identification of the skeleton and, in general, partial identification of the orientations. Additionally, the conditional priors π^0_{a|Z} will be precise models, allowing BBB to quickly identify and select arm(s) a ∈ A with small regret µ∗ − µ_a, which has the effect of clarifying the orientation of edges incident to such X_⟨a⟩. The policies take no interest in determining the orientation of the remaining edges if the uncertainty does not indicate potential to identify more profitable actions. In contrast, when n0 is small, the greater uncertainty in both the structure prior and the conditional priors encourages the exploration of many different arms, thus incurring greater cumulative regret. In addition to clarifying the orientation of the incident edges, selecting arm(s) a ∈ A contributes to identifying the direct edge connections excluding those from Pa^G_⟨a⟩ to X_⟨a⟩, as can be seen in (5). Thus, the skeleton is recovered and more edge orientations are identified than in the case where n0 is large, achieving lower ESSAE at the cost of greater cumulative regret.

![13_image_0.png](13_image_0.png)

Figure 2: Discrete average cumulative regret for Alg ∈ {Bayes-UCB, TS, UCB} with a head start of n0 ∈ {100 · 2^k : k = 0, 1, . . . , 5} time steps for the competing methods.

![13_image_1.png](13_image_1.png)

Figure 3: Discrete (n0 = 100 · 2^k) and Gaussian (n0 = 10 · 2^k) results of the average ESSAE of the full graph structure for BBB-Alg, Alg ∈ {Bayes-UCB, TS, UCB}.
Notably, while this reversal appears to be absent in the discrete results for BBB-UCB, in actuality it has simply not yet been realized even after T = 5000 time steps due to the small exploration constant c in (7).

## 6.2 Scaling BBB with MCMC

Despite the efficiency of the algorithm proposed by Pensar et al. (2020), its O(3^p p) scaling in p means that exact computation of the parent set posteriors is not always feasible. Pensar et al. (2020) noted that their algorithm executed on 20 variables in about 25 minutes, which would translate to over 86 days for 5000 time steps. While potentially justifiable depending on the context of the application, such as when interventions are particularly expensive or time-consuming, such an intensive per-time-step computational requirement is likely to limit the practical application of BBB to larger systems. As discussed in Section 3, the parent set probabilities are derived from a posterior distribution of graph structures, which may be estimated by MCMC. In this section, we discuss the implementation of BBB using MCMC to estimate the structure posterior. We provide preliminary results in the discrete setting assessing this approximation against the exact computation of parent sets and evaluating its performance on a realistic reference network with p = 20 variables.
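Estimating parent-set probabilities from a collection of MCMC-sampled DAGs amounts to an empirical frequency over the samples; a minimal sketch follows, where the adjacency-matrix representation (entry [j, i] = 1 for an edge j → i) is an assumption.

```python
import numpy as np

def parent_set_posterior(sampled_dags, i, Z):
    """Estimate P(Pa_i = Z | D[t]) as the fraction of sampled DAGs
    (adjacency matrices with entry [j, i] = 1 for an edge j -> i)
    whose parent set of node i equals Z."""
    Z = frozenset(Z)
    hits = sum(
        frozenset(np.flatnonzero(G[:, i]).tolist()) == Z for G in sampled_dags
    )
    return hits / len(sampled_dags)

# Toy example: three sampled 3-node DAGs; node 2 has parents {0, 1}
# in the first two samples and {1} in the third.
dags = [
    np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]]),
    np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]]),
    np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]]),
]
est = parent_set_posterior(dags, i=2, Z={0, 1})
# est = 2/3
```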
The posterior distribution of graph structures P(G | D[t]) may be approximated by sampling a collection of DAGs G^t from P(G | D[t]) using MCMC and empirically estimating (4) from the sampled graphs:

$$P(\mathbf{Pa}_{i}=\mathbf{Z}\mid\mathcal{D}[t])\approx\frac{1}{|\mathbf{G}^{t}|}\sum_{\mathcal{G}'\in\mathbf{G}^{t}}\mathbf{1}\left\{\mathbf{Pa}_{i}^{\mathcal{G}'}=\mathbf{Z}\right\}.$$

Various MCMC sampling schemes for DAGs have been developed, the most basic of which is the structure MCMC sampler, which accepts proposed single-edge addition, deletion, or reversal steps according to a Metropolis–Hastings probability (Madigan et al., 1995; Giudici & Castelo, 2003). With order MCMC, Friedman & Koller (2003) reduced the search space to the topological orderings of the p nodes, significantly accelerating convergence but retaining a degree of bias since a given DAG may belong to multiple orders. Kuipers & Moffa (2017) addressed this issue of bias by considering the space of ordered partitions in partition MCMC, though at the price of a greater computational requirement (Suter et al., 2021). In our simulations, we chose to use order MCMC despite the potential bias due to its computational advantage over partition MCMC while achieving satisfactory performance. For further improvements in efficiency, we applied the hybrid approach proposed by Kuipers et al. (2022) in which DAGs are sampled from a loosely restricted initial search space, with built-in provisions to expand the search space. In particular, before performing any interventions, we obtained an initial search space with the PC algorithm (Spirtes & Glymour, 1991) and sampled DAGs from P(G | D0). For subsequent iterations, we restricted the search space using the structure posterior estimated from the immediately preceding iteration. Whenever conducting the restricted sampling scheme, the search space was allowed to be extended as designed by Kuipers et al.
(2022) to account for false negatives in the restriction. We first compared the cumulative regret amongst exact computation of parent set posteriors (Exact), approximation by MCMC without restricting the search space at any step (MCMC), and MCMC with restriction of the search space as described (Hybrid MCMC). Each of these was executed twice on each of the 100 randomly generated discrete CBNs, and the results at various time steps are shown in Figure 4. Hybrid MCMC performed equivalently to unrestricted MCMC, enjoying significant computational reductions without exhibiting any inferiority in performance. However, while (hybrid) MCMC performed comparably with exact computation in most cases, there were significantly more extreme values in the MCMC methods. These may have resulted from the bias inherent to order MCMC, but given that general settings were applied and convergence was not assessed for each iteration, another potential explanation would be insufficient iterations for certain CBNs. In general, approximation of the structure posterior with MCMC appears to be an acceptable, scalable alternative to exact computation.

![15_image_0.png](15_image_0.png)

Figure 4: Cumulative regret of BBB-Alg for Alg ∈ {Bayes-UCB, TS, UCB} with n0 = 3200 on the discrete random networks from Section 6.1 for time steps t ∈ {500, 2000, 3500, 5000}, comparing exact computation of parent set posteriors (Exact) with MCMC approximations (MCMC, Hybrid MCMC). Each boxplot represents 200 data points, with the number of points above 60 labeled above.

We then applied BBB with hybrid MCMC to CHILD, a moderately sized discrete reference network with p = 20 nodes for diagnosing congenital heart disease (Spiegelhalter, 1992). CHILD was constructed with domain experts and made available in a Bayesian network repository (Spiegelhalter, 1992; Scutari, 2010). The network was preprocessed to have binary variables, and the target variable was set to LowerBodyO2, a leaf node with two parents.
All other variables were intervenable, with |A| = 38 arms, though only |A′| = 4 for the optimistic central node approach. The cumulative regret results for 20 executions of the various algorithms are presented in Figure 5. Even after T = 5000 time steps, most methods fail to exhibit meaningful convergence in the form of a flattening of the cumulative regret curve. This is because all but two arms have expected reward µ_a > 0.5, making the optimal reward of approximately µ∗ = 0.622 difficult to distinguish without a great deal of exploration. Unsurprisingly, the central node approach performs exceptionally well, having only four arms to investigate. With n0 = 3200, BBB was able to adequately distinguish the arms with higher expected reward, achieving cumulative regret competitive with the central node approach. The average execution time of each iteration of BBB with MCMC on CHILD was around 9.4 seconds, compared to less than 0.23 seconds for the competing methods.

## 7 Discussion

In this paper, we proposed the BBB framework for enhancing experimental investigations with observational data. BBB aggregates various strategies for estimating and modeling the parameters of interest with jointly interventional and observational data in order to efficiently utilize all available data to inform exploitation and exploration. Applied in our methodology but also of independent interest, we derived a well-performing approximation for the variance of the discrete backdoor adjustment estimator, and, in the Gaussian setting, we characterized the interventional variance using the observational distribution and proposed a simple graphical criterion for sharing information between arms.
We supplied preliminary regret analysis justifying our methodology, and empirically validated our proposed algorithms through extensive numerical experiments against standard MAB algorithms as well as a generously optimistic version of a recently proposed CB approach.

![16_image_0.png](16_image_0.png)

Figure 5: Average cumulative regret and median cumulative regret with 95% percentile intervals against T = 5000 time steps comparing Alg, Alg∗, and BBB-Alg for Alg ∈ {Bayes-UCB, TS, UCB}, executed on CHILD (p = 20). BBB methods were executed with n0 = 3200 using hybrid MCMC to estimate the structure posterior.

Although our work notably does not depend on certain restrictive assumptions made by previous work, namely knowledge of the causal graph or large-sample observational information, our proposed methodology nonetheless requires a causally sufficient system, which may not be available in practice. In the presence of unobserved confounders, an obvious challenge is that the variables in the parent sets that BBB uses for backdoor adjustment may not all be observed. Since sets that satisfy the backdoor criterion are not limited to the parent set, one approach to this setting would be to otherwise identify valid adjustment sets. Perhaps the most obvious extension of our methodology would be to model the underlying ancestral graph instead of the DAG. From the ancestral graph, causal effects may be estimated from observational data by identifying valid backdoor adjustment sets based on its structure. The challenge of scaling BBB was addressed briefly with hybrid MCMC in Section 6, but the limits of the computational feasibility of BBB have yet to be carefully investigated or precisely articulated. Preliminary investigations suggest that BBB can scale to well over 100 variables, but extended empirical evaluation is necessary. A careful technical study of the posterior distributions is left as future work to complete the theoretical analysis of the cumulative regret.
Finally, it would be interesting to consider how to share information between arms in the discrete setting as in Proposition 3, with a similarly simple graphical criterion.

## Acknowledgements

This work was supported by US NSF grant DMS-1952929. This work used computational and storage services associated with the Hoffman2 Shared Cluster provided by the UCLA Institute for Digital Research and Education's Research Technology Group.

## References

Rajeev Agrawal. Sample mean based index policies by O(log n) regret for the multi-armed bandit problem. *Advances in Applied Probability*, 27(4):1054–1078, 1995. https://doi.org/10.2307/1427934.

Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time Analysis of the Multiarmed Bandit Problem. *Machine Learning*, 47(2):235–256, 2002. https://doi.org/10.1023/A:1013689704352.

Hamsa Bastani, David Simchi-Levi, and Ruihao Zhu. Meta Dynamic Pricing: Transfer Learning Across Experiments. *Management Science*, 68(3):1865–1881, 2022. https://doi.org/10.1287/mnsc.2021.4071.

Soumya Basu, Branislav Kveton, Manzil Zaheer, and Csaba Szepesvári. No Regrets for Learning the Prior in Bandits. *Advances in Neural Information Processing Systems*, 34:28029–28041, 2021. https://proceedings.neurips.cc/paper/2021/file/ec1f764517b7ffb52057af6df18142b7-Paper.pdf.

Donald A Berry and Bert Fristedt. Bandit problems: sequential allocation of experiments (Monographs on statistics and applied probability). *London: Chapman and Hall*, 5(71-87):7–7, 1985. https://link.springer.com/book/10.1007/978-94-015-3711-7.

Arnoud de Kroon, Joris Mooij, and Danielle Belgrave. Causal bandits without prior knowledge using separating sets. In *First Conference on Causal Learning and Reasoning*, 2022. https://proceedings.mlr.press/v177/kroon22a.html.

Daniel Eaton and Kevin Murphy. Exact Bayesian structure learning from uncertain interventions.
In Marina Meila and Xiaotong Shen (eds.), *Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics*, volume 2 of *Proceedings of Machine Learning Research*, pp. 107–114, San Juan, Puerto Rico, 21–24 Mar 2007. PMLR. https://proceedings.mlr.press/v2/eaton07a.html.

Nir Friedman and Daphne Koller. Being Bayesian About Network Structure. A Bayesian Approach to Structure Discovery in Bayesian Networks. *Machine Learning*, 50(1):95–125, 2003. https://doi.org/10.1023/A:1020249912095.

Paolo Giudici and Robert Castelo. Improving Markov Chain Monte Carlo Model Search for Data Mining. *Machine Learning*, 50(1):127–158, 2003. https://doi.org/10.1023/A:1020202028934.

Kristjan Greenewald, Dmitriy Katz, Karthikeyan Shanmugam, Sara Magliacane, Murat Kocaoglu, Enric Boix Adsera, and Guy Bresler. Sample Efficient Active Learning of Causal Trees. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 32. Curran Associates, Inc., 2019. https://proceedings.neurips.cc/paper/2019/file/5ee5605917626676f6a285fa4c10f7b0-Paper.pdf.

Dominique MA Haughton. On the Choice of a Model to Fit Data from an Exponential Family. *The Annals of Statistics*, pp. 342–355, 1988. https://www.jstor.org/stable/2241441.

David Heckerman, Dan Geiger, and David M Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. *Machine Learning*, 20(3):197–243, 1995. https://doi.org/10.1007/BF00994016.

Joey Hong, Branislav Kveton, Manzil Zaheer, and Mohammad Ghavamzadeh. Hierarchical Bayesian Bandits. In *International Conference on Artificial Intelligence and Statistics*, pp. 7724–7741. PMLR, 2022. https://proceedings.mlr.press/v151/hong22c.html.

Markus Kalisch and Peter Bühlmann. Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm. *Journal of Machine Learning Research*, 8(22):613–636, 2007.
http://jmlr.org/papers/v8/kalisch07a.html.

Emilie Kaufmann, Olivier Cappe, and Aurelien Garivier. On Bayesian Upper Confidence Bounds for Bandit Problems. In Neil D. Lawrence and Mark Girolami (eds.), *Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics*, volume 22 of *Proceedings of Machine Learning Research*, pp. 592–600, La Palma, Canary Islands, 21–23 Apr 2012. PMLR. https://proceedings.mlr.press/v22/kaufmann12.html.

Jack Kuipers and Giusi Moffa. Partition MCMC for Inference on Acyclic Digraphs. *Journal of the American Statistical Association*, 112(517):282–299, 2017. https://doi.org/10.1080/01621459.2015.1133426.

Jack Kuipers, Polina Suter, and Giusi Moffa. Efficient Sampling and Structure Learning of Bayesian Networks. *Journal of Computational and Graphical Statistics*, 0(0):1–12, 2022. https://doi.org/10.1080/10618600.2021.2020127.

Branislav Kveton, Mikhail Konobeev, Manzil Zaheer, Chih-wei Hsu, Martin Mladenov, Craig Boutilier, and Csaba Szepesvari. Meta-Thompson Sampling. In *International Conference on Machine Learning*, pp. 5884–5893. PMLR, 2021. https://proceedings.mlr.press/v139/kveton21a.html.

Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. *Advances in Applied Mathematics*, 6(1):4–22, 1985. ISSN 0196-8858. https://doi.org/10.1016/0196-8858(85)90002-8.

Finnian Lattimore, Tor Lattimore, and Mark D Reid. Causal Bandits: Learning Good Interventions via Causal Inference. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 29. Curran Associates, Inc., 2016. https://proceedings.neurips.cc/paper/2016/file/b4288d9c0ec0a1841b3b3728321e7088-Paper.pdf.

Sanghack Lee and Elias Bareinboim. Structural causal bandits: Where to intervene? In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31.
Curran Associates, Inc., 2018. https://proceedings.neurips.cc/paper/2018/file/c0a271bc0ecb776a094786474322cb82-Paper.pdf.

Xiuyuan Lu and Benjamin Van Roy. Information-Theoretic Confidence Bounds for Reinforcement Learning. *Advances in Neural Information Processing Systems*, 32, 2019. https://proceedings.neurips.cc/paper/2019/file/411ae1bf081d1674ca6091f8c59a266f-Paper.pdf.

Yangyi Lu, Amirhossein Meisami, Ambuj Tewari, and William Yan. Regret Analysis of Bandit Problems with Causal Background Knowledge. In Jonas Peters and David Sontag (eds.), *Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)*, volume 124 of *Proceedings of Machine Learning Research*, pp. 141–150. PMLR, 03–06 Aug 2020. https://proceedings.mlr.press/v124/lu20a.html.

Yangyi Lu, Amirhossein Meisami, and Ambuj Tewari. Causal Bandits with Unknown Graph Structure. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), *Advances in Neural Information Processing Systems*, volume 34, pp. 24817–24828. Curran Associates, Inc., 2021. https://proceedings.neurips.cc/paper/2021/file/d010396ca8abf6ead8cacc2c2f2f26c7-Paper.pdf.

Marloes H. Maathuis, Markus Kalisch, and Peter Bühlmann. Estimating high-dimensional intervention effects from observational data. *The Annals of Statistics*, 37(6A):3133–3164, 2009. https://doi.org/10.1214/09-AOS685.

David Madigan, Jeremy York, and Denis Allard. Bayesian Graphical Models for Discrete Data. *International Statistical Review / Revue Internationale de Statistique*, 63(2):215–232, 1995. ISSN 03067734, 17515823. http://www.jstor.org/stable/1403615.

Aurghya Maiti, Vineet Nair, and Gaurav Sinha. Causal Bandits on General Graphs. *arXiv preprint arXiv:2107.02772*, 2021. https://arxiv.org/abs/2107.02772.

Vineet Nair, Vishakha Patil, and Gaurav Sinha. Budgeted and Non-Budgeted Causal Bandits.
In Arindam Banerjee and Kenji Fukumizu (eds.), *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*, volume 130 of *Proceedings of Machine Learning Research*, pp. 2017–2025. PMLR, 13–15 Apr 2021. https://proceedings.mlr.press/v130/nair21a.html.

Judea Pearl. Causal diagrams for empirical research. *Biometrika*, 82(4):669–688, 12 1995. ISSN 0006-3444. https://doi.org/10.1093/biomet/82.4.669.

Judea Pearl. *Causality*. Cambridge University Press, 2000.

Amit Peleg, Naama Pearl, and Ron Meir. Metalearning Linear Bandits by Prior Update. In *International Conference on Artificial Intelligence and Statistics*, pp. 2885–2926. PMLR, 2022. https://proceedings.mlr.press/v151/peleg22a.html.

Johan Pensar, Topi Talvitie, Antti Hyttinen, and Mikko Koivisto. A Bayesian Approach for Estimating Causal Effects from Observational Data. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34(04):5395–5402, Apr. 2020. https://ojs.aaai.org/index.php/AAAI/article/view/5988.

Robert W Robinson. Counting unlabeled acyclic digraphs. In Charles H. C. Little (ed.), *Combinatorial Mathematics V*, pp. 28–43. Springer, Berlin, Heidelberg, 1977. ISBN 978-3-540-37020-8. https://doi.org/10.1007/BFb0069178.

Daniel Russo and Benjamin Van Roy. Learning to Optimize via Posterior Sampling. *Mathematics of Operations Research*, 39(4):1221–1243, 2014. https://doi.org/10.1287/moor.2014.0650.

Gideon Schwarz. Estimating the Dimension of a Model. *The Annals of Statistics*, 6(2):461–464, 1978. ISSN 00905364. https://doi.org/10.1214/aos/1176344136.

Marco Scutari. Learning Bayesian Networks with the bnlearn R Package. *Journal of Statistical Software*, 35(3):1–22, 2010. https://doi.org/10.18637/jss.v035.i03.

Max Simchowitz, Christopher Tosh, Akshay Krishnamurthy, Daniel J Hsu, Thodoris Lykouris, Miro Dudik, and Robert E Schapire. Bayesian decision-making under misspecified priors with applications to meta-learning.
*Advances in Neural Information Processing Systems*, 34:26382–26394, 2021. https://proceedings.neurips.cc/paper/2021/file/ddcbe25988981920c872c1787382f04d-Paper.pdf.

David J Spiegelhalter. Learning in probabilistic expert systems. *Bayesian Statistics*, 4:447–465, 1992.

Peter Spirtes and Clark Glymour. An Algorithm for Fast Recovery of Sparse Causal Graphs. *Social Science Computer Review*, 9(1):62–72, 1991. https://doi.org/10.1177/089443939100900106.

Polina Suter, Jack Kuipers, Giusi Moffa, and Niko Beerenwinkel. Bayesian structure learning and sampling of Bayesian networks with the R package BiDAG. *arXiv preprint arXiv:2105.00488*, 2021. https://arxiv.org/abs/2105.00488.

Richard S Sutton and Andrew G Barto. *Reinforcement Learning: An Introduction*. MIT Press, second edition, 2018.

William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. *Biometrika*, 25(3-4):285–294, 12 1933. ISSN 0006-3444. https://doi.org/10.1093/biomet/25.3-4.285.

Runzhe Wan, Lin Ge, and Rui Song. Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models. *Advances in Neural Information Processing Systems*, 34:29655–29668, 2021. https://proceedings.neurips.cc/paper/2021/file/f7cfdde9db36af8e0d9a6d123d5c385e-Paper.pdf.

Hao Wang. *Constraint-Based Learning of Interventional Markov Equivalence Classes on High-Dimensional Data*. PhD thesis, UCLA, 2022. https://escholarship.org/uc/item/9j15j0fq.

Akihiro Yabe, Daisuke Hatano, Hanna Sumita, Shinji Ito, Naonori Kakimura, Takuro Fukunaga, and Ken-ichi Kawarabayashi. Causal Bandits with Propagating Inference. In Jennifer Dy and Andreas Krause (eds.), *Proceedings of the 35th International Conference on Machine Learning*, volume 80 of *Proceedings of Machine Learning Research*, pp. 5512–5520. PMLR, 10–15 Jul 2018. https://proceedings.mlr.press/v80/yabe18a.html.

## A Proofs and Discussions

Proof of Proposition 1.
Let T_a = {t ≤ T : a_t = a} be the time steps in which a is selected. Then by inequality (10), Assumption 1, and the concavity of √x, we have

$$\sum_{t\in\mathcal{T}_{a}}\mathrm{E}[U_{a}(t)-\mu_{a}]\leq c\sqrt{\log T}\left\{c_{1}\sum_{t\in\mathcal{T}_{a}}[n_{0}+n_{a}(t-1)]^{-\frac{1}{2}}+c_{2}\sqrt{d_{a}}\sum_{t\in\mathcal{T}_{a}}\exp\left[-\frac{\delta_{a}n_{a}(t-1)}{2}\right]\right\}.$$

Now the first summation on the right side satisfies

$$\sum_{t\in\mathcal{T}_{a}}[n_{0}+n_{a}(t-1)]^{-\frac{1}{2}}=\sum_{j=0}^{n_{a}(T)-1}(n_{0}+j)^{-\frac{1}{2}}\leq\int_{n_{0}-1}^{n_{0}+n_{a}(T)-1}x^{-\frac{1}{2}}\,dx=2\left(\sqrt{(n_{0}-1)+n_{a}(T)}-\sqrt{n_{0}-1}\right).$$

A similar derivation shows that the second summation satisfies

$$\sum_{t\in\mathcal{T}_{a}}\exp\left[-\frac{\delta_{a}n_{a}(t-1)}{2}\right]\leq c_{a},$$

where c_a is a constant. Therefore,

$$\sum_{t\in\mathcal{T}_{a}}\mathrm{E}[U_{a}(t)-\mu_{a}]\leq c\left[2c_{1}\left(\sqrt{(n_{0}-1)+n_{a}(T)}-\sqrt{n_{0}-1}\right)+c_{2}c_{a}\sqrt{d_{a}}\right]\sqrt{\log T}.$$

By the Cauchy–Schwarz inequality,

$$\sum_{a\in\mathcal{A}}\sqrt{(n_{0}-1)+n_{a}(T)}\leq\sqrt{K}\left\{K(n_{0}-1)+\sum_{a}n_{a}(T)\right\}^{1/2}=\sqrt{KT+K^{2}(n_{0}-1)}.$$

Summing over actions, we arrive at

$$\sum_{t=1}^{T}\mathrm{E}[U_{a}(t)-\mu_{a}]=\sum_{a\in\mathcal{A}}\sum_{t\in\mathcal{T}_{a}}\mathrm{E}[U_{a}(t)-\mu_{a}]\leq\left[c_{4}\left(\sqrt{KT+K^{2}(n_{0}-1)}-\sqrt{K^{2}(n_{0}-1)}\right)+c_{5}\sum_{a\in\mathcal{A}}\sqrt{d_{a}}\right]\sqrt{\log T},\tag{17}$$

where c_4 = 2cc_1 and c_5 = cc_2 max_a c_a. Next, we show that the second term in (9) is bounded. By the definition of U_a(t) in (7), Assumption 2 implies

$$P(\mu^{*}>U_{a^{*}}(t))=\mathrm{E}\left[P(\mu_{a^{*}}>U_{a^{*}}(t)\mid\mathcal{D}[t-1])\right]\leq c_{3}(t-1)^{-b}$$

for t ≥ 2, and thus,

$$\sum_{t=1}^{T}P(\mu_{a^{*}}>U_{a^{*}}(t))\leq1+c_{3}\sum_{t=1}^{\infty}t^{-b}.$$

Therefore, the second term is bounded by a constant c_6.
Accordingly, (11) follows from (17). □

Discussion of Assumption 1 and Assumption 2. We now demonstrate how to verify Assumption 1 and Assumption 2 for Proposition 1. Recall that the posterior distribution p(µ_a | D[t]) is defined via averaging over possible parent sets Z_a := Pa^G_⟨a⟩ ⊆ X \ X_⟨a⟩. We start with decomposing Var(µ_a | D[t]) by conditioning on Z_a:

$$\mathrm{Var}(\mu_{a}\mid\mathcal{D}[t])=\mathrm{E}\left[\mathrm{Var}(\mu_{a}\mid\mathbf{Z}_{a},\mathcal{D}[t])\mid\mathcal{D}[t]\right]+\mathrm{Var}\left[\mathrm{E}(\mu_{a}\mid\mathbf{Z}_{a},\mathcal{D}[t])\mid\mathcal{D}[t]\right].\tag{18}$$

Using the Gaussian setting as an example, under the conjugate prior (15), the conditional posterior p(µ_a | Z_a, D[t]) is a t-distribution with (n_0 + n_a(t) − |Z_a|) degrees of freedom and variance

$$\mathrm{Var}(\mu_{a}\mid\mathbf{Z}_{a},\mathcal{D}[t])=\frac{V_{a}^{t}(\mathbf{Z}_{a})}{n_{0}+n_{a}(t)},\tag{19}$$

where V^t_a(Z_a) = O_p(1) depends on D[t]. Now the first term on the right side of (18) is expressed as

$$\mathrm{E}\left[\mathrm{Var}(\mu_{a}\mid\mathbf{Z}_{a},\mathcal{D}[t])\mid\mathcal{D}[t]\right]=\sum_{\mathbf{Z}_{a}}\frac{V_{a}^{t}(\mathbf{Z}_{a})}{n_{0}+n_{a}(t)}P(\mathbf{Z}_{a}\mid\mathcal{D}[t]).$$

Taking further expectation to average over D[t], we get

$$\mathrm{E}\left\{\mathrm{E}\left[\mathrm{Var}(\mu_{a}\mid\mathbf{Z}_{a},\mathcal{D}[t])\mid\mathcal{D}[t]\right]\right\}=\frac{1}{n_{0}+n_{a}(t)}\mathrm{E}\left[\sum_{\mathbf{Z}_{a}}V_{a}^{t}(\mathbf{Z}_{a})P(\mathbf{Z}_{a}\mid\mathcal{D}[t])\right]\leq\frac{c_{1}^{2}}{n_{0}+n_{a}(t)},\tag{20}$$

where c_1^2 is an upper bound for the expectation in the second step for all a ∈ A. Let µ^t_a(Z_a) := E(µ_a | Z_a, D[t]) and G∗ be the true causal DAG.
Then,

$$\begin{aligned}\mathrm{Var}\left[\mu_{a}^{t}(\mathbf{Z}_{a})\mid\mathcal{D}[t]\right]&\leq\sum_{\mathbf{Z}_{a}\neq\mathbf{Pa}_{\langle a\rangle}^{\mathcal{G}^{*}}}\left[\mu_{a}^{t}(\mathbf{Z}_{a})-\mu_{a}^{t}(\mathbf{Pa}_{\langle a\rangle}^{\mathcal{G}^{*}})\right]^{2}P(\mathbf{Z}_{a}\mid\mathcal{D}[t])\\&\leq\max_{\mathbf{Z}_{a}}\left[\mu_{a}^{t}(\mathbf{Z}_{a})-\mu_{a}^{t}(\mathbf{Pa}_{\langle a\rangle}^{\mathcal{G}^{*}})\right]^{2}P(\mathbf{Z}_{a}\neq\mathbf{Pa}_{\langle a\rangle}^{\mathcal{G}^{*}}\mid\mathcal{D}[t])\\&\leq4C^{2}\,P(\mathbf{Z}_{a}\neq\mathbf{Pa}_{\langle a\rangle}^{\mathcal{G}^{*}}\mid\mathcal{D}[t]),\end{aligned}$$

where the last inequality is due to the assumption that µ_a ∈ [−C, C] for all a. Based on asymptotic approximation (Schwarz, 1978; Haughton, 1988), the posterior probability P(Z_a | D[t]) = O_p(exp[−δ(Z_a)n_a(t)]) for any Z_a ≠ Pa^{G∗}_⟨a⟩ when n_a(t) is large, where δ(Z_a) > 0 is a constant depending on Z_a. Let d_a be the number of candidate parent sets for X_⟨a⟩ and δ_a = min{δ(Z_a) : Z_a ≠ Pa^{G∗}_⟨a⟩}. Taking expectation, we arrive at

$$\mathrm{E}\left\{\mathrm{Var}\left[\mu_{a}^{t}(\mathbf{Z}_{a})\mid\mathcal{D}[t]\right]\right\}\leq c_{2}^{2}d_{a}\exp(-\delta_{a}n_{a}(t)),\tag{21}$$

for some positive constant c_2. Combining (20) and (21) leads to Assumption 1. Put Z_∗ ≡ Z_{a∗} and n_{a∗}(t) ≡ n_∗(t). To verify Assumption 2, we make use of concentration of the conditional posterior distribution p(µ∗ | Z_∗, D[t]) and concentration of E(µ∗ | Z_∗, D[t]). Define two events,

$$\mathcal{E}_{t,1}:=\left\{\frac{\mu^{*}-\mathrm{E}(\mu^{*}\mid\mathbf{Z}_{*},\mathcal{D}[t])}{\sqrt{\mathrm{E}\left[\mathrm{Var}(\mu^{*}\mid\mathbf{Z}_{*},\mathcal{D}[t])\mid\mathcal{D}[t]\right]}}>\frac{c}{2}\sqrt{\log t}\right\},\qquad\mathcal{E}_{t,2}:=\left\{\frac{\mathrm{E}(\mu^{*}\mid\mathbf{Z}_{*},\mathcal{D}[t])-\mathrm{E}(\mu^{*}\mid\mathcal{D}[t])}{\sqrt{\mathrm{Var}\left[\mathrm{E}(\mu^{*}\mid\mathbf{Z}_{*},\mathcal{D}[t])\mid\mathcal{D}[t]\right]}}>\frac{c}{2}\sqrt{\log t}\right\}.$$

By (18), Var(µ∗ | D[t]) ≥ E[Var(µ∗ | Z_∗, D[t]) | D[t]] and Var(µ∗ | D[t]) ≥ Var[E(µ∗ | Z_∗, D[t]) | D[t]].
Then, we have

$$P\left\{\frac{\mu^{*}-\mathrm{E}(\mu^{*}\mid\mathcal{D}[t])}{\sqrt{\mathrm{Var}(\mu^{*}\mid\mathcal{D}[t])}}>c\sqrt{\log t}\mid\mathcal{D}[t]\right\}\leq P(\mathcal{E}_{t,1}\mid\mathcal{D}[t])+P(\mathcal{E}_{t,2}\mid\mathcal{D}[t]).$$

For the first probability, we further condition on Z_∗:

$$P(\mathcal{E}_{t,1}\mid\mathcal{D}[t])=\sum_{\mathbf{Z}_{*}}P(\mathcal{E}_{t,1}\mid\mathbf{Z}_{*},\mathcal{D}[t])P(\mathbf{Z}_{*}\mid\mathcal{D}[t]).$$

According to (19), for a fixed D[t], Var(µ∗ | Z_∗, D[t]) = O_p(E[Var(µ∗ | Z_∗, D[t]) | D[t]]) = O_p(1/[n_0 + n_∗(t)]). Then for some constant c(Z_∗) > 0,

$$P(\mathcal{E}_{t,1}\mid\mathbf{Z}_{*},\mathcal{D}[t])\leq P\left\{\frac{\mu^{*}-\mathrm{E}(\mu^{*}\mid\mathbf{Z}_{*},\mathcal{D}[t])}{\sqrt{\mathrm{Var}(\mu^{*}\mid\mathbf{Z}_{*},\mathcal{D}[t])}}>c(\mathbf{Z}_{*})\sqrt{\log t}\mid\mathbf{Z}_{*},\mathcal{D}[t]\right\}.$$

Upper bounds for the right side can be established based on the concentration of many common posterior distributions. For the Gaussian setting, µ∗ | Z_∗, D[t] follows a t distribution with (n_0 + n_∗(t) − |Z_∗|) degrees of freedom. Existing concentration inequalities for the t distribution with r degrees of freedom, such as

$$P(|t_{r}|\geq x)\leq2\exp(-x^{2}/4)+\exp(-r/16)$$

in Lemma 18 of Wang (2022), can be used to show that P(E_{t,1} | Z_∗, D[t]) = O(t^{−b}) for some b and every candidate Z_∗, and therefore P(E_{t,1} | D[t]) = O(t^{−b}) for any D[t]. Note that E(µ∗ | Z_∗, D[t]) = µ^t_{a∗}(Z_∗) is a function of Z_∗ conditioning on D[t] and thus a discrete and bounded random variable.
The second probability
$$P({\mathcal{E}}_{t,2}\mid{\mathcal{D}}[t])=P\left\{{\frac{\mu_{a^{*}}^{t}({\mathbf{Z}}_{*})-\operatorname{E}(\mu_{a^{*}}^{t}({\mathbf{Z}}_{*})\mid{\mathcal{D}}[t])}{\sqrt{\operatorname{Var}\left[\mu_{a^{*}}^{t}({\mathbf{Z}}_{*})\mid{\mathcal{D}}[t]\right]}}}>{\frac{c}{2}}{\sqrt{\log t}}\mid{\mathcal{D}}[t]\right\}$$
may be shown to be O(t−b) by an existing concentration inequality for a discrete bounded random variable and the concentration of [Z∗ | D[t]] on the true parent set.

Proof of Proposition 2. Let Z be the parent set of X. Then by a special case of (1),
$$p(y\mid do(x))=\int p(y\mid x,\mathbf{z})p(\mathbf{z})d\mathbf{z}$$
$$=\int\phi(y\mid\psi x+\boldsymbol{\gamma}^{\top}\mathbf{z},\sigma^{2})\phi(\mathbf{z}\mid0,\Sigma_{\mathbf{Z}})d\mathbf{z}$$
$$=\phi(y\mid\psi x,\boldsymbol{\gamma}^{\top}\Sigma_{\mathbf{Z}}\boldsymbol{\gamma}+\sigma^{2}),$$
where ϕ(· | µ, Σ) is the probability density function of N(µ, Σ) and ΣZ is the covariance matrix of Z. Thus,
$$Y\mid do(X=x)\sim\mathrm{N}(\psi x,\boldsymbol{\gamma}^{\top}\Sigma_{\mathbf{Z}}\boldsymbol{\gamma}+\sigma^{2}).$$
Now representing [Y | X, Z] by a linear regression:
$$Y=\psi X+\boldsymbol{\gamma}^{\top}\mathbf{Z}+\varepsilon,$$
where $\varepsilon\sim N(0,\sigma^{2})\perp{\bf Z}\sim N(0,\Sigma_{\bf Z})$. Then we have:
$$\mathrm{Var}_{P}(Y-\psi X)=\mathrm{Var}_{P}(\mathbf{\gamma}^{\top}\mathbf{Z}+\varepsilon)$$
$$=\mathbf{\gamma}^{\top}\Sigma_{\mathbf{Z}}\mathbf{\gamma}+\sigma^{2}=\mathrm{Var}_{P}(Y\mid do(X=x)).$$

Proof of Proposition 3. The result follows straightforwardly from a simple graphical argument. Let $\Xi_{XY}^{G}$ denote the distinct directed paths from X to Y in the causal graph G given the model (13), where $\xi\in\Xi_{XY}^{G}$ consists of all the directed edges i → j ∈ E on the given path from X to Y .
Then the causal effect of X on Y can be expressed as the sum of propagated direct effects along all directed paths from X to Y :
$$\psi_{XY}:={\frac{\partial}{\partial x}}\mathrm{E}_{P}[Y\mid do(X=x)]=\sum_{\xi\in\Xi_{XY}^{G}}\prod_{i\to j\in\xi}\beta_{ij}.$$
We denote the variables under the intervention do(W = w), w ∈ R as $\tilde{\mathbf{X}}$, with resulting causal model
$$\tilde{X}_{j}=\sum_{i=1}^{p}\tilde{\beta}_{ij}\tilde{X}_{i}+\tilde{\varepsilon}_{j},\quad j=1,\ldots,p,$$
where
$${\tilde{\beta}}_{ij}={\begin{cases}0&{\mathrm{if~}}X_{j}=W\\\beta_{ij}&{\mathrm{otherwise,}}\end{cases}}\quad{\tilde{\varepsilon}}_{j}={\begin{cases}w&{\mathrm{if~}}X_{j}=W\\\varepsilon_{j}&{\mathrm{otherwise.}}\end{cases}}$$
The corresponding causal graph for $\tilde{\mathbf{X}}$ is the mutilated graph $\tilde{G}$ resulting from deleting all edges into W. The causal effect of $\tilde{X}$ on $\tilde{Y}$ is then
$$\psi_{\tilde{X}\tilde{Y}}:=\frac{\partial}{\partial x}\mathrm{E}_{P}[\tilde{Y}\mid do(\tilde{X}=x)]=\frac{\partial}{\partial x}\mathrm{E}_{P}[Y\mid do(X=x),do(W=w)]=\sum_{\xi\in\Xi_{XY}^{\tilde{G}}}\prod_{i\to j\in\xi}\tilde{\beta}_{ij}.$$
Since W does not block any directed path from X to Y , the mutilated graph $\tilde{G}$ retains all the directed paths from X to Y in G, so $\Xi_{XY}^{\tilde{G}}=\Xi_{XY}^{G}$. By the same reasoning, $\tilde{\beta}_{ij}=\beta_{ij}$ for all i → j ∈ ξ where $\xi\in\Xi_{XY}^{\tilde{G}}$. Therefore, for any w ∈ R,
$$\frac{\partial}{\partial x}\mathrm{E}_{P}[Y\mid do(X=x)]=\frac{\partial}{\partial x}\mathrm{E}_{P}[Y\mid do(X=x),do(W=w)].$$

## B Experimental Details

In this section, we include details regarding the experiments discussed in Section 6. For our random network simulations, we generated CBN models for p = 10 variables with reward variable Y = Xp. In order to investigate interesting structures with diverse non-trivial confounding relationships, we randomly generated graph structures using the following process adapted from de Kroon et al. (2022).
Given a fixed topological sort of the variables X1 ≺ · · · ≺ Xp where the reward variable is Y = Xp, we sequentially considered nodes in reverse topological order: i = p − 1, . . . , 1. We uniformly sampled the maximum out-degree of Xi, denoted di, from 1 to p − i. Then, for di times, we randomly selected Xj from {Xj ∈ X : Xi ≺ Xj}, adding Xi → Xj to the graph only if the edge was not already present and $|\mathrm{Pa}_{j}^{G}|<3$. We imposed an additional requirement that $|\mathrm{Pa}_{p}^{G}|=3$, randomly adding parents if necessary. If the generated structure consisted of multiple disconnected components, we rejected the structure and reattempted the process. The conditional probability distributions of each CBN were likewise generated randomly. For discrete networks, the variables were all assumed to be binary; the conditional probability tables were generated uniformly at random and normalized, and were accepted only if every edge Xj → Xi carries a sufficiently large causal effect, with |P[Xi = xi | do(Xj = xj)] − P(Xi = xi)| ≥ 0.05 for some xi ∈ Dom(Xi) and xj ∈ Dom(Xj). Additionally, we required the marginal probability of any single discrete level to be at least 0.01, and that the reward signal of the optimal intervention a∗ be sufficiently large with respect to the observational mean: µ∗ − E[Y ] ≥ 0.05. For Gaussian networks, according to the model expressed in (13), we sampled coefficients uniformly from [−1, −0.5] ∪ [0.5, 1] for $X_{i}\in\mathrm{Pa}_{j}^{G}$ and standard deviations from [√0.5, 1], and we normalized the system to have unit variance. Note that in the Gaussian setting, there are effectively |A| = 9 actions given that interventional data on the same variable may be combined as discussed in Section 5.2, which we implement for the competing methods as well. We found that $\langle a^{*}\rangle\in\mathrm{Pa}_{p}^{G}$ held for 98% of the discrete models that we randomly generated, though only for 65% of the random Gaussian models.
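As a concrete illustration, the rejection-sampling structure generator described above can be sketched in a few lines (our own minimal sketch; the function names and the plain parent-set-dictionary representation are illustrative, not the code used for the experiments):

```python
import random

def gen_structure(p, max_in_deg=3, rng=None):
    """Randomly generate a DAG over X_1 < ... < X_p with reward Y = X_p,
    following the reverse-topological-order procedure described above."""
    rng = rng or random.Random(0)
    while True:
        parents = {j: set() for j in range(1, p + 1)}
        # reverse topological order: i = p-1, ..., 1
        for i in range(p - 1, 0, -1):
            d_i = rng.randint(1, p - i)  # maximum out-degree of X_i
            for _ in range(d_i):
                j = rng.randint(i + 1, p)  # some X_j with X_i < X_j
                # add X_i -> X_j only if absent and the in-degree cap holds
                if i not in parents[j] and len(parents[j]) < max_in_deg:
                    parents[j].add(i)
        # enforce |Pa(X_p)| = 3, randomly adding parents if necessary
        while len(parents[p]) < max_in_deg:
            parents[p].add(rng.randint(1, p - 1))
        if connected(parents, p):
            return parents  # accept; otherwise reject and retry

def connected(parents, p):
    """Check that the undirected skeleton forms a single component."""
    adj = {v: set() for v in range(1, p + 1)}
    for j, pa in parents.items():
        for i in pa:
            adj[i].add(j)
            adj[j].add(i)
    seen, stack = {1}, [1]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == p
```

Every edge points from a smaller to a larger index, so acyclicity holds by construction.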
As discussed in Section 6, we artificially enforced ⟨a ∗⟩ ∈ PaG p when evaluating the regret of TS∗ and (Bayes-)UCB∗. The randomly generated networks were limited to size p = 10 in the interest of extending the scope of our empirical investigation in other aspects, namely in representing a large number of random causal models and executing enough repetitions and time steps to reasonably assess the expected performance. We emphasize that this limitation is primarily due to the breadth of our simulation study, whereas in practical applications there may not be the need for tens of thousands of executions. The in-degree restriction of |PaG j | ≤ 3 for all j ∈ V was largely due to the difficulty in reliably generating random conditional probability distributions that have meaningful causal effects and reward signals, as defined in the previous paragraph, for denser discrete networks. In general, it is not uncommon to assume the underlying DAG structure is sparse (Kalisch & Bühlmann, 2007). Similarly, the choice of |PaG p | = 3 was motivated by our interest in investigating non-trivial structures that have substantive connectivity between the reward variable and the intervened variables. Note that without sufficiently meaningful connectivity and causal effects, our BBB methodology is actually advantaged in that the interventional distributions generally will not be substantively different than the observational distribution, thus nullifying the need for backdoor adjustment and correspondingly causal structure learning. For Bayes-UCB(∗), the best quantile constant in (8) was c = 0, in agreement with the empirical recommendation by Kaufmann et al. (2012). The best exploration parameter for UCB in (6) was c = 1/(2√2) for UCB(∗) in the discrete setting. In the Gaussian setting, UCB and UCB∗ preferred c = 1/2 and c = 1/ √2, respectively, the latter of which we applied for BBB. 
We used standard uninformative priors for TS(∗), with α0 = β0 = 1 for the Beta prior and m0 = 0, ν0 = 1, and u0 = v0 = 1 for the N-Γ−1 prior. For BBB, we computed exact parent set probabilities (4) using the program¹ implementing the efficient algorithm developed and applied in Pensar et al. (2020), restricting the maximum size of parent sets to three and using the Bayesian Dirichlet equivalent uniform and Bayesian Gaussian equivalent scores. For the Gaussian setting, we checked the graphical criterion in Proposition 3 according to (16) with τ = 0.1. While we focused in Section 3 on designing the marginal posteriors according to (3), a notable difference between our proposed Bayesian CB framework and the Bayesian MAB approach described in Section 2 is that in our design, the posterior distribution is not modular, with the marginals $(\pi_{a}^{t})_{a\in\mathcal{A}}$ mutually dependent on the distribution of graph structures. However, because of software limitations and for simplicity, we sampled the criterion Ua(t) for each arm independently in the implementation of BBB-TS in our random network experiments (line 7 in Algorithm 1). Although preliminary results have shown the difference in empirical performance to be negligible, a more precise implementation would first sample a DAG G from the posterior distribution P(G | D[t]) and subsequently, for each arm a ∈ A, sample Ua(t) from $\pi_{a|\mathrm{Pa}_{\langle a\rangle}^{G}}^{t}$, which we apply using MCMC in our scaling experiments. For our investigation of scaling BBB with MCMC, we used the same generated random networks as previously described for Figure 4. For the CHILD network, we coerced all variables to binary variables, with the extraneous discrete states removed by sequentially merging states with the least marginal probability. We averaged the conditional probability distributions imposed by merged states weighted according to their marginal probabilities.
Order MCMC was implemented by extending the BiDAG package (Suter et al., 2021) to accommodate computing scores with ensemble data as described in Section 3. For each iteration of BBB, the structure posterior was estimated by conducting 10^4 iterations with a thinning interval of 10 and discarding the first 20% as burn-in steps. The resulting set of DAGs was used for Bayesian model averaging in BBB-(Bayes-)UCB, and for BBB-TS one random DAG was selected. For hybrid MCMC, the search space was initially gently restricted by executing the PC algorithm (Spirtes & Glymour, 1991) with a relatively large threshold α = 0.1 and only investigating conditioning sets of up to size one. For subsequent iterations, the search space was restricted to edges that appear with at least 0.05 probability in the structure posterior estimated in the preceding iteration. Note that the hybrid approach proposed by Kuipers et al. (2022) includes provisions for extending the search space for greater robustness in the presence of false negatives, so with each iteration the search space may be sequentially reduced or expanded as the structure posterior is increasingly informed by interventional data.

¹Pensar et al. (2020) provided their code under the MIT License at https://github.com/jopensar/BIDA.

Figure 6: Median cumulative regret with 95% percentile error bars comparing Alg, Alg∗, and BBB-Alg for Alg ∈ {Bayes-UCB, TS, UCB}. BBB methods were executed with n0 = 100 · 2^k in the discrete setting and n0 = 10 · 2^k in the Gaussian setting.

## C Additional Results

Due to the density of information communicated in figures such as Figure 1, along with the substantial variability arising from the randomness in graph structures, conditional probability distributions, and data, we chose not to include error bounds of the empirical variability. To visualize the variability in the empirical results, we provide median cumulative regret with 95% percentile error bars in Figure 6.
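The post-processing of the MCMC trace can be sketched as follows (a minimal illustration with names of our own choosing; in the actual pipeline each sample is a DAG produced by BiDAG's order MCMC, and whether burn-in is removed before or after thinning is an implementation detail — here we thin first):

```python
import random

def posterior_dags(samples, thin=10, burn_in_frac=0.2):
    """Thin an MCMC trace and discard the initial burn-in fraction."""
    kept = samples[::thin]                  # thinning interval of 10
    n_burn = int(len(kept) * burn_in_frac)  # first 20% as burn-in
    return kept[n_burn:]

# 10^4 iterations -> 10^3 thinned samples -> 800 retained DAGs
trace = [f"dag_{i}" for i in range(10_000)]
dags = posterior_dags(trace)

# Bayesian model averaging for BBB-(Bayes-)UCB would use all retained
# DAGs; BBB-TS selects a single DAG at random:
dag_for_ts = random.Random(0).choice(dags)
```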
Furthermore, in what follows we present the results from additional experiments designed to evaluate firstly our proposed approximation of the sampling variance of the discrete backdoor adjustment estimator (12), and secondly the application of Proposition 3 by way of Gaussian backdoor adjustment with jointly interventional and observational data.

## C.1 Discrete Backdoor Adjustment And Variance

In this section, we describe and present experiments evaluating the behavior of µ̂a,bda(Z) where $\mathbf{Z}=\mathrm{Pa}_{\langle a\rangle}^{G}$ as in (12), as well as our proposed approximation of its variance, derived in detail in Appendix D. Four variance estimation methods were investigated. In the naive approach, µ̂a,bda(Z) is treated as a conditional proportion as is the case when |Z| = 0, and the variance is estimated as µ̂a,bda(Z)[1 − µ̂a,bda(Z)]/n[xa], where n[xa] is the number of samples of data where X⟨a⟩ = xa. The sampling approach estimates the variance from samples from the population distribution, and the bootstrap approach conducts resampling from each sample distribution, each with 10^3 repetitions. The generation of discrete CBNs for the simulation scenarios was designed as follows. The graph structure was generated simply by initializing a structure where there is a direct edge from the intervened node X⟨a⟩ to the reward variable Y and X⟨a⟩ has |Z| = m parents. For each parent Xj ∈ Z, an edge Xj → Y was randomly added with 0.5 probability to create backdoor paths. Finally, conditional probability tables were generated uniformly as described in Section 6. For observational sample sizes n0 ∈ {100 · 2^k : k = 0, 1, . . . , 5} and parent set sizes |Z| ∈ {0, 1, 2, 3}, 10^3 scenarios were created by randomly generating CBNs as described above, and the methods were assessed under each scenario through the following process. First, 10^6 datasets were generated, each with n0 samples of observational data, and for each dataset, µ̂a,bda(Z) was computed for some arbitrary xa ∈ Dom(X⟨a⟩).
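For concreteness, the count-based backdoor estimator (12) and the naive variance described above can be sketched as follows (a minimal illustration with binary Y and X and a generic hashable parent configuration z; the function names are ours):

```python
from collections import Counter

def backdoor_estimate(data, y=1, x=1):
    """mu_hat_bda from observational samples (y_i, x_i, z_i), per (12):
    (1/n) * sum_z n[y, x, z] * n[z] / n[x, z]."""
    n = len(data)
    n_yxz = Counter((yi, xi, zi) for yi, xi, zi in data)
    n_xz = Counter((xi, zi) for _, xi, zi in data)
    n_z = Counter(zi for _, _, zi in data)
    return sum(n_yxz[(y, x, z)] * n_z[z] / n_xz[(x, z)]
               for z in n_z if n_xz[(x, z)] > 0) / n

def naive_variance(data, y=1, x=1):
    """Naive variance estimate, treating mu_hat as a conditional
    proportion mu(1 - mu)/n[x] -- only valid when |Z| = 0."""
    mu = backdoor_estimate(data, y, x)
    n_x = sum(1 for _, xi, _ in data if xi == x)
    return mu * (1 - mu) / n_x
```

On a toy confounded dataset with P(Y = 1 | X = 1, z = 0) = 0.75, P(Y = 1 | X = 1, z = 1) = 0.25, and P(z = 0) = 0.6, the adjusted estimate is 0.75 · 0.6 + 0.25 · 0.4 = 0.55.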
Then, for each of the four methods, the variance was estimated corresponding to the first 10^3 estimates of µ̂a,bda(Z), and from those the two-standard-deviation interval coverage probability of the true µa was computed. The estimator µ̂a,bda(Z) itself was found to be generally unbiased, with the average of the 10^6 estimates deviating from the true µa by less than 2% in over 99% of the 24,000 scenarios. The coverage probability results are shown in Figure 7, where each boxplot visualizes the coverage probability of a method across the 10^3 scenarios randomly generated under the given simulation setting. The outliers and invalid values, which typically corresponded to extreme scenarios, were removed. The naive approach is only correct when |Z| = 0 and performs poorly otherwise. The general results may be summarized as Naive < Bootstrap ≈ Proposed < Sampling, though our proposed estimator appears to outperform the bootstrap approach for larger |Z| and perform comparably with the population sampling approach for larger n0, while requiring significantly less and nearly negligible computational expense compared to either.

## C.2 Gaussian Backdoor Adjustment With Ensemble Data

In this section, we empirically validate our methodology of conducting the regression (14) with jointly interventional and observational data to estimate ψ⟨a⟩, as discussed in Section 5.2. In particular, we compare the coverage probability of ψ̂⟨a⟩,bda(Z) where $\mathbf{Z}=\mathrm{Pa}_{\langle a\rangle}^{G}$ estimated using purely observational data and ensemble data. The ensemble data was generated by allowing each data sample to be generated by one of the possible interventions {do(Xj = xj) : Xj ∈ Z, xj ∈ {−1, 1}} or by passive observation, with equal probability given to each of the 2|Z| + 1 options. For sample sizes n ∈ {10 · 2^k : k = 0, 1, . . . , 5} and parent set sizes |Z| ∈ {1, 2, 3, 4}, 10^3 scenarios were created by randomly generating CBNs.
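A minimal sketch of the ensemble-data idea for a single parent Z (a toy linear SCM with coefficients of our own choosing; the paper's regression (14) covers general parent sets):

```python
import numpy as np

rng = np.random.default_rng(0)
psi, gamma = 0.7, 1.5  # assumed true direct effects X -> Y and Z -> Y
n = 4000

# Ensemble sampling: each row is observational or generated under
# do(Z = -1) or do(Z = +1), uniformly among the 2|Z| + 1 = 3 options.
regime = rng.integers(0, 3, size=n)
z = np.where(regime == 0, rng.standard_normal(n),  # observe Z
             np.where(regime == 1, -1.0, 1.0))     # do(Z = -1) / do(Z = +1)
x = 0.8 * z + rng.standard_normal(n)               # Z -> X (backdoor path)
y = psi * x + gamma * z + rng.standard_normal(n)   # Y = psi*X + gamma*Z + eps

# Backdoor adjustment via the regression Y ~ X + Z, pooling all regimes;
# interventions on Z leave the X -> Y and Z -> Y mechanisms intact.
design = np.column_stack([x, z])
psi_hat, gamma_hat = np.linalg.lstsq(design, y, rcond=None)[0]
```

Because the interventions act only on the adjustment variable, the pooled regression still recovers ψ, which is the property validated empirically in this section.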
The network structures were generated as described in Appendix C.1, and the parameters as in Section 6. Each data generation method was evaluated for each scenario by generating 10^5 datasets with n samples and estimating ψ̂⟨a⟩,bda(Z) and ŜE²[ψ̂⟨a⟩,bda(Z)] for each dataset by conducting the regression (14). From those estimates, 95% confidence interval coverage probabilities were computed for each scenario. The average of the 10^5 estimates of ψ̂⟨a⟩,bda(Z) deviated from the true ψ⟨a⟩ by at most 0.9% across all 24,000 simulation scenarios for both data generation methods. The coverage probability results are shown in Figure 8. Since the results did not vary across parent set sizes, each boxplot visualizes the coverage probability of a method across the 4,000 simulation scenarios at each sample size. It is easy to see equivalent performance of the estimator computed with ensemble data compared to observational data, with consistent coverage across all sample sizes.

Figure 7: Coverage probability per scenario using various estimators of Var[µ̂a,bda(Z)] across n0 ∈ {100 · 2^k : k = 0, 1, . . . , 5} samples of observational data and |Z| ∈ {0, 1, 2, 3} adjustment set sizes.

Figure 8: Coverage probability per simulation scenario across sample sizes for observational and ensemble data generating methods.

## D Derivation Of The Discrete Backdoor Adjustment Variance Approximation

In this section, we derive the approximation of the sampling variance of (12):
$${\hat{\mu}}_{a,\mathrm{bda}}(\mathbf{Z})={\frac{1}{n_{0}}}\sum_{\mathbf{z}}{\frac{n_{0}[1,x_{a},\mathbf{z}]n_{0}[\mathbf{z}]}{n_{0}[x_{a},\mathbf{z}]}}.$$
For the entirety of this section, we assume that the expectations and variances are with respect to the discrete probability distribution P defined by a fixed CBN B.

## D.1 Introduction

For simplicity, we redefine some notation.
The backdoor adjustment to estimate the interventional distribution of Y | do(X = x) with parent set $\mathbf{Z}=\mathrm{Pa}_{X}^{G}$ with r parent configurations is given by:
$$P[Y=y\mid do(X=x)]=\sum_{\mathbf{z}}P(Y=y\mid X=x,\mathbf{Z}=\mathbf{z})P(\mathbf{Z}=\mathbf{z}).$$
Empirically, given n samples of observational data, this quantity is estimated using counts:
$$\hat{P}[Y=y\mid do(X=x)]=\sum_{\bf z}\frac{n[y,x,{\bf z}]}{n[x,{\bf z}]}\frac{n[{\bf z}]}{n}=\frac{1}{n}\sum_{\bf z}\frac{n[y,x,{\bf z}]n[{\bf z}]}{n[x,{\bf z}]},\tag{22}$$
where n[y, x, z] represents the number of samples in which Y = y, X = x, and Z = z, with corresponding definitions for n[x, z] and n[z]. The joint probability distribution of X, Y , and Z may be lumped into a multinomial random vector $\mathbf{N}=(N_{1},N_{1}{}^{\prime},N_{1}{}^{\prime\prime},\ldots,N_{r},N_{r}{}^{\prime},N_{r}{}^{\prime\prime})\in\mathbb{R}^{3r}$ where for i = 1, . . . , r,
$$N_{i}=n[y,x,{\bf z}_{i}],\quad N_{i}^{\prime}=n[\neg y,x,{\bf z}_{i}],\quad N_{i}^{\prime\prime}=n[\neg x,{\bf z}_{i}].$$
Note that $N_{i}+N_{i}{}^{\prime}+N_{i}{}^{\prime\prime}=n[\mathbf{z}_{i}]$, so $\sum_{i=1}^{r}(N_{i}+N_{i}{}^{\prime}+N_{i}{}^{\prime\prime})=n$, so N may be thought of as a repartitioning of the joint probability distribution of X, Y , and Z into 3r disjoint levels:
$$\mathbf{N}=(N_{1},N_{1}{}^{\prime},N_{1}{}^{\prime\prime},\ldots,N_{r},N_{r}{}^{\prime},N_{r}{}^{\prime\prime})\sim\text{Multinom}(n,\mathbf{p}),\quad\mathbf{p}=(p_{1},p_{1}{}^{\prime},p_{1}{}^{\prime\prime},\ldots,p_{r},p_{r}{}^{\prime},p_{r}{}^{\prime\prime}),\tag{23}$$
where
$$p_{i}=\mathrm{E}\left[\frac{n[y,x,\mathbf{z}_{i}]}{n}\right],\quad p_{i}{}^{\prime}=\mathrm{E}\left[\frac{n[\neg y,x,\mathbf{z}_{i}]}{n}\right],\quad p_{i}{}^{\prime\prime}=\mathrm{E}\left[\frac{n[\neg x,\mathbf{z}_{i}]}{n}\right]\quad\text{for }i=1,\ldots,r.$$
The advantage of such a representation is that for each zi, the term within the summation may be expressed as a function of three disjoint elements of a multinomial random vector:
$$\frac{1}{n}\sum_{i=1}^{r}\frac{n[y,x,\mathbf{z}_{i}]n[\mathbf{z}_{i}]}{n[x,\mathbf{z}_{i}]}=\frac{1}{n}\sum_{i=1}^{r}\frac{n[y,x,\mathbf{z}_{i}]\left(n[y,x,\mathbf{z}_{i}]+n[\neg y,x,\mathbf{z}_{i}]+n[\neg x,\mathbf{z}_{i}]\right)}{n[y,x,\mathbf{z}_{i}]+n[\neg y,x,\mathbf{z}_{i}]}$$ $$=\frac{1}{n}\sum_{i=1}^{r}\frac{N_{i}(N_{i}+N_{i}{}^{\prime}+N_{i}{}^{\prime\prime})}{N_{i}+N_{i}{}^{\prime}}.$$ $$(24)$$ Note that each term is not straightforward to compute. An obvious challenge is that the denominator of each term in the summation in (24) can be zero, so there is no analytical solution for its mean, variance, and covariance. ## D.2 Taylor Series Expansion For Ratio Distribution To circumvent this challenge, we approximate the ratio in (24) with the Taylor series approximation. We begin by defining $$\begin{array}{c}{{M_{i}=\frac{N_{i}(N_{i}+{N_{i}}^{\prime}+{N_{i}}^{\prime\prime})}{n^{2}},}}\\ {{W_{i}=\frac{N_{i}+{N_{i}}^{\prime}}{n},}}\\ {{Q_{i}=f(M_{i},W_{i})=\frac{M_{i}}{W_{i}}.}}\end{array}$$ This allows us to express the variance of (24) in terms of Qi: $$\begin{split}\text{Var}\left[\hat{P}[Y=y\mid do(X=x)]\right]&=\text{Var}\left[\sum_{i=1}^{r}Q_{i}\right]\\ &=\sum_{i}^{r}\text{Var}\left[Q_{i}\right]+2\sum_{i=1}^{r}\sum_{j>i}\text{Cov}\left[Q_{i},Q_{j}\right].\end{split}\tag{25}$$ By Taylor series expansion around µi = (µMi , µWi ) = (E[Mi],E[Wi]): $$Q_{i}=f(M_{i},W_{i})\tag{26}$$ $$=f(\mu_{i})+(M_{i}-\mu_{M,i})\frac{\partial f}{\partial M_{i}}(\mu_{i})+(W_{i}-\mu_{W,i})\frac{\partial f}{\partial W_{i}}(\mu_{i})$$ $$+\frac{1}{2}(M_{i}-\mu_{M,i})^{2}\frac{\partial^{2}f}{\partial M_{i}^{2}}(\mu_{i})+\frac{1}{2}(W_{i}-\mu_{W,i})^{2}\frac{\partial^{2}f}{\partial W_{i}^{2}}(\mu_{i})$$ $$+(M_{i}-\mu_{M,i})(W_{i}-\mu_{W,i})\frac{\partial^{2}f}{\partial M_{i}\partial W_{i}}(\mu_{i})$$ $$+O\left(\|(M_{i},W_{i})-\mu_{i}\|^{3}\right),$$ where $$\frac{\partial f}{\partial M_{i}}(M_{i},W_{i})=\frac{1}{W_{i}},\frac{\partial^{2}f}{\partial M_{i}^{2}}(M_{i},W_{i})=0,$$ 
$$\frac{\partial f}{\partial W_{i}}(M_{i},W_{i})=-\frac{M_{i}}{W_{i}^{2}},\frac{\partial^{2}f}{\partial W_{i}^{2}}(M_{i},W_{i})=\frac{2M_{i}}{W_{i}^{3}},\tag{27}$$ $$\frac{\partial^{2}f}{\partial M_{i}\partial W_{i}}(M_{i},W_{i})=\frac{\partial^{2}f}{\partial W_{i}\partial M_{i}}(M_{i},W_{i})=\frac{1}{W_{i}^{2}}$$ $$(28)$$ $$(29)$$ Given (26), we obtain an approximate expected value: $$\mathrm{E}[Q_{i}]\approx f(\mu_{i})+\frac{1}{2}\frac{\partial^{2}f}{\partial M_{i}^{2}}(\mu_{i})\mathrm{Var}[M_{i}]+\frac{1}{2}\frac{\partial^{2}f}{\partial W_{i}^{2}}(\mu_{i})\mathrm{Var}[W_{i}]+\frac{\partial^{2}f}{\partial M_{i}\partial W_{i}}(\mu_{i})\mathrm{Cov}[M_{i},W_{i}].$$ For variance and covariance, we use a simpler approximation: $$Q_{i}=f(M_{i},W_{i})\approx f(\mu_{i})+(M_{i}-\mu_{M_{i}}){\frac{\partial f}{\partial M_{i}}}(\mu_{i})+(W_{i}-\mu_{W_{i}}){\frac{\partial f}{\partial W_{i}}}(\mu_{i}),$$ resulting in $$\mathrm{Var}[Q_{i}]\approx\frac{\partial f}{\partial M_{i}}(\mu_{i})^{2}\mathrm{Var}[M_{i}]+\frac{\partial f}{\partial W_{i}}(\mu_{i})^{2}\mathrm{Var}[W_{i}]$$ $$+2\frac{\partial f}{\partial M_{i}}(\mu_{i})\frac{\partial f}{\partial W_{i}}(\mu_{i})\mathrm{Cov}[M_{i},W_{i}],$$ $$(30)$$ and $$\mathrm{E}[Q_{i}Q_{j}]\approx f(\mu_{i})f(\mu_{j})$$ $$+\frac{\partial f}{\partial M_{i}}(\mu_{i})\frac{\partial f}{\partial M_{j}}(\mu_{j})\mathrm{Cov}[M_{i},M_{j}]+\frac{\partial f}{\partial M_{i}}(\mu_{i})\frac{\partial f}{\partial W_{j}}(\mu_{j})\mathrm{Cov}[M_{i},W_{j}]$$ $$+\frac{\partial f}{\partial W_{i}}(\mu_{i})\frac{\partial f}{\partial M_{j}}(\mu_{j})\mathrm{Cov}[W_{i},M_{j}]+\frac{\partial f}{\partial W_{i}}(\mu_{i})\frac{\partial f}{\partial W_{j}}(\mu_{j})\mathrm{Cov}[W_{i},W_{j}],$$ so $$\begin{split}\text{Cov}[Q_{i},Q_{j}]&=\text{E}[Q_{i}Q_{j}]-\text{E}[Q_{i}]\text{E}[Q_{j}]\\ &=\frac{\partial f}{\partial M_{i}}(\mu_{i})\frac{\partial f}{\partial M_{j}}(\mu_{j})\text{Cov}[M_{i},M_{j}]+\frac{\partial f}{\partial M_{i}}(\mu_{i})\frac{\partial f}{\partial 
W_{j}}(\mu_{j})\text{Cov}[M_{i},W_{j}]\\ &\quad+\frac{\partial f}{\partial W_{i}}(\mu_{i})\frac{\partial f}{\partial M_{j}}(\mu_{j})\text{Cov}[W_{i},M_{j}]+\frac{\partial f}{\partial W_{i}}(\mu_{i})\frac{\partial f}{\partial W_{j}}(\mu_{j})\text{Cov}[W_{i},W_{j}].\end{split}\tag{31}$$ $$(32)$$ In what follows, we first derive important quantities from the multinomial distribution in Appendix D.3 and apply them to compute the quantities in (25). ## D.3 Multinomial Derivations For this subsection, in an abuse of notation, let N = (N1*, . . . , N*r) ∼ Multinom(n, p) and u, v, w, x ∈ {1*, . . . , r*} are distinct values. It is well-known that E[Nu] = npu, Var[Nu] = npu(1−pu), and Cov(Nu, Nv) = −npupv. Furthermore, $$\mathrm{E}[N_{u}N_{v}]=\mathrm{Cov}[N_{u},N_{v}]+\mathrm{E}[N_{u}]\mathrm{E}[N_{v}]$$ $$=n(n-1)p_{u}p_{v},$$ and the first four moments from derivating the moment generating function are: an moments from deriving the moment generating function are: $\mathrm{E}[N_u]=np_u$, $\mathrm{E}[N_u^2]=n(n-1)p_u^2+\mathrm{E}[N_u]$ $\qquad=np_u[1+(n-1)p_u]$, $\mathrm{E}[N_u^3]=n(n-1)[(n-2)p_u^3+2p_u^2]+\mathrm{E}[N_u^2]$ $\qquad=np_u\left[1+(n-1)p_u(3+(n-2)p_u)\right]$, $\mathrm{E}[N_u^4]=n(n-1)(n-2)\left[(n-3)p_u^4+3p_u^3\right]+2n(n-1)\left[(n-2)p_u^3+2p_u^2\right]+\mathrm{E}[N_u^3]$ $\qquad=np_u\left[1+(n-1)p_u(7+(n-2)p_u[6+(n-3)p_u]\right)\right]$. $$(33)$$ Define indicator random variable Ui such that Ui = 1 if the outcome for trial i is u ∈ {1*, . . . , r*} and Ui = 0 otherwise. Similarly define Vi for v ̸= u, Wi for w ̸= v ̸= u, and Xi for x ̸= w ̸= v ̸= u. 
Then Nu, Nv, Nw, and Nx may be expressed as
$$N_{u}=\sum_{i=1}^{n}U_{i},\quad N_{v}=\sum_{i=1}^{n}V_{i},\quad N_{w}=\sum_{i=1}^{n}W_{i},\quad N_{x}=\sum_{i=1}^{n}X_{i}.$$
We are interested in $\mathrm{E}[N_{u}^{2}N_{v}^{2}]$, $\mathrm{E}[N_{u}^{3}N_{v}]$, $\mathrm{E}[N_{u}^{2}N_{v}N_{w}]$, $\mathrm{E}[N_{u}N_{v}N_{w}N_{x}]$, $\mathrm{E}[N_{u}^{2}N_{v}]$, and $\mathrm{E}[N_{u}N_{v}N_{w}]$.
$$\begin{aligned}
\mathrm{E}[N_{u}^{2}N_{v}^{2}]&=\mathrm{E}\left[\left(\sum_{i=1}^{n}U_{i}\right)^{2}\left(\sum_{i=1}^{n}V_{i}\right)^{2}\right]\\
&=\mathrm{E}\left[\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{n}U_{i}U_{j}V_{k}V_{l}\right]&&\text{by distributing}\\
&=\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{n}\mathrm{E}\left[U_{i}U_{j}V_{k}V_{l}\right]&&\text{by linearity of expectation}\\
&=\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{\substack{k\neq i\\k\neq j}}\sum_{\substack{l\neq i\\l\neq j}}\mathrm{E}\left[U_{i}U_{j}V_{k}V_{l}\right]&&\text{since }U_{i}V_{i}=0\text{ for all }i=1,\ldots,n\\
&=\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{\substack{k\neq i\\k\neq j}}\sum_{\substack{l\neq i\\l\neq j}}\mathrm{E}\left[U_{i}U_{j}\right]\mathrm{E}\left[V_{k}V_{l}\right]&&\text{by independence between trials}\\
&=\sum_{i=j}\sum_{\substack{k=l\\k\neq i}}\mathrm{E}\left[U_{i}U_{j}\right]\mathrm{E}\left[V_{k}V_{l}\right]+\sum_{i}\sum_{j\neq i}\sum_{\substack{k\neq i\\k\neq j}}\sum_{\substack{l\neq k,\,l\neq i\\l\neq j}}\mathrm{E}\left[U_{i}U_{j}\right]\mathrm{E}\left[V_{k}V_{l}\right]\\
&\qquad+\sum_{i=j}\sum_{k\neq i}\sum_{\substack{l\neq k\\l\neq i}}\mathrm{E}\left[U_{i}U_{j}\right]\mathrm{E}\left[V_{k}V_{l}\right]+\sum_{k=l}\sum_{i\neq k}\sum_{\substack{j\neq i\\j\neq k}}\mathrm{E}\left[U_{i}U_{j}\right]\mathrm{E}\left[V_{k}V_{l}\right]&&\text{reexpressed}\\
&=\sum_{i}\sum_{k\neq i}\mathrm{E}[U_{i}^{2}]\mathrm{E}[V_{k}^{2}]+\sum_{i}\sum_{j\neq i}\sum_{\substack{k\neq i\\k\neq j}}\sum_{\substack{l\neq k,\,l\neq i\\l\neq j}}\mathrm{E}[U_{i}]\mathrm{E}[U_{j}]\mathrm{E}[V_{k}]\mathrm{E}[V_{l}]\\
&\qquad+\sum_{i}\sum_{k\neq i}\sum_{\substack{l\neq k\\l\neq i}}\mathrm{E}[U_{i}^{2}]\mathrm{E}[V_{k}V_{l}]+\sum_{k}\sum_{i\neq k}\sum_{\substack{j\neq i\\j\neq k}}\mathrm{E}[U_{i}]\mathrm{E}[U_{j}]\mathrm{E}[V_{k}^{2}]&&\text{since }\mathrm{E}[U_{i}U_{j}]=\mathrm{E}[U_{i}]\mathrm{E}[U_{j}],\ i\neq j\\
&=n(n-1)p_{u}p_{v}+n(n-1)(n-2)(n-3)p_{u}^{2}p_{v}^{2}+n(n-1)(n-2)p_{u}p_{v}^{2}+n(n-1)(n-2)p_{u}^{2}p_{v}&&\text{since }\mathrm{E}[U_{i}^{2}]=\mathrm{E}[U_{i}]=p_{u}\\
&=n(n-1)p_{u}p_{v}\left[1+(n-2)(p_{u}+p_{v}+(n-3)p_{u}p_{v})\right]&&\text{simplified.}
\end{aligned}$$
Hence, $$\mathrm{E}[N_{u}^{2}N_{v}^{2}]=n(n-1)p_{u}p_{v}\left[1+(n-2)(p_{u}+p_{v}+(n-3)p_{u}p_{v})\right].$$ $$(34)$$ Following the same derivation strategy, $$\mathrm{E}[N_{u}^{3}N_{v}]=n(n-1)p_{u}p_{v}\left[1+(n-2)p_{u}(3+(n-3)p_{u})\right],$$ $$\mathrm{E}[N_{u}^{2}N_{v}N_{w}]=n(n-1)(n-2)p_{u}p_{v}p_{w}\left[1+(n-3)p_{u}\right],$$ $$\mathrm{E}[N_{u}N_{v}N_{w}N_{x}]=n(n-1)(n-2)(n-3)p_{u}p_{v}p_{w}p_{x},$$ $$\mathrm{E}[N_{u}^{2}N_{v}]=n(n-1)p_{u}p_{v}\left[1+(n-2)p_{u}\right],$$ $$\mathrm{E}[N_{u}N_{v}N_{w}]=n(n-1)(n-2)p_{u}p_{v}p_{w}.$$ ## D.4 Numerator And Denominator Of Ratio We now turn to the task of deriving expressions for Var[Mi], Var[Wi], and Cov[Mi, Wi] in order to compute (30), and additionally for Cov[Mi, Mj ], Cov[Mi, Wj ], Cov[Wi, Mj ], and Cov[Wi, Wj ] for (31). For this subsection, return to the notation for N expressed in (23). The distribution of Wi = n −1(Ni + Ni ′) is most simple. By the lumping property of multinomial random vectors, $$\begin{array}{c}{{\mathrm{E}[W_{i}]=p_{i}+p_{i}{}^{\prime},}}\\ {{\mathrm{Var}[W_{i}]=\frac{(p_{i}+p_{i}{}^{\prime})(1-p_{i}-p_{i}{}^{\prime})}{n},}}\\ {{\mathrm{Cov}[W_{i},W_{j}]=-\frac{(p_{i}+p_{i}{}^{\prime})(p_{j}+p_{j}{}^{\prime})}{n}.}}\end{array}$$ $\left(35\right)$ $\begin{array}{l}\left(36\right)\\ \left(37\right)\end{array}$ $$(40)$$ The distribution of Mi = n −2Ni(Ni + Ni ′ + Ni ′′) is more challenging. 
From (33) and (32), the expectation is given by: $$\mathrm{E}[M_{i}]=n^{-2}\mathrm{E}[N_{i}(N_{i}+N_{i}{}^{\prime}+N_{i}{}^{\prime\prime})]$$ $$=n^{-2}\left(\mathrm{E}[N_{i}^{2}]+\mathrm{E}[N_{i}N_{i}{}^{\prime}]+\mathrm{E}[N_{i}N_{i}{}^{\prime\prime}]\right)$$ $$=n^{-2}\left(np_{i}[1+(n-1)p_{i}]+n(n-1)p_{i}{p_{i}}^{\prime}+n(n-1)p_{i}{p_{i}}^{\prime\prime}\right)$$ $$=n^{-1}p_{i}[1+(n-1)(p_{i}+{p_{i}}^{\prime}+{p_{i}}^{\prime\prime})].$$ $$(41)$$ Next, the variance is given by: $$\mathrm{Var}[M_{i}]=n^{-4}\mathrm{Var}[N_{i}(N_{i}+N_{i}{}^{\prime}+N_{i}{}^{\prime\prime})]$$ $$=n^{-4}\mathrm{Var}[N_{i}^{2}+N_{i}N_{i}{}^{\prime}+N_{i}N_{i}{}^{\prime\prime}]$$ $$=n^{-4}\big{(}\mathrm{Var}[N_{i}^{2}]+\mathrm{Var}[N_{i}N_{i}{}^{\prime}]+\mathrm{Var}[N_{i}N_{i}{}^{\prime\prime}]$$ $$\qquad+2\mathrm{Cov}[N_{i}^{2},N_{i}N_{i}{}^{\prime}]+2\mathrm{Cov}[N_{i}^{2},N_{i}N_{i}{}^{\prime\prime}]+2\mathrm{Cov}[N_{i}N_{i}{}^{\prime},N_{i}N_{i}{}^{\prime\prime}]\big{)}.$$ The terms in the expression above are given below. From the moments of the multinomial distribution (33): $$\mathrm{Var}[N_{i}^{2}]=\mathrm{E}[N_{i}^{4}]-\mathrm{E}[N_{i}^{2}]^{2}$$ $$=np_{i}\left[1+(n-1)p_{i}(7+(n-2)p_{i}[6+(n-3)p_{i}])\right]-(np_{i}[1+(n-1)p_{i}])^{2}$$ $$=np_{i}\left[1+(n-1)p_{i}(7+(n-2)p_{i}[6+(n-3)p_{i}])-np_{i}(1+(n-1)p_{i})^{2}\right].$$ From (34) and (32): Var[NiNi ′] = E[N 2 i Ni ′2] − E[NiNi ′] 2 = n(n − 1)pipi ′[1 + (n − 2)(pi + pi ′ + (n − 3)pipi ′)] − [n(n − 1)pipi ′] 2 = n(n − 1)pipi ′[1 + (n − 2)(pi + pi ′ + (n − 3)pipi ′) − n(n − 1)pipi ′], Var[NiNi ′′] = n(n − 1)pipi ′′[1 + (n − 2)(pi + pi ′′ + (n − 3)pipi ′′) − n(n − 1)pipi ′′]. From (35), (33), and (32): Cov[N 2 i , NiNi ′] = E[N 3 i Ni ′] − E[N 2 i ]E[NiNi ′] = n(n − 1)pipi ′[1 + (n − 2)pi(3 + (n − 3)pi)] − npi[1 + (n − 1)pi]n(n − 1)pipi ′ = n(n − 1)pipi ′-1 + (n − 2)(3pi + (n − 3)p 2 i ) − npi(1 + (n − 1)pi) Cov[N 2 i , NiNi ′′] = n(n − 1)pipi ′′ -1 + (n − 2)(3pi + (n − 3)p 2 i ) − npi(1 + (n − 1)pi). 
From (36) and (32): Cov[NiNi ′, NiNi ′′] = E[N 2 i Ni ′Ni ′′] − E[NiNi ′]E[NiNi ′′] = n(n − 1)(n − 2)pipi ′pi ′′[1 + (n − 3)pi] − n(n − 1)pipi ′n(n − 1)pipi ′′ = n(n − 1)pipi ′pi ′′ [(n − 2)[1 + (n − 3)pi] − n(n − 1)pi] . Hence, Var[Mi] is derived: Var[Mi] = n −4npi -1 + (n − 1)pi(7 + (n − 2)pi[6 + (n − 3)pi]) − npi(1 + (n − 1)pi) 2 + n(n − 1)pipi ′[1 + (n − 2)(pi + pi ′ + (n − 3)pipi ′) − n(n − 1)pipi ′] + n(n − 1)pipi ′′[1 + (n − 2)(pi + pi ′′ + (n − 3)pipi ′′) − n(n − 1)pipi ′′] + 2n(n − 1)pipi ′-1 + (n − 2)(3pi + (n − 3)p 2 i ) − npi(1 + (n − 1)pi) + 2n(n − 1)pipi ′′ -1 + (n − 2)(3pi + (n − 3)p 2 i ) − npi(1 + (n − 1)pi) + n(n − 1)pipi ′pi ′′ [(n − 2)[1 + (n − 3)pi] − n(n − 1)pi]. (42) Next, consider Cov[Mi, Mj ]. Cov[Mi, Mj ] = n −4Cov -Ni(Ni + Ni ′ + Ni ′′), Nj (Nj + Nj ′ + Nj ′′) = n −4Cov -N 2 i + NiNi ′ + NiNi ′′, N2 j + NjNj ′ + NjNj ′′ = n −4Cov[N 2 i , N2 j ] + Cov[N 2 i , NjNj ′] + Cov[N 2 i , NjNj ′′] + Cov[NiNi ′, N2 j ] + Cov[NiNi ′′, N2 j ] + Cov[NiNi ′, NjNj ′] + Cov[NiNi ′, NjNj ′′] + Cov[NiNi ′′, NjNj ′] + Cov[NiNi ′′, NjNj ′′]. The terms in the expression above are given below. From (34) and (33): Cov[N 2 i , N2 j ] = E[N 2 i N 2 j ] − E[N 2 i ]E[N 2 j ] = n(n − 1)pipj [1 + (n − 2)(pi + pj + (n − 3)pipj )] − npi[1 + (n − 1)pi]npj [1 + (n − 1)pj ] = npipj [(n − 1)(1 + (n − 2)(pi + pj + (n − 3)pipj )) − n(1 + (n − 1)pi)(1 + (n − 1)pj )]. From (36), (33), and (32): Cov[N 2 i , NjNj ′] = E[N 2 i NjNj ′] − E[N 2 i ]E[NjNj ′] = n(n − 1)(n − 2)pipjpj ′[1 + (n − 3)pi] − npi(1 + (n − 1)pi)n(n − 1)pjpj ′ = n(n − 1)pipjpj ′[(n − 2)[1 + (n − 3)pi] − n(1 + (n − 1)pi)] , Cov[N 2 i , NjNj ′′] = n(n − 1)pipjpj ′′ [(n − 2)[1 + (n − 3)pi] − n(1 + (n − 1)pi)] , Cov[NiNi ′, N2 j ] = n(n − 1)pjpipi ′[(n − 2)[1 + (n − 3)pj ] − n(1 + (n − 1)pj )] , Cov[NiNi ′′, N2 j ] = n(n − 1)pjpipi ′′ [(n − 2)[1 + (n − 3)pj ] − n(1 + (n − 1)pj )] . 
## 34 From (37) and (32): Cov[NiNi ′, NjNj ′] = E[NiNi ′NjNj ′] − E[NiNi ′]E[NjNj ′] = n(n − 1)(n − 2)(n − 3)pipi ′pjpj ′ − n(n − 1)pipi ′n(n − 1)pjpj ′ = n(n − 1)pipi ′pjpj ′[(n − 2)(n − 3) − n(n − 1)] , Cov[NiNi ′, NjNj ′′] = n(n − 1)pipi ′pjpj ′′ [(n − 2)(n − 3) − n(n − 1)] , Cov[NiNi ′′, NjNj ′] = n(n − 1)pipi ′′pjpj ′[(n − 2)(n − 3) − n(n − 1)] , Cov[NiNi ′′, NjNj ′′] = n(n − 1)pipi ′′pjpj ′′ [(n − 2)(n − 3) − n(n − 1)] . Hence, Cov[Mi, Mj ] is derived: Cov[Mi, Mj ] $ n(1+(n-1)p_i)]$ $ n(1+(n-1)p_j)]$ $ n(1+(n-1)p_j)]$ 1)]. $$(43)$$ $$\nabla\left[2\mathbf{M}i,2\right]$$ = n −4npipj -(n − 1)(1 + (n − 2)(pi + pj + (n − 3)pipj )) − n(1 + (n − 1)pi)(1 + (n − 1)pj ) + n(n − 1)pipjpj ′[(n − 2)[1 + (n − 3)pi] − n(1 + (n − 1)pi)] + n(n − 1)pipjpj ′′ [(n − 2)[1 + (n − 3)pi] − n(1 + (n − 1)pi)] + n(n − 1)pjpipi ′[(n − 2)[1 + (n − 3)pj ] − n(1 + (n − 1)pj )] + n(n − 1)pjpipi ′′ [(n − 2)[1 + (n − 3)pj ] − n(1 + (n − 1)pj )] + n(n − 1)pipi ′pjpj ′[(n − 2)(n − 3) − n(n − 1)] + n(n − 1)pipi ′pjpj ′′ [(n − 2)(n − 3) − n(n − 1)] + n(n − 1)pipi ′′pjpj ′[(n − 2)(n − 3) − n(n − 1)] ′′ [(n − 2)(n − 3) − n(n − 1)] = n −3pipj ′′)[(n − 2)[1 + (n − 3)pi] − n(1 + (n − 1)pi)] ′′)[(n − 2)[1 + (n − 3)pj ] − n(1 + (n − 1)pj )] ′′)[(n − 2)(n − 3) − n(n − 1)] + n(n − 1)pipi ′′pjpj -(n − 1)1 + (n − 2)(pi + pj + (n − 3)pipj ) + (pj ′ + pj + pjpi(pi ′ + pi + (pi ′ + pi ′′)(pj ′ + pj − n(1 + (n − 1)pi)(1 + (n − 1)pj ) Finally, we turn our attention to Cov[Mi, Wi], Cov[Mi, Wj ], and Cov[Wi, Mj ]. Beginning with Cov[Mi, Wi]: Cov[Mi, Wi] = n −3Cov -Ni(Ni + Ni ′ + Ni ′′), Ni + Ni ′ = n −3Cov -N 2 i + NiNi ′ + NiNi ′′, Ni + Ni ′ = n −3Cov[N 2 i , Ni] + Cov[N 2 i , Ni ′] + Cov[NiNi ′, Ni] + Cov[NiNi ′, Ni ′] + Cov[NiNi ′′, Ni] + Cov[NiNi ′′, Ni ′]. The terms in the expression above are given below. 
From (33):
$$\begin{aligned}
\operatorname{Cov}[N_i^2, N_i] &= E[N_i^3] - E[N_i^2]E[N_i] \\
&= np_i[1+(n-1)p_i(3+(n-2)p_i)] - np_i[1+(n-1)p_i]\cdot np_i \\
&= np_i\big[1+(n-1)p_i(3+(n-2)p_i) - np_i(1+(n-1)p_i)\big] \\
&= np_i\big[1 + p_i\big((n-1)[3-2p_i] - n\big)\big].
\end{aligned}$$
From (38) and (33):
$$\begin{aligned}
\operatorname{Cov}[N_i^2, N_{i'}] &= E[N_i^2N_{i'}] - E[N_i^2]E[N_{i'}] \\
&= n(n-1)p_ip_{i'}[1+(n-2)p_i] - np_i(1+(n-1)p_i)\cdot np_{i'} \\
&= np_ip_{i'}\big[(n-1)(1+(n-2)p_i) - n(1+(n-1)p_i)\big]. \qquad (44)
\end{aligned}$$
From (38) and (33):
$$\begin{aligned}
\operatorname{Cov}[N_iN_{i'}, N_i] &= E[N_i^2N_{i'}] - E[N_iN_{i'}]E[N_i] \\
&= n(n-1)p_ip_{i'}[1+(n-2)p_i] - n(n-1)p_ip_{i'}\cdot np_i \\
&= n(n-1)p_ip_{i'}[1-2p_i], \\
\operatorname{Cov}[N_iN_{i'}, N_{i'}] &= n(n-1)p_{i'}p_i[1-2p_{i'}], \\
\operatorname{Cov}[N_iN_{i''}, N_i] &= n(n-1)p_ip_{i''}[1-2p_i].
\end{aligned}$$
From (39), (32), and (33):
$$\begin{aligned}
\operatorname{Cov}[N_iN_{i''}, N_{i'}] &= E[N_iN_{i''}N_{i'}] - E[N_iN_{i''}]E[N_{i'}] \\
&= n(n-1)(n-2)p_ip_{i''}p_{i'} - n(n-1)p_ip_{i''}\cdot np_{i'} \\
&= -2n(n-1)p_ip_{i''}p_{i'}. \qquad (45)
\end{aligned}$$
Hence, $\operatorname{Cov}[M_i, W_i]$ is derived:
$$\begin{aligned}
\operatorname{Cov}[M_i, W_i] = n^{-3}\Big\{&\, np_i\big[1 + p_i\big((n-1)[3-2p_i] - n\big)\big] \\
&+ np_ip_{i'}\big[(n-1)(1+(n-2)p_i) - n(1+(n-1)p_i)\big] \\
&+ n(n-1)p_ip_{i'}[1-2p_i] + n(n-1)p_{i'}p_i[1-2p_{i'}] \\
&+ n(n-1)p_ip_{i''}[1-2p_i] - 2n(n-1)p_ip_{i''}p_{i'}\Big\} \\
= n^{-2}p_i\Big\{&\,1 + p_i\big((n-1)[3-2p_i] - n\big) + p_{i'}\big[(n-1)(1+(n-2)p_i) - n(1+(n-1)p_i)\big]\Big\} \\
&+ n^{-2}(n-1)\big(p_ip_{i'}[2-2p_i-2p_{i'}] + p_ip_{i''}[1-2p_i-2p_{i'}]\big). \qquad (46)
\end{aligned}$$
Then, moving on to Cov[M_i, W_j] and Cov[W_i, M_j]:
$$\begin{aligned}
\operatorname{Cov}[M_i, W_j] &= n^{-3}\operatorname{Cov}\big[N_i(N_i+N_{i'}+N_{i''}),\, N_j+N_{j'}\big] \\
&= n^{-3}\operatorname{Cov}\big[N_i^2+N_iN_{i'}+N_iN_{i''},\, N_j+N_{j'}\big] \\
&= n^{-3}\big\{\operatorname{Cov}[N_i^2, N_j] + \operatorname{Cov}[N_i^2, N_{j'}] + \operatorname{Cov}[N_iN_{i'}, N_j] \\
&\qquad + \operatorname{Cov}[N_iN_{i'}, N_{j'}] + \operatorname{Cov}[N_iN_{i''}, N_j] + \operatorname{Cov}[N_iN_{i''}, N_{j'}]\big\}.
\end{aligned}$$
The terms in the expression above are given below. From (44):
$$\begin{aligned}
\operatorname{Cov}[N_i^2, N_j] &= np_ip_j\big[(n-1)(1+(n-2)p_i) - n(1+(n-1)p_i)\big], \\
\operatorname{Cov}[N_i^2, N_{j'}] &= np_ip_{j'}\big[(n-1)(1+(n-2)p_i) - n(1+(n-1)p_i)\big].
\end{aligned}$$
From (45):
$$\begin{aligned}
\operatorname{Cov}[N_iN_{i'}, N_j] &= -2n(n-1)p_ip_{i'}p_j, & \operatorname{Cov}[N_iN_{i'}, N_{j'}] &= -2n(n-1)p_ip_{i'}p_{j'}, \\
\operatorname{Cov}[N_iN_{i''}, N_j] &= -2n(n-1)p_ip_{i''}p_j, & \operatorname{Cov}[N_iN_{i''}, N_{j'}] &= -2n(n-1)p_ip_{i''}p_{j'}.
\end{aligned}$$
Hence, $\operatorname{Cov}[M_i, W_j]$ and $\operatorname{Cov}[W_i, M_j]$ are derived:
$$\begin{aligned}
\operatorname{Cov}[M_i, W_j] &= n^{-3}\big[np_i(p_j+p_{j'})\big[(n-1)(1+(n-2)p_i) - n(1+(n-1)p_i)\big] \\
&\qquad - 2n(n-1)p_i(p_{i'}p_j + p_{i'}p_{j'} + p_{i''}p_j + p_{i''}p_{j'})\big] \\
&= n^{-2}p_i(p_j+p_{j'})\big[(n-1)\big(1+(n-2)p_i - 2(p_{i'}+p_{i''})\big) - n(1+(n-1)p_i)\big], \\
\operatorname{Cov}[W_i, M_j] &= n^{-2}p_j(p_i+p_{i'})\big[(n-1)\big(1+(n-2)p_j - 2(p_{j'}+p_{j''})\big) - n(1+(n-1)p_j)\big]. \qquad (47)
\end{aligned}$$
Thus, all quantities necessary to compute (25) are derived.
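As a numerical sanity check on the closed forms (42) and (47), they can be compared against exact brute-force moments of a small multinomial count vector. The sketch below is our own construction, not part of the paper: it assumes $M_i = n^{-2}N_i(N_i+N_{i'}+N_{i''})$ and $W_j = n^{-1}(N_j+N_{j'})$ with all five indices referring to distinct categories of a Multinomial$(n, p)$ vector, which matches how the moment identities (32)–(39) are applied above.

```python
import itertools
import math

import numpy as np

def exact_moments(n, p, funcs):
    """Exact expectations of functions of a Multinomial(n, p) count vector,
    computed by enumerating every possible outcome (feasible for small n)."""
    out = [0.0] * len(funcs)
    for counts in itertools.product(range(n + 1), repeat=len(p)):
        if sum(counts) != n:
            continue
        w = math.factorial(n) * float(np.prod(np.asarray(p) ** counts))
        for c in counts:
            w /= math.factorial(c)
        for k, f in enumerate(funcs):
            out[k] += w * f(counts)
    return out

n = 5
p = [0.05, 0.10, 0.15, 0.20, 0.22, 0.28]  # six categories: i, i', i'', j, j', spare
i, i1, i2, j, j1 = 0, 1, 2, 3, 4
pi, pa, pb, pj, pj1 = (p[k] for k in (i, i1, i2, j, j1))

Mi = lambda c: c[i] * (c[i] + c[i1] + c[i2]) / n**2
Wj = lambda c: (c[j] + c[j1]) / n
EM, EM2, EW, EMW = exact_moments(
    n, p, [Mi, lambda c: Mi(c) ** 2, Wj, lambda c: Mi(c) * Wj(c)]
)
var_exact = EM2 - EM**2
cov_exact = EMW - EM * EW

# Closed form (42): sum of the six variance/covariance pieces derived above.
var_formula = n**-4 * (
    n*pi*(1 + (n-1)*pi*(7 + (n-2)*pi*(6 + (n-3)*pi)) - n*pi*(1 + (n-1)*pi)**2)
    + n*(n-1)*pi*pa*(1 + (n-2)*(pi + pa + (n-3)*pi*pa) - n*(n-1)*pi*pa)
    + n*(n-1)*pi*pb*(1 + (n-2)*(pi + pb + (n-3)*pi*pb) - n*(n-1)*pi*pb)
    + 2*n*(n-1)*pi*pa*(1 + (n-2)*(3*pi + (n-3)*pi**2) - n*pi*(1 + (n-1)*pi))
    + 2*n*(n-1)*pi*pb*(1 + (n-2)*(3*pi + (n-3)*pi**2) - n*pi*(1 + (n-1)*pi))
    + 2*n*(n-1)*pi*pa*pb*((n-2)*(1 + (n-3)*pi) - n*(n-1)*pi)
)

# Closed form (47) for Cov[M_i, W_j].
cov_formula = n**-2 * pi * (pj + pj1) * (
    (n-1)*(1 + (n-2)*pi - 2*(pa + pb)) - n*(1 + (n-1)*pi)
)

print(abs(var_exact - var_formula), abs(cov_exact - cov_formula))
```

Both differences come out at floating-point precision, so the enumeration agrees with the closed forms for this small instance.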
Review 1: Summary: This study presents Bayes-flavored algorithms for the causal bandit problem with frequentist evaluation. As causal bandits require knowledge of causal graphs, most existing studies develop their own algorithms given this information. In this context, the authors' methods incorporate the information as prior information in Bayes-flavored algorithms. Motivated by this, the authors develop several algorithms utilizing the properties of Bayesian algorithms. Furthermore, the authors also conduct simulation studies to demonstrate the superiority of the proposed method over existing methods.

Strengths and Weaknesses: This study demonstrates a naive but powerful and natural approach to the causal bandit problem. Bayesian algorithms, or algorithms motivated by Bayesian ideas, can naturally incorporate prior information into algorithms. Among practitioners, employing Bayesian methods for incorporating prior knowledge is a well-known idea. Therefore, I believe that the main contribution lies in the confirmation of the effectiveness of this approach in the causal bandit problem. The proposed framework is innovative and intriguing because such algorithms have not been formally proposed in spite of their usefulness. Although the proposed framework has some novelty, the lack of theoretical analysis makes this study's evaluation difficult. To investigate the optimality of the algorithms, it is preferable to show lower and upper bounds for the proposed algorithms. For example, we can employ the information-theoretic lower bounds of Lai and Robbins (1985) or lower bounds for an asymptotic Bayes evaluation to show asymptotic optimality (Lai, 1987). If the authors consider a Bayesian evaluation instead of a frequentist evaluation, they might obtain a Bayes-optimal algorithm by solving dynamic programming under appropriate settings on the time horizon and discount rates, such as the Gittins index (Gittins, 1989).
My main questions are listed as follows:
- Definitions of some notations, such as $E$, $\mathrm{Var}$, and $\hat{\mathrm{SE}}$, are desirable, although we can guess the meanings.
- On p.3, is $E[R_T] = T\mu^* 0 \sum_{a\in\mathcal{A}}$ an expectation under posterior parameters? Similarly, is $\mu_a$ an expectation under a fixed parameter, and is $E_{\pi^0_{a|\bm{Z}}}[\mu_a]$ an expectation marginalized over the posterior distribution? For example, Kaufmann et al. (2012) denote the cumulative regret by $R_n(\theta) = \mathbb{E}_\theta[\sum^n_{t=1}\mu^* - \mu_{I_t}]$ in Eq. (1) of their paper.
- What is $\hat{P}$ in (9)? Is it a posterior probability given observations?
- Should $\mathrm{Var}(Y|dp(X=x))$ be denoted by $\mathrm{Var}(Y|dp(X=x), \pi^t_a)$ because it is the variance under a fixed parameter?
- Can the authors show posterior convergence and consistency for some important parameters, such as $\mu_a$, given the data generating process, even if they cannot show the optimality of their algorithms?
- In Appendix D.2, for $Q_i = f(M_i, W_i)$ the authors apply a series expansion around $\mu_i = (\mu_{M_i}, \mu_{W_i})$. In Eq. (18), the authors omit the higher-order term of the second-order series expansion, but briefly writing it out may be helpful for readers. Furthermore, I think that the higher-order term can be ignored only when the sample is infinite. Can the authors justify the use of the approximation when the sample size is small?

1. Gittins, J. C. (1989): Multi-armed Bandit Allocation Indices. Wiley, Chichester, NY.
2. Lai, T., and H. Robbins (1985): "Asymptotically efficient adaptive allocation rules," Advances in Applied Mathematics.
3. Lai, T. L. (1987): "Adaptive Treatment Allocation and the Multi-Armed Bandit Problem," The Annals of Statistics, 15(3), 1091-1114.

Requested Changes: I agree that the proposed framework of this study is practically useful and insightful in that it allows us to incorporate prior information into algorithms.
However, (i) there are no theoretical results such as optimality and posterior convergence, and (ii) the notations are a bit hard to read, or the definitions (formulations) may have some issues. Because of these concerns, I believe that the formulation and theoretical analysis need to be further improved in order for this paper to be accepted.

Broader Impact Concerns: The proposed algorithms are easy to apply and implement. They are also helpful for practitioners.

==================================================

Review 2: Summary: This paper studies causal bandits from the Bayesian perspective. Compared with prior works, it does not assume a known causal graph or impose requirements on the causal graph class. Instead, it takes a Bayesian perspective by constructing the prior using observational data and updating the posterior using the interventional data. To achieve this, this paper designs a new framework, the Bayesian Backdoor Bandit (BBB), that simultaneously learns the causal graph and utilizes causal inference to optimize the reward. To instantiate this idea, the paper discusses implementations in both the discrete and Gaussian settings. Empirically, it also shows superior performance over baseline algorithms, such as TS and UCB, in small-scale experiments.

Strengths and Weaknesses:
Strengths:
- The assumptions of the proposed method are relatively weak, which makes it closer to application settings. Prior works mostly rely on known causal graphs, or on restrictive classes of causal graphs. This paper gets rid of these by taking a Bayesian perspective; however, it is worth discussing whether a restricted causal graph class would be required to obtain theoretical results for BBB.
- Most of the paper is well-written and easy to read. The methods and their instantiations are well-illustrated.
- The empirical section, though based on synthetic data, shows some interesting observations, i.e., a larger sample size of observational data might even hurt the performance when the time horizon is longer. Is this also the case for other baseline algorithms?

Weaknesses:
- The main concern with the proposed method is the high computational cost of computing the parent set probabilities and the conditional parameter posteriors, which might limit its use and impact. Though the authors discuss that approximation through MCMC might be possible for BBB-TS, it would be great to try it and illustrate the robustness of the proposed method in the approximate setting.
- I would suggest that the paper dig deeper into either the theoretical or the empirical side of the proposed method. In the current version of the paper, the regret analysis of the BBB framework is missing, and it would be great to understand more how it compares with the baselines theoretically. On the empirical side, it would be great to have some real-world datasets to illustrate the proposed BBB framework.

Requested Changes:
- It would be great to have the approximated version of BBB-TS, to showcase the more applicable version and how the approximation affects the overall performance.
- As pointed out above, it would be great to either have a regret analysis for the BBB framework, under either the discrete or the continuous setting; or it might be helpful to test the methods on more real-world datasets or large-scale datasets (apparently, the computational complexity might affect the feasibility of doing this).
- Section 3 is a little hard to follow; could it be organized using paragraphs that highlight the main points?
- In the empirical section, it would be great to give a more detailed description of the structure setup from de Kroon et al. (2022).
Broader Impact Concerns: N/A

==================================================

Review 3: Summary: This work develops the multi-armed bandit setting for the case where both observational and interventional data are available, and a directed acyclic graph relating them is available in the form of a prior that encodes the causal structure of the problem. With this information in hand, a maximum a posteriori update rule for these probabilities is developed, along with a way to incorporate it as a prior into upper-confidence bound and Thompson sampling. Experimental analysis illuminates the merits of the proposed approach.

Strengths and Weaknesses:
Strengths: The problem definition in terms of introducing observational and interventional data as a prior into the multi-armed bandit framework appears novel, to the best of my knowledge. The paper is well-written and defines the problem clearly.

Weaknesses: The terminology of 'backdoor adjustment prior' is not commonplace, and is relatively unexplained. What is the quantitative difference between observational and interventional data? This is not well-defined, and therefore the intuition behind eqn. (1) is absent. Similarly, interventional calculus is a term that is thrown around, but its meaning is not widely understood. Therefore, it should be more rigorously defined in the context of the introduction of the Bayesian prior. To evaluate the backdoor adjustment prior in eqn. (3), we need access to the probabilities P(Pa_a = Z | D[t]), which are associated with the DAG defining the causal structure. How reasonable is this assumption? Some practical instances when it is known, or not known, should be detailed. A key weakness of this work is that there appears to be no regret bound analysis that establishes the performance of the proposed approach, and contrasts its theoretical rates with UCB or Thompson sampling without such knowledge of the causal DAG.
Therefore, a rigorous understanding of when and why this approach outperforms some prior baselines is not very clear.

Requested Changes: In the discussion of limitations of assuming knowledge of the causal graph in prior work during Section 1.1, the authors should provide an illustrative example where this assumption breaks, so as to make clear that these conditions are in fact limiting. Moreover, the following paragraph discusses various efforts to relax this assumption and the limitations of those efforts as well; however, without specific technical reasons that their conditions are restrictive in the context of some problem in particular, the argument that these conditions are limiting comes across as hollow. Why is the estimation of interventional quantities from observational data more practical? Is it because then the bandit algorithm does not depend on unverifiable assumptions? How are the conditions imposed by this approach weaker than the aforementioned ones? In the summary of contributions, the authors should provide a more quantitative description of the main regret bound in the discrete setting, especially how it refines previous results' dependence on the prior, and how that prior exhibits specific dependence on the observational data which facilitates theoretical and potentially practical improvements. For the continuous space with a Gaussian model, how does the proposed framework improve dependence on the prior, and what specific improvements are achieved through the use of interventional calculus? As written, these aspects are ambiguous. In Section 3, it is mentioned that the class of priors is easier to evaluate when submodularity is present, due to the super-exponential growth with the dimension of the variables. However, is this referring to the dimension of the feature space or the action space? Moreover, which classes of priors actually satisfy this condition? How restrictive is it, and what are some practical examples?
The legend in Figures 1-2 seems incorrect. I think the red should refer to Bayes-UCB, the green should refer to UCB, and the dotted to the backdoor adjustment prior variant. If it is correct, then it is ambiguous. Please correct the legend. What does Alg mean? I think the authors mean to contrast the performance of Thompson sampling/UCB with and without a backdoor adjustment prior.

References missing regarding the link between bandit algorithms and experimental design:
Krause, A., Singh, A., & Guestrin, C. (2008). Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research, 9(2).
Bedi, A. S., Peddireddy, D., Aggarwal, V., & Koppel, A. (2020, July). Efficient large-scale Gaussian process bandits by believing only informative actions. In Learning for Dynamics and Control (pp. 924-934). PMLR.

The authors should expend more effort to illuminate how the proposed backdoor adjustment prior improves dependencies on naive implementations of UCB or Thompson sampling for the proposed setting. The analysis that gives rise to Propositions 1-2 does not appear novel in my reading. What is specifically different about these proofs as compared with the references from whence they came, and is there any attribute of the MAB setting or DAG structure associated with the causal interventional setting that necessitates such a departure? There are long run-on mathematical expressions in the appendices that no reader should be expected to parse. See equation (35) for an example. These expressions should be broken up, and only each term that actually gets manipulated should be restated. One does not need to carry all the terms to explain what is going on.
Broader Impact Concerns: None

==================================================

Review 4: Summary: This work addresses causal bandits ---a multi-armed bandit setting where observable variables are modeled with a causal graph defining their conditional dependencies--- where the agent leverages the underlying causal relationships to more efficiently optimize its attained rewards. The authors study causal bandits where data is observed as generated by the causal model, prior to the bandit agent playing (intervening on) any arms. Due to the dependencies inherent in the causal model, interventional distributions (i.e., those of the bandit's actions) may be inferred from observational distributions. Hence, information may be shared between arms for efficient exploration-exploitation strategies. In contrast to existing work that assumes knowledge of the underlying causal graph, this work relaxes this assumption and combines causal graph learning with causal bandit interventions to devise a Bayesian bandit policy. The proposed bandit algorithm computes inferences based on both observational (i.e., available prior to the bandit actions) and interventional (i.e., resulting from the agent's actions) data to simultaneously ($i$) learn about the underlying causal graph, and ($ii$) optimize the bandit's cumulative rewards, by exploiting causal inferences. The proposed framework, named Bayesian Backdoor Bandit (BBB), quantifies the uncertainty in the expected reward estimates of the bandit agent by accounting for the reward signal and the unknown causal model, merging the following techniques: 1. Bayesian bandit policies, such as Bayes-UCB and Thompson sampling; 2. intervention calculus and the back-door formula, for estimating interventional quantities from observational data; and 3. Bayesian model averaging, to compute causal estimates that average over posterior distributions of plausible graphs, based on a technique proposed by Pensar et al. (2020).
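The two causal ingredients just listed (backdoor adjustment, then model averaging over parent sets) can be sketched in a few lines of toy code. Everything below is purely illustrative and not from the paper: the function name, data, and posterior parent-set weights are all hypothetical, and only the discrete (binary-variable) case is shown. The backdoor formula used is the standard one, $P(Y \mid do(X_a=x)) = \sum_z P(Y \mid X_a=x, Z=z)\,P(Z=z)$.

```python
import numpy as np

def backdoor_estimate(X, Y, a, x, Z):
    """Estimate E[Y | do(X_a = x)] from observational samples X (n0 x p, binary)
    and rewards Y, adjusting for the candidate parent set Z (list of columns)."""
    if len(Z) == 0:  # empty adjustment set: plain conditional mean
        return Y[X[:, a] == x].mean()
    est = 0.0
    configs, counts = np.unique(X[:, Z], axis=0, return_counts=True)
    for z, c in zip(configs, counts):
        mask = (X[:, Z] == z).all(axis=1) & (X[:, a] == x)
        if mask.any():
            # E[Y | X_a = x, Z = z] * P(Z = z)
            est += Y[mask].mean() * c / len(Y)
    return est

# Toy observational data: column 0 is the action variable, column 1 a confounder.
X = np.array([[1, 0], [0, 0], [1, 1], [0, 1], [1, 0], [0, 1]])
Y = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 1.0])

naive = Y[X[:, 0] == 1].mean()                       # 2/3, ignores confounding
adjusted = backdoor_estimate(X, Y, a=0, x=1, Z=[1])  # 0.5 after adjustment

# Bayesian model averaging over candidate parent sets of the reward,
# weighted by hypothetical posterior parent-set probabilities.
parent_sets, weights = [[], [1]], [0.25, 0.75]
prior_mean = sum(w * backdoor_estimate(X, Y, 0, 1, Z)
                 for Z, w in zip(parent_sets, weights))
print(naive, adjusted, prior_mean)
```

The gap between `naive` and `adjusted` on this toy dataset is exactly the confounding effect the backdoor adjustment removes; BBB's actual prior construction additionally supplies variance estimates and posterior parent-set weights from Pensar et al.'s algorithm.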
Specifically, BBB implements a Bayesian bandit policy (Bayes-UCB or TS) with priors designed based on information from observational data (computed via the backdoor adjustment formula): the priors are initialized with expected value and variance estimates as presented in Equation (2). This prior is then sequentially updated to corresponding posteriors as the bandit plays (intervenes on) different arms. Because the backdoor adjustment formula requires knowledge of the parents of a variable ---an assumption not made by the authors here--- these estimates are computed by averaging over the posterior distribution of the plausible causal models, as indicated in Equations (3) and (4). BBB leverages this carefully designed prior to execute Bayesian policies, where the posterior mean and variance statistics are updated as interventions are made, via averages over the posterior of the parent set distribution of Equation (4); i.e., the proposed Bayesian posterior incorporates uncertainties about both the reward function and the plausible causal graphs. Key to the proposed BBB algorithm is the definition of the priors' mean and variance estimates, which the authors detail for two specific causal bandit applications: the discrete, multinomial bandit and the linear, Gaussian unit deviation setting. The derivation of the variance estimates for these two applications is the focus of Section 5 (with proofs provided in the Appendix), where approximations to these quantities of interest are provided. Given the proposed estimates' approximate nature, the authors focus on providing empirical evidence of the correctness and validity of these approximations, e.g., demonstrating coverage in Appendix Section C. The work concludes with an evaluation section where the performance of BBB is compared to alternatives closest to this work (as per the discussion in Section 1.1) in a variety of simulated causal bandits. Results show that 1.
BBB outperforms all non-causal counterpart policies, with performance monotonically improving with more observational data. 2. BBB achieves lower cumulative regret than the optimistic central node counterpart with enough (yet not excessive) observational data: $n_0 \geq 800$ in the discrete setting, $n_0 \geq 40$ in the Gaussian setting. In addition, the authors showcase BBB's graph structure learning capabilities, describing the algorithm's performance tradeoff between the size of the observational data $n_0$ and the graph learning accuracy. With small observational data, initial uncertainty in both the structure prior and the reward encourages exploration of all arms, incurring greater cumulative regret but allowing for identification of connections beyond the reward's parents; i.e., better graph recovery is possible than when BBB is initialized with bigger datasets ---where reduced initial uncertainty enables quick identification of the good arms, and thus small regret, yet the lack of exploration of other arms hinders learning the orientation of their edges.

Strengths and Weaknesses: The presented work is solid, with the following strengths:
- The main contribution of this work is being able to bypass the restrictive assumption of knowing the true causal graph. In contrast to previous work by Lu et al. (2021), this work focuses on the non-asymptotic observational data regime: i.e., it does not rely on the large-sample observational setting of (Greenewald et al., 2019).
- The work is very clearly presented, with succinct arguments about this work's contributions (section 1.2) that contrast with related alternatives (section 1.1), before delving into necessary preliminaries (section 2) to design informative (causal-graph based) priors in Section 3 needed for their proposed Bayesian Backdoor Algorithms of Section 4.
- Section 5 contains details on how to compute mean and variance estimates for BBB's implementation for two specific use-cases.
- The authors empirically demonstrate how BBB compares to standard algorithms that assume knowledge of the causal structure. Importantly, they showcase that BBB outperforms all non-causal counterparts (with performance monotonically improving with more observational data) and that it achieves lower cumulative regret than the optimistic central node counterparts with enough observational data.

Some weaknesses of the presented work are:
- The increased computational cost of the proposed BBB algorithm (which requires computation of parent posteriors in Equation 4). Details on this computational overhead are not provided or discussed.
- BBB is a combination of existing techniques, i.e., the proposed method's significance lies in merging them together.
- The work, which relies on informative priors and estimates learned from prior observational data, lacks a theoretical analysis of how such data impacts regret performance (this tradeoff is illustrated empirically). The authors leave the theoretical analysis as future work.
- The work assumes all variables in the graph are observed. A discussion on whether confounding could be accommodated via techniques from the causal inference literature would be of interest.
- BBB is instantiated for two specific models: a discrete bandit setting and a (linear) Gaussian Unit Deviation Setting. Details on the significance of the Gaussian unit-deviation setting, and a discussion on the challenges of other causal models, would strengthen the paper.

Requested Changes: 1. For completeness, and to help a reader not familiar with causal graph inference techniques, details on the following should be incorporated: 1. Describing the algorithm developed by Pensar et al. (2020), which is used here to compute the parent set probabilities in Equation (4). This would facilitate reproducibility of BBB. 2. Describing in detail the "standard assumptions of global and local parameter independence and parameter modularity" mentioned in Section 3. 3.
$\hat{SE}$ appears in Equation (2) without being previously defined. Given its importance ---the focus of Section 5 is on approximating it--- the authors should define it before its first appearance. 2. Evaluation section: 1. Results are presented for bandits with $p=10$ variables and 3 parents for the reward variable. The authors should clarify the motivation behind these choices; e.g., are these numbers used due to computational constraints imposed by Pensar's posterior computation algorithm? In addition, the work would be strengthened with results showing how sensitive BBB's performance is to these choices. 2. Empirical analysis/evidence showcasing the extra computational complexity of BBB is missing: how expensive is it to compute parent set posteriors and averages over them? 3. The provided figures, given that they are averages over realizations, should include error-bars illustrating performance variability. 3. The authors acknowledge that in this work, $X$ is a causally sufficient system with no unobserved confounders. A discussion on the challenges unobserved confounders raise, and how to leverage tools from the causal inference literature to mitigate or accommodate them, would complement and strengthen this work.

Broader Impact Concerns: This work does not discuss broader impact concerns.

==================================================

Metareview: Recommendation: Accept as is Comment: This paper studies a Bayesian variant of causal bandits. The advantage of the Bayesian framework is that the causal graph does not have to be specified in advance and is learned on the go by maintaining its posterior. The authors evaluate their approach empirically and also derive a preliminary regret bound. The initial version of this paper was already solid. In the rebuttal period, the authors comprehensively addressed the comments of the reviewers. They also went beyond the original writeup in two major ways: * A preliminary regret analysis. The initial writeup had none.
* An approximate MCMC implementation of the algorithm. This is one major strength of the Bayesian view, that many approximations (although not with regret guarantees) exist. I suggest acceptance of this paper **AS IS**, with the understanding that the authors expand related work (Section 1.1) to discuss Bayesian bandits better. There are many recent works that show algorithmic and regret improvements due to a better prior. Here are some pointers to consider: * Modern Bayesian analyses of TS and Bayes-UCB (precursor to showing improvements due to the prior) started in [Learning to Optimize via Posterior Sampling](https://pubsonline.informs.org/doi/abs/10.1287/moor.2014.0650). * An actual improvement is shown in [Information-Theoretic Confidence Bounds for Reinforcement Learning](https://proceedings.neurips.cc/paper/2019/hash/411ae1bf081d1674ca6091f8c59a266f-Abstract.html). A tighter MAB analysis, which is also easier to follow, is in Lemma 4 of [Meta-Thompson Sampling](https://proceedings.mlr.press/v139/kveton21a.html). 
* Bandit meta-learning, essentially learning the prior for TS by repeatedly solving similar bandit tasks, received a lot of attention recently: * [Meta Dynamic Pricing: Transfer Learning Across Experiments](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3334629) * [No Regrets for Learning the Prior in Bandits](https://proceedings.neurips.cc/paper/2021/hash/ec1f764517b7ffb52057af6df18142b7-Abstract.html) * [Bayesian Decision-Making Under Misspecified Priors with Applications to Meta-Learning](https://proceedings.neurips.cc/paper/2021/hash/ddcbe25988981920c872c1787382f04d-Abstract.html) * [Metadata-Based Multi-Task Bandits with Bayesian Hierarchical Models](https://proceedings.neurips.cc/paper/2021/hash/f7cfdde9db36af8e0d9a6d123d5c385e-Abstract.html) * [Metalearning Linear Bandits by Prior Update](https://proceedings.mlr.press/v151/peleg22a.html) * Fully Bayesian methods, where the prior reflects the structure of the problem, have been especially successful: * Posterior sampling for choosing one model out of many ([Latent Bandits Revisited](https://proceedings.neurips.cc/paper/2020/hash/9b7c8d13e4b2f08895fb7bcead930b46-Abstract.html)) * Posterior sampling with a mixture prior ([Thompson Sampling with a Mixture Prior](https://proceedings.mlr.press/v151/hong22b.html)) * Posterior sampling for meta-, multi-task, and federated learning ([Hierarchical Bayesian Bandits](https://proceedings.mlr.press/v151/hong22c.html)) Congratulations to acceptance! ==================================================
# Training Data Size Induced Double Descent For Denoising Feedforward Neural Networks And The Role Of Training Noise

Rishi Sonthalia rsonthal@math.ucla.edu
Department of Mathematics, University of California, Los Angeles

Raj Rao Nadakuditi rajnrao@umich.edu
Department of EECS, University of Michigan, Ann Arbor

Reviewed on OpenReview: *https://openreview.net/forum?id=FdMWtpVT1I*

## Abstract

When training an unregularized denoising feedforward neural network, we show that the generalization error versus the number of training data points is a double descent curve. We formalize the question of how many training data points should be used by looking at the generalization error for denoising noisy test data. Prior work on computing the generalization error focuses on adding noise to target outputs. However, adding noise to the input is more in line with current pre-training practices. In the linear (in the inputs) regime, we provide an asymptotically exact formula for the generalization error for rank 1 data and an approximation for the generalization error for rank r data. From this, we derive a formula for the amount of noise that needs to be added to the training data to minimize the denoising error. This results in the emergence of a shrinkage phenomenon for improving the performance of denoising DNNs by making the training SNR smaller than the test SNR. Further, we see that the amount of shrinkage (ratio of the train to test SNR) also follows a double descent curve.

## 1 Introduction

Denoising noisy training data is a widely used technique for pretraining networks to learn good data representations. Two extremely common examples of pretraining via denoising are Masked Language Modelling (MLM) (Devlin et al., 2019) and Stacked Denoising Autoencoders (SDAE) (Vincent et al., 2010). For many modern problems, we work at large scales in terms of the number of parameters and the number of training samples.
Recently there has been significant work in understanding the effect of scaling the number of parameters in a neural network. This resulted in the discovery of the much celebrated double descent phenomenon (Belkin et al., 2019). However, we have a weaker understanding of the effect of scaling the number of data points. Classical works such as Krogh & Hertz (1991); Geman et al. (1992); Opper (2002) and more recent work such as Gerace et al. (2020); Nakkiran et al. (2020); Nakkiran (2020); d'Ascoli et al. (2020); Adlam & Pennington (2020) show either empirically or via theoretical analysis that sample-wise double descent exists. However, these were in the regime of supervised learning. On the other hand, our motivation comes from understanding denoising neural networks. For MLM and SDAEs, denoising is a pretraining procedure, in which case the generalization error would depend on the downstream task. As a first step, we shall instead look at the generalization error for denoising test data. The difference between the prior supervised learning setup and our denoising setup can be seen in Figure 1. To understand the denoising setting, we empirically show that sample-wise double descent exists for denoising feedforward neural networks (Section 3). We show that shrinking the training data Signal to Noise Ratio (SNR) (i.e., increasing the amount of training data noise) for fixed test data SNR can mitigate this double descent. Moreover, we show that the curve for the ratio of the best training data SNR to the test data SNR also has sample-wise double descent (Section 3).

![1_image_0.png](1_image_0.png) ![1_image_1.png](1_image_1.png)

(a) Denoising Setup (b) Supervised Learning Setup

Figure 1: Figure showing the difference in the noise placement between the traditional supervised learning setup, for which empirical and theoretical double descent curves have been found, versus our denoising setup, for which we recover double descent curves.
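The denoising setup of Figure 1(a) is easy to reproduce in miniature. The sketch below is our own construction, not the authors' code: it fits an unregularized linear denoiser by least squares on rank-1 data with noisy inputs and clean targets, then evaluates it on fresh noisy test points. Sweeping the number of training points through the interpolation threshold (here d = 50) is how sample-wise curves like those in Section 3 are traced out.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_test, sigma = 50, 1000, 0.3

u = rng.standard_normal(d)
u /= np.linalg.norm(u)  # rank-1 signal direction

def sample(n):
    """Clean rank-1 data X (d x n) and its noisy version X + A."""
    X = np.outer(u, rng.standard_normal(n))
    return X, X + sigma * rng.standard_normal((d, n))

X_tst, X_tst_noisy = sample(n_test)
for n_trn in [10, 25, 50, 100, 400]:  # below, at, and above the threshold n_trn = d
    X, X_noisy = sample(n_trn)
    # Min-norm least-squares denoiser: W = argmin ||W (X + A) - X||_F.
    W = X @ np.linalg.pinv(X_noisy)
    mse = np.mean(np.sum((W @ X_tst_noisy - X_tst) ** 2, axis=0))
    print(n_trn, mse)

# Baseline: leaving the noisy test input untouched costs roughly d * sigma^2.
identity_mse = np.mean(np.sum((X_tst_noisy - X_tst) ** 2, axis=0))
```

With enough training data the learned denoiser is far better than doing nothing, while near n_trn = d the test error degrades, which is the sample-wise spike this paper analyzes.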
To theoretically understand the phenomena, we look at the simplest setting. Specifically, we look at the case when we have a one-layer linear network, and we are denoising data that lies on a line embedded in high dimensional space (Section 4). In this setting, we derive the exact asymptotics for the generalization error (Section 5). We show that the generalization error and optimal training noise level spike at the interpolation threshold. From the theoretical analysis, we see that the spike occurs due to the variance of the model increasing. We use the rank one result to derive an approximation for general (low) rank r data. ## Contributions. The main contributions of the paper are as follows. 1. We empirically show that when denoising data using a feedforward network, the curve for the generalization error versus the number of training data points *N trn* and the curve for the ratio of the test data SNR to the optimal training data SNR has double descent. Further changing the training data SNR can mitigate the double descent in the generalization error curve. Thus the noise level acts as a regularizer. 2. Assuming we have mean zero, bounded variance, and rotational invariant noise, we derive an analytical formula for the expected mean-squared generalization error for denoising data that lives in a one-dimensional linear subspace by a linear model. Further, we use the same method to present a heuristic for higher-rank data and experimentally determine the formula's accuracy for general low-rank data. 3. We show that sample-wise double descent exists for the generalization error and the amount of noise that should be added, even in this simple model. ## 1.1 Related Work Understanding deep neural networks is a currently active area of research with many exciting theoretical results. 
The discovery that fixed depth infinite width (under certain limits) neural networks can be thought of as kernel regression (Jacot et al., 2018) and the discovery of double descent for neural networks (Belkin et al., 2019) have sparked significant research into understanding the generalization error in the linear regime (in parameters, not inputs). The exact asymptotics for the generalization error were first understood for ridge regression (Bartlett et al., 2020; Hastie et al., 2022; Belkin et al., 2020; Advani & Saxe, 2020; Mel & Ganguli, 2021; Dobriban & Wager, 2018). Following this, many papers have studied the situation for the Random Features model and the Neural Tangent Kernel (NTK) model (Jacot et al., 2020; Mei & Montanari, 2019; Ghorbani et al., 2021; Adlam & Pennington, 2020; Geiger et al., 2020). Other recent work for supervised learning includes work on multiple descents (Derezinski et al., 2020; d'Ascoli et al., 2020; Liang et al., 2020), transfer learning (Lampinen & Ganguli, 2019), and Gaussian mixture models (Loureiro et al., 2021). However, to our knowledge, no prior work studies this problem in the denoising setup.

The idea of adding noise to improve generalization has been seen before. One popular strategy is to use Dropout (Hinton et al., 2012; Wan et al., 2013; Srivastava et al., 2014), where we randomly zero out either neurons or connections. Another commonly used idea is data augmentation. In a revolutionary paper, Krizhevsky et al. (2012) showed that augmenting the dataset with noisy versions of the images greatly improved the accuracy. Another area where noise is useful is adversarial learning. Dong et al. (2021) show epoch-wise double descent for adversarial training. In recent theoretical work related to SDAEs, Pretorius et al. (2018) derived the learning dynamics of a linear autoencoder in the presence of noise. They also establish some relationships between the noise added and weight decay.
However, they do not look at the generalization error or quantify the optimal amount of noise that should be added. Gnansambandam & Chan (2020) looked at the problem of determining the optimal amount of noise that should be added. However, they studied this from the perspective of minimizing the variance of the generalization error. Additionally, there has been significant progress in understanding the Bayes optimal solution when denoising via matrix factorization (Lelarge & Miolane, 2017; Lesieur et al., 2017; Maillard et al., 2022; Troiani et al., 2022; Nadakuditi, 2014). It is important to note that these works do not think of the noise as a regularizer and do not consider the effect of noise on parametric models such as neural networks.

## 2 Problem Set Up

Our goal is to understand the impact of training noise on the generalization error in the context of denoising neural networks. Concretely, suppose we have access to noisy data $y_1, \ldots, y_{N_{tst}} \in \mathbb{R}^M$ such that $y_i = \theta_{tst}x_i + \xi_i$, where $x_i \in \mathbb{R}^M$ is sampled from an unknown data distribution $\mathcal{D}$, $\xi_i \in \mathbb{R}^M$ is sampled from some noise distribution $\mathcal{D}_{noise}$, and $\theta_{tst} \in \mathbb{R}$ is a known *scalar* which controls or models how noisy the data is. We study the classic denoising problem of recovering $x_i$ from $y_i$ (James & Stein, 1992; Wiener, 1949; Banham & Katsaggelos, 1997; Benesty et al., 2010; Takeda et al., 2007; Buades et al., 2005). One approach to solving this problem is to learn a function that removes the noise from a set of examples, for instance, using a neural network (Tian et al., 2020). To this end, suppose the noise distribution $\mathcal{D}_{noise}$ is known; then given noiseless data $x_1^{trn}, \ldots, x_{N_{trn}}^{trn}$, we can create noisy versions $y_i^{trn} = \theta_{trn}x_i^{trn} + \xi_i^{trn}$ of our training data.
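The construction of noisy training pairs described above is simple to state in code. The following numpy sketch builds $y_i^{trn} = \theta_{trn}x_i^{trn} + \xi_i^{trn}$ for a batch of points stored as columns; all dimensions, the seed, and the Gaussian stand-in for the clean data distribution are our own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N_trn, theta_trn = 64, 200, 2.0

# Stand-in for "noiseless training data": N_trn points in R^M, one per column.
X_trn = rng.standard_normal((M, N_trn))

# Noisy versions y_i = theta_trn * x_i + xi_i, with entrywise noise variance 1/M.
Xi = rng.standard_normal((M, N_trn)) / np.sqrt(M)
Y_trn = theta_trn * X_trn + Xi
```

A denoiser is then any map trained to recover the columns of `X_trn` from the columns of `Y_trn`.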
Now consider a neural network denoted $f$, which is trained to minimize the following $\ell_2$ loss function.

$$\ell(f)=\frac{1}{N_{trn}}\sum_{i=1}^{N_{trn}}\|x_{i}^{trn}-f(\theta_{trn}x_{i}^{trn}+\xi_{i}^{trn})\|^{2}.\tag{1}$$

We are then interested in the following mean squared generalization error.

$$\frac{1}{N_{tst}}\sum_{i=1}^{N_{tst}}\|x_{i}^{tst}-f(\theta_{tst}x_{i}^{tst}+\xi_{i}^{tst})\|^{2}.\tag{2}$$

The major question we want to answer is the following. **Given noisy test data such that $\theta_{tst}$ is known, what is the optimal value of $\theta_{trn}$ such that a neural network trained using the loss function in Equation 1 minimizes the generalization error in Equation 2?** We are also interested in the effect the number of training data points $N_{trn}$ has on the optimal $\theta_{trn}$.

## 2.1 Signal To Noise Ratio (SNR)

A quantity of interest to us will be the SNR. To properly account for this, if $\mu_{data}$ is the expected norm of the data points and $\mu_{noise}$ is the expected norm of the noise vectors, then we shall call

$${\hat{\theta}}_{trn}:={\frac{\theta_{trn}\mu_{data}}{\mu_{noise}}},{\mathrm{~and~}}{\hat{\theta}}_{tst}:={\frac{\theta_{tst}\mu_{data}}{\mu_{noise}}}$$

the training and test data signal to noise ratios.

## 3 Empirical Double Descent

![3_image_0.png](3_image_0.png)

Figure 2: Figure showing the empirical double descent phenomenon for the generalization error versus $1/c$ (number of training samples $N_{trn}$ / number of features $M$). The top row is for a linear (with respect to the inputs) network, and the bottom row is for a three-layer ReLU network with width equal to the dimension of the data. The networks were trained with the mean squared denoising error. Here the training data SNR and the test data SNR both equal $\hat{\theta}$. The solid line in Figure (a) is our theoretical line from Theorem 1.

We run two experiments to better empirically understand the interaction between $\theta_{tst}$, $N_{trn}$, $\theta_{trn}$, and the generalization error (Eq. 2).
First, we show that sample-wise double descent occurs for denoising neural networks empirically. That is, if we fix $\theta_{tst}, \theta_{trn}$, then as we vary $N_{trn}$, we get that the $N_{trn}$ versus generalization error curve has double descent. Second, we explore the role of the amount of training noise and show that optimally picking $\theta_{trn}$ can mitigate the previously seen double descent.

## 3.1 Double Descent For Denoising Networks

For our first experiment, we show that the $N_{trn}$ versus generalization error curve has double descent in simple cases. To do this, we train two feedforward networks (one-layer and three-layer) on three different datasets. The first dataset is a line embedded in high-dimensional space. The second dataset is a synthetic dataset using a teacher network. That is, the data is generated by sampling latent variables from a Gaussian distribution and then using the outputs from a randomly initialized untrained 2-layer neural network as our data. Finally, the third dataset is MNIST. Figure 2 shows that if we train a (one-layer or three-layer) feedforward network to denoise data such that the training data signal to noise ratio (SNR) $\hat{\theta}_{trn}$ is the same as the test data SNR $\hat{\theta}_{tst}$, then double descent occurs in the curve for the denoising generalization error vs. the number of training samples.

However, unlike other hyperparameters, such as the number of features and the number of training epochs, we cannot arbitrarily change the number of data points, as we are limited by our dataset. Hence it could be the case that our maximum number of data points corresponds to the peak of the generalization error curve. To get around this, we can look at the amount of noise we add to the training data. Note that we could have also added other forms of regularization, but the noise level is a natural hyper-parameter here.
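For the linear (one-layer) case, this sweep can be sketched cheaply by replacing the trained network with the closed-form least-squares denoiser $W = \theta_{trn}X_{trn}Y_{trn}^{\dagger}$ to which the unregularized linear network converges. This stand-in, the rank-1 data, and all sizes below are our own choices, so this is an illustration rather than the paper's exact experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N_tst, theta = 100, 500, 1.0          # theta plays the role of the SNR theta-hat
u = rng.standard_normal(M)
u /= np.linalg.norm(u)                   # unit feature direction (rank-1 data)

def denoise_error(N_trn):
    # Clean rank-1 train/test data with unit-norm latent rows.
    v_trn = rng.standard_normal(N_trn); v_trn /= np.linalg.norm(v_trn)
    v_tst = rng.standard_normal(N_tst); v_tst /= np.linalg.norm(v_tst)
    X_trn, X_tst = np.outer(u, v_trn), np.outer(u, v_tst)
    A_trn = rng.standard_normal((M, N_trn)) / np.sqrt(M)
    A_tst = rng.standard_normal((M, N_tst)) / np.sqrt(M)
    s_trn = theta * np.sqrt(N_trn)       # scale so the train SNR is roughly theta
    s_tst = theta * np.sqrt(N_tst)
    Y_trn = s_trn * X_trn + A_trn
    W = s_trn * X_trn @ np.linalg.pinv(Y_trn)   # min-norm least-squares denoiser
    Y_tst = s_tst * X_tst + A_tst
    return np.mean(np.sum((s_tst * X_tst - W @ Y_tst) ** 2, axis=0))

errors = {n: denoise_error(n) for n in (25, 50, 100, 200, 400)}
```

Sweeping $N_{trn}$ across the interpolation threshold $N_{trn} = M$ and plotting `errors` should reproduce the qualitative shape of the curves discussed here.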
## 3.2 Role Of Training Noise Level

To see the effect of the training data SNR $\hat{\theta}_{trn}$, we compute the denoising generalization error versus the number of data points curve for a variety of different ratios $\hat{\theta}_{trn}/\hat{\theta}_{tst}$. Figures 3a and 3b show that if we optimally pick the ratio $\hat{\theta}_{trn}/\hat{\theta}_{tst}$, then double descent can be mitigated. We do this for the MNIST and CIFAR10 datasets. We create noisy test data by taking the test split of each dataset and adding Gaussian noise. We fix the test SNR $\hat{\theta}_{tst}$ to be 1 for both datasets; hence we know the test data SNR. We then take various fractions of the training data and train a three-layer ReLU neural network (without bias) for various levels of training data SNR $\hat{\theta}_{trn}$. For each pair of parameters (number of training data points and level of training noise), we compute the generalization error averaged over twenty trials for MNIST and five trials for CIFAR10. Here the test noise and training noise are resampled for each trial. The plots for the generalization error can be seen in Figures 3a (MNIST) and 3b (CIFAR10), and the plots for the optimal ratio can be seen in Figure 4. We see five interesting and exciting phenomena from this experiment.¹

1. For most values of the ratio $\hat{\theta}_{tst}/\hat{\theta}_{trn}$, we see sample-wise double descent for the generalization error.
2. The optimal denoising error does not occur when the train SNR is equal to the test SNR. We need to shrink the train SNR (i.e., increase the test to train SNR ratio). This shrinkage is reminiscent of other shrinkage phenomena such as James & Stein (1992); Tibshirani (1996); Nadakuditi (2014).
3. As seen in Figures 4a and 4b, the optimal ratio depends on the number of data points.
4. Figure 4 shows that the curve for the best $\hat{\theta}_{trn}/\hat{\theta}_{tst}$ also has sample-wise double descent.
5. Picking the optimal amount of noise can mitigate sample-wise double descent of the generalization error.
This mitigation is reminiscent of how optimal regularization can mitigate double descent in the supervised setting (Nakkiran et al., 2020). We postulate that the spike in generalization error is due to the variance of the model increasing. Hence, when we increase the amount of noise, we implicitly regularize the model (Bishop, 1995). This increased regularization results in a decrease in the variance and improves the generalization error.

## 4 Theoretical Problem Assumptions

To be able to provide a theoretical understanding of the five phenomena discovered in Section 3, we consider a simple model that can be theoretically analyzed.

## 4.1 Assumptions About The Data

First, we detail our assumptions about the data generation process. Specifically, we assume that the data lies in some low-dimensional linear space.

Assumption 1. *Let $U \in \mathbb{R}^{M\times r}$ be such that the columns of $U$ have unit norm and are pairwise orthogonal. To generate data, we sample latent variables $V^T \in \mathbb{R}^{r\times N}$ and $\Sigma \in \mathbb{R}^{r\times r}_{+}$ such that $V$ has columns that have unit norm and are pairwise orthogonal and $\Sigma$ is a diagonal matrix with non-negative entries on the diagonal such that $\|\Sigma\|_F = 1$. Then a data matrix $X$, in which each column is a data point, is given by $X = U\Sigma V^T$.*

¹Other forms of regularization could remove some of these features. However, we look at the effect of the level of noise by itself.

![5_image_0.png](5_image_0.png)

Figure 3: Figure showing the empirical denoising generalization error for a three-layer neural network with width the same as the dimension of the data, trained for various different values of $\hat{\theta}_{trn}/\hat{\theta}_{tst}$ and numbers of training data points. Each neural network was trained for 1500 epochs, using MSE loss and gradient descent with a learning rate of $10^{-3}$. For MNIST, we averaged over twenty trials, and for CIFAR10, we averaged over five trials.

![5_image_1.png](5_image_1.png)

Figure 4: Figure showing the sample-wise double descent for the empirically optimal amount of training noise.
The figure displays the optimal $\hat{\theta}_{tst}/\hat{\theta}_{trn}$ ratio seen empirically versus $1/c = N_{trn}/M$. The ratios plotted here correspond to the ratio for the red line in Figure 3.

Hence, we see that we generate data that lives in a dimension $r$ subspace. Note that we make no assumptions about the distribution of the latent variables $V$ or $\Sigma$. We have two matrices, $X_{trn}$ and $X_{tst}$, corresponding to the train and test data sets. Hence we have corresponding $V_{trn}^T \in \mathbb{R}^{r\times N_{trn}}$, $V_{tst}^T \in \mathbb{R}^{r\times N_{tst}}$, and $\Sigma_{trn}, \Sigma_{tst}$. We make no other assumptions on $U, V_{trn}, \Sigma_{trn}, V_{tst}, \Sigma_{tst}$.

## 4.2 Assumptions About The Noise

Next, we detail our assumptions about the noise added to the data. For that, we need the following definitions.

Definition 1. *A matrix $Z \in \mathbb{R}^{m\times n}$ sampled from a distribution is rotationally bi-invariant if for all orthogonal $U_1 \in \mathbb{R}^{m\times m}$ and all orthogonal $U_2 \in \mathbb{R}^{n\times n}$, $U_1 Z U_2$ has the same distribution as $Z$.*

Another way to phrase rotational bi-invariance: if $A = U_A \Sigma_A V_A^T$ is the SVD, then $U_A$ and $V_A$ are uniformly random orthogonal matrices and are independent of $\Sigma_A$ and of each other.

Definition 2. *Let $c \in (0, \infty)$ be a shape parameter. Then the Marchenko-Pastur distribution with shape $c$ is the measure $\mu_c$ supported on $[c_-, c_+]$, where $c_{\pm} = (1 \pm \sqrt{c})^2$, such that*

$$\mu_{c}=\begin{cases}\left(1-{\frac{1}{c}}\right)\delta_{0}+\nu&c>1\\ \nu&c\leq1\end{cases}$$

where $\nu$ *has density*

$$d\nu(x)=\frac{1}{2\pi x c}\sqrt{(c_{+}-x)(x-c_{-})}.$$

With these definitions, we have the following assumptions about the noise matrices $A_{trn}, A_{tst}$.

Assumption 2. *Let $A \in \mathbb{R}^{M\times N}$ be sampled from a distribution $\mathcal{D}_{noise}$ such that*

1. *For all $i, j$, $\mathbb{E}_{\mathcal{D}_{noise}}[A_{ij}] = 0$.*
2. *For all $i, j$, $\mathbb{E}_{\mathcal{D}_{noise}}[A_{ij}^2] = 1/M$.*
3. *For all $i_1, i_2, j_1$, and $j_2$ such that $i_1 \neq i_2$ or $j_1 \neq j_2$, we have that $A_{i_1 j_1}$ and $A_{i_2 j_2}$ are uncorrelated. That is, $\mathbb{E}_{\mathcal{D}_{noise}}[A_{i_1 j_1}A_{i_2 j_2}] = \mathbb{E}_{\mathcal{D}_{noise}}[A_{i_1 j_1}]\mathbb{E}_{\mathcal{D}_{noise}}[A_{i_2 j_2}]$.*
4. *$A$ is rotationally bi-invariant.*
5. *With probability 1, $A$ has full rank.*

Assumption 3.
Suppose $A_{M,N}$ is a sequence of matrices that satisfy Assumption 2 such that $M, N \to \infty$ with $M/N \to c$. Let $\lambda_1^{M,N}, \ldots, \lambda_{\min(M,N)}^{M,N}$ be the eigenvalues and let $\mu_{M,N} = \sum_i \delta_{\lambda_i^{M,N}}$ be the sum of Dirac delta measures for the eigenvalues. Then we shall assume that $\mu_{M,N}$ converges weakly in probability to the Marchenko-Pastur measure $\mu_c$ with shape $c$.

From here onwards, we shall suppress the superscripts. While such assumptions on the noise may seem restrictive, they encompass a large family of noise distributions that includes Gaussian noise.

Proposition 1 (Proof in Appendix A). *If $B$ is a random matrix that has full rank with probability one and whose entries are independent, have mean 0, have variance $1/M$, and have bounded fourth moment, and $P, Q$ are uniformly random orthogonal matrices, then $A = PBQ$ satisfies Assumptions 2 and 3.*

Note that when we sample matrices as detailed in Assumption 1, we have that $\|X_{trn}\|_F = \|X_{tst}\|_F = 1$. To account for this, let $\theta_{tst}, \theta_{trn} \in \mathbb{R}_+$ be **scalars** that scale the norms of $X_{trn}, X_{tst}$ so that we can control the SNR of the matrices.

Assumption 4. *We assume that $\theta_{tst}$ is fixed and known and that we have control over $\theta_{trn}$.*

Given data $X_{trn}, X_{tst}$ satisfying Assumption 1, noise matrices $A_{trn}, A_{tst}$ satisfying Assumptions 2 and 3, and $\theta_{trn}, \theta_{tst}$ satisfying Assumption 4, the noisy data is given by $Y_{trn} = \theta_{trn}X_{trn} + A_{trn}$ and $Y_{tst} = \theta_{tst}X_{tst} + A_{tst}$.

## 4.3 Assumption About The Model And Training Algorithm

Finally, we shall make assumptions about the denoiser $f$ from Equation 1.

Assumption 5. *We shall assume $f$ is a linear model $W$ that is the solution to the following least squares problem.*
$$\min_{W} \|\theta_{trn}X_{trn}-W(\underbrace{\theta_{trn}X_{trn}+A_{trn}}_{Y_{trn}})\|_{F}^{2}.\tag{3}$$

That is, given data $X_{trn}$ that satisfies Assumption 1, a noise matrix $A_{trn}$ that satisfies Assumptions 2 and 3, $\theta_{trn}$ that satisfies Assumption 4, and noisy data $Y_{trn} = \theta_{trn}X_{trn} + A_{trn}$, we have that $f(x) = Wx = \theta_{trn}X_{trn}Y_{trn}^{\dagger}x$. Here, for a matrix $T$, $T^{\dagger}$ is the Moore-Penrose pseudoinverse. Note here that we are not assuming access to the denoised test data.

We rewrite Equation 2 for this denoiser and data generation model.

$$R_{\mathrm{test-error}}:=\mathbb{E}_{A_{trn},A_{tst}}\left[{\frac{\|\theta_{tst}X_{tst}-W(\theta_{tst}X_{tst}+A_{tst})\|_{F}^{2}}{N_{tst}}}\right].\tag{4}$$

Remark 1. *We analyze this setup instead of the standard Gaussian or spherical data model since if both our data and noise are isotropic, then the denoising problem can be degenerate. Hence we assume that our data has low rank.*

Remark 2. *While many double descent analyses look at the role of ridge regularization, in this case, since we are looking at the denoising setup, we look at the role of the amount of noise. However, our method can be adapted to include a ridge regularizer.² Note that Bishop (1995) shows that adding noise to the input is equivalent to Tikhonov regularization.*

## 4.4 Signal To Noise Ratio (SNR)

A quantity of interest to us will be the SNR, given by $\|X\|_F/\|A\|_F$. Hence, we need to normalize everything by $\|A\|_F$. Due to our assumptions, we have that $\mathbb{E}[\|A\|_F^2] = N$. Hence, for any variable or constant, a hat refers to that variable or constant normalized by $\sqrt{N}$. For example, given $\theta_{trn}$, $X_{trn}$, and $A_{trn}$, we have that

$$\frac{\|\theta_{trn}X_{trn}\|_F}{\|A_{trn}\|_F}=\frac{\theta_{trn}}{\|A_{trn}\|_F}\approx\frac{\theta_{trn}}{\sqrt{N_{trn}}}=:\hat{\theta}_{trn}.$$

## 5 Theoretical Results And Consequences

In this section, we analyze the model presented in Section 4.
The main theoretical result of the paper is summarized below in Theorem 1. In Theorem 1, for $r = 1$, we exactly characterize the asymptotic generalization error.

Theorem 1. *Let $r = 1$ and $c = M/N_{trn}$ be fixed. Let $W$ be such that it satisfies Assumption 5 for training data $\theta_{trn}, X_{trn}, Y_{trn}$ that satisfy Assumptions 1-5. Further suppose that $\theta_{trn}$ is $O(\sqrt{N_{trn}})$. Then for test data $\theta_{tst}, X_{tst}, Y_{tst}$ that satisfy Assumptions 1-5 such that $\theta_{tst}$ is $O(\sqrt{N_{tst}})$, the mean squared generalization error (Equation 4) can be written as follows. If $c < 1$,*

$$R_{\mathrm{test-error}}=\frac{\theta_{tst}^{2}}{N_{tst}(1+\theta_{trn}^{2})^{2}}+o\left(\frac{\theta_{tst}^{2}}{N_{tst}}\right)+\frac{c^{2}(\theta_{trn}^{2}+\theta_{trn}^{4})}{M(1+\theta_{trn}^{2})^{2}(1-c)}+o\left(\frac{1}{M}\right)\tag{5}$$

*and if $c > 1$, we have that*

$$R_{\mathrm{test-error}}=\frac{\theta_{tst}^{2}}{N_{tst}(1+\theta_{trn}^{2})^{2}}+o\left(\frac{\theta_{tst}^{2}}{N_{tst}}\right)+\frac{c\,\theta_{trn}^{2}}{M(1+\theta_{trn}^{2})(c-1)}+o\left(\frac{1}{M}\right).\tag{6}$$

*The $o\left(\frac{\theta_{tst}^{2}}{N_{tst}}\right)$ and $o\left(\frac{1}{M}\right)$ error terms go to 0 as $N_{trn}, M \to \infty$.*

Theorem 1 is only for rank one data. We do not have the exact generalization error for general low-rank data. However, we can consider the heuristic formulas in Equations 7 and 8.³ Here, the $i$th term in the summation is the rank one formula for the rank one matrix $\sigma_i u_i v_i^T$ corresponding to the $i$th singular value $\sigma_i$. Here $u_i, v_i$ are the $i$th singular vectors.

$$\sum_{i=1}^{r}\frac{(\theta_{tst}\sigma_{i}^{tst})^{2}}{N_{tst}(1+(\theta_{trn}\sigma_{i}^{trn})^{2}c)^{2}}+\frac{c^{2}((\theta_{trn}\sigma_{i}^{trn})^{2}+(\theta_{trn}\sigma_{i}^{trn})^{4})}{M(1+(\theta_{trn}\sigma_{i}^{trn})^{2}c)^{2}(1-c)}+o(1)\tag{7}$$

$$\sum_{i=1}^{r}\frac{(\theta_{tst}\sigma_{i}^{tst})^{2}}{N_{tst}(1+(\theta_{trn}\sigma_{i}^{trn})^{2})^{2}}+\frac{c(\theta_{trn}\sigma_{i}^{trn})^{2}}{M(1+(\theta_{trn}\sigma_{i}^{trn})^{2})(c-1)}+o(1).\tag{8}$$

²See Appendix B for more details.
³More details for the heuristic can be found in Appendix D.1.
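The rank-r heuristic is cheap to evaluate numerically. A minimal sketch (the function name and argument order are ours; the $o(1)$ terms are dropped, and the formulas are transcribed as displayed above):

```python
import numpy as np

def heuristic_gen_error(theta_trn, theta_tst, sig_trn, sig_tst, M, N_trn, N_tst):
    """Rank-r heuristic (Equations 7 and 8): sum the rank-1 formula over the
    singular values, with c = M / N_trn and the o(1) terms dropped."""
    c = M / N_trn
    a = (theta_trn * np.asarray(sig_trn)) ** 2   # (theta_trn * sigma_i^trn)^2
    b = (theta_tst * np.asarray(sig_tst)) ** 2   # (theta_tst * sigma_i^tst)^2
    if c < 1:
        bias = b / (N_tst * (1 + a * c) ** 2)
        var = c ** 2 * (a + a ** 2) / (M * (1 + a * c) ** 2 * (1 - c))
    else:
        bias = b / (N_tst * (1 + a) ** 2)
        var = c * a / (M * (1 + a) * (c - 1))
    return float(np.sum(bias + var))
```

For example, `heuristic_gen_error(0.5, 1.0, [1.0, 0.5], [1.0, 0.5], 100, 400, 400)` evaluates the rank-2, under-parameterized ($c = 0.25$) case.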
Here we provide some assumptions under which this is a reasonable formula.

![8_image_0.png](8_image_0.png)

Figure 5: Figure showing the accuracy of the heuristic formula for low rank matrices. The figure shows the heatmap of the relative empirical error (|true generalization error − predicted generalization error| / |true generalization error|) when changing $c$ and the rank of the data. Here $M = 2500$, and $c$ is changed by changing $N_{trn}$. We average over ten trials for low SNR, and for high rank, we average over 100.

Before proceeding, we experimentally determine the accuracy of our formula for general rank $r$ data. To do so, we calculate the relative error. That is, if the empirical generalization error is $R_{emp}$ and our theoretical prediction is $R_{theory}$, then we calculate

$$\frac{|R_{emp}-R_{theory}|}{|R_{emp}|}.$$

Here, for low SNR ($\theta_{trn}, \theta_{tst}$ are $O(1)$), we sample $\sigma_i^{trn}, \sigma_i^{tst}$ i.i.d. from the squared standard Gaussian, and for high SNR, we multiply the previous by $\sqrt{N_{trn}}, \sqrt{N_{tst}}$ ($\theta_{trn}, \theta_{tst}$ are $\Theta(\sqrt{N_{trn}}), \Theta(\sqrt{N_{tst}})$). As we can see from Figure 5, our formula is accurate for low rank data, where we have a relative error of around 0.01. However, we see that the approximation breaks down for higher rank data, especially near $c = 1$.

## 5.1 Data Distributions

While Theorem 1 is only for rank 1 data, the current setup has some general components. In particular, it shows the surprising result that there can be two different types of mismatch between the training data and the test data that do not affect the generalization error.

Noise Distribution Mismatch. The first type of mismatch corresponds to the distribution of the noise. Besides the general assumptions on the noise distribution, we note that the distribution for the entries of $A_{trn}$ and the distribution for the entries of $A_{tst}$ need not be the same. Our only restriction is that $A_{trn}$ and $A_{tst}$ satisfy our noise distribution assumptions independently.
So, for example, Theorem 1 would apply if the entries of $A_{trn}$ are i.i.d. Gaussian with mean 0 and variance $1/M$, and $A_{tst}$ is sampled by drawing $P, Q$ uniformly from the space of orthogonal matrices, sampling $B$ with i.i.d. entries uniform on $[-\sqrt{3/M}, \sqrt{3/M}]$ (so that the entries have mean 0 and variance $1/M$), and setting $A_{tst} = PBQ$.

Data Distribution Mismatch. The next type of mismatch concerns the distribution of $V^{trn}$ and $V^{tst}$. In particular, we are not assuming that they come from any distribution, just that they satisfy certain assumptions. In the rank one case, we note that due to our assumptions, we must have that $\sigma_1^{trn} = \sigma_1^{tst} = 1$. Thus, our $i$th training data point is given by $UV_i^{trn}$, where $U$ is the feature vector and $V_i^{trn}$ is a latent scalar variable. Hence, in such a setup, we can imagine the entries of $V^{trn}$ and $V^{tst}$ being drawn independently from some distribution. The only assumption we need to account for is that $\|V^{trn}\| = \|V^{tst}\| = 1$. To account for this, suppose we first sample the entries of $\tilde{V}^{trn}$ in an i.i.d. manner from some distribution $\mathcal{D}_{trn}$ that has mean 0 and variance 1, and the entries of $\tilde{V}^{tst}$ from some distribution $\mathcal{D}_{tst}$ that has mean 0 and variance 1. Then if $N_{trn}, N_{tst}$ are large, due to the law of large numbers, we have with high probability that

$$\frac{1}{N_{trn}}\|\tilde{V}^{trn}\|_F^2 = 1 + o(1) = \frac{1}{N_{tst}}\|\tilde{V}^{tst}\|_F^2.$$

Thus, if we let $V^{trn} = \tilde{V}^{trn}/\|\tilde{V}^{trn}\|_F$ and $V^{tst} = \tilde{V}^{tst}/\|\tilde{V}^{tst}\|_F$ with $\theta_{trn} = \hat{\theta}_{trn}\|\tilde{V}^{trn}\|_F$ and $\theta_{tst} = \hat{\theta}_{tst}\|\tilde{V}^{tst}\|_F$, then the $V$s satisfy the general assumptions, and with high probability the $\theta$s satisfy the assumptions for Theorem 1.
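A rotationally bi-invariant test-noise matrix of the kind used in this mismatch example can be sampled directly. A sketch (sizes are ours) that builds $A = PBQ$ as in Proposition 1, with uniform entries of variance $1/M$ (bound $\sqrt{3/M}$, since a uniform variable on $[-a, a]$ has variance $a^2/3$) and Haar-distributed orthogonal factors:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 80, 120

def haar_orthogonal(n):
    # QR of a Gaussian matrix, with the sign of R's diagonal fixed,
    # yields a Haar-distributed (uniformly random) orthogonal matrix.
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

# B: independent entries, mean 0, variance 1/M.
a = np.sqrt(3 / M)
B = rng.uniform(-a, a, size=(M, N))

A = haar_orthogonal(M) @ B @ haar_orthogonal(N)  # rotationally bi-invariant noise
```

Since orthogonal factors preserve the Frobenius norm, the entries of `A` still have mean 0 and variance approximately $1/M$.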
## 5.2 Insights And Phenomena

![9_Image_0.Png](9_Image_0.Png)

(a) Rank 1 Theory Generalization Error (b) Rank 1 Theory Test SNR / Optimal Training SNR

Figure 6: Plot showing the theoretical double descent curves for the generalization error and the ratio of the test SNR to the optimal training SNR. Here $M = 1000$ and $\theta_{tst} = 1$, and $c$ was changed by changing $N_{trn}$.

Now that we have Theorem 1, we extract a few insights. Specifically, we are interested in insights in the context of the experiments run in Section 3.

## 5.2.1 Optimal Amount Of Noise.

If we ignore the error terms, we can differentiate the formula to get the following expression for the optimal training SNR. Here $x^+ = \max(0, x)$.

$$\frac{\theta_{opt-trn}^{2}}{N_{trn}}:=\begin{cases}\left(\frac{\theta_{tst}^{2}}{N_{trn}}\frac{2(1-c)}{2-c}-\frac{c}{M(2-c)}\right)^{+}&c<1\\ \left(2\frac{\theta_{tst}^{2}}{N_{trn}}(c-1)-\frac{1}{N_{trn}}\right)^{+}&c>1\end{cases}.\tag{9}$$

Our theoretical model captures the surprising result that the optimal training SNR and the test SNR are unequal. Moreover, we see that the optimal training distribution depends on $c$. Further, the formula in Equation 9 also describes a double descent curve for the $\hat{\theta}_{tst}^2/\hat{\theta}_{opt-trn}^2$ versus $c$ curve, as shown in Figure 6b. Thus, we see that our model captures phenomena 2, 3, and 4 from Section 3.

## 5.2.2 Double Descent Curves.

We have already seen that the optimal amount of training noise follows a double descent curve. This double descent is due to the double descent of the generalization error. To understand this phenomenon, we first note that in the formula in Theorem 1, the first term gives the bias of our model, and the second term gives the variance. We can see that the variance formulas have a singularity at $c = 1$. Since we have a linear model, $c = 1$ is the interpolation threshold (i.e., the point after which we have 0 training error).
Hence, as we approach the interpolation threshold, the model's variance increases, increasing the generalization error. Thus our model captures phenomenon 1. Further, we can see that decreasing $\theta_{trn}$ decreases the model's variance. Since the variance increases near the interpolation threshold, we try to mitigate this by increasing the amount of noise (or reducing $\theta_{trn}$). Hence, sample-wise double descent for the optimal noise level occurs as a result of trying to reduce the variance of the model. Thus, our model captures the first four phenomena observed in Section 3.

![10_image_0.png](10_image_0.png)

Figure 7: Figure showing the major steps used to derive the formula for the generalization error.

Phenomenon 5 from Section 3 was that optimally picking the amount of training noise mitigated double descent. However, our theoretical model still has double descent even if we optimally pick the amount of training noise. This is an avenue for future work.

We also compare Theorem 1 to Theorem 1 from Hastie et al. (2022). In Hastie et al. (2022), they assume that they have data $x_i \in \mathbb{R}^M$ from some distribution $\mathcal{D}$ and response $y_i = x_i^T\beta + \xi_i$, where $\beta \in \mathbb{R}^M$ is fixed and $\xi_i \sim \mathcal{N}(0, \sigma^2)$. Then they have the following risk.

$$R_{X}(\hat{\beta};\beta)=\mathbb{E}_{x_{0}\sim{\mathcal{D}}}[(x_{0}^{T}\hat{\beta}-x_{0}^{T}\beta)^{2}\,|\,X_{trn}].$$

This is the conditional excess risk given the training data. Under some assumptions, they show that

$$R_{X}(\hat{\beta},\beta)\to\begin{cases}\sigma^{2}\frac{c}{1-c}&c<1\\ \|\beta\|^{2}\left(1-\frac{1}{c}\right)+\sigma^{2}\frac{1}{c-1}&c>1\end{cases}.$$

First, we note the similarities between the two. In both cases, the peak is at $c = 1$ and is due to a term of the same order, i.e., both have $(c-1)^{-1}$ and not $(c-1)^{-\alpha}$ for some other $\alpha > 0$. However, there are differences. First, as detailed in the introduction, they look at the supervised setting, whereas we look at the unsupervised setting.
Second, they have output noise, whereas we have input noise. Third, they have zero bias in the under-parameterized regime, whereas we have a non-zero bias term in both the over- and under-parameterized regimes.

## 5.2.3 Noise As A Regularizer.

Finally, we see that the noise level explicitly regularizes $\|W\|_F$. Specifically, from Lemmas 1 and 3, the second term in Theorem 1 corresponds to $\|W\|_F$. The formula shows that increasing the amount of noise, which corresponds to decreasing $\theta_{trn}$, decreases $\|W\|_F$.

## 6 Proof Of Theorem 1

We prove Theorem 1 via the steps shown in Figure 7. The proofs of all of the lemmas have been moved to Appendix C. Here we present a proof sketch that details the high-level ideas.

## 6.1 Step 1: Decompose The Error Into Bias And Variance Terms.

First, we decompose the error. Since we are not in the supervised learning setup, we do not have standard definitions of bias and variance. However, we will call the following terms the bias and variance of the model.

Lemma 1. *If $A_{tst}$ has mean 0 entries and $A_{tst}$ is independent of $X_{tst}$ and $W$, then*

$$\mathbb{E}_{A_{tst}}[\|\theta_{tst}X_{tst}-WY_{tst}\|_{F}^{2}]=\underbrace{\theta_{tst}^{2}\mathbb{E}_{A_{tst}}[\|X_{tst}-WX_{tst}\|_{F}^{2}]}_{\text{Bias}}+\underbrace{\mathbb{E}_{A_{tst}}[\|WA_{tst}\|_{F}^{2}]}_{\text{Variance}}.\tag{10}$$

## 6.2 Step 2: Formula For W

In our current setup, $W$ is the solution to a least-squares problem. Hence $W = \theta_{trn}X_{trn}Y_{trn}^{\dagger}$. Expanding this out, we get the following formula for $W$. Let $u$ be the left singular vector and $v_{trn}, v_{tst}$ the right singular vectors. Let

$$h = v_{trn}^T A_{trn}^{\dagger},\quad k = A_{trn}^{\dagger}u,\quad s = (I - A_{trn}A_{trn}^{\dagger})u,\quad t = v_{trn}^T(I - A_{trn}^{\dagger}A_{trn}),$$

$$\beta = 1 + \theta_{trn}v_{trn}^T A_{trn}^{\dagger}u,\quad \tau_1 = \theta_{trn}^2\|t\|^2\|k\|^2 + \beta^2,\quad \tau_2 = \theta_{trn}^2\|s\|^2\|h\|^2 + \beta^2.$$
Proposition 2. *If $\beta \neq 0$ and $A_{trn}$ has full rank, then*

$$W=\begin{cases}\dfrac{\theta_{trn}\beta}{\tau_{1}}uh+\dfrac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}&c<1\\[2mm]\dfrac{\theta_{trn}\beta}{\tau_{2}}uh+\dfrac{\theta_{trn}^{2}\|h\|^{2}}{\tau_{2}}us^{T}&c>1\end{cases}.$$

For Gaussian noise, $A_{trn}$ has full rank with probability one, and $\beta$ is a random variable whose expected value equals 1 and whose distribution is highly concentrated. Thus, Proposition 2 applies when $A_{trn}$ is isotropic Gaussian noise. Here we restricted ourselves to rank 1 since, using Meyer (1973), we can expand formulas of the form $(A + xy^T)^{\dagger}$, where $x, y$ are vectors. For the higher rank case, we apply the formula iteratively. This is the main difficulty of the method. Previous work on deriving asymptotics for the generalization error had noise on the output, and hence would take the pseudoinverse of a matrix that only depends on the data. In our case, however, we take the pseudoinverse of a matrix that depends on the noise.

## 6.3 Step 3: Decompose The Terms Into A Sum Of Various Trace Terms.

For the bias and variance terms, we have the following two lemmas.

Lemma 2. *If $W$ is the solution to Equation 3, then*

$$X_{tst}-WX_{tst}=\begin{cases}\dfrac{\beta}{\tau_{1}}X_{tst}&c<1\\[2mm]\dfrac{\beta}{\tau_{2}}X_{tst}&c>1\end{cases}.$$

Lemma 3. *If the entries of $A_{tst}$ are independent with mean 0 and variance $1/M$, then we have that* $\mathbb{E}_{A_{tst}}[\|WA_{tst}\|^{2}] = \frac{N_{tst}}{M}\|W\|^{2}$.

Note that this did not need assumptions on $W$ or $X_{tst}$; all that was needed were the assumptions on $A_{tst}$. Thus, this holds more generally. This decomposition also follows from Bishop (1995).
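Lemma 3 is easy to verify by Monte Carlo. A quick numpy check (sizes, trial count, and the random choice of $W$ are ours) averages $\|WA_{tst}\|_F^2$ over fresh noise draws and compares it to $\frac{N_{tst}}{M}\|W\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N_tst = 60, 80
W = rng.standard_normal((M, M))  # Lemma 3 needs no assumptions on W

# Average ||W A_tst||_F^2 over independent noise draws with entry variance 1/M.
trials = 500
acc = 0.0
for _ in range(trials):
    A_tst = rng.standard_normal((M, N_tst)) / np.sqrt(M)
    acc += np.sum((W @ A_tst) ** 2)

lhs = acc / trials                   # Monte Carlo estimate of E[||W A_tst||_F^2]
rhs = (N_tst / M) * np.sum(W ** 2)   # (N_tst / M) * ||W||_F^2
```

With a few hundred trials, `lhs` and `rhs` agree to within a few percent.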
In light of Lemmas 1, 2, and 3, and the fact that $\|X_{tst}\|_{F}^{2} = 1$, we see that the expected mean squared generalization error is given by

$$\mathbb{E}_{A_{tst}}\left[\frac{\|\theta_{tst}X_{tst}-WY_{tst}\|_{F}^{2}}{N_{tst}}\right]=\frac{1}{N_{tst}}\frac{\beta^{2}}{\tau_{i}^{2}}\theta_{tst}^{2}+\frac{1}{M}\|W\|_{F}^{2},$$

where $\tau_i$ depends on whether $c < 1$ or $c > 1$. Finally, let us look at the $\|W\|$ term.

Lemma 4. *If $\beta \neq 0$ and $A_{trn}$ has full rank, then we have that if $c < 1$,*

$$\|W\|_{F}^{2}=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{1}^{2}}\,Tr(h^{T}h)+2\frac{\theta_{trn}^{3}\|t\|^{2}\beta}{\tau_{1}^{2}}\,Tr(h^{T}k^{T}A_{trn}^{\dagger})+\frac{\theta_{trn}^{4}\|t\|^{4}}{\tau_{1}^{2}}\,Tr((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger}),$$

*and if $c > 1$, then we have that*

$$\|W\|_{F}^{2}=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{2}^{2}}Tr(h^{T}h)+2\frac{\theta_{trn}^{3}\|h\|^{2}\beta}{\tau_{2}^{2}}Tr(h^{T}s^{T})+\frac{\theta_{trn}^{4}\|h\|^{4}}{\tau_{2}^{2}}\,Tr(ss^{T}).$$

## 6.4 Step 4: Estimate Using Random Matrix Theory.

While the formula given by Lemmas 1, 3, and 4 is correct, we need a simpler formula to analyze the situation. Using ideas from random matrix theory, we can simplify the expression for $\|W\|_{F}^{2}$. To do so, we first need to prove Lemmas 5 and 6. The main idea behind Lemmas 5 and 6 is that due to the rotational invariance of $A_{trn}$, the expectation of the trace of products of various matrices derived from $A_{trn}$ is determined by the expected value of some function $\chi$ of the eigenvalues of $A_{trn}$. However, instead of directly computing this expected value, we note that for any matrix $A$ that satisfies the noise assumptions, if we let $M, N \to \infty$ with $M/N \to c$, then the eigenvalue distribution converges to the Marchenko-Pastur distribution (Marcenko & Pastur, 1967; Götze & Tikhomirov, 2011; 2003; 2004; 2005; Bai et al., 2003).
Götze & Tikhomirov (2004) showed that the distribution of the eigenvalues converges almost surely with a rate of at least $O(N^{-1/2+\epsilon})$ for any $\epsilon>0$. Thus, we can use the expected value of $\chi(\lambda)$ for $\lambda$ sampled from the Marchenko–Pastur distribution as an approximation.

**Lemma 5.** _Suppose $A$ is a $p$ by $q$ matrix such that the entries of $A$ are independent and have mean 0, variance $1/q$, and bounded fourth moment. Let $W_{p}=AA^{T}$, let $W_{q}=A^{T}A$, and let $C=p/q$. Suppose $\lambda_{p},\lambda_{q}$ are random eigenvalues of $W_{p},W_{q}$. Then_

1. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}}\right]=\frac{1}{1-C}+o(1)$._
2. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}^{2}}\right]=\frac{1}{(1-C)^{3}}+o(1)$._
3. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}^{3}}\right]=\frac{1+C}{(1-C)^{5}}+o(1)$._
4. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}^{4}}\right]=\frac{C^{2}+3C+1}{(1-C)^{7}}+o(1)$._
5. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}}\right]=\frac{C^{-1}}{1-C^{-1}}+o(1)$._
6. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}^{2}}\right]=\frac{C^{-2}}{(1-C^{-1})^{3}}+o(1)$._
7. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}^{3}}\right]=\frac{C^{-3}(1+C^{-1})}{(1-C^{-1})^{5}}+o(1)$._
8. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}^{4}}\right]=\frac{C^{-4}(C^{-2}+3C^{-1}+1)}{(1-C^{-1})^{7}}+o(1)$._

**Lemma 6.** _Suppose $A$ is a $p$ by $q$ matrix such that the entries of $A$ are independent and have mean 0, variance $1/q$, and bounded fourth moment. Let $C=p/q$, and let $x$ be a unit vector in $\mathbb{R}^{p}$. Then_

1. $\mathbb{E}[\operatorname{Tr}(x^{T}(AA^{T})^{\dagger}x)]=\begin{cases}\frac{1}{1-C}+o(1)&p<q\\ \frac{q}{p}\frac{C^{-1}}{1-C^{-1}}+o(1)&p>q\end{cases}$.
2. $\mathbb{E}[\operatorname{Tr}(x^{T}(AA^{T})^{\dagger}(AA^{T})^{\dagger}x)]=\begin{cases}\frac{1}{(1-C)^{3}}+o(1)&p<q\\ \frac{q}{p}\frac{C^{-2}}{(1-C^{-1})^{3}}+o(1)&p>q\end{cases}$.

Using these technical lemmas, we can now deal with all of the terms in the expressions in Lemma 4.

**Lemma 7.**
_If $A_{trn}$ satisfies the noise assumptions, then we have that_

1. _$\mathbb{E}[\beta]=1+o(1)$ and $\operatorname{Var}(\beta)=\frac{\theta_{trn}^{2}c}{\max(M,N_{trn})|1-c|}+o(1)$._
2. _If $c<1$, then $\mathbb{E}[\|h\|^{2}]=\frac{c^{2}}{1-c}+o(1)$ and $\operatorname{Var}(\|h\|^{2})=\frac{c^{3}(2+c)}{N_{trn}(1-c)^{3}}+o(1)$._
3. _If $c>1$, then $\mathbb{E}[\|h\|^{2}]=\frac{c}{c-1}+o(1)$ and $\operatorname{Var}(\|h\|^{2})=\frac{c^{2}(2c-1)}{N_{trn}(c-1)^{3}}+o(1)$._
4. _$\mathbb{E}[\|k\|^{2}]=\frac{c}{1-c}+o(1)$ and $\operatorname{Var}(\|k\|^{2})=\frac{c^{2}(2+c)}{M(1-c)^{3}}+o(1)$._
5. _$\mathbb{E}[\|s\|^{2}]=\frac{c-1}{c}+o(1)$ and $\operatorname{Var}(\|s\|^{2})=\frac{2}{Mc}+o(1)$._
6. _$\mathbb{E}[\|t\|^{2}]=1-c+o(1)$ and $\operatorname{Var}(\|t\|^{2})=\frac{2c}{N_{trn}}+o(1)$._

**Lemma 8.** _Under the noise assumptions, we have that $\mathbb{E}[\operatorname{Tr}(h^{T}k^{T}A_{trn}^{\dagger})]=0$ and $\operatorname{Var}(\operatorname{Tr}(h^{T}k^{T}A_{trn}^{\dagger}))=\chi_{3}(c)/N_{trn}$, where $\chi_{3}(c)=\mathbb{E}[1/\lambda^{3}]$, $\lambda$ is an eigenvalue of $AA^{T}$, and $A$ is as in Lemma 6._

**Lemma 9.** _Under the noise assumptions, we have that_
$$\operatorname{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger})=\frac{c^{2}}{(1-c)^{3}}+o(1),\quad\operatorname{Var}(\operatorname{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger}))=\frac{3}{M}\chi_{4}(c)-\frac{1}{M}\frac{c^{4}}{(1-c)^{6}}\chi_{4}(c),$$
_where $\chi_{4}(c)=\mathbb{E}[1/\lambda^{4}]$, $\lambda$ is an eigenvalue of $AA^{T}$, and $A$ is as in Lemma 6._

**Lemma 10.** _Under the same assumptions as Proposition 2, we have that $\operatorname{Tr}(h^{T}s^{T})=0$._

Lemmas 7, 8, 9, and 10 tell us that all of the terms are highly concentrated. Thus, even though such terms may be correlated, we can use the fact that $|\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]|\leq\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}$ to treat the terms as if they were uncorrelated. Since these variances have been shown to be $o(1)$, we have that for each of these terms $\mathbb{E}[XY]=\mathbb{E}[X]\mathbb{E}[Y]+o(1)$. For example, since $\tau_{1}=\beta^{2}+\theta_{trn}^{2}\|t\|^{2}\|k\|^{2}$, Lemma 7 gives $\mathbb{E}[\tau_{1}]=1+\theta_{trn}^{2}c+o(1)$. Similarly, $\mathbb{E}[\tau_{2}]=1+\theta_{trn}^{2}+o(1)$. Finally, using these lemmas, we can simplify the expressions in Lemma 4 to obtain the formulas for the expected generalization error shown in Equations 5 and 6.

## 7 Conclusion

In this paper, we switch focus from a supervised setup to an unsupervised one. Specifically, we look at the problem of denoising data.
We empirically show five interesting phenomena in our setup. First, we see sample-wise double descent of the generalization error for denoising feedforward neural networks. Second, we see that, under certain circumstances, the optimal denoising error does not occur when the training data SNR is equal to the test data SNR. Third, we see that the optimal ratio depends on the number of data points. Fourth, we see that this curve also exhibits sample-wise double descent, and fifth, that picking the correct training noise level mitigates the sample-wise double descent of the generalization error. To analyze this setting theoretically, we study a model in which the data has low rank. Here we derive the exact asymptotics of the generalization error for rank-1 data and a general noise model. Our analysis demonstrates that this simple model captures most of the phenomena seen empirically.

## References

Ben Adlam and Jeffrey Pennington. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization. In *ICML*, 2020.

Madhu S. Advani and Andrew M. Saxe. High-dimensional dynamics of generalization error in neural networks. *Neural Networks*, 132:428-446, 2020.

Z. Bai, B. Miao, and J. Yao. Convergence rates of spectral distributions of large sample covariance matrices. *SIAM J. Matrix Anal. Appl.*, 25:105–127, 2003.

Matthew R. Banham and Aggelos K. Katsaggelos. Digital Image Restoration. *IEEE Signal Processing Magazine*, 14(2):24–41, 1997. doi: 10.1109/79.581363.

P. Bartlett, Philip M. Long, G. Lugosi, and Alexander Tsigler. Benign Overfitting in Linear Regression. *Proceedings of the National Academy of Sciences*, 117:30063-30070, 2020.

Mikhail Belkin, Daniel J. Hsu, Siyuan Ma, and Soumik Mandal. Reconciling Modern Machine-Learning Practice and the Classical Bias–Variance Trade-off. *Proceedings of the National Academy of Sciences*, 116:15849-15854, 2019.

Mikhail Belkin, Daniel J. Hsu, and Ji Xu.
Two Models of Double Descent for Weak Features. *SIAM J. Math.* Data Sci., 2:1167–1180, 2020. Jacob Benesty, Jingdong Chen, and Yiteng Huang. Study of the Widely Linear Wiener Filter for Noise Reduction. *2010 IEEE International Conference on Acoustics, Speech and Signal Processing*, pp. 205–208, 2010. Chris M. Bishop. Training with Noise is Equivalent to Tikhonov Regularization. *Neural Computation*, 7(1): 108–116, January 1995. ISSN 0899-7667. Antoni Buades, Bartomeu Coll, and Jean-Michel Morel. A Non-local Algorithm for Image Denoising. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2:60–65 vol. 2, 2005. Michal Derezinski, Feynman T Liang, and Michael W Mahoney. Exact Expressions for Double Descent and Implicit Regularization Via Surrogate Random Design. In Advances in Neural Information Processing Systems, volume 33, pp. 5152–5164. Curran Associates, Inc., 2020. URL https://proceedings.neurips. cc/paper/2020/file/37740d59bb0eb7b4493725b2e0e5289b-Paper.pdf. J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In *NAACL*, 2019. Edgar Dobriban and Stefan Wager. High-dimensional asymptotics of prediction: Ridge regression and classification. *The Annals of Statistics*, 46(1):247–279, 2018. Chengyu Dong, Liyuan Liu, and Jingbo Shang. Double descent in adversarial training: An implicit label noise perspective. *ArXiv*, abs/2110.03135, 2021. Stéphane d'Ascoli, Levent Sagun, and Giulio Biroli. Triple Descent and the Two Kinds of Overfitting: Where and Why Do They Appear? In *Advances in Neural Information Processing Systems*, volume 33, pp. 3058–3069. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/ 1fd09c5f59a8ff35d499c0ee25a1d47e-Paper.pdf. Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, and Matthieu Wyart. 
Scaling Description of Generalization with Number of Parameters in Deep Learning. *Journal of Statistical Mechanics: Theory and Experiment*, 2020, 2020. Stuart Geman, Elie Bienenstock, and René Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4:1–58, 1992. Federica Gerace, Bruno Loureiro, Florent Krzakala, Marc Mezard, and Lenka Zdeborova. Generalisation Error in Learning with Random Features and the Hidden Manifold Model. In *Proceedings of the 37th* International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 3452–3462. PMLR, 13–18 Jul 2020. Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Linearized Two-layers Neural Networks in High Dimension. *The Annals of Statistics*, 49(2):1029 - 1054, 2021. doi: 10.1214/20-AOS1990. URL https://doi.org/10.1214/20-AOS1990. Abhiram Gnansambandam and S. Chan. One size fits all: Can we train one denoiser for all noise levels? In ICML, 2020. F. Götze and A. Tikhomirov. Rate of convergence to the semi-circular law. Probability Theory and Related Fields, 127:228–276, 2003. F. Götze and A. Tikhomirov. Rate of convergence in probability to the marchenko-pastur law. *Bernoulli*, 10: 503–548, 2004. F. Götze and A. Tikhomirov. The rate of convergence for spectra of gue and lue matrix ensembles. *Central* European Journal of Mathematics, 3:666–704, 2005. F. Götze and A. Tikhomirov. On the rate of convergence to the marchenko–pastur distribution. *arXiv:* Probability, 2011. Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J. Tibshirani. Surprises in High-Dimensional Ridgeless Least Squares Interpolation. *The Annals of Statistics*, 50(2):949 - 986, 2022. doi: 10.1214/ 21-AOS2133. URL https://doi.org/10.1214/21-AOS2133. Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, and R. Salakhutdinov. Improving Neural Networks by Preventing Co-adaptation of Feature Detectors. *ArXiv*, abs/1207.0580, 2012. 
Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. In *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/ 5a4be1fa34e62bb8a6ec6b91d2462f5a-Paper.pdf. Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clement Hongler, and Franck Gabriel. Implicit Regularization of Random Feature Models. In *Proceedings of the 37th International Conference on Machine Learning*, volume 119 of *Proceedings of Machine Learning Research*, pp. 4631–4640. PMLR, 13–18 Jul 2020. W. James and Charles Stein. Estimation with Quadratic Loss. In Breakthroughs in Statistics: Foundations and Basic Theory, pp. 443–460, New York, NY, 1992. Springer New York. ISBN 978-1-4612-0919-5. doi: 10.1007/978-1-4612-0919-5_30. URL https://doi.org/10.1007/978-1-4612-0919-5_30. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet Classification with Deep Convolutional Neural Networks. *Communications of the ACM*, 60:84 - 90, 2012. Anders Krogh and John Hertz. A simple weight decay can improve generalization. In J. Moody, S. Hanson, and R.P. Lippmann (eds.), *Advances in Neural Information Processing Systems*, volume 4. Morgan-Kaufmann, 1991. URL https://proceedings.neurips.cc/paper/1991/file/ 8eefcfdf5990e441f0fb6f3fad709e21-Paper.pdf. Andrew K. Lampinen and Surya Ganguli. An Analytic Theory of Generalization Dynamics and Transfer Learning in Deep Linear Networks. In *International Conference on Learning Representations*, 2019. URL https://openreview.net/forum?id=ryfMLoCqtQ. Marc Lelarge and Léo Miolane. Fundamental Limits of Symmetric Low-Rank Matrix Estimation. In Proceedings of the 2017 Conference on Learning Theory, volume 65 of *Proceedings of Machine Learning* Research, pp. 1297–1301. PMLR, 07–10 Jul 2017. Thibault Lesieur, Florent Krzakala, and Lenka Zdeborová. 
Constrained Low-Rank Matrix Estimation: Phase Transitions, Approximate Message Passing and Applications. Journal of Statistical Mechanics: Theory and Experiment, 2017(7):073403, jul 2017. doi: 10.1088/1742-5468/aa7284. URL https://dx.doi.org/ 10.1088/1742-5468/aa7284. Tengyuan Liang, A. Rakhlin, and Xiyu Zhai. On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels. In *COLT*, 2020. Bruno Loureiro, Gabriele Sicuro, Cedric Gerbelot, Alessandro Pacco, Florent Krzakala, and Lenka Zdeborova. Learning Gaussian Mixtures with Generalized Linear Models: Precise Asymptotics in High-dimensions. In Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id= j3eGyNMPvh. Antoine Maillard, Florent Krzakala, Marc Mézard, and Lenka Zdeborová. Perturbative Construction of Mean-Field Equations in Extensive-Rank Matrix Factorization and Denoising. *Journal of Statistical* Mechanics: Theory and Experiment, 2022(8):083301, aug 2022. doi: 10.1088/1742-5468/ac7e4c. URL https://dx.doi.org/10.1088/1742-5468/ac7e4c. V. Marcenko and L. Pastur. Distribution of eigenvalues for some sets of random matrices. Mathematics of The Ussr-sbornik, 1:457–483, 1967. Song Mei and A. Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. *arXiv: Statistics Theory*, 2019. Gabriel Mel and Surya Ganguli. A Theory of High Dimensional Regression with Arbitrary Correlations Between Input Features and Target Functions: Sample Complexity, Multiple Descent Curves and a Hierarchy of Phase Transitions. In *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of *Proceedings of Machine Learning Research*, pp. 7578–7587. PMLR, 18–24 Jul 2021. URL https://proceedings.mlr.press/v139/mel21a.html. Carl D. Meyer, Jr. Generalized inversion of modified matrices. *SIAM Journal on Applied Mathematics*, 24(3): 315–323, 1973. doi: 10.1137/0124033. 
URL https://doi.org/10.1137/0124033.

R. R. Nadakuditi. OptShrink: An Algorithm for Improved Low-Rank Signal Matrix Denoising by Optimal, Data-Driven Singular Value Shrinkage. *IEEE Transactions on Information Theory*, 60(5):3002–3018, 2014. doi: 10.1109/TIT.2014.2311661.

Preetum Nakkiran. More data can hurt for linear regression: Sample-wise double descent, 2020.

Preetum Nakkiran, Prayaag Venkat, Sham M. Kakade, and Tengyu Ma. Optimal Regularization can Mitigate Double Descent. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=7R7fAoUygoa.

Manfred Opper. Statistical mechanics of learning: Generalization. 2002.

Arnu Pretorius, Steve Kroon, and Herman Kamper. Learning Dynamics of Linear Denoising Autoencoders. In *International Conference on Machine Learning*, pp. 4141–4150. PMLR, 2018.

N. R. Rao and A. Edelman. The polynomial method for random matrices. *Foundations of Computational Mathematics*, 8:649–702, 2008.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. *Journal of Machine Learning Research*, 15(56):1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html.

H. Takeda, Sina Farsiu, and P. Milanfar. Kernel Regression for Image Processing and Reconstruction. *IEEE Transactions on Image Processing*, 16:349–366, 2007.

C. Tian, Lunke Fei, Wenxian Zheng, Yanchen Xu, Wangmeng Zuo, and Chia-Wen Lin. Deep Learning on Image Denoising: An overview. *Neural Networks: the official journal of the International Neural Network Society*, 131:251–275, 2020.

Robert Tibshirani. Regression Shrinkage and Selection via the Lasso. *Journal of the Royal Statistical Society Series B-Methodological*, 58:267–288, 1996.

Emanuele Troiani, Vittorio Erba, Florent Krzakala, Antoine Maillard, and Lenka Zdeborová. Optimal Denoising of Rotationally Invariant Rectangular Matrices. *ArXiv*, abs/2203.07752, 2022.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. *Journal of Machine Learning Research*, 11:3371–3408, December 2010. ISSN 1532-4435.

Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of Neural Networks using DropConnect. In *Proceedings of the 30th International Conference on Machine Learning*, volume 28 of *Proceedings of Machine Learning Research*, pp. 1058–1066, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR.

N. Wiener. *Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications*. MIT Press, 1949.

In the appendix, we present all of the proofs for the results in the main text, in the same order in which they appear.

## A Noise Assumptions

**Proposition 1.** _If $B$ is a random matrix that has full rank with probability 1 and whose entries are independent, have mean 0, and have variance $1/M$, and $P,Q$ are uniformly random orthogonal matrices, then $A=PBQ$ satisfies all of our noise assumptions._

Proof. Since $P,Q$ are uniformly random orthogonal matrices and $A=PBQ$, it is clear that $A$ is rotationally bi-invariant and has full rank. Since each entry of $B$ has mean 0 and each entry of $A$ is a linear combination of entries of $B$ whose coefficients (i.e., the entries of $P,Q$) are independent of $B$, each entry of $A$ has mean 0. Due to the orthogonality of $P,Q$, the variance of an entry of $A$ is the same as the variance of an entry of $B$. Thus, the only thing left to prove is that the entries of $A$ are uncorrelated. To do this, we note that
$$a_{ij}=\sum_{k=1}^{N}\sum_{l=1}^{M}p_{il}b_{lk}q_{kj}.$$
Consider two entries $a_{i_{1}j_{1}}$ and $a_{i_{2}j_{2}}$.
Then we have that
$$\mathbb{E}[a_{i_{1}j_{1}}a_{i_{2}j_{2}}]=\mathbb{E}\left[\left(\sum_{k=1}^{N}\sum_{l=1}^{M}p_{i_{1}l}b_{lk}q_{kj_{1}}\right)\left(\sum_{k=1}^{N}\sum_{l=1}^{M}p_{i_{2}l}b_{lk}q_{kj_{2}}\right)\right]$$
$$=\sum_{k=1}^{N}\sum_{l=1}^{M}\mathbb{E}[p_{i_{1}l}p_{i_{2}l}]\mathbb{E}[b_{lk}^{2}]\mathbb{E}[q_{kj_{1}}q_{kj_{2}}]$$
$$=\frac{1}{M}\mathbb{E}\left[\sum_{l=1}^{M}p_{i_{1}l}p_{i_{2}l}\right]\mathbb{E}\left[\sum_{k=1}^{N}q_{kj_{1}}q_{kj_{2}}\right].$$
The second equality follows from the fact that $P,Q,B$ are independent of each other and that the entries of $B$ are independent with mean 0; hence, the cross terms have expectation 0. If $i_{1}=i_{2}$ and $j_{1}\neq j_{2}$, then, since $Q$ is an orthogonal matrix,
$$\sum_{k=1}^{N}\mathbb{E}[q_{kj_{1}}q_{kj_{2}}]=\mathbb{E}\left[\sum_{k=1}^{N}q_{kj_{1}}q_{kj_{2}}\right]=0.$$
Thus, the entries are uncorrelated. Similarly, when $i_{1}\neq i_{2}$, since $P$ is an orthogonal matrix, we get that the entries are uncorrelated. $\square$

**Convergence to Marchenko–Pastur.** If we strengthened the uncorrelatedness condition to the entries being independent, then the mean and variance assumptions (along with an assumption that the fourth moment is bounded) would give convergence to the Marchenko–Pastur distribution. However, independence together with bi-invariance would force our noise distribution to be i.i.d. Gaussian. In general, with the relaxed assumption that the entries are only uncorrelated, convergence is not known. In our case, however, we have a much simpler proof for matrices formed by Proposition 1: the noise matrices $B$ satisfy the standard assumptions for convergence, and multiplying $B$ by orthogonal matrices independent of $B$ has no effect on the eigenvalue distribution. Thus, the eigenvalue distribution of these matrices also converges to the Marchenko–Pastur distribution.
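Proposition 1 can also be checked empirically. The sketch below — with hypothetical sizes, seed, trial count, and the choice of two sampled entries all our own — draws $A=PBQ$ with Gaussian $B$ and Haar-random orthogonal $P,Q$, and verifies that two distinct entries of $A$ have the predicted variance $1/M$ and are (approximately) uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 30, 50
trials = 2000

def haar_orthogonal(n):
    """Sample an orthogonal matrix from the Haar measure via QR with a sign fix."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

samples = np.empty((trials, 2))
for i in range(trials):
    B = rng.standard_normal((M, N)) / np.sqrt(M)   # iid entries, variance 1/M
    A = haar_orthogonal(M) @ B @ haar_orthogonal(N)
    samples[i] = A[0, 0], A[0, 1]                  # two distinct entries of A

cov = np.cov(samples.T)
print(M * cov[0, 0], M * cov[0, 1])   # variance close to 1/M, covariance close to 0
```

The sign fix in `haar_orthogonal` is needed because raw NumPy QR output is not Haar-distributed; without it $P,Q$ would be biased and the rotational bi-invariance claim would not be tested faithfully.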
## B Ridge Regularization

Here we are interested in minimizing $\|\theta_{trn}X_{trn}-W(\theta_{trn}X_{trn}+A_{trn})\|_{F}^{2}+\mu^{2}\|W\|_{F}^{2}$. This problem is equivalent to minimizing
$$\left\|\theta_{trn}\begin{bmatrix}X_{trn}&0\end{bmatrix}-W\left(\theta_{trn}\begin{bmatrix}X_{trn}&0\end{bmatrix}+\begin{bmatrix}A_{trn}&\mu I\end{bmatrix}\right)\right\|_{F}^{2}.$$
Thus, using $\tilde{A}_{trn}=\begin{bmatrix}A_{trn}&\mu I\end{bmatrix}$, this is the same problem as before but with different assumptions on the noise matrix. Note that Lemma 1 still applies, as does Proposition 2, but with $\tilde{A}_{trn}$ instead of $A_{trn}$ and with zeros appended to $v_{trn}$. Hence, the rest of the proof is similar, and we need to look at the eigenvalues of $\tilde{A}_{trn}\tilde{A}_{trn}^{T}$ instead of $A_{trn}A_{trn}^{T}$. Here we note that
$$\tilde{A}_{trn}\tilde{A}_{trn}^{T}=A_{trn}A_{trn}^{T}+\mu^{2}I.$$
Thus, we have that the eigenvalues are shifted by $\mu^{2}$. We need to deal with this explicitly during the calculation and modify Lemma 5 accordingly.

## C Proofs

Due to our data generation assumptions that $\|\Sigma_{trn}\|_{F}=\|\Sigma_{tst}\|_{F}=1$ for rank-1 data, we have that $\sigma_{1}^{trn}=\sigma_{1}^{tst}=1$.

## C.1 Step 1: Decompose Into Bias And Variance

**Lemma 1.** _If $A_{tst}$ has mean-0 entries and $A_{tst}$ is independent of $X_{tst}$ and $W$, then_
$$\mathbb{E}_{A_{tst}}[\|\theta_{tst}X_{tst}-WY_{tst}\|_{F}^{2}]=\underbrace{\theta_{tst}^{2}\mathbb{E}_{A_{tst}}[\|X_{tst}-WX_{tst}\|_{F}^{2}]}_{Bias}+\underbrace{\mathbb{E}_{A_{tst}}[\|WA_{tst}\|_{F}^{2}]}_{Variance}.$$

Proof.
Using the fact that for any two matrices $\|G-H\|_{F}^{2}=\|G\|_{F}^{2}+\|H\|_{F}^{2}-2\operatorname{Tr}(G^{T}H)$, we get that
$$\|\theta_{tst}X_{tst}-WY_{tst}\|_{F}^{2}=\|\theta_{tst}X_{tst}-W\theta_{tst}X_{tst}-WA_{tst}\|_{F}^{2}$$
$$=\theta_{tst}^{2}\|X_{tst}-WX_{tst}\|_{F}^{2}+\|WA_{tst}\|_{F}^{2}-2\operatorname{Tr}((\theta_{tst}X_{tst}-W\theta_{tst}X_{tst})^{T}WA_{tst}).$$
Then, since the trace is linear, $X_{tst},W$ are independent of $A_{tst}$, and $A_{tst}$ has mean-0 entries, we see that
$$\mathbb{E}_{A_{tst}}[\operatorname{Tr}((\theta_{tst}X_{tst}-W\theta_{tst}X_{tst})^{T}WA_{tst})]=0.$$
Thus, we have the needed result. $\square$

## C.2 Step 2: Formula For $W_{opt}$

**Proposition 2.** _Let $h=v_{trn}^{T}A_{trn}^{\dagger}$, $k=A_{trn}^{\dagger}u$, $s=(I-A_{trn}A_{trn}^{\dagger})u$, $t=v_{trn}(I-A_{trn}^{\dagger}A_{trn})$, $\beta=1+\theta_{trn}v_{trn}^{T}A_{trn}^{\dagger}u$, $\tau_{1}=\theta_{trn}^{2}\|t\|^{2}\|k\|^{2}+\beta^{2}$, and $\tau_{2}=\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}+\beta^{2}$. If $\beta\neq0$ and $A_{trn}$ has full rank, then_
$$W_{opt}=\begin{cases}\frac{\theta_{trn}\beta}{\tau_{1}}uh+\frac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}&c<1\\ \frac{\theta_{trn}\beta}{\tau_{2}}uh+\frac{\theta_{trn}^{2}\|h\|^{2}}{\tau_{2}}us^{T}&c>1\end{cases}.$$

Proof. Let us first prove the case when $c>1$; here $u$ is arbitrary. Since $A_{trn}$ has full rank and $c>1$ means $M>N_{trn}$, $A_{trn}$ has rank $N_{trn}$. Thus, the rows of $A_{trn}$ span the whole space, so $v_{trn}$ lies in the range of $A_{trn}^{T}$. Finally, since $\beta\neq0$, we can apply Theorem 5 from Meyer (1973). Let us further define
$$p_{2}=-\frac{\theta_{trn}^{2}\|s\|^{2}}{\beta}A_{trn}^{\dagger}h^{T}-\theta_{trn}k\quad\text{and}\quad q_{2}^{T}=-\frac{\theta_{trn}\|h\|^{2}}{\beta}s^{T}-h,$$
and recall that $\tau_{2}=\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}+\beta^{2}$. Then we have from Meyer (1973) that
$$(A_{trn}+\theta_{trn}uv_{trn}^{T})^{\dagger}=A_{trn}^{\dagger}+\frac{\theta_{trn}}{\beta}A_{trn}^{\dagger}h^{T}s^{T}-\frac{\beta}{\tau_{2}}p_{2}q_{2}^{T}.$$
In our case, we only care about $\theta_{trn}uv_{trn}^{T}(A_{trn}+\theta_{trn}uv_{trn}^{T})^{\dagger}$.
Thus, let us multiply this through and see what we get:
$$\begin{aligned}\theta_{trn}uv_{trn}^{T}(A_{trn}+\theta_{trn}uv_{trn}^{T})^{\dagger}&=\theta_{trn}uv_{trn}^{T}\left(A_{trn}^{\dagger}+\frac{\theta_{trn}}{\beta}A_{trn}^{\dagger}h^{T}s^{T}-\frac{\beta}{\tau_{2}}p_{2}q_{2}^{T}\right)\\ &=\theta_{trn}uh+\frac{\theta_{trn}^{2}\|h\|^{2}}{\beta}us^{T}+\frac{\theta_{trn}\beta}{\tau_{2}}uv_{trn}^{T}\left(\frac{\theta_{trn}^{2}\|s\|^{2}}{\beta}A_{trn}^{\dagger}h^{T}+\theta_{trn}k\right)q_{2}^{T}\\ &=\theta_{trn}uh+\frac{\theta_{trn}^{2}\|h\|^{2}}{\beta}us^{T}+\frac{\theta_{trn}^{3}\|s\|^{2}\|h\|^{2}}{\tau_{2}}uq_{2}^{T}+\frac{\theta_{trn}^{2}\beta}{\tau_{2}}uhuq_{2}^{T}.\end{aligned}\tag{11}$$
Then we have that
$$\frac{\theta_{trn}^{3}\|s\|^{2}\|h\|^{2}}{\tau_{2}}uq_{2}^{T}=-\frac{\theta_{trn}^{4}\|s\|^{2}\|h\|^{4}}{\tau_{2}\beta}us^{T}-\frac{\theta_{trn}^{3}\|s\|^{2}\|h\|^{2}}{\tau_{2}}uh$$
and
$$\frac{\theta_{trn}^{2}\beta}{\tau_{2}}uhuq_{2}^{T}=-\frac{\theta_{trn}^{3}\|h\|^{2}}{\tau_{2}}uhus^{T}-\frac{\theta_{trn}^{2}\beta}{\tau_{2}}uhuh.$$
Using that $\beta-1=\theta_{trn}v_{trn}^{T}A_{trn}^{\dagger}u=\theta_{trn}hu$, we get that
$$\frac{\theta_{trn}^{2}\beta}{\tau_{2}}uhuq_{2}^{T}=-\frac{\theta_{trn}^{2}\|h\|^{2}(\beta-1)}{\tau_{2}}us^{T}-\frac{\theta_{trn}\beta(\beta-1)}{\tau_{2}}uh.$$
Substituting back in and collecting like terms, we get that
$$\theta_{trn}uv_{trn}^{T}(A_{trn}+\theta_{trn}uv_{trn}^{T})^{\dagger}=\theta_{trn}u\left(1-\frac{\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}}{\tau_{2}}-\frac{\beta(\beta-1)}{\tau_{2}}\right)h+\theta_{trn}^{2}u\left(\frac{\|h\|^{2}}{\beta}-\frac{\theta_{trn}^{2}\|s\|^{2}\|h\|^{4}}{\tau_{2}\beta}-\frac{\|h\|^{2}(\beta-1)}{\tau_{2}}\right)s^{T}.\tag{12}$$
We can then simplify the constants as follows:
$$1-\frac{\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}}{\tau_{2}}-\frac{\beta(\beta-1)}{\tau_{2}}=\frac{\tau_{2}-\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}-\beta^{2}+\beta}{\tau_{2}}=\frac{\beta}{\tau_{2}}$$
and
$$\frac{\|h\|^{2}}{\beta}-\frac{\theta_{trn}^{2}\|s\|^{2}\|h\|^{4}}{\tau_{2}\beta}-\frac{\|h\|^{2}(\beta-1)}{\tau_{2}}=\frac{\|h\|^{2}\left(\tau_{2}-\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}-\beta(\beta-1)\right)}{\beta\tau_{2}}=\frac{\|h\|^{2}\beta}{\beta\tau_{2}}=\frac{\|h\|^{2}}{\tau_{2}}.\tag{13}$$
This gives us the result for $c>1$. If $c<1$, then we have that $M<N_{trn}$.
Thus, the rank of $A_{trn}$ is $M$, so the range of $A_{trn}$ is the whole space and $u$ lies in the range of $A_{trn}$. In this case, we apply Theorem 3 from Meyer (1973) and define
$$p_{1}=-\frac{\theta_{trn}^{2}\|k\|^{2}}{\beta}t^{T}-k\quad\text{and}\quad q_{1}^{T}=-\frac{\theta_{trn}\|t\|^{2}}{\beta}k^{T}A_{trn}^{\dagger}-h.$$
Then, in this case, we have that
$$(A_{trn}+\theta_{trn}uv_{trn}^{T})^{\dagger}=A_{trn}^{\dagger}+\frac{\theta_{trn}}{\beta}t^{T}k^{T}A_{trn}^{\dagger}-\frac{\beta}{\tau_{1}}p_{1}q_{1}^{T}.$$
We then simplify the resulting expression as before. $\square$

## C.3 Step 3: Expand Into Trace Terms

**Lemma 3.** _If the entries of $A_{tst}$ are independent with mean 0 and variance $1/M$, then we have that $\mathbb{E}_{A_{tst}}[\|WA_{tst}\|^{2}]=\frac{N_{tst}}{M}\|W\|^{2}$._

Proof. To see this, note that $A_{tst}A_{tst}^{T}$ is an $M$ by $M$ matrix whose off-diagonal entries have expected value 0 and whose diagonal entries have expected value $N_{tst}/M$. That is, $\mathbb{E}_{A_{tst}}[A_{tst}A_{tst}^{T}]=\frac{N_{tst}}{M}I_{M}$. Then note that
$$\|WA_{tst}\|^{2}=\operatorname{Tr}(A_{tst}^{T}W^{T}WA_{tst})=\operatorname{Tr}(W^{T}WA_{tst}A_{tst}^{T}).$$
Using the linearity of the trace again, we see that
$$\mathbb{E}_{A_{tst}}[\operatorname{Tr}(W^{T}WA_{tst}A_{tst}^{T})]=\operatorname{Tr}(W^{T}W\mathbb{E}_{A_{tst}}[A_{tst}A_{tst}^{T}])=\frac{N_{tst}}{M}\operatorname{Tr}(W^{T}W)=\frac{N_{tst}}{M}\|W\|_{F}^{2}.$$
$\square$

**Lemma 2.** _If $W$ is the solution to Equation 3, then_
$$X_{tst}-WX_{tst}=\begin{cases}\frac{\beta}{\tau_{1}}X_{tst}&\text{if }c<1\\ \frac{\beta}{\tau_{2}}X_{tst}&\text{if }c>1\end{cases}.$$

Proof. To see this, we have the following calculation for the case $N_{trn}>M$.
$$X_{tst}-WX_{tst}=X_{tst}-\frac{\theta_{trn}\beta}{\tau_{1}}uhuv_{tst}^{T}-\frac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}uv_{tst}^{T}$$
$$=X_{tst}-\frac{\theta_{trn}\beta}{\tau_{1}}uv_{trn}^{T}A_{trn}^{\dagger}uv_{tst}^{T}-\frac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}uv_{tst}^{T}.$$
First, we note that $\beta=1+\theta_{trn}v_{trn}^{T}A_{trn}^{\dagger}u$, so $\theta_{trn}v_{trn}^{T}A_{trn}^{\dagger}u=\beta-1$. Substituting this into the second term, we get that
$$X_{tst}-WX_{tst}=X_{tst}-\frac{\beta(\beta-1)}{\tau_{1}}uv_{tst}^{T}-\frac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}uv_{tst}^{T}.$$
For the third term, we note that $k=A_{trn}^{\dagger}u$, so $k^{T}A_{trn}^{\dagger}u=k^{T}k=\|k\|^{2}$. Substituting this into the expression, we get that
$$X_{tst}-WX_{tst}=X_{tst}-\frac{\beta(\beta-1)}{\tau_{1}}uv_{tst}^{T}-\frac{\theta_{trn}^{2}\|t\|^{2}\|k\|^{2}}{\tau_{1}}uv_{tst}^{T}.$$
Noting that $X_{tst}=uv_{tst}^{T}$, we get that
$$X_{tst}-WX_{tst}=X_{tst}\left(1-\frac{\beta(\beta-1)}{\tau_{1}}-\frac{\theta_{trn}^{2}\|t\|^{2}\|k\|^{2}}{\tau_{1}}\right).$$
To simplify the constant, we note that $\tau_{1}=\theta_{trn}^{2}\|t\|^{2}\|k\|^{2}+\beta^{2}$. Thus, we get that
$$\frac{\tau_{1}+\beta-\beta^{2}-\theta_{trn}^{2}\|t\|^{2}\|k\|^{2}}{\tau_{1}}=\frac{\beta}{\tau_{1}}.$$
For the case when $N_{trn}<M$, we note that the first term of $W$ is the same as in the case $c<1$, modulo replacing $\tau_{1}$ with $\tau_{2}$. Thus, we just need to deal with the last term, which is
$$\frac{\theta_{trn}^{2}\|h\|^{2}}{\tau_{2}}us^{T}uv_{tst}^{T}.$$
Here we note that $s=(I-A_{trn}A_{trn}^{\dagger})u$; in particular, $s$ is the projection of $u$ onto the kernel of $A_{trn}^{T}$. Thus, we have that $u=s+\hat{s}$, where $s\perp\hat{s}$. This then tells us that $s^{T}u=\|s\|^{2}$.
Thus, for this term, we get that it is equal to
$$\frac{\theta_{trn}^{2}\|h\|^{2}\|s\|^{2}}{\tau_{2}}X_{tst}.$$
For this term, we note that $\tau_{2}=\beta^{2}+\theta_{trn}^{2}\|s\|^{2}\|h\|^{2}$. Thus, doing the same simplification as before, we see that for the case when $N_{trn}<M$, we have that
$$X_{tst}-WX_{tst}=\frac{\beta}{\tau_{2}}X_{tst}.$$
$\square$

In light of Lemma 2 and the fact that $\|\theta_{tst}X_{tst}\|_{F}^{2}=\theta_{tst}^{2}$, we see that the expected MSE satisfies
$$\mathbb{E}_{A_{tst}}\left[\frac{\|\theta_{tst}X_{tst}-W(\theta_{tst}X_{tst}+A_{tst})\|_{F}^{2}}{N_{tst}}\right]=\frac{1}{N_{tst}}\frac{\beta^{2}}{\tau_{i}^{2}}\theta_{tst}^{2}+\frac{1}{M}\|W\|_{F}^{2},$$
where $\tau_{i}$ depends on whether $c<1$ or $c>1$. Finally, let us look at the $\|W\|_{F}^{2}$ term.

**Lemma 4.** _If $\beta\neq0$ and $A_{trn}$ has full rank, then we have that if $c<1$,_
$$\|W\|_{F}^{2}=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{1}^{2}}\operatorname{Tr}(h^{T}h)+2\frac{\theta_{trn}^{3}\|t\|^{2}\beta}{\tau_{1}^{2}}\operatorname{Tr}(h^{T}k^{T}A_{trn}^{\dagger})+\frac{\theta_{trn}^{4}\|t\|^{4}}{\tau_{1}^{2}}\operatorname{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger}),$$
_and if $c>1$, then we have that_
$$\|W\|_{F}^{2}=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{2}^{2}}\operatorname{Tr}(h^{T}h)+2\frac{\theta_{trn}^{3}\|h\|^{2}\beta}{\tau_{2}^{2}}\operatorname{Tr}(h^{T}s^{T})+\frac{\theta_{trn}^{4}\|h\|^{4}}{\tau_{2}^{2}}\operatorname{Tr}(ss^{T}).$$

Proof. To deal with the term $\operatorname{Tr}(W^{T}W)$, we again consider whether $N_{trn}$ is bigger or smaller than $M$. First, suppose $N_{trn}>M$. Then
$$\begin{aligned}\|W\|_{F}^{2}&=\operatorname{Tr}(W^{T}W)\\ &=\operatorname{Tr}\left(\left(\frac{\theta_{trn}\beta}{\tau_{1}}uh+\frac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}\right)^{T}\left(\frac{\theta_{trn}\beta}{\tau_{1}}uh+\frac{\theta_{trn}^{2}\|t\|^{2}}{\tau_{1}}uk^{T}A_{trn}^{\dagger}\right)\right)\\ &=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{1}^{2}}\operatorname{Tr}(h^{T}u^{T}uh)+2\frac{\theta_{trn}^{3}\|t\|^{2}\beta}{\tau_{1}^{2}}\operatorname{Tr}(h^{T}u^{T}uk^{T}A_{trn}^{\dagger})+\frac{\theta_{trn}^{4}\|t\|^{4}}{\tau_{1}^{2}}\operatorname{Tr}((A_{trn}^{\dagger})^{T}ku^{T}uk^{T}A_{trn}^{\dagger})\\ &=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{1}^{2}}\operatorname{Tr}(h^{T}h)+2\frac{\theta_{trn}^{3}\|t\|^{2}\beta}{\tau_{1}^{2}}\operatorname{Tr}(h^{T}k^{T}A_{trn}^{\dagger})+\frac{\theta_{trn}^{4}\|t\|^{4}}{\tau_{1}^{2}}\operatorname{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger}),\end{aligned}$$
where the last equality uses the fact that $\|u\|^{2}=1$.
Now suppose $N_{trn}<M$. Then we have the following string of equalities instead:
$$\begin{aligned}\|W\|_{F}^{2}&=\operatorname{Tr}(W^{T}W)\\ &=\operatorname{Tr}\left(\left(\frac{\theta_{trn}\beta}{\tau_{2}}uh+\frac{\theta_{trn}^{2}\|h\|^{2}}{\tau_{2}}us^{T}\right)^{T}\left(\frac{\theta_{trn}\beta}{\tau_{2}}uh+\frac{\theta_{trn}^{2}\|h\|^{2}}{\tau_{2}}us^{T}\right)\right)\\ &=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{2}^{2}}\operatorname{Tr}(h^{T}u^{T}uh)+2\frac{\theta_{trn}^{3}\|h\|^{2}\beta}{\tau_{2}^{2}}\operatorname{Tr}(h^{T}u^{T}us^{T})+\frac{\theta_{trn}^{4}\|h\|^{4}}{\tau_{2}^{2}}\operatorname{Tr}(su^{T}us^{T})\\ &=\frac{\theta_{trn}^{2}\beta^{2}}{\tau_{2}^{2}}\operatorname{Tr}(h^{T}h)+2\frac{\theta_{trn}^{3}\|h\|^{2}\beta}{\tau_{2}^{2}}\operatorname{Tr}(h^{T}s^{T})+\frac{\theta_{trn}^{4}\|h\|^{4}}{\tau_{2}^{2}}\operatorname{Tr}(ss^{T}).\end{aligned}$$
$\square$

## C.4 Step 4: Estimate Using Random Matrix Theory.

**Lemma 5.** _Suppose $A$ is a $p$ by $q$ matrix such that the entries of $A$ are independent and have mean 0, variance $1/q$, and bounded fourth moment. Let $W_{p}=AA^{T}$, let $W_{q}=A^{T}A$, and let $C=p/q$. Suppose $\lambda_{p},\lambda_{q}$ are random eigenvalues of $W_{p},W_{q}$. Then_

1. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}}\right]=\frac{1}{1-C}+o(1)$._
2. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}^{2}}\right]=\frac{1}{(1-C)^{3}}+o(1)$._
3. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}^{3}}\right]=\frac{1+C}{(1-C)^{5}}+o(1)$._
4. _If $p<q$, then $\mathbb{E}\left[\frac{1}{\lambda_{p}^{4}}\right]=\frac{C^{2}+3C+1}{(1-C)^{7}}+o(1)$._
5. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}}\right]=\frac{C^{-1}}{1-C^{-1}}+o(1)$._
6. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}^{2}}\right]=\frac{C^{-2}}{(1-C^{-1})^{3}}+o(1)$._
7. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}^{3}}\right]=\frac{C^{-3}(1+C^{-1})}{(1-C^{-1})^{5}}+o(1)$._
8. _If $p>q$, then $\mathbb{E}\left[\frac{1}{\lambda_{q}^{4}}\right]=\frac{C^{-4}(C^{-2}+3C^{-1}+1)}{(1-C^{-1})^{7}}+o(1)$._

Proof. Suppose $A$ is a $p$ by $q$ matrix such that the entries of $A$ are independent and have mean 0, variance $1/q$, and bounded fourth moment. Then we know that $W_{p}=AA^{T}$ is a $p$ by $p$ Wishart matrix with $c=C$. If we send $p,q$ to infinity such that $p/q$ remains constant, then the eigenvalue distribution $F_{p}$ converges to the Marchenko–Pastur distribution $F$ in probability. From Rao & Edelman (2008), we know there exists a bivariate polynomial $L(m,z)=czm^{2}-(1-c-z)m+1$ whose zero $m(z)$, defined by $L(m(z),z)=0$, satisfies
$$m(z)=\int\frac{1}{\lambda-z}dF(\lambda)=\mathbb{E}_{\lambda}\left[\frac{1}{\lambda-z}\right].$$
For the Marchenko–Pastur distribution, at $z=0$ we get $m(0)=1/(1-c)$.
Thus, if $\lambda_{p}$ is an eigenvalue of $W_{p}$, we have that
$$\mathbb{E}\left[\frac{1}{\lambda_{p}}\right]=\frac{1}{1-c}+o(1).$$
For $\mathbb{E}_{\lambda}\left[\frac{1}{(\lambda-z)^{2}}\right]$ we need to calculate $m^{\prime}(0)$. Using the implicit function theorem, we know that
$$m^{\prime}(z)=-\left(\frac{\partial L}{\partial m}(m(z),z)\right)^{-1}\frac{\partial L}{\partial z}(m(z),z).$$
Here we can see that $\partial L/\partial m=2czm+c+z-1$. Thus, at $(1/(1-c),0)$, this is equal to $c-1$. Also, $\partial L/\partial z=cm^{2}+m$. Again, at $(1/(1-c),0)$ this is equal to $\frac{c}{(1-c)^{2}}+\frac{1}{1-c}=\frac{1}{(1-c)^{2}}$. Thus, we have that
$$m^{\prime}(0)=\frac{1}{(1-c)^{3}}.$$
Similarly, using the implicit function theorem, we can calculate $m^{\prime\prime}(0)$ and $m^{\prime\prime\prime}(0)$. On the other hand, if $q<p$, then $W_{q}:=A^{T}A$ is not a Wishart matrix here, because it is scaled by the wrong constant. However, multiplying it by $1/C$ gives us the correct scaling. Thus, $A^{T}A/C$ is a Wishart matrix with $c=1/C$. Thus, if $\lambda_{q}$ is an eigenvalue of $W_{q}$, we have that
$$\mathbb{E}\left[\frac{1}{\lambda_{q}}\right]=\frac{C^{-1}}{1-C^{-1}}+o(1).$$
We can obtain the rest in a similar manner from the previous results. $\square$

Lemma 6. Suppose $A$ is a $p\times q$ matrix such that the entries of $A$ *are independent and have mean 0, variance $1/q$, and bounded fourth moment. Let $C=p/q$ and let $x\in\mathbb{R}^{p}$ and $y\in\mathbb{R}^{q}$ be unit vectors. Then*

1. $\mathbb{E}[\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}x)]=\begin{cases}\frac{1}{1-C}+o(1)&p<q\\ \frac{q}{p}\frac{C^{-1}}{1-C^{-1}}+o(1)&p>q\end{cases}$.
2. $\mathbb{E}[\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}(AA^{T})^{\dagger}x)]=\begin{cases}\frac{1}{(1-C)^{3}}+o(1)&p<q\\ \frac{q}{p}\frac{C^{-2}}{(1-C^{-1})^{3}}+o(1)&p>q\end{cases}$.
3. $\mathbb{E}[\mathrm{Tr}(y^{T}(A^{T}A)^{\dagger}y)]=\begin{cases}\frac{p}{q}\frac{1}{1-C}+o(1)&p<q\\ \frac{C^{-1}}{1-C^{-1}}+o(1)&p>q\end{cases}$.
4. $\mathbb{E}[\mathrm{Tr}(y^{T}(A^{T}A)^{\dagger}(A^{T}A)^{\dagger}y)]=\begin{cases}\frac{p}{q}\frac{1}{(1-C)^{3}}+o(1)&p<q\\ \frac{C^{-2}}{(1-C^{-1})^{3}}+o(1)&p>q\end{cases}$.

Proof. Let $A=U\Sigma V^{T}$ be the SVD. Then we have that $(AA^{T})^{\dagger}=U(\Sigma^{2})^{\dagger}U^{T}$. Since $A$ is bi-unitarily invariant, $U$ is a uniformly random unitary matrix. Thus, $a=x^{T}U$ is a uniformly random unit vector. Note that with probability 1 the rank of $A$ is full and the non-zero eigenvalues of $A^{T}A$ and $AA^{T}$ are the same.
If $p<q$, then we have that
$$\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}x)=\sum_{i=1}^{p}a_{i}^{2}\frac{1}{\sigma_{i}^{2}}.$$
Using Lemma 5, we have that $\mathbb{E}[1/\sigma_{i}^{2}]=1/(1-C)+o(1)$. Thus, we have that
$$\mathbb{E}[\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}x)]=\sum_{i=1}^{p}\frac{1}{p}\frac{1}{1-C}+o(1).$$
On the other hand, if $p>q$, from Lemma 5 we have that $\mathbb{E}[1/\sigma_{i}^{2}]=C^{-1}/(1-C^{-1})+o(1)$. Thus,
$$\mathbb{E}[\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}x)]=\sum_{i=1}^{q}\frac{1}{p}\frac{C^{-1}}{1-C^{-1}}+o(1).$$
Similarly, if we were instead looking at $\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}(AA^{T})^{\dagger}x)$, we would have a $1/\sigma_{i}^{4}$ term instead. Thus, if $p<q$, we would have that
$$\mathbb{E}[\mathrm{Tr}(x^{T}(AA^{T})^{\dagger}(AA^{T})^{\dagger}x)]=\frac{1}{(1-C)^{3}}+o(1).$$
A similar calculation holds for the others. $\square$

Now we have the following lemma in the main text. However, here, instead of having one big proof, we will separate each term out into its own lemma.

Lemma 7. *If $A_{trn}$ satisfies the standard noise assumptions, then we have that*

1. $\mathbb{E}[\beta]=1+o(1)$ and $\mathrm{Var}(\beta)=\frac{\theta_{trn}^{2}c}{\max(M,N_{trn})|1-c|}+o(1)$.
2. If $c<1$, then $\mathbb{E}[\|h\|^{2}]=\frac{c^{2}}{1-c}+o(1)$ and $\mathrm{Var}(\|h\|^{2})=\frac{c^{3}(2+c)}{N_{trn}(1-c)^{3}}+o(1)$.
3. If $c>1$, then $\mathbb{E}[\|h\|^{2}]=\frac{c}{c-1}+o(1)$ and $\mathrm{Var}(\|h\|^{2})=\frac{c^{2}(2c-1)}{N_{trn}(c-1)^{3}}+o(1)$.
4. $\mathbb{E}[\|k\|^{2}]=\frac{c}{1-c}+o(1)$ and $\mathrm{Var}(\|k\|^{2})=\frac{c^{2}(2+c)}{M(1-c)^{3}}+o(1)$.
5. $\mathbb{E}[\|s\|^{2}]=\frac{c-1}{c}+o(1)$ and $\mathrm{Var}(\|s\|^{2})=\frac{2}{Mc}+o(1)$.
6. $\mathbb{E}[\|t\|^{2}]=1-c+o(1)$ and $\mathrm{Var}(\|t\|^{2})=\frac{2c}{N_{trn}}+o(1)$.

Lemma 11. The $\beta$ term.

Proof. First, we calculate the expected value of $\beta$. To do so, let $A_{trn}=U\Sigma V^{T}$ be the SVD. Then, since $A_{trn}$ is bi-unitarily invariant, we have that $U,V$ are uniformly random unitary matrices. Since $u,v_{trn}$ are fixed, we have that $a:=v_{trn}^{T}V\in\mathbb{R}^{N_{trn}}$ and $b:=U^{T}u\in\mathbb{R}^{M}$ are uniformly random unit vectors. In particular, we have that $\mathbb{E}[a_{i}]=0$, $\mathbb{E}[b_{i}]=0$, $\mathrm{Var}(a_{i})=1/N_{trn}$, and $\mathrm{Var}(b_{i})=1/M$.
Thus, if $\sigma_{i}$ are the singular values of $A_{trn}$, then we have that
$$\beta=1+\theta_{trn}\sum_{i=1}^{\min(M,N_{trn})}\frac{1}{\sigma_{i}}a_{i}b_{i}.$$
Thus, taking the expectation, we get that
$$\mathbb{E}[\beta]=1.$$
On the other hand, let us look at the variance. For the variance, we need to compute $\mathbb{E}[\beta^{2}]$. If we let $T:=\theta_{trn}v_{trn}^{T}A_{trn}^{\dagger}u$, then we have that $\beta^{2}=1+T^{2}+2T$. Thus, again, if we take the expectation, we get that
$$\mathbb{E}[\beta^{2}]=1+\mathbb{E}[T^{2}].$$
Due to the fact that $a,b$ are independent and have mean 0 entries, the cross terms in $\mathbb{E}[T^{2}]$ vanish. Thus, we have that
$$\mathbb{E}[T^{2}]=\theta_{trn}^{2}\mathbb{E}\left[\sum_{i=1}^{\min(M,N_{trn})}\frac{1}{\sigma_{i}^{2}}a_{i}^{2}b_{i}^{2}\right]=\theta_{trn}^{2}\frac{1}{MN_{trn}}\mathbb{E}\left[\sum_{i=1}^{\min(M,N_{trn})}\frac{1}{\sigma_{i}^{2}}\right].$$
Now we need to consider cases based on whether $M>N_{trn}$ or $M<N_{trn}$. To use Lemma 5, we note that $q=M$ and $p=N_{trn}$. Suppose we have that $M>N_{trn}$; then, in this case, we have that $q>p$. Thus, we have that
$$\mathbb{E}\left[\frac{1}{\sigma_{i}^{2}}\right]=\frac{1}{1-C}+o(1),$$
where $C=p/q=N_{trn}/M=1/c$. Thus, we have that
$$\mathbb{E}\left[\frac{1}{\sigma_{i}^{2}}\right]=\frac{1}{1-1/c}+o(1)=\frac{c}{c-1}+o(1).$$
Thus, we have that
$$\mathbb{E}[T^{2}]=\theta_{trn}^{2}\frac{c}{M(c-1)}+o\left(\frac{1}{M}\right).$$
Thus, we have
$$\mathrm{Var}(\beta)=\theta_{trn}^{2}\frac{c}{M(c-1)}+o\left(\frac{1}{M}\right).$$
On the other hand, if $M<N_{trn}$, then we have that $q<p$. Thus, we have that
$$\mathbb{E}\left[\frac{1}{\sigma_{i}^{2}}\right]=\frac{C^{-1}}{1-C^{-1}}+o(1),$$
where $C=p/q=N_{trn}/M=1/c$.
Thus, we have that
$$\mathbb{E}\left[\frac{1}{\sigma_{i}^{2}}\right]=\frac{c}{1-c}+o(1).$$
Thus, we have that
$$\mathbb{E}[T^{2}]=\theta_{trn}^{2}\frac{1}{N_{trn}}\left(\frac{c}{1-c}+o(1)\right)=\frac{\theta_{trn}^{2}c}{N_{trn}(1-c)}+o\left(\frac{1}{N_{trn}}\right).$$
Thus, we have
$$\mathrm{Var}(\beta)=\theta_{trn}^{2}\frac{c}{N_{trn}(1-c)}+o\left(\frac{1}{N_{trn}}\right).$$
$\square$

Lemma 12. The $\|h\|^{2}$ term.

Proof. We want to do a calculation similar to that in Lemma 1. Here we have that
$$\|h\|^{2}=\mathrm{Tr}(h^{T}h)=\mathrm{Tr}((A_{trn}^{\dagger})^{T}v_{trn}v_{trn}^{T}A_{trn}^{\dagger})=\mathrm{Tr}(v_{trn}^{T}A_{trn}^{\dagger}(A_{trn}^{\dagger})^{T}v_{trn})=\mathrm{Tr}(v_{trn}^{T}(A_{trn}^{T}A_{trn})^{\dagger}v_{trn}).$$
To use Lemma 6, we note that $A=A_{trn}^{T}$, $q=M$, $p=N_{trn}$. Let us now suppose that $M<N_{trn}$. Then, taking the expectation, we see that
$$\mathbb{E}[\|h\|^{2}]=\frac{M}{N_{trn}}\left(\frac{c}{1-c}+o(1)\right)=\frac{c^{2}}{1-c}+o(1).$$
For the expectation of $\|h\|^{4}$, let $A_{trn}=U\Sigma V^{T}$ be the SVD. Then $h=v_{trn}^{T}V\Sigma^{\dagger}U^{T}$. Let $a=v_{trn}^{T}V$ and note that $a$ is a uniformly random unit vector.
Thus, we have that
$$\|h\|^{2}=\sum_{i=1}^{M}\frac{1}{\sigma_{i}^{2}}a_{i}^{2}.$$
For the expectation of $\|h\|^{4}$, we note that
$$\|h\|^{4}=\sum_{i=1}^{M}\sum_{j=1}^{M}\frac{1}{\sigma_{i}^{2}\sigma_{j}^{2}}a_{i}^{2}a_{j}^{2}=\sum_{i=1}^{M}\frac{1}{\sigma_{i}^{4}}a_{i}^{4}+\sum_{i\neq j}\frac{1}{\sigma_{i}^{2}}\frac{1}{\sigma_{j}^{2}}a_{i}^{2}a_{j}^{2}.$$
Taking the expectation of the first term, we get
$$\sum_{i=1}^{M}\mathbb{E}\left[\frac{1}{\sigma_{i}^{4}}\right]\mathbb{E}[a_{i}^{4}]=\frac{3M}{N_{trn}(N_{trn}+2)}\left(\frac{c^{2}}{(1-c)^{3}}+o(1)\right)=3\frac{c^{3}}{N_{trn}(1-c)^{3}}+o(1).$$
Taking the expectation of the second term, we get
$$M(M-1)\mathbb{E}\left[\frac{1}{\sigma_{j}^{2}}\right]^{2}\mathbb{E}[a_{i}^{2}a_{j}^{2}]=M(M-1)\frac{1}{N_{trn}(N_{trn}+2)}\left(\frac{c^{2}}{(1-c)^{2}}+o(1)\right)=\frac{c^{4}}{(1-c)^{2}}-\frac{c^{3}}{N_{trn}(1-c)^{2}}+o(1).$$
Thus, we have that
$$\mathbb{E}[\|h\|^{4}]=\frac{c^{4}}{(1-c)^{2}}+\frac{c^{3}(2+c)}{N_{trn}(1-c)^{3}}+o(1).$$
Thus, the variance is
$$\mathrm{Var}(\|h\|^{2})=\frac{c^{3}(2+c)}{N_{trn}(1-c)^{3}}+o(1).$$
For $M>N_{trn}$, we instead have that
$$\mathbb{E}[\|h\|^{2}]=\frac{N_{trn}}{N_{trn}}\left(\frac{c}{c-1}+o(1)\right)=\frac{c}{c-1}+o(1).$$
For the expectation of $\|h\|^{4}$, we note that
$$\|h\|^{4}=\sum_{i=1}^{N_{trn}}\sum_{j=1}^{N_{trn}}\frac{1}{\sigma_{i}^{2}\sigma_{j}^{2}}a_{i}^{2}a_{j}^{2}=\sum_{i=1}^{N_{trn}}\frac{1}{\sigma_{i}^{4}}a_{i}^{4}+\sum_{i\neq j}\frac{1}{\sigma_{i}^{2}}\frac{1}{\sigma_{j}^{2}}a_{i}^{2}a_{j}^{2}.$$
Taking the expectation of the first term, we get
$$\sum_{i=1}^{N_{trn}}\mathbb{E}\left[\frac{1}{\sigma_{i}^{4}}\right]\mathbb{E}[a_{i}^{4}]=\frac{3N_{trn}}{N_{trn}^{2}}\left(\frac{c^{3}}{(c-1)^{3}}+o(1)\right)=3\frac{c^{3}}{N_{trn}(c-1)^{3}}+o(1).$$
Taking the expectation of the second term, we get
$$N_{trn}(N_{trn}-1)\mathbb{E}\left[\frac{1}{\sigma_{i}^{2}}\right]^{2}\mathbb{E}[a_{i}^{2}]^{2}=N_{trn}(N_{trn}-1)\frac{1}{N_{trn}^{2}}\left(\frac{c^{2}}{(c-1)^{2}}+o(1)\right)=\frac{c^{2}}{(c-1)^{2}}-\frac{c^{2}}{N_{trn}(c-1)^{2}}+o(1).$$
Thus, we have that
$$\mathbb{E}[\|h\|^{4}]=\frac{c^{2}}{(c-1)^{2}}+3\frac{c^{3}}{N_{trn}(c-1)^{3}}-\frac{c^{2}}{N_{trn}(c-1)^{2}}+o(1)=\frac{c^{2}}{(c-1)^{2}}+\frac{c^{2}(2c-1)}{N_{trn}(c-1)^{3}}+o(1).$$
Thus, the variance is
$$\mathrm{Var}(\|h\|^{2})=\frac{c^{2}(2c-1)}{N_{trn}(c-1)^{3}}+o(1).$$
$\square$

Lemma 13. The $\|k\|^{2}$ term.

Proof. First, note that $k$ only appears in the formula when $c<1$, so we can focus on this case. As with $h$, we have that
$$\|k\|^{2}=\mathrm{Tr}(u^{T}(A_{trn}^{\dagger})^{T}A_{trn}^{\dagger}u)=\mathrm{Tr}(u^{T}(A_{trn}A_{trn}^{T})^{\dagger}u).$$
Again using Lemma 6, with $q=M$, $p=N_{trn}$, $A=A_{trn}$, $y=u$. Thus, since we have $q=M<N_{trn}=p$, we get that
$$\mathbb{E}[\|k\|^{2}]=\frac{c}{1-c}+o(1).$$
To calculate the variance, we need to calculate the expectation of $\|k\|^{4}$. Here we again let $A=U\Sigma V^{T}$ be the SVD and let $b:=U^{T}u$. Then we have that
$$\|k\|^{2}=\sum_{i=1}^{M}\frac{1}{\sigma_{i}^{2}}b_{i}^{2}.$$
Thus, we see that
$$\|k\|^{4}=\sum_{i=1}^{M}\frac{1}{\sigma_{i}^{4}}b_{i}^{4}+\sum_{i\neq j}\frac{1}{\sigma_{i}^{2}}\frac{1}{\sigma_{j}^{2}}b_{i}^{2}b_{j}^{2}.$$
Taking the expectation of the first term, we get
$$3\frac{M}{M^{2}}\frac{c^{2}}{(1-c)^{3}}+o(1)=\frac{3c^{2}}{M(1-c)^{3}}+o(1).$$
Taking the expectation of the second term, we get
$$\frac{M(M-1)}{M^{2}}\frac{c^{2}}{(1-c)^{2}}+o(1)=\frac{c^{2}}{(1-c)^{2}}-\frac{c^{2}}{M(1-c)^{2}}+o(1).$$
Thus, we have that
$$\mathbb{E}[\|k\|^{4}]=\frac{c^{2}}{(1-c)^{2}}+\frac{c^{2}(2+c)}{M(1-c)^{3}}+o(1).$$
Thus, we have that
$$\mathrm{Var}(\|k\|^{2})=\frac{c^{2}(2+c)}{M(1-c)^{3}}+o(1).$$
$\square$

Lemma 14. The $\|s\|^{2}$ term.

Proof. First, we note that $s$ only appears when $M>N_{trn}$.
Thus, we only need to deal with that case. For this term, we note that $(I-A_{trn}A_{trn}^{\dagger})$ is a projection matrix onto a uniformly random $(M-N_{trn})$-dimensional subspace. Here we again let $A=U\Sigma V^{T}$ be the SVD and let $b:=U^{T}u$. Then
$$\mathbb{E}[\|s\|^{2}]=\mathbb{E}[u^{T}u-u^{T}A_{trn}A_{trn}^{\dagger}u]=\mathbb{E}\left[1-b^{T}\begin{bmatrix}I_{N_{trn}}&0\\0&0\end{bmatrix}b\right]=1-\sum_{i=1}^{N_{trn}}\frac{1}{M}+o(1)=1-\frac{1}{c}+o(1).$$
Similarly, we have that
$$\|s\|^{4}=\left(1-\sum_{i=1}^{N_{trn}}b_{i}^{2}\right)^{2}=1+\left(\sum_{i=1}^{N_{trn}}b_{i}^{2}\right)^{2}-2\sum_{i=1}^{N_{trn}}b_{i}^{2}=1+\sum_{i=1}^{N_{trn}}b_{i}^{4}+\sum_{i\neq j}^{N_{trn}}b_{i}^{2}b_{j}^{2}-2\sum_{i=1}^{N_{trn}}b_{i}^{2}.$$
Taking the expectation, we get that
$$\mathbb{E}[\|s\|^{4}]=1+3\sum_{i=1}^{N_{trn}}\frac{1}{M^{2}}+\sum_{i\neq j}^{N_{trn}}\frac{1}{M^{2}}-2\sum_{i=1}^{N_{trn}}\frac{1}{M}+o(1)=1+\frac{3}{cM}+\frac{N_{trn}(N_{trn}-1)}{M^{2}}-\frac{2}{c}+o(1)$$
$$=1+\frac{3}{cM}+\frac{1}{c^{2}}-\frac{1}{cM}-\frac{2}{c}+o(1)=\left(1-\frac{1}{c}\right)^{2}+\frac{2}{cM}+o(1).$$
Thus, we have that
$$\mathrm{Var}(\|s\|^{2})=\frac{2}{cM}+o(1).$$
$\square$

Lemma 15. The $\|t\|^{2}$ term.

Proof. First, we note that $t$ only appears when $M<N_{trn}$. Thus, we only need to deal with that case. For this term, we note that $(I-A_{trn}^{\dagger}A_{trn})$ is a projection matrix onto a uniformly random $(N_{trn}-M)$-dimensional subspace.
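As an aside, this projection fact is easy to confirm numerically; a minimal sketch of ours (Gaussian entries, arbitrary dimensions), checking that for a full-rank $M\times N$ matrix with $M<N$, $I-A^{\dagger}A$ projects onto the $(N-M)$-dimensional null space, so a random unit vector keeps a $1-M/N$ fraction of its squared norm on average:

```python
import numpy as np

# E[||v^T (I - pinv(A) A)||^2] should be 1 - M/N = 1 - c for random unit v.
rng = np.random.default_rng(0)
M, N = 50, 200
c = M / N
vals = []
for _ in range(100):
    A = rng.standard_normal((M, N))            # full rank with probability 1
    P = np.eye(N) - np.linalg.pinv(A) @ A      # projector onto null space of A
    v = rng.standard_normal(N)
    v /= np.linalg.norm(v)                     # uniformly random unit vector
    vals.append(np.linalg.norm(v @ P) ** 2)
print(np.mean(vals), 1 - c)                    # both close to 0.75
```

The same check, with $I-AA^{\dagger}$ and $M>N_{trn}$, applies to the $\|s\|^{2}$ term above.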
Then, similar to $\|s\|^{2}$, we have that
$$\mathbb{E}[\|t\|^{2}]=\mathbb{E}[v_{trn}^{T}v_{trn}-v_{trn}^{T}A_{trn}^{\dagger}A_{trn}v_{trn}]=\mathbb{E}\left[1-a^{T}\begin{bmatrix}I_{M}&0\\0&0\end{bmatrix}a\right]=1-\sum_{i=1}^{M}\frac{1}{N_{trn}}+o(1)=1-c+o(1).$$
Similarly, we have that
$$\|t\|^{4}=\left(1-\sum_{i=1}^{M}a_{i}^{2}\right)^{2}=1+\left(\sum_{i=1}^{M}a_{i}^{2}\right)^{2}-2\sum_{i=1}^{M}a_{i}^{2}=1+\sum_{i=1}^{M}a_{i}^{4}+\sum_{i\neq j}^{M}a_{i}^{2}a_{j}^{2}-2\sum_{i=1}^{M}a_{i}^{2}.$$
Taking the expectation, we get that
$$\mathbb{E}[\|t\|^{4}]=1+3\sum_{i=1}^{M}\frac{1}{N_{trn}^{2}}+\sum_{i\neq j}^{M}\frac{1}{N_{trn}^{2}}-2\sum_{i=1}^{M}\frac{1}{N_{trn}}+o(1)=1+\frac{3c}{N_{trn}}+\frac{M(M-1)}{N_{trn}^{2}}-2c+o(1)$$
$$=1+\frac{3c}{N_{trn}}+c^{2}-\frac{c}{N_{trn}}-2c+o(1)=(1-c)^{2}+\frac{2c}{N_{trn}}+o(1).$$
Thus, we have that
$$\mathrm{Var}(\|t\|^{2})=\frac{2c}{N_{trn}}+o(1).$$
$\square$

Now, we could just use the fact that $|\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]|\leq\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}$. Another way to do this is via big O in probability, which is defined as follows:

Definition 3. We say that a sequence of random variables $X_{n}$ is $O_{P}(a_{n})$ if, for all $\epsilon>0$, there exist a constant $L$ and an $N$ such that for all $n\geq N$, we have that $\Pr[|X_{n}|>La_{n}]<\epsilon$.

Next, we turn to the trace terms.

Lemma 8. *Under standard noise assumptions, we have that*
$$\mathbb{E}[\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger})]=0$$
and
$$\mathrm{Var}(\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger}))=\chi_{3}(c)/N_{trn}+o(1),$$
where $\chi_{3}(c)=\mathbb{E}[1/\lambda^{3}]$, $\lambda$ is an eigenvalue of $AA^{T}$, and $A$ *is as in Lemma 6.*

Proof. First, we note that
$$\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger})=\mathrm{Tr}((A_{trn}^{\dagger})^{T}v_{trn}u^{T}(A_{trn}^{\dagger})^{T}A_{trn}^{\dagger})=u^{T}(A_{trn}^{\dagger})^{T}A_{trn}^{\dagger}(A_{trn}^{\dagger})^{T}v_{trn}.$$
Again, let $A_{trn}=U\Sigma V^{T}$ be the SVD.
Then the middle factor, which depends on $A_{trn}$, simplifies to
$$(A_{trn}^{\dagger})^{T}A_{trn}^{\dagger}(A_{trn}^{\dagger})^{T}=U(\Sigma^{\dagger})^{T}\Sigma^{\dagger}(\Sigma^{\dagger})^{T}V^{T}.$$
Thus, again letting $b=U^{T}u$ and $a=V^{T}v_{trn}$, we see that
$$\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger})=\sum_{i=1}^{M}a_{i}b_{i}\frac{1}{\sigma_{i}^{3}}.$$
Now, if we take the expectation, since $a,b$ are independent and mean 0, we see that
$$\mathbb{E}_{A_{trn}}[\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger})]=0.$$
Let us also compute the variance. Here we have that
$$\mathbb{E}[\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger})^{2}]=\sum_{i=1}^{M}\mathbb{E}\left[\frac{1}{\sigma_{i}^{6}}\right]\mathbb{E}[a_{i}^{2}]\mathbb{E}[b_{i}^{2}]+0.$$
For the Marchenko-Pastur distribution, we have that the expectation of $1/\lambda^{3}$ is $\chi_{3}(c)$, where $\chi_{3}$ is some function. Thus, we have that
$$\mathbb{E}[\mathrm{Tr}(h^{T}k^{T}A_{trn}^{\dagger})^{2}]=\frac{1}{N_{trn}}\chi_{3}(c)+o(1).$$
$\square$

Lemma 9. *Under standard noise assumptions, we have that*
$$\mathbb{E}[\mathrm{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger})]=\frac{c^{2}}{(1-c)^{3}}+o(1)$$
and
$$\mathrm{Var}(\mathrm{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger}))=\frac{3}{M}\chi_{4}(c)-\frac{1}{M}\frac{c^{4}}{(1-c)^{6}}+o(1),$$
where $\chi_{4}(c)=\mathbb{E}[1/\lambda^{4}]$, $\lambda$ is an eigenvalue of $AA^{T}$, and $A$ *is as in Lemma 6.*

Proof. Using Lemma 6, we see that
$$\mathbb{E}_{A_{trn}}[\mathrm{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger})]=\frac{c^{2}}{(1-c)^{3}}+o(1).$$
Similar to the proofs before, we have that
$$\mathbb{E}_{A_{trn}}[\mathrm{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger})^{2}]=\sum_{i=1}^{M}\frac{3}{M^{2}}\chi_{4}(c)+\sum_{i\neq j}\frac{1}{M^{2}}\frac{c^{4}}{(1-c)^{6}}+o(1),$$
where $\chi_{4}(c)=\mathbb{E}[1/\lambda^{4}]$ for the Marchenko-Pastur distribution.
Thus, we have that
$$\mathrm{Var}(\mathrm{Tr}((A_{trn}^{\dagger})^{T}kk^{T}A_{trn}^{\dagger}))=\frac{3}{M}\chi_{4}(c)-\frac{1}{M}\frac{c^{4}}{(1-c)^{6}}+o(1).$$
$\square$

Lemma 10. *Under the same assumptions as Proposition 2, we have that* $\mathrm{Tr}(h^{T}s^{T})=0$.

Proof. Here we note that $h^{T}=(A_{trn}^{\dagger})^{T}v_{trn}$ and $s^{T}=u^{T}(I-A_{trn}A_{trn}^{\dagger})^{T}$. Thus, we have that
$$\mathrm{Tr}(h^{T}s^{T})=\mathrm{Tr}((A_{trn}^{\dagger})^{T}v_{trn}u^{T}-(A_{trn}^{\dagger})^{T}v_{trn}u^{T}(A_{trn}A_{trn}^{\dagger})^{T})$$
$$=\mathrm{Tr}(v_{trn}^{T}A_{trn}^{\dagger}u)-\mathrm{Tr}(u^{T}(A_{trn}A_{trn}^{\dagger})^{T}(A_{trn}^{\dagger})^{T}v_{trn})$$
$$=\mathrm{Tr}(v_{trn}^{T}A_{trn}^{\dagger}u)-\mathrm{Tr}(v_{trn}^{T}A_{trn}^{\dagger}A_{trn}A_{trn}^{\dagger}u)$$
$$=\mathrm{Tr}(v_{trn}^{T}A_{trn}^{\dagger}u)-\mathrm{Tr}(v_{trn}^{T}A_{trn}^{\dagger}u)=0.$$
$\square$

We can see that if we take the expectation of $\|W\|$ over $A_{trn}$, since the variance of each of the terms is small, we can approximate $\mathbb{E}[XY]$ with $\mathbb{E}[X]\mathbb{E}[Y]$. Then we get the following. If $M<N_{trn}$, we have that
$$\mathbb{E}_{A_{trn}}[\|W\|^{2}]=\frac{\theta_{trn}^{2}}{(1+\theta_{trn}^{2}c)^{2}}\frac{c^{2}}{1-c}+0+\frac{\theta_{trn}^{4}(1-c)^{2}}{(1+\theta_{trn}^{2}c)^{2}}\frac{c^{2}}{(1-c)^{3}}=c^{2}\frac{\theta_{trn}^{2}+\theta_{trn}^{4}}{(1+\theta_{trn}^{2}c)^{2}(1-c)}.$$
On the other hand, if $M>N_{trn}$, we have that
$$\mathbb{E}_{A_{trn}}[\|W\|^{2}]=\frac{\theta_{trn}^{2}}{(1+\theta_{trn}^{2})^{2}}\frac{c}{c-1}+\frac{\theta_{trn}^{4}}{(1+\theta_{trn}^{2})^{2}}\frac{c^{2}}{(c-1)^{2}}\frac{c-1}{c}=\frac{c}{c-1}\frac{\theta_{trn}^{2}(1+\theta_{trn}^{2})}{(1+\theta_{trn}^{2})^{2}}=\frac{\theta_{trn}^{2}}{1+\theta_{trn}^{2}}\frac{c}{c-1}.$$
Now, combining everything together, we get that
$$\mathbb{E}_{A_{trn},A_{tst}}\left[\frac{\|\theta_{tst}X_{tst}-W(\theta_{tst}X_{tst}+A_{tst})\|_{F}^{2}}{N_{tst}}\right]=\begin{cases}\frac{\theta_{tst}^{2}}{N_{tst}(1+\theta_{trn}^{2}c)^{2}}+\frac{1}{M}\frac{c^{2}(\theta_{trn}^{2}+\theta_{trn}^{4})}{(1+\theta_{trn}^{2}c)^{2}(1-c)}&c<1\\ \frac{\theta_{tst}^{2}}{N_{tst}(1+\theta_{trn}^{2})^{2}}+\frac{1}{M}\frac{\theta_{trn}^{2}}{1+\theta_{trn}^{2}}\frac{c}{c-1}&c>1\end{cases}.$$

## C.5 Proof Of Theorem

The main text shows how to put all of the pieces together to prove the main theorem, so we do not replicate that here.

## C.6 Formula For $\hat{\theta}_{opt-trn}$

As stated in the main text, we only need to take the derivative, so we do not present that calculation here, as it is fairly straightforward.

## D Generalizations

In this section, we discuss some possible generalizations of the method.

## D.1 Higher Rank

Let us present some heuristics for the higher-rank formula. To do so, we shall need some notation. Let $X_{trn}=\sum_{i=1}^{r}\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}$. Let $A$ be the noise matrix. Then, for $1\leq j\leq r$, define
$$A_{j}=\left(A+\sum_{i=1}^{j-1}\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}\right).$$
We shall now make some assumptions. Specifically, we assume that $u_{j}$, $v_{j}^{trn}$, and $A_{j}$ are all such that for $i_{1}\neq i_{2}$ and for all $j$ we have that
$$\mathbb{E}[u_{i_{1}}^{T}A_{j}A_{j}^{\dagger}u_{i_{2}}]=\mathbb{E}[(v_{i_{1}}^{trn})^{T}A_{j}^{\dagger}A_{j}v_{i_{2}}^{trn}]=0.$$
Additionally, we assume that for all $i_{1},i_{2},j$ we have that $\mathbb{E}[(v_{i_{1}}^{trn})^{T}A_{j}^{\dagger}u_{i_{2}}]=0$. We also assume that the variance of these terms goes to 0 as $N_{trn},M$ go to infinity.

Lemma 16. *With the given assumptions, we have that for all* $i<j$,
$$\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}A_{j}^{\dagger}\approx\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}A_{j-1}^{\dagger}\approx\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}A_{j-2}^{\dagger}\approx\ldots\approx\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}A_{i+1}^{\dagger}.$$
Proof. Write $A_{j}=A_{j-1}+\sigma_{j}^{trn}u_{j}(v_{j}^{trn})^{T}$ and then use Meyer (1973) to expand the pseudoinverse of $A_{j}$. When we do this, we see that, due to the assumptions, all terms except $\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}A_{j-1}^{\dagger}$ are small. $\square$
Define $h_{j}=(v_{j}^{trn})^{T}A_{j}^{\dagger}$, $k_{j}=\sigma_{j}^{trn}A_{j}^{\dagger}u_{j}$, $t_{j}=(v_{j}^{trn})^{T}(I-A_{j}^{\dagger}A_{j})$, $s_{j}=\sigma_{j}^{trn}(I-A_{j}A_{j}^{\dagger})u_{j}$, $\beta_{j}=1+\sigma_{j}^{trn}(v_{j}^{trn})^{T}A_{j}^{\dagger}u_{j}$, $\tau_{1}^{(j)}=\|t_{j}\|^{2}\|k_{j}\|^{2}+\beta_{j}^{2}$, $\tau_{2}^{(j)}=\|s_{j}\|^{2}\|h_{j}\|^{2}+\beta_{j}^{2}$, and similarly $p_{1}^{(j)}$, $p_{2}^{(j)}$, $q_{1}^{(j)}$, and $q_{2}^{(j)}$. Now, we can write $X_{trn}+A=\sigma_{r}^{trn}u_{r}(v_{r}^{trn})^{T}+A_{r}$. Then we have that
$$W=X_{trn}(\sigma_{r}^{trn}u_{r}(v_{r}^{trn})^{T}+A_{r})^{\dagger}=\sum_{i=1}^{r}\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}(\sigma_{r}^{trn}u_{r}(v_{r}^{trn})^{T}+A_{r})^{\dagger}.$$
Expanding and using the lemma, we get that
$$W\approx\sum_{i=1}^{r}\sigma_{i}^{trn}u_{i}(v_{i}^{trn})^{T}A_{i+1}^{\dagger}=\begin{cases}\sum_{i=1}^{r}\frac{\sigma_{i}^{trn}\beta_{i}}{\tau_{1}^{(i)}}u_{i}h_{i}+\frac{(\sigma_{i}^{trn})^{2}\|t_{i}\|^{2}}{\tau_{1}^{(i)}}u_{i}k_{i}^{T}A_{i}^{\dagger}&c<1\\ \sum_{i=1}^{r}\frac{\sigma_{i}^{trn}\beta_{i}}{\tau_{2}^{(i)}}u_{i}h_{i}+\frac{(\sigma_{i}^{trn})^{2}\|h_{i}\|^{2}}{\tau_{2}^{(i)}}u_{i}s_{i}^{T}&c>1\end{cases},$$
where the second equality comes from the rank-1 results. Now that we have an approximation for $W$ (given our assumptions), we can approximate the variance and bias terms again. Let $W_{i}$ denote the $i$th factor (corresponding to $u_{i}$) of $W$. First, for the bias, due to the orthogonality of the $u$'s, we get that
$$\|X_{tst}-WX_{tst}\|_{F}^{2}=\sum_{i=1}^{r}\left\|\sigma_{i}^{tst}u_{i}(v_{i}^{tst})^{T}-W_{i}\sum_{j=1}^{r}\sigma_{j}^{tst}u_{j}(v_{j}^{tst})^{T}\right\|_{F}^{2}.$$
Again, using our assumptions, we see that the terms in the $j$ summation drop out besides when $j=i$.
Then, again using our rank-1 result, we get that
$$\|X_{tst}-WX_{tst}\|_{F}^{2}=\sum_{i=1}^{r}\left(\frac{\beta_{i}}{\tau^{(i)}}\sigma_{i}^{tst}\right)^{2},$$
where $\tau^{(i)}$ is $\tau_{1}^{(i)}$ or $\tau_{2}^{(i)}$, depending on whether $c<1$ or $c>1$. For the variance, we again estimate the norm of $W$ by expanding the trace. Here we see that the cross terms are 0 due to factors of $u_{i_{1}}^{T}u_{i_{2}}$. For the diagonal terms, we again use the rank-1 results and get that, if $c<1$,
$$\|W\|_{F}^{2}=\sum_{i=1}^{r}\frac{(\sigma_{i}^{trn})^{2}\beta_{i}^{2}}{(\tau_{1}^{(i)})^{2}}\mathrm{Tr}(h_{i}^{T}h_{i})+2\frac{(\sigma_{i}^{trn})^{3}\|t_{i}\|^{2}\beta_{i}}{(\tau_{1}^{(i)})^{2}}\mathrm{Tr}(h_{i}^{T}k_{i}^{T}A_{i}^{\dagger})+\frac{(\sigma_{i}^{trn})^{4}\|t_{i}\|^{4}}{(\tau_{1}^{(i)})^{2}}\mathrm{Tr}((A_{i}^{\dagger})^{T}k_{i}k_{i}^{T}A_{i}^{\dagger}),$$
and if $c>1$, then we have that
$$\|W\|_{F}^{2}=\sum_{i=1}^{r}\frac{(\sigma_{i}^{trn})^{2}\beta_{i}^{2}}{(\tau_{2}^{(i)})^{2}}\mathrm{Tr}(h_{i}^{T}h_{i})+2\frac{(\sigma_{i}^{trn})^{3}\|h_{i}\|^{2}\beta_{i}}{(\tau_{2}^{(i)})^{2}}\mathrm{Tr}(h_{i}^{T}s_{i}^{T})+\frac{(\sigma_{i}^{trn})^{4}\|h_{i}\|^{4}}{(\tau_{2}^{(i)})^{2}}\mathrm{Tr}(s_{i}s_{i}^{T}).$$
The final step would be to estimate each of these terms using random matrix theory. Unfortunately, the $A_{j}$ may not satisfy all of the needed conditions. However, we know that $A_{j}$ is a perturbation of $A$, and $A$ satisfies all of the needed conditions. Hence, if the perturbation is small, we can replace $A_{j}$ with $A$ and hopefully not incur too much cost. Note that this is also the reason why the previous assumptions might be reasonable. If we replace the $A_{j}$'s with $A$ and use our estimates from the rank-1 result, we then get our estimate for the generalization error for general rank-$r$ data.
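As a side note, the rank-1 estimate derived in Appendix C is cheap to evaluate directly; a minimal sketch (the function name and argument order are ours, not from the paper's notebook):

```python
# Rank-1 closed-form estimate of the expected test MSE, following the
# c < 1 and c > 1 cases derived in Appendix C.
def rank1_risk(theta_trn, theta_tst, c, N_tst, M):
    if c < 1:
        bias = theta_tst**2 / (N_tst * (1 + theta_trn**2 * c) ** 2)
        var = (c**2 * (theta_trn**2 + theta_trn**4)
               / (M * (1 + theta_trn**2 * c) ** 2 * (1 - c)))
    else:
        bias = theta_tst**2 / (N_tst * (1 + theta_trn**2) ** 2)
        var = c * theta_trn**2 / (M * (1 + theta_trn**2) * (c - 1))
    return bias + var

print(rank1_risk(1.0, 1.0, 0.5, 100, 100))   # -> 2/225, about 0.00889
```

The rank-$r$ estimate below is then a sum of such terms with $\theta$ replaced by $\theta\,\sigma_{i}$.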
If $c<1$, we have that
$$R(\theta_{trn},\theta_{tst},c,\Sigma_{trn},\Sigma_{tst})=\sum_{i=1}^{r}\frac{(\theta_{tst}\sigma_{i}^{tst})^{2}}{N_{tst}(1+(\theta_{trn}\sigma_{i}^{trn})^{2}c)^{2}}+\frac{c^{2}((\theta_{trn}\sigma_{i}^{trn})^{2}+(\theta_{trn}\sigma_{i}^{trn})^{4})}{M(1+(\theta_{trn}\sigma_{i}^{trn})^{2}c)^{2}(1-c)}+o(1)\tag{14}$$
and if $c>1$, we have that
$$R(\theta_{trn},\theta_{tst},c,\Sigma_{trn},\Sigma_{tst})=\sum_{i=1}^{r}\frac{(\theta_{tst}\sigma_{i}^{tst})^{2}}{N_{tst}(1+(\theta_{trn}\sigma_{i}^{trn})^{2})^{2}}+\frac{c(\theta_{trn}\sigma_{i}^{trn})^{2}}{M(1+(\theta_{trn}\sigma_{i}^{trn})^{2})(c-1)}+o(1).\tag{15}$$
In the experimental section, we see that for small values of $r$ and $c$ bounded away from 1, this appears to be a good estimate of the generalization error.

## E Experiments

Please see the accompanying notebook for the code to produce the data for all of the figures.

## E.1 Low Snr And High Snr Data

For low SNR data, we sample the $\theta$ times singular values from a squared standard Gaussian. We do this independently for all $2r$ singular values. We call this the low SNR regime because $\theta$ is not being scaled with the number of data points; hence, as $N_{trn},N_{tst}\to\infty$, the SNR goes to 0. For the high SNR data, we sample $\theta$ times singular values from a squared Gaussian and then multiply by $\sqrt{N_{trn}}$, $\sqrt{N_{tst}}$. Hence, here the SNR does not go to 0 as $N_{trn},N_{tst}\to\infty$.

## F Generalization Error Versus Training Noise Level Plots

## F.1 More Tests For Rank 1

Here we provide more examples of values of $c$ and show how our theoretical formula matches the experimental performance exactly. Each empirical point is the average over 50 trials. These were run on a laptop with 8 GB of RAM and an i3 processor. The average time to produce any of these plots is about 10 to 30 minutes.

![33_image_0.png](33_image_0.png)

Figure 8: Figures (a) - (e) showing the accuracy of the formula for the expected mean squared error for $c=0.1,0.5,0.9,2,10$ for a fixed value of $\hat{\theta}_{tst}$.
Figure (f) empirically verifies the existence of a regime where training on pure noise is optimal. Here, the red and green lines represent $\mathbb{E}[\hat{\theta}_{tst}^{2}]$ and $\mathbb{E}[\hat{\theta}_{trn}^{2}]$, respectively. Each empirical data point is averaged over at least 50 trials.

## F.2 Rank 2 Data

Let us now demonstrate that the double-descent-shaped curve exists beyond rank-1 data and linear autoencoders. We will do this by gradually making the setup more complicated until we can no longer recreate this phenomenon. First, we consider rank-2 data of the following form. Let $W_{data}$ be some fixed matrix; then our data is generated by
$$X=\mathrm{ReLU}(W_{data}\,\mathrm{ReLU}(Uv^{T})),$$
where a different $v$ is sampled for the training and test data. The results for this can be seen in Figure 9. As we can see from the figure, we have the exact same qualitative trend for $c$ that we saw before. That is, as $c$ goes from 0 to 1, we have that $\hat{\theta}_{trn}$ goes from $\hat{\theta}_{tst}$ to 0, and then as $c\to\infty$, we have that $\hat{\theta}_{trn}$ goes to infinity as well.

## F.3 Mnist Data

We now look at the linear network with MNIST data.

![34_image_0.png](34_image_0.png)

## F.3.1 Non-Linear Network

Here, we trained each network for 1500 epochs. During each epoch, we computed a gradient using the whole data set. We used Adam as the optimizer, with the code written in PyTorch. Each data point was generated over 20 trials. These experiments take a little more time to run, and the ones with bigger amounts of data can take up to 5 hours on a Google Cloud instance with 16 GB of RAM. Here we used a Tesla P4 GPU. LRL is a model with a ReLU at the end of the first layer only.

![35_image_0.png](35_image_0.png)

Figure 11: MNIST - LRL model
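The sampling procedure described in Appendix E.1 can be sketched as follows (the function name is ours, not from the released notebook):

```python
import numpy as np

def sample_theta_sigmas(r, n, high_snr, rng):
    """Sample r values of theta * sigma_i as squared standard Gaussians.

    In the high-SNR regime they are additionally scaled by sqrt(n), so the
    signal does not vanish relative to the noise as n grows.
    """
    vals = rng.standard_normal(r) ** 2
    return np.sqrt(n) * vals if high_snr else vals

rng = np.random.default_rng(0)
low = sample_theta_sigmas(3, 10_000, high_snr=False, rng=rng)   # low SNR
high = sample_theta_sigmas(3, 10_000, high_snr=True, rng=rng)   # high SNR
print(low, high)
```

This is done independently for each of the $2r$ singular values (training and test).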
Review 1: Summary: Double descent is an interesting phenomenon in overparameterized neural networks and deserves extensive investigations. In this paper, the authors carry out high-quality empirical studies with general input matrices and present some generalization analysis in the case of a rank-1 input matrix and Gaussian noise. This noise setting is interesting. But the network does not involve an activation function. Strengths and Weaknesses: The empirical studies with general input matrices carried out in the paper for double descent and the noise setting are interesting contributions of the paper. There are two weaknesses. One is the lack of an activation function in the network, and the other is the rank-1 restriction on the input matrix. Requested Changes: The authors should comment on difficulty in extending their theoretical results to more general input matrices and networks with activation functions. Broader Impact Concerns: It would be more interesting if the authors could extend their theoretical results to more general input matrices and networks with activation functions. ================================================== Review 2: Summary: This work considers the generalization error of a linear denoising model. For rank-1 data, it shows that the generalization error, as a function of the number of training samples, follows a double descent curve. In more general settings, experiments are conducted that also suggest a similar double descent curve. Moreover, the paper shows that the double descent phenomenon can be mitigated by optimally tuning the SNR in the denoising procedure. In other words, the generalization error of a denoising procedure with the optimally tuned SNR is a decreasing function of the number of training samples. Strengths and Weaknesses: This work studies a pretty interesting topic: the generalization of a denoising model. However, even before finishing my first pass, I noticed the following clear weaknesses in the paper: 1. 
The writing could be improved. Too many typos. Also inconsistent tenses. The current writing quality really discourages me to spend more time reviewing the manuscript. While the topic sounds to be interesting, and also novel to my knowledge, I am not sure I fully understand the problem setting. The mathematics could also be made more clear. 2. Could have emphasized that $\theta$ is just a scalar instead of a matrix. 3. On page 3 bottom, what is the "expected norm of the noise distribution"? 4. In Section 4.1. If $U$, $V_{tst}$, and $\Sigma_{tst}$ are all given and fixed, why should we care about denoising at all, as we already know the test data? 5. Remark 1 on the bottom of page 6: could you please be more specific? What is degenerate and why degeneration causes a problem? 6. Eq. (4), the expectation is taken over what randomness? 7. Overall, I do not see why the current denoising problem formulation makes sense. Firstly I would encourage the authors to explain what is known and what is unknown. If the test data is already given why would people be interested in denoising it? Secondly, I would encourage the authors to provide some *specific* examples that connect the mathematical formulation (e.g., eqs (1) and (2)) to practical denoising procedures. I have more questions regarding the theoretical results. 8. In equations (5) and (6), the $o(1/N_{tst})$ and $o(1/M)$ terms look suspicious. Could you please specify the hidden constants/factors? 9. The condition that $\theta_{tst}$ is $O(\sqrt{N_{tst}})$ is also hard to interpret. First, please specify the hidden constants/factors. Second, it seems to suggest that $\theta_{tst}$ could be a function of $N_{tst}$, then if $\theta_{tst} = 1/N_{tst}$ for example, it is not clear which term in eq (5) is the leading term. 10. When $c\to 1$ the current bounds in eqs (5) and (6) are infinite. But this does not seem to be sharp. 
Note that under the assumption that $r=1$, the whole problem is effectively only a one-dimensional problem as $u$ is given. Then computing the eigenvalue of a (fixed) 1-dim projection of the noise matrix effectively reduces to computing the norm of a random vector. This is only a random variable and should be well-bounded (away from zero) no matter $c=1$ or not. So I do not think the test error should be unbounded when $c=1$. 11. On page 11 top, could you be more specific about the technical difficulty? Note that the Sherman-Morrison-Woodbury formula gives you the formula of the form $(A+XY)^{\dagger}$ for general matrix $A,X,Y$. I am not sure if it is necessary at all to restrict oneself to rank 1. Given the above weaknesses, I am leaning toward rejecting the current version of the manuscript. Requested Changes: First please polish the writing. Moreover, please consider commenting on my questions above to help me better understand the results. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper focuses on the double descent both empirically and theoretically of the denoising model. The main theorem provides an asymptotic result for the excess risk of the rank-one data, which borrows the techniques from random matrix theory. Strengths and Weaknesses: Strength - This paper gives a theoretical analyses under the rank-1 data and indicates how the optimal SNR affects the double descent curve. Weakness - The description on assumption is unclear. - In theorem 1, the assumption $\theta_{trn} = O(\sqrt{N_{trn}})$ does not make sense as the dimension of $\theta_{trn}$ is the input dimension $d$. Requested Changes: - In Figure 2, why does the phase transition not occur at 1? - I suggest the authors set the assumptions more formally and mathematically, and put the discussion in the remark. - The low-dimensional data assumption is ok for me. 
But I'm wondering whether it would be possible to consider the case where the target function is low-dimensional? This setting is more realistic. - Empirical validation of the approximations in Eqs. (7) and (8) is needed. - In Theorem 1, how does the test error vary with $N_{tst}$ and $M$? This requires more discussion. A comparison with previous work on double descent, e.g., least squares and random features, is needed. For example, when I saw the result in Theorem 1, it looked similar to Hastie's result. What is shared, and what separates these problem settings? Broader Impact Concerns: no ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper is on the generalisation performance of denoising models. It demonstrates empirically that under certain conditions, the generalisation error follows a double descent curve and provides a simple model for the observed phenomenon. This is a borderline paper and the reviewers have mixed opinions about it, 2 leaning to accept while 1 is leaning to reject, one reason being a perceived limited significance. However, all reviewers indicate that the claims are backed up by evidence and that they think there would be some TMLR readers who would be interested in the topic, even though the topic and problem setting are rather restricted and a better write-up could make the readers' life much easier. In the hope that the studied phenomenon and mathematical analysis will be useful for other researchers, I recommend acceptance. Some comments/questions to address for the camera-ready version: - p3: typo: rations - p3: "Our goal is to understand the impact of training noise impacts generalization error" — impact of training noise on ...? - Section 3 intro: ", we explore the role of the amount of training noise and show that optimally picking theta_tst can mitigate the previously seen double descent." Is this a typo, and it should be theta_train?
Or otherwise, please explain how we can pick theta_test. ==================================================
# Variational Classification: A Probabilistic Generalization Of The Softmax Classifier Shehzaad Dhuliawala *shehzaad.dhuliawala@inf.ethz.ch* Department of Computer Science, ETH Zurich, Switzerland Mrinmaya Sachan mrinmaya.sachan@inf.ethz.ch Department of Computer Science, ETH Zurich, Switzerland Carl Allen carl.allen@ai.ethz.ch AI Centre, ETH Zurich, Switzerland ## Abstract We present a latent variable model for classification that provides a novel probabilistic interpretation of neural network softmax classifiers. We derive a variational training objective, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders, that generalises the cross-entropy loss used to train classification models. Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency between their *anticipated* distribution, required for accurate label predictions to be output, and their *empirical* distribution found in practice. We augment the variational objective to mitigate such inconsistency and encourage a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer. Overall, we provide new theoretical insight into the inner workings of widely-used softmax classifiers. Empirical evaluation on image and text classification datasets demonstrates that our proposed approach, variational classification1, maintains classification accuracy while the reshaped latent space improves other desirable properties of a classifier, such as calibration, adversarial robustness, robustness to distribution shift and sample efficiency useful in low data settings. ## 1 Introduction Classification is a central task in machine learning, used to categorise objects (Klasson et al., 2019), provide medical diagnoses (Adem et al., 2019; Mirbabaie et al., 2021), or identify potentially life-supporting planets (Tiensuu et al., 2019). 
Classification also arises in other learning regimes, e.g. to select actions in reinforcement learning, distinguish positive and negative samples in contrastive learning, and pertains to the *attention* mechanism in transformer models (Vaswani et al., 2017). Classification is commonly tackled by training a neural network with a sigmoid or *softmax* output layer.2 Each data sample x is mapped deterministically by an *encoder* fω (with weights ω) to a real vector z=fω(x), which the softmax layer maps to a distribution over class labels y∈Y:

$$p_{\theta}(\mathbf{y}|x)={\frac{\exp\{z^{\top}w_{y}+b_{y}\}}{\sum_{y^{\prime}\in{\mathcal{Y}}}\exp\{z^{\top}w_{y^{\prime}}+b_{y^{\prime}}\}}}\ .\tag{1}$$

Softmax classifiers have achieved impressive performance (e.g. Krizhevsky et al., 2012), however they are known to suffer from several issues. For example: such classifiers are trained to numerically minimise a loss function over a random dataset and their resulting predictions are *hard to explain*; model predictions may accurately identify the correct class by their mode but less accurately reflect a meaningful class distribution p(y|x), known as *miscalibration*; predictions can vary materially and erroneously for imperceptible changes in the data (*adversarial examples*); and highly flexible neural networks are often used in order to achieve accurate predictions, which tend to *require considerable labelled data* to train.

1Code: www.github.com/shehzaadzd/variational-classification. Review: www.openreview.net/forum?id=EWv9XGOpB3

2We refer throughout to the softmax function since it generalises sigmoid to multiple classes, but arguments apply to both.

![1_image_0.png](1_image_0.png)

Figure 1: Empirical distributions of inputs to the output layer qϕ(z|y) for classifiers trained under incremental components of the VC objective (Eqn. 7) on MNIST (cf the central Z-plane in figure 2).
(l) "MLE" objective = softmax cross-entropy; (c) "MAP" objective = MLE + Gaussian class priors pθ(z|y) (in contour); (r) VC objective = MAP + entropy of pθ(z|y). Colour indicates class y; Z = R² for visualisation purposes.

In order to better understand softmax classification and ideally mitigate some of its known shortcomings, we take a latent perspective, introducing a latent variable z in a graphical (Markov) model y → z → x. This model can be interpreted generatively as first choosing a sample's class, or *what* it is (y); then parameters defining its attributes, e.g. size, colour (z); which determine the observation (x), subject to any stochasticity, e.g. noise or natural variation. Class labels can be inferred by learning to reverse the process: predicting z from x, and y from z, integrating over all z: pθ,ϕ(y|x) = ∫z pθ(y|z)qϕ(z|x).3 It is generally intractable to learn parameters (θ, ϕ) of this predictive model by maximising the log likelihood, ∫x,y p(x, y) log pθ,ϕ(y|x). Instead a lower bound on the log likelihood can be maximised, comparable to the evidence lower bound (ELBO) used to train a variational auto-encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014). We show that training a softmax classifier under cross entropy loss (SCE) is, in fact, a special case of training this generalised latent classification model under the variational objective, in which the input to the softmax layer (z of Eqn. 1) is treated as the latent variable, the *encoder* parameterises qϕ(z|x), and the softmax layer computes pθ(y|z). In other words, the latent variable model and its training objective provide an interpretable generalisation of softmax classification. Probing further, the softmax layer can be interpreted as applying Bayes' rule, $p_\theta(y|z)=\frac{p_\theta(z|y)p_\theta(y)}{\sum_{y'}p_\theta(z|y')p_\theta(y')}$, assuming that latent variables follow *exponential family* class-conditional distributions pθ(z|y) for true class distributions to be output.
Meanwhile, the distribution that latents *actually* follow, qϕ(z|y) = ∫x qϕ(z|x)p(x|y), is defined by the data distribution and the encoder. We refer to these two descriptions of p(z|y) as the *anticipated* and *empirical* latent distributions, respectively, and consider their relationship. We show, both theoretically and empirically, that in practical settings these distributions can materially differ. Indeed, optimising the SCE objective may cause each empirical distribution qϕ(z|y) to *collapse to a point* rather than fit the anticipated pθ(z|y). This essentially overfits to the data and loses information required for estimating confidence or other potential downstream tasks, limiting the use of z as a *representation* of x. To address the potential discrepancy between qϕ(z|y) and pθ(z|y), so that the softmax layer receives the distribution it expects, we minimise the Kullback-Leibler (KL) divergence between them. This is non-trivial since qϕ(z|y) can only be sampled, not evaluated, hence we use the *density ratio trick* (Nguyen et al., 2010; Gutmann & Hyvärinen, 2010), as seen elsewhere (Makhzani et al., 2015; Mescheder et al., 2017), to approximate the required log probability ratios as an auxiliary task. The resulting *Variational Classification* (VC) objective generalises softmax cross-entropy classification from a latent perspective and fits empirical latent distributions qϕ(z|y) to anticipated *class priors* pθ(z|y). Within this more interpretable framework, latent variables learned by a typical softmax classifier can be considered maximum likelihood (MLE) point estimates that *maximise* pθ(y|z).
By comparison, the two KL components introduced in variational classification, respectively lead to *maximum a posteriori* (MAP) point estimates; and a *Bayesian* treatment where latent variables (approximately) fit the full distribution pθ(z|y) (Figure 1).4 Since Variational Classification serves to mitigate over-fitting, which naturally reduces with increased samples, VC is anticipated to offer greatest benefit in low data regimes. 3We use the notation qϕ to distinguish distributions, as will be made clear. 4Terms of the standard ELBO can be interpreted similarly. Through a series of experiments on vision and text datasets, we demonstrate that VC achieves comparable accuracy to regular softmax classification while the aligned latent distribution improves calibration, robustness to adversarial perturbations (specifically FGSM "white box"), generalisation under domain shift and performance in low data regimes. Although many prior works target any one of these pitfalls of softmax classification, often requiring extra hyperparameters to be tuned on held-out validation sets, VC *simultaneously* improves them all, without being tailored towards any or needing further hyperparameters or validation data. Overall, the VC framework gives novel mathematical insight and interpretability to softmax classification: the encoder maps a mixture of unknown data distributions p(x|y) to a mixture of chosen latent distributions pθ(z|y), which the softmax/output layer "flips" by Bayes' rule. This understanding may enable principled improvement of classification and its integration with other latent variable paradigms (e.g. VAEs). ## 2 Background The proposed generalisation from softmax to variational classification (§3) is analogous to how a deterministic auto-encoder relates to a *variational auto-encoder* (VAE), as briefly summarised below. 
Estimating parameters of a latent variable model of the data pθ(x) = ∫z pθ(x|z)pθ(z) by maximising the likelihood, ∫x p(x) log pθ(x), is often intractable. Instead, one can maximise the *evidence lower bound* (ELBO):

$$\int_{x}p(x)\log p_{\theta}(x)=\int_{x}p(x)\int_{z}q_{\phi}(z|x)\Big\{\log p_{\theta}(x|z)-\log\frac{q_{\phi}(z|x)}{p_{\theta}(z)}+\log\frac{q_{\phi}(z|x)}{p_{\theta}(z|x)}\Big\}\geq\int_{x}p(x)\int_{z}q_{\phi}(z|x)\Big\{\log p_{\theta}(x|z)-\log\frac{q_{\phi}(z|x)}{p_{\theta}(z)}\Big\}\doteq\text{ELBO},\tag{2}$$

where qϕ(z|x) is the *approximate posterior* and the term dropped in the inequality is a Kullback-Leibler (KL) divergence, $D_{KL}[\,q(z)\,\|\,p(z)]\doteq\int_z q(z)\log\frac{q(z)}{p(z)}\geq 0$. The VAE (Kingma & Welling, 2014; Rezende et al., 2014) uses the ELBO as a training objective with pθ(x|z) and qϕ(z|x) assumed to be Gaussian parameterised by neural networks. Setting the variance of qϕ(z|x) to zero, i.e. each qϕ(z|x) to a delta distribution, the first ("reconstruction") term of Eqn. 2 equates to the training objective of a deterministic *auto-encoder*, which the VAE can be interpreted to probabilistically generalise, allowing for uncertainty or stochasticity in qϕ(z|x) constrained by the second ("regularisation") term. Maximising the ELBO directly equates to minimising $D_{KL}[\,p(x)\,\|\,p_\theta(x)]+\mathbb{E}_x\big[D_{KL}[\,q_\phi(z|x)\,\|\,p_\theta(z|x)]\big]$, and so fits the model pθ(x) to the data distribution p(x) and qϕ(z|x) to the model posterior $p_\theta(z|x)\doteq\frac{p_\theta(x|z)p_\theta(z)}{p_\theta(x)}$. Equivalently, the modelled distributions qϕ(z|x) and pθ(x|z) are made *consistent under Bayes' rule*.

## 3 Variational Classification

Classification Latent Variable Model (LVM): Consider data x ∈ X and labels y ∈ Y as samples of random variables x, y jointly distributed p(x, y).
Under the (Markov) generative model in Figure 2 (*left*),

$$p(x)=\int_{y,z}p(x|z)p(z|y)p(y)\ ,\tag{3}$$

labels can be predicted by reversing the process,

$$p_{\theta}(y|x)=\int_{z}p_{\theta}(y|z)p_{\theta}(z|x)\ .\tag{4}$$

A neural network (NN) softmax classifier is a deterministic function that maps each data point x, via a sequence of intermediate representations, to a point on the simplex ∆|Y| that parameterises a categorical label distribution pθ(y|x). Any intermediate representation z = g(x) can be considered a sample of a *latent* random variable z from conditional distribution p(z|x) = δz−g(x).

![2_image_0.png](2_image_0.png)

Figure 2: Variational Classification, reversing the generative process: qϕ(z|x) maps data x∈ X to the latent space Z, where *empirical* distributions qϕ(z|y) are fitted to *class priors* pθ(z|y); top layer computes pθ(y|z) by Bayes' rule to give a class prediction p(y|x).

Proposition: a NN softmax classifier is a special case of Eqn. 4. Proof: Define (i) the input to the softmax layer as latent variable z; (ii) pθ(z|x)=δz−fω(x), a delta distribution parameterised by fω, the NN up to the softmax layer (the *encoder*); and (iii) pθ(y|z) by the softmax layer (as defined in RHS of Eqn. 1).

## 3.1 Training A Classification Lvm

Similarly to the latent variable model for pθ(x) (§2), parameters of Eqn. 4 cannot in general be learned by directly maximizing the likelihood.
Instead we can maximize a lower bound:

$$\int_{x,y}p(x,y)\log p_{\theta}(y|x)=\int_{x,y}p(x,y)\int_{z}q_{\phi}(z|x)\Big\{\log p_{\theta}(y|z,x)-\log\frac{q_{\phi}(z|x)}{p_{\theta}(z|x)}+\log\frac{q_{\phi}(z|x)}{p_{\theta}(z|x,y)}\Big\}\geq\int_{x,y}p(x,y)\int_{z}q_{\phi}(z|x)\log p_{\theta}(y|z)\ \doteq\ \text{ELBO}_{\text{VC}}\tag{5}$$

Here, pθ(y|z, x) = pθ(y|z) by the Markov model, and the (freely chosen) variational posterior qϕ is assumed to depend only on x and set equal to pθ(z|x) (eliminating the second term).5 The derivation of Eqn. 5 follows analogously to that of Eqn. 2 conditioned on x; an alternative derivation follows from Jensen's inequality. Unlike for the standard ELBO, the "dropped" KL term DKL[qϕ(z|x)∥pθ(z|x, y)] (minimised implicitly as ELBOVC is maximised) may not minimise to zero - except in the limiting case pθ(y|x, z) = pθ(y|z). That is, when z is a *sufficient statistic* for y given x, intuitively meaning that z contains all information contained in x about y.6 Hence, maximising ELBOVC implicitly encourages z to learn a sufficient statistic for y|x.

Proposition: softmax cross-entropy (SCE) loss is a special case of ELBOVC. Proof: In Eqn. 5, let (i) qϕ(z|x) = δz−fω(x); and (ii) pθ(z|y) = h(z) exp{z⊤wy + b′y}, ∀y ∈ Y, for constants wy, b′y, arbitrary positive function h : Z → R+ and by = b′y + log pθ(y):

$$\int_{x,y}p(x,y)\int_{z}q_{\phi}(z|x)\log p_{\theta}(y|z)\ \overset{(i)}{=}\ \int_{x,y}p(x,y)\log p_{\theta}(y|z{=}f_{\omega}(x))\ \overset{\text{(Bayes)}}{=}\ \int_{x,y}p(x,y)\log\frac{p_{\theta}(z{=}f_{\omega}(x)|y)\,p_{\theta}(y)}{\sum_{y^{\prime}}p_{\theta}(z{=}f_{\omega}(x)|y^{\prime})\,p_{\theta}(y^{\prime})}\ \overset{(ii)}{=}\ \int_{x,y}p(x,y)\log\frac{h(z)\exp\{f_{\omega}(x)^{\top}w_{y}+b_{y}\}}{\sum_{y^{\prime}}h(z)\exp\{f_{\omega}(x)^{\top}w_{y^{\prime}}+b_{y^{\prime}}\}}\ \doteq\ \text{SCE}.\tag{6}$$
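The proposition above can be checked numerically. A minimal sketch (the class means, prior and latent sample below are arbitrary illustrative values, not from the paper): taking pθ(z|y) = N(z; μy, I), i.e. h(z) = exp{−∥z∥²/2}, wy = μy and by = −∥μy∥²/2 + log pθ(y), Bayes' rule over the Gaussian class-conditionals reproduces exactly the softmax layer of Eqn. 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 3                        # latent dimension, number of classes
mu = rng.normal(size=(K, d))       # means of p(z|y) = N(mu_y, I)
prior = np.array([0.2, 0.5, 0.3])  # p(y)
z = rng.normal(size=d)             # a latent sample

def gauss(z, m):
    # unnormalised N(z; m, I); the normaliser cancels in Bayes' rule
    return np.exp(-0.5 * np.sum((z - m) ** 2))

# Bayes' rule with Gaussian class-conditionals
post = np.array([gauss(z, mu[k]) * prior[k] for k in range(K)])
post /= post.sum()

# Equivalent softmax layer: logits z.w_y + b_y with
# w_y = mu_y and b_y = -||mu_y||^2 / 2 + log p(y)
logits = z @ mu.T - 0.5 * np.sum(mu ** 2, axis=1) + np.log(prior)
softmax = np.exp(logits - logits.max())
softmax /= softmax.sum()

print(np.allclose(post, softmax))  # True
```

The h(z) factor is common to every class and cancels in the ratio, which is why only the linear logits survive.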
Corollary: A NN softmax classifier outputs true label distributions p(y|x) if inputs to the softmax layer, z, follow anticipated class-conditional distributions pθ(z|y) of (equi-scale) exponential family form.

## 3.2 Anticipated Vs Empirical Latent Distributions

Defining an LVM for classification (Eqn. 4) requires specifying pθ(y|z). In the special case of softmax classification, pθ(y|z) is effectively encoded by Bayes' rule assuming exponential family pθ(z|y), i.e. distributions over softmax layer inputs for class y (Eqn. 6). More generally, one can choose the parametric form of pθ(z|y) and compute pθ(y|z) by Bayes' rule in a classifier's *output layer* (generalising the standard softmax layer), thereby encoding the distribution latent variables are *anticipated* to follow for accurate label predictions p(y|x) to be output. A natural question then is: do latent variables of a classification LVM *empirically* follow the **anticipated** *distributions* pθ(z|y)? Empirical latent distributions are not fixed, but rather defined by qϕ(z|y) ≐ ∫x qϕ(z|x)p(x|y), i.e. by sampling qϕ(z|x) (parameterised by the encoder fω) given class samples x ∼ p(x|y). Since ELBOVC is optimised w.r.t. parameters ϕ, if optimal parameters are denoted ϕ∗, the question becomes: *does* qϕ∗(z|y) = pθ(z|y)? It can be seen that ELBOVC is optimised w.r.t. ϕ if qϕ∗(z|x) = δz−zx, for zx = arg maxz Ey|x[log pθ(y|z)] (see appendix A.1).7 In practice, *true* label distributions p(y|x) are unknown and we have only finite samples

5We use the notation "qϕ" by analogy to the VAE and to later distinguish qϕ(z|y), derived from qϕ(z|x), from pθ(z|y).

6Proof: from p(z|x, y)p(y|x) = p(y|x, z)p(z|x) and Markovianity, we see that DKL[qϕ(z|x)∥pθ(z|x, y)] = 0 ⇔ pθ(z|x, y) = qϕ(z|x) ⇔ pθ(y|x) = pθ(y|x, z) = pθ(y|z) ⇔ z a sufficient statistic for y|x.
7We assume the parametric family qϕ is sufficiently flexible to closely approximate the analytic maximiser of ELBOVC. from them. For a continuous data domain X , e.g. images or sounds, any empirically observed x is sampled twice with probability zero and so *is observed once with a single label* y(x). A similar situation arises (for any X ) if - as a property of the data - every x has only one ground truth label y(x), i.e. labels are mutually exclusive and *partition* the data.8In either case, the expectation over labels simplifies and, for a given class y, zx = arg maxz pθ(y|z), meaning the optimal latent distribution qϕ∗ (z|x) is identical for all samples x of class y. 9 Letting zy denote the optimal latent variable for all x of class y, optimal class-level distributions are simply qϕ∗ (z|y)=δz−zy , and ELBOVC **is maximised if all latent representations of a class, and** hence qϕ(z|y**), "collapse" to the same point**, irrespective of the anticipated pθ(z|y). Since softmax classification is a special case, this reveals the potential for softmax classifiers to learn overconcentrated, or *over-confident*, latent distributions relative to anticipated distributions (subject to the data distribution and model flexibility). In practical terms, the softmax cross-entropy loss may be minimised when all samples of a given class are mapped (by the encoder fω) to the same latent variable/representation, regardless of differences in the samples' probabilities or semantics, thus disregarding information that may be useful for calibration or downstream tasks. We note that the Information Bottleneck Theory (Tishby et al., 2000; Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017) assumes that such "loss of information" is beneficial, but as we see below, it is *unnecessary* for classification and may be undesirable in general. 
## 3.2.1 Aligning The Anticipated And Empirical Latent Distributions

We have shown that the ELBOVC objective, a generalisation of SCE loss, effectively involves two versions of the latent class conditional distributions, pθ(z|y) and qϕ(z|y), and that a mismatch between them may have undesirable consequences in terms of information loss. We therefore propose to align pθ(z|y) and qϕ(z|y), or, equivalently, for pθ(y|z) and qϕ(z|y) to be made *consistent under Bayes' rule* (analogous to pθ(x|z) and qϕ(z|x) in the ELBO, §2). Specifically, we minimise DKL[qϕ(z|y)∥pθ(z|y)], ∀y∈Y. Including this constraint (weighted by β > 0) and learning the required class distribution pπ(y) defines the full **VC objective**:

$$-{\cal L}_{\bf VC}=\int_{x,y}p(x,y)\Big\{\int_{z}q_{\phi}(z|x)\log\frac{p_{\theta}(z|y)\,p_{\pi}(y)}{\sum_{y^{\prime}}p_{\theta}(z|y^{\prime})\,p_{\pi}(y^{\prime})}\,-\,\beta\int_{z}q_{\phi}(z|y)\log\frac{q_{\phi}(z|y)}{p_{\theta}(z|y)}\,+\,\log p_{\pi}(y)\Big\}.\tag{7}$$

Taken incrementally, qϕ–terms of LVC can be interpreted as treating the latent variable z from a *maximum likelihood* (MLE), *maximum a posteriori* (MAP) and *Bayesian* perspective:

(i) maximising ∫z qϕ(z|x) log pθ(y|z) may overfit qϕ(z|y) ≈ δz−zy (as above); [MLE]

(ii) adding *class priors* ∫z qϕ(z|y) log pθ(z|y) changes the point estimates zy; [MAP]

(iii) adding *entropy* −∫z qϕ(z|y) log qϕ(z|y) encourages qϕ(z|y) to "fill out" pθ(z|y). [Bayesian]

Figure 1 shows samples from empirical latent distributions qϕ(z|y) for classifiers trained under incremental terms of the VC objective. This empirically confirms that softmax cross-entropy loss does not impose the anticipated latent distribution encoded in the output layer (*left*).
Adding class priors pθ(z|y) changes the point at which latents of a class concentrate (*centre*). Adding entropy encourages class priors to be "filled out" (*right*), relative to previous point estimates/δ-distributions. As above, if each x has a single label (e.g. MNIST), the MLE/MAP training objectives are optimised when class distributions qϕ(z|y) collapse to a point. We note that complete collapse is not observed in practice (Figure 1, left, *centre*), which we conjecture is due to strong constraints on fω, in particular continuity, ℓ2 regularisation and early stopping based on validation loss. Compared to the KL form of the ELBO (§2), maximising Eqn. 7 is equivalent to minimising:

$$\mathbb{E}_{x}\big[D_{KL}[\,p(y|x)\,\|\,p_{\theta}(y|x)]\big]+\mathbb{E}_{x,y}\big[D_{KL}[\,q_{\phi}(z|x)\,\|\,p_{\theta}(z|x,y)]\big]+\mathbb{E}_{y}\big[D_{KL}[\,q_{\phi}(z|y)\,\|\,p_{\theta}(z|y)]\big]+D_{KL}[\,p(y)\,\|\,p_{\pi}(y)]\tag{8}$$

showing the extra constraints over the core objective of modelling p(y|x) by pθ(y|x) (the first term).

8As in popular image datasets, e.g. MNIST, CIFAR, ImageNet, where *samples belong to one class or another*.

9Subject to uniqueness of arg maxz pθ(y|z), which is not guaranteed in general, but is assumed for suitable pθ(z|y), such as the softmax case of central interest: if all x have a single label y(x) (i.e. p(y|x) = 1y=y(x) is a "one-hot" vector), and norms are finitely constrained (∥z∥ = α > 0), then the SCE objective (Eqn. 6) is maximised, and softmax outputs pθ(y|x) (Eqn. 1) increasingly approximate true p(y|x), as class parameters wy are maximally dispersed (i.e. unit vectors ŵy tend to a regular polytope on the unit sphere) and all representations of a class y align with the class parameter: zx = fω(x) → αŵy(x) (unique).
Algorithm 1 Variational Classification (VC)

1: Input pθ(z|y), qϕ(z|x), pπ(y), Tψ(z); learning rate schedule {ηθt, ηϕt, ηπt, ηψt}t, β
2: Initialise θ, ϕ, π, ψ; t ← 0
3: **while** not converged do
4:  {xi, yi}i=1..m ∼ D [sample batch from data distribution p(x, y)]
5:  for i = 1 ... m do
6:   zi ∼ qϕ(z|xi), z′i ∼ pθ(z|yi) [e.g. qϕ(z|xi) ≐ δz−fω(xi), ϕ ≐ ω ⇒ zi = fω(xi)]
7:   pθ(yi|zi) = pθ(zi|yi)pπ(yi) / Σy pθ(zi|y)pπ(y)
8:  **end for**
9:  gθ ← (1/m) Σi ∇θ [log pθ(yi|zi) + β log pθ(zi|yi)]
10:  gϕ ← (1/m) Σi ∇ϕ [log pθ(yi|zi) − β Tψ(zi)] [e.g. using the "reparameterisation trick"]
11:  gπ ← (1/m) Σi ∇π log pπ(yi)
12:  gψ ← (1/m) Σi ∇ψ [log σ(Tψ(zi)) + log(1 − σ(Tψ(z′i)))]
13:  θ ← θ + ηθt gθ, ϕ ← ϕ + ηϕt gϕ, π ← π + ηπt gπ, ψ ← ψ + ηψt gψ, t ← t + 1
14: **end while**

## 3.3 Optimising The Vc Objective

The VC objective (Eqn. 7) is a lower bound that can be maximised by gradient methods, e.g. SGD:

- the first term can be calculated by sampling qϕ(z|x) (using the "reparameterisation trick" as necessary (Kingma & Welling, 2014)) and computing pθ(y|z) by Bayes' rule;
- the third term is standard multinomial cross-entropy;
- the second term, however, is not readily computable since qϕ(z|y) is implicit and cannot easily be evaluated, only sampled, as z ∼ qϕ(z|x) (parameterised by fω) for class samples x ∼ p(x|y).

Fortunately, we require the log ratios log (qϕ(z|y)/pθ(z|y)) for each class y, which can be approximated by training binary classifiers to distinguish samples of qϕ(z|y) from those of pθ(z|y). This so-called *density ratio trick* underpins learning methods such as Noise Contrastive Estimation (Gutmann & Hyvärinen, 2010) and contrastive self-supervised learning (e.g. Oord et al., 2018; Chen et al., 2020) and has been used comparably to train variants of the VAE (Makhzani et al., 2015; Mescheder et al., 2017).
Specifically, we maximise the following *auxiliary objective* w.r.t. parameters ψ of a set of binary classifiers:

$$-{\cal L}_{\bf aux}=\int_{y}p(y)\Big\{\int_{z}q_{\phi}(z|y)\log\sigma(T_{\psi}^{y}(z))+\int_{z}p_{\theta}(z|y)\log(1-\sigma(T_{\psi}^{y}(z)))\Big\}\tag{9}$$

where σ is the logistic sigmoid function σ(x) = (1 + e−x)−1, $T_{\psi}^{y}(z)=w_{y}^{\top}z+b_{y}$ and ψ = {wy, by}y∈Y. It is easy to show that Eqn. 9 is optimised if $T_{\psi}^{y}(z)=\log\frac{q_{\phi}(z|y)}{p_{\theta}(z|y)}$, ∀y∈Y. Hence, when all binary classifiers are trained, $T_{\psi}^{y}(z)$ approximates the log ratio for class y required by the VC objective (Eqn. 7). Optimising the VC objective might, in principle, also require gradients of the approximated log ratios w.r.t. parameters θ and ϕ. However, the gradient w.r.t. the ϕ found within the log ratio is always zero (Mescheder et al., 2017) and so the gradient w.r.t. θ can be computed from Eqn. 7. See Algorithm 1 for a summary. This approach is *adversarial* since (a) the VC objective is maximised when log ratios give a *minimal* KL divergence, i.e. when qϕ(z|y) = pθ(z|y) and latents sampled from qϕ(z|y) or pθ(z|y) are indistinguishable; whereas (b) the auxiliary objective is maximised if the ratios are *maximal* and the two distributions are fully discriminated. Relating to a Generative Adversarial Network (GAN) (Goodfellow et al., 2014a), the encoder fω acts as a *generator* and each binary classifier as a *discriminator*. Unlike a GAN, VC requires a discriminator per class that each distinguish generated samples from a learned, rather than static, reference/noise distribution pθ(z|y). However, whereas a GAN discriminator distinguishes between complex distributions in the data domain, a VC discriminator compares a Gaussian to an approximate Gaussian in the lower dimensional latent domain, a far simpler task.
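The density-ratio mechanism behind Eqn. 9 can be illustrated in one dimension: fit a linear classifier T(z) = wz + b to distinguish samples of two distributions, and at its optimum T recovers their log density ratio. A self-contained sketch under illustrative assumptions (N(1, 1) and N(0, 1) stand in for qϕ(z|y) and pθ(z|y); the true log ratio is z − 0.5, i.e. the optimum is w = 1, b = −0.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two distributions compared in Eqn. 9:
# q(z) = N(1, 1)  (empirical q_phi(z|y)),  p(z) = N(0, 1)  (prior p_theta(z|y))
zq = rng.normal(1.0, 1.0, 20000)
zp = rng.normal(0.0, 1.0, 20000)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Binary classifier T(z) = w*z + b, trained by gradient ascent on
# E_q[log sigma(T)] + E_p[log(1 - sigma(T))]  (the auxiliary objective).
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    gq = 1.0 - sigmoid(w * zq + b)   # d/dT of log sigma(T) at q-samples
    gp = -sigmoid(w * zp + b)        # d/dT of log(1 - sigma(T)) at p-samples
    w += lr * (np.mean(gq * zq) + np.mean(gp * zp))
    b += lr * (np.mean(gq) + np.mean(gp))

# For these Gaussians, log q(z)/p(z) = z - 0.5, so (w, b) -> (1, -0.5).
print(w, b)
```

The same argument, run per class with samples zi ∼ qϕ(z|xi) and z′i ∼ pθ(z|yi), is what Algorithm 1's ψ-update implements.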
The auxiliary objective does not change the complexity relative to softmax classification and can be parallelised across classes, adding marginal computational overhead per class.

## 3.3.1 Optimum Of The Vc Objective

In §3.2, we showed that the empirical distribution qϕ(z|x) that optimises the ELBOVC need not match the anticipated pθ(z|y). Here, we perform a similar analysis to identify the qϕ∗(z|x) that maximises the VC objective, which, by construction of the objective, is expected to better match the anticipated distribution. Letting β = 1 to simplify (see appendix A.2 for the general case), the VC objective is maximised w.r.t. qϕ(z|x) if:

$$\mathbb{E}_{p(y|x)}[\log q_{\phi}(z|y)]=\mathbb{E}_{p(y|x)}[\log p_{\theta}(y|z)p_{\theta}(z|y)]+c\ ,\tag{10}$$

for a constant c. This is satisfied if, for each class y,

$$q_{\phi}(z|y)=\frac{p_{\theta}(z|y)\,p_{\theta}(y|z)}{\mathbb{E}_{p_{\theta}(z^{\prime}|y)}[\,p_{\theta}(y|z^{\prime})]}\ ,\tag{11}$$

giving a unique solution if each x has a single label y (see §3.2; see appendix A.2 for proof). This shows that each qϕ(z|y) fits pθ(z|y) scaled by a ratio of pθ(y|z) to its weighted average. Hence, where pθ(y|z) is above average, qϕ(z|y) > pθ(z|y), and vice versa. In simple terms, qϕ(z|y) reflects pθ(z|y) but is "peakier" (fitting the observation in Figure 1). We have thus shown empirically (Figure 1) and theoretically that the VC objective aligns the empirical and anticipated latent distributions. However, these distributions are not identical and we leave to future work the derivation of an objective that achieves both pθ(y|x) = p(y|x) and qϕ(z|y) = pθ(z|y).

## 3.4 Summary

The latent variable model for classification (Eqn.
4) abstracts a typical softmax classifier, giving interpretability to its components: - the encoder (fω) transforms a mixture of analytically unknown class-conditional data distributions p(x|y) to a mixture of analytically defined latent distributions pθ(z|y); - assuming latent variables follow the anticipated class distributions pθ(z|y), the output layer applies Bayes' rule to give pθ(y|z) (see figure 2) and thus meaningful estimates of label distributions p(y|x) (by Eqn. 4). ELBOVC generalises softmax cross-entropy, treating the input to the softmax layer as a latent variable and identifying the anticipated class-conditionals pθ(z|y) implicitly encoded within the softmax layer. Extending this, the VC objective (LVC) encourages the empirical latent distributions qϕ(z|y) to fit pθ(z|y). Softmax cross-entropy loss is recovered from LVC by setting (i) qϕ(z|x) = δz−fω(x); (ii) pθ(z|y) to (equal-scale) exponential family distributions, e.g. equivariate Gaussians; and (iii) β = 0. This is analogous to how a deterministic auto-encoder relates to a VAE. Thus **the VC framework elucidates assumptions made** implicitly in softmax classification and by generalising this special case, allows these assumptions, e.g. the choice of pθ(z|y), to be revised on a task/data-specific basis. ## 4 Related Work Despite notable differences, the *energy-based* interpretation of softmax classification of Grathwohl et al. (2019) is perhaps most comparable to our own in taking an abstract view to improve softmax classification. However, their gains, e.g. in calibration and adversarial robustness, come at a significant cost to the main aim: classification accuracy. Further, the required MCMC normalisation reportedly slows and destabilises training. In contrast, we use tractable probability distributions and retain the order of complexity. 
Our approach is also notionally related to Bayesian Neural Networks (BNNs) or related approaches such as MC-dropout (Gal & Ghahramani, 2016), although these are *Bayesian* with respect to model parameters, rather than latent variables. In principle, these might be combined (e.g. Murphy, 2012) as an interesting future direction. Several previous works adapt the standard ELBO, used to learn a model of p(x), to a conditional analog for learning p(y|x) (Tang & Salakhutdinov, 2013; Sohn et al., 2015). However, such works focus on generative scenarios rather than discriminative classification, e.g. x being a face image and y|x being the same face in a different pose determined by latent z; or x being part of an image and y|x its completion given latent content z. The *Gaussian stochastic neural network* (GSNN) model (Sohn et al., 2015) is closer to our own by conditioning q(z|x, y) only on x, however the model neither generalises softmax classification nor considers class-level latent priors q(z|y) as in variational classification. Variational classification subsumes a number of works that add a regularisation term to a softmax cross-entropy loss function, which can be interpreted as a prior over latent variables in the "MAP" case (§3.2.1). For example, several semi-supervised learning models can be interpreted as treating the softmax *outputs* as latent variables and using a latent prior to guide predictions for unlabelled data (Allen et al., 2020). Closer to variational classification, several works can be interpreted as treating softmax *inputs* as latent variables with a regularisation term that encourages prior beliefs, such as *deterministic* label predictions (i.e. all probability mass on a single class), which can be encouraged by imposing a *large margin* between class-conditional latent distributions (Liu et al., 2016; Wen et al., 2016; Wan et al., 2018; 2022; Scott et al., 2021).
Variational classification also relates to works across several learning paradigms in which a Gaussian mixture prior is imposed in the latent space, e.g. for representation learning (Xie et al., 2016; Caron et al., 2018), in auto-encoders (Song et al., 2013; Ghosh et al., 2019) and in variational auto-encoders (Jiang et al., 2016; Yang et al., 2017; Prasad et al., 2020; Manduchi et al., 2021).

## 5 Empirical Validation

Our goal is to empirically demonstrate that the latent structure induced by the VC objective is beneficial relative to the standard softmax classifier. A variational classifier can be substituted wherever a softmax classifier is used, by making distributional choices appropriate for the data. In particular, variational classification does not set out to address any one drawback of a softmax classifier, rather it aims to better reverse the generative process and so capture the data distribution, providing multiple benefits. We illustrate the effectiveness of a VC through a variety of tasks on familiar datasets from the visual and text domains. Specifically, we set out to validate the following hypotheses:

- H1: The VC objective improves uncertainty estimation, leading to a more calibrated model.
- H2: The VC objective increases model robustness to changes in the data distribution.
- H3: The VC objective enhances resistance to adversarial perturbations.
- H4: The VC objective aids learning from fewer samples.

For fair comparison, we make minimal changes to adapt a standard softmax classifier to a variational classifier. As described in §3.4, we train with the VC objective (Eqn. 7) under the following assumptions: qϕ(z|x) is a delta distribution parameterised by a neural network fω : *X → Z*; class-conditional priors pθ(z|y) are multi-variate Gaussians with parameters learned from the data (we use diagonal covariance for simplicity).
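Under these choices, the output layer computes class probabilities by Bayes' rule over the Gaussian class-conditionals. The following is a minimal numpy sketch of that computation, not the released implementation; function and variable names are our own:

```python
import numpy as np

def gaussian_bayes_posterior(z, mus, log_vars, log_prior):
    """Class posterior p(y|z) via Bayes' rule over Gaussian class-conditionals.

    z: (d,) latent vector; mus: (K, d) class means; log_vars: (K, d) diagonal
    log-variances; log_prior: (K,) log class priors. Returns (K,) probabilities.
    """
    # log N(z; mu_k, diag(exp(log_vars_k))) per class, up to the shared
    # -d/2 * log(2*pi) constant, which cancels in the normalisation below
    diff = z[None, :] - mus                                   # (K, d)
    log_lik = -0.5 * np.sum(log_vars + diff**2 / np.exp(log_vars), axis=1)
    logits = log_lik + log_prior                              # log p(z|y) + log p(y)
    logits -= logits.max()                                    # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()                                # p(y|z), sums to 1
```

With equal variances the log-likelihood differences are linear in z, so this reduces to a standard linear-softmax output layer, consistent with the claim that softmax classification is a special case.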
To provide an ablation across the components of the VC objective, we compare classifiers trained to maximise three objective functions (see §3):

CE: equivalent to standard softmax cross-entropy under the above assumptions and corresponds to the MLE form of the VC objective (§3.2.1, (i)).

$$J_{\bf CE}=\int_{x,y}p(x,y)\left(\int_{z}q_{\phi}(z|x)\log p_{\theta}(y|z)\,+\,\log p_{\pi}(y)\right)$$

GM: includes class priors and corresponds to the MAP form of the VC objective (§3.2.1, (ii)). This is equivalent to Wan et al. (2018) with just the Gaussian prior.

$$J_{\mathbf{GM}}=J_{\mathbf{CE}}\,+\,\int_{x,y}p(x,y)\int_{z}q_{\phi}(z|y)\log p_{\theta}(z|y)$$

VC: includes the entropy of the empirical latent distributions and corresponds to the Bayesian form of the VC objective (§3.2.1, (iii)).

$$J_{\mathbf{VC}}=J_{\mathbf{GM}}\,-\,\int_{x,y}p(x,y)\int_{z}q_{\phi}(z|y)\log q_{\phi}(z|y)$$

| CIFAR-10 | CE | GM⋄ | VC | vMF⋆ |
|---|---|---|---|---|
| Acc. (%, ↑) WRN | 96.2 ± 0.1 | 95.0 ± 0.2 | 96.3 ± 0.2 | - |
| Acc. (%, ↑) RNET | 93.7 ± 0.1 | 93.0 ± 0.1 | 93.2 ± 0.1 | 94.0 ± 0.1 |
| ECE (%, ↓) WRN | 3.1 ± 0.2 | 3.5 ± 0.3 | 2.1 ± 0.2 | - |
| ECE (%, ↓) RNET | 3.8 ± 0.3 | 4.1 ± 0.2 | 3.2 ± 0.2 | 5.9 ± 0.2 |

| CIFAR-100 | CE | GM⋄ | VC | vMF⋆ |
|---|---|---|---|---|
| Acc. (%, ↑) WRN | 80.3 ± 0.1 | 79.8 ± 0.2 | 80.3 ± 0.1 | - |
| Acc. (%, ↑) RNET | 73.2 ± 0.1 | 74.2 ± 0.1 | 73.4 ± 0.1 | 69.94 ± 0.2 |
| ECE (%, ↓) WRN | 11.1 ± 0.7 | 19.6 ± 0.4 | 4.8 ± 0.3 | - |
| ECE (%, ↓) RNET | 8.7 ± 0.2 | 10.5 ± 0.2 | 5.1 ± 0.2 | 7.9 ± 0.3 |

| Tiny-Imagenet | CE | GM⋄ | VC |
|---|---|---|---|
| Acc. (%, ↑) WRN | - | - | - |
| Acc. (%, ↑) RNET | 59.7 ± 0.2 | 59.3 ± 0.1 | 59.3 ± 0.1 |
| ECE (%, ↓) WRN | - | - | - |
| ECE (%, ↓) RNET | 12.3 ± 0.4 | 8.75 ± 0.2 | 7.4 ± 0.5 |

Table 1: Classification Accuracy and Expected Calibration Error (mean, std.dev. over 5 runs).
Accuracy is comparable between VC and CE across encoder architectures and data sets, while calibration of VC notably improves. ⋆ from Scott et al. (2021), ⋄ our implementation of Wan et al. (2018)

## 5.1 Accuracy And Calibration

We first compare the classification accuracy and calibration of each model on three standard benchmarks (CIFAR-10, CIFAR-100, and Tiny-Imagenet), across two standard ResNet model architectures (*WideResNet-28-10* (WRN) and *ResNet-50* (RNET)) (He et al., 2016; Zagoruyko & Komodakis, 2016). Calibration is evaluated in terms of the *Expected Calibration Error* (ECE) (see Appendix C). Table 1 shows that the VC and GM models achieve comparable accuracy to softmax cross-entropy (CE), but that the VC model is consistently and significantly more calibrated (H1). Unlike approaches such as Platt's scaling (Platt et al., 1999) and temperature scaling (Guo et al., 2017), no *post hoc* calibration requiring additional data or associated hyperparameter tuning is performed. We also compare MC-Dropout (Gal & Ghahramani, 2016) for CIFAR-10 and CIFAR-100 on *ResNet-50* (p = 0.2, averaging over 10 samples). As seen previously (Ovadia et al., 2019), although calibration improves relative to CE (3.3%, 1.4%, resp.), prediction accuracy, the main goal of classification, decreases (92.7%, 70.1%).

## 5.2 Generalization Under Distribution Shift

When used in real-world settings, machine learning models may encounter *distribution shift* relative to the training data. It can be important to know when a model's output is reliable and can be trusted, requiring the model to be **calibrated on out-of-distribution (OOD) data** and to *know when it does not know*. To test performance under distribution shift, we use the robustness benchmarks, CIFAR-10-C, CIFAR-100-C and Tiny-Imagenet-C, proposed by Hendrycks & Dietterich (2019), which *simulate* distribution shift by adding various *synthetic* corruptions of varying intensities to a dataset.
We compare the CE model, with and without temperature scaling, to the VC model. Temperature scaling was performed as in Guo et al. (2017) with the temperature tuned on an in-distribution validation set. Both models are found to perform comparably in terms of classification accuracy (Figure 8), consistent with the previous results (§5.1). However, Figure 3 shows that the VC model has a consistently lower calibration error as the corruption intensity increases (left to right) (H2). We note that the improvement in calibration between the CE and VC models increases as the complexity of the dataset increases.

![8_image_0.png](8_image_0.png)

Figure 3: Calibration under distribution shift: (l) CIFAR-10-C, (m) CIFAR-100-C, (r) Tiny-Imagenet-C. Boxes indicate quartiles, whiskers indicate min/max, across 16 types of synthetic distribution shift.

When deployed in the wild, *natural* distributional shifts may occur in the data due to subtle changes in the data generation process, e.g. a change of camera. We test resilience to *natural* distributional shifts on two tasks: Natural Language Inference (NLI) and detecting whether cells are cancerous from microscopic images. NLI requires verifying if a hypothesis logically follows from a premise. Models are trained on the SNLI dataset (Bowman et al., 2015) and tested on the MNLI dataset (Williams et al., 2018), which is drawn from more diverse sources. Cancer detection uses the Camelyon17 dataset (Bandi et al., 2018) from the WILDS datasets (Koh et al., 2021), where the train and eval sets contain images from different hospitals. Table 2 shows that the VC model achieves better calibration under these natural distributional shifts (H2). The Camelyon17 (CAM) dataset has a relatively small number (1000) of training samples (hence wide error bars are expected), which combines distribution shift with a low data setting (H4) and shows that the VC model achieves higher (average) accuracy in this more challenging real-world setting.
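Temperature scaling, the *post hoc* baseline used above, fits a single scalar T on held-out logits and rescales them before the softmax. A rough sketch is given below, with a grid search standing in for the gradient-based fit typically used in practice; names are illustrative:

```python
import numpy as np

def fit_temperature(logits, labels, grid=None):
    """Temperature scaling (Guo et al., 2017): choose T minimising the
    validation NLL of softmax(logits / T). Calibrated predictions then
    use logits / T in place of the raw logits."""
    if grid is None:
        grid = np.linspace(0.25, 5.0, 96)

    def nll(T):
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)            # stable log-softmax
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    return min(grid, key=nll)
```

For an overconfident model (logits systematically too large), the fitted temperature exceeds 1, softening the predicted distribution; a single parameter means accuracy is unchanged since argmax is preserved.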
|     | Accuracy (↑) CE | Accuracy (↑) VC | Calibration (↓) CE | Calibration (↓) VC |
|-----|-----|-----|-----|-----|
| NLI | 71.2 ± 0.1 | 71.2 ± 0.1 | 7.3 ± 0.2 | 3.4 ± 0.2 |
| CAM | 79.2 ± 2.8 | 84.5 ± 4.0 | 8.4 ± 2.5 | 1.8 ± 1.3 |

Table 2: Accuracy and Calibration (ECE) under distributional shift (mean, std. err., 5 runs)

We also test the ability to **detect OOD examples**. We compute the AUROC when a model is trained on CIFAR-10 and evaluated on the CIFAR-10 validation set mixed (in turn) with SVHN, CIFAR-100, and CelebA (Goodfellow et al., 2013; Liu et al., 2015). We compare the VC and CE models using the probability of the predicted class arg maxy pθ(y|x) as a means of identifying OOD samples.

| Model | SVHN | C-100 | CelebA |
|----------|--------|---------|----------|
| PCE(y|x) | 0.92 | 0.88 | 0.90 |
| PVC(y|z) | 0.93 | 0.86 | 0.89 |

Table 3 shows that the VC model performs comparably to the CE model. We also consider p(z) as a metric to detect OOD samples and achieve comparable results, which is broadly consistent with the findings of Grathwohl et al. (2019). Although the VC model learns to map the data to a more structured latent space and, from the results above, makes more calibrated predictions for OOD data, it does not appear to be better able to distinguish OOD data than a standard softmax classifier (CE) using the metrics tested (we note that "OOD" is a loosely defined term).

Table 3: AUROC for OOD detection. Models trained on CIFAR-10, evaluated on in and out-of-distribution samples.

## 5.3 Adversarial Robustness

We test model robustness to adversarially generated images using the common *Fast Gradient Sign Method* (FGSM) of adversarial attack (Goodfellow et al., 2014b). This "attack" is arbitrarily chosen and VC is not explicitly tailored towards it. Perturbations are generated as P = ϵ × sign(∇x L(x, y)), where L(x, y) is the model loss for data sample x and correct class y; and ϵ is the attack *magnitude*.
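For intuition, FGSM can be written out for a model whose loss gradient is available in closed form. The toy sketch below uses binary logistic regression in place of a deep network, so the gradient is analytic; a real attack would obtain ∇ₓL via autograd:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fgsm_attack(x, y, w, b, eps):
    """FGSM perturbation x + eps * sign(grad_x L) for a binary logistic
    'model', whose cross-entropy gradient w.r.t. the input is analytic.

    x: (d,) input; y: true label in {0, 1}; (w, b): model parameters.
    """
    p = sigmoid(w @ x + b)          # model confidence for class 1
    grad_x = (p - y) * w            # d/dx of the cross-entropy loss
    return x + eps * np.sign(grad_x)
```

Because only the sign of the gradient is used, the perturbation is an L∞ step of size ϵ per coordinate, which is what makes a small ϵ sufficient to flip predictions.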
We compare all models trained on MNIST and CIFAR-10 against FGSM attacks of different magnitudes.

![9_image_0.png](9_image_0.png)

Figure 4: Prediction accuracy for increasing FGSM adversarial attacks (l) MNIST; (r) CIFAR-10

Results in Figure 4 show that the VC model is consistently more (FGSM) adversarially robust relative to the standard CE model, across attack magnitudes on both datasets (H3).

## 5.4 Low Data Regime

In many real-world settings, datasets may have relatively few data samples and it may be prohibitive or impossible to acquire more, e.g. historic data or rare medical cases. We investigate model performance when data is scarce on the hypothesis that a prior over the latent space enables the model to better generalise from fewer samples. Models are trained on 500 samples from MNIST, 1000 samples from CIFAR-10 and 50 samples from AGNews.

|          | CE | GM | VC |
|----------|------------|------------|------------|
| MNIST    | 93.1 ± 0.2 | 94.4 ± 0.1 | 94.2 ± 0.2 |
| CIFAR-10 | 52.7 ± 0.5 | 54.2 ± 0.6 | 56.3 ± 0.6 |
| AGNews   | 56.3 ± 5.3 | 61.5 ± 2.9 | 66.3 ± 4.6 |

Table 4: Accuracy in low data regime (mean, std.err., 5 runs)

Results in Table 4 show that introducing the prior (GM) improves performance in a low data regime and that the additional entropy term in the VC model maintains or further improves accuracy (H4), particularly on the more complex datasets. We further probe the relative benefit of the VC model over the CE baseline as training sample size varies (H4) on 10 MedMNIST classification datasets (Yang et al., 2021), a collection of real-world medical datasets of varying sizes.

![10_image_0.png](10_image_0.png)

Figure 5 shows the increase in classification accuracy for the VC model relative to the CE model against number of training samples (log scale). The results show a clear trend that the benefit of the additional latent structure imposed in the VC model increases exponentially as the number of training samples decreases.
Together with the results in Table 4, this suggests that the VC model offers the most significant benefit for small, complex datasets.

Figure 5: Accuracy increase of VC vs CE on 10 MedMNIST classification datasets of varying training set size. Blue points indicate accuracy on a dataset (mean, std.err., 3 runs). Green line shows a best-fit trend across dataset size.

## 6 Conclusion

We present Variational Classification (VC), a latent generalisation of standard softmax classification trained under cross-entropy loss, mirroring the relationship between the variational auto-encoder and the deterministic auto-encoder (§3). We show that softmax classification is a special case of VC under specific assumptions that are effectively taken for granted when using a softmax output layer. Moreover, we see that latent distributional assumptions, "hard-coded" in the softmax layer and anticipated to be followed for accurate class predictions, are neither enforced theoretically nor satisfied empirically. We propose a novel training objective based on the ELBO to better align the *empirical* latent distribution to that *anticipated*. A series of experiments on image and text datasets show that, with marginal computational overhead and without tuning hyper-parameters other than for the original classification task, variational classification achieves comparable prediction accuracy to standard softmax classification while significantly improving calibration, adversarial robustness (specifically FGSM), robustness to distribution shift and performance in low data regimes. In terms of limitations, we intentionally focus on the *output* layer of a classifier, treating the encoder fω as a "black-box". This leaves open the question of how, and how well, the underlying neural network achieves its role of transforming a mixture of unknown data distributions p(x|y) to a mixture of specified latent distributions p(z|y).
We also prove that optimal *empirical* latent distributions qϕ(z|y) are "peaky" approximations to the anticipated pθ(z|y), leaving open the possibility of further improvement to the VC objective. The VC framework gives new theoretical insight into the highly familiar softmax classifier, opening up several interesting future directions. For example, q(z|x) might be modelled by a stochastic distribution, rather than a delta distribution, to reflect uncertainty in the latent variables, similarly to a VAE. VC may also be extended to semi-supervised learning and related to approaches that impose structure in the latent space.

## 7 Acknowledgements

Carl is gratefully supported by an ETH AI Centre Postdoctoral Fellowship and a small projects grant from the Haslerstiftung (no. 23072). Mrinmaya acknowledges support from the Swiss National Science Foundation (Project No. 197155), a Responsible AI grant by the Haslerstiftung; and an ETH Grant (ETH-19 21-1).

## References

Kemal Adem, Serhat Kiliçarslan, and Onur Cömert. Classification and diagnosis of cervical cancer with stacked autoencoder and softmax classification. *Expert Systems with Applications*, 115:557–564, 2019.

Carl Allen, Ivana Balažević, and Timothy Hospedales. A probabilistic model for discriminative and neurosymbolic semi-supervised learning. *arXiv preprint arXiv:2006.05896*, 2020.

Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the camelyon17 challenge. In *IEEE Transactions on Medical Imaging*, 2018.

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. *arXiv preprint arXiv:1508.05326*, 2015.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze.
Deep clustering for unsupervised learning of visual features. In *European Conference on Computer Vision*, 2018. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International Conference on Machine Learning*, 2020. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In *International Conference on Machine Learning*, 2016. Partha Ghosh, Mehdi SM Sajjadi, Antonio Vergari, Michael Black, and Bernhard Schölkopf. From variational to deterministic autoencoders. *arXiv preprint arXiv:1903.12436*, 2019. Ian J Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet. Multi-digit number recognition from street view imagery using deep convolutional neural networks. *arXiv preprint arXiv:1312.6082*, 2013. Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, and Yoshua Bengio. Generative adversarial nets. In *Neural Information Processing Systems*, 2014a. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b. Will Grathwohl, Kuan-Chieh Wang, Joern-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, and Kevin Swersky. Your classifier is secretly an energy based model and you should treat it like one. In International Conference on Learning Representations, 2019. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, 2017. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In *International Conference on Artificial Intelligence and Statistics*, 2010. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 
In Conference on Computer Vision and Pattern Recognition, 2016. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. *arXiv preprint arXiv:1903.12261*, 2019. Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. *arXiv preprint arXiv:1611.05148*, 2016. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014. Marcus Klasson, Cheng Zhang, and Hedvig Kjellström. A hierarchical grocery store image dataset with visual and semantic labels. In *IEEE Winter Conference on Applications of Computer Vision*, 2019. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of in-the-wild distribution shifts. In *International Conference on Machine Learning*, 2021. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In *Neural Information Processing Systems*, 2012. Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. In *International Conference on Machine Learning*, 2016. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In International Conference on Computer Vision, 2015. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. *arXiv preprint arXiv:1511.05644*, 2015. Laura Manduchi, Kieran Chin-Cheong, Holger Michel, Sven Wellmann, and Julia Vogt. Deep conditional gaussian mixture model for constrained clustering. In *Neural Information Processing Systems*, 2021. Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. 
Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In *International Conference on Machine Learning*, 2017. Milad Mirbabaie, Stefan Stieglitz, and Nicholas RJ Frick. Artificial intelligence in disease diagnostics: A critical review and classification on the current state of research guiding future direction. Health and Technology, 11(4):693–731, 2021. Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip HS Torr, and Yarin Gal. Deep deterministic uncertainty: A simple baseline. *arXiv e-prints*, pp. arXiv–2102, 2021. Kevin P Murphy. *Machine learning: a probabilistic perspective*. MIT press, 2012. Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In *AAAI Conference on Artificial Intelligence*, 2015. XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. *IEEE Transactions on Information Theory*, 56(11):5847–5861, 2010. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. In *Neural Information Processing Systems*, 2019. John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. *Advances in large margin classifiers*, 10(3):61–74, 1999. Vignesh Prasad, Dipanjan Das, and Brojeshwar Bhowmick. Variational clustering: Leveraging variational autoencoders for image clustering. In *International Joint Conference on Neural Networks*, 2020. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 
Stochastic backpropagation and approximate inference in deep generative models. In *International Conference on Machine Learning*, 2014. Tyler R Scott, Andrew C Gallagher, and Michael C Mozer. von mises-fisher loss: An exploration of embedding geometries for supervised learning. In *International Conference on Computer Vision*, 2021. Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. *arXiv* preprint arXiv:1703.00810, 2017. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In *Neural Information Processing Systems*, 2015. Chunfeng Song, Feng Liu, Yongzhen Huang, Liang Wang, and Tieniu Tan. Auto-encoder based data clustering. In *Iberoamerican Congress on Pattern Recognition*, 2013. Charlie Tang and Russ R Salakhutdinov. Learning stochastic feedforward neural networks. In Neural Information Processing Systems, 2013. Jacob Tiensuu, Maja Linderholm, Sofia Dreborg, and Fredrik Örn. Detecting exoplanets with machine learning: A comparative study between convolutional neural networks and support vector machines, 2019. Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In *IEEE* Information Theory Workshop, 2015. Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Neural Information Processing Systems*, 2017. Weitao Wan, Yuanyi Zhong, Tianpeng Li, and Jiansheng Chen. Rethinking feature distribution for loss functions in image classification. In *Conference on Computer Vision and Pattern Recognition*, 2018. Weitao Wan, Jiansheng Chen, Cheng Yu, Tong Wu, Yuanyi Zhong, and Ming-Hsuan Yang. Shaping deep feature space towards gaussian mixture for visual classification. 
In *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2022.

Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In *European Conference on Computer Vision*, 2016.

Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In *North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, 2018.

Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In *International Conference on Machine Learning*, 2016.

Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In *International Conference on Machine Learning*, 2017.

Jiancheng Yang, Rui Shi, and Bingbing Ni. Medmnist classification decathlon: A lightweight automl benchmark for medical image analysis. In *International Symposium on Biomedical Imaging*, 2021.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. *arXiv preprint arXiv:1605.07146*, 2016.

## A Proofs

## A.1 Optimising The ELBOVC W.R.T. Q

Rearranging Eqn. 5, the ELBOVC is optimised by

$$\operatorname*{arg\,max}_{q_{\phi}(z|x)}\int_{x}\sum_{y}p(x,y)\int_{z}q_{\phi}(z|x)\log p_{\theta}(y|z)$$
$$=\operatorname*{arg\,max}_{q_{\phi}(z|x)}\int_{x}p(x)\int_{z}q_{\phi}(z|x)\sum_{y}p(y|x)\log p_{\theta}(y|z)$$

The integral over z is a qϕ(z|x)-weighted sum of $\sum_{y}p(y|x)\log p_{\theta}(y|z)$ terms. Since qϕ(z|x) is a probability distribution, the integral is upper bounded by $\max_{z}\sum_{y}p(y|x)\log p_{\theta}(y|z)$. This maximum is attained iff the support of qϕ(z|x) is restricted to $z^{*}=\arg\max_{z}\sum_{y}p(y|x)\log p_{\theta}(y|z)$ (which may not be unique). □

## A.2 Optimising The VC Objective W.R.T. Q

Setting β = 1 in Eqn.
7 to simplify and adding a Lagrangian term to constrain qϕ(z|x) to a probability distribution, we aim to find

$$\arg\max_{q_{\phi}(z|x)}\int_{x}\sum_{y}p(x,y)\bigg\{\int_{z}q_{\phi}(z|x)\log p_{\theta}(y|z)$$
$$-\int_{z}q_{\phi}(z|y)\log\frac{q_{\phi}(z|y)}{p_{\theta}(z|y)}\,+\,\log p_{\pi}(y)\bigg\}+\lambda(1-\!\int_{z}\!q_{\phi}(z|x))\,\,.$$

Recalling that $q_{\phi}(z|y)=\int_{x}q_{\phi}(z|x)\,p(x|y)$ and using calculus of variations, we set the derivative of this functional w.r.t. qϕ(z|x) to zero:

$$\sum_{y}p(x,y)\bigg\{\log p_{\theta}(y|z)-(\log\frac{q_{\phi}(z|y)}{p_{\theta}(z|y)}+1)\bigg\}-\lambda=0$$

Rearranging and dividing through by p(x) gives

$$\mathbb{E}_{p(y|x)}[\log q_{\phi}(z|y)]=\mathbb{E}_{p(y|x)}[\log p_{\theta}(y|z)p_{\theta}(z|y)]+c\,,$$

where $c=-(1+\frac{\lambda}{p(x)})$. Further, if each label y occurs once with each x, due to sampling or otherwise, then this simplifies to

$$q_{\phi}(z|y^{*})\,e^{c}=p_{\theta}(y^{*}|z)\,p_{\theta}(z|y^{*})\,,$$

which holds for all classes $y^{*}\in\mathcal{Y}$. Integrating over z shows $e^{c}=\int_{z}p_{\theta}(y|z)p_{\theta}(z|y)$ to give

$$q_{\phi}(z|y)=\frac{p_{\theta}(y|z)p_{\theta}(z|y)}{\int_{z}p_{\theta}(y|z)p_{\theta}(z|y)}=p_{\theta}(z|y)\frac{p_{\theta}(y|z)}{\mathbb{E}_{p_{\theta}(z|y)}[p_{\theta}(y|z)]}\ .\ \square$$

We note, it is straightforward to include β to show

$$q_{\phi}(z|y)=p_{\theta}(z|y){\frac{p_{\theta}(y|z)^{1/\beta}}{\mathbb{E}_{p_{\theta}(z|y)}[p_{\theta}(y|z)^{1/\beta}]}}\ .$$

## B Justifying The Latent Prior In Variational Classification

Choosing Gaussian class priors in variational classification can be interpreted in two ways:

Well-specified generative model: Assume data x∈X is generated from the hierarchical model: y → z → x, where p(y) is categorical; p(z|y) are analytically known distributions, e.g. N (z; µy, Σy); the dimensionality of z is not large; and x = h(z) for an arbitrary invertible function h : *Z → X* (if X is of higher dimension than Z, assume h maps one-to-one to a manifold in X). Accordingly, p(x) is a mixture of unknown distributions.
If {pθ(z|y)}θ includes the true distribution p(z|y), variational classification effectively aims to invert h and learn the parameters of the true generative model. In practice, the model parameters and $h^{-1}$ may only be identifiable up to some equivalence, but by reflecting the true latent variables, the learned latent variables should be semantically meaningful.

Misspecified model: Assume data is generated as above, but with z having a large, potentially uncountable, dimension with complex dependencies, e.g. details of every blade of grass or strand of hair in an image. In general, it is impossible to learn all such latent variables with a lower dimensional model. The latent variables of a VC might learn a complex function of multiple true latent variables.

The first scenario is ideal since the model might learn disentangled, semantically meaningful features of the data. However, it requires distributions to be well-specified and a low number of true latent variables. For natural data with many latent variables, the second case seems more plausible, but choosing pθ(z|y) to be Gaussian may nevertheless be justifiable by the Central Limit Theorem.

## C Calibration Metrics

One way to measure if a model is calibrated is to compute the expected difference between the confidence and expected accuracy of a model:

$$\mathbb{E}_{P({\hat{y}}|x)}\left[\mathbb{P}({\hat{y}}=y\,|\,P({\hat{y}}|x)=p)-p\right]\qquad(12)$$

This is known as the expected calibration error (ECE) (Naeini et al., 2015). Practically, ECE is estimated by sorting the predictions by their confidence scores, partitioning the predictions into M equally spaced bins ($B_1,\ldots,B_M$) and taking the weighted average of the difference between the average accuracy and average confidence of the bins. In our experiments we use 20 equally spaced bins.
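The binned estimation procedure described above can be sketched directly (a straightforward reading of the procedure; variable names are our own):

```python
import numpy as np

def expected_calibration_error(confs, correct, n_bins=20):
    """Binned ECE: weighted average, over equally spaced confidence bins,
    of |accuracy(bin) - mean confidence(bin)|.

    confs: (n,) predicted-class probabilities; correct: (n,) 0/1 outcomes.
    """
    confs = np.asarray(confs, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # left-open bins (lo, hi], with the first bin closed at 0
        in_bin = (confs > lo) & (confs <= hi) if lo > 0 else (confs >= lo) & (confs <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confs[in_bin].mean())
    return ece
```

For example, a model that predicts with confidence 1.0 but is right half the time has an ECE of 0.5, while a model whose confidences match its per-bin accuracies has an ECE of 0.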
$$\mathrm{ECE}=\sum_{m=1}^{M}{\frac{|B_{m}|}{n}}\,|\mathrm{acc}(B_{m})-\mathrm{conf}(B_{m})|\qquad(13)$$

![15_image_1.png](15_image_1.png)

## D OOD Detection

![15_image_0.png](15_image_0.png)

Figure 6: t-SNE plots of the feature space for a classifier trained on CIFAR-10. (l) Trained using CE. (r) Trained using VC. We posit that, similar to CE, the VC model is unable to meaningfully represent data from an entirely different distribution.

## E Semantics Of The Latent Space

To try to understand the semantics captured in the latent space, we use a pre-trained MNIST model on the Ambiguous *MNIST* dataset (Mukhoti et al., 2021). We interpolate between ambiguous 7's that are mapped close to the Gaussian clusters of the classes "1" and "2". It can be observed that, traversing from the mean of the "7" Gaussian to that of the "1" class, the ambiguous 7's begin to look more like "1"s.

![16_image_0.png](16_image_0.png)

Figure 7: Interpolating in the latent space: Ambiguous MNIST when mapped on the latent space. (l) VC, (r) CE

## F Classification Under Domain Shift

A comparison of accuracy between the VC and CE models under 16 different synthetic domain shifts. We find that VC performs comparably to CE.

![16_image_1.png](16_image_1.png)

![16_image_2.png](16_image_2.png)

Figure 8: Classification accuracy under distributional shift: *(left)* CIFAR-10-C *(middle)* CIFAR-100-C *(right)* Tiny-Imagenet-C
Review 1: Summary: In this paper, the authors propose to enhance softmax classification done via deterministic neural networks with a variational approach to the latent space. By introducing a probabilistic model of the latent space, the authors propose to regularize the softmax classification model in order to align the empirical and expected latent distribution; the resulting objective is termed "Variational Classification." This approach is aimed specifically at improving model calibration, reducing data complexity, increasing distributional robustness, and increasing adversarial robustness when compared with deterministic neural networks. Though I have evaluated the authors' high-level formalization, I was unable to perform a thorough assessment of proofs in the Appendix, and may have missed details therein. Thus, the critical components of my review focus primarily on two aspects of the paper: (1) the related work and (2) the empirical evaluation.

Strengths and Weaknesses:
* The paper studies the critical problem of improving the reliability of deep neural networks.
* The communication of the paper, its central claims, and methods is clear and concise with high-quality expositional figures.
* Though lacking in some ways, the experimental design is sufficient to corroborate the potential of the method. I find the experiments on generalization under distribution shift particularly compelling.

Requested Changes:
* [Critical] It seems straight-forward to me that the hypothetical improvements that this method seeks to achieve are also hypothetical improvements of other probabilistic approaches to machine learning, namely Bayesian neural networks (BNNs). It has been shown in various works that Bayesian neural networks are more resilient to natural data distribution shifts as well as to adversarial perturbations.
Moreover, their principled uncertainty serves as a key component of active learning approaches, a tried-and-true method for reducing sample complexity in supervised learning. Yet, the authors do not discuss any of these works or even that this line of literature exists. I find this a bit peculiar, and would like for the authors to justify this choice. It is clear the method they propose scales much better than faithful approximate posterior inference; however, presenting no such discussion feels like a significant omission. * [Non-Critical] Perhaps asking for comparison with other probabilistic neural networks is misguided; however, I would also like to see a simple comparison with NoisyAdam or MC Dropout to understand how this method performs in comparison to other probabilistic approaches. I would like to see the empirical comparison across all of the hypotheses listed by the authors given that these are areas in which BNNs purport to give some advantages. * [Non-critical] In the authors' evaluation, I find it a bit strange that the GM method performs worse in terms of calibration across almost all datasets. Could the authors discuss why this might be the case? Should their method not improve the model’s calibration? * [Critical] The experiments on adversarial robustness are far too scant to be useful to any reader. They are also too rudimentary to serve as evidence for the claim of (H3). For example, the authors have provided an epsilon, but no indication of which norm/metric this epsilon is with respect to (I assume l2 given the magnitude of the values), but then again, l_\infty is a much more common choice. Moreover, the authors do not evaluate with any adaptive attack despite this being a clear standard in the literature. I understand that perhaps FGSM is an initial indicator of improved adversarial robustness, but it is hardly practically significant.
The authors need to either considerably increase the rigor, detail, and experimental rationale of this section or entirely remove it, and remove any claim of (H3), because as it stands, it is not supported by theory or by experimental evidence. Minor weaknesses * Please label each of the subplots of Figure 3 so that it is easily digestible. Broader Impact Concerns: I have no specific broader impact concerns for this work. ================================================== Review 2: Summary: The paper proposes a new method to train classification models. The method is motivated by ideas from variational inference: treating the pre-softmax layer of the classification neural network as an unobserved random variable. A new VC objective is proposed to ensure that the class-conditional distribution of the aforementioned latent variable does not collapse during training. Empirical evaluation demonstrates that the proposed objective results in better adversarial robustness and improves performance in the low-data regime. Strengths and Weaknesses: Strengths: * Interesting analysis regarding the anticipated and empirical class-conditional latent distribution * Extensive empirical evaluation of the method Weaknesses: * Motivation and certain assumptions were not clear to me (see questions below) * It is not shown how the proposed approach compares to the baseline in terms of the number of parameters and training time. Requested Changes: I would like the authors to clarify the following questions: * What is the motivation to say that the variational posterior is equal to $p_{\theta}(z|x)$ (section 3.1.1)? This removes the KL term from the ELBO. As a result, we only have the MLE objective. How will the analysis of section 3.3 change if this assumption is not made? Do we still need to align the anticipated and empirical distributions in this case?
* $q_{\phi}$ does not seem to be used for the variational distribution from section 3.3 onwards, but rather to distinguish the anticipated and empirical class-conditional distributions. Is my understanding correct? * The title of Figure 1 refers to $p_{\theta}(z|y)$; is it the same as $q_{\phi}(z|y)$ from equation 9? Other comments: * In my opinion, the introduction is too long and contains unnecessary details, which belong to section 3. * Introduction, paragraph 2: ‘different yet confident predictions elsewhere, which are thus uncertain’. My understanding is that confident and uncertain are two opposite things; how can predictions be both? * Algorithm 1 does not contain $\beta$. * Figure 5 requires a more detailed explanation. What does the green line represent? How were the locations of the blue points chosen? Broader Impact Concerns: NA ================================================== Review 3: Summary: This paper proposes a variational classifier to improve uncertainty estimation and robustness to perturbations via a probabilistic approach. The method is similar to how a variational autoencoder improves the vanilla autoencoder, and numerical results show the advantage of the proposed method over the benchmark algorithms. Strengths and Weaknesses: Strengths: [1] The proposed idea is intuitively inspiring, and the authors provide sufficient math in the main paper to clearly illustrate their idea. [2] The numerical results, except for adversarial attack, look promising. Weaknesses: One issue is that the performance in adversarial robustness seems to have no great improvement. The adversarial attack used in this paper is a white-box attack, i.e., the attacker knows the model parameters. To overcome adversarial attacks, some specific properties, e.g., how far the samples are from the decision boundary, should be addressed.
Since the proposed method is tailored for clean generalization but not adversarial robustness, the current presentation in this paper is not sufficient to claim that the proposed method improves adversarial robustness. The out-of-distribution robustness is already a good contribution, so I would suggest the authors try a black-box adversarial attack, or just drop the adversarial robustness discussion. Another issue with this paper is the writing, which can be improved. In particular, some sentences are not clear, and others have no clear connection to the rest. [1] Abstract: "Our approach offers a novel probabilistic perspective on the highly familiar softmax classification model, to which it relates similarly to how variational and traditional autoencoders relate". The sentence is long, and it is hard to capture its meaning. Please consider simplifying it. [2] Abstract: "inherent inconsistency within softmax classification" is not clear. Elaborate a little bit more on the details. [3] Throughout the paper, sometimes the authors use "autoencoder", but sometimes it becomes "auto-encoder". Please make the usage consistent. [4] Page 2, "it is intractable to learn the parameters of this model by maximising the conditional log likelihood". What does the word "parameters" refer to? Do you mean the neural network weights? [5] Page 2, "Instead, a lower bound can be maximised, comparable to the evidence lower bound (ELBO), as used to train a variational auto-encoder (VAE)". The sentence is a bit unclear. My understanding of this sentence is that there is a lower bound term used in the variational autoencoder, and the authors consider maximizing it. On the other hand, there is another bound called the ELBO, and it is used in some other tasks. In such a case, what is the relationship between the ELBO and the one used in common neural network training, as described in the paragraph above this sentence? Is the commonly used method called SCE?
[6] Page 2, "standard softmax cross-entropy (SCE) objective is such a lower bound for specific choices". What does "such" refer to? In addition, for "For this correspondence," what is the logical relationship between the sentences before and after it? [7] Page 2, "that two versions of the class-conditional latent distributions p(z|y) can be described". Why do we consider two versions? Could you provide one more sentence in this paragraph to quickly mention how to use them? Besides, the writing of the whole paragraph needs improvement. The sentence in the middle is short but has four commas, which break the logic of the sentence. [8] Page 2, "can differ materially". How should one understand the word "materially"? Could you use a more accurate word? [9] Page 2, "In particular, the SCE objective can be optimal if empirical class-conditional latent distributions “collapse” to distinct points rather than fit the anticipated distributions." My understanding is that if something is "optimal", then it will have good properties. But from this sentence, the "optimality" is not something we expect. Do you mean that the empirical loss achieves the minimal value and overfits the training data? [10] Page 2, "we take an adversarial approach to implicitly learn the required log probability ratios as an auxiliary task." Could you add one sentence here to briefly introduce what the adversarial approach is? The "adversarial approach" has appeared twice so far, but there is no explanation of it yet. I would suggest adding a short description like "similar to generative adversarial network (GAN)". [11] Page 2, "The VC framework ..., which the softmax (or more generally output) layer “flips” by Bayes’ rule". Check the grammar of "which". [12] Page 5, "fill out". Please be more precise about how to understand it. [13] In the experiment of adversarial robustness, the definition of FGSM is $\epsilon * sign(L)$. Should it be $\epsilon*sign(grad(L))$?
Requested Changes: Please consider my comments towards the adversarial robustness, and also revise the writing of this paper. Readers with statistics/probability/GAN background may understand this paper, but it can be potentially hard for other readers to comprehend because the language is not clear. Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Accept as is Comment: This is a strong paper with a new and fresh framework for classification via latent variable models. The basic idea (generalizing standard softmax classifiers to a latent variable model that can be trained with an ELBO-like approach) is simple but interesting, and leads to some nice properties, like the calibration finding the authors show. This is solid research that is worth accepting. All reviewers agree that the work is genuinely interesting. The main questions raised (resolved by the authors during the discussion phase) relate to the exact positioning of the work, and the reviewers were convinced. Beyond this, there were requests to improve the writing, which the authors have done. ==================================================
# Beyond Boundaries: A Novel Data-Augmentation Discourse For Open Domain Generalization

Shirsha Bose *shirshabosecs@gmail.com*
Technical University of Munich

Ankit Jha *ankitjha16@gmail.com*
Indian Institute of Technology Bombay

Hitesh Kandala *khitesh2000@gmail.com*
Indian Institute of Technology Bombay

Biplab Banerjee *getbiplab@gmail.com*
Indian Institute of Technology Bombay

Reviewed on OpenReview: *https://openreview.net/forum?id=jpZmhiIys1*

## Abstract

The problem of Open Domain Generalization (ODG) is multifaceted, encompassing shifts in domains and labels across all source and target domains. Existing approaches have encountered challenges such as style bias towards training domains, insufficient feature-space disentanglement to highlight semantic features, and a lack of discriminativeness in the latent space. Additionally, they rely on a confidence-based target outlier detection approach, which can lead to misclassifications when target open samples visually align with the source domain data. In response to these challenges, we present a solution named ODG-NET. We aim to create a direct open-set classifier within a *discriminative*, unbiased, and *disentangled* semantic embedding space. To enrich data density and diversity, we introduce a generative augmentation framework that produces *style-interpolated* novel domains for closed-set images and novel pseudo-open images by interpolating the contents of paired training images. Our augmentation strategy skillfully utilizes *disentangled style and content information* to synthesize images effectively. Furthermore, we tackle the issue of style bias by representing all images in relation to all source domain properties, which effectively accentuates complementary visual features. Consequently, we train a multi-class semantic object classifier, incorporating both closed and open class classification capabilities, along with a style classifier to identify style primitives.
The joint use of style and semantic classifiers facilitates the disentanglement of the latent space, thereby enhancing the generalization performance of the semantic classifier. To ensure discriminativeness in both closed and open spaces, we optimize the semantic feature space using novel metric losses. The experimental results on six benchmark datasets convincingly demonstrate that ODG-NET surpasses the state-of-the-art by an impressive margin of 1 − 4% in both open and closed-set DG scenarios.

## 1 Introduction

Domain Generalization (DG), as explored in the work of Zhou et al. (2022), aims to establish a shared embedding space derived from labeled source domains, which can then be applied to an unseen target domain. However, current DG methods are primarily tailored to closed-set scenarios, such as those seen in Zhou et al. (2020b) and Zhou et al. (2020a), where both source and target domains possess identical label sets. Nonetheless, this approach might not always be viable in dynamic real-world contexts, as exemplified by robotics, where a navigating robot might encounter categories that are either common or unique to its surroundings Zhao & Shen (2022). This realization underscores the necessity of tackling the more pragmatic and intricate realm of Open Domain Generalization (ODG) Shu et al. (2021), which revolves around training on labeled source domains housing both shared and domain-specific categories.

Figure 1: The working principle of ODG-NET. Given a number of source domains with shared and private classes for each domain, our algorithm follows three simple stages to obtain an effective semantic open-set object classifier.

In the context of ODG, the target domain consists of samples belonging to either familiar classes or novel classes exclusive to that particular domain, thereby introducing several noteworthy challenges.
Firstly, a substantial data imbalance arises due to the uneven representation of known classes within the source domains. Secondly, the task of establishing an embedding space that is both domain-agnostic and discriminatory becomes formidable in light of unregulated shifts in both domains and labels. Lastly, the absence of prior knowledge concerning the open space in the target domain adds to the complexity. While one potential approach to addressing ODG could involve the fusion of a pre-existing closed-set DG technique with a readily available open-set recognition (OSR) method, such as Openmax Bendale & Boult (2016b), this strategy may not yield optimal results. The DG technique could potentially be influenced by the domain-specific classes found in ODG, which are subject to significant under-representation. Surprisingly, Open Domain Generalization (ODG) has garnered little attention in the Domain Generalization (DG) literature, with DAML Shu et al. (2021) being the sole model explicitly designed for ODG. However, our extensive investigation revealed three significant limitations in DAML's approach. Initially, DAML augments source domains via multi-domain mix-up features, using a Dirichlet-distribution based weight scheme, where identical weight configurations correspond to labels for generated features. However, merging content and style details in the raw latent features might result in semantically inconsistent feature-label pairs. This was evidenced in a PACS dataset Li et al. (2017) experiment, where classifying the synthesized features yielded notably low accuracy. Additionally, DAML overlooks disentangling content features from domain attributes, potentially introducing biases towards specific styles, hindering adaptability across diverse visual domains. 
Furthermore, DAML's outlier rejection relies on thresholding source domain classifier responses, which can produce erroneous results, especially when target open samples visually align with certain source domain classes. These limitations underscore the imperative need for further exploration and refinement of ODG techniques, thereby unlocking their full potential and meaningfully addressing the inherent complexities. To overcome these challenges, we introduce an innovative approach for training a robust generalized semantic open-set classifier. This classifier can distinguish known and potential new samples in an unbiased and disentangled embedding space. However, we face a hurdle as we lack representative open-space samples during training. To address this, we propose a novel technique called "sample hallucination," which generates these essential samples, facilitating comprehensive classifier training. Additionally, we tackle the issue of class imbalance in ODG by diversifying the appearances of training classes. Our method emphasizes the importance of representing images in a feature space unaffected by training domains. To achieve this, we advocate separating semantic features from style elements, revealing each image's true content. Ultimately, we ensure that the semantic space effectively discriminates between classes, enabling our classifier to make precise distinctions across various categories. These innovations pave the way for more effective and accurate open domain generalization.

Our proposed ODG-NET: In this paper, we introduce our novel model, ODG-NET (depicted in Fig. 1), which addresses the aforementioned concerns comprehensively. ODG-NET consists of three pivotal modules, each devoted to addressing model bias, feature disentanglement, and the creation of a discriminative semantic feature space. At the core of our research is the aim to enrich the existing source domains using two types of synthesized images generated by a novel conditional GAN.
These image types serve specific purposes: expanding the diversity of closed-set classes and creating representative pseudo-open images. The first image type, called domain or style mix-up images, involves a sophisticated interpolation of style properties from source domains using a Dirichlet distribution. This approach introduces new domains, diversifying the style of source images while preserving their inherent semantic object characteristics. The second type, known as pseudo open-space images, results from skillful interpolation of both domain and class identifiers from source domain images, achieved through Dirichlet sampling. To ensure the creation of diverse and meaningful samples, we have introduced diversification regularization to prevent potential mode collapse during the generation of domain- or label-interpolated samples. Additionally, a structural cycle consistency mechanism has been implemented to maintain the structural integrity of the generated images. Our approach tackles a range of formidable challenges, spanning from class imbalance and limited style diversity to the absence of an open-space prior. A unique aspect of our methodology lies in its capacity to unify label and domain mix-up concepts while offering customization in conditioning. This surpasses existing augmentation methods, which are limited to style or random image mix-ups Mancini et al. (2020a); Zhou et al. (2021). ODG-NET strives for a latent embedding space devoid of source-domain bias. We achieve this through an innovative approach involving the training of domain-specific classifiers, which adeptly capture domain-specific features from each source domain. Consequently, each image is represented as a concatenation of features from all domain-specific models, creating a comprehensive and impartial embedding.
To disentangle domain-specific attributes from semantic object features in latent representations, we train two attention-driven classifiers: a domain classifier for domain label identification and an object classifier for recognizing class labels from the augmented domain set. This enriches our model's grasp of object semantics while significantly mitigating the influence of domain-specific artifacts. For a highly discriminative semantic feature space, we introduce a contrastive loss among known classes, accentuating differences between various categories. Additionally, our entropy minimization objective strategically pushes pseudo outliers away from the known-class boundary, bolstering the model's robustness.

**We summarize our major contributions as:**

- In this paper, we introduce ODG-NET, an end-to-end network that tackles the challenging ODG problem by jointly considering closed and open space domain augmentation, feature disentanglement, and semantic feature-space optimization.
- To synthesize augmented images that are distinct from the source domains, we propose a novel conditional GAN with a cycle consistency constraint and an anti-mode-collapse regularizer that interpolates domain and category labels. We also adopt a classification-based approach for feature disentanglement. Finally, we ensure the separability of the semantic feature space for closed and open classes through novel metric objectives.
- We evaluate ODG-NET on six benchmark datasets in both open and closed DG settings. Our experiments demonstrate that ODG-NET consistently outperforms the literature. For instance, on ODG for Multi-dataset Shu et al. (2021) and on closed DG for DomainNet Peng et al. (2019), ODG-NET outperforms the previous state-of-the-art by approximately 3%.

## 2 Related Works

Open-set learning and open-set domain adaptation: The Open-set Recognition (OSR) Bendale & Boult (2016c); Kong & Ramanan (2021); Pal et al. (2023b); Vaze et al.
(2021) challenge involves effectively identifying novel, unknown-class samples during testing, leveraging training samples from known closed-set classes. However, OSR doesn't account for any differences in distributions between the training and test sets. Another relevant problem is Open-set Domain Adaptation (OSDA) Panareda Busto & Gall (2017); Saito et al. (2018); Kundu et al. (2020); Bucci et al. (2020), which addresses the scenario of a labeled source domain and an unlabeled target domain. The target domain contains unlabeled samples from the same semantic classes as the source domain, along with novel-class samples unique to the target domain. OSDA operates in a transductive manner, where both source and target domains are simultaneously employed during training. In contrast, Open Domain Generalization (ODG) sets itself apart from OSR and OSDA. In ODG, the target domain remains unseen during training, making it distinct. Additionally, the multiple source domains consist of a combination of shared and private categories. This diversity of categories, including shared and domain-specific ones, renders ODG even more intricate than the other tasks. (Open) DG: DG refers to the problem of learning a supervised learning model that is generalizable across any target distribution without any prior knowledge. The initial studies in closed-set DG focused on domain adaptation (DA) Li et al. (2020); Wang et al. (2021); Li et al. (2021a) due to the disparity in domain distributions. Several DG methods have since been developed, such as self-supervised learning Carlucci et al. (2019), ensemble learning Xu et al. (2014), and meta-learning Patricia & Caputo (2014); Wang et al. (2020b); Li et al. (2019b; 2018a; 2019a); Huang et al. (2020). To address the domain disparity, the concept of domain augmentation Li et al. (2021c); Kang et al. (2022); Zhou et al. (2020b; 2021); Zhang et al. 
(2022) was introduced, which involves generating pseudo-domains and adding them to the available pool of domains. Subsequently, the notion of ODG was introduced in Shu et al. (2021), which is based on domain-augmented meta-learning. Zhu & Li (2021) and Yang et al. (2022) further extended the idea of multi-source ODG to the single-source ODG problem. See Zhou et al. (2022) for more discussions on DG. *Our proposed ODG-NET represents a significant departure from DAML Shu et al. (2021). Unlike their ad-hoc feature-level mix-up strategy, we introduce a more robust augmentation technique that leverages generative modeling to seamlessly synthesize pseudo-open and closed-set image samples. Additionally, we take a direct approach to learning an open-set classifier in a meaningful and optimized semantic space, in contrast to the source classifier's confidence-driven inference used in Shu et al. (2021).* As a result, ODG-NET is better suited to handling open samples of different granularities. Augmentation in DG: Data augmentation is a crucial technique in DG, and it can be implemented using various methods such as variational autoencoders, GANs, and mixing strategies Goodfellow et al. (2020); Kingma & Welling (2013); Zhang et al. (2017). For instance, Rahman *et al.* Rahman et al. (2019) used ComboGAN to generate new data and optimized ad hoc domain divergence measures to learn a domain-generic space. Zhou *et al.* Zhou et al. (2020b) combined GAN-based image generation with optimal transport to synthesize images different from the source data. Gong *et al.* Gong et al. (2019) treated generation as an image-to-image translation process and extracted intermediate images given an image pair. Similarly, Li et al. (2021b) used adversarial training to generate domains instead of samples. Mix-up, on the other hand, generates new data by interpolating between a pair of samples and their labels. Recently, mix-up techniques Yun et al. (2019); Mancini et al. (2020a); Zhou et al.
(2021) have become popular in the DG literature, applied to either the image or feature space. The augmentation approach used by ODG-NET stands out from the existing literature by going beyond simple style or image mix-up. Our approach ensures that the object properties of images remain intact when using style mix-up, and we also have control over label mix-up to generate pseudo-open samples that can scatter the open space with varying levels of similarity to the source domains. Among the existing augmentation strategies, Zhou et al. (2020b) and Gong et al. (2019) are the closest to our approach as they both use conditional GANs. However, there are several key differences between our method and theirs: (a) Gong et al. (2019) requires paired training data to sample intermediate pseudo-stylized images, whereas we use conditional generation without the need for paired data; (b) Zhou et al. (2020b) uses extrapolation for domains, which is ill-posed, while we use Dirichlet distributions to interpolate domains and classes; and (c) while both Zhou et al. (2020b) and Gong et al. (2019) use style mix-up for closed-set data, we generate both closed and pseudo-open samples judiciously. Disentangled representation learning. Disentangled feature learning refers to the process of modeling distinct and explanatory data variation factors. As per Dittadi et al. (2020), disentanglement can aid in out-of-distribution tasks. Previous efforts have focused on disentangling semantic and style latent variables in the original feature space using encoder-decoder models Wang et al. (2022); Cai et al. (2019), causality Ouyang et al. (2021), or in the Fourier space Wang et al. (2022). These models are complex and require sophisticated knowledge to improve the feature learning of the models. *In contrast, ODG-NET proposes to use simple-to-implement yet effective, attention-based classifiers to separate the style and semantic primitives from the latent visual representations.*
## 3 Problem Definition And Proposed Methodology

In the context of ODG, we have access to multiple source domains denoted as $\mathcal{D} = \{\mathcal{D}_1, \mathcal{D}_2, \cdots, \mathcal{D}_S\}$. Each of these domains has a different distribution and contains a combination of domain-specific and shared categories. During training, we use labeled samples from each domain $\mathcal{D}_s = \{(x_s^i, y_s^i)\}_{i=1}^{n_s}$, where $y_s \in \mathcal{Y}_s$ is the label for $x_s \in \mathcal{X}_s$. The total number of classes in $\mathcal{D}$ is denoted by $C$. The target domain $\mathcal{D}_T = \{x_t^j\}_{j=1}^{n_t}$ has a distribution that is different from that of $\mathcal{D}$. It consists of unlabeled samples that belong to one of the source classes present in $\mathcal{D}$ or to novel classes that were not seen during training. The objective is to model a common classifier that can reject outliers while properly classifying samples from the known classes.

Figure 2: The architecture of ODG-NET, consisting of the embedding networks $(F_{im}, F_v, F_y, F_\eta)$, the cGAN consisting of $(F_G, F_{disc})$, the local domain-specific classifiers $\{F_l^s = (F_{ls}^b, F_{ls}^c)\}_{s=1}^{S}$, and the global domain and semantic classifiers $(F_d, F_o)$ with corresponding attention blocks $A_d$ and $A_o$, respectively. Colors indicate the flow of information for different data items.

In our formulation, each domain/style in $\mathcal{D}$ is represented using an $S$-dimensional one-hot vector $v_d$. In contrast, a pseudo-domain (synthesized style) is represented by $\hat{v}_d$, which is sampled from a Dirichlet distribution with parameter $\alpha$ and has the same length as $v_d$. For instance, if $S = 3$, a source domain can be represented as a three-dimensional one-hot vector (e.g., $[0, 0, 1]$), while a $\hat{v}_d$ could be $[0.2, 0.3, 0.5]$. Similarly, we denote the label $y$ as a $C$-dimensional one-hot vector. On the other hand, an interpolated label space is represented by $\hat{y}$, which is sampled in the same way as $\hat{v}_d$. A real image-label pair from $\mathcal{D}$ is denoted as $(x_r, y)$.
In contrast, a cGAN-synthesized image is denoted by $(x_f^{cs}, y)$ or $(x_f^{os}, \hat{y})$, depending on whether it represents a style-interpolated closed-set image with label $y$ or a jointly style- and label-interpolated pseudo-open image with label $\hat{y}$, respectively. Finally, to aid in open-set classification, we introduce a label space $\tilde{y} \in \mathbb{R}^{C+1}$. The first $C$ indices are reserved for closed-class samples, while the $(C+1)$-th index is used for pseudo-outlier samples.

## 3.1 Architecture And Training Overview For ODG-NET

Our objective is to craft a direct open-set classifier operating within a disentangled, discriminative, and unbiased semantic feature space. Additionally, we introduce a method for generating training samples for the open space by proposing the creation of pseudo-open samples through a generative module. To accomplish our aims, we introduce ODG-NET, composed of four modules (illustrated in Fig. 2). First and foremost, ODG-NET employs a generative augmentation strategy, utilizing a **conditional GAN** equipped with a U-Net-based generator denoted as $F_G$ and a binary discriminator referred to as $F_{disc}$. We condition $F_G$ on four variables: the domain label $v_d/\hat{v}_d$, the class label $y/\hat{y}$, an input image $x_r/x_f^{cs}/x_f^{os}$, and a noise tensor $\eta_1/\eta_2/\eta_3$ sampled from predefined distributions. To ensure the proper combination of these conditioning variables, we propose the use of separate embedding networks $(F_{im}, F_v, F_y, F_\eta)$ to encode the image, domain label, class label, and noise into meaningful latent representations. We train $F_G$ to generate two types of images: (i) $(x_f^{cs}, y)$ when conditioned on $(x_r, \hat{v}_d, y, \eta_1)$, where $x_f^{cs}$ retains the class label $y$ of $x_r$ while the style changes according to $\hat{v}_d$, and (ii) $(x_f^{os}, \hat{y})$ when conditioned on $(x_r, v_d/\hat{v}_d, \hat{y}, \eta_2)$, thereby modifying the semantic and stylistic characteristics of $x_r$ according to $\hat{y}$ and $v_d/\hat{v}_d$ in $x_f^{os}$.
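To make the conditioning concrete, the one-hot and Dirichlet-sampled vectors described above can be sketched as follows. This is a minimal illustration only: the concentration parameter $\alpha = 0.5$, the dimensions, and the use of NumPy are our assumptions, not values fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
S, C, alpha = 3, 7, 0.5  # S source domains, C classes; alpha is a hypothetical choice

v_d = np.eye(S)[2]  # source-domain one-hot vector, e.g. [0, 0, 1]
y = np.eye(C)[4]    # class one-hot vector

# Pseudo-domain (style interpolation) and interpolated label for a pseudo-open sample.
v_hat = rng.dirichlet(alpha * np.ones(S))
y_hat = rng.dirichlet(alpha * np.ones(C))

# Both live on the probability simplex: non-negative entries summing to one.
assert np.isclose(v_hat.sum(), 1.0) and np.isclose(y_hat.sum(), 1.0)
```

Each Dirichlet draw yields convex-combination weights over the source domains (or classes), which is exactly why a sample like [0.2, 0.3, 0.5] can act as a "mixed" style or label identifier.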
We employ a standard min-max formulation to train the conditional GAN and introduce a regularizer to ensure that the generated samples do not closely resemble the data from $\mathcal{D}$ (as expressed in Eq. 1). Furthermore, we introduce a cycle consistency loss to maintain the semantic consistency of the generated images (as indicated in Eq. 2). To achieve an unbiased latent feature space, we propose representing all images with respect to the feature space of all source domains. We introduce $S$ **local source-domain specific networks**, which comprise a feature backbone and a classification module, denoted as $F_l^s = (F_{ls}^b, F_{ls}^c)$, and are trained on $\mathcal{D}_s$. We aggregate feature responses from all $F_{ls}^b$ to obtain the latent representation $Fe_l(x) = [F_{l1}^b; F_{l2}^b; \cdots; F_{lS}^b]$ for a given image $x$ (as described in Eq. 3). To disentangle domain-dependent properties from semantic object features within $Fe_l(x)$, we introduce **global domain and semantic object classifiers**, denoted as $F_d$ with $S$ output nodes and $F_o$ with $C+1$ output nodes (expressed in Eq. 4-5), which are shared across domains. We employ spectral-spatial self-attention modules $A_d$ and $A_o$ to highlight domain and semantic object features from $Fe_l$, resulting in $Fe_d$ and $Fe_o$. We aim to ensure the discriminative quality of the semantic embedding space (outputs of the feature encoder of $F_o$, denoted as $F_o^b$) through novel metric losses, which encourage the separation of all closed and pseudo-open class samples (as indicated in Eq. 6-7). In the following sections, we delve into the details of the proposed loss functions.

## 3.2 Loss Functions, Training, And Inference

Regularized cGAN objectives with structural cycle consistency for image synthesis: As previously mentioned, we employ a cGAN model to generate potential open-space images. Within this framework, $F_G$ and $F_{disc}$ engage in a min-max adversarial game.
The primary objective of $\mathcal{F}_{disc}$ is to accurately discern between real and synthesized images, while $\mathcal{F}_G$ endeavors to deceive $\mathcal{F}_{disc}$. To prevent the generated images from being too similar to those in $\mathcal{D}$, we introduce a regularization term denoted as $W$. This term penalizes scenarios in which the real and synthesized images, represented by their respective features $F_{el}(x_r)$ and $F_{el}(x_f^{cs/os})$, become indistinguishable for slightly different values of $(v_d, \hat{v}_d)$ or $(y, \hat{y})$. Here, $\delta$ denotes the cosine similarity, and $\epsilon$ is a small constant. Essentially, even if $\delta(v_d, \hat{v}_d)$ or $\delta(y, \hat{y})$ tends toward 1, $W$ enforces $\delta(F_{el}(x_r), F_{el}(x_f^{cs/os}))$ to approach 0, minimizing the loss. The comprehensive loss formulation is presented below:

$$\mathbf{L}_{\mathrm{Gan}} = \mathop{\mathbb{E}}_{P_{\mathcal{D}},\, P_{noise}^{os/cs}}\Bigg[\log(\mathcal{F}_{disc}(F_{el}(x_r))) + \log(1 - \mathcal{F}_{disc}(F_{el}(x_f^{cs/os}))) + \beta \underbrace{\frac{\delta(F_{el}(x_r), F_{el}(x_f^{os/cs})) + \epsilon}{\delta(v_d, \hat{v}_d) + \delta(y, \hat{y}) + \epsilon}}_{W}\Bigg] \tag{1}$$

In this context, $P_{\mathcal{D}}$, $P_{noise}^{cs}$, and $P_{noise}^{os}$ denote the data distribution of the source domains in $\mathcal{D}$ and the noise distributions used to generate closed-set and open-set samples, respectively. We set $P_{noise}^{cs} = \mathcal{N}(0, \mathbf{I})$ and $P_{noise}^{os} = \mathcal{N}(0, \sigma)$, where $\sigma$ is a large value. Our goal is to limit the space of generated closed-set images so that they represent similar semantic concepts, while allowing more scattering in the pseudo-open space, which aids in learning a robust open-set classifier. To maintain the structural robustness of $\mathcal{F}_G$ against variations in style or label, we reconstruct $x_r$: we feed the embeddings of the synthesized $x_f^{cs/os}$, the actual class label $y$, the domain identifier $v_d$, and a noise vector $\eta_3 \sim \mathcal{N}(0, \mathbf{I})$ to $\mathcal{F}_G$, i.e., $x_r^{rec} = \mathcal{F}_G(x_f^{cs/os}, v_d, y, \eta_3)$. By following the path $x_r \rightarrow x_f^{cs/os} \rightarrow x_r^{rec}$, we ensure that $x_f^{cs/os}$ represents a meaningful image rather than noisy data.
To compute the reconstruction loss, we use the standard $\ell_1$ distance between the original real-domain images and the remapped images $x_r^{rec}$:

$$\mathbf{L}_{\mathrm{rec}} = \mathop{\mathbb{E}}_{P_{\mathcal{D}},\, \mathcal{N}(0,\mathbf{I})}\big[\,\|x_r - x_r^{rec}\|_1\,\big] \tag{2}$$

Learning style-agnostic latent representations for all the images: Subsequently, we aim to guarantee that the latent feature embeddings of images remain impartial and not skewed toward any particular training source domain. To mitigate the risk of overfitting to any specific source domain, we represent input images based on the characteristics of all source domains, leveraging the feature representation $F_{el}(x)$. This constructs a multi-view representation space that encompasses diverse and complementary perspectives of the images. To achieve this, we train $\mathcal{F}_l^s$ on $\mathcal{D}_s$ for $s \in \{1, 2, \cdots, S\}$, using $S$ multiclass cross-entropy losses ($\mathbf{L}_{\mathbf{CE}}$) for this purpose (as shown in Eq. 3), where $P_{\mathcal{D}}^s$ denotes the data distribution of the $s$-th source domain:

$$\mathbf{L}_{\mathrm{local}} = \frac{1}{S}\sum_{s\in\{1,2,\cdots,S\}} \mathop{\mathbb{E}}_{P_{\mathcal{D}}^{s}}\big[\mathbf{L}_{\mathbf{CE}}(\mathcal{F}_{ls}^{c}(x_s), y_s)\big] \tag{3}$$

Disentangling latent features of the images to highlight the semantic contents through global classifiers: Simultaneously, our objective is to disentangle domain-specific attributes from the semantic object features within the previously derived latent representations. This disentanglement lets the object classifier focus exclusively on the semantic content, thereby enhancing its generalizability to novel target domains. In this context, the global domain classifier $\mathcal{F}_d$ is tasked with identifying domain identifiers based on the attended features, which are defined as $F_{ed}(x) = F_{el}(x) \otimes A_d + F_{el}(x)$.
This is achieved through a multiclass cross-entropy loss. It is worth noting that $\mathcal{F}_d$ implicitly ensures that $\mathcal{F}_G$ generates images in accordance with the specified conditioning domain identifiers. The corresponding loss function is:

$$\mathbf{L}_{\mathrm{dom}} = \mathop{\mathbb{E}}_{P_{\mathcal{D}},\, P_{noise}^{cs/os}}\big[\mathbf{L}_{\mathbf{CE}}(\mathcal{F}_d(\underbrace{F_{ed}(x_r^{rec})}_{\text{domain features}}), v_d) + \mathbf{L}_{\mathbf{CE}}(\mathcal{F}_d(\underbrace{F_{ed}(x_f^{cs/os})}_{\text{domain features}}), \hat{v}_d)\big] \tag{4}$$

In contrast, the open-set classifier $\mathcal{F}_o$ is trained to accurately identify all samples belonging to known classes while disregarding the generated pseudo-outliers by labeling them as $C+1$, based on $F_{eo}(x) = F_{el}(x) \otimes A_o + F_{el}(x)$. This is likewise achieved using a multiclass cross-entropy loss. Similar to $\mathcal{F}_d$, $\mathcal{F}_o$ also aids $\mathcal{F}_G$ in producing high-quality synthesized images and supplements $\mathbf{L}_{\mathrm{rec}}$:

$$\mathbf{L}_{\mathrm{class}} = \mathop{\mathbb{E}}_{P_{\mathcal{D}},\, P_{noise}^{cs/os}}\big[\mathbf{L}_{\mathbf{CE}}(\mathcal{F}_o(\underbrace{F_{eo}(x_r)}_{\text{object features}}), \tilde{y}) + \mathbf{L}_{\mathbf{CE}}(\mathcal{F}_o(\underbrace{F_{eo}(x_f^{cs/os})}_{\text{object features}}), \tilde{y})\big] \tag{5}$$

$\mathcal{F}_d$ and $\mathcal{F}_o$ operate jointly on $F_{el}(x)$ and learn the domain-specific and semantic features separately, i.e., the two networks share the task of disentangling the latent features.
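To illustrate the residual attention ($F_{ed} = F_{el} \otimes A_d + F_{el}$, and analogously $F_{eo}$) and the two classification heads on top of the disentangled features, here is a minimal NumPy sketch. The sigmoid gates standing in for $A_d$/$A_o$, the linear heads, and all dimensions are our assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy(logits, label_idx):
    """Multiclass cross-entropy L_CE for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label_idx]

D, S, C = 24, 3, 10          # feature width, #domains, #closed classes (toy sizes)
f_el = rng.normal(size=D)    # aggregated multi-source feature F_el(x)

# Attention maps A_d / A_o (here: sigmoid gates, our stand-in for the
# spectral-spatial modules) highlight domain vs. semantic components.
A_d = 1.0 / (1.0 + np.exp(-rng.normal(size=D)))
A_o = 1.0 / (1.0 + np.exp(-rng.normal(size=D)))

# Residual attention: F_ed = F_el * A_d + F_el, F_eo = F_el * A_o + F_el.
f_ed = f_el * A_d + f_el
f_eo = f_el * A_o + f_el

# Linear heads standing in for F_d (S outputs) and F_o (C+1 outputs).
W_d = rng.normal(size=(D, S))
W_o = rng.normal(size=(D, C + 1))

loss_dom   = cross_entropy(f_ed @ W_d, label_idx=0)   # true domain id v_d = 0
loss_class = cross_entropy(f_eo @ W_o, label_idx=C)   # pseudo-open label C+1
assert loss_dom > 0 and loss_class > 0
```

Since the gates lie in $(0, 1)$, the residual connection scales each feature by a factor between 1 and 2, so the attended features never suppress the original representation.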
**Algorithm 1: ODG-NET training algorithm**

Require: Initialized $\mathcal{F}_G, \mathcal{F}_{im}, \mathcal{F}_v, \mathcal{F}_y, \mathcal{F}_\eta, \mathcal{F}_d, \mathcal{F}_o, \mathcal{F}_{disc}, \{\mathcal{F}_l^s\}_{s=1}^S$
1: **while** Not Converged **do**
2: Sample a batch of $(x_r, y, v_d)$ from $\mathcal{D}$ and $\eta_1 \sim \mathcal{N}(0, \mathbf{I})$, $\eta_2 \sim \mathcal{N}(0, \sigma)$, $\eta_3 \sim \mathcal{N}(0, \mathbf{I})$; $\sigma$ is the noise variance for generating the pseudo-open samples.
3: Generate $\hat{v}_d$ and $\hat{y}$ using $\mathrm{Dirichlet}(\alpha)$; $\alpha$ is the parameter of the distribution.
4: Obtain a batch of $x_f^{cs} = \mathcal{F}_G(x_r, \hat{v}_d, y, \eta_1)$.
5: Obtain a batch of $x_f^{os} = \mathcal{F}_G(x_r, v_d/\hat{v}_d, \hat{y}, \eta_2)$.
6: Obtain $x_r^{rec} = \mathcal{F}_G(x_f^{cs/os}, v_d, y, \eta_3)$.
7: Obtain $F_{el}$, the latent representations corresponding to $(x_r, x_f^{os}, x_f^{cs}, x_r^{rec})$.
8: Obtain $F_{ed}$, the attended domain features, and $F_{eo}$, the attended semantic features, from $F_{el}$.
9: Solve: $\operatorname*{arg\,min}_{\mathcal{F}_{im},\mathcal{F}_v,\mathcal{F}_y,\mathcal{F}_\eta,\mathcal{F}_G,\{\mathcal{F}_l^s\}_{s=1}^S}\ \operatorname*{arg\,max}_{\mathcal{F}_{disc}}\ [w_{Gan}\mathbf{L}_{\mathrm{Gan}} + w_{rec}\mathbf{L}_{\mathrm{rec}} + w_{local}\mathbf{L}_{\mathrm{local}}]$.
10: Solve: $\operatorname*{arg\,min}_{\mathcal{F}_o,\mathcal{F}_d}\ [\mathbf{L}_{\mathrm{dom}} + \mathbf{L}_{\mathrm{class}} + w_f\mathbf{L}_{\mathrm{sem}}]$.
11: **end while**

Ensuring discriminativeness of the semantic feature space. The inherent diversity within multi-domain data makes it challenging for $\mathcal{F}_o$ to shape an optimized semantic feature space on top of $\mathcal{F}_o^b$. In this space, closed-set classes from the augmented domains should form distinct clusters, while pseudo-open-set samples should be effectively pushed away from the support region of the closed set. To enhance the discriminative qualities, we employ a contrastive loss for closed-set samples across the augmented domain set. Simultaneously, we minimize the entropy ($\mathcal{E}$) of the $\mathcal{F}_o$ predictions for pseudo-open samples. Minimizing the entropy effectively acts as a weighting mechanism for $\mathcal{F}_o$ when handling pseudo-open samples: it increases the posterior probability $p(y = C+1 \mid x_f^{cs/os})$ for pseudo-open samples while reducing the posteriors associated with the known-class indices $(1, \cdots, C)$.
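Step 3 of Algorithm 1 draws the interpolated domain and label vectors from a Dirichlet distribution. A minimal sketch (toy sizes and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
S, C, alpha = 3, 6, 0.5   # #source domains, #closed classes, Dirichlet parameter

# Step 3 of Algorithm 1: draw soft (interpolated) domain and label vectors.
# With a small alpha, Dirichlet(alpha) puts most mass near the simplex corners,
# so v_hat_d / y_hat are mostly dominated by one domain/class with slight mixing.
v_hat_d = rng.dirichlet(alpha=[alpha] * S)
y_hat   = rng.dirichlet(alpha=[alpha] * C)

assert np.isclose(v_hat_d.sum(), 1.0) and np.isclose(y_hat.sum(), 1.0)
```

Each draw is a point on the probability simplex, so it can directly serve as a soft conditioning label for $\mathcal{F}_G$.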
This approach contributes to the overall objective of creating a more discriminative semantic feature space, which is crucial for the successful separation of closed classes and pseudo-open samples. To implement the contrastive loss $\mathbf{L}_{\mathrm{con}}$, we select an anchor sample $x_a$, randomly obtain a positive sample $x_+$ that shares the class label of $x_a$, and draw a set of negative samples $\{x_-^m\}_{m=1}^M$, with no restrictions imposed on the styles of the samples. The goal is to maximize the cosine similarity $\delta$ for $(x_a, x_+)$ while minimizing it for all pairs $(x_a, x_-^m)$. The semantic feature space optimization loss can be expressed as follows:

$$\mathbf{L}_{\mathrm{sem}} = \mathop{\mathbb{E}}_{P_{\mathcal{D}},\, P_{noise}^{os/cs}}\big[\mathcal{E}(\mathcal{F}_o(F_{eo}(x_f^{os}))) + \mathbf{L}_{\mathrm{con}}\big] \tag{6}$$

where $\mathbf{L}_{\mathrm{con}}$ is defined as

$$\mathbf{L}_{\mathrm{con}} = -\log \frac{\exp(\delta(\mathcal{F}_o^b(x_a), \mathcal{F}_o^b(x_+)))}{\sum_{m=1}^{M}\exp(\delta(\mathcal{F}_o^b(x_a), \mathcal{F}_o^b(x_-^m)))} \tag{7}$$

Total loss and training. We follow an alternating optimization strategy in each training episode of ODG-NET, as outlined in Algorithm 1. In the vanilla training stage, we train the embedding networks $(\mathcal{F}_{im}, \mathcal{F}_v, \mathcal{F}_y, \mathcal{F}_\eta)$, the GAN modules $\mathcal{F}_G$ and $\mathcal{F}_{disc}$, and the local domain-specific networks $\{\mathcal{F}_l^s\}_{s=1}^S$, given fixed $(\mathcal{F}_d, \mathcal{F}_o)$, to produce meaningful images. The weights $w$ represent the loss contributions, and we set them to the value 1 *in all our experiments*.
$$\operatorname*{arg\,min}_{\mathcal{F}_{im},\mathcal{F}_v,\mathcal{F}_y,\mathcal{F}_\eta,\mathcal{F}_G,\{\mathcal{F}_l^s\}_{s=1}^S}\ \operatorname*{arg\,max}_{\mathcal{F}_{disc}}\ \big[w_{Gan}\mathbf{L}_{\mathrm{Gan}} + w_{rec}\mathbf{L}_{\mathrm{rec}} + w_{local}\mathbf{L}_{\mathrm{local}}\big] \tag{8}$$

Subsequently, we train $\mathcal{F}_o$ and $\mathcal{F}_d$ to obtain the optimized semantic classifier, keeping the other parameters fixed:

$$\operatorname*{arg\,min}_{\mathcal{F}_o,\mathcal{F}_d}\big[\mathbf{L}_{\mathrm{dom}} + \mathbf{L}_{\mathrm{class}} + w_f\mathbf{L}_{\mathrm{sem}}\big] \tag{9}$$

Testing. During inference, images from $\mathcal{D}_T$ are provided as input to $\{\mathcal{F}_l^s\}_{s=1}^S$, and the class label with the highest softmax probability score under $\mathcal{F}_o$ is predicted. ## 4 Experimental Evaluations Datasets. We present our results on six widely used benchmark datasets for DG. Specifically, we follow the approach of Shu et al. (2021) and use the following datasets: (1) **Office-Home** Venkateswara et al. (2017), (2) **PACS** Li et al. (2017), (3) **Multi-Dataset** Shu et al. (2021). In addition, we introduce the experimental setup of ODG for two additional DG datasets, namely **VLCS** Fang et al. (2013) and **Digits-DG** Zhou et al. (2020b), in this paper. For our closed-set DG experiments, we also utilize the large-scale **DomainNet** Peng et al. (2019). Implementation details: To ensure clarity, we consistently use a ResNet-18-based backbone He et al. (2016) for $\mathcal{F}_o$, while we adopt standard per-benchmark architectures for closed-DG tasks, following established literature Zhou et al. (2020b). Our attention modules $(A_d, A_o)$ are composed of a pair of spatial and spectral attention modules, implemented using the query-key-value processing-based approach Han et al. (2022).
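The open-set decision rule described under Testing can be sketched as follows: the $(C+1)$-way softmax of $\mathcal{F}_o$ is computed, and a sample is rejected as unknown when the extra index wins. A minimal sketch with made-up logits:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

C = 5  # closed-set classes; 0-based index C is the (C+1)-th "unknown" slot

def predict(logits_o):
    """Predict with F_o over C+1 outputs: argmax of the softmax; the last
    index means the sample is rejected as open-set/unknown."""
    p = softmax(logits_o)
    k = int(np.argmax(p))
    return "unknown" if k == C else k

print(predict(np.array([0.1, 4.0, 0.2, 0.0, 0.3, 1.0])))  # 1: confident closed class
print(predict(np.array([0.1, 0.2, 0.2, 0.0, 0.3, 3.5])))  # unknown: open slot wins
```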
In total, ODG-NET comprises 48 million parameters for $S = 3$, and the training stage requires 65 GFLOPS. Training protocol and model selection. We employ a standardized training protocol across all datasets. During each training iteration, we first optimize Eq. 8 using the Adam optimizer Kingma & Ba (2014) with a learning rate of 2e-4 and betas of (0.5, 0.99). We then minimize Eq. 9 using Adam with a learning rate of 2e-2 and betas of (0.9, 0.99). Our batch size is typically set to 64, and we train for 30 epochs, except for DomainNet, where we use a batch size of 128 and train for 40 epochs. We follow a cross-validation approach to estimate the loss weights, holding out 10% of samples per domain, and we use held-out pseudo-open-set validation samples, unseen by the model, obtained through CuMix Mancini et al. (2020b) to select the best-performing model. In this regard, the mixup samples do not have a clear semantic meaning, as they are generated by randomly combining two images; hence, they can be considered representative open samples. We further set β = 0.5 to impose $W$ as a soft constraint in $\mathbf{L}_{\mathrm{Gan}}$; a large β encourages the generation of ambiguous images in order to make them different from $\mathcal{D}$. Besides, we set α = 0.5 following Shu et al. (2021). Evaluation protocol. For ODG experiments, we report the top-1 accuracy for closed-set samples (Acc) and the H-score for closed and open samples. For closed-set DG experiments, we consider the top-1 performance. We report the mean ± std. over three runs. ## 4.1 Results On Open DG Tasks Baselines. Our baseline method, AGG, merges the source domains with their different label sets and trains a unified classifier on all the classes. In comparison, we evaluate the performance of ODG-NET against traditional DG methods that are less sensitive to label changes between different source domains, as outlined in Shu et al. (2021).
These include state-of-the-art meta-learning-based and augmentation-based DG methods Li et al. (2018a; 2019a); Mancini et al. (2020a); Zhou et al. (2021); Shi et al. (2021); Rame et al. (2022), heterogeneous DG Li et al. (2019b), and methods that produce discriminative and generic embedding spaces Wang et al. (2020b); Huang et al. (2020); Zhang et al. (2022). As per Shu et al. (2021), we employ a confidence-based classifier for our competitors: a sample is classified as unknown if the class probabilities are below a predefined threshold. Alternatively, we also compare against the only existing ODG technique, DAML Shu et al. (2021), and consider a variant that combines DAML with OpenMax Bendale & Boult (2016b)-based OSR. Finally, we report the results of two open-set recognition baselines, OpenMax Bendale & Boult (2016b) and MORGAN Pal et al. (2023b). Table 1: Comparative analysis for PACS on ODG. (In %) | Methods | Art | Sketch | Photo | Cartoon | Avg | | | | | | |----------------------------------------|---------|----------|---------|-----------|---------|-------|---------|-------|---------|-------| | Acc | H-score | Acc | H-score | Acc | H-score | Acc | H-score | Acc | H-score | | | AGG | 51.35 | 38.87 | 49.75 | 47.09 | 53.15 | 44.19 | 66.43 | 48.98 | 55.17 | 44.78 | | OpenMax Bendale & Boult (2016b) | 53.33 | 40.83 | 55.45 | 54.18 | 73.76 | 53.29 | 73.39 | 53.87 | 63.98 | 50.54 | | MORGAN Pal et al. (2023a) | 44.56 | 35.78 | 52.31 | 51.49 | 70.29 | 49.55 | 66.31 | 48.69 | 58.37 | 46.37 | | MLDG Li et al. (2018a) | 44.59 | 31.54 | 51.29 | 49.91 | 62.20 | 43.35 | 71.64 | 55.20 | 57.43 | 45.00 | | FC Li et al. (2019b) | 51.12 | 39.01 | 51.15 | 49.28 | 60.94 | 45.79 | 69.32 | 52.67 | 58.13 | 46.69 | | Epi-FCR Li et al. (2019a) | 54.16 | 41.16 | 46.35 | 46.14 | 70.03 | 48.38 | 72.00 | 58.19 | 60.64 | 48.47 | | PAR Wang et al. (2020b) | 52.97 | 39.21 | 53.62 | 52.00 | 51.86 | 36.53 | 67.77 | 52.05 | 56.56 | 44.95 | | RSC Huang et al.
(2020) | 50.47 | 38.43 | 50.17 | 44.59 | 67.53 | 49.82 | 67.51 | 47.35 | 58.92 | 45.05 | | CuMix Mancini et al. (2020a) | 53.85 | 38.67 | 37.70 | 28.71 | 65.67 | 49.28 | 74.16 | 47.53 | 57.85 | 41.05 | | Fish Shi et al. (2021) | 52.22 | 39.54 | 55.54 | 54.28 | 69.41 | 48.87 | 69.85 | 51.75 | 61.75 | 48.61 | | Disentanglement Zhang et al. (2022) | 53.18 | 38.32 | 56.39 | 53.36 | 71.99 | 47.39 | 70.54 | 50.63 | 63.02 | 47.42 | | Mixstyle Zhou et al. (2021) | 53.41 | 39.33 | 56.10 | 54.44 | 72.37 | 47.21 | 71.54 | 52.22 | 63.35 | 48.30 | | DAML Shu et al. (2021) | 54.10 | 43.02 | 58.50 | 56.73 | 75.69 | 53.29 | 73.65 | 54.47 | 65.49 | 51.88 | | DAML + OpenMax Bendale & Boult (2016a) | 52.73 | 41.28 | 57.81 | 56.82 | 74.55 | 54.55 | 75.84 | 55.96 | 65.23 | 52.15 | | ODG-NET | 57.21 | 46.19 | 61.85 | 59.25 | 78.76 | 56.67 | 77.39 | 61.11 | 68.80 | 55.81 | | Methods | Clipart | Real-World | Product | Art | Avg | | | | | | |----------------------------------------|-----------|--------------|-----------|-------|-------|-------|-------|-------|-------|-------| | AGG | 42.83 | 44.98 | 62.40 | 53.67 | 54.27 | 50.11 | 42.22 | 40.87 | 50.43 | 47.41 | | OpenMax Bendale & Boult (2016b) | 43.29 | 43.67 | 62.45 | 59.86 | 56.71 | 52.29 | 48.76 | 47.54 | 52.81 | 50.84 | | MORGAN Pal et al. (2023a) | 39.68 | 41.18 | 59.87 | 59.76 | 55.33 | 52.19 | 43.33 | 42.87 | 49.55 | 49.00 | | MLDG Li et al. (2018a) | 41.82 | 41.26 | 62.98 | 55.84 | 56.89 | 52.25 | 42.58 | 40.97 | 51.07 | 47.58 | | FC Li et al. (2019b) | 41.80 | 41.65 | 63.79 | 55.16 | 54.41 | 52.02 | 44.13 | 43.25 | 51.03 | 48.02 | | Epi-FCR Li et al. (2019a) | 37.13 | 42.05 | 62.60 | 54.73 | 54.95 | 52.68 | 46.33 | 44.46 | 50.25 | 48.48 | | PAR Wang et al. (2020b) | 41.27 | 41.77 | 65.98 | 57.60 | 55.37 | 54.13 | 42.40 | 42.62 | 51.26 | 49.03 | | RSC Huang et al. (2020) | 38.60 | 38.39 | 60.85 | 53.73 | 54.61 | 54.66 | 44.19 | 44.77 | 49.56 | 47.89 | | CuMix Mancini et al. 
(2020a) | 41.54 | 43.07 | 64.63 | 58.02 | 57.74 | 55.79 | 42.76 | 40.72 | 51.67 | 49.40 | | Fish Shi et al. (2021) | 43.76 | 44.38 | 65.25 | 58.74 | 57.86 | 57.33 | 49.78 | 46.57 | 54.16 | 51.75 | | Disentanglement Zhang et al. (2022) | 44.89 | 42.87 | 63.38 | 59.51 | 58.88 | 55.44 | 45.49 | 43.43 | 53.16 | 50.31 | | Mixstyle Zhou et al. (2021) | 42.28 | 41.15 | 61.78 | 60.23 | 59.92 | 53.97 | 50.11 | 42.78 | 53.52 | 49.53 | | DAML Shu et al. (2021) | 45.13 | 43.12 | 65.99 | 60.13 | 61.54 | 59.00 | 53.13 | 51.11 | 56.45 | 53.34 | | DAML + OpenMax Bendale & Boult (2016a) | 45.51 | 44.25 | 60.33 | 61.46 | 60.71 | 59.67 | 51.34 | 52.34 | 54.47 | 54.43 | | ODG-NET | 49.81 | 48.39 | 68.45 | 63.33 | 63.29 | 61.51 | 56.05 | 53.52 | 59.40 | 56.69 | Table 2: Comparative analysis for Office-Home on ODG. (In %) Quantitative and qualitative analysis. Tables 1-5 present a performance comparison of ODG-NET with the literature on five datasets. ODG-NET consistently outperforms the others in terms of Acc and H-score for all domain combinations and the average leave-one-out case, where all the domains except one are used during training and the model is validated on the held-out target domain. For example, on PACS, ODG-NET achieves an Acc of 68.80% and an H-score of 55.81%, beating the previous best, DAML+OpenMax, which obtained 65.23% and 52.15%, respectively. Our method outperforms Shu et al. (2021) by ≈ 3% for Office-Home and ≈ 5% for VLCS and Digits-DG in H-score. For the Multi- | Table 3: Comparative analysis for VLCS on ODG.
(In %) | | | | | | | | | | | |---------------------------------------------------------|---------|---------|------------|-------|---------|-------|---------|-------|---------|-------| | Methods | Caltech | LabelMe | Pascal VOC | Sun | AVG | | | | | | | Acc | H-score | Acc | H-score | Acc | H-Score | Acc | H-score | Acc | H-score | | | Acc AGG | 65.49 | 62.59 | 46.15 | 42.78 | 48.29 | 44.31 | 44.48 | 40.67 | 51.10 | 47.58 | | OpenMax Bendale & Boult (2016b) | 64.19 | 62.54 | 47.77 | 45.41 | 48.82 | 45.89 | 46.61 | 45.51 | 51.84 | 49.83 | | MORGAN Pal et al. (2023a) | 61.59 | 59.87 | 43.33 | 40.98 | 46.71 | 40.08 | 42.22 | 41.16 | 48.46 | 45.52 | | MLDG Li et al. (2018a) | 66.91 | 63.11 | 45.65 | 41.76 | 48.37 | 42.71 | 44.29 | 42.22 | 51.30 | 47.45 | | FC Li et al. (2019b) | 65.59 | 60.48 | 45.23 | 44.22 | 49.23 | 45.89 | 45.32 | 44.45 | 51.34 | 48.76 | | EPI-FCR Li et al. (2019a) | 66.81 | 62.98 | 47.83 | 45.33 | 50.22 | 45.56 | 46.03 | 44.32 | 52.72 | 49.55 | | PAR Wang et al. (2020b) | 65.78 | 61.25 | 46.21 | 42.54 | 50.11 | 46.33 | 45.39 | 43.65 | 51.87 | 48.44 | | RSC Huang et al. (2020) | 64.43 | 61.39 | 45.61 | 43.71 | 48.60 | 42.65 | 45.76 | 42.71 | 51.10 | 47.61 | | CuMix Mancini et al. (2020a) | 66.21 | 63.76 | 46.72 | 45.59 | 50.54 | 45.78 | 46.38 | 45.33 | 52.46 | 50.11 | | Fish Shi et al. (2021) | 65.82 | 62.29 | 47.66 | 46.52 | 50.11 | 45.53 | 45.54 | 43.33 | 52.28 | 49.41 | | Disentanglement Zhang et al. (2022) | 63.27 | 61.86 | 48.65 | 45.39 | 50.53 | 43.22 | 46.72 | 45.76 | 52.29 | 49.05 | | Mixstyle Zhou et al. (2021) | 66.11 | 63.19 | 46.72 | 46.22 | 49.75 | 46.19 | 46.62 | 46.87 | 52.30 | 50.61 | | DAML Shu et al. 
(2021) | 69.18 | 64.65 | 48.22 | 47.71 | 49.87 | 47.22 | 46.87 | 46.78 | 53.53 | 51.59 | | DAML + OpenMax Bendale & Boult (2016a) | 68.24 | 66.51 | 46.43 | 46.18 | 52.49 | 47.00 | 47.43 | 47.71 | 53.64 | 51.85 | | ODG-NET | 73.42 | 69.93 | 51.89 | 51.56 | 53.44 | 52.75 | 50.21 | 50.14 | 57.24 | 56.09 | | Table 4: Comparative analysis for Digit-DG on ODG. (In %) | | | | | | | | | | | |-------------------------------------------------------------|---------|---------|---------|-------|---------|-------|---------|-------|---------|-------| | Methods | MNIST | MNIST_M | SVHN | SYN | AVG | | | | | | | Acc | H-score | Acc | H-score | Acc | H-Score | Acc | H-score | Acc | H-score | | | AGG | 69.45 | 63.28 | 43.51 | 42.15 | 50.26 | 46.89 | 61.87 | 56.31 | 56.27 | 52.15 | | OpenMax Bendale & Boult (2016b) | 73.87 | 65.39 | 46.71 | 44.63 | 53.87 | 48.21 | 65.55 | 61.63 | 60.00 | 54.96 | | MORGAN Pal et al. (2023a) | 72.45 | 63.59 | 41.78 | 43.32 | 50.67 | 48.77 | 65.78 | 61.49 | 57.67 | 54.29 | | MLDG Li et al. (2018a) | 71.33 | 69.22 | 43.19 | 41.78 | 48.73 | 45.37 | 61.28 | 58.22 | 56.13 | 53.64 | | FC Li et al. (2019b) | 71.29 | 66.29 | 41.22 | 40.67 | 47.72 | 44.41 | 59.33 | 55.67 | 54.89 | 51.76 | | EPI-FCR Li et al. (2019a) | 72.39 | 68.33 | 45.83 | 43.34 | 51.27 | 46.88 | 62.46 | 60.23 | 57.98 | 54.69 | | PAR Wang et al. (2020b) | 70.88 | 67.47 | 44.62 | 42.65 | 49.34 | 45.72 | 60.23 | 57.11 | 56.26 | 53.23 | | RSC Huang et al. (2020) | 72.77 | 66.34 | 42.27 | 41.43 | 48.32 | 45.59 | 62.41 | 57.26 | 56.44 | 52.65 | | CuMix Mancini et al. (2020a) | 72.10 | 67.52 | 45.88 | 43.74 | 52.22 | 47.22 | 62.33 | 58.33 | 58.13 | 54.20 | | Fish Shi et al. (2021) | 74.43 | 66.89 | 42.65 | 44.45 | 52.31 | 46.71 | 64.76 | 58.73 | 58.53 | 54.19 | | Disentanglement Zhang et al. (2022) | 71.29 | 68.83 | 45.38 | 41.59 | 50.16 | 42.71 | 65.66 | 60.33 | 58.12 | 53.36 | | Mixstyle Zhou et al. 
(2021) | 76.56 | 70.56 | 47.81 | 45.66 | 54.97 | 47.24 | 61.80 | 61.96 | 60.23 | 56.35 | | DAML Shu et al. (2021) | 73.98 | 69.88 | 46.49 | 45.62 | 53.34 | 47.72 | 64.22 | 59.23 | 59.51 | 55.61 | | DAML + OpenMax Bendale & Boult (2016a) | 75.77 | 71.38 | 48.51 | 47.49 | 55.61 | 49.69 | 65.49 | 62.77 | 61.34 | 57.83 | | ODG-NET | 78.56 | 75.75 | 50.52 | 50.22 | 57.81 | 52.62 | 68.94 | 65.33 | 63.85 | 60.98 | ![9_image_0.png](9_image_0.png) Figure 3: (a) Generated pseudo-open-set samples from ODG-NET. (b) Pseudo-stylized images (columns 2-4) generated by ODG-NET w.r.t. the input images in column 1. (c) Intermediate images (two per case) showing the transition between a pair of domains/labels. | Table 5: Comparative analysis for Multi-Dataset on ODG. (In %) | | | | | | | | | | | |------------------------------------------------------------------|---------|-------|----------|--------|---------|-------|---------|-------|---------|-------| | Methods | Clipart | Real | Painting | Sketch | Avg | | | | | | | Acc | H-score | Acc | H-score | Acc | H-Score | Acc | H-score | Acc | H-score | | | AGG | 29.78 | 34.06 | 65.33 | 64.72 | 44.30 | 51.04 | 27.59 | 35.41 | 41.75 | 46.31 | | OpenMax Bendale & Boult (2016b) | 36.72 | 34.41 | 63.29 | 62.88 | 44.19 | 50.75 | 34.51 | 32.29 | 44.67 | 45.08 | | MORGAN Pal et al. (2023a) | 29.45 | 28.59 | 59.97 | 62.22 | 43.75 | 47.53 | 31.19 | 30.04 | 41.09 | 42.09 | | MLDG Li et al. (2018a) | 29.66 | 35.11 | 65.37 | 54.40 | 44.04 | 50.53 | 26.83 | 34.57 | 41.48 | 43.65 | | FC Li et al. (2019b) | 29.91 | 35.42 | 64.77 | 63.65 | 44.13 | 50.07 | 28.56 | 34.10 | 41.84 | 45.81 | | Epi-FCR Li et al. (2019a) | 27.70 | 37.62 | 60.31 | 64.95 | 39.57 | 50.24 | 26.76 | 33.74 | 38.59 | 46.64 | | PAR Wang et al.
(2020b) | 29.29 | 39.99 | 64.09 | 62.59 | 42.36 | 46.37 | 30.21 | 39.96 | 41.49 | 47.23 | | RSC Huang et al. (2020) | 27.57 | 34.98 | 60.36 | 60.02 | 37.76 | 42.21 | 26.21 | 30.44 | 37.98 | 41.91 | | CuMix Mancini et al. (2020a) | 30.03 | 40.18 | 64.61 | 65.07 | 44.37 | 48.70 | 29.72 | 33.70 | 42.18 | 46.91 | | Fish Shi et al. (2021) | 32.78 | 35.42 | 65.43 | 67.77 | 45.37 | 48.81 | 32.35 | 32.45 | 43.98 | 46.11 | | Disentanglement Zhang et al. (2022) | 28.76 | 33.33 | 64.48 | 64.44 | 42.29 | 50.05 | 30.65 | 35.87 | 41.54 | 45.92 | | Mixstyle Zhou et al. (2021) | 30.03 | 40.18 | 64.61 | 65.07 | 44.37 | 48.70 | 29.72 | 33.70 | 42.18 | 46.91 | | DAML Shu et al. (2021) | 37.62 | 44.27 | 66.54 | 67.80 | 47.80 | 52.93 | 34.48 | 41.82 | 46.61 | 51.71 | | DAML + OpenMax Bendale & Boult (2016a) | 38.55 | 45.51 | 66.87 | 68.89 | 48.51 | 53.12 | 35.61 | 42.56 | 47.38 | 52.52 | | ODG-NET | 40.75 | 47.54 | 69.49 | 71.22 | 50.11 | 55.39 | 37.58 | 44.10 | 49.48 | 54.56 | dataset, ODG-NET achieves an Acc of 49.48% and an H-score of 54.56%, an improvement of more than 3% over Shu et al. (2021). Visually, the t-SNE plot (Van der Maaten & Hinton, 2008) in Fig. 4(a) confirms the discriminative and domain-independent nature of the semantic space given the augmented source data. Moreover, we present a collection of synthetically created images produced by our novel ODG-NET. As illustrated in Figure 3, (a) displays the generated pseudo-open-set examples, while (b) exhibits the pseudo-stylized pictures (columns 2-4) derived from their corresponding input images (column 1). This comparison highlights the evident transformation from the original input images to the synthesized pseudo images. Additionally, in (c), we demonstrate two aspects: first, the variation in artistic style of the input image, such as from sketch to painting; second, the combined shift in both style and label, exemplified by the transition from a "dog" class image to a "bag" class image.
## 4.2 Results On Closed Dg Tasks In the context of closed-set DG tasks, we compare the performance of ODG-NET against the existing literature, focusing on supervised pre-training methods that use meta-learning, regularization, and domain augmentation techniques Zhou et al. (2020b; 2021); Chen et al. (2021); Kang et al. (2022); Chattopadhyay et al. (2020); Xu et al. (2021); Shu et al. (2021); Shi et al. (2021); Du et al. (2020); Zhao et al. (2020), among others. As shown in Table 6 for the five benchmark DG datasets, ODG-NET outperforms all comparative techniques in the average leave-one-out DG evaluations, despite these techniques being designed explicitly for closed-set DG. We observe an improvement of at least 3 − 4% across all datasets. For DomainNet, ODG-NET achieves an average accuracy of 50.16%, which is 4% better than the previous state-of-the-art method SWAD Cha et al. (2021), likely due to the more diversified training set on which ODG-NET is trained. Finally, in closed-set DG experiments, ODG-NET outperforms DAML Shu et al. (2021) by a significant margin of at least 5 − 7%. ## 4.3 Ablation Analysis Model and loss ablation. In Table 7, we present the effects of different components of the ODG-NET model and the loss functions for PACS and Office-Home datasets. We confirmed that embedding networks are crucial for learning latent conditioning information in a meaningful way. The model without embedding layers resulted in a performance drop of approximately 5 − 6% in the H-score. Similarly, attention modules helped to highlight the style and semantic features better, and using (Ad, Ao) resulted in a 3−4% improvement in the H-score for both datasets. Additionally, we experimented with using a common backbone for the source domains instead of {Fs l } S s=1. This approach significantly reduced the H-score by 4% and 6% for both datasets, indicating the importance of multi-view features learned by the domain-specific backbones. 
Furthermore, $\mathcal{F}_d$ helped to discriminate the domain and semantic properties of $F_{el}(x)$, and the omission of $\mathcal{F}_d$ reduced performance by almost 5%. We also removed both $\mathcal{F}_d$ and the local classifiers $\{\mathcal{F}_l^s\}_{s=1}^S$ simultaneously, resulting in a performance drop of over 6%. When we trained $\mathcal{F}_o$ from scratch instead of using a pre-trained ResNet-18 backbone, we observed a performance drop of around 3%; the pre-trained ResNet-18 backbone is already rich in discriminative information, which helps our DG tasks. Concerning the loss function of feature optimization, it is evident that both ![11_image_0.png](11_image_0.png) Figure 4: (a) T-SNE of real and cGAN-synthesized images in the semantic feature space for the PACS dataset. (b) Accuracy comparison between ODG-NET and CuMix Mancini et al. (2020a)-based pseudo-open sample generation. (c) Accuracy comparison between ODG-NET and Mixstyle Zhou et al. (2021) and L2A-OT Zhou et al. (2020b)-based closed-set sample generation. (d) The Fréchet distance Dowson & Landau (1982) between real data and the generated closed and pseudo-open images, with and without the consideration of $W$ in $\mathbf{L}_{\mathrm{Gan}}$. ![11_image_1.png](11_image_1.png) Figure 5: (a) Fréchet distance between the source and target domains for the closed-set classes. (b) Openness analysis. | Methods | PACS | VLCS | Office-Home | Digits-DG | DomainNet | |----------------------------------|--------|--------|---------------|-------------|-------------| | CCSA Motiian et al. (2017) | 79.40 | 70.20 | 64.90 | 74.50 | - | | SFA-A Li et al. (2021c) | 81.70 | 74.00 | - | 79.60 | - | | MetaReg Balaji et al. (2018) | 81.70 | - | - | - | 43.62 | | MixStyle Zhou et al. (2021) | 83.70 | - | 65.50 | - | 34.0 | | JiGen Carlucci et al. (2019) | 80.51 | 73.19 | 61.20 | 76.20 | - | | SagNet Wu et al. (2019) | 83.25 | - | 62.34 | - | 40.30 | | RSC Huang et al. (2020) | 85.15 | 75.43 | 63.12 | - | 38.90 | | DDAIG Zhou et al. (2020a) | 83.10 | - | 65.50 | 77.58 | - | | L2A-OT Zhou et al.
(2020b) | 82.80 | - | 65.60 | 78.10 | - | | FACT Xu et al. (2021) | 84.51 | - | 66.56 | 81.55 | - | | STEAM Chen et al. (2021) | 86.60 | - | 66.80 | 83.13 | - | | Style Neo. Kang et al. (2022) | 85.47 | - | 65.89 | - | 44.60 | | Liu et al. Liu et al. (2021) | - | 76.48 | 67.85 | 80.02 | - | | MMD-AAE Li et al. (2018b) | 77.00 | 72.30 | 62.70 | 74.60 | - | | Cross-Grad Shankar et al. (2018) | 80.70 | - | 64.40 | 75.83 | - | | MASF Dou et al. (2019) | 81.03 | 74.11 | - | - | - | | EISNet Wang et al. (2020a) | 82.15 | 74.65 | - | - | - | | MetaVIB Du et al. (2020) | - | 74.54 | - | - | - | | DGER Zhao et al. (2020) | - | 74.38 | - | - | - | | MixUp Zhang et al. (2017) | - | - | - | - | 39.20 | | DMG Chattopadhyay et al. (2020) | - | - | - | - | 43.63 | | SWAD Cha et al. (2021) | 88.10 | 79.10 | 70.60 | - | 46.50 | | Fish Shi et al. (2021) | 85.50 | 77.80 | 68.60 | - | 42.70 | | DAML Shu et al. (2021) | 82.70 | 72.95 | 67.71 | 79.89 | - | | ODG-NET | 90.66 | 79.85 | 72.92 | 86.75 | 50.16 | Table 6: Results on the PACS, VLCS, Office-Home, Digits-DG, and DomainNet datasets under closed-set DG. (In %) | Table 7: Model and loss ablation analysis on PACS & Office-Home datasets.
(In %)

| Model variants of ODG-NET | PACS Acc | PACS H-score | Office-Home Acc | Office-Home H-score |
|---|---|---|---|---|
| - w/o $(\mathcal{F}_{im}, \mathcal{F}_v, \mathcal{F}_y, \mathcal{F}_\eta)$ | 63.43 | 50.82 | 53.07 | 50.55 |
| - w/o $A_d$ and $A_o$ | 66.01 | 52.93 | 56.58 | 53.53 |
| - w/o $\{\mathcal{F}_l^s\}_{s=1}^S$, but a common backbone for the source domains | 63.73 | 51.53 | 53.14 | 50.50 |
| - w/o $\mathcal{F}_d$ | 64.98 | 52.09 | 55.55 | 52.69 |
| - w/o $\mathcal{F}_d$ and $\{\mathcal{F}_l^s\}_{s=1}^S$ | 62.38 | 49.67 | 51.95 | 49.02 |
| - w/o Entropy loss | 66.81 | 53.15 | 57.35 | 54.36 |
| - w/o $\mathbf{L}_{\mathrm{con}}$ | 65.36 | 51.73 | 55.40 | 51.60 |
| - w/o $\mathbf{L}_{\mathrm{sem}}$ | 64.74 | 51.12 | 54.57 | 51.11 |
| - w/o $\mathcal{F}_d$ and $\{\mathcal{F}_l^s\}_{s=1}^S$ and $\mathbf{L}_{\mathrm{sem}}$ | 59.07 | 47.07 | 48.91 | 45.56 |
| - with training $\mathcal{F}_o$ from scratch | 65.96 | 52.86 | 56.38 | 53.33 |
| - w/o $W$ in $\mathbf{L}_{\mathrm{Gan}}$ | 66.26 | 53.96 | 58.38 | 54.93 |
| Sensitivity to the noise variance of the GAN for synthesizing closed and pseudo-open samples | | | | |
| $P_{noise}^{cs} = \mathcal{N}(0,1)$; $P_{noise}^{os} = \mathcal{N}(0,1)$ | 65.30 | 52.51 | 55.38 | 52.53 |
| $P_{noise}^{cs} = \mathcal{N}(0,1)$; $P_{noise}^{os} = \mathcal{N}(0,2)$ | 66.35 | 53.32 | 56.44 | 53.40 |
| $P_{noise}^{cs} = \mathcal{N}(0,1)$; $P_{noise}^{os} = \mathcal{N}(0,3)$ | 67.29 | 54.21 | 57.18 | 54.49 |
| $P_{noise}^{cs} = \mathcal{N}(0,1)$; $P_{noise}^{os} = \mathcal{N}(0,4)$ | 67.98 | 55.14 | 58.29 | 55.52 |
| $P_{noise}^{cs} = \mathcal{N}(0,1)$; $P_{noise}^{os} = \mathcal{N}(0,10)$ | 68.22 | 55.37 | 58.72 | 56.11 |
| $P_{noise}^{cs} = \mathcal{N}(0,5)$; $P_{noise}^{os} = \mathcal{N}(0,5)$ | 67.13 | 54.22 | 55.65 | 52.24 |
| ODG-NET ($P_{noise}^{cs} = \mathcal{N}(0,1)$; $P_{noise}^{os} = \mathcal{N}(0,5)$) | 68.80 | 55.81 | 59.40 | 56.69 |

the closed-set contrastive and the open-space entropy regularizers assist in generating a more discriminative feature space. The model without either of these losses, or without the full $\mathbf{L}_{\mathrm{sem}}$, degrades performance by approximately 4%.
Finally, we removed the diversity regularization W in L_GAN. Although W induces diversity in the generated images, as per Fig. 4(d), empirically we observed only a nominal change in accuracy (1–1.5%) in the presence of W. Comparison of our augmentation with methods from the literature. Our augmentation technique enables more controlled style and label mix-up, and we compared it against two types of augmentation techniques from the literature: existing style diversification approaches Zhou et al. (2020b; 2021) for closed-set classes, and CuMix Mancini et al. (2020a), which performs random image mix-up so that the generated images can serve as a proxy for the open space. Our results in Fig. 4(b)-4(c) demonstrate that ODG-NET performs better with our proposed augmentation. Our method is interpolation-based, which allows us to generate more style primitives than Zhou et al. (2020b). Since our method is image-based, as opposed to the feature-based method of Zhou et al. (2021), we can handle the semantics better. Similarly, for pseudo-open samples, we can generate more meaningful images with varied similarities to the closed classes than the random mix-up of Mancini et al. (2020a). Sensitivity to the variances of P^os/cs_noise. In this experiment, we tune the σ parameter of P^os_noise while fixing P^cs_noise for F_G. As shown in Table 7, we observe that as we increase σ from 1 to 5, the model performance continuously improves from 52.51% to 55.81% for PACS and from 52.53% to 56.69% for Office-Home. With high variance, the open samples are sparsely distributed, better covering the open space. However, the performance improvements saturate beyond σ = 5. On the other hand, increasing the variance of P^cs_noise significantly affects the performance, leading to a drop of at least 3%. This occurs because the generated images may deviate from the original semantic concepts, degrading the quality of the generated images. Fréchet distance for domain alignment.
To assess the domain independence of F^b_o, we calculate the Fréchet distance (Dowson & Landau, 1982) between the closed-set classes of the source and target domains for Office-Home, with the target domain being *Real-world*. In Fig. 5(a), we show the Fréchet distance of the baseline AGG, DAML, and two variants of ODG-NET, with and without the domain classifier F_d. The full ODG-NET produces the minimum Fréchet distance, indicating that it performs the best domain alignment among the compared models. The model without F_d performs poorly compared to the full ODG-NET, suggesting that F_d helps disentangle features better, making F^b_o less affected by domain properties and letting it focus on shared components. Sensitivity to the number of target open classes. Since there is no restriction on the number of open classes in the target domain, we are interested in assessing whether ODG-NET can handle different numbers of open classes during inference. Here, we considered the average leave-one-out H-score for Office-Home and simulated three scenarios with different numbers of open classes in the target: 10, 20, and 30. In Figure 5(b), ODG-NET consistently outperforms DAML Shu et al. (2021) by at least 3% for different openness factors. The entropy minimization component of L_sem widens the gap between the open and closed spaces, which is helpful in this regard. We validate this by removing the entropy component of L_sem and re-running the experiments, in which case we find that performance drops by 2–3%. Performance comparison when the source domains have a disjoint set of classes. Here, we present a novel experimental scenario in ODG, where the source domains have completely different sets of classes, and the target domain consists of all the classes from the sources, plus previously unknown class samples. We compare the performance of ODG-NET with that of DAML Shu et al. (2021) for this setup in Table 8.
We find that ODG-NET outperforms DAML by around 3%, demonstrating its robustness to extreme domain and label shifts within the source domains.

Table 8: Comparison between DAML Shu et al. (2021) and ODG-NET when the source domains have mutually disjoint classes, on the Office-Home dataset, in terms of H-score. (In %)

| Domain | Clipart | RealWorld | Product | Art | Average |
|--------|---------|-----------|---------|-----|---------|
| DAML Shu et al. (2021) | 40.12 | 58.72 | 54.85 | 47.25 | 50.23 |
| ODG-NET | 45.69 | 62.77 | 56.95 | 49.86 | 53.81 |

## 5 Takeaways

In this paper, we present ODG-NET, a solution to the challenging problem of open domain generalization. This task combines domain generalization, open-set learning, and class imbalance in a common setting. One of the key features of ODG-NET is the novel generative augmentation, which enables continuous domain- and label-conditional image synthesis through interpolation of conditioning variables. This augmented training set is utilized to learn a discriminative and unbiased semantic space for an open-set classifier while minimizing the effects of domain-dependent artifacts. In our experiments, ODG-NET achieves state-of-the-art performance for both open-set and closed-set domain generalization on six benchmark datasets. We plan to extend our evaluation to more safety-critical applications in the future.

## References

Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. Metareg: Towards domain generalization using metaregularization. *Advances in neural information processing systems*, 31, 2018.

Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 1563–1572, 2016a.

Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 1563–1572, 2016b.

Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp.
1563–1572, 2016c.

Silvia Bucci, Mohammad Reza Loghmani, and Tatiana Tommasi. On the effectiveness of image rotation for open set domain adaptation. In *European conference on computer vision*, pp. 422–438. Springer, 2020.

Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, and Zhifeng Hao. Learning disentangled semantic representation for domain adaptation. In *IJCAI: proceedings of the conference*, volume 2019, pp. 2060. NIH Public Access, 2019.

Fabio M Carlucci, Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 2229–2238, 2019.

Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. *Advances in Neural Information Processing Systems*, 34:22405–22418, 2021.

Prithvijit Chattopadhyay, Yogesh Balaji, and Judy Hoffman. Learning to balance specificity and invariance for in and out of domain generalization. In *European Conference on Computer Vision*, pp. 301–318. Springer, 2020.

Yang Chen, Yu Wang, Yingwei Pan, Ting Yao, Xinmei Tian, and Tao Mei. A style and semantic memory mechanism for domain generalization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 9164–9173, 2021.

Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, and Bernhard Schölkopf. On the transfer of disentangled representations in realistic settings. arXiv preprint arXiv:2010.14407, 2020.

Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via model-agnostic learning of semantic features. *Advances in Neural Information Processing Systems*, 32, 2019.

DC Dowson and BV Landau. The Fréchet distance between multivariate normal distributions. *Journal of Multivariate Analysis*, 12(3):450–455, 1982.
Yingjun Du, Jun Xu, Huan Xiong, Qiang Qiu, Xiantong Zhen, Cees GM Snoek, and Ling Shao. Learning to learn with variational information bottleneck for domain generalization. In *European Conference on Computer Vision*, pp. 200–216. Springer, 2020. Chen Fang, Ye Xu, and Daniel N. Rockmore. Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias. In *Proceedings of the IEEE International Conference on Computer Vision (ICCV)*, December 2013. Rui Gong, Wen Li, Yuhua Chen, and Luc Van Gool. Dlow: Domain flow for adaptation and generalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2477–2486, 2019. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. *Communications of the ACM*, 63(11):139–144, 2020. Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, et al. A survey on vision transformer. *IEEE transactions on pattern analysis and machine* intelligence, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 770–778, 2016. Zeyi Huang, Haohan Wang, Eric P Xing, and Dong Huang. Self-challenging improves cross-domain generalization. In *European Conference on Computer Vision*, pp. 124–140. Springer, 2020. Juwon Kang, Sohyun Lee, Namyup Kim, and Suha Kwak. Style neophile: Constantly seeking novel styles for domain generalization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7130–7140, 2022. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. *arXiv preprint arXiv:1312.6114*, 2013. 
Shu Kong and Deva Ramanan. Opengan: Open-set recognition via open data generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 813–822, 2021. Jogendra Nath Kundu, Naveen Venkat, Ambareesh Revanur, R Venkatesh Babu, et al. Towards inheritable models for open-set domain adaptation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12376–12385, 2020. Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In *Proceedings of the IEEE international conference on computer vision*, pp. 5542–5550, 2017. Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy Hospedales. Learning to generalize: Meta-learning for domain generalization. In *Proceedings of the AAAI conference on artificial intelligence*, volume 32, 2018a. Da Li, Jianshu Zhang, Yongxin Yang, Cong Liu, Yi-Zhe Song, and Timothy M Hospedales. Episodic training for domain generalization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 1446– 1455, 2019a. Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 5400–5409, 2018b. Haoliang Li, YuFei Wang, Renjie Wan, Shiqi Wang, Tie-Qiang Li, and Alex Kot. Domain generalization for medical imaging classification with linear-dependency regularization. *Advances in Neural Information Processing Systems*, 33:3118–3129, 2020. Jingjing Li, Erpeng Chen, Zhengming Ding, Lei Zhu, Ke Lu, and Heng Tao Shen. Maximum density divergence for domain adaptation. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 43(11):3918–3930, 2021a. doi: 10.1109/TPAMI.2020.2991050. Lei Li, Ke Gao, Juan Cao, Ziyao Huang, Yepeng Weng, Xiaoyue Mi, Zhengze Yu, Xiaoya Li, and Boyang Xia. Progressive domain expansion network for single domain generalization. 
In *Proceedings of the IEEE/CVF Conference* on Computer Vision and Pattern Recognition, pp. 224–233, 2021b. Pan Li, Da Li, Wei Li, Shaogang Gong, Yanwei Fu, and Timothy M Hospedales. A simple feature augmentation for domain generalization. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 8886–8895, 2021c. Yiying Li, Yongxin Yang, Wei Zhou, and Timothy Hospedales. Feature-critic networks for heterogeneous domain generalization. In *International Conference on Machine Learning*, pp. 3915–3924. PMLR, 2019b. Chang Liu, Lichen Wang, Kai Li, and Yun Fu. *Domain Generalization via Feature Variation Decorrelation*, pp. 1683–1691. Association for Computing Machinery, New York, NY, USA, 2021. ISBN 9781450386517. URL https://doi.org/10.1145/3474085.3475311. Massimiliano Mancini, Zeynep Akata, Elisa Ricci, and Barbara Caputo. Towards recognizing unseen categories in unseen domains. In *European Conference on Computer Vision*, pp. 466–483. Springer, 2020a. Massimiliano Mancini, Zeynep Akata, Elisa Ricci, and Barbara Caputo. Towards recognizing unseen categories in unseen domains. In *European Conference on Computer Vision*, pp. 466–483. Springer, 2020b. Saeid Motiian, Marco Piccirilli, Donald A Adjeroh, and Gianfranco Doretto. Unified deep supervised domain adaptation and generalization. In *Proceedings of the IEEE international conference on computer vision*, pp. 5715–5725, 2017. Cheng Ouyang, Chen Chen, Surui Li, Zeju Li, Chen Qin, Wenjia Bai, and Daniel Rueckert. Causality-inspired singlesource domain generalization for medical image segmentation. *arXiv preprint arXiv:2111.12525*, 2021. Debabrata Pal, Shirsha Bose, Biplab Banerjee, and Yogananda Jeppu. Morgan: Meta-learning-based few-shot open-set recognition via generative adversarial network. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 6295–6304, January 2023a. Debabrata Pal, Shirsha Bose, Biplab Banerjee, and Yogananda Jeppu. 
Morgan: Meta-learning-based few-shot open-set recognition via generative adversarial network. In *Proceedings of the IEEE/CVF Winter Conference on Applications* of Computer Vision, pp. 6295–6304, 2023b. Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In *Proceedings of the IEEE international conference on computer vision*, pp. 754–763, 2017. Novi Patricia and Barbara Caputo. Learning to learn, from transfer learning to domain adaptation: A unifying perspective. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1442–1449, 2014. Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 1406–1415, 2019. Mohammad Mahfujur Rahman, Clinton Fookes, Mahsa Baktashmotlagh, and Sridha Sridharan. Multi-component image translation for deep domain generalization. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 579–588. IEEE, 2019. Alexandre Rame, Corentin Dancette, and Matthieu Cord. Fishr: Invariant gradient variances for out-of-distribution generalization. In *International Conference on Machine Learning*, pp. 18347–18377. PMLR, 2022. Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In *Proceedings of the European conference on computer vision (ECCV)*, pp. 153–168, 2018. Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, and Sunita Sarawagi. Generalizing across domains via cross-gradient training. *arXiv preprint arXiv:1804.10745*, 2018. Yuge Shi, Jeffrey Seely, Philip HS Torr, N Siddharth, Awni Hannun, Nicolas Usunier, and Gabriel Synnaeve. Gradient matching for domain generalization. *arXiv preprint arXiv:2104.09937*, 2021. Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, and Mingsheng Long. 
Open domain generalization with domain-augmented meta-learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern* Recognition, pp. 9624–9633, 2021. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of machine learning research*, 9 (11), 2008. Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Open-set recognition: A good closed-set classifier is all you need? *arXiv preprint arXiv:2110.06207*, 2021. Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 5018–5027, 2017. Jingye Wang, Ruoyi Du, Dongliang Chang, Kongming Liang, and Zhanyu Ma. Domain generalization via frequencydomain-based feature disentanglement and interaction. In Proceedings of the 30th ACM International Conference on Multimedia, pp. 4821–4829, 2022. Shujun Wang, Lequan Yu, Caizi Li, Chi-Wing Fu, and Pheng-Ann Heng. Learning from extrinsic and intrinsic supervisions for domain generalization. In *European Conference on Computer Vision*, pp. 159–176. Springer, 2020a. Yufei Wang, Haoliang Li, and Alex C Kot. Heterogeneous domain generalization via domain mixup. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3622–3626. IEEE, 2020b. Ziqi Wang, Marco Loog, and Jan van Gemert. Respecting domain relations: Hypothesis invariance for domain generalization. In *2020 25th International Conference on Pattern Recognition (ICPR)*, pp. 9756–9763. IEEE, 2021. Zhijie Wu, Xiang Wang, Di Lin, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. Sagnet: Structure-aware generative network for 3d-shape modeling. *ACM Transactions on Graphics (TOG)*, 38(4):1–14, 2019. Qinwei Xu, Ruipeng Zhang, Ya Zhang, Yanfeng Wang, and Qi Tian. A fourier-based framework for domain generalization. 
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 14383– 14392, 2021. Zheng Xu, Wen Li, Li Niu, and Dong Xu. Exploiting low-rank structure from latent domains for domain generalization. In *European Conference on Computer Vision*, pp. 628–643. Springer, 2014. Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, and Joost van de Weijer. One ring to bring them all: Towards open-set recognition under domain shift. *arXiv preprint arXiv:2206.03600*, 2022. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 6023–6032, 2019. Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, and Eric P Xing. Towards principled disentanglement for domain generalization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 8024–8034, 2022. Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. *arXiv preprint arXiv:1710.09412*, 2017. Chao Zhao and Weiming Shen. Adaptive open set domain generalization network: Learning to diagnose unknown faults under unknown working conditions. *Reliability Engineering & System Safety*, 226:108672, 2022. Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, and Dacheng Tao. Domain generalization via entropy regularization. *Advances in Neural Information Processing Systems*, 33:16096–16107, 2020. Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Deep domain-adversarial image generation for domain generalisation. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pp. 13025– 13032, 2020a. Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Learning to generate novel domains for domain generalization. In *European conference on computer vision*, pp. 
561–578. Springer, 2020b. Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with mixstyle. *arXiv preprint* arXiv:2104.02008, 2021. Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. Ronghang Zhu and Sheng Li. Crossmatch: Cross-classifier consistency regularization for open-set single domain generalization. In *International Conference on Learning Representations*, 2021.
Review 1: Summary: To address the challenge of Open Domain Generalization (ODG), this paper proposes a model called ODG-NET. The model introduces a generative augmentation framework that generates new domains of style-interpolated closed-set images and new pseudo-open images by interpolating the content of paired training images. It also addresses the problem of style bias by representing all images in terms of all source domain properties. As a result, a multi-class semantic object classifier incorporating both closed and open class classification features is trained, along with a style classifier to identify style primitives. Experimental results on six benchmark datasets show that ODG-NET outperforms the baseline method. Strengths and Weaknesses: Strengths - This paper proposes a new network structure called ODG-NET to address the Open Domain Generalization problem. - ODG-NET constructs an open-set classifier by acquiring discriminative, unbiased, and disentangled semantic embedding spaces. - To synthesize diverse augmented images from the source domain, this paper proposes a novel conditional GAN with cycle consistency constraints to interpolate domain and category labels and regularization to avoid mode collapse. - Experimental results on six benchmark datasets show that ODG-NET outperforms the baseline method. Weaknesses - The network as a whole is admittedly new. On the other hand, many of the individual methods are techniques that have already been used for domain generalization and domain adaptation, and have marginal novelty. - The network structure is complex in order to incorporate different properties. - Similarly, many loss functions are combined, and the methods lack simplicity. - Many hyperparameters, such as \alpha, \beta, \epsilon, and the different weights w for the loss functions, are difficult to tune. - Large performance variations depending on the noise parameters.
Requested Changes: - The text in Figure 2 is too small and should be made easier to read. Also, it would be better to describe the function of each module in the figure. - In Figure 2, \mathcal{F}^b_{1S} should be \mathcal{F}^b_{l1} and \mathcal{F}^c_{1S} should be \mathcal{F}^c_{l1}. - Right brackets are missing in the denominator and numerator of equation (7). Broader Impact Concerns: None ================================================== Review 2: Summary: This paper introduces ODG-NET, a solution for open domain generalization, combining domain generalization, open-set learning, and class imbalance. ODG-NET employs generative augmentation for continuous domain and label conditional image synthesis, enhancing open-set classification while reducing domain-related artifacts. It achieves state-of-the-art performance on six benchmark datasets for both open-set and closed-set domain generalization. Strengths and Weaknesses: Strengths: 1. Extensive experiments are employed to validate the effectiveness of the proposed method. 2. The proposed method achieves promising results for open and closed-set domain generalization. Weaknesses 1. The authors should thoroughly review the writing to ensure it is easily comprehensible to a broader audience. 2. The clarity of the depiction in Figure 2 should be enhanced, and it is advisable for the authors to provide a more lucid illustration of the proposed method. 3. The proposed method appears complex due to its numerous objectives, with some not being adequately explained in the method section. Requested Changes: As mentioned earlier, it is recommended that the authors enhance the clarity of their writing. This includes providing more comprehensive explanations for each component of the final objective and elucidating the underlying motivations, with the aim of improving the paper's overall accessibility to readers. 
Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper considers a novel issue, Open Domain Generalization, which aims to produce an open classifier that generalizes well when we have multiple source domains. Techniques: 1. a novel conditional GAN with a cycle consistency constraint 2. an anti-mode-collapse regularizer that interpolates domain and category labels Experiments show a good performance of the proposed method ODG-NET. Strengths and Weaknesses: Strengths 1. a novel and interesting problem. (I like this problem) 2. an effective method Weaknesses 1. Poor writing (please polish it again) 2. Please consider some baselines in OOD detection 3. I expect you to explore whether generalization can benefit detection and detection can benefit generalization. 4. Missing references: some references related to OOD detection, open-set domain adaptation, and domain generalisation, e.g., 1. Is out-of-distribution detection learnable? 2. Moderately Distributional Exploration for Domain Generalization and so on Requested Changes: Please see the weaknesses Broader Impact Concerns: N/A
Also, please use \citep{} for all paper references in the tables (i.e., so that the reference appears in parentheses). It is ok if the paper ends up being slightly longer than 12 pages, as long as it is readable. ==================================================
# A Greedy Hierarchical Approach To Whole-Network Filter Pruning In CNNs

Anonymous authors Paper under double-blind review ## Abstract Deep convolutional neural networks (CNNs) have achieved impressive performance in many computer vision tasks. However, their large model sizes require heavy computational resources, making pruning redundant filters from existing pre-trained CNNs an essential task in developing efficient models for resource-constrained devices. Whole-network filter-pruning algorithms prune varying fractions of filters from each layer, hence providing greater flexibility. State-of-the-art whole-network pruning methods are either computationally expensive, due to the need to calculate the loss for each pruned filter using a training dataset, or use various heuristic or learned criteria for determining the pruning fractions for each layer. Hence, there is a need for a simple and efficient technique for whole-network pruning. This paper proposes a two-level hierarchical approach for whole-network filter pruning which is efficient and uses the classification loss as the final criterion. The lower-level algorithm (called filter-pruning) uses a sparse-approximation formulation based on a linear approximation of filter weights. We explore two algorithms: orthogonal matching pursuit-based greedy selection and a greedy backward pruning approach. The backward pruning algorithm uses a novel closed-form error criterion for efficiently selecting the optimal filter at each stage, thus making the whole algorithm much faster. The higher-level algorithm (called layer-selection) greedily selects the best-pruned layer (pruned using the filter-pruning algorithm) according to a global pruning criterion. We propose algorithms for two different global pruning criteria: (1) layerwise relative error (HBGS), and (2) final classification error (HBGTS). Our suite of algorithms outperforms state-of-the-art pruning methods on ResNet18, ResNet32, ResNet56, VGG16, and ResNext101.
Our method reduces the RAM requirement for ResNext101 from 7.6 GB to 1.5 GB and achieves a 94% reduction in FLOPS without losing accuracy on CIFAR-10. ## 1 Introduction Convolutional neural networks (CNNs) have demonstrated remarkable performance across various applications, such as image classification (Han et al., 2016), object detection (Redmon et al., 2016), and image segmentation (Minaee et al., 2021). However, the deployment of CNNs on IoT devices for computer vision tasks often encounters practical bottlenecks related to the model size and computational complexity of inference (FLOPs) (Goel et al., 2020). While neural architecture search (Baker et al., 2017; Zoph & Le, 2017) and efficient model design (Tan & Le, 2019) can sometimes lead to highly efficient architectures, they impose substantial requirements in terms of data and computational cost, as well as research expertise. However, pruning of pre-trained models (Lebedev & Lempitsky, 2018; Hoefler et al., 2021; Vadera & Ameen, 2022; He & Xiao, 2023) provides a cheaper alternative where one can avoid re-training complicated models on large datasets. For CNNs, structured pruning or *filter-pruning* (FP) (He et al., 2017; Luo et al., 2017; He & Xiao, 2023) has emerged as a preferred alternative since it causes a reduction in computation (thus leading to power savings) as well as memory requirements without requiring special hardware or re-implementation of operations. Filter-pruning (FP) techniques can further be classified as (1) *layer-wise pruning*, which prune filters uniformly from each layer (e.g. score-propagation (Yu et al., 2018) and error in activation reconstruction (Luo et al., 2017)), and (2) *whole-network pruning* (WNP), which prunes filters from the entire network. The WNP approach can prune different fractions of filters from each layer, hence providing higher flexibility. 
![1_image_0.png](1_image_0.png)

Figure 1: Hierarchical approach for non-uniform pruning of filters across the network.

An important challenge for WNP is to determine the pruning fractions for each layer. Kuzmin et al. (2019, Section 3.4.1) calculate the accuracy obtained by pruning individual layers to varying fractions and find an optimal compromise, so that the overall pruning ratio is achieved while minimizing the maximum loss in accuracy per layer. The main disadvantage of this approach is that the effect of pruning one layer on the pruning of another layer is not captured. Recent works also include methods based on a Taylor-series expansion of the loss function, which approximate the influence score of pruning a filter on the overall loss (Wang et al., 2019; Molchanov et al., 2019; Peng et al., 2019; Nonnenmacher et al., 2021), with good practical performance (He & Xiao, 2023). However, these methods can be expensive for large networks, as they require a pass over the entire training set to calculate each influence score, which can be costly for large datasets. Additionally, Dong & Yang (2019) applied NAS to search for a network with flexible channel and layer sizes, but this method can also be expensive for larger networks. On the other hand, some recent works use approximate criteria to prune filters. For instance, Murti et al. (2023) propose a discriminative pruning technique based on the total variation separation distance (TVS), an approximate criterion for pruning filters from a network. Similarly, He et al. (2020) choose different criteria to prune filters from different layers using Gumbel-softmax. However, the main drawback of this procedure is that the Gumbel-softmax smoothing only calculates an approximate output feature map for each layer, thus potentially hurting overall performance. Therefore, there is a need for an *efficient* and *accurate* WNP technique that directly optimizes the *training-data loss*.
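As a concrete illustration of the Taylor-series criterion discussed above, its first-order variant (in the spirit of Molchanov et al., 2019) scores a filter by the magnitude of the loss change predicted from zeroing it, |Σ_j g_j w_j|, where g is the gradient of the loss with respect to the filter weights w. The sketch below is a hedged, framework-agnostic NumPy version; the function and variable names are ours, not from the surveyed papers:

```python
import numpy as np

def taylor_filter_importance(weights, grads):
    """First-order Taylor importance of each conv filter: the loss change
    from zeroing filter i is approximated by |sum_j g_ij * w_ij|, where g
    holds the loss gradients w.r.t. the filter weights w.

    weights, grads: arrays of shape (num_filters, ...).
    """
    n = weights.shape[0]
    return np.abs((weights.reshape(n, -1) * grads.reshape(n, -1)).sum(axis=1))

# Toy layer with 4 filters of shape (3, 3, 3); filter 2 has near-zero
# weights and gradients, so it should be the cheapest to prune.
rng = np.random.default_rng(1)
w = rng.normal(size=(4, 3, 3, 3))
g = rng.normal(size=(4, 3, 3, 3))
w[2] *= 1e-4
g[2] *= 1e-4
scores = taylor_filter_importance(w, g)
prune_idx = int(scores.argmin())  # candidate filter to prune -> 2
```

Note that computing `grads` requires backpropagation over (a sample of) the training set, which is exactly the cost the methods above pay per influence score.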
In this work, we propose a greedy hierarchical training-data-loss-based approach for whole-network filter pruning (see Figure 1). The iterative higher-level algorithm (called *layer-selection*) evaluates all layers based on outputs from the lower-level algorithm and greedily selects a layer to prune filters from in each iteration. The lower-level algorithm (called *filter-pruning*) prunes filters optimally for the current network configuration. We propose two versions of the iterative layer-selection algorithm: (1) hierarchical backward greedy search (**HBGS**), which selects layers based on the relative reconstruction error of the layer outputs, and (2) hierarchical backward greedy tree search (**HBGTS**), which selects layers based on the error of the final classification layer outputs. The key advantage of our greedy layer-selection, compared to a learned criterion (He et al., 2020) or a threshold-based criterion (Kuzmin et al., 2019), is that we utilize the activations from the modified network, which arguably leads to better decisions after a few iterations of pruning. However, since each iteration of the greedy layer-selection makes many calls to the filter-pruning algorithm (typically one call per layer, with some possibility of caching old results), an expensive filter-pruning algorithm would be impractical for large networks. A key contribution of this paper is an *efficient* filter-pruning algorithm, which ensures the feasibility of the overall hierarchical scheme for large networks. While LRF (Joo et al., 2021) demonstrated impressive practical performance for pruning filters, it only prunes one filter at a time, hence making it prohibitively expensive for our purpose. We formulate the problem of optimally pruning multiple filters from a layer using the linear replaceability criterion as a sparse approximation problem.
We study an orthogonal matching pursuit (OMP) (Tropp & Gilbert, 2007) based algorithm, FP-OMP (Purohit et al., 2023), for filter-pruning. Under the assumption of restricted isometry of the matrix composed of filter weights (Tropp & Gilbert, 2007), FP-OMP selects filters whose linear combinations can represent the pruned filters with minimal error. However, since FP-OMP follows a greedy forward selection strategy and is called iteratively to prune a small number of filters, the overall algorithm becomes computationally expensive. To alleviate this inefficiency, we propose **FP-Backward**, a backward-elimination-based algorithm for solving the sparse approximation problem. A key facilitating factor towards a fast implementation of FP-Backward is the calculation of a closed-form expression for the approximation error incurred by pruning a filter. HBGTS along with FP-Backward (called **HBGTS-B**) is an *efficient* algorithm, taking only 50% of the running time of HBGTS with comparable performance. Experimental results on a variety of standard pre-trained CNN models (e.g., ResNet18, ResNet32, ResNet56, VGG16, ResNext101) on standard image classification datasets (e.g., CIFAR10, CIFAR100, Tiny-Imagenet) show that models pruned with HBGS and HBGTS have higher accuracies than recent state-of-the-art pruning methods at similar compression levels (see Table 1 and Figure 2). At higher parameter reduction (≥90%), the proposed methods outperform existing methods by ∼5% (see Figure 2). We also find the optimal pruned model to have a highly non-uniform distribution of pruning fractions across layers (see Figure 4), demonstrating the effectiveness of our layer-selection algorithm. To summarize:

1. We propose a novel *greedy hierarchical framework* for non-uniform pruning of filters, with *filter-pruning* at the lower level and *layer-selection* at the higher level.
2.
We propose a backward-elimination-based scheme, FP-Backward, for filter pruning, which takes advantage of a novel closed-form expression for the approximation error.
3. We propose HBGTS, which uses an efficient implementation to directly optimize the classification error for layer selection.

## 1.1 Related Work

Many pruning methods have been proposed in the literature; (Lebedev & Lempitsky, 2018; Hoefler et al., 2021; Vadera & Ameen, 2022) provide excellent surveys of pruning techniques. Pruning can be categorised into two types: *unstructured pruning*, involving the removal of individual weights (Han et al., 2015), and structured pruning or *filter-pruning* (FP), in which entire nodes or channels are removed (He et al., 2017; Luo et al., 2017; He & Xiao, 2023). Structured pruning yields models that are efficiently implementable on a wide range of accelerator devices, e.g., GPUs. (He & Xiao, 2023) provides a recent survey and website for comparing structured pruning techniques for CNNs. Pruning can be applied to a pre-trained model or performed while training from scratch; the latter is costly and requires large training data. Therefore, we focus on pruning a pre-trained model. We further categorise such methods into the following groups:

Weight-Based Pruning - The weights of filters are examined to determine which ones are essential for the model's performance. These methods do not require input data. (Han et al., 2015) focused on eliminating small-norm weights. (He et al., 2019) incorporates the geometric median (Fletcher et al., 2008) to estimate the importance of each filter. (Joo et al., 2021) prunes linearly replaceable filters.

Activation-Based Pruning - Rather than relying on filter weights, these methods utilize activation maps or layer outputs to make pruning decisions. Information can be drawn from the activation maps of the *current layer* or of *all layers/the whole network*.
Some of the *current layer* activation-based pruning methods are: CP (He et al., 2017), which focuses on minimizing the reconstruction error of sparse activation maps; HRank (Lin et al., 2020), which calculates the average rank of activation maps; CHIP (Sui et al., 2021), which assesses cross-channel correlation to evaluate channel importance; and ThiNet (Luo et al., 2017; El Halabi et al., 2022), which approximates the activation maps of layer $l+1$ using subsets of layer $l$'s activation maps. *Current layer* activation-based pruning methods do not consider the propagation of the reconstruction error. Some of the *all layers/whole-network* activation-based pruning methods are: NISP (Yu et al., 2018), which assesses the Final Response Layer to calculate neuron importance, and DCP (Zhuang et al., 2018), which aims to retain discriminative channels. Layer-output-based methods are computationally expensive, since they need to compute the outputs on a training dataset.

Regularization - Regularization can aid in learning structured sparse networks by incorporating various sparsity regularizers. These regularizers can be imposed on *Batch Normalization (BN) parameters* (Liu et al., 2017; You et al., 2019; Kang & Han, 2020), on *extra parameters* (Huang & Wang, 2018; Lin et al., 2019), or on *filters* (Wen et al., 2016; Chen et al., 2021).

Taylor Expansion - A Taylor expansion is employed to approximate the change in the loss caused by pruning filters. *First-order-Taylor* methods use first-order information to calculate the loss change caused by pruning weights (Molchanov et al., 2019; You et al., 2019). *Second-order-Taylor* methods exploit the Hessian matrix, containing second-order information (Peng et al., 2019; Wang et al., 2019; Liu et al., 2021). However, these methods can be expensive for large networks.

There is a different line of work in which pruned models are effectively trained from *scratch*, e.g., Frankle & Carbin (2018); Rachwan et al. (2022).
Unlike training-based approaches, our method does not require training from scratch, which is costly and requires large training data.

## 2 A Hierarchical Greedy Approach To Filter Pruning

In this section, we propose a hierarchical scheme, HBGS/HBGTS, for non-uniform filter pruning from a pretrained CNN. As shown in Figure 1, the proposed scheme operates in a two-level hierarchical manner: (1) *filter pruning* - at the lower level, this step identifies the most appropriate filters to be pruned from each layer, and (2) *layer selection* - at the higher level, this step selects the best layer to currently prune from. These two steps are applied iteratively to achieve non-uniform pruning over the whole network. We first describe our main sparse-approximation-based formulation for optimal filter pruning from each layer, and then describe a faster backward-elimination-based algorithm for the same. For layer selection, we describe a layerwise-regression-based backward greedy search strategy. We also incorporate an overall-error-based strategy for layer selection.

## 2.1 Sparse Approximation For Filter Pruning

A convolutional filter used in a deep CNN is a $K \times K$ matrix. A convolutional layer is defined by its filter weights $f_{i,j} \in \mathbb{R}^{K^2}$, where $i = 1, ..., m$ and $j = 1, ..., n$ index the input and output channels, respectively. Given the input feature map with $m$ channels, $X = \{X_1, ..., X_m\}$, the output feature map with $n$ channels, $Y = \{Y_1, ..., Y_n\}$, is calculated as:
$$Y_j = \sum_{i=1}^{m} X_i * f_{i,j} := X * f_{:,j}$$
Here, $*$ denotes the convolution operation, and $f_{:,j} \in \mathbb{R}^{K^2 \times m}$ denotes all the filter weights for output channel $j$. For brevity, we describe the algorithm for output channel pruning throughout the paper. Input channel pruning can be performed analogously.
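As a toy illustration (not the authors' implementation), the per-output-channel convolution $Y_j = \sum_i X_i * f_{i,j}$ can be sketched in NumPy with explicit loops and valid-mode cross-correlation; all shapes and names here are illustrative.

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid-mode 2D cross-correlation of one channel x with a K x K kernel k."""
    H, W = x.shape
    K = k.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + K, c:c + K] * k)
    return out

def conv_layer(X, f):
    """Y_j = sum_i X_i * f_{i,j}: X has shape (m, H, W), f has shape (m, n, K, K)."""
    m, n, K = f.shape[0], f.shape[1], f.shape[2]
    H, W = X.shape[1], X.shape[2]
    Y = np.zeros((n, H - K + 1, W - K + 1))
    for j in range(n):
        for i in range(m):
            Y[j] += conv2d_valid(X[i], f[i, j])
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 8, 8))     # m = 3 input channels
f = rng.normal(size=(3, 4, 3, 3))  # n = 4 output channels, K = 3
Y = conv_layer(X, f)
print(Y.shape)  # (4, 6, 6)
```

Pruning output channel $j$ corresponds to dropping the slice `f[:, j]`, which removes both its weights and its contribution to the FLOPs of this layer and the next.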
For channel pruning, we follow the idea of *linearly replaceable filters* (LRF) introduced in (Joo et al., 2021), which states that any filter $f_{:,j} \in \mathbb{R}^{K^2 m}$ can be pruned if its weights can be expressed as a linear combination of the filter weights of the same layer that are not pruned. Note that, for the linear approximation of a filter with respect to other filters of the same layer, we treat the filter weights $f_{:,j}$ as a flat $K^2 m$-dimensional vector. LRF prunes the channel $j$ for which $\|\epsilon_j\|$ is minimal, where $f_{:,j} = \sum_{l \neq j} \lambda_{j,l} f_{:,l} + \epsilon_j$. Here, $\lambda_{j,l}$ are the coefficients of the $l$th filter for approximating the $j$th filter, which can be computed by solving the minimization problem $\min_{\lambda_{j,:}} ||f_{:,j} - \sum_{l \neq j} \lambda_{j,l} f_{:,l}||^2$. LRF (Joo et al., 2021) works by iteratively pruning one filter using the above technique and updating the weights of the retained parameters using one epoch of SGD updates, minimizing a combination of the training loss and a knowledge distillation loss (Hinton et al., 2015) w.r.t. the unpruned model outputs.

The above method can be generalized by selecting a set of filters, $S$, in one go. Given the set of selected filters $S$, the error for the $j$th output filter, $f_{:,j} \notin S$, is given by $\epsilon_j$:
$$f_{:,j} = \sum_{l \in S} \lambda_{j,l} f_{:,l} + \epsilon_j, \qquad \forall j \notin S \tag{1}$$
The problem of estimating $\lambda_{j,l}$, $l \in S$ can be posed as a sparse approximation problem:
$$S^{*}, \lambda^{*} = \operatorname*{argmin}_{|S| \leq (1-\beta)n, \, \lambda} \; \sum_{j \in \{1,2,\dots,n\}} ||f_{:,j} - \sum_{l \in S} \lambda_{j,l} f_{:,l}||^{2} \tag{2}$$
where $n$ is the initial number of output channels in the current layer, and the pruning ratio $\beta$ is the fraction of channels that are pruned. Algorithm 1 describes an *orthogonal matching pursuit* (OMP) based approximation (Tropp & Gilbert, 2007; Cai & Wang, 2011) for estimating $S$ and $\lambda_{j,l}$, $\forall j = 1, ..., n$; $l \in S$.
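As a concrete sketch of this OMP-style selection for equation 2 (illustrative code, not the authors' implementation): filters are flattened and normalized, the candidate with the largest total projection onto the current residuals is added to $S$, and the coefficients $\lambda$ are re-fit by least squares.

```python
import numpy as np

def fp_omp(F, beta):
    """Greedy OMP-style selection: F is (d, n) with columns f_{:,j} (flattened
    K^2*m-dimensional filters); retain ceil((1 - beta) * n) filters whose span
    best reconstructs every column."""
    d, n = F.shape
    Fn = F / np.linalg.norm(F, axis=0, keepdims=True)        # normalize columns
    keep = int(np.ceil((1 - beta) * n))
    S, R = [], Fn.copy()                                     # selected set, residuals
    while len(S) < keep:
        scores = np.abs(Fn.T @ R).sum(axis=1)                # xi_i = sum_j |R_j . f_i|
        scores[S] = -np.inf                                  # skip already-selected
        S.append(int(np.argmax(scores)))
        lam, *_ = np.linalg.lstsq(Fn[:, S], Fn, rcond=None)  # re-fit lambda on S
        R = Fn - Fn[:, S] @ lam                              # update residuals
    return S, lam

rng = np.random.default_rng(1)
F = rng.normal(size=(27, 8))   # e.g. K = 3, m = 3 (d = 27), n = 8 filters
S, lam = fp_omp(F, beta=0.5)
print(len(S))  # 4 retained filters
```

By construction, the retained filters are reproduced exactly (zero residual), while the pruned filters are approximated by their least-squares combination, i.e., the $\epsilon_j$ of equation 1.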
Note that equation 2 denotes a multivariate regression version of the sparse approximation problem, where the predicted variable is a vector $f_{:,j}$, $j = 1, ..., n$ with corresponding independent parameters $\lambda_{j,:}$. Since the total error is the sum of squared errors of the individual components, it is easy to see that the projections used in the standard OMP algorithm can be replaced with the sum of projections over the multivariate regression components (line 13 of Algorithm 1).

**Algorithm 1** Filter Pruning-OMP (FP-OMP)
1: **Input:** $n$: number of filters,
2: $\beta$: pruning fraction,
3: $f_{:,j} \in \mathbb{R}^{K^2 m}$, $j = 1, ..., n$: filters
4: **Initialize:**
5: Normalize $f_{:,j}$ such that $||f_{:,j}||_2 = 1$
6: $R_j = f_{:,j}$ $\forall j \in \{1, 2, .., n\}$ ▷ Residual error
7: $S = \phi$ ▷ Set of selected filters
8: **while** $|S| \leq (1-\beta) * n$ **do**
9: **for** $i$ in $S'$ **do**
10: **for** $j$ in $\{1, 2, .., n\}$ **do**
11: Compute $Proj_{ij} = R_j \cdot f_{:,i}$
12: **end for**
13: Total projection $\xi_i = \sum_{j=1}^{n} |Proj_{ij}|$
14: **end for**
15: $ind = \arg\max_i \xi_i$
16: $S \leftarrow S \cup \{ind\}$
17: **for** $j$ in $\{1, 2, .., n\}$ **do**
18: $\vec{\lambda}_{j,:} = \operatorname{argmin}_{\lambda_j} ||f_{:,j} - \sum_{l \in S} \lambda_{j,l} f_{:,l}||^2$
19: $R_j \leftarrow f_{:,j} - \sum_{l \in S} \vec{\lambda}_{j,l} f_{:,l}$
20: **end for**
21: **end while**
22: **Output:**
23: $S$, $\lambda_{j,l}$ $\forall l \in S$, $\forall j \in \{1, 2, .., n\}$

**Algorithm 2** Layer Selection: HBGS
1: **Input:** $C$: number of layers,
2: $F_c$, $c = 1, ..., C$: filters,
3: $D$: training dataset,
4: $\alpha$: number of filters pruned in one go,
5: $\beta$: total pruning ratio over all layers
6: **Initialize:** $y^0_c = F^0_c * y^0_{c-1}$, $c = 1, ..., C$; $t \leftarrow 0$
7: **while** overall pruning ratio $< \beta$ **do**
8: $e^t_c = 0$ $\forall c = 1, ..., C$
9: $G^t_c \leftarrow$ FP-OMP$(n, \frac{\alpha}{n}, F^t_c)$ $\forall c = 1, ..., C$, where $n = |F^t_c|$
10: **for** $i = 1, ..., N$ **do**
11: **for** $c = 1, ..., C$ **do**
12: Calculate output $y^t_c(i) = F^t_c * y^t_{c-1}(i)$
13: $e^t_c = e^t_c + \frac{||y^0_c(i) - G^t_c * y^t_{c-1}(i)||^2}{||y^0_c(i)||^2}$
14: **end for**
15: **end for**
16: $c_{min} = \arg\min_c e^t_c$
17: Revised network params: $F^{t+1}_{c_{min}} = G^t_{c_{min}}$; $F^{t+1}_c = F^t_c$ $\forall c \neq c_{min}$
18: $t \leftarrow t + 1$
19: Run 1 epoch of finetuning
20: **end while**
21: **Output:** Pruned filters $F^t_c$ $\forall c = 1, ..., C$

This approach has two advantages: (1) it is much faster than LRF, since fine-tuning is performed once for each layer, whereas in LRF it is performed after every pruning step (i.e., as many times as the number of pruned filters in a layer), and (2) it provides an optimality guarantee for the selected filters in terms of reconstruction error, under conditions on the incoherence matrix of the features (Cai & Wang, 2011). The overall time complexity of Algorithm 1 is $O(|S| n^3)$. In the normal application scenario, where the size of the selected set $|S|$ is small compared to $n$, this algorithm is fast ($O(n^3)$).

LRF also uses a $1 \times 1$ convolution layer $g_{j,k}$, $j,k = 1, ..., n$ to compensate for the loss of channel outputs. The modified output feature map $Z_k$, $k = 1, ..., n$ is given by $Z_k = \sum_{j=1}^{n} Y_j * g_{j,k} := \sum_{j=1}^{n} X * f_{:,j} * g_{j,k}$, when the output filters $Y_j$, $j = 1, ..., n$ are not pruned. However, after pruning, the output feature map from the original convolutional layer becomes $Y'_j = \sum_{l \in S} X * f_{:,l}$. *Weight compensation* is a method for modifying the weights of the $1 \times 1$ convolutional layer, $g'_{l,k}$, $l \in S$, $k = 1, ..., n$, such that the final predicted output $Z'_k = \sum_{l \in S} X * f_{:,l} * g'_{l,k}$ matches $Z_k$. The following result provides a formula for calculating $g'_{l,k}$.

**Result 1.** *Given $Z_k$, $Z'_k$, $g_{j,k}$, and $g'_{l,k}$ defined as above, and $\lambda_{j,l}$, $j = 1, ..., n$; $l \in S$ estimated using the filter pruning process.*
*Letting $g'_{l,k} = g_{l,k} + \sum_{l' \in S^c} \lambda_{l',l} * g_{l',k}$, $\forall l \in S$, $k = 1, ..., n$, ensures that $Z_k - Z'_k = \sum_{l' \in S^c} X * \epsilon_{l'} * g_{l',k}$, where $\epsilon_{l'}$ is the error vector for the estimation of the removed filter $l' \in S^c$, and $S^c$ denotes the set of all removed filters.*

For brevity, the derivation of this result is provided in the appendix. This result provides us with a formula for updating the weights of the $1 \times 1$ filters, thus obviating the need to update them using SGD.

## 2.2 Hierarchical Backward Greedy Search (HBGS)

The algorithm outlined in the previous section selects a fixed fraction $(1-\beta)$ of filters from each layer. However, as shown in the results, each layer can have a different fraction of important filters, depending on the architecture. Hence, determining the fraction of filters $\beta_c$ to be pruned in layer $c$ is an important problem. Intuitively, $\beta_c$ should not be determined from the filter weights, since comparing them across layers is not meaningful. For example, the weights in one layer may be scaled by a constant factor compared to those in another layer. Hence, we use the reconstruction error of filter outputs on the input training dataset as the criterion. Let $D = \{(u_1, v_1), ..., (u_N, v_N)\}$ be the training dataset, and $y_c(i)$ be the output feature map of layer $c$ when the training datapoint $(u_i, v_i)$ is input to the CNN. Also, let $U_c(i)$ be the output of the $c$th layer when the training datapoint $(u_i, v_i)$ is input to the unpruned CNN. Moreover, let $F_c = \{\sum_{l' \in S_c} (f^c_{:,l'} * g^c_{l',k}), \forall k = 1, ..., n_c\}$ be the composite convolutional map of the pruned filters and $1 \times 1$ convolution for layer $c$, obtained from a filter pruning method (e.g., FP-OMP described in the previous section). The relative reconstruction error $e_c$ for layer $c$ is given by:
$$e_c = \sum_{(u_i, v_i) \in D} \frac{||U_c(i) - F_c * y_{c-1}(i)||^2}{||U_c(i)||^2}$$
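This per-layer criterion can be sketched in a few lines of NumPy. The sketch below is purely illustrative: dense matrix products stand in for the convolutional maps, `W` plays the role of the current layers, and `G` plays the role of the pruned candidates returned by a filter-pruning step; the noise magnitudes are chosen so that pruning layer 1 is clearly the most damaging.

```python
import numpy as np

rng = np.random.default_rng(2)
C, N, d = 3, 5, 6
W = [rng.normal(size=(d, d)) for _ in range(C)]      # current layer maps
G = [w + 0.01 * rng.normal(size=(d, d)) for w in W]  # pruned candidates
G[1] = W[1] + 0.5 * rng.normal(size=(d, d))          # pruning layer 1 hurts most
X = rng.normal(size=(N, d))                          # N training inputs

e = np.zeros(C)
for i in range(N):
    y_prev = X[i]
    for c in range(C):
        u = W[c] @ y_prev                 # unpruned output U_c(i)
        y_hat = G[c] @ y_prev             # output of the pruned candidate
        e[c] += np.sum((u - y_hat) ** 2) / np.sum(u ** 2)
        y_prev = u
c_min = int(np.argmin(e))                 # prune from the least-damaging layer
print("prune layer", c_min)
```

The greedy layer-selection then commits the pruned candidate for layer `c_min` and repeats, which is exactly the per-iteration structure of HBGS.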
We propose a *hierarchical backward greedy search* (HBGS) technique in Algorithm 2 to both estimate $\beta_c$ for each layer $c$ and select the appropriate filters from each layer. Given the number of filters $\alpha$ to be pruned in one go, the algorithm proceeds iteratively by performing two broad steps in each iteration: (1) determine the optimal $\alpha$ filters to be pruned in each layer $c$, and (2) calculate the total relative reconstruction error $e_c$ as described above. Finally, the model parameters are updated to prune filters from the layer that leads to the lowest relative reconstruction error. Line 9 of Algorithm 2 describes the first step, and lines 10 - 15 describe an efficient implementation of the second step, where the errors for all layers are computed in one forward pass per example. The iterations continue until an overall pruning criterion, e.g., a parameter pruning ratio or a percentage FLOP reduction, is reached. The parameter $\alpha$ is chosen to speed up the overall execution and can be set to 1 if the running time is not a concern. The overall time complexity of Algorithm 2, when using Algorithm 1 as the filter pruning algorithm, is $O(TC(N + n^4))$, where $T$ is the number of iterations needed to achieve the desired pruning (which depends on $\alpha$ and the pruning criterion), $C$ is the number of layers, $N$ is the training dataset size, and $n$ is the number of filters in each layer. While HBGS (Algorithm 2) can select a variable number of filters from each layer, the sequential search over the layers for the best filters renders this algorithm expensive. In the next section, we develop a faster filter pruning algorithm.

## 2.3 Backward Elimination Algorithm For Filter Pruning

The time complexity of the HBGS algorithm depends on the training set size $N$ and the average number of filters per layer $n$.
In many cases, when the time complexity of the filter pruning step, $O(TCn^4)$, is substantially larger than that of the error computation step, $O(TCN)$, the filter pruning algorithm becomes substantially more expensive than fine-tuning on the training dataset. The main problem is that the OMP-based filter pruning algorithm (FP-OMP) adds one filter in each step, which is an efficient strategy if the number of filters to be selected is small compared to the total number of filters. However, in the context of the HBGS algorithm, FP-OMP is called sequentially many times (Algorithm 2, line 9) with a decreasing number of filters to be selected each time. In this context, a backward elimination (Couvreur & Bresler, 2000; Ament & Gomes, 2021) based approach, which iteratively removes the feature that causes a minimal increase in approximation error, is intuitively more appropriate. While the original backward elimination algorithm described in (Couvreur & Bresler, 2000) is $O(n^4)$, a faster implementation based on block-matrix inversion was described in (Reeves, 1999), with a time complexity of $O(n^3)$. Here, we derive a similar algorithm for our problem. For simplicity, we follow the notation in (Reeves, 1999). For a given layer with $n$ output channels, $m$ input channels, and $K \times K$ filter size, we re-define the input filter matrix as $A \in \mathbb{R}^{K^2 m \times n}$, where each column is a flattened vector of filter weights, $A_{:,j} = f_{:,j}$, $j = 1, ..., n$. We also denote the output of the sparse approximation as $B \in \mathbb{R}^{K^2 m \times n}$, which is the same as $A$ in this case. We are interested in the approximation $B \approx A\lambda$, $\lambda \in \mathbb{R}^{n \times n}$, where $\lambda_{:,j}$ is the weight vector for the $j$th output filter. We note that the least squares solution for $\lambda_{:,j}$ is decoupled from $\lambda_{:,j'}$ where $j \neq j'$.
Hence, the least squares solution becomes:
$$\lambda^{*}_{:,j} = \operatorname*{argmin}_{\lambda_{:,j}} ||B_{:,j} - \sum_{l=1,...,n} A_{:,l}\lambda_{l,j}||^{2} = (A^{T}A)^{-1}A^{T}B_{:,j}, \qquad \forall j = 1, ..., n \tag{3}$$
Hence, the total least squares error is given by:
$$E(A,B) = \sum_{j=1,...,n} \|B_{:,j} - A(A^{T}A)^{-1}A^{T}B_{:,j}\|^{2} = \sum_{j=1,...,n} (B_{:,j}^{T}B_{:,j} - B_{:,j}^{T}A(A^{T}A)^{-1}A^{T}B_{:,j}) \tag{4}$$

**Algorithm 3** Filter Pruning-Backward Elimination (FP-Backward)
1: **Input:** $n$: number of filters, $\beta$: pruning fraction,
2: $f_{:,j} \in \mathbb{R}^{K^2 m}$, $j = 1, ..., n$: filters
3: **Initialize:** $S = \{1, ..., n\}$ ▷ Set of currently retained filters
4: $B_{:,j} = [f_{:,j}]_{j=1,...,n}$ ▷ Matrix of predicted filter weights
5: $A = B$ ▷ Matrix of retained filter weights
6: **while** $|S| > (1-\beta) * n$ **do**
7: $G = [A^T A]^{-1} \in \mathbb{R}^{|S| \times |S|}$
8: **for** $k = 1, ..., |S|$ **do**
9: $g_k = G_{\{-k\},k}$
10: $\gamma_k = G[k,k]$
11: $d_k = A_{-k} g_k + a_k \gamma_k$
12: $u_k = \sum_{j=1,...,n} \frac{|d_k^T B_{:,j}|^2}{\gamma_k}$
13: **end for**
14: $k^* = \operatorname{argmin}_{k=1,...,|S|} u_k$
15: $S \leftarrow S \setminus \{S[k^*]\}$ ▷ remove original index corresponding to $k^*$
16: $A = A_{:,\{-k^*\}}$ ▷ remove selected column
17: **end while**
18: Calculate $\lambda$ using equation 3
19: **Output:** Set of selected filters $S$, $\lambda$

We are interested in calculating the increase in $E(A, B)$ if one column of $A$ and the corresponding row of $\lambda$ are removed from the input. Let $A = [A_{-k} \; a_k]\Pi_k^T$, where $A_{-k}$ is the sub-matrix of $A$ after removing the $k$th column $a_k$, and $\Pi_k$ is the permutation matrix which permutes the columns of $A$ so that the $k$th column is last.
We also have the following definitions of $G_k$, $g_k$, $\gamma_k$, $D_k$ and $d_k$:
$$\begin{bmatrix} G_k & g_k \\ g_k^T & \gamma_k \end{bmatrix} = \Pi_k^T (A^T A)^{-1} \Pi_k \; ; \quad \begin{bmatrix} D_k^T \\ d_k^T \end{bmatrix} = \begin{bmatrix} G_k A_{-k}^T + g_k a_k^T \\ g_k^T A_{-k}^T + \gamma_k a_k^T \end{bmatrix} \tag{5}$$
We note from equation 4 that only the second term in $E(A, B)$ depends on $A$. Hence, we state the following result, which connects the above equations to compute the increase in the least squares error for the case of *multivariate linear regression*.

**Result 2.** *Given the definitions of $A_{-k}$, $d_k$, and $\gamma_k$ above, the following relation holds:*
$$\sum_j B_{:,j}^T A_{-k}(A_{-k}^T A_{-k})^{-1} A_{-k}^T B_{:,j} = \sum_j B_{:,j}^T A (A^T A)^{-1} A^T B_{:,j} - \sum_j \frac{1}{\gamma_k} |d_k^T B_{:,j}|^2$$
*hence,* $E(A_{-k}, B) = E(A, B) + \sum_{j=1,...,n} \frac{1}{\gamma_k} |d_k^T B_{:,j}|^2$.

This result is a generalization of the result reported in (Reeves, 1999). For conciseness, we provide the derivation of this result in the appendix. In light of the above result, FP-Backward (Algorithm 3) provides the steps for backward-elimination-based filter pruning. Note that line 7 in Algorithm 3 is the most expensive step in the while loop (lines 6 - 17); $G$ can be estimated from its value in the previous time-step using block matrix inversion, with a worst-case complexity of $O(n^2)$. Also, for most calls to the algorithm, the parameter $\beta$ is very low (typically $\leq 0.05$), leading to far fewer iterations of the while loop (lines 6 - 17), whose number can be assumed to be constant. Moreover, this cost goes down over the iterations, as the size of the matrix $G$ reduces significantly, reaching only a small fraction of $n$. Hence, assuming a constant number of loop executions (lines 6 - 17), the overall complexity of Algorithm 3 is $O(n^2)$, which is a two-orders-of-magnitude improvement over using Algorithm 1.
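Result 2 is easy to check numerically. The sketch below (illustrative, with $B = A$ as in FP-Backward) computes the closed-form error increase for deleting column $k$, using $\gamma_k$ and $g_k$ read off from $(A^T A)^{-1}$ and $d_k = A_{-k} g_k + a_k \gamma_k$ as in line 11 of Algorithm 3, and compares it against a direct recomputation of the least-squares error.

```python
import numpy as np

def E(A, B):
    """Total least-squares error of predicting the columns of B from span(A)."""
    P = A @ np.linalg.inv(A.T @ A) @ A.T
    return np.sum((B - P @ B) ** 2)

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 6))
B = A.copy()                              # in FP-Backward, B = A
k = 2                                     # column (filter) to eliminate

Ginv = np.linalg.inv(A.T @ A)
gamma_k = Ginv[k, k]
g_k = np.delete(Ginv[:, k], k)
A_mk = np.delete(A, k, axis=1)
d_k = A_mk @ g_k + A[:, k] * gamma_k      # as in line 11 of Algorithm 3

increase = np.sum((d_k @ B) ** 2) / gamma_k
assert np.isclose(E(A_mk, B), E(A, B) + increase)
print("Result 2 holds numerically")
```

This is what makes FP-Backward fast: each candidate deletion is scored from quantities already available in $G = (A^T A)^{-1}$, without re-solving any least-squares problem.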
## 2.4 Hierarchical Backward Greedy Tree Search (HBGTS)

A key idea behind the hierarchical backward greedy search (HBGS) algorithm is to select the layer which results in a minimal relative error when pruned from. However, the prediction error of a layer's output is not always indicative of the ultimate predictive performance of the overall network. On the other hand, computing the error of the final network output involves the re-computation of changes through all the downstream layers, starting with the pruned layer.

**Algorithm 4** Layer Selection: HBGTS
1: **Input:** $C$: number of layers, $F_c$, $c = 1, ..., C$: filters, $D$: training dataset,
2: $\alpha$: number of filters pruned in one step, $\beta$: pruning ratio over the network
3: **Initialize:** $y^0_{c,0} = F^0_c * y^0_{c-1,0}$ $\forall c = 1, ..., C$, $t \leftarrow 0$
4: **while** overall pruning ratio $< \beta$ **do**
5: $e^t_j = 0$ $\forall j = 0, 1, ..., C$ ▷ Initialize total error for each layer pruned
6: $G^t_c \leftarrow$ FP-OMP$(n, \frac{\alpha}{n}, F^t_c)$ $\forall c = 1, ..., C$, where $n = |F^t_c|$
7: **for** $i = 1, ..., N$ **do**
8: **for** $c$ in $\{1, .., C\}$ **do**
9: $y^t_{c,0}(i) = F^t_c * y^t_{c-1,0}(i)$ ▷ Unpruned propagation of unpruned previous layers
10: $y^t_{c,1}(i) = G^t_c * y^t_{c-1,0}(i)$ ▷ Pruned propagation of unpruned previous layers
11: **for** $j = 1, ..., c-1$ **do**
12: $y^t_{c,j+1}(i) = F^t_c * y^t_{c-1,j}(i)$ ▷ Unpruned propagation of previous $j$th pruned layer
13: **end for**
14: **end for**
15: **for** $j = 1, ..., C$ **do**
16: $e^t_j = e^t_j + \frac{||y^t_{C,0}(i) - y^t_{C,C-j+1}(i)||^2}{||y^t_{C,0}(i)||^2}$ ▷ Error in final-layer output when the $j$th layer is pruned
17: **end for**
18: **end for**
19: $c_{min} = \operatorname{argmin}_{j=1,...,C} (e^t_j)$
20: Revised network params: $F^{t+1}_{c_{min}} = G^t_{c_{min}}$; $F^{t+1}_c = F^t_c$ $\forall c \neq c_{min}$
21: $t \leftarrow t + 1$
22: Run 1 epoch of finetuning for network parameters
23: **end while**
24: **Output:** Pruned filters $F^t_c$, $\forall c = 1, ..., C$
A naive implementation of this can lead to significant compute overhead, since it requires $O(CN)$ forward inferences through the network for each pruning step, where $C$ is the number of layers and $N$ is the number of data points in the training set. Algorithm 4 presents *hierarchical backward greedy tree search* (HBGTS), an efficient implementation for determining the best layer to prune from at every stage. A key idea here is to calculate the error in the final layer output, $e^t_j$, when layer $j \in \{1, ..., C\}$ is pruned, for each input training example. Hence, we need to perform only one forward pass per pruning step for each example $i \in \{1, ..., N\}$. To implement this algorithm, we utilize a data structure $y^t_{c,j}$ which stores the output of the $c$th layer, $c = 1, ..., C$, when the $(c-j+1)$th layer is pruned, $j = 1, ..., C$, in the $t$th pruning step. Here, $y^t_{c,0}$ represents the output of the $c$th layer when no filter has been pruned. There are three cases when calculating $y^t_{c,:}$ from $y^t_{c-1,:}$ (lines 9, 10, and 12 in Algorithm 4): (1) calculation of the next unpruned output $y^t_{c,0}$ (unpruned propagation of unpruned previous layers), (2) calculation of the output corresponding to the currently pruned layer, $y^t_{c,1}$ (pruned propagation of unpruned previous layers), and (3) unpruned propagation of the pruned outputs corresponding to all previous layers. Here, we only need to store $y^t_{c,j}$ for the current timestamp $t$. Hence, the space complexity of the modified algorithm increases only by $O(nd)$, where $d$ is the output size of each layer. The overall time complexity of the layer selection algorithm for $T$ pruning steps becomes $O(TC^2N)$. Hence, using the backward elimination strategy, the overall time complexity of the proposed algorithm HBGTS-B is $O(TC^2N + TCn^2)$. Comparatively, the vanilla backward search version of the algorithm, HBGS-B, which uses backward elimination for filter pruning, takes $O(TCN + TCn^2)$ time.
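To make this bookkeeping concrete, here is a toy NumPy sketch of the single-pass computation of all $e^t_j$, checked against $C$ naive forward passes. It is purely illustrative: dense matrix products stand in for convolutions, and scaling each layer by a factor stands in for pruning it (the factors are chosen so layer 4's candidate changes the output least).

```python
import numpy as np

rng = np.random.default_rng(4)
C, d = 4, 5
W = [rng.normal(size=(d, d)) for _ in range(C)]         # current layers F_c
G = [(0.9 + 0.02 * c) * W[c] for c in range(C)]         # pruned candidates G_c
x = rng.normal(size=d)                                  # one training example

# Single forward pass: y[c][0] is the unpruned output of layer c, and
# y[c][j] (j >= 1) is the output of layer c when only layer c-j+1 is pruned.
y = [[x]]                                               # "layer 0" is the input
for c in range(1, C + 1):
    outs = [W[c - 1] @ y[c - 1][0],                     # unpruned propagation
            G[c - 1] @ y[c - 1][0]]                     # prune the current layer
    outs += [W[c - 1] @ prev for prev in y[c - 1][1:]]  # carry older variants
    y.append(outs)

# Relative final-output error e_L when layer L is pruned
final = y[C]
e = np.array([np.sum((final[0] - final[C - L + 1]) ** 2) / np.sum(final[0] ** 2)
              for L in range(1, C + 1)])

# Sanity check against C naive forward passes
for L in range(1, C + 1):
    z = x
    for c in range(1, C + 1):
        z = (G[c - 1] if c == L else W[c - 1]) @ z
    assert np.allclose(final[C - L + 1], z)
print("prune layer", int(np.argmin(e)) + 1)  # prune layer 4
```

All $C$ pruned variants of the final output are produced in one pass by carrying $C+1$ intermediate vectors per example, mirroring lines 9, 10, and 12 of Algorithm 4.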
However, as shown in the next section, HBGTS-B marginally outperforms HBGS-B in terms of the accuracy of the pruned models for a given pruning ratio.

Table 1: Performance comparison between different pruning methods on **VGG16/CIFAR100** at 98% parameter reduction, **ResNet18/CIFAR10** at 95% parameter reduction, **ResNet56/CIFAR10** and **ResNet32/CIFAR10** at 63% parameter reduction, averaged over three runs. ± represents standard deviation, ↓ indicates a drop, and bold denotes the best result.

VGG16/CIFAR100 @ 98% (left block) and ResNet18/CIFAR10 @ 95% (right block):

| Method | Test Acc (%) | Acc ↓ (%) | Param ↓ (%) | FLOPs ↓ (%) | Test Acc (%) | Acc ↓ (%) | Param ↓ (%) | FLOPs ↓ (%) |
|---|---|---|---|---|---|---|---|---|
| Dense | 67.1 ± 0.01 | 0 ± 0 | - | - | 94.5 ± 0.02 | 0 ± 0 | - | - |
| Random | 55.5 ± 0.16 | 11.6 ± 0.16 | 98.0 | 86.0 | 86.3 ± 0.06 | 8.2 ± 0.06 | 93.7 | 65.0 |
| EarlyCroP-S (Rachwan et al., 2022) | 62.8 ± 0.52 | 4.3 ± 0.52 | 97.9 | 88.0 | 91.0 ± 0.52 | 3.5 ± 0.52 | 95.1 | 65.8 |
| DLRFC (He et al., 2022) | 63.5 ± 0.09 | 3.56 ± 0.09 | 97.1 | 53.7 | - | - | - | - |
| SAP (Diao et al., 2023) | - | - | - | - | 91.4 ± 0.03 | 3.1 ± 0.03 | 94.9 | 64.9 |
| PL (Chen et al., 2023) | 63.5 ± 0.03 | 3.6 ± 0.03 | 97.3 | 87.9 | - | - | - | - |
| LRF (Joo et al., 2021) | 64.0 ± 0.31 | 3.1 ± 0.31 | 97.9 | 88.0 | 91.5 ± 0.37 | 3.0 ± 0.37 | 95.1 | 65.8 |
| FP-Backward | 66.2 ± 0.11 | 0.9 ± 0.11 | 97.9 | 88.0 | 92.8 ± 0.15 | 1.7 ± 0.15 | 95.1 | 65.8 |
| HBGS | 67.3 ± 0.17 | −0.2 ± 0.17 | 98.3 | 89.6 | 93.9 ± 0.24 | 0.6 ± 0.24 | 95.3 | 66.2 |
| HBGS-B | 67.2 ± 0.15 | −0.1 ± 0.15 | 98.1 | 89.4 | 93.7 ± 0.22 | 0.8 ± 0.22 | 95.2 | 66.0 |
| HBGTS | **67.8 ± 0.23** | **−0.7 ± 0.23** | 98.5 | 89.8 | **94.7 ± 0.28** | **−0.2 ± 0.28** | 95.6 | 66.7 |
| HBGTS-B | 67.6 ± 0.21 | −0.5 ± 0.21 | 98.4 | 89.7 | 94.6 ± 0.24 | −0.1 ± 0.24 | 95.4 | 66.5 |

ResNet56/CIFAR10 @ 63% (left block) and ResNet32/CIFAR10 @ 63% (right block):

| Method | Test Acc (%) | Acc ↓ (%) | Param ↓ (%) | FLOPs ↓ (%) | Test Acc (%) | Acc ↓ (%) | Param ↓ (%) | FLOPs ↓ (%) |
|---|---|---|---|---|---|---|---|---|
| Dense | 93.45 ± 0.02 | 0 ± 0 | - | - | 92.49 ± 0.01 | 0 ± 0 | - | - |
| SFP (He et al., 2018) | 92.91 ± 0.47 | 0.54 ± 0.47 | 63.19 | 52.60 | 91.94 ± 0.12 | 0.55 ± 0.12 | 63.02 | 41.50 |
| FPGM (He et al., 2019) | 93.14 ± 0.21 | 0.31 ± 0.21 | 63.21 | 52.60 | 91.79 ± 0.94 | 0.70 ± 0.94 | 63.14 | 53.20 |
| HRank (Lin et al., 2020) | 92.56 ± 0.05 | 0.89 ± 0.05 | 63.04 | 62.43 | - | - | - | - |
| LFPC (He et al., 2020) | 92.89 ± 0.17 | 0.56 ± 0.17 | 63.25 | 52.90 | 91.98 ± 0.06 | 0.51 ± 0.06 | 63.05 | 52.60 |
| CHIP (Sui et al., 2021) | 92.88 ± 0.18 | 0.57 ± 0.18 | 63.12 | 62.08 | - | - | - | - |
| ASyminchange (El Halabi et al., 2022) | 93.27 ± 0.11 | 0.18 ± 0.11 | 63.28 | 62.34 | - | - | - | - |
| LRF (Joo et al., 2021) | 93.49 ± 0.13 | −0.04 ± 0.13 | 63.35 | 62.56 | 92.52 ± 0.16 | −0.03 ± 0.16 | 63.34 | 62.55 |
| FP-Backward | 93.88 ± 0.06 | −0.43 ± 0.06 | 63.35 | 62.56 | 92.85 ± 0.04 | −0.36 ± 0.04 | 63.34 | 62.55 |
| HBGS | 94.15 ± 0.09 | −0.70 ± 0.09 | 63.87 | 64.91 | 93.06 ± 0.07 | −0.57 ± 0.07 | 63.65 | 64.88 |
| HBGS-B | 94.12 ± 0.08 | −0.67 ± 0.08 | 63.72 | 64.80 | 93.03 ± 0.05 | −0.54 ± 0.05 | 63.63 | 64.78 |
| HBGTS | **94.38 ± 0.13** | **−0.93 ± 0.13** | 63.93 | 64.95 | **93.28 ± 0.12** | **−0.79 ± 0.12** | 63.76 | 64.92 |
| HBGTS-B | 94.35 ± 0.11 | −0.90 ± 0.11 | 63.89 | 64.93 | 93.26 ± 0.09 | −0.77 ± 0.09 | 63.75 | 64.90 |

## 3 Experimental Results

In this section, we describe the experimental setup and the datasets used. We compare the performance of the proposed pruning methods against state-of-the-art methods. Furthermore, we conduct a comprehensive examination of the working of the proposed backward greedy search methods.
## 3.1 Experimental Setting

**Dataset Description:** For the image classification task, we utilize three datasets: CIFAR10, CIFAR100, and Tiny-Imagenet. CIFAR10 consists of 10 classes, with a training set of 50k images and a test set of 10k images, all with a resolution of 32 × 32. Each class contains 5k training images and 1k test images. Similarly, CIFAR100 comprises 100 classes, with 500 training images and 100 test images per class. Tiny-Imagenet contains 200 classes and includes a total of 0.1M images. For our experiments, we resize the original 64 × 64 images of Tiny-Imagenet to 224 × 224.

**Training Details:** Our experiments involve ResNet18, ResNet32, ResNet56, VGG16, and ResNext model architectures, with various percentages of parameter reduction. We prune a pre-trained model and adopt the training settings from LRF (Joo et al., 2021). Before pruning begins, we warm up the model for 20 epochs. In contrast to LRF, where the model is fine-tuned for a single epoch after each filter removal, we fine-tune the model after pruning the entire β fraction of filters from each layer. After the completion of pruning for the entire model, we fine-tune the pruned model for 300 epochs. For fine-tuning, we set the initial learning rate to 1e−2 with a decay rate of 1e−4. Additionally, we use a step scheduler that reduces the learning rate by a factor of 10 at epoch 150. Baselines were implemented using the code provided by the authors, and the recommended hyperparameters were used. We also performed a hyperparameter search over the number of epochs, pruning ratios, and learning rates and reproduced the best results.

![Figure 2](9_image_0.png)

Figure 2: Test accuracy for (a) **ResNet56/CIFAR100**, (b) **VGG16/CIFAR100**, and (c) **ResNet18/TinyImagenet** with increasing parameter reduction.

**Performance Metric:** We report the *test accuracy* for various pruning methods. The *dense* model's test accuracy corresponds to the *pre-trained* model's accuracy.
Additionally, we report the accuracy drop (**Acc ↓**) from the dense model. We also report the drop in parameters (**Param ↓**) and FLOPs (**FLOPs ↓**) as metrics to assess the level of pruning and model efficiency. The reduction in parameters refers to the decrease in the number of parameters/weights across all retained filters. FLOPs, on the other hand, refer to the number of operations (convolutions) within the retained filters.

## 3.2 Performance Comparison: Accuracy And Efficiency

We compare our proposed pruning methods with the state-of-the-art methods in Table 1. We observe that our proposed methods (HBGS, HBGTS, FP-Backward, HBGS-B, and HBGTS-B) exhibit higher pruned accuracy than state-of-the-art methods for a comparable drop in the number of parameters. We also observe that our proposed methods consistently achieve a higher drop in FLOPs than state-of-the-art methods.

**ResNet and VGG on CIFAR-100 and Tiny-Imagenet:** Figure 2 and Table 1 provide further insights into the consistently superior performance of our proposed methods. We observe that the test accuracy of the proposed methods is consistently and significantly better than that of the baseline methods. The fact that this difference is more pronounced on difficult datasets (CIFAR100 and Tiny-Imagenet) further demonstrates the superiority of the proposed methods. From Figure 2, we notice that, *as the percentage of parameter reduction increases, the difference in test accuracy between our proposed methods and state-of-the-art methods also grows*. At higher parameter reduction (≥ 90%), the proposed methods outperform existing methods by ∼5% (see Figure 2). This trend reinforces the effectiveness of the proposed methods.

**To Prune a Large Model:** Our backward greedy search methods can be used to effectively prune large models that exceed the capacity of commodity GPUs.
We use ResNext101 32x16d as our large model, which consists of 193M parameters and requires 7.62 GB of GPU memory for loading. Additionally, we use ResNext101 32x8d as our smaller dense model, which has 88M parameters and requires 3.91 GB of GPU memory. Table 2 shows that ResNext101 32x16d, pruned to 98% parameter reduction using HBGTS-B, achieves a test accuracy that matches its dense counterpart. Hence, we can efficiently deploy the pruned model on edge devices with less than 2 GB of GPU memory. Furthermore, the pruned model takes 5.04 times less GPU memory than the larger dense model. Notably, the pruned model even outperforms the smaller dense model, ResNext101 32x8d.

Table 2: Comparison of pruning methods for **ResNext101 32x16d** (RN16) and a similarly sized dense ResNext101 32x8d (RN8) on CIFAR10 at 98% parameter reduction.

| Method | Test Acc (%) | Acc ↓ (%) | Param ↓ (%) | FLOPs ↓ (%) | VRAM (GB) |
|---|---|---|---|---|---|
| Dense RN16 | 92.1 | 0 | - | - | 7.62 |
| Dense RN8 | 91.8 | 0 | - | - | 3.91 |
| FP-Backward | 92.9 | −0.8 | 98.5 | 89.9 | 1.59 |
| HBGS-B | 93.0 | −0.9 | 98.7 | 92.1 | 1.55 |
| HBGTS-B | 93.2 | −1.1 | 98.8 | 94.3 | 1.51 |

**Time Comparison:** Figure 3(a) provides a comparison of the non-search methods in terms of pruning times. We can see that our proposed method FP-Backward is faster than the best baseline FP-OMP by a factor of 2 for a constant pruning ratio in each layer. This is also a fair comparison since the baseline methods also prune a constant fraction of filters from each layer. Figure 3(b) provides a comparison of the backward greedy search methods in terms of pruning times. We observe a relatively high pruning time for HBGS and HBGTS compared to HBGS-B and HBGTS-B. HBGS-B has a 54.40% and 56.16% drop in time compared to HBGS for ResNet32 and ResNet56, respectively.
Similarly, HBGTS-B has a 55.03% and 57.58% drop in time compared to HBGTS for ResNet32 and ResNet56, respectively, showing the effectiveness of the backward elimination strategy. Further, as expected, HBGTS is computationally more expensive than HBGS, taking approximately double the time. The most efficient hierarchical pruning method (HBGS-B) takes 5 hours for ResNet32 (see Figure 3(b)) (with α = 5 filters removed in each round), compared to 1 hour taken by FP-OMP. The pruning time can be further reduced by pruning a higher number of filters (α) in each round.

![Figure 3](10_image_0.png)

Figure 3: Time comparison on ResNet/CIFAR10 at 63% parameter reduction.

## 3.3 Analysis Of Backward Greedy Search Algorithms

We analyze the working of the proposed greedy search methods in terms of their pruning quality. Figure 4 illustrates a heat map showing the relative reconstruction error and the percentage of removed filters for each layer across the pruning rounds, using the HBGTS-B method for ResNet32 on the CIFAR-100 dataset at 63% parameter reduction. The relative reconstruction error is calculated as $\frac{\|y^t_{C,0} - y^t_{C,c}\|_2}{\|y^t_{C,0}\|_2}$, where $y^t_{C,0}$ is the output of the final classification layer when no pruning is applied in any layer of the network and $y^t_{C,c}$ is the output of the final classification layer when pruning is applied at layer $c$. Both the relative reconstruction error and the pruning percentage are depicted after every 7th round, each round pruning 5 filters. Examining Figure 4, we observe that the pruning percentage increases with each round, but not uniformly. For example, layers 14–18 have higher pruning compared to layers 1–3. The relative reconstruction error also decreases with the pruning rounds but is not uniform across layers. From the heat maps, it is evident that our method selects the layer with the least relative reconstruction error for pruning.
For example, layers 14–18 have moderate relative reconstruction errors in the initial pruning rounds, so their pruning percentage is also moderate. As the pruning rounds progress, the relative reconstruction error decreases for layers 14–18, and hence more pruning is done from those layers, as visible in the later rounds in Figure 4. This is in contrast to uniform pruning approaches, where pruning is applied uniformly across each layer.

![Figure 4](11_image_0.png)

Figure 4: Heat map of relative reconstruction error and pruning percentage while pruning **ResNet32** on CIFAR100 at 63% parameter reduction.

To understand the intuition behind the filter choices for pruning using HBGTS-B, we present a visualization of feature maps for two layers: Layer 2 (pruned by 31.25%) and Layer 10 (pruned by 93.75%). By examining the feature maps in Figure 5 (top row), we observe that Layer 2 exhibits a diverse range of filter outputs, indicating their effectiveness in capturing various input features. Consequently, our proposed method prunes only 31.25% of the filters in Layer 2 (as shown in the last column of pruning percentages in Figure 4). Similarly, Figure 5 (bottom row) displays feature map outputs from Layer 10, which appear very similar, indicating redundancy in the filter outputs. This observation aligns with the pruning percentages shown in the last column of Figure 4, where Layer 10 has 93.75% of its filters removed. Thus, we can conclude that the pruning percentages yielded by HBGTS-B are indicative of the amount of information carried by each filter in each layer. Filters with more diverse outputs are retained, while those with redundant outputs are pruned.
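The layer-selection signal described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the names `relative_reconstruction_error` and `select_layer_to_prune` are our own, and the per-layer pruned outputs are assumed to be precomputed:

```python
import numpy as np

def relative_reconstruction_error(y_dense: np.ndarray, y_pruned: np.ndarray) -> float:
    """||y_dense - y_pruned||_2 / ||y_dense||_2, where both arrays hold the
    outputs of the final classification layer (dense vs. layer-pruned network)."""
    return float(np.linalg.norm(y_dense - y_pruned) / np.linalg.norm(y_dense))

def select_layer_to_prune(y_dense: np.ndarray, y_pruned_per_layer: dict):
    """Pick the layer whose pruning perturbs the network output the least.

    y_pruned_per_layer maps a layer index c to the classification-layer
    output obtained when pruning is applied only at layer c."""
    errors = {c: relative_reconstruction_error(y_dense, y_c)
              for c, y_c in y_pruned_per_layer.items()}
    return min(errors, key=errors.get), errors
```

Choosing the layer with the smallest error mirrors the non-uniform behavior visible in Figure 4: layers whose filters are redundant, and thus barely perturb the output when pruned, accumulate high pruning percentages.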
![Figure 5](11_image_1.png)

Figure 5: Visualization of the output feature maps of the 2nd layer (top row, 31.25% pruned) and the 10th layer (bottom row, 93.75% pruned) of **ResNet32** on **CIFAR100**.

## 4 Conclusion

In this paper, we propose HBGS and HBGTS as pruning methods that optimize deep neural networks by reducing the number of parameters while maintaining or even improving their accuracy. The proposed pruning methods are based on sparse approximation for non-uniform pruning, enabling the removal of redundant filters regardless of the layer in which they are present. We demonstrate our methods' consistent performance across various architectures, kernel sizes, and block types through extensive experiments on various datasets. We also propose HBGS-B and HBGTS-B as efficient channel pruning methods, which offer significant advantages in terms of time efficiency compared to HBGS and HBGTS. The proposed approach can in principle be applied to any linear approximation-based pruning technique; one way of applying it to transformer-based models is to prune the filters in the feed-forward network (FFN).

## References

Sebastian Ament and Carla Gomes. On the optimality of backward regression: Sparse recovery and subset selection. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5599–5603. IEEE, 2021.

Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In *International Conference on Learning Representations*, 2017.

T Tony Cai and Lie Wang. Orthogonal matching pursuit for sparse signal recovery with noise. *IEEE Transactions on Information Theory*, 57(7):4680–4688, 2011.

Tianyi Chen, Bo Ji, Tianyu Ding, Biyi Fang, Guanyi Wang, Zhihui Zhu, Luming Liang, Yixin Shi, Sheng Yi, and Xiao Tu. Only train once: A one-shot neural network training and pruning framework. *Advances in Neural Information Processing Systems*, 34:19637–19651, 2021.
Yixuan Chen, Yubin Shi, Mingzhi Dong, Xiaochen Yang, Dongsheng Li, Yujiang Wang, Robert P Dick, Qin Lv, Yingying Zhao, Fan Yang, et al. Over-parameterized model optimization with Polyak-Łojasiewicz condition. In *The Eleventh International Conference on Learning Representations*, 2023.

Christophe Couvreur and Yoram Bresler. On the optimality of the backward greedy algorithm for the subset selection problem. *SIAM Journal on Matrix Analysis and Applications*, 21(3):797–808, 2000.

Enmao Diao, Ganghua Wang, Jiawei Zhang, Yuhong Yang, Jie Ding, and Vahid Tarokh. Pruning deep neural networks from a sparsity perspective. In *The Eleventh International Conference on Learning Representations*, 2023.

Xuanyi Dong and Yi Yang. Network pruning via transformable architecture search. *Advances in Neural Information Processing Systems*, 32, 2019.

Marwa El Halabi, Suraj Srinivas, and Simon Lacoste-Julien. Data-efficient structured pruning via submodular optimization. *Advances in Neural Information Processing Systems*, 35:36613–36626, 2022.

P Thomas Fletcher, Suresh Venkatasubramanian, and Sarang Joshi. Robust statistics on riemannian manifolds via the geometric median. In *2008 IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1–8. IEEE, 2008.

Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *arXiv preprint arXiv:1803.03635*, 2018.

Abhinav Goel, Caleb Tung, Yung-Hsiang Lu, and George K Thiruvathukal. A survey of methods for low-power deep learning and computer vision. In *2020 IEEE 6th World Forum on Internet of Things (WF-IoT)*, pp. 1–6. IEEE, 2020.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. *Advances in neural information processing systems*, 28, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding.
In *International Conference on Learning Representations*, 2016. Yang He and Lingao Xiao. Structured pruning for deep convolutional neural networks: A survey. *arXiv* preprint arXiv:2303.00566, 2023. Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. *arXiv preprint arXiv:1808.06866*, 2018. Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4340–4349, 2019. Yang He, Yuhang Ding, Ping Liu, Linchao Zhu, Hanwang Zhang, and Yi Yang. Learning filter pruning criteria for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2009–2018, 2020. Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE international conference on computer vision, pp. 1389–1397, 2017. Zhiqiang He, Yaguan Qian, Yuqi Wang, Bin Wang, Xiaohui Guan, Zhaoquan Gu, Xiang Ling, Shaoning Zeng, Haijiang Wang, and Wujie Zhou. Filter pruning via feature discrimination in deep neural networks. In European Conference on Computer Vision, pp. 245–261. Springer, 2022. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. *arXiv preprint* arXiv:1503.02531, 2(7), 2015. Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks. The Journal of Machine Learning Research, 22(1):10882–11005, 2021. Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. In Proceedings of the European conference on computer vision (ECCV), pp. 304–320, 2018. Donggyu Joo, Eojindl Yi, Sunghyun Baek, and Junmo Kim. 
Linearly replaceable filters for deep network channel pruning. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 35, pp. 8021–8029, 2021. Minsoo Kang and Bohyung Han. Operation-aware soft channel pruning using differentiable masks. In International Conference on Machine Learning, pp. 5122–5131. PMLR, 2020. Andrey Kuzmin, Markus Nagel, Saurabh Pitre, Sandeep Pendyam, Tijmen Blankevoort, and Max Welling. Taxonomy and evaluation of structured compression of convolutional neural networks. arXiv preprint arXiv:1912.09802, 2019. Vadim Lebedev and Victor Lempitsky. Speeding-up convolutional neural networks: A survey. *Bulletin of the* Polish Academy of Sciences. Technical Sciences, 66(6):799–811, 2018. Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, and Ling Shao. Hrank: Filter pruning using high-rank feature map. In *Proceedings of the IEEE/CVF conference on* computer vision and pattern recognition, pp. 1529–1538, 2020. Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, and David Doermann. Towards optimal structured cnn pruning via generative adversarial learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2790–2799, 2019. Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jing-Hao Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Group fisher pruning for practical network compression. In *International Conference on Machine Learning*, pp. 7021–7032. PMLR, 2021. Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE international conference on computer vision, pp. 2736–2744, 2017. Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. In *Proceedings of the IEEE international conference on computer vision*, pp. 
5058–5066, 2017. Shervin Minaee, Yuri Boykov, Fatih Porikli, Antonio Plaza, Nasser Kehtarnavaz, and Demetri Terzopoulos. Image segmentation using deep learning: A survey. *IEEE transactions on pattern analysis and machine* intelligence, 44(7):3523–3542, 2021. Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. Importance estimation for neural network pruning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 11264–11272, 2019. Chaitanya Murti, Tanay Narshana, and Chiranjib Bhattacharyya. Tvsprune-pruning non-discriminative filters via total variation separability of intermediate representations without fine tuning. In *The Eleventh* International Conference on Learning Representations, 2023. Manuel Nonnenmacher, Thomas Pfeil, Ingo Steinwart, and David Reeb. Sosp: Efficiently capturing global correlations by second-order structured pruning. In *International Conference on Learning Representations*, 2021. Hanyu Peng, Jiaxiang Wu, Shifeng Chen, and Junzhou Huang. Collaborative channel pruning for deep networks. In *International Conference on Machine Learning*, pp. 5113–5122. PMLR, 2019. Kiran Purohit, Anurag Parvathgari, Soumi Das, and Sourangshu Bhattacharya. Accurate and efficient channel pruning via orthogonal matching pursuit. In Proceedings of the Second International Conference on AI-ML Systems, AIMLSystems '22, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9781450398473. doi: 10.1145/3564121.3564139. URL https://doi.org/10.1145/3564121.3564139. John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, and Stephan Günnemann. Winning the lottery ahead of time: Efficient early network pruning. In *International Conference on Machine* Learning, pp. 18293–18309. PMLR, 2022. Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. 
In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 779–788, 2016. Stanley J Reeves. An efficient implementation of the backward greedy algorithm for sparse signal reconstruction. IEEE Signal Processing Letters, 6(10):266–268, 1999. Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Aliari Zonouz, and Bo Yuan. Chip: Channel independencebased pruning for compact neural networks. *Advances in Neural Information Processing Systems*, 34: 24604–24616, 2021. Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pp. 6105–6114. PMLR, 2019. Joel A Tropp and Anna C Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. *IEEE Transactions on information theory*, 53(12):4655–4666, 2007. Sunil Vadera and Salem Ameen. Methods for pruning deep neural networks. *IEEE Access*, 10:63280–63300, 2022. Chaoqi Wang, Roger Grosse, Sanja Fidler, and Guodong Zhang. Eigendamage: Structured pruning in the kronecker-factored eigenbasis. In *International conference on machine learning*, pp. 6566–6575. PMLR, 2019. Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. *Advances in neural information processing systems*, 29, 2016. Zhonghui You, Kun Yan, Jinmian Ye, Meng Ma, and Ping Wang. Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks. Advances in neural information processing systems, 32, 2019. Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, and Larry S Davis. Nisp: Pruning networks using neuron importance score propagation. In *Proceedings of* the IEEE conference on computer vision and pattern recognition, pp. 9194–9203, 2018. Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, and Jinhui Zhu. 
Discrimination-aware channel pruning for deep neural networks. *Advances in neural information processing systems*, 31, 2018.

Barret Zoph and Quoc Le. Neural architecture search with reinforcement learning. In *International Conference on Learning Representations*, 2017.

## A Proofs Of Theorems

## A.1 Weight Compensation For Multiple Channel Pruning

**Result 1:** Given $Z_k$, $Z'_k$, $g_{j,k}$, $g'_{l,k}$, and $\lambda_{j,l}$, $j = 1, \dots, n$; $l \in S$, estimated using the filter pruning process. Letting

$$g'_{l,k} = g_{l,k} + \sum_{l' \in S^c} \lambda_{l',l} * g_{l',k}, \quad \forall l \in S,\ k = 1, \dots, n,$$

ensures that $Z_k - Z'_k = \sum_{l' \in S^c} X * \epsilon_{l'} * g_{l',k}$, where $\epsilon_{l'}$ is the error vector for the estimation of the removed filter $l' \in S^c$, and $S^c$ denotes the set of all removed filters.

*Proof.* Consider the input and output of any $K \times K$ convolution layer to be $X = \{X_1, \dots, X_m\}$ and $Y = \{Y_1, \dots, Y_n\}$. $Y$ goes as an input to the $1 \times 1$ convolution. Let the output of the $1 \times 1$ convolution layer be $Z = \{Z_1, \dots, Z_n\}$, with $f \in \mathbb{R}^{m \times n}$ and $g \in \mathbb{R}^{n \times n}$ being the filter weights of the $K \times K$ and $1 \times 1$ convolution layers, respectively. We can formulate the above setup as:

$$Y_j = \sum_{i=1}^{m} X_i * f_{i,j} := X * f_{:,j} \tag{6}$$

$$Z_k = \sum_{j=1}^{n} Y_j * g_{j,k} := \sum_{j=1}^{n} X * f_{:,j} * g_{j,k} \tag{7}$$

Now, let $f_{:,l}$, $l \in S$, be the selected filter weights and, similarly, let $f_{:,l'}$, $l' \in S'$, be the pruned filter weights.
Dividing Equation 7 into the two sets of filter weights, we get:

$$Z_k = \sum_{l \in S} X * f_{:,l} * g_{l,k} + \sum_{l' \in S'} X * f_{:,l'} * g_{l',k} \tag{8}$$

Following the above terminology, we can write:

$$f_{:,l'} = \sum_{l \in S} \lambda_{l',l} f_{:,l} + \epsilon_{l'} \quad \forall l' \in S' \tag{9}$$

Substituting Equation 9 in Equation 8, we rewrite $Z_k$ as $Z'_k$ in terms of the retained filter weights $f_{:,l}$:

$$Z'_k = \sum_{l \in S} X * f_{:,l} * g_{l,k} + \sum_{l' \in S'} X * \Big( \sum_{l \in S} \lambda_{l',l} f_{:,l} + \epsilon_{l'} \Big) * g_{l',k} \tag{10}$$

The above can also be re-structured as:

$$Z'_k = \sum_{l \in S} \Big[ X * f_{:,l} * \Big( g_{l,k} + \sum_{l' \in S'} \lambda_{l',l} * g_{l',k} \Big) \Big] + \sum_{l' \in S'} X * \epsilon_{l'} * g_{l',k} \tag{11}$$

Once the pruning is performed, Equation 8 reduces to

$$\sum_{l \in S} X * f_{:,l} * g_{l,k} \tag{12}$$

and Equation 11 reduces to

$$\sum_{l \in S} \Big[ X * f_{:,l} * \Big( g_{l,k} + \sum_{l' \in S'} \lambda_{l',l} * g_{l',k} \Big) \Big] \tag{13}$$

Thus, the weight differences after pruning, for $Z_k$ and $Z'_k$, are $\big\| \sum_{l' \in S'} X * f_{:,l'} * g_{l',k} \big\|$ and $\big\| \sum_{l' \in S'} X * \epsilon_{l'} * g_{l',k} \big\|$, respectively. Because $\epsilon_{l'} < f_{:,l'}$, the weight difference when using $Z'_k$ is smaller than that of $Z_k$. Also, the lower the difference in weights, the better the approximation.
Hence, we use Equation 13 for the weight compensation step, to obtain a smaller weight difference, and define the following step:

$$g'_{l,k} = g_{l,k} + \sum_{l' \in S'} \lambda_{l',l} * g_{l',k} \quad \forall k \in [1, n],\ \forall l \in S \tag{14}$$

For output channel pruning, Equation 14 is re-defined as

$$g'_{l,:} = g_{l,:} + \sum_{j \in S^c} \lambda_{j,l} * g_{j,:} \quad \forall l \in S \tag{15}$$

while for input channel pruning it is re-defined as

$$g'_{:,l} = g_{:,l} + \sum_{j \in S^c} \lambda_{j,l} * g_{:,j} \quad \forall l \in S \tag{16}$$

## A.2 Backward Elimination Algorithm For Filter Pruning

**Result 2:** Given the definitions of $A_{-k}$, $d_k$, and $\gamma_k$, the following relation holds:

$$\sum_j B_{:,j}^T A_{-k} (A_{-k}^T A_{-k})^{-1} A_{-k}^T B_{:,j} = \sum_j B_{:,j}^T A (A^T A)^{-1} A^T B_{:,j} - \sum_{j} \frac{1}{\gamma_k} |d_k^T B_{:,j}|^2,$$

and hence $E(A_{-k}, B) = E(A, B) + \sum_{j=1,\dots,n} \frac{1}{\gamma_k} |d_k^T B_{:,j}|^2$.

*Proof.* Given a matrix $A \in \mathbb{R}^{m \times n}$, $m \geq n$, with column rank $n$, and an observation matrix $B \in \mathbb{R}^{m \times j}$, the best least-squares solution to $A\lambda = B$ with at most $r$ nonzero components is defined as the solution that minimizes the least-squares criterion:

$$Err(\lambda) = \sum_j \|B_{:,j} - A\lambda_{:,j}\|_2^2 \tag{17}$$

The unconstrained least-squares solution of $A\lambda = B$ is $\lambda_{:,j} = (A^T A)^{-1} A^T B_{:,j}$. Substituting in Equation 17, we obtain

$$Err(\lambda) = \sum_j \|B_{:,j} - A (A^T A)^{-1} A^T B_{:,j}\|^2 = \sum_j \big( B_{:,j}^T B_{:,j} - B_{:,j}^T A (A^T A)^{-1} A^T B_{:,j} \big) \tag{18}$$

Note that only the second term is a function of $A$; therefore, maximizing $B_{:,j}^T A (A^T A)^{-1} A^T B_{:,j}$ with respect to the combinations of columns comprising $A$ is equivalent to minimizing Equation 17 with respect to the combination of nonzero components in the solution. Let $A_{-k}$ be $A$ with the $k$-th column deleted.
$Err_k$ can then be written as

$$Err_k = \sum_j \big( B_{:,j}^T B_{:,j} - B_{:,j}^T A_{-k} (A_{-k}^T A_{-k})^{-1} A_{-k}^T B_{:,j} \big) \tag{19}$$

A simple update formula for the second term of Equation 18 can be obtained from (Reeves, 1999):

$$\sum_j B_{:,j}^T A_{-k} (A_{-k}^T A_{-k})^{-1} A_{-k}^T B_{:,j} = \sum_j \Big( B_{:,j}^T A (A^T A)^{-1} A^T B_{:,j} - \frac{1}{\gamma_k} |d_k^T B_{:,j}|^2 \Big) \tag{20}$$

From this result, it is clear that we only need to compare $\sum_j \frac{|d_k^T B_{:,j}|^2}{\gamma_k}$ for all $k$ and eliminate the column whose corresponding value is smallest. Note that $\gamma_k$ is the $k$-th diagonal element of $(A^T A)^{-1}$, and $d_k^T B_{:,j}$ is the $k$-th element of the solution vector $(A^T A)^{-1} A^T B_{:,j}$. Hence, we can write

$$k^* = \arg\min_k Err_k \tag{21}$$

$$k^* = \arg\min_k \sum_j \frac{|d_k^T B_{:,j}|^2}{\gamma_k} \tag{22}$$
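For concreteness, the elimination criterion of Equations 19–22 can be sketched in NumPy. This is an illustrative reconstruction under the paper's notation, not the authors' implementation: each column $k$ of $A$ is scored by $\sum_j |d_k^T B_{:,j}|^2 / \gamma_k$, and the column with the smallest score is deleted.

```python
import numpy as np

def backward_elimination_scores(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Score column k of A by sum_j |d_k^T B[:, j]|^2 / gamma_k, where gamma_k
    is the k-th diagonal entry of (A^T A)^{-1} and d_k^T B[:, j] is the k-th
    entry of the least-squares solution (A^T A)^{-1} A^T B[:, j].
    Deleting the column with the smallest score increases the residual least."""
    G_inv = np.linalg.inv(A.T @ A)            # (A^T A)^{-1}; A assumed full column rank
    gamma = np.diag(G_inv)                    # gamma_k
    Lam = G_inv @ A.T @ B                     # row k holds d_k^T B[:, j] for all j
    return (np.abs(Lam) ** 2).sum(axis=1) / gamma

def eliminate_one_column(A: np.ndarray, B: np.ndarray):
    """One backward elimination step: delete the least useful column of A."""
    k_star = int(np.argmin(backward_elimination_scores(A, B)))
    return k_star, np.delete(A, k_star, axis=1)
```

By the Reeves (1999) identity, the score of column $k$ equals exactly the increase in the least-squares residual caused by deleting that column, so a single scoring pass on the full solution replaces $n$ separate refits of the reduced problems.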
Review 1:

Summary: This paper presents a pruning approach based on several optimizations and cost-reduction operations. The paper is relatively well written despite using some popularizing terms. The paper basically takes an algorithm that prunes one filter at a time as a function of the output accuracy change and generalizes it to the set $S$. In order to more efficiently search variable groups of pruned filters in each layer, the authors introduce a backward search that simply evaluates, going backward, the pruning of various filters per individual layer. The filters are removed using an error-reconstruction method allowing to estimate the best neurons to prune. In addition, in order to determine the best layer to be pruned, the authors introduce a similar backward method.

Strengths and Weaknesses:

Strengths: Well-presented framework of pruning combined with backward search. The combination of the forward pruning and the backward search seems to be an efficient improvement over the original single-filter pruning method.

Weaknesses: The relative lack of novelty. While the paper is well written and the results are observable, the improvement is relatively minor given that the most advanced method has a magnitude larger search time when compared to the previous methods. Adding to this the relatively low improvement, I wonder if the justification for such a large computation is worth such a low improvement. This should be better articulated so that the specific purpose for this type of optimization is clearly observable by the reader.

Requested Changes: More details in the explanation and justification of the proposed method. In Tables 1 and 2, the HBGTS algorithm is the best performing by providing the most improved accuracy and pruning ratio, but it is also the most time-expensive. Therefore, it should be clearly explained if this model is considered the final result and when it is worth using it.
The same holds for the amount of pruned parameters, which is on average quite low compared to previous methods.

Broader Impact Concerns: No broader impact concerns.

==================================================

Review 2:

Summary: The paper introduces new methods for structured pruning of CNNs, called hierarchical backward greedy search (HBGS) and hierarchical backward greedy tree search (HBGTS). In each method, the authors prune entire filters using a technique similar to orthogonal matching pursuit (OMP), in which they minimize the reconstruction error of the layer's output on a given training dataset. They also add 1x1 filters to compensate for the loss. Because this is based on the reconstruction loss, the number of pruned filters can vary from one layer to another. To improve computational efficiency, the authors also use a technique borrowed from (Reeves, 1999) using block matrix inversion to reduce the cost by two orders of magnitude, from O(n^4) down to O(n^2), where n is the number of filters per layer. Experimentally, the authors show that HBGS and HBGTS both outperform other pruning techniques in the literature using VGG16 and ResNet architectures, evaluated on CIFAR10/100 and Tiny ImageNet. In particular, the gap seems to increase as the percentage reduction in parameters increases.

Strengths and Weaknesses:

*Strengths*

- The experimental results are strong. For instance, the authors achieve 94.7% on CIFAR10 using ResNet-18 after pruning more than 95% of the parameters!
- The authors include useful empirical results that support some of the rationales of their method. For example, the authors show empirically (Figures 4 and 5) why layers should not have the same pruning ratio. First, layers do not have the same reconstruction error, as shown in Figure 4. Second, their filters are not of the same diversity (e.g. Layer 2 has many diverse filters but Layer 10 has few).
For this, HBGS prunes >90% of the filters in Layer 10 but only prunes about 1/3 of the filters in Layer 2. - The paper is well-written. *Weaknesses* - My biggest concern is that it seems that the results are not reported entirely. For example, the authors report VGG16 results for CIFAR100 but not for CIFAR10. Also, they report ResNet18 results for CIFAR10 but not CIFAR100 or Tiny ImageNet. In total, the authors have experimented with 4 models and 3 datasets so we would expect a total of 12 results (for each combination) but the authors only report 4! - The proposed algorithms are a bit complex and not easy to implement, and the authors do not indicate any plans to release their code. - All of the experiments are done on small datasets. The largest is Tiny ImageNet which has 100K examples only. Do we observe gains when going to ImageNet-1k? Requested Changes: - Please explain why ResNet18 results are reported for CIFAR10 only? Similarly, why are VGG16 results missing for CIFAR10? - I think it would be useful to compare to random pruning, where you use the same pruning ratios you identify for each layer. I don't expect this to work well but it will help see the improvement that comes about by selecting the pruned filters. - In Page 5, you claim that the algorithm is O(n^3) because |S| is small compared to n, but |S| is always proportional to n so it is O(n^4). Isn't it? Broader Impact Concerns: The authors should mention that pruning architectures are known to exacerbate biases in models. See for instance: Hooker, Sara, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. "Characterising bias in compressed models." arXiv preprint arXiv:2010.03058 (2020). ================================================== Review 3: Summary: The paper proposes filter pruning methods of CNN models leveraging the use of orthogonal matching pursuit and greedy-based approaches. 
Unlike most papers in this field, this paper proposes a way to find a different sub-optimal pruning ratio for each layer.

Strengths and Weaknesses:

The paper is well-written, and I did enjoy reading it.

Strengths:
* From a theoretical perspective, the paper does not bring a whole new theory, and most claims can be derived from the papers cited by the authors. However, using such tools does open new avenues in the field of model pruning.
* The authors started by presenting their vanilla approaches, followed by a descriptive analysis and modified approaches handling the computational time issues arising from their vanilla approaches.
* The authors conducted experiments and ablations discussing and showcasing the efficacy of their approach.

Weaknesses:
* See the next section for writing-related comments.
* There are other sub-fields of channel/filter pruning that the paper did not even consider -- specifically, importance-based sampling methods that have shown efficiency and efficacy; see [1], [2], [3]. Such methods usually require less than 300 epochs of fine-tuning (used in the paper) and less than the $5$ hours of pruning. When are the proposed methods preferable over any of [1,2,3]?
* Another line of work relies on the use of SVD, which can be considered a "distant cousin" of the OMP-based method, since SVD-based approaches aim to lower the error associated with the matrix structure (lower-dimensional-based metrics, for example). One paper that comes to mind is [4]. That paper also aimed to find a suboptimal pruning parameter for each layer. How do the proposed methods compare to this paper?

-------------------------------------------

[1] Tukan, Murad, Loay Mualem, and Alaa Maalouf. "Pruning neural networks via coresets and convex geometry: Towards no assumptions." Advances in Neural Information Processing Systems 35 (2022): 38003-38019.
[2] Liebenwein, L., Baykal, C., Lang, H., Feldman, D., & Rus, D. (2019, September). Provable Filter Pruning for Efficient Neural Networks. In International Conference on Learning Representations.
[3] Mussay, B., Feldman, D., Zhou, S., Braverman, V., & Osadchy, M. (2021). Data-independent structured pruning of neural networks via coresets. IEEE Transactions on Neural Networks and Learning Systems, 33(12), 7829-7841.
[4] Liebenwein, L., Maalouf, A., Feldman, D., & Rus, D. (2021). Compressing neural networks: Towards determining the optimal layer-wise decomposition. Advances in Neural Information Processing Systems, 34, 5328-5344.

Requested Changes:

Please apply the following:
* Page 5, Algorithm 1, line 9 -- what is $S^\prime$?
* Page 5, Algorithm 1 and Page 8, Algorithm 4 -- $\mathcal{D}$ in Algorithm 2 is not used. While I understand the intention, do make sure to update the Algorithm to either use it explicitly or simply remove it. Same note for Algorithm 4.
* Page 9, remove the space between $0.1$ and $M$.
* Page 9, what do you mean by "we warmup the model for 20 epochs."?

Broader Impact Concerns: There is no need to add a broader impact statement.

==================================================

Metareview:

Recommendation: Accept as is

Comment: While reviewers found the novelty of the material presented in this paper modest, they found the experiments solid and convincing. The topic is timely and the paper is clear. The authors also addressed the concerns raised by the reviewers in their revision, including additional experiments, adding missing references, and providing additional discussions. All reviewers recommended acceptance.

==================================================
# Cipr: An Efficient Framework With Cross-Instance Positive Relations For Generalized Category Discovery

Shaozhe Hao szhao@cs.hku.hk
Department of Computer Science
The University of Hong Kong

Kai Han∗ kaihanx@hku.hk
Department of Statistics & Actuarial Science
The University of Hong Kong

Kwan-Yee K. Wong∗ kykwong@cs.hku.hk
Department of Computer Science
The University of Hong Kong

Reviewed on OpenReview: *https://openreview.net/forum?id=1fNcpcdr1o*

## Abstract

We tackle the issue of generalized category discovery (GCD). GCD considers the open-world problem of automatically clustering a partially labelled dataset, in which the unlabelled data may contain instances from both novel categories and labelled classes. In this paper, we address the GCD problem with an unknown category number for the unlabelled data. We propose a framework, named CiPR, to bootstrap the representation by exploiting Cross-instance Positive Relations in the partially labelled data for contrastive learning, which have been neglected in existing methods. To obtain reliable cross-instance relations to facilitate representation learning, we introduce a semi-supervised hierarchical clustering algorithm, named selective neighbor clustering (SNC), which can produce a clustering hierarchy directly from the connected components of a graph constructed from selective neighbors. We further present a method to estimate the unknown class number using SNC with a joint reference score that considers clustering indexes of both labelled and unlabelled data, and extend SNC to allow label assignment for the unlabelled instances with a given class number. We thoroughly evaluate our framework on public generic image recognition datasets and challenging fine-grained datasets, and establish a new state-of-the-art.
Code: https://github.com/haoosz/CiPR

## 1 Introduction

By training on large-scale datasets with human annotations, existing machine learning models can achieve superb performance (*e.g.*, Krizhevsky et al. (2012)). However, the success of these models heavily relies on the assumption that they are only tasked to recognize images from the same set of classes with large-scale human annotations on which they are trained. This limits their application in the real open world where we will encounter data without annotations and from unseen categories. Indeed, more and more efforts have been devoted to dealing with more realistic settings. For example, semi-supervised learning (SSL) (Chapelle et al., 2006) aims at training a robust model using both labelled and unlabelled data from the same set of classes; few-shot learning (Snell et al., 2017) tries to learn models that can generalize to new classes with few annotated samples; open-set recognition (OSR) (Scheirer et al., 2012) learns to tell whether or not an unlabelled image belongs to one of the classes on which the model is trained. More recently, the problem of novel category discovery (NCD) (Han et al., 2019) has been introduced, which learns models to automatically partition unlabelled data from unseen categories by transferring knowledge from seen categories. One assumption in early NCD methods is that unlabelled images are all from unseen categories only. NCD has been recently extended to a more generalized setting, called generalized category discovery (GCD) (Vaze et al., 2022b), by relaxing the assumption to reflect the real world better, *i.e.*, unlabelled images are from both seen and unseen categories. An illustration of the GCD problem is shown in Fig. 1.

∗Corresponding authors

![1_image_0.png](1_image_0.png)

Figure 1: **Generalized category discovery**: given an image dataset with seen classes from a labelled subset, categorize the unlabelled images, which may come from seen and unseen classes.
In this paper, we tackle the GCD problem by drawing inspiration from the baseline method (Vaze et al., 2022b). In Vaze et al. (2022b), a vision transformer model was first trained for representation learning using supervised contrastive learning on labelled data and self-supervised contrastive learning on both labelled and unlabelled data. With the learned representation, semi-supervised k-means (Han et al., 2019) was then adopted for label assignment across all instances. In addition, based on semi-supervised k-means, Vaze et al. (2022b) also introduced an algorithm to estimate the unknown category number for the unlabelled data by examining possible category numbers in a given range. However, this approach has several limitations. First, during representation learning, their method considers labelled and unlabelled data independently, and uses a stronger training signal for the labelled data, which might compromise the representation of the unlabelled data. Second, their method requires a known category number for performing label assignment. Third, their category number estimation method is slow, as it needs to run the clustering algorithm multiple times to test different category numbers. To overcome the above limitations, we propose a new approach for GCD which does not require a known unseen category number and considers Cross-instance Positive Relations (CiPR) in unlabelled data for better representation learning. At the core of our approach is a novel semi-supervised hierarchical clustering algorithm, named selective neighbor clustering (SNC), that takes inspiration from the parameter-free hierarchical clustering method FINCH (Sarfraz et al., 2019). We introduce specific neighbor selection mechanisms for labelled and unlabelled examples. By carefully taking into account the properties of labelled and unlabelled images, our method generates higher-quality pseudo labels which strengthen representation learning.
SNC can not only generate reliable pseudo labels for cross-instance positive relations, but also estimate unseen category numbers without the need for repeated runs of the clustering algorithm. SNC builds a graph embedding all subtly selected neighbor relations constrained by the labelled instances, and produces clusters directly from the connected components of the graph. SNC iteratively constructs a hierarchy of partitions with different granularity while satisfying the constraints imposed by the labelled instances. With a one-to-one merging strategy, SNC can quickly estimate a reliable class number without repeated runs of the algorithm. This makes it significantly faster than Vaze et al. (2022b). Although there exist works that consider introducing pseudo positives in the literature for representation learning, such as Van Gansbeke et al. (2020); Dwibedi et al. (2021); Zhong et al. (2021), these methods only consider nearest neighbors whereas our method incorporates a novel semi-supervised clustering into training that fully exploits label information and produces more reliable pseudo labels with higher purity. 
The main contributions of this paper can be summarized as follows: (1) we propose a new GCD framework, named CiPR, that exploits cross-instance positive relations in the partially labelled set to strengthen the connections among all instances, fostering the representation learning for better category discovery; (2) we introduce a semi-supervised hierarchical clustering algorithm, named SNC, that allows reliable pseudo label generation during training and label assignment during testing; (3) we further leverage SNC for class number estimation by exploring intrinsic and extrinsic clustering quality based on a joint reference score considering both labelled and unlabelled data; (4) we comprehensively evaluate our framework on both generic image recognition datasets and challenging fine-grained datasets, and demonstrate state-of-the-art performance across the board.

## 2 Related Work

Our work is related to novel/generalized category discovery, semi-supervised learning, and open-set recognition. Novel category discovery (NCD) aims at discovering new classes in unlabelled data by leveraging knowledge learned from labelled data. It was formalized in Han et al. (2019) with a transfer clustering approach. Earlier works on cross-domain/task transfer learning (Hsu et al., 2018a;b) can also be adopted to tackle this problem. Han et al. (2020; 2021) proposed an efficient method called AutoNovel (aka RankStats) using ranking statistics. They first learned a good embedding using low-level self-supervised learning on all data, followed by supervised learning on labelled data for higher-level features. They introduced robust ranking statistics to determine whether two unlabelled instances are from the same class for NCD. Several successive works based on RankStats were proposed. For example, Jia et al.
(2021) proposed to use WTA hashing (Yagnik et al., 2011) for NCD in single- and multi-modal data; Zhao & Han (2021) introduced an NCD method with dual ranking statistics and knowledge distillation. Fini et al. (2021) proposed UNO, which uses a unified cross-entropy loss to train labelled and unlabelled data. Zhong et al. (2021) proposed NCL to retrieve and aggregate pseudo-positive pairs by exploring the nearest neighbors, and to generate hard negatives by mixing labelled and unlabelled samples in a contrastive learning framework. Chi et al. (2022) proposed meta discovery that links NCD to meta learning with limited labelled data. Joseph et al. (2022); Roy et al. (2022) consider NCD in incremental learning scenarios. Vaze et al. (2022b) introduced generalized category discovery (GCD), which extends NCD by allowing unlabelled data from both old and new classes. They first finetuned a pretrained DINO ViT (Caron et al., 2021) with both a supervised contrastive loss and a self-supervised contrastive loss. Semi-supervised k-means was then adopted for label assignment. Cao et al. (2022); Rizve et al. (2022a;b) addressed the GCD problem from an open-world semi-supervised learning perspective. We draw inspiration from Vaze et al. (2022b) and develop a novel method to tackle GCD by exploring cross-instance relations on labelled and unlabelled data, which have been neglected in the literature. Concurrent with our work, SimGCD (Wen et al., 2023) introduces a parametric classification approach for GCD, while GPC (Zhao et al., 2023) proposes a method that jointly learns representations and estimates the class number for GCD. Semi-supervised learning (SSL) aims at learning a good model by leveraging unlabelled data from the same set of classes as the labelled data (Chapelle et al., 2006). Various methods (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Sohn et al., 2020; Zhang et al., 2021) have been proposed for SSL.
The assumption of SSL that labelled and unlabelled data are from the same closed set of classes is often not valid in practice. In contrast, GCD relaxes this assumption and considers a more challenging scenario where unlabelled data can also come from unseen classes. Pseudo-labeling techniques are also widely adopted in SSL. Berthelot et al. (2019; 2020); Sohn et al. (2020); Rizve et al. (2020); Zhang et al. (2021) used one-hot pseudo labels generated from different augmentations. Rebalancing of pseudo labels was further studied in a class-imbalanced task (Wei et al., 2021) and a general setting (Wang et al., 2022). Tarvainen & Valpola (2017); Luo et al. (2018); Ke et al. (2019); Xie et al. (2020); Cai et al. (2021); Pham et al. (2021) applied a teacher-student network to generate one-hot pseudo labels from the teacher. Our work is more related to methods that incorporate pseudo labels into contrastive learning (Chen et al., 2020b; Khosla et al., 2020), but they differ significantly from our method. A special case is SimCLR (Chen et al., 2020b;c), which utilizes self-supervised contrastive learning and relies on pseudo labels derived from augmented views. In contrast, our model further incorporates cross-instance positive relations. Unlike PAWS (Assran et al., 2021), we propose a new clustering method to assign pseudo labels instead of using a classifier based on supportive samples. In contrast to Li et al. (2021); Zheng et al. (2022); Bošnjak et al. (2022), our method does not require a memory bank or complex phases of distributional alignment, unfolding, and aggregation, and also enriches the positive samples compared to using only the nearest neighbor as in SemPPL (Bošnjak et al., 2022). Open-set recognition (OSR) aims at training a model using data from a known closed set of classes, and at test time determining whether or not a sample is from one of these known classes. It was first introduced in Scheirer et al. (2012), followed by many works.
For example, OpenMax (Bendale & Boult, 2016) is the first deep learning work to address the OSR problem, based on Extreme Value Theory and fitting per-class Weibull distributions. RPL (Chen et al., 2020a) and ARPL (Chen et al., 2021) exploit reciprocal points for constructing extra-class space to reduce the risk of the unknown. Recently, Vaze et al. (2022a) found a correlation between closed-set and open-set performance, and boosted the performance of OSR by improving closed-set accuracy. They also proposed the Semantic Shift Benchmark (SSB) with a clear definition of semantic novelty for better OSR evaluation. Out-of-distribution (OOD) detection (Hendrycks & Gimpel, 2017; Liang et al., 2018; Hsu et al., 2020) is closely related to OSR but differs in that OOD detects general distribution shifts, while OSR focuses on semantic shifts. Several OOD methods have been proposed recently, such as Li et al. (2022) alleviating "posterior collapse" of hierarchical VAEs and Zheng et al. (2023) leveraging mistaken OOD generation for an auxiliary OOD task.

## 3 Methodology

## 3.1 Problem Formulation

Generalized category discovery (GCD) aims at automatically categorizing unlabelled images in a collection of data in which part of the data is labelled and the rest is unlabelled. The unlabelled images may come from the labelled classes or new ones. This is a much more realistic open-world setting than the common closed-set classification where the labelled and unlabelled data are from the same set of classes. Let the data collection be $\mathcal{D} = \mathcal{D}_{\mathcal{L}} \cup \mathcal{D}_{\mathcal{U}}$, where $\mathcal{D}_{\mathcal{L}} = \{(x_i^{\ell}, y_i^{\ell})\}_{i=1}^{M} \in \mathcal{X} \times \mathcal{Y}_{\mathcal{L}}$ denotes the labelled subset and $\mathcal{D}_{\mathcal{U}} = \{(x_i^{u}, y_i^{u})\}_{i=1}^{N} \in \mathcal{X} \times \mathcal{Y}_{\mathcal{U}}$ denotes the unlabelled subset with unknown $y_i^{u} \in \mathcal{Y}_{\mathcal{U}}$. Only a subset of classes contains labelled instances, i.e., $\mathcal{Y}_{\mathcal{L}} \subset \mathcal{Y}_{\mathcal{U}}$. The number of labelled classes $N_{\mathcal{L}}$ can be directly deduced from the labelled data, while the number of unlabelled classes $N_{\mathcal{U}}$ is not known a priori.
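For concreteness, the split of $\mathcal{D}$ into $\mathcal{D}_{\mathcal{L}}$ and $\mathcal{D}_{\mathcal{U}}$ can be sketched in a few lines. This is an illustrative helper only (the function name, `label_ratio` parameter, and seed handling are our own, not part of the paper): it labels a fraction of the instances from the seen classes and leaves everything else unlabelled, so that $\mathcal{Y}_{\mathcal{L}} \subset \mathcal{Y}_{\mathcal{U}}$ holds by construction.

```python
import random

def make_gcd_split(labels, num_seen_classes, label_ratio=0.5, seed=0):
    """Partition a dataset into a labelled subset (seen classes only) and an
    unlabelled subset (seen + unseen classes), as in the GCD setting.
    Returns index lists (labelled_idx, unlabelled_idx)."""
    rng = random.Random(seed)
    # the first num_seen_classes class ids form Y_L, a subset of Y_U
    seen = set(sorted(set(labels))[:num_seen_classes])
    labelled_idx, unlabelled_idx = [], []
    for i, y in enumerate(labels):
        # only seen-class instances may carry a label, and only a fraction of them
        if y in seen and rng.random() < label_ratio:
            labelled_idx.append(i)
        else:
            unlabelled_idx.append(i)
    return labelled_idx, unlabelled_idx
```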
To tackle this challenge, we propose a novel framework CiPR to jointly learn representations using contrastive learning by considering all possible interactions between labelled and unlabelled instances. Contrastive learning has been applied to learn representation in GCD, but without considering the connections between labelled and unlabelled instances (Vaze et al., 2022b) due to the lack of reliable pseudo labels. This limits the learned representation. In this paper, we propose an efficient semi-supervised hierarchical clustering algorithm, named selective neighbor clustering (SNC), to generate reliable pseudo labels to bridge labelled and unlabelled instances during training and bootstrap representation learning. With the generated pseudo labels, we can then train the model on both labelled and unlabelled data in a supervised manner considering all possible pairwise connections. We further extend SNC with a simple one-to-one merging process to allow cluster number estimation and label assignment on all unlabelled instances. An overview is shown in Fig. 2.

## 3.2 Joint Contrastive Representation Learning

Contrastive learning has gained popularity as a technique for self-supervised representation learning (Chen et al., 2020b; He et al., 2020) and supervised representation learning (Khosla et al., 2020). These approaches rely on two types of instance relations to derive positive samples. Self-relations are obtained when the paired instances are augmented views from the *same image*, while cross-instance relations are obtained when the paired instances belong to the *same class*. For GCD, since data contains both labelled and unlabelled instances, a mix of self-supervised and supervised contrastive learning appears to be a natural fit, and good performance has been reported by Vaze et al. (2022b).
However, cross-instance relations are only considered for pairs of labelled instances, but not for pairs of unlabelled instances and pairs of labelled and unlabelled instances. The learned representation is likely to be biased towards the labelled data due to the stronger learning signal provided by them. Meanwhile, the embedding spaces learned from cross-instance relations of labelled data and self-relations of unlabelled data might not be necessarily well aligned. These might explain why a much stronger performance on labelled data than on unlabelled data was reported in Vaze et al. (2022b). To mitigate such a bias, we propose to introduce cross-instance relations for pairs of unlabelled instances and pairs of labelled and unlabelled instances in contrastive learning to bootstrap representation learning. We are the first to exploit labelled-unlabelled relations and unlabelled-unlabelled relations for unbiased representation learning in GCD.

![4_image_0.png](4_image_0.png)

Figure 2: **Overview of our CiPR framework.** We first initialize ViT with pretrained DINO (Caron et al., 2021) to obtain a good representation space. We then finetune ViT by conducting joint contrastive learning with both true and pseudo positive relations in a supervised manner. True positive relations come from labelled data while pseudo positive relations of all data are generated by our proposed SNC algorithm. Specifically, SNC generates a hierarchical clustering structure. Pseudo positive relations are granted to all instances in the same cluster at one level of partition, further exploited in joint contrastive learning. With representations well learned, we estimate class number and assign labels to all unlabelled data using SNC with a one-to-one merging strategy.

To this end, we propose an efficient semi-supervised hierarchical
clustering algorithm to generate reliable pseudo labels relating pairs of unlabelled instances and pairs of labelled and unlabelled instances, as detailed in Sec. 3.3. Next, we briefly review supervised contrastive learning (Khosla et al., 2020), which accommodates cross-instance relations, and describe how to extend it to unlabelled data.

Contrastive learning objective. Let $f$ and $\phi$ be a feature extractor and an MLP projection head. The supervised contrastive loss on labelled data can be formulated as

$$\mathcal{L}_{i}^{s}=-\frac{1}{|\mathcal{G}_{\mathcal{B}}(i)|}\sum_{q\in\mathcal{G}_{\mathcal{B}}(i)}\log\frac{\exp(\boldsymbol{z}_{i}^{\ell}\cdot\boldsymbol{z}_{q}^{\ell}/\tau_{s})}{\sum_{n\in\mathcal{B}_{\mathcal{L}},n\neq i}\exp(\boldsymbol{z}_{i}^{\ell}\cdot\boldsymbol{z}_{n}^{\ell}/\tau_{s})}\tag{1}$$

where $\boldsymbol{z}^{\ell} = \phi(f(x^{\ell}))$, $\tau_s$ is the temperature, and $\mathcal{G}_{\mathcal{B}}(i)$ denotes the other instances sharing the same label with the $i$-th labelled instance in $\mathcal{B}_{\mathcal{L}}$, which is the labelled subset of the mini-batch $\mathcal{B}$. The supervised contrastive loss leverages the true cross-instance positive relations between labelled instance pairs. To take into account the cross-instance positive relations for pairs of unlabelled instances and pairs of labelled and unlabelled instances, we extend the supervised contrastive loss to all data as

$$\mathcal{L}_{i}^{a}=-\frac{1}{|\mathcal{P}_{\mathcal{B}}(i)|}\sum_{q\in\mathcal{P}_{\mathcal{B}}(i)}\log\frac{\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{q}/\tau_{a})}{\sum_{n\in\mathcal{B},n\neq i}\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{n}/\tau_{a})}\tag{2}$$

where $\tau_a$ is the temperature and $\mathcal{P}_{\mathcal{B}}(i)$ is the set of pseudo positive instances for the $i$-th instance in the mini-batch $\mathcal{B}$.
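Eqs. (1) and (2) share the same per-anchor form and differ only in where the positive set comes from (true labels vs. pseudo labels). A minimal numpy sketch of this shared form follows; the function name and the batch layout are ours, and the loop over anchors is written for clarity rather than speed:

```python
import numpy as np

def sup_con_loss(z, pos_sets, tau=0.07):
    """Contrastive loss in the form of Eqs. (1)/(2): z is an (n, d) array of
    l2-normalized embeddings for one mini-batch; pos_sets[i] holds the positive
    indices for anchor i -- true positives G_B(i) for labelled anchors, pseudo
    positives P_B(i) from SNC otherwise. Anchors without positives are skipped."""
    sim = z @ z.T / tau                       # z_i . z_q / tau for all pairs
    n = len(z)
    losses = []
    for i, pos in enumerate(pos_sets):
        if not pos:
            continue
        mask = np.ones(n, dtype=bool)
        mask[i] = False                       # denominator excludes the anchor itself
        log_denom = np.log(np.exp(sim[i][mask]).sum())
        # mean over the positive set of -log softmax
        losses.append(np.mean([log_denom - sim[i, q] for q in pos]))
    return float(np.mean(losses))
```

As a sanity check, anchors whose positives are nearby in embedding space should incur a lower loss than anchors paired with distant "positives".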
The overall loss considering cross-instance relations for pairs of labelled instances, unlabelled instances, as well as labelled and unlabelled instances can then be written as

$$\mathcal{L}=\sum_{i\in\mathcal{B}}\mathcal{L}_{i}^{a}+\sum_{i\in\mathcal{B}_{\mathcal{L}}}\mathcal{L}_{i}^{s}\tag{3}$$

With the learned representation, we can discover classes with existing algorithms like semi-supervised k-means (Han et al., 2019; Vaze et al., 2022b). Besides, we further propose a new method in Sec. 3.4 based on our pseudo label generation approach, as will be introduced next.

**Algorithm 1** Selective Neighbor Clustering (SNC)

1: **Preparation:**
2: Given labelled set $\mathcal{D}_{\mathcal{L}}$ and unlabelled set $\mathcal{D}_{\mathcal{U}}$, treat each instance in $\mathcal{D}_{\mathcal{L}} \cup \mathcal{D}_{\mathcal{U}}$ as a cluster $c_{i}^{0}$ with the cluster centroid $\mu(c_{i}^{0})$ being each instance itself, forming the first partition $\Gamma^{0} = \Gamma_{\mathcal{L}}^{0} \cup \Gamma_{\mathcal{U}}^{0}$, where $\Gamma^{0} = \{c_{i}^{0}\}_{i=1}^{|\Gamma_{\mathcal{L}}^{0}|+|\Gamma_{\mathcal{U}}^{0}|}$.
3: **Main loop:**
4: $p \leftarrow 0$
5: **while** there are more than $N_{\mathcal{L}}$ clusters in $\Gamma^{p}$ **do**
6: Initialize $\Gamma_{\mathcal{L}}^{\star} = \Gamma_{\mathcal{L}}^{p}$.
7: **while** there exists $\kappa_{i}$ of $c_{i}^{p} \in \Gamma_{\mathcal{L}}^{p} \cup \Gamma_{\mathcal{U}}^{p}$ not specified **do**
8: **if** $c_{i}^{p} \in \Gamma_{\mathcal{L}}^{p}$ **then**
9: Initialize $\mathcal{Q} = \{c_{i}^{p}\}$, $\Gamma_{\mathcal{L}}^{\star} = \Gamma_{\mathcal{L}}^{\star} \setminus \{c_{i}^{p}\}$.
10: **while** $|\mathcal{Q}| < \lambda$ **do**
11: $\kappa_{i} \leftarrow \arg\max_{j} \{\mu(c_{i}^{p}) \cdot \mu(c_{j}^{p}) \mid c_{j}^{p} \in \Gamma_{\mathcal{L}}^{\star}, y_{j}^{p} = y_{i}^{p}\}$
12: $\Gamma_{\mathcal{L}}^{\star} \leftarrow \Gamma_{\mathcal{L}}^{\star} \setminus \{c_{\kappa_{i}}^{p}\}$
13: $\mathcal{Q} \leftarrow \mathcal{Q} \cup \{c_{\kappa_{i}}^{p}\}$
14: $c_{i}^{p} \leftarrow c_{\kappa_{i}}^{p}$
15: **end while**
16: **else**
17: $\kappa_{i} \leftarrow \arg\max_{j} \{\mu(c_{i}^{p}) \cdot \mu(c_{j}^{p}) \mid c_{j}^{p} \in \Gamma_{\mathcal{L}}^{p} \cup \Gamma_{\mathcal{U}}^{p}\}$
18: **end if**
19: **end while**
20: Construct $A$ following Eq. (4) with selective neighbors, forming a new partition $\Gamma^{p+1} = \Gamma_{\mathcal{L}}^{p+1} \cup \Gamma_{\mathcal{U}}^{p+1}$.
21: $p \leftarrow p + 1$
22: **end while**

## 3.3 Selective Neighbor Clustering

Although the concept of creating pseudo-labels may seem intuitive, realizing it effectively is a challenging task: obtaining reliable pseudo-labels is a significant challenge in the GCD scenario, and ensuring their quality is of utmost importance.
Naive approaches risk producing no performance improvement or even causing degradation. For example, an intuitive approach would be to apply an off-the-shelf clustering method like k-means or semi-supervised k-means to construct clusters and then obtain cross-instance relations based on the resulting cluster assignment. However, these clustering methods require a class number prior, which is inaccessible in the GCD task. Moreover, we empirically found that even with a given ground-truth class number, such a simple approach will produce many false positive pairs which severely hurt the representation learning. One way to tackle this problem is to overcluster the data to lower the false positive rate. FINCH (Sarfraz et al., 2019) has shown superior performance on unsupervised hierarchical overclustering, but it is non-trivial to extend it to cover both labelled and unlabelled data. Experiments show that FINCH will fail drastically if we simply include all the labelled data. Inspired by FINCH, we propose an efficient semi-supervised hierarchical clustering algorithm, named SNC, with a selective neighbor rule, which subtly makes use of the labelled instances during clustering.

Preliminaries. FINCH constructs an adjacency matrix $A$ for all possible pairs of instances $(i, j)$, given by

$$A(i,j)=\begin{cases}1&\text{if }j=\kappa_{i}\text{ or }\kappa_{j}=i\text{ or }\kappa_{i}=\kappa_{j}\\ 0&\text{else}\end{cases}\tag{4}$$

where $\kappa_{i}$ is the first neighbor of the $i$-th instance and is defined as

$$\kappa_{i} = \arg\max_{j}\,\{f(x_{i}) \cdot f(x_{j}) \mid x_{j} \in \mathcal{D}_{\mathcal{U}}\}\tag{5}$$

where $f(\cdot)$ outputs an $\ell_2$-normalized feature vector and $\mathcal{D}_{\mathcal{U}}$ denotes an unlabelled dataset. A data partition can then be obtained by extracting connected components from $A$. Each connected component in $A$ corresponds to one cluster.
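One round of this first-neighbor construction (Eqs. (4)-(5)) can be sketched compactly. The function name is ours, and only a single partition level is produced (FINCH iterates this on cluster centroids); note that linking each $i$ to $\kappa_i$ with a union-find already yields the same connected components as Eq. (4), since the $\kappa_j = i$ and $\kappa_i = \kappa_j$ cases are absorbed by transitivity:

```python
import numpy as np

def finch_partition(feats):
    """One first-neighbor round: link each instance to its nearest neighbor
    under cosine similarity (Eq. 5) and return cluster labels given by the
    connected components of the resulting adjacency graph (Eq. 4)."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = z @ z.T
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity
    kappa = sim.argmax(axis=1)               # first neighbor kappa_i of each instance

    parent = list(range(len(feats)))         # union-find over i -> kappa_i links
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path halving
            a = parent[a]
        return a
    for i, j in enumerate(kappa):
        parent[find(i)] = find(int(j))
    roots = [find(i) for i in range(len(feats))]
    relabel = {r: c for c, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]       # dense cluster ids, one per instance
```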
By treating each cluster as a super instance and building the first neighbor adjacency matrix iteratively, the algorithm can produce hierarchical partitions.

![6_image_0.png](6_image_0.png)

Figure 3: **Selective neighbor rules.** Left: the labelled instances constitute a chain (*rule 1*) with length $\lambda = 4$ (*rule 2*), and the nearest neighbors of unlabelled instances are labelled ones (*rule 3*). Right: the nearest neighbors of unlabelled instances are unlabelled ones (*rule 3*).

Our approach. The first neighbor is designed for purely unlabelled data in FINCH. To make use of the labels in partially labelled data, a straightforward idea is to connect all labelled data from the same class by setting $A(i, j)$ to 1 for all pairs of instances $(i, j)$ from the same class. However, after filling $A(i, j)$ for pairs of unlabelled instances using Eq. (4), very often all instances become connected into a single cluster, making it impossible to properly partition the data. This problem is caused by having too many links among the labelled instances. To solve this problem, we would like to reduce the links between labelled instances while keeping labelled instances from the same class in the same connected component. A simple idea is to connect same-class labelled instances one by one to form a *chain*, which can significantly reduce the number of links. However, we found this still produces many incorrect links, resulting in low purity of the connected components. To this end, we introduce our selective neighbor for $\kappa_{i}$ to improve the purity of clusters while properly incorporating the labelled instances, constrained by the following rules.
*Rule 1:* each labelled instance can only be the selective neighbor of another labelled instance once, to ensure that labelled instances are connected in the form of chains; *Rule 2:* we limit the chain length to at most $\lambda$; *Rule 3:* the selective neighbor of an unlabelled instance depends on its actual distances to other instances, which can be either labelled or unlabelled. We illustrate how to cluster data with the selective neighbor rules in Fig. 3. Similar to FINCH, we can apply the selective neighbor iteratively to produce hierarchical clustering results. We name our method SNC, which is summarized in Algo. 1. The selective neighbor $\kappa_{i}$ of the $i$-th sample $x_{i}$ can be simply formulated as

$$\kappa_{i} = \mathrm{selective\_neighbor}(x_{i},\, x_{j} \mid x_{j} \in \mathcal{D}_{\mathcal{L}} \cup \mathcal{D}_{\mathcal{U}})\tag{6}$$

which corresponds to lines 7-19 in Algo. 1, in contrast to the first neighbor strategy in Eq. (5). For the chain length $\lambda$, we simply set it to the smallest integer greater than or equal to the square root of the number of labelled instances $n_{\ell}$ in each class, *i.e.*, $\lambda = \lceil\sqrt{n_{\ell}}\,\rceil$. $\lambda$ is automatically decided in each clustering hierarchy, motivated by the straightforward idea that it should be positively correlated with (but smaller than) the number of labelled instances $n_{\ell}$ at the current hierarchical level. The square root of $n_{\ell}$ is therefore a natural choice to balance the number of clustered instances in each cluster and the number of newly formed clusters. The chain length rule is applied to all classes with labelled instances, and at each hierarchy level. A proper chain length can therefore be dynamically determined based on the actual size of the labelled cluster and also the hierarchy level. We analyze different formulations of chain length in Appx. A. SNC produces a hierarchy of data partitions with different granularity.
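The effect of Rules 1-2 and the $\lambda = \lceil\sqrt{n_\ell}\,\rceil$ choice can be illustrated with a deliberately simplified sketch: it chains same-class labelled instances in their given order, ignoring the similarity-based greedy selection of Algo. 1, so it only demonstrates how the chain-length cap bounds component sizes and keeps each instance a selective neighbor at most once (function name is ours):

```python
import math

def chain_links(class_indices):
    """Simplified Rule 1/2 illustration: connect same-class labelled instances
    into chains of length at most lambda = ceil(sqrt(n_l)). Returns a list of
    (i, kappa_i) link pairs; each index appears as a selective neighbor at
    most once, so components stay small instead of collapsing into one blob."""
    n = len(class_indices)
    lam = math.ceil(math.sqrt(n))            # chain length cap (Rule 2)
    links = []
    for start in range(0, n, lam):           # split the class into chains
        chain = class_indices[start:start + lam]
        links += list(zip(chain[:-1], chain[1:]))
    return links
```

For a class with 9 labelled instances this yields three chains of length 3 (6 links total), rather than the 36 links of a fully connected class.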
Except the bottom level, where each individual instance is treated as one cluster, every non-bottom level can be used to capture cross-instance relations for the level below, because each instance in the current level represents a cluster of instances in the level below. In principle, we can pick any non-bottom level to generate pseudo labels. To have a higher purity for each cluster, it is beneficial to choose a relatively low level that overclusters the data. Hence, we choose a level that has a cluster number notably larger than the labelled class number (*e.g.*, 2× more). Meanwhile, the level should not be too low as this will provide much fewer useful pairwise correlations. In our experiment, we simply pick the third level from the bottom of the hierarchy, which consistently shows good performance on all datasets. We discuss the impact of the picked level in the Appx. A. During the training stage, we enhance the joint representation learning by updating the pseudo labels using the data partition from the third level at the beginning of each training epoch. This iterative process helps to improve the clustering quality and allows for self-evolution of the pseudo labels. ## 3.4 Class Number Estimation And Label Assignment Once a good representation is learned, we can then estimate the class number and determine the class label assignment for all unlabelled instances. Algorithm 2 One-to-one merging 1: **Preparation:** 2: Get initial partitions S = {Γ p}p=0 by SNC and a cluster number range [Ne, No]. Note that the merging is from No to Ne and No > Ne. 3: **Partition initialization:** 4: Find Γ t ∈ S satisfying |Γ t| > No and |Γ t+1| ≤ No. 5: **Merging:** 6: **while** |Γ t| > Ne do 7: (*i, j*) ← arg mini,j {µ(c t i ) · µ(c t j ) | c t i, c t j ∈ Γ t, yt i = y t jif c t i, c t j ∈ Γ t L} 8: Merge c t i and c t j, forming a new partition Γ ⋆. 9: Update current partition Γ t ← Γ ⋆. 
10: **end while**
11: **Output:**
12: Obtain a specific partition Γ^t of Ne clusters, *i.e.*, |Γ^t| = Ne.

Class number estimation. When the class number is unknown, existing methods based on semi-supervised k-means need to estimate the unknown cluster number before they can produce the label assignment. To do so, Han et al. (2019) proposed to run semi-supervised k-means on all the data while dropping part of the labels for clustering-performance validation. Though effective, this algorithm is computationally expensive, as it needs to run semi-supervised k-means for every possible cluster number. Vaze et al. (2022b) proposed an improved method with Brent's optimization (Brent, 1971), which increases the efficiency. In contrast, SNC is a hierarchical clustering method that automatically produces cluster assignments at different levels of granularity. Therefore, it does not require a given class number for clustering. For practical use, one can pick any level of assignment based on the required granularity. To identify a reliable level of granularity, we conduct class number estimation by applying SNC with a joint reference score that considers both labelled and unlabelled data. Specifically, we split the labelled data DL into two parts, D_L^l and D_L^v. We run SNC on the full dataset D, treating D_L^l as labelled and DU ∪ D_L^v as unlabelled. We jointly measure an unsupervised intrinsic clustering index (such as the silhouette score (Rousseeuw, 1987)) on DU and the extrinsic clustering accuracy on D_L^v. We obtain a joint reference score sc by multiplying the two after min-max scaling, to achieve the best overall measurement on the labelled and unlabelled subsets. We choose the level in the SNC hierarchy with the maximum sc. The cluster number at the chosen level can be regarded as the estimated class number. To achieve a more accurate class number estimation, we further introduce a simple *one-to-one merging strategy*.
Namely, from the level below the chosen one to the level above it, we merge clusters successively. At each merging step, we simply merge the two closest clusters. We identify the merge that gives the best reference score sc, and its cluster number is taken as our estimated class number, which we denote as K1t1. SNC with the one-to-one merging strategy can carry out class number estimation with a single run of hierarchical clustering, which is significantly more efficient than the methods based on semi-supervised k-means (Han et al., 2019; Vaze et al., 2022b). Label assignment. With a given (estimated) class number, we can obtain the label assignment by adopting semi-supervised k-means as in Han et al. (2019); Vaze et al. (2022b), or by directly using our proposed SNC. Since SNC is a hierarchical clustering algorithm and the cluster number at each hierarchy level is determined automatically by the intrinsic correlations of the instances, it might not produce a partition level with exactly the given class number. To reach the estimated class number K1t1, we therefore reuse the one-to-one merging strategy. Specifically, we first run SNC to produce hierarchical partitions and identify the partition level with the smallest cluster count greater than the given class number. We then employ one-to-one merging to successively merge the two closest clusters at each iteration until the target number K1t1 is reached. Note that during the one-to-one merging process, clusters belonging to different labelled classes are not allowed to merge. The predicted cluster indices can then be retrieved from the final partition. The merging process is summarized in Algo. 2. Lastly, following Vaze et al. (2022b), we use the Hungarian algorithm (Kuhn, 1955) to find an optimal linear assignment from the predicted cluster indices to the ground-truth labels, which gives the final label assignment.
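The joint reference score sc used for level selection can be sketched as follows (a minimal illustration; the per-level silhouette and accuracy values below are hypothetical):

```python
import numpy as np

def joint_reference_score(silhouettes, accuracies):
    """Combine the intrinsic silhouette index (computed on the unlabelled
    data D_U) with the extrinsic clustering accuracy (computed on the
    held-out labelled split D_L^v): min-max scale each across the
    candidate SNC levels, then multiply elementwise."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.ones_like(v)
    return minmax(silhouettes) * minmax(accuracies)

# Hypothetical per-level scores for four candidate hierarchy levels:
sil = [0.12, 0.35, 0.50, 0.41]   # silhouette on the unlabelled data
acc = [0.95, 0.90, 0.70, 0.40]   # accuracy on the labelled validation split
best_level = int(np.argmax(joint_reference_score(sil, acc)))
# the cluster count at `best_level` serves as the class-number estimate
```

Low levels tend to score well on accuracy but poorly on silhouette (and vice versa for high levels), so the product favors a level that balances both.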
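The label-assignment pipeline above can be sketched as follows (a simplified illustration, not the authors' code: `one_to_one_merge` and `hungarian_assignment` are names we introduce, and plain centroid distance stands in for the merging criterion of Algo. 2):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_merge(feats, assign, cls_of, k_target):
    """Merge the two closest clusters (by centroid distance) one pair at a
    time until k_target clusters remain. cls_of maps a cluster id to its
    labelled class, or None for purely unlabelled clusters; two clusters
    anchored to different labelled classes are never merged."""
    assign = assign.copy()
    while len(np.unique(assign)) > k_target:
        ids = list(np.unique(assign))
        cents = {c: feats[assign == c].mean(axis=0) for c in ids}
        best, pair = np.inf, None
        for ai, a in enumerate(ids):
            for b in ids[ai + 1:]:
                if cls_of.get(a) is not None and cls_of.get(b) is not None \
                        and cls_of[a] != cls_of[b]:
                    continue  # would mix two different labelled classes
                d = float(np.linalg.norm(cents[a] - cents[b]))
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        assign[assign == b] = a
        if cls_of.get(a) is None:
            cls_of[a] = cls_of.get(b)  # merged cluster inherits the label
    return assign

def hungarian_assignment(pred, target):
    """Optimal one-to-one mapping from predicted cluster indices to
    ground-truth labels (Kuhn, 1955), maximizing the total overlap."""
    k = int(max(pred.max(), target.max())) + 1
    overlap = np.zeros((k, k), dtype=np.int64)
    for p, t in zip(pred, target):
        overlap[p, t] += 1
    rows, cols = linear_sum_assignment(overlap, maximize=True)
    mapping = dict(zip(rows, cols))
    return np.array([mapping[p] for p in pred])
```

`linear_sum_assignment` solves the assignment problem in polynomial time, so the final mapping step is negligible compared to the clustering itself.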
Table 2: Results on fine-grained image recognition datasets. Our results are averaged over 5 runs.

| Method | CUB-200 All | CUB-200 Seen | CUB-200 Unseen | SCars All | SCars Seen | SCars Unseen | Herbarium19 All | Herbarium19 Seen | Herbarium19 Unseen |
|---|---|---|---|---|---|---|---|---|---|
| RankStats+ (Han et al., 2021) | 33.3 | 51.6 | 24.2 | 28.3 | 61.8 | 12.1 | 27.9 | 55.8 | 12.8 |
| UNO+ (Fini et al., 2021) | 35.1 | 49.0 | 28.1 | 35.5 | 70.5 | 18.6 | 28.3 | 53.7 | 14.7 |
| ORCA (Cao et al., 2022) | 35.0 | 35.6 | 34.8 | 32.6 | 47.0 | 25.7 | 24.6 | 26.5 | 23.7 |
| Vaze et al. (2022b) | 51.3 | 56.6 | 48.7 | 39.0 | 57.6 | 29.9 | 35.4 | 51.0 | 27.0 |
| Ours (CiPR) | 57.1 | 58.7 | 55.6 | 47.0 | 61.5 | 40.1 | 36.8 | 45.4 | 32.6 |

Table 1: Results on generic image recognition datasets. Our results are averaged over 5 runs.

| Method | CIFAR-10 All | CIFAR-10 Seen | CIFAR-10 Unseen | CIFAR-100 All | CIFAR-100 Seen | CIFAR-100 Unseen | ImageNet-100 All | ImageNet-100 Seen | ImageNet-100 Unseen |
|---|---|---|---|---|---|---|---|---|---|
| RankStats+ (Han et al., 2021) | 46.8 | 19.2 | 60.5 | 58.2 | 77.6 | 19.3 | 37.1 | 61.6 | 24.8 |
| UNO+ (Fini et al., 2021) | 68.6 | 98.3 | 53.8 | 69.5 | 80.6 | 47.2 | 70.3 | 95.0 | 57.9 |
| ORCA (Cao et al., 2022) | 97.3 | 97.3 | 97.4 | 66.4 | 70.2 | 58.7 | 73.5 | 92.6 | 63.9 |
| Vaze et al. (2022b) | 91.5 | 97.9 | 88.2 | 70.8 | 77.6 | 57.0 | 74.1 | 89.8 | 66.3 |
| Ours (CiPR) | 97.7 | 97.5 | 97.7 | 81.5 | 82.4 | 79.7 | 80.5 | 84.9 | 78.3 |

## 4 Experiments

## 4.1 Experimental Setup

Data and evaluation metric.
We evaluate our method on three generic image classification datasets, namely CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-100 (Deng et al., 2009). ImageNet-100 refers to a random subsample of 100 classes from the ImageNet dataset. We further evaluate on three more challenging fine-grained image classification datasets, namely the Semantic Shift Benchmark (Vaze et al., 2022a) (SSB, which includes CUB-200 (Wah et al., 2011) and Stanford Cars (Krause et al., 2013)) and the long-tailed Herbarium19 (Tan et al., 2019). Following Vaze et al. (2022b), we split the original training set of each dataset into labelled and unlabelled parts: we sample half of the classes as seen categories, draw 50% of the instances of each seen class to form the labelled set, and let all the remaining data constitute the unlabelled set. The model takes all images as input and predicts a label assignment for each unlabelled instance. For evaluation, we measure the clustering accuracy by comparing the predicted label assignment with the ground truth, following the protocol of Vaze et al. (2022b). Implementation details. We follow Vaze et al. (2022b) in using ViT-B-16 initialized with pretrained DINO (Caron et al., 2021) as our backbone. The output [CLS] token is used as the feature representation. Following standard practice, we project the representations with a non-linear projection head and use the projected embeddings for contrastive learning. We set the dimension of the projected embeddings to 65,536, following Caron et al. (2021). At training time, we feed two views with random augmentations to the model. We fine-tune only the last block of the vision transformer with an initial learning rate of 0.01, and the head is trained with an initial learning rate of 0.1. We update the pseudo labels with SNC at the beginning of each training epoch. All methods are trained for 200 epochs with a cosine annealing schedule.
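The labelled/unlabelled split described above can be sketched as follows (illustrative; `gcd_split` is a name we introduce, and the seen classes are sampled at random):

```python
import numpy as np

def gcd_split(labels, seen_frac=0.5, labelled_frac=0.5, seed=0):
    """Build a GCD split: sample a fraction of the classes as 'seen', draw
    a fraction of each seen class into the labelled set, and put everything
    else (the remaining seen instances plus all unseen classes) into the
    unlabelled set. Returns (labelled indices, unlabelled indices)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    seen = rng.choice(classes, size=int(len(classes) * seen_frac), replace=False)
    labelled = []
    for c in seen:
        idx = np.flatnonzero(labels == c)
        labelled.extend(rng.choice(idx, size=int(len(idx) * labelled_frac), replace=False))
    labelled = np.sort(np.array(labelled))
    unlabelled = np.setdiff1d(np.arange(len(labels)), labelled)
    return labelled, unlabelled
```

With the default fractions, half of the classes contribute labelled data, and half of each of those classes still appears only in the unlabelled set, exactly the regime GCD targets.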
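The two supervised contrastive losses used in training (one over the ground-truth labels of the labelled data, one over the SNC pseudo labels) both follow the general supervised contrastive form of Khosla et al. (2020); a minimal NumPy sketch, not the authors' exact implementation:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.07):
    """Supervised contrastive loss over a batch of embeddings z: pull
    together all pairs sharing a label (ground-truth or pseudo) and push
    apart everything else. Embeddings are L2-normalized; tau is the
    temperature (e.g. tau_s = 0.07 or tau_a = 0.1 in the text)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(z)
    eye = np.eye(n, dtype=bool)
    sim = np.where(eye, -np.inf, sim)  # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

The loss is lower when samples sharing a label already sit close in embedding space, which is exactly the signal that pulls pseudo-positive pairs together during training.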
For our method, the temperatures of the two supervised contrastive losses, τs and τa, are set to 0.07 and 0.1, respectively. For class number estimation, we set |D_L^l|:|D_L^v| = 6:4 for CIFAR-10 and |D_L^l|:|D_L^v| = 8:2 for the other datasets. We use 6:4 for CIFAR-10 because it has only 5 labelled classes; with an 8:2 ratio, only one class would remain in D_L^v, which would diminish its effectiveness in providing a reference clustering accuracy for validation. All experiments are conducted on a single RTX 3090 GPU.

## 4.2 Comparison With The State-Of-The-Art

We compare CiPR with four strong GCD baselines: *RankStats+* and *UNO+*, which are adapted from RankStats (Han et al., 2021) and UNO (Fini et al., 2021), originally developed for NCD; the state-of-the-art GCD method of Vaze et al. (2022b); and *ORCA* (Cao et al., 2022), which addresses GCD from a semi-supervised learning perspective.

![9_image_0.png](9_image_0.png)

Figure 4: **Visualization on CIFAR-10.** We conduct t-SNE projection on features extracted by raw DINO, the GCD method of Vaze et al. (2022b), and our CiPR. We randomly sample 1000 images of each class from CIFAR-10 for visualization. Unseen categories are marked with *.

Table 3: Estimation of the class number in unlabelled data.

| | Method | CIFAR-10 | CIFAR-100 | ImageNet-100 | CUB-200 | SCars | Herbarium19 |
|---|---|---|---|---|---|---|---|
| Ground truth | — | 10 | 100 | 100 | 200 | 196 | 683 |
| Estimate (error) | Vaze et al. (2022b) | 9 (10%) | 100 (0%) | 109 (9%) | 231 (16%) | 230 (17%) | 520 (24%) |
| | Ours (CiPR) | 12 (20%) | 103 (3%) | 100 (0%) | 155 (23%) | 182 (7%) | 490 (28%) |
| Runtime | Vaze et al. (2022b) | 15394s | 27755s | 64524s | 7197s | 8863s | 63901s |
| | Ours (CiPR) | 102s | 528s | 444s | 126s | 168s | 1654s |
As ORCA uses a different backbone model and data splits, we retrain ORCA with a ViT backbone using the official code on the same splits for a fair comparison. In Tab. 1, we compare CiPR with the others on the generic image recognition datasets. CiPR consistently outperforms all other methods by a significant margin. For example, CiPR outperforms the state-of-the-art GCD method of Vaze et al. (2022b) by 6.2% on CIFAR-10, 10.7% on CIFAR-100, and 6.4% on ImageNet-100 for 'All' classes, and by 9.5% on CIFAR-10, 22.7% on CIFAR-100, and 12.0% on ImageNet-100 for 'Unseen' classes. This demonstrates that the cross-instance positive relations obtained by SNC are effective for learning better representations of unlabelled data. Because a linear classifier is trained on the 'Seen' classes, UNO+ shows strong performance on 'Seen' classes, but its performance on 'Unseen' ones is significantly worse. In contrast, CiPR achieves comparably good performance on both 'Seen' and 'Unseen' classes, without biasing towards the labelled data. In Tab. 2, we further compare our method with the others on fine-grained image recognition datasets, in which the differences between classes are subtle, making GCD more challenging. Again, CiPR consistently outperforms all other methods for 'All' and 'Unseen' classes. On CUB-200 and SCars, CiPR achieves 5.8% and 8.0% improvements over the state of the art for 'All' classes. On the challenging Herbarium19 dataset, which contains many more classes than the other datasets and poses the extra challenge of a long-tailed distribution, CiPR still achieves improvements of 1.4% and 5.6% for 'All' and 'Unseen' classes. Both RankStats+ and UNO+ show a strong bias towards the 'Seen' classes. In Fig. 4, we visualize the t-SNE projections of features extracted by DINO (Caron et al., 2021), the GCD method of Vaze et al. (2022b), and our method CiPR on CIFAR-10. Both Vaze et al. (2022b) and our features are more discriminative than the DINO features. The method of Vaze et al.
(2022b) captures better representations with more separable clusters, but some seen categories are confounded with unseen ones, *e.g.*, cat with dog and automobile with truck, while CiPR features show better cluster boundaries for seen and unseen categories, further validating the quality of our learned representation.

## 4.3 Estimating The Unknown Class Number

In Tab. 3, we report our estimated class numbers on both generic and fine-grained datasets using the joint reference score sc described in Sec. 3.4. Overall, CiPR achieves results comparable to the method of Vaze et al. (2022b), but it is far more efficient (40-150 times faster) and does not require a list of predefined candidate numbers. Even on the most difficult Herbarium19 dataset, CiPR finishes within a few minutes, whereas a single run of k-means takes more than an hour due to the large class number, let alone multiple runs over a predefined list of possible class numbers. Both methods have similar memory costs, as reported in Appx. D. We also present the evaluation results based on the different settings of the estimated class numbers in Tab. 4. Our method consistently outperforms Vaze et al. (2022b) on all datasets, demonstrating its superior performance.

Table 4: Results using the estimated class numbers. Here, Kkm and K1t1 denote using the class numbers estimated by Vaze et al. (2022b) and by ours, respectively.

| Estimated #class | Method | CIFAR-10 All | Seen | Unseen | CIFAR-100 All | Seen | Unseen | ImageNet-100 All | Seen | Unseen | CUB-200 All | Seen | Unseen | SCars All | Seen | Unseen | Herbarium19 All | Seen | Unseen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kkm | Vaze et al. (2022b) | 79.4 | 98.9 | 69.6 | 70.8 | 77.6 | 57.0 | 71.7 | 69.8 | 72.7 | 53.5 | 64.6 | 48.0 | 41.9 | 68.2 | 29.3 | 15.2 | 19.8 | 13.0 |
| Kkm | Ours (CiPR) | 85.0 | 97.3 | 78.8 | 81.5 | 82.4 | 79.7 | 78.6 | 81.2 | 77.3 | 55.7 | 56.8 | 55.2 | 45.3 | 58.9 | 38.7 | 36.8 | 43.8 | 33.3 |
| K1t1 | Vaze et al. (2022b) | 83.6 | 98.2 | 76.3 | 75.1 | 82.4 | 60.3 | 74.1 | 89.8 | 66.3 | 48.3 | 55.8 | 44.6 | 40.5 | 64.8 | 28.8 | 15.5 | 19.9 | 13.4 |
| K1t1 | Ours (CiPR) | 90.3 | 97.3 | 86.8 | 80.9 | 82.1 | 78.4 | 80.5 | 84.9 | 78.3 | 53.1 | 56.8 | 51.2 | 42.4 | 55.4 | 36.2 | 37.3 | 43.6 | 34.2 |

![10_image_0.png](10_image_0.png)

## 4.4 Ablation Study

Approaches to generating positive relations. In Tab. 5, we compare our SNC with multiple alternative approaches to generating positive relations for joint contrastive learning, including directly using the nearest neighbor in every mini-batch and running various clustering algorithms to obtain pseudo labels, *e.g.*, FINCH (Sarfraz et al., 2019), k-means (MacQueen et al., 1967), and semi-supervised k-means (Han et al., 2019; Vaze et al., 2022b). Non-hierarchical clustering methods (k-means and semi-supervised k-means) require a given cluster number. For k-means, we use the ground-truth class number. For semi-supervised k-means, we use both the ground-truth number and the overclustering number (twice the ground truth). For generating pseudo positive relations, our method achieves the best performance among all approaches. FINCH performs well on CIFAR-100 but degrades on CUB-200. We hypothesize that because FINCH is purely unsupervised, without leveraging labelled data, it fails to generate reliable pseudo labels for the more semantically similar instances of the fine-grained CUB-200. Overclustering semi-supervised k-means achieves comparable performance on CUB-200 but performs poorly on CIFAR-100.
This might be caused by the intrinsically poorer performance of semi-supervised k-means compared to the proposed SNC, which results in worse pseudo labels. We further report the mean purity curve of the pseudo labels generated by all clustering methods throughout the training process in Fig. 5. We can observe that the pseudo labels produced by SNC retain the highest purity on both datasets throughout the entire training process. We evaluate performance using both SNC and semi-supervised k-means for comparison. SNC reaches higher accuracy than semi-supervised k-means at test time. This advantage stems from the hierarchical design of SNC. In semi-supervised k-means, the number of labelled centroids remains fixed throughout all iterations, and only the distances between instances and centroids are considered. This may sometimes prevent correct instance groupings: if an unlabelled instance is close to a labelled instance but far from the centroid of the labelled cluster, it may not be assigned to that class. This limitation can negatively affect the overall quality of the clustering results. SNC, on the other hand, is a hierarchical method that takes the local distribution structure into account at each hierarchical level for both labelled and unlabelled data. In the scenario mentioned above, where an unlabelled instance is close to a labelled instance but distant from its global centroid, SNC first connects the unlabelled instance to the nearby labelled instance in the early clustering steps (low clustering levels) and then gradually merges it into the global labelled cluster at the final level. As a result, SNC consistently demonstrates higher accuracy than semi-supervised k-means.

Table 5: Comparison of different approaches to generating positive relations, evaluated with two clustering methods: SNC (first value in each cell) and semi-supervised k-means (second value).

| Classes | CIFAR-100 All | Seen | Unseen | CUB-200 All | Seen | Unseen |
|---|---|---|---|---|---|---|
| w/ nearest neighbor | 80.4 / 74.9 | 82.9 / 77.5 | 75.5 / 69.7 | 51.9 / 45.8 | 56.7 / 48.2 | 49.5 / 44.6 |
| w/ FINCH | 81.4 / 76.6 | 81.7 / 75.7 | 80.7 / 78.6 | 51.4 / 47.9 | 51.8 / 45.1 | 51.3 / 49.3 |
| w/ k-means | 76.7 / 72.2 | 77.1 / 70.4 | 75.7 / 75.8 | 52.8 / 48.6 | 53.1 / 45.5 | 52.7 / 50.2 |
| w/ semi-k-means | 78.1 / 73.8 | 81.5 / 73.3 | 71.3 / 74.8 | 54.5 / 48.7 | 54.1 / 43.9 | 54.7 / 51.2 |
| w/ semi-k-means⋆ | 76.8 / 71.9 | 76.9 / 71.8 | 76.4 / 72.1 | 56.6 / 48.1 | 57.1 / 50.2 | 56.4 / 47.1 |
| w/ SNC (ours) | 81.5 / 76.5 | 82.4 / 75.1 | 79.7 / 79.3 | 57.1 / 50.2 | 58.7 / 48.8 | 55.6 / 51.0 |

Table 6: **Results using different relations.** u-u denotes pairwise relations from SNC between unlabelled and unlabelled data, and u-ℓ denotes pairwise relations from SNC between unlabelled and labelled data. In each cell, the result evaluated with SNC is reported first and the result with semi-supervised k-means second. Rows (1)-(2) apply SNC on all data but use only u-u or u-ℓ for pseudo positive relations. Row (3) denotes our full method.

| | u-u | u-ℓ | CIFAR-100 All | Seen | Unseen | CUB-200 All | Seen | Unseen |
|---|---|---|---|---|---|---|---|---|
| (0) | ✗ | ✗ | 73.6 / 70.8 | 80.4 / 77.6 | 60.0 / 57.0 | 53.1 / 51.3 | 57.6 / 56.6 | 50.8 / 48.7 |
| (1) | ✓ | ✗ | 80.5 / 76.5 | 80.6 / 76.3 | 80.3 / 76.9 | 56.6 / 52.7 | 57.2 / 51.5 | 56.3 / 53.3 |
| (2) | ✗ | ✓ | 72.9 / 70.0 | 82.0 / 79.1 | 54.8 / 51.6 | 51.0 / 45.5 | 52.9 / 44.4 | 50.1 / 46.0 |
| (3) | ✓ | ✓ | 81.5 / 76.5 | 82.4 / 75.1 | 79.7 / 79.3 | 57.1 / 50.2 | 58.7 / 48.8 | 55.6 / 51.0 |

Effectiveness of cross-instance positive relations. In this paper, we use SNC to generate pairwise relations among unlabelled data, as well as relations between unlabelled and labelled data, for supervised contrastive learning. In Tab. 6, we evaluate different configurations of the pairwise relations, showing that our configuration achieves the best overall performance. Row (0) represents the performance of the state-of-the-art GCD method of Vaze et al. (2022b) without any pseudo relations. Comparing row (1) to row (0), we see that SNC can effectively enhance the baseline performance solely by providing pairwise relations between unlabelled data. Comparing row (2) to row (0), we see that adding only the pairwise relations between labelled and unlabelled data (u-ℓ) is not sufficient to boost the baseline performance and can even be harmful due to the biased supervision from seen categories. Row (3) is our full method, which achieves the best performance, demonstrating the effective utilization of pairwise relations in our method.

## 5 Conclusion

We have presented CiPR, a framework for the challenging problem of GCD. Our framework leverages cross-instance positive relations that are obtained with SNC, an efficient parameter-free hierarchical clustering algorithm we develop for the GCD setting. Although off-the-shelf clustering methods can be utilized to generate pseudo labels, none of the current methods fulfills all of the fundamental properties of SNC at the same time, which are critical to GCD.
The required properties are: (1) using label supervision when clustering unlabelled data, (2) not requiring the number of clusters to be known, and (3) achieving the highest over-clustering purity, which yields reliable pairwise pseudo labels for unbiased representation learning. With the positive relations obtained by SNC, we can learn better representations for GCD, and the label assignment on the unlabelled data can be obtained from a single run of SNC, which is far more efficient than the semi-supervised k-means used in the state-of-the-art method. We also show that SNC can be used to estimate the unknown class number in the unlabelled data with higher efficiency.

## Acknowledgments

This work is partially supported by Hong Kong Research Grant Council - Early Career Scheme (Grant No. 27208022), National Natural Science Foundation of China (Grant No. 62306251), and HKU Seed Fund for Basic Research.

## References

Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Armand Joulin, Nicolas Ballas, and Michael Rabbat. Semi-supervised learning of visual features by non-parametrically predicting view assignments with support samples. In *ICCV*, 2021.

Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In *CVPR*, 2016.

David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. In *NeurIPS*, 2019.

David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, and Colin Raffel. Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring. In *ICLR*, 2020.

Matko Bošnjak, Pierre Harvey Richemond, Nenad Tomasev, Florian Strub, Jacob C Walker, Felix Hill, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, and Jovana Mitrovic. Semppl: Predicting pseudo-labels for better contrastive representations. In *ICLR*, 2022.

Richard P. Brent.
An algorithm with guaranteed convergence for finding a zero of a function. The Computer Journal, 1971. Zhaowei Cai, Avinash Ravichandran, Subhransu Maji, Charless Fowlkes, Zhuowen Tu, and Stefano Soatto. Exponential moving average normalization for self-supervised and semi-supervised learning. In *CVPR*, 2021. Kaidi Cao, Maria Brbić, and Jure Leskovec. Open-world semi-supervised learning. In *ICLR*, 2022. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *ICCV*, 2021. Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised learning. *MIT Press*, 2006. Guangyao Chen, Limeng Qiao, Yemin Shi, Peixi Peng, Jia Li, Tiejun Huang, Shiliang Pu, and Yonghong Tian. Learning open set network with discriminative reciprocal points. In *ECCV*, 2020a. Guangyao Chen, Peixi Peng, Xiangqian Wang, and Yonghong Tian. Adversarial reciprocal points learning for open set recognition. *IEEE TPAMI*, 2021. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *ICML*, 2020b. Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. In *NeurIPS*, 2020c. Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, and Masashi Sugiyama. Meta discovery: Learning to discover novel classes given very limited data. In *ICLR*, 2022. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *CVPR*, 2009. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. 
*arXiv preprint arXiv:2010.11929*, 2020. Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisserman. With a little help from my friends: Nearest-neighbor contrastive learning of visual representations. In *ICCV*, 2021. Enrico Fini, Enver Sangineto, Stéphane Lathuilière, Zhun Zhong, Moin Nabi, and Elisa Ricci. A unified objective for novel class discovery. In *ICCV*, 2021. Kai Han, Andrea Vedaldi, and Andrew Zisserman. Learning to discover novel visual categories via deep transfer clustering. In *ICCV*, 2019. Kai Han, Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Andrea Vedaldi, and Andrew Zisserman. Automatically discovering and learning new visual categories with ranking statistics. In *ICLR*, 2020. Kai Han, Sylvestre-Alvise Rebuffi, Sebastien Ehrhardt, Andrea Vedaldi, and Andrew Zisserman. Autonovel: Automatically discovering and learning novel visual categories. *IEEE TPAMI*, 2021. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *CVPR*, 2020. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In *ICLR*, 2017. Yen-Chang Hsu, Zhaoyang Lv, and Zsolt Kira. Learning to cluster in order to transfer across domains and tasks. In *ICLR*, 2018a. Yen-Chang Hsu, Zhaoyang Lv, Joel Schlosser, Phillip Odom, and Zsolt Kira. Multi-class classification without multi-class labels. In *ICLR*, 2018b. Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In *CVPR*, 2020. Xuhui Jia, Kai Han, Yukun Zhu, and Bradley Green. Joint representation learning and novel category discovery on single-and multi-modal data. In *ICCV*, 2021. KJ Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, and Vineeth N Balasubramanian. Novel class discovery without forgetting. In *ECCV*, 2022. 
Zhanghan Ke, Daoye Wang, Qiong Yan, Jimmy Ren, and Rynson WH Lau. Dual student: Breaking the limits of the teacher in semi-supervised learning. In *ICCV*, 2019. Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. In *NeurIPS*, 2020. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In *4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13)*, 2013. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In *NeurIPS*, 2012. Alex Krizhevsky et al. Learning multiple layers of features from tiny images. *Technical report*, 2009. H. W. Kuhn. The hungarian method for the assignment problem. *Naval Research Logistics Quarterly*, 1955. Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. In *ICLR*, 2017. Junnan Li, Caiming Xiong, and Steven CH Hoi. Comatch: Semi-supervised learning with contrastive graph regularization. In *ICCV*, 2021. Yewen Li, Chaojie Wang, Xiaobo Xia, Tongliang Liu, Bo An, et al. Out-of-distribution detection with an adaptive likelihood ratio on informative hierarchical vae. In *NeurIPS*, 2022. Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In *ICLR*, 2018. Yucen Luo, Jun Zhu, Mengxi Li, Yong Ren, and Bo Zhang. Smooth neighbors on teacher graphs for semi-supervised learning. In *CVPR*, 2018. James MacQueen et al. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, 1967. Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. 
Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision. *arXiv preprint arXiv:2304.07193*, 2023. Hieu Pham, Zihang Dai, Qizhe Xie, and Quoc V Le. Meta pseudo labels. In *CVPR*, 2021. Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. In *ICLR*, 2020. Mamshad Nayeem Rizve, Navid Kardan, Salman Khan, Fahad Shahbaz Khan, and Mubarak Shah. Openldn: Learning to discover novel classes for open-world semi-supervised learning. In *ECCV*, 2022a. Mamshad Nayeem Rizve, Navid Kardan, and Mubarak Shah. Towards realistic semi-supervised learning. In ECCV, 2022b. Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 1987. Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, and Elisa Ricci. Class-incremental novel class discovery. In *ECCV*, 2022. M. Saquib Sarfraz, Vivek Sharma, and Rainer Stiefelhagen. Efficient parameter-free clustering using first neighbor relations. In *CVPR*, 2019. Walter J Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E Boult. Toward open set recognition. *IEEE TPAMI*, 2012. Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In *NeurIPS*, 2017. Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In *NeurIPS*, 2020. Kiat Chuan Tan, Yulong Liu, Barbara A. 
Ambrose, Melissa Tulig, and Serge J. Belongie. The herbarium challenge 2019 dataset. *arXiv preprint arXiv:1906.05372*, 2019. Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In *NeurIPS*, 2017. Wouter Van Gansbeke, Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Scan: Learning to classify images without labels. In *ECCV*, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *NeurIPS*, 2017. Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Open-set recognition: A good closed-set classifier is all you need. In *ICLR*, 2022a. Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Generalized category discovery. In *CVPR*, 2022b. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. *Technical report*, 2011. Xudong Wang, Zhirong Wu, Long Lian, and Stella X Yu. Debiased learning from naturally imbalanced pseudo-labels for zero-shot and semi-supervised learning. In *CVPR*, 2022. Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, and Fan Yang. Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning. In *CVPR*, 2021. Xin Wen, Bingchen Zhao, and Xiaojuan Qi. Parametric classification for generalized category discovery: A baseline study. In *ICCV*, 2023. Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In *CVPR*, 2020. Jay Yagnik, Dennis W. Strelow, David A. Ross, and Ruei-Sung Lin. The power of comparative reasoning. In ICCV, 2011. Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In *NeurIPS*, 2021. 
Bingchen Zhao and Kai Han. Novel visual category discovery with dual ranking statistics and mutual knowledge distillation. In *NeurIPS*, 2021. Bingchen Zhao, Xin Wen, and Kai Han. Learning semi-supervised gaussian mixture models for generalized category discovery. In *ICCV*, 2023. Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, and Bo Han. Out-of-distribution detection learning with unreliable out-of-distribution sources. In *NeurIPS*, 2023. Mingkai Zheng, Shan You, Lang Huang, Fei Wang, Chen Qian, and Chang Xu. Simmatch: Semi-supervised learning with similarity matching. In *CVPR*, 2022. Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, and Nicu Sebe. Neighborhood contrastive learning for novel class discovery. In *CVPR*, 2021.

![16_image_0.png](16_image_0.png)

Figure 6: **A more detailed illustration of SNC.** SNC iteratively clusters instances from the bottom to the top, producing multiple levels of different partitions. At each level, the auto-adaptive chain length λ is dynamically determined by the number of labelled instances nℓ in each class. The connected components are extracted with selective neighbors (SNs), forming the clusters at each level.

## A More Analysis On SNC

We present a more detailed illustration of our proposed SNC in Fig. 6. SNC is inspired by FINCH, but the two differ significantly in two key aspects: (1) FINCH treats all instances the same and simply uses nearest neighbors to construct graphs; SNC uses a novel selective neighbor strategy tailored for the GCD setting to construct graphs, treating labelled and unlabelled instances differently. (2) SNC is able to cluster a mixed set of labelled and unlabelled data by fully exploiting label supervision, whereas FINCH is not.

Different choices of chain length λ. In Tab.
7, we experiment with other formulations that satisfy the above relationship, e.g., λ = ⌈∛nℓ⌉ and λ = ⌈nℓ/2⌉, and our formulation performs the best. We also compare our dynamic λ with a possible alternative of a fixed λ. For the fixed chain length, we conduct multiple experiments with different length values to find the best length giving the highest accuracy for each dataset. We observe that the best chain length varies from dataset to dataset, and there is no single fixed λ that gives the best performance for all datasets. In contrast, our dynamic λ consistently outperforms the fixed one, and it can automatically adjust the chain length for different datasets and different levels, without requiring any tuning or validation like the fixed one.

Table 7: **Comparison of different formulations of chain length λ.** The best fixed length values are 8 for CIFAR-100 and 3 for CUB-200.

| | CIFAR-100 | | | CUB-200 | | |
|----------------|-----------|----------|----------|----------|----------|----------|
| Classes | All | Seen | Unseen | All | Seen | Unseen |
| Fixed | 80.2 | 79.7 | **81.3** | 54.3 | **58.8** | 52.1 |
| ⌈nℓ/2⌉ | 81.4 | **84.5** | 75.2 | 45.5 | 45.5 | 45.5 |
| ⌈∛nℓ⌉ | 72.5 | 77.1 | 63.2 | 42.4 | 45.0 | 41.1 |
| ⌈√nℓ⌉ (ours) | **81.5** | 82.4 | 79.7 | **57.1** | 58.7 | **55.6** |

Impacts of different levels for positive relation generation. When the chain length is determined, clustering on different datasets will follow a similar rate, which guarantees that a fixed clustering level provides an equal clustering degree in the hierarchy for all datasets. We consider the following two conditions for the choice of a proper clustering level: (1) the number of clusters should be notably larger than the labelled class number to overcluster the instances, and (2) the number of clusters should not be too much larger than the labelled class number and the batch size, which may decrease useful pairwise correlations. A proper level for positive

Table 8: **Comparison of different levels.** Compare levels 2, 3, 4, and baseline (Vaze et al., 2022b).
| | CIFAR-100 | | | CUB-200 | | |
|-------------------------------|-----------|------|--------|------|------|--------|
| Classes | All | Seen | Unseen | All | Seen | Unseen |
| Baseline (Vaze et al., 2022b) | 70.8 | 77.6 | 57.0 | 51.3 | 56.6 | 48.7 |
| CiPR w/ level 2 | 72.4 | 79.6 | 58.0 | 50.9 | 55.8 | 48.5 |
| CiPR w/ level 3 (ours) | 81.5 | 82.4 | 79.7 | 57.1 | 58.7 | 55.6 |
| CiPR w/ level 4 | 81.6 | 81.9 | 80.8 | 52.9 | 53.1 | 52.8 |

relation generation should overcluster the labelled data to some extent, such that reliable positive relations can be generated. Level 1 is not a valid choice because no positive relations can be generated if each instance is treated as a cluster. In Tab. 8, we present the performance using levels 2, 3, and 4 to generate pseudo labels and also compare with the previous state-of-the-art baseline by Vaze et al. (2022b). We empirically find that the overclustering levels 3 and 4 are similarly good, while level 2 is worse because fewer positive relations are explored in each mini-batch. Even using level 2, our method still performs on par with Vaze et al. (2022b).

Table 9: **Effectiveness of SNC on different learned features.**

| Clustering | Features | CIFAR-100 | | | ImageNet-100 | | | CUB-200 | | | SCars | | |
|--------------|---------------------------|------|------|--------|------|------|--------|------|------|--------|------|------|--------|
| | | All | Seen | Unseen | All | Seen | Unseen | All | Seen | Unseen | All | Seen | Unseen |
| Semi-k-means | DINO (Caron et al., 2021) | 60.4 | 63.1 | 54.9 | 72.8 | 70.6 | 73.8 | 36.7 | 37.9 | 36.0 | 12.3 | 13.7 | 11.6 |
| | Vaze et al. (2022b) | 74.5 | 81.9 | 60.0 | 69.2 | 66.6 | 70.5 | 53.5 | 59.9 | 50.3 | 40.8 | 67.6 | 27.8 |
| | CiPR (ours) | 76.5 | 75.1 | 79.3 | 72.8 | 70.6 | 73.8 | 49.8 | 46.1 | 51.7 | 42.6 | 55.2 | 36.5 |
| SNC (ours) | DINO (Caron et al., 2021) | 65.5 | 69.0 | 58.3 | 76.8 | 81.1 | 74.6 | 36.7 | 35.0 | 37.5 | 12.4 | 15.8 | 10.7 |
| | Vaze et al. (2022b) | 77.8 | 87.4 | 58.6 | 61.4 | 76.7 | 53.8 | 55.9 | 61.6 | 53.0 | 41.3 | 62.9 | 30.8 |
| | CiPR (ours) | 81.5 | 82.4 | 79.7 | 80.5 | 84.9 | 78.3 | 57.1 | 58.7 | 55.6 | 47.0 | 61.5 | 40.1 |

Effectiveness of SNC on different learned features. In Tab. 9, we evaluate SNC¹ on features extracted from DINO (Caron et al., 2021), the GCD method of Vaze et al. (2022b), and our method CiPR. We also compare SNC with semi-supervised k-means (Han et al., 2019; Vaze et al., 2022b). We can observe that SNC surpasses semi-supervised k-means by a significant margin on all features, except those extracted by Vaze et al. (2022b) on ImageNet-100. Moreover, semi-supervised k-means with our features performs better than with other features. Overall, SNC with our learned features gives the best performance.

## B A Unified Loss

In this paper, to leverage pseudo labels produced by SNC, we jointly train our model with two supervised contrastive losses, one using true positive relations of labelled data and the other using pseudo positive relations of all data. Indeed, it is possible to train the model with a unified loss by replacing the pseudo relations in the second term of our loss and removing the first term. Formally, let $\mathcal{R}_{\mathcal{B}}(i)$ be the set of positive relations for instance i.
The unified loss $\mathcal{L}_i^r$ can be written as

$${\mathcal{L}}_{i}^{r}=-{\frac{1}{|{\mathcal{R}}_{\mathcal{B}}(i)|}}\sum_{q\in{\mathcal{R}}_{\mathcal{B}}(i)}\log{\frac{\exp({\boldsymbol{z}}_{i}\cdot{\boldsymbol{z}}_{q}/\tau)}{\sum_{n\in{\mathcal{B}},n\neq i}\exp({\boldsymbol{z}}_{i}\cdot{\boldsymbol{z}}_{n}/\tau)}},\qquad(7)$$

where

$$\mathcal{R}_{\mathcal{B}}(i)=\begin{cases}\mathcal{G}_{\mathcal{B}}(i)\cup(\mathcal{P}_{\mathcal{B}}(i)\cap\mathcal{I}_{\mathcal{U}})&\text{if }i\in\mathcal{I}_{\mathcal{L}}\\ \mathcal{P}_{\mathcal{B}}(i)&\text{if }i\in\mathcal{I}_{\mathcal{U}}\end{cases},\qquad(8)$$

and $\mathcal{I}_{\mathcal{L}}$ and $\mathcal{I}_{\mathcal{U}}$ denote the instance indices of the labelled and unlabelled set, respectively. In Tab. 10, we compare our two-term loss formulation with this unified loss formulation. Our two-term loss turns out to be more effective. We hypothesize that the performance degradation of Eq. (7) is caused by the unbalanced granularity of labelled and unlabelled data, due to the mixture of overclustering pseudo labels and non-overclustering ground-truth labels.

¹When representing a clustering method here, SNC denotes selective neighbor clustering with one-to-one merging.

Table 10: **Results using different loss formulations.**

| | CIFAR-100 | | | CUB-200 | | |
|-------------|-----------|------|--------|------|------|--------|
| Classes | All | Seen | Unseen | All | Seen | Unseen |
| Eq. (7) | 79.3 | 80.3 | 77.3 | 53.9 | 53.5 | 54.0 |
| Ours (CiPR) | 81.5 | 82.4 | 79.7 | 57.1 | 58.7 | 55.6 |

## C Class Number Estimation

![18_image_0.png](18_image_0.png)

Figure 7: **Curves throughout class number estimation.** We report curves of accuracy on the labelled subset $\mathcal{D}_{\mathcal{L}}^{v}$, silhouette score on the unlabelled data $\mathcal{D}_{\mathcal{U}}$, and our reference score on $\mathcal{D}_{\mathcal{L}}^{v} \cup \mathcal{D}_{\mathcal{U}}$. The cyan vertical line denotes the estimated class number and the red vertical line denotes the ground-truth class number. Note that the x-axis should be read from right to left, as the merging starts from the lower level to the upper level.

In Fig. 7, we show how labelled accuracy, silhouette score, and reference score change throughout the whole procedure of class number estimation with one-to-one merging. The accuracy on the labelled instances or the silhouette score alone does not fit the actual cluster number well. By jointly considering both, we can see that the actual class number aligns well with our suggested reference score.

## D Time Efficiency

Here, we evaluate the time efficiency of CiPR, including both category discovery and class number estimation.

Category discovery efficiency. The latency for the category discovery process mainly consists of two parts: feature extraction and label assignment. In Tab. 11, we present the feature extraction time. All methods consume roughly the same amount of time for feature extraction per image. RankStats+ (Han et al., 2021), UNO+ (Fini et al., 2021), and ORCA (Cao et al., 2022) assign labels with a linear classifier, thanks to the assumption of a known category number. Hence, the label assignment process is simply done by a fast feed-forward pass of a linear classifier, costing negligible time (< 0.0005 seconds per image), though their performance lags. Our CiPR and Vaze et al. (2022b) contain the transfer clustering process for label assignment, for which CiPR is 6-30 times faster than the semi-supervised k-means used in Vaze et al. (2022b) (see Tab. 12). The high efficiency of SNC stems from the design of *hierarchical* clustering, which eliminates the need for the extensive iterative steps required by methods like semi-supervised k-means.

Table 12: **Time cost in clustering.**
| | CIFAR-10 | CIFAR-100 | ImageNet-100 | CUB-200 | SCars | Herbarium19 |
|----------------------------------|----------|-----------|--------------|---------|-------|-------------|
| Semi-k-means | 346s | 688s | 3863s | 256s | 356s | 6053s |
| Ours (SNC w/ one-to-one merging) | 58s | 111s | 118s | 36s | 50s | 917s |

Table 13: **Time and memory consumed in estimating class number.**

| | Method | CIFAR-10 | CIFAR-100 | ImageNet-100 | CUB-200 | SCars | Herbarium19 |
|-------------|---------------------|----------|-----------|--------------|---------|-------|-------------|
| Runtime (s) | Vaze et al. (2022b) | 15394 | 27755 | 64524 | 7197 | 8863 | 63901 |
| | Ours (CiPR) | 102 | 528 | 444 | 126 | 168 | 1654 |
| Memory (MB) | Vaze et al. (2022b) | 2206 | 2207 | 3760 | 1354 | 1394 | 1902 |
| | Ours (CiPR) | 2535 | 2932 | 5848 | 1392 | 1451 | 2205 |

Table 11: **Time cost in feature extraction per image.**

| Method | Time cost |
|-------------------------------|--------------|
| RankStats+ (Han et al., 2021) | 0.015s±0.001 |
| UNO+ (Fini et al., 2021) | 0.017s±0.001 |
| ORCA (Cao et al., 2022) | 0.015s±0.001 |
| Vaze et al. (2022b) | 0.014s±0.001 |
| Ours (CiPR) | 0.014s±0.001 |

Estimating class number. Compared to repeatedly running k-means with different class numbers as in Vaze et al. (2022b), CiPR only requires a single run to obtain the estimated class number, thus significantly increasing efficiency. In Tab. 13, CiPR is 40-150 times faster than Vaze et al. (2022b), which utilizes k-means with the optimization of Brent's algorithm (Brent, 1971). We also compare the memory cost. We can observe that our method costs comparable memory but achieves a much faster running speed than Vaze et al.
(2022b) in class number estimation, thanks to (1) the higher efficiency of SNC and (2) the single run for our estimation method instead of multiple runs from a predefined list of possible class numbers required in Vaze et al. (2022b).

## E Comparison To Concurrent SimGCD With Backbone Enhancement

SimGCD (Wen et al., 2023) is a concurrent work tackling the GCD problem that shows competitive performance; it introduces a parametric classifier and an entropy regularization term. We further provide a comparison with SimGCD using both the DINOv1 (Caron et al., 2021) and the recently released DINOv2 (Oquab et al., 2023) feature backbones. Our model demonstrates strong competence with both DINOv1 and DINOv2 backbones.

Table 14: **Results on generic image recognition datasets.**

| Backbone | Methods | CIFAR-10 | | | CIFAR-100 | | | ImageNet-100 | | |
|----------|---------------------------|------|------|--------|------|------|--------|------|------|--------|
| | | All | Seen | Unseen | All | Seen | Unseen | All | Seen | Unseen |
| DINOv1 | Vaze et al. (2022b) | 91.5 | 97.9 | 88.2 | 70.8 | 77.6 | 57.0 | 74.1 | 89.8 | 66.3 |
| | SimGCD (Wen et al., 2023) | 97.1 | 95.1 | 98.1 | 80.1 | 81.2 | 77.8 | 83.0 | 93.1 | 77.9 |
| | Ours (CiPR) | 97.7 | 97.5 | 97.7 | 81.5 | 82.4 | 79.7 | 80.5 | 84.9 | 78.3 |
| DINOv2 | Vaze et al. (2022b) | 97.8 | 99.0 | 97.1 | 79.6 | 84.5 | 69.9 | 78.5 | 89.5 | 73.0 |
| | SimGCD (Wen et al., 2023) | 98.8 | 96.9 | 99.7 | 88.5 | 89.3 | 86.9 | 90.4 | 96.3 | 87.4 |
| | Ours (CiPR) | 99.0 | 98.7 | 99.2 | 90.3 | 89.0 | 93.1 | 88.2 | 87.6 | 88.5 |

Table 15: **Results on fine-grained image recognition datasets.**

| Backbone | Methods | CUB-200 | | | SCars | | | Herbarium19 | | |
|----------|---------------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| | | All | Seen | Unseen | All | Seen | Unseen | All | Seen | Unseen |
| DINOv1 | Vaze et al. (2022b) | 51.3 | 56.6 | 48.7 | 39.0 | 57.6 | 29.9 | 35.4 | 51.0 | 27.0 |
| | SimGCD (Wen et al., 2023) | 60.3 | 65.6 | 57.7 | 53.8 | 71.9 | 45.0 | 44.0 | 58.0 | 36.4 |
| | Ours (CiPR) | 57.1 | 58.7 | 55.6 | 47.0 | 61.5 | 40.1 | 36.8 | 45.4 | 32.6 |
| DINOv2 | Vaze et al. (2022b) | 70.2 | 70.1 | 70.2 | 62.8 | 65.7 | 61.4 | 38.3 | 40.1 | 37.4 |
| | SimGCD (Wen et al., 2023) | 76.3 | **80.0** | 74.4 | **71.3** | **81.6** | **66.4** | 58.7 | 63.8 | 56.2 |
| | Ours (CiPR) | **78.3** | 73.4 | **80.8** | 66.7 | 77.0 | 61.8 | **59.2** | **65.0** | **56.3** |

Unsurprisingly, with the stronger DINOv2 backbone, the results are improved for all methods, while CiPR still achieves the best performance on most datasets.

## F Data Splits

In Tab. 16, we show the details of the data splits of CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), ImageNet-100 (Deng et al., 2009), CUB-200 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and Herbarium19 (Tan et al., 2019) in our experiments.

Table 16: **Data splits of all datasets.** We present the number of classes in the labelled and unlabelled set (\|YL\|, \|YU\|), and the number of images (\|DL\|, \|DU\|).

| | CIFAR-10 | CIFAR-100 | ImageNet-100 | CUB-200 | SCars | Herbarium19 |
|--------|----------|-----------|--------------|---------|-------|-------------|
| \|YL\| | 5 | 80 | 50 | 100 | 98 | 341 |
| \|YU\| | 10 | 100 | 100 | 200 | 196 | 683 |
| \|DL\| | 12.5k | 20k | 32.5k | 1.5k | 2.0k | 8.5k |
| \|DU\| | 37.5k | 30k | 97.5k | 4.5k | 6.1k | 25.7k |

## G Special Cases Of Unlabelled Data

In the real world, we may encounter scenarios where unlabelled data are all from seen or unseen classes. We investigate such scenarios and conduct experiments to validate the effectiveness of our method. Our experiments are conducted under two settings: (1) applying our pretrained models in the main paper to seen-only and unseen-only unlabelled data; (2) retraining the models with seen-only and unseen-only unlabelled data. In Tab. 17, we can observe that our model maintains strong performance in all cases.

## H Attention Map Visualization

ViT (Dosovitskiy et al., 2020) has a multi-head attention design, with each head focusing on different contexts of the image.
Table 17: **Performance on seen-only and unseen-only unlabelled data.** "original setting" denotes the performance of CiPR dealing with GCD; "direct testing" denotes the performance of CiPR dealing with seen-only or unseen-only unlabelled data using the pretrained GCD model; "retraining" denotes the performance of retrained CiPR dealing with seen-only or unseen-only unlabelled data.

| | | CIFAR-10 | CIFAR-100 | ImageNet-100 | CUB-200 | SCars | Herbarium19 |
|--------|------------------|----------|-----------|--------------|---------|-------|-------------|
| Seen | original setting | 97.5 | 82.4 | 84.9 | 58.7 | 61.5 | 45.4 |
| | direct testing | 98.5 | 84.4 | 83.3 | 79.1 | 72.0 | 55.4 |
| | retraining | 98.9 | 87.0 | 87.3 | 81.9 | 75.2 | 66.3 |
| Unseen | original setting | 97.7 | 79.7 | 78.3 | 55.6 | 40.1 | 32.6 |
| | direct testing | 97.6 | 82.7 | 74.3 | 56.5 | 39.3 | 37.9 |
| | retraining | 98.4 | 78.9 | 79.3 | 60.4 | 42.5 | 41.3 |

For the final block of ViT, the input $\mathbf{X} \in \mathbb{R}^{(HW+1)\times D}$, corresponding to a feature of HW patches and a [CLS] token, is fed into multi-heads, which can be expressed as

$$\mathrm{MultiHead}(\mathbf{X})=[head_{1},head_{2},\ldots,head_{h}]\mathbf{W}^{O},\qquad(9)$$

where

$$head_{j}=\mathrm{softmax}\left(\frac{\mathbf{Q}_{j}\mathbf{K}_{j}^{T}}{\sqrt{d_{k}}}\right)\mathbf{V}_{j},\qquad(10)$$
$$\mathbf{Q}_{j}=\mathbf{X}\mathbf{W}_{j}^{Q},\qquad(11)$$
$$\mathbf{K}_{j}=\mathbf{X}\mathbf{W}_{j}^{K},\qquad(12)$$
$$\mathbf{V}_{j}=\mathbf{X}\mathbf{W}_{j}^{V},\qquad(13)$$

and $d_k$ is the dimension of queries and keys. In our model, the patch size is 16×16 pixels and HW = 14×14 = 196. The number of heads h is 12. Referring to Vaswani et al. (2017), consider the attention map of head j, $\mathbf{A}_{j}=\mathrm{softmax}\left(\frac{\mathbf{Q}_{j}\mathbf{K}_{j}^{T}}{\sqrt{d_{k}}}\right)\in[0,1]^{(HW+1)\times(HW+1)}$.
$\mathbf{A}_j$ describes the similarity of one feature to every other feature captured in head j. The first row of $\mathbf{A}_j$ shows how the [CLS] token in head j attends to every spatial patch of the input image. In Fig. 8, we visualize some of the interpretable attention heads to show the semantic regions that ViT attends to. We can observe that our model CiPR, as well as DINO (Caron et al., 2021) and Vaze et al. (2022b), can attend to specific semantic object regions. For instance, CiPR attends three heads respectively to 'license plate', 'light' and 'wheels' for Stanford Cars (head 1 fails in row 1), and to 'body', 'head' and 'neck' for CUB-200.

## I License For Experimental Datasets

All datasets used in this paper are permitted for research use. CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009) are released under the MIT License, allowing for research purposes. ImageNet-100 is a subset of ImageNet (Deng et al., 2009), which allows non-commercial research use. Similarly, CUB-200 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and Herbarium19 (Tan et al., 2019) are also restricted to non-commercial research purposes.

## J Limitation

In our current experiments, we consider images from the same curated dataset. However, in practice, we might want to transfer concepts from one dataset to another, which may have different data distributions (*e.g.*, the unlabelled data could follow a long-tailed distribution), introducing more challenges. Another limitation is that, currently, we need to train the model on both labelled and unlabelled data jointly. However, in the real world, there are often cases in which we do not have access to any labelled data from the seen classes when facing unlabelled data. We consider these as our future research directions.

![22_image_0.png](22_image_0.png)

Figure 8: **Attention visualizations.** We report visualization results of DINO (Caron et al., 2021) (left), Vaze et al. (2022b) (middle), and CiPR (right) on Stanford Cars (top) and CUB-200 (bottom).
For each dataset, we show two rows of 'Seen' categories (solid green box) and two rows of 'Unseen' categories (dashed red box). Zoom in to see attention details.
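The per-head [CLS] attention extraction described in Appendix H can be sketched in plain NumPy. This is an illustrative sketch only: the random inputs and 0.02-scaled projection weights below stand in for a trained ViT's activations and $\mathbf{W}_j^Q$, $\mathbf{W}_j^K$ matrices, which are assumptions for demonstration, not the actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cls_attention_maps(X, W_Q, W_K, num_heads=12, patch_grid=14):
    """Per-head [CLS]-to-patch attention maps for one ViT block.

    X: (HW+1, D) tokens; W_Q, W_K: (num_heads, D, d_k) projections.
    Returns num_heads arrays of shape (patch_grid, patch_grid),
    each being the first row of A_j restricted to the spatial patches."""
    d_k = W_Q.shape[-1]
    maps = []
    for j in range(num_heads):
        Q = X @ W_Q[j]                        # (HW+1, d_k)
        K = X @ W_K[j]                        # (HW+1, d_k)
        A = softmax(Q @ K.T / np.sqrt(d_k))   # (HW+1, HW+1), rows sum to 1
        # First row: how the [CLS] token attends to each spatial patch.
        maps.append(A[0, 1:].reshape(patch_grid, patch_grid))
    return maps

# Assumed toy inputs (a trained ViT would supply these).
rng = np.random.default_rng(0)
HW, D, h = 14 * 14, 768, 12
X = rng.standard_normal((HW + 1, D))
W_Q = rng.standard_normal((h, D, D // h)) * 0.02
W_K = rng.standard_normal((h, D, D // h)) * 0.02
maps = cls_attention_maps(X, W_Q, W_K)
```

Each returned 14×14 map can then be upsampled and overlaid on the input image to produce visualizations like Fig. 8.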
Review 1: Summary: This paper tackles the Generalized Category Discovery (GCD) problem in a partially labeled dataset, where unlabeled data may include instances from both known and novel categories. It proposes CiPR, a framework that utilizes Cross-instance Positive Relations for contrastive learning, a facet often overlooked in existing methods. To enhance representation learning, the paper introduces Selective Neighbor Clustering (SNC), a semi-supervised hierarchical clustering algorithm that generates a clustering hierarchy directly from a graph constructed from selective neighbors. A method for estimating the unknown class number using SNC is also presented, extending it to facilitate label assignment for unlabeled instances. Evaluated on public image recognition datasets, CiPR with SNC establishes a new state-of-the-art in addressing the GCD problem with an unknown category number in partially labeled datasets. Strengths and Weaknesses: **Strengths** - This paper studies a realistic and important problem. - The motivation of this paper is clear. - The reported results are overall great. Although in some cases the proposed method cannot work very well, it achieves the best performance in most experimental cases. **Weaknesses** - The writing of this paper can be further improved before acceptance. There are a series of unclear descriptions and justifications in the current form, which need to be modified. Requested Changes: - Could the paper provide more details and intuitions about why the proposed method does not need the known class number? - Is the research topic related to Out-of-distribution detection [R1,R2]? - Why is $\lambda$ set to $\sqrt{n_l}$? I do not understand the intuition behind the setting. - The proposed method relies on K-means. However, from the reported results, it seems that the proposed method does not suffer from a high time complexity. Could the paper provide more details about this? Overall, I think this is a good paper.
A round of revision can further improve it. ---- [R1] Out-of-distribution detection with an adaptive likelihood ratio on informative hierarchical vae. NeurIPS 2022. [R2] Out-of-distribution detection learning with unreliable out-of-distribution sources. NeurIPS 2023. Broader Impact Concerns: NA. ================================================== Review 2: Summary: In this paper, the authors propose an enhanced version of a baseline model published in 2022 for generalized category discovery. Specifically, they design a new semi-supervised clustering approach to replace the original semi-supervised $k$-means, so that there is no need to estimate the class number during training. They also show it is important to perform supervised contrastive learning on unlabeled data by using pseudo-labels generated from clustering process. Strengths and Weaknesses: Strengths: +: The paper is written well and easy to follow. All the figures and tables are of high quality. +: The proposed approach is technically sound. All necessary details of the proposed approach are provided. +: The proposed approach achieves a new SOTA. Comprehensive experiments show the technical improvements against the baseline model are effective. Weaknesses: -: Using pseudo-labels to incorporate unlabeled data into supervised contrastive learning is not a new idea. Many works in semi-supervised learning have already adopted this idea. There is no discussion on these related works. -: The way to search hyperparameters may not be convincing. It seems that hyperparameters (e.g. chain length, clustering level, ratio of $|\mathcal{D}^l_\mathcal{L}| : |\mathcal{D}^v_\mathcal{L}|$) are directly determined according to the reported performance (transductive accuracy). -: It is not clear why the proposed semi-supervised clustering approach can achieve better performance than semi-supervised $k$-means. There should be more in-depth discussions and analyses. Requested Changes: 1. 
Many works in semi-supervised learning have used a similar idea to incorporate unlabeled data into supervised contrastive learning. At least, there should be some discussion of these related works. 2. How are the hyperparameters in the proposed approach determined? Please elaborate. 3. Why can the proposed semi-supervised clustering approach achieve better performance than semi-supervised $k$-means? Please provide more discussion. Broader Impact Concerns: N/A ================================================== Review 3: Summary: This paper proposes a framework of joint contrastive representation learning with cross-instance positive relations for generalized category discovery (GCD). The key to joint contrastive representation learning is to select reliable and informative pseudo labels. For that, this paper proposes selective neighbor clustering (SNC), a method for hierarchical clustering of both labelled and unlabelled data. The proposed SNC method also serves to estimate the class number and assign predictions to unlabelled data. The experiments are conducted on many GCD benchmarks and show very competitive results compared to previous methods. Strengths and Weaknesses: ## Strengths - This paper proposes a framework of joint contrastive representation learning by utilizing both labeled and unlabeled data with cross-instance relations. This framework for the GCD problem is new. We can see from the experiments that when pseudo labels are selected well, this learning framework works to improve the results. - This paper proposes a method of selective neighbor clustering (SNC), addressing the problems that arise when labeled and unlabeled data are clustered together. The solution in SNC is to provide $\lambda$ chain links for the labeled data, which extends the FINCH method to both labeled and unlabeled data efficiently.
- The proposed SNC method has shown many benefits over previous semi-supervised clustering methods, including better pseudo label generation and class number estimation, and faster running time. These advantages of SNC make a non-trivial contribution to the community. - The ablation studies and the experiments on GCD benchmarks are comprehensive and well designed, and the results of the proposed method across many benchmarks are SOTA. - The overall presentation of the paper is nice to read and the organization of the paper is good. - The literature review of related works is comprehensive. ## Weaknesses - The proposed framework of joint contrastive representation learning is somewhat reasonable, but I think that such a framework may be very sensitive to noisy labels. Are there any insights into why such a framework could be robust to noisy labels? If not, such a framework may not be very general to all problems, especially for those cases where the initial accuracy is not very good. - There is still some confusion about the overall method. For instance, during which stages is the SNC utilized? Is it applied after each epoch following the joint representation learning, or is it employed just once in the initial step? - It is not very clear how the label assignment by SNC works. - Two figures in Fig. 4 are missing. Requested Changes: See weaknesses listed above. Broader Impact Concerns: No. ================================================== Metareview: Recommendation: Accept as is Comment: The authors incorporated the reviewer feedback in the manuscript already and the current version looks suitable for publication. ==================================================
# GCondNet: A Novel Method For Improving Neural Networks On Small High-Dimensional Tabular Data

Andrei Margeloiu *am2770@cam.ac.uk* Department of Computer Science and Technology University of Cambridge, UK Nikola Simidjievski *ns779@cam.ac.uk* Precision Breast Cancer Institute, Department of Oncology, University of Cambridge, UK Department of Computer Science and Technology, University of Cambridge, UK Pietro Liò *pl219@cam.ac.uk* Department of Computer Science and Technology University of Cambridge, UK Mateja Jamnik *mj201@cam.ac.uk* Department of Computer Science and Technology University of Cambridge, UK Reviewed on OpenReview: *https://openreview.net/forum?id=y0b0H1ndGQ*

## Abstract

Neural networks often struggle with high-dimensional but small sample-size tabular datasets. One reason is that current weight initialisation methods assume independence between weights, which can be problematic when there are insufficient samples to estimate the model's parameters accurately. In such small data scenarios, leveraging additional structures can improve the model's performance and training stability. To address this, we propose GCondNet, a general approach to enhance neural networks by leveraging implicit structures present in tabular data. We create a graph between samples for each data dimension, and utilise Graph Neural Networks (GNNs) to extract this implicit structure, and for conditioning the parameters of the first layer of an underlying predictor network. By creating many small graphs, GCondNet exploits the data's high-dimensionality, and thus improves the performance of an underlying predictor network. We demonstrate GCondNet's effectiveness on 12 real-world datasets, where it outperforms 14 standard and state-of-the-art methods. The results show that GCondNet is a versatile framework for injecting graph-regularisation into various types of neural networks, including MLPs and tabular Transformers.
Code is available at https://github.com/andreimargeloiu/GCondNet. ## 1 Introduction Tabular datasets are ubiquitous in scientific fields such as medicine (Meira et al., 2001; Balendra & Isaacs, 2018; Kelly & Semsarian, 2009), physics (Baldi et al., 2014; Kasieczka et al., 2021), and chemistry (Zhai et al., 2021; Keith et al., 2021). These datasets often have a limited number of samples but a large number of features for each sample. This is because collecting many samples is often costly or infeasible, but collecting many features for each sample is relatively easy. For example, in medicine (Schaefer et al., 2020; Yang et al., 2012; Gao et al., 2015; Iorio et al., 2016; Garnett et al., 2012; Bajwa et al., 2016; Curtis et al., 2012; Tomczak et al., 2015), clinical trials targeting rare diseases often enrol only a few hundred patients at most. Despite the small number of participants, it is common to gather extensive data on each individual, such as measuring thousands of gene expression patterns. This practice results in small-size datasets that are high-dimensional, with the number of features (D) greatly exceeding the number of samples (N). Making effective inferences from such datasets is vital for advancing research in scientific fields. When faced with high-dimensional tabular data, neural network models struggle to achieve strong performance (Liu et al., 2017; Feng & Simon, 2017), partly because they encounter increased degrees of freedom, which results in overfitting, particularly in scenarios involving small datasets. Despite transfer learning's success in image and language tasks (Tan et al., 2018), a general transfer learning protocol is lacking for tabular data (Borisov et al., 2022), and current methods assume shared features (Levin et al., 2023) or large upstream datasets (Wang & Sun, 2022; Nam et al., 2022), which is unsuitable for our scenarios. Consequently, we focus on improving training neural networks from scratch. 
Previous approaches for training models on small sample-size and high-dimensional data constrained the model's parameters to ensure that similar features have similar coefficients, as initially proposed in (Li & Li, 2008) for linear regression and later extended to neural networks (Ruiz et al., 2023). For applications in biomedical domains, such constraints can lead to more interpretable identification of genes (features) that are biologically relevant (Li & Li, 2008). However, these methods require access to external application-specific knowledge graphs (e.g., gene regulatory networks) to obtain feature similarities, which provide "explicit relationships" between features. But numerous tasks do not have access to such application-specific graphs. We aim to integrate a similar inductive bias, positing that performance is enhanced when similar features have similar coefficients. We accomplish this *without* relying on "explicit relationships" defined in external application-specific graphs.

We propose a novel method GCondNet (Graph-**Cond**itioned Networks) to enhance the performance of various neural network predictors, such as Multi-layer Perceptrons (MLPs). The key innovation of GCondNet lies in leveraging the "implicit relationships" between *samples* by performing "soft parameter-sharing" to constrain the model's parameters in a principled manner, thereby reducing overfitting. Prior work has shown that such relationships between samples can be beneficial (Fatemi et al., 2021; Kazi et al., 2022; Zhou et al., 2022). These methods, however, typically generate and operate with one graph between samples while relying on additional dataset-specific assumptions, such as the smoothness assumption (for an extended discussion see Section 4). In contrast, we leverage *sample-wise multiplex graphs*, a novel and general approach to identify and use these potential relationships between samples by constructing many graphs between samples, one for each feature.
We then use Graph Neural Networks (GNNs) to extract any implicit structure and condition the parameters of the first layer of an underlying predictor MLP network. Note that GCondNet still considers the samples as independent and identically distributed (IID) at both train-time and test-time, because the information from the graphs is encapsulated within the model parameters and is not used directly for prediction (see Section 2.2).

We introduce two similarity-based approaches for constructing the sample-wise multiplex graphs from any tabular dataset. Both approaches generate a graph for each feature in the dataset (resulting in D graphs), with each node representing a sample (totalling N nodes per graph). For instance, in a gene expression dataset, we create a unique graph of patients for each gene. Unlike other methods (Ruiz et al., 2023; Li & Li, 2008; Scherer et al., 2022) that require external knowledge for constructing the graphs, our graphs can be constructed from any tabular dataset. We also propose a decaying mechanism which improves the model's robustness when incorrect relationships between samples are specified.

The inductive bias of GCondNet lies in constraining the model's parameters to ensure similar features have similar coefficients at the beginning of training, and we show that our approach yields improved downstream performance and enhanced model robustness. One reason is that creating many small graphs effectively "transposes" the problem and makes neural network optimisation more effective, because we leverage the high-dimensionality of the data to our advantage. These graphs serve as a large training set for the GNN, which in turn computes the parameters of the MLP predictor. In addition, our approach models a different aspect of the problem - the structure extracted from the implicit relationships between samples - which we show serves as a regularisation mechanism for reducing overfitting.
Our contributions are summarised as follows:

1. We propose a novel method, GCondNet, for leveraging implicit relationships between samples into neural networks to improve predictive performance on small sample-size and high-dimensional tabular data. Our method is general and can be applied to any such tabular dataset, unlike other methods that rely on external application-specific knowledge graphs.
2. We validate GCondNet's effectiveness across 12 real-world biomedical datasets. We show that for such datasets, our method consistently outperforms an MLP with the same architecture and, in fact, outperforms all 14 state-of-the-art methods we evaluate.
3. We analyse GCondNet's inductive bias, showing that our proposed *sample-wise multiplex graphs* improve performance and serve as an additional regularisation mechanism. Lastly, we demonstrate that GCondNet is robust to various graph construction methods, which might also include incorrect relationships.

Figure 1: GCondNet is a general method for leveraging *implicit* relationships between samples to improve the performance of any predictor network with a linear first layer, such as a standard MLP, on tabular data. (A) Given a tabular dataset X ∈ R N×D, we generate a graph Gj for each feature in the dataset (resulting in D graphs), with each node representing a sample (totalling N nodes per graph). (B) The resulting graphs are passed through a shared Graph Neural Network (GNN), which extracts graph embeddings w(j) ∈ R K from each graph Gj. We concatenate the graph embeddings into a matrix WGNN = [w(1), ..., w(D)]. (C) We use WGNN to parameterise the first layer W[1] MLP of the MLP predictor as a convex combination W[1] MLP = αWGNN + (1 − α)Wscratch, where Wscratch is initialised to zero.

## 2 Method

**Problem formulation.**
We study tabular classification problems (although the method can be directly applied to regression too), where the data matrix X := [x(1), ..., x(N)]⊤ ∈ R N×D comprises N samples x(i) ∈ R D of dimension D, and the labels are y := [y1, ..., yN]⊤.

**Method overview.** Our method applies to any network with a linear layer connected to the input features, and for illustration, we assume an MLP predictor network. Figure 1 presents our proposed method, which has two components: (i) a predictor network (e.g., MLP) that takes as input a sample x(i) ∈ R D and outputs the predicted label yi; and (ii) a Graph Neural Network (GNN) that takes as input *fixed* graphs (D of them) and generates the parameters W[1] MLP for the first input layer of the predictor MLP network. Note that the GNN is shared across graphs. Since these graphs are fixed for all inputs x(i), GCondNet maintains the dimensionality of the input space (which remains D).

Algorithm 1 Computing WGNN.
1: **for** each feature j = 1, 2, . . . , D **do**
2:   node-embeddings = GNN(Vj, Ej) ▷ Vj represents the nodes, and Ej represents the edges of the j-th graph
3:   w(j) = fagg(node-embeddings) ▷ Aggregate all node embeddings to obtain the graph embedding w(j) ∈ R K
4: **end for**
5: WGNN = [w(1), . . . , w(D)] ▷ Concatenate the graph embeddings

In particular, we parameterise the MLP's first layer W[1] MLP as a convex combination of a weight matrix WGNN generated by the GNN (by extracting the implicit structure between samples), and a weight matrix Wscratch initialised to zero:

$$W_{\mathrm{MLP}}^{[1]}:=\alpha W_{\mathrm{GNN}}+(1-\alpha)W_{\mathrm{scratch}}\qquad(1)$$

The mixing coefficient α determines how much the model should be conditioned on the relationships between samples learnt by the GNN. We schedule α to linearly decay 1 → 0 over nα training steps, as further motivated in Section 2.2.
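For illustration, Algorithm 1 can be sketched in plain NumPy. The `gnn_stub` below (one round of neighbour averaging followed by a linear projection), the `proj` matrix, and the toy graphs are illustrative placeholders of our own, not the trained GCN used in GCondNet:

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_stub(node_features: np.ndarray, edges, proj: np.ndarray) -> np.ndarray:
    """Placeholder for the shared GNN: one round of neighbour averaging
    followed by a linear projection to K-dimensional node embeddings."""
    agg = node_features.copy()
    for i, j in edges:              # aggregate each node's neighbour features
        agg[i] += node_features[j]
    return agg @ proj               # (N, K) node embeddings

def compute_w_gnn(graphs, proj: np.ndarray) -> np.ndarray:
    """Algorithm 1: graph embedding w(j) = global average of node embeddings,
    concatenated column-wise into WGNN of shape (K, D)."""
    cols = []
    for node_features, edges in graphs:
        node_emb = gnn_stub(node_features, edges, proj)
        cols.append(node_emb.mean(axis=0))   # f_agg = global average pooling
    return np.stack(cols, axis=1)

N, D, K = 4, 3, 2
proj = rng.standard_normal((N, K))           # stands in for learnable GNN weights
# One toy graph per feature: one-hot node features (diagonal) plus a tiny edge list.
graphs = [(np.diag(rng.standard_normal(N)), [(0, 1), (1, 0)]) for _ in range(D)]
w_gnn = compute_w_gnn(graphs, proj)
assert w_gnn.shape == (K, D)                 # one K-dim embedding per feature graph
```

The graph embedding size K is independent of the number of nodes, so WGNN has a fixed shape regardless of how many samples each graph contains.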
We found that GCondNet is robust to nα, as supported by the statistical tests in Appendix F.1.

**Computing WGNN.** We train the GNN model concurrently with the MLP predictor to compute the weight matrix WGNN. To do this, we use the training split of X to generate a graph Gj = (Vj, Ej) for each feature in the dataset (resulting in D graphs), with each node representing a sample (totalling N nodes per graph). For example, in a gene expression dataset, one graph of patients is created for each gene. All graphs are simultaneously passed to the GNN and are *fixed* during the training process. This way, we take advantage of high-dimensional data by creating many graphs to train the GNN. We describe the graph construction in Section 2.1 and investigate the impact of this choice in Section 3.2. At each training iteration, we use the GNN to extract graph embeddings from these graphs, as presented in Algorithm 1. For each of the D graphs, we first apply the GNN to obtain node embeddings of size K for all nodes. We then compute graph embeddings w(j) ∈ R K by using a permutation invariant function fagg to aggregate the node embeddings. Thus, the graph embeddings are also of size K, which is independent of the number of nodes in the graphs. These embeddings are then concatenated horizontally to form the weight matrix WGNN = [w(1), w(2), ..., w(D)]. Finally, we use the resulting matrix WGNN to parameterise the first layer of the underlying MLP predictor network that outputs the final prediction, as shown in Equation 1. Appendix A presents the complete pseudocode for training GCondNet.

**Test-time inference.** We emphasise that the GNN and the associated graphs are employed exclusively during the training phase, becoming obsolete once the mixing coefficient α reaches zero after nα training steps. The predictor MLP retains its final weights upon training completion, rendering the GNN and graphs unnecessary for test inference.
Test input samples are exclusively processed by the predictor MLP, resulting in a model size and inference speed identical to a standard MLP.

## 2.1 Sample-Wise Multiplex Graphs

We propose a novel and general graph construction method from tabular data, creating a multiplex graph G = {G1, ..., GD}, where each graph layer Gj = (Vj, Ej) represents the relations across feature j and is constructed using *only* the values X:,j of that feature. This enables the use of simple distance metrics between nodes and eliminates the need to work in a high-dimensional space, where distances can be inaccurate. Note that the graph construction phase is typically fast, is performed once per dataset, and adds negligible computational overhead.

**Node features.** The nodes Vj of each graph represent the N training samples. The node features are one-hot encoded vectors, with the feature value for a sample located in the corresponding position in the one-hot encoding. For instance, if the values of feature j of three training samples x(1), x(2), x(3) are $X_{:,j} = [x_j^{(1)}, x_j^{(2)}, x_j^{(3)}]$, then the first node's features would be $[x_j^{(1)}, 0, 0]$, the second node's features would be $[0, x_j^{(2)}, 0]$, and the third node's features would be $[0, 0, x_j^{(3)}]$.

**Edges.** We propose two similarity-based methods for constructing the edges between samples from tabular data, which assume that similar samples should be connected. To measure sample similarity, we calculate the ℓ1 distances between the feature values X:,j. Using the earlier example, if the feature values used to create one graph are $X_{:,j} = [x_j^{(1)}, x_j^{(2)}, x_j^{(3)}]$, then the distances between nodes (based on which we create edges) are $\|x_j^{(1)} - x_j^{(2)}\|_1$, $\|x_j^{(1)} - x_j^{(3)}\|_1$, and $\|x_j^{(2)} - x_j^{(3)}\|_1$. While in this paper we use the ℓ1 distance, other suitable distance functions can also be applied. The two types of edges are:

(i) **KNN graphs** connect each node with the closest k neighbours (we set k = 5 in this paper).
The memory complexity is O(D · N · K), enabling GCondNet to scale linearly memory-wise w.r.t. both the sample size N and the number of features D. The time complexity for KNN graph construction is O(D · N log N + D · N · K), with the first term for sorting features and the second for creating edges.

(ii) **Sparse Relative Distance (SRD) graphs** connect a sample to all samples with a feature value within a specified distance. This process creates a network topology where nodes with common feature values have more connections, and we use an accept-reject step to sparsify the graph (all details are included in Appendix B). The time complexity mirrors that of the KNN graphs. For smaller datasets, the number of edges in the SRD graphs is comparable to those of KNN graphs. In larger datasets, KNN graphs are likely more scalable due to direct control over the number of edges via K.

## 2.2 Rationale For Model Architecture

The inductive bias of GCondNet ensures that similar features have similar weights in the first layer of the NN at the beginning of training. Uniquely to GCondNet, the feature similarity is uncovered by training GNNs end-to-end on graphs defining the implicit relationships between samples across each feature. Thus, our approach ultimately learns the feature similarity by looking at the relationships between samples. For example, if features i and j have similar values across samples, they define similar graphs, leading to similar graph embeddings w(i) and w(j). These embeddings correspond to the first layer weights W[1] MLP in the neural network.

GCondNet is appropriate when D ≫ N because it introduces a suitable inductive bias that enhances model optimisation, as we demonstrate in Section 3. On small sample-size and high-dimensional data, conventional neural approaches (such as an MLP) tend to exhibit unstable training behaviour and/or converge poorly; one reason is a large degree of freedom for the small datasets.
This happens because: (i) the number of parameters in the first layer is proportional to the number of features; and (ii) modern weight initialisation techniques (Glorot & Bengio, 2010; He et al., 2015) assume independence between the parameters within a layer. Although the independence assumption may work well with large datasets, as it allows for flexibility, it can be problematic when there are too few samples to estimate the model's parameters accurately (as we show in Section 3.2). GCondNet is designed to mitigate these training instabilities: (i) by constraining the model's degrees of freedom via an additional GNN that outputs the model's first layer, which includes most of its learning parameters; and (ii) by providing a more principled weight initialisation on the model's first layer (because at the start we have W[1] MLP = WGNN). We parameterise the first layer due to its large number of parameters and propensity to overfit. On high-dimensional tabular data, an MLP's first layer holds most parameters; for instance, on a dataset of 20,000 features, the first layer has 98% of the parameters of an MLP with a hidden size of 100.

GCondNet still considers the samples as IID at both train-time and test-time. Recall that samples are IID if they are independent, conditioned on the model's parameters θ (Murphy, 2022), so that

$$p\left(y_{1},y_{2}\mid x^{(1)},x^{(2)},\theta\right)=p\left(y_{1}\mid x^{(1)},\theta\right)\cdot p\left(y_{2}\mid x^{(2)},\theta\right).$$

Note that unlike distance-based models (e.g., KNN), our graphs are not used directly to make predictions. In GCondNet, to make a prediction, input samples are exclusively processed by the predictor MLP, which uses the same model parameters θ across all samples. Because all information extracted from our sample-wise graphs is encapsulated within the model parameters θ, the above IID equation holds for GCondNet.

In terms of graph construction, the conventional approach would be to have one large graph where nodes are features and w(j) are node embeddings.
In contrast, we generate many small graphs and compute w(j) as graph embeddings. Our approach offers several advantages: (i) Having multiple graphs "transposes" the problem and uses the high-dimensionality of the data to our advantage by generating many small graphs which serve as a large training set for the GNN. (ii) It allows using simple distance metrics because the nodes contain only scalar values; in contrast, taking distances between features would require working in a high-dimensional space and encountering the curse of dimensionality. (iii) The computation is efficient because the graphs are small due to small-size datasets. (iv) Flexibility, as it can incorporate external knowledge graphs, if available, by forming hyper-graphs between similar feature graphs. Decaying the mixing coefficient α introduces flexibility in the learning process by enabling the model to start training initialised with the GNN-extracted structure and later adjust the weights more autonomously as training advances. Since the true relationships between samples are unknown, the GNN-extracted structure may be noisy or suboptimal for parameterising the model. At the start, α = 1 and the first layer is entirely determined by the GNN (W[1] MLP = WGNN). After α becomes 0, the model trains as a standard MLP (the GNN is disabled), but its parameters have been impacted by our proposed method and will reach a distinct minimum (evidenced in Section 3.1 by GCondNet consistently outperforming an equivalent MLP). In contrast to our decaying of the mixing coefficient α, maintaining α fixed (similar to PLATO's (Ruiz et al., 2023) inductive bias) leads to unstable training (see our experiments in Section 3.2). Moreover, if α was fixed, it would need optimisation like other hyperparameters, while by decaying α we avoid this time-consuming tuning. 
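The decaying mixing described above amounts to the following minimal NumPy sketch of Equation (1) with a linear α schedule; the toy shapes and names (`alpha_schedule`, `first_layer`) are our own illustration, not the training code:

```python
import numpy as np

def alpha_schedule(step: int, n_alpha: int) -> float:
    """Linear decay of the mixing coefficient from 1 to 0 over n_alpha steps."""
    return max(0.0, 1.0 - step / n_alpha)

def first_layer(w_gnn: np.ndarray, w_scratch: np.ndarray, alpha: float) -> np.ndarray:
    """Equation (1): W[1]_MLP = alpha * W_GNN + (1 - alpha) * W_scratch."""
    return alpha * w_gnn + (1.0 - alpha) * w_scratch

K, D, n_alpha = 3, 5, 200
w_gnn = np.random.randn(K, D)    # GNN-extracted structure
w_scratch = np.zeros((K, D))     # initialised to zero, trained freely

# At step 0 the first layer is fully conditioned on the GNN...
assert np.allclose(first_layer(w_gnn, w_scratch, alpha_schedule(0, n_alpha)), w_gnn)
# ...and after n_alpha steps the GNN is disabled and the model trains as a plain MLP.
assert alpha_schedule(n_alpha, n_alpha) == 0.0
```

Both `w_gnn` and `w_scratch` receive gradients through the convex combination while α > 0, after which only the MLP weights continue to be updated.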
## 3 Experiments

Our central hypothesis is that exploiting the implicit sample-wise relationships by performing soft parameter-sharing improves the performance of neural network predictors. First, we evaluate our model against 14 benchmark models (Section 3.1). We then analyse the inductive bias of our method (Section 3.2), its effect on optimisation, and GCondNet's robustness to different graph construction methods.

**Datasets.** We focus on classification tasks using small-sample and high-dimensional datasets and consider 12 real-world tabular biomedical datasets ranging from 72 to 200 samples and 3,312 to 22,283 features. We specifically keep the datasets small to mimic practical scenarios where data is limited. See Appendix C for details on the datasets.

**Evaluation.** We evaluate all models using a 5-fold cross-validation repeated 5 times, resulting in 25 runs per model. We report the mean ± std of the test balanced accuracy averaged across all 25 runs. To summarise the results in the manuscript, we rank the methods by their predictive performance. For each dataset, methods are ranked from 1 (the best) to 12 (the worst) based on their mean accuracy. If two methods have the same accuracy (rounded to two decimals), they obtain the same per-dataset rank. The final rank for each method is the average of their per-dataset ranks, which may be a non-integer.

**GCondNet architecture and settings.** GCondNet uses an MLP predictor model (as its backbone) with three layers of 100, 100, and 10 neurons. The GNN within GCondNet is a two-layer Graph Convolutional Network (GCN) (Kipf & Welling, 2017). The permutation invariant function fagg for computing graph embeddings is global average pooling¹. We decay the mixing coefficient α over nα = 200 training steps, although we found that GCondNet is robust to the number of steps nα, as supported by the statistical tests in Appendix F.1. We present the results of GCondNet with both KNN and SRD graphs.
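The per-dataset ranking and rank averaging used in the evaluation protocol can be sketched as follows. This is a small illustration with three methods, using accuracy values taken from the "allaml" and "cll" columns of Table 1; we assume dense ranking for ties:

```python
def per_dataset_ranks(accuracies: dict[str, float]) -> dict[str, float]:
    """Rank methods from 1 (best) downwards by mean accuracy; methods with the
    same accuracy after rounding to two decimals share the same rank."""
    rounded = {m: round(a, 2) for m, a in accuracies.items()}
    distinct = sorted(set(rounded.values()), reverse=True)
    rank_of = {acc: r + 1 for r, acc in enumerate(distinct)}
    return {m: rank_of[rounded[m]] for m in accuracies}

def average_rank(per_dataset: list[dict[str, float]]) -> dict[str, float]:
    """Final rank of each method = average of its per-dataset ranks."""
    methods = per_dataset[0].keys()
    return {m: sum(d[m] for d in per_dataset) / len(per_dataset) for m in methods}

# Mean balanced accuracies on two datasets (values from Table 1).
ranks_allaml = per_dataset_ranks({"MLP": 91.30, "WPFS": 96.42, "GCondNet": 96.18})
ranks_cll = per_dataset_ranks({"MLP": 78.30, "WPFS": 79.14, "GCondNet": 80.70})
final = average_rank([ranks_allaml, ranks_cll])
# GCondNet: ranks 2 and 1 -> 1.5; WPFS: ranks 1 and 2 -> 1.5; MLP: 3 and 3 -> 3.0
assert final == {"MLP": 3.0, "WPFS": 1.5, "GCondNet": 1.5}
```

Averaging per-dataset ranks rather than raw accuracies keeps the summary comparable across datasets with very different difficulty levels, which is why the final rank may be a non-integer.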
We provide complete reproducibility details for all methods in Appendix D, and the training times for GCondNet and other methods are in Appendix E.

**Benchmark methods.** We evaluate 14 benchmark models, encompassing a standard MLP and modern methods typically employed for small sample-size and high-dimensional datasets, such as DietNetworks (Romero et al., 2017), FsNet (Singh & Yamada, 2023), SPINN (Feng & Simon, 2017), DNP (Liu et al., 2017), and WPFS (Margeloiu et al., 2023), all of which use the same architecture as GCondNet for a fair comparison. We also include contemporary neural architectures for tabular data, like TabNet (Arık & Pfister, 2021), TabTransformer (Huang et al., 2020), Concrete Autoencoders (CAE) (Balın et al., 2019), and LassoNet² (Lemhadri et al., 2021), and standard methods such as Random Forest (Breiman, 2001) and LightGBM (Ke et al., 2017). We also compare the performance of GCondNet with GNNs on tabular data where relationships between samples are not explicitly provided. In particular, we evaluate Graph Convolutional Network (GCN) (Kipf & Welling, 2017) and Graph Attention Network v2 (GATv2) (Brody et al., 2022). To ensure fairness, we employ a similar setup to GCondNet, constructing a KNN-based graph (k = 5) in which each node represents a sample connected to its five nearest samples based on cosine similarity, which is well-suited for high-dimensional data. Both GCN and GATv2 are trained in a transductive setting, incorporating test sample edges during training while masking nodes to prevent data leakage.

¹Using hierarchical pooling (Ying et al., 2018; Ranjan et al., 2020) led to unstable training and significantly poorer performance.
²We discuss LassoNet training instabilities in Appendix D.

Table 1: **Overall, GCondNet outperforms other benchmark models.** We show the classification performance of GCondNet with KNN and SRD graphs and 14 benchmark models on 12 real-world datasets. N/D represents the per-dataset ratio of samples to features. We report the mean ± std of the test balanced accuracy averaged over the 25 cross-validation runs. We highlight the First, **Second** and **Third** ranking accuracy for each dataset. To aggregate the results, we also compute each method's average rank across datasets, where a higher rank implies higher accuracy in general. Overall, GCondNet ranks best and generally outperforms all other benchmark methods.

| Method | gli | smk | allaml | cll | glioma | prostate | toxicity | tcga-survival | tcga-tumor | meta-dr | meta-p50 | lung | Avg. Rank |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| N/D | 0.004 | 0.009 | 0.01 | 0.01 | 0.011 | 0.017 | 0.03 | 0.046 | 0.046 | 0.048 | 0.048 | 0.059 | |
| DietNetworks | 76.42±13.2 | 62.71±9.4 | 92.00±8.4 | 68.84±9.2 | 68.00±14.8 | 81.71±11.0 | 82.13±7.4 | 53.62±5.5 | 46.69±7.1 | 56.98±8.7 | 95.02±4.8 | 90.43±6.2 | 8.92 |
| FsNet | 74.52±11.7 | 56.27±9.2 | 78.00±12.9 | 66.38±9.2 | 53.17±12.9 | 84.74±9.8 | 60.26±8.1 | 53.83±7.9 | 45.94±9.8 | 56.92±10.1 | 83.86±8.2 | 91.75±3.0 | 11.17 |
| DNP | 83.17±12.1 | 66.61±8.4 | 96.18±5.7 | 85.13±5.5 | 75.00±12.8 | 88.71±6.8 | 93.49±6.2 | 58.14±8.2 | 47.53±8.7 | 55.79±7.1 | 93.56±5.5 | 92.81±6.7 | 5.58 |
| SPINN | 83.39±9.8 | 65.91±7.6 | 96.78±6.2 | 85.35±5.5 | 75.00±14.8 | 90.02±6.8 | 93.50±4.9 | 57.70±7.1 | 45.92±8.5 | 56.14±7.2 | 93.56±5.5 | 94.76±4.4 | 5.00 |
| WPFS | 83.86±9.1 | 66.89±6.2 | 96.42±4.2 | 79.14±4.5 | 73.83±16.5 | 89.15±6.7 | 88.29±5.3 | 59.54±6.9 | 55.91±8.6 | 59.05±8.6 | 95.96±4.1 | 94.83±4.2 | 3.42 |
| TabNet | 64.54±12.9 | 61.16±9.2 | 71.64±17.7 | 50.87±13.8 | 50.00±16.9 | 65.75±17.7 | 41.38±9.6 | 49.08±9.3 | 39.57±11.6 | 53.19±9.4 | 81.27±9.7 | 75.11±10.2 | 13.58 |
| TabTransformer | 78.82±14.1 | 64.00±9.2 | 88.38±8.6 | 76.81±6.8 | 63.50±15.6 | 85.96±11.5 | 87.67±6.1 | 56.91±5.6 | 40.70±6.9 | 52.49±9.0 | 93.82±4.7 | 94.03±4.7 | 8.83 |
| CAE | 74.18±11.7 | 59.96±11.0 | 89.80±9.2 | 71.94±13.4 | 67.83±17.6 | 87.60±7.8 | 60.36±11.3 | 59.54±8.3 | 40.69±7.4 | 57.35±9.4 | 95.78±3.6 | 85.00±5.0 | 9.17 |
| LassoNet | 53.91±10.9 | 51.04±8.6 | 50.80±12.9 | 30.63±8.7 | 29.17±11.8 | 54.78±10.6 | 26.67±8.7 | 46.08±9.2 | 33.49±7.5 | 48.88±5.7 | 48.41±10.8 | 25.11±9.8 | 15.00 |
| MLP | 77.72±15.3 | 64.42±8.4 | 91.30±6.7 | 78.30±9.0 | 73.00±14.9 | 88.76±5.5 | 93.21±6.1 | 56.28±6.7 | 48.19±7.8 | 59.56±5.5 | 94.31±5.4 | 94.20±4.9 | 6.00 |
| Random Forest | 81.15±8.5 | 69.84±4.6 | 96.80±5.6 | 76.44±10.1 | 74.17±10.6 | 90.35±8.2 | 80.99±4.5 | 66.04±5.2 | 47.12±7.0 | 52.98±5.4 | 89.39±7.2 | 88.14±5.2 | 6.42 |
| LightGBM | 80.79±7.6 | 70.07±5.8 | 95.36±5.2 | 74.22±14.6 | 75.50±11.9 | 91.91±4.8 | 81.26±4.2 | 59.46±5.8 | 44.99±9.3 | 57.69±8.6 | 93.42±7.2 | 89.79±4.4 | 6.08 |
| GCN | 84.09±9.4 | 65.63±8.0 | 80.83±10.8 | 72.00±8.4 | 66.23±14.4 | 82.60±12.5 | 76.13±7.0 | 58.31±5.8 | 51.01±8.2 | 58.29±7.4 | 91.13±8.7 | 93.30±4.6 | 7.75 |
| GATv2 | 73.57±12.4 | 66.06±8.2 | 71.36±11.6 | 57.74±14.1 | 57.67±15.1 | 83.23±10.6 | 76.65±11.2 | 53.60±6.9 | 45.45±9.3 | 54.71±7.1 | 86.96±8.2 | 93.33±6.2 | 10.92 |
| GCondNet (KNN) | 85.02±9.0 | 65.92±8.7 | 96.18±4.9 | 80.70±5.5 | 76.67±12.9 | 90.38±5.6 | 94.33±4.1 | 58.62±7.0 | 51.70±8.8 | 59.34±8.9 | 95.96±4.2 | 95.20±3.8 | 1.92 |
| GCondNet (SRD) | 86.36±8.0 | 68.08±7.3 | 97.56±4.1 | 79.92±6.2 | 77.67±10.5 | 89.33±7.6 | 95.25±4.5 | 56.36±9.4 | 50.82±9.5 | 58.24±6.4 | 96.13±4.0 | 96.64±3.1 | |

## 3.1 Overall Classification Performance

Our experiments in Table 1 show that GCondNet outperforms all 14 benchmark models on average, achieving a better overall rank across 12 real-world datasets, suggesting its effectiveness across diverse datasets.
GCondNet is followed by WPFS, a specialised method for small sample-size and high-dimensional datasets, although GCondNet consistently outperforms it on 9 out of 12 tasks, providing improvements of up to 7%. Standard methods like LightGBM and Random Forest are competitive and perform well on some datasets, but their relative performance is sometimes highly dataset-dependent. For instance, in the case of the "tcga-survival" dataset, GCondNet (with an MLP backbone) enhances the MLP's performance by 2%, but it still lags behind top methods like Random Forest, WPFS, and CAE, which exhibit an additional 1-7% better performance. This suggests that the additional feature selection capabilities of these approaches can lead to further improvements. While in this work we did not analyse this behaviour in detail, GCondNet, in principle, can readily handle this.

GCondNet consistently outperforms an MLP with the same architecture, showing substantial improvements. The most notable increases of 3-8% are observed in the five most extreme datasets, which have the smallest N/D ratios. On those extreme datasets, GCondNet also improves stability compared to the standalone MLP, reducing the average standard deviation by over 3.5% when using the SRD version and 2.5% when using the KNN version. As the N/D ratio increases, GCondNet continues to deliver superior performance to the baseline MLP, albeit with similar stability. These results underscore the role of GCondNet's inductive bias in mitigating overfitting and enhancing stability, particularly in datasets with extreme N/D ratios.

We compare against GNNs on tabular data where relationships between samples are not explicitly provided. Despite GCN's and GATv2's advantage of being trained in a transductive setting, GCondNet outperforms both methods across tasks. The performance gap ranges between 19-25% on three tasks and more than 5% on four other tasks.
This indicates that models heavily reliant on *latent* structure present in tabular data, such as GNNs, are particularly sensitive to misspecifications during model construction. In contrast, GCondNet demonstrates resilience against such misspecifications, which we analyse in the following section.

The results also show that GCondNet outperforms other methods specialised for this data scenario by a large margin, such as DietNetworks, FsNet, SPINN, DNP, and more complex neural architectures for tabular data such as TabNet, TabTransformer, CAE and LassoNet. This finding aligns with recent research (Kadra et al., 2021), suggesting that well-regularised MLPs are more effective at handling tabular data than intricate architectures. Because our MLP baseline was already well-regularised, we attribute GCondNet's performance improvement to its inductive bias, which serves as an additional regularisation effect, as we further investigate in the next section.

Finally, we find that GCondNet consistently performs well with both KNN and SRD graphs, ranking high across various datasets. However, no clear distinction emerges between the two graph construction methods, and in the next section, we further analyse GCondNet's robustness to this choice.

## 3.2 Analysing The Inductive Bias Of GCondNet

Having found that GCondNet excels on small-size and high-dimensional tasks, we analyse its inductive bias and robustness to different construction methods.

**GCondNet outperforms other initialisation schemes that do not use GNNs.** To understand the effect of leveraging the latent relationships between samples to parameterise neural networks (as GCondNet does), we investigate if other weight initialisation methods can imbue a similar inductive bias. To the best of our knowledge, all such existing methods necessitate external knowledge, like in (Li & Li, 2008; Ruiz et al., 2023).
Consequently, *we propose* three novel weight initialisation schemes incorporating a similar inductive bias as GCondNet, making similar features have similar weights. These schemes generate feature embeddings e(i), which are then utilised to initialise the MLP's first layer W[1] MLP = [e(1), e(2), ..., e(D)]. The feature embeddings are computed using Non-negative matrix factorisation (NMF), Principal Component Analysis (PCA), and the Weisfeiler-Lehman (WL) algorithm (Weisfeiler & Leman, 1968). The latter is a parameter-free method to compute graph embeddings, often used to check whether two graphs are isomorphic. We apply the WL algorithm to the same SRD graphs as used for GCondNet and use the j-th graph embedding as the feature embedding e(j)WL. See Appendix F.3 for more details on these initialisation methods. We follow (Grinsztajn et al., 2022) and compute the normalised test balanced accuracy across all 12 datasets and 25 runs. We include the absolute accuracy numbers in Appendix F.3.

Figure 2: The inductive bias of GCondNet robustly improves performance and cannot be replicated without GNNs. We compute the normalised test balanced accuracy across all 12 datasets and 25 runs and report the relative improvement over a baseline MLP. First, we find that GCondNet is robust across various graph construction methods and provides consistent improvement over an equivalent MLP. Second, to assess the usefulness of the GNNs, we propose three weight initialisation methods designed to emulate GCondNet's inductive biases but without employing GNNs. The results show that GCondNet outperforms such methods, highlighting the effectiveness of the GNN-extracted latent structure.

Figure 2 shows that the specialised initialisation schemes (NMF, PCA, WL) outperform a standard MLP, and we observe that GCondNet with both SRD and KNN graphs further improves over these initialisations.
These results suggest that initialisation methods that incorporate appropriate inductive biases, as in GCondNet, can outperform popular initialisation methods that assume the weights should be independent at initialisation (Glorot & Bengio, 2010; He et al., 2015), an assumption which can lead to overfitting on small datasets.

Figure 3: **GCondNet reduces overfitting.** The impact of varying the mixing coefficient α is illustrated through the training and validation loss curves (averaged over 25 runs) on 'toxicity'. We train GCondNet with linearly *decaying* α, along with modified versions with *fixed* α. Two observations are notable: (i) GCondNet exhibits less overfitting (evident from the converging validation loss) compared to an MLP (α = 0), which overfits at the 4,000th iteration; (ii) decaying α enhances the training stability while improving the test-time accuracy by at least 2%.

Figure 4: GCondNet is versatile and can enhance various models beyond MLPs. When applied to TabTransformer, GCondNet consistently improves performance by up to 14%.

The advantage of using GNNs becomes more evident when comparing the performance of GCondNet, which incorporates a learnable GNN, to the Weisfeiler-Lehman method, which is a parameter-free method. Although both methods are applied on the same SRD graphs, GCondNet's 7% higher performance underscores the crucial role of *training* GNNs to distil structure from the sample-wise relationships, exceeding the capabilities of other methods.

**GCondNet is robust to different graph construction methods.** As GCondNet is general and does not rely on knowing the ground-truth relationship between samples, we analyse its robustness to the user-defined method for defining such relationships. In addition to the KNN and SRD graphs from Section 2.1, we also consider graphs with random edges between samples (called "RandEdge" graphs).
Specifically, we generate the RandEdge graphs with graph statistics similar to the SRD graphs but with random edges (full details for creating RandEdge graphs are in Appendix F.3). We train GCondNet on 25 different data splits and, for each split, sample RandEdge graphs five times, resulting in 125 models trained on RandEdge graphs. We analyse two distinct facets of robustness. First, the *relative performance* compared to other benchmark models. We find that, generally, GCondNet with any of the KNN, SRD, or RandEdge graphs outperforms all 14 benchmark baselines from Table 1 and achieves a higher average rank. This suggests that GCondNet is robust to different graph-creation methods and maintains stable relative performance compared to other benchmark methods, even when the graphs are possibly misspecified. Next, we analyse the *absolute performance*, i.e., the numerical value of the test accuracy. As expected, we find that RandEdge graphs are suboptimal (see Figure 2), and GCondNet performs better with the more principled KNN or SRD graphs, which define similarity-based edges between samples. Nonetheless, the three graph types exhibit similar absolute performance according to statistical tests (see Appendix F.3), making GCondNet resilient to misspecifications during graph construction. Moreover, the optimal graph construction method is task-dependent: SRD excels in seven datasets, KNN in another four, and RandEdge graphs in one. For instance, even with identical input data X (as in 'tcga-survival' or 'tcga-tumor'), and thus identical graphs, the optimal graph construction method can differ based on the prediction task. We consider the lack of a universally optimal graph construction method a limitation of this work.
However, devising an optimal graph construction method is non-trivial, as (i) different tasks rely on exploiting different modelling assumptions and structures in the data; and (ii) the GNN's oversmoothing issue (Chen et al., 2020) can lead to computing just an "average" of the feature values. Lastly, we highlight that GCondNet is robust to the number of steps nα, as supported by the statistical tests in Appendix F.1.

GCondNet's inductive bias serves as a regularisation mechanism. To isolate GCondNet's inductive bias, which ensures that similar features have similar weights at *the beginning* of training, we train two versions of GCondNet: (i) with *decaying* α, and (ii) a modified version with a *fixed* mixing coefficient α ∈ {0, 0.2, 0.4, 0.6, 0.8, 1} throughout the training process. As α → 0, the model becomes equivalent to an MLP, and as α → 1, the first layer is conditioned on the GNN-extracted structure. We find that incorporating structure into the model serves as a regularisation mechanism and can help prevent overfitting. Figure 3 shows the training and validation loss curves for different α values. An MLP (equivalent to α = 0) begins overfitting at around the 4,000th iteration, as shown by the inflexion point in the validation loss. In contrast, all models incorporating the structure between samples (i.e., α > 0) avoid this issue and attain better validation loss. The optimisation perspective further motivates decaying α rather than using a fixed value. Firstly, a fixed α during training can lead to instabilities, such as high variance in the training loss. Secondly, a fixed α results in a test-time performance drop: on 'toxicity' (presented in Figure 3), fixed α ∈ {0, 0.2, 0.4, 0.6, 0.8} yields accuracies between 91.33% and 93.08% (with no particular trend), at least 2% below the 95.25% achieved by the decaying variant.
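The decaying-α mechanism can be sketched as follows. The linear 1 → 0 schedule follows the text; the convex combination of a GNN-conditioned weight matrix with freely learned weights is our assumption about how the mixing is realised, and the function names are ours.

```python
import numpy as np

def alpha_at(step, n_alpha):
    """Mixing coefficient, decayed linearly from 1 to 0 over n_alpha steps
    and clamped at 0 afterwards (so the model ends up as a plain MLP)."""
    return max(0.0, 1.0 - step / n_alpha)

def first_layer_weights(W_gnn, W_free, alpha):
    """alpha = 1: first layer fully conditioned on the GNN-extracted structure;
    alpha = 0: the freely learned MLP weights are used unchanged."""
    return alpha * W_gnn + (1.0 - alpha) * W_free
```

A fixed α simply skips the schedule and passes the same coefficient at every step.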
Training with a fixed α = 1 results in a lower accuracy of 84.24%. We posit this occurs because the model is overly constrained on potentially incorrect graphs without sufficient learning capacity for the other weights. By decaying α from 1 → 0, the model gains flexibility, achieving better generalisation and increased stability.

Extension to Transformers. We highlight that GCondNet is a general framework for injecting graph-regularisation into various types of neural networks, and it can readily be applied to architectures beyond an MLP. As a proof-of-principle, we apply GCondNet to TabTransformer (Huang et al., 2020), and the results in Figure 4 show consistent performance improvements of up to 14% (averaged over 25 runs).

GCondNet is effective across various sample-to-feature ratios N/D. We compare the performance of GCondNet (with an MLP backbone and KNN graphs) to an identical MLP baseline, with both models trained under the same conditions. We selected four datasets ("allaml", "cll", "gli", "glioma") to cover a broad range of N/D ratios, keeping the per-dataset number of samples N constant and adjusting the number of features D accordingly; for instance, a dataset with N = 100 samples had D reduced from 10,000 to 20. We split the feature sets by sequentially removing features, repeating the process five times, and training the MLP and GCondNet with identical features. Full details of the setup and numerical results are in Appendix G.2. Figure 5 shows that GCondNet outperforms a baseline MLP by up to 10% (in relative terms) across various sample-to-feature ratios N/D, indicating that GCondNet can also be effective for small-dimensional data. Moreover, GCondNet demonstrates greater stability and lower variance in performance, even with reduced feature sets.
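The sequential feature-removal protocol used in the N/D study can be sketched as follows; the fixed removal order and the subset sizes here are illustrative assumptions, and the function name is ours.

```python
def nested_feature_subsets(n_features, sizes):
    """Build nested feature-index subsets by sequentially removing features
    from the end of a fixed ordering, so every smaller subset is contained
    in the larger ones and both models are trained on identical features."""
    order = list(range(n_features))
    return {s: order[:s] for s in sorted(sizes, reverse=True)}

# e.g. reduce D from 10,000 down to 20 while keeping N fixed
subsets = nested_feature_subsets(10_000, [10_000, 2_000, 400, 80, 20])
```

Because the subsets are nested, any accuracy change between two D values reflects the removed features rather than a resampled feature set.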
Figure 5: Improvement of GCondNet (with an MLP backbone) over an equivalent MLP baseline. This figure shows the relative increase in test balanced accuracy of GCondNet compared to the MLP baseline, averaged across datasets. GCondNet consistently enhances performance across various N/D ratios, demonstrating its advantages even on small-dimensional data.

## 4 Related Work

"Diet" networks: Our focus is learning problems on high-dimensional tabular datasets with a limited number of samples. As such, our method takes inspiration from "diet" methods such as DietNetworks (Romero et al., 2017), FsNet (Singh & Yamada, 2023) and WPFS (Margeloiu et al., 2023), which rely on auxiliary networks to predict (and in turn reduce) the number of learnable parameters in the first layer of an underlying feed-forward network. However, GCondNet differs from "diet" methods in two important ways: (i) "diet" methods require a well-defined strategy for computing feature embeddings, and their performance is highly sensitive to this choice. In contrast, GCondNet defines graphs between samples and uses a GNN to learn the feature embeddings; (ii) GCondNet provides a different inductive bias which leverages the implicit relationships between samples (via the learned graph embeddings w(i)). Out of all "diet" methods, GCondNet is most closely related to PLATO (Ruiz et al., 2023), as both methods employ GNNs as auxiliary networks to parameterise the predictor network. However, the similarities end there, and we highlight two key differences: (i) PLATO relies on domain knowledge, making it inapplicable when such information is unavailable, which is common. In contrast, GCondNet is more general, as it can be applied to any tabular dataset without requiring domain knowledge but can still utilise it when available. (ii) PLATO constructs a single graph between features, whereas GCondNet creates multiple graphs between samples.
This distinction is crucial, as PLATO leverages the relationships among features, while our method focuses on leveraging the relationships between samples (in addition to the relationships between features learnt by the MLP predictor itself).

Graph-based approaches for tabular data, including semi-supervised approaches, construct a graph between samples to capture the underlying relationships. The graphs are created using either a user-defined metric or by learning a latent graph between samples. Recent methods apply GNNs to these graphs, and our work distinguishes itself from such tabular data approaches in three ways.

1. We use GNNs indirectly and *only during training* to improve an underlying MLP predictor. Once trained, we store the MLP predictor's final weights, eliminating the need for GNNs during inference. Test input samples are subsequently processed exclusively through the predictor MLP. In contrast, GNN approaches to tabular data (You et al., 2020; Wu et al., 2021; Du et al., 2022; Fatemi et al., 2021; Satorras & Bruna, 2018) directly employ GNNs *for inference* on new inputs, including making predictions (You et al., 2020; Du et al., 2022; Fatemi et al., 2021; Satorras & Bruna, 2018) and performing feature imputation (You et al., 2020; Wu et al., 2021).
2. Our graph structure is different. GCondNet generates many graphs between samples (one for each feature) and then extracts graph embeddings w(j) to parameterise a predictor network. This approach is novel and clearly distinguishes it from other work such as (You et al., 2020; Wu et al., 2021; Du et al., 2022), which generate graphs connecting features and samples. Both (You et al., 2020) and (Wu et al., 2021) construct a bipartite graph between samples and features, while (Du et al., 2022) creates a hyper-graph where each sample is a node linked to corresponding feature nodes (specifically for discrete data).
3. Graph-based approaches for tabular data often introduce additional assumptions that may be suboptimal or inapplicable. For example, (Kazi et al., 2022; Zhou et al., 2022; Fatemi et al., 2021) create a graph between samples and rely on the *smoothness assumption*, which posits that neighbouring instances share the same labels. As demonstrated in Section 3.1, such assumptions can be suboptimal for high-dimensional data. Concerning *dataset assumptions*, (Satorras & Bruna, 2018) addresses few-shot learning, which requires a substantial meta-training set comprising similar tasks. In contrast, our approach focuses on learning from small datasets without assuming the presence of an external meta-training set. The work of (Fatemi et al., 2021) infers a latent graph, focusing on either images or tabular data with a maximum of 30 features. In comparison, our research explores tabular datasets containing up to 20,000 features.

Feature selection. When faced with high-dimensional data, machine learning models are presented with increased degrees of freedom, making prediction tasks more challenging, especially on small sample-size tasks. To address this issue, various feature selection methods have been proposed to reduce the dimensionality of the data (Tibshirani, 1996; Feng & Simon, 2017; Liu et al., 2017; Singh & Yamada, 2023; Balın et al., 2019; Margeloiu et al., 2023; Lemhadri et al., 2021). All these methods aim to model the relationships between features (i.e., determining which features are similar or irrelevant to the task), but they do not consider the relationships between samples. In contrast, GCondNet uses a GNN to extract the relationships between samples, while the MLP predictor learns the relationships between features.

Neural networks for tabular data. More broadly, our work is related to neural network methods for tabular data.
Recent methods include various inductive biases, such as taking inspiration from tree-based methods (Katzir et al., 2020; Hazimeh et al., 2020; Popov et al., 2020; Yang et al., 2018), including attention-based modules (Arık & Pfister, 2021; Huang et al., 2020), or modelling multiplicative feature interactions (Qin et al., 2021). For a recent review on neural networks for tabular data, refer to (Borisov et al., 2022). However, these methods are generally designed for large sample size datasets, and their performance can vary on different datasets (Gorishniy et al., 2021), making them unsuitable for small-size and high-dimensional tasks. In contrast, our method is specifically designed for small-size high-dimensional tasks. Lastly, TabPFN (Hollmann et al., 2022) is a recent pre-trained Transformer using in-context learning for prediction, which can scale only up to 100 features, making it inapplicable for our high-dimensional datasets (of up to 22,000 features). ## 5 Conclusion We introduce GCondNet, a general method to improve neural network predictors on small and high-dimensional tabular datasets. The key innovation of GCondNet lies in exploiting the "implicit relationships" between samples by performing "soft parameter-sharing" to constrain the model's parameters. We also propose sample-wise multiplex graphs, a novel and general approach to identify and use these potential relationships between samples by constructing many graphs between samples, one for each feature. We then use Graph Neural Networks (GNNs) to extract any implicit structure and condition the parameters of the first layer of an underlying predictor network. Unlike other methods, which require external application-specific knowledge graphs, our method is general and can be applied to any tabular dataset. 
We evaluate our method on 12 classification tasks on biomedical datasets (in real applications, this could mean identifying biomarkers for different diseases) and show that GCondNet outperforms 14 benchmark methods and is robust to different graph construction methods. We also show that the GNN-extracted structure serves as a regularisation mechanism for reducing overfitting. Future work can investigate using the learned structures to obtain insights into the dataset, such as detecting mislabeled data points or outliers.

## Broader Impact Statement

This paper presents a novel method that aims to advance the field of machine learning by offering a new direction for leveraging the implicit sample relationships in machine learning, which is particularly beneficial for data-scarce tasks. This work can also serve as a basis for more interpretable approaches using the learned data structures. For critical domains such as medicine, GCondNet can provide patient/cohort-wise insights through post-hoc mechanisms such as graph concept-based explanations (Magister et al., 2021; 2022). From a machine learning perspective, GCondNet may also provide valuable insights into the dataset, such as identifying difficult training samples in the context of curriculum learning (Bengio et al., 2009). Our work's impact is to advance machine learning capabilities in critical fields such as medicine and scientific research, particularly in contexts where data availability is limited. By improving model performance in settings with scarce data, our approach supports essential research in early-phase clinical trials (Weissler et al., 2021; Zame et al., 2020), where typically only a small number of patients are enrolled, and it can help in identifying subtle patterns and relationships from small datasets.
By handling complex, high-dimensional datasets, GCondNet can benefit genomics research, enabling better analysis of genetic variations and interactions with limited experimental data, thus supporting the discovery of genetic markers and pathways (Alharbi & Vakanski, 2023; Way & Greene, 2019). We do not foresee harmful applications for our method.

## Acknowledgements

The authors thank Mateo Espinosa Zarlenga and Ramon Viñas Torné for their feedback on earlier versions of the manuscript. We also thank Iulia Duta and Pietro Barbiero for their early discussions on graph neural networks. NS acknowledges the support of the U.S. Army Medical Research and Development Command of the Department of Defense; through the FY22 Breast Cancer Research Program of the Congressionally Directed Medical Research Programs, Clinical Research Extension Award GRANT13769713. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the Department of Defense.

## References

Ludmil B Alexandrov, Serena Nik-Zainal, David C Wedge, Peter J Campbell, and Michael R Stratton. Deciphering signatures of mutational processes operative in human cancer. *Cell reports*, 3(1):246–259, 2013.

Fadiyah Ahmed Alharbi and Aleksandar Vakanski. Machine learning methods for cancer classification using gene expression data: A review. *Bioengineering*, 10, 2023.

Sercan O Arık and Tomas Pfister. Tabnet: Attentive interpretable tabular learning. In *AAAI Conference on Artificial Intelligence*, volume 35, pp. 6679–6687, 2021.

Gagan Bajwa, Ralph J DeBerardinis, Baomei Shao, Brian Hall, J David Farrar, and Michelle A Gill. Cutting edge: Critical role of glycolysis in human plasmacytoid dendritic cell antiviral responses. *The Journal of Immunology*, 196(5):2004–2009, 2016.

Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. *Nature communications*, 5(1):4308, 2014.
Rubika Balendra and Adrian M Isaacs. C9orf72-mediated ALS and FTD: multiple pathways to disease. Nature Reviews Neurology, 14(9):544–558, 2018. Muhammed Fatih Balın, Abubakar Abid, and James Zou. Concrete autoencoders: Differentiable feature selection and reconstruction. In *International Conference on Machine Learning*, pp. 444–453. PMLR, 2019. Alessio Benavoli, Giorgio Corani, and Francesca Mangili. Should we really use post-hoc tests based on mean-ranks? *The Journal of Machine Learning Research*, 17(1):152–161, 2016. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In *International* Conference on Machine Learning, 2009. Arindam Bhattacharjee, William G Richards, Jane Staunton, Cheng Li, Stefano Monti, Priya Vasa, Christine Ladd, Javad Beheshti, Raphael Bueno, Michael Gillette, et al. Classification of human lung carcinomas by mrna expression profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of Sciences, 98(24):13790–13795, 2001. Vadim Borisov, Tobias Leemann, Kathrin Sessler, Johannes Haug, Martin Pawelczyk, and Gjergji Kasneci. Deep neural networks and tabular data: A survey. *IEEE Transactions on Neural Networks and Learning* Systems, 2022. Leo Breiman. Random forests. *Machine learning*, 45(1):5–32, 2001. Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In *International* Conference on Learning Representations, 2022. Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. In *Proceedings of the AAAI conference on* artificial intelligence, volume 34, pp. 3438–3445, 2020. Christina Curtis, Sohrab P Shah, Suet-Feung Chin, Gulisa Turashvili, Oscar M Rueda, Mark J Dunning, Doug Speed, Andy G Lynch, Shamith Samarajiwa, Yinyin Yuan, et al. The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. 
*Nature*, 486(7403):346–352, 2012. Janez Demšar. Statistical comparisons of classifiers over multiple data sets. *The Journal of Machine learning* research, 7:1–30, 2006. Kounianhua Du, Weinan Zhang, Ruiwen Zhou, Yangkun Wang, Xilong Zhao, Jiarui Jin, Quan Gan, Zheng Zhang, and David Paul Wipf. Learning enhanced representations for tabular data via neighborhood propagation. *Advances in Neural Information Processing Systems*, 35, 2022. Bahare Fatemi, Layla El Asri, and Seyed Mehran Kazemi. Slaps: Self-supervision improves structure learning for graph neural networks. *Advances in Neural Information Processing Systems*, 34:22667–22681, 2021. Jean Feng and Noah Simon. Sparse-input neural networks for high-dimensional nonparametric regression and classification. *arXiv preprint arXiv:1711.07592*, 2017. Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. RLGM Workshop at the International Conference on Learning Representations, 2019. William A Freije, F Edmundo Castro-Vargas, Zixing Fang, Steve Horvath, Timothy Cloughesy, Linda M Liau, Paul S Mischel, and Stanley F Nelson. Gene expression profiling of gliomas strongly predicts survival. Cancer research, 64(18):6503–6510, 2004. Hui Gao, Joshua M Korn, Stéphane Ferretti, John E Monahan, Youzhen Wang, Mallika Singh, Chao Zhang, Christian Schnell, Guizhi Yang, Yun Zhang, et al. High-throughput screening using patient-derived tumor xenografts to predict clinical trial drug response. *Nature medicine*, 21(11):1318–1325, 2015. Mathew J Garnett, Elena J Edelman, Sonja J Heidorn, Chris D Greenman, Anahita Dastur, King Wai Lau, Patricia Greninger, I Richard Thompson, Xi Luo, Jorge Soares, et al. Systematic identification of genomic markers of drug sensitivity in cancer cells. *Nature*, 483(7391):570–575, 2012. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. 
In *Proceedings of the thirteenth international conference on artificial intelligence and statistics*, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. Yu. V. Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. In *Advances in Neural Information Processing Systems*, 2021. Léo Grinsztajn, Edouard Oyallon, and Gaël Varoquaux. Why do tree-based models still outperform deep learning on typical tabular data? In *Neural Information Processing Systems, Track on Datasets and* Benchmarks, 2022. Christian Haslinger, Norbert Schweifer, Stephan Stilgenbauer, Hartmut Dohner, Peter Lichter, Norbert Kraut, Christian Stratowa, and Roger Abseher. Microarray gene expression profiling of b-cell chronic lymphocytic leukemia subgroups defined by genomic aberrations and vh mutation status. *Journal of Clinical Oncology*, 22(19):3937–3949, 2004. Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, and Rahul Mazumder. The tree ensemble layer: Differentiability meets conditional computation. In *International Conference on Machine Learning*, pp. 4138–4148. PMLR, 2020. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification. In *Proceedings of the IEEE international conference on* computer vision, pp. 1026–1034, 2015. Noah Hollmann, Samuel G. Müller, Katharina Eggensperger, and Frank Hutter. Tabpfn: A transformer that solves small tabular classification problems in a second. In *International Conference on Learning* Representations, 2022. Xin Huang, Ashish Khetan, Milan W. Cvitkovic, and Zohar S. Karnin. Tabtransformer: Tabular data modeling using contextual embeddings. *arXiv*, abs/2012.06678, 2020. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *International Conference on Machine Learning*, pp. 448–456. PMLR, 2015. 
Francesco Iorio, Theo A Knijnenburg, Daniel J Vis, Graham R Bignell, Michael P Menden, Michael Schubert, Nanne Aben, Emanuel Gonçalves, Syd Barthorpe, Howard Lightfoot, et al. A landscape of pharmacogenomic interactions in cancer. *Cell*, 166(3):740–754, 2016. Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. Well-tuned simple nets excel on tabular datasets. *Advances in Neural Information Processing Systems*, 34, 2021. Gregor Kasieczka, Benjamin Nachman, David Shih, Oz Amram, Anders Andreassen, Kees Benkendorfer, Blaz Bortolato, Gustaaf Brooijmans, Florencia Canelli, Jack H Collins, et al. The lhc olympics 2020 a community challenge for anomaly detection in high energy physics. *Reports on progress in physics*, 84(12): 124201, 2021. Liran Katzir, Gal Elidan, and Ran El-Yaniv. Net-dnf: Effective deep modeling of tabular data. In International Conference on Learning Representations, 2020. Anees Kazi, Luca Cosmo, Seyed-Ahmad Ahmadi, Nassir Navab, and Michael M Bronstein. Differentiable graph module (dgm) for graph convolutional networks. *IEEE Transactions on Pattern Analysis and* Machine Intelligence, 45(2):1606–1617, 2022. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30, 2017. John A Keith, Valentin Vassilev-Galindo, Bingqing Cheng, Stefan Chmiela, Michael Gastegger, Klaus-Robert Muller, and Alexandre Tkatchenko. Combining machine learning and computational chemistry for predictive insights into chemical systems. *Chemical reviews*, 121(16):9816–9872, 2021. Matthew Kelly and Christopher Semsarian. Multiple mutations in genetic cardiovascular disease: a marker of disease severity? *Circulation: Cardiovascular Genetics*, 2(2):182–190, 2009. Hyunsoo Kim and Haesun Park. 
Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis. *Bioinformatics*, 23(12):1495–1502, 2007.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2015.

Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. International Conference on Learning Representations, 2017.

Ismael Lemhadri, Feng Ruan, and Rob Tibshirani. Lassonet: Neural networks with feature sparsity. In *International Conference on Artificial Intelligence and Statistics*, pp. 10–18. PMLR, 2021.

Roman Levin, Valeriia Cherepanova, Avi Schwarzschild, Arpit Bansal, C. Bayan Bruss, Tom Goldstein, Andrew Gordon Wilson, and Micah Goldblum. Transfer learning with deep tabular models. *International Conference on Learning Representations*, 2023.

Caiyan Li and Hongzhe Li. Network-constrained regularization and variable selection for analysis of genomic data. *Bioinformatics*, 24(9):1175–1182, 2008.

Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P Trevino, Jiliang Tang, and Huan Liu. Feature selection: A data perspective. *ACM Computing Surveys (CSUR)*, 50(6):94, 2018.

Arthur Liberzon, Chet Birger, Helga Thorvaldsdóttir, Mahmoud Ghandi, Jill P Mesirov, and Pablo Tamayo. The molecular signatures database hallmark gene set collection. *Cell systems*, 1(6):417–425, 2015.

Bo Liu, Ying Wei, Yu Zhang, and Qiang Yang. Deep neural networks for high dimension, low sample size data. In *International Joint Conference on Artificial Intelligence*, pp. 2287–2293, 2017.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In *International Conference on Learning Representations*, 2019.

Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, and Pietro Liò. Gcexplainer: Human-in-the-loop concept-based explanations for graph neural networks. *ICML Workshop on Human in the Loop Learning*, 2021.
Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, Federico Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, Mateja Jamnik, and Pietro Lio. Encoding concepts in graph neural networks. *arXiv preprint* arXiv:2207.13586, 2022. Andrei Margeloiu, Nikola Simidjievski, Pietro Lio, and Mateja Jamnik. Weight predictor network with feature selection for small sample tabular biomedical data. *AAAI Conference on Artificial Intelligence*, 2023. Lisiane B Meira, Antonio MC Reis, David L Cheo, Dorit Nahari, Dennis K Burns, and Errol C Friedberg. Cancer predisposition in mutant mice defective in multiple genetic pathways: uncovering important genetic interactions. *Mutation Research/Fundamental and Molecular Mechanisms of Mutagenesis*, 477(1-2):51–58, 2001. Kevin P Murphy. *Probabilistic machine learning: an introduction*. MIT press, 2022. Jaehyun Nam, Jihoon Tack, Kyungmin Lee, Hankook Lee, and Jinwoo Shin. Stunt: Few-shot tabular learning with self-generated tasks from unlabeled tables. In *International Conference on Learning Representations*, 2022. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *Advances in Neural Information Processing Systems*, 32, 2019. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. *the Journal of machine Learning research*, 12:2825–2830, 2011. Sergei Popov, Stanislav Morozov, and Artem Babenko. Neural oblivious decision ensembles for deep learning on tabular data. In *International Conference on Learning Representations*, 2020. Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, and Marc-Alexander Najork. 
Are neural rankers still outperformed by gradient boosted decision trees? In International Conference on Learning Representations, 2021. Ekagra Ranjan, Soumya Sanyal, and Partha Talukdar. Asap: Adaptive structure aware pooling for learning hierarchical graph representations. *Proceedings of the AAAI Conference on Artificial Intelligence*, 34: 5470–5477, 2020. Adriana Romero, Pierre Luc Carrier, Akram Erraqabi, Tristan Sylvain, Alex Auvolat, Etienne Dejoie, MarcAndré Legault, Marie-Pierre Dubé, Julie G. Hussin, and Yoshua Bengio. Diet networks: Thin parameters for fat genomics. In *International Conference on Learning Representations*, 2017. Camilo Ruiz, Hongyu Ren, Kexin Huang, and Jure Leskovec. Tabular deep learning when *d >> n* by using an auxiliary knowledge graph. In *Advances in Neural Information Processing Systems*, 2023. Victor Garcia Satorras and Joan Bruna. Few-shot learning with graph neural networks. *International* Conference on Learning Representations, 2018. Julia Schaefer, Moritz Lehne, Josef Schepers, Fabian Prasser, and Sylvia Thun. The use of machine learning in rare diseases: a scoping review. *Orphanet journal of rare diseases*, 15:1–10, 2020. Paul Scherer, Maja Trebacz, Nikola Simidjievski, Ramon Viñas, Zohreh Shams, Helena Andres Terre, Mateja Jamnik, and Pietro Liò. Unsupervised construction of computational graphs for gene expression data with explicit structural inductive biases. *Bioinformatics*, 38(5):1320–1327, 2022. Dinesh Singh and Makoto Yamada. Fsnet: Feature selection network on high-dimensional biological data. International Joint Conference on Neural Networks (IJCNN), 2023. Dinesh Singh, Phillip G Febbo, Kenneth Ross, Donald G Jackson, Judith Manola, Christine Ladd, Pablo Tamayo, Andrew A Renshaw, Anthony V D'Amico, Jerome P Richie, et al. Gene expression correlates of clinical prostate cancer behavior. *Cancer cell*, 1(2):203–209, 2002. 
Avrum Spira, Jennifer E Beane, Vishal Shah, Katrina Steiling, Gang Liu, Frank Schembri, Sean Gilman, Yves-Martine Dumas, Paul Calner, Paola Sebastiani, et al. Airway epithelial gene expression in the diagnostic evaluation of smokers with suspect lung cancer. *Nature medicine*, 13(3):361–366, 2007. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *The journal of machine learning research*, 15(1): 1929–1958, 2014. Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu. A survey on deep transfer learning. In Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, October 4-7, 2018, Proceedings, Part III 27, pp. 270–279. Springer, 2018. Leo Taslaman and Björn Nilsson. A framework for regularized non-negative matrix factorization, with application to the analysis of gene expression data. *PloS one*, 7(11):e46331, 2012. Robert Tibshirani. Regression shrinkage and selection via the lasso. *Journal of the Royal Statistical Society:* Series B (Methodological), 58(1):267–288, 1996. Katarzyna Tomczak, Patrycja Czerwińska, and Maciej Wiznerowicz. The cancer genome atlas (tcga): an immeasurable source of knowledge. *Contemporary oncology*, 19(1A):A68, 2015. Zifeng Wang and Jimeng Sun. Transtab: Learning transferable tabular transformers across tables. *Advances* in Neural Information Processing Systems, 35:2902–2915, 2022. Gregory P Way and Casey S Greene. Discovering pathway and cell type signatures in transcriptomic compendia with machine learning. *Annual Review of Biomedical Data Science*, 2:1–17, 2019. Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. *nti, Series*, 2(9):12–16, 1968. 
E Hope Weissler, Tristan Naumann, Tomas Andersson, Rajesh Ranganath, Olivier Elemento, Yuan Luo, Daniel F Freitag, James Benoit, Michael C Hughes, Faisal Khan, et al. The role of machine learning in clinical research: transforming the future of evidence generation. *Trials*, 22:1–15, 2021. Qitian Wu, Chenxiao Yang, and Junchi Yan. Towards open-world feature extrapolation: An inductive graph learning approach. *Advances in Neural Information Processing Systems*, 34:19435–19447, 2021. Junchen Yang, Ofir Lindenbaum, and Yuval Kluger. Locally sparse neural networks for tabular biomedical data. In *International Conference on Machine Learning*, pp. 25123–25153. PMLR, 2022. Wanjuan Yang, Jorge Soares, Patricia Greninger, Elena J Edelman, Howard Lightfoot, Simon Forbes, Nidhi Bindal, Dave Beare, James A Smith, I Richard Thompson, et al. Genomics of drug sensitivity in cancer (gdsc): a resource for therapeutic biomarker discovery in cancer cells. *Nucleic acids research*, 41(D1): D955–D961, 2012. Yongxin Yang, Irene Garcia Morillo, and Timothy M. Hospedales. Deep neural decision trees. International Conference in Machine Learning - Workshop on Human Interpretability in Machine Learning (WHI), 2018. Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. Advances in neural information processing systems, 31, 2018. Jiaxuan You, Xiaobai Ma, Yi Ding, Mykel J Kochenderfer, and Jure Leskovec. Handling missing data with graph representation learning. *Advances in Neural Information Processing Systems*, 33:19075–19087, 2020. William R Zame, Ioana Bica, Cong Shen, Alicia Curth, Hyun-Suk Lee, Stuart Bailey, James Weatherall, David Wright, Frank Bretz, and Mihaela van der Schaar. Machine learning for clinical trials in the era of covid-19. *Statistics in biopharmaceutical research*, 12(4):506–517, 2020. 
Zenan Zhai, Christian Druckenbrodt, Camilo Thorne, Saber A Akhondi, Dat Quoc Nguyen, Trevor Cohn, and Karin Verspoor. Chemtables: a dataset for semantic classification on tables in chemical patents. *Journal of Cheminformatics*, 13(1):1–20, 2021.

Kaixiong Zhou, Zirui Liu, Rui Chen, Li Li, Soo-Hyun Choi, and Xia Hu. Table2graph: Transforming tabular data to unified weighted graph. In *International Joint Conference on Artificial Intelligence*, 2022.

# Appendix: GCondNet

## Table Of Contents

- A GCondNet Training Pseudocode
- B Sparse Relative Distance (SRD) Graph Construction
- C Datasets
- D Reproducibility: Benchmark Methods, Training Details and Hyper-parameters
- E Computational Cost
  - E.1 Training time
  - E.2 Number of steps for convergence
- F Ablations on GCondNet
  - F.1 Ablation on the number of steps for decaying the mixing coefficient
  - F.2 Varying the size of the backbone's first layer
  - F.3 Ablation on GCondNet's graph construction method and the MLP weight initialisation
- G GCondNet beyond small-sample and high-dimensional data
  - G.1 Varying the dataset size N
  - G.2 Varying the sample-to-feature ratio N/D

## A GCondNet Training Pseudocode

**Algorithm 2** Training GCondNet

**Input:** training data X ∈ R^(N×D), training labels y ∈ R^N, classification network f_θMLP, graph neural network g_θGNN, node aggregation function f_agg, graph creation method h(·), steps for linear decay n_α

1: **for** each feature j = 1, 2, ..., D **do**
2:   G_j = h(X_:,j) ▷ Generate the sample-wise multiplex graphs
3: **end for**
4: W_scratch = 0 ▷ Initialise auxiliary weight matrix
5: **for** each mini-batch B = {(x^(i), y_i)}_{i=1..b} **do**
6:   **for** each feature j = 1, 2, ..., D **do**
7:     node-embeddings = g_θGNN(G_j)
8:     w^(j) = f_agg(node-embeddings) ▷ Aggregate all node embeddings to obtain the graph embedding w^(j) ∈ R^K
9:   **end for**
10:  W_GNN = [w^(1), w^(2), ..., w^(D)] ▷ Horizontally concatenate the graph embeddings
11:  α = max(0, 1 − (i/n_α)) ▷ Compute the mixing coefficient
12:  W^[1]_MLP ← αW_GNN + (1 − α)W_scratch ▷ Compute the weight matrix of the first layer
13:  Make W^[1]_MLP the weight matrix of the first layer of f_θMLP
14:  **for** each sample i = 1, 2, ..., b **do**
15:    ŷ_i = f_θMLP(x^(i))
16:  **end for**
17:  ŷ ← [ŷ_1, ŷ_2, ..., ŷ_b] ▷ Concatenate all predictions
18:  Compute the training loss L = CrossEntropyLoss(y, ŷ)
19:  Compute the gradient of the loss L w.r.t. θMLP, θGNN, W_scratch using backpropagation
20:  Update the parameters:
21:    θMLP ← θMLP − ∇_θMLP L
22:    θGNN ← θGNN − ∇_θGNN L
23:    W_scratch ← W_scratch − ∇_W_scratch L
24: **end for**

**Return:** Trained models f_θMLP, g_θGNN and W^[1]_MLP

## B Sparse Relative Distance (SRD) Graph Construction

We propose a novel similarity-based method, Sparse Relative Distance (SRD), for creating edges between nodes (representing samples) in a graph. It assumes that similar samples should be connected and uses this principle to create the edges E_j of the j-th graph.
The method also includes an accept-reject step to sparsify the graph. Specifically, SRD works as follows:

1. For each node i, create a set of candidate edges C_i by identifying all samples l with a feature value within a certain distance, *dist*, of the corresponding feature value of sample i. Specifically, we include all samples l such that |X_i,j − X_l,j| ≤ *dist*, where *dist* is defined as 5% of the absolute difference between the 5th and 95th percentiles of all values of feature j (to eliminate the effect of outlier feature values).

2. Perform a Bernoulli trial with success probability size(C_i)/N_train for each node i. If the trial outcome is positive, create undirected edges between node i and all nodes within the candidate set C_i; if the outcome is negative, no new edges are created. This sampling procedure results in sparser graphs, which helps alleviate the issue of oversmoothing (Chen et al., 2020) commonly encountered in GNNs. Oversmoothing occurs when the model produces similar embeddings for all nodes in the graph, effectively 'smoothing out' any differences in the graph structure. Note that a node can also acquire new edges as part of the candidate sets of other nodes. This process results in a network topology in which nodes with larger candidate sets are more likely to have more connections; intuitively, 'representative' samples become the centres of node clusters in the network.

3. To further prevent oversmoothing, if a node has more than 25 connections, we randomly prune some of its edges until it has exactly 25 connections. This step is needed because samples with highly frequent feature values can otherwise accumulate an excessive number of connections, which can result in oversmoothing.

Table B.1: Key statistics of the graphs created using the Sparse Relative Distance (SRD) method with *dist* = 5%.
| Dataset | Node degree | Edges of full graph (%) |
|---------------|---------------|---------------------------|
| cll | 3.88 ± 6.81 | 9.96 |
| lung | 5.16 ± 9.46 | 7.37 |
| meta-p50 | 3.97 ± 6.73 | 5.56 |
| meta-dr | 3.92 ± 6.69 | 5.48 |
| prostate | 11.28 ± 17.96 | 31.77 |
| smk | 3.94 ± 6.61 | 5.93 |
| tcga-survival | 7.7 ± 12.03 | 10.77 |
| tcga-tumor | 7.45 ± 11.84 | 10.42 |
| toxicity | 3.1 ± 5.42 | 5.12 |

## C Datasets

| Dataset | # samples (N) | # features (D) | D/N | # classes | # samples per class |
|---------------|------|-------|-----|----|-----------------|
| allaml | 72 | 7129 | 99 | 2 | 25, 47 |
| cll | 111 | 11340 | 102 | 3 | 11, 49, 51 |
| gli | 85 | 22283 | 262 | 2 | 26, 59 |
| glioma | 50 | 4434 | 89 | 4 | 7, 14, 14, 15 |
| lung | 197 | 3312 | 17 | 4 | 17, 20, 21, 139 |
| meta-dr | 200 | 4160 | 21 | 2 | 61, 139 |
| meta-p50 | 200 | 4160 | 21 | 2 | 33, 167 |
| prostate | 102 | 5966 | 58 | 2 | 50, 52 |
| smk | 187 | 19993 | 107 | 2 | 90, 97 |
| tcga-survival | 200 | 4381 | 22 | 2 | 78, 122 |
| tcga-tumor | 200 | 4381 | 22 | 3 | 25, 51, 124 |
| toxicity | 171 | 5748 | 34 | 4 | 39, 42, 45, 45 |

Table C.2: Details of the 12 real-world biomedical datasets used for experiments. The datasets contain between 72 and 200 samples, and the number of features is 17 to 262 times larger than the number of samples.

All datasets are publicly available and summarised in Table C.2. Eight datasets are open-source (Li et al., 2018) and available online at https://jundongl.github.io/scikit-feature/datasets.html: **CLL-SUB-111** (called 'cll') (Haslinger et al., 2004), **GLI_85** (called 'gli') (Freije et al., 2004), **lung** (Bhattacharjee et al., 2001), **Prostate_GE** (called 'prostate') (Singh et al., 2002), **SMK-CAN-187** (called 'smk') (Spira et al., 2007), **TOX-171** (called 'toxicity') (Bajwa et al., 2016), as well as **allaml** and **glioma**, for which the original reference was not available.
We created four additional datasets following the methodology presented in Margeloiu et al. (2023):

- Two datasets from the **METABRIC** (Curtis et al., 2012) dataset. We combined the molecular data with the clinical label 'DR' to create the **'meta-dr'** dataset, and we combined the molecular data with the clinical label 'Pam50Subtype' to create the **'meta-p50'** dataset. Because the label 'Pam50Subtype' was very imbalanced, we transformed the task into a binary task of basal vs. non-basal by merging the classes 'LumA', 'LumB', 'Her2', and 'Normal' into one class and using the remaining class 'Basal' as the second class. For both 'meta-dr' and 'meta-p50' we selected the Hallmark gene set (Liberzon et al., 2015) associated with breast cancer, and the new datasets contain 4160 expressions (features) for each patient. We created the final datasets by randomly sampling 200 patients, stratified by class label, because we are interested in studying datasets with a small sample size.

- Two datasets from the **TCGA** (Tomczak et al., 2015) dataset. We combined the molecular data and the label 'X2yr.RF.Surv' to create the **'tcga-survival'** dataset, and we combined the molecular data and the label 'tumor_grade' to create the **'tcga-tumor'** dataset. For both 'tcga-survival' and 'tcga-tumor' we selected the Hallmark gene set (Liberzon et al., 2015) associated with breast cancer, leaving 4381 expressions (features) for each patient. We again created the final datasets by randomly sampling 200 patients, stratified by class label.

Dataset processing. Before training the models, we apply Z-score normalisation to each dataset. Specifically, on the training split, we learn a simple transformation that makes each column of X_train ∈ R^(N_train×D) have zero mean and unit variance. We apply this transformation to the validation and test splits during cross-validation.
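This fit-on-train, apply-everywhere normalisation can be sketched as follows; a minimal NumPy sketch with illustrative function names and synthetic data, not the paper's exact preprocessing code:

```python
import numpy as np

def fit_zscore(X_train):
    # Learn the per-feature mean and std on the training split only.
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0.0] = 1.0  # guard against constant features
    return mu, sigma

def apply_zscore(X, mu, sigma):
    # Apply the training-split statistics to any split (train/val/test).
    return (X - mu) / sigma

rng = np.random.default_rng(0)
X_train = rng.normal(loc=2.0, scale=3.0, size=(90, 5))
X_val = rng.normal(loc=2.0, scale=3.0, size=(10, 5))

mu, sigma = fit_zscore(X_train)
Z_train = apply_zscore(X_train, mu, sigma)
Z_val = apply_zscore(X_val, mu, sigma)  # uses training statistics, avoiding leakage
```

The key point is that the validation and test splits are transformed with the training-split statistics, never their own.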
## D Reproducibility: Benchmark Methods, Training Details And Hyper-Parameters

Software implementation. We implemented GCondNet using PyTorch 1.12 (Paszke et al., 2019), an open-source deep learning library with a BSD licence. We implemented the GNN within GCondNet, as well as the GCN and GATv2 benchmarks, using PyTorch-Geometric (Fey & Lenssen, 2019), an open-source library for implementing graph neural networks with an MIT licence. We train using PyTorch Lightning (https://github.com/Lightning-AI/lightning), a library built on top of PyTorch and released under an Apache Licence 2.0. All numerical plots and graphics have been generated using Matplotlib 3.6, a Python-based plotting library with a BSD licence. The model architecture in Figure 1 was generated using draw.io (https://github.com/jgraph/drawio), a free drawing software under Apache License 2.0.

As for the other benchmarks, we implement MLP, CAE and DietNetworks using PyTorch 1.12 (Paszke et al., 2019), Random Forest using scikit-learn (Pedregosa et al., 2011) (BSD licence), LightGBM using the lightgbm library (Ke et al., 2017) (MIT licence), and TabNet (Arık & Pfister, 2021) using the implementation from Dreamquark AI (https://github.com/dreamquark-ai/tabnet) (MIT licence). We use the MIT-licensed implementation of WPFS (https://github.com/andreimargeloiu/WPFS) made public by Margeloiu et al. (2023). We re-implement FsNet (Singh & Yamada, 2023) in PyTorch 1.12 (Paszke et al., 2019) because the official code implementation contains differences from the paper, and they used a different evaluation setup from ours (they evaluate using unbalanced accuracy, while we run multiple data splits and evaluate using balanced accuracy). We use the official implementation of LassoNet (https://github.com/lasso-net/lassonet; MIT licence) and the official implementation of SPINN (https://github.com/jjfeng/spinn; no licence).

Computing Resources.
All our experiments are run on a single machine from an internal cluster with an Nvidia Quadro RTX 8000 GPU with 48GB memory and an Intel(R) Xeon(R) Gold 5218 CPU with 16 cores (at 2.30GHz). The operating system was Ubuntu 20.04.4 LTS. We estimate that carrying out the full range of experiments, comprising both prototyping and initial experimental phases, required training around 11000 distinct models, which we estimate took 1000 to 1200 GPU hours.

GCondNet architecture and settings. GCondNet leverages an MLP backbone model of three layers with 100, 100, and 10 neurons. After each linear layer we add a LeakyReLU non-linearity with slope 0.01, batch normalisation (Ioffe & Szegedy, 2015), and dropout (Srivastava et al., 2014); we tune the dropout probability p ∈ {0.2, 0.4} on the validation accuracy. The last layer has a softmax activation. The layers following the first one are initialised using a standard Kaiming method (He et al., 2015), which accounts for the activation function. The GNN within GCondNet is a Graph Convolutional Network (GCN) (Kipf & Welling, 2017) with two layers of size 200 and 100. After the first GCN layer, we use a ReLU non-linearity and dropout with p = 0.5. The permutation-invariant function f_agg for computing graph embeddings is global average pooling.³

³ Using hierarchical pooling methods (Ying et al., 2018; Ranjan et al., 2020) resulted in unstable training and significantly worse performance.

We intend GCondNet's computational overhead to be minimal; thus, GCondNet retains the same training hyper-parameters as the optimised backbone MLP model. We initially performed hyper-parameter tuning on an MLP. After selecting the optimal hyper-parameters, we retrained GCondNet with the same MLP backbone and training settings, using two graph construction methods, resulting in two variants: GCondNet (KNN) and GCondNet (SRD), as shown in Table 1. Consequently, the time needed to tune GCondNet is
Specifically, we train GCondNet for 10000 steps with a batch size of 8 and optimise using AdamW (Loshchilov & Hutter, 2019) with a fixed learning rate of 1e − 4. We decay the mixing coefficient α over nα = 200 training steps, although we found that GCondNet is robust to the number of steps nα, as supported by the statistical tests in Appendix F.1. We use early stopping with patience 200 steps on the validation loss across all experiments. Training details for all benchmark methods. Here, we present the training settings for all benchmark models, and we discuss hyper-parameter tuning in the next paragraph. We train using 5-fold cross-validation with 5 repeats (training 25 models each run). For each run, we select 10% of the training data for validation. We perform a fair comparison whenever possible: for instance, we train all models using a weighted loss (e.g., weighted cross-entropy loss for neural networks), evaluate using balanced accuracy, and use the same classification network architecture for GCondNet, MLP, WPFS, FsNet, CAE and DietNetworks. - **WPFS, DietNetworks, CAE, FsNet** have three hidden layers of size 100, 100, 10. The Weight Predictor Network and the Sparsity Network have four hidden layers 100, 100, 100, 100. They are trained for 10,000 steps using early stopping with patience 200 steps on the validation cross-entropy and gradient clipping at 2.5. For CAE and **FsNet** we use the suggested annealing schedule for the concrete nodes: exponential annealing from temperature 10 to 0.01. On all datasets, **DietNetworks** performed best with not decoder. For **WPFS** we use the NMF embeddings suggested in Margeloiu et al., 2023. - For GCN, we used two graph convolutional layers with 200 and 100 neurons, followed by a linear layer with softmax activation for class label computation. ReLU and dropout with p = 0.5 follow each convolutional layer. 
We train using AdamW (Loshchilov & Hutter, 2019), tune the learning rate in [1e−3, 3e−3, 1e−4], and select the best model on the validation accuracy.

- For **GATv2**, we used two graph attentional layers with dropout p = 0.5. The first attention layer used 4 attention heads of size 100, and the second used one head of size 400. ReLU and dropout with p = 0.5 follow each attention layer. A linear layer with softmax activation for class label computation follows the attention layers. We train using AdamW (Loshchilov & Hutter, 2019), tune the learning rate in [1e−3, 3e−3, 1e−4], and select the best model on the validation accuracy.

- **LassoNet** has three hidden layers of size 100, 100, 10. We use dropout 0.2 and train using AdamW (with betas 0.9, 0.98) and a batch size of 8. We train using a weighted loss and perform early stopping on the validation set.

- For **Random Forest**, we used 500 estimators, feature bagging with the square root of the number of features, and balanced class weights.

- For **LightGBM**, we used 200 estimators and feature bagging with 30% of the features.

- For **TabNet**, we use width 8 for the decision prediction layer and the attention embedding for each mask (larger values lead to severe overfitting) and 1.5 for the feature re-usage coefficient in the masks. We use three steps in the architecture, with two independent and two shared Gated Linear Units layers at each step. We train using Adam (Kingma & Ba, 2015) with momentum 0.3 and gradient clipping at 2.

- For **TabTransformer**, we use only the head for continuous features, as our datasets do not contain categorical features. For a fair comparison, we use the same architecture as the MLP following the initial layer normalization, and train the model with the optimal settings of the MLP.
- For **SPINN**, we followed the results from the ablations in the original paper and tuned the sparse group lasso hyper-parameter λ ∈ {0, 0.001, 0.0032, 0.1}, the group lasso hyper-parameter α ∈ {0.9, 0.99, 0.999} in the sparse group lasso, and the ridge parameter λ0 ∈ {0, 0.0001}, and train for at most 1,000 steps.

- For **DNP**, we did not find any suitable implementation and used SPINN with different settings as a proxy for DNP (DNP is a greedy approximation to optimising the group lasso, while SPINN optimises a group lasso directly). Specifically, our proxy for DNP is SPINN trained for at most 1,000 iterations, with α = 1 for the group lasso in the sparse group lasso and ridge parameter λ0 = 0.0001; we tuned the sparse group lasso hyper-parameter λ ∈ {0, 0.001, 0.0032, 0.1}.

Hyper-parameter tuning. For each model, we use random search and previous experience to find a good range of hyper-parameter values, which we then investigate in detail. We then performed a grid search and ran 25 runs for each hyper-parameter configuration. We selected the best hyper-parameters based on the average validation accuracy across the 25 runs. For the MLP and DietNetworks we individually grid-searched the learning rate ∈ {0.003, 0.001, 0.0003, 0.0001}, batch size ∈ {8, 12, 16, 20, 24, 32}, and dropout rate ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5}. We found that a learning rate of 0.003, a batch size of 8, and a dropout rate of 0.2 work well across datasets for both models, and we used them in the presented experiments. In addition, for DietNetworks we also tuned the reconstruction hyper-parameter λ ∈ {0, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30} and found that λ = 0 performed best across datasets. For WPFS we used the best hyper-parameters for the MLP and tuned only the sparsity hyper-parameter λ ∈ {0, 3e−6, 3e−5, 3e−4, 3e−3, 1e−2} and the size of the feature embedding ∈ {20, 50, 70}. For FsNet we grid-searched the reconstruction parameter λ ∈ {0, 0.2, 1, 5} and the learning rate in {0.001, 0.003}.
For LightGBM we performed a grid search over the learning rate in {0.1, 0.01} and the maximum depth in {1, 2}. For Random Forest, we performed a grid search over the maximum depth in {3, 5, 7} and the minimum number of samples in a leaf in {2, 3}. For TabNet we searched the learning rate in {0.01, 0.02, 0.03} and the λ sparsity hyper-parameter in {0.1, 0.01, 0.001, 0.0001}, as motivated by Yang et al. (2022). For GCN and GATv2 we searched the learning rate in {1e−3, 3e−3, 1e−4}. We selected the best hyper-parameters (Table D.3) on the weighted cross-entropy (except Random Forest, for which we used weighted balanced accuracy).

| Dataset | RF max depth | RF min samples leaf | LightGBM learning rate | LightGBM max depth | TabNet learning rate | TabNet λ sparsity | GCN learning rate | GATv2 learning rate | FsNet learning rate | FsNet λ reconstruction | CAE annealing iterations | WPFS λ sparsity | WPFS embedding size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| allaml | 3 | 3 | 0.1 | 2 | 0.03 | 0.0001 | 0.001 | 0.003 | 0.003 | 0 | 1000 | 0 | 50 |
| cll | 3 | 3 | 0.1 | 2 | 0.03 | 0.001 | 0.003 | 0.003 | 0.003 | 0 | 1000 | 3e−4 | 70 |
| gli | 3 | 3 | 0.1 | 1 | 0.03 | 0.1 | 0.001 | 0.003 | 0.003 | 0 | 300 | 3e−4 | 50 |
| glioma | 3 | 3 | 0.01 | 2 | 0.03 | 0.001 | 0.003 | 0.003 | 0.003 | 0 | 1000 | 3e−3 | 50 |
| lung | 3 | 2 | 0.1 | 1 | 0.02 | 0.1 | 0.003 | 0.001 | 0.001 | 0 | 1000 | 3e−5 | 20 |
| meta-p50 | 7 | 2 | 0.01 | 2 | 0.02 | 0.001 | 0.003 | 0.003 | 0.003 | 0 | 1000 | 3e−6 | 50 |
| meta-dr | 7 | 2 | 0.1 | 1 | 0.03 | 0.1 | 0.003 | 0.003 | 0.003 | 0 | 300 | 0 | 50 |
| prostate | 5 | 2 | 0.1 | 2 | 0.02 | 0.01 | 0.0001 | 0.0001 | 0.003 | 0 | 1000 | 3e−3 | 50 |
| smk | 5 | 2 | 0.1 | 2 | 0.03 | 0.001 | 0.001 | 0.0001 | 0.003 | 0 | 1000 | 3e−5 | 50 |
| tcga-survival | 3 | 3 | 0.1 | 1 | 0.02 | 0.01 | 0.003 | 0.003 | 0.003 | 0 | 300 | 3e−5 | 50 |
| tcga-tumor | 3 | 3 | 0.1 | 1 | 0.02 | 0.01 | 0.003 | 0.001 | 0.003 | 0 | 300 | 3e−5 | 50 |
| toxicity | 5 | 3 | 0.1 | 2 | 0.03 | 0.1 | 0.003 | 0.0001 | 0.001 | 0.2 | 1000 | 3e−5 | 50 |

Table D.3: Best-performing hyper-parameters for each benchmark model across datasets.

LassoNet unstable training. We used the official implementation of LassoNet (https://github.com/lasso-net/lassonet), and we successfully replicated some of the results in the LassoNet paper (Lemhadri et al., 2021). However, LassoNet was unstable on all datasets we trained on. In our experiments, we grid-searched the L1 penalty coefficient λ ∈ {0.001, 0.01, 0.1, 1, 10, 'auto'} and the hierarchy coefficient M ∈ {0.1, 1, 3, 10}. These values are suggested in the paper and used in the examples from the official codebase. For all hyper-parameter combinations, LassoNet's performance was equivalent to a random classifier (e.g., 25% balanced accuracy for a 4-class problem).

## E Computational Cost

We present the time and number of steps required to train GCondNet. Note that we designed GCondNet to use the same training hyper-parameters as the optimised backbone MLP model. Therefore, when implementing GCondNet, one should first perform hyper-parameter tuning on the backbone model, which is quicker than training GCondNet. After selecting the optimal hyper-parameters, retrain GCondNet using the same backbone and training settings, but choose a graph construction method (e.g., KNN or SRD). Consequently, the time needed to tune GCondNet is essentially the same as for the backbone model, with the only difference being the final one-off GCondNet training, for which we show the computational overhead here.
## E.1 Training Time

| Dataset | GCondNet | MLP | WPFS | DietNetworks | FsNet |
|-----------------|------------|-------|--------|----------------|---------|
| cll | 12 | 6.1 | 7.7 | 8.1 | 5.4 |
| lung | 11 | 6.1 | 9.9 | 10.6 | 6.1 |
| meta-p50 | 9.8 | 6.6 | 8.9 | 8.7 | 4.3 |
| meta-dr | 3.6 | 3 | 4.6 | 4.9 | 4.8 |
| prostate | 9.6 | 7.2 | 9.9 | 7.9 | 3.5 |
| smk | 9 | 6.2 | 8.9 | 5.9 | 4.3 |
| tcga-survival | 3.8 | 3 | 4.2 | 4.7 | 5.3 |
| tcga-tumor | 4 | 3.5 | 5 | 4.4 | 4.5 |
| toxicity | 13.7 | 7.3 | 10 | 8.2 | 5.9 |
| Average minutes | 8.5 | 5.4 | 7.7 | 7 | 4.9 |

Table E.4: Average training time (in minutes) of a model with optimal hyper-parameters.

In general, GCondNet trains slower than the benchmarks: GCondNet takes on average 8.5 minutes to train across datasets, other competitive "diet" networks, such as WPFS, take 7.7 minutes, and an MLP takes 5.4 minutes.

## E.2 Number Of Steps For Convergence

| Dataset | MLP | GCondNet (KNN graphs) | GCondNet (SRD graphs) | GCondNet (RandEdge graphs) |
|---------------|------|------|------|------|
| cll | 1697 | 4020 | 4526 | 4298 |
| lung | 2836 | 6927 | 6777 | 6935 |
| meta-p50 | 2952 | 5504 | 5365 | 5242 |
| meta-dr | 1331 | 1657 | 1577 | 1598 |
| prostate | 2347 | 4307 | 4540 | 4535 |
| smk | 1379 | 2175 | 2102 | 2218 |
| tcga-survival | 1320 | 1541 | 1606 | 1588 |
| tcga-tumor | 1316 | 1805 | 1790 | 1805 |
| toxicity | 2807 | 6522 | 6809 | 6772 |
| Average | 1998 | 3829 | 3899 | 3887 |

Table E.5: Number of steps until convergence.

## F Ablations On GCondNet

## F.1 Ablation On The Number Of Steps For Decaying The Mixing Coefficient

Table F.6: We evaluate the impact of decaying the mixing coefficient α for a varying number of steps nα. We present the mean ± std balanced *validation accuracy* averaged over 25 runs. We find that GCondNet is robust to the decay length nα, and we choose nα = 200 steps for all experiments in this paper unless otherwise specified.
| Dataset | nα = 100 | nα = 200 | nα = 400 |
|---------------|---------------|---------------|---------------|
| cll | 88.88 ± 8.47 | 88.09 ± 9.34 | 88.33 ± 8.28 |
| lung | 96.77 ± 5.07 | 97.27 ± 4.97 | 97.36 ± 4.76 |
| meta-p50 | 98.56 ± 3.70 | 98.56 ± 3.70 | 97.74 ± 5.02 |
| meta-dr | 71.20 ± 8.80 | 68.36 ± 10.63 | 71.92 ± 8.39 |
| prostate | 95.56 ± 7.35 | 93.97 ± 10.22 | 95.11 ± 7.99 |
| smk | 76.39 ± 11.59 | 76.89 ± 13.10 | 75.78 ± 11.64 |
| tcga-survival | 69.20 ± 11.38 | 70.60 ± 9.35 | 68.53 ± 10.01 |
| tcga-tumor | 62.06 ± 13.64 | 62.46 ± 18.05 | 66.13 ± 15.17 |
| toxicity | 98.75 ± 3.12 | 98.25 ± 3.83 | 98.50 ± 3.73 |

| nα = 100 vs. nα = 200 | nα = 100 vs. nα = 400 | nα = 200 vs. nα = 400 |
|-------------------------|-------------------------|-------------------------|
| 4.00E-01 | 9.44E-01 | 4.96E-01 |

Table F.7: Statistical analysis of the number of iterations from Table F.6. We report the p-values of a Wilcoxon signed-rank test (Demšar, 2006; Benavoli et al., 2016). The results support our observation that GCondNet is robust to the hyper-parameter nα.

## F.2 Varying The Size Of The Backbone's First Layer

The GNN component of GCondNet outputs a matrix WGNN ∈ R^(K×D), where K is the size of the first hidden layer of the underlying backbone network (e.g., an MLP), and D is the number of features. The graph embeddings w^(j) ∈ R^K described in Algorithm 1 (line 3) match the size K of the backbone MLP model; hence, K is determined by the backbone network rather than being a hyper-parameter of GCondNet.
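To make the shapes concrete: each per-feature graph yields one embedding via f_agg (global average pooling), and stacking the D embeddings column-wise gives W_GNN with row dimension exactly the backbone's first-layer width K. A minimal NumPy sketch with illustrative sizes and random node embeddings standing in for actual GNN outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 100, 500  # backbone first-layer width, number of features (illustrative)

def graph_embedding(node_embeddings):
    # f_agg: global average pooling over one graph's node embeddings (n_nodes x K)
    return node_embeddings.mean(axis=0)

# One graph per feature; the number of nodes may differ between graphs,
# but every graph embedding has size K, fixed by the backbone network.
per_feature = [graph_embedding(rng.normal(size=(rng.integers(5, 50), K)))
               for _ in range(D)]
W_gnn = np.stack(per_feature, axis=1)  # W_GNN in R^(K x D)
```

Changing K therefore only changes the embedding size that f_agg must produce, while D remains tied to the number of input features.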
We assess the effect of GCondNet when the backbone network varies in width. We vary the size K of the backbone MLP's first layer and apply GCondNet (with KNN graphs), which computes graph embeddings of size K accordingly. Figure F.1 and Table F.8 demonstrate that GCondNet consistently enhances performance across different values of K. Notably, GCondNet greatly improves stability, yielding comparable outcomes across various network widths, unlike the MLP, which exhibits a substantial performance decrease in narrower networks.

Table F.8: Numerical results for Figure F.1, comparing the performance when varying the size of the first hidden layer of an MLP, and GCondNet with an identical MLP backbone.

| Size first layer | gli MLP | gli GCondNet | allaml MLP | allaml GCondNet | cll MLP | cll GCondNet | glioma MLP | glioma GCondNet |
|---|---|---|---|---|---|---|---|---|
| 20 | 75.38±13.97 | 84.43±10.84 | 78.18±18.09 | 96.80±5.57 | 67.97±7.43 | 81.21±7.53 | 66.00±17.46 | 78.00±14.26 |
| 50 | 79.98±10.44 | 84.02±9.58 | 95.07±7.39 | 97.58±4.13 | 73.49±4.66 | 81.29±6.93 | 56.50±16.93 | 74.50±14.25 |
| 100 | 77.72±15.30 | 85.02±9.00 | 91.30±6.70 | 96.18±4.90 | 78.30±9.00 | 80.70±5.50 | 73.00±14.90 | 76.67±12.90 |
| 150 | 79.88±15.32 | 84.92±10.12 | 91.64±10.22 | 95.16±6.73 | 81.06±7.82 | 80.60±6.32 | 71.50±15.67 | 67.33±17.12 |
| 200 | 81.03±12.17 | 85.66±10.04 | 91.89±10.50 | 96.58±4.73 | 80.11±6.94 | 81.28±6.37 | 74.33±11.83 | 80.83±11.60 |

![26_image_0.png](26_image_0.png)

Figure F.1: Comparing the performance when varying the size of the first hidden layer of an MLP, and GCondNet with an identical MLP backbone. GCondNet consistently enhances performance across different values of K, and it greatly improves stability, yielding comparable outcomes across various network widths.
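As an aside connecting Appendix F.1 back to the method: the mixing-coefficient schedule ablated there is the linear decay from Algorithm 2 (lines 11 and 12). A minimal sketch, assuming nα = 200 as in the main experiments:

```python
import numpy as np

def mixing_coefficient(step, n_alpha=200):
    # Linear decay: alpha = max(0, 1 - step / n_alpha)  (Algorithm 2, line 11)
    return max(0.0, 1.0 - step / n_alpha)

def first_layer_weights(W_gnn, W_scratch, step, n_alpha=200):
    # Convex combination of the GNN-generated weights and the freely trained
    # auxiliary weights (Algorithm 2, line 12).
    alpha = mixing_coefficient(step, n_alpha)
    return alpha * W_gnn + (1.0 - alpha) * W_scratch

W_gnn, W_scratch = np.ones((3, 4)), np.zeros((3, 4))
W_start = first_layer_weights(W_gnn, W_scratch, step=0)    # fully GNN-driven
W_late = first_layer_weights(W_gnn, W_scratch, step=1000)  # scratch weights only
```

After nα steps, W_scratch alone parameterises the first layer, so the GNN effectively acts as a structured prior during the early phase of training.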
## F.3 Ablation On GCondNet's Graph Construction Method And The MLP Weight Initialisation

RandEdge Graphs from Section 3.2. The proposed SRD method creates graphs that contain, on average, 8% of the edges of a fully connected graph (as shown in Appendix B). To create RandEdge graphs with similar statistics, we sample a proportion p of all possible edges in the graph, where p ∼ N(µ = 0.08, σ = 0.03) is sampled for each of the D graphs. We use the same initial node embeddings as in Section 2.1. We sample each graph five times and train each of the 25 models on all graphs, resulting in 125 trained models on RandEdge graphs.

Specialised initialisations from Section 3.2. We first compute W^[1]_MLP using any of the three initialisation methods that we propose below. To mitigate the risk of exploding gradients, we adopt the method proposed by He et al. (2015) to rescale the weights. After computing W^[1]_MLP, we perform zero-centring on each row of W^[1]_MLP and subsequently rescale it to match the standard deviation of the Kaiming initialisation (He et al., 2015). The resulting matrix is used to initialise the first layer of the predictor MLP.

1. **Principal Component Analysis (PCA) initialisation.** We use PCA to compute feature embeddings e^(j)_PCA for all features j. These embeddings are then concatenated horizontally to form the weight matrix of the first layer of the MLP predictor W^[1]_MLP = [e^(1)_PCA, e^(2)_PCA, ..., e^(D)_PCA].

2. **Non-negative matrix factorisation (NMF) initialisation.** NMF has been applied in bioinformatics to cluster gene expression (Kim & Park, 2007; Taslaman & Nilsson, 2012) and identify common cancer mutations (Alexandrov et al., 2013). It approximates X ≈ WH, with the intuition that the column space of W represents "eigengenes", and the column H_:,j represents the coordinates of gene j in the space spanned by the eigengenes. The feature embedding is e^(j)_NMF := H_:,j.
These embeddings are then concatenated horizontally to form the weight matrix of the first layer of the MLP predictor W^[1]_MLP = [e^(1)_NMF, e^(2)_NMF, ..., e^(D)_NMF].

3. **Weisfeiler-Lehman (WL) initialisation.** The WL algorithm (Weisfeiler & Leman, 1968), often used in graph theory, is a method to check whether two given graphs are isomorphic, i.e., identical up to a renaming of the vertices. The algorithm creates a graph embedding of size N. For our use-case, the embeddings must have size K, the size of the first hidden layer of the predictor MLP; thus, we obtain the feature embeddings e^(j)_WL by computing a histogram with K bins of the WL-computed graph embedding, which is then normalised to be a probability density. We apply the WL algorithm to the SRD graphs described in Section 2.1 and finally obtain W^[1]_WL = [e^(1)_WL, e^(2)_WL, ..., e^(D)_WL].

Table F.9: We evaluate the robustness of GCondNet to various graph construction methods and compare it to an identically structured and trained MLP initialised with three weight initialisation methods emulating the inductive biases of GCondNet, but without training a GNN. Here, all versions of MLPs and GCondNet use a fixed dropout p = 0.2. We report the mean ± std of the test balanced accuracy averaged over 25 runs and include statistical tests in Table F.10. The results show that GCondNet is robust to the choice of graphs and consistently outperforms a standard MLP across all graph construction methods. Further, GCondNet often outperforms the other initialisation methods, highlighting the effectiveness of the GNN-extracted latent structure.
| Dataset | MLP | MLP (PCA) | MLP (NMF) | MLP (WL) | GCondNet (RandEdge) | GCondNet (KNN) | GCondNet (SRD) |
|---------------|-------------|-------------|-------------|-------------|-------------|-------------|------------|
| allaml | 91.30±6.74 | 95.49±5.97 | 95.84±5.42 | 95.33±5.81 | 96.39±4.89 | 96.18±4.85 | 96.36±4.70 |
| cll | 78.30±8.99 | 79.92±6.48 | 78.59±6.64 | 79.98±6.59 | 81.36±5.78 | 80.70±5.47 | 81.54±7.15 |
| gli | 77.72±15.33 | 83.82±12.35 | 85.58±9.58 | 83.79±10.77 | 85.94±8.65 | 85.51±8.96 | 86.36±8.05 |
| glioma | 73.00±14.88 | 75.00±15.17 | 74.67±13.55 | 75.50±13.41 | 75.67±12.09 | 76.67±12.90 | 77.50±8.51 |
| lung | 94.20±4.95 | 96.04±4.00 | 95.05±4.06 | 94.56±5.97 | 94.86±4.58 | 94.68±4.25 | 95.34±4.49 |
| meta-dr | 59.56±5.50 | 55.75±8.27 | 59.36±6.84 | 58.69±7.36 | 57.89±8.76 | 59.34±8.93 | 58.24±6.36 |
| meta-p50 | 94.31±5.39 | 94.70±4.93 | 95.09±4.80 | 95.81±5.05 | 95.86±4.25 | 96.37±4.00 | 96.26±3.79 |
| prostate | 88.76±5.55 | 91.04±5.08 | 89.36±6.49 | 89.97±5.94 | 89.56±6.37 | 90.38±5.59 | 89.96±6.14 |
| smk | 64.42±8.44 | 66.79±10.80 | 65.87±7.35 | 64.47±8.17 | 66.13±8.12 | 65.92±8.68 | 68.08±7.31 |
| tcga-survival | 56.28±6.73 | 55.22±7.39 | 60.08±6.18 | 54.79±8.23 | 58.31±7.81 | 58.61±7.01 | 56.36±9.41 |
| tcga-tumor | 48.19±7.75 | 50.95±10.59 | 51.49±9.77 | 49.67±8.86 | 51.57±9.10 | 51.70±8.82 | 52.43±7.57 |
| toxicity | 93.21±6.14 | 92.58±5.46 | 89.11±5.99 | 92.49±5.70 | 95.06±4.17 | 95.22±3.93 | 95.25±4.54 |
| Average rank | 6.08 | 4.42 | 4.33 | 5.17 | 3.17 | 2.75 | 2.08 |

| Model A | Model B | Wilcoxon p-value |
|----------------|---------------------|------------|
| GCondNet (KNN) | MLP | 9.7656e-04 |
| GCondNet (KNN) | MLP (NMF) | 1.0934e-01 |
| GCondNet (KNN) | MLP (PCA) | 3.418e-02 |
| GCondNet (KNN) | MLP (WL) | 4.8828e-04 |
| GCondNet (SRD) | MLP | 3.418e-03 |
| GCondNet (SRD) | MLP (NMF) | 1.2939e-01 |
| GCondNet (SRD) | MLP (PCA) | 1.6113e-02 |
| GCondNet (SRD) | MLP (WL) | 3.418e-03 |
| GCondNet (SRD) | GCondNet (KNN) | 6.8894e-01 |
| GCondNet (SRD) | GCondNet (RandEdge) | 1.2939e-01 |
| GCondNet (KNN) | GCondNet (RandEdge) | 5.9335e-01 |

Table F.10: Statistical analysis of Table F.9. We compare both the GCondNet w/ SRD and KNN variants to each other, to GCondNet with RandEdge graphs, and to the three weight initialisation methods we proposed. We perform statistical testing using the Wilcoxon test (Demšar, 2006; Benavoli et al., 2016). The results support our discussion that GCondNet is robust to the graph construction method, as the performance differences between different graph construction methods are not significant at α = 0.05. Furthermore, we find that both GCondNet variants (SRD and KNN) perform significantly better than the MLP baseline and the PCA and WL initialisations.

## G GCondNet Beyond Small-Sample and High-Dimensional Data

## G.1 Varying the Dataset Size N

We assess the scalability of GCondNet in comparison to its backbone model. For this evaluation, we use an MLP backbone and train a GCondNet with KNN graphs. Both GCondNet and the baseline MLP are trained under the same conditions using the same dataset splits. Setup: Here, we adopt a different evaluation setup from the main paper to facilitate a fair and robust comparison across various dataset splits. Specifically, we examine two datasets from Metabric, which contains 2,000 samples, and two from TCGA, which contains 500 samples.
We reserve 20% of the total data for testing, creating a test set of 400 samples for Metabric and 100 samples for TCGA. We use this fixed test set to evaluate the models across training subsets of varying sizes. We use the remaining 80% of the data to create training sets of different sizes. Specifically, we reduce the training set size by sequentially decreasing the number of samples from the larger preceding subsets without introducing new samples. We conduct five-fold cross-validation for each resulting training subset, training on four folds and using the other fold for validation. Results: In terms of accuracy, Figure G.2 (numerical results in Table G.11) illustrates the test accuracy when increasing the training dataset size N. GCondNet consistently outperforms the MLP on datasets prone to high overfitting, such as 'tcga-survival', 'tcga-tumor', and 'meta-dr', with average performance improvements of 5.7%. On 'meta-pam50', the MLP occasionally outperforms GCondNet; however, the predictive performance on this dataset is already high and nearly saturated. Overall, GCondNet proves effective and can enhance performance beyond very small datasets. In terms of training time, Figure G.3 (numerical results in Table G.11) indicates that GCondNet sometimes exhibits a slightly longer convergence time, primarily due to the complexities involved in training the auxiliary GNN component and generally slower convergence. However, the increase in runtime does not disproportionately escalate with larger datasets. The overhead remains relatively constant across varying dataset sizes, averaging just 20% more training time, which indicates that GCondNet scales effectively without significant size-dependent overhead, while providing performance improvements. ![28_image_0.png](28_image_0.png) Figure G.2: Test accuracy when increasing the training dataset size N, showing the mean±std across four datasets. 
GCondNet outperforms the MLP on datasets prone to high overfitting, such as 'tcga-survival', 'tcga-tumor', and 'meta-dr'. On 'meta-pam50', the MLP sometimes outperforms GCondNet; however, note that the predictive performance on this dataset is already high and nearly saturated. Overall, GCondNet proves effective and can enhance performance beyond very small datasets. ![29_image_0.png](29_image_0.png) Figure G.3: Training time with increasing dataset size, showing the mean±std across four datasets. GCondNet generally requires slightly more time to converge, but the overhead is relatively constant across different dataset sizes, indicating that GCondNet scales well and does not incur significant size-dependent overhead. Table G.11: Comparison of MLP and GCondNet (with an MLP backbone) on datasets of varying sizes (N): We report the mean±std of test balanced accuracy (%) and the average training time over five runs. GCondNet generally improves the baseline MLP's performance, though it introduces some computational overhead which is generally constant independent of the dataset size. 
| Dataset | Dataset Size (N) | Model | Test Accuracy | Runtime (m) |
|---------------|--------------------|------------|-----------------|---------------|
| tcga-survival | 100 | MLP | 50.93±2.86 | 2.1 |
| | | GCondNet | 55.20±5.64 | 2.3 |
| | 200 | MLP | 50.36±3.41 | 2.0 |
| | | GCondNet | 55.02±4.59 | 2.8 |
| | 300 | MLP | 50.41±4.57 | 2.8 |
| | | GCondNet | 57.88±2.42 | 4.0 |
| | 400 | MLP | 55.38±2.50 | 2.9 |
| | | GCondNet | 57.55±9.56 | 4.0 |
| tcga-tumor | 100 | MLP | 42.06±7.96 | 1.9 |
| | | GCondNet | 51.32±4.84 | 3.0 |
| | 200 | MLP | 47.59±7.17 | 3.3 |
| | | GCondNet | 58.29±4.29 | 3.2 |
| | 300 | MLP | 52.33±4.90 | 2.8 |
| | | GCondNet | 57.48±6.06 | 4.7 |
| | 400 | MLP | 56.32±5.21 | 3.1 |
| | | GCondNet | 54.57±4.45 | 4.5 |
| meta-dr | 100 | MLP | 51.07±4.36 | 3.1 |
| | | GCondNet | 59.91±3.37 | 3.7 |
| | 250 | MLP | 54.26±2.44 | 2.8 |
| | | GCondNet | 59.35±1.33 | 3.6 |
| | 500 | MLP | 53.29±4.24 | 2.5 |
| | | GCondNet | 60.73±2.53 | 3.9 |
| | 1000 | MLP | 55.95±1.54 | 3.8 |
| | | GCondNet | 61.20±3.39 | 5.1 |
| | 1500 | MLP | 55.54±3.95 | 4.9 |
| | | GCondNet | 61.18±3.18 | 5.6 |
| meta-pam | 100 | MLP | 89.86±2.75 | 8.0 |
| | | GCondNet | 89.49±2.06 | 6.0 |
| | 250 | MLP | 90.11±1.15 | 9.0 |
| | | GCondNet | 90.86±2.59 | 8.3 |
| | 500 | MLP | 91.33±1.03 | 10.0 |
| | | GCondNet | 90.83±2.12 | 8.2 |
| | 1000 | MLP | 92.17±0.51 | 9.5 |
| | | GCondNet | 91.78±0.97 | 8.7 |
| | 1500 | MLP | 91.93±1.47 | 9.7 |
| | | GCondNet | 92.59±1.65 | 11.4 |

## G.2 Varying the Sample-to-Feature Ratio N/D

Assessing the impact of the sample-to-feature ratio N/D: Table G.12 presents the complete numerical results, while Figure G.4 visualises them, highlighting that GCondNet improves both accuracy and stability across various N/D ratios. To isolate the impacts of GCondNet's inductive bias and regularisation, we compare the performance of GCondNet (with an MLP backbone and KNN graphs) to an identical MLP baseline.
Both models were trained under the same conditions, as presented in the paper. We select four datasets ("allaml", "cll", "gli", "glioma") to examine a broad range of N/D ratios (0.01, 0.05, 0.2, 1, 5), keeping the per-dataset number of samples N constant (presented in Appendix C). We adjust the number of features D accordingly; for instance, a dataset with N = 100 samples had D reduced from 10,000 to 20. Note that this reduction led to a performance loss for both models, which is expected, as reducing the feature set might also remove important features. However, here we are interested in the relative performance of GCondNet w.r.t. the baseline MLP. We took several measures to ensure robust results:

1. We vary the N/D ratio by changing the size of D. In particular, we increase the ratio by reducing the feature subset, where smaller feature subsets are derived by removing features from the preceding larger subsets (e.g., [d1, d2, d3, d4] → [d1, d2, d4] → [d1, d2] → [d2]), ensuring that no new features are introduced during the reduction process.
2. We created feature subsets for each dataset separately, repeating the process five times.
3. Both GCondNet and the MLP were trained using the same feature sets.

For each configuration (encompassing model, dataset, feature subset, and feature-subset repeat) we trained 25 distinct models (via 5-fold cross-validation, repeated 5 times), resulting in 125 runs per entry in our results table. Overall, this setup led to the training and evaluation of 5,000 models in total. Notably, GCondNet demonstrates greater stability and lower performance variability, even with reduced feature sets (Figure G.4). For instance, on the "cll" dataset at N/D ratios of 0.05 and 0.2, GCondNet exhibits smooth performance transitions, contrasting with the fluctuating trends observed in the MLP baseline. These findings underscore the effectiveness of GCondNet's inductive bias across diverse datasets and feature configurations.
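The nested feature-subset construction described above (shrinking subsets by removing features from the preceding larger subset, without ever introducing new features) can be sketched as follows. This is an illustrative sketch, not the authors' code; the function name and the random sampling scheme are our own assumptions.

```python
import numpy as np

def nested_feature_subsets(D, subset_sizes, seed=0):
    """Create nested feature-index subsets: each smaller subset is obtained by
    randomly dropping features from the preceding larger subset, so no new
    features are ever introduced (e.g., [d1, d2, d3, d4] -> [d1, d2, d4])."""
    rng = np.random.default_rng(seed)
    current = np.arange(D)
    subsets = []
    for size in sorted(subset_sizes, reverse=True):
        keep = rng.choice(len(current), size=size, replace=False)
        current = current[np.sort(keep)]  # drop features, preserve original order
        subsets.append(current.copy())
    return subsets
```

With N = 100 held constant, subset sizes of 10,000, 2,000, 500, 100, and 20 features would yield the N/D ratios 0.01, 0.05, 0.2, 1, and 5 examined here.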
Table G.12: Assessing the impact of the sample-to-feature ratio N/D. We present the mean ± std test balanced accuracy (%) of an MLP and a GCondNet with an equivalent MLP backbone, averaged over 125 runs. We adjust the N/D ratio by holding the number of samples N constant and varying the number of features D. The results show that GCondNet's inductive bias predominantly enhances performance across different N/D ratios by up to 10%, except for "glioma" when N/D ∈ {1, 5}, where it slightly reduces accuracy.

| N/D ratio | gli MLP | gli GCondNet | allaml MLP | allaml GCondNet | cll MLP | cll GCondNet | glioma MLP | glioma GCondNet |
|-----------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| 0.01 | 82.30±10.28 | 84.46±8.11 | 91.67±8.94 | 96.61±4.62 | 78.38±7.09 | 81.03±5.75 | 73.13±12.77 | 76.13±12.76 |
| 0.05 | 80.52±12.89 | 83.91±10.48 | 85.28±12.08 | 94.65±6.72 | 72.23±9.50 | 80.76±6.23 | 72.67±14.23 | 75.60±10.20 |
| 0.20 | 78.57±12.56 | 81.77±10.59 | 84.99±11.91 | 90.05±8.79 | 72.95±8.48 | 78.49±6.52 | 69.40±17.58 | 72.57±13.42 |
| 1.00 | 74.14±13.43 | 75.97±13.30 | 74.85±13.85 | 81.22±12.52 | 68.02±7.88 | 68.02±8.66 | 58.37±17.85 | 57.67±17.05 |
| 5.00 | 59.70±13.49 | 64.98±13.51 | 61.89±15.34 | 66.13±14.83 | 56.59±14.15 | 60.93±10.95 | 52.83±16.34 | 49.97±16.12 |

![31_image_0.png](31_image_0.png)

Figure G.4: Assessing the impact of the sample-to-feature ratio N/D. We present the mean ± std test balanced accuracy (%) of an MLP and a GCondNet with an equivalent MLP backbone, averaged over 125 runs. We adjust the N/D ratio by holding the number of samples N constant and varying the number of features D. The results show that GCondNet's inductive bias predominantly enhances performance across different N/D ratios by up to 10%, except for "glioma" when N/D ∈ {1, 5}, where it slightly reduces accuracy.
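Returning to the specialised initialisations of Appendix F.3, the PCA variant can be sketched as follows. This is a minimal sketch, not the authors' code; the function name and the exact rescaling details (global-std matching, a fan-in of D for the Kaiming std) are assumptions based on the description, and PCA is computed here via an SVD of the centred data rather than a library call.

```python
import numpy as np

def pca_init_first_layer(X, hidden_dim):
    """Sketch of the PCA initialisation: column j of the returned
    (hidden_dim, D) weight matrix is the PCA embedding of feature j;
    rows are zero-centred and the matrix is rescaled to the standard
    deviation of a Kaiming-normal initialisation."""
    n, d = X.shape
    k = min(hidden_dim, n, d)
    # PCA via SVD of the centred data: rows of Vt are principal axes,
    # so column j of Vt[:k] is feature j's k-dimensional embedding.
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = np.zeros((hidden_dim, d))
    W[:k] = Vt[:k]
    W -= W.mean(axis=1, keepdims=True)       # zero-centre each row
    kaiming_std = np.sqrt(2.0 / d)           # Kaiming-normal std, fan_in = D
    W *= kaiming_std / (W.std() + 1e-12)     # match the Kaiming std
    return W
```

The NMF and WL initialisations would follow the same pattern, only swapping in a different per-feature embedding before the centring and rescaling steps.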
Review 1: Summary: In this article, the authors propose a new method to deal with high-dimensional datasets using graph neural networks (GNNs). When the number of dimensions, or features, exceeds the number of instances, the authors make use of this large dimensionality to train a GNN architecture that can generate feature-specific vectors on top of nearest-neighbor graphs computed directly from the features. These vectors are then concatenated into a matrix, which itself serves as an initialization of some multilayer perceptron (MLP) architecture. At training time, these MLP parameters are computed as a linear combination of the GNN-based matrix and some standard, learnable parameter matrix. Moreover, the linear combination coefficient corresponding to the GNN-based matrix decreases to zero over iterations, so that the GNN-based matrix is neither optimized nor used after a while. The authors then demonstrate the usefulness of their approach by showing improvement (in terms of accuracy) over standard baselines on several high-dimensional datasets. They also show that such GNN-based vectors can be incorporated into more general architectures (such as transformers), that decaying the linear combination coefficient is indeed more efficient than keeping it fixed, as it helps avoid overfitting, and that GNN-based vectors are more efficient than other types of feature-specific vectors obtained with, e.g., NMF and PCA. Strengths and Weaknesses: Overall, I think this work is solid and interesting. The experiments are convincing and well-designed, with a good discussion of the different hyperparameters. Moreover, the paper is very clear and tackles a frequent and difficult problem in data science, namely the difficulty of handling high-dimensional datasets with neural networks. The weaknesses I identified were only about a few unclear points in the exposition that I would like the authors to address for the final version (see below). Requested Changes: 1.
Section 2.1: the construction of the kNN and SRD graphs is a bit unclear, as the distances used to build the graphs are not specified. I figured that these distances were simply obtained from the absolute differences between the feature values on the instances, but it is not clear from the text, as it looks as though the distances were computed from the graph node features (which are one-hot encoding vectors), which does not make a lot of sense. This part should be more explicit. 2. The claim that decaying alpha reduces overfitting is well supported by Figure 3, but is not shown so clearly in Table 1. I expected that the overfitting reduction would also induce smaller standard deviations on the accuracies, but it seems that this is not the case (e.g., on the glioma dataset). It would be good to add a discussion/explanation of this observation, as the results are currently a bit confusing w.r.t. that aspect. 3. About the results presented in Figure 3, the text mentions a test performance drop of 2% when fixing alpha, but I could not find the numbers in the figure (which only presents train and validation losses), nor in Table 1 (in which alpha is decayed). It would be nice to include the performance values in the text. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper introduces a framework called GCondNet for enhancing neural networks by leveraging implicit relationships in high-dimensional tabular data. It constructs multiple graphs corresponding to each data dimension and utilizes a Graph Neural Network to condition the initial layer parameters of the base predictor model. This strategy improves performance and robustness across various real-world datasets, particularly in scenarios with limited samples but numerous features. The approach is generic, not requiring external knowledge graphs, which broadens its applicability across different domains. Strengths and Weaknesses: [Strengths] 1.
The novel use of implicit sample-wise relationships and graph-based regularization provides a clear improvement over existing methods. 2. The method’s effectiveness is well-demonstrated with extensive experiments on multiple real-world datasets, showing significant gains in performance. 3. The paper is clearly written, with a thorough explanation of the methodological innovations and the underlying theoretical considerations. [Weaknesses] 1. The training time for GCondNet is longer than for benchmark methods, which could limit its practicality in some scenarios. 2. The definitions of “small” and “large” in terms of sample size and feature number are vague and need clarification to better assess the method’s applicability. Requested Changes: 1. The claim at the end of the first paragraph in Section 2.1, stating that graph construction for each task takes five seconds, appears to be based on a specific experimental setup not yet introduced. Please clarify whether this timing is general or pertains to a particular dataset described later, and if so, detail this setup earlier to avoid confusion. 2. The paper notes increased training times for GCondNet compared to benchmark methods. Please explain the reasons for this slower performance. Additionally, it would be helpful to discuss how the method scales with larger datasets—does the increase in the number of samples or features exacerbate the training time disproportionately? 3. Given the method’s outlined suitability for datasets with many features but fewer samples, could you provide examples or scenarios where GCondNet underperforms? Specifically, insights into its performance on datasets with a large number of samples or fewer features would be valuable for understanding its limitations. 4. The determination of the parameter $K$ in the graph neural network setup remains unclear. Could you elaborate on how $K$ is chosen and discuss whether the model’s performance is sensitive to variations in $K$? 
This information would aid in assessing the robustness and applicability of GCondNet across different settings. Broader Impact Concerns: NA ================================================== Review 3: Summary: The paper presents GCondNet, an approach for using implicit structure in tabular data to improve neural network training. This proposed approach is designed for scenarios where a small amount of tabular data is available with a large number of features. The key idea is to introduce an inductive bias into the model's training by ensuring that similar features have similar coefficients early in training. Many small graphs are built initially, and the similarity information from the features flows through a GNN to the model being trained. The evaluation shows that this approach is quite effective on several benchmarks compared to a large number of baselines. Strengths and Weaknesses: Strengths: 1. The paper is well-written and the presented ideas are easy to follow. 2. The idea presented in the paper is quite novel and shown to be effective empirically. 3. The evaluation is extensive and shows improvements over several baselines. The study on inductive bias compared to other initialization methods such as NMF and PCA is quite interesting. The proposed approach is shown to be effective when used in combination with multiple configurations (KNN and SRD graphs, Transformers, etc.). Questions: 1. I'm curious to see at what value of N/D the method will become less effective. A study increasing the value of N/D until GCondNet is no longer useful would be quite interesting. 2. Training time for GCondNet is longer than for the other baselines considered. Would training the existing baselines for longer, or baselines with a larger number of parameters, be better in practice if given equal training time? Requested Changes: Please see questions above.
Broader Impact Concerns: NA ================================================== Metareview: Recommendation: Accept with minor revision Comment: Both the reviewers and the editors agree that the paper is interesting and relevant. The only request I have is that the authors carefully re-read their text, correct a few lingering typos, and more carefully edit the text added in response to the reviewers' comments. ==================================================
# How To Reuse And Compose Knowledge For A Lifetime Of Tasks: A Survey On Continual Learning And Functional Composition Jorge A. Mendez *jmendez@csail.mit.edu* Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Eric Eaton eeaton@seas.upenn.edu Department of Computer and Information Science University of Pennsylvania Reviewed on OpenReview: *https: // openreview. net/ forum? id= VynY6Bk03b* ## Abstract A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world. Such an agent would require the ability to continually accumulate and build upon its knowledge as it encounters new experiences. Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters. If the agent is capable of accumulating knowledge in some form of compositional representation, it could then selectively reuse and combine relevant pieces of knowledge to construct novel solutions. Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely separately. In an effort to promote developments that bridge between the two fields, this article surveys their respective research landscapes and discusses existing and future connections between them. ## 1 Introduction Consider the standard supervised machine learning (ML) setting. The learning agent receives a large labeled data set, and processes this entire data set with the goal of making predictions on data that was not seen during training. The central assumption for this paradigm is that the data used for training and the unseen future data are independent and identically distributed (*i.i.d.*). 
For example, a service robot that has learned a vision model for recognizing plates in a kitchen will continue to predict plates in the same kitchen. As AI systems become more ubiquitous, this *i.i.d.* assumption becomes impractical. The robot might move to a new kitchen with different plates, or it might need to recognize cutlery at a later time. If the underlying data distribution changes at any point in time, like in the robot example, then the model constructed by the learner becomes invalid, and traditional ML would require collecting a new large data set for the agent to learn to model the updated distribution. In contrast, a lifelong learning robot would leverage accumulated knowledge from having learned to detect plates and adapt it to the novel scenario with little data. The lifelong learning problem is therefore that of learning under a nonstationary data distribution, making use of past knowledge when adapting to the updated distribution. One formalism for modeling the nonstationarity, which most existing works have adopted, is that of learning a sequence of distinct tasks—whether this formalism is adequate is not the focus of this survey and is left for a separate discussion. In the robot example, the tasks could be detecting plates in the first kitchen, detecting plates in the second kitchen, and detecting cutlery. There are three main interdependent objectives of a lifelong learner: - **Performing well on new tasks.** As discussed above, one requirement that a lifelong learner should satisfy is to accelerate the learning of future tasks by leveraging knowledge of past tasks. This ability, often denoted *forward transfer*, requires the agent to discover knowledge that is reusable in the future without knowing what that future looks like. 
- **Performing well on recurrences of previous tasks.** A second requirement, which is typically in tension with the first, is that the agent should be capable of performing tasks seen in the past even long after having learned them. This requires the agent to maintain its previous knowledge, including retiring outdated knowledge as needed. For example, the robot might move back to the first kitchen after adapting to the second, and it should still be able to recognize plates. This objective necessitates that the agent avoids catastrophic forgetting (McCloskey and Cohen, 1989), but also that, whenever possible, it achieves *backward transfer* to those earlier tasks (Ruvolo and Eaton, 2013; Lopez-Paz and Ranzato, 2017). This is possible whenever knowledge from future tasks is useful for learning better models for older tasks. While most work views avoiding forgetting as a means solely for maintaining high performance on earlier tasks, in many cases it also permits better forward transfer, by enabling the agent to retain more general knowledge that works for a large set of tasks. - **Supporting tractable long-term retention.** In addition to these two desiderata, the growth of the agent's memory use should be constrained over time. This choice is practical: if the agent requires storing large models or data sets for all past tasks in memory, then it would become impractical to handle very long task sequences. Note that, while it might be cheap to store huge amounts of data in memory, making use of such massive stored information would often incur an equivalently large computational cost, which is infeasible for many applications. All these desiderata can be summarized as discovering knowledge that is *reusable*: reusable knowledge can be applied to both future and past tasks without uncontrolled growth. 
Beyond lifelong learning, the autonomous discovery of reusable knowledge has motivated work in transfer learning, multitask learning (MTL), and meta-learning—all of which deal with learning diverse tasks. These fields have received tremendous attention in the past decade, leading to a large body of literature spanning supervised learning, unsupervised learning, and reinforcement learning (RL). Traditionally, methods for solving this problem have failed to capture the intuition that, in order for knowledge to be maximally reusable, it must capture a self-contained unit that can be *composed* with similar pieces of knowledge. This compositionality refers to the ability of an agent to tackle parts of each problem individually, and then reuse the solution to each of the subproblems in combination with others to solve multiple bigger problems that contain shared parts. For example, a service robot that has learned to both search-and-retrieve objects and navigate across a university building should be able to quickly learn to put these together to deliver a stapler to Jorge's office. Instead, typical methods make assumptions about the way in which different tasks are related, and impose a structure that dictates the knowledge to share across tasks, usually in the form of abstract representations that are not explicitly compositional. Yet, compositionality is a promising notion for achieving the three lifelong learning requirements listed above. Knowledge that is compositional can be used in future tasks by combining it in novel ways—this enables forward transfer. Further, not all knowledge must be updated upon learning new tasks to account for them, but only the components of knowledge that are used for solving these new tasks—this prevents catastrophic forgetting of any unused component and could enable backward transfer to tasks that reuse shared components. 
Finally, compositional knowledge permits solving combinatorially many tasks by combining that knowledge in different ways; conversely, solving a fixed number of tasks requires logarithmically many knowledge components, thereby inhibiting the agent's memory growth. This article focuses on a form of functional composition, exemplified in Figure 1, whereby each module or component processes an input and produces an output to be consumed by a subsequent module. The process is analogous to programming, where functions specialize to solve individual subproblems and combine to solve complex problems. Recently, an increasing number of works have focused on the problem of learning compositional chunks of knowledge to share across different tasks (Andreas et al., 2016; Hu et al., 2017; Kirsch et al., 2018; Meyerson and Miikkulainen, 2018). At a high level, these methods aim to simultaneously discover *what* are the pieces of knowledge to reuse across tasks and *how* to compose them for solving each individual task.

![2_image_0.png](2_image_0.png)

Figure 1: Example of the functional composition studied in this article. The model decomposes the task "locate youngest dog" into simpler functions that are combined into an overall solution.

Most studies in this field have made one of two possible assumptions. The first assumption is that the agent has access to a large batch of tasks to learn simultaneously in an MTL setting. This way, the agent can attempt numerous combinations of possible components and explore how useful they are for solving all tasks jointly.
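The programming analogy can be made concrete with a short sketch, mirroring the "locate youngest dog" example of Figure 1. This is illustrative only and not from any surveyed work; the modules and the toy scene representation are hypothetical.

```python
# Minimal sketch of functional composition: small modules, each solving a
# subproblem, are chained so that one module's output feeds the next.

def compose(*modules):
    """Chain modules left to right into a single function."""
    def composed(x):
        for module in modules:
            x = module(x)
        return x
    return composed

# Hypothetical modules over a toy scene of detected objects.
def detect_dogs(scene):
    return [obj for obj in scene if obj["kind"] == "dog"]

def youngest(objects):
    return min(objects, key=lambda obj: obj["age"])

def locate(obj):
    return obj["position"]

# The same modules could be recombined for other tasks,
# e.g., compose(detect_dogs, youngest) for "identify youngest dog".
locate_youngest_dog = compose(detect_dogs, youngest, locate)
```

The appeal for lifelong learning is that each module can be reused verbatim in new compositions, so learning a new task may only require learning (or rewiring) a few modules.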
While this assumption simplifies the problem by removing the forward and backward transfer requirements, it is unfortunately unrealistic: AI systems in the real world will not have access to batches of simultaneous tasks, but will instead face them in sequence in a lifelong setting (although an agent may face several learning tasks at once, e.g., vision, language, and decision-making modalities from a single data stream, leading to a form of mini-batch lifelong setting). The second assumption is that the agent does face tasks sequentially, but that it is capable of learning components on a single task that are reusable for solving many future tasks. This latter assumption, albeit more realistic, relies on the ability to find optimal and reusable components from solving a single task, which is not generally possible given the limited data available for the task and the uncertainty about future components and how the knowledge must be compatible with them.

## 1.1 Article Overview

This article reviews the existing literature with an aim to provide context for future works in the nascent field of lifelong compositionality. The survey is primarily a separate discussion of two topics that are closely related, yet previously disjointly studied: 1) lifelong or continual learning and 2) compositional knowledge representations. Research into lifelong learning seeks to endow agents with the capability to accumulate knowledge over a nonstationary stream of data, typically presented to the agent in the form of tasks. In principle, if the tasks are related in some way, the agent should be able to detect and extract the commonalities across the tasks in order to leverage shared knowledge and improve its overall performance. On the other hand, the goal of learning compositional knowledge representations is to decompose complex problems into simpler subproblems, such that the solutions to the easier subproblems can be combined to solve the original, harder problem.
This formulation makes compositional representations an appealing mechanism for learning the relations across multiple tasks: by discovering subproblems that are common to many tasks, the learner could reuse the solutions to these subproblems as modules that compose in different combinations to solve the many tasks. Despite their intuitive appeal, few works have explicitly used compositional representations as a means for transferring knowledge across a lifelong sequence of tasks. The discussion further divides works into those carried out in the supervised and the RL settings. While many of the techniques used for one are applicable to the other (albeit with minor-to-major adaptations), research into these two fields has proceeded mostly separately, with the vast majority of works focusing on the supervised setting. In particular, the form of functional composition studied in this article had been almost entirely overlooked in the RL literature until very recently.

| | Supervised/Unsupervised | Reinforcement Learning |
|--------------------------------------------------------|-----------------------------------|-----------------------------------------|
| 1. Introduction | | |
| 2. Problem formulation | | |
| 3. Categorization of lifelong compositional approaches | | |
| Non-compositional | 4. Lifelong or continual learning | 6. Lifelong reinforcement learning |
| Compositional | 5. Compositional knowledge | 7. Compositional reinforcement learning |
| 8. Summary and directions for future work | | |

Figure 2: Organization of this article. The discussion ties works together by categorizing them along six axes (detailed in Section 3). The first axis divides works according to the *learning setting*: lifelong learning, MTL, and single-task learning (STL).
The second axis analyzes each approach according to whether the environment provides the *structure* of the task to the agent and how: this refers to the compositional structure in compositional works, and to the relations among tasks in more general lifelong settings. The third axis dissects works in terms of the underlying *learning paradigm* in which the agent is evaluated: supervised, unsupervised, or RL. The fourth axis separates approaches according to the *type of compositionality* they study: functional composition, temporal composition, or no composition. The fifth axis classifies works with respect to how they structurally combine components: via chaining, aggregation, or a more general graph. The sixth axis divides works in terms of the *application domain* they consider. Multiple recent surveys have reviewed aspects of the continual learning problem, with special focus on the supervised setting (Parisi et al., 2019; De Lange et al., 2022). Closer to this article, Hadsell et al. (2020) included a section on modular architectures, and Khetarpal et al. (2020) reviewed existing works and open directions in lifelong RL. Other surveys have focused on the use of lifelong learning for specific applications, such as vision (Qu et al., 2021), language (Biesialska et al., 2020), and robotics (Lesort et al., 2019). In a similar spirit to this survey, Mundt et al. (2022) presented the CLEVA-Compass, a visual representation for analyzing lifelong learning approaches according to a variety of axes. Unlike the axes in this work, which focus on properties of the methods, the axes of the CLEVA-Compass measure properties of the evaluation protocols applied to those methods. This survey emphasizes peer-reviewed work from ML conferences, focusing on the years from 2017 onward, with context from earlier literature and non-peer-reviewed work where appropriate. 
Compositionality has been examined in many other fields, including vision, language, and broader AI, with many overlapping ideas, but this review focuses on the (lifelong) ML perspective. The remainder of this article is organized as follows, as depicted graphically in Figure 2. Section 2 formulates the problems of lifelong learning and compositional learning, providing examples of the types of tasks that each formulation accepts. Section 3 details the categorization of approaches used throughout the article and includes a visual depiction of the research landscape along the six axes. Section 4 surveys the literature on lifelong learning focused on the supervised case, dividing works into task-agnostic and task-aware, and expanding on techniques based on regularization, replay, generative replay, and capacity expansion. Section 5 discusses existing works that learn to functionally decompose supervised learning problems, placing particular emphasis on approaches that rely on modular neural architectures; this section also includes a summary of the few existing compositional works that have operated in the lifelong setting. Section 6 reviews methods for lifelong RL, which remains to date a substantially underdeveloped field. Section 7 discusses the large variety of forms of compositionality that have been proposed in RL, where notably only a handful of techniques functionally decompose learning problems. Finally, Section 8 summarizes the findings of this article and proposes high-impact avenues for future work to investigate. ## 2 Problem Formulation Despite their intuitive connections, lifelong learning and compositional learning have largely proceeded as disjoint lines of work. This section describes the concrete problem formulations for both lifelong learning and compositional learning as studied in this article. 
At a high level, lifelong learning is the problem of accumulating knowledge over time and reusing it to solve related tasks, while compositional learning is the problem of decomposing knowledge into maximally reusable components.

## 2.1 The Lifelong Learning Problem

At the highest level, lifelong learning involves learning over a nonstationary (non-*i.i.d.*) and potentially never-ending stream of data. From this high-level definition, different works have proposed multiple concrete instantiations of the problem. This section dissects common problem formulations in the literature, describes some of the advantages and disadvantages of each formulation, and provides example problems that can be captured under the most widely used definition. Van de Ven and Tolias (2019) categorized lifelong learning problem definitions in terms of how nonstationarity is presented to the agent, proposing the following three variations:

- **Task-incremental learning.** This is the most common problem definition, which introduces nonstationarity into the learning problem in the form of *tasks*. Each task $\mathcal{Z}^{(t)}$ is itself a standard i.i.d. learning problem, with its own input space $\mathcal{X}^{(t)}$ and output space $\mathcal{Y}^{(t)}$. There exists a ground-truth mapping $f^{(t)}\colon \mathcal{X}^{(t)} \mapsto \mathcal{Y}^{(t)}$ that defines the individual task, as well as a cost function $\mathcal{L}^{(t)}\big(\hat{f}^{(t)}\big)$ that measures how well a learned $\hat{f}^{(t)}$ matches the true $f^{(t)}$ under the task's data distribution $\mathcal{D}^{(t)}\big(\mathcal{X}^{(t)}, \mathcal{Y}^{(t)}\big)$. During the learning process, the agent faces a sequence of tasks $\mathcal{Z}^{(1)}, \ldots, \mathcal{Z}^{(t)}, \ldots$. The learner receives a data set $\mathbf{X}^{(t)}, \mathbf{Y}^{(t)} \sim \mathcal{D}^{(t)}\big(\mathcal{X}^{(t)}, \mathcal{Y}^{(t)}\big)$ along with a task indicator $t$ that identifies the current task, but not how it relates to other tasks. Upon facing the $t$-th task, the goal of the learner is to solve (an online approximation of) the MTL objective:
$$z_{t}=\frac{1}{t}\sum_{\hat{t}=1}^{t}\mathcal{L}^{(\hat{t})}\left(\hat{f}_{t}^{(\hat{t})}\right)\ ,\tag{1}$$
where $\hat{f}_{t}^{(\hat{t})}$ is the predictor for task $\mathcal{Z}^{(\hat{t})}$ at time $t$. 
Note that this definition assumes that the predictors for earlier tasks are affected by the learning of subsequent tasks (as indicated by the subscript $t$), but makes no explicit assumptions about how the predictors are related (e.g., a single shared model or a shared representation with task-specific predictors on top of the representation).

- **Domain-incremental learning.** The key distinguishing factor of this setting is that the learning problem does not inform the agent of the task indicator $t$. Instead, the tasks vary only in their input distribution $\mathcal{D}^{(t)}\big(\mathcal{X}^{(t)}\big)$, but there exists a single common solution that solves all tasks. For example, the problem could be a binary classification problem between "cat" and "dog", and each different task could be a variation in the input domain (e.g., changing light conditions or camera resolutions). The goal of the learner is still to optimize the approximate MTL objective of Equation 1.

- **Class-incremental learning.** In this setting, there is a single multiclass classification task with a large number of classes (i.e., $\mathcal{Y} = \{1, \ldots, C\}$ for some large $C$). The agent observes classes sequentially, and must be able to predict the correct class among all previously seen classes. For example, the task could be ImageNet (Deng et al., 2009) classification, and classes could be presented to the agent ten at a time, with later stages not containing previous classes in the *training* data, but indeed requiring accurate prediction across all seen classes in the *test* data. One alternative way to define this same problem is that each learning stage (i.e., each group of classes) is a distinct task, and the goal of the agent is to simultaneously predict the current task indicator $t$ and the current class within that task $\mathcal{Z}^{(t)}$. The latter equivalent formulation, though less intuitive, enables framing the learning objective in exactly the same way as the previous two problem settings, per Equation 1. 
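To make Equation 1 concrete, the following sketch evaluates the online MTL objective on toy stand-in tasks. The linear-regression tasks, the mean-squared-error loss, and the single shared parameter vector are all illustrative assumptions, not constructs from the surveyed works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in tasks: each task is a small linear-regression problem.
tasks = []
for _ in range(3):
    X = rng.normal(size=(20, 4))
    w_true = rng.normal(size=4)
    tasks.append((X, X @ w_true))

def task_loss(theta, task):
    # L^(t): mean squared error of the (shared) predictor on one task.
    X, y = task
    return float(np.mean((X @ theta - y) ** 2))

def online_mtl_objective(theta, tasks_seen):
    # Equation 1: average of the per-task losses over all tasks seen so far,
    # here with a single shared parameter vector theta.
    return float(np.mean([task_loss(theta, z) for z in tasks_seen]))

theta = np.zeros(4)
print(online_mtl_objective(theta, tasks[:2]))  # objective after facing two tasks
```

A lifelong learner would update `theta` after each new task while trying to keep this average low; the sketch only evaluates the objective.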
Most existing works have considered the task-incremental setting. Concretely, a vector $\mathbf{\theta}^{(t)}$ parameterizes each task's solution, such that $\hat{f}^{(t)} = f_{\mathbf{\theta}^{(t)}}$. After training on $T$ tasks, the goal of the lifelong learner is to find parameters $\mathbf{\theta}^{(1)}, \ldots, \mathbf{\theta}^{(T)}$ that minimize the cost across all tasks: $\frac{1}{T}\sum_{t=1}^{T}\mathcal{L}^{(t)}\big(\hat{f}^{(t)}\big)$. The agent does not know the total number of tasks, the order in which tasks will arrive, or how tasks are related to each other. Given a limited amount of data for each new task, typically insufficient for obtaining optimal performance in isolation, the agent must strive to discover any relevant information to 1) relate it to previously stored knowledge in order to permit transfer and 2) store any new knowledge for future reuse. The environment may require the agent to perform any previous task, implying that it must perform well on all known tasks at any time. In consequence, the agent must strive to retain knowledge from even the earliest learned tasks. While prior work in supervised learning has described this task-incremental setting as artificial, it contains some desirable properties that are missing from other common definitions. First, unlike in domain-incremental learning, it is not necessary that the inputs are the only aspect that changes over time. This is useful when extending these definitions to the RL setting, where different tasks naturally correspond to different reward functions or transition dynamics. Second, unlike in class-incremental learning, extension to RL is straightforward, since the learning objective can still easily be averaged across clearly differentiated tasks. All the above definitions assume that the agent is required to perform well on all previously seen tasks, which is often not realistic. For example, consider a service robot that has provided assistance for a long time in a small one-floor apartment, and is later moved to a much larger two-floor home. 
While knowledge from the small apartment may be useful for quickly adapting to the larger home, over time retaining full knowledge about how to traverse the small apartment might become counterproductive as that information becomes obsolete. Therefore, a handful of works have developed techniques for nonstationary lifelong learning, where not only the *data* distribution changes from task to task, but also the distribution over *tasks* itself changes from time to time (like in the small-apartment-to-large-home example). In this setting, the objective must change, since it is no longer desirable to retain performance on tasks from outdated distributions. However, works in this line fall outside of the scope of this survey and are left for a separate discussion. ## 2.1.1 Evaluation Settings And Performance Metrics Due to this survey's focus on the intersection of lifelong learning and functional composition, we only briefly touch on evaluation metrics and methodologies specific to lifelong learning. For a more detailed discussion, see Section 3.5 in the work of Parisi et al. (2019). In the task-incremental supervised setting, the standard way to craft the different tasks for benchmarking purposes originates from the class-incremental setting: split each data set into multiple smaller tasks, each containing a subset of the classes. As an example, CIFAR-100 (Krizhevsky and Hinton, 2009), a visual recognition data set of 100 classes corresponding to various object types, is commonly split randomly into 20 individual 5-way classification tasks to evaluate lifelong agents. However, to show that this is not the only possible setting that the proposed methods can handle, some evaluations have also executed experiments in a more complex setting with tasks from various distinct data sets. Most recently, Bornschein et al. (2022) released such a multi-data-set benchmark with 100 tasks sampled from 30 years of computer vision research. 
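The standard split of CIFAR-100 into 20 random 5-way tasks can be sketched as follows. A synthetic label array stands in for the real data set here, so the sketch shows only the splitting logic, not the data loading:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 100, size=5000)  # stand-in for CIFAR-100 class labels

# Randomly partition the 100 classes into 20 disjoint 5-way tasks.
classes = rng.permutation(100)
task_classes = classes.reshape(20, 5)

tasks = []
for t in range(20):
    idx = np.flatnonzero(np.isin(labels, task_classes[t]))
    # Remap the 5 original class ids to within-task labels 0..4.
    remap = {c: j for j, c in enumerate(task_classes[t])}
    y_t = np.array([remap[c] for c in labels[idx]])
    tasks.append((idx, y_t))

# Every example is assigned to exactly one task.
assert sum(len(i) for i, _ in tasks) == len(labels)
```

Each `(idx, y_t)` pair would then be presented to the lifelong agent as one 5-way classification task, in sequence.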
Typical measures used to evaluate the performance of lifelong learning methods include, among others, overall accuracy, forward transfer, backward transfer, and jumpstart performance. Note that, while the benchmarks used in the lifelong learning literature likely contain some level of implicit composition, they do not permit explicit analysis of compositionality. In consequence, researchers interested in studying the joint problem posed in this article should likely consider carrying out evaluations both on standard lifelong learning benchmarks, for adequate placement in the literature, and on explicitly compositional benchmarks, for rigorous analysis of the compositionality of their approaches. In the case of RL, most approaches are evaluated on custom benchmarks. Section 7.1 summarizes existing works seeking to standardize evaluation practices for multitask and lifelong RL, with a focus on compositionality.

## 2.2 The Compositional Learning Problem

Section 2.1 described the lifelong learning problem in terms of how nonstationarity is presented to the agent in the form of a sequence of tasks. However, it does not provide insight into how the different tasks might be related to each other. In particular, this article focuses on tasks that are *compositionally* related and methods that explicitly exploit these compositional assumptions.

Figure 3: Compositional problem graphs. Each node in the graph represents a random variable for a representational space, produced by the output of a module or function. STL agents assume that tasks are unrelated and learn modules in isolation, while monolithic MTL agents assume that all tasks share a single module. In contrast, more general compositional MTL agents assume that tasks selectively share a set of modules, yielding different solutions to each task constructed from common solutions to subtasks.

Following the problem formulation from Chang et al. 
(2019), compositional methods assume (either implicitly or explicitly) that each task can be decomposed into subtasks. Equivalently, the predictive function $f^{(t)}$ characterizing each task can be decomposed into multiple subfunctions $F_{1}^{(t)}, F_{2}^{(t)}, \ldots$, such that $f^{(t)}\big(\mathbf{x}^{(t)}\big) = \big(F_{1}^{(t)} \circ F_{2}^{(t)} \circ \cdots\big)\big(\mathbf{x}^{(t)}\big)$. This assumption trivially holds for any function $f^{(t)}$. Critically, the formulation further assumes that there exists a set of $k$ subfunctions that are common to all tasks the agent might encounter: $F_{i}^{(t)} \in \{F_{1}, \ldots, F_{k}\}\ \forall t, i$, such that there is potential for compositional reuse across tasks. This way, the full learning problem can be characterized by a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with two types of nodes. The first type represents the inputs and outputs of each task as random variables. Concretely, each task has an input node $u^{(t)}$ with in-degree zero and an output node $v^{(t)}$ with out-degree zero such that $u^{(t)}, v^{(t)} \sim \mathcal{D}^{(t)}\big(\mathcal{X}^{(t)}, \mathcal{Y}^{(t)}\big)$. The second type of nodes $F$ represents functional transformations such that:

1. for every edge $\langle u, F \rangle$, the function $F$ takes as input the random variable $u$,
2. for every edge $\langle F, F' \rangle$, the output of $F$ feeds into $F'$, and
3. for every edge $\langle F, v \rangle$, $v$ is the output of $F$.

With this definition, the paths in the graph from $u^{(t)}$ to $v^{(t)}$ represent all possible solutions to task $\mathcal{Z}^{(t)}$ given a set of functional nodes. This formalism also has an equivalent generative formulation. In particular, a compositional function graph $\mathcal{G}$ generates a task $\mathcal{Z}^{(t)}$ by choosing one input node $u^{(t)}$ and a path $p^{(t)}$ through the graph to some node $v^{(t)}$. Then, the following two steps define the generative distribution for task $\mathcal{Z}^{(t)}$. First, instantiate the random variable $u^{(t)}$ by sampling from the input distribution $u^{(t)} = \mathbf{x}^{(t)} \sim \mathcal{D}^{(t)}$. Next, generate the corresponding label $\mathbf{y}^{(t)}$ by compositionally applying all functions in the chosen path $p^{(t)}$ to the sampled $u^{(t)}$. As noted by Chang et al. 
(2019), there are generally multiple possible compositional solutions to each task. One additional reasonable assumption is that the generative problem graph is the one with the minimum number of possible nodes, such that nodes (i.e., subtasks) are maximally shared across different tasks. This choice intuitively implies the maximum amount of possible knowledge transfer across tasks. Figure 3 shows three different assumptions that learning algorithms make over the space of tasks. The left-most graph (Figure 3a) shows the standard STL assumption: each task $\mathcal{Z}^{(t)}$ is completely independent from the others, and therefore the agent learns the predictive functions $\hat{f}^{(t)}$ in isolation. Note that this does not explicitly prohibit learning compositional solutions: each $\hat{f}^{(t)}$ could itself be decomposed into multiple subtasks, but the subtasks would still be individual to each task. The center graph (Figure 3b) shows the typical monolithic MTL assumption: all different tasks can be solved with a single common solution. The right-most graph (Figure 3c) shows the assumption made by compositional approaches: each task can be decomposed into a task-specific sequence of subtasks, but the set of possible subtasks is common to all tasks. As a first example that matches the latter formulation, consider the following set of tasks:

- $\mathcal{Z}^{(1)}$: count the number of cats in an image
- $\mathcal{Z}^{(2)}$: locate the largest cat in an image
- $\mathcal{Z}^{(3)}$: locate the largest dog in an image
- $\mathcal{Z}^{(4)}$: count the number of dogs in an image

These tasks can be decomposed into: detect cats, detect dogs, locate largest, and count. If an agent learns tasks $\mathcal{Z}^{(1)}$, $\mathcal{Z}^{(2)}$, and $\mathcal{Z}^{(3)}$, and along the way discovers generalizable solutions to each of the four subtasks, then solving $\mathcal{Z}^{(4)}$ would simply involve reusing the solutions to the dog detector and the general counter. 
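This reuse argument can be made concrete with a toy sketch. Here an "image" is just a list of `(kind, size)` tuples, and the four subtask modules are hypothetical stand-ins for learned components:

```python
# Hypothetical learned modules (shared across tasks).
def detect_cats(image):
    return [o for o in image if o[0] == "cat"]

def detect_dogs(image):
    return [o for o in image if o[0] == "dog"]

def count(objects):
    return len(objects)

def locate_largest(objects):
    return max(objects, key=lambda o: o[1])

# Tasks as compositions of the shared modules (f = F1 . F2):
z1 = lambda img: count(detect_cats(img))           # count cats
z2 = lambda img: locate_largest(detect_cats(img))  # largest cat
z3 = lambda img: locate_largest(detect_dogs(img))  # largest dog
z4 = lambda img: count(detect_dogs(img))           # count dogs: pure reuse

image = [("cat", 3), ("dog", 5), ("dog", 2)]
print(z4(image))  # -> 2
```

Task `z4` requires no new learning: it only recombines the dog detector (from `z3`) with the counter (from `z1`).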
Consider another example from the language domain, with tasks given by:

- $\mathcal{Z}^{(1)}$: translate text from English to Spanish
- $\mathcal{Z}^{(2)}$: translate text from Spanish to Italian
- $\mathcal{Z}^{(3)}$: translate text from English to Italian

As above, if the learner has learned to solve tasks $\mathcal{Z}^{(1)}$ and $\mathcal{Z}^{(2)}$, it could solve task $\mathcal{Z}^{(3)}$ by first translating the text from English to Spanish and subsequently translating the resulting text from Spanish to Italian. Here, Spanish would act as a *pivot language* (Boitet, 1988). This definition can be applied to RL problems as well. Consider the following task components in a robotic manipulation setting:

- Robot manipulator: diverse robotic arms with different dynamics and kinematic configurations can be used to solve each task.
- Objective: each task might have a different objective, like placing an object on a shelf or throwing it in the trash.
- Obstacle: various obstacles may impede the robot's actions, such as a door frame the robot needs to go through or a wall the robot needs to circumvent.
- Object: different objects require different grasping strategies.

One way to solve each robot-objective-obstacle-object task is to decompose it into subtasks: detect the grasping points of the object, detect regions unobstructed by the obstacle, plan a trajectory for reaching the objective, and drive the robot's joints. Note that this is not equivalent to the temporal composition of skills or options. Instead, each time step requires solving all subtasks simultaneously (e.g., the actions must be tailored to the current robot arm at all times). Mendez et al. (2022b) provide a precise description of the problem formulation for the RL case, where the (more general) subproblems may involve sensing, acting, or a combination of both. Section 7 further discusses the two formulations. 
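The pivot-language composition above can be sketched with toy word-for-word lookup tables standing in for learned translation modules (purely illustrative; real translators are not word-for-word):

```python
# Toy word-level "translators" as stand-ins for learned modules.
EN_ES = {"hello": "hola", "world": "mundo"}
ES_IT = {"hola": "ciao", "mundo": "mondo"}

def make_translator(table):
    return lambda text: " ".join(table[w] for w in text.split())

translate_en_es = make_translator(EN_ES)  # learned while solving Z(1)
translate_es_it = make_translator(ES_IT)  # learned while solving Z(2)

# Z(3) (English -> Italian) solved zero-shot by chaining through the pivot:
translate_en_it = lambda text: translate_es_it(translate_en_es(text))
print(translate_en_it("hello world"))  # -> "ciao mondo"
```

The new task is solved entirely by recombining the two existing modules, with Spanish as the intermediate representation.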
There is no consensus in the research community for a set of evaluation procedures or measures to assess the quality of the compositional solutions discovered by existing methods, in part due to the lack of precise definitions of compositionality. Sections 5 and 7 discuss the ways existing works have evaluated approaches in the supervised and the RL settings, respectively, as well as methods developed specifically to understand the compositional qualities of these approaches' solutions. ## 3 Categorization Of Lifelong Compositional Approaches This section describes in detail the six axes used to categorize existing methods. By design, this categorization bridges between lifelong learning approaches and compositional approaches, laying the foundations for the field to explore new techniques at the intersection of the two. 1. **Multiplicity of tasks.** *Single-task* methods train models individually on a single task, without any notion of shared knowledge or transfer. While this is the focus of most ML research, it is not the focus of this survey. However, many compositional methods, particularly in RL, seek to simultaneously extract the compositional structure and learn a policy within a single task. The focus of this survey is on methods that share information across multiple tasks. In particular, *multitask* approaches train on multiple tasks simultaneously, while *lifelong* methods train on tasks one after another in sequence. Very few approaches have trained compositional models in a lifelong setting. Note that this survey categorizes approaches as lifelong methods if they face tasks in sequence regardless of the underlying mechanism they use for training (e.g., maintaining separate models for each task with some shared parameters). 
In addition, the categorization of single-task and multitask approaches is not always clear, since some methods might treat solving multiple tasks as a single-task generalization problem; therefore, we mainly follow the language used in each paper to guide this classification. 2. **Knowledge about task structure.** Methods that assume that the *structure is explicitly given* to the agent do so in the form of external inputs. This might take the form of task descriptors, pretrained modules, or hard-coded modular structures. Approaches that consider *implicitly given structure* expect the input itself (e.g., image or language prompt) to contain cues that either distinguish tasks from each other (in noncompositional works) or that inform appropriate compositional solutions to each task (in compositional works). In most cases, techniques in these two categories do not assume that a task indicator is provided, since the input itself contains sufficient information to extract it. On the other hand, methods that assume *no access to the task structure* rely on some form of search to identify how tasks relate to one another. Most commonly, these approaches do so by relying on task indicators that enable them to learn different models for each task. 3. **Learning paradigm.** This axis considers whether the problems faced by the agent are *supervised*, unsupervised, or RL problems. Concretely, this is not fundamentally tied to the training mechanism used by the agent, but to the underlying problem. For example, multiple algorithms for learning compositional architectures use RL techniques to solve the non-differentiable search, but themselves solve supervised learning problems; such approaches are categorized as supervised. 4. **Type of composition.** Most of ML, and in particular most of lifelong learning, considers *noncompositional* problems. 
This survey considers a method to be compositional if it uses a compositional model (e.g., modular neural networks) or if it seeks to solve a compositional problem (e.g., compositional zero-shot generalization). While all supervised learning methods consider *functional* composition, RL methods can consider functional or *temporal* composition. This distinction, which refers to whether components are chained in time or in function space, is described in detail in Section 7. 5. **Type of structural configuration.** For any compositional method, we consider three types of structural configurations. *Aggregation* combines solutions in a flat manner, without any notion of hierarchy. For example, methods for logical composition in RL often aggregate models by summing their value functions. *Chaining* combines components hierarchically, with one component being executed after the next, and with a partial ordering over the components such that, if $a \prec b$, component $a$ must always come before component $b$ in the chained model. Arbitrary *graph* composition chains components hierarchically, too, but assumes no fixed ordering over them. 6. **Application domain.** This axis specifically considers the domains to which a paper applies its proposed method. Methods in this survey have been evaluated in *vision*, *robotics*, *language*, *visual question answering* (VQA), *audio*, and toy domains. Figure 4 includes a visual depiction of the landscape according to this categorization, while Appendix A lists all cited references in terms of the same categories.

Figure 4 axis labels: knowledge about task structure (no structure given, structure implicitly given, structure explicitly given); type of composition (functional, temporal, none); type of structural configuration (graph, chaining, aggregation, none); application domain (vision, robotics, language, VQA, audio, toy).

Figure 4: A categorization of existing works into six axes, defined in text within the figure. 
The scale of the icon represents the number of references in this survey per category. Most work on lifelong or continual learning has not learned explicitly compositional structures, while most efforts on compositional learning have operated in the MTL or STL settings. Appendix A contains a tabular version of this figure, listing all references in their respective categories. Best viewed in color.

- **Regularization** (avoid interference with previous tasks). Task-aware (Section 4.1.1): functional regularization, preserve important parameters, orthogonal model subspaces. Task-agnostic (Section 4.2.1): Bayesian models, trust regions of previous tasks.
- **Replay** (recall stored data representative of past tasks). Task-aware (Section 4.1.2): experience replay, gradient episodic memory. Task-agnostic (Section 4.2.2): memory-based task similarity, selective storage of memories.
- **Generative replay** (hallucinate data for replay). Task-aware (Section 4.1.3): generative adversarial nets, generative task models. Task-agnostic (Section 4.2.3): dynamic mixture models, parameter and data generators.
- **Expandable models** (expand model capacity to handle new tasks). Task-aware (Section 4.1.4): dynamically expandable nets, generative task models. Task-agnostic (Section 4.2.4): dynamic mixture of experts, hierarchical Bayesian models.
- **Reusable knowledge** (learn and transfer reusable chunks of knowledge; Section 4.3): factorized transfer / online dictionary learning, selective layer reuse in deep nets, selective transfer via attention.
- **Other approaches** (Section 4.4): online meta-learning, continual generalized zero-shot learning, stable stochastic gradient descent, recurrent neural nets.

Figure 5: Major mechanisms used for lifelong or continual learning and techniques exemplifying those 
mechanisms in both task-aware and task-agnostic settings. Best viewed in color. ## 4 Lifelong Or Continual Learning Lifelong learning agents face a variety of tasks over their lifetimes, and should accumulate knowledge in a way that enables them to more efficiently learn to solve new problems. Thrun (1998) first introduced the concept of lifelong learning, which has received widespread attention in recent years (Chen and Liu, 2018). Recent efforts have mainly focused on avoiding catastrophic forgetting (McCloskey and Cohen, 1989). At a high level, existing approaches define parts of parametric models (e.g., deep neural networks) to share across tasks. As the agent encounters tasks sequentially, it strives to retain the knowledge required to solve earlier tasks. The following sections divide lifelong learning methods into task-aware (with task indicator) and task-agnostic (without task indicator), examining each according to the major mechanisms used. A summary of approaches according to this division is shown in Figure 5. The discussion then ties the task-awareness division back to the separation according to how models receive information about the structure of the tasks. ## 4.1 Task-Aware The majority of works in lifelong learning in recent years fall in the category of task-aware methods. This section summarizes the existing literature in this category. Note that most methods that can operate in the task-aware setting can in principle also operate in the task-agnostic setting. In such cases, their categorization into task-aware or task-agnostic follows the (majority of the) experiments used in their original evaluation. ## 4.1.1 Regularization One common approach to avoid forgetting is to impose data-driven regularization to prevent parameters from deviating in directions that are harmful to performance on the early tasks. The intuition is that similar parameters would lead to similar solutions to the earlier tasks. 
Formally, these approaches approximate the MTL objective in Equation 1 with a data-driven regularization term:

$$z_{t}(\mathbf{\theta})=\frac{1}{t}\mathcal{L}^{(t)}\Big(\mathbf{X}^{(t)},\mathbf{Y}^{(t)},\mathbf{\theta}\Big)+\frac{1}{t}\sum_{\hat{t}=1}^{t-1}\Omega\Big(w_{\hat{t}},\mathbf{\theta},\mathbf{\theta}_{\hat{t}}^{(\hat{t})}\Big)\ ,\tag{2}$$

where $w_{\hat{t}}$ are regularization weights obtained from the data of task $\mathcal{Z}^{(\hat{t})}$, $\mathbf{\theta}$ are the parameters being optimized, and $\mathbf{\theta}_{\hat{t}}^{(\hat{t})}$ are the parameters obtained at time $\hat{t}$. The canonical example of this idea is elastic weight consolidation (EWC), which, inspired by a Bayesian formulation, places a quadratic penalty on the parameters for deviating from the parameters of each previously seen task, weighted by the diagonal of the Fisher information matrix of each task, $\mathbf{F}^{(\hat{t})}$: $\Omega(\mathbf{\theta}) = \sum_{\hat{t}} \big(\mathbf{\theta} - \mathbf{\theta}^{(\hat{t})}\big)^{\top} \mathbf{F}^{(\hat{t})} \big(\mathbf{\theta} - \mathbf{\theta}^{(\hat{t})}\big)$ (Kirkpatrick et al., 2017). EWC is one of the few exceptional works that was applied to both supervised and RL settings. An extension of EWC uses a Kronecker-factored approximation of the Fisher information matrix instead of a diagonal to improve performance at little additional cost (Ritter et al., 2018). Following this same principle, the literature has proposed a variety of regularization terms. Departing from the Bayesian formulation, the *synaptic intelligence* approach of Zenke et al. (2017) computes an estimate of each parameter's importance based on the trajectory of updates to the parameter, and uses this as the weighting for quadratic regularization. A generalized regularizer combines this latter idea with EWC (Chaudhry et al., 2018). Another well-known mechanism, *hard attention to the task*, learns task-specific, (nearly-)binary attention masks to select which nodes in a neural network to use for each task, and then constrains updates to parameters based on the previous tasks' masks (Serrà et al., 2018). 
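The quadratic parameter-importance penalty of EWC described above can be sketched in a few lines of numpy, assuming the previous tasks' parameters and diagonal Fisher approximations are already stored (the regularization strength `lam` is an illustrative hyperparameter):

```python
import numpy as np

def ewc_penalty(theta, old_params, fishers, lam=1.0):
    """Sum of quadratic penalties (theta - theta_t)^T F_t (theta - theta_t)
    over previous tasks, with diagonal Fisher approximations F_t."""
    total = 0.0
    for theta_t, f_t in zip(old_params, fishers):
        d = theta - theta_t
        total += float(np.sum(f_t * d * d))  # diagonal quadratic form
    return 0.5 * lam * total

theta = np.array([1.0, 2.0])
old = [np.array([0.0, 2.0])]    # parameters stored after task 1
fish = [np.array([2.0, 1.0])]   # diagonal Fisher for task 1
print(ewc_penalty(theta, old, fish))  # -> 1.0
```

During training on a new task, this penalty would simply be added to the current task's loss before taking a gradient step.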
A similar technique additively decomposes each task's parameters into a shared set of parameters modulated by a task-specific mask and a set of task-adaptive parameters, applying quadratic regularization to prevent previous tasks' parameters from diverging from their original values (Yoon et al., 2020). Recent work proposed a sparsity-based regularizer that reduces the storage space for regularization terms by computing node-wise (as opposed to parameter-wise) importance weights (Jung et al., 2020), and applied this approach to both supervised learning and RL. Cha et al. (2021) proposed a complementary regularizer based on entropy maximization, which encourages wider local minima and can therefore be used in combination with existing regularizers to avoid forgetting. The approaches above, originally applied to vision domains, have been extended to image captioning, demonstrating their applicability to language domains (Del Chiaro et al., 2020). The methods described so far are based on the intuition that parameters that are important for previous tasks should be modified sparingly to avoid forgetting. Recent works have considered a different intuition: in order for new tasks to be learned without interfering with past tasks, their solutions should lie in orthogonal subspaces of the parameter space. One mechanism for imposing orthogonality is to use precomputed task-specific orthogonal matrices to project the feature space of each task (Chaudhry et al., 2020). Alternatively, it is also possible to compute such projection matrices sequentially based on the learned solutions to previous tasks. This can be achieved by exploiting the singular vector space of the activations of the network, which applies to linear and convolutional layers (Saha et al., 2021; Deng et al., 2021), as well as recurrent layers (Duncker et al., 2020). 
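The orthogonal-projection idea can be sketched in a few lines, in the spirit of the gradient-projection methods cited above; the orthonormal basis of the subspace important to previous tasks is assumed to be given (in practice it would be extracted, e.g., from the singular vectors of past activations):

```python
import numpy as np

def project_gradient(grad, basis):
    """Remove the component of the new task's gradient that lies in the
    subspace spanned by the columns of `basis` (assumed orthonormal),
    so updates do not interfere with directions important to old tasks."""
    return grad - basis @ (basis.T @ grad)

# 1-D "important" subspace in R^3, assumed precomputed from previous tasks.
B = np.array([[1.0], [0.0], [0.0]])
g = np.array([3.0, 2.0, 1.0])
g_proj = project_gradient(g, B)
print(g_proj)  # -> [0. 2. 1.]  (component along the protected direction removed)
```

A gradient step with `g_proj` then leaves the protected subspace, and hence the old tasks' solutions, unchanged to first order.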
Another regularization strategy that has become popular is online variational inference, which approximates the Kullback-Leibler (KL) divergence between the current and previous predictive distributions in a Bayesian setting. Nguyen et al. (2018) developed the first *variational continual learning* (VCL) method, which, akin to EWC (Kirkpatrick et al., 2017) in the standard regularization-based setting, requires storing penalty terms for each network parameter. In an effort to reduce storage requirements, Ahn et al. (2019) modified this method by storing penalty terms for each network node, akin to the work of Jung et al. (2020). A generalized variational objective adds a tunable hyperparameter to weight the KL divergence term in the objective function, encompassing EWC and VCL (Loo et al., 2021). These notions can extend to Gaussian mixture distributions. In particular, Zhang et al. (2021) developed a continual variational inference method using a Chinese restaurant process to automatically determine the number of latent components, while Kumar et al. (2021) did the same using an Indian buffet process for unsupervised and supervised learning. Other methods instead functionally regularize the outputs of the model (e.g., *learning without forgetting*, Li and Hoiem, 2017), penalizing deviations from the predictions of earlier tasks' models by optimizing for:

$$z_{t}(\boldsymbol{\theta})=\frac{1}{t}\mathcal{L}\Big(\boldsymbol{X}^{(t)},\boldsymbol{Y}^{(t)},\boldsymbol{\theta}\Big)+\frac{1}{t}\sum_{\hat{t}=1}^{t-1}\Omega\Big(w_{\hat{t}},\hat{f}_{t}^{(\hat{t})},\hat{f}_{\hat{t}}^{(\hat{t})}\Big)\ ,\tag{3}$$

where $\hat{f}_{t}^{(\hat{t})}$ and $\hat{f}_{\hat{t}}^{(\hat{t})}$ are the predictive functions for task $\mathcal{Z}^{(\hat{t})}$ obtained at times $t$ and $\hat{t}$, respectively. Critically, functional regularization requires evaluating the $\hat{f}$'s, which previous works have accomplished either via replay (as described in Section 4.1.2) or by applying those functions to the current task's data, $\boldsymbol{X}^{(t)}$. Benjamin et al.
(2019) noted that distance in parameter space (parameter regularization) is not representative of distance in function space (functional regularization), which should be minimized to avoid forgetting. The authors then showed that naïvely estimating function-space distance on a small set of stored samples results in a strong functional regularization approach. Other methods convert the learned network into a Gaussian process (GP) and store a subset of samples from previous tasks in memory (Titsias et al., 2020; Pan et al., 2020). Then, the GP posterior on those points penalizes the model for making incorrect predictions on past tasks. A recent approach uses EWC with the Fisher information of the current task only, to forget knowledge of past tasks that prevents further learning (Wang et al., 2021); this method was used in supervised and RL tasks. Techniques in this category primarily focus on avoiding negative backward transfer by retaining models that are similar to the original models, while forward transfer might be enabled implicitly via parameter sharing.

## 4.1.2 Replay

A distinct approach retains a small buffer of data from all tasks, and continually updates the model parameters using data from the current and previous tasks, thereby maintaining the knowledge required to solve the earlier tasks. The general form of the objective for replay-based techniques can be written as:

$$z_{t}(\boldsymbol{\theta})=\frac{1}{t}\mathcal{L}\Big(\boldsymbol{X}^{(t)},\boldsymbol{Y}^{(t)},\boldsymbol{\theta}\Big)+\frac{1}{t}\sum_{\hat{t}=1}^{t-1}\Omega\Big(\boldsymbol{X}_{\mathrm{rep}}^{(\hat{t})},\boldsymbol{Y}_{\mathrm{rep}}^{(\hat{t})},\boldsymbol{\theta}\Big)\ ,\tag{4}$$

where $\boldsymbol{X}_{\mathrm{rep}}^{(\hat{t})},\boldsymbol{Y}_{\mathrm{rep}}^{(\hat{t})}=\mathrm{buffer}\big(\boldsymbol{X}^{(\hat{t})},\boldsymbol{Y}^{(\hat{t})}\big)$ are replay buffers given by some mechanism $\mathrm{buffer}$ (typically a subsampling mechanism). Most naïvely, one could simply iterate over past tasks' data when training on the new task—this is known as *experience replay* (ER, Chaudhry et al., 2019b), and corresponds to $\Omega=\mathcal{L}$.
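A minimal sketch of ER in the form of Equation 4, with $\Omega=\mathcal{L}$ and a random subsampling buffer, can be written as follows (the function names and the squared-error loss are illustrative assumptions, not code from Chaudhry et al., 2019b):

```python
import numpy as np

def buffer(X, Y, k, rng):
    """Subsampling mechanism: keep k random examples from a task's data."""
    idx = rng.choice(len(X), size=min(k, len(X)), replace=False)
    return X[idx], Y[idx]

def mse_loss(X, Y, theta):
    """Stand-in loss L for a linear model with parameters theta."""
    return np.mean((X @ theta - Y) ** 2)

def er_objective(X_t, Y_t, replay_buffers, theta):
    """z_t(theta) with Omega = L: average the current-task loss and the
    losses on each stored replay buffer."""
    t = len(replay_buffers) + 1
    z = mse_loss(X_t, Y_t, theta) / t
    for X_rep, Y_rep in replay_buffers:
        z += mse_loss(X_rep, Y_rep, theta) / t
    return z

rng = np.random.default_rng(1)
theta = rng.normal(size=3)
# Two past tasks, each contributing a small replay buffer of 10 examples.
replay_buffers = [buffer(rng.normal(size=(50, 3)), rng.normal(size=50), k=10, rng=rng)
                  for _ in range(2)]
X_t, Y_t = rng.normal(size=(50, 3)), rng.normal(size=50)
z = er_objective(X_t, Y_t, replay_buffers, theta)
```

Minimizing this objective over mini-batches that mix current and buffered data recovers the naïve ER training loop.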
If the amount of data stored for replay is small, one would expect that the model could overfit to the tiny memory. However, empirical evaluations found that even this naïve replay approach performs surprisingly well. Other popular approaches use replay data to constrain the directions of gradient updates to regions of the parameter space that do not conflict with the earlier tasks' gradients via *gradient episodic memory* (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019a). These same techniques have been used with a meta-learning objective function, whereby the agent trains directly to optimize the network's feature representation to avoid gradient conflicts between future and previous tasks (Riemer et al., 2019; Gupta et al., 2020a). Mirzadeh et al. (2021) developed a distinct objective function. The authors found that a linear curve in the objective function connects the solutions of the MTL and continual learning problems, and consequently used replay data to encourage finding a solution that is closer to the (no-forgetting) MTL solution. Similarly, Raghavan and Balaprakash (2021) found theoretically that the balance between generalization and forgetting, viewed as a two-player zero-sum game, is stable and corresponds to a saddle point, and developed an algorithm that searches for this saddle point by playing the two-player game. Other works have not modified the objective function Ω, but instead have focused on other aspects of the problem. For example, one approach balances the loss terms for replay and current data via mixed stochastic gradients (Guo et al., 2020b). As another example, Pham et al. (2021b) used a dual memory to train a set of shared neural network layers and a task-specific controller to transform the features of the shared layers. Despite the appeal of these more advanced replay-based methods, the basic ER method that simply replays randomly stored data remains a strong and popular baseline (Chaudhry et al., 2019b). 
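The gradient-constraint idea behind (averaged) gradient episodic memory can be sketched in a few lines: if the new-task gradient conflicts with a reference gradient computed on replayed data, project out the conflicting component. This is a simplified A-GEM-style rule with hypothetical names, not the authors' code:

```python
import numpy as np

def project_gradient(g, g_ref):
    """If g conflicts with the replay gradient g_ref (negative inner
    product), remove the conflicting component; otherwise leave g as-is."""
    dot = g @ g_ref
    if dot < 0:
        g = g - (dot / (g_ref @ g_ref)) * g_ref
    return g

g = np.array([1.0, -1.0])      # new-task gradient
g_ref = np.array([0.0, 1.0])   # gradient on replayed past-task data
g_proj = project_gradient(g, g_ref)
```

After projection, the inner product with the replay gradient is nonnegative, so the update can no longer increase the replayed tasks' loss to first order.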
In practice, replay-based techniques have proven to be much stronger at avoiding forgetting than regularization-based methods. One potential theoretical explanation for this discrepancy is that optimally solving the continual learning problem requires storing and reusing all past data (Knoblauch et al., 2020). Replay-based approaches have also followed other, less common directions. One nonparametric kernel method uses an episodic memory for detecting the task at inference time instead of for replay (Derakhshani et al., 2021). Other works have learned hypernetworks that take a task descriptor as input and output the parameters for a task-specific network, using replay at the task-descriptor level (von Oswald et al., 2020; Henning et al., 2021). In the unsupervised setting, Rostami (2021) used replay to learn a Gaussian mixture model in a latent representation space such that all tasks map to the mixture distribution in the embedding space. Like regularization methods, approaches based on replay might achieve forward transfer as a result of parameter sharing. Intuitively, positive backward transfer could occur by virtue of repeated training over increasingly larger MTL problems, though most experimental results in the supervised setting have not exhibited this property. Instead, results have primarily focused on avoiding negative backward transfer.

## 4.1.3 Generative Replay

A related technique is to replace the buffer mechanism by a generative model to "hallucinate" replay data, potentially reducing the memory footprint by avoiding storing earlier tasks' data. For example, this can be achieved by training a generative adversarial network (GAN) and using the trained network to generate artificial data for the previous tasks to avoid forgetting (Shin et al., 2017).
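The generative-replay loop can be sketched independently of the particular generative model; below, a trivial Gaussian fit stands in for the GAN of Shin et al. (2017), purely for illustration, and all names are hypothetical:

```python
import numpy as np

class GaussianGenerator:
    """Toy stand-in for a learned generative model of past-task inputs."""
    def fit(self, X):
        self.mu, self.sigma = X.mean(axis=0), X.std(axis=0) + 1e-8
        return self

    def sample(self, n, rng):
        return rng.normal(self.mu, self.sigma, size=(n, len(self.mu)))

def mix_with_generative_replay(X_new, generator, old_labeler, n_replay, rng):
    """Mix real new-task data with hallucinated old-task data, where the
    previous model (old_labeler) provides the targets for generated inputs."""
    X_old = generator.sample(n_replay, rng)
    y_old = old_labeler(X_old)         # previous model labels its own samples
    y_new = np.ones(len(X_new))        # new task's labels (dummy here)
    X = np.vstack([X_new, X_old])
    y = np.concatenate([y_new, y_old])
    return X, y                        # combined batch fed to the learner

rng = np.random.default_rng(2)
X_task1 = rng.normal(loc=5.0, size=(100, 2))
gen = GaussianGenerator().fit(X_task1)  # "generator" trained on task 1
X_task2 = rng.normal(loc=-5.0, size=(100, 2))
X_mix, y_mix = mix_with_generative_replay(
    X_task2, gen, old_labeler=lambda X: np.zeros(len(X)), n_replay=50, rng=rng)
```

The design point this illustrates is that only the generator (and the previous model, to label generated inputs) must be retained, not the earlier tasks' data.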
In one of the few works that have considered lifelong language learning, the authors leveraged the intuition that a language model itself is a generative model and used it for replaying its own data (Sun et al., 2020). A recent unsupervised method for training GANs learns both global features that are kept fixed after the first task and task-specific transformations to those shared features (Varshney et al., 2021). While the approach does not train the GANs via generative replay, the learned GANs can generate replay data to train a supervised model.

## 4.1.4 Expandable Models

While the approaches described so far are capable of learning sequences of tasks without forgetting, they are limited by one fundamental constraint: the learning of many tasks eventually exhausts the capacity of the model, and it becomes impossible to learn new tasks without forgetting past tasks. Note that, while some replay and regularization approaches described so far add task-specific features that can be considered as a form of capacity expansion, this expansion is naïvely executed for every task. Some additional methods use per-task growth as their primary mechanism for avoiding forgetting. One example specific to convolutional layers keeps the filter parameters fixed after initial training on a single task and adapts them to each new task via spatial and channel-wise calibration (Singh et al., 2020). However, these methods still consider the set of *shared* features to be fixed in size, and these shared features might still run out of capacity. As one potential exception, another technique leverages large pretrained language models that in practice seem to have sufficient capacity for a massive number of tasks (specifically BERT, Devlin et al., 2019) and adds small modules trained via task-specific masking to learn a sequence of language tasks (Ke et al., 2021).
A more recent mechanism that follows this same direction learns a continuous input vector per task to adapt the behavior of a large pretrained language model (Razdaibiedina et al., 2023). To address capacity limitations, a similar line of work has studied how to automatically expand the capacity of the model as needed to accommodate new tasks. Yoon et al. (2018) trained *dynamically expandable networks* via a multistage process that first selects the relevant parameters from past tasks to optimize, then checks the loss on the new task after training, and—if the loss exceeds a threshold—expands the capacity and trains the expanded network with group sparsity regularization to avoid excessive growth. In order to avoid forgetting, the algorithm measures the change in each neuron's input weights, and duplicates the neuron and retrains it if the change is too large. A similar approach sidesteps the need for this duplication step by maintaining all parameters for past tasks fixed (Hung et al., 2019). A distinct method splits the capacity of the model across tasks selected via boosting, and adds one such model for every new task (Ramesh and Chaudhari, 2022). Note that the focus of model capacity expansion is to avoid forgetting by permitting the model to use new weights to adapt to each new task. However, since new parameters are typically reserved for future tasks, previous tasks do not benefit from the additional capacity, preventing positive backward transfer. ## 4.1.5 Closing Remarks On Task-Aware Lifelong Learning Figure 4 categorizes the vast majority of the approaches discussed in this section as lifelong supervised learning methods with no composition, no task structure provided, and a focus on vision applications. A handful of exceptions were highlighted that deal with RL, unsupervised learning, or language applications. While these techniques notably require no information of how tasks are related, they do require a task indicator during learning and evaluation. 
This choice enables task-aware methods to use task-specific parameters to specialize shared knowledge to each task, but may be inapplicable in some settings where there are no evident task boundaries or there is no potential for supervision at the level of the task indicator. ## 4.2 Task-Agnostic As an alternative, task-agnostic lifelong approaches automatically detect tasks. While the learning techniques are typically not fundamentally different from those for the task-aware setting, this survey categorizes them separately to highlight the conceptual difference of learning in the presence of implicit or explicit information about how tasks are related to each other. Note that these methods may or may not assume access to task indicators during *training*, but they all are unaware of the task indicator during *inference*. Since in either case the agent requires information about task relations, the following discussion omits such distinctions. ## 4.2.1 Regularization Like in the task-aware setting, a number of task-agnostic techniques aim at avoiding forgetting by penalizing deviations from earlier tasks' solutions. One early method uses a diagonal Gaussian approximation in order to obtain a closed-form update rule based on the variational free energy (Zeno et al., 2018). A later extension of this method handles arbitrary Gaussian distributions by using fixed point equations (Zeno et al., 2021). Another recent technique combines Kronecker-factored EWC (Ritter et al., 2018) with a novel projection method onto the trust region over the posterior of previous tasks (Kao et al., 2021). A distinct approach by Kapoor et al. (2021) trains a variational GP using sparse sets of inducing points per task. Joseph and Balasubramanian (2020) used regularization at a meta level, learning a generative regularized hypernetwork using a variational autoencoder (VAE) to generate parameters for each task based on a task descriptor. 
Whereas these methods have operated in the supervised setting, Egorov et al. (2021) recently proposed a VAE model with boosting density approximation for the unsupervised setting. Functional regularization also applies to the task-agnostic setting for avoiding forgetting. One such method relies on the lottery ticket hypothesis, which states that deep networks with random weight initialization contain much smaller subnetworks that can be trained from the same initialization and reach comparable performance to the original (much larger) network (Frankle and Carbin, 2019). Chen et al. (2021) extended this hypothesis to the lifelong setting by using a pruning and regrowing approach, in combination with functional regularization from unlabeled data from public sources. Another approach combines parameter regularization (specifically, EWC, Kirkpatrick et al., 2017) with functional regularization to avoid forgetting in an approach based on weight and feature calibration (Yin et al., 2021). ## 4.2.2 Replay Replaying past data stored in memory is another popular mechanism to avoid forgetting in the task-agnostic setting. One recent approach stores data points along with the network's output probabilities, and uses these to functionally regularize the output of the network to stay close to its past predictions (Buzzega et al., 2020). An additional method learns a dual network with different learning rates, training a fast learner to transform a slow learner's output pixel-wise and replaying past data to avoid forgetting (Pham et al., 2021a). A drastically different technique uses graph learning to discover pairwise similarities between memory and current samples, penalizing forgetting the edges between samples instead of the predictions in order to maintain correlations between samples while permitting significant changes to the network's representations (Tang and Matteson, 2021). An extension to the technique of Gupta et al. 
(2020a, from the task-aware setting) for lifelong learning via meta-learning trains an additional binary mask that determines which parameters to learn for each task, leading to sparse gradients (von Oswald et al., 2021). Note that this survey categorizes this method as task-agnostic simply because the paper conducted the majority of experiments in that setting. While these examples have focused on how to leverage past examples during training, a related line of work has explored *which* samples to store or replay from the earlier tasks. The method of Aljundi et al. (2019b) stores samples whose parameter gradients are most diverse. Other work proposed to sample from memory the points whose predictions would be affected most negatively by parameter updates without replay (Aljundi et al., 2019a). Chrysakis and Moens (2020) tackled the problem of class imbalance by developing a class-balancing sampling technique to store instances, in combination with a weighted replay strategy. One additional approach determines the optimal points to store in memory using bilevel optimization (Borsos et al., 2020). Intuitively, it is possible that the training samples are not optimal for lifelong replay (e.g., because they lie far from the decision boundaries). Jin et al. (2021) leveraged this idea by directly modifying the samples in memory via gradient updates to make them more challenging for the learner. Alternatively, one could imagine that storing high-resolution samples might be wasteful, as many features might be superfluous for retaining performance on past tasks. Prior work has exploited this intuition by compressing data in memory via a multilevel VAE that iteratively compresses samples to meet a fixed storage capacity (Caccia et al., 2020). Like most methods discussed so far, these replay-based approaches have all been used in the vision domain. One exception to this was the work of de Masson d'Autume et al. 
(2019), which directly applied memory-based parameter adaptation (Sprechmann et al., 2018) with sparse replay to the language domain. ## 4.2.3 Generative Replay Achille et al. (2018) developed a VAE-based approach to lifelong unsupervised learning of disentangled representations, which uses generative replay from the VAE itself to avoid forgetting. A similar technique uses a dynamically expandable mixture of Gaussians to identify when the unsupervised model needs to grow to accommodate new data (Rao et al., 2019). Ayub and Wagner (2021) developed another unsupervised learning method based on neural style transformers (Gatys et al., 2016). Unlike prior methods, this latter approach explicitly stores in memory the autogenerated samples in embedding space, and consolidates them into a centroid-covariance representation to conform to a fixed capacity. As unsupervised approaches, these three methods can seamlessly extend to the supervised setting, as demonstrated in the corresponding manuscripts. Alternatively, one approach specifically for supervised learning relies on three model components: a set of shared parameters, a dynamic parameter generator for classification, and a data generator (Hu et al., 2019). The data generator generates embeddings both as inputs to the dynamic parameter generator and as replay samples for functional regularization of the shared parameters. A similar approach for supervised learning, inspired by the brain, also replays hidden representations to avoid forgetting (Van de Ven et al., 2020). ## 4.2.4 Expandable Models In the vein of dynamically expandable models, Aljundi et al. (2017) conceived the first task-agnostic method, which trains a separate expert for each task, and automatically routes each data point to the relevant expert at inference time. A distinct method automatically detects distribution shifts during training to meta-learn new components in a mixture of hierarchical Bayesian models (Jerfel et al., 2019). 
Similarly, the method of Lee et al. (2020) trains a dynamically expandable mixture of experts via variational inference. ## 4.2.5 Closing Remarks On Task-Agnostic Lifelong Learning Overall, task-agnostic learning might appear at first glance as an unqualified improvement over task-aware learning. However, most task-agnostic approaches rely on one additional assumption: the task structure must be implicitly embedded in the features of each data point (e.g., one task might be daylight object detection and another nighttime object detection). In some settings, this assumption is not valid, for example if the same data point might correspond to different labels in different tasks (e.g., cat detection and dog detection from images with multiple animals). Figure 4 therefore categorizes these methods as requiring the task structure to be implicitly provided, primarily in the supervised and unsupervised settings, with applications to vision models. To reiterate: in practice, many of the task-aware methods can operate in the task-agnostic setting with minor modifications, and vice versa. ## 4.3 Reusable Knowledge Lifelong approaches discussed so far, although effective in avoiding the problem of catastrophic forgetting, make no substantial effort toward the discovery of reusable knowledge. One could argue that these methods learn the model parameters in such a way that they are reusable across all tasks. However, it is unclear what the reusability of these parameters means, and moreover the architecture design hard-codes how to reuse parameters. This latter issue is a major drawback when attempting to learn tasks with a high degree of variability, as the exact form in which tasks connect to one another is often unknown. One would hope that the algorithm could determine these connections autonomously. The ELLA framework introduced an alternative formulation based on online dictionary learning (Ruvolo and Eaton, 2013). 
The elements of the dictionary can be interpreted as reusable models, and task-specific coefficients select how to reuse them. This represents a rudimentary form of functional composition, where each component is a full task model and the new models aggregate the component parameters; in the case of linear models studied in the original paper, this is equivalent to aggregating the component models' outputs. A few other mechanisms instead autonomously identify which knowledge to transfer across tasks, without explicit compositionality. One approach relies on automatically detecting which layers in a neural network should be specific to a task and which should leverage a shared set of parameters. Existing works have achieved this either via variational inference (Adel et al., 2020) or via expectation maximization (Lee et al., 2021a). Another technique is to identify the most similar tasks by training one model via transfer and another via STL and comparing their validation performances (Ke et al., 2020). Once the agent has identified similar tasks, it uses an attention mechanism to transfer knowledge from only those tasks. An additional algorithm instead avoids explicitly selecting tasks or layers to transfer, and directly meta-learns a set of features that maximize reuse when task-specific parameters mask the shared weights for transfer (Hurtado et al., 2021). Going back to the illustration of Figure 4, the methods in the previous paragraphs have included applications to vision and have worked only in the supervised setting. As a sole exception, extensions to the work of Ruvolo and Eaton (2013) have applied to RL for robotics tasks, as discussed below in Section 6. In a distinct line of work, Yoon et al. (2021) developed a knowledge-sharing mechanism for lifelong federated learning, selectively transferring knowledge across clients. In the language domain, Gupta et al. (2020c) achieved lifelong transfer by sharing latent topics. 
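The factored representation at the heart of the dictionary-based formulation described above (Ruvolo and Eaton, 2013) can be sketched in a few lines: each task's linear model is a combination of shared dictionary columns selected by sparse task-specific coefficients. This is a simplified, reconstruction-only view with illustrative names; ELLA itself additionally updates the dictionary online using second-order information:

```python
import numpy as np

def task_parameters(L, s):
    """Compose a task model from the shared dictionary L (d x k) and
    task-specific coefficients s (k,)."""
    return L @ s

rng = np.random.default_rng(4)
d, k = 6, 3                      # parameter dimension, number of components
L = rng.normal(size=(d, k))      # shared latent component dictionary
s1 = np.array([1.0, 0.0, 0.5])   # sparse coefficients for task 1
s2 = np.array([0.0, 1.0, 0.5])   # task 2 reuses the third component
theta1, theta2 = task_parameters(L, s1), task_parameters(L, s2)

# For linear models, composing parameters equals composing model outputs:
x = rng.normal(size=d)
assert np.isclose(x @ theta1,
                  s1[0] * (x @ L[:, 0]) + s1[2] * (x @ L[:, 2]))
```

The final assertion makes concrete the remark in the text: for linear models, aggregating component parameters is equivalent to aggregating the component models' outputs.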
## 4.4 Additional Approaches While the majority of works on lifelong learning fall into the categories above, some exceptions do not fit this classification. This section briefly describes some recent such efforts. Javed and White (2019) developed an online meta-learning algorithm that explicitly trains a representation to avoid forgetting. Their work considers a novel problem setting: instead of one lifelong sequence of tasks, the agent faces a pretraining phase, during which it meta-learns the representation over *multiple* "lifelong" sequences of tasks. An extension to this work incorporates a generative classifier (Banayeeanzade et al., 2021). A similar method learns a dual network for gating the outputs of a standard network (Beaulieu et al., 2020). A different recent problem formulation is that of continual generalized zero-shot learning, which requires the agent to generalize to unseen tasks as well as perform well on all past tasks (Skorokhodov and Elhoseiny, 2021). The authors then presented an algorithm for tackling this new problem via class normalization. Other works have instead focused on understanding aspects of existing lifelong learning approaches. Mirzadeh et al. (2020) empirically studied the impact of training hyperparameters (dropout, learning rate decay, and mini-batch size) on the width of the obtained local minima—and therefore, on forgetting. This study led to the development of *stable stochastic gradient descent*, a now-popular baseline for benchmarking new lifelong approaches. Another study empirically evaluated the effect of task semantics on catastrophic forgetting, finding that intermediate similarity leads to the highest amount of forgetting (Ramasesh et al., 2021). Lee et al. (2021b) obtained a similar finding for lifelong learning specifically in the teacher-student setting, with additional insight separating task feature similarity and class similarity. 
A separate work evaluated existing lifelong learning methods on recurrent neural networks, finding that they perform reasonably well and are a solid starting point for developing lifelong methods specific to recurrent architectures (Ehret et al., 2021). Figure 4 categorizes these last few approaches as lifelong supervised learning methods without any type of composition. Most of the methods either assume implicit information about the structure of the tasks (but no access to a task indicator) or vice versa. The one exception is the work of Skorokhodov and Elhoseiny (2021), which assumes explicit task descriptors that enable zero-shot generalization, which equates to explicit information about the task structure. Similarly, all works considered solely vision applications, with the exception of Ehret et al. (2021), which additionally considered a simple audio application. ## 5 Compositional Knowledge A mostly distinct line of parallel work has explored the learning of compositional knowledge. These methods vary in the information they are given about the structure of the tasks (Figure 6) and the types of compositional representations they learn (Figure 7). This section discusses existing methods for functional composition in the supervised setting, while Section 7 discusses other forms of composition specifically used for RL. ## 5.1 Multitask Learning Most compositional learning methods either learn a set of components given a known structure for how to compose them, or learn the structure for piecing together a given set of components. In the former case, Andreas et al. (2016) proposed to use neural modules as a means for transferring information across VQA tasks. Their method parses the questions in natural language and manually transforms them into a neural architecture. Given this architecture, the agent learns modules for detecting shapes, colors, and spatial relations, and later combines the modules in novel ways to answer unseen questions. 
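The module-composition idea just described can be sketched as plain function composition over reusable modules, echoing the animal localization and counting example of Section 2.2. All names and the toy "image" encoding below are illustrative, not code from Andreas et al. (2016):

```python
import numpy as np

# Toy "image": a grid where 1 marks a cat and 2 marks a dog.
image = np.array([[1, 0, 2],
                  [0, 1, 0],
                  [2, 0, 1]])

# Each module is a reusable function; tasks differ only in how modules chain.
detect_cats = lambda img: (img == 1).astype(int)  # mask of cat locations
detect_dogs = lambda img: (img == 2).astype(int)  # mask of dog locations
count = lambda mask: int(mask.sum())              # count detections in a mask

def compose(*modules):
    """Feed the output of one module as the input to the next."""
    def solution(x):
        for m in modules:
            x = m(x)
        return x
    return solution

count_cats = compose(detect_cats, count)  # task 1: how many cats?
count_dogs = compose(detect_dogs, count)  # task 2: how many dogs?
```

Here the `count` module is shared verbatim between the two tasks, which is precisely the kind of reuse that modular architectures aim to exploit.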
In this context, neural modules represent general-purpose, learnable, and composable functions, which permits thinking broadly about functional composition. Consequently, this survey primarily considers learnable components in the form of neural modules.

Figure 6: Different forms of compositional learning, depending on what structural information is implicitly or explicitly given to the algorithm and what must be learned.

Following the graph formalism of Section 2.2, each neural module is a node representing one individual subfunction $F_i$, and a sequence of edges over the graph indicates how to compose the modules into an overall solution by passing the output of one module as input to the next. The sequence of edges can be viewed as a selection of which modules to activate for each task and in what order. As an illustrative example of the types of problems that these modular architectures can tackle, consider the example of animal localization and counting from Section 2.2. Figure 8 depicts a compositional solution to the four tasks using modular networks, consisting of modules for detecting cats, detecting dogs, finding largest, and counting. A related work extended the *neural programmer-interpreter* (NPI, Reed and de Freitas, 2016) to learn an interpreter for the programming language Forth using neural modules as the primitive functions, given manually specified execution traces (Bošnjak et al., 2017). Wu et al. (2021) hypothesized that, in order for neural modules to be composable, they must be invertible, and validated this hypothesis by manually composing invertible modules with themselves and other pretrained modules. In the latter scenario of a given set of components, Zaremba et al. (2016) learned an RL-based controller to select from among a collection of predefined functions to execute more complex programs. Cai et al.
(2017) improved generalization in the NPI framework by incorporating recursion.

Figure 7: Types of reusable functional components and mechanisms for using those components for MTL and lifelong learning, with illustrative algorithms (neural module networks depiction from Andreas et al. (2016)). Note: the figures of Andreas et al. and Nangue Tasse et al. are taken from their respective papers.

Another approach based on programming languages uses RL for rewarding all semantically correct programs and additionally imposes syntactical correctness directly in the training procedure (Bunel et al., 2018). More advanced RL techniques have tackled the same problem, removing the need for any supervision in the form of annotated execution traces or structures (Pierrot et al., 2019). A separate approach specifically for robot programming tasks uses the application programming interfaces (APIs) of primitive actions to guide the learning (Xu et al., 2018). Recently, similar ideas have achieved compositional generalization by directly learning rules over fixed symbols (Nye et al., 2020) or by providing a curriculum (Chen et al., 2020b). In a related line of work, Saqur and Narasimhan (2020) trained graph neural networks to couple concepts across different modalities (e.g., image and text), keeping the set of possible symbols fixed. A more interesting case is when the agent knows neither the structure nor the set of components, and must autonomously discover the compositional structure underlying a set of tasks. For example, following Andreas et al. (2016), several approaches for VQA assume that there exists a mapping from the natural language question to the neural structure and automatically learn this mapping (Pahuja et al., 2019).

Figure 8: Example of a functionally compositional solution using modular neural networks. Each neural module solves one subproblem (e.g., detect dogs). Composing modules by feeding the output of one as the input to the next yields a complete neural network that solves each of the tasks.

The majority of such methods assume access to a set of ground-truth program traces as a supervisory signal for learning the structures. The first such method simply learns a sequence-to-sequence model from text to network architectures in a supervised fashion (Hu et al., 2017). A similar method starts from supervised learning over a small set of annotated programs and then fine-tunes the structure via RL (Johnson et al., 2017). Recent extensions to these ideas have included using probabilistic modules to parse open-domain text (Gupta et al., 2020b) and modulating the weights of convolutional layers with language-guided kernels (Akula et al., 2021). Figure 4 categorizes the compositional works described so far as supervised MTL methods with explicitly given task structure, either in the form of fixed modules, fixed structures over the modules, or inputs that directly contain the structure (e.g., in natural language). The compositional structure is any arbitrary graph connecting the components, even allowing for components to be reused multiple times in a single task via recursion. Existing works have used these methods in varied application domains: toy programming tasks (Bošnjak et al., 2017; Cai et al., 2017; Bunel et al., 2018; Pierrot et al., 2019), VQA (Andreas et al., 2016; Saqur and Narasimhan, 2020; Pahuja et al., 2019; Hu et al., 2017; Johnson et al., 2017; Gupta et al., 2020b; Akula et al., 2021), natural language (Nye et al., 2020; Chen et al., 2020b), vision (Wu et al., 2021), audio (Wu et al., 2021), and robotics (Xu et al., 2018). As an aside, while most works described so far operate exclusively at the level of abstractions computed by neural modules, others (e.g., those deriving from the NPI framework) operate at a *neuro-symbolic* level.
In this case, the outputs of neural modules are mapped to semantically meaningful representations that can be viewed as a form of manually defined interfaces connecting one module to the next, enabling explicit symbolic reasoning. However, some applications require agents (e.g., service robots) to learn more autonomously, without any kind of supervision on the compositional structures. Several approaches therefore learn this structure directly from the optimization of a cost function. Many such methods assume that the inputs themselves implicitly contain information about their own structure, such as natural language tasks, and therefore use the inputs to determine the structure. One challenge in this setting is that the agent must autonomously discover, in an unsupervised manner, the compositional structure that underlies a set of tasks. One approach is to train both the structure and the model end-to-end, assuming that the selection over modules is differentiable (i.e., soft module selection, Rahaman et al., 2021). Other approaches instead aim at discovering hard modular models, which increases the difficulty of the optimization process. Methods for tackling this variant of the problem have included using RL (Chang et al., 2019) or expectation maximization (Kirsch et al., 2018) as the optimization tool. These ideas have operated on both vision (Rahaman et al., 2021; Chang et al., 2019) and natural language (Kirsch et al., 2018) tasks. Other approaches do not assume that any information about the structure is given to the agent, which must therefore blindly search for it for each task. This typically implies that the compositional structure for each task is fixed across all data points, although some approaches permit reconfiguring the modular structure even *within* a task.
On the other hand, much like in the lifelong learning setting without compositional structures, this assumption also implies that the agent requires access to some sort of task indicator. One example of this formulation approximates an arbitrary ordering over a set of neural modules via soft ordering and trains the entire model end-to-end (Meyerson and Miikkulainen, 2018). A related technique decomposes MTL architectures into tensors such that each matrix in the tensor corresponds to a subtask, using hypermodules (akin to hypernetworks) to generate local tensors (Meyerson and Miikkulainen, 2019). Another example assumes a hard module selection, and trains the modules via meta-learning so that they are able to quickly find solutions to new, unseen tasks (Alet et al., 2018). An extension of this method learns with graph neural networks (Alet et al., 2019), and a simplified version discovers whether modules should be task-specific or shared via Bayesian shrinkage (Chen et al., 2020c). The approach of Rosenbaum et al. (2018) also learns a hard module selection, but uses RL to select the modules to use for each data point and task. One of the advantages of keeping the structural configuration fixed for each task (instead of input-dependent) is that the reduced flexibility protects the model from overfitting. This has enabled applying these latter methods to domains with smaller data sets than are typically available in language domains (Meyerson and Miikkulainen, 2019; Chen et al., 2020c), such as vision (Rosenbaum et al., 2018; Meyerson and Miikkulainen, 2018; 2019) and robotics (Alet et al., 2018; 2019). Rosenbaum et al. (2019) discussed the challenges of optimizing modular architectures via an extensive evaluation with and without task indicators on vision and language tasks. 
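To make the soft structure-search idea above concrete, the following sketch blends the outputs of *all* shared modules at every depth with softmax weights, so that near-one-hot weights recover a hard module selection. This is a minimal illustration of the soft-selection/soft-ordering principle, not the implementation of any cited method; the toy modules and hand-set logits are assumptions for the example.

```python
import math

def softmax(logits):
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def soft_ordering(modules, logits_per_depth, x):
    """At every depth, apply *all* shared modules and blend their outputs
    with softmax weights; peaked logits approximate a hard selection."""
    for logits in logits_per_depth:
        weights = softmax(logits)
        x = sum(w * m(x) for w, m in zip(weights, modules))
    return x

modules = [lambda x: x + 1.0, lambda x: 2.0 * x]
# Two depths; strongly peaked logits make depth 0 act like module 0 (x + 1)
# and depth 1 act like module 1 (2x), so the net computes roughly 2 * (x + 1).
logits = [[20.0, -20.0], [-20.0, 20.0]]
print(soft_ordering(modules, logits, 3.0))  # close to 8.0
```

Because the blend is differentiable, the logits can be trained jointly with the module parameters by gradient descent, which is what makes this family of methods end-to-end trainable.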
## 5.2 Lifelong Learning

All compositional methods described so far assume that the agent has access to a batch of tasks for MTL, enabling it to evaluate numerous combinations of components and structures on all tasks simultaneously. In a more realistic setting, the agent faces a sequence of tasks in a lifelong learning fashion. Most work in this line has assumed that the agent can fully learn each component by training on a single task, and then reuse the learned module for other tasks. One example is the NPI, which assumes that the agent receives supervised module configurations for a set of tasks and can use this signal to learn a mapping from inputs to module configurations (Reed and de Freitas, 2016). Extensions to the NPI have operated in the MTL setting and were described in the previous section. Other methods do not assume that there is any information in the input about the task structure, and therefore must search for the structure for every new task. Fernando et al. (2017) trained a set of neural modules and chose the paths for each new task using an evolutionary search strategy, applying this technique to both supervised learning and RL. These methods maintain a constant number of modules and keep the weights of those modules fixed after training them on a single task, limiting their applicability to a small number of tasks. To alleviate these issues, other methods progressively add new modules, keeping existing modules fixed (Li et al., 2019). This is akin to the capacity expansion methods from Section 4. Some such approaches introduce heuristics for searching over the space of possible module configurations upon encountering a new task to improve efficiency, for example using programming language techniques (Valkov et al., 2018) or data-driven heuristics (Veniat et al., 2021). In the language domain, Kim et al.
(2019) developed an approach that progressively grows a modular architecture for solving a VQA task by providing a curriculum that imposes which module solves which subtask, keeping old modules fixed. Unfortunately, this solution of keeping old modules fixed is infeasible in many real-world scenarios in which the agent has access to little data for each of the tasks, which would render these modules highly suboptimal. Therefore, other methods have permitted further updates to the model parameters. One early example, based on programming languages, simply assumed that future updates would not be harmful to previous tasks (Gaunt et al., 2017). This limited the applicability of the method to very simplistic settings. Rajasegaran et al. (2019) proposed a more complete approach that combines regularization and replay to avoid catastrophic forgetting, but requires expensively storing and training multiple models for each task to select the best one before adapting the existing parameters. Another approach routes each data point through a different path in the network, restricting updates to the path via EWC regularization if the new data point is different from past points routed through the same path (Chen et al., 2020a). However, this latter approach heavily biases the obtained solution toward the first task, and does not permit the addition of new modules over time. A recent, more complete framework uses multiple stages for initializing, reusing, adapting, and spawning components, without access to large batches of simultaneous tasks or expensive training of multiple parallel models (Mendez and Eaton, 2021). The authors further demonstrated the compatibility of compositional methods with replay, regularization, and capacity expansion approaches for lifelong training. Qin et al. (2021) developed a similar supervised learning approach that automatically grows and updates modules for each new task using an RL-based controller. 
While the approach of Mendez and Eaton (2021) sidesteps forgetting in the structure over modules by making it task-specific, the controller of Qin et al. (2021) is susceptible to catastrophic forgetting. Ostapenko et al. (2021) developed a local per-module selector that estimates whether each sample is in-distribution for the given module, and chooses the module with the highest value. This mechanism lets their method operate in the task-agnostic setting and limits forgetting to local, per-module parameters. While this addresses a large part of the problem of forgetting in the module-selection stage, it enables earlier tasks to select new modules that are likely to malfunction in the presence of old data they did not train on. Notably, this method demonstrated the ability of existing modules to combine in novel ways to solve unseen tasks, exhibiting for the first time compositional generalization in the lifelong learning setting. Most approaches described in this section assume an arbitrary graph structure over the components, and learn to construct paths through this graph. In the case of neural modules, this means that each module can be used as input to any other module, or equivalently that modules can be chosen at any depth of the network. Some exceptions in the lifelong setting impose a chaining structure by restricting certain modules to be eligible only at certain depths. Note that both of these choices contemplate an exponential number (in the network's depth) of possible configurations. However, the chaining approach does simplify the problem of learning modules, since it reduces the space of possible inputs and outputs that each module must accept and generate, respectively. ## 5.3 Nonmodular Compositional Works While modular neural architectures have become popular for addressing compositional problems, they are not the only solution. 
A number of works have dealt with the problem of compositional generalization to unseen textual tasks, where an agent may have learned the concepts of "walk", "twice", and "turn left" separately, and later be required to parse an instruction like "walk twice and turn left" (Lake and Baroni, 2018). One approach uses meta-learning to explicitly optimize the agent to reason compositionally by generalizing to unseen combinations of language instructions (Lake, 2019). Another method, inspired by the emergence of compositionality in human language, uses iterated learning on neural networks to compositionally generalize (Ren et al., 2020). Gordon et al. (2020) equated language composition to equivariance on permutations over group actions, and designed an architecture that maintains such equivariances. A similar work imposed invariance to partial permutations on a language understanding system (Guo et al., 2020a). Another recent technique incorporates a memory of automatically extracted analytical expressions and uses those to compositionally generalize (Liu et al., 2020). A distinct approach by Akyürek et al. (2021) uses data augmentation to specifically target compositionality, combining prototypes of a generative model into multiprototype samples. One method in this space operates in the lifelong setting, where the vocabulary grows over time (Li et al., 2020b). The agent separates the semantics and syntax of inputs, keeping the syntax for previously learned semantics parameters fixed and learning additional semantics parameters for each extension of the vocabulary. The literature on visual object detection has also studied the idea of compositional generalization, under the vein of attribute-based zero-shot classification. At a high level, objects in images contain annotations not only of their class label but also of a set of attributes (e.g., color, shape, texture), and the learning system seeks to detect unseen classes based on their attributes. 
This requires the agent to learn both the semantics of the attributes and how to combine them (Huynh and Elhamifar, 2020; Atzmon et al., 2020; Ruis et al., 2021). Other approaches compose attributes to generate images. One such method learns one energy-based model per attribute that can later be combined with other attributes in novel combinations—for example, to generate a smiling man from the attributes "smiling" and "man" (Du et al., 2020). The approach of Aksan et al. (2020) learns embeddings of manual drawings that can be composed into complex figures like flowcharts. Similarly, the mechanism of Arad Hudson and Zitnick (2021) uses GANs with structural priors to generate scenes by composing multiple objects. A different method instead relies on large-scale data sets to compositionally generate images without any explicit notion of composition during training (Ramesh et al., 2021; 2022). Another approach that leverages large-scale data uses pretrained language models in conjunction with external tools (such as a calculator and a calendar) to solve compositional tasks (Schick et al., 2023). While related, this line of work is farther from the notion of composition studied in this review, and so a comprehensive overview is outside of the scope of this discussion. Moreover, even though these methods enable generalizing compositionally, they lack the explicit modularity that would enable the addition of new components over time or the improvement of specific components, as required for true lifelong learning. ## 5.4 Understanding And Measuring Composition A recent line of work has sought to understand various aspects of compositionality. An initial study quantified the compositionality of a model as the ability to approximate its output on compositional inputs by combining representational primitives (Andreas, 2019). 
Evaluating a set of models under this measure, the author found a correlation (albeit small) between compositionality and generalization on vision and language tasks. A similar study found that the same definition of compositionality is related to zero-shot generalization on vision tasks (Sylvain et al., 2020). Schott et al. (2022) found that representation learning approaches do not compositionally generalize to new combinations of known factors of variation. D'Amario et al. (2021) showed that (manually defined) explicitly modular neural architectures improve compositional generalization in VQA tasks. Somewhat contradictorily, Agarwala et al. (2021) found theoretically and empirically that a single monolithic network is capable of learning multiple highly varied tasks. However, this ability requires an appropriate encoding of the tasks that separates them into clusters. One work used a similar intuition to develop a mechanism to compute a description of the execution trace of a modular architecture based on random matrix projections onto separate regions of an embedding space (Ghazi et al., 2019). Given the apparent importance of modularity and compositionality, Csordás et al. (2021) studied two properties of neural networks without modular architectures: whether they automatically learn specialized modules, and whether they reuse those modules. While they found that neural networks indeed automatically learn highly specialized modules, they do not automatically reuse those, thereby inhibiting compositional generalization. ## 5.5 A Caveat On Compositional Learning Methods The methods described in this section for discovering compositional knowledge rely on the assumption that the learning problems faced by the agent are related via some underlying compositional structure, as described in Section 2.2. 
Incorporating this assumption into learning algorithms constitutes a form of bias, which may hinder performance in instances where the problems are not compositionally related as expected. This is in contrast with the more general lifelong learning formulation of Equation 1, which makes no assumption about the relations between tasks. However, note that model sharing across tasks lies on a spectrum, where full sharing (Figure 3b) and no sharing (Figure 3a)—the most common approaches to the lifelong learning problem—form the two extrema, and compositionality (Figure 3c) is somewhere in between. Developing algorithms that can autonomously recover where the optimal solution to a set (or sequence) of learning tasks lies on this spectrum remains an open challenge. ## 6 Lifelong Reinforcement Learning The techniques discussed so far primarily deal with supervised learning tasks. The number of approaches that operate in the lifelong RL setting is substantially smaller. The following paragraphs describe some of the existing methods for lifelong RL in relation to the functional compositionality discussed in this article. Much like in the supervised setting, the majority of lifelong RL approaches rely on monolithic or nonmodular architectures, which as discussed in Section 4 inhibits the discovery of self-contained and reusable knowledge. These methods mainly use regularization techniques for avoiding forgetting, in a manner that is conceptually equivalent to the supervised methods of Sections 4.1.1 and 4.2.1. A prominent example is EWC (Kirkpatrick et al., 2017), a supervised method that imposes a quadratic penalty for deviating from earlier tasks' parameters, which has been directly applied to RL. One challenge in training RL models via EWC is that the vast exploration typically required to learn new RL tasks might be catastrophically damaging to the knowledge stored in the shared parameters. 
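As a concrete reference point for the regularization-based methods just mentioned, the EWC objective adds a quadratic penalty weighted by (an approximation of) the Fisher information of earlier tasks. The sketch below uses hand-set Fisher values and scalar parameters to illustrate the loss shape only; it is not the original implementation.

```python
def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic EWC-style penalty: parameters that were important for
    earlier tasks (large Fisher value) are anchored to their old values."""
    return 0.5 * lam * sum(f * (t - t0) ** 2
                           for f, t, t0 in zip(fisher, theta, theta_old))

def total_loss(task_loss, theta, theta_old, fisher, lam=1.0):
    """New-task loss plus the penalty for drifting from earlier solutions."""
    return task_loss(theta) + ewc_penalty(theta, theta_old, fisher, lam)

# Toy example: the new task wants all parameters at 1.0, while EWC anchors
# the first (important) parameter to its old value 0.0.
new_task_loss = lambda th: sum((t - 1.0) ** 2 for t in th)
print(total_loss(new_task_loss, theta=[1.0, 1.0],
                 theta_old=[0.0, 0.0], fisher=[10.0, 0.0]))  # 5.0
```

The second parameter (Fisher value 0) can move freely to fit the new task, while moving the first one is heavily penalized; in RL, the difficulty lies in how exploration-driven updates interact with this anchoring.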
Consequently, as an alternative approach, *progress & compress* first trains an auxiliary model for each new task and subsequently discards it and distills any new knowledge into the single shared model via an approximate version of EWC (Schwarz et al., 2018). While these methods in principle can handle task-agnostic settings, assuming that the input contains implicit cues about the task structure, in practice evaluations have tested them most often in the task-aware setting, typically in vision-based tasks (e.g., Atari games, Bellemare et al., 2013). Moreover, these works have dealt with limited lifelong settings, with relatively short sequences of tasks and permitting the agent to revisit earlier tasks several times for additional training experience. Even in these simplified settings, these methods have failed to achieve substantial transfer over an agent trained independently on each task, without any transfer. Kaplanis et al. (2019) trained a similar monolithic architecture in the task-agnostic setting, including for tasks with continuous distribution shifts. Their approach regularizes the policy to remain close to itself at different timescales via a KL-divergence penalty, and was evaluated on simulated continuous control tasks. Other approaches store experiences for future replay. Compared to the supervised methods of Sections 4.1.2 and 4.2.2, the use of experience replay to retain performance on earlier RL tasks requires a number of special considerations. For example, the data collected over the agent's training on each individual task is nonstationary, since the behavior of the agent changes over time. Isele and Cosgun (2018) proposed various techniques for selectively storing replay examples and compared their impact on performance. Another challenge is that, as the agent modifies the policy for earlier tasks, the distribution of the data stored for them no longer matches the distribution imposed by the agent's policy. Rolnick et al.
(2019) proposed using an importance sampling mechanism for limiting the effects of this distributional shift. While the former example considered mostly grid-world-style tasks in the task-aware setting, where the input contains no information about the task relations, the latter considered vision-based tasks in the task-agnostic setting, under the assumption that the observation space for each task contains sufficient information for distinguishing it from others. However, the challenges of replay in RL have limited the applicability of these methods to short sequences of two or three tasks, still with the ability to revisit previous tasks. Mendez et al. (2022b) established a connection between these issues and off-line RL (Levine et al., 2020), and leveraged it to develop a replay mechanism that operates on sequences of tens of tasks without revisits. Berseth et al. (2022) also used replay in longer sequences of RL tasks in conjunction with off-line meta-RL to achieve lifelong transfer. While most lifelong RL works have considered the use of a single monolithic structure for learning a sequence of tasks, some classical examples have instead followed the ELLA framework of Ruvolo and Eaton (2013) to devise similar RL variants. PG-ELLA follows the dictionary-learning mechanics of ELLA, but replaces the supervised models that form ELLA's dictionary by policy factors (Bou Ammar et al., 2014). An extension of this approach supports cross-domain transfer by projecting the dictionary onto domain-specific policy spaces (Bou Ammar et al., 2015), and another extension leverages task descriptors to achieve zero-shot transfer to unseen tasks (Isele et al., 2016; Rostami et al., 2020). Zhao et al. (2017) followed a similar dictionary-learning formulation for deep networks in the batch MTL setting, replacing all matrix operations with equivalent tensor operations. 
Like ELLA in the supervised setting (Section 4.3), this represents a rudimentary form of aggregated composition per the categorization of Figure 4. The primary challenge that ELLA-based approaches face is that the dictionary-learning technique requires first discovering a policy for each task in isolation (i.e., ignoring any information from other tasks) to determine similarity to previous policies, before factoring the parameters to improve performance via transfer. The downside is that the agent does not benefit from prior experience during initial exploration, which is critical for data-efficient training in lifelong RL. While these methods target continuous control tasks, their evaluations have considered the interleaved MTL setting, where the agent revisits tasks multiple times before evaluation. Mendez et al. (2020) developed LPG-FTW, a mechanism that uses multiple models like PG-ELLA variants, but learns these models directly via RL training like single-model methods. This enables the method to be flexible and handle highly varied tasks while also benefiting from prior information during the learning process, thus accelerating the training. A similar approach in the context of model-based RL represents the dynamics of the tasks via an aggregation of supervised models (Nagabandi et al., 2019). Other approaches instead use an entirely separate model for each task. One such method leverages shared knowledge in the form of a metamodel that informs exploration strategies to task-specific models, resulting in linear growth of the model parameters (Garcia and Thomas, 2019). Another popular example shares knowledge via lateral connections in a deep network, resulting in quadratic growth of the model parameters (Rusu et al., 2016). Both of these approaches are infeasible in the presence of large numbers of tasks. 
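At their core, the dictionary-learning mechanics shared by the ELLA-style methods discussed above reduce to a linear factorization of each task's parameters, θ^(t) = L s^(t), with a shared dictionary L and a (sparse) task-specific code s^(t). A minimal sketch with a hand-set dictionary and codes (illustrative values, not from any cited system):

```python
def factored_params(L, s):
    """theta = L @ s: task parameters as a linear combination of shared
    latent components (the columns of the dictionary L)."""
    return [sum(row[j] * s[j] for j in range(len(s))) for row in L]

# Dictionary with three parameters (rows) and two latent components (columns).
L = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]

theta_task1 = factored_params(L, [1.0, 0.0])  # uses component 0 only
theta_task2 = factored_params(L, [0.5, 0.5])  # blends both components
print(theta_task1)  # [1.0, 0.0, 1.0]
print(theta_task2)  # [0.5, 0.5, 1.0]
```

Learning consists of alternating between fitting the task code s^(t) for the current task and updating the shared dictionary L, which is what amortizes knowledge across tasks.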
A separate line of lifelong RL work has departed completely from the notion of tasks and has instead learned information about the environment in a self-guided way. The seminal approach in this area is *Horde*, which learns a collection of general value functions for a variety of signals and uses those as a knowledge representation of the environment (Sutton et al., 2011). A more recent approach learns latent skills that enable the agent to reset itself in the environment in a way that encourages exploration (Xu et al., 2020). ## 7 Compositional Reinforcement Learning The most common form of composition studied in RL has been temporal composition. One influential work in this area is the *options* framework of Sutton et al. (1999). At a high level, options represent temporally extended courses of action, which can be thought of as skills. Once the agent has determined a suitable set of options, it can then learn a higher-level policy directly over the options. In the language used so far, each option is a module or component, and the high-level policy is the structural configuration over modules. Traditional work in temporal composition has assumed that the environment provides the structure a priori as a fixed set of options (or information about how to learn each option, such as subgoal rewards). For example, the approach of Lee et al. (2019) learns a policy for transitioning from one skill to the next, given a set of pretrained skills. However, other methods automatically discover both the modules and the configuration over them. *Option-critic* extends actor-critic methods to handle option discovery via an adaptation of the policy gradient theorem (Bacon et al., 2017). Another approach uses a large off-line data set to automatically extract skills and later meta-learns policies on top of these extracted skills (Nam et al., 2022). 
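The control flow of a high-level policy over options can be sketched as a nested loop: the high level picks an option, the option's internal policy acts until its termination condition fires, and control then returns to the high level. The toy chain environment and all names below are purely illustrative assumptions for this sketch.

```python
def run_with_options(env_step, state, high_level_policy, horizon=100):
    """SMDP-style execution: options are temporally extended actions."""
    t = 0
    while t < horizon:
        option = high_level_policy(state)     # high-level decision
        while t < horizon:                    # option runs until termination
            action = option["policy"](state)
            state = env_step(state, action)
            t += 1
            if option["terminate"](state):
                break
    return state

# Toy 1-D chain: move right until reaching position 5, then hold there.
go_right = {"policy": lambda s: +1, "terminate": lambda s: s >= 5}
hold = {"policy": lambda s: 0, "terminate": lambda s: True}
policy_over_options = lambda s: go_right if s < 5 else hold
print(run_with_options(lambda s, a: s + a, 0, policy_over_options,
                       horizon=10))  # 5
```

Note that only one option is active at any time; this temporal modularity is precisely what contrasts with the functional composition discussed next.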
Recent work has developed mechanisms for skill chaining other than an explicit high-level policy, such as additively combining abstract skill embeddings (Devin et al., 2019) or multiplicatively combining policies (Peng et al., 2019). The high expressive power of a policy over options enables learning arbitrary graphs over the modules according to the categorization of Figure 4. However, these approaches have primarily been limited to toy applications, with some exceptions considering simple visual-based or continuous control tasks.

Figure 9: Functional vs. temporal composition of RL policies. The policy to pick-and-place a can with an IIWA arm is decomposed along a temporal or a functional axis. Temporally extended actions correspond to skills, such as grasp, lift, or place, which are transferable to a place-on-shelf task with the IIWA arm. Functional components correspond to processing stages, such as grasp-pose detection, trajectory planning, or motor control, which are transferable to a pick-and-place task with a different arm, like a Jaco. Temporal modules are active one at a time, while multiple functional components activate simultaneously.

Crucially, the problem considered in this article differs in that the functional composition occurs at every time step, as opposed to the temporal chaining of options. As illustrated in Figure 9, these two dimensions are orthogonal, and both capture real-world settings in which composition would greatly benefit the learning process of artificial agents. More concretely, the primary conceptual difference between hierarchical RL and functionally compositional RL is that hierarchical RL considers the composition of sequences of actions in time, whereas functionally compositional RL considers the composition *of functions* that, when combined, form a full policy.
In particular, for a given compositional task, the agent uses all functions that make up its modular policy at every time step to determine the action to take (given the current state). Going back to the example of robot programming from Section 2.2, modules in the compositional RL formulation might correspond to sensor processing units, path planners, or robot motor drivers. In programming, at every time step, the sensory input passes through modules in some preprogrammed sequential order, which finally outputs the motor torques to actuate the robot. Similarly, in compositional RL the state observation passes through the different modules, used in combination to execute the agent's actions. The choice to focus on this form of functional composition permits us to analyze supervised and RL methods under the common lens of compositional problem graphs described in Section 2.2. Hierarchical RL takes a complementary approach. Instead, each "module" (e.g., an option) is a self-contained policy that receives as input the state observation and outputs an action. Each of these options operates in the environment, for example to reach a particular subgoal state. Upon termination of an option, the agent selects a different option to execute, starting from the state reached by the previous option. In contrast, the compositional RL framework assumes that the agent uses a single policy to solve a complete task. An integrated approach is possible that decomposes the problem along both a functional axis and a temporal axis. This would enable selecting a different functionally modular policy at different stages of solving a task, simplifying the amount of information that each module should encode. Conversely, it would enable options to be made up of functionally modular components, simplifying the form of the options themselves and enabling reuse *across* options. Research in this direction could drastically improve RL data efficiency. 
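The per-time-step pipeline described above can be sketched as a chain of functional modules, each consuming the previous module's output; unlike an option, every module runs at every step. The module names and toy dynamics below are illustrative assumptions, not components of any cited system.

```python
def perception(obs):
    """Sensor-processing module: extract task-relevant features."""
    return {"goal_dir": 1.0 if obs["goal"] > obs["pos"] else -1.0}

def planner(features):
    """Planning module: turn features into a local plan."""
    return {"step": features["goal_dir"]}

def controller(plan):
    """Motor module: turn the plan into an actuation command."""
    return 0.5 * plan["step"]

def composed_policy(obs):
    # Functional composition: all modules are used at every time step.
    return controller(planner(perception(obs)))

print(composed_policy({"pos": 0.0, "goal": 3.0}))   # 0.5
print(composed_policy({"pos": 4.0, "goal": 3.0}))   # -0.5
```

Reuse here happens at the level of processing stages: swapping `controller` for a different arm's motor module leaves `perception` and `planner` transferable, mirroring the functional axis of Figure 9.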
With this in mind, note that the discussion of works on skill discovery, which is a vast literature on its own, is by no means comprehensive. We refer the reader to separate surveys for a deeper study of this line of work (Barto and Mahadevan, 2003; Al-Emran, 2015; Pateria et al., 2021). Other forms of hierarchical RL have considered learning state abstractions that enable the agent to more easily solve tasks (Dayan and Hinton, 1993; Dietterich, 2000; Vezhnevets et al., 2017). While these are also related, they have mainly focused on a two-layer abstraction. This represents a simple form of composition where the agent executes actions based on a learned abstracted state. Instead, general functional composition considers arbitrarily many layers of abstraction that help the learning of both state and action representations. Most works on both temporal composition and state abstractions have been in the STL setting, where the agent must simultaneously learn to solve the individual task and learn to decompose its knowledge into suitable components. In practice, this has implied that there is not much benefit to learning such a decomposition, since the learning itself becomes more costly. However, other investigations have considered learning compositional structures for multiple tasks, in particular in the lifelong setting. Brunskill and Li (2014) developed a theoretical framework that automatically discovers options and policies over options throughout a sequence of tasks. A more practical approach trains each option separately on a subtask, and later reuses these options for learning subsequent tasks (Tessler et al., 2017). A recent model-based approach learns skills in an off-line phase that subsequently enable the agent to learn in a nonstationary lifetime without explicit tasks (Lu et al., 2021). Other work studied state abstractions from a theoretical perspective in the lifelong setting (Abel et al., 2018). 
When learning such compositional structures in a lifelong setting, the agent amortizes the cost of decomposing knowledge over the multiple tasks, yielding substantial benefits when the components capture knowledge that is useful in the future. Another form of composition studied in the RL literature has been to learn behaviors that solve different objectives and compose those behaviors to achieve combined objectives. Todorov (2009) showed that the linear composition of value functions is optimal in the case of linearly solvable Markov decision processes (MDPs). A similar result showed that successor features can be combined to solve this type of combined objectives (Barreto et al., 2018). A recent work leveraged this result to develop a simple algorithm for constructing a set of policies that can be combined into optimal policies for all possible tasks expressible with a particular set of successor features (Alver and Precup, 2022). One common terminology for discussing how this process combines objectives is logical composition. Intuitively, if an agent has learned to solve objective A and objective B separately, it can then combine its behaviors to solve A AND B or A OR B. This intuition has driven theoretical and empirical results in the setting of entropy-regularized RL (Haarnoja et al., 2018; Van Niekerk et al., 2019). One approach in this setting explicitly modularizes the inputs to a neural network to handle each of the different goals, aided by multi-hot indicators of the active goals (Colas et al., 2019). Nangue Tasse et al. (2020; 2022) later formalized the intuition of logical composition in the lifelong setting. Other recent work developed this idea of composing multiple simultaneous behaviors specifically for robotic control (Cheng et al., 2021; Li et al., 2021a; Bylard et al., 2021). 
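A common way to instantiate this logical composition is via simple elementwise operators over value functions, e.g., min for conjunction and max for disjunction, as in the boolean task algebra of Nangue Tasse et al.; the formal guarantees in the cited works hold only under their specific assumptions. The hand-set toy Q-functions below are assumptions for illustration:

```python
def q_and(q1, q2):
    """Conjunction: an action is good only if it is good for both goals."""
    return lambda s, a: min(q1(s, a), q2(s, a))

def q_or(q1, q2):
    """Disjunction: an action is good if it is good for either goal."""
    return lambda s, a: max(q1(s, a), q2(s, a))

# Hand-set Q-functions over one state and two actions {0, 1}.
q_goal_a = lambda s, a: 1.0 if a == 0 else 0.0   # only action 0 achieves A
q_goal_b = lambda s, a: 0.8 if a == 0 else 1.0   # both actions help B

q_both = q_and(q_goal_a, q_goal_b)
best = max([0, 1], key=lambda a: q_both("s0", a))
print(best)  # 0: the only action acceptable for A AND B
```

Because composition here aggregates entire value functions, each component must already solve a full RL problem; this is the limitation that motivates the more general functional composition discussed next.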
A related line of work designed a formal language for specifying logically compositional tasks (Jothimurugan et al., 2019), and later used a similar language to learn hierarchical policies (Jothimurugan et al., 2021). A closely related work used differentiable program search to learn compositional policies (Qiu and Zhu, 2022). These compositional approaches require a specification of compositional objectives. Another related vein has sought to decompose the reward into such components, and learn separate policies for each component that can later be combined. These works have decomposed the reward manually (Van Seijen et al., 2017) or automatically (Lin et al., 2019; 2020). In practice, these logic-based approaches typically combine behaviors via simple aggregation of value functions (e.g., weighted combination or addition), which limits their applicability to components that represent solutions to entire RL problems. In contrast, the more general functional composition separates each policy itself into components, such that these components combine to form full policies. A handful of works have considered functional composition in RL with modular neural networks, resulting in chaining or full graph compositional solutions. A first method handles a setting where each task is a combination of one robot and one task objective (Devin et al., 2017). Given prior knowledge of this compositional structure, the authors manually crafted chained modular architectures and trained the agent to learn the parameters of the neural modules. Other works have instead assumed no knowledge of the task structure and learned them autonomously, under the assumption that the inputs contain implicit cues of what distinguishes the modular structure of one task from another. 
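As a concrete illustration of chained functional composition in the spirit of Devin et al. (2017), the sketch below pairs robot-specific modules with task-specific modules; the robot and task names, dimensions, and fixed random linear "networks" are illustrative stand-ins for trained neural modules:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_module(in_dim, out_dim):
    """Stand-in for a trainable neural module: a fixed random linear map."""
    W = rng.standard_normal((out_dim, in_dim))
    return lambda x: np.tanh(W @ x)

# One module per robot (observation -> shared 4-d representation) and one
# per task objective (representation -> 2-d action). The robots, tasks,
# and dimensions here are assumptions for illustration.
robot_modules = {"arm3dof": linear_module(6, 4), "arm4dof": linear_module(8, 4)}
task_modules = {"reach": linear_module(4, 2), "push": linear_module(4, 2)}

def composed_policy(robot, task):
    """Chain composition: observation -> robot module -> task module -> action."""
    return lambda obs: task_modules[task](robot_modules[robot](obs))

# Any robot/task pairing yields a complete policy, including pairings
# that were never trained together.
pi = composed_policy("arm4dof", "push")
print(pi(np.zeros(8)).shape)  # (2,)
```

Because each full policy is a chain of reusable components, an agent that has seen only some robot/task combinations can, in principle, assemble zero-shot policies for the remaining ones.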
In this line, one recent technique learns *recurrent independent mechanisms* by encouraging modules to become independent via a competition procedure, and combines the modules in general graph structures (Mittal et al., 2020; Goyal et al., 2021; 2022). These methods were originally developed for the supervised setting and were directly applied to RL, and have primarily operated in the STL setting. Another closely related method also automatically learns a mapping from inputs to modular structures in the MTL setting, with applications to noncompositional robotic manipulation (Yang et al., 2020). More recently, one approach was developed that tackles explicitly compositional RL tasks in a lifelong setting, with applications to robotic manipulation (Mendez et al., 2022b). Compositionality has a long history in RL, given the promise that learning smaller, self-contained policies might make RL of complex tasks feasible. This has led to a wide diversity of ways to define composition. For completeness, this paragraph discusses other recent approaches to compositionality that have received less attention and bear less connection to the work presented here. As one example, Pathak et al. (2019) sought to decompose policies via a graph neural network such that each node in the graph corresponds to a link in a modular robot. A later version of this work extended the method by considering a setting where all links are morphologically equivalent in terms of their size and motor, and ensuring that all modules learn the same policy (Huang et al., 2020). Others have learned object-centric embeddings in order to generalize to environments with different object configurations (Li et al., 2020a; Mu et al., 2020). Li et al. 
(2021b) developed an approach related to skill discovery, but instead of combining skills, the agent learns to solve progressively harder tasks by truncating demonstrated trajectories in an imitation learning setting, such that the starting state leads to a task solvable by the current agent. The understanding of the modularity of RL agents at a fundamental level has received very little attention. Perhaps the one exception has been the recent work of Chang et al. (2021), which studied the modularity of credit assignment as the ability of an algorithm to learn mechanisms for choosing actions that can be modified independently of the mechanisms for choosing other actions. The conclusion of this study was that some single-step temporal difference methods are modular, but policy gradient methods are not.

## 7.1 Benchmarking Compositional Reinforcement Learning

Large-scale, standardized benchmarks have been key to the acceleration of deep learning research (e.g., ImageNet, Deng et al., 2009). Inspired by this, multiple attempts have sought to construct equivalent benchmarks for deep RL, leading to popular evaluation domains in both discrete- (Bellemare et al., 2013; Vinyals et al., 2017) and continuous-action (Brockman et al., 2016; Tunyasuvunakool et al., 2020) settings. While these benchmarks have promoted deep RL advancements, they are restricted to STL—that is, they design each task to be learned in isolation. Consequently, work in multitask and lifelong RL has resorted to ad hoc evaluation settings, slowing down progress. Recent efforts have sought to close this gap by creating evaluation domains with multiple tasks that share a common structure that is (hopefully) transferable across the tasks. One example varied dynamical system parameters of continuous control tasks (e.g., gravity) to create multiple related tasks (Henderson et al., 2017). Other work created a grid world evaluation domain with tasks of progressive difficulty (Chevalier-Boisvert et al., 2019).
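The parameter-variation recipe of Henderson et al. (2017) can be sketched without any RL library: each task below shares the same pendulum dynamics but differs in its gravity constant (the specific constants, ranges, and step function are illustrative assumptions, not the benchmark's actual implementation):

```python
import math
import random

def make_pendulum_task(g):
    """One task = pendulum dynamics with a particular gravity constant g.
    Returns a step function (theta, omega, torque) -> (theta, omega).
    Mass, length, and time step are illustrative."""
    m, l, dt = 1.0, 1.0, 0.05
    def step(theta, omega, torque):
        omega += dt * (-(g / l) * math.sin(theta) + torque / (m * l * l))
        theta += dt * omega
        return theta, omega
    return step

# A family of related tasks: shared structure, varied dynamics.
rng = random.Random(0)
tasks = [make_pendulum_task(g=rng.uniform(1.0, 15.0)) for _ in range(5)]

# The same state-action pair evolves differently under each task.
print([round(task(math.pi / 4, 0.0, 0.0)[1], 3) for task in tasks])
```

Because the tasks share everything except one physical parameter, it is plausible that an agent could transfer most of its knowledge across them, which is exactly the kind of structure such multitask benchmarks aim to probe.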
In the continual learning setting, a recent benchmark evaluates approaches on multiagent coordination (Nekoei et al., 2021). Specifically in the context of robotics, recent works have created large sets of tasks for evaluating MTL, lifelong learning, and meta-learning algorithms (Yu et al., 2019; James et al., 2020; Wołczyk et al., 2021). Despite this recent progress, it remains unclear exactly what an agent can transfer between tasks in these benchmarks, and so existing algorithms are typically limited to transferring neural network parameters in the hopes that they discover reusable information. Unfortunately, typical evaluations of compositionality use such standard benchmarks in both the supervised and RL settings. While this enables fair performance comparisons, it fails to give insight into the agent's ability to find meaningful compositional structures. Some notable exceptions exist for evaluating compositional generalization in supervised learning (Bahdanau et al., 2018; Lake and Baroni, 2018; Sinha et al., 2020; Keysers et al., 2020). Recently, Mendez et al. (2022a) developed an RL benchmark for functional compositional generalization in a robotic manipulation setting with varied robot arms, and Gur et al. (2021) developed a complementary benchmark for temporal composition. Another related work procedurally created robotics tasks by varying dynamical parameters to study causality in RL (Ahmed et al., 2021), considering a single robot arm dealing with continuous variations in the physical properties of objects. These evaluation domains represent initial steps toward objectively quantifying the compositional capabilities of RL approaches. Much work remains to be done to develop both algorithms that tackle these difficult benchmarks, and new benchmarks and metrics that more generally evaluate and shed light on compositionality.
## 8 Summary And Directions For Future Work

This article reviewed the state of prior research on the topics of lifelong learning and compositional learning, and categorized them along six dimensions. In summary, lifelong or continual learning has primarily focused on the problem of catastrophic forgetting in the supervised setting, but has mostly overlooked how to obtain knowledge that can be reusable for future tasks. On the other hand, compositional learning has developed methods for obtaining reusable knowledge, but has done so in the simpler case of MTL, where the agent trains on all tasks simultaneously. Few works have combined these two lines of work by developing algorithms that discover reusable compositional knowledge in a lifelong setting. Similarly, not much effort has been placed in porting lifelong learning techniques to the RL setting, and methods in this line have had shortcomings that have prevented their application to complex and diverse sequences of tasks. In particular, the form of functional composition discussed here has been severely understudied in the RL literature. Despite the relative lack of attention that lifelong compositional learning has received in the literature, it is the authors' perspective that this joint problem space holds promise for tremendously fruitful advances. The main arguments that support this claim are outlined below:

- **Knowledge reuse.** While the field of lifelong learning has focused primarily on the problem of catastrophic forgetting in recent years, lifelong learning really is the study of the discovery and effective utilization of reusable knowledge. An agent tasked with efficiently learning over a stream of non-*i.i.d.* data—presented as a sequence of tasks or otherwise—should leverage any information gathered earlier that is related to any aspect of the problem in the current state of the distribution. This form of reuse would permit far more data-efficient learning and broader generalization.
Compositionality is a natural, extremely general mechanism for reusing knowledge, by composing chunks of knowledge in various combinations to solve novel problems over time. Moreover, thanks to combinatorial explosion, compositionality enables reuse across a multitude of problems at scale.

- **Modular knowledge updates.** Explicitly modular architectures offer another clear advantage over nonmodular ones: their potential for updating individual elements of the model to account for new information. This is especially relevant for avoiding catastrophic forgetting, since at any point in time the agent would only need to modify knowledge explicitly used to solve the current task or data point, which should represent only a small portion of the overall knowledge the agent has accumulated over its lifetime. This property is also advantageous for simplifying training, since limiting gradient updates to only relevant modules reduces the difficulty of credit assignment.

- **Early evidence from existing approaches.** As a more pragmatic argument, the few existing compositional approaches to lifelong learning have exhibited drastic performance improvements over competing techniques, especially in settings with highly diverse tasks. As examples, Mendez et al. (2020) demonstrated an improvement of 82.5% over noncompositional methods, and Ostapenko et al. (2021) demonstrated zero-shot compositional generalization to novel task combinations.

- **Advances in (non-lifelong) compositional learning.** Some years ago, the problem of automatically learning compositional neural networks seemed hopeless, which likely (understandably) drove lifelong learning researchers away from these types of models. However, the landscape has changed significantly in recent years, with many novel mechanisms enabling the learning of high-quality modular architectures.
In particular, such approaches vary in the types of assumptions they make about the information the designers must provide to the agent about the relations among tasks. For example, Andreas et al. (2016) assume that the structure in which modules are combined must be given (explicit information), Zaremba et al. (2016) assume that the modules themselves are given and the agent must discover how to combine them (explicit information), Hu et al. (2017) assume textual descriptions directly encode how to combine modules (explicit information), Kirsch et al. (2018) assume the input contains cues of how a solution might be encoded (implicit information), and Alet et al. (2018) assume there is no information at all in the input and it must be discovered via search (no information). As one would expect, these choices impose significant trade-offs between the difficulty of designing the systems and the amount of data they require to learn (and, correspondingly, the quality of solutions they obtain given a fixed, limited amount of data). That being said, the diversity of possible assumptions would permit developing lifelong learning methods for various problem settings with varying degrees of data and domain-knowledge availability.

As a nascent field, lifelong compositional learning has a range of open questions, which future investigations should tackle. The following subset of such questions would likely substantially impact the broader field of AI:

- **Measuring compositionality.** Understanding precisely and quantitatively how well a solution captures the compositional nature of a problem remains a largely unsolved problem. Existing works have primarily focused on developing benchmarks that intuitively require the agent to exhibit compositional reasoning in order to solve the tasks, with a handful of exceptions that have gauged the ability of approaches to recombine existing solutions.
Advancements toward quantifying compositionality would provide insight into directions to explore for creating better compositional methods.

- **Real-world applications.** While some evaluations in existing works have been inspired by realistic robotic applications, it is uncertain how well existing approaches would fare upon deployment on physical robots. More broadly, this is a challenge faced by the larger research field. Benchmark data sets and RL environments enable fast development and fair performance comparisons, both of which are useful for accelerating progress. However, *applied* research must progress alongside these benchmark-driven efforts. In particular, lifelong learning has so far been disconnected from real-world deployments, partly because of the artificial nature of task-based lifelong learning. Future work tackling realistic applications with embodied agents to complement fundamental lifelong learning developments would have massive impact.

- **Task-free lifelong learning.** As hinted at above, one significant step in the direction of deployed lifelong learning would be to move away from the task-based formulation. Some recent works have placed efforts in this direction, but this still remains a severely underdeveloped area. In particular, it is critical that future instantiations of the problem do not assume that individual inputs contain sufficient information to determine the agent's current objective. This assumption, which most task-agnostic works to date make, is unfortunately unrealistic. Instead, real-world (embodied) lifelong learning would require the agent to explore and study the environment over a stream of temporally correlated inputs to discover its current objective.

- **Combining temporal and functional composition.** As discussed in Section 7, temporal and functional composition capture complementary dimensions of realistic problems. The combination of these two families of techniques has not been investigated to date.
Devising mechanisms that can decompose temporally extended actions or skills into functional modules, such that these components can be reused across multiple skills, could lead to drastic data efficiency gains.

- **Flexible compositionality.** Existing works have evaluated approaches in two settings: noncompositional and compositional. Noncompositional evaluations serve to demonstrate the flexibility of the methods, while compositional evaluations serve to study the compositional properties of the algorithms. The real world is neither of these two extremes: it has a multitude of compositional properties, but many tasks require highly specialized knowledge. Devising techniques (and corresponding evaluation domains) that explicitly reason about when compositional or specialized knowledge is required would constitute another significant step toward deployed lifelong learning.

- **Other forms of composition.** While the focus of this article is to discuss the connections between lifelong learning and functional composition, it should also be clear from the surveyed works that there are numerous other forms of composition. Specifically, the RL community has developed a variety of temporal, representational, logical, and morphological views of the notion of compositionality. Each of these formulations is promising toward developing agents that accumulate knowledge and compose it in combinatorially many ways to solve a wide range of diverse tasks. Developing concrete instantiations of this intuition could potentially be highly impactful.

- **Moving beyond deep learning.** This final comment encourages future work to look beyond deep learning in the development of lifelong and compositional learners. The current trend in the field is to leverage neural network modules as the main form of compositional structures.
The reason for this choice is practical: neural networks and backpropagation are today the most powerful tools available in ML, and they permit abstracting away the many nuances of statistical learning and optimization, and focusing instead on the notion of knowledge compositionality. Historical evidence suggests that the tools of the future will be different from (the current version of) deep learning; consequently, future research should not focus exclusively on deep learning, but should also develop approaches to lifelong compositional learning that look outside of it.

## Acknowledgments

We thank the anonymous reviewers for their valuable comments and suggestions. J. A. Mendez is funded by an MIT-IBM Distinguished Postdoctoral Fellowship. The research presented in this article was partially supported by the DARPA Lifelong Learning Machines program under grant FA8750-18-2-0117, the DARPA SAIL-ON program under contract HR001120C0040, the DARPA ShELL program under agreement HR00112190133, and the Army Research Office under MURI grant W911NF20-1-0080. Any opinions, findings, conclusions, or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of DARPA, the U.S. Army, or the United States Government.

## References

Abel, D., Arumugam, D., Lehnert, L., and Littman, M. (2018). State abstractions for lifelong reinforcement learning. In *Proceedings of the 35th International Conference on Machine Learning, ICML-18*, pages 10–19.

Achille, A., Eccles, T., Matthey, L., Burgess, C., Watters, N., Lerchner, A., and Higgins, I. (2018). Lifelong disentangled representation learning with cross-domain latent homologies. In *Advances in Neural Information Processing Systems 31, NeurIPS-18*, pages 9873–9883.

Adel, T., Zhao, H., and Turner, R. E. (2020). Continual learning with adaptive weights, CLAW. In *8th International Conference on Learning Representations, ICLR-20*.
Agarwala, A., Das, A., Juba, B., Panigrahy, R., Sharan, V., Wang, X., and Zhang, Q. (2021). One network fits all? Modular versus monolithic task formulations in neural networks. In *9th International Conference on Learning Representations, ICLR-21*.

Ahmed, O., Träuble, F., Goyal, A., Neitz, A., Wuthrich, M., Bengio, Y., Schölkopf, B., and Bauer, S. (2021). CausalWorld: A robotic manipulation benchmark for causal structure and transfer learning. In *9th International Conference on Learning Representations, ICLR-21*.

Ahn, H., Cha, S., Lee, D., and Moon, T. (2019). Uncertainty-based continual learning with adaptive regularization. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*.

Aksan, E., Deselaers, T., Tagliasacchi, A., and Hilliges, O. (2020). CoSE: Compositional stroke embeddings. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 10041–10052.

Akula, A., Jampani, V., Changpinyo, S., and Zhu, S.-C. (2021). Robust visual reasoning via language guided neural module networks. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 11041–11053.

Akyürek, E., Akyürek, A. F., and Andreas, J. (2021). Learning to recombine and resample data for compositional generalization. In *9th International Conference on Learning Representations, ICLR-21*.

Al-Emran, M. (2015). Hierarchical reinforcement learning: A survey. *International Journal of Computing and Digital Systems*, 4(2).

Alet, F., Lozano-Perez, T., and Kaelbling, L. P. (2018). Modular meta-learning. In *Proceedings of the 2nd Conference on Robot Learning, CoRL-18*, pages 856–868.

Alet, F., Weng, E., Lozano-Pérez, T., and Kaelbling, L. P. (2019). Neural relational inference with fast modular meta-learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*.
Aljundi, R., Belilovsky, E., Tuytelaars, T., Charlin, L., Caccia, M., Lin, M., and Page-Caccia, L. (2019a). Online continual learning with maximal interfered retrieval. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*.

Aljundi, R., Chakravarty, P., and Tuytelaars, T. (2017). Expert gate: Lifelong learning with a network of experts. In *Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR-17*, pages 3366–3375.

Aljundi, R., Lin, M., Goujaud, B., and Bengio, Y. (2019b). Gradient based sample selection for online continual learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*.

Alver, S. and Precup, D. (2022). Constructing a good behavior basis for transfer using generalized policy updates. In *10th International Conference on Learning Representations, ICLR-22*.

Andreas, J. (2019). Measuring compositionality in representation learning. In *7th International Conference on Learning Representations, ICLR-19*.

Andreas, J., Rohrbach, M., Darrell, T., and Klein, D. (2016). Neural module networks. In *Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR-16*, pages 39–48.

Arad Hudson, D. and Zitnick, L. (2021). Compositional transformers for scene generation. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 9506–9520.

Atzmon, Y., Kreuk, F., Shalit, U., and Chechik, G. (2020). A causal view of compositional zero-shot recognition. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 1462–1473.

Ayub, A. and Wagner, A. (2021). EEC: Learning to encode and regenerate images for continual learning. In *9th International Conference on Learning Representations, ICLR-21*.

Bacon, P.-L., Harb, J., and Precup, D. (2017). The option-critic architecture.
In *Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI-17*, pages 1726–1734.

Bahdanau, D., Murty, S., Noukhovitch, M., Nguyen, T. H., de Vries, H., and Courville, A. (2018). Systematic generalization: What is required and can it be learned? In *6th International Conference on Learning Representations, ICLR-18*.

Banayeeanzade, M., Mirzaiezadeh, R., Hasani, H., and Soleymani, M. (2021). Generative vs. discriminative: Rethinking the meta-continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 21592–21604.

Barreto, A., Borsa, D., Quan, J., Schaul, T., Silver, D., Hessel, M., Mankowitz, D., Zidek, A., and Munos, R. (2018). Transfer in deep reinforcement learning using successor features and generalised policy improvement. In *Proceedings of the 35th International Conference on Machine Learning, ICML-18*, pages 501–510.

Barto, A. G. and Mahadevan, S. (2003). Recent advances in hierarchical reinforcement learning. *Discrete Event Dynamic Systems*, 13(1-2):41–77.

Beaulieu, S., Frati, L., Miconi, T., Lehman, J., Stanley, K. O., Clune, J., and Cheney, N. (2020). Learning to continually learn. In *Proceedings of the 24th European Conference on Artificial Intelligence, ECAI-20*, pages 992–1001.

Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. *Journal of Artificial Intelligence Research, JAIR*, 47:253–279.

Benjamin, A., Rolnick, D., and Kording, K. (2019). Measuring and regularizing networks in function space. In *7th International Conference on Learning Representations, ICLR-19*.

Berseth, G., Zhang, Z., Zhang, G., Finn, C., and Levine, S. (2022). CoMPS: Continual meta policy search. In *10th International Conference on Learning Representations, ICLR-22*.

Biesialska, M., Biesialska, K., and Costa-jussà, M. R. (2020).
Continual lifelong learning in natural language processing: A survey. In *Proceedings of the 28th International Conference on Computational Linguistics, COLING-20*, pages 6523–6541.

Boitet, C. (1988). Pros and cons of the pivot and transfer approaches in multilingual machine translation. In *New Directions in Machine Translation*, pages 93–107.

Bornschein, J., Galashov, A., Hemsley, R., Rannen-Triki, A., Chen, Y., Chaudhry, A., He, X. O., Douillard, A., Caccia, M., Feng, Q., et al. (2022). NEVIS'22: A stream of 100 tasks sampled from 30 years of computer vision research. *arXiv preprint arXiv:2211.11747*.

Borsos, Z., Mutny, M., and Krause, A. (2020). Coresets via bilevel optimization for continual learning and streaming. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 14879–14890.

Bošnjak, M., Rocktäschel, T., Naradowsky, J., and Riedel, S. (2017). Programming with a differentiable Forth interpreter. In *Proceedings of the 34th International Conference on Machine Learning, ICML-17*, pages 547–556.

Bou Ammar, H., Eaton, E., Luna, J. M., and Ruvolo, P. (2015). Autonomous cross-domain knowledge transfer in lifelong policy gradient reinforcement learning. In *Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI-15*, pages 3345–3351.

Bou Ammar, H., Eaton, E., Ruvolo, P., and Taylor, M. (2014). Online multi-task learning for policy gradient methods. In *Proceedings of the 31st International Conference on Machine Learning, ICML-14*, pages 1206–1214.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. (2016). OpenAI Gym. *arXiv preprint arXiv:1606.01540*.

Brunskill, E. and Li, L. (2014). PAC-inspired option discovery in lifelong reinforcement learning. In *Proceedings of the 31st International Conference on Machine Learning, ICML-14*, pages 316–324.
Bunel, R., Hausknecht, M., Devlin, J., Singh, R., and Kohli, P. (2018). Leveraging grammar and reinforcement learning for neural program synthesis. In *6th International Conference on Learning Representations, ICLR-18*.

Buzzega, P., Boschini, M., Porrello, A., Abati, D., and Calderara, S. (2020). Dark experience for general continual learning: A strong, simple baseline. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 15920–15930.

Bylard, A., Bonalli, R., and Pavone, M. (2021). Composable geometric motion policies using multi-task pullback bundle dynamical systems. In *2021 IEEE International Conference on Robotics and Automation, ICRA-21*, pages 7464–7470.

Caccia, L., Belilovsky, E., Caccia, M., and Pineau, J. (2020). Online learned continual compression with adaptive quantization modules. In *Proceedings of the 37th International Conference on Machine Learning, ICML-20*, pages 1240–1250.

Cai, J., Shin, R., and Song, D. (2017). Making neural programming architectures generalize via recursion. In *5th International Conference on Learning Representations, ICLR-17*.

Cha, S., Hsu, H., Hwang, T., Calmon, F., and Moon, T. (2021). CPR: Classifier-projection regularization for continual learning. In *9th International Conference on Learning Representations, ICLR-21*.

Chang, M., Gupta, A., Levine, S., and Griffiths, T. L. (2019). Automatically composing representation transformations as a means for generalization. In *7th International Conference on Learning Representations, ICLR-19*.

Chang, M., Kaushik, S., Levine, S., and Griffiths, T. (2021). Modularity in reinforcement learning via algorithmic independence in credit assignment. In *Proceedings of the 38th International Conference on Machine Learning, ICML-21*, pages 1452–1462.

Chaudhry, A., Dokania, P. K., Ajanthan, T., and Torr, P. H. (2018).
Riemannian walk for incremental learning: Understanding forgetting and intransigence. In *Proceedings of the European Conference on Computer Vision, ECCV-18*, pages 532–547.

Chaudhry, A., Khan, N., Dokania, P., and Torr, P. (2020). Continual learning in low-rank orthogonal subspaces. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 9900–9911.

Chaudhry, A., Ranzato, M., Rohrbach, M., and Elhoseiny, M. (2019a). Efficient lifelong learning with A-GEM. In *7th International Conference on Learning Representations, ICLR-19*.

Chaudhry, A., Rohrbach, M., Elhoseiny, M., Ajanthan, T., Dokania, P. K., Torr, P. H., and Ranzato, M. (2019b). On tiny episodic memories in continual learning. *arXiv preprint arXiv:1902.10486*.

Chen, H.-J., Cheng, A.-C., Juan, D.-C., Wei, W., and Sun, M. (2020a). Mitigating forgetting in online continual learning via instance-aware parameterization. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 17466–17477.

Chen, T., Zhang, Z., Liu, S., Chang, S., and Wang, Z. (2021). Long live the lottery: The existence of winning tickets in lifelong learning. In *9th International Conference on Learning Representations, ICLR-21*.

Chen, X., Liang, C., Yu, A. W., Song, D., and Zhou, D. (2020b). Compositional generalization via neural-symbolic stack machines. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 1690–1701.

Chen, Y., Friesen, A. L., Behbahani, F., Doucet, A., Budden, D., Hoffman, M., and de Freitas, N. (2020c). Modular meta-learning with shrinkage. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 2858–2869.

Chen, Z. and Liu, B. (2018). Lifelong machine learning. In *Synthesis Lectures on Artificial Intelligence and Machine Learning #38*. Morgan & Claypool Publishers.

Cheng, C.-A., Mukadam, M., Issac, J., Birchfield, S., Fox, D., Boots, B., and Ratliff, N. (2021).
RMP*flow*: A geometric framework for generation of multitask motion policies. *IEEE Transactions on Automation Science and Engineering*, 18(3):968–987.

Chevalier-Boisvert, M., Bahdanau, D., Lahlou, S., Willems, L., Saharia, C., Nguyen, T. H., and Bengio, Y. (2019). BabyAI: First steps towards grounded language learning with a human in the loop. In *7th International Conference on Learning Representations, ICLR-19*.

Chrysakis, A. and Moens, M.-F. (2020). Online continual learning from imbalanced data. In *Proceedings of the 37th International Conference on Machine Learning, ICML-20*, pages 1952–1961.

Colas, C., Fournier, P., Chetouani, M., Sigaud, O., and Oudeyer, P.-Y. (2019). CURIOUS: Intrinsically motivated modular multi-goal reinforcement learning. In *Proceedings of the 36th International Conference on Machine Learning, ICML-19*, pages 1331–1340.

Csordás, R., van Steenkiste, S., and Schmidhuber, J. (2021). Are neural nets modular? Inspecting functional modularity through differentiable weight masks. In *9th International Conference on Learning Representations, ICLR-21*.

D'Amario, V., Sasaki, T., and Boix, X. (2021). How modular should neural module networks be for systematic generalization? In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 23374–23385.

Dayan, P. and Hinton, G. E. (1993). Feudal reinforcement learning. In *Advances in Neural Information Processing Systems 6, NIPS-93*, pages 271–278.

De Lange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., and Tuytelaars, T. (2022). A continual learning survey: Defying forgetting in classification tasks. *IEEE Transactions on Pattern Analysis and Machine Intelligence, TPAMI*, 44(7):3366–3385.

de Masson d'Autume, C., Ruder, S., Kong, L., and Yogatama, D. (2019). Episodic memory in lifelong language learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*.
[16, 48] Del Chiaro, R., Twardowski, B., Bagdanov, A., and van de Weijer, J. (2020). RATT: Recurrent attention to transient tasks for continual image captioning. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 16736–16748. [13, 48] Deng, D., Chen, G., Hao, J., Wang, Q., and Heng, P.-A. (2021). Flattening sharpness for dynamic gradient projection memory benefits in continual learning. In Advances in Neural Information Processing Systems 34, NeurIPS-21, pages 18710–18721. [13, 48] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In *Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition,* CVPR-09, pages 248–255. [5, 29, and 50] Derakhshani, M. M., Zhen, X., Shao, L., and Snoek, C. (2021). Kernel continual learning. In Proceedings of the 38th International Conference on Machine Learning, ICML-21, pages 2621–2631. [14, 48] Devin, C., Geng, D., Abbeel, P., Darrell, T., and Levine, S. (2019). Compositional plan vectors. In Advances in Neural Information Processing Systems 32, NeurIPS-19. [26, 49] Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. (2017). Learning modular neural network policies for multi-task and multi-robot transfer. In *Proceedings of the 2017 IEEE International Conference* on Robotics and Automation, ICRA-17, pages 2169–2176. [29, 50] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-19), pages 4171–4186. [15] Dietterich, T. G. (2000). Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, JAIR, 13:227–303. [28, 50] Du, Y., Li, S., and Mordatch, I. (2020). 
Compositional visual generation with energy based models. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 6637–6647. [24, 49] Duncker, L., Driscoll, L., Shenoy, K. V., Sahani, M., and Sussillo, D. (2020). Organizing recurrent network dynamics by task-computation to enable continual learning. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 14387–14397. [13, 48] Egorov, E., Kuzina, A., and Burnaev, E. (2021). BooVAE: Boosting approach for continual learning of VAE. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 17889–17901. [16, 48] Ehret, B., Henning, C., Cervera, M., Meulemans, A., von Oswald, J., and Grewe, B. F. (2021). Continual learning in recurrent neural networks. In *9th International Conference on Learning Representations,* ICLR-21. [18, 48] Fernando, C., Banarse, D., Blundell, C., Zwols, Y., Ha, D., Rusu, A. A., Pritzel, A., and Wierstra, D. (2017). PathNet: Evolution channels gradient descent in super neural networks. *arXiv preprint arXiv:1701.08734*. [23, 49] Frankle, J. and Carbin, M. (2019). The lottery ticket hypothesis: Finding sparse, trainable neural networks. In *7th International Conference on Learning Representations, ICLR-19*. [16] Garcia, F. and Thomas, P. S. (2019). A meta-MDP approach to exploration for lifelong reinforcement learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*, pages 5692–5701. [26, 49] Gatys, L. A., Ecker, A. S., and Bethge, M. (2016). Image style transfer using convolutional neural networks. In *Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR-16*, pages 2414–2423. [16] Gaunt, A. L., Brockschmidt, M., Kushman, N., and Tarlow, D. (2017). Differentiable programs with neural libraries. In *Proceedings of the 34th International Conference on Machine Learning, ICML-17*, pages 1213–1222. [23, 49] Ghazi, B., Panigrahy, R., and Wang, J. (2019). 
Recursive sketches for modular deep learning. In Proceedings of the 36th International Conference on Machine Learning, ICML-19, pages 2211–2220. [24, 49] Gordon, J., Lopez-Paz, D., Baroni, M., and Bouchacourt, D. (2020). Permutation equivariant models for compositional generalization in language. In *8th International Conference on Learning Representations,* ICLR-20. [24, 49] Goyal, A., Didolkar, A. R., Lamb, A., Badola, K., Ke, N. R., Rahaman, N., Binas, J., Blundell, C., Mozer, M. C., and Bengio, Y. (2022). Coordination among neural modules through a shared global workspace. In 10th International Conference on Learning Representations, ICLR-22. [29, 50] Goyal, A., Lamb, A., Hoffmann, J., Sodhani, S., Levine, S., Bengio, Y., and Schölkopf, B. (2021). Recurrent independent mechanisms. In *9th International Conference on Learning Representations, ICLR-21*. [29, 50] Guo, Y., Lin, Z., Lou, J.-G., and Zhang, D. (2020a). Hierarchical poset decoding for compositional generalization in language. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 6913–6924. [24, 49] Guo, Y., Liu, M., Yang, T., and Rosing, T. (2020b). Improved schemes for episodic memory-based lifelong learning. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 1023–1035. [14, 48] Gupta, G., Yadav, K., and Paull, L. (2020a). Look-ahead meta learning for continual learning. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 11588–11598. [14, 16, and 48] Gupta, N., Lin, K., Roth, D., Singh, S., and Gardner, M. (2020b). Neural module networks for reasoning over text. In *8th International Conference on Learning Representations, ICLR-20*. [21, 22, and 48] Gupta, P., Chaudhary, Y., Runkler, T., and Schuetze, H. (2020c). Neural topic modeling with continual lifelong learning. In *Proceedings of the 37th International Conference on Machine Learning, ICML-20*, pages 3907–3917. 
[18, 48] Gur, I., Jaques, N., Miao, Y., Choi, J., Tiwari, M., Lee, H., and Faust, A. (2021). Environment generation for zero-shot compositional reinforcement learning. In Advances in Neural Information Processing Systems 34, NeurIPS-21, pages 4157–4169. [30, 50] Haarnoja, T., Pong, V., Zhou, A., Dalal, M., Abbeel, P., and Levine, S. (2018). Composable deep reinforcement learning for robotic manipulation. In *Proceedings of the 2018 IEEE International Conference on Robotics* and Automation, ICRA-18, pages 6244–6251. [28, 50] Hadsell, R., Rao, D., Rusu, A. A., and Pascanu, R. (2020). Embracing change: Continual learning in deep neural networks. *Trends in Cognitive Sciences*, 24(12):1028—1040. [4] Henderson, P., Chang, W.-D., Shkurti, F., Hansen, J., Meger, D., and Dudek, G. (2017). Benchmark environments for multitask learning in continuous domains. ICML Lifelong Learning: A Reinforcement Learning Approach Workshop. [30, 50] Henning, C., Cervera, M., D'Angelo, F., von Oswald, J., Traber, R., Ehret, B., Kobayashi, S., Grewe, B. F., and Sacramento, J. (2021). Posterior meta-replay for continual learning. In *Advances in Neural Information* Processing Systems 34, NeurIPS-21, pages 14135–14149. [14, 48] Hu, R., Andreas, J., Rohrbach, M., Darrell, T., and Saenko, K. (2017). Learning to reason: End-to-end module networks for visual question answering. In *Proceedings of the 2017 IEEE International Conference* on Computer Vision, ICCV-17, pages 804–813. [2, 21, 22, 31, and 48] Hu, W., Lin, Z., Liu, B., Tao, C., Tao, Z., Ma, J., Zhao, D., and Yan, R. (2019). Overcoming catastrophic forgetting via model adaptation. In *7th International Conference on Learning Representations, ICLR-19*. [17, 48] Huang, W., Mordatch, I., and Pathak, D. (2020). One policy to control them all: Shared modular policies for agent-agnostic control. In *Proceedings of the 37th International Conference on Machine Learning, ICML-20*, pages 4455–4464. 
[29, 50] Hung, C.-Y., Tu, C.-H., Wu, C.-E., Chen, C.-H., Chan, Y.-M., and Chen, C.-S. (2019). Compacting, picking and growing for unforgetting continual learning. In Advances in Neural Information Processing Systems 32, NeurIPS-19. [15, 48] Hurtado, J., Raymond, A., and Soto, A. (2021). Optimizing reusable knowledge for continual learning via metalearning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 14150–14162. [17, 48] Huynh, D. and Elhamifar, E. (2020). Compositional zero-shot learning via fine-grained dense feature composition. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 19849–19860. [24, 49] Isele, D. and Cosgun, A. (2018). Selective experience replay for lifelong learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18, pages 3302–3309. [25, 49] Isele, D., Rostami, M., and Eaton, E. (2016). Using task features for zero-shot knowledge transfer in lifelong learning. In *Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence,* IJCAI-16, pages 1620–1626. [26, 49] James, S., Ma, Z., Arrojo, D. R., and Davison, A. J. (2020). RLBench: The robot learning benchmark & learning environment. *IEEE Robotics and Automation Letters*, 5(2):3019–3026. [30, 50] Javed, K. and White, M. (2019). Meta-learning representations for continual learning. In Advances in Neural Information Processing Systems 32, NeurIPS-19. [18, 48] Jerfel, G., Grant, E., Griffiths, T., and Heller, K. A. (2019). Reconciling meta-learning and continual learning with online mixtures of tasks. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*. [17, 48] Jin, X., Sadhu, A., Du, J., and Ren, X. (2021). Gradient-based editing of memory examples for online task-free continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 29193–29205. 
[16, 48] Johnson, J., Hariharan, B., van der Maaten, L., Hoffman, J., Fei-Fei, L., Lawrence Zitnick, C., and Girshick, R. (2017). Inferring and executing programs for visual reasoning. In Proceedings of the 2017 IEEE International Conference on Computer Vision, ICCV-17, pages 2989–2998. [21, 22, and 48] Joseph, K. J. and Balasubramanian, V. N. (2020). Meta-consolidation for continual learning. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 14374–14386. [15, 48] Jothimurugan, K., Alur, R., and Bastani, O. (2019). A composable specification language for reinforcement learning tasks. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*. [29, 50] Jothimurugan, K., Bansal, S., Bastani, O., and Alur, R. (2021). Compositional reinforcement learning from logical specifications. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 10026–10039. [29, 50] Jung, S., Ahn, H., Cha, S., and Moon, T. (2020). Continual learning with node-importance based adaptive group sparse regularization. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 3647–3658. [12, 13, and 48] Kao, T.-C., Jensen, K., van de Ven, G. M., Bernacchia, A., and Hennequin, G. (2021). Natural continual learning: Success is a journey, not (just) a destination. In *Advances in Neural Information Processing* Systems 34, NeurIPS-21, pages 28067–28079. [15, 48] Kaplanis, C., Shanahan, M., and Clopath, C. (2019). Policy consolidation for continual reinforcement learning. In *Proceedings of the 36th International Conference on Machine Learning, ICML-19*, pages 3242–3251. [25, 49] Kapoor, S., Karaletsos, T., and Bui, T. D. (2021). Variational auto-regressive Gaussian processes for continual learning. In *Proceedings of the 38th International Conference on Machine Learning, ICML-21*, pages 5290–5300. [15, 48] Ke, Z., Liu, B., and Huang, X. (2020). Continual learning of a mixed sequence of similar and dissimilar tasks. 
In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 18493–18504. [17, 48] Ke, Z., Liu, B., Ma, N., Xu, H., and Shu, L. (2021). Achieving forgetting prevention and knowledge transfer in continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 22443–22456. [15, 48] Keysers, D., Schärli, N., Scales, N., Buisman, H., Furrer, D., Kashubin, S., Momchev, N., Sinopalnikov, D., Stafiniak, L., Tihon, T., Tsarkov, D., Wang, X., van Zee, M., and Bousquet, O. (2020). Measuring compositional generalization: A comprehensive method on realistic data. In 8th International Conference on Learning Representations, ICLR-20. [30, 49] Khetarpal, K., Riemer, M., Rish, I., and Precup, D. (2020). Towards continual reinforcement learning: A review and perspectives. *arXiv preprint arXiv:2012.13490*. [4] Kim, S. W., Tapaswi, M., and Fidler, S. (2019). Visual reasoning by progressive module networks. In 7th International Conference on Learning Representations, ICLR-19. [23, 49] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. *Proceedings of the National Academy of Sciences,* PNAS, 114(13):3521–3526. [12, 13, 16, 25, and 48] Kirsch, L., Kunze, J., and Barber, D. (2018). Modular networks: Learning to decompose neural computation. In *Advances in Neural Information Processing Systems 31, NeurIPS-18*, pages 2408–2418. [2, 22, 31, and 49] Knoblauch, J., Husain, H., and Diethe, T. (2020). Optimal continual learning has perfect memory and is NP-hard. In *Proceedings of the 37th International Conference on Machine Learning, ICML-20*, pages 5327–5337. [14, 48] Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto. 
[6] Kumar, A., Chatterjee, S., and Rai, P. (2021). Bayesian structural adaptation for continual learning. In Proceedings of the 38th International Conference on Machine Learning, ICML-21, pages 5850–5860. [13, 48] Lake, B. and Baroni, M. (2018). Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In *Proceedings of the 35th International Conference on Machine* Learning, ICML-18, pages 2873–2882. [24, 30, and 49] Lake, B. M. (2019). Compositional generalization through meta sequence-to-sequence learning. In *Advances* in Neural Information Processing Systems 32, NeurIPS-19. [24, 49] Lee, S., Behpour, S., and Eaton, E. (2021a). Sharing less is more: Lifelong learning in deep networks with selective layer transfer. In *Proceedings of the 38th International Conference on Machine Learning, ICML-21*, pages 6065–6075. [17, 48] Lee, S., Goldt, S., and Saxe, A. (2021b). Continual learning in the teacher-student setup: Impact of task similarity. In *Proceedings of the 38th International Conference on Machine Learning, ICML-21*, pages 6109–6119. [18, 48] Lee, S., Ha, J., Zhang, D., and Kim, G. (2020). A neural Dirichlet process mixture model for task-free continual learning. In *8th International Conference on Learning Representations, ICLR-20*. [17, 48] Lee, Y., Sun, S.-H., Somasundaram, S., Hu, E., and Lim, J. J. (2019). Composing complex skills by learning transition policies with proximity reward induction. In 7th International Conference on Learning Representations, ICLR-19. [26, 49] Lesort, T., Lomonaco, V., Stoian, A., Maltoni, D., Filliat, D., and Dıaz-Rodrıguez, N. (2019). Continual learning for robotics. *arXiv preprint arXiv:1907.00182*, pages 1–34. [4] Levine, S., Kumar, A., Tucker, G., and Fu, J. (2020). Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv preprint arXiv:2005.01643*. [25] Li, A., Cheng, C.-A., Rana, M. 
A., Xie, M., Van Wyk, K., Ratliff, N., and Boots, B. (2021a). RMP2: A structured composable policy class for robot learning. *arXiv preprint arXiv:2103.05922*. [29, 50] Li, X., Zhou, Y., Wu, T., Socher, R., and Xiong, C. (2019). Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. In *Proceedings of the 36th International Conference on* Machine Learning, ICML-19, pages 3925–3934. [23, 49] Li, Y., He, H., Wu, J., Katabi, D., and Torralba, A. (2020a). Learning compositional Koopman operators for model-based control. In *8th International Conference on Learning Representations, ICLR-20*. [29, 50] Li, Y., Wu, Y., Xu, H., Wang, X., and Wu, Y. (2021b). Solving compositional reinforcement learning problems via task reduction. In *9th International Conference on Learning Representations, ICLR-21*. [29, 50] Li, Y., Zhao, L., Church, K., and Elhoseiny, M. (2020b). Compositional language continual learning. In 8th International Conference on Learning Representations, ICLR-20. [24, 49] Li, Z. and Hoiem, D. (2017). Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, TPAMI, 40(12):2935–2947. [13, 48] Lin, Z., Yang, D., Zhao, L., Qin, T., Yang, G., and Liu, T.-Y. (2020). RD2: Reward decomposition with representation disentanglement. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 11298–11308. [29, 50] Lin, Z., Zhao, L., Yang, D., Qin, T., Liu, T.-Y., and Yang, G. (2019). Distributional reward decomposition for reinforcement learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*. [29, 50] Liu, Q., An, S., Lou, J.-G., Chen, B., Lin, Z., Gao, Y., Zhou, B., Zheng, N., and Zhang, D. (2020). Compositional generalization by learning analytical expressions. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 11416–11427. [24, 49] Loo, N., Swaroop, S., and Turner, R. E. (2021). Generalized variational continual learning. 
In *9th International* Conference on Learning Representations, ICLR-21. [13, 48] Lopez-Paz, D. and Ranzato, M. (2017). Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems 30, NIPS-17, pages 6467–6476. [2, 14, and 48] Lu, K., Grover, A., Abbeel, P., and Mordatch, I. (2021). Reset-free lifelong learning with skill-space planning. In *9th International Conference on Learning Representations, ICLR-21*. [28, 49] McCloskey, M. and Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. In *Psychology of Learning and Motivation*, volume 24, pages 109–165. Elsevier. [2, 12] Mendez, J. A. and Eaton, E. (2021). Lifelong learning of compositional structures. In *9th International* Conference on Learning Representations, ICLR-21. [23, 49] Mendez, J. A., Hussing, M., Gummadi, M., and Eaton, E. (2022a). CompoSuite: A compositional reinforcement learning benchmark. In *1st Conference on Lifelong Learning Agents (CoLLAs-22)*. [30, 50] Mendez, J. A., van Seijen, H., and Eaton, E. (2022b). Modular lifelong reinforcement learning via neural composition. In *10th International Conference on Learning Representations, ICLR-22*. [8, 25, 29, and 50] Mendez, J. A., Wang, B., and Eaton, E. (2020). Lifelong policy gradient learning of factored policies for faster training without forgetting. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 14398–14409. [26, 31, and 49] Meyerson, E. and Miikkulainen, R. (2018). Beyond shared hierarchies: Deep multitask learning through soft layer ordering. In *6th International Conference on Learning Representations, ICLR-18*. [2, 22, and 49] Meyerson, E. and Miikkulainen, R. (2019). Modular universal reparameterization: Deep multi-task learning across diverse domains. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*. [22, 49] Mirzadeh, S. I., Farajtabar, M., Gorur, D., Pascanu, R., and Ghasemzadeh, H. (2021). 
Linear mode connectivity in multitask and continual learning. In *9th International Conference on Learning Representations, ICLR-21*. [14, 48] Mirzadeh, S. I., Farajtabar, M., Pascanu, R., and Ghasemzadeh, H. (2020). Understanding the role of training regimes in continual learning. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 7308–7320. [18, 48] Mittal, S., Lamb, A., Goyal, A., Voleti, V., Shanahan, M., Lajoie, G., Mozer, M., and Bengio, Y. (2020). Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules. In *Proceedings of the 37th International Conference on Machine Learning, ICML-20*, pages 6972–6986. [29, 50] Mu, T., Gu, J., Jia, Z., Tang, H., and Su, H. (2020). Refactoring policy for compositional generalizability using self-supervised object proposals. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 8883–8894. [29, 50] Mundt, M., Lang, S., Delfosse, Q., and Kersting, K. (2022). CLEVA-compass: A continual learning evaluation assessment compass to promote research transparency and comparability. In 10th International Conference on Learning Representations, ICLR-22. [4] Nagabandi, A., Finn, C., and Levine, S. (2019). Deep online learning via meta-learning: Continual adaptation for model-based RL. In *7th International Conference on Learning Representations, ICLR-19*. [26, 49] Nam, T., Sun, S.-H., Pertsch, K., Hwang, S. J., and Lim, J. J. (2022). Skill-based meta-reinforcement learning. In *10th International Conference on Learning Representations, ICLR-22*. [26, 49] Nangue Tasse, G., James, S., and Rosman, B. (2020). A Boolean task algebra for reinforcement learning. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 9497–9507. [28, 50] Nangue Tasse, G., James, S., and Rosman, B. (2022). Generalisation in lifelong reinforcement learning through logical composition. 
In *10th International Conference on Learning Representations, ICLR-22*. [28, 50] Nekoei, H., Badrinaaraayanan, A., Courville, A., and Chandar, S. (2021). Continuous coordination as a realistic scenario for lifelong learning. In *Proceedings of the 38th International Conference on Machine* Learning, ICML-21, pages 8016–8024. [30, 50] Nguyen, C. V., Li, Y., Bui, T. D., and Turner, R. E. (2018). Variational continual learning. In *6th International* Conference on Learning Representations, ICLR-18. [13, 48] Nye, M., Solar-Lezama, A., Tenenbaum, J., and Lake, B. M. (2020). Learning compositional rules via neural program synthesis. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 10832–10842. [20, 22, and 49] Ostapenko, O., Rodriguez, P., Caccia, M., and Charlin, L. (2021). Continual learning via local module composition. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 30298–30312. [23, 31, and 49] Pahuja, V., Fu, J., Chandar, S., and Pal, C. (2019). Structure learning for neural module networks. In Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge, LANTERN, pages 1–10. Association for Computational Linguistics. [20, 22, and 48] Pan, P., Swaroop, S., Immer, A., Eschenhagen, R., Turner, R., and Khan, M. E. (2020). Continual deep learning by functional regularisation of memorable past. In Advances in Neural Information Processing Systems 33, NeurIPS-20, pages 4453–4464. [13, 48] Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., and Wermter, S. (2019). Continual lifelong learning with neural networks: A review. *Neural Networks*, 113:54–71. [4, 6] Pateria, S., Subagdja, B., Tan, A.-h., and Quek, C. (2021). Hierarchical reinforcement learning: A comprehensive survey. *ACM Computing Surveys*, 54(5). [28] Pathak, D., Lu, C., Darrell, T., Isola, P., and Efros, A. A. (2019). Learning to control self-assembling morphologies: A study of generalization via modularity. 
In Advances in Neural Information Processing Systems 32, NeurIPS-19. [29, 50] Peng, X. B., Chang, M., Zhang, G., Abbeel, P., and Levine, S. (2019). MCP: Learning composable hierarchical control with multiplicative compositional policies. In *Advances in Neural Information Processing Systems* 32, NeurIPS-19. [26, 50] Pham, Q., Liu, C., and Hoi, S. (2021a). DualNet: Continual learning, fast and slow. In *Advances in Neural* Information Processing Systems 34, NeurIPS-21, pages 16131–16144. [16, 48] Pham, Q., Liu, C., Sahoo, D., and Hoi, S. (2021b). Contextual transformation networks for online continual learning. In *9th International Conference on Learning Representations, ICLR-21*. [14, 48] Pierrot, T., Ligner, G., Reed, S. E., Sigaud, O., Perrin, N., Laterre, A., Kas, D., Beguir, K., and de Freitas, N. (2019). Learning compositional neural programs with recursive tree search and planning. In *Advances* in Neural Information Processing Systems 32, NeurIPS-19. [20, 22, and 49] Qin, Q., Hu, W., Peng, H., Zhao, D., and Liu, B. (2021). BNS: Building network structures dynamically for continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 20608–20620. [23, 49] Qiu, W. and Zhu, H. (2022). Programmatic reinforcement learning without oracles. In *10th International* Conference on Learning Representations, ICLR-22. [29, 50] Qu, H., Rahmani, H., Xu, L., Williams, B., and Liu, J. (2021). Recent advances of continual learning in computer vision: An overview. *arXiv preprint arXiv:2109.11369*. [4] Raghavan, K. and Balaprakash, P. (2021). Formalizing the generalization-forgetting trade-off in continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 17284–17297. [14, 48] Rahaman, N., Goyal, A., Gondal, M. W., Wuthrich, M., Bauer, S., Sharma, Y., Bengio, Y., and Schölkopf, B. (2021). Spatially structured recurrent modules. In 9th International Conference on Learning Representations, ICLR-21. 
[22, 49] Rajasegaran, J., Hayat, M., Khan, S., Khan, F. S., and Shao, L. (2019). Random path selection for incremental learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*, pages 12669–12679. [23, 49] Ramasesh, V. V., Dyer, E., and Raghu, M. (2021). Anatomy of catastrophic forgetting: Hidden representations and task semantics. In *9th International Conference on Learning Representations, ICLR-21*. [18, 48] Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M. (2022). Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*. [24, 49] Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). Zero-shot text-to-image generation. In *Proceedings of the 38th International Conference on Machine* Learning, ICML-21, pages 8821–8831. [24, 49] Ramesh, R. and Chaudhari, P. (2022). Model Zoo: A growing brain that learns continually. In *10th* International Conference on Learning Representations, ICLR-22. [15, 48] Rao, D., Visin, F., Rusu, A., Pascanu, R., Teh, Y. W., and Hadsell, R. (2019). Continual unsupervised representation learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*, pages 7647–7657. [16, 48] Razdaibiedina, A., Mao, Y., Hou, R., Khabsa, M., Lewis, M., and Almahairi, A. (2023). Progressive prompts: Continual learning for language models. In *11th International Conference on Learning Representations* (ICLR-23). [15, 48] Reed, S. and de Freitas, N. (2016). Neural programmers-interpreters. In 4th International Conference on Learning Representations, ICLR-16. [19, 23, and 49] Ren, Y., Guo, S., Labeau, M., Cohen, S. B., and Kirby, S. (2020). Compositional languages emerge in a neural iterated learning model. In *8th International Conference on Learning Representations, ICLR-20*. [24, 49] Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., , and Tesauro, G. (2019). 
Learning to learn without forgetting by maximizing transfer and minimizing interference. In 7th International Conference on Learning Representations, ICLR-19. [14, 48] Ritter, H., Botev, A., and Barber, D. (2018). Online structured Laplace approximations for overcoming catastrophic forgetting. In *Advances in Neural Information Processing Systems 31, NeurIPS-18*, pages 3738–3748. [12, 15, and 48] Rolnick, D., Ahuja, A., Schwarz, J., Lillicrap, T., and Wayne, G. (2019). Experience replay for continual learning. In *Advances in Neural Information Processing Systems 32, NeurIPS-19*. [25, 49] Rosenbaum, C., Cases, I., Riemer, M., and Klinger, T. (2019). Routing networks and the challenges of modular and compositional computation. *arXiv preprint arXiv:1904.12774*. [22, 49] Rosenbaum, C., Klinger, T., and Riemer, M. (2018). Routing networks: Adaptive selection of non-linear functions for multi-task learning. In *6th International Conference on Learning Representations, ICLR-18*. [22, 49] Rostami, M. (2021). Lifelong domain adaptation via consolidated internal distribution. In *Advances in Neural* Information Processing Systems 34, NeurIPS-21, pages 11172–11183. [14, 48] Rostami, M., Isele, D., and Eaton, E. (2020). Using task descriptions in lifelong machine learning for improved performance and zero-shot transfer. *Journal of Artificial Intelligence Research, JAIR*, 67:673–704. [26, 49] Ruis, F., Burghouts, G., and Bucur, D. (2021). Independent prototype propagation for zero-shot compositionality. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 10641–10653. [24, 49] Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., and Hadsell, R. (2016). Progressive neural networks. *arXiv preprint arXiv:1606.04671*. [26, 48] Ruvolo, P. and Eaton, E. (2013). ELLA: An efficient lifelong learning algorithm. 
In *Proceedings of the 30th International Conference on Machine Learning, ICML-13*, pages 507–515. [2, 17, 26, and 48]

Saha, G., Garg, I., and Roy, K. (2021). Gradient projection memory for continual learning. In *9th International Conference on Learning Representations, ICLR-21*. [13, 48]

Saqur, R. and Narasimhan, K. (2020). Multimodal graph networks for compositional generalization in visual question answering. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 3070–3081. [20, 22, and 48]

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., and Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. *arXiv preprint arXiv:2302.04761*. [24, 49]

Schott, L., von Kügelgen, J., Träuble, F., Gehler, P. V., Russell, C., Bethge, M., Schölkopf, B., Locatello, F., and Brendel, W. (2022). Visual representation learning does not generalize strongly within the same domain. In *10th International Conference on Learning Representations, ICLR-22*. [24, 49]

Schwarz, J., Czarnecki, W., Luketina, J., Grabska-Barwinska, A., Teh, Y. W., Pascanu, R., and Hadsell, R. (2018). Progress & compress: A scalable framework for continual learning. In *Proceedings of the 35th International Conference on Machine Learning, ICML-18*, pages 4528–4537. [25, 48]

Serrà, J., Surís, D., Miron, M., and Karatzoglou, A. (2018). Overcoming catastrophic forgetting with hard attention to the task. In *Proceedings of the 35th International Conference on Machine Learning, ICML-18*, pages 4548–4557. [12, 48]

Shin, H., Lee, J. K., Kim, J., and Kim, J. (2017). Continual learning with deep generative replay. In *Advances in Neural Information Processing Systems 30, NIPS-17*, pages 2990–2999. [14, 48]

Singh, P., Verma, V. K., Mazumder, P., Carin, L., and Rai, P. (2020). Calibrating CNNs for lifelong learning. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 15579–15590. [15, 48]

Sinha, K., Sodhani, S., Pineau, J., and Hamilton, W. L. (2020). Evaluating logical generalization in graph neural networks. *arXiv preprint arXiv:2003.06560*. [30, 50]

Skorokhodov, I. and Elhoseiny, M. (2021). Class normalization for (continual)? generalized zero-shot learning. In *9th International Conference on Learning Representations, ICLR-21*. [18, 48]

Sprechmann, P., Jayakumar, S., Rae, J., Pritzel, A., Badia, A. P., Uria, B., Vinyals, O., Hassabis, D., Pascanu, R., and Blundell, C. (2018). Memory-based parameter adaptation. In *6th International Conference on Learning Representations, ICLR-18*. [16]

Sun, F.-K., Ho, C.-H., and Lee, H.-Y. (2020). LAMOL: LAnguage MOdeling for lifelong language learning. In *8th International Conference on Learning Representations, ICLR-20*. [14, 48]

Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In *Tenth International Conference on Autonomous Agents and Multiagent Systems, AAMAS-11*, pages 761–768. [26, 49]

Sutton, R. S., Precup, D., and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. *Artificial Intelligence*, 112(1-2):181–211. [26, 49]

Sylvain, T., Petrini, L., and Hjelm, D. (2020). Locality and compositionality in zero-shot learning. In *8th International Conference on Learning Representations, ICLR-20*. [24, 49]

Tang, B. and Matteson, D. S. (2021). Graph-based continual learning. In *9th International Conference on Learning Representations, ICLR-21*. [16, 48]

Tessler, C., Givony, S., Zahavy, T., Mankowitz, D., and Mannor, S. (2017). A deep hierarchical approach to lifelong learning in Minecraft. In *Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI-17*. [28, 50]

Thrun, S. (1998). Lifelong learning algorithms. In *Learning to learn*, pages 181–209. Springer. [12]

Titsias, M. K., Schwarz, J., de G. Matthews, A. G., Pascanu, R., and Teh, Y. W. (2020). Functional regularisation for continual learning with Gaussian processes. In *8th International Conference on Learning Representations, ICLR-20*. [13, 48]

Todorov, E. (2009). Compositionality of optimal control laws. In *Advances in Neural Information Processing Systems 22, NIPS-09*, pages 1856–1864. [28, 50]

Tunyasuvunakool, S., Muldal, A., Doron, Y., Liu, S., Bohez, S., Merel, J., Erez, T., Lillicrap, T., Heess, N., and Tassa, Y. (2020). dm_control: Software and tasks for continuous control. *Software Impacts*, 6. [29, 50]

Valkov, L., Chaudhari, D., Srivastava, A., Sutton, C., and Chaudhuri, S. (2018). Houdini: Lifelong learning as program synthesis. In *Advances in Neural Information Processing Systems 31, NeurIPS-18*, pages 8687–8698. [23, 49]

van de Ven, G. M., Siegelmann, H. T., and Tolias, A. S. (2020). Brain-inspired replay for continual learning with artificial neural networks. *Nature Communications*, 11(1). [17, 48]

van de Ven, G. M. and Tolias, A. S. (2019). Three scenarios for continual learning. *arXiv preprint arXiv:1904.07734*. [5]

van Niekerk, B., James, S., Earle, A., and Rosman, B. (2019). Composing value functions in reinforcement learning. In *Proceedings of the 36th International Conference on Machine Learning, ICML-19*, pages 6401–6409. [28, 50]

van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., and Tsang, J. (2017). Hybrid reward architecture for reinforcement learning. In *Advances in Neural Information Processing Systems 30, NIPS-17*. [29, 50]

Varshney, S., Verma, V. K., Srijith, P. K., Carin, L., and Rai, P. (2021). CAM-GAN: Continual adaptation modules for generative adversarial networks. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 15175–15187. [14, 48]

Veniat, T., Denoyer, L., and Ranzato, M. (2021). Efficient continual learning with modular networks and task-driven priors. In *9th International Conference on Learning Representations, ICLR-21*. [23, 49]

Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. (2017). FeUdal networks for hierarchical reinforcement learning. In *Proceedings of the 34th International Conference on Machine Learning, ICML-17*, pages 3540–3549. [28, 50]

Vinyals, O., Ewalds, T., Bartunov, S., Georgiev, P., Vezhnevets, A. S., Yeo, M., Makhzani, A., Küttler, H., Agapiou, J. P., Schrittwieser, J., Quan, J., Gaffney, S., Petersen, S., Simonyan, K., Schaul, T., van Hasselt, H., Silver, D., Lillicrap, T. P., Calderone, K., Keet, P., Brunasso, A., Lawrence, D., Ekermo, A., Repp, J., and Tsing, R. (2017). StarCraft II: A new challenge for reinforcement learning. *arXiv preprint arXiv:1708.04782*. [29, 50]

von Oswald, J., Henning, C., Sacramento, J., and Grewe, B. F. (2020). Continual learning with hypernetworks. In *8th International Conference on Learning Representations, ICLR-20*. [14, 48]

von Oswald, J., Zhao, D., Kobayashi, S., Schug, S., Caccia, M., Zucchet, N., and Sacramento, J. (2021). Learning where to learn: Gradient sparsity in meta and continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 5250–5263. [16, 48]

Wang, L., Zhang, M., Jia, Z., Li, Q., Bao, C., Ma, K., Zhu, J., and Zhong, Y. (2021). AFEC: Active forgetting of negative transfer in continual learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 22379–22391. [13, 48]

Wołczyk, M., Zając, M., Pascanu, R., Kuciński, Ł., and Miłoś, P. (2021). Continual World: A robotic benchmark for continual reinforcement learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 28496–28510. [30, 49]

Wu, M., Goodman, N., and Ermon, S. (2021). Improving compositionality of neural networks by decoding representations to inputs. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 26689–26700. [19, 22, and 49]

Xu, D., Nair, S., Zhu, Y., Gao, J., Garg, A., Fei-Fei, L., and Savarese, S. (2018). Neural task programming: Learning to generalize across hierarchical tasks. In *Proceedings of the 2018 IEEE International Conference on Robotics and Automation, ICRA-18*, pages 3795–3802. [20, 22, and 49]

Xu, K., Verma, S., Finn, C., and Levine, S. (2020). Continual learning of control primitives: Skill discovery via reset-games. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 4999–5010. [26, 49]

Yang, R., Xu, H., Wu, Y., and Wang, X. (2020). Multi-task reinforcement learning with soft modularization. In *Advances in Neural Information Processing Systems 33, NeurIPS-20*, pages 4767–4777. [29, 50]

Yin, H., Yang, P., and Li, P. (2021). Mitigating forgetting in online continual learning with neuron calibration. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 10260–10272. [16, 48]

Yoon, J., Jeong, W., Lee, G., Yang, E., and Hwang, S. J. (2021). Federated continual learning with weighted inter-client transfer. In *Proceedings of the 38th International Conference on Machine Learning, ICML-21*, pages 12073–12086. [18, 48]

Yoon, J., Kim, S., Yang, E., and Hwang, S. J. (2020). Scalable and order-robust continual learning with additive parameter decomposition. In *8th International Conference on Learning Representations, ICLR-20*. [12, 48]

Yoon, J., Lee, J., Yang, E., and Hwang, S. J. (2018). Lifelong learning with dynamically expandable networks. In *6th International Conference on Learning Representations, ICLR-18*. [15, 48]

Yu, T., Quillen, D., He, Z., Julian, R., Hausman, K., Finn, C., and Levine, S. (2019). Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In *Proceedings of the 3rd Conference on Robot Learning, CoRL-19*. [30, 50]

Zaremba, W., Mikolov, T., Joulin, A., and Fergus, R. (2016). Learning simple algorithms from examples.
In *Proceedings of the 33rd International Conference on Machine Learning, ICML-16*, pages 421–429. [19, 31, and 49]

Zenke, F., Poole, B., and Ganguli, S. (2017). Continual learning through synaptic intelligence. In *Proceedings of the 34th International Conference on Machine Learning, ICML-17*, pages 3987–3995. [12, 48]

Zeno, C., Golan, I., Hoffer, E., and Soudry, D. (2018). Task agnostic continual learning using online variational Bayes. *arXiv preprint arXiv:1803.10123*. [15, 48]

Zeno, C., Golan, I., Hoffer, E., and Soudry, D. (2021). Task-agnostic continual learning using online variational Bayes with fixed-point updates. *Neural Computation*, 33(11):3139–3177. [15, 48]

Zhang, Q., Fang, J., Meng, Z., Liang, S., and Yilmaz, E. (2021). Variational continual Bayesian meta-learning. In *Advances in Neural Information Processing Systems 34, NeurIPS-21*, pages 24556–24568. [13, 48]

Zhao, C., Hospedales, T. M., Stulp, F., and Sigaud, O. (2017). Tensor based knowledge transfer across skill categories for robot control. In *Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17*, pages 3462–3468. [26, 49]

A **Appendix**

Table 1: A categorization of existing works into six axes. The vast majority of work on lifelong learning has not learned explicitly compositional structures, while most efforts on compositional learning have operated in the MTL or STL settings.

| Comp. type | Lifelong? | Mechanism | Struc. given | Struc. type | Domain | References |
|---|---|---|---|---|---|---|
| None | Lifelong | Supervised | No | None | Vision | Kirkpatrick et al. (2017), Ritter et al. (2018), Zenke et al. (2017), Chaudhry et al. (2018), Serrà et al. (2018), Yoon et al. (2020), Jung et al. (2020), Cha et al. (2021), Chaudhry et al. (2020), Saha et al. (2021), Deng et al. (2021), Duncker et al. (2020), Nguyen et al. (2018), Ahn et al. (2019), Loo et al. (2021), Zhang et al. (2021), Kumar et al. (2021), Li and Hoiem (2017), Benjamin et al. (2019), Titsias et al. (2020), Pan et al. (2020), Wang et al. (2021), Chaudhry et al. (2019b), Lopez-Paz and Ranzato (2017), Chaudhry et al. (2019a), Riemer et al. (2019), Gupta et al. (2020a), Mirzadeh et al. (2021), Raghavan and Balaprakash (2021), Guo et al. (2020b), Pham et al. (2021b), Knoblauch et al. (2020), Derakhshani et al. (2021), von Oswald et al. (2020), Henning et al. (2021), Shin et al. (2017), Singh et al. (2020), Yoon et al. (2018), Hung et al. (2019), Adel et al. (2020), Lee et al. (2021a), Ke et al. (2020), Hurtado et al. (2021), Yoon et al. (2021), Mirzadeh et al. (2020), Ramasesh et al. (2021), Lee et al. (2021b), Ehret et al. (2021), Schwarz et al. (2018) |
| None | Lifelong | RL | No | None | Vision | Kirkpatrick et al. (2017), Jung et al. (2020), Wang et al. (2021), Schwarz et al. (2018), Rusu et al. (2016) |
| None | Lifelong | Supervised | No | None | Language | Del Chiaro et al. (2020), Sun et al. (2020), Ke et al. (2021), Razdaibiedina et al. (2023), Gupta et al. (2020c) |
| None | Lifelong | Unsupervised | No | None | Vision | Kumar et al. (2021), Rostami (2021), Varshney et al. (2021) |
| Functional | Lifelong | Supervised | No | Aggregation | Vision | Ramesh and Chaudhari (2022), Ruvolo and Eaton (2013) |
| None | Lifelong | Supervised | Implicitly | None | Vision | Zeno et al. (2018), Zeno et al. (2021), Kao et al. (2021), Kapoor et al. (2021), Chen et al. (2021), Yin et al. (2021), Buzzega et al. (2020), Pham et al. (2021a), Tang and Matteson (2021), von Oswald et al. (2021), Aljundi et al. (2019b), Aljundi et al. (2019a), Chrysakis and Moens (2020), Borsos et al. (2020), Jin et al. (2021), Caccia et al. (2020), Hu et al. (2019), van de Ven et al. (2020), Aljundi et al. (2017), Jerfel et al. (2019), Lee et al. (2020), Javed and White (2019), Beaulieu et al. (2020), Banayeeanzade et al. (2021) |
| None | Lifelong | Supervised | Explicitly | None | Vision | Joseph and Balasubramanian (2020), Skorokhodov and Elhoseiny (2021) |
| None | Lifelong | Unsupervised | Implicitly | None | Vision | Egorov et al. (2021), Achille et al. (2018), Rao et al. (2019), Ayub and Wagner (2021) |
| None | Lifelong | Supervised | Implicitly | None | Language | de Masson d'Autume et al. (2019) |
| None | Lifelong | Supervised | No | None | Audio | Ehret et al. (2021) |
| Functional | MTL | Supervised | Explicitly | Graph | VQA | Andreas et al. (2016), Saqur and Narasimhan (2020), Pahuja et al. (2019), Hu et al. (2017), Johnson et al. (2017), Gupta et al. (2020b), Akula et al. (2021), D'Amario et al. (2021), Bahdanau et al. (2018) |
| Functional | MTL | Supervised | Explicitly | Graph | Vision | Wu et al. (2021) |
| Functional | MTL | Supervised | Explicitly | Graph | Robotics | Xu et al. (2018) |
| Functional | MTL | Supervised | Explicitly | Graph | Language | Nye et al. (2020), Chen et al. (2020b) |
| Functional | MTL | Supervised | Implicitly | Graph | Vision | Rahaman et al. (2021), Chang et al. (2019), Rosenbaum et al. (2019) |
| Functional | MTL | Supervised | Implicitly | Graph | Language | Kirsch et al. (2018), Rosenbaum et al. (2019), Lake and Baroni (2018), Lake (2019), Ren et al. (2020), Gordon et al. (2020), Guo et al. (2020a), Liu et al. (2020), Akyürek et al. (2021), Schick et al. (2023), Keysers et al. (2020) |
| Functional | MTL | Supervised | No | Graph | Vision | Meyerson and Miikkulainen (2018), Meyerson and Miikkulainen (2019), Chen et al. (2020c), Rosenbaum et al. (2018), Rosenbaum et al. (2019) |
| Functional | MTL | Supervised | No | Graph | Language | Meyerson and Miikkulainen (2019), Chen et al. (2020c), Rosenbaum et al. (2019) |
| Functional | MTL | Supervised | No | Graph | Robotics | Alet et al. (2018), Alet et al. (2019) |
| Functional | Lifelong | Supervised | Explicitly | Chaining | Toy | Reed and de Freitas (2016) |
| Functional | Lifelong | RL | No | Chaining | Vision | Fernando et al. (2017) |
| Functional | Lifelong | Supervised | No | Chaining | Vision | Fernando et al. (2017), Li et al. (2019), Valkov et al. (2018), Veniat et al. (2021), Rajasegaran et al. (2019), Chen et al. (2020a), Qin et al. (2021) |
| Functional | Lifelong | Supervised | Explicitly | Chaining | VQA | Kim et al. (2019) |
| Functional | Lifelong | Supervised | No | Chaining | Toy | Gaunt et al. (2017) |
| Functional | Lifelong | Supervised | No | Graph | Vision | Mendez and Eaton (2021) |
| Functional | Lifelong | Supervised | Implicitly | Chaining | Vision | Ostapenko et al. (2021) |
| Functional | Lifelong | Supervised | Implicitly | Graph | Language | Li et al. (2020b) |
| Functional | MTL | Supervised | Explicitly | Aggregation | Vision | Huynh and Elhamifar (2020), Atzmon et al. (2020), Ruis et al. (2021), Du et al. (2020), Aksan et al. (2020), Arad Hudson and Zitnick (2021), Ramesh et al. (2021), Ramesh et al. (2022) |
| Functional | STL | Supervised | Implicitly | Graph | Vision | Andreas (2019), Schott et al. (2022) |
| Functional | STL | Supervised | Implicitly | Graph | Language | Andreas (2019), Csordás et al. (2021) |
| Functional | STL | Supervised | Explicitly | Graph | Vision | Sylvain et al. (2020) |
| None | Lifelong | RL | Implicitly | None | Robotics | Kaplanis et al. (2019) |
| None | Lifelong | RL | Explicitly | None | Vision | Isele and Cosgun (2018) |
| None | Lifelong | RL | Implicitly | None | Vision | Rolnick et al. (2019) |
| None | Lifelong | RL | No | None | Robotics | Berseth et al. (2022), Garcia and Thomas (2019), Sutton et al. (2011), Wołczyk et al. (2021) |
| Functional | Lifelong | RL | No | Aggregation | Robotics | Bou Ammar et al. (2014), Bou Ammar et al. (2015), Mendez et al. (2020), Nagabandi et al. (2019) |
| Functional | Lifelong | RL | Explicitly | Aggregation | Robotics | Isele et al. (2016), Rostami et al. (2020) |
| Functional | MTL | RL | No | Aggregation | Robotics | Zhao et al. (2017) |
| Temporal | Lifelong | RL | No | Graph | Robotics | Xu et al. (2020), Lu et al. (2021) |
| Temporal | STL | RL | Explicitly | Graph | Toy | Sutton et al. (1999) |
| Temporal | STL | RL | Explicitly | Graph | Robotics | Lee et al. (2019) |
| Temporal | STL | RL | No | Graph | Vision | Bacon et al. (2017) |
| Temporal | MTL | RL | No | Graph | Robotics | Nam et al. (2022), Devin et al. (2019) |
| Functional | MTL | Supervised | Explicitly | Graph | Toy | Bošnjak et al. (2017), Zaremba et al. (2016), Cai et al. (2017), Bunel et al. (2018), Pierrot et al. (2019), Agarwala et al. (2021), Ghazi et al. (2019) |
| Temporal | STL | RL | No | Graph | Robotics | Peng et al. (2019), Li et al. (2021b) |
| Functional | STL | RL | No | Chaining | Toy | Dayan and Hinton (1993), Dietterich (2000) |
| Functional | STL | RL | No | Chaining | Vision | Vezhnevets et al. (2017) |
| Temporal | Lifelong | RL | No | Graph | Toy | Brunskill and Li (2014) |
| Temporal | Lifelong | RL | No | Graph | Vision | Tessler et al. (2017) |
| Functional | Lifelong | RL | No | Chaining | Toy | Abel et al. (2018) |
| Functional | STL | RL | Explicitly | Aggregation | Robotics | Todorov (2009) |
| Functional | MTL | RL | Explicitly | Aggregation | Vision | Barreto et al. (2018) |
| Functional | Lifelong | RL | Explicitly | Aggregation | Toy | Alver and Precup (2022) |
| Functional | MTL | RL | Explicitly | Aggregation | Robotics | Haarnoja et al. (2018), Cheng et al. (2021), Li et al. (2021a), Bylard et al. (2021), Jothimurugan et al. (2019) |
| Functional | MTL | RL | Explicitly | Aggregation | Toy | van Niekerk et al. (2019), Colas et al. (2019), Nangue Tasse et al. (2020) |
| Functional | Lifelong | RL | No | Aggregation | Toy | Nangue Tasse et al. (2022) |
| Temporal | MTL | RL | Explicitly | Graph | Robotics | Jothimurugan et al. (2021), Qiu and Zhu (2022) |
| Functional | STL | RL | Explicitly | Aggregation | Vision | van Seijen et al. (2017) |
| Functional | STL | RL | No | Aggregation | Vision | Lin et al. (2019), Lin et al. (2020), Mu et al. (2020) |
| Functional | MTL | RL | No | Chaining | Robotics | Devin et al. (2017), Yang et al. (2020), Mendez et al. (2022a) |
| Functional | STL | RL | No | Graph | Vision | Mittal et al. (2020), Goyal et al. (2021), Goyal et al. (2022) |
| Functional | STL | Supervised | No | Graph | Language | Mittal et al. (2020) |
| Functional | STL | Supervised | No | Graph | VQA | Goyal et al. (2022) |
| Functional | Lifelong | RL | No | Chaining | Robotics | Mendez et al. (2022b) |
| Functional | MTL | RL | No | Graph | Robotics | Pathak et al. (2019), Huang et al. (2020) |
| Functional | STL | RL | No | Aggregation | Robotics | Li et al. (2020a) |
| Functional | MTL | RL | No | Graph | Toy | Chang et al. (2021) |
| None | STL | Supervised | No | None | Vision | Deng et al. (2009) |
| None | STL | RL | No | None | Vision | Bellemare et al. (2013), Vinyals et al. (2017) |
| None | STL | RL | No | None | Robotics | Brockman et al. (2016), Tunyasuvunakool et al. (2020) |
| None | MTL | RL | No | None | Robotics | Henderson et al. (2017), Yu et al. (2019), James et al. (2020) |
| None | MTL | RL | Explicitly | None | Toy | Chevalier-Boisvert et al. (2019) |
| None | Lifelong | RL | No | None | Toy | Nekoei et al. (2021) |
| Functional | Lifelong | Supervised | Explicitly | Graph | Toy | Sinha et al. (2020) |
| Functional | MTL | RL | Explicitly | Chaining | Robotics | Mendez et al. (2022a) |
| Temporal | MTL | RL | Explicitly | Graph | Toy | Gur et al. (2021) |
| Functional | MTL | RL | Implicitly | Graph | Robotics | Ahmed et al. (2021) |
Review 1:

Summary: The paper investigates two important and practical problems: lifelong learning (LL) and compositional learning (CL). It fills an important gap by combining and unifying both approaches in a single survey. It clearly divides each problem into semantically meaningful categories that help position each problem and approach. It gives broad coverage of LL and CL approaches under a variety of settings in a single accessible survey. Carefully designed figures in each section give a very useful summary and help visualize different approaches.

Strengths and Weaknesses:

**Strengths**

- The paper is easy to follow, with a good summary of approaches for LL and CL as well as their intersection. It will be useful for the research community to get an overview of both topics.
- Figures/tables were carefully designed to give a high-level overview of each section. For example, Figure 6 describes compositional learning from input/output and latent/observed information perspectives. Table-1 in the appendix gives a nice view of all the related work under the main verticals, such as type of compositionality, labeling mechanism, etc., that the paper has been based on from the beginning.
- The paper nicely presents a hierarchical clustering of concepts within LL and CL, including their intersection. Both LL and CL are explained under several broad categories that capture many of the practical tasks; there are 3 categories for LL: (i) Task-, (ii) Domain-, and (iii) Class-incremental learning; there are 2 categories for CL: (i) models and (ii) problems. Lifelong compositional learning is further divided into 6 additional categories to capture a more unified view.
- Dividing the survey into supervised learning (SL) and reinforcement learning (RL) made the paper much easier to follow. Using SL as the prior was also helpful to get a better understanding of RL approaches.
**Weaknesses**

- Section 2.1 emphasizes MTL much more than the typical streaming LL problem -- streaming tasks. It would help clarify the differences in training objectives by clearly stating what information is accessible and which period of time is optimized. In addition, writing separate objectives for training and evaluation, where appropriate, could also help.
- Could you please discuss whether the two different views of compositionality, function composition and graph-based composition, are equivalent? Based on Figure-3 (c), functions are placed after every rectangular node, which suggests that it doesn't correspond to a chain of functional composition. Traversing left-to-right by grouping functions in the graph might give you a functional composition, but it is not clear whether this generalizes beyond a DAG-based structure.
- Presenting a more formal definition that unifies core approaches in LL and CL would really help understand the paper better. For example, a unified form of the *regularization* objective, such as KL-control or parameter masking, integrating a *replay* buffer with a sampling mechanism, or selecting different modules in *compositional learning*.
- You mention forward and backward transfer in earlier sections but without any pointers later. I think these are two really important topics to understand how different methods help improve which timespan of the streaming tasks. Please mention these where appropriate.
- The paper is missing a major piece of related work -- pretrained language or image models in lifelong and compositional learning. While I found one mention of BERT, pretrained models are mostly missing from the survey. Given that they are the main paradigm nowadays, the survey is missing the most up-to-date research stream with the highest practical importance. Several categories that should be included are: few-shot prompting, parameter-efficient tuning (p-tuning) in LL [1], and tool use for compositionality [2].
- Time and memory complexities are missing. I think these are important to understand the practical usage and to further gain insight into the drawbacks of different methods.
- While there is a section devoted to benchmarks in RL, these are largely missing in the SL setting. It is not clear which datasets/benchmarks are studied in the SL setting and what some of the remaining challenges in these benchmarks are.

[1] Progressive Prompts: Continual Learning for Language Models. Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi.

[2] Toolformer: Language Models Can Teach Themselves to Use Tools. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom.

Requested Changes: In addition to my concerns in the weaknesses above:

- Please clarify that $F^{\hat{t}}$ is the Fisher information matrix.
- Please add the respective section names to Figure-5 to easily find the related details.

Broader Impact Concerns: This is a survey paper with no ethical concerns.

==================================================

Review 2:

Summary: This paper is a survey of recent works (focusing mainly on the most recent five years) on lifelong learning and continual learning, including all forms of machine learning where these are relevant - (un)supervised and reinforcement learning. The paper provides a careful categorization of these methods - in terms of the framing, such as task-based and task-agnostic, in terms of algorithmic and data approach, and to some extent (esp. in the RL section) in terms of how an application domain becomes decomposed, etc. To the extent that this is mainly a survey of past work, the innovation is in the interpretation of the methods in order to extract the shared structures.

Strengths and Weaknesses: The primary strength of this paper is that it is suitably comprehensive, as a survey should be, in covering the relevant literature.
I like the following elements:

- useful categorization in terms of types of machine learning methods, types of algorithmic approach and data assumptions, task decomposition, etc. This is chiefly, but not exclusively, in terms of the six axes of variation, which are well chosen
- the writing is clear, and the use of visualizations is helpful in bringing out and supporting the structures being addressed
- to the best of my knowledge, the coverage of the papers is also adequate

The primary weaknesses of this paper are as follows:

- The different sections addressing lifelong and compositional learning, supervised and reinforcement learning, etc. are indeed distinct from each other. While within each section there is an appropriate form of analysis and categorization, I do not get a higher-level understanding of principles that cut across the top-level categories. So, for instance, if one asks how the compositional templates (such as chaining or aggregation, or even graphs) relate to, say, similar questions in the RL section, then the paper does not quite explain this. Of course, there are indeed connections, e.g. there is graph structuring in both, but both the structuring of the sections and the ways of writing do not expose any such connections. This is important because the introduction explicitly starts by making the case for how the many different types of learning - lifelong, continual and compositional, multi-task, etc. - are all essentially related and should be studied as such.
- A second major point is that the focus on just papers from 2017 seems a bit odd for subject matter that has indeed been thought about since quite a bit earlier. To be fair to the authors, they do mention key early works (such as Thrun), and the recent explosion of ML literature would make it hard to be entirely comprehensive over a much longer period.
However, this has the effect that many other forms of decomposition of tasks and analysis of structure are left out unless featured in the deep learning literature of the past couple of years.
- There is a distinct lack of consideration of methods that approach representations in other ways, e.g., there is not much discussion of things like options or sub-task discovery in the RL setting, neuro-symbolic representations and algorithms that make use of symbolic structuring or reasoning in any way, etc.
- There is a distinct lack of specific domain examples - from vision, language and so on - and much of the discussion is in terms of machine learning models alone. This makes the paper rather dry. That said, the examples in the RL section are quite useful (e.g., Fig. 8), and more such examples in the earlier sections would have been very welcome.

Requested Changes: Having said that this is a well-written and clear paper that makes a timely contribution, the survey would be stronger if the authors find ways to address the weaknesses outlined in the previous section. I will not reiterate the list here, but each weakness might require further thought about how best to organise and explain the works.
Broader Impact Concerns: This is a survey paper on existing work, so as such this paper does not raise any new concerns about broader impact. The paper does a nice job of synthesizing the work from the past five years and of explaining how different strands of work relate to each other.
==================================================
Review 3:
Summary: This paper's contributions are as follows: 1. A survey of lifelong learning, including a categorization of the problem statements within lifelong learning as well as a categorization of potential approaches to these in language, vision, and robotics domains. 2. A survey of compositional learning, including a delineation between temporal and functional composition. 3.
A call for research to investigate the use of compositional methods in lifelong learning.
Strengths and Weaknesses: The paper presents a quite thorough analysis of the problem settings within the scope of lifelong and compositional ML. Many relevant prior works are discussed and categorized across multiple axes. The cross-section of lifelong learning and compositionality is an interesting area to explore. A strength of the paper is in its detailed categorization of prior works in this area, in terms of problem setting, type of supervision, algorithmic approach, and architectural approach. The difference between temporal and functional composition is especially good to have written down, as these concepts are often discussed interchangeably. The figures in the paper are well designed (except for Figure 4). The paper concludes with a nice list of further work to be done in the field.
——
One weakness of the survey is a lack of criticism of the standard lifelong learning (LL) problem statement, especially with regard to the feasibility of learning compositional agents in an LL setting. As described in the paper, the standard LL problem setting is to maximize performance on all tasks while only being able to train on each task once, in sequence. The artificiality of this definition is discussed in the final paragraph before 2.2.1, which explains exactly why the standard LL problem doesn't actually address a realistic setting, as it would be impossible to continually learn new tasks, and never forget ones that are never seen again, with a finite memory. However, the standard LL problem statement is particularly adversarial to compositional methods, as learning how to best decompose multiple tasks into reusable components is easiest when you can train on multiple tasks simultaneously, even if the overall distribution across tasks changes over time, consistent with the broader LL motivation.
The work would also benefit from a more specific discussion of what is generally assumed when LL or compositional approaches are used. For example, a task-agnostic setting where the same input is always mapped to different outputs depending on the (hidden) task is an arbitrarily difficult LL problem. Past tasks can only accelerate future learning if they are in some way related. Compositional structures can lead to worse performance if they don't match the underlying structure of the environment, as they are a form of bias on the final model. The paper successfully describes the breadth of prior work across LL and compositional learning, but it could include a bit more discussion on which LL methods are most promising for composition, and vice versa. An early paper on functional compositional learning in robotics that the authors may want to include: "Learning modular neural network policies for multi-task and multi-robot transfer" (Devin 2017).
Requested Changes: I suggest adding some discussion to the points above.
——————————
There are a few statements in the paper that are incorrect or insufficiently supported.
"AI systems in the real world will not have access to batches of simultaneous tasks, but instead will face them in sequence in a lifelong setting". -> An agent in the real world will likely be simultaneously parsing sound into text, predicting future video input, detecting objects, estimating human poses, etc. There is no reason it would have to learn only one task at a time in sequence and then never see data for that task again.
"In order for knowledge to be maximally reusable, it must capture a self-contained unit that can be composed with similar pieces of knowledge" -> This statement can either be viewed as a strong claim that needs significantly more evidence, or as vague philosophizing that doesn't really say anything at all.
"A service robot that has learned to both search-and-retrieve objects and navigate across a university building should be able to quickly learn to put these together to deliver a stapler to Rosie's office. Instead, typical methods […]" -> This is how most hierarchical policy learning works already, which is common and old enough to be considered a typical method.
——————————
In Figure 4, the supervised vs unsupervised delineation may not be necessary. Unsupervised learning is usually algorithmically equivalent to supervised; it just denotes whether the supervision is from a human or from a non-human (such as physics, time, etc.). At the same time, there are only 2 papers that fall in the unsupervised bucket. This figure could also be transposed to have application domain as rows, and supervised vs RL as the marker-fill, which would be easier to read.
Broader Impact Concerns: Compositional and lifelong learning approaches will output different agents than unstructured, iid learning. How do they improve or worsen issues of bias and fairness?
==================================================
Metareview:
Recommendation: Accept as is
Comment:
zbLo: "The paper serves as a good survey of more traditional approaches but lacks more recent LLMs and image models. I worry that it would make the survey less relevant very quickly." "It fills an important gap by combining and unifying both approaches in a single survey"
ebXm: "this paper presents a thorough analysis of the problem settings within the scope of lifelong and compositional ML. As a survey paper, it can serve as a good foundation for new researchers to learn about the topic and as a source of areas that need further study."
4YsP: "The primary strength of this paper is that it is suitably comprehensive, as a survey should be, in covering the relevant literature"
==================================================
# L-SVRG and L-Katyusha with Adaptive Sampling

Boxin Zhao *boxinz@uchicago.edu*
Boxiang Lyu *blyu@chicagobooth.edu*
Mladen Kolar *mkolar@chicagobooth.edu*
The University of Chicago Booth School of Business

Reviewed on OpenReview: *https://openreview.net/forum?id=9lyqt3rbDc*

## Abstract

Stochastic gradient-based optimization methods, such as L-SVRG and its accelerated variant L-Katyusha (Kovalev et al., 2020), are widely used to train machine learning models. The theoretical and empirical performance of L-SVRG and L-Katyusha can be improved by sampling observations from a non-uniform distribution (Qian et al., 2021). However, designing a desired sampling distribution requires prior knowledge of smoothness constants, which can be computationally intractable to obtain in practice when the dimension of the model parameter is high. To address this issue, we propose an adaptive sampling strategy for L-SVRG and L-Katyusha that can learn the sampling distribution with little computational overhead, while allowing it to change with iterates, and at the same time does not require any prior knowledge of the problem parameters. We prove convergence guarantees for L-SVRG and L-Katyusha for convex objectives when the sampling distribution changes with iterates. Our results show that even without prior information, the proposed adaptive sampling strategy matches, and in some cases even surpasses, the performance of the sampling scheme in Qian et al. (2021). Extensive simulations support our theory and the practical utility of the proposed sampling scheme on real data.

## 1 Introduction

We aim to minimize the following finite-sum problem:
$$\operatorname*{min}_{x\in\mathbb{R}^{d}}F(x):={\frac{1}{n}}\sum_{i=1}^{n}f_{i}(x),\tag{1}$$
where each $f_i$ is convex, differentiable, and $L_i$-smooth - see Assumptions 1 and 2 in Section 3.
The minimization problem in (1) is ubiquitous in machine learning applications, where $f_i(x)$ typically represents the loss function on the $i$-th data point of a model parameterized by $x$. We denote the solution to (1) as $x^\star$. However, due to computational concerns, it is typically solved via a first-order method (Bottou et al., 2018). When the sample size $n$ is large, computing the full gradient $\nabla F(x)$ can be computationally expensive, and stochastic first-order methods, such as stochastic gradient descent (SGD) (Robbins & Monro, 1951), are the modern tools of choice for minimizing (1). Since SGD iterates cannot converge to the minimizer without decreasing the stepsize, due to the nonvanishing variance, a number of variance-reduced methods have been proposed, such as SAG (Schmidt et al., 2017), SAGA (Defazio et al., 2014), SVRG (Johnson & Zhang, 2013), and Katyusha (Allen-Zhu, 2017). Such methods can converge to the optimum of (1) even with a constant stepsize. In this paper, we focus on L-SVRG and L-Katyusha (Kovalev et al., 2020), which improve on SVRG and Katyusha by removing the outer loop in these algorithms and replacing it with a biased coin-flip. This change simplifies parameter selection, leads to better practical performance, and allows for clearer theoretical analysis.

Stochastic first-order methods use a computationally inexpensive estimate of the full gradient $\nabla F(x)$ when minimizing (1). For example, at the beginning of round $t$, SGD randomly selects $i_t \in [n]$ according to a sampling distribution $\mathbf{p}^t$ over $[n]$, and forms an unbiased estimate $\nabla f_{i_t}(x)$ of $\nabla F(x)$. Typically, the sampling distribution $\mathbf{p}^t$ is the uniform distribution, $\mathbf{p}^t = (1/n, \cdots, 1/n)$, for all $t$. However, using a non-uniform sampling distribution can lead to faster convergence (Zhao & Zhang, 2015; Needell et al., 2016; Qian et al., 2019; Hanzely & Richtárik, 2019; Qian et al., 2021).
For instance, when the sampling distribution is $\mathbf{p}^{IS} = (p^{IS}_1, \cdots, p^{IS}_n)$, with $p^{IS}_i = L_i/(\sum_{i=1}^{n} L_i) = L_i/(n\bar{L})$, the convergence rate of L-SVRG and L-Katyusha can be shown to depend on the *average smoothness* $\bar{L} := (1/n)\sum_{i=1}^{n} L_i$, instead of the *maximum smoothness* $L_{\max} := \max_{1\leq i\leq n} L_i$ (Kovalev et al., 2020). Sampling from a non-uniform distribution is commonly referred to as importance sampling (IS). While sampling observations from $\mathbf{p}^{IS}$ can improve the speed of convergence, $\mathbf{p}^{IS}$ depends on the smoothness constants $\{L_i\}_{i\in[n]}$. In general, these constants are not known in advance and need to be estimated, for example, by computing $\sup_{x\in\mathbb{R}^d} \lambda_{\max}(\nabla^2 f_i(x))$, $i \in [n]$, where $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue of a matrix. However, when the dimension $d$ is large, it is computationally prohibitive to estimate the smoothness constants, except in some special cases such as linear and logistic regression. In this paper, we develop a method to design a sequence of sampling distributions that leads to a convergence rate of L-SVRG and L-Katyusha that depends on $\bar{L}$, instead of $L_{\max}$, without prior knowledge of $\{L_i\}_{i\in[n]}$. Instead of designing a *fixed sampling distribution*, where $\mathbf{p}^t \equiv \mathbf{p}$ for all $t$, we design a *dynamic sampling distribution* that can change with iterations of the optimization algorithm. We follow a recent line of work that formulates the design of the sampling distribution as an online learning problem (Salehi et al., 2017; Borsos et al., 2019; Namkoong et al., 2017; Hanchi & Stephens, 2020; Zhao et al., 2021). Using the gradient information obtained in each round, we update the sampling distribution with minimal computational overhead. This sampling distribution is subsequently used to adaptively sample the observations used to compute the stochastic gradient.
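For a concrete case where the smoothness constants are available in closed form, consider least squares, where $f_i(x) = \frac{1}{2}(a_i^\top x - b_i)^2$ gives $L_i = \|a_i\|^2$. The following minimal sketch of constructing $\mathbf{p}^{IS}$ uses synthetic data and variable names of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))
A[:10] *= 10.0                      # a few rows with much larger smoothness

# For least squares f_i(x) = 0.5 * (a_i^T x - b_i)^2, we have L_i = ||a_i||^2.
L = np.sum(A**2, axis=1)
L_bar, L_max = L.mean(), L.max()

# Importance sampling distribution: p_i^IS = L_i / (n * L_bar) = L_i / sum_j L_j.
p_is = L / (n * L_bar)

assert np.isclose(p_is.sum(), 1.0)
assert L_max / L_bar > 5.0          # large smoothness heterogeneity on this data
```

The larger the ratio $L_{\max}/\bar{L}$, the more a rate depending on $\bar{L}$ under $\mathbf{p}^{IS}$ improves over a rate depending on $L_{\max}$ under uniform sampling.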
When the sequence of designed distributions is used for importance sampling, we prove convergence guarantees for L-SVRG, under both strongly convex and weakly convex settings, and for L-Katyusha under the strongly convex setting. These convergence guarantees show that it is possible to design a sampling distribution that not only performs as well as $\mathbf{p}^{IS}$ but can also improve over it without using prior information. We focus on comparing with $\mathbf{p}^{IS}$ as it is the most widely used fixed sampling distribution (Qian et al., 2021) and leads to the best-known convergence rates with a fixed sampling distribution (Zhao & Zhang, 2015; Needell et al., 2016).

Contributions. Our paper makes the following contributions. We propose an adaptive sampling algorithm for L-SVRG and L-Katyusha that does not require prior information, such as smoothness constants. This is the first practical sampling strategy for these algorithms. We prove convergence guarantees for L-SVRG under both strong and weak convexity, and for L-Katyusha under strong convexity, using a sequence of sampling distributions that changes with iterations. These theoretical results show when the sequence of sampling distributions performs as well as $\mathbf{p}^{IS}$, and even outperforms it in some cases. Our numerical experiments support these findings. We also show that the control variate technique in SVRG and adaptive sampling reduce variance in different ways, as demonstrated in a simulation. We conduct extensive simulations to provide empirical support for various aspects of our theory and real data experiments to demonstrate the practical benefits of adaptive sampling. Given its low computational cost and superior empirical performance, we suggest that our adaptive sampling should be considered as the default alternative to the uniform sampling used in L-SVRG and L-Katyusha.

Related work. Our paper contributes to the literature on non-uniform sampling in first-order stochastic optimization methods.
Previous work, such as Zhao & Zhang (2015), Needell et al. (2016), and Qian et al. (2021), studied non-uniform sampling in SGD, stochastic coordinate descent, and L-SVRG and L-Katyusha, respectively, but focused on sampling from a fixed distribution. In contrast, we allow the sampling distribution to change with iterates, which is important as the best sampling distribution changes with iterations. Shen et al. (2016) studied adaptive sampling methods for variance-reducing stochastic methods, such as SVRG and SAGA, but their approach requires computing all gradients $\{\nabla f_i(x^t)\}_{i=1}^{n}$ at each step, which is impractical. Our method only requires computing the stochastic gradient $\nabla f_{i_t}(x^t)$. The sampling distribution can be designed adaptively using an online learning framework (Namkoong et al., 2017; Salehi et al., 2017; Borsos et al., 2018; 2019; Hanchi & Stephens, 2020; Zhao et al., 2021). We call this process adaptive sampling, and its goal is to minimize the cumulative sampling variance, which appears in the convergence rates of L-SVRG and L-Katyusha (see Section 3). More specifically, Namkoong et al. (2017) and Salehi et al. (2017) designed the sampling distribution by solving a multi-armed bandit problem with the EXP3 algorithm. Borsos et al. (2018) took an online convex optimization approach and made updates to the sampling distribution using the follow-the-regularized-leader algorithm. Borsos et al. (2019) considered the class of distributions that are linear combinations of a set of given distributions and used an online Newton method to update the weights. Hanchi & Stephens (2020) and Zhao et al. (2021) investigated nonstationary approaches to learning sampling distributions. Among these works, Zhao et al. (2021) is the only one that compared their sampling distribution to a dynamic comparator that can change with iterations without requiring stepsize decay.
While our theory quantifies the effect of any sampling distribution on the convergence rate of L-SVRG and L-Katyusha, we use the OSMD sampler and AdaOSMD sampler from Zhao et al. (2021), as they lead to the best upper bound and yield the best empirical performance.

Notation. For a positive integer $n$, let $[n] := \{1, \cdots, n\}$. We use $\|\cdot\|$ to denote the $l_2$-norm in the Euclidean space. Let $\mathcal{P}_{n-1} = \{x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i = 1,\ x_j \geq 0,\ j \in [n]\}$ be the $(n-1)$-dimensional simplex. For a symmetric matrix $A \in \mathbb{R}^{d\times d}$, we use $\lambda_{\max}(A)$ to denote its largest eigenvalue. For a vector $x \in \mathbb{R}^d$, we use $x_j$ or $x[j]$ to denote its $j$-th entry. For two sequences $\{a_n\}$ and $\{b_n\}$, $a_n = O(b_n)$ if there exists $C > 0$ such that $|a_n/b_n| \leq C$ for all $n$ large enough; $a_n = \Theta(b_n)$ if $a_n = O(b_n)$ and $b_n = O(a_n)$ simultaneously.

Organization of the paper. In Section 2, we introduce the algorithm for designing the sampling distribution. In Section 3, we give the convergence analysis. Extensive simulations that demonstrate various aspects of our theory are given in Section 4. Section 5 illustrates an application to real-world data. Finally, we conclude the paper with Section 6.

## 2 AS-LSVRG and AS-LKatyusha

To solve (1) using SGD, one iteratively samples $i_t$ uniformly at random from $[n]$ and updates the model parameter by $x^{t+1} \leftarrow x^t - \eta_t \nabla f_{i_t}(x^t)$. However, due to the non-vanishing variance $\mathbb{V}[\nabla f_{i_t}(x^t)]$, $x^t$ cannot converge to $x^\star$ unless one adopts a diminishing step size by letting $\eta_t \to 0$. To address this issue, L-SVRG (Kovalev et al., 2020) constructs an adjusted estimate of the gradient $g^t = \nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t) + \nabla F(w^t)$, where $w^t$ is a control variate that is updated to $x^t$ with probability $\rho$ in each iteration. Note that $g^t$ is still an unbiased estimate of $\nabla F(x^t)$. Since both $x^t$ and $w^t$ converge to $x^\star$, we have $\mathbb{V}[g^t] \to 0$, and thus $x^t$ can converge to $x^\star$ even with a constant step size.
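Two properties of this estimator can be checked numerically on a toy least-squares problem: $g^t$ is unbiased for any control variate $w^t$, and its variance collapses as $w^t$ approaches $x^t$. The sketch below uses illustrative data; `grad_i` and `grad_F` are our helper names:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 3
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(i, x):                 # gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def grad_F(x):                    # full gradient of F = (1/n) sum_i f_i
    return A.T @ (A @ x - b) / n

x_t, w_t = rng.normal(size=d), rng.normal(size=d)

# All n equally likely realizations of g^t under uniform sampling.
G = np.stack([grad_i(i, x_t) - grad_i(i, w_t) + grad_F(w_t) for i in range(n)])

assert np.allclose(G.mean(axis=0), grad_F(x_t))   # unbiased for ANY w_t
# At w_t = x_t, every realization equals grad_F(x_t), so the variance is zero.
G0 = np.stack([grad_i(i, x_t) - grad_i(i, x_t) + grad_F(x_t) for i in range(n)])
assert np.allclose(G0, grad_F(x_t))
```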
L-Katyusha incorporates a Nesterov-type acceleration to improve the dependency of the computational complexity on the condition number under the strongly convex setting (Kovalev et al., 2020).

Qian et al. (2021) investigated sampling $i_t$ from $[n]$ using a non-uniform sampling distribution to achieve faster convergence. Given the model parameter $x^t$ at iteration $t$, suppose that $i_t$ is sampled from the distribution $\mathbf{p}^t = (p^t_1, \ldots, p^t_n)$. Then
$$g^{t}=\frac{1}{n p_{i_{t}}^{t}}\left(\nabla f_{i_{t}}(x^{t})-\nabla f_{i_{t}}(w^{t})\right)+\nabla F(w^{t})$$
is an unbiased estimate of $\nabla F(x^t)$. The variance of $g^t$ is
$$\mathbb{V}\left[g^{t}\right]=V_{e}^{t}\left(\mathbf{p}^{t}\right)-\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2},$$
where
$$V_{e}^{t}\left(\mathbf{p}^{t}\right):={\frac{1}{n^{2}}}\sum_{i=1}^{n}{\frac{1}{p_{i}^{t}}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}.\tag{2}$$
We let $V^t(\mathbf{p}^t) := \mathbb{V}[g^t]$ be the *sampling variance* of the sampling distribution $\mathbf{p}^t$, and $V^t_e(\mathbf{p}^t)$ be the *effective variance*. Therefore, in order to minimize the variance of $g^t$, we can choose $\mathbf{p}^t$ to minimize $V^t_e(\mathbf{p}^t)$. Let $\mathbf{p}^t_\star = \arg\min_{\mathbf{p}\in\mathcal{P}_{n-1}} V^t_e(\mathbf{p})$ be the oracle optimal dynamic sampling distribution at the $t$-th iteration, which has the closed form
$$p_{\star,i}^{t}={\frac{\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\|}{\sum_{j=1}^{n}\|\nabla f_{j}(x^{t})-\nabla f_{j}(w^{t})\|}},\quad i\in[n].\tag{3}$$
However, we cannot compute $\mathbf{p}^t_\star$ in each iteration, since computing it requires knowledge of all $\{\nabla f_i(x^t)\}_{i=1}^{n}$ and $\{\nabla f_i(w^t)\}_{i=1}^{n}$. If that were the case, we could simply use full-gradient descent, and there would be no need for either sampling or a control variate. Therefore, some kind of approximation of $\mathbf{p}^t_\star$ is unavoidable for practical purposes.

Algorithm 1 AS-LSVRG
1: **Input:** stepsizes $\{\eta_t\}_{t\geq 1}$, $\rho \in (0, 1]$.
2: **Initialize:** $x^0 = w^0$; $\mathbf{p}^0 = (1/n, \cdots, 1/n)$.
3: **for** $t = 0, 1, \cdots, T-1$ **do**
4: Sample $i_t$ from $[n]$ with $\mathbf{p}^t = (p^t_1, \cdots, p^t_n)$.
5: $g^t = \frac{1}{n p^t_{i_t}}\left(\nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t)\right) + \nabla F(w^t)$.
6: $x^{t+1} = x^t - \eta_t g^t$.
7: $w^{t+1} = \begin{cases} x^t & \text{with probability } \rho, \\ w^t & \text{with probability } 1-\rho. \end{cases}$
8: Update $\mathbf{p}^t$ to $\mathbf{p}^{t+1}$ by the OSMD sampler (Algorithm 3) or the AdaOSMD sampler (Algorithm 4).
9: **end for**

Algorithm 2 AS-LKatyusha
1: **Input:** stepsizes $\{\eta_t\}_{t\geq 1}$, $\rho \in (0, 1]$, $\theta_1, \theta_2 \in [0, 1]$, $0 < \kappa < 1$, $L > 0$.
2: **Initialize:** $v^0 = w^0 = z^0$.
3: **for** $t = 0, 1, \cdots, T-1$ **do**
4: $x^t = \theta_1 z^t + \theta_2 w^t + (1 - \theta_1 - \theta_2)v^t$.
5: Sample $i_t$ from $[n]$ with $\mathbf{p}^t = (p^t_1, \cdots, p^t_n)$.
6: $g^t = \frac{1}{n p^t_{i_t}}\left(\nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t)\right) + \nabla F(w^t)$.
7: $z^{t+1} = \frac{1}{1+\eta_t\kappa}\left(\eta_t\kappa x^t + z^t - \frac{\eta_t}{L} g^t\right)$.
8: $v^{t+1} = x^t + \theta_1(z^{t+1} - z^t)$.
9: $w^{t+1} = \begin{cases} v^t & \text{with probability } \rho, \\ w^t & \text{with probability } 1-\rho. \end{cases}$
10: Update $\mathbf{p}^t$ to $\mathbf{p}^{t+1}$ by the OSMD sampler (Algorithm 3) or the AdaOSMD sampler (Algorithm 4).
11: **end for**

Qian et al. (2021) proposed substituting each $\|\nabla f_i(x^t) - \nabla f_i(w^t)\|$ with its upper bound. Based on the smoothness assumption (Assumption 2 in Section 3), we have $\|\nabla f_i(x^t) - \nabla f_i(w^t)\| \leq L_i\|x^t - w^t\|$. Thus, by substituting $\|\nabla f_i(x^t) - \nabla f_i(w^t)\|$ with $L_i\|x^t - w^t\|$ in (2), we obtain an approximate sampling distribution $\mathbf{p}^{IS} = (p^{IS}_1, \cdots, p^{IS}_n)$, with $p^{IS}_i = L_i/(\sum_{i=1}^{n} L_i) = L_i/(n\bar{L})$. L-SVRG and L-Katyusha that use $\mathbf{p}^{IS}$ can achieve faster convergence compared to using uniform sampling (Qian et al., 2021). However, one difficulty of applying $\mathbf{p}^{IS}$ in practice is that we need to know $L_i$ for all $i = 1, \ldots, n$. While such information can be easy to access in some cases, such as in linear and logistic regression problems, it is generally hard to estimate, especially when the dimension of the model parameter is high. To circumvent this problem, recent work has formulated the design of the sampling distribution as an online learning problem (Salehi et al., 2017; Borsos et al., 2019; Namkoong et al., 2017; Hanchi & Stephens, 2020; Zhao et al., 2021).
More specifically, at each iteration $t$, after sampling $i_t$ with sampling distribution $\mathbf{p}^t$, we can receive information about $\|\nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t)\|$. Although we cannot have $\|\nabla f_i(x^t) - \nabla f_i(w^t)\|$ for all $i = 1, \ldots, n$, the partial information obtained from $\{\|\nabla f_{i_s}(x^s) - \nabla f_{i_s}(w^s)\|\}_{s=0}^{t}$ and $\{\mathbf{p}^s\}_{s=0}^{t}$ is helpful in constructing the sampling distribution $\mathbf{p}^{t+1}$ to minimize $V^{t}_e(\mathbf{p}^{t})$. In this paper, we adapt the methods proposed in Zhao et al. (2021) for L-SVRG and L-Katyusha and apply them in our experiments; however, our analysis is not restricted to this choice and can accommodate other methods as well.

We introduce our modifications of L-SVRG and L-Katyusha that use adaptive sampling, namely Adaptive Sampling L-SVRG (AS-LSVRG, Algorithm 1) and Adaptive Sampling L-Katyusha (AS-LKatyusha, Algorithm 2). The key change here is that instead of using a fixed sampling distribution $\mathbf{p}^t \equiv \mathbf{p}$, $t \geq 0$, we allow the sampling distribution to change with iterations and adaptively learn it. More specifically, Step 8 of Algorithm 1 and Step 10 of Algorithm 2 use the OSMD sampler or the AdaOSMD sampler (Zhao et al., 2021) to update the sampling distribution, which are described in Algorithm 3 and Algorithm 4, respectively.

Algorithm 3 OSMD Sampler
1: **Input:** learning rate $\eta$; parameter $\alpha \in (0, 1]$, $\mathcal{A} = \mathcal{P}_{n-1} \cap [\alpha/n, \infty)^n$; number of iterations $T$.
2: **Output:** $\mathbf{p}^t$ for $t = 1, \ldots, T$.
3: **Initialize:** $\mathbf{p}^1 = (1/n, \ldots, 1/n)$.
4: **for** $t = 1, 2, \ldots, T-1$ **do**
5: Sample $i_t$ from $[n]$ by $\mathbf{p}^t$. Let $a^t_{i_t} = \|\nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t)\|^2$.
6: Compute the sampling loss gradient estimate $\nabla\hat{V}^t_e(\mathbf{p}^t) \in \mathbb{R}^n$: all entries are zero except for the $i_t$-th entry, which is
$$\left[\nabla\hat{V}_{e}^{t}(\mathbf{p}^{t})\right]_{i_{t}}=-\frac{1}{n^{2}}\cdot\frac{a_{i_{t}}^{t}}{(p_{i_{t}}^{t})^{3}}.\tag{4}$$
7: Solve $\mathbf{p}^{t+1} = \arg\min_{\mathbf{p}\in\mathcal{A}}\ \eta\langle \mathbf{p}, \nabla\hat{V}^t_e(\mathbf{p}^t)\rangle + D_\Phi\left(\mathbf{p} \,\|\, \mathbf{p}^t\right)$ using Algorithm 5 with the learning rate $\eta$.
8: **end for**

Algorithm 4 AdaOSMD Sampler
1: **Input:** meta-algorithm learning rate $\gamma$; expert learning rates $\mathcal{E} = \{\eta_1 \leq \eta_2 \leq \cdots \leq \eta_H\}$; $\alpha \in (0, 1]$; $\mathcal{A} = \mathcal{P}_{n-1} \cap [\alpha/n, \infty)^n$; number of iterations $T$.
2: **Output:** $\mathbf{p}^t$ for $t = 1, \ldots, T$.
3: Set $\theta^1_h = (1 + 1/H)/(h(h+1))$, $h \in [H]$.
4: **Initialize:** $\mathbf{p}^1_h = (1/n, \ldots, 1/n)$ for $h \in [H]$.
5: **for** $t = 1, 2, \ldots, T-1$ **do**
6: Compute $\mathbf{p}^t = \sum_{h=1}^{H} \theta^t_h \mathbf{p}^t_h$.
7: Sample $i_t$ from $[n]$ by $\mathbf{p}^t$. Let $a^t_{i_t} = \|\nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t)\|^2$.
8: **for** $h = 1, 2, \ldots, H$ **do**
9: Compute the sampling loss estimate
$$\hat{V}_{e}^{t}(\mathbf{p}_{h}^{t};\mathbf{p}^{t})={\frac{1}{n^{2}}}\cdot{\frac{a_{i_{t}}^{t}}{p_{i_{t}}^{t}\,p_{h,i_{t}}^{t}}}.\tag{5}$$
10: Compute the sampling loss gradient estimate $\nabla\hat{V}^t_e(\mathbf{p}^t_h; \mathbf{p}^t) \in \mathbb{R}^n$: all entries are zero except for the $i_t$-th entry, which is
$$\left[\nabla\hat{V}_{e}^{t}(\mathbf{p}_{h}^{t};\mathbf{p}^{t})\right]_{i_{t}}=-\frac{1}{n^{2}}\cdot\frac{a_{i_{t}}^{t}}{p_{i_{t}}^{t}(p_{h,i_{t}}^{t})^{2}}.\tag{6}$$
11: Solve $\mathbf{p}^{t+1}_h = \arg\min_{\mathbf{p}\in\mathcal{A}}\ \eta_h\langle \mathbf{p}, \nabla\hat{V}^t_e(\mathbf{p}^t_h; \mathbf{p}^t)\rangle + D_\Phi\left(\mathbf{p} \,\|\, \mathbf{p}^t_h\right)$ using Algorithm 5 with the learning rate $\eta_h$.
12: **end for**
13: Update the weights of each expert
$$\theta_{h}^{t+1}=\frac{\theta_{h}^{t}\exp\left\{-\gamma\hat{V}_{e}^{t}(\mathbf{p}_{h}^{t};\mathbf{p}^{t})\right\}}{\sum_{h'=1}^{H}\theta_{h'}^{t}\exp\left\{-\gamma\hat{V}_{e}^{t}(\mathbf{p}_{h'}^{t};\mathbf{p}^{t})\right\}},\quad h\in[H].$$
14: **end for**

While the OSMD sampler and AdaOSMD sampler allow for choosing a mini-batch of samples in each iteration, here we focus on choosing only one sample in each iteration. We choose $\Phi$ to be the unnormalized negative entropy, that is, $\Phi(x) = \sum_{i=1}^{n} x_i\log x_i - \sum_{i=1}^{n} x_i$, $x = (x_1, \ldots, x_n)^\top \in [0, \infty)^n$, with $0\log 0$ defined as $0$. Additionally, $D_\Phi(x \,\|\, y) = \Phi(x) - \Phi(y) - \langle\nabla\Phi(y), x - y\rangle$ is the Bregman divergence between any $x, y \in (0, \infty)^n$ with respect to the function $\Phi$.
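To make one round of the OSMD sampler concrete: sample $i_t$ from $\mathbf{p}^t$, form the single-nonzero-entry gradient estimate in (4), and take an entropic mirror descent step. In the sketch below, the feedback $a^t_{i_t}$ is a synthetic stand-in for $\|\nabla f_{i_t}(x^t) - \nabla f_{i_t}(w^t)\|^2$, and the constrained step of Algorithm 5 is replaced by a cruder floor-and-renormalize projection:

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha, eta = 10, 0.4, 1e-4
p = np.full(n, 1.0 / n)

for t in range(200):
    i_t = rng.choice(n, p=p)
    # Synthetic stand-in for ||grad f_{i_t}(x^t) - grad f_{i_t}(w^t)||^2.
    a_it = (i_t + 1) / 10.0
    # Unbiased estimate of the gradient of V_e^t(p): one nonzero entry, as in (4).
    g_hat = np.zeros(n)
    g_hat[i_t] = -a_it / (n**2 * p[i_t] ** 3)
    # Entropic mirror descent: multiplicative update, then a crude projection
    # (floor at alpha/n and renormalize) standing in for Algorithm 5.
    p = p * np.exp(-eta * g_hat)
    p = np.maximum(p / p.sum(), alpha / n)
    p /= p.sum()

assert np.isclose(p.sum(), 1.0)
assert p.min() >= alpha / (2 * n)   # the floor keeps p bounded away from zero
```

Since the nonzero gradient entry in (4) is negative and grows in magnitude with $a^t_{i_t}$, indices that repeatedly report large losses are multiplied up and receive more sampling mass, which is exactly the behavior the sampler is designed to induce.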
Algorithm 5 OSMD Solver: Solve $\mathbf{p}^{t+1} = \arg\min_{\mathbf{q}\in\mathcal{A}}\ \eta\langle \mathbf{q}, \hat{u}^{t}\rangle + D_\Phi(\mathbf{q}\,\|\,\mathbf{p}^{t})$
1: **Input:** $\mathbf{p}^t$, $\hat{u}^t$, $\mathcal{A} = \mathcal{P}_{n-1} \cap [\alpha/n, \infty)^n$; learning rate $\eta$.
2: **Output:** $\mathbf{p}^{t+1}$.
3: Let $\tilde{p}^{t+1}_i = p^t_i \exp(-\eta\hat{u}^t_i)$ for $i \in [n]$.
4: Sort $\{\tilde{p}^{t+1}_i\}_{i=1}^{n}$ in a non-decreasing order: $\tilde{p}^{t+1}_{\pi(1)} \leq \ldots \leq \tilde{p}^{t+1}_{\pi(n)}$.
5: Let $v_i = \tilde{p}^{t+1}_{\pi(i)}\left(1 - \frac{i-1}{n}\alpha\right)$ for $i \in [n]$.
6: Let $z_i = \frac{\alpha}{n}\sum_{j=i}^{n} \tilde{p}^{t+1}_{\pi(j)}$ for $i \in [n]$.
7: Find the smallest $i$ such that $v_i > z_i$, denoted as $i_\star$.
8: Let
$$p_{i}^{t+1}=\begin{cases}\alpha/n & \text{if } \pi(i) < i_\star,\\[4pt] \left(1-\dfrac{i_\star-1}{n}\alpha\right)\tilde{p}_{i}^{t+1}\Big/\displaystyle\sum_{j=i_\star}^{n}\tilde{p}_{\pi(j)}^{t+1} & \text{otherwise.}\end{cases}$$

The key insight of the OSMD sampler is to use online stochastic mirror descent (Lattimore & Szepesvári, 2020) to minimize the cumulative sampling loss $\sum_{t=1}^{T} V^t_e(\mathbf{p}^t)$, where $V^t_e(\mathbf{p}^t)$ is defined in (2). To apply OSMD, we first construct an unbiased estimate of the gradient of $V^t_e(\mathbf{p}^t)$, which is shown in (4). Then, in Step 7, we update the sampling distribution by taking a mirror descent step. Intuitively, the optimization objective in Step 7 involves two terms. The first term encourages the sampling distribution to fit the most recent history, while the second term ensures that it does not deviate too far from the previous decision. By choosing a learning rate $\eta$, we keep a trade-off between these two concerns. A larger learning rate implies a stronger fit towards the most recent history. To automatically choose the best learning rate, AdaOSMD uses a set of expert learning rates and combines them using exponentially weighted averaging. Note that the total number of iterations $T$ is assumed to be known and used as an input to AdaOSMD. When the number of iterations $T$ is not known in advance, Zhao et al. (2021) proposed a doubling trick, which could also be used here.
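The constrained step solved by Algorithm 5 is a Bregman projection of the exponentiated iterate onto the clipped simplex $\mathcal{A} = \mathcal{P}_{n-1} \cap [\alpha/n, \infty)^n$: entries that would fall below the floor $\alpha/n$ are pinned there, and the remaining mass is rescaled. Below is a water-filling reformulation of that projection, written by us for illustration; it is not a verbatim transcription of the pseudocode:

```python
import numpy as np

def clipped_simplex_project(p_tilde, alpha):
    """Normalize p_tilde onto {p : sum(p) = 1, p_i >= alpha/n} by repeatedly
    pinning too-small entries to the floor and rescaling the free ones."""
    p_tilde = np.asarray(p_tilde, dtype=float)
    n = len(p_tilde)
    floor = alpha / n
    pinned = np.zeros(n, dtype=bool)
    while True:
        free = ~pinned
        # Rescale the free entries so the whole vector sums to one.
        scale = (1.0 - pinned.sum() * floor) / p_tilde[free].sum()
        newly = free & (p_tilde * scale < floor)
        if not newly.any():
            return np.where(free, p_tilde * scale, floor)
        pinned |= newly

# Step 3 of Algorithm 5 (exponentiated update), followed by the projection.
p_prev = np.array([0.5, 0.3, 0.15, 0.05])
u_hat = np.array([0.0, 0.0, -1.0, 0.0])   # illustrative loss-gradient estimate
p_next = clipped_simplex_project(p_prev * np.exp(-0.1 * u_hat), alpha=0.4)

assert np.isclose(p_next.sum(), 1.0)
assert p_next.min() >= 0.4 / 4            # every entry respects the floor alpha/n
assert p_next[2] > p_prev[2]              # the entry with negative u_hat gained mass
```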
The set of expert learning rates is given by
$${\mathcal{E}}:=\left\{2^{h-1}\cdot{\frac{\alpha^{3}}{n^{3}{\bar{a}}^{1}}}{\sqrt{\frac{\log n}{2T}}}\,\Bigg|\,h=1,2,\ldots,H\right\},\tag{7}$$
where
$$H=\left\lfloor{\frac{1}{2}}\log_{2}\left(1+{\frac{4\log(n/\alpha)}{\log n}}(T-1)\right)\right\rfloor+1.\tag{8}$$
The learning rate in AdaOSMD is set to $\gamma = \frac{\alpha}{n}\sqrt{\frac{8}{T\bar{a}^1}}$, where $\bar{a}^1 = \max_{i\in[n]}\|\nabla f_i(x^0)\|$. For all experiments in this paper, we set $\alpha = 0.4$.

The main computational bottleneck of both the OSMD sampler and the AdaOSMD sampler is the mirror descent step. Fortunately, Step 7 of Algorithm 3 and Step 11 of Algorithm 4 can be efficiently solved by Algorithm 5. The main cost of Algorithm 5 comes from sorting the sequence $\{\tilde{p}^{t+1}_i\}_{i=1}^{n}$, which can be done with a computational complexity of $O(n\log n)$. However, note that we only update one entry of $\mathbf{p}^t$ to get $\tilde{\mathbf{p}}^{t+1}$, and $\mathbf{p}^t$ was sorted in the previous iteration. Therefore, most entries of $\tilde{\mathbf{p}}^{t+1}$ are already sorted. Using this observation, we can usually achieve a much faster running time, for example, by using an adaptive sorting algorithm (Estivill-Castro & Wood, 1992).

## 3 Convergence Analysis

We provide convergence rates for AS-LSVRG (Algorithm 1) and AS-LKatyusha (Algorithm 2) for any sampling distribution sequence $\{\mathbf{p}^t\}_{t\geq 0}$. We begin by imposing assumptions on the optimization problem in (1).

Assumption 1 (Convexity). *For each $i \in [n]$, the function $f_i(\cdot)$ is convex and first-order continuously differentiable:*
$$f_{i}(x)\geq f_{i}(y)+\langle\nabla f_{i}(y),x-y\rangle\quad\text{for all }x,y\in\mathbb{R}^{d}.$$

Assumption 2 (Smoothness). *For each $i \in [n]$, the function $f_i$ is $L_i$-smooth:*
$$\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L_{i}\|x-y\|\quad\text{for all }x,y\in\mathbb{R}^{d}.$$
*Furthermore, the function $F$ is $L_F$-smooth:*
$$\|\nabla F(x)-\nabla F(y)\|\leq L_{F}\|x-y\|\quad\text{for all }x,y\in\mathbb{R}^{d}.$$

Recall that $\bar{L} = (1/n)\sum_{i=1}^{n} L_i$ and $L_{\max} = \max_{1\leq i\leq n} L_i$. By the convexity of $\|\cdot\|$ and Jensen's inequality, we have that $L_F \leq \bar{L}$. For some results, we will assume that $F$ is strongly convex.

Assumption 3 (Strong Convexity). *The function $F(\cdot)$ is $\mu$-strongly convex:*
$$F(x)\geq F(y)+\langle\nabla F(y),x-y\rangle+{\frac{\mu}{2}}\|x-y\|^{2}$$
*for all $x, y \in \mathbb{R}^d$, where $\mu > 0$.*

Additionally, the *optimization heterogeneity* is defined as
$$\sigma_{\star}^{2}:=\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_{i}(x^{\star})\|^{2},\tag{9}$$
and the *smoothness heterogeneity* is defined as $L_{\max}/\bar{L}$.

## 3.1 Convergence Analysis of AS-LSVRG

We begin by providing a convergence rate for AS-LSVRG (Algorithm 1) under strong convexity. Let
$$\mathcal{D}^{t}:=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}.\tag{10}$$
Roughly speaking, $\mathcal{D}^t$ measures the weighted distance between the control variate $w^t$ and the minimizer $x^\star$, where the weights are the inverses of the Lipschitz constants.

Theorem 1. *Suppose Assumptions 1-3 hold. Let $\eta_t \equiv \eta$ for all $t$, where $\eta \leq 1/(6\bar{L} + L_F)$, and let*
$$\alpha_{1}:=\operatorname*{max}\left\{1-\eta\mu,\,1-{\frac{\rho}{2}}\right\}.$$
*Then*
$$\mathbb{E}\left[\left\|x^{T}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\tilde{L}}{\rho}\mathcal{D}^{T}\right]\leq\alpha_{1}^{T}\,\mathbb{E}\left[\left\|x^{0}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\tilde{L}}{\rho}\mathcal{D}^{0}\right]+\eta^{2}\sum_{t=0}^{T}\alpha_{1}^{T-t}\,\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$

See proof in Appendix A.1. From the convergence rate in Theorem 1, we observe that a good sampling distribution sequence should minimize the cumulative sampling variance $\sum_{t=0}^{T}\alpha_1^{T-t}\,\mathbb{E}\left[V_e^t(\mathbf{p}^t)\right]$.
This justifies the usage of AdaOSMD to design a sequence of sampling distributions, as its purpose is to minimize the cumulative sampling variance (Zhao et al., 2021). When

$$\sum_{t=0}^{T}\alpha_{1}^{T-t}\,\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right]=O\left(\alpha_{1}^{T}\right),$$

the iteration complexity to achieve ϵ-accuracy is O((1/log(1/α₁)) log(1/ϵ)). When ρ = 1/n, η = 1/(6L̄ + LF), and both L̄/µ and n are large, this bound is O((n + L̄/µ) log(1/ϵ)), which recovers the complexity of L-SVRG when sampling from p^IS (Qian et al., 2021). When this condition holds, we can further compare the iteration complexity of AS-LSVRG with that of SGD with importance sampling from p^IS, which is O((σ⋆²/(µ²ϵ) + L̄/µ) log(1/ϵ)), where σ⋆² is defined in (9) (Needell et al., 2016), and with that of L-SVRG, which is O((n + Lmax/µ) log(1/ϵ)) (Kovalev et al., 2020). First, we observe that the iteration complexities of AS-LSVRG and L-SVRG do not depend on σ⋆², while the iteration complexity of SGD does. This shows that the control variate improves upon optimization heterogeneity. Second, we observe that the iteration complexities of both AS-LSVRG and SGD depend on L̄, while the iteration complexity of L-SVRG depends on Lmax. This shows that adaptive sampling improves upon smoothness heterogeneity. Based on these two observations, we have the following important takeaway: while both the control variate and adaptive sampling reduce the variance of the stochastic gradient, the control variate improves upon optimization heterogeneity, whereas adaptive sampling improves upon smoothness heterogeneity.

Another important observation is that when p^t = p^t⋆, we have $V_{e}^{t}(\mathbf{p}^{t}_{\star})\leq V_{e}^{t}(\mathbf{p}^{IS})$. Therefore, the performance of the oracle optimal dynamic sampling distribution is at least as good as that of the fixed sampling distribution p^IS.
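The comparison between the oracle distribution and p^IS can be checked numerically. The sketch below assumes the standard second-moment form V(p) = (1/n²)Σᵢ ∥dᵢ∥²/pᵢ with dᵢ = ∇fᵢ(xᵗ) − ∇fᵢ(wᵗ), minimized by pᵢ ∝ ∥dᵢ∥ (the closed form referenced as (3)); the norms below are illustrative:

```python
def sampling_variance(d_norms, p):
    """V(p) = (1/n^2) * sum_i ||d_i||^2 / p_i: the second moment of the
    importance-weighted estimator, up to the p-independent mean term."""
    n = len(d_norms)
    return sum(dn * dn / pi for dn, pi in zip(d_norms, p)) / n ** 2

def optimal_p(d_norms):
    """p_i proportional to ||d_i||, the minimizer of V(p)."""
    s = sum(d_norms)
    return [dn / s for dn in d_norms]

d = [0.1, 0.1, 4.0]           # ||grad f_i(x) - grad f_i(w)||, illustrative
p_is = [1 / 3, 1 / 3, 1 / 3]  # stands in for p^IS when all L_i are equal
v_star = sampling_variance(d, optimal_p(d))
v_is = sampling_variance(d, p_is)
```

For the optimal distribution, V(p⋆) = (Σᵢ∥dᵢ∥)²/n², which here is 1.96 versus 5.34 for the fixed distribution; the gap grows with the spread of the ∥dᵢ∥.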
The gains from using a dynamic sampling distribution can be significant, as we show in the experiments in Sections 4 and 5. While the closed form of p^t⋆ in (3) requires knowledge of ∇fi(x^t) − ∇fi(w^t), which is not available in practice, we can minimize the cumulative sampling variance $\sum_{t=1}^{T}V_{e}^{t}(\mathbf{p}^{t})$ sequentially using AdaOSMD, which yields the approximation p^t without the need for prior information. We discuss in Section 3.3 below when this adaptive sampling strategy can perform better than p^IS.

The following result provides the convergence rate when F(x) is weakly convex.

Theorem 2. *Suppose Assumptions 1 and 2 hold. Let* ηt ≡ η *for all* t, *where* η ≤ 1/(6LF), *and let* x̂^T = (1/T)Σ_{t=1}^T x^t. *Then*

$$\mathbb{E}\left[F(\hat{x}^{T})-F(x^{\star})\right]\leq\frac{4}{T}\left(F(x^{0})-F(x^{\star})\right)+\frac{5}{T}\left\{\frac{1}{2\eta}\left\|x^{0}-x^{\star}\right\|^{2}+\frac{12\eta\bar{L}(1-\rho)}{5\rho}\left(F(w^{0})-F(x^{\star})\right)\right\}+\frac{3\eta}{T}\sum_{t=0}^{T}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$

See proof in Appendix A.2. In the weakly convex case, the cumulative sampling variance is defined as $\sum_{t=0}^{T}\mathbb{E}[V_{e}^{t}(\mathbf{p}^{t})]$, and a good sampling distribution sequence should minimize it. When η = 1/(6LF), ρ = 1/n, and $\sum_{t=0}^{T}\mathbb{E}[V_{e}^{t}(\mathbf{p}^{t})-V_{e}^{t}(\mathbf{p}^{IS})]=O(T(L_{F}+n))$, the iteration complexity to reach ϵ-accuracy is O((LF + n)(1/ϵ)), which recovers the rate of L-SVRG when sampling from p^IS (Qian et al., 2021).

## 3.2 Convergence Analysis of AS-LKatyusha

We prove a convergence rate for AS-LKatyusha (Algorithm 2) under strong convexity. Let

$$\mathcal{Z}^{t}:=\frac{L(1+\eta_{t}\kappa)}{2\eta_{t}}\left\|z^{t}-x^{\star}\right\|^{2},\qquad\mathcal{V}^{t}:=\frac{1}{\theta_{1}}\left(F(v^{t})-F(x^{\star})\right),\qquad\mathcal{W}^{t}:=\frac{\theta_{2}(1+\theta_{1})}{\rho\theta_{1}}\left(F(w^{t})-F(x^{\star})\right),\tag{12}$$

and Ψ^t := Z^t + V^t + W^t. We then have the following theorem.
Theorem 3. *Suppose Assumptions 1–3 hold. Let* ηt ≡ η *for all* t, *where* η = θ₂/((1 + θ₂)θ₁), *and* κ = µ/L *with* L = L̄. *Let* θ₂ = 1/2, θ₁ ≤ 1/2, *and*

$$\alpha_{2}:=\max\left\{\frac{1}{1+\eta\kappa},\,1-\frac{\theta_{1}}{2},\,1-\frac{\rho\theta_{1}}{1+\theta_{1}}\right\}.$$

*Then*

$$\mathbb{E}\left[\Psi^{T}\right]\leq\alpha_{2}^{T}\Psi^{0}+\frac{1}{4\bar{L}\theta_{1}}\sum_{t=0}^{T-1}\alpha_{2}^{T-t-1}\,\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$

See proof in Appendix A.3. The cumulative sampling variance is defined as $\sum_{t=0}^{T-1}\alpha_{2}^{T-t-1}\mathbb{E}[V_{e}^{t}(\mathbf{p}^{t})]$ and can be used as the minimization objective when designing a sequence of sampling distributions. When ρ = 1/n, θ₁ = min{√(2κn/3), 1/2}, and $\sum_{t=0}^{T-1}\alpha_{2}^{T-t-1}\mathbb{E}[V_{e}^{t}(\mathbf{p}^{t})-V_{e}^{t}(\mathbf{p}^{IS})]=O(\alpha_{2}^{T})$, the iteration complexity to reach ϵ-accuracy is $O((n+\sqrt{n\bar{L}/\mu})\log(1/\epsilon))$, which recovers the rate of L-Katyusha when sampling from p^IS (Qian et al., 2021). Additionally, when compared with the rate of L-Katyusha in Kovalev et al. (2020), we see that the dependence on Lmax is improved to L̄, which is consistent with our conclusion in Section 3.1 that adaptive sampling is responsible for improving smoothness heterogeneity.

## 3.3 Benefits of Adaptive Sampling

We analyze when adaptive sampling improves over sampling from p^IS. We first emphasize that sampling from p^IS requires knowledge of the Lipschitz constants {Li}i∈[n], which, in general, are expensive to compute. On the other hand, the additional computational cost of adaptive sampling is usually comparable to the cost of computing a stochastic gradient. In addition to computational benefits, there are certain settings where adaptive sampling may result in improved convergence, despite not using prior information.
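Whenever the Lipschitz constants happen to be available, forming p^IS itself is cheap. A minimal sketch for the squared-error setting used later in Section 4, where Li = ∥ai∥² (the data rows below are illustrative):

```python
def p_importance(rows):
    """p^IS_i = L_i / sum_j L_j with L_i = ||a_i||^2, as for the
    squared-error losses f_i(x) = 0.5 * (b_i - <x, a_i>)^2."""
    Ls = [sum(v * v for v in a) for a in rows]  # row-norm squares
    total = sum(Ls)
    return [L / total for L in Ls]

p = p_importance([[1.0, 0.0], [0.0, 2.0], [2.0, 1.0]])  # L = [1, 4, 5]
```

The hard part in general is obtaining the Li, not normalizing them; adaptive sampling sidesteps that step entirely.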
A key quantity to understand is

$$\Delta V\left(\mathbf{p}^{1:T}\right):=\sum_{t=0}^{T}\alpha^{T-t}\,\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{IS}\right)-V_{e}^{t}\left(\mathbf{p}^{t}\right)\right],$$

where α ∈ {α₁, α₂, 1}, depending on the algorithm used and the assumptions made. The larger ∆V(p^{1:T}) is, the more beneficial adaptive sampling is. In the following, we discuss when ∆V(p^{1:T}_⋆) is large. Although p^{1:T}_⋆ is not available in practice, ∆V(p^{1:T}_⋆) can be used to understand when adaptive sampling methods that approximate p^t⋆ will be superior to using p^IS for importance sampling.

In many machine learning applications, fi(x) has the form fi(x) = l(x, ξi), where ξi is the i-th data point. Let x⋆ᵢ ∈ R^d be such that ∇l(x⋆ᵢ, ξi) = 0. Then ∥∇fi(x)∥ = ∥∇l(x, ξi) − ∇l(x⋆ᵢ, ξi)∥. This way, we see that the variability of the norms of the gradients of different data points has two sources: the first is the difference between the ξi's, and the second is the difference between the x⋆ᵢ's. We call the first source the *context-shift* and the second source the *concept-shift*. When fi(x) is twice continuously differentiable, we have

$$L_{i}=\sup_{x\in\mathbb{R}^{d}}\lambda_{\max}\left(\nabla^{2}f_{i}(x)\right)=\sup_{x\in\mathbb{R}^{d}}\lambda_{\max}\left(\nabla^{2}l(x,\xi_{i})\right).$$

Thus, when we use p^IS to sample, we ignore the concept-shift and only leverage the context-shift through the sampling distribution. As a result, p^IS is most useful when the context-shift dominates. On the other hand, adaptive sampling takes both the concept-shift and the context-shift into consideration. When the major source of gradient norm differences is the concept-shift, adaptive sampling can perform better than sampling from p^IS. This is illustrated in Section 4.3.

## 4 Synthetic Data Experiment

We use synthetic data to illustrate our theory and compare several different stochastic optimization algorithms.
We denote L-SVRG + uniform sampling as L-SVRG, L-SVRG + oracle optimal sampling as Optimal-LSVRG, and L-SVRG + sampling from p^IS as IS-LSVRG. Similarly, we define SGD, Optimal-SGD, IS-SGD, L-Katyusha, Optimal-LKatyusha, and IS-LKatyusha. Additionally, AS-LSVRG and AS-LKatyusha refer to Algorithm 1 and Algorithm 2 with the AdaOSMD sampler (Algorithm 4), respectively, except in Section 4.4, where we use the OSMD sampler (Algorithm 3). We set ρ = 1/n for all algorithms. The algorithm parameters for L-Katyusha with all sampling strategies are set according to Theorem 3, where L = L̄ for Optimal-LKatyusha and IS-LKatyusha, and L = Lmax for L-Katyusha. For AS-LKatyusha, we set L = 0.4Lmax + 0.6L̄. The parameters of AdaOSMD are configured as stated in Section 2; when choosing a mini-batch of samples in each iteration, we set them according to Zhao et al. (2021).

Data generation: We generate data from a linear regression model: bi = ⟨θ⋆, ai⟩ + ζi, where $a_{i}\overset{\text{i.i.d.}}{\sim}N(0,s_{i}\cdot\Sigma)$ with $\Sigma=\operatorname{diag}(25^{0/(d-1)},\cdots,25^{(d-1)/(d-1)})$, $s_{i}\overset{\text{i.i.d.}}{\sim}e^{N(0,\nu^{2})}$, $\zeta_{i}\overset{\text{i.i.d.}}{\sim}N(0,\sigma^{2})$, and the entries of θ⋆ are generated i.i.d. from N(10.0, 3.0²). We let fi(x) := l(x; ai, bi), where l(x; ai, bi) := (1/2)(bi − ⟨x, ai⟩)² is the squared error loss. In this setting, the variance σ² controls the optimization heterogeneity in (9), with larger σ² corresponding to larger optimization heterogeneity, while ν controls the smoothness heterogeneity, with larger ν corresponding to larger smoothness heterogeneity. Under this model, the variability of the gradient norms is primarily caused by the differences between the bi's, which corresponds to the context-shift. As a result, we expect sampling according to p^IS to perform similarly to oracle optimal sampling. Note that in this setting we have Li = ∥ai∥², so we set $p_{i}^{IS}=\|a_{i}\|^{2}/(\sum_{j=1}^{n}\|a_{j}\|^{2})$ for all i = 1, …, n. We set n = 100, d = 10, and report results averaged across 10 independent runs.

## 4.1 SGD v.s.
L-SVRG

We compare SGD and Optimal-SGD with L-SVRG and Optimal-LSVRG. From the results in Figure 1, we make three main observations. First, with large optimization heterogeneity (rightmost column), Optimal-LSVRG converges faster and achieves a smaller optimal value than Optimal-SGD. This observation is consistent with our conclusion in Section 3.1 that the control variate is responsible for improving optimization heterogeneity. Second, Optimal-LSVRG always improves upon L-SVRG, with the largest improvement observed when the smoothness heterogeneity is large (bottom row). This illustrates our conclusion that importance sampling can improve smoothness heterogeneity. Finally, we observe that L-SVRG is more vulnerable to smoothness heterogeneity than SGD, which can also be seen from the conditions on the step size: we need η ≤ 1/(6Lmax) for L-SVRG (Theorem 5 of Kovalev et al. (2020)), while we only need η ≤ 1/Lmax for SGD (Theorem 2.1 of Needell et al. (2016)) to ensure convergence.

## 4.2 Non-Uniform Sampling for L-SVRG and L-Katyusha

We compare L-SVRG and L-Katyusha with different sampling strategies. Figure 2 shows results for L-SVRG. We observe that the performance of Optimal-LSVRG and IS-LSVRG is similar, since the context-shift dominates the variability of the gradient norms. Furthermore, we see that adaptive sampling improves the performance of L-SVRG compared to uniform sampling. The improvement is most significant when the smoothness heterogeneity is large (bottom row). Figure 3 shows results for L-Katyusha. We set the step size according to Theorem 3. The oracle optimal sampling distribution results in a considerable improvement over sampling from p^IS after adding acceleration. In addition, we note that adaptive sampling efficiently improves over uniform sampling.

## 4.3 Importance Sampling v.s. Adaptive Sampling

We provide an example where adaptive sampling can perform better than sampling from p^IS.
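The mechanism can already be seen in one dimension: for fᵢ(x) = ½(x − θ⋆ᵢ)², every Lᵢ = 1, so p^IS is uniform, yet the gradient norms spread with the θ⋆ᵢ. A minimal sketch (values illustrative):

```python
def grad_norms(x, theta_stars):
    """||f_i'(x)|| = |x - theta_i*| for f_i(x) = 0.5 * (x - theta_i*)^2.
    Every L_i equals 1, so sampling proportionally to L_i cannot
    distinguish the components; only the spread of the minimizers
    (the concept-shift) drives the gradient-norm variability."""
    return [abs(x - t) for t in theta_stars]

norms = grad_norms(0.0, [0.1, -0.2, 8.0])
```

An adaptive scheme can concentrate probability on the third component, while p^IS stays uniform.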
We generate data from a linear regression model bi = ⟨θ⋆, ai⟩ + ζi, where $\zeta_{i}\overset{\text{i.i.d.}}{\sim}N(0,0.5^{2})$ and, for each ai ∈ R^d, we choose uniformly at random one dimension, denoted supp(i) ∈ [d], and set it to a nonzero value, while the remaining dimensions are set to zero. The nonzero value ai[supp(i)] is generated from N(1.0, 0.1²). The entries of θ⋆ are generated i.i.d. from $e^{N(0,\nu^{2})}$. Therefore, ν controls the variance of the entries of θ⋆. We let n = 300 and d = 30. In this setting, we have Li = ∥ai∥² = |ai[supp(i)]|² ≈ 1.0, and thus sampling from p^IS will perform similarly to uniform sampling. On the other hand, we have

$$\|\nabla f_{i}(x)\|=\left|(x-\theta^{\star})[\operatorname{supp}(i)]\cdot a_{i}[\operatorname{supp}(i)]+\zeta_{i}\right|.$$

![10_image_0.png](10_image_0.png)

Figure 1: Comparison of four methods: SGD, Optimal-SGD, L-SVRG, and Optimal-LSVRG. Columns correspond to different σ values, while rows correspond to different ν values. The stepsize is the same for all algorithms: 0.1 when ν = 0, 0.05 when ν = 0.5, and 0.005 when ν = 1.0.

Thus, the variability of the gradient norms is mainly determined by the variance of the entries of θ⋆. For each i ∈ [n], we can view fi as a separate univariate quadratic function with minimizer θ⋆[supp(i)], and the variance of the entries of θ⋆ can be understood as the concept-shift. In this case, we expect that sampling from p^IS will not perform as well as oracle optimal sampling or adaptive sampling. We implement Optimal-LSVRG, IS-LSVRG, and AS-LSVRG with the stochastic gradient obtained from a mini-batch of size 5, rather than choosing only one random sample, to allow adaptive sampling to explore more efficiently.¹ The step size is set to 0.3. Figure 4 presents the results. We see that as ν increases, the gap between oracle optimal sampling and sampling from p^IS increases as well, due to the concept-shift.
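The sparse design of this example can be reproduced in a few lines; note that ∥aᵢ∥² ≈ 1 for every i, so p^IS is nearly uniform. A sketch (the seed is illustrative and the helper name is ours):

```python
import random

def sparse_design(n, d, seed=0):
    """Each row a_i has a single nonzero coordinate drawn from N(1.0, 0.1^2),
    so L_i = ||a_i||^2 = a_i[supp(i)]^2 is approximately 1 for all i."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        a = [0.0] * d
        a[rng.randrange(d)] = rng.gauss(1.0, 0.1)  # supp(i) chosen uniformly
        rows.append(a)
    return rows

A = sparse_design(n=300, d=30)
lipschitz = [sum(v * v for v in a) for a in A]  # all close to 1
```

Since the Lᵢ are nearly identical, an importance-sampling distribution built from them carries almost no information about which component currently has a large gradient.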
In addition, we see that adaptive sampling also performs better than sampling from p^IS, despite the fact that it does not use prior knowledge, since adaptive sampling can asymptotically approximate oracle optimal sampling.

¹ AdaOSMD relies on the feedback obtained by exploration to update the sampling distribution. A larger batch size allows adaptive sampling to explore more efficiently (in other words, to 'see' more samples in each iteration). Compared with a fixed sampling distribution, where a larger batch size only reduces the variance of the stochastic gradient, a larger batch size also helps adaptive sampling make faster updates of the sampling distribution. Therefore, the adaptive sampling strategy is generally more sensitive to the batch size than sampling with a fixed distribution.

![11_image_0.png](11_image_0.png)

Figure 2: Comparison of four methods: L-SVRG, Optimal-LSVRG, IS-LSVRG, AS-LSVRG. Columns correspond to different σ values, and rows correspond to different ν values. The stepsize is the same for all algorithms: 0.1 when ν = 0, 0.05 when ν = 0.5, and 0.005 when ν = 1.0.

## 4.4 Nonconvex Objective

In this section, we compare L-SVRG, IS-LSVRG, and AS-LSVRG with nonconvex objectives under a setting similar to that of Section 4.2. We increase d to 100 and n to 1000. Instead of fitting the data with linear regression, we use a two-layer neural network with 10 neurons in the hidden layer. While we still minimize the mean squared error loss, the objective function is now nonconvex due to the nonconvexity of the neural network model. To estimate p^IS, we still set $p_{i}^{IS}=\|a_{i}\|^{2}/(\sum_{j=1}^{n}\|a_{j}\|^{2})$ as in Section 4.2. For AS-LSVRG, we use the OSMD sampler (Algorithm 3). Both the optimization step size and the learning rate of the OSMD sampler are tuned such that AS-LSVRG converges at the fastest speed. The results are shown in Figure 5.
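The mini-batch sampling discussed in the footnote above is a one-liner with the standard library; a sketch of with-replacement sampling from the current distribution pᵗ (the probabilities, batch size, and seed are illustrative):

```python
import random

def sample_minibatch(p, batch_size, seed=0):
    """Draw `batch_size` indices i with probability p_i, with replacement,
    as when the stochastic gradient is built from a mini-batch."""
    rng = random.Random(seed)
    return rng.choices(range(len(p)), weights=p, k=batch_size)

batch = sample_minibatch([0.7, 0.2, 0.1], batch_size=5)
```

Each sampled index provides bandit feedback for the sampler, so a batch of size 5 lets the adaptive distribution update five times as much information per iteration as a batch of size 1.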
We see that adaptive sampling still obtains an advantage over uniform sampling and importance sampling, especially when the smoothness heterogeneity is large. It is worth noting that p^IS does not perform well in this case. We suspect that this is because ∥ai∥² is a poor estimate of Li here; however, it is unclear whether there exists an easy way to accurately estimate Li with nonconvex models. This result further justifies the motivation for adaptive sampling, since it can achieve advantageous performance over uniform sampling without the need to estimate the smoothness constants.

![12_image_0.png](12_image_0.png)

Figure 3: Comparison of four methods: L-Katyusha, Optimal-LKatyusha, IS-LKatyusha, AS-LKatyusha. Columns correspond to different σ values, and rows correspond to different ν values. The stepsizes are set based on Theorem 3.

![12_image_1.png](12_image_1.png)

Figure 4: Optimal-LSVRG v.s. IS-LSVRG v.s. AS-LSVRG. Columns correspond to different ν values.

![13_image_0.png](13_image_0.png)

Figure 5: Comparison of L-SVRG, IS-LSVRG, and AS-LSVRG with a nonconvex objective. Columns correspond to different σ values, and rows correspond to different ν values. The stepsize of each method is tuned such that the method converges at the fastest speed.

## 5 Real Data Experiment

We use the w8a dataset from the LibSVM classification tasks (Zeng et al., 2008; Chang & Lin, 2011). On a real dataset, obtaining the theoretically optimal sampling distribution is infeasible, while constructing p^IS requires access to the Lipschitz constants of each loss function. Therefore, here we only show the performance of L-SVRG and AS-LSVRG on the following logistic regression problem:

$$\min_{x\in\mathbb{R}^{d}}\;-\frac{1}{n}\sum_{i=1}^{n}(y_{i}\log p_{i}+(1-y_{i})\log(1-p_{i})),$$

where $p_{i}=p_{i}(x)=(1+\exp(-x^{\top}a_{i}))^{-1}$, yi ∈ {0, 1} is the response variable, and ai is the d-dimensional feature vector. The stepsizes for both L-SVRG and AS-LSVRG are initially tuned over the grid {10^-2, 10^-1.5, ...
, 10^2}. The initial search showed that the optimal stepsize lies in the interval (0, 1). Therefore, we tune the stepsizes over a grid of 20 evenly spaced points on [0.05, 1]. The two algorithms are then used to train the model for 1000 iterations, repeated 10 times, and the best stepsize is chosen as the one that yields the lowest loss at the 1000-th iteration.

![14_image_0.png](14_image_0.png)

Figure 6: L-SVRG v.s. AS-LSVRG. Columns correspond to different batch sizes.

![14_image_1.png](14_image_1.png)

Figure 7: L-Katyusha v.s. AS-LKatyusha. Columns correspond to different batch sizes. The stepsizes are set according to Theorem 3.2 of Qian et al. (2021) and Theorem 3 in this paper.

Figure 6 shows the average log cross-entropy loss over 10 runs against the number of iterations. The shaded region corresponds to the standard deviation of the loss. When the batch size is 1, AS-LSVRG and L-SVRG have similar convergence behaviour, but the standard deviation is reduced for AS-LSVRG. When the batch size is 5, AS-LSVRG significantly outperforms L-SVRG.

We illustrate the performance of L-Katyusha and AS-LKatyusha by solving the following ℓ2-regularized optimization problem:

$$\min_{x\in\mathbb{R}^{d}}\;-\frac{1}{n}\sum_{i=1}^{n}(y_{i}\log p_{i}+(1-y_{i})\log(1-p_{i}))+\frac{\mu}{2}\|x\|^{2},$$

where pi = pi(x) has the same form as before and µ = 10⁻⁷ to ensure that the problem is strongly convex. Figure 7 shows results over 10 runs. AS-LKatyusha significantly outperforms its uniform sampling counterpart. While some of the improvement in performance could be attributed to our superior dependence on the Lipschitz constant, the losses we obtain also enjoy slightly reduced variances.

## 6 Conclusion and Future Directions

We studied the convergence behavior of L-SVRG and L-Katyusha when non-uniform sampling with a dynamic sampling distribution is used.
Compared to previous research, we do not restrict ourselves to a fixed sampling distribution but allow it to change across iterations. This flexibility enables us to design the sampling distribution adaptively using the feedback from sampled observations. We do not need prior information, which can be computationally expensive to obtain in practice, to design a well-performing sampling distribution; therefore, our algorithm is practically useful. We derive upper bounds on the convergence rate for any sampling distribution sequence for both L-SVRG and L-Katyusha under commonly used assumptions. Our theoretical results justify the usage of online learning to design the sequence of sampling distributions. More interestingly, our theory also explains when adaptive sampling with no prior knowledge can perform better than a fixed sampling distribution designed using prior knowledge. Extensive experiments on both synthetic and real data demonstrate our theoretical findings and illustrate the practical value of the methodology.

We plan to extend the adaptive sampling strategy to a broader class of stochastic optimization algorithms, for example, stochastic coordinate descent (Zhu et al., 2016) and stochastic nonconvex optimization algorithms (Fang et al., 2018). In addition, exploring adaptive sampling with second-order methods, such as the stochastic quasi-Newton method (Byrd et al., 2016), could be a fruitful future direction.

## References

Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. *Journal of Machine Learning Research*, 18:221:1–221:51, 2017.

Zalán Borsos, Andreas Krause, and Kfir Y. Levy. Online variance reduction for stochastic optimization. In *Conference on Learning Theory*, 2018.

Zalán Borsos, Sebastian Curi, Kfir Yehuda Levy, and Andreas Krause. Online variance reduction with mixtures. In *International Conference on Machine Learning*, 2019.

Léon Bottou, Frank E. Curtis, and Jorge Nocedal.
Optimization methods for large-scale machine learning. *SIAM Review*, 60(2):223–311, 2018.

Richard H. Byrd, S. L. Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-Newton method for large-scale optimization. *SIAM Journal on Optimization*, 26(2):1008–1031, 2016.

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. *ACM Transactions on Intelligent Systems and Technology (TIST)*, 2(3):1–27, 2011.

Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In *Advances in Neural Information Processing Systems*, 2014.

Vladimir Estivill-Castro and Derick Wood. A survey of adaptive sorting algorithms. *ACM Computing Surveys (CSUR)*, 24(4):441–476, 1992.

Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. In *Advances in Neural Information Processing Systems*, 2018.

Ayoub El Hanchi and David A. Stephens. Adaptive importance sampling for finite-sum optimization and sampling with decreasing step-sizes. In *Advances in Neural Information Processing Systems*, 2020.

Filip Hanzely and Peter Richtárik. Accelerated coordinate descent with arbitrary sampling and best rates for minibatches. In *International Conference on Artificial Intelligence and Statistics*, 2019.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In *Advances in Neural Information Processing Systems*, 2013.

Dmitry Kovalev, Samuel Horváth, and Peter Richtárik. Don't jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop. In *Algorithmic Learning Theory*, 2020.

Tor Lattimore and Csaba Szepesvári. *Bandit Algorithms*. Cambridge University Press, 2020.

Hongseok Namkoong, Aman Sinha, Steve Yadlowsky, and John C. Duchi. Adaptive sampling probabilities for non-smooth optimization.
In *International Conference on Machine Learning*, 2017.

Deanna Needell, Nathan Srebro, and Rachel Ward. Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm. *Mathematical Programming*, 155(1-2):549–573, 2016.

Yurii Nesterov. *Lectures on Convex Optimization*. Springer, 2018.

Xun Qian, Zheng Qu, and Peter Richtárik. SAGA with arbitrary sampling. In *International Conference on Machine Learning*, 2019.

Xun Qian, Zheng Qu, and Peter Richtárik. L-SVRG and L-Katyusha with arbitrary sampling. *Journal of Machine Learning Research*, 22(112):1–47, 2021.

Herbert Robbins and Sutton Monro. A stochastic approximation method. *Annals of Mathematical Statistics*, 22:400–407, 1951.

Farnood Salehi, L. Elisa Celis, and Patrick Thiran. Stochastic optimization with bandit sampling. *arXiv preprint arXiv:1708.02544*, 2017.

Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. *Mathematical Programming*, 162(1-2):83–112, 2017.

Zebang Shen, Hui Qian, Tengfei Zhou, and Tongzhou Mu. Adaptive variance reducing for stochastic gradient descent. In *International Joint Conference on Artificial Intelligence*, 2016.

Zhi-Qiang Zeng, Hong-Bin Yu, Hua-Rong Xu, Yan-Qi Xie, and Ji Gao. Fast training support vector machines using parallel sequential minimal optimization. In *International Conference on Intelligent System and Knowledge Engineering*, 2008.

Boxin Zhao, Ziqi Liu, Chaochao Chen, Mladen Kolar, Zhiqiang Zhang, and Jun Zhou. Adaptive client sampling in federated learning via online learning with bandit feedback. *arXiv preprint arXiv:2112.14332*, 2021.

Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In *International Conference on Machine Learning*, 2015.

Zeyuan Allen Zhu, Zheng Qu, Peter Richtárik, and Yang Yuan. Even faster accelerated coordinate descent using non-uniform sampling. In *International Conference on Machine Learning*, 2016.
## A Proof of Main Theorems

## A.1 Proof of Theorem 1

We use the proof technique from Theorem 5 of Kovalev et al. (2020). The key step is to decompose the variance of the stochastic gradient. Let $\mathcal{F}_{t}=\sigma(x^{0},w^{0},x^{1},w^{1},\cdots,x^{t},w^{t})$ be the σ-algebra generated by $x^{0},w^{0},x^{1},w^{1},\cdots,x^{t},w^{t}$, and let $\mathbb{E}_{t}[\cdot]:=\mathbb{E}[\,\cdot\mid\mathcal{F}_{t}]$ be the conditional expectation given $\mathcal{F}_{t}$. Note that $\mathbb{E}_{t}[g^{t}]=\nabla F(x^{t})$. By Assumption 3, we have

$$\begin{aligned}
\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]&=\mathbb{E}_{t}\left[\left\|x^{t}-\eta g^{t}-x^{\star}\right\|^{2}\right]\\
&=\left\|x^{t}-x^{\star}\right\|^{2}-2\eta\left\langle\nabla F(x^{t}),x^{t}-x^{\star}\right\rangle+\eta^{2}\mathbb{E}_{t}\left[\left\|g^{t}\right\|^{2}\right]\\
&\leq\left\|x^{t}-x^{\star}\right\|^{2}-2\eta\left(F(x^{t})-F(x^{\star})+\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}\right)+\eta^{2}\mathbb{E}_{t}\left[\left\|g^{t}\right\|^{2}\right]\\
&=\left(1-\eta\mu\right)\left\|x^{t}-x^{\star}\right\|^{2}-2\eta\left(F(x^{t})-F(x^{\star})\right)+\eta^{2}\mathbb{E}_{t}\left[\left\|g^{t}\right\|^{2}\right].
\end{aligned}\tag{13}$$

Furthermore, we have

$$\begin{aligned}
\mathbb{E}_{t}\left[\left\|g^{t}\right\|^{2}\right]&=\mathbb{E}_{t}\left[\left\|g^{t}-\mathbb{E}_{t}\left[g^{t}\right]\right\|^{2}\right]+\left\|\mathbb{E}_{t}\left[g^{t}\right]\right\|^{2}\\
&=V_{e}^{t}\left(\mathbf{p}^{t}\right)-\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2}+\left\|\nabla F(x^{t})\right\|^{2}\\
&=V_{e}^{t}\left(\mathbf{p}^{IS}\right)-\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2}+\left\|\nabla F(x^{t})\right\|^{2}+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\\
&=\frac{\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}-\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2}+\left\|\nabla F(x^{t})\right\|^{2}+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\\
&\leq\frac{\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}+\left\|\nabla F(x^{t})\right\|^{2}+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right).
\end{aligned}\tag{14}$$

By Assumptions 1 and 2, F(·) is convex and LF-smooth, so

$$\left\|\nabla F(x^{t})\right\|^{2}=\left\|\nabla F(x^{t})-\nabla F(x^{\star})\right\|^{2}\leq2L_{F}\left(F(x^{t})-F(x^{\star})\right).\tag{15}$$

With $\mathcal{D}^{t}$ defined in (10), we have

$$\begin{aligned}
\frac{\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}&\leq\frac{2\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(x^{\star})\right\|^{2}+\frac{2\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}\\
&\leq\frac{2\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}(2L_{i})\left(f_{i}(x^{t})-f_{i}(x^{\star})-\left\langle\nabla f_{i}(x^{\star}),x^{t}-x^{\star}\right\rangle\right)+2\bar{L}\mathcal{D}^{t}\\
&=4\bar{L}\left(F(x^{t})-F(x^{\star})\right)+2\bar{L}\mathcal{D}^{t}.
\end{aligned}\tag{16}$$

Combining (14)–(16), we have

$$\mathbb{E}_{t}\left[\left\|g^{t}\right\|^{2}\right]\leq4\bar{L}\left(F(x^{t})-F(x^{\star})\right)+2L_{F}\left(F(x^{t})-F(x^{\star})\right)+2\bar{L}\mathcal{D}^{t}+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right).\tag{17}$$

Combining (17) and (13), we have

$$\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]\leq\left(1-\eta\mu\right)\left\|x^{t}-x^{\star}\right\|^{2}-2\eta(1-2\eta\bar{L}-\eta L_{F})\left(F(x^{t})-F(x^{\star})\right)+2\eta^{2}\bar{L}\mathcal{D}^{t}+\eta^{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}.$$

Using Lemma 4, for any β > 0, we have

$$\begin{aligned}
\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]+\beta\,\mathbb{E}_{t}\left[\mathcal{D}^{t+1}\right]&\leq\left(1-\eta\mu\right)\left\|x^{t}-x^{\star}\right\|^{2}-\left(2\eta(1-2\eta\bar{L}-\eta L_{F})-2\beta\rho\right)\left(F(x^{t})-F(x^{\star})\right)\\
&\quad+\left(2\eta^{2}\bar{L}+\beta(1-\rho)\right)\mathcal{D}^{t}+\eta^{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}.
\end{aligned}$$

With β = 4η²L̄/ρ, we have

$$\begin{aligned}
\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]+\frac{4\eta^{2}\bar{L}}{\rho}\mathbb{E}_{t}\left[\mathcal{D}^{t+1}\right]&\leq\left(1-\eta\mu\right)\left\|x^{t}-x^{\star}\right\|^{2}-2\eta(1-6\eta\bar{L}-\eta L_{F})\left(F(x^{t})-F(x^{\star})\right)\\
&\quad+\frac{4\eta^{2}\bar{L}}{\rho}\left(1-\frac{\rho}{2}\right)\mathcal{D}^{t}+\eta^{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}.
\end{aligned}$$

Since η ≤ 1/(6L̄ + LF), we further have

$$\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]+\frac{4\eta^{2}\bar{L}}{\rho}\mathbb{E}_{t}\left[\mathcal{D}^{t+1}\right]\leq\left(1-\eta\mu\right)\left\|x^{t}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\bar{L}}{\rho}\left(1-\frac{\rho}{2}\right)\mathcal{D}^{t}+\eta^{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}.$$

Recalling that

$$\alpha_{1}:=\max\left\{1-\eta\mu,\,1-\frac{\rho}{2}\right\},$$

we have

$$\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\bar{L}}{\rho}\mathcal{D}^{t+1}\right]\leq\alpha_{1}\left(\left\|x^{t}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\bar{L}}{\rho}\mathcal{D}^{t}\right)+\eta^{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}.$$

Taking the full expectation on both sides and recursively applying the above relationship from t = T − 1 down to t = 0, we have

$$\begin{aligned}
\mathbb{E}\left[\left\|x^{T}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\bar{L}}{\rho}\mathcal{D}^{T}\right]&\leq\alpha_{1}\,\mathbb{E}\left[\left\|x^{T-1}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\bar{L}}{\rho}\mathcal{D}^{T-1}\right]+\eta^{2}\,\mathbb{E}\left[V_{e}^{T-1}\left(\mathbf{p}^{T-1}\right)-V_{e}^{T-1}\left(\mathbf{p}^{IS}\right)\right]\\
&\leq\alpha_{1}^{T}\,\mathbb{E}\left[\left\|x^{0}-x^{\star}\right\|^{2}+\frac{4\eta^{2}\bar{L}}{\rho}\mathcal{D}^{0}\right]+\eta^{2}\sum_{t=0}^{T}\alpha_{1}^{T-t}\,\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].
\end{aligned}$$

## A.2 Proof of Theorem 2

We use the technique from Theorem 17 of Qian et al. (2021). The key difference here is the decomposition of the variance of the stochastic gradient. Let

$$\Xi^{t}:=\frac{1}{2\eta_{t}}\left\|x^{t}-x^{\star}\right\|^{2}+\frac{6\eta_{t}\bar{L}(1-\rho)}{5\rho}\mathcal{D}^{t}.\tag{18}$$

Let $\mathcal{F}_{t}=\sigma(x^{0},w^{0},x^{1},w^{1},\cdots,x^{t},w^{t})$ be the σ-algebra generated by $x^{0},w^{0},x^{1},w^{1},\cdots,x^{t},w^{t}$, and let $\mathbb{E}_{t}[\cdot]:=\mathbb{E}[\,\cdot\mid\mathcal{F}_{t}]$ be the conditional expectation given $\mathcal{F}_{t}$. Note that $\mathbb{E}_{t}[g^{t}]=\nabla F(x^{t})$. We have

$$\begin{aligned}
F(x^{\star})&\geq F(x^{t})+\left\langle\nabla F(x^{t}),x^{\star}-x^{t}\right\rangle\\
&=F(x^{t})+\mathbb{E}_{t}\left[\left\langle g^{t},x^{\star}-x^{t}\right\rangle\right]\\
&=F(x^{t})+\mathbb{E}_{t}\left[\left\langle g^{t},x^{\star}-x^{t+1}\right\rangle\right]+\mathbb{E}_{t}\left[\left\langle g^{t},x^{t+1}-x^{t}\right\rangle\right]\\
&=F(x^{t})+\mathbb{E}_{t}\left[\left\langle g^{t},x^{\star}-x^{t+1}\right\rangle\right]+\mathbb{E}_{t}\left[\left\langle g^{t}-\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\right]+\mathbb{E}_{t}\left[\left\langle\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\right].
\end{aligned}\tag{19}$$

By Assumptions 1 and 2, we have

$$F(x^{t+1})-F(x^{t})-\left\langle\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\leq\frac{L_{F}}{2}\left\|x^{t+1}-x^{t}\right\|^{2}.$$

Thus,

$$F(x^{t})+\left\langle\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\geq F(x^{t+1})-\frac{L_{F}}{2}\left\|x^{t+1}-x^{t}\right\|^{2}.$$

Combined with (19), we have

$$F(x^{\star})\geq\mathbb{E}_{t}\left[F(x^{t+1})\right]-\frac{L_{F}}{2}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{t}\right\|^{2}\right]+\mathbb{E}_{t}\left[\left\langle g^{t}-\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\right]+\mathbb{E}_{t}\left[\left\langle g^{t},x^{\star}-x^{t+1}\right\rangle\right].\tag{20}$$

Since ⟨a, b⟩ ≤ (1/(2β))∥a∥² + (β/2)∥b∥² for all a, b ∈ R^d and β > 0 by Young's inequality, we have

$$\mathbb{E}_{t}\left[\left\langle g^{t}-\nabla F(x^{t}),x^{t}-x^{t+1}\right\rangle\right]\leq\frac{\beta}{2}\mathbb{E}_{t}\left[\left\|g^{t}-\nabla F(x^{t})\right\|^{2}\right]+\frac{1}{2\beta}\mathbb{E}_{t}\left[\left\|x^{t}-x^{t+1}\right\|^{2}\right],\quad\beta>0.$$

Equivalently,

$$\mathbb{E}_{t}\left[\left\langle g^{t}-\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\right]\geq-\frac{\beta}{2}\mathbb{E}_{t}\left[\left\|g^{t}-\nabla F(x^{t})\right\|^{2}\right]-\frac{1}{2\beta}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{t}\right\|^{2}\right],\quad\beta>0.$$
By Lemma 3, we then have

$$\begin{aligned}
\mathbb{E}_{t}\left[\left\langle g^{t}-\nabla F(x^{t}),x^{t+1}-x^{t}\right\rangle\right]&\geq-2\beta\bar{L}\left(F(x^{t})-F(x^{\star})\right)-\frac{\beta\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}\\
&\quad-\frac{\beta}{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}-\frac{1}{2\beta}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{t}\right\|^{2}\right].
\end{aligned}\tag{21}$$

Combining (20)–(21) and noting that

$$\left\langle g^{t},x^{\star}-x^{t+1}\right\rangle=\frac{1}{\eta}\left\langle x^{t}-x^{t+1},x^{\star}-x^{t+1}\right\rangle=\frac{1}{2\eta}\left\|x^{t}-x^{t+1}\right\|^{2}+\frac{1}{2\eta}\left\|x^{t+1}-x^{\star}\right\|^{2}-\frac{1}{2\eta}\left\|x^{t}-x^{\star}\right\|^{2},$$

we have

$$\begin{aligned}
F(x^{\star})&\geq\mathbb{E}_{t}\left[F(x^{t+1})\right]+\left(\frac{1}{2\eta}-\frac{L_{F}}{2}-\frac{1}{2\beta}\right)\mathbb{E}_{t}\left[\left\|x^{t}-x^{t+1}\right\|^{2}\right]+\frac{1}{2\eta}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]-\frac{1}{2\eta}\left\|x^{t}-x^{\star}\right\|^{2}\\
&\quad-2\beta\bar{L}\left(F(x^{t})-F(x^{\star})\right)-\frac{\beta\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}-\frac{\beta}{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}.
\end{aligned}$$

Therefore,

$$\begin{aligned}
2\beta\bar{L}\left(F(x^{t})-F(x^{\star})\right)&+\frac{\beta}{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}+\frac{1}{2\eta}\left\|x^{t}-x^{\star}\right\|^{2}\\
&\geq\mathbb{E}_{t}\left[F(x^{t+1})\right]-F(x^{\star})+\frac{1}{2}\left(\frac{1}{\eta}-L_{F}-\frac{1}{\beta}\right)\mathbb{E}_{t}\left[\left\|x^{t}-x^{t+1}\right\|^{2}\right]\\
&\quad+\frac{1}{2\eta}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]-\frac{\beta\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}.
\end{aligned}$$

Then, by the definition of $\mathcal{D}^{t}$ in (10) and Lemma 4, for any α > 0, we have

$$\begin{aligned}
2\left(\beta\bar{L}+\alpha\rho\right)\left(F(x^{t})-F(x^{\star})\right)&+\frac{\beta}{2}\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}+\frac{1}{2\eta}\left\|x^{t}-x^{\star}\right\|^{2}+\alpha(1-\rho)\mathcal{D}^{t}\\
&\geq\mathbb{E}_{t}\left[F(x^{t+1})\right]-F(x^{\star})+\frac{1}{2}\left(\frac{1}{\eta}-L_{F}-\frac{1}{\beta}\right)\mathbb{E}_{t}\left[\left\|x^{t}-x^{t+1}\right\|^{2}\right]\\
&\quad+\frac{1}{2\eta}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]+\left(\alpha-\beta\bar{L}\right)\mathbb{E}_{t}\left[\mathcal{D}^{t+1}\right].
\end{aligned}$$

Let β = (6/5)η and α = βL̄/ρ = 6ηL̄/(5ρ). Since η ≤ 1/(6LF), we have

$$\frac{1}{\eta}-L_{F}-\frac{1}{\beta}=\frac{1}{6\eta}-L_{F}\geq0,$$

so the corresponding term can be dropped from the right-hand side. Then

$$\begin{aligned}
\frac{4}{5}\left(F(x^{t})-F(x^{\star})\right)&+\frac{3}{5}\eta\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}+\frac{1}{2\eta}\left\|x^{t}-x^{\star}\right\|^{2}+\frac{6\eta\bar{L}(1-\rho)}{5\rho}\mathcal{D}^{t}\\
&\geq\frac{24}{5}\eta\bar{L}\left(F(x^{t})-F(x^{\star})\right)+\frac{3}{5}\eta\left\{V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right\}+\frac{1}{2\eta}\left\|x^{t}-x^{\star}\right\|^{2}+\frac{6\eta\bar{L}(1-\rho)}{5\rho}\mathcal{D}^{t}\\
&\geq\mathbb{E}_{t}\left[F(x^{t+1})-F(x^{\star})\right]+\frac{1}{2\eta}\mathbb{E}_{t}\left[\left\|x^{t+1}-x^{\star}\right\|^{2}\right]+\frac{6\eta\bar{L}(1-\rho)}{5\rho}\mathbb{E}_{t}\left[\mathcal{D}^{t+1}\right].
\end{aligned}$$
From the definition of $\Xi^{t}$ in (18), we have
$$\mathbb{E}_{t}\left[F(x^{t+1})-F(x^{\star})\right]+\mathbb{E}_{t}\left[\Xi^{t+1}\right]-\Xi^{t}\leq\frac{4}{5}\left(F(x^{t})-F(x^{\star})\right)+\frac{3}{5}\eta\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right).$$
Taking the full expectation on both sides and recursively repeating the above relationship from $t = T$ to $t = 0$, we have
$$\sum_{t=0}^{T}\mathbb{E}\left[F(x^{t+1})-F(x^{\star})+\Xi^{t+1}-\Xi^{t}\right]\leq\frac{4}{5}\sum_{t=0}^{T}\mathbb{E}\left[F(x^{t})-F(x^{\star})\right]+\frac{3}{5}\eta\sum_{t=0}^{T}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right],$$
which implies that
$$\frac{1}{5}\sum_{t=1}^{T}\mathbb{E}\left[F(x^{t})-F(x^{\star})\right]\leq\mathbb{E}\left[F(x^{T+1})-F(x^{\star})+\Xi^{T+1}\right]+\frac{1}{5}\sum_{t=1}^{T}\mathbb{E}\left[F(x^{t})-F(x^{\star})\right]$$
$$\leq\frac{4}{5}\left(F(x^{0})-F(x^{\star})\right)+\Xi^{0}+\frac{3}{5}\eta\sum_{t=0}^{T}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$
By convexity of $F(\cdot)$ and since $\hat{x}^{T}=(1/T)\sum_{t=1}^{T}x^{t}$, we have
$$\mathbb{E}\left[F(\hat{x}^{T})-F(x^{\star})\right]\leq\frac{4}{T}\left(F(x^{0})-F(x^{\star})\right)+\frac{5\Xi^{0}}{T}+\frac{3\eta}{T}\sum_{t=0}^{T}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$
Finally, by (25) of Lemma 2, we have
$$\Xi^{0}=\frac{1}{2\eta}\left\|x^{0}-x^{\star}\right\|^{2}+\frac{6\eta\bar{L}(1-\rho)}{5\rho}\mathcal{D}^{0}$$
$$\leq\frac{1}{2\eta}\left\|x^{0}-x^{\star}\right\|^{2}+\frac{6\eta\bar{L}(1-\rho)}{5\rho}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}(2L_{i})\left(f_{i}(w^{0})-f_{i}(x^{\star})-\left\langle\nabla f_{i}(x^{\star}),w^{0}-x^{\star}\right\rangle\right)$$
$$=\frac{1}{2\eta}\left\|x^{0}-x^{\star}\right\|^{2}+\frac{12\eta\bar{L}(1-\rho)}{5\rho}\left(F(w^{0})-F(x^{\star})\right).$$
Thus, we have
$$\mathbb{E}\left[F(\hat{x}^{T})-F(x^{\star})\right]\leq\frac{4}{T}\left(F(x^{0})-F(x^{\star})\right)+\frac{5}{T}\left\{\frac{1}{2\eta}\left\|x^{0}-x^{\star}\right\|^{2}+\frac{12\eta\bar{L}(1-\rho)}{5\rho}\left(F(w^{0})-F(x^{\star})\right)\right\}+\frac{3\eta}{T}\sum_{t=0}^{T}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$

## A.3 Proof Of Theorem 3

We use the proof technique of Theorem 11 in Kovalev et al. (2020). The key step is to decompose the variance of the stochastic gradient. We let $\mathcal{F}_{t}=\sigma(x^{0},w^{0},v^{0},z^{0},\cdots,x^{t},w^{t},v^{t},z^{t})$ be the $\sigma$-algebra generated by $x^{0},w^{0},v^{0},z^{0},\cdots,x^{t},w^{t},v^{t},z^{t}$, and let $\mathbb{E}_{t}[\cdot]:=\mathbb{E}[\,\cdot\mid\mathcal{F}_{t}]$ be the conditional expectation given $\mathcal{F}_{t}$.
By Assumption 3, we have
$$F(x^{\star})\geq F(x^{t})+\left\langle\nabla F(x^{t}),x^{\star}-x^{t}\right\rangle+\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}\tag{22}$$
$$=F(x^{t})+\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}+\left\langle\nabla F(x^{t}),x^{\star}-z^{t}\right\rangle+\left\langle\nabla F(x^{t}),z^{t}-x^{t}\right\rangle.$$
Note that
$$x^{t}=\theta_{1}z^{t}+\theta_{2}w^{t}+(1-\theta_{1}-\theta_{2})v^{t}.$$
Thus
$$z^{t}={\frac{1}{\theta_{1}}}x^{t}-{\frac{\theta_{2}}{\theta_{1}}}w^{t}-{\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}}v^{t}$$
and
$$z^{t}-x^{t}=\frac{1-\theta_{1}}{\theta_{1}}x^{t}-\frac{\theta_{2}}{\theta_{1}}w^{t}-\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}v^{t}=\frac{\theta_{2}}{\theta_{1}}\left(x^{t}-w^{t}\right)+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left(x^{t}-v^{t}\right).$$
Since $\mathbb{E}_{t}[g^{t}]=\nabla F(x^{t})$, combining the above relationships with (22), we have
$$F(x^{\star})\geq F(x^{t})+\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}+\left\langle\nabla F(x^{t}),x^{\star}-z^{t}\right\rangle+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle$$
$$=F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}+\left\langle g^{t},x^{\star}-z^{t}\right\rangle\right]$$
$$=F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}+\left\langle g^{t},x^{\star}-z^{t+1}\right\rangle+\left\langle g^{t},z^{t+1}-z^{t}\right\rangle\right].$$
By Lemma 5, we have
$$\left\langle g^{t},x^{\star}-z^{t+1}\right\rangle+\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}\geq\frac{\bar{L}}{2\eta}\left\|z^{t}-z^{t+1}\right\|^{2}+\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}.$$
Thus
$$F(x^{\star})\geq F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle$$
$$+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]+\mathbb{E}_{t}\left[\left\langle g^{t},z^{t+1}-z^{t}\right\rangle+\frac{\bar{L}}{2\eta}\left\|z^{t}-z^{t+1}\right\|^{2}\right].\tag{23}$$
By Lemma 6, we have
$$\frac{\bar{L}}{2\eta}\left\|z^{t+1}-z^{t}\right\|^{2}+\left\langle g^{t},z^{t+1}-z^{t}\right\rangle\geq\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)-\frac{\eta}{2\bar{L}(1-\eta\theta_{1})}\left\|g^{t}-\nabla F(x^{t})\right\|^{2}.$$
Note that $\eta=\frac{\theta_{2}}{(1+\theta_{2})\theta_{1}}$. Thus $\frac{\eta}{2\bar{L}(1-\eta\theta_{1})}=\frac{\theta_{2}}{2\bar{L}\theta_{1}}$. Then, by (23), we have
$$F(x^{\star})\geq F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]+\mathbb{E}_{t}\left[\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left\|g^{t}-\nabla F(x^{t})\right\|^{2}\right]$$
$$=F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]+\mathbb{E}_{t}\left[\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)\right]-\frac{\theta_{2}}{2\bar{L}\theta_{1}}V_{e}^{t}\left(\mathbf{p}^{t}\right)+\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2}$$
$$\geq F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]+\mathbb{E}_{t}\left[\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)\right]-\frac{\theta_{2}}{2\bar{L}\theta_{1}}V_{e}^{t}\left(\mathbf{p}^{t}\right)$$
$$=F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]+\mathbb{E}_{t}\left[\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)\right]-\frac{\theta_{2}}{2\bar{L}\theta_{1}}V_{e}^{t}\left(\mathbf{p}^{IS}\right)-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right)$$
$$=F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]+\mathbb{E}_{t}\left[\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)\right]-\frac{\theta_{2}}{2\theta_{1}n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right).$$
By Assumptions 1 and 2 and Lemma 2, we have
$$\frac{1}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}\leq\frac{1}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}(2L_{i})\left(f_{i}(w^{t})-f_{i}(x^{t})-\left\langle\nabla f_{i}(x^{t}),w^{t}-x^{t}\right\rangle\right)$$
$$=2\left(F(w^{t})-F(x^{t})-\left\langle\nabla F(x^{t}),w^{t}-x^{t}\right\rangle\right).$$
On the other hand, note that $\langle\nabla F(x^{t}),x^{t}-v^{t}\rangle\geq F(x^{t})-F(v^{t})$. Thus, we further have
$$F(x^{\star})\geq F(x^{t})+\frac{\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-w^{t}\right\rangle+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left\langle\nabla F(x^{t}),x^{t}-v^{t}\right\rangle+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}\right]$$
$$+\mathbb{E}_{t}\left[\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)\right]-\frac{\theta_{2}}{\theta_{1}}\left(F(w^{t})-F(x^{t})-\left\langle\nabla F(x^{t}),w^{t}-x^{t}\right\rangle\right)-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right)$$
$$\geq F(x^{t})+\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left(F(x^{t})-F(v^{t})\right)-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}-\frac{\theta_{2}}{\theta_{1}}\left(F(w^{t})-F(x^{t})\right)+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}+\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)\right]-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right)$$
$$=-\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}F(v^{t})-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}-\frac{\theta_{2}}{\theta_{1}}F(w^{t})+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}+\frac{1}{\theta_{1}}F(v^{t+1})\right]-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right)$$
$$=F(x^{\star})-\frac{1-\theta_{1}-\theta_{2}}{\theta_{1}}\left(F(v^{t})-F(x^{\star})\right)-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}-\frac{\theta_{2}}{\theta_{1}}\left(F(w^{t})-F(x^{\star})\right)+\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}+\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{\star})\right)\right]-\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right).$$
Recalling the definition of $\mathcal{V}^{t}$ in (12), we have
$$\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}+\mathcal{V}^{t+1}\right]\leq(1-\theta_{1}-\theta_{2})\mathcal{V}^{t}+\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}+\frac{\theta_{2}}{\theta_{1}}\left(F(w^{t})-F(x^{\star})\right)+\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right).$$
Since
$$\mathbb{E}_{t}\left[F(w^{t+1})-F(x^{\star})\right]=\left(1-\rho\right)\left(F(w^{t})-F(x^{\star})\right)+\rho\left(F(v^{t})-F(x^{\star})\right)=\left(1-\rho\right)\left(F(w^{t})-F(x^{\star})\right)+\theta_{1}\rho\mathcal{V}^{t},$$
recalling the definition of $\mathcal{W}^{t}$ in (12), we have
$$\mathbb{E}_{t}\left[\mathcal{Z}^{t+1}+\mathcal{V}^{t+1}+\mathcal{W}^{t+1}\right]\leq(1-\theta_{1}-\theta_{2})\mathcal{V}^{t}+\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}+\frac{\theta_{2}}{\theta_{1}}\left(F(w^{t})-F(x^{\star})\right)+\frac{\theta_{2}(1+\theta_{1})}{\rho\theta_{1}}\left\{(1-\rho)\left(F(w^{t})-F(x^{\star})\right)+\theta_{1}\rho\mathcal{V}^{t}\right\}+\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right)$$
$$=\frac{1}{1+\eta\kappa}\mathcal{Z}^{t}+\left(1-\theta_{1}(1-\theta_{2})\right)\mathcal{V}^{t}+\left(1-\frac{\rho\theta_{1}}{1+\theta_{1}}\right)\mathcal{W}^{t}+\frac{\theta_{2}}{2\bar{L}\theta_{1}}\left(V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right).$$
By the definition of $\alpha_{2}$ in Theorem 3 and since $\theta_{2}=1/2$, taking the full expectation on both sides, we have
$$\mathbb{E}\left[\mathcal{Z}^{t+1}+\mathcal{V}^{t+1}+\mathcal{W}^{t+1}\right]\leq\alpha_{2}\mathbb{E}\left[\mathcal{Z}^{t}+\mathcal{V}^{t}+\mathcal{W}^{t}\right]+\frac{1}{4\bar{L}\theta_{1}}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$
Recursively repeating the above relationship from $t = T-1$ to $t = 0$, we have
$$\mathbb{E}\left[\Psi^{T}\right]\leq\alpha_{2}\mathbb{E}\left[\Psi^{T-1}\right]+\frac{1}{4\bar{L}\theta_{1}}\mathbb{E}\left[V_{e}^{T-1}\left(\mathbf{p}^{T-1}\right)-V_{e}^{T-1}\left(\mathbf{p}^{IS}\right)\right]\leq\alpha_{2}^{T}\Psi^{0}+\frac{1}{4\bar{L}\theta_{1}}\sum_{t=0}^{T-1}\alpha_{2}^{T-t-1}\mathbb{E}\left[V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right)\right].$$

## B Useful Lemmas

We state and prove technical lemmas that are used to prove the main theorems.

Lemma 1. *Let* $F(\cdot)$ *be defined in* (1)*. Suppose Assumption 1 and Assumption 2 hold. Then* $F(\cdot)$ *is convex and* $\bar{L}$*-smooth, where* $\bar{L}=(1/n)\sum_{i=1}^{n}L_{i}$.

Proof. Under Assumption 1, $F(\cdot)$ is a linear combination of convex functions and, thus, is convex. To prove that it is $\bar{L}$-smooth, we only need to note that
$$\|\nabla F(x)-\nabla F(y)\|\leq\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq\frac{1}{n}\sum_{i=1}^{n}L_{i}\|x-y\|=\bar{L}\|x-y\|,\qquad x,y\in\mathbb{R}^{d},$$
where the first inequality follows from Jensen's inequality and the second inequality follows from Assumption 2. $\square$

Lemma 2. *Assume that* $f(\cdot)$ *is a differentiable convex function on* $\mathbb{R}^{d}$ *and is* $L$*-smooth. Then, for all* $x,y\in\mathbb{R}^{d}$*, we have*
$$0\leq f(y)-f(x)-\langle\nabla f(x),y-x\rangle\leq\frac{L}{2}\|x-y\|^{2},\tag{24}$$
$$f(y)-f(x)-\langle\nabla f(x),y-x\rangle\geq\frac{1}{2L}\|\nabla f(x)-\nabla f(y)\|^{2}.\tag{25}$$

Proof. See Theorem 2.1.5 of Nesterov (2018). $\square$

Lemma 3. *Suppose Assumption 1 and Assumption 2 hold. Let* $x^{t},w^{t},g^{t}$ *and* $\mathbf{p}^{t}$ *be defined as in Algorithm 1. We have*
$$\mathbb{E}_{t}\left[\left\|g^{t}-\nabla F(x^{t})\right\|^{2}\right]\leq4\bar{L}\left(F(x^{t})-F(x^{\star})\right)+4\bar{L}\left(F(w^{t})-F(x^{\star})\right)+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right).$$

Proof. Note that $\mathbb{E}\left[\|x-\mathbb{E}[x]\|^{2}\right]=\mathbb{E}\left[\|x\|^{2}\right]-\|\mathbb{E}[x]\|^{2}$ for any random vector $x\in\mathbb{R}^{d}$.
Thus we have
$$\mathbb{E}_{t}\left[\left\|g^{t}-\nabla F(x^{t})\right\|^{2}\right]=\mathbb{E}_{t}\left[\left\|\frac{1}{np_{i_{t}}^{t}}\left(\nabla f_{i_{t}}(x^{t})-\nabla f_{i_{t}}(w^{t})\right)-\left(\nabla F(x^{t})-\nabla F(w^{t})\right)\right\|^{2}\right]$$
$$=\mathbb{E}_{t}\left[\left\|\frac{1}{np_{i_{t}}^{t}}\left(\nabla f_{i_{t}}(x^{t})-\nabla f_{i_{t}}(w^{t})\right)\right\|^{2}\right]-\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2}$$
$$=V_{e}^{t}\left(\mathbf{p}^{t}\right)-\left\|\nabla F(x^{t})-\nabla F(w^{t})\right\|^{2}\leq V_{e}^{t}\left(\mathbf{p}^{t}\right)=V_{e}^{t}\left(\mathbf{p}^{IS}\right)+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right),\tag{26}$$
where $V_{e}^{t}(\mathbf{p}^{t})$ is defined in (2). On the other hand, note that
$$V_{e}^{t}\left(\mathbf{p}^{IS}\right)=\frac{\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})\right\|^{2}$$
$$\leq\frac{2\bar{L}}{n}\left\{\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(x^{\star})\right\|^{2}+\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}\right\}$$
$$\leq\frac{2\bar{L}}{n}\left\{\sum_{i=1}^{n}\frac{1}{L_{i}}(2L_{i})\left(f_{i}(x^{t})-f_{i}(x^{\star})-\left\langle\nabla f_{i}(x^{\star}),x^{t}-x^{\star}\right\rangle\right)+\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}\right\}$$
$$\leq4\bar{L}\left(F(x^{t})-F(x^{\star})\right)+\frac{2\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2},\tag{27}$$
where the second inequality follows from Assumption 1, Assumption 2, and Lemma 2, and the last inequality follows from the fact that $\nabla F(x^{\star})=0$. Combining (26) and (27), we have
$$\mathbb{E}_{t}\left[\left\|g^{t}-\nabla F(x^{t})\right\|^{2}\right]\leq4\bar{L}\left(F(x^{t})-F(x^{\star})\right)+\frac{2\bar{L}}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}+V_{e}^{t}\left(\mathbf{p}^{t}\right)-V_{e}^{t}\left(\mathbf{p}^{IS}\right).\qquad\square$$

Lemma 4. *Suppose Assumption 1 and Assumption 2 hold. Let* $\mathcal{D}^{t}$ *be defined as in* (10)*. We have*
$$\mathbb{E}_{t}\left[\mathcal{D}^{t+1}\right]\leq2\rho\left(F(x^{t})-F(x^{\star})\right)+(1-\rho)\mathcal{D}^{t}.$$

Proof. By the update rule of $w^{t}$, we have
$$\mathbb{E}_{t}\left[\frac{1}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t+1})-\nabla f_{i}(x^{\star})\right\|^{2}\right]=\frac{1-\rho}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}+\frac{\rho}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(x^{t})-\nabla f_{i}(x^{\star})\right\|^{2}$$
$$\leq\frac{\rho}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}(2L_{i})\left(f_{i}(x^{t})-f_{i}(x^{\star})-\left\langle\nabla f_{i}(x^{\star}),x^{t}-x^{\star}\right\rangle\right)+\frac{1-\rho}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2}$$
$$=2\rho\left(F(x^{t})-F(x^{\star})\right)+\frac{1-\rho}{n}\sum_{i=1}^{n}\frac{1}{L_{i}}\left\|\nabla f_{i}(w^{t})-\nabla f_{i}(x^{\star})\right\|^{2},$$
where the inequality follows from Assumption 1, Assumption 2, and (25) of Lemma 2, and the last equality follows from $\nabla F(x^{\star})=0$. $\square$

Lemma 5. *Suppose the conditions of Theorem 3 hold.
Then*
$$\left\langle g^{t},x^{\star}-z^{t+1}\right\rangle+\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}\geq\frac{\bar{L}}{2\eta}\left\|z^{t}-z^{t+1}\right\|^{2}+\mathcal{Z}^{t+1}-\frac{1}{1+\eta\kappa}\mathcal{Z}^{t},$$
*where* $\mathcal{Z}^{t}$ *is defined in* (12).

Proof. Note that
$$z^{t+1}=\frac{1}{1+\eta\kappa}\left(\eta\kappa x^{t}+z^{t}-\frac{\eta}{\bar{L}}g^{t}\right),$$
where $\kappa=\mu/\bar{L}$. Thus,
$$g^{t}=\mu\left(x^{t}-z^{t+1}\right)+\frac{\bar{L}}{\eta}\left(z^{t}-z^{t+1}\right),$$
which implies that
$$\left\langle g^{t},z^{t+1}-x^{\star}\right\rangle=\mu\left\langle x^{t}-z^{t+1},z^{t+1}-x^{\star}\right\rangle+\frac{\bar{L}}{\eta}\left\langle z^{t}-z^{t+1},z^{t+1}-x^{\star}\right\rangle$$
$$=\frac{\mu}{2}\left(\left\|x^{t}-x^{\star}\right\|^{2}-\left\|x^{t}-z^{t+1}\right\|^{2}-\left\|z^{t+1}-x^{\star}\right\|^{2}\right)+\frac{\bar{L}}{2\eta}\left(\left\|z^{t}-x^{\star}\right\|^{2}-\left\|z^{t}-z^{t+1}\right\|^{2}-\left\|z^{t+1}-x^{\star}\right\|^{2}\right)$$
$$\leq\frac{\mu}{2}\left\|x^{t}-x^{\star}\right\|^{2}+\frac{\bar{L}}{2\eta}\left(\left\|z^{t}-x^{\star}\right\|^{2}-(1+\eta\kappa)\left\|z^{t+1}-x^{\star}\right\|^{2}\right)-\frac{\bar{L}}{2\eta}\left\|z^{t}-z^{t+1}\right\|^{2}.$$
Combining with the definition of $\mathcal{Z}^{t}$, we then have the final result. $\square$

Lemma 6. *Suppose that the conditions of Theorem 3 hold. Then*
$$\frac{\bar{L}}{2\eta}\left\|z^{t+1}-z^{t}\right\|^{2}+\left\langle g^{t},z^{t+1}-z^{t}\right\rangle\geq\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)-\frac{\eta}{2\bar{L}(1-\eta\theta_{1})}\left\|g^{t}-\nabla F(x^{t})\right\|^{2}.$$

Proof. By the definition of $v^{t+1}$, we have
$$\frac{\bar{L}}{2\eta}\left\|z^{t+1}-z^{t}\right\|^{2}+\left\langle g^{t},z^{t+1}-z^{t}\right\rangle=\frac{1}{\theta_{1}}\left(\frac{\bar{L}}{2\eta\theta_{1}}\left\|\theta_{1}\left(z^{t+1}-z^{t}\right)\right\|^{2}+\left\langle g^{t},\theta_{1}\left(z^{t+1}-z^{t}\right)\right\rangle\right)$$
$$=\frac{1}{\theta_{1}}\left(\frac{\bar{L}}{2\eta\theta_{1}}\left\|v^{t+1}-x^{t}\right\|^{2}+\left\langle g^{t},v^{t+1}-x^{t}\right\rangle\right)$$
$$=\frac{1}{\theta_{1}}\left(\frac{\bar{L}}{2\eta\theta_{1}}\left\|v^{t+1}-x^{t}\right\|^{2}+\left\langle\nabla F(x^{t}),v^{t+1}-x^{t}\right\rangle+\left\langle g^{t}-\nabla F(x^{t}),v^{t+1}-x^{t}\right\rangle\right)$$
$$=\frac{1}{\theta_{1}}\left(\frac{\bar{L}}{2}\left\|v^{t+1}-x^{t}\right\|^{2}+\left\langle\nabla F(x^{t}),v^{t+1}-x^{t}\right\rangle+\frac{\bar{L}}{2}\left(\frac{1}{\eta\theta_{1}}-1\right)\left\|v^{t+1}-x^{t}\right\|^{2}+\left\langle g^{t}-\nabla F(x^{t}),v^{t+1}-x^{t}\right\rangle\right)$$
$$\geq\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})+\frac{\bar{L}}{2}\left(\frac{1}{\eta\theta_{1}}-1\right)\left\|v^{t+1}-x^{t}\right\|^{2}+\left\langle g^{t}-\nabla F(x^{t}),v^{t+1}-x^{t}\right\rangle\right),$$
where the last inequality follows from Lemma 1 and Lemma 2. By Young's inequality, $\langle a,b\rangle\geq-\frac{\|a\|^{2}}{2\beta}-\frac{\beta\|b\|^{2}}{2}$ with $\beta=\frac{\eta\theta_{1}}{\bar{L}(1-\eta\theta_{1})}$, we have
$$\frac{\bar{L}}{2\eta}\left\|z^{t+1}-z^{t}\right\|^{2}+\left\langle g^{t},z^{t+1}-z^{t}\right\rangle\geq\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})+\frac{\bar{L}}{2}\left(\frac{1}{\eta\theta_{1}}-1\right)\left\|v^{t+1}-x^{t}\right\|^{2}-\frac{\eta\theta_{1}}{2\bar{L}(1-\eta\theta_{1})}\left\|g^{t}-\nabla F(x^{t})\right\|^{2}-\frac{\bar{L}}{2}\left(\frac{1}{\eta\theta_{1}}-1\right)\left\|v^{t+1}-x^{t}\right\|^{2}\right)$$
$$=\frac{1}{\theta_{1}}\left(F(v^{t+1})-F(x^{t})\right)-\frac{\eta}{2\bar{L}(1-\eta\theta_{1})}\left\|g^{t}-\nabla F(x^{t})\right\|^{2}.\qquad\square$$
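The estimator variance $V_{e}^{t}(\mathbf{p})$ that drives the bounds above can also be checked numerically. The following is a minimal sketch with hypothetical toy values (`a[i]` plays the role of $\nabla f_{i}(x^{t})-\nabla f_{i}(w^{t})$ and `L[i]` the smoothness constants $L_{i}$), comparing uniform sampling, Lipschitz-based importance sampling $p_{i}^{IS}=L_{i}/(n\bar{L})$, and the oracle distribution $p_{i}\propto\|a_{i}\|$, which minimizes $V_{e}^{t}(\mathbf{p})$ over the simplex by the Cauchy-Schwarz inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
L = rng.uniform(0.5, 10.0, size=n)        # toy smoothness constants L_i
a = rng.normal(size=(n, d)) * L[:, None]  # toy a_i = grad f_i(x^t) - grad f_i(w^t)

def V(p):
    # second moment (1/n^2) * sum_i ||a_i||^2 / p_i of the unbiased
    # estimator (1/(n p_i)) a_i; the variance differs from it only by
    # the p-independent constant ||(1/n) sum_i a_i||^2
    return np.sum(np.sum(a**2, axis=1) / p) / n**2

p_unif = np.full(n, 1.0 / n)
p_is = L / L.sum()                        # Lipschitz-based importance sampling
norms = np.linalg.norm(a, axis=1)
p_star = norms / norms.sum()              # oracle: p_i proportional to ||a_i||

# the oracle distribution can be no worse than either fixed distribution
assert V(p_star) <= V(p_is) + 1e-9
assert V(p_star) <= V(p_unif) + 1e-9
```

The Cauchy-Schwarz argument behind the assertions is the standard one: $(\sum_{i}\|a_{i}\|)^{2}\leq(\sum_{i}\|a_{i}\|^{2}/p_{i})(\sum_{i}p_{i})$, with equality exactly at $p_{i}\propto\|a_{i}\|$.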
Review 1: Summary: In finite-sum optimization, the choice of sampling distribution greatly affects the performance of stochastic algorithms such as L-SVRG and L-Katyusha. This paper contributes a novel and purely adaptive sampling strategy that does not rely on any prior information. Not only does this adaptive sampling provably improve the complexity of L-SVRG and L-Katyusha, it also has a small memory footprint and computation cost. Strengths and Weaknesses: Strengths: Convergence rate analysis is provided, which justifies the importance of using adaptive sampling. Both synthetic experiments to highlight the intuition and real-world experiments to demonstrate the advantage are provided. Weakness: I am a little uncertain about the efficiency of the adaptive sampling in practice: since the sampling distribution changes every iteration, how can you efficiently perform the sampling with time complexity comparable to computing an SGD step? Requested Changes: - Can you add a table to summarize the main complexity of AS-LSVRG, AS-LKatyusha, and the prior algorithms? I am a bit confused about what specific advantage in terms of complexity the proposed algorithms have compared with the state-of-the-art importance sampling algorithms. - Can you elaborate more on the intuition of AdaOSMD and Algorithm 4? What is the difference between AdaOSMD and standard mirror descent, and why is "adaptive" necessary? Broader Impact Concerns: Not available. ================================================== Review 2: Summary: This paper considers the problem of finite-sum minimization. Stochastic gradient-based algorithms with the control variate technique have been the center of this research direction and provably achieve the optimal gradient access complexity, especially in the convex setting. This paper further improves two successful methods in this literature, namely L-SVRG and L-Katyusha, by replacing the importance sampling in existing procedures with adaptive sampling.
The adaptive sampling probability is updated by the AdaOSMD method. The authors prove the convergence rate of the proposed methods, AS-LSVRG and AS-LKatyusha. Strengths and Weaknesses: My major concern about this paper is the lack of novelty. The algorithms proposed in this paper are a combination of existing methods: L-SVRG and L-Katyusha for the control variate technique, and AdaOSMD for maintaining the adaptive sampling probability. Using an adaptive sampling method to accelerate the convergence of stochastic gradient-based algorithms and their variants with the control variate technique is by no means new; e.g., please see [1]. I acknowledge that using the oracle optimal dynamic sampling distribution in Eq.(8) improves the convergence of the importance-sampling-based counterpart, but I do not think the current approximate implementation (Eq.(8) is impractical to obtain, and the authors used AdaOSMD to approximate it in an online learning manner) leads to any improvement in the worst-case scenario. [1] Adaptive Variance Reducing for Stochastic Gradient Descent. Z Shen, H Qian, T Zhou, T Mu - IJCAI, 2016 Requested Changes: I think the novelty of this paper is limited. Unless the oracle optimal dynamic sampling distribution in Eq.(8) can be exactly used for adaptive sampling, the improvement of this paper over the previous work is not justified. Broader Impact Concerns: This is a theory paper. I see no concerns about the ethical implications. ================================================== Review 3: Summary: This paper proposes an adaptive sampling strategy for L-SVRG (loopless stochastic variance-reduced gradient) and L-Katyusha that learns a sampling distribution that changes with the iterations and does not require any prior knowledge of the problem parameters, unlike previous importance-sampling-based methods. Convergence guarantees for L-SVRG and L-Katyusha are established for convex objectives.
Extensive simulations and the practical usage of the proposed sampling scheme on real data are provided. Strengths and Weaknesses: Strengths: I like the insights obtained from the convergence analysis that “the control-variate is improving upon optimization heterogeneity, and adaptive sampling is improving upon smoothness heterogeneity,” which highlights the improvement of the adaptive sampling coming from addressing the smoothness heterogeneity. The discussion on “context-shift” and “concept-shift” is also inspiring, and the authors constructed simulations to further validate these insights obtained from theoretical analysis in Sections 4.1 and 4.3. Weaknesses and Requested changes: The discussion in section 3.3 is quite interesting. I understand it might be tricky, but is there a way to make the statement more concrete instead of just saying qualitatively? The organization of the paper should be improved: Section 2 starts with a detailed description of the proposed algorithm without any high-level interpretation. I had a hard time understanding why the sampling distribution should minimize the proposed sampling loss function as in eq (4), and the explanations are in eq (8) and Theorem 1. Thus, I would recommend adjusting the flow of the paper by adding more background or adding more insights into algorithm design to improve the readability. Experimental results on non-convex loss: I understand that due to the technical difficulty, analysis can only be conducted under convex conditions. However, one can always perform experiments to see what will happen in the non-convex case. The experiments provided in the paper are all convex optimization problems; how will the proposed algorithm perform for non-convex loss, like a simple neural network? Minor comments: 1. In the figures, it is best to distinguish different algorithms using both line style and colors. The meanings of red and blue curves in Figures 1 and 2 are different, and it is sometimes confusing. 2. 
Some background on L-SVRG and L-Katyusha, i.e., Qian et al. (2021), can be included to make the paper more self-contained. I referred to this paper multiple times during the review. Requested Changes: See the above weakness discussion. Broader Impact Concerns: There is no such concern for this paper. ================================================== Metareview: Recommendation: Accept with minor revision Comment: The paper proposes an adaptive sampling strategy for L-SVRG and L-Katyusha that learns a sampling distribution that changes with the iterates. Specifically, it synthesizes the techniques of adaptive sampling and the online learning framework to obtain practical algorithms. Theoretical analyses establish the convergence rate for the proposed methods in the convex setting and reveal some interesting insights (e.g., optimization/smoothness heterogeneity, context/concept-shift). The paper also conducts experiments on both synthetic and real-world data to demonstrate the intuition and advantages of the algorithms. On the other hand, as there is much related work on adaptive sampling, the authors should revise their paper to emphasize the novelty and the significance of their results. ==================================================
# Global Safe Sequential Learning Via Efficient Knowledge Transfer

Anonymous authors

Paper under double-blind review

## Abstract

Sequential learning methods such as active learning and Bayesian optimization select the most informative data to learn about a task. In many medical or engineering applications, the data selection is constrained by a priori unknown safety conditions. A promising line of safe learning methods utilizes Gaussian processes (GPs) to model the safety probability and performs data selection in areas with high safety confidence. However, accurate safety modeling requires prior knowledge or consumes data. In addition, the safety confidence centers around the given observations, which leads to local exploration. As transferable source knowledge is often available in safety-critical experiments, we propose to consider transfer safe sequential learning to accelerate the learning of safety. We further consider a pre-computation of source components to reduce the additional computational load that is introduced by incorporating source data. In this paper, we theoretically analyze the maximum explorable safe regions of conventional safe learning methods. Furthermore, we empirically demonstrate that our approach 1) learns a task with lower data consumption, 2) globally explores multiple disjoint safe regions under guidance of the source knowledge, and 3) operates with computation comparable to conventional safe learning methods.

## 1 Introduction

Despite the great success of machine learning, accessing data is a non-trivial task. One prominent approach is to consider experimental design (Lindley, 1956; Chaloner & Verdinelli, 1995; Brochu et al., 2010). In particular, active learning (AL) (Krause et al., 2008; Kumar & Gupta, 2020) and Bayesian optimization (BO) (Brochu et al., 2010; Snoek et al., 2012) resort to a sequential data selection process.
The methods initiate with a small amount of data, iteratively compute an acquisition function, query new data according to the acquisition score, receive observations from the oracle, and update the belief, until the learning goal is achieved or the acquisition budget is exhausted. These learning algorithms often utilize Gaussian processes (GPs; Rasmussen & Williams, 2006) as surrogate models for the acquisition computation. In many applications, such as spinal cord stimulation (Harkema et al., 2011) and robotic learning (Berkenkamp et al., 2016; Baumann et al., 2021), the algorithms must respect some a priori unknown safety concerns. One effective approach to safe learning is to model the safety constraints with additional GPs (Sui et al., 2015; Schreiter et al., 2015; Zimmer et al., 2018; Sui et al., 2018; Turchetta et al., 2019; Berkenkamp et al., 2020; Sergeyev et al., 2020; Baumann et al., 2021; Li et al., 2022). The algorithms initiate with given safe observations. A safe set is then defined to restrict the exploration to regions with high safety confidence. The safe set expands as the learning proceeds, and thus the explorable area grows. Safe learning is also considered in related domains such as Markov Decision Processes (Turchetta et al., 2019) and reinforcement learning (García et al., 2015). In this paper, we focus on GPs as they are often considered the gold standard when it comes to calibrated uncertainties. While such safe learning methods have achieved a huge impact, a few challenges remain. Firstly, GP priors need to be given prior to the exploration (Sui et al., 2015; Berkenkamp et al., 2016; 2020) or fitted with initial data (note that accessing the data is expensive) (Schreiter et al., 2015; Zimmer et al., 2018; Li et al., 2022). In addition, safe learning algorithms suffer from local exploration.
GPs are typically smooth, and the uncertainty increases beyond the reachable safe set boundary. Disconnected safe regions will be classified as unsafe and will remain unexplored. We provide a detailed analysis and illustration of explorable regions in Section 3. In reality, local exploration increases the effort of deploying safe learning algorithms, because the domain experts need to provide safe data from multiple safe regions.

![1_image_0.png](1_image_0.png)

Figure 1: Illustration: safe sequential learning with transfer (top) and conventional (bottom) learning.

Our contribution: As safe learning (Schreiter et al., 2015; Sui et al., 2015) is always initialized with prior knowledge, we reasonably assume that correlated experiments have been performed and that the results are available. This assumption enables transfer learning (Figure 1), whose benefit is twofold: 1) exploration as well as expansion of safe regions is significantly accelerated, and 2) the source task may provide guidance on safe regions disconnected from the initial target data and thus helps us to explore globally. Concrete applications are ubiquitous, including simulation to reality (Marco et al., 2017), serial production, and multi-fidelity modeling (Li et al., 2020). Transfer learning can be achieved by considering the source and target tasks jointly as multi-output GPs (Journel & Huijbregts, 1976; Álvarez et al., 2012). However, GPs are notorious for their cubic time complexity due to the inversion of Gram matrices (Section 3). A large amount of source data thus introduces pronounced computation time, which is often a bottleneck in real experiments. We further modularize the multi-output GPs such that the source-relevant components can be pre-computed and fixed. This alleviates the complexity of multi-output GPs while the benefit is retained.
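The pre-computation idea can be illustrated with standard block-matrix (Schur complement) algebra. The sketch below is a generic illustration under stated assumptions (an RBF kernel and illustrative noise levels), not the exact modularization proposed later: the source Gram matrix is factorized once, and each sequential-learning step then avoids refactorizing the full joint Gram matrix.

```python
import numpy as np

def chol_source(Ks, sigma_s=0.1):
    # one-time O(N_source^3) factorization of the fixed source Gram matrix
    return np.linalg.cholesky(Ks + sigma_s**2 * np.eye(len(Ks)))

def joint_posterior_mean(Ls, Kst, Ktt, ks, kt, ys, yt, sigma_t=0.1):
    # posterior mean of the joint (source + target) GP at a test point,
    # reusing the cached source factor Ls via block elimination
    A = np.linalg.solve(Ls, Kst)                          # Ls^{-1} K_st
    S = Ktt + sigma_t**2 * np.eye(len(Ktt)) - A.T @ A     # Schur complement
    b_s = np.linalg.solve(Ls, ys)
    beta = np.linalg.solve(S, yt - A.T @ b_s)             # target weights
    alpha_s = np.linalg.solve(Ls.T, b_s - A @ beta)       # source weights
    return ks @ alpha_s + kt @ beta

# sanity check against the direct joint solve (toy RBF data)
rng = np.random.default_rng(1)
Xs, Xt, xq = rng.normal(size=(6, 2)), rng.normal(size=(3, 2)), rng.normal(size=(1, 2))
k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None]) ** 2).sum(-1))
ys, yt = rng.normal(size=6), rng.normal(size=3)
Ls = chol_source(k(Xs, Xs))
m = joint_posterior_mean(Ls, k(Xs, Xt), k(Xt, Xt), k(xq, Xs)[0], k(xq, Xt)[0], ys, yt)
K = np.block([[k(Xs, Xs) + 0.01 * np.eye(6), k(Xs, Xt)],
              [k(Xs, Xt).T, k(Xt, Xt) + 0.01 * np.eye(3)]])
m_ref = np.concatenate([k(xq, Xs)[0], k(xq, Xt)[0]]) @ np.linalg.solve(K, np.concatenate([ys, yt]))
assert np.isclose(m, m_ref)
```

The design point is that `chol_source` runs once before the sequential loop, while the per-step work involves linear systems whose size is governed by the small, growing target dataset.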
In summary, we 1) introduce the idea of transfer safe sequential learning supported by a thorough mathematical formulation, 2) derive that conventional no-transfer approaches have an upper bound on the explorable region, 3) provide a modularized approach to multi-output GPs that can alleviate the computational burden of source data, with our technique being more general than the previous method in Tighineanu et al. (2022), and 4) demonstrate the empirical efficacy.

Related work: Safe learning is considered in many problems such as Markov Decision Processes (Turchetta et al., 2019) and reinforcement learning (García et al., 2015). In this paper, we focus on GP learning problems. In Gelbart et al. (2014); Hernandez-Lobato et al. (2015); Hernández-Lobato et al. (2016), the authors investigated constrained learning with GPs and integrated the constraints directly into the acquisition function (e.g., discounting the acquisition score by the probability of constraint violation). These works do not exclude unsafe data from the search pool, and the experimental examples are mostly not safety critical. A safe set concept was introduced for safe BO (Sui et al., 2015) and safe AL (Schreiter et al., 2015). The concept was then extended to BO with multiple safety constraints (Berkenkamp et al., 2020), to AL for time series modeling (Zimmer et al., 2018), and to AL for multi-output problems (Li et al., 2022). For safe BO, Sui et al. (2018) proposed to conduct the safe set exploration and BO in two distinct stages. All of these methods suffer from local exploration (Section 3). Sergeyev et al. (2020) considered disjoint safe regions, assuming regions separated only by a small gap where the noisy constraint function(s) shortly dips beneath (but stays close to) the safety threshold. Baumann et al.
(2021) proposed a global safe BO method for dynamical systems, assuming that unsafe areas are approached slowly enough and that there exists an intervention mechanism which stops the system quickly enough. None of these methods exploits transfer safe learning, which can allow for global exploration on any system given prior source knowledge. Transfer learning and multi-task learning have attracted increasing attention. In particular, multi-output GP methods have been developed for multi-task BO (Swersky et al., 2013; Poloczek et al., 2017), sim-to-real transfer for BO (Marco et al., 2017), and multi-task AL (Zhang et al., 2016). However, GPs have time complexity cubic in the number of observations, which is compounded by multiple outputs. In Tighineanu et al. (2022), the authors assume a specific structure of the multi-output kernel and factorize the computation with an ensembling technique. This eases the computational burden for transfer sequential learning. In our paper, we propose a modularized transfer safe learning approach to facilitate real experiments while avoiding cubic complexity. Our modularization technique can be generalized to arbitrary multi-output kernels.

Paper structure: The remainder of this paper is structured as follows: we state the goal of safe sequential learning in Section 2; in Section 3, we introduce the background and analyze the local exploration problem of safe learning; Section 4 elaborates our approach under a transfer learning scenario; Section 5 presents the experimental study; finally, we conclude our paper in Section 6.

## 2 Problem Statement

Preliminary: Throughout this paper, we inspect regression outputs and safety values. Each input $x \in \mathcal{X} \subseteq \mathbb{R}^D$ has a corresponding noisy regression output $y \in \mathbb{R}$ and corresponding noisy safety values jointly expressed as a vector $z = (z^1, ..., z^J) \in \mathbb{R}^J$.

Assumption 2.1. $y = f(x) + \epsilon_f$, $z^j = q_j(x) + \epsilon_{q_j}$, where $\epsilon_f \sim \mathcal{N}(0, \sigma_f^2)$, $\epsilon_{q_j} \sim \mathcal{N}(0, \sigma_{q_j}^2)$.
In addition, $y_s = f_s(x_s) + \epsilon_{f_s}$, $z_s^j = q_{j,s}(x_s) + \epsilon_{q_{j,s}}$, where $\epsilon_{f_s} \sim \mathcal{N}(0, \sigma_{f_s}^2)$, $\epsilon_{q_{j,s}} \sim \mathcal{N}(0, \sigma_{q_{j,s}}^2)$. Here, $\{f, q_j\}$ are our target black-box function and safety functions. The source and target tasks may have different numbers of safety conditions, but we can add trivial constraints (e.g., $1 \geq -\infty$) to either task in order to have the same number of constraints $J$ for both tasks. The notation is summarized in Table 1.

Safe learning problem statement: We are given a small number of safe observations $\mathcal{D}_{N_{init}} = \{x_{1:N_{init}}, y_{1:N_{init}}, z_{1:N_{init}}\}$, with $x_{1:N_{init}} = \{x_1, ..., x_{N_{init}}\} \subseteq \mathcal{X}$, $y_{1:N_{init}} = \{y_1, ..., y_{N_{init}}\} \subseteq \mathbb{R}$, and safety observations $z_{1:N_{init}} := (z^1, ..., z^J)_{1:N_{init}} := (z^1_{1:N_{init}}, ..., z^J_{1:N_{init}}) = \{z_n = (z^1_n, ..., z^J_n)\}_{n=1}^{N_{init}}$. In practice, the initial data usually meet the safety constraints, i.e., $z^j_n \geq T_j$ for all observation indices $n$ and constraint indices $j$. We are further given source data $\mathcal{D}^{source}_{N_{source}} = \{x_{s,1:N_{source}}, y_{s,1:N_{source}}, z_{s,1:N_{source}}\}$, with $x_{s,1:N_{source}} = \{x_{s,1}, ..., x_{s,N_{source}}\} \subseteq \mathcal{X}$, $y_{s,1:N_{source}} = \{y_{s,1}, ..., y_{s,N_{source}}\} \subseteq \mathbb{R}$, and $z_{s,1:N_{source}} = \{z_{s,n} = (z^1_{s,n}, ..., z^J_{s,n}) \mid n = 1, ..., N_{source}\} \subseteq \mathbb{R}^J$. $N_{source}$ is the number of source data points. Notably, the source data do not need to be measured with the same safety constraints as the target task. In our main paper, we consider only one source task for simplicity, while Appendix E provides the formulation and ablation studies for more source tasks. With one source task, we assume that $N_{source}$, the number of source data points, is large enough that we do not need to explore for the source task. This is often the case when there is plenty of data from previous versions of systems or prototypes. The goal is to evaluate the function $f : \mathcal{X} \to \mathbb{R}$, where each evaluation is expensive. In each iteration, we select a point $x_* \in \mathcal{X}_{pool} \subseteq \mathcal{X}$ to evaluate ($\mathcal{X}_{pool} \subseteq \mathcal{X}$ is the search pool, which can be the entire space $\mathcal{X}$ or a predefined subspace of $\mathcal{X}$, depending on the application).
This selection should respect the a priori unknown safety constraints $\forall j = 1, ..., J: q_j(\mathbf{x}_*) \geq T_j$, where the true $q_j$ are inaccessible. Then, a budget-consuming labeling process occurs, and we obtain a noisy $y_*$ and noisy safety values $\mathbf{z}_*$. The labeled point is then added to $\mathcal{D}_{N_{init}}$ (the observed dataset becomes $\mathcal{D}_{N_{init}+1}$), and we proceed to the next iteration (Algorithm 1). In the following, $N$ denotes the size of the observed dataset of the target task, varying from $N_{init}$ to $N_{init} + num\_steps$ (the number of AL steps, i.e., the AL budget). This problem formulation applies to both AL and BO.

| Symbols | Meaning |
|---|---|
| $\mathcal{D}_N = \{x_{1:N}, y_{1:N}, z_{1:N}\}$ | dataset of the target task, $N = N_{init}, ..., N_{init} + num\_steps$ |
| $z^j_{1:N} = \{z^j_1, ..., z^j_N\}$ | safety observations of the $j$-th constraint (unknown function $q_j$) |
| $z_{1:N} = (z^1_{1:N}, ..., z^J_{1:N})$ | safety observations of all constraints jointly |
| $\mathcal{D}^{source}_{N_{source}}$ | dataset of the source task $\{x_{s,1:N_{source}}, y_{s,1:N_{source}}, z_{s,1:N_{source}}\}$ |
| $y = f(\mathbf{x}) + \epsilon_f$ | observation of unknown function $f \sim \mathcal{GP}(0, k_f)$, $\epsilon_f \sim \mathcal{N}(0, \sigma_f^2)$ |
| $z^j = q_j(\mathbf{x}) + \epsilon_{q_j}$ | observation of unknown constraint $q_j \sim \mathcal{GP}(0, k_{q_j})$, $\epsilon_{q_j} \sim \mathcal{N}(0, \sigma_{q_j}^2)$ |
| $q_j(\mathbf{x}) \geq T_j$ | $j$-th safety condition |
| $y_s = f_s(\mathbf{x}_s) + \epsilon_{f_s}$ | source task observation, prior $f_s \sim \mathcal{GP}(0, k_{f_s})$, $\epsilon_{f_s} \sim \mathcal{N}(0, \sigma_{f_s}^2)$ |
| $z^j_s = q_{j,s}(\mathbf{x}_s) + \epsilon_{q_{j,s}}$ | source task constraint, prior $q_{j,s} \sim \mathcal{GP}(0, k_{q_{j,s}})$, $\epsilon_{q_{j,s}} \sim \mathcal{N}(0, \sigma_{q_{j,s}}^2)$ |
| $\mathbf{f} : \mathcal{X} \times \{\text{task indices}\} \rightarrow \mathbb{R}$ | $f_s$ and $f$ jointly as a multi-task function |
| $\mathbf{q}_j : \mathcal{X} \times \{\text{task indices}\} \rightarrow \mathbb{R}$ | $q_{j,s}$ and $q_j$ jointly as a multi-task function |
| $\mathbf{f} \sim \mathcal{GP}(0, k_{\mathbf{f}})$ | multi-task GP prior, kernel $k_{\mathbf{f}}$ parameterized by $\theta_{\mathbf{f}} = (\theta_{f_s}, \theta_f)$ |
| $\mathbf{q}_j \sim \mathcal{GP}(0, k_{\mathbf{q}_j})$ | multi-task GP prior, kernel $k_{\mathbf{q}_j}$ parameterized by $\theta_{\mathbf{q}_j} = (\theta_{q_{j,s}}, \theta_{q_j})$ |

Table 1: Key notation
In this paper, we focus on AL problems. The goal is to use the evaluations to make accurate predictions of $f$ over $\mathcal{X}$; the points we select should favor a general understanding of the space $\mathcal{X}$, subject to the safety constraints.

## 3 Background & Local Exploration Of Safe Learning Methods

In this section, we introduce GPs and safe learning algorithms for GPs, and then provide a detailed analysis and illustration of the local exploration problem.

Gaussian processes (GPs): A GP is a stochastic process specified by a mean and a kernel function (Rasmussen & Williams, 2006; Kanagawa et al., 2018; Schoelkopf & Smola, 2002). Without loss of generality, we assume the GPs have zero mean. In addition, without prior knowledge about the data, it is common to assume the governing kernels are stationary.

Assumption 3.1. For $g \in \{f, q_1, ..., q_J\}$, $g \sim \mathcal{GP}(0, k_g)$, and $k_g(\mathbf{x}, \mathbf{x}') := k_g(\mathbf{x} - \mathbf{x}') \leq 1$ is stationary.

Bounding the kernels by 1 provides advantages in the theoretical analysis (Srinivas et al., 2012) and is not restrictive because the data are usually normalized to zero mean and unit variance. The GP assumptions (Assumption 2.1 and Assumption 3.1) imply that each of $\{f, q_1, ..., q_J\}$ has a closed-form predictive distribution. We write down the distribution for $f$ at a test point $\mathbf{x}_*$; the distributions of $q_j$ are obtained by replacing $f$ with $q_j$ and $y_{1:N}$ with $z^j_{1:N}$:

$$p\left(f(\mathbf{x}_*) \,\middle|\, x_{1:N}, y_{1:N}\right) = \mathcal{N}\left(\mu_{f,N}(\mathbf{x}_*), \sigma^2_{f,N}(\mathbf{x}_*)\right),$$

$$\begin{array}{l}
\mu_{f,N}(\mathbf{x}_*) =: \mu_{f,N} = k_f(x_{1:N}, \mathbf{x}_*)^T \left(\mathbf{K}_f + \sigma_f^2 I\right)^{-1} y_{1:N}, \\
\sigma^2_{f,N}(\mathbf{x}_*) =: \sigma^2_{f,N} = k_f(\mathbf{x}_*, \mathbf{x}_*) - k_f(x_{1:N}, \mathbf{x}_*)^T \left(\mathbf{K}_f + \sigma_f^2 I\right)^{-1} k_f(x_{1:N}, \mathbf{x}_*),
\end{array} \tag{1}$$

where $k_f(x_{1:N}, \mathbf{x}_*) = (k_f(\mathbf{x}_1, \mathbf{x}_*), ..., k_f(\mathbf{x}_N, \mathbf{x}_*))^T \in \mathbb{R}^{N \times 1}$, and $\mathbf{K}_f \in \mathbb{R}^{N \times N}$ is a matrix with $[\mathbf{K}_f]_{il} = k_f(\mathbf{x}_i, \mathbf{x}_l)$.
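Equation (1) can be sketched in a few lines of NumPy. This is an illustrative stand-in (an RBF kernel with hypothetical hyperparameters), not the fitted Matérn-5/2 models used in our experiments:

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    """Stationary kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)), bounded by 1."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale**2))

def gp_posterior(x_train, y_train, x_test, noise_var=0.01, lengthscale=0.5):
    """Predictive mean and variance of Equation (1)."""
    K = rbf(x_train, x_train, lengthscale) + noise_var * np.eye(len(x_train))
    k_star = rbf(x_train, x_test, lengthscale)           # k_f(x_{1:N}, x_*)
    mu = k_star.T @ np.linalg.solve(K, y_train)          # k_*^T (K_f + s^2 I)^{-1} y
    var = np.diag(rbf(x_test, x_test, lengthscale)) \
        - np.einsum("ij,ij->j", k_star, np.linalg.solve(K, k_star))
    return mu, var
```

Far from the data, the mean reverts to the zero prior and the variance to the prior kernel value, which is exactly the behavior behind the local exploration issue analyzed below.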
Typically, $k_f$ is parameterized and can be fitted together with $\sigma^2_f$.

Safe learning: At the core of safe learning methods (Sui et al., 2015; Yanan Sui et al., 2018; Berkenkamp et al., 2020; Dominik Baumann et al., 2021) is the comparison of the safety confidence bounds with the thresholds, which defines a safe set $S_N \subseteq \mathcal{X}_{pool}$ as

$$S_N = \cap_{j=1}^{J}\left\{\mathbf{x} \in \mathcal{X}_{pool} \,\middle|\, \mu_{q_j,N}(\mathbf{x}) - \beta^{1/2}\sigma_{q_j,N}(\mathbf{x}) \geq T_j\right\}, \tag{2}$$

where $\beta \in \mathbb{R}^+$ is a parameter for probabilistic tolerance control (Sui et al., 2015; Berkenkamp et al., 2020). This definition is equivalent to $\forall \mathbf{x} \in S_N, \; p\left(q_1(\mathbf{x}) \geq T_1, ..., q_J(\mathbf{x}) \geq T_J\right) \geq (1-\alpha)^J$ when $\alpha = 1 - \Phi(\beta^{1/2})$ (Schreiter et al., 2015; Zimmer et al., 2018; Li et al., 2022). In each iteration, a new point is queried by mapping safe candidate inputs to acquisition scores:

$$\mathbf{x}_* = \operatorname*{argmax}_{\mathbf{x} \in S_N} a(\mathbf{x}|\mathcal{D}_N), \tag{3}$$

where $\mathcal{D}_N$ is the current observed dataset and $a$ is an acquisition function.

Remark 3.2. Notably, solving such a constrained optimization is challenging. In the literature (Schreiter et al., 2015; Zimmer et al., 2018; Li et al., 2022; Sui et al., 2015; Berkenkamp et al., 2020), it is solved on a discrete pool with finitely many elements, i.e., $N_{pool} := |\mathcal{X}_{pool}| < \infty$. One applies Equation (1) to the entire pool $\mathcal{X}_{pool}$ to determine the safe set, and then optimizes the acquisition scores over the safe set. In this paper, we inherit this finite discrete pool setting.

The whole learning process is summarized in Algorithm 1. In AL problems, a prominent acquisition function is the predictive entropy $a(\mathbf{x}|\mathcal{D}_N) = H_f[\mathbf{x}|\mathcal{D}_N] = \frac{1}{2}\log\left(2\pi e \sigma^2_{f,N}(\mathbf{x})\right)$ (Schreiter et al., 2015; Zimmer et al., 2018; Li et al., 2022). We use $a(\mathbf{x}|\mathcal{D}_N) = \sum_{g \in \{f,q_1,...,q_J\}} H_g[\mathbf{x}|\mathcal{D}_N]$ to accelerate the exploration of the safety models. The acquisition function can be exchanged for SafeOpt criteria in safe BO problems (Sui et al., 2015; Berkenkamp et al., 2020; Rothfuss et al., 2022).
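A minimal sketch of how Equations (2) and (3) act on a discrete pool, assuming the posterior moments of all GPs have already been evaluated on the pool (the array shapes and names here are ours, for illustration):

```python
import numpy as np

def select_query(mu_q, sigma_q, sigma_f, thresholds, beta=4.0):
    """Equation (2): keep pool points whose lower confidence bound clears every
    threshold; Equation (3): maximize the summed predictive entropy over them.
    mu_q, sigma_q: (J, N_pool) safety posteriors; sigma_f: (N_pool,) target model."""
    safe = np.all(mu_q - np.sqrt(beta) * sigma_q >= np.asarray(thresholds)[:, None],
                  axis=0)
    if not safe.any():
        raise RuntimeError("empty safe set: no point can be queried")
    entropy = 0.5 * np.log(2 * np.pi * np.e * sigma_f**2) \
        + np.sum(0.5 * np.log(2 * np.pi * np.e * sigma_q**2), axis=0)
    return int(np.argmax(np.where(safe, entropy, -np.inf)))
```

With $\beta = 4$ (the value used in our experiments), a pool point is kept only if each safety model is confident at the $2\sigma$ level.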
Algorithm 1 Sequential Learning
Require: $\mathcal{D}_{N_{init}}$, $\mathcal{X}_{pool}$, $\beta$ or $\alpha$
1: for $N = N_{init}, ..., N_{init} + num\_steps$ do
2: Fit GPs ($k_f$, $k_{q_j}$, $\sigma^2_f$, $\sigma^2_{q_j}$)
3: $\mathbf{x}_* \leftarrow \operatorname{argmax}_{\mathbf{x} \in S_N} a(\mathbf{x}|\mathcal{D}_N)$
4: Evaluate at $\mathbf{x}_*$ to get $y_*$ and $\mathbf{z}_*$
5: $\mathcal{D}_{N+1} \leftarrow \mathcal{D}_N \cup \{\mathbf{x}_*, y_*, \mathbf{z}_*\}$, $\mathcal{X}_{pool} \leftarrow \mathcal{X}_{pool} \setminus \{\mathbf{x}_*\}$
6: end for

Safe learning suffers from local exploration: In this section, we analyze the upper bound of the explorable safe regions. Commonly used stationary kernels (Assumption 3.1) measure the difference between a pair of points, while the actual point values do not matter. These kernels have the property that nearby points correlate strongly while distant points yield small kernel values. We first formalize this property as the following assumption.

Assumption 3.3. Given a kernel function $k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$, assume $\forall \delta > 0, \exists r > 0$ s.t. $\|\mathbf{x} - \mathbf{x}'\| \geq r \Rightarrow k(\mathbf{x}, \mathbf{x}') \leq \delta$ under the L2 norm.

We provide expressions for popular stationary kernels (the RBF kernel and Matérn kernels), as well as their $r$-$\delta$ relations, in Appendix B.3. In the following, we derive a theorem showing that standard kernels only allow local exploration of safe regions. The main idea is: when a point $\mathbf{x}_*$ is far away from the observations, we get a very small $\delta$ (i.e., a small covariance measured by the kernel), so the prediction at $\mathbf{x}_*$ is weakly correlated with the observations. As a result, the predictive mean is close to zero and the predictive uncertainty is large, both of which imply that the method has small safety confidence $p(q_j(\mathbf{x}_*) \geq T_j \,|\, x_{1:N}, z^j_{1:N})$. Here, we assume that $q_j \geq T_j$ is not a trivial condition; in other words, $T_j$ lies in the sensitive domain of $q_j$ (i.e., $T_j$ is not far away from zero).

Theorem 3.4 (Local exploration of single-output GPs). *We are given $\mathbf{x}_* \in \mathcal{X}$, $x_{1:N} \subseteq \mathcal{X}$, and a kernel $k_{q_j}$ satisfying Assumption 3.3 (distant points result in weak correlation) with $k_{q_j}(\cdot,\cdot) \leq 1$. Denote $k^j_{scale} := \max k_{q_j}(\cdot,\cdot)$.*
*$q_j \sim \mathcal{GP}(0, k_{q_j})$ is a GP, $z^j_{1:N} := (z^j_1, ..., z^j_N)$ is a set of observed noisy values (Assumption 2.1), and $\|(z^j_1, ..., z^j_N)\| \leq \sqrt{N}$. Then $\forall \delta \in \left(0, \sqrt{k^j_{scale}}\,\sigma_{q_j}/\sqrt{N}\right)$, $\exists r > 0$ s.t. when $\min_{\mathbf{x}_i \in x_{1:N}} \|\mathbf{x}_* - \mathbf{x}_i\| \geq r$, the probability of exceeding a constant threshold $T_j$ is bounded by*

$$p\left(q_j(\mathbf{x}_*) \geq T_j \,\middle|\, x_{1:N}, z^j_{1:N}\right) \leq \Phi\left(\frac{N\delta/\sigma^2_{q_j} - T_j}{\sqrt{k^j_{scale} - \left(\sqrt{N}\delta/\sigma_{q_j}\right)^2}}\right).$$

Our theorem (proof in Appendix B.4) provides the maximum safety probability of a point as a function of its distance to the observed data in $\mathcal{X}$; it thus bounds the explorable safe area from above. Notice that $\|z^j_{1:N}\| \leq \sqrt{N}$ is not very restrictive because a unit-variance dataset has $\|z^j_{1:N}\| = \sqrt{N}$. This theorem indicates that a standard GP with commonly used kernels explores only the neighboring regions of the initial $x_{1:N}$.

Remark 3.5. In Section 4, we will see that our new transfer safe sequential learning framework may explore beyond the neighborhood of the target $x_{1:N}$ by taking the source inputs $x_{s,1:N_{source}}$ into consideration.

In the following, we plug concrete numbers into Theorem 3.4 for illustration.

Example 3.6. We consider a one-dimensional toy dataset visualized in Figure 2. Assume $N = 10$, $\sigma^2_q = 0.01$, and $T = 0$ (we omit $j$ because $J = 1$ here); $\sigma_q/\sqrt{N}$ is roughly 0.0316. In this example, the generated data satisfy $\|z_{1:N}\| \leq \sqrt{10}$. We fit a unit-variance ($k_{scale} = \max k_q(\cdot,\cdot) = 1$) Matérn-5/2 kernel on this example and obtain a lengthscale $\approx 0.1256$. This kernel is strictly decreasing, so Assumption 3.3 is satisfied. In particular, $r = 4.485 \cdot 0.1256 = 0.563316 \Rightarrow \delta \leq 0.002$, and $\delta = 0.002 \Rightarrow \Phi\left(\frac{N\delta/\sigma^2_q - T}{\sqrt{1 - (\sqrt{N}\delta/\sigma_q)^2}}\right) \approx \Phi(2)$. When the safety tolerance is set to $\beta^{1/2} = 2$, Theorem 3.4 thus implies that safe regions farther than 0.563316 from the observed ones are always identified as unsafe and are not explorable.
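Plugging the numbers of Example 3.6 into the bound of Theorem 3.4 can be done directly; the helper below is ours, for illustration only:

```python
from math import sqrt
from statistics import NormalDist

def safety_probability_bound(N, delta, sigma, T=0.0, k_scale=1.0):
    """Upper bound of Theorem 3.4 on p(q(x*) >= T | x_{1:N}, z_{1:N}) for a point
    whose kernel covariance with every observation is at most delta."""
    numerator = N * delta / sigma**2 - T
    denominator = sqrt(k_scale - (sqrt(N) * delta / sigma) ** 2)
    return NormalDist().cdf(numerator / denominator)

# Numbers from Example 3.6: N = 10, sigma^2 = 0.01, T = 0, delta = 0.002.
bound = safety_probability_bound(N=10, delta=0.002, sigma=0.1)
```

The bound evaluates to roughly $\Phi(2) \approx 0.977$, i.e., the safety confidence can never exceed the level required by a tolerance of $\beta^{1/2} = 2$, matching the conclusion of Example 3.6; a smaller covariance $\delta$ (a more distant point) only shrinks the bound further.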
In Figure 2, the two safe regions are more than 0.7 apart, indicating that the right safe region is never explored by conventional safe learning methods. Please see Appendix B for numerical details and additional illustrations. Our probability bound $\Phi\left(\frac{N\delta/\sigma^2_q - T}{\sqrt{k^j_{scale} - (\sqrt{N}\delta/\sigma_q)^2}}\right)$ is a worst case obtained under very mild assumptions; empirically, the explorable regions found by GP models are even smaller (see Figure 2 and appendix Figure 5).

Figure 2: The safety function $q(x) = \sin(10x^3 - 5x - 10) + \frac{1}{3}x^2 - \frac{1}{2}$. The observations carry noise drawn from $\mathcal{N}(0, 0.1^2)$.

## 4 Modularized GP Transfer Learning

In the previous section, we introduced the GP safe learning technique and analyzed the local exploration problem. In this section, we present our transfer learning strategy, which aims to facilitate safe learning and to enable global exploration when properly guided by the source data.

Modeling the data with source knowledge: The idea is to extend the GPs (Assumption 3.1) to multi-output models (Journel & Huijbregts, 1976; Álvarez et al., 2012; Tighineanu et al., 2022). We let $index_s$ and $index_t$ denote source and target task index variables; for example, $index_t = 0$ can be the target task, and $index_s = 1, ...$ can index multiple source tasks. In the main paper, we consider one source task, so the task indices $index_s, index_t$ are binary. Scenarios with more source tasks are provided in Appendix E. We concatenate the source and target tasks and define $\mathbf{f} : \mathcal{X} \times index\_space \rightarrow \mathbb{R}$ and $\mathbf{q}_j : \mathcal{X} \times index\_space \rightarrow \mathbb{R}$, where $\mathbf{f}(\cdot, index_s) = f_s(\cdot)$, $\mathbf{f}(\cdot, index_t) = f(\cdot)$, $\mathbf{q}_j(\cdot, index_s) = q_{j,s}(\cdot)$, and $\mathbf{q}_j(\cdot, index_t) = q_j(\cdot)$. Please also see Table 1 for a summary of our notation. The multi-task functions can then be modeled with GPs as well.

Assumption 4.1.
$\mathbf{f} \sim \mathcal{GP}(0, k_{\mathbf{f}})$ and $\mathbf{q}_j \sim \mathcal{GP}(0, k_{\mathbf{q}_j})$ for some stationary kernels $k_{\mathbf{f}}, k_{\mathbf{q}_j} : (\mathcal{X} \times index\_space) \times (\mathcal{X} \times index\_space) \rightarrow \mathbb{R}$.

Let $\hat{x}_{s,1:N_{source}} := \{(\mathbf{x}_{s,i}, index_s) \,|\, \mathbf{x}_{s,i} \in x_{s,1:N_{source}}\}$ and $\hat{x}_{1:N} := \{(\mathbf{x}_i, index_t) \,|\, \mathbf{x}_i \in x_{1:N}\}$ denote the input data concatenated with the task indices. Then for $g \in \{\mathbf{f}, \mathbf{q}_j\}$, the predictive distribution given in Equation (1) becomes (again, we write down the distribution for $\mathbf{f}$, while the distributions for $\mathbf{q}_j$ are obtained by replacing $\mathbf{f}$ with $\mathbf{q}_j$ and $y_\cdot$ with $z^j_\cdot$):

$$\mu_{\mathbf{f},N}(\mathbf{x}_*, index_t) = v_{\mathbf{f}}^T \, \Omega_{\mathbf{f}}^{-1} \begin{pmatrix} y_{s,1:N_{source}} \\ y_{1:N} \end{pmatrix}, \qquad
\sigma^2_{\mathbf{f},N}(\mathbf{x}_*, index_t) = k_{\mathbf{f}}\left((\mathbf{x}_*, index_t),(\mathbf{x}_*, index_t)\right) - v_{\mathbf{f}}^T \, \Omega_{\mathbf{f}}^{-1} v_{\mathbf{f}}, \tag{4}$$

$$v_{\mathbf{f}} = k_{\mathbf{f}}\left(\begin{pmatrix} \hat{x}_{s,1:N_{source}} \\ \hat{x}_{1:N} \end{pmatrix},(\mathbf{x}_*, index_t)\right), \qquad
\Omega_{\mathbf{f}} = \begin{pmatrix} \mathbf{K}_{f_s} + \sigma^2_{f_s} I_{N_{source}} & \mathbf{K}_{f_s,f} \\ \mathbf{K}_{f_s,f}^T & \mathbf{K}_f + \sigma^2_f I_N \end{pmatrix},$$

where $\mathbf{K}_{f_s} = k_{\mathbf{f}}(\hat{x}_{s,1:N_{source}}, \hat{x}_{s,1:N_{source}})$, $\mathbf{K}_{f_s,f} = k_{\mathbf{f}}(\hat{x}_{s,1:N_{source}}, \hat{x}_{1:N})$, and $\mathbf{K}_f = k_{\mathbf{f}}(\hat{x}_{1:N}, \hat{x}_{1:N})$. The GP model $\mathbf{f}$ (and $\mathbf{q}_j$) is governed by the multi-task kernel $k_{\mathbf{f}}$ (and $k_{\mathbf{q}_j}$ for each safety function) and the noise parameters $\sigma^2_{f_s}, \sigma^2_f$ (and $\sigma^2_{q_{j,s}}, \sigma^2_{q_j}$), which can be fitted from observations.

Remark 4.2. In this paper, we assume all safety constraints are independent. If this is not the case, one may still model dependent safety constraints with our notation: for example, with three unknown constraints $q_1, q_2, q_3$ and corresponding sources $q_{1,s}, q_{2,s}, q_{3,s}$, Assumption 4.1 still holds if the index space is expanded to source indices $index_s = 0, 1, 2$ (specifying data of $q_{1,s}$, $q_{2,s}$, or $q_{3,s}$) and target indices $index_t = 3, 4, 5$ (specifying $q_1$, $q_2$, or $q_3$), i.e., all six functions are modeled jointly by one multi-output constraint function.

In this formulation, the covariance bound $\delta$ in Theorem 3.4 takes the source inputs $x_{s,1:N_{source}}$ into consideration. Thus, compared to modeling the target task alone, incorporating a source task can significantly enlarge the area with high safety confidence (i.e., the region not bounded by Theorem 3.4).
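A sketch of Equation (4) with a hierarchical multi-task kernel $k((\mathbf{x}, i),(\mathbf{x}', i')) = k_s(\mathbf{x}, \mathbf{x}') + \mathbb{1}[i = i' = index_t]\,k_t(\mathbf{x}, \mathbf{x}')$; the RBF base kernels and all hyperparameters below are illustrative stand-ins for the fitted Matérn-5/2 models:

```python
import numpy as np

def k_base(a, b, ls):  # stationary base kernel (illustrative RBF, unit variance)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls**2))

def multitask_posterior(xs, ys, xt, yt, x_star, ls_s=0.5, ls_t=0.3,
                        noise_s=0.01, noise_t=0.01):
    """Predictive mean/variance of Equation (4) at a target-task test point."""
    K_ss = k_base(xs, xs, ls_s) + noise_s * np.eye(len(xs))   # source block
    K_st = k_base(xs, xt, ls_s)                               # cross block
    K_tt = k_base(xt, xt, ls_s) + k_base(xt, xt, ls_t) \
        + noise_t * np.eye(len(xt))                           # target block
    Omega = np.block([[K_ss, K_st], [K_st.T, K_tt]])
    v = np.concatenate([k_base(xs, x_star, ls_s)[:, 0],
                        k_base(xt, x_star, ls_s)[:, 0]
                        + k_base(xt, x_star, ls_t)[:, 0]])
    y = np.concatenate([ys, yt])
    mu = v @ np.linalg.solve(Omega, y)
    k_star = 1.0 + 1.0        # k_s(x*, x*) + k_t(x*, x*) for unit-variance kernels
    var = k_star - v @ np.linalg.solve(Omega, v)
    return mu, var
```

A target-task test point far from the target data but close to source data inherits the source prediction through the $k_s$ block; the residual $k_t$ variance remains, but the safety confidence can be much higher than with a single-output GP.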
We show empirically in Section 5 that global exploration is indeed easier to achieve with appropriate $x_{s,1:N_{source}}$.

In-experiment speed-up via source pre-computation: Computing $\Omega_{\mathbf{f}}^{-1}$ (and $\Omega_{\mathbf{q}_j}^{-1}$) has cubic time complexity $O\left((N_{source} + N)^3\right)$ for $N = N_{init}, ..., N_{init} + num\_steps$. This computation is also required for fitting the models: common fitting techniques include Type II ML, Type II MAP, and the Bayesian treatment (Snoek et al., 2012; Riis et al., 2022) over kernel and noise parameters (Rasmussen & Williams, 2006), all of which involve computing the marginal likelihoods $\mathcal{N}\left(\begin{pmatrix} y_{s,1:N_{source}} \\ y_{1:N} \end{pmatrix} \middle|\, \mathbf{0}, \Omega_{\mathbf{f}}\right)$ and $\mathcal{N}\left(\begin{pmatrix} z^j_{s,1:N_{source}} \\ z^j_{1:N} \end{pmatrix} \middle|\, \mathbf{0}, \Omega_{\mathbf{q}_j}\right)$. In this paper, the Bayesian treatment is not considered because MC sampling is time consuming.

The goal now is to avoid calculating $\Omega_{\mathbf{f}}^{-1}$ and $\Omega_{\mathbf{q}_j}^{-1}$ repeatedly in the experiments. For brevity, we describe how we do this for $\Omega_{\mathbf{f}}^{-1}$; the same principle applies to $\Omega_{\mathbf{q}_j}^{-1}$.

Algorithm 2 Modularized SL
Require: $\mathcal{D}^{source}_{N_{source}}$, $\mathcal{D}_{N_{init}}$, $\mathcal{X}_{pool}$, $\beta$ or $\alpha$
1: Fit GPs and then fix $\theta_{f_s}$, $\theta_{q_{j,s}}$, $\sigma_{f_s}$, $\sigma_{q_{j,s}}$
2: Compute and fix $L_{f_s}$, $L_{q_{j,s}}$
3: for $N = N_{init}, ..., N_{init} + num\_steps$ do
4: Fit GPs (remaining parameters $\theta_f$, $\theta_{q_j}$, $\sigma_f$, $\sigma_{q_j}$)
5: $\mathbf{x}_* \leftarrow \operatorname{argmax}_{\mathbf{x} \in S_N} a(\mathbf{x}|\mathcal{D}_N)$
6: Evaluate at $\mathbf{x}_*$ to get $y_*$ and $\mathbf{z}_*$
7: $\mathcal{D}_{N+1} \leftarrow \mathcal{D}_N \cup \{\mathbf{x}_*, y_*, \mathbf{z}_*\}$, $\mathcal{X}_{pool} \leftarrow \mathcal{X}_{pool} \setminus \{\mathbf{x}_*\}$
8: end for

For GP models, the inversion is achieved by performing a Cholesky decomposition $L(\Omega_{\mathbf{f}})$, i.e., $\Omega_{\mathbf{f}} = L(\Omega_{\mathbf{f}})L(\Omega_{\mathbf{f}})^T$, where $L(\Omega_{\mathbf{f}})$ is a lower triangular matrix (Rasmussen & Williams, 2006); then, for any matrix $C$, $L(\Omega_{\mathbf{f}})^{-1}C$ is computed by solving a linear system. We propose to perform the Cholesky decomposition in two steps, as described below.
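The two-step decomposition can be sketched directly in NumPy: the source factor is computed once up front, and the joint factor is completed per iteration. Function names are ours; this is a sketch under the assumption that the source block of $\Omega_{\mathbf{f}}$ stays fixed:

```python
import numpy as np

def complete_cholesky(L_s, B, C):
    """Given L_s = chol(A) for the fixed source block of Omega = [[A, B], [B^T, C]],
    complete the lower-triangular Cholesky factor of Omega.
    Cost: O(N_source^2 N) for L21, plus O(N_source N^2) + O(N^3) for the Schur part."""
    L21 = np.linalg.solve(L_s, B).T              # B^T L_s^{-T}
    L22 = np.linalg.cholesky(C - L21 @ L21.T)    # chol of the Schur complement
    zeros = np.zeros((L_s.shape[0], C.shape[0]))
    return np.block([[L_s, zeros], [L21, L22]])

# Sanity check on a random positive-definite joint matrix.
rng = np.random.default_rng(0)
n_s, n_t = 5, 3
R = rng.standard_normal((n_s + n_t, n_s + n_t))
Omega = R @ R.T + (n_s + n_t) * np.eye(n_s + n_t)
L_s = np.linalg.cholesky(Omega[:n_s, :n_s])      # done once, before the experiment
L = complete_cholesky(L_s, Omega[:n_s, n_s:], Omega[n_s:, n_s:])
```

Because the top-left block is fixed, only the off-diagonal block and the Schur-complement factor have to be recomputed when the target data or the target-relevant parameters change.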
The aim here is to compute part of $L(\Omega_{\mathbf{f}})$ beforehand. The key idea is to cluster the parameters of $k_{\mathbf{f}}$ into $\theta_{\mathbf{f}} = (\theta_{f_s}, \theta_f)$, where the source part $k_{\mathbf{f}}((\cdot, index_s),(\cdot, index_s))$ is independent of $\theta_f$. Then, as $x_{s,1:N_{source}}$ is invariant, $\mathbf{K}_{f_s}$ depends only on $\theta_{f_s}$. Given that the source task is well explored, the source likelihood $p(y_{s,1:N_{source}} \,|\, x_{s,1:N_{source}}) = \mathcal{N}(y_{s,1:N_{source}} \,|\, \mathbf{0}, \mathbf{K}_{f_s} + \sigma^2_{f_s} I_{N_{source}})$ can barely be increased while we explore the target task. We therefore keep $\mathbf{K}_{f_s}$ (i.e., $\theta_{f_s}$) and $\sigma^2_{f_s}$ fixed during the experiments. This allows us to isolate the source-relevant computations, as the source-relevant block (the top-left block) of $L(\Omega_{\mathbf{f}})$ is also fixed. We can then prepare a safe learning experiment with a pre-computed $L_{f_s} = L(\mathbf{K}_{f_s} + \sigma^2_{f_s} I_{N_{source}})$. The same procedure applies to each $\mathbf{q}_j$. The learning procedure is summarized in Algorithm 2. In each iteration (line 4 of Algorithm 2), the time complexity becomes $O(N^2_{source}N) + O(N_{source}N^2) + O(N^3)$ instead of $O\left((N_{source} + N)^3\right)$. We provide mathematical details in Appendix C. Our technique can be applied to any multi-output kernel because the clustering $\theta_{\mathbf{f}} = (\theta_{f_s}, \theta_f)$ does not require $k_{\mathbf{f}}((\cdot, index_s),(\cdot, index_t))$ and $k_{\mathbf{f}}((\cdot, index_t),(\cdot, index_t))$ to be independent of $\theta_{f_s}$. The same principle applies to $\mathbf{q}_j$.

Kernel selection: In the following, we briefly review existing multi-output GP models and motivate the selection of the model we use later in our experiments. Here, each $g \in \{\mathbf{f}, \mathbf{q}_1, ..., \mathbf{q}_J\}$ is a multi-output GP correlating source and target tasks. The task indices are binary: $index_s = 0$ is the source and $index_t = 1$ is the target. A widely investigated multi-output framework is the linear model of coregionalization (LMC):

$$k_g = \sum_l \begin{pmatrix} W^2_{l,s} + \kappa_s & W_{l,s}W_{l,t} \\ W_{l,s}W_{l,t} & W^2_{l,t} + \kappa \end{pmatrix} \otimes k_l(\cdot, \cdot),$$

i.e.,
a $2 \times 2$ matrix specified by the task indices, where $k_l(\cdot,\cdot)$ is a standard kernel as in Assumption 3.1, and $(W_l W_l^T + \mathrm{diag}(\kappa_s, \kappa))$ learns the task correlation induced by the $l$-th latent function (Álvarez et al., 2012). Here, each $g$ has its own kernel, but we omit $g$ in the parameter subscripts for brevity. When pairing this kernel with our Algorithm 2, we observe that the training can become unstable due to multiple local optima in the first phase (line 1 of Algorithm 2). This may be because LMC learns joint patterns from all present tasks. In Poloczek et al. (2017); Marco et al. (2017); Tighineanu et al. (2022), the authors consider a hierarchical GP (HGP):

$$k_g = \begin{pmatrix} k_s(\cdot,\cdot) & k_s(\cdot,\cdot) \\ k_s(\cdot,\cdot) & k_s(\cdot,\cdot) + k_t(\cdot,\cdot) \end{pmatrix}.$$

Again, each $g$ has its own kernel, but we omit $g$ in the parameter subscripts for brevity. HGP is a variant of LMC in which the target task is treated as a sum of the source (modeled by $k_s$) and the target-source residual (modeled by $k_t$). This formulation has the benefit that the fitting of the source ($k_s$) and of the residual ($k_t$) are separated, which makes HGP a good model to run Algorithm 2 (set $\theta_{g_s}$ to the parameters of $k_s$ and $\theta_g$ to the parameters of $k_t$). In Tighineanu et al. (2022), the authors derived an ensembling technique that also allows for source pre-computation. Their technique is equivalent to our method when we use HGP, but our approach can be generalized to any multi-output kernel (with the implicit restriction that a source fit of the chosen model needs to be accurate), while the ensembling technique is limited to HGP.

Figure 3: Safe AL experiments on three benchmark datasets. GP data: $f$ and safety function $q \geq 0$ over $\mathcal{X} = [-2, 2]^D$, $D = 1$ ($N_{source} = 50$, $N_{init} = 10$, 50 data points are queried) or $D = 2$ ($N_{source} = 250$, $N_{init} = 20$, 100 data points are queried). Branin data: constraint $q = f \geq 0$ (Section 5.1), $N_{source} = 100$, $N_{init} = 20$, 100 data points are queried.
The results are mean and one standard error over 100 (GP data) or 25 (Branin data) experiments. The test points for the RMSEs are sampled from the entire true safe area, including regions that individual methods (e.g., SAL) may fail to explore. Note that FullTransLMC has more than ten model parameters, while in the GP1D dataset we start with $N = 10$. The TP/FP safe areas are reported as portions of the input space. The ground-truth safe area portion of each dataset is marked in black in the second column. Please also see appendix Figure 10 for the fitting time and the region cluster of each query.

In our experiments, we run Algorithm 2 with HGP as our main pipeline, and Algorithm 1 with LMC (more flexible in learning yet slow) and with HGP as full transfer scenarios. The base kernels $k_s$, $k_t$, $k_l$ are all Matérn-5/2 kernels with $D$ lengthscale parameters ($\mathcal{X} \subseteq \mathbb{R}^D$). The scaling variance of $k_l$ is fixed to 1 because it can be absorbed into the output-covariance terms (see above). One can of course change the base kernel as long as it is suitable for the application. Although we do not pair Algorithm 2 with LMC, as discussed above, our modularized computation scheme can still benefit general LMC in closely related settings, e.g., (i) datasets in which more than one source task is available, or (ii) sequential learning schemes that only refit the GPs after receiving a batch of query points.

## 5 Experiments

In this section, we perform safe AL experiments to answer the following questions: 1) do multi-output GPs facilitate the learning of disconnected safe regions, 2) is it more data efficient to learn with transfer safe learning than with a conventional method, and 3) how does the runtime of our modularized approach compare with the baselines? We compare five experimental setups: 1) EffTransHGP: Algorithm 2 with multi-output HGP, 2) FullTransHGP: Algorithm 1 with multi-output HGP, 3) FullTransLMC: Algorithm 1 with multi-output LMC, 4) Rothfuss et al.
2022: a GP model meta-learned from the source data by applying Rothfuss et al. (2022), and 5) SAL: the conventional Algorithm 1 with single-output GPs and a Matérn-5/2 kernel.

Figure 4: Safe AL experiments on the Hartmann3 and engine modeling problems. Hartmann3: $N_{source} = 100$, $N$ ranges from 20 to 120, results are mean and one standard error over 25 experiments. Engine: $N_{source} = 500$, $N$ ranges from 20 to 120, results are mean and one standard error over 5 repetitions.

| methods | GP1D+z | GP2D+z | Branin |
|---|---|---|---|
| $num\_steps$ | 50 | 100 | 100 |
| EffTransHGP | 1.79 ± 0.07 | 2.77 ± 0.13 | 2 ± 0 |
| FullTransHGP | 1.78 ± 0.07 | 3 ± 0.14 | 2 ± 0 |
| FullTransLMC | 1.78 ± 0.08 | 2.68 ± 0.14 | 2 ± 0 |
| Rothfuss2022 | 1.22 ± 0.05 | 1.07 ± 0.03 | 1 ± 0 |
| SAL | 1 ± 0 | 1.29 ± 0.09 | 1 ± 0 |

Table 2: Number of discovered regions. Transfer learning discovers multiple disjoint safe regions, while the baselines stick to the neighborhood of the initial region. In appendix Figure 10, we track the number of explored regions per iteration.

For the safety tolerance, we always fix $\beta = 4$, i.e., $\alpha = 1 - \Phi(\beta^{1/2}) = 0.02275$ (Equation (2)), implying that each fitted GP safety model allows a 2.275% unsafe tolerance when inferring the safe set of Equation (2). Notice that with Rothfuss et al. (2022), the GP model parameters are trained up-front and remain fixed during the experiments. Rothfuss et al. (2022) considered safe BO problems; we change the acquisition function to entropy so that it becomes a safe AL method. Our code will be published on GitHub.

Test problems with tractable safe regions: We start with simple simulated problems of input dimension $D = 1$ or $D = 2$ (GP1D, GP2D, Branin problems). In such cases, it is analytically and computationally possible to cluster the disconnected safe regions via connected component labeling (CCL) algorithms (He et al., 2017). This means that, in each iteration of the experiments, we track which safe region each observation belongs to (Table 2).
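For a one-dimensional pool, the region counting of Table 2 reduces to counting contiguous runs of safe points. The toy stand-in below (our own, for illustration) captures the idea behind the CCL clustering of He et al. (2017):

```python
import numpy as np

def count_safe_regions_1d(safe_mask):
    """Count disjoint runs of safe points on a 1-D grid: a minimal stand-in for
    the connected component labeling used to produce Table 2."""
    safe = np.asarray(safe_mask, dtype=bool)
    # a region starts at every safe point that is not preceded by a safe point
    prev = np.concatenate([[False], safe[:-1]])
    return int(np.sum(safe & ~prev))
```

In higher dimensions, the same run-based idea generalizes to labeling connected components of the safe mask on the grid.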
In these initial experiments, we generate one source dataset and one target dataset such that the target task has at least two disjoint safe regions, each of which has a portion that is also safe in the source problem. This design is due to the selection of our kernels: our base kernel, the Matérn-5/2 kernel, correlates points by closeness, and LMC and HGP rescale the Matérn-5/2 kernel measures for different tasks, which means patterns of the same area of the space are transferred. Modeling more complicated transfer patterns, e.g., correlations in an abstract feature space, may require a careful selection of an appropriate base kernel (see e.g. Bitzer et al. (2022)). In the main experiments, $N_{source}$ (the number of source data points) is fixed for each problem. In Appendix E, we provide ablation studies on the Branin dataset, where we vary the number of source data points and the number of source tasks.

| methods | GP1D+z | GP2D+z | Branin | Hartmann3 | Engine |
|---|---|---|---|---|---|
| $(N_{source}, N)$ | (100, 10 + 50) | (250, 20 + 100) | (100, 20 + 100) | (100, 20 + 100) | (500, 20 + 100) |
| EffTransHGP | 8.947 ± 0.198 | 10.73 ± 0.190 | 3.754 ± 0.121 | 3.662 ± 0.089 | 9.596 ± 0.418 |
| FullTransHGP | 9.171 ± 0.133 | 39.31 ± 0.639 | 8.129 ± 0.267 | 9.092 ± 0.467 | 124.99 ± 5.608 |
| FullTransLMC | 26.56 ± 0.628 | 202.8 ± 12.43 | 21.16 ± 1.207 | 34.43 ± 1.664 | 615.7 ± 27.99 |
| Rothfuss2022 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0 ± 0.0 |
| SAL | 6.881 ± 0.083 | 8.044 ± 0.142 | 4.691 ± 0.078 | 4.073 ± 0.083 | 4.686 ± 0.243 |

Table 3: Training time. The training time (s) of $\mathbf{f}$ and $\mathbf{q}$ (if $q$ is not $f$) at the last iteration.

General test problem and real-world problem: We first consider a problem of higher dimension: Hartmann3 ($D = 3$). Here, it becomes computationally intractable to cluster the safe regions.
Thus, (i) the source task, the source data, and the initial target data are all sampled randomly (in contrast to GP1D, GP2D, and Branin, where we focus on problems with disjoint safe regions), and (ii) the CCL algorithm, i.e., the safe region clustering, is not performed during the AL experiments. Secondly, an engine modeling problem is considered, where we transfer from one engine dataset to another. This problem (i) has noisy grid values interpolated from raw measurements, which makes the CCL algorithm inaccurate, and (ii) the safe set of this target task is not clearly separated into multiple disjoint regions. Therefore, the CCL algorithm is not performed during these experiments.

Metrics: The learning result for $f$ is shown as the RMSE between the GP mean prediction and test $y$ sampled from the true safe regions. To measure the performance on $q$, we use the area of $S_N$ (Equation (2)), as this indicates the explorable coverage of the space. In particular, we look at the area of $S_N \cap S_{true}$ (true positive or TP area, the larger the better) and of $S_N \cap (\mathcal{X} \setminus S_{true})$ (false positive or FP area, the smaller the better). Here, $S_{true} \subseteq \mathcal{X}_{pool}$ is the set of truly safe candidate inputs, which is available because our datasets in the experiments are prepared from executed queries. With the GP1D, GP2D, and Branin data, CCL (He et al., 2017) is performed to cluster which safe region each query belongs to (Table 2).

## 5.1 AL On Problems With Tractable Safe Regions

Datasets: We adapt Algorithm 1 of Kanagawa et al. (2018) to generate multi-output GP samples. The first output is treated as our source task and the second output as the target task. We have one main function $f$ and an additional safety function $q$. Numerical details and example datasets are plotted in Appendix D. We generate 10 datasets and repeat the AL experiments five times for each dataset. For the Branin data, we take the numerical setting from Rothfuss et al. (2022); Tighineanu et al. (2022) to generate five different datasets.
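The TP/FP metrics described above reduce to simple set fractions on a discrete pool; a minimal sketch with our own (hypothetical) boolean-mask convention:

```python
import numpy as np

def tp_fp_area(safe_pred, safe_true):
    """TP area = |S_N ∩ S_true| / |pool|; FP area = |S_N ∩ (pool \\ S_true)| / |pool|,
    both reported as portions of the candidate pool."""
    safe_pred = np.asarray(safe_pred, dtype=bool)
    safe_true = np.asarray(safe_true, dtype=bool)
    tp = float(np.mean(safe_pred & safe_true))
    fp = float(np.mean(safe_pred & ~safe_true))
    return tp, fp
```

A well-calibrated safety model drives the TP portion toward the true safe area portion while keeping the FP portion near zero.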
With each dataset, we repeat the experiments five times.

Result: In Figure 3, we show the results on the GP1D, GP2D, and Branin data. We see that the EffTransHGP, FullTransHGP, and FullTransLMC experiments achieve accurate and much larger safe set coverage (larger TP area and small FP area). In addition, the learning of $f$ is more efficient with EffTransHGP, FullTransHGP, and FullTransLMC, as the RMSE drops faster compared to the baseline methods. Note that the test points are sampled from the entire true safe area, including the part the baseline SAL fails to explore; it is thus not guaranteed that the RMSE of SAL decreases monotonically (Branin). We observe from the experiments that the meta learning approach, Rothfuss et al. 2022, fails to generalize to a larger area, which might be due to a lack of target-task representativeness (one source task is very little for meta learning) and/or a lack of data quantity. In Table 2, we count the number of safe regions explored by the queries, which confirms the ability to explore disjoint safe regions. One remark is that the Branin function is smooth and has two clear safe regions, while considerable stochasticity exists in the GP data, where a varying number of small or large safe regions may be scattered across the space. Table 3 shows the model fitting time, confirming that EffTransHGP has a time complexity comparable to the baseline SAL, as opposed to FullTransHGP and FullTransLMC. We provide additional ratios of safe queries in appendix Table 4 as a sanity check that the methods are indeed safe. Please note that the learning flexibility is FullTransLMC > FullTransHGP > EffTransHGP, and our experimental results are consistent with this intuition (the RMSE of FullTransLMC on the 1D data is worse because we start with 10 data points, which is fewer than the number of LMC parameters, Figure 3).

## 5.2 AL On General Test Problem And Real-World Problem

Hartmann problem: We take the numerical setting from Rothfuss et al. (2022); Tighineanu et al.
(2022) to generate five different Hartmann3 datasets. With each dataset, we repeat the experiments five times. Please see Appendix D.2 for details. In this experiment, EffTransHGP, FullTransLMC, and FullTransHGP provide much smaller RMSEs and a larger safe area (Figure 4).

Engine datasets: We have two datasets, measured from the same engine prototype under different conditions. Both datasets measure the temperature, roughness, emission HC, and emission NOx. The raw data were obtained by operating an engine and its measurement equipment. We perform independent AL experiments to learn roughness (Figure 4) and temperature (shown in appendix Figure 11), both constrained by the normalized temperature values $q \leq 1.0$. The safe set covers around 0.5293 of the entire space. The datasets have two free variables and two contextual inputs that are supposed to be fixed. The contextual inputs are recorded with noise, so we interpolate the values with a multi-output GP simulator trained on the full datasets; this experiment is thus performed under a semi-simulated condition. Details are given in Appendix D.2. The safe set of this target task is not clearly separated into multiple disjoint regions, so the conventional method can eventually identify most of the safe area. Nevertheless, we still see much better RMSEs and much less data consumption for large safe set coverage (Figure 4). We also observe that Rothfuss et al. 2022 fails to generalize the meta-learned source knowledge to exploration of the entire target space.

## 6 Conclusion

We propose transfer safe sequential learning to facilitate real experiments. We demonstrate its pronounced acceleration of learning, visible in a faster drop of the RMSE and a larger safe set coverage. At the same time, our modularized multi-output modeling 1) retains the potential of performing global GP safe learning and 2) alleviates the cubic complexity in the experiments, leading to a considerable reduction of time complexity.
Limitations: Our modularized method is in theory compatible with any multi-output kernel, in contrast to the ensemble technique in Tighineanu et al. (2022), which is only valid for a specific kernel. However, one limitation of source pre-computation is that it requires fixing the source-relevant hyperparameters correctly using source data alone (e.g., HGP is a good candidate due to its separable source-target structure, while LMC, which learns joint patterns of tasks, cannot be fixed correctly with only source data). Another limitation is that the benefit of transfer learning relies on multi-task correlation. This means transfer learning will not be helpful when the correlation is absent, or when the source data are not present in our target safe area. Modeling with a more complicated base kernel (we use the Matérn-5/2 kernel) may enable more sophisticated multi-task correlations, but this is beyond the scope of this paper (see e.g. Bitzer et al. (2022) for kernel selection).

## Acknowledgements

## References

Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for vector-valued functions: a review. *arXiv*, 2012.

Felix Berkenkamp, Angela P. Schoellig, and Andreas Krause. Safe controller optimization for quadrotors with gaussian processes. *International Conference on Robotics and Automation*, 2016.

Felix Berkenkamp, Andreas Krause, and Angela P. Schoellig. Bayesian optimization with safety constraints: Safe and automatic parameter tuning in robotics. *Machine Learning*, 2020.

Matthias Bitzer, Mona Meister, and Christoph Zimmer. Structural kernel search via bayesian optimization and symbolical optimal transport. *Advances in Neural Information Processing Systems*, 2022.

Eric Brochu, Vlad M. Cora, and Nando de Freitas. A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. *arXiv*, 2010.

Kathryn Chaloner and Isabella Verdinelli. Bayesian experimental design: A review.
*Statistical Science*, 1995.

Dominik Baumann, Alonso Marco, Matteo Turchetta, and Sebastian Trimpe. GoSafe: Globally Optimal Safe Robot Learning. *IEEE International Conference on Robotics and Automation*, 2021.

Javier García and Fernando Fernández. A comprehensive survey on safe reinforcement learning. *Journal of Machine Learning Research*, 2015.

Michael A. Gelbart, Jasper Snoek, and Ryan P. Adams. Bayesian optimization with unknown constraints. *Conference on Uncertainty in Artificial Intelligence*, 2014.

Susan Harkema, Yury Gerasimenko, Jonathan Hodes, Joel Burdick, Claudia Angeli, Yangsheng Chen, Christie Ferreira, Andrea Willhite, Enrico Rejc, Robert G Grossman, and V Reggie Edgerton. Effect of epidural stimulation of the lumbosacral spinal cord on voluntary movement, standing, and assisted stepping after motor complete paraplegia: a case study. *The Lancet*, 2011.

Lifeng He, Xiwei Ren, Qihang Gao, Xiao Zhao, Bin Yao, and Yuyan Chao. The connected-component labeling problem: A review of state-of-the-art algorithms. *Pattern Recognition*, 2017.

Jose Miguel Hernandez-Lobato, Michael Gelbart, Matthew Hoffman, Ryan Adams, and Zoubin Ghahramani. Predictive entropy search for bayesian optimization with unknown constraints. *International Conference on Machine Learning*, 2015.

José Miguel Hernández-Lobato, Michael A. Gelbart, Ryan P. Adams, Matthew W. Hoffman, and Zoubin Ghahramani. A general framework for constrained bayesian optimization using information-based search. *Journal of Machine Learning Research*, 2016.

A. G. Journel and C. J. Huijbregts. Mining geostatistics. *Academic Press London*, 1976.

M. Kanagawa, P. Hennig, D. Sejdinovic, and B. K. Sriperumbudur. Gaussian processes and kernel methods: A review on connections and equivalences. *arXiv*, 2018.

Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies. *Journal of Machine Learning Research*, 2008.
Punit Kumar and Atul Gupta. Active learning query strategies for classification, regression, and clustering: A survey. *Journal of Computer Science and Technology*, 2020.

Armin Lederer, Jonas Umlauft, and Sandra Hirche. Posterior variance analysis of gaussian processes with application to average learning curves. *arXiv*, 2019.

Cen-You Li, Barbara Rakitsch, and Christoph Zimmer. Safe active learning for multi-output gaussian processes. *International Conference on Artificial Intelligence and Statistics*, 2022.

Shibo Li, Wei Xing, Robert Kirby, and Shandian Zhe. Multi-fidelity bayesian optimization via deep neural networks. *Advances in Neural Information Processing Systems*, 2020.

D. V. Lindley. On a Measure of the Information Provided by an Experiment. *The Annals of Mathematical Statistics*, 1956.

Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P. Schoellig, Andreas Krause, Stefan Schaal, and Sebastian Trimpe. Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with bayesian optimization. *IEEE International Conference on Robotics and Automation*, 2017.

Matteo Turchetta, Felix Berkenkamp, and Andreas Krause. Safe Exploration for Interactive Machine Learning. *Advances in Neural Information Processing Systems*, 2019.

Matthias Poloczek, Jialei Wang, and Peter Frazier. Multi-information source optimization. *Advances in Neural Information Processing Systems*, 2017.

CE. Rasmussen and CKI. Williams. Gaussian processes for machine learning. *MIT Press*, 2006.

Christoffer Riis, Francisco Antunes, Frederik Boe Hüttel, Carlos Lima Azevedo, and Francisco Câmara Pereira. Bayesian active learning with fully bayesian gaussian processes. *Advances in Neural Information Processing Systems*, 2022.

Jonas Rothfuss, Christopher Koenig, Alisa Rupenyan, and Andreas Krause. Meta-Learning Priors for Safe Bayesian Optimization. *6th Annual Conference on Robot Learning*, 2022.

Bernhard Schoelkopf and Alexander J. Smola.
Learning with kernels: Support vector machines, regularization, optimization, and beyond. *MIT Press*, 2002.

Jens Schreiter, Duy Nguyen-Tuong, Mona Eberts, Bastian Bischoff, Heiner Markert, and Marc Toussaint. Safe exploration for active learning with gaussian processes. *Machine Learning and Knowledge Discovery in Databases*, 2015.

Yaroslav D. Sergeyev, Antonio Candelieri, Dmitri E. Kvasov, and Riccardo Perego. Safe global optimization of expensive noisy black-box functions in the δ-Lipschitz framework. *Soft Computing*, 2020.

B. W. Silverman. Spline smoothing: The equivalent variable kernel method. *Annals of Statistics*, 1984.

Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. *Advances in Neural Information Processing Systems*, 2012.

Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. Information-theoretic regret bounds for gaussian process optimization in the bandit setting. *IEEE Transactions on Information Theory*, 2012.

Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with gaussian processes. *International Conference on Machine Learning*, 2015.

Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task bayesian optimization. *Advances in Neural Information Processing Systems*, 2013.

Petru Tighineanu, Kathrin Skubch, Paul Baireuther, Attila Reiss, Felix Berkenkamp, and Julia Vinogradska. Transfer learning with gaussian processes for bayesian optimization. *International Conference on Artificial Intelligence and Statistics*, 2022.

Yanan Sui, Vincent Zhuang, Joel W. Burdick, and Yisong Yue. Stagewise Safe Bayesian Optimization with Gaussian Processes. *International Conference on Machine Learning*, 80, 2018.

Yehong Zhang, Trong Nghia Hoang, Kian Hsiang Low, and Mohan Kankanhalli. Near-optimal active learning of multi-output gaussian processes. *AAAI Conference on Artificial Intelligence*, 2016.
Christoph Zimmer, Mona Meister, and Duy Nguyen-Tuong. Safe active learning for time-series modeling with gaussian processes. *Advances in Neural Information Processing Systems*, 2018.

## A Appendix Overview

Appendix B provides detailed analysis and illustrations of our main theorem. In Appendix C, we demonstrate the math of our source pre-computation technique. Appendix D contains the experiment details and Appendix E the ablation studies, additional plots and tables.

## B GPs With Classical Stationary Kernels Cannot Jump Through An Unsafe Valley

## B.1 Bound Of Explorable Region Of Safe Learning Methods

In our main script, we provide a bound of the safety probability. The theorem is restated here.

Theorem 3.3. We are given $\forall \mathbf{x}_* \in \mathcal{X}$, $\mathbf{x}_{1:N} \subseteq \mathcal{X}$, a kernel $k_{q_j}$ satisfying Assumption 3.3 and $k_{q_j}(\cdot, \cdot) \leq 1$. Denote $k^j_{scale} := \max k_{q_j}(\cdot, \cdot)$. $q_j \sim \mathcal{GP}(0, k_{q_j})$ is a GP, $z^j_{1:N} := (z^j_1, \ldots, z^j_N)$ is a set of observed noisy values (Assumption 2.1), and $\|(z^j_1, \ldots, z^j_N)\| \leq \sqrt{N}$. Then $\forall \delta \in \left(0, \sqrt{k^j_{scale}}\,\sigma_{q_j}/\sqrt{N}\right)$, $\exists r > 0$ s.t. when $\min_{\mathbf{x}_i \in \mathbf{x}_{1:N}} \|\mathbf{x}_* - \mathbf{x}_i\| \geq r$, the probability thresholded on a constant $T_j$ is bounded by

$$p\left((q_j(\mathbf{x}_*) \geq T_j)\,|\,\mathbf{x}_{1:N}, z^j_{1:N}\right) \leq \Phi\left(\frac{N\delta/\sigma^2_{q_j} - T_j}{\sqrt{k^j_{scale} - (\sqrt{N}\delta/\sigma_{q_j})^2}}\right).$$

In this section, we illustrate a concrete example of our theorem, where conventional methods cannot explore the entire safe set in the space. Then we provide the proof of this theorem.

## B.2 Single-Output GP Does Not Reach Disconnected Safe Region

We plug some exact numbers into the probability bound. Consider a one-dimensional situation as in Figure 2 and Figure 5. We omit j because J = 1 here. Assume

1. $N = 10$,
2. $\sigma^2_q = 0.01$,
3. $T = 0$ (notice $z^j_{1:N}$ is normalized to 0-mean and unit-variance).

In this example, the generated data have $\|z_{1:N}\| \leq \sqrt{N}$ (see Figure 2 for the rough functional values). Notice also that $\sigma_q/\sqrt{N}$ is around 0.0316. We fix $k_{scale} := \max k_q(\cdot, \cdot) = 1$ (the surrogate model in Figure 2).
Then our theoretical bound of the safety probability is

$$\Phi\left(\frac{N\delta/\sigma^2 - T}{\sqrt{1 - (\sqrt{N}\delta/\sigma)^2}}\right) = \Phi\left(\frac{1000\delta}{\sqrt{1 - 1000\delta^2}}\right).$$

In our main script, $\mathbf{x}_*$ is unsafe if $p\left((q_j(\mathbf{x}_*) \geq T_j)\,|\,\mathbf{x}_{1:N}, z^j_{1:N}\right) < 1 - \Phi(-\beta^{1/2}) = \Phi(\beta^{1/2})$. We set the safety tolerance to $\beta^{1/2} = 2$. The decision boundary of our theorem, $\frac{1000\delta}{\sqrt{1 - 1000\delta^2}} = 2$, means $\delta \approx 0.002$. From Appendix B.3 we see that $\|x - x'\| \geq 4.485 \Rightarrow \delta \leq 0.002$ for a unit-lengthscale Matérn-5/2 kernel. With a lengthscale parameter l, this becomes $\frac{\|x - x'\|}{l} \geq 4.485 \Leftrightarrow \|x - x'\| \geq 4.485\,l$. Therefore, $\delta \leq 0.002$ if $\|x - x'\| \geq 4.485\,l$. The GP model trained on this example has lengthscale $\approx 0.1256$ (the surrogate model in Figure 2 and in the left of Figure 5), so points that are at least $4.485 \cdot 0.1256 \approx 0.5633$ away from the observations are always identified as unsafe. Thus, the safe region on the right is never inferred as safe and is not explored with a conventional single-output GP model (Figure 5, left), because the distance between the two disjoint safe regions is around 0.7. We also show empirically that a multi-output GP model transfers safety confidence from a source task and identifies the safe region $S_{sub2}$ (Figure 5, right).

## B.3 r-δ Relation For Commonly Used Kernels

Our main theorem considers kernels satisfying Assumption 3.2, which is restated here:

![15_image_0.png](15_image_0.png)

Figure 5: The safety function $q(x) = \sin(10x^3 - 5x - 10) + \frac{1}{3}x^2 - \frac{1}{2}$. The safety threshold is set to T = 0. The observations carry noise drawn from $\mathcal{N}(0, 0.01)$. Left: a GP with Matérn-5/2 kernel (lengthscale ≈ 0.1256) is shown. The red lines indicate the largest observed x and the closest safe point of another region. The gap between the red lines is close to 0.7, which is beyond the explorable region of conventional safe learning methods. Right: the multi-output model uses an LMC kernel with 2 latent Matérn-5/2 kernels (Álvarez et al., 2012). Additional noisy data from the function $q_s(x) = \sin(10x^3 - 5x - 10) + \sin(x^2) - \frac{1}{2}$ are provided (yellow).
$S_{sub1}$ and $S_{sub2}$ are the safe sets inferred by the LMC.

Assumption 3.2. Given a kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, assume $\forall \delta > 0$, $\exists r > 0$ s.t. $\|x - x'\| \geq r \Rightarrow k(x, x') \leq \delta$ under the $L_2$ norm.

Notice that this assumption is weaker than k being strictly decreasing (see e.g. Lederer et al. (2019)), and it does not explicitly force stationarity. Here we want to find the exact r for commonly used kernels, given a δ. The following kernels (denoted by $k(\cdot, \cdot)$) are described in their standard forms. In the experiments, we often add a lengthscale l and a variance $k_{scale}$, i.e., $k_{parameterized}(x, x') = k_{scale}\,k(x/l, x'/l)$, where $k_{scale}$ and l are trainable parameters. The lengthscale l can also be a vector, where each component is a scaling factor of the corresponding dimension of the data.

RBF kernel $k(x, x') = \exp\left(-\|x - x'\|^2/2\right)$: $k(x, x') \leq \delta \Leftrightarrow \|x - x'\| \geq \sqrt{\log\frac{1}{\delta^2}}$. E.g.

- $\delta \leq 0.3 \Leftarrow \|x - x'\| \geq 1.552$
- $\delta \leq 0.1 \Leftarrow \|x - x'\| \geq 2.146$
- $\delta \leq 0.002 \Leftarrow \|x - x'\| \geq 3.526$

Matérn-1/2 kernel $k(x, x') = \exp\left(-\|x - x'\|\right)$: $k(x, x') \leq \delta \Leftrightarrow \|x - x'\| \geq \log\frac{1}{\delta}$. E.g.

- $\delta \leq 0.3 \Leftarrow \|x - x'\| \geq 1.204$
- $\delta \leq 0.1 \Leftarrow \|x - x'\| \geq 2.303$
- $\delta \leq 0.002 \Leftarrow \|x - x'\| \geq 6.217$

Matérn-3/2 kernel $k(x, x') = \left(1 + \sqrt{3}\|x - x'\|\right)\exp\left(-\sqrt{3}\|x - x'\|\right)$: E.g.

- $\delta \leq 0.3 \Leftarrow \|x - x'\| \geq 1.409$
- $\delta \leq 0.1 \Leftarrow \|x - x'\| \geq 2.246$
- $\delta \leq 0.002 \Leftarrow \|x - x'\| \geq 4.886$

Matérn-5/2 kernel $k(x, x') = \left(1 + \sqrt{5}\|x - x'\| + \frac{5}{3}\|x - x'\|^2\right)\exp\left(-\sqrt{5}\|x - x'\|\right)$: E.g.

- $\delta \leq 0.3 \Leftarrow \|x - x'\| \geq 1.457$
- $\delta \leq 0.1 \Leftarrow \|x - x'\| \geq 2.214$
- $\delta \leq 0.002 \Leftarrow \|x - x'\| \geq 4.485$

## B.4 Proof Of Our Main Theorem

We first introduce some necessary theoretical properties in Appendix B.4.1, and then use the properties to prove Theorem 3.3 in Appendix B.4.2.

## B.4.1 Additional Lemmas

Definition B.1. Let $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a kernel, $A \subseteq \mathcal{X}$ be any dataset with a finite number of elements, and let σ be any positive real number; denote $\Omega_{k,A,\sigma^2} := k(A, A) + \sigma^2 I$.

Definition B.2.
Given a kernel $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, a dataset $A \subseteq \mathcal{X}$, and some positive real number σ, then for $\mathbf{x} \in \mathcal{X}$, the k-, A-, and $\sigma^2$-dependent function $h(\mathbf{x}) = k(A, \mathbf{x})^T \Omega^{-1}_{k,A,\sigma^2}$ is called a weight function (Silverman, 1984).

Proposition B.3. $C \in \mathbb{R}^{M \times M}$ is a positive definite matrix and $\mathbf{b} \in \mathbb{R}^M$ is a vector. $\lambda_{max}$ is the maximum eigenvalue of C. We have $\|C\mathbf{b}\|_2 \leq \lambda_{max}\|\mathbf{b}\|_2$.

Proof of Proposition *B.3.* Because C is positive definite (symmetric), we can find orthonormal eigenvectors $\{\mathbf{e}_1, \ldots, \mathbf{e}_M\}$ of C that form a basis of $\mathbb{R}^M$. Let $\lambda_i$ be the eigenvalue corresponding to $\mathbf{e}_i$; we have $\lambda_i > 0$. As $\{\mathbf{e}_1, \ldots, \mathbf{e}_M\}$ is a basis, there exist $b_1, \ldots, b_M \in \mathbb{R}$ s.t. $\mathbf{b} = \sum_{i=1}^{M} b_i \mathbf{e}_i$. Since $\{\mathbf{e}_i\}$ is orthonormal, $\|\mathbf{b}\|^2_2 = \sum_i b_i^2$. Then

$$\|C\mathbf{b}\|_{2}=\|\sum_{i=1}^{M}b_{i}\lambda_{i}\mathbf{e}_{i}\|_{2}={\sqrt{\sum_{i=1}^{M}b_{i}^{2}\lambda_{i}^{2}}}\leq{\sqrt{\sum_{i=1}^{M}b_{i}^{2}\lambda_{max}^{2}}}=\lambda_{max}{\sqrt{\sum_{i=1}^{M}b_{i}^{2}}}=\lambda_{max}\|\mathbf{b}\|_{2}.$$

$\square$

Proposition B.4. $\forall A \subseteq \mathcal{X}$, any kernel k, and any positive real number σ, an eigenvalue λ of $\Omega_{k,A,\sigma^2}$ (Definition *B.1) must satisfy* $\lambda \geq \sigma^2$.

Proof of Proposition *B.4.* Let $K := k(A, A)$. We know that 1. K is positive semidefinite, so it has only non-negative eigenvalues; denote the minimal one by $\lambda_K$, and 2. $\sigma^2$ is the only eigenvalue of $\sigma^2 I$. Then Weyl's inequality immediately gives us the result: $\lambda \geq \lambda_K + \sigma^2 \geq \sigma^2$. $\square$

Corollary B.5. We are given $\forall \mathbf{x}_* \in \mathcal{X}$, $A \subseteq \mathcal{X}$, any kernel k *satisfying Assumption 3.2 and any positive* real number σ. Let M := number of elements of A, and let $\mathbf{B} \in \mathbb{R}^M$ be a vector. Then $\forall \delta > 0$, $\exists r > 0$ s.t. when $\min_{\mathbf{x}' \in A}\|\mathbf{x}_* - \mathbf{x}'\| \geq r$*, we have*

1. $|h(\mathbf{x}_*)\mathbf{B}| \leq \sqrt{M}\delta\|\mathbf{B}\|/\sigma^2$ (see also Definition *B.2),*
2. $k(\mathbf{x}_*, \mathbf{x}_*) - k(A, \mathbf{x}_*)^T \Omega^{-1}_{k,A,\sigma^2} k(A, \mathbf{x}_*) \geq k(\mathbf{x}_*, \mathbf{x}_*) - M\delta^2/\sigma^2$ (see also Definition *B.1).*

Proof of Corollary *B.5.* Let $K := k(A, A)$. Proposition B.4 implies that the eigenvalues of $\left(K + \sigma^2 I\right)^{-1}$ are bounded by $\frac{1}{\sigma^2}$.
In addition, $\min_{\mathbf{x}' \in A}\|\mathbf{x}_* - \mathbf{x}'\| \geq r \Rightarrow$ all components of the row vector $k(\mathbf{x}_*, A)$ are in the region $[0, \delta]$.

1. Applying the Cauchy-Schwarz inequality (line 1) and Proposition B.3 (line 2), we obtain

$$|k(\mathbf{A},\mathbf{x}_{*})^{T}\left(k(\mathbf{A},\mathbf{A})+\sigma^{2}I\right)^{-1}\mathbf{B}|\leq\|k(\mathbf{A},\mathbf{x}_{*})^{T}\|\,\|\left(\mathbf{K}+\sigma^{2}I\right)^{-1}\mathbf{B}\|$$
$$\leq\|k(\mathbf{A},\mathbf{x}_{*})\|\frac{1}{\sigma^{2}}\|\mathbf{B}\|$$
$$\leq\|(\delta,...,\delta)\|\frac{1}{\sigma^{2}}\|\mathbf{B}\|$$
$$\leq\frac{\sqrt{M}\delta\|\mathbf{B}\|}{\sigma^{2}}.$$

2. $\left(\mathbf{K}+\sigma^{2}I\right)^{-1}$ is a positive definite Hermitian matrix, so

$$\begin{split}k(\mathbf{A},\mathbf{x}_{*})^{T}\left(\mathbf{K}+\sigma^{2}I\right)^{-1}k(\mathbf{A},\mathbf{x}_{*})&\leq\frac{1}{\sigma^{2}}\|k(\mathbf{A},\mathbf{x}_{*})\|^{2}\\ &\leq\frac{1}{\sigma^{2}}M\delta^{2}.\end{split}$$

Then, we immediately see that

$$\begin{split}k(\mathbf{x}_{*},\mathbf{x}_{*})-k(\mathbf{A},\mathbf{x}_{*})^{T}\left(\mathbf{K}+\sigma^{2}I\right)^{-1}k(\mathbf{A},\mathbf{x}_{*})&\geq k(\mathbf{x}_{*},\mathbf{x}_{*})-\frac{1}{\sigma^{2}}\|k(\mathbf{A},\mathbf{x}_{*})\|^{2}\\&\geq k(\mathbf{x}_{*},\mathbf{x}_{*})-\frac{1}{\sigma^{2}}M\delta^{2}.\end{split}$$

$\square$

Remark B.6. The CDF of a standard Gaussian distribution is often denoted by $p(x \leq T) = \Phi(T)$, $x \sim \mathcal{N}(0, 1)$. Notice that $p(x \leq -T) = \Phi(-T) = 1 - \Phi(T) = p(x \geq T)$.

## B.4.2 Main Proof

Theorem 3.3. We are given $\forall \mathbf{x}_* \in \mathcal{X}$, $\mathbf{x}_{1:N} \subseteq \mathcal{X}$, a kernel $k_{q_j}$ satisfying Assumption 3.3 and $k_{q_j}(\cdot, \cdot) \leq 1$. Denote $k^j_{scale} := \max k_{q_j}(\cdot, \cdot)$. $q_j \sim \mathcal{GP}(0, k_{q_j})$ is a GP, $z^j_{1:N} := (z^j_1, \ldots, z^j_N)$ is a set of observed noisy values (Assumption 2.1), and $\|(z^j_1, \ldots, z^j_N)\| \leq \sqrt{N}$. Then $\forall \delta \in \left(0, \sqrt{k^j_{scale}}\,\sigma_{q_j}/\sqrt{N}\right)$, $\exists r > 0$ s.t.
when $\min_{\mathbf{x}_i \in \mathbf{x}_{1:N}}\|\mathbf{x}_* - \mathbf{x}_i\| \geq r$, the probability thresholded on a constant $T_j$ is bounded by $p\left((q_j(\mathbf{x}_*) \geq T_j)\,|\,\mathbf{x}_{1:N}, z^j_{1:N}\right) \leq \Phi\left(\frac{N\delta/\sigma^2_{q_j} - T_j}{\sqrt{k^j_{scale} - (\sqrt{N}\delta/\sigma_{q_j})^2}}\right)$.

Proof. From Equation (1) in the main script, we know that

$$p\left(q_j(\mathbf{x}_*)\,|\,\mathbf{x}_{1:N}, z^j_{1:N}\right) = \mathcal{N}\left(q_j(\mathbf{x}_*)\,|\,\mu_{q_j,N}(\mathbf{x}_*), \sigma^2_{q_j,N}(\mathbf{x}_*)\right),$$
$$\mu_{q_j,N}(\mathbf{x}_*) = k_{q_j}(\mathbf{x}_{1:N}, \mathbf{x}_*)^T\left(k_{q_j}(\mathbf{x}_{1:N}, \mathbf{x}_{1:N}) + \sigma^2_{q_j} I_N\right)^{-1} z^j_{1:N},$$
$$\sigma^2_{q_j,N}(\mathbf{x}_*) = k_{q_j}(\mathbf{x}_*, \mathbf{x}_*) - k_{q_j}(\mathbf{x}_{1:N}, \mathbf{x}_*)^T\left(k_{q_j}(\mathbf{x}_{1:N}, \mathbf{x}_{1:N}) + \sigma^2_{q_j} I_N\right)^{-1} k_{q_j}(\mathbf{x}_{1:N}, \mathbf{x}_*).$$

We also know that (Remark B.6)

$$p\left((q_{j}(\mathbf{x}_{*})\geq T_{j})|\mathbf{x}_{1:N},z_{1:N}^{j}\right)=1-\Phi\left(\frac{T_{j}-\mu_{q_{j},N}(\mathbf{x}_{*})}{\sigma_{q_{j},N}(\mathbf{x}_{*})}\right)=\Phi\left(\frac{\mu_{q_{j},N}(\mathbf{x}_{*})-T_{j}}{\sigma_{q_{j},N}(\mathbf{x}_{*})}\right).$$

From Corollary B.5, we get

$$\frac{\mu_{q_j,N}(\mathbf{x}_*) - T_j}{\sigma_{q_j,N}(\mathbf{x}_*)} \leq \frac{\sqrt{N}\delta\|z^j_{1:N}\|/\sigma^2_{q_j} - T_j}{\sqrt{k_{q_j}(\mathbf{x}_*, \mathbf{x}_*) - N\delta^2/\sigma^2_{q_j}}}.$$

This is valid because we assume $\delta < \sqrt{k^j_{scale}}\,\sigma_{q_j}/\sqrt{N}$. Then, with $\|z^j_{1:N}\| \leq \sqrt{N}$ and the fact that Φ is an increasing function, we immediately see the result

$$p\left((q_{j}(\mathbf{x}_{*})\geq T_{j})|\mathbf{x}_{1:N},z_{1:N}^{j}\right)\leq\Phi\left(\frac{N\delta/\sigma_{q_{j}}^{2}-T_{j}}{\sqrt{k_{scale}^{j}-(\sqrt{N}\delta/\sigma_{q_{j}})^{2}}}\right)\,.$$

$\square$

## C Multi-Output GPs With Source Pre-Computation

Given a multi-output GP $\mathbf{g} \sim \mathcal{GP}(0, k_{\mathbf{g}})$, $g \in \{f, q_1, \ldots, q_J\}$, where $k_{\mathbf{g}}$ is an arbitrary kernel, the main computational challenge is to compute the inverse or Cholesky decomposition of

$$\Omega_{\mathbf{g}}=\begin{pmatrix}K_{g_{s}}+\sigma_{g_{s}}^{2}I_{N_{source}}&K_{g_{s},g}\\ K_{g_{s},g}^{T}&K_{g}+\sigma_{g}^{2}I_{N}\end{pmatrix}.$$

Such a computation has time complexity $\mathcal{O}\left((N_{source} + N)^3\right)$. We wish to avoid repeating this computation. As in our main script, $k_{\mathbf{g}}$ is parameterized, and we write the parameters as $\theta_{\mathbf{g}} = (\theta_{g_s}, \theta_{g_t})$, where $k_{\mathbf{g}}\left((\cdot, index_s), (\cdot, index_s)\right)$ is independent of $\theta_{g_t}$.
$k_{\mathbf{g}}\left((\cdot, index_s), (\cdot, index_t)\right)$ and $k_{\mathbf{g}}\left((\cdot, index_t), (\cdot, index_t)\right)$ do not need to be independent of $\theta_{g_s}$. Here we propose to fix $K_{g_s}$ (i.e., $\theta_{g_s}$) and $\sigma^2_{g_s}$ and to precompute the Cholesky decomposition of the source components, $L_{g_s} = L\left(K_{g_s} + \sigma^2_{g_s} I_{N_{source}}\right)$; then

$$L\left(\Omega_{\mathbf{g}}\right)=\begin{pmatrix}L_{g_{s}}&\mathbf{0}\\ \left(L_{g_{s}}^{-1}K_{g_{s},g}\right)^{T}&L\left(\hat{K}_{t}\right)\end{pmatrix},\tag{5}$$
$$\hat{K}_{t}=K_{g}+\sigma_{g}^{2}I_{N}-\left(L_{g_{s}}^{-1}K_{g_{s},g}\right)^{T}L_{g_{s}}^{-1}K_{g_{s},g}.$$

This is obtained from the definition of the Cholesky decomposition, i.e., $\Omega_{\mathbf{g}} = L(\Omega_{\mathbf{g}})L(\Omega_{\mathbf{g}})^T$, and from the fact that a Cholesky decomposition exists and is unique for any positive definite matrix. The complexity of computing $L(\Omega_{\mathbf{g}})$ thus becomes $\mathcal{O}(N^2_{source}N) + \mathcal{O}(N_{source}N^2) + \mathcal{O}(N^3)$ instead of $\mathcal{O}\left((N_{source} + N)^3\right)$. In particular, computing $L^{-1}_{g_s}K_{g_s,g}$ is $\mathcal{O}(N^2_{source}N)$, acquiring the matrix product $\hat{K}_t$ is $\mathcal{O}(N_{source}N^2)$, and the Cholesky decomposition $L(\hat{K}_t)$ is $\mathcal{O}(N^3)$. The learning procedure is summarized in Algorithm 2 in the main script, incorporating Equation (5) in Equation (4) of the main script (Section 4).

![19_image_0.png](19_image_0.png)

Figure 6: Example simulated GP data of D = 1; f is the function we want to learn (top), under an additional safety constraint q ≥ 0 (bottom). The curves are the true source (yellow) and target (black) functions. The dots are safe source data and a pool of initial target data (this pool contains more target data than are actually used in the experiments).

![19_image_1.png](19_image_1.png)

Figure 7: Example simulated GP data of D = 2; f is the function we want to learn (left), with an additional safety function q (middle); the green areas are the true safe regions q ≥ 0 (right). The top is the source task and the bottom is the target task.
The dots are safe source data and a pool of initial target data (this pool contains more target data than are actually used in the experiments).

## D Experiment Details

## D.1 Labeling Safe Regions

The goal is to label disjoint safe regions, so that we may track the exploration of each region. In our experiments, the test safety values are always available because we are dealing with a fixed pool of data. It is thus possible to access the safety condition of each test point as a binary label. We perform connected component labeling (CCL; see He et al. (2017)) on the safety classes over grids (grids are available; see the following sections). When D = 1, this labeling is trivial. When D = 2, we consider the 4-neighbors of each pixel (He et al., 2017). With simulated datasets, the ground truth is available, and thus CCL is deterministic. CCL can become computationally intractable in high dimensions (the number of grid points grows exponentially), and this method can be inaccurate on real data, where observations are noisy and grid values need to be interpolated from the measurements. After clustering the safe regions over grids, we identify which safe region each test point $\mathbf{x}_*$ belongs to by searching for the grid point nearest to $\mathbf{x}_*$. See main Table 2 and the queried-regions count in Figure 10 for the results.

## D.2 Numerical Details

When we run Algorithms 1 and 2 (in the main paper), we set $N_{init}$ (number of initially observed target data), $N_{source}$ (number of observed source data), and $N_{pool}$ (size of the discretized input space $X_{pool}$) as follows:

1. GP1D: $N_{source} = 100$, $N_{init} = 10$, run Algorithm 1 or Algorithm 2 for 50 iterations, and $N_{pool} = 5000$;
2. GP2D: $N_{source} = 250$, $N_{init} = 20$, run Algorithm 1 or Algorithm 2 for 100 iterations, and $N_{pool} = 5000$;
3. Branin & Hartmann3: $N_{source} = 100$, $N_{init} = 20$, run Algorithm 1 or Algorithm 2 for 100 iterations, and $N_{pool} = 5000$;
4. Engine: $N_{source} = 500$, $N_{init} = 20$, run Algorithm 1 or Algorithm 2 for 100 iterations, and $N_{pool} = 3000$.
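The D = 2 labeling step of Appendix D.1 can be sketched with `scipy.ndimage.label`, whose default two-dimensional structuring element is exactly the 4-neighborhood; the toy safety function, grid resolution, and test point below are our own choices for illustration:

```python
import numpy as np
from scipy import ndimage

# 4-neighbor connected component labeling of a thresholded safety function on a grid.
grid = np.linspace(-2, 2, 200)
xx, yy = np.meshgrid(grid, grid)
q = np.sin(3 * xx) + np.cos(3 * yy)        # assumed toy safety function
safe = q >= 0                              # binary safety label per grid point

# scipy's default 2-D structure is the cross-shaped 4-neighborhood.
labels, num_regions = ndimage.label(safe)

# Assign a test point to a safe region via the nearest grid point.
x_test, y_test = 0.3, -0.5
j = np.argmin(np.abs(grid - x_test))       # column index (x)
i = np.argmin(np.abs(grid - y_test))       # row index (y)
region = labels[i, j]                      # 0 means "unsafe", >= 1 is a region id
print(num_regions, region)
```

As the text notes, this grid-based approach scales exponentially with D, so it is only used here for D ≤ 2 bookkeeping, not inside the learning loop.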
In the following, we describe in detail how to prepare each dataset. We first sample source and target test functions and then sample initial observations from the functions. For the GP1D, GP2D, and Branin problems (Section 5.1), we reject the sampled functions unless all of the following conditions are satisfied: (i) the target task has at least two disjoint safe regions, (ii) each of these regions has a common safe area shared with the source, and (iii) for at least two disjoint target safe regions, each aforementioned shared area is larger than 5% of the overall space (in total, at least 10% of the space is safe for both the source and the target tasks). In our general test problems, i.e., Hartmann3 (Section 5.2), we generate the functions as they are. In other words, we do not restrict the datasets to any safe region characteristics.

GP data: We generate datasets of two outputs. The first output is treated as our source task and the second output as the target task. To generate the multi-output GP datasets, we use GPs with a zero-mean prior and the multi-output kernel $\sum_{l=1}^{2} W_l W_l^T \otimes k_l(\cdot, \cdot)$, where ⊗ is the Kronecker product, each $W_l$ is a 2-by-2 matrix, and $k_l$ is a unit-variance Matérn-5/2 kernel (Álvarez et al., 2012). All components of $W_l$ are generated in the following way: we randomly sample from a uniform distribution over the interval [−1, 1), and then the matrix is normalized such that each row of $W_l$ has norm 1. Each $k_l$ has unit variance and a vector of lengthscale parameters consisting of D components. For the GP1D and GP2D problems, each component of the lengthscale is sampled from a uniform distribution over the interval [0.1, 1). We adapt Algorithm 1 of Kanagawa et al. (2018) for GP sampling, detailed as follows:

1. sample an input dataset $X \in \mathbb{R}^{n \times D}$ within the interval [−2, 2], with n = 100D.
2. for l = 1, 2, compute the Gram matrix $K_l = k_l(X, X)$.
3. compute the Cholesky decomposition $L_l = L(W_l W_l^T \otimes K_l) = L(W_l W_l^T) \otimes L(K_l)$ (i.e., $W_l W_l^T \otimes K_l = L_l L_l^T$, $L_l \in \mathbb{R}^{2n \times 2n}$).
4.
for l = 1, 2, draw $u_l \sim \mathcal{N}(0, I_{2n})$ ($u_l \in \mathbb{R}^{2n \times 1}$).
5. obtain the noise-free output dataset $F = \sum_{l=1}^{2} L_l u_l$.
6. reshape $F = \begin{pmatrix} \mathbf{f}(X, s) \\ \mathbf{f}(X, t) \end{pmatrix} \in \mathbb{R}^{2n \times 1}$ into $F = \begin{pmatrix} \mathbf{f}(X, s) & \mathbf{f}(X, t) \end{pmatrix} \in \mathbb{R}^{n \times 2}$.
7. normalize F again s.t. each column has mean 0 and unit variance.
8. generate initial observations (more than needed in the experiments, always sampled from the largest safe region shared between the source and the target).

During the AL experiments, the generated data X and F are treated as grids. We construct an oracle on the continuous space $[-2, 2]^D$ by interpolation. During the experiments, the training data and test data are blurred with Gaussian noise of standard deviation 0.01. Once we sample the GP hyperparameters, we sample one main function f and an additional safety function from the GP. During the experiments, the constraint is set to q ≥ 0. For each dimension, we generate 10 datasets and repeat the AL experiments 5 times for each dataset. We illustrate examples of X and F in Figure 6 and Figure 7.

Branin data: The Branin function is defined over $(x_1, x_2) \in \mathcal{X} = [-5, 10] \times [0, 15]$ as

$$f_{a,b,c,r,s,t}\left((x_1, x_2)\right) = a\left(x_2 - bx_1^2 + cx_1 - r\right)^2 + s(1 - t)\cos(x_1) + s,$$

where a, b, c, r, s, t are constants. It is common to set $(a, b, c, r, s, t) = \left(1, \frac{5.1}{4\pi^2}, \frac{5}{\pi}, 6, 10, \frac{1}{8\pi}\right)$, which is our setting for the target task. We take the numerical setting of Tighineanu et al. (2022); Rothfuss et al. (2022) to generate five different source datasets (and later repeat 5 experiments for each dataset):

$$a \sim Uniform(0.5, 1.5),$$
$$b \sim Uniform(0.1, 0.15),$$
$$c \sim Uniform(1.0, 2.0),$$
$$r \sim Uniform(5.0, 7.0),$$
$$s \sim Uniform(8.0, 12.0),$$
$$t \sim Uniform(0.03, 0.05).$$
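The source-task generation just described can be sketched as follows; the RNG seed and grid resolution are our own choices, and the function uses the standard squared Branin form:

```python
import numpy as np

rng = np.random.default_rng(5)

# Draw one set of source-task constants from the uniform ranges above.
a = rng.uniform(0.5, 1.5)
b = rng.uniform(0.1, 0.15)
c = rng.uniform(1.0, 2.0)
r = rng.uniform(5.0, 7.0)
s = rng.uniform(8.0, 12.0)
t = rng.uniform(0.03, 0.05)

def branin(x1, x2):
    # Standard Branin form with the sampled constants.
    return a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

# Evaluate on a grid over X = [-5, 10] x [0, 15], then normalize to
# zero mean and unit standard deviation, as in the text.
x1, x2 = np.meshgrid(np.linspace(-5, 10, 100), np.linspace(0, 15, 100))
f = branin(x1, x2)
f_norm = (f - f.mean()) / f.std()

safe = f_norm >= 0   # safety constraint f >= 0 on the normalized output
print(safe.mean())   # fraction of the grid that is safe for this source task
```

Because the output is normalized to zero mean, the constraint f ≥ 0 leaves a nontrivial fraction of the space both safe and unsafe, which is what makes the benchmark informative.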
After obtaining the constants for our experiments, we sample noise-free data points and use the samples to normalize our output

$$f_{a,b,c,r,s,t}\left((x_{1},x_{2})\right)_{normalized}={\frac{f_{a,b,c,r,s,t}\left((x_{1},x_{2})\right)-mean(f_{a,b,c,r,s,t})}{std(f_{a,b,c,r,s,t})}}.$$

Then we set the safety constraint f ≥ 0 and sample initial safe data. The sampling noise is Gaussian during the experiments.

Hartmann3 data: The Hartmann3 function is defined over $\mathbf{x} \in \mathcal{X} = [0, 1]^3$ as

$$f_{a_{1},a_{2},a_{3},a_{4}}\left((x_{1},x_{2},x_{3})\right)=-\sum_{i=1}^{4}a_{i}\exp\left(-\sum_{j=1}^{3}A_{i,j}(x_{j}-P_{i,j})^{2}\right),$$

$$\mathbf{A}=\begin{pmatrix}3&10&30\\ 0.1&10&35\\ 3&10&30\\ 0.1&10&35\end{pmatrix},\qquad\mathbf{P}=10^{-4}\begin{pmatrix}3689&1170&2673\\ 4699&4387&7470\\ 1091&8732&5547\\ 381&5743&8828\end{pmatrix},$$

where $a_1, a_2, a_3, a_4$ are constants. It is common to set $(a_1, a_2, a_3, a_4) = (1, 1.2, 3, 3.2)$, which is our setting for the target task. We take the numerical setting of Tighineanu et al. (2022) to generate five different source datasets (and later repeat 5 experiments for each dataset):

$$\begin{array}{l}{{a_{1}\sim U niform(1.0,1.02),}}\\ {{a_{2}\sim U niform(1.18,1.2),}}\\ {{a_{3}\sim U niform(2.8,3.0),}}\\ {{a_{4}\sim U niform(3.2,3.4).}}\end{array}$$

After obtaining the constants for our experiments, we sample noise-free data points and use the samples to normalize our output

$$f_{a_1,a_2,a_3,a_4}\left((x_1,x_2,x_3)\right)_{normalized}=\frac{f_{a_1,a_2,a_3,a_4}\left((x_1,x_2,x_3)\right)-mean(f_{a_1,a_2,a_3,a_4})}{std(f_{a_1,a_2,a_3,a_4})}.$$

Then we set the safety constraint f ≥ 0 and sample initial safe data. The sampling noise is Gaussian during the experiments.

Engine data: We have 2 datasets, measured from the same prototype of an engine under different conditions. Both datasets measure the temperature, roughness, emission HC, and emission NOx.
The inputs are engine speed, relative cylinder air charge, position of camshaft phaser, and air-fuel ratio. The contextual input variables "position of camshaft phaser" and "air-fuel ratio" are desired to be fixed. These two contextual inputs are recorded with noise, so we interpolate their values with a multi-output GP simulator. We construct an LMC trained with the 2 datasets, with each task as one output. During training, we split each of the datasets (both safe and unsafe) into 60% training data and 40% test data. After the model parameters are selected, the trained models along with the full dataset are utilized as our GP simulators (one simulator for each output channel, e.g., a temperature simulator, a roughness simulator, etc.). The first output of each GP simulator is the source task and the second output the target task. The simulators provide the GP predictive mean as the observations. During the AL experiments, the input space is a rectangle spanned from the datasets, and $X_{pool}$ is a discretization of this space from the simulators with $N_{pool} = 3000$. We set $N_{source} = 500$, N = 20 (initially), and we query for 100 iterations (N = 20 + 100). When we fit the models for the simulators, the test RMSE (60% training and 40% test data) of roughness is around 0.45 and that of temperature around 0.25. In a sequential learning experiment, the surrogate models are trainable GP models. These surrogate models interact with the simulators, i.e., they take $X_{pool}$ from the simulators, infer the safety and query from $X_{pool}$, and then obtain observations from the simulators. In our main Algorithms 1 and 2, the surrogate models are the GP models, while the GP simulators are the systems that respond to queries $\mathbf{x}_*$.

## E Ablation Studies And Further Experiments

In this section, we provide ablation studies on the size of the source dataset. One source task, varied $N_{source}$: We perform experiments on the Branin function. The results are presented in Figure 8.
The first conclusion is that all of the multi-output methods outperform the baseline safe AL (safe AL results shown in Figure 3). Note again that the RMSEs are evaluated on the entire space, while the baseline safe AL explores only one safe region. In addition, we observe that more source data result in better performance, i.e., lower RMSE and larger safe set coverage (TP area), while there exists a saturation level at around $N_{source} = 100$. Multiple source tasks: Next, we wish to vary the number of source tasks. Before presenting the results, we first introduce the model for multiple source tasks. In this paragraph, $D_{source}$ denotes the number of source tasks. As described in Section 4, each $g \in \{f, q_1, \ldots, q_J\}$ is a multi-output GP correlating source and target tasks. The LMC (linear model of coregionalization) can be used without any change: $k_g = \sum_l \left(W_l W_l^T + diag(\kappa_l)\right) \otimes k_l(\cdot, \cdot)$, where $k_l(\cdot, \cdot)$ is a standard single-task kernel as in Assumption 3.1, and $W_l$ and $\kappa_l$ are vectors of $(D_{source} + 1)$ elements (Álvarez et al., 2012). The HGP can be extended in two ways: the models in Poloczek et al. (2017) or in Tighineanu et al. (2022). Here we take the model from Tighineanu et al. (2022): $k_g = \sum_{i=0}^{D_{source}} Mask_i \otimes k_i(\cdot, \cdot)$, where $Mask_i \in \mathbb{R}^{(D_{source}+1) \times (D_{source}+1)}$ is a matrix whose first i rows and columns are zero and whose other entries are all one (all elements of $Mask_0$ are ones). One can see that if $D_{source} = 1$, then we recover the HGP described in Section 4 by reindexing $k_0$ and $k_1$ here. In this study, we generate source data with constraints, but without the disjoint safe region requirement when we sample the source tasks and data (in Section 5.1, the data are generated s.t. the source and target tasks have a large enough shared safe area). We consider 1, 3, or 4 source tasks, and we generate 20 or 30 data points per task.
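A direct sketch of the $Mask_i$ construction described above (task index 0 first; the helper name is our own):

```python
import numpy as np

def mask(i, num_tasks):
    """Mask_i: first i rows and columns are zero, all other entries are one."""
    m = np.ones((num_tasks, num_tasks))
    m[:i, :] = 0.0
    m[:, :i] = 0.0
    return m

D_source = 3
masks = [mask(i, D_source + 1) for i in range(D_source + 1)]

# Mask_0 is the all-ones matrix; later masks progressively cut off earlier tasks,
# so tasks a and b share only the latent kernels k_0, ..., k_min(a,b).
task_structure = sum(masks)   # entry (a, b) equals min(a, b) + 1
print(task_structure)
```

Summing the masks makes the hierarchical structure visible: entry (a, b) counts how many latent kernels $k_i$ couple tasks a and b, which is exactly the nested source-to-target sharing the HGP encodes.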
In general, we see that 3 source tasks significantly outperform 1 source task, while the performance saturates: adding 10 more points per source task seems to be more beneficial than adding one more source task. Note here that all source data are generated independently, i.e., the observations of each task are not restricted to the same input locations. Further plots and experiments: In Section 5.1, we track the safe region of each query in the AL experiments. We also measure the model fitting time per iteration. The main Table 2 and Table 3 present only the summary results. In Figure 10, we additionally provide the region clustering and the fitting time w.r.t. the AL iterations. Furthermore, Table 4 counts, among the AL-selected queries, those which satisfy the safety constraints once the safety measurements are accessed. This table is a sanity check that the methods select points safely. With the engine datasets, we perform additional experiments of learning f = q = temperature, and the results are shown in Figure 11.

Table 4: Ratio of safe queries

| methods | GP1D + z | GP2D + z | Branin | Hartmann3 |
|--------------|---------------|---------------|-----------------|---------------|
| num_steps | 50 | 100 | 100 | 100 |
| EffTransHGP | 0.986 ± 0.001 | 0.974 ± 0.002 | 0.999 ± 0.0006 | 0.972 ± 0.003 |
| FullTransHGP | 0.979 ± 0.004 | 0.952 ± 0.005 | 0.9996 ± 0.0004 | 0.972 ± 0.003 |
| FullTransLMC | 0.984 ± 0.002 | 0.969 ± 0.002 | 0.993 ± 0.0009 | 0.968 ± 0.003 |
| Rothfuss2022 | 0.975 ± 0.003 | 0.905 ± 0.006 | 1.0 ± 0.0 | 0.84 ± 0.011 |
| SAL | 0.995 ± 0.001 | 0.958 ± 0.005 | 1.0 ± 0.0 | 0.966 ± 0.002 |

Ratio of all queries selected by the methods which are safe in the ground truth (initial data not included; see Section 5 for the experiments). This is a sanity check, in addition to the FP safe set area, demonstrating that all the methods are safe during the experiments (our datasets have 0 mean; the constraint q ≥ 0 implies that around half of the space is unsafe).
Note: $\beta = 4$ (equivalently, $\alpha = 1 - \Phi(\beta^{1/2}) = 0.02275$) implies that a 2.275 % unsafe tolerance is allowed by each fitted GP safety model. Engine results are not shown because the queries are all safe (the modeled FP safe set area is almost zero in this problem, see Figure 4 and Figure 11). ![25_image_0.png](25_image_0.png) Figure 8: Safe AL experiments: Branin data with different numbers of source data. Each multi-task method is plotted in one column. The results are mean and one standard error of 25 experiments per setting. $X_{pool}$ is discretized from X with $N_{pool} = 5000$. The TP/FP areas are computed as the number of TP/FP points divided by $N_{pool}$ (i.e., TP/FP as a portion of $X_{pool}$). The third row shows the number of disjoint safe regions explored by the queries. The fifth row, the unsafe queries ratio, is presented as a percentage of the number of iterations (e.g., if at the 2nd iteration out of a total of 100 iterations one of the two queries is unsafe, then the ratio is 1 divided by 100). The last row shows the model fitting time. At the first iteration (iteration 0), this includes the time for fitting both the source components and the target components (EffTransHGP). For Rothfuss et al. 2022, source fitting is the meta-learning phase. ![26_image_0.png](26_image_0.png) Figure 9: Safe AL experiments: Branin data with multiple source tasks. Each multi-task method is plotted in one column. We consider 1, 3, or 4 source tasks and sample 20 or 30 data points per task. The remaining settings are the same as described in Figure 8. RMSE is plotted in log scale. ![27_image_0.png](27_image_0.png) Figure 10: Safe AL experiments on three benchmark datasets: GP data with $X = [-2, 2]^D$, D = 1 or 2, constrained to q ≥ 0, and the benchmark Branin function with constraint f ≥ 0. The results are mean and one standard error of 100 (GP data) or 25 (Branin data) experiments. $X_{pool}$ is discretized from X with $N_{pool} = 5000$. 
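The $\beta$-to-$\alpha$ conversion stated in the note above can be checked directly with the standard normal CDF (a small sketch; the `normal_cdf` helper via the error function is only for illustration):

```python
import math

def normal_cdf(x):
    # Standard normal CDF expressed through the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

beta = 4.0
alpha = 1.0 - normal_cdf(math.sqrt(beta))  # tail mass beyond beta^(1/2) std devs
# alpha is approximately 0.02275, i.e. a 2.275% per-model unsafe tolerance
```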
We set $N_{source} = 100$ with N ranging from 10 (0th iteration) to 60 (50th iteration) for GP1D, $N_{source} = 250$ with N from 20 to 120 for GP2D, and $N_{source} = 100$ with N from 20 to 120 for Branin. The first, second, and fourth rows are presented in Figure 3 of the main paper. The TP/FP areas are computed as the number of TP/FP points divided by $N_{pool}$ (i.e., TP/FP as a portion of $X_{pool}$). The third row shows the number of disjoint safe regions explored by the queries (main Table 2 is taken from the last iteration here). The fifth row, the unsafe queries ratio, is presented as a percentage of the number of iterations (e.g., if at the 2nd iteration out of a total of 50 iterations one of the two queries is unsafe, then the ratio is 1 divided by 50). The last row shows the model fitting time. At the first iteration (iteration 0), this includes the time for fitting both the source components and the target components (EffTransHGP). For Rothfuss et al. 2022, source fitting is the meta-learning phase. ![28_image_0.png](28_image_0.png) Figure 11: Safe AL experiments on engine emission modeling, AL on f (temperature) constrained by q = f ≤ 1.0. The baseline is safe AL without source data. Transfer is LMC without modularization. Efficient_transfer is HGP with fixed and pre-computed source knowledge. $N_{source} = 500$, N from 20 to 120. The results are mean and one standard error of 5 repetitions. The fitting time is in seconds.
Review 1: Summary: This paper presents a multi-output GP approach to address the safe exploration challenge in active learning. The key idea is to enable knowledge transfer from a related source so that the safe region can be estimated better (i.e., requiring fewer prior data from the target task). In this setup, the target task is characterized by one GP-distributed main function and a set of GP-distributed constraint functions. At each iteration, an input candidate (whose constraint outputs are above a pre-specified threshold) with maximum predictive entropy will be selected. All GPs will be retrained for the next iteration and so on. The main contributions of this paper include: (1) the derivation of the safety probability in the single-output GP (no transfer); and (2) the use of multi-output GP to enable transfer from an existing source to the target problem. The source is represented in a similar manner to the target where there is a GP-distributed (source) main function and several other GP-distributed constraint functions. Strengths and Weaknesses: Strengths: The paper is well-organized and well-written. The idea of source transfer to boost the accuracy of the safety probability estimate is sufficiently novel. The technical flow is clear and sound.  Weaknesses: 1. The result in Theorem 3.3 is giving out a vibe that it is trivial and can also become vacuous (due to the strong independence assumption). To elaborate: As the predictive distribution is a Gaussian, the chance of the prediction being larger than a threshold is always represented in the form of a Gaussian cumulative function, and the rest of the algebra inside the cumulative function is a direct consequence of the assumption. Furthermore, assuming the constraint functions are independent will lead to a vacuous bound. Even if the safety probability per constraint is sufficiently close to 1.0, their product will approach zero when there are sufficiently many constraints -- see the text after Eq. (2). 
This suggests that the technical setting here might be oversimplified. In practice, constraint functions are likely correlated. I think the authors might want to elaborate more on this point if I have missed something important here. 2. The paper also gives the vibe that the above theoretical guarantee does not extend to the source-transfer setting via multi-output GP. Can the authors provide clarification on this? The multi-output GP is a GP with co-regionalized kernel, so I tend to think any results for single-output GPs would extend straightforwardly to the multi-output setting. 3. On the same thought that the multi-output GP is a GP with co-regionalized kernel, I wonder why the authors prefer inventing a new approximation over adopting the existing literature on sparse GPs (to improve scalability). Please elaborate. 4. Several technical details are not quite clear. For example, how do we maximize the acquisition function? How do we guarantee that the maxima of the acquisition function will remain within the safety set? How do we define the co-regionalization kernel in cases where both source and target have multiple constraint functions? The paper seems to detail only the formulation when there are only two constraint functions (one for source, one for target). It would be good to flesh this out more clearly. 5. Experiments are mostly conducted on simulated or toy datasets in low dimensions. The authors should probably consider running their proposed solution on more benchmark datasets of larger dimension and quantity. 6. I also do not see concrete theoretical analysis showing the sample save-up in the target domain when there is a source transfer. Can the authors discuss this? 7. The notation is probably heavy for people who are not already familiar with GPs. Perhaps the authors should consider lightening up the notation. Requested Changes: Please address the concerns that I raised in the Weakness section above. 
Broader Impact Concerns: N/A ================================================== Review 2: Summary: Sequential learning is essential in many applications. Two concrete (global) variants of it are active learning and Bayesian optimization. In practice, however, one has to pay attention to safety constraints on how to sample new data. Since it is often not trivial to identify feasible regions, the authors propose knowledge transfer from previous (source) learning tasks s.t. the performance on a target task can be improved. To this end, the authors describe how local exploration of feasible regions can be done with GPs (focussing on the active learning setting) and extend this to a modularized GP transfer learning for a given set of source tasks. In their experiments, they show that they can outperform previous approaches on 1D and 2D functions. Strengths and Weaknesses: ### Strengths: 1. The paper addresses a very important problem in practice. Since data acquisition is becoming more and more central to many AI applications, it is a very timely problem. In addition, the safety aspect is an understudied issue that is highly relevant in many applications. 2. The paper is very thorough on the theory side. It provides all formal background and assumptions and derives properties and bounds of the proposed approach. 3. The experiments are very well structured, along with meaningful research questions. ### Weaknesses: 1. The paper is not very well written. Since it provides a very deep theory and very good formalism, it is even more important to guide the reader through all the notation. After spending quite a bit of time, I was able to understand most of it – I admit that I was not able to check all the details. I fully believe that the authors know the notation by heart, but with all the subscripts and superscripts, I had to go back and forth several times to understand everything. It would make it much easier if the authors would give their notation more often a name s.t. 
one does not have to go back to the place where it was first introduced. Furthermore, a lot of important content is also hidden in the appendix. Even worse, some figures are in the appendix that are not marked as such in the text – making it painful to find them. Figure placement can be improved in general. In view of the recommended page limit of 12 pages, but only having 10 pages, I recommend improving readability substantially. 2. The benchmarks are only rather shallow. a. The authors only consider 1-D and 2-D functions. This is not of particular interest in practice, neither for active learning nor for Bayesian optimization. b. They use only a single source. Again, this makes the whole work very uninteresting for practical applications. c. Most benchmarks are artificial benchmarks (GP1D, GP2D and Branin). d. I don't know whether the Engine datasets are publicly available s.t. one could reproduce the results. Furthermore, the Engine results are missing from Table 1. e. The authors wrote: "The GP model parameters are trained up-front and remain fixed during the experiments". I would strongly argue that this is against common practice and needs clear justification as to why this is possible and reasonable. Further questions to the authors: 1. You only mention local safe learning. However, there are also prior works on global safe optimization. For example, Antonio Candelieri worked quite a bit on safe global optimization. Could you please comment on how this relates to your work? 2. Could you please elaborate on X_pool? Why is this restriction needed? 3. You said at the very end that a correlation between the source and the target is needed. This makes absolute sense, but what happens if this is not the case? Can your approach recover from it? If not, can you modify it so it can recover? Requested Changes: 1. Improve readability (see above) 2. Redo the experiments s.t. they have practical relevance (see above). 
Broader Impact Concerns: None ================================================== Review 3: Summary: In this paper, the authors studied the problem of safe sequential learning where some source data are available. They propose a joint modeling approach incorporating source data to aid in predictive distribution in the target domain. Additionally, they propose a pre-computation technique to speed up the computation for posterior distributions. They also established some theoretical results on the limitation of local exploration of single-output Gaussian processes. The submission includes empirical evaluations with both synthetic function data and engine modeling task data. Results show that the proposed algorithm can achieve lower MSE and identify more disconnected safe regions. Strengths and Weaknesses: Strengths: 1. Safe exploration with Gaussian process models is a topic that is relevant to the TMLR audience. This work adds valuable theoretical and empirical understanding to this topic. 2. The proposed method is easy to understand and implement in practice. I found the innovation of using pre-computation for speed-up interesting. Weaknesses: 1. I think the submission will benefit a lot by making the empirical evaluations more comprehensive. All the experiments are on low-dimensional datasets (<5 dimensions). Furthermore, including more datasets would make the algorithmic improvements more convincing as well. 2. It would be good to see an additional ablation study on the amount of the available source data and study its influence on the effectiveness of transfer learning. 3. There are some places where the descriptions are unclear. Please see the Requested Changes section below for details. Requested Changes: 1. In the section on “In-experiment speed-up via source pre-computation”, $\Omega_{g}$ is not defined (I think it refers to the covariance matrix being inverted). 2. In the caption to Figure 3, it is unclear to me what “$N$ is from 20 to 120” means. 
Since $N$ refers to the initial amount of target domain data, are you varying the amount of the initial target data here? 3. In Section 5.1, the authors stated “The datasets are generated such that the target task has at least two disjoint safe regions where each region has a common safe area shared with the source”. I found this condition for dataset construction too stringent. In practice, as source and target tasks can have different safety constraints altogether, this assumption seems to make the task too easy. Can the authors either justify this choice or include experiments that intentionally make the transfer from source to target domain more challenging? Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: This paper studies a transfer learning problem where the safety constraints are also transferred. The transfer is done using Gaussian processes. The original paper had several shortcomings, such as hard to read notation and experiments on mostly toy datasets. The authors updated the paper significantly. The notation simplified and is also summarized in Table 1, which improves readability. However, some issues, such as mostly toy datasets, remained. Therefore, the paper cannot be accepted at this time. These are additional comments of the reviewers from the recommendation to accept / reject. I hope that you will find them useful: **Reviewer Apea** The clarity of the paper could still be improved, and the benchmarks are also weak, in particular the low-dimensional, artificial functions worry me. I'm also still concerned regarding the fixed GP parameters. I see reasons why prior work also did it that way, but I nevertheless believe that this prevents any reasonable insights into practical applications of the proposed approach. 
**Reviewer 87Xf** First, regarding weakness #1, I agree with the authors that when the constraints are not independent (as assumed), a potential fix is to model them using a multi-output GP which explicitly characterizes their correlation. But, the rebuttal does not address how that would affect their theoretical results which might be built on that assumption. The added text in Remark 4.2 does not seem to elaborate more on this. Hence, while I believe this is a potential fix but my impression is that the authors have not quite fleshed it out. Second, regarding weakness #3, I have the impression that the authors had misunderstood my question or maybe I simply do not understand the main point of the response. I was asking that in the spirit of speeding up the GP computation, why wouldn't the authors simply adopt sparse GPs to achieve this instead of leveraging prior pre-computation, which is not quite as effective. Certainly, if the combined data from both source and target are small, this should not be an issue. But, I tend to think that that should not be the case here since the authors' response to weakness #2 indicated that the combined source data is expected to be large. In this case, this might be a problem because the complexity of the pre-computed approach devised by the authors is quadratic in the amount of source data. Third, regarding weakness #4, the authors revealed that the maximization of the acquisition function is based on a discretization. But, the detail regarding the discretization is not quite clear, e.g., will it impact the derived guarantee? how will it impact the computational cost, especially when the discretized mesh can grow exponentially in the no. of input dimensions? Last, regarding weakness #5, even with the addition of the Hartman dataset, it is still the case that the empirical studies have been conducted mostly on very low-dimensional (toy) datasets. 
Other than the obvious need to demonstrate the practical significance of the algorithm on more sophisticated, high-dimensional datasets, there is also a concern arising from the fact that the acquisition function is optimized by exhaustive enumeration over a discretized mesh. Will it still be scalable on high-dimensional domains? ==================================================
# SpikeGPT: Generative Pre-Trained Language Model With Spiking Neural Networks Rui-Jie Zhu *rzhu48@ucsc.edu* Department of Electrical and Computer Engineering University of California, Santa Cruz Qihang Zhao zhaoqihang@kuaishou.com Kuaishou Guoqi Li *guoqi.li@ia.ac.cn* Institute of Automation Chinese Academy of Sciences Jason K. Eshraghian *jsn@ucsc.edu* Department of Electrical and Computer Engineering University of California, Santa Cruz Reviewed on OpenReview: *https://openreview.net/forum?id=gcf1anBL9e* ## Abstract As the size of large language models continues to scale, so do the computational resources required to run them. Spiking Neural Networks (SNNs) have emerged as an energy-efficient approach to deep learning that leverages sparse and event-driven activations to reduce the computational overhead associated with model inference. While they have become competitive with non-spiking models on many computer vision tasks, SNNs have proven to be more challenging to train. As a result, their performance lags behind modern deep learning, and until now, SNNs have yet to succeed at language generation on large-scale datasets. In this paper, inspired by the Receptance Weighted Key Value (RWKV) language model, we successfully implement 'SpikeGPT', a generative language model with binary, event-driven spiking activation units. We train two variants of the proposed model, with 46M and 216M parameters. To the best of our knowledge, SpikeGPT was the largest backpropagation-trained SNN model at the time of release, rendering it suitable for both the generation and comprehension of natural language. We achieve this by modifying the transformer block to replace multi-head self-attention, reducing quadratic computational complexity $O(T^2)$ to linear complexity $O(T)$ with increasing sequence length. Input tokens are instead streamed in sequentially to our attention mechanism (as with typical SNNs). 
Our experiments show that SpikeGPT remains competitive with non-spiking models on the tested benchmarks, while requiring 32.2× fewer operations when processed on neuromorphic hardware that can leverage sparse, event-driven activations. Our code implementation is available at https://github.com/ridgerchu/SpikeGPT. ## 1 Introduction Artificial Neural Networks (ANNs) have recently achieved widespread, public-facing impact in Natural Language Processing (NLP), but with a significant computational and energy consumption burden across training and deployment. As an example, training GPT-3 was projected to use 190,000 kWh of energy (Brown et al., 2020; Dhar, 2020; Anthony et al., 2020). Spiking neural networks (SNNs), inspired by neuroscientific models of neuronal firing, offer a more energy-efficient alternative by using discrete spikes to compute and transmit information (Maass, 1997). Spike-based computing combined with neuromorphic hardware holds great potential for low-energy AI (Davies et al., 2018; Merolla et al., 2014; Sun et al., 2022; Nath et al., 2024), and its effectiveness in integration with deep learning has been demonstrated through numerous studies (Roy et al., 2019; Pfeiffer & Pfeil, 2018; Fang et al., 2021; Eshraghian et al., 2021; Wu et al., 2018; Zhang et al., 2020). While SNNs have shown competitive performance in computer vision tasks such as classification and object detection (Barchid et al., 2023; Kim et al., 2020; Cordone et al., 2022), they have yet to attain similar success in generative models. 
With respect to language generation, this can be attributed to several reasons: i) the absence of an effective language encoding technique for SNNs, ii) the difficulty of training large-scale SNNs due to the extreme constraint on layer-to-layer bandwidth (i.e., binarized spike activations) (Eshraghian et al., 2022), and the lack of informative gradient signals in excessively sparsified models (Eshraghian & Lu, 2022), and iii) the inherent recurrence of SNNs is incompatible with self-attention, where input tokens are passed to the model in parallel (Vaswani et al., 2017). These issues mean that training large-scale SNNs via error backpropagation is extremely challenging, leading to a lack of performant SNNs in language generation. Despite the difficulties faced by recurrent networks in NLP, the sequential structure of linguistic data presents a unique advantage for SNNs. To address these problems, we propose three techniques. First, rather than using an encoder to project information into an additional temporal dimension, SpikeGPT aligns the sequence dimension of language with the temporal dimension of SNNs. This eliminates the need for an encoder as with most prior attention-based SNNs (Zhou et al., 2023; Li et al., 2022; Deng et al., 2024; Qiu et al., 2024). Second, we apply autoregressive training rather than accumulating spikes at the output to calculate the loss, as is common in modern SNNs. Third, although binary activations constrain layer-to-layer bandwidth, we show that using stateful neurons can greatly reduce the adverse impact of binarization. By leveraging the capabilities of the RWKV model and integrating these three techniques, we have developed the SpikeGPT language model. At the time of writing and code release, SpikeGPT was the first SNN to achieve language generation. 
It was also the largest SNN trained to date in terms of parameter count when it was released; the largest version has 216M parameters, which is 3× more than the previous largest SNN (Zhou et al., 2023). The implementation of SpikeGPT is based on integrating recurrence into the Transformer block such that it is compatible with SNNs and eliminates quadratic computational complexity, allowing for the representation of words as event-driven spikes. Combining recurrent dynamics with linear attention enables our network to stream incoming data word-by-word, and commence computation before a sentence has been completed, while still retaining long-range dependencies present in complex syntactic structures. Our experiments show that SpikeGPT achieves competitive performance on all tested datasets while consuming significantly less energy compared to traditional ANN models. Our contributions in the field of NLP and language generation can be succinctly described as follows: 1. we provide the first demonstration of language generation using direct-SNN training; 2. we achieve performance comparable to that of ANNs, while preserving the energy efficiency of spike-based computations and reducing quadratic computational complexity $O(T^2)$ to linear complexity $O(T)$; 3. our results demonstrate that a small-scale variant of the SpikeGPT model with 46 million parameters performs competitively against similar transformer models. Furthermore, it achieves this performance with an estimated 33.2× less energy consumption on asynchronous hardware. ## 2 Related Works Although language generation has not previously been achieved with SNNs, this section provides an overview of how SNNs have been used in basic NLP tasks, and the ways in which transformers have been adopted for SNNs. Spiking Neural Networks for Natural Language Processing. Xiao et al. (2022) proposes a bidirectional SNN for sentiment classification and machine translation tasks. 
Their approach uses spiking encoders, which replace costly multiplication operations with much cheaper additive operations to significantly reduce computational energy consumption. Similarly, Lv et al. (2023b) presents a two-step method to train SNNs for text classification with a simple way to encode pre-trained word embeddings as spike trains. Their results indicate that converted SNNs achieve comparable results to their ANN counterparts and are more robust against adversarial attacks. Furthermore, Diehl et al. (2016) demonstrate the train-and-constrain methodology that enables the mapping of machine-learned recurrent neural networks (RNNs) on a substrate of spiking neurons. The authors achieve 74% accuracy on a question classification task using less than 0.025% of the cores on one TrueNorth chip (Merolla et al., 2014), showcasing the potential for SNNs in classification tasks in NLP. On the pre-trained language model side, two works (Bal & Sengupta, 2023; Lv et al., 2023a) offer direct-trained spiking Bidirectional Encoder Representations from Transformers (BERT) models via knowledge distillation. Transformer in Spiking Neural Networks. The Transformer model, first introduced in Vaswani et al. (2017), has shown significant success in various NLP tasks. However, the application of the Transformer model to SNNs has been relatively limited. The first Spiking Transformer model was proposed in Zhou et al. (2023), which proposes spiking self-attention to model visual features using sparse Query, Key and Value matrices. Li et al. (2022) proposes another variant on Transformer-based SNNs, adopting spatial-temporal attention instead of spatial or temporal-wise attention to better incorporate the attention mechanism within the Transformer. While Transformers were initially proposed to solve NLP tasks, SNN-based Transformers have only succeeded with vision tasks. 
An additional temporal dimension must be added, which increases the computational complexity from quadratic to cubic order ($O(T^3)$) and makes training more expensive. The additional challenges of extreme sparsity, non-differentiable operators, approximate gradients, and single-bit activations that are characteristic of SNNs make training convergence more challenging. The demonstrated image classification tasks have a far smaller number of output classes, which shrinks the scale of demonstrated networks. Image classification also does not exploit the inherent long-range learning capacity of self-attention. Therefore, there is under-explored potential in the application of Transformer models in other SNN-based applications beyond vision tasks. In the following sections, we demonstrate how this computational complexity is reduced to enable scaled-up models that are capable of language generation. ## 3 Methods ## 3.1 Leaky Integrate-And-Fire Neuron We employ the Leaky Integrate-and-Fire (LIF) neuron as the default spiking neuron of our model, a widely used model for SNNs often trained via error backpropagation (Maass, 1997). The LIF dynamics are represented as follows: $$\left\{\begin{array}{l}U[t]=H[t-1]+\beta\left(Y[t]-(H[t-1]-U_{\mathrm{reset}})\right)\\ S[t]=\Theta(U[t]-U_{\mathrm{threshold}})\\ H[t]=U[t]\cdot(1-S[t])\end{array}\right.\tag{1}$$ where $\beta$ is a decay factor, $U$ is the membrane potential (or hidden state) of the neuron, $S$ is the spiking tensor with binarized elements, $Y$ denotes the output of the previous series RWKV block (see Eq. 7), $\Theta(\cdot)$ denotes the Heaviside function, and $H$ represents the reset process after spike emission. We set $U_{\mathrm{threshold}} = 1$, $U_{\mathrm{reset}} = 0$ and $\beta = 0.5$ as done in Zhu et al. (2022) and Yao et al. (2021; 2023). To overcome the non-differentiability of the Heaviside step function $\Theta(\cdot)$ during back-propagation, we employ the arctangent surrogate function during the backward pass. 
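A minimal plain-Python sketch of the forward dynamics in Eq. (1) for a single neuron (the surrogate gradient only affects the backward pass and is omitted; parameter values follow the text):

```python
def lif_forward(y_seq, beta=0.5, u_threshold=1.0, u_reset=0.0):
    """Simulate Eq. (1): membrane update, Heaviside spike, and reset.

    y_seq: sequence of inputs Y[t] for a single neuron.
    Returns the binary spike train S[0..T-1].
    """
    h = u_reset  # hidden state H[t-1], initialized at the reset potential
    spikes = []
    for y in y_seq:
        u = h + beta * (y - (h - u_reset))  # membrane potential U[t]
        s = 1 if u >= u_threshold else 0    # Heaviside Theta(U - U_threshold)
        h = u * (1 - s)                     # reset the state after a spike
        spikes.append(s)
    return spikes
```

With the default β = 0.5, a constant input of 2.0 drives the membrane to threshold in a single step, while a sub-threshold constant input of 1.0 leaves the membrane below threshold over a short horizon.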
The arctangent function $\sigma'(x)=\frac{\alpha}{2\left(1+\left(\frac{\pi}{2}\alpha x\right)^{2}\right)}$ (a sigmoid-like shape) is applied as a 'surrogate gradient' for backward propagation to provide a biased gradient estimator (Fang et al., 2021; Neftci et al., 2019). Additional details on the derivation and training of LIF neurons using backpropagation are provided in Appendix A. ![3_image_0.png](3_image_0.png) Figure 1: Model Architecture. The left portion displays the block-level structure. The middle and right illustrations demonstrate the Spiking RWKV and Spiking RFFN architectures, respectively. Spiking RWKV serves as a token mixer and Spiking RFFN functions as a channel mixer. These components are arranged in a loop with residual connections in a manner akin to a Transformer architecture. ## 3.2 Model Architecture The high-level architecture of SpikeGPT is shown in Fig. 1. Given an embedded input $I \in \mathbb{R}^{T \times d}$, we first use a Binary Embedding (BE) layer to embed the input into a binarized representation, $X_0$ (Eq. 2). $X_0$ is then passed through the $L$ SpikeGPT layers. Similar to the standard Transformer block, a SpikeGPT block consists of a Spiking RWKV (SRWKV) unit and a Spiking Receptance Feed-Forward Network (SRFFN) unit. Residual connections are used in both the SRWKV and SRFFN blocks using the SEW-ResNet (Fang et al., 2021) configuration, which is a well-established standard form within the Spiking ResNet framework with sparse integer spikes. Once the data has traversed through all layers, the model is directed towards the generation head (GH) for the purpose of generating the next token. For natural language understanding (NLU) tasks, the model uses a classification head instead. Sec. 3.6 provides further information. 
$$\begin{array}{llr}X_{0}=\mathrm{BE}(I), & I\in\mathbb{R}^{T\times d},\;X_{0}\in\mathbb{R}^{T\times d}, & (2)\\ X_{l}^{\prime}=\mathrm{SRWKV}(X_{l-1})+X_{l-1}, & X_{l}^{\prime}\in\mathbb{R}^{T\times d},\;l=1\ldots L, & (3)\\ X_{l}=\mathrm{SRFFN}(X_{l}^{\prime})+X_{l}^{\prime}, & X_{l}\in\mathbb{R}^{T\times d},\;l=1\ldots L, & (4)\\ Y=\mathrm{GH}(X_{L}). & & (5)\end{array}$$ In Section 3.3, detailed information about Binary Embedding (BE) will be provided. Sec. 3.4 will primarily focus on the SRWKV block, while Sec. 3.5 will delve into the Spiking Receptance Feed-Forward Network (SRFFN) block. Finally, in Sec. 3.6, we will describe the training and inference details employed for natural language generation (NLG) and natural language understanding (NLU). At the commencement of each block, a token-shift is applied, which is detailed in Appendix C.1. ## 3.3 Binary Embedding To maintain consistency with the binary activations of SNNs, we propose a binary embedding step to convert the continuous outputs of the embedding layer into binary spikes. The conversion is performed using a Heaviside function for feed-forward propagation, which maps the continuous values to binary spikes. This allows us to convert continuous embedding values into spikes using non-differentiable functions, while still being able to perform backpropagation and update the weights of the embedding layer (Neftci et al., 2019). During backpropagation, the arctangent surrogate function is applied, as is the case with LIF neurons. ## 3.4 Efficient Processing Of Variable-Length Sequences Using Spiking RWKV In this section, we revisit the self-attention mechanism, which endows Transformers with the ability to process variable-length sequences, enabling advanced language modeling capabilities. 
Subsequently, we highlight a fundamental challenge: the incompatibility of self-attention with SNN language modeling. Self-attention relies on access to the entire sequence for effective language modeling and is not inherently recurrent, making it incompatible with SNNs. Furthermore, it involves dynamic matrix-matrix multiplication operations. The associated computational overhead compromises the objective of reducing computational costs via SNNs. To address these limitations, we introduce the Spiking RWKV (SRWKV) module as a replacement for the self-attention module. This module draws inspiration from the RWKV model (Peng et al., 2023), an RNN model renowned for achieving Transformer-level proficiency across a spectrum of model sizes. This module serves the same essential function of facilitating information exchange across token dimensions but achieves this through element-wise products rather than matrix-matrix multiplication, while also accommodating the recurrent nature of SNNs.

Recall Self-Attention. In the context of Transformers, the self-attention mechanism operates on an input sequence denoted as $X \in \mathbb{R}^{T\times d}$ and employs a scaled dot-product attention technique. Mathematically, self-attention is expressed as follows:

$$f(X)=\mbox{softmax}\left(\frac{Q(K)^{T}}{\sqrt{d_{k}}}\right)V,\quad Q=XM_{Q},\ K=XM_{K},\ V=XM_{V}\tag{6}$$

Here, $M_Q \in \mathbb{R}^{d\times d_k}$, $M_K \in \mathbb{R}^{d\times d_k}$, and $M_V \in \mathbb{R}^{d\times d_v}$ represent linear transformations, while $d_k$ and $d_v$ signify the dimensions of the key and value vectors, respectively. This mechanism enables dynamic information mixing across token dimensions and accommodates sequences of variable lengths by leveraging dynamic matrix-matrix multiplication ($Q(K)^T$) and the softmax function (which necessitates access to the entire sequence). Nevertheless, in the case of a typical SNN characterized by its event-driven nature, the recurrent structure of the network poses a challenge.
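For reference, the scaled dot-product attention of Eq. (6) can be sketched in plain Python. This is a minimal, unbatched, single-head version over toy list-of-lists inputs; `Q`, `K`, `V` are assumed to be already projected.

```python
import math

def softmax(row):
    # numerically stable softmax over one row of attention scores
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Eq. (6): f(X) = softmax(Q K^T / sqrt(d_k)) V for T x d_k inputs."""
    d_k = len(K[0])
    # scores = Q K^T / sqrt(d_k): a T x T matrix, one row per query
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
               for k in K] for q in Q]
    # each softmax row needs every score, i.e. access to the whole sequence
    weights = [softmax(row) for row in scores]
    # output = attention-weighted sum of the value vectors
    return [[sum(w * v[j] for w, v in zip(row, V)) for j in range(len(V[0]))]
            for row in weights]
```

The $T \times T$ score matrix and the row-wise softmax make explicit why every query needs the full sequence at once, which is exactly what a step-by-step spiking network cannot provide.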
SNNs can only generate spikes based on information from previous time-steps, making it impossible to access the entire sequence at once, as required by self-attention. One potential solution involves introducing an additional temporal dimension, denoted as $T_{additional}$, to allow spiking neurons to feed forward. However, this approach would substantially increase the model's size and necessitate additional computations, scaling proportionally with $T_{additional}$. This is not practical, especially for large-scale language models reaching the scale of hundreds of billions of parameters. Therefore, the optimal approach is not to introduce a new dimension, but to leverage the existing sequence dimension for forward propagation. This is why a spiking transformer would benefit from a recurrent alternative to self-attention.

Spiking RWKV. To address the challenges posed by self-attention in the context of language modeling, we introduce the Spiking RWKV, a novel approach that incorporates both recurrent and block-level spiking features. The foundation of the vanilla RWKV is inspired by the Attention Free Transformer (Zhai et al., 2021), serving as a replacement for the traditional self-attention mechanism. For comprehensive information on vanilla RWKV, including its overall structure and parallelization techniques, we refer the reader to Appendix C.1. In contrast to self-attention, which operates on the entire sequence dimension, the spiking input of Spiking RWKV is unrolled as $X[t] \in \mathbb{R}^{1\times d}$, where $t$ represents the time-step index, rather than the entire sequence $X \in \mathbb{R}^{T\times d}$. Similar to self-attention, Spiking RWKV initiates by applying linear transformations: $R = X[t]M_R$, $K = X[t]M_K$, and $V = X[t]M_V$, where $M_R, M_K, M_V \in \mathbb{R}^{d\times H}$, with $H$ indicating the hidden size. Notably, all inputs to the three MLP layers are in the form of spiking activations.
The subsequent step involves generating the output using the following equation:

$$Y[t+1]={\mathcal{SN}}\left(\sigma(R[t])\odot{\frac{\exp(W_{f})\odot\exp(K[t])\odot V[t]+A[t]}{\exp(W_{f})\odot\exp(K[t])+B[t]}}\right)\tag{7}$$

The hidden states $A$ and $B$ are defined as:

$$A[t]=\exp(K[t])\odot V[t]+\exp(W_{d})\odot A[t-1]\tag{8}$$

and

$$B[t]=\exp(K[t-1])+\exp(W_{d})\odot B[t-1]\tag{9}$$

Here, $\mathcal{SN}$ refers to spiking neuron layers, the $\odot$ symbol represents element-wise multiplication, and $W_d \in \mathbb{R}^{1\times d}$ and $W_f \in \mathbb{R}^{1\times d}$ are decay vectors, where $W_f$ is responsible for weighting the current time-step's information, and $W_d$ plays a role in decaying the influence of information from previous time-steps. Details of $W_d$ and $W_f$ can be found in Appendix C.4. Notably, Spiking RWKV diverges from self-attention by not employing matrix-matrix multiplication to dynamically adjust the attention map according to the input. Instead, it utilizes the learnable vectors $W_d$ and $W_f$ to recurrently blend the token dimensions of the input. While self-attention dynamically reweights token dimensions based on the input, Spiking RWKV adopts a continuous decay strategy rather than a dynamic weighted attention strategy. Because the hidden states $A[t]$ and $B[t]$ encapsulate information from the previous time-step, this module can effectively blend information across token dimensions, much like self-attention. Additionally, this approach is particularly well suited for language modeling: when using self-attention in causal language models, it is necessary to mask out half of the attention map to prevent information leakage. In contrast, Spiking RWKV combines the inherent recurrent properties of SNNs with RWKV, and both naturally operate in a unidirectional manner.
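A per-channel scalar sketch of the recurrence in Eqs. (7)–(9) is given below. This is a minimal illustration: the spiking neuron layer $\mathcal{SN}$ is reduced here to a simple threshold rather than the full LIF dynamics of Eq. 1, and all quantities are scalars for one channel.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def srwkv_step(r_t, k_t, v_t, k_prev, a_prev, b_prev, w_f, w_d, threshold=1.0):
    """One per-channel Spiking RWKV update following Eqs. (7)-(9).
    The hidden states a, b carry the exponentially decayed history, so no
    access to the full sequence is ever needed."""
    a_t = math.exp(k_t) * v_t + math.exp(w_d) * a_prev       # Eq. (8)
    b_t = math.exp(k_prev) + math.exp(w_d) * b_prev          # Eq. (9)
    num = math.exp(w_f) * math.exp(k_t) * v_t + a_t
    den = math.exp(w_f) * math.exp(k_t) + b_t
    y = 1.0 if sigmoid(r_t) * num / den >= threshold else 0.0  # Eq. (7), SN as threshold
    return y, a_t, b_t
```

Iterating `srwkv_step` over $t$ yields the unidirectional, constant-memory token mixing described above: the per-step cost is $O(d)$, independent of the sequence length.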
## 3.5 Spiking Receptance Feed-Forward Networks (SRFFN)

Each block in our model contains a fully connected feed-forward network with a gating mechanism (SRFFN), which is applied to the normalized and token-shifted outputs of each Spiking RWKV module. This SRFFN module consists of three linear transformations as follows:

$$Y^{\prime}[t]={\mathcal{SN}}(\sigma(M_{P}X[t])\odot M_{S}(\mathrm{Activation}(M_{G}X[t])))\tag{10}$$

where $\mathcal{SN}$ represents spiking neuron layers and $Y'[t]$ denotes the output of the SRFFN at time-step $t$, which is then passed to the spiking neuron (Eq. 1). $\{M_P, M_G, M_S\} \in \mathbb{R}^{d\times H}$ are learnable parameters of the linear transformations, and Activation represents the activation function. Depending on the specific SpikeGPT variant being used, we employ either the *ReLU*² or $\mathcal{SN}$ activation function. The SRFFN is a variant of the Gated Linear Unit (GLU) (Dauphin et al., 2017), which controls the degree of information flowing into the model via $\sigma(M_P X[t])$. To maintain consistency between SRFFN and GEGLU parameters (Shazeer, 2020), we set the size of $H$ in the SRFFN to $4d$. After the feed-forward process in the SRFFN layer, the resulting output $Y'[t]$ is subsequently fed into a LIF neuron, as depicted in Eq. 1. In this context, $Y[t]$ is replaced with the updated value $Y'[t]$ to preserve the block-level spiking and binary characteristics, thereby ensuring the desired sparsity is maintained.

![5_image_0.png](5_image_0.png)

Figure 2: Training SpikeGPT for NLG and NLU tasks.

## 3.6 Training & Inference

Our training procedure consists of two stages. The first stage is pre-training on a large-scale corpus to build a high-capacity language model, and the second stage is task-specific fine-tuning for downstream tasks, such as natural language generation (NLG) and natural language understanding (NLU).

NLG Tasks. We adopt a decoder-only pre-training paradigm similar to GPT to train the model.
Specifically, our model utilizes SRWKV and SRFFN modules to process the input token sequence and generate an output distribution for each target token. Formally, given a token sequence $C = \{c_1, c_2, \cdots, c_n\}$, we use the standard language modeling objective to maximize the following likelihood:

$$\mathcal{P}(c_{t})=\text{softmax}(Y^{\prime}[t]W_{e}^{T})\tag{11}$$

$$\mathcal{L}_{p}=\sum_{i=1}^{T}\log\mathcal{P}(c_{i}|c_{1},c_{2},\cdots,c_{i-1};\Theta)\tag{12}$$

where $W_e^T$ is the token embedding matrix and $\Theta$ is the set of all model parameters. After pre-training the model using the loss in Eq. 12, the model parameters are fine-tuned to adapt to different downstream tasks in NLG and NLU. For natural language generation tasks, we define a new dataset $D_G$, where each sample consists of a sequence of input tokens. The consistency between the pre-training process and the NLG task allows the fine-tuning procedure to reuse the method and objectives adopted in the pre-training pipeline (Eq. 11 and Eq. 12), which maximizes the likelihood of the target token based on the information preceding the target position.

NLU Tasks. For NLU tasks, such as sentiment classification, the fine-tuning process requires several modifications to the top level of the pre-trained SpikeGPT model, as shown in Fig. 2. Specifically, given a new dataset for the downstream task $D_U$, where each instance consists of a token sequence $\mathcal{C}_i$ and a label $l_i$, the following objective is maximized:

$${\mathcal{L}}_{NLU}=\sum_{({\mathcal{C}}_{i},l_{i})}l_{i}\cdot\log\mathcal{P}({\mathcal{C}}_{i})\tag{13}$$
where $\mathcal{P}(\mathcal{C}_i)$ is defined as:

$$\mathcal{P}(\mathcal{C}_{i})=\text{softmax}(YW_{m}^{T})\tag{14}$$

$W_m^T$ contains the learnable parameters of the MLP module in the top layer, and $Y$ is generated by passing the input token sequence $\mathcal{C}_i$ through the model and then average-pooling the embedding of each target position, which is formalized as:

$$Y=\text{AvgPooling}(X_{L}[1],X_{L}[2],\cdots,X_{L}[T])\tag{15}$$

In the inference phase, we directly provide a prompt for the NLG task, let the model continue the text, and calculate bits-per-character (BPC) and perplexity (PPL) as evaluation metrics. For the NLU task, we pass the inputs through our model to obtain the embedding over target positions, and then apply a mean-pooling operation to all embedded tokens to predict the label of each instance.

## 3.7 Theoretical Energy Consumption Analysis

The key attribute of SNNs here is their remarkable energy efficiency due to dynamical sparsity. We provide a theoretical energy consumption analysis of SpikeGPT's constituent components, while evaluating its feasibility for deployment on asynchronous hardware. The block-level theoretical energy consumption analysis is shown in Tab. 1.

Spiking RWKV. The SRWKV block comprises two integral components: the Spiking MLP layer, which receives sparse spike inputs, and the full-precision element-wise product. Within the Spiking RWKV, akin to self-attention mechanisms, the spike input denoted as $X$ undergoes projection through MLP layers, resulting in three distinct variables, namely $R$, $K$, and $V$. Notably, all these MLP layers operate on spike inputs, facilitating the transformation of matrix multiplication into sparse addition. It is worth highlighting that the non-binary element-wise product, as depicted in Tab. 1, constitutes a relatively minor proportion of operations compared to the MLP layers. Importantly, all these operations can be efficiently supported by neuromorphic chip technology.

SRFFN. The SRFFN block primarily consists of MLP layers and element-wise product operations.
However, an alternative operation is introduced when employing the *ReLU*² activation function instead of spiking neurons. While *ReLU*² activations are not binary, they induce dynamical sparsity. We can further employ quantization techniques to convert them into integers, a common practice in LLMs (Liu et al., 2023; Dettmers et al., 2022b;a) and SNNs (Venkatesh et al., 2024; Shen et al., 2023), where 8-bit activations and weights can be used with no noticeable loss in performance (Xiao et al., 2023). Moreover, we investigated the possibility of using LIF to replace *ReLU*² in Sec. 4.4, which demonstrates that the impact of this substitution on performance is negligible. With regard to integer sparse spikes, modern neuromorphic chips (Orchard et al., 2021; Parpart et al., 2023) accommodate graded spikes that support up to 32-bit integers, rendering the entire SRFFN block compatible with neuromorphic hardware, requiring solely addition operations for its execution.

Table 1: Energy Evaluation. $FL_{MLP}$ represents the floating-point operations (FLOPs) of the MLP layers in the ANNs, while $\hat{R}$ denotes the spike firing rate (the proportion of non-zero elements) within the spike matrices. In computing operation counts, models are parameterized with $T = 3072$, $d = 512$, and $\hat{R} = 0.15$, all derived from real test data of SpikeGPT-46M. Additionally, energy consumption is characterized by assuming $E_{MAC} = 4.5\,$pJ and $E_{AC} = 0.9\,$pJ based on Horowitz (2014) and Rathi & Roy (2021). Other than the element-wise operations involving dot products in SpikeGPT, all other module inputs are in spike form, leading to a significant reduction in operations by a factor of up to 32.2× overall. This is reflected in the Vanilla GPT-to-SpikeGPT (V/S) ratio.

| Module | Operation | Vanilla GPT (Radford et al., 2018) (with GLU (Dauphin et al., 2017)) | SpikeGPT (this work) | Vanilla GPT (pJ) | SpikeGPT (pJ) | V/S |
|---|---|---|---|---|---|---|
| Attention | Q/R, K, V | $E_{MAC} \cdot 3Td^2$ | $E_{AC} \cdot \hat{R} \cdot 3Td^2$ | 1.09 × 10¹⁰ | 3.25 × 10⁸ | 33.3× |
| Attention | f(Q/R, K, V) | $E_{MAC} \cdot 2T^2d$ | $E_{MAC} \cdot 7Td$ | 8.50 × 10⁷ | 4.95 × 10⁷ | 1.71× |
| Attention | Scale | $E_{MAC} \cdot T^2$ | - | 4.25 × 10⁷ | - | - |
| Attention | Softmax | $E_{MAC} \cdot 2T^2$ | - | 8.50 × 10⁷ | - | - |
| FFN | Layer 1 | $E_{MAC} \cdot FL_{MLP1}$ | $E_{AC} \cdot \hat{R} \cdot FL_{MLP1}$ | 3.62 × 10⁹ | 1.09 × 10⁸ | 33.3× |
| FFN | Layer 2 | $E_{MAC} \cdot FL_{MLP2}$ | $E_{AC} \cdot \hat{R} \cdot FL_{MLP2}$ | 1.45 × 10¹⁰ | 4.35 × 10⁸ | 33.3× |
| FFN | Layer 3 | $E_{MAC} \cdot FL_{MLP3}$ | $E_{AC} \cdot \hat{R} \cdot FL_{MLP3}$ | 3.62 × 10⁹ | 1.09 × 10⁸ | 33.3× |
| Overall | - | - | - | 3.29 × 10¹⁰ | 1.02 × 10⁹ | 32.2× |

## 4 Experiments

## 4.1 Experiment Settings

We test two variants of the 46-million-parameter model: one where $T = 1024$ and another where $T = 3072$. We used the Enwik8 dataset to conduct both training and testing at the 46M scale, and our most extensive model with 216 million parameters was trained on the OpenWebText2 (Gao et al., 2020) corpus for pre-training. In our 46M model, we employ a character-level tokenizer, as has been done in previous work (Zhai et al., 2021). To evaluate the performance of our model, we calculate its BPC metrics. To mitigate overfitting, we apply dropout after the output of each SRFFN block with a dropout ratio of 0.03. The 216M model with pre-training consists of 18 layers and a feature dimension $d = 768$. We employ the Byte Pair Encoding (BPE) tokenizer and share the same hyper-parameters as GPT-NeoX (Black et al., 2022).
Due to the availability of sufficient data for pre-training, we do not incorporate dropout as we did in our 46M model, and we remove the binary embedding, instead using the first-layer neurons for encoding. To facilitate better convergence, we utilize a warmup technique during the first 500 training steps. For both the 46M and 216M models, we use the Adam optimizer and set the learning rates to 6 × 10⁻⁴ and 4 × 10⁻⁴, respectively. All experiments were conducted on four NVIDIA V100 graphics cards. The 46M and 216M models were trained for 12 and 48 hours, respectively. We evaluated SpikeGPT on two major language-related tasks: Natural Language Generation (NLG) and Natural Language Understanding (NLU). For NLG, we evaluated the text generation performance of SpikeGPT on three classic text generation datasets: Enwik8 (Mahoney, 2011), WikiText-2 (Merity et al., 2017), and WikiText-103 (Merity et al., 2017). These datasets are widely used to measure a model's ability to perform compression and for benchmarking. For NLU, we evaluated the performance of SpikeGPT on four classic text classification datasets: MR (Pang & Lee, 2005), SST-5 (Socher et al., 2013), SST-2 (Socher et al., 2013), and Subj (Pang & Lee, 2004). These datasets cover sentiment analysis and subjective/objective classification. Our implementation is based on PyTorch (Paszke et al., 2019) and SpikingJelly (Fang et al., 2020). For detailed information regarding the datasets and baseline methods, please refer to Appendix B. Zero-shot text samples from this experiment can be found in Appendix E.

## 4.2 Results On Natural Language Generation Tasks

A summary of results is provided in Tabs. 2 and 3. This includes the BPC and PPL achieved on NLG tasks using SpikeGPT trained on Enwik8, WikiText-103, and WikiText-2 compared to several baselines, including 46M and 216M parameter variants.

(github.com/idiap/fast-transformers), and all other models are implemented in native PyTorch.
| Method | Spiking | L | d | T | Train BPC | Test BPC | Complexity | Params |
|---|---|---|---|---|---|---|---|---|
| Transformer | ✗ | 12 | 512 | 1024 | 0.977 | 1.137 | O(T² · d) | 43.0M |
| Reformer | ✗ | 12 | 512 | 1024 | 1.040 | 1.195 | O(T log T · d) | 40.1M |
| Synthesizer | ✗ | 12 | 512 | 1024 | 0.994 | 1.298 | O(T · d²) | 42.8M |
| Linear Transformer | ✗ | 12 | 512 | 1024 | 0.981 | 1.207 | O(T · d²) | 43.0M |
| Performer | ✗ | 12 | 512 | 1024 | 1.002 | 1.199 | O(T · d² log d) | 43.0M |
| Stacked LSTM | ✗ | 7 | - | - | 1.420 | 1.670 | O(T · d²) | - |
| SHA-LSTM (no attention) | ✗ | 4 | 1024 | 1024 | - | 1.330 | O(T · d²) | - |
| SpikeGPT 46M | ✓ | 12 | 512 | 1024 | 1.113 | 1.283 | O(T · d) | 46.1M |
| SpikeGPT 46M | ✓ | 12 | 512 | 3072 | 0.903 | 1.262 | O(T · d) | 46.1M |

As shown in Tab. 2, the generative performance of SpikeGPT far surpasses that of models with an LSTM backbone, and approaches or even surpasses some simplified Transformer variants, such as the Linear Transformer and Synthesizer. However, it should be pointed out that there is still a certain gap between SpikeGPT and the vanilla Transformer. While increasing the size of $T$ significantly reduces the training BPC of SpikeGPT, its test BPC does not change significantly, indicating that SpikeGPT is potentially suffering from overfitting. We also compared the perplexity of SpikeGPT and GPT-2 on the WikiText-2 and WikiText-103 text generation tasks. The results are shown in Tab. 3. In the interest of fairly comparing models of similar scales, we selected GPT-2 small and GPT-2 medium (Radford et al., 2019), whose parameter sizes are similar to that of the fine-tuned 216M SpikeGPT. We found that after fine-tuning, the performance of SpikeGPT on WikiText-2 surpasses that of the GPT-2 series.
Unfortunately, the performance of SpikeGPT on the larger WikiText-103 dataset has fallen behind the GPT-2 series models, which suggests a potential need for refined, and perhaps more sophisticated, training methodologies for SpikeGPT when dealing with larger-scale corpora, e.g., knowledge distillation (Bal & Sengupta, 2023).

Table 3: Results on WikiText-2 and WikiText-103 measured in token-level perplexity. Lower values indicate better performance. We report the perplexity when the lowest value was achieved on the validation datasets. Note that for WikiText-2, we use both the WikiText-103 and WikiText-2 training sets to extend the training corpus.

| Method | Parameters | WikiText-103 Val. | WikiText-103 Test | WikiText-2 Val. | WikiText-2 Test |
|---|---|---|---|---|---|
| GPT-2 Small (Radford et al., 2019) | 124M | - | 29.41 | - | 37.50 |
| GPT-2 Medium (Radford et al., 2019) | 346M | - | 26.37 | - | 22.76 |
| SpikeGPT with pre-training | 216M | 39.92 | 39.75 | 19.17 | 18.01 |

## 4.3 Results On Natural Language Understanding Tasks

For NLU tasks, we utilize SpikeGPT as a dynamic embedding generator that constructs embeddings based on context. We compare SpikeGPT with text classification algorithms of similar scales, including LSTM (Hochreiter & Schmidhuber, 1997), TextCNN (Kim, 2014), BERT (Kenton & Toutanova, 2019), and the latest SNN-based text classification algorithm, TextSCNN (Lv et al., 2023b). The accuracy on four datasets is shown in Tab. 4. The fine-tuned 216M SpikeGPT achieves the second-highest performance among the models, surpassed only by BERT. BERT is a bidirectional Transformer encoder that uses masked training to obtain a high-quality text embedding. However, unlike SpikeGPT, BERT does not have the capability to generate text directly.
Our 46M model without any fine-tuning also achieves competitive results compared to the baseline models, indicating the potential of SpikeGPT in NLU tasks. We also analyze the complexity of each method and show that SpikeGPT achieves linear complexity by using spiking neurons and a recurrent structure. Unlike TextSCNN, our model does not require an additional temporal dimension for processing, as it uses the sequence dimension iteratively during the forward pass through the spiking neurons.

Table 4: Results of NLU tasks on four text classification datasets using both SNN and ANN methods, measured in classification accuracy (higher is better). In the 'complexity per layer' column, we compute the complexity of each method using the following notation: $T$ is the sequence length, $d$ is the embedding dimension, $K$ is the convolution kernel size, and $T_{additional}$ is the additional temporal dimension for feed-forward processing. Our model combines recurrent and spiking features and does not need an extra temporal dimension for feed-forward processing, as it exploits the inherent temporal dimension in the language model.

| Method | Spiking | Recurrent | Complexity per layer | SST-2 | SST-5 | MR | Subj. |
|---|---|---|---|---|---|---|---|
| TextCNN (Kim, 2014) *EMNLP* | ✗ | ✗ | O(K · T · d²) | 81.93 | 44.29 | 75.02 | 92.20 |
| TextSCNN-Direct Training (Lv et al., 2023b) *ICLR-2023* | ✓ | ✗ | O(T_additional · K · T · d²) | 75.73 | 23.08 | 51.55 | 53.30 |
| TextSCNN-ANN2SNN+Fine-tune (Lv et al., 2023b) *ICLR-2023* | ✓ | ✗ | O(T_additional · K · T · d²) | 80.91 | 41.63 | 75.45 | 90.60 |
| LSTM (Tai et al., 2015) | ✗ | ✓ | O(T · d²) | 84.92 | 46.43 | 81.60 | - |
| BERT (Kenton & Toutanova, 2019) | ✗ | ✗ | O(T² · d) | 91.73 | 53.21 | 86.72 | - |
| SpikeGPT 46M | ✓ | ✓ | O(T · d) | 80.39 | 37.69 | 69.23 | 88.45 |
| SpikeGPT 216M | ✓ | ✓ | O(T · d) | 82.45 | 38.91 | 68.11 | 89.10 |
| SpikeGPT 216M with pre-training | ✓ | ✓ | O(T · d) | **88.76** | **51.27** | **85.63** | **95.30** |

## 4.4 A Study Of SpikeGPT And RWKV Variants

We conducted a study of SpikeGPT and RWKV variants to evaluate how the binarization of activations impacts performance.
The following models are tested:

1. Vanilla RWKV: The original RWKV model as reported in Peng et al. (2023).
2. Heaviside RWKV: The above RWKV model with binarized neuron activations.
3. SpikeGPT-B: The model above, where the second MLP activation is swapped from *ReLU*² to a spiking LIF neuron.
4. SpikeGPT: The model described and proposed in this paper.

Results for these experiments are shown in Tab. 5. For the Heaviside RWKV variant, we applied the Heaviside function directly to the RWKV layer for binarization, in lieu of the LIF neuron. Specifically, we employed the Heaviside function for the feed-forward process and used the arctangent surrogate function to address the non-differentiable nature of the Heaviside function during the backward pass, similar to the original SpikeGPT. For SpikeGPT-B, we substituted the middle activation function in the SRFFN layer (as illustrated in Eq. 10) with the spiking LIF neuron to introduce layer-level binary spiking. The test BPC of SpikeGPT-B is only marginally higher than that of SpikeGPT. This shows that SRFFN layers are largely tolerant to binarized activations, provided that recurrent dynamics are included. In the absence of recurrent dynamics, the Heaviside RWKV experiment shows the worst performance, indicating that single-order recurrence is necessary to compensate for the performance degradation caused by binarized activations.

Table 5: An ablation study showcasing the impact of different architectural modifications on the performance of SpikeGPT models, with all experiments conducted under the consistent settings of $L = 12$, $T = 1024$, and $d = 512$. The evaluation includes Vanilla RWKV as the baseline model without any modifications; Heaviside RWKV, which replaces all spiking neurons with Heaviside functions; SpikeGPT-B, a variation where the middle activation in the SRFFN block is also a spiking neuron rather than *ReLU*²; and SpikeGPT, representing the default configuration of SpikeGPT.
| Model | Vanilla RWKV | Heaviside RWKV | SpikeGPT-B | SpikeGPT |
|---|---|---|---|---|
| Train BPC | 1.014 | 1.318 | 1.109 | 1.113 |
| Test BPC | 1.201 | 1.403 | 1.306 | 1.283 |

## 4.5 Scaling SpikeGPT

![10_image_0.png](10_image_0.png)

Figure 3: Training Loss for Different Model Sizes on 0.9B Tokens.

Neural scaling laws posit that model error decreases as a power function of training set size and model size, and they have given practitioners confidence in projecting performance. Such projections become important as training becomes increasingly expensive with larger models. A widely adopted best practice in LLM training is to first test scalability with smaller models, where scaling laws begin to take effect (Kaplan et al., 2020; Hoffmann et al., 2022; Yang et al., 2023; Zhu et al., 2024). The GPT-4 technical report revealed that a prediction model just 1/10,000 the size of the final model can still accurately forecast the full-sized model's performance (Achiam et al., 2023). We evaluate the scaling capability of SpikeGPT by training models of three different sizes: 216M, 430M, and 1B2, using the MiniPile (Kaddour, 2023) dataset with 0.9B tokens. Due to constraints on computational resources, we could not fully train the 430M and 1B2 models with OpenWebText2. As shown in Fig. 3, the training loss decreases as the model size increases, demonstrating SpikeGPT's ability to scale effectively. The 216M model achieves a final training loss of 3.20, the 430M model reaches a loss of 3.00, and the 1B2 model attains the lowest loss of 2.88. These results suggest that SpikeGPT follows the neural scaling laws, with larger models exhibiting better performance when trained on the same dataset. Furthermore, we observe that the rate of improvement in training loss slows down as the model size increases, which is consistent with the diminishing returns predicted by scaling laws. The performance gain from 216M to 430M is more significant than the improvement from 430M to 1B2, indicating that the benefits of increasing model size gradually taper off.
This insight can help guide decisions on model size selection, balancing performance gains with computational costs.

## 5 Conclusion

Our results demonstrate that event-driven spiking activations are not only capable of language generation, but that they can do so with fewer high-cost operations. We develop techniques that promote lightweight models for the NLP community, and make large-scale models for the neuromorphic and SNN community more effective. We demonstrate how large SNNs can be trained in a way that harnesses advances in transformers, together with our own serialized version of the attention mechanism. We expect this research to open new directions for large-scale SNNs.

## References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. *arXiv preprint arXiv:2303.08774*, 2023.

Lasse F Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. *arXiv preprint arXiv:2007.03051*, 2020.

Malyaban Bal and Abhronil Sengupta. Spikingbert: Distilling bert to train spiking language models using implicit differentiation. *arXiv preprint arXiv:2308.10873*, 2023.

Sami Barchid, José Mennesson, Jason Eshraghian, Chaabane Djéraba, and Mohammed Bennamoun. Spiking neural networks for frame-based and event-based single object localization. *Neurocomputing*, pp. 126805, 2023.

Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. Gpt-neox-20b: An open-source autoregressive language model. *arXiv preprint arXiv:2204.06745*, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
Advances in Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020. Nicolas Brunel and Mark CW Van Rossum. Lapicque's 1907 paper: From frogs to integrate-and-fire. Biol. Cybern., 97(5):337–339, 2007. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020. Loïc Cordone, Benoît Miramond, and Philippe Thierion. Object detection with spiking neural networks on automotive event data. In *International Joint Conference on Neural Networks (IJCNN)*, pp. 1–8, 2022. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In *International Conference on Machine Learning (ICML)*, volume 70 of Proceedings of Machine Learning Research, pp. 933–941, 2017. URL http://proceedings.mlr.press/v70/dauphin17a.html. Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. *IEEE Micro*, 38(1):82–99, 2018. Haoyu Deng, Ruijie Zhu, Xuerui Qiu, Yule Duan, Malu Zhang, and Liang-Jian Deng. Tensor decomposition based attention module for spiking neural networks. *Knowledge-Based Systems*, 295:111780, 2024. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Gpt3. int8 (): 8-bit matrix multiplication for transformers at scale. *Advances in Neural Information Processing Systems (NeurIPS)*, 35:30318–30332, 2022a. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. *arXiv preprint arXiv:2208.07339*, 2022b. Payal Dhar. The carbon impact of artificial intelligence. *Nature Machine Intelligence*, 2:423–5, 2020. Peter U Diehl, Guido Zarrella, Andrew Cassidy, Bruno U Pedroni, and Emre Neftci. 
Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware. In *IEEE* International Conference on Rebooting Computing (ICRC), pp. 1–8, 2016. Jason K Eshraghian and Wei D Lu. The fine line between dead neurons and sparsity in binarized spiking neural networks. *arXiv preprint arXiv:2201.11915*, 2022. Jason K Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D Lu. Training spiking neural networks using lessons from deep learning. *arXiv preprint arXiv:2109.12894*, 2021. Jason K Eshraghian, Xinxin Wang, and Wei D Lu. Memristor-based binarized spiking neural networks: Challenges and applications. *IEEE Nanotechnology Magazine*, 16(2):14–23, 2022. Wei Fang, Yanqi Chen, Jianhao Ding, Ding Chen, Zhaofei Yu, Huihui Zhou, Timothée Masquelier, Yonghong Tian, and other contributors. Spikingjelly. https://github.com/fangwei123456/spikingjelly, 2020. Accessed: 2022-05-21. Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. *Advances in Neural Information Processing Systems (NeurIPS)*, 34: 21056–21069, 2021. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. In *Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing* (EMNLP), pp. 5484–5495, 2021. Alex Graves. Generating sequences with recurrent neural networks. *arXiv preprint arXiv:1308.0850*, 2013. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 9(8):1735–1780, 1997. Alan L Hodgkin and Andrew F Huxley. 
A quantitative description of membrane current and its application to conduction and excitation in nerve. *The J. of Physiol.*, 117(4):500–544, 1952. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. *arXiv preprint arXiv:2203.15556*, 2022. Mark Horowitz. 1.1 computing's energy problem (and what we can do about it). In 2014 IEEE international solid-state circuits conference digest of technical papers (ISSCC), pp. 10–14. IEEE, 2014. Jean Kaddour. The minipile challenge for data-efficient language models. *arXiv preprint arXiv:2304.08442*, 2023. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *arXiv preprint* arXiv:2001.08361, 2020. A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In *International Conference on Machine Learning (ICML)*, 2020a. URL https: //fleuret.org/papers/katharopoulos-et-al-icml2020.pdf. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In *Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event*, volume 119 of *Proceedings of Machine Learning* Research, pp. 5156–5165, 2020b. URL http://proceedings.mlr.press/v119/katharopoulos20a.html. Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT (NAACL)*, pp. 4171–4186, 2019. Seijoon Kim, Seongsik Park, Byunggook Na, and Sungroh Yoon. Spiking-yolo: spiking neural network for energy-efficient object detection. 
In *Proceedings of the AAAI conference on artificial intelligence (AAAI)*, volume 34, pp. 11270–11277, 2020. Yoon Kim. Convolutional neural networks for sentence classification. In Alessandro Moschitti, Bo Pang, and Walter Daelemans (eds.), *Proceedings of the 2014 Conference on Empirical Methods in Natural Language* Processing (EMNLP), pp. 1746–1751, 2014. URL https://doi.org/10.3115/v1/d14-1181. Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In *8th International Conference on Learning Representations (ICLR)*, 2020. URL https://openreview.net/forum?id= rkgNKkHtvB. Louis Lapique. Recherches quantitatives sur l'excitation electrique des nerfs traitee comme une polarization. J. of Physiol. and Pathology, 9:620–635, 1907. Yudong Li, Yunlin Lei, and Xu Yang. Spikeformer: A novel architecture for training high-performance low-latency spiking neural network. *arXiv preprint arXiv:2211.10686*, 2022. Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. Llm-qat: Data-free quantization aware training for large language models. *arXiv preprint arXiv:2305.17888*, 2023. Changze Lv, Tianlong Li, Jianhan Xu, Chenxi Gu, Zixuan Ling, Cenyuan Zhang, Xiaoqing Zheng, and Xuanjing Huang. Spikebert: A language spikformer learned from bert with knowledge distillation. 2023a. Changze Lv, Jianhan Xu, and Xiaoqing Zheng. Spiking convolutional neural networks for text classification. In *The Eleventh International Conference on Learning Representations (ICLR)*, 2023b. Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659–1671, 1997. Matt Mahoney. Large text compression benchmark, 2011. Stephen Merity. Single headed attention RNN: stop thinking with your head. *CoRR*, abs/1911.11423, 2019. URL http://arxiv.org/abs/1911.11423. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 
Pointer sentinel mixture models. In 5th International Conference on Learning Representations (ICLR), 2017. Paul A Merolla, John V Arthur, Rodrigo Alvarez-Icaza, Andrew S Cassidy, Jun Sawada, Filipp Akopyan, Bryan L Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. *Science*, 345(6197):668–673, 2014. Shimul Kanti Nath, Sujan Kumar Das, Sanjoy Kumar Nandi, Chen Xi, Camilo Verbel Marquez, Armando Rúa, Mutsunori Uenuma, Zhongrui Wang, Songqing Zhang, Rui-Jie Zhu, et al. Optically tunable electrical oscillations in oxide-based memristors for neuromorphic computing. *Advanced Materials*, pp. 2400904, 2024. Emre O Neftci, Hesham Mostafa, and Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. *IEEE Signal* Processing Magazine, 36(6):51–63, 2019. Garrick Orchard, E Paxon Frady, Daniel Ben Dayan Rubin, Sophia Sanborn, Sumit Bam Shrestha, Friedrich T Sommer, and Mike Davies. Efficient neuromorphic signal processing with loihi 2. In *2021 IEEE Workshop* on Signal Processing Systems (SiPS), pp. 254–259. IEEE, 2021. Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. *arXiv preprint cs/0409058*, 2004. Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. *arXiv preprint cs/0506075*, 2005. Gavin Parpart, Sumedh Risbud, Garrett Kenyon, and Yijing Watkins. Implementing and benchmarking the locally competitive algorithm on the loihi 2 neuromorphic processor. In Proceedings of the 2023 International Conference on Neuromorphic Systems, pp. 1–6, 2023. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 
Pytorch: An imperative style, high-performance deep learning library. *Advances in Neural Information Processing Systems (NeurIPS)*, 32, 2019. Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, et al. Rwkv: Reinventing rnns for the transformer era. *arXiv preprint arXiv:2305.13048*, 2023. Michael Pfeiffer and Thomas Pfeil. Deep learning with spiking neurons: Opportunities and challenges. Frontiers in Neuroscience, 12:774, 2018. Xuerui Qiu, Rui-Jie Zhu, Yuhong Chou, Zhaorui Wang, Liang-jian Deng, and Guoqi Li. Gated attention coding for training high-performance and efficient spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 601–610, 2024. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. *OpenAI blog*, 1(8):9, 2019. Nitin Rathi and Kaushik Roy. Diet-snn: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. *IEEE Transactions on Neural Networks and Learning Systems*, 2021. Frank Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. *Psychological Rev.*, 65(6):386, 1958. Kaushik Roy, Akhilesh Jaiswal, and Priyadarshini Panda. Towards spike-based machine intelligence with neuromorphic computing. *Nature*, 575(7784):607–617, 2019. Noam Shazeer. GLU variants improve transformer. *CoRR*, abs/2002.05202, 2020. URL https://arxiv.org/abs/2002.05202. Guobin Shen, Dongcheng Zhao, Tenglong Li, Jindong Li, and Yi Zeng. Is conventional snn really efficient? a perspective from network quantization. *arXiv preprint arXiv:2311.10802*, 2023.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1631–1642, 2013. Pao-Sheng Vincent Sun, Alexander Titterton, Anjlee Gopiani, Tim Santos, Arindam Basu, Wei D Lu, and Jason K Eshraghian. Intelligence processing units accelerate neuromorphic learning. arXiv preprint arXiv:2211.10725, 2022. Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. *arXiv preprint arXiv:1503.00075*, 2015. Y Tay, D Bahri, D Metzler, D Juan, Z Zhao, and C Zheng. Synthesizer: rethinking self-attention in transformer models. *arXiv preprint arXiv:2005.00743*, 2020. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems* (NeurIPS), pp. 5998–6008, 2017. Sreyes Venkatesh, Razvan Marinescu, and Jason K Eshraghian. Squat: Stateful quantization-aware training in recurrent spiking neural networks. *Neuro-Inspired Computational Elements (NICE)*, 2024. Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, Qi Zhang, Fengwei Yu, and Xianglong Liu. Outlier suppression: Pushing the limit of low-bit transformer language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022. Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Spatio-temporal backpropagation for training high-performance spiking neural networks. *Frontiers in neuroscience*, 12:331, 2018. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. Smoothquant: Accurate and efficient post-training quantization for large language models. In International Conference on Machine Learning (ICML), pp. 
38087–38099. PMLR, 2023. Rong Xiao, Yu Wan, Baosong Yang, Haibo Zhang, Huajin Tang, Derek F Wong, and Boxing Chen. Towards energy-preserving natural language understanding with spiking neural networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:439–447, 2022. Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, et al. Baichuan 2: Open large-scale language models. *arXiv preprint arXiv:2309.10305*, 2023. Man Yao, Huanhuan Gao, Guangshe Zhao, Dingheng Wang, Yihan Lin, Zhaoxu Yang, and Guoqi Li. Temporal-wise attention spiking neural networks for event streams classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10221–10230, 2021. Man Yao, Guangshe Zhao, Hengyu Zhang, Yifan Hu, Lei Deng, Yonghong Tian, Bo Xu, and Guoqi Li. Attention spiking neural networks. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, PP: 1–18, 01 2023. Shuangfei Zhai, Walter Talbott, Nitish Srivastava, Chen Huang, Hanlin Goh, Ruixiang Zhang, and Josh Susskind. An attention free transformer. *arXiv preprint arXiv:2105.14103*, 2021. Youhui Zhang, Peng Qu, Yu Ji, Weihao Zhang, Guangrong Gao, Guanrui Wang, Sen Song, Guoqi Li, Wenguang Chen, Weimin Zheng, et al. A system hierarchy for brain-inspired computing. *Nature*, 586 (7829):378–384, 2020. Zhaokun Zhou, Yuesheng Zhu, Chao He, Yaowei Wang, Shuicheng YAN, Yonghong Tian, and Li Yuan. Spikformer: When spiking neural network meets transformer. In *The Eleventh International Conference* on Learning Representations (ICLR), 2023. Rui-Jie Zhu, Qihang Zhao, Tianjing Zhang, Haoyu Deng, Yule Duan, Malu Zhang, and Liang-Jian Deng. Tcja-snn: Temporal-channel joint attention for spiking neural networks. *arXiv preprint arXiv:2206.10177*, 2022. Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason K Eshraghian. Scalable matmul-free language modeling. 
*arXiv preprint arXiv:2406.02528*, 2024.

## Appendix A Details About the LIF Neuron

## A.1 Derivation and Analysis of the LIF Neuron

ANNs and SNNs can model similar network topologies, but SNNs use spiking neurons instead of artificial ones (Rosenblatt, 1958). Spiking neurons, like artificial ones, operate on a weighted sum of inputs. Instead of passing this sum through a sigmoid or ReLU nonlinearity, the sum contributes to the neuron's membrane potential U(t). When the potential exceeds a threshold θ, the neuron spikes, sending a signal to connected neurons. Since most inputs are brief electrical spikes that are unlikely to arrive simultaneously, the membrane potential needs temporal dynamics that sustain it between inputs. In 1907, Louis Lapicque demonstrated that a spiking neuron behaves like a low-pass filter circuit with a resistor R and a capacitor C, a model known as the leaky integrate-and-fire (LIF) neuron (Lapique, 1907; Brunel & Van Rossum, 2007). Physiologically, the capacitance arises from the lipid bilayer of the membrane, and the resistance from ion channels (Hodgkin & Huxley, 1952). This can be modeled as:

$$\tau\frac{dU(t)}{dt}=-U(t)+I_{\mathrm{in}}(t)R\tag{16}$$

where τ = RC is the time constant. For a constant input current, the solution is:

$$U(t)=I_{\mathrm{in}}R+\left[U_{0}-I_{\mathrm{in}}R\right]e^{-\frac{t}{\tau}}\tag{17}$$

This shows an exponential relaxation of U(t) to a steady state. For a sequence-based neural network, the forward Euler method approximates the solution:

$$U[t]=\beta U[t-1]+(1-\beta)I_{\mathrm{in}}[t]\tag{18}$$

Here, $\beta=e^{-1/\tau}$ is the decay rate. In deep learning, the input weight is usually a learnable parameter. Simplifying, the input current coefficient becomes a learnable weight W, leading to $I_{\mathrm{in}}[t]=WX[t]$. Including spiking and reset, we get:

$$U[t]=\beta U[t-1]+WX[t]-S_{\mathrm{out}}[t-1]\theta\tag{19}$$

$S_{\mathrm{out}}[t]\in\{0,1\}$ is the neuron's spike output. If $S_{\mathrm{out}}=1$, the reset term subtracts the threshold θ; otherwise, it has no effect.
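Concretely, the discrete update of Eqs. (18)–(19), combined with the threshold test that follows, can be sketched in a few lines of Python (a minimal sketch; the parameter values and variable names are ours, chosen for illustration only):

```python
def lif_step(u_prev, x_t, s_prev, beta=0.9, theta=1.0, w=1.0):
    """One discrete LIF update following Eq. (19): exponential decay,
    weighted input, and a soft reset that subtracts theta after a spike."""
    u = beta * u_prev + w * x_t - s_prev * theta   # membrane potential
    s = 1 if u > theta else 0                      # threshold test (spike condition)
    return u, s

# Drive the neuron with a constant input and record the spike train.
u, s, spikes = 0.0, 0, []
for t in range(10):
    u, s = lif_step(u, 0.5, s)
    spikes.append(s)
```

With these illustrative parameters, the membrane potential charges over a few steps, fires, is knocked back down by the soft reset, and recharges, producing a regular spike train.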
A spike is generated if:

$$S_{\mathrm{out}}[t]=\begin{cases}1,&\text{if }U[t]>\theta\\ 0,&\text{otherwise}\end{cases}\tag{20}$$

## A.2 Backpropagation Through the LIF Neuron

Here we describe how backpropagation can be applied to LIF neurons. At time-step t, $\mathbf{H}_t^i$ and $\mathbf{U}_t^i$ represent the membrane potential after the neuronal dynamics and after the reset, respectively. The vectors $\mathbf{U}_{th}^i$ and $\mathbf{U}_{reset}^i$ denote the threshold and reset potential, respectively. The weighted inputs from the preceding layer are given by $\mathbf{X}_t^i=\mathbf{W}^{i-1}\mathbf{I}_t^i$. The output spike at time-step t is represented by $\mathbf{S}_t^i=[s_{t,j}^i]$, where $s_{t,j}^i=1$ if the j-th neuron fires a spike and $s_{t,j}^i=0$ otherwise. The gradients propagating back from the next layer are $\frac{\partial L_t}{\partial\mathbf{S}_t^i}$. We can recursively compute the gradients as follows:

$$\frac{\partial L}{\partial\mathbf{H}_{t}^{i}}=\frac{\partial L}{\partial\mathbf{H}_{t+1}^{i}}\frac{\partial\mathbf{H}_{t+1}^{i}}{\partial\mathbf{H}_{t}^{i}}+\frac{\partial L_{t}}{\partial\mathbf{H}_{t}^{i}}\tag{21}$$

$$\frac{\partial\mathbf{H}_{t+1}^{i}}{\partial\mathbf{H}_{t}^{i}}=\frac{\partial\mathbf{H}_{t+1}^{i}}{\partial\mathbf{U}_{t}^{i}}\frac{\partial\mathbf{U}_{t}^{i}}{\partial\mathbf{H}_{t}^{i}}\tag{22}$$

$$\frac{\partial L_{t}}{\partial\mathbf{H}_{t}^{i}}=\frac{\partial L_{t}}{\partial\mathbf{S}_{t}^{i}}\frac{\partial\mathbf{S}_{t}^{i}}{\partial\mathbf{H}_{t}^{i}}\tag{23}$$

Using Eqs. (18)–(20), we obtain:

$$\frac{\partial\mathbf{H}_{t+1}^{i}}{\partial\mathbf{U}_{t}^{i}}=1-\beta\tag{24}$$

$$\frac{\partial\mathbf{U}_{t}^{i}}{\partial\mathbf{H}_{t}^{i}}=1-\mathbf{S}_{t}^{i}+(\mathbf{U}_{reset}^{i}-\mathbf{H}_{t}^{i})\frac{\partial\mathbf{S}_{t}^{i}}{\partial\mathbf{H}_{t}^{i}}\tag{25}$$

$$\frac{\partial\mathbf{S}_{t}^{i}}{\partial\mathbf{H}_{t}^{i}}=\Theta'(\mathbf{H}_{t}^{i}-\mathbf{U}_{th}^{i})\tag{26}$$

$$\frac{\partial\mathbf{H}_{t}^{i}}{\partial\mathbf{X}_{t}^{i}}=\beta\tag{27}$$

Note that $\frac{\partial *}{\partial\mathbf{S}_t^i}=0$ when t ≥ T, and $\mathbf{U}_{-1}^i=\mathbf{U}_{reset}^i$. We use the derivative of the surrogate function σ(x) to define the derivative of the Heaviside function Θ(x). For more details, please refer to the interactive tutorial notebooks of snntorch¹.

¹https://snntorch.readthedocs.io/en/latest/tutorials/index.html

## B Datasets And Baselines

## B.1 Datasets

We conducted experiments on two major types of tasks, Natural Language Generation (NLG) and Natural Language Understanding (NLU). For the NLG tasks, we chose the following three classic language modeling datasets to evaluate the text generation performance of SpikeGPT: Enwik8 (Mahoney, 2011), WikiText-2 (Merity et al., 2017), and WikiText-103 (Merity et al., 2017).

- **Enwik8.** The Enwik8 dataset is a subset of the English Wikipedia XML dump from March 2006. It contains the first 100 million bytes of the dump and is typically used to measure a model's ability to compress data. The dataset is based on the Hutter Prize, a competition for the lossless compression of human knowledge. We split the tokens into three subsets: 90% for training, 5% for validation, and 5% for testing.
- **WikiText-2.** WikiText-2 is a natural language dataset comprising a collection of 2 million tokens derived from Wikipedia articles. It is commonly used for benchmarking natural language processing models.
- **WikiText-103.** The WikiText-103 dataset is a large collection of text extracted from Wikipedia articles that are verified as Good or Featured. It contains over 100 million tokens, covers a wide range of topics and domains, and is suitable for language modeling tasks that require long-term dependencies and a rich vocabulary. WikiText-103 is a larger and more diverse version of WikiText-2.

For the NLU tasks, we chose the following four classic text classification datasets to evaluate the performance of our proposed SpikeGPT: MR (Pang & Lee, 2005), SST-5 (Socher et al., 2013), SST-2 (Socher et al., 2013), and Subj (Pang & Lee, 2004).

- **MR (Pang & Lee, 2005).** Movie review files, labeled based on their overall sentiment polarity (positive or negative) or subjective rating.
- **SST-5 (Socher et al., 2013).** The Stanford Sentiment Treebank with five classes, comprising 11,855 sentences extracted from movie reviews for sentiment classification (very negative, negative, neutral, positive, and very positive).
- **SST-2 (Socher et al., 2013).** A binary version of SST-5, with only two classes (positive and negative).
- **Subj (Pang & Lee, 2004).** The task is to classify sentences in the dataset as subjective or objective.

The sample sizes and text lengths of these datasets vary. If a dataset has no standard train-test split, we follow Lv et al. (2023b) and randomly select 10% of the samples from the entire dataset as the test set.

## B.2 Baselines

To verify the effectiveness of our proposed SpikeGPT on NLG and NLU tasks, we compare it with the following representative baselines.

For NLG, the selected baselines are:

- **Stacked LSTM.** A model architecture that stacks multiple LSTM modules.
- **SHA-LSTM (Merity, 2019).** An LSTM model followed by a single attention-head layer.
- **Transformer (Vaswani et al., 2017)**.
Transformer is a state-of-the-art neural network architecture that leverages self-attention mechanisms to capture global dependencies in sequential data.
- **Reformer (Kitaev et al., 2020)**. Reformer is an efficient variant of the Transformer. By introducing reversible layers and locality-sensitive hashing, it addresses the poor memory and compute efficiency of the standard Transformer and enables efficient processing of long sequences.
- **Synthesizer (Tay et al., 2020)**. Synthesizer is also a Transformer variant; it is a model that learns to synthesize attention weights without token-token interactions.
- **Linear Transformer (Katharopoulos et al., 2020b)**. Linear Transformer is a lightweight variant of the Transformer that linearizes the self-attention mechanism so that its complexity grows linearly with sequence length.
- **Performer (Choromanski et al., 2020)**. A Transformer variant that does not depend on sparsity or low-rank assumptions and estimates attention weights accurately with linear complexity.
- **GPT-2 (Radford et al., 2019)**. GPT-2 is a transformer-based, decoder-only language model. It is an extensively trained, large-scale generative model following the autoregressive paradigm. To ensure compatibility with the parameter sizes of SpikeGPT, we selected GPT-2 medium and GPT-2 small as suitable comparisons.

For NLU, the selected baselines are:

- **LSTM (Hochreiter & Schmidhuber, 1997)**. The LSTM is a type of recurrent neural network with the ability to capture and utilize long-term dependencies in input sequences.
- **TextCNN (Kim, 2014)**. TextCNN is a convolutional neural network architecture specifically designed for text classification tasks, leveraging convolutional layers to capture local patterns and features in textual data.
- **TextSCNN (Lv et al., 2023b)**. A variant of the TextCNN model that combines spiking neural networks.
- **BERT (Kenton & Toutanova, 2019)**.
BERT is a bidirectional language model based on the Transformer encoder-only architecture and an auto-encoding training paradigm.

## C Details Of RWKV

## C.1 Token Shift

Given an input X, we perform a *token shift* operation on it as follows:

$$\begin{array}{l}X_{s}=\mathrm{ZeroPad}_{[0,0,-1,1]}(X)\\ W_{\mathrm{shift}}=\left[\left(\frac{i}{E}\right)^{n/N}\right],\;i=1,\cdots,E\\ \mathcal{X}=W_{\mathrm{shift}}\odot X+(1-W_{\mathrm{shift}})\odot X_{s}\end{array}\tag{28}$$

where ZeroPad² denotes the zero padding operation, $W_{\mathrm{shift}}$ represents a learnable shift mask, E is the embedding size of each token, n is the index of the current block, and N is the total number of blocks.

## C.2 General RWKV

Inspired by the Attention Free Transformer (Zhai et al., 2021), RWKV acts as a replacement for self-attention. It reduces computational complexity by swapping matrix-matrix multiplication with a convolution that sweeps along the time dimension. We subsequently modify this step to instead operate recurrently on the input data. This modification enables compatibility with recurrent SNNs, thus making the model more manageable to run on limited resources. Given an input token-shifted embedding vector $\mathcal{X}$, similar to self-attention, RWKV first applies a linear transform $R=\mathcal{X}M_{R}$, $K=\mathcal{X}M_{K}$, $V=\mathcal{X}M_{V}$.³ $\mathcal{X}$ is a time-varying embedding (varying over the sequence), and so R, K, V are also time-varying. Fig. 1 depicts the sequence unrolled into a set of 2-D matrices. $M_R$, $M_K$ and $M_V$ consist of learnable parameters, where K and V can be likened to the key and value matrices of self-attention. R is referred to as the receptance matrix, where each element indicates the acceptance of past information.
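The token shift of Eq. (28) can be sketched in NumPy as follows (variable names are ours; in the model, $W_{\mathrm{shift}}$ is a learnable mask, and only its initialization is shown here):

```python
import numpy as np

def token_shift(x, n, N):
    """Blend each token's embedding with its predecessor's, per Eq. (28).

    x: (T, E) array of token embeddings; n is the current block index and
    N the total number of blocks."""
    T, E = x.shape
    x_s = np.zeros_like(x)
    x_s[1:] = x[:-1]                                # ZeroPad: previous token, zeros at t=0
    w = (np.arange(1, E + 1) / E) ** (n / N)        # initialization of W_shift
    return w * x + (1.0 - w) * x_s

x = np.arange(12, dtype=float).reshape(4, 3)        # T=4 tokens, E=3 channels
out = token_shift(x, n=1, N=2)
```

Note that the mask weight for the last embedding channel is $(E/E)^{n/N}=1$, so that channel always keeps the current token unchanged, while lower channels mix in more of the previous token.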
Next, the following operation is applied:

$$Y_{t}=\sigma(R_{t})\odot\frac{\sum_{i=1}^{t}\exp(W_{(T+i-t)})\odot\exp(K_{i})\odot V_{i}}{\sum_{i=1}^{t}\exp(W_{(T+i-t)})\odot\exp(K_{i})}\tag{29}$$

where ⊙ is the element-wise product, T is the sequence length, and σ is the non-linearity applied to R, with the default being the sigmoid function; $W\in\mathbb{R}^{T\times E}$ is the positional weight decay matrix (represented as a vector unrolled over time in Fig. 1). W encodes the sequential importance of a given word on subsequent words. It is not directly learnable, but it varies over time with learnable dynamics. Long-range dependence can be captured when the dynamics are learned to decay slowly. The parallel version of RWKV is also shown in Fig. 5. Intuitively, as time t increases, the vector $Y_t$ depends on a longer history, represented by the summation of an increasing number of terms.

For the target position t, RWKV performs a weighted summation over the positional interval [1, t] and takes the Hadamard product of the weighted result with the receptance $\sigma(R_t)$. By taking the sigmoid of $R_t$, the receptance acts as a 'forget gate' that eliminates unnecessary historical information.

## C.3 Self-Attention And RWKV

Distinct from the self-attention mechanism, which computes the matching degree⁴ between tokens directly, RWKV decomposes the calculation of the matching degree into $\alpha_{ij}=\sigma(R_{i})\odot\exp(W_{T-i+1})\odot\exp(K_{j})$, where $\alpha_{ij}\in\mathbb{R}^{E}$ is a vector. Each element $\alpha_{ijk}$ represents the matching degree at the k-th position of the embeddings of the i-th and j-th tokens. In other words, it can be seen as a multi-headed RWKV with E heads, each of which has a hidden size of 1, similar to the multi-headed self-attention (MHA) mechanism.

²The subscript [0, 0, −1, 1] is written with PyTorch syntax in mind, where −1 clips the top row and 1 zero-pads the bottom row.

³$\{M_{R},M_{K},M_{V}\}\in\mathbb{R}^{E\times H}$, where H denotes the hidden size. In RWKV, we set E = H.
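As a concrete reference, Eq. (29) can be evaluated literally with a double loop (a naive NumPy sketch for clarity, not the convolutional or recurrent form used in practice; function and variable names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rwkv_serial(R, K, V, W):
    """Literal evaluation of Eq. (29); R, K, V, W are all (T, E) arrays."""
    T, E = K.shape
    Y = np.zeros((T, E))
    for t in range(T):                          # target position (0-indexed)
        num, den = np.zeros(E), np.zeros(E)
        for i in range(t + 1):                  # summation over positions 1..t
            decay = np.exp(W[T + i - t - 1])    # W_(T+i-t), shifted to 0-indexing
            num += decay * np.exp(K[i]) * V[i]
            den += decay * np.exp(K[i])
        Y[t] = sigmoid(R[t]) * num / den
    return Y

# With R = K = W = 0, the decayed weights are uniform, so Y_t reduces to
# sigmoid(0) = 0.5 times the running mean of V.
V = np.array([[1.0], [3.0]])
Y = rwkv_serial(np.zeros((2, 1)), np.zeros((2, 1)), V, np.zeros((2, 1)))
```

The zero-input check makes the weighted-average structure of Eq. (29) explicit: each output is a normalized, decay-weighted average of past values, gated by the receptance.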
## C.4 Positional Weight Decay

The positional weight bias matrix W is determined by three matrices, $W_d$, $W_c$ and $W_f$, parameterized as follows:

$$W_{d}=\ln(W_{s}),\;W_{s}\in\mathbb{R}^{E\times1}\tag{30}$$

$$W_{c}=\left[(-T+2)\;\;(-T+3)\;\;(-T+4)\;\;\cdots\;\;-1\;\;0\right]\in\mathbb{R}^{1\times(T-1)}\tag{31}$$

$$W_{f}=\left[\ln(p_{k})\;\;\ln(p_{k})\;\;\cdots\;\;\ln(p_{k})\right]\in\mathbb{R}^{E\times1}\tag{32}$$

where $W_s$ is a pre-calculated matrix dependent on the layer and the size of E, the vector $W_d$ contains the decay factors for each time-step, $W_c$ is an indicator of the time-step, and $W_f$ is the initial decay factor that avoids the constant-decay phenomenon of RNNs.
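The shapes in Eqs. (30)–(32) can be materialized in a short sketch. The $W_s$ values below are stand-ins for the pre-calculated matrix, and the final assembly into W is our assumption, since the text does not state the combination explicitly:

```python
import numpy as np

T, E, p_k = 5, 4, 0.3                   # sequence length, embedding size, p_k as in the paper

W_s = np.linspace(1.0, 2.0, E).reshape(E, 1)               # stand-in pre-calculated factors
W_d = np.log(W_s)                                          # Eq. (30), shape (E, 1)
W_c = np.arange(-T + 2, 1, dtype=float).reshape(1, T - 1)  # Eq. (31), shape (1, T-1)
W_f = np.full((E, 1), np.log(p_k))                         # Eq. (32), shape (E, 1)

# Assumed assembly: an outer-product decay (W_d * W_c broadcasts to E x (T-1))
# for past positions, with W_f appended for the most recent position, giving
# an E x T bias whose transpose matches the shape of W in Eq. (29).
W = np.concatenate([W_d * W_c, W_f], axis=1)               # shape (E, T)
```

Because the entries of $W_c$ are non-positive and end at 0, more distant positions receive larger negative biases, i.e., stronger exponential decay once W enters Eq. (29) through $\exp(\cdot)$.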
$W_d$ and $W_f$ are both learnable, and $W_c$ is a static, pre-calculated matrix based on the training time-steps. $p_k$ is a hyperparameter, which is set to 0.3 in this paper.

## D Visualization Of Spike And Membrane Potential

To gain a more comprehensive understanding of SpikeGPT, we conducted visualizations of the spike and membrane potential patterns in the Spiking RWKV layer and Spiking Receptance Feed-Forward Network (SRFFN) layers (Fig. 4). These visualizations clearly reveal distinct differences between the Spiking RWKV and SRFFN layers, indicating diverse information representation patterns. Notably, the SRFFN layer exhibits a higher firing rate, suggesting that it may retain more information, similar to Transformer FFN layers (Geva et al., 2021). It is worth noting that, in various studies, outliers in Transformer-based language models have been shown to significantly impact performance, making them a crucial concern in the quantization of large language models (Wei et al., 2022). Therefore, understanding and addressing the implications of outliers in SpikeGPT are of utmost importance, particularly in the context of optimizing its overall performance and reliability. However, due to the binary nature of SNNs, these outliers cannot be expressed in activation values as in ANNs. Nevertheless, a notable observation is the presence of prominent outliers in the membrane potential of individual neurons, many of which are negative. This finding suggests that SpikeGPT employs a different approach to accommodate and preserve outliers.

![22_image_0.png](22_image_0.png)

(a): Membrane potential of SRFFN layer. (b): Membrane potential of spiking RWKV layer. (c): Spike output of SRFFN layer. (d): Spike output of spiking RWKV layer.

Figure 4: Visualization of spike and membrane potential of neurons. Panels (a) and (b) depict the membrane potential of the SRFFN and spiking RWKV layers, respectively, while panels (c) and (d) display the corresponding spike patterns, where each dot represents a spike event.

![22_image_1.png](22_image_1.png)

Figure 5: A demonstration of the parallelized RWKV model handling the computations involving $e^{W}$ and $e^{K}V$ through a large-kernel convolution operation. Notably, during the convolution's sliding-window process, the model implements a decay mechanism to effectively manage temporal dependencies. For this demonstration, the sequence length is set to N = 4 and the embedding size to E = 3.

⁴A scalar in self-attention: $\alpha_{ij}=Q_{i}K_{j}^{T}$.

## E Generation Examples

| Context → | Who are you? I am the AI. Who are you? I am Jack. Who are you? I am a woman. Who are you? |
|---|---|
| –––– Generated Answer 1 –––– | –––– Generated Answer 2 –––– |
| Amy. The amazing thing is, you can tell the stories about what happened to me. | I am the woman who is, in my book. |

Figure 6: Example of text generated by SpikeGPT. The model is trained on OpenWebText2.

Context → <start>

–––– Generated Answer ––––

"so what do you think of me? " I asked. he was standing behind me, still looking at the posters, and I suddenly knew what he was asking. " what do you think of me? " I looked up at him , the corners of his lips pressed together. "I don't know." he looked at me and I was staring at him, not even realizing what he was doing. all I could do was watch him, look at him, listen to his voice, and then I would fall back to the stage and look at him.

Figure 7: Example of text generated by SpikeGPT. The model is trained on BookCorpus.
Context → In a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese. –––– Generated Answer –––– In a separate case, they found that their earliest ancestor were extinct. They did not have the physical condition of the tree. The discovery of a Chinese zoo was the latest study by archaeologist Peter Horowitz, who led the researchers at the University of Texas, who made the discovery a great example. The researchers also examined the significance of the artifacts of the bones, the humanoid skeleton, the skull, the cell parts and the outer jaws. The species is believed to be located in the mountains and to provide information about the artifacts of the island. The discovery shows that this is a large area in the surrounding area of a local island, which has a huge influx of the populations. Figure 8: Example of text generated by SpikeGPT. The model is trained on OpenWebText2. ``` Context → Pinky The pink ghost's AI is designed to "feel" opposite of the red ghost's behavior. Pinky actually attempts to get out in front of Pac-Man. This is accomplished by setting the target 4 tiles ahead of Pac-Man's current location in the direction that Pac-Man is travelling. One exception to this is when Pac-Man is traveling up. Due to an overflow bug in the code, the calculation includes a left offset equal to the expected up offset. ``` –––– Generated Answer –––– This is where the AI is able to see how much the AI is still in motion. This is because the player can control the location of the player in the real world. It can also predict the movement of the player by moving the screen forward towards the player. This is why the game is so powerful when it comes to setting the target up. This means that the player can be able to see what is happening in the world, but not exactly where the player is. 
This allows the player to see where they are at and how far the player is heading. The game can also be used to track the location of the player, and even track the location of the player. In addition to this, the game can also track the location of enemies in the sky and detect targets. It can also also monitor the location of enemies in the area, such as the locations of buildings, schools, schools, or even buildings where a player is in a situation where the player can see the world around them. One of the most interesting features of the game is the ability to control the direction of the player. This allows players to choose between the location of the player they want to fly, or the distance of the enemy location. This allows players to see the direction of the player, as well as where they want to go. The game also supports navigation, and can also track the location of vehicles in the sky. Figure 9: Example of text generated by SpikeGPT. The model is trained on OpenWebText2. Context → Aaron loves mint chocolate cake, but he requires that it be paired with mini chocolate chips, so I threw some of those in between the layers. I also had a few Peppermint Jo Jos on hand so I crushed them up and threw some of those in along with some crushed meringue cookies because, why not? It's a total smorgasbord of minty chocolate chippy cookie crunchy goodness. I didn't measure how much of each topping I used, but after I tasted the finished product, I wish I had added more. You can add anything you want- crushed candy canes, peppermint bark, etc. And don't be afraid to use a heavy hand. Texture = good. ``` –––– Generated Answer –––– Pour out of the cupcake, cut out some vanilla butter, then add sugar and vanilla. Remove from heat until thick and smooth, then place into the pan for 10 minutes until golden brown. Turn on and off, stirring until the cake's browned, as if it were black chocolate and orange syrup. ``` Figure 10: Example of text generated by SpikeGPT. 
The model is trained on OpenWebText2. Context → Corporal Michael P. Goeldin was an unskilled laborer from Ireland when he enlisted in Company A in November 1860. Goldein survived the war. Corporal Patrick O'Neal, also from Ireland, first enlisted in 1854 and served with Company L, 3d U.S. Artillery, in Oregon. He returned to the East Coast and enlisted in the company in 1860. O'Neal served until 1874, when he was named superintendent of the National Cemetery at Willets Point, New York. Corporal Benjamin Browne was a shoemaker from Orange County, New York. In August 1862, he enlisted in the newly formed 124th New York Volunteers, and was one of sixty-one men who transferred into Company A that October. Browne reenlisted in the company in February 1864 while it was camped at Brandy Station. He returned to civilian life after completing his enlistment in 1867. On 10 June, Artificer William Collins was promoted to corporal, probably to fill a combat leadership void for the crossing of the James River. Collins's service record does not reflect the qualities he demonstrated to earn this promotion, but he had obviously overcome some serious problems. Born in Sacketts Harbor, New York, Collins enlisted in the company in December 1853 at the age of twenty-two, and reenlisted in December 1858. Just a month before the war began in April 1861, Collins went "over the hill" and was not caught until three years later. Returned to the company on 22 March 1864, –––– Generated Answer –––– he was tried for possession of marijuana and attempted first degree murder. For more information on Collins on Twitter visit www.facebook.com/JoeCollins In April 1879, Collins was sentenced to life in prison for two counts of first degree murder: first degree first degree first degree murder, second degree second degree second degree murder, Figure 11: Example of text generated by SpikeGPT. The model is trained on OpenWebText2. 
``` Context → Prehistoric man sketched an incredible array of prehistoric beasts on the rough limestone walls of a cave in modern day France 36,000 years ago. Now, with the help of cutting-edge technology, those works of art in the Chauvet-Pont-d'Arc Cave have been reproduced to create the biggest replica cave in the world. The manmade cavern named the Caverne du Pont-d'Arc has been built a few miles from the original site in Vallon-Pont-D'arc in Southern France and contains 1,000 painstakingly-reproduced drawings as well as around 450 bones and other features... Cavemen and women sketched an incredible array of prehistoric beasts on the rough limestone walls of a cave 36,000 years ago and now a replica has been created (pictured) that show how beautiful the cave was in the 70s and 80s ``` –––– Generated Answer –––– ``` Scroll down for video Growth of a rare cave on a cliff near the river level near the Great Wall of Barcelona (pictured right): The world's oldest stone cave in Italy has been discovered in part because of ancient cave erosion. It has become a key part of modern archaeology to determine how the pyramids lived before the Stone Age. In an open letter dated 19 June, archaeologists from the University of Barcelona say they found the excavation and excavation on the site 'significant'. ``` Figure 12: Example of text generated by SpikeGPT. The model is trained on OpenWebText2.
Review 1:
Summary: This work presents efficient training of Spike Neural Network (SNN)-based transformers on natural language tasks, obtained by adapting the RWKV architecture to SNN. This is achieved primarily by replacing the multi-head self-attention module with a Spiking RWKV module, which achieves linear complexity with sequence length. They perform autoregressive training on sizeable SNNs (up to 216M parameters), demonstrating reasonable performance against non-spiking networks of similar size on some Natural Language Generation and Natural Language Understanding tasks.

Strengths and Weaknesses:
STRENGTHS:
- enabling efficient training and scaling of SNN is a topic of clear interest to the audience of this journal
- first demonstration of adapting the RWKV architecture for SNN training
- comparable results (typically moderately behind) in terms of Bits Per Character, perplexity, and accuracy against non-spiking NN of similar size on a selection of downstream tasks
- presented training and inference workflows for both NLG and NLU tasks, adapted for SNN
- theoretical analysis of energy consumption complements well the aforementioned results, and, as expected, is highly favorable towards SNN (estimate of 32.2x vs. comparable non-spiking NN)

WEAKNESSES:
- what drove the architectural choice of settling on the 46M and 216M variants? Also, I may have missed it but I couldn't find indicated the number of layers and hidden size dimension for the 216M model
- Sec 3.7 mentions the option to "further employ quantization techniques to convert them [the activations] to integer" but it is not clear whether this is actually done in this study, and what implementation is considered by the energy consumption analysis. It should be noted that the references cited here all refer to non-spiking NN works
- Sec 4.4 "Ablation studies" can be confusing. It compares two SNN variants to a non-spiking RWKV and an unmodified spiking RWKV.
The latter is called SpikeGPT-R in this section only, so it wasn't clear this was the configuration used in the rest of the paper
- Sec 4.4 also states "Notably, SpikeGPT-S exhibits superior training loss compared to SpikeGPT-R, indicating that spiking neurons can enhance model performance in certain cases." when comparing the network with LIF in SRFFN (SpikeGPT-S) instead of ReLU2 (SpikeGPT-R), but training performance is at best comparable (1.109 vs. 1.113), especially as no dispersion over multiple seeds is reported, and test performance of SpikeGPT-S is inferior. So this statement appears overblown. Also calling this an "extensive ablation" study is not appropriate.
- The whole Sec 5 "Discussion" reads as an appendix and the authors may want to consider moving it there:
  - Sec 5.1 is superficial, just noting that the membrane potential appears different in SRWKV and SRFNN layers, and that it results in SRFNN having a higher firing rate. It's an interesting observation but it isn't really discussed.
  - Sec 5.1 is also repeated verbatim in its entirety in appendix B.3
  - Sec 5.2 consists of one paragraph discussing parallelization efforts of RWKV. Again, interesting (although very limited in scope) but not a discussion.

OTHER MINOR OBSERVATIONS:
- Fig 3 presents SRFNN to the left, followed by SRWKV on the right. It's slightly misleading as the SRFNN block follows SRWKV in the SpikeGPT architecture
- Sec 5.2 refers to equation 18 which is not part of the main text but is in an appendix
- typo in Sec 3.3: "binary spikes, This allows" -> "binary spikes. This allows"

Requested Changes: Requested clarifications and polishing as per "Weaknesses". Consider expanding Sec 5 or moving it to the appendix.

Broader Impact Concerns: Not addressed in the manuscript. No concerns on my end.

==================================================

Review 2:
Summary: The authors introduced SpikeGPT, an energy-efficient generative language model employing Spiking Neural Networks (SNNs).
This model integrates modifications to the traditional transformer architecture, reducing computational complexity from quadratic to linear by adapting a streaming input mechanism suitable for SNNs. The authors applied innovative techniques tailored for SNN training, including aligning the language sequence with SNN temporal dynamics, autoregressive training that accumulates spikes, and utilizing stateful neurons to overcome binary activation constraints. Experimental results show that SpikeGPT can achieve competitive performance against similar-size transformer models with less energy consumption. Strengths and Weaknesses: **Strengths**: - The paper successfully integrates spiking neural networks with the transformer model architecture, a novel approach in the field of natural language processing. This integration allows the model to leverage the energy efficiency of SNNs while handling complex language generation tasks. - By modifying the transformer architecture to replace multi-head self-attention with a mechanism that supports streaming inputs, the authors reduce the computational complexity from quadratic to linear, which is important for scaling up models both in terms of size and efficiency. - The paper demonstrates that SpikeGPT can achieve substantial reductions in energy consumption on neuromorphic hardware. In the meanwhile, SpikeGPT remains competitive with conventional non-spiking models on tested benchmarks compared with transformer models. **Weaknesses**: - Even though the paper presents a successful implementation, training SNNs is inherently complex due to their non-differentiable nature and the use of surrogate gradients. This complexity might limit wider adoption or replicability by other researchers without substantial expertise in neuromorphic computing. 
- While the model scales up to 216 million parameters, the challenges associated with scaling SNNs even further (to the size of the largest ANNs in use today, e.g., billions of parameters with transformer models) are not thoroughly discussed. Potential bottlenecks and computational constraints could be significant as the model size increases.

Requested Changes:
- To mitigate the steep learning curve associated with training SNNs, the authors could provide detailed documentation, tutorials, and open-source code. This would include clear explanations of the surrogate gradient approach and its implementation. Such resources can help demystify the process, making it more accessible to researchers without a deep background in neuromorphic computing.
- The paper could include a more thorough analysis of the scalability challenges encountered when increasing the size of SNNs. This should cover both theoretical and practical constraints, such as memory usage, processing speed, and the efficiency of surrogate gradients at scale.

Broader Impact Concerns: NA

==================================================

Review 3:
Summary: This paper proposes SpikeGPT, the first GPT implementation with the Spiking Neural Network architecture. To achieve its goal, the paper resolves the issues of language encoding for SNNs, training large-scale SNNs, and the incompatibility of SNNs with self-attention. SpikeGPT leverages the architecture of the RWKV language model, with novel techniques resolving the encoding and training issues. The final outcome leads to an implementation of a 46M and a 216M model, achieving competitive benchmark results with non-spiking models while maintaining 32.2x fewer operations.

Strengths and Weaknesses:
## Strength
This paper is overall well written, well motivated, and technically sound.
Being the first paper to achieve a GPT-level model on SNN and achieving competitive benchmark results, the paper would have significant impact on the field of SNN and general efficient deep learning algorithms and systems. Overall a very solid work.

## Weakness
The paper can be further improved by discussing the following topics:
1. As the first paper to present a SNN implementation of GPT, this paper would have a broad impact on the TMLR audience. However, this paper lacks discussion on fundamental related work of SNN and the discussion of basic SNN operation. Adding a subsection in related work and a few diagrams/equations to explain SNN operation will make the paper more accessible to broader audiences.
2. It would be interesting to see if the proposed SpikeGPT can benefit from migrating the weights of a pretrained non-spiking LLM or distilling from a pretrained model, rather than doing the full pretraining from scratch

Requested Changes: See weakness part.

Broader Impact Concerns: No concern on broader impact

==================================================

Metareview:
Recommendation: Accept as is

Comment: All reviewers unanimously recommend the acceptance of the manuscript in its present form, as (i) all claims are convincingly supported by appropriate experiments, and (ii) the overall contribution (the "first demonstration of spiking neural networks" applied to language modeling) is significant and of interest to the TMLR audience.

==================================================
# Equivariant Symmetry Breaking Sets Anonymous authors Paper under double-blind review ## Abstract Equivariant neural networks (ENNs) have been shown to be extremely effective in applications involving underlying symmetries. By construction ENNs cannot produce lower symmetry outputs given a higher symmetry input. However, symmetry breaking occurs in many physical systems and we may obtain a less symmetric stable state from an initial highly symmetric one. Hence, it is imperative that we understand how to systematically break symmetry in ENNs. In this work, we propose a novel symmetry breaking framework that is fully equivariant and is the first which fully addresses spontaneous symmetry breaking. We emphasize that our approach is general and applicable to equivariance under any group. To achieve this, we introduce the idea of symmetry breaking sets (SBS). Rather than redesign existing networks, we design sets of symmetry breaking objects which we feed into our network based on the symmetry of our inputs and outputs. We show there is a natural way to define equivariance on these sets, which gives an additional constraint. Minimizing the size of these sets equates to data efficiency. We prove that minimizing these sets translates to a well studied group theory problem, and tabulate solutions to this problem for the point groups. Finally, we provide some examples of symmetry breaking to demonstrate how our approach works in practice. ## 1 Introduction Equivariant neural networks have emerged as a promising class of models for domains with latent symmetry (Wang et al., 2022a). This is especially useful for scientific and geometric data, where the underlying symmetries are often well known. For example, the coordinates of a molecule may be different under rotations and translations, but the molecule remains the same. Traditional neural networks must learn this symmetry through data augmentation or other training schemes. 
In contrast, equivariant neural networks already incorporate these symmetries and can focus on the underlying physics. ENNs have achieved state-of-the-art results on numerous tasks including molecular dynamics, molecular generation, and protein folding (Batatia et al., 2022; Batzner et al., 2022; Daigavane et al., 2023; Ganea et al., 2021; Hoogeboom et al., 2022; Jia et al., 2020; Jumper et al., 2021; Liao & Smidt, 2022). A consequence of the symmetries built into ENNs is that their outputs must have equal or higher symmetry than their inputs (Smidt et al., 2021). However, many physical systems exhibit symmetry breaking. In physics, these are classified into two types: explicit symmetry breaking and spontaneous symmetry breaking (Castellani et al., 2003). In explicit symmetry breaking, the governing laws are manifestly asymmetric, while in spontaneous symmetry breaking the laws are symmetric but we observe asymmetry in individual data samples. For example, consider ferromagnetic materials (Aharoni, 2000). Suppose we have a heated ferromagnetic material which we then cool in the presence and absence of a magnetic field, as shown in Figure 1. In the presence of a strong external magnetic field, we observe a magnetic moment aligned along that field in each magnetic domain. This is an example of explicit symmetry breaking since the external field explicitly breaks rotational symmetry. However, if there is no external field, we still observe a magnetic moment due to ferromagnetism, which breaks rotational symmetry. In this case, the direction of the moment is uniformly random for each domain. This is an example of spontaneous symmetry breaking: the governing laws are symmetric, yet we observe asymmetry in individual samples. We define explicit and spontaneous symmetry breaking more formally in Section 2.
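The ferromagnet example can be mimicked with a small toy sampler. This is our own illustrative sketch, not code from the paper; `cooled_moment` and its arguments are hypothetical names.

```python
import math
import random

def cooled_moment(field_angle=None, rng=random):
    """Toy model: in-plane angle of one magnetic domain's moment after cooling."""
    if field_angle is not None:
        # Explicit symmetry breaking: the governing law itself singles out
        # the external field direction, so every domain aligns with it.
        return field_angle
    # Spontaneous symmetry breaking: the law is rotation-symmetric, yet each
    # individual cooled sample still picks one uniformly random direction.
    return rng.uniform(0.0, 2.0 * math.pi)

rng = random.Random(0)
with_field = {cooled_moment(field_angle=0.0, rng=rng) for _ in range(5)}
no_field = {cooled_moment(rng=rng) for _ in range(5)}
```

With a field, every domain returns the field direction; without one, the sampling law is invariant under rotating all angles, yet each individual draw singles out a direction.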
![1_image_0.png](1_image_0.png)

(a) Explicit symmetry breaking (b) Spontaneous symmetry breaking

![1_image_1.png](1_image_1.png)

Figure 1: (a) Example of explicit symmetry breaking. Magnetic moments in domains align with a strong external field. The external field explicitly breaks the symmetry of the system. (b) Example of spontaneous symmetry breaking. The presence of a moment in each domain breaks rotational symmetry. However, there is no magnetic field, so the governing laws of the system are symmetric. Consequently, the moments are uniformly random in orientation.

One class of approaches conducive to explicit symmetry breaking is learning to break symmetry. For example, Smidt et al. (2021) showed that the gradients of the loss function can be used to learn a symmetry breaking order parameter, identifying what type of missing asymmetry is needed to correctly model the results. Another related approach is approximate and relaxed equivariant networks (Huang et al., 2023; van der Ouderaa et al., 2022; Wang et al., 2023; 2022b). These networks have similar architectures to equivariant models, but allow nontrivial weights in the layers to break equivariance. Hence, they can learn how much symmetry to preserve to fit the data distribution. However, since these methods break equivariance, they are not appropriate for spontaneous symmetry breaking. If all symmetrically related lower symmetry outputs are equally likely in the data, then relaxed networks will see the distribution as symmetric and fail to break symmetry. Further, since equivariance is broken, the method is not guaranteed to behave properly when shown data transformed under the group. In the case of spontaneous symmetry breaking, there exist some works which partially solve the problem. Balachandar et al. (2022) design a symmetry detection algorithm and an orientation-aware linear layer for mirror plane symmetries. However, the scope of their methods is specific to that type of symmetry and point cloud data.
Finally, Kaba & Ravanbakhsh (2023) take the approach of defining a relaxed version of equivariance which takes input symmetry into account. They derive modified constraints linear layers would need to satisfy for this relaxed equivariance and argue such models can give lower symmetry outputs. However, they mention these conditions do not reduce as easily as for the usual equivariant linear layers. Further, this method still does not provide a mechanism to sample all possible outputs.

Our work focuses on the spontaneous symmetry breaking case. This case is extremely important as spontaneous symmetry breaking occurs in many physical phenomena (Beekman et al., 2019). Hence, the widespread adoption of machine learning techniques for scientific applications will inevitably run into symmetry breaking issues. Examples of existing applications which may run into difficulties include crystal distortion, predicting ground state solutions from Hamiltonians, and solving PDEs (Jafary-Zadeh et al., 2019; Lewis et al., 2024; Lino et al., 2022). In the crystal distortion case, there are distortions from high symmetry to low symmetry structures (Kay & Vousden, 1949). The lowest energy states of physical systems (ground states of the Hamiltonian) are often of low symmetry, which famously explains the Higgs mechanism for giving particles mass (Higgs, 1964). Finally, Kármán vortex streets are a well-known example of symmetry breaking in fluid simulations (Tang & Aubry, 1997).

In this work, we provide the first general solution for the spontaneous symmetry breaking problem in equivariant networks. We identify that the key difficulty is how to allow equivariant networks to output a set of valid lower symmetry outputs. Our approach is similar to Smidt et al. (2021) in that we provide symmetry breaking parameters as input to the model.
However, rather than learning these parameters, we show that we can sample them from a symmetry breaking set (SBS) that we design based only on the input and output symmetries. In particular, we prove that optimizing equivariant SBSs is equivalent to a fundamental group theory question. We emphasize that this fully characterizes how to efficiently break symmetry with equivariant SBSs for any group. Counter-intuitively, we find that it is sometimes beneficial to break more symmetry than needed. Compared to existing methods, our approach has the following advantages:

1. **Equivariance:** Our framework guarantees equivariance. That we can achieve this is a key point of this work and allows simulation of spontaneous symmetry breaking.
2. **Simple to implement:** Our approach only requires designing a set of additional inputs into an equivariant network. We have fully characterized such sets.
3. **Generalizability:** We emphasize that our characterization of SBSs applies to any group.

We would like to point out that to achieve our results, we assume we can detect the symmetry of our inputs and outputs. Further, there is the more general problem of treating symmetrically related outputs as the same in our loss. We discuss these limitations in Appendix D.

The rest of this paper is organized as follows. In Section 2 we formalize the symmetry breaking problem and the type of task performed in the spontaneous symmetry breaking setting. In Section 3, we examine the case where we break all symmetries of our input. We motivate the idea of a SBS and show that imposing equivariance leads to an additional constraint of closure under the normalizer. The intuition is that the normalizer characterizes all orientations of our data which do not change its symmetry. Next, we translate bounds on the size of the equivariant SBSs into the purely group theoretical problem of finding complements. We have tabulated these complements in Appendix F for the point groups.
In Section 4, we generalize to the case where we may still share some symmetries with our input. In Section 5 we describe how to construct SBSs in an actual implementation. Finally, in Section 6, we introduce examples of symmetry breaking and demonstrate how our method works in practice.

Notation and background: An overview of the notation and common symbols used can be found in Appendix A. A brief overview of mathematical concepts needed in the paper can be found in Appendix B and an overview of ENNs can be found in Appendix C.

## 2 Symmetry Breaking Problem

Here, we make precise what we mean by symmetry breaking and the issue it poses for equivariant neural networks. We begin with the following observation first made in Smidt et al. (2021).

Lemma 2.1. Let f : X → Y be a G-equivariant function. We can choose f *such that* f(x) = y *only if* StabG(y) ≥ StabG(x).

See Appendix E.1 for a generalization of this lemma and a proof. Hence, the output of an equivariant function must have at least the symmetry of the input. This motivates the following definition of symmetry breaking at the individual sample level.

Definition 2.2 (Symmetry breaking sample). Let G be a group. A sample with input x and output y is symmetry breaking with respect to G if StabG(x) > StabG(y).

Lemma 2.1 tells us a symmetry breaking sample with respect to G can never be perfectly modeled by a G-equivariant function. In experimental samples, we may have random noise in our observations of our outputs which causes samples to be symmetry breaking. In such cases equivariance is beneficial since it can help remove the noise. However, in some cases we truly have a symmetry breaking sample even if there is no noise. In physics this is classified into two cases: explicit symmetry breaking and spontaneous symmetry breaking. In the following discussion, it is useful to view the underlying model as a set valued function.
A typical function h : X → Y can be thought of as a set valued function f : X → P(Y) defined as f(x) = {h(x)}. Note that if there is an action of G on Y, one can naturally define an action on P(Y) such that U ⊆ Y transforms as gU = {gu : u ∈ U}. Hence there is a natural way to define equivariance of set valued functions.

In explicit symmetry breaking, the underlying physics of the system is asymmetric. For example, there may be an unknown electric field which breaks rotation symmetry.

Definition 2.3 (Explicit symmetry breaking). Let G be a group. A function f : X → P(Y) which is not G-equivariant explicitly symmetry breaks G.

In such cases, using an equivariant function actually prevents us from learning the true function. In spontaneous symmetry breaking, the underlying physics of the system is symmetric; however, there is a set of stable lower symmetry outputs which occur with equal probability.

Definition 2.4 (Spontaneous symmetry breaking (SSB) function). Let G be a group with actions defined on spaces X, Y. Let f : X → P(Y) be G-equivariant. We say f spontaneously symmetry breaks at x if there is some y ∈ f(x) such that StabG(x) > StabG(y).

The key point here is that the set valued function f is equivariant even though individual observed samples (x, y) for y ∈ f(x) may be symmetry breaking. Hence, our problem becomes the following.

Problem: How does one create an equivariant architecture which can output a set of possibly lower symmetry outputs?

## 3 Fully Broken Symmetry

First, we consider the case where we break all symmetry of our input. Let the symmetry group of our data x be S. Here, our desired outputs y share no symmetry with x; in other words, StabS(y) = {e}. This will lay the foundation for analyzing the general case of partially broken symmetry.

## 3.1 Symmetry Breaking Set (SBS)

When there is symmetry breaking there are multiple equally valid symmetrically related outputs.
The purpose of a symmetry breaking object is to allow an equivariant network to pick one of them. In principle, we want all symmetrically related outputs to be equally likely, so it makes sense to think of a set B of symmetry breaking objects we sample from. For any s ∈ S and b ∈ B, since sb is symmetrically related to b, it is natural to also include it in B. Hence, acting with s on the elements of B should leave the set unchanged. Further, for any b ∈ B, the stabilizer StabS(b) must be trivial since we want to break all symmetries of our input. This is exactly the definition of a free group action of S on B. Hence, we define a symmetry breaking set as follows.

Definition 3.1 (Symmetry breaking set). Let S be a symmetry group. Let B be a set of elements which S acts on. Then B is a symmetry breaking set (SBS) if the action of S on B is a free action.

## 3.2 Equivariant SBS

However, the above definition of an SBS is insufficient when considering equivariance. Here, we show that we need the stronger constraint of closure under the normalizer. To illustrate the problem, consider a network which is SO(3) equivariant and a triangular prism aligned so that the triangular faces are parallel to the xy plane. Suppose our task was just to pick a point of the prism. A naive way to break the symmetry is to have an ordered pair of unit vectors. The first vector is in the xy plane and points towards one of the triangle vertices. The second vector points up or down in the z direction, corresponding to the upper or lower triangle.

![4_image_0.png](4_image_0.png)

Figure 2: (a) Naive way to break symmetry in a triangular prism, where one vector points to a vertex of a triangle and a second vector points to the lower or upper triangle. (b) A rotated version of the triangular prism in (a). Note that the same symmetry breaking objects now point to edges of the triangle rather than vertices. However, both prisms have the exact same symmetry elements.
However, consider the same prism but rotated 180◦ around z. We can check that the symmetry groups are exactly the same, so we want the same SBS. But the symmetry breaking objects are related differently: in the second prism, the first vector points to an edge rather than a vertex. For equivariance, our symmetry breaking objects should be related to both prisms in the same way. So our choice of SBS was not equivariant.

Here, one may simply choose a canonical orientation and decide that we will rotate the original SBS by 180◦ in the latter case. However, our input may be arbitrarily complicated, and it may be hard to decide on a canonicalization. Further, canonicalization may introduce discontinuities. Hence, we would like to construct SBSs to be dependent only on the symmetry of our data, not on how our data is represented. To understand exactly what additional condition is necessary, we need to carefully investigate how the symmetry breaking scheme works.

![4_image_1.png](4_image_1.png)

Figure 3: Diagram of how we might structure our symmetry breaking scheme. From our data x, we may obtain its symmetry S. This S is then fed into a function σ which gives us the set of symmetry breaking objects needed. We sample a b from this set, breaking the symmetry of our input, and feed this b along with the input x into our equivariant function f. Finally, we obtain an output y which has lower symmetry than the input x.

Let f be our G-equivariant function and x be our input data. Suppose we know the symmetry S of our input. Let B be some set with a group action of G defined on it. We would like to obtain our set of symmetry breaking objects based only on information about the input symmetry. So suppose we have a function σ : Sub(G) → P(B) that does so. This function takes in a subgroup symmetry and gives a SBS composed of elements from B. Then the symmetry breaking step happens when we take a random sample b from the SBS.
This symmetry breaking object is then fed into our equivariant function, allowing it to break symmetry. A diagram of this process is shown in Figure 3. Certainly, since we break the symmetry of our input data, we break equivariance. However, imagine we give our function all possible symmetry breaking objects and collect at the end the set Y of all outputs our model gives. This process, shown in Figure 4, would then not need to break any symmetry. This is because the set of all outputs has the same symmetry as the input. Hence, we can impose equivariance on our process.

![5_image_0.png](5_image_0.png)

Figure 4: Diagram of how we break symmetry, but now we keep all possible outputs.

It is well known that the composition of equivariant functions remains equivariant. Hence, we just need to impose equivariance on σ. In order to do so, we must understand how the input and output transform. Suppose we act on our data with some group element g ∈ G. Then it becomes gx. Since S is the symmetry of our original data, we find gx = gsx = (gsg−1)(gx) for any s ∈ S. So the symmetry of the transformed data is gSg−1. Hence, the input of σ transforms under conjugation. Next, recall the output of σ is some subset B of elements of B. Since there is a group action of G defined on B, we can define an action on B by just acting on its elements and forming a new set. Figure 5 shows how our procedure would change if it were equivariant and we act on our input by some group element g.

![5_image_1.png](5_image_1.png)

Figure 5: Diagram of what happens when we act on the input with some group element g.

We can now clearly see the issue. The input into σ transforms under conjugation, and the stabilizer of a subgroup under conjugation is precisely the definition of the normalizer NG(S). However, in many cases NG(S) is a supergroup of S. Therefore, by Lemma E.1, our SBS not only needs to be invariant under S, but also be invariant under NG(S). See Appendix E.2 for a more formal justification.
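For small finite groups, both the free-action condition of Definition 3.1 and this extra closure under the normalizer can be checked by brute force. Below is a minimal sketch, our own toy example rather than the paper's implementation, with G = D4 acting on the plane, S = {e, m_x}, and the normalizer N_G(S) = {e, r2, m_x, m_y} worked out by hand:

```python
# Toy check of the SBS conditions. G = D4 (symmetries of the square acting
# on the plane), S = {e, m_x}. The normalizer of S in D4 is {e, r2, m_x, m_y},
# a strict supergroup of S, so an equivariant SBS must be closed under all
# four maps, not just under S. All names here are illustrative.

def e(p):   return p
def r2(p):  return (-p[0], -p[1])   # rotation by 180 degrees
def m_x(p): return (p[0], -p[1])    # reflection across the x-axis
def m_y(p): return (-p[0], p[1])    # reflection across the y-axis

S = [e, m_x]
N = [e, r2, m_x, m_y]               # N_G(S), computed by hand for this toy case

def is_sbs(B):
    """Definition 3.1: B is closed under S and the S-action on B is free."""
    closed = all(s(b) in B for s in S for b in B)
    free = all(m_x(b) != b for b in B)   # only non-identity element of S
    return closed and free

def is_equivariant_sbs(B):
    """Extra condition argued above: B must also be closed under N_G(S)."""
    return is_sbs(B) and all(g(b) in B for g in N for b in B)

B1 = {(1, 1), (1, -1)}               # a single S-orbit
B2 = B1 | {(-1, 1), (-1, -1)}        # also closed under r2 and m_y

assert is_sbs(B1) and is_sbs(B2)     # both satisfy Definition 3.1
assert not is_equivariant_sbs(B1)    # m_y sends (1, 1) outside B1
assert is_equivariant_sbs(B2)
```

B1 is a perfectly valid SBS for S, yet rotating the data by 180 degrees maps it to a different set; only the normalizer-closed B2 behaves the same for all reorientations that preserve the symmetry.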
With this intuition, we can now provide a proper definition for equivariant SBSs.

Definition 3.2 (Equivariant symmetry breaking sets). Let S be a subgroup symmetry of a group G and B be a set with an action of G defined on it. Let B ⊂ B be a SBS. Then B is G-equivariant if ∀g ∈ NG(S) we have B = gB.

## 3.3 Ideal Case And Complement Of Normal Subgroups

Now that we know how to equivariantly break a symmetry, we would like to understand how to do so efficiently. Intuitively, we expect a smaller SBS to be better, and this is indeed true. If we have a larger SBS, multiple symmetry breaking objects map to the same output, so the network needs to learn that these are the same. Reducing the SBS would decrease the equivalences our network needs to learn. In the ideal case, exactly one symmetry breaking object corresponds to each output. Since our outputs are related by symmetry transformations (transitive) under S, this corresponds to the equivariant SBS being transitive under S. It turns out we can equate constructing ideal equivariant SBSs to constructing complements of normal subgroups. The intuition is that we want to maximize the symmetries of the symmetry breaking objects but only in the directions "orthogonal" to S. The complement is essentially this "orthogonal" symmetry we need. A slightly weaker version of this statement can be found in Theorem 3.1.4 of Kurzweil & Stellmacher (2004).

Theorem 3.3. *Let G be a group and S a subgroup. Let B be a G-equivariant SBS for S. Then it is possible to choose an ideal B if and only if S has a complement in NG(S). If b is an element where StabNG(S)(b) is a complement of S in NG(S), then OrbS(b) is an ideal G-equivariant SBS.*

Remark 3.4. It turns out the complement, if it exists, is isomorphic to NG(S)/S. We can intuitively think of NG(S)/S as giving all possible orientations of our data such that its symmetry remains unchanged.

The proof of this theorem is in Appendix E.3.
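Theorem 3.3 can be made concrete on a toy finite case. In the sketch below (plain Python, our own permutation encoding), we take G = S3, S = A3, and let B be the three points the group permutes. The stabilizer of a point is a complement of S in NG(S) = G, so by the theorem the S-orbit of that point is an ideal equivariant SBS.

```python
from itertools import permutations

G = list(permutations(range(3)))        # S3 as permutation tuples
e = (0, 1, 2)

def compose(p, q):                      # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

S = {e, (1, 2, 0), (2, 0, 1)}           # A3, cyclic of order 3, normal in S3

def stab(H, b):                         # stabilizer of point b within H
    return {p for p in H if p[b] == b}

# H = Stab_{N_G(S)}(0); here N_G(S) = G since A3 is normal in S3.
H = stab(G, 0)

# Complement check: H intersects S trivially and H S = G.
assert H & S == {e}
assert {compose(h, s) for h in H for s in S} == set(G)

# By Theorem 3.3, Orb_S(0) is an ideal equivariant SBS: transitive under S
# and closed under N_G(S) = S3.
orbit = {s[0] for s in S}
assert orbit == {0, 1, 2}
assert all({g[b] for b in orbit} == orbit for g in G)
print("Orb_S(0) =", sorted(orbit))
```

The same pattern (find b whose stabilizer is a complement, then take its S-orbit) is what the tabulation in Appendix F automates for the point groups.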
Finding complements of normal subgroups is a well-studied group theory problem (Kurzweil & Stellmacher, 2004). For the point groups, which are the finite subgroups of O(3), we have tabulated the complements, when they exist, in Appendix F.

## 3.4 Nonideal Equivariant SBSs

In the case where we cannot achieve an ideal equivariant SBS, we would still like to characterize how efficient it is. To do this, we define what we call the degeneracy of an equivariant SBS. In general, each orbit under S can be matched one to one with our outputs.

Definition 3.5 (Degeneracy). Let B be a G-equivariant SBS for S. We define the degeneracy to be DegS(B) = |B/S|.

Note that an ideal equivariant SBS Bideal (if it exists) has exactly 1 orbit of S, so DegS(Bideal) = 1. We would also like to understand how small we can make the degeneracy if we cannot make it 1. It turns out Theorem 3.3 allows us to convert this to a group theory problem.

Corollary 3.6. *Let G be a group and S a subgroup. Let M be such that S ≤ M ≤ NG(S). Let B be a G-equivariant SBS for S which is transitive under NG(S). Then it is possible to choose B such that every S-orbit is also an M-orbit if and only if S has a complement in M. In particular, DegS(B) ≤ |NG(S)/M|.*

See Appendix E.4 for a proof. In the ideal case, we can take M to be NG(S), so the above formula gives a degeneracy of 1 as expected.

## 4 Partially Broken Symmetry

We can now use our framework for full symmetry breaking to understand the case of partial symmetry breaking. In this case, our desired output may share some nontrivial subgroup symmetry K ≤ S with our input. Note the case of K = 1 corresponds to full symmetry breaking and K = S corresponds to no symmetry breaking.

## 4.1 Partial SBS

Similar to the full symmetry breaking case, we would like to create a set of objects which we can use to break our symmetry. Now we can relax the restriction of free action.
Intuitively, we can allow our symmetry breaking objects to share symmetry with our input, as long as it is lower symmetry than our outputs. However, the symmetrically related outputs may be invariant under different subgroups of S. Recall that if some element y gets transformed to sy, its stabilizer K gets transformed to sKs−1. Hence, the stabilizers of the outputs are the subgroups conjugate to K under S, denoted ClS(K). Based on this intuition, we can define partial SBSs as follows.

Definition 4.1 (Partial SBSs). Let S be a symmetry group and K a subgroup of S. Let P be a set of elements with an action by S. Then P is a K-partial SBS if for any p ∈ P, there exists some K′ ∈ ClS(K) such that K′ ≥ StabS(p).

Certainly, a full SBS is a partial one as well, since the stabilizers of all its elements under S are trivial. In general, we can always break more symmetry than needed and still obtain our desired output. However, it is useful to consider the case where we only break the necessary symmetries. Counter-intuitively, we discuss in Section 4.5 that this turns out not always to be optimal.

Definition 4.2 (Exact partial SBS). Let S be a group and K a subgroup of S. Let P be a K-partial SBS for S. We say P is exact if for all p ∈ P, we have StabS(p) ∈ ClS(K).

## 4.2 Equivariant Partial SBS

Similar to before, we define equivariant partial SBSs. The idea is the same, but now we need to identify the symmetry of the input and the set of conjugate symmetries for the output. Define

$$\mathrm{SubCl}(G)=\{(S,\mathrm{Cl}_{S}(K)):S\in\mathrm{Sub}(G),K\leq S\}.$$

Let P be a set with a group action of G defined on it. As before, the idea is that we have a function π : SubCl(G) → P(P) which outputs our partial SBS. The condition of equivariance for our partial SBS is imposing equivariance on π.

![7_image_0.png](7_image_0.png)

Figure 6: Diagram of how we perform partial symmetry breaking.
Here, we need to specify not just the symmetry of our input but also the symmetries of our output. Since any of our outputs are equally valid, it only makes sense to specify the set of conjugate subgroups ClS(K) our outputs are symmetric under. The symmetry breaking scheme is depicted in Figure 6. As before, we can impose equivariance on this diagram. We need to know how ClS(K) transforms. Note that if our input gets acted on by g, we expect the outputs to also get acted on by g. Since K is the stabilizer of one of the outputs, we expect K to transform to gKg−1. Hence we have the transformation

$$\operatorname{Cl}_{S}(K)\to\operatorname{Cl}_{g S g^{-1}}(g K g^{-1}).$$

Similar to before, by Lemma E.1 we need the output of π to also be invariant under the stabilizer of the input. Noting that the normalizer is defined as the stabilizer of S under conjugation, we can define a generalized normalizer as the stabilizer of the pair (S, ClS(K)).

![7_image_1.png](7_image_1.png)

Figure 7: Diagram of how our symmetry scheme changes when we transform our input by some group element g ∈ G.

Definition 4.3 (Generalized normalizer). Define the generalized normalizer NG(S, K) to be

NG(S, K) = {g : gKg−1 ∈ ClS(K), g ∈ NG(S)}.

We can now define equivariant partial SBSs using closure under this generalized normalizer. See Appendix E.5 for a more formal justification.

Definition 4.4 (Equivariant partial SBSs). Let S be a subgroup symmetry of a group G. Let P be a K-partial SBS. Then P breaks the symmetry G-equivariantly if ∀g ∈ NG(S, K) we have P = gP.

Note that closure under NG(S, K) is a weaker condition than closure under NG(S). Hence any equivariant full SBS is also an equivariant K-partial SBS for any K.
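Definition 4.3 can be computed directly for finite groups. The sketch below (plain Python, our own permutation encoding, not from any released code) takes G = D4 acting on the vertices of a square, S the subgroup containing the half-turn and the two edge mirrors, and K generated by one of those mirrors; the generalized normalizer NG(S, K) comes out strictly smaller than NG(S).

```python
def compose(p, q):                       # (p o q)(i) = p(q(i)) on 4 points
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(g, H):                          # g H g^{-1} as a frozenset
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)

# D4 acting on square vertices 0..3 (counterclockwise).
e, r = (0, 1, 2, 3), (1, 2, 3, 0)
r2, r3 = compose(r, r), compose(compose(r, r), r)
mh, mv = (3, 2, 1, 0), (1, 0, 3, 2)      # edge mirrors
d1, d2 = (0, 3, 2, 1), (2, 1, 0, 3)      # diagonal mirrors
G = [e, r, r2, r3, mh, mv, d1, d2]

S = frozenset({e, r2, mh, mv})           # K <= S <= G
K = frozenset({e, mh})

Cl_S_K = {conj(s, K) for s in S}         # Cl_S(K); S is abelian, so just {K}
N_G_S = [g for g in G if conj(g, S) == S]
N_G_SK = [g for g in N_G_S if conj(g, K) in Cl_S_K]    # Definition 4.3

assert set(N_G_S) == set(G)              # S is normal in D4
assert set(N_G_SK) == set(S)             # but N_G(S, K) is strictly smaller
print("|N_G(S)| =", len(N_G_S), " |N_G(S, K)| =", len(N_G_SK))
```

Rotations and diagonal mirrors normalize S but swap the two edge mirrors, so they move K out of ClS(K) and are excluded from NG(S, K).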
## 4.3 Ideal Equivariant Partial SBS

Similar to the full symmetry breaking case, we ideally would like to have a one to one correspondence between elements in our equivariant SBS and our symmetrically related outputs. For this to happen, we clearly need our SBS to be exact and transitive under S. We can generalize Theorem 3.3 to obtain a necessary and sufficient condition to have an ideal equivariant partial SBS.

Theorem 4.5. *Let G be a group and S and K be subgroups with K ≤ S ≤ G. Let P be a G-equivariant K-partial SBS. Then we can choose an ideal P (exact and transitive under S) if and only if NS(K)/K has a complement in NNG(S,K)(K)/K. If p is an element such that StabNG(S,K)(p)/K is a complement of NS(K)/K in NNG(S,K)(K)/K, then OrbS(p) is an ideal G-equivariant K-partial SBS.*

See Appendix E.6 for a proof.

## 4.4 Nonideal Equivariant Partial SBS

Similar to the full symmetry breaking case, when we cannot achieve an ideal equivariant partial SBS, we want to characterize how efficient our nonideal partial SBS is. Again, the idea is that in the nonideal case, our network needs to map multiple symmetry breaking objects to the same output. We define the degeneracy of P to quantify this multiplicity.

Definition 4.6 (Degeneracy). Let G be a group, S be a subgroup, and K a subgroup of S. Let P be a G-equivariant K-partial SBS for S. Let T be a transversal of S/K. Let Pt be such that every p ∈ P is uniquely written as p = tpt for some t ∈ T and pt ∈ Pt. Then we define DegS,K(P) = |Pt|.

The intuition for this definition is that Pt is the set of objects which together with our input may get mapped to some output y by our equivariant network. In other words, we have f(x, Pt) = {y} for equivariant f, and the other sets tPt get mapped to different symmetrically related outputs. Without loss of generality, assume y has StabS(y) = K. Then for any symmetrically related output ty (where t ∈ T), we can see from equivariance of f that f(x, tPt) = {ty}.
It is now clear that the size of Pt counts how many symmetry breaking objects must be mapped to the same output. Note that in the case K = 1, S/K = S, so Pt just consists of representatives from P/S. So this reduces to the degeneracy defined for full SBSs. Also, note that in the ideal case, there is exactly one symmetry breaking object for each output, so the degeneracy is 1 in that case. We would like to derive bounds on the degeneracy of our equivariant partial SBSs. Similar to the full SBS case, we use Theorem 4.5 to convert this into a group theory question.

Corollary 4.7. *Let G be a group, S a subgroup, and K a subgroup of S. Let K′ be a subgroup of K and M a subgroup of NG(S, K) ∩ NG(S, K′) which contains S. Suppose P is a G-equivariant K-partial SBS for S which is transitive under NG(S, K). We can choose P such that StabS(p) ∈ ClNG(S,K)(K′) for all p and all S-orbits in P are also M-orbits if and only if NS(K′)/K′ has a complement in NM(K′)/K′. Further, such a P has DegS,K(P) ≤ |K/K′| · |NG(S, K)/M|.*

See Appendix E.7 for a proof.

## 4.5 Optimality Of Exact Partial Symmetry Breaking

Note that in the previous section, we have been very careful to allow our partial SBS to break more symmetry than needed. Intuitively, we would like to say that it is always optimal to break down exactly to the symmetry of our output. That is, we only need to consider exact partial SBSs. Certainly, ignoring any equivariance constraints, given any non-exact K-partial SBS, we can construct an exact K-partial SBS by picking an element b with StabS(b) ≤ K and identifying its orbit under K together as one partial symmetry breaking object p = Kb. We construct the orbit of p under action by S as our K-partial SBS. We might expect that some modification of this construction can convert any non-exact equivariant K-partial SBS into an exact equivariant one.
Naively, we just take the orbit of the elements in the construction above under NG(S, K) to obtain G-equivariance. However, in Appendix G we come up with an explicit example where no exact equivariant K-partial SBS is smaller than the best equivariant full SBS.

## 5 Constructing SBSs

Now that we have fully characterized what equivariant symmetry breaking sets are, we show how to construct them.

## 5.1 Expressing Subgroups

First, it is important to have a way of expressing subgroups of G, the group we want to be equivariant under. We focus on G = O(3) in this section, though the ideas here are applicable to other groups as well. The subgroups of O(3) are well studied and have been completely classified (Hahn et al., 1983). In particular, there are 7 infinite axial families of finite point groups and 7 additional finite ones. There are only 5 infinite subgroups which are closed. However, the names of these subgroups do not specify how they are "oriented" in O(3). Hence, we propose to represent the subgroups in the following way. We first choose a canonical orientation of the classified point groups.

![9_image_0.png](9_image_0.png)

Figure 8: Two identical triangular prisms differing by a rotation. Both have symmetry D6h by name; however, the actual symmetry axes differ.

A standard choice is inspired by the Hermann-Mauguin naming scheme for point groups. For the 7 infinite series of axial groups, we choose to align the high order symmetry axis along the z-axis. If in addition there are 2-fold rotations, we choose one of them to be along the x-axis. If there are no 2-fold rotations but there is a mirror plane parallel to the z-axis, we choose the mirror through the yz-plane to be in the group. Of the remaining 7 point groups, 5 are cubic groups. For these we can choose the cube they leave invariant to have sides perpendicular to one of the x, y, z axes. Finally, the remaining 2 point groups are the icosahedral groups with and without inversion.
For these we can choose to align a 5-fold axis with the z-axis and a 3-fold axis with the x-axis.

![9_image_1.png](9_image_1.png)

Next, for any point group S with arbitrary alignment, there is always some g ∈ SO(3) such that g−1Sg brings it to the canonical orientations defined above. Hence, we can always express an arbitrary point group S as a pair (g, name(S)) of a rotation and the name of the group.

## 5.2 Representing A Set Of Conjugate Subgroups

In the partial symmetry breaking case, we also provide a set of conjugate subgroups. Similar to our notation ClS(K), we can specify a set of conjugate subgroups with (S, K), where the subgroups S and K are represented in the way described previously.

## 5.3 Representing A SBS

The idea is very simple. We start with some object which breaks enough symmetry for our task. To satisfy the equivariance condition of closure under the normalizer or generalized normalizer, we can simply take the orbit as our SBS or partial SBS. In principle a SBS can consist of multiple such orbits, but we can always use just one orbit as a SBS, and multiple orbits increase the degeneracy. Hence we assume all our SBSs consist of one orbit, and we can fully specify a SBS as a pair (b, N), where b is a symmetry breaking object and N is a group over which we take the orbit of b. In the case of a finite group N, we can explicitly compute the elements in the orbit. However, if N is infinite, then this does not work. In practice, it is usually enough that we can sample lower symmetry outputs. Hence, it suffices to be able to sample the SBS, which we can do by sampling an element from N.

## 5.4 Constructing An Equivariant Full SBS

We would like to construct a full SBS given an input symmetry S = (g, name(S)). However, we want to do so in an equivariant way. One way to achieve this is to first consider any input group in its canonical orientation and construct a SBS B for it. Then simply returning gB would guarantee that our construction is equivariant.
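The (g, name(S)) representation can be sketched concretely with rotation matrices (plain Python, our own helper names): the canonical C2 below is the half-turn about z, and conjugating by a rotation g re-orients the whole group, here turning it into the half-turn about y.

```python
import math

def rot_x(a):                     # rotation about the x-axis by angle a
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Canonical orientation: C2 is the half-turn about the z-axis.
C2_z = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]
g = rot_x(math.pi / 2)            # the g in the pair (g, name(S))

# The oriented group element is g R g^{-1}; for orthogonal g, g^{-1} = g^T.
R = matmul(matmul(g, C2_z), transpose(g))

# The result is the half-turn about the y-axis, i.e. the axis g applied to z.
C2_y = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]
assert all(abs(R[i][j] - C2_y[i][j]) < 1e-12
           for i in range(3) for j in range(3))
print("conjugated C2 axis now along y")
```

Applying the same g to every element of a canonical SBS B is exactly the "return gB" step that makes the construction equivariant.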
Hence, we just need to construct a canonical SBS for each possible point group. Note an equivariant full SBS needs closure under a normalizer. If we work with O(3) equivariance, we simply look up the normalizer NO(3)(S) from Table 4. If we are equivariant under some subgroup G ⊂ O(3), then we note that NG(S) = NO(3)(S) ∩ G, which can be used to compute the desired normalizer. All that remains is how to specify a canonical symmetry breaking object for each point group in canonical orientation. If we have such an object, then we obtain the following algorithm for creating equivariant full SBSs. Let Normalizers be a function which takes in the name of a point group and gives the corresponding normalizer classified in Table 4.

Algorithm 1: Equivariant full SBS

Input:
- S: symmetry of input expressed as pair (g, name(S))
- b: canonical symmetry breaking object

Output:
- B: symmetry breaking set expressed as a pair (b′, N)

1. (n, name(NO(3)(S))) ← Normalizers[name(S)]
2. N ← (gn, name(NO(3)(S)))
3. return (gb, N)

In general, the choice of a canonical symmetry breaking object is flexible. To satisfy the definition, one just needs the corresponding object for name(S) to not share any symmetries with S = (e, name(S)). However, as discussed in Section 3.3, an ideal SBS should be more efficient than a nonideal one. Hence, if possible we would like to pick such an object so that it generates an ideal SBS. Theorem 3.3 tells us exactly the conditions needed to choose an ideal SBS. In particular, we would need the additional condition that b has the symmetry of a complement H while not having the symmetry of S. We fully characterized the relevant cases in Appendix F.
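A toy transcription of Algorithm 1 is given below. Group elements are reduced to rotations about z stored as angles (composition is addition), and the Normalizers lookup holds a single illustrative stand-in entry; the actual table (Table 4) covers all point groups, so both the entry and the helper names here are our assumptions for the sketch, not the paper's code.

```python
import math

# Hypothetical one-entry stand-in for a Table-4-style lookup:
# name(S) -> (n, name(N_O(3)(S))).
NORMALIZERS = {
    "C3": (0.0, "D_inf_h"),
}

def rotate(angle_deg, v):
    a = math.radians(angle_deg)
    x, y = v
    return (math.cos(a) * x - math.sin(a) * y,
            math.sin(a) * x + math.cos(a) * y)

def equivariant_full_sbs(S, b):
    """S = (g, name(S)) is the input symmetry; b is the canonical symmetry
    breaking object (here a 2D vector). Returns the SBS as a pair (b', N)."""
    g, name_S = S
    n, name_N = NORMALIZERS[name_S]
    N = (g + n, name_N)              # N <- (gn, name(N_O(3)(S)))
    return (rotate(g, b), N)         # return (gb, N)

b_prime, N = equivariant_full_sbs((90.0, "C3"), (1.0, 0.0))
print(b_prime, N)                    # canonical object carried into the input's orientation
```

The structure mirrors the algorithm exactly: one table lookup, one composition to orient the normalizer, and one group action to orient the canonical object.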
## 5.5 Constructing Equivariant Partial SBS

In the partial symmetry breaking case, we want to obtain a partial SBS from the symmetry of the input and the set of conjugate subgroup symmetries of the outputs. Importantly, note that we want closure under NG(S, K) rather than under NG(S). To compute NG(S, K), we use the following fact.

Lemma 5.1. *We have the following formula:* NG(S, K) = S(NG(S) ∩ NG(K)).

See Appendix E.8 for a proof. Thus we get the following algorithm for computing an equivariant partial SBS.

Algorithm 2: Equivariant partial SBS from object

Input:
- S: symmetry of input expressed as pair (gS, name(S))
- ClS(K): set of conjugate subgroups expressed as (S, K) = ((gS, name(S)), (gK, name(K)))
- p: canonical partial symmetry breaking object

Output:
- P: symmetry breaking set expressed as a pair (p′, N)

1. N1 ← Normalizers[name(S)]
2. (n, name(N2)) ← Normalizers[name(K)]
3. N2 ← (gS−1gKn, name(N2))
4. N ← N1 ∩ N2
5. N ← (e, name(S))N
6. N ← gSNgS−1
7. return (gSp, N)

We emphasize that for any K′ < K, a K′-partial SBS can also serve as a K-partial SBS since we can always break more symmetry than needed. In particular, a full SBS often suffices for simplicity. Similar to the full SBS case, we have flexibility in choosing the canonical object used to generate the partial SBS. To satisfy the definition, all we need is for StabG(p) ≤ K′ for some K′ ∈ ClS(K). An ideal partial SBS is desirable, especially if we wish to ensure we do not break any extra symmetry. The condition given by Theorem 4.5 is more complicated. However, if we have a FindComplement function, we can automate the process of finding the symmetry that an object generating an ideal partial SBS should have. Here, let Quotient be a function which returns a quotient group and a mapping from cosets to elements of the quotient group.
We have the following algorithm, which returns a pair of subgroups (H, K) such that if p has H ≤ StabO(3)(p) and StabS(p) = K, then p generates an ideal partial SBS.

Algorithm 3: Ideal partial SBS generating object symmetry

Input:
- S: symmetry of input expressed as pair (gS, name(S))
- ClS(K): set of conjugate subgroups expressed as (S, K) = ((gS, name(S)), (gK, name(K)))

Output:
- H: symmetry needed for object p to generate an ideal partial SBS

1. N1 ← Normalizers[name(S)]
2. (n, name(N2)) ← Normalizers[name(K)]
3. N2 ← (gS−1gKn, name(N2))
4. N ← N1 ∩ N2
5. N′ ← (e, name(S)) ∩ N2
6. (Q1, ϕ) ← Quotient[N, (gS−1gK, K)]
7. Q2 ← ϕ(N′)
8. C ← FindComplement[Q1, Q2]
9. if C exists then
10. (h, name(H)) ← ϕ−1(C)
11. return ((gSh, name(H)), K)
12. else
13. return None

## 6 Experiments

Here, we provide some example tasks where we apply our framework to full symmetry breaking and partial symmetry breaking cases. We consider the cases where we can find an ideal equivariant SBS or partial SBS. We explicitly work through how to obtain the ideal equivariant SBS in each case. This section serves primarily as a proof of concept for how our approach works in practice. For each of our tasks, we trained an equivariant convolutional message-passing graph neural network (GNN). We use a modified version of the predefined network from the e3nn library (Geiger & Smidt, 2022). Each layer consists of the equivariant 3D steerable convolutions followed by gated nonlinearities described in Weiler et al. (2018). We use a Gaussian basis for our radial network.

## 6.1 Full Symmetry Breaking: Triangular Prism

For an example of full symmetry breaking, we consider the task of pointing to a vertex of a triangular prism, similar to that described in Section 3.2. Our input is a graph with 6 nodes with edges given by the edges of the prism. We have position features at the nodes corresponding to the positions of the vertices.
Since we want this to be a full symmetry breaking task, we require chirality in our prism. In order to make it chiral, we also have shared pseudoscalar features of value 1 at all the vertices. For this task, the hidden features in our model are up to l = 2 of both parities, and our convolutional filters use up to l = 4 spherical harmonics. For our radial network, we use a 3-layer fully connected network with 16 hidden features in each layer. We add our symmetry breaking object as an additional feature to all nodes of the graph. We output vectors (odd parity l = 1) at each node and take the sum as our output. In this case, it turns out a choice of ideal equivariant SBS is the set of unit vectors parallel to an edge of the triangular faces of the prism. See Appendix H.1.1 for details on why this choice works. We first fix a choice of one symmetry breaking object from our equivariant SBS and one of the vertices of the triangular prism. We then give the chosen symmetry breaking object as an additional input to our equivariant GNN and train it to output a vector (odd l = 1) feature pointing to our chosen point from the center. An example of the result of this training is shown in Figure 9a. We also observe that no matter which choices of vertex and symmetry breaking object we pick, our equivariant network is able to learn to output the vector pointing to that vertex. In practice, this means that we can choose any of our symmetry breaking objects as additional input. Once trained on one pair of symmetry breaking object and vertex, the equivariance of our GNN means that inputting the other symmetry breaking objects in our SBS gives the other symmetrically related outputs. This is shown in Figure 9b. Further, rather than picking one vertex, we also tried modifying our loss so that we compute the loss for all choices of vertex and take the minimum. Hence, our network can learn which vertex to pair with each symmetry breaking object.
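The minimum-over-vertices loss just described can be sketched in a few lines (a plain Python stand-in; the actual training code presumably operates on framework tensors, and the per-coordinate squared error here is our assumption for illustration).

```python
# Minimum over symmetrically related targets: the network is only penalized
# for its distance to the nearest valid output, so it is free to learn its
# own pairing of symmetry breaking objects to vertices.
def min_symmetric_loss(pred, targets):
    """pred: predicted vector; targets: all symmetrically related vertices."""
    return min(sum((p - t) ** 2 for p, t in zip(pred, tgt)) for tgt in targets)

vertices = [(1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)]   # a triangle's vertices
loss = min_symmetric_loss((0.9, 0.1), vertices)
print(loss)   # squared distance to the nearest vertex only
```

Because only the minimum term contributes, symmetrically related outputs are never punished, which is exactly what lets the pairing be learned rather than fixed.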
In this prism example, our pairing is random. This method of taking the minimum loss is especially useful when we have multiple instances of symmetry breaking in our data. Finally, in Appendix H.1.2 we demonstrate that a non-equivariant SBS fails as described in Section 3.2, and in Appendix H.1.3 we show that a nonideal SBS is less efficient than an ideal one.

![13_image_1.png](13_image_1.png)

![13_image_0.png](13_image_0.png)

Figure 9: (a) Output (red) generated by our model and symmetry breaking object (blue) given. (b) The set of all the outputs generated by our model if we feed in all symmetry breaking objects.

## 6.2 Partial Symmetry Breaking: Octagon To Rectangle

For an example of partial symmetry breaking, we consider the task of deforming an octagon to a rectangle. We make our octagon chiral by adding pseudoscalar features of value 1 to all vertices. We select this example because the construction of the stabilizer for a single symmetry breaking object illustrates the general procedure. The nodes of the input graph are just the vertices of the octagon, and the edges are just the sides. We use the exact same architecture as for the triangular prism experiment. Here, we output vector (odd parity l = 1) features on each vertex which represent the distortion of that vertex. As shown in Appendix H.2.1, one choice of ideal equivariant SBS for this case consists of l = 2 objects aligned to be parallel to an edge of the octagon. Similar to the prism case, we try training by matching a specific symmetry breaking object to a rectangle. When the symmetry breaking object and rectangle are compatible (share the same symmetries), our model has no problem learning to deform the octagon into the rectangle. This is shown in Figures 10a and 10b. An interesting failure case occurs when we try to match a symmetry breaking object and rectangle with incompatible symmetries. This is shown in Figure 10c.
Here, the D2 symmetry of the rectangle and that of the symmetry breaking object are misaligned. As a result, our model predicts an output with D4 symmetry, which is the group generated when we include the symmetry elements of both the target rectangle and the symmetry breaking object. Hence, the resulting shape is a square. As with the triangular prism case, we also tried letting the model choose which rectangle to deform to given a symmetry breaking object. In this case, our model computes the loss separately for all 4 possible rectangles and takes the minimum. We note that for a given symmetry breaking object, 2 of the possible rectangles are symmetrically compatible while 2 are not. Over 200 random initializations, we find that roughly 30% of the time our model attempts to match symmetrically incompatible symmetry breaking objects to a rectangle. This is better than the 50% we would expect if it matched pairs randomly.

![14_image_0.png](14_image_0.png)

Figure 10: (a) Output (blue) of our model when we match a symmetry breaking object with a compatible rectangle. (b) Output (blue) of our model when we match a symmetry breaking object with a different compatible rectangle. (c) Output (blue) when we match a symmetry breaking object with an incompatible rectangle (green). Note the square has symmetries of both the symmetry breaking object and the target rectangle.

## 6.3 BaTiO3 Phase Transitions

Finally, we demonstrate our framework on a more realistic example. For this, we examine the crystal structure of barium titanate (BaTiO3). Specifically, as we decrease temperature, there is a phase transition from a high space-group symmetry Pm¯3m state to a lower space-group symmetry P4mm state at 403 K (Kay & Vousden, 1949; Oliveira et al., 2020; Woodward, 1997). The high and low symmetry states are shown in Figures 11a and 11b, respectively. Note that the real distortions are rather small and hard to see visually.
Table 1 provides some numerical quantities which help distinguish the two. In particular, there are 3 distinct Ti-O-Ti bond angles in a primitive cell, 2 of which are distorted equally to 171.80◦ in the low symmetry structure. This bent angle is shown more clearly in the schematic in Figure 11b. For this task, we seek to deform a high symmetry state into the lower symmetry one. Data for the high and low symmetry BaTiO3 crystals are obtained from the Materials Project database (Jain et al., 2013). For our demonstration, we focus on breaking the point group symmetries, which are Oh and C4v for Pm¯3m and P4mm respectively, leaving translational symmetry for future work. Hence, we set the unit cell of both crystals to be a cube with side length 4 Å, which is close to the real unit cells. The model used for this task has a similar architecture as the previous ones, with the modification of incorporating periodic boundary conditions because we are modelling a crystal. Similar to the octagon distortion task, we output vector (l = 1) features at each node which tell us how much to distort the corresponding atom. It turns out that any object sharing C4v symmetry works for generating an ideal equivariant partial SBS. This is because Oh has itself as normalizer in O(3), so the symmetry completely determines the orientation. A simple choice consists of vectors (odd parity l = 1 objects) pointing along the 4-fold rotation axes. As shown in Figure 12 and Table 1, our model is able to learn to distort the crystal structure appropriately when given an appropriate symmetry breaking object. Without such an input, the model cannot provide any distortions.

![15_image_0.png](15_image_0.png)

Figure 11: (a) Initial high symmetry crystal structure of BaTiO3. Left is an actual plot of the crystal structure and right is a side-on schematic. (b) Target low symmetry crystal structure of BaTiO3.
Left is an actual plot of the distorted crystal structure and right is a side-on schematic with exaggerated distortion. The angle of the bent bond is 171.80◦.

Table 1: Values of various quantities which help distinguish the high symmetry and low symmetry structures. Our models here try to distort the high symmetry structure to the low symmetry one.

| Structure | Bond length average | Bond length variance | Ti-O-Ti angle |
|-----------------------------|-----------------------|------------------------|-----------|
| High symmetry | 2 | 0 | 180◦ |
| Low symmetry | 2.003417 | 0.01392 | 171.80◦ |
| Model (no SBS) | 2 | 0 | 180◦ |
| Model (SB object (1, 0, 0)) | 2.003417 | 0.01392 | 171.80◦ |

![16_image_0.png](16_image_0.png)

Figure 12: Distorted crystal structure generated by our model when given a symmetry breaking object, shown on the right in blue.

## 7 Conclusion

We formalize the problem that equivariant neural networks face in the spontaneous symmetry breaking setting. We propose the idea of equivariant symmetry breaking sets, which allows ENNs to sample or generate all possible symmetrically related outputs given a highly symmetric input. Importantly, we show that minimizing these sets is intimately connected to a well-studied group theory problem, and we tabulate solutions for the ideal case for the point groups. We then demonstrate how our symmetry breaking framework works in practice on example problems. One future direction is to include translations and tabulate complements for the space groups in their respective normalizers. This would be particularly useful for crystallography applications. Another direction is to automate finding stabilizers for partial symmetry breaking objects. In addition, our method assumes we can efficiently detect the symmetry of our input and outputs. Designing fast symmetry detection algorithms would also be extremely beneficial.
Finally, designing efficient loss functions which do not punish symmetrically related outputs would be useful for any network dealing with spontaneous symmetry breaking.

## References

Amikam Aharoni. *Introduction to the Theory of Ferromagnetism*, volume 109. Clarendon Press, 2000.

Sidhika Balachandar, Adrien Poulenard, Congyue Deng, and Leonidas Guibas. Breaking the symmetry: Resolving symmetry ambiguities in equivariant neural networks. In NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations, 2022.

Ilyes Batatia, David P Kovacs, Gregor Simm, Christoph Ortner, and Gábor Csányi. Mace: Higher order equivariant message passing neural networks for fast and accurate force fields. Advances in Neural Information Processing Systems, 35:11423–11436, 2022.

Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. *Nature communications*, 13(1):2453, 2022.

Aron Beekman, Louk Rademaker, and Jasper van Wezel. An introduction to spontaneous symmetry breaking. SciPost Physics Lecture Notes, pp. 011, 2019.

Martin Bokeloh, Alexander Berner, Michael Wand, H-P Seidel, and Andreas Schilling. Symmetry detection using feature lines. In *Computer Graphics Forum*, volume 28, pp. 697–706. Wiley Online Library, 2009.

Elena Castellani et al. On the meaning of symmetry breaking. *Symmetries in physics: Philosophical reflections*, pp. 321–334, 2003.

Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990–2999. PMLR, 2016a.

Taco S Cohen and Max Welling. Steerable cnns. In *International Conference on Learning Representations*, 2016b.

Taco S Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical cnns. In *International Conference on Learning Representations*, 2018.

Taco S Cohen, Mario Geiger, and Maurice Weiler.
A general theory of equivariant CNNs on homogeneous spaces. *Advances in Neural Information Processing Systems*, 32, 2019.

David F Crouse. On implementing 2d rectangular assignment algorithms. *IEEE Transactions on Aerospace and Electronic Systems*, 52(4):1679–1696, 2016.

Ameya Daigavane, Song Kim, Mario Geiger, and Tess Smidt. Symphony: Symmetry-equivariant point-centered spherical harmonics for molecule generation. *arXiv preprint arXiv:2311.16199*, 2023.

Mildred S Dresselhaus, Gene Dresselhaus, and Ado Jorio. *Group Theory: Application to the Physics of Condensed Matter*. Springer Science & Business Media, 2007.

David Steven Dummit and Richard M Foote. *Abstract Algebra*, volume 3. Wiley Hoboken, 2004.

Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to Lie groups on arbitrary continuous data. In *Proceedings of the 37th International Conference on Machine Learning*, pp. 3165–3176, 2020.

Octavian-Eugen Ganea, Xinyuan Huang, Charlotte Bunne, Yatao Bian, Regina Barzilay, Tommi S Jaakkola, and Andreas Krause. Independent SE(3)-equivariant models for end-to-end rigid protein docking. In *International Conference on Learning Representations*, 2021.

GAP. *GAP - Groups, Algorithms, and Programming, Version 4.12.2*. The GAP Group, 2022. URL https://www.gap-system.org.

Mario Geiger and Tess Smidt. e3nn: Euclidean neural networks. *arXiv preprint arXiv:2207.09453*, 2022.

Theo Hahn, Uri Shmueli, and JC Wilson Arthur. *International Tables for Crystallography*, volume 1. Reidel Dordrecht, 1983.

Allen Hatcher. *Algebraic Topology*. Cambridge University Press, 2002.

Peter W Higgs. Broken symmetries and the masses of gauge bosons. *Physical Review Letters*, 13(16):508, 1964.

Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3d. In *International Conference on Machine Learning*, pp. 8867–8887. PMLR, 2022.
Ningyuan Teresa Huang, Ron Levie, and Soledad Villar. Approximately equivariant graph networks. In *Thirty-seventh Conference on Neural Information Processing Systems*, 2023.

Mehdi Jafary-Zadeh, Khoong Hong Khoo, Robert Laskowski, Paulo S Branicio, and Alexander V Shapeev. Applying a machine learning interatomic potential to unravel the effects of local lattice distortion on the elastic properties of multi-principal element alloys. *Journal of Alloys and Compounds*, 803:1054–1062, 2019.

Anubhav Jain, Shyue Ping Ong, Geoffroy Hautier, Wei Chen, William Davidson Richards, Stephen Dacek, Shreyas Cholia, Dan Gunter, David Skinner, Gerbrand Ceder, et al. Commentary: The materials project: A materials genome approach to accelerating materials innovation. *APL Materials*, 1(1), 2013.

Weile Jia, Han Wang, Mohan Chen, Denghui Lu, Lin Lin, Roberto Car, E Weinan, and Linfeng Zhang. Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning. In *SC20: International Conference for High Performance Computing, Networking, Storage and Analysis*, pp. 1–14. IEEE, 2020.

John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with AlphaFold. *Nature*, 596(7873):583–589, 2021.

Sékou-Oumar Kaba and Siamak Ravanbakhsh. Symmetry breaking and equivariant neural networks. In *NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations*, 2023.

Herbert Frederick Kay and P Vousden. XCV. Symmetry changes in barium titanate at low temperatures and their relation to its ferroelectric properties. *The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science*, 40(309):1019–1040, 1949.

Yosi Keller and Yoel Shkolnisky. An algebraic approach to symmetry detection. In *ICPR (3)*, pp. 186–189, 2004.

E Koch and W Fischer. Normalizers of point groups.
*International Tables for Crystallography*, pp. 904–905, 2006.

Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In *International Conference on Machine Learning*, pp. 2747–2755. PMLR, 2018.

Hans Kurzweil and Bernd Stellmacher. *The Theory of Finite Groups: An Introduction*, volume 1. Springer, 2004.

R Jeffrey Largent, William F Polik, and JR Schmidt. Symmetrizer: algorithmic determination of point groups in nearly symmetric molecules. *Journal of Computational Chemistry*, 33(19):1637–1642, 2012.

Laura Lewis, Hsin-Yuan Huang, Viet T Tran, Sebastian Lehner, Richard Kueng, and John Preskill. Improved machine learning algorithm for predicting ground state properties. *Nature Communications*, 15(1):895, 2024.

Yi-Lun Liao and Tess Smidt. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs. In *The Eleventh International Conference on Learning Representations*, 2022.

Mario Lino, Stathi Fotiadis, Anil A Bharath, and Chris D Cantwell. Multi-scale rotation-equivariant graph neural networks for unsteady eulerian fluid dynamics. *Physics of Fluids*, 34(8), 2022.

Niloy J Mitra, Leonidas J Guibas, and Mark Pauly. Partial and approximate symmetry detection for 3d geometry. *ACM Transactions on Graphics (ToG)*, 25(3):560–568, 2006.

Marisa C Oliveira, Renan AP Ribeiro, Elson Longo, Maurício RD Bomio, Fabiana V Motta, and Sergio R de Lazaro. Temperature dependence on phase evolution in the BaTiO3 polytypes studied using ab initio calculations. *International Journal of Quantum Chemistry*, 120(1):e26054, 2020.

Tess E Smidt, Mario Geiger, and Benjamin Kurt Miller. Finding symmetry breaking order parameters with euclidean neural networks. *Physical Review Research*, 3(1):L012002, 2021.

Shaojie Tang and Nadine Aubry. On the symmetry breaking instability leading to vortex shedding. *Physics of Fluids*, 9(9):2550–2561, 1997.
Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3d point clouds. *arXiv preprint arXiv:1802.08219*, 2018.

Tycho van der Ouderaa, David W Romero, and Mark van der Wilk. Relaxing equivariance constraints with non-stationary continuous filters. *Advances in Neural Information Processing Systems*, 35:33818–33830, 2022.

Dian Wang, Jung Yeon Park, Neel Sortur, Lawson LS Wong, Robin Walters, and Robert Platt. The surprising effectiveness of equivariant models in domains with latent symmetry. In *The Eleventh International Conference on Learning Representations*, 2022a.

Rui Wang, Robin Walters, and Rose Yu. Approximately equivariant networks for imperfectly symmetric dynamics. In *International Conference on Machine Learning*, pp. 23078–23091. PMLR, 2022b.

Rui Wang, Robin Walters, and Tess Smidt. Relaxed octahedral group convolution for learning symmetry breaking in 3d physical systems. In *NeurIPS 2023 AI for Science Workshop*, 2023.

Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable CNNs: Learning rotationally equivariant features in volumetric data. *Advances in Neural Information Processing Systems*, 31, 2018.

Patrick M Woodward. Octahedral tilting in perovskites. I. Geometrical considerations. *Acta Crystallographica Section B: Structural Science*, 53(1):32–43, 1997.
Table of Contents

- A Notation and commonly used symbols
- B Group theory
- C Equivariant neural networks
- D Limitations
  - D.1 Symmetry detection
  - D.2 Loss functions
- E Proofs
  - E.1 Proof of Lemma 2.1
  - E.2 Formal justification of Definition 3.2
  - E.3 Proof of Theorem 3.3
  - E.4 Proof of Corollary 3.6
  - E.5 Justification of Definition 4.4
  - E.6 Proof of Theorem 4.5
  - E.7 Proof of Corollary 4.7
  - E.8 Proof of Lemma 5.1
- F Classification of full symmetry breaking cases for O(3)
  - F.1 Normalizer: Kh
  - F.2 Normalizer: D∞h
  - F.3 Normalizer: D(2n)h
  - F.4 Normalizer: Ih
  - F.5 Normalizer: Oh
- G Equivariant full SBS better than exact partial SBS
- H Experiments
  - H.1 Triangular prism
  - H.2 Octagon to rectangle
  - H.3 BaTiO3 experiment

## A Notation And Commonly Used Symbols

Here, we present the notation we use throughout this paper and the typical variable names.

Table 2: Notation used throughout this paper

| Notation | Meaning |
|----------|---------|
| StabG(x) | Stabilizer of an element x under a group G |
| NG(S) | Normalizer of group S in group G |
| ClG(S) | Set of groups obtained by conjugating group S with elements in G |
| OrbG(x) | Orbit of an element x under action by elements of group G |
| P(X) | Set of all subsets of X |
| G/S | When G is a group, the set of left cosets; if S is a normal subgroup, this also denotes the quotient group |
| X/S | When X is a set, the equivalence classes induced by the action of S on X |
| S ≤ G | If S and G are groups, S is a subgroup of G |
| f\|X | Function f with domain restricted to X |

Table 3: Commonly used symbols

| Symbol | Meaning |
|--------|---------|
| G | Group our network is equivariant under |
| 1 | Used to denote the trivial group |
| e | Identity element of a group |
| x | Input |
| y | Output |
| S | Symmetry of our input, more precisely StabG(x) |
| K | Symmetry of our output, more precisely StabS(y) |
| B | Full symmetry breaking set |
| P | Partial symmetry breaking set |

## B Group Theory

Group theory is the mathematical language used to describe symmetries. Here, we present a brief overview of the concepts from group theory needed both to define equivariance and to understand our proposed symmetry breaking scheme. For a more comprehensive treatment of group theory, we refer to standard textbooks Dresselhaus et al. (2007); Dummit & Foote (2004); Kurzweil & Stellmacher (2004).

We begin by defining what a group is.

Definition B.1 (Group). Let G be a nonempty set equipped with a binary operator · : G × G → G. This is a group if the following group axioms are satisfied:

1. Associativity: For all a, b, c ∈ G, we have (a · b) · c = a · (b · c).
2. Identity element: There is an element e ∈ G such that for all g ∈ G we have e · g = g · e = g.
3. Inverse element: For all g ∈ G, there is an inverse g⁻¹ ∈ G such that g · g⁻¹ = g⁻¹ · g = e for identity e.

Some examples of groups include the group of rotation matrices with matrix multiplication as the group operation, the group of integers under addition, and the group of positive reals under multiplication. One very important group is the group of automorphisms of a vector space. This group is denoted GL(V), and we can think of it as the group of invertible matrices. While abstractly groups are interesting on their own, we care about using them to describe symmetries.
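The group axioms in Definition B.1 can be verified by brute force for small finite examples. The following sketch is purely illustrative (the helper `is_group` is ours, not from the paper):

```python
from itertools import product

# Brute-force check of the three group axioms from Definition B.1
# for a finite set with a given binary operation.
def is_group(elements, op):
    # Closure (implicit in the signature op : G x G -> G).
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # 1. Associativity.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # 2. Identity element.
    identities = [e for e in elements
                  if all(op(e, g) == g and op(g, e) == g for g in elements)]
    if not identities:
        return False
    e = identities[0]
    # 3. Inverse element.
    return all(any(op(g, h) == e and op(h, g) == e for h in elements)
               for g in elements)

# The integers mod 5 under addition form a group ...
print(is_group(set(range(5)), lambda a, b: (a + b) % 5))  # True
# ... but {0,...,4} under multiplication mod 5 does not (0 has no inverse).
print(is_group(set(range(5)), lambda a, b: (a * b) % 5))  # False
```

The second check fails precisely at the inverse axiom, mirroring why one restricts to the *positive* reals under multiplication in the examples above.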
Intuitively, the group elements abstractly represent the symmetry operations. In order to understand what these operations do, we need to define a group action.

Definition B.2 (Group action). Let G be a group and Ω a set. A group action is a function α : G × Ω → Ω such that α(e, x) = x and α(g, α(h, x)) = α(gh, x) for all g, h ∈ G and x ∈ Ω.

Often, we may want to relate two groups to each other. This is done using group homomorphisms, mappings which preserve the group structure.

Definition B.3 (Group homomorphism and isomorphism). Let G and H be groups. A group homomorphism is a function f : G → H such that f(u · v) = f(u) · f(v) for all u, v ∈ G. A group homomorphism is an isomorphism if f is a bijection.

Because there are many linear algebra tools for working with matrices, it is particularly useful to relate arbitrary groups to groups consisting of matrices. Such a homomorphism, together with the vector space the matrices act on, is a group representation.

Definition B.4 (Group representation). Let G be a group and V a vector space over a field F. A group representation is a homomorphism ρ : G → GL(V) taking elements of G to automorphisms of V.

Given any representation, there are often orthogonal subspaces which do not interact with each other. If this is the case, we can break our representation down into smaller pieces by restricting to these subspaces. Hence, it is useful to consider the representations which cannot be broken down. These are known as the irreducible representations (irreps) and often form the building blocks of more complex representations.

Definition B.5 (Irreducible representation). Let G be a group, V a vector space, and ρ : G → GL(V) a representation. A representation is irreducible if there is no nontrivial proper subspace W ⊂ V such that ρ|W is a representation of G over the space W.

There has been much work on understanding the irreps of various groups, and many equivariant neural network designs use this knowledge.
One natural question is whether there is a subset of group elements which themselves form a group under the same group operation. Such a subset is called a subgroup.

Definition B.6 (Subgroup). Let G be a group and S ⊆ G. If S together with the group operation · of G satisfies the group axioms, then S is a subgroup of G, which we denote as S ≤ G.

One particular feature of subgroups is that we can use them to decompose our group into disjoint chunks called cosets.

Definition B.7 (Cosets). Let G be a group and S a subgroup. The left cosets are the sets obtained by multiplying S with some fixed element of G on the left. That is, the left cosets are, for all g ∈ G, gS = {gs : s ∈ S}. We denote the set of left cosets as G/S. The right cosets are defined similarly, except we multiply with a fixed element of G on the right. That is, the right cosets are, for all g ∈ G, Sg = {sg : s ∈ S}. We denote the set of right cosets as S\G.

In general, the left and right cosets are not the same. However, for some subgroups they coincide. Those subgroups are called normal subgroups.

Definition B.8 (Normal subgroup). Let G be a group and N a subgroup. Then N is a normal subgroup if for all g ∈ G, we have gNg⁻¹ = N.

It turns out that given a normal subgroup, one can construct a group operation on the cosets. The resulting group is called a quotient group.

Definition B.9 (Quotient group). Let G be a group and N a normal subgroup. One can define a group operation on the cosets as aN · bN = (a · b)N. The resulting group is called the quotient group and is denoted G/N.

For subgroups S which are not normal in G, it is often useful to consider a subgroup of G containing S in which S is in fact normal. The largest such subgroup is called the normalizer.

Definition B.10 (Normalizer). Let G be a group and S a subgroup. The normalizer of S in G is NG(S) = {g : gSg⁻¹ = S}.

Similar to orthogonal vector spaces, one can imagine an analogous notion for groups. These are called complement subgroups.
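Before moving on to complements, the coset and normalizer definitions above can be checked by brute force on a small example. This is an illustrative sketch (the helper functions are ours, not from the paper), using the symmetric group S3 as permutation tuples:

```python
from itertools import permutations

# S3 as permutation tuples, with composition (p o q)(i) = p[q[i]].
G = set(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

def left_cosets(G, S):
    # gS = {gs : s in S}, Definition B.7.
    return {frozenset(compose(g, s) for s in S) for g in G}

def right_cosets(G, S):
    # Sg = {sg : s in S}.
    return {frozenset(compose(s, g) for s in S) for g in G}

def normalizer(G, S):
    # N_G(S) = {g : g S g^{-1} = S}, Definition B.10.
    inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
    return {g for g in G
            if {compose(compose(g, s), inv(g)) for s in S} == S}

# S = {e, (0 1)} is an order-2 subgroup of S3.
S = {(0, 1, 2), (1, 0, 2)}
print(len(left_cosets(G, S)))                   # 3 cosets of size 2
print(left_cosets(G, S) == right_cosets(G, S))  # False: S is not normal
print(normalizer(G, S) == S)                    # True: S is self-normalizing
```

For this non-normal S, the left and right coset partitions differ, and the normalizer is S itself, consistent with Definitions B.7, B.8, and B.10.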
Definition B.11 (Complement). Let G be a group and S a subgroup. A subgroup H is a complement of S if every g ∈ G can be written as g = sh for some s ∈ S and h ∈ H, and S ∩ H = {e}.

It turns out that if S is a normal subgroup of G and H is a complement, then H is isomorphic to the quotient group.

Finally, it is useful to define what we mean by the symmetry of an object. This is the set of all group elements which leave the object unchanged, called the stabilizer.

Definition B.12 (Stabilizer). Let G be a group, Ω some set with an action of G defined on it, and u ∈ Ω. The stabilizer of u is the set of all elements of G which leave u invariant. That is, StabG(u) = {g ∈ G : gu = u}.

One can check that the stabilizer is indeed a subgroup. Closely related to the stabilizer is the orbit. This is the set of all values we get when we act with our group on some object.

Definition B.13 (Orbit). Let G be a group, Ω some set with an action of G defined on it, and u ∈ Ω. The orbit of u is the set of all values obtained when we act with all elements of G on u. That is, OrbG(u) = {gu : g ∈ G} = Gu.

It turns out one can show that the stabilizers of elements in the same orbit are related. This relation is conjugation, which we define below.

Definition B.14 (Conjugate subgroups). Let S and S′ be subgroups of G. We say S and S′ are conjugate in G if there is some g ∈ G such that S = gS′g⁻¹. We denote the set of all conjugate subgroups by ClG(S) = {gSg⁻¹ : g ∈ G}.

## C Equivariant Neural Networks

Here, we give a brief overview of equivariant neural networks. For more in-depth coverage of the general theory and construction of equivariance, we refer to works such as Cohen et al. (2019); Finzi et al. (2020); Kondor & Trivedi (2018). We emphasize that the symmetry breaking techniques presented in the paper apply to any equivariant architecture. We first define equivariance.

Definition C.1 (Equivariance). Let G be a group with actions on spaces X and Y.
A function f : X → Y is said to be equivariant if for all x ∈ X and g ∈ G we have

$$f(gx)=gf(x).$$

Intuitively, we can interpret this as rotating the input giving the same result as just rotating the output. It is easy to check that the composition of equivariant functions is still an equivariant function. Hence, equivariant neural networks are designed as a composition of equivariant layers.

There has been considerable study of how one should design equivariant layers. One approach is to modify convolutional filters by transforming them with the elements of our group Cohen & Welling (2016a). This approach is known as group convolution and is based on the intuition that convolutional filters are translation equivariant. In group convolution, one interprets the data as a signal over some domain. The first layer is a lifting convolution which transforms the data into a signal over the group. The remaining layers then convolve this signal with filters which are also signals over the group.

One can further use group theory tools to break down the convolutional filters into irreps. This leads to steerable convolutional networks Cohen & Welling (2016b). These can be extended and used to parameterize continuous filters, which can be used for infinite groups Cohen et al. (2018). It turns out the irreps of the group are natural data types for equivariant networks. Further, we can express the convolutions as tensor products of irreps. We can think of equivariant operations as being composed of tensor products of irreps, linear mixing of irreps, and scaling by invariant quantities. Combining these, we get tensor field networks, which work on point clouds and are rotation equivariant Thomas et al. (2018). In this paper, we demonstrate our method using networks built from the e3nn framework for O(3) equivariance Geiger & Smidt (2022).
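Definition C.1 can be checked numerically for a toy group and action. This is an illustrative sketch independent of e3nn (all helper names are ours): G = S3 permutes the coordinates of length-3 lists, and we test whether f(gx) = gf(x) holds.

```python
import itertools

# Action of a permutation g (as a tuple) on a list x: entry x[i] moves
# to position g[i], so act(g, act(h, x)) == act(compose(g, h), x).
def act(g, x):
    y = [0.0] * len(x)
    for i, gi in enumerate(g):
        y[gi] = x[i]
    return y

def is_equivariant(f, xs):
    # Exhaustively test Definition C.1 over all of S3 and the sample inputs.
    G = list(itertools.permutations(range(3)))
    return all(f(act(g, x)) == act(g, f(x)) for g in G for x in xs)

xs = [[1.0, 2.0, 3.0], [0.5, -1.0, 4.0]]
square = lambda x: [v * v for v in x]                        # elementwise
cumsum = lambda x: [sum(x[: i + 1]) for i in range(len(x))]  # order-dependent
print(is_equivariant(square, xs))  # True: elementwise maps commute with permutations
print(is_equivariant(cumsum, xs))  # False: cumulative sums depend on coordinate order
```

The contrast shows why equivariant layers are restricted operations: the elementwise map commutes with the group action, while the cumulative sum does not.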
## D Limitations

## D.1 Symmetry Detection

To use our procedure, we do assume knowledge of the symmetries of the inputs and outputs to our network. In the full symmetry breaking framework, we only need the symmetry of the input. In the partial symmetry breaking framework, we need the symmetry of both the input and the output. However, we argue that this is not a major concern. First, symmetry detection is a well-studied problem, and many algorithms exist for various types of data Bokeloh et al. (2009); Keller & Shkolnisky (2004); Largent et al. (2012); Mitra et al. (2006). Further, sometimes the symmetries of the inputs and outputs are already known. This is especially true for crystallographic data Jain et al. (2013). In addition, because we only need the symmetry to design equivariant SBSs, we only need to perform symmetry detection once. This can simply be incorporated as a preprocessing step for our data.

Further, we want to emphasize that our framework can be used to prove whether knowledge of input symmetry is beneficial. We prove in Lemma F.1 that no nontrivial finite subgroup of SO(2) has a complement in its normalizer (which is also just SO(2)). Combined with Corollary 3.6, this implies that the degeneracy of any SO(2)-equivariant SBS for cyclic groups is infinite. Hence, we cannot do much better than something like noise injection, which introduces asymmetry without knowledge of input symmetry.

## D.2 Loss Functions

While this work focuses on allowing equivariant networks to produce a set of lower symmetry outcomes, it turns out another important problem is designing appropriate loss functions. Suppose we only have one example input–output pair x, y where y shares no symmetries with x. In this case there is no problem. We can fix any b ∈ B and train to minimize a simple MSE loss ||f(x, b) − y||², for example. However, suppose we observe x, y and x, y′ in the data, where y′ = sy and s ∈ S = StabG(x).
Then suppose we try to minimize the MSE loss ||f(x, b) − y||² + ||f(x, b′) − y′||², where b′ = s′b. Then by equivariance, the second term in the loss is

$$||f(x,b^{\prime})-y^{\prime}||^{2}=||f(x,s^{\prime}b)-y^{\prime}||^{2}=||s^{\prime}f(x,b)-sy||^{2}=||f(x,b)-s^{\prime-1}sy||^{2}.$$

So in fact we see we must choose s′ = s. So with multiple input–output pairs and a simple loss directly comparing outputs, such as MSE, we have a problem pairing symmetry breaking objects with outputs.

However, suppose instead our loss was chosen such that loss(y, y′) = loss(y, y) is small if y′ = sy for any s ∈ S. Then even if our network outputs sy instead of y when given some symmetry breaking object b, we do not punish it. Hence, this pairing problem would not exist. A simple version of such a loss would be to compute the MSE for all possible symmetrically related outputs and take the closest one:

$$\mathrm{loss}(f(x,b),y)=\min_{s\in S}||f(x,b)-sy||^{2}.$$

This is what we use for our experimental examples in this work. However, this can be inefficient for large or infinite S, and designing appropriate loss functions in such cases remains an open question.

## E Proofs

## E.1 Proof Of Lemma 2.1

Lemma E.1. Let X be a space with a transitive group action by G defined on it. Let Y be some space with a group action of G defined on it. Let f : X → Y be a G-equivariant function. We can choose f such that f(u) = y if and only if StabG(y) ≥ StabG(u). Further, this uniquely defines f|OrbG(u).

Proof. First suppose we did have f(u) = y. For any g ∈ StabG(u), we have by equivariance of f that gy = f(gu) = f(u) = y. So g ∈ StabG(y). Next, suppose StabG(y) ≥ StabG(u). For any x ∈ X, there is some r ∈ G so that x = ru. Let us pick exactly one such r for each x ∈ X and form a set R. Hence any x is uniquely written as x = ru for r ∈ R. Define f(x) = f(ru) = ry. We claim f is equivariant.
For any g ∈ G and x ∈ X, let x = ru and gx = r′u for some r, r′ ∈ R. Then, f(gx) = f(gru) = f(r′u) = r′y. But note that gx = r′u implies r′⁻¹gx = r′⁻¹gru = u. So r′⁻¹gr ∈ StabG(u) ≤ StabG(y). Hence, we also have r′⁻¹gry = y. So, f(gx) = r′y = r′(r′⁻¹gry) = gry = gf(x). Hence, f is equivariant. Finally, for uniqueness, suppose f, f′ are two equivariant functions such that f(u) = f′(u) = y. Then by equivariance, for any x = gu ∈ X we have

$$f(x)=f(gu)=gy=f^{\prime}(gu)=f^{\prime}(x).$$

## E.2 Formal Justification Of Definition 3.2

We can justify Definition 3.2 by characterizing exactly when σ can be equivariant. This leads to the following proposition.

Proposition E.2. Let G be a group and S be a subgroup of G. Let B ∈ P(B) be a set where there is some group action of G defined on B. Then there exists an equivariant σ|ClG(S) : ClG(S) → P(B) such that σ|ClG(S) = B if and only if nB = B for all n ∈ NG(S).

Proof. Note that ClG(S) is a set on which the action by conjugation is transitive. Also note by definition that StabG(S) for this action is precisely the normalizer NG(S). Then by Lemma E.1, we see such a function exists if and only if B is also symmetric under NG(S).

## E.3 Proof Of Theorem 3.3

Theorem 3.3. Let G be a group and S a subgroup. Let B be a G-equivariant SBS for S. Then it is possible to choose an ideal B if and only if S has a complement in NG(S).

Proof. Suppose B is transitive under S and pick b ∈ B. Consider the stabilizer group StabNG(S)(b). For any g ∈ NG(S), by transitivity under S we must have gb = sb for some s ∈ S. So s⁻¹gb = b, implying that h = s⁻¹g ∈ StabNG(S)(b). So we find that we can write any g as g = sh for some s ∈ S and h ∈ StabNG(S)(b), so NG(S) = S · StabNG(S)(b). But note that since B is symmetry breaking, S ∩ StabNG(S)(b) = {e}. Hence, StabNG(S)(b) is indeed a complement. For the converse, suppose H is a complement of S in NG(S). We claim B = NG(S)/H is the equivariant SBS we desire.
Note that clearly by construction, this is closed under NG(S), so we satisfy the equivariance condition. Further, note that StabS(H) = StabNG(S)(H) ∩ S = H ∩ S = {e}. Since B is transitive under NG(S), the stabilizers of all other elements are obtained by conjugation and hence are also trivial. Hence, it is indeed symmetry breaking. Finally, any g ∈ NG(S) is uniquely written as sh for some s ∈ S, h ∈ H, so gH = shH = sH. So B is transitive under S as well.

## E.4 Proof Of Corollary 3.6

Corollary 3.6. Let G be a group and S a subgroup. Let M be such that S ≤ M ≤ NG(S). Let B be a G-equivariant SBS for S which is transitive under NG(S). Then it is possible to choose B such that every M-orbit is also transitive under S if and only if S has a complement in M. In particular, such a B has DegS(B) ≤ |NG(S)/M|.

Proof. Suppose we have such a B and pick any b ∈ B. By transitivity of the orbit under S, we have Mb = Sb. Let B′ = Mb. We can check that this is in fact an ideal M-equivariant SBS for S. That it is symmetry breaking follows since B is symmetry breaking. That it is M-equivariant and transitive follows since Mb = Sb and NM(S) = M. By Theorem 3.3 this implies S has a complement in M. Next, suppose we have a complement of S in M. By Theorem 3.3 we can construct B′ which is an ideal M-equivariant SBS for S. We can lift this to a G-equivariant SBS for S by just taking B = NG(S)B′. Finally, to compute the order, we note that every S-orbit is also an M-orbit. Since B is transitive under NG(S), there are at most |NG(S)/M| M-orbits and hence only that many S-orbits. So DegS(B) ≤ |NG(S)/M|.

## E.5 Justification Of Definition 4.4

Similar to the full SBS case, we can justify Definition 4.4 by characterizing exactly when an equivariant π can exist. This leads to the following proposition.

Proposition E.3. Let G be a group, S a subgroup of G, and K a subgroup of S. Let P be a set with a group action of G defined on it and P ⊂ P.
There exists an equivariant π|OrbG((S,ClS(K))) : OrbG((S, ClS(K))) → P(P) such that π|OrbG((S,ClS(K)))((S, ClS(K))) = P if and only if NG(S, K) leaves P invariant.

Proof. By Lemma E.1, we need P to be closed under the stabilizer of the input. But the generalized normalizer NG(S, K) is precisely this stabilizer.

## E.6 Proof Of Theorem 4.5

Theorem 4.5. Let G be a group and S and K be subgroups with K ≤ S ≤ G. Let P be a G-equivariant K-partial SBS. Then we can choose an ideal P (exact and transitive under S) if and only if NS(K)/K has a complement in NNG(S,K)(K)/K.

Proof. Let P = Su where u has symmetry StabS(u) = K. We can define an action of any coset in NG(S, K)/K on u as just the action of a coset representative on u. This is consistent since u is invariant under K. In particular, note that K is a normal subgroup of NS(K), so NS(K)/K is a quotient group. Let B′ = (NS(K)/K)u. Since u is in a K-partial SBS, we must have su ≠ u for any s ∈ S − K. Hence, for any coset gK ∈ NS(K)/K, gu ≠ u if g ∉ K. Therefore, B′ must be a SBS for NS(K)/K.

Next, consider any coset gK in NNG(S,K)(K)/K. Then we know gu ∈ Su, so gu = su for some s ∈ S. Since K was a symmetry of u, gKg⁻¹ = sKs⁻¹ is a symmetry of gu = su. So the stabilizer of su must be sKs⁻¹ = K. Hence, s must be in NS(K). Therefore the action of gK on u gives us an element of B′ = (NS(K)/K)u. Hence B′ is NNG(S,K)(K)/K-equivariant. By Theorem 3.3, the existence of an ideal NNG(S,K)(K)/K-equivariant SBS for NS(K)/K implies that NS(K)/K has a complement in NNG(S,K)(K)/K.

For the converse direction, suppose that A is a complement of NS(K)/K in NNG(S,K)(K)/K. Note the elements of A are cosets of K, so we can define a set of elements of NG(S, K) as

$$H=\bigcup_{C\in A}C.$$

Define P = OrbS(H). We claim that P is a transitive exact equivariant partial SBS. We first show that P is exact K-partial symmetry breaking. Consider s ∈ S.
We can write

$$sH=\bigcup_{C\in A}sC=\bigcup_{C\in sA}C.$$

Now we see that if s ∈ K, then since K is the identity in the quotient group, sA = A. Hence sH = H in this case. If s ∈ NS(K) − K, then sK is not the identity in NS(K)/K. But A is a complement, so sA ≠ A, implying sH ≠ H. Finally, if s ∉ NS(K), then sK ∉ NNG(S,K)(K)/K. So sH ⊄ NNG(S,K)(K). But H ⊂ NNG(S,K)(K), so sH ≠ H. Hence, StabS(H) = K, and since the rest of P is just the orbit of H, the stabilizers of the other elements are in ClS(K). Hence, P as we constructed it is an exact K-partial SBS.

For equivariance, consider any n ∈ NG(S, K), giving a coset

$$nH=\bigcup_{C\in A}nC=\bigcup_{C\in nA}C.$$

If n ∈ NNG(S,K)(K), then since A is a complement, (nK) = (sK)(aK) for some sK ∈ NS(K)/K and aK ∈ A, so nA = (nK)A = (sK)(aK)A = (sK)A = sA for some s ∈ NS(K) ⊂ S. Hence, nH = sH for some s ∈ S. If n ∉ NNG(S,K)(K), then there is some s so that nKn⁻¹ = sKs⁻¹. Therefore, s⁻¹nKn⁻¹s = K, so s⁻¹n ∈ NNG(S,K)(K). But we saw before that this means there is some s′ such that s⁻¹nH = s′H. Thus, nH = ss′H and ss′ ∈ S. So nH ∈ P, and P is indeed closed under action by NG(S, K).

## E.7 Proof Of Corollary 4.7

Corollary 4.7. Let G be a group, S a subgroup, and K a subgroup of S. Let K′ be a subgroup of K and M a subgroup of NG(S, K) ∩ NG(S, K′) which contains S. Suppose P is a G-equivariant K-partial SBS for S which is transitive under NG(S, K). We can choose P such that StabS(p) ∈ ClNG(S,K)(K′) for all p and all M-orbits in P are transitive under S if and only if NS(K′)/K′ has a complement in NM(K′)/K′. Further, such a P has DegS,K(P) ≤ |K/K′| · |NG(S, K)/M|.

Proof. Suppose we had such a P. Pick some p ∈ P such that StabS(p) = K′. Since M-orbits are transitive under S, we have Mp = Sp. Let P′ = Mp. We can check that this is an M-equivariant set. Further, since M ⊂ NG(S, K′), we see that this is an exact K′-partial symmetry breaking set. Hence, it is an ideal M-equivariant K′-partial SBS.
Also note that since M ⊂ NG(S, K′), we have M = NM(S, K′). So by Theorem 4.5, NS(K′)/K′ must have a complement in NM(K′)/K′. Conversely, suppose NS(K′)/K′ has a complement in NM(K′)/K′. Again, we note M = NM(S, K′), so by Theorem 4.5 we have an ideal M-equivariant K′-partial SBS for S. We can lift this to G-equivariance by taking the orbit under NG(S, K).

To see the order of such a P, we consider the S-orbits. Let T be a transversal of S/K. For each S-orbit, we can pick some p in that orbit so that StabS(p) ≤ K. We put the elements of Kp in our Pt. Within each S-orbit, any p′ can be written as sp for some s ∈ S, and any s is uniquely written as s = tk for some t ∈ T and k ∈ K. So p′ = tkp. However, note that any other s′ where p′ = s′p = sp can be written as s′ = sk′ for some k′ ∈ StabS(p). Hence, p′ is uniquely written as p′ = t(kp), since kk′p = kp. So each S-orbit contributes |K/StabS(p)| elements to Pt. However, since StabS(p) ∈ ClNG(S,K)(K′), we must have |K/StabS(p)| = |K/K′|. Finally, we know each S-orbit is also an M-orbit; since P is transitive under NG(S, K), there are at most |NG(S, K)/M| different S-orbits. So

$$\operatorname{Deg}_{S,K}(P)=|P_{t}|\leq|K/K^{\prime}|\cdot|N_{G}(S,K)/M|.$$

## E.8 Proof Of Lemma 5.1

Lemma 5.1. We have the following formula:

$$N_{G}(S,K)=S\,(N_{G}(S)\cap N_{G}(K)).$$

Proof. For any n ∈ NG(S, K), by definition we must have nKn⁻¹ = sKs⁻¹ for some s ∈ S. Hence, s⁻¹nKn⁻¹s = K, so s⁻¹n ∈ NG(K). Further, clearly s ∈ NG(S) and n ∈ NG(S), so also s⁻¹n ∈ NG(S). Therefore, s⁻¹n ∈ NG(S) ∩ NG(K). So n = s(s⁻¹n) ∈ S(NG(S) ∩ NG(K)). Hence NG(S, K) ⊆ S(NG(S) ∩ NG(K)). Next, consider any n′ = sn ∈ S(NG(S) ∩ NG(K)) where s ∈ S, n ∈ NG(S) ∩ NG(K). Since s ∈ NG(S) and n ∈ NG(S), clearly sn ∈ NG(S). Further, we find snK(sn)⁻¹ = s(nKn⁻¹)s⁻¹ = sKs⁻¹ ∈ ClS(K). Therefore by definition sn ∈ NG(S, K). So also S(NG(S) ∩ NG(K)) ⊆ NG(S, K).
Hence, we find that NG(S, K) = S(NG(S) ∩ NG(K)).

## F Classification Of Full Symmetry Breaking Cases For O(3)

Here we tabulate the cases for full symmetry breaking for the finite subgroups of O(3). These are the point groups, and their normalizers are tabulated in the International Tables for Crystallography in Hermann–Mauguin notation (Koch & Fischer, 2006). We have translated these to Schönflies notation in Table 4.

Table 4: Normalizers of the point groups in Schönflies notation. Note we have the equivalences C1 = 1, S2 = Ci, C1h = C1v = Cs, D1 = C2, D1h = C2v, D1d = C2h.

| Normalizer: | Groups: |
|---------------|---------------------------------|
| Kh | 1, Ci |
| D∞h | Cn, S2n, Cnh ∀n ≥ 2; Cs |
| D(2n)h | Cnv, Dnd, Dnh ∀n ≥ 2; Dn ∀n ≥ 3 |
| Ih | I, Ih |
| Oh | D2, D2h, T, Td, Th, O, Oh |

In the following subsections we do casework by normalizers. For each subgroup with a given normalizer, we give a valid complement by name if it exists. In some normalizers, the name of a subgroup is not sufficient to identify it. This is because there are multiple copies of subgroups with that name in the normalizer. In such cases, we must identify which copy of the subgroup we care about. To do so, we give the normalizers in terms of a group presentation found with the help of GAP. Group presentations are essentially a set of generators and relations among the generators. We can then specify any specific subgroup of the normalizer using the generators of the normalizer.

## F.1 Normalizer: Kh

All the groups with this normalizer do have complements. Note that in Schönflies notation, Kh is just the entire group O(3). The only subgroups with O(3) as normalizer are the trivial group C1 and inversion Ci. Clearly for the trivial group the complement is O(3). For inversion, the complement is just SO(3).

| Group | Complement |
|---------|--------------|
| 1 | Kh = O(3) |
| Ci | K = SO(3) |

Table 5: Groups with normalizer Kh = O(3) and their complements.
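The complement condition used in these tables (H ∩ K = {e} and HK = G) can be checked mechanically for finite groups. A minimal sympy sketch, with S3 and its subgroups as a purely illustrative stand-in for the point groups above:

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

# Illustrative check that H is a complement of K in G:
# the intersection H ∩ K is trivial and the products HK cover all of G.
G = SymmetricGroup(3)                            # order 6
K = AlternatingGroup(3)                          # normal subgroup of order 3
H = PermutationGroup([Permutation([1, 0, 2])])   # order 2, one transposition

assert len(set(H.elements) & set(K.elements)) == 1  # only the identity
assert H.order() * K.order() == G.order()
assert {h * k for h in H.elements for k in K.elements} == set(G.elements)
```

The O(3) cases in the tables involve infinite ambient groups, so the analogous checks there are done by hand (or in GAP), but the defining conditions are the same.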
## F.2 Normalizer: D∞h

Unfortunately, most of the groups in this case have no complements. We provide a proof of this fact here. We begin by showing no nontrivial cyclic group has a complement in C∞ (which is SO(2) in Schönflies notation).

| Group | Complement |
|---------------------|--------------|
| Cs | C∞v, D∞ |
| Cn, S2n, Cnh ∀n ≥ 2 | None |

Table 6: Groups with normalizer D∞h and their complements.

Lemma F.1. *Let* Cn *be a cyclic group of order* n ≥ 2 *which is embedded in* C∞*. Note that it is a normal subgroup since all groups here are abelian. Then* Cn *does not have a complement in* C∞.

Proof 1. Suppose there was a complement H. By definition, for any g ∈ C∞ we have g = ch for unique h ∈ H and c ∈ Cn. Next, note that C∞ is a divisible group. In particular, for any g ∈ C∞, there exists some g′ such that g = (g′)^n. Let g′ be uniquely written as g′ = c′h′ for some h′ ∈ H and c′ ∈ Cn. Then we have g = (c′h′)^n = (c′)^n(h′)^n = (h′)^n, where we noted (c′)^n = e. So g is uniquely written as g = ch for c = e and h = (h′)^n. But this holds for all g, so every element of C∞ lies in H. Since Cn is nontrivial, this contradicts the fact that H ∩ Cn = {e}.

Proof 2. For those familiar with exact sequences, one can consider the following alternative proof. Consider the short exact sequence 1 → Cn → C∞ → C∞/Cn → 1. We can check that C∞/Cn ≅ C∞. Existence of a complement for Cn implies the above sequence is split, which by the splitting lemma (Hatcher, 2002) implies C∞ ≅ C∞ ⊕ Cn, a contradiction.

We can now extend the lemma above to show that no nontrivial cyclic group has a complement in D∞ (which is O(2) in Schönflies notation).

Lemma F.2. *Let* Cn *be a cyclic group of order* n ≥ 2 *which is embedded in* D∞ *such that the rotation axis aligns with the infinite rotation axis in* D∞*. Note that it is a normal subgroup of* D∞*, since conjugation in* D∞ *maps each rotation to itself or its inverse. Then* Cn *does not have a complement in* D∞.

Proof. Suppose there was a complement H. Consider H′ = H ∩ C∞. Clearly, we have H′ ∩ Cn = {e}.
Next, for any g ∈ C∞, there is a unique c ∈ Cn and h ∈ H such that g = ch. But h = c^{-1}g ∈ C∞, so h ∈ H′. So H′ is a complement of Cn in C∞. But this contradicts Lemma F.1.

Finally, we can prove that no subgroups except for Cs in this case have complements. Note here that Kh is just O(3) and K is just SO(3) in Schönflies notation.

Theorem F.3. *Consider any point group* A *which has normalizer* D∞h *in* Kh*. If* A *has a nontrivial pure rotation, then it has no complement in* D∞h.

Proof. First, note that Kh is the direct product of K and inversion Ci. Suppose A had a complement H. We can split A = Ae ⊔ Ai where Ae = A ∩ K is the subgroup of pure rotations and Ai is a coset consisting of elements with an inversion. Similarly, we can split H = He ⊔ Hi and D∞h = D∞ ⊔ Di into subgroups of pure rotations and cosets of elements with inversions. We claim the elements of G = AeHe form a group. Consider any a, a′ ∈ Ae and h, h′ ∈ He. Since A is a normal subgroup of D∞h = AH, we have hA = Ah, so ha′ = a′′h for some a′′ ∈ A. But since h, a′ ∈ K, we have a′′h ∈ K, so a′′ ∈ K. Hence a′′ ∈ Ae. Therefore,

(ah)(a′h′) = a(ha′)h′ = a(a′′h)h′ = (aa′′)(hh′) ∈ AeHe.

So G = AeHe is a group. Next, since H is a complement of A, clearly Ae ∩ He = {e}. Since G ≤ AH = D∞h, any g ∈ G is g = ah for unique a ∈ A and h ∈ H, which by construction of G satisfy a ∈ Ae and h ∈ He. So He is certainly a complement of Ae in G. Now, we claim either G = D∞ or G = C∞. For any g ∈ D∞, since H is a complement, there is a unique a ∈ A and h ∈ H such that g = ah. In particular, we must either have a ∈ Ae and h ∈ He, or a ∈ Ai and h ∈ Hi, to have the right inversion parity. One possibility is G = AeHe = D∞. For the other possibility, suppose D∞ − G is nonempty. Fix g ∈ D∞ − G and consider any g′ ∈ D∞ − G. Then g = ah and g′ = a′h′ where a, a′ ∈ Ai and h, h′ ∈ Hi. Now, note that a^{-1}a′ is the product of two elements with odd inversion parity, so a^{-1}a′ ∈ Ae.
Next, since A is a normal subgroup, we have h^{-1}A = Ah^{-1} and in particular, h^{-1}a^{-1}a′ = a′′h^{-1} for some a′′ ∈ A. But since h has odd inversion parity and a^{-1}a′ has even inversion parity, a′′ must have even inversion parity. Hence, we have

$$g^{-1}g^{\prime}=(ah)^{-1}(a^{\prime}h^{\prime})=h^{-1}a^{-1}a^{\prime}h^{\prime}=a^{\prime\prime}(h^{-1}h^{\prime}).$$

Since h, h′ both have odd parity, h′′ = h^{-1}h′ has even parity, so in fact g^{-1}g′ = a′′h′′ where a′′ ∈ Ae and h′′ ∈ He. Therefore, g^{-1}g′ ∈ G, so D∞ − G = gG is just a G-coset in D∞. We can similarly also show that D∞ − G = Gg. Now, we claim gG ∩ C∞ = ∅. Suppose not. Then there is some c ∈ gG ∩ C∞. Since C∞ is a divisible group, there is some c′ where c = (c′)^2. But c′ lies in either G or gG, and since G has index 2 in D∞, in both cases (c′)^2 ∈ G, a contradiction. Hence gG ∩ C∞ = ∅, so C∞ ⊆ G. To conclude, in this case we must have g ∈ D∞ − C∞, and it is clear gC∞ accounts for the remaining elements of D∞. So G = C∞ in this case.

From Table 4, we can see that Ae must in fact be a nontrivial cyclic group. But then the above implies that He is a complement of this cyclic group in either G = D∞ or G = C∞, which contradicts Lemma F.2 and Lemma F.1 respectively. So A cannot have a complement in D∞h.

## F.3 Normalizer: D(2n)h

All groups with this normalizer do have complements. We list each subgroup and its complement in Table 7. One presentation of D(2n)h is

$$\langle a,b,m\,|\,a^{2n},b^{2},m^{2},(ab)^{2},(am)^{2},(bm)^{2}\rangle.$$

![32_image_0.png](32_image_0.png)

Figure 13 depicts an example of a D10h object. The element a corresponds to a 2π/10 rotation about the blue vertical axis, b corresponds to a π rotation about the red axis, and m corresponds to a reflection across the mirror plane shown in orange.

Figure 13: Object with symmetry D10h. We can identify generator a as the 10-fold rotation about the blue axis, generator b as the 2-fold rotation about the red axis, and m as the reflection over the plane shown in orange.
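As a sanity check on this presentation, coset enumeration recovers the expected order |D(2n)h| = 4 · (2n). A sketch using sympy's FpGroup as a stand-in for the GAP computations mentioned above; the choice n = 5 (i.e. D10h, order 40) is just an example:

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

# Presentation <a, b, m | a^(2n), b^2, m^2, (ab)^2, (am)^2, (bm)^2> for n = 5.
n = 5
F, a, b, m = free_group("a, b, m")
rels = [a**(2 * n), b**2, m**2, (a * b)**2, (a * m)**2, (b * m)**2]
G = FpGroup(F, rels)

# Coset enumeration gives the group order, which should be 4 * (2n) = 40.
assert G.order() == 4 * (2 * n)
```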
| Group | Generators of group | Complement | Generators of a complement |
|---------|-----------------------|--------------|------------------------------|
| Cnv | a^2, m | C2v | am, bm |
| Dnd | a^2, abm, m | Cs | bm |
| Dnh | a^2, b, m | Cs | am |
| Dn | a^2, b | C2v | am, bm |

Table 7: Groups with normalizer D(2n)h and their complements.

## F.4 Normalizer: Ih

This case is simple: we either have I or Ih. Clearly we just need to add inversion to get a complement in the former case, and in the latter case we can just take the trivial group.

| Group | Complement |
|---------|--------------|
| I | Ci |
| Ih | 1 |

Table 8: Groups with normalizer Ih and their complements.

## F.5 Normalizer: Oh

All subgroups in this case have complements as well. One presentation of Oh is

$$\langle a,b,i\,|\,a^{4},b^{4},i^{2},(aba)^{2},(ab)^{3},iaia^{-1},ibib^{-1}\rangle.$$

Here, a and b are π/2 rotations about perpendicular axes and i is just inversion.

| Group | Generators of group | Complement | Generators of a complement |
|---------|-----------------------|--------------|------------------------------|
| D2 | a^2, b^2 | D3d | ab, ba^2, i |
| D2h | a^2, b^2, i | D3 | ab, ba^2 |
| T | ab, ba | S4 | a^2b, i |
| Td | ab, ba, ai | C2 | a^2b |
| Th | ab, ba, i | C2 | a^2b |
| O | a, b | Ci | i |
| Oh | a, b, i | 1 | ∅ |

Table 9: Groups with normalizer Oh and their complements.

## G Equivariant Full SBS Better Than Exact Partial SBS

We provide an outline of the construction of the counterexample. It is easiest to explain this by introducing the concept of a wreath product on groups.

Definition G.1 (Wreath product). Let H be a group with a group action on some set Ω. Let A be another group. We can define a direct product group indexed by Ω as the set of sequences (aω)ω∈Ω where aω ∈ A. The action of H on Ω induces a semidirect product by reindexing. In particular, for all h ∈ H and sequences in A^Ω we define h · (aω)ω∈Ω = (a_{h^{-1}ω})ω∈Ω. The resulting group is the unrestricted wreath product, denoted A WrΩ H.
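sympy has no wreath-product constructor, but for intuition a small example can be assembled by hand as a permutation group. Here C2 wrΩ C2 on two blocks of two points (our toy choice): the base group acts within the blocks and the top group reindexes them, giving order |A|^|Ω| · |H| = 2^2 · 2 = 8; the same count gives |C2 wrΩ D4| = 2^2 · 8 = 32 for the group used below.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# C2 wr C2 on the 4 points {0,1,2,3}: the base group C2 x C2 swaps within
# the blocks {0,1} and {2,3}; the top C2 swaps the two blocks (reindexing).
swap_block_0 = Permutation(0, 1, size=4)
swap_block_1 = Permutation(2, 3, size=4)
swap_blocks = Permutation([[0, 2], [1, 3]])

W = PermutationGroup([swap_block_0, swap_block_1, swap_blocks])
assert W.order() == 2**2 * 2  # |A|^|Omega| * |H| = 8
```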
If rather than the direct product group A^Ω we restrict ourselves to the direct sum, where all but finitely many elements in our sequence are the identity, then we get the restricted wreath product, denoted A wrΩ H. Note that the direct sum and direct product are the same for finite Ω, so the restricted and unrestricted wreath products also coincide in those cases.

Consider the space Ω = {1, −1} and an action of D4 on Ω corresponding to the A2 representation. Intuitively, if we think of D4 as the rotational symmetries of a square in the xy-plane, this corresponds to how the z coordinate transforms by flipping signs. Define a group G′ as G′ = C2 wrΩ D4. This is a group of order 32 and is SmallGroup(32,28) in the Small Groups library of GAP. One presentation of this group is

$$\langle a,b,c\,|\,a^{2},b^{4},(ab)^{4},c^{2},bcb^{-1}c,(ac)^{4}\rangle.\qquad(1)$$

In this presentation, we can interpret a, b as generators of D4 and c as the generator of one copy of C2. Consider the group G = G′ × G′ defined using the direct product. We can write the generators of G as a1, b1, c1, a2, b2, c2, corresponding to two copies of those in the presentation given in equation 1, where generators with different indices commute. Define S as the subgroup generated by a1, b1^2, c1, a2, b2^2, c2 and K as the subgroup generated by c1c2. We can check that NG(S) = NG(S, K) = G. It is also not hard to check that a1b1, a2b2 generate a complement for S in G. Hence, by Theorem 3.3, we know that an ideal G-equivariant SBS is possible for S. Hence, we know the size of the equivariant full SBS is |S| = 256.

Next, suppose we wanted a G-equivariant exact partial SBS. We can always generate this partial SBS by taking the orbit of some element p under the action of NG(S, K) = G, where StabS(p) = K. We claim we must also have StabG(p) = K. Suppose not; then there must be some g ∈ StabG(p) such that g ∉ S.
However, we can check through casework or brute force that for all such g, either gKg^{-1} ≠ K or g^2 ∈ S − K. But that would mean that there are elements not in K which stabilize p, so StabS(p) ≠ K, a contradiction. Hence, no such g can exist, so StabG(p) = K. Finally, by the orbit-stabilizer theorem, this means the set must have size

$$|P|=|\mathrm{Orb}_{G}(p)|=|G|/|\mathrm{Stab}_{G}(p)|=(8^{2}\cdot 2^{4})/2=512.$$

This is larger than the equivariant full SBS. In our supplementary material, we provide a script in GAP which verifies our claims above. In particular, it performs a brute-force check that StabS(p) = K implies StabG(p) = K for the groups G, S, K described above.

## H Experiments

We provide some additional details on our experiments here. We also provide our code for running these experiments in the supplementary material.

## H.1 Triangular Prism

## H.1.1 Obtaining An Ideal SBS

![35_image_0.png](35_image_0.png)

(a) (b)

Figure 14: (a) Triangular prism with D3 symmetry and patterned cylinder with D6h symmetry. The generators are a, b, m where a is a 2π/6 rotation about the blue axis, b is a π rotation about the red axis, and m is a reflection across the orange plane. (b) An ideal symmetry breaking set for the triangular prism. A complement of D3 in D6h is generated by the mirror planes shown here in yellow and purple. The vector in red is a symmetry breaking object with this complement as stabilizer. The orbit of this vector under the normalizer generates the other vectors shown in black.

Table 7 tells us D3 is generated by a^2, b and that a complement H is generated by am, bm. By Theorem 3.3, we know that if we can pick some object v with stabilizer StabD6h(v) = H, then the orbit of v under D3 gives an ideal equivariant SBS. In this case, one such v that works is a vector parallel to the triangular faces of the prism and to one of the sides of the prism. This is shown in Figure 14b. Note that reflection across the yellow plane corresponds to am and reflection across the purple plane corresponds to bm.
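The counting behind these constructions is plain orbit-stabilizer bookkeeping; a finite sanity check with sympy (the hexagon vertices under D6 are an illustrative stand-in, not the prism setting itself):

```python
from sympy.combinatorics.named_groups import DihedralGroup

# D6 (order 12) acting on the 6 vertices of a hexagon: the orbit of a vertex
# has size 6 and its stabilizer (identity plus one reflection) has order 2,
# so |Orb(0)| * |Stab(0)| = |G|, as in the SBS size computations above.
G = DihedralGroup(6)
orbit = G.orbit(0)
stab = G.stabilizer(0)

assert len(orbit) == 6
assert stab.order() == 2
assert len(orbit) * stab.order() == G.order()
```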
It is clear the arrow in red is stabilized by this complement. One can further check that it shares no symmetries with the triangular prism. The other 5 arrows in black are the other symmetry breaking objects we obtain by taking the orbit of the red arrow under the action of D6h.

## H.1.2 Nonequivariant SBS

In addition to using an equivariant SBS as presented in the main paper, we also tried training with the non-equivariant SBS described in Section 3.2. Recall that the symmetry breaking objects here are a vector pointing to one of the vertices of the triangle projected in the xy-plane and a vector pointing up or down corresponding to which triangle we pick from. We fix one pair of vectors as our symmetry breaking object and train our equivariant model to match it with a vertex. As shown in Figure 15a, our model is able to complete this task. However, rotating the prism by 180° and feeding this rotated prism along with our symmetry breaking object to the model, we find that it outputs a vector which does not point to any vertex. This is shown in Figure 15b. Contrast this with the equivariant case in Figures 15c and 15d, where our model still produces a vector which points to a vertex of the prism.

![36_image_0.png](36_image_0.png)

(a) (b)

![36_image_1.png](36_image_1.png)

(c) (d)

Figure 15: (a) Output (red) generated by our model given a symmetry breaking object (blue) chosen from a non-equivariant SBS. (b) Output (red) and symmetry breaking object (blue) when our prism is rotated by 180°. (c) Output (red) generated by our model given a symmetry breaking object (blue) chosen from an equivariant SBS. (d) Output (red) and symmetry breaking object (blue) when our prism is rotated by 180°.

## H.1.3 Nonideal SBS

We can modify the nonequivariant SBS into an equivariant one by adding the additional objects needed for closure under the normalizer.

![37_image_0.png](37_image_0.png)

![37_image_1.png](37_image_1.png)
Doing so, we see there are 2 S-orbits in this nonideal equivariant SBS, shown in Figure 16. This corresponds to a degeneracy of 2 as defined in Section 3.4.

(a) S-orbit 1 (b) S-orbit 2

Figure 16: (a) S-orbit corresponding to the original nonequivariant SBS. (b) Additional S-orbit needed to make the set closed under the normalizer.

From the discussion in Section 3.4, we expect this SBS to be less efficient than an ideal one. We first train using only a symmetry breaking object from one of the S-orbits. We see that when given objects from that orbit, the network correctly outputs vectors pointing to vertices of the prism, but fails when given objects from the other S-orbit. However, if we use objects from both orbits in training, the output vectors point to vertices when given objects from either S-orbit. Hence, we need to use at minimum 2 symmetry breaking objects during training to guarantee correct behavior for all objects in the SBS, compared to only needing one example to train for an ideal SBS.

Table 10: Results of training using a nonideal equivariant SBS. If we only see one of the S-orbits in training, the network fails on the unseen orbit. If we see both S-orbits then the network behaves correctly.

| Seen in training | | Outputs | |
|-----------|-----------|-----------|-----------|
| S-orbit 1 | S-orbit 2 | S-orbit 1 | S-orbit 2 |
| Yes | No | Correct | Fails |
| No | Yes | Fails | Correct |
| Yes | Yes | Correct | Correct |
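The rotate-and-recheck diagnostic used throughout this section amounts to a unit test for equivariance, f(Rx, Rv) = R f(x, v). A minimal numpy sketch; the two toy maps below are ours for illustration, not the trained network:

```python
import numpy as np

# 180-degree rotation about the z-axis, as in the experiments above.
R = np.diag([-1.0, -1.0, 1.0])

def equivariant_map(x, v):
    # Toy equivariant model: the output transforms with its inputs.
    return x + v

def broken_map(x, v):
    # Toy non-equivariant failure mode: a fixed output direction.
    return np.array([1.0, 0.0, 0.0])

x = np.array([0.3, 0.5, 0.0])   # a point of the input geometry
v = np.array([1.0, 0.0, 0.0])   # a symmetry breaking object

# Equivariance: f(Rx, Rv) == R f(x, v); the broken map fails this test.
assert np.allclose(equivariant_map(R @ x, R @ v), R @ equivariant_map(x, v))
assert not np.allclose(broken_map(R @ x, R @ v), R @ broken_map(x, v))
```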
Note that in addition to the D2 symmetry, this object is also symmetric under reflections across the purple and yellow planes. The octagon has D8 symmetry and we know from Table 4 that its normalizer is D16h. We also know from Appendix F.3 that a presentation of this normalizer is $$\langle a,b,m|a^{16},b^{2},m^{2},(a b)^{2},(a m)^{2},(b m)^{2}\rangle.$$ Note the symmetry of the octagon S is generated by a 2, b and that of a rectangle K by a 8, b. Next, we can check by brute force that NG(*S, K*) is generated by a 2*, b, m*. We then need to compute NS(K) and NNG(S,K)(K). We note from Table 4 that NG(K) = D4h and in particular, it is not hard to see the specific copy of D4h is generated by a 4*, b, m*. We now just have NS(K) = S ∩ NG(K). Looking at the generators, we can check that m is not present in S so NS(K) = D4 and is generated by a 4, b. For NNG(S,K), from the generators of NG(*S, K*) we can see that NG(K) is a subgroup so NNG(S,K)(K) = NG(K) = D4h and is generated by a 4*, b, m*. Finally, from Theorem 4.5, we need to look at the quotient groups NS(K)/K and NNG(S,K)(K)/K. For the latter, we can set the cosets $$X=\{a^{4},a^{12},a^{4}b,a^{12}b\}\quad\quad\quad\quad Y=\{m,a^{8}m,b m,a^{8}b m\}$$ which we can check generate the quotient group. In particular we have relations X2 = Y 2 = (XY ) 2 = 1 so a presentation is given by just $$\langle X,Y|X^{2},Y^{2},(X Y)^{2}\rangle.$$ For NS(K)/K which is a subgroup of this, we can see that it is just generated by X. But it is easy to see that the quotient group generated by Y forms a complement. Following the argument in Theorem 4.5, we see that we need a symmetry breaking object with stabilizer generated by group elements in the cosets of the complement of NS(K)/K. Since here the complement is generated by coset Y , we need the stabilizer to be generated by m, a8*m, bm, a*8bm. It is not hard to check we can simplify the list of generators to a 8*, b, m*. We note that an l = 2 irrep with even parity works in this case. 
## H.3 BaTiO3 Experiment

## H.3.1 Atom Matching Algorithm

We would like to predict atom distortions. However, our data consists of atom coordinates in the initial and target structures, not necessarily in the same order. Hence, we need an algorithm to match similar atoms together so we know how much they are distorted. This process is complicated by the fact that we have periodic boundary conditions with periodicity determined by the lattice vectors, and that our atoms may be translated within the lattice. We assume our structures are given in the same rotational orientation. For our algorithm, we use the insight that a distorted atom should still have similar vectors to its neighboring atoms. Hence, we can compute a signature for an atom a by taking the differences of the positions of that atom and all other atoms in the lattice. We call the set of position differences for atom a from all other atoms the signature σa of a. Note that in our implementation, we also separate out the atom types in addition to the position differences. Next, we need a way to compare signatures. Suppose we had atom a from the initial structure and atom a′ from the target structure. Certainly, if they are different atom types, we assign a cost of ∞ to this pairing. Otherwise, we look at their signatures. However, we actually need to optimally pair the other atoms to do so. For a pair of atoms b and b′ from the initial and target structures, we give a cost of ∞ if they are different atom types, and compare σa[b] with σa′[b′] otherwise. If the atoms are similar, this difference should be small, but it may also be shifted by lattice vectors from the smallest it could be. So we just look at all small lattice shifts L and set the smallest ∥σa[b] − σa′[b′] + L∥2 as the cost of pairing b, b′. This gives a cost matrix Mb,b′. With all the costs, we can run the matching algorithm in Crouse (2016) to match the atoms. The cost of this assignment is the difference in the signatures of a, a′.
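Both pairing steps above are linear assignment problems; scipy's linear_sum_assignment implements the Crouse (2016) algorithm cited here. A toy sketch with made-up coordinates, omitting atom types and lattice shifts (which the full pipeline encodes as ∞ entries and a minimum over shifts L):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Made-up fractional coordinates for three atoms in each structure.
initial = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0], [0.5, 0.0, 0.5]])
target = np.array([[0.52, 0.49, 0.0], [0.01, -0.02, 0.0], [0.5, 0.0, 0.48]])

# Cost matrix M[b, b'] = distance between atom b and candidate atom b'.
cost = np.linalg.norm(initial[:, None, :] - target[None, :, :], axis=-1)

# Optimal matching via the modified Jonker-Volgenant algorithm (Crouse, 2016).
rows, cols = linear_sum_assignment(cost)
assert cols.tolist() == [1, 0, 2]  # atom 0 <-> target 1, 1 <-> 0, 2 <-> 2
```

Running the same routine once on the pairwise costs and once on the resulting signature-difference matrix C reproduces the two-stage matching described above.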
Finally, we can match the atoms using the comparison of signatures. We create a cost matrix where Ca,a′ is the difference in the signatures of a, a′ from the initial and target structures. We then run another iteration of the algorithm from Crouse (2016) to find our matching. Because we only ever use differences of atom positions, this matching algorithm is independent of translations.

## H.3.2 Translation Invariant Loss

In the previous experiments, we could simply use an MSE loss on the vector differences. However, for crystals we wanted a loss which is invariant under translation. We realize that our matching algorithm in fact produces such a loss. By storing the matching information, we can effectively compute the same loss without having to run the matching algorithm every time. This is what we use to evaluate our model.

## H.3.3 Symmetrically Related Outputs

![41_image_0.png](41_image_0.png)

Figure 18: Distortions of a highly symmetric crystal structure of BaTiO3 when provided with each of the possible symmetry breaking objects in our ideal equivariant partial SBS.

Table 11: Values of various quantities which help distinguish the high symmetry and low symmetry structures. Our models here try to distort the high symmetry structure to the low symmetry one.
| Structure | SB object | Bond length average | Bond length variance | Ti-O1-Ti | Ti-O2-Ti | Ti-O3-Ti |
|---------------|-------------|-----------------------|------------------------|------------|------------|------------|
| High symmetry | | 2 | 0 | 180◦ | 180◦ | 180◦ |
| Low symmetry | | 2.003417 | 0.01392 | 180◦ | 171.80◦ | 171.80◦ |
| Model | None | 2 | 0 | 180◦ | 180◦ | 180◦ |
| Model | (1, 0, 0) | 2.003417 | 0.01392 | 180◦ | 171.80◦ | 171.80◦ |
| Model | (−1, 0, 0) | 2.003417 | 0.01392 | 180◦ | 171.80◦ | 171.80◦ |
| Model | (0, 1, 0) | 2.003417 | 0.01392 | 171.80◦ | 180◦ | 171.80◦ |
| Model | (0, −1, 0) | 2.003417 | 0.01392 | 171.80◦ | 180◦ | 171.80◦ |
| Model | (0, 0, 1) | 2.003417 | 0.01392 | 171.80◦ | 171.80◦ | 180◦ |
| Model | (0, 0, −1) | 2.003417 | 0.01392 | 171.80◦ | 171.80◦ | 180◦ |
# Asymptotic Analysis Of Conditioned Stochastic Gradient Descent

Rémi Leluc remi.leluc@gmail.com
CMAP, École Polytechnique
Institut Polytechnique de Paris, Palaiseau (France)

François Portier *francois.portier@gmail.com*
CREST, ENSAI
École Nationale de la Statistique et de l'Analyse de l'Information, Rennes (France)

Reviewed on OpenReview: *https://openreview.net/forum?id=U4XgzRjfF1*

## Abstract

In this paper, we investigate a general class of stochastic gradient descent (SGD) algorithms, called *conditioned* SGD, based on a preconditioning of the gradient direction. Using a discrete-time approach with martingale tools, we establish under mild assumptions the weak convergence of the rescaled sequence of iterates for a broad class of conditioning matrices including stochastic first-order and second-order methods. Almost sure convergence results, which may be of independent interest, are also presented. Interestingly, the asymptotic normality result consists in a stochastic equicontinuity property, so when the conditioning matrix is an estimate of the inverse Hessian, the algorithm is asymptotically optimal.

## 1 Introduction

Consider some unconstrained optimization problem of the following form:

$$\min_{\theta\in\mathbb{R}^{d}}\left\{F(\theta)=\mathbb{E}_{\xi}[f(\theta,\xi)]\right\},$$

where f is a loss function and ξ is a random variable. This key methodological problem, known under the name of *stochastic programming* (Shapiro et al., 2014), includes many flagship machine learning applications such as *empirical risk minimization* (Bottou et al., 2018), *adaptive importance sampling* (Delyon & Portier, 2018) and *reinforcement learning* (Sutton & Barto, 2018). When F is differentiable, a common approach is to rely on first-order methods. However, in many scenarios and particularly in large-scale learning, the gradient of F may be hard to evaluate or even intractable.
Instead, a random unbiased estimate of the gradient is available at a cheap computing cost, and the state-of-the-art algorithm, stochastic gradient descent (SGD), just moves along this estimate at each iteration. It is an iterative algorithm, simple and computationally fast, but its convergence towards the optimum is generally slow. Conditioned SGD, which consists in multiplying the gradient estimate by some conditioning matrix at each iteration, can lead to better performance, as shown in several recent studies ranging from natural gradient (Amari, 1998; Kakade, 2002) and stochastic second-order methods with quasi-Newton (Byrd et al., 2016) and (L)-BFGS methods (Liu & Nocedal, 1989) to diagonal scaling methods such as AdaGrad (Duchi et al., 2011), RMSProp (Tieleman et al., 2012), Adam (Kingma & Ba, 2014), AMSGrad (Reddi et al., 2018) and adaptive coordinate sampling (Wangni et al., 2018; Leluc & Portier, 2022). These conditioning techniques are based on different strategies: diagonal scaling relies on feature normalization, stochastic second-order methods are concerned with minimal variance, and adaptive coordinate sampling techniques aim at taking advantage of particular data structure. Furthermore, these methods proved to be the current state-of-the-art for training machine learning models (Zhang, 2004; LeCun et al., 2012) and are implemented in widely used programming tools (Pedregosa et al., 2011; Abadi et al., 2016). Conditioned SGD generalizes *standard SGD* by adding a conditioning step to refine the descent direction. Starting from θ0 ∈ R^d, the algorithm of interest is defined by the following iteration

$$\theta_{k+1}=\theta_{k}-\gamma_{k+1}C_{k}\,g(\theta_{k},\xi_{k+1}),\qquad k\geq 0,$$
An important question, which is still open to the best of our knowledge, is to characterize the asymptotic variance of such algorithms for non-convex objective F and general estimation procedure for the conditioning matrix Ck. Related work. Seminal works around standard SGD (Ck = Id) were initiated by Robbins & Monro (1951) and Kiefer et al. (1952). Since then, a large literature known as *stochastic approximation*, has developed. The almost sure convergence is studied in Robbins & Siegmund (1971) and Bertsekas & Tsitsiklis (2000); rates of convergence are investigated in Kushner & Huang (1979) and Pelletier (1998a); non-asymptotic bounds are given in Moulines & Bach (2011). The asymptotic normality can be obtained using two different approaches: a diffusion-based method is employed in Pelletier (1998b) and Benaïm (1999) whereas martingale tools are used in Sacks (1958) and Kushner & Clark (1978). We refer to Nevelson & Khas'minski˘ı (1976); Delyon (1996); Benveniste et al. (2012); Duflo (2013) for general textbooks on *stochastic approximation*. The aforementioned results do not apply directly to *conditioned* SGD because of the presence of the matrix sequence (Ck)k≥0 involving an additional source of randomness in the algorithm. Seminal papers dealing with the weak convergence of *conditioned* SGD are Venter (1967) and Fabian (1968). Within a restrictive framework (univariate case d = 1 and strong assumptions on the function F), their results are encouraging because the limiting variance of the procedure is shown to be smaller than the limiting variance of standard SGD. Venter's and Fabian's results have then been extended to more general situations (Fabian, 1973; Nevelson & Khas'minski˘ı, 1976; Wei, 1987). 
In Wei (1987), the framework is still restrictive not only because the random errors are assumed to be independent and identically distributed but also because the objective F must satisfy their assumption (4.10) which hardly extends to objectives other than quadratic. More recently, Bercu et al. (2020) have obtained the asymptotic normality as well as the efficiency of certain conditioned SGD estimates in the particular case of *logistic regression*. The previous approach has been generalized not long ago in Boyer & Godichon-Baggioni (2022) where the use of the Woodbury matrix identity is promoted to compute the Hessian inverse in the online setting. Several theoretical results, including the weak convergence of *conditioned* SGD, are obtained for convex objective functions. An alternative to *conditioning*, called *averaging*, developed by Polyak (1990) and Polyak & Juditsky (1992), allows to recover the same asymptotic variance as *conditioned SGD*. When dealing with convex objectives, the theory behind this averaging technique is a well-studied topic (Moulines & Bach, 2011; Gadat & Panloup, 2017; Dieuleveut et al., 2020; Zhu et al., 2021). However, it is inevitably associated with a large bias caused by poor initialization and requires some parameter tuning through the *burn-in* phase. Contributions. The main result of this paper deals with the weak convergence of the rescaled sequence of iterates. Interestingly, our asymptotic normality result consists of the following continuity property: whenever the matrix sequence (Ck)k≥0 converges to a matrix C and the iterates (θk)k≥0 converges to a minimizer θ ?, the algorithm behaves in the same way as an oracle version in which C would be used instead of Ck. We stress that contrary to Boyer & Godichon-Baggioni (2022), no convexity assumption is needed on the objective function and no rate of convergence is required on the sequence (Ck)k≥0. 
This is important because, in most studies, deriving a convergence rate on (Ck)k≥0 requires a specific convergence rate on the iterates (θk)k≥0 which, in general, is unknown at this stage of the analysis. From a more practical point of view, our main result claims that the impact of the approximation error resulting from the estimation of the conditioning matrices plays a secondary role. This finding promotes the use of simple and cheap sequential algorithms to estimate the conditioning matrix, which encompasses a broad spectrum of conditioned SGD methods, highlighting the applicability and generalizability of the obtained result. Another result of independent interest, dealing with the almost sure convergence of the gradients, is also provided. In addition, for illustration purposes, we apply our results to the popular variational inference problem, where one seeks to approximate a target density out of a parametric family by solving an optimization problem. In this framework, by optimizing the forward Kullback-Leibler divergence (Jerfel et al., 2021) and building stochastic gradients relying on importance sampling schemes (Delyon & Portier, 2018), we show that the approach has some efficiency properties. For the sake of completeness, we present in the appendix practical ways to compute the *conditioning* matrix Ck and show that the resulting procedure satisfies the high-level conditions of our main theorem. This yields a feasible algorithm achieving minimum variance. To obtain these results, instead of approximating the rescaled sequence of iterates by a continuous diffusion (as for instance in Pelletier (1998b)), we rely on a discrete-time approach where the recursion scheme is directly analyzed (as for instance in Delyon (1996)). More precisely, the sequence of iterates is studied with the help of an auxiliary linear algorithm whose limiting distribution can be deduced from the central limit theorem for martingale increments (Hall & Heyde, 1980).
The limiting variance is derived from a discrete-time matrix-valued dynamical system. It corresponds to the solution of a Lyapunov equation involving the matrix $C$, which allows a special choice of $C$ that guarantees an optimal variance. Finally, a particular recursion is identified to examine the remaining part; by studying it on a suitable event, this part is shown to be negligible.

Outline. Section 2 introduces the framework of standard SGD with asymptotic results. Section 3 is dedicated to *conditioned* SGD: it first presents popular optimization methods that fall within the considered framework and then presents our main results, namely the weak convergence and asymptotic optimality. Section 4 gathers practical implications of the main results for machine learning models in the framework of variational inference, and Section 5 concludes the paper with a discussion of avenues for further research. Technical proofs, additional propositions, and numerical experiments are available in the appendix.

## 2 Mathematical Background

In this section, the mathematical background of stochastic gradient descent (SGD) methods is presented and illustrated with the help of some examples. Then, to motivate the use of *conditioning* matrices, we present a known result from Pelletier (1998b) about the weak convergence of SGD.

## 2.1 Problem Setup

Consider the problem of finding a minimizer $\theta^\star \in \mathbb{R}^d$ of a function $F : \mathbb{R}^d \to \mathbb{R}$, that is,

$$\theta^{\star}\in{\underset{\theta\in\mathbb{R}^{d}}{\operatorname{arg\,min}}}\,F(\theta).$$

In many scenarios, and particularly in large-scale learning, the gradient of $F$ cannot be fully computed and only a stochastic unbiased version of it is available. The SGD algorithm moves the iterate along this direction. To increase efficiency, the random generators used to derive the unbiased gradients may evolve during the algorithm, *e.g.*, using the past iterations. To analyze such algorithms, we consider the following probabilistic setting.
Definition 1. *A stochastic algorithm is a sequence* $(\theta_k)_{k\geq 0}$ *of random variables defined on a probability space* $(\Omega, \mathcal{F}, \mathbb{P})$ *and valued in* $\mathbb{R}^d$*. Define* $(\mathcal{F}_k)_{k\geq 0}$ *as the natural σ-field associated to the stochastic algorithm* $(\theta_k)_{k\geq 0}$*, i.e.,* $\mathcal{F}_k = \sigma(\theta_0, \theta_1, \ldots, \theta_k)$, $k \geq 0$*. A policy is a sequence of random probability measures* $(P_k)_{k\geq 0}$*, each defined on a measurable space* $(S, \mathcal{S})$*, that are adapted to* $\mathcal{F}_k$.

Given a *policy* $(P_k)_{k\geq 0}$ and a *learning rate* sequence $(\gamma_k)_{k\geq 1}$ of positive numbers, the SGD algorithm (Robbins & Monro, 1951) is defined by the update rule

$$\theta_{k+1}=\theta_{k}-\gamma_{k+1}g(\theta_{k},\xi_{k+1})\quad\mathrm{with}\quad\xi_{k+1}\sim P_{k},\tag{1}$$

where $g : \mathbb{R}^d \times S \to \mathbb{R}^d$ is called the gradient generator. The choice of the *policy* $(P_k)_{k\geq 0}$ in SGD is important as it can impact the convergence speed, generalization performance, and efficiency of the optimization algorithm. While most classical approaches rely on uniform sampling and mini-batch sampling, it may be more efficient to use advanced sampling strategies such as stratified sampling or importance sampling (see Example 1 for details). The policy $(P_k)_{k\geq 0}$ is used at each iteration to produce random gradients through the function $g$. Those gradients are assumed to be unbiased.

Assumption 1 (Unbiased gradient). *The gradient generator* $g : \mathbb{R}^d \times S \to \mathbb{R}^d$ *is such that for all* $\theta \in \mathbb{R}^d$, $g(\theta, \cdot)$ *is measurable, and we have:* $\forall k \geq 0$, $\mathbb{E}\left[g(\theta_k, \xi_{k+1})|\mathcal{F}_k\right] = \nabla F(\theta_k)$.

We emphasize three important examples covered by the developed approach. In each case, explicit ways to generate the stochastic gradient are provided.

Example 1. *(Empirical Risk Minimization)* Given some observed data $z_1, \ldots, z_n \in \mathbb{R}^p$ and a differentiable loss function $\ell : \mathbb{R}^d \times \mathbb{R}^p \to \mathbb{R}$, the objective function $F$ approximates the true expected risk $\mathbb{E}_z[\ell(\theta, z)]$ using its empirical counterpart $F(\theta) = n^{-1}\sum_{i=1}^n \ell(\theta, z_i)$.
Classically, the gradient estimates at $\theta_k$ are given by the policy

$$g(\theta_{k},\xi_{k+1})=\nabla_{\theta}\ell(\theta_{k},\xi_{k+1})\quad\mathrm{~with~}\quad\xi_{k+1}\sim\sum_{i=1}^{n}\delta_{z_{i}}/n.$$

Another one, more subtle, referred to as mini-batching (Gower et al., 2019), consists in uniformly generating a set of $n_k$ samples $(z_1, \ldots, z_{n_k})$ and computing the gradient as the average $n_k^{-1}\sum_{j=1}^{n_k} \nabla_\theta \ell(\theta_k, z_j)$. Note that, interestingly, we allow changes of the mini-batch size throughout the algorithm. Our framework also includes adaptive non-uniform sampling (Papa et al., 2015) and survey sampling (Clémençon et al., 2019), which use $P_k = \sum_{i=1}^n w_i^{(k)} \delta_{z_i}$ with $\mathcal{F}_k$-adapted weights satisfying $\sum_{i=1}^n w_i^{(k)} = 1$ for each $k \geq 0$.

Example 2. *(Adaptive importance sampling for variational inference)* Given a target density function $f$, which for instance might result from the posterior distribution of some observed data, and a parametric family of samplers $\{q_\theta : \theta \in \Theta\}$, the aim is to find a good approximation of $f$ out of the family of samplers. A standard choice (Jerfel et al., 2021) for the objective function is the so-called *forward Kullback-Leibler divergence* given by $F(\theta) = -\int \log(q_\theta(y)/f(y))f(y)\,dy$. Then, in the spirit of adaptive importance sampling schemes (Delyon & Portier, 2018), gradient estimates are given by

$$g(\theta_{k},\xi_{k+1})=-\nabla_{\theta}\log(q_{\theta_{k}}(\xi_{k+1}))\frac{f(\xi_{k+1})}{q_{\theta_{k}}(\xi_{k+1})},\quad\xi_{k+1}\sim q_{\theta_{k}}.$$

Other losses such as the α-divergence (Daudel et al., 2021) or the generalized method of moments (Delyon & Portier, 2018) may also be considered depending on the problem of interest. Some applications of the *conditioned* SGD algorithm to this particular framework are considered in more detail in Section 4.

Example 3. *(Policy-gradient methods)* In reinforcement learning (Sutton & Barto, 2018), the goal of the agent is to find the best action-selection policy to maximize the expected reward.
Policy-gradient methods (Baxter & Bartlett, 2001; Williams, 1992) use a parameterized policy $\{\pi_\theta : \theta \in \Theta\}$ to optimize an expected reward function $F$ given by $F(\theta) = \mathbb{E}_{\xi\sim\pi_\theta}[R(\xi)]$, where $\xi$ is a trajectory including nature states and selected actions. Using the policy gradient theorem, one has $\nabla F(\theta) = \mathbb{E}_{\xi\sim\pi_\theta}[R(\xi)\nabla_\theta \log \pi_\theta(\xi)]$, leading to the REINFORCE algorithm (Williams, 1992) given by

$$g(\theta_{k},\xi_{k+1})=R(\xi_{k+1})\nabla_{\theta}\log\pi_{\theta_{k}}(\xi_{k+1}),\quad\xi_{k+1}\sim\pi_{\theta_{k}}.$$

## 2.2 Weak Convergence Of SGD

This section is related to the weak convergence property of the normalized sequence of iterates $(\theta_k - \theta^\star)/\sqrt{\gamma_k}$. The working assumptions include the almost sure convergence of the sequence of iterates $(\theta_k)_{k\geq 0}$ towards a stationary point $\theta^\star$. Note that, given Assumptions 1 and 2, there exist many criteria on the objective function that give such almost sure convergence. For these results, we refer to Bertsekas & Tsitsiklis (2000); Benveniste et al. (2012); Duflo (2013). In addition to this high-level assumption of almost sure convergence, we require the following classical assumptions. Let $\mathcal{S}_d^{++}(\mathbb{R})$ denote the space of real symmetric positive definite matrices and define, for all $k \geq 0$,

$$w_{k+1}=\nabla F(\theta_{k})-g(\theta_{k},\xi_{k+1}),\quad\Gamma_{k}=\mathbb{E}\left[w_{k+1}w_{k+1}^{\top}|\mathcal{F}_{k}\right].$$

The learning rate sequence $(\gamma_k)_{k\geq 1}$ should decay to eventually anneal the noise, but not too fast, so that the iterates $(\theta_k)_{k\geq 0}$ can reach interesting places in finite time.

Assumption 2 (Learning rates). *The step-size sequence is* $\gamma_k = \alpha k^{-\beta}$ *with* $\beta \in (1/2, 1]$.

This classical form of the step-size ensures theoretical convergence guarantees through the Robbins-Monro conditions: $\sum_k \gamma_k = \infty$, $\sum_k \gamma_k^2 < \infty$. However, note that in practice, the choice of learning rate is often determined through experimentation and fine-tuning to achieve the best performance on the given task.

Assumption 3 (Hessian).
*The Hessian matrix at the stationary point is positive definite, i.e.,* $H = \nabla^2 F(\theta^\star) \in \mathcal{S}_d^{++}(\mathbb{R})$*, and the mapping* $\theta \mapsto \nabla^2 F(\theta)$ *is continuous at* $\theta^\star$.

The positive definiteness of the Hessian matrix provides stability and robustness guarantees in the optimization process. It ensures that small perturbations or noise in the objective function or the training data do not significantly affect the convergence behavior. The positive curvature helps confine the optimization trajectory near the minimum and prevents it from getting trapped in flat regions or saddle points. The noise sequence $(w_k)_{k\geq 1}$ defines a sequence of conditional covariance matrices $(\Gamma_k)_{k\geq 1}$ that is assumed to converge, so that one can identify the limiting covariance $\Gamma = \mathbb{E}[g(\theta^\star, \xi)g(\theta^\star, \xi)^\top]$.

Assumption 4 (Covariance matrix). *There exists* $\Gamma \in \mathcal{S}_d^{++}(\mathbb{R})$ *such that* $\Gamma_k \to \Gamma$ *a.s. as* $k \to +\infty$.

Finally, in order to derive a central limit theorem for the iterates of the algorithm, there is an extra need for stability, which is synonymous with a uniform bound on the noise around the minimizer.

Assumption 5 (Lyapunov bound). *There exist* $\delta, \varepsilon > 0$ *such that:*

$$\sup_{k\geq 0}\,\mathbb{E}[\|w_{k+1}\|_{2}^{2+\delta}|\mathcal{F}_{k}]\,\mathbb{1}_{\{\|\theta_{k}-\theta^{\star}\|\leq\varepsilon\}}<\infty\quad a.s.$$

Note that all these assumptions are stated in the spirit of Pelletier (1998b), making them mild and general. In particular, Assumptions 4 and 5 are similar to (A1.2) in Pelletier (1998b). More precisely, Assumption 4 is needed to identify the limiting distribution, while Assumption 5 is a stability condition often referred to as the Lyapunov condition. This last condition is technical but not that strong, as it is similar to Lindeberg's condition, which is necessary (Hall & Heyde, 1980) for tightness. The following result can either be derived from (Pelletier, 1998b, Theorem 1) or obtained as a direct corollary of our main result, Theorem 2, given in Section 3.2.

Theorem 1 (Weak convergence of SGD).
*Let* $(\theta_k)_{k\geq 0}$ *be obtained by the SGD rule* (1)*. Suppose that Assumptions 1, 2, 3, 4, 5 are fulfilled and that* $\theta_k \to \theta^\star$ *almost surely. If, moreover,* $(H - \zeta I)$ *is positive definite with* $\zeta = \mathbb{1}_{\{\beta=1\}}/2\alpha$*, it holds that*

$$\frac{1}{\sqrt{\gamma_{k}}}(\theta_{k}-\theta^{\star})\leadsto{\mathcal{N}}(0,\Sigma),\qquad as\ k\to\infty,$$

*where* $\Sigma$ *satisfies the Lyapunov equation* $(H - \zeta I_d)\Sigma + \Sigma(H - \zeta I_d)^\top = \Gamma$.

Several remarks are in order. First, since $\Gamma$ and $(H - \zeta I)$ are positive definite matrices, there exists a unique solution $\Sigma$ to the Lyapunov equation $(H - \zeta I_d)\Sigma + \Sigma(H - \zeta I_d)^\top = \Gamma$, given by $\Sigma = \int_0^{+\infty} \exp[-t(H - \zeta I_d)]\,\Gamma\,\exp[-t(H - \zeta I_d)^\top]\,dt$. Second, the previous result can be expressed as $k^{\beta/2}(\theta_k - \theta^\star) \leadsto \mathcal{N}(0, \alpha\Sigma)$. Hence, the fastest rate of convergence is obtained when $\beta = 1$, for which we recover the classical $1/\sqrt{k}$-rate of a Monte Carlo estimate. In this case, the coefficient $\alpha$ should be chosen large enough to ensure convergence through the condition $H - I_d/(2\alpha) \succ 0$, but also such that the covariance matrix $\alpha\Sigma$ is small. The choice of $\alpha$ is discussed in the next section, where the scalar gain is replaced with a matrix gain.

## 3 The Asymptotics Of Conditioned Stochastic Gradient Descent

This section first presents practical optimization schemes that fall within the framework of *conditioned* SGD. It then contains our main results, namely the weak convergence and asymptotic optimality. Another result of independent interest, dealing with the almost sure convergence of the gradients and the iterates, is also provided.

## 3.1 Framework And Examples

We introduce the general framework of *conditioned* SGD as an extension of the standard SGD presented in Section 2.
It is defined by the following update rule, for $k \geq 0$,

$$\theta_{k+1}=\theta_{k}-\gamma_{k+1}C_{k}g(\theta_{k},\xi_{k+1}),\tag{2}$$

where the conditioning matrix $C_k \in \mathcal{S}_d^{++}(\mathbb{R})$ is an $\mathcal{F}_k$-measurable real symmetric positive definite matrix, so that the search direction always points to a descent direction. In convex optimization, the inverse of the Hessian is a popular choice, but (1) it may be hard to compute, (2) it is not always positive definite, and (3) it may increase the noise of SGD, especially when the Hessian is ill-conditioned.

Quasi-Newton. These methods build approximations of the Hessian inverse, $C_k \approx \nabla^2 F(\theta_k)^{-1}$, with gradient-only information, and are applicable to convex and nonconvex problems. For scalability reasons, variants with limited memory are the most used in practice (Liu & Nocedal, 1989). Following Newton's method idea with the secant equation, the update rule is based on pairs $(s_k, y_k)$ tracking the differences of iterates and stochastic gradients, *i.e.*, $s_k = \theta_{k+1} - \theta_k$ and $y_k = g(\theta_{k+1}, \xi_{k+1}) - g(\theta_k, \xi_{k+1})$. Let $\rho_k = 1/(s_k^\top y_k)$; then the Hessian updates are

$$C_{k+1}=(I-\rho_{k}y_{k}s_{k}^{\top})^{\top}C_{k}(I-\rho_{k}y_{k}s_{k}^{\top})+\rho_{k}s_{k}s_{k}^{\top}.$$

In the deterministic setting, the BFGS update formula above is well-defined as long as $s_k^\top y_k > 0$. Such a condition preserves positive definite approximations and may be obtained in the stochastic setting by replacing the Hessian matrix with a Gauss-Newton approximation and using regularization.

Adaptive methods and diagonal scalings. These methods adapt locally to the structure of the optimization problem by setting $C_k$ as a function of past stochastic gradients. General adaptive methods differ in the construction of the *conditioning* matrix and in whether or not they add a momentum term. Using different representations, such as dense or sparse conditioners, also modifies the properties of the underlying algorithm.
For instance, the optimizers Adam and RMSProp maintain an exponential moving average of past stochastic gradients with a factor $\tau \in (0, 1)$ but fail to guarantee $C_{k+1} \preceq C_k$. Such behaviour can lead to large fluctuations and prevent convergence of the iterates. Instead, AdaGrad and AMSGrad ensure the monotonicity $C_{k+1} \preceq C_k$.

| Optimizer | Gradient matrix $G_{k+1}$ | $m$ |
|-----------|---------------------------|-----|
| AdaFull | $G_k + g_k g_k^\top$ | 0 |
| AdaNorm | $G_k + \lVert g_k\rVert_2^2\, I$ | 0 |
| AdaDiag | $G_k + \mathrm{diag}(g_k g_k^\top)$ | 0 |
| RMSProp | $\tau G_k + (1-\tau)\,\mathrm{diag}(g_k g_k^\top)$ | 0 |
| Adam | $[\tau G_k + (1-\tau)\,\mathrm{diag}(g_k g_k^\top)]/(1-\tau^k)$ | $m$ |
| AMSGrad | $[\tau G_k + (1-\tau)\,\mathrm{diag}(g_k g_k^\top)]/(1-\tau^k)$ | $m$ |

Table 1: Adaptive gradient methods. Denote by $g_k = g(\theta_k, \xi_{k+1})$ and by $m \in [0, 1)$ a momentum parameter. General adaptive gradient methods are defined by $\theta_{k+1} = \theta_k - \gamma_{k+1} C_k \hat{g}_k$ with $\hat{g}_k = m\hat{g}_{k-1} + (1-m)g_k$.

Different optimizers are summarized in Table 1 above. They all rely on a gradient matrix $G_k$ which accumulates the information of the stochastic gradients. The *conditioning* matrix is equal to $C_k = G_k^{-1/2}$, except for AMSGrad, which uses $C_k = \min\{C_{k-1}; G_k^{-1/2}\}$. Starting from $G_0 = \delta I$ with $\delta > 0$, $G_{k+1}$ is updated either in a dense or sparse (diagonal) manner or using an exponential moving average. Note that *conditioned SGD* methods also include schemes with a general estimation of the matrix $C_k$, such as Hessian sketching (Gower et al., 2016) or Jacobian sketching (Gower et al., 2021).

A common assumption made in the literature on adaptive methods is that *conditioning* matrices are well-behaved in the sense that their eigenvalues are bounded in a fixed interval. This property is easy to check for diagonal matrices and can always be implemented in practice using projection.
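To make the framework concrete, the following is a minimal, self-contained sketch (our own construction, not code from the paper; all names and constants are illustrative) of the conditioned SGD rule (2) on a streaming least-squares problem. The conditioner $C_k$ is the inverse of an online Hessian estimate built from past samples only, so that it is $\mathcal{F}_k$-measurable and converges almost surely to $H^{-1}$, anticipating the optimal choice of Section 3.3.

```python
import numpy as np

# Illustrative sketch (not from the paper): conditioned SGD, rule (2), on the
# streaming least-squares objective F(theta) = E[(a^T theta - b)^2] / 2 with
# a ~ N(0, I_d), so H = E[a a^T] = I_d. The conditioner C_k inverts an online
# Hessian estimate built from PAST samples only (hence F_k-measurable).
rng = np.random.default_rng(0)
d, n_steps = 4, 20000
theta_star = 0.5 * rng.normal(size=d)        # unknown minimizer

theta = np.zeros(d)
H_hat = np.eye(d)                            # running estimate of H = E[a a^T]
for k in range(1, n_steps + 1):
    gamma = 1.0 / (k + 10)                   # gamma_k ~ 1/k (Corollary 1), shifted for stability
    C = np.linalg.inv(H_hat)                 # F_k-measurable conditioner, C_k -> H^{-1} a.s.
    a = rng.normal(size=d)                   # fresh sample xi_{k+1}
    b = a @ theta_star + 0.1 * rng.normal()
    g = (a @ theta - b) * a                  # unbiased stochastic gradient at theta_k
    theta = theta - gamma * (C @ g)          # conditioned SGD update (2)
    # update the Hessian estimate with weight 1/(k+1) so H_hat stays positive definite
    H_hat = (1 - 1.0 / (k + 1)) * H_hat + (1.0 / (k + 1)) * np.outer(a, a)

print(np.linalg.norm(theta - theta_star))    # close to zero
```

Note that $C$ is computed before the new sample is drawn; updating `H_hat` with the same sample used for the gradient would break the $\mathcal{F}_k$-measurability required of $C_k$ in rule (2).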
## 3.2 Main Result

Similarly to standard SGD, it is interesting to search for an appropriate rescaled process to obtain convergence rate and asymptotic normality results. In fact, the only additional assumption needed, compared to SGD, is the almost sure convergence of the sequence $(C_k)_{k\geq 0}$. This makes Theorem 1 a particular case of the following theorem, which is the main result of the paper (the proof is given in Appendix A.1).

Theorem 2 (Weak convergence of Conditioned SGD). *Let* $(\theta_k)_{k\geq 0}$ *be obtained by conditioned SGD* (2)*. Suppose that Assumptions 1, 2, 3, 4, 5 are fulfilled and that* $\theta_k \to \theta^\star$ *almost surely. If, moreover,* $C_k \to C \in \mathcal{S}_d^{++}(\mathbb{R})$ *almost surely and all the eigenvalues of* $(CH - \zeta I)$ *are positive with* $\zeta = \mathbb{1}_{\{\beta=1\}}/2\alpha$*, it holds that*

$$\frac{1}{\sqrt{\gamma_{k}}}(\theta_{k}-\theta^{\star})\leadsto\mathcal{N}(0,\Sigma_{C}),\qquad as\ k\to\infty,$$

*where* $\Sigma_C$ *satisfies:*

$$\left(CH-\zeta I_{d}\right)\Sigma_{C}+\Sigma_{C}\left(CH-\zeta I_{d}\right)^{\top}=C\Gamma C^{\top}.$$

Sketch of the proof. The idea of the proof is to rely on the following bias-variance decomposition. Remark that the difference $\Delta_k = \theta_k - \theta^\star$ is subject to the iteration:

$$\Delta_{k+1}=\Delta_{k}-\gamma_{k+1}C_{k}\nabla F(\theta_{k})+\gamma_{k+1}C_{k}w_{k+1},\qquad k\geq0.$$

In a similar spirit as in Delyon (1996), we use the Taylor approximation $\nabla F(\theta_k) = \nabla F(\theta^\star) + H(\theta_k - \theta^\star) + o(\theta_k - \theta^\star) \simeq H(\theta_k - \theta^\star)$ to define the following auxiliary linear stochastic algorithm, which carries the same variance as the main algorithm,

$$\widetilde{\Delta}_{k+1}=\widetilde{\Delta}_{k}-\gamma_{k+1}K\widetilde{\Delta}_{k}+\gamma_{k+1}C_{k}w_{k+1},\qquad k\geq1,$$

where $K = CH$. As a first step, we establish the weak convergence of $\widetilde{\Delta}_{k+1}$ using discrete martingale tools. Note that the analysis is made possible because the matrix $K$ is fixed along this algorithm. As a second step, we prove that the difference $(\Delta_k - \widetilde{\Delta}_k)$, which represents a bias term, is negligible.

Comparison with previous works.
Theorem 2, stated above, is comparable to Theorem 1 of Pelletier (1998b). However, our result on the weak convergence cannot be recovered from that of Pelletier (1998b) due to their Assumption (A1.2) about convergence rates. Indeed, this assumption would require that the sequence $(C_k)_{k\geq 0}$ converge towards $C$ faster than $\sqrt{\gamma_k}$. This condition is either hardly met in practice or difficult to check. Unlike this prior work, our result only requires the almost sure convergence of the sequence $(C_k)_{k\geq 0}$. In the more restrictive setting of convex objectives and the online learning framework, *i.e.*, in which data become available in a sequential order, another way to obtain the weak convergence of the rescaled sequence of iterates $(\theta_k - \theta^\star)/\sqrt{\gamma_k}$ is to rely on the results of Boyer & Godichon-Baggioni (2022). However, once again, their work relies on a particular convergence rate for the matrix sequence $(C_k)_{k\geq 0}$. This implies the derivation of an additional result on the almost sure convergence rate of the iterates. To overcome all these issues, we show in Appendix B that our conditions on the matrices $C_k$ are easily satisfied in common situations.

## 3.3 Asymptotic Optimality Of Conditioned SGD

The best *conditioning* matrix $C$ that could be chosen regarding the asymptotic variance is specified in the next proposition, whose proof is given in the supplementary material (Appendix C.3).

Proposition 1 (Optimal choice). *The choice* $C^\star = H^{-1}$ *is optimal in the sense that* $\Sigma_{C^\star} \preceq \Sigma_C$ *for all* $C \in \mathcal{C}_H$*. Moreover, we have* $\Sigma_{C^\star} = H^{-1}\Gamma H^{-1}$.

Another remarkable result, which directly follows from Theorem 2, is now stated as a corollary.

Corollary 1 (Asymptotic optimality). *Under the assumptions of Theorem 2, if* $\gamma_k = 1/k$ *and* $C = H^{-1}$*, then*

$$\sqrt{k}(\theta_{k}-\theta^{\star})\leadsto{\mathcal N}(0,H^{-1}\Gamma H^{-1}),\qquad as\ k\to\infty.$$

*Moreover, let* $(Z_1, \ldots, Z_d) \sim \mathcal{N}(0, I_d)$ *and let* $(\lambda_k)_{k=1,\ldots,d}$ *be the eigenvalues of the matrix* $H^{-1/2}\Gamma H^{-1/2}$*; then we have the convergence in distribution:*

$$k(F(\theta_{k})-F(\theta^{\star}))\leadsto\sum_{k=1}^{d}\lambda_{k}Z_{k}^{2},\qquad as\ k\to\infty.$$

This result shows the success of the proposed approach, as the asymptotic variance is the optimal one. It provides the user with a practical choice for the sequence of rates, $\gamma_k = 1/k$, and also removes the assumption that $2\alpha H \succ I_d$, which is usually needed in SGD (see Theorem 1). Concerning the almost sure convergence of the conditioning matrices, we provide in Appendix B an explicit way to ensure that $C_k \to H^{-1}$. The above statement also provides insights about the convergence speed. It claims that the convergence rate of $F(\theta_k)$ towards the optimum $F(\theta^\star)$, in $1/k$, is faster than the convergence rate of the iterates, in $1/\sqrt{k}$. Another important feature, which is a consequence of Proposition 1, is that the eigenvalues $(\lambda_k)_{k=1,\ldots,d}$ that appear in the limiting distribution are the smallest ones among all other possible versions of *conditioned* SGD (defined by the matrix $C$).

## 3.4 Convergence Of The Iterates $(\theta_k)$ Of Conditioned SGD

To apply both Theorem 2 and Corollary 1, it remains to check the almost sure convergence of the iterates. In a non-convex setting, the iterates of stochastic first-order methods can only reach local optima, *i.e.*, the iterates are expected to converge to the set $S = \{\theta \in \mathbb{R}^d : \nabla F(\theta) = 0\}$. Going in this direction, we first prove the almost sure convergence of the gradients towards zero for general *conditioned* SGD methods under mild assumptions. This theoretical result may be of independent interest. Under a condition on $S$, one may uniquely identify a limit point $\theta^\star$ and consider the event $\{\theta_k \to \theta^\star\}$, which is needed for the weak convergence results. The next analysis is based on classical assumptions which are used in the literature to obtain the convergence of standard SGD.

Assumption 6 (L-smooth).
$\exists L > 0 : \forall \theta, \eta \in \mathbb{R}^d, \|\nabla F(\theta) - \nabla F(\eta)\|_2 \leq L\|\theta - \eta\|_2$.

Assumption 7 (Lower bound). $\exists F^\star \in \mathbb{R} : \forall \theta \in \mathbb{R}^d, F^\star \leq F(\theta)$.

To handle the noise of the stochastic estimates, we consider a weak growth condition related to the notion of *expected smoothness* as introduced in Gower et al. (2019) (see also Gazagnadou et al. (2019); Gower et al. (2021)). In particular, we extend the condition of Gower et al. (2019) to our general context, in which the sampling distributions are allowed to change along the algorithm.

Assumption 8 (Growth condition). *With probability 1, there exist* $0 \leq \mathcal{L}, \sigma^2 < \infty$ *such that for all* $\theta \in \mathbb{R}^d$, $k \in \mathbb{N}$,

$$\mathbb{E}\left[\|g(\theta, \xi_{k+1})\|_2^2 \,|\, \mathcal{F}_k\right] \leq 2\mathcal{L}(F(\theta) - F^\star) + \sigma^2.$$

This almost-sure bound on the stochastic noise $\mathbb{E}[\|g(\theta, \xi_k)\|_2^2 | \mathcal{F}_{k-1}]$ is key in the analysis of the *conditioned* SGD algorithm. This weak growth condition on the stochastic noise is general and can be achieved in practice with a general lemma available in the supplement (Appendix C.4). Note that Assumption 8, often referred to as a *growth condition*, is mild since it allows the noise to be large when the iterate is far away from the optimal point. In that aspect, it contrasts with uniform bounds of the form $\mathbb{E}[\|g(\theta_k, \xi_{k+1})\|_2^2 | \mathcal{F}_k] \leq \sigma^2$ for some deterministic $\sigma^2 > 0$ (see Nemirovski et al. (2009); Nemirovski & Yudin (1983); Shalev-Shwartz et al. (2011)). Observe that such a uniform bound is recovered by taking $\mathcal{L} = 0$ in Assumption 8, but it cannot hold when the objective function $F$ is strongly convex (Nguyen et al., 2018). Besides, fast convergence rates have been derived in Schmidt & Roux (2013) under the strong-growth condition $\mathbb{E}[\|g(\theta, \xi_{k+1})\|_2^2 | \mathcal{F}_k] \leq M\|\nabla F(\theta)\|_2^2$ for some $M > 0$. Similarly to our growth condition, Bertsekas & Tsitsiklis (2000) and Bottou et al. (2018) performed an analysis under the condition $\mathbb{E}[\|g(\theta, \xi_{k+1})\|_2^2 | \mathcal{F}_k] \leq M\|\nabla F(\theta)\|_2^2 + \sigma^2$ for $M, \sigma^2 > 0$.
Under Assumptions 6 and 7, we have $\|\nabla F(\theta)\|_2^2 \leq 2L(F(\theta) - F(\theta^\star))$ (Gower et al., 2019, Proposition A.1), so our growth condition is less restrictive. If $F$ satisfies the Polyak-Lojasiewicz condition (Karimi et al., 2016), then our growth condition becomes a bit stronger. Another weak growth condition has been used for a non-asymptotic study in Moulines & Bach (2011). The success of *conditioned* SGD relies on the following extended Robbins-Monro condition, which ensures control of the eigenvalues of the *conditioning* matrices.

Assumption 9 (Eigenvalues). *Let* $(\mu_k)_{k\geq 1}$ *and* $(\nu_k)_{k\geq 1}$ *be positive sequences such that:* $\forall k \geq 1$, $\mu_k I_d \preceq C_{k-1} \preceq \nu_k I_d$; $\sum_k \gamma_k \nu_k = +\infty$; $\sum_k (\gamma_k \nu_k)^2 < +\infty$; $\limsup_k \nu_k/\mu_k < \infty$ *a.s.*

The last condition deals with the ratio $(\nu_k/\mu_k)$, which may be seen as a condition number and ensures that the matrices $C_k$ are well-conditioned. The following theorem reveals that all these assumptions are sufficient to ensure the almost sure convergence of the gradients towards zero.

Theorem 3 (Almost sure convergence). *Suppose that Assumptions 1, 6, 7, 8, 9 are fulfilled. Then* $(\theta_k)_{k\geq 0}$ *obtained by conditioned SGD* (2) *satisfies* $\nabla F(\theta_k) \to 0$ *as* $k \to \infty$ *a.s.*

Other convergence results concerning the convergence of the iterates towards global minimizers may be obtained by considering stronger assumptions, such as convexity, or that $F$ is coercive and the level sets of stationary points $S \cap \{\theta : F(\theta) = y\}$ are locally finite for every $y \in \mathbb{R}$ (see Gadat et al. (2018)). In our analysis, the proof of Theorem 3 reveals that $\theta_{k+1} - \theta_k \to 0$ in $L^2$ and almost surely. Thus, as soon as the stationary points are isolated, the sequence of iterates will converge towards a unique stationary point $\theta^\star \in \mathbb{R}^d$. This result is stated in the next corollary.

Corollary 2 (Almost sure convergence). *Under the assumptions of Theorem 3, assume that* $F$ *is coercive and let* $(\theta_k)_{k\geq 0}$ *be the sequence of iterates obtained by the conditioned SGD* (2)*; then* $d(\theta_k, S) \to 0$ *as* $k \to \infty$.
*In particular, if* $S$ *is a finite set,* $(\theta_k)$ *converges to some* $\theta^\star \in S$.

## 4 Asymptotic Optimality In Adaptive Importance Sampling

The aim of this section is to demonstrate that statistical efficiency can be ensured in variational inference problems through the combination of adaptive importance sampling and *conditioned* SGD. Well-known results regarding the asymptotic optimality of maximum likelihood estimates (MLE) obtained from conditioned SGD (see, for instance, Amari (1998) or Bercu et al. (2020)) are first revisited; attention is subsequently shifted towards the variational inference topic relying on adaptive sampling schemes. A novel result is then presented, asserting that even within this challenging framework, *conditioned* SGD allows for the recovery of a certain statistical efficiency.

## 4.1 Maximum Likelihood Estimation

Assume that $(X_k)_{k\geq 1}$ is an independent sequence of random variables with distribution $q^\star$. Consider a parametric family $\{q_\theta : \theta \in \Theta\}$ from which we aim to obtain an estimate of $q^\star$. We further assume that the model is well-specified, *i.e.*, $q^\star = q_{\theta^\star}$ for some $\theta^\star \in \Theta$. The MLE is given by

$${\hat{\theta}}_{n}\in\arg\operatorname*{max}_{\theta\in\Theta}\sum_{i=1}^{n}\log(q_{\theta}(X_{i})).$$

Under suitable conditions (van der Vaart, 1998), it is well known that $\hat{\theta}_n$ is efficient, meaning it is asymptotically unbiased and has the smallest achievable variance. The Cramér-Rao bound is given by the inverse of the Fisher information matrix, denoted by $\mathcal{I}^{-1}$, where

$${\mathcal{I}}=\int\nabla_{\theta}\log(q_{\theta^{\star}})\nabla_{\theta}\log(q_{\theta^{\star}})^{\top}q_{\theta^{\star}}\,d\lambda.$$

Unfortunately, the estimate $\hat{\theta}_n$ is often unknown in closed form, requiring the use of a sequential procedure for approximation. This raises the further question of whether the estimate obtained through the sequential procedure achieves the efficiency bound.
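As a toy numerical illustration (our own construction; names and constants are illustrative, not from the paper), consider estimating the mean of a Gaussian $\mathcal{N}(\theta^\star, \sigma^2)$ with $\sigma$ known. Minimizing the negative log-likelihood $F(\theta) = -\mathbb{E}[\log q_\theta(X)]$ by SGD and conditioning by an online estimate of the Fisher information $\mathcal{I} = 1/\sigma^2$ makes the recursion behave like the running sample mean, i.e., the efficient MLE:

```python
import numpy as np

# Toy illustration (not from the paper): q_theta = N(theta, sigma^2), sigma known.
# Score: d/dtheta log q_theta(x) = (x - theta)/sigma^2, Fisher information I = 1/sigma^2.
# The conditioned step theta + gamma * I^{-1} * score mimics the running sample mean.
rng = np.random.default_rng(0)
sigma, theta_star, n = 2.0, 1.5, 5000
X = rng.normal(theta_star, sigma, size=n)

theta, fisher = 0.0, 1.0                     # iterate and Fisher estimate I_0 (deliberate overestimate)
for k, x in enumerate(X, start=1):
    gamma = 1.0 / k                          # gamma_k = 1/k
    score = (x - theta) / sigma**2           # the stochastic gradient of F is -score
    theta = theta + gamma * score / fisher   # conditioned descent step on F
    # Fisher update with weight 1/(k+1), so the initial value keeps positive mass
    fisher = (1 - 1.0 / (k + 1)) * fisher + (1.0 / (k + 1)) * score**2

print(theta, X.mean())                       # both close to theta_star = 1.5
```

With the conditioner fixed at the true $\mathcal{I}^{-1} = \sigma^2$, the recursion is exactly the running sample mean; estimating $\mathcal{I}$ online only perturbs the transient, in line with the continuity property behind Theorem 2.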
When using standard SGD without conditioning, the update rule is $\theta_{k+1} = \theta_k - \gamma_{k+1}\nabla \log(q_{\theta_k}(X_{k+1}))$. However, the optimal variance bound is not achieved in this case. To recover efficiency, one can rely on *conditioned* SGD, incorporating a conditioning matrix that estimates the inverse of the Hessian. In light of the definition of the Fisher information matrix, the conditioning matrix can be estimated iteratively, using at each step a new sample $X_{k+1}$, as follows:

$${\mathcal{I}}_{k+1}=(1-\gamma_{k+1}){\mathcal{I}}_{k}+\gamma_{k+1}\nabla_{\theta}\log(q_{\theta_{k}}(X_{k+1}))\nabla_{\theta}\log(q_{\theta_{k}}(X_{k+1}))^{\top},$$

and then relying on the conditioned SGD algorithm with update rule $\theta_{k+1} = \theta_k - \gamma_{k+1}\mathcal{I}_k^{-1}\nabla \log(q_{\theta_k}(X_{k+1}))$. As a consequence of Theorem 2, under the stipulated assumptions, one can recover the optimal bound $\mathcal{I}^{-1}$ as the asymptotic variance of $\sqrt{k}(\theta_k - \theta^\star)$.

## 4.2 Adaptive Importance Sampling

Consider the variational inference problem where the aim is to approximate a target distribution $q^\star = q_{\theta^\star}$ based on a family of densities $\{q_\theta : \theta \in \Theta\}$. Unlike the previous statistical framework, one does not have access to random variables distributed according to $q^\star$. Instead, one can usually evaluate the target function $q^\star$. More background about this type of problem may be found in Zhang et al. (2018). In the following, we show that *conditioned* SGD methods allow one to achieve the same variance as the optimal variance described in the previous statistical setting. To the best of our knowledge, this result is novel and has potential implications in variational inference problems using the forward KL (or α-divergence), as described in Jerfel et al. (2021) and Section 5.2 of Zhang et al. (2018). Consider the objective function defined as the Kullback-Leibler divergence between a sampler $q_\theta$ and the target distribution $q^\star$, *i.e.*,

$$F(\theta)=-\int\log(q_{\theta}/q^{\star})\,q^{\star}d\lambda.$$

Under regularity conditions, the gradient and Hessian are respectively written as $\nabla_\theta F(\theta) = -\mathbb{E}_{q^\star}[\nabla_\theta \log(q_\theta)]$ and $\nabla^2_\theta F(\theta) = -\mathbb{E}_{q^\star}[\nabla^2_\theta \log(q_\theta)]$. Stochastic gradients can be defined using an adaptive importance sampling-based estimate as in Delyon & Portier (2018). Given the current iterate $\theta_k$, one needs to generate $X_{k+1}$ from $q_{\theta_k}$ and compute the (unbiased) stochastic gradient $g_{k+1} = v_{k+1}\nabla_{k+1}$, where $v_{k+1} = q^\star(X_{k+1})/q_{\theta_k}(X_{k+1})$ and $\nabla_{k+1} = \nabla_\theta \log(q_{\theta_k}(X_{k+1}))$. Based on our almost sure convergence result, one can obtain that $\theta_k \to \theta^\star$ and then deduce

$$\Gamma=\operatorname*{lim}_{k\to\infty}\mathbb{E}[g_{k+1}g_{k+1}^{\top}]=\operatorname*{lim}_{k\to\infty}\int\nabla_{\theta}\log(q_{\theta_{k}})\nabla_{\theta}\log(q_{\theta_{k}})^{\top}{\frac{q^{\star2}}{q_{\theta_{k}}}}\,d\lambda=\mathcal{I},$$

where the value $\mathcal{I}$ for the limit comes from replacing $q_{\theta_k}$ by its limit $q_{\theta^\star} = q^\star$. The choice of the *conditioning* matrix $C_k$ may be made using an auxiliary algorithm of the following form:

$$C_{k+1}=(1-\gamma_{k+1})C_{k}+\gamma_{k+1}v_{k+1}\nabla_{k+1}\nabla_{k+1}^{\top}.$$

It can be shown that the sequence of *conditioning* matrices $(C_k)$ converges to $\mathcal{I}$. Thus, Theorem 2 implies that conditioned SGD is efficient in this framework, as it matches the lower bound of the previous, less restrictive statistical framework in which $q_k = q^\star$. Similar computations, left for future work, may be performed to investigate whether the same optimal variance can be achieved with more general similarity measures such as α-divergences (Daudel et al., 2021).

## 5 Conclusion And Discussion

We derived an asymptotic theory for *conditioned* SGD methods in a general non-convex setting. Compared to standard SGD methods, the only additional assumption required to obtain the weak convergence is the almost sure convergence of the *conditioning* matrices. The use of appropriate *conditioning* matrices with the help of Hessian estimates is the key to achieving asymptotic optimality.
While our study focuses on the weak convergence of the rescaled sequence of iterates (an appropriate tool to deal with efficiency issues, since algorithms can easily be compared through their asymptotic variances), it would be interesting to complement our asymptotic results with concentration inequalities. This research direction, left for future work, may be pursued at the cost of extra assumptions, *e.g.*, strong convexity of the objective function combined with bounded gradients. Furthermore, by using some recent results on the behavior of adaptive gradient methods in non-convex settings (Daneshmand et al., 2018; Staib et al., 2019; Antonakopoulos et al., 2022), another research direction would be to extend the current weak convergence analysis to edge cases where the objective function possesses saddle points.

From a practical standpoint, the approach proposed in Appendix B may not be computationally optimal, as it requires an eigenvalue decomposition. However, *conditioned* SGD methods, and especially stochastic second-order methods, do not actually require the full computation of a matrix decomposition but rely on matrix-vector products, which may be performed in $O(d^2)$ operations. Furthermore, using low-rank approximations with the BFGS algorithm (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970) and its variant L-BFGS (Liu & Nocedal, 1989), those algorithms approximately invert Hessian matrices in $O(d)$ operations. More recently, this technique was extended to the online learning framework (Schraudolph et al., 2007) and to a purely stochastic setting (Moritz et al., 2016). Similarly, the different adaptive optimizers presented in Section 3.1 are concerned with both fast computations and high precision. Designing an efficient *conditioned* SGD algorithm involves a careful trade-off between the low-memory storage of the scaling matrix representation $C_k$ and the quality of its approximation of either the inverse Hessian $\nabla^2 F(\theta^\star)^{-1}$ or the information brought in by the underlying geometry of the problem.

## Acknowledgments

The authors are grateful to the Associate Editor and three anonymous Reviewers for their many valuable comments and interesting suggestions.

## References

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In *12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)*, pp. 265–283, 2016.

Shun-Ichi Amari. Natural gradient works efficiently in learning. *Neural Computation*, 10(2):251–276, 1998.

Kimon Antonakopoulos, Panayotis Mertikopoulos, Georgios Piliouras, and Xiao Wang. AdaGrad avoids saddle points. In *International Conference on Machine Learning*, pp. 731–771. PMLR, 2022.

Jonathan Baxter and Peter L. Bartlett. Infinite-horizon policy-gradient estimation. *Journal of Artificial Intelligence Research*, 15:319–350, 2001.

Michel Benaïm. Dynamics of stochastic approximation algorithms. In *Séminaire de Probabilités XXXIII*, pp. 1–68. Springer, 1999.

Albert Benveniste, Michel Métivier, and Pierre Priouret. *Adaptive Algorithms and Stochastic Approximations*, volume 22. Springer Science & Business Media, 2012.

Bernard Bercu, Antoine Godichon, and Bruno Portier. An efficient stochastic Newton algorithm for parameter estimation in logistic regressions. *SIAM Journal on Control and Optimization*, 58(1):348–367, 2020.

Dimitri P. Bertsekas and John N. Tsitsiklis. Gradient convergence in gradient methods with errors. *SIAM Journal on Optimization*, 10(3):627–642, 2000.

M. Émile Borel. Les probabilités dénombrables et leurs applications arithmétiques. *Rendiconti del Circolo Matematico di Palermo (1884-1940)*, 27(1):247–271, 1909.

Léon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. *SIAM Review*, 60(2):223–311, 2018.

Claire Boyer and Antoine Godichon-Baggioni.
On the asymptotic rate of convergence of stochastic newton algorithms and their weighted averaged versions. *Computational Optimization and Applications*, pp. 1–52, 2022.

Charles George Broyden. The convergence of a class of double-rank minimization algorithms 1. general considerations. *IMA Journal of Applied Mathematics*, 6(1):76–90, 1970.

Richard H Byrd, Samantha L Hansen, Jorge Nocedal, and Yoram Singer. A stochastic quasi-newton method for large-scale optimization. *SIAM Journal on Optimization*, 26(2):1008–1031, 2016.

Stephan Clémençon, Patrice Bertail, Emilie Chautru, and Guillaume Papa. Optimal survey schemes for stochastic gradient descent with applications to m-estimation. *ESAIM: Probability and Statistics*, 23:310–337, 2019.

Hadi Daneshmand, Jonas Kohler, Aurelien Lucchi, and Thomas Hofmann. Escaping saddles with stochastic gradients. In *International Conference on Machine Learning*, pp. 1155–1164. PMLR, 2018.

Kamélia Daudel, Randal Douc, and François Portier. Infinite-dimensional gradient-based descent for alpha-divergence minimisation. *The Annals of Statistics*, 49(4):2250–2270, 2021. doi: 10.1214/20-AOS2035. URL https://doi.org/10.1214/20-AOS2035.

Bernard Delyon. General results on the convergence of stochastic algorithms. *IEEE Transactions on Automatic Control*, 41(9):1245–1255, 1996.

Bernard Delyon and François Portier. Asymptotic optimality of adaptive importance sampling. In *Proceedings of the 32nd International Conference on Neural Information Processing Systems*, pp. 3138–3148. Curran Associates Inc., 2018.

Bernard Delyon and François Portier. Safe adaptive importance sampling: A mixture approach. *The Annals of Statistics*, 49(2):885–917, 2021.

Aymeric Dieuleveut, Alain Durmus, and Francis Bach. Bridging the gap between constant step size stochastic gradient descent and markov chains. *Annals of Statistics*, 48(3):1348–1382, 2020.

Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. *Journal of machine learning research*, 12(Jul):2121–2159, 2011.

Marie Duflo. *Random iterative models*, volume 34. Springer Science & Business Media, 1st edition, 2013.

Vaclav Fabian. On asymptotic normality in stochastic approximation. *The Annals of Mathematical Statistics*, 39(4):1327–1332, 1968.

Vaclav Fabian. Asymptotically efficient stochastic approximation; the RM case. *The Annals of Statistics*, 1(3):486–495, 1973.

Roger Fletcher. A new approach to variable metric algorithms. *The computer journal*, 13(3):317–322, 1970.

Sébastien Gadat and Fabien Panloup. Optimal non-asymptotic bound of the ruppert-polyak averaging without strong convexity. *arXiv preprint arXiv:1709.03342*, 2017.

Sébastien Gadat, Fabien Panloup, Sofiane Saadane, et al. Stochastic heavy ball. *Electronic Journal of Statistics*, 12(1):461–529, 2018.

Nidham Gazagnadou, Robert Gower, and Joseph Salmon. Optimal mini-batch and step sizes for saga. In *International conference on machine learning*, pp. 2142–2150. PMLR, 2019.

Donald Goldfarb. A family of variable-metric methods derived by variational means. *Mathematics of computation*, 24(109):23–26, 1970.

Robert Gower, Donald Goldfarb, and Peter Richtárik. Stochastic block bfgs: Squeezing more curvature out of data. In *International Conference on Machine Learning*, pp. 1869–1878. PMLR, 2016.

Robert M Gower, Peter Richtárik, and Francis Bach. Stochastic quasi-gradient methods: Variance reduction via jacobian sketching. *Mathematical Programming*, 188(1):135–192, 2021.

Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richtárik. Sgd: General analysis and improved rates. In *International Conference on Machine Learning*, pp. 5200–5209. PMLR, 2019.

P. Hall and C.C. Heyde. *Martingale Limit Theory and Its Application*. Probability and mathematical statistics. Academic Press, 1980.
ISBN 9781483240244. URL https://books.google.fr/books?id=wdLajgEACAAJ.

David Harrison Jr and Daniel L Rubinfeld. Hedonic housing prices and the demand for clean air. *Journal of environmental economics and management*, 5(1):81–102, 1978.

Ghassen Jerfel, Serena Wang, Clara Wong-Fannjiang, Katherine A Heller, Yian Ma, and Michael I Jordan. Variational refinement for importance sampling using the forward kullback-leibler divergence. In *Uncertainty in Artificial Intelligence*, pp. 1819–1829. PMLR, 2021.

Sham M Kakade. A natural policy gradient. In *Advances in neural information processing systems*, pp. 1531–1538, 2002.

Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 795–811. Springer, 2016.

Hassan K Khalil. *Nonlinear systems; 3rd ed.* Prentice-Hall, Upper Saddle River, NJ, 2002. URL https://cds.cern.ch/record/1173048.

Jack Kiefer, Jacob Wolfowitz, et al. Stochastic estimation of the maximum of a regression function. *The Annals of Mathematical Statistics*, 23(3):462–466, 1952.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint arXiv:1412.6980*, 2014.

Harold J Kushner and Hai Huang. Rates of convergence for stochastic approximation type algorithms. *SIAM Journal on Control and Optimization*, 17(5):607–617, 1979.

Harold Joseph Kushner and Dean S Clark. Stochastic approximation methods for constrained and unconstrained systems. 1978.

Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In *Neural networks: Tricks of the trade*, pp. 9–48. Springer, 2012.

Rémi Leluc and François Portier. Sgd with coordinate sampling: Theory and practice. *Journal of Machine Learning Research*, 23(342):1–47, 2022. URL http://jmlr.org/papers/v23/21-1240.html.

Dong C Liu and Jorge Nocedal.
On the limited memory bfgs method for large scale optimization. *Mathematical programming*, 45(1):503–528, 1989.

Philipp Moritz, Robert Nishihara, and Michael Jordan. A linearly-convergent stochastic l-bfgs algorithm. In *Artificial Intelligence and Statistics*, pp. 249–258. PMLR, 2016.

Eric Moulines and Francis R Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In *Advances in Neural Information Processing Systems*, pp. 451–459, 2011.

Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. *SIAM Journal on optimization*, 19(4):1574–1609, 2009.

Arkadi Semenovich Nemirovski and David Borisovich Yudin. Problem complexity and method efficiency in optimization. 1983.

Yurii Nesterov. *Introductory lectures on convex optimization: A basic course*, volume 87. Springer Science & Business Media, 2013.

Mikhail Borisovich Nevelson and Rafail Zalmanovich Khas'minskiĭ. *Stochastic approximation and recursive estimation*, volume 47. American Mathematical Soc., 1976.

Lam Nguyen, Phuong Ha Nguyen, Marten Dijk, Peter Richtárik, Katya Scheinberg, and Martin Takác. Sgd and hogwild! convergence without the bounded gradients assumption. In *International Conference on Machine Learning*, pp. 3750–3758. PMLR, 2018.

Guillaume Papa, Pascal Bianchi, and Stéphan Clémençon. Adaptive sampling for incremental optimization using stochastic gradient descent. In *International Conference on Algorithmic Learning Theory*, pp. 317–331. Springer, 2015.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011.

Mariane Pelletier. On the almost sure asymptotic behaviour of stochastic algorithms.
*Stochastic processes and their applications*, 78(2):217–244, 1998a.

Mariane Pelletier. Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing. *Annals of Applied Probability*, pp. 10–44, 1998b.

Boris T Polyak. A new method of stochastic approximation type. *Avtomatika i telemekhanika*, (7):98–107, 1990.

Boris T Polyak and Anatoli B Juditsky. Acceleration of stochastic approximation by averaging. *SIAM Journal on Control and Optimization*, 30(4):838–855, 1992.

Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In *International Conference on Learning Representations*, 2018.

Herbert Robbins and Sutton Monro. A stochastic approximation method. *The annals of mathematical statistics*, pp. 400–407, 1951.

Herbert Robbins and David Siegmund. A convergence theorem for non negative almost supermartingales and some applications. In *Optimizing methods in statistics*, pp. 233–257. Elsevier, 1971.

Jerome Sacks. Asymptotic distribution of stochastic approximation procedures. *The Annals of Mathematical Statistics*, 29(2):373–405, 1958.

Mark Schmidt and Nicolas Le Roux. Fast convergence of stochastic gradient descent under a strong growth condition. *arXiv preprint arXiv:1308.6370*, 2013.

Nicol N Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-newton method for online convex optimization. In *Artificial intelligence and statistics*, pp. 436–443. PMLR, 2007.

Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated subgradient solver for svm. *Mathematical programming*, 127(1):3–30, 2011.

David F Shanno. Conditioning of quasi-newton methods for function minimization. *Mathematics of computation*, 24(111):647–656, 1970.

Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczyński. *Lectures on stochastic programming: modeling and theory*. SIAM, 2014.

Matthew Staib, Sashank Reddi, Satyen Kale, Sanjiv Kumar, and Suvrit Sra.
Escaping saddle points with adaptive gradient methods. In *International Conference on Machine Learning*, pp. 5956–5965. PMLR, 2019.

Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT press, 2018.

Tijmen Tieleman, Geoffrey Hinton, et al. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. *COURSERA: Neural networks for machine learning*, 4(2):26–31, 2012.

A. W. van der Vaart. *Asymptotic statistics*, volume 3 of *Cambridge Series in Statistical and Probabilistic Mathematics*. Cambridge University Press, Cambridge, 1998.

J. H. Venter. An extension of the robbins-monro procedure. *The Annals of Mathematical Statistics*, 38(1):181–190, 1967.

Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. Gradient sparsification for communication-efficient distributed optimization. In *Advances in Neural Information Processing Systems*, pp. 1299–1309, 2018.

CZ Wei. Multivariate adaptive stochastic approximation. *The Annals of Statistics*, 15(3):1115–1130, 1987.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. *Machine learning*, 8(3-4):229–256, 1992.

Mishael Zedek. Continuity and location of zeros of linear combinations of polynomials. *Proceedings of the American Mathematical Society*, 16(1):78–84, 1965.

Cheng Zhang, Judith Bütepage, Hedvig Kjellström, and Stephan Mandt. Advances in variational inference. *IEEE transactions on pattern analysis and machine intelligence*, 41(8):2008–2026, 2018.

Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In *Proceedings of the twenty-first international conference on Machine learning*, pp. 116, 2004.

Wanrong Zhu, Xi Chen, and Wei Biao Wu. Online covariance matrix estimation in stochastic gradient descent. *Journal of the American Statistical Association*, pp. 1–12, 2021.
# Appendix: Asymptotic Analysis Of Conditioned Stochastic Gradient Descent

Appendix A contains the mathematical proofs of the main results, while Appendix B is dedicated to a practical procedure and numerical experiments for illustration purposes. Appendix C gathers some technical auxiliary results and additional propositions.

- A Proofs of main results
  - A.1 Proof of the weak convergence (Theorem 2)
  - A.2 Proof of the almost sure convergence (Theorem 3)
  - A.3 Proof of Corollary 2
- B Practical procedure
  - B.1 Construction of the conditioning matrix $C_k$
  - B.2 Numerical illustration
- C Auxiliary results
  - C.1 Robbins-Siegmund Theorem
  - C.2 Auxiliary lemmas
  - C.3 Additional propositions
  - C.4 Auxiliary results on expected smoothness

## A Proofs Of Main Results

## A.1 Proof Of The Weak Convergence (Theorem 2)

For any matrix $A\in\mathbb{R}^{d\times d}$, we denote by $\|A\|=\max_{\|u\|_2=1}\|Au\|_2$ the operator norm associated with the Euclidean norm and by $\rho(A)$ the spectral radius of $A$, *i.e.*, $\rho(A)=\max\{|\lambda_1|,\ldots,|\lambda_d|\}$, where $\lambda_1,\ldots,\lambda_d$ are the eigenvalues of $A$. We also introduce $\lambda_{\min}(A)=\min\{|\lambda_1|,\ldots,|\lambda_d|\}$. Note that when $A$ is symmetric, $\|A\|=\rho(A)$, and recall that the spectral radius is a (submultiplicative) norm on the real linear space of symmetric matrices.

## Structure Of The Proof.

In virtue of Assumption 5, there exist $\delta,\varepsilon>0$ such that, almost surely,

$$\sup_{k\geq0}\mathbb{E}[\|w_{k+1}\|_{2}^{2+\delta}\mid\mathcal{F}_{k}]\mathds{1}_{\{\|\theta_{k}-\theta^{\star}\|_{2}\leq\varepsilon\}}<\infty.\tag{3}$$

An important event in the following is

$$\mathcal{A}_{k}=\{\|\theta_{k}-\theta^{\star}\|_{2}\leq\varepsilon,\ \|C_{k}\|<2\|C\|,\ \|\Gamma_{k}\|\leq2\|\Gamma\|\}.$$

By assumption, this event has probability going to 1.
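As a quick numerical sanity check of the identity recalled above, namely that the operator norm coincides with the spectral radius for symmetric matrices, the following sketch uses a random symmetric matrix (a hypothetical instance, not part of the proof):

```python
import numpy as np

# For a real symmetric matrix, the operator norm associated with the
# Euclidean norm equals the spectral radius (illustrative random instance).
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                          # symmetrize
op_norm = np.linalg.norm(A, 2)             # max of ||Au||_2 over ||u||_2 = 1
rho = np.abs(np.linalg.eigvalsh(A)).max()  # spectral radius
assert np.isclose(op_norm, rho)
```

For non-symmetric matrices the two quantities differ in general, which is why the proof repeatedly exploits the symmetry of the matrices $\Pi_{n,k}$ below.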
Introduce the difference

$$\Delta_{k}=\theta_{k}-\theta^{\star},$$

and remark that $\Delta_k$ is subjected to the iteration

$$\Delta_{0}=\theta_{0}-\theta^{\star},\qquad\Delta_{k+1}=\Delta_{k}-\gamma_{k+1}C_{k}\nabla F(\theta_{k})+\gamma_{k+1}C_{k}w_{k+1},\qquad k\geq0,$$

with $w_{k+1}=\nabla F(\theta_k)-g(\theta_k,\xi_{k+1})$. We have by assumption that $C_k\to C$ almost surely and we can define $K=\lim_{k\to\infty}C_kH=CH$. The proof relies on the introduction of an auxiliary stochastic algorithm which follows the iteration

$$\widetilde{\Delta}_{0}=\theta_{0}-\theta^{\star},\qquad\widetilde{\Delta}_{k+1}=\widetilde{\Delta}_{k}-\gamma_{k+1}K\widetilde{\Delta}_{k}+\gamma_{k+1}C_{k}w_{k+1}\mathds{1}_{\mathcal{A}_{k}},\qquad k\geq0.$$

The previous algorithm is a linear approximation of the algorithm that defines $\Delta_k$, in the sense that $\nabla F(\theta_k)=\nabla F(\theta^{\star})+H(\theta_k-\theta^{\star})+o(\theta_k-\theta^{\star})\simeq H(\theta_k-\theta^{\star})$ has been linearly expanded around $\theta^{\star}$. Writing

$$\Delta_{k}=\widetilde{\Delta}_{k}+(\Delta_{k}-\widetilde{\Delta}_{k}),$$

and invoking the Slutsky lemma, the proof will be complete as soon as we obtain that

$$\gamma_{k}^{-1/2}\widetilde{\Delta}_{k}\leadsto\mathcal{N}(0,\Sigma),\tag{4}$$

$$(\Delta_{k}-\widetilde{\Delta}_{k})=o_{\mathbb{P}}(\gamma_{k}^{1/2}).\tag{5}$$

Denote by $\sqrt{H}$ the positive square root of the real symmetric positive definite matrix $H$ and consider the transformation $\Theta_k=\sqrt{H}\widetilde{\Delta}_k$, which satisfies

$$\Theta_{0}=\sqrt{H}\widetilde{\Delta}_{0},\qquad\Theta_{k+1}=\Theta_{k}-\gamma_{k+1}\widetilde{K}\Theta_{k}+\gamma_{k+1}\sqrt{H}C_{k}w_{k+1}\mathds{1}_{\mathcal{A}_{k}},\qquad k\geq0,$$

where $\widetilde{K}=\sqrt{H}C\sqrt{H}\in\mathcal{S}_{d}^{++}(\mathbb{R})$ is a real symmetric positive definite matrix. The sequence $(\Theta_k)_{k\geq0}$ is easier to study than $\widetilde{\Delta}_k$ because, contrary to $\widetilde{K}$, the matrix $K=CH$ is not symmetric in general, unless $C$ and $H$ commute.
In view of Assumption 3, the eigenvalues of $\widetilde{K}$ are real and positive. Denote by $\lambda_m$ (*resp.* $\lambda_M$) the smallest (*resp.* the largest) eigenvalue of $\widetilde{K}$, *i.e.*,

$$\lambda_{m}=\lambda_{\operatorname*{min}}(\widetilde{K}),\quad\lambda_{M}=\lambda_{\operatorname*{max}}(\widetilde{K}).$$

Because $CH$ is similar to $\widetilde{K}$, they share the same eigenvalues. Since, by assumption, the eigenvalues of $(CH-\zeta I_d)$ are positive, we have $2\alpha\lambda_m>\mathds{1}_{\{\beta=1\}}$. For all $k\geq1$, introduce the real symmetric matrix $A_k=I-\gamma_k\widetilde{K}$. Observe that all these matrices commute, *i.e.*, for any $i,j\geq0$, we have $A_iA_j=A_jA_i$. For any $k,n\geq0$, denote the matrix products

$$\Pi_{n,k}=A_{n}\ldots A_{k+1}\ \text{if}\ k<n,\qquad\Pi_{n,k}=I_{d}\ \text{if}\ k\geq n,\qquad\Pi_{n}=\Pi_{n,0}.$$

Since the matrices $A_k$ commute, $\Pi_{n,k}^{\top}=\Pi_{n,k}$ is also real symmetric.

Step 1. Proof of Equation (4). The random process $(\Theta_k)_{k\geq0}$ follows the recursion equation

$$\Theta_{k}=A_{k}\Theta_{k-1}+\gamma_{k}\sqrt{H}C_{k-1}w_{k}\mathds{1}_{\mathcal{A}_{k-1}}.$$

We have by induction

$$\Theta_{n}=\Pi_{n}\Theta_{0}+\sum_{k=1}^{n}\gamma_{k}\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\mathds{1}_{\mathcal{A}_{k-1}},$$

and the rescaled process is equal to

$$\frac{\Theta_{n}}{\sqrt{\gamma_{n}}}=\underbrace{\frac{\Pi_{n}}{\sqrt{\gamma_{n}}}\Theta_{0}}_{\text{initial error }Y_{n}}+\underbrace{\sum_{k=1}^{n}\frac{\gamma_{k}}{\sqrt{\gamma_{n}}}\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\mathds{1}_{\mathcal{A}_{k-1}}}_{\text{sampling error }M_{n}}.$$

Bound on the initial error. Define $\tau_n=\sum_{k=1}^{n}\gamma_k$, the partial sum of the learning rates. Since $\Pi_n$ is symmetric, we have $\|\Pi_n\Theta_0\|_2\leq\rho(\Pi_n)\|\Theta_0\|_2$.
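The similarity claim used above, namely that $\widetilde{K}=\sqrt{H}C\sqrt{H}$ shares its (real, positive) spectrum with $K=CH$, can be checked numerically; here $C$ and $H$ are hypothetical random symmetric positive definite matrices, not objects from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def spd(d, rng):
    # random symmetric positive definite matrix (illustrative construction)
    B = rng.standard_normal((d, d))
    return B @ B.T + d * np.eye(d)

d = 4
C, H = spd(d, rng), spd(d, rng)
w, V = np.linalg.eigh(H)
sqH = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric square root of H
K_tilde = sqH @ C @ sqH                      # symmetric positive definite
# K_tilde = sqH (C H) sqH^{-1}, so it is similar to C H: same spectrum
eig_tilde = np.sort(np.linalg.eigvalsh(K_tilde))
eig_CH = np.sort(np.linalg.eigvals(C @ H).real)
assert np.allclose(eig_tilde, eig_CH)
assert eig_tilde.min() > 0                   # spectrum is real and positive
```

This is exactly why the proof can work with the symmetric $\widetilde{K}$ while the conclusions transfer to $K = CH$.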
In view of Lemma 5, since $\gamma_k\to0$, there exists $j\geq1$ such that

$$\rho(\Pi_{n})\leq\rho(\Pi_{j})\exp(-\lambda_{m}(\tau_{n}-\tau_{j})).$$

Therefore, the initial error is bounded by

$$\|Y_{n}\|_{2}\leq\rho(\Pi_{j})\exp(\lambda_{m}\tau_{j})\|\Theta_{0}\|_{2}\exp(d_{n})\quad\text{with}\quad d_{n}=-\lambda_{m}\tau_{n}-\log(\sqrt{\gamma_{n}}).$$

Using Lemma 6, we can treat the two cases $\beta<1$ and $\beta=1$. On the one hand, if $\beta<1$, then we always have $d_n\to-\infty$. On the other hand, if $\beta=1$, we have $d_n\sim(\tfrac{1}{2}-\alpha\lambda_m)\log(n)$, and the condition $2\alpha\lambda_m-1>0$ ensures $d_n\to-\infty$. In both cases we get $\exp(d_n)\to0$ and the initial error vanishes to 0.

Weak convergence of the sampling error. Consider the random process

$$M_{n}=\gamma_{n}^{-1/2}\sum_{k=1}^{n}\gamma_{k}\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\mathds{1}_{\mathcal{A}_{k-1}}.$$

Note that $\theta_k$, $A_k$ and $C_k$ are $\mathcal{F}_k$-measurable. As a consequence, $M_n$ is a sum of martingale increments and we may rely on the following central limit theorem for martingale arrays.

Theorem 4. (Hall & Heyde, 1980, Corollary 3.1) Let $(W_{n,i})_{1\leq i\leq n,\,n\geq1}$ be a triangular array of random vectors such that

$$\mathbb{E}[W_{n,i}\mid\mathcal{F}_{i-1}]=0,\quad\text{for all }1\leq i\leq n,\tag{6}$$

$$\sum_{i=1}^{n}\mathbb{E}[W_{n,i}W_{n,i}^{\top}\mid\mathcal{F}_{i-1}]\to V^{\star}\geq0,\quad\text{in probability},\tag{7}$$

$$\sum_{i=1}^{n}\mathbb{E}[\|W_{n,i}\|^{2}\mathds{1}_{\{\|W_{n,i}\|\geq\tau\}}\mid\mathcal{F}_{i-1}]\to0,\quad\text{for all }\tau>0,\ \text{in probability},\tag{8}$$

then $\sum_{i=1}^{n}W_{n,i}\leadsto\mathcal{N}(0,V^{\star})$ as $n\to\infty$.

We start by verifying (7). Let $D_k=\sqrt{H}C_{k-1}\Gamma_{k-1}C_{k-1}^{\top}\sqrt{H}\mathds{1}_{\mathcal{A}_{k-1}}\in\mathcal{S}_d(\mathbb{R})$. The quadratic variation of $M_n$ is given by

$$\Sigma_{n}=\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\Pi_{n,k}D_{k}\Pi_{n,k}^{\top}.$$

First, we can check that $\Sigma_n$ is bounded.
Using the triangle inequality and since the operator norm is submultiplicative, we have

$$\|\Sigma_{n}\|\leq\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\|\Pi_{n,k}D_{k}\Pi_{n,k}^{\top}\|\leq\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\|D_{k}\|\|\Pi_{n,k}\|^{2}=\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\|D_{k}\|\rho(\Pi_{n,k})^{2},$$

where we use in the last equality that $\Pi_{n,k}$ is real symmetric, so $\|\Pi_{n,k}\|=\rho(\Pi_{n,k})$. On the event $\mathcal{A}_{k-1}$, the matrices $C_{k-1}$ and $\Gamma_{k-1}$ are bounded as $\|C_{k-1}\|\leq2\|C\|$ and $\|\Gamma_{k-1}\|\leq2\|\Gamma\|$, leading to the following bound for the matrix $D_k$:

$$\|D_{k}\|=\|\sqrt{H}C_{k-1}\Gamma_{k-1}C_{k-1}^{\top}\sqrt{H}\mathds{1}_{\mathcal{A}_{k-1}}\|\leq\|H\|\|\Gamma_{k-1}\|\|C_{k-1}\|^{2}\mathds{1}_{\mathcal{A}_{k-1}}\leq8\|H\|\|\Gamma\|\|C\|^{2}=U_{D}.$$

It follows that

$$\|\Sigma_{n}\|\leq U_{D}\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\rho(\Pi_{n,k})^{2}.$$

In view of Lemma 5, we shall split the summation between $k=1,\ldots,j$ and $k=j+1,\ldots,n$ as

$$\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\rho\left(\Pi_{n,k}\right)^{2}\leq\underbrace{\gamma_{n}^{-1}\sum_{k=1}^{j}\gamma_{k}^{2}\rho\left(\Pi_{n,k}\right)^{2}}_{a_{n}}+\underbrace{\gamma_{n}^{-1}\sum_{k=j+1}^{n}\gamma_{k}^{2}\prod_{i=k+1}^{n}\left(1-\lambda_{m}\gamma_{i}\right)^{2}}_{b_{n}}.$$

For the first term $a_n$, we have for all $k=1,\ldots,j$,

$$\rho(\Pi_{n,k})\leq\rho(\Pi_{n,j})\leq\prod_{i=j+1}^{n}(1-\lambda_{m}\gamma_{i})\leq\exp(-\lambda_{m}(\tau_{n}-\tau_{j})),$$

which implies, since $(\gamma_k)$ is decreasing with $\gamma_1=\alpha$, that

$$\sum_{k=1}^{j}\gamma_{k}^{2}\rho\left(\Pi_{n,k}\right)^{2}\leq\alpha\tau_{j}\exp(-2\lambda_{m}(\tau_{n}-\tau_{j})).$$

Therefore, similarly to the initial error term, we get $a_n\leq\alpha\tau_j\exp(2\lambda_m\tau_j)\exp(d_n)$ with $d_n=-2\lambda_m\tau_n-\log(\gamma_n)$, and the condition $2\alpha\lambda_m-1>0$ ensures $d_n\to-\infty$, so that $a_n$ goes to 0 and is almost surely bounded by $U_a$. For the second term $b_n$, we can apply Lemma 3 and need to distinguish between the two cases:

- ($\beta=1$) If $\gamma_n=\alpha/n$, since $2\alpha\lambda_m>1$, we can apply Lemma 3 ($p=1$, $m=2$, $\lambda=\lambda_m\alpha$, $x_j=0$, $\varepsilon_k=\alpha^2$) and obtain $b_n\leq\alpha^2/(2\alpha\lambda_m-1)=U_b$.
- ($\beta<1$) If $\gamma_n=\alpha/n^{\beta}$, we deduce the same as before because $\lambda_m>0$.

Finally, in both cases, we get

$$\|\Sigma_{n}\|\leq U_{D}\left(U_{a}+U_{b}\right).\tag{9}$$

We now derive the limit of $\Sigma_n$. We shall use a recursion equation to recover a stochastic approximation scheme. Note that

$$\gamma_{n}\Sigma_{n}=\sum_{k=1}^{n}\gamma_{k}^{2}\Pi_{n,k}D_{k}\Pi_{n,k}^{\top}\tag{10}$$

$$=\gamma_{n}^{2}D_{n}+A_{n}\left(\sum_{k=1}^{n-1}\gamma_{k}^{2}\Pi_{n-1,k}D_{k}\Pi_{n-1,k}^{\top}\right)A_{n}^{\top},\tag{11}$$
and recognize

$$\gamma_{n}\Sigma_{n}=\gamma_{n}^{2}D_{n}+\gamma_{n-1}A_{n}\Sigma_{n-1}A_{n}^{\top}.$$

Replacing the symmetric matrix $A_n=I-\gamma_n\widetilde{K}$, we get (because $\Sigma_n$ is bounded almost surely)

$$\gamma_{n}\Sigma_{n}=\gamma_{n}^{2}D_{n}+\gamma_{n-1}(I-\gamma_{n}\widetilde{K})\Sigma_{n-1}(I-\gamma_{n}\widetilde{K})=\gamma_{n}^{2}D_{n}+\gamma_{n-1}\left[\Sigma_{n-1}-\gamma_{n}\Sigma_{n-1}\widetilde{K}-\gamma_{n}\widetilde{K}\Sigma_{n-1}+O(\gamma_{n}^{2})\right].$$

Divide by $\gamma_n$ to obtain

$$\Sigma_{n}=\gamma_{n}D_{n}+\frac{\gamma_{n-1}}{\gamma_{n}}\left[\Sigma_{n-1}-\gamma_{n}(\widetilde{K}\Sigma_{n-1}+\Sigma_{n-1}\widetilde{K})+O(\gamma_{n}^{2})\right],$$

and we recognize a stochastic approximation scheme

$$\Sigma_{n}=\Sigma_{n-1}-\gamma_{n}\left[\widetilde{K}\Sigma_{n-1}+\Sigma_{n-1}\widetilde{K}-D_{n}\right]+\frac{\gamma_{n-1}-\gamma_{n}}{\gamma_{n}}\Sigma_{n-1}+O(\gamma_{n-1}\gamma_{n}+|\gamma_{n-1}-\gamma_{n}|).$$

Recall that when $\beta<1$ we have

$$\frac{1}{\gamma_{n}}-\frac{1}{\gamma_{n-1}}\rightarrow0,\ \text{i.e.},\ \frac{\gamma_{n-1}-\gamma_{n}}{\gamma_{n}}=o(\gamma_{n}).$$

- ($\beta=1$) If $\gamma_n=\alpha/n$ we get
$$\Sigma_{n}=\Sigma_{n-1}-\frac{\alpha}{n}\left[\widetilde{K}\Sigma_{n-1}+\Sigma_{n-1}\widetilde{K}-\frac{1}{\alpha}\Sigma_{n-1}-D_{n}\right]+O(n^{-2})=\Sigma_{n-1}-\frac{\alpha}{n}\left[\left(\widetilde{K}-\frac{I}{2\alpha}\right)\Sigma_{n-1}+\Sigma_{n-1}\left(\widetilde{K}-\frac{I}{2\alpha}\right)-D_{n}\right]+O(n^{-2}).$$
- ($\beta<1$) If $\gamma_n=\alpha/n^{\beta}$ we get
$$\Sigma_{n}=\Sigma_{n-1}-\gamma_{n}\left[\widetilde{K}\Sigma_{n-1}+\Sigma_{n-1}\widetilde{K}-D_{n}\right]+o(\gamma_{n}).$$

Recall that $\zeta=\mathds{1}_{\{\beta=1\}}/(2\alpha)$ and define $\widetilde{K}_{\zeta}=\widetilde{K}-\zeta I$, so that in both cases the recursion equation becomes

$$\Sigma_{n}=\Sigma_{n-1}-\gamma_{n}\left[\widetilde{K}_{\zeta}\Sigma_{n-1}+\Sigma_{n-1}\widetilde{K}_{\zeta}^{\top}-D_{n}\right]+o(\gamma_{n}).$$

We can vectorize this equation.
The vectorization of an $m\times n$ matrix $A=(a_{i,j})$, denoted $\operatorname{vec}(A)$, is the $mn\times1$ column vector obtained by stacking the columns of the matrix $A$ on top of one another: $\operatorname{vec}(A)=[a_{1,1},\ldots,a_{m,1},a_{1,2},\ldots,a_{m,2},\ldots,a_{1,n},\ldots,a_{m,n}]^{\top}$. Applying this operator to our stochastic approximation scheme gives

$$\operatorname{vec}(\Sigma_{n})=\operatorname{vec}(\Sigma_{n-1})-\gamma_{n}\left[\operatorname{vec}\left({\widetilde{K}}_{\zeta}\Sigma_{n-1}+\Sigma_{n-1}{\widetilde{K}}_{\zeta}^{\top}\right)-\operatorname{vec}(D_{n})\right]+o(\gamma_{n}).$$

Denoting by $\otimes$ the Kronecker product, we have the following property:

$$\operatorname{vec}\left(\widetilde{K}_{\zeta}\Sigma_{n-1}+\Sigma_{n-1}\widetilde{K}_{\zeta}^{\top}\right)=\left(I_{d}\otimes\widetilde{K}_{\zeta}+\widetilde{K}_{\zeta}^{\top}\otimes I_{d}\right)\operatorname{vec}(\Sigma_{n-1}).$$

Define $D$ as the almost sure limit of $D_n$, *i.e.*,

$$D=\operatorname*{lim}_{n\to\infty}D_{n}={\sqrt{H}}C\Gamma C^{\top}{\sqrt{H}}.$$

Introduce $v_n=\operatorname{vec}(\Sigma_n)$ and $Q=I_d\otimes\widetilde{K}_{\zeta}+\widetilde{K}_{\zeta}\otimes I_d$. We have almost surely

$$v_{n}=v_{n-1}-\gamma_{n}\left(Qv_{n-1}-\operatorname{vec}(D)\right)+\gamma_{n}\operatorname{vec}(D_{n}-D)+o(\gamma_{n})=v_{n-1}-\gamma_{n}\left(Qv_{n-1}-\operatorname{vec}(D)\right)+\varepsilon_{n}\gamma_{n},$$

where $\varepsilon_n\to0$ almost surely. This is a stochastic approximation scheme with the affine function $h(v)=Qv-\operatorname{vec}(D)$ for $v\in\mathbb{R}^{d^2}$. Let $v^{\star}$ be the solution of $h(v)=0$, which is well defined since $Q=I_d\otimes\widetilde{K}_{\zeta}+\widetilde{K}_{\zeta}^{\top}\otimes I_d$ is invertible. Indeed, the eigenvalues of $Q$ are $\mu_i+\mu_j$, $1\leq i,j\leq d$, where the $\mu_i$, $i=1,\ldots,d$, are the eigenvalues of $\widetilde{K}_{\zeta}$. Equivalently, the eigenvalues of $Q$ are of the form $(\lambda_i-\zeta)+(\lambda_j-\zeta)$, where the $\lambda_i$, $i=1,\ldots,d$, are the eigenvalues of $\widetilde{K}$. Because $\lambda_m>\zeta$, we have that $Q\succ0$.
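The Kronecker identity and the resulting characterization of $v^{\star}$ can be checked numerically. In this sketch, the matrices are hypothetical stand-ins (a random symmetric positive definite matrix for $\widetilde{K}_{\zeta}$, for which $\widetilde{K}_{\zeta}\otimes I_d$ and $\widetilde{K}_{\zeta}^{\top}\otimes I_d$ coincide, and a random symmetric matrix for $D$); $\operatorname{unvec}(v^{\star})$ is verified to solve the associated Lyapunov-type equation:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
B = rng.standard_normal((d, d))
Kz = B @ B.T + d * np.eye(d)                 # SPD stand-in for K_zeta
G = rng.standard_normal((d, d))
D = G @ G.T                                  # symmetric stand-in for D
S = rng.standard_normal((d, d))              # arbitrary test matrix
vec = lambda M: M.reshape(-1, order="F")     # stack columns (column-major)

# vec(Kz S + S Kz^T) = (I (x) Kz + Kz (x) I) vec(S)
Q = np.kron(np.eye(d), Kz) + np.kron(Kz, np.eye(d))
assert np.allclose(vec(Kz @ S + S @ Kz.T), Q @ vec(S))

# v* = Q^{-1} vec(D), i.e. unvec(v*) solves Kz Sigma + Sigma Kz = D
Sigma = np.linalg.solve(Q, vec(D)).reshape(d, d, order="F")
assert np.allclose(Kz @ Sigma + Sigma @ Kz, D)
```

This vectorized solve is exactly the mechanism by which the fixed point of the stochastic approximation scheme identifies the limiting covariance below.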
As a consequence,

$$(v_{n}-v^{\star})=(v_{n-1}-v^{\star})-\gamma_{n}\left(h(v_{n-1})-h(v^{\star})\right)+\varepsilon_{n}\gamma_{n}=(v_{n-1}-v^{\star})-\gamma_{n}Q\left(v_{n-1}-v^{\star}\right)+\varepsilon_{n}\gamma_{n}=B_{n}\left(v_{n-1}-v^{\star}\right)+\varepsilon_{n}\gamma_{n},$$

with $B_n=(I_{d^2}-\gamma_nQ)$. By induction, we obtain

$$\left(v_{n}-v^{\star}\right)=\left(B_{n}\ldots B_{1}\right)\left(v_{0}-v^{\star}\right)+\sum_{k=1}^{n}\gamma_{k}\left(B_{n}\ldots B_{k+1}\right)\varepsilon_{k}.$$

Define $\lambda_Q=\lambda_{\min}(Q)>0$ and remark that

$$\|B_{n}\ldots B_{k+1}\|\leq\prod_{j=k+1}^{n}\|B_{j}\|=\prod_{j=k+1}^{n}(1-\gamma_{j}\lambda_{Q}).$$

It follows that

$$\|v_{n}-v^{\star}\|_{2}\leq\|B_{n}\ldots B_{1}\|\|v_{0}-v^{\star}\|_{2}+\sum_{k=1}^{n}\gamma_{k}\|B_{n}\ldots B_{k+1}\|\|\varepsilon_{k}\|_{2}\leq\prod_{j=1}^{n}(1-\gamma_{j}\lambda_{Q})\|v_{0}-v^{\star}\|_{2}+\sum_{k=1}^{n}\gamma_{k}\prod_{j=k+1}^{n}(1-\gamma_{j}\lambda_{Q})\|\varepsilon_{k}\|_{2}.$$

Applying Lemma 3, the second term on the right-hand side goes to 0; the first term goes to 0 under the effect of the product, by definition of $(\gamma_k)_{k\geq1}$. We therefore conclude that $v_n\to v^{\star}$ almost surely. From easy manipulations involving $\operatorname{vec}(\cdot)$ and $\otimes$, this is equivalent to $\Sigma_n\to\Sigma$, where $\Sigma$ is the solution of the Lyapunov equation

$$(\widetilde{K}-\zeta I)\Sigma+\Sigma(\widetilde{K}-\zeta I)=D.$$

Now we turn our attention to (8).
We need to show that, almost surely,

$$\gamma_{n}^{-1}\sum_{k=1}^{n}\gamma_{k}^{2}\,\mathbb{E}[\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2}^{2}\mathds{1}_{\{\gamma_{k}\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2}>\varepsilon\gamma_{n}^{1/2}\}}\mid\mathcal{F}_{k-1}]\mathds{1}_{\mathcal{A}_{k-1}}\to0.$$

We have

$$\mathbb{E}[\gamma_{n}^{-1}\gamma_{k}^{2}\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2}^{2}\mathds{1}_{\{\gamma_{k}\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2}>\varepsilon\gamma_{n}^{1/2}\}}\mid\mathcal{F}_{k-1}]\leq\varepsilon^{-\delta}\,\mathbb{E}[(\gamma_{n}^{-1/2}\gamma_{k}\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2})^{2+\delta}\mid\mathcal{F}_{k-1}]\leq\varepsilon^{-\delta}(\gamma_{n}^{-1/2}\gamma_{k}\|\Pi_{n,k}\sqrt{H}C_{k-1}\|)^{2+\delta}\,\mathbb{E}[\|w_{k}\|_{2}^{2+\delta}\mid\mathcal{F}_{k-1}].$$

Let $U(\omega)=\sup_{k\geq1}\mathbb{E}[\|w_{k}\|_{2}^{2+\delta}\mid\mathcal{F}_{k-1}]\mathds{1}_{\mathcal{A}_{k-1}}$, which is almost surely finite by Assumption 5. We get

$$\mathbb{E}[\gamma_{n}^{-1}\gamma_{k}^{2}\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2}^{2}\mathds{1}_{\{\gamma_{k}\|\Pi_{n,k}\sqrt{H}C_{k-1}w_{k}\|_{2}>\varepsilon\gamma_{n}^{1/2}\}}\mid\mathcal{F}_{k-1}]\mathds{1}_{\mathcal{A}_{k-1}}\leq\varepsilon^{-\delta}\left(2\|\sqrt{H}\|\|C\|\right)^{2+\delta}U(\omega)(\gamma_{n}^{-1/2}\gamma_{k}\rho(\Pi_{n,k}))^{2+\delta}.$$

Hence, by showing that

$$\sum_{k=1}^{n}(\gamma_{n}^{-1/2}\gamma_{k}\rho(\Pi_{n,k}))^{2+\delta}\to0,$$

we will obtain (8). The previous convergence can be deduced from Lemma 3 with $p=1+\delta/2$, $m=2+\delta$, $\varepsilon_k=\gamma_k^{\delta/2}$, checking that $(2+\delta)\alpha\lambda_m>1+\delta/2$.

Step 2. Proof of Equation (5). A preliminary step to the derivation of Equation (5) is to obtain that $\widetilde{\Delta}_k\to0$ almost surely.
For any $\theta$ and $\eta$ in $\mathbb{R}^d$, we have

$$\|\theta\|^{2}=\|\eta\|^{2}+2\eta^{\top}(\theta-\eta)+\|\theta-\eta\|^{2},$$

implying that for all $k\geq0$,

$$\mathbb{E}[\|\Theta_{k+1}\|^{2}\mid\mathcal{F}_{k}]=\|\Theta_{k}\|^{2}-2\gamma_{k+1}\Theta_{k}^{\top}\widetilde{K}\Theta_{k}+\gamma_{k+1}^{2}\,\mathbb{E}[\|\widetilde{K}\Theta_{k}-C_{k}w_{k+1}\mathds{1}_{\mathcal{A}_{k}}\|^{2}\mid\mathcal{F}_{k}].$$

Since $(w_k)$ is a martingale increment sequence and because, on $\mathcal{A}_k$, $\rho(C_k)\leq2\rho(C)$, we get

$$\mathbb{E}[\|\widetilde{K}\Theta_{k}-C_{k}w_{k+1}\mathds{1}_{\mathcal{A}_{k}}\|^{2}\mid\mathcal{F}_{k}]=\mathbb{E}[\|\widetilde{K}\Theta_{k}\|^{2}\mid\mathcal{F}_{k}]+\mathbb{E}[\|C_{k}w_{k+1}\mathds{1}_{\mathcal{A}_{k}}\|^{2}\mid\mathcal{F}_{k}]\leq\lambda_{M}^{2}\|\Theta_{k}\|^{2}+4\rho(C)^{2}\,\mathbb{E}[\|w_{k+1}\|^{2}\mid\mathcal{F}_{k}]\mathds{1}_{\mathcal{A}_{k}}.$$

Injecting this bound into the previous equality yields

$$\mathbb{E}[\|\Theta_{k+1}\|^{2}\mid\mathcal{F}_{k}]\leq\|\Theta_{k}\|^{2}(1+\gamma_{k+1}^{2}\lambda_{M}^{2})-2\gamma_{k+1}\Theta_{k}^{\top}\widetilde{K}\Theta_{k}+4\rho(C)^{2}\gamma_{k+1}^{2}\,\mathbb{E}[\|w_{k+1}\|^{2}\mid\mathcal{F}_{k}]\mathds{1}_{\mathcal{A}_{k}}.$$

Since, using (3),

$$\sum_{k\geq0}\gamma_{k+1}^{2}\,\mathbb{E}[\|w_{k+1}\|^{2}\mid\mathcal{F}_{k}]\mathds{1}_{\mathcal{A}_{k}}\leq\left(\sup_{k\geq0}\mathbb{E}[\|w_{k+1}\|^{2}\mid\mathcal{F}_{k}]\mathds{1}_{\mathcal{A}_{k}}\right)\left(\sum_{k\geq0}\gamma_{k+1}^{2}\right)<\infty,$$

we are in a position to apply the Robbins-Siegmund Theorem 6, and we obtain the almost sure convergence of $\sum_{k}\gamma_{k+1}\Theta_{k}^{\top}\widetilde{K}\Theta_{k}$ as well as $\|\Theta_k\|_2^2\to V_\infty$. Because $\widetilde{K}$ is positive definite, this gives that, with probability 1, $\sum_{k\geq0}\gamma_{k+1}\|\Theta_k\|^2<+\infty$, from which we deduce $\liminf_k\|\Theta_k\|^2=0$. Therefore, one can extract a subsequence of $(\Theta_k)$ along which $\|\Theta_k\|^2\to0$. Combined with $\|\Theta_k\|^2\to V_\infty$, this yields $V_\infty=0$, and we conclude that $\widetilde{\Delta}_k=H^{-1/2}\Theta_k\to0$.

Define the difference

$$E_{k}=\Delta_{k}-\widetilde{\Delta}_{k}.$$

Since $\theta\mapsto\nabla^2F(\theta)$ is continuous at $\theta^{\star}$, we can apply a coordinate-wise mean value theorem.
Indeed, for any $\theta\in\mathbb{R}^{d}$, we have $\nabla F(\theta)=(\partial_{1}F(\theta),\ldots,\partial_{d}F(\theta))$ where for all $j=1,\ldots,d$, the partial derivative functions $\partial_{j}F:\mathbb{R}^{d}\to\mathbb{R}$ are Lipschitz continuous. Denote by $\nabla(\partial_{j}F):\mathbb{R}^{d}\to\mathbb{R}^{d}$ the gradient of the partial derivative $\partial_{j}F$, *i.e.*, $\nabla(\partial_{j}F)(\theta)=(\partial^{2}_{1,j}F(\theta),\ldots,\partial^{2}_{d,j}F(\theta))$. For any $\theta,\eta\in B(\theta^{\star},\varepsilon)$, there exists $\xi_{j}\in\mathbb{R}^{d}$ such that
$$\partial_{j}F(\theta)-\partial_{j}F(\eta)=\nabla(\partial_{j}F)(\xi_{j})(\theta-\eta).$$
We construct a Hessian matrix by rows, $H(\xi)=H(\xi_{1},\ldots,\xi_{d})$, where the $j$-th row is equal to $\nabla(\partial_{j}F)(\xi_{j})=(\partial^{2}_{1,j}F(\xi_{j}),\ldots,\partial^{2}_{d,j}F(\xi_{j}))$,
$$H(\xi)=\begin{bmatrix}\partial_{1,1}^{2}F(\xi_{1})&\dots&\partial_{1,d}^{2}F(\xi_{1})\\ \vdots&\ddots&\vdots\\ \partial_{d,1}^{2}F(\xi_{d})&\dots&\partial_{d,d}^{2}F(\xi_{d})\end{bmatrix},$$
and we can write
$$\nabla F(\theta)-\nabla F(\eta)=H(\xi)(\theta-\eta).$$
There exist $\xi_{k}=(\xi_{k}^{(1)},\ldots,\xi_{k}^{(d)})$ with $\xi_{k}^{(j)}\in[\theta^{\star}+E_{k},\theta_{k}]$ and $\xi'_{k}=(\xi_{k}^{\prime(1)},\ldots,\xi_{k}^{\prime(d)})$ with $\xi_{k}^{\prime(j)}\in[\theta^{\star}+E_{k},\theta^{\star}]$ such that
$$\begin{array}{c}\nabla F(\theta^{\star}+E_{k})-\nabla F(\theta_{k})=-H(\xi_{k})\widetilde{\Delta}_{k},\qquad(12)\\ \nabla F(\theta^{\star}+E_{k})=H(\xi_{k}^{\prime})E_{k}.\qquad(13)\end{array}$$
Let $\eta>0$ be such that $2\alpha\lambda_{m}(1-3\eta)>1$; this choice will become clear at the end of the reasoning. On the one hand, we have $C_{k}\to C$. On the other hand, using Lemma 4, the spectrum of $C_{k}H$ is real and positive. Hence, we have the convergence of the eigenvalues of $C_{k}H$ towards the eigenvalues of $K=CH$. This follows from the definition of eigenvalues as roots of the characteristic polynomial and the fact that the roots of any polynomial $P\in\mathbb{C}[X]$ are continuous functions of its coefficients (Zedek, 1965).
Consequently, there exists $n_{1}(\omega)$ such that for all $k\geq n_{1}(\omega)$,
$$(1-\eta)\lambda_{m}\leq\lambda_{\min}(C_{k}H)\leq\lambda_{\max}(C_{k}H)\leq(1+\eta)\lambda_{M}.\tag{14}$$
We can define $n_{2}(\omega)$ such that for all $k\geq n_{2}(\omega)$,
$$\mathcal{A}_{k}\ \text{is realized}.\tag{15}$$
Since $\|\sqrt{H^{-1}}H(\xi_{k}^{\prime})\sqrt{H^{-1}}-I_{d}\|\to0$ as $k\to\infty$, there are $n_{3}(\omega)$ and $n_{4}(\omega)$ such that for all $k\geq n_{3}(\omega)$,
$$\|\sqrt{H^{-1}}H(\xi_{k}^{\prime})\sqrt{H^{-1}}-I_{d}\|\leq\frac{\eta}{1+\eta}\frac{\lambda_{m}}{\lambda_{M}},\tag{16}$$
and for all $k\geq n_{4}(\omega)$,
$$\|\sqrt{H^{-1}}H(\xi_{k}^{\prime})\sqrt{H^{-1}}\|\leq1.\tag{17}$$
Since $\gamma_{k}\to0$, there is $n_{5}$ such that for all $k\geq n_{5}$,
$$\gamma_{k+1}\leq\frac{2\eta\lambda_{m}}{(1+\eta)^{2}\lambda_{M}^{2}}.\tag{18}$$
To use the previous local properties, define $n_{0}(\omega)=n_{1}(\omega)\vee n_{2}(\omega)\vee n_{3}(\omega)\vee n_{4}(\omega)\vee n_{5}$ and introduce the set $\mathcal{E}_{j}$, along with its complement $\mathcal{E}_{j}^{c}$, defined by
$${\mathcal{E}}_{j}=\{\omega\,:\,j\geq n_{0}(\omega)\}.$$
Let $\delta>0$ and take $j\geq1$ large enough such that $\mathbb{P}(\mathcal{E}_{j}^{c})\leq\delta$. Invoking the Markov inequality, we have for all $a>0$,
$$\begin{array}{l}\mathbb{P}(\gamma_{k}^{-1/2}\|E_{k}\|>a)=\mathbb{P}(\gamma_{k}^{-1/2}\|E_{k}\|>a,\,\mathcal{E}_{j})+\mathbb{P}(\gamma_{k}^{-1/2}\|E_{k}\|>a,\,\mathcal{E}_{j}^{c})\\ \leq\mathbb{P}(\gamma_{k}^{-1/2}\|E_{k}\|>a,\,\mathcal{E}_{j})+\delta\\ \leq\gamma_{k}^{-1/2}a^{-1}\mathbb{E}[\|E_{k}\|\mathds{1}_{\mathcal{E}_{j}}]+\delta.\end{array}$$
Because $\delta$ is arbitrary, we only need to show that for any value of $j\geq1$,
$$e_{k}:=\mathbb{E}[\|E_{k}\|\mathds{1}_{\mathcal{E}_{j}}]=o(\gamma_{k}^{1/2}).$$
To prove this fact, we shall recognize a stochastic algorithm for the sequence $e_{k}$. Let $k\geq j$ and assume further that $\mathcal{E}_{j}$ is realized.
We have, because of (15),
$$E_{k+1}=\Delta_{k}-\widetilde{\Delta}_{k}-\gamma_{k+1}C_{k}\nabla F(\theta_{k})+\gamma_{k+1}K\widetilde{\Delta}_{k}.$$
Introducing $\widetilde{E}_{k}=\sqrt{H}E_{k}$, we find
$$\widetilde{E}_{k+1}=\widetilde{E}_{k}-\gamma_{k+1}\sqrt{H}C_{k}\nabla F(\theta_{k})+\gamma_{k+1}\sqrt{H}K\widetilde{\Delta}_{k},$$
and using (12), it comes that
$$\begin{array}{l}\widetilde{E}_{k+1}=\widetilde{E}_{k}-\gamma_{k+1}\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})-\gamma_{k+1}\sqrt{H}C_{k}H(\xi_{k})\widetilde{\Delta}_{k}+\gamma_{k+1}\sqrt{H}K\widetilde{\Delta}_{k}\\ =\widetilde{E}_{k}-\gamma_{k+1}\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})+\gamma_{k+1}\sqrt{H}(K-C_{k}H(\xi_{k}))\widetilde{\Delta}_{k}.\end{array}$$
Using the Minkowski inequality, we have
$$\|\widetilde{E}_{k+1}\|\leq\|\widetilde{E}_{k}-\gamma_{k+1}\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\|+\|\gamma_{k+1}\sqrt{H}(K-C_{k}H(\xi_{k}))\widetilde{\Delta}_{k}\|.$$
We shall now focus on the first term. Still on the set $\mathcal{E}_{j}$, we have
$$\begin{array}{l}\|\widetilde{E}_{k}-\gamma_{k+1}\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\|^{2}\\ \quad=\|\widetilde{E}_{k}\|^{2}-2\gamma_{k+1}\langle\widetilde{E}_{k},\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\rangle+\gamma_{k+1}^{2}\|\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\|^{2}.\end{array}\tag{19}$$
We have on the one hand, using (13),
$$\begin{array}{l}\langle\widetilde{E}_{k},\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\rangle=\langle\widetilde{E}_{k},\sqrt{H}C_{k}H(\xi^{\prime}_{k})E_{k}\rangle\\ =\langle\widetilde{E}_{k},\sqrt{H}C_{k}HE_{k}\rangle+\langle\widetilde{E}_{k},\sqrt{H}C_{k}(H(\xi^{\prime}_{k})-H)E_{k}\rangle.\end{array}$$
Due to (14), the first term satisfies
$$\begin{array}{c}{{\langle\widetilde{E}_{k},\sqrt{H}C_{k}HE_{k}\rangle=\langle\widetilde{E}_{k},\sqrt{H}C_{k}\sqrt{H}\widetilde{E}_{k}\rangle}}\\ {{\geq\lambda_{\min}(C_{k}H)\|\widetilde{E}_{k}\|^{2}}}\\ {{\geq(1-\eta)\lambda_{m}\|\widetilde{E}_{k}\|^{2}.}}\end{array}$$
The second term satisfies
$$\begin{array}{l}\langle\widetilde{E}_{k},\sqrt{H}C_{k}(H(\xi^{\prime}_{k})-H)E_{k}\rangle=\langle\widetilde{E}_{k},\sqrt{H}C_{k}\sqrt{H}(\sqrt{H^{-1}}H(\xi^{\prime}_{k})\sqrt{H^{-1}}-I_{d})\widetilde{E}_{k}\rangle\\ \geq-\left|\langle\widetilde{E}_{k},\sqrt{H}C_{k}\sqrt{H}(\sqrt{H^{-1}}H(\xi^{\prime}_{k})\sqrt{H^{-1}}-I_{d})\widetilde{E}_{k}\rangle\right|.\end{array}$$
Using the Cauchy-Schwarz inequality, the submultiplicativity of the norm, (14) and (16), we have
$$\begin{array}{l}{{\left|\langle\widetilde{E}_{k},\sqrt{H}C_{k}\sqrt{H}(\sqrt{H^{-1}}H(\xi_{k}^{\prime})\sqrt{H^{-1}}-I_{d})\widetilde{E}_{k}\rangle\right|}}\\ {{\leq\|\sqrt{H}C_{k}\sqrt{H}\|\|\sqrt{H^{-1}}H(\xi_{k}^{\prime})\sqrt{H^{-1}}-I_{d}\|\|\widetilde{E}_{k}\|^{2}}}\\ {{\leq\eta\lambda_{m}\|\widetilde{E}_{k}\|^{2}.}}\end{array}$$
Finally, it follows that
$$\langle\widetilde{E}_{k},\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\rangle\geq(1-2\eta)\lambda_{m}\|\widetilde{E}_{k}\|^{2}.\tag{20}$$
On the other hand, using (13), (14) and (17),
$$\begin{array}{l}\|\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\|^{2}=\|\sqrt{H}C_{k}H(\xi^{\prime}_{k})E_{k}\|^{2}\\ =\|\sqrt{H}C_{k}\sqrt{H}(\sqrt{H^{-1}}H(\xi^{\prime}_{k})\sqrt{H^{-1}})\widetilde{E}_{k}\|^{2}\\ \leq\lambda_{\max}(C_{k}H)^{2}\|\sqrt{H^{-1}}H(\xi^{\prime}_{k})\sqrt{H^{-1}}\|^{2}\|\widetilde{E}_{k}\|^{2}\end{array}$$
$$\leq(1+\eta)^{2}\lambda_{M}^{2}\|\widetilde{E}_{k}\|^{2}.\tag{21}$$
Putting together (19), (20), (21) and using (18) gives that, on $\mathcal{E}_{j}$,
$$\begin{array}{l}\|\widetilde{E}_{k}-\gamma_{k+1}\sqrt{H}C_{k}\nabla F(\theta^{\star}+E_{k})\|^{2}\leq\|\widetilde{E}_{k}\|^{2}(1-2\gamma_{k+1}(1-2\eta)\lambda_{m}+\gamma_{k+1}^{2}(1+\eta)^{2}\lambda_{M}^{2})\\ \leq\|\widetilde{E}_{k}\|^{2}(1-2\gamma_{k+1}(1-3\eta)\lambda_{m}).\end{array}$$
By the Minkowski inequality and the fact that $(1-x)^{1/2}\leq1-x/2$, on $\mathcal{E}_{j}$, it holds
$$\begin{array}{l}\|\widetilde{E}_{k+1}\|\leq\|\widetilde{E}_{k}\|(1-2\gamma_{k+1}(1-3\eta)\lambda_{m})^{1/2}+\gamma_{k+1}\|\sqrt{H}(K-C_{k}H(\xi_{k}))\widetilde{\Delta}_{k}\|\\ \leq\|\widetilde{E}_{k}\|(1-\gamma_{k+1}(1-3\eta)\lambda_{m})+\gamma_{k+1}\|\sqrt{H}\|\|(K-C_{k}H(\xi_{k}))\widetilde{\Delta}_{k}\|.\end{array}$$
Hence, we have shown that for any $k\geq j$,
$$\|\widetilde{E}_{k+1}\|\mathds{1}_{\mathcal{E}_{j}}\leq\|\widetilde{E}_{k}\|\mathds{1}_{\mathcal{E}_{j}}(1-\gamma_{k+1}(1-3\eta)\lambda_{m})+\gamma_{k+1}\|\sqrt{H}\|\|(K-C_{k}H(\xi_{k}))\mathds{1}_{\mathcal{E}_{j}}\widetilde{\Delta}_{k}\|.$$
It follows that, for any $k\geq j$,
$$e_{k+1}\leq e_{k}(1-\gamma_{k+1}(1-3\eta)\lambda_{m})+\gamma_{k+1}\|\sqrt{H}\|\,\mathbb{E}[\|U_{k}\widetilde{\Delta}_{k}\|],$$
with $U_{k}=(K-C_{k}H(\xi_{k}))\mathds{1}_{\mathcal{E}_{j}}$. Because, with probability 1, $\|U_{k}\|$ is bounded, we can apply the Lebesgue dominated convergence theorem to obtain that $\varepsilon_{k}=\mathbb{E}[\|U_{k}\|^{2}]\to0$. From the Cauchy-Schwarz inequality, we get
$$\mathbb{E}[\|U_{k}\widetilde{\Delta}_{k}\|]\leq\sqrt{\varepsilon_{k}}\sqrt{\mathbb{E}[\|\widetilde{\Delta}_{k}\|_{2}^{2}]}.$$
On the other hand, we have already shown in (9) that $\rho(\Sigma_{k})=\|\Sigma_{k}\|\leq U_{D}(U_{a}+U_{b})$. Since $\widetilde{\Delta}_{k}=\sqrt{H^{-1}}\Theta_{k}=\sqrt{H^{-1}}\sqrt{\gamma_{k}}(Y_{k}+M_{k})$, we have $\mathbb{E}[\|\widetilde{\Delta}_{k}\|_{2}^{2}]\leq2(\gamma_{k}/\lambda_{m})(\|Y_{k}\|_{2}^{2}+\mathbb{E}[\|M_{k}\|_{2}^{2}])$, where the last term is the leading one and satisfies $\mathbb{E}[\|M_{k}\|_{2}^{2}]=\mathbb{E}[\operatorname{Tr}(\Sigma_{k})]\leq d\,\mathbb{E}[\rho(\Sigma_{k})]$. Therefore, we have
$$\mathbb{E}[\|{\widetilde{\Delta}}_{k}\|_{2}^{2}]\leq\gamma_{k}A$$
for some $A>0$. Consequently, for all $k\geq j$,
$$e_{k+1}\leq e_{k}(1-\gamma_{k+1}(1-3\eta)\lambda_{m})+\gamma_{k+1}^{3/2}A^{\prime}\|\sqrt{H}\|\varepsilon_{k}^{1/2}.$$
The condition $2\alpha\lambda_{m}(1-3\eta)>1$ ensures that we can apply Lemma 3 (with $m\lambda>p$), $m=1$, $p=1/2$, $\lambda=\alpha(1-3\eta)\lambda_{m}$. We finally get
$$\limsup_{k}\,(e_{k}/\gamma_{k}^{1/2})=0.$$
As a consequence, $e_{k}=o(\sqrt{\gamma_{k}})$, which concludes the proof. Since $\gamma_{k}^{-1/2}\sqrt{H}\widetilde{\Delta}_{k}\to\mathcal{N}(0,\Sigma)$, we have $\gamma_{k}^{-1/2}\widetilde{\Delta}_{k}\to\mathcal{N}(0,\widetilde{\Sigma})$ where $\widetilde{\Sigma}=\sqrt{H^{-1}}\Sigma\sqrt{H^{-1}}$. Recall that $\Sigma$ satisfies the Lyapunov equation
$$(\sqrt{H}C\sqrt{H}-\zeta I_{d})\Sigma+\Sigma(\sqrt{H}C\sqrt{H}-\zeta I_{d})=\sqrt{H}C\Gamma C\sqrt{H}.$$
Multiplying on the left and right sides by $\sqrt{H^{-1}}$, we get
$$C\sqrt{H}\Sigma\sqrt{H^{-1}}-\zeta\sqrt{H^{-1}}\Sigma\sqrt{H^{-1}}+\sqrt{H^{-1}}\Sigma\sqrt{H}C-\zeta\sqrt{H^{-1}}\Sigma\sqrt{H^{-1}}=C\Gamma C,$$
where we recognize the following Lyapunov equation
$$(CH-\zeta I_{d})\widetilde{\Sigma}+\widetilde{\Sigma}(CH-\zeta I_{d})^{\top}=C\Gamma C.$$

## A.2 Proof Of The Almost Sure Convergence (Theorem 3)

The idea behind the proof of the almost sure convergence is to apply the Robbins-Siegmund Theorem (Theorem 6, which can be found in Appendix C) in combination with the following key deterministic result.

Lemma 1 (Deterministic result). *Let* $F:\mathbb{R}^{d}\to\mathbb{R}$ *be an* $L$*-smooth function and* $(\theta_{t})$ *a random sequence obtained by the SGD update rule* $\theta_{t+1}=\theta_{t}-\gamma_{t+1}C_{t}g_{t}$*, where* $(\gamma_{t})_{t\geq1}$ *is a positive sequence of learning rates and the matrices* $0\preceq C_{t-1}\preceq\nu_{t}I_{d}$ *are such that* $\sum_{t}\gamma_{t}\nu_{t}=\infty$*. Let* $\omega\in\Omega$ *be such that the following limits exist:*
$$(i)\ \sum_{t\geq0}\gamma_{t+1}\nu_{t+1}\|\nabla F(\theta_{t}(\omega))\|_{2}^{2}<\infty\quad(ii)\ \sum_{t\geq1}\gamma_{t}C_{t-1}(g_{t-1}(\omega)-\nabla F(\theta_{t-1}(\omega)))<\infty,$$
*then* $\nabla F(\theta_{t}(\omega))\to0$ *as* $t\to\infty$.

Proof. The proof (and in particular the reasoning by contradiction) is inspired by the proof of Proposition 1 in Bertsekas & Tsitsiklis (2000). For ease of notation, we omit $\omega$ in the proof. Note that condition (i), along with $\sum_{t}\gamma_{t}\nu_{t}=\infty$, implies that $\liminf_{t}\|\nabla F(\theta_{t})\|=0$. Now, by contradiction, let $\varepsilon>0$ and assume that
$$\limsup_{t}\|\nabla F(\theta_{t})\|>\varepsilon.$$
Then there are infinitely many $t$ such that $\|\nabla F(\theta_{t})\|<\varepsilon/2$ and also infinitely many $t$ such that $\|\nabla F(\theta_{t})\|>\varepsilon$. It follows that there are infinitely many *crossings* between the sets $\{t\in\mathbb{N}:\|\nabla F(\theta_{t})\|<\varepsilon/2\}$ and $\{t\in\mathbb{N}:\|\nabla F(\theta_{t})\|>\varepsilon\}$. A *crossing* is a collection of indexes $I_{k}=\{L_{k},L_{k}+1,\ldots,U_{k}-1\}$ with $L_{k}\leq U_{k}$ ($I_{k}=\emptyset$ when $L_{k}=U_{k}$) such that for all $t\in I_{k}$,
$$\|\nabla F(\theta_{L_{k}-1})\|<\varepsilon/2\leq\|\nabla F(\theta_{t})\|\leq\varepsilon<\|\nabla F(\theta_{U_{k}})\|.$$
Define the partial sums $R_{k}=\sum_{t=L_{k}}^{U_{k}}\gamma_{t}C_{t-1}(g_{t-1}-\nabla F(\theta_{t-1}))$ and note that condition (ii), through the Cauchy criterion, implies that $R_{k}\to0$ as $k\to\infty$. For all $k\geq1$,
$$\begin{array}{l l}{{\varepsilon/2\leq\|\nabla F(\theta_{U_{k}})\|_{2}-\|\nabla F(\theta_{L_{k}-1})\|_{2}}}\\ {{}}&{{\leq\|\nabla F(\theta_{U_{k}})-\nabla F(\theta_{L_{k}-1})\|_{2}}}\\ {{}}&{{\leq L\|\theta_{U_{k}}-\theta_{L_{k}-1}\|_{2},}}\end{array}$$
where we use that $\nabla F$ is $L$-Lipschitz. Then, using the update rule $\theta_{t}-\theta_{t-1}=-\gamma_{t}C_{t-1}g_{t-1}$ and summing over $t$, we have
$$\begin{array}{l}\varepsilon/2\leq L\|\sum_{t=L_{k}}^{U_{k}}\theta_{t}-\theta_{t-1}\|_{2}=L\|\sum_{t=L_{k}}^{U_{k}}\gamma_{t}C_{t-1}g_{t-1}\|_{2}\\ \leq L\|\sum_{t=L_{k}}^{U_{k}}\gamma_{t}C_{t-1}\nabla F(\theta_{t-1})\|_{2}+L\|\sum_{t=L_{k}}^{U_{k}}\gamma_{t}C_{t-1}(g_{t-1}-\nabla F(\theta_{t-1}))\|_{2}\\ \leq L\sum_{t=L_{k}}^{U_{k}}\gamma_{t}\nu_{t}\|\nabla F(\theta_{t-1})\|_{2}+L\|R_{k}\|_{2}.\end{array}$$
Since in the previous sum $\|\nabla F(\theta_{t-1})\|_{2}>\varepsilon/2$, we get
$$(\varepsilon/2)^{2}\leq L\sum_{t=L_{k}}^{U_{k}}\gamma_{t}\nu_{t}\|\nabla F(\theta_{t-1})\|_{2}^{2}+(\varepsilon/2)L\|R_{k}\|_{2}.$$
But since $\sum_{t\geq0}\gamma_{t+1}\nu_{t+1}\|\nabla F(\theta_{t})\|^{2}$ is finite and $\lim_{k}R_{k}=0$, the previous upper bound goes to 0 and yields a contradiction.

It remains to show that points (i) and (ii) in Lemma 1 hold with probability one. Since $\theta\mapsto F(\theta)$ is $L$-smooth, we have the quadratic bound (see Nesterov (2013))
$$\forall\theta,\eta\in\mathbb{R}^{d}\quad F(\eta)\leq F(\theta)+\langle\nabla F(\theta),\eta-\theta\rangle+\frac{L}{2}\|\eta-\theta\|_{2}^{2}.$$
Using the update rule $\theta_{k+1}=\theta_{k}-\gamma_{k+1}C_{k}g(\theta_{k},\xi_{k+1})$, we get
$$\begin{array}{l}F(\theta_{k+1})\leq F(\theta_{k})+\langle\nabla F(\theta_{k}),\theta_{k+1}-\theta_{k}\rangle+\frac{L}{2}\|\theta_{k+1}-\theta_{k}\|_{2}^{2}\\ \quad=F(\theta_{k})-\gamma_{k+1}\langle\nabla F(\theta_{k}),C_{k}g(\theta_{k},\xi_{k+1})\rangle+\frac{L}{2}\gamma_{k+1}^{2}\|C_{k}g(\theta_{k},\xi_{k+1})\|_{2}^{2}.\end{array}$$
The last term can be upper bounded using the matrix norm and Assumption 9 as $\|C_{k}g(\theta_{k},\xi_{k+1})\|_{2}^{2}\leq\|C_{k}\|^{2}\|g(\theta_{k},\xi_{k+1})\|_{2}^{2}\leq\nu_{k+1}^{2}\|g(\theta_{k},\xi_{k+1})\|_{2}^{2}$, and we have the inequality
$$F(\theta_{k+1})\leq F(\theta_{k})-\gamma_{k+1}\langle\nabla F(\theta_{k}),C_{k}g(\theta_{k},\xi_{k+1})\rangle+\frac{L}{2}(\gamma_{k+1}\nu_{k+1})^{2}\|g(\theta_{k},\xi_{k+1})\|_{2}^{2}.$$
Introduce $u_{k}=\gamma_{k}\nu_{k}$ and $v_{k}=\gamma_{k}\mu_{k}$; we have $\sum_{k\geq1}v_{k}=+\infty$ and $\sum_{k\geq1}u_{k}^{2}<+\infty$ a.s. in virtue of Assumption 9. The random variables $F(\theta_{k})$, $C_{k}$ are $\mathcal{F}_{k}$-measurable and the gradient estimate is unbiased with respect to $\mathcal{F}_{k}$. Taking the conditional expectation, denoted by $\mathbb{E}_{k}$, leads to
$$\begin{array}{l}\mathbb{E}_{k}\left[F(\theta_{k+1})\right]-F(\theta_{k})\leq-\gamma_{k+1}\langle\nabla F(\theta_{k}),\mathbb{E}_{k}\left[C_{k}g(\theta_{k},\xi_{k+1})\right]\rangle+\frac{L}{2}u_{k+1}^{2}\mathbb{E}_{k}\left[\|g(\theta_{k},\xi_{k+1})\|_{2}^{2}\right]\\ \quad=-\gamma_{k+1}\nabla F(\theta_{k})^{\top}C_{k}\nabla F(\theta_{k})+\frac{L}{2}u_{k+1}^{2}\mathbb{E}_{k}\left[\|g(\theta_{k},\xi_{k+1})\|_{2}^{2}\right].\end{array}$$
On the one hand, for the first term, using Assumption 9,
$$\nabla F(\theta_{k})^{\top}C_{k}\nabla F(\theta_{k})\geq\lambda_{\min}(C_{k})\|\nabla F(\theta_{k})\|_{2}^{2}\geq\mu_{k+1}\|\nabla F(\theta_{k})\|_{2}^{2}.$$
On the other hand, using Assumption 8, there exist $0\leq\mathcal{L},\sigma^{2}<\infty$ such that almost surely
$$\forall k\in\mathbb{N},\quad\mathbb{E}_{k}\left[\|g(\theta_{k},\xi_{k+1})\|_{2}^{2}\right]\leq2\mathcal{L}(F(\theta_{k})-F^{\star})+\sigma^{2}.$$
Injecting these bounds in the previous inequality and subtracting $F(\theta^{\star})$ on both sides gives
$$\mathbb{E}_{k}\left[F(\theta_{k+1})-F^{\star}\right]\leq(1+L\mathcal{L}u_{k+1}^{2})(F(\theta_{k})-F^{\star})-v_{k+1}\|\nabla F(\theta_{k})\|_{2}^{2}+(L/2)u_{k+1}^{2}\sigma^{2}.$$
Introduce $V_{k}=F(\theta_{k})-F^{\star}$, $W_{k}=v_{k+1}\|\nabla F(\theta_{k})\|_{2}^{2}$, $a_{k}=L\mathcal{L}u_{k+1}^{2}$ and $b_{k}=(L/2)u_{k+1}^{2}\sigma^{2}$. These four random sequences are non-negative and $\mathcal{F}_{k}$-measurable, with $\sum_{k}a_{k}<\infty$ and $\sum_{k}b_{k}<\infty$ almost surely.
Moreover, we have for all $k\in\mathbb{N}$,
$$\mathbb{E}\left[V_{k+1}|{\mathcal{F}}_{k}\right]\leq(1+a_{k})V_{k}-W_{k}+b_{k}.$$
We can apply the Robbins-Siegmund Theorem 6 to obtain
$$(a)\,\sum_{k\geq0}W_{k}<\infty\ a.s.\qquad(b)\,\,V_{k}\stackrel{{a.s.}}{{\longrightarrow}}V_{\infty},\ \mathbb{E}\left[V_{\infty}\right]<\infty.\qquad(c)\,\sup_{k\geq0}\mathbb{E}\left[V_{k}\right]<\infty.$$
Therefore we have the almost sure convergence of the series $\sum v_{k+1}\|\nabla F(\theta_{k})\|_{2}^{2}$, which, given that $\limsup_{k}\nu_{k}/\mu_{k}$ exists, implies that $\sum u_{k+1}\|\nabla F(\theta_{k})\|_{2}^{2}$ is finite. Hence we obtain (i) in Lemma 1. We now show that (ii) in Lemma 1 is also valid. The term of interest is a sum of martingale increments. The quadratic variation is given by
$$\begin{array}{l}\sum_{t\geq1}\gamma_{t}^{2}\mathbb{E}_{t}[\|C_{t-1}(g_{t-1}(\omega)-\nabla F(\theta_{t-1}(\omega)))\|_{2}^{2}]\leq\sum_{t\geq1}\gamma_{t}^{2}\nu_{t}^{2}\mathbb{E}_{t}[\|g_{t-1}(\omega)-\nabla F(\theta_{t-1}(\omega))\|_{2}^{2}]\\ \leq\sum_{t\geq1}\gamma_{t}^{2}\nu_{t}^{2}\mathbb{E}_{t}[\|g_{t-1}(\omega)\|_{2}^{2}]\\ \leq\sum_{t\geq1}\gamma_{t}^{2}\nu_{t}^{2}(2\mathcal{L}(F(\theta_{t-1})-F^{\star})+\sigma^{2}).\end{array}$$
Now we can use that $V_{k}=F(\theta_{k})-F^{\star}\stackrel{a.s.}{\longrightarrow}V_{\infty}$ (which was deduced from the Robbins-Siegmund Theorem) to obtain that the previous series converges. Invoking Theorem 2.17 in Hall & Heyde (1980), we obtain (ii) in Lemma 1. Furthermore, we can prove that $\theta_{k+1}-\theta_{k}\to0$ almost surely and in $L^{2}$. Indeed, we have
$$\mathbb{E}\left[\|\theta_{k+1}-\theta_{k}\|_{2}^{2}\right]=\mathbb{E}\left[\|\gamma_{k+1}C_{k}g(\theta_{k},\xi_{k+1})\|_{2}^{2}\right]\leq u_{k+1}^{2}\,\mathbb{E}\left[2\mathcal{L}\left(F(\theta_{k})-F^{\star}\right)+\sigma^{2}\right].$$
In virtue of point $(c)$ above, the last expectation is upper bounded by a constant, so that, in view of the convergence of $\sum u_{k+1}^{2}$, we have the convergence of the series $\sum\mathbb{E}\left[\|\theta_{k+1}-\theta_{k}\|_{2}^{2}\right]$. We then deduce that $\mathbb{E}\left[\|\theta_{k+1}-\theta_{k}\|_{2}^{2}\right]\to0$ and $\sum\|\theta_{k+1}-\theta_{k}\|_{2}^{2}<+\infty$ almost surely.
In particular, $\theta_{k+1}-\theta_{k}\to0$ in $L^{2}$ and almost surely. The last point follows from the fact that, for every $\delta>0$,
$$\operatorname*{lim}_{n\to\infty}\mathbb{P}\left(\operatorname*{sup}_{k\geq n}\|\theta_{k+1}-\theta_{k}\|\geq\delta\right)\leq\delta^{-2}\operatorname*{lim}_{n\to\infty}\sum_{k\geq n}\mathbb{E}\left[\|\theta_{k+1}-\theta_{k}\|_{2}^{2}\right]=0.$$

## A.3 Proof Of Corollary 2

First observe that since $F$ is coercive, the convergence of $(F(\theta_{k}))$ obtained by the Robbins-Siegmund theorem implies that the sequence of iterates $(\theta_{k})_{k\geq0}$ remains in a compact subset $K\subset\mathbb{R}^{d}$. Let $\varepsilon>0$. Since $\theta\mapsto d(\theta,S)$ is continuous, the set $D(\varepsilon)=\{\theta\in\mathbb{R}^{d}:d(\theta,S)\geq\varepsilon\}$ is closed and the set $K(\varepsilon)=K\cap D(\varepsilon)$ is compact. On this set, the map $\theta\mapsto\|\nabla F(\theta)\|_{2}$ is strictly positive and there exists $\eta_{\varepsilon}>0$ such that $\theta\in K(\varepsilon)\Rightarrow\|\nabla F(\theta)\|_{2}>\eta_{\varepsilon}$. Thus, $\mathbb{P}(\theta_{k}\in K(\varepsilon))\leq\mathbb{P}(\|\nabla F(\theta_{k})\|_{2}>\eta_{\varepsilon})$ and this last quantity goes to zero, which proves the convergence in probability $d(\theta_{k},S)\to0$. Actually, the almost sure convergence $\nabla F(\theta_{k})\to0$ implies the almost sure convergence of the distances. Define $A_{k}(\varepsilon)=\{\omega:\theta_{k}(\omega)\in K(\varepsilon)\}$ and $B_{k}(\varepsilon)=\{\omega:\|\nabla F(\theta_{k}(\omega))\|_{2}>\eta_{\varepsilon}\}$. We have $A_{k}(\varepsilon)\subset B_{k}(\varepsilon)$, hence $\cup_{n\geq1}\cap_{k\geq n}A_{k}(\varepsilon)\subset\cup_{n\geq1}\cap_{k\geq n}B_{k}(\varepsilon)$. Conclude by using the almost sure convergence, which gives $\mathbb{P}(\cup_{n\geq1}\cap_{k\geq n}B_{k}(\varepsilon))=0$ for each $\varepsilon>0$. If $S$ is finite, it is in particular a compact set, so the distance is attained for every $k\geq0$: $d(\theta_{k},S)=\min_{s\in S}d(\theta_{k},s)\to0$. Since $\theta_{k+1}-\theta_{k}\to0$, the sequence of iterates can only converge to a single point of $S$.

## B Practical Procedure

For the sake of completeness, the aim of this section is to derive a feasible procedure that achieves the optimal asymptotic variance described in Corollary 1. First, we present a practical way to compute the conditioning matrix $C_{k}$, and then we show that the resulting algorithm satisfies the high-level conditions of Theorem 2. This method is considered in a numerical illustration along with a novel variant of AdaGrad.
## B.1 Construction Of The Conditioning Matrix Ck

Similarly to the unavailability of gradients, one may not have access to values of the Hessian matrix but only to stochastic versions of it (see details in the numerical experiments below). As a consequence, we consider the following framework, which involves random Hessian matrices. As for gradients, a policy $(P'_{k})_{k\geq0}$ is used at each iteration to produce random Hessians through $H(\theta_{k},\xi'_{k+1})$ with $\xi'_{k+1}\sim P'_{k}$.

Assumption 10 (Unbiased and bounded Hessians). *The Hessian generator* $H:\mathbb{R}^{d}\times\mathcal{S}\to\mathbb{R}^{d\times d}$ *is uniformly bounded around the minimizer and is such that for all* $\theta\in\mathbb{R}^{d}$, $H(\theta,\cdot)$ *is measurable and we have:* $\forall k\geq0$, $\mathbb{E}[H(\theta_{k},\xi'_{k+1})|\mathcal{F}_{k}]=\nabla^{2}F(\theta_{k})$.

An estimate of the Hessian matrix $H=\nabla^{2}F(\theta^{\star})$ is now introduced as the weighted average
$$\Phi_{k}=\sum_{j=0}^{k}\omega_{j,k}H(\theta_{j},\xi^{\prime}_{j+1})\quad\mbox{with}\quad\sum_{j=0}^{k}\omega_{j,k}=1.\tag{22}$$
The previous estimate has two advantages. First, thanks to averaging, the noise associated with each evaluation $H(\theta_{j},\xi'_{j+1})$ will eventually vanish, as a sum of martingale increments. Second, the weights $\omega_{j,k}$ may give more importance to the most recent iterates; since $\theta_{k}$ eventually lies near $\theta^{\star}$, this may reduce the bias when estimating $H=\nabla^{2}F(\theta^{\star})$.

Proposition 2. *Let* $(\Phi_{k})_{k\geq0}$ *be obtained by* (22)*. Suppose that Assumptions 3 and 10 are fulfilled and that* $\theta_{k}\to\theta^{\star}$ *a.s. If* $\sup_{0\leq j\leq k}\omega_{j,k}=O(1/k)$*, then we have* $\Phi_{k}\to H=\nabla^{2}F(\theta^{\star})$ *a.s.*

A natural choice is to take equal weights $\omega_{j,k}=(k+1)^{-1}$. However, since the last iterates are more likely to bring relevant information through their Hessian estimates, we advocate the use of adaptive weights of the form $\omega_{j,k}\propto\exp(-\eta\|\theta_{j}-\theta_{k}\|_{1})$ with a parameter $\eta\geq0$, which recovers equal weights when $\eta=0$. These two weight sequences satisfy the assumption of Proposition 2. They are considered in the numerical illustration of the next section.
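As an illustration, the weighted average (22) with the two weight choices above can be sketched as follows (a minimal NumPy sketch; the function name `hessian_average` and the explicit storage of past iterates and Hessian estimates are assumptions of this example, not part of the text):

```python
import numpy as np

def hessian_average(thetas, hessians, eta=0.0):
    # Weighted average Phi_k of past Hessian estimates, Eq. (22):
    # weights w_{j,k} proportional to exp(-eta * ||theta_j - theta_k||_1);
    # eta = 0 recovers the equal weights w_{j,k} = 1/(k+1).
    theta_k = thetas[-1]
    w = np.array([np.exp(-eta * np.abs(t - theta_k).sum()) for t in thetas])
    w = w / w.sum()  # normalize so the weights sum to one
    return sum(wj * Hj for wj, Hj in zip(w, hessians))
```

With `eta = 0` this is a plain running mean; a larger `eta` concentrates the weight on Hessian estimates computed at iterates close to the current point, in line with the adaptive choice advocated above.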
While inverting $\Phi_{k}$ would produce a simple estimate of $H^{-1}$, such an approach might result in a certain instability in practice, caused by large jumps towards wrong directions (large eigenvalues) or a too restrictive visit along other components (vanishing eigenvalues). To overcome this issue, we rely on the following filter, which clamps the eigenvalues of a symmetric matrix. For any symmetric matrix $S$ and two positive numbers $0<a<b$, denote by $S[a,b]$ the associated matrix where all the eigenvalues are clamped to $[a,b]$, *i.e.*, any eigenvalue $\lambda$ of $S$ is modified as $\lambda\leftarrow\max\{a,\min\{\lambda,b\}\}$. Let $(\lambda_{k}^{(m)})_{k\geq1}$ and $(\lambda_{k}^{(M)})_{k\geq1}$ be two sequences of positive numbers such that $\lambda_{k}^{(m)}\leq\lambda_{k}^{(M)}$ for all $k\geq1$. Define the matrices
$$\forall k\in\mathbb{N},\quad C_{k}=\left(\Phi_{k}[(\lambda_{k+1}^{(M)})^{-1},(\lambda_{k+1}^{(m)})^{-1}]\right)^{-1}.\tag{23}$$
Such a definition guarantees two properties. First, $C_{k}\in\mathcal{S}_{d}^{++}(\mathbb{R})$ with $\lambda_{k+1}^{(m)}I_{d}\preceq C_{k}\preceq\lambda_{k+1}^{(M)}I_{d}$. Second, in virtue of Proposition 2, $\Phi_{k}\to H$ a.s., so that, as soon as $(\lambda_{k}^{(m)})_{k\geq1}$ and $(\lambda_{k}^{(M)})_{k\geq1}$ go to $0$ and $+\infty$ respectively, the matrix $C_{k}$ converges almost surely to $H^{-1}$ (as recommended by Corollary 1). Therefore, we obtain a feasible procedure leading to asymptotic optimality.

Theorem 5 (Asymptotic optimality of the iterates). *Let* $(\theta_{k})_{k\geq0}$ *be obtained by conditioned SGD* (2) *with* $\gamma_{k}=1/k$, $\Phi_{k}$ *defined by* (22), $\lambda_{k}^{(m)}\to0$, $\lambda_{k}^{(M)}\to+\infty$ *and* $C_{k}$ *given by* (23)*. Suppose that Assumptions 1 to 9 are fulfilled and* $\sup_{0\leq j\leq k}\omega_{j,k}=O(1/k)$*. We have*
$$\sqrt{k}(\theta_{k}-\theta^{\star})\sim{\mathcal N}(0,H^{-1}\Gamma H^{-1}),\qquad\text{as }k\to\infty.$$
This algorithm is theoretically asymptotically optimal. However, in practice, the adaptive gradient methods described in Table 1 have become the workhorse for training deep learning models, as they take advantage of low-rank approximations and diagonal scalings.
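The eigenvalue filter $S[a,b]$ and the resulting conditioner (23) can be sketched as follows (a hedged NumPy sketch; `clamp_spd` and `conditioner` are illustrative names, and realizing the filter through an explicit eigendecomposition is one possible implementation choice):

```python
import numpy as np

def clamp_spd(S, a, b):
    # The filter S[a, b]: clamp every eigenvalue of the symmetric matrix S
    # to the interval [a, b], keeping the eigenvectors unchanged.
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.clip(vals, a, b)) @ vecs.T

def conditioner(Phi_k, lam_min_next, lam_max_next):
    # C_k of Eq. (23): invert Phi_k after clamping its spectrum to
    # [1/lam_max_next, 1/lam_min_next], which enforces
    # lam_min_next * I <= C_k <= lam_max_next * I.
    return np.linalg.inv(clamp_spd(Phi_k, 1.0 / lam_max_next, 1.0 / lam_min_next))
```

Clamping before inversion rules out both failure modes mentioned above: an almost-singular $\Phi_k$ can no longer blow up the step, and a huge eigenvalue of $\Phi_k$ can no longer freeze a direction.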
Interestingly, the *conditioning* matrices involved in these methods are linked to gradient estimates, and thus to the covariance matrices $\Gamma_{k}$ (see Assumption 4), rather than to the Hessian $H$. Indeed, since $\theta^{\star}\in S$, we have for the limiting covariance $\Gamma=\mathbb{E}_{\xi}[g(\theta^{\star},\xi)g(\theta^{\star},\xi)^{\top}]$. Consider a variant of AdaGrad which accumulates the averaged gradients $G_{k}=\delta I+(1/k)\sum_{i=1}^{k}g_{i}g_{i}^{\top}$ and sets $C_{k}=G_{k}^{-1/2}$. Averaging makes it possible to anneal the stochastic noise of the gradient estimates. By the law of large numbers, the limiting matrix in our Theorem 2 will be $C=(\Gamma+\delta I)^{-1/2}$.

## B.2 Numerical Illustration

Consider the empirical risk minimization framework applied to Generalized Linear Models. Given a data matrix $X=(x_{i,j})\in\mathbb{R}^{n\times d}$ with labels $y\in\mathbb{R}^{n}$ and a regularization parameter $\lambda>0$, we are interested in solving $\min_{\theta\in\mathbb{R}^{d}}F(\theta)$ with
$$F(\theta)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\theta),\quad f_{i}(\theta)=\mathcal{L}(x_{i}^{\top}\theta,y_{i})+\lambda\Omega(\theta),$$
where $\mathcal{L}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is a smooth loss function and $\Omega:\mathbb{R}^{d}\to\mathbb{R}_{+}$ is a smooth convex regularizer, chosen as the Tikhonov regularization $\Omega(\theta)=\frac{1}{2}\|\theta\|_{2}^{2}$. The gradient and Hessian of each component $f_{i}$ are given for all $i=1,\ldots,n$ by
$$\begin{array}{c}{{\nabla f_{i}(\theta)={\mathcal{L}}^{\prime}(x_{i}^{\top}\theta,y_{i})x_{i}+\lambda\theta}}\\ {{\nabla^{2}f_{i}(\theta)={\mathcal{L}}^{\prime\prime}(x_{i}^{\top}\theta,y_{i})x_{i}x_{i}^{\top}+\lambda I_{d},}}\end{array}$$
where $\mathcal{L}'(\cdot,\cdot)$ and $\mathcal{L}''(\cdot,\cdot)$ are the first and second derivatives of $\mathcal{L}(\cdot,\cdot)$ with respect to the first argument. Consider two well-known losses, namely least-squares and logistic. These losses are respectively associated with the Ridge regression problem with $y\in\mathbb{R}^{n}$ and the binary classification task with $y\in\{-1,+1\}^{n}$. The regularization parameter is set to the classical value $\lambda=1/n$.
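The averaged AdaGrad variant described above can be sketched as follows (a minimal NumPy sketch; the function name and the explicit eigendecomposition used for the inverse square root are assumptions of this illustration):

```python
import numpy as np

def adafull_avg_conditioner(grads, delta=1e-3):
    # C_k = (delta*I + (1/k) sum_i g_i g_i^T)^(-1/2): full-matrix AdaGrad
    # built on an *average* of gradient outer products instead of the usual
    # cumulative sum, so the stochastic noise of the estimates is annealed.
    d = grads[0].shape[0]
    G = delta * np.eye(d) + sum(np.outer(g, g) for g in grads) / len(grads)
    vals, vecs = np.linalg.eigh(G)          # G is SPD because delta > 0
    return (vecs * vals ** -0.5) @ vecs.T   # matrix inverse square root
```

By the law of large numbers, the average of the outer products tends to $\Gamma$ at a stationary point, so this conditioner tends to $(\Gamma+\delta I)^{-1/2}$, consistent with the limiting matrix identified above.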
Denote by $\sigma(z)=1/(1+\exp(-z))$ the sigmoid function. We have the following closed-form expressions:

(Ridge Regression)
$$\begin{array}{l}\mathcal{L}(x_{i}^{\top}\theta,y_{i})=\frac{1}{2}(y_{i}-x_{i}^{\top}\theta)^{2}\\ \mathcal{L}^{\prime}(x_{i}^{\top}\theta,y_{i})=x_{i}^{\top}\theta-y_{i}\\ \mathcal{L}^{\prime\prime}(x_{i}^{\top}\theta,y_{i})=1\end{array}$$

(Logistic Regression)
$$\begin{array}{l}\mathcal{L}(x_{i}^{\top}\theta,y_{i})=\log(1+\exp(-y_{i}x_{i}^{\top}\theta))\\ \mathcal{L}^{\prime}(x_{i}^{\top}\theta,y_{i})=\sigma(x_{i}^{\top}\theta)-y_{i}\\ \mathcal{L}^{\prime\prime}(x_{i}^{\top}\theta,y_{i})=\sigma(x_{i}^{\top}\theta)(1-\sigma(x_{i}^{\top}\theta))\end{array}$$

As stated in Example 1 of Section 2, stochastic versions of both the gradient and the Hessian of the objective $F$ can be easily computed using only a batch $B\subset\{1,\ldots,n\}$ of data, with $\nabla_{B}F(\theta)=\sum_{i\in B}\nabla f_{i}(\theta)/|B|$ (resp. $\nabla_{B}^{2}F(\theta)=\sum_{i\in B}\nabla^{2}f_{i}(\theta)/|B|$) for the gradient (resp. Hessian) estimate. Note that these random generators meet Assumptions 1 and 10, as they produce unbiased estimates of the gradient and the Hessian matrix respectively. For the sake of completeness and illustrative purposes, we compare the performance of classical stochastic gradient descent (sgd) and the *conditioned* variant (csgd) presented in Appendix B, where the matrix $\Phi_{k}$ is an averaging of past Hessian estimates as given in Equation (22). We compare equal weights $\omega_{j,k}=(k+1)^{-1}$ and adaptive weights $\omega_{j,k}\propto\exp(-\eta\|\theta_{j}-\theta_{k}\|_{1})$ with $\eta>0$, which give more importance to Hessian estimates associated with iterates close to the current point. Furthermore, for computational reasons, we consider a novel adaptive stochastic first-order method which is a variant of Adagrad. Starting from the null vector $\theta_{0}=(0,\ldots,0)\in\mathbb{R}^{d}$, we use learning rates of the optimal form $\gamma_{k}=\alpha/(k+k_{0})$ (Bottou et al., 2018) and set $\lambda_{k}^{(m)}\equiv0$, $\lambda_{k}^{(M)}=\Lambda\sqrt{k}$ in the experiments, where $\alpha$, $k_{0}$ and $\Lambda$ are tuned using a grid search. The means of the optimality ratio $k\mapsto[F(\theta_{k})-F(\theta^{\star})]/[F(\theta_{0})-F(\theta^{\star})]$, obtained over 100 independent runs, are presented in the figures below.

Methods in competition. The different methods in the experiments are:

- sgd: standard stochastic gradient descent.
- *sgd_avg*: Polyak-averaging stochastic gradient descent, with a burn-in period ($n_{0}=15$ for $d=20$ and $n_{0}=30$ for $d=100$) to avoid the poor performance of bad initialization.
- *csgd*($\eta=0$) and *csgd*($\eta>0$): *conditioned* stochastic gradient descent methods with equal and adaptive weights, where the matrix $\Phi_{k}$ is an averaging of past Hessian estimates as given in Equation (22).
- *adafull_avg*: the variant of Adagrad presented in Appendix B, where the gradient matrix $G_{k}$ is updated as an average $G_{k}=\delta I+(1/k)\sum_{i=1}^{k}g_{i}g_{i}^{\top}$ with $C_{k}=G_{k}^{-1/2}$, instead of the cumulative sum provided in the Adagrad literature. Note that averaging here anneals the stochastic noise, whereas classical versions of Adagrad often rely on true gradients and may use cumulative sums. The parameter $\delta$ is also tuned using a grid search.

We focus on Ridge regression on simulated data with $n=10{,}000$ samples in dimensions $d\in\{20;100\}$. Stochastic gradient methods are known to greatly benefit from mini-batches instead of picking a single random sample when computing the gradient estimate. We use a batch size equal to $|B|=16$. In Figure 1, we can see that *conditioned* SGD outperforms standard SGD. Furthermore, adaptive weights ($\eta>0$) improve the convergence speed of *conditioned* SGD methods. Interestingly, the novel approach *adafull_avg* offers great performance at a cheap computing cost. Indeed, the update of $C_{k+1}$ relies on the inverse of an average; this operation can be carried out efficiently thanks to the Woodbury matrix identity.
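Putting the pieces together, a minimal end-to-end sketch of csgd on a synthetic Ridge problem could look like this (the synthetic data, all names, the chosen constants, and the simple regularized inverse standing in for the spectral clamp with $\lambda_k^{(m)}\to0$ are assumptions of this illustration, not the authors' exact experimental code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Ridge problem: F(theta) = (1/(2n))||X theta - y||^2 + (lam/2)||theta||^2
n, d, lam = 1000, 5, 1e-3
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def batch_grad(theta, idx):
    # Unbiased mini-batch gradient: mean of L'(x_i^T theta, y_i) x_i + lam * theta
    return X[idx].T @ (X[idx] @ theta - y[idx]) / len(idx) + lam * theta

def batch_hess(idx):
    # Unbiased mini-batch Hessian: mean of x_i x_i^T + lam * I  (L'' = 1 for least squares)
    return X[idx].T @ X[idx] / len(idx) + lam * np.eye(d)

# Conditioned SGD: theta_{k+1} = theta_k - gamma_{k+1} C_k g_k, gamma_k = alpha/(k + k0)
alpha, k0, batch = 1.0, 10, 16
theta, Phi = np.zeros(d), np.zeros((d, d))
for k in range(1, 4001):
    idx = rng.choice(n, size=batch, replace=False)
    Phi += (batch_hess(idx) - Phi) / k               # running (equal-weight) average, Eq. (22)
    C = np.linalg.inv(Phi + k ** -0.5 * np.eye(d))   # crude stand-in for the spectral clamp
    theta = theta - alpha / (k + k0) * C @ batch_grad(theta, idx)
```

The vanishing ridge `k ** -0.5` plays the role of the lower clamp on the spectrum of $\Phi_k$: it keeps the early inverses well behaved and fades away so that $C_k$ can approach $H^{-1}$.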
Real-world data. We now turn our attention to real-world data and consider again the Ridge regression problem on the following datasets:

- *Boston Housing dataset* (Harrison Jr & Rubinfeld, 1978): this dataset contains information collected by the U.S. Census Service concerning housing in the area of Boston, Mass. It contains $n=506$ samples in dimension $d=14$.
- *Diabetes dataset* (Dua & Graff, 2017): ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) were obtained for each of $n=442$ diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.

Figure 1: Optimality ratio $k\mapsto[F(\theta_{k})-F(\theta^{\star})]/[F(\theta_{0})-F(\theta^{\star})]$ for Ridge regression in dimension $d\in\{20;100\}$.

The means of the optimality ratio $k\mapsto[F(\theta_{k})-F(\theta^{\star})]/[F(\theta_{0})-F(\theta^{\star})]$, obtained over 100 independent runs, are presented in Figure 2. Once again, the *conditioned* SGD methods offer better performance than plain SGD. For these datasets, it is the *conditioning* matrix with adaptive weights, as given in Equation (22), which presents the best results.

Figure 2: Optimality ratio $k\mapsto[F(\theta_{k})-F(\theta^{\star})]/[F(\theta_{0})-F(\theta^{\star})]$ for Ridge regression on the Boston and Diabetes datasets.

## C Auxiliary Results

## C.1 Robbins-Siegmund Theorem

Theorem 6 (Robbins & Siegmund (1971)). *Consider a filtration* $(\mathcal{F}_{n})_{n\geq0}$ *and four sequences of random variables* $(V_{n})_{n\geq0}$, $(W_{n})_{n\geq0}$, $(a_{n})_{n\geq0}$ *and* $(b_{n})_{n\geq0}$ *that are adapted and non-negative. Assume that almost surely* $\sum_{k}a_{k}<\infty$ *and* $\sum_{k}b_{k}<\infty$*. Assume moreover that* $\mathbb{E}[V_{0}]<\infty$ *and for all* $n\in\mathbb{N}$, $\mathbb{E}[V_{n+1}|\mathcal{F}_{n}]\leq(1+a_{n})V_{n}-W_{n}+b_{n}$*. Then it holds*
$$(a)\ \sum_{k}W_{k}<\infty\ a.s.\qquad(b)\ V_{n}\stackrel{{a.s.}}{{\longrightarrow}}V_{\infty},\ \mathbb{E}\left[V_{\infty}\right]<\infty.\qquad(c)\ \sup_{n\geq0}\mathbb{E}\left[V_{n}\right]<\infty.$$

## C.2 Auxiliary Lemmas

Lemma 2. *Let* $(u_{n})_{n\geq1}$, $(v_{n})_{n\geq1}$ *and* $(\gamma_{n})_{n\geq1}$ *be non-negative sequences such that* $\gamma_{n}\to0$ *and* $\sum_{n}\gamma_{n}=+\infty$*. Assume that there exist a real number* $m\geq1$ *and* $j\geq1$ *such that for all* $n\geq j$, $u_{n}\leq(1-\gamma_{n})^{m}u_{n-1}+\gamma_{n}v_{n}$*. Then it holds that*
$$\limsup_{n\to+\infty}u_{n}\leq\limsup_{n\to+\infty}v_{n}.$$

Proof. Denote $x_{+}=\max(x,0)$. One has $(x+y)_{+}\leq x_{+}+y_{+}$. Set $\varepsilon>0$ and $v=\limsup_{n}v_{n}+\varepsilon$. Then there exists an integer $N\geq1$ such that for all $n\geq N$, $(1-\gamma_{n})^{m}\leq(1-\gamma_{n})$ and $v_{n}<v$, *i.e.*, $(v_{n}-v)_{+}=0$. We have for large enough $n\geq N\vee j$,
$$u_{n}-v\leq(1-\gamma_{n})(u_{n-1}-v)+\gamma_{n}(v_{n}-v),$$
and taking the positive part gives $(u_{n}-v)_{+}\leq(1-\gamma_{n})(u_{n-1}-v)_{+}+\gamma_{n}(v_{n}-v)_{+}=(1-\gamma_{n})(u_{n-1}-v)_{+}$. Since $\sum_{n}\gamma_{n}=+\infty$, this inequality implies that $(u_{n}-v)_{+}$ tends to zero. This is true for all $\varepsilon>0$, so $v$ is arbitrarily close to $\limsup_{n}v_{n}$ and the result follows.

Lemma 3. *Let* $(\gamma_{n})_{n\geq1}$ *be a non-negative sequence converging to zero, and* $\lambda$, $m$ *and* $p$ *three real numbers with* $\lambda>0$, $m\geq1$, $p\geq0$.
*Consider two non-negative sequences* $(x_{n})$, $(\varepsilon_{n})$ *and an integer* $j\geq1$ *such that*
$$\begin{array}{r l}{{\forall}n\geq j,}&{{}x_{n}=(1-\lambda\gamma_{n})^{m}x_{n-1}+\gamma_{n}^{p+1}\varepsilon_{n},}\\ {{}}&{{}{\quad i.e.,\quad}x_{n}=\prod_{i=j}^{n}(1-\lambda\gamma_{i})^{m}x_{j-1}+\sum_{k=j}^{n}\gamma_{k}^{p+1}\left(\prod_{i=k+1}^{n}\left(1-\lambda\gamma_{i}\right)^{m}\right)\varepsilon_{k}.}\end{array}$$
*The following holds:*

- *if* $\gamma_{n}=n^{-\beta}$, $\beta\in(1/2,1)$*, then for any* $p$,
$$\limsup_{n\to+\infty}{\frac{x_{n}}{\gamma_{n}^{p}}}\leq{\frac{1}{m\lambda}}\limsup_{n\to+\infty}\varepsilon_{n};$$
- *if* $\gamma_{n}=1/n$*, then for any* $p<m\lambda$,
$$\limsup_{n\to+\infty}{\frac{x_{n}}{\gamma_{n}^{p}}}\leq{\frac{1}{m\lambda-p}}\limsup_{n\to+\infty}\varepsilon_{n}.$$

*In particular, when* $\varepsilon_{n}\to0$ *with* $j=1$ *and* $x_{0}=0$,
$$\lim_{n\to+\infty}\sum_{k=1}^{n}\gamma_{k}\prod_{i=k+1}^{n}(1-\lambda\gamma_{i})^{m}\varepsilon_{k}=0,$$
$$(m\lambda>1)\quad\lim_{n\to+\infty}\frac{1}{\gamma_{n}}\sum_{k=1}^{n}\gamma_{k}^{2}\prod_{i=k+1}^{n}(1-\lambda\gamma_{i})^{m}\varepsilon_{k}=0.$$

Before proving this result, note that if we consider $\gamma_{n}=\gamma/n^{\beta}$, then we can write
$$x_{n}=(1-\lambda\gamma_{n})^{m}x_{n-1}+\gamma_{n}^{p+1}\varepsilon_{n}=(1-(\lambda\gamma)n^{-\beta})^{m}x_{n-1}+(n^{-\beta})^{p+1}(\gamma^{p+1}\varepsilon_{n})$$
and apply the result with $\tilde\lambda=\gamma\lambda$ and $\tilde\varepsilon_{n}=\gamma^{p+1}\varepsilon_{n}$.

Proof. We apply Lemma 2 to the sequence $u_{n}=x_{n}/\gamma_{n}^{p}$.
We have for all $n\geq j$,

$$\begin{aligned}u_{n}&=\frac{1}{\gamma_{n}^{p}}\left((1-\lambda\gamma_{n})^{m}x_{n-1}+\gamma_{n}^{p+1}\varepsilon_{n}\right)\\&=\left(\frac{\gamma_{n-1}}{\gamma_{n}}\right)^{p}(1-\lambda\gamma_{n})^{m}u_{n-1}+\gamma_{n}\varepsilon_{n}\\&=\exp\left(p\log\left(\frac{\gamma_{n-1}}{\gamma_{n}}\right)+m\log(1-\lambda\gamma_{n})\right)u_{n-1}+\gamma_{n}\varepsilon_{n}.\end{aligned}$$

Define

$$\lambda_{n}=\frac{1}{\gamma_{n}}\left(1-\exp\left(p\log\left(\frac{\gamma_{n-1}}{\gamma_{n}}\right)+m\log(1-\lambda\gamma_{n})\right)\right),$$

so we get the recursion equation

$$\forall n\geq j,\quad u_{n}=(1-\lambda_{n}\gamma_{n})u_{n-1}+\lambda_{n}\gamma_{n}{\frac{\varepsilon_{n}}{\lambda_{n}}}.$$

- if $\gamma_n=n^{-\beta}$, $\beta\in(1/2,1)$, then $1/\gamma_n-1/\gamma_{n-1}\to0$ and the ratio $\gamma_{n-1}/\gamma_n$ tends to 1 with

$$\log\left({\frac{\gamma_{n-1}}{\gamma_{n}}}\right)=\left({\frac{\gamma_{n-1}}{\gamma_{n}}}-1\right)(1+o(1))=\gamma_{n-1}\left({\frac{1}{\gamma_{n}}}-{\frac{1}{\gamma_{n-1}}}\right)(1+o(1))=o(\gamma_{n}).$$

Besides, $m\log(1-\lambda\gamma_n)=-m\lambda\gamma_n+o(\gamma_n)$ when $n\to+\infty$ and we get

$$\lambda_{n}=\frac{1}{\gamma_{n}}\left[1-\exp\left(-m\lambda\gamma_{n}+o(\gamma_{n})\right)\right],$$

which implies that $\lambda_n$ converges to $m\lambda$. We conclude with Lemma 2.

- if $\gamma_n=1/n$ then the ratio $\gamma_{n-1}/\gamma_n$ tends to 1 with

$$\log\left({\frac{\gamma_{n-1}}{\gamma_{n}}}\right)=\log\left(1+{\frac{1}{n-1}}\right)=\gamma_{n}+o(\gamma_{n}).$$

We still have $m\log(1-\lambda\gamma_n)=-m\lambda\gamma_n+o(\gamma_n)$ when $n\to+\infty$ and therefore

$$\lambda_{n}=\frac{1}{\gamma_{n}}\left[1-\exp\left((p-m\lambda)\gamma_{n}+o(\gamma_{n})\right)\right],$$

which implies $\lambda_n$ converges to $(m\lambda-p)$ and we conclude in the same way.

Lemma 4. *Let $A,B\in\mathcal S_d^{++}(\mathbb R)$; then the eigenvalues of $AB$ are real and positive with*

$$\mathrm{Sp}(AB)\subset[\lambda_{\min}(A)\lambda_{\min}(B);\lambda_{\max}(A)\lambda_{\max}(B)].$$

Proof. Denote by $\sqrt B$ the unique positive square root of $B$. The matrix $AB$ is similar to the real symmetric positive definite matrix $\sqrt B A\sqrt B$. Therefore its eigenvalues are real and positive.
Since $A\mapsto\lambda_{\max}(A)$ is a sub-multiplicative matrix norm on $\mathcal S_d^{++}(\mathbb R)$, $\lambda_{\max}(AB)\leq\lambda_{\max}(A)\lambda_{\max}(B)$, which gives $\lambda_{\max}((AB)^{-1})\leq\lambda_{\max}(A^{-1})\lambda_{\max}(B^{-1})$, *i.e.,* $\lambda_{\min}(AB)^{-1}\leq\lambda_{\min}(A)^{-1}\lambda_{\min}(B)^{-1}$, and finally $\lambda_{\min}(A)\lambda_{\min}(B)\leq\lambda_{\min}(AB)$.

Lemma 5. *Let $S\in\mathcal S_d^{++}(\mathbb R)$ be a real symmetric positive definite matrix. Let $(\gamma_k)_{k\geq1}$ be a positive decreasing sequence converging to 0 such that $\sum_k\gamma_k=+\infty$. Denote by $\lambda_m$ the smallest eigenvalue of $S$. It holds that there exists $j\geq1$ such that for any $k>j$, all the eigenvalues of the real symmetric matrix $A_k=I-\gamma_k S$ are positive and we have*

$$\rho(\Pi_{n})=\rho(A_{n}\ldots A_{1})\stackrel{n\to+\infty}{\longrightarrow}0,$$

$$\forall k>j,\quad\rho(\Pi_{n,k})=\rho(A_{n}\ldots A_{k+1})\leq\prod_{i=k+1}^{n}(1-\gamma_{i}\lambda_{m}).$$

Proof. For any $k\in\mathbb N$, the eigenvalues of the real symmetric matrix $A_k=I-\gamma_k S$ are given by $\mathrm{Sp}(A_k)=\{(1-\gamma_k\lambda),\ \lambda\in\mathrm{Sp}(S)\}$. Since $\gamma_k\to0$, there exists $j\geq1$ such that $\gamma_k\lambda_{\max}(S)<1$ for all $k>j$. Therefore for any $k>j$, we have $\mathrm{Sp}(A_k)\subset\mathbb R_+^{*}$ and the largest eigenvalue is $\rho(A_k)=1-\gamma_k\lambda_m$. Since $\rho$ is a sub-multiplicative norm for real symmetric matrices, we get $\rho(\Pi_n)\leq\prod_{k=1}^{n}\rho(A_k)=\prod_{k=1}^{j}\rho(A_k)\prod_{k=j+1}^{n}\rho(A_k)$. The second product can be upper bounded with the convexity of the exponential,

$$\prod_{k=j+1}^{n}\rho(A_{k})=\prod_{k=j+1}^{n}\left(1-\gamma_{k}\lambda_{m}\right)\leq\prod_{k=j+1}^{n}\exp\left(-\gamma_{k}\lambda_{m}\right)=\exp\left(-\lambda_{m}(\tau_{n}-\tau_{j})\right)\stackrel{{n\to+\infty}}{{\longrightarrow}}0,$$

where $\tau_n=\sum_{k=1}^{n}\gamma_k$. Similarly we have for all $k>j$, $\rho(\Pi_{n,k})\leq\prod_{i=k+1}^{n}\rho(A_i)\leq\prod_{i=k+1}^{n}(1-\gamma_i\lambda_m)$.

Lemma 6. *Let $\gamma_n=\alpha n^{-\beta}$ with $\beta\in(1/2,1]$, then it holds*

$$(\beta<1)\quad\sum_{k=1}^{n}\gamma_{k}\sim\frac{n\gamma_{n}}{1-\beta}=\frac{\alpha}{1-\beta}n^{1-\beta},\qquad(\beta=1)\quad\sum_{k=1}^{n}\gamma_{k}\sim\alpha\log(n).$$

Proof. By series-integral comparison, $\int_1^{n+1}t^{-\beta}\,dt\leq\sum_{k=1}^{n}k^{-\beta}\leq1+\int_1^{n}t^{-\beta}\,dt$.

Theorem 7.
*(Delyon & Portier, 2021, Theorem 17) (Freedman inequality) Let $(X_j)_{1\leq j\leq n}$ be random variables such that $\mathbb E[X_j\mid\mathcal F_{j-1}]=0$ for all $1\leq j\leq n$; then, for all $t\geq0$ and $v,m>0$,*

$$\mathbb{P}\left(\left|\sum_{j=1}^{n}X_{j}\right|\geq t,\,\max_{j=1,\ldots,n}|X_{j}|\leq m,\,\sum_{j=1}^{n}\mathbb{E}\left[X_{j}^{2}\mid\mathcal{F}_{j-1}\right]\leq v\right)\leq2\exp\left(-\frac{t^{2}/2}{v+tm/3}\right).$$

Lemma 7. *Let $A\in\mathbb R^{n\times n}$ be a symmetric positive semi-definite matrix. Then for any $B\in\mathbb R^{m\times n}$, the matrix $BAB^{\top}\in\mathbb R^{m\times m}$ is symmetric positive semi-definite.*

Proof. First note that $(BAB^{\top})^{\top}=(B^{\top})^{\top}A^{\top}B^{\top}=BAB^{\top}$ because $A$ is symmetric. Then for any vector $x\in\mathbb R^{m}$, we have $x^{\top}(BAB^{\top})x=(B^{\top}x)^{\top}A(B^{\top}x)\geq0$ since $A$ is positive semi-definite.

Proposition 3. *(Khalil, 2002, Theorem 4.6) Let $H$ be a positive definite matrix and $\Gamma$ a symmetric positive definite matrix of the same dimension. Then there exists a symmetric positive definite matrix $\Sigma$, unique solution of the Lyapunov equation $H\Sigma+\Sigma H^{\top}=\Gamma$, which is given by $\Sigma=\int_0^{+\infty}e^{-tH}\Gamma e^{-tH^{\top}}\,dt$. The result remains true if the matrix $\Gamma$ is only symmetric positive semi-definite: in that case the matrix $\Sigma$ is also symmetric positive semi-definite and is the solution of the Lyapunov equation.*

## C.3 Additional Propositions

This section gathers the proofs of Proposition 1 about the optimal choice for the *conditioning* matrix and of Proposition 2 about the almost sure convergence of the *conditioning* matrices.

Proposition 1. *The choice $C^{\star}=H^{-1}$ is optimal in the sense that $\Sigma_{C^{\star}}\preceq\Sigma_C$, $\forall C\in\mathcal C_H$. Moreover, $\Sigma_{C^{\star}}=H^{-1}\Gamma H^{-1}$.*

Proof. Define $\Delta_C=\Sigma_C-H^{-1}\Gamma H^{-1}$ and check that $\Delta_C$ satisfies

$$(CH-I_{d}/2)\,\Delta_{C}+\Delta_{C}\,(CH-I_{d}/2)^{\top}=(C-H^{-1})\Gamma(C-H^{-1})^{\top}.$$

Because $\Gamma$ is symmetric positive semi-definite, we have using Lemma 7 that the term on the right-hand side is symmetric positive semi-definite.
Therefore, in view of Proposition 3, we get that $\Delta_C$ is symmetric positive semi-definite, $\Delta_C\succeq0$, which implies $\Sigma_C\succeq H^{-1}\Gamma H^{-1}$ for all $C\in\mathcal C_H$. The equality is reached for $C^{\star}=H^{-1}$ with $\Delta_{C^{\star}}=0$, $\Sigma_{C^{\star}}=H^{-1}\Gamma H^{-1}$.

Proposition 2. *Let $(\Phi_k)_{k\geq0}$ be obtained by (22). Suppose that Assumptions 3 and 10 are fulfilled and that $\theta_k\to\theta^{\star}$ almost surely. If $\sup_{0\leq j\leq k}\omega_{j,k}=O(1/k)$, then we have $\Phi_k\to H=\nabla^2F(\theta^{\star})$ almost surely.*

Proof. We use the decomposition

$$\Phi_{k}-H=\sum_{j=0}^{k}\omega_{j,k}\left(\nabla^{2}F(\theta_{j})-H\right)+\sum_{j=0}^{k}\omega_{j,k}\left(H(\theta_{j},\xi_{j+1}^{\prime})-\nabla^{2}F(\theta_{j})\right).$$

The continuity of $\nabla^2F$ at $\theta^{\star}$ and the fact that $\theta_j\to\theta^{\star}$ a.s. imply that $\nabla^2F(\theta_j)-H\to0$ a.s. Since $\sup_{0\leq j\leq k}\omega_{j,k}=O(1/k)$, there exists $a>0$ such that

$$\left\|\sum_{j=0}^{k}\omega_{j,k}\left(\nabla^{2}F(\theta_{j})-H\right)\right\|\leq{\frac{a}{k+1}}\sum_{j=0}^{k}\left\|\nabla^{2}F(\theta_{j})-H\right\|,$$

which goes to 0 in virtue of Cesaro's Lemma; therefore $\lim_{k\to\infty}\sum_{j=0}^{k}\omega_{j,k}\left(\nabla^2F(\theta_j)-H\right)=0$. The second term is a sum of martingale increments and shall be treated with Freedman's inequality and the Borel-Cantelli Lemma. Introduce the martingale increments

$$\forall 0\leq j\leq k,\quad X_{j+1,k}=\omega_{j,k}\left(H(\theta_{j},\xi_{j+1}^{\prime})-\nabla^{2}F(\theta_{j})\right).$$

For a fixed $k$, we have $X_{j+1,k}=\big(x_{j+1}^{(i,l)}\big)_{1\leq i,l\leq d}$ where we remove the index $k$ for the sake of clarity. Because the Hessian generator is unbiased, we have for all coordinates

$$\mathbb{E}\left[x_{j+1}^{(i,l)}\mid{\mathcal{F}}_{j}\right]=0\quad{\mathrm{for~all~}}0\leq j\leq k.$$

By definition of the Hessian generator and using that $(\nabla^2F(\theta_j))$ is bounded, we get that $H(\theta_j,\xi'_{j+1})-\nabla^2F(\theta_j)=O(1)$ for all $j\geq0$. For any $b>0$, consider the following event

$$\Omega_{b}=\left\{\operatorname*{sup}_{k\geq0}\operatorname*{max}_{j=0,\ldots,k}(k+1)\left|x_{j+1}^{(i,l)}\right|\leq b\right\},$$

and note that since $\omega_{j,k}=O(1/k)$ we have $\mathbb P(\Omega_b)\to1$ as $b\to\infty$.
On this event, the martingale increments and the variance term are bounded as

$$\operatorname*{max}_{j=0,\ldots,k}\left|x_{j+1}^{(i,l)}\right|\leq b(k+1)^{-1},\quad\sum_{j=0}^{k}\mathbb{E}\left[\left(x_{j+1}^{(i,l)}\right)^{2}\mid\mathcal{F}_{j}\right]\leq b^{2}(k+1)^{-1}.$$

Using Freedman's inequality (Theorem 7), we have for all coordinates $i,l=1,\ldots,d$,

$$\mathbb{P}\left(\left|\sum_{j=0}^{k}x_{j+1}^{(i,l)}\right|>\varepsilon,\Omega_{b}\right)\leq2\exp\left(-\frac{\varepsilon^{2}(k+1)}{2b(b+\varepsilon)}\right).$$

The last term is the general term of a convergent series. Apply the Borel-Cantelli Lemma (Borel, 1909) to finally get, almost surely on $\Omega_b$, that $\lim_{k\to\infty}\sum_{j=0}^{k}x_{j+1}^{(i,l)}=0$. Since $b>0$ is arbitrary and $\mathbb P(\Omega_b)\to1$ when $b\to\infty$, we have almost surely $\lim_{k\to\infty}\sum_{j=0}^{k}x_{j+1}^{(i,l)}=0$. This is true for all the coordinates of the martingale increments and therefore

$$\lim_{k\to\infty}\sum_{j=0}^{k}\omega_{j,k}\left(H(\theta_{j},\xi_{j+1}^{\prime})-\nabla^{2}F(\theta_{j})\right)=0\ \text{a.s.}$$

## C.4 Auxiliary Results On Expected Smoothness

The following Lemma gives sufficient conditions to meet the weak growth condition on the stochastic noise as stated in Assumption 8.

Lemma 8. *Suppose that for all $k\geq1$, $\theta\in\mathbb R^{d}$, $F(\theta)=\mathbb E\left[f(\theta,\xi_k)\mid\mathcal F_{k-1}\right]$ with $\xi_k\sim P_{k-1}$. Assume that for all $\xi_k\sim P_{k-1}$, the function $\theta\mapsto f(\theta,\xi_k)$ is $L$-smooth almost surely and there exists $m\in\mathbb R$ such that for all $\theta\in\mathbb R^{d}$, $f(\theta,\xi_k)\geq m$. Then a gradient estimate is given by $g(\theta,\xi)=\nabla f(\theta,\xi)$ and the growth condition of Assumption 8 is satisfied with $\sigma^{2}=2L(F^{\star}-m)$ and*

$$\forall\theta\in\mathbb{R}^{d},\forall k\in\mathbb{N},\quad\mathbb{E}\left[\|g(\theta,\xi_{k})\|_{2}^{2}\mid\mathcal{F}_{k-1}\right]\leq2L\left(F(\theta)-F^{\star}\right)+\sigma^{2}.$$

Proof.
For all $\xi_k\sim P_{k-1}$, Lipschitz continuity of the gradient $\theta\mapsto\nabla f(\theta,\xi_k)$ implies (see Nesterov (2013))

$$f(y,\xi_{k})\leq f(\theta,\xi_{k})+\langle\nabla f(\theta,\xi_{k}),y-\theta\rangle+(L/2)\|y-\theta\|_{2}^{2}\,.$$

Plug $y=\theta-(1/L)\nabla f(\theta,\xi_k)$ and use the lower bound $f(y,\xi_k)\geq m$ to obtain

$${\frac{1}{2L}}\|\nabla f(\theta,\xi_{k})\|_{2}^{2}\leq f(\theta,\xi_{k})-f(y,\xi_{k})\leq f(\theta,\xi_{k})-m,$$

which gives

$$\|g(\theta,\xi_{k})\|_{2}^{2}\leq2L\left(f(\theta,\xi_{k})-f(\theta^{\star},\xi_{k})\right)+2L\left(f(\theta^{\star},\xi_{k})-m\right),$$

and conclude by taking the conditional expectation with respect to $\mathcal F_{k-1}$.

The next Lemma links our weak growth condition with the notion of expected smoothness as introduced in Gower et al. (2019). In particular, this notion can be extended to our general context where the sampling distribution can evolve through the stochastic algorithm.

Lemma 9. *(Expected smoothness) Assume that with probability one,*

$$\sup_{k\geq1}\sup_{\theta\neq\theta^{\star}}\frac{\mathbb{E}\left[\|g(\theta,\xi_{k})-g(\theta^{\star},\xi_{k})\|_{2}^{2}\mid\mathcal{F}_{k-1}\right]}{F(\theta)-F^{\star}}<\infty\quad\text{and}\quad\sup_{k\geq1}\mathbb{E}\left[\|g(\theta^{\star},\xi_{k})\|_{2}^{2}\mid\mathcal{F}_{k-1}\right]<\infty.$$

*Then there exist $0\leq\mathcal{L},\sigma^{2}<\infty$ such that*

$$\forall\theta\in\mathbb{R}^{d},\forall k\in\mathbb{N},\quad\mathbb{E}\left[\|g(\theta,\xi_{k})\|_{2}^{2}\mid\mathcal{F}_{k-1}\right]\leq2\mathcal{L}\left(F(\theta)-F^{\star}\right)+2\sigma^{2}.$$

Proof.
For all $\theta\in\mathbb R^{d}$ and all $k\in\mathbb N$, we have

$$\|g(\theta,\xi_{k})\|_{2}^{2}=\|g(\theta,\xi_{k})-g(\theta^{\star},\xi_{k})+g(\theta^{\star},\xi_{k})\|_{2}^{2}\leq2\|g(\theta,\xi_{k})-g(\theta^{\star},\xi_{k})\|_{2}^{2}+2\|g(\theta^{\star},\xi_{k})\|_{2}^{2}.$$

Using the expected smoothness, with probability one, there exists $0\leq\mathcal L<\infty$ such that

$$\mathbb{E}\left[\|g(\theta,\xi_{k})-g(\theta^{\star},\xi_{k})\|_{2}^{2}\mid\mathcal{F}_{k-1}\right]\leq\mathcal{L}\left(F(\theta)-F^{\star}\right).$$

Since the noise at the optimal point is almost surely finite, there exists $0\leq\sigma^{2}<\infty$ such that

$$\mathbb{E}\left[\|g(\theta^{\star},\xi_{k})\|_{2}^{2}\mid{\mathcal{F}}_{k-1}\right]\leq\sigma^{2},$$

which allows us to conclude by taking the conditional expectation.
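Proposition 1 lends itself to a quick numerical sanity check. Assuming, consistently with the $\Delta_C$ identity in its proof, that the asymptotic covariance $\Sigma_C$ solves the Lyapunov equation $(CH-I_d/2)\Sigma_C+\Sigma_C(CH-I_d/2)^{\top}=C\Gamma C^{\top}$, the NumPy sketch below (our illustration, not code from the paper) solves this equation by vectorization and verifies that $\Sigma_C\succeq H^{-1}\Gamma H^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def random_spd(d):
    # random symmetric positive definite matrix
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

def solve_lyapunov(A, Q):
    # solve A X + X A^T = Q via (I (x) A + A (x) I) vec(X) = vec(Q)
    I = np.eye(A.shape[0])
    M = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(A.shape)

H, Gamma = random_spd(d), random_spd(d)
H_inv = np.linalg.inv(H)
# a conditioner C = H^{-1} + D with D SPD, so CH - I/2 = I/2 + DH is stable
C = H_inv + random_spd(d)
A = C @ H - 0.5 * np.eye(d)

Sigma_C = solve_lyapunov(A, C @ Gamma @ C.T)
Sigma_opt = H_inv @ Gamma @ H_inv  # = Sigma_{C*} for C* = H^{-1}

# Proposition 1: Sigma_C - Sigma_opt is positive semi-definite
gap = 0.5 * ((Sigma_C - Sigma_opt) + (Sigma_C - Sigma_opt).T)
assert np.linalg.eigvalsh(gap).min() >= -1e-8
```

The choice $C=H^{-1}+D$ with $D$ symmetric positive definite guarantees, via Lemma 4, that $CH-I_d/2$ has eigenvalues with positive real part, so the Lyapunov solution exists and is positive semi-definite.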
Review 1: Summary: This work investigates the asymptotic behavior of a general class of preconditioned stochastic gradient descent methods. Under certain technical conditions, the authors characterize the asymptotic normality of the iterates and show that using the inverse of the Hessian as the conditioner leads to optimal variance. Strengths and Weaknesses: ### Strength: - Overall the paper is well-written with clear presentation. The results are corroborated with numerical experiments. - The analysis does not impose additional assumptions on the convergence rate of the preconditioners $\{C_k\}_{k\ge0}$, and shows that the algorithm would behave in the same way as an oracle version where $\lim_{k\to\infty}C_k$ is used instead. This technical novelty allows one to apply the analysis in a more general way and can be of independent interest. ### Weakness: - In section 3.4, the authors provide some additional convergence results that are helpful in applying the main results, which require the convergence of the iterates in the first place. The discussion is limited to the stationary points, while the main results also require certain conditions involving the final Hessian matrix $H$. Since a number of existing works have investigated how SGD algorithms escape from saddle points, it would be helpful if the authors could provide a more detailed discussion on dealing with $H$ in a non-convex setting. - Assumption 9 introduces a set of extended Robbins-Monro conditions on the preconditioners $\{C_k\}_{k\ge0}$, and essentially requires $\lim_{k\to\infty} C_k = 0$ (if I am not mistaken), which is not compatible with the conditions in Theorem 2. If this is the case, please make it clear in the discussion. ======== The authors' response has addressed my concerns and I recommend for accept. Requested Changes: See Strengths And Weaknesses.
Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper provides an asymptotic analysis of SGD, in terms of central limit theorems, for the case where the SGD is preconditioned. The result is novel. Strengths and Weaknesses: General remarks: The paper is cleanly written and reads well -- with interesting content. I also found the part related to the review and summary of existing methods well written. I found the overall contributions slightly limited. Requested Changes: I am not sure about the significance of the paper's contributions; I have the following questions/suggestions and am open to discussion: 1- I think the authors can make it clearer that in their case the policy $P_k$ has an explicit link to sampling patterns of data in Example 1. More generally, the "policy" seems to be the sampling distribution that computes the stochastic gradients. It is a bit of a different terminology -- but would be good to clarify this. 2- For A2-A5: I would appreciate some detailed explanation of the intuition behind the assumptions, optimally in the form of remarks. For example, A2 is standard learning rate decay but known to fail to work in practice. Why is A3 important and what does it "intuitively" bring? Same for A5, a thorough explanation of the assumption would be useful. This is assuming that the MSE (more correctly, the $2+\delta$ power of the error in the gradient) of the stochastic gradients is bounded "around the stationary point". Is it a mild assumption or a strong assumption? How does it compare to the usual bounded variance (or MSE) assumption in SGD? Also, a discussion about how these relate to standard assumptions for SGD (namely, $\mu$-strong convexity + $L$-smooth gradients) should be included. Are these weaker or stronger? 3- I found the sketch of the proof a bit unhelpful -- not clarifying the general idea much. Also it would be good to cite here the exact section in the Appendix where the proof can be found. 4- About Prop.
1 and Corollary 1: I think the results here could be made a bit more specific given the setting. While the optimal choice of the preconditioning matrix is the Hessian, this is often approximated in practice. What kind of approximation errors can be accounted for by the theory in this paper to hold? Is there a structure to this error? If the preconditioning matrix is a perturbed version (an approximation) of the Hessian, can the theory say anything approximate here? I think this would be a useful direction to enhance this paper and demonstrate the utility of the introduced result. 5- The last part of the paper seems to proceed under stronger assumptions (A6 onwards) to prove almost sure convergence. As I said above, it is also important to contrast these with A2-A5 regarding what/how they differ and compare. Broader Impact Concerns: theory work, no broader impact. ================================================== Review 3: Summary: This paper considers conditioned stochastic gradient descent and provides the weak convergence for this method for non-convex objective functions. Strengths and Weaknesses: Strength: All the proofs are clear to understand and easy to follow. Weakness: The practical implication of the provided analysis for the machine learning community is not discussed enough. The numerical evaluations are just for the convex settings. Also, assumption 5 may not hold for all the stationary points of a non-convex objective function. Requested Changes: Please bring up the practical implication of this analysis to the main part of the paper and add more examples about where CSGD can be utilized for ML models alongside showing that the assumptions of the paper hold for those models. Broader Impact Concerns: No broader impact concerns.
================================================== Metareview: Recommendation: Accept as is Comment: All reviewers have recommended acceptance, and the authors have already updated their paper addressing the points made by the reviewers. ==================================================
# Reproducibility Study Of Learning Fair Graph Representations Via Automated Data Augmentations ## Abstract In this study, we undertake a reproducibility analysis of "Learning Fair Graph Representations Via Automated Data Augmentations" by Ling et al. (2022). We assess the validity of the original claims focused on node classification tasks and explore the performance of the Graphair framework in link prediction tasks. Our investigation reveals that we can partially reproduce one of the original three claims and fully substantiate the other two. Additionally, we broaden the application of Graphair from node classification to link prediction across various datasets. Our findings indicate that, while Graphair demonstrates a comparable fairness-accuracy trade-off to baseline models for mixed dyadic-level fairness, it has a superior trade-off for subgroup dyadic-level fairness. These findings underscore Graphair's potential for wider adoption in graph-based learning. Our code base can be found on GitHub at https://anonymous.4open.science/r/graphair-reproducibility-2871. ## 1 Introduction Graph Neural Networks (GNNs) have become increasingly popular for their exceptional performance in various applications (Hamaguchi et al., 2017; Liu et al., 2022; Han et al., 2022a). A key application area is graph representation learning (Grover & Leskovec, 2016; Hamilton, 2020; Han et al., 2022b), where a significant concern is the potential for GNNs to inherit or amplify biases present in the graph representation training data. This can lead to discriminatory model behavior (Dai & Wang, 2022). To address this issue, Ling et al. (2022) introduced Graphair, an automated data augmentation technique aimed at learning fair graph representations without maintaining biases from the training data. This work evaluates the main claims made by Ling et al. (2022), which were based solely on the performance of the framework in node classification tasks. 
We expand our evaluation by conducting additional experiments to assess the adaptability and generalizability of Graphair through its application to a different downstream task, namely, link prediction. This approach allows us to further test the performance and fairness of the embeddings. For this purpose, we apply Graphair to new real-world datasets, adapting certain aspects of the framework and fairness metrics to suit link prediction. Our contributions are summarized as follows: - Replicating the original experiments to assess the reproducibility of the primary claims. We find that one of the three claims made by the original authors is partially reproducible and the rest of the claims are fully verified. - Adapting the Graphair framework to various real-world datasets for link prediction necessitated adjustments to both the framework and the fairness metrics. These modifications provided valuable insights into Graphair's adaptability and generalizability to another downstream task. Our findings indicate that Graphair demonstrates a superior trade-off in one of the fairness metrics used for this task. ## 2 Scope Of Reproducibility The original paper *Learning Fair Graph Representations Via Automated Data Augmentations* by Ling et al. (2022) introduces Graphair, an innovative automated graph augmentation method for fair graph representation learning. This approach stands out from prior methods (Luo et al., 2022; Zhao et al., 2022; Agarwal et al., 2021) by utilizing a dynamic and automated model to generate a new, fairer version of the original graph, aiming to balance fairness and informativeness (Ling et al., 2022). In this work, we study the reproducibility of this paper. Besides examining the three main claims, we also assess the adaptability and effectiveness of Graphair in a different context, namely link prediction. This extension tests the framework's ability to maintain fairness and informativeness in a different downstream task.
We will test this on a variety of datasets. The claims and our extension are as follows: - Claim 1: Graphair consistently outperforms state-of-the-art baselines in node classification tasks for real-world graph datasets. Our extension evaluates whether this superior performance extends to link prediction tasks as well. - Claim 2: Both fair node features and graph topology structures contribute to mitigating prediction bias. - Claim 3: Graphair can automatically generate new graphs with fair node topology and features. ## 3 Methodology ## 3.1 Graphair The *Graphair* framework introduces a novel approach to graph augmentation, focusing on the dual objectives of maintaining information richness and ensuring fairness. It incorporates a model $g$, which transforms an input graph $G=\{A,X,S\}$, where $A$ represents the adjacency matrix, $X$ the node features, and $S$ the sensitive attributes, into an augmented counterpart $G'=\{A',X',S\}$. This transformation utilizes two main operations: $T_A$ for adjusting the adjacency matrix $A$ and $T_X$ for masking node features $X$. A GNN-based encoder $g_{\mathrm{enc}}$ precedes these transformations, tasked with extracting deep embeddings to inform these operations. The resulting augmented graph $G'$ is designed to capture the core structural and feature-based elements of the original graph $G$, while simultaneously upholding fairness principles to prevent the propagation of sensitive attribute information. This objective is achieved by training an adversarial model and using a contrastive loss to optimize $G'$. An overview of the Graphair framework is presented in Figure 5 in Appendix A.1. ## 3.1.1 Adversarial Training For Fairness To ensure the fairness of the augmented graph, the model employs an adversarial training strategy. The adversarial model $k$ learns to predict sensitive attributes $S$ from the graph's features.
Simultaneously, the representation encoder $f$ and augmentation model $g$ are optimized to minimize the predictive capability of the adversarial model, effectively removing biases from the augmented graph. The primary goal of adversarial training is to guide the encoder in generating representations that are free from sensitive attribute information. This optimization can be formally defined as follows:

$$\operatorname*{min}_{g,f}\operatorname*{max}_{k}L_{\mathrm{adv}}=\operatorname*{min}_{g,f}\operatorname*{max}_{k}{\frac{1}{n}}\sum_{i=1}^{n}\left[S_{i}\log{\hat{S}}_{i}+(1-S_{i})\log(1-{\hat{S}}_{i})\right]\tag{1}$$

Here, $\hat S_i$ represents the predicted sensitive attributes, and $n$ is the number of nodes. ## 3.1.2 Contrastive Learning For Informativeness The model employs contrastive learning to enhance the informativeness of the augmented graphs. This method focuses on ensuring that the node representations of the original graph $h_i$ and the augmented graph $h'_i$ maintain a high degree of similarity, thereby preserving key information. The positive pair in this context is defined as any pair $(h_i,h'_i)$, where $h_i$ is the node representation in the original graph and $h'_i$ is the corresponding representation in the augmented graph. The contrastive learning function $l$, used to compute the loss for these positive pairs, is specified as follows:

$$l(h_{i},h_{i}^{\prime})=-\log\left(\frac{\exp(\mathrm{sim}(h_{i},h_{i}^{\prime})/\tau)}{\sum_{j=1}^{n}\exp(\mathrm{sim}(h_{i},h_{j})/\tau)+\sum_{j=1}^{n}\mathbb{I}_{j\neq i}\exp(\mathrm{sim}(h_{i},h_{j}^{\prime})/\tau)}\right)\tag{2}$$

where $\tau$ is the temperature scaling parameter, and $\mathbb I_{j\neq i}$ is an indicator function that equals 1 if $j\neq i$ and 0 otherwise. This loss function aims to minimize the distance between similar node pairs while maximizing the distance between dissimilar pairs in the embedding space.
The overall contrastive loss $L_{\mathrm{con}}$ is given by:

$$L_{\rm con}=\frac{1}{2n}\sum_{i=1}^{n}\left[l(h_{i},h_{i}^{\prime})+l(h_{i}^{\prime},h_{i})\right]\tag{3}$$

## 3.1.3 Reconstruction-Based Regularization To Ensure Graph Consistency To ensure that the augmentation model produces graphs that do not deviate significantly from the input graphs, the model includes a reconstruction-based regularization term in its overall training objective. Specifically, let $L_{BCE}$ and $L_{MSE}$ represent the binary cross-entropy and mean squared error losses, respectively. The regularization term is mathematically expressed as:

$$L_{\rm recon}=L_{BCE}(A,\widetilde{A^{\prime}})+\lambda L_{MSE}(X,X^{\prime})=-\sum_{i=1}^{n}\sum_{j=1}^{n}[A_{ij}\log{(\widetilde{A^{\prime}}_{ij})}+(1-A_{ij})\log{(1-\widetilde{A^{\prime}}_{ij})}]+\lambda||X-X^{\prime}||_{F}^{2}\tag{4}$$

Here, $\lambda$ is a hyperparameter, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. ## 3.1.4 Integrated Training Objective By emphasizing informativeness and fairness properties, *Graphair* seeks to produce augmented graphs that are less susceptible to bias. This approach contributes to fairer graph representation learning while preserving the valuable information contained in the data. The overall training process is described as the following min-max optimization procedure, where $\alpha$, $\beta$, and $\gamma$ are hyperparameters that balance the different loss components:

$$\min_{f,g}\max_{k}L=\min_{f,g}\max_{k}\alpha L_{\rm adv}+\beta L_{\rm con}+\gamma L_{\rm recon}\tag{5}$$

## 3.2 Datasets To replicate the main claims, we use the same datasets as the original paper by Ling et al. (2022), which include specific dataset splits and sensitive and target attributes. We employ three real-world graph datasets: NBA1, containing player statistics, and two subsets of the Pokec social network from Slovakia, namely Pokec-n and Pokec-z (Dai & Wang, 2021). The specifics of these datasets are summarized in Table 1.
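To make Eqs. (2) and (3) above concrete, here is a minimal NumPy sketch of the contrastive loss, assuming cosine similarity for sim(·,·); the function names are ours and not taken from the Graphair codebase:

```python
import numpy as np

def cosine_sim(A, B):
    # pairwise cosine similarities between rows of A and rows of B
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def contrastive_loss(H, Hp, tau=0.5):
    """Eq. (3): symmetrized loss over positive pairs (h_i, h'_i)."""
    n = H.shape[0]

    def one_side(X, Y):
        # Eq. (2): -log( exp(sim(x_i, y_i)/tau) /
        #   (sum_j exp(sim(x_i, x_j)/tau) + sum_{j != i} exp(sim(x_i, y_j)/tau)) )
        s_xy = np.exp(cosine_sim(X, Y) / tau)
        s_xx = np.exp(cosine_sim(X, X) / tau)
        pos = np.diag(s_xy)
        denom = s_xx.sum(axis=1) + s_xy.sum(axis=1) - pos
        return -np.log(pos / denom)

    return (one_side(H, Hp) + one_side(Hp, H)).sum() / (2 * n)

# augmented embeddings close to the originals incur a lower loss than unrelated ones
H = np.random.default_rng(1).standard_normal((8, 16))
Hp_close = H + 0.01 * np.random.default_rng(2).standard_normal((8, 16))
Hp_far = np.random.default_rng(3).standard_normal((8, 16))
```

Note that, exactly as written in Eq. (2), the first sum in the denominator runs over all $j$, including $j=i$.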
For the link prediction task, we utilize well-established benchmark datasets in this domain: Cora, Citeseer, and Pubmed (Spinelli et al., 2021; Chen et al., 2022; Current et al., 2022; Li et al., 2021). These datasets feature scientific publications as nodes with bag-of-words vectors of abstracts as node features. Edges represent citation links between publications. Notably, these datasets possess a broader range of features compared to those used by Ling et al. (2022) and have a larger set of possible sensitive attributes |S|. Full details are provided in Table 1.

1https://www.kaggle.com/datasets/noahgift/social-power-nba

| Dataset | S | |S| | Features | Nodes | Edges |
|-----------|-------------|-------|------------|---------|---------|
| NBA | nationality | 2 | 39 | 403 | 16,570 |
| Pokec-z | region | 2 | 59 | 67,797 | 882,765 |
| Pokec-n | region | 2 | 59 | 66,569 | 729,129 |
| Citeseer | paper class | 6 | 3,703 | 3,327 | 9,104 |
| Cora | paper class | 7 | 1,433 | 2,708 | 10,556 |
| PubMed | paper class | 3 | 500 | 19,717 | 88,648 |

Table 1: Dataset Statistics. ## 3.3 Experimental Setup We obtain the model's codebase from the DIG library, more specifically, from the FairGraph module2. To enhance reproducibility, we employ complete seeding across all operations, which was missing in some operations of the original code. A key difference between the experimental setup reported by the original authors and ours is that we conducted a 10,000-epoch grid search for the Pokec datasets, instead of the 500-epoch grid search initially reported by Ling et al. (2022). This modification was recommended by the original authors to enhance reproducibility. We refer to subsection A.4 for more details, where we show that a 500-epoch search does not yield optimal results for the Pokec datasets, but higher epoch counts improve performance in accuracy and fairness. To verify Claims 1 and 2, we follow the procedure described by Ling et al. (2022).
For Claim 3, due to memory constraints, we compute the homophily and Spearman correlation values for the Pokec datasets on a mini-batch instead of the entire graphs. Aside from this, we adhere to the same procedures for this claim as well. ## 3.4 Link Prediction In our study, we extend the scope of the original work by performing a different downstream task, namely link prediction. We implement several modifications to adjust the Graphair network for this task. First of all, we modify the output size of the adversarial network from two, which corresponded to the binary sensitive feature in the Pokec and NBA datasets, to the respective number of distinct values of the sensitive feature in the corresponding dataset. To create the link embeddings for the input of the classifier, we compute the Hadamard product of the node embeddings (Horn & Johnson, 2012), which pairs node embeddings effectively. For nodes $v$ and $u$, we define the link embedding $h_{vu}$ as:

$$h_{vu}=h_{v}\circ h_{u}\tag{6}$$

These new link embeddings require labels that reflect the sensitive attributes of the connected nodes. To this end, we integrate dyadic-level fairness criteria into our fairness assessment for link prediction. We form dyadic groups that relate sensitive node attributes to link attributes, following the mixed and subgroup dyadic-level fairness principles suggested by Masrour et al. (2020). The mixed dyadic-level groups classify links as either inter- or intra-group based on whether they connect nodes from the same or different sensitive groups. The subgroup dyadic-level approach assigns each link to a subgroup based on the sensitive groups of the nodes it connects. A subgroup is formed for each possible combination of sensitive groups. This method facilitates the measurement of fairness in two distinct aspects. Firstly, it assesses how well each protected subgroup is represented in link formation at the subgroup dyadic-level.
Secondly, it evaluates node homogeneity for links at the mixed dyadic-level. For assessing fairness in link prediction, we aim to optimize for Equality of Opportunity (EO) and Demographic Parity (DP) using these dyadic groups. Following the original paper's definitions, DP is defined as:

$$\left|\mathbb{P}({\hat{Y}}=1\mid D=0)-\mathbb{P}({\hat{Y}}=1\mid D=1)\right|$$

and EO as:

$$\left|\mathbb{P}({\hat{Y}}=1\mid D=0,Y=1)-\mathbb{P}({\hat{Y}}=1\mid D=1,Y=1)\right|$$

where $Y$ is a ground-truth label, $\hat Y$ is a prediction, and, in the case of link prediction, $D$ denotes the dyadic group to which the link belongs. These definitions extend to multiple dyadic groups ($|D|>2$), which is the case for subgroup dyadic analysis. As our final metrics, we define the Demographic Parity difference (∆DP) as the selection rate gap and the Equal Opportunity difference (∆EO) as the largest discrepancy in true positive rates (TPR) and false positive rates (FPR) across groups identified by $D$:

$$\begin{aligned}\Delta\mathrm{DP}&=\operatorname*{max}_{d}\mathbb{P}({\hat{Y}}=1\mid D=d)-\operatorname*{min}_{d}\mathbb{P}({\hat{Y}}=1\mid D=d),\\ \Delta\mathrm{TPR}&=\operatorname*{max}_{d}\mathbb{P}({\hat{Y}}=1\mid D=d,Y=1)-\operatorname*{min}_{d}\mathbb{P}({\hat{Y}}=1\mid D=d,Y=1),\\ \Delta\mathrm{FPR}&=\operatorname*{max}_{d}\mathbb{P}({\hat{Y}}=1\mid D=d,Y=0)-\operatorname*{min}_{d}\mathbb{P}({\hat{Y}}=1\mid D=d,Y=0),\\ \Delta\mathrm{EO}&=\operatorname*{max}(\Delta\mathrm{TPR},\Delta\mathrm{FPR}).\end{aligned}$$

Link prediction performance is measured using accuracy and the Area Under the ROC Curve (AUC) metrics, representing the trade-off between true and false positives.

2https://github.com/divelab/DIG/tree/dig-stable/dig/fairgraph

## 3.4.1 Baselines We adopt FairAdj (Li et al., 2021) and FairDrop (Spinelli et al., 2021) as our benchmarks. FairAdj learns a fair adjacency matrix during an end-to-end link prediction task.
It utilizes a graph variational autoencoder and employs two distinct optimization processes: one for learning a fair version of the adjacency matrix and the other for link prediction. We used two versions of FairAdj: one with 5 epochs (r = 5) and the other with 20 epochs (r = 20). FairDrop, on the other hand, proposes a biased edge dropout algorithm to counteract homophily and improve fairness in graph representation learning. We used two different versions of FairDrop: one with a Graph Convolutional Network (GCN) (Kipf & Welling, 2016) and the other with Graph Attention Networks (GAT) (Velickovic et al., 2017).

## 3.5 Hyperparameters

**Node Classification** To align our experiments closely with the original study, we adopt the hyperparameters specified by the authors. This includes conducting a grid search on the hyperparameters α, γ, and λ with the values {0.1, 1.0, 10.0}, as performed in the original work (Ling et al., 2022). We use the default settings from the original code where specific hyperparameters are not disclosed, a choice validated by the original authors. A complete list of all hyperparameters is provided in Table 7 in subsection A.2.

**Link Prediction** We replicate the grid search from the node classification experiments for link prediction on the Citeseer, Cora, and PubMed datasets. Initially, we conduct a grid search on the model parameters, including varying the number of epochs, the learning rates for both the Graphair module and the classifier, and the sizes of the hidden layers for both components. We select the most notable model setup based on performance metrics (accuracy and ROC) and fairness values, and then perform a subsequent grid search on the loss hyperparameters α, λ, and γ to fine-tune the model further. We compare the results of Graphair with baseline results from Spinelli et al. (2021) and Li et al. (2021), which also underwent grid searches.
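As an illustration, the grid search over the loss weights α, γ, and λ can be sketched as follows. The `train_and_eval` callable and the scalar selection score are placeholders (the actual selection weighs accuracy and fairness metrics jointly); they are not part of the Graphair code.

```python
from itertools import product

def run_grid_search(train_and_eval, grid=(0.1, 1.0, 10.0)):
    """Exhaustively search the loss weights (alpha, gamma, lambda).
    `train_and_eval` is a placeholder that trains a model with the given
    weights and returns (accuracy, fairness_gap)."""
    best, best_score = None, float("-inf")
    for alpha, gamma, lam in product(grid, repeat=3):
        acc, gap = train_and_eval(alpha, gamma, lam)
        score = acc - gap  # one simple way to combine accuracy and fairness
        if score > best_score:
            best, best_score = (alpha, gamma, lam), score
    return best

# dummy objective standing in for an actual training run
best = run_grid_search(lambda a, g, l: (0.7 - 0.01 * abs(a - 1.0), 0.02 * g))
```

Each of the 27 configurations corresponds to a full training run of Graphair and the downstream classifier, which is why the grid searches dominate the GPU budget reported in subsection 3.6.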
For FairAdj, we conducted a grid search focusing on model parameters, involving variations in learning rates, hidden layer sizes, the number of outer epochs, and the specific configuration parameter. We evaluated two versions of FairAdj: one with 5 epochs and the other with 20 epochs. For FairDrop, we also performed a grid search on the model parameters, testing different learning rates, epoch counts, and hidden layer sizes. We evaluated two configurations of FairDrop: one using a Graph Convolutional Network (GCN) and the other using Graph Attention Networks (GAT). More detailed information on the hyperparameters fine-tuned during the grid search for each model is presented in Table 8 in subsection A.2.

## 3.6 Computational Requirements

All of our experiments are conducted on a high-performance computing (HPC) cluster that features NVIDIA A100 GPUs divided into four partitions with a combined memory of 40 GB. For a detailed overview of the GPU hours required for each experiment, see Table 9 in subsection A.3. A rough estimate suggests that a total of 80 GPU hours is necessary to complete all experiments.

## 4 Results

This section presents the outcomes of our experimental evaluations aimed at reproducing and extending the findings of the original work on Graphair. We discuss the reproducibility of specific claims made in the original paper in subsection 4.1 and explore the performance of Graphair in a link prediction downstream task in subsection 4.2.

## 4.1 Results Reproducing Original Paper

Claim 1: To verify Claim 1, we performed node classification on the NBA, Pokec-n, and Pokec-z datasets. Table 2 compares the results reported by the original authors, including the baseline results they provide, with those we obtained by replicating the experimental setup described in subsection 3.3. Consistent with the original study, our results are derived by selecting the best outcome from the grid search procedure.
We observe that the results for the NBA dataset are similar to those reported by the authors. However, for the Pokec datasets, our Graphair model achieves better fairness scores at the cost of worse accuracies. When examining the fairness-accuracy trade-off in Figure 1, which uses the ∆DP fairness metric, we see that for the NBA dataset we can achieve a similar trade-off. For the Pokec-z data, a small discrepancy is reflected by a similar trend, but with lower accuracy scores. The Pokec-n dataset also shows a similar trend but fails to reach the higher accuracies of the original model. Considering that the code we used from the DIG library differs from what the original authors used, and that a different number of epochs (10,000 instead of the originally reported 500) was used for the Pokec datasets, we suspect there are still some differences between the experimental setups. Even though these discrepancies are probably minor, they do not allow us to achieve better performance in terms of the accuracy-fairness trade-off for all datasets compared to baseline models, which makes us only partially able to reproduce Claim 1.
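The ∆DP and ∆EO values reported in the tables below follow the max-min definitions from subsection 3.4; the same computation applies whether D indexes sensitive groups (node classification) or dyadic groups (link prediction). A minimal sketch with toy labels and predictions, not our actual evaluation code:

```python
import numpy as np

def delta_dp_eo(y_true, y_pred, groups):
    """Compute the Delta-DP and Delta-EO gaps over groups using the
    max-min definitions. Assumes every group contains both positive
    and negative ground-truth examples."""
    sel, tpr, fpr = [], [], []
    for d in np.unique(groups):
        m = groups == d
        sel.append(y_pred[m].mean())                  # P(Yhat = 1 | D = d)
        tpr.append(y_pred[m & (y_true == 1)].mean())  # P(Yhat = 1 | D = d, Y = 1)
        fpr.append(y_pred[m & (y_true == 0)].mean())  # P(Yhat = 1 | D = d, Y = 0)
    delta_dp = max(sel) - min(sel)
    delta_eo = max(max(tpr) - min(tpr), max(fpr) - min(fpr))
    return delta_dp, delta_eo

# toy example with two groups
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
d_dp, d_eo = delta_dp_eo(y_true, y_pred, groups)  # (0.25, 0.5)
```

Lower values of both gaps indicate a fairer classifier, matching the ↓ markers in the result tables.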
| | NBA | | | Pokec-n | | | Pokec-z | | |
|-----------------|--------------|--------------|--------------|--------------|-------------|-------------|--------------|--------------|--------------|
| Methods | ACC ↑ | ∆DP ↓ | ∆EO ↓ | ACC ↑ | ∆DP ↓ | ∆EO ↓ | ACC ↑ | ∆DP ↓ | ∆EO ↓ |
| FairWalk | 64.54 ± 2.35 | 3.67 ± 1.28 | 9.12 ± 7.06 | 67.07 ± 0.24 | 7.12 ± 0.74 | 8.24 ± 0.75 | 65.23 ± 0.78 | 4.45 ± 1.25 | 4.59 ± 0.86 |
| FairWalk+X | 69.74 ± 1.71 | 14.61 ± 4.98 | 12.01 ± 5.38 | 69.01 ± 0.38 | 7.59 ± 0.96 | 9.69 ± 0.09 | 67.65 ± 0.60 | 4.46 ± 0.38 | 6.11 ± 0.54 |
| GRACE | 70.14 ± 1.40 | 7.49 ± 3.78 | 7.67 ± 3.78 | 68.25 ± 0.99 | 6.41 ± 0.71 | 7.38 ± 0.84 | 67.81 ± 0.41 | 10.77 ± 0.68 | 10.69 ± 0.69 |
| GCA | 70.43 ± 1.19 | 18.08 ± 4.80 | 20.04 ± 4.34 | 69.34 ± 0.20 | 6.07 ± 0.96 | 7.39 ± 0.82 | 67.07 ± 0.14 | 7.90 ± 1.10 | 8.05 ± 1.07 |
| FairDrop | 69.01 ± 1.11 | 3.66 ± 2.32 | 7.61 ± 2.21 | 67.78 ± 0.60 | 5.77 ± 1.83 | 5.48 ± 1.32 | 67.32 ± 0.61 | 4.05 ± 1.05 | 3.77 ± 1.00 |
| NIFTY | 69.93 ± 0.09 | 3.31 ± 1.52 | 4.70 ± 1.04 | 67.15 ± 0.43 | 4.40 ± 0.99 | 3.75 ± 1.04 | 65.52 ± 0.31 | 6.51 ± 0.51 | 5.14 ± 0.68 |
| FairAug | 66.38 ± 0.85 | 4.99 ± 1.02 | 6.21 ± 1.95 | 69.17 ± 0.18 | 5.28 ± 0.49 | 6.77 ± 0.45 | 68.61 ± 0.19 | 5.10 ± 0.69 | 5.22 ± 0.84 |
| Graphair | 69.36 ± 0.45 | 2.56 ± 0.41 | 4.64 ± 0.17 | 67.43 ± 0.25 | 2.02 ± 0.40 | 1.62 ± 0.47 | 68.17 ± 0.08 | 2.10 ± 0.17 | 2.76 ± 0.19 |
| Graphair (ours) | 68.54 ± 0.32 | 1.31 ± 0.19 | 5.34 ± 0.32 | 65.76 ± 0.01 | 0.72 ± 0.34 | 0.41 ± 0.40 | 65.22 ± 0.01 | 1.32 ± 0.29 | 2.24 ± 0.31 |

Table 2: Comparison of the results reported by the original authors with those obtained by us.

![6_image_0.png](6_image_0.png)

Figure 1: ACC and DP trade-off for baselines, Graphair and our results for Graphair. The upper-left corner (high accuracy, low demographic parity) is preferable.

Claim 2: Table 3 confirms that Claim 2 is substantiated for both the NBA and Pokec-z datasets.
In comparisons of Graphair with and without feature masking (FM) or edge perturbation (EP), we observe notable increases in the fairness metrics ∆DP and ∆EO (i.e., reduced fairness) whenever a component is removed. This supports the claim that each model component contributes to mitigating prediction bias. Interestingly, we notice an increase in accuracy when removing edge perturbation across all datasets, a result that deviates from findings in the original work, which showcased similar accuracy scores when EP was removed. We attribute this to the use of more training epochs in our experimental setup. It seems logical that performance would improve when edge perturbation is removed, as the encoder model genc can utilize the original adjacency matrix. This allows the classifier to exploit increased homophily in the network, thereby increasing performance and worsening fairness.

Table 3: Comparisons among different components in the augmentation model.

| | NBA | | | Pokec-n | | | Pokec-z | | |
|------------------------|--------------|--------------|--------------|--------------|-------------|-------------|--------------|-------------|-------------|
| Methods | ACC ↑ | ∆DP ↓ | ∆EO ↓ | ACC ↑ | ∆DP ↓ | ∆EO ↓ | ACC ↑ | ∆DP ↓ | ∆EO ↓ |
| Graphair (ours) | 68.54 ± 0.40 | 1.31 ± 0.27 | 5.34 ± 0.24 | 65.76 ± 0.02 | 0.72 ± 0.36 | 0.41 ± 0.42 | 65.22 ± 0.02 | 1.32 ± 0.33 | 2.24 ± 0.35 |
| Graphair w/o EP (ours) | 72.68 ± 0.40 | 2.95 ± 1.12 | 9.05 ± 2.53 | 67.26 ± 0.16 | 2.11 ± 0.09 | 1.37 ± 0.33 | 69.25 ± 0.10 | 6.56 ± 0.74 | 6.73 ± 0.65 |
| Graphair w/o FM (ours) | 67.79 ± 0.32 | 10.73 ± 1.12 | 26.79 ± 2.53 | 64.61 ± 0.36 | 3.83 ± 0.47 | 3.18 ± 0.27 | 57.91 ± 0.13 | 5.19 ± 0.74 | 6.34 ± 0.33 |

Claim 3: The plots in Figure 2 clearly illustrate that the homophily values for the augmented graph are lower than those for the original graph. These results support the authors' claim that Graphair automatically generates new graphs with a fairer node topology.

![6_image_1.png](6_image_1.png)
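For reference, the node sensitive homophily shown in Figure 2 can be computed as the fraction of each node's neighbours that share its sensitive attribute. The adjacency matrix and attributes below are toy values, and the exact definition used in the Graphair code may differ in detail:

```python
import numpy as np

def node_sensitive_homophily(adj, s):
    """Per-node sensitive homophily: the fraction of a node's neighbours
    that share its sensitive attribute s (isolated nodes are skipped).
    `adj` is a dense 0/1 adjacency matrix."""
    vals = []
    for v in range(len(s)):
        nbrs = np.flatnonzero(adj[v])
        if nbrs.size:
            vals.append(np.mean(s[nbrs] == s[v]))
    return np.array(vals)

# toy graph: nodes 0-1 share attribute 0, nodes 2-3 share attribute 1
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]])
s = np.array([0, 0, 1, 1])
h = node_sensitive_homophily(adj, s)  # [0.5, 1.0, 0.5, 1.0]
```

Plotting the distribution of these per-node values for the original and the augmented adjacency matrix yields histograms of the kind shown in Figure 2; a shift towards lower values indicates a fairer topology.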
Figure 2: Node sensitive homophily distributions in the original and the fair graph data.

The plots in Figure 3 reveal that, for all three datasets, the features with the highest Spearman correlation to the sensitive feature generally exhibit lower values in the fair view. These findings lend support to the authors' claim that Graphair produces features that are fairer. We can therefore conclude that Claim 3 is fully reproducible.

![7_image_0.png](7_image_0.png)

Figure 3: Spearman correlation between node features and the sensitive attribute in the original and the fair graph data.

## 4.2 Results Beyond Original Paper

The performance of Graphair compared to baseline models is presented in Table 4, Table 5, and Table 6. For the fairness metrics, subscripts m and s denote application to mixed groups and subgroups, respectively. Our findings indicate that Graphair is outperformed by both FairDrop variants in terms of accuracy and AUC across all datasets. However, compared to FairDrop, Graphair excels in all fairness metrics on all datasets, particularly in the subgroup variants on the Citeseer and Cora datasets. FairAdj demonstrates comparable performance in accuracy and AUC relative to Graphair but performs worse in subgroup fairness metrics, only excelling in the ∆DPm metric across all datasets. We further investigate the trade-off between accuracy and ∆DP for both mixed and subgroup variants across the three datasets for each model, as shown in Figure 4. The trade-off for Graphair is comparable to the baseline models for the mixed variant, while achieving a superior trade-off for the subgroup variant. While Spinelli et al. (2021) suggest that high scores for subgroup fairness metrics are due to dataset characteristics, we find that Graphair improves this metric through its augmentative approach, which generates augmentations with similar predictive power across different subgroups.
This enables Graphair to consistently predict links with the same probability across various subgroups, resulting in lower subgroup dyadic-level fairness gaps while maintaining predictive power.

Table 4: Link Prediction on Citeseer

| Method | Accuracy ↑ | AUC ↑ | ∆DPm ↓ | ∆EOm ↓ | ∆DPs ↓ | ∆EOs ↓ |
|-----------------|--------------|------------|------------|-----------|------------|-------------|
| FairAdjr=5 | 77.1 ± 2.5 | 83.4 ± 2.3 | 33.8 ± 3.5 | 8.2 ± 4.1 | 65.6 ± 6.2 | 80.0 ± 9.0 |
| FairAdjr=20 | 73.5 ± 2.8 | 81.2 ± 3.0 | 26.0 ± 3.4 | 5.3 ± 3.5 | 56.5 ± 7.5 | 70.1 ± 8.0 |
| GCN+FairDrop | 87.3 ± 1.7 | 97.1 ± 1.6 | 46.7 ± 3.0 | 8.8 ± 4.5 | 68.9 ± 6.0 | 41.1 ± 10.0 |
| GAT+FairDrop | 86.3 ± 1.3 | 96.6 ± 1.2 | 45.0 ± 2.7 | 9.6 ± 4.8 | 68.4 ± 5.9 | 39.3 ± 9.8 |
| Graphair (ours) | 79.2 ± 0.7 | 86.9 ± 0.6 | 40.5 ± 0.6 | 1.1 ± 1.5 | 2.9 ± 3.0 | 11.3 ± 3.8 |

Table 5: Link Prediction on Cora

| Method | Accuracy ↑ | AUC ↑ | ∆DPm ↓ | ∆EOm ↓ | ∆DPs ↓ | ∆EOs ↓ |
|-----------------|--------------|------------|------------|------------|------------|-------------|
| FairAdjr=5 | 76.6 ± 1.9 | 83.8 ± 2.4 | 38.9 ± 4.5 | 11.6 ± 4.7 | 78.5 ± 5.3 | 100.0 ± 8.0 |
| FairAdjr=20 | 73.0 ± 1.8 | 78.9 ± 2.2 | 29.3 ± 3.1 | 4.7 ± 4.7 | 83.3 ± 7.2 | 97.5 ± 7.8 |
| GCN+FairDrop | 90.4 ± 1.1 | 97.0 ± 0.8 | 58.2 ± 2.7 | 17.3 ± 5.2 | 93.6 ± 3.7 | 98.5 ± 0.5 |
| GAT+FairDrop | 85.4 ± 1.4 | 96.2 ± 1.2 | 53.2 ± 3.0 | 17.8 ± 4.6 | 84.7 ± 2.3 | 98.2 ± 0.5 |
| Graphair (ours) | 75.2 ± 0.8 | 83.3 ± 0.9 | 38.7 ± 0.8 | 12.7 ± 1.7 | 37.9 ± 3.2 | 13.3 ± 2.1 |

Table 6: Link Prediction on PubMed

| Method | Accuracy ↑ | AUC ↑ | ∆DPm ↓ | ∆EOm ↓ | ∆DPs ↓ | ∆EOs ↓ |
|-----------------|--------------|------------|-------------|------------|------------|------------|
| FairAdjr=5 | 83.7 ± 3.2 | 90.6 ± 4.7 | 40.76 ± 3.7 | 35.5 ± 2.5 | 40.7 ± 3.2 | 16.7 ± 1.9 |
| FairAdjr=20 | 76.1 ± 1.8 | 83.7 ± 2.1 | 37.2 ± 2.5 | 22.6 ± 2.0 | 53.4 ± 5.5 | 39.9 ± 5.4 |
| GCN+FairDrop | 92.3 ± 0.4 | 97.4 ± 0.2 | 44.2 ± 0.5 | 7.6 ± 0.7 | 54.1 ± 1.5 | 15.3 ± 2.6 |
| GAT+FairDrop | 92.2 ± 0.8 | 97.3 ± 0.7 | 43.7 ± 0.9 | 7.8 ± 1.1 | 54.5 ± 2.1 | 14.3 ± 4.1 |
| Graphair (ours) | 82.3 ± 0.2 | 89.8 ± 2.9 | 37.6 ± 0.4 | 2.6 ± 0.7 | 36.4 ± 2.4 | 8.0 ± 2.4 |

![8_image_0.png](8_image_0.png)

Figure 4: ACC and DP trade-off for the baselines and our Graphair for link prediction. The top row shows the ∆DPm metric, and the bottom row shows the ∆DPs metric. Points in the upper-left corner are desired.

## 5 Discussion

Upon revisiting the three claims in our study, we find that Claim 1 is partially reproducible, whereas Claims 2 and 3 are fully reproducible. In the case of Claim 1, while we were able to replicate the performance of the NBA dataset consistent with the original paper, discrepancies emerged with the Pokec datasets. Specifically, our results showed improved fairness scores at the expense of lower accuracies compared to the original findings. This could be attributed to differences in experimental setup, particularly the number of training epochs used, which deviated from the original study's methods. We used 10,000 epochs for the Pokec datasets as opposed to the 500 reported in the original paper, a change recommended by the original authors. Further analysis of Graphair's performance in link prediction indicates that, while it demonstrates a comparable fairness-accuracy trade-off to baseline models for mixed dyadic-level fairness, Graphair has a superior trade-off for subgroup dyadic-level fairness.

## 5.1 What Was Easy And What Was Difficult

The clarity of the code within the DIG library (https://github.com/divelab/DIG) significantly facilitated reproducibility. The original paper provided a clear outline of the experiments, enabling a straightforward process to identify the necessary components for reproducing the study's claims and implementing our link prediction extensions. We encountered initial challenges with the reproducibility of Claim 1, which necessitated seeking clarification from the authors.
Correspondence with the original authors resolved issues related to unspecified hyperparameter settings and a bug in the code. Reproducing Claim 3 for the Pokec datasets proved non-trivial due to the large memory requirements for processing the full graph, necessitating a mini-batch workaround to obtain experimental results.

## 5.2 Communication With Original Authors

We initiated contact with one of the authors, Hongyi Ling, via email to seek clarification on our initial results that did not match those of the original paper. These discrepancies were resolved, and the authors responded promptly to our emails, providing valuable feedback. Most notably, they recommended a change in our experimental setup for the Pokec datasets, specifically increasing the number of epochs from 500 to 10,000.

## References

Chirag Agarwal, Himabindu Lakkaraju, and Marinka Zitnik. Towards a unified framework for fair and stable graph representation learning. In *Uncertainty in Artificial Intelligence*, pp. 2114–2124. PMLR, 2021.

April Chen, Ryan Rossi, Nedim Lipka, Jane Hoffswell, Gromit Chan, Shunan Guo, Eunyee Koh, Sungchul Kim, and Nesreen K Ahmed. Graph learning with localized neighborhood fairness. *arXiv preprint arXiv:2212.12040*, 2022.

Sean Current, Yuntian He, Saket Gurukar, and Srinivasan Parthasarathy. Fairegm: Fair link prediction and recommendation via emulated graph modification. In *Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization*, pp. 1–14, 2022.

Enyan Dai and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. In *Proceedings of the 14th ACM International Conference on Web Search and Data Mining*, pp. 680–688, 2021.

Enyan Dai and Suhang Wang. Learning fair graph neural networks with limited and private sensitive attribute information. *IEEE Transactions on Knowledge and Data Engineering*, 2022.

Aditya Grover and Jure Leskovec.
node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855–864, 2016. Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. Knowledge transfer for out-ofknowledge-base entities: A graph neural network approach. *arXiv preprint arXiv:1706.05674*, 2017. William L Hamilton. *Graph representation learning*. Morgan & Claypool Publishers, 2020. Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, and Enhua Wu. Vision gnn: An image is worth graph of nodes. *Advances in Neural Information Processing Systems*, 35:8291–8303, 2022a. Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, and Xia Hu. Geometric graph representation learning via maximizing rate reduction. In *Proceedings of the ACM Web Conference 2022*, pp. 1226–1237, 2022b. Roger A Horn and Charles R Johnson. *Matrix analysis*. Cambridge university press, 2012. Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. *arXiv* preprint arXiv:1609.02907, 2016. Peizhao Li, Yifei Wang, Han Zhao, Pengyu Hong, and Hongfu Liu. On dyadic fairness: Exploring and mitigating bias in graph connections. In *International Conference on Learning Representations*, 2021. Hongyi Ling, Zhimeng Jiang, Youzhi Luo, Shuiwang Ji, and Na Zou. Learning fair graph representations via automated data augmentations. In *The Eleventh International Conference on Learning Representations*, 2022. Yi Liu, Limei Wang, Meng Liu, Yuchao Lin, Xuan Zhang, Bora Oztekin, and Shuiwang Ji. Spherical message passing for 3d molecular graphs. In *International Conference on Learning Representations (ICLR)*, 2022. Youzhi Luo, Michael McThrow, Wing Yee Au, Tao Komikado, Kanji Uchino, Koji Maruhashi, and Shuiwang Ji. Automated data augmentations for graph classification. *arXiv preprint arXiv:2202.13248*, 2022. Farzan Masrour, Tyler Wilson, Heng Yan, Pang-Ning Tan, and Abdol Esfahanian. 
Bursting the filter bubble: Fairness-aware network link prediction. In *Proceedings of the AAAI conference on artificial intelligence*, volume 34, pp. 841–848, 2020.

Indro Spinelli, Simone Scardapane, Amir Hussain, and Aurelio Uncini. Fairdrop: Biased edge dropout for enhancing fairness in graph representation learning. *IEEE Transactions on Artificial Intelligence*, 3(3):344–354, 2021.

Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. *arXiv preprint arXiv:1710.10903*, 2017.

Tong Zhao, Wei Jin, Yozen Liu, Yingheng Wang, Gang Liu, Stephan Günnemann, Neil Shah, and Meng Jiang. Graph data augmentation for graph machine learning: A survey. *arXiv preprint arXiv:2202.08871*, 2022.

## A Appendix

## A.1 Overview Of The Graphair Model

![12_image_0.png](12_image_0.png)

Figure 5: Overview of the Graphair framework (Ling et al., 2022)

## A.2 List Of All Hyperparameters

Table 7: Overview of hyperparameters for the Graphair model m and the evaluation classifier c on all datasets.

| Hyperparameter | NBA | Pokec-n | Pokec-z | Citeseer | Cora | PubMed |
|------------------|-------|-----------|-----------|------------|--------|----------|
| α | 1.0 | 0.1 | 10.0 | 0.1 | 10.0 | 10.0 |
| β | 0.1 | 1.0 | 10.0 | 0.1 | 10.0 | 10.0 |
| γ | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| λ | 1.0 | 10.0 | 10.0 | 1.0 | 10.0 | 0.1 |
| chidden | 128 | 128 | 128 | 128 | 128 | 128 |
| clearning_rate | 1e-3 | 1e-3 | 1e-3 | 5e-3 | 5e-3 | 5e-3 |
| cweight_decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |
| mlearning_rate | 1e-4 | 1e-4 | 1e-4 | 1e-4 | 1e-4 | 1e-4 |
| mweight_decay | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 | 1e-5 |

Table 8 presents a comprehensive overview of the hyperparameters adjusted during the grid search. The initial seven rows correspond to the Graphair model, while the subsequent rows correspond to the baseline models used for link prediction tasks.
Regarding the epoch count for node classification with Graphair, 500 epochs were used for the NBA dataset and 10,000 for the Pokec datasets. For grid searches with the Graphair model for link prediction, the number of epochs was set to 200.

Table 8: Overview of all hyperparameters tuned in the grid search.

| Hyperparameter | Node Classification | Link Prediction |
|-----------------------------|-----------------------|--------------------------------------|
| α | {0.1, 1, 10} | {0.1, 1, 10} |
| γ | {0.1, 1, 10} | {0.1, 1, 10} |
| λ | {0.1, 1, 10} | {0.1, 1, 10} |
| Classifier lr | 1e-3 | {1e-2, 1e-3, 1e-4} |
| Model lr | 1e-4 | {1e-2, 1e-3, 1e-4} |
| Classifier Hidden Dimension | 128 | {64, 128, 256} |
| Model Hidden Dimension | 128 | {64, 128, 256} |
| Model lr (FairAdj)* | - | {0.1, 1e-2, 1e-3} |
| hidden1 (FairAdj) | - | {16, 32, 64} |
| hidden2 (FairAdj) | - | {16, 32, 64} |
| outer epochs (FairAdj) | - | {4, 10, 20} |
| Epochs (FairDrop) | - | {100, 200, 500, 1000} |
| Model lr (FairDrop) | - | {5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4} |
| Hidden dim. (FairDrop) | - | {64, 128, 256, 512} |

## A.3 GPU Run Hours

Table 9: Computational requirement overview.

| Name of the experiment | GPU Hours | Max GPU Memory Usage (GB) |
|-------------------------------|-------------------|-----------------------------|
| Claim 1 (grid search) | 20 | 3.70 |
| Claim 2 | 2 | 3.67 |
| Claim 3 | 0.5 | 3.70 |
| Link Prediction (grid search) | 60 | 3.70 |

GPU hours are measured as the time each experiment script needed from start to end. Maximum GPU memory usage is determined by the *max_memory_allocated* method from the PyTorch library.

## A.4 Impact Of The Number Of Epochs On The Accuracy And Fairness Results

![13_image_0.png](13_image_0.png)

Figure 6: Impact of the number of epochs on the accuracy and fairness results.
Following the original authors' recommendation to increase the number of epochs in order to replicate their results, we performed an ablation study to examine the impact of the number of epochs on accuracy and fairness metrics. As illustrated in Figure 6, increasing the number of epochs improves both accuracy and fairness. Based on these findings, we conducted our experiments using 10,000 epochs for the Pokec datasets, as this led to a notable increase in performance.
Review 1:

Summary: This paper conducts a reproducibility analysis of Graphair (ICLR 2023) with an extensive grid search, which results in new findings different from the existing papers. The paper also explores wider applications of Graphair.

Strengths and Weaknesses: The strengths of this paper are as follows.
- The reproducibility analysis is welcome for TMLR, which expects diverse and high-practicality submissions.
- The authors have established a more reasonable experimental setup compared with the original paper of ICLR 2023.
- The authors have released their codes.

The weaknesses of this paper are as follows.
- The authors have ignored a careful grid search of the other compared baselines. If only the proposed method benefits from a grid search, the comparison with the baselines is questionable.
- Graphair has poor accuracy while achieving high fairness. As shown in Table 4, Table 5, and Table 6, Graphair is far worse on the accuracy-based metrics: Accuracy and AUC. As we know, for graph tasks such as link prediction, a drop at the 0.01 level in AUC can be a significant drop. Therefore, Graphair is still not so powerful even if the authors have conducted a careful grid search.
- Although reproducibility analysis is a good topic for TMLR, we would like to see more insightful observations beyond the original paper, since TMLR has very high standards.

Requested Changes: Please address my concerns above.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The authors attempt to reproduce the results of the paper "Learning Fair Graph Representations Via Automated Data Augmentations" by Ling et al. (2022), where the Graphair framework was proposed. Furthermore, the authors extend the Graphair framework from the original node classification setting in Ling et al. (2022) to link prediction. The main findings are as follows:
- One of the claims in Ling et al.
(2022) can be fully verified, while the other two can only be partially verified.
- The Graphair framework also performs quite well on link prediction, resulting in smaller deviations from fairness.

Strengths and Weaknesses:

Strengths:
- Not only attempts to replicate prior work, but also extends its scope to link prediction.
- Addresses a topic that should be of high interest to TMLR readers.

Weaknesses:
- Findings are of questionable value to the community due to the limited scope of the study. It is unclear why the results obtained by the authors for Claims 1 and 2 differ so much from Ling et al. (2022). I elaborate further in Requested Changes.
- Several presentation issues. See both the major and minor issues listed in Requested Changes.

Requested Changes:

Major issues:
- Table 2 only shows a comparison between your implementation of Graphair and the results reported by the authors. The other methods that Ling et al. (2022) compare against are missing, so it is difficult to interpret the results. After examining Table 1 in Ling et al. (2022), I see that the accuracy values you are obtaining on the Pokec data sets are far lower than the other baselines, which is a large difference that needs to be accounted for. To further investigate the discrepancy you observe, consider expanding the scope of your replication study to include additional methods. Perhaps this will help you to identify what is causing the discrepancy with Graphair.
- The description of results for Claim 2 in Section 4.1 is misleading: "we observe clear increases in both fairness metrics and accuracy". Indeed, fairness metrics are improving, but accuracy is not. This discussion should be revised.

Minor issues:
- What is the FairAdj method in Section 4.2? It is not discussed at all. Add at least a citation and brief description.
- Hyperparameter $\lambda$ in Section 3.3 has not been mentioned previously in the paper, unlike the other 3 hyperparameters.
- Fairness metrics reported in Tables 4 and 5 have $m$ and $s$ subscripts, which are not described. I assume these correspond to the mixed and subgroup fairness principles discussed in Section 3.4.1.

Broader Impact Concerns: None that I observe.

==================================================

Review 3:

Summary: This paper studies the reproducibility of the paper "Learning fair graph representations via automated data augmentations" as well as extending the scope of the paper (the method of which is called GraphAir) to link prediction. The reproduction experiments of GraphAir show that two of the three claims are partially verified while the third one is fully verified. In addition, the authors experiment with GraphAir on link prediction tasks, with results showing the good performance of GraphAir.

Strengths and Weaknesses:

## Strengths
1. The reproduced paper GraphAir, as of March 2024, has 29 citations, so it has reasonable impact. Therefore, a reproduction of it will be of interest to some of the audience of TMLR.
2. The paper is generally well written and easy to follow.

## Weaknesses
Before I state the weaknesses of this paper, I would admit that I am not an expert in fair graph learning, so my comments are from a generally high level.
1. Not sure about the 'training instability of GraphAir'. This paper states in Section 4.1 that 'this issue might also stems from the observed training instability', but no evidence regarding the training instability is given or discussed, which makes it hard to understand the detailed cause of this issue.
2. I am not so sure about the setting of the citation datasets. According to the authors, the sensitive attributes are paper classes. I am wondering why the setting is done as such, as paper classes are generally not considered sensitive. Moreover, the paper classes are just the node labels and may be very directly related to the citation relations. Therefore, I am a bit confused about why the setting is done as such.
Of course, the authors say that this follows another paper (Spinelli et al. 2021), which is fine. However, I would like to know some rationale underlying it.
3. I am not so convinced by the results in Tables 4-6. As can be seen in Tables 4-6, there is a very clear trade-off between fairness and accuracy, and GraphAir commonly excels on fairness but not on accuracy. I think it is difficult to draw a conclusion about which method is definitely better, or whether the performance gap is large or small, as a comparison on two dimensions cannot be determined solely by one dimension. Are there any ways to make the comparison more thorough by involving both dimensions (accuracy, fairness)?

Requested Changes:
1. Please show the training stability issue of GraphAir, and discuss whether some common ways to mitigate instability (e.g. gradient clipping, regularization, larger batches, etc.) will work to mitigate it.
2. Please provide some insights about why the paper classes are used as sensitive attributes. I have a slight concern that this attribute may not act similarly to commonly considered sensitive attributes (e.g. gender, age, etc.).
3. Please consider some better evaluation methods to jointly evaluate both the fairness and the accuracy for Tables 4-6.

Broader Impact Concerns: No.

==================================================

Metareview:

Recommendation: Accept with minor revision

Comment: Two of the reviewers recommended accepting the paper while one recommended rejection. The points raised by the reviewer recommending rejection were (1) a lack of grid search for the baselines, which has been addressed in the revision, and (2) a concern regarding the relevance of reproducibility analysis for TMLR. Overall, and without further details from the reviewer, I believe the paper should be accepted.
When preparing the final version of the paper, please address the following two points raised by reviewer mrv3: - Section 3.1.3: lambda is missing in front of Frobenius norm in the second line of the equation. - Inappropriate use of text citations without parentheses in several locations, such as Section 3.4.1: "FairAdj Li et al. (2021) and FairDrop Spinelli et al. (2021)" should both be replaced with the parenthesis-based citations. This also appears in some other locations, such as Section 3.4: "the node embeddings Horn & Johnson (2012)" ==================================================
# Dynamic Window-Level Granger Causality Of Multi-Channel Time Series

Anonymous authors

Paper under double-blind review

## Abstract

Granger causality (GC) techniques facilitate the examination of temporal causal relationships by assessing the predictive utility of one time series for another. However, conventional GC approaches are limited to linear scenarios and assume that causalities exclusively exist between entire time series channels, remaining constant over time. This assumption falls short when dealing with real-world time series data characterized by dynamic causalities among channels. To address this limitation, we present the dynamic window-level Granger causality with causality index (DWGC-CI) method, which incorporates nonlinear window-level variability into the traditional GC framework. The DWGC-CI method specifically employs F-tests on sliding windows to assess forecasting errors, subsequently extracting dynamic causalities using a causality index. This index involves the introduction of a negative feedback adjustment mechanism, which mitigates window-level noisy fluctuations and helps extract dynamic causalities. We theoretically demonstrate that, compared to traditional channel-level GC methods, DWGC-CI accurately captures window-level GCs. In experimental evaluations involving two synthetic datasets, DWGC-CI significantly surpasses the baseline in terms of accuracy and recall rates. Furthermore, when applied to two real-world datasets, DWGC-CI effectively identifies seasonally and annually varying GCs, demonstrating markedly better consistency with domain-specific knowledge of seasonal meteorology and stock market dynamics.

## 1 Introduction

Time series data, characterized by a pre-defined temporal or sequential order Hamilton (1994), is extensively employed in a diverse range of real-world applications, encompassing signal processing Scharf (1991), economics Granger & Newbold (2014), and climatology Mudelsee (2013), among others. Common tasks involving time series data consist of indexing, clustering, classification, regression Keogh & Kasetty (2003), and causality detection Eichler (2012). Among these tasks, causality detection, which emphasizes cognitive reasoning Pearl (2018), offers a more comprehensive understanding of system behaviour compared to traditional statistical machine learning and deep learning methods that rely on correlation fitting. Owing to the inherent nature of temporal precedence in time series data, it not only encapsulates empirical patterns of variable changes but also captures prior knowledge of causal relationships among distinct channels Eichler (2013).
Common tasks involving time series data consist of indexing, clustering, classification, regression Keogh & Kasetty (2003), and causality detection Eichler (2012). Among these tasks, causality detection, which emphasizes cognitive reasoning Pearl (2018), offers a more comprehensive understanding of system behaviour compared to traditional statistical machine learning and deep learning methods that rely on correlation fitting. Owing to the inherent nature of temporal precedence, time series data not only encapsulates empirical patterns of variable changes but also captures prior knowledge of causal relationships among distinct channels Eichler (2013). To deal with the causal reasoning task in time series, Granger (1969) first proposed a statistical test to decide whether one time series channel is influential when predicting another, which is known as Granger causality (GC). With the GC method, causality can be obtained from temporal data in various domains, such as physics Kaufmann & Stern (1997), biology Angelini et al. (2009), and economics Comincioli (1996). Nonetheless, conventional methods Xu et al. (2016); Wu et al. (2022); Granger (1969) presuppose that GC remains invariant across time series channels over time, a notion we refer to more precisely as "channel-level GC". While this static channel-level assumption facilitates the resolution of numerous Granger tests, it is not obligatory; it mainly serves to streamline the Granger testing process Granger (1980).

![1_image_0.png](1_image_0.png)

Figure 1: The channel-level approach (a) assumes that causal influences between time series channels remain consistent throughout the duration of the series. In contrast, the window-level causality method (b) involves scrutinizing the dynamic evolution of causal interactions within sliding windows applied to the time series data. This latter approach allows for a more nuanced understanding of shifting causality patterns (shown as four-wave crests) over time.

More crucially, this assumption contradicts numerous practical situations involving dynamic causalities. In reality, time series data has grown increasingly voluminous, intricate, and uncertain, causing causality relationships to evolve in conjunction with the sliding windows of time series data (refer to the four-wave crests in Fig. 1). Notable applications encompass seasonal meteorology and stock market dynamics, which will be detailed in Sec. 5. Consequently, dynamic window-level causalities, when offering reliable and precise predictions, are better suited for real-world applications. Addressing the challenge of time-varying window-level causalities, preliminary efforts have been made in specific application domains. One notable example is the field of neuroscience, where the "dynamic causal modelling" (DCM) method Friston et al. (2003) identifies dynamic causal relationships between neuronal clusters by formulating differential equations that capture neural dynamics. However, the DCM approach is grounded in a domain-specific hypothesis-testing framework, rendering it unsuitable for the broader GC context. Although alternative time-varying GC methods have been proposed Cekic et al. (2018); Pagnotta et al. (2018); Ming-Hsien & Chih-She (2015), these approaches predominantly rely on linear forecasting models and inadequately account for local fluctuations. For example, it is challenging to linearly model the series data depicted in Fig. 1, and, more importantly, such fluctuations might result from random noise rather than causal relationships, leading to significant inaccuracies in local-window GC. In summary, current research on the theoretical underpinnings and extraction techniques for window-level time-varying GC is not mature enough, necessitating further investigation.
## 1.1 Our Proposal

In this study, we propose a novel approach to GC analysis by introducing dynamic window-level GC with causality index (DWGC-CI), which extends the conventional GC framework by relaxing the assumption of constant causality on sufficiently long time series Granger (1980). Our method is insensitive to the selection of the prediction model, and we adopt the nonlinear autoregressive (NAR) model Chen et al. (2000) for simplicity. We first present the dynamic window-level GC (DWGC) method, which employs a window-level F-test to compare prediction results on sliding windows, with GC being the special case in which the entire channel is treated as a single window. Second, to mitigate potential fluctuations in the window-level F-test, we establish a causality index matrix by optimizing its associated index loss, culminating in the complete DWGC-CI methodology. In this study, we provide a comprehensive elucidation of the theoretical guarantees and practical advantages of DWGC-CI. Theoretically, we demonstrate that: 1) the recall rate of DWGC increases with window length, ultimately converging to that of GC; this mathematically reveals a *granularity-efficacy trade-off*, which drives the proposal of the DWGC-CI technique; and 2) by incorporating the supplementary CI, DWGC-CI overcomes this trade-off and surpasses DWGC/GC in terms of recall rate. Experimentally, to evaluate the performance of DWGC-CI, we conduct experiments on four datasets, including two synthetic and two real-world datasets. For the synthetic data, we employ linear autoregressive (AR) and NAR simulators for generation, with DWGC-CI achieving superior window-level causality detection compared to the baseline regarding the recall rate and accuracy rate. In the case of real-world datasets, DWGC-CI effectively uncovers dynamic causalities in the El-Niño and East Asian monsoon phenomena, as well as the relationship between the stock market and the economy.
These findings are consistent with expert knowledge previously reported in the literature. In summary, the contributions of this paper are threefold:

- We are the first to propose a general dynamic window-level GC detection method taking advantage of the causality index, called DWGC-CI.
- We provide the theoretical proof demonstrating that the DWGC method contains a granularity-efficacy trade-off. Moreover, the recall rate of DWGC can be enhanced by utilizing the causality index trick, which is integrated within the final comprehensive DWGC-CI methodology.
- We evaluate the performance of our DWGC-CI method through a series of numerical experiments involving two synthetic and two real-world datasets. Our findings reveal that the DWGC-CI approach successfully identifies window-level causalities that align with previously established academic knowledge.

## 2 Preliminaries Of Granger Causality

We use (Yi,1, Yi,2, · · · , Yi,t, · · ·) to denote the time series channel i ∈ {1, 2, ...d}, where Yi,t and Yi,<t are the series channel i at and before time t. 1 In this paper, we use the double-headed arrow to represent the "channel-level GC" Granger (1969) from channel j to i: Yj =⇒ Yi. The GC from the series channel j to i holds if channel j provides useful information when predicting the future values of series i. In detail, the traditional GC uses an AR model g to predict future values:

$$\hat{Y}_{i,t}=g_{i}(Y_{i,<t}),\quad\hat{Y}_{i|j,t}=g_{i}(Y_{i,<t},Y_{j,<t}).\tag{1}$$

Then GC can be built based on an F-test to check the difference between Yˆi|j,t and Yˆi,t, i.e., the forecasting result of channel i with and without channel j:

$$F_{j\Longrightarrow i}=\sum_{t}(\hat{Y}_{i,t}-Y_{i,t})^{2}/\sum_{t}(\hat{Y}_{i|j,t}-Y_{i,t})^{2},\tag{2}$$

where gi is the prediction model for time series i (e.g., the AR model in the traditional GC method).
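To make the preliminaries concrete, the channel-level F-test of Eqs. (1)-(2) can be sketched as follows. This is only an illustrative sketch: it substitutes a least-squares linear AR model for the prediction model gi, and the helper names (`ar_predict`, `granger_f`) and the lag order p are our own choices rather than the paper's.

```python
import numpy as np

def ar_predict(y_target, y_extra=None, p=2):
    """One-step least-squares AR(p) predictions of y_target (Eq. 1),
    optionally conditioning on lags of an extra channel."""
    rows, targets = [], []
    for t in range(p, len(y_target)):
        feats = list(y_target[t - p:t])
        if y_extra is not None:
            feats += list(y_extra[t - p:t])
        rows.append(feats)
        targets.append(y_target[t])
    X, Y = np.asarray(rows), np.asarray(targets)
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return X @ coef, Y

def granger_f(y_i, y_j, p=2):
    """F-statistic of Eq. (2): restricted SSE over full-model SSE."""
    pred_r, truth = ar_predict(y_i, None, p)   # without channel j
    pred_f, _ = ar_predict(y_i, y_j, p)        # with channel j
    return np.sum((pred_r - truth) ** 2) / np.sum((pred_f - truth) ** 2)

# toy check: y_j drives y_i with lag 1, so F_{j => i} should exceed eps = 1
rng = np.random.default_rng(0)
y_j = rng.normal(size=500)
y_i = 0.8 * np.roll(y_j, 1) + 0.1 * rng.normal(size=500)
print(granger_f(y_i, y_j))   # clearly above 1
print(granger_f(y_j, y_i))   # close to 1: no reverse causality
```

In DWGC-CI, the paper later replaces this linear g with an NAR model; the test logic itself is unchanged.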
The criterion of this hypothesis test is that we accept that the GC Yj =⇒ Yi exists when Fj=⇒i in Eq. (2) is larger than a pre-defined causality threshold ϵ (e.g., ϵ ≥ 1). 2

The generalization of GC methods can be categorized into two primary domains: window-level and nonlinear. However, achieving a concise and effective integration of both within the general GC framework remains a challenging task. Regarding window-level GCs, the hypothesis-testing framework of dynamical causal modelling (DCM) Friston et al. (2003) is incompatible with the general GC framework. Additionally, other time-varying methods Cekic et al. (2018); Pagnotta et al. (2018); Ming-Hsien & Chih-She (2015) simply incorporate time-varying coefficients in regression models, but they exhibit suboptimal performance in handling irregular nonlinear cases. As for nonlinear GCs, Chen et al. (2004) employed a delay embedding technique to extend nonlinear Granger causality. Later, Sun (2008) suggested using kernel embeddings to address the nonlinearity. Subsequently, Sugihara et al. (2012) expanded the traditional GC to the Convergent Cross Mapping (CCM) theorem, which is based on Takens' embedding theorem Takens (1981). Moreover, there are additional attempts via tools such as transfer entropy Syczewska & Struzik (2015), causal structure learning Weichwald et al. (2020), and a spectrum-based framework Zhang et al. (2016).

1 In practice, we follow GC's basic assumption that the time series Y is stationary: for the generating process, the joint probability distribution function of a piece of the time series does not change over time. To achieve this, we first conduct stationarity tests and then perform appropriate transformations to ensure stationarity.

2 The well-known premise of Eq. (2) is that series channels i and j meet the "backdoor condition" Pearl & Robins (1995); see Appendix G.
Eichler (2013) subsequently re-summarized the challenges associated with transferring GC to the nonlinear version, which include: 1) aggregating the time-varying coefficients required by Granger causality tests and 2) the instability of the causal structure itself. More recently, neural networks have been introduced Tank et al. (2018); Xu et al. (2019); Duggento et al. (2019); He et al. (2019) to supplant the original autoregressive (AR) model for sequence fitting and prediction. Regrettably, the combination of window-level GC and nonlinear GC is quite challenging, since it is hard to see through the local noisy information of complex data to extract informative GC. To the best of our knowledge, no universal and simple framework has yet been developed, and currently there are only several attempts in specific engineering applications (such as EEG signals Li et al. (2012) and switching systems Ma et al. (2014)).

In addition to GC, causal graph modelling serves as an alternative approach for discerning causal relationships between distinct time series channels Xu et al. (2019). Causal graph models can be constructed using various methodologies, including vector autoregressive Granger analysis (VAR), generalized additive models (GAMs) Lütkepohl (2005), and specific predefined regression models Sommerlade et al. (2012). However, generating a causality graph for each time point presents a significant challenge due to the increased computational complexity, particularly when identifying dynamic window-level causality. Furthermore, after graph construction, Mastakouri et al. (2021) theoretically analyzed the conditions for dynamic causality identification in a time series graph, but their theorem is based on complex assumptions that are computationally hard to verify in the real world (such as considerable numbers of conditional independence tests), and it is also challenging to concurrently obtain all window-level causal relationships.
## 3 Method

## 3.1 DWGC Model

To detect dynamic window-level causality between two data series Yi and Yj , we first define sliding windows of length k at the same time position t of both series:

$$\{Y_{i,t},Y_{i,t+1},\cdots,Y_{i,t+k-1}\},\quad\{Y_{j,t},Y_{j,t+1},\cdots,Y_{j,t+k-1}\}.$$

Then we consider two forms of time-series fitting on the sliding windows of channel i, with or without the information from channel j. Specifically, to fit the nonlinearity of the data, we use the NAR model as g in Eq. (1), and modify Eq. (2) as follows:

$$F^{t,k}_{j\Longrightarrow i}=\sum_{q}(\hat{Y}_{i,q}-Y_{i,q})^{2}/\sum_{q}(\hat{Y}_{i|j,q}-Y_{i,q})^{2},\quad q\in[t,t+k-1],\tag{3}$$

where Yˆi,q and Yˆi|j,q denote the predictions on the local sliding window without and with the information of channel j, respectively. F t,k j=⇒i denotes the F-statistic on the sliding window [t, t + k − 1] from channel j to i. Analogously to Eq. (2), the criterion is that we accept that the window-level causality Yj,t∼t+k−1 =⇒ Yi,t∼t+k−1 exists if F t,k j=⇒i is larger than the pre-defined threshold ϵ (≥ 1). In this sense, GC is the special case of the DWGC method in which k equals the channel length. Notably, DWGC strikes a compromise between time-invariance and time-variation: the prediction exploits not only the information of the current window but also the information of the channel up to time point t. The task of DWGC is to accurately determine whether channel i has received a causal effect from channel j within the window t ∼ t + k − 1.

## 3.2 DWGC-CI Model

In the preceding section, it was observed that the reduction in length from the channel (Eq. 2) to the window (Eq. 3) renders the DWGC method susceptible to instability arising from NAR forecasting results. To address this issue, we introduce the "causality index (CI) matrix", designed to mitigate the temporal fluctuations of NAR forecasting outcomes.
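As a concrete illustration of the window-level statistic in Eq. (3), whose fluctuations the CI matrix just introduced is designed to damp, the following sketch computes one F value per non-overlapping window from per-time-point squared prediction errors. The function name and the toy error arrays are our own illustrative choices.

```python
import numpy as np

def window_f_stats(err_without_j, err_with_j, k):
    """Window-level F-statistics of Eq. (3).

    err_without_j / err_with_j: per-time-point squared prediction
    errors (Y_hat - Y)^2 of channel i, without / with channel j.
    Returns one F value per non-overlapping window of length k.
    """
    L = len(err_without_j)
    stats = []
    for t in range(0, L - k + 1, k):
        stats.append(np.sum(err_without_j[t:t + k]) /
                     np.sum(err_with_j[t:t + k]))
    return np.asarray(stats)

# toy errors: channel j only helps in the first window
eps = 1.0
errs_r = np.array([4.0] * 10 + [1.0] * 10)   # errors without channel j
errs_f = np.array([1.0] * 20)                # errors with channel j
print(window_f_stats(errs_r, errs_f, k=10) > eps)   # [ True False]
```

Setting k to the full channel length recovers the single channel-level test of Eq. (2).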
This approach can be likened to a negative feedback regulation mechanism, wherein iterative training is performed and varying weights are assigned to each time point to stabilize the overall F-statistic results. Specifically, each original point Yi,t is weighted by an index matrix denoted as Φ = {Φi,t}, which is taken as an additional input layer of the NAR model; the weighted item {Φi,t ∗ Yi,t} is denoted as:

$$Y^{\Phi}_{i,t\sim t+k-1}=\Phi_{i,t\sim t+k-1}*Y_{i,t\sim t+k-1},\ i=1,2,...d,\tag{4}$$

where Yi,t∼t+k−1, Φi,t∼t+k−1, Y Φ i,t∼t+k−1 are the column vectors consisting of the k elements in the window [t, t + k − 1], denoting the original data, the weighting factor, and the weighted data respectively, and ∗ is the Hadamard product. We establish a minimization goal to learn the indexing matrix Φ to scale down the temporal fluctuations:

$$L_{index}=\sum_{i,q\in M}\left\|\Phi_{i,q}-h\left(\left|(\hat{Y}_{i,q}^{\Phi}-Y_{i,q}^{\Phi})^{2}-\mathbb{E}_{q}(\hat{Y}_{i,q}^{\Phi}-Y_{i,q}^{\Phi})^{2}\right|\right)\right\|_{2}^{2},\tag{5}$$

where M is the set of sliding windows in which causality was detected and h is a monotonically decreasing function3. On this basis, the effectiveness of the "causality index (CI) matrix" can be explained as dynamic negative feedback regulation: in each iteration, it regulates the overall forecasting errors on each window in which causality was detected by reducing the effect of adverse factors such as local noise and local over-fitting. For instance, when the term (Yˆi,q − Yi,q)² is much larger (noise) or smaller (over-fitting) than the others, which is a statistically significant anomaly, then |(Yˆi,q − Yi,q)² − E_q(Yˆi,q − Yi,q)²| will be significantly large. We use the negatively correlated index Φi,q to weaken its fluctuation effect on the local-window F-statistics.

Remark 1. We provide further justification of L_index. A: The reason for not updating the CI on the entire area.
Notably, we aim to increase the credibility of areas where there may be potential causal relationships with significant fluctuations by excluding the influence of noise. The specific approach is to weaken the impact of excessive outliers in these "suspicious areas" (as they are likely noise). This compels us to focus on updating the coefficients of the CI matrix in the "suspected" regions of causal relationships rather than considering the entire area. It should be noted that this weighted filtering pattern is dynamic, because the extracted set of causal-relationship windows may vary in each round. Our experimental results further support this motivation, since the performance is weaker if we conduct the update for all windows.

B: The reason for the comparison with E_q(Yˆ Φ i,q − Y Φ i,q)². Due to the unstable and easily disrupted nature of DWGC under window settings, when this indicator significantly increases, there are two factors at play: a) a genuine causal effect (observing the influences), and b) local window noise (observing outliers of the instantaneous errors in the auto-regressive model). Calculating the differences between the instantaneous errors and the expectation over instantaneous errors aims to disentangle these two factors. Specifically, in the windows where potential causal relationships exist, we actively detect significant outliers in this set of instantaneous errors. These outliers are more likely to be influenced by real-time noise compared to other points. Through a negatively correlated (monotonically decreasing) function h, we apply a negative feedback adjustment to the weighted values (CI matrix) corresponding to these outliers. This reduces the impact of these noisy points on the final result's weight (since the absolute prediction error is often proportional to the scale of the data points themselves).
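The negative feedback adjustment described in Remark 1 can be sketched for a single detected window as follows, assuming the concrete choice h(·) = α − tanh(·) with α > 1 given in the footnote; the function name `ci_targets` and the toy error values are our own illustration, not the paper's code.

```python
import numpy as np

def ci_targets(sq_err, alpha=1.2):
    """Target values for the causality-index entries of one detected
    window (the h(|...|) term inside Eq. (5)), using h = alpha - tanh.

    sq_err: squared instantaneous prediction errors in the window.
    Points whose error deviates strongly from the window mean (likely
    noise or over-fitting) receive a smaller weight."""
    deviation = np.abs(sq_err - sq_err.mean())
    return alpha - np.tanh(deviation)

errs = np.array([0.1, 0.1, 0.1, 5.0])   # last point is an outlier
weights = ci_targets(errs)
print(weights)   # the outlier receives the smallest weight
```

Minimizing L_index in Eq. (5) then drives the CI entries Φi,q of the window toward such targets; in the full method this happens jointly with NAR training via automatic differentiation.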
On this basis, the window-level F-statistics in each iteration are computed by the F-test method analogously to Eq. (3), and we denote the training data/model outcomes in each iteration as Y Φ i,q, Yˆ Φ i,q, Yˆ Φ i|j,q. We adopt these to iteratively determine the window set M in Eq. (5):

$$F_{j\Longrightarrow i}^{t,k}=\sum_{q}\left(\hat{Y}_{i,q}^{\Phi}-Y_{i,q}^{\Phi}\right)^{2}/\sum_{q}\left(\hat{Y}_{i|j,q}^{\Phi}-Y_{i,q}^{\Phi}\right)^{2},\quad q\in[t,t+k-1].\tag{6}$$

3 The choice of h is not unique; in this paper we set h(·) = (α − tanh(·)), α > 1.

Algorithm 1 DWGC-CI method
Data: Multi-channel time series {Yi,t} and predefined F-test threshold ϵ.
Initialization: Φ set to all ones;
while the NAR forecast loss and L_index have not converged do
  Reweight the original time series using the causality index matrix Φ via Eq. (4);
  Train an NAR model using the reweighted series {Y Φ i,t};
  Find dynamic causalities via Eq. (6);
  In all the window pairs with detected causalities, optimize L_index in Eq. (5) to update Φ;
end
Result: Window-level GC {Yj,t∼t+k−1 =⇒ Yi,t∼t+k−1}, i, j ∈ {1, 2, ...d}.

In summary, the DWGC-CI method uses the NAR model's prediction errors to reversely control the causality index, which can be seen as the input-layer weights of the NAR model, and detects causalities in this process. The final procedure is outlined in Algorithm 1. 4

## 3.3 Practical Issues

Selection of sliding window size. The sliding window size can best be selected based on background knowledge of the GC period. In the following experimental part, for the monsoon investigation, we employ a monthly sliding window to investigate the causality of seasonal fluctuations, whereas, in the stock market analysis, an annual sliding window is utilized to examine the causality of yearly variations. We illustrate in Appendix E.1 that, when our choice of sliding window size is clearly not in line with a priori intuition, it is likely that we will not reach an informative conclusion about GC.
In cases where background knowledge is not accessible, our approach relies on previously established sliding-window-size selection methods from the time series forecasting literature, which could potentially yield suboptimal outcomes. These methods encompass: 1) preemptively categorizing point-in-time sample features to ensure that samples sharing similar attributes are grouped within the same window; 2) introducing minor perturbations to the dataset and evaluating the robustness of DWGC-CI, with the optimal window length corresponding to the most robust result; and 3) implementing a ROC-based technique to determine the appropriate sliding window size, as detailed in Fkih et al. (2012).

Precision-recall curve. The precision and recall rates are both intimately related to the selection of the specific threshold value, denoted as ϵ. As ϵ increases, the likelihood of a window being accepted as window-level GC diminishes, leading to a decrease in the recall rate. Simultaneously, due to the more conservative decision criterion, the precision rate is expected to increase correspondingly. A thorough precision-recall trade-off with ϵ is presented in Appendix E.2, in which we also show that DWGC-CI attains a relatively low false-positive rate when no GC ground truth exists between two series.

## 4 Theoretical Analysis

In this section, we give the theoretical analysis of our methods. 1) We present an important 'negative result' regarding the performance of traditional GC methods at the window level: their performance gradually worsens as the window size decreases, which aligns with the intuition that "window-level instability" is prone to occur. 2) This negative result serves as a solid foundation for the design of DWGC-CI: we need to introduce a new force, namely the dynamically adjustable CI matrix, to counteract the diminishing power of the GC model as the window size decreases.
Intuitively, the robustness and stability of DWGC are in direct proportion to the window length and can be enhanced by the causality index (CI) based on the view of negative feedback regulation. Mathematically, we re-organize this statement into two theorems: 1) DWGC's recall rate increases with window length and degenerates to GC's, and 2) DWGC-CI's recall rate surpasses DWGC's at window length and GC's at channel length. For definitions, we analytically consider the series channel length as L and the sample pairs Tj, Ti. Naturally, the recall rate below refers to the overall probability of detecting Fj=⇒i > ϵ on retrieved windows with causalities. The definition of accuracy is analogous.

## 4.1 DWGC: Granularity-Efficacy Trade-Off

We first analyze the granularity-efficacy trade-off of DWGC. We choose the sliding window length as granularity and the recall rate as efficacy. We refer readers to additional criteria of efficacy (such as the accuracy rate) in the experiment part.

Assumption 1. The output of the machine learning model g(·) approximates a Gaussian distribution. Namely, gi(Yi,<t) ∼ N(Yi,t, ·), gi(Yi,<t, Yj,<t) ∼ N(Yi,t, ·).

Theorem 1 (Positive correlation between recall rate and window length of DWGC). Suppose Assumption 1 holds. Assume the channel-level GC exists (we have enough confidence to reject it only when Fj=⇒i < ϵ); moreover, the outputs satisfy an unbiased and stable Gaussian distribution with the same magnitude of fluctuations at each data point, i.e., the Gaussian distributions of these outputs share a uniform noise variance when conducting the same regression model. Then the DWGC's recall rate satisfies P(F t,k2 j=⇒i > ϵ) − P(F t,k1 j=⇒i > ϵ) = O((k2 − k1)2^(−k1)).

4 In addition, compared with the winsorization Dixon & Yuen (1974) method, our causality index can effectively capture the causal information of each window, while winsorization only concentrates on the few points with abnormal prediction errors in each iteration.
Specifically, ∀k1, k2 ∈ [0, L], k1 < k2,

$$\mathbb{P}(F_{j\Longrightarrow i}^{t,k_{2}}>\epsilon)-\mathbb{P}(F_{j\Longrightarrow i}^{t,k_{1}}>\epsilon)\in\left[0,C(k_{2}-k_{1})2^{-k_{1}}\right],{\mathrm{~where~}}C{\mathrm{~is~a~constant}}.\tag{7}$$

Corollary 1 (Granularity-efficacy trade-off). DWGC's recall rate monotonically increases (efficacy increases) with the sliding window length k (granularity decreases). Naturally, DWGC degenerates to the traditional GC method as the window length k approaches L.

The strict proof is given in Appendices B-D. This theorem encapsulates a critical insight: directly transitioning GC detection from the channel level to the window level may imply a significant compromise in performance, particularly with regard to the recall rate. Owing to this "trade-off" between the granularity of the results and the efficacy, we are compelled to propose the DWGC-CI method. We endeavour to resolve this "trade-off" by introducing the additional CI, thereby controlling local noise. We assert that, with an appropriate selection of the CI, the performance of DWGC-CI surpasses that of DWGC, as illustrated in the following part. We also refer readers to Fig. 2 for deeper comprehension.

![6_image_0.png](6_image_0.png)

![6_image_1.png](6_image_1.png)

Figure 2: (a) shows the granularity-efficacy trade-off of DWGC on a simulation dataset (NAR dataset generation in Sec. 5.1). The recall rate P(F t,k j=⇒i > ϵ = 1) monotonically increases with the window length k, and when k is large (e.g., k > 20), the performance stabilises quickly. This is consistent with the monotonicity and convergence properties reflected in Corollary 1. Based on this insight, DWGC-CI (b) is exactly what helps to improve the performance of DWGC, especially for smaller windows (e.g., k < 20).
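The monotonicity in Theorem 1 and Corollary 1 can also be illustrated numerically. Under Assumption 1, window-level squared errors behave like scaled chi-squared sums, so the recall P(F^{t,k} > ϵ) can be estimated by Monte Carlo. The sketch below uses hypothetical noise scales (σ_r > σ_f encodes a true causal effect) and is an illustration of the trend only, not a substitute for the proof in the appendix.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_recall(k, n_windows=2000, sigma_r=1.3, sigma_f=1.0, eps=1.0):
    """Monte-Carlo estimate of the recall P(F^{t,k} > eps) when the
    window-level GC truly holds, i.e., the restricted model (without
    channel j) has a larger error variance than the full model."""
    err_r = rng.normal(0.0, sigma_r, size=(n_windows, k)) ** 2
    err_f = rng.normal(0.0, sigma_f, size=(n_windows, k)) ** 2
    f_stats = err_r.sum(axis=1) / err_f.sum(axis=1)
    return float(np.mean(f_stats > eps))

# recall grows with the window length k, as Corollary 1 predicts
for k in (2, 10, 50):
    print(k, empirical_recall(k))
```

Shrinking σ_r toward σ_f makes the causal signal weaker and the window-level instability correspondingly more severe, which is exactly the regime the CI matrix targets.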
## 4.2 DWGC-CI: Enhancing The Granularity While Keeping Efficacy

In the above section, we introduced the granularity-efficacy trade-off implied by DWGC. In this part, we claim that the causality index (CI) can help enhance the efficacy (recall rate) while keeping the granularity (window level).

Theorem 2 (DWGC-CI outperforms DWGC). Suppose Assumption 1 holds. Assume the window length is chosen as k, and the dynamic window-level GC on [t, t + k − 1] exists (we have enough confidence to reject it only when F t,k j=⇒i < ϵ); moreover, if the data are weighted using the CI matrix, the output will also be a Gaussian distribution with the same scale of deflation as the corresponding CI matrix. Then a causality index (CI) matrix Φ exists that makes DWGC-CI's recall rate outperform DWGC's.

We prove this by constructing sufficient conditions and leave the proof to Appendix C. Intuitively, we impose a smaller CI on the time points with local fluctuations (noise/over-fit) in order to stabilize them; hence, the F-test is more informative and the recall rate is enhanced. As seen from Fig. 2, most F-statistics of DWGC-CI are steadily higher than DWGC's for the same window length k, especially when k is relatively small (e.g., k < 20). Moreover, in the process of training an approximate Φ, we adopt the sufficient condition on Φ as the regularization items for Eq. (5) (see Appendix C for details) and conduct automatic differentiation. Admittedly, rigorous convergence conditions for the CI matrix are still lacking due to the lack of clear mathematical tools, which presents a promising theoretical direction for future work. However, it is gratifying that, in experiments, DWGC-CI has demonstrated superior performance, confirming that a good CI is often easy to obtain.

## 5 Experiments

In this section, we employ our proposed techniques to analyze two synthetic and two real-world datasets.
As previously elucidated in the preliminary discussion, the integration of window-level and nonlinear approaches represents a novel avenue of research, which consequently leads to a scarcity of well-established baselines. To ensure rigorous and impartial evaluation, our experimental analysis primarily concentrates on validating the theoretical propositions (Sec. 5.1) and assessing the congruence with a priori knowledge (Sec. 5.2).

## 5.1 Experiment On Synthetic AR/NAR Simulation Dataset

We apply DWGC/DWGC-CI to the synthetic (N)AR datasets. These results further verify the theoretical results of the last section.

## 5.1.1 Experimental Setting

The AR and NAR simulations generate two synthetic datasets, and the generation process is detailed as follows. 1) The linear AR simulation: we simulate two linear AR time series with a lag value randomly picked from the integers from one to nine (t = 0, 1, ...): T1,t = m1T2,t−lag + 0.02n1(t), T2,t = m2T1,t−lag + 0.02n2(t). For i = 1, 2, ni(t) is a Gaussian noise term (ni(t) ∼ N(0, 1)), the initial values are Ti,t = t·1_{t∈[1,5]} + (11−t)·1_{t∈[6,10]}, and mi = 0.9·1_{rand(0,1)≤0.95} + 10·1_{rand(0,1)>0.95}. 5 2) The NAR simulation: we simulate two NAR time series: T1,t = Re(m1·sqrt(T2,t−lag² − 1) + n1(t)), T2,t = Re(m2·sqrt(T1,t−lag² − 1) + n2(t)), where n1(t), n2(t), the lag, and the initial values of T1, T2 are the same as in the AR case. Let f(s) = sin(0.1s), with s random. Then the values of m1, m2 are m1 = 10·1_{f(s)>0.9} + 0.9·1_{f(s)<0.9}, m2 = 10·1_{f(s)<−0.9} + 0.9·1_{f(s)>−0.9}. The time points with mi = 10, i = 1, 2 denote the beginning of a causality6. Finally, Re(·) takes the real part of the square root.

5 Let X be the set of possible values of x; then 1_{x∈X} denotes the characteristic function. Moreover, rand(0, 1) denotes a random value ranging from 0 to 1.

6 This chronological construction is consistent with the very nature of GC, i.e., the 'precedence' principle as illustrated in the preliminaries.
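The linear AR construction above can be sketched as follows. The seed, series length, and fixed lag are our own illustrative choices (the paper draws the lag randomly from one to nine), and the per-step redraw of the coefficients follows our reading of the indicator-function definition of mi.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ar(L=500, lag=3):
    """Sketch of the linear AR simulator: T1 and T2 drive each other
    with a fixed lag; each coefficient m_i jumps from 0.9 to 10 with
    probability 0.05 per step, marking the onset of a causality."""
    T1, T2 = np.zeros(L), np.zeros(L)
    onsets = []                     # ground-truth causality onsets
    for t in range(10):             # initial ramp: 1..5 then 5..1
        T1[t] = T2[t] = t + 1 if t < 5 else 10 - t
    for t in range(10, L):
        m1 = 10.0 if rng.random() > 0.95 else 0.9
        m2 = 10.0 if rng.random() > 0.95 else 0.9
        if m1 == 10.0 or m2 == 10.0:
            onsets.append(t)
        T1[t] = m1 * T2[t - lag] + 0.02 * rng.normal()
        T2[t] = m2 * T1[t - lag] + 0.02 * rng.normal()
    return T1, T2, onsets

T1, T2, onsets = simulate_ar()
print(len(onsets), "causality onsets in", len(T1), "steps")
```

The recorded onsets play the role of the ground-truth arrows in Fig. 3 when computing recall and accuracy.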
![8_image_0.png](8_image_0.png)

Figure 3: A partial schematic of the AR/NAR experiment. The blue/red curves represent two channels of time series (series 1: T1, series 2: T2), and the arrows represent causalities. The starting position of each causality arrow is the point at which m1 (m2) is abnormally large. Note that, in our construction, some mutations are caused by noise instead of causalities; these serve as confounding factors.

Table 1: Recall/accuracy rates (two values per cell, with standard deviations in parentheses) on the AR and NAR simulation datasets for different window sizes.

| dataset | method | 10 | 20 | 30 | 100 |
|---|---|---|---|---|---|
| AR | Change-point (Aminikhanghahi & Cook, 2017) | 0.42(0.016) 0.50(0.012) | 0.50(0.017) 0.50(0.013) | 0.69(0.013) 0.88(0.011) | 0.88(0.005) 0.97(0.003) |
| | Extreme causal (Bodik et al., 2021) | 0.30(0.018) 0.47(0.013) | 0.56(0.013) 0.52(0.016) | 0.76(0.009) 0.94(0.010) | 0.87(0.005) 0.99(0.003) |
| | Anomal causal (Yang et al., 2022) | 0.22(0.014) 0.54(0.013) | 0.43(0.018) 0.52(0.015) | 0.53(0.009) 0.95(0.010) | 0.65(0.004) 0.96(0.004) |
| | Rolling GC (Ming-Hsien & Chih-She, 2015) | 0.20(0.011) 0.43(0.013) | 0.29(0.010) 0.52(0.011) | 0.53(0.009) 0.95(0.010) | 0.75(0.002) 0.99(0.002) |
| | DWGC (ours) | 0.59(0.014) 0.42(0.019) | 0.60(0.009) 0.76(0.010) | 0.72(0.011) 0.93(0.013) | 0.87(0.007) 0.99(0.008) |
| | DWGC-CI (ours) | 0.84(0.009) 0.44(0.009) | 0.89(0.007) 0.79(0.009) | 0.84(0.005) 0.94(0.005) | 0.88(0.005) 0.99(0.001) |
| NAR | Change-point (Aminikhanghahi & Cook, 2017) | 0.39(0.014) 0.42(0.013) | 0.40(0.013) 0.41(0.017) | 0.59(0.014) 0.80(0.010) | 0.75(0.003) 0.93(0.003) |
| | Extreme causal (Bodik et al., 2021) | 0.10(0.019) 0.37(0.020) | 0.42(0.017) 0.32(0.014) | 0.56(0.011) 0.74(0.011) | 0.67(0.015) 0.87(0.007) |
| | Anomal causal (Yang et al., 2022) | 0.02(0.015) 0.42(0.016) | 0.35(0.018) 0.42(0.019) | 0.49(0.009) 0.78(0.014) | 0.58(0.014) 0.78(0.016) |
| | Rolling GC (Ming-Hsien & Chih-She, 2015) | 0.03(0.011) 0.35(0.014) | 0.25(0.014) 0.48(0.017) | 0.50(0.007) 0.85(0.011) | 0.64(0.005) 0.89(0.003) |
| | DWGC (ours) | 0.39(0.016) 0.22(0.023) | 0.46(0.010) 0.62(0.012) | 0.58(0.014) 0.73(0.014) | 0.76(0.010) 0.92(0.010) |
| | DWGC-CI (ours) | 0.64(0.010) 0.40(0.011) | 0.69(0.009) 0.59(0.010) | 0.64(0.008) 0.84(0.009) | 0.74(0.005) 0.94(0.003) |

## 5.1.2 Experimental Process And Result

For the two AR/NAR simulation datasets, we consider the case of m1, m2 = 10 as the beginning of a causality. To determine ϵ and the parameter of the function h(·), we use 5-fold cross-validation, during which we randomly choose one hundred time points as the test set and the other four hundred as the training set. We choose the function h(·) = (6/5 − tanh(·)) and the causality threshold ϵ = 1 (a sensitivity test of ϵ is in Appendix D). Moreover, the rolling window intervals are taken as {[i·k, (i + 1)k] : i·k ∈ [0, L], i ∈ N}, where k = 10, 20, 30, 100. In order to remain general, we adopt the standard Adam optimizer with an adaptive learning rate initialized as η = 0.05. Fig. 3 shows a partial schematic of the AR/NAR simulations, where the arrows denote the lag relation between the two series, which we consider the ground truth of the dynamic causalities. Following the definition in Section 4, if F t,k 2=⇒1 > 1 or F t,k 1=⇒2 > 1 is detected on a window with at least one causality pair, the causal extraction on this window is counted as successful. To the best of our knowledge, we adopt the current baselines (Bodik et al., 2021; Yang et al., 2022) of causality extraction on time series. We calculate the recall/accuracy of our methods in Table 1 after stationarity preprocessing. As seen from the results: 1) From a vertical comparison perspective, our method DWGC, and especially DWGC-CI, significantly outperforms the baseline methods.
Moreover, this superiority becomes more significant when the window size is smaller, as our CI matrix is specifically designed to address the instability of local windows.

![9_image_0.png](9_image_0.png)

Figure 4: The experimental results on the climate dataset for El Niño by the DWGC and DWGC-CI methods. The first column shows the original data of MKE & OLR and ENSO (1992). The second and third columns show the F-test results between ENSO and MKE & OLR using DWGC and DWGC-CI, respectively. In all mentioned cases, our DWGC-CI method is closer to the prior knowledge of Kumar et al. (1999) and Yasunari (1990).

2) As the window length increases, both the baseline methods and our DWGC method exhibit an overall improvement in accuracy and recall. This is because, with increasing window length, the setting degenerates into traditional GC and the model's performance becomes more stable, which is consistent with Theorem 1 and Theorem 2. 3) Furthermore, the outperformance is more significant on NAR than on AR. A potential reason is that in the NAR scenario it is more challenging to decouple local noise from causal effects, so performance degrades across the board; the baseline methods, which make no effort to distinguish noise, therefore exhibit a larger performance gap in the NAR scenario. 4) It is worth noting that this result does not hold in every single case, as the performance of the baseline methods is unstable and biased on local windows; when we supplement a variance analysis, we find that the results are largely consistent.

## 5.2 Experiment On Climate Dataset For El Niño

In this part, we verify our methods on a real-world climate dataset with widely recognized seasonal causalities. Due to space limitations, we refer readers to Appendix A for the domain-specific knowledge.

## 5.2.1 Experimental Processing And Result

The ENSO and Asian monsoon data of 1992 (Yang et al., 2018) are shown in the first column of Fig. 4.
The first four months are taken as the training data and the rest as the testing data. To facilitate comparison with the prior knowledge, we select the window length k as one month. We use our model to judge the window-level causalities between ENSO and MKE & OLR every month. In the second column of Fig. 4, we show the monthly F-statistics of DWGC-CI and DWGC for the two directions (ENSO to MKE) and (ENSO to OLR); the x-axis is the month and the y-axis is the F-statistic. With DWGC, the causality between ENSO and MKE & OLR can hardly be detected, let alone the difference in causality between May-Aug (spring & summer) and Sep-Dec (autumn & winter). With DWGC-CI, however, we not only obtain the basic causalities ENSO =⇒ MKE and ENSO =⇒ OLR, but also find that the causality in Sep-Dec (autumn & winter) is more significant than that in May-Aug (spring & summer). This causality variation detected by DWGC-CI is more consistent with the academic knowledge (Kumar et al., 1999) than the DWGC result. In the third column of Fig. 4, we show the monthly F-statistics of DWGC and DWGC-CI for the two directions (MKE to ENSO) and (OLR to ENSO). With DWGC, we detect the causality MKE & OLR =⇒ ENSO in Sep-Dec (autumn & winter), whereas with DWGC-CI we detect the causality MKE & OLR =⇒ ENSO in May-Aug (spring & summer).

![10_image_1.png](10_image_1.png) ![10_image_0.png](10_image_0.png) ![10_image_2.png](10_image_2.png)

Figure 5: The causality between the stock market and the economy by the DWGC-CI method. The three subfigures show the trends of five variables (NLC, TAL (million), TMV (‰), TFR (million), GDP (million)) during 2007-2015 in three countries. The arrows represent causalities between the curves (red: to GDP; blue: from GDP). The results of our DWGC-CI method are consistent with the academic knowledge for all three areas. Detailed F-statistic results are shown in Tables 2, 3, and 4.
Compared to the DWGC method, the causalities detected by DWGC-CI are more aligned with the academic knowledge (Yasunari, 1990).

## 5.3 Experiment On Stock Market And Economy

In this part, we use the DWGC and DWGC-CI methods to detect causality between the stock market and the economy of three representative economies: the United States, China, and South Africa. The prior knowledge is given in Appendix A.

## 5.3.1 Experimental Processing And Result

In this experiment, we choose four widely acknowledged indexes (Edou, 2017) to jointly evaluate the prosperity of the stock market: the number of listed companies (NLC), the total assets listed (TAL), the total market value (TMV), and the total funds raised (TFR). Economic growth is measured in terms of gross domestic product (GDP). Considering the authorization limits of the data sources, we selected data of the three countries from 2005-2015 (5 dimensions in total) and chose 2005-2007 as the training data. To compare with the prior knowledge, we set the window length k to one year, so as to observe the annual trend of the stock-economy causalities in the three countries, and the causal threshold to ϵ = 500. We test the causalities between GDP and NLC, TAL, TMV, and TFR with the DWGC-CI and DWGC methods; the DWGC-CI results are illustrated by the red/blue arrows in Fig. 5. The results show that 1) in America, the causalities NLC (2012), TMV (2009), TFR (2010) =⇒ GDP exist; 2) there are no specific causalities between the stock market and the economy in China; and 3) bidirectional causalities TMV (2008), TAL (2010) =⇒ GDP and GDP =⇒ TAL (2008), TFR (2009, 2012) exist in South Africa. The DWGC results, in contrast, do not correlate as well with the prior conclusions. In all, the causalities between the stock market and the economy of the three areas in Fig. 5 detected by DWGC-CI are consistent with the academic knowledge (Pearce, 1983; Aziz & Duenwald, 2006; Ghirmay, 2004), more so than those detected by DWGC.
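The mapping from per-window F-statistics to the causal arrows in Fig. 5 can be sketched as follows. The F values below are made-up placeholders (not taken from Tables 2-4); the threshold ϵ = 500 matches the setting above.

```python
# Illustrative extraction of causal arrows from per-year F-statistics.
# Dictionary values are synthetic, for demonstration only.
f_x_to_gdp = {2008: 1.5, 2009: 620.0, 2010: 0.4, 2011: 12.0}   # F_{X => GDP}
f_gdp_to_x = {2008: 950.0, 2009: 4.4, 2010: 0.2, 2011: 88.0}   # F_{GDP => X}
eps = 500  # causal threshold used in this experiment

def arrows(f_fwd, f_bwd, eps):
    """Return (year, direction) pairs whose F-statistic exceeds eps."""
    out = []
    for year in sorted(f_fwd):
        if f_fwd[year] > eps:
            out.append((year, "X => GDP"))
        if f_bwd[year] > eps:
            out.append((year, "GDP => X"))
    return out

print(arrows(f_x_to_gdp, f_gdp_to_x, eps))
# -> [(2008, 'GDP => X'), (2009, 'X => GDP')]
```

With these toy numbers, only the entries above the threshold generate arrows, which is exactly the bolding rule applied in the tables below.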
The detailed experimental results on the stock market and economy are given in Tables 2, 3, and 4. In these tables, the values beyond the causal threshold ϵ = 500 are bolded, matching the causal arrows in Fig. 5.

Table 4: $F_{X\Rightarrow\mathrm{GDP}}$ (left) and $F_{\mathrm{GDP}\Rightarrow X}$ (right) of South Africa.

| X | method | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|--------|------|------|------|------|------|------|------|
| NLC | DWGC-CI | 1.47/414.05 | 0.042/4.43 | 0.87/2.58 | 0.14/1.00 | 8.97/1.00 | 54.10/1.00 | 3.10/1.00 |
| | DWGC | 1.22/0.023 | 0.20/0.027 | 0.94/2.40 | 0.36/44.09 | 3.01/4.99 | 7.59/4.58 | 1.77/3.7 |
| TAL | DWGC-CI | 1.98/**8245.55** | 1.10/150.84 | **772.42**/0.45 | 0.0047/0.21 | 137.48/1.22 | 162.61/1.01 | 4.03/1.00 |
| | DWGC | 1.36/0.85 | 3.93/0.03 | 15.82/0.57 | 0.19/0.05 | 12.38/11.09 | 12.25/2.27 | 1.85/2.41 |
| TMV | DWGC-CI | **2730.82**/1.65 | 0.0018/297.02 | 0.49/0.27 | 0.0012/0.38 | 0.22/0.0039 | 0.27/2.53 | 0.36/4.72 |
| | DWGC | 53.10/1.31 | 0.41/14.96 | 0.87/0.55 | 0.25/0.63 | 0.49/0.19 | 0.51/1.47 | 0.64/1.88 |
| TFR | DWGC-CI | 1.81/1.33 | 0.45/**1205.46** | 267.35/93.50 | 0.036/0.19 | 85.53/**579.83** | 204.13/6.06 | 3.52/3.16 |
| | DWGC | 102.98/0.57 | 0.38/2.12 | 0.92/1.29 | 0.25/1.85 | 0.49/78.81 | 0.51/10.05 | 0.64/4.63 |

Table 3: $F_{X\Rightarrow\mathrm{GDP}}$ (left) and $F_{\mathrm{GDP}\Rightarrow X}$ (right) of China.
| X | method | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|--------|------|------|------|------|------|------|------|
| NLC | DWGC-CI | 0.77/0.95 | 3.66/2.06 | 1.01/0.72 | 1.01/1.00 | 0.99/0.99 | 0.98/1.00 | 0.99/1.00 |
| | DWGC | 0.87/1.20 | 2.20/3.34 | 1.01/2.65 | 1.00/5.93 | 0.99/16.86 | 0.99/17.71 | 0.99/5.47 |
| TAL | DWGC-CI | 2.28/0.88 | 0.22/1.23 | 1.12/0.99 | 1.05/1.01 | 0.90/0.97 | 0.87/0.94 | 0.90/1.05 |
| | DWGC | 1.70/0.94 | 0.50/1.11 | 1.06/0.99 | 1.03/1.01 | 0.95/0.98 | 0.93/0.97 | 0.95/1.02 |
| TMV | DWGC-CI | 0.27/0.92 | 0.06/1.00 | 0.86/0.98 | 0.93/0.98 | 1.18/0.98 | 1.26/1.15 | 1.18/1.01 |
| | DWGC | 0.33/1.00 | 0.40/9.35 | 0.86/1.01 | 0.93/1.01 | 1.20/1.01 | 1.30/0.96 | 1.19/1.00 |
| TFR | DWGC-CI | 9.94/1.40 | 0.02/3.13 | 0.83/1.55 | 0.90/2.26 | 1.33/0.02 | 1.33/0.02 | 1.23/4.36 |
| | DWGC | 2.83/1.16 | 0.10/1.73 | 0.93/1.26 | 0.95/1.60 | 1.15/0.13 | 1.12/0.07 | 1.09/2.47 |

Table 2: $F_{X\Rightarrow\mathrm{GDP}}$ (left) and $F_{\mathrm{GDP}\Rightarrow X}$ (right) of America.
| X | method | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 |
|---|--------|------|------|------|------|------|------|------|
| NLC | DWGC-CI | 32.67/1.34 | 3.65/2.56 | 289.09/2.68 | 2.30/21.34 | **1269.82**/390.30 | 51.11/330.92 | 29.59/99.81 |
| | DWGC | 4.35/1.17 | 3.64/1.54 | 9.74/0.28 | 0.22/5.62 | 22.50/26.10 | 5.92/13.71 | 5.32/9.66 |
| TAL | DWGC-CI | 0.64/23.02 | 1.03/169.64 | 1.00/0.23 | 1.00/1.68 | 1.00/1.02 | 1.00/0.99 | 1.00/1.00 |
| | DWGC | 1.29/5.03 | 10.23/10.14 | 1.60/0.48 | 0.020/1.29 | 12.55/1.01 | 2.21/0.99 | 2.14/1.00 |
| TMV | DWGC-CI | 15.42/0.82 | **9480.66**/0.99 | 0.78/0.02 | 0.0036/0.024 | 91.19/0.0056 | 0.93/0.038 | 0.96/0.062 |
| | DWGC | 4.20/6.8 | 41.38/169.6 | 0.86/0.56 | 0.059/3.13 | 3.76/1.08 | 1.08/3.09 | 1.03/2.07 |
| TFR | DWGC-CI | 28.37/2.05 | 3.49/1.17 | **616.96**/2.47 | 0.078/0.13 | 20.81/0.17 | 411.65/1.67 | 11.23/5.56 |
| | DWGC | 3.94/0.07 | 3.71/0.31 | 17.10/0.43 | 0.036/1.93 | 3.68/1.47 | 9.11/10.86 | 3.23/7.12 |

## 6 Conclusions And Discussions

In this paper, we focus on a new task: detecting dynamic window-level Granger causality (DWGC) between time series. We propose the DWGC method, which conducts an F-test comparing window-level forecasting errors; however, we demonstrate a granularity-efficacy tradeoff that limits the power of plain DWGC for window-level GC extraction. We therefore introduce a "causality index" (CI) to reweight the original time series, with the purpose of decreasing the local fluctuations of the window-level F-test. The full method, DWGC-CI, is theoretically proven to achieve better recall rates. In experiments on two synthetic and two real datasets, the DWGC-CI method successfully detects window-level causalities, which cannot be accomplished by traditional GC, and it also outperforms DWGC in both recall and accuracy.
This is foundational work with many avenues for future improvement and expansion. We are interested in further developing DWGC-CI with respect to 1) window adjustment, such as dynamic rolling window intervals with different starting points; 2) weighting strategies, such as assigning a dynamic causality index with domain-specific knowledge; 3) graph extensions, such as constructing time-series causal graphs combined with additional graph theory; and 4) further mechanism design, such as how to strike the granularity-efficacy tradeoff in a more general intervention/reweighting framework.

## References

Samaneh Aminikhanghahi and Diane J Cook. A survey of methods for time series change point detection. *Knowledge and Information Systems*, 51(2):339–367, 2017.

Leonardo Angelini, Mario Pellicoro, and Sebastiano Stramaglia. Granger causality for circular variables. *Physics Letters A*, 373(29):2467–2470, 2009.

Jahangir Aziz and Christoph Duenwald. Growth-financial intermediation nexus in china. *IMF Working Papers*, 02(194), 2006.

Juraj Bodik, Milan Paluš, and Zbyněk Pawlas. Causality in extremes of time series. *arXiv preprint arXiv:2112.10858*, 2021.

Sezen Cekic, Didier Grandjean, and Olivier Renaud. Time, frequency, and time-varying granger-causality measures in neuroscience. *Statistics in Medicine*, 37(11):1910–1931, 2018.

S. Chen, S. A. Billings, and P. M. Grant. Non-linear system identification using neural networks. *International Journal of Control*, 51(6):1191–1214, 2000.

Yonghong Chen, Govindan Rangarajan, Jianfeng Feng, and Mingzhou Ding. Analyzing multiple nonlinear time series with extended granger causality. *Physics Letters A*, 324(1):26–35, 2004.

Brad Comincioli. The stock market as a leading indicator: An application of granger causality. *University Avenue Undergraduate Journal of Economics*, 1(1):1, 1996.

Wilfrid J Dixon and Kareb K Yuen. Trimming and winsorization: A review. *Statistische Hefte*, 15(2-3):157–170, 1974.
Andrea Duggento, Maria Guerrisi, and Nicola Toschi. Echo state network models for nonlinear granger causality. *bioRxiv*, pp. 651679, 2019.

TCHIDI Guillaume Edou. Stock market and economic growth: A comparative study - evidences from china, usa, and south africa. *International Journal of Economics and Financial Issues*, 2017.

Michael Eichler. *Causal inference in time series analysis*. Wiley Online Library, 2012.

Michael Eichler. Causal inference with multiple time series: principles and problems. *Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences*, 371(1997):20110613, 2013.

Fethi Fkih, Mohamed Nazih Omri, et al. Learning the size of the sliding window for the collocations extraction: A roc-based approach. In Proceedings of the 2012 International Conference on Artificial Intelligence (ICAI'12), pp. 1071–1077, 2012.

K.J. Friston, L. Harrison, and W. Penny. Dynamic causal modelling. *NeuroImage*, 19(4):1273–1302, 2003. ISSN 1053-8119. doi: https://doi.org/10.1016/S1053-8119(03)00202-7. URL http://www.sciencedirect.com/science/article/pii/S1053811903002027.

Teame Ghirmay. Financial development and economic growth in sub-saharan african countries: Evidence from time series analysis. *African Development Review*, 16(3):415–432, 2004. doi: 10.1111/j.1017-6772.2004.00098.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1017-6772.2004.00098.x.

Clive William John Granger and Paul Newbold. *Forecasting economic time series*. Academic Press, 2014.

Clive WJ Granger. Investigating causal relations by econometric models and cross-spectral methods. *Econometrica: Journal of the Econometric Society*, pp. 424–438, 1969.

C.W.J. Granger. Testing for causality: A personal viewpoint. *Journal of Economic Dynamics and Control*, 2:329–352, 1980. ISSN 0165-1889. doi: https://doi.org/10.1016/0165-1889(80)90069-X. URL http://www.sciencedirect.com/science/article/pii/016518898090069X.

James D Hamilton.
*Time series analysis*, volume 2. Princeton New Jersey, 1994.

David Hammarwall, Mats Bengtsson, and Björn Ottersten. Acquiring partial CSI for spatially selective transmission by instantaneous channel norm feedback. *IEEE Transactions on Signal Processing*, 56(3):1188–1204, 2008.

Godfrey Harold Hardy. *Orders of infinity: The 'Infinitärcalcül' of Paul du Bois-Reymond*. University Press, 1924.

Miao He, Weixi Gu, Ying Kong, Lin Zhang, Costas J Spanos, and Khalid M Mosalam. CausalBG: Causal recurrent neural network for the blood glucose inference with IoT platform. *IEEE Internet of Things Journal*, 2019.

Robert K Kaufmann and David I Stern. Evidence for human influence on climate from hemispheric temperature relations. *Nature*, 388(6637):39–44, 1997.

Eamonn Keogh and Shruti Kasetty. On the need for time series data mining benchmarks: a survey and empirical demonstration. *Data Mining and Knowledge Discovery*, 7(4):349–371, 2003.

K Krishna Kumar, Balaji Rajagopalan, and Mark A Cane. On the weakening relationship between the indian monsoon and enso. *Science*, 284(5423):2156–2159, 1999.

Yang Li, Hua-Liang Wei, Steve A Billings, and Xiao-Feng Liao. Time-varying linear and nonlinear parametric model for granger causality analysis. *Physical Review E*, 85(4):041906, 2012.

Helmut Lütkepohl. *New introduction to multiple time series analysis*. Springer Science & Business Media, 2005.

Huanfei Ma, Kazuyuki Aihara, and Luonan Chen. Detecting causality from nonlinear dynamics with short-term time series. *Scientific Reports*, 4(1):1–10, 2014.

Atalanti A Mastakouri, Bernhard Schölkopf, and Dominik Janzing. Necessary and sufficient conditions for causal feature selection in time series with latent common causes. In *International Conference on Machine Learning*, pp. 7502–7511. PMLR, 2021.

YANG Ming-Hsien and WU Chih-She. Revisit export and gdp nexus in china and taiwan: A rolling window granger causality test. *Theoretical & Applied Economics*, 22(3), 2015.

Manfred Mudelsee.
*Climate time series analysis*, volume 30. Springer, 2013.

Mattia F. Pagnotta, Mukesh Dhamala, and Gijs Plomp. Benchmarking nonparametric granger causality: Robustness against downsampling and influence of spectral decomposition parameters. *NeuroImage*, 183:478–494, 2018. ISSN 1053-8119. doi: https://doi.org/10.1016/j.neuroimage.2018.07.046. URL http://www.sciencedirect.com/science/article/pii/S1053811918306621.

John Pearce. The relationship of internal versus external orientations to financial measures of strategic performance. *Strategic Management Journal*, 4:297–306, 10 1983. doi: 10.1002/smj.4250040402.

Judea Pearl. Theoretical impediments to machine learning with seven sparks from the causal revolution. In *Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining*, pp. 3–3, 2018.

Judea Pearl and James Robins. Probabilistic evaluation of sequential plans from causal models with hidden variables. In *Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence*, UAI'95, pp. 444–453, San Francisco, CA, USA, 1995. Morgan Kaufmann Publishers Inc. ISBN 1558603859.

CS Ramage. Monsoon meteorology. *International Geophysics Series*, 15, 1971.

Louis L Scharf. *Statistical signal processing*, volume 98. Addison-Wesley Reading, MA, 1991.

Linda Sommerlade, Marco Thiel, Bettina Platt, Andrea Plano, Gernot Riedel, Celso Grebogi, Jens Timmer, and Björn Schelter. Inference of granger causal time-dependent influences in noisy multivariate time series. *Journal of Neuroscience Methods*, 203(1):173–185, 2012.

George Sugihara, Robert May, Hao Ye, Chih-hao Hsieh, Ethan Deyle, Michael Fogarty, and Stephan Munch. Detecting causality in complex ecosystems. *Science*, 338(6106):496–500, 2012.

Xiaohai Sun. Assessing nonlinear granger causality from multivariate time series. In *Joint European Conference on Machine Learning and Knowledge Discovery in Databases*, pp. 440–455. Springer, 2008.

E Syczewska and Z Struzik.
Granger causality and transfer entropy for financial returns. *Acta Physica Polonica A*, 127(3A), 2015.

Floris Takens. Detecting strange attractors in turbulence. In *Dynamical systems and turbulence, Warwick 1980*, pp. 366–381. Springer, 1981.

Alex Tank, Ian Covert, Nicholas Foti, Ali Shojaie, and Emily Fox. Neural granger causality for nonlinear time series. *arXiv preprint arXiv:1802.05842*, 2018.

Alexey Tsymbal. The problem of concept drift: definitions and related work. *Computer Science Department, Trinity College Dublin*, 106(2):58, 2004.

Sebastian Weichwald, Martin E Jakobsen, Phillip B Mogensen, Lasse Petersen, Nikolaj Thams, and Gherardo Varando. Causal structure learning from time series: Large regression coefficients may predict causal links better in practice than small p-values. In *NeurIPS 2019 Competition and Demonstration Track*, pp. 27–36. PMLR, 2020.

John Woods and Douglas Walton. Post hoc, ergo propter hoc. *The Review of Metaphysics*, pp. 569–593, 1977.

Alexander P Wu, Rohit Singh, and Bonnie Berger. Granger causal inference on dags identifies genomic loci regulating transcription. In *International Conference on Learning Representations*, 2022.

Chenxiao Xu, Hao Huang, and Shinjae Yoo. Scalable causal graph learning through a deep neural network. In *Proceedings of the 28th ACM International Conference on Information and Knowledge Management*, pp. 1853–1862. ACM, 2019.

Hongteng Xu, Mehrdad Farajtabar, and Hongyuan Zha. Learning granger causality for hawkes processes. In *International Conference on Machine Learning*, pp. 1717–1726. PMLR, 2016.

Song Yang, Kaiqiang Deng, and Wansuo Duan. Selective interaction between monsoon and enso: Effects of annual cycle and spring predictability barrier. *Chinese Journal of Atmospheric Sciences*, 2018.

Wenzhuo Yang, Kun Zhang, and Steven CH Hoi. Causality-based multivariate time series anomaly detection. *arXiv preprint arXiv:2206.15033*, 2022.

T Yasunari.
Impact of indian monsoon on the coupled atmosphere/ocean system in the tropical pacific. *Meteorology and Atmospheric Physics*, 44(1-4):29–41, 1990.

Yaoyu Zhang, Yanyang Xiao, Douglas Zhou, and David Cai. Granger causality analysis with nonuniform sampling and its application to pulse-coupled nonlinear dynamics. *Physical Review E*, 93(4):042217, 2016.

## Ethics Statement

This study was conducted in accordance with general academic principles and adheres to the corresponding ethical guidelines and standards. The research design, procedures, and data collection methods were reviewed and approved by the corresponding institutes of the authors. All participants involved in the study were informed of the study's objectives, procedures, potential risks, and benefits. Written informed consent was obtained from each participant prior to their involvement in the research. Participants were assured of the confidentiality of their personal information and were informed that they had the right to withdraw from the study at any time without any consequences. All data collected in this study were anonymized to protect the privacy of the participants. Any identifiable information was removed, and the data were securely stored in compliance with the data protection policies. Access to the data was granted only to the research team members involved in the study. The authors declare that they have no competing interests or conflicts of interest related to this research. The study was conducted with the utmost respect for the participants and their rights, ensuring their welfare and privacy throughout the research process.
# Supplemental Materials

## A Domain-Specific Knowledge In Experiments

## A.1 Academic Knowledge Of El Niño And Monsoon

Climate academia has the following definition of El Niño: a climate phenomenon in the Pacific equatorial belt where the ocean and atmosphere interact with each other and lose their balance (Ramage, 1971). The monsoon generally refers to the seasonal conversion of atmospheric circulation and precipitation in tropical and subtropical areas, which is measured by two parameters: OLR (Outgoing Longwave Radiation) and MKE (Monsoon Kinetic Energy). The causalities between ENSO and the East Asian monsoon (MKE, OLR) have been extensively explored. It is widely believed that the causal direction and intensity between the two have seasonal variation, summarized as follows:

1. The causality ENSO =⇒ MKE & OLR exists and is more intense in autumn & winter than in spring & summer.
2. The reverse causality MKE & OLR =⇒ ENSO exists in spring & summer.

However, with the increasing instability and complexity of ENSO, the above are only observational conclusions based on meteorological knowledge and lack a highly interpretable causality analysis. In all, a result fitting this prior knowledge strongly demonstrates the effectiveness of our method.

## A.2 Academic Knowledge Of Stock Market And Economy In America, China And South Africa

The relationship between the development of the stock market and economic growth has long been a hot topic in financial circles. It is one of the most classical cases in which a mutual cause-and-effect relationship exists. These causalities are hard to capture since they show different characteristics in various social environments and economic development stages.
The United States, China, and South Africa, as three highly representative countries of the Americas, Asia, and Africa, carry the following empirical knowledge of stock-economy causality:

1. The United States: Pearce (1983) showed that the stock market is a long-term stability indicator of economic growth, i.e., Stock Market ⇒ Economy.
2. China: Aziz & Duenwald (2006) used the panel data method and found that China's stock market development had little impact on GDP growth during the post-1978 reform period.
3. South Africa: Ghirmay (2004) studied the causal relationship between financial development and economic growth in 13 sub-Saharan African countries with a bivariate VAR model; the results showed that financial development promoted economic growth in eight countries, with bidirectional causalities in six of them, i.e., Economy ⇒ Stock Market.

## B Proof Of Theorem 1

Proof. Without loss of generality, we can choose ϵ = 1, and the approximate Gaussian distribution at each time point shares a similar variance as in Assumption 1. $Y_{i,t}$ is partitioned into the original real data $Y^o_{i,t}$ and the noise $Y^n_{i,t}$ as

$$Y_{i,t}=Y_{i,t}^{o}+Y_{i,t}^{n}.\tag{8}$$

We assume that $\hat{Y}_{i,t}\sim\mathcal{N}(Y^o_{i,t},\sigma^2)$ and $\hat{Y}_{i|j,t}\sim\mathcal{N}(Y^o_{i,t},\epsilon_0^2\sigma^2)$, where $\epsilon_0\in(0,1)$ and $\sigma$ are constants. Moreover, the Gaussian noise satisfies $Y^n_{i,t}\sim\mathcal{N}(0,\sigma_0^2)$. In Equation 3, if we denote $\frac{\sum_q(\hat{Y}_{i,q}-Y_{i,q})^2}{\sum_q(\hat{Y}_{i|j,q}-Y_{i,q})^2}=\frac{\sigma^2+\sigma_0^2}{\epsilon_0^2\sigma^2+\sigma_0^2}X$, then $X$ follows the F distribution with density

$$f(x)=\begin{cases}0, & x\leq 0,\\ \frac{1}{\mathcal{B}(\frac{k}{2},\frac{k}{2})}x^{\frac{k}{2}-1}(1+x)^{-k}, & x>0.\end{cases}\tag{9}$$

In our proof, we choose the threshold 1.
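The tail probability manipulated in this proof can also be checked numerically. The sketch below (pure Python, even k only; σ, σ₀, ϵ₀ are example values we assume for illustration) evaluates $P(F^{t,k}_{j\Rightarrow i}>1)=1-I_{x_0}(\frac{k}{2},\frac{k}{2})$ with $x_0$ as derived in Eq. (10) below, and verifies both the recursion of Lemma 1 and the monotone growth of the tail in k claimed by Theorem 1.

```python
from math import comb, gamma

def reg_inc_beta(a: int, b: int, x: float) -> float:
    # Regularized incomplete beta I_x(a, b) for integer a, b,
    # computed via the binomial-tail identity.
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x) ** (n - j) for j in range(a, n + 1))

# Example parameters (assumed), with eps0 in (0, 1) as in the theorem.
sigma, sigma0, eps0 = 1.0, 0.5, 0.6
x0 = (eps0**2 * sigma**2 + sigma0**2) / ((eps0**2 + 1) * sigma**2 + 2 * sigma0**2)
assert 0 < x0 < 0.5

for k in range(2, 21, 2):
    # Lemma 1: I_{x0}(k/2, k/2) equals the correction term
    # plus I_{x0}((k+2)/2, (k+2)/2).
    lhs = reg_inc_beta(k // 2, k // 2, x0)
    rhs = (gamma(k) / (gamma(k / 2 + 1) * gamma(k / 2))
           * (1 - 2 * x0) * (x0 * (1 - x0)) ** (k / 2)
           + reg_inc_beta(k // 2 + 1, k // 2 + 1, x0))
    assert abs(lhs - rhs) < 1e-12

# Theorem 1: P(F^{t,k} > 1) = 1 - I_{x0}(k/2, k/2) grows with k.
tails = [1 - reg_inc_beta(k // 2, k // 2, x0) for k in range(2, 21, 2)]
assert all(a < b for a, b in zip(tails, tails[1:]))
print([round(p, 4) for p in tails[:4]])
```

Both assertions pass for any $x_0\in(0,\frac{1}{2})$, i.e., for any $\epsilon_0\in(0,1)$, which is the setting of the theorem.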
On this basis, we have

$$\begin{aligned}P(F^{t,k}_{j\Rightarrow i}>1)&=\int_{\frac{\epsilon_0^2\sigma^2+\sigma_0^2}{\sigma^2+\sigma_0^2}}^{+\infty}\frac{1}{\mathcal{B}(\frac{k}{2},\frac{k}{2})}x^{\frac{k}{2}-1}(1+x)^{-k}dx\\&=\frac{1}{\mathcal{B}(\frac{k}{2},\frac{k}{2})}\int_{x_0}^{1}x^{\frac{k}{2}-1}(1-x)^{\frac{k}{2}-1}dx,\quad\Big(x_0:=\frac{\epsilon_0^2\sigma^2+\sigma_0^2}{(\epsilon_0^2+1)\sigma^2+2\sigma_0^2}\in(0,\tfrac{1}{2})\Big)\\&=:1-I_{x_0}(\tfrac{k}{2},\tfrac{k}{2})\end{aligned}\tag{10}$$

We introduce the following lemma:

Lemma 1. *If $x_0\in(0,\frac{1}{2})$, then*

$$I_{x_{0}}(\tfrac{k}{2},\tfrac{k}{2})=\frac{\Gamma(k)}{\Gamma(\frac{k}{2}+1)\Gamma(\frac{k}{2})}(1-2x_{0})x_{0}^{\frac{k}{2}}(1-x_{0})^{\frac{k}{2}}+I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k+2}{2})\geq I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k+2}{2}).\tag{11}$$

We argue by contradiction. Suppose there is $k=i_0$ such that $I_{x_0}(\frac{i_0}{2},\frac{i_0}{2})>I_{x_0}(\frac{i_0-1}{2},\frac{i_0-1}{2})$. Then we have

$$\begin{aligned}I_{x_0}(\tfrac{i_0+2}{2},\tfrac{i_0+2}{2})-I_{x_0}(\tfrac{i_0+1}{2},\tfrac{i_0+1}{2})&\geq I_{x_0}(\tfrac{i_0}{2},\tfrac{i_0}{2})-I_{x_0}(\tfrac{i_0-1}{2},\tfrac{i_0-1}{2})\\&\quad+\frac{\Gamma(k-1)\big(1+\frac{2(k-1)}{k+1}x_0\big)x_0^{\frac{k-1}{2}}(1-x_0)^{\frac{k-1}{2}}}{\Gamma(\frac{k-1}{2}+1)\Gamma(\frac{k-1}{2})}-\frac{\Gamma(k)\big(1+\frac{2k}{k+2}x_0\big)x_0^{\frac{k}{2}}(1-x_0)^{\frac{k}{2}}}{\Gamma(\frac{k}{2}+1)\Gamma(\frac{k}{2})}\\&=I_{x_0}(\tfrac{i_0}{2},\tfrac{i_0}{2})-I_{x_0}(\tfrac{i_0-1}{2},\tfrac{i_0-1}{2}).\end{aligned}\tag{12}$$

Hence we have $0=\lim_{k\to+\infty}\big|I_{x_0}(\tfrac{k+1}{2},\tfrac{k+1}{2})-I_{x_0}(\tfrac{k}{2},\tfrac{k}{2})\big|\geq I_{x_0}(\tfrac{i_0}{2},\tfrac{i_0}{2})-I_{x_0}(\tfrac{i_0-1}{2},\tfrac{i_0-1}{2})>0$, a contradiction. Therefore, we have $\forall k,\ I_{x_0}(\tfrac{k}{2},\tfrac{k}{2})>I_{x_0}(\tfrac{k+1}{2},\tfrac{k+1}{2})$. Thus

$$\forall k,\quad P(F^{t,k}_{j\Rightarrow i}>1)<P(F^{t,k+1}_{j\Rightarrow i}>1).\tag{13}$$

Moreover, according to Lemma
1, we further have

$$\begin{aligned}0<\mathbb{P}(F_{j\Rightarrow i}^{t,k_{2}}>1)-\mathbb{P}(F_{j\Rightarrow i}^{t,k_{1}}>1)&=\sum_{i=k_{1}}^{k_{2}}\big[I_{x_0}(\tfrac{i}{2},\tfrac{i}{2})-I_{x_0}(\tfrac{i+1}{2},\tfrac{i+1}{2})\big]\\&=\sum_{i=k_{1}}^{k_{2}}\mathcal{O}\left(\frac{\Gamma(i)}{\Gamma(\frac{i}{2}+1)\Gamma(\frac{i}{2})}(1-2x_{0})x_{0}^{\frac{i}{2}}(1-x_{0})^{\frac{i}{2}}\right)\\&<C\left((\tfrac{1}{4})^{\frac{k_{1}}{2}}(k_{2}-k_{1})\right).\end{aligned}\tag{14}$$

Proved.

## B.1 The Proof Of Lemma 1

Proof. Via integration by parts, we have

$$\begin{aligned}I_{x_0}(p,q)-I_{x_0}(p+1,q)&=\frac{\int_{0}^{x_0}x^{p-1}(1-x)^{q-1}dx}{\mathcal{B}(p,q)}-\frac{\int_{0}^{x_0}x^{p}(1-x)^{q-1}dx}{\mathcal{B}(p+1,q)}\\&=\frac{1}{\mathcal{B}(p,q)}\Big[\int_{0}^{x_0}x^{p-1}(1-x)^{q-1}dx-\frac{q+p}{p}\int_{0}^{x_0}x^{p}(1-x)^{q-1}dx\Big]\\&=\frac{1}{\mathcal{B}(p,q)}\Big[\int_{0}^{x_0}x^{p-1}(1-x)^{q}dx-\frac{q}{p}\int_{0}^{x_0}x^{p}(1-x)^{q-1}dx\Big]\\&=\frac{1}{\mathcal{B}(p,q)}\Big[\int_{0}^{x_0}x^{p-1}(1-x)^{q}dx-\Big(\int_{0}^{x_0}x^{p-1}(1-x)^{q}dx-\frac{1}{p}x^{p}(1-x)^{q}\Big|_{0}^{x_0}\Big)\Big]\\&=\frac{\Gamma(p+q)}{\Gamma(p+1)\Gamma(q)}x_{0}^{p}(1-x_{0})^{q}.\end{aligned}\tag{15}$$

If we choose $p=\frac{k}{2}$, $q=\frac{k}{2}$, then

$$I_{x_{0}}(\tfrac{k}{2},\tfrac{k}{2})=\frac{\Gamma(k)}{\Gamma(\frac{k}{2}+1)\Gamma(\frac{k}{2})}x_{0}^{\frac{k}{2}}(1-x_{0})^{\frac{k}{2}}+I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k}{2}).\tag{16}$$

On the other hand, we also have

$$\begin{aligned}I_{x_{0}}(p,q)&=1-\frac{\int_{x_{0}}^{1}x^{p-1}(1-x)^{q-1}dx}{\mathcal{B}(p,q)}\\&=1-\frac{\int_{0}^{1-x_{0}}x^{q-1}(1-x)^{p-1}dx}{\mathcal{B}(p,q)}\\&=1-I_{1-x_{0}}(q,p)\\&\stackrel{*}{=}1-I_{1-x_{0}}(q+1,p)-\frac{\Gamma(p+q)}{\Gamma(q+1)\Gamma(p)}(1-x_{0})^{q}x_{0}^{p}\\&=I_{x_{0}}(p,q+1)-\frac{\Gamma(p+q)}{\Gamma(q+1)\Gamma(p)}(1-x_{0})^{q}x_{0}^{p},\end{aligned}\tag{17}$$

where $*$ uses (15).
If we choose $p=\frac{k+2}{2}$, $q=\frac{k}{2}$, then

$$I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k}{2})=-\frac{\Gamma(k+1)}{\Gamma(\frac{k}{2}+1)\Gamma(\frac{k+2}{2})}x_{0}^{\frac{k+2}{2}}(1-x_{0})^{\frac{k}{2}}+I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k+2}{2}).\tag{18}$$

Combining (16) and (18), and since $x_0\in(0,\frac{1}{2})$, we conclude that

$$I_{x_{0}}(\tfrac{k}{2},\tfrac{k}{2})=\frac{\Gamma(k)}{\Gamma(\frac{k}{2}+1)\Gamma(\frac{k}{2})}(1-2x_{0})x_{0}^{\frac{k}{2}}(1-x_{0})^{\frac{k}{2}}+I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k+2}{2})>I_{x_{0}}(\tfrac{k+2}{2},\tfrac{k+2}{2}).\tag{19}$$

Proved.

## C The Proof Of Corollary 1

Proof. We have

$$\begin{aligned}\big|P(F_{j\Rightarrow i}^{t,k}>1)-P(F_{j\Rightarrow i}^{t,L}>1)\big|&=\sum_{i=k}^{L}\big[I_{x_{0}}(\tfrac{i}{2},\tfrac{i}{2})-I_{x_{0}}(\tfrac{i+1}{2},\tfrac{i+1}{2})\big]\\&=\sum_{i=k}^{L}\mathcal{O}\left(\frac{\Gamma(i)}{\Gamma(\frac{i}{2}+1)\Gamma(\frac{i}{2})}(1-2x_{0})x_{0}^{\frac{i}{2}}(1-x_{0})^{\frac{i}{2}}\right)\\&=\mathcal{O}\left((\tfrac{1}{4})^{\frac{k}{2}}(L-k)\right),\end{aligned}\tag{20}$$

where $\mathcal{O}$ is the asymptotic notation denoting the order of approximation following Hardy (1924). In conclusion, if GC exists, i.e., $\epsilon_0\in(0,1)$, the recall rate of DWGC is positively related to the window length $k$ and converges to 1 as $k\to+\infty$. Proved.

## D Proof Of Theorem 2

Analogous to the above section, the re-weighted $Y^{\Phi}_{i,t}$ is partitioned as

$$Y_{i,t}^{\Phi}:=\Phi_{t}Y_{i,t}=\Phi_{t}Y_{i,t}^{o}+\Phi_{t}Y_{i,t}^{n}.\tag{21}$$

On this basis, we assume $\hat{Y}^{\Phi}_{i,t}\sim\mathcal{N}(\Phi_{t}Y^{o}_{i,t},\Phi^{2}_{t}\sigma^{2})$, $\hat{Y}^{\Phi}_{i|j,t}\sim\mathcal{N}(\Phi_{t}Y^{o}_{i,t},\Phi^{2}_{t}\epsilon^{2}_{0}\sigma^{2})$, and $\Phi_{t}\hat{Y}^{n}_{i,t}\sim\mathcal{N}(0,\Phi^{2}_{t}\sigma^{2}_{0})$. Moreover, for simplification, we assume $\Phi_{t_i}\neq\Phi_{t_j}$ for $t_i\neq t_j$; otherwise a small perturbation is applied. According to the generalized chi-square distribution in Hammarwall et al. (2008), the sums of squared errors satisfy the following probability densities:

$$\sum_{q=t}^{t+k-1}(\hat{Y}^{\Phi}_{i,q}-Y^{\Phi}_{i,q})^{2}\sim f_{1}(x;\Phi_{t},\ldots,\Phi_{t+k-1}):=\sum_{q=t}^{t+k-1}\frac{e^{-\frac{x}{\Phi^{2}_{q}(\sigma^{2}+\sigma^{2}_{0})}}}{\Phi^{2}_{q}(\sigma^{2}+\sigma^{2}_{0})\prod_{j=t,j\neq q}^{t+k-1}\big(1-\frac{\Phi^{2}_{j}}{\Phi^{2}_{q}}\big)},\quad x\geq 0.$$
$$\sum_{q=t}^{t+k-1}(\hat{Y}^{\Phi}_{i|j,q}-Y^{\Phi}_{i,q})^{2}\sim f_{2}(x;\Phi_{t},\ldots,\Phi_{t+k-1}):=\sum_{q=t}^{t+k-1}\frac{e^{-\frac{x}{\Phi^{2}_{q}(\epsilon^{2}_{0}\sigma^{2}+\sigma^{2}_{0})}}}{\Phi^{2}_{q}(\epsilon^{2}_{0}\sigma^{2}+\sigma^{2}_{0})\prod_{j=t,j\neq q}^{t+k-1}\big(1-\frac{\Phi^{2}_{j}}{\Phi^{2}_{q}}\big)},\quad x\geq 0.\tag{22}$$

On this basis, we construct $\{\Phi_{t},\ldots,\Phi_{t+k-1}\}$ to make $P\Big(\frac{\sum_{q}(\hat{Y}^{\Phi}_{i,q}-Y^{\Phi}_{i,q})^{2}}{\sum_{q}(\hat{Y}^{\Phi}_{i|j,q}-Y^{\Phi}_{i,q})^{2}}>1\Big)>P\Big(\frac{\sum_{q}(\hat{Y}_{i,q}-Y_{i,q})^{2}}{\sum_{q}(\hat{Y}_{i|j,q}-Y_{i,q})^{2}}>1\Big)$. In this sense, we equivalently transform Theorem 2 into the following lemma:

**Lemma 2.** *If*
$$\sum_{q_{1}=t}^{t+k-1}\sum_{q_{2}=t}^{t+k-1}\frac{1}{(c_{0}\frac{\Phi_{q_{2}}^{2}}{\Phi_{q_{1}}^{2}}+1)\prod_{j=t,j\neq q_{1}}^{t+k-1}(1-\frac{\Phi_{j}^{2}}{\Phi_{q_{1}}^{2}})\prod_{j=t,j\neq q_{2}}^{t+k-1}(1-\frac{\Phi_{j}^{2}}{\Phi_{q_{2}}^{2}})}>\int_{c_{0}}^{+\infty}\frac{1}{\mathcal{B}(\frac{k}{2},\frac{k}{2})}x^{\frac{k}{2}-1}(1+x)^{-k}dx,$$
*where $c_{0}=\frac{\epsilon_{0}^{2}\sigma^{2}+\sigma_{0}^{2}}{\sigma^{2}+\sigma_{0}^{2}}$, then the recall rate of DWGC-CI outperforms that of DWGC at window length $k$.*

Notice that the left part of Lemma 2 has the property

$$\sup_{\{\Phi_{t},\ldots,\Phi_{t+k-1}\}\in\mathbb{R}^{k}}\sum_{q_{1}=t}^{t+k-1}\sum_{q_{2}=t}^{t+k-1}\frac{1}{(c_{0}\frac{\Phi_{q_{2}}^{2}}{\Phi_{q_{1}}^{2}}+1)\prod_{j=t,j\neq q_{1}}^{t+k-1}(1-\frac{\Phi_{j}^{2}}{\Phi_{q_{1}}^{2}})\prod_{j=t,j\neq q_{2}}^{t+k-1}(1-\frac{\Phi_{j}^{2}}{\Phi_{q_{2}}^{2}})}=+\infty,\tag{23}$$

thus a set $\{\Phi_{t},\ldots,\Phi_{t+k-1}\}$ as in Lemma 2 naturally exists. On this basis, following Lemma 2, the regularization term can be designed as

$$\left\|\sum_{q_{1}=t}^{t+k-1}\sum_{q_{2}=t}^{t+k-1}\frac{1}{(c_{0}\frac{\Phi_{q_{2}}^{2}}{\Phi_{q_{1}}^{2}}+1)\prod_{j=t,j\neq q_{1}}^{t+k-1}(1-\frac{\Phi_{j}^{2}}{\Phi_{q_{1}}^{2}})\prod_{j=t,j\neq q_{2}}^{t+k-1}(1-\frac{\Phi_{j}^{2}}{\Phi_{q_{2}}^{2}})}-\int_{c_{0}}^{+\infty}\frac{1}{\mathcal{B}(\frac{k}{2},\frac{k}{2})}x^{\frac{k}{2}-1}(1+x)^{-k}dx\right\|_{2}.\tag{24}$$

## D.1 The Proof Of Lemma 2

Proof.
$$\begin{aligned}
P\!\left(\frac{\sum_q(\widehat{Y}_{i,q}-Y_{i,q})^2}{\sum_q(\widehat{Y}_{i|j,q}-Y_{i,q})^2}>1\right)
&=\int_{c_0}^{+\infty}\frac{1}{\mathcal{B}(\frac{k}{2},\frac{k}{2})}x^{\frac{k}{2}-1}(1+x)^{-k}dx\\
&\overset{*}{\leq}\sum_{q_1=t}^{t+k-1}\sum_{q_2=t}^{t+k-1}\frac{1}{\left(c_0\frac{\Phi_{q_2}^2}{\Phi_{q_1}^2}+1\right)\prod_{j=t,j\neq q_1}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_{q_1}^2}\right)\prod_{j=t,j\neq q_2}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_{q_2}^2}\right)}\\
&=\sum_{q_1=t}^{t+k-1}\sum_{q_2=t}^{t+k-1}\frac{\left(\frac{1}{\Phi_{q_1}^2(\sigma^2+\sigma_0^2)}+\frac{1}{\Phi_{q_2}^2(\epsilon_0^2\sigma^2+\sigma_0^2)}\right)^{-1}}{\prod_{j=t,j\neq q_1}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_{q_1}^2}\right)\Phi_{q_2}^2(\epsilon_0^2\sigma^2+\sigma_0^2)\prod_{j=t,j\neq q_2}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_{q_2}^2}\right)}\\
&=\int_0^{+\infty}\sum_{q_1=t}^{t+k-1}\sum_{q_2=t}^{t+k-1}\frac{e^{-\frac{x}{\Phi_{q_1}^2(\sigma^2+\sigma_0^2)}-\frac{x}{\Phi_{q_2}^2(\epsilon_0^2\sigma^2+\sigma_0^2)}}}{\prod_{j=t,j\neq q_1}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_{q_1}^2}\right)\Phi_{q_2}^2(\epsilon_0^2\sigma^2+\sigma_0^2)\prod_{j=t,j\neq q_2}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_{q_2}^2}\right)}\,dx\\
&=\int_0^{+\infty}\!\!\int_x^{+\infty}\sum_{q=t}^{t+k-1}\frac{e^{-\frac{m}{\Phi_q^2(\sigma^2+\sigma_0^2)}}}{\Phi_q^2(\sigma^2+\sigma_0^2)\prod_{j=t,j\neq q}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_q^2}\right)}\,dm\;\sum_{q=t}^{t+k-1}\frac{e^{-\frac{x}{\Phi_q^2(\epsilon_0^2\sigma^2+\sigma_0^2)}}}{\Phi_q^2(\epsilon_0^2\sigma^2+\sigma_0^2)\prod_{j=t,j\neq q}^{t+k-1}\left(1-\frac{\Phi_j^2}{\Phi_q^2}\right)}\,dx\\
&=\int_0^{+\infty}\!\!\int_x^{+\infty}f_1(m)f_2(x)\,dm\,dx\\
&=P\!\left(\frac{\sum_q(\widehat{Y}^{\Phi}_{i,q}-Y^{\Phi}_{i,q})^2}{\sum_q(\widehat{Y}^{\Phi}_{i|j,q}-Y^{\Phi}_{i,q})^2}>1\right).
\end{aligned}\tag{25}$$

Here $*$ follows from the inequality in Lemma 2. Hence we have proved $P\!\left(\frac{\sum_q(\widehat{Y}^{\Phi}_{i,q}-Y^{\Phi}_{i,q})^2}{\sum_q(\widehat{Y}^{\Phi}_{i|j,q}-Y^{\Phi}_{i,q})^2}>1\right)>P\!\left(\frac{\sum_q(\widehat{Y}_{i,q}-Y_{i,q})^2}{\sum_q(\widehat{Y}_{i|j,q}-Y_{i,q})^2}>1\right)$.

## E Sensitivity Analysis

## E.1 Sliding Window Size Selection

The impact of window size selection is shown in Tab. 5.

Table 5: To illustrate the impact of window size selection, we add an experimental comparison using the American data of 2008 in the stock-and-market experiment, changing the sliding window size to one month. Compared with the results in the paper, the detected causality does not completely fit the prior knowledge and possibly misses part of the dynamic causality directions. Entries are F-test values.
| X   | method  | Jan-Feb     | Mar-Apr     | May-Jun     | Jul-Aug     | Sept-Oct    | Nov-Dec     |
|-----|---------|-------------|-------------|-------------|-------------|-------------|-------------|
| NLC | DWGC-CI | 21.67/12.30 | 38.6/22.51  | 28.09/2.60  | 7.20/34.54  | 12.62/39.36 | 34.76/53.65 |
| NLC | DWGC    | 32.64/43.65 | 5.32/5.32   | 6.43/54.32  | 5.23/46.23  | 53.23/54.23 | 54.32/45.32 |
| TAL | DWGC-CI | 3.21/32.12  | 32.12/54.23 | 43.23/0.54  | 32.11/14.22 | 1.23/2.34   | 32.43/0.53  |
| TAL | DWGC    | 43.12/43.23 | 34.21/32.12 | 54.23/0.43  | 0.32/54.23  | 54.23/56.65 | 54.23/0.43  |
| TMV | DWGC-CI | 43.23/54.23 | 43.23/0.65  | 0.76/0.89   | 0.43/0.67   | 91.54/54.23 | 0.76/0.32   |
| TMV | DWGC    | 54.34/23.32 | 54.32/34.23 | 43.23/43.54 | 12.23/45.23 | 34.12/35.32 | 4.12/4.23   |
| TFR | DWGC-CI | 47.32/56.09 | 4.23/45.21  | 54.23/56.64 | 54.23/65.23 | 43.23/1.24  | 43.23/23.12 |
| TFR | DWGC    | 43.23/49.34 | 2.24/43.23  | 34.23/0.34  | 1.24/1.65   | 9.34/4.34   | 4.32/65.23  |

## E.2 Precision-Recall Rate (Threshold Selection)

This is an auxiliary experiment for our AR/NAR simulation. In this section, we set the window length to 20 in the NAR case, for instance, to test the correlation between the causality threshold ($\epsilon$) and the performance (precision and recall). The experimental results show that our method DWGC-CI is robust to the selection of $\epsilon$ values and consistently outperforms DWGC and rolling-window GC. Moreover, the difference between the performances of the various methods is most significant at $\epsilon = 1.1$.

![22_image_0.png](22_image_0.png)

**Very few false-positive rates without GC ground truth.** If the data has no GC relation, our DWGC-CI captures very few false positives.
We add the following result to the synthetic data experiment: two series with no correlation (and certainly no GC) are constructed as $T_{i,t} = \mathrm{rand}(0, 1)$, $i \in \{1, 2\}$, $t \in \{0, \dots, 1000\}$, where $\mathrm{rand}(0, 1)$ denotes a random variable uniformly distributed on $(0, 1)$. We find that the probability of being judged as GC by DWGC-CI is relatively low at all window sizes, as follows. When there is no correlation between the two series, the false-positive probability is relatively low.

| Sliding window size            | 10    | 20    | 30    | 100   |
|--------------------------------|-------|-------|-------|-------|
| False-positive rate of DWGC    | 0.142 | 0.190 | 0.072 | 0.032 |
| False-positive rate of DWGC-CI | 0.041 | 0.024 | 0.039 | 0.00  |

Table 6: Very few false positives without GC ground truth for DWGC-CI.

## E.3 Additional Analysis On The Training Process

In this section, we examine three interesting questions: 1) What happens if the CI matrix is updated on all points instead of only the spurious windows where causality is detected, as in Eqn. 5? 2) What happens if a different form of the $h(\cdot)$ function is chosen, such as $\alpha - \tanh(\cdot)$ with different $\alpha$? 3) What happens if the NAR model is not sufficiently trained?

We take the NAR case of the synthetic experiment as an example; the results are summarized in Table 7. 1) Although updating the CI matrix over the whole graph is still effective, its performance is nonetheless poorer than the original form. 2) DWGC-CI fails significantly when $\alpha$ is far from 1, since the model becomes potentially unstable when the total data scale is not equal to one, which is consistent with the original form in our main text. 3) Unlike the sensitivity to $\alpha$ in the second point, DWGC-CI is not very sensitive to whether the NAR model is well trained or not. This is because we examine the F-statistic, which measures the difference between two fittings using the same model rather than necessarily achieving the best fit in a specific fitting.

Table 7: Analysis of the training process on the NAR case of the synthetic experiment (recall (left) and accuracy (right)) for different window sizes.

| method                             | 10                      | 20                      | 30                      | 100                     |
|------------------------------------|-------------------------|-------------------------|-------------------------|-------------------------|
| DWGC-CI (original, α = 1.1)        | 0.59(0.013) 0.56(0.014) | 0.94(0.010) 0.76(0.014) | 0.87(0.012) 0.95(0.004) | 0.88(0.015) 0.98(0.002) |
| DWGC-CI (Φ on the whole graph)     | 0.55(0.043) 0.43(0.054) | 0.93(0.013) 0.76(0.016) | 0.80(0.011) 0.78(0.055) | 0.80(0.015) 0.80(0.012) |
| DWGC-CI (α = 10)                   | 0.32(0.043) 0.65(0.065) | 0.76(0.054) 0.76(0.054) | 0.76(0.042) 0.65(0.044) | 0.58(0.025) 0.98(0.002) |
| DWGC-CI (10% time of NAR training) | 0.56(0.017) 0.56(0.020) | 0.90(0.020) 0.74(0.024) | 0.80(0.017) 0.90(0.007) | 0.84(0.015) 0.98(0.005) |

## F Additional Discussion

## F.1 Concept Drift

Besides, we can also view this problem through the lens of "concept drift" (Tsymbal, 2004), which means that "changes in the hidden context can induce more or less radical changes in the target concept" (Tsymbal, 2004). Concept drift falls into two categories: sudden (instantaneous) and gradual. Traditional GC only extracts causal information from the latter and ignores the former, while our DWGC-CI method can extract both.

## F.2 Back Door Criteria

The premise of our pairwise analysis is that series channels $i$ and $j$ meet the "backdoor condition" (Pearl & Robins, 1995). That is, we should first exclude common confounding from other channels that affects both series $i$ and $j$. Moreover, in a stricter sense, Granger causality is considered a non-causal "precedence" (Pearl, 2018), in line with the informal fallacy "post hoc ergo propter hoc" (Woods & Walton, 1977), which means "after this, therefore because of this". This is a flaw inherent in GC's approach itself.
Thus the objective of our paper is to generalise the existing GC rather than sticking to such philosophical dilemmas. Despite this, we still perform an extension from the bi-channel case to the multi-channel case: we have expanded the 2-dimensional experiments to 20 dimensions. Our approach consists of two steps: 1) performing a 2-dimensional operation on any two dimensions to examine the window-level causal relationships, and 2) for all elements where a confounding structure $Y_{i_1}(t_{i_1}) \leftarrow Y_j(t_j) \rightarrow Y_{i_2}(t_{i_2})$ is detected, we predict the $Y_{i_1}$ and $Y_{i_2}$ sequences again after removing the information of $Y_j$, to eliminate the potential influence of confounders on causal identification. With this basic operation, our DWGC-CI method significantly outperforms all baselines. The causal graph upon multi-dimensional time series is defined as

$$T_{i}(t)=\operatorname{Re}\left(\sqrt{T_{j,t-\mathrm{lag}}^{2}-1}\right)+n(t),\quad i,j\in\{1,2,\dots,M\},$$

which is naturally derived from our main text. Here $M$ is the total number of channels. *Please note that the multidimensional approaches discussed here are only provided as examples and are not exhaustive. There are other more complex and sophisticated methods available, such as [1]. The application and comparison of these methods can be considered future work.

Table 8: The extension from the bi-channel case to the multi-channel case (recall (left) and accuracy (right)) for different window sizes.

| dataset | method                                     | 10                      | 20                      | 30                      | 100                     |
|---------|--------------------------------------------|-------------------------|-------------------------|-------------------------|-------------------------|
| AR      | Change-point (Aminikhanghahi & Cook, 2017) | 0.46(0.018) 0.55(0.013) | 0.55(0.019) 0.45(0.014) | 0.76(0.014) 0.98(0.012) | 0.97(0.006) 0.97(0.003) |
| AR      | Extreme causal (Bodik et al., 2021)        | 0.40(0.027) 0.57(0.019) | 0.52(0.019) 0.62(0.024) | 0.71(0.013) 0.93(0.015) | 0.82(0.010) 0.94(0.013) |
| AR      | Anomal causal (Yang et al., 2022)          | 0.17(0.021) 0.70(0.018) | 0.46(0.027) 0.71(0.023) | 0.58(0.013) 0.94(0.015) | 0.61(0.012) 0.94(0.012) |
| AR      | Rolling GC (Ming-Hsien & Chih-She, 2015)   | 0.15(0.016) 0.65(0.019) | 0.39(0.015) 0.71(0.016) | 0.58(0.013) 0.93(0.015) | 0.63(0.016) 0.94(0.005) |
| AR      | DWGC (ours)                                | 0.68(0.019) 0.53(0.028) | 0.71(0.013) 0.87(0.015) | 0.83(0.016) 0.94(0.020) | 0.85(0.013) 0.98(0.019) |
| AR      | DWGC-CI (ours)                             | 0.93(0.013) 0.66(0.014) | 0.99(0.010) 0.86(0.014) | 0.99(0.012) 0.95(0.005) | 0.99(0.010) 1.00(0.012) |
| NAR     | Change-point (Aminikhanghahi & Cook, 2017) | 0.40(0.017) 0.50(0.023) | 0.48(0.023) 0.40(0.012) | 0.70(0.034) 0.90(0.018) | 0.87(0.016) 0.89(0.005) |
| NAR     | Extreme causal (Bodik et al., 2021)        | 0.15(0.021) 0.57(0.022) | 0.57(0.019) 0.47(0.017) | 0.73(0.015) 0.87(0.011) | 0.75(0.013) 0.87(0.016) |
| NAR     | Anomal causal (Yang et al., 2022)          | 0.01(0.017) 0.70(0.018) | 0.49(0.020) 0.68(0.021) | 0.65(0.011) 0.86(0.016) | 0.70(0.015) 0.87(0.016) |
| NAR     | Rolling GC (Ming-Hsien & Chih-She, 2015)   | 0.07(0.015) 0.49(0.018) | 0.31(0.017) 0.72(0.020) | 0.55(0.011) 0.92(0.014) | 0.60(0.010) 0.92(0.014) |
| NAR     | DWGC (ours)                                | 0.44(0.018) 0.12(0.025) | 0.53(0.013) 0.82(0.016) | 0.68(0.015) 0.81(0.022) | 0.73(0.012) 0.87(0.002) |
| NAR     | DWGC-CI (ours)                             | 0.59(0.013) 0.56(0.014) | 0.94(0.010) 0.76(0.014) | 0.87(0.012) 0.95(0.004) | 0.88(0.015) 0.98(0.002) |

## F.3 More Comparison With Other Methods

Compared with change-point detection and model-parameter smoothing, window-level causality detection has the following advantages:

a) Finer time scales: window-level causality detection can analyze and detect causal relationships on finer time scales, whereas change-point detection usually analyzes the entire time series and cannot provide the same fine temporal resolution.

b) Localized analysis: window-level causality detection divides the time series into consecutive windows and performs causality analysis within each window. This localized analysis can more accurately capture local variations and causality in the time series without being affected by the entire time series.

c) Better sensitivity and specificity: window-level causality detection analyzes the data within each window, targeting causality judgments to the characteristics of the data within a specific window. This targeted analysis improves the sensitivity and specificity of the detection algorithm and reduces false positives and misses.

d) Most importantly, adaptation to dynamic changes: window-level causality detection can flexibly adapt to dynamic changes in the time series, and **can identify complex relationships between different change points**. The window size can be adjusted as needed to accommodate potential causal changes in different time periods. This dynamic adaptability makes window-level causality detection more applicable and robust.

In sum, window-level causality detection has advantages over change-point detection in terms of time scale, analysis accuracy, sensitivity, and adaptability. The experiment also demonstrates the superiority of DWGC-CI over traditional change-point detection.

## F.4 Justification Of Assumptions

**Justification of the Gaussian assumption in the theoretical part.** In the context of time series modeling, the regressive (AR and NAR) model assumes that the model output approximately follows a Gaussian distribution. This assumption is based on the idea that the current value of the time series is a linear combination of past values and some random noise, which is typically assumed to be normally distributed.
The assumption of Gaussianity in the AR model allows for efficient parameter estimation via maximum likelihood, as it simplifies the mathematical calculations and inference procedures. Moreover, many statistical techniques, such as hypothesis testing and confidence interval estimation, rely on the assumption of Gaussianity.

**Justification of the stationarity assumption in footnote 1.** We need to define traditional GC based on the assumption of stationarity before we can discuss DWGC and DWGC-CI. From a practical perspective, it is more appropriate to understand this assumption as a very common and necessary preprocessing (smoothing) step, rather than simply as a stationarity assumption.
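Corollary 1 claims that, under the Gaussian error model discussed above, the window-level recall rate $P(F^{t,k}_{j\Rightarrow i} > 1)$ grows with the window length $k$ and approaches 1. A minimal Monte Carlo sanity check of that claim is sketched below; the noise scales `sigma`, `sigma0`, `eps0` are illustrative assumptions (not values from the paper), and only NumPy is assumed:

```python
import numpy as np

# Illustrative (assumed) noise scales: sigma is the forecast noise, sigma0 the
# observation noise, eps0 < 1 the error reduction when channel j helps predict i.
sigma, sigma0, eps0 = 1.0, 0.5, 0.6
rng = np.random.default_rng(0)

def recall_mc(k, n_trials=40000):
    """Monte Carlo estimate of P(F > 1) for one window of length k, where
    F = sum of squared errors without channel j / sum with channel j.
    Per the Gaussian model: numerator errors ~ N(0, sigma^2 + sigma0^2),
    denominator errors ~ N(0, eps0^2 * sigma^2 + sigma0^2)."""
    e1 = rng.normal(0.0, np.sqrt(sigma**2 + sigma0**2), size=(n_trials, k))
    e2 = rng.normal(0.0, np.sqrt(eps0**2 * sigma**2 + sigma0**2), size=(n_trials, k))
    return float(np.mean((e1**2).sum(axis=1) > (e2**2).sum(axis=1)))

# Recall should increase monotonically with k and approach 1 (Corollary 1);
# analytically it equals P(F_{k,k} > c0) = 1 - I_{x0}(k/2, k/2) with
# c0 = (eps0^2 sigma^2 + sigma0^2) / (sigma^2 + sigma0^2).
for k in [4, 10, 20, 100]:
    print(k, recall_mc(k))
```

With these assumed scales, $c_0 \approx 0.488$, and the estimated recall climbs from roughly 0.75 at $k=4$ to essentially 1 at $k=100$, matching the monotone-in-$k$ behavior that the corollary asserts.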
Review 1: Summary: This work addresses the challenging task of detecting Granger causality (i.e., the predictive utility of one time-series channel to predict another) for time-series characterized by dynamic causalities. Contrary to the classic Granger causality paradigm that assumes the causal effect between channels to be invariant over time, the authors aim at capturing window-level Granger causality, i.e., detecting causal relationships that evolve within sliding windows of a time-series. The proposed methodology, named DWGC-CI, consists of: - a windowing equivalent of the GC methodology (DWGC), where F-statistics within sliding windows of size $k$ are computed ---as per Equation (3)--- and, - incorporating an indexing matrix $\Phi$ that scales down the fluctuations of the learned (within and across channels) forecasting models ---as per Equation (5). The authors argue that the causality index (CI) matrix provides a dynamic, negative feedback regulation process to the DWGC computation, where $\Phi$ is learned to increase the recall rate of the proposed DWGC method. Section 4 provides theoretical insights on DWGC ---when compared to classic GC--- and the impact of incorporating the CI matrix into the windowed GC framework. In Section 5, empirical results are presented on synthetic and real-world datasets, showcasing relative recall and accuracy improvements for the former, and providing qualitative arguments for the latter. Strengths and Weaknesses: The main strengths of this work are: - The significance of the studied problem: detecting window-level Granger causality is an important and relevant challenge. - The windowing-based GC approach, and the corresponding per-window F-statistic computation, is sound. - Results in synthetic experiments showcase a relative improvement of DWGC-CI over DWGC, and the selected alternative baseline. There are several weaknesses of this work: 1.
Lack of clarity and open questions on the proposed methodology (see requested clarifications in section below): - What time-series information is used to compute DWGC predictors $\hat{Y}\_{i|q}$ and $\hat{Y}\_{i|j,q}$ in Equation (3)? Are there as many predictors as windows within a time-series or is there a single predictor for each time-series channel? - Computation of Equation (5) is unclear and its rationale demands further justification. - Is global convergence of the objectives in Algorithm 1 (NAR forecast loss and $L\_{index}$ minimization) simultaneously possible? 2. The theoretical analysis does not list all its assumptions, and the significance of the presented Theorems is unclear for window-level causality detection: - Assumption 1 relates to the unbiased nature of the predictors, while the proof of Theorem 1 demands further assumptions (constant predictor variance, Gaussian time-series innovations) that have not been clearly stated and argued for in the main manuscript. - Theorem 1, presented as a result on the positive correlation between recall rate and window length of DWGC, assumes **channel level GC**, while DWGC aims at **window-level GC**: the significance of such Theorem in the context of the goal of this work is unclear. - Assumptions for Theorem 2 (e.g., Gaussian weighted predictors) are not clearly laid out in the main manuscript. - Theorem 2 argues that causality matrix $\phi$ exists under window-level GC, yet it does not provide any insight on how the selected loss function $L_{index}$ minimization helps achieve such goal. 3. Empirical details and evaluation: - The proposed methodology requires learning 2 extra hyperparameters ($\epsilon$ and $\alpha$, for fixed $h$), besides the selection of time-series window size $k$ and form of $h$. - Synthetic results (with ground truth access) are provided for only 2 simulated realizations ---which showcase a promising, yet inconclusive, performance improvement.
The evaluation with real datasets seems subject to qualitative expert knowledge ---due to a lack of ground truth. Requested Changes: 1. Methodology: - In GC, the predictors are computed based on all the information available up to $t$: as indicated in Equation (1), one leverages data up to time instant $t$, i.e., $Y_{i,<t}$ indicates the time-series channel $i$ at and before time $t$. This approach makes sense in the GC framework, given that the underlying assumption is that the causality is invariant across time. - However, it is unclear what information is used to compute predictors $\hat{Y}_{i|q}$ and $\hat{Y}\_{i|j,q}$ in Equation (3): is information only within each window of the time-series used? - Using all previous information for each channel as in Equation (1) does not seem to correspond with the purpose of computing window-level causalities. - Clarifications and details on this matter are deemed necessary. - More detailed explanations and a rationale for computing the CI matrix $\Phi$ as in Equation (5) would be appreciated: - $q \in M$ in Equation (5) is said to relate to the set of sliding windows which detected causality ---contrary to the definition of time instants $q \in [t, t+k-1]$ in Equation (3)---: why is $\Phi$ computation limited to those windows? Isn't causality detection via F-statistics impacted by the $\Phi$ values themselves? - What is the expectation $\mathbb{E}_q$ inside $h$ referring to? Is this expectation over all sets $q \in M$ or all time instants $q \in [t, t+k-1]$? - Can the authors justify the choice of the argument of $h$ better? Why compute the difference between the squared differences of instantaneous errors and expectation over instantaneous errors? - Can the authors elaborate on why the function $h$ needs to be monotonically decreasing?
The authors propose a specific form in footnote 3, dependent on hyperparameter $\alpha$: can they elaborate on this parametric form and evaluate how sensitive is DWGC-CI to the choices of $h$? 2. Theory: - Theorem 1 assumes **channel level GC**, while DWGC aims at **window-level GC**: what is the significance of such Theorem? - To the best of my understanding, Theorem 1 clarifies that doing GC detection based on window-level information implies a performance loss under channel level GC, yet it does not provide insights on how accurate window-level GC detection is (i.e., the aim of this work). - I would appreciate it if the authors could clarify and expand on this important concern. - Theorem 2 argues for the existence of a causality matrix $\Phi$ under window-level GC that reduces the recall rate of DWGC-CI when compared to DWGC. However, details on how the definition of $L_{index}$ (e.g., the choice of h) and its minimization may allow us to achieve that goal are unclear. - I would encourage the authors to precisely lay down the assumptions needed for both Theorems to hold. 3. Experiments: - If my understanding is correct, Table 1 presents results for a single realization of each of the simulated AR and NAR time-series. - It would be interesting if the authors could replicate the experiments across different realizations and provide average and variability information of the results. - The simulated environments define "The time point of $m_i = 10, i = 1,2$" as the beginning of causality. Could the authors clarify how long is the causality considered to be active, and how it relates to windowing? - Are all windows of size $k$ that contain the time-instant where $m_i=10$ considered windows with GC? - The authors argue that DWGC-CI is "better than the baseline Ming-Hsien & Chih-She (2015), especially at k = 20, 100". - Can the authors elaborate on the dependency of the selected window size and windowed GC detection performance? 
- Why is performance not consistently better/worse with increasing/decreasing window sizes? - How long are the simulated time-series and how does it relate to the evaluated window-sizes? - The significance of the results could be strengthened with more simulations run to provide clearer and more thorough evidence supporting the improved dynamic windowed GC detection claims. - For instance, given that the authors have simulated a linear scenario (AR), why not compare their approach to other, linear window-level causality baselines cited in the introduction? 4. Other suggested edits: - References to Figures, Equations and Assumptions are incorrectly typeset, as they appear as "Fig X equation X", "Eqn. equation X", "Assumption. equation X" - In text between Equations (1) and (2), it reads "check the difference between $\hat{Y}\_{i|j,t}$ and $\hat{Y}\_{i,t}$, i.e., the forecasting result of the channel $j$ with and without channel $i$": - shouldn't it be "check the difference between $\hat{Y}\_{i|j,t}$ and $\hat{Y}\_{i,t}$, i.e., the forecasting result of the channel $i$ with and without channel $j$"? - In the last paragraph of Section 2, the second to last sentence reads "are computational to verify", which seems to lack a clarifying word (e.g., computationally hard?). - Before Equation 5, "We establish an minimization goal to learn" -> "We establish a minimization goal to learn" - Appendix Section D is entitled "Proof of Theorem 3", should be "Proof of Theorem 2" Broader Impact Concerns: This work does not discuss broader impact concerns. ================================================== Review 2: Summary: The authors propose a procedure to perform dynamic Granger-causality analysis of multivariate time series in a non-stationary and nonlinear setting. Their algorithm is based on a window-level estimation of a nonlinear forecasting model and an F-test for pairwise Granger-causal relationships.
The local fluctuations due to the sliding window approach are reduced by estimating a causality index matrix, which weights the observations in the F-test statistic. The method, called DWGC-CI (dynamic window-level Granger causality with causality index), is tested on 2 synthetic and 2 real data sets, and shown to perform better than a variant of the method without the causality index matrix correction, DWGC, and a baseline method using a linear time series model and a rolling window. The authors also show that under some assumptions on the forecasting model, the size of the rolling window is positively correlated with the recall rate of the GC analysis, and recall is higher for DWGC-CI than DWGC. Strengths and Weaknesses: Strengths: - The task considered in this work is well introduced and the approach by rolling window to estimate dynamic Granger-causal relations is clearly described. - The authors propose an interesting strategy based on a causality index matrix for mitigating the fluctuations introduced by the rolling window. - The method is able to estimate different Granger-causal (GC) relationships in synthetic time series with a change point and recovers well-identified causalities existing in real data sets. Weaknesses: - The claim of the paper that such method can perform dynamic GC analysis in multivariate time series is not sufficiently demonstrated. The empirical evidence is not sufficient and lacks comparison to other approaches, e.g., procedures that first estimate change-points in the multivariate time series. - Some aspects of the methodology are not justified and discussed enough. For instance, the sensitivity on the choice of the threshold of the F-statistic is empirically tested but there is no general principled method proposed, e.g., to select significant values of the F-statistic.
More importantly, the authors perform pairwise tests in multivariate time series, a strategy that is known to potentially detect spurious causalities in the presence of confounders [1]. I think this point should be discussed in the paper. Besides, I would also expect a discussion on other strategies for regulating the local fluctuations due to the window slicing, e.g., model parameters smoothing. Could the optimisation of the causality index matrix be avoided? - The formatting and general writing of the paper are not polished. The limitations of previous methods mentioned in the literature review (Section 2) are not clear to me. Besides, the paper contains a lot of typos and the tables and figures are not sufficiently described. Requested Changes: 1. I think the experimental section needs to be strengthened. In particular, the synthetic data part should contain a multivariate setting with more than 2 dimensions. The possibility to detect spurious causalities in this case should also be tested. 2. To me, the method by rolling window should also be compared to methods that incorporate change-point detection such as [2], unless there is a good reason not to do so here. Also, as an ablation study, since the authors claim that their method is insensitive to the prediction model, it would be valuable to test what happens when a mis-specified VAR model is used instead of the NAR in the DWGC-CI algorithm. Besides, which nonlinear function is used in the NAR in Section 5? 3. The methodology for optimising the causality index matrix, which is, to my understanding, a new method, should be better described. In particular, what are the constraints on the entries or the norm of this matrix? Besides, the optimiser used in the experimental part should be specified in the appropriate section. 4. The authors should correct the general writing and formatting of the paper.
There are some reference problems in the appendix (e.g., Theorem 3 which is Theorem 2 in the main paper if I am correct). Tables and Figures also need reformatting. Broader Impact Concerns: No concern. ================================================== Review 3: Summary: This paper proposes a dynamic window Granger causality (DWGC) detection approach and explains why it can be improved by using a "causality index" (CI) matrix resulting in a DWGC-CI method. Theory is provided that shows that DWGC has a granularity-efficacy tradeoff, and that DWGC-CI has higher recall than DWGC. Experimental results on synthetic and real data show the effectiveness of the proposed DWGC-CI method. Strengths and Weaknesses: Strengths: - The problem setup is general and could be broadly applicable. - The proposed method appears promising in theory and in experimental results. Weaknesses: - Very few experimental baselines are tested. I would have liked to see a more serious effort at benchmarking against more baselines, even if they are a bit more computationally expensive. - I found the paper's exposition currently to be very confusing to follow and it doesn't help that there are way too many typos. Please proofread the paper carefully. To make matters worse, the citations right now really need to be fixed. If you do not mean to use an inline citation, then use \citep{}, whereas if you intend on using an inline citation, use \citet{}. Right now the citations are done in a manner that makes reading the paper to be difficult. For example, even the very first paragraph of the paper (in the introduction) has problems. It should read "Time series data, characterized by a pre-defined temporal or sequential order (Hamilton, 1994), is extensively employed in a diverse range of real-world applications, encompassing signal processing (Scharf, 1991), economics (Granger & Newbold, 2014), and climatology (Mudelsee, 2013), among others." 
- More discussion of how realistic Assumption 1 is in practice would be helpful. To what extent does it apply to the real datasets considered? - More discussion on the stationarity assumption in footnote 1 would also be helpful as well as it pertains to the datasets considered. Do any of the recently proposed methods mentioned in Section 2 not require this stationarity assumption? Requested Changes: Please address the weaknesses that I brought up above. Overall, I think that the paper really needs to better compare the proposed method DWGC-CI with more baselines (even if it means using a dataset/setup that helps make the comparison possible and that might not be one of the datasets currently considered) and provide much more discussion on why the assumptions made by the model make sense in practice (also compared against other methods). I would imagine that there is no single best model at present and I think that is okay, but providing this sort of context (when and why people should use the proposed method vs. a baseline method) is very important. Separately, the paper is littered with typos and is written in a manner that is simply not very clear. Please read over and polish the draft. Broader Impact Concerns: N/A ================================================== Metareview: Recommendation: Reject Comment: The authors propose a novel approach for dynamic Granger-causality analysis in nonstationary and nonlinear settings. Their method, DWGC-CI, combines a window-level estimation of a nonlinear forecasting model with F-tests for pairwise Granger-causal relationships. It outperforms the previous methods in tests on synthetic and real datasets, demonstrating the value of incorporating a causality index in the analysis. All reviewers appreciate the authors' efforts in responding to their comments and making revisions to the manuscript. However, they have raised several key concerns about the clarity and theoretical foundation of the work.
After the author rebuttal, the reviewers still had the following concerns during the reviewer discussion, and the majority of them do not recommend acceptance of the current version. Firstly, the main claim, which involves the incorporation of the causality index to stabilize the window-level approach, remains inadequately justified. Both reviewers L8bC and i4XB found this aspect lacking in clear theoretical support. They emphasize that the significance and scope of the theoretical results need to be more clearly articulated. Additionally, Reviewer L8bC expressed concerns about the marginal improvement in the revised manuscript and suggested that the added clarifications should be integrated into the text, as the content currently lacks good organization and description, impacting the overall readability. Reviewer i4XB highlighted the importance of a more comprehensive discussion and evaluation of the practical aspects of the computation method, which currently appears somewhat ad-hoc. Moreover, the theoretical foundation of the work was questioned by Reviewer i4XB. In Theorem 1, the result that the window size affects the performance of DWGC is not adequately informative and fails to justify various aspects of the proposed framework, such as the choice of windowing or the specific formulation of the causality index. Additionally, the assumptions of Gaussianity and constant variance underlying the presented theorems are too restrictive from a practical perspective. In summary, the current manuscript needs a major revision. I have to recommend rejection because the reviewing system does not allow a second round of major revision, but I encourage the authors to resubmit the paper to TMLR after another round of revision. ==================================================
# Time Series Alignment With Global Invariances Titouan Vayer titouan.vayer@inria.fr Université Lyon, INRIA, CNRS, ENS de Lyon, UCB Lyon 1, LIP, Lyon, France Romain Tavenard *romain.tavenard@univ-rennes2.fr* Université de Rennes, CNRS, LETG, IRISA, Rennes, France Laetitia Chapel *laetitia.chapel@irisa.fr* Université Bretagne-Sud, CNRS, IRISA, Vannes, France Rémi Flamary remi.flamary@polytechnique.edu CMAP, Ecole Polytechnique, IP Paris, France Nicolas Courty *nicolas.courty@irisa.fr* Université Bretagne-Sud, CNRS, IRISA, Vannes, France Yann Soullard yann.soullard@univ-rennes2.fr Université de Rennes, CNRS, LETG, IRISA, Rennes, France Reviewed on OpenReview: *https: // openreview. net/ forum? id= JXCH5N4Ujy* ## Abstract Multivariate time series are ubiquitous objects in signal processing. Measuring a distance or similarity between two such objects is of prime interest in a variety of applications, including machine learning, but can be very difficult as soon as the temporal dynamics and the representation of the time series, *i.e.* the nature of the observed quantities, differ from one another. In this work, we propose a novel distance accounting both feature space and temporal variabilities by learning a latent global transformation of the feature space together with a temporal alignment, cast as a joint optimization problem. The versatility of our framework allows for several variants depending on the invariance class at stake. Among other contributions, we define a differentiable loss for time series and present two algorithms for the computation of time series barycenters under this new geometry. We illustrate the interest of our approach on both simulated and real world data and show the robustness of our approach compared to state-of-the-art methods. ## 1 Introduction Time series are subject to a number of variabilities that make their processing difficult in practice. 
One of the most well-known examples is the temporal shift, usually handled using the celebrated Dynamic Time Warping (DTW, Sakoe & Chiba 1978) that aligns, in time, two time series and is invariant to any monotonically increasing temporal map. It was initially introduced for speech processing applications and is now widely used in a variety of contexts such as human activity recognition (Chang et al., 2019), satellite image analysis (Wegner Maus et al., 2019) or medical applications (Huang & Lu, 2020; Janati et al., 2020). Another source of variability in time series is feature space alterations, which may occur due to a permutation of sensors, changes in sensor properties or even a different number of sensors. This problem of heterogeneous representations, also called distribution shift in machine learning, has been studied mostly on non-structured data. It has been shown that algorithms are notoriously weak (Ben-David et al., 2010) when it comes to generalizing to out-of-distribution samples, as they rely on the correlations that are found in the training data (Arjovsky et al., 2019).

![1_image_0.png](1_image_0.png)

Figure 1: DTW-GI aligns time series by optimizing on temporal alignment π (through Dynamic Time Warping) and feature space transformation f (*e.g.* here in the Stiefel manifold V2,3). In the figure, π∗, f∗ denote the solutions of the optimization problem DTW-GI. Time series represented here are color-coded trajectories, whose starting (*resp.* end) point is depicted in blue (*resp.* yellow).

Consequently, dedicated paradigms such as domain adaptation (Kouw & Loog, 2019) directly take this problem into account in the learning process. Another approach is to learn with respect to some invariance classes (based on some prior knowledge) in order to be more robust to irrelevant feature transformations (Battaglia et al., 2018; Goodfellow et al., 2009).
In this work, we aim at tackling this problem in the time series context through the definition of similarity measures that naturally encode desirable invariances. More precisely, we introduce similarity measures that are able to deal with both temporal and feature space transformations. There exist many frameworks to register different spaces under some classes of invariance. In the shape analysis community, matching objects under rigid transformations is a widely studied problem. Iterative Closest Point (ICP, Chen & Medioni 1992) is a standard algorithm for such a task. It acts by alternating two simple steps: (i) matching points using nearest neighbor search and (ii) registering shapes together based on the obtained matches, which is known as the orthogonal Procrustes problem and has a closed-form solution (Goodall, 1991). This idea is further explored in Cohen & Guibas (1999); Alvarez-Melis et al. (2019), where optimal transport is used to match points in the first step, and a recent extension to objects with a hierarchical structure has been introduced in Alvarez-Melis et al. (2020) that considers a dedicated invariance class for the registration step. This heterogeneous setting has also been investigated in the time series context, where the goal is to align series of features lying in different spaces. One of the most salient tracks of research in this setting is the Canonical Time Warping (CTW) method. CTW (Zhou & Torre, 2009) has been introduced for human motion alignment under rigid space transformations. It consists of temporal alignment (using DTW) of transformed time series, using Canonical Correlation Analysis (CCA) to define the feature space transform. A few extensions to CTW have been proposed. GTW (Zhou & De la Torre, 2012) parametrizes CTW temporal alignments in continuous time instead of relying on DTW. Deep CTW (Trigeorgis et al., 2016) extends CTW by learning a feature space embedding (in the form of a neural network) before performing CTW.
Finally, Canonical soft Time Warping applies the CTW methodology to soft alignments (see Section 2 for more details on soft alignments). In the same vein as CTW, Deng et al. (2020) learn an invariant subspace based on DTW alignments. More recently, GromovDTW (Cohen et al., 2021) has been introduced as an extension of the Gromov-Wasserstein distance measure between heterogeneous distributions to the time series context. GromovDTW relies on time series self-similarities as a way to circumvent the need to compute distances across feature spaces. Compared to these approaches, our method works by optimizing a map between feature spaces, hence allowing one to (i) add prior information in the form of constraints on the set of allowed maps and (ii) use the computed map for downstream applications, as illustrated in our experiments on MoCap data (as described in Section 4). In more detail, we aim at tackling both temporal and feature space invariances. To do so, we state the problem as a joint optimization over temporal alignments and feature space transformations, as depicted in Figure 1. Our general framework allows the use of either DTW or its smoothed counterpart softDTW as an alignment procedure. Similarly, though rigid transformations of the feature space seem a reasonable invariance class, we show that our method can be used in conjunction with other families of transformations. Such a framework allows considering the case when time series differ both in length and feature space dimensionality. We introduce two different optimization procedures that can be used to tackle this problem and show experimentally that they lead to effectively invariant similarity measures. Our method can also be used to compute meaningful barycenters even when the time series at stake do not lie in the same feature space.
Finally, we showcase the versatility of our method and the importance of jointly learning feature space transformations and temporal alignments on two real-world applications: time series forecasting for human motion and cover song identification.

## 2 Dynamic Time Warping (Dtw)

Dynamic Time Warping (DTW, Sakoe & Chiba 1978) is an algorithm used to assess similarity between time series, with extensions to multivariate time series proposed in Ten Holt et al. (2007); Wöllmer et al. (2009). In its standard form, given two multivariate time series $\mathbf{x} \in \mathbb{R}^{T_x \times p}$ and $\mathbf{y} \in \mathbb{R}^{T_y \times p}$ of the same dimensionality $p$, DTW is defined as:

$$\text{DTW}(\mathbf{x},\mathbf{y})=\min_{\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\sum_{(i,j)\in\pi}d(\mathbf{x}_{i},\mathbf{y}_{j})\tag{1}$$

where $\mathcal{A}(\mathbf{x},\mathbf{y})$ is the set of all admissible alignments between x and y and $d$ is a ground metric. In most cases, $d$ is the squared Euclidean distance, *i.e.* $d(\mathbf{x}_i,\mathbf{y}_j)=\|\mathbf{x}_i-\mathbf{y}_j\|^2$. An alignment π is a sequence of pairs of time frames which is considered to be admissible iff (i) it matches the first (and respectively last) indexes of time series x and y together, (ii) it is monotonically increasing and (iii) it is connected (*i.e.* every index from one time series must be matched with at least one index from the other time series). Efficient computation of the above-defined similarity measure can be performed in quadratic time using dynamic programming, relying on the following recurrence formula:

$$\text{DTW}(\mathbf{x}_{\to t_{1}},\mathbf{y}_{\to t_{2}})=d(\mathbf{x}_{t_{1}},\mathbf{y}_{t_{2}})+\min\left\{\begin{array}{l}\text{DTW}(\mathbf{x}_{\to t_{1}},\mathbf{y}_{\to t_{2}-1})\\ \text{DTW}(\mathbf{x}_{\to t_{1}-1},\mathbf{y}_{\to t_{2}})\\ \text{DTW}(\mathbf{x}_{\to t_{1}-1},\mathbf{y}_{\to t_{2}-1})\end{array}\right.\tag{2}$$

where we denote by $\mathbf{x}_{\to t}$ the time series x observed up to time t. Many variants of this similarity measure have been introduced.
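To make the recurrence of Equation 2 concrete, here is a minimal NumPy sketch (our own illustrative code, not the authors' implementation; function names and the backtracking helper are assumptions). It computes the DTW cost under a squared Euclidean ground metric and recovers one optimal alignment path:

```python
import numpy as np

def dtw(x, y):
    """DTW via the quadratic-time recurrence of Equation 2.

    x: (Tx, p) array, y: (Ty, p) array. Ground metric: squared Euclidean.
    Returns the DTW cost and one optimal alignment path as a list of (i, j) pairs.
    """
    Tx, Ty = len(x), len(y)
    # pairwise ground costs d(x_i, y_j) = ||x_i - y_j||^2
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    acc = np.full((Tx, Ty), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(Tx):
        for j in range(Ty):
            if i == j == 0:
                continue
            prev = min(
                acc[i, j - 1] if j > 0 else np.inf,
                acc[i - 1, j] if i > 0 else np.inf,
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,
            )
            acc[i, j] = cost[i, j] + prev
    # backtrack one optimal path from (Tx-1, Ty-1) to (0, 0)
    path, i, j = [(Tx - 1, Ty - 1)], Tx - 1, Ty - 1
    while (i, j) != (0, 0):
        candidates = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((c for c in candidates if c[0] >= 0 and c[1] >= 0),
                   key=lambda c: acc[c])
        path.append((i, j))
    return acc[-1, -1], path[::-1]
```

By construction, the returned path matches the border indexes, is monotone and connected, so it is admissible in the sense above; production-quality implementations are available in libraries such as tslearn, used later in the experiments.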
For example, the set of admissible alignment paths can be restricted to those lying close to the diagonal using the so-called Itakura parallelogram or Sakoe-Chiba band, or a maximum path length can be enforced (Zhang et al., 2017). Most notably, a differentiable variant of DTW, coined softDTW, has been introduced in Cuturi & Blondel (2017) and is based on previous works on alignment kernels (Cuturi et al., 2007). It replaces the min operation in Equation 2 by a soft-min operator $\min^{\gamma}$ whose smoothness is controlled by a parameter $\gamma > 0$, resulting in the DTW$_{\gamma}$ distance:

$$\text{DTW}_{\gamma}(\mathbf{x},\mathbf{y})=\min_{\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}^{\gamma}\sum_{(i,j)\in\pi}d(\mathbf{x}_{i},\mathbf{y}_{j})=-\gamma\log\left(\sum_{\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}e^{-\sum_{(i,j)\in\pi}d(\mathbf{x}_{i},\mathbf{y}_{j})/\gamma}\right).\tag{3}$$

In the limit case $\gamma = 0$, $\min^{\gamma}$ reduces to a hard min operator and DTW$_{\gamma}$ is equivalent to the DTW algorithm.

## 3 Dtw With Global Invariances

Despite their widespread use, DTW and softDTW are not able to deal with time series of different dimensionalities or to encode feature transformations that may arise between time series. In the following, we introduce a new similarity measure aiming at aligning time series in this complex setting and provide ways to compute associated alignments. We also derive a Fréchet mean formulation that allows computing barycenters under this new geometry.

![3_image_0.png](3_image_0.png)

Figure 2: Example alignments between 2D time series (trajectories in the plane). Color coding corresponds to timestamps. Our DTW-GI method jointly estimates temporal alignment and global rotation between time series. On the contrary, standard DTW alignment fails at capturing feature space distortions and therefore produces a mostly erroneous alignment (matching in red), except at the beginning and end of the time series, whose alignments are preserved thanks to DTW border constraints (see Section 2).
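Before moving on to DTW-GI, the soft-min recurrence behind Equation 3 can be sketched as follows (our own illustrative code; the padded accumulator and helper names are assumptions, not the reference softDTW implementation):

```python
import numpy as np

def softmin(a, b, c, gamma):
    """The min^gamma operator: -gamma * log(sum_i exp(-v_i / gamma))."""
    vals = np.array([a, b, c])
    if gamma == 0.0:
        return float(vals.min())      # hard min: recovers plain DTW
    z = -vals / gamma
    m = z.max()                       # log-sum-exp stabilization
    return float(-gamma * (m + np.log(np.exp(z - m).sum())))

def soft_dtw(x, y, gamma=1.0):
    """softDTW (Equation 3) via the smoothed recurrence, squared Euclidean cost."""
    Tx, Ty = len(x), len(y)
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    acc = np.full((Tx + 1, Ty + 1), np.inf)   # padded accumulator
    acc[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            acc[i, j] = cost[i - 1, j - 1] + softmin(
                acc[i, j - 1], acc[i - 1, j], acc[i - 1, j - 1], gamma)
    return acc[Tx, Ty]
```

Since the soft-min is a lower bound on the hard min, `soft_dtw` with $\gamma > 0$ never exceeds the hard DTW cost, and `gamma=0.0` reproduces it exactly.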
## 3.1 Definitions

Let $\mathbf{x} \in \mathbb{R}^{T_x \times p_x}$ and $\mathbf{y} \in \mathbb{R}^{T_y \times p_y}$ be two time series of length $T_x$ and $T_y$. In the following, we assume that the dimension $p_x \geq p_y$. In order to allow comparison between time series x and y, we optimize on a family of functions F that map y onto the feature space in which x lies. More formally, we define Dynamic Time Warping with Global Invariances (DTW-GI) as the solution of the following joint optimization problem:

$$\text{DTW-GI}(\mathbf{x},\mathbf{y})=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\sum_{(i,j)\in\pi}d(\mathbf{x}_{i},f(\mathbf{y}_{j})),\tag{4}$$

where F is a family of functions from $\mathbb{R}^{p_y}$ to $\mathbb{R}^{p_x}$. Properties (including symmetry) of this similarity measure are detailed in Sec. 3.1.1. Note that this problem can also be written as:

$$\text{DTW-GI}(\mathbf{x},\mathbf{y})=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\left\langle\mathbf{W}_{\pi},C(\mathbf{x},f(\mathbf{y}))\right\rangle\tag{5}$$

where f(y) is a shortcut notation for the transformation f applied to all observations in y, $\langle\cdot,\cdot\rangle$ denotes the Frobenius inner product, and $\mathbf{W}_\pi$ is defined as:

$$\forall i\leq T_{x},\,j\leq T_{y},\;(\mathbf{W}_{\pi})_{i,j}=\begin{cases}1&\text{if }(i,j)\in\pi\\0&\text{otherwise}\end{cases}\tag{6}$$

and C(x, f(y)) is the cross-similarity matrix of squared Euclidean distances between samples from x and f(y), respectively. This definition can be extended to the softDTW case of Equation 3 as proposed in the following:

$$\text{DTW}_{\gamma}\text{-GI}(\mathbf{x},\mathbf{y})=\min_{f\in\mathcal{F}}\,{\min_{\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}}^{\gamma}\left\langle\mathbf{W}_{\pi},C(\mathbf{x},f(\mathbf{y}))\right\rangle=\min_{f\in\mathcal{F}}-\gamma\log\sum_{\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}e^{-\langle\mathbf{W}_{\pi},C(\mathbf{x},f(\mathbf{y}))\rangle/\gamma}.\tag{7}$$

Note that, due to the use of a soft-min operator, Equation 7 is no longer a joint optimization.
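As a sanity check of the rewriting of Equation 4 into Equation 5, the following self-contained snippet (our own illustrative code; the alignment path and the linear map are arbitrary choices) verifies numerically that the path-sum cost equals the Frobenius inner product $\langle\mathbf{W}_\pi, C(\mathbf{x}, f(\mathbf{y}))\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
Tx, Ty, px, py = 5, 4, 3, 2
x = rng.normal(size=(Tx, px))
y = rng.normal(size=(Ty, py))
F = rng.normal(size=(px, py))      # an arbitrary linear map f(y_j) = F y_j
fy = y @ F.T                       # f applied to all observations of y

# an admissible alignment: borders matched, monotone, connected
path = [(0, 0), (1, 0), (2, 1), (3, 2), (4, 3)]

# Equation 4: sum of ground costs along the path
lhs = sum(((x[i] - fy[j]) ** 2).sum() for i, j in path)

# Equation 5: Frobenius inner product <W_pi, C(x, f(y))>
W = np.zeros((Tx, Ty))
for i, j in path:
    W[i, j] = 1.0
C = ((x[:, None, :] - fy[None, :, :]) ** 2).sum(-1)
rhs = (W * C).sum()

assert np.isclose(lhs, rhs)
```

The identity holds for any admissible path, which is what makes the matrix formulation convenient for the closed-form updates derived below in the paper.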
These similarity measures estimate both temporal alignment and feature space transformation between time series simultaneously, allowing the alignment of time series when the similarity should be defined up to a global transformation. For instance, one can see in Figure 2 two temporal alignments between two series in 2D that have been rotated in their feature space. In this case DTW-GI, whose invariance class is the space of rotations, recovers the proper alignment whereas DTW fails.

## 3.1.1 Properties Of Dtw-Gi

By definition, DTW-GI and softDTW-GI are invariant under any global transformation T(·) such that {f ◦ T | f ∈ F} = F (*i.e.* F is stable under T), which motivates the name (soft)DTW with Global Invariances. Moreover, DTW-GI inherits some of the classical DTW properties. First, if for any f ∈ F, $f^{-1}$ exists and is in F (which implies $p_x = p_y$), and if elements of F are norm-preserving operations, then DTW-GI and softDTW-GI are symmetric, since in this case:

$$\begin{aligned}
\text{DTW-GI}(\mathbf{x},\mathbf{y})&=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\sum_{(i,j)\in\pi}d(\mathbf{x}_{i},f(\mathbf{y}_{j}))&&(8)\\
&=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\sum_{(i,j)\in\pi}d(f(f^{-1}(\mathbf{x}_{i})),f(\mathbf{y}_{j}))&&(9)\\
&=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\sum_{(i,j)\in\pi}d(f^{-1}(\mathbf{x}_{i}),\mathbf{y}_{j})&&(10)\\
&=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{x},\mathbf{y})}\sum_{(i,j)\in\pi}d(f(\mathbf{x}_{i}),\mathbf{y}_{j})&&(11)\\
&=\min_{f\in\mathcal{F},\,\pi\in\mathcal{A}(\mathbf{y},\mathbf{x})}\sum_{(i,j)\in\pi}d(\mathbf{y}_{i},f(\mathbf{x}_{j}))=\text{DTW-GI}(\mathbf{y},\mathbf{x}).&&(12)
\end{aligned}$$

Though the constraints on F for this condition to hold might appear rather strong, they still allow the inclusion of standard rigid transformations such as rotations, translations and reflections. Finally, it is straightforward to see that DTW-GI(x, x) = 0 for any time series x as soon as F contains the identity map. More generally, regardless of the class of functions F and for any pair of time series (x, y), we have DTW-GI(x, y) = 0 iff there exists f ∈ F such that x and f(y) are equal up to repetitions in the series.
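The key steps (9) and (10) of the derivation above rest on the norm-preserving identity $d(f(\mathbf{a}), f(\mathbf{b})) = d(\mathbf{a}, \mathbf{b})$, and step (11) on $f^{-1}$ belonging to the same class. A quick numerical check for the rotation case (our own illustrative snippet):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi)
# a rotation: orthogonal, norm-preserving, and P^T = P^{-1} is again a rotation
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
xi, yj = rng.normal(size=2), rng.normal(size=2)

# d(f(a), f(b)) = d(a, b) for norm-preserving f (steps 9-10)
assert np.isclose(((P @ xi - P @ yj) ** 2).sum(), ((xi - yj) ** 2).sum())
# d(x_i, f(y_j)) = d(f^{-1}(x_i), y_j), with f^{-1} in the same class (step 11)
assert np.isclose(((xi - P @ yj) ** 2).sum(), ((P.T @ xi - yj) ** 2).sum())
```

The same two identities fail for generic linear maps, which is why the symmetry statement requires the stated restrictions on F.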
## 3.2 Optimization

Optimization on the above-defined losses can be performed in several ways, depending on the nature of F. We now present one optimization scheme for each loss.

## 3.2.1 Gradient Descent

We first consider the optimization on the softDTW-GI loss (Equation 7) in the case where F is a parametric family of functions, here denoted $f_\theta$, that are differentiable with respect to their parameters θ. The optimization can be done with a gradient descent on the parameters of $f_\theta$. Since softDTW is smooth (contrary to DTW), this strategy can be used to compute gradients of DTW$_\gamma$-GI *w.r.t.* θ. Complexity for this approach is driven by (i) that of a softDTW computation and (ii) that of computing $f_\theta(\mathbf{y})$. If we denote the latter $c_f$, the overall complexity for this approach is hence $O(n_{\text{iter}}(T_x T_y p_x + c_f))$. Note that when Riemannian optimization is involved, an extra complexity term has to be added, corresponding to the cost of projecting gradients onto the considered manifold. This cost is $O(p_y^3)$ for example when optimization is performed on the Stiefel manifold (Wen & Yin, 2013), which is an important case for our applications, as discussed in more detail in the following.

## 3.2.2 Block Coordinate Descent (Bcd)

When DTW-GI (see Equation 5) is concerned, we introduce another strategy that consists in alternating minimization over (i) the temporal alignment and (ii) the feature space transformations. We will refer to this strategy as Block Coordinate Descent (BCD) in the following. Optimization over the alignment path given a fixed transformation f solely consists in a DTW alignment, as described in Section 2. For a fixed alignment path π, the optimization problem then becomes:

$$\min_{f\in{\mathcal{F}}}\left\langle\mathbf{W}_{\pi},C(\mathbf{x},f(\mathbf{y}))\right\rangle.\tag{13}$$

Recall that C is a matrix of squared distances, which means that the problem above is a weighted least squares problem.
Depending on F, there can exist a closed-form solution for this problem (*e.g.* when F is the set of affine maps with no further constraints). Let us first note that the matrix C can be rewritten as:

$$C(\mathbf{x},f(\mathbf{y}))=\mathbf{u}_{\mathbf{x}}\mathbf{1}_{T_{y}}^{\top}+\mathbf{1}_{T_{x}}\mathbf{v}_{f,\mathbf{y}}^{\top}-2\,\mathbf{x}f(\mathbf{y})^{\top}\tag{14}$$

where:

$$\mathbf{u}_{\mathbf{x}}=\left(\|\mathbf{x}_{1}\|^{2},\ldots,\|\mathbf{x}_{T_{x}}\|^{2}\right)^{\top},\quad\mathbf{v}_{f,\mathbf{y}}=\left(\|f(\mathbf{y}_{1})\|^{2},\ldots,\|f(\mathbf{y}_{T_{y}})\|^{2}\right)^{\top}\quad\text{and}\quad\mathbf{1}_{n}=\underbrace{(1,\ldots,1)^{\top}}_{n\text{ times}}.$$

In particular, the optimization problem (13) reduces to maximizing $\left\langle\mathbf{W}_{\pi},\mathbf{x}f(\mathbf{y})^{\top}\right\rangle$ when $\mathcal{F}$ is a set of norm-preserving operations.

## 3.2.3 Estimating F **In The Stiefel Manifold**

Let us consider the special case where F is the set of linear maps whose linear operator is an orthonormal matrix, hence lying on the Stiefel manifold that we denote $\mathbb{V}_{p_y,p_x}$ in the following. It is defined for $p_y \leq p_x$ as $\mathbb{V}_{p_y,p_x} = \{\mathbf{P} \in \mathbb{R}^{p_x \times p_y},\ \mathbf{P}^{\top}\mathbf{P} = \mathbf{I}_{p_y}\}$ and this invariance class encodes rigid transformations of the features. In this case, the optimization problem becomes:

$$\min_{\mathbf{P}\in\mathbb{V}_{p_y,p_x}}\left\langle\mathbf{W}_{\pi},\mathbf{u}_{\mathbf{x}}\mathbf{1}_{T_{y}}^{\top}+\mathbf{1}_{T_{x}}\mathbf{v}_{\mathbf{P},\mathbf{y}}^{\top}-2\,\mathbf{x}\mathbf{P}\mathbf{y}^{\top}\right\rangle\tag{15}$$

and we have $\mathbf{v}_{\mathbf{P},\mathbf{y}}=(\|\mathbf{P}\mathbf{y}_{1}\|^{2},\ldots,\|\mathbf{P}\mathbf{y}_{T_{y}}\|^{2})^{\top}=\mathbf{v}_{\mathbf{y}}$ since for all $j$, $\|\mathbf{P}\mathbf{y}_{j}\|^{2}=\mathbf{y}_{j}^{\top}\mathbf{P}^{\top}\mathbf{P}\mathbf{y}_{j}=\|\mathbf{y}_{j}\|^{2}$, and thus the considered applications are norm-preserving. Overall, we get the following optimization problem:

$$\min_{\mathbf{P}\in\mathbb{V}_{p_y,p_x}}\left\langle\mathbf{W}_{\pi},\mathbf{u}_{\mathbf{x}}\mathbf{1}_{T_{y}}^{\top}+\mathbf{1}_{T_{x}}\mathbf{v}_{\mathbf{y}}^{\top}\right\rangle-2\left\langle\mathbf{W}_{\pi},\mathbf{x}\mathbf{P}\mathbf{y}^{\top}\right\rangle\tag{16}$$

which, since the term $\left\langle\mathbf{W}_{\pi},\mathbf{u}_{\mathbf{x}}\mathbf{1}_{T_{y}}^{\top}+\mathbf{1}_{T_{x}}\mathbf{v}_{\mathbf{y}}^{\top}\right\rangle$ does not depend on $\mathbf{P}$, is equivalent to solving:

$$\max_{\mathbf{P}\in\mathbb{V}_{p_y,p_x}}\left\langle\mathbf{W}_{\pi},\mathbf{x}\mathbf{P}\mathbf{y}^{\top}\right\rangle=\max_{\mathbf{P}\in\mathbb{V}_{p_y,p_x}}\left\langle\mathbf{x}^{\top}\mathbf{W}_{\pi}\mathbf{y},\mathbf{P}\right\rangle\tag{17}$$

the last equality being a direct consequence of the cyclic property of the trace.

**Algorithm 1: Block-Coordinate Descent for DTW-GI with Stiefel registration**

$\mathbf{P} \leftarrow \mathbf{I}_{p_x,p_y}$ ;
**repeat**
  $\mathbf{W}_{\pi} \leftarrow$ alignment matrix from DTW$(\mathbf{x}, \mathbf{y}\mathbf{P}^{\top})$ ;
  $\mathbf{M} \leftarrow \mathbf{x}^{\top}\mathbf{W}_{\pi}\mathbf{y}$ (see Equation 17) ;
  $\mathbf{U}, \mathbf{\Sigma}, \mathbf{V}^{\top} \leftarrow \text{SVD}(\mathbf{M})$ ;
  $\mathbf{P} \leftarrow \mathbf{U}\mathbf{V}^{\top}$ ;
**until** *convergence*;

As described in Jaggi (2013), the latter problem can be solved exactly using Singular Value Decomposition (SVD): if $\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top} = \mathbf{M}$ is the SVD of a matrix $\mathbf{M}$ of shape $(p_x, p_y)$, then $\mathbf{S}^{\star} = \mathbf{U}\mathbf{V}^{\top}$ is a solution to the linear problem $\max_{\mathbf{S}\in\mathbb{V}_{p_y,p_x}} \langle\mathbf{S},\mathbf{M}\rangle$. Note that this method can also tackle the case where F is an affine map whose linear part lies in the Stiefel manifold by realigning time series means, as discussed for example in Lawrence et al. (2019). A sketch of the algorithm is presented in Algorithm 1 (for the simplified case where time series means do not have to be realigned). Interestingly, this optimization strategy, where we alternate between time series alignment, *i.e.* time correspondences between both time series, and feature space transform optimization, can be seen as a variant of the Iterative Closest Point (ICP) method in image registration (Chen & Medioni, 1992), in which nearest neighbors are replaced by matches resulting from DTW alignment. Its overall complexity is then $O(n_{\text{iter}}(T_x T_y p_x + p_x p_y^2))$.
This complexity is equal to that of the gradient descent when $p_x = O(p_y)$. However, in practice, the number of iterations required is much lower for this BCD variant, making it a very competitive optimization scheme, as discussed in Section 4.

Generalizations. The algorithms presented above are mainly focused on optimization on the Stiefel manifold. Note however that they are not strictly restricted to this case. Typically, (projected) gradient descent based optimization could be performed for softDTW-GI on any family of functions parametrized by a neural network. Regarding the BCD algorithm, it requires a numerically efficient way to compute the optimal feature space transform for a fixed alignment. In our experiments, we illustrate this in the context of cover song identification, for which aligning song keys is a well-known registration step (see Section 4.5 for details). Another example is that of block-structured linear transformations of the features. More precisely, this corresponds to the case where the features are structured into $k$ groups, *i.e.* $p_x = k \cdot q_x$ and $p_y = k \cdot q_y$ with $q_y \leq q_x$, $k \in \mathbb{N}$, and when one looks for a linear transformation $\mathbf{p} \in \mathbb{R}^{q_x \times q_y}$ that aligns the features of each group. To make the connection with our framework, this coincides with a global transformation $\mathbf{P} \in \mathbb{R}^{p_x \times p_y}$ that can be written as $\mathbf{P} = \operatorname{blockdiag}_k(\mathbf{p})$ where:

$$\operatorname{blockdiag}_{k}(\mathbf{p})=\operatorname{diag}(\underbrace{\mathbf{p},\cdots,\mathbf{p}}_{k\text{ times}})=\begin{pmatrix}\mathbf{p}&\mathbf{0}&\cdots&\mathbf{0}\\ \mathbf{0}&\mathbf{p}&\cdots&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\cdots&\mathbf{p}\end{pmatrix}.$$

In the special case of $\mathbf{p} \in \mathbb{V}_{q_y,q_x}$, it is easy to show that $\mathbf{P}^{\top}\mathbf{P} = \mathbf{I}_{p_y}$ and thus $\mathbf{P} \in \mathbb{V}_{p_y,p_x}$, which also corresponds to a rigid transform of the features, but this time structured in blocks.
In this situation, the alignment problem becomes:

$$\begin{array}{c}\min_{\mathbf{P}=\operatorname{blockdiag}_{k}(\mathbf{p})}\left\langle\mathbf{W}_{\pi},\mathbf{u}_{\mathbf{x}}\mathbf{1}_{T_{y}}^{\top}+\mathbf{1}_{T_{x}}\mathbf{v}_{\mathbf{P},\mathbf{y}}^{\top}-2\,\mathbf{x}\mathbf{P}\mathbf{y}^{\top}\right\rangle\\ \text{s.t. }\mathbf{p}\in\mathbb{V}_{q_y,q_x}\end{array}\tag{18}$$

As $\mathbf{P} \in \mathbb{V}_{p_y,p_x}$, we can use the same reasoning as above and show that it is equivalent to $\max_{\mathbf{p}\in\mathbb{V}_{q_y,q_x}}\left\langle\mathbf{x}^{\top}\mathbf{W}_{\pi}\mathbf{y},\operatorname{blockdiag}_{k}(\mathbf{p})\right\rangle$. Interestingly, the latter problem also admits a closed-form expression. More precisely, by decomposing $\mathbf{x}^{\top}\mathbf{W}_{\pi}\mathbf{y}$ into $k \times k$ blocks $C_{ij}$, each of size $q_x \times q_y$, the solution is given by $\mathbf{S}^{\star} = \mathbf{U}\mathbf{V}^{\top}$ where $\mathbf{U}$, $\mathbf{V}$ come from the SVD of $\sum_{i=1}^{k} C_{ii}$, *i.e.* the sum of the diagonal blocks. For more details we refer the reader to Lemma 6.1 in the supplementary materials. We will illustrate this type of transformation for the alignment of human motion trajectories in Section 4.4.

## 3.3 Barycenters

Let us now assume we are given a set $\{\mathbf{x}^{(i)}\}_i$ of time series of possibly different lengths and dimensionalities. A barycenter of this set in the DTW-GI sense is a solution to the following optimization problem:

$$\min_{\mathbf{b}\in\mathbb{R}^{T\times p}}\sum_{i}w_{i}\min_{f_{i}\in\mathcal{F}}\operatorname{DTW}(\mathbf{x}^{(i)},f_{i}(\mathbf{b})),\tag{19}$$

where the weights $\{w_i\}_i$ as well as the barycenter length $T$ and dimensionality $p$ are provided as input to the problem. Note that, with this formulation, when F is the Stiefel manifold, $p$ is supposed to be less than or equal to the dimensionality of any time series in the set $\{\mathbf{x}^{(i)}\}_i$. In terms of optimization, as for similarity estimation, two schemes can be used. First, softDTW-GI barycenters can be estimated through gradient descent (and when the set of series to be averaged is large, a stochastic variant relying on minibatches can easily be implemented).
Second, when BCD is used for time series alignment, barycenters can be estimated using a similar approach to DTW Barycenter Averaging (DBA, Petitjean et al. 2011), which consists in alternating between barycentric coordinate estimation and DTW-GI alignments.

## 4 Experiments

In this section, we provide an experimental study of DTW-GI (and its soft counterpart) on simulated data and real-world datasets. Unless otherwise specified, the set F of feature space transforms is the set of affine maps whose linear part lies in the Stiefel manifold. In all our experiments, the tslearn (Tavenard et al., 2020) implementation is used for baseline methods, and gradient descent on the Stiefel manifold is performed using GeoOpt (Kochurov et al., 2019; Becigneul & Ganea, 2019) in conjunction with PyTorch (Paszke et al., 2019). Open source code of our method will be released upon publication.

![7_image_0.png](7_image_0.png)

![7_image_1.png](7_image_1.png)

Figure 3: Computing time as a function of time series length (left) and dimensionality (right). Solid lines correspond to median values and shaded areas correspond to 20th (*resp.* 80th) percentiles.

## 4.1 Timings

We are first interested in a quantitative evaluation of the temporal complexity of our methods. Note that the theoretical complexities of DTW and softDTW are the same, hence any difference observed in this series of experiments between DTW-GI and softDTW-GI would be solely due to their optimization schemes discussed in Section 3.2. In these experiments, the number of iterations for BCD as well as the number of gradient steps for the gradient descent optimizer are set to 5,000. The BCD algorithm used for DTW-GI is stopped as soon as it reaches a local minimum, while early stopping is used for the gradient-descent variant with a patience parameter set to 100 iterations. We first study the computation time as a function of the length of the time series involved.
To do so, we generate random time series in dimension 8 and vary their lengths from 8 to 1,024 timestamps. Figure 3 (left) shows a clear quadratic trend for all methods presented, except GromovDTW, whose complexity is cubic *w.r.t.* the length of the time series due to the tensor-matrix multiplication that is involved at each step of its pseudo-Frank-Wolfe algorithm (Cohen et al., 2021). Note that DTW-GI and its BCD optimizer clearly outperform the gradient descent strategy used for softDTW-GI because the latter requires more iterations before early stopping can be triggered. Building on this, we now turn our focus to the impact of feature space dimensionality $p$ (with a fixed time series length of 32). DTW and softDTW baselines are asymptotically linear with respect to $p$. Similarly, since GromovDTW relies on pre-computed self-similarity matrices, it depends only linearly on the feature space dimensionality for the computation of these self-similarity matrices. Since feature space registration is performed through optimization on the Stiefel manifold, both our optimization schemes rely on Singular Value Decomposition, which leads to an $O(p^3)$ complexity that can also be observed for both methods in Figure 3 (right). Note also that the CTW baseline is slightly more computationally expensive than DTW-GI in practice, even if the asymptotic complexities are the same as for DTW-GI.

## 4.2 Rotational Invariance

We now evaluate the ability of our method to recover invariance to rotation. To do so, we rely on a synthetic dataset of noisy spiral-like 2D trajectories. For increasing values of an angle α, we generate pairs of spirals rotated by α with additive Gaussian noise. Alignments between a reference time series and variants that are subject to an increasing rotation are computed and repeated 50 times per angle. The ratio of each distance to the distance when α = 0 is reported in Figure 4 (left).
One can clearly see that the GI counterparts of DTW and softDTW are invariant to rotation in the 2D feature space, while DTW and softDTW are not. Interestingly, CTW and GromovDTW, which should be invariant to rotation, still exhibit an increase in the loss with the angle α, suggesting that their algorithms have more difficulty reaching a global minimum in practice. Also, when varying the noise level in Figure 4 (right), one can notice that (soft-)DTW-GI are slightly more robust to high levels of noise than the CTW baseline, while GromovDTW is very sensitive to this noise level (the GW loss is a quadratic function of its input).

![8_image_0.png](8_image_0.png)

![8_image_1.png](8_image_1.png)

Figure 4: Illustration of the rotation invariance provided by DTW-GI. On the left, the ratio of the distance to that of a non-rotated pair of spirals is presented as a function of the rotation angle for a fixed noise level. On the right, the ratio of the distance to that of a non-rotated pair of spirals is presented as a function of the noise level for a fixed rotation angle π. Median distance ratios are reported as solid lines and shaded areas correspond to 20th (*resp.* 80th) percentiles.

![8_image_2.png](8_image_2.png)

Figure 5: Barycenter computation using (i) DTW and softDTW baseline approaches, (ii) the alternative Gromov-DTW (Cohen et al., 2021) and (iii) our proposed rotation-invariant DTW-GI and softDTW-GI. Each row corresponds to a different dataset; the last one contains both 2D and 3D trajectories, hence cannot be tackled by DTW or softDTW. Trajectories are color-coded from blue (beginning of the series) to yellow (end of the series).

![9_image_0.png](9_image_0.png)

Figure 6: Barycenters computed on the RTD Dataset from 50 sample trajectories from each of the 10 classes using DTW-GI, softDTW-GI and GromovDTW. Trajectories are color-coded from blue (beginning of the series) to yellow (end of the series).
## 4.3 Barycenter Computation

So as to better grasp the notion of similarity captured by our methods, we compute barycenters using the strategy presented in Section 3.3. Barycenters are computed for 3 different datasets: the first two are made of 2D trajectories of rotated and noisy spirals or folia, and the third one is composed of both 2- and 3-dimensional spirals (see samples in the left part of Figure 5). For each dataset, we provide barycenters obtained by three baseline methods. DTW Barycenter Averaging (DBA, Petitjean et al. 2011) is used for DTW, while softDTW resorts to a gradient-descent scheme to compute the barycenters. Their GI counterparts use the same algorithms but rely on the alignments obtained from DTW-GI and softDTW-GI respectively. Finally, GromovDTW is optimized by alternating between computation of the barycenter self-similarity matrix and alignments, as done in Cohen et al. (2021). Note that the DTW and softDTW baselines cannot be used for the third dataset since the features of the time series do not lie in the same ambient space. We would like to emphasize that the barycenter based on GromovDTW only finds a pairwise distance matrix from which the positions of the points must be inferred, for example by applying multidimensional scaling (MDS) (Kruskal & Wish, 1978) (as done here and in Cohen et al. 2021). For the 2D spiral dataset, all the reconstructed barycenters can be considered meaningful. Note however that the outer loop of the spiral (the one that suffers the most from the rotation) is better reconstructed using the DTW-GI and softDTW-GI variants. When it comes to the folia trajectories, which are more impacted by rotations, baseline barycenters fail to capture the inherent structure of the trajectories at stake, while both our methods generate smooth and representative barycenters.
DTW-GI and softDTW-GI are even able to recover barycenters when datasets are made of series that do not lie in the same space, as shown in the third row of Figure 5. Finally, in all three settings considered, temporal alignments successfully capture the irregular sampling of the samples to be averaged (denser towards the center of the spiral / the loop of the folium). The RealSense-based Trajectory Digit (RTD) dataset (Alam et al., 2020) is made of digit writing trajectories. In our experiment, we have randomly sampled 50 trajectories per digit class and computed trajectory barycenters using (soft)DTW-GI and GromovDTW. On such data, expected invariants are rotations and translations. One can observe that invariance to mirroring strongly impacts GromovDTW's barycenter estimation on classes 4, 7 and 9. On the other hand, for both DTW-GI and softDTW-GI, we obtain meaningful barycenters that do not suffer from mirroring artifacts and better preserve the overall trajectories.

## 4.4 Time Series Forecasting

To further illustrate the benefit of our approach, we consider a time series forecasting problem (Le Guen & Thome, 2019), where the goal is to infer the future of a partially observed time series. In this setting, we suppose that we have access to a training set of full time series X, with x^{(i)} ∈ X a time series of length T

| Method | Err | Coeff (1st neighbor) | Coeff (2nd) | Coeff (3rd) |
|------------|--------|----------------------|-------------|-------------|
| DTW-GI | 37.68 | 0.89792 | 0.09569 | 0.00317 |
| softDTW-GI | 39.23 | 0.79506 | 0.06918 | 0.03905 |
| CTW | 187.01 | 0.03336 | 0.03336 | 0.03335 |
| GromovDTW | 184.99 | 0.09044 | 0.08969 | 0.07673 |

Figure 7: Examples of the forecasted subseries.
**(first row)** The first sample is the ground truth ŷ_{T′→} for subject S1, followed by the training samples x^{(i)}_{T′→}. **(from second to last row)** Predictions for the methods DTW-GI, softDTW-GI, CTW and GromovDTW (the other methods are given in Figure 8). In the first column the prediction ŷ_{T′→} is depicted along with the error ‖ŷ_{T′→} − y_{T′→}‖₂. In the other columns, we illustrate the first 3 neighbors *w.r.t.* the method, x^{(i)}_{→T′}, associated with their coefficients a_d(y_{→T′}, x^{(i)}_{→T′}). For each movement an arrow indicates the orientation of the subject. The beginning of the movement is displayed in shaded blue while the end is displayed in bold red.

| Method | Err | Coeff (1st neighbor) | Coeff (2nd) | Coeff (3rd) |
|---------------------|--------|----------------------|-------------|-------------|
| L2 | 187.01 | 0.03334 | 0.03334 | 0.03334 |
| L2+Procrustes | 46.54 | 0.93897 | 0.02469 | 0.02304 |
| softDTW | 187.02 | 0.03459 | 0.03443 | 0.03440 |
| DTW+Procrustes | 43.34 | 0.89792 | 0.09569 | 0.00317 |
| softDTW+Procrustes | 43.22 | 0.94239 | 0.02054 | 0.01462 |

Figure 8: Examples of the forecasted subseries for the methods L2, L2+Procrustes, softDTW, DTW+Procrustes and softDTW+Procrustes.

and dimensionality p_x, and another test set of partial time series Y where each y ∈ Y is of length T′ < T and dimensionality p_y. The goal is to predict the values for timestamps T′ to T for each test time series. We denote by x_{→T′} the beginning of the time series x (up to time T′) and x_{T′→} its end (from time T′ to time T). Let d(y, x^{(i)}) denote a dissimilarity measure between time series y and x^{(i)}, associated with a transformation f_i ∈ F : R^{p_x} → R^{p_y} that maps the features of x^{(i)} onto the features of y.
This function aims at capturing the desired invariances in the feature space, as described in the previous section. A typical example is when d is the (soft)DTW-GI cost; the f_i are then the Stiefel linear maps which capture the possible rigid transformation between the features. We propose to predict the future of a time series y as follows:

$$\hat{\mathbf{y}}_{T^{\prime}\to}=\sum_{i}a_{d}\left(\mathbf{y}_{\to T^{\prime}},\mathbf{x}_{\to T^{\prime}}^{(i)}\right)f_{i}\left(\mathbf{x}_{T^{\prime}\to}^{(i)}\right)\tag{20}$$

where a_d is the attention kernel:

$$a_{d}(\mathbf{y},\mathbf{x}_{i})=\frac{e^{-\lambda d(\mathbf{y},\mathbf{x}_{i})}}{\sum_{j}e^{-\lambda d(\mathbf{y},\mathbf{x}_{j})}}\tag{21}$$

with λ > 0. The prediction is based on the known timestamps of the time series in the training set and on transformations f_i that aim at capturing the latent transformation between training and test time series. The attention kernel gives more importance to time series that are close to the time series we want to forecast *w.r.t.* the notion of dissimilarity d. Note that for large values of λ, the softmax in Equation 21 converges to a hard max and the proposed approach corresponds to a nearest-neighbor imputation.

## 4.4.1 Dataset And Methodology

We use the *Human3.6M* dataset (Ionescu et al., 2014), which consists of 3.6 million video frames of human movements recorded in a controlled indoor motion capture setting. This dataset is composed of 7 actors performing 15 activities ("Walking", "Sitting", ...) twice. We are interested in forecasting the 3D positions of the subject joints evolving over time. More precisely, each data point x^{(i)} is a time series representing a skeleton of 32 joints where each joint is described by 3D coordinates. This problem corresponds to p_x = p_y = 32 × 3. We follow the same data partition as Coskun et al. (2017): the training set has 5 subjects (S6, S7, S8, S9 and S11) and the remaining 2 subjects (S1 and S5) compose the test set.
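The attention-based prediction rule above can be sketched as follows. This is a minimal illustration; `attention_forecast` and its argument names are our own, not the paper's code:

```python
import numpy as np

def attention_forecast(y_past, train_past, train_future, maps, d, lam=1.0):
    """Kernel-weighted forecast: softmax attention weights over training series,
    applied to their feature-mapped futures f_i(x_i)."""
    dists = np.array([d(y_past, xp) for xp in train_past])
    # Shifting by the minimum leaves the softmax unchanged but avoids underflow.
    w = np.exp(-lam * (dists - dists.min()))
    w /= w.sum()
    return sum(wi * f(xf) for wi, f, xf in zip(w, maps, train_future))
```

With a large λ the weights concentrate on the nearest training series (nearest-neighbor imputation); with λ → 0 the forecast degenerates to a plain average of the mapped futures.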
In our experiments, 1) we split the frames as follows: we keep the first T′ = 300 timestamps to compute the coefficients a_d(y_{→T′}, x^{(i)}_{→T′}) and the transformations f_i; 2) we find the hyperparameter λ which gives the best prediction (*w.r.t.* the ℓ2 norm) for t ∈ [T′, T0] (where T0 = 400); 3) the remaining timestamps [T0, T] are used for the test set. We set the last frame as T = 1100, which corresponds to predicting T − T0 = 700 timestamps, that is, predicting 14 seconds of motion given the initial 8 seconds. To emulate possible changes in signal acquisition (*e.g.* rotations of the camera), we randomly rotate the train subjects *w.r.t.* the z-axis. We consider the movements of type "Walking", "WalkDog" and "WalkTogether" for the training set and "Walking" for the test set. The top row of Figure 7 illustrates samples of movements x^{(i)}_{→T′} resulting from this procedure, and the resulting dataset is provided as supplementary material.

## 4.4.2 Competing Structured Prediction Methods

We look for global transformations of the features f_i: R^{32×3} → R^{32×3}. In order to obtain coherent transformations for each joint of the skeleton, the f_i are structured as f_i = (g_i, g_i, · · ·, g_i) (repeated 32 times), where g_i: R^3 → R^3. In other words, we consider that there is only one global 3D transformation that aligns all the joints of two time series (such as one rotation). We use DTW-GI and softDTW-GI as our similarity measures, with the associated maps f_i as described in Equation 5 and in Equation 7. We compute DTW-GI using the BCD procedure with block-structured rigid transformations of the features as in Equation 18 (q_x = q_y = 3, k = 32 in this context). We calculate softDTW-GI with automatic differentiation as described in Section 3.2. In this experiment we set γ = 0.05 for the smoothness parameter. We compare these methods to 8 baselines, which correspond to different pairs of time series similarity measure and feature space invariance.
| Method | Average test error |
|----------------------------------|----------------------|
| L2 | 183.11 ± 3.90 |
| softDTW (Cuturi & Blondel, 2017) | 183.12 ± 3.90 |
| CTW (Zhou & Torre, 2009) | 183.11 ± 3.90 |
| GromovDTW (Cohen et al., 2021) | 181.28 ± 3.71 |
| L2+Procrustes | 46.33 ± 0.21 |
| DTW+Procrustes | 43.31 ± 0.03 |
| softDTW+Procrustes | 43.90 ± 0.11 |
| DTW-GI (ours) | 38.05 ± 0.37 |
| softDTW-GI (ours) | 39.58 ± 0.34 |

Table 1: Average error on test subjects for the time series forecasting task on the Human3.6M dataset.

The first two baselines, denoted L2 and softDTW, do not encode any feature space invariance and are based on the ℓ2 and softDTW similarities, respectively. We also consider a Procrustes baseline (Goodall, 1991) defined as:

$$d(\mathbf{y}_{\to T^{\prime}},\mathbf{x}_{\to T^{\prime}}^{(i)})=\operatorname*{min}_{\mathbf{P},\mathbf{b}}\|(\mathbf{x}_{\to T^{\prime}}^{(i)}\mathbf{P}^{\top}+\mathbf{b})-\mathbf{y}_{\to T^{\prime}}\|_{2}^{2}\tag{22}$$

where P = blockdiag₃₂(p) with p ∈ V_{3,3} and b = (b, · · ·, b) with b ∈ R³. The corresponding transformation f_i is the affine map based on the optimal P⋆, b⋆ found by solving the previous problem. We denote this baseline L2+Procrustes. Two other baselines are computed by first registering series using the Procrustes procedure defined above and then using the similarity measures d(y_{→T′}, x^{(i)}_{→T′}) = DTW(y_{→T′}, x^{(i)}_{→T′} P⋆⊤ + b⋆) and d(y_{→T′}, x^{(i)}_{→T′}) = DTW_γ(y_{→T′}, x^{(i)}_{→T′} P⋆⊤ + b⋆). They are denoted by DTW+Procrustes and softDTW+Procrustes, respectively. Finally, we also compare with GromovDTW (Cohen et al., 2021) and CTW (Zhou & Torre, 2009). Note that the methods L2, softDTW, GromovDTW and CTW do not provide a transformation of the features of x^{(i)}_{→T′} **onto** those of y_{→T′}; as such, we set f_i = id for all of these methods.

## 4.4.3 Results

Qualitative and quantitative results are provided in Figures 7 and 8 and in Table 1, respectively.
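The Procrustes registration (Equation 22) used by the +Procrustes baselines admits a closed-form solution via centering and a Kabsch-style SVD. A minimal sketch (our own naming), noting that p ∈ V_{3,3} also allows reflections:

```python
import numpy as np

def procrustes(x, y):
    """Solve min_{P, b} ||x P^T + b - y||_F^2 with P orthogonal.
    Center both point sets, then register via SVD (Kabsch-style; reflections
    are allowed, as on the Stiefel manifold V_{3,3})."""
    mx, my = x.mean(axis=0), y.mean(axis=0)
    # For fixed P, the optimal b matches the centroids; the remaining problem
    # maximizes a trace over orthogonal matrices, solved by the polar factor.
    U, _, Vt = np.linalg.svd((y - my).T @ (x - mx))
    P = U @ Vt
    b = my - mx @ P.T
    return P, b
```

Applied to a point cloud and its rotated-and-translated copy, the registration is recovered exactly (up to floating-point error).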
We evaluate, for each test subject, the ℓ2 reconstruction loss ‖y_{T′→} − ŷ_{T′→}‖₂ between the ground-truth time series and its prediction. Table 1 displays the average loss on the test subjects based on the best hyperparameter found using the timestamps [T′, T0]. Figures 7 and 8 present examples of reconstructed movements for the different methods on one test subject, as well as the 3 highest coefficients a_d with the corresponding neighbors. We observe from the quantitative study that softDTW, L2, CTW and GromovDTW lead to the worst reconstruction losses, while L2+Procrustes, DTW+Procrustes, softDTW+Procrustes, DTW-GI and softDTW-GI lead to the best ones. The results for the first four methods can be explained by the fact that none of them can use an explicit spatial transformation of the features f_i for the prediction; thus, only a simple weighted average of the time series x^{(i)}_{T′→} is realized. This is illustrated for CTW, GromovDTW, softDTW and L2 in Figures 7 and 8, where we can see that the prediction tends to shrink. We can also see that DTW+Procrustes and softDTW+Procrustes are superior to a simple L2+Procrustes, which highlights the importance of temporal realignment. More importantly, the performances of DTW-GI and softDTW-GI are also better than those of DTW+Procrustes, softDTW+Procrustes and L2+Procrustes, which shows that our **joint** realignment of time and space has an advantage over two-step procedures such as DTW+Procrustes and softDTW+Procrustes, which first find the feature transformation and then align series in time. Moreover, one can observe qualitatively that GromovDTW and CTW seem to uniformly average all the different motions to compensate for the lack of reprojection f_i. On the contrary, by capturing the possible spatial variability, L2+Procrustes and softDTW+Procrustes perform reasonably well qualitatively, but the predicted movement is slightly less accurate than that of softDTW-GI.
This is due to the fact that L2+Procrustes and softDTW+Procrustes mainly choose the movement corresponding to the first nearest neighbor (a_d[1] = 0.94), while softDTW-GI is able to average other dynamics (a_d[1] = 0.79). It is somehow a natural conclusion, since the optimal transformations found by the Procrustes analysis suppose a trivial one-to-one correspondence of the timestamps (*i.e.* y_{→T′}(t) corresponds to x^{(i)}_{→T′}(t) at **the same time** t) and do not consider the temporal shifts between them. In this way, the method L2+Procrustes leads to unrealistic transformations when the dynamics of the movements are not the same. Note that the two-step procedure softDTW+Procrustes is only slightly more precise, as the feature realignment is independent of the temporal realignment since both are not optimized jointly. On the opposite, the softDTW-GI method leads to the best qualitative results, highlighting the benefits of our approach over methods that either discard the temporal variability of the movements (L2+Procrustes) or their spatial variability (softDTW).

![14_image_0.png](14_image_0.png)

Figure 9: Cover song identification using the covers80 dataset. Methods are compared in terms of recall and results are averaged over 10 train/test set draws. For each method, the shaded area corresponds to one standard deviation around the mean value.

## 4.5 Cover Song Identification

Cover song identification is the task of retrieving, for a given query song, its covers (*i.e.* different versions of the same song) in a training set. State-of-the-art methods either rely on anchor matches in the songs and/or on temporal alignments. In most related works, chroma or harmonic pitch class profile (HPCP) features are usually chosen, as they capture harmonic characteristics of the songs at stake (Heo et al., 2017). For this experiment, we use the covers80 dataset (Ellis & Cotton, 2007), which consists of 80 cover pairs of pop songs, and we evaluate the performance in terms of recall.
Since the selection of features is not our main focus, we choose to extract chroma energy normalized statistics (CENS, Müller et al. 2005) over half-second windows. We compare variants of our method to a baseline that consists of a DTW alignment between songs transposed to the same key using the Optimal Transposition Index (OTI, Serra et al. 2008). The OTI computes a transposition based on the average energy in each semitone band. For each competitor, test set songs are ranked based on their distances to query songs. Figure 9 presents recall scores for the compared methods as well as the mean rank of the first correctly identified cover (MR1), a standard evaluation metric for the task (used in MIREX¹, for example). For the sake of readability, the performance of DTW-GI with feature space optimization on the Stiefel manifold is omitted in this figure. In practice, this approach leads to an MR1 of 29.0 ± 1.0, which is clearly outperformed by the baseline relying on OTI. This is because a very common transformation in this setting occurs when cover songs are played in different keys, which is captured by the OTI transposition strategy. Similarly, CTW, which does not allow prior information about the form of the feature space registration to be taken into account, behaves poorly in this setting. Interestingly enough, the flexibility of our DTW-GI framework allows us to use the OTI strategy. Since the registration family F in this OTI setting is restricted to the set of 12 possible key transpositions, we do not rely on BCD in this case and rather seek the exact optimum by computing (soft-)DTW for each of the 12 possible transpositions and retaining the minimizer. This way, we are able to compute the optimal transposition index along the alignment path instead of computing it on averaged features, as the "DTW (OTI)" baseline does.

1https://www.music-ir.org/mirex/wiki/2019:Audio_Cover_Song_Identification
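The exhaustive search over the 12 transpositions can be sketched as follows. This is a simplified illustration on toy chroma-like features, where a key transposition is modeled as a circular shift of the 12-dimensional feature axis (function names are ours):

```python
import numpy as np

def dtw_cost(x, y):
    """DTW cost with a squared-Euclidean ground metric (no path needed here)."""
    Tx, Ty = len(x), len(y)
    D = np.full((Tx + 1, Ty + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            D[i, j] = np.sum((x[i - 1] - y[j - 1]) ** 2) + \
                min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Tx, Ty]

def dtw_gi_transposition(x, y):
    """DTW-GI with F restricted to the 12 chroma transpositions: enumerate the
    circular shifts of the feature axis and keep the best-aligning one."""
    costs = [dtw_cost(x, np.roll(y, shift, axis=1)) for shift in range(12)]
    best = int(np.argmin(costs))
    return costs[best], best

# Toy example: a "song" and a copy transposed by 5 semitones.
rng = np.random.default_rng(1)
chroma = rng.random((15, 12))
transposed = np.roll(chroma, 5, axis=1)
cost, shift = dtw_gi_transposition(chroma, transposed)
```

Because the family of transformations is finite, this enumeration returns the exact joint optimum rather than a BCD stationary point.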
This leads to a significant improvement of the performance for both DTW-GI and its soft counterpart, and illustrates both the versatility of our method and the importance of performing joint feature space transformation and temporal alignment. Note finally that the performance reached by the competitors in this study is far from what deep models can achieve. ByteCover (Du et al., 2021), for example, reaches an MR1 of 3.54 on this task, by relying on both a triplet loss and a classification loss derived from a proxy classification task to train a ResNet. The use of softDTW-GI as an alternative similarity measure in the triplet loss of such an approach is left for future work.

## 5 Conclusion And Perspectives

We propose in this paper a novel similarity measure that can compare time series across different spaces in order to tackle both temporal and feature space invariances. This work extends the well-known Dynamic Time Warping algorithm to deal with time series from different spaces thanks to the introduction of a joint optimization over temporal alignments and space transformations. In addition, we provide a formulation for the computation of the barycenter of a set of time series under our new geometry, which is, to the best of our knowledge, the first barycenter formulation for a set of heterogeneous time series. Another important special case of our approach allows for performing temporal alignment of time series with invariance to rotations in the feature space. We illustrate our approach on several datasets. First, we use simulated time series to study the computational complexity of our approach and illustrate invariance to rotations. Then, we apply our approach to two real-life datasets for human motion prediction and cover song identification, where invariant similarity measures are shown to improve performance.
Extensions of this work will consider scenarios where features of the series do not lie in a Euclidean space, which would allow covering the case of structured data such as graphs evolving over time, for example. Future work also includes the use of our methods in more elaborate models where, following ideas from Cai et al. (2019) and Iwana et al. (2020), softDTW-GI could be used as a feature extractor in neural networks. It could also serve as a loss to train heterogeneous time series forecasting models (Le Guen & Thome, 2019; Cuturi & Blondel, 2017), or for imitation learning problems as considered in Cohen et al. (2021), where one wants to learn an agent (parametrized by a neural network) that generates trajectories in a different space from the initial ones.

## Acknowledgments

This research was supported in part by ANR through the MATS project ANR-18-CE23-0006, by the AllegroAssai project ANR-19-CHIA-0009 and the ACADEMICS grant of the IDEXLYON, project of the Université de Lyon, PIA operated by ANR-16-IDEX-0005. NC is partially funded by the OTTOPIA ANR-20-CHIA-0030 AI chair project. This research was also supported by the 3rd Programme d'Investissements d'Avenir ANR-18-EUR-0006-02. This action benefited from the support of the Chair "Challenging Technology for Responsible Energy" led by l'X - Ecole polytechnique and the Fondation de l'Ecole polytechnique. TV gratefully acknowledges the support of the Centre Blaise Pascal's IT test platform at ENS de Lyon (Lyon, France) for Machine Learning facilities. The platform operates the SIDUS solution (Quemener & Corvellec, 2013). The results are processed for visualizations using matplotlib (Hunter, 2007). Numerical computations involve numpy (Harris et al., 2020), scipy (Virtanen et al., 2020) and scikit-learn (Pedregosa et al., 2011) for the CTW implementation.

## References

Md. Shahinur Alam, Ki-Chul Kwon, Md. Ashraful Alam, Mohammed Y. Abbass, Shariar Md Imtiaz, and Nam Kim.
Trajectory-based air-writing recognition using deep neural network and depth sensor. *Sensors*, 20(2), 2020. ISSN 1424-8220. David Alvarez-Melis, Stefanie Jegelka, and Tommi S Jaakkola. Towards optimal transport with global invariances. In *International Conference on Artificial Intelligence and Statistics*, 2019. David Alvarez-Melis, Youssef Mroueh, and Tommi Jaakkola. Unsupervised hierarchy matching with optimal transport over hyperbolic spaces. In *International Conference on Artificial Intelligence and Statistics*, pp. 1606–1617. PMLR, 2020. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv* preprint arXiv:1907.02893, 2019. Peter Battaglia, Jessica Blake Chandler Hamrick, Victor Bapst, Alvaro Sanchez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andy Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Jayne Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu. Relational inductive biases, deep learning, and graph networks. *arXiv*, 2018. Gary Becigneul and Octavian-Eugen Ganea. Riemannian adaptive optimization methods. In *Proceedings of* the International Conference on Learning Representations, 2019. Shai Ben-David, Tyler Lu, Teresa Luu, and Dávid Pál. Impossibility theorems for domain adaptation. In International Conference on Artificial Intelligence and Statistics, pp. 129–136, 2010. Xingyu Cai, Tingyang Xu, Jinfeng Yi, Junzhou Huang, and Sanguthevar Rajasekaran. Dtwnet: a dynamic time warping network. In *Neural Information Processing Systems*, pp. 11636–11646, 2019. Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and Juan Carlos Niebles. D3tw: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. 
In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3546–3555, 2019. Yang Chen and Gérard Medioni. Object modelling by registration of multiple range images. Image and Vision Computing, 10(3):145 - 155, 1992. Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, and Marc Peter Deisenroth. Aligning time series on incomparable spaces. In *International Conference on Artificial Intelligence and Statistics*, volume 130, pp. 1036–1044, 2021. Scott Cohen and L Guibas. The Earth mover's distance under transformation sets. In *IEEE International* Conference on Computer Vision, volume 2, pp. 1076–1083, 1999. Huseyin Coskun, Felix Achilles, Robert DiPietro, Nassir Navab, and Federico Tombari. Long short-term memory kalman filters: Recurrent neural estimators for pose regularization. In International Conference on Computer Vision, Oct 2017. Marco Cuturi and Mathieu Blondel. Soft-DTW: a differentiable loss function for time-series. In International Conference on Machine Learning, pp. 894–903, 2017. Marco Cuturi, Jean-Philippe Vert, Oystein Birkenes, and Tomoko Matsui. A kernel for time series based on global alignments. In *IEEE International Conference on Acoustics, Speech, and Signal Processing*, volume 2, pp. II–413. IEEE, 2007. Huiqi Deng, Weifu Chen, Qi Shen, Andy J. Ma, Pong C. Yuen, and Guocan Feng. Invariant subspace learning for time series data based on dynamic time warping distance. *Pattern Recognition*, 102:107210, 2020. Xingjian Du, Zhesong Yu, Bilei Zhu, Xiaoou Chen, and Zejun Ma. Bytecover: Cover song identification via multi-loss training. In *IEEE International Conference on Acoustics, Speech, and Signal Processing*, pp. 551–555, 2021. doi: 10.1109/ICASSP39728.2021.9414128. Daniel PW Ellis and Courtenay Valentine Cotton. The 2007 labrosa cover song detection system, 2007. Colin Goodall. Procrustes methods in the statistical analysis of shape. *Journal of the Royal Statistical* Society: Series B (Methodological), 53(2):285–321, 1991. 
Ian Goodfellow, Honglak Lee, Quoc V. Le, Andrew Saxe, and Andrew Y. Ng. Measuring invariances in deep networks. In *Neural Information Processing Systems*, pp. 646–654, 2009. Charles R Harris, K Jarrod Millman, Stéfan J Van Der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J Smith, et al. Array programming with numpy. *Nature*, 585(7825):357–362, 2020. Hoon Heo, Hyunwoo J Kim, Wan Soo Kim, and Kyogu Lee. Cover song identification with metric learning using distance as a feature. In Proceedings of the International Society for Music Information Retrieval Conference, 2017. Shih-Feng Huang and Hong-Ping Lu. Classification of temporal data using dynamic time warping and compressed learning. *Biomedical Signal Processing and Control*, 57:101781, 2020. John D Hunter. Matplotlib: A 2d graphics environment. *Computing in science & engineering*, 9(03):90–95, 2007. Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1325–1339, jul 2014. Brian Kenji Iwana, Volkmar Frinken, and Seiichi Uchida. DTW-NN: a novel neural network for time series recognition using dynamic alignment between inputs and weights. *Knowledge-Based Systems*, 188, 1 2020. ISSN 0950-7051. doi: 10.1016/j.knosys.2019.104971. Martin Jaggi. Revisiting Frank–Wolfe: Projection-free sparse convex optimization. In *International Conference* on Machine Learning, 2013. Hicham Janati, Marco Cuturi, and Alexandre Gramfort. Spatio-temporal alignments: Optimal transport through space and time. In *International Conference on Artificial Intelligence and Statistics*, pp. 1695–1704, 2020. Max Kochurov, Sergey Kozlukov, Rasul Karimov, and Viktor Yanush. Geoopt: Adaptive riemannian optimization in PyTorch, 2019. https://github.com/geoopt/geoopt. Wouter Marco Kouw and Marco Loog. 
A review of domain adaptation without target labels. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2019. J.B. Kruskal and M. Wish. *Multidimensional Scaling*. Sage Publications, 1978. Jim Lawrence, Javier Bernal, and Christoph Witzgall. A purely algebraic justification of the Kabsch-Umeyama algorithm. *Journal of Research of the National Institute of Standards and Technology*, 124:1–6, 2019. Vincent Le Guen and Nicolas Thome. Shape and time distortion loss for training deep time series forecasting models. In *Neural Information Processing Systems*, 2019. Meinard Müller, Frank Kurth, and Michael Clausen. Audio matching via chroma-based statistical features. In *Proceedings of the International Society for Music Information Retrieval Conference*, 2005. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché Buc, E. Fox, and R. Garnett (eds.), *Neural Information Processing Systems*, pp. 8024–8035, 2019. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. *Journal of Machine Learning Research*, 12:2825–2830, 2011. François Petitjean, Alain Ketterlin, and Pierre Gançarski. A global averaging method for dynamic time warping, with applications to clustering. *Pattern Recognition*, 44(3):678–693, 2011. Emmanuel Quemener and Marianne Corvellec. Sidus—the solution for extreme deduplication of an operating system. *Linux Journal*, 2013(235):3, 2013.
Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition. *IEEE Transactions on Acoustics, Speech and Signal Processing*, 26(1):43–49, 1978. Joan Serra, Emilia Gómez, and Perfecto Herrera. Transposing chroma representations to a common key. In *Proceedings of the IEEE Conference on The Use of Symbols to Represent Music and Multimedia Objects*, pp. 45–48, 2008. Romain Tavenard, Johann Faouzi, Gilles Vandewiele, Felix Divo, Guillaume Androz, Chester Holtz, Marie Payne, Roman Yurchak, Marc Rußwurm, Kushal Kolar, et al. Tslearn, a machine learning toolkit for time series data. *Journal of Machine Learning Research*, 21(118):1–6, 2020. Gineke A Ten Holt, Marcel JT Reinders, and EA Hendriks. Multi-dimensional dynamic time warping for gesture recognition. In *Annual Conference of the Advanced School for Computing and Imaging*, volume 300, 2007. George Trigeorgis, Mihalis A Nicolaou, Stefanos Zafeiriou, and Bjorn W Schuller. Deep canonical time warping. In *IEEE Conference on Computer Vision and Pattern Recognition*, pp. 5110–5118, 2016. Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. Scipy 1.0: fundamental algorithms for scientific computing in Python. *Nature Methods*, 17(3):261–272, 2020. Victor Wegner Maus, Gilberto Câmara, Marius Appel, and Edzer Pebesma. dtwsat: Time-weighted dynamic time warping for satellite image time series analysis in R. *Journal of Statistical Software*, 88(5):1–31, 2019. Zaiwen Wen and Wotao Yin. A feasible method for optimization with orthogonality constraints. *Mathematical Programming*, 142(1-2):397–434, 2013. Martin Wöllmer, Marc Al-Hames, Florian Eyben, Björn Schuller, and Gerhard Rigoll. A multidimensional dynamic time warping algorithm for efficient multimodal fusion of asynchronous data streams. *Neurocomputing*, 73(1-3):366–380, 2009.
Zheng Zhang, Romain Tavenard, Adeline Bailly, Xiaotong Tang, Ping Tang, and Thomas Corpetti. Dynamic time warping under limited warping path length. *Information Sciences*, 393:91–107, 2017. Feng Zhou and Fernando De la Torre. Generalized time warping for multi-modal alignment of human motion. In *IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1282–1289, 2012. Feng Zhou and Fernando Torre. Canonical time warping for alignment of human behavior. In *Advances in Neural Information Processing Systems*, pp. 2286–2294, 2009.

## 6 Appendix

For a matrix p we note:

$$\mathrm{blockdiag}_{k}(\mathbf{p})=\mathrm{diag}(\underbrace{\mathbf{p},\cdots,\mathbf{p}}_{k\text{ times}})=\begin{pmatrix}\mathbf{p}&\mathbf{0}&\ldots&\mathbf{0}\\ \mathbf{0}&\mathbf{p}&\ldots&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\ldots&\mathbf{p}\end{pmatrix}.$$

We will prove the following lemma:

Lemma 6.1. Let q_x, q_y ∈ N such that q_y ≤ q_x and k ∈ N. Consider C ∈ R^{k·q_x × k·q_y}. We write the block decomposition of C as:

$$\mathbf{C}={\begin{pmatrix}\mathbf{C}_{11}&\mathbf{C}_{12}&\dots&\mathbf{C}_{1k}\\ \mathbf{C}_{21}&\mathbf{C}_{22}&\dots&\mathbf{C}_{2k}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{C}_{k1}&\mathbf{C}_{k2}&\dots&\mathbf{C}_{kk}\end{pmatrix}}$$

where C_ij ∈ R^{q_x × q_y} for i, j ∈ {1, · · ·, k}. Consider C̄ = Σ_{i=1}^k C_ii ∈ R^{q_x × q_y}, the sum of the diagonal blocks of C, and consider the thin SVD decomposition C̄ = UΣV⊤ where U ∈ R^{q_x × q_y} and Σ, V ∈ R^{q_y × q_y}. Then the solution to:

$$\operatorname*{max}_{\mathbf{p}\in V_{q_y,q_x}}\;\langle\mathbf{C},\mathrm{blockdiag}_{k}(\mathbf{p})\rangle\tag{23}$$

is given by p⋆ = UV⊤.

Proof. We have:

$$\langle\mathbf{C},\mathrm{blockdiag}_{k}(\mathbf{p})\rangle_{F}=\mathrm{tr}\left(\mathrm{blockdiag}_{k}(\mathbf{p})^{\top}\mathbf{C}\right)=\sum_{i=1}^{k}\mathrm{tr}\left(\mathbf{p}^{\top}\mathbf{C}_{ii}\right)=\mathrm{tr}\left(\mathbf{p}^{\top}\sum_{i=1}^{k}\mathbf{C}_{ii}\right)$$

since the off-diagonal blocks of C meet the zero blocks of blockdiag_k(p) and therefore do not contribute to the trace.
Hence:

$$\langle\mathbf{C},\mathrm{blockdiag}_{k}(\mathbf{p})\rangle_{F}=\left\langle\sum_{i=1}^{k}\mathbf{C}_{ii},\mathbf{p}\right\rangle=\langle\bar{\mathbf{C}},\mathbf{p}\rangle.$$

Thus finding the solution to Equation 23 is the same as finding the solution to $\max_{\mathbf{p}\in V_{q_y,q_x}}\langle\bar{\mathbf{C}},\mathbf{p}\rangle$, which is given by the SVD of C̄ (Jaggi, 2013).
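Lemma 6.1 can also be checked numerically: sum the diagonal blocks, take an SVD, and compare the resulting candidate against random Stiefel matrices. A small sketch with arbitrary dimensions of our own choosing:

```python
import numpy as np

# Dimensions chosen for illustration only.
rng = np.random.default_rng(0)
k, qx, qy = 4, 3, 2
C = rng.normal(size=(k * qx, k * qy))

# Sum of the diagonal blocks C_ii, then the candidate maximizer p* = U V^T.
Cbar = sum(C[i * qx:(i + 1) * qx, i * qy:(i + 1) * qy] for i in range(k))
U, _, Vt = np.linalg.svd(Cbar, full_matrices=False)
p_star = U @ Vt  # qx x qy, orthonormal columns (element of V_{qy,qx})

def blockdiag(p, k):
    """blockdiag_k(p): k copies of p on the diagonal, zeros elsewhere."""
    r, c = p.shape
    out = np.zeros((k * r, k * c))
    for i in range(k):
        out[i * r:(i + 1) * r, i * c:(i + 1) * c] = p
    return out

def objective(p):
    """<C, blockdiag_k(p)> (Frobenius inner product)."""
    return float(np.sum(C * blockdiag(p, k)))
```

The lemma says the block objective collapses to the inner product with the summed diagonal blocks, so `p_star` should dominate every other Stiefel candidate.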
Review 1: Summary: **Disclaimer:** I have reviewed this paper in the past for another journal. My impression of this manuscript, even in its earlier versions, has always been positive with a few points that can be improved. In my review below I include some of my comments given in a previous review that are still valid in this version. **Summary:** The article proposes an extension to the Dynamic Time Warping (DTW) and soft-DTW methods by jointly optimising a transformation of one of the series in order to compare series of different dimension. The authors refer to this methodology as a comparison among time series in both the temporal and feature spaces. The paper is clearly written, the contribution well placed within the literature, the experimental validation is convincing, and the proposed idea is certainly of interest for the community. Strengths and Weaknesses: I would like to draw the attention of the authors to the following points that can improve the manuscript. - My intuition is that the performance of the method heavily depends on the choice of the family of transformations $\mathcal{F}$. Additionally, to the best of my understanding, the authors only consider the space of linear mappings. This, however, makes the contribution valuable, since for the arguably simplest family of transformations (linear maps) the proposed method outperforms the benchmarks in many aspects. - What are the invariances that DTW-GI can incorporate in practice? The linear assumption on the maps results (in, again, my intuition) only in rotations, so, how does this help for real-world datasets? - In the same line, are the invariances with which the method can deal truly "global" as the title suggests? After reading the paper I see that the invariances are "linear" only (which in the experiments works well, but I don't see how a global or more general class of invariances can be accounted for). 
- The analysis of the meaning of linear transformations could be studied in more depth in the paper. For instance, in addition to rotations, a linear transformation can be a combination of sources and thus related to PCA or other factor models; with this in mind, one could both perform the temporal matching and find a compressed representation of the data at the same time. Other linear transformations of interest are the Fourier/cosine/wavelet transforms, where one series lies in time and the other in the spectrum, but they could in theory still be compared under DTW-GI nonetheless. - Lastly, a point that can be further explained is why the authors match the time series in the higher-dimensional space. The method is applied (Sec 3.1) to two series x and y where dim(x)>dim(y) and x is compared to f(y); therefore, the DTW occurs in the space of x, which is the high-dimensional one (instead of the space of y). Why is this? Isn't it better to compare the series in the lower domain due to computational complexity? I am also thinking that, if both series were to be transformed onto a lower-dimensional domain, the method could provide a "minimal-dimension" comparison. These are just ideas, but perhaps worth mentioning. Requested Changes: **In addition to changes derived from the comments above**, some particular changes are: - Eqs lack punctuation. - Intro: aligns in "times" (is it "time"?) - Intro: in more "details" --> "detail" - Sec 2, page 3: what do you mean "around the diagonal"? - Sec 3.1: I suggest stating that Tx and px are the "size" and "dimension" of the time series x respectively (these concepts are used throughout the text) - Sec 4: Fig 9, I suggest including axis labels in the zoomed-in figure I hope these recommendations help improve the paper.
Broader Impact Concerns: NA ================================================== Review 2: Summary: This paper introduces a time series distance, (soft)DTW-GI, based on matching the alignments of series both in time and in space. This is achieved by selecting the minimal (soft)DTW distance between the compared series up to pointwise transformations, e.g. rotations, defining an invariance class for the distance. DTW-GI provides two key advantages: integrating prior knowledge on the series via the invariance class, and recovering a related optimal mapping between series that are possibly lying in different feature spaces. The authors expose two possible optimization procedures to compute DTW-GI depending on the invariance class: gradient descent for differentiable objectives, and alternating optimization between spatial and temporal alignments, which for rigid transformations can be accelerated through SVDs. The applicability of DTW-GI is then empirically assessed against prior related approaches in diverse experiments: firstly, on synthetic data with an efficiency and robustness analysis and a barycenter computation test; secondly, on real-world data with human motion prediction and song identification relying on intra-dataset comparisons using the compared distances. Strengths and Weaknesses: ### Strengths This paper presents, to the best of my knowledge, an **original solution to a relevant problem**. Computing time series similarities is a non-trivial task, and this proposition tackles this issue with an interesting solution given its **simplicity, applicability and clear advantages**. In particular, the choice of invariance class and the recovered mapping between series make it transparent and appropriate when prior information on the compared series is available. Its **robustness to various settings**, stemming from this explicit choice of invariance class, is particularly appreciable.
**All claims are well supported**, especially via a diverse set of experiments illustrating the interest of the proposed distance w.r.t. prior related approaches. The paper is overall **well written**. It provides for the most part sufficient details on the proposed distance and experiments, making it **clear and easy to read** for a general audience. In particular, the most recent revision clearly improved the quality of the paper in this regard. The method is **sufficiently motivated** w.r.t. the existing literature and its presentation is supported by intuitive arguments and examples. ### Weaknesses To the best of my understanding, I detected no major weakness in the paper and/or method. Nonetheless, I think that there remains some room for improvement in some aspects. While the authors present two different optimization procedures to compute (soft)DTW-GI, **the interest of gradient descent remains unclear**. The performance difference between DTW-GI and softDTW-GI, as observed throughout the experiments and especially in Table 1 and Figure 2, is limited, while softDTW-GI is computationally more costly than DTW-GI. Moreover, all the experiments are performed in settings where other optimization schemes can be applied, like the proposed BCD over the Stiefel manifold. As hinted on page 6, gradient descent could be key to optimize DTW-GI when considering invariance w.r.t. a class of neural networks. However, **the use of DTW-GI combined with an overly large invariance class like neural networks might not be appropriate**: it could flatten differences between series and make the distance lose its meaning, which is one of its advantages. The current set of experiments does show the interest of DTW-GI w.r.t. other distances in the literature, but **they do not demonstrate its interest w.r.t. other (non-distance based) approaches**.
Indeed, the experiments with real-world data could be handled with other dedicated methods: sequential prediction models for time series forecasting, and deep models for cover song identification (as acknowledged by the authors). The barycenter experiment could highlight the interest of DTW-GI as a proper distance is key to compute barycenters, but it is only qualitative and performed on simple synthetic data. One of the advantages of DTW-GI, shared with GromovDTW, is to handle time series lying in different feature spaces. However, **having different feature spaces raises applicability questions** which I think are not addressed in the paper. For example, in the barycenter experiment, how can the dimensionality of the feature space be chosen? Moreover, only this toy experiment showcases this ability of DTW-GI; it would be beneficial for the paper to include an application case on real-world data. The experiment of Section 4.2 raises an interesting comparison between GromovDTW and DTW-GI in terms of robustness to noise. This is a crucial property, yet **this robustness to noise is not highlighted in the paper beyond this experiment on synthetic data**. I believe that further comments and/or experiments could help strengthen the interest of DTW-GI. Finally, I would have two comments regarding the reproducibility of the experiments: - there is no available source code at submission time but the authors indicate that it will be released upon publication; - the ranking method in Section 4.5 should be explicitly stated: I assume rankings are obtained based on distances between the covers and the base songs but this should be clearly indicated. ### Context of Review - I am knowledgeable about this domain but this is not my primary area of research. - This review was completed after the first revision of the paper (September 23rd, 2022).
Requested Changes: Accordingly with the previous section, I would like to highlight that none of the following suggested changes are critical to securing my recommendation, but I believe that addressing them would help improve the submission. - Could the authors comment on the interest of the gradient descent optimization scheme? In particular, how could we interpret DTW-GI when used with neural networks as invariance class? - Could the authors include a discussion and/or new experiments to motivate the use of DTW-GI beyond applications when other (non-distance based) dedicated methods might outperform it? - Could concrete examples of applications with time series lying in different feature spaces be included in the paper? - Could the experiments elaborate on the robustness to noise of DTW-GI w.r.t. GromovDTW, for example in the real-world data of Sections 4.4 and 4.5? - Could the authors explicitly state in Section 4.5 that rankings are obtained based on distances between the covers and the base songs? - Presentation request: please format Table 1 with the `booktabs` LaTeX package (for a better-looking presentation) and adjust the rainbow colormap of Figures 1, 2 and 5 to a perceptually uniform one (for better readability). Broader Impact Concerns: N/A ==================================================
The framework permits different operations with time series, such as: - Alignment - Similarity assessment - Calculation of barycentres Moreover, the proposed framework gives rise to algorithms that are differentiable under mild conditions. The state of the art is thus enriched by demonstrating the utility of a new perspective that is capable of encoding desirable invariances. Next to this, the paper also provides a thorough set of experiments to demonstrate how the proposed framework improves time series tasks in practice. This is a valuable contribution to time series learning on its own. Strengths and Weaknesses: The primary strength of the paper lies in the well-founded algorithmic description. Algorithms and concepts are, for the most part (see below for some comments), introduced in sufficient detail and explained very well. The theoretical derivation is correct; most experimental details are described adequately to ensure reproducibility. Some minor issues need to be rectified before publishing this work: - A more quantitative evaluation of certain results - Minor inconsistencies in descriptions - Missing details for forecasting setup I will subsequently comment on these changes (along with some minor textual changes) but want to stress that the authors are to be commended for a high-quality work; this paper is really a joy to read and contributes a direly-needed perspective to time series analysis. Requested Changes: As a brief note: whenever I use the term "consider", I want to stress that this is not a required change from my point of view, but there is no other place in the review form to collect my thoughts. - In Figure 1, the meaning of $\mathbb{V}_{2,3}$ should either be explained directly or the registration part of the figure should be slightly simplified. Moreover, the notation of $f^\ast$ and $\pi^\ast$ should be briefly explained in the caption. This notation only became clear to me after reading the full paper. 
- Please provide some notational details to Equation 2 when describing the time series segments. - When introducing the optimisation problem in Equation 3, consider briefly discussing to what extent symmetry in the arguments is required here. - I have trouble understanding the re-formulation of Equation 14. $C$ is supposed to be a matrix, but the vectors that are being used here appear to be incompatible in terms of their dimensions. I am probably misunderstanding something; can you please clarify this point to me and potentially revise the phrasing accordingly for the revision? - Consider providing more steps (i.e. a brief justification) when describing the equivalence of the left-hand side and the right-hand side in Equation 17. - When describing the barycentre formulation, does the fact that DTW is *not* a distance change anything? In the case of optimal transport distances between objects, for instance, the minimiser of an equation like Equation 18 is *unique*. Is this also the case here? - Please try to quantify the results in Section 4.2 and Figures 5, 6, and 7. In particular for Figures 6 and 7, I had trouble relating the things that are being *shown* in the figure to the errors. Is there maybe a simpler way to show differences in the individual trajectories, for instance by highlighting a sub-area that did not work well? - Figure 8 could use a small inset to "disentangle" some of the parts with many overlaps a little bit better. - Consider adding some more details about optimisation spaces other than the Stiefel manifold. Are there "canonical" examples that one could make use of in practice?
(Such a discussion does **not** have to be fleshed out; I think it would improve the scope of the paper to a substantial extent if more examples could be provided here) Some minor points concerning the wording and typesetting: - "is defined as equivalent" --> "is equivalent" - "produces mostly erroneous" --> "produces a mostly erroneous" - "depend in" --> "depend on" - "gaussian" --> "Gaussian" - "Extension of this work" --> "Extensions of this work" - "cf." --> see (I was under the impression, drilled into me by some well-meaning English literature students, that "cf." should be used to point to a cited proposition that is *analogous* to the present one, but not directly equivalent, following the idea to "compare" two sources. In this sense, "cf." is not directly equivalent to the plain "see". Apologies for being nitpicky here.) - Consider using `\operatorname` or `\DeclareMathOperator` instead of writing $DTW$ in plain (La)TeX. - Why is the citation by Paszke et al. typeset in a different colour? - Please check the bibliography carefully for erroneous bibliographical items; for instance, I saw that the paper by Cohen and Guibas is incorrectly cited as "Cohen and Guibasm". Likewise, the paper by Jaggi (2013) should use proper capitalisation for the Frank--Wolfe algorithm. Broader Impact Concerns: There are no broader impact concerns to be raised by this work. ================================================== Review 4: Summary: The authors propose a way to compute similarities between time series which are invariant to temporal and feature-level distortions. They achieve this by using a (potentially soft) variant of dynamic time warping (DTW) to align the time points and a learnable feature mapping to align the features, which notably even works when the time series have a different dimensionality. They then show that this can be used to compute barycenters, find nearest neighbors, and perform time series forecasting.
Strengths and Weaknesses: Strengths: - The paper is well written and the problem is clearly motivated. - The method is simple to understand and presumably simple to implement. - The proposed method empirically outperforms different baselines. Weaknesses: - The tradeoffs between the soft and hard versions of the proposed method could be explained in more detail. - For the real-world experiments, some more standard baselines could be helpful. Major comments: - In the forecasting experiment, only softDTW-GI is used and not DTW-GI. Why is that? Would DTW-GI not be applicable? - Since Fig. 3 suggests that softDTW-GI is several orders of magnitude slower than the other baselines, it would also be useful to report the runtimes in the forecasting experiment. - In the cover identification experiment, only DTW-GI is used and not the soft version. Again, why is that? It seems from the previous experiments that softDTW-GI generally works a bit better. - It would be useful to not only use time series alignment methods as baselines in the real-world experiments (although I do understand that those are the main competitors of the proposed methods), but also methods that practitioners would actually use. Especially in the case of the cover identification, methods like ByteCover [Du et al. 2020] achieve an MR1 < 4, which seems much better than all reported methods. It should therefore be discussed how realistic it would really be to use the proposed method in practice in this application. Additionally, reporting the performance in this experiment in terms of mAP would be helpful to compare to the literature. Minor comments: - Is there any intuition for why softDTW-GI outperforms DTW-GI on the synthetic data in Fig. 4? The ground-truth rotation should in principle be included in the search space of DTW-GI, right? Why does it not find it? - Sec. 3.1 bottom: alignement -> alignment - Sec. 
4.4.2: Procruste -> Procrustes Requested Changes: My main two requested changes, based on the comments above, would be: - Make sure that DTW-GI and softDTW-GI are both used in all experiments or otherwise discuss why they should not be used in certain settings. Generally, provide a clear discussion of which of the two methods would be advisable to use as a practitioner, including a discussion of the tradeoff between performance and runtime on the different tasks. - Report the performance of some more standard baselines in the two real-world tasks. Especially in the cover identification experiment, all the reported methods seem to be suspiciously bad compared to the standard literature on the subject. Broader Impact Concerns: No concerns. ================================================== Metareview: Recommendation: Accept as is Comment: This paper introduces a new method for aligning time series jointly in time and in the feature space, denoted as DTW-GI. The submission received positive reviews, highlighting the relevance of the approach, the clarity of the presentation, and the convincing experimental validation. During the discussion period, the authors provided additional experimental results, clarified the notations in the paper, and answered the reviewers' more general concerns about the generality of the approach. The reviewers were very satisfied with the answers and they unanimously recommended paper acceptance. The AE carefully read the submission, the reviews, and the discussions. He agrees that the approach is valuable and provides a solid contribution for the TMLR community. Therefore, the AE recommends acceptance. ==================================================
# Enhancing Compositional Generalization Via Compositional Feature Alignment

Haoxiang Wang hwang264@illinois.edu
Department of Electrical and Computer Engineering
University of Illinois Urbana-Champaign

Haozhe Si *haozhes3@illinois.edu*
Department of Electrical and Computer Engineering
University of Illinois Urbana-Champaign

Huajie Shao hshao@wm.edu
Department of Computer Science
William and Mary

Han Zhao hanzhao@illinois.edu
Department of Computer Science
University of Illinois Urbana-Champaign

Reviewed on OpenReview: *https: // openreview. net/ forum? id= k3d5C0YvfK*

## Abstract

Real-world applications of machine learning models often confront data distribution shifts, wherein discrepancies exist between the training and test data distributions. In the common multi-domain multi-class setup, as the number of classes and domains scales up, it becomes infeasible to gather training data for every domain-class combination. This challenge naturally leads to the quest for models with Compositional Generalization (CG) ability, where models can generalize to unseen domain-class combinations. To delve into the CG challenge, we develop CG-Bench, a suite of CG benchmarks derived from existing real-world image datasets, and observe that the prevalent pretraining-finetuning paradigm on foundation models, such as CLIP and DINOv2, struggles with the challenge. To address this challenge, we propose Compositional Feature Alignment (CFA), a simple two-stage finetuning technique that i) learns two orthogonal linear heads on a pretrained encoder with respect to class and domain labels, and ii) fine-tunes the encoder with the newly learned heads frozen. We theoretically and empirically justify that CFA encourages compositional feature learning of pretrained models. We further conduct extensive experiments on CG-Bench for CLIP and DINOv2, two powerful pretrained vision foundation models.
Experiment results show that CFA outperforms common finetuning techniques in compositional generalization, corroborating CFA's efficacy in compositional feature learning. The code is released at https://github.com/Haoxiang-Wang/Compositional-Feature-Alignment.

## 1 Introduction

Over the past decade, machine learning has emerged as a transformative technology, driving advancements across various domains such as computer vision (He et al., 2016a;b), natural language processing (Devlin et al., 2019; Brown et al., 2020), biology (Jumper et al., 2021), etc. These innovations have been fueled by the development of increasingly sophisticated models, the availability of large-scale datasets, and the growth of computational power. However, a crucial obstacle persists in applying machine learning models to real-world scenarios: their performance tends to degrade significantly when confronted with data distribution shifts (Koh et al., 2021; Gulrajani & Lopez-Paz, 2021; Santurkar et al., 2021), where the data distribution during testing differs from that used in training.

![1_image_0.png](1_image_0.png)

Figure 1: Compositional generalization (CG) vs. domain generalization (DG). Masked entries are unseen domain-class combinations, while unmasked ones exist in the training dataset.

In an effort to overcome this problem, the machine learning research community has turned its attention to Out-of-Distribution (OOD) generalization, with the goal of developing models that are robust under data distribution shifts. Existing research primarily investigates various types of data distribution shifts, such as domain generalization (Gulrajani & Lopez-Paz, 2021; Koh et al., 2021), subpopulation shift (Santurkar et al., 2021; Yang et al., 2023), input corruption (Deng et al., 2009), and spurious correlation (Sagawa et al., 2020).
While generalizing to these different types of distribution shifts has garnered significant attention, there exists another realistic yet understudied challenge in OOD generalization: *compositional generalization* (CG). Within the multi-domain, multi-class context, assume we have $E$ domains (i.e., environments) and $K$ classes, leading to $E \times K$ pairs of domain and class combinations, which can be formulated as elements of an $E \times K$ matrix as shown in Figure 1. In domain generalization (DG), the learner has access to data from all the classes and all the domains and aims to make predictions on data from a new, unseen domain. Nonetheless, in real-world scenarios, given the large number of categories, e.g., 1000 classes in ImageNet, one cannot always collect complete data from all the domains. Put another way, the training data might not cover all the possible combinations of domains and classes, as represented by each cell in the matrix in Figure 1. This sparsity pattern becomes especially pronounced when the number of classes or environments is large because collecting comprehensive training data for each combination becomes a formidable task. In such cases, a key challenge arises: *can the model generalize to unseen domain-class combinations?* This is the compositional generalization (CG) challenge we aim to tackle in this work. The CG challenge manifests ubiquitously across various real-world applications. For example, data distribution shifts in certain existing DG datasets, such as iWildCam (Beery et al., 2020; Koh et al., 2021), are more accurately characterized by CG than DG. Moreover, we find that the widely-used method of finetuning pretrained (foundation) models struggles to tackle the CG challenge. This emphasizes the need for the machine learning community to recognize and address this emerging distribution shift challenge with innovative solutions.
Our Contributions In our attempt to tackle this challenge, we draw inspiration from existing lines of research such as invariant risk minimization (IRM) (Arjovsky et al., 2019) and invariant-feature subspace recovery (ISR) (Wang et al., 2022). In particular, Wang et al. (2022) showed that under certain structural conditions in the data generative process, post-processing methods via subspace projection can effectively learn invariant features that can generalize across unseen domains from the same data generative process but under different interventions on non-causal factors. Empirically, we find that if the learned features (i.e., the outputs of the last hidden layer) conform to a compositional structure where the subspace of *domain-related* features is orthogonal to that of *class-related* features, the corresponding model can generalize across unknown domain-class pairs.

![2_image_0.png](2_image_0.png)

Figure 2: Illustration of a desired compositional feature structure for compositional generalization.

Motivated by this observation, to induce features that match this compositional structure, we introduce a two-stage finetuning approach termed Compositional Feature Alignment (CFA), which is also inspired by recent progress in the literature of neural collapse (Papyan et al., 2020; Zhu et al., 2021; Yang et al., 2022). More specifically, upon the features given by the encoder, we construct two heads, one for predicting the target label of interest and the other for predicting the domain index. Note that the two-head architecture is not new, and has been widely used in domain adversarial neural networks (Ganin & Lempitsky, 2015; Zhao et al., 2018). However, different from domain adversarial neural networks where adversarial training through minimax optimization is needed, our proposed method is computationally lightweight and can be divided into two stages.
CFA first identifies a proper compositional feature structure via two-head regularized linear probing (i.e., training linear heads with the encoder frozen). Subsequently, the encoder undergoes finetuning with the heads being frozen. Leveraging tools from the neural collapse literature, we theoretically prove that CFA can effectively align features with the compositional feature structure under mild assumptions. Furthermore, we construct a synthetic Color-CIFAR dataset to examine CFA empirically and observe that CFA can indeed align features with the desired compositional feature structure. To facilitate the study of compositional generalization, we curate a suite of benchmarks for the compositional generalization challenge, building on four real-world image datasets: OfficeHome (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), WILDS-iWildCam (Beery et al., 2020), and WILDS-FMoW (Christie et al., 2018). We consider two powerful pretrained vision encoders, CLIP (Radford et al., 2021) and DINOv2 (Oquab et al., 2023), with the ViT-B (Dosovitskiy et al., 2021) architecture, and apply different finetuning methods to them, including linear probing, full finetuning, LP-FT (Kumar et al., 2022), reweighting, and our proposed CFA. Extensive experimental results on CG-Suite show that CFA-finetuned models can indeed generalize to unseen domain-class combinations better than other finetuning methods. We hope that the curated CG-Suite can facilitate future research on compositional generalization.

## 2 Compositional Feature Alignment

The key to the CG challenge is to identify and encode the compositional relationship between classes and domains by learning the features. Hence, it is important to first understand what kind of feature structures are desired for compositional generalization. To this end, we first provide a formal definition of the compositional feature structure, and then explain our motivations behind the definition.
Definition 1 (Compositional Feature Structure). *For any input* $x$ *from class* $y \in \{1, \ldots, K\}$ *and domain* $e \in \{1, \ldots, E\}$*, its feature* $z \in \mathbb{R}^d$ *satisfies the compositional feature structure as long as* $z$ *can be decomposed as:*

$$\text{Class Feature: } z_1 \sim \mathcal{N}(\mu_1^y, \Sigma_1^y) \in \mathbb{R}^{d_1}, \qquad \text{Domain Feature: } z_2 \sim \mathcal{N}(\mu_2^e, \Sigma_2^e) \in \mathbb{R}^{d_2},$$

$$\text{Total Feature: } z = R \begin{bmatrix} z_1 \\ z_2 \\ z_{\text{noise}} \end{bmatrix},$$

*where* $z_{\text{noise}} \in \mathbb{R}^{d - d_1 - d_2}$ *represents noise features irrelevant to classes and domains, and* $R \in \mathbb{R}^{d \times d}$ *is a full-rank orthonormal matrix. Note that* $\mu_1^y, \Sigma_1^y$ *depend on* $y$*, while* $\mu_2^e, \Sigma_2^e$ *depend on* $e$*.*

![3_image_0.png](3_image_0.png)

Figure 3: Illustration of our proposed method, Compositional Feature Alignment (CFA).

Fig. 2 provides a visualization of this compositional feature structure for a simple setup of 2 classes and 3 domains, where one domain-class combination is absent in the training set. It is evident that class features and domain features exist in orthogonal subspaces, as required by Definition 1. In this case, a linear classifier that exclusively utilizes class features and disregards domain and noise features can effectively generalize to the unseen domain-class combination. It is noteworthy that even with the perfect alignment of learned features to this compositional structure on all training data, there is no guarantee that features from unseen domain-class combinations will still conform to this structure.

## 2.1 Method: Training With Frozen Orthogonal Heads Under Normalization

Though we have defined an ideal feature structure for compositional generalization, the neural features produced by pretrained models may not align with this structure. In this section, we introduce our method to encourage the learned features to follow the above structure.
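The structure the method targets can be illustrated with a minimal synthetic sketch of Definition 1. The dimensions, means, and noise scale below are illustrative choices, not taken from the paper: features are drawn blockwise and mixed by a shared orthonormal $R$, so class and domain information occupy orthogonal subspaces.

```python
import numpy as np

rng = np.random.default_rng(0)
K, E = 2, 3                              # classes and domains, as in Fig. 2
d1, d2, d = 2, 3, 8                      # class, domain, and total feature dims

# A full-rank orthonormal R, here obtained from a QR decomposition.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

mu_class = rng.normal(size=(K, d1))      # mu_1^y for each class y
mu_domain = rng.normal(size=(E, d2))     # mu_2^e for each domain e

def sample_feature(y, e, noise_scale=0.05):
    """Draw z = R [z1; z2; z_noise] following Definition 1 (isotropic Sigma)."""
    z1 = mu_class[y] + noise_scale * rng.normal(size=d1)
    z2 = mu_domain[e] + noise_scale * rng.normal(size=d2)
    z_noise = noise_scale * rng.normal(size=d - d1 - d2)
    return R @ np.concatenate([z1, z2, z_noise])

z = sample_feature(y=1, e=2)

# Because R is orthonormal, R^T recovers the blocks: the class block carries
# no domain information and vice versa -- they live in orthogonal subspaces.
blocks = R.T @ z
assert np.linalg.norm(blocks[:d1] - mu_class[1]) < 1.0
assert np.linalg.norm(blocks[d1:d1 + d2] - mu_domain[2]) < 1.0
```

A linear classifier that reads only the class block through $R$ is unaffected by domain shifts, which is the intuition behind generalizing to unseen domain-class pairs.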
At a high level, the proposed method contains two stages of finetuning from pretrained models. We include an auxiliary linear head for domain prediction, complementing the pre-existing class prediction head. The first stage involves multi-label linear probing, which adjusts the two heads to achieve an optimal compositional feature structure. In the second stage, we fine-tune the encoder while keeping the two heads frozen. The final product is a finetuned encoder that generates features in alignment with the predetermined compositional feature structure from stage one. The two stages, diagrammatically represented in Fig. 3, are detailed below, followed by a discussion on our rationale behind the algorithm design.

Stage 1 (Multi-Label Linear Probing). *We begin with a pretrained encoder* $\Phi(\cdot)$ *that maps inputs to* $d$*-dimensional features of unit norm, where* $d > K + E$*. We construct two linear heads without bias terms, denoted by* $W_1 \in \mathbb{R}^{K \times d}$ *and* $W_2 \in \mathbb{R}^{E \times d}$*. Keeping* $\Phi(\cdot)$ *frozen, we train these heads with two cross-entropy loss terms, which take into account both class and domain labels. An orthogonality constraint ensures* $W_1$ *and* $W_2$ *span orthogonal subspaces. Mathematically, the optimization objective of the first stage can be written as*

$$\min_{W_{1},W_{2}}\frac{1}{N}\sum_{(x,y,e)\in\mathcal{D}_{\text{train}}}\frac{1}{K}\ell_{\text{CE}}(\beta_{1}\cdot W_{1}\Phi(x),\ y)+\lambda\frac{1}{E}\ell_{\text{CE}}(\beta_{2}\cdot W_{2}\Phi(x),\ e)\tag{1}$$

$$\text{s.t.}\quad\beta_{1},\beta_{2},\lambda>0\quad\text{and}\quad W_{1}\in\mathcal{U}(d)^{K},\ W_{2}\in\mathcal{U}(d)^{E},\ W_{1}W_{2}^{T}=\mathbf{0}\tag{2}$$

*where* $\mathcal{D}_{\text{train}}$ *represents the training set,* $\ell_{\text{CE}}$ *is the cross-entropy loss,* $\beta_1, \beta_2$ *are inverse temperature parameters (also called the logit scale in CLIP (Radford et al., 2021)),* $\mathcal{U}(d)$ *denotes the set of* $d$*-dimensional unit vectors, and* $\mathbf{0}$ *stands for the zero matrix.*

Stage 2 (Finetuning with Frozen Heads).
We then freeze the trained $W_1$ and $W_2$, and proceed to fine-tune the encoder $\Phi(\cdot)$ end-to-end, using the same multi-label cross-entropy loss function. The optimization objective of this finetuning stage can be expressed as

$$\min_{\Phi}\frac{1}{N}\sum_{(x,y,e)\in\mathcal{D}_{\text{train}}}\frac{1}{K}\ell_{\text{CE}}(\beta_{1}\cdot W_{1}\Phi(x),\ y)+\lambda\frac{1}{E}\ell_{\text{CE}}(\beta_{2}\cdot W_{2}\Phi(x),\ e)\tag{3}$$

The following discussion explains our motivations and the reasoning underlying the algorithm design:

- *Freezing Head for Feature Alignment*: Recent work on neural collapse indicates that, during the training of a multi-class classifier using cross-entropy loss, freezing the linear head according to a simplex structure can guide the features to align with the frozen head (Zhu et al., 2021; Yang et al., 2022). This observation implies that the features of data in class $y$ collapse in the direction of the row vector of the classifier corresponding to class $y$. In addition, we empirically observe that the head-freezing technique does not compromise the model's performance compared to end-to-end finetuning. We include the details regarding this experiment in an ablation study in Appendix C. Inspired by these observations, we devise the two-stage strategy where Stage 1 determines the optimal head weights for our compositional feature structure, and Stage 2 finetunes the encoder with frozen head weights to align the features with the feature structure.

- *Linear Probing Two Orthogonal Heads*: Unlike research on neural collapse that focuses exclusively on class prediction (Papyan et al., 2020; Zhu et al., 2021; Yang et al., 2022), our work also accounts for the effects of domains, as outlined in Definition 1. We therefore introduce an auxiliary head, $W_2$, for domain prediction, alongside the original class prediction head denoted as $W_1$.
Thus, for a sample $x$, the encoder together with $W_1$ and $W_2$ predicts the class and domain labels from the feature $\Phi(x)$. Definition 1 implicitly poses an orthogonality requirement on domain-related and class-related features, since $R$ is an orthonormal matrix. To meet this requirement, we impose an orthogonality constraint on the two heads (i.e., $W_1 W_2^T = \mathbf{0}$).

- *Normalizing Features and Weights to Address Data Imbalance*: While Zhu et al. (2021); Yang et al. (2022) provide a theoretical justification for head freezing, their theory assumes class-balanced training data. In the case of data imbalance, Thrampoulidis et al. (2022) show that the head and features may become misaligned. Upon reviewing the technical details of Thrampoulidis et al. (2022), we find that this misalignment can be rectified by normalizing features and head weights to a hypersphere. This normalization ensures constant norms for features and head weights, thereby ensuring alignment (cf. Theorem 1). Consequently, we assume that the features produced by the encoder $\Phi$ are also normalized to unit norm, which is common practice in modern vision model pretraining such as CLIP (Radford et al., 2021), SimCLR (Chen et al., 2020), MoCo (He et al., 2020), and DINO (Caron et al., 2021). Additionally, we impose the head normalization constraint $W_1 \in \mathcal{U}(d)^K$, $W_2 \in \mathcal{U}(d)^E$ in (2), a technique already employed in CLIP (Radford et al., 2021).¹

## 2.2 Theoretical Guarantee

In the algorithm above, Stage 1 is relatively simple, comprising a joint minimization problem over two linear heads. In contrast, Stage 2 is more complex, as it optimizes a neural encoder using two heads under two cross-entropy loss terms. We offer theoretical justification for Stage 2 below, demonstrating that the finetuned encoder can indeed align features with the two frozen orthogonal heads produced by Stage 1, thereby creating a feature structure that meets the requirement in Definition 1.
In line with recent research on neural collapse (Mixon et al., 2020; Fang et al., 2021; Zhu et al., 2021; Thrampoulidis et al., 2022), we adopt the unconstrained feature model (UFM), or layer-peeled model, where $z = \Phi(x)$ is treated as a free optimization variable in $\mathbb{R}^d$ for every input $x$. We denote $\mathbf{Z} = [\Phi(x_1), \ldots, \Phi(x_N)] \in \mathbb{R}^{d \times N}$, $\mathbf{Y} = [y_1, \ldots, y_N]$, and $\mathbf{E} = [e_1, \ldots, e_N]$ as the stacks of features, class labels, and environment labels, respectively. In the context of the unconstrained feature model, the optimization objective of Stage 2 is transformed to:

$$\min_{\mathbf{Z}}\frac{1}{KN}\ell_{\mathrm{CE}}(\beta_{1}\cdot W_{1}\mathbf{Z},\mathbf{Y})+\lambda\frac{1}{EN}\ell_{\mathrm{CE}}(\beta_{2}\cdot W_{2}\mathbf{Z},\mathbf{E})\quad\text{s.t.}\quad\mathbf{Z}\in\mathcal{U}(d)^{N}\tag{4}$$

Theorem 1 (Feature Alignment). *Assume the feature dimension $d$ is no smaller than $K + E$, training data exists for each class and each domain (though not necessarily for each domain-class combination), and $W_1$ and $W_2$ are normalized and span orthogonal subspaces such that $W_1 \in \mathcal{U}(d)^K$, $W_2 \in \mathcal{U}(d)^E$, and $W_1 W_2^T = \mathbf{0}$. Additionally, assume $\beta_1, \beta_2$ are sufficiently large. Then, at the global minimum of* (4)*, for any $i \in [N]$, denoting $z_i^*$ as the $i$-th column vector of the optimal $\mathbf{Z}$, we have*

$$z_{i}^{*}=W_{1}^{T}a_{y_{i}}+W_{2}^{T}b_{e_{i}}\tag{5}$$

*where $a_{y_i} \in \mathbb{R}^K$ is a vector depending on the class label $y_i$, and $b_{e_i} \in \mathbb{R}^E$ is a vector depending on the domain label $e_i$.*

¹Recent studies show that weight normalization for the linear head (without bias) can enhance the performance of fine-tuned CLIP (Goyal et al., 2022; Wang et al., 2023).
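As a sanity check on the structure promised by Theorem 1, the following sketch (our own illustration; NumPy assumed, and the particular $a_y$, $b_e$ vectors are hypothetical) builds unit-norm heads spanning orthogonal subspaces via a QR decomposition and verifies that a feature composed as in (5) is classified correctly by both heads:

```python
import numpy as np

rng = np.random.default_rng(0)
K, E, d = 4, 3, 10  # requires d >= K + E

# Build unit-norm heads spanning orthogonal subspaces, as required by Theorem 1:
# QR yields d x (K+E) orthonormal columns, which we split between W1 and W2.
Q, _ = np.linalg.qr(rng.standard_normal((d, K + E)))
W1, W2 = Q[:, :K].T, Q[:, K:].T
assert np.allclose(np.linalg.norm(W1, axis=1), 1.0)  # rows are unit vectors
assert np.allclose(W1 @ W2.T, 0.0)                   # orthogonal subspaces

# Compose a feature as in (5): a class component plus a domain component.
y, e = 2, 1
a = np.full(K, -1.0); a[y] = 2.0  # hypothetical a_y, peaked at the true class
b = np.full(E, -1.0); b[e] = 2.0  # hypothetical b_e, peaked at the true domain
z = W1.T @ a + W2.T @ b
z /= np.linalg.norm(z)

# Since W1 @ W1.T = I and W1 @ W2.T = 0, each head reads off only its own
# component, so both predictions are correct for any feature of this form.
assert np.argmax(W1 @ z) == y
assert np.argmax(W2 @ z) == e
```

The decoupling shown here is exactly why Stage 2 can align class and domain information into orthogonal subspaces without the two loss terms interfering with each other.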
This theorem intuitively demonstrates that, upon optimizing (4) to a global minimum, for any training sample from class $y$ in environment $e$, its corresponding feature $z^*$ decomposes as a linear combination of two vectors that depend on $y$ and $e$, respectively, and these two vectors live in orthogonal feature subspaces. This indicates that the learned features conform to a compositional feature structure satisfying Definition 1. The complete proof can be found in Appendix B, where we leverage theoretical tools from Thrampoulidis et al. (2022).

## 3 Empirical Studies

## 3.1 Benchmark Development of CG-Bench

We create CG-Bench, a compositional generalization benchmark built on four datasets previously designed for DG research: Office-Home (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), and iWildCam (Beery et al., 2020) & FMoW (Christie et al., 2018) from the WILDS benchmark (Koh et al., 2021). These datasets span a wide range of application scenarios, from common-object recognition in web-crawled images to wildlife recognition from camera traps and building recognition from satellite images. Due to the page limit, we elaborate on the motivation for creating CG-Bench and the curation procedure using DomainNet as an example; additional details regarding benchmark curation can be found in Appendix A. DomainNet (Peng et al., 2019) consists of objects in different art styles, with $K = 345$ classes and $E = 6$ domains: {Clipart, Infograph, Painting, Quickdraw, RealImage, Sketch}. In addressing the DG challenge, prior research using DomainNet typically employed a leave-one-out cross-validation strategy: the model is trained on data from five of the domains and evaluated on the sixth, omitted domain. Beyond the DG task, it is worth noting that the CG challenge is intrinsically present within DomainNet.
To underscore this point, we carried out a preliminary experiment using CLIP.

CG Challenge in DomainNet We randomly divide DomainNet into training and evaluation sets with an 80:20 split. A CLIP model is fully fine-tuned on the training data and evaluated on validation data from all domain-class combinations. We also gather the zero-shot accuracy of the CLIP model for comparison. As a final step, we examine the test accuracy for each domain-class combination, correlating it with the number of training samples for that combination. We focus only on *hard* domain-class combinations for which the zero-shot accuracy is below 30%, and visualize the evaluation results over these combinations in Figure 4a. First, we notice that certain domain-class combinations possess minimal or even no training samples (e.g., some combinations have only 2 images, neither of which is sampled into the training set). This observation aligns with our considered CG scenario. Within this CG context, both the fine-tuned and zero-shot models struggle to achieve high test accuracy when the training data for a specific domain-class combination is insufficient. This leads us to conclude that the CG challenge is inherently present in DomainNet and that current zero-shot and fine-tuned models fail to address it. Consequently, we are motivated to establish a benchmark for a methodical investigation of this challenge.

![6_image_0.png](6_image_0.png)

![6_image_1.png](6_image_1.png)

Figure 4: Test accuracy statistics for each domain-class combination on the DomainNet dataset. **Left** (a): Test accuracy vs. training data size; points are median accuracy and the shaded area is bounded by the 25% and 75% quantiles. **Right** (b): The number of domain-class combinations at different zero-shot test accuracies.
Benchmark Curation Setup Consider data belonging to $E$ domains (i.e., environments) and $K$ classes, resulting in an $E \times K$ matrix of domain-class combinations (such as the example shown in Fig. 1). A binary mask $M_{\mathrm{id}} \in \{0, 1\}^{E \times K}$ is applied to this matrix to indicate *in-distribution* domain-class combinations, ensuring that each row and each column contains both 0 and 1, so that the training data includes all classes and domains while some domain-class combinations are absent. The complementary binary mask ($M_{\mathrm{ood}} = 1 - M_{\mathrm{id}}$) represents OOD domain-class combinations.

CG Curation of DomainNet We form an $E \times K$ class-environment matrix for DomainNet and evaluate the zero-shot accuracy for every domain-class combination. The resulting distribution is visualized in Figure 4b. We designate the combinations that fall within the lowest 20% of zero-shot accuracies as the out-of-distribution (OOD) set, while the top 80% constitute the in-distribution (ID) set. To ensure comprehensive representation, each row/column contains at least one entry from the ID set. The ID data is then further split into a training set and an ID validation set at a 9:1 ratio, while the OOD data is divided between OOD validation and test sets. When evaluating a model trained on the main training dataset, we assess its performance across the ID validation, OOD validation, and OOD test subsets. For this benchmark, our key metric is the average top-1 accuracy on both the ID and OOD sets.
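The row/column constraint on the ID mask can be checked in a few lines; the helper below is our own sketch (NumPy assumed), not part of any released benchmark code:

```python
import numpy as np

def valid_id_mask(M_id):
    """Check the curation constraint: every domain (row) and every class
    (column) must contain at least one ID entry (1) and one OOD entry (0)."""
    M = np.asarray(M_id)
    rows_ok = M.any(axis=1).all() and (1 - M).any(axis=1).all()
    cols_ok = M.any(axis=0).all() and (1 - M).any(axis=0).all()
    return bool(rows_ok and cols_ok)

M_id = np.array([[1, 1, 0],
                 [0, 1, 1],
                 [1, 0, 1]])
M_ood = 1 - M_id  # complementary OOD mask
assert valid_id_mask(M_id)
assert not valid_id_mask(np.ones((2, 2), dtype=int))  # a row with no OOD cells
```

A mask passing this check guarantees that every class and every domain is seen during training, even though some of their combinations are held out for OOD evaluation.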
LP-FT (Kumar et al., 2022) is a simple two-stage finetuning strategy that addresses the problem that full finetuning can distort pretrained features and often underperforms linear probing on out-of-distribution data. *Note:* Our proposed approach bears similarities to LP-FT (Kumar et al., 2022), as both methods start with a linear probing stage and then proceed with finetuning. However, there are two critical differences: i) our approach employs two heads with a multi-label cross-entropy loss and imposes an orthogonality constraint during the linear probing stage, and ii) we keep the heads frozen during the second finetuning stage. *Reweighting* (Buda et al., 2018) balances the number of samples from each group in each batch during training and is robust to group shifts in OOD generalization tasks. For the CG benchmarks, we implement two versions of the reweighting strategy: Reweight-E, which resamples according to the domain labels, and Reweight-Y×E, which balances according to the domain-class combinations.
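The reweighting baselines can be realized with per-sample weights inversely proportional to group frequency; a minimal stdlib sketch (the helper name is ours):

```python
from collections import Counter

def balanced_sample_weights(groups):
    """Per-sample weights inversely proportional to group frequency, so every
    group contributes the same total weight when drawing a batch."""
    counts = Counter(groups)
    return [1.0 / counts[g] for g in groups]

# Reweight-E groups by domain; Reweight-YxE would group by (class, domain).
domains = ["sketch", "sketch", "sketch", "real"]
w = balanced_sample_weights(domains)
assert abs(sum(w[:3]) - sum(w[3:])) < 1e-12  # both domains carry equal weight
```

In practice, these weights would be passed to a weighted random sampler so that each batch is balanced across groups in expectation.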
| Model | Method | OfficeHome ID Acc | OfficeHome OOD Acc | DomainNet ID Acc | DomainNet OOD Acc | iWildCam ID Acc | iWildCam OOD Acc | iWildCam ID F1 | iWildCam OOD F1 | FMoW ID Acc | FMoW OOD Acc |
|-------|--------|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| CLIP | Zero-Shot | 89.2 | 50.3 | 61.7 | 6.6 | 13.7 | 6.9 | 11.7 | 9.2 | 20.4 | 18.8 |
| CLIP | Linear Probing | 90.9 | 41.0 | 72.7 | 4.7 | 72.5 | 14.4 | 42.1 | 22.5 | 37.7 | 27.6 |
| CLIP | Fine-Tuning | 94.3 | 51.0 | 82.0 | 7.5 | 74.5 | 16.5 | 43.8 | 22.2 | 65.8 | 38.7 |
| CLIP | Fine-Tuning (WiSE) | 93.7 | 52.5 | 76.4 | 8.7 | 67.0 | 13.7 | 31.6 | 17.0 | 49.5 | 40.6 |
| CLIP | LP-FT | 93.5 | 43.9 | 81.5 | 5.3 | 74.0 | 17.0 | 42.5 | 26.6 | 65.9 | 40.2 |
| CLIP | LP-FT (WiSE) | 93.0 | 42.8 | 79.4 | 5.3 | 74.4 | 18.2 | 44.5 | 28.7 | 56.6 | 36.3 |
| CLIP | Reweight-E | 94.0 | 51.9 | 81.2 | 7.4 | 75.3 | 17.2 | 45.2 | 24.3 | 62.4 | 41.8 |
| CLIP | Reweight-E (WiSE) | 93.6 | 53.1 | 75.9 | 8.5 | 68.5 | 13.7 | 32.9 | 18.0 | 46.8 | 41.7 |
| CLIP | Reweight-Y×E | 93.7 | 52.2 | 81.0 | 7.6 | 72.2 | 17.4 | 41.6 | 30.0 | 58.0 | 41.1 |
| CLIP | Reweight-Y×E (WiSE) | 93.4 | 53.4 | 75.5 | 8.5 | 55.9 | 15.1 | 29.0 | 22.7 | 42.1 | 37.5 |
| CLIP | CFA | 94.3 | 54.3 | 81.6 | 7.3 | 74.0 | 18.3 | 43.6 | 31.0 | 65.3 | 41.6 |
| CLIP | CFA (WiSE) | 93.1 | 56.9 | 76.5 | 9.2 | 74.6 | 19.7 | 45.6 | 32.5 | 53.5 | 36.6 |
| DINOv2 | Fine-Tuning | 91.8 | 38.6 | 82.4 | 5.3 | 76.4 | 14.4 | 47.6 | 18.3 | 66.1 | 38.4 |
| DINOv2 | Linear Probing | 93.3 | 40.0 | 75.4 | 4.8 | 77.0 | 19.6 | 50.7 | 27.9 | 45.5 | 25.5 |
| DINOv2 | LP-FT | 93.1 | 38.2 | 82.5 | 5.1 | 77.6 | 23.1 | 52.8 | 30.8 | 67.1 | 37.4 |
| DINOv2 | LP-FT (WiSE) | 94.0 | 39.7 | 81.6 | 6.2 | 77.9 | 22.3 | 53.2 | 31.0 | 61.0 | 33.7 |
| DINOv2 | Reweight-E | 91.2 | 38.9 | 81.8 | 5.2 | 76.9 | 13.1 | 48.0 | 17.9 | 62.3 | 38.4 |
| DINOv2 | Reweight-Y×E | 91.3 | 39.0 | 81.5 | 5.3 | 72.2 | 17.4 | 41.6 | 30.0 | 57.5 | 37.6 |
| DINOv2 | CFA | 92.8 | 39.2 | 82.6 | 5.6 | 78.1 | 22.8 | 52.5 | 30.8 | 67.2 | 38.5 |
| DINOv2 | CFA (WiSE) | 93.1 | 40.4 | 79.6 | 6.4 | 78.2 | 23.8 | 52.6 | 33.4 | 59.8 | 34.3 |

Table 1: Test accuracy (%) and F1-macro scores (%) of different methods on CG-Bench. The OOD accuracy for FMoW is the worst-region accuracy. The highest accuracy or F1 values, and those within a range of 0.2 of them, are in **bold**.

Postprocessing with WiSE-FT (Wortsman et al., 2022) After finetuning with the three baseline methods and our proposed CFA, we also apply WiSE-FT (Wortsman et al., 2022) with α = 0.5 to postprocess the model, which simply averages the initial and finetuned model parameters in parameter space. WiSE-FT has been shown to improve model performance (especially OOD performance) in some cases (Wortsman et al., 2022). *Note:* For the CLIP experiments, we interpolate the finetuned model with the zero-shot CLIP encoder and classification head. For the DINOv2 experiments, since there is no zero-shot classification head available, we interpolate the finetuned models with the linear-probing/stage-1 CFA results. Consequently, we do not perform WiSE-FT on the full finetuning and reweighting baselines, since they do not have linear-probed heads.

Implementation of CFA Empirically, we make small modifications to Stage 1, dividing it into two steps: i) train $W_2$ on the domain labels with reweighting until convergence, with $W_1$ fixed to its zero-shot weights; ii) then train $W_1$ on the class labels with reweighting, encouraging orthogonality between $W_1$ and the fixed $W_2$ with an $\ell_2$ regularization term $\|W_1^T W_2\|_F^2$. For a fair comparison, we also use reweighting on the class labels when performing linear probing for LP-FT. Following Sec. 2.1, we normalize the row vectors of $W_1$ and $W_2$ to unit norm and also normalize the outputs of $\Phi$ to unit norm. In addition, in the linear probing stage of CFA and LP-FT, we constrain $W_1$ and $W_2$ to a subspace determined by the zero-shot linear classifier of CLIP, as we find this improves the final OOD performance.
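The orthogonality regularizer above is simply the squared Frobenius norm of the head product; a small NumPy sketch (the helper name is ours):

```python
import numpy as np

def orthogonality_penalty(W1, W2):
    """Squared Frobenius norm of W1 @ W2.T: the l2 regularizer that pushes
    the two heads toward spanning orthogonal subspaces."""
    return float(np.sum((W1 @ W2.T) ** 2))

W2 = np.array([[1.0, 0.0, 0.0]])       # fixed domain head (one row)
W1_bad = np.array([[1.0, 0.0, 0.0]])   # overlaps W2's subspace -> penalized
W1_good = np.array([[0.0, 1.0, 0.0]])  # orthogonal to W2 -> zero penalty
assert orthogonality_penalty(W1_good, W2) == 0.0
assert orthogonality_penalty(W1_bad, W2) > 0.0
```

Adding this term to the class-prediction loss lets the hard constraint $W_1 W_2^T = \mathbf{0}$ of (2) be enforced softly during gradient-based training.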
Besides, we find that in Stage 2 it is empirically sufficient to train the encoder with a very small value of the loss coefficient $\lambda$ from (3), and we deploy $\lambda = 0$ in Stage 2 to reduce compute cost. In both Stage 1 and Stage 2, we use the AdamW (Loshchilov & Hutter, 2017) optimizer with a cosine annealing scheduler (Loshchilov & Hutter, 2016). More details and hyperparameters can be found in Appendix C.

Empirical Conclusions The results of our empirical experiments are presented in Table 1. From the results, we conclude that: a) compared with full finetuning, LP-FT, and reweighting, our CFA improves the performance of pretrained models on OOD data for compositional generalization; b) WiSE-FT can further improve the OOD performance of all methods in most cases (when WiSE-FT fails, it fails on both ID and OOD); c) while CFA enjoys superior OOD performance, its in-distribution (ID) performance is maintained at around the same level as full finetuning, which is a desired property. We also notice that, although CFA increases the performance of models on OOD data in CG tasks, there is still a gap between its ID and OOD performance, indicating that CG is quite a challenging task and calls for further algorithmic advances.

![8_image_0.png](8_image_0.png)

Figure 5: Visualization of the features for CLIP ViT-B/16 before and after finetuning with CFA. **Left**: Features extracted using the pretrained CLIP ViT-B/16 image encoder. **Right**: Features extracted using the CFA-finetuned CLIP ViT-B/16 image encoder.

## 3.3 Feature Visualization On Real-World Data

To empirically show that our two-stage algorithm promotes the encoder's ability to learn compositional features, we conduct a feature visualization study for the CLIP ViT-B/16 image encoder on the DomainNet (Peng et al., 2019) dataset. Specifically, we take the encoder both *before* and *after* finetuning with CFA, visualizing features for 2 domains and 3 classes.
This results in 6 unique domain-class combinations (5 of which are present in the training set). The visualization in Figure 5b clearly shows that the features finetuned with CFA conform to a compositional feature structure. In contrast, the features of the original CLIP model (Figure 5a) do not exhibit this structure. This visualization not only demonstrates the feature alignment ability of CFA but also provides further evidence of its effectiveness for large neural networks trained on real-world data.

## 3.4 Partial Availability Of Domain Labels

In real-world applications, training samples may not come with domain labels, which may affect the applicability of CFA. We divide this problem into two scenarios: i) domain labels are partially available; ii) domain labels are completely unavailable. Corresponding experiments on the Office-Home dataset demonstrate that CFA remains effective even in the absence of domain labels. Under the first scenario, we conduct experiments using only 10%, 20%, and 50% of the domain labels in Stage 1 to learn the linear heads for finetuning in Stage 2. We repeat each experiment 3 times using different subsets of the data and report the average result to mitigate the randomness caused by subsampling. For the second scenario, we propose leveraging the zero-shot ability of CLIP to predict the domain labels. Specifically, we first manually design a set of labels that describe the domains of the data (in this experiment, we simply use the original domain names). Then, we perform zero-shot classification of the training data over these domain labels using the CLIP model. Finally, we take the predicted domain labels as ground-truth domain labels and use them in Stage 1.
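The zero-shot domain-labeling step amounts to a cosine-similarity argmax over text embeddings of the domain names; the sketch below uses toy stand-ins for CLIP features (all names and values are hypothetical):

```python
import numpy as np

def zero_shot_domain_labels(image_feats, domain_text_feats):
    """Assign each image the domain whose text embedding is most similar,
    using cosine similarity on unit-normalized features."""
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = domain_text_feats / np.linalg.norm(domain_text_feats, axis=1, keepdims=True)
    return np.argmax(img @ txt.T, axis=1)

# Toy stand-ins for CLIP embeddings of 2 images and 3 domain-name prompts.
images = np.array([[1.0, 0.1, 0.0],
                   [0.0, 0.9, 0.2]])
domains = np.eye(3)
assert zero_shot_domain_labels(images, domains).tolist() == [0, 1]
```

With actual CLIP encoders, `image_feats` and `domain_text_feats` would be the image-encoder and text-encoder outputs for the training images and the domain-name prompts, respectively.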
We adopt the hyperparameters shown in the manuscript and report the mean accuracy of CFA and WiSE-FT over 3 seeds on the Office-Home dataset.

| Method | Domain label ratio | ID | ID (WiSE) | OOD | OOD (WiSE) |
|----------|----------------------|------|-------------|-------|--------------|
| CFA | 100% (Original) | 94.0 | 93.1 | 54.3 | 56.9 |
| CFA | 50% | 94.0 | 93.3 | 53.8 | 56.9 |
| CFA | 20% | 93.9 | 93.1 | 53.8 | 56.7 |
| CFA | 10% | 94.0 | 93.2 | 53.4 | 55.9 |
| CFA | 0% (CLIP Predict) | 94.1 | 93.0 | 52.0 | 53.6 |
| Finetune | 0% | 94.3 | 93.7 | 51.0 | 52.5 |
| LP-FT | 0% | 93.5 | 93.0 | 43.9 | 42.8 |

Table 2: CFA with partial availability of domain labels. Experiments are conducted on Office-Home.

From the results in Table 2, we see that CFA works well even when domain labels are only partially available. The mild reliance of CFA's performance on domain label availability is beneficial for the practical application of the method, but may raise the question of why the reliance is so mild. We believe there are two main reasons: i) the number of domains is small (4-6 domains for each of the 4 datasets in CG-Bench), so the available data per domain is relatively abundant, and ii) domain labels are easier to predict than class labels, since image style or background information is highly indicative of the domain label, and such visual features are easy for neural networks to capture. We clarify this with a simple ablation study: in the Office-Home dataset with 4 domains, we fit linear classifiers to CLIP features for different amounts of domain labels (1%, 2%,..., 100% of the training data) and report the test prediction accuracy for domains. Each experiment is repeated three times with different sampling seeds; mean accuracies are reported in Table 3.

| Domain Label Ratio | 0% (Zero-Shot) | 10% | 20% | 50% | 100% |
|---|---|---|---|---|---|
| Avg. #Data per Domain | 0 | 308 | 616 | 1541 | 3082 |
| Accuracy | 61.5 | 82.0 | 84.6 | 85.5 | 86.3 |

Table 3: Domain label prediction with partial training label availability. Experiments are conducted on Office-Home.
From the results, we observe that as the domain labeling ratio increases from 10% to 100%, the domain prediction accuracy improves only modestly, from 82.0% to 86.3%. This suggests that the domain label is indeed quite easy to predict, which could explain why CFA works well with partial availability of domain labels. Furthermore, the zero-shot prediction accuracy of the CLIP ViT-B/16 model for domain labels is 61.5%, significantly higher than a random guess (25%). This outcome supports the notion that CFA with CLIP-predicted domain labels can also improve over vanilla finetuning.

Class Label Availability As for class labels, we do require their full availability in the training set, since our work focuses on *supervised finetuning* of pretrained encoders. Although CLIP is not explicitly supervised by domain and class labels, the texts in the image-text pairs used for CLIP pretraining provide abundant class and environment information; moreover, CLIP's unsatisfactory zero-shot performance on our CG-Bench indicates the need for supervised finetuning to improve its effectiveness. DINOv2, on the other hand, is a self-supervised pretraining method and therefore must be finetuned with supervision before it can be applied to downstream classification tasks. In short, the requirement for class labels cannot be waived for the compositional generalization task considered in our paper.

Ablation Studies We conduct ablation studies on i) normalized versus unconstrained heads, ii) frozen versus trainable heads, and iii) the loss coefficient $\lambda$ in Stage 2, all on the CLIP model. Due to the page limit, we defer the results and details to Appendix C, where we also discuss the training stability, overhead, and performance gain of CFA.
## 4 Related Works

OOD Generalization In OOD generalization, the model is trained on labeled data from a limited number of known domains, and the goal is to improve the model's ability to generalize to previously unseen test domains (Blanchard et al., 2011). A common approach is domain-invariant learning, which aims to learn a representation of the data whose distribution is invariant across training domains. Previous works taking this approach match domains in feature space either by aligning moments (Sun & Saenko, 2016) or by using an adversarial loss (Ganin et al., 2016); however, these methods were later shown to be generally inadequate (Zhao, 2019). Another popular approach is to learn optimal invariant predictors. Taking this approach, invariant risk minimization (IRM) (Arjovsky et al., 2019) optimizes a highly non-convex bi-level objective and simplifies the optimization with a penalty-regularized objective; however, Rosenfeld et al. (2021); Kamath et al. (2021); Ahuja et al. (2021) theoretically show that such algorithms fail even in simple data models. Similarly, Wang et al. (2022) proposed Invariant-feature Subspace Recovery (ISR), which recovers the subspace spanned by the invariant features and then fits predictors in this subspace. Distributionally robust optimization (Sagawa et al., 2020) is another option for tackling OOD generalization; it optimizes models over a worst-case distribution that is perturbed around the original distribution. Beyond methods designed for training a model from scratch, recent works (Kumar et al., 2022; Wortsman et al., 2022; Goyal et al., 2022) also discuss increasing the OOD accuracy of pretrained models. While these methods provide impressive empirical improvements on pretrained models, theoretical explanations are yet to be provided.
Compositional Generalization In the computer vision literature, previous research has investigated attribute-object compositions (also referred to as compositional zero-shot learning) (Misra et al., 2017; Nagarajan & Grauman, 2018; Purushwalkam et al., 2019; Naeem et al., 2021; Nayak et al., 2023; Hao et al., 2023), with the goal of predicting the attributes of an object in addition to its class. For instance, in a binary image classification task involving cat versus tiger, a classifier might be required to predict whether the animal is old or young alongside the conventional class prediction. In contrast, compositional generalization (CG) has a different focus: it concentrates purely on predicting the object class (e.g., cat versus tiger). Specifically, CG tasks aim to accurately identify young cat images as cat even when the training data consists only of young tiger and old cat instances and lacks young cat images. This scenario introduces an out-of-distribution (OOD) shift, and the challenge lies in developing models robust to such shifts. While compositional zero-shot learning (CZSL) can decouple attributes from objects, it is not universally adaptable to OOD generalization tasks: CZSL relies on powerful vision-language models (VLMs) such as CLIP, whereas CG does not limit the type of image classifier. In certain real-world domains, such as remote sensing or medical imaging, there is a lack of paired image-text data for training strong VLMs, so adopting *self-supervised* models such as MAE (He et al., 2022) and DINO (Caron et al., 2021) presents a more practical strategy (Cong et al., 2022; Wanyan et al., 2023). As shown in Table 1, our CFA can work with both VLMs and self-supervised models, whereas CZSL cannot be directly applied to self-supervised models. Besides, Sivaprasad et al. (2022) explores CG under a slightly simplified premise, where only one random domain is masked for each class.
In addition, their method is built upon ResNet (He et al., 2016b) and does not scale well with modern transformer architectures. ## 5 Conclusion This paper delves into the challenge of Compositional Generalization (CG) in machine learning, focusing on generalization to unseen domain-class combinations. By developing CG-Bench, a suite of benchmarks from real-world image datasets, we highlighted the shortcomings of prevalent pretraining-finetuning frameworks in tackling this challenge. Our proposed solution, the Compositional Feature Alignment (CFA), offers a promising approach to improve the CG performance of pretrained models, as evidenced in our experiments. Despite these advances, our study is not without limitations. Our experiments are currently limited to the base-sized ViT models, and our empirical studies draw from a restricted number of datasets of limited size. As we strive to overcome these limitations in future work, we look to include larger models and diversify our benchmark suite, exploring alternative data sources beyond images. We invite the broader machine learning community to join us in the ongoing exploration of the important challenge of CG. ## References Kartik Ahuja, Jun Wang, Amit Dhurandhar, Karthikeyan Shanmugam, and Kush R. Varshney. Empirical or invariant risk minimization? a sample complexity perspective. In *International Conference on Learning* Representations, 2021. URL https://openreview.net/forum?id=jrA5GAccy_. Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv* preprint arXiv:1907.02893, 2019. Sara Beery, Elijah Cole, and Arvi Gjoka. The iwildcam 2020 competition dataset. arXiv preprint arXiv:2004.10340, 2020. Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. *Advances in neural information processing systems*, 24:2178–2186, 2011. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. A systematic study of the class imbalance problem in convolutional neural networks. *Neural networks*, 106:249–259, 2018. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF* international conference on computer vision, pp. 9650–9660, 2021. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020. Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David Lobell, and Stefano Ermon. Satmae: Pre-training transformers for temporal and multi-spectral satellite imagery. *Advances in Neural Information Processing Systems*, 35:197–211, 2022. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In *CVPR*, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North* American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. 
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. *ICLR*, 2021. Cong Fang, Hangfeng He, Qi Long, and Weijie J Su. Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. *Proceedings of the National Academy of Sciences*, 118 (43):e2103091118, 2021. Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In *International conference on machine learning*, pp. 1180–1189. PMLR, 2015. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096–2030, 2016. Sachin Goyal, Ananya Kumar, Sankalp Garg, Zico Kolter, and Aditi Raghunathan. Finetune like you pretrain: Improved finetuning of zero-shot vision models. *arXiv preprint arXiv:2212.00638*, 2022. Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In *International Conference* on Learning Representations, 2021. Shaozhe Hao, Kai Han, and Kwan-Yee K Wong. Learning attention as disentangler for compositional zeroshot learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 15315–15324, 2023. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016b. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 
Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF conference on computer vision and pattern* recognition, pp. 9729–9738, 2020. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16000–16009, 2022. John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. *Nature*, 596(7873):583–589, 2021. Pritish Kamath, Akilesh Tangella, Danica Sutherland, and Nathan Srebro. Does invariant risk minimization capture invariance? In *International Conference on Artificial Intelligence and Statistics*, pp. 4069–4077. PMLR, 2021. Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al. Wilds: A benchmark of inthe-wild distribution shifts. In *International Conference on Machine Learning*, pp. 5637–5664. PMLR, 2021. Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations, 2022. Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. *arXiv preprint arXiv:1711.05101*, 2017. Ishan Misra, Abhinav Gupta, and Martial Hebert. From red wine to red tomato: Composition with context. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 1792–1801, 2017. Dustin G. Mixon, Hans Parshall, and Jianzong Pi. Neural collapse with unconstrained features. 
*Sampling* Theory, Signal Processing, and Data Analysis, 20, 2020. Muhammad Ferjad Naeem, Yongqin Xian, Federico Tombari, and Zeynep Akata. Learning graph embeddings for compositional zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 953–962, 2021. Tushar Nagarajan and Kristen Grauman. Attributes as operators: factorizing unseen attribute-object compositions. In *Proceedings of the European Conference on Computer Vision (ECCV)*, pp. 169–185, 2018. Nihal V. Nayak, Peilin Yu, and Stephen Bach. Learning to compose soft prompts for compositional zero-shot learning. In *The Eleventh International Conference on Learning Representations*, 2023. Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023. Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. *Proceedings of the National Academy of Sciences*, 117(40):24652–24663, 2020. Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 1406–1415, 2019. Senthil Purushwalkam, Maximilian Nickel, Abhinav Gupta, and Marc'Aurelio Ranzato. Task-driven modular networks for zero-shot compositional learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3593–3602, 2019. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 
Learning transferable visual models from natural language supervision. In *International Conference on Machine Learning*, pp. 8748–8763. PMLR, 2021. Elan Rosenfeld, Pradeep Kumar Ravikumar, and Andrej Risteski. The risks of invariant risk minimization. In *International Conference on Learning Representations*, 2021. URL https://openreview.net/forum? id=BbNIbVPJ-42. Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks. In *International Conference on Learning Representations*, 2020. Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. {BREEDS}: Benchmarks for subpopulation shift. In *International Conference on Learning Representations*, 2021. Sarath Sivaprasad, Akshay Goindani, Mario Fritz, and Vineet Gandhi. Class-wise domain generalization: A novel framework for evaluating distributional shift. In *NeurIPS 2022 Workshop on Distribution Shifts:* Connecting Methods and Applications, 2022. Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In *European* conference on computer vision, pp. 443–450. Springer, 2016. Christos Thrampoulidis, Ganesh Ramachandra Kini, Vala Vakilian, and Tina Behnia. Imbalance trouble: Revisiting neural-collapse geometry. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In *Proceedings of the IEEE Conference on Computer Vision* and Pattern Recognition, pp. 5018–5027, 2017. Haoxiang Wang, Haozhe Si, Bo Li, and Han Zhao. Provable domain generalization via invariant-feature subspace recovery. In *International Conference on Machine Learning*, pp. 23018–23033. PMLR, 2022. URL https://proceedings.mlr.press/v162/wang22x.html. Peisong Wang, Weihan Chen, Weixiang Xu, Qinghao Hu, and Jian Cheng. 
Rethinking the value of prompt learning for vision-language models, 2023. URL https://openreview.net/forum?id=1FsdIfRngtw. Xinye Wanyan, Sachith Seneviratne, Shuchang Shen, and Michael Kirley. Dino-mc: Self-supervised contrastive learning for remote sensing imagery with multi-sized local crops. *arXiv preprint arXiv:2303.06670*, 2023. Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7959–7971, 2022. Yibo Yang, Shixiang Chen, Xiangtai Li, Liang Xie, Zhouchen Lin, and Dacheng Tao. Inducing neural collapse in imbalanced learning: Do we really need a learnable classifier at the end of deep neural network? In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), *Advances in Neural Information Processing Systems*, 2022. Yuzhe Yang, Haoran Zhang, Dina Katabi, and Marzyeh Ghassemi. Change is hard: A closer look at subpopulation shift. In *International Conference on Machine Learning*, 2023. Han Zhao. On learning invariant representations for domain adaptation. *ICML*, 2019. Han Zhao, Shanghang Zhang, Guanhang Wu, José MF Moura, Joao P Costeira, and Geoffrey J Gordon. Adversarial multiple source domain adaptation. *Advances in neural information processing systems*, 31: 8559–8570, 2018. Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, and Qing Qu. A geometric analysis of neural collapse with unconstrained features. *Advances in Neural Information Processing Systems*, 34:29820–29834, 2021.

## A Benchmark Curation

We introduce the curation process of DomainNet (Peng et al., 2019) in Section 3.1. In this section, we provide more details of the proposed benchmark on the remaining datasets.
OfficeHome (Venkateswara et al., 2017) consists of 65 classes of objects depicted in four domains: {Art, Clipart, Product Image, Real World Image}. The ID and OOD splits are created by the same process as for DomainNet. The average top-1 accuracy is measured to evaluate model performance on both sets.

iWildCam (Beery et al., 2020) is a dataset designed for classifying 182 classes of wild animals in camera traps. In the original iWildCam, each camera trap is defined to be a domain, resulting in 324 domains that are labeled by camera IDs. Such a definition of domains ignores the semantic similarities among them. For example, two camera traps may both be set in forests; the original iWildCam nevertheless assigns them to different domains. Consequently, images taken in forests appear in both the ID and OOD sets even though they come from very similar environments. Observing these deficiencies in the original iWildCam, we reassign the domain labels of the data. We propose 6 domains for the dataset: {colored forest, gray-scale forest, colored trail, gray-scale trail, colored savanna, gray-scale savanna}. To assign the new domains to the images, we take advantage of the zero-shot classification ability of CLIP models. Specifically, we design 10 prompts for each domain and perform zero-shot prediction of the domain of each image using the assembled prompts. New domain labels are generated for both training and test sets, resulting in a dataset that is consistent with our CG setting: the $E \times K$ matrix is sparse, and the test set contains domain-class pairs that have few or no samples in the training set. We take the domain-class pairs with at most 20 samples in the training set as the OOD samples in the test set. In addition to measuring the average top-1 accuracy, we also measure the F1-macro score for both the ID and OOD sets.
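The ID/OOD split rule above (a test sample is OOD when its domain-class pair has at most 20 training samples) can be sketched as a small helper. Function and variable names below are hypothetical, not from our released code:

```python
from collections import Counter

def split_id_ood(train_pairs, test_samples, threshold=20):
    """Split test samples into ID/OOD by how often their
    (domain, class) pair occurs in the training set.

    train_pairs:  list of (domain, class) tuples from the training set.
    test_samples: list of (sample_id, domain, class) tuples.
    A test sample is OOD if its pair has at most `threshold`
    training samples (few or no occurrences); otherwise it is ID.
    """
    counts = Counter(train_pairs)
    id_set, ood_set = [], []
    for sample_id, domain, cls in test_samples:
        if counts[(domain, cls)] <= threshold:
            ood_set.append(sample_id)
        else:
            id_set.append(sample_id)
    return id_set, ood_set
```

For iWildCam the pairs come from the six CLIP-assigned domains and the 182 animal classes; the same helper applies to any dataset with a sparse domain-class matrix.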
FMoW (Christie et al., 2018) is a dataset that aims to classify utility types from satellite images. This dataset has two sets of domains: *years* and *regions*. The ID and OOD sets are split according to the *years*, i.e., all satellite images in the test set taken before 2016 are ID samples, and images taken after 2016 are OOD samples. In our CG-Bench, we only perform evaluations on the *region* domains. To be specific, the dataset has $K = 62$ classes and $E = 5$ domains: {Asia, Europe, America, Africa, Oceania}. The $E \times K$ matrix is naturally sparse because certain utilities were only introduced post-2016 in some underdeveloped regions, leading to the creation of OOD entries. Therefore, FMoW is readily suited for our CG setting without the need for additional curation. We measure the average top-1 accuracy for the ID set and the worst-region accuracy for the OOD set, as suggested by previous works.

## B Proof Of Theorem 1

We first restate the setting, and introduce some notations and definitions. Then we present two lemmas useful for the proof of our main theorem. Finally, we restate our main theorem (Theorem 1) and provide a proof.

Notation and Setting. We denote $\mathbf{Z} = [\phi(x_1), \ldots, \phi(x_N)] \in \mathbb{R}^{d \times N}$, $\mathbf{Y} = [y_1, \ldots, y_N]$, and $\mathbf{E} = [e_1, \ldots, e_N]$ as the stacks of features, class labels, and environment labels, respectively. In the context of the unconstrained feature model, the optimization objective of Stage 2 is transformed to:

$$\min_{\mathbf{Z}}\frac{1}{KN}\ell_{\rm CE}(\beta_{1}\cdot W_{1}\mathbf{Z},\mathbf{Y})+\lambda\frac{1}{EN}\ell_{\rm CE}(\beta_{2}\cdot W_{2}\mathbf{Z},\mathbf{E})\quad{\rm s.t.}\quad\mathbf{Z}\in{\cal U}(d)^{N}\tag{6}$$

Furthermore, we use $W^{(j)}$ to refer to the $j$-th row vector of matrix $W$, and $z_i$ represents the $i$-th column vector of $\mathbf{Z}$, which is also the learned feature of the $i$-th training sample. Below, we define simplex-encoding label (SEL) matrices, which are introduced in Thrampoulidis et al.
(2022), and also consider the SVD of the two heads, $W_1$ and $W_2$.

Definition 2 (Simplex-Encoding Label Matrices). $\mathbf{S}_1 \in \mathbb{R}^{K \times N}$ and $\mathbf{S}_2 \in \mathbb{R}^{E \times N}$ *are simplex-encoding label (SEL) matrices for classes and domains, respectively, such that*

$$\forall c\in[K],i\in[N]:\quad\mathbf{S}_{1}[c,i]=\begin{cases}1-1/K&,\quad c=y_{i}\\ -1/K&,\quad c\neq y_{i}\end{cases}\tag{7}$$

$$\forall c\in[E],i\in[N]:\quad\mathbf{S}_{2}[c,i]=\begin{cases}1-1/E&,\quad c=e_{i}\\ -1/E&,\quad c\neq e_{i}\end{cases}\tag{8}$$

*In other words, the $i$-th column of $\mathbf{S}_1$ is the $K$-dimensional vector $(e_{y_i} - \frac{1}{K}\mathbf{1})$, where $e_j$ denotes the $j$-th standard basis vector. Similarly, the $i$-th column of $\mathbf{S}_2$ is the $E$-dimensional vector $(e_{e_i} - \frac{1}{E}\mathbf{1})$.*

Definition 3 (SVD of Heads). *For $W_1$ and $W_2$, we consider their compact SVD as*

$$W_{1}=U_{1}\Lambda_{1}V_{1}^{\mathsf{T}}\quad\text{and}\quad W_{2}=U_{2}\Lambda_{2}V_{2}^{\mathsf{T}}\tag{9}$$

*Specifically, $\Lambda_1, \Lambda_2$ are positive diagonal matrices and $U_1, V_1, U_2, V_2$ have orthonormal columns.*

Now, we focus on the first term of (6), and study its optimum with theoretical tools from Thrampoulidis et al. (2022).

Lemma 1 (Optimum of Class Loss). *Assuming the feature dimension $d$ is at least $K$, training data exists for each class, and $W_1$ is normalized such that $W_1 \in \mathcal{U}(d)^K$. Additionally, we assume $\beta_1$ is sufficiently large. Then, for any constant norm magnitude $a > 0$, the global minimum of the following objective,*

$$\min_{\mathbf{Z}}\frac{1}{KN}\ell_{\mathrm{CE}}(\beta_{1}\cdot W_{1}\mathbf{Z},\mathbf{Y})\quad\text{s.t.}\ \|z_{i}\|_{2}=a,\tag{10}$$

*satisfies*

$$\mathbf{Z}^{*}=\gamma_{1}V_{1}\Lambda_{1}^{-1}U_{1}^{\mathsf{T}}\mathbf{S}_{1},\tag{11}$$

*where $\gamma_1 = \frac{a}{1-1/K}$.*

Proof. From Theorem 1 of Thrampoulidis et al. (2022), we know that the optimum of (10) shall satisfy

$$W_{1}\mathbf{Z}^{*}=\gamma\mathbf{S}_{1}\tag{12}$$

where $\gamma > 0$ is a scaling factor and $\mathbf{S}_1$ is defined in Definition 2.
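Definition 2 can be instantiated in a few lines; the sketch below (hypothetical helper name) builds $\mathbf{S}_1$ from class labels, so each column equals $e_{y_i} - \frac{1}{K}\mathbf{1}$ and sums to zero:

```python
import numpy as np

def sel_matrix(labels, num_classes):
    """Simplex-encoding label (SEL) matrix: entry [c, i] equals
    1 - 1/K if c == labels[i] and -1/K otherwise (Definition 2)."""
    K, N = num_classes, len(labels)
    S = np.full((K, N), -1.0 / K)
    S[labels, np.arange(N)] += 1.0  # add 1 at the true class of each column
    return S
```

The same helper with `num_classes=E` and the environment labels yields $\mathbf{S}_2$.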
With (12) and the norm constraint $\|z_i\|_2 = a$, it is obvious that $\mathbf{Z}^*$ should span the same feature subspace as $W_1$, since the cross-entropy loss is monotone such that a larger magnitude of $W_1\mathbf{Z}^*$ leads to a smaller loss. Therefore, we can express $\mathbf{Z}^*$ as

$$\mathbf{Z}^{*}=\gamma_{1}V_{1}\Lambda_{1}^{-1}U_{1}^{\mathsf{T}}\mathbf{S}_{1},\tag{13}$$

and the scaling factor can be easily determined by considering $\|W_1^{(i)}\|_2 = 1$ and $\|z_i\|_2 = a$. $\square$

We can study the optimum of the second term of (6) in the same fashion, and obtain similar results.

Lemma 2 (Optimum of Domain Loss). *Assuming the feature dimension $d$ is at least $E$, training data exists for each domain, and $W_2$ is normalized such that $W_2 \in \mathcal{U}(d)^E$. Additionally, we assume $\beta_2$ is sufficiently large. Then, for any constant norm magnitude $b > 0$, the global minimum of the following objective,*

$$\min_{\mathbf{Z}}\frac{1}{EN}\ell_{\mathrm{CE}}(\beta_{2}\cdot W_{2}\mathbf{Z},\mathbf{E})\quad\text{s.t.}\ \|z_{i}\|_{2}=b,\tag{14}$$

*satisfies*

$$\mathbf{Z}^{*}=\gamma_{2}V_{2}\Lambda_{2}^{-1}U_{2}^{\mathsf{T}}\mathbf{S}_{2},\tag{15}$$

*where $\gamma_2 = \frac{b}{1-1/E}$.*

Proof. The proof is the same as that of Lemma 1. $\square$

With Lemma 1 and Lemma 2, we can prove that the learned feature vector of each training sample decomposes as a linear combination of two vectors depending on the class label and domain label, respectively, while the two vectors live in orthogonal feature subspaces. This indicates that the learned features conform to a compositional feature structure satisfying Definition 1. Notice that Theorem 2 (the restatement of Theorem 1) is slightly different from Theorem 1. This is because we found a small issue in the original proof of Theorem 1. We fixed the issue before the supplementary material submission deadline, which led to a slightly different expression of $z_i^*$.
However, the same main conclusion still applies to the updated theorem (Theorem 2): the learned features comply with the compositional feature structure in Definition 1, which justifies our algorithm design. We will revise Theorem 1 in the main text correspondingly in the revision.

Theorem 2 (Feature Alignment (Theorem 1 Restated)). *Assuming the feature dimension $d$ is no smaller than $K + E$, training data exists for each class and domain (though not necessarily for each domain-class combination), and $W_1$ and $W_2$ are normalized and span orthogonal subspaces such that $W_1 \in \mathcal{U}(d)^K$, $W_2 \in \mathcal{U}(d)^E$, and $W_1 W_2^{\mathsf{T}} = 0$. Additionally, we assume $\beta_1, \beta_2$ are sufficiently large. The global minimum of (6) results in the following: for any $i \in [N]$, denoting $z_i$ as the $i$-th column vector of $\mathbf{Z}$, we have*

$$z_{i}^{*}=W_{1}^{\mathsf{T}}a_{y_{i}}+W_{2}^{\mathsf{T}}b_{e_{i}}\tag{16}$$

*where $a_{y_i} \in \mathbb{R}^K$ is a vector depending on the class label $y_i$, and $b_{e_i} \in \mathbb{R}^E$ is a vector depending on the domain label $e_i$.*

Proof. Lemma 1 and Lemma 2 derive the optima of the two loss terms of (6), respectively. For the linear combination of the two loss terms with coefficient $\lambda$, it is straightforward to see that the optimum $\mathbf{Z}^*$ should also be a linear combination of (11) and (15), since these two optima stay in orthogonal subspaces spanned by $W_1$ and $W_2$, respectively (Stage 1 ensures that $W_1$ and $W_2$ live in orthogonal subspaces). In other words, the first loss term $\frac{1}{KN}\ell_{\mathrm{CE}}(\beta_1 \cdot W_1\mathbf{Z}, \mathbf{Y})$ does not affect the converged direction of $\mathrm{Proj}_{W_2}\mathbf{Z}^*$, i.e., $\mathbf{Z}^*$ projected onto the subspace spanned by $W_2$, and vice versa. However, the first term does affect the magnitude of $\mathrm{Proj}_{W_2}\mathbf{Z}^*$, since we have the norm constraint $\mathbf{Z} \in \mathcal{U}(d)^N$ in (6). Specifically, a larger $\lambda$ in (6) leads to a larger magnitude of $\|\mathrm{Proj}_{W_2}\mathbf{Z}^*\|_2$.
Hence, we can express $\mathbf{Z}^*$ as

$$\mathbf{Z}^{*}=a\gamma_{1}V_{1}\Lambda_{1}^{-1}U_{1}^{\mathsf{T}}\mathbf{S}_{1}+b\gamma_{2}V_{2}\Lambda_{2}^{-1}U_{2}^{\mathsf{T}}\mathbf{S}_{2}\tag{17}$$

where $a, b > 0$ are scaling factors (depending on $\lambda$) that ensure $\mathbf{Z}^* \in \mathcal{U}(d)^N$. By plugging in Definition 2, we can express the $i$-th column vector of $\mathbf{Z}^*$, the learned feature of the $i$-th sample, as

$$z_{i}^{*}=a\gamma_{1}V_{1}\Lambda_{1}^{-1}U_{1}^{\mathsf{T}}\left(e_{y_{i}}-\frac{1}{K}\mathbf{1}\right)+b\gamma_{2}V_{2}\Lambda_{2}^{-1}U_{2}^{\mathsf{T}}\left(e_{e_{i}}-\frac{1}{E}\mathbf{1}\right)\tag{18}$$

Clearly, the first term of (18) depends only on the class label $y_i$ and the second term relies only on the domain label $e_i$. Besides, the two terms live in the feature subspaces spanned by $V_1$ and $V_2$, respectively, which are determined by $W_1$ and $W_2$. Hence, (18) can be re-expressed as

$$z_{i}^{*}=W_{1}^{\mathsf{T}}a_{y_{i}}+W_{2}^{\mathsf{T}}b_{e_{i}}\tag{19}$$

where $a_{y_i} = a\gamma_1 U_1\Lambda_1^{-2}U_1^{\mathsf{T}}(e_{y_i} - \frac{1}{K}\mathbf{1})$ and $b_{e_i} = b\gamma_2 U_2\Lambda_2^{-2}U_2^{\mathsf{T}}(e_{e_i} - \frac{1}{E}\mathbf{1})$. $\square$

## C Details Of Experiment

Here, we supplement Section 3 with more details about our experimental studies.

Computational Expenses. The experiments described in this paper are executed on NVIDIA RTX A6000 GPUs with 48GB memory, utilizing a total of 12 GPUs. The individual training runs for OfficeHome, DomainNet, iWildCam, and FMoW require 0.2, 0.8, 0.5, and 0.2 GPU hours, respectively. Each experiment is repeated ten times with varying random seeds, which alter the order of batch shuffling in the training loader.

Hyperparameter Choices. We present the hyperparameter settings for our CFA models in Table 4. The parameters for each stage are chosen based on model performance on the OOD validation set.
Note that $\lambda$ is the domain loss coefficient in (3), and $\lambda_{\mathrm{ortho}}$ is the coefficient for the orthogonality regularization loss $\|W_1^{\mathsf{T}} W_2\|_F^2$ that we use to ensure orthogonality of the heads in Stage 1. The hyperparameters in Stage 2 are also used for the two baseline algorithms, full finetuning (FT) and LP-FT.

Ablation Studies. We conduct ablation studies on i) the choice of a normalized head vs. an unconstrained head, ii) the choice of a frozen head vs. a trainable head, and iii) the loss coefficient $\lambda$ in (3). The results are discussed as follows:

- *Head Normalization.* In our implementation, we normalize the weights of our classification heads, which have no bias terms, following recent works (Goyal et al., 2022; Wang et al., 2023) showing that head normalization is useful for CLIP finetuning. In this ablation study, we repeat the vanilla full finetuning experiment using unconstrained classification heads (i.e., with bias and without normalization). The test accuracies (%) are presented in Table 5. From the results, we can see that the models finetuned with constrained heads have slightly better OOD performance than the unconstrained ones, justifying the use of the head normalization technique.

- *Trainable Heads vs. Frozen Heads.* In Stage 2, we finetune the models with *frozen* heads, $W_1$ and $W_2$, which are linearly probed in Stage 1. In this ablation study, we instead perform full finetuning during Stage 2, i.e., training all layers including the classification heads. The test accuracies (%) are shown in Table 6. The results indicate that using trainable heads instead of frozen heads does not change the finetuned model performance much. Therefore, this ablation study confirms that our design choice of frozen classification heads is reasonable.

- *Domain Loss Coefficient.* In our empirical implementation, we set $\lambda = 0$ in (3) and only finetune with respect to $W_1$ in Stage 2, to reduce the training cost.
The motivation for doing so is the observation that the coefficient $\lambda$ in Stage 2 does not affect the final performance. To justify this, we conduct the following ablation study: we run Stage 2 with the original two-term loss, (3), while adjusting the coefficient $\lambda$ of the domain prediction loss. The test accuracies (%) are shown in Figure 6. From the figure, one can observe that the value of $\lambda$ can be set to zero in Stage 2 without negatively affecting the final performance. Therefore, it is sufficient to consider $\lambda = 0$ in Stage 2 and only finetune the model with the first loss term in (3).

## C.1 Discussion On Training Stability And Overhead

Here we conduct an additional study to examine the training stability of our two-stage method, CFA. All of the following experiments are conducted with CLIP ViT-B/16 on the OfficeHome dataset and are averaged over 3 seeds.

CLIP:

| Dataset | Stage 1: λ | Stage 1: λ_ortho | Stage 1: Epochs | Stage 2: Epochs | Stage 2: Learning Rate |
|---|---|---|---|---|---|
| OfficeHome | 1 | 100 | 200 | 3 | 10⁻⁵ |
| DomainNet | 1 | 10000 | 200 | 3 | 10⁻⁵ |
| iWildCam | 10 | 10 | 200 | 5 | 10⁻⁵ |
| FMoW | 10 | 100 | 200 | 3 | 10⁻⁵ |

DINOv2:

| Dataset | Stage 1: λ | Stage 1: λ_ortho | Stage 1: Epochs | Stage 2: Epochs | Stage 2: Learning Rate |
|---|---|---|---|---|---|
| OfficeHome | 1 | 100 | 200 | 3 | 5 × 10⁻⁵ |
| DomainNet | 1000 | 10 | 200 | 10 | 5 × 10⁻⁵ |
| iWildCam | 1 | 100 | 200 | 5 | 10⁻⁵ |
| FMoW | 100 | 1000 | 200 | 4 | 5 × 10⁻⁵ |

Table 4: Hyperparameters for our algorithm (top: CLIP; bottom: DINOv2). Stage 1 is linear probing; Stage 2 is fine-tuning.
| | OfficeHome | | DomainNet | | iWildCam | | FMoW | |
|---|---|---|---|---|---|---|---|---|
| Methods | ID | OOD | ID | OOD | ID | OOD | ID | OOD |
| FT (normalized head) | 94.3 | 51.0 | 82.0 | 7.5 | 74.5 | 16.5 | 65.8 | 38.7 |
| FT-WiSE (normalized head) | 93.7 | 52.5 | 76.4 | 8.7 | 67.0 | 13.7 | 49.5 | 40.6 |
| FT (unconstrained head) | 94.3 | 50.8 | 82.0 | 7.5 | 74.9 | 16.2 | 65.8 | 38.5 |
| FT-WiSE (unconstrained head) | 93.7 | 52.3 | 76.5 | 8.7 | 66.8 | 13.5 | 49.5 | 40.1 |

Table 5: The ablation study on the constraints on classification heads: normalized heads without bias terms vs. unconstrained heads (with bias, without normalization).

| | OfficeHome | | DomainNet | | iWildCam | | FMoW | |
|---|---|---|---|---|---|---|---|---|
| Methods | ID | OOD | ID | OOD | ID | OOD | ID | OOD |
| Zeroshot | 89.2 | 50.3 | 61.7 | 6.6 | 13.7 | 6.9 | 20.4 | 18.8 |
| FT | 94.3 | 51.0 | 82.0 | 7.5 | 74.5 | 16.5 | 65.8 | 38.7 |
| FT (WiSE) | 93.7 | 52.5 | 76.4 | 8.7 | 67.0 | 13.7 | 49.5 | 40.6 |
| FT | 94.3 | 51.4 | 82.1 | 7.3 | 75.3 | 16.4 | 65.8 | 38.5 |
| FT (WiSE) | 93.7 | 52.9 | 76.6 | 8.4 | 67.7 | 13.6 | 49.7 | 39.7 |
| LP-FT | 93.5 | 43.9 | 81.5 | 5.3 | 74.0 | 17.0 | 65.9 | 40.2 |
| LP-FT (WiSE) | 93.0 | 42.8 | 79.4 | 5.3 | 74.4 | 18.2 | 56.6 | 36.3 |
| LP-FT | 93.5 | 43.9 | 81.6 | 5.3 | 74.0 | 17.2 | 65.5 | 39.9 |
| LP-FT (WiSE) | 93.1 | 42.9 | 79.4 | 5.3 | 74.4 | 18.3 | 58.8 | 36.4 |
| CFA (LP-FT) | 93.7 | 53.2 | 81.6 | 7.3 | 74.0 | 18.3 | 65.3 | 41.6 |
| CFA (WiSE) | 93.1 | 56.4 | 76.5 | 9.2 | 74.6 | 19.7 | 53.5 | 36.6 |
| CFA (LP-FT) | 93.7 | 52.8 | 81.7 | 7.0 | 73.9 | 18.4 | 65.5 | 41.0 |
| CFA (WiSE) | 93.0 | 56.2 | 76.6 | 8.9 | 74.5 | 19.6 | 53.7 | 36.0 |

Table 6: The ablation study on frozen vs. trainable classification heads in Stage 2. For each method listed twice, the first occurrence uses frozen heads and the second uses trainable heads.
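The WiSE rows in the tables above come from weight-space ensembling (Wortsman et al., 2022): the finetuned weights are linearly interpolated with the original pretrained weights. A minimal sketch (hypothetical function name; the mixing coefficient `alpha` is a hyperparameter):

```python
def wise_interpolate(pretrained, finetuned, alpha=0.5):
    """Return a state dict whose every weight is the interpolation
    (1 - alpha) * pretrained + alpha * finetuned."""
    assert pretrained.keys() == finetuned.keys()
    return {k: (1 - alpha) * pretrained[k] + alpha * finetuned[k]
            for k in pretrained}
```

With `alpha=0` this recovers the zero-shot model and with `alpha=1` the finetuned one; intermediate values often trade a little ID accuracy for OOD robustness.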
Figure 6: The ablation study on domain loss coefficients. The experiments are conducted on the FMoW dataset as an example. The red dashed lines are the performance of the model without supervision on the domain labels.

We first show that our method is stable with respect to the number of iterations in Stage 1. From Table 7, we can see that as the linear heads are trained longer in Stage 1, the ID (WiSE) performance slightly increases, and the OOD accuracy peaks at 4000 to 6000 iterations. Apart from the trade-off between ID and OOD performance, our method is stable, with no drastic drop in accuracy. The setting of 6000 Stage-1 iterations was used for the experiments in Table 1.

We then demonstrate how the number of training epochs in Stage 2 affects the final performance of our model. Table 8 shows that with longer training, the ID accuracy increases and the OOD accuracy decreases. Hence, in our paper, we used 3 epochs for the best OOD performance.

| LP Iterations | ID | ID (WiSE) | OOD | OOD (WiSE) |
|---|---|---|---|---|
| 2000 | 94.0 | 92.8 | 54.3 | 56.9 |
| 4000 | 94.0 | 93.1 | 54.3 | 57.3 |
| 6000 (Chosen) | 94.0 | 93.1 | 54.3 | 56.9 |
| 8000 | 94.0 | 93.6 | 53.3 | 56.5 |
| 10000 | 94.0 | 93.7 | 52.1 | 55.0 |

Table 7: Study on the training stability of CFA's Stage 1.

| Epochs | ID Acc | ID Acc (WiSE) | OOD Acc | OOD Acc (WiSE) |
|---|---|---|---|---|
| 3 (Chosen) | 94.0 | 93.1 | 54.3 | 56.9 |
| 5 | 94.1 | 93.5 | 52.3 | 56.3 |
| 10 | 94.1 | 93.7 | 52.2 | 55.7 |

Table 8: Study on the training stability of CFA's Stage 2.

Parameter Overhead. Compared with vanilla full fine-tuning, our CFA introduces an additional linear classifier for domain labels in Stage 1. In terms of model parameters, we provide the parameter counts of CLIP and DINOv2 on DomainNet in Table 9.
One can see that the additional parameters (the e-classifier) introduced by CFA are negligible compared with the total number of parameters.

| Parameters | Vision Encoder | Language Encoder | y-classifier | e-classifier (CFA only) |
|---|---|---|---|---|
| CLIP ViT-B | 86 × 10⁶ | 91 × 10⁶ | 177 × 10³ | 3.1 × 10³ |
| DINOv2 ViT-B | 86 × 10⁶ | – | 265 × 10³ | 4.6 × 10³ |

Table 9: Parameter counts for CLIP and DINOv2 on DomainNet. The y-classifier is the linear classifier for class labels, and the e-classifier is the linear classifier for domain labels. The vision encoder, language encoder, and y-classifier are modules shared between vanilla fine-tuning and CFA; the last column lists the additional parameters introduced by CFA.

Compute Overhead. The training cost of Stage 1 (linear probing) is much smaller than that of Stage 2 (backbone finetuning). Stage 1 requires only a single forward pass over the training samples for feature gathering (features are stored in CPU memory), after which the linear probing can be performed quickly on CPU. The training cost of Stage 2 of CFA is slightly less than that of standard full finetuning, as the last layer remains frozen during Stage 2 of CFA. Overall, we conclude that the additional training cost of CFA is mild (mostly coming from the forward pass over the training samples) and less than that of *one fine-tuning epoch*. In other words, compared with vanilla fine-tuning for N epochs, CFA's training compute cost is less than that of fine-tuning for N + 1 epochs. The modest compute overhead of CFA makes it a practical method for addressing compositional generalization problems in real-world applications.

Memory and Speed Overhead. For CFA, the memory requirement of Stage 1 is less than that of Stage 2, since Stage 1 only involves forward passes of the image encoder (no backward pass). For Stage 2 of CFA, the memory usage and training speed are the *same as those of vanilla finetuning*, since Stage 2 of CFA does not introduce additional parameters or losses for training.
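The classifier counts in Table 9 follow directly from the backbone feature dimensions (512 for CLIP ViT-B's joint embedding space, 768 for DINOv2 ViT-B) and DomainNet's 345 classes and 6 domains. A small check, assuming bias-free heads, reproduces the table's rounded values:

```python
def head_params(feat_dim, num_outputs, bias=False):
    """Parameter count of a linear head mapping feat_dim -> num_outputs."""
    return feat_dim * num_outputs + (num_outputs if bias else 0)

K, E = 345, 6  # DomainNet classes and domains
for name, d in [("CLIP ViT-B", 512), ("DINOv2 ViT-B", 768)]:
    print(name, head_params(d, K), head_params(d, E))
```

For example, 345 × 512 = 176,640 ≈ 177 × 10³ and 6 × 512 = 3,072 ≈ 3.1 × 10³, matching the CLIP row of Table 9.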
## C.2 Discussion Of The Performance Gain

From Table 1, one may think the performance gain of CFA over reweighting, a strong baseline method, is not significant. However, our main contribution lies in designing a principled algorithm for compositional generalization, a novel challenge in OOD generalization. As our theoretical analysis and feature visualization in Fig. 5 demonstrate (on Color MNIST and DomainNet, respectively), our method aligns features with a compositional structure suitable for compositional generalization. In OOD generalization research, it is typical for specialized algorithms to only modestly outperform baselines like ERM, as evidenced by extensive benchmarking in studies like DomainBed (Gulrajani & Lopez-Paz, 2021) and WILDS (Koh et al., 2021). Thus, we consider CFA's performance gain meaningful and convincing. Additionally, the mild implementation complexity and training costs (discussed in Sec. C.1) make CFA a viable method for enhancing compositional generalization.

Moreover, we want to emphasize that the real-world datasets used in our paper have many labeling issues (for both class and domain labels), which may prevent CFA from obtaining a greater performance gain. Notably, CFA employs both class and domain labels for training, leading to an increased susceptibility to label noise, particularly from domain labels. Below, we illustrate two types of labeling noise encountered:

- *Mislabeling*: We observe that the DomainNet dataset contains many mislabeled images. Many images in the "Clipart" domain are actually real photos of a single object on a white background, which should be categorized into the "Product" domain.
On the other hand, numerous class labels are also mislabeled: images in the "bush (plant)" class contain pictures of former US President George Bush; the "cooler" class includes electric fan images, which are incorrectly categorized; and the "square (shape)" class contains tables, which should be placed in the "table" class instead. Note: we only manually examined a very tiny subset of the DomainNet dataset (Peng et al., 2019), which comprises 0.6 million images, and have already found many mislabeled images. Therefore, we believe the mislabeling ratio is not negligible.

- *Ambiguous Labels*: We observe that certain domains contain a large number of images that are visually similar to those in another domain. For example, in both OfficeHome and DomainNet, the images in the "Product" domain are real photos of objects, making them almost indistinguishable from their counterparts in the "Real" domain. The distinguishing feature of the "Product" domain is that its images all have a white background; however, some images in the "Real" domain also share this characteristic. Additionally, in DomainNet, the "Infograph" domain contains many images stylistically similar to those in "Clipart" or "Real", and some images in the "Painting" domain are sketches, despite the presence of a separate "Sketch" domain. This ambiguity issue extends to class labels as well: in DomainNet, the classes "cup," "coffee cup," and "mug" lack clear stylistic distinctions.

In addition to these issues, other aspects of these datasets may affect learning. For instance, the iWildCam dataset includes a class labeled "empty", signifying the absence of animals, which comprises a significant portion of the dataset.

## C.3 Effect Of The Orthogonality Regularization

To better understand the impact of the orthogonality regularization on the optimization process, we conducted an ablation study on Stage 1 of our CFA method using the DomainNet dataset.
In this study, we optimized the objective in Equation 1 with different orthogonality regularization coefficients ($\lambda_{\mathrm{ortho}}$) for 20 epochs. The results of this ablation study are visualized in Fig. 7. As shown in Fig. 7, the orthogonality loss (measured by $\|W_1^{\mathsf{T}} W_2\|_F$) decreases as the linear probing epochs progress, indicating that the orthogonality regularization effectively enforces the orthogonality constraint between the two classifier heads $W_1$ and $W_2$. The rate of decrease in the orthogonality loss is proportional to the value of the regularization coefficient $\lambda_{\mathrm{ortho}}$, with larger values resulting in faster convergence towards orthogonal classifier heads.

This ablation study demonstrates that the orthogonality regularization term in our CFA method is crucial for obtaining orthogonal classifier heads $W_1$ and $W_2$, which are subsequently used in Stage 2 of our approach. By effectively enforcing the orthogonality constraint, our method ensures that the learned features are well-suited for compositional generalization tasks.

Figure 7: Effectiveness of the orthogonality regularization during Stage-1 linear probing on DomainNet. The orthogonality loss, $\|W_1^{\mathsf{T}} W_2\|_F$, is plotted against the number of epochs for different values of $\lambda_{\mathrm{ortho}}$. Larger $\lambda_{\mathrm{ortho}}$ leads to faster convergence towards orthogonal $W_1$ and $W_2$.
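The convergence behavior shown in Fig. 7 can be reproduced in isolation: gradient descent on the penalty $\|W_1^{\mathsf{T}} W_2\|_F^2$ alone already drives the heads towards orthogonality. A small, hypothetical numpy sketch (penalty term only, heads stored column-wise), using the closed-form gradients $\partial L / \partial W_1 = 2 W_2 W_2^{\mathsf{T}} W_1$ and $\partial L / \partial W_2 = 2 W_1 W_1^{\mathsf{T}} W_2$:

```python
import numpy as np

def ortho_penalty(W1, W2):
    """Squared Frobenius norm of W1^T W2 (zero iff heads are orthogonal)."""
    return np.sum((W1.T @ W2) ** 2)

def minimize_ortho(W1, W2, lr=0.02, steps=300):
    """Plain gradient descent on ||W1^T W2||_F^2; returns the loss trace."""
    losses = []
    for _ in range(steps):
        losses.append(ortho_penalty(W1, W2))
        g1 = 2 * W2 @ W2.T @ W1  # gradient w.r.t. W1
        g2 = 2 * W1 @ W1.T @ W2  # gradient w.r.t. W2
        W1, W2 = W1 - lr * g1, W2 - lr * g2
    return W1, W2, losses
```

Larger effective step sizes shrink the overlap between the two subspaces faster, mirroring the effect of a larger $\lambda_{\mathrm{ortho}}$ in Fig. 7.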
Review 1:

Summary: This paper proposes a method to achieve compositional generalization across domains and item classes via a two-stage learning process. First, orthogonal domain and class discriminators are learned given a fixed set of features; then, in the second stage, a new set of features is learned given fixed class and domain discriminators. The paper shows some improvement of this method over reasonable baselines.

Strengths and Weaknesses:

## Strengths
1. The paper is well written and the motivations are clearly stated. Good problem setup with helpful visualizations that clearly convey the message of the paper.
2. The method is relatively simple but effective, as the results in Table 1 demonstrate.
3. The experiments using CLIP to generate domain labels show compelling performance and an ability to leverage the approach even in settings without domain labels.

## Weaknesses
1. The benchmark is a bit overstated as a contribution, since 2/3 of the sub-tasks are basically wholesale liftings of existing datasets with no real modification.
2. The theoretical guarantees given are with respect to the training data but say nothing about whether the aligned featurizer also exhibits the compositional property for data not in the training set.

## Questions
1. How do you decide which numbers to bold in Table 1? On FMoW, the performance of Reweight-E with CLIP (41.8) is better than your method's (41.6), but your method's result is bolded instead.
2. Any hypotheses about why WiSE tends to hurt performance on the FMoW dataset?

Requested Changes:
1. The CLIP-CFA row in Table 1 has some incorrectly formatted numbers in the first two columns.
2. Since the paper is currently under the 12-page limit, the ablations could have been included in the main body of the paper instead of the appendix.
3.
**[Requested Experiment]** It would be good to have an ablation with a single-stage version of the whole fine-tuning process to confirm that the two-stage process is indeed needed, given that it is more complex and has extra overhead. Specifically, having a single stage of training where the body parameters and heads are all jointly trained, with and without the orthogonality constraints on the heads. Broader Impact Concerns: N/A ================================================== Review 2: Summary: This paper introduces a novel approach to tackle a new challenge in machine learning known as compositional generalization (CG), which aims at enhancing generalization to unseen domain-class combinations. The study proposes a method called Compositional Feature Alignment (CFA) to boost the CG performance of pretrained models via two-stage training. It also contributes to the field by developing CG-Bench, a benchmark suite derived from real-world image datasets, to assess the effectiveness of the proposed approach. Strengths and Weaknesses: Pros: 1. The paper addresses a significant and relatively unexplored aspect of machine learning: CG, highlighting its practical implications. 2. The introduction of CFA is innovative and shows promise, as evidenced by empirical evaluations. 3. The creation of CG-Bench marks a valuable contribution, providing a resourceful benchmark for future research in this area. Cons: 1. While the theory presented in the paper aims to learn features with robustness and interpretability, there are doubts about the generalizability of the results. Figure 5, although impressive, may represent a selectively favorable scenario. 2. The methods to maintain orthogonality between W_1 and W_2 during training, and the integration of the constraints in Equation (2) into the overall optimization objective, are not clearly explained. More detail on the optimization process and the impact of the orthogonal constraint on loss optimization would be beneficial.
Visualizations of loss curves with orthogonality-constraint ablations would enhance clarity. 3. There are typographical errors in Table 1, such as "943.07" or "54.33.2". 4. The overall improvements presented in the paper, while positive, are not markedly significant. Requested Changes: The paper introduces a promising new approach to CG in machine learning, demonstrating potential in empirical tests. However, there are areas where improvements are needed, particularly in terms of clarity and detail in the methodology. Broader Impact Concerns: N/A ================================================== Review 3: Summary: The paper proposes a two-stage fine-tuning approach for achieving compositional generalization between class labels & domains. The approach involves: (1) multi-head training on classes & domains, (2) fine-tuning the features through the heads. It also makes use of weight averaging (WiSE-FT) to improve the performance further. Results are presented on a combined benchmark consisting of the Office-Home, DomainNet, iWildCam, and WILDS datasets. Strengths and Weaknesses: Strengths: 1. The presented method seems relatively simple and therefore reproducible. Weaknesses: 1. The main weakness seems to be that the method is specifically tailored to the dataset and may have limited impact on real-world problems. Specifically, there seem to be two key assumptions: (i) domain labels are available, and (ii) the domain labels and class labels are mutually independent. The second assumption is implicit in the algorithmic constraint $W_1 W_2^T = 0$. Yes, section 3.4 has a discussion on having limited domain data labels; nevertheless, the fact that the data generative process cleanly consists of just a few domains is already a limiting factor. For many real-world applications (ii) is not a valid assumption, e.g.,
imagine different species being recorded with different instruments; in this case the domains are not independent of the classes, so it would seem a key design decision is violated. 2. As the authors mention in the paper, the evaluations were conducted with "base-size" (i.e., small) ViT models. Therefore, the relevance of this algorithm at large scales is dubious. Requested Changes: - The legend in Figure 2 is broken (some shapes should be triangles instead of squares), and the axes are not labeled -- do the axes represent various features? - Definition 1 seems arbitrary and overly constrained. It seems tailored to the datasets under study; this definition does not seem suitable for realistic data. Broader Impact Concerns: None ================================================== Metareview: Recommendation: Accept as is Comment: The reviewers found the paper well-written with motivations clearly stated (Reviewer L4Jk), the proposed method is simple (Reviewer L4Jk, Reviewer kFXZ) and effective (Reviewer L4Jk) and "shows promise, as evidenced by empirical evaluations" (Reviewer kZdn). The paper addresses a "significant and relatively unexplored aspect of machine learning" (Reviewer kZdn). While Reviewer kFXZ is concerned about limiting assumptions made that aren't met in (all) real-world applications, the authors have made claims within the scope of that setting and have clearly outlined the assumptions made -- e.g. "Empirically, we find that if the learned features (i.e., the outputs of the last hidden layer) conform to a compositional structure where the subspace of domain related features is orthogonal to that of class-related features, the corresponding model can generalize across unknown domain-class pairs." During the rebuttal, the authors have made clarifications, expanded on the discussion of limitations, and conducted an additional ablation experiment to verify the necessity of the two-stage design of their method.
I encourage the authors to add the additional discussion of limitations to the revised manuscript as well. ==================================================
# Federated Learning Under Evolving Distribution Shifts

Anonymous authors

Paper under double-blind review

## Abstract

Federated learning (FL) is a distributed learning paradigm that facilitates training a global machine learning model without collecting the raw data from distributed clients. Recent advances in FL have addressed several considerations that arise in realistic settings, such as data distribution heterogeneity among clients. However, most of the existing works still consider clients' data distributions to be static or conforming to a simple dynamic, e.g., in the participation rates of clients. In real FL applications, client data distributions change over time, and the dynamics, i.e., the evolving pattern, can be highly non-trivial. Further, the evolution may continue from training to testing. In this paper, we address dynamics in client data distributions and aim to train FL systems from time-evolving clients that can generalize to future target data. Specifically, we propose two algorithms, *FedEvolve* and *FedEvp*, which are able to capture the evolving patterns of the clients during training and are test-robust under evolving distribution shifts. Through extensive experiments on both synthetic and real data, we show that the proposed algorithms can significantly outperform the FL baselines across various network architectures.

## 1 Introduction

Federated learning (FL) is a widely used distributed learning framework where multiple clients, using their local data, train machine learning models collaboratively, orchestrated by a server (McMahan et al., 2017; Yang et al., 2019; Zhang et al., 2021a). A problem that has been extensively studied in the FL literature is learning from heterogeneous clients, i.e., ensuring convergence of FL training and avoiding degradation of accuracy when clients' data are not independently and identically distributed (non-i.i.d.) (Diao et al., 2021; Achituve et al., 2021; Reisizadeh et al., 2020).
Although a variety of approaches such as robust FL (Reisizadeh et al., 2020) and personalized FL (Wang et al., 2019) have been proposed to tackle the issue of data heterogeneity, most of them still assume that the data distribution of each client is *static* and, in particular, remains fixed between training and testing. Some recent works (Jiang & Lin, 2023; Gupta et al., 2022) move one step further by proposing test-robust FL models when there exist distribution shifts between training and testing data. However, they only consider a one-step shift between training and testing, while the training data distribution is still assumed to be static. In practice, FL systems are trained and deployed in dynamic environments that may continually change over time, e.g., satellite data evolve due to environmental changes, clinical data evolve due to changes in disease prevalence, etc. Existing FL algorithms that do not consider such evolving distribution shifts may result in inaccurate models and may even fail to converge during the training phase. In this paper, we explore two questions:

- How can data streams with evolving distribution shifts impact FL systems (with or without client heterogeneity)?
- How can we exploit the evolving patterns from training data (source domains) and deploy our model on the unseen future distribution (target domain)?

The goal is to continuously train an FL model from distributed, time-evolving data that can generalize well on future target data. Figure 1 shows one motivating example. Note that although the problem of learning under evolving distribution shifts has been studied recently in the centralized setting (typically known as evolving domain generalization), e.g., see Wang et al. (2022); Qin et al. (2022), it remains unclear how evolving distribution shifts can impact FL training and how to design FL algorithms when both evolving distribution shifts and data heterogeneity exist.
The most relevant line of research to ours is continual federated learning (CFL) (Yoon et al., 2021; Casado et al., 2022), which aims to train an FL system continuously from a set of distributed time series. However, the primary objective of these works is to stabilize the training process and tackle the issue of catastrophic forgetting (i.e., prevent forgetting previously learned knowledge as the model is updated on new data). This differs from our work, where we aim to explicitly learn evolving patterns and leverage them to adapt the model to future unseen data.

![1_image_0.png](1_image_0.png)

Figure 1: Illustration of evolving distribution shifts and client heterogeneity: Consider an FL system trained from distributed time-evolving photos (Ginosar et al., 2015) for gender classification. In this example, the data exhibits obvious evolving patterns (e.g., changes in facial expression and hairstyle, improvement in the quality of images). Besides, clients are non-i.i.d. and have different class distributions. Our goal is to train an FL model that captures the evolving pattern of the source domains and generalizes it to the future target domain.

To answer the above two questions, we examine the performance of existing FL methods on time-evolving data, including a wide range of methods such as traditional FL methods, personalized FL methods, test-time adaptation methods, domain generalization methods, and continual FL methods. We observe that existing methods cannot capture evolving patterns and fail to generalize to future data. We then propose *FedEvolve*, an FL algorithm that learns the evolving patterns of clients during the training process and can generalize to future test data. Specifically, *FedEvolve* learns the evolving pattern of the source domains through *representation learning*. It assumes there exists a mapping function for each client that captures the transition between any two consecutive domains.
To learn such a transition, each client in *FedEvolve* learns two distinct representation mappings that map the inputs of domains in two consecutive time steps to a representation/latent space. By minimizing the distance between the distributions of these feature representations, *FedEvolve* captures the transition over two consecutive steps. Although *FedEvolve* shows superior performance in learning from evolving distribution shifts in empirical experiments, the need for two distinct representation mappings roughly doubles the overhead during FL training. To reduce the computation cost and communication overhead, we further develop *FedEvp* as a more efficient and versatile version of *FedEvolve* that updates a single representation mapping when evolving distribution shifts occur. Moreover, *FedEvp* better tackles heterogeneous data by incorporating a personalization strategy to partially personalize the model on each client's local data. We illustrate via extensive experiments that our algorithms significantly outperform current FL benchmarks when the feature domain is evolving, on multiple datasets (Rotated MNIST/EMNIST, Circle, Portraits, Caltran) and using different models (MLP, CNN, ResNet). Our main contributions are:

- We identify the evolving distribution shift in FL that the current robust FL, personalized FL, and test-robust FL frameworks have failed to consider.
- We propose *FedEvolve* to actively capture the evolving pattern from evolving source domains and generalize to unseen target domains.
- We propose *FedEvp*, a more efficient and versatile version of the algorithm that learns a domain-invariant representation from evolving prototypes.
- We empirically study how FL systems are affected when both evolving shifts and local heterogeneity exist. Experiments on multiple datasets show the superior performance of our methods compared to previous benchmark models.

## 2 Related Work

We briefly review related previous works in this section.
Tackling client heterogeneity in FL. Many approaches have been proposed to tackle data heterogeneity issues in FL, and they can be roughly categorized into four classes. The first method is to add a regularization term. For example, Li et al. (2020; 2021b) proposed to steer the local models towards a global model by adding a regularization term to guarantee convergence when the data distributions among different clients are non-i.i.d. The second method is clustering (Briggs et al., 2020; Ghosh et al., 2020; Sattler et al., 2020). By aggregating clients with similar distributions into the same cluster, the clients within the same cluster have lower statistical heterogeneity. Then, a cluster model that performs well for the clients within this cluster can be found to reduce the performance degradation caused by statistical heterogeneity. The third method is to mix models or data. For example, Zhao et al. (2018) proposed a data-sharing mechanism where clients update models according to both the local training data and a small amount of globally shared data. Wu et al. (2022); Shin et al. (2020) developed mixup data augmentation techniques that let local devices decode the samples collected from other clients. Mansour et al. (2020) find a mixture of the local and global models according to a certain weight. The fourth method is robust FL. For instance, Reisizadeh et al. (2020); Deng et al. (2020b) obtain robust federated learning models by optimizing for worst-case performance. Notably, Reisizadeh et al. (2020) only considers affine transformations of data distributions and Deng et al. (2020b) focuses on varying weight combinations over local clients.
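As an illustration of the first class of methods, a FedProx-style regularization term (Li et al., 2020) penalizes the distance between the local and global parameters. The sketch below is a hypothetical minimal example; the names `fedprox_local_loss` and `mu` are ours, not from any cited implementation:

```python
import numpy as np

def fedprox_local_loss(local_params, global_params, data_loss, mu=0.01):
    """Local objective with a proximal regularization term:
    data_loss + (mu / 2) * ||w_local - w_global||^2,
    where local_params / global_params are lists of parameter arrays.
    """
    prox = sum(np.sum((wl - wg) ** 2)
               for wl, wg in zip(local_params, global_params))
    return data_loss + 0.5 * mu * prox

# When the local model equals the global model, the penalty vanishes.
w = [np.ones((2, 2))]
print(fedprox_local_loss(w, w, data_loss=1.0))  # → 1.0
```

Larger `mu` keeps local updates closer to the global model, which is what stabilizes training under non-i.i.d. client data.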
In addition, different personalization methods are applied to local clients, such as personalization (Wang et al., 2019; Yu et al., 2020; Arivazhagan et al., 2019; Huang et al., 2023; Bao et al., 2023), representation learning (Arivazhagan et al., 2019; Collins et al., 2021; Chen & Chao, 2022; Jiang & Lin, 2023), and meta-learning (Fallah et al., 2020).

FL with dynamic data distributions. While most previous works on statistical heterogeneity have considered static situations (i.e., the local heterogeneity stays constant during training), another line of literature focuses on FL in a dynamic environment where various distribution drifts occur. Some works aim to tackle drifts caused by time-varying participation rates of clients with local heterogeneity (Rizk et al., 2020; Park et al., 2021; Wang & Ji, 2022; Zhu et al., 2021), while other works assume the global distributions are also evolving (Guo et al., 2021; Casado et al., 2022; Yoon et al., 2021). However, among all previous works, Jiang & Lin (2023); Gupta et al. (2022) are the only ones considering the distribution shift between training and testing, but they assume the training distribution itself is static.

Evolving domain generalization. *Domain Generalization* (DG) has been extensively studied to generalize ML algorithms to unseen domains, with different methods including data manipulation (Khirodkar et al., 2019; Robey et al., 2021), representation learning (Blanchard et al., 2017; Deshmukh et al., 2019), and domain adversarial learning (Rahman et al., 2020; Zhao et al., 2020). To go one step further, a few works have considered the evolving patterns of the domains (Hong Liu, 2020; Zhang & Davison, 2021; Kumar et al., 2020; Wang et al., 2022; Qin et al., 2022), but only Wang et al. (2022); Qin et al. (2022) consider Evolving Domain Generalization (EDG), where the target domain is not accessible. Wang et al.
(2022) developed an algorithm to learn embeddings of the previous domain and the current domain such that their representations are invariant. Qin et al. (2022) developed a dynamic probabilistic framework to model the underlying latent variables across domains. Wang et al. (2020) considers a similar problem under the domain adaptation setting, where they use domain discriminators to learn domain-invariant features and adapt the model to target domains. However, all these previous works consider the *centralized setting*. Thus, there is a gap for EDG under distributed settings, and in particular for FL.

![3_image_0.png](3_image_0.png)

Figure 2: Illustration of *FedEvolve* (left) and *FedEvp* (right): (i) *FedEvolve* consists of two distinct modules ϕ and ψ, where ϕ calculates the prototypes for domain Sm, individually for each class, using mean values as class representations. Then, ψ represents a data batch from the domain Sm+1. Both modules are updated based on the distance between the Sm+1 representations and the Sm prototypes. During inference, ψ computes the distance to the latest domain's prototypes and selects the minimal one as the prediction result. (ii) *FedEvp* simplifies *FedEvolve* by removing ψ and integrating a classifier w with ϕ. This decreases the communication cost during federated training. Instead of using localized prototypes from just Sm, *FedEvp* builds global prototypes from domains S1 to Sm. These prototypes are aligned with the representations of the succeeding domain Sm+1, providing an integrated feature representation across diverse domains. By emphasizing consistent feature representation, *FedEvp* ensures its classifier adeptly handles an unseen domain, making predictions resilient and versatile across changing data contexts.

## 3 Problem Formulation

Consider a federated learning (FL) system consisting of K clients, whose data distributions vary dynamically over time.
Define $\{S_1, \ldots, S_M\}$ as the M consecutive domains that characterize the evolution of the clients' global distribution. Let $\mathcal{D}_m^k$ be the local dataset of client $k \in \{1, \ldots, K\}$ at the m-th domain. The clients are heterogeneous and may have access to different class labels. Given an FL model with parameter h, let $\ell(x, y; h)$ be the corresponding loss evaluated on a labeled data sample $(x, y)$. Our goal is to learn an FL model h from the K clients over M domains that can generalize to a subsequent target domain $S_{M+1}$. That is, we wish to find $h^*$ that minimizes the total loss at target domain $S_{M+1}$ over the K clients:

$$h^{*}=\operatorname*{arg\,min}_{h\in\mathbb{R}^{d}}\sum_{k=1}^{K}\alpha_{k}L_{k}(h)\tag{1}$$

where $\alpha_k$ is the weight of client k (e.g., the proportion of its sample size) and $L_k(h) := \mathbb{E}_{(x_i, y_i)\sim\mathcal{D}_{M+1}^{k}}\,\ell(x_i, y_i; h)$ is the local loss of client k evaluated on the target domain $S_{M+1}$.

## 4 Methodology

To learn an FL model from time-evolving data that generalizes well to the future domain, we need to learn the evolving pattern of the source domains during federated training. Motivated by (Wang et al., 2022; Snell et al., 2017), we assume there is a pattern in the transition between every two consecutive domains $S_m$ and $S_{m+1}$. Instead of learning evolving patterns directly in the input space, we use *representation learning* to learn the evolution in a representation space. Next, we introduce two algorithms, *FedEvolve* and *FedEvp*, which align data representations from evolving domains and facilitate local personalization. Specifically, *FedEvolve* is designed to actively identify the evolving pattern between two consecutive domains, while *FedEvp* first learns an invariant representation across all existing domains and then generalizes to the unknown evolving domain.

## 4.1 FedEvolve

To actively capture the evolving patterns of the source domains, *FedEvolve* learns two distinct learnable representation functions $f_\phi$, $f_\psi$.
Given two consecutive domains $S_m$ and $S_{m+1}$:

- $f_\phi(S_m)$ is the estimated representation of the **subsequent** domain $S_{m+1}$ using input $S_m$.
- $f_\psi(S_{m+1})$ is the **representation** of the input domain $S_{m+1}$.

Because $f_\phi$ estimates the representation of a domain using the previous domain, we can use it to estimate the unknown target domain $S_{M+1}$ from the source domains $\{S_1, \ldots, S_M\}$. Let $\phi, \psi$ be the trainable neural network parameters of $f_\phi$, $f_\psi$, respectively. To learn the evolving pattern, we aim to learn $\phi, \psi$ such that the estimated future domain representation $f_\phi(S_m)$ is sufficiently accurate and close to the actual representation $f_\psi(S_{m+1})$, i.e., we need to minimize the distance between $f_\phi(S_m)$ and $f_\psi(S_{m+1})$. Inspired by (Wang et al., 2022), to align the two representations while capturing the class characteristics across evolving domains, we leverage prototypical learning (Snell et al., 2017) to directly align their representation prototypes. Specifically, for each client k and domain $S_m$, we define the *prototype* $C_{m,y}^k$ of class y on the client's local dataset $\mathcal{D}_m^k$ as the mean value of the representations produced by $f_{\widetilde{\phi}_k}$, where $\widetilde{\phi}_k$ is the local parameter learned on client k, i.e.,

$$C_{m,y}^{k}=\frac{1}{|\mathcal{D}_{m,y}^{k}|}\sum_{x\in\mathcal{D}_{m,y}^{k}}f_{\widetilde{\phi}_{k}}(x)\tag{2}$$

where $\mathcal{D}_{m,y}^k \subseteq \mathcal{D}_m^k$ is the subset of data instances with label y and $|\mathcal{D}_{m,y}^k|$ is the cardinality of this set. For the next domain $S_{m+1}$, *FedEvolve* minimizes the distance between its representation $f_{\widetilde{\psi}_k}(S_{m+1})$ and $f_{\widetilde{\phi}_k}(S_m)$ estimated from $S_m$. This can be achieved by aligning the representation $f_{\widetilde{\psi}_k}$ for data points from the domain $S_{m+1}$ to the corresponding class prototype $C_{m,y}^k$.
Mathematically, we minimize the loss defined below:

$$\ell(x,y)=-\log\frac{\exp\left(-d\left(f_{\widetilde{\psi}_{k}}(x),C_{m,y}^{k}\right)\right)}{\sum_{y^{\prime}\in\mathcal{Y}_{\mathcal{D}_{m+1}^{k}}}\exp\left(-d\left(f_{\widetilde{\psi}_{k}}(x),C_{m,y^{\prime}}^{k}\right)\right)}\tag{3}$$

where $(x, y)$ is a sample pair from $\mathcal{D}_{m+1}^k$ and $\mathcal{Y}_{\mathcal{D}_{m+1}^k}$ includes all class labels in $\mathcal{D}_{m+1}^k$. Here, d is a distance measure (e.g., Euclidean distance, cosine distance) that quantifies the difference between the feature representation $f_{\widetilde{\psi}_k}(x)$ and the prototype $C_{m,y}^k$ of class y from the local dataset $\mathcal{D}_m^k$. In this paper, we employ the Euclidean distance. By minimizing Eqn. (3) on all active clients, local models learn the evolving pattern by aligning representations of domain $S_{m+1}$ with prototypes from the former domain $S_m$. After local updates, the active clients $I_t$ send their local parameters to the server, and the server performs an average aggregation to update the global parameters $\phi = \frac{1}{|I_t|}\sum_{k\in I_t}\widetilde{\phi}_k$ and $\psi = \frac{1}{|I_t|}\sum_{k\in I_t}\widetilde{\psi}_k$. These aggregations encapsulate global information with diverse data contributions of all clients. Once consolidated, these models can be directly dispatched to the clients and facilitate continuous model adaptations to the evolving data distributions across the federated network. After training on the source domains, we can use the learned representation functions $f_\phi$, $f_\psi$ to predict the target domain $S_{M+1}$. Specifically, we first compute the prototypes of $f_\phi(S_M)$ on $S_M$. Then, we apply $f_\psi$ to test samples in $S_{M+1}$ to generate representations $f_\psi(S_{M+1})$ and classify them based on proximity to the prototypes. We present the pseudocode of *FedEvolve* in Algorithm 1 in Appendix A.
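The per-class prototype (mean representation) and the distance-based softmax loss of *FedEvolve* can be sketched as follows. This is a minimal NumPy illustration with our own function names; it uses squared Euclidean distance, as in prototypical networks, whereas the paper's choice of metric may differ:

```python
import numpy as np

def class_prototypes(feats, labels):
    """Per-class mean of representations; feats is (N, d), labels is (N,)."""
    return {y: feats[labels == y].mean(axis=0) for y in np.unique(labels)}

def proto_loss(z, y, prototypes):
    """Negative log of the distance-based softmax over prototypes,
    using squared Euclidean distance, for a representation z of class y."""
    classes = sorted(prototypes)
    logits = np.array([-np.sum((z - prototypes[c]) ** 2) for c in classes])
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[classes.index(y)]

# Toy domain S_m: two classes with prototypes [0, 1] and [4, 1].
feats = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 0.0], [4.0, 2.0]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels)

# A point from S_{m+1} near the class-0 prototype incurs a lower loss for y=0.
z = np.array([0.1, 1.0])
```

Minimizing this loss pulls the next domain's representations toward the previous domain's prototypes, which is the alignment mechanism described above.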
## 4.2 FedEvp

Because the two distinct representation functions $f_\phi$ and $f_\psi$ in *FedEvolve* are usually large neural networks such as ResNet (He et al., 2016) on real-world image datasets, transmitting the extra parameters of the second representation mapping incurs a non-negligible overhead, rendering deployment challenging in environments with limited computational resources or network bandwidth. To address this overhead, we also present *FedEvp*, an efficient and streamlined strategy that achieves similar performance to *FedEvolve*. Unlike the dual-model mechanism of *FedEvolve*, *FedEvp* adopts a single-model strategy to reduce communication costs while simultaneously accelerating training. As shown in the right plot of Figure 2, *FedEvp* aims to learn an evolving-domain-invariant representation using a representation function $f_\phi$ by continuously aligning data to prototypes from previous domains. If we can develop a representation that is resilient to evolving distribution shifts, a single classifier can effectively serve all domains. To further address local heterogeneity, we also incorporate an efficient personalization step for the classifier. To ensure a consistent learning process, *FedEvp* maintains evolving prototypes according to the classes of consecutive domains. In essence, the prototypes learned by *FedEvp* consolidate the global information from all previous domains to enable the learning of domain-invariant features. For each class y within client k, an evolving prototype $C_{m,y}^k$ is established as follows: $C_{0,y}^k$ is set to zero; for the other domains ranging from 1 to M, the prototype is updated as in
$$C_{m,y}^{k}=\frac{m-1}{m}\,C_{m-1,y}^{k}+\frac{1}{m}\,\frac{1}{|\mathcal{D}_{m,y}^{k}|}\sum_{x\in\mathcal{D}_{m,y}^{k}}f_{\widetilde{\phi}_{k}}(x)\tag{4}$$

where $\mathcal{D}_{m,y}^k$ is the set of all instances in the current domain m that belong to class y, and $f_{\widetilde{\phi}_k}(x)$ denotes the representation of instance x under client k's local model parameters $\widetilde{\phi}_k$. Such an iterative update mechanism ensures that the prototype $C_{m,y}^k$ evolves as new domains are introduced, gradually incorporating information from each one. As a result, $C_{M,y}^k$ becomes a representative prototype of class y across all available training domains for client k. We then align the data from domain $S_{m+1}$ to the prototypes $C_m^k$ to update the parameter $\phi$. We adopt the same loss function as *FedEvolve*, given in Eqn. (5):

$$\ell_{f}(x,y)=-\log\frac{\exp\left(-d\left(f_{\widetilde{\phi}_{k}}(x),C_{m,y}^{k}\right)\right)}{\sum_{y^{\prime}\in\mathcal{Y}_{\mathcal{D}_{m+1}^{k}}}\exp\left(-d\left(f_{\widetilde{\phi}_{k}}(x),C_{m,y^{\prime}}^{k}\right)\right)}\tag{5}$$

where d is the same distance metric as in *FedEvolve*, $d(f_{\widetilde{\phi}_k}(x), C_{m,y}^k)$ is the distance between the feature representation $f_{\widetilde{\phi}_k}(x)$ of instance x and the prototype $C_{m,y}^k$ of class y, and $\mathcal{Y}_{\mathcal{D}_{m+1}^k}$ is the set of classes in the (m+1)-th domain. Besides minimizing $\ell_f$ to learn a domain-invariant representation, we introduce a classifier $\widetilde{w}_k$ which is updated by minimizing the empirical risk $\ell_e$ defined as:

$$\ell_{e}(x,y)=-y\log\frac{\exp\left(g_{\widetilde{w}_{k}}^{y}\left(f_{\widetilde{\phi}_{k}}(x)\right)\right)}{\sum_{y^{\prime}\in\mathcal{Y}_{\mathcal{D}_{m}^{k}}}\exp\left(g_{\widetilde{w}_{k}}^{y^{\prime}}\left(f_{\widetilde{\phi}_{k}}(x)\right)\right)}\tag{6}$$

where $g_{\widetilde{w}_k}^{y}(f_{\widetilde{\phi}_k}(x))$ is the predicted output for class y of an instance $(x, y) \in \mathcal{D}_{m,y}^k$, computed by the classifier $\widetilde{w}_k$. In our experiments, $\ell_e$ is the classical cross-entropy loss.
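The running-average prototype update used by *FedEvp* can be sketched as follows (a minimal NumPy illustration; the function name is ours). After processing all M domains, the evolving prototype equals the mean of the per-domain class means:

```python
import numpy as np

def update_evolving_prototype(C_prev, domain_feats, m):
    """Running-average prototype update:
    C_m = ((m - 1) / m) * C_{m-1} + (1 / m) * mean of the current domain's
    class representations. m is 1-indexed and C_0 is the zero vector."""
    return ((m - 1) / m) * C_prev + (1 / m) * domain_feats.mean(axis=0)

# Two toy domains with per-domain class means [1, 1] and [4, 1].
C = np.zeros(2)
for m, feats in enumerate([np.array([[1.0, 1.0]]),
                           np.array([[3.0, 1.0], [5.0, 1.0]])], start=1):
    C = update_evolving_prototype(C, feats, m)
print(C)  # → [2.5 1. ]
```

This incremental form avoids storing past domains: only the current prototype and the domain index m are needed.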
After local updates, *FedEvp* aggregates the local parameters at the server: $\phi = \frac{1}{|I_t|}\sum_{k\in I_t}\widetilde{\phi}_k$ and $w = \frac{1}{|I_t|}\sum_{k\in I_t}\widetilde{w}_k$. These aggregated global models are then sent back to the clients for future updates. As *FedEvp* relies on the classifier using evolving-domain-invariant features instead of directly using the difference between two consecutive domain representations, the prediction may be influenced by the clients' heterogeneity. To handle the issue raised by local heterogeneity, a personalization mechanism, akin to *local fine-tuning*, is further incorporated. Specifically, we personalize each client by updating both the classifier $\widetilde{w}$ and the *last layer* of the feature extractor $f_{\widetilde{\phi}}$ for an additional epoch on the client's local dataset. The pseudocode of *FedEvp* is given in Algorithm 2 in Appendix A.

## 5 Experiments

To evaluate our methods, we consider classification tasks using various network architectures and report the average accuracy and standard deviation over three runs. The detailed implementation can be found in Appendix C. A Dirichlet distribution is used to control the level of heterogeneity with parameter Dir ∈ [0, ∞); a smaller Dir implies that the clients are more heterogeneous. Heterogeneous clients may have access to different class labels. We report the average performance across clients and the performance on the server, both evaluated on the test domain after the last epoch. The federated training phase follows typical FL steps. In each communication round t, a subset $I_t$ of the K clients joins the system and the server distributes the aggregated global model parameters to each client $k \in I_t$. Upon receiving these parameters, each client k initializes its local parameters to those values and performs τ local updates.

## 5.1 Datasets And Networks

We evaluate *FedEvolve* and *FedEvp* on both **synthetic data** (Circle) and **real data** (Rotated MNIST, Rotated EMNIST, Portraits, and Caltran).
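The Dirichlet-based heterogeneity control described in the experimental setup above can be sketched as follows. This is a hypothetical minimal partitioner (names and sampling details are ours; the paper's exact scheme may differ): for each class, client proportions are drawn from Dirichlet(α), so a smaller α yields more skewed, heterogeneous clients.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients with per-class Dirichlet(alpha)
    proportions; smaller alpha -> more heterogeneous client label mixes."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for y in np.unique(labels):
        idx = np.flatnonzero(labels == y)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx

# 3 classes x 100 samples split across 5 highly heterogeneous clients.
parts = dirichlet_partition(np.repeat([0, 1, 2], 100), num_clients=5, alpha=0.1)
```

Every sample is assigned to exactly one client, so the partition is a valid federated split.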
All datasets either come with evolving patterns or are adapted to evolving environments. For all datasets, the last domain is viewed as the target domain. The feature extractor in the neural network serves as ϕ and ψ, and the classifier is the w mentioned in the previous section.

Circle (Pesaranghader & Viktor, 2016). This synthetic dataset has 30 evolving domains. The 30000 instances within these domains are sampled from 30 two-dimensional Gaussian distributions with the same variance but different means that are uniformly distributed on a half-circle. We use a 5-layer multilayer perceptron (MLP), with 3 layers serving as the representation function ($f_\phi$ and $f_\psi$ in *FedEvolve*, $f_\phi$ in *FedEvp*) and the remaining 2 layers as the classifier ($f_w$ in *FedEvp*).

Rotated MNIST (Ghifary et al., 2015) and Rotated EMNIST (Cohen et al., 2017). Rotated MNIST is a variation of the MNIST data, where we rotate the original handwritten digit images to produce different domains. Specifically, we partition the data into 12 domains and rotate the images within each domain by an angle θ, beginning at θ = 0° and progressing in 15-degree increments up to θ = 165°. We also consider other increments spanning from 0° to 25° to simulate varying degrees of evolving shifts. EMNIST is a more challenging alternative to MNIST with more classes, including both hand-written digits and letters. We use the hand-written letters subset and split it into 12 domains by rotating images with degrees θ ∈ {0°, 8°, ..., 88°}. We design a model consisting of a 4-layer convolutional neural network (CNN) for the representation layers, followed by two linear layers for classification.

Portraits (Ginosar et al., 2015). This is a real dataset consisting of frontal-facing American high school yearbook photos over a century. This time-evolving dataset reflects changes in fashion (e.g., hair style and smile). We resize the images to 32 × 32 and split the dataset by every 12 years into 9 domains.
We use WideResNet (Zagoruyko & Komodakis, 2016) as the representation function to train the gender classifier. Note that this data is only intended for comparing the various methods.

Caltran (Hoffman et al., 2014). This real surveillance dataset comprises images captured by a fixed traffic camera. We divide the dataset into 12 domains, where the samples from every 2-hour block form a domain (evolving shifts arising from changes in light intensity). A ResNet18 (He et al., 2016) backbone is used as the representation function, and the last linear layer is used as the classifier.

## 5.2 Baselines

We compare *FedEvolve* and *FedEvp* with various existing FL methods. These baselines cover a broad range of methods, including traditional FL, strong personalized FL (PFL), centralized test-time adaptation (TTA) methods, federated TTA methods, and continual federated learning methods.

- *FedAvg* (McMahan et al., 2017): A FL method that learns the global model by averaging the clients' local models.

- GMA (Tenison et al., 2022): A FL method using a gradient masked averaging approach to aggregate local models.

Table 1: Average accuracy over three runs of experiments on Rotated MNIST under i.i.d. and non-i.i.d. distributions. The client heterogeneity (Dir) is determined by the value of the Dirichlet distribution (Yurochkin et al., 2019).
| | Dir→ ∞ | | Dir=1.0 | | Dir=0.1 | |
|-----------|-------------|------------|------------|------------|------------|------------|
| Method | Client | Server | Client | Server | Client | Server |
| FedAvg | 65.92±1.01 | 66.34±0.34 | 62.35±0.97 | 63.16±1.78 | 51.68±0.73 | 51.59±2.48 |
| GMA | 65.94±0.91 | 66.17±0.21 | 61.49±0.30 | 61.68±0.66 | 50.86±1.15 | 51.32±2.47 |
| Memo(G) | 65.94±1.34 | 66.78±2.30 | 61.39±0.94 | 62.91±2.55 | 49.76±5.58 | 52.06±1.23 |
| FedAvgFT | 48.70±1.03 | 66.61±0.59 | 57.95±2.91 | 62.61±1.02 | 69.51±1.97 | 51.59±1.70 |
| APFL | 62.37±1.08 | 65.57±1.54 | 67.58±1.09 | 63.98±2.31 | 70.37±2.19 | 50.66±0.47 |
| FedRep | 60.04±1.00 | 68.09±3.10 | 63.95±0.75 | 63.49±2.62 | 76.35±1.67 | 52.25±1.75 |
| Ditto | 65.23±0.87 | 65.35±1.50 | 68.14±0.92 | 64.64±1.45 | 75.55±2.56 | 50.89±2.79 |
| FedRod | 52.30±1.87 | 67.93±1.05 | 54.00±3.98 | 63.32±2.33 | 64.11±3.68 | 53.02±1.22 |
| Memo(P) | 51.70±2.48 | 65.35±1.47 | 59.84±0.61 | 64.75±1.59 | 69.46±2.77 | 50.27±2.85 |
| T3A | 53.94±0.76 | 66.61±0.59 | 61.60±2.49 | 62.61±1.02 | 71.73±1.63 | 51.59±1.70 |
| FedTHE | 66.84±1.51 | 67.43±0.23 | 67.98±0.43 | 62.55±1.98 | 78.52±3.92 | 53.40±0.74 |
| Flute | 62.97±1.39 | 63.27±1.13 | 68.86±0.75 | 61.46±0.16 | 78.44±3.54 | 54.71±3.28 |
| FedSR | 69.91±1.14 | 71.79±1.75 | 67.00±1.23 | 68.01±2.65 | 61.49±2.60 | 59.88±3.54 |
| CFL | 63.75±0.98 | 64.33±2.17 | 60.29±1.85 | 60.82±1.97 | 50.76±1.41 | 51.04±2.49 |
| CFeD | 70.22±0.63 | 71.66±0.66 | 68.07±0.72 | 68.64±1.38 | 60.41±2.33 | 61.27±2.93 |
| FedEvolve | 84.75±1.39 | 84.43±1.21 | 79.93±1.00 | 77.25±1.82 | 83.86±1.81 | 71.66±1.95 |
| FedEvp | 75.99±0.31 | 77.63±1.99 | 77.91±1.80 | 73.85±1.53 | 83.15±0.49 | 61.84±3.34 |

- *FedAvg + FT*: Fine-tunes the global model on local training data, an effective strategy for personalization in FL.

- *MEMO* (Zhang et al., 2022): A TTA method that we adapt to FL.
Following (Jiang & Lin, 2023), we term MEMO applied to the global model MEMO(G) and MEMO applied to the FedAvg + FT model MEMO(P).

- *APFL* (Deng et al., 2020a): A PFL method that leverages a weighted ensemble of personalized and global models.

- *FedRep* (Collins et al., 2021) and *FedRoD* (Chen & Chao, 2021): PFL methods that use a decoupled feature extractor and classifier to enhance personalization in FL.

- *Ditto* (Li et al., 2021a): A fairness-aware PFL method that has been shown to outperform other fairness-oriented FL methods.

- T3A (Iwasawa & Matsuo, 2021): A TTA method that we adapt to personalized FL by adding test-time adaptation to *FedAvg + FT*.

- *FedTHE* (Jiang & Lin, 2023): A TTA PFL method that tackles data heterogeneity while learning test-time robust FL under distribution shifts.

- *FedSR* (Nguyen et al., 2022): An FL method based on a standard domain generalization regularizer.

- CFL (Guo et al., 2021): A continual federated learning method that learns from time-series data while preventing catastrophic forgetting.

- *CFeD* (Ma et al., 2022): A continual federated learning method that uses distillation to tackle catastrophic forgetting.

- *Flute* (Liu et al., 2024): A PFL method that distills the subspace spanned by the globally optimal representation from the misaligned local representations.

## 5.3 Results

In Figure 3, we examine how algorithm performance changes as the degree of evolving shift varies. Tables 1, 2, and 3 show the comparison with baselines, where we report both the average performance of the clients' local models and the performance of the global model at the server. We also extend the experiments of Table 3 to the setting where clients are heterogeneous (Dir = 1.0) and present the results in Table 4.

Impacts of distribution shifts and local heterogeneity. First, we examine the impact of distribution shifts and client heterogeneity on FL systems.
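The heterogeneity levels Dir used throughout the experiments correspond to Dirichlet-based label partitioning across clients. A minimal sketch of such a partition (the function name and the per-class splitting scheme are illustrative assumptions, not the exact implementation used in the paper):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients via Dirichlet label skew.

    For each class, client proportions are drawn from Dir(alpha);
    a smaller alpha yields more heterogeneous client label mixes.
    """
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # proportions on the simplex, then cut points into the index array
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx
```

As alpha → ∞ the Dirichlet draw concentrates on uniform proportions, recovering the i.i.d. setting (Dir → ∞ in the tables).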
Figure 3 presents the results on RMNIST data under clients

| | Dir→ ∞ | | Dir=1.0 | | Dir=0.1 | |
|-----------|------------|------------|------------|-------------|------------|------------|
| Method | Client | Server | Client | Server | Client | Server |
| FedAvg | 53.83±1.84 | 54.18±1.72 | 52.72±4.45 | 52.77±3.74 | 46.72±2.55 | 45.71±1.77 |
| GMA | 54.23±1.77 | 55.10±1.71 | 51.23±1.93 | 51.42±0.79 | 48.40±1.75 | 48.61±2.13 |
| Memo(G) | 53.32±1.38 | 53.85±0.72 | 50.33±2.06 | 50.37±1.10 | 47.53±2.09 | 47.20±1.86 |
| FedAvgFT | 44.20±2.54 | 54.09±1.30 | 52.16±4.62 | 53.82±2.13 | 66.96±0.68 | 46.87±0.60 |
| APFL | 44.98±1.57 | 54.33±1.12 | 49.84±1.48 | 50.99±0.62 | 66.80±0.37 | 46.42±2.58 |
| FedRep | 39.01±2.03 | 46.39±2.49 | 47.26±2.64 | 47.25±0.93 | 67.51±1.35 | 44.12±0.46 |
| Ditto | 42.38±1.77 | 53.90±1.20 | 53.80±1.89 | 56.22±1.58 | 72.66±0.61 | 55.48±1.94 |
| FedRod | 44.25±1.60 | 51.79±2.77 | 49.53±0.81 | 50.32±2.61 | 67.31±2.03 | 45.74±3.99 |
| Memo(P) | 45.42±2.39 | 53.47±1.33 | 51.23±4.94 | 51.10±1.10 | 68.37±1.48 | 47.73±2.26 |
| T3A | 48.80±2.84 | 54.49±0.46 | 55.93±2.28 | 53.29±1.12 | 71.80±1.95 | 52.08±2.84 |
| FedTHE | 52.40±3.87 | 53.27±3.60 | 58.08±1.44 | 53.45±1.87 | 69.34±2.10 | 46.15±2.17 |
| Flute | 48.89±3.04 | 51.52±4.63 | 55.10±3.88 | 46.71±2.73 | 64.99±3.35 | 40.27±3.01 |
| FedSR | 55.71±0.09 | 56.92±0.44 | 51.40±4.65 | 55.35±3.93 | 44.38±2.30 | 49.43±2.48 |
| CFL | 40.65±2.19 | 41.41±1.86 | 45.82±2.34 | 46.13±1.01 | 40.24±3.50 | 39.37±4.29 |
| CFeD | 56.76±0.65 | 56.17±1.39 | 55.50±4.33 | 55.53±5.73 | 47.20±1.37 | 47.76±2.22 |
| FedEvolve | 83.58±1.45 | 82.91±1.36 | 82.13±0.48 | 78.68±0.25 | 87.67±0.55 | 72.85±1.03 |
| FedEvp | 67.30±1.35 | 71.94±1.50 | 73.61±1.70 | 68.91±0.30 | 87.01±0.22 | 58.73±0.96 |

Table 2: Average accuracy over three runs of experiments on rotated EMNIST-Letter under i.i.d. and non-i.i.d. distributions.
| | Circle | | Portraits | | Caltran | |
|-----------|------------|-------------|------------|-------------|------------|------------|
| Method | Client | Server | Client | Server | Client | Server |
| FedAvg | 70.40±6.51 | 70.40±6.51 | 94.10±0.13 | 94.10±0.13 | 62.93±2.10 | 64.31±2.13 |
| GMA | 62.55±6.94 | 62.55±6.94 | 94.18±0.14 | 94.18±0.14 | 63.28±3.48 | 63.85±3.49 |
| Memo(G) | - | - | 94.38±0.07 | 94.63±0.31 | 63.41±2.81 | 63.82±2.92 |
| FedAvgFT | 60.85±3.07 | 63.55±5.67 | 90.99±0.74 | 93.21±1.86 | 63.82±0.70 | 63.98±3.22 |
| APFL | 59.90±2.48 | 63.55±5.67 | 90.54±0.29 | 94.64±0.16 | 62.11±1.85 | 63.17±3.29 |
| FedRep | 64.37±5.60 | 64.97±6.05 | 90.88±0.63 | 93.50±1.15 | 62.03±3.05 | 64.07±2.41 |
| Ditto | 62.60±2.64 | 63.10±6.00 | 91.46±0.13 | 94.07±0.30 | 62.44±2.59 | 63.58±3.43 |
| FedRod | 64.60±2.33 | 65.00±6.55 | 91.57±0.18 | 94.78±0.43 | 64.14±3.94 | 58.29±4.75 |
| Memo(P) | - | - | 91.30±0.16 | 94.34±0.28 | 63.66±2.93 | 63.58±3.43 |
| T3A | 62.20±4.11 | 66.50±4.95 | 91.84±0.61 | 94.59±0.34 | 63.90±0.60 | 63.98±3.22 |
| FedTHE | 64.03±4.79 | 63.27±5.05 | 94.13±0.24 | 93.48±0.98 | 60.48±1.44 | 58.17±3.18 |
| Flute | 65.69±3.81 | 63.04±3.31 | 94.25±0.10 | 94.53±0.18 | 61.71±2.19 | 61.46±3.70 |
| FedSR | 72.77±3.38 | 71.62±5.70 | 94.43±0.35 | 94.52±0.35 | 64.57±1.36 | 66.02±1.47 |
| CFL | 72.12±8.76 | 72.12±8.76 | 92.91±1.07 | 92.91±1.07 | 63.68±3.61 | 63.92±3.15 |
| CFeD | 71.60±6.77 | 71.60±6.77 | 93.64±0.27 | 93.64±0.27 | 63.48±3.87 | 63.55±3.27 |
| FedEvolve | 84.25±2.45 | 81.64±1.95 | 95.43±0.17 | 96.88±1.35 | 65.04±1.66 | 63.54±0.74 |
| FedEvp | 73.30±5.02 | 74.12±6.93 | 93.54±0.19 | 94.92±0.11 | 66.59±1.44 | 66.34±0.69 |

Table 3: Average accuracy across various datasets over three runs. We consider the i.i.d. setting where Dir → ∞.
| | Circle | | Portraits | | Caltran | |
|-----------|-------------|------------|------------|------------|------------|------------|
| Method | Client | Server | Client | Server | Client | Server |
| FedAvg | 66.53±4.74 | 66.53±4.74 | 94.37±0.86 | 94.37±0.86 | 66.34±2.41 | 65.12±4.87 |
| GMA | 65.93±6.01 | 65.93±6.01 | 93.75±0.68 | 93.75±0.68 | 65.12±1.95 | 63.05±4.51 |
| Memo(G) | - | - | 93.81±0.45 | 93.81±0.45 | 66.20±2.07 | 65.54±0.73 |
| FedAvgFT | 65.97±1.49 | 66.93±3.30 | 92.54±0.65 | 94.56±0.43 | 65.12±2.84 | 65.56±3.61 |
| APFL | 64.23±0.80 | 66.93±3.30 | 92.16±0.42 | 94.47±0.38 | 70.49±3.70 | 65.41±3.84 |
| FedRep | 66.87±4.91 | 69.07±5.42 | 92.50±0.65 | 94.19±0.56 | 65.27±1.86 | 65.90±3.39 |
| Ditto | 69.05±4.41 | 64.50±5.09 | 91.86±0.87 | 94.93±0.32 | 65.45±3.43 | 65.61±4.52 |
| FedRod | 63.70±1.96 | 77.20±4.98 | 92.64±0.58 | 95.26±0.31 | 73.27±3.35 | 64.88±4.03 |
| Memo(P) | - | - | 92.94±0.65 | 94.48±0.32 | 64.88±3.13 | 62.24±3.97 |
| T3A | 69.80±1.60 | 69.10±1.50 | 91.93±0.50 | 94.20±0.34 | 67.24±2.01 | 65.61±4.52 |
| FedTHE | 70.30±5.83 | 74.97±3.90 | 91.77±0.85 | 94.53±0.32 | 71.80±3.07 | 62.02±4.22 |
| Flute | 70.33±4.31 | 67.04±3.66 | 94.16±0.69 | 94.59±0.38 | 71.61±1.61 | 63.46±1.58 |
| FedSR | 73.88±3.10 | 72.08±4.85 | 93.99±0.79 | 94.22±0.77 | 62.99±2.11 | 68.35±0.53 |
| CFL | 70.82±5.43 | 70.82±5.43 | 93.84±0.30 | 93.84±0.30 | 64.50±3.17 | 65.28±3.50 |
| CFeD | 68.37±8.22 | 68.38±8.22 | 93.22±3.21 | 94.77±0.92 | 65.30±2.92 | 67.18±2.91 |
| FedEvolve | 82.52±1.94 | 83.59±5.91 | 93.84±1.62 | 96.54±1.39 | 75.04±4.03 | 64.06±3.83 |
| FedEvp | 74.80±1.69 | 77.93±4.20 | 94.50±0.28 | 93.91±2.19 | 73.46±0.90 | 68.24±1.08 |

Table 4: Accuracy of baselines across various datasets over three runs (Dir=1.0).

![9_image_1.png](9_image_1.png)

![9_image_0.png](9_image_0.png)

Figure 3: Performance comparison of various methods across different rotation angles on RMNIST for distinct distributions.
with varying degrees of local heterogeneity (Dir = ∞, 1.0, 0.1). Each sub-figure shows how performance changes as the extent of distribution shift grows from no distribution shift (0° incremental angle) to high distribution shift (25° incremental angle):

- In the absence of significant distribution shifts (e.g., a rotation incremental angle of 0°, 3°, or 5°), Figure 3a shows that, when there is no client heterogeneity, our methods perform similarly to the traditional FL methods. The learning task reduces to the standard FL task, and the classical FL methods maintain competitive performance. As clients become more heterogeneous, Figures 3b and 3c show that all methods experience an accuracy drop; the server performance of *FedEvolve* is marginally inferior to that of *FedAvg*, while *FedEvp*, with personalization, remains robust under heterogeneous clients. This is further verified when Dir = 0.1. We also observe that *FedEvolve* is still robust when client heterogeneity is large. The decline in performance due to heterogeneity mainly comes from the error of the classifier, and *FedEvolve* avoids this by using representation distances to make predictions rather than relying on the classifier.

- When the rotation increments increase, *FedAvg* experiences a significant performance drop (e.g., a nearly 12% decrease when the incremental angle increases by 5 degrees, see Figure 3a). These impacts are more significant than the performance drop caused by client heterogeneity, indicating the challenge posed by evolving shifts. However, our methods remain robust against such shifts and significantly outperform the baselines. When both strong local heterogeneity and distribution shifts are present (Figure 3c), both the baselines and our methods experience a performance drop, but ours exhibit a relatively slower decline.
The better performance on the clients compared to the server for *FedEvp* further validates the effectiveness of its personalization mechanism.

Comparison with Baselines. We conduct extensive experiments on five datasets with different levels of client heterogeneity. Tables 1 and 2 and the results on the Circle data in Table 3 compare the methods in scenarios with strong evolving patterns. We observe that both *FedEvolve* and *FedEvp* outperform the baseline methods. In particular, *FedEvolve* attains the highest accuracy (84.75%, 83.58%, and 84.25% on RMNIST, REMNIST, and Circle, respectively), demonstrating its capability to learn from the evolving pattern and effectively address the distribution shifts. This advantage also appears on the other datasets (Portraits and Caltran) in Table 3, which have less obvious evolving patterns.

For PFL or TTA baselines tuned on local source domains, performance may deteriorate compared to classical FL such as *FedAvg* when there is no client heterogeneity (Dir → ∞). Specifically, methods such as FedAvgFT, *APFL*, and *FedRep* may experience a drop in client performance relative to the server on certain datasets. These methods, originally designed to tackle client heterogeneity without learning evolving patterns, suffer performance degradation; this further highlights the importance of considering evolving distribution shifts in FL systems. Nonetheless, when clients are heterogeneous (Dir is 1.0 or 0.1 in Tables 1 and 2), their personalization or test-time adaptation can still be beneficial. General domain generalization methods like *FedSR* and continual FL methods tend to achieve better results than the other baselines, indicating their capability to mitigate the influence of evolving distribution shifts. But the gap between their performance and ours still emphasizes the need for a design specific to this problem.
Among all methods, our proposed *FedEvolve* and *FedEvp* show the best performance and are robust to both client heterogeneity and evolving shifts. *FedEvp* achieves performance comparable to *FedEvolve* while using only half as many parameters. Specifically, when Dir = 0.1, *FedEvolve* achieves accuracies of 83.86% and **87.67%** on RMNIST and REMNIST, while *FedEvp* achieves similar accuracies of **83.15%** and 87.01%. Thus, a careful design of personalization can prevent the unintended consequence of performance degradation.

Impact of Stragglers. Stragglers in FL systems introduce heterogeneity at the system level; therefore, we also study how resilient our methods are to the straggler problem. We report the results in Table 5 for the case where stragglers are present during the training phase. The results show that our methods are not significantly affected by stragglers. In this experiment, the straggler ratio represents the probability that a client trains for fewer local iterations than the specified number τ. For stragglers, the actual number of local iterations is randomly selected from 1 to τ.

| R-MNIST (Dir=1.0) | 0 | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|--------------------|------------|------------|------------|------------|------------|------------|
| FedEvolve | 79.93±1.00 | 78.56±4.17 | 77.50±4.87 | 77.42±3.45 | 75.88±2.58 | 71.87±0.98 |
| FedEvp | 77.91±1.80 | 77.79±1.69 | 77.11±0.83 | 77.00±1.14 | 76.91±0.84 | 76.60±1.54 |

Table 5: Performance under different straggler ratios.

Overhead Comparison. Table 6 compares the transmission overhead. We use a CNN as an example and report the number of parameters and the server-client transmission time in the MPI environment. Although *FedEvolve* has a higher transmission overhead, its cost-efficient version *FedEvp* has overhead comparable to the baselines.
| | FedRod | FedTHE | FedSR | FedEvolve (Ours) | FedEvp (Ours) | Others |
|------------|--------------|--------------|--------------|-------------------|-----------------|--------------|
| Parameters | 382106 | 382208 | 391937 | 741120 | 379392 | 379392 |
| Time/ms | 21.38 ± 0.45 | 21.95 ± 1.23 | 21.62 ± 0.87 | 46.32 ± 0.78 | 21.30 ± 0.86 | 21.26 ± 1.11 |

Table 6: The number of model parameters and transmission time.

Table 7: Ablation for *FedEvp* (Dir=0.1). We compare the average accuracy on clients for *FedEvp* against three variants: one without any personalization, one that personalizes only the classifier, and one that personalizes all parameters.

| Method | MNIST Acc | EMNIST Acc |
|----------------------------|-------------|--------------|
| FedEvp | 83.15±0.49 | 87.01±0.22 |
| FedEvp w/o personalization | 63.59±2.38 | 57.67±1.64 |
| FedEvp personalize C | 79.21±2.29 | 86.59±0.35 |
| FedEvp personalize all | 73.06±1.07 | 82.78±0.54 |

Ablation study. We also study the influence of the personalization mechanism of *FedEvp* on performance in Table 7. The results show that personalizing part of the feature extractor together with the classifier achieves the best results. We also notice that personalizing the classifier brings the most significant improvement, which indicates that the classifier is the component most sensitive to client heterogeneity under evolving distribution shifts.
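The variants compared in Table 7 differ only in which parameter subset is updated during the personalization epoch. A minimal sketch of such selective fine-tuning (the parameter names and the single gradient step are illustrative assumptions, not the exact *FedEvp* update):

```python
import numpy as np

def personalize(global_params, grads, personal_keys, lr=0.1):
    """Fine-tune only the personalized subset of parameters.

    Parameters outside `personal_keys` (e.g. early feature-extractor
    layers) keep their aggregated global values.
    """
    params = {k: v.copy() for k, v in global_params.items()}
    for k in personal_keys:
        # one illustrative gradient-descent step on the client's local data
        params[k] = params[k] - lr * grads[k]
    return params
```

Personalizing only the classifier would correspond to `personal_keys = ["clf"]`; the full *FedEvp* scheme additionally includes the last feature-extractor layer.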
| R-MNIST (Dir→ ∞) | 7 | 10 | 12 |
|-------------------|--------------|--------------|--------------|
| FedAvg | 74.92 ± 1.08 | 70.07 ± 1.48 | 65.92 ± 1.01 |
| FedEvolve | 78.24 ± 1.18 | 82.44 ± 1.40 | 84.57 ± 2.45 |
| FedEvp | 80.20 ± 2.09 | 74.17 ± 1.15 | 75.99 ± 0.31 |
| R-MNIST (Dir=1.0) | 7 | 10 | 12 |
| FedAvg | 71.44 ± 1.21 | 65.40 ± 0.58 | 62.35 ± 0.97 |
| FedEvolve | 75.07 ± 2.42 | 80.81 ± 2.33 | 79.93 ± 1.00 |
| FedEvp | 80.19 ± 1.26 | 78.99 ± 0.82 | 77.91 ± 1.80 |
| R-MNIST (Dir=0.1) | 7 | 10 | 12 |
| FedAvg | 61.18 ± 0.91 | 54.83 ± 1.24 | 51.68 ± 0.73 |
| FedEvolve | 78.56 ± 1.69 | 83.44 ± 2.25 | 83.86 ± 1.81 |
| FedEvp | 78.16 ± 4.26 | 83.92 ± 1.59 | 82.12 ± 1.84 |

We further investigate the impact of the number of source domains on prediction performance for the target domain. Specifically, we conduct experiments under the same settings as in Table 8 while controlling the number of domains. Our methods are compared to FedAvg using reduced numbers of domains: 7 domains (rotation starting at 75° and increasing to 165°), 10 domains (rotation starting at 30° and increasing to 165°), and 12 domains (rotation starting at 0° and increasing to 165°).

Table 8: Performance under different numbers of domains.

As the number of domains increases, FedAvg shows significant performance degradation across all heterogeneity settings. This indicates the vulnerability of standard methods to evolving distribution shifts. Both FedEvolve and FedEvp are robust against an increasing number of domains, maintaining or improving performance. In particular, by incorporating more source domains, FedEvolve is able to fully learn the transition between two consecutive domains. FedEvp, in turn, is less sensitive to domain transitions, performing consistently well across the different settings.
The robustness of our methods contrasts sharply with the performance drop observed in FedAvg, highlighting the importance of handling distribution variability in federated learning.

Discussion. The empirical evidence suggests that conventional FL algorithms cannot simultaneously handle evolving distribution shifts and client heterogeneity. Evolving distribution shifts can be viewed as a specific form of data heterogeneity affecting client devices, yet present personalization strategies, designed for static data heterogeneity, fail to adapt models to unseen distributions. By simply tuning clients on known domains without considering the shift between training and test data, these methods may inadvertently increase the model's bias towards the training data, resulting in performance that is sometimes inferior to that of non-personalized algorithms. While continual FL frameworks account for dynamic distribution shifts during training, they primarily concentrate on preventing catastrophic forgetting of prior tasks or domains rather than adapting to new, unseen ones, which makes them inadequate for managing evolving distribution shifts. When the distribution of a target domain is predictable from existing data, however, our methods explicitly learn and leverage the pattern of distribution transitions, enabling extrapolation of the model to the target domain. Our methods therefore mitigate the performance drop and achieve the best results.

## 6 Conclusions

This paper studies FL under evolving distribution shifts. We explored the impacts of evolving shifts and client heterogeneity on FL systems and proposed two algorithms: *FedEvolve*, which precisely captures the evolving pattern between two consecutive domains, and *FedEvp*, which learns a domain-invariant representation for all domains with the aid of personalization. Extensive experiments show that both algorithms outperform state-of-the-art methods.
## References Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, and Ethan Fetaya. Personalized federated learning with gaussian processes. Advances in Neural Information Processing Systems, 34, 2021. Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. Federated learning with personalization layers, 2019. Wenxuan Bao, Tianxin Wei, Haohan Wang, and Jingrui He. Adaptive test-time personalization for federated learning. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https: //openreview.net/forum?id=rbw9xCU6Ci. Gilles Blanchard, Aniket Anand Deshmukh, Urun Dogan, Gyemin Lee, and Clayton Scott. Domain generalization by marginal transfer learning. arXiv preprint arXiv:1711.07910, 2017. Christopher Briggs, Zhong Fan, and Peter Andras. Federated learning with hierarchical clustering of local updates to improve training on non-iid data. In 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–9. IEEE, 2020. Fernando E Casado, Dylan Lema, Marcos F Criado, Roberto Iglesias, Carlos V Regueiro, and Senén Barro. Concept drift detection and adaptation for federated and continual learning. Multimedia Tools and Applications, pp. 1–23, 2022. Hong-You Chen and Wei-Lun Chao. On bridging generic and personalized federated learning for image classification. In International Conference on Learning Representations, 2021. Hong-You Chen and Wei-Lun Chao. On bridging generic and personalized federated learning for image classification. In International Conference on Learning Representations, 2022. URL https://openreview. net/forum?id=I1hQbx10Kxn. Gregory Cohen, Saeed Afshar, Jonathan Tapson, and Andre Van Schaik. Emnist: Extending mnist to handwritten letters. In 2017 international joint conference on neural networks (IJCNN), pp. 2921–2926. IEEE, 2017. Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. 
In International Conference on Machine Learning, pp. 2089–2099. PMLR, 2021. Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Adaptive personalized federated learning. arXiv preprint arXiv:2003.13461, 2020a. Yuyang Deng, Mohammad Mahdi Kamani, and Mehrdad Mahdavi. Distributionally robust federated averaging. Advances in Neural Information Processing Systems, 33:15111–15122, 2020b. Aniket Anand Deshmukh, Yunwen Lei, Srinagesh Sharma, Urun Dogan, James W Cutler, and Clayton Scott. A generalization error bound for multi-class domain generalization. arXiv:1905.10392, 2019. Enmao Diao, Jie Ding, and Vahid Tarokh. Heterofl: Computation and communication efficient federated learning for heterogeneous clients. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=TNkPBBYFkXg. Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. Advances in Neural Information Processing Systems, 33:3557–3568, 2020. M. Ghifary, W. Kleijn, M. Zhang, and D. Balduzzi. Domain generalization for object recognition with multitask autoencoders. In 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2551–2559, 2015. doi: 10.1109/ICCV.2015.293. Avishek Ghosh, Jichan Chung, Dong Yin, and Kannan Ramchandran. An efficient framework for clustered federated learning. Advances in Neural Information Processing Systems, 33:19586–19597, 2020. Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, and Alexei A Efros. A century of portraits: A visual historical record of american high school yearbooks. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 1–7, 2015. Yongxin Guo, Tao Lin, and Xiaoying Tang. Towards federated learning on time-evolving heterogeneous data. arXiv preprint arXiv:2112.13246, 2021. Sharut Gupta, Kartik Ahuja, Mohammad Havaei, Niladri Chatterjee, and Yoshua Bengio. 
Fl games: A federated learning framework for distribution shifts. In Workshop on Federated Learning: Recent Advances and New Challenges (in Conjunction with NeurIPS 2022), 2022.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.

Judy Hoffman, Trevor Darrell, and Kate Saenko. Continuous manifold based adaptation for evolving visual domains. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 867–874, 2014.

Hong Liu, Mingsheng Long, Jianmin Wang, and Yu Wang. Learning to adapt to evolving domains. In Advances in Neural Information Processing Systems, 2020.

Hong Huang, Lan Zhang, Chaoyue Sun, Ruogu Fang, Xiaoyong Yuan, and Dapeng Wu. Distributed pruning towards tiny neural networks in federated learning. In 2023 IEEE 43rd International Conference on Distributed Computing Systems (ICDCS), pp. 190–201. IEEE, 2023.

Yusuke Iwasawa and Yutaka Matsuo. Test-time classifier adjustment module for model-agnostic domain generalization. Advances in Neural Information Processing Systems, 34, 2021.

Liangze Jiang and Tao Lin. Test-time robust personalization for federated learning. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=3aBuJEza5sq.

Rawal Khirodkar, Donghyun Yoo, and Kris Kitani. Domain randomization for scene-specific car detection and pose estimation. In WACV, pp. 1932–1940. IEEE, 2019.

Ananya Kumar, Tengyu Ma, and Percy Liang. Understanding self-training for gradual domain adaptation. In International Conference on Machine Learning, pp. 5468–5479. PMLR, 2020.

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429–450, 2020.

Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization.
In International Conference on Machine Learning, pp. 6357–6368. PMLR, 2021a. Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 6357–6368. PMLR, 18–24 Jul 2021b. URL https://proceedings.mlr.press/v139/li21h.html. Renpu Liu, Cong Shen, and Jing Yang. Federated representation learning in the under-parameterized regime. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview. net/forum?id=LIQYhV45D4. Yuhang Ma, Zhongle Xie, Jue Wang, Ke Chen, and Lidan Shou. Continual federated learning based on knowledge distillation. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, volume 3, 2022. Y. Mansour, Mehryar Mohri, Jae Ro, and Ananda Theertha Suresh. Three approaches for personalization with applications to federated learning. ArXiv, abs/2002.10619, 2020. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communicationefficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. PMLR, 2017. A. Tuan Nguyen, Philip Torr, and Ser-Nam Lim. Fedsr: A simple and effective domain generalization method for federated learning. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id= mrt90D00aQX. Tae Jin Park, Kenichi Kumatani, and Dimitrios Dimitriadis. Tackling dynamics in federated incremental learning with variational embedding rehearsal. arXiv preprint arXiv:2110.09695, 2021. Ali Pesaranghader and Herna Viktor. Fast hoeffding drift detection method for evolving data streams. pp. 96–111, 2016. ISBN 978-3-319-46226-4. doi: 10.1007/978-3-319-46227-1_7. 
Tiexin Qin, Shiqi Wang, and Haoliang Li. Generalizing to evolving domains with latent structure-aware sequential autoencoder. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pp. 18062–18082. PMLR, 17–23 Jul 2022. Mohammad Mahfujur Rahman, Clinton Fookes, Mahsa Baktashmotlagh, and Sridha Sridharan. Correlationaware adversarial domain adaptation and generalization. Pattern Recognition, 100:107124, 2020. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, and Ali Jadbabaie. Robust federated learning: The case of affine distribution shifts. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/ f5e536083a438cec5b64a4954abc17f1-Abstract.html. Elsa Rizk, Stefan Vlaski, and Ali H. Sayed. Dynamic federated learning, 2020. Alexander Robey, George J Pappas, and Hamed Hassani. Model-based domain generalization. In NeurIPS, 2021. Felix Sattler, Klaus-Robert Müller, and Wojciech Samek. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. IEEE Transactions on Neural Networks and Learning Systems, 2020. MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, and Seong-Lyun Kim. Xor mixup: Privacy-preserving data augmentation for one-shot federated learning. arXiv preprint arXiv:2006.05148, 2020. Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning, 2017. Irene Tenison, Sai Aravind Sreeramadas, Vaikkunth Mugunthan, Edouard Oyallon, Eugene Belilovsky, and Irina Rish. Gradient masked averaging for federated learning. arXiv preprint arXiv:2201.11986, 2022. Hao Wang, Hao He, and Dina Katabi. Continuously indexed domain adaptation. In ICML, 2020. Kangkang Wang, Rajiv Mathews, Chloé Kiddon, Hubert Eichner, Françoise Beaufays, and Daniel Ramage. 
Federated evaluation of on-device personalization. arXiv preprint arXiv:1910.10252, 2019. Shiqiang Wang and Mingyue Ji. A unified analysis of federated learning with arbitrary client participation. Advances in Neural Information Processing Systems, 35:19124–19137, 2022. William Wei Wang, Gezheng Xu, Ruizhi Pu, Jiaqi Li, Fan Zhou, Changjian Shui, Charles Ling, Christian Gagné, and Boyu Wang. Evolving domain generalization. arXiv preprint arXiv:2206.00047, 2022. Bingzhe Wu, Zhipeng Liang, Yuxuan Han, Yatao Bian, Peilin Zhao, and Junzhou Huang. Drflm: Distributionally robust federated learning with inter-client noise via local mixup. arXiv preprint arXiv:2204.07742, 2022. Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology, 10(2):1–19, 2019. Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang, and Sung Ju Hwang. Federated continual learning with weighted inter-client transfer. In International Conference on Machine Learning, pp. 12073–12086. PMLR, 2021. Tao Yu, Eugene Bagdasaryan, and Vitaly Shmatikov. Salvaging federated learning by local adaptation. arXiv preprint arXiv:2002.04758, 2020. Mikhail Yurochkin, Mayank Agarwal, Soumya Ghosh, Kristjan Greenewald, Nghia Hoang, and Yasaman Khazaeni. Bayesian nonparametric federated learning of neural networks. In International Conference on Machine Learning, pp. 7252–7261. PMLR, 2019. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. Chen Zhang, Yu Xie, Hang Bai, Bin Yu, Weihong Li, and Yuan Gao. A survey on federated learning. Knowledge-Based Systems, 216:106775, 2021a. ISSN 0950-7051. doi: https://doi.org/10.1016/j.knosys. 2021.106775. URL https://www.sciencedirect.com/science/article/pii/S0950705121000381. Marvin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation. arXiv preprint arXiv:2110.09506, 2021b. 
Marvin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation. Advances in Neural Information Processing Systems, 35:38629–38642, 2022.

Youshan Zhang and Brian D Davison. Adversarial continuous learning in unsupervised domain adaptation. In Pattern Recognition. ICPR International Workshops and Challenges: Virtual Event, January 10–15, 2021, Proceedings, Part II, pp. 672–687. Springer, 2021.

Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, and Dacheng Tao. Domain generalization via entropy regularization. In NeurIPS, volume 33, 2020.

Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. 2018. doi: 10.48550/ARXIV.1806.00582. URL https://arxiv.org/abs/1806.00582.

Chen Zhu, Zheng Xu, Mingqing Chen, Jakub Konečný, Andrew Hard, and Tom Goldstein. Diurnal or nocturnal? Federated learning of multi-branch networks from periodically shifting distributions. In International Conference on Learning Representations, 2021.

## A Algorithm

We present the pseudo-code for FedEvolve and FedEvp in Alg. 1 and Alg. 2. For each update, we randomly sample a subset of the dataset for training instead of using the whole dataset.

Algorithm 1 FedEvolve

Require: number of clients K; client participation ratio r; step size η; number of local training updates τ; communication rounds T; number of source domains M; initial global parameters ϕ and ψ; local datasets D^k_m and their known classes Y_{D^k_m} for m ∈ {1, ..., M}, k ∈ {1, ..., K}.

1: for t ∈ {1, ..., T} do
2:   server samples rK clients as I_t from all clients
3:   server sends ϕ, ψ to I_t
4:   for each client k ∈ I_t in parallel do
5:     client k initializes ϕ̃_k := ϕ, ψ̃_k := ψ
6:     for τ local training iterations do
7:       for m ∈ {1, ..., M − 1} do
8:         A ← RandomSample(D^k_m)
9:         B ← RandomSample(D^k_{m+1})
10:        for y ∈ Y_{D^k_m} do
11:          A_y ← {(x_i, y_i) ∈ A | y_i = y}
12:          C^k_{m,y} = (1/|A_y|) Σ_{(x_i,y_i)∈A_y} f_{ϕ̃_k}(x_i)
13:        end for
14:        ℓ = 0
15:        for (x, y) ∈ B do
16:          ℓ = ℓ − (1/|B|) log [ exp(−d(f_{ψ̃_k}(x), C^k_{m,y})) / Σ_{y′∈Y_{D^k_m}} exp(−d(f_{ψ̃_k}(x), C^k_{m,y′})) ]
17:        end for
18:        ϕ̃_k, ψ̃_k = GradientDescent(ℓ; ϕ̃_k, ψ̃_k, η)
19:      end for
20:    end for
21:    client k sends local parameters ϕ̃_k, ψ̃_k to the server
22:  end for
23:  ϕ = (1/|I_t|) Σ_{k∈I_t} ϕ̃_k
24:  ψ = (1/|I_t|) Σ_{k∈I_t} ψ̃_k
25: end for
26: Output ϕ and ψ

Algorithm 2 FedEvp

Require: number of clients K; client participation ratio r; step size η; number of local training updates τ; communication rounds T; number of source domains M; initial global parameters ϕ and w; local datasets D^k_m and their known classes Y_{D^k_m} for m ∈ {1, ..., M}, k ∈ {1, ..., K}.

1: for t ∈ {1, ..., T} do
2:   server samples rK clients as I_t from all clients
3:   server sends ϕ, w to I_t
4:   for each client k ∈ I_t in parallel do
5:     client k initializes ϕ̃_k := ϕ, w̃_k := w
6:     for τ local training iterations do
7:       for y ∈ Y_{D^k_m} do
8:         C^k_{0,y} = 0
9:       end for
10:      for m ∈ {1, ..., M} do
11:        A ← RandomSample(D^k_m)
12:        ℓ_e = −(1/|A|) Σ_{(x_i,y_i)∈A} log [ exp(g^{w̃_k}_{y_i}(f_{ϕ̃_k}(x_i))) / Σ_{y′∈Y_{D^k_m}} exp(g^{w̃_k}_{y′}(f_{ϕ̃_k}(x_i))) ]
13:        for y ∈ Y_{D^k_m} do
14:          A_y ← {(x_i, y_i) ∈ A | y_i = y}
15:          C^k_{m,y} = ((m − 1)/m) C^k_{m−1,y} + (1/m)(1/|A_y|) Σ_{(x_i,y_i)∈A_y} f_{ϕ̃_k}(x_i)
16:        end for
17:        if m ≥ 2 then
18:          ℓ_f = 0
19:          for (x, y) ∈ A do
20:            ℓ_f = ℓ_f − (1/|A|) log [ exp(−d(f_{ϕ̃_k}(x), C^k_{m,y})) / Σ_{y′∈Y_{D^k_m}} exp(−d(f_{ϕ̃_k}(x), C^k_{m,y′})) ]
21:          end for
22:          ϕ̃_k, w̃_k = GradientDescent(ℓ_f + ℓ_e; ϕ̃_k, w̃_k, η)
23:        end if
24:      end for
25:    end for
26:    client k sends local parameters ϕ̃_k, w̃_k to the server
27:  end for
28:  ϕ = (1/|I_t|) Σ_{k∈I_t} ϕ̃_k
29:  w = (1/|I_t|) Σ_{k∈I_t} w̃_k
30: end for
31: Server Output ϕ, w
32: for each client k do
33:   Client Output ϕ̃_k, w̃_k = personalize(ϕ, w, D^k)
34: end for

## B Datasets

## B.1 Rotated MNIST (Ghifary et al., 2015) and Rotated EMNIST (Ghifary et al., 2015)

For Rotated MNIST (RMNIST), we generate 12 domains by applying rotations with angles θ ∈ {0°, 15°, ..., 165°}, one angle per domain. For Rotated EMNIST (REMNIST), we generate 12 domains by applying rotations with angles θ ∈ {0°, 8°, ..., 88°}, one angle per domain.

## B.2 Circle (Pesaranghader & Viktor, 2016)

We follow Pesaranghader & Viktor (2016) to generate this dataset. This synthetic dataset consists of 30 Gaussian distributions centered on a half circle with standard deviation 0.6, and the radius r is set to 10. Each data point has two attributes, and the number of classes is 2. The decision boundary is (x − x_0)² + (y − y_0)² ≤ r², where (x_0, y_0) are the coordinates of the circle's center (we set it to (0, 0)).

## B.3 Portraits (Ginosar et al., 2015)

The Portraits dataset contains human face images from yearbooks spanning 1905 to 2013. We partition the data into nine domains by segmenting the dataset into 12-year intervals. All images are resized to 32×32 without any augmentation.
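To make the Circle construction in B.2 concrete, the following NumPy sketch generates the 30 Gaussian domains and labels points by the stated decision boundary; the even spacing of the centers on the half circle and the per-domain sample count are our assumptions, not specified above.

```python
import numpy as np

def make_circle_domains(n_domains=30, n_per_domain=100, radius=10.0, std=0.6, seed=0):
    """Synthetic Circle dataset: one 2-D Gaussian per domain, with centers
    placed on a half circle of the given radius; labels follow the
    decision boundary (x - x0)^2 + (y - y0)^2 <= r^2 with center (0, 0)."""
    rng = np.random.default_rng(seed)
    domains = []
    angles = np.linspace(0.0, np.pi, n_domains)  # half circle
    for theta in angles:
        center = radius * np.array([np.cos(theta), np.sin(theta)])
        x = rng.normal(loc=center, scale=std, size=(n_per_domain, 2))
        y = (x[:, 0] ** 2 + x[:, 1] ** 2 <= radius ** 2).astype(int)
        domains.append((x, y))
    return domains
```

Each returned domain is an (x, y) pair that can be split into the per-client datasets D^k_m.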
## B.4 Caltran (Hoffman et al., 2014)

This real surveillance dataset comprises images captured by a fixed traffic camera deployed at an intersection. The images come with time attributes. We categorize the images into 12 distinct domains based on their capture time throughout the day: each domain represents a 2-hour interval, so a 24-hour day is evenly divided into the 12 domains. We resize images in Caltran to 224×224.

## C Implementation

All experiments are conducted on a server equipped with multiple NVIDIA A5000 GPUs, two AMD EPYC 7313 CPUs, and 256GB memory. The code is implemented with Python 3.8 and PyTorch 1.13.0 on Ubuntu 20.04, based on the implementation in Jiang & Lin (2023). To evaluate our methods, we consider classification tasks using various network architectures and report the average accuracy over three different random seeds. Due to the constraints of our computing resources, our experiments involve between 10 and 20 clients and are conducted over 50 communication rounds; the number of personalization epochs is 1 for PFL methods, including FedEvp.

- **RMNIST**: For the Rotated MNIST dataset, we employ a CNN with four convolutional layers, each equipped with a 3×3 kernel. Group Normalization with groups of 8 channels is applied after each convolution for stabilization. The convolutional layers are followed by two linear layers with a hidden dimension of 64. The four convolutional layers and the first linear layer form the representation functions (fϕ and fψ in FedEvolve, fϕ in FedEvp), with the final linear layer serving as the classifier (fw in FedEvp). We employ an SGD optimizer with a weight decay of 5e-4 and conduct local training for 10 epochs per communication round.
- **REMNIST**: For the Rotated EMNIST dataset, we employ the same CNN as for RMNIST. We use an Adam optimizer with a weight decay of 1e-4 and run local training for 10 epochs per communication round.
- **Circle**: For the Circle dataset, we utilize a five-layer Multi-Layer Perceptron (MLP). This includes three dense layers (2×256, 256×256, 256×256) for feature representation (fϕ and fψ in FedEvolve, fϕ in FedEvp), linked by ReLU activations, and two subsequent linear layers (256×64, 64×2) that function as the classifier (fw in FedEvp). Given data constraints, we utilize 10 clients and train for 5 epochs using Adam with a weight decay of 5e-4.
- **Portraits**: For this dataset, images are resized to 32×32 and processed using a WideResNet architecture. The convolution layers along with the average pooling layer act as the representation function, and a subsequent linear layer serves as the classifier. The model is trained among 20 clients over 5 epochs using an Adam optimizer with a weight decay of 5e-4.
- **Caltran**: We deploy ResNet18 for the Caltran dataset, with the last linear layer used as the classifier. The representation function comprises four residual convolution blocks and an average pooling layer. Starting from pre-trained weights, we tune the model using an SGD optimizer with a weight decay of 5e-4. Given data limitations, training involves 10 clients over 5 epochs.
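As a concrete reference for what these implementations optimize, the sketch below computes the per-class prototypes and the prototypical (negative log-softmax over distances) loss of Algorithms 1 and 2 in plain NumPy. The choice of squared Euclidean distance for d and all function names are our assumptions.

```python
import numpy as np

def class_prototypes(features, labels):
    """Per-class mean feature: C_y = (1/|A_y|) * sum of f(x_i) with y_i = y."""
    return {int(y): features[labels == y].mean(axis=0) for y in np.unique(labels)}

def prototypical_loss(features, labels, prototypes):
    """Mean negative log-softmax over negative squared distances to the
    prototypes, as in the prototypical objective of FedEvolve/FedEvp."""
    classes = sorted(prototypes)
    protos = np.stack([prototypes[c] for c in classes])                  # (C, D)
    dist = ((features[:, None, :] - protos[None, :, :]) ** 2).sum(-1)    # (N, C)
    logits = -dist
    logits = logits - logits.max(axis=1, keepdims=True)                  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.array([classes.index(int(y)) for y in labels])
    return -log_probs[np.arange(len(labels)), idx].mean()
```

In FedEvolve, prototypes come from domain m while the loss is evaluated on samples from domain m+1; in FedEvp, prototypes are accumulated as a running average across domains.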
Networks for each dataset are presented in Table 9.

Table 9: Networks for datasets

| Dataset | Input Dimension | Number of Classes | Network |
|-----------|---------------|----------|------------|
| RMNIST | 28 × 28 | 10 | CNN |
| REMNIST | 28 × 28 | 26 | CNN |
| Circle | 2 | 2 | MLP |
| Portraits | 32 × 32 | 2 | WideResNet |
| Caltran | 3 × 224 × 224 | 2 | ResNet18 |

For each dataset, we search over the learning rate for each algorithm to find the best results. The training details are given in Table 10.

Table 10: Training details for datasets

| Dataset | Num of Clients | Batch Size | Learning Rate Range |
|-----------|----------|---------|------------------------------|
| RMNIST | 20 | 32 | 1e-3, 1e-2, 1e-1 |
| REMNIST | 20 | 96 | 1e-3, 5e-3, 1e-2, 5e-2, 1e-1 |
| Circle | 10 | 32 | 1e-6, 5e-6, 1e-5, 5e-5, 1e-4 |
| Portraits | 20 | 32 | 1e-3, 5e-3, 1e-2 |
| Caltran | 10 | 32 | 1e-5, 5e-5, 1e-4, 5e-4 |

We use the same search strategy for hyperparameters to tune all models.

- For GMA (Tenison et al., 2022), we set the masking threshold to 0.1, selected from {0.1, 0.2, 0.3, ..., 1.0}.
- For FedRep (Collins et al., 2021), FedRod (Chen & Chao, 2022), and FedTHE (Jiang & Lin, 2023), the last fully connected layer of the model is used as the head.
- For Ditto (Li et al., 2021a), the regularization factor λ is set to 0.1.
- For MEMO (Zhang et al., 2021b), we use 32 augmentations and 3 optimization steps.
- For T3A (Iwasawa & Matsuo, 2021), M = 50 is used in our experiments.
- For FedSR (Nguyen et al., 2022), we follow the same setting as in their paper: α_L2R = 0.01 and α_CMI = 0.001.
## D Supplementary Results

We compare the p-values of our proposed methods, FedEvolve and FedEvp, with various baseline federated learning algorithms in Table 11. The p-values from our t-test statistical analysis indicate that our methods significantly outperform the baseline methods.

Table 11: P-values comparing FedEvolve and FedEvp with baseline methods on rotated MNIST.

| Method | FedAvg | GMA | Memo(G) | FedAvgFT | APFL | FedRep | Ditto |
|-----------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| FedEvolve | 5.17 × 10−4 | 4.42 × 10−4 | 7.69 × 10−4 | 1.44 × 10−3 | 2.26 × 10−4 | 9.41 × 10−4 | 7.27 × 10−4 |
| FedEvp | 7.33 × 10−3 | 6.76 × 10−3 | 9.82 × 10−3 | 2.42 × 10−3 | 6.20 × 10−6 | 4.60 × 10−4 | 2.16 × 10−5 |

| Method | FedRod | Memo(P) | T3A | FedTHE | FedSR | CFL | CFeD |
|-----------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| FedEvolve | 6.21 × 10−4 | 1.04 × 10−3 | 8.09 × 10−4 | 8.92 × 10−4 | 6.27 × 10−4 | 2.46 × 10−4 | 1.41 × 10−3 |
| FedEvp | 2.71 × 10−3 | 1.20 × 10−3 | 7.71 × 10−4 | 2.27 × 10−4 | 2.77 × 10−2 | 4.04 × 10−3 | 4.40 × 10−2 |

Table 12: Average accuracy over three runs of experiments on rotated MNIST with different rotation degrees for the target domain.
| Method | Dir→∞ (Client) | Dir→∞ (Server) | Dir=1.0 (Client) | Dir=1.0 (Server) | Dir=0.1 (Client) | Dir=0.1 (Server) |
|-------------------|------------|------------|------------|------------|------------|------------|
| FedAvg (110°) | 80.96±1.62 | 80.92±0.23 | 79.41±0.30 | 81.63±1.71 | 70.16±0.48 | 71.93±2.06 |
| FedAvg (120°) | 66.39±0.92 | 66.61±1.17 | 64.30±1.36 | 65.08±1.89 | 54.74±1.40 | 53.62±1.01 |
| FedAvg (130°) | 50.53±0.88 | 51.04±2.72 | 48.64±1.28 | 48.41±1.16 | 40.87±1.44 | 40.41±0.91 |
| FedEvolve (110°) | 86.66±0.66 | 86.62±1.60 | 85.57±1.35 | 83.11±2.31 | 86.92±1.72 | 74.67±2.29 |
| FedEvolve (120°) | 78.09±0.88 | 77.80±0.82 | 74.85±2.86 | 72.81±5.46 | 78.43±4.92 | 61.57±7.44 |
| FedEvolve (130°) | 65.13±1.79 | 64.64±1.85 | 62.88±2.23 | 60.47±3.91 | 69.77±4.30 | 50.82±5.58 |
| FedEvp (110°) | 84.68±1.51 | 86.07±1.38 | 85.84±1.33 | 83.33±3.50 | 84.99±2.32 | 69.41±2.47 |
| FedEvp (120°) | 72.91±0.65 | 75.66±0.82 | 74.93±2.54 | 71.82±2.83 | 79.48±1.89 | 62.28±3.88 |
| FedEvp (130°) | 61.67±0.31 | 64.64±0.16 | 65.79±2.36 | 62.99±2.11 | 72.11±3.01 | 53.84±4.05 |

## D.1 Impact of Changing Pattern

In previous experiments, we primarily focused on an invariant changing pattern in the image rotation experiments. Here, we examine whether our methods are robust against an unexpected pattern. Specifically, we simulate an unexpected domain by rotating images from the target domain by an additional 10° or 20°, so the images experience a 120° or 130° rotation instead of the expected 110°. To prevent confusion between numerals like 6 and 9 when rotated by 180°, we set the incremental rotation degree to 10°. This experiment evaluates whether our methods can handle deviations from anticipated patterns.
As shown in Table 12, all methods exhibit a significant performance drop when the test data distribution changes substantially; however, our methods still outperform the baseline, and their drop is smaller than the baseline's. Notably, FedEvp demonstrates superior performance compared to FedEvolve when clients are heterogeneous. This difference arises because FedEvolve explicitly learns the distribution transition between consecutive domains, while FedEvp learns evolving-domain-invariant features. Consequently, when the distribution transition deviates from the learned pattern, the performance of FedEvolve is adversely affected, whereas FedEvp remains less influenced by the change.
Review 1:

Summary: The paper addresses the problem of heterogeneity of clients' data distributions that evolve non-trivially over time. To tackle the challenges, the paper proposes two algorithms, FedEvolve and FedEvp. FedEvolve learns two different representation mappings of the previous and current time steps' data, at the expense of the computational and storage cost of maintaining two representation spaces. FedEvp maintains only a single space for efficiency. In the empirical evaluations, the proposed methods outperform the prior art by noticeable margins.

Strengths and Weaknesses:

**Strengths**
- Application of time-evolving distribution domain adaptation to the federated learning context.
- Empirical results seem good.

**Weaknesses**
- The technical contribution is weak. The methods are quite common, similar to conventional large margin embedding (Weinberger and Chapelle, Large Margin Taxonomy Embedding with an Application to Document Categorization, NeurIPS 2008).
- The compared methods are one year old. In the rapidly changing field of ML, there are a number of relevant methods to be compared. To name a few:
  - Abdelmoniem et al., Resource-Efficient Federated Learning, ICML 2023 workshop
  - Kim et al., Clustered Federated Learning via Gradient-based Partitioning, ICML 2024
  - Yang et al., SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning, NeurIPS 2023
- The content of the paper is quite slim, as if it were a conference submission. There is no in-depth analysis or deep insight about the problem, as is typically shown in a journal paper.

Requested Changes:
- Add analysis of why this method should be the best way so far to address this task.
- Compare with more recent methods.

Broader Impact Concerns: N/A

==================================================

Review 2:

Summary: The paper considers federated learning for non-stationary and heterogeneous client data distributions. The non-stationarity covers domain shifts at predefined time steps.
Two algorithms are proposed to deal with the variation in the data distribution. Empirical evaluations with synthetic and image-based data show that the proposed algorithms perform better than alternative federated learning algorithms.

Strengths and Weaknesses:

Enabling federated learning to work under realistic conditions when the data distributions are not static (and not homogeneous) is certainly important. In many domains, data distributions change gradually and at time steps that are not known a priori, therefore the setting is somewhat limited. Nevertheless, there are domains where data arrives in batches (perhaps from different sources), and the setting discussed can be relevant.

The presentation of the algorithms is rather superficial, with too many details left out or specified only informally. This includes the imprecise definition of the mapping functions, the description of the training algorithm, and even the pseudocode in the Appendix, which still leaves functions such as Update in the air. Source code is not provided, and given the superficial description of the algorithms, the experiments would be rather difficult to replicate. Given the profile of the journal, this is a considerable weakness for a submission with a largely empirical contribution.

The number of baselines is reasonable. I would have been tempted to include centralized algorithms as well.

Requested Changes: Defining the algorithm in a more precise way with all the necessary details is really a must. I would also suggest the authors provide access to the source code and data sets.

Broader Impact Concerns: None.

==================================================

Review 3:

Summary: This paper aims to tackle the challenge where the client data distributions change over time, and the dynamics, i.e., the evolving pattern and the distribution shift between the training and testing data, in the federated learning framework.
Two new approaches, named FedEvolve and FedEvp, are proposed to capture the evolving patterns of the clients during training and to be test-robust under evolving distribution shifts. Experimental results show the effectiveness of the proposed methods.

Strengths and Weaknesses: I have few technical comments. Combining the federated setting with evolving distribution shifts is logical. However, the proposed setting appears to be a straightforward combination of the two without introducing significant challenges, at least as per the methods presented in this paper. While the technical aspects don't particularly stand out to me, it is acceptable to initially formulate this problem and design a model to address this scenario.

Requested Changes:
- In the current setting, the authors implicitly consider an invariant changing pattern, suggesting that the evolving patterns remain constant despite the distribution shifts. How would the methods handle a scenario where the changing pattern itself changes? Please discuss this aspect in your revision.
- Is there any real time-series data available to test the performance of your methods?
- Would using more domains to predict the new domain representation enhance performance?
- Could you discuss "Continuously Indexed Domain Adaptation" and explore whether their techniques could be beneficial for your approach?

Broader Impact Concerns: I do not foresee adverse ethical implications of the proposed work.

==================================================
# ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations

Anonymous authors

Paper under double-blind review

## Abstract

The need for abundant labelled data in supervised Adversarial Training (AT) has prompted the use of Self-Supervised Learning (SSL) techniques with AT. However, the direct application of existing SSL methods to adversarial training has been sub-optimal due to the increased training complexity of combining SSL with AT. A recent approach, DeACL (Zhang et al., 2022), mitigates this by utilizing supervision from a standard SSL teacher in a distillation setting, to mimic supervised AT. However, we find that there is still a large performance gap when compared to supervised adversarial training, specifically on larger models. In this work, we investigate the key reason for this gap and propose Projected Feature Adversarial Training (ProFeAT) to bridge it. We show that the sub-optimal distillation performance is a result of a mismatch in the training objectives of the teacher and student, and propose to use a projection head at the student, which allows it to leverage weak supervision from the teacher while also being able to learn adversarially robust representations that are distinct from the teacher's. We further propose appropriate attack and defense losses at the feature and projector, alongside a combination of weak and strong augmentations for the teacher and student respectively, to improve the training data diversity without increasing the training complexity. Through extensive experiments on several benchmark datasets and models, we demonstrate significant improvements in both clean and robust accuracy when compared to existing SSL-AT methods, setting a new state-of-the-art. We further report on-par or improved performance when compared to TRADES, a popular supervised-AT method.
## 1 Introduction Deep Neural Networks are known to be vulnerable to crafted imperceptible input-space perturbations known as *Adversarial attacks* (Szegedy et al., 2013), which can be used to fool classification networks into predicting any desired output, leading to disastrous consequences. Amongst the diverse attempts at improving the adversarial robustness of Deep Networks, Adversarial Training (AT) (Madry et al., 2018; Zhang et al., 2019) has been the most successful. This involves the generation of adversarial attacks by maximizing the training loss, and further minimizing the loss on the generated attacks for training. While adversarial training based methods have proved to be robust against various attacks developed over time (Carlini et al., 2019; Croce & Hein, 2020; Sriramanan et al., 2020), they require significantly more training data when compared to standard training (Schmidt et al., 2018), incurring a large annotation cost. This motivates the need for self-supervised learning (SSL) of robust representations, followed by lightweight standard training of the classification head. Motivated by the success of contrastive learning for standard self-supervised learning (Van den Oord et al., 2018; Chen et al., 2020b; He et al., 2020), several works have attempted to use contrastive learning for self-supervised adversarial training as well (Jiang et al., 2020; Kim et al., 2020; Fan et al., 2021). While this strategy works well in a full network fine-tuning setting, the performance is sub-optimal when the robustly pretrained feature encoder is frozen while training the classification head (linear probing), demonstrating that the representations learned are indeed sub-optimal. 
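The min-max structure of adversarial training described above can be illustrated with a minimal ℓ∞ PGD attack. The sketch below uses a binary logistic model with an analytic gradient so it stays self-contained; the model, step size, and step count are illustrative assumptions, not taken from any cited method.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=8 / 255, alpha=2 / 255, steps=10):
    """l_inf PGD on a binary logistic model p(y=1|x) = sigmoid(x.w + b):
    repeatedly step in the sign of the loss gradient, then project back
    onto the eps-ball around x and the valid input range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # model predictions
        grad = np.outer(p - y, w)                   # d(cross-entropy)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)       # gradient *ascent* step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # stay a valid image
    return x_adv
```

Adversarial training then minimizes the training loss on `x_adv` instead of `x`, closing the min-max loop.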
A recent work, Decoupled Adversarial Contrastive Learning (DeACL) (Zhang et al., 2022), demonstrated significant improvements in performance and training efficiency by splitting this combined self-supervised adversarial training into two stages; first, where a standard self-supervised model is trained, and second, where this pretrained model is used as a teacher to provide supervision to the adversarially trained student network. Although the performance of this method is on par with supervised adversarial training on small model architectures (ResNet-18), we find that it does not scale to larger models such as WideResNet-34-10, which is widely reported in the adversarial ML literature. In this work, we aim to bridge the performance gap between self-supervised and supervised adversarial training methods, and improve the scalability of the former to larger model capacities. We utilize the distillation setting discussed above, where a standard self-supervised trained teacher provides supervision to the student. In contrast to a typical distillation scenario, the student's objective or its *ideal goal* is not to replicate the teacher, but to leverage weak supervision from the teacher while also learning adversarially robust representations. This involves a trade-off between the sensitivity towards changes that flip the class of an image (for better clean accuracy) and invariance towards imperceptible perturbations that preserve the true class (for adversarial robustness) (Tramèr et al., 2020). Towards this, we propose to impose similarity with respect to the teacher in the appropriate dimensions by applying the distillation loss in a *projected space* (output of a projection MLP layer), while enforcing the smoothness-based robustness loss in the *feature space* (output of a backbone/ feature extractor). 
However, we find that enforcing these losses at different layers results in training instability, and thus introduce the complementary loss (clean distillation loss or robustness loss) as a regularizer to improve training stability. We further propose to reuse the pretrained projection layer from the teacher model for better convergence. In line with the training objective, the adversarial attack used during training aims to find images that maximize the smoothness loss in the feature space, and cause misalignment between the teacher and student in the projected space. Further, since data augmentations are known to increase the training complexity of adversarial training resulting in a drop in performance (Zhang et al., 2022; Addepalli et al., 2022), we propose to use augmentations such as AutoAugment (or *strong augmentations*) only at the student for better attack diversity, while using spatial transforms such as pad and crop (PC) (or *weak augmentations*) at the teacher. We summarize our contributions below: - We propose Projected Feature Adversarial Training (ProFeAT) - a teacher-student distillation setting for self-supervised adversarial training, where the projection layer of the standard self-supervised pretrained teacher is utilized for student distillation. We further propose appropriate attack and defense losses for training, coupled with a combination of weak and strong augmentations for the teacher and student respectively. - Towards understanding why the projector helps, we first show that the compatibility between the training methodology of the teacher and the ideal goals of the student plays a crucial role in the student model performance in distillation. We further show that the use of a projector can alleviate the negative impact of the inherent misalignment of the above. - We demonstrate the effectiveness of the proposed approach on the standard benchmark datasets CIFAR-10 and CIFAR-100. 
We obtain significant gains of 3.5–8% in clean accuracy and ∼3% in robust accuracy on larger model capacities (WideResNet-34-10), and improved performance on small model architectures (ResNet-18), while also outperforming TRADES supervised training (Zhang et al., 2019) on larger models.

## 2 Preliminaries

We consider the problem of self-supervised learning of robust representations, where a self-supervised standard trained teacher model T is used to provide supervision to a student model S. The feature, projection and linear probing layers of the teacher are denoted as T_f, T_p and T_l, respectively. We will use projection layer and projector interchangeably in the rest of the paper. The composition of the feature extractor followed by the projector of the teacher is denoted as T_pf = T_p ∘ T_f, and the composition of the feature extractor followed by the linear probing layer is denoted as T_lf = T_l ∘ T_f. An analogous notation is followed for the student as well. The dataset used for self-supervised pretraining D consists of images x_i, i ≤ N. The adversarial image corresponding to the image x_i is denoted as x̃_i. We consider the ℓ∞-based threat model, where ∥x̃_i − x_i∥_∞ ≤ ε. The value of ε is set to 8/255 for CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), as is standard in the literature (Madry et al., 2018; Zhang et al., 2019). To evaluate the representations learned after self-supervised adversarial pretraining, we freeze the pretrained backbone and perform linear layer training on a downstream labeled dataset consisting of image-label pairs, popularly referred to as linear probing (Kumar et al., 2022). The training is done using the cross-entropy loss on clean samples unless specified otherwise.
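Linear probing as described above amounts to freezing the backbone and training only a linear head; a minimal NumPy sketch follows, where the random stand-in backbone (a fixed linear map) and all names are our assumptions for illustration.

```python
import numpy as np

def linear_probe(backbone_w, x, y, n_classes, lr=0.1, epochs=200, seed=0):
    """Freeze the feature extractor (here a fixed linear map backbone_w) and
    train only a linear head with softmax cross-entropy gradient descent."""
    rng = np.random.default_rng(seed)
    z = x @ backbone_w                         # frozen features, never updated
    W = rng.normal(scale=0.01, size=(z.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = z @ W + b
        logits = logits - logits.max(axis=1, keepdims=True)   # stability
        p = np.exp(logits)
        p = p / p.sum(axis=1, keepdims=True)
        W = W - lr * z.T @ (p - onehot) / len(y)              # head update only
        b = b - lr * (p - onehot).mean(axis=0)
    return W, b
```

In practice the backbone is a pretrained deep network; only the head parameters (W, b) are learned, so the quality of the frozen representations directly bounds the achievable accuracy.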
We compare the robustness of the representations on both in-distribution data, where the linear probing is done using the same distribution of images as that used for pretraining, and in a transfer learning setting, where the distribution of images in the downstream dataset is different from that used for pretraining. ## 3 Related Works Self Supervised Learning (SSL): Early works on SSL had focused on designing well-posed tasks called "pretext tasks", such as rotation prediction (Gidaris et al., 2018) and solving jigsaw puzzles (Noroozi & Favaro, 2016), to provide a meaningful supervisory signal in the absence of true labels. However, the design of these hand-crafted tasks involves manual effort, and is generally specific to the dataset and training task. To alleviate this problem, contrastive learning based SSL approaches have emerged as a promising direction (Van den Oord et al., 2018; Chen et al., 2020b; He et al., 2020), where different augmentations of a given anchor image form the *positives*, and augmentations of other images in the batch form the *negatives*. The training objective involves pulling the representations of the positives together, and pushing away the representations of negatives. Strong augmentations like random cropping and color jitter are applied to make the learning task sufficiently hard, while also enabling the learning of invariant representations. Self Supervised Adversarial Training: To alleviate the large sample complexity and training cost of adversarial training, there have been several works that have attempted self-supervised learning of adversarially robust representations. Chen et al. (2020a) propose AP-DPE, an ensemble adversarial pretraining framework where several pretext tasks like Jigsaw puzzles (Noroozi & Favaro, 2016), rotation prediction (Gidaris et al., 2018) and Selfie (Trinh et al., 2019) are combined to learn robust representations without task labels. Jiang et al. 
(2020) propose ACL, which combines the popular contrastive SSL method SimCLR (Chen et al., 2020b) with adversarial training, using dual batch normalization layers for the student model: one for the standard branch and another for the adversarial branch. RoCL (Kim et al., 2020) follows a similar approach to ACL by combining the contrastive objective with adversarial training to learn robust representations. Fan et al. (2021) propose AdvCL, which uses high-frequency components in data as augmentations in contrastive learning, performs attacks on unaugmented images, and uses a pseudo-label based loss for training to minimize the cross-task robustness transferability. Luo et al. (2023) study the role of augmentation strength in self-supervised contrastive adversarial training, and propose DynACL, which uses a "strong-to-weak" annealing schedule on augmentations. Additionally, motivated by Kumar et al. (2022), they propose DynACL++, which obtains pseudo-labels via k-means clustering on the clean branch of the DynACL pretrained network, and performs linear probing (LP) using these pseudo-labels, followed by adversarial full finetuning (AFT) of the backbone. This is a generic strategy that can be integrated with several algorithms, including ours.

Decoupled Adversarial Contrastive Learning (DeACL): While most self-supervised adversarial training methods aim at integrating contrastive learning with adversarial training, Zhang et al. (2022) showed that combining the two is a complex optimization problem due to their conflicting requirements. The authors propose DeACL, where a *teacher* model is first trained using existing self-supervised training methods such as SimCLR, followed by the adversarial training of a *student* model using supervision from the earlier-trained teacher in a distillation framework, as shown in Fig. 4.
The loss used during distillation is a combination of the cosine similarity between the representations of the clean image at the output of the teacher and student, along with a smoothness loss at the output of the student, as shown in Eq. 5. The attack used during training is generated by minimizing the cosine similarity between the teacher and student representations at the feature output, as shown in Eq. 6. We present a more detailed description of DeACL in Appendix A.1. The robust student model is used for the downstream tasks. While existing methods used ∼1000 epochs for contrastive adversarial training, the compute requirement of DeACL is much lower, since the first stage does not involve adversarial training, and the second stage is similar in complexity to supervised adversarial training (details in Appendix E). In this work, we investigate the shortcomings of the distillation framework presented in DeACL, and introduce a projection layer during the distillation process, along with well-designed training and attack losses, and an appropriate augmentation scheme for better attack diversity when compared to DeACL.

Table 1: **Diverse representations of Standard Trained (ST) and Adversarially Trained (AT) models** (CIFAR-100, WRN-34-10): ST models achieve 0% robust accuracy even with adversarial training of the linear layer, and AT models lose their robustness with standard full finetuning. SA: Standard Accuracy, RA-G: Robust Accuracy against GAMA (Sriramanan et al., 2020), RA-PGD20: Robust Accuracy against the PGD-20 (Madry et al., 2018) attack.
| Training/ LP Method | SA | RA-PGD20 | RA-G | Training/ LP Method | SA | RA-PGD20 | RA-G |
|------------------------------|-------|------------|--------|----------------------------|-------|------------|--------|
| Standard trained model | 80.86 | 0.00 | 0.00 | TRADES AT model | 60.22 | 28.67 | 26.36 |
| + Adversarial Linear Probing | 80.10 | 0.00 | 0.00 | + Standard Full Finetuning | 76.11 | 0.37 | 0.11 |

## 4 Proposed Method

## 4.1 Projection Layer In Self-Supervised Distillation

In this work, we follow the setting proposed by Zhang et al. (2022), where a standard self-supervised pretrained teacher provides supervision for the self-supervised adversarial training of the student model. This differs from a standard distillation setting (Hinton et al., 2015) because the representations of standard and adversarially trained models are known to be inherently different (Engstrom et al., 2019). Ilyas et al. (2019) attribute the adversarial vulnerability of models to the presence of non-robust features, which can be disentangled from the robust features learned by adversarially trained models. The difference in representations of standard and adversarially trained models is also evidenced by the fact that linear probing of standard trained models using adversarial training cannot produce robust models, as shown in Table 1. Similarly, standard full finetuning of adversarially trained models destroys the robust features learned (Chen et al., 2020a; Kim et al., 2020; Fan et al., 2021), yielding 0% robustness, as shown in the table. Due to the inherently diverse representations of these models, the ideal goal of the student in the considered distillation setting is not merely to follow the teacher, but to take weak supervision from it while being able to differ considerably.
In order to achieve this, we take inspiration from the standard self-supervised learning literature (Van den Oord et al., 2018; Chen et al., 2020b; He et al., 2020; Navaneet et al., 2022; Gao et al., 2022; Gupta et al., 2022) and propose to utilize a projection layer following the student backbone, so as to isolate the impact of the enforced loss on the learned representations. Bordes et al. (2022) show that in standard supervised and self-supervised training, a projector is useful when there is a misalignment between the pretraining and downstream tasks, and that aligning them can eliminate the need for the projector. Xue et al. (2024) further show theoretically that lower layers represent features more evenly, resulting in better generalizability of the learned representations. Motivated by this, we hypothesize the following for the setting of self-supervised distillation: Student model performance improves by matching the following during distillation: 1. Training objectives of the teacher and the ideal goals of the student, 2. *Pretraining and linear probe training objectives of the student.* Here, the ideal goal of the student depends on the downstream task: clean (or standard) accuracy in standard training, and both clean and robust accuracy in adversarial training. On the other hand, the training objective of the standard self-supervised trained teacher is to achieve invariance to augmentations of the same image when compared to augmentations of other images. We now explain the intuition behind the above hypotheses and empirically justify them by considering several distillation settings involving standard and adversarial, supervised and self-supervised trained teacher models in Tables 2 and 3. The results are presented on CIFAR-100 (Krizhevsky et al., 2009) with the WideResNet-34-10 (Zagoruyko & Komodakis, 2016) architecture for both teacher and student. The standard self-supervised model is trained using SimCLR (Chen et al., 2020b).
Contrary to a typical knowledge distillation setting, where a cross-entropy loss is also used (Hinton et al., 2015), all the experiments presented involve only self-supervised losses for distillation (cosine similarity between representations), and labels are used only during linear probing. Adversarial self-supervised distillation in Table 3 is performed using a combination of distillation loss on natural samples and smoothness loss on adversarial samples, as shown in Equation (2) (Zhang et al., 2022). A randomly initialized trainable projector is used at the output of the student backbone in S5 of Table 2 and A4 of Table 3. Here, the training loss is enforced in the projection space of the student (Sp) rather than the feature space (Sf).

Table 2: **Role of projector in self-supervised distillation (CIFAR-100, WRN-34-10):** The drop in accuracy of the student (S) w.r.t. the teacher (T) indicates distillation performance, which improves by matching the training objective of the teacher with the ideal goals of the student (S3/S4 vs. S1), and by using similar losses for pretraining and linear probing (LP) (S2 vs. S1). Using a projector improves performance in case of a mismatch in the above (S5 vs. S1). The similarity between teacher and student is significantly higher in the projector space than in the feature space in S5.

| Exp # | Teacher training | Teacher acc (%) | Projector | LP Loss | Student acc: Feature space (%) | Student acc: Projector space (%) | cos(T,S): Feature space | cos(T,S): Projector space |
|-------|------------------|-----------------|-----------|------------|-------|-------|------|------|
| S1 | Self-supervised | 70.85 | Absent | CE | 64.90 | - | 0.94 | - |
| S2 | Self-supervised | 70.85 | Absent | cos(T, S) | 68.49 | - | 0.94 | - |
| S3 | Supervised | 80.86 | Absent | CE | 80.40 | - | 0.94 | - |
| S4 | Supervised | 69.96 | Absent | CE | 71.73 | - | 0.98 | - |
| S5 | Self-supervised | 70.85 | Present | CE | 73.14 | 64.67 | 0.19 | 0.92 |

Table 3: **Role of projector in self-supervised adversarial distillation (CIFAR-100, WRN-34-10):** Student performance after linear probing at the feature space is reported. The drop in standard accuracy (SA) of the student (S) w.r.t. the teacher (T), and the robust accuracy (RA-G) of the student, improve by matching the training objective of the teacher with the ideal goals of the student (A3 vs. A1), and by using similar losses for pretraining and linear probing (LP) (A2 vs. A1). Using a projector improves performance in case of a mismatch in the above (A4 vs. A1).

| Exp # | Teacher training | Teacher SA (%) | Teacher RA-G (%) | Projector | LP Loss | Student SA (%) | Student RA-G (%) | cos(T,S) |
|------------|------------------------------------------|-------|-------|---------|------------|-------|-------|------|
| A1 | Self-supervised (standard training) | 70.85 | 0 | Absent | CE | 50.71 | 24.63 | 0.78 |
| A2 (DeACL) | Self-supervised (standard training) | 70.85 | 0 | Absent | cos(T, S) | 54.48 | 23.20 | 0.78 |
| A3 | Supervised (TRADES adversarial training) | 59.88 | 25.89 | Absent | CE | 54.86 | 27.17 | 0.94 |
| A4 | Self-supervised (standard training) | 70.85 | 0 | Present | CE | 57.51 | 24.10 | 0.18 |

1. Matching the training objectives of the teacher with the ideal goals of the student: Consider task-A to be the teacher's training task, and task-B to be the student's downstream task or its ideal goal. The representations in the deeper (last few) layers of the teacher are more tuned to its training objective, and the early layers contain far more information than is needed for this task (Bordes et al., 2022). Thus, features specific to task-A are dominant or replicated in the final feature layer, and other features that may be relevant to task-B are sparse.
When a similarity based distillation loss is enforced on such features, higher importance is given to matching the task-A dominated features, and the sparse features which may be important for task-B are suppressed further in the student (Addepalli et al., 2023). On the other hand, when the student's task matches the teacher's task, a similarity based distillation loss is very effective in transferring the necessary representations to the student, since they are predominant in the final feature layer. Thus, matching the training objective of the teacher with the ideal goals of the student should improve downstream performance. To test this hypothesis, we first consider the standard training of a student model, using either a self-supervised or a supervised teacher, in Table 2. One can note that in the absence of a projector, the drop in student accuracy w.r.t. the respective teacher accuracy is 6% with a self-supervised teacher (S1), and < 0.5% with a supervised teacher (S3). To ensure that our observations are not a result of the 10% difference in teacher accuracy between S1 and S3, we present results and similar observations with a supervised sub-optimally trained teacher in S4. Thus, a supervised teacher is significantly better than a self-supervised teacher for distilling representations specific to a given task. This justifies the hypothesis that *student performance improves by matching the training objectives of the teacher and the ideal goals of the student*. We next consider adversarial training of a student, using either a standard self-supervised teacher or a supervised adversarially trained (TRADES) teacher, in Table 3. Since the TRADES model is more aligned with the ideal goals of the student, despite its lower clean accuracy, the clean and robust accuracy of the student are better than those obtained using a standard self-supervised model as the teacher (A3 vs. A1). This further justifies the first hypothesis. 2.
Matching the pretraining and linear probe training objectives of the student: For a given network, aligning the pretraining task with the downstream task results in better performance, since the matching of tasks ensures that the required features are predominant and easily used by, e.g., an SVM or a linear classifier trained over them (Addepalli et al., 2023). In the context of distillation, since the features of the student are trained by enforcing a similarity based loss w.r.t. the teacher, we hypothesize that enforcing similarity w.r.t. the teacher is also the best way to learn the student classifier. To illustrate this, let us consider task-A to be the teacher pretraining task, and task-B to be the downstream task or ideal goal of the student. As discussed above, the teacher's features are aligned to task-A, and these are transferred effectively to the student. The features related to task-B are suppressed in the teacher and are further suppressed in the student. As the features specific to a given task become more sparse, it is harder for an SVM (or a linear) classifier to rely on them, although they are important for classification (Addepalli et al., 2023). Thus, training a linear classifier for task-B is more effective on the teacher than on the student. The linear classifier of the teacher in effect amplifies the sparse features, allowing the student to learn them more effectively. Thus, training a classifier on the teacher and distilling it to the student is better than training a classifier directly on the student. We now provide empirical evidence in support of this hypothesis. To align pretraining with linear probing, we perform linear probing on the teacher model, and further train the student by maximizing the cosine similarity between the logits of the teacher and student. This boosts the student accuracy by 3.6% in Table 2 (S2 vs. S1) and by 3.8% in Table 3 (A2 vs. A1).
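The aligned linear-probing scheme above (S2/A2) can be sketched as follows in NumPy; the shapes, the random stand-in features, and the simulation of an almost-converged student are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Row-wise cosine similarity between two batches of vectors."""
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + eps)
    return np.sum(a * b, axis=-1)

rng = np.random.default_rng(0)

# A linear probe W_t is first trained on the frozen teacher features (here a
# random stand-in); its logits then supervise the student's classifier.
teacher_features = rng.normal(size=(16, 32))
W_t = rng.normal(size=(32, 10))
teacher_logits = teacher_features @ W_t

# Stand-in for student logits near convergence (small deviation from teacher).
student_logits = teacher_logits + 0.01 * rng.normal(size=(16, 10))

# cos(T, S) linear-probing loss (S2 in Table 2 / A2 in Table 3): the student
# maximizes cosine similarity with the teacher's logits, so the amplified
# (previously sparse) features are transferred rather than relearned.
lp_loss = -np.mean(cosine_similarity(teacher_logits, student_logits))
```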
The projector isolates the representations of the student from the training loss, as indicated by the lower similarity between the student and teacher in the feature space when compared to that at the projector (in S5 and A4), and prevents overfitting of the student to the teacher's training objective. This makes the student robust to the misalignment between the teacher's training objective and the ideal goals of the student, and also to the mismatch between the student's pretraining and linear probing objectives, thereby improving student performance, as seen in Table 2 (S5 vs. S1) and Table 3 (A4 vs. A1).

## 4.2 ProFeAT: Projected Feature Adversarial Training

We now present our proposed approach ProFeAT, which is illustrated in Figure 1. ProFeAT training happens in two stages. First, a teacher model T is trained using a standard self-supervised learning algorithm such as SimCLR (Chen et al., 2020b), and its weights are used as an initialization for the student S for better convergence in the next stage. Next, the student model is trained robustly in a distillation framework where the teacher model supervises the student model (see Figure 1). Both the student and the teacher are trained with a *projection layer* on top of their feature backbones. The projection layer is kept frozen and is initialized using the learned weights from the earlier teacher training. We use well-designed *attack* and *defense* losses along with appropriate *augmentations* to ensure that the student model is trained robustly while taking appropriate supervision from the teacher model.
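A minimal NumPy sketch of this two-space setup follows; the 2-layer MLP projector and the 512/128 dimensions are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, proj_dim = 512, 128

def make_projector(weights):
    """2-layer MLP projector (Linear -> ReLU -> Linear), a common SSL choice."""
    W1, W2 = weights
    def project(h):
        return np.maximum(h @ W1, 0.0) @ W2
    return project

# The projector weights come from the teacher's SSL pretraining and are kept
# frozen; the same frozen projector sits on top of both backbones.
frozen_weights = (rng.normal(size=(feat_dim, feat_dim)) / np.sqrt(feat_dim),
                  rng.normal(size=(feat_dim, proj_dim)) / np.sqrt(feat_dim))
project = make_projector(frozen_weights)

h_teacher = rng.normal(size=(4, feat_dim))  # T_f: teacher feature space
h_student = rng.normal(size=(4, feat_dim))  # S_f: student feature space
z_teacher = project(h_teacher)              # T_pf: teacher projection space
z_student = project(h_student)              # S_pf: student projection space
```

Losses enforced on `z_student` shape the feature space only indirectly, which is what isolates the student's representations from the distillation objective.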
We now describe in detail the individual design components of the proposed approach ProFeAT:

Figure 1: **Proposed approach (ProFeAT):** The student is trained using a distillation loss on clean samples using supervision from an SSL pretrained teacher, and a smoothness loss to enforce adversarial robustness (the exact loss formulation is presented in Section 4). A frozen pretrained projection layer is used at the teacher and student to prevent overfitting to the clean distillation loss. The use of strong augmentations at the student increases attack diversity, while weak augmentations at the teacher reduce the training complexity.

Projection Layer: As discussed in Section 4.1, to overcome the impact of the inherent misalignment between the training objective of the teacher and the ideal goals of the student, and to mitigate the mismatch between the pretraining and linear probing objectives, we propose to use a projection head at the output of both the teacher and the student backbone. Most self-supervised pretraining methods (Chen et al., 2020b; He et al., 2020; Grill et al., 2020; Chen & He, 2021; Zbontar et al., 2021) use similarity-based losses at the output of a projection layer for standard training, resulting in a projected space where similarity has been enforced during pretraining, thus giving higher importance to the key dimensions.

Defense Loss: As is common in the adversarial training literature (Zhang et al., 2019; 2022), we use a combination of (1) a loss on clean samples and (2) a smoothness loss, to enforce adversarial robustness in the student model. Since the loss on clean samples utilizes supervision from the self-supervised pretrained teacher, it is enforced at the outputs of the respective projectors of the teacher and student. The goal of the smoothness loss is to enforce local smoothness in the loss surface of the student backbone (Zhang et al.,
2019), and is ideally enforced at the feature space of the student network, since these representations are directly used for downstream applications. While the ideal locations for the clean and adversarial losses are the projected and feature spaces respectively, we find that such a loss formulation is hard to optimize, resulting in either a non-robust model or collapsed representations (shown in Table 18). We therefore use a complementary loss as a regularizer in the respective spaces. This results in a combination of losses at the feature and projection spaces, as shown below:

$$\mathcal{L}_{pf} = -\sum_i \Big[ \cos\big(\mathcal{T}_{pf}(x_i), \mathcal{S}_{pf}(x_i)\big) + \beta \cos\big(\mathcal{S}_{pf}(x_i), \mathcal{S}_{pf}(\tilde{x}_i)\big) \Big] \qquad (1)$$

$$\mathcal{L}_{f} = -\sum_i \Big[ \cos\big(\mathcal{T}_{f}(x_i), \mathcal{S}_{f}(x_i)\big) + \beta \cos\big(\mathcal{S}_{f}(x_i), \mathcal{S}_{f}(\tilde{x}_i)\big) \Big] \qquad (2)$$

$$\mathcal{L}_{\mathrm{ProFeAT}} = \frac{1}{2}\left(\mathcal{L}_{pf} + \mathcal{L}_{f}\right) \qquad (3)$$

$$\tilde{x}_i = \operatorname*{arg\,min}_{\tilde{x}_i:\,\|\tilde{x}_i - x_i\|_\infty \le \varepsilon} \; \cos\big(\mathcal{T}_{pf}(x_i), \mathcal{S}_{pf}(\tilde{x}_i)\big) + \cos\big(\mathcal{S}_{f}(x_i), \mathcal{S}_{f}(\tilde{x}_i)\big) \qquad (4)$$

Here, $\mathcal{L}_{pf}$ and $\mathcal{L}_{f}$ are the defense losses enforced at the projection and feature spaces, respectively. $\tilde{\mathcal{S}} = \mathcal{S}(\tilde{x})$ is the student representation for the adversarial input $\tilde{x}$ (see Section 2 for notation). The first terms in Equations (1) and (2) represent the **Distillation loss** (see Figure 1), involving only the clean input $x_i$, whereas the second terms correspond to the **Smoothness loss** at the respective layers of the student, involving both the clean input $x_i$ and the adversarial input $\tilde{x}_i$, weighted by a hyperparameter β that controls the robustness-accuracy trade-off of the downstream model. The overall loss $\mathcal{L}_{\mathrm{ProFeAT}}$ in Equation (3) is minimized during training.

Attack generation: The attack used during training is generated by maximizing a combination of losses at both the projection and feature spaces, as shown in Equation (4). Since the projection space is primarily used for enforcing similarity with the teacher, we minimize the cosine similarity between the teacher and student representations for attack generation.
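Under the notation above, Equations (1)–(4) reduce to a few lines of array code. The following NumPy sketch operates on precomputed representations in place of network outputs (a real implementation would backpropagate through the networks):

```python
import numpy as np

def cos_sim(a, b, eps=1e-8):
    """Row-wise cosine similarity between two batches of representations."""
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + eps)
    return np.sum(a * b, axis=-1)

def profeat_loss(T_f, T_pf, S_f, S_pf, S_f_adv, S_pf_adv, beta=1.0):
    """Overall defense loss, Eqs. (1)-(3): distillation + smoothness terms,
    averaged over the projection (pf) and feature (f) spaces."""
    L_pf = -np.sum(cos_sim(T_pf, S_pf) + beta * cos_sim(S_pf, S_pf_adv))  # Eq. (1)
    L_f = -np.sum(cos_sim(T_f, S_f) + beta * cos_sim(S_f, S_f_adv))       # Eq. (2)
    return 0.5 * (L_pf + L_f)                                             # Eq. (3)

def attack_objective(T_pf, S_pf_adv, S_f, S_f_adv):
    """Eq. (4): the adversarial example minimizes teacher-student similarity
    at the projector plus clean-adversarial student similarity at the features.
    A PGD attack would take signed-gradient descent steps on this objective
    while projecting back into the l-inf epsilon-ball around the clean input."""
    return np.sum(cos_sim(T_pf, S_pf_adv) + cos_sim(S_f, S_f_adv))
```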
Since the feature space is primarily used for enforcing local smoothness in the loss surface of the student, we utilize the unsupervised formulation that minimizes the similarity between the representations of clean and adversarial samples at the student.

Table 4: **SOTA comparison:** Standard Linear Probing performance (%) on CIFAR-10 and CIFAR-100 datasets on ResNet-18 and WideResNet-34-10 models. Mean and standard deviation across 3 reruns are reported for the best baseline DeACL (Zhang et al., 2022) and the proposed approach ProFeAT. Standard Accuracy (SA), Robust Accuracy against the PGD-20 attack (RA-PGD20) and AutoAttack (RA-AA) are reported.

| Method | CIFAR-10 | | | CIFAR-100 | | |
|---------------------|-------------|-------------|-------------|-------------|-------------|-------------|
| | SA | RA-PGD20 | RA-AA | SA | RA-PGD20 | RA-AA |
| ResNet-18 | | | | | | |
| Supervised (TRADES) | 83.74 | 49.35 | 47.60 | 59.07 | 26.22 | 23.14 |
| AP-DPE | 78.30 | 18.22 | 16.07 | 47.91 | 6.23 | 4.17 |
| RoCL | 79.90 | 39.54 | 23.38 | 49.53 | 18.79 | 8.66 |
| ACL | 77.88 | 42.87 | 39.13 | 47.51 | 20.97 | 16.33 |
| AdvCL | 80.85 | 50.45 | 42.57 | 48.34 | 27.67 | 19.78 |
| DynACL | 77.41 | - | 45.04 | 45.73 | - | 19.25 |
| DynACL++ | 79.81 | - | 46.46 | 52.26 | - | 20.05 |
| DeACL (Reported) | 80.17 | 53.95 | 45.31 | 52.79 | 30.74 | 20.34 |
| DeACL (Our Teacher) | 80.05± 0.29 | 52.97± 0.08 | 48.15± 0.05 | 51.53± 0.30 | 30.92± 0.21 | 21.91± 0.13 |
| ProFeAT (Ours) | 81.68± 0.23 | 49.55± 0.16 | 47.02± 0.01 | 53.47± 0.10 | 27.95± 0.13 | 22.61± 0.14 |
| WideResNet-34-10 | | | | | | |
| Supervised (TRADES) | 85.50 | 54.29 | 51.59 | 59.87 | 28.86 | 25.72 |
| DynACL++ | 80.97 | 48.28 | 45.50 | 52.60 | 23.42 | 20.58 |
| DeACL | 83.83± 0.20 | 57.09± 0.06 | 48.85± 0.11 | 52.92± 0.35 | 32.66± 0.08 | 23.82± 0.07 |
| ProFeAT (Ours) | 87.62± 0.13 | 54.50± 0.17 | 51.95± 0.19 | 61.08± 0.18 | 31.96± 0.08 | 26.81± 0.11 |

Augmentations: Standard supervised and self-supervised training approaches are known to benefit from the use of strong data augmentations such as AutoAugment (Cubuk et al., 2018). However, such augmentations, which distort the low-level features of images, are known to deteriorate the performance of adversarial training (Rice et al., 2020; Gowal et al., 2020). Addepalli et al. (2022) attribute the poor performance to the larger domain shift between the augmented train and unaugmented test images, in addition to the increased complexity of the adversarial training task, which together overpower the superior generalization attained from diverse augmentations. Although these factors influence adversarial training in the self-supervised regime as well, we hypothesize that the need for better generalization is higher in self-supervised training, since the pretraining task is not aligned with the ideal goals of the student, making it important to use strong augmentations. However, it is also important to ensure that the training task is not too complex. We thus propose to use a combination of weak and strong augmentations as inputs to the teacher and student, respectively, as shown in Figure 1. From Figure 5, we note that the use of strong augmentations results in the generation of more diverse attacks, resulting in a larger drop when differently augmented images are used across different restarts of a PGD 5-step attack. The use of weak augmentations at the teacher imparts better supervision to the student, reducing the training complexity.
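The asymmetric augmentation scheme described above can be sketched as follows (NumPy, on HWC images in [0, 1]; the specific weak/strong operations here are simplified stand-ins for the actual pipelines):

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(img, pad=4):
    """Weak view (fed to the teacher): random horizontal flip + padded crop."""
    h, w, _ = img.shape
    if rng.random() < 0.5:
        img = img[:, ::-1]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    top, left = rng.integers(0, 2 * pad + 1, size=2)
    return padded[top:top + h, left:left + w]

def strong_augment(img):
    """Strong view (fed to the student): weak augmentation plus color-level
    distortions, standing in for AutoAugment-style policies."""
    img = weak_augment(img)
    img = np.clip(img * rng.uniform(0.6, 1.4), 0.0, 1.0)  # brightness jitter
    if rng.random() < 0.2:                                # random grayscale
        img = np.repeat(img.mean(axis=2, keepdims=True), 3, axis=2)
    return img

x = rng.random((32, 32, 3))     # a toy CIFAR-sized image
x_teacher = weak_augment(x)     # weak view for the teacher branch
x_student = strong_augment(x)   # strong, more diverse view for the student
```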
## 5 Experiments And Results

We first present an empirical evaluation of the proposed method, followed by several ablation experiments to understand the role of each component individually. Due to space constraints, details on the training, datasets, and compute are presented in Appendices C and D. For evaluation, we compare the proposed approach ProFeAT with several existing self-supervised adversarial training approaches (Chen et al., 2020a; Kim et al., 2020; Jiang et al., 2020; Fan et al., 2021; Zhang et al., 2022; 2019) by performing linear probing (LP) using a cross-entropy loss on clean samples. To ensure a fair comparison, the same is done for the supervised AT method TRADES (Zhang et al., 2019) as well. The results are presented on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009), and on the ResNet-18 (He et al., 2016) and WideResNet-34-10 (WRN-34-10) (Zagoruyko & Komodakis, 2016) architectures, as is common in the adversarial research community. The results of existing methods on ResNet-18 are as reported by Zhang et al. (2022). Since DeACL (Zhang et al., 2022) also uses a teacher-student architecture, we reproduce their results using the same teacher as used in our method, and report it as "DeACL (Our Teacher)". Since existing methods do not report results on larger architectures like WideResNet-34-10, we compare our results only with the best performing method (DeACL) and a recent approach, DynACL (Luo et al., 2023). As the results with larger architectures are not reported in these papers, we run them using their official code.

Table 5: **Performance with additional evaluations:** Performance (%) of DeACL (best baseline) and ProFeAT (Ours) on CIFAR-10 and CIFAR-100 datasets with the WideResNet-34-10 architecture.
The model is first pretrained using the respective self-supervised adversarial training algorithm, and we then compute the standard accuracy (SA) and robust accuracy against GAMA (RA-G) using several evaluation methods: standard linear probing (LP), training a 2-layer MLP head (MLP), and performing KNN in the feature space with k = 10. The proposed method achieves improvements over the baseline across all evaluation methods.

| Method | LP Eval | | MLP Eval | | KNN Eval (k = 10) | |
|----------------|-----------|-------|-------|-------|-------|-------|
| | SA | RA-G | SA | RA-G | SA | RA-G |
| CIFAR-10 | | | | | | |
| DeACL | 83.60 | 49.62 | 85.66 | 48.74 | 87.00 | 54.58 |
| ProFeAT (Ours) | 87.44 | 52.24 | 89.37 | 50.00 | 87.38 | 55.77 |
| CIFAR-100 | | | | | | |
| DeACL | 52.90 | 24.66 | 55.05 | 22.04 | 56.82 | 31.26 |
| ProFeAT (Ours) | 61.05 | 27.41 | 63.81 | 26.10 | 58.09 | 32.26 |

The Robust Accuracy (RA) in the SOTA comparison table is presented against AutoAttack (RA-AA) (Croce & Hein, 2020), which is widely used as a reliable benchmark for robustness evaluation (Croce et al., 2021). In other tables, we also present the robust accuracy against the GAMA attack (RA-G) (Sriramanan et al., 2020), which is known to be a competent and reliable estimate of AutoAttack while being significantly faster to evaluate. We additionally present results against a 20-step PGD attack (RA-PGD20) (Madry et al., 2018) in the SOTA comparison table (Table 4), although it is a significantly weaker attack. A larger difference between RA-PGD20 and RA-AA indicates that the loss surface is more convoluted, due to which weaker attacks are unsuccessful, yielding a false sense of robustness (Athalye et al., 2018). Thus, this difference serves as a check for verifying the extent of gradient masking (Papernot et al., 2017; Tramèr et al., 2018).
Therefore, in order to compare the true robustness of any two defenses, the accuracy against AutoAttack (RA-AA) or GAMA (RA-G) should be considered, rather than RA-PGD20. The accuracy on clean or natural samples is denoted as SA, which stands for Standard (Clean) Accuracy.

## 5.1 Comparison With The State-Of-The-Art

Table 4 presents the standard linear probing results of the proposed method ProFeAT in comparison to several SSL-AT baseline approaches. The proposed approach obtains a superior robustness-accuracy trade-off when compared to the best performing baseline method DeACL, with ∼ 3−3.5% gains in both robust and clean accuracy on the CIFAR-10 dataset and similar gains in robustness on the CIFAR-100 dataset with the WideResNet-34-10 architecture. We obtain significant gains of ∼ 8% in clean accuracy on CIFAR-100. With the ResNet-18 architecture, ProFeAT achieves a competitive robustness-accuracy trade-off when compared to DeACL on the CIFAR-10 dataset, and obtains ∼ 2% higher clean accuracy alongside improved robustness on the CIFAR-100 dataset. We notice significantly less gradient masking for the proposed approach, indicated by a lower value of (RA-PGD20 − RA-AA) compared to all other baselines across all settings, indicating reliable attack generation even in the absence of ground truth labels. Overall, the proposed approach significantly outperforms all the existing baselines, especially for larger model capacities (WRN-34-10), with improved results on smaller models (ResNet-18). Additionally, we obtain superior results when compared to the supervised AT method TRADES as well, at higher model capacities. Performance comparison with other evaluation methods: We also present the results of additional evaluation methods for the pretrained backbone in Table 5. We note that the proposed method achieves improvements over the baseline across all evaluation methods.
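The KNN evaluation used above can be sketched in a few lines of NumPy (cosine-similarity neighbours with majority voting; the normalization choice is an assumption on our part):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=10):
    """k-nearest-neighbour classification in L2-normalized feature space."""
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    te = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = te @ tr.T                         # cosine similarity to all train points
    nearest = np.argsort(-sims, axis=1)[:, :k]
    # Majority vote over the labels of the k nearest neighbours.
    return np.array([np.bincount(row).argmax() for row in train_labels[nearest]])
```

For two well-separated feature clusters, predictions follow the nearest cluster, so the quality of the pretrained feature space directly determines KNN accuracy.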
Since the training of the classifier head in LP and MLP is done using standard training and not adversarial training, the robust accuracy reduces as the number of layers increases (from linear to two layers), while the standard accuracy improves. The standard accuracy of KNN is better than the standard accuracy of LP for the baseline, indicating that its representations are not linearly separable, whereas for the proposed approach, as is standard, the LP standard accuracy is higher than that obtained using KNN. The adversarial attack used for evaluating the robust accuracy with KNN is generated using the GAMA attack on a linear classifier. The attack

Table 7: **Transfer Learning with Adv. Full-Finetuning:** AFF performance (%) using the TRADES algorithm for 25 epochs, when transferred from CIFAR-10/100 to the STL-10 and Caltech-101 datasets on the WideResNet-34-10 model.

Table 6: **Transfer Learning with Linear Probing:** Standard LP performance (%) for transfer learning from CIFAR-10/100 to the STL-10 dataset on ResNet-18 and WRN-34-10 models. ProFeAT (Ours) outperforms both DeACL and the supervised TRADES model.
| Method | CIFAR-10 → STL-10 | | CIFAR-100 → STL-10 | |
|------------------|-------|-------|-------|-------|
| | SA | RA-AA | SA | RA-AA |
| ResNet-18 | | | | |
| Supervised | 54.70 | 22.26 | 51.11 | 19.54 |
| DeACL | 60.10 | 30.71 | 50.91 | 16.25 |
| ProFeAT | 64.30 | 30.95 | 52.63 | 20.55 |
| WideResNet-34-10 | | | | |
| Supervised | 67.15 | 30.49 | 57.68 | 11.26 |
| DeACL | 66.45 | 28.43 | 50.59 | 13.49 |
| ProFeAT | 69.88 | 31.65 | 56.68 | 19.46 |

| Method | CIFAR-10 → target | | CIFAR-100 → target | |
|---------------|-------|-------|-------|-------|
| | SA | RA-G | SA | RA-G |
| → STL-10 | | | | |
| Supervised | 64.58 | 32.78 | 64.22 | 31.01 |
| DeACL | 61.65 | 28.34 | 60.89 | 30.00 |
| ProFeAT | 74.12 | 36.04 | 68.77 | 31.23 |
| → Caltech-101 | | | | |
| Supervised | 62.46 | 39.40 | 64.97 | 41.02 |
| DeACL | 62.65 | 39.18 | 61.01 | 39.09 |
| ProFeAT | 66.11 | 42.12 | 64.16 | 41.25 |

| Method | 2-step PGD attack | | | 5-step PGD attack | | |
|---------------------|-------|-------|-------|-------|-------|-------|
| | SA | RA-G | RA-AA | SA | RA-G | RA-AA |
| Supervised (TRADES) | 60.80 | 24.49 | 23.99 | 61.05 | 25.87 | 25.77 |
| DeACL | 51.00 | 24.89 | 23.45 | 52.90 | 24.66 | 23.92 |
| ProFeAT (Ours) | 60.43 | 26.90 | 26.23 | 61.05 | 27.41 | 26.89 |

Table 8: **Efficiency of Self-supervised adversarial training (CIFAR-100, WRN-34-10):** Performance (%) using a smaller number of attack steps (2) during adversarial training, compared to the standard case (5 steps). Clean/Standard Accuracy (SA) and robust accuracy against the GAMA attack (RA-G) and AutoAttack (RA-AA) are reported. The proposed approach is stable at lower attack steps as well, while being better than both TRADES (Zhang et al., 2019) and DeACL (Zhang et al., 2022).
is suboptimal, since it is not generated using the evaluation process (KNN) itself, and thus the robust accuracy against such an attack is higher.

Transfer learning: To evaluate the transferability of the learned robust representations, we compare the proposed approach with the best baseline DeACL in Table 6 under standard linear probing (LP). We consider transfer from CIFAR-10/100 to STL-10 (Coates et al., 2011). When compared to DeACL, the clean accuracy is ∼ 4 − 10% higher on CIFAR-10 and ∼ 1.7 − 6% higher on CIFAR-100. We also obtain 3 − 5% higher robust accuracy when compared to DeACL on CIFAR-100, and higher improvements over TRADES. We also present transfer learning results using lightweight adversarial full-finetuning (AFF) to STL-10 and Caltech-101 (Li et al., 2022) in Table 7. We defer the details of the adversarial full-finetuning procedure and the selection criteria for the transfer datasets to Appendix D. The transfer is performed on the WRN-34-10 model pretrained on CIFAR-10/100. As shown in Table 7, the proposed method outperforms DeACL by a significant margin. Note that by using merely 25 epochs of adversarial full-finetuning, the proposed method achieves improvements of around 4% on CIFAR-10 and 11% on CIFAR-100 when compared to the linear probing accuracy presented in Table 6, highlighting the practical utility of the proposed method. The AFF performance of the proposed approach is better than that of a supervised TRADES pretrained model as well.

Efficiency of self-supervised adversarial training: Similar to prior works (Zhang et al., 2022), the proposed approach uses 5-step PGD based optimization for attack generation during adversarial training. In Table 8, we present results with fewer optimization steps (2 steps). The proposed approach is stable and obtains similar results even with a 2-step attack.
Even in this case, the clean and robust accuracy of the proposed approach is significantly better than that of the baseline approach DeACL (Zhang et al., 2022), and also outperforms the supervised TRADES model (Zhang et al., 2019). We compare the computational aspects of the existing baselines and the proposed method in more detail in Appendix E.

Table 9: **Ablations on ProFeAT (CIFAR-100, WRN-34-10):** Performance (%) obtained by enabling different components of the proposed approach. A tick mark in the Projector column means that a frozen pretrained projector is used for the teacher and student, with the defense loss being enforced at the feature and projector as shown in Equation (3). E1 represents the baseline (DeACL) defense, and E9 represents the proposed defense (ProFeAT). E8*: defense loss applied only at the projector. SA: Standard Accuracy, RA-G: Robust Accuracy against the GAMA attack.

| Ablation | Projector | Augmentations | Attack loss | SA | RA-G |
|----------|-----------|---------------|-------------|-------|-------|
| E1 | | | | 52.90 | 24.66 |
| E2 | ✓ | | | 57.66 | 25.04 |
| E3 | | ✓ | | 52.83 | 27.13 |
| E4 | | | ✓ | 51.80 | 24.77 |
| E5 | | ✓ | ✓ | 55.35 | 27.86 |
| E6 | ✓ | | ✓ | 56.57 | 25.29 |
| E7 | ✓ | ✓ | | 62.01 | 26.89 |
| E8 | ✓* | ✓ | ✓ | 59.65 | 26.90 |
| E9 | ✓ | ✓ | ✓ | 61.05 | 27.41 |

## 5.2 Ablations

We now present some of the ablation experiments to gain further insights into the proposed method, and defer more in-depth ablation results to Appendix F due to space constraints.

Effect of each component of the proposed approach: We study the impact of each component of ProFeAT in Table 9, and make the following observations based on the results:

- *Projector*: A key component of the proposed method is the introduction of the projector. We observe significant gains in clean accuracy (∼ 5%) by introducing the projector along with defense losses at the feature and projection spaces (E1 vs. E2).
The importance of the projector is also evident from the fact that removing the projector from the proposed defense results in a large drop (5.7%) in clean accuracy (E9 vs. E5). We observe a substantial improvement of 9.2% in clean accuracy when the projector is introduced in the presence of the proposed augmentation strategy (E3 vs. E7), which is significantly higher than the gain obtained by introducing the same in the baseline DeACL (4.76%, E1 vs. E2). Further ablations on the projector are provided in Appendix F.1.

- *Augmentations*: The proposed augmentation strategy improves robustness across all settings. Introducing the proposed strategy in the baseline improves its robust accuracy by 2.47% (E1 vs. E3). Moreover, the importance of the proposed strategy is also evident from the fact that in its absence, there is a 4.48% drop in SA and a ∼ 2% drop in RA-G (E9 vs. E6). Further, when combined with other components as well, the proposed augmentation strategy shows good improvements (E4 vs. E5, E2 vs. E7, E6 vs. E9). Detailed ablations on the respective augmentations used for the teacher and student models can be found in Appendix F.2.

- *Attack loss*: The proposed attack objective is designed to be consistent with the proposed defense strategy, where the goal is to enforce smoothness at the student in the feature space and similarity with the teacher in the projector space. The impact of the attack loss in the feature space can be seen in combination with the proposed augmentations, where we observe an improvement of 2.5% in clean accuracy alongside notable improvements in robust accuracy (E3 vs. E5). However, in the presence of the projector, the attack results in only marginal robustness gains, possibly because the clean accuracy is already high (E9 vs. E7). More detailed ablations on the attack loss are provided in Appendix F.3.

- *Defense loss*: We do not introduce a separate column for the defense loss as it is applicable only in the presence of the projector.
We show the impact of the proposed defense losses in the last two rows (E8 vs. E9). The proposed defense loss improves the clean accuracy by 1.4% and the robust accuracy marginally. Appendix F.4 provides further insights on various defense loss formulations and their impact on the proposed method. Performance across different model architectures: We report the performance of the proposed method ProFeAT and the best baseline DeACL on diverse architectures including Vision Transformers (Dosovitskiy et al., 2021) on the CIFAR-100 dataset in Table 10. ProFeAT consistently outperforms DeACL in both clean and robust accuracy across various model architectures.

![11_image_0.png](11_image_0.png)

![11_image_1.png](11_image_1.png)

Figure 2: Performance of ProFeAT when compared to DeACL (Zhang et al., 2022) across variation in the robustness-accuracy trade-off parameter β on the CIFAR-100 dataset with the WRN-34-10 architecture.

Figure 3: Performance (%) of ProFeAT obtained by varying the weight λ between the defense losses at the feature and projector. The performance is stable across the range λ ∈ [0.25, 0.75]. We thus fix the value of λ to 0.5 in the proposed approach.

Table 10: **Performance across different architecture types and sizes:** Standard Linear Probing performance (%) of DeACL (Baseline) and ProFeAT (Ours) across different architectures on CIFAR-100. ViT-B/16 uses an ImageNet-1K trained SSL teacher for training, while the SSL teacher in all other cases is trained on CIFAR-100. SA: Standard Accuracy, RA-AA: Robust Accuracy against AutoAttack.

| Method | #params (M) | DeACL SA | DeACL RA-AA | ProFeAT (Ours) SA | ProFeAT (Ours) RA-AA |
|---|---|---|---|---|---|
| ResNet-18 | 11.27 | 51.53 | 21.91 | 53.47 | 22.61 |
| ResNet-50 | 23.50 | 53.30 | 23.00 | 59.34 | 25.86 |
| WideResNet-34-10 | 46.28 | 52.92 | 23.82 | 61.08 | 26.81 |
| ViT-B/16 | 85.79 | 61.34 | 17.49 | 65.08 | 21.52 |
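To make the weighting between the two defense-loss terms concrete, the sketch below combines a feature-space loss Lf and a projector-space loss Lpf, each assumed to follow the DeACL-style pairing of a distillation term with a β-weighted smoothness term (cf. Eq. 5 in Appendix A). This is a minimal NumPy illustration; the function and argument names are ours and do not come from the released implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_defense_loss(t_feat, s_feat, s_feat_adv,
                          t_proj, s_proj, s_proj_adv,
                          beta=8.0, lam=0.5):
    """Weighted sum of the feature-space (L_f) and projector-space (L_pf)
    defense losses. Each space pairs a distillation term (teacher vs. clean
    student) with a beta-weighted smoothness term (clean vs. adversarial
    student); both similarities are to be maximized, hence the negative sign.
    lam = 0.5 recovers the equal weighting used as the default setting."""
    l_f = -(cosine_sim(t_feat, s_feat) + beta * cosine_sim(s_feat, s_feat_adv))
    l_pf = -(cosine_sim(t_proj, s_proj) + beta * cosine_sim(s_proj, s_proj_adv))
    return lam * l_f + (1.0 - lam) * l_pf

# When all representations align perfectly, each space contributes -(1 + beta).
v = np.array([1.0, 0.0])
loss = combined_defense_loss(v, v, v, v, v, v)  # -(1 + 8.0) = -9.0
```

Setting `lam` to 0 or 1 reproduces the two extreme cases discussed in the ablations, where the loss is enforced only at the projector or only at the feature layer, respectively.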
An explanation of the mechanism behind the successful scaling to larger datasets can be found in Appendix B.

Robustness-Accuracy trade-off: We present results across variation in the robustness-accuracy trade-off parameter β (Equations (1) and (2)) in Figure 2. Both the robustness and accuracy of the proposed method are significantly better than those of DeACL across all values of β.

Weighting of defense losses at the feature and projector: In the proposed approach, the defense losses are equally weighted between the feature and projector layers, as shown in Equation (3). In Figure 3, we present results obtained by varying the weighting λ between the defense losses at the feature (Lf) and projector (Lpf) layers: LProFeAT = λ · Lf + (1 − λ) · Lpf, where Lpf and Lf are given by Equations (1) and (2), respectively. It can be noted that the two extreme cases of λ = 0 and λ = 1 result in a drop in clean accuracy, with a larger drop in the case where the loss is enforced only at the feature layer. The robust accuracy shows less variation across different values of λ. Thus, the overall performance is stable over the range λ ∈ [0.25, 0.75], making the default setting of λ = 0.5 a suitable option.

## 6 Conclusion

In this work, we bridge the performance gap between supervised and self-supervised adversarial training approaches, specifically for large-capacity models. We utilize a teacher-student setting (Zhang et al., 2022), where a standard self-supervised trained teacher is used to provide supervision to the student. Due to the inherent misalignment between the teacher's training objective and the ideal goals of the student, we propose to use a projection layer to prevent the network from overfitting to the standard SSL-trained teacher. We present a detailed analysis on the use of the projection layer in distillation to justify our method.
We additionally propose appropriate attack and defense losses in the feature and projector spaces, alongside the use of weak and strong augmentations for the teacher and student respectively, to improve the attack diversity while maintaining low training complexity. The proposed approach obtains significant gains over existing self-supervised adversarial training methods, specifically for large models, demonstrating its scalability. While we improve the scalability of self-supervised AT methods to larger capacity models, we limit the datasets to CIFAR scale, similar to prior works in this field, due to the large computational costs. We hope future works explore the application of such SSL-AT methods to large-scale datasets such as ImageNet.

## References

Sravanti Addepalli, Samyak Jain, and R. Venkatesh Babu. Efficient and effective augmentation strategy for adversarial training. *Advances in Neural Information Processing Systems (NeurIPS)*, 35:1488–1501, 2022. Sravanti Addepalli, Anshul Nasery, Venkatesh Babu Radhakrishnan, Praneeth Netrapalli, and Prateek Jain. Feature reconstruction from outputs can mitigate simplicity bias in neural networks. In *The Eleventh International Conference on Learning Representations*, 2023. Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: a query-efficient black-box adversarial attack via random search. In *The European Conference on Computer Vision (ECCV)*, 2020. Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In *International Conference on Machine Learning (ICML)*, 2018. Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, and Pascal Vincent. Guillotine regularization: Improving deep networks generalization by removing their head. *arXiv preprint arXiv:2206.13378*, 2022. Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow.
Thermometer encoding: One hot way to resist adversarial examples. In *International Conference on Learning Representations (ICLR)*, 2018. Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, and Aleksander Madry. On evaluating adversarial robustness. *arXiv preprint arXiv:1902.06705*, 2019. Jinghui Chen and Quanquan Gu. Rays: A ray searching method for hard-label adversarial attack. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1739–1747, 2020. Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In *Proceedings of the IEEE/CVF Conference on Computer* Vision and Pattern Recognition, pp. 699–708, 2020a. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In *International conference on machine learning*, pp. 1597–1607. PMLR, 2020b. Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In *Proceedings of the fourteenth international conference on artificial intelligence and statistics*, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011. Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In *International Conference on Machine Learning (ICML)*, 2020. Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein. Robustbench: a standardized adversarial robustness benchmark, 2021. Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. 
Autoaugment: Learning augmentation policies from data. *arXiv preprint arXiv:1805.09501*, 2018. Guneet S. Dhillon, Kamyar Azizzadenesheli, Jeremy D. Bernstein, Jean Kossaifi, Aran Khanna, Zachary C. Lipton, and Animashree Anandkumar. Stochastic activation pruning for robust adversarial defense. In International Conference on Learning Representations (ICLR), 2018. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In *International Conference on Learning Representations (ICLR)*, 2021. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Adversarial robustness as a prior for learned representations. *arXiv preprint arXiv:1906.00945*, 2019. Lijie Fan, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, and Chuang Gan. When does contrastive learning preserve adversarial robustness from pretraining to finetuning? *Advances in Neural Information Processing Systems*, 34, 2021. Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, and Chunhua Shen. Disco: Remedy self-supervised learning on lightweight models with distilled contrastive learning. In *The European Conference on Computer Vision (ECCV)*, 2022. Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. *arXiv preprint arXiv:1803.07728*, 2018. Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. Uncovering the limits of adversarial training against norm-bounded adversarial examples. *arXiv preprint arXiv:2010.03593*, 2020. Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, koray kavukcuoglu, Remi Munos, and Michal Valko. Bootstrap your own latent - a new approach to self-supervised learning.
In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020. Kartik Gupta, Thalaiyasingam Ajanthan, Anton van den Hengel, and Stephen Gould. Understanding and improving the role of projection head in self-supervised learning, 2022. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and* Pattern Recognition, pp. 9729–9738, 2020. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint* arXiv:1503.02531, 2015. Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. *Advances in neural information processing systems*, 32, 2019. Ziyu Jiang, Tianlong Chen, Ting Chen, and Zhangyang Wang. Robust pre-training by adversarial contrastive learning. *Advances in Neural Information Processing Systems*, 33:16199–16210, 2020. Minseon Kim, Jihoon Tack, and Sung Ju Hwang. Adversarial self-supervised contrastive learning. Advances in Neural Information Processing Systems, 33:2983–2994, 2020. Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. *arXiv preprint arXiv:2202.10054*, 2022. Fei-Fei Li, Marco Andreeto, Marc'Aurelio Ranzato, and Pietro Perona. Caltech 101, Apr 2022. Rundong Luo, Yifei Wang, and Yisen Wang. Rethinking the effect of data augmentation in adversarial contrastive learning. In *International Conference on Learning Representations (ICLR)*, 2023. Xingjun Ma, Bo Li, Yisen Wang, Sarah M. 
Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Michael E. Houle, Dawn Song, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. In *International Conference on Learning Representations (ICLR)*, 2018. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018. KL Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, and Hamed Pirsiavash. Simreg: Regression as a simple yet effective tool for self-supervised knowledge distillation. *arXiv preprint arXiv:2201.05131*, 2022. Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In *European Conference on Computer Vision*, pp. 69–84. Springer, 2016. Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu. Bag of tricks for adversarial training. International Conference on Learning Representations (ICLR), 2021. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the ACM Asia Conference on Computer and Communications Security (ACM ASIACCS), 2017. Leslie Rice, Eric Wong, and J. Zico Kolter. Overfitting in adversarially robust deep learning. In International Conference on Machine Learning (ICML), 2020. Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. *Advances in neural information processing systems*, 31, 2018. Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In *International Conference on Learning Representations (ICLR)*, 2018. Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, and R Venkatesh Babu.
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses. In *Advances in Neural Information Processing Systems* (NeurIPS), 2020. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2013. Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICLR), 2018. Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, and Jörn-Henrik Jacobsen. Fundamental tradeoffs between invariance and sensitivity to adversarial perturbations. In International Conference on Machine Learning (ICML), 2020. Florian Tramer, Nicholas Carlini, Wieland Brendel, and Aleksander Madry. On adaptive attacks to adversarial example defenses. *arXiv preprint arXiv:2002.08347*, 2020. Trieu H Trinh, Minh-Thang Luong, and Quoc V Le. Selfie: Self-supervised pretraining for image embedding. arXiv preprint arXiv:1906.02940, 2019. Aaron Van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv e-prints, pp. arXiv–1807, 2018. Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. In *International Conference on Learning Representations (ICLR)*, 2018. Yihao Xue, Eric Gan, Jiayi Ni, Siddharth Joshi, and Baharan Mirzasoleiman. Investigating the benefits of projection head for representation learning. In International Conference on Learning Representations (ICLR), 2024. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. *arXiv preprint arXiv:1605.07146*, 2016. Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. 
In *International Conference on Machine Learning*, pp. 12310–12320. PMLR, 2021. Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D Yoo, and In So Kweon. Decoupled adversarial contrastive learning for self-supervised adversarial robustness. In *Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXX*, pp. 725–742. Springer, 2022. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning (ICML), 2019.

## Appendix A Background

## A.1 Decoupled Adversarial Contrastive Learning (DeACL)

DeACL introduces a teacher-student distillation framework to train robust models in a self-supervised setting without the need for labeled training data. While most self-supervised adversarial training methods aim at integrating contrastive learning methods with adversarial training, the authors (Zhang et al., 2022) showed that combining the two is a complex optimization problem due to their conflicting training requirements. The training is thus decoupled into the two stages discussed below. The first stage trains a standard self-supervised model using any of the existing Self-supervised Learning (SSL) techniques such as SimCLR, BYOL, or Barlow Twins. This is followed by the second stage, which involves adversarial training of a *student* model using supervision from the earlier-trained teacher in a distillation framework, as shown in Fig. 4. The loss used during distillation is a combination of the cosine similarity between the representations of the clean image at the output of the teacher and student, along with a smoothness loss at the output of the student, as shown in Eq. 5.
$${\mathcal{L}}_{\mathrm{DeACL}}=-\sum_{i}\left[\cos\left({\mathcal{T}}_{f}(x_{i}),{\mathcal{S}}_{f}(x_{i})\right)+\beta\cos\left({\mathcal{S}}_{f}(x_{i}),{\mathcal{S}}_{f}({\tilde{x}}_{i})\right)\right]\tag{5}$$

$$\tilde{x}_{i}=\operatorname*{arg\,min}_{\tilde{x}_{i}:\,||\tilde{x}_{i}-x_{i}||_{\infty}\leq\varepsilon}\cos\left({\mathcal{T}}_{f}(x_{i}),{\mathcal{S}}_{f}(\tilde{x}_{i})\right)\tag{6}$$

In Eq. 5, the first term is the **Distillation loss** on the feature backbone Tf of the teacher T and Sf of the student S for a clean input xi, where x̃i is the adversarial input. The second term corresponds to the Smoothness loss at the student to enforce similar representations for the clean and adversarial inputs, and is weighted by a hyperparameter β that controls the robustness-accuracy trade-off in the downstream model. The loss LDeACL (Equation (5)) is minimized during training. The attack used during training is generated by minimizing the cosine similarity between the teacher and student representations at the feature output, as shown in Eq. 6.

![16_image_0.png](16_image_0.png)

Figure 4: **Decoupled Adversarial Contrastive Learning (DeACL):** The student is trained using a distillation loss on clean samples using supervision from an SSL pretrained teacher, and a smoothness loss to enforce adversarial robustness (Eq. 5).

## A.2 Supervised Adversarial Defenses

Following the demonstration of adversarial attacks by Szegedy et al. (2013), there have been several attempts at defending deep networks against them. Early defenses proposed intuitive methods that introduced non-differentiable or randomized components in the network to thwart gradient-based attacks (Buckman et al., 2018; Ma et al., 2018; Dhillon et al., 2018; Xie et al., 2018; Song et al., 2018). While these methods were efficient and easy to implement, Athalye et al.
(2018) proposed adaptive attacks which successfully broke several such defenses by replacing the non-differentiable components with smooth differentiable approximations, and by taking an expectation over the randomized components. Adversarial Training (Madry et al., 2018; Zhang et al., 2019) was the most successful defense strategy that withstood strong white-box (Croce & Hein, 2020; Sriramanan et al., 2020), black-box (Andriushchenko et al., 2020; Chen & Gu, 2020) and adaptive attacks (Athalye et al., 2018; Tramer et al., 2020) proposed over the years. PGD (Projected Gradient Descent) based adversarial training (Madry et al., 2018) involves maximizing the cross-entropy loss to generate adversarial attacks, and further minimizing the loss on the adversarial attacks for training. Another successful supervised Adversarial Training based defense was TRADES (Zhang et al., 2019), where the Kullback-Leibler (KL) divergence between clean and adversarial samples was minimized along with the cross-entropy loss on clean samples for training. Adjusting the weight between the losses gave a flexible trade-off between the clean and robust accuracy of the trained model. Although these methods have been robust against several attacks, it has been shown that the sample complexity of adversarial training is large (Schmidt et al., 2018), and this increases the training and annotation costs needed for adversarial training. ## B Mechanism Behind Scaling To Larger Datasets For a sufficiently complex task, a scalable approach results in better performance on larger models given enough data. Although the task complexity of adversarial self-supervised learning is high, the gains in prior approaches are marginal with an increase in model size, while the proposed method results in significantly improved performance on larger capacity models (Table 4). 
We discuss the key factors that result in better scalability below:

- As discussed in Section 4.1, a mismatch between the training objectives of the teacher and the ideal goals of the student causes a drop in student performance. This primarily happens because of overfitting to the teacher training task. As the model size increases, the extent of overfitting increases. The use of a projection layer during distillation alleviates the impact of this overfitting and allows the student to retain more generic features that are useful for the downstream robust classification objective. Thus, a projection layer is more important for larger model capacities, where the extent of overfitting is higher.

- Secondly, as the model size increases, a larger amount of training data is needed to achieve better generalization. The proposed method has better data diversity, as it enables the use of more complex data augmentations in adversarial training by leveraging supervision from weak augmentations at the teacher.

## C Details On Datasets

We compare the performance of the proposed approach ProFeAT with existing methods on the benchmark datasets CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), which are commonly used for evaluating the adversarial robustness of models (Croce et al., 2021). Both datasets consist of RGB images of dimension 32 × 32. CIFAR-10 consists of 50,000 images in the training set and 10,000 images in the test set, with the images being divided equally into 10 classes - airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The CIFAR-100 dataset is of the same size as CIFAR-10, with images being divided equally into 100 classes. Due to the larger number of classes, there are only 500 images per class in CIFAR-100, making it a more challenging dataset when compared to CIFAR-10.
## D Details On Training And Compute

Model Architecture: We report the key comparisons with existing methods on two of the commonly considered model architectures in the literature of adversarial robustness (Pang et al., 2021; Zhang et al., 2019; Rice et al., 2020; Croce et al., 2021) - ResNet-18 (He et al., 2016) and WideResNet-34-10 (Zagoruyko & Komodakis, 2016). Although most existing methods for self-supervised adversarial training report results only on ResNet-18 (Zhang et al., 2022; Fan et al., 2021), we additionally consider the WideResNet-34-10 architecture to demonstrate the scalability of the proposed approach to larger model architectures. We perform the ablation experiments on the CIFAR-100 dataset with the WideResNet-34-10 architecture, which is a very challenging setting in self-supervised adversarial training, to be able to better distinguish between different variations adopted during training.

Table 11: **Total number of Forward (FP) or Backward (BP) Propagations during training** of the proposed approach when compared to prior works. Distillation-based approaches - ProFeAT and DeACL - require significantly less compute when compared to prior methods, and are only more expensive than supervised adversarial training.

| Method | #Epochs | #Attack steps | #FP/BP (AT) | #FP/BP (Teacher model) | #FP/BP (Total) |
|---|---|---|---|---|---|
| Supervised (TRADES) | 110 | 10 | 1210 | - | 1210 |
| AP-DPE | 450 | 10 | 4950 | - | 4950 |
| RoCL | 1000 | 5 | 6000 | - | 6000 |
| ACL | 1000 | 5 | 12000 | - | 12000 |
| AdvCL | 1000 | 5 | 12000 | 1000 | 13000 |
| DynACL | 1000 | 5 | 12000 | - | 12000 |
| DynACL++ | 1025 | 5 | 12300 | - | 12300 |
| DeACL | 100 | 5 | 700 | 1000 | 1700 |
| ProFeAT (Ours) | 100 | 5 | 800 | 1000 | 1800 |
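The totals in Table 11 follow simple bookkeeping: the adversarial-training (AT) cost is the number of epochs times the forward/backward propagations per epoch (attack steps plus the training update passes), and distillation-based methods add the one-time teacher pretraining cost. The sketch below reproduces this accounting; the per-epoch propagation counts are inferred from the table's AT column, not quoted from the paper.

```python
def total_propagations(epochs, per_epoch, teacher=0):
    """Total forward/backward propagations: adversarial-training cost
    (epochs * propagations per epoch) plus one-time teacher pretraining."""
    return epochs * per_epoch + teacher

# Per-epoch counts inferred from Table 11 (attack steps + update passes):
trades = total_propagations(epochs=110, per_epoch=11)                # 1210
deacl = total_propagations(epochs=100, per_epoch=7, teacher=1000)    # 1700
profeat = total_propagations(epochs=100, per_epoch=8, teacher=1000)  # 1800
```

The same bookkeeping explains why the 1000-epoch contrastive methods (RoCL, ACL, AdvCL, DynACL) are an order of magnitude more expensive than the 100-epoch distillation-based approaches.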
Training Details: The self-supervised training of the teacher model is performed for 1000 epochs with the SimCLR algorithm (Chen et al., 2020b), similar to prior work (Zhang et al., 2022). We utilize the solo-learn 1 GitHub repository for this purpose. For the SimCLR SSL training, we tune and use a learning rate of 1.5 with the SGD optimizer, a cosine schedule with warmup, a weight decay of 1e−5, and train the backbone for 1000 epochs with the other hyperparameters kept at their defaults in the repository. The self-supervised adversarial training of the feature extractor using the proposed approach is performed for 100 epochs using the SGD optimizer with a weight decay of 3e−4, a cosine learning rate schedule with 10 epochs of warm-up, and a maximum learning rate of 0.5. We consider the standard ℓ∞ based threat model (Zhang et al., 2022; Fan et al., 2021) for attack generation during both training and evaluation, i.e., ||x˜i − xi||∞ ≤ ε, where x˜i is the adversarial version of the clean sample xi. The value of ε is set to 8/255 for both the CIFAR-10 and CIFAR-100 datasets, as is standard in the literature (Madry et al., 2018; Zhang et al., 2019). A 5-step PGD attack is used for attack generation during training with a step size of 2/255. We fix the value of β, the robustness-accuracy trade-off parameter (Ref: Equations (1) and (2) in the main paper), to 8 in all our experiments, unless specified otherwise.

Details on configurations in Tables 2 and 3: In these tables, we present the experiments showing the importance of the projector in self-supervised distillation, first in a standard setting (Table 2) and then in an adversarial setting (Table 3). In Table 2, the linear probing performance of the student model is shown at both the feature and projection spaces when linear probed using two different probing methods (column "LP Loss"): the standard CE loss (S1, S3-S5), and a cosine similarity loss between the student and the teacher (denoted as cos(T , S), S2).
Experiments S1-S4 refer to settings where no projector is involved during distillation, while S5 includes the projector network. S1 and S2 consider a self-supervised trained teacher for distillation, whereas S3 and S4 use a standard teacher trained using label supervision. In the absence of a projector, when the training objective of the teacher matches that of the student (S3/S4 vs. S1), or when similar losses are used for pretraining and linear probing (S2 vs. S1), we see an improvement in the downstream performance of the student. Additionally, one can employ a projector network (S5 vs. S1) to alleviate the impact of the mismatch between the two and regain the same student performance. In that case, the similarity between the teacher and the student is significantly higher at the projector space when compared to the feature space (S5). In a similar vein, we show the same in the setting of adversarial self-supervised distillation in Table 3. Consider the case where there is no projector. When using CE loss for linear probing (A3 vs.

1https://github.com/vturrisi/solo-learn

Table 12: **Floating Point Operations per Second (GFLOPS) and latency per epoch during training** of the proposed approach ProFeAT when compared to the baseline DeACL for ResNet-18 and WideResNet-34-10 models. The computational overhead during training is marginal with the addition of the projection layer, and reduces further for larger capacity models.

| Method | GFLOPS (ResNet-18) | Time/epoch (ResNet-18) | #Params (M) (ResNet-18) | GFLOPS (WRN-34-10) | Time/epoch (WRN-34-10) | #Params (M) (WRN-34-10) |
|---|---|---|---|---|---|---|
| DeACL | 671827 | 51s | 11.27 | 6339652 | 4m 50s | 46.28 |
| ProFeAT (Ours) | 672200 | 51s | 11.76 | 6340197 | 4m 50s | 46.86 |
| % increase | 0.056 | 0.00 | 4.35 | 0.009 | 0.00 | 1.25 |
A1), using a supervised AT teacher for adversarial distillation (A3, which is also trained using CE loss) results in much better student performance compared to a self-supervised teacher (A1, which is trained using a similarity-based objective). On the other hand, when the linear probing objective is changed to a similarity-based loss (A2: cos(T , S)), the mismatch with the student is alleviated, and we recover the student performance (note the similar clean performance of A2 and A3 w.r.t. A1). Using a projector network, on the other hand, improves the performance in the case of such a mismatch (A4 vs. A1).

Details on Linear Probing: To evaluate the performance of the learned representations, we perform standard linear probing by freezing the adversarially pretrained backbone, as discussed in Section 2 of the main paper. We use a class-balanced validation split consisting of 1000 images from the train set and perform early stopping during training based on the performance on the validation set. The training is performed for 25 epochs with a step learning rate schedule, where the maximum learning rate is decayed by a factor of 10 at epochs 15 and 20. The learning rate is chosen from the following settings - {0.1, 0.05, 0.1, 0.5, 1, 5} - with the SGD optimizer, and the weight decay is fixed to 2e−4. The same evaluation protocol is used for the best baseline DeACL (Zhang et al., 2022) as well as the proposed approach ProFeAT, for both in-domain and transfer learning settings.

Details on Transfer Learning: To perform transfer learning using Adversarial Full-Finetuning, a robustly pretrained base model is used as an initialization, which is then adversarially finetuned using TRADES (Zhang et al., 2019) for 25 epochs. After finetuning, the model is evaluated using linear probing on the same transfer dataset. We consider STL-10 (Coates et al., 2011) and Caltech-101 (Li et al., 2022) as the transfer datasets.
Caltech-101 contains 101 object classes and 1 background class, with 2416 samples in the train set and 6728 samples in the test set. The number of samples per class ranges from 17 to 30, making it a suitable dataset to highlight the practical importance of lightweight finetuning of adversarially self-supervised pretrained representations in the low-data regime.

Compute: The following Nvidia GPUs have been used for performing the experiments reported in this work - V100, A100, and A6000. Each experiment is run either on a single GPU or across 2 GPUs, depending on the complexity of the run and GPU availability. Uncertainty estimates in the main SOTA table (Table 4 of the main paper) were obtained on the same machine and with the same GPU configuration to ensure reproducibility. For 100 epochs of single-precision (FP32) training with a batch size of 256, the proposed approach takes ∼8 hours and ∼16GB of GPU memory on a single A100 GPU for the WideResNet-34-10 model on CIFAR-100.

Table 13: **Ablation on Projector training configuration (CIFAR-100, WRN-34-10):** Performance (%) using variations in projector (proj.) initialization (init.) and trainability. SA: Standard Accuracy, RA-G: Robust accuracy against GAMA attack.

| Ablation | Student proj. | Proj. init. (Student) | Teacher proj. | Proj. init. (Teacher) | SA    | RA-G  |
|----------|---------------|-----------------------|---------------|-----------------------|-------|-------|
| AP1      | Absent        | -                     | Absent        | -                     | 55.35 | 27.86 |
| AP2      | Trainable     | Random                | Absent        | -                     | 63.07 | 26.57 |
| AP3      | Frozen        | Pretrained            | Absent        | -                     | 40.43 | 22.23 |
| AP4      | Trainable     | Pretrained            | Absent        | -                     | 62.89 | 26.57 |
| AP5      | Trainable     | Random (common)       | Trainable     | Random (common)       | 53.43 | 27.23 |
| AP6      | Trainable     | Pretrained (common)   | Trainable     | Pretrained (common)   | 54.60 | 27.41 |
| AP7      | Trainable     | Pretrained            | Frozen        | Pretrained            | 58.18 | 27.73 |
| Ours     | Frozen        | Pretrained            | Frozen        | Pretrained            | 61.05 | 27.41 |

| Ablation | Projector config. | Projector arch. | Teacher SA | Student SA | Drop in SA | % Drop in SA | RA-G  |
|----------|-------------------|-----------------|------------|------------|------------|--------------|-------|
| APA1     | No projector      | -               | 70.85      | 55.35      | 15.50      | 21.88        | 27.86 |
| APA2     | Linear layer      | 640-256         | 68.08      | 53.35      | 14.73      | 21.64        | 27.47 |
| Ours     | 2 Layer MLP       | 640-640-256     | 70.85      | 61.05      | 9.80       | 13.83        | 27.41 |
| APA3     | 3 Layer MLP       | 640-640-640-256 | 70.71      | 60.37      | 10.34      | 14.62        | 27.37 |
| APA4     | 2 Layer MLP       | 640-640-640     | 69.88      | 61.24      | 8.64       | 12.36        | 27.36 |
| APA5     | 2 Layer MLP       | 640-2048-640    | 70.96      | 61.76      | 9.20       | 12.97        | 26.66 |
| APA6     | 2 Layer MLP       | 640-256-640     | 69.37      | 57.87      | 11.50      | 16.58        | 27.56 |

Table 14: **Ablation on Projector architecture (CIFAR-100, WRN-34-10):** Performance (%) obtained by varying the projector configuration (config.) and architecture (arch.). A non-linear projector effectively reduces the gap in clean accuracy between the teacher and student. A bottleneck architecture for the projector is worse than other variants. SA: Standard Accuracy, RA-G: Robust Accuracy against GAMA attack.

## E Computational Complexity

In terms of compute, both the proposed method ProFeAT and DeACL (Zhang et al., 2022) lower the overall computational cost when compared to prior approaches. This is because self-supervised training in general
requires a larger number of epochs (1000) to converge (Chen et al., 2020b; He et al., 2020) when compared to supervised learning (<100). Prior approaches like RoCL (Kim et al., 2020), ACL (Jiang et al., 2020) and AdvCL (Fan et al., 2021) combine the contrastive training objective of SSL approaches and the adversarial training objective. Thus, these methods require a large number of training epochs (1000) for the adversarial training task, which is already computationally expensive due to the requirement of generating multi-step attacks during training. ProFeAT and DeACL use an SSL teacher for training; thus, the adversarial training is more similar to supervised training, requiring only 100 epochs. In Table 11, we present the approximate number of forward and backward propagations for each algorithm, considering both the pretraining of the auxiliary network used and the training of the main network. It can be noted that the distillation-based approaches ProFeAT and DeACL require significantly less compute than prior methods, and are only more expensive than supervised adversarial training. In Table 12, we present the FLOPS required during training for the proposed approach and DeACL. One can observe that there is a negligible increase in FLOPS compared to the baseline approach.

## F Additional Results

## F.1 More Ablations On The Projector

Training configuration of the Projector: We present ablations using different configurations of the projection layer in Table 13. As also discussed in Section 4.1 of the main paper, we observe a large boost in clean accuracy when a random (or pretrained) trainable projection layer is introduced to the student (AP2/AP4 vs. AP1 in Table 13). While the use of a pretrained frozen projection head only for the student degrades performance considerably (AP3), the use of the same for both teacher and student (Ours) yields an optimal robustness-accuracy trade-off across all variations.
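To make the projector configurations above concrete, here is a minimal numpy sketch of a frozen pretrained 2-layer MLP projector (the 640-640-256 shape used for WRN-34-10); the ReLU nonlinearity and the random stand-in weights are assumptions for illustration only:

```python
import numpy as np

def projector(z, W1, b1, W2, b2):
    # 2-layer MLP projector; when frozen, these weights are pretrained
    # parameters that receive no gradient updates during distillation.
    h = np.maximum(z @ W1 + b1, 0.0)  # hidden layer (assumed ReLU)
    return h @ W2 + b2                # output embedding

rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((640, 640)), np.zeros(640)
W2, b2 = 0.01 * rng.standard_normal((640, 256)), np.zeros(256)

z = rng.standard_normal((4, 640))   # a batch of backbone features
p = projector(z, W1, b1, W2, b2)    # projected features, shape (4, 256)
```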
The use of a common trainable projection head for both teacher and student results in collapsed representations at the projector output (AP5, AP6), yielding results similar to the case where the projector is not used for both teacher and student (AP1). This issue is overcome when the pretrained projector is trainable only for the student (AP7).

| Ablation | Teacher | Student | SA    | RA-G  |
|----------|---------|---------|-------|-------|
| AG1      | PC      | PC      | 56.57 | 25.29 |
| AG2      | AuAu    | AuAu    | 60.76 | 27.21 |
| AG3      | PC1     | PC2     | 56.95 | 25.39 |
| AG4      | AuAu1   | AuAu2   | 59.51 | 28.15 |
| AG5      | AuAu    | PC      | 57.28 | 26.14 |
| Ours     | PC      | AuAu    | 61.05 | 27.41 |

Table 15: **Ablation on Augmentations used (CIFAR-100, WRN-34-10):** Performance (%) using different augmentations for the teacher and student (PC: Pad+Crop, AuAu: AutoAugment). Standard Accuracy (SA) and Robust accuracy (RA-G) reported.

Figure 5: Robust accuracy of a supervised TRADES model across random restarts of PGD 5-step attack (CIFAR-100, WRN-34-10).

| Ablation | Attack @ feat. | Attack @ proj. | SA    | RA-G  |
|----------|----------------|----------------|-------|-------|
| AT1      | cos(T, S)      | cos(T, S)      | 60.84 | 26.78 |
| AT2      | cos(S, S)      | cos(S, S)      | 61.30 | 26.75 |
| AT3      | cos(T, S)      | cos(S, S)      | 60.69 | 27.44 |
| AT4      | cos(S, S)      | -              | 61.62 | 26.62 |
| AT5      | -              | cos(S, S)      | 61.09 | 27.00 |
| AT6      | cos(T, S)      | -              | 62.01 | 26.89 |
| AT7      | -              | cos(T, S)      | 61.18 | 27.24 |
| Ours     | cos(S, S)      | cos(T, S)      | 61.05 | 27.41 |

Table 16: **Ablation on Attack Loss (CIFAR-100, WRN-34-10):** Performance (%) with variations in attack loss at feature (feat.) and projector (proj.). While the proposed defense is stable across several variations in the attack loss, minimizing a combination of both losses cos(T, S) and cos(S, S) gives the best robustness-accuracy trade-off. SA: Standard Accuracy, RA-G: Robust Accuracy against GAMA.

| Ablation | Loss @ feat.       | Loss @ proj.       | SA    | RA-G  |
|----------|--------------------|--------------------|-------|-------|
| AD1      | clean + adv(S, S)  | -                  | 55.35 | 27.86 |
| AD2      | -                  | clean + adv(S, S)  | 59.65 | 26.90 |
| AD3      | clean + adv(S, S)  | clean              | 61.69 | 26.40 |
| AD4      | clean + adv(S, S)  | adv(S, S)          | 49.59 | 25.35 |
| AD5      | adv(S, S)          | clean              | 59.72 | 1.38  |
| AD6      | adv(S, S)          | clean + adv(S, S)  | 59.22 | 26.50 |
| AD7      | clean              | clean + adv(S, S)  | 62.24 | 25.97 |
| AD8      | clean + adv(S, S)  | clean + adv(T, S)  | 63.85 | 23.91 |
| AD9      | clean + adv(T, S)  | clean + adv(T, S)  | 65.34 | 22.40 |
| Ours     | clean + adv(S, S)  | clean + adv(S, S)  | 61.05 | 27.41 |

Table 17: **Ablation on Defense Loss (CIFAR-100, WRN-34-10):** Performance (%) with variations in training loss at feature (feat.) and projector (proj.). "clean" denotes the cosine similarity between representations of teacher and student on clean samples. "adv" denotes the cosine similarity between representations of the corresponding clean and adversarial samples either at the output of the student (S, S) or between the teacher and student (T, S). SA: Standard Accuracy, RA-G: Robust accuracy against GAMA.

Table 18: Failure of the AD5 (Table 17) defense loss for SSL-AT training of a WRN-34-10 model on CIFAR-100: Using the clean and adversarial losses exclusively at the projector and feature spaces, respectively, results in an unstable optimization problem, giving either a non-robust model or collapsed representations at the end of training.
As shown below, a lower value of β (the robustness-accuracy trade-off parameter) results in a non-robust model, while a higher β results in the learning of collapsed representations.

| β   | Standard Accuracy (SA) | Robust Accuracy against GAMA (RA-G) |
|-----|------------------------|-------------------------------------|
| 1   | 67.34                  | 0.46                                |
| 5   | 51.99                  | 0.71                                |
| 10  | 31.34                  | 7.81                                |
| 50  | 11.59                  | 2.55                                |
| 100 | 8.23                   | 2.61                                |

Architecture of the Projector: In the proposed approach, we use the following 2-layer MLP projector architecture for both SSL pretraining of the teacher and adversarial training of the student: (1) **ResNet-18: 512-512-256**, and (2) **WideResNet-34-10: 640-640-256**. In Table 14, we present results using different configurations and architectures of the projector. Firstly, the use of a linear projector (APA2) is similar to the case where the projector is not used for student training (APA1), with a ∼21% drop in the clean accuracy of the student with respect to the teacher. This improves to 12-17% when a non-linear projector is introduced (APA3-APA6 and Ours). The use of a 2-layer MLP (Ours) is marginally better than the use of a 3-layer MLP (APA3) in terms of the clean accuracy of the student. The accuracy of the student is stable across different architectures of the projector (Ours, APA4, APA5). However, the use of a bottleneck architecture (APA6) results in a higher drop in the clean accuracy of the student.

## F.2 Augmentation For The Student And Teacher

We present ablation experiments to understand the impact of different augmentations used for the teacher and student separately in Table 15. The base method (AG1) uses a common Pad and Crop (PC) augmentation for both teacher and student. By using more complex augmentations, AutoAugment followed by Pad and Crop (denoted as AuAu in the table), there is a significant improvement in both clean and robust accuracy.
By using separate augmentations for the teacher and student, there is an improvement in the case of PC (AG3), but a drop in clean accuracy accompanied by better robustness in the case of AuAu (AG4). Finally, by using a mix of both AuAu and PC at the student and teacher respectively (Ours), we obtain improvements in both clean and robust accuracy, since the former improves attack diversity (shown in Figure 5), while the latter makes the training task easier.

## F.3 Attack Loss

For performing adversarial training using the proposed approach, attacks are generated by minimizing a combination of cosine-similarity-based losses as shown in Equation (4) of the main paper. This includes an unsupervised loss at the feature representations of the student and another loss between the representations of the teacher and student at the projector. As shown in Table 16, we obtain a better robustness-accuracy trade-off by using a combination of both losses rather than only one of the two, due to the better diversity and strength of the attack. These results also demonstrate that the proposed method is not very sensitive to different choices of attack losses.

## F.4 Defense Loss

We present ablation experiments across variations in the training loss at the feature space and the projection head in Table 17. In the proposed approach (Ours), we introduce a combination of clean and robust losses at both the feature and projector layers, as shown in Equation (3) of the main paper. By introducing the loss only at the features (AD1), there is a considerable drop in clean accuracy as seen earlier, which can be recovered by introducing the clean loss at the projection layer (AD3). Instead, when only the robust loss is introduced at the projection layer (AD4), there is a large drop in clean accuracy, confirming that the projection layer is mainly needed for enforcing the clean loss. When the combined loss is enforced only at the projection head (AD2), the accuracy is close to that of the proposed approach, with marginally lower clean and robust accuracy. Enforcing only the adversarial loss in the feature space and only the clean loss in the projector space is a hard optimization problem, and this results in a non-robust model (AD5). As shown in Table 18, even by increasing β in AD5, we do not obtain a robust model; rather, there is a representation collapse. Thus, as discussed in Section 4.1 of the main paper, it is important to introduce the adversarial loss as a regularizer in the projector space as well (AD6). Enforcing only one of the two losses at the feature space (AD6 and AD7) also results in either inferior clean accuracy or robustness. Finally, from AD8 and AD9, we note that the robustness loss is better when implemented as a smoothness constraint on the representations of the student, rather than by matching representations between the teacher and student. Overall, the proposed approach (Ours) results in the best robustness-accuracy trade-off.

## F.5 Accuracy Of The Self-Supervised Teacher Model

The self-supervised teacher model is obtained using 1000 epochs of SimCLR (Chen et al., 2020b) training in all our experiments. We now study the impact of training the teacher model for fewer epochs. As shown in Table 19, as the number of teacher training epochs decreases, there is a drop in the accuracy of the teacher, resulting in a corresponding drop in the clean and robust accuracy of the student model. Thus, the performance of the teacher is crucial for training a better student model.

Table 19: **Ablations on the accuracy of the teacher SSL model (CIFAR-100, WRN-34-10):** Performance (%) obtained by varying the number of epochs for which the standard self-supervised teacher model is pretrained. Improvements in the accuracy of the teacher result in corresponding gains in both the standard and robust accuracy of the student. SA: Standard Accuracy, RA-G: Robust Accuracy against GAMA.

| #Epochs of PT | Teacher SA | Student SA | Drop in SA | % Drop in SA | RA-G  |
|---------------|------------|------------|------------|--------------|-------|
| 100           | 55.73      | 49.37      | 6.36       | 11.41        | 20.86 |
| 200           | 65.43      | 56.16      | 9.27       | 14.17        | 24.15 |
| 500           | 69.27      | 59.62      | 9.65       | 13.93        | 26.75 |
| 1000          | 70.85      | 61.05      | 9.80       | 13.83        | 27.41 |

## F.6 Self-Supervised Training Algorithm Of The Teacher

In the proposed approach, the teacher is trained using the popular self-supervised training algorithm SimCLR (Chen et al., 2020b), similar to prior works (Zhang et al., 2022). In this section, we study the impact of using different algorithms for the self-supervised training of the teacher and present results in Table 20. In order to ensure consistency across different SSL methods, we use a *random trainable* projector (a 2-layer MLP with both hidden and output dimensions of 640) for training the student and do not employ any projection head for the pretrained frozen teacher.

Table 20: **Ablations on the algorithm used for training the self-supervised teacher model (CIFAR-100, WRN-34-10):** Performance (%) of the proposed approach by varying the pretraining algorithm of the teacher model. A random trainable projector is used for training the student model, to maintain uniformity in projector architecture across all methods. SA: Standard Accuracy, RA-G: Robust Accuracy against GAMA attack.

| Method (Teacher training) | Teacher SA | Student SA | RA-G  |
|---------------------------|------------|------------|-------|
| SimCLR                    | 67.98      | 62.20      | 26.13 |
| SimCLR (tuned)            | 70.85      | 63.07      | 26.57 |
| BYOL                      | 72.97      | 63.19      | 26.82 |
| Barlow Twins              | 67.74      | 60.69      | 24.48 |
| SimSiam                   | 68.60      | 63.46      | 26.69 |
| MoCoV3                    | 72.48      | 65.57      | 26.65 |
| DINO                      | 68.75      | 60.61      | 24.80 |
While the default teacher trained using SimCLR was finetuned across hyperparameters, we utilize the default hyperparameters from the solo-learn GitHub repository for this table, and thus also present SimCLR without tuning for a fair comparison. For uniformity, we report all results with β = 8 (the robustness-accuracy trade-off parameter). From Table 20, we note that in most cases, the clean accuracy of the student increases as the accuracy of the teacher improves, while the robust accuracy does not change much. We note that this table merely shows that the proposed approach can be effectively integrated with several base self-supervised learning algorithms for the teacher model. However, it does not present a fair comparison across different SSL pretraining algorithms, since the ranking of the final performance of the student would change if the pretraining SSL algorithms were used with appropriate hyperparameter tuning.
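As a compact summary of the attack and defense objectives discussed in Sections F.3 and F.4, here is a minimal numpy sketch of the cosine-similarity losses. The additive "clean + β · adv" combination mirrors the structure of the "Ours" row of Table 17, but the exact normalization and weighting are assumptions of this sketch:

```python
import numpy as np

def cos_loss(a, b):
    """Negative mean cosine similarity between paired rows of a and b
    (more negative means representations are more aligned)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return -np.mean(np.sum(a * b, axis=1))

def defense_loss(t_f, s_f, s_f_adv, t_p, s_p, s_p_adv, beta=8.0):
    """Sketch of the 'Ours' row of Table 17: clean teacher-student
    distillation plus an adversarial smoothness term on the student,
    at both feature (_f) and projector (_p) spaces. beta is the
    robustness-accuracy trade-off weight."""
    clean = cos_loss(t_f, s_f) + cos_loss(t_p, s_p)        # clean terms
    adv = cos_loss(s_f, s_f_adv) + cos_loss(s_p, s_p_adv)  # adv(S, S) terms
    return clean + beta * adv

x = np.random.default_rng(1).standard_normal((8, 64))
```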
Review 1: Summary: The paper titled "ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations" proposes a novel method, Projected Feature Adversarial Training (ProFeAT), to improve self-supervised adversarial training (AT) by addressing the limitations in existing methods. The proposed method aims to bridge the performance gap between self-supervised and supervised adversarial training, particularly for larger models. The paper demonstrates significant improvements in both clean and robust accuracy through extensive experiments on benchmark datasets and models, setting a new state-of-the-art in self-supervised AT.

Strengths and Weaknesses:

Strengths

1. The introduction of a projection head for the student model to leverage weak supervision from the teacher while learning adversarially robust representations is innovative. This method addresses the training objective mismatch between the teacher and student, which is a key reason for sub-optimal distillation performance.
2. The paper provides a detailed methodology for the proposed ProFeAT, including appropriate attack and defense losses at the feature and projector levels, and a combination of weak and strong augmentations for the teacher and student respectively. This comprehensive approach enhances training data diversity without increasing complexity.
3. The authors conduct extensive experiments on CIFAR-10 and CIFAR-100 datasets with different model architectures. The results show significant improvements in clean and robust accuracy over existing methods.
4. The paper addresses the scalability of self-supervised adversarial training to larger model capacities, a common limitation in previous methods. The proposed ProFeAT demonstrates substantial gains in both clean and robust accuracy for larger models.

Requested Changes:

1. While the paper is well-structured, certain sections could benefit from improved clarity.
For instance, the detailed explanations of the proposed losses and the training methodology can be made more concise to enhance readability. Additionally, the notation and terminology used in the equations should be more clearly defined and consistently used throughout the paper.
2. The paper primarily focuses on benchmark datasets like CIFAR-10 and CIFAR-100. Including experiments on more diverse and larger-scale datasets, along with potential real-world applications, could strengthen the impact and applicability of the proposed method.
3. The paper would benefit from a discussion on the potential limitations of the proposed method. For example, the computational cost associated with the additional projection head and the training complexity could be addressed to provide a balanced view of the approach.

Overall, the paper presents a significant advancement in the field of self-supervised adversarial training. The proposed ProFeAT method addresses key limitations in existing techniques and demonstrates substantial improvements in both clean and robust accuracy, especially for larger models. With improvements in clarity and additional comparisons, this work has the potential to make a considerable impact in the domain of robust representation learning.

Broader Impact Concerns: There are no ethical implications involved in the paper.

==================================================

Review 2: Summary: The work revisits self-supervised learning for adversarial training and robustness and details findings with regard to adapting pre-trained networks towards these tasks. The use of projector networks and separate losses for the feature space and the projector space are emphasized. The framework of teacher-student based distillation is adapted towards the construction of a robust student that aims to strike a balance between pre-existing knowledge and newer adversarial objectives.
The work claims that differences between the pre-training tasks of the teacher (and the student) and the downstream tasks of the student may lead to sub-optimal performance of the student for downstream tasks, and conducts ablations in order to reveal the best performing arrangement with respect to the architectural choices and loss functions.

Strengths and Weaknesses:

**Strengths**

The method is relatively easy to understand and implement. The ablations seem thorough, explicating the rationale behind the choice of objective functions and architecture. Benefits, also with regard to larger backbones, are evident.

**Weaknesses**

The novelty of the method may be limited. Use of SSL objectives towards downstream performance has been thoroughly studied, including the use of projector networks in order to address the discrepancy between pre-training and downstream objectives [1, 2].

The performance of the method on the RA-PGD20 metric appears inferior to that of the DeACL method, especially considering the discrepancy in the 'AT' metric between the proposed method and DeACL.

Why was SimCLR used as the pre-training objective when BYOL apparently does better as shown in Table 20?

The choice of weak/strong augmentations in the respective encoders is not adequately justified. The comparison against other methods may not be using identical configurations with respect to this aspect of training the models.

[1] Gupta, Kartik, et al. "Understanding and improving the role of projection head in self-supervised learning." arXiv preprint arXiv:2212.11491 (2022).
[2] Xue, Yihao, et al. "Investigating the Benefits of Projection Head for Representation Learning." arXiv preprint arXiv:2403.11391 (2024).

Requested Changes: Is it possible to list results on a larger set of downstream objectives, more standard to the SSL literature? Evaluation/comparison using a linear probing objective in the ELEVATER benchmark [1] should suffice.
Kindly justify the lower number of AT metric used in DeACL, or possibly provide additional experiments that detail the comparison using an equal number of AT metric. Were the settings identical in comparisons?

Is it possible to list the results of the other methods, specifically DeACL, using weak and strong augmentations?

*Minor*: Notational and font related inconsistencies in the lines after Equation (4).

Is it possible to include one of the adversarial perturbations in attacks under the pre-training regimes of the networks? Intuitively, SSL in the limit must incorporate all possible augmentation-based invariance within the pre-training framework.

It may be much clearer to define the configurations of the various settings used in Tables 2 and 3 in one place.

Kindly explain the lack of competitive performance over the RA-PGD20 metric as compared to the DeACL method.

[1] Li, Chunyuan, et al. "Elevater: A benchmark and toolkit for evaluating language-augmented visual models." Advances in Neural Information Processing Systems 35 (2022): 9287-9301.

Broader Impact Concerns: Impact statement is missing. One may surmise a substantial impact of methods related to adversarial robustness given the recent relevance of this field.

==================================================

Review 3: Summary: In their paper "ProFeAT: projected feature adversarial training for self-supervised learning of robust representations", the authors suggested a new approach for combining self-supervised learning with adversarial training. The method builds on a method called DeACL from Zhang et al 2022, but the authors show that ProFeAT outperforms it in benchmark experiments.

Disclaimer: I am not familiar with the literature on adversarial training, and so cannot judge any claims about the novelty.

Strengths and Weaknesses: Strengths: The authors perform an extensive benchmark and show that their ProFeAT outperforms the competitors.
The authors perform multiple detailed ablation experiments and in general spend a lot of effort studying _why_ different approaches work or do not work so well.

Weaknesses: As somebody unfamiliar with prior work on SSL combined with adversarial training, it was difficult for me to understand what is going on, and how ProFeAT differs from DeACL. Some reorganization of Sections 3-4 could help.

Overall, the paper is clearly on topic in TMLR and presents interesting work. It can be accepted after a minor revision.

Requested Changes:

MAJOR COMMENTS

* Even though the authors commendably spend a lot of effort on ablations (Tables 1/2/3), I found it very difficult to follow Section 4. From the last paragraph in Section 3, it is clear that ProFeAT is *very* similar to DeACL. So naturally I am wondering, what exactly is DeACL? And how exactly does ProFeAT differ from it? However, Section 3 does not answer that. Section 4 begins with lots of ablation experiments (Tables 1/2/3) but this is *before* actually introducing the model in any detail. Only afterwards, in Section 3.2, we see Figure 1 showing the schematic for ProFeAT, and then Equations 1--4, defining the loss function. Throughout Section 3, it remains unclear how DeACL works: how does it differ from Fig 1? How does the loss differ from Eq 1--4? Does DeACL correspond to some particular rows in Tables 2/3 (e.g. "projector absent")? I would suggest defining DeACL very clearly from the beginning (in the beginning of Section 4). Show a schematic, give loss definitions. Then say what you changed in ProFeAT.

* Can the ideas from ProFeAT be used without the distillation approach? First train SimCLR, then add the smoothness loss as in Eq 1--4 and keep training? While only using a single network. Can this work?

* Section 2 mentions that SSL adversarial methods are very slow, but DeACL is fast. You never get back to this and don't give runtimes for methods in Table 4.
If runtime differences are important, it may be worthwhile adding them.

* Table 4 misses some bold, e.g. in column 2 for WideResNet and in column 5 for both ResNets. In all cases, the top performer (which is not highlighted in bold) is DeACL.

* Related to the above: in Table 4, the robust performance is often better with DeACL (esp. RA-PGD20). While clean performance (SA) is better with ProFeAT, isn't the robust performance more important here? I feel this is not sufficiently emphasized in Section 5.1, where you emphasize clean accuracy. I think Section 5.1 should be very clear that in terms of robust accuracy ProFeAT is *not* consistently better than DeACL.

* Figure 2 -- it wasn't clear to me until reaching page 12 that DeACL also has the \beta parameter. See my first comment above.

* Conclusions (Section 6) should also make it clearer that the performance gap that you closed refers to the clean accuracy. Whereas the robust accuracy is on the DeACL level.

MINOR COMMENTS

* After Eq 4 the text mentions L_pf -- should be L_fp.

Broader Impact Concerns: None

==================================================
# Multi-Point Dimensionality Reduction To Improve Projection Layout Reliability

Anonymous authors

Paper under double-blind review

## Abstract

In ordinary Dimensionality Reduction (DR), each data instance in a high dimensional space (original space) is mapped to one point in a low dimensional space (visual space). This builds a layout of projected points that attempts to preserve as much as possible some property of the data, such as distances, neighbourhood relationships, and/or topology structures, but with the ultimate goal of approximating semantic properties of the data. The approximation of semantic properties is achieved by preserving geometric properties or topology structures in visual space. In this paper, the first general algorithm for Multi-point Dimensionality Reduction is introduced, where each data instance can be mapped to possibly more than one point in visual space, with the aim of improving the reliability, usability and interpretability of dimensionality reduction. Furthermore, by allowing the points in visual space to be split into two layers while maintaining the possibility of having more than one projection per data instance, the benefit of separating more reliable points from less reliable points is discussed. The proposed algorithm in this paper, named Layered Vertex Splitting Data Embedding (LVSDE), is built upon and extends a combination of ordinary DR and graph drawing techniques. Based on the experiments of this paper, the particular proposed algorithm (LVSDE) practically outperforms popular ordinary DR methods visually in terms of semantics, group separation, subgroup detection or combinational group detection.

## 1 Introduction

Dimensionality Reduction (DR) or data embedding in a low dimensional space has gained widespread attention in different applications. Some DR techniques such as t-SNE Maaten & Hinton (2008) and UMAP McInnes et al.
(2018a;b) have become essential tools in many application domains such as genomics, machine learning, NLP, cancer research, and protein folding. Typically, a DR technique maps data from a high dimensional space (original space), or from a distance matrix denoting original space distances, onto a two dimensional visual space, preserving some property of the data Nonato & Aupetit (2019). DR techniques can be categorized into linear and non-linear. For linear DR techniques such as Principal Component Analysis (PCA) Jolliffe (1986), each dimension of visual space is a linear combination of the dimensions of original space. But for non-linear DR techniques such as t-SNE Maaten & Hinton (2008) and UMAP McInnes et al. (2018a;b), the meaning of each dimension of the visual space is not as well-defined. Regardless, the possibility of errors in the resulting visual representations invites more attention, as groups that do not exist, inaccurate neighbourhoods, mismatching topology structures, or mismatching distances are inevitable in many scenarios. Semantically as well, important information in dimensionality reduction is sometimes lost or inadequately visualized to be properly perceivable. To reduce those shortcomings, and noting limitations in the ordinary definition of dimensionality reduction, this paper deeply explores a relaxed definition of dimensionality reduction called Multi-point Dimensionality Reduction (MDR), where each data instance can be mapped to multiple points in visual space. Then, this relaxation is augmented with layers in the definition of Multi-layered Multi-point Dimensionality Reduction (MMDR), where not only can each data instance be mapped to multiple points in visual space, but also the set of projected points in visual space can be split into multiple layers. The contributions of this paper can be categorized into philosophy discussion and empirical exploration.
On the philosophy front, a clear picture of (Multi-layered) Multi-point Dimensionality Reduction as a general DR formulation is drawn by showing how and which problems it can solve, either theoretically or practically, from multiple perspectives. This discussion is built on top of previous efforts of others on related problems, but the extent of the discussion and the general feasibility of MDR are unique to this paper. On the empirical side, it is shown through qualitative analysis that the particular proposed algorithm (LVSDE) practically outperforms popular single-point DR methods visually. A quantitative analysis based on KNN classification is also provided to increase confidence in the visual results. The LVSDE algorithm proposed in this paper, which is a tangible implementable instance of MMDR and therefore MDR, is studied on different data sets to show that it performs better than ordinary DR techniques visually with respect to semantics, group separation, subgroup detection or combinational group detection. To showcase these, different embeddings of different data sets consisting of genomic distances, handwritten digit images, flower plant features, email text features, and image classifier neural network layer outputs are illustrated in Figs. 1, 2, 10, 11, 12, 13, 14, 15, 16, 17 and 18.

Figure 1: (a) An LVSDE embedding of the 1000 genomes project data set Auton et al. (2015) distances. Points with a black circle around them are in the gray layer. Points with a black dot inside them are second projections of an input data instance. Points without a black circle around them are in the red layer. The duplication of some points of America from an area close to Europe into an area of visual space between Europe and Africa, close to duplicates of some points of Africa, matches immigration from Europe and Africa to America. LVSDE has successfully selected points to duplicate and successfully guided them in a meaningful way.
For LVSDE, configuration 1 is used. (b) Yellow circles specify points that have a corresponding duplicate point in the blue rectangle, meaning that for each point with a yellow circle around it there is a point in the blue rectangle which is another projection of the same data instance of original space. The blue rectangle is specified by the user.

## 2 Related Work

While dimensionality reduction relies on preserving geometric properties and topology structures of data, the ultimate goal on the application front has been to visualize and group the semantic properties of data for that application. Such groupings are sometimes approximated with geometric properties and topology structures of data and then preserved into visual space. An important shortcoming of dimensionality reduction with exactly one projection per data instance is that it cannot reliably and fully show independent belonging to multiple fuzzy sets. It is important to distinguish between the definition of fuzzy sets Zadeh (1965) and fuzzy clustering Ruspini (1969), as the definition of fuzzy clustering has an additional restriction that the definition of fuzzy sets Zadeh (1965) does not have, namely that the sum of intensities of belonging to different clusters should be 1. In the definition of fuzzy clustering Ruspini (1969), which has taken a more univariate approach, the intensity of belonging to each cluster depends on the intensity of belonging to the other clusters. But in the model of belonging to multiple fuzzy sets by Zadeh Zadeh (1965) there is no such restriction. In that model, the intensity of belonging to one cluster can change independently of the intensity of belonging to another cluster, which is more in line with multivariate classification Anderson (1951), emerging as an important topic in many applications Morais et al. (2020); Liu et al. (2015); di Bella et al. (2013).
While ordinary dimensionality reduction has been used in a user study for exploring fuzzy clusterings by Zhao et al. (2019), it can only be accurate or reliable for univariate fuzziness in fuzzy clustering Ruspini (1969) rather than the multivariate model of belonging to multiple fuzzy sets Zadeh (1965). Also, our proposed method is examined on more realistic data that has not already gone through a fuzzy clustering algorithm. Moreover, our method goes beyond univariate fuzzy semantics and visualizes multivariate fuzzy semantics more accurately and more reliably. A single point shown between two groups of points in visual space can indicate some form of fuzzy relationship to the two groups, but not every form of fuzzy relationship. If a point is between two groups of points, then moving toward one group means moving away from the other group. This restricts the possibility of independently specifying the intensity of belonging to the first group and the intensity of belonging to the second group using any position-based visual clue. Having possibly more than one projection per data instance can be a better indicator of unrestricted belonging to multiple fuzzy sets by allowing different indicators for each fuzzy set. The existence of such indicators improves the potential of preserving geometric and topology structures of data for approximating multivariate fuzzy semantics. In visual space, the Euclidean distance of a point from an area of visual space is generally used as a visual clue for how related that point is to that area. But the fact that Euclidean distance has to adhere to the metric triangular inequality restricts the closeness it can imply to different areas of visual space. This creates a mismatching model if the closeness relationship in the data to be visualized does not adhere to the triangular inequality.
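The triangular inequality restriction discussed above can be made concrete with a small check (an illustrative sketch of ours, not part of LVSDE): a data instance that is equally close to two mutually distant groups yields a distance matrix that no Euclidean placement, in any number of dimensions, can reproduce exactly.

```python
import itertools

def triangle_violations(D):
    """Return index triples (i, j, k) where D[i][k] > D[i][j] + D[j][k],
    i.e. where the distance matrix D violates the triangular inequality
    and therefore cannot be realized exactly by Euclidean positions."""
    n = len(D)
    bad = []
    for i, j, k in itertools.permutations(range(n), 3):
        if D[i][k] > D[i][j] + D[j][k] + 1e-12:
            bad.append((i, j, k))
    return bad

# A point equally close to two mutually distant groups: d(A,B) = d(A,C) = 1
# but d(B,C) = 3 > 1 + 1, so no Euclidean layout can reproduce this matrix.
D = [[0, 1, 1],
     [1, 0, 3],
     [1, 3, 0]]
```
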
The inability of ordinary dimensionality reduction to properly visualize non-metric distances is well highlighted by Van der Maaten & Hinton (2012) and Cook et al. (2007). However, they fell short of offering a comprehensive solution to the problem they raised. Their idea of embedding into multiple maps, each with exactly one projection per data instance, does not adequately resolve the problem and causes more issues than it fixes. One reason that their idea causes issues is the effect of change blindness Simons & Ambinder (2005); Simons & Levin (1997) on multiple embeddings. Another reason is the observation that the aggregate sum of the total number of projected points in all of the multiple embedding maps is unjustifiably high. Indeed, embedding into a single map with possibly more than one projection per data instance is a more succinct and direct approach. It is so because of the aforementioned reasons and also because the number of projected points needed to explain the same set of relationships between projected points is lower. Furthermore, more relationships are visualized in a single view, giving the viewer a better chance to reason on them.

![3_image_0.png](3_image_0.png)

![3_image_1.png](3_image_1.png)

Figure 2: (a) An LVSDE embedding of a subset of the MNIST data set using the information from the whole data set's training data. A variant of the overlap reduction algorithm of Nachmanson et al. (2016) is used on the points in the blue rectangles to achieve the layouts in the attached rectangles. The separation of digits 3, 5 and 8 on the red layer is better than with popular techniques. The separation of digits 4 and 9 on the red layer is better than with popular techniques. By using layers, the identification of groups is much easier when considering only the red layer, while the gray layer provides some background information. (b,c) Yellow rectangles specify points that have a corresponding duplicate point in the blue rectangle.
The blue rectangle is specified by the user. An interesting argument emerges when looking at the fact that for many applications there exists data already in a non-metric distance form. If the semantic distance in data can be non-metric, then why is Euclidean distance usually used to model the semantics of neighbourhoods in the original space, knowing that Euclidean distance is a metric distance? One might argue that Cosine distance has been used too, and Cosine distance is not a metric distance. But Cosine distance is still far from an unrestricted distance model, as it equates to half of the squared Euclidean distance of normalized vectors. Borrowing some ideas from Principal Component Analysis (PCA) Jolliffe (1986), this paper argues that semantic neighbourhoods should be modelled on the set of perpendicular geometric projections of points of original space on each hyper-line of a specific set of hyper-lines or each manifold of a specific set of manifolds1. The specific hyper-lines or manifolds are meant to represent different semantic aspects of data, since assuming just one aspect for data is an oversimplification of an important problem. Then a point can have several semantic neighbourhoods defined around it based on different hyper-lines or manifolds. The challenge becomes how to bring those neighbourhoods to a visual space usually perceived through Euclidean distance, which has to adhere to the triangular inequality.

![4_image_0.png](4_image_0.png)

Multi-point Dimensionality Reduction (MDR) can better show multiple neighbourhoods around a single point of original space by having multiple projections of it in visual space. This is because MDR gives each data instance more freedom to express itself in different neighbourhoods of visual space using multiple projected points of it. Also, layers can improve the interpretation of those neighbourhoods of original space in visual space by the separation of the set of visualized points.
A simplification of multiple neighbourhoods around a point of original space can be understood from the example in Fig. 3.

Figure 3: Points on the surface of two planes, regularly distributed. The points on the purple line have two distinct neighbourhoods based on the two different planes.

Visualizing multiple neighbourhoods has also recently been the focus of the work of Huroyan et al. (2022), where they use reduction to three dimensions as a remedy for simultaneous neighbourhoods. However, it has long been disputed whether 3D visualizations rendered on a 2D display limit the ability to perceive all information. Nevertheless, Barahimi & Wismath argued that virtual reality technologies and stereoscopic 3D might provide some improvement over monocular 3D, though this would represent a significant hardware commitment. Specifically for word embeddings, research by Huang et al. (2012) studied how different contexts should affect them. While their technique is specific to word embeddings and is not generalizable to other applications, it also requires additional context data, distancing it to some extent from the dimensionality reduction paradigm. In this paper, not only is a general solution (LVSDE) proposed that is applicable to different application domains, it does not require any context data and tries to find the semantic contexts implicit in the data. Although preservation of a well-defined topology structure is not the main objective of this paper, we partially address the challenge of visually connecting multiple projections of a point, which is a notable step forward for this important issue.

1The meaning of perpendicular geometric projection on a manifold is based on the normal vector of the tangent space at any point on the manifold creating a perpendicular ray from any point on the manifold, which means a point can have multiple perpendicular geometric projections on a manifold.
To go from a finite set of points in a high dimensional space or even a distance matrix to a topological structure, one known method is Vietoris-Rips filtration Bauer (2021); Edelsbrunner & Harer (2010), sometimes simply referred to as Rips filtration. Persistence diagrams are a way to represent such topological structures. Relevant research in preserving topology structures is the interesting prior work of Doraiswamy et al. (2020), in which they preserve the 0-dimensional persistence diagram of the Rips filtration Bauer (2021); Edelsbrunner & Harer (2010) of high dimensional data. Aupetit (2007) studied the question of whether, as a result of dimensionality reduction, a manifold in original space that data instances were lying on is torn in visual space. While the study stops short of giving any theoretical or empirical remedy to preserve neighbourhoods when tearing happens, it is related to the theoretical argument made in Section 3 regarding whether MMDR can maintain neighbourhoods when manifold tearing happens. For the particular example of points on a 3D cylinder that is discussed in Section 3, no empirical solution is provided in this paper; rather, the discussion is framed only in a theoretical fashion. While defining neighbourhoods can be symmetric, asymmetric or partially symmetric, Van der Maaten and Hinton opted for symmetry in their t-SNE paper Maaten & Hinton (2008), where they described the addition of symmetry as one of the major differences of t-SNE from its predecessor SNE Hinton & Roweis (2002). The desirability of symmetry of neighbourhoods is also reflected in our proposed algorithm in its neighbourhood normalization transformation.
However, the move is toward more symmetry but not necessarily total symmetry, because if the nature of the data has asymmetric neighbourhoods, making the neighbourhoods totally symmetric may remove one of the important structures of original space data from visual space, which is in opposition to the goal of preserving structures. As dimensionality reduction reduces the dimensionality of data, it also makes sense to reduce but not remove the asymmetry of data if it has one (symmetry can usually be preserved without a need for reduction, but asymmetry is easier to hold in higher dimensions than in lower dimensions).

## 3 Problem Formalization And Visual Metaphors

To formalize the problem, let's first recall the definition of ordinary dimensionality reduction (ODR) or data embedding. We are given a set of data instances Q = {q1, q2*, ..., q*n} either specified by points in an m-dimensional space called original space where each qi ∈ R m, or specified on rows and columns of a matrix ¯δ denoting original space distances. Then the goal is to project each qi onto a point pi in a low dimensional space called visual space (usually 2-dimensional) so that some geometric properties or topology structures are preserved as much as possible Martins et al. (2014); Lee & Verleysen (2007). One approach to dimensionality reduction is the distance preservation strategy. Let's assume distances in the original space between qi and qj are denoted by ¯δ(qi, qj ) and distances in the visual space between pi and pj are denoted by Dv(pi, pj ). The strategy tries to minimize the difference between Dv(pi, pj ) and ¯δ(qi, qj ) for all 1 ≤ i < j ≤ n, or some variant of that Nonato & Aupetit (2019). Although well-accepted, if distances are Euclidean, this minimization usually fails to reach zero difference for all 1 ≤ i < j ≤ n, even for simple 3D data sets, such as points on a cylinder's surface as shown in Figure 4(a).
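The distance preservation strategy just described can be sketched as the following stress function (our illustrative naming; the paper does not prescribe this exact form beyond the squared-difference idea):

```python
import math

def stress(dist_orig, positions):
    """Sum of squared differences between original-space distances
    delta_bar(q_i, q_j) and visual-space Euclidean distances D_v(p_i, p_j),
    over all pairs i < j -- the quantity the distance preservation
    strategy tries to minimize."""
    n = len(positions)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dv = math.dist(positions[i], positions[j])
            total += (dist_orig[i][j] - dv) ** 2
    return total

# A perfectly preserved layout of three collinear points has zero stress.
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
P = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```
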
The failure extends to preserving local neighbourhoods too, which is better addressed with duplicate points as shown in Figure 4(b). Duplicate points, which are not allowed in ODR but are allowed in the MDR defined below, enable a data instance to be projected onto two distant neighbourhoods in visual space. This is done by allowing two separate projections for the same data instance.

## Definition 3.1. Multi-Point Dimensionality Reduction.

Given a set of data instances Q = {q1, q2, ..., qn} either specified by points in an m*-dimensional space called* original space where each qi ∈ R m, or specified on rows and columns of a distance matrix denoting original

![6_image_0.png](6_image_0.png)

Figure 4: (a) Points on the surface of a cylinder, regularly distributed. (b) The torn and unfolded cylinder with duplicate points added to the sides, where the points coloured gray on each side are duplicates of the points with non-gray colour on the other side (the image is not the result of any particular empirical dimensionality reduction method and only highlights the potential of Multi-layered Multi-point Dimensionality Reduction as a problem formulation in a theoretical fashion).

space distances, an MDR maps each data instance qi *to a non-empty set of points* Si = {si1 , si2 , ..., siλi } *where each* sit ∈ R 2. With this definition, a variant of the classical distance preservation problem for ordinary dimensionality reduction can be transformed to MDR as follows: Definition 3.2.
*Multi-point Dimensionality Reduction Distance Preservation Problem.* The MDR Distance Preservation Problem seeks to find an MDR that minimizes

$$\sum_{i=1}^{n}\sum_{j=1}^{n}\Bigl(\min_{(s_{i_{t}},s_{j_{r}})\in S_{i}\times S_{j}}\bigl[\bar{\delta}(q_{i},q_{j})-D_{v}(s_{i_{t}},s_{j_{r}})\bigr]^{2}\Bigr)\tag{1}$$

subject to $\sum_{i=1}^{n}|S_{i}|\leq\phi$, where φ *is a constant for limiting the number of points in visual space.*

In Section 2, the benefit of splitting the set of projected points into different layers for distinction between different neighbourhoods around a projected point was discussed. By combining the idea of layers with the idea of more than one projection per data instance, Multi-layered Multi-point Dimensionality Reduction (MMDR) is defined as follows:

Definition 3.3. Multi-layered Multi-point Dimensionality Reduction. Given a set of data instances Q = {q1, q2, ..., qn} either specified by points in an m-dimensional space called original space where each qi ∈ R m*, or specified on rows and columns of a distance matrix denoting original* space distances, an MMDR maps each data instance qi *to a non-empty set of points* Si = {si1 , si2 , ..., siλi } *where each* sit ∈ R 2*, and also maps each* sit ∈ Si *to a layer number* η(sit ). For convenience, γiθ ⊆ Si is defined as the subset of points siΩ ∈ Si *for which the condition* η(siΩ ) = θ *holds, meaning that* γiθ is the set of projections of qi in layer θ *of visual space.*

One of the challenges introduced by MMDR is that a point might not have any projection in one of the layers in visual space. If only that layer is displayed, this can hide some relevant information. One approach to address this challenge is to display all layers but with a different visual indicator for each layer, such as colour.
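A direct transcription of the objective of Definition 3.2 (a sketch of ours for illustration, ignoring the constraint on the total number of points) shows how an extra projection can drive the error to zero when a single projection cannot:

```python
import math

def mdr_objective(dist_orig, S):
    """For every ordered pair (i, j) of data instances, take the
    best-matching pair of their projections and sum the squared errors.
    S[i] is the list of 2-D projections of data instance i."""
    n = len(S)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += min(
                (dist_orig[i][j] - math.dist(s, t)) ** 2
                for s in S[i] for t in S[j]
            )
    return total

# Two instances at original distance 1; a second projection of instance 1
# lets a "too far" placement and a "just right" placement coexist.
D = [[0, 1], [1, 0]]
single = [[(0.0, 0.0)], [(2.0, 0.0)]]
multi = [[(0.0, 0.0)], [(2.0, 0.0), (1.0, 0.0)]]
```
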
While the interpretation of different layers is open in the definition of MMDR, this paper emphasizes a particular type of MMDR where there are only two layers, named red layer and gray layer. The red layer is intended for more reliable points and the gray layer is intended for less reliable points. As a result, a Red Gray Embedding is defined below:

## Definition 3.4. Red Gray Embedding.

A Red Gray Embedding is an MMDR with just two layers, named red layer and gray layer. The red layer is intended for more reliable points and the gray layer is intended for less reliable points.

In the above definition, a point can have several projections in the red layer and several projections in the gray layer at the same time. The following stricter definition is more pragmatic, as explained after it.

Definition 3.5. Strict Red Gray Embedding. A Strict Red Gray Embedding is an MMDR with just two layers, named red layer and gray layer, where points that have more than one projection are mapped to the gray layer (points that have exactly one projection can be mapped to either the red layer or the gray layer). The red layer is intended for more reliable points and the gray layer is intended for less reliable points.

Such a strict definition is based on the experimental observation that if a point is projected unreliably, it is most likely because it is related to multiple groups of points, as independent separate fuzzy sets, in a way that is not consistent with the triangular inequality. Therefore, having more than one projection might help mitigate the unreliability.

## 4 Method

![7_image_0.png](7_image_0.png)

Figure 5: Summary of the proposed dimensionality reduction algorithm of this paper (LVSDE) showing the flow of data through different components of the algorithm

In this section, the proposed algorithm of this paper, named Layered Vertex Splitting Data Embedding (LVSDE), is discussed, which produces a Strict Red Gray Embedding.
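Definition 3.5 can be captured by a small container class (an illustrative sketch of ours, not LVSDE itself; the layer is stored per data instance here, which the strict definition permits since all projections of a multi-projection instance must be gray):

```python
RED, GRAY = "red", "gray"

class StrictRedGrayEmbedding:
    """Minimal container for a Strict Red Gray Embedding (Definition 3.5):
    every data instance has a non-empty set of 2-D projections, assigned
    to the red or gray layer, and any instance with more than one
    projection must live entirely in the gray layer."""

    def __init__(self):
        self.projections = {}   # instance id -> list of (x, y) points
        self.layer = {}         # instance id -> RED or GRAY

    def add(self, qi, points, layer):
        if not points:
            raise ValueError("each data instance needs at least one projection")
        if len(points) > 1 and layer != GRAY:
            raise ValueError("multi-projection instances must be in the gray layer")
        self.projections[qi] = list(points)
        self.layer[qi] = layer

    def red_points(self):
        """All projected points that lie in the red layer."""
        return [p for qi, pts in self.projections.items()
                if self.layer[qi] == RED for p in pts]

emb = StrictRedGrayEmbedding()
emb.add("a", [(0.0, 0.0)], RED)                  # reliable: one red projection
emb.add("b", [(1.0, 0.0), (5.0, 5.0)], GRAY)     # duplicated: gray layer only
```
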
In this discussion, Dv denotes distances in visual space, ¯δ denotes distances in original space before any transformation, and δ denotes original space distances after transformation using the neighbourhood-normalized distance transformation. Also, Dvmax is used to denote the maximum distance in visual space and δmax is used to denote the maximum transformed distance of original space.

## 4.1 Overview

An overview of the proposed algorithm (LVSDE) is given in Fig. 5. The algorithm builds upon, extends and modifies a combination of UMAP McInnes et al. (2018a;b), the Fruchterman & Reingold (1991) force-directed graph drawing algorithm, and the force scheme Tejada et al. (2003) embedding technique. The proposed algorithm in this paper has three preliminary steps and four phases. The first preliminary step, which is UMAP McInnes et al. (2018a;b) to 30 dimensions, is optional, as discussed later. The second preliminary step is the neighbourhood-normalized distance transformation. The third preliminary step is to build a neighbourhood graph G based on the transformed distances from the previous preliminary step. Once a neighbourhood graph is ready, a four-phase force-directed graph drawing process is started. While Section 4.2 elaborates on the three preliminary steps of the proposed algorithm, Section 4.3, which follows it, elaborates on the four phases of the force-directed graph drawing.

## 4.2 Preliminary Steps

## 4.2.1 UMAP To 30 Dimensions

The first preliminary step, which is UMAP to 30 dimensions, is optional. The reason for this is to benefit from its manifold learning capabilities when a sufficient portion of the data is uniformly distributed on a Riemannian manifold, and to be able to skip it when that is not the case.
In some scenarios, another benefit is that UMAP is much faster than the rest of the proposed algorithm, and therefore such a preliminary step allows partially learning the embedding from a larger data size while the rest of the algorithm uses a smaller data size. If the user opts to use UMAP to 30 dimensions as a preliminary step, then a choice needs to be made on how to compute distances on the resulting 30 dimensional data. While the typical choice here is Euclidean distance, another possibility (especially for text data sets) is Cosine distance, and those distances will be considered as original space distances (¯δ) for the rest of the algorithm.

## 4.2.2 Neighbourhood-Normalized Distance Transformation

The second preliminary step is the neighbourhood-normalized distance transformation. This algorithm relies on drawing a pb nearest neighbour graph, where pb is a constant parameter indicating neighbourhood size. But nearest neighbour graphs with a constant number of neighbours for each vertex are directed graphs, and the graph drawing algorithm chosen to be modified is designed for undirected graphs. The distance transformation named *neighbourhood-normalized distance transformation* that is introduced below is, at the very least, designed to make neighbourhood graphs more balanced, where more balanced means fewer edges without an edge in the opposite direction, and in practice it is very effective. Given a neighbourhood size z (default value z = 20), for each point qi in Q a constant value mi is computed such that:

$$\tan^{-1}(\bar{\delta}_{i_{z}}\cdot m_{i})=1$$

where ¯δiz is the Euclidean distance, Cosine distance, or a custom precomputed distance from the z-th nearest neighbour of qi in the original space to qi.
The *neighbourhood-normalized distance* is then defined as:

$$\delta(q_{i},q_{j})=\frac{\tan^{-1}(\bar{\delta}(q_{i},q_{j})\cdot m_{i})+\tan^{-1}(\bar{\delta}(q_{i},q_{j})\cdot m_{j})}{2}\tag{2}$$

where δ(qi, qj ) becomes the transformed distance between qi and qj , and ¯δ(qi, qj ) is the Euclidean distance, Cosine distance, or a custom precomputed distance between qi and qj . The repetition of ¯δ(qi, qj ) in Equation 2 is not a typo, as discussed later, and the custom precomputed distance mentioned above corresponds to the case where the input data is in distance matrix form. Fig. 6 is helpful in understanding the computation of mi for the neighbourhood-normalized distance transformation. The density of distances around each data instance of original space is brought into play by ¯δiz, in a similar way to how t-SNE computes and uses a per-data-instance standard deviation σi of a Gaussian distribution in order to bring the density of distances around each data instance of original space into play. The role that the value z plays is analogous to perplexity for t-SNE. Needless to say, t-SNE, being a statistical and probabilistic method, uses a different paradigm than the geometric neighbourhood-based graph drawing paradigm that is used in this paper, but the requirements of a good embedding remain the same. One important observation to make is that in a nearest neighbours graph with a constant number of neighbours for each vertex, an edge without an edge in the opposite direction between the same pair of vertices is an indication of density change. So in a more balanced neighbourhood graph, in some regions either density changes become smoother or a big separation replaces them. This depends on how the conflict of an edge without an edge in the opposite direction is resolved for some edges: to either two edges in opposite directions or no edge at all.
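Since tan^-1(¯δiz · mi) = 1 implies mi = tan(1)/¯δiz, the transformation of Equation 2 can be sketched as follows (our illustrative code, not the paper's implementation; z is lowered from the paper's default of 20 only to suit a tiny example):

```python
import math

def neighbourhood_normalized(D_bar, z=20):
    """Neighbourhood-normalized distance transformation (Equation 2).

    For each point i, m_i is chosen so that atan(delta_bar_{i_z} * m_i) = 1,
    where delta_bar_{i_z} is the distance to i's z-th nearest neighbour;
    in closed form, m_i = tan(1) / delta_bar_{i_z}."""
    n = len(D_bar)
    m = []
    for i in range(n):
        others = sorted(D_bar[i][j] for j in range(n) if j != i)
        d_iz = others[z - 1]          # distance to the z-th nearest neighbour
        m.append(math.tan(1.0) / d_iz)
    delta = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                delta[i][j] = (math.atan(D_bar[i][j] * m[i])
                               + math.atan(D_bar[i][j] * m[j])) / 2
    return delta

D_bar = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
delta = neighbourhood_normalized(D_bar, z=2)
```

Note that the averaging of the two arctangent terms keeps symmetric inputs symmetric, while (as the text explains) only softening, not erasing, any asymmetry of a non-symmetric input matrix.
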
In most cases the original space distances before any transformation are symmetric, and the distinction between ¯δ(qi, qj ) and ¯δ(qj , qi) is not relevant to Equation 2. In rare cases where the original space distances before transformation might be non-symmetric, if ¯δ(qj , qi) · mj were used instead of ¯δ(qi, qj ) · mj in Equation 2, it would transform a non-symmetric distance matrix into a symmetric distance matrix, removing an important property of the input data. However, the distance transformation still brings non-symmetric distances closer to symmetric, but not completely symmetric. The reason that the proposed method does not synchronize z (the neighbourhood size of the neighbourhood-normalized distance transformation) with the neighbourhood size for building the neighbourhood graph (pb) is that a change to pb would have brought a more dramatic change to the final embedding. However, the objective was to reduce the volatility of parameters.

![9_image_0.png](9_image_0.png)

Figure 6: Computing mi for the neighbourhood-normalized distance transformation

## 4.2.3 Building The Neighbourhood Graph

The third preliminary step is to build a neighbourhood graph G based on the transformed distances from the previous preliminary step, choosing the pb nearest points as the neighbours of a point, where pb is a constant parameter specifying the number of neighbours for each point. The lower pb is chosen, the more local the method becomes.

## 4.3 Point Placement Using Force-Directed Graph Drawing

In the four phases of the proposed algorithm, force-directed graph drawing Fruchterman & Reingold (1991) is used, but several modifications are made.

## 4.3.1 Brief Review Of The Fruchterman & Reingold (1991) Algorithm

The algorithm in Fruchterman & Reingold (1991) for drawing a graph in two dimensions uses two different sets of forces based on the geometric distance in the drawing and the adjacency of vertices.
The first set of forces is called repulsive forces and is applied to every pair of graph vertices. The second set of forces is called attractive forces and is applied only to adjacent vertices. The forces are applied to vertices through multiple iterations. This assumes a unit time (in physics) between two iterations, unit mass for each vertex, and a constant speed during the iteration, only changed from zero by acceleration at the beginning of the iteration. The movement of vertices by forces is limited by a gradually decreasing temperature. Temperature can be seen as the radius of a circle around the location of a vertex in the previous iteration. The vertex is stopped on the edge of the circle in the current iteration if it tries to move outside the circle, limiting the amount of movement in each iteration for each vertex. The repulsive force ( ⃗fr) is imposed by any unordered pair of vertices on each of the two vertices and the attractive force ( ⃗fa) is imposed by each edge on each of its endpoints, as they were defined in Fruchterman & Reingold (1991). But when brought into the conceptual context and notation of this paper, they are in the form of:

$$\vec{f}_{r}=-\frac{\Gamma^{2}}{D_{v}(s_{i_{t}},s_{j_{r}})}\cdot\vec{\alpha}$$

and

$$\vec{f}_{a}=\frac{(D_{v}(s_{i_{t}},s_{j_{r}}))^{2}}{\Gamma}\cdot\vec{\alpha}\tag{3}$$

where

$$\vec{\alpha}=\begin{cases}\dfrac{s_{j_{r}}-s_{i_{t}}}{|s_{j_{r}}-s_{i_{t}}|}&\text{if the force is applied on }s_{i_{t}}\\[6pt]\dfrac{s_{i_{t}}-s_{j_{r}}}{|s_{i_{t}}-s_{j_{r}}|}&\text{if the force is applied on }s_{j_{r}}\end{cases}$$
In the notation used, sit and sjr are projections of data instances in visual space and therefore vertices of the neighbourhood graph, and Γ is a constant optimal distance defined as:

$$\Gamma={\sqrt{\frac{width\times height}{NumberOfPoints}}}$$

However, while it would be natural to assume that NumberOfPoints in the definition of Γ refers to the number of projected points in visual space, that would not be constant for the proposed algorithm of this paper. So instead, a definition of Γ is opted for where NumberOfPoints refers to the number of data instances in original space (|Q|), which is a constant, so the definition of Γ becomes:

$$\Gamma=\sqrt{\frac{width\times height}{|Q|}}\tag{4}$$

Moreover, edges of the neighbourhood graph are considered in a directed fashion: if there are two edges in opposite directions between two vertices, we consider them separate edges and therefore apply separate forces for them. To decrease the temperature, the simplest approach (as mentioned by Fruchterman & Reingold (1991)) is to start with an initial temperature, such as one tenth of the width of the drawing, and linearly decrease the temperature so that it becomes zero in the last iteration. The maximum temperature is denoted by u¯ and the default value u¯ = 100 is used in the implementation.

## 4.3.2 Modifications To The Computation Of Forces And Usage Of Masses In Acceleration

Although the graph layout algorithm of Fruchterman & Reingold (1991) is a good approach, it is not designed for the purpose of dimensionality reduction, so some modifications are proposed in this paper for that purpose.
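Before turning to the modifications, the reviewed forces, with Γ computed from the number of data instances |Q| as described above, can be sketched as follows (a minimal illustration with our own function names; the temperature schedule and iteration loop are omitted):

```python
import math

def gamma(width, height, num_instances):
    """Optimal distance Gamma, with NumberOfPoints fixed to |Q|."""
    return math.sqrt(width * height / num_instances)

def repulsive(p, q, g):
    """Repulsive force of q on p: magnitude Gamma^2 / D_v, directed away from q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy)
    scale = -(g * g / d) / d   # -(Gamma^2 / D_v) times the unit vector (q - p)/d
    return (scale * dx, scale * dy)

def attractive(p, q, g):
    """Attractive force of edge (p, q) on p: magnitude D_v^2 / Gamma, toward q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    d = math.hypot(dx, dy)
    scale = (d * d / g) / d    # (D_v^2 / Gamma) times the unit vector (q - p)/d
    return (scale * dx, scale * dy)

g = gamma(100.0, 100.0, 100)   # a 100x100 drawing area with |Q| = 100
```
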
Our first modification is to change the attractive forces for any edge (sit , sjr ) of the neighbourhood graph G on sit or sjr to the form:

$$\vec{f}_{a}=\psi\cdot\vec{\alpha}\tag{5}$$

where

$$\psi=\left(\frac{D_{v}(s_{i_{t}},s_{j_{r}})}{\Gamma}\right)^{(1-b)}\tag{6}$$

and where the parameter b is a *visual density adjustment parameter* (default b = 0.9). This modification changes the balance between attractive and repulsive forces in a way that is in contrast with two characteristics of the Fruchterman & Reingold (1991) algorithm: distributing vertices evenly and uniform edge lengths. Although distributing vertices evenly and uniform edge lengths are considered good properties in general graphs, they are not good properties for dimensionality reduction, mainly because they can hide the inherent structure of the original space. While without this change the forces would attempt to achieve a uniform density connected visual space, this change can split the visual space into different sections, as attractive forces can become denser in one part than another. In particular, the parameter b controls how much the density of edges in the induced subgraph of any subset of vertices of the neighbourhood graph should be reflected in the density of projected points (vertices) in the visual space. The induced subgraph can be any induced subgraph of G. If the vertices of the induced subgraph correspond to a semantic group for an application of dimensionality reduction, and if the density of the part of visual space corresponding to those vertices is different from the density of the surrounding area, a separate visual group is perceived for that semantic group. This argument is more indicative of necessity than sufficiency. Sufficiency of the argument is studied empirically on various data sets and reflected in the results of the experiments of this paper.
To find a good value for b, quantitative metrics, the number of projected points on the frame border, the type of data set, the size of the data set, or visual quality may be used. But quantitative metrics that do not rely on classes, or the number of projected points on the frame border, may be more practical when dealing with data in which the classes are not known or the type of data set is not familiar. The second modification that is proposed in this algorithm is to allow, to some degree, the exact value of distances in the original space (after transformation) to affect the forces in the visual space already defined based on the neighbourhood graph, the distances in the visual space, and Γ. The reason is that although the neighbourhood graph is built on the transformed distances, it does not retain the exact value of the transformed distances and is therefore blind to the relative importance of neighbours. So a multi-objective approach is pursued here where the embedding of the neighbourhood graph has a higher priority but relative distance preservation can fine-tune the embedding. Therefore, in the second modification, attractive forces are modified again to the form:

$$\vec{f}_{a}=\hat{\psi}\cdot\vec{\alpha}$$

where

$$\hat{\psi}=\psi+\begin{cases}\min\left(\dfrac{|\psi|}{2},h\right)&\text{if }h>0\\[6pt]\max\left(-\dfrac{|\psi|}{2},h\right)&\text{if }h\leq0\end{cases}$$

where h is a force defined based on distances in the original space (after transformation), in the form

$$h=\frac{\delta(q_{i},q_{j})}{\delta_{max}}-\frac{D_{v}(s_{i_{t}},s_{j_{r}})}{D_{v_{max}}}$$

The definition of h borrows some ideas from the force scheme algorithm of Tejada et al. (2003), where distances of original space are the core of force computations. In the implementation of this algorithm, Dvmax is only computed once, after the initial random layout, and used for subsequent iterations as well.
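Both modifications can be condensed into scalar magnitude functions (an illustrative sketch of ours; the direction vector α, the mass bookkeeping and the iteration machinery are left out):

```python
def psi(dv, g, b=0.9):
    """First modification: attractive magnitude (D_v / Gamma)^(1 - b)."""
    return (dv / g) ** (1.0 - b)

def psi_hat(dv, g, delta_ij, delta_max, dv_max, b=0.9):
    """Second modification: adjust psi by the relative distance error h,
    clamped to at most half of |psi| so that the neighbourhood graph
    remains the dominant objective and distance preservation only
    fine-tunes the embedding."""
    p = psi(dv, g, b)
    h = delta_ij / delta_max - dv / dv_max
    if h > 0:
        return p + min(abs(p) / 2, h)
    return p + max(-abs(p) / 2, h)
```
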
It is important to note that in the second modification, the effect of h on the attractive force is limited to only half of the attractive force obtained in the previous modification, in order to stop it from dramatically changing the embedding. While the neighbourhood graph creates the substance of the embedding, exact values of transformed distances help fine-tune the embedding while retaining the dominant effect of the neighbourhood graph. A third relevant modification, though not to the forces, is that the proposed algorithm opts to start from a unit mass for each vertex (projected point) but changes it when duplicating a vertex. This is done to account for the lower number of edges on a vertex in the process of duplicating, which is discussed later in this paper. In contrast, the algorithm of Fruchterman & Reingold (1991) conceptually assumes unit mass for each vertex when computing the acceleration of a vertex from the aggregate attractive forces on it2. In this paper, the mass for each projected point $s_{i_t}$, denoted as $\omega_{i_t}$, is initially set to 1. For the acceleration of a vertex by a repulsive force $\vec{f_r}$, a unit mass is assumed for each vertex, because the repulsive forces are not based on the edges of the neighbourhood graph and therefore the changes made to the neighbourhood graph are not relevant to them. While Procedure 1 summarizes how repulsive forces are computed and applied to projected points on each iteration, Procedure 2 summarizes how attractive forces are computed and applied to projected points on each iteration.

![12_image_0.png](12_image_0.png)

Procedure 1: Summarizes how repulsive forces are computed and applied to projected points on each iteration.
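Since Procedure 1 is given as a figure, a minimal sketch of a repulsive force in the standard Fruchterman & Reingold (1991) form can illustrate the idea. The magnitude $\Gamma^2/d$ is an assumption here (the text does not restate the exact repulsive formula), and the function name is ours.

```python
import numpy as np

def repulsive_force(p_i, p_j, gamma):
    """Repulsive force exerted on p_i by p_j (sketch). The classic
    Fruchterman & Reingold magnitude f_r = Gamma^2 / d is assumed; unit mass
    applies to repulsive forces, consistent with the text."""
    diff = np.asarray(p_i, dtype=float) - np.asarray(p_j, dtype=float)
    d = np.linalg.norm(diff)
    if d == 0.0:
        return np.zeros_like(diff)   # coincident points: direction undefined
    return (gamma ** 2 / d) * (diff / d)
```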
## 4.3.3 Computing Replication Pressure In Phase 2 And Modifying The Neighbourhood Graph In Phase 4

In phase 2, to be described in Section 4.3.6, and phase 4, to be described in Section 4.3.8, in each iteration a replication pressure is computed per projected point, which is a measure of how misplaced a projected point is in the layout of the embedding. To compute the pressure per point based on the attractive and repulsive forces on that point, a circle centered at that point is considered to form 36 axes. The radius of the circle is arbitrary. The perimeter of the circle is equally divided into 36 segments (arcs). The 36 different axes on the projected point are formed by having the projected point as the origin of each axis and the midpoint of one of the segments incident on the positive side of the axis. This way, each of the 36 axes is radially 10° apart from the previous one, starting with a horizontal axis on the projected point, and the origin of all of the axes is the projected point. A conceptual picture of the 36 axes around a projected point is depicted in Fig. 7. Here, 36 specifies the precision of the duplication process and the precision of choosing the gray-layer projected points. For each axis, all force vectors on that projected point are first perpendicularly projected onto that axis, and the sum of the magnitudes of the projected vectors on the positive side of the axis is considered as the positive pressure on that axis. Also, the sum of the magnitudes of the projected vectors on the negative side of the axis is considered as the negative pressure. The replication pressure of an axis on the projected point is defined as the aggregate sum of the positive and negative pressure on that axis. The replication pressure of the projected point is then computed as the maximum of the replication pressures among all of the 36 axes on the projected point.
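The computation described above can be sketched as follows; `replication_pressure` is our name for it, and the forces are taken as 2-D vectors already aggregated per source.

```python
import math

def replication_pressure(forces, n_axes=36):
    """Replication pressure of a projected point (sketch of Section 4.3.3).

    forces: iterable of 2-D force vectors acting on the point. For each of
    n_axes directions (10 degrees apart by default), every force is projected
    onto the axis; the magnitudes of the components falling on the positive and
    negative sides are summed separately, and their aggregate sum is the
    pressure on that axis. The point's replication pressure is the maximum of
    the per-axis pressures."""
    best = 0.0
    for k in range(n_axes):
        theta = 2.0 * math.pi * k / n_axes
        ux, uy = math.cos(theta), math.sin(theta)
        pos = neg = 0.0
        for fx, fy in forces:
            comp = fx * ux + fy * uy      # signed projection onto the axis
            if comp > 0:
                pos += comp
            else:
                neg += -comp
        best = max(best, pos + neg)
    return best
```

Note that two equal and opposite forces yield a high pressure on the axis they act along, which matches the intuition that a point squeezed from both sides is misplaced and a candidate for duplication.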
2The physics concepts used to describe the algorithm of Fruchterman & Reingold (1991) are interpreted from their formulas and are not necessarily presented in their paper as physics concepts.

In phase 4, to be described in Section 4.3.8, when trying to duplicate a projected point of the gray layer, the neighbourhood graph is modified based on the axis which produced the maximum replication pressure. When a projected point is duplicated, the projected point that was duplicated is denoted as the first point of duplication and the projected point that was added is denoted as the second point of duplication. This means that the second one becomes another mapping of the same data instance, initially located at the same location in visual space.

![13_image_0.png](13_image_0.png)

Procedure 2: Summarizes how attractive forces are computed and applied to projected points on each iteration.

The neighbours of the first point of duplication that are in one of the half-planes resulting from cutting the plane with a line perpendicular to the axis that produced the maximum pressure and incident on the first point of duplication are now considered as the neighbours of the second point of duplication and are removed from the neighbours of the first point of duplication. The location of the first point of duplication remains unchanged, but the location of the second point of duplication is moved to the centroid of its neighbours. After that, any neighbour of the first point of duplication which is closer to the second point of duplication is also considered as a neighbour of the second point of duplication instead. Let us assume that the point to be duplicated has c1 neighbours before duplication, the first point of duplication has c2 neighbours after duplication, and the second point of duplication has c3 neighbours after duplication. Then the value of the masses is changed after duplication as follows.
The value of the mass for the first point of duplication for applying attractive forces is multiplied by $\frac{c_2}{c_1}$ and the value of the mass for the second point of duplication for applying attractive forces is multiplied by $\frac{c_3}{c_1}$ compared to before duplication. This means that if the point to be duplicated is $s_{i_1}$ and $\omega_{i_1} = \omega_b$ before duplication, then after duplication:

$$\omega_{i_1} = \frac{c_2}{c_1} \cdot \omega_b \qquad \text{and} \qquad \omega_{i_2} = \frac{c_3}{c_1} \cdot \omega_b.$$

If modifying the neighbourhood graph results in no neighbours either for the first point of duplication or the second point of duplication, the duplication is considered a failed duplication. The neighbourhood graph is not changed by a failed duplication, as if no attempt was made to duplicate the point. A failed duplication is very unlikely but theoretically possible.

![14_image_0.png](14_image_0.png)

Figure 7: The 36 radial axes around a vertex (point) in G. Perpendicular projection of $\vec{f_a}$ on the blue axis. (Perpendicular projection of $\vec{f_r}$ is similar.)

## 4.3.4 The Four Phases

After the three preliminary steps, the LVSDE algorithm starts the modified force-directed graph drawing through four phases spanning a number of iterations. By default, 1830 iterations are used, which is the aggregate sum of the numbers of iterations of the four phases, but small adjustments to the number of iterations should not significantly change the outcome. The algorithm starts with a random embedding with exactly one projection per data instance, in which all projected points are in the red layer and no frozen projected point exists3. While the first phase is meant to build the general structure of the embedding, the second phase tries to identify the projected points to be moved to the gray layer, freezes them, and marks them as ineffective4.
Phase 3 marks the projected points in the gray layer as effective and reverses the freezing, where reversing the freezing means the projected points in the red layer become frozen and the projected points in the gray layer are unfrozen5. The fourth phase is where duplication takes effect for some or all of the projected points in the gray layer. As an example, Fig. 8 shows an LVSDE embedding of the 1000 genomes project data set Auton et al. (2015) distances at the end of each phase of LVSDE.

## 4.3.5 Phase One: Simple Graph Drawing

Phase one is where exactly one projection per data instance is used, and the process runs for a number of iterations (default 500). While in the Fruchterman and Reingold algorithm the temperature changes linearly from the maximum temperature (ū) to zero, in the first phase the temperature is reduced linearly from the maximum temperature to half of the maximum temperature. By default, the temperature in phase one is $\frac{(1000-\mu)\bar{u}}{1000}$ where µ is the iteration number in the current phase. The first iteration starts with a random location of the projected points in the visual space, all of them in the red layer with none frozen or ineffective, and the phase ends in a drawing which ideally should have balanced forces. The next three phases try to improve the drawing resulting from phase one.

3A frozen projected point means a projected point designated to have no movement and no effect on the movement of other projected points unless the designation is removed.

4Once a projected point is designated as ineffective, not only does it not move but it also does not apply force on any other projected point. The term freezing a projected point means the projected point does not move, but does not necessarily mean that it does not apply forces to other projected points.

5An ineffective projected point is a frozen projected point, but a frozen projected point is not necessarily an ineffective projected point.

![15_image_0.png](15_image_0.png)

Figure 8: An LVSDE embedding of the 1000 genomes project data set Auton et al. (2015) distances at the end of each phase of LVSDE. Points with a black circle around them are in the gray layer. Points with a black dot inside them are a second projection of an input data instance. Points without a black circle around them are in the red layer.

In the first phase, a border frame is not used in the visual space, in order to allow the general structure of the embedding to be shaped. But at the very beginning of the second phase, a frame slightly bigger than the current embedding is put around the embedding, which stays until the end of the algorithm. This is done so that the extra freedom given in phases 2 to 4 does not enlarge the embedding so much that it would have to be scaled down significantly, affecting the visual perception of the embedding negatively.

## 4.3.6 Phase Two: Selecting Gray Points And Freezing Them

The second phase starts with a frame slightly bigger than the current embedding. At the beginning of each iteration of the second phase, a replication pressure is computed for each projected point as was discussed in Section 4.3.3. At the first iteration of the second phase, after the replication pressures for the first iteration are computed, the maximum number of points in the gray layer is chosen. While a good measure for computing replication pressure is achieved in this paper, the ideal maximum number of data instances in the gray layer remains a more open question. We compute the maximum number of data instances in the gray layer as the minimum of two thresholds. The first threshold is calculated by counting the number of projected points whose replication pressure is outside a range centred at the mean of the replication pressures of all projected points and expanded in each direction by 1.2 times the standard deviation of the replication pressures of all projected points.
The projected points are considered from the embedding at the very beginning of the second phase. While the bidirectional style of this threshold might seem unusual, it is meant to address cases where the distribution of the replication pressures is skewed toward one direction in a non-symmetric way from the mean. The second threshold, which is the more important one, is the number of data instances divided by 4, in order to ensure that the number of data instances in the red layer is not less than 75% of the total number of data instances embedded. In the second phase, at each iteration a projected point with the highest replication pressure is chosen and marked as ineffective, until as many as the maximum number of data instances in the gray layer are marked as ineffective. In the second phase, when a projected point is marked as ineffective, it is also moved to the gray layer. A projected point in the gray layer remains there until the end of the algorithm, but whether it is marked as ineffective or not can change in the next phases. The second phase continues for a number of iterations (default 450)6. While in the Fruchterman and Reingold algorithm the temperature changes linearly from the maximum temperature (ū) to zero, in the second phase the temperature is reduced linearly from half of the maximum temperature to almost zero7. By default, the temperature in phase two is $\frac{(1000-(\mu+500))\bar{u}}{1000}$ where µ is the iteration number in the current phase.

## 4.3.7 Phase 3: Reversing Freezing And Marking Gray Points As Effective

At the very beginning of the third phase, every projected point that was marked as ineffective in the second phase is marked as effective again, but the rest of the projected points are frozen. This also implies reversing the freezing, as the projected points in the red layer become frozen but effective and the projected points in the gray layer are unfrozen. The third phase continues for a number of iterations (default 390).
While in the Fruchterman and Reingold algorithm the temperature changes linearly from the maximum temperature (ū) to zero, in the third phase the temperature is reduced linearly from almost half of the maximum temperature to almost zero. By default, the temperature in phase three is $\frac{(1000-(\mu+510))\bar{u}}{1000}$ where µ is the iteration number in the current phase.

## 4.3.8 Phase 4: Replicating Gray Points When Possible

In the first iteration of phase 4, an attempt is made to duplicate each of the projected points of the gray layer. When duplicating a projected point, the newly added point is also set to be in the gray layer. When duplicating, the neighbourhood graph is also modified as was discussed in Section 4.3.3. Not all duplications are successful, and a failed duplication may happen when trying to modify the neighbourhood graph. A failed duplication does not change the embedding or the neighbourhood graph. In the proposed algorithm, a maximum of 2 projections per data instance is enforced. The force-directed graph drawing continues in the fourth phase through a number of iterations (default 490). While in the Fruchterman and Reingold algorithm the temperature changes linearly from the maximum temperature (ū) to zero, in the fourth phase the temperature is reduced linearly from almost half of the maximum temperature to zero. By default, the temperature in phase four is $\frac{(1000-(\mu+510))\bar{u}}{1000}$ where µ is the iteration number in the current phase.

## 4.4 Parallel Speedup

A parallel version of the proposed algorithm is possible and has been implemented8, and it scales well with an increase in the number of CPU cores. In order to parallelize the algorithm, the for-loops for attractive and repulsive forces should be parallelized. If the calculations are sliced correctly, a race condition in the calculation of the forces can be avoided.
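The slicing idea of Section 4.4 can be sketched as follows. This is an illustrative sketch, not the linked implementation: each worker owns a disjoint slice of the output array, so no race condition arises, and a Fruchterman & Reingold style magnitude $\Gamma^2/d$ is assumed for the repulsive forces.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def repulsive_forces_parallel(points, gamma, n_workers=4):
    """Compute repulsive forces on all points, sliced across workers.

    Each worker writes only to its own contiguous slice of `forces`, which is
    the correct slicing that avoids a race condition. The repulsive magnitude
    Gamma^2 / d is an assumption (Procedure 1 is given as a figure)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    forces = np.zeros_like(points)

    def work(lo, hi):
        for i in range(lo, hi):          # this worker owns rows lo..hi-1
            total = np.zeros(points.shape[1])
            for j in range(n):
                if i == j:
                    continue
                diff = points[i] - points[j]
                d = np.linalg.norm(diff)
                if d > 0:
                    total += (gamma ** 2 / d) * (diff / d)
            forces[i] = total

    step = -(-n // n_workers)            # ceiling division: slice size
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        list(ex.map(lambda lo: work(lo, min(lo + step, n)), range(0, n, step)))
    return forces
```

A process pool (or a lower-level language, as in the linked code base) would give true CPU scaling; the thread pool here only illustrates the slicing.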
## 4.5 Parameters

The visual density adjustment parameter (b), the number of neighbours for building the neighbourhood graph (pb), and whether to use UMAP to 30 dimensions as a preliminary step can significantly change the outcome of the algorithm. To overcome the challenge of setting the values of the parameters, one approach that provides practical benefit is as follows. While the visual density adjustment parameter can be set freely to any real number and the number of neighbours for building the neighbourhood graph can be set freely to any positive integer, in practice constraints can be used. Choosing the visual density adjustment parameter from the set {−0.9, −0.5, −0.1, 0.1, 0.5, 0.9} and the number of neighbours for building the neighbourhood graph from the set {10, 20, ⌊|Q|/3⌋, ⌊|Q|/4⌋, ⌊|Q|/5⌋}, where |Q| is the number of data instances in the original space, empirically provides a practical way to approach the challenge.

6The default values of 450, 390 and 490 iterations in phases two, three and four remained for historical reasons in the development of the algorithm; other values of a similar scale, such as 500, would work as well.

7"Almost zero" and "almost half" for some of the temperature limits in phases two, three and four were also used in the code base for historical reasons; these can be replaced with simply zero and half.

8https://web.cs.dal.ca/~barahimi/chocolate-lvsde/

In this way, by testing the outcome of a total of 60 configurations for the three parameters, sufficiently useful values for the parameters may be found. For text-related high-dimensional data sets, whether to use Euclidean distance or Cosine distance is a consideration that needs to be added to the above discussion, as Cosine distance is generally known to perform better on text-based high-dimensional data.

## 5 Experimental Setup

## 5.1 Data Sets

In this paper six different data sets are considered. The first data set is the 1000 genomes project Auton et al.
(2015), consisting of DNA samples from 2504 individuals in different locations around the world. The data from the third phase of the project is used and converted to a distance matrix (details in subsection 5.2). The second data set is the MNIST set of handwritten digits (details in subsection 5.2). The third data set is the IRIS data set, classifying 3 different classes of the Iris plant. The fourth is the MeefTCD Barahimi (2022) enhanced email features data set, which has 2400 points and 48557 dimensions and is labelled with 8 classes described in Barahimi (2022). The fifth and sixth are neural network layer outputs on the CIFAR-10 Krizhevsky image classification data set.

## 5.1.1 Cifar-10 Image Classifier Neural Network Last Hidden Layer

For the fifth and sixth data sets, instead of using dimensionality reduction on the CIFAR-10 Krizhevsky data set directly, we first trained a neural network image classifier using the data. Then, for the fifth data set we took the output of the last hidden layer on the first 5000 data instances, and for the sixth data set we took the output of the last hidden layer on the first 5000 data instances of the test data. The architecture is a Convolutional Neural Network (CNN) displayed in Fig. 9. It consists of two convolutional layers, each with 2 × 2 max-pooling and ReLU activation, one with 32 filters and the second one with 16 filters, then a Dropout with probability 0.5, a flattening layer, and then two fully connected 800-dimensional DropConnect Wan et al. (2013) layers, each with a DropConnect probability of 0.5 and ReLU activation. Finally, there is a fully connected 10-dimensional layer with softmax activation. It is notable that the architecture, while reasonable, is not the state of the art in image classification, as the purpose of this paper is the evaluation of a dimensionality reduction method.
The training is done in 100 epochs using the Adam Kingma & Ba (2014) optimizer of TensorFlow and all training data of CIFAR-10.

![17_image_0.png](17_image_0.png)

Figure 9: Convolutional Neural Network (CNN) image classifier architecture for CIFAR-10

## 5.2 Experimental Setup Details

The proposed method (*LVSDE*) is evaluated in this paper in comparison to UMAP McInnes et al. (2018a;b) and the Barnes-Hut variant Van Der Maaten (2014) of t-SNE Maaten & Hinton (2008) on the six data sets. For UMAP McInnes et al. (2018a;b), the umap-learn 0.5.3 Python implementation is used. For the 1000 genomes project data set Auton et al. (2015), two different configurations of the *LVSDE* method are reported on and the difference between them is discussed. For the first configuration, the parameters used are the visual density adjustment parameter b = 0.1, one third of the number of data instances as the number of neighbours in the initial neighbourhood graph, and opting not to use UMAP to 30 dimensions. For the second configuration, the parameters used are the visual density adjustment parameter b = 0.5, 20 as the number of neighbours in the initial neighbourhood graph, and opting not to use UMAP to 30 dimensions. The data from the third phase of the 1000 genomes project is used and converted to a distance matrix using version 2.0 of the PLINK Purcell et al. (2007); Purcell whole genome analysis toolkit, which has an implementation of the Yang et al. (2011) method for producing a similarity relationship matrix, followed by a simple linear transformation to produce non-negative dissimilarity distances (maximum similarity minus the similarity as distance); the resulting distance matrix is used as the input to all methods.
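The linear transformation from similarities to non-negative distances (maximum similarity minus the similarity) can be sketched as follows; the function name is ours, and setting the self-distances to exactly zero is our assumption.

```python
import numpy as np

def similarity_to_distance(S):
    """Turn a similarity relationship matrix into non-negative dissimilarities
    by subtracting each similarity from the maximum similarity, so the most
    similar pair gets distance 0 (sketch of the transformation in Section 5.2).
    Zeroing the diagonal is an assumption, not stated in the text."""
    S = np.asarray(S, dtype=float)
    D = S.max() - S
    np.fill_diagonal(D, 0.0)
    return D
```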
For the MNIST data set, for the *LVSDE* method, the parameters used are the visual density adjustment parameter b = 0.1, 50 as the number of neighbours in the initial neighbourhood graph, and opting to use UMAP to 30 dimensions. Also for LVSDE, Euclidean distance is used for UMAP to 30 dimensions and for converting the 30-dimensional UMAP output to distances. For the MNIST data set, for *LVSDE*, all MNIST training data are embedded into 30 dimensions but only the first 5000 rows are used for the rest of the algorithm, evaluation, and display. Similar conditions are used for UMAP and t-SNE, where each is applied to all of the training data but only the first 5000 rows are picked for evaluation and display. For the MNIST data set, for all methods, Euclidean distance is used as the distance measure. For the IRIS data set, the parameters used for *LVSDE* are the visual density adjustment parameter b = −0.1, 20 as the number of neighbours in the initial neighbourhood graph, and opting not to use UMAP to 30 dimensions. For all methods, Euclidean distance is used as the distance measure. For the MeefTCD email features data set, for the *LVSDE* method, the parameters used are the visual density adjustment parameter b = 0.5, 40 as the number of neighbours in the initial neighbourhood graph, and opting to use UMAP to 30 dimensions. Also for LVSDE, Cosine distance is used as the distance measure for UMAP to 30 dimensions but after that, Euclidean distance is used. For t-SNE and UMAP, Cosine distance is used as the distance measure. For the neural network output on the CIFAR-10 data set, for the *LVSDE* method, two different configurations are reported on and the difference between them is visualized.
For the first configuration, the parameters used are the visual density adjustment parameter b = 0.1, 20 as the number of neighbours in the initial neighbourhood graph, and opting to use UMAP to 30 dimensions. For the second configuration, the parameters used are the visual density adjustment parameter b = 0.5, 20 as the number of neighbours in the initial neighbourhood graph, and opting to use UMAP to 30 dimensions. Euclidean distance is used for all configurations.

## 6 Visual Metaphors

To visualize a Strict Red Gray Embedding, several visual metaphors are used in this paper. For example, in the simplest form, the points in the red layer are rendered in red and the points in the gray layer are rendered in gray. Another example is giving priority to one layer when overlap occurs, meaning that first the points of one of the layers are rendered and then the points of the other layer are rendered on top of them. To be able to display class colours in addition to the separation of layers, a visual metaphor is used where class colours are used as the rendering colour for the points of the visual space, but a black circle is drawn around the points of only one of the layers, or the points of only one of the layers are drawn smaller. Fig. 13(a,b) is a good showcase of these metaphors. Another variant, considered for grayscale images and especially for the readability of MNIST digits, is to draw the images of the points in the red layer by a linear transformation of the grayscale values [0 − 255] to the range of colours starting with full black and ending with full red. For the points in the gray layer, however, the linear transformation that maps the [0 − 255] grayscale values to the range of colours starting from mid gray and ending with full white is used. See Fig. 2 as a showcase.
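The grayscale metaphor for MNIST digits can be sketched as a per-pixel colour mapping. The function name and the choice of 128 as the mid gray value are our assumptions.

```python
def digit_pixel_colour(g, layer):
    """Sketch of the grayscale visual metaphor of Section 6: red-layer pixels
    map grayscale 0..255 linearly from full black to full red; gray-layer
    pixels map linearly from mid gray (assumed 128) to full white.
    Returns an (R, G, B) tuple with components in 0..255."""
    t = g / 255.0
    if layer == "red":
        return (round(255 * t), 0, 0)        # black -> red
    half = 128                                # assumption: mid gray value
    v = round(half + (255 - half) * t)        # mid gray -> white
    return (v, v, v)
```

The effect is that red-layer digits stay high-contrast while gray-layer digits fade towards the background, matching the foreground/background roles of the two layers.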
## 6.1 Visually Connecting Multiple Projections Of Data Instances

We wish to visually connect the multiple projections of a data instance for all data instances that have more than one projection. In the simplest form, where there are at most two projections per data instance, the connection can be a line between the two projections of the data instance. Conceptually, this can be modelled by a graph where an edge between two projected points indicates that they are projections of the same data instance. In this scenario, if the visual metaphor can connect visually on a group basis rather than per individual data instance, less clutter is caused in the visualization. For example, if the number of data instances that have two projections is 300, and if the viewer has to think about them individually, there are 300 pairs of projected points, which takes a significant amount of time to examine even if somehow those 300 connections between the 300 pairs of projected points were visualized without clutter.

![19_image_0.png](19_image_0.png)

Figure 10: (a,b,c,d) An LVSDE embedding of the 1000 genomes project data set Auton et al. (2015) distances with different visual metaphors. (b) The edge bundling algorithm of Holten & Van Wijk (2009) is applied on the duplication graph. (c) Edge clustering is done based on the edge bundling. (d,e,f,g,h) Meta balls are used to visualize edge clusters, which are based on the edge bundling done by the algorithm of Holten & Van Wijk (2009). (a,b,c,d) Points with a black circle around them are in the gray layer. Points with a black dot inside them are a second projection of an input data instance. Points without a black circle around them are in the red layer. (e,g,h) Points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. (f) An LVSDE embedding of a subset of the MNIST data set visualized with meta balls. (g) An LVSDE embedding of a subset of the IRIS data set visualized with meta balls.
(h) An LVSDE embedding of a subset of the MeefTCD data set visualized with meta balls.

To reduce the clutter and provide a means of grouping for the pairs of projected points, the following approach is proposed. The approach starts with the edge bundling algorithm of Holten & Van Wijk (2009), which draws each edge of the graph as a multi-segment polyline in a way that related edges overlap each other, reducing clutter. Next, edge clustering partitions the set of drawn edges into multiple sets called edge clusters, setting the stage for grouping pairs of projected points (see Fig. 10(c) for an example). Although edge bundling reduces clutter, there is still room for improvement. Next, we use an adaptation of the concept of meta balls Blinn (1982), firstly to guide the viewer on which parts of the edge bundling and edge clustering to examine, and secondly to provide an approximate partition of the visual space (see Fig. 10(d) for an example). Meta balls are algebraic surface drawings Blinn (1982), meaning that an algebraic formula is used to specify the colour of each pixel of a surface. In the adaptation used in this paper, however, a meta ball has a corresponding edge cluster and, instead of having one center, a single meta ball has multiple centers, which are the endpoints of the edges in the cluster. The density change of the meta ball is reflected in the opacity component of the RGBA colour of each pixel, and the size of the edge cluster also affects how much the meta ball spreads in the visual space. This allows larger clusters to be emphasized. The meta balls are only displayed for the ten largest edge clusters, and a different base colour is assigned to each meta ball. Iterating through the meta balls in the order of their corresponding edge cluster sizes, the pixels of each meta ball are drawn on a separate canvas.
The canvases are then displayed on top of each other, with the opacity of the pixels controlling how much of the previous canvases' pixels should be visible. Then the actual projected points are drawn, and then the corresponding edge bundling is drawn. Consider again Fig. 10, in which edge bundling, edge clustering and adapted meta balls are used to visualize embeddings of different data sets. In this example, we want to emphasize the blue meta ball in Fig. 10(f), which guides the user to the part of the visual space where duplication of the digit 8 has occurred. The edge bundling in that region then fine-tunes the picture of the path on which the duplications of digit 8 are located. With regard to the details, when applying the edge bundling algorithm of Holten & Van Wijk (2009), we avoid the smoothing of edges discussed in Holten & Van Wijk (2009). Moreover, in performing the edge clustering, approximate edge distances and the concept of edge compatibility defined in Holten & Van Wijk (2009) are used as a measure to combine edge clusters. Initially, each edge is a separate cluster, and if two edges belonging to two different clusters are found that have both a sufficiently small approximate distance and a sufficiently large edge compatibility as defined in Holten & Van Wijk (2009), then the two clusters are combined into one cluster. Furthermore, when choosing the ten largest clusters, if the number of edge clusters is less than ten, every edge cluster is selected. And most notably, if the RGB colour assigned to a cluster is specified by (R1, G1, B1) and the coordinates of the endpoints of the edges of the cluster are specified by {(x1, y1), (x2, y2), ..., (xξ, yξ)}, then the RGBA colour of a pixel of the adapted meta ball at the coordinates (x, y) is defined as

![20_image_0.png](20_image_0.png)

where β indicates the spread of the meta ball for the largest edge cluster and ξmax specifies the size of the largest cluster.
Since the opacity as specified above is at most 80%, overlapping meta balls can still be perceivably detected to some degree.

## 7 Empirical Results And Comparison

## 7.1 Qualitative Analysis Of Results

Starting with the qualitative results, the most notable result in terms of semantics is the embedding of the 1000 genomes project data set Auton et al. (2015) distances by LVSDE displayed in Fig. 1.

![21_image_0.png](21_image_0.png)

Figure 11: Two configurations of LVSDE used for embedding the 1000 genomes project data set Auton et al. (2015) distances. While configuration 1 performs better semantically, configuration 2, like UMAP and t-SNE, artificially performs better in group separation; however, knowing the phylogenetic connection of human genomes, that separation has less semantic harmony with the reality of the significant connection between human genomes. Configuration 1, although better, does not show the ideal representation of the connection between human genomes either, and improving it in order to have an embedding method that shows the connection of human genomes better and more inclusively is an interesting area for future work. The shortcoming in the representation of that connection is an important flaw of the current state of dimensionality reduction methods, improved by LVSDE but not to the full extent. In parts a and b, points with a black circle around them are in the gray layer, points with a black dot inside them are a second projection of an input data instance, and points without a black circle around them are in the red layer. In part c, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller.

The 1000 genomes project is one of the first genomes data sets for studying human genome variety and contains the DNA samples from different locations of the world labelled with geographical locations. One important observation in Fig.
1 is that some of the projected points labelled America are duplicated from an area of visual space close to the projected points labelled Europe to an area close to duplicates of some of the projected points labelled Africa. This matches the immigration history from Europe and Africa to America. In this manner, LVSDE has achieved two things: first, identifying the points to duplicate, and second, duplicating them in a meaningful way to a fitting area of visual space, showing its capacity to guide the duplication. While Figs. 12(a) and 12(b) show embeddings of the 1000 genomes project distances using UMAP and t-SNE, the fact that they do not support duplication puts them in a weaker position in terms of meaning. In Fig. 11, two configurations of LVSDE are used for embedding the 1000 genomes project data set Auton et al. (2015) distances. While configuration 1 performs better semantically, configuration 2, like UMAP and t-SNE in Figs. 12(a) and 12(b), unrealistically separates the genomes in a way that hides the significant connection between human genomes regardless of location or race; given the phylogenetic connection of human genomes regardless of any division, that separation has less semantic harmony with reality. Configuration 1, although better, does not show an ideal representation of the connection between human genomes either, and improving it so that the connection is shown more clearly and inclusively is an interesting area for future research. The next important result is LVSDE on a subset of the MNIST data set. For the MNIST data set, it is known that popular dimensionality reduction methods have difficulty separating the digits 3, 5 and 8. The handwritten digits do not necessarily correspond to digital digits with complete certainty. 
A handwritten digit, especially if not written well, may for example be 80% a 3, 60% a 5, and 30% an 8: the so-called multivariate fuzzy semantics discussed in Section 2. But digits 3, 5 and 8 are not the only digits that cause issues for dimensionality reduction techniques on the MNIST data set. ![22_image_0.png](22_image_0.png) Figure 12: UMAP and t-SNE embeddings of the 1000 genomes project data set Auton et al. (2015) distances and of a subset of the MNIST data set. Digits 4 and 9 also confuse dimensionality reduction techniques, limiting their ability to separate them; again, some of the handwritten digits 4 and 9 exhibit multivariate fuzzy semantics. Looking at Fig. 2, the separation of digits 3, 5 and 8 on the red layer, and also the separation of digits 4 and 9 on the red layer, is better than for popular techniques. By using layers, the identification of groups in the LVSDE embedding of the subset of the MNIST data set becomes much easier when only considering the red layer, while the gray layer provides some background information. For comparison, Figs. 12(c) and 12(d) show UMAP and t-SNE embeddings of the subset of the MNIST data set. While t-SNE does a better job than UMAP at separating digits 3, 5 and 8, it has many misplaced digits 3, 5 and 8, meaning a digit in close proximity to images of different digits. Next in line is the IRIS data set, where detecting subgroups is the focus. Identifying a subgroup means identifying a significantly large subset of a group that contains only data instances from that group, but not all of them. Being a very simple data set, IRIS has only three classes as groups. Looking at Fig. 13(b), which does not show class colours, one would normally immediately identify at least four subgroups, of which only one is wrong, meaning at most a 25% chance of failure. In contrast, if class colours are removed from Fig. 
13(c) for t-SNE on IRIS, one would normally identify only two subgroups, of which one is wrong, meaning a 50% chance of failure. While UMAP on IRIS, shown in Fig. 13(d), is not as defective as t-SNE, how it compares to LVSDE can be a subjective argument and requires more analysis. For the MeeefTCD data set, the identification of combinational groups is the focus. In Fig. 14, the separation of the combination of the two classes CES and logistics is much better for LVSDE than for t-SNE and UMAP: for LVSDE, the two classes combined are clearly separate from the rest of the classes, whereas for t-SNE the combination is not separate from at least 5 other classes and for UMAP it is not separate from 4 other classes. For the image classifier neural network on CIFAR-10, we took the output of the last hidden layer (the second DropConnect layer) for the first 5000 data instances of the training data and passed it to LVSDE, t-SNE and UMAP, which yielded the embeddings displayed in Figs. 15 and 16. We also took the output of the last hidden layer (the second DropConnect layer) for the first 5000 data instances of the test data and passed it to LVSDE, t-SNE and UMAP, which yielded the embeddings displayed in Figs. 17 and 18. Looking at Figs. 15 and 16, the better separation of some of the classes by LVSDE compared to t-SNE and UMAP is very clear. Looking at Figs. 17 and 18, at least for configuration 1 of LVSDE the argument holds, especially if some of the classes are combined. ![23_image_0.png](23_image_0.png) Figure 13: Embeddings of the IRIS data set using different dimensionality reduction methods. For LVSDE, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. For LVSDE in part b, points in the red layer are coloured red and the points in the gray layer are coloured gray. LVSDE has performed better for subgroup detection. 
![23_image_1.png](23_image_1.png) Figure 14: Embeddings of the MeeefTCD data set using different dimensionality reduction methods. For LVSDE, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. For LVSDE in part b, two of the classes are combined. The separation of the combination of the two classes CES and logistics is much better in the LVSDE embedding. ![24_image_0.png](24_image_0.png) ![24_image_1.png](24_image_1.png) Figure 15: Embeddings of a subset of the image classifier neural network layer output on CIFAR-10 training data, using different dimensionality reduction methods. For LVSDE, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. ![25_image_0.png](25_image_0.png) ![25_image_1.png](25_image_1.png) Figure 16: Embeddings of a subset of the image classifier neural network layer output on CIFAR-10 training data, using different dimensionality reduction methods. For LVSDE, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. For LVSDE, points in the red layer are coloured red and the points in the gray layer are coloured gray. ![26_image_1.png](26_image_1.png) ![26_image_0.png](26_image_0.png) ![26_image_2.png](26_image_2.png) 
Figure 17: Embeddings of a subset of the image classifier neural network layer output on CIFAR-10 test data, using different dimensionality reduction methods. For LVSDE, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. ![26_image_3.png](26_image_3.png) Figure 18: Embeddings of a subset of the image classifier neural network layer output on CIFAR-10 test data, using different dimensionality reduction methods. For LVSDE, points with a black circle around them are in the red layer and the points in the gray layer are drawn smaller. For LVSDE, points in the red layer are coloured red and the points in the gray layer are coloured gray. ## 7.2 Quantitative Measurement And Qualitative Analysis While embeddings can be evaluated qualitatively by visual inspection, it is good to also have some quantitative measurement of the embeddings. The objectives of qualitative and quantitative analysis are not necessarily the same, as they evaluate different aspects of the embeddings; quantitative analysis alone should not be considered the sole indicator of better embeddings, but it can increase the confidence of the reader in the qualitative visual results. Since Multi-layered Multi-point Dimensionality Reduction, and therefore Strict Red Gray Embeddings, are not completely compatible with existing quantitative measurements, some discussion is needed on how to adapt current quantitative measurements to them. In particular, a measure called the Λ measure is used in this paper, which is the same as KNN classification accuracy except adapted to Multi-layered Multi-point Dimensionality Reduction and Strict Red Gray Embeddings. 
For a layer θ (or set of layers L) of a Multi-layered Multi-point Dimensionality Reduction embedding, the Λ measure is the percentage (or ratio) of data instances that have at least one projection in θ (or L) and whose class label matches the label with maximum occurrence among the k nearest neighbours of any of the projections of that data instance in θ (or L). Neighbours are limited to a specific set of layer(s) in visual space called the classification layer(s) Lb, and the result of the evaluation is denoted by Λ(θ, Lb) or Λ(L, Lb), respectively. Here, k is called the evaluation neighbourhood size, and θ or L are called the evaluation layer(s). The choices of k, of θ or L, and of Lb are parameters of the evaluation; in this paper, k = 15 is used as the evaluation neighbourhood size for all experiments. Table 1 shows the quantitative results and comparison on the six data sets using different dimensionality reduction methods. Three different combinations of evaluation layers and classification layers are reported. While the red-red evaluation is more suitable when the intent is finding a sufficient portion of the data that shows a pattern, the all-all evaluation is more suitable when the intent is finding a pattern that covers all of the data. The gray-gray evaluation, compared to the red-red evaluation, is more suitable for seeing how the gray layer and the red layer act differently. 
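As an illustration, the Λ measure defined above could be computed as follows. This is a hedged sketch, not the authors' implementation; the representation of projections as `(instance_id, layer, x, y)` tuples and the tie-breaking of the majority vote are assumptions:

```python
import math

def lam_measure(projections, labels, eval_layers, classification_layers, k=15):
    """Sketch of the Lambda measure: the percentage of data instances with at
    least one projection in the evaluation layer(s) whose class label matches
    the majority label among the k nearest neighbours of any of those
    projections, with neighbours restricted to the classification layer(s).

    projections: list of (instance_id, layer, x, y) tuples (assumed format)
    labels: dict mapping instance_id to class label
    """
    pool = [p for p in projections if p[1] in classification_layers]
    instance_ids = {p[0] for p in projections if p[1] in eval_layers}
    correct = 0
    for inst in instance_ids:
        hit = False
        for p in projections:
            pid, layer, x, y = p
            if pid != inst or layer not in eval_layers:
                continue
            # k nearest neighbours among the classification-layer pool,
            # excluding the query projection itself.
            others = sorted(
                (q for q in pool if q != p),
                key=lambda q: math.hypot(q[2] - x, q[3] - y),
            )[:k]
            votes = {}
            for q in others:
                votes[labels[q[0]]] = votes.get(labels[q[0]], 0) + 1
            if votes and max(votes, key=votes.get) == labels[inst]:
                hit = True
                break
        if hit:
            correct += 1
    return 100.0 * correct / len(instance_ids) if instance_ids else 0.0
```

With `eval_layers = classification_layers = {"red"}` this corresponds to the red-red evaluation; choosing both layers for both sets corresponds to the all-all evaluation.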
| Data set | Embedding method | Measure 1 (red-red) | Measure 2 (all-all) | Measure 3 (gray-gray) | |---------------------------------------|-----------------------|-----------------------|-----------------------|-------------------------| | 1000 genomes | LVSDE configuration 1 | 98.551% | 98.562% | 97.802% | | 1000 genomes | LVSDE configuration 2 | 99.708% | 99.681% | 99.333% | | 1000 genomes | UMAP | 99.321% | 99.321% | NA | | 1000 genomes | t-SNE | 99.241% | 99.241% | NA | | MNIST | LVSDE | 97.359% | 97.040% | 90.492% | | MNIST | UMAP | 97.000% | 97.000% | NA | | MNIST | t-SNE | 96.400% | 96.400% | NA | | IRIS | LVSDE | 97.368% | 95.333% | 86.111% | | IRIS | UMAP | 97.333% | 97.333% | NA | | IRIS | t-SNE | 97.333% | 97.333% | NA | | MeeefTCD (classes not combined) | LVSDE | 74.359% | 74.542% | 72.222% | | MeeefTCD (classes not combined) | UMAP | 75.083% | 75.083% | NA | | MeeefTCD (classes not combined) | t-SNE | 75.125% | 75.125% | NA | | Neural network layer on training data | LVSDE configuration 1 | 79.121% | 78.560% | 70.000% | | Neural network layer on training data | LVSDE configuration 2 | 79.451% | 78.860% | 72.222% | | Neural network layer on training data | UMAP | 76.580% | 76.580% | NA | | Neural network layer on training data | t-SNE | 79.040% | 79.040% | NA | | Neural network layer on test data | LVSDE configuration 1 | 63.165% | 62.100% | 47.778% | | Neural network layer on test data | LVSDE configuration 2 | 61.692% | 61.600% | 50.889% | | Neural network layer on test data | UMAP | 60.740% | 60.740% | NA | | Neural network layer on test data | t-SNE | 61.440% | 61.440% | NA | Table 1: Experiments' knn measures on the six data sets. For the column Measure 1, the evaluation layer is the red layer and the classification layer is the red layer. For the column Measure 2, the evaluation layers are both the red layer and the gray layer while the classification layers for Measure 2 are also both the red layer and gray layer. 
For the column Measure 3, the evaluation layer is the gray layer and the classification layer is the gray layer. For t-SNE and UMAP, it is assumed that all projected points are in the red layer. All numbers in this table are rounded to 3 decimal places. The evaluation neighbourhood size k = 15 is used. ## 8 Availability For an implementation of LVSDE, please visit the URL https://web.cs.dal.ca/~barahimi/ chocolate-lvsde/ (date accessed: January 12, 2023). ## 9 Limitations One challenging and not completely resolved potential of Multi-layered Multi-point Dimensionality Reduction is the preservation of well-defined topology structures, where the meaning is interpreted in the context of finite point set topology Evans et al. (1967); Aleksandrov (1998). While infinite geometric topology Sher & Daverman (2001) and infinite shape theory Kendall et al. (2009) are also conceptually relevant contexts for topology, their relevance is less direct than that of finite point set topology, which deals with a finite number of points. If the challenge of visually connecting multiple projections of a point is adequately addressed, the preservation of well-defined topology structures can become a significant potential of Multi-layered Multi-point Dimensionality Reduction. The preservation is in the sense that a well-defined procedure finds a topology structure in the original space and a corresponding well-defined visual procedure finds a close or exactly matching topology structure in the visual space. Furthermore, LVSDE depends on some parameters that can significantly change the outcome of the embedding. While the diversity of possible embeddings is actually in favour of LVSDE, the lack of an established way to set those parameters is a limitation. 
For the 1000 genomes project data set, regarding the inclusive connection of human genomes, the shortcoming in adequately representing that connection is an important flaw of current dimensionality reduction methods, which LVSDE mitigates but does not fully resolve. LVSDE also limits the number of projected points per data instance to 2, which is simple but not necessarily the best option for MMDR in terms of the possibility of better embeddings. Lastly, while the quantitative evaluation was performed using a class-aware evaluation measure, any difference between a class-blind evaluation and a class-aware evaluation could provide insights for future improvements. ## 10 Conclusion And Future Work A novel dimensionality reduction method named LVSDE was presented, and the philosophy of its underlying platform was discussed and elaborated on. The road ahead is open on many fronts, including but not limited to improving speed and applying the method to data sets in different domains. Better visual metaphors for visually connecting different projections of a data instance, and moving in the direction of preserving well-defined finite point set topology structures, may also be significant research directions in the future. Finding better measures to choose the number of projected points in the red layer is another aspect to improve. Looking at Figs. 15 and 16, while both displayed configurations of LVSDE are preferable to t-SNE and UMAP, there is a substantial difference between the two configurations which invites more attention. For the 1000 genomes project data set, regarding the inclusive connection of human genomes, while configuration 1 of LVSDE is substantially better than UMAP, t-SNE and configuration 2 of LVSDE, there is still room for improvement. Finally, while LVSDE limits the number of projected points per data instance to 2, going beyond 2 is another direction of interest. 
Also, finding better ways to set the parameters of LVSDE is a further important direction to improve upon. Another interesting direction for improvement is finding better ways to choose the number of points in the gray layer. While some quantitative evaluation was reported in this paper, it served more to increase confidence in the qualitative visual results than as a standalone numerical comparison with a careful discussion of the objective, lack of bias, and usage scenarios. In particular, in addition to quantitative measures that rely on class numbers, having a proper, suitable and meaningful class-number-blind quantitative measurement for Multi-layered Multi-point Dimensionality Reduction is also of interest. Such a quantitative measure should be able to take into account multiple neighbourhoods per data instance, both in the original space and in the visual space, at least to some practical degree. For ordinary dimensionality reduction, trustworthiness Venna & Kaski (2001); Kaski et al. (2003) has gained attention for matching neighbourhoods in the original space and the visual space. It is based on a single neighbourhood per data instance in both the original space and the visual space. While adapting it to multiple neighbourhoods in visual space is less challenging, adapting it to multiple neighbourhoods in original space, based on projections of points of the original space onto a set of hyper lines or a set of manifolds, is more challenging and may well prove to be computationally intractable, so practical remedies are desirable on that front as well. ## References Pavel S Aleksandrov. *Combinatorial topology*, volume 1. Courier Corporation, 1998. Theodore W Anderson. Classification by multivariate analysis. *Psychometrika*, 16(1):31–50, 1951. Michaël Aupetit. Visualizing distortions and recovering topology in continuous projection techniques. *Neurocomputing*, 70(7):1304–1330, 2007. ISSN 0925-2312. Advances in Computational Intelligence and Learning. Adam Auton, Gonçalo R. Abecasis, et al. 
A global reference for human genetic variation. *Nature*, 526: 68 - 74, 2015. Farshad Barahimi. Massive enhanced extracted email features tailored for cosine distance, 2022. arXiv preprint, arXiv:2205.04819v2. Farshad Barahimi and Stephen Wismath. 3d graph visualization with the oculus rift. pp. 519–520. Springer. Proceedings of 22nd International Symposium on Graph Drawing. Ulrich Bauer. Ripser: efficient computation of vietoris–rips persistence barcodes. Journal of Applied and Computational Topology, pp. 1–33, 2021. James F. Blinn. A generalization of algebraic surface drawing. *ACM Trans. Graph.*, 1(3):235–256, jul 1982. ISSN 0730-0301. doi: 10.1145/357306.357310. James Cook, Ilya Sutskever, Andriy Mnih, and Geoffrey Hinton. Visualizing similarity data with a mixture of maps. In Marina Meila and Xiaotong Shen (eds.), Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, volume 2 of *Proceedings of Machine Learning Research*, pp. 67–74, San Juan, Puerto Rico, 21–24 Mar 2007. PMLR. Enrico di Bella, Alberto Sillitti, and Giancarlo Succi. A multivariate classification of open source developers. Information Sciences, 221:72–83, 2013. ISSN 0020-0255. Harish Doraiswamy, Julien Tierny, Paulo J.S. Silva, Luis Gustavo Nonato, and Cláudio Silva. 0-dimensional homology preserving dimensionality reduction with topomap. In NeurIPS 2020 Workshop on Topological Data Analysis and Beyond, 2020. Herbert Edelsbrunner and John Harer. *Computational topology: an introduction*. American Mathematical Soc., 2010. J. W. Evans, F. Harary, and M. S. Lynn. On the computer enumeration of finite topologies. *Commun. ACM*, 10(5):295–297, May 1967. ISSN 0001-0782. Thomas M. J. Fruchterman and Edward M. Reingold. Graph drawing by force-directed placement. Software: Practice and Experience, 21(11):1129–1164, 1991. Geoffrey Hinton and Sam T Roweis. Stochastic neighbor embedding. In *NIPS*, volume 15, pp. 833–840, 2002. Danny Holten and Jarke J. Van Wijk. 
Force-directed edge bundling for graph visualization. Computer Graphics Forum, 28(3):983–990, 2009. doi: https://doi.org/10.1111/j.1467-8659.2009.01450.x. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL '12, pp. 873–882, USA, 2012. Association for Computational Linguistics. Vahan Huroyan, Raymundo Navarrete, Md Iqbal Hossain, and Stephen Kobourov. Embedding neighborhoods simultaneously t-sne (ens-t-sne), 2022. arXiv preprint, arXiv:2205.11720. I. T. Jolliffe. *Principal component analysis*. Springer, 1986. Samuel Kaski, Janne Nikkilä, Merja Oja, Jarkko Venna, Petri Törönen, and Eero Castrén. Trustworthiness and metrics in visualizing similarity of gene expression. *BMC bioinformatics*, 4(1):48, 2003. David George Kendall, Dennis Barden, Thomas K Carne, and Huiling Le. *Shape and shape theory*, volume 500. John Wiley & Sons, 2009. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014. arXiv preprint, arxiv:1412.6980. Alex Krizhevsky. The cifar-10 data set. Technical report: Learning Multiple Layers of Features from Tiny Images, Chapter 3: Object classification experiments. John A Lee and Michel Verleysen. *Nonlinear Dimensionality Reduction*. Springer, 2007. Feng Liu, Wenbin Guo, Jean-Paul Fouche, Yifeng Wang, Wenqin Wang, Jurong Ding, Ling Zeng, Changjian Qiu, Qiyong Gong, Wei Zhang, et al. Multivariate classification of social anxiety disorder using whole brain functional connectivity. *Brain Structure and Function*, 220(1):101–115, 2015. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. *Journal of Machine Learning* Research, 9(Nov):2579–2605, 2008. Rafael Messias Martins, Danilo Barbosa Coimbra, Rosane Minghim, and A.C. Telea. 
Visual analysis of dimensionality reduction quality for parameterized projections. *Computers & Graphics*, 41:26 - 42, 2014. ISSN 0097-8493. Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction, 2018a. arXiv preprint, arXiv:1802.03426. Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. Umap: Uniform manifold approximation and projection. *Journal of Open Source Software*, 3(29):861, 2018b. Camilo LM Morais, Kássio MG Lima, Maneesh Singh, and Francis L Martin. Tutorial: multivariate classification for vibrational spectroscopy in biological samples. *Nature Protocols*, 15(7):2143–2162, 2020. Lev Nachmanson, Arlind Nocaj, Sergey Bereg, Leishi Zhang, and Alexander Holroyd. Node overlap removal by growing a tree. In *International Symposium on Graph Drawing and Network Visualization*, pp. 33–43. Springer, 2016. L. G. Nonato and M. Aupetit. Multidimensional projection for visual analytics: Linking techniques with distortions, tasks, and layout enrichment. *IEEE Transactions on Visualization and Computer Graphics*, 25(8):2650–2673, 2019. Shaun Purcell. Plink 2.0. https://www.cog-genomics.org/plink/2.0/. May 3, 2022 edition of PLINK 2.0. Web page last accessed on May 28, 2022. Shaun Purcell, Benjamin Neale, Kathe Todd-Brown, Lori Thomas, Manuel A.R. Ferreira, David Bender, Julian Maller, Pamela Sklar, Paul I.W. de Bakker, Mark J. Daly, and Pak C. Sham. Plink: A tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics, 81(3):559–575, 2007. ISSN 0002-9297. Enrique H Ruspini. A new approach to clustering. *Information and control*, 15(1):22–32, 1969. RB Sher and RJ Daverman. *Handbook of geometric topology*. Elsevier, 2001. Daniel J. Simons and Michael S. Ambinder. Change blindness: Theory and consequences. Current Directions in Psychological Science, 14(1):44–48, 2005. Daniel J. Simons and Daniel T. Levin. Change blindness. 
*Trends in Cognitive Sciences*, 1(7):261–267, 1997. ISSN 1364-6613. Eduardo Tejada, Rosane Minghim, and Luis Gustavo Nonato. On improved projection techniques to support visual exploration of multi-dimensional data sets. *Information Visualization*, 2(4):218–231, 2003. Laurens Van Der Maaten. Accelerating t-sne using tree-based algorithms. *Journal of Machine Learning Research*, 15(1):3221–3245, 2014. Laurens Van der Maaten and Geoffrey Hinton. Visualizing non-metric similarities in multiple maps. *Machine Learning*, 87(1):33–55, 2012. Jarkko Venna and Samuel Kaski. Neighborhood preservation in nonlinear projection methods: An experimental study. In Georg Dorffner, Horst Bischof, and Kurt Hornik (eds.), *Artificial Neural Networks — ICANN 2001*, pp. 485–491, Berlin, Heidelberg, 2001. Springer Berlin Heidelberg. ISBN 978-3-540-44668-2. Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In Sanjoy Dasgupta and David McAllester (eds.), Proceedings of the 30th International Conference on Machine Learning, volume 28 of *Proceedings of Machine Learning Research*, pp. 1058–1066. PMLR, 17–19 Jun 2013. Jian Yang, S. Hong Lee, Michael E. Goddard, and Peter M. Visscher. Gcta: A tool for genome-wide complex trait analysis. *The American Journal of Human Genetics*, 88(1):76–82, 2011. ISSN 0002-9297. doi: https://doi.org/10.1016/j.ajhg.2010.11.011. Lotfi A Zadeh. Fuzzy sets. *Information and Control*, 8:338–353, 1965. Ying Zhao, Feng Luo, Minghui Chen, Yingchao Wang, Jiazhi Xia, Fangfang Zhou, Yunhai Wang, Yi Chen, and Wei Chen. Evaluating multi-dimensional visualizations for understanding fuzzy clusters. *IEEE Transactions on Visualization and Computer Graphics*, 25(1):12–21, 2019.
Review 1: Summary: In this article, the authors provide a method for multi-point dimension reduction, where each point in the original space can have several projections in the lower-dimensional / visual space. For this, (roughly speaking) they first reduce dimension with UMAP, then they compute a nearest-neighbor graph with modified distances, and finally use this graph to compute attractive and repulsive forces on each point / vertex. Based on the values of these forces, points that need to be duplicated are detected (using their so-called replication pressures), and the neighborhood graph is updated with the corresponding copies. This goes on for a fixed number of iterations. Finally, they present a few applications of their method on several common data sets such as MNIST, IRIS and 1000 genomes. Strengths and Weaknesses: The strengths of this work are listed below. *** the paper provides a nice adaptation of force-directed graph drawing methods to multi-point dimension reduction. The method is described with a lot of details and seems easy to implement and test. *** several examples on real data are provided and explained, with complete details on what multi-point dimension reduction brings in addition to usual methods such as t-SNE and UMAP. The weaknesses of this work are listed below. *** while I appreciate that the authors provide all necessary details, I think the writing of the article should be improved quite a lot. Currently, it is extremely wordy with many repetitions and typos, which actually makes it quite confusing sometimes. It is not always easy to keep a global understanding of the methods, as all details (even the unnecessary ones) are presented up front next to the main ideas. *** the method has many parameters, but most of them are not discussed: why 36 axes in the computation of the replication pressures? why 30 dimensions in the initial UMAP? 
why compute normalized distances for the neighborhood graph with the arctan function (whereas a square root also seems possible based on Fig. 6)? If normalized distances are used to symmetrize the graph, why not manually add the edges of opposite direction when they are missing? How is the number of iterations (1830) obtained? Is there any criterion that could be used to detect when one has to stop iterating? Requested Changes: In terms of changes, I think a major rewriting is needed. The paper should be much more concise, and should emphasize the main aspects of the method more while keeping the details separated from the main exposition. I would also like every parameter to be discussed (i.e., how to choose it in practice?). This would also allow readers to get a much better grasp of the proposed method. I would also reduce the size of the "philosophy discussion", and only briefly explain why multi-point dimension reduction matters in practice, with only straight-to-the-point arguments and examples. The presentation is currently not so convincing, as it comprises many vague and hand-waving statements about the field of dimension reduction, which makes that section a bit fishy. Broader Impact Concerns: No concern. ================================================== Review 2: Summary: This work studies the problem of multi-point dimensionality reduction, where each data point in the original high dimensional space can have more than one projection in the 2-dimensional visual space, and the projected points are split into two layers, indicating how reliable each projected point is. To try to solve the problem, an algorithm called Layered Vertex Splitting Data Embedding (LVSDE) is proposed in the paper. 
The algorithm has four phases: using an existing force-directed graph drawing algorithm to plot a 2d visualization, selecting less reliable projected points and re-computing the 2d visualization with the selected points frozen, reversing frozen points and performing the 2d visualization again, and replicating the less reliable projected points and running the 2d visualization for the last time. Preliminary steps, including the UMAP algorithm, distance transformation, and KNN, are required to build a neighborhood graph, which is the input of LVSDE. Numerical experiments on six datasets are performed to show the effectiveness of the proposed algorithm. Strengths and Weaknesses: Strengths: (1) The problem of multi-point dimensionality reduction is interesting and can be useful to visualize fuzzy relationships within a dataset. (2) Some steps in the proposed method are novel, including the proposed neighborhood-normalized (or density-normalized) distance, the modified forces and masses computation in a force-directed graph drawing algorithm, and the selection and replication of gray (less reliable) projected points via computing the proposed replication pressure. (3) In numerical experiments, results of LVSDE with different settings of parameters show that the proposed algorithm can have the versatility to adapt to different use cases from semantic visualization to clustering. Weaknesses: (1) The "proposed method" in section 4 is described more as an experiment design than as a generic algorithm. - There are too many fixed numbers in the algorithm that should depend on the data dimension or other problem-specific factors. For example, in section 4.2.1 it makes little sense to fix the UMAP projection dimension to 30. - Is there any mathematical indication that the proposed method could converge to a stable status with forces balanced under certain conditions? 
Is there any justification that the number of iterations, such as 1830, should be fixed instead of problem-specific? - The parameter setup in section 4.5 is also not generic enough. For example, should the number of neighbors also depend on data dimension? (2) Solidness and clarity of this work can be improved by better mathematical definition and presentation. Some examples are as follows. - In Definition 3.2, is $S_i \cap S_j$ required to be empty? It should be noted that sets $S_i$ for all $i$ are fixed and given, not minimization variables. - Definition 3.3 does not give a formal objective function that multi-point dimensionality reduction tries to minimize. Since this is the core problem that the work tries to solve, it is fundamental that the problem itself can be rigorously defined in math. - Definition 3.4 and Definition 3.5 are also not in math. It would be better to first mathematically define what the reliability of a projected point means and then mathematically define (strict) red gray embedding on top of that. - Computation of replication pressure in section 4.3.3 can be expressed much more concisely and clearly in math. - Distance from the zth nearest neighborhood of a point $q_i$ should be defined in math. - In equation (4), what are weight and height? - In the last paragraph of section 4.3.1, when the temperature first appears, it should be formally defined. - In Procedure 1, "Bring $s_{it}$ on the frame" should be specified. And at the end of 4.3.5, "slightly bigger" should be more accurate. Otherwise, it is hard to reproduce the proposed algorithm. (3) Is there any reason to project the forces onto 10 axes when computing the replication pressure in 4.3.3? Why not just compute the net force imposed on each point? (4) In numerical experiments, it would be better to compare with more recently proposed methods, such as the one in "Agrawal, Akshay, Alnur Ali, and Stephen Boyd. Minimum-distortion embedding. 
Foundations and Trends® in Machine Learning 14.3 (2021): 211–378", as well as with other multi-point dimensionality reduction methods.

Requested Changes: In my opinion, all comments and questions listed in (1), (2), and (3) under "Weaknesses" above must be addressed, while addressing the comment in (4) would strengthen the work.

Broader Impact Concerns: None

==================================================
Review 3:
Summary: The authors propose an algorithm for multi-point dimensionality reduction. This new task is designed for learning a low-dimensional representation of high-dimensional data that enables the flexibility of embedding a single point into multiple locations in the low-dimensional space. They provide another level of flexibility by splitting points into two layers of representation. The proposed solution, termed Layered Vertex Splitting Data Embedding (LVSDE), builds on a combination of existing dimensionality reduction schemes. The authors apply the new method to several datasets and compare the embeddings to the results of UMAP and t-SNE.

Strengths and Weaknesses:

Strengths:
- The proposed problem seems novel and interesting.
- Dimensionality reduction has numerous applications in many fields.
- The concept of using multiple low-dimensional values for a high-dimensional vector is interesting and could lead to follow-up work.
- The level of English in the paper is satisfactory.

Weaknesses:
- The paper is poorly written and hard to follow; some parts are impossible to understand.
- I do not completely understand the justification for the multiple points and multiple layers.
- Many elements of the algorithm are not well explained; the method is written as a list of elements and modifications with no overview, high-level description, or intuition. I have seen Figure 5, but a textual summary would help.
- The experimental results are not convincing. Are the numerical results based on multiple runs or a single run? What are the standard deviations?
The gap between the proposed method and UMAP or t-SNE seems marginal and is based on only one metric.
- The method is mostly based on modifications of existing schemes. While I can consider this a contribution, it requires a good systematic evaluation of the contribution of each modification, for example an ablation study.
- There is no runtime evaluation of the method. How does it compare to t-SNE and UMAP?
- A method termed LDLE was recently introduced by Kohli et al. I believe that it can handle the split issue presented in the paper as a motivating example. Therefore, it should be discussed or compared against by the authors. (Kohli et al., LDLE: Low Distortion Local Eigenmaps)

Requested Changes:
- The presentation needs dramatic improvement. I am not sure this is feasible within a rebuttal; I think the paper should be completely rewritten, and the authors should select one main message and focus on it. Currently, it reads like a mix of several ideas.
- Commas are missing after many equations.
- Most figures lack a clear pointer in the text explaining what is going on in the figure. Referring to 12 figures in one line is not appropriate. For example, the reader cannot understand what is going on in Figure 1 based on its caption alone.
- P3, "the ultimate goal on the application front has been visualization": I am not sure this statement is correct. DM has numerous other applications besides visualization, such as manifold learning, noise reduction, imputation, clustering, and compression.
- Figure 4: it might be better to visualize where the gray points are located in (a).
- I am not sure how you use UMAP to project to dimension 30. This is not clear; the algorithm is designed for 2D/3D embeddings. Also, once this is applied as a first step, your results are limited by the performance of UMAP.
- Other plots are also not well explained.
- The technical details of the training could be moved to an appendix.

Broader Impact Concerns: No specific ethical concerns.
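For concreteness, the preprocessing pipeline that the reviews question (projecting to an intermediate dimension, then building a KNN neighborhood graph with a density-normalized distance) can be sketched as follows. This is only an illustrative sketch: the exact normalization used by LVSDE is not given in the reviews, so `density_normalized` below is an assumed formula, and the random 30-dimensional input merely stands in for the UMAP embedding discussed above.

```python
import numpy as np

def knn_graph(X, k):
    """Build a k-nearest-neighbor graph from data X (n x d).
    Returns (idx, dist): for each point, the indices of and distances to
    its k nearest neighbors (self excluded)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # a point is not its own neighbor
    idx = np.argsort(D, axis=1)[:, :k]
    dist = np.take_along_axis(D, idx, axis=1)
    return idx, dist

def density_normalized(dist):
    """One plausible 'neighborhood-normalized' distance: divide each point's
    neighbor distances by its mean neighbor distance, so dense and sparse
    regions contribute comparable edge weights. The paper's exact formula
    may differ -- this is an illustrative assumption."""
    return dist / dist.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 30))   # stand-in for a 30-dimensional UMAP embedding
idx, dist = knn_graph(X, k=5)
norm_dist = density_normalized(dist)
```

The resulting `(idx, norm_dist)` pair is the kind of weighted neighborhood graph that a force-directed layout phase would then consume; note that, as one review observes, any such pipeline inherits the limitations of the initial embedding step.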
==================================================
Metareview:
Recommendation: Reject
Comment: The authors did not address reviewers' concerns. While the problem of multi-point dimensionality reduction is interesting, the paper needs substantial rewriting and modifications. All three reviewers agreed that major rewriting is needed. The paper should be much more concise and emphasize the main aspects of the method while keeping the details separate from the main exposition. Additionally, each parameter of the proposed method should be discussed, in particular regarding how to choose it in practice.
==================================================